content: string (85 to 101k chars)
title: string (0 to 150 chars)
question: string (15 to 48k chars)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (35 to 137 chars)
Q: How to get number of specific words from a string I need to write a program in Python 3 that returns the number of words in a string containing a letter that repeats exactly n times in succession. Example: if n=2, "first loop ddd" must return 1 ["loop" contains 2 o's; d is repeated 3 times in "ddd", so it won't be counted]. I wrote a long piece of code but did not get a result. words=st.split(" ") for word in words: for i in range(1,len(word)-nb+1): k=word[i:i+nb] if( k==word[i]*nb and kelma[0]!=word[i-1] and k[-1]!=word[i+nb] ): nbr=nbr+1 print(word) break return nbr A: You could create a words list by splitting the sentence by whitespace, and then searching each word (after removing any punctuation etc.) for occurrences of repeated letters. I've kept a set of words found, so that the same word isn't counted multiple times if it has repeats of more than one letter, but if you did want to count these you could just use a counter instead of the set. def find_repeats(search_string, r): found = set() words = search_string.lower().split() for word in words: letters = ''.join(filter(str.isalpha, word)) # remove non-alphabetical chars for l in set(letters): if l*r in letters and l*(r+1) not in letters: found.add(letters) return len(found) print(find_repeats('first loop ddd',2)) # output 1 Edit: To handle the case where multiple occurrences of the same letter happen, using a greedy regular expression is useful. In this case you match occurrences of each letter repeated as many times as possible (hence the term greedy), and then check the length of the repeats. re also allows for an alternative construction of the word list to the earlier example. import re def find_repeats(search_string, r): found = set() for word in re.findall('[a-z]+',search_string.lower()): #words list (or '\w+' possible) for letter in set(word): for repeats in re.findall(letter + '+', word): # letter+ is one or more of letter if len(repeats) == r: found.add(word) return len(found) print(find_repeats('first, loop; ddd caaabaaaa',3)) # output 2
How to get number of specific words from a string
I need to write a program in Python 3 that returns the number of words in a string containing a letter that repeats exactly n times in succession. Example: if n=2, "first loop ddd" must return 1 ["loop" contains 2 o's; d is repeated 3 times in "ddd", so it won't be counted]. I wrote a long piece of code but did not get a result. words=st.split(" ") for word in words: for i in range(1,len(word)-nb+1): k=word[i:i+nb] if( k==word[i]*nb and kelma[0]!=word[i-1] and k[-1]!=word[i+nb] ): nbr=nbr+1 print(word) break return nbr
[ "You could create a words list by splitting the sentence by whitespace, and then searching each word (after removing any punctuation etc..) for occurrences of repeated letters. I've kept a set of words found, so that the same word isn't counted multiple times if it has repeats of more than one letter, but if you did want to count these you could just use a counter instead of the set.\ndef find_repeats(search_string, r):\n found = set()\n words = search_string.lower().split()\n for word in words:\n letters = ''.join(filter(lambda d: d.isalpha, word)) # remove non-alphabetical chars\n for l in set(letters):\n if l*r in letters and l*(r+1) not in letters:\n found.add(letters)\n return len(found)\n\nprint(find_repeats('first loop ddd',2))\n# output 1\n\nEdit:\nTo handle the case where multiple occurrences of the same letter happen, using a greedy regular expression is useful. In this case you match occurrences of each letter repeated as many times as possible (hence the term greedy), and then check the length of the repeats. re also allows for an alternative construction of the word list to the earlier example.\nimport re\ndef find_repeats(search_string, r):\n found = set()\n for word in re.findall('[a-z]+',search_string.lower()): #words list (or '\\w+' possible)\n for letter in set(word):\n for repeats in re.findall(letter + '+', word): # letter+ is one or more of letter\n if len(repeats) == r:\n found.add(word)\n return len(found)\nprint(find_repeats('first, loop; ddd caaabaaaa',3))\n# output 2\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074539415_python.txt
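A compact alternative sketch for the same task using itertools.groupby to measure run lengths, assuming a word qualifies when some letter run is exactly n long:

from itertools import groupby

def count_words_with_run(text, n):
    count = 0
    for word in text.lower().split():
        # length of each run of identical characters in the word
        runs = [len(list(g)) for _, g in groupby(word)]
        if any(r == n for r in runs):
            count += 1
    return count

print(count_words_with_run('first loop ddd', 2))  # 1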
Q: Flask WTForms how to prevent duplicate form submission I'm new to Flask. Forms.py: class NoteForm(FlaskForm): note = fields.TextAreaField("Note") add_note = fields.SubmitField("Add Note") router.py: add_note_form = forms.NoteForm() template: <div class="form-group"> {{ add_note_form.add_note}} </div> Now if I click the add note button multiple times in a very short time, the form will be submitted multiple times, especially when the page is loading slowly. Is there any way I can prevent duplicate form submission? A: One way to do this is to disable the submit button after the form is submitted onClick="this.form.submit(); this.disabled=true; this.value='Saving…'; " Another way would be to give your record an id and check for duplicates in the backend.
Flask WTForms how to prevent duplicate form submission
I'm new to Flask. Forms.py: class NoteForm(FlaskForm): note = fields.TextAreaField("Note") add_note = fields.SubmitField("Add Note") router.py: add_note_form = forms.NoteForm() template: <div class="form-group"> {{ add_note_form.add_note}} </div> Now if I click the add note button multiple times in a very short time, the form will be submitted multiple times, especially when the page is loading slowly. Is there any way I can prevent duplicate form submission?
[ "One way to do this is to disable the submit button after form is submitted\nonClick=\"this.form.submit(); this.disabled=true; this.value='Saving…'; \"\n\nAnother way would be to give you record an id and check for duplicate in the backend.\n" ]
[ 1 ]
[]
[]
[ "flask", "flask_wtforms", "python", "python_3.x", "wtforms" ]
stackoverflow_0074539465_flask_flask_wtforms_python_python_3.x_wtforms.txt
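A server-side sketch of the Post/Redirect/Get pattern in Flask, which also curbs duplicate submissions from page refreshes (the route name, form field, and save step are illustrative assumptions, not from the original answer):

from flask import Flask, redirect, request, url_for

app = Flask(__name__)

@app.route('/notes', methods=['GET', 'POST'])
def notes():
    if request.method == 'POST':
        # save_note(request.form['note'])  # hypothetical persistence step
        return redirect(url_for('notes'))  # a refresh now re-issues GET, not POST
    return 'note form here'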
Q: Create new column using multiple groupby's in Pandas I have a dataset where I would like to: group by location and box and take the distinct count of boxes per location Data ID location type box status aa NY no box55 hey aa NY no box55 hi aa NY yes box66 hello aa NY yes box66 goodbye aa CA no box11 hey aa CA no box11 hi aa CA yes box11 hello aa CA yes box11 goodbye aa CA no box86 hey aa CA no box86 hi aa CA yes box86 hello aa CA yes box99 goodbye aa CA no box99 hey aa CA no box99 hi Desired location box count box NY 2 box55 NY 2 box66 CA 3 box11 CA 3 box86 CA 3 box99 Doing df['box count'] = df.groupby(['location','box'])['box'].size() Any suggestion is appreciated. A: Try: df = df.groupby(["location", "box"], as_index=False).agg( **{"box count": ("box", "size")} ) print(df) Prints: location box box count 0 CA box11 4 1 CA box86 3 2 CA box99 3 3 NY box55 2 4 NY box66 2 EDIT: m = df.groupby(["location"])["box"].nunique() df = df.groupby(["location", "box"], as_index=False).agg( **{ "box count": ( "location", lambda x: m[x.iat[0]], ) } ) print(df) Prints: location box box count 0 CA box11 3 1 CA box86 3 2 CA box99 3 3 NY box55 2 4 NY box66 2
Create new column using multiple groupby's in Pandas
I have a dataset where I would like to: group by location and box and take the distinct count of boxes per location Data ID location type box status aa NY no box55 hey aa NY no box55 hi aa NY yes box66 hello aa NY yes box66 goodbye aa CA no box11 hey aa CA no box11 hi aa CA yes box11 hello aa CA yes box11 goodbye aa CA no box86 hey aa CA no box86 hi aa CA yes box86 hello aa CA yes box99 goodbye aa CA no box99 hey aa CA no box99 hi Desired location box count box NY 2 box55 NY 2 box66 CA 3 box11 CA 3 box86 CA 3 box99 Doing df['box count'] = df.groupby(['location','box'])['box'].size() Any suggestion is appreciated.
[ "Try:\ndf = df.groupby([\"location\", \"box\"], as_index=False).agg(\n **{\"box count\": (\"box\", \"size\")}\n)\nprint(df)\n\nPrints:\n location box box count\n0 CA box11 4\n1 CA box86 3\n2 CA box99 3\n3 NY box55 2\n4 NY box66 2\n\n\nEDIT:\nm = df.groupby([\"location\"])[\"box\"].nunique()\ndf = df.groupby([\"location\", \"box\"], as_index=False).agg(\n **{\n \"box count\": (\n \"location\",\n lambda x: m[x.iat[0]],\n )\n }\n)\nprint(df)\n\nPrints:\n location box box count\n0 CA box11 3\n1 CA box86 3\n2 CA box99 3\n3 NY box55 2\n4 NY box66 2\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074540009_group_by_numpy_pandas_python.txt
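An equivalent sketch using groupby().transform to attach the per-location distinct-box count directly, without a merge (column names as in the question; the tiny frame is just for demonstration):

import pandas as pd

df = pd.DataFrame({'location': ['NY', 'NY', 'CA', 'CA', 'CA'],
                   'box': ['box55', 'box66', 'box11', 'box86', 'box99']})
# transform broadcasts the per-group nunique back onto every row
df['box count'] = df.groupby('location')['box'].transform('nunique')
print(df)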
Q: How to calculate 1st and 3rd quartiles? I have DataFrame: time_diff avg_trips 0 0.450000 1.0 1 0.483333 1.0 2 0.500000 1.0 3 0.516667 1.0 4 0.533333 2.0 I want to get 1st quartile, 3rd quartile and median for the column time_diff. To obtain median, I use np.median(df["time_diff"].values). How can I calculate quartiles? A: By using pandas: df.time_diff.quantile([0.25,0.5,0.75]) Out[793]: 0.25 0.483333 0.50 0.500000 0.75 0.516667 Name: time_diff, dtype: float64 A: You can use np.percentile to calculate quartiles (including the median): >>> np.percentile(df.time_diff, 25) # Q1 0.48333300000000001 >>> np.percentile(df.time_diff, 50) # median 0.5 >>> np.percentile(df.time_diff, 75) # Q3 0.51666699999999999 Or all at once: >>> np.percentile(df.time_diff, [25, 50, 75]) array([ 0.483333, 0.5 , 0.516667]) A: Coincidentally, this information is captured with the describe method: df.time_diff.describe() count 5.000000 mean 0.496667 std 0.032059 min 0.450000 25% 0.483333 50% 0.500000 75% 0.516667 max 0.533333 Name: time_diff, dtype: float64 A: np.percentile DOES NOT calculate the values of Q1, median, and Q3. Consider the sorted list below: samples = [1, 1, 8, 12, 13, 13, 14, 16, 19, 22, 27, 28, 31] running np.percentile(samples, [25, 50, 75]) returns the actual values from the list: Out[1]: array([12., 14., 22.]) However, the quartiles are Q1=10.0, Median=14, Q3=24.5 (you can also use this link to find the quartiles and median online). One can use the below code to calculate the quartiles and median of a sorted list (because of sorting this approach requires O(nlogn) computations where n is the number of items). Moreover, finding quartiles and median can be done in O(n) computations using the Median of medians Selection algorithm (order statistics). samples = sorted([28, 12, 8, 27, 16, 31, 14, 13, 19, 1, 1, 22, 13]) def find_median(sorted_list): indices = [] list_size = len(sorted_list) median = 0 if list_size % 2 == 0: indices.append(int(list_size / 2) - 1) # -1 because index starts from 0 indices.append(int(list_size / 2)) median = (sorted_list[indices[0]] + sorted_list[indices[1]]) / 2 pass else: indices.append(int(list_size / 2)) median = sorted_list[indices[0]] pass return median, indices pass median, median_indices = find_median(samples) Q1, Q1_indices = find_median(samples[:median_indices[0]]) Q3, Q3_indices = find_median(samples[median_indices[-1] + 1:]) quartiles = [Q1, median, Q3] print("(Q1, median, Q3): {}".format(quartiles)) A: Building upon or rather correcting a bit on what Babak said.... np.percentile DOES VERY MUCH calculate the values of Q1, median, and Q3. Consider the sorted list below: s1=[18,45,66,70,76,83,88,90,90,95,95,98] running np.percentile(s1, [25, 50, 75]) returns the actual values from the list: [69. 85.5 91.25] However, the quartiles are Q1=68.0, Median=85.5, Q3=92.5, which is the correct thing to say What we are missing here is the interpolation parameter of the np.percentile and related functions. By default the value of this argument is linear. This optional parameter specifies the interpolation method to use when the desired quantile lies between two data points i < j: linear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j. lower: i. higher: j. nearest: i or j, whichever is nearest. midpoint: (i + j) / 2. Thus running np.percentile(s1, [25, 50, 75], interpolation='midpoint') returns the actual results for the list: [68. 85.5 92.5] A: Using np.percentile. 
q75, q25 = np.percentile(DataFrame, [75,25]) iqr = q75 - q25 Answer from How do you find the IQR in Numpy? A: you can use df.describe() which would show the information A: If you want to use raw python rather than numpy or pandas, you can use the python statistics module to find the median of the upper and lower half of the list: >>> import statistics as stat >>> def quartile(data): data.sort() half_list = int(len(data)//2) lower_quartile = stat.median(data[:half_list]) upper_quartile = stat.median(data[-half_list:]) print("Lower Quartile: "+str(lower_quartile)) print("Upper Quartile: "+str(upper_quartile)) print("Interquartile Range: "+str(upper_quartile-lower_quartile)) >>> quartile(df.time_diff) Line 1: import the statistics module under the alias "stat" Line 2: define the quartile function Line 3: sort the data into ascending order Line 4: get the length of half of the list Line 5: get the median of the lower half of the list Line 6: get the median of the upper half of the list Line 7: print the lower quartile Line 8: print the upper quartile Line 9: print the interquartile range Line 10: run the quartile function for the time_diff column of the DataFrame A: In my efforts to learn object-oriented programming alongside learning statistics, I made this, maybe you'll find it useful: samplesCourse = [9, 10, 10, 11, 13, 15, 16, 19, 19, 21, 23, 28, 30, 33, 34, 36, 44, 45, 47, 60] class sampleSet: def __init__(self, sampleList): self.sampleList = sampleList self.interList = list(sampleList) # interList is a copy of sampleList; the copy is used to maintain integrity of the original sampleList def find_median(self): self.median = 0 if len(self.sampleList) % 2 == 0: # find median for even-numbered sample list length self.medL = self.interList[int(len(self.interList)/2)-1] self.medU = self.interList[int(len(self.interList)/2)] self.median = (self.medL + self.medU)/2 else: # find median for odd-numbered sample list length self.median = self.interList[int((len(self.interList)-1)/2)] return self.median def find_1stQuartile(self, median): self.lower50List = [] self.Q1 = 0 # break out lower 50 percentile from sampleList if len(self.interList) % 2 == 0: self.lower50List = self.interList[:int(len(self.interList)/2)] else: # drop median to make list ready to divide into 50 percentiles self.interList.pop(self.interList.index(self.median)) self.lower50List = self.interList[:int(len(self.interList)/2)] # find 1st quartile (median of lower 50 percentiles) if len(self.lower50List) % 2 == 0: self.Q1L = self.lower50List[int(len(self.lower50List)/2)-1] self.Q1U = self.lower50List[int(len(self.lower50List)/2)] self.Q1 = (self.Q1L + self.Q1U)/2 else: self.Q1 = self.lower50List[int((len(self.lower50List)-1)/2)] return self.Q1 def find_3rdQuartile(self, median): self.upper50List = [] self.Q3 = 0 # break out upper 50 percentile from sampleList if len(self.sampleList) % 2 == 0: self.upper50List = self.interList[int(len(self.interList)/2):] else: self.interList.pop(self.interList.index(self.median)) self.upper50List = self.interList[int(len(self.interList)/2):] # find 3rd quartile (median of upper 50 percentiles) if len(self.upper50List) % 2 == 0: self.Q3L = self.upper50List[int(len(self.upper50List)/2)-1] self.Q3U = self.upper50List[int(len(self.upper50List)/2)] self.Q3 = (self.Q3L + self.Q3U)/2 else: self.Q3 = self.upper50List[int((len(self.upper50List)-1)/2)] return self.Q3 def find_InterQuartileRange(self, Q1, Q3): self.IQR = self.Q3 - self.Q1 return self.IQR def find_UpperFence(self, Q3, IQR): self.fence = self.Q3 + 1.5 * self.IQR return
self.fence samples = sampleSet(samplesCourse) median = samples.find_median() firstQ = samples.find_1stQuartile(median) thirdQ = samples.find_3rdQuartile(median) iqr = samples.find_InterQuartileRange(firstQ, thirdQ) fence = samples.find_UpperFence(thirdQ, iqr) print("Median is: ", median) print("1st quartile is: ", firstQ) print("3rd quartile is: ", thirdQ) print("IQR is: ", iqr) print("Upper fence is: ", fence) A: The main difference between the signatures of numpy.percentile and pandas.quantile: with pandas the q parameter should be given on a scale between [0-1], whereas with numpy it is between [0-100]. Both of them, by default, use a linear interpolation technique to find such quantities. By contrast, DataFrame.describe has a less flexible signature and allows only the linear one. In numpy >= 1.22 the parameter interpolation is deprecated and replaced with method. Here is an example of usage with linear interpolation: (default behavior) import pandas as pd import numpy as np s =[18,45,66,70,76,83,88,90,90,95,95,98, 100] print(pd.DataFrame(s).quantile(q=[.25, .50, .75])) print(np.percentile(s, q=[25, 50, 75])) print(pd.DataFrame(s).describe(percentiles=[.25, .5, .75])) # the parameter is redundant, it's the default behavior Here using the midpoint interpolation: s_even = [18,45,66,70,76,83,88,90,90,95,95,98] print(pd.DataFrame(s_even).quantile(q=[.25, .5, .75], interpolation='midpoint')) print(np.percentile(s_even, q=[25, 50, 75], interpolation='midpoint')) # version < 1.22 print(np.percentile(s_even, q=[25, 50, 75], method='midpoint')) # version >= 1.22 s_odd = s_even + [100] # made it odd print(pd.DataFrame(s_odd).quantile(q=[.25, .50, .75], interpolation='midpoint')) print(np.percentile(s_odd, q=[25, 50, 75], interpolation='midpoint')) # version < 1.22 print(np.percentile(s_odd, q=[25, 50, 75], method='midpoint')) # version >= 1.22 A: I also faced a similar problem when trying to find a package that finds quartiles. That's not to say the others are wrong, but this is how I personally would have defined quartiles. It is similar to Shikar's results with using mid-point but also works on lists that have an odd length. If the quartile position falls between indices, it will use the average of the neighbouring values. (i.e.
position always treated as either the exact position or 0.5 of the position) import math def find_quartile_positions(size): if size == 1: # All quartiles are the first (only) element return 0, 0, 0 elif size == 2: # Lower quartile is first element, Upper quartile is second element, Median is average # Set to 0.5, 0.5, 0.5 if you prefer all quartiles to be the mean value return 0, 0.5, 1 else: # Lower quartile is element at the 1/4 position, median at 1/2, upper at 3/4 # Quartiles can be between positions if size + 1 is not divisible by 4 return (size + 1) / 4 - 1, (size + 1) / 2 - 1, 3 * (size + 1) / 4 - 1 def find_quartiles(num_array): size = len(num_array) if size == 0: quartiles = [0,0,0] else: sorted_array = sorted(num_array) lower_pos, median_pos, upper_pos = find_quartile_positions(size) # Floor so can work in arrays floored_lower_pos = math.floor(lower_pos) floored_median_pos = math.floor(median_pos) floored_upper_pos = math.floor(upper_pos) # If position is an integer, the quartile is the elem at position # else the quartile is the mean of the elem & the elem one position above lower_quartile = (sorted_array[floored_lower_pos] if (lower_pos % 1 == 0) else (sorted_array[floored_lower_pos] + sorted_array[floored_lower_pos + 1]) / 2 ) median = (sorted_array[floored_median_pos] if (median_pos % 1 == 0) else (sorted_array[floored_median_pos] + sorted_array[floored_median_pos + 1]) / 2 ) upper_quartile = (sorted_array[floored_upper_pos] if (upper_pos % 1 == 0) else (sorted_array[floored_upper_pos] + sorted_array[floored_upper_pos + 1]) / 2 ) quartiles = [lower_quartile, median, upper_quartile] return quartiles A: Try it this way: dfo = sorted(df.time_diff) n=len(dfo) Q1=int((n+3)/4) Q3=int((3*n+1)/4) print("Q1 position: ", Q1, "Q3 position: " ,Q3) print("Q1 value: ", dfo[Q1], "Q3 value: ", dfo[Q3]) A: If you're interested in using JS, I have developed a solution: var withThis = (obj, cb) => cb(obj), sort = array => array.sort((a, b) => a - b), fractile = (array, parts, nth) => withThis( (nth * (array.length + 1) / parts), decimal => withThis(Math.floor(decimal), even => withThis(sort(array), sorted => sorted[even - 1] + ( (decimal - even) * ( sorted[even] - sorted[even - 1] ) ) ) ) ), data = [ 78, 72, 74, 79, 74, 71, 75, 74, 72, 68, 72, 73, 72, 74, 75, 74, 73, 74, 65, 72, 66, 75, 80, 69, 82, 73, 74, 72, 79, 71, 70, 75, 71, 70, 70, 70, 75, 76, 77, 67 ] fractile(data, 4, 1) // 1st Quartile is 71 fractile(data, 10, 3) // 3rd Decile is 71.3 fractile(data, 100, 82) // 82nd Percentile is 75.62 You can just copy and paste the code into your browser console and get the exact result. More about 'Statistics with JS' can be found at https://gist.github.com/rikyperdana/a7349c790cf5b034a1b77db64415e73c/edit A: This can be easily done using the python statistics module. https://docs.python.org/3/library/statistics.html import statistics time_diff = [0.45,0.483333,0.5,0.516667,0.5333333] statistics.quantiles(time_diff, method='inclusive') [0.483333, 0.5, 0.516667] The above defaults to 4 groups of data (n=4) with 3 split points (1st quartile, median, 3rd quartile), and setting the method to inclusive uses all the data in the list. The output is a list of 1st quartile, median and 3rd quartile. A: Full working example: import pandas as pd import numpy as np sizes_height = np.random.randn(100) df = pd.DataFrame(sizes_height) # df = pd.Series(sizes_height) # x = df.time_diff.quantile(sizes_height) x = df.describe() print() x 0 count 100.000000 mean 0.059808 std 1.012960 min -2.552990 25% -0.643857 50% 0.094096 75% 0.737077 max 2.269755
How to calculate 1st and 3rd quartiles?
I have DataFrame: time_diff avg_trips 0 0.450000 1.0 1 0.483333 1.0 2 0.500000 1.0 3 0.516667 1.0 4 0.533333 2.0 I want to get 1st quartile, 3rd quartile and median for the column time_diff. To obtain median, I use np.median(df["time_diff"].values). How can I calculate quartiles?
[ "By using pandas:\ndf.time_diff.quantile([0.25,0.5,0.75])\n\n\nOut[793]: \n0.25 0.483333\n0.50 0.500000\n0.75 0.516667\nName: time_diff, dtype: float64\n\n", "You can use np.percentile to calculate quartiles (including the median):\n>>> np.percentile(df.time_diff, 25) # Q1\n0.48333300000000001\n\n>>> np.percentile(df.time_diff, 50) # median\n0.5\n\n>>> np.percentile(df.time_diff, 75) # Q3\n0.51666699999999999\n\nOr all at once:\n>>> np.percentile(df.time_diff, [25, 50, 75])\narray([ 0.483333, 0.5 , 0.516667])\n\n", "Coincidentally, this information is captured with the describe method:\ndf.time_diff.describe()\n\ncount 5.000000\nmean 0.496667\nstd 0.032059\nmin 0.450000\n25% 0.483333\n50% 0.500000\n75% 0.516667\nmax 0.533333\nName: time_diff, dtype: float64\n\n", "np.percentile DOES NOT calculate the values of Q1, median, and Q3. Consider the sorted list below:\nsamples = [1, 1, 8, 12, 13, 13, 14, 16, 19, 22, 27, 28, 31]\n\nrunning np.percentile(samples, [25, 50, 75]) returns the actual values from the list:\nOut[1]: array([12., 14., 22.])\n\nHowever, the quartiles are Q1=10.0, Median=14, Q3=24.5 (you can also use this link to find the quartiles and median online).\nOne can use the below code to calculate the quartiles and median of a sorted list (because of sorting this approach requires O(nlogn) computations where n is the number of items).\nMoreover, finding quartiles and median can be done in O(n) computations using the Median of medians Selection algorithm (order statistics).\nsamples = sorted([28, 12, 8, 27, 16, 31, 14, 13, 19, 1, 1, 22, 13])\n\ndef find_median(sorted_list):\n indices = []\n\n list_size = len(sorted_list)\n median = 0\n\n if list_size % 2 == 0:\n indices.append(int(list_size / 2) - 1) # -1 because index starts from 0\n indices.append(int(list_size / 2))\n\n median = (sorted_list[indices[0]] + sorted_list[indices[1]]) / 2\n pass\n else:\n indices.append(int(list_size / 2))\n\n median = sorted_list[indices[0]]\n pass\n\n return median, indices\n pass\n\nmedian, median_indices = find_median(samples)\nQ1, Q1_indices = find_median(samples[:median_indices[0]])\nQ3, Q3_indices = find_median(samples[median_indices[-1] + 1:])\n\nquartiles = [Q1, median, Q3]\n\nprint(\"(Q1, median, Q3): {}\".format(quartiles))\n\n", "Building upon or rather correcting a bit on what Babak said....\nnp.percentile DOES VERY MUCH calculate the values of Q1, median, and Q3. Consider the sorted list below:\ns1=[18,45,66,70,76,83,88,90,90,95,95,98]\n\nrunning np.percentile(s1, [25, 50, 75]) returns the actual values from the list:\n[69. 85.5 91.25]\n\nHowever, the quartiles are Q1=68.0, Median=85.5, Q3=92.5, which is the correct thing to say\nWhat we are missing here is the interpolation parameter of the np.percentile and related functions. By default the value of this argument is linear. This optional parameter specifies the interpolation method to use when the desired quantile lies between two data points i < j:\nlinear: i + (j - i) * fraction, where fraction is the fractional part of the index surrounded by i and j.\nlower: i.\nhigher: j.\nnearest: i or j, whichever is nearest.\nmidpoint: (i + j) / 2.\nThus running np.percentile(s1, [25, 50, 75], interpolation='midpoint') returns the actual results for the list:\n[68. 
85.5 92.5]\n\n", "Using np.percentile.\nq75, q25 = np.percentile(DataFrame, [75,25])\niqr = q75 - q25\n\nAnswer from How do you find the IQR in Numpy?\n", "you can use\ndf.describe()\n\nwhich would show the information\n\n", "If you want to use raw python rather than numpy or panda, you can use the python stats module to find the median of the upper and lower half of the list:\n >>> import statistics as stat\n >>> def quartile(data):\n data.sort() \n half_list = int(len(data)//2)\n upper_quartile = stat.median(data[-half_list]\n lower_quartile = stat.median(data[:half_list])\n print(\"Lower Quartile: \"+str(lower_quartile))\n print(\"Upper Quartile: \"+str(upper_quartile))\n print(\"Interquartile Range: \"+str(upper_quartile-lower_quartile)\n\n >>> quartile(df.time_diff)\n\nLine 1: import the statistics module under the alias \"stat\"\nLine 2: define the quartile function\nLine 3: sort the data into ascending order\nLine 4: get the length of half of the list\nLine 5: get the median of the lower half of the list\nLine 6: get the median of the upper half of the list\nLine 7: print the lower quartile\nLine 8: print the upper quartile\nLine 9: print the interquartile range\nLine 10: run the quartile function for the time_diff column of the DataFrame\n", "In my efforts to learn object-oriented programming alongside learning statistics, I made this, maybe you'll find it useful:\nsamplesCourse = [9, 10, 10, 11, 13, 15, 16, 19, 19, 21, 23, 28, 30, 33, 34, 36, 44, 45, 47, 60]\n\nclass sampleSet:\n def __init__(self, sampleList):\n self.sampleList = sampleList\n self.interList = list(sampleList) # interList is sampleList alias; alias used to maintain integrity of original sampleList\n\n def find_median(self):\n self.median = 0\n\n if len(self.sampleList) % 2 == 0:\n # find median for even-numbered sample list length\n self.medL = self.interList[int(len(self.interList)/2)-1]\n self.medU = self.interList[int(len(self.interList)/2)]\n self.median = (self.medL + self.medU)/2\n\n else:\n # find median for odd-numbered sample list length\n self.median = self.interList[int((len(self.interList)-1)/2)]\n return self.median\n\n def find_1stQuartile(self, median):\n self.lower50List = []\n self.Q1 = 0\n\n # break out lower 50 percentile from sampleList\n if len(self.interList) % 2 == 0:\n self.lower50List = self.interList[:int(len(self.interList)/2)]\n else:\n # drop median to make list ready to divide into 50 percentiles\n self.interList.pop(interList.index(self.median))\n self.lower50List = self.interList[:int(len(self.interList)/2)]\n\n # find 1st quartile (median of lower 50 percentiles)\n if len(self.lower50List) % 2 == 0:\n self.Q1L = self.lower50List[int(len(self.lower50List)/2)-1]\n self.Q1U = self.lower50List[int(len(self.lower50List)/2)]\n self.Q1 = (self.Q1L + self.Q1U)/2\n\n else:\n self.Q1 = self.lower50List[int((len(self.lower50List)-1)/2)]\n\n return self.Q1\n\n def find_3rdQuartile(self, median):\n self.upper50List = []\n self.Q3 = 0\n\n # break out upper 50 percentile from sampleList\n if len(self.sampleList) % 2 == 0:\n self.upper50List = self.interList[int(len(self.interList)/2):]\n else:\n self.interList.pop(interList.index(self.median))\n self.upper50List = self.interList[int(len(self.interList)/2):]\n\n # find 3rd quartile (median of upper 50 percentiles)\n if len(self.upper50List) % 2 == 0:\n self.Q3L = self.upper50List[int(len(self.upper50List)/2)-1]\n self.Q3U = self.upper50List[int(len(self.upper50List)/2)]\n self.Q3 = (self.Q3L + self.Q3U)/2\n\n else:\n self.Q3 = 
self.upper50List[int((len(self.upper50List)-1)/2)]\n\n return self.Q3\n\n def find_InterQuartileRange(self, Q1, Q3):\n self.IQR = self.Q3 - self.Q1\n return self.IQR\n\n def find_UpperFence(self, Q3, IQR):\n self.fence = self.Q3 + 1.5 * self.IQR\n return self.fence\n\nsamples = sampleSet(samplesCourse)\nmedian = samples.find_median()\nfirstQ = samples.find_1stQuartile(median)\nthirdQ = samples.find_3rdQuartile(median)\niqr = samples.find_InterQuartileRange(firstQ, thirdQ)\nfence = samples.find_UpperFence(thirdQ, iqr)\n\nprint(\"Median is: \", median)\nprint(\"1st quartile is: \", firstQ)\nprint(\"3rd quartile is: \", thirdQ)\nprint(\"IQR is: \", iqr)\nprint(\"Upper fence is: \", fence)\n\n", "The main difference of the signatures between numpy.percentile\nand pandas.quantile: with pandas the q paramter should be given in a scala between [0-1] instead with numpy between [0-100].\nBoth of them, by default, use a linear interpolation technique to find such quantities. Instead, DataFrame.describe has a less flexible signature and allow to use only the linear one.\nIn numpy >= 1.22 the parameter interpolation is deprecated and replaced with method.\nHere an example of usage with linear interpolation: (default behavior)\nimport pandas as pd\nimport numpy as np\n\n\ns =[18,45,66,70,76,83,88,90,90,95,95,98, 100]\nprint(pd.DataFrame(s).quantile(q=[.25, .50, .75]))\nprint(np.percentile(s, q=[25, 50, 75]))\nprint(pd.DataFrame(s).describe(percentiles=[.25, .5, .75])) # the parameter is redundant, it's the default behavior\n\nHere using the midpoint interpolation:\ns_even = [18,45,66,70,76,83,88,90,90,95,95,98]\nprint(pd.DataFrame(s_even).quantile(q=[.25, .5, .75], interpolation='midpoint'))\nprint(np.percentile(s_even, q=[25, 50, 75], interpolation='midpoint')) # verion < 1.22\nprint(np.percentile(s_even, q=[25, 50, 75], method='midpoint')) # version >= 1.22\n\ns_odd = s_even + [100] # made it odd\nprint(pd.DataFrame(s_odd).quantile(q=[.25, .50, .75], interpolation='midpoint'))\nprint(np.percentile(s_odd, q=[25, 50, 75], interpolation='midpoint')) # verion < 1.22\nprint(np.percentile(s_odd, q=[25, 50, 75], method='midpoint')) # version >= 1.22\n\n", "I also faced a similar problem when trying to find a package that finds quartiles. That's not to say the others are wrong but to say this is how I personally would have defined quartiles. It is similar to Shikar's results with using mid-point but also works on lists that have an odd length. If the quartile position is between lengths, it will use the average of the neighbouring values. (i.e. 
position always treated as either the exact position or 0.5 of the position)\nimport math\n\ndef find_quartile_postions(size):\n if size == 1:\n # All quartiles are the first (only) element\n return 0, 0, 0\n elif size == 2:\n # Lower quartile is first element, Upper quartile is second element, Median is average\n # Set to 0.5, 0.5, 0.5 if you prefer all quartiles to be the mean value\n return 0, 0.5, 1\n else:\n # Lower quartile is element at 1/4th position, median at 1/2th, upper at 3/4\n # Quartiles can be between positions if size + 1 is not divisible by 4\n return (size + 1) / 4 - 1, (size + 1) / 2 - 1, 3 * (size + 1) / 4 - 1\n\ndef find_quartiles(num_array):\n size = len(num_array)\n \n if size == 0:\n quartiles = [0,0,0]\n else:\n sorted_array = sorted(num_array)\n lower_pos, median_pos, upper_pos = find_quartile_postions(size)\n\n # Floor so can work in arrays\n floored_lower_pos = math.floor(lower_pos)\n floored_median_pos = math.floor(median_pos)\n floored_upper_pos = math.floor(upper_pos)\n\n # If position is an integer, the quartile is the elem at position\n # else the quartile is the mean of the elem & the elem one position above\n lower_quartile = (sorted_array[floored_lower_pos]\n if (lower_pos % 1 == 0)\n else (sorted_array[floored_lower_pos] + sorted_array[floored_lower_pos + 1]) / 2\n )\n\n median = (sorted_array[floored_median_pos]\n if (median_pos % 1 == 0)\n else (sorted_array[floored_median_pos] + sorted_array[floored_median_pos + 1]) / 2\n )\n\n upper_quartile = (sorted_array[floored_upper_pos]\n if (upper_pos % 1 == 0)\n else (sorted_array[floored_upper_pos] + sorted_array[floored_upper_pos + 1]) / 2\n )\n\n quartiles = [lower_quartile, median, upper_quartile]\n\n return quartiles\n\n", "try that way:\ndfo = sorted(df.time_diff)\n\nn=len(dfo)\n\nQ1=int((n+3)/4) \nQ3=int((3*n+1)/4) \n\n\nprint(\"Q1 position: \", Q1, \"Q1 position: \" ,Q3)\n\nprint(\"Q1 value: \", dfo[Q1], \"Q1 value: \", dfo[Q3])\n\n", "If you're interested in using JS, I have developed a solution:\nvar\nwithThis = (obj, cb) => cb(obj),\nsort = array => array.sort((a, b) => a - b),\n\nfractile = (array, parts, nth) => withThis(\n (nth * (array.length + 1) / parts),\n decimal => withThis(Math.floor(decimal),\n even => withThis(sort(array),\n sorted => sorted[even - 1] + (\n (decimal - even) * (\n sorted[even] - sorted[even - 1]\n )\n )\n )\n )\n),\n\ndata = [\n 78, 72, 74, 79, 74, 71, 75, 74, 72, 68,\n 72, 73, 72, 74, 75, 74, 73, 74, 65, 72,\n 66, 75, 80, 69, 82, 73, 74, 72, 79, 71,\n 70, 75, 71, 70, 70, 70, 75, 76, 77, 67\n]\n\nfractile(data, 4, 1) // 1st Quartile is 71\nfractile(data, 10, 3) // 3rd Decile is 71.3\nfractile(data, 100, 82) // 82nd Percentile is 75.62\n\nYou can just copy paste the codes onto your browser and get the exact result.\nAnd more about 'Statistics with JS' can be found in https://gist.github.com/rikyperdana/a7349c790cf5b034a1b77db64415e73c/edit\n", "This can be easily done using the python statistics module.\nhttps://docs.python.org/3/library/statistics.html\nimport statistics\n\ntime_diff = [0.45,0.483333,0.5,0.516667,0.5333333]\nstatistics.quantiles(time_diff, method='inclusive')\n\n[0.483333, 0.5, 0.516667]\nThe above defaults to 4 groups of data (n=4) with 3 split points (1st quartile, median, 3rd quartile), and setting the method to inclusive uses all the data in the list.\nThe output is a list of 1st quartile, median and 3rd quartile.\n", "Full working example:\nimport numpy as np\nsizes_height = np.random.randn(100)\ndf = pd.DataFrame(sizes_height)\n# df = 
pd.Series(sizes_height)\n# x = df.time_diff.quantile(sizes_height)\nx = df.describe()\nprint()\nx\n 0\ncount 100.000000\nmean 0.059808\nstd 1.012960\nmin -2.552990\n25% -0.643857\n50% 0.094096\n75% 0.737077\nmax 2.269755\n\n" ]
[ 91, 82, 26, 26, 13, 6, 4, 4, 2, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "numpy", "pandas", "python", "python_2.7" ]
stackoverflow_0045926230_numpy_pandas_python_python_2.7.txt
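The cut method matters more than it looks; a minimal sketch contrasting the two choices exposed by Python's statistics.quantiles (3.8+) on the question's data, with approximate outputs in the comments:

import statistics

time_diff = [0.450000, 0.483333, 0.500000, 0.516667, 0.533333]
print(statistics.quantiles(time_diff, method='inclusive'))  # [0.483333, 0.5, 0.516667]
print(statistics.quantiles(time_diff, method='exclusive'))  # wider: roughly [0.4667, 0.5, 0.525]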
Q: ValueError: invalid literal for int() with base 10: 'quit' I keep getting this error message: ValueError: invalid literal for int() with base 10 Here is my code snippet age = {} while age != 'quit': age = input('what is your age?') age = int(age) if age >= 18: print("You're old enough to vote.") else: print("You're not old enough to vote.") Please use Google Colab if possible. I tried the `except ValueError` method but it did not work. Maybe I just used it incorrectly. A: One approach (which may not be optimal) is to break the loop once you encounter a ValueError. The logic can be similar to this: while age != 'quit': age = input() try: age = int(age) if age >= 18: print("You're old enough to vote.") else: print("You're not old enough to vote.") except ValueError: break
ValueError: invalid literal for int() with base 10: 'quit'
I keep getting this error message: ValueError: invalid literal for int() with base 10 Here is my code snippet age = {} while age != 'quit': age = input('what is your age?') age = int(age) if age >= 18: print("You're old enough to vote.") else: print("You're not old enough to vote.") Please use Google Colab if possible. I tried the `except ValueError` method but it did not work. Maybe I just used it incorrectly.
[ "One of the approach (may not be optimal) is to break the loop once you encounter ValueError. Logic can be similar to this\nwhile age != 'quit':\n age = input()\n try:\n age = int(age)\n if age >= 18:\n print(\"You're old enough to vote.\")\n else:\n print(\"You're not old enough to vote.\")\n except ValueError:\n break\n\n" ]
[ 0 ]
[]
[]
[ "python", "user_input" ]
stackoverflow_0074540042_python_user_input.txt
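A sketch that tests for the sentinel before converting, so 'quit' never reaches int() at all (prompt wording is illustrative):

while True:
    age = input("what is your age? (or 'quit') ")
    if age == 'quit':
        break
    try:
        age = int(age)
    except ValueError:
        print("Please enter a number or 'quit'.")
        continue
    if age >= 18:
        print("You're old enough to vote.")
    else:
        print("You're not old enough to vote.")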
Q: How to check whether tensor values are in a different tensor in pytorch? I have 2 tensors of unequal size a = torch.tensor([[1,2], [2,3],[3,4]]) b = torch.tensor([[4,5],[2,3]]) I want a boolean array of whether each value exists in the other tensor without iterating. Something like a in b, where the result should be [False, True, False], as only the value of a[1] is in b A: I think it's impossible without using at least some type of iteration. The most succinct way I can manage is using list comprehension: [True if i in b else False for i in a] This checks each element of a for membership in b and gives [False, True, False]. It can also be reversed to check each element of b in a, giving [False, True]. A: this should work result = [] for i in a: try: # to avoid error for the case of empty tensors result.append(max(i.numpy()[1] == b.T.numpy()[1,i.numpy()[0] == b.T.numpy()[0,:]])) except: result.append(False) result A: Neither of the solutions that use tensor in tensor work in all cases for the OP. If the tensors contain elements/tuples that match in at least one dimension, the aforementioned operation will return True for those elements, potentially leading to hours of debugging. For example: torch.tensor([2,5]) in torch.tensor([2,10]) # returns True torch.tensor([5,2]) in torch.tensor([5,10]) # returns True A solution for the above could be forcing the check for equality in each dimension, and then applying a Tensor Boolean add. Note, the following 2 methods may not be very efficient because Tensors are rather slow for iterating and equality checking, so converting to numpy may be needed for large data: [all(torch.any(i == b, dim=0)) for i in a] # OR [any((i[0] == b[:, 0]) & (i[1] == b[:, 1])) for i in a] That being said, @yuri's solution also seems to work for these edge cases, but it still seems to fail occasionally, and it is rather unreadable. A: If you need to compare all subtensors across the first dimension of a, use in: >>> [i in b for i in a] [False, True, False] A: I recently also encountered this issue though my goal is to select those row sub-tensors not "in" the other tensor. My solution is to first convert the tensors to pandas dataframes, then use .drop_duplicates(). More specifically, for OP's problem, one can do: import pandas as pd import numpy as np import torch tensor1_df = pd.DataFrame(tensor1) tensor1_df['val'] = False tensor2_df = pd.DataFrame(tensor2) tensor2_df['val'] = True tensor1_notin_tensor2 = torch.from_numpy(pd.concat([tensor1_df, tensor2_df]).reset_index().drop(columns=['index']).drop_duplicates(keep='last').reset_index().loc[np.arange(tensor1_df.shape[0])].val.values)
How to check whether tensor values are in a different tensor in pytorch?
I have 2 tensors of unequal size a = torch.tensor([[1,2], [2,3],[3,4]]) b = torch.tensor([[4,5],[2,3]]) I want a boolean array of whether each value exists in the other tensor without iterating. Something like a in b, where the result should be [False, True, False], as only the value of a[1] is in b
[ "I think it's impossible without using at least some type of iteration. The most succinct way I can manage is using list comprehension:\n[True if i in b else False for i in a]\n\nChecks for elements in b that are in a and gives [False, True, False]. Can also be reversed to get elements a in b [False, True].\n", "this should work\nresult = []\nfor i in a:\n try: # to avoid error for the case of empty tensors\n result.append(max(i.numpy()[1] == b.T.numpy()[1,i.numpy()[0] == b.T.numpy()[0,:]]))\n except:\n result.append(False)\nresult\n\n", "Neither of the solutions that use tensor in tensor work in all cases for the OP. If the tensors contain elements/tuples that match in at least one dimension, the aforementioned operation will return True for those elements, potentially leading to hours of debugging. For example:\ntorch.tensor([2,5]) in torch.tensor([2,10]) # returns True\ntorch.tensor([5,2]) in torch.tensor([5,10]) # returns True\n\nA solution for the above could be forcing the check for equality in each dimension, and then applying a Tensor Boolean add. Note, the following 2 methods may not be very efficient because Tensors are rather slow for iterating and equality checking, so converting to numpy may be needed for large data:\n[all(torch.any(i == b, dim=0)) for i in a] # OR\n[any((i[0] == b[:, 0]) & (i[1] == b[:, 1])) for i in a]\n\nThat being said, @yuri's solution also seems to work for these edge cases, but it still seems to fail occasionally, and it is rather unreadable.\n", "If you need to compare all subtensors across the first dimension of a, use in:\n>>> [i in b for i in a]\n[False, True, False]\n\n", "I recently also encountered this issue though my goal is to select those row sub-tensors not \"in\" the other tensor. My solution is to first convert the tensors to pandas dataframe, then use .drop_duplicates(). More specifically, for OP's problem, one can do:\nimport pandas as pd\nimport torch\n\ntensor1_df = pd.DataFrame(tensor1)\ntensor1_df['val'] = False\ntensor2_df = pd.DataFrame(tensor2)\ntensor2_df['val'] = True\ntensor1_notin_tensor2 = torch.from_numpy(pd.concat([tensor1_df, tensor2_df]).reset_index().drop(columns=['index']).drop_duplicates(keep='last').reset_index().loc[np.arange(tensor1_df.shape[0])].val.values)\n\n" ]
[ 2, 2, 1, 0, 0 ]
[]
[]
[ "python", "python_3.x", "pytorch", "torch" ]
stackoverflow_0066036375_python_python_3.x_pytorch_torch.txt
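A fully vectorized sketch using broadcasting, which avoids the Python-level loop and compares whole rows at once:

import torch

a = torch.tensor([[1, 2], [2, 3], [3, 4]])
b = torch.tensor([[4, 5], [2, 3]])

# (3, 1, 2) == (1, 2, 2) broadcasts to (3, 2, 2); all() over the last dim
# requires every coordinate to match, any() over b's rows collapses it
matches = (a[:, None, :] == b[None, :, :]).all(dim=2).any(dim=1)
print(matches)  # tensor([False, True, False])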
Q: NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc I want to import a dataframe into an Access database but I got an error NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc from sqlalchemy import create_engine import urllib import pyodbc conec = (r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};" r"DBQ=C:\Users\redim\Desktop\18_marzo\Libr2.accdb" ) con = f"access+pyodbc:///?odbc_connect={urllib.parse.quote_plus(conec)}" acc_engine = create_engine(con) df.to_sql('hola', acc_engine) A: For some reason this only worked in Jupyter notebooks, not in PyCharm. I was having the same problem for quite some time in both until I upgraded SQLAlchemy. I use conda to install most of my libraries: conda update sqlalchemy But if you are using pip, the command is: pip install --upgrade sqlalchemy I went from SQLAlchemy 1.3.7 to 1.4.39 and the error went away, at least in Jupyter notebooks anyway.
NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc
I want to import a dataframe into an Access database but I got an error NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:access.pyodbc from sqlalchemy import create_engine import urllib import pyodbc conec = (r"Driver={Microsoft Access Driver (*.mdb, *.accdb)};" r"DBQ=C:\Users\redim\Desktop\18_marzo\Libr2.accdb" ) con = f"access+pyodbc:///?odbc_connect={urllib.parse.quote_plus(conec)}" acc_engine = create_engine(con) df.to_sql('hola', acc_engine)
[ "For some reason this only works in Jupyter notebooks, not in PyCharm which I was having the same problems for quite some time in both until I upgraded SQLalchemy\nI use conda install with most of my libraries:\nconda update sqlalchemy\n\nBut if you are using pip, then below is command:\npip install --upgrade sqlalchemy\n\nI went from sqlalchemy 1.3.7 to 1.4.39 and the error went away, at least in Jupyter Notebooks anyways.\n" ]
[ 0 ]
[]
[]
[ "python", "sqlalchemy", "sqlalchemy_access" ]
stackoverflow_0066858168_python_sqlalchemy_sqlalchemy_access.txt
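Another angle worth checking: the access+pyodbc dialect ships in the separate sqlalchemy-access package, so this NoSuchModuleError can simply mean that package is absent; a quick sanity-check sketch (module name assumed from the package's import name):

# pip install sqlalchemy-access
import sqlalchemy_access  # an ImportError here means the dialect package is missing
from sqlalchemy import create_engine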
Q: Error passing wav file to IPython.display I am new to Python but I am studying it as a programming language for DSP. I recorded a wav file, and have been trying to play it back using IPython.display.Audio: import IPython.display from scipy.io import wavfile rate, s = wavfile.read('h.wav') IPython.display.Audio(s, rate=rate) But this gives the following error: struct.error: ushort format requires 0 <= number <= 0xffff I tried installing FFmpeg but it hasn't helped. A: That's not a very useful error message; it took a bit of debugging to figure out what was going on! It is caused by the "shape" of the matrix returned from wavfile being the wrong way around. The docs for IPython.display.Audio say it expects a: Numpy 2d array containing waveforms for each channel. Shape=(NCHAN, NSAMPLES). If I read a (stereo) wav file I have lying around: rate, samples = wavfile.read(path) print(samples.shape) I get (141120, 2) showing this is of shape (NSAMPLES, NCHAN). Passing this array directly to Audio I get a similar error as you do. Transposing the array will flip these around, making the array compatible with this method. The transpose of a matrix in Numpy is accessed via the .T attribute, e.g.: IPython.display.Audio(samples.T, rate=rate) works for me. A: Thank you for your answer, it helped me. Below is my code; maybe it can help someone (sd and st below are presumably the sounddevice and streamlit modules): import sounddevice as sd import streamlit as st frequency = 44100 duration = 5 record = sd.rec((frequency * duration), samplerate=frequency , channels=1, blocking=True, dtype='float64') sd.wait() st.audio(record.T, sample_rate=frequency)
Error passing wav file to IPython.display
I am new to Python but I am studying it as a programming language for DSP. I recorded a wav file, and have been trying to play it back using IPython.display.Audio: import IPython.display from scipy.io import wavfile rate, s = wavfile.read('h.wav') IPython.display.Audio(s, rate=rate) But this gives the following error: struct.error: ushort format requires 0 <= number <= 0xffff I tried installing FFmpeg but it hasn't helped.
[ "That's not a very useful error message, it took a bit of debugging to figure out what was going on! It is caused by the \"shape\" of the matrix returned from wavfile being the wrong way around.\nThe docs for IPython.display.Audio say it expects a:\n\nNumpy 2d array containing waveforms for each channel. Shape=(NCHAN, NSAMPLES).\n\nIf I read a (stereo) wav file I have lying around:\nrate, samples = wavfile.read(path)\nprint(samples.shape)\n\nI get (141120, 2) showing this is of shape (NSAMPLES, NCHAN). Passing this array directly to Audio I get a similar error as you do. Transposing the array will flip these around, causing the array to be compatible with this method. The transpose of a matrix in Numpy is accessed via the .T attribute, e.g.:\nIPython.display.Audio(samples.T, rate=rate)\n\nworks for me.\n", "Thank you for your answer, it helped me.\nbelow is my code, maybe can help someone.\nfrequency = 44100\nduration = 5\nrecord = sd.rec((frequency * duration), samplerate=frequency , channels=1, blocking=True, dtype='float64')\nsd.wait()\nst.audio(record.T, sample_rate=frequency)\n" ]
[ 8, 0 ]
[]
[]
[ "jupyter_notebook", "python", "scipy", "wav" ]
stackoverflow_0057137050_jupyter_notebook_python_scipy_wav.txt
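A small defensive sketch that transposes only when the array looks like (NSAMPLES, NCHAN); the orientation check is a heuristic assumption, not part of the IPython API:

import numpy as np

def to_audio_layout(samples):
    s = np.asarray(samples)
    # heuristic: real audio has far more samples than channels
    if s.ndim == 2 and s.shape[0] > s.shape[1]:
        s = s.T
    return s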
Q: Tricky Multiple Groupings and Transformations using Pandas I have a dataset where I would like to: group by location and box and take distinct count of the box create column headers with the values in the status column and include its count based on the box Data ID location type box status aa NY no box55 hey aa NY no box55 hi aa NY yes box66 hello aa NY yes box66 goodbye aa CA no box11 hey aa CA no box11 hi aa CA yes box11 hello aa CA yes box11 goodbye aa CA no box86 hey aa CA no box86 hi aa CA yes box86 hello aa CA yes box99 goodbye aa CA no box99 hey aa CA no box99 hi Desired location box count box hey hi hello goodbye NY 2 box55 1 1 0 0 NY 2 box66 0 0 1 1 CA 3 box11 1 1 1 1 CA 3 box86 1 1 1 0 CA 3 box99 1 1 0 1 Doing df['box count'] = df.groupby(['location','box'])['box'].size() t = pd.get_dummies(df, prefix_sep='', prefix='', columns=['status']).groupby(['box', 'location'], as_index=False).sum().assign(count=df.groupby(['box', 'location'], as_index=False)['status'].size()['size']) Any suggestion is appreciated. A: Try making two dataframes: first with .groupby(), second with pd.crosstab. Then just pd.concat them: df1 = df.groupby(["location", "box"]).agg(**{"box count": ("box", "size")}) df2 = pd.crosstab([df["location"], df["box"]], df["status"]) df_out = pd.concat([df1, df2], axis=1) print(df_out.reset_index()) Prints: location box box count goodbye hello hey hi 0 CA box11 4 1 1 1 1 1 CA box86 3 0 1 1 1 2 CA box99 3 1 0 1 1 3 NY box55 2 0 0 1 1 4 NY box66 2 1 1 0 0 EDIT: m = df.groupby(["location"])["box"].nunique() df1 = df.groupby(["location", "box"]).agg( **{ "box count": ( "location", lambda x: m[x.iat[0]], ) } ) df2 = pd.crosstab([df["location"], df["box"]], df["status"]) df_out = pd.concat([df1, df2], axis=1) print(df_out.reset_index()) Prints: location box box count goodbye hello hey hi 0 CA box11 3 1 1 1 1 1 CA box86 3 0 1 1 1 2 CA box99 3 1 0 1 1 3 NY box55 2 0 0 1 1 4 NY box66 2 1 1 0 0
Tricky Multiple Groupings and Transformations using Pandas
I have a dataset where I would like to: group by location and box and take distinct count of the box create column headers with the values in the status column and include its count based on the box Data ID location type box status aa NY no box55 hey aa NY no box55 hi aa NY yes box66 hello aa NY yes box66 goodbye aa CA no box11 hey aa CA no box11 hi aa CA yes box11 hello aa CA yes box11 goodbye aa CA no box86 hey aa CA no box86 hi aa CA yes box86 hello aa CA yes box99 goodbye aa CA no box99 hey aa CA no box99 hi Desired location box count box hey hi hello goodbye NY 2 box55 1 1 0 0 NY 2 box66 0 0 1 1 CA 3 box11 1 1 1 1 CA 3 box86 1 1 1 0 CA 3 box99 1 1 0 1 Doing df['box count'] = df.groupby(['location','box'])['box'].size() t = pd.get_dummies(df, prefix_sep='', prefix='', columns=['status']).groupby(['box', 'location'], as_index=False).sum().assign(count=df.groupby(['box', 'location'], as_index=False)['status'].size()['size']) Any suggestion is appreciated.
[ "Try making two dataframes: first with .groupby(), second with pd.crosstab. Then just pd.concat them:\ndf1 = df.groupby([\"location\", \"box\"]).agg(**{\"box count\": (\"box\", \"size\")})\ndf2 = pd.crosstab([df[\"location\"], df[\"box\"]], df[\"status\"])\n\ndf_out = pd.concat([df1, df2], axis=1)\nprint(df_out.reset_index())\n\nPrints:\n location box box count goodbye hello hey hi\n0 CA box11 4 1 1 1 1\n1 CA box86 3 0 1 1 1\n2 CA box99 3 1 0 1 1\n3 NY box55 2 0 0 1 1\n4 NY box66 2 1 1 0 0\n\n\nEDIT:\nm = df.groupby([\"location\"])[\"box\"].nunique()\ndf1 = df.groupby([\"location\", \"box\"]).agg(\n **{\n \"box count\": (\n \"location\",\n lambda x: m[x.iat[0]],\n )\n }\n)\ndf2 = pd.crosstab([df[\"location\"], df[\"box\"]], df[\"status\"])\n\ndf_out = pd.concat([df1, df2], axis=1)\nprint(df_out.reset_index())\n\nPrints:\n location box box count goodbye hello hey hi\n0 CA box11 3 1 1 1 1\n1 CA box86 3 0 1 1 1\n2 CA box99 3 1 0 1 1\n3 NY box55 2 0 0 1 1\n4 NY box66 2 1 1 0 0\n\n" ]
[ 1 ]
[]
[]
[ "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074539860_group_by_numpy_pandas_python.txt
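An alternative sketch built from pivot_table plus a merged nunique count, which yields the same wide layout (column names as in the question):

counts = df.groupby('location')['box'].nunique().rename('box count').reset_index()
wide = df.pivot_table(index=['location', 'box'], columns='status',
                      aggfunc='size', fill_value=0).reset_index()
# attach the per-location distinct-box count to every (location, box) row
print(counts.merge(wide, on='location'))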
Q: Sort short_names in reverse alphabetic order I don't understand what I am doing wrong: Sort short_names in reverse alphabetic order. Sample output from given program: ['Tod', 'Sam', 'Joe', 'Jan', 'Ann'] My code: short_names = ['Jan', 'Sam', 'Ann', 'Joe', 'Tod'] short_names.sort() print(short_names) A: sort function has a reverse option: short_names.sort(reverse=True) A: As always, first have a look at the documentation for list.sort: sort(*, key=None, reverse=False) This method sorts the list in place, using only < comparisons between items. reverse is a boolean value. If set to True, then the list elements are sorted as if each comparison were reversed. So the items in your list will be sorted from "smallest" to "largest" using the < comparison, which for strings means lexicographical ordering (A < AB < B). To sort it in reverse order, use the reverse parameter: short_names.sort(reverse=True) For more information have a look at the official Sorting HOW TO. A: short_names.sort() short_names.reverse()
Sort short_names in reverse alphabetic order
I don't understand what I am doing wrong: Sort short_names in reverse alphabetic order. Sample output from given program: ['Tod', 'Sam', 'Joe', 'Jan', 'Ann'] My code: short_names = ['Jan', 'Sam', 'Ann', 'Joe', 'Tod'] short_names.sort() print(short_names)
[ "sort function has a reverse option:\nshort_names.sort(reverse=True)\n\n", "As always, first have a look at the documentation for list.sort:\n\nsort(*, key=None, reverse=None)\nThis method sorts the list in place, using only < comparisons between items.\nreverse is a boolean value. If set to True, then the list elements are sorted as if each comparison were reversed.\n\nSo the items in your list will be sorted from \"smallest\" to \"largest\" using the < comparion, which for strings means lexicographical ordering (A < AB < B). To sort it in reverse order, use the reverse parameter:\nshort_names.sort(reverse=True)\nFor more information have a look at the official Sorting HOW TO.\n", "short_names.sort()\nshort_names.reverse()\n\n" ]
[ 1, 0, 0 ]
[ "I'm doing this lab right now, and this is what your code should look like for the zybook, based on the methods we have learned.\nuser_input = input()\nshort_names = user_input.split()\nshort_names.sort()\nshort_names.reverse()\nprint(short_names)\n\n" ]
[ -1 ]
[ "python", "python_2.7", "python_3.x" ]
stackoverflow_0045049758_python_python_2.7_python_3.x.txt
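If the original list must stay unchanged, sorted() returns a new reversed copy instead of sorting in place:

short_names = ['Jan', 'Sam', 'Ann', 'Joe', 'Tod']
print(sorted(short_names, reverse=True))  # ['Tod', 'Sam', 'Joe', 'Jan', 'Ann']
print(short_names)                        # original order preserved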
Q: How can a function access variables that are not defined inside the function? I recently started studying Python and I came across an example that I did not understand: def teste(): print(a, b) a = 5 b = 4 teste() # Outputs '5 4' What is happening here? Is teste() able to access a and b because those variables are globals? A: Short answer, yes. a and b are global variables in that sense. Long answer, as long as you keep the variable names on the right side of an assignment or just pass them to a function within a function, they'll act as global variables. What's happening is that Python will first look in the local scope of that function for the variable names, and only if it doesn't find them will it move to the next enclosing scope, which is the global scope in your example. Function foo has no variable named a so Python searches the next available scope a = "global a" def foo(): # No variable 'a' in local scope of foo() # Getting value of 'a' from the enclosing global scope (lexical scoping) print(a) foo() # Prints "global a" If you want to declare a variable as global inside your function, you can use the global keyword. With that you can set a new value to your now global variable: a = "global a" def foo(): global a a = "Changed in function" print(a) # Prints "global a" foo() # assigns new value to a print(a) # Prints "Changed in function" If you don't use the global keyword, as soon as you use the same variable name inside a function on the left side of an assignment, you are creating a local variable overshadowing the global variable with the same name: a = "global a" def foo(): a = "local a" print(a) print(a) # Prints "global a" foo() # Prints "local a" print(a) # Prints "global a"
How can a function access variables that are not defined inside the function?
I recently started studying Python and I came across an example that I did not understand: def teste(): print(a, b) a = 5 b = 4 teste() # Outputs '5 4' What is happening here? Is teste() able to access a and b because those variables are globals?
[ "Short answer, yes. a and b are global variables in that sense.\nLong answer, as long as you keep the variable names on the right side of an assignment or just pass them to a function within a function, they'll act as global variables.\nWhat's happening is that Python will first look in the local scope of that function for the variable names and only if it doesn't find them go for the next scope, which is the global scope in your example.\nFunction foo has no variable named a so Python searches in the next available scope\na = \"global a\"\n\ndef foo():\n # No variable 'a' in local scope of foo()\n # Getting value of 'a' from the scope where foo() is called\n print(a)\n\nfoo() # Prints \"global a\"\n\nIf you want to declare a variable as global inside your function, you can use the global keyword. With that you can set a new value to your now global variable:\na = \"global a\"\n\ndef foo():\n global a\n a = \"Changed in function\"\n\nprint(a) # Prints \"global a\"\nfoo() # assigns new value to a\nprint(a) # Prints \"Changed in function\"\n\nIf you don't use the global keyword, as soon as you use the same variable name inside a function on the left side of an assignment, you are creating a local variable overshadowing the global variable with the same name:\na = \"global a\"\n\ndef foo():\n a = \"local a\"\n print(a)\n\nprint(a) # Prints \"global a\"\nfoo() # Prints \"local a\"\nprint(a) # Prints \"global a\"\n\n" ]
[ 1 ]
[]
[]
[ "global_variables", "python", "python_3.x", "scope" ]
stackoverflow_0074540137_global_variables_python_python_3.x_scope.txt
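A related pitfall, shown as a minimal sketch in plain Python (no assumptions beyond the standard interpreter): assigning to a name anywhere in a function makes it local for the whole function body, so reading it before the assignment raises UnboundLocalError instead of falling back to the global.

    a = "global a"

    def foo():
        print(a)       # fails: 'a' is assigned below, so it is local everywhere in foo
        a = "local a"

    try:
        foo()
    except UnboundLocalError as e:
        print(e)  # exact message varies by Python version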
Q: Change Typo Column Values with Right Word based on Columns in Other Dataframe I have two dataframes, the first one is location, location = pd.DataFrame({'city': ['RIYADH','SEOUL','BUSAN','TOKYO','OSAKA'], 'country': ['Saudi Arabia','South Korea','South Korea','Japan','Japan']}) the other one is customer, customer = pd.DataFrame({'id': [1001,2002,3003,4004,5005,6006,7007,8008,9009], 'city': ['tokio','Sorth KOREA','riadh','JAPANN','tokyo','osako','Arab Saudi','SEOUL','buSN']}) I want to replace the typo values in the city column of the customer dataframe with the right city/country from the location dataframe. So the output will be like this: id location 1001 TOKYO 2002 South Korea 3003 RIYADH 4004 Japan 5005 TOKYO 6006 OSAKA 7007 Saudi Arabia 8008 SEOUL 9009 BUSAN A: A possible solution, based on RapidFuzz: from rapidfuzz import process out = (customer.assign( aux = customer['city'] .map(lambda x: process.extractOne(x, location['city']+'*'+location['country'])[0]))) out[['aux1', 'aux2']] = out['aux'].str.split(r'\*', expand=True) out['city'] = out.apply(lambda x: process.extractOne(x['city'], x.loc['aux1':'aux2'])[0], axis=1) out = out.drop(columns=['aux', 'aux1', 'aux2']) Output: id city 0 1001 TOKYO 1 2002 South Korea 2 3003 RIYADH 3 4004 Japan 4 5005 TOKYO 5 6006 OSAKA 6 7007 Saudi Arabia 7 8008 SEOUL 8 9009 BUSAN EDIT This tries to offer a solution for the OP's below comment: from rapidfuzz import process import numpy as np def get_match(x, y, score): match = process.extractOne(x, y) return np.nan if match[1] < score else match[0] out = (customer.assign( aux=customer['city'] .map(lambda x: process.extractOne(x, location['city']+'*'+location['country'])[0]))) out[['aux1', 'aux2']] = out['aux'].str.split(r'\*', expand=True) out['city'] = out.apply(lambda x: get_match( x['city'], x.loc['aux1':'aux2'], 92), axis=1) out = out.drop(columns=['aux', 'aux1', 'aux2']) Output: id city 0 1001 NaN 1 2002 NaN 2 3003 NaN 3 4004 NaN 4 5005 TOKYO 5 6006 NaN 6 7007 NaN 7 8008 SEOUL 8 9009 NaN
Change Typo Column Values with Right Word based on Columns in Other Dataframe
I have two dataframes, the first one is location, location = pd.DataFrame({'city': ['RIYADH','SEOUL','BUSAN','TOKYO','OSAKA'], 'country': ['Saudi Arabia','South Korea','South Korea','Japan','Japan']}) the other one is customer, customer = pd.DataFrame({'id': [1001,2002,3003,4004,5005,6006,7007,8008,9009], 'city': ['tokio','Sorth KOREA','riadh','JAPANN','tokyo','osako','Arab Saudi','SEOUL','buSN']}) I want to replace the typo values in the city column of the customer dataframe with the right city/country from the location dataframe. So the output will be like this: id location 1001 TOKYO 2002 South Korea 3003 RIYADH 4004 Japan 5005 TOKYO 6006 OSAKA 7007 Saudi Arabia 8008 SEOUL 9009 BUSAN
[ "A possible solution, based on RapidFuzz:\nfrom rapidfuzz import process\n\nout = (customer.assign(\n aux = customer['city']\n .map(lambda x: \n process.extractOne(x, location['city']+'*'+location['country'])[0])))\n\nout[['aux1', 'aux2']] = out['aux'].str.split(r'\\*', expand=True)\nout['city'] = out.apply(lambda x: \n process.extractOne(x['city'], x.loc['aux1':'aux2'])[0], axis=1)\nout = out.drop(columns=['aux', 'aux1', 'aux2'])\n\nOutput:\n id city\n0 1001 TOKYO\n1 2002 South Korea\n2 3003 RIYADH\n3 4004 Japan\n4 5005 TOKYO\n5 6006 OSAKA\n6 7007 Saudi Arabia\n7 8008 SEOUL\n8 9009 BUSAN\n\nEDIT\nThis tries to offer a solution for the OP's below comment:\nfrom rapidfuzz import process\n\ndef get_match(x, y, score):\n match = process.extractOne(x, y)\n return np.nan if match[1] < score else match[0]\n\nout = (customer.assign(\n aux=customer['city']\n .map(lambda x:\n process.extractOne(x, location['city']+'*'+location['country'])[0])))\n\nout[['aux1', 'aux2']] = out['aux'].str.split(r'\\*', expand=True)\nout['city'] = out.apply(lambda x: get_match(\n x['city'], x.loc['aux1':'aux2'], 92), axis=1)\nout = out.drop(columns=['aux', 'aux1', 'aux2'])\n\nOutput:\n id city\n0 1001 NaN\n1 2002 NaN\n2 3003 NaN\n3 4004 NaN\n4 5005 TOKYO\n5 6006 NaN\n6 7007 NaN\n7 8008 SEOUL\n8 9009 NaN\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "fuzzy_comparison", "pandas", "python", "similarity" ]
stackoverflow_0074540077_dataframe_fuzzy_comparison_pandas_python_similarity.txt
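A minimal sketch of a shorter variant of the thresholding above (assuming rapidfuzz is installed; the cutoff value 80 is an arbitrary example): process.extractOne accepts a score_cutoff argument and returns None when nothing reaches it, which can replace a hand-rolled helper like get_match.

    from rapidfuzz import process, utils

    choices = ['RIYADH', 'SEOUL', 'BUSAN', 'TOKYO', 'OSAKA']

    # default_process lowercases and strips, so 'tokio' can match 'TOKYO'
    match = process.extractOne('tokio', choices,
                               processor=utils.default_process,
                               score_cutoff=80)
    print(match)  # (choice, score, index) tuple, or None if nothing reaches the cutoff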
Q: Python asyncio listener loop doesn't run using idle main loop I have a "listener" loop that constantly watches for items to process from an asyncio queue. This loop runs in a part of the application that is not using asyncio, so I've been trying to set up a passive asyncio main loop that the listener can be transferred to as needed. The listener is started and stopped as needed per input from the user. For some reason the code below never results in the listener() actually running (i.e. print("Listener Running") is never printed). start_IOLoop_thread is run at startup of the application. Can anyone point out what the problem is with this setup? Please let me know if more info is needed. Edit: replaced code with a runnable example per the comments: import asyncio import threading from asyncio.queues import Queue import time class Client: def __init__(self): self.streamQ = Queue() self.loop = None self.start_IOLoop_thread() self.stream_listener() def stream_listener(self): self.streaming = True async def listener(): print("Listener Running") while self.streaming: data = await self.streamQ.get() # DEBUG print(data) print("Listener Stopped") print("Starting Listener") self.listener = asyncio.run_coroutine_threadsafe(listener(), self.loop) def start_IOLoop_thread(self): async def inf_loop(): # Keep the main thread alive and doing nothing # so we can freely give it tasks as needed while True: await asyncio.sleep(1) async def main(): await inf_loop() def start_IO(): self.loop = asyncio.new_event_loop() asyncio.set_event_loop(self.loop) asyncio.run(main()) print("Main Exited") threading.Thread(target=start_IO, daemon=True).start() # A small delay is needed to give the loop time to initialize, # otherwise self.loop is passed as "None" time.sleep(0.1) if __name__ == "__main__": C = Client() input("Enter to exit") A: You never start the newly created loop. I adjusted to call main (although here it does nothing I assume the original code is more complex). All changes are in start_IO. Tested with python 3.10 (I think there was some change in the past regarding threads and async) import asyncio import threading from asyncio.queues import Queue import time class Client: def __init__(self): self.streamQ = Queue() self.loop = None self.start_IOLoop_thread() self.stream_listener() def stream_listener(self): self.streaming = True async def listener(): print("Listener Running") while self.streaming: data = await self.streamQ.get() # DEBUG print(data) print("Listener Stopped") print("Starting Listener") self.listener = asyncio.run_coroutine_threadsafe(listener(), self.loop) def start_IOLoop_thread(self): async def inf_loop(): # Keep the main thread alive and doing nothing # so we can freely give it tasks as needed while True: await asyncio.sleep(1) async def main(): await inf_loop() def start_IO(): self.loop = asyncio.new_event_loop() self.loop.create_task(main()) asyncio.set_event_loop(self.loop) self.loop.run_forever() print("Main Exited") threading.Thread(target=start_IO, daemon=True).start() # A small delay is needed to give the loop time to initialize, # otherwise self.loop is passed as "None" time.sleep(0.1) if __name__ == "__main__": C = Client() input("Enter to exit")
Python asyncio listener loop doesn't run using idle main loop
I have a "listener" loop that constantly watches for items to process from an asyncio queue. This loop runs in a part of the application that is not using asyncio, so I've been trying to set up a passive asyncio main loop that the listener can be transferred to as needed. The listener is started and stopped as needed per input from the user. For some reason the code below never results in the listener() actually running (i.e. print("Listener Running") is never printed). start_IOLoop_thread is run at startup of the application. Can anyone point out what the problem is with this setup? Please let me know if more info is needed. Edit: replaced code with a runnable example per the comments: import asyncio import threading from asyncio.queues import Queue import time class Client: def __init__(self): self.streamQ = Queue() self.loop = None self.start_IOLoop_thread() self.stream_listener() def stream_listener(self): self.streaming = True async def listener(): print("Listener Running") while self.streaming: data = await self.streamQ.get() # DEBUG print(data) print("Listener Stopped") print("Starting Listener") self.listener = asyncio.run_coroutine_threadsafe(listener(), self.loop) def start_IOLoop_thread(self): async def inf_loop(): # Keep the main thread alive and doing nothing # so we can freely give it tasks as needed while True: await asyncio.sleep(1) async def main(): await inf_loop() def start_IO(): self.loop = asyncio.new_event_loop() asyncio.set_event_loop(self.loop) asyncio.run(main()) print("Main Exited") threading.Thread(target=start_IO, daemon=True).start() # A small delay is needed to give the loop time to initialize, # otherwise self.loop is passed as "None" time.sleep(0.1) if __name__ == "__main__": C = Client() input("Enter to exit")
[ "You never start the newly created loop. I adjusted to call main (although here it does nothing I assume the original code is more complex). All changes are in start_IO. Tested with python 3.10 (I think there was some change in the past regarding threads and async)\nimport asyncio\nimport threading\nfrom asyncio.queues import Queue\nimport time\n\n\nclass Client:\n def __init__(self):\n self.streamQ = Queue()\n self.loop = None\n self.start_IOLoop_thread()\n self.stream_listener()\n\n def stream_listener(self):\n self.streaming = True\n\n async def listener():\n print(\"Listener Running\")\n while self.streaming:\n data = await self.streamQ.get()\n # DEBUG\n print(data)\n print(\"Listener Stopped\")\n\n print(\"Starting Listener\")\n self.listener = asyncio.run_coroutine_threadsafe(listener(), self.loop)\n\n def start_IOLoop_thread(self):\n async def inf_loop():\n # Keep the main thread alive and doing nothing\n # so we can freely give it tasks as needed\n while True:\n await asyncio.sleep(1)\n\n async def main():\n await inf_loop()\n\n def start_IO():\n self.loop = asyncio.new_event_loop()\n self.loop.create_task(main())\n asyncio.set_event_loop(self.loop)\n self.loop.run_forever()\n\n print(\"Main Exited\")\n\n threading.Thread(target=start_IO, daemon=True).start()\n # A small delay is needed to give the loop time to initialize,\n # otherwise self.loop is passed as \"None\"\n time.sleep(0.1)\n\n\nif __name__ == \"__main__\":\n C = Client()\n input(\"Enter to exit\")\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_asyncio" ]
stackoverflow_0074538362_python_python_asyncio.txt
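A standalone illustration of the pattern the answer relies on, as a minimal sketch (no serial port involved; the payload string is made up): a background loop must actually be running via run_forever before run_coroutine_threadsafe can schedule work onto it.

    import asyncio
    import threading

    loop = asyncio.new_event_loop()

    def run_loop():
        asyncio.set_event_loop(loop)
        loop.run_forever()        # the loop must actually be running...

    threading.Thread(target=run_loop, daemon=True).start()

    async def work(msg):
        await asyncio.sleep(0.1)
        print('handled:', msg)

    # ...before run_coroutine_threadsafe can schedule anything on it
    fut = asyncio.run_coroutine_threadsafe(work('hello'), loop)
    fut.result()                  # blocks the main thread until the coroutine finishes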
Q: Time zone offset change history dataset by date and city parameter I am searching for a REST API that will allow me to get all time zone offset changes of a city between dates. Is there any API like this (it does not have to be free)? For example: Get --> Headers: City From date To date Rome 2001-01-01 00:00:01.000 2020-01-01 00:00:01.000 Response: Timestamp Time zone offset 2001-01-01 00:00:01.000 +1 2001-07-01 00:00:01.000 +2 2002-01-01 00:00:01.000 +1 2002-07-01 00:00:01.000 +2 ... ... ... ... 2020-01-01 00:00:01.000 +1 A: Azure Maps API. One call for Search --> Get Search Address - returns Coordinates (latitude and longitude). Another call for Timezone --> Get Timezone By Coordinates - returns time zones (historical, current, future).
Time zone offset change history dataset by date and city parameter
I am searching for a REST API that will allow me to get all time zone offset changes of a city between dates. Is there any API like this (it does not have to be free)? For example: Get --> Headers: City From date To date Rome 2001-01-01 00:00:01.000 2020-01-01 00:00:01.000 Response: Timestamp Time zone offset 2001-01-01 00:00:01.000 +1 2001-07-01 00:00:01.000 +2 2002-01-01 00:00:01.000 +1 2002-07-01 00:00:01.000 +2 ... ... ... ... 2020-01-01 00:00:01.000 +1
[ "Azure Maps API.\nOne call for Search --> Get Search Address- returns Coordinates (latitude and longitude).\nAnother call for Timezone --> Get Timezone By Coordinates - returns times zones (historical, current, future).\n" ]
[ 0 ]
[]
[]
[ "api", "python", "rest" ]
stackoverflow_0074376595_api_python_rest.txt
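If a REST service is not a hard requirement, the same history can be computed locally from the IANA tz database. A minimal sketch using Python's standard zoneinfo module (3.9+), with Europe/Rome as the sample zone and a two-year window as an arbitrary example:

    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    tz = ZoneInfo('Europe/Rome')
    d = datetime(2001, 1, 1, tzinfo=tz)
    end = datetime(2003, 1, 1, tzinfo=tz)

    prev = None
    while d < end:
        off = d.utcoffset()
        if off != prev:                 # report only when the offset changes
            print(d.isoformat(), off)
            prev = off
        d += timedelta(days=1)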
Q: How to get absolute path of root directory from anywhere within the directory in python Let's say I have the following directory model_folder | | ------- model_modules | | | ---- __init__.py | | | ---- foo.py | | | ---- bar.py | | ------- research | | | ----- training.ipynb | | | ----- eda.ipynb | | ------- main.py and I want to import model_modules into a script in research I can do that with the following import sys sys.path.append('/absolute/path/model_folder') from model_modules.foo import Foo from model_modules.bar import Bar However, let's say I don't explicitly know the absolute path of the root, or perhaps just don't want to hardcode it as it may change locations. How could I get the absolute path of module_folder from anywhere in the directory so I could do something like this? import sys sys.path.append(root) from model_modules.foo import Foo from model_modules.bar import Bar I referred to this question in which one of the answers recommends adding the following to the root directory, like so: utils.py from pathlib import Path def get_project_root() -> Path: return Path(__file__).parent.parent model_folder | | ------- model_modules | | | ---- __init__.py | | | ---- foo.py | | | ---- bar.py | | | ------- src | | | ---- utils.py | | | | | ------- research | | | ----- training.ipynb | | | ----- eda.ipynb | | ------- main.py But then when I try to import this into a script in a subdirectory, like training.ipynb, I get an error from src.utils import get_project_root root = get_project_root ModuleNotFoundError: No module named 'src' So my question is, how can I get the absolute path to the root directory from anywhere within the directory in python? A: sys.path[0] contains your root directory (the directory where the program is located). You can use that to add your sub-directories. import sys sys.path.append( sys.path[0] + "/model_modules") import foo and for cases where foo.py may exist elsewhere: import sys sys.path.insert( 1, sys.path[0] + "/model_modules") # put near front of list import foo
How to get absolute path of root directory from anywhere within the directory in python
Let's say I have the following directory model_folder | | ------- model_modules | | | ---- __init__.py | | | ---- foo.py | | | ---- bar.py | | ------- research | | | ----- training.ipynb | | | ----- eda.ipynb | | ------- main.py and I want to import model_modules into a script in research I can do that with the following import sys sys.path.append('/absolute/path/model_folder') from model_modules.foo import Foo from model_modules.bar import Bar However, let's say I don't explicitly know the absolute path of the root, or perhaps just don't want to hardcode it as it may change locations. How could I get the absolute path of module_folder from anywhere in the directory so I could do something like this? import sys sys.path.append(root) from model_modules.foo import Foo from model_modules.bar import Bar I referred to this question in which one of the answers recommends adding the following to the root directory, like so: utils.py from pathlib import Path def get_project_root() -> Path: return Path(__file__).parent.parent model_folder | | ------- model_modules | | | ---- __init__.py | | | ---- foo.py | | | ---- bar.py | | | ------- src | | | ---- utils.py | | | | | ------- research | | | ----- training.ipynb | | | ----- eda.ipynb | | ------- main.py But then when I try to import this into a script in a subdirectory, like training.ipynb, I get an error from src.utils import get_project_root root = get_project_root ModuleNotFoundError: No module named 'src' So my question is, how can I get the absolute path to the root directory from anywhere within the directory in python?
[ "sys.path[0] contain your root directory (the directory where the program is located). You can use that to add your sub-directories.\nimport sys\nsys.path.append( sys.path[0] + \"/model_modules\")\nimport foo\n\nand for cases where foo.py may exist elsewhere:\nimport sys\nsys.path.insert( 1, sys.path[0] + \"/model_modules\") # put near front of list\nimport foo\n\n" ]
[ 0 ]
[]
[]
[ "directory", "python" ]
stackoverflow_0074539909_directory_python.txt
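Another common approach, independent of where the calling script lives, is to walk upwards from the current file until a marker that only exists at the project root is found. A minimal sketch (the marker names .git and pyproject.toml are assumptions; use whatever the repo actually contains at its root):

    import sys
    from pathlib import Path

    def find_root(markers=('.git', 'pyproject.toml')) -> Path:
        here = Path(__file__).resolve()
        for parent in here.parents:
            if any((parent / m).exists() for m in markers):
                return parent
        raise FileNotFoundError('no project root marker found')

    sys.path.append(str(find_root()))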
Q: Speed of loading files with asyncio I'm writing a piece of code that needs to compare a python set to many other sets and retain the names of the files which have a minimum intersection length. I currently have a synchronous version but was wondering if it could benefit from async/await. I wanted to start by comparing the loading of sets. I wrote a simple script that writes a small set to disk and just reads it in n amount of times. I was surprised to see the sync version of this was a lot faster. Is this to be expected? and if not is there a flaw in the way I have coded it below? My code is the following: Synchronous version: import pickle import asyncio import time import aiofiles pickle.dump(set(range(1000)), open('set.pkl', 'wb')) def count(): print("Started Loading") with open('set.pkl', mode='rb') as f: contents = pickle.loads(f.read()) print("Finishd Loading") def main(): for _ in range(100): count() if __name__ == "__main__": s = time.perf_counter() main() elapsed = time.perf_counter() - s print(f"{__file__} executed in {elapsed:0.3f} seconds.") Asynchronous version: import pickle import asyncio import time import aiofiles pickle.dump(set(range(1000)), open('set.pkl', 'wb')) async def count(): print("Started Loading") async with aiofiles.open('set.pkl', mode='rb') as f: contents = pickle.loads(await f.read()) print("Finishd Loading") async def main(): await asyncio.gather(*(count() for _ in range(100))) if __name__ == "__main__": import time s = time.perf_counter() asyncio.run(main()) elapsed = time.perf_counter() - s print(f"{__file__} executed in {elapsed:0.3f} seconds.") Executing them led to: async.py executed in 0.052 seconds. sync.py executed in 0.011 seconds. A: Asyncio doesn’t help in this case because your workload is basically disk-IO bound and CPU bound. CPU bound workload cannot be sped up by Asyncio. Disk-IO bound workload could benefit from async operation if the disk operation is very slow and your program can do other things during that time. This may not be your situation. So the slower asyncio performance is mainly due to the additional overhead introduced. A: aiofiles is implemented by using threads, so each time you tell it to read a file another thread will be instructed to read the file. the file being read is actually very small: it fits in 3 KB which is under 1 page in your memory and also smaller than your core L1 cache, the computer didn't actually read anything from the disk most of the time, it's all being moved between parts of your memory. in the async case it is being moved from one core's memory to the second, which is slower than keeping everything within 1 core's cache, but for larger files that are actually read from disk and other tasks to attend to, such as reading from sockets and reading different files from disk and doing some processing concurrently you will find the async version is faster, because it is using threads under the hood, and some tasks drop the gil, like reading from files and sockets, and some processing libraries. you are still reading files at the same speed in both cases as you will be limited by your drive read speed, you will only be reducing the "dead-time" of when you are not reading files, and your example has no "dead-time", it isn't even reading a file from disk. 
an exception to the above happens when you are reading data from multiple HDDs and SSDs concurrently where 1 thread can never read the data fast enough so the async version will be faster, because it can read from multiple drives at the same time (assuming you have the cores and IO lanes for it in your CPU)
Speed of loading files with asyncio
I'm writing a piece of code that needs to compare a python set to many other sets and retain the names of the files which have a minimum intersection length. I currently have a synchronous version but was wondering if it could benefit from async/await. I wanted to start by comparing the loading of sets. I wrote a simple script that writes a small set to disk and just reads it in n amount of times. I was surprised to see the sync version of this was a lot faster. Is this to be expected? and if not is there a flaw in the way I have coded it below? My code is the following: Synchronous version: import pickle import asyncio import time import aiofiles pickle.dump(set(range(1000)), open('set.pkl', 'wb')) def count(): print("Started Loading") with open('set.pkl', mode='rb') as f: contents = pickle.loads(f.read()) print("Finishd Loading") def main(): for _ in range(100): count() if __name__ == "__main__": s = time.perf_counter() main() elapsed = time.perf_counter() - s print(f"{__file__} executed in {elapsed:0.3f} seconds.") Asynchronous version: import pickle import asyncio import time import aiofiles pickle.dump(set(range(1000)), open('set.pkl', 'wb')) async def count(): print("Started Loading") async with aiofiles.open('set.pkl', mode='rb') as f: contents = pickle.loads(await f.read()) print("Finishd Loading") async def main(): await asyncio.gather(*(count() for _ in range(100))) if __name__ == "__main__": import time s = time.perf_counter() asyncio.run(main()) elapsed = time.perf_counter() - s print(f"{__file__} executed in {elapsed:0.3f} seconds.") Executing them led to: async.py executed in 0.052 seconds. sync.py executed in 0.011 seconds.
[ "Asyncio doesn’t help in this case because your workload is basically disk-IO bound and CPU bound.\nCPU bound workload cannot be sped up by Asyncio.\nDisk-IO bound workload could benefit from async operation if but the disk operation is very slow and your program can do other things during that time. This may not be your situation.\nSo the slower asyncio performance is mainly due to the additional overhead introduced.\n", "aiofiles is implemented by using threads, so each time you tell it to read a file another thread will be instructed to read the file.\nthe file being read is actually very small it fits in 3 KB which is under 1 page in your memory and also smaller than your core L1 cache, the computer didn't actually read anything from the disk most of the time, it's all being moved between parts of your memory.\nin the async case it is being moved from one core's memory to the second, which is slower than keeping everything within 1 core's cache, but for larger files that are actually read from disk and other tasks to attend to, such as reading from sockets and reading different files from disk and doing some processing concurrently you will find the async version is faster, because it is using threads under the hood, and some tasks drop the gil, like reading from files and sockets, and some processing libraries.\nyou are still reading files at the same speed in both cases as you will be limited by your drive read speed, you will only be reducing the \"dead-time\" of when you are not reading files, and your example has no \"dead-time\", it isn't even reading a file from disk.\nan exception to the above happens when you are reading data from multiple HDDs and SSDs concurrently where 1 thread can never read the data fast enough so the async version will be faster, because it can read from multiple drives at the same time (assuming you have the cores and IO lanes for it in your CPU)\n" ]
[ 1, 0 ]
[]
[]
[ "async_await", "asynchronous", "python", "python_asyncio" ]
stackoverflow_0074537864_async_await_asynchronous_python_python_asyncio.txt
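When the loads are genuinely slow and blocking (large pickles, network filesystems), a thread offload via asyncio.to_thread lets them overlap with other coroutines. A minimal sketch (Python 3.9+, reusing the question's 'set.pkl'; note that CPU-bound unpickling is still limited by the GIL):

    import asyncio
    import pickle

    def load(path):
        with open(path, 'rb') as f:
            return pickle.loads(f.read())

    async def main():
        # each blocking load runs in the default thread pool
        results = await asyncio.gather(*(asyncio.to_thread(load, 'set.pkl')
                                         for _ in range(10)))
        print(len(results), 'loads done')

    asyncio.run(main())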
Q: How does async.queue synchronization work? I have to build an application where my computer receives information from different serial ports. My plan is to use one thread per port to read the data and another common to all to parse and save. Communication between threads is done through an asyncio.Queue but I have a problem with my implementation. I have made a simple example with a single read thread and when I tried it I found a problem in my implementation. import asyncio import serial from enum import Enum EOL = b'\x17\x00' class IdMessage(Enum): ACK = 0xa0 PLS = 0xa1 async def read(port: serial.Serial, queue: asyncio.Queue): print('Reading') while True: if port.in_waiting > 0: data = port.read_until(EOL) id, code, *load, end_b1, end_b2 = data print("[MSG]", hex(id), hex(code), [*map(chr, load)], sep = ', ') opcode = IdMessage(code) if opcode is IdMessage.ACK: print(f'Device ID: {id}') elif opcode is IdMessage.PLS: print("Put:", load) await queue.put(load) else: print('Error') async def save_data(queue: asyncio.Queue): print('Saving data.') while True: data = await queue.get() queue.task_done() print('Get:', data) n, a_msb, a_lsb, *_ = data a = (a_msb << 8) | a_lsb with open('out.csv', 'a') as fdata: print(n, a, sep=',', file=fdata) async def main(): queue = asyncio.Queue() port = serial.Serial('/dev/ttyACM0', baudrate=115200) await asyncio.sleep(2) print('Sending Information.') port.write(bytearray([0x01])) await asyncio.sleep(0.01) t1 = asyncio.create_task(read(port, queue)) t2 = asyncio.create_task(save_data(queue)) await asyncio.gather(t1, t2) if __name__ == '__main__': asyncio.run(main()) Only read is executed. But adding: async def read(port: serial.Serial, queue: asyncio.Queue): print('Reading') while True: if port.in_waiting > 0: data = port.read_until(EOL) id, code, *load, end_b1, end_b2 = data print("[MSG]", hex(id), hex(code), [*map(chr, load)], sep = ', ') opcode = IdMessage(code) if opcode is IdMessage.ACK: print(f'Device ID: {id}') elif opcode is IdMessage.PLS: print("Put:", load) await queue.put(load) else: print('Error') await queue.join() ########## This # await asyncio.sleep(0.001) <- This works too Everything works correctly. Why do I have to add an await for the other thread to work? They are not concurrent? Could it be a problem with how the queue is synced? A: Your basic idea of using one thread per serial port is a possible approach. However, in your test program, the main thread does nothing but write the data to a file. It does not share execution with a second Task, so there is no need for asyncio. If your real program is indeed that simple, you don't need asyncio at all - just an ordinary multi-threaded program will do the job. But if the program needs to do something else in the main thread, like interact with the user, then a hybrid design with an asyncio main thread and a bunch of secondary threads might be the right solution. Since that was your question, I will describe how that can be done. Since the function you call to read data from the port ( data = port.read_until(EOL)) is a blocking I/O call, it cannot effectively share a thread with asyncio. The test program you wrote doesn't work because read_until blocks the main thread until data appears at the port, preventing the asyncio event loop from running. You put this call into a tight loop (while True: in read), so it has the effect of blocking all (or almost all) the activity in the other asyncio Tasks. 
It's true that the event loop will run briefly when it hits await queue.put(load), but the next time the read task gets control it will block again. You cannot use a single-threaded version of the program to develop and understand this. You've got to tackle the multi-threading problem right up front. A simple rule of thumb: don't mix blocking I/O calls with asyncio in the same thread. But asyncio has methods to handle the multithreading issues. Step 1. Convert read to a thread, not a Task. Pass the event loop to it as well as the queue. Use a threadsafe way of putting an item into the queue (see comment). Note that the function queue.put_nowait now runs in the main thread. def read(port: serial.Serial, queue: asyncio.Queue, loop): print('Reading') while True: if port.in_waiting > 0: data = port.read_until(EOL) id, code, *load, end_b1, end_b2 = data print("[MSG]", hex(id), hex(code), [*map(chr, load)], sep = ', ') opcode = IdMessage(code) if opcode is IdMessage.ACK: print(f'Device ID: {id}') elif opcode is IdMessage.PLS: print("Put:", load) loop.call_soon_threadsafe(queue.put_nowait, load) # Change this line else: print('Error') Step 2. Now modify main appropriately. See the inline comments. async def main(): queue = asyncio.Queue() port = serial.Serial('/dev/ttyACM0', baudrate=115200) await asyncio.sleep(2) # not necessary in my experience print('Sending Information.') port.write(bytearray([0x01])) # blocking, but fast. So OK await asyncio.sleep(0.01) # CHANGE HERE - Launch the read thread t1 = threading.Thread(target=read, args=(port, queue, asyncio.get_event_loop())) t1.start() await save_data(queue) # CHANGE HERE - no need for gather I can't test the script so please let me know if I've made a mistake. I don't see a need to change save_data. A: asyncio.Queue is for communication between coroutines that run in the same thread. To communicate between different threads, as in your use case, use threading.Queue.
How does async.queue synchronization work?
I have to build an application where my computer receives information from different serial ports. My plan is to use one thread per port to read the data and another common to all to parse and save. Communication between threads is done through an asyncio.Queue but I have a problem with my implementation. I have made a simple example with a single read thread and when I tried it I found a problem in my implementation. import asyncio import serial from enum import Enum EOL = b'\x17\x00' class IdMessage(Enum): ACK = 0xa0 PLS = 0xa1 async def read(port: serial.Serial, queue: asyncio.Queue): print('Reading') while True: if port.in_waiting > 0: data = port.read_until(EOL) id, code, *load, end_b1, end_b2 = data print("[MSG]", hex(id), hex(code), [*map(chr, load)], sep = ', ') opcode = IdMessage(code) if opcode is IdMessage.ACK: print(f'Device ID: {id}') elif opcode is IdMessage.PLS: print("Put:", load) await queue.put(load) else: print('Error') async def save_data(queue: asyncio.Queue): print('Saving data.') while True: data = await queue.get() queue.task_done() print('Get:', data) n, a_msb, a_lsb, *_ = data a = (a_msb << 8) | a_lsb with open('out.csv', 'a') as fdata: print(n, a, sep=',', file=fdata) async def main(): queue = asyncio.Queue() port = serial.Serial('/dev/ttyACM0', baudrate=115200) await asyncio.sleep(2) print('Sending Information.') port.write(bytearray([0x01])) await asyncio.sleep(0.01) t1 = asyncio.create_task(read(port, queue)) t2 = asyncio.create_task(save_data(queue)) await asyncio.gather(t1, t2) if __name__ == '__main__': asyncio.run(main()) Only read is executed. But adding: async def read(port: serial.Serial, queue: asyncio.Queue): print('Reading') while True: if port.in_waiting > 0: data = port.read_until(EOL) id, code, *load, end_b1, end_b2 = data print("[MSG]", hex(id), hex(code), [*map(chr, load)], sep = ', ') opcode = IdMessage(code) if opcode is IdMessage.ACK: print(f'Device ID: {id}') elif opcode is IdMessage.PLS: print("Put:", load) await queue.put(load) else: print('Error') await queue.join() ########## This # await asyncio.sleep(0.001) <- This works too Everything works correctly. Why do I have to add an await for the other thread to work? They are not concurrent? Could it be a problem with how the queue is synced?
[ "Your basic idea of using one thread per serial port is a possible approach. However, in your test program, the main thread does nothing but write the data to a file. It does not share execution with a second Task, so there is no need for asyncio. If your real program is indeed that simple, you don't need asyncio at all - just an ordinary multi-threaded program will do the job. But if the program needs to do something else in the main thread, like interact with the user, then a hybrid design with an asyncio main thread and a bunch of secondary threads might be the right solution. Since that was your question, I will describe how that can be done.\nSince the function you call to read data from the port ( data = port.read_until(EOL)) is a blocking I/O call, it cannot effectively share a thread with asyncio. The test program you wrote doesn't work because read_until blocks the main thread until data appears at the port, preventing the asyncio event loop from running. You put this call into a tight loop (while True: in read), so it has the effect of blocking all (or almost all) the activity in the other asyncio Tasks. It's true that the event loop will run briefly when it hits await queue.put(load), but the next time the read task gets control it will block again.\nYou cannot use a single-threaded version of the program to develop and understand this. You've got to tackle the multi-threading problem right up front. A simple rule of thumb: don't mix blocking I/O calls with asyncio in the same thread. But asyncio has methods to handle the multithreading issues.\nStep 1. Convert read to a thread, not a Task. Pass the event loop to it as well as the queue. Use a threadsafe way of putting an item into the queue (see comment). Note that the function queue.put_nowait now runs in the main thread.\ndef read(port: serial.Serial, queue: asyncio.Queue, loop):\n print('Reading')\n while True:\n if port.in_waiting > 0:\n data = port.read_until(EOL)\n id, code, *load, end_b1, end_b2 = data\n print(\"[MSG]\", hex(id), hex(code), [*map(chr, load)], sep = ', ')\n opcode = IdMessage(code)\n if opcode is IdMessage.ACK:\n print(f'Device ID: {id}')\n elif opcode is IdMessage.PLS:\n print(\"Put:\", load)\n loop.call_soon_threadsafe(queue.put_nowait, load) # Change this line\n else:\n print('Error')\n\nStep 2. Now modify main appropriately. See the inline comments.\nasync def main():\n queue = asyncio.Queue()\n port = serial.Serial('/dev/ttyACM0', baudrate=115200)\n await asyncio.sleep(2) # not necessary in my experience\n print('Sending Information.')\n port.write(bytearray([0x01])) # blocking, but fast. So OK\n await asyncio.sleep(0.01)\n # CHANGE HERE - Launch the read thread\n t1 = threading.Thread(target=read, args=(port, queue, asyncio.get_event_loop()))\n t1.start()\n await save_data(queue) # CHANGE HERE - no need for gather\n\nI can't test the script so please let me know if I've made a mistake.\nI don't see a need to change save_data.\n", "asyncio.Queue is for communication between coroutines that run in the same thread.\nTo communicate between different threads, as in your use case, use threading.Queue.\n" ]
[ 0, 0 ]
[]
[]
[ "multithreading", "python", "python_asyncio" ]
stackoverflow_0074508546_multithreading_python_python_asyncio.txt
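A stripped-down, runnable version of the thread-to-asyncio handoff described in the first answer, as a minimal sketch (no serial port; the producer just generates numbers, and None is used as an assumed end-of-stream sentinel):

    import asyncio
    import threading
    import time

    def producer(queue, loop):
        for i in range(5):
            time.sleep(0.2)                                  # stands in for blocking port.read_until()
            loop.call_soon_threadsafe(queue.put_nowait, i)   # threadsafe handoff into asyncio
        loop.call_soon_threadsafe(queue.put_nowait, None)    # sentinel: no more data

    async def consumer():
        queue = asyncio.Queue()
        loop = asyncio.get_running_loop()
        threading.Thread(target=producer, args=(queue, loop), daemon=True).start()
        while (item := await queue.get()) is not None:
            print('got', item)

    asyncio.run(consumer())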
Q: unexpected output from the init value and from the main function I am trying to do the binary tree inversion in Python. I did it in the following way. class Node: def __init__(self, data): self.left = None self.right = None self.data = data print(self.left) print(self.right) def PrintTree ( self ) : if self.left : self.left.PrintTree () print ( self.data, end= ' ' ) , if self.right : self.right.PrintTree () class Solution: ''' Function to invert the tree ''' def invertTree(self, root): if root == None: return root.left, root.right = self.invertTree(root.right),self.invertTree(root.left) return root if __name__ == '__main__': Tree = Node(10) Tree.left = Node(20) print(Tree.left.data) Tree.right = Node(30) print(Tree.right.data) Tree.left.left = Node(40) Tree.right.right = Node(50) print('Initial Tree :',end = ' ' ) Tree.PrintTree() Solution().invertTree(root=Tree) print('\nInverted Tree :', end=' ') Tree.PrintTree() What I did not expect is that when I print self.left and self.right they come out as None, but when I print them in the main function, they give the values (e.g. Tree.left.data or Tree.right.data). The result is as follows: None None None None 20 None None 30 None None None None If we are getting None, then what is the point of assigning tree.left and tree.right to a value? A: Remove the prints from __init__: they run right after self.left and self.right are set to None, before the main function assigns any child nodes, so they will always print None. (PrintTree already prints self.data during traversal.)
unexpected output from the init value and from the main function
I am trying to do the binary tree inversion in Python. I did it in the following way. class Node: def __init__(self, data): self.left = None self.right = None self.data = data print(self.left) print(self.right) def PrintTree ( self ) : if self.left : self.left.PrintTree () print ( self.data, end= ' ' ) , if self.right : self.right.PrintTree () class Solution: ''' Function to invert the tree ''' def invertTree(self, root): if root == None: return root.left, root.right = self.invertTree(root.right),self.invertTree(root.left) return root if __name__ == '__main__': Tree = Node(10) Tree.left = Node(20) print(Tree.left.data) Tree.right = Node(30) print(Tree.right.data) Tree.left.left = Node(40) Tree.right.right = Node(50) print('Initial Tree :',end = ' ' ) Tree.PrintTree() Solution().invertTree(root=Tree) print('\nInverted Tree :', end=' ') Tree.PrintTree() What I did not expect is that when I print self.left and self.right they come out as None, but when I print them in the main function, they give the values (e.g. Tree.left.data or Tree.right.data). The result is as follows: None None None None 20 None None 30 None None None None If we are getting None, then what is the point of assigning tree.left and tree.right to a value?
[ "add this to PrintTree function:\nprint (self.data) #Missing\n\nAnd also remove prints on init\n" ]
[ 0 ]
[]
[]
[ "algorithm", "data_structures", "init", "python" ]
stackoverflow_0074540317_algorithm_data_structures_init_python.txt
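For reference, a minimal sketch of the same classes with the stray prints removed from __init__ (where left and right are None by construction), which behaves as expected:

    class Node:
        def __init__(self, data):
            self.left = None
            self.right = None
            self.data = data

        def print_tree(self):            # in-order traversal
            if self.left:
                self.left.print_tree()
            print(self.data, end=' ')
            if self.right:
                self.right.print_tree()

    def invert(root):
        if root is None:
            return None
        root.left, root.right = invert(root.right), invert(root.left)
        return root

    tree = Node(10)
    tree.left, tree.right = Node(20), Node(30)
    tree.print_tree()    # 20 10 30
    print()
    invert(tree)
    tree.print_tree()    # 30 10 20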
Q: Overwrite a value in a pandas dataframe column based on a calculation function applied to it From the following DataFrame: worktime = 1440 person = [11,22,33,44,55] begin_date = '2019-10-01' shift= [1,2,3,1,2] pause = [90,0,85,70,0] occu = [60,0,40,20,0] time_u = [50,40,80,20,0] time_a = [84.5,0.0,10.5,47.7,0.0] time_p = 0 time_q = [35.9,69.1,0.0,0.0,84.4] df = pd.DataFrame({'date':pd.date_range(begin_date, periods=len(person)),'person':person,'shift':shift,'worktime':worktime,'pause':pause,'occu':occu, 'time_u':time_u,'time_a':time_a,'time_p ':time_p,'time_q':time_q,}) Output: date person shift worktime pause occu time_u time_a time_p time_q 0 2019-10-01 11 1 1440 90 60 50 84.5 0 35.9 1 2019-10-02 22 2 1440 0 0 40 0.0 0 69.1 2 2019-10-03 33 3 1440 85 40 80 10.5 0 0.0 3 2019-10-04 44 1 1440 70 20 20 47.7 0 0.0 4 2019-10-05 55 2 1440 0 0 0 0.0 0 84.4 I am looking for a suitable function that takes the already contained value of the columns and uses it in a calculation and then overwrites it with the result of the calculation. It concerns the columns time_u, time_a, time_p and time_q and should be applied according to the following principle: time_u = worktime - pause - occu - (existing value of time_u) time_a = (new value of time_u) - time_a time_p = (new value of time_a) - time_p time_q = (new value of time_p)- time_q Is there a possible function that could be used here? Using this formula manually, the output would look like this: date person shift worktime pause occu time_u time_a time_p time_q 0 2019-10-01 11 1 1440 90 60 1240 1155.5 1155.5 1119.6 1 2019-10-02 22 2 1440 0 0 1400 1400 1400 1330.9 2 2019-10-03 33 3 1440 85 40 1235 1224.5 1224.5 1224.5 3 2019-10-04 44 1 1440 70 20 1330 1282.3 1282.3 1282.3 4 2019-10-05 55 2 1440 0 0 1440 1440 1440 1355.6 Unfortunately, this task is way beyond my skill level, so any help in setting up the appropriate function would be greatly appreciated. Many thanks in advance A: You can simply apply the relationships you have supplied sequentially. Or are you looking for something else? By the way, you put an extra space at the end of 'time_p' df['time_u'] = df['worktime'] - df['pause'] - df['occu'] - df['time_u'] df['time_a'] = df['time_u'] - df['time_a'] df['time_p'] = df['time_a'] - df['time_p'] df['time_q'] = df['time_p'] - df['time_q']
Overwrite a value in a pandas dataframe column based on a calculation function applied to it
From the following DataFrame: worktime = 1440 person = [11,22,33,44,55] begin_date = '2019-10-01' shift= [1,2,3,1,2] pause = [90,0,85,70,0] occu = [60,0,40,20,0] time_u = [50,40,80,20,0] time_a = [84.5,0.0,10.5,47.7,0.0] time_p = 0 time_q = [35.9,69.1,0.0,0.0,84.4] df = pd.DataFrame({'date':pd.date_range(begin_date, periods=len(person)),'person':person,'shift':shift,'worktime':worktime,'pause':pause,'occu':occu, 'time_u':time_u,'time_a':time_a,'time_p ':time_p,'time_q':time_q,}) Output: date person shift worktime pause occu time_u time_a time_p time_q 0 2019-10-01 11 1 1440 90 60 50 84.5 0 35.9 1 2019-10-02 22 2 1440 0 0 40 0.0 0 69.1 2 2019-10-03 33 3 1440 85 40 80 10.5 0 0.0 3 2019-10-04 44 1 1440 70 20 20 47.7 0 0.0 4 2019-10-05 55 2 1440 0 0 0 0.0 0 84.4 I am looking for a suitable function that takes the already contained value of the columns and uses it in a calculation and then overwrites it with the result of the calculation. It concerns the columns time_u, time_a, time_p and time_q and should be applied according to the following principle: time_u = worktime - pause - occu - (existing value of time_u) time_a = (new value of time_u) - time_a time_p = (new value of time_a) - time_p time_q = (new value of time_p)- time_q Is there a possible function that could be used here? Using this formula manually, the output would look like this: date person shift worktime pause occu time_u time_a time_p time_q 0 2019-10-01 11 1 1440 90 60 1240 1155.5 1155.5 1119.6 1 2019-10-02 22 2 1440 0 0 1400 1400 1400 1330.9 2 2019-10-03 33 3 1440 85 40 1235 1224.5 1224.5 1224.5 3 2019-10-04 44 1 1440 70 20 1330 1282.3 1282.3 1282.3 4 2019-10-05 55 2 1440 0 0 1440 1440 1440 1355.6 Unfortunately, this task is way beyond my skill level, so any help in setting up the appropriate function would be greatly appreciated. Many thanks in advance
[ "You can simply apply the relationships you have supplied sequentially. Or are you looking for something else? By the way, you put an extra space at the end of 'time_p'\ndf['time_u'] = df['worktime'] - df['pause'] - df['occu'] - df['time_u']\ndf['time_a'] = df['time_u'] - df['time_a']\ndf['time_p'] = df['time_a'] - df['time_p']\ndf['time_q'] = df['time_p'] - df['time_q']\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "function", "overwrite", "pandas", "python" ]
stackoverflow_0074540220_dataframe_function_overwrite_pandas_python.txt
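The same chain can also be written without mutating df in place by using assign, where later keyword lambdas already see the columns produced by earlier ones. A minimal sketch against the question's frame (assuming the stray trailing space in the 'time_p ' column name has been fixed first):

    out = df.assign(
        time_u=lambda d: d['worktime'] - d['pause'] - d['occu'] - d['time_u'],
        time_a=lambda d: d['time_u'] - d['time_a'],   # sees the new time_u
        time_p=lambda d: d['time_a'] - d['time_p'],
        time_q=lambda d: d['time_p'] - d['time_q'],
    )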
Q: Getting "IndexError: list index out of range" and not sure why Beginner python programmer here. I understand what an "IndexError: list index out of range" error means, but in my case I'm not sure why I'm getting it. I have a script which goes to this webpage (https://www.basketball-reference.com/players/v/valanjo01/gamelog/2022) and in the "2021-22 Regular Season" table, goes through all of the rows and prints out the values in the "Rk" column. This is my code: team_game_number_element = [] team_game_number = [] for x in range(82): y = str(x + 1) team_game_number_element[x].append(driver.find_element_by_xpath('//th[@data-stat="ranker" and contains(., "' + y + '")]')) team_game_number[x].append(team_game_number_element[x].text) print(team_game_number[x]) What I was expecting: x starts at 0, y becomes "1". Then team_game_number_element[0] is assigned to the element with that xpath (specifically the one which contains the value of y). Then team_game_number[0] is assigned the value of the text of team_game_number_element[0]. A: The append() method appends/add an element to the end of the list. In your code you can remove the index before the append method. team_game_number_element = [] team_game_number = [] for index in range(82): y = str(index + 1) team_game_number_element.append('some text') team_game_number.append(team_game_number_element[index]) print(index, team_game_number[index], y)
Getting "IndexError: list index out of range" and not sure why
Beginner python programmer here. I understand what an "IndexError: list index out of range" error means, but in my case I'm not sure why I'm getting it. I have a script which goes to this webpage (https://www.basketball-reference.com/players/v/valanjo01/gamelog/2022) and in the "2021-22 Regular Season" table, goes through all of the rows and prints out the values in the "Rk" column. This is my code: team_game_number_element = [] team_game_number = [] for x in range(82): y = str(x + 1) team_game_number_element[x].append(driver.find_element_by_xpath('//th[@data-stat="ranker" and contains(., "' + y + '")]')) team_game_number[x].append(team_game_number_element[x].text) print(team_game_number[x]) What I was expecting: x starts at 0, y becomes "1". Then team_game_number_element[0] is assigned to the element with that xpath (specifically the one which contains the value of y). Then team_game_number[0] is assigned the value of the text of team_game_number_element[0].
[ "The append() method appends/add an element to the end of the list.\nIn your code you can remove the index before the append method.\n\n\nteam_game_number_element = []\nteam_game_number = []\n\nfor index in range(82):\n y = str(index + 1)\n team_game_number_element.append('some text')\n team_game_number.append(team_game_number_element[index])\n print(index, team_game_number[index], y)\n\n\n\n" ]
[ 1 ]
[]
[]
[ "indexing", "python", "selenium" ]
stackoverflow_0074540368_indexing_python_selenium.txt
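When the goal is simply every value in the column, one locator call that returns all matches avoids indexing altogether. A minimal sketch in Selenium 4 style (a local Chrome install is assumed, and the page structure at the URL may differ from this assumption):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get('https://www.basketball-reference.com/players/v/valanjo01/gamelog/2022')
    cells = driver.find_elements(By.XPATH, '//th[@data-stat="ranker"]')
    team_game_number = [cell.text for cell in cells]
    print(team_game_number)
    driver.quit()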
Q: Splitting a string into multiple strings using re.split() I have a string that I am trying to split into 2 strings using Regex to form a list. Below is the string: Input: 'TLSD_IBPDEq.' Output: ['', ''] Expected Output: ['TLSD_IBPD', 'Eq.'] Below is what I have tried but it is not working pattern = r"\S*Eq[\.,]" l = re.split(pattern,"TLSD_IBPDEq.") print(l) => ['', '']
Splitting a string into multiple strings using re.split()
I have a string that I am trying to split into 2 strings using Regex to form a list. Below is the string: Input: 'TLSD_IBPDEq.' Output: ['', ''] Expected Output: ['TLSD_IBPD', 'Eq.'] Below is what I have tried but it is not working pattern = r"\S*Eq[\.,]" l = re.split(pattern,"TLSD_IBPDEq.") print(l) => ['', '']
[ "If I understand, then you can apply the answer from this question. If you need to use a regex to solve this, then use a capture group and remove the last (empty) element, like this:\npattern = r\"(Eq\\.)$\"\nl = re.split(pattern, \"TLSD_IBPDEq.\")[:-1]\nprint(l) # => ['TLSD_IBPD', 'Eq.']\n\n", "You can do it without re:\ns = \"TLSD_IBPDEq.\"\n\nif s.endswith((\"Eq.\", \"Eq,\")):\n print([s[:-3], s[-3:]])\n\nPrints:\n['TLSD_IBPD', 'Eq.']\n\n\nSolution with re:\nimport re\n\ns = \"TLSD_IBPDEq.\"\n\nprint(list(re.search(r\"(\\S*)(Eq[.,])$\", s).groups()))\n\nPrints:\n['TLSD_IBPD', 'Eq.']\n\n" ]
[ 1, 1 ]
[]
[]
[ "python", "regex", "split" ]
stackoverflow_0074540445_python_regex_split.txt
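Since the goal is to cut one string into two known pieces rather than split on a delimiter, re.fullmatch with two capture groups is another close fit. A minimal sketch:

    import re

    s = 'TLSD_IBPDEq.'
    m = re.fullmatch(r'(\S*?)(Eq[.,])', s)
    if m:
        print(list(m.groups()))  # ['TLSD_IBPD', 'Eq.']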
Q: How to have a new list for every input I am trying to edit the input entered for each day. I have created an input_sales_day function that contains a number of products to enter for a day, an input_sales function that takes the number of products and days as parameters, where I think the problem lies, and a final function that just prints. I've tried using split, but I always get an error or it just prints each word instead. Here is the code, it prints: Product name: z1 quantity sold : 1 Product Name: z1 quantity sold : 1 Product name : z2 quantity sold : 2 Product Name: z2 quantity sold : 2 Product name : z3 quantity sold : 3 Product Name: z3 quantity sold: 3 Day 1 : ['1 z1', '1 z1'] Day 2 : ['1 z1', '1 z1', '2 z2', '2 z2'] Day 3: ['1 z1', '1 z1', '2 z2', '2 z2', '3 z3', '3 z3'] I try to print: Day 1: ['1 z1', '1 z1'] Day 2 : ['2 z2', '2 z2'] Day 3 : ['3 z3', '3 z3'] p = [] def input_sales_day(nbp): for i in range(nbp): np = input("Product Name: ") qv = input("quantity sold : ") p.append('{} {}'.format(qv, np)) return p def input_sales(nbp, d): sl = [] for j in range(d): n = input_sales_day(nbp) sl.append('day {} : {}'.format(j+1, n)) return sl def print_sales(sl): return '\n'.join(sl) print(print_sales(input_sales(2, 3))) A: All you need to do is make p a local variable of the input_sales_day function. If you do this, then p will be reset on every invocation. Like this: def input_sales_day(nbp): p = [] for i in range(nbp): np = input("Product Name: ") qv = input("quantity sold : ") p.append('{} {}'.format(qv, np)) return p def input_sales(nbp, d): sl = [] for j in range(d): n = input_sales_day(nbp) sl.append('day {} : {}'.format(j+1, n)) return sl def print_sales(sl): return '\n'.join(sl) print(print_sales(input_sales(2, 3))) A: You don't delete the old values from the p list, so when you go on to the next day, the data from the previous day is still in the list; you have to either change the way you print it or clear the list every day.
How to have a new list for every input
I am trying to edit the input entered for each day. I have created an input_sales_day function that contains a number of products to enter for a day, an input_sales function that takes the number of products and days as parameters, where I think the problem lies, and a final function that just prints. I've tried using split, but I always get an error or it just prints each word instead. Here is the code, it prints: Product name: z1 quantity sold : 1 Product Name: z1 quantity sold : 1 Product name : z2 quantity sold : 2 Product Name: z2 quantity sold : 2 Product name : z3 quantity sold : 3 Product Name: z3 quantity sold: 3 Day 1 : ['1 z1', '1 z1'] Day 2 : ['1 z1', '1 z1', '2 z2', '2 z2'] Day 3: ['1 z1', '1 z1', '2 z2', '2 z2', '3 z3', '3 z3'] I try to print: Day 1: ['1 z1', '1 z1'] Day 2 : ['2 z2', '2 z2'] Day 3 : ['3 z3', '3 z3'] p = [] def input_sales_day(nbp): for i in range(nbp): np = input("Product Name: ") qv = input("quantity sold : ") p.append('{} {}'.format(qv, np)) return p def input_sales(nbp, d): sl = [] for j in range(d): n = input_sales_day(nbp) sl.append('day {} : {}'.format(j+1, n)) return sl def print_sales(sl): return '\n'.join(sl) print(print_sales(input_sales(2, 3)))
[ "All you need to do is make p a local variable of the input_sales_day function. If you do this, then p will be reset on every invokation. Like this:\ndef input_sales_day(nbp):\n p = []\n for i in range(nbp):\n np = input(\"Product Name: \")\n qv = input(\"quantity sold : \")\n p.append('{} {}'.format(qv, np))\n return p\n\n\ndef input_sales(nbp, d):\n sl = []\n for j in range(d):\n n = input_sales_day(nbp)\n sl.append('day {} : {}'.format(j+1, n))\n return sl\n\n\ndef print_sales(sl):\n return '\\n'.join(sl)\n\n\nprint(print_sales(input_sales(2, 3)))\n\n", "You dont delete the old values from the p list, so when you go for the next day, the data from the previous day is still in the list, you have to change the way to print it or delete it every day.\n" ]
[ 4, 2 ]
[]
[]
[ "python" ]
stackoverflow_0074540525_python.txt
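An alternative shape that avoids module-level state entirely is to key the results by day, with a fresh list created inside each day's loop. A minimal interactive sketch:

    def input_sales(nbp, days):
        sales = {}
        for day in range(1, days + 1):
            day_list = []                      # new list per day
            for _ in range(nbp):
                name = input('Product Name: ')
                qty = input('quantity sold : ')
                day_list.append(f'{qty} {name}')
            sales[day] = day_list
        return sales

    for day, items in input_sales(2, 3).items():
        print(f'day {day} : {items}')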
Q: Unable to sleep execution within an API subscription callback I am making an API subscription to fetch real-time live data from one of the API providers. However, I only want to pull the data every few seconds (e.g. 5 seconds). I am using the below code snippet, however I am unable to implement the sleep or delay effectively. Can you please help to guide why the API is not adhering to the 5-second wait? api_ABC_connection=apiConnect(api_key="<api_key>") api_ABC_connection.ws_connect() abc_list=[] # Callback to receive ticks. def on_ticks(ticks): print('###################') print(datetime.now()) time.sleep(5) fetch_time_dict = {} fetch_time_dict['fetch_time'] = datetime.now() abc_list.append(fetch_time_dict) #print("Ticks: {}".format(ticks)) abc_list.append(ticks) print(datetime.now()) return abc_list # Assign the callbacks. api_ABC_connection.on_ticks = on_ticks # subscribe stocks feeds api_ABC_connection.subscribe_feeds(<feeds parameters>) A: Assuming api_ABC_connection is calling the callback function asynchronously, you can try adding a lock. Try this, it may work: lock = multiprocessing.Lock() def on_ticks(ticks): print('###################') print(datetime.now()) lock.acquire() time.sleep(5) lock.release() fetch_time_dict = {} fetch_time_dict['fetch_time'] = datetime.now() abc_list.append(fetch_time_dict) #print("Ticks: {}".format(ticks)) abc_list.append(ticks) print(datetime.now()) return abc_list You may want to put both acquire() and release() methods into another place. It depends on what behavior you actually expect.
Unable to sleep execution within an API subscription callback
I am making an API subscription to fetch real-time live data from one of the API providers. However, I only want to pull the data every few seconds (e.g. 5 seconds). I am using the below code snippet, however I am unable to implement the sleep or delay effectively. Can you please help to guide why the API is not adhering to the 5-second wait? api_ABC_connection=apiConnect(api_key="<api_key>") api_ABC_connection.ws_connect() abc_list=[] # Callback to receive ticks. def on_ticks(ticks): print('###################') print(datetime.now()) time.sleep(5) fetch_time_dict = {} fetch_time_dict['fetch_time'] = datetime.now() abc_list.append(fetch_time_dict) #print("Ticks: {}".format(ticks)) abc_list.append(ticks) print(datetime.now()) return abc_list # Assign the callbacks. api_ABC_connection.on_ticks = on_ticks # subscribe stocks feeds api_ABC_connection.subscribe_feeds(<feeds parameters>)
[ "Assuming api_ABC_connection is calling callback function asynchronously, you can try and add the lock. Try this, it may work:\nlock = multiprocessing.Lock()\n\ndef on_ticks(ticks):\n print('###################')\n print(datetime.now())\n lock.acquire()\n time.sleep(5)\n lock.release()\n fetch_time_dict = {}\n fetch_time_dict['fetch_time'] = datetime.now()\n abc_list.append(fetch_time_dict)\n #print(\"Ticks: {}\".format(ticks))\n abc_list.append(ticks)\n print(datetime.now())\n \n return abc_list\n\nYou may want to put both acquire() and release() methods into another place. It depends on what behavior you actually expect.\n" ]
[ 1 ]
[]
[]
[ "api", "callback", "pandas", "python", "sleep" ]
stackoverflow_0074442477_api_callback_pandas_python_sleep.txt
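Blocking inside the callback with time.sleep stalls whichever thread the API uses to deliver ticks; an alternative is to let every tick arrive and simply discard those that come inside the window. A minimal standalone sketch of that throttle (the loop at the bottom is a fake feed standing in for the real subscription):

    import time
    from datetime import datetime

    MIN_INTERVAL = 5.0       # seconds between kept ticks
    _last_kept = 0.0
    abc_list = []

    def on_ticks(ticks):
        global _last_kept
        now = time.monotonic()
        if now - _last_kept < MIN_INTERVAL:
            return                        # drop ticks inside the window, never sleep
        _last_kept = now
        abc_list.append({'fetch_time': datetime.now(), 'ticks': ticks})

    # stand-in for the real feed: one tick per second for 12 seconds
    for i in range(12):
        on_ticks({'n': i})
        time.sleep(1)
    print(len(abc_list), 'ticks kept')    # roughly 3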
Q: Asyncio: cancelling tasks and starting new ones when a signal flag is raised My program is supposed to read data forever from provider classes stored in PROVIDERS, defined in the config. Every second, it should check whether the config has changed and if so, stop all tasks, reload the config and create new tasks. The below code raises CancelledError because I'm cancelling my tasks. Should I really try/catch each of those to achieve my goals or is there a better pattern? async def main(config_file): load_config(config_file) tasks = [] config_task = asyncio.create_task(watch_config(config_file)) # checks every 1s if config changed and raises ConfigChangedSignal if so tasks.append(config_task) for asset_name, provider in PROVIDERS.items(): task = asyncio.create_task(provider.read_forever()) tasks.append(task) try: await asyncio.gather(*tasks, return_exceptions=False) except ConfigChangedSignal: # Restarting for task in asyncio.tasks.all_tasks(): task.cancel() # raises CancelledError await main(config_file) try: asyncio.run(main(config_file)) except KeyboardInterrupt: logger.debug("Ctrl-C pressed. Aborting") A: If you are on Python 3.11, your pattern maps directly to using asyncio.TaskGroup, the "successor" to asyncio.gather, which makes use of the new "exception groups". By default, if any task in the group raises an exception, all tasks in the group are cancelled: I played around with this snippet in the ipython console, and had run asyncio.run(main(False)) for no exception and asyncio.run(main(True)) for inducing an exception just to check the results: import asyncio async def doit(i, n, cancel=False): await asyncio.sleep(n) if cancel: raise RuntimeError() print(i, "done") async def main(cancel): try: async with asyncio.TaskGroup() as group: tasks = [group.create_task(doit(i, 2)) for i in range(10)] group.create_task(doit(42, 1, cancel=cancel)) group.create_task(doit(11, .5)) except Exception: pass await asyncio.sleep(3) Your code can accommodate that - Apart from the best practice for cancelling tasks, though, you are doing a recursive call to your main that, although it will work for most practical purposes, can make seasoned developers go "sigh" - and also can break in edge cases (it will fail after ~1000 cycles, for example), and leak resources. The correct way to do that is assembling a while loop, since Python function calls, even tail calls, won't clean up the resources in the calling scope: import asyncio ... async def main(config_file): while True: load_config(config_file) try: async with asyncio.TaskGroup() as tasks: tasks.create_task(watch_config(config_file)) # checks every 1s if config changed and raises ConfigChangedSignal if so for asset_name, provider in PROVIDERS.items(): tasks.create_task(provider.read_forever()) # all tasks are awaited at the end of the with block except *ConfigChangedSignal: # <- the new syntax in Python 3.11 # Restarting is just a matter of re-doing the while-loop # ... log.info("config changed") pass # any other exception won't be caught and will error, allowing one # to review what went wrong ... For Python 3.10, looping over the tasks and cancelling each seems alright, but you should look at that recursive call. If you don't want a while-loop inside your current main, refactor the code so that main itself is called from an outer while-loop async def main(config_file): while True: await inner_main(config_file) async def inner_main(config_file): load_config(config_file) # keep the existing body ... 
except ConfigChangedSignal: # Restarting for task in asyncio.tasks.all_tasks(): task.cancel() # raises CancelledError # await main call dropped from here A: jsbueno’s answer is appropriate. An easy alternative is to enclose the entire event loop in an outer “while”: async def main(config_file): load_config(config_file) tasks = [] for asset_name, provider in PROVIDERS.items(): task = asyncio.create_task(provider.read_forever()) tasks.append(task) try: await watch_config(config_file) except ConfigChangedSignal: pass try: while True: asyncio.run(main(config_file)) except KeyboardInterrupt: logger.debug("Ctrl-C pressed. Aborting")
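A minimal 3.10-compatible sketch of the restart loop the first answer recommends, assuming the asker's load_config, watch_config, PROVIDERS and ConfigChangedSignal exist as described:
import asyncio

async def main(config_file):
    while True:
        load_config(config_file)
        tasks = [asyncio.create_task(watch_config(config_file))]
        tasks += [asyncio.create_task(p.read_forever()) for p in PROVIDERS.values()]
        try:
            await asyncio.gather(*tasks)
        except ConfigChangedSignal:
            for task in tasks:
                task.cancel()
            # let the cancellations actually finish before the next cycle
            await asyncio.gather(*tasks, return_exceptions=True)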
Asyncio: cancelling tasks and starting new ones when a signal flag is raised
My program is supposed to read data forever from provider classes stored in PROVIDERS, defined in the config. Every second, it should check whether the config has changed and if so, stop all tasks, reload the config and create new tasks. The below code raises CancelledError because I'm cancelling my tasks. Should I really try/except each of those to achieve my goals, or is there a better pattern? async def main(config_file): load_config(config_file) tasks = [] config_task = asyncio.create_task(watch_config(config_file)) # checks every 1s if config changed and raises ConfigChangedSignal if so tasks.append(config_task) for asset_name, provider in PROVIDERS.items(): task = asyncio.create_task(provider.read_forever()) tasks.append(task) try: await asyncio.gather(*tasks, return_exceptions=False) except ConfigChangedSignal: # Restarting for task in asyncio.tasks.all_tasks(): task.cancel() # raises CancelledError await main(config_file) try: asyncio.run(main(config_file)) except KeyboardInterrupt: logger.debug("Ctrl-C pressed. Aborting")
[ "If you are on Python 3.11, your pattern maps directly to using asyncio.TaskGroup, the \"successor\" to asyncio.gather, which makes use of the new \"exception Groups\". By default, if any task in the group raises an exception, all tasks in the group are cancelled:\nI played around this snippet in the ipython console, and had run asyncio.run(main(False)) for no exception and asyncio.run(main(True)) for inducing an exception just to check the results:\nimport asyncio\n\nasync def doit(i, n, cancel=False):\n await asyncio.sleep(n)\n if cancel:\n raise RuntimeError()\n print(i, \"done\")\n\nasync def main(cancel):\n try:\n async with asyncio.TaskGroup() as group:\n tasks = [group.create_task(doit(i, 2)) for i in range(10)]\n group.create_task(doit(42, 1, cancel=cancel))\n group.create_task(doit(11, .5))\n except Exception:\n pass\n await asyncio.sleep(3)\n\n\nYour code can acommodate that -\nApart from the best practice for cancelling tasks, though, you are doing a recursive call to your main that, although will work for most practical purposes, can make seasoned developers go \"sigh\" - and also can break in edgecases, (it will fail after ~1000 cycles, for example), and leak resources.\nThe correct way to do that is assembling a while loop, since Python function calls, even tail calls, won't clean up the resources in the calling scope:\nimport asyncio\n...\n\n\nasync def main(config_file):\n while True:\n load_config(config_file)\n try:\n async with asyncio.TaskGroup() as tasks:\n tasks.create_task(watch_config(config_file)) # checks every 1s if config changed and raises ConfigChangedSignal if so\n\n for asset_name, provider in PROVIDERS.items():\n tasks.create_task.create_task(provider.read_forever())\n\n # all tasks are awaited at the end of the with block\n except *ConfigChangedSignal: # <- the new syntax in Python 3.11\n # Restarting is just a matter of re-doing the while-loop\n # ... log.info(\"config changed\")\n pass\n\n # any other exception won't be caught and will error, allowing one\n # to review what went wrong\n \n...\n\n\n\nFor Python 3.10, looping over the tasks and cancelling each seems alright, but you should look at that recursive call. If you don't want a while-loop inside your current main, refactor the code so that main itself is called from an outter while-loop\n\n\nasync def main(config_file):\n while True:\n await inner_main(config_file)\n\nasync def inner_main(config_file):\n load_config(config_file)\n\n # keep the existing body\n ...\n except ConfigChangedSignal:\n # Restarting\n for task in asyncio.tasks.all_tasks():\n task.cancel() # raises CancelledError\n # await main call dropped from here\n\n\n\n", "jsbueno’s answer is appropriate.\nAn easy alternative is to enclose the entire event loop in an outer “while”:\nasync def main(config_file):\n load_config(config_file)\n\n tasks = []\n for asset_name, provider in PROVIDERS.items():\n task = asyncio.create_task(provider.read_forever())\n tasks.append(task)\n\n try:\n await watch_config(config_file)\n except ConfigChangedSignal:\n pass\n\ntry:\n while True:\n asyncio.run(main(config_file))\nexcept KeyboardInterrupt:\n logger.debug(\"Ctrl-C pressed. Aborting\")\n\n" ]
[ 2, 0 ]
[]
[]
[ "python", "python_asyncio" ]
stackoverflow_0074517438_python_python_asyncio.txt
Q: concurrent.futures captures all exceptions I have coded myself into an interesting situation that I don't know how to get out of. I have a number of functions I am running in a number of parallel threads, but when an exception is thrown within one of the threads the code continues on with no notification with concurrent.futures.ThreadPoolExecutor() as executor: # queue up the threads for parallel execution futureHost.update({executor.submit(infoCatalina, host): host for host in onlineHosts}) futureHost.update({executor.submit(infoVersion, host): host for host in onlineHosts}) futureHost.update({executor.submit(infoMount, host): host for host in onlineHosts}) # go through the threads as they complete for _ in concurrent.futures.as_completed(futureHost): x = progress(x, progBarLength) If I put 1/0 to throw a ZeroDivisionError before or after the infoVersion line the correct error is thrown. 1/0 # will throw an error futureHost.update({executor.submit(infoVersion, host): host for host in onlineHosts}) 1/0 # will throw an error However, if I put 1/0 within infoVersion() I get no message when the error is thrown and the function exits. def infoVersion(host): print('This statement prints') 1/0 # does not throw an error print('This statement does not print') I have to put messages as above to find out where my code is dying. How can I get errors to show up in my code again? A: Exceptions are captured by the future object: for future in concurrent.futures.as_completed(futureHost): if future.exception() is not None: print(f'ERROR: {future}: {future.exception()}') continue
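If you would rather have the worker's traceback re-raised in the main thread than inspect it, calling result() on each completed future re-raises whatever the callable threw — a minimal sketch assuming the futureHost dict from the question:
import concurrent.futures

for future in concurrent.futures.as_completed(futureHost):
    host = futureHost[future]
    try:
        future.result()  # re-raises any exception raised inside the worker
    except Exception as exc:
        print(f"{host} failed: {exc!r}")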
concurrent.futures captures all exceptions
I have coded myself into an interesting situation that I don't know how to get out of. I have a number of functions I am running in a number of parallel threads, but when an exception is thrown within one of the threads the code continues on with no notification with concurrent.futures.ThreadPoolExecutor() as executor: # queue up the threads for parallel execution futureHost.update({executor.submit(infoCatalina, host): host for host in onlineHosts}) futureHost.update({executor.submit(infoVersion, host): host for host in onlineHosts}) futureHost.update({executor.submit(infoMount, host): host for host in onlineHosts}) # go through the threads as they complete for _ in concurrent.futures.as_completed(futureHost): x = progress(x, progBarLength) If I put 1/0 to throw a ZeroDivisionError before or after the infoVersion line the correct error is thrown. 1/0 # will throw an error futureHost.update({executor.submit(infoVersion, host): host for host in onlineHosts}) 1/0 # will throw an error However, if I put 1/0 within infoVersion() I get no message when the error is thrown and the function exits. def infoVersion(host): print('This statement prints') 1/0 # does not throw an error print('This statement does not print') I have to put messages as above to find out where my code is dying. How can I get errors to show up in my code again?
[ "Exceptions are captured by the future object:\n for future in concurrent.futures.as_completed(futureHost):\n if future.exception() is not None:\n print(f'ERROR: {future}: {future.exception()}')\n continue\n\n" ]
[ 0 ]
[]
[]
[ "concurrent.futures", "error_handling", "exception", "python" ]
stackoverflow_0074188604_concurrent.futures_error_handling_exception_python.txt
Q: How to measure distance between camera and an object? I'm an OpenCV beginner, just wondering which way would be the best to measure the distance from the camera to an object in a given video. Every tutorial I encountered so far teaches calibrating the camera first and then undistorting the camera lens. But in this case I don't use my own camera, so is it necessary for me to use these functions? In addition, I have some data of the recording camera, such as: (fx,fy) = focal length (cx,cy) = principal point (width,height) = image shape radial = radial distortion (t1,t2) = tangential distortion. A: Usually, one does measure the distance between a single camera and an object with prior knowledge of the object. It could be the dimensions of a planar pattern or the 3D positions of edges that can easily be detected automatically using image analysis. The computation of the position of the object with respect to the camera is usually done by solving a PnP problem. https://en.m.wikipedia.org/wiki/Perspective-n-Point Solving the PnP equations does require the camera parameters (at least the intrinsic matrix, and ideally the distortion coefficients for more accuracy). These parameters can be estimated by calibrating your camera. OpenCV provides a handful of functions that you can use to calibrate your monocular camera. Alternatively, you can use a platform like CalibPro to compute these parameters for you. [Disclaimer] I am the founder of Calibpro. I am happy to help you use the platform and I'd love your feedback on your experience using it.
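A minimal sketch of the PnP approach from the answer above. The 3D object points and their matching 2D pixel detections are assumed to be known for the object in the video; fx, fy, cx, cy and the distortion values are the ones listed in the question, placed in OpenCV's (k1, k2, t1, t2) order:
import numpy as np
import cv2

# intrinsics of the recording camera (assumed known, per the question)
K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]], dtype=np.float64)
dist = np.array([k1, k2, t1, t2], dtype=np.float64)  # radial then tangential

# object_points: Nx3 known 3D points on the object, in the object's own frame
# image_points:  Nx2 matching pixel coordinates detected in the video frame
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    distance = np.linalg.norm(tvec)  # camera-to-object distance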
How to measure distance between camera and an object?
I'm an OpenCV beginner, just wondering which way would be the best to measure the distance from the camera to an object in a given video. Every tutorial I encountered so far teaches calibrating the camera first and then undistorting the camera lens. But in this case I don't use my own camera, so is it necessary for me to use these functions? In addition, I have some data of the recording camera, such as: (fx,fy) = focal length (cx,cy) = principal point (width,height) = image shape radial = radial distortion (t1,t2) = tangential distortion.
[ "Usually, one does measure the distance between a single camera and an object with prior knowledge of the object. It could be the dimensions of a planar pattern or the 3D positions of edges that can easily be detected automatically using image analysis.\nThe computation of the position of the object with respect to the camera is usually done by solving a PnP problem.\nhttps://en.m.wikipedia.org/wiki/Perspective-n-Point\nSolving the PnP equations do require the camera parameters (at least the intrinsic matrix, and ideally the distortion coefficients for more accuracy).\nThese parameters can be estimated by calibrating your camera. OpenCV provides a handful of functions that you can use to calibrate your monocular camera. Alternatively, you can use a platform like CalibPro to compute these parameters for you.\n[Disclaimer] I am the founder of Calibpro. I am happy to help you use the platform and I'd love your feedbacks on your experience using it.\n" ]
[ 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074097066_opencv_python.txt
Q: Exception has occurred: TimeoutError exception: no description I am specifically using python version 3.10 to run a websocket (or any long asyncio process) for a specified period of time which is covered in the python docs. The .wait_for() method looks like the correct solution. I run this code (from the docs): import asyncio async def eternity(): # Sleep for one hour await asyncio.sleep(3600) print('yay!') async def main(): # Wait for at most 1 second print('wait for at most 1 second...') try: await asyncio.wait_for(eternity(), timeout=1.0) except TimeoutError: print('timeout!') asyncio.run(main()) The docs are here: https://docs.python.org/3/library/asyncio-task.html?highlight=wait_for#asyncio.wait_for However, I get the following error: Exception has occurred: TimeoutError exception: no description ...basically, the TimeoutError exception is not handled as expected. My research shows that others have struggled with errors, for example here: Handling a timeout error in Python sockets but the fixes are either aged (not relevant for 3.10) or do not work. I also notice that the docs specify this "Changed in version 3.10: Removed the loop parameter". So i am only interested in version 3.10 and above. So I am wondering how to get the min reproducible example working or what i have done wrong please ? A: You can remove the TimeoutError so it can jump down to the print('timeout') or can use this example to output the error except Exception as exc: print(f'The exception: {exc!r}') A: There are a number of TimeoutErrors in Python. Replace except TimeoutError with except asyncio.TimeoutError and you’ll be good. UPDATE with full example: import asyncio async def eternity(): # Sleep for one hour await asyncio.sleep(3600) print('yay!') async def main(): # Wait for at most 1 second print('wait for at most 1 second...') try: await asyncio.wait_for(eternity(), timeout=1.0) except asyncio.TimeoutError: print('timeout!') asyncio.run(main()) Apparently the example in asyncio’s docs is wrong (or at least misleading). If you look at CPython’s source code, asyncio.TimeoutError is a different exception from TimeoutError up to Python 3.10, and was changed to an alias to TimeoutError in 3.11.
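One way to see the version difference the second answer describes — on 3.10 the two classes are distinct, on 3.11+ they are the same object — plus a handler that is portable across both, assuming eternity() from the question:
import asyncio

print(asyncio.TimeoutError is TimeoutError)  # False on 3.10, True on 3.11+

async def main():
    try:
        await asyncio.wait_for(eternity(), timeout=1.0)
    except (asyncio.TimeoutError, TimeoutError):  # covers 3.10 and 3.11+
        print('timeout!')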
Exception has occurred: TimeoutError exception: no description
I am specifically using python version 3.10 to run a websocket (or any long asyncio process) for a specified period of time which is covered in the python docs. The .wait_for() method looks like the correct solution. I run this code (from the docs): import asyncio async def eternity(): # Sleep for one hour await asyncio.sleep(3600) print('yay!') async def main(): # Wait for at most 1 second print('wait for at most 1 second...') try: await asyncio.wait_for(eternity(), timeout=1.0) except TimeoutError: print('timeout!') asyncio.run(main()) The docs are here: https://docs.python.org/3/library/asyncio-task.html?highlight=wait_for#asyncio.wait_for However, I get the following error: Exception has occurred: TimeoutError exception: no description ...basically, the TimeoutError exception is not handled as expected. My research shows that others have struggled with errors, for example here: Handling a timeout error in Python sockets but the fixes are either aged (not relevant for 3.10) or do not work. I also notice that the docs specify this "Changed in version 3.10: Removed the loop parameter". So i am only interested in version 3.10 and above. So I am wondering how to get the min reproducible example working or what i have done wrong please ?
[ "You can remove the TimeoutError so it can jump down to the print('timeout') or can use this example to output the error\nexcept Exception as exc:\n print(f'The exception: {exc!r}')\n\n", "There are a number of TimeoutErrors in Python.\nReplace except TimeoutError with except asyncio.TimeoutError and you’ll be good.\nUPDATE with full example:\nimport asyncio\n\nasync def eternity():\n # Sleep for one hour\n await asyncio.sleep(3600)\n print('yay!')\n\nasync def main():\n # Wait for at most 1 second\n print('wait for at most 1 second...')\n try:\n await asyncio.wait_for(eternity(), timeout=1.0)\n except asyncio.TimeoutError: \n print('timeout!')\n\nasyncio.run(main())\n\nApparently the example in asyncio’s docs is wrong (or at least misleading). If you look at CPython’s source code, asyncio.TimeoutError is a different exception from TimeoutError up to Python 3.10, and was changed to an alias to TimeoutError in 3.11.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_asyncio", "timeout", "timeouterror", "websocket" ]
stackoverflow_0074510354_python_python_asyncio_timeout_timeouterror_websocket.txt
Q: ERROR: Could not build wheels for spacy, which is required to install pyproject.toml-based projects Hi guys, I am trying to install spacy model == 2.3.5 but I am getting this error, please help me! A: Try using python 3.6-3.9 instead, where there are binary wheels for pip install to use instead of having to compile from source. (This is a conflict with python 3.10 and some generated .cpp files in the source package. Python 3.10 wasn't released yet when this version was published.) A: I had a similar error while executing pip install -r requirements.txt but for the aiohttp module: socket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects Just in case, I will leave the solution to my error here. This error is specific to Python 3.11. On Python 3.10.6 the installation went fine. To solve it I needed to update requirements.txt. Versions of modules that do not work with Python 3.11: aiohttp==3.8.1 yarl==1.4.2 frozenlist==1.3.0 Working versions: aiohttp==3.8.2 yarl==1.8.1 frozenlist==1.3.1 Links to the corresponding issues with fixes: https://github.com/aio-libs/aiohttp/issues/6600 https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305 A: Try using: !pip install spacy==2.3.5 Do not put a space between == and 2.3.5. If you put any space between the equals sign and the version, it may give an error.
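A short sketch combining the first and third answers, assuming a Python 3.9 interpreter is available on PATH as python3.9 (any of 3.6-3.9 with prebuilt wheels should work the same way):
# assumed: python3.9 installed and on PATH
python3.9 -m venv .venv
.venv/bin/pip install spacy==2.3.5   # no spaces around ==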
ERROR: Could not build wheels for spacy, which is required to install pyproject.toml-based projects
Hi Guys, I am trying to install spacy model == 2.3.5 but I am getting this error, please help me!
[ "Try using python 3.6-3.9 instead, where there are binary wheels for pip install to use instead of having to compile from source.\n(This is a conflict with python 3.10 and some generated .cpp files in the source package. Python 3.10 wasn't released yet when this version was published.)\n", "I had the similar error while executing pip install -r requirements.txt but for aiohttp module:\nsocket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o\naiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found\n#include \"longintrepr.h\" \n ^~~~~~~ 1 error generated.\nerror: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' \nfailed with exit code 1\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nERROR: Failed building wheel for aiohttp\nFailed to build aiohttp\nERROR: Could not build wheels for aiohttp, which is required to install\npyproject.toml-based projects\n\nJust in case I will leave here solution to my error. This error is specific to Python 3.11 version. On Python with 3.10.6 version installation went fine.\nTo solve it I needed to update requirements.txt.\nNot working versions of modules with Python 3.11:\naiohttp==3.8.1\nyarl==1.4.2\nfrozenlist==1.3.0\n\nWorking versions:\naiohttp==3.8.2\nyarl==1.8.1\nfrozenlist==1.3.1\n\nLinks to the corresponding issues with fixes:\n\nhttps://github.com/aio-libs/aiohttp/issues/6600\nhttps://github.com/aio-libs/yarl/issues/706\nhttps://github.com/aio-libs/frozenlist/issues/305\n\n", "Try using:\n!pip install spacy==2.3.5\nDo not give space between == and 2.3.5\nIf you give any space between equal sign and version, it may give error.\n" ]
[ 4, 1, 0 ]
[]
[]
[ "nlp", "python", "spacy" ]
stackoverflow_0071512301_nlp_python_spacy.txt
Q: Is there a faster optimization algorithm than SLSQP for my problem? I have a medium sized optimization problem that I have used scipy optimize with the SLSQP method to solve. I am wondering if there is a faster algorithm? Here is my code: from scipy.optimize import minimize, Bounds import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(500,5),columns=['pred','var1','var2','var3','weights']) def obj(x,df=df): return -(x*df['pred']).sum() def c1(x,df=df): return 5-abs((x*df['var1']).sum()) def c2(x,df=df): return 5-abs((x*df['var2']).sum()) def c3(x,df=df): return 5-abs((x*df['var3']).sum()) sol = minimize( fun=obj, x0=df['weights'], method='SLSQP', bounds=Bounds(-0.03, 0.03), constraints=[{'type': 'ineq', 'fun': c1},{'type': 'ineq', 'fun': c2},{'type': 'ineq', 'fun': c3}], options={'maxiter': 1000}) As you can see there are three constraints (sometimes 4 or 5) and the objective is to optimize about 500 weights. There are also bounds. The dataframe df is dense, I don't think there is a single zero. Is the SLSQP method the fastest way at tackling this problem? I am using google colab. A: After setting a random seed by np.random.seed(1) at the top of your code snippet in order to reproduce the results, we can time your code snippet: In [15]: def foo1(): ...: sol = minimize( ...: fun=obj, ...: x0=df['weights'], ...: method='SLSQP', ...: bounds=Bounds(-0.03, 0.03), ...: constraints=[{'type': 'ineq', 'fun': c1},{'type': 'ineq', 'fun': c2},{'type': 'ineq', 'fun': c3}], ...: options={'maxiter': 1000}) ...: return sol ...: In [16]: %timeit foo1() 10.7 s ± 299 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) As already mentioned in the comments, your constraints can be written as linear functions which turns your optimization problem into a linear optimization problem (LP) which can be solved by means of scipy.optimize.linprog. As a rule of thumb: If your problem can be written as an LP instead of an NLP, pursue the LP approach as it's much faster to solve in most cases. Your constraints basically read as | v.T @ x | <= 5 which is simply the absolute value of the dot product (scalar product) of two vectors v and x. Here, v.T denotes the transpose of the vector v and @ denotes python's matrix multiplication operator. It's easy to see that | v1.T @ x | <= 5 <=> -5 <= v1.T @ x <= 5 | v2.T @ x | <= 5 <=> -5 <= v2.T @ x <= 5 | v3.T @ x | <= 5 <=> -5 <= v3.T @ x <= 5 And hence, your LP reads: min c^T @ x s.t. v1.T @ x <= 5 -v1.T @ x <= 5 v2.T @ x <= 5 -v2.T @ x <= 5 v3.T @ x <= 5 -v3.T @ x <= 5 -0.03 <= x <= 0.03 This can be solved as follows: from scipy.optimize import linprog c = -1*df['pred'].values v1 = df['var1'].values v2 = df['var2'].values v3 = df['var3'].values A_ub = np.block([v1, -v1, v2, -v2, v3, -v3]).reshape(6, -1) b_ub = np.array([5, 5, 5, 5, 5, 5]) bounds = [(-0.03, 0.03)]*c.size res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=None, b_eq=None, bounds=bounds) Timing this approach yields In [17]: %timeit res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=None, b_eq=None, bounds=bounds) 2.32 ms ± 163 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) which is roughly 4300x faster.
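An optional cross-check, assuming both sol (the original SLSQP run) and res (the linprog run) from above are available: the two objective values should agree up to solver tolerance if the LP reformulation is equivalent.
import numpy as np

# sol.fun is the SLSQP minimum of -pred @ x; res.fun is the LP minimum of c @ x
print(sol.fun, res.fun)
print(np.isclose(sol.fun, res.fun, rtol=1e-3))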
Is there a faster optimization algorithm than SLSQP for my problem?
I have a medium sized optimization problem that I have used scipy optimize with the SLSQP method to solve. I am wondering if there is a faster algorithm? Here is my code: from scipy.optimize import minimize, Bounds import pandas as pd import numpy as np df = pd.DataFrame(np.random.rand(500,5),columns=['pred','var1','var2','var3','weights']) def obj(x,df=df): return -(x*df['pred']).sum() def c1(x,df=df): return 5-abs((x*df['var1']).sum()) def c2(x,df=df): return 5-abs((x*df['var2']).sum()) def c3(x,df=df): return 5-abs((x*df['var3']).sum()) sol = minimize( fun=obj, x0=df['weights'], method='SLSQP', bounds=Bounds(-0.03, 0.03), constraints=[{'type': 'ineq', 'fun': c1},{'type': 'ineq', 'fun': c2},{'type': 'ineq', 'fun': c3}], options={'maxiter': 1000}) As you can see there are three constraints (sometimes 4 or 5) and the objective is to optimize about 500 weights. There are also bounds. The dataframe df is dense, I don't think there is a single zero. Is the SLSQP method the fastest way at tackling this problem? I am using google colab.
[ "After setting a random seed by np.random.seed(1) at the top of your code snippet in order to reproduce the results, we can time your code snippet:\nIn [15]: def foo1():\n ...: sol = minimize(\n ...: fun=obj,\n ...: x0=df['weights'],\n ...: method='SLSQP',\n ...: bounds=Bounds(-0.03, 0.03),\n ...: constraints=[{'type': 'ineq', 'fun': c1},{'type': 'ineq', 'fun': c2},{'type': 'ineq', 'fun': c3}],\n ...: options={'maxiter': 1000})\n ...: return sol\n ...:\n\nIn [16]: %timeit foo1()\n10.7 s ± 299 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nAs already mentioned in the comments, your constraints can be written as linear functions which turns your optimization problem into a linear optimization problem (LP) which can be solved by means of scipy.optimize.linprog. As a rule of thumb: If your problem can be written as an LP instead of an NLP, pursue the LP approach as it's much faster to solve in most cases.\nYour constraints basically read as | v.T @ x | <= 5 which is simply the absolute value of the dot product (scalar product) of two vectors v and x. Here, v.T denotes the transpose of the vector v and @ denotes python's matrix multiplication operator. It's easy to see that\n| v1.T @ x | <= 5 <=> -5 <= v1.T @ x <= 5\n| v2.T @ x | <= 5 <=> -5 <= v2.T @ x <= 5\n| v3.T @ x | <= 5 <=> -5 <= v3.T @ x <= 5\n\nAnd hence, your LP reads:\nmin c^T @ x\n\ns.t. \n\n v1.T @ x <= 5\n-v1.T @ x <= 5\n v2.T @ x <= 5\n-v2.T @ x <= 5\n v3.T @ x <= 5\n-v3.T @ x <= 5\n\n-0.03 <= x <= 0.03\n\nThis can be solved as follows:\nfrom scipy.optimize import linprog\n\nc = -1*df['pred'].values\nv1 = df['var1'].values\nv2 = df['var2'].values\nv3 = df['var3'].values\n\nA_ub = np.block([v1, -v1, v2, -v2, v3, -v3]).reshape(6, -1)\nb_ub = np.array([5, 5, 5, 5, 5, 5])\nbounds = [(-0.03, 0.03)]*c.size\n\nres = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=None, b_eq=None, bounds=bounds)\n\nTiming this approach yields\nIn [17]: %timeit res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=None, b_eq=None, bounds=bounds)\n2.32 ms ± 163 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\nwhich is roughly 4300x faster.\n" ]
[ 1 ]
[]
[]
[ "optimization", "python", "scipy_optimize" ]
stackoverflow_0074540008_optimization_python_scipy_optimize.txt
Q: Python - Random.sample from a range with excluded values (array) I am currently using the random.sample function to extract individuals from a population. ex: n = range(1,1501) result = random.sample(n, 500) print(result) in this example I draw 500 persons among 1500. So far, so good. Now, I want to go further and launch a draw with a list of excluded people. exclude = [122,506,1100,56,76,1301] So I want to get a list of people (1494 persons) while excluding this array (exclude). I must confess that I am stuck on this question. Do you have an idea? Thank you in advance! I am learning the Python language. I do a lot of exercises to train. Nevertheless, I am blocked on this one. A: exclude = {122, 506, 1100, 56, 76, 1301} result = random.sample([k for k in range(1, 1501) if k not in exclude], 500) # check assert set(result).isdisjoint(exclude) Marginally faster (but a bit more convoluted for my taste): result = random.sample(list(set(range(1, 1501)).difference(exclude)), 500) A: import random exclude = {1, 6} result = random.sample(list(set(range(1, 21)).difference(exclude)), 18) print(result) Thank you for your reply. It works perfectly with this example!
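Both answers materialize the whole candidate range, which is fine at 1500 elements; for much larger ranges, a rejection loop is a possible alternative sketch (not from the answers), assuming the excluded set stays small relative to the range:
import random

def sample_excluding(n, k, exclude):
    exclude = set(exclude)
    picked = set()
    while len(picked) < k:
        x = random.randrange(1, n + 1)
        if x not in exclude and x not in picked:
            picked.add(x)
    return list(picked)

print(sample_excluding(1500, 500, [122, 506, 1100, 56, 76, 1301]))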
Python - Random.sample from a range with excluded values (array)
I am currently using the random.sample function to extract individuals from a population. ex: n = range(1,1501) result = random.sample(n, 500) print(result) in this example I draw 500 persons among 1500. So far, so good. Now, I want to go further and launch a draw with a list of excluded people. exclude = [122,506,1100,56,76,1301] So I want to get a list of people (1494 persons) while excluding this array (exclude). I must confess that I am stuck on this question. Do you have an idea? Thank you in advance! I am learning the Python language. I do a lot of exercises to train. Nevertheless, I am blocked on this one.
[ "exclude = {122, 506, 1100, 56, 76, 1301}\nresult = random.sample([k for k in range(1, 1501) if k not in exclude], 500)\n\n# check\nassert set(result).isdisjoint(exclude)\n\nMarginally faster (but a bit more convoluted for my taste):\nresult = random.sample(list(set(range(1, 1501)).difference(exclude)), 500)\n\n", "import random\n\nexclude = {1, 6}\n\nresult = random.sample(list(set(range(1, 21)).difference(exclude)), 18)\n\nprint(result)\n\nThank you for your reply. It works perfectly with this example!\n" ]
[ 1, 1 ]
[]
[]
[ "python", "random" ]
stackoverflow_0074540504_python_random.txt
Q: How can you retrieve webpages based on URLs and convert each to a beautifulsoup object So I am scraping a website I was able to get all the information thanks to Andrej Kesely, I was also able to syntheses URLs that downloaded the first 50 pages, however now I want to retrieve the webpages based on the URLs and convert them into a beautifulsoup and I also want to retrieve all the information and the URL(href) to access the detailed car information. I am new to python and website scraping so I really don't know where to start but here is the code for that syntheses the first 50 pages of the website from bs4 import BeautifulSoup import requests import os for i in range(1, 50): response = requests.get(f"https://jammer.ie/used-cars?page={i}&per-page=12") with open(f"example{i}.html", "w" , encoding="utf-8") as fp: fp.write(response.text) urls = [] prices = [] makes = [] # for loop index by i with open(f"example{i}.html", "r") as fp: webpage = fp.read() soup = BeautifulSoup(webpage, "html.parser") tables = soup.find_all('div', {"class": "span-9 right-col"}) len(tables[0].contents) for it in tables[0].contents[1:]: if it == "\n": continue for jt in it.findall('div', class_="col-lg-4 col-md-12 car-listing"): price = jt.find('p', class_="price").text make = jt.find('h6', class_="car-make").text url = f"https://jammer.ie/used-cars?page={i}&per-page=12" urls.append(url) prices I know I must make a beautifulsoup object but I really don't know what to do if you could please explain what to do it would be great thanks I want to have it where I'm able to Retrieve the webpages based on these URLs and convert each into a beautifulsoup object and Retrieve Car Manufacturing Year, Engine, Price, Dealer information (if it is available), and the URL (href) to access the detailed car information. A: To iterate over multiple pages you can do: import requests import pandas as pd from bs4 import BeautifulSoup url = "https://jammer.ie/used-cars?page={}&per-page=12" all_data = [] for page in range(1, 3): # <-- increase number of pages here soup = BeautifulSoup(requests.get(url.format(page)).text, "html.parser") for car in soup.select(".car"): info = car.select_one(".top-info").get_text(strip=True, separator="|") make, model, year, price = info.split("|") dealer_name = car.select_one(".dealer-name h6").get_text( strip=True, separator=" " ) address = car.select_one(".address").get_text(strip=True) features = {} for feature in car.select(".car--features li"): k = feature.img["src"].split("/")[-1].split(".")[0] v = feature.span.text features[f"feature_{k}"] = v all_data.append( { "make": make, "model": model, "year": year, "price": price, "dealer_name": dealer_name, "address": address, "url": "https://jammer.ie" + car.select_one("a[href*=vehicle]")["href"], **features, } ) df = pd.DataFrame(all_data) # prints sample data to screen: print(df.tail().to_markdown(index=False)) # saves all data to CSV df.to_csv("data.csv", index=False) Prints: make model year price dealer_name address url feature_speed feature_engine feature_transmission feature_owner feature_door-icon1 feature_petrol5 feature_paint feature_hatchback Skoda Fabia 2014 €7,500 Blue Diamond Cars Co. Cork https://jammer.ie/vehicle/165691-skoda-fabia-2014 128627 miles 1.2 litres Manual 2 previous owners 4 doors Petrol Beige Estate Ford Kuga 2016 €16,750 Ballincollig Motor Company / Trident Co. 
Cork https://jammer.ie/vehicle/165690-ford-kuga-2016 99000 miles 2.0 litres Manual 1 previous owners 5 doors Diesel Grey MPV Hyundai i40 2015 Price on application Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/165689-hyundai-i40-2015 98000 miles 1.7 litres Manual 1 previous owners 5 doors Diesel Black Estate Dacia Sandero 2016 €9,950 Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/165688-dacia-sandero-2016 43000 miles nan Manual 3 previous owners 4 doors Petrol Blue Hatchback Ford Fiesta 2016 Price on application Ballincollig Motor Company / Trident Co. Cork https://jammer.ie/vehicle/165687-ford-fiesta-2016 45000 miles 1.0 litres Manual 2 previous owners 5 doors Petrol Silver Hatchback and saves data.csv (screenshot from LibreOffice):
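For the part of the title about converting already-saved pages, a minimal sketch that re-reads the example{i}.html files written by the asker's download loop and parses each one into a BeautifulSoup object, ready for the same selectors the answer uses:
from bs4 import BeautifulSoup

soups = []
for i in range(1, 50):
    with open(f"example{i}.html", encoding="utf-8") as fp:
        soups.append(BeautifulSoup(fp.read(), "html.parser"))

# each soup can then be scraped exactly as in the answer above, e.g.:
for soup in soups:
    for car in soup.select(".car"):
        pass  # extract fields here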
How can you retrieve webpages based on URLs and convert each to a beautifulsoup object
So I am scraping a website. I was able to get all the information thanks to Andrej Kesely, and I was also able to synthesize URLs that downloaded the first 50 pages. However, now I want to retrieve the webpages based on the URLs and convert them into BeautifulSoup objects, and I also want to retrieve all the information and the URL (href) to access the detailed car information. I am new to Python and website scraping so I really don't know where to start, but here is the code that synthesizes the first 50 pages of the website: from bs4 import BeautifulSoup import requests import os for i in range(1, 50): response = requests.get(f"https://jammer.ie/used-cars?page={i}&per-page=12") with open(f"example{i}.html", "w" , encoding="utf-8") as fp: fp.write(response.text) urls = [] prices = [] makes = [] # for loop index by i with open(f"example{i}.html", "r") as fp: webpage = fp.read() soup = BeautifulSoup(webpage, "html.parser") tables = soup.find_all('div', {"class": "span-9 right-col"}) len(tables[0].contents) for it in tables[0].contents[1:]: if it == "\n": continue for jt in it.findall('div', class_="col-lg-4 col-md-12 car-listing"): price = jt.find('p', class_="price").text make = jt.find('h6', class_="car-make").text url = f"https://jammer.ie/used-cars?page={i}&per-page=12" urls.append(url) prices I know I must make a BeautifulSoup object but I really don't know what to do. If you could please explain what to do, that would be great, thanks. I want to be able to retrieve the webpages based on these URLs, convert each into a BeautifulSoup object, and retrieve the car manufacturing year, engine, price, dealer information (if it is available), and the URL (href) to access the detailed car information.
[ "To iterate over multiple pages you can do:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\n\nurl = \"https://jammer.ie/used-cars?page={}&per-page=12\"\n\nall_data = []\n\nfor page in range(1, 3): # <-- increase number of pages here\n soup = BeautifulSoup(requests.get(url.format(page)).text, \"html.parser\")\n\n for car in soup.select(\".car\"):\n info = car.select_one(\".top-info\").get_text(strip=True, separator=\"|\")\n make, model, year, price = info.split(\"|\")\n dealer_name = car.select_one(\".dealer-name h6\").get_text(\n strip=True, separator=\" \"\n )\n address = car.select_one(\".address\").get_text(strip=True)\n\n features = {}\n for feature in car.select(\".car--features li\"):\n k = feature.img[\"src\"].split(\"/\")[-1].split(\".\")[0]\n v = feature.span.text\n features[f\"feature_{k}\"] = v\n\n all_data.append(\n {\n \"make\": make,\n \"model\": model,\n \"year\": year,\n \"price\": price,\n \"dealer_name\": dealer_name,\n \"address\": address,\n \"url\": \"https://jammer.ie\"\n + car.select_one(\"a[href*=vehicle]\")[\"href\"],\n **features,\n }\n )\n\ndf = pd.DataFrame(all_data)\n# prints sample data to screen:\nprint(df.tail().to_markdown(index=False))\n# saves all data to CSV\ndf.to_csv(\"data.csv\", index=False)\n\nPrints:\n\n\n\n\nmake\nmodel\nyear\nprice\ndealer_name\naddress\nurl\nfeature_speed\nfeature_engine\nfeature_transmission\nfeature_owner\nfeature_door-icon1\nfeature_petrol5\nfeature_paint\nfeature_hatchback\n\n\n\n\nSkoda\nFabia\n2014\n€7,500\nBlue Diamond Cars\nCo. Cork\nhttps://jammer.ie/vehicle/165691-skoda-fabia-2014\n128627 miles\n1.2 litres\nManual\n2 previous owners\n4 doors\nPetrol\nBeige\nEstate\n\n\nFord\nKuga\n2016\n€16,750\nBallincollig Motor Company / Trident\nCo. Cork\nhttps://jammer.ie/vehicle/165690-ford-kuga-2016\n99000 miles\n2.0 litres\nManual\n1 previous owners\n5 doors\nDiesel\nGrey\nMPV\n\n\nHyundai\ni40\n2015\nPrice on application\nBallincollig Motor Company / Trident\nCo. Cork\nhttps://jammer.ie/vehicle/165689-hyundai-i40-2015\n98000 miles\n1.7 litres\nManual\n1 previous owners\n5 doors\nDiesel\nBlack\nEstate\n\n\nDacia\nSandero\n2016\n€9,950\nBallincollig Motor Company / Trident\nCo. Cork\nhttps://jammer.ie/vehicle/165688-dacia-sandero-2016\n43000 miles\nnan\nManual\n3 previous owners\n4 doors\nPetrol\nBlue\nHatchback\n\n\nFord\nFiesta\n2016\nPrice on application\nBallincollig Motor Company / Trident\nCo. Cork\nhttps://jammer.ie/vehicle/165687-ford-fiesta-2016\n45000 miles\n1.0 litres\nManual\n2 previous owners\n5 doors\nPetrol\nSilver\nHatchback\n\n\n\n\nand saves data.csv (screenshot from LibreOffice):\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074540562_beautifulsoup_python_web_scraping.txt
Q: djongo + mongodb, Array Data Insert Problem I have a problem. Stack: Django-Rest-Framework + Djongo + Mongodb. Problem: Insert error array data //models.py from django.db import models from djongo import models as djongoModels class House(models.Model): house_id = models.CharField(max_length=256) class Meta: abstract = True class Users(models.Model): _id = djongoModels.ObjectIdField() email = djongoModels.CharField(max_length=256) name = djongoModels.CharField(max_length=256) house = djongoModels.ArrayField( model_container=House ) class Meta: db_table = "drf_users" //serializers.py from .models import Users, Houses from rest_framework import serializers class InsertUserSerializers(serializers.ModelSerializer): email = serializers.CharField(required=True) name = serializers.CharField(required=True) house = serializers.ListField(child=serializers.CharField()) class Meta: model = Users fields = ('email', 'name', 'house') //views.py from .models import Users from .serializers import InsertUserSerializers class UsersViewSet(viewsets.ModelViewSet): queryset = Users.objects.all() serializer_class = InsertUserSerializers permission_classes = [AllowAny] //request.http POST http://<domain>/drf/house/ HTTP/1.1 Content-Type: application/json { "email": "test6@stay.co.kr", "name": "test6", "house": ["SEOU-2023-1023-0002","GYOU-2023-1022-0001"] } //pip freeze asgiref==3.5.2 backports.zoneinfo==0.2.1 certifi==2022.9.24 cffi==1.15.1 charset-normalizer==2.1.1 cryptography==38.0.1 Deprecated==1.2.13 Django==4.1 django-cors-headers==3.13.0 django-filter==22.1 django-oauth-toolkit==2.1.0 django-rest-framework==0.1.0 django-rest-framework-mongoengine==3.4.1 djangorestframework==3.13.1 djongo==1.3.6 dnspython==2.2.1 idna==3.4 jwcrypto==1.4.2 mongoengine==0.24.2 oauthlib==3.2.1 Pillow==9.2.0 pycparser==2.21 pymongo==3.12.3 pytz==2022.2.1 requests==2.28.1 sqlparse==0.2.4 urllib3==1.26.12 wrapt==1.14.1 I want a final db values. enter image description here Is this serializers problem? Or Djongo problem. If remove a serializers house array field, there is no error. What's wrong with my source code. Please help me. A: [Self Solved] It's problem with between a model and a value. //ArrayField: Only available {key:value} //models.py class House(models.Model): house_id = models.CharField(max_length=256) class Users(models.Model): ... house = djongoModels.ArrayField( model_container=House ) //request value "house": [{"house_id":"SEOU-2023-1023-0002"},{"house_id":"GYOU-2023-1022-0001"}] //JSONField or CharField: Available {value, value...} //models.py class Users(models.Model): ... #house = djongoModels.CharField(max_length=256) <= exists max_length's problem house = djongoModels.JSONField() //request value "house": ["SEOU-2023-1023-0002","GYOU-2023-1022-0001"]
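If the ArrayField variant is kept instead of JSONField, the serializer would have to accept a list of {"house_id": ...} objects rather than plain strings — a hedged sketch using standard DRF fields, assuming the Users model from the question:
from rest_framework import serializers

class InsertUserSerializers(serializers.ModelSerializer):
    email = serializers.CharField(required=True)
    name = serializers.CharField(required=True)
    # list of {"house_id": "..."} dicts, matching the ArrayField model_container
    house = serializers.ListField(
        child=serializers.DictField(child=serializers.CharField())
    )

    class Meta:
        model = Users
        fields = ('email', 'name', 'house')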
djongo + mongodb, Array Data Insert Problem
I have a problem. Stack: Django-Rest-Framework + Djongo + Mongodb. Problem: error when inserting array data //models.py from django.db import models from djongo import models as djongoModels class House(models.Model): house_id = models.CharField(max_length=256) class Meta: abstract = True class Users(models.Model): _id = djongoModels.ObjectIdField() email = djongoModels.CharField(max_length=256) name = djongoModels.CharField(max_length=256) house = djongoModels.ArrayField( model_container=House ) class Meta: db_table = "drf_users" //serializers.py from .models import Users, Houses from rest_framework import serializers class InsertUserSerializers(serializers.ModelSerializer): email = serializers.CharField(required=True) name = serializers.CharField(required=True) house = serializers.ListField(child=serializers.CharField()) class Meta: model = Users fields = ('email', 'name', 'house') //views.py from .models import Users from .serializers import InsertUserSerializers class UsersViewSet(viewsets.ModelViewSet): queryset = Users.objects.all() serializer_class = InsertUserSerializers permission_classes = [AllowAny] //request.http POST http://<domain>/drf/house/ HTTP/1.1 Content-Type: application/json { "email": "test6@stay.co.kr", "name": "test6", "house": ["SEOU-2023-1023-0002","GYOU-2023-1022-0001"] } //pip freeze asgiref==3.5.2 backports.zoneinfo==0.2.1 certifi==2022.9.24 cffi==1.15.1 charset-normalizer==2.1.1 cryptography==38.0.1 Deprecated==1.2.13 Django==4.1 django-cors-headers==3.13.0 django-filter==22.1 django-oauth-toolkit==2.1.0 django-rest-framework==0.1.0 django-rest-framework-mongoengine==3.4.1 djangorestframework==3.13.1 djongo==1.3.6 dnspython==2.2.1 idna==3.4 jwcrypto==1.4.2 mongoengine==0.24.2 oauthlib==3.2.1 Pillow==9.2.0 pycparser==2.21 pymongo==3.12.3 pytz==2022.2.1 requests==2.28.1 sqlparse==0.2.4 urllib3==1.26.12 wrapt==1.14.1 I want the final DB values to look like this (screenshot omitted). Is this a serializers problem or a Djongo problem? If I remove the serializers house array field, there is no error. What's wrong with my source code? Please help me.
[ "[Self Solved]\nIt's problem with between a model and a value.\n//ArrayField: Only available {key:value} \n//models.py\nclass House(models.Model):\n house_id = models.CharField(max_length=256)\n\nclass Users(models.Model):\n ...\n house = djongoModels.ArrayField(\n model_container=House\n )\n\n//request value\n\"house\": [{\"house_id\":\"SEOU-2023-1023-0002\"},{\"house_id\":\"GYOU-2023-1022-0001\"}]\n\n//JSONField or CharField: Available {value, value...} \n//models.py\nclass Users(models.Model):\n ...\n #house = djongoModels.CharField(max_length=256) <= exists max_length's problem \n house = djongoModels.JSONField()\n\n//request value\n\"house\": [\"SEOU-2023-1023-0002\",\"GYOU-2023-1022-0001\"]\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "djongo", "mongodb", "python" ]
stackoverflow_0074168540_django_django_rest_framework_djongo_mongodb_python.txt
Q: Basic Python Issue / removing whitespace I am struggling in a python undergraduate class that should have had fewer modules: for a grade, I have a code that reads a formatted file and "prints" a table. The problem is, the last entry of the table has a trailing space at the end. My print statement is for time in movieTiming[m]: print(time, end=" ") I really have no idea what to do here: i have a list that contains something like "11:30", "10:30", "9:00", and it should be printed as 11:30 10:30 9:00 (with no space after the 9:00). I have tried to join my list, but really, most of the concepts I need to do all of this were never even communicated or taught in the class. I guess that's how it goes, but I'm struggling. My approach is to appropriate existing code, try to understand it, and learn that way, but it's not making any sense to me. I am taking Java I at the same time, and Java makes sense to me because the pace of the Java course is about 1/2 of the pace of the Python class: 2x the modules means 1/2 the time. If anyone can help, thank you. Here's what I have (I'll remove the notes if it's not helpful?) # First we open the file named "movies.csv" using the open() f = open(input()) # f.readlines() reads the contents of the file and stores each line as a separate element in a list named movies. movies = f.readlines() # Next we declare 2 dictionaries named movieTiming and movieRating. # movieTiming will store the timing of each movie. # The key would be the movie name and the value would be the list of timings of the movie. movieTiming = {} # movieRating will store the rating of each movie. # key would be the movie name and the value would be the rating of the respective movie. movieRating = {} # Now we traverse through the movies list to fill our dictionaries. for m in movies: # First we split each line into 3 parts that is, we split the line whenever a comma(",") occurs. # split(",") would return a list of splitted words. # For example: when we split "16:40,Wonders of the World,G", it returns a list ["16:40","Wonders of the World","G"] movieDetails = m.split(",") # movieDetails[1] indicates the movie name. # So if the movie name is not present in the dictionary then we initialize the value with an empty list. #need a for loop if(movieDetails[1] not in movieTiming): movieTiming[movieDetails[1]] = [] # movieDetails[0] indicates the timing of the movie. # We append the time to the existing list of the movie. movieTiming[movieDetails[1]].append(movieDetails[0]) # movieDetails[2] indicates the rating of the movie. # We use strip() since a new line character will be appended at the end of the movie rating. # So to remove the new line character at the end we use strip() and we assign the rating to the respective movie. movieRating[movieDetails[1]] = movieDetails[2].strip() # Now we traverse the movieRating dictionary. for m in movieRating: # In -44.44s, negative sign indicates left justification. # 44 inidcates the width assigned to movie name. # .44 indicates the number of characters allowed for the movie name. # s indicates the data type string. # print() generally prints a message and prints a new line at the end. # So to avoid this and print the movie name, rating and timing in the same line, we use end=" " # end is used to print all in the same line separated by a space. print("%-44.44s"%m,"|","%5s"%movieRating[m],"|",end=" ") # Now we traverse through the movieTiming[m] which indicates the list of timing for the particular movie m. 
for time in movieTiming[m]: print(time, end=" ") # This print() will print a new line to print the next movie details in the new line. print() A: Instead of multiple calls to print, create a single space-delimited string with ' '.join and print that. print(' '.join(movieTiming[m])) As you've noted, printing a space between list elements is different from printing a space after each element. While you can play around with list indices to figure out which element is the last element and avoid printing a space after it, the join method already handles the corner cases for you. Similar to what you tried, though, consider an approach not of printing a space after all but the last element, but printing a space before all but the first. print(movieTiming[m][0], end='') for t in movieTiming[m][1:]: print(f' {t}', end='') print() I mention this not because you should consider it an alternative to str.join, but because it helps to think about your problem in different ways. A: This might help: my_list = ['11:00', '12:30', '13:00'] joined = ' '.join(my_list) print(joined) # 11:00 12:30 13:00 A: Suppose you have: time = ["19:30","19:00","18:00"] then you could apply the list as separate arguments: print(*time) You can, as always, control the separator by setting the sep keyword argument: print(*time, sep=', ') Unless you need the joined string for something else, this is the easiest method. Otherwise, use str.join(): joined_string = ' '.join([str(v) for v in time]) print(joined_string)
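Putting the join idea back into the asker's table loop, one possible final form of the row print (movieRating and movieTiming as built in the question):
for m in movieRating:
    # join avoids the trailing space after the last showtime
    print("%-44.44s | %5s | %s" % (m, movieRating[m], " ".join(movieTiming[m])))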
Basic Python Issue / removing whitespace
I am struggling in a python undergraduate class that should have had fewer modules: for a grade, I have a code that reads a formatted file and "prints" a table. The problem is, the last entry of the table has a trailing space at the end. My print statement is for time in movieTiming[m]: print(time, end=" ") I really have no idea what to do here: i have a list that contains something like "11:30", "10:30", "9:00", and it should be printed as 11:30 10:30 9:00 (with no space after the 9:00). I have tried to join my list, but really, most of the concepts I need to do all of this were never even communicated or taught in the class. I guess that's how it goes, but I'm struggling. My approach is to appropriate existing code, try to understand it, and learn that way, but it's not making any sense to me. I am taking Java I at the same time, and Java makes sense to me because the pace of the Java course is about 1/2 of the pace of the Python class: 2x the modules means 1/2 the time. If anyone can help, thank you. Here's what I have (I'll remove the notes if it's not helpful?) # First we open the file named "movies.csv" using the open() f = open(input()) # f.readlines() reads the contents of the file and stores each line as a separate element in a list named movies. movies = f.readlines() # Next we declare 2 dictionaries named movieTiming and movieRating. # movieTiming will store the timing of each movie. # The key would be the movie name and the value would be the list of timings of the movie. movieTiming = {} # movieRating will store the rating of each movie. # key would be the movie name and the value would be the rating of the respective movie. movieRating = {} # Now we traverse through the movies list to fill our dictionaries. for m in movies: # First we split each line into 3 parts that is, we split the line whenever a comma(",") occurs. # split(",") would return a list of splitted words. # For example: when we split "16:40,Wonders of the World,G", it returns a list ["16:40","Wonders of the World","G"] movieDetails = m.split(",") # movieDetails[1] indicates the movie name. # So if the movie name is not present in the dictionary then we initialize the value with an empty list. #need a for loop if(movieDetails[1] not in movieTiming): movieTiming[movieDetails[1]] = [] # movieDetails[0] indicates the timing of the movie. # We append the time to the existing list of the movie. movieTiming[movieDetails[1]].append(movieDetails[0]) # movieDetails[2] indicates the rating of the movie. # We use strip() since a new line character will be appended at the end of the movie rating. # So to remove the new line character at the end we use strip() and we assign the rating to the respective movie. movieRating[movieDetails[1]] = movieDetails[2].strip() # Now we traverse the movieRating dictionary. for m in movieRating: # In -44.44s, negative sign indicates left justification. # 44 inidcates the width assigned to movie name. # .44 indicates the number of characters allowed for the movie name. # s indicates the data type string. # print() generally prints a message and prints a new line at the end. # So to avoid this and print the movie name, rating and timing in the same line, we use end=" " # end is used to print all in the same line separated by a space. print("%-44.44s"%m,"|","%5s"%movieRating[m],"|",end=" ") # Now we traverse through the movieTiming[m] which indicates the list of timing for the particular movie m. 
for time in movieTiming[m]: print(time, end=" ") # This print() will print a new line to print the next movie details in the new line. print()
[ "Instead of multiple calls to print, create a single space-delimited string with ' '.join and print that.\nprint(' '.join(movieTiming[m]))\n\nAs you've noted, printing a space between list elements is different from printing a space after each element. While you can play around with list indices to figure out which element is the last element and avoid printing a space after it, the join method already handles the corner cases for you.\n\nSimilar to what you tried, though, consider an approach not of printing a space after all but the last element, but printing a space before all but the first.\nprint(movieTiming[m][0], end='')\nfor t in movieTiming[m][1:]:\n print(f' {t}', end=''\nprint()\n\nI mention this not because you should consider it an alternative to str.join, but because it helps to think about your problem in different ways.\n", "This might help:\nmy_list = ['11:00', '12:30', '13:00']\n\njoined = ' '.join(my_list)\n\nprint(joined)\n# 11:00 12:30 13:00\n\n", "Supposed you have:\ntime = [\"19:30\",\"19:00\",\"18:00\"]\n\nthen you could apply the list as separate arguments:\nprint(*time)\n\nYou can, as always, control the separator by setting the sep keyword argument:\nprint(*time, sep=', ')\n\nUnless you need the joined string for something else, this is the easiest method. Otherwise, use str.join():\njoined_string = ' '.join([str(v) for v in time])\nprint(joined_string)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074540601_python.txt
Q: Checking if the window is working Python3 How can I make a health check of a window by hwnd? Simply put, I need to handle an error if it happens. The mistake in question: I assume this can be done with Win32 libraries, but my searches haven't led to anything. A: Use SendMessageTimeout() to send the HWND a benign message, like WM_NULL. You can specify whether the function should fail if a timeout elapses, or even fail immediately if the window's thread is hung (not processing messages).
Checking if the window is working Python3
How can I make a health check of a window by hwnd? Simply put, I need to handle an error if it happens. The mistake in question: I assume this can be done with Win32 libraries, but my searches haven't led to anything.
[ "Use SendMessageTimeout() to send the HWND a benign message, like WM_NULL. You can specify whether the function should fail if a timeout elapses, or even fail immediately if the window's thread is hung (not processing messages).\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "win32gui", "winapi" ]
stackoverflow_0074538813_python_python_3.x_win32gui_winapi.txt
Q: Populating nested dictionary by iterating over dataframe not producing desired result I have a dataframe with double timestamped data (effective date and termination date), and I want to produce a nested dictionary (and ultimately a new dataframe) for each entity represented in the data that counts the active instances of the data over time. For example, if a field becomes active in 1980, I want the value for that company key in the 1980 key to increase by 1. If a field terminates in 1992, I want the value for that company key in the 1992 key to decrease by 1. Here is an example of the data: ID CO_Num CO_Name Termination_Date Effective_Date 106072 84028 COMPANY A 7/1/04 4/9/69 106084 84028 COMPANY A 12/1/85 8/20/69 106094 84028 COMPANY A 12/1/70 10/3/69 106115 84028 COMPANY B 12/1/85 1/7/70 106133 91108 COMPANY B 2/4/86 3/6/70 106133 91108 COMPANY C NaT 3/6/91 106133 91108 COMPANY C NaT 3/6/91 I created a nested dictionary with the year as the top key and the company/instance dictionary as the value, with all company values set at 0. E.g. nest_dict = {2000: {'COMPANY A': 0, 'COMPANY B': 0}, 2001: {'COMPANY A': 0, 'COMPANY B': 0}} Then, I have tried just about every way I can think of to iterate over the dataframe and/or dictionary to get the output that I would like. Here is my current iteration code. for key, value in nest_dict.items(): for data in df.values: if data[4].year == key: value[data[2]] += 1 if data[3].year == key: value[data[2]] -= 1 When I put the output into a dataframe, it looks like this: | 1969 | 1970 | 1971 | --------- | -------- | -------- | -------- | Company A | 0 | 0 | 0 | Company B | 0 | 0 | 0 | Company C | 2 | 2 | 2 | What I want to see is something like this: | 1969 | 1970 | 1971 | --------- | -------- | -------- | -------- | Company A | 3 | 2 | 2 | Company B | 0 | 2 | 2 | Company C | 0 | 0 | 0 | In this case, each column is a running tally of active instances for each company. Instead, I am getting all the same terminal value for every company. I feel like I am missing something very simple here. Any help is appreciated. A: I figured it out. My first issue was failing to make copies of the company sub-dictionary for each separate year. For more information on this, see this post My second issue was figuring out how to utilize my previous iteration output values for yr in timeseries: if yr == 1969: for row in df.values: if row[4].year == yr: timeseries[yr][row[2]] += 1 if row[3].year == yr: timeseries[yr][row[2]] -= 1 elif yr > 1969: timeseries[yr].update(timeseries[yr - 1]) for row in df.values: if row[4].year == yr: timeseries[yr][row[2]] += 1 if row[3].year == yr: timeseries[yr][row[2]] -= 1 df1 = pd.DataFrame(timeseries) df1.index.name = "Company" The numerical indexing of the values calls the 'Effective Date' and 'Termination Date' values from the dataframe row, apologies if it's a bit messy to read. This gave me the desired result. 
1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 Company COMPANY A 3 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 COMPANY B 0 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 COMPANY C 0 1 1 3 4 4 4 4 4 4 5 6 12 17 21 21 22 21 20 20 20 20 20 20 20 18 18 16 14 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 COMPANY D 0 3 3 3 3 3 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 COMPANY E 0 0 1 2 2 4 4 7 11 16 17 19 25 25 17 17 17 17 10 10 10 10 10 10 10 10 10 10 10 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 If anyone has comments or suggestions on how to do this better/more efficiently, I would appreciate it.
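For comparison, a compact pandas version of the same tally (a sketch only: it assumes the sample's column names and that every event year falls inside 1969-2022):

import pandas as pd

df['Effective_Date'] = pd.to_datetime(df['Effective_Date'])
df['Termination_Date'] = pd.to_datetime(df['Termination_Date'])

years = range(1969, 2023)
counts = pd.DataFrame(0, index=sorted(df['CO_Name'].unique()), columns=years)
for _, row in df.iterrows():
    counts.loc[row['CO_Name'], row['Effective_Date'].year] += 1        # activation
    if pd.notna(row['Termination_Date']):                              # NaT never terminates
        counts.loc[row['CO_Name'], row['Termination_Date'].year] -= 1  # termination
running = counts.cumsum(axis=1)  # running tally of active instances per year
running.index.name = 'Company'

The cumulative sum across the year columns replaces the update-from-previous-year step, so no per-year dictionary copies are needed.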
Populating nested dictionary by iterating over dataframe not producing desired result
I have a dataframe with double timestamped data (effective date and termination date), and I want to produce a nested dictionary (and ultimately a new dataframe) for each entity represented in the data that counts the active instances of the data over time. For example, if a field becomes active in 1980, I want the value for that company key in the 1980 key to increase by 1. If a field terminates in 1992, I want the value for that company key in the 1992 key to decrease by 1. Here is an example of the data: ID CO_Num CO_Name Termination_Date Effective_Date 106072 84028 COMPANY A 7/1/04 4/9/69 106084 84028 COMPANY A 12/1/85 8/20/69 106094 84028 COMPANY A 12/1/70 10/3/69 106115 84028 COMPANY B 12/1/85 1/7/70 106133 91108 COMPANY B 2/4/86 3/6/70 106133 91108 COMPANY C NaT 3/6/91 106133 91108 COMPANY C NaT 3/6/91 I created a nested dictionary with the year as the top key and the company/instance dictionary as the value, with all company values set at 0. E.g. nest_dict = {2000: {'COMPANY A': 0, 'COMPANY B': 0}, 2001: {'COMPANY A': 0, 'COMPANY B': 0}} Then, I have tried just about every way I can think of to iterate over the dataframe and/or dictionary to get the output that I would like. Here is my current iteration code. for key, value in nest_dict.items(): for data in df.values: if data[4].year == key: value[data[2]] += 1 if data[3].year == key: value[data[2]] -= 1 When I put the output into a dataframe, it looks like this: | 1969 | 1970 | 1971 | --------- | -------- | -------- | -------- | Company A | 0 | 0 | 0 | Company B | 0 | 0 | 0 | Company C | 2 | 2 | 2 | What I want to see is something like this: | 1969 | 1970 | 1971 | --------- | -------- | -------- | -------- | Company A | 3 | 2 | 2 | Company B | 0 | 2 | 2 | Company C | 0 | 0 | 0 | In this case, each column is a running tally of active instances for each company. Instead, I am getting all the same terminal value for every company. I feel like I am missing something very simple here. Any help is appreciated.
[ "I figured it out. My first issue was failing to make copies of the company sub-dictionary for each separate year. For more information on this, see this post\nMy second issues was figuring out how to utilize my previous iteration output values\nfor yr in timeseries:\n if yr == 1969:\n for row in df.values:\n if row[4].year == yr:\n timeseries[yr][row[2]] += 1\n if row[3].year == yr:\n timeseries[yr][row[2]] -= 1\n\n elif yr > 1969:\n timeseries[yr].update(timeseries[yr - 1])\n for row in df.values:\n if row[4].year == y:\n timeseries[yr][row[2]] += 1\n if row[3].year == yr:\n timeseries[yr][row[2]] -= 1\n\ndf1 = pd.DataFrame(timeseries)\ndf1.index.name = \"Company\"\n\nThe numerical indexing of the values calls the 'Effective Date' and 'Termination Date' values from the dataframe row, apologies if it's a bit messy to read.\nThis gave me the desired result.\n 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996\nCompany \nCOMPANY A 3 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 3 3 3 3 3 3 3 3 3 3 3 3 3 3 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\nCOMPANY B 0 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\nCOMPANY C 0 1 1 3 4 4 4 4 4 4 5 6 12 17 21 21 22 21 20 20 20 20 20 20 20 18 18 16 14 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\nCOMPANY D 0 3 3 3 3 3 3 3 3 3 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\nCOMPANY E 0 0 1 2 2 4 4 7 11 16 17 19 25 25 17 17 17 17 10 10 10 10 10 10 10 10 10 10 10 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\n\nIf anyone has comments or suggestions on how to do this better/more efficiently, I would appreciate it.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "pandas", "python", "time_series" ]
stackoverflow_0074539632_dictionary_pandas_python_time_series.txt
Q: Signers are being required to sign tabs meant for other recipients I am integrating docusign into an app using the python SDK with the flow as follows: 1.) generate an envelope with multiple documents each with its own tabs 2.) The envelope has 3 recipients( 2 signers with routing order and 1 cc) 3.) In each document there are 2 tabs groups for each signer in the envelope. 4.) Once the first signer signs all documents, the envelope is sent to the second to sign. The routing order is working just fine but the issue I am having is, the first signer is forced to sign all tabs in the envelope even ones that are attached to second signer. The same goes for the second signer. Because of this the date_signed tabs are wrongly populated when the document is signed. here is the JSON data for the recipients in the envelope definition { 'recipients': { 'signers': [ { 'client_user_id': None, 'completed_count': None, 'email': 'test1@test1.com', 'name': 'egeg feefe', 'name_metadata': None, 'recipient_attachments': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'role_name': None, 'routing_order': '1', 'tabs': { 'sign_here_tabs': [{'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/222c/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/222c/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }, {'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 
'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/333d/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/333d/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }], }, }, { 'client_user_id': None, 'completed_count': None, 'email': 'ftt.tttvb@gmail.com', 'name': 'Dan Kerbon', 'name_metadata': None, 'recipient_attachments': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'role_name': None, 'routing_order': '2', 'tabs': { 'sign_here_tabs': [{'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/222c/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 
'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/222c/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }, {'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/333d/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/333d/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }], }, 'user_id': None}], }, } Is there a value I am not setting or missing on the recipients? A: You use this /222c/ and /333d/ as your anchor strings for both recipients it seems to me. These are strings to be looked up in your document and be used to anchor the tabs, but since you use them for both signers, they'll get the same tabs, twice, once for each. 
You can either use different anchor strings for different signers, or switch to fixed positioning, where you provide X/Y coordinates rather than anchor strings.
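For illustration, a sketch of the first option with the Python SDK; the /sig1/ and /sig2/ markers are placeholders you would embed in the documents, one per signer, and only the anchor-related fields are shown:

from docusign_esign import Signer, SignHere, Tabs

def signer_with_anchor(email, name, recipient_id, routing_order, anchor):
    # The tab anchors to a marker that appears only where this signer
    # is meant to sign, so no tab is ever shared between recipients.
    tab = SignHere(anchor_string=anchor, anchor_units='pixels',
                   anchor_x_offset='20', anchor_y_offset='20')
    return Signer(email=email, name=name, recipient_id=recipient_id,
                  routing_order=routing_order,
                  tabs=Tabs(sign_here_tabs=[tab]))

signer1 = signer_with_anchor('test1@test1.com', 'egeg feefe', '1', '1', '/sig1/')
signer2 = signer_with_anchor('ftt.tttvb@gmail.com', 'Dan Kerbon', '2', '2', '/sig2/')

With one marker per signer, the date_signed tabs also fill in for the correct recipient.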
Signers are being required to sign tabs meant for other recipients
I am integrating docusign into an app using the python SDK with the flow as follows: 1.) generate an envelope with multiple documents each with its own tabs 2.) The envelope has 3 recipients( 2 signers with routing order and 1 cc) 3.) In each document there are 2 tabs groups for each signer in the envelope. 4.) Once the first signer signs all documents, the envelope is sent to the second to sign. The routing order is working just fine but the issue I am having is, the first signer is forced to sign all tabs in the envelope even ones that are attached to second signer. The same goes for the second signer. Because of this the date_signed tabs are wrongly populated when the document is signed. here is the JSON data for the recipients in the envelope definition { 'recipients': { 'signers': [ { 'client_user_id': None, 'completed_count': None, 'email': 'test1@test1.com', 'name': 'egeg feefe', 'name_metadata': None, 'recipient_attachments': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'role_name': None, 'routing_order': '1', 'tabs': { 'sign_here_tabs': [{'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/222c/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/222c/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }, {'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 
'anchor_match_whole_word_metadata': None, 'anchor_string': '/333d/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/333d/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }], }, }, { 'client_user_id': None, 'completed_count': None, 'email': 'ftt.tttvb@gmail.com', 'name': 'Dan Kerbon', 'name_metadata': None, 'recipient_attachments': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'role_name': None, 'routing_order': '2', 'tabs': { 'sign_here_tabs': [{'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/222c/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 
'page_number': None, 'page_number_metadata': None, 'recipient_id': 72295, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/222c/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }, {'anchor_allow_white_space_in_characters': None, 'anchor_allow_white_space_in_characters_metadata': None, 'anchor_case_sensitive': None, 'anchor_case_sensitive_metadata': None, 'anchor_ignore_if_not_present': None, 'anchor_ignore_if_not_present_metadata': None, 'anchor_match_whole_word': True, 'anchor_match_whole_word_metadata': None, 'anchor_string': '/333d/', 'anchor_string_metadata': None, 'anchor_tab_processor_version': None, 'anchor_tab_processor_version_metadata': None, 'anchor_units': 'pixels', 'anchor_units_metadata': None, 'anchor_x_offset': '20', 'anchor_x_offset_metadata': None, 'anchor_y_offset': '20', 'anchor_y_offset_metadata': None, 'caption': None, 'caption_metadata': None, 'conditional_parent_label': None, 'conditional_parent_label_metadata': None, 'conditional_parent_value': None, 'conditional_parent_value_metadata': None, 'custom_tab_id': None, 'custom_tab_id_metadata': None, 'document_id': None, 'document_id_metadata': None, 'error_details': None, 'form_order': None, 'form_order_metadata': None, 'form_page_label': None, 'form_page_label_metadata': None, 'form_page_number': None, 'form_page_number_metadata': None, 'hand_draw_required': None, 'height': None, 'height_metadata': None, 'is_seal_sign_tab': None, 'merge_field': None, 'merge_field_xml': None, 'name': None, 'name_metadata': None, 'optional': None, 'optional_metadata': None, 'page_number': None, 'page_number_metadata': None, 'recipient_id': 4804, 'recipient_id_guid': None, 'recipient_id_guid_metadata': None, 'recipient_id_metadata': None, 'scale_value': None, 'scale_value_metadata': None, 'smart_contract_information': None, 'source': None, 'stamp': None, 'stamp_type': None, 'stamp_type_metadata': None, 'status': None, 'status_metadata': None, 'tab_group_labels': None, 'tab_group_labels_metadata': None, 'tab_id': None, 'tab_id_metadata': None, 'tab_label': '/333d/', 'tab_label_metadata': None, 'tab_order': None, 'tab_order_metadata': None, 'tab_type': None, 'tab_type_metadata': None, 'template_locked': None, 'template_locked_metadata': None, 'template_required': None, 'template_required_metadata': None, 'tool_tip_metadata': None, 'tooltip': None, 'width': None, }], }, 'user_id': None}], }, } Is there a value I am not setting or missing on the recipients?
[ "You use this /222c/ and /333d/ as your anchor strings for both recipients it seems to me.\nThese are strings to be looked up in your document and be used to anchor the tabs, but since you use them for both signers, they'll get the same tabs, twice, once for each.\nYou can either have different strings for different signers or use fixed positioning instead where you provide the X/Y coordinates instead of providing anchor strings.\n" ]
[ 1 ]
[]
[]
[ "docusignapi", "python" ]
stackoverflow_0074540588_docusignapi_python.txt
Q: ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects Error while installing manimce, I have been trying to install manimce library on windows subsystem for linux and after running pip install manimce Collecting manimce Downloading manimce-0.1.1.post2-py3-none-any.whl (249 kB) |████████████████████████████████| 249 kB 257 kB/s Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting pangocairocffi<0.5.0,>=0.4.0 Downloading pangocairocffi-0.4.0.tar.gz (17 kB) Preparing metadata (setup.py) ... done Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting cairocffi<2.0.0,>=1.1.0 Downloading cairocffi-1.3.0.tar.gz (88 kB) |████████████████████████████████| 88 kB 160 kB/s Preparing metadata (setup.py) ... done Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Collecting pangocffi<0.9.0,>=0.8.0 Downloading pangocffi-0.8.0.tar.gz (33 kB) Preparing metadata (setup.py) ... done Collecting pycairo<2.0,>=1.19 Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting progressbar Downloading progressbar-2.5.tar.gz (10 kB) Preparing metadata (setup.py) ... done Collecting rich<7.0,>=6.0 Using cached rich-6.2.0-py3-none-any.whl (150 kB) Collecting cffi>=1.1.0 Using cached cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB) Collecting commonmark<0.10.0,>=0.9.0 Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB) Collecting typing-extensions<4.0.0,>=3.7.4 Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB) Collecting colorama<0.5.0,>=0.4.0 Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB) Collecting pycparser Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB) Building wheels for collected packages: cairocffi, pangocairocffi, pangocffi, pycairo, progressbar Building wheel for cairocffi (setup.py) ... done Created wheel for cairocffi: filename=cairocffi-1.3.0-py3-none-any.whl size=89650 sha256=afc73218cc9fa1d844d7165f598e2be0428598166b4c3ed9de5bbdc94a0a6977 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/f3/97/83/8022b9237866102e18d1b7ac0a269769e6fccba0f63dceb9b7 Building wheel for pangocairocffi (setup.py) ... done Created wheel for pangocairocffi: filename=pangocairocffi-0.4.0-py3-none-any.whl size=19283 sha256=54399796259c6e24f9ab56c5747ab273dcf97fb6fed3e7b54935f9ac49351d50 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/60/58/92/507a12a5044f7fcda6f4dfd8e0a607cc1fe957bc0dea885906 Building wheel for pangocffi (setup.py) ... done Created wheel for pangocffi: filename=pangocffi-0.8.0-py3-none-any.whl size=37899 sha256=bea348af93696816b046dd901aa60d29a464460c5faac67628eb7e1ea7d1807d Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/c4/df/6d/e9d0f79b1545f6e902cc22773b1429de7a5efc240b891ee009 Building wheel for pycairo (pyproject.toml) ... 
error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpuguwzu3u cwd: /tmp/pip-install-l4hqdegr/pycairo_f4d80b8f3e4840a3802342825adcdff5 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Building wheel for progressbar (setup.py) ... done Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12074 sha256=7290ef8de5dd955bf756b90130f400dd19c2cc9ea050a5a1dce2803440f581e2 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/2c/67/ed/d84123843c937d7e7f5ba88a270d11036473144143355e2747 Successfully built cairocffi pangocairocffi pangocffi progressbar Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip install manim_ce ERROR: Could not find a version that satisfies the requirement manim_ce (from versions: none) ERROR: No matching distribution found for manim_ce (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ manim example_scenes/basic.py -pql Command 'manim' not found, did you mean: command 'maim' from deb maim (5.5.3-1build1) Try: sudo apt install <deb name> (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ sudo apt-get install manim [sudo] password for yusifer_zendric: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package manim (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip3 install manimlib Collecting manimlib Downloading manimlib-0.2.0.tar.gz (4.8 MB) |████████████████████████████████| 4.8 MB 498 kB/s Preparing metadata (setup.py) ... done Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting argparse Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting opencv-python Downloading opencv_python-4.5.4.60-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB) |████████████████████████████████| 60.3 MB 520 kB/s Collecting progressbar Using cached progressbar-2.5-py3-none-any.whl Collecting pycairo Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Building wheels for collected packages: manimlib, pycairo Building wheel for manimlib (setup.py) ... done Created wheel for manimlib: filename=manimlib-0.2.0-py3-none-any.whl size=212737 sha256=27efe2c226d80cfe5663928e980d3e5f5a164d8e9d0aacea5014d37ffdedb76a Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/87/36/c1/2db5ed5de9908034108f3c39538cd3367445d9cec01e7c8c23 Building wheel for pycairo (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5o2970su cwd: /tmp/pip-install-sxxp3lw2/pycairo_d372a62d0c6b4c4484391402d21485e1 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Successfully built manimlib Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects all the libraries are installed accept the pycairo library. It's just showing this to install pyproject.toml error. Infact I have already done pip install pyproject.toml and it is installed then also it's showing the same error. A: apt-get install sox ffmpeg libcairo2 libcairo2-dev apt-get install texlive-full pip3 install manimlib # or pip install manimlib Then: pip3 install manimce # or pip install manimce And everything works. A: I had the same error, for a different package however. I solved the issue with: apt install libpython3.9-dev A: In my case I'm trying to install PyGObject in Fedora. But I experience the same problem. Here's how to do it in Fedora. sudo dnf install gobject-introspection-devel cairo-gobject-devel follow by installing the lib that you're using, in my case was PyGObject pip install PyGObject A: These two commands worked for me sudo apt-get install sox ffmpeg libcairo2 libcairo2-dev sudo apt install libgirepository1.0-dev A: step 1: try: pip install wheel step 2: pip install manimce if still doesn't work try: pip3 instead of pip else: reinstall python and follow steps 1 and step 2 and if it still doesn't work install a lower version and if it still doesn't work make sure it is the right package and if it still doesn't work there is some fatal error somewhere A: I had the exact description of the error but for different module (aiohttp). Just in case will leave here the description of encountered error and the solution. The error below I got while executing pip install -r requirements.txt for installation I made: socket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o aiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found #include "longintrepr.h" ^~~~~~~ 1 error generated. 
error: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' failed with exit code 1 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for aiohttp Failed to build aiohttp ERROR: Could not build wheels for aiohttp, which is required to install pyproject.toml-based projects This error is specific to Python 3.11 version. On Python with 3.10.6 version installation went fine. To solve it I needed to update requirements.txt. Not working versions of modules with Python 3.11: aiohttp==3.8.1 yarl==1.4.2 frozenlist==1.3.0 Working versions: aiohttp==3.8.2 yarl==1.8.1 frozenlist==1.3.1 Links to the corresponding issues with fixes: https://github.com/aio-libs/aiohttp/issues/6600 https://github.com/aio-libs/yarl/issues/706 https://github.com/aio-libs/frozenlist/issues/305
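For the original pycairo failure, the build log's root cause is the line 'pkg-config' not found: pip has to compile pycairo from source and cannot locate the cairo headers without it. On a Debian/Ubuntu-based WSL setup, something like the following usually unblocks the wheel build (package names assume apt; adjust for other distros):

sudo apt-get update
sudo apt-get install pkg-config libcairo2-dev  # supplies pkg-config and cairo.pc
pip install pycairo                            # the wheel should now build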
ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects
Error while installing manimce, I have been trying to install manimce library on windows subsystem for linux and after running pip install manimce Collecting manimce Downloading manimce-0.1.1.post2-py3-none-any.whl (249 kB) |████████████████████████████████| 249 kB 257 kB/s Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting pangocairocffi<0.5.0,>=0.4.0 Downloading pangocairocffi-0.4.0.tar.gz (17 kB) Preparing metadata (setup.py) ... done Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting cairocffi<2.0.0,>=1.1.0 Downloading cairocffi-1.3.0.tar.gz (88 kB) |████████████████████████████████| 88 kB 160 kB/s Preparing metadata (setup.py) ... done Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Collecting pangocffi<0.9.0,>=0.8.0 Downloading pangocffi-0.8.0.tar.gz (33 kB) Preparing metadata (setup.py) ... done Collecting pycairo<2.0,>=1.19 Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting progressbar Downloading progressbar-2.5.tar.gz (10 kB) Preparing metadata (setup.py) ... done Collecting rich<7.0,>=6.0 Using cached rich-6.2.0-py3-none-any.whl (150 kB) Collecting cffi>=1.1.0 Using cached cffi-1.15.0-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (446 kB) Collecting commonmark<0.10.0,>=0.9.0 Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB) Collecting typing-extensions<4.0.0,>=3.7.4 Using cached typing_extensions-3.10.0.2-py3-none-any.whl (26 kB) Collecting colorama<0.5.0,>=0.4.0 Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB) Collecting pycparser Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB) Building wheels for collected packages: cairocffi, pangocairocffi, pangocffi, pycairo, progressbar Building wheel for cairocffi (setup.py) ... done Created wheel for cairocffi: filename=cairocffi-1.3.0-py3-none-any.whl size=89650 sha256=afc73218cc9fa1d844d7165f598e2be0428598166b4c3ed9de5bbdc94a0a6977 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/f3/97/83/8022b9237866102e18d1b7ac0a269769e6fccba0f63dceb9b7 Building wheel for pangocairocffi (setup.py) ... done Created wheel for pangocairocffi: filename=pangocairocffi-0.4.0-py3-none-any.whl size=19283 sha256=54399796259c6e24f9ab56c5747ab273dcf97fb6fed3e7b54935f9ac49351d50 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/60/58/92/507a12a5044f7fcda6f4dfd8e0a607cc1fe957bc0dea885906 Building wheel for pangocffi (setup.py) ... done Created wheel for pangocffi: filename=pangocffi-0.8.0-py3-none-any.whl size=37899 sha256=bea348af93696816b046dd901aa60d29a464460c5faac67628eb7e1ea7d1807d Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/c4/df/6d/e9d0f79b1545f6e902cc22773b1429de7a5efc240b891ee009 Building wheel for pycairo (pyproject.toml) ... 
error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmpuguwzu3u cwd: /tmp/pip-install-l4hqdegr/pycairo_f4d80b8f3e4840a3802342825adcdff5 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Building wheel for progressbar (setup.py) ... done Created wheel for progressbar: filename=progressbar-2.5-py3-none-any.whl size=12074 sha256=7290ef8de5dd955bf756b90130f400dd19c2cc9ea050a5a1dce2803440f581e2 Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/2c/67/ed/d84123843c937d7e7f5ba88a270d11036473144143355e2747 Successfully built cairocffi pangocairocffi pangocffi progressbar Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip install manim_ce ERROR: Could not find a version that satisfies the requirement manim_ce (from versions: none) ERROR: No matching distribution found for manim_ce (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ manim example_scenes/basic.py -pql Command 'manim' not found, did you mean: command 'maim' from deb maim (5.5.3-1build1) Try: sudo apt install <deb name> (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ sudo apt-get install manim [sudo] password for yusifer_zendric: Reading package lists... Done Building dependency tree Reading state information... Done E: Unable to locate package manim (venv) yusifer_zendric@Laptop-Yusifer:~/manim_ce$ pip3 install manimlib Collecting manimlib Downloading manimlib-0.2.0.tar.gz (4.8 MB) |████████████████████████████████| 4.8 MB 498 kB/s Preparing metadata (setup.py) ... done Collecting Pillow Using cached Pillow-8.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (3.1 MB) Collecting argparse Downloading argparse-1.4.0-py2.py3-none-any.whl (23 kB) Collecting colour Using cached colour-0.1.5-py2.py3-none-any.whl (23 kB) Collecting numpy Using cached numpy-1.21.5-cp38-cp38-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.7 MB) Collecting opencv-python Downloading opencv_python-4.5.4.60-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (60.3 MB) |████████████████████████████████| 60.3 MB 520 kB/s Collecting progressbar Using cached progressbar-2.5-py3-none-any.whl Collecting pycairo Using cached pycairo-1.20.1.tar.gz (344 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Collecting pydub Using cached pydub-0.25.1-py2.py3-none-any.whl (32 kB) Collecting pygments Using cached Pygments-2.10.0-py3-none-any.whl (1.0 MB) Collecting scipy Using cached scipy-1.7.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (39.3 MB) Collecting tqdm Using cached tqdm-4.62.3-py2.py3-none-any.whl (76 kB) Building wheels for collected packages: manimlib, pycairo Building wheel for manimlib (setup.py) ... done Created wheel for manimlib: filename=manimlib-0.2.0-py3-none-any.whl size=212737 sha256=27efe2c226d80cfe5663928e980d3e5f5a164d8e9d0aacea5014d37ffdedb76a Stored in directory: /home/yusifer_zendric/.cache/pip/wheels/87/36/c1/2db5ed5de9908034108f3c39538cd3367445d9cec01e7c8c23 Building wheel for pycairo (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: /home/yusifer_zendric/manim_ce/venv/bin/python /home/yusifer_zendric/manim_ce/venv/lib/python3.8/site-packages/pip/_vendor/pep517/in_process/_in_process.py build_wheel /tmp/tmp5o2970su cwd: /tmp/pip-install-sxxp3lw2/pycairo_d372a62d0c6b4c4484391402d21485e1 Complete output (12 lines): running bdist_wheel running build running build_py creating build creating build/lib.linux-x86_64-3.8 creating build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.py -> build/lib.linux-x86_64-3.8/cairo copying cairo/__init__.pyi -> build/lib.linux-x86_64-3.8/cairo copying cairo/py.typed -> build/lib.linux-x86_64-3.8/cairo running build_ext 'pkg-config' not found. Command ['pkg-config', '--print-errors', '--exists', 'cairo >= 1.15.10'] ---------------------------------------- ERROR: Failed building wheel for pycairo Successfully built manimlib Failed to build pycairo ERROR: Could not build wheels for pycairo, which is required to install pyproject.toml-based projects all the libraries are installed accept the pycairo library. It's just showing this to install pyproject.toml error. Infact I have already done pip install pyproject.toml and it is installed then also it's showing the same error.
[ "apt-get install sox ffmpeg libcairo2 libcairo2-dev\napt-get install texlive-full\npip3 install manimlib # or pip install manimlib\n\nThen:\npip3 install manimce # or pip install manimce\n\nAnd everything works.\n", "I had the same error, for a different package however. I solved the issue with:\napt install libpython3.9-dev\n\n", "In my case I'm trying to install PyGObject in Fedora.\nBut I experience the same problem.\nHere's how to do it in Fedora.\nsudo dnf install gobject-introspection-devel cairo-gobject-devel\n\nfollow by installing the lib that you're using, in my case was PyGObject\npip install PyGObject\n\n", "These two commands worked for me\nsudo apt-get install sox ffmpeg libcairo2 libcairo2-dev\n\nsudo apt install libgirepository1.0-dev\n\n", "step 1: try: pip install wheel\nstep 2: pip install manimce\nif still doesn't work try: pip3 instead of pip\nelse: reinstall python and follow steps 1 and step 2\nand if it still doesn't work install a lower version\nand if it still doesn't work make sure it is the right package\nand if it still doesn't work there is some fatal error somewhere\n", "I had the exact description of the error but for different module (aiohttp). Just in case will leave here the description of encountered error and the solution.\nThe error below I got while executing pip install -r requirements.txt for installation I made:\nsocket.c -o build/temp.linux-armv8l-cpython-311/aiohttp/_websocket.o\naiohttp/_websocket.c:198:12: fatal error: 'longintrepr.h' file not found\n#include \"longintrepr.h\" \n ^~~~~~~ 1 error generated.\nerror: command '/data/data/com.termux/files/usr/bin/arm-linux-androideabi-clang' \nfailed with exit code 1\n[end of output]\nnote: This error originates from a subprocess, and is likely not a problem with pip.\nERROR: Failed building wheel for aiohttp\nFailed to build aiohttp\nERROR: Could not build wheels for aiohttp, which is required to install\npyproject.toml-based projects\n\nThis error is specific to Python 3.11 version. On Python with 3.10.6 version installation went fine.\nTo solve it I needed to update requirements.txt.\nNot working versions of modules with Python 3.11:\naiohttp==3.8.1\nyarl==1.4.2\nfrozenlist==1.3.0\n\nWorking versions:\naiohttp==3.8.2\nyarl==1.8.1\nfrozenlist==1.3.1\n\nLinks to the corresponding issues with fixes:\n\nhttps://github.com/aio-libs/aiohttp/issues/6600\nhttps://github.com/aio-libs/yarl/issues/706\nhttps://github.com/aio-libs/frozenlist/issues/305\n\n" ]
[ 7, 3, 1, 1, 0, 0 ]
[]
[]
[ "manim", "pycairo", "python", "ubuntu", "windows_subsystem_for_linux" ]
stackoverflow_0070508775_manim_pycairo_python_ubuntu_windows_subsystem_for_linux.txt
Q: Disable a charfield in DJANGO when I'm creating a new User I'm trying to do a crud in Django, it's about jefe and encargados. When I am logged in as an administrator, it has to allow me to create an encargado, but not a manager, but if I log in as a manager, it has to allow me to create a new encargado. For the jefe I am using a table called users and for the admin I am using the one from the Django admin panel. Here are the models: roles = ( ('encargado', 'ENCARGADO'), ('jefe','JEFE'), ) class Usuarios(models.Model): id = models.BigAutoField(primary_key=True) nombre = models.CharField(max_length=30) rol = models.CharField(max_length=30, choices=roles, default='encargado') correo = models.CharField(max_length=30) contraseña = models.CharField(max_length=30) cedula = models.CharField(max_length=30) class Meta: db_table = 'usuarios' This is the create view class UsuarioCrear(SuccessMessageMixin, CreateView): model = Usuarios form = Usuarios fields = "__all__" success_message = 'usuario creado correctamente !' def get_success_url(self): return reverse('leer') This is the HTML for the create page; here I put a restriction so that the roles are only seen by the administrator, but really what is necessary is that if I am an administrator it only lets me select jefe, and if I am a jefe it only lets me select encargado {% csrf_token %} <!-- {{ form.as_p }} --> <div class="form-group"> <label for="id" class="txt_negrita">Id</label> {{ form.id|add_class:"form-control" }} </div> <div class="form-group"> <label for="nombre" class="txt_negrita">Nombre</label> {{ form.nombre|add_class:"form-control" }} </div> {% if user.is_superuser %} <div class="form-group"> <label for="rol" class="txt_negrita">Rol</label> {{ form.rol|add_class:"form-control"}} </div> {%endif%} <div class="form-group"> <label for="correo" class="txt_negrita">Correo</label> {{ form.correo|add_class:"form-control" }} </div> <div class="form-group"> <label for="contraseña" class="txt_negrita">Contraseña</label> {{ form.contraseña|add_class:"form-control" }} </div> <div class="form-group"> <label for="cedula" class="txt_negrita">Cedula</label> {{ form.cedula|add_class:"form-control" }} </div> <button type="submit" class="btn btn-primary">Aceptar</button> <a href="../" type="submit" class="btn btn-danger">Cancelar</a> </form> A: In your view, you need to get the user so you can pass it to the form via kwargs. Add the following method to your view def get_form_kwargs(self): kwargs = super().get_form_kwargs() kwargs['user'] = self.request.user return kwargs Now in your form you can test against the user when you initialise the form class Usuarios(forms.Form): def __init__(self, user, *args, **kwargs): self.user = user super().__init__(*args, **kwargs) #use whatever method you need to determine options, # if self.user.is_staff: etc if self.user.rol == 'jefe': self.fields['rol'].choices = ('encargado', 'ENCARGADO'), else: self.fields['rol'].choices = ('jefe', 'JEFE'),
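Since the view is a CreateView, the same idea can be expressed as a ModelForm; this is only a sketch (the name UsuariosForm is mine), and it assumes you set form_class = UsuariosForm on the view instead of fields:

from django import forms

class UsuariosForm(forms.ModelForm):
    class Meta:
        model = Usuarios
        fields = "__all__"

    def __init__(self, *args, **kwargs):
        # Pop the user injected by the view's get_form_kwargs(), so the
        # base ModelForm never receives an unexpected keyword argument.
        user = kwargs.pop('user', None)
        super().__init__(*args, **kwargs)
        if user is not None and user.is_superuser:
            self.fields['rol'].choices = [('jefe', 'JEFE')]
        else:
            self.fields['rol'].choices = [('encargado', 'ENCARGADO')]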
Disable a charfield in DJANGO when I'm creating a new User
I'm trying to do a crud in Django, it's about jefe and encargados. When I am logged in as an administrator, it has to allow me to create an encargado, but not a manager, but if I log in as a manager, it has to allow me to create a new encargado. For the jefe I am using a table called users and for the admin I am using the one from the Django admin panel. Here are the models: roles = ( ('encargado', 'ENCARGADO'), ('jefe','JEFE'), ) class Usuarios(models.Model): id = models.BigAutoField(primary_key=True) nombre = models.CharField(max_length=30) rol = models.CharField(max_length=30, choices=roles, default='encargado') correo = models.CharField(max_length=30) contraseña = models.CharField(max_length=30) cedula = models.CharField(max_length=30) class Meta: db_table = 'usuarios' This is the create view class UsuarioCrear(SuccessMessageMixin, CreateView): model = Usuarios form = Usuarios fields = "__all__" success_message = 'usuario creado correctamente !' def get_success_url(self): return reverse('leer') This is the HTML for the create page; here I put a restriction so that the roles are only seen by the administrator, but really what is necessary is that if I am an administrator it only lets me select jefe, and if I am a jefe it only lets me select encargado {% csrf_token %} <!-- {{ form.as_p }} --> <div class="form-group"> <label for="id" class="txt_negrita">Id</label> {{ form.id|add_class:"form-control" }} </div> <div class="form-group"> <label for="nombre" class="txt_negrita">Nombre</label> {{ form.nombre|add_class:"form-control" }} </div> {% if user.is_superuser %} <div class="form-group"> <label for="rol" class="txt_negrita">Rol</label> {{ form.rol|add_class:"form-control"}} </div> {%endif%} <div class="form-group"> <label for="correo" class="txt_negrita">Correo</label> {{ form.correo|add_class:"form-control" }} </div> <div class="form-group"> <label for="contraseña" class="txt_negrita">Contraseña</label> {{ form.contraseña|add_class:"form-control" }} </div> <div class="form-group"> <label for="cedula" class="txt_negrita">Cedula</label> {{ form.cedula|add_class:"form-control" }} </div> <button type="submit" class="btn btn-primary">Aceptar</button> <a href="../" type="submit" class="btn btn-danger">Cancelar</a> </form>
[ "In your view, you need to get the user so you can pass it to the form via kwargs. Add the following method to your view\ndef get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['user'] = self.request.user\n return kwargs\n\nNow in your form you can test against the user when you initialise the form\nclass Usuarios(forms.Form):\n\n def __init__(self, user, *args, **kwargs):\n super().__init__(*args, **kwargs)\n #use whatever method you need to determine options,\n # if self.user.is_staff: etc\n if self.user.rol == 'jefe':\n self.fields['rol'].choices = ('encargado', 'ENCARGADO'),\n else:\n self.fields['rol'].choices = ('jefe', 'JEFE'),\n\n" ]
[ 1 ]
[]
[]
[ "django", "html", "python" ]
stackoverflow_0074540370_django_html_python.txt
Q: How to assign list of teams to a list of users randomly in python For example: Team User USA Mark England Sean India Sri assigning users to different teams randomly A: You could use shuffle to shuffle a user list; from random import shuffle teams = ['USA', 'England', 'India'] users = ['Mark', 'Sean', 'Sri'] shuffle(users) print([(t,u) for t,u in zip(teams, users)]) To assign multiple teams to a player, you can use iter() to ensure there are no duplicates from random import shuffle teams = ['USA', 'England', 'India','France', 'Brazil', 'Australia'] users = ['Mark', 'Sean', 'Sri'] shuffle(teams) teams_iter = iter(teams) print([(u,(t1,t2)) for u,t1,t2 in zip(users, teams_iter, teams_iter )]) A: In random module use choice from random import choice choice(['USA','England','India']) 'India' For a dataframe of users you could use lambda to get a random choice for each user: df.apply (lambda x: choice(['USA','England','India'])) A: Here a posible solution from random import randrange User = ['Mark', 'Sean', 'Sri'] Team = ['USA', 'England', 'India'] _range = len(User) new_list = [] while len(new_list) != _range: try: rr = randrange( len(Team) ); new_list.append( [Team.pop(), User.pop( rr ) ] ) except: "" print(new_list)
How to assign list of teams to a list of users randomly in python
For example: Team User USA Mark England Sean India Sri assigning users to different teams randomly
[ "You could use shuffle to shuffle a user list;\nfrom random import shuffle\nteams = ['USA', 'England', 'India']\nusers = ['Mark', 'Sean', 'Sri']\nshuffle(users)\nprint([(t,u) for t,u in zip(teams, users)])\n\nTo assign multiple teams to a player, you can use iter() to ensure there are no duplicates\nfrom random import shuffle\nteams = ['USA', 'England', 'India','France', 'Brazil', 'Australia']\nusers = ['Mark', 'Sean', 'Sri']\nshuffle(teams)\nteams_iter = iter(teams)\nprint([(u,(t1,t2)) for u,t1,t2 in zip(users, teams_iter, teams_iter )])\n\n", "In random module use choice\nfrom random import choice\nchoice(['USA','England','India'])\n'India'\n\nFor a dataframe of users you could use lambda to get a random choice for each user:\ndf.apply (lambda x: choice(['USA','England','India']))\n\n", "Here a posible solution\n\n\nfrom random import randrange\n\nUser = ['Mark', 'Sean', 'Sri']\nTeam = ['USA', 'England', 'India']\n_range = len(User)\n\nnew_list = []\n\nwhile len(new_list) != _range:\n try:\n rr = randrange( len(Team) );\n new_list.append( [Team.pop(), User.pop( rr ) ] )\n except:\n \"\"\n\nprint(new_list)\n\n\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "function", "numpy", "pandas", "python", "random" ]
stackoverflow_0074540633_function_numpy_pandas_python_random.txt
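As a further illustration (an addition here, not from the answers above), random.sample builds the pairing in one step: it returns a shuffled copy without mutating the input, so zip can pair it directly with the teams.

import random

teams = ['USA', 'England', 'India']
users = ['Mark', 'Sean', 'Sri']

# sample(users, k=len(users)) is a shuffled copy; zip pairs it with teams
assignment = dict(zip(teams, random.sample(users, k=len(users))))
print(assignment)  # e.g. {'USA': 'Sri', 'England': 'Mark', 'India': 'Sean'}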
Q: Both codes work(caesar cipher): but one code rearranges the output Beginner python programmer here. Before I knew about using .index(), I used a workaround. Whilst it did work, something peculiar happened. The output string was re-arranged and I don't know why. Here is my code:
alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
text = input("Type your message:\n").lower()
shift = int(input("Type the shift number:\n"))
# input for text = "code", integer for shift = 5

# First attempt
for index, price in enumerate(alphabet):
    new_index = shift + index
    for loop in text:
        if loop == price:
            print(alphabet[new_index])

# Second attempt using .index
for letter in text:
    position = alphabet.index(letter)
    new_index = position + shift
    print(alphabet[new_index])

Here are the outputs:
output for first code = hijt
output for second code = htij
A: Your first code prints the word with the letters rearranged in alphabetical order (before using the cipher). You go through the alphabet in your enumerate, a-z, and you look for each letter in your word. For example, if your word was 'ba', with a shift of one, it should output 'cb' - but it outputs 'bc'. It is because your loop looks for 'a's and prints the converted values out before doing so for 'b's.
Your second is correct.
Note: I have no idea why your sample output is on a single line - print generally adds a newline, so each letter would have been on a separate line. Also, you should realize that your code doesn't work when the new letter goes past 'z' - it has an index out of range error.
Both codes work(caesar cipher): but one code rearranges the output
Beginner python programmer here. Before I knew about using .index(), I used a workaround. Whilst it did work, something peculiar happened. The output string was re-arranged and I don't know why. Here is my code:
alphabet = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z']
text = input("Type your message:\n").lower()
shift = int(input("Type the shift number:\n"))
# input for text = "code", integer for shift = 5

# First attempt
for index, price in enumerate(alphabet):
    new_index = shift + index
    for loop in text:
        if loop == price:
            print(alphabet[new_index])

# Second attempt using .index
for letter in text:
    position = alphabet.index(letter)
    new_index = position + shift
    print(alphabet[new_index])

Here are the outputs:
output for first code = hijt
output for second code = htij
[ "Your first code prints the word with the letters rearranged in alphabetical order (before using the cipher). You go through the alphabet in your enumerate, a-z, and you look for each letter in your word. For example, if your word was 'ba', with a shift of one, it should output 'cb' - but it outputs 'bc'. It is because your loop looks for 'a's and prints the converted values out before doing so for 'b's.\nYour second is correct.\nNote: I have no idea why your sample output is on a single line - print generally adds a newline, so each letter would have been on a separate line. Also, you should realize that your code doesn't work when the new letter goes past 'z' - it has an index out of range error.\n" ]
[ 0 ]
[]
[]
[ "caesar_cipher", "python", "python_3.x" ]
stackoverflow_0074540620_caesar_cipher_python_python_3.x.txt
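Following up on the answer's note about the index going out of range past 'z': a common fix (shown here as a sketch, not part of the original thread) is to wrap the index with the modulo operator.

alphabet = 'abcdefghijklmnopqrstuvwxyz'

def caesar(text, shift):
    # % 26 wraps indexes past 'z' back to the start of the alphabet
    return ''.join(alphabet[(alphabet.index(c) + shift) % 26] for c in text)

print(caesar('code', 5))  # htij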
Q: Group and take count by expanding values in column Pandas I wish to groupby and then create column headers with the values in a specific column and list their counts.
Data

location  box    type
ny        box11  hey
ny        box11  hey
ny        box13  hello
ny        box13  hello
ny        box13  hello
ca        box5   hi
ca        box8   hello

Desired

location  hey  hello  hi
ny        2    3      0
ca        0    1      1

I'm using crosstab, as an SO member assisted with this script: first group, then crosstab.
df1 = df.groupby(["location", "box"]).agg()
df2 = pd.crosstab([df["location"], df["box"]], df["type"])

Any suggestion is appreciated - still researching.
A: No need for box:
df1 = pd.crosstab(df["location"], df["type"])
Out[271]: 
type      hello  hey  hi
location                
ca            1    0   1
ny            3    2   0
Group and take count by expanding values in column Pandas
I wish to groupby and then create column headers with the values in a specific column and list their counts.
Data

location  box    type
ny        box11  hey
ny        box11  hey
ny        box13  hello
ny        box13  hello
ny        box13  hello
ca        box5   hi
ca        box8   hello

Desired

location  hey  hello  hi
ny        2    3      0
ca        0    1      1

I'm using crosstab, as an SO member assisted with this script: first group, then crosstab.
df1 = df.groupby(["location", "box"]).agg()
df2 = pd.crosstab([df["location"], df["box"]], df["type"])

Any suggestion is appreciated - still researching.
[ "No need box\ndf1 = pd.crosstab(df[\"location\"], df[\"type\"])\nOut[271]: \ntype hello hey hi\nlocation \nca 1 0 1\nny 3 2 0\n\n" ]
[ 3 ]
[]
[]
[ "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074540717_group_by_numpy_pandas_python.txt
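For comparison (an addition here, not from the thread), the same result can be produced with groupby plus unstack; this sketch assumes a DataFrame df with the columns shown above.

import pandas as pd

df = pd.DataFrame({
    'location': ['ny','ny','ny','ny','ny','ca','ca'],
    'box': ['box11','box11','box13','box13','box13','box5','box8'],
    'type': ['hey','hey','hello','hello','hello','hi','hello'],
})

# Count each (location, type) pair, then pivot types into columns
out = df.groupby(['location', 'type']).size().unstack(fill_value=0)
print(out)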
Q: vscode python URLError:
# requirements
import pandas as pd
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
from bs4 import BeautifulSoup

ua = UserAgent()
ua.ie

req = Request(df["URL"][0], headers={"User-Agent" : ua.ie})
html = urlopen(req).read()
soup_tmp = BeautifulSoup(html, "html.parser")

soup_tmp.find("p", "addy")
#soup_find.select_one(".addy")

URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
I'm a student who is studying Python on VS Code. I don't know what I'm missing TT. df["URL"][0] <- worked .. anybody help me ..?
+ I solved it!!!!!
import requests

req = requests.get(df["URL"][49], headers={'user-agent' : ua.ie})
soup_tmp = BeautifulSoup(req.content, 'html.parser')
soup_tmp.select_one('.addy')

it works !!!!!!
A: Obviously, the problem is df["URL"][0] in the line:
req = Request(df["URL"][0], headers={"User-Agent" : ua.ie})

At the same time, you didn't provide the url you used. I used Google to test that it worked well:
url='https://www.google.com'
req = Request(url, headers={"User-Agent" : ua.ie})

You need to check whether the url you use is correct, which is not a problem with the codes.
vscode python URLError:
# requirements
import pandas as pd
from urllib.request import Request, urlopen
from fake_useragent import UserAgent
from bs4 import BeautifulSoup

ua = UserAgent()
ua.ie

req = Request(df["URL"][0], headers={"User-Agent" : ua.ie})
html = urlopen(req).read()
soup_tmp = BeautifulSoup(html, "html.parser")

soup_tmp.find("p", "addy")
#soup_find.select_one(".addy")

URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known>
I'm a student who is studying Python on VS Code. I don't know what I'm missing TT. df["URL"][0] <- worked .. anybody help me ..?
+ I solved it!!!!!
import requests

req = requests.get(df["URL"][49], headers={'user-agent' : ua.ie})
soup_tmp = BeautifulSoup(req.content, 'html.parser')
soup_tmp.select_one('.addy')

it works !!!!!!
[ "Obviously, the problem is df[\"URL\"][0] in the line:\nreq = Request(df[\"URL\"][0], headers={\"User-Agent\" : ua.ie})\n\nAt the same time, you didn't provide the url you used. I used Google to test that it worked well:\nurl='https://www.google.com'\nreq = Request(url, headers={\"User-Agent\" : ua.ie})\n\nYou need to check whether the url you use is correct, which is not a problem with the codes.\n" ]
[ 0 ]
[]
[]
[ "error_handling", "python", "visual_studio_code" ]
stackoverflow_0074529323_error_handling_python_visual_studio_code.txt
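As a general pattern (an illustration added here, with a placeholder URL rather than the asker's data), wrapping the request in error handling makes it easier to see whether the URL itself is the problem.

import requests

url = 'https://example.com'  # placeholder; substitute the real df["URL"][i]
try:
    resp = requests.get(url, headers={'user-agent': 'Mozilla/5.0'}, timeout=10)
    resp.raise_for_status()  # raises for 4xx/5xx responses
except requests.exceptions.RequestException as e:
    print(f'Request failed: {e}')
else:
    print(resp.status_code, len(resp.content))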
Q: error: command '/usr/bin/clang' failed with exit code 1 I downloaded a not commonly-used software package from GitHub on a Mac M1. I am trying to compile and install it myself according to the instructions. I have encountered the following problem saying "command '/usr/bin/clang' failed with exit code 1". I did install Xcode on my Mac. Because the built-in gcc version is 4.2, I upgraded it using the brew install gcc@7 command and linked to this gcc version. But I still face the same compilation problem. Does anyone have instructions for me on how to solve this? The author does not maintain the source code anymore and I have struggled for one day and still cannot fix the problem.
A: The following steps worked!

Upgrade pip and related components:
python -m pip install --upgrade pip
pip install --upgrade wheel 
pip install --upgrade setuptools

Install openssl:
brew install openssl re2

Reinstall your package with some environment variables set:
LDFLAGS="-L$(/opt/homebrew/bin/brew --prefix openssl)/lib -L$(/opt/homebrew/bin/brew --prefix re2)/lib" CPPFLAGS="-I$(/opt/homebrew/bin/brew --prefix openssl)/include -I$(/opt/homebrew/bin/brew --prefix re2)/include" GRPC_BUILD_WITH_BORING_SSL_ASM="" GRPC_PYTHON_BUILD_SYSTEM_RE2=true GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=true GRPC_PYTHON_BUILD_SYSTEM_ZLIB=true pip install <your package name>

Source: https://candid.technology/error-command-usr-bin-clang-failed-with-exit-code-1/
error: command '/usr/bin/clang' failed with exit code 1
I downloaded a not commonly-used software package from GitHub on a Mac M1. I am trying to compile and install it myself according to the instructions. I have encountered the following problem saying "command '/usr/bin/clang' failed with exit code 1". I did install Xcode on my Mac. Because the built-in gcc version is 4.2, I upgraded it using the brew install gcc@7 command and linked to this gcc version. But I still face the same compilation problem. Does anyone have instructions for me on how to solve this? The author does not maintain the source code anymore and I have struggled for one day and still cannot fix the problem.
[ "The follow steps worked!\n\nupgrade pip and related components by:\n\npython -m pip install --upgrade pip\npip install –upgrade wheel \npip install –upgrade setuptools\n\n\ninstall openssl\n\nbrew install openssl re2\n\n\nreinstall your package with some environment to be set\n\nLDFLAGS=\"-L$(/opt/homebrew/bin/brew --prefix openssl)/lib -L$(/opt/homebrew/bin/brew --prefix re2)/lib\" CPPFLAGS=\"-I$(/opt/homebrew/bin/brew --prefix openssl)/include -I$(/opt/homebrew/bin/brew --prefix re2)/include\" GRPC_BUILD_WITH_BORING_SSL_ASM=\"\" GRPC_PYTHON_BUILD_SYSTEM_RE2=true GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=true GRPC_PYTHON_BUILD_SYSTEM_ZLIB=true pip install <your package name>\n\nSource: https://candid.technology/error-command-usr-bin-clang-failed-with-exit-code-1/\n" ]
[ 0 ]
[]
[]
[ "clang", "gcc", "macos", "python" ]
stackoverflow_0071671666_clang_gcc_macos_python.txt
Q: Remove weights from networkx graph I have a weighted Networkx graph G. I first want to make some operations on G with weights (which is why I just don't read the input and set weights=None) and then remove them from G afterwards. What is the most straightforward way to make it unweighted? I could just do:
G = nx.from_scipy_sparse_array(nx.to_scipy_sparse_array(G, weight=None))

Or loop through the G.adj dictionary and set weights=0, but both of these options feel too complicated. Something like:
G = G.drop_weights()

A: It is possible to access the data structure of the networkx graphs directly and remove any unwanted attributes.
At the end, what you can do is define a function that loops over the dictionaries and removes the "weight" attribute.
def drop_weights(G):
    '''Drop the weights from a networkx weighted graph.'''
    for node, edges in nx.to_dict_of_dicts(G).items():
        for edge, attrs in edges.items():
            attrs.pop('weight', None)

and an example of usage:
import networkx as nx

def drop_weights(G):
    '''Drop the weights from a networkx weighted graph.'''
    for node, edges in nx.to_dict_of_dicts(G).items():
        for edge, attrs in edges.items():
            attrs.pop('weight', None)

G = nx.Graph()
G.add_weighted_edges_from([(1,2,0.125), (1,3,0.75), (2,4,1.2), (3,4,0.375)])

print(nx.is_weighted(G)) # True

F = nx.Graph(G)
print(nx.is_weighted(F)) # True

# OP's suggestion
F = nx.from_scipy_sparse_array(nx.to_scipy_sparse_array(G,weight=None))
print(nx.is_weighted(F)) # True

# Correct solution
drop_weights(F)
print(nx.is_weighted(F)) # False

Note that even reconstructing the graph without the weights through nx.to_scipy_sparse_array is not enough, because the graph is constructed with weights; only these are set to 1.
Remove weights from networkx graph
I have a weighted Networkx graph G. I first want to make some operations on G with weights (which is why I just don't read the input and set weights=None) and then remove them from G afterwards. What is the most straightforward way to make it unweighted? I could just do:
G = nx.from_scipy_sparse_array(nx.to_scipy_sparse_array(G, weight=None))

Or loop through the G.adj dictionary and set weights=0, but both of these options feel too complicated. Something like:
G = G.drop_weights()
[ "It is possible to access the data structure of the networkx graphs directly and remove any unwanted attributes.\nAt the end, what you can do is define a function that loops over the dictionaries and remove the \"weight\" attribute.\ndef drop_weights(G):\n '''Drop the weights from a networkx weighted graph.'''\n for node, edges in nx.to_dict_of_dicts(G).items():\n for edge, attrs in edges.items():\n attrs.pop('weight', None)\n\nand an example of usage:\nimport networkx as nx\n\ndef drop_weights(G):\n '''Drop the weights from a networkx weighted graph.'''\n for node, edges in nx.to_dict_of_dicts(G).items():\n for edge, attrs in edges.items():\n attrs.pop('weight', None)\n\nG = nx.Graph()\nG.add_weighted_edges_from([(1,2,0.125), (1,3,0.75), (2,4,1.2), (3,4,0.375)])\n\nprint(nx.is_weighted(G)) # True\n\nF = nx.Graph(G)\nprint(nx.is_weighted(F)) # True\n\n# OP's suggestion\nF = nx.from_scipy_sparse_array(nx.to_scipy_sparse_array(G,weight=None))\nprint(nx.is_weighted(F)) # True\n\n# Correct solution\ndrop_weights(F)\nprint(nx.is_weighted(F)) # False\n\nNote that even reconstructing the graph without the weights through nx.to_scipy_sparse_array is not enough because the graph is constructed with weights, only these are set to 1.\n" ]
[ 0 ]
[]
[]
[ "networkx", "python" ]
stackoverflow_0072045825_networkx_python.txt
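An equivalent variant (added here for illustration, not from the thread) iterates the edge view directly, which avoids building the dict-of-dicts intermediate; G.edges(data=True) yields the live attribute dicts, so popping from them mutates the graph.

import networkx as nx

def drop_weights_inplace(G):
    # Remove the 'weight' attribute from every edge of G in place.
    for _, _, attrs in G.edges(data=True):
        attrs.pop('weight', None)

G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 0.125), (1, 3, 0.75)])
drop_weights_inplace(G)
print(nx.is_weighted(G))  # False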
Q: Pandas - improve performance when grouping and applying custom function I have a dataframe like this. My data size is approximately over 100,000 rows.

Category  val1  val2  val3  val4
A         1     2     3     4
A         4     3     2     1
B         1     2     3     4
B         3     4     1     2
B         1     5     3     1

I'd like to group by the Category column first, and then calculate with my own method in each group. The custom method returns a float value cal. The desired output is in dictionary form with the results.
{
    'A': { 'cal': a },
    'B': { 'cal': b },
    ...
}

I tried with groupby and apply of pandas.
def my_cal(df):
    ret = ...
    return {'cal': ret}

df.groupby('Category').apply(lambda grp: my_cal(grp)).to_dict()

When I measured the time in a jupyter notebook with timeit, it takes over 1 second, which is too long for me. Is there a way to optimize this and perform it in reduced time?
------------- EDIT -------------
Updated my_cal's arguments from dataframe to arrays.
def my_cal(val1: float, val2: float, val3: float, val4: float):
    ret = inner_cal(val1, val2, val3, val4)  # inner_cal is in external library
    return {'cal': ret}

df.groupby('Category').apply(lambda grp: my_cal(grp['val1'].to_numpy(),
                                                grp['val2'].to_numpy(),
                                                grp['val3'].to_numpy(),
                                                grp['val4'].to_numpy())).to_dict()

A: Here are some things you could try:

Reduce the number of rows, by removing elements with invalid values, prior to applying the group by (if possible).
Reduce the data frame's memory footprint, by shrinking its columns' data types.
Use numba, to generate an optimized machine code version of the my_cal function.

You can also find additional strategies that you might consider trying here: https://pandas.pydata.org/docs/user_guide/enhancingperf.html#
Shrinking columns data types
The following code enables you to reduce your data frame's memory usage, by converting each column data type to its smallest representation possible. For example, if you have a column with values stored as int64, it will try to determine whether the column's values range can be represented as int8, int16, or int32. In addition it can also convert values with object data type to category, and int to uint.

import numpy as np
import pandas as pd


def df_shrink_dtypes(df, skip=None, obj2cat=True, int2uint=False):
    """
    Try to shrink data types for ``DataFrame`` columns.

    Allows ``object`` -> ``category``, ``int`` -> ``uint``, and exclusion.

    Parameters
    ----------
    df : pandas.DataFrame
        The dataframe to shrink.
    skip : list, default=[]
        The names of the columns to skip.
    obj2cat : bool, default=True
        Whether to cast ``object`` columns to ``category``.
    int2uint : bool, default=False
        Whether to cast ``int`` columns to ``uint``.

    Returns
    -------
    new_dtypes : dict
        The new data types for the columns.
    """
    if skip is None:
        skip = []
    # 1: Build column filter and type-map
    excl_types, skip = {"category", "datetime64[ns]", "bool"}, set(skip)

    typemap = {
        "int": [
            (np.dtype(x), np.iinfo(x).min, np.iinfo(x).max)
            for x in (np.int8, np.int16, np.int32, np.int64)
        ],
        "uint": [
            (np.dtype(x), np.iinfo(x).min, np.iinfo(x).max)
            for x in (np.uint8, np.uint16, np.uint32, np.uint64)
        ],
        "float": [
            (np.dtype(x), np.finfo(x).min, np.finfo(x).max)
            for x in (np.float32, np.float64, np.longdouble)
        ],
    }
    if obj2cat:
        # User wants to "categorify" dtype('Object'),
        # which may not always save space.
        typemap["object"] = "category"
    else:
        excl_types.add("object")

    new_dtypes = {}
    exclude = lambda dt: dt[1].name not in excl_types and dt[0] not in skip

    for c, old_t in filter(exclude, df.dtypes.items()):
        t = next((v for k, v in typemap.items() if old_t.name.startswith(k)), None)

        # Find the smallest type that fits
        if isinstance(t, list):
            if int2uint and t == typemap["int"] and df[c].min() >= 0:
                t = typemap["uint"]
            new_t = next(
                (r[0] for r in t if r[1] <= df[c].min() and r[2] >= df[c].max()), None
            )
            if new_t and new_t == old_t:
                new_t = None
        else:
            new_t = t if isinstance(t, str) else None
        if new_t:
            new_dtypes[c] = new_t
    return new_dtypes


def df_shrink(df, skip=None, obj2cat=True, int2uint=False):
    """Reduce memory usage, shrinking data types for ``DataFrame`` columns.

    Parameters
    ----------
    df : pandas.DataFrame
        The dataframe to shrink.
    skip : list, default=[]
        The names of the columns to skip.
    obj2cat : bool, default=True
        Whether to cast ``object`` columns to ``category``.
    int2uint : bool, default=False
        Whether to cast ``int`` columns to ``uint``.

    Returns
    -------
    df : pandas.DataFrame
        The dataframe with the new data types.

    See Also
    --------
    - :func:`df_shrink_dtypes`: function that determines the new data types to
      use for each column.
    """
    if skip is None:
        skip = []
    dt = df_shrink_dtypes(df, skip, obj2cat=obj2cat, int2uint=int2uint)
    return df.astype(dt)

Example:

# Generating dataframe with 100,000 rows, and 5 columns:

nrows = 100_000
cats = ["A", "B", "C", "D", "E", "F", "G"]

df = pd.DataFrame(
    {"Category": np.random.choice(cats, size=nrows),
     "val1": np.random.randint(1, 8, nrows),
     "val2": np.random.randint(1, 8, nrows),
     "val3": np.random.randint(1, 8, nrows),
     "val4": np.random.randint(1, 8, nrows)}
)

df.dtypes
#
# Category    object
# val1         int64
# val2         int64
# val3         int64
# val4         int64
# dtype: object

# Applying `df_shrink` to `df` columns:
_df = df_shrink(df)

_df.dtypes
#
# Category    category
# val1            int8
# val2            int8
# val3            int8
# val4            int8
# dtype: object

# Comparing memory usage of `df` vs. `_df`:

df.info(memory_usage=True)
# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 100000 entries, 0 to 99999
# Data columns (total 5 columns):
#  #   Column    Non-Null Count   Dtype
# ---  ------    --------------   -----
#  0   Category  100000 non-null  object
#  1   val1      100000 non-null  int64
#  2   val2      100000 non-null  int64
#  3   val3      100000 non-null  int64
#  4   val4      100000 non-null  int64
# dtypes: int64(4), object(1)
# memory usage: 3.8+ MB  <---- Original memory footprint

_df.info(memory_usage=True)
# <class 'pandas.core.frame.DataFrame'>
# RangeIndex: 100000 entries, 0 to 99999
# Data columns (total 5 columns):
#  #   Column    Non-Null Count   Dtype
# ---  ------    --------------   -----
#  0   Category  100000 non-null  category
#  1   val1      100000 non-null  int8
#  2   val2      100000 non-null  int8
#  3   val3      100000 non-null  int8
#  4   val4      100000 non-null  int8
# dtypes: category(1), int8(4)
# memory usage: 488.8 KB  <---- Almost 8x reduction!

Using numba to generate an optimized machine code version of my_cal function
To install numba on your Python environment, execute the following command:
pip install -U numba

To use Numba with pandas, you'll have to define my_cal, decorating it with @jit. You'll also need to pass the underlying grp values as NumPy arrays. You can do so by using the to_numpy() method. Here's an example of how your function should look:

import numpy as np
import pandas as pd
import numba

# NOTE: define each column separately, and inform each data type, to improve performance.
@numba.jit
def my_cal(val1: int, val2: int, val3: int, val4: int):
    return val1 + val2 + val3 + val4

# Using numba optimized version of `my_cal`:

%%timeit
_df.groupby('Category').apply(
    lambda grp: my_cal(
        grp['val1'].to_numpy(),
        grp['val2'].to_numpy(),
        grp['val3'].to_numpy(),
        grp['val4'].to_numpy(),
    )
).to_dict()
# 6.33 ms ± 221 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

Execution time comparison
The following code compares the different ways we could implement the DataFrame.groupby/apply operation:

# OPTION 1: original implementation
df.groupby('Category').apply(lambda grp: grp.sum(numeric_only=True)).to_dict()
# 18.9 ms ± 500 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# OPTION 2: original implementation with memory optimized dataframe
_df.groupby('Category').apply(lambda grp: grp.sum(numeric_only=True)).to_dict()
# 9.96 ms ± 140 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

# OPTION 3: Using numba optimized `my_cal` function, with memory optimized dataframe
_df.groupby('Category').apply(
    lambda grp: my_cal(
        grp['val1'].to_numpy(),
        grp['val2'].to_numpy(),
        grp['val3'].to_numpy(),
        grp['val4'].to_numpy(),
    )
).to_dict()
# 6.33 ms ± 221 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

Results Summary:

Implementation  Execution Time Per Loop
OPTION 1        18.9 ms ± 500 µs
OPTION 2        9.96 ms ± 140 µs
OPTION 3        6.33 ms ± 221 µs

Edit: using numba to optimize my_cal function
Caveats
Numba is best at accelerating functions that apply numerical functions to NumPy arrays. If you try to @jit a function that contains unsupported Python or NumPy code, compilation will revert to object mode, which will most likely not speed up your function.
The warning you're receiving is because my_cal is calling an inner function that is not being @jit optimized, and therefore, numba is unable to optimize your code. If you have access and can make changes to inner_cal, then you could try also including the @jit decorator on it and specifying its parameters' type hints.
The problem with that approach is that if inner_cal contains calls to other functions, you'll have to do the same thing to these other functions. Before you choose to convert all inner functions to numba, I strongly suggest you analyze your code, to determine if those inner functions are also operating on top of numpy arrays. Otherwise it's a waste of time.
To give you an example, here's how your inner_cal function should look, if you use numba:

@numba.jit
def inner_cal(val1: float, val2: float, val3: float, val4: float) -> float:
    return val1 + val2 + val3 + val4


@numba.jit
def my_cal(val1: float, val2: float, val3: float, val4: float) -> dict:
    ret = inner_cal(val1, val2, val3, val4)  # inner_cal is in external library
    return {'cal': ret}
Pandas - improve performance when grouping and applying custom function
I have a dataframe like this. My data size is approximately over 100,000 rows. Category val1 val2 val3 val4 A 1 2 3 4 A 4 3 2 1 B 1 2 3 4 B 3 4 1 2 B 1 5 3 1 I'd like to group with Category column at first, and calculate with my own method in each group. Custom method returns a float value cal. The desired output is in a dictionary form with results. { 'A': { 'cal': a }, 'B:' { 'cal': b }, ... } I tried with groupby and apply of pandas. def my_cal(df): ret = ... return {'cal': ret} df.groupby('Category').apply(lambda grp: my_cal(grp)).to_dict() When I measured a time in jupyter notebook with timeit, it takes over 1 second which is too long for me. Is there a way to optimize this and perform with reduced time? ------------- EDIT ------------- Updated my_cal's arguments from dataframe to array. def my_cal(val1: float, val2: float, val3: float, val4: float): ret = inner_cal(val1, val2, val3, val4) # inner_cal is in external library return {'cal': ret} df.groupby('Category').apply(lambda grp: my_cal(grp['val1'].to_numpy(), grp['val2'].to_numpy(), grp['val3'].to_numpy(), grp['val4'].to_numpy())).to_dict()
[ "Here are some things you could try:\n\nReduce the number of rows, by removing elements with invalid values, prior to applying the group by (if possible).\nReduce the data frame's memory footprint, by shrinking its columns data types.\nUse numba, to generate an optimized machine code version of my_cal function.\n\nYou can also find additional strategies that you might consider trying here: https://pandas.pydata.org/docs/user_guide/enhancingperf.html#\nShrinking columns data types\nThe following code enables you to reduce your data frame's memory usage, by converting each column data type to its smallest representation possible. For example, if you have a column with values stored as int64, it will try to determine whether the column's values range can be represented as int8, int16, or int32. In addition it can also convert values with object data type to category, and int to uint.\n\nimport numpy as np\nimport pandas as pd\n\n\ndef df_shrink_dtypes(df, skip=None, obj2cat=True, int2uint=False):\n \"\"\"\n Try to shrink data types for ``DataFrame`` columns.\n\n Allows ``object`` -> ``category``, ``int`` -> ``uint``, and exclusion.\n\n Parameters\n ----------\n df : pandas.DataFrame\n The dataframe to shrink.\n skip : list, default=[]\n The names of the columns to skip.\n obj2cat : bool, default=True\n Whether to cast ``object`` columns to ``category``.\n int2uint : bool, default=False\n Whether to cast ``int`` columns to ``uint``.\n\n Returns\n -------\n new_dtypes : dict\n The new data types for the columns.\n \"\"\"\n if skip is None:\n skip = []\n # 1: Build column filter and type-map\n excl_types, skip = {\"category\", \"datetime64[ns]\", \"bool\"}, set(skip)\n\n typemap = {\n \"int\": [\n (np.dtype(x), np.iinfo(x).min, np.iinfo(x).max)\n for x in (np.int8, np.int16, np.int32, np.int64)\n ],\n \"uint\": [\n (np.dtype(x), np.iinfo(x).min, np.iinfo(x).max)\n for x in (np.uint8, np.uint16, np.uint32, np.uint64)\n ],\n \"float\": [\n (np.dtype(x), np.finfo(x).min, np.finfo(x).max)\n for x in (np.float32, np.float64, np.longdouble)\n ],\n }\n if obj2cat:\n # User wants to \"categorify\" dtype('Object'),\n # which may not always save space.\n typemap[\"object\"] = \"category\"\n else:\n excl_types.add(\"object\")\n\n new_dtypes = {}\n exclude = lambda dt: dt[1].name not in excl_types and dt[0] not in skip\n\n for c, old_t in filter(exclude, df.dtypes.items()):\n t = next((v for k, v in typemap.items() if old_t.name.startswith(k)), None)\n\n # Find the smallest type that fits\n if isinstance(t, list):\n if int2uint and t == typemap[\"int\"] and df[c].min() >= 0:\n t = typemap[\"uint\"]\n new_t = next(\n (r[0] for r in t if r[1] <= df[c].min() and r[2] >= df[c].max()), None\n )\n if new_t and new_t == old_t:\n new_t = None\n else:\n new_t = t if isinstance(t, str) else None\n if new_t:\n new_dtypes[c] = new_t\n return new_dtypes\n\n\ndef df_shrink(df, skip=None, obj2cat=True, int2uint=False):\n \"\"\"Reduce memory usage, shrinking data types for ``DataFrame`` columns.\n\n Parameters\n ----------\n df : pandas.DataFrame\n The dataframe to shrink.\n skip : list, default=[]\n The names of the columns to skip.\n obj2cat : bool, default=True\n Whether to cast ``object`` columns to ``category``.\n int2uint : bool, default=False\n Whether to cast ``int`` columns to ``uint``.\n\n Returns\n -------\n df : pandas.DataFrame\n The dataframe with the new data types.\n\n See Also\n --------\n - :func:`df_shrink_dtypes`: function that determines the new data types to\n use for each column.\n \"\"\"\n if skip 
is None:\n skip = []\n dt = df_shrink_dtypes(df, skip, obj2cat=obj2cat, int2uint=int2uint)\n return df.astype(dt)\n\n\nExample:\n\n# Generating dataframe with 100,000 rows, and 5 columns:\n\nnrows = 100_000\ncats = [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\"]\n\ndf = pd.DataFrame(\n {\"Category\": np.random.choice(cats, size=nrows),\n \"val1\": np.random.randint(1, 8, nrows),\n \"val2\": np.random.randint(1, 8, nrows),\n \"val3\": np.random.randint(1, 8, nrows),\n \"val4\": np.random.randint(1, 8, nrows)}\n)\n\ndf.dtypes\n#\n# Category object\n# val1 int64\n# val2 int64\n# val3 int64\n# val4 int64\n# dtype: object\n\n# Applying `df_shrink` to `df` columns:\n_df = df_shrink(df)\n\n_df.dtypes\n#\n# Category category\n# val1 int8\n# val2 int8\n# val3 int8\n# val4 int8\n# dtype: object\n\n# Comparring memory usage of `df` vs. `_df`:\n\ndf.info(memory_usage=True)\n# <class 'pandas.core.frame.DataFrame'>\n# RangeIndex: 100000 entries, 0 to 99999\n# Data columns (total 5 columns):\n# # Column Non-Null Count Dtype \n# --- ------ -------------- ----- \n# 0 Category 100000 non-null object\n# 1 val1 100000 non-null int64 \n# 2 val2 100000 non-null int64 \n# 3 val3 100000 non-null int64 \n# 4 val4 100000 non-null int64 \n# dtypes: int64(4), object(1)\n# memory usage: 3.8+ MB <---- Original memory footprint\n\n_df.info(memory_usage=True)\n# <class 'pandas.core.frame.DataFrame'>\n# RangeIndex: 100000 entries, 0 to 99999\n# Data columns (total 5 columns):\n# # Column Non-Null Count Dtype \n# --- ------ -------------- ----- \n# 0 Category 100000 non-null category\n# 1 val1 100000 non-null int8 \n# 2 val2 100000 non-null int8 \n# 3 val3 100000 non-null int8 \n# 4 val4 100000 non-null int8 \n# dtypes: category(1), int8(4)\n# memory usage: 488.8 KB <---- Almost 8x reduction!\n\nUsing numba to generate an optimized machine code version of my_cal function\nTo install numba on your Python environment, execute the following command:\npip install -U numba\n\nTo use Numba with pandas, you'll have to define my_cal, decorating it with @jit. You'll also need to pass the underlying grp values as NumPy arrays. You can do so by using the to_numpy() method. Here's an example on how your function should look like:\n\nimport numpy as np\nimport pandas as pd\nimport numba\n\n# NOTE: define each column separately, and inform each data type, to improve performance.\n@numba.jit\ndef my_cal(val1: int, val2: int, val3: int, val4: int):\n return val1 + val2 + val3 + val4\n\n# Using numba optimized version of `my_cal`:\n\n%%timeit\n_df.groupby('Category').apply(\n lambda grp: my_cal(\n grp['val1'].to_numpy(),\n grp['val2'].to_numpy(),\n grp['val3'].to_numpy(),\n grp['val4'].to_numpy(),\n )\n).to_dict()\n# 6.33 ms ± 221 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\n\nExecution time comparison\nThe following code compares the different ways we could implement the DataFrame.groupby/apply operation:\n\n# OPTION 1: original implementation\ndf.groupby('Category').apply(lambda grp: grp.sum(numeric_only=True)).to_dict()\n# 18.9 ms ± 500 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)\n\n\n# OPTION 2: original implementation with memory optimized dataframe\n_df.groupby('Category').apply(lambda grp\ngrp.sum(numeric_only=True)).to_dict()\n# 9.96 ms ± 140 µs per loop (mean ± std. dev. 
of 7 runs, 100 loops each)\n\n# OPTION 3: Using numba optimized `my_cal` function, with memory optimized dataframe\n_df.groupby('Category').apply(\n lambda grp: my_cal(\n grp['val1'].to_numpy(),\n grp['val2'].to_numpy(),\n grp['val3'].to_numpy(),\n grp['val4'].to_numpy(),\n )\n).to_dict()\n# 6.33 ms ± 221 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)\n\nResults Summary:\n\n\n\n\nImplementation\nExecution Time Per Loop\n\n\n\n\nOPTION 1\n18.9 ms ± 500 µs\n\n\nOPTION 2\n9.96 ms ± 140 µs\n\n\nOPTION 3\n6.33 ms ± 221 µs\n\n\n\nEdit: using numba to optimize my_cal function\nCaveats\nNumba is best at accelerating functions that apply numerical functions to NumPy arrays. If you try to @jit a function that contains unsupported Python or NumPy code, compilation will revert object mode which will mostly likely not speed up your function.\nThe warning you're receiving is because my_cal is calling an inner function that is not being @jit optimized, and therefore, numba is unable to optimize your code. If you have access and can make changes to inner_cal, then you could try also including the @jit decorator to it and specifying its parameters type hints.\nThe problem with that approach is that if inner_cal contains calls to other functions, you'll have to do the same thing to these other functions. Before you chose to convert all inner functions to numba, I strongly suggest you analyze your code, to determine if those inner functions are also operating on top of numpy arrays. Otherwise it's a waste of time.\nTo give you an example, here's how your inner_cal function should look like, If you use numba:\n\n@numba.jit\ndef inner_cal(val1: float, val2: float, val3: float, val4: float) -> float:\n return val1 + val2 + val3 + val4\n\n\n@numba.jit\ndef my_cal(val1: float, val2: float, val3: float, val4: float) -> dict:\n ret = inner_cal(val1, val2, val3, val4) # inner_cal is in external library\n return {'cal': ret}\n\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074539837_pandas_python.txt
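One more pandas-native option worth knowing (an addition here, not part of the thread): recent pandas versions can compile user-defined groupby aggregations through Numba via engine='numba', which avoids the manual to_numpy() plumbing. With that engine the function must accept (values, index) as NumPy arrays; a minimal sketch:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Category': ['A', 'A', 'B', 'B', 'B'],
    'val1': [1, 4, 1, 3, 1],
})

def my_sum(values, index):
    # with engine="numba", pandas passes each group's values as an ndarray
    total = 0.0
    for v in values:
        total += v
    return total

# requires numba to be installed
out = df.groupby('Category')['val1'].agg(my_sum, engine='numba')
print(out.to_dict())  # {'A': 5.0, 'B': 5.0}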
Q: Edit attribute in script string with AST I'm unfamiliar with the AST module and would appreciate any insight. If, for example, I have a string that contains a valid python script such as
import sys  # Just any module
class SomeClass:
    def __init__(self):
        self.x = 10
        self.b = 15
    def a_func(self):
        print(self.x)

I would like to be able to programmatically edit lines such as changing self.x = 10 to something like self.x = 20. I can break it down somewhat with ast via:
some_string = "..."  # String of class above
for body_item in ast.parse(some_string).body:
    ...

But this doesn't feel like the "right" way (not that there is a right way, since this is somewhat niche). I was hoping someone could correct me towards something cleaner, or just better.
A: You can start by using ast.dump to get an idea of the AST structure of the code you're dealing with:
import ast

code='self.x = 10'
print(ast.dump(ast.parse(code), indent=2))

This outputs:
Module(
  body=[
    Assign(
      targets=[
        Attribute(
          value=Name(id='self', ctx=Load()),
          attr='x',
          ctx=Store())],
      value=Constant(value=10))],
  type_ignores=[])

From which you can see what you want to look for is an Assign node where the first of targets is an Attribute node whose value is a Name node with an id of 'self' and an attr of 'x'.
With this knowledge, you can then use ast.walk to traverse the AST nodes to look for a node with the aforementioned properties, modify its value to a Constant node with a value of 20, and finally use ast.unparse to convert AST back to a string of code:
import ast

code = '''
import sys #Just any module
class SomeClass:
    def __init__(self):
        self.x = 10
        self.b = 15
    def a_func(self):
        print(self.x)
'''
tree = ast.parse(code)
for node in ast.walk(tree):
    if (
        isinstance(node, ast.Assign) and
        isinstance((target := node.targets[0]), ast.Attribute) and
        isinstance(target.value, ast.Name) and
        target.value.id == 'self' and
        target.attr == 'x'
    ):
        node.value = ast.Constant(value=20)
print(ast.unparse(tree))

This outputs:
class SomeClass:

    def __init__(self):
        self.x = 20
        self.b = 15

    def a_func(self):
        print(self.x)

Note that ast.unparse requires Python 3.10 or later. If you're using an earlier version, you can use astunparse.unparse from the astunparse package instead.
Demo: https://trinket.io/python3/3b09901326
Edit attribute in script string with AST
I'm unfamiliar with the AST module and would appreciate any insight. If, for example, I have a string that contains a valid python script such as import sys #Just any module class SomeClass: def __init__(self): self.x = 10 self.b = 15 def a_func(self): print(self.x) I would like to be able to programmatically edit lines such as changing self.x = 10 to something like self.x = 20. I can break it down somewhat with ast via: some_string = "..." #String of class above for body_item in ast.parse(some_string): ... But this doesn't feel like the "right" way(not that there is a right way since this is somewhat niche). I was hoping someone could correct me towards something cleaner, or just better.
[ "You can start by using ast.dump to get an idea of the AST structure of the code you're dealing with:\nimport ast\n\ncode='self.x = 10'\nprint(ast.dump(ast.parse(code), indent=2))\n\nThis outputs:\nModule(\n body=[\n Assign(\n targets=[\n Attribute(\n value=Name(id='self', ctx=Load()),\n attr='x',\n ctx=Store())],\n value=Constant(value=10))],\n type_ignores=[])\n\nFrom which you can see what you want to look for is an Assign node where the first of targets is an Attribute node whose value is a Name node with an id of 'self' and an attr of 'x'.\nWith this knowledge, you can then use ast.walk to traverse the AST nodes to look for a node with the aforementioned properties, modify its value to a Constant node with a value of 20, and finally use ast.unparse to convert AST back to a string of code:\nimport ast\n\ncode = '''\nimport sys #Just any module\nclass SomeClass:\n def __init__(self):\n self.x = 10\n self.b = 15\n def a_func(self):\n print(self.x)\n'''\ntree = ast.parse(code)\nfor node in ast.walk(tree):\n if (\n isinstance(node, ast.Assign) and\n isinstance((target := node.targets[0]), ast.Attribute) and\n isinstance(target.value, ast.Name) and\n target.value.id == 'self' and\n target.attr == 'x'\n ):\n node.value = ast.Constant(value=20)\nprint(ast.unparse(tree))\n\nThis outputs:\nclass SomeClass:\n\n def __init__(self):\n self.x = 20\n self.b = 15\n\n def a_func(self):\n print(self.x)\n\nNote that ast.unparse requires Python 3.10 or later. If you're using an earlier version, you can use astunparse.unparse from the astunparse package instead.\nDemo: https://trinket.io/python3/3b09901326\n" ]
[ 1 ]
[]
[]
[ "abstract_syntax_tree", "metaprogramming", "python" ]
stackoverflow_0074540739_abstract_syntax_tree_metaprogramming_python.txt
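The same edit can also be expressed with ast.NodeTransformer, the class the ast docs recommend for tree rewriting; this is a sketch of that alternative, equivalent to the walk-based version above.

import ast

class RewriteSelfX(ast.NodeTransformer):
    # Replace `self.x = <anything>` with `self.x = 20`.
    def visit_Assign(self, node):
        target = node.targets[0]
        if (isinstance(target, ast.Attribute)
                and isinstance(target.value, ast.Name)
                and target.value.id == 'self'
                and target.attr == 'x'):
            node.value = ast.Constant(value=20)
        return node

code = "class C:\n    def __init__(self):\n        self.x = 10\n"
tree = RewriteSelfX().visit(ast.parse(code))
ast.fix_missing_locations(tree)  # give the new Constant node source locations
print(ast.unparse(tree))  # ast.unparse is available in recent Python versions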
Q: (Python) How to run tasks concurrently (and independently) without using asyncio gather? I have a pool of tasks in a list, and each of those tasks takes a different amount of time to complete. To mimic this I'll use this piece of code:
tasks = [asyncio.create_task(asyncio.sleep(i)) for i in range(10)]

If I use the asyncio gather API like this: await asyncio.gather(*tasks), then this statement blocks the event loop until all the tasks in the current batch finish. The problem however is that in my program I want to execute those tasks with a ThreadPoolExecutor-like interface with x number of workers. This will allow for executing tasks more efficiently (tasks which finish early will be replaced by another task instead of waiting for all tasks of that batch to complete). How can I achieve this using python asyncio?
A: asyncio.gather() waits for the tasks to finish. If you don’t want to wait, just don’t call asyncio.gather().
The tasks will kick off even if you don’t call gather(), as long as your event loop keeps running.
To keep an event loop running, call loop.run_forever() as your entry point, or call asyncio.run(coro()) and in coro() call await asyncio.Future() to “wait forever”.
You may also check out the TaskGroup class of Python 3.11 to see if its semantics meet what you need.
(Python) How to run tasks concurrently (and independently) without using asyncio gather?
I have a pool of tasks in a list, and each of those tasks takes a different amount of time to complete. To mimic this I'll use this piece of code:
tasks = [asyncio.create_task(asyncio.sleep(i)) for i in range(10)]

If I use the asyncio gather API like this: await asyncio.gather(*tasks), then this statement blocks the event loop until all the tasks in the current batch finish. The problem however is that in my program I want to execute those tasks with a ThreadPoolExecutor-like interface with x number of workers. This will allow for executing tasks more efficiently (tasks which finish early will be replaced by another task instead of waiting for all tasks of that batch to complete). How can I achieve this using python asyncio?
[ "asyncio.gather() waits for the tasks to finish. If you don’t want to wait, just don’t call asyncio.gather().\nThe tasks will kick off even if you don’t call gather(), as long as your event loop keeps running.\nTo keep an event loop running, call loop.run_forever() as your entry point, or call asyncio.run(coro()) and in coro() call await asyncio.Future() to “wait forever”.\nYou may also check out the TaskGroup class of Python 3.11 to see if it’s semantics meet what you need.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "python_asyncio", "threadpool" ]
stackoverflow_0074477978_python_python_3.x_python_asyncio_threadpool.txt
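To get the ThreadPoolExecutor-style "x workers" behaviour the question asks for, a common pattern (sketched here as an addition, not from the answer) is a fixed set of worker tasks pulling jobs from an asyncio.Queue; a finished job immediately frees its worker for the next one.

import asyncio

async def worker(queue, results):
    while True:
        i = await queue.get()
        try:
            await asyncio.sleep(i)  # stand-in for the real job
            results.append(i)
        finally:
            queue.task_done()

async def main(num_workers=3):
    queue, results = asyncio.Queue(), []
    for i in range(10):
        queue.put_nowait(i)
    workers = [asyncio.create_task(worker(queue, results)) for _ in range(num_workers)]
    await queue.join()       # wait until every queued job is done
    for w in workers:
        w.cancel()            # the idle workers would otherwise wait forever
    print(results)

asyncio.run(main())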
Q: how to print object data with user input
userInput = input("Enter Name: ")

class person:
    def __init__(self, name, age, job):
        self.name = name
        self.age = age
        self.job = job

People = [
    person('Josh',23,'Consultant'),
    person('Maya',25,'Accountant'),
    person('Dan',32,'Social Worker'),
    person('Keon',38,'Biomaterials Developer'),
    person('Michelle',28,'Surgeon'),
    person('Joey',34,'Lawyer')
]

so if userInput = Josh, it would print Josh's name, age, and job
A: The slow method is to iterate over the list and find a match.
def find_person(people, name):
    for person in people:
        if person.name == name:
            return person
    raise ValueError("No matching person found")

This is O(n) in the size of the list, which will cause problems if your list of people is large.
But since you know in advance that you're going to be looking up people by name, you can use the person's name as a dictionary key, effectively indexing by the name rather than an integer value. So rather than creating a list of people, create a dictionary
people = {
    'Josh': person('Josh',23,'Consultant'),
    'Maya': person('Maya',25,'Accountant'),
    'Dan': person('Dan',32,'Social Worker'),
    'Keon': person('Keon',38,'Biomaterials Developer'),
    'Michelle': person('Michelle',28,'Surgeon'),
    'Joey': person('Joey',34,'Lawyer'),
}

Then "look up a person by name" is as simple as people[name]. Of course, you'll want to hide this complexity behind a class or something and make the client-facing side look like a list, or whatever data structure you want it to look like. This approach also currently assumes that no two people will ever have the same first name, though you can work around that by having a dictionary of lists if there's a possibility of duplicates.
how to print object data with user input
userInput = input("Enter Name: ")

class person:
    def __init__(self, name, age, job):
        self.name = name
        self.age = age
        self.job = job

People = [
    person('Josh',23,'Consultant'),
    person('Maya',25,'Accountant'),
    person('Dan',32,'Social Worker'),
    person('Keon',38,'Biomaterials Developer'),
    person('Michelle',28,'Surgeon'),
    person('Joey',34,'Lawyer')
]

so if userInput = Josh, it would print Josh's name, age, and job
[ "The slow method is to iterate over the list and find a match.\ndef find_person(people, name):\n for person in people:\n if person.name == name:\n return person\n raise ValueError(\"No matching person found\")\n\nThis is O(n) in the size of the list, which will cause problems if your list of people is large.\nBut since you know in advance that you're going to be looking up people by name, you can use the person's name as a dictionary key, effectively indexing by the name rather than an integer value. So rather than creating a list of people, creating a dictionary\npeople = {\n 'Josh': person('Josh',23,'Consultant'),\n 'Maya': person('Maya',25,'Accountant'),\n 'Dan': person('Dan',32,'Social Worker'),\n 'Keon': person('Keon',38,'Biomaterials Developer'),\n 'Michelle': person('Michelle',28,'Surgeon'),\n 'Joey': person('Joey',34,'Lawyer'),\n}\n\nThen \"look up a person by name\" is as simple as people[name]. Of course, you'll want to hide this complexity behind a class or something and make the client-facing side look like a list, or whatever data structure you want it to look like. This approach also currently assumes that no two people will ever have the same first name, though you can work around that by having a dictionary of lists if there's a possibility of duplicates.\n" ]
[ 0 ]
[]
[]
[ "class", "input", "object", "python", "python_3.x" ]
stackoverflow_0074540836_class_input_object_python_python_3.x.txt
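Extending the answer's closing remark about duplicate first names (an illustration added here), a dict of lists keeps the O(1) lookup while tolerating repeated names; People refers to the list defined in the question.

from collections import defaultdict

people_by_name = defaultdict(list)
for p in People:  # People as defined in the question
    people_by_name[p.name].append(p)

name = input("Enter Name: ")
for p in people_by_name.get(name, []):
    print(p.name, p.age, p.job)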
Q: How to merge/concat dataframe and dummies without duplicate columns I have a dataframe with a pair of columns containing categorical data (they are the same, differing only by the amount of values for their categories), and I've made two sets of dummies for those two columns, viz:
dummies1 = pd.get_dummies(df.loc[df['col1'].isin(columns_valuecounts_top3.index)], columns=['col1', 'col2'])
dummies2 = pd.get_dummies(df.loc[df['col2'].isin(columns_valuecounts_top3.index)], columns=['col1', 'col2'])

But, instead of creating two datasets with dummies ONLY (without all the other original df's columns), the aforementioned action created a new dataset for each variable, adding dummy columns to them (the amount of values (rows) was different, though, because those dummies were created for two columns at the same time - it was expected, so not a problem).
I wanted:
| col1 | col1_a | col1_b |
| col2 | col2_a | col2_b |

Instead I got:
name | salary | col1 | col1_a | col1_b |
name | salary | col2 | col2_a | col2_b |

And therefore, when I concatenated them, instead of receiving this:
name | salary | col1 | col2 | col1_a | col1_b | col2_a | col2_b |

I got:
name | salary | col1 | col2 | name | salary | col1 | col1_a | col1_b | name | salary | col2 | col2_a | col2_b |

And that's definitely not what I wanted. How can I get dummies properly and add them to the dataframe? It would definitely be a mess to delete those columns one by one, as originally there are 30-40 of them in my df. I think there can be a better solution, so I humbly ask the community for advice.
Guys are asking for a reproducible example; I'll try my best:
df = pd.DataFrame({'name': ['hacker', 'scamer', 'breaker', 'coder', 'leaker', 'tester', 'helper', 'leader'],
                   'salary': [1000, 1250, 250, 1001, 2500, 1500, 500, 3000],
                   'col1': ['a', 'b', 'b', 'a', 'c', 'a', 'd', 'e'],
                   'col2': ['b', 'c', 'c', None, 'b', 'd', None, 'a']})

Below is what I expect to get after creating dummies and merging them with the df:
df1 = pd.concat([df, dummies1, dummies2], axis=1)

name    | salary | col1 | col1_a | col1_b | col1_c | col1_d | col1_e | col2 | col2_a | col2_b | col2_c | col2_d | col2_e |
hacker  | 1000   | a    | 1 | 0 | 0 | 0 | 0 | b   | 0 | 1 | 0 | 0 | 0 |
scamer  | 1250   | b    | 0 | 1 | 0 | 0 | 0 | c   | 0 | 0 | 1 | 0 | 0 |
breaker | 250    | b    | 0 | 1 | 0 | 0 | 0 | c   | 0 | 0 | 1 | 0 | 0 |
coder   | 1001   | a    | 1 | 0 | 0 | 0 | 0 | NaN | 0 | 0 | 0 | 0 | 0 |
leaker  | 2500   | c    | 0 | 0 | 1 | 0 | 0 | b   | 0 | 1 | 0 | 0 | 0 |
tester  | 1500   | a    | 1 | 0 | 0 | 0 | 0 | d   | 0 | 0 | 0 | 1 | 0 |
helper  | 500    | d    | 0 | 0 | 0 | 1 | 0 | NaN | 0 | 0 | 0 | 0 | 0 |
leader  | 3000   | e    | 0 | 0 | 0 | 0 | 1 | a   | 1 | 0 | 0 | 0 | 0 |

A: If you want only the dummy values, you can pass only that column into pd.get_dummies.
dummies1 = pd.get_dummies(df.loc[df['col1'].isin(columns_valuecounts_top3.index), 'col1'])
dummies2 = pd.get_dummies(df.loc[df['col2'].isin(columns_valuecounts_top3.index), 'col2'])
How to merge/concat dataframe and dummies without duplicate columns
I have a dataframe, with pair of columns containing categorical data (they are the same, differing only by the amount of values for their categories); and I've made two sets of dummies for those two columns, viz: dummies1 = pd.get_dummies(df.loc[df['col1'].isin(columns_valuecounts_top3.index)], columns=['col1', 'col2']) dummies2 = pd.get_dummies(df.loc[df['col2'].isin(columns_valuecounts_top3.index)], columns=['col1', 'col2']) But, instead of creating two datasets with dummies ONLY (without all other original df's columns), the aforementioned action created a new dataset for each variable, adding dummy columns to them (the amount of values (rows) was different, though, because those dummies were created for two columns at the same time - it was expected, so not a problem). I wanted: | col1 | col1_a | col1_b | | col2 | col2_a | col2_b | Instead I got: name | salary | col1 | col1_a | col2_b | name | salary | col2 | col2_a | col2_b | And therefore, when I concatenated them, instead of receiving this: name | salary | col1 | col2 | col1_a | col1_b | col2_a | col2_b | I got: name | salary | col1 | col2 | name | salary | col1 | col1_a | col1_b | name | salary | col2 | col2_a | col2_b | And that's definitely not what I wanted. How can I get dummies properly and add them to dataframe? It would definitely be a mess to delete those columns one by one, as originally there are 30-40 of them in my df. I think there can be a better solution, so I humbly ask the community for an advice. Guys are asking for a reproducible example; I'll try my best: df = pd.DataFrame({'name': ['hacker', 'scamer', 'breaker', 'coder', 'leaker', 'tester', 'helper', 'leader'], 'salary': [1000, 1250, 250, 1001, 2500, 1500, 500, 3000], 'col1': ['a', 'b', 'b', 'a', 'c', 'a', 'd', 'e'], 'col2': ['b', 'c', 'c', Nan, 'b', 'd', Nan, 'a']}) Below is what I expect to get after creating dummies and merging it with the df: df1 = pd.concat([df, dummies1, dummies2], axis=1) name | salary | col1 | col1_a | col1_b | col1_c | col1_d | col1_e | col2 | col2_a | col2_b | col2_c | col2_d | col2_e | hacker | 1000 | a | 1 | 0 | 0 | 0 | 0 | b | 0 | 1 | 0 | 0 | 0 | scamer | 1250 | b | 0 | 1 | 0 | 0 | 0 | c | 0 | 0 | 1 | 0 | 0 | breaker | 250 | b | 0 | 1 | 0 | 0 | 0 | c | 0 | 0 | 1 | 0 | 0 | coder | 1001 | a | 1 | 0 | 0 | 0 | 0 | Nan | 0 | 0 | 0 | 0 | 0 | leaker | 2500 | c | 0 | 0 | 1 | 0 | 0 | b | 0 | 1 | 0 | 0 | 0 | tester | 1500 | a | 1 | 0 | 0 | 0 | 0 | d | 0 | 0 | 0 | 1 | 0 | helper | 500 | d | 0 | 0 | 0 | 1 | 0 | Nan | 0 | 0 | 0 | 0 | 0 | leader | 3000 | e | 0 | 0 | 0 | 0 | 1 | a | 1 | 0 | 0 | 0 | 0 |
[ "If you want only the dummy value, you can pass only that column into pd.get_dummies.\ndummies1 = pd.get_dummies(df.loc[df['col1'].isin(columns_valuecounts_top3.index), 'col1']) \ndummies2 = pd.get_dummies(df.loc[df['col2'].isin(columns_valuecounts_top3.index), 'col2'])\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "dummy_variable", "pandas", "python" ]
stackoverflow_0074540000_dataframe_dummy_variable_pandas_python.txt
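A runnable end-to-end sketch of the answer's approach on a slice of the question's sample data (an illustration added here; the prefix= argument names the dummy columns col1_a, col2_b, and so on):

import pandas as pd

df = pd.DataFrame({'name': ['hacker', 'scamer'],
                   'salary': [1000, 1250],
                   'col1': ['a', 'b'],
                   'col2': ['b', 'c']})

# Build dummies from the Series (not the whole frame), then bolt them on
dummies1 = pd.get_dummies(df['col1'], prefix='col1')
dummies2 = pd.get_dummies(df['col2'], prefix='col2')
out = pd.concat([df, dummies1, dummies2], axis=1)
print(out.columns.tolist())
# ['name', 'salary', 'col1', 'col2', 'col1_a', 'col1_b', 'col2_b', 'col2_c']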
Q: What's wrong with this code for checking age? I want to know if the inputted date of birth is over 18 or under.
def is_under_18(birth):
    now = date.today()
    return (
        now.year - birth.year < 18
        or now.year - birth.year == 18
        and (
            now.month < birth.month
            or now.month == birth.month
            and now.day <= birth.day
        )
    )

And then:
year = int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))

birth = date(year,month,day)

if is_under_18(birth):
    print('Under 18')
else:
    print('Adult')

However, the only thing is, say I add a user whose birthday is the 25th of November 2004. The program lets me add it because it does not count the month. If I add a user who was born on the 1st of January 2005, it doesn't allow me because 2022-2005=17.
A: Your original code doesn't seem to have a problem with the dates you mention, but does have a bug as Nov 22, 2004 is "Under 18" and today's date is Nov 22, 2022 (18th birthday). Use now.day < birth.day instead.
But if you compute the birthday required to be 18 by replacing today's year with 18 less, then directly compare the dates, you don't have to have a complicated comparison:
from datetime import date

def is_under_18(birth):
    # today = date.today()
    today = date(2022,11,22)  # for repeatability of results
    born_on_or_before = today.replace(year=today.year - 18)
    return birth > born_on_or_before

print(f'Today is {date.today()}')
for year,month,day in [(2004,11,21), (2004,11,22), (2004,11,23), (2004,11,25), (2005,1,1)]:
    birth = date(year,month,day)

    if is_under_18(birth):
        print(f'{birth} Under 18')
    else:
        print(f'{birth} Adult')

Output:
Today is 2022-11-22
2004-11-21 Adult
2004-11-22 Adult
2004-11-23 Under 18
2004-11-25 Under 18
2005-01-01 Under 18
What's wrong with this code for checking age?
I want to know if the inputted date of birth is over 18 or under.
def is_under_18(birth):
    now = date.today()
    return (
        now.year - birth.year < 18
        or now.year - birth.year == 18
        and (
            now.month < birth.month
            or now.month == birth.month
            and now.day <= birth.day
        )
    )

And then:
year = int(input("Year born: "))
month = int(input("Month born: "))
day = int(input("Day born: "))

birth = date(year,month,day)

if is_under_18(birth):
    print('Under 18')
else:
    print('Adult')

However, the only thing is, say I add a user whose birthday is the 25th of November 2004. The program lets me add it because it does not count the month. If I add a user who was born on the 1st of January 2005, it doesn't allow me because 2022-2005=17.
[ "Your original code doesn't seem to have a problem with the dates you mention, but does have a bug as Nov 22, 2004 is \"Under 18\" and today's date is Nov 22, 2022 (18th birthday). Use now.day < birth.day instead.\nBut if you compute the birthday required to be 18 by replacing today's year with 18 less, then directly compare the dates, you don't have to have a complicated comparison:\nfrom datetime import date\n\ndef is_under_18(birth):\n # today = date.today()\n today = date(2022,11,22) # for repeatability of results\n born_on_or_before = today.replace(year=today.year - 18)\n return birth > born_on_or_before\n\nprint(f'Today is {date.today()}')\nfor year,month,day in [(2004,11,21), (2004,11,22), (2004,11,23), (2004,11,25), (2005,1,1)]:\n birth = date(year,month,day)\n\n if is_under_18(birth):\n print(f'{birth} Under 18')\n else:\n print(f'{birth} Adult')\n\nOutput:\nToday is 2022-11-22\n2004-11-21 Adult\n2004-11-22 Adult\n2004-11-23 Under 18\n2004-11-25 Under 18\n2005-01-01 Under 18\n\n" ]
[ 0 ]
[]
[]
[ "date", "python", "python_3.x" ]
stackoverflow_0074540811_date_python_python_3.x.txt
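For completeness (an addition, not part of the thread), dateutil's relativedelta expresses the same check as a direct age computation, which also handles the Feb 29 edge case that today.replace(...) can trip over:

from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def is_under_18(birth):
    # relativedelta(later, earlier).years is the whole-year age difference
    return relativedelta(date.today(), birth).years < 18

print(is_under_18(date(2004, 11, 22)))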
Q: Issue using GEKKO solve in Python: getting different result using same code in two different .py files The libraries used are pandas to read an Excel file and gekko to solve an equation. Both .py files use the same code and the same Excel file. The difference between them is that one has an extra for loop to get values from several sheets and the other is only able to read one sheet at a time. The results they produce from the same sheet are different. Shouldn't they be the same, since the data is equal? Thank you for your help.
A: Gekko uses solvers that iterate to find a solution. However, the same outputs can be expected with the same inputs and equations. Here is an example that returns True and True to verify that the solutions are the same.
from gekko import GEKKO

m1 = GEKKO()
x1,y1 = m1.Array(m1.Var,2)
m1.Equations([3*x1+2*y1==1, x1+2*y1==0])
m1.solve(disp=False)

m2 = GEKKO()
x2,y2 = m2.Array(m2.Var,2)
m2.Equations([3*x2+2*y2==1, x2+2*y2==0])
m2.solve(disp=False)

print(x1.value[0]==x2.value[0])
print(y1.value[0]==y2.value[0])
Issue using GEKKO solve in Python: getting different result using same code in two different .py files
The libraries used are pandas to read an Excel file and gekko to solve an equation. Both .py files use the same code and the same Excel file. The difference between them is that one has an extra for loop to get values from several sheets and the other is only able to read one sheet at a time. The results they produce from the same sheet are different. Shouldn't they be the same, since the data is equal? Thank you for your help.
[ "Gekko uses solvers that iterate to find a solution. However, the same outputs can be expected with the same inputs and equations. Here is an example that returns True and True to verify that the solutions are the same.\nfrom gekko import GEKKO\n\nm1 = GEKKO()\nx1,y1 = m1.Array(m1.Var,2)\nm1.Equations([3*x1+2*y1==1, x1+2*y1==0])\nm1.solve(disp=False)\n\nm2 = GEKKO()\nx2,y2 = m2.Array(m2.Var,2)\nm2.Equations([3*x2+2*y2==1, x2+2*y2==0])\nm2.solve(disp=False)\n\nprint(x1.value[0]==x2.value[0])\nprint(y1.value[0]==y2.value[0])\n\n" ]
[ 0 ]
[]
[]
[ "gekko", "pandas", "python" ]
stackoverflow_0074540273_gekko_pandas_python.txt
Q: Cannot find the table tag in the website to scrape information using Beautiful Soup I am trying to obtain the values of these columns (Year, Mo, Dy, Hr, Mn, Sec) from the following website, https://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-data?maxYear=2022&minYear=2010&country=USA, but I am new to using Beautiful Soup and I cannot find the table tag in the inspection to obtain the information. These are the columns I have tried to get using this code:
url = 'https://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-data?maxYear=2022&minYear=2010&country=USA'
r = requests.get(url)
soup = BeautifulSoup(r.content, 'html.parser')
soup.find('class', attrs={'ReactVirtualized__Grid__innerScrollContainer'})

But nothing is returned.
A: Data comes from an API you can call. You can optionally create a date index and sort on that as well after generating a DataFrame from the returned json.
import requests
import pandas as pd

df = pd.DataFrame(requests.get('https://www.ngdc.noaa.gov/hazel/hazard-service/api/v1/tsunamis/events?++maxYear=2022&minYear=2010&country=USA').json()['items'])
df['date'] = pd.to_datetime(df[['year', 'month', 'day']])
df.set_index('date', inplace=True)
df.sort_index(inplace=True)
df

You can read about the API options here:
https://www.ngdc.noaa.gov/hazel/view/swagger#/Tsunami%20Events#
There is also a search tool here:
https://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-search
Cannot find the table tag in the website to scrape information using Beautiful Soup
I am trying to obtain the values of these columns (Year, Mo, Dy, Hr, Mn, Sec) from the following website: https://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-data?maxYear=2022&minYear=2010&country=USA but I am new to using Beautiful Soup and I cannot find the table tag in the inspection to obtain the information. These are the columns. I have tried using this code: url = 'https://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-data?maxYear=2022&minYear=2010&country=USA' r = requests.get(url) soup = BeautifulSoup(r.content, 'html.parser') soup.find('class',attrs={'ReactVirtualized__Grid__innerScrollContainer'}) But nothing is returned.
[ "Data comes from an API you can call. You can optionally create a date index and sort on that as well after generating a DataFrame from the returned json.\nimport requests\nimport pandas as pd\n\ndf = pd.DataFrame(requests.get('https://www.ngdc.noaa.gov/hazel/hazard-service/api/v1/tsunamis/events?++maxYear=2022&minYear=2010&country=USA').json()['items'])\ndf['date'] = pd.to_datetime(df[['year', 'month', 'day']])\ndf.set_index('date', inplace=True)\ndf.sort_index(inplace=True)\ndf\n\n\nYou can read about the API options here:\nhttps://www.ngdc.noaa.gov/hazel/view/swagger#/Tsunami%20Events#\nThere is also a search tool here:\nhttps://www.ngdc.noaa.gov/hazel/view/hazards/tsunami/event-search\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python", "python_beautifultable", "web_scraping" ]
stackoverflow_0074540242_beautifulsoup_python_python_beautifultable_web_scraping.txt
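As a follow-up to the entry above: if the goal is specifically the Year/Mo/Dy/Hr/Mn/Sec columns, they can be selected from the same DataFrame. This sketch reuses the API URL from the answer, but the names of the hour/minute/second fields in the JSON payload are an assumption (only 'year', 'month' and 'day' appear in the answer), so it selects defensively:

import requests
import pandas as pd

url = ('https://www.ngdc.noaa.gov/hazel/hazard-service/api/v1/tsunamis/'
       'events?++maxYear=2022&minYear=2010&country=USA')
df = pd.DataFrame(requests.get(url).json()['items'])

# keep only the time columns that actually exist in the payload
wanted = ['year', 'month', 'day', 'hour', 'minute', 'second']
present = [c for c in wanted if c in df.columns]
print(df[present].head())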
Q: How to timeout an async test in pytest with fixture? I am testing an async function that might get deadlocked. I tried to add a fixture to limit the function to only run for 5 seconds before raising a failure, but it hasn't worked so far. Setup: pipenv --python==3.6 pipenv install pytest==4.4.1 pipenv install pytest-asyncio==0.10.0 Code: import asyncio import pytest @pytest.fixture def my_fixture(): # attempt to start a timer that will stop the test somehow asyncio.ensure_future(time_limit()) yield 'eggs' async def time_limit(): await asyncio.sleep(5) print('time limit reached') # this isn't printed raise AssertionError @pytest.mark.asyncio async def test(my_fixture): assert my_fixture == 'eggs' await asyncio.sleep(10) print('this should not print') # this is printed assert 0 -- Edit: Mikhail's solution works fine. I can't find a way to incorporate it into a fixture, though. A: Convenient way to limit function (or block of code) with timeout is to use async-timeout module. You can use it inside your test function or, for example, create a decorator. Unlike with fixture it'll allow to specify concrete time for each test: import asyncio import pytest from async_timeout import timeout def with_timeout(t): def wrapper(corofunc): async def run(*args, **kwargs): with timeout(t): return await corofunc(*args, **kwargs) return run return wrapper @pytest.mark.asyncio @with_timeout(2) async def test_sleep_1(): await asyncio.sleep(1) assert 1 == 1 @pytest.mark.asyncio @with_timeout(2) async def test_sleep_3(): await asyncio.sleep(3) assert 1 == 1 It's not hard to create decorator for concrete time (with_timeout_5 = partial(with_timeout, 5)). I don't know how to create a fixture (if you really need a fixture), but the code above can provide a starting point. Also not sure if there's a common way to achieve the goal better. There is a way to use fixtures for timeout, one just needs to add the following hook into conftest.py. Any fixture prefixed with timeout must return a number of seconds (int, float) the test can run. The closest fixture w.r.t. scope is chosen. autouse fixtures have lesser priority than explicitly chosen ones. Later one is preferred. Unfortunately order in the function argument list does NOT matter. If there is no such fixture, the test is not restricted and will run indefinitely as usual. The test must be marked with pytest.mark.asyncio too, but that is needed anyway. # Add to conftest.py import asyncio import pytest _TIMEOUT_FIXTURE_PREFIX = "timeout" @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_runtest_setup(item: pytest.Item): """Wrap all tests marked with pytest.mark.asyncio with their specified timeout. Must run as early as possible. Parameters ---------- item : pytest.Item Test to wrap """ yield orig_obj = item.obj timeouts = [n for n in item.funcargs if n.startswith(_TIMEOUT_FIXTURE_PREFIX)] # Picks the closest timeout fixture if there are multiple tname = None if len(timeouts) == 0 else timeouts[-1] # Only pick marked functions if item.get_closest_marker("asyncio") is not None and tname is not None: async def new_obj(*args, **kwargs): """Timed wrapper around the test function.""" try: return await asyncio.wait_for( orig_obj(*args, **kwargs), timeout=item.funcargs[tname] ) except Exception as e: pytest.fail(f"Test {item.name} did not finish in time.") item.obj = new_obj Example: @pytest.fixture def timeout_2s(): return 2 @pytest.fixture(scope="module", autouse=True) def timeout_5s(): # You can do whatever you need here, just return/yield a number return 5 async def test_timeout_1(): # Uses timeout_5s fixture by default await aio.sleep(0) # Passes return 1 async def test_timeout_2(timeout_2s): # Uses timeout_2s because it is closest await aio.sleep(5) # Timeouts WARNING Might not work with some other plugins, I have only tested it with pytest-asyncio, it definitely won't work if item is redefined by some hook. I just loved Quimby's approach of marking tests with timeouts. Here's my attempt to improve it, using pytest marks: # tests/conftest.py import asyncio @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_pyfunc_call(pyfuncitem: pytest.Function): """ Wrap all tests marked with pytest.mark.async_timeout with their specified timeout. """ orig_obj = pyfuncitem.obj if marker := pyfuncitem.get_closest_marker("async_timeout"): async def new_obj(*args, **kwargs): """Timed wrapper around the test function.""" try: return await asyncio.wait_for(orig_obj(*args, **kwargs), timeout=marker.args[0]) except (asyncio.CancelledError, asyncio.TimeoutError): pytest.fail(f"Test {pyfuncitem.name} did not finish in time.") pyfuncitem.obj = new_obj yield def pytest_configure(config: pytest.Config): config.addinivalue_line("markers", "async_timeout(timeout): cancels the test execution after the specified amount of seconds") Usage: @pytest.mark.asyncio @pytest.mark.async_timeout(10) async def potentially_hanging_function(): await asyncio.sleep(20) It should not be hard to include this to the asyncio mark on pytest-asyncio, so we can get a syntax like: @pytest.mark.asyncio(timeout=10) async def potentially_hanging_function(): await asyncio.sleep(20) EDIT: looks like there's already a PR for that.
How to timeout an async test in pytest with fixture?
I am testing an async function that might get deadlocked. I tried to add a fixture to limit the function to only run for 5 seconds before raising a failure, but it hasn't worked so far. Setup: pipenv --python==3.6 pipenv install pytest==4.4.1 pipenv install pytest-asyncio==0.10.0 Code: import asyncio import pytest @pytest.fixture def my_fixture(): # attempt to start a timer that will stop the test somehow asyncio.ensure_future(time_limit()) yield 'eggs' async def time_limit(): await asyncio.sleep(5) print('time limit reached') # this isn't printed raise AssertionError @pytest.mark.asyncio async def test(my_fixture): assert my_fixture == 'eggs' await asyncio.sleep(10) print('this should not print') # this is printed assert 0 -- Edit: Mikhail's solution works fine. I can't find a way to incorporate it into a fixture, though.
[ "Convenient way to limit function (or block of code) with timeout is to use async-timeout module. You can use it inside your test function or, for example, create a decorator. Unlike with fixture it'll allow to specify concrete time for each test:\nimport asyncio\nimport pytest\nfrom async_timeout import timeout\n\n\ndef with_timeout(t):\n def wrapper(corofunc):\n async def run(*args, **kwargs):\n with timeout(t):\n return await corofunc(*args, **kwargs)\n return run \n return wrapper\n\n\n@pytest.mark.asyncio\n@with_timeout(2)\nasync def test_sleep_1():\n await asyncio.sleep(1)\n assert 1 == 1\n\n\n@pytest.mark.asyncio\n@with_timeout(2)\nasync def test_sleep_3():\n await asyncio.sleep(3)\n assert 1 == 1\n\nIt's not hard to create decorator for concrete time (with_timeout_5 = partial(with_timeout, 5)).\n\nI don't know how to create texture (if you really need fixture), but code above can provide starting point. Also not sure if there's a common way to achieve goal better.\n", "There is a way to use fixtures for timeout, one just needs to add the following hook into conftest.py.\n\nAny fixture prefixed with timeout must return a number of seconds(int, float) the test can run.\nThe closest fixture w.r.t scope is chosen. autouse fixtures have lesser priority than explicitly chosen ones. Later one is preferred. Unfortunately order in the function argument list does NOT matter.\nIf there is no such fixture, the test is not restricted and will run indefinitely as usual.\nThe test must be marked with pytest.mark.asyncio too, but that is needed anyway.\n\n# Add to conftest.py\nimport asyncio\n\nimport pytest\n\n_TIMEOUT_FIXTURE_PREFIX = \"timeout\"\n\n\n@pytest.hookimpl(tryfirst=True, hookwrapper=True)\ndef pytest_runtest_setup(item: pytest.Item):\n \"\"\"Wrap all tests marked with pytest.mark.asyncio with their specified timeout.\n\n Must run as early as possible.\n\n Parameters\n ----------\n item : pytest.Item\n Test to wrap\n \"\"\"\n yield\n orig_obj = item.obj\n timeouts = [n for n in item.funcargs if n.startswith(_TIMEOUT_FIXTURE_PREFIX)]\n # Picks the closest timeout fixture if there are multiple\n tname = None if len(timeouts) == 0 else timeouts[-1]\n\n # Only pick marked functions\n if item.get_closest_marker(\"asyncio\") is not None and tname is not None:\n\n async def new_obj(*args, **kwargs):\n \"\"\"Timed wrapper around the test function.\"\"\"\n try:\n return await asyncio.wait_for(\n orig_obj(*args, **kwargs), timeout=item.funcargs[tname]\n )\n except Exception as e:\n pytest.fail(f\"Test {item.name} did not finish in time.\")\n\n item.obj = new_obj\n\nExample:\n@pytest.fixture\ndef timeout_2s():\n return 2\n\n\n@pytest.fixture(scope=\"module\", autouse=True)\ndef timeout_5s():\n # You can do whatever you need here, just return/yield a number\n return 5\n\n\nasync def test_timeout_1():\n # Uses timeout_5s fixture by default\n await aio.sleep(0) # Passes\n return 1\n\n\nasync def test_timeout_2(timeout_2s):\n # Uses timeout_2s because it is closest\n await aio.sleep(5) # Timeouts\n\nWARNING\nMight not work with some other plugins, I have only tested it with pytest-asyncio, it definitely won't work if item is redefined by some hook.\n", "I just loved Quimby's approach of marking tests with timeouts. 
Here's my attempt to improve it, using pytest marks:\n# tests/conftest.py\nimport asyncio\n\n\n@pytest.hookimpl(tryfirst=True, hookwrapper=True)\ndef pytest_pyfunc_call(pyfuncitem: pytest.Function):\n \"\"\"\n Wrap all tests marked with pytest.mark.async_timeout with their specified timeout.\n \"\"\"\n orig_obj = pyfuncitem.obj\n\n if marker := pyfuncitem.get_closest_marker(\"async_timeout\"):\n\n async def new_obj(*args, **kwargs):\n \"\"\"Timed wrapper around the test function.\"\"\"\n try:\n return await asyncio.wait_for(orig_obj(*args, **kwargs), timeout=marker.args[0])\n except (asyncio.CancelledError, asyncio.TimeoutError):\n pytest.fail(f\"Test {pyfuncitem.name} did not finish in time.\")\n\n pyfuncitem.obj = new_obj\n\n yield\n\n\ndef pytest_configure(config: pytest.Config):\n config.addinivalue_line(\"markers\", \"async_timeout(timeout): cancels the test execution after the specified amount of seconds\")\n\nUsage:\n@pytest.mark.asyncio\n@pytest.mark.async_timeout(10)\nasync def potentially_hanging_function():\n await asyncio.sleep(20)\n\nIt should not be hard to include this to the asyncio mark on pytest-asyncio, so we can get a syntax like:\n@pytest.mark.asyncio(timeout=10)\nasync def potentially_hanging_function():\n await asyncio.sleep(20)\n\nEDIT: looks like there's already a PR for that.\n" ]
[ 8, 2, 0 ]
[]
[]
[ "pytest", "pytest_asyncio", "python", "python_asyncio" ]
stackoverflow_0055684737_pytest_pytest_asyncio_python_python_asyncio.txt
Q: Read data from Quip Spreadsheet with Python I need to make a tool with Python which needs to read data from a given Quip. I have read the Quip Api documentation but I can't find anything code related. Does anyone have a source of inspiration for this implementation? I tried 2 different implementation from various sources but they are not working: 1. import quip import quipclient as quipclient id = 'completed with id' thread = 'TestSpreadsheet' base_url = 'completed with base_url' ACCES_TOKEN = "completed with the token" client = quip.QuipClient(access_token=ACCES_TOKEN, base_url = base_url) with open("template.html", "rt") as f: template = f.read() jso = client.new_document(template, title="My Spreadsheet", type="spreadsheet") thread_id = jso['thread']["id"] --> not sure where do I get that from user = client.get_authenticated_user() print(f'User: {user}') client.update_spreadsheet_headers((thread_id, 'Name', 'Email')) client.get_thread(id) spreadsheet = client.get_first_spreadsheet(thread_id) headers = client.get_spreadsheet_header_items(spreadsheet) print(headers) import quip import pandas as pd import numpy as np import html5lib token = 'completed with token' base_url = 'completed with base url' thread_id = not sure where do I get that from client = quip.QuipClient(token, base_url = base_url) rawdictionary = client.get_thread(thread_id) dfs=pd.read_html(rawdictionary['html']) raw_df = dfs[0] A: In the latest version, Quip has put the thread ID in the url as well so for example: https://<your enterprise quip host>/<thread_id>/<your spreadsheet name> So for exporting a spreadsheet to lets say a dataframe following would be helper code import quipclient import pandas as pd import lxml ACCESS_TOKEN = "XXXXX" quip = quipclient.QuipClient(access_token=ACCESS_TOKEN, base_url='https://<your-quip-url>.com') user = quip.get_authenticated_user() spread_sheet = quip.get_thread(id = '<thread id from the url>') spread_sheet_html_part = spread_sheet['html'] df = pd.read_html(spread_sheet_html_part) main_table = df[0] main_table.columns = main_table.iloc[0] main_table = main_table[1:] main_table.to_csv("final_result.csv",index=False)
Read data from Quip Spreadsheet with Python
I need to make a tool with Python which needs to read data from a given Quip. I have read the Quip API documentation but I can't find anything code-related. Does anyone have a source of inspiration for this implementation? I tried 2 different implementations from various sources but they are not working: 1. import quip import quipclient as quipclient id = 'completed with id' thread = 'TestSpreadsheet' base_url = 'completed with base_url' ACCES_TOKEN = "completed with the token" client = quip.QuipClient(access_token=ACCES_TOKEN, base_url = base_url) with open("template.html", "rt") as f: template = f.read() jso = client.new_document(template, title="My Spreadsheet", type="spreadsheet") thread_id = jso['thread']["id"] --> not sure where do I get that from user = client.get_authenticated_user() print(f'User: {user}') client.update_spreadsheet_headers((thread_id, 'Name', 'Email')) client.get_thread(id) spreadsheet = client.get_first_spreadsheet(thread_id) headers = client.get_spreadsheet_header_items(spreadsheet) print(headers) import quip import pandas as pd import numpy as np import html5lib token = 'completed with token' base_url = 'completed with base url' thread_id = not sure where do I get that from client = quip.QuipClient(token, base_url = base_url) rawdictionary = client.get_thread(thread_id) dfs=pd.read_html(rawdictionary['html']) raw_df = dfs[0]
[ "In the latest version, Quip has put the thread ID in the url as well\nso for example: https://<your enterprise quip host>/<thread_id>/<your spreadsheet name>\nSo for exporting a spreadsheet to lets say a dataframe following would be helper code\nimport quipclient\nimport pandas as pd\nimport lxml\n\nACCESS_TOKEN = \"XXXXX\"\nquip = quipclient.QuipClient(access_token=ACCESS_TOKEN, base_url='https://<your-quip-url>.com')\nuser = quip.get_authenticated_user()\nspread_sheet = quip.get_thread(id = '<thread id from the url>')\n\nspread_sheet_html_part = spread_sheet['html'] \ndf = pd.read_html(spread_sheet_html_part)\nmain_table = df[0]\nmain_table.columns = main_table.iloc[0]\nmain_table = main_table[1:]\nmain_table.to_csv(\"final_result.csv\",index=False)\n\n" ]
[ 0 ]
[]
[]
[ "python", "quip", "spreadsheet", "thread_id", "token" ]
stackoverflow_0073449477_python_quip_spreadsheet_thread_id_token.txt
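Extending the accepted answer above: exporting several spreadsheets is just the same three calls in a loop. The host and thread IDs below are placeholders, and the sketch assumes each thread's HTML contains at least one table pandas can parse:

import quipclient
import pandas as pd

quip = quipclient.QuipClient(access_token="XXXXX",
                             base_url="https://<your-quip-url>.com")

thread_ids = ["abc123", "def456"]   # placeholder thread IDs from the URLs
for tid in thread_ids:
    thread = quip.get_thread(id=tid)
    tables = pd.read_html(thread['html'])
    tables[0].to_csv(f"{tid}.csv", index=False)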
Q: populating form with data from session; django I'm wondering how to fill my form with data that I have stored in my session. my model: models.py class Order(models.Model): order_by = ForeignKey(User, on_delete=DO_NOTHING) order_status = ForeignKey(OrderStatus, on_delete=DO_NOTHING) created = DateTimeField(default=datetime.now) address_street = CharField(max_length=256) address_postal_code = CharField(max_length=18) address_city = CharField(max_length=128) shipping = ForeignKey(ShippingMethod, on_delete=DO_NOTHING) payment = DecimalField(max_digits=12, decimal_places=2, null=False) payment_method = ForeignKey(PaymentMethod, on_delete=DO_NOTHING) def __str__(self): return self.id I have a view that stores my choices in 'cart' class AddProductToCartView(View): def get(self, request, pk): cart = request.session.get("cart") if cart: for item in cart: if item["id"] == pk: item["quantity"] += 1 break else: cart.append({"id": pk, "quantity": 1}) request.session["cart"] = cart else: request.session["cart"] = [{"id": pk, "quantity": 1}] return redirect("store:cart_view") class CartView(View): def get(self, request): in_cart = [] overall_price = 0 overall_tax = 0 if "cart" in request.session: overall_price = 0 for item in request.session["cart"]: product = Product.objects.select_related("category").get(id=item["id"]) total_price = product.price * item["quantity"] tax = (total_price * product.tax) in_cart.append({ "product": product, "quantity": item["quantity"], "total_price": total_price, "tax": tax }) overall_price += total_price overall_tax += tax return render( request, template_name="cart/cart_view.html", context={ "products": in_cart if len(in_cart) > 0 else None, "overall_price": overall_price, "overall_tax": overall_tax } ) and I want to populate my form with data (some of it, not everything, is in my 'cart'), so I still have to take some info from the user via a form. I get that I want to create a form with certain data: class AddAditionalDataForm(forms.ModelForm): class Meta: model = Order fields = ['certain fields'] and in the views.py class CheckOutView(CreateView): form_class = AddAditionalDataForm template_name = 'check_form/check_out_form.html' I don't want a finished solution, more like a hint in which direction to go with this? A: To populate a form from multiple data sources, you can simply merge those data sources, and use the new data. # request.POST is immutable by default. # .copy() will make a new, mutable copy. data = request.POST.copy() data['cart'] = request.session.get('cart') form = AddAdditionalDataForm(data) Two more things I want to mention. It is not good practice to use user-submitted data without validating. What if pk does not match any existing Product? Your Order model has no field related to Product. I am not sure if this is intended, but having an additional model describing the order list has several benefits. 2.1 Order detail is now traceable. 2.2 Discount can be applied to individual order. 2.3 Product statistics now include sales data. # Example order item definition class OrderItem(models.Model): order = models.ForeignKey(Order, on_delete=models.CASCADE) product = models.ForeignKey(Product, on_delete=models.PROTECT) quantity = models.IntegerField(default=1) price = models.DecimalField(max_digits=12, decimal_places=2) In such case, form validation becomes a little more complicated ItemFormSet = modelformset_factory(OrderItem, fields=['product', 'quantity', 'price']) class CheckOutView(CreateView): form_class = AddAdditionalDataForm template_name = 'check_form/check_out_form.html' def post(self, request, *args, **kwargs): order_form = self.form_class(request.POST) # NOTE formset identifies different item data by their prefix, # and requires additional management data. i.e. # cart_data = { # 'item-0-pk': 17, 'item-0-quantity': 1, # 'item-1-pk': 28, 'item-1-quantity': 3, # 'item-INITIAL_FORMS': 0, # 'item-TOTAL_FORMS': 2} cart_data = get_cart_data(request.session.get('cart')) items_formset = ItemFormSet(cart_data) # Save order only when both forms are valid. if order_form.is_valid() and items_formset.is_valid(): order = order_form.save() items = items_formset.save(commit=False) for item in items: item.order = order OrderItem.objects.bulk_create(items) return redirect(...) else: ... def get_context_data(self, **kwargs): # Include empty formset in get request. # NOTE not to override form and formset containing error data in post request. ...
populating form with data from session; django
I'm wondering how to fill my form with data that I have stored in my session. my model: models.py class Order(models.Model): order_by = ForeignKey(User, on_delete=DO_NOTHING) order_status = ForeignKey(OrderStatus, on_delete=DO_NOTHING) created = DateTimeField(default=datetime.now) address_street = CharField(max_length=256) address_postal_code = CharField(max_length=18) address_city = CharField(max_length=128) shipping = ForeignKey(ShippingMethod, on_delete=DO_NOTHING) payment = DecimalField(max_digits=12, decimal_places=2, null=False) payment_method = ForeignKey(PaymentMethod, on_delete=DO_NOTHING) def __str__(self): return self.id I have a view that stores my choices in 'cart' class AddProductToCartView(View): def get(self, request, pk): cart = request.session.get("cart") if cart: for item in cart: if item["id"] == pk: item["quantity"] += 1 break else: cart.append({"id": pk, "quantity": 1}) request.session["cart"] = cart else: request.session["cart"] = [{"id": pk, "quantity": 1}] return redirect("store:cart_view") class CartView(View): def get(self, request): in_cart = [] overall_price = 0 overall_tax = 0 if "cart" in request.session: overall_price = 0 for item in request.session["cart"]: product = Product.objects.select_related("category").get(id=item["id"]) total_price = product.price * item["quantity"] tax = (total_price * product.tax) in_cart.append({ "product": product, "quantity": item["quantity"], "total_price": total_price, "tax": tax }) overall_price += total_price overall_tax += tax return render( request, template_name="cart/cart_view.html", context={ "products": in_cart if len(in_cart) > 0 else None, "overall_price": overall_price, "overall_tax": overall_tax } ) and I want to populate my form with data (some of it, not everything, is in my 'cart'), so I still have to take some info from the user via a form. I get that I want to create a form with certain data: class AddAditionalDataForm(forms.ModelForm): class Meta: model = Order fields = ['certain fields'] and in the views.py class CheckOutView(CreateView): form_class = AddAditionalDataForm template_name = 'check_form/check_out_form.html' I don't want a finished solution, more like a hint in which direction to go with this?
[ "To populate a form from multiple data sources, you can simply merge those data sources, and use the new data.\n# request.POST is immutable by default. \n# .copy() will make a new, mutable copy. \ndata = request.POST.copy()\ndata['cart'] = request.sessions.get('cart')\n\nform = AddAdditionalDataForm(data)\n\nTwo more things I want to mention.\n\nIt is not a good practice using user submitted data without validating. What if pk does not match any existing Product?\n\nYour Order model has no field related to Product. I am not sure if this is intended, but having an additional model describing order list has several benefits.\n\n\n2.1 Order detail is now traceable.\n2.2 Discount can be applied to individual order.\n2.3 Product statistics now include sales data.\n# Example order item definition\nclass OrderItem(models.Model):\n order = models.ForeignKey(Order, on_delete=models.CASCADE)\n product = models.ForeignKey(Product, on_delete=models.PROTECT)\n quantity = models.IntegerField(default=1)\n price = models.DecimalField(max_digits=12, decimal_places=2)\n\nIn such case, form validation becomes a little more complicate\nItemFormSet = modelformset_factory(OrderItem, fields=['product', 'quantity', 'price'])\n\nclass CheckOutView(CreateView):\n form_class = AddAdditionalDataForm\n template_name = 'check_form/check_out_form.html'\n def post(self, request, *args, **kwargs):\n order_form = self.form_class(request.POST)\n # NOTE formset identifies different item data by their prefix,\n # and requires additional management data. i.e.\n # cart_data = {\n # 'item-0-pk': 17, 'item-0-quantity': 1,\n # 'item-1-pk': 28, 'item-1-quantity': 3,\n # 'item-INITIAL_FORMS': 0,\n # 'item-TOTAL_FORMS': 2}\n cart_data = get_cart_data(request.session.get('cart'))\n items_formset = ItemFormSet(cart_data)\n # Save order only when both forms are valid.\n if order_form.is_valid() and items_formset.is_valid()\n order = order_form.save()\n items = items_formset.save(commit=False)\n for item in items:\n item.order = order\n OrderItem.objects.bulk_create(items)\n return redirect(...)\n else:\n ...\n def get_context_data(self, **kwargs):\n # Include empty formset in get request. \n # NOTE not to override form and formset containing error data in post request. \n ...\n\n" ]
[ 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074534204_django_python.txt
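One more hint in the direction the question asks about: if the goal is only to pre-fill the visible form fields from the session (rather than merging POST data), Django's initial mechanism does this without touching request.POST. This is a sketch only; the mapping from cart items to the payment field is invented and would need to match the real cart structure:

class CheckOutView(CreateView):
    form_class = AddAditionalDataForm
    template_name = 'check_form/check_out_form.html'

    def get_initial(self):
        initial = super().get_initial()
        cart = self.request.session.get('cart', [])
        # hypothetical mapping: pre-fill payment from the cart contents
        initial['payment'] = sum(item.get('quantity', 0) for item in cart)
        return initial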
Q: how to select rows with a certain pattern I'm stuck in a problem, because I can't find any solution to deal with it, I have the following sample: data = [['John', 6, 'A'], ['Paul', 6, 'D'], ['Juli', 9, 'D'], ['Geeta', 4, 'A'], ['Jay', 6, 'D'], ['Sara', 6, 'A'], ['Mario', 3, 'D'], ['Peter', 4, 'A'], ['Jin', 4, 'D'], ['Carl', 6, 'A']] df = pd.DataFrame(data, columns=['Name', 'Number', 'Label']) I previously sorted by Number with the following line of code: df = df.sort_values('Number') and got this output: Name Number Label Mario 3 D Geeta 4 A Peter 4 A Jin 4 D John 6 A Paul 6 D Jay 6 D Sara 6 A Carl 6 A Juli 9 D So I just want to select pairs of rows which have an 'A' in the last column followed by a row with a 'D' in the last column, and find all pairs of rows that match this pattern in the same group (I don't want the last 'A' of a group and the 'D' of the next group), so the solution of the problem is: Name Number Label Peter 4 A Jin 4 D John 6 A Paul 6 D Can anyone help me? A: You need to use: # is the row label A? m1 = df['Label'].eq('A') # is the next row label D? m2 = df['Label'].shift(-1).eq('D') # create a mask combining both conditions mask = m1&m2 # select the matching rows and the next one (boolean OR) df[mask|mask.shift()] output: Name Number Label 0 John 6 A 1 Paul 6 D 3 Geeta 4 A 4 Jay 6 D update: match on group as your rows are sorted per group you can add another condition: m1 = df['Label'].eq('A') m2 = df['Label'].shift(-1).eq('D') m3 = df['Number'].eq(df['Number'].shift(-1)) mask = m1&m2&m3 df[mask|mask.shift()] output: Name Number Label 2 Peter 4 A 3 Jin 4 D 4 John 6 A 5 Paul 6 D def function1(dd:pd.DataFrame): id=dd[(dd.Label=='D')&(dd.Label.shift()=='A')].index return dd.loc[id.union(id-1).sort_values()] df.groupby('Number').apply(function1).reset_index(drop=True) Name Number Label 0 Peter 4 A 1 Jin 4 D 2 John 6 A 3 Paul 6 D
how to select rows with a certain pattern
I'm stuck in a problem, because I can't find any solution to deal with it, I have the following sample: data = [['John', 6, 'A'], ['Paul', 6, 'D'], ['Juli', 9, 'D'], ['Geeta', 4, 'A'], ['Jay', 6, 'D'], ['Sara', 6, 'A'], ['Mario', 3, 'D'], ['Peter', 4, 'A'], ['Jin', 4, 'D'], ['Carl', 6, 'A']] df = pd.DataFrame(data, columns=['Name', 'Number', 'Label']) I previously sorted by Number with the following line of code: df = df.sort_values('Number') and got this output: Name Number Label Mario 3 D Geeta 4 A Peter 4 A Jin 4 D John 6 A Paul 6 D Jay 6 D Sara 6 A Carl 6 A Juli 9 D So I just want to select pairs of rows which have an 'A' in the last column followed by a row with a 'D' in the last column, and find all pairs of rows that match this pattern in the same group (I don't want the last 'A' of a group and the 'D' of the next group), so the solution of the problem is: Name Number Label Peter 4 A Jin 4 D John 6 A Paul 6 D Can anyone help me?
[ "You need to use:\n# is the row label A?\nm1 = df['Label'].eq('A')\n# id the next row label D?\nm2 = df['Label'].shift(-1).eq('D')\n# create a mask combining both conditions\nmask = m1&m2\n\n# select the matching rows and the next one (boolean OR)\ndf[mask|mask.shift()]\n\noutput:\n Name Number Label\n0 John 6 A\n1 Paul 6 D\n3 Geeta 4 A\n4 Jay 6 D\n\nupdate: match on group\nas your rows are sorted per group you can add another condition:\nm1 = df['Label'].eq('A')\nm2 = df['Label'].shift(-1).eq('D')\nm3 = df['Number'].eq(df['Number'].shift(-1))\nmask = m1&m2&m3\n\ndf[df[mask|mask.shift()]]\n\noutput:\n Name Number Label\n2 Peter 4 A\n3 Jin 4 D\n4 John 6 A\n5 Paul 6 D\n\n", " def function1(dd:pd.DataFrame):\n id=dd[(dd.Label=='D')&(dd.Label.shift()=='A')].index\n return dd.loc[id.union(id-1).sort_values()]\n df.groupby('Number').apply(function1).reset_index(drop=True)\n \n Name Number Label\n0 Peter 4 A\n1 Jin 4 D\n2 John 6 A\n3 Paul 6 D\n\n" ]
[ 3, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0072435732_pandas_python.txt
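A small variant of the accepted answer's update, sketched here: the same-group guard can be expressed with a per-group shift instead of comparing the Number column directly. It still assumes the frame is sorted by Number first (as in the question), so rows of a group are adjacent:

df = df.sort_values('Number', kind='stable')
next_label = df.groupby('Number')['Label'].shift(-1)  # NaN at group boundaries
mask = df['Label'].eq('A') & next_label.eq('D')
pairs = df[mask | mask.shift(fill_value=False)]
print(pairs)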
Q: Garbage collection in python module I am using a very simple ctypes module: % cat acme/__init__.py from acme import lowlevel and % cat acme/lowlevel.py import logging _lib = cdll.LoadLibrary("libacme.so.0") def _func(name, restype, argtypes): func = getattr(_lib, name) func.restype = restype func.argtypes = argtypes return func def py_log_func(a, b, c): log_levels = { 1: logging.DEBUG, 2: logging.INFO, 3: logging.WARNING, 4: logging.ERROR, 5: logging.CRITICAL, } log = logging.getLogger(b) log.log(log_levels[a], c) return LOGFUNC = CFUNCTYPE(None, c_int, c_char_p, c_char_p) acme_log_listener_configure = _func("acme_log_listener_configure", None, [LOGFUNC]) # setup default listener: acme_log_listener_configure(LOGFUNC(py_log_func)) For any experienced Python developer the error is quite obvious. So my question is: is the current fix the correct one: # store at module level the log function to prevent python from doing garbage # collection on the function: PY_LOG_FUNC = LOGFUNC(py_log_func) # setup default listener: acme_log_listener_configure(PY_LOG_FUNC) In other words, is PY_LOG_FUNC guaranteed to have a single value throughout the Python process lifetime? Per documentation: Make sure you keep references to CFUNCTYPE() objects as long as they are used from C code. ctypes doesn’t, and if you don’t, they may be garbage collected, crashing your program when a callback is made. ref: https://docs.python.org/3/library/ctypes.html#callback-functions A: If you don't reassign PY_LOG_FUNC and it doesn't go out of scope it won't change. Your fix works. Here's an alternative. Decorate the Python function with the C callback signature. the decorator is the same as coding py_log_func = LOGFUNC(py_log_func) so it redefines the Python function name as the C callback so it is in-scope as long as the function exists. LOGFUNC = CFUNCTYPE(None, c_int, c_char_p, c_char_p) @LOGFUNC def py_log_func(a, b, c): ... ... acme_log_listener_configure(py_log_func)
Garbage collection in python module
I am using a very simple ctypes module: % cat acme/__init__.py from acme import lowlevel and % cat acme/lowlevel.py import logging _lib = cdll.LoadLibrary("libacme.so.0") def _func(name, restype, argtypes): func = getattr(_lib, name) func.restype = restype func.argtypes = argtypes return func def py_log_func(a, b, c): log_levels = { 1: logging.DEBUG, 2: logging.INFO, 3: logging.WARNING, 4: logging.ERROR, 5: logging.CRITICAL, } log = logging.getLogger(b) log.log(log_levels[a], c) return LOGFUNC = CFUNCTYPE(None, c_int, c_char_p, c_char_p) acme_log_listener_configure = _func("acme_log_listener_configure", None, [LOGFUNC]) # setup default listener: acme_log_listener_configure(LOGFUNC(py_log_func)) For any experienced Python developer the error is quite obvious. So my question is: is the current fix the correct one: # store at module level the log function to prevent python from doing garbage # collection on the function: PY_LOG_FUNC = LOGFUNC(py_log_func) # setup default listener: acme_log_listener_configure(PY_LOG_FUNC) In other words, is PY_LOG_FUNC guaranteed to have a single value throughout the Python process lifetime? Per documentation: Make sure you keep references to CFUNCTYPE() objects as long as they are used from C code. ctypes doesn’t, and if you don’t, they may be garbage collected, crashing your program when a callback is made. ref: https://docs.python.org/3/library/ctypes.html#callback-functions
[ "If you don't reassign PY_LOG_FUNC and it doesn't go out of scope it won't change.\nYour fix works.\nHere's an alternative. Decorate the Python function with the C callback signature. the decorator is the same as coding py_log_func = LOGFUNC(py_log_func) so it redefines the Python function name as the C callback so it is in-scope as long as the function exists.\nLOGFUNC = CFUNCTYPE(None, c_int, c_char_p, c_char_p)\n\n@LOGFUNC\ndef py_log_func(a, b, c):\n ...\n\n...\nacme_log_listener_configure(py_log_func)\n\n" ]
[ 1 ]
[]
[]
[ "callback", "ctypes", "python" ]
stackoverflow_0074517129_callback_ctypes_python.txt
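Another common pattern, sketched here with the names from the question, is a module-level registry that keeps every callback wrapper alive; in effect it is the same fix as PY_LOG_FUNC, but it also works when callbacks are created dynamically:

# module-level registry: holds references so ctypes wrappers are never collected
_KEEPALIVE = []

def make_log_callback(py_func):
    cb = LOGFUNC(py_func)   # wrap the Python function for the C side
    _KEEPALIVE.append(cb)   # keep a reference for the process lifetime
    return cb

acme_log_listener_configure(make_log_callback(py_log_func))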
Q: Vscode: always running the same python file when pressing the run button By default the run button always runs the file you're currently viewing, and it's annoying because most of the time I don't want that: I'll be editing another file and then want to run my main.py file, so instead I have to go into the main file and then execute it. How can I change this? I tried looking online but couldn't find anything useful. A: Create a launch.json file in the Run and Debug panel, and then replace the configuration "program": "${file}", with "program": "./main.py",. File structure pytest11 |-.venv |-.vscode | |-launch.json |-demo.py |-main.py launch.json { "version": "0.2.0", "configurations": [ { "name": "Python: Current File", "type": "python", "request": "launch", "program": "./main.py", "console": "integratedTerminal", "justMyCode": true } ] } Then select Run Without Debugging, or use the shortcut key Ctrl+F5 to run the code.
Vscode: always running the same python file when pressing the run button
By default the run button always runs the file you're currently viewing, and it's annoying because most of the time I don't want that: I'll be editing another file and then want to run my main.py file, so instead I have to go into the main file and then execute it. How can I change this? I tried looking online but couldn't find anything useful.
[ "Create a launch.json file in the Run and Debug panel, and then replace the configuration \"program\": \"${file}\", with \"program\": \"./main.py\",.\n\nFile structure\npytest11\n|-.venv\n|-.vscode\n| |-launch.json\n|-demo.py\n|-main.py\n\nlaunch.json\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"name\": \"Python: Current File\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"./main.py\",\n \"console\": \"integratedTerminal\",\n \"justMyCode\": true\n }\n ]\n}\n\nThen select Run Without Debugging, or use the shortcut key Ctrl+F5to run the code.\n\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074534084_python_visual_studio_code.txt
Q: Function equivalent of Excel's SUMIFS() I have a sales table with columns item, week, and sales. I wanted to create a week to date sales column (wtd sales) that is a weekly roll-up of sales per item. I have no idea how to create this in Python. I'm stuck at groupby(), which probably is not the answer. Can anyone help? output_df['wtd sales'] = input_df.groupby(['item'])['sales'].transform(wtd) A: As I stated in my comment, you are looking for cumsum(): import pandas as pd df = pd.DataFrame({ 'items': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'], 'weeks': [1, 2, 3, 4, 1, 2, 3, 4], 'sales': [100, 101, 102, 130, 10, 11, 12, 13] }) df.groupby(['items'])['sales'].cumsum() Which results in: 0 100 1 201 2 303 3 433 4 10 5 21 6 33 7 46 Name: sales, dtype: int64 I'm using: pd.__version__ '1.5.1' Putting it all together: import pandas as pd df = pd.DataFrame({ 'items': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'], 'weeks': [1, 2, 3, 4, 1, 2, 3, 4], 'sales': [100, 101, 102, 130, 10, 11, 12, 13] }) df['wtds'] = df.groupby(['items'])['sales'].cumsum() Resulting in: items weeks sales wtds 0 A 1 100 100 1 A 2 101 201 2 A 3 102 303 3 A 4 130 433 4 B 1 10 10 5 B 2 11 21 6 B 3 12 33 7 B 4 13 46
Function equivalent of Excel's SUMIFS()
I have a sales table with columns item, week, and sales. I wanted to create a week to date sales column (wtd sales) that is a weekly roll-up of sales per item. I have no idea how to create this in Python. I'm stuck at groupby(), which probably is not the answer. Can anyone help? output_df['wtd sales'] = input_df.groupby(['item'])['sales'].transform(wtd)
[ "As I stated in my comment, you are looking for cumsum():\nimport pandas as pd\n\ndf = pd.DataFrame({\n 'items': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],\n 'weeks': [1, 2, 3, 4, 1, 2, 3, 4],\n 'sales': [100, 101, 102, 130, 10, 11, 12, 13]\n})\n\ndf.groupby(['items'])['sales'].cumsum()\n\nWhich results in:\n0 100\n1 201\n2 303\n3 433\n4 10\n5 21\n6 33\n7 46\nName: sales, dtype: int64\n\nI'm using:\npd.__version__\n'1.5.1'\n \n\nPutting it all together:\nimport pandas as pd\n\ndf = pd.DataFrame({\n 'items': ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B'],\n 'weeks': [1, 2, 3, 4, 1, 2, 3, 4],\n 'sales': [100, 101, 102, 130, 10, 11, 12, 13]\n})\n\ndf['wtds'] = df.groupby(['items'])['sales'].cumsum()\n\nResulting in:\n items weeks sales wtds\n0 A 1 100 100\n1 A 2 101 201\n2 A 3 102 303\n3 A 4 130 433\n4 B 1 10 10\n5 B 2 11 21\n6 B 3 12 33\n7 B 4 13 46 \n\n" ]
[ 0 ]
[]
[]
[ "excel", "python" ]
stackoverflow_0074540921_excel_python.txt
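For completeness: if the intent was the literal SUMIFS behaviour (every row shows the total for its item rather than a running week-to-date figure), groupby with transform('sum') is the closer match; cumsum above gives the running total:

# total sales per item, broadcast back to every row (SUMIFS-style)
df['item_total'] = df.groupby('items')['sales'].transform('sum')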
Q: my flowAverage function won't produce an output Part 3 – Create the Functions to Analyse a Packet The flowAverage function won't produce an output, please help (Python). For you to know if a packet is involved in malicious activity or not you must first identify characteristics of malicious traffic and then find a way to represent this in Python. For this assignment we will use four metrics to determine if a packet is malicious or not. Average Packet Size – This metric will accept a list of packets and gets the average payload size of all the packets. It will return a list of packets that are above the average of the list. Here is my code: def makePacket(srcIP, dstIP, length, prt, sp, dp, sqn, pld): return ("PK", srcIP, dstIP, [length, prt, [sp, dp], sqn, pld]) def getPacketSrc(pkt): return pkt[1] def getPacketDst(pkt): return pkt[2] def getPacketDetails(pkt): return pkt[3] def isPacket(pkt): return type(pkt[1]) != type([]) and pkt[0] == "PK" and type(pkt) == type(()) def isEmptyPkt(pkt): return getPacketDetails(pkt) == [] def getLength(pkt): a = getPacketDetails(pkt) return a[0] def getProtocol(pkt): a = getPacketDetails(pkt) return a[1] def getSrcPort(pkt): a = getPacketDetails(pkt) b = a[2] return b[0] def getDstPort(pkt): a = getPacketDetails(pkt) b = a[2] return b[1] def getSqn(pkt): a = getPacketDetails(pkt) return a[3] def getPayloadSize(pkt): a= getPacketDetails(pkt) return a[4] def flowAverage(pkt): packets=[] payloads=[] for p in pkt: list(getPacketDetails(p)[1]) payloads.append(pkt)[1] total=0 for p in payloads: total=total+p avg=total/len(payloads) return avg def suspPort(pkt): if getSrcPort(pkt) > 500 or getDstPort(pkt)>500: return True else: return False def suspProto(pkt): protoLst=["HTTP","SMTP", "UDP", "TCP", "DHCP"] if getProtocol(pkt) not in protoLst: return True else: return False def ipBlacklist(pkt): ipBlackList=[["213.217.236.184","444.221.232.94","149.88.83.47","223.70.250.146","169.51.6.136","229.22369.24"]] if getPacketSrc(pkt) in IpBlackList: return True else: return False I'm expecting Input 111.202.230.44 62.82.29.190 3 HTTP 80 3463 1562431 87 Sample Output 0 Output Average Packet Size => [('PK', '333.230.18.207', '213.217.236.184', [56, 'IRC', [501, 5643], 1762431, 318]), ('PK', '444.221.232.94', '50.168.160.19', [1003, 'TCP', [4657, 4875], 1962431, 428])] Suspicious Port (pkt) => True Suspicious Port (pk3) => True Suspicious Protocol (pkt) => False Suspicious Protocol (pk4) => False IP Blacklist (pkt) => False IP Blacklist (pk5) => False A: It looks to me like you are returning early from your for loop, instead of iterating over all the packets. To get the average of the packet lengths, you could do something like this: def flowAverage(pkt_list): payloads = [] large_packets = [] for pkt in pkt_list: payloads.append(getPayloadSize(pkt)) total = sum(payloads) avg = total / len(payloads) for pkt in pkt_list: if getPayloadSize(pkt) > avg: large_packets.append(pkt) return large_packets
my flowAverage function won't produce an output
Part 3 – Create the Functions to Analyse a Packet The flowAverage function won't produce an output, please help (Python). For you to know if a packet is involved in malicious activity or not you must first identify characteristics of malicious traffic and then find a way to represent this in Python. For this assignment we will use four metrics to determine if a packet is malicious or not. Average Packet Size – This metric will accept a list of packets and gets the average payload size of all the packets. It will return a list of packets that are above the average of the list. Here is my code: def makePacket(srcIP, dstIP, length, prt, sp, dp, sqn, pld): return ("PK", srcIP, dstIP, [length, prt, [sp, dp], sqn, pld]) def getPacketSrc(pkt): return pkt[1] def getPacketDst(pkt): return pkt[2] def getPacketDetails(pkt): return pkt[3] def isPacket(pkt): return type(pkt[1]) != type([]) and pkt[0] == "PK" and type(pkt) == type(()) def isEmptyPkt(pkt): return getPacketDetails(pkt) == [] def getLength(pkt): a = getPacketDetails(pkt) return a[0] def getProtocol(pkt): a = getPacketDetails(pkt) return a[1] def getSrcPort(pkt): a = getPacketDetails(pkt) b = a[2] return b[0] def getDstPort(pkt): a = getPacketDetails(pkt) b = a[2] return b[1] def getSqn(pkt): a = getPacketDetails(pkt) return a[3] def getPayloadSize(pkt): a= getPacketDetails(pkt) return a[4] def flowAverage(pkt): packets=[] payloads=[] for p in pkt: list(getPacketDetails(p)[1]) payloads.append(pkt)[1] total=0 for p in payloads: total=total+p avg=total/len(payloads) return avg def suspPort(pkt): if getSrcPort(pkt) > 500 or getDstPort(pkt)>500: return True else: return False def suspProto(pkt): protoLst=["HTTP","SMTP", "UDP", "TCP", "DHCP"] if getProtocol(pkt) not in protoLst: return True else: return False def ipBlacklist(pkt): ipBlackList=[["213.217.236.184","444.221.232.94","149.88.83.47","223.70.250.146","169.51.6.136","229.22369.24"]] if getPacketSrc(pkt) in IpBlackList: return True else: return False I'm expecting Input 111.202.230.44 62.82.29.190 3 HTTP 80 3463 1562431 87 Sample Output 0 Output Average Packet Size => [('PK', '333.230.18.207', '213.217.236.184', [56, 'IRC', [501, 5643], 1762431, 318]), ('PK', '444.221.232.94', '50.168.160.19', [1003, 'TCP', [4657, 4875], 1962431, 428])] Suspicious Port (pkt) => True Suspicious Port (pk3) => True Suspicious Protocol (pkt) => False Suspicious Protocol (pk4) => False IP Blacklist (pkt) => False IP Blacklist (pk5) => False
[ "It looks to me like you are returning early from your for loop, instead of iterating over all the packets. To get the average of the packet lengths, you could do something like this:\ndef flowAverage(pkt_list):\n payloads = []\n large_packets = []\n for pkt in pkt_list:\n payloads.append(getPayloadSize(pkt))\n total = sum(payloads)\n avg = total / len(payloads)\n \n for pkt in pkt_list:\n if getPayloadSize(pkt) > avg:\n large_packets.append(pkt)\n return large_packets\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074540972_python.txt
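A quick check of the repaired function using the question's own makePacket constructor; the packet values here are invented purely for the test:

p1 = makePacket("1.1.1.1", "2.2.2.2", 56, "TCP", 80, 443, 1, 100)
p2 = makePacket("3.3.3.3", "4.4.4.4", 90, "UDP", 53, 53, 2, 300)
p3 = makePacket("5.5.5.5", "6.6.6.6", 20, "TCP", 21, 21, 3, 50)

# average payload is (100 + 300 + 50) / 3 = 150, so only p2 should be returned
print(flowAverage([p1, p2, p3]))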
Q: Selenium. NoSuchElementException Can someone help me understand what the problem with this code is? I understand that the question is not new, but what I found just didn't help me; maybe I was searching badly. wd = webdriver.Chrome('chromedriver',options=chrome_options) wd.get('https://www.uniprot.org/uniprotkb/Q14050/entry') sleep(15) Molmass = wd.find_element('xpath','//*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2]') HTML: <div class="decorated-list-item__content">63,616</div> Selector: #sequences > div > div.card__content > section > ul > li:nth-child(2) > div > div.decorated-list-item__content XPATH: //*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2] Error: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2]"} (Session info: headless chrome=107.0.5304.87) I tried searching by class, selectors, and xpath, but nothing helps. I tried to set a timer so that the page had time to load, but there was no result. A: I am assuming you are trying to get the value '63,616', for that you can use any one of the below locators: CSS_SELECTOR: driver.find_element(By.CSS_SELECTOR, ".sequence-container li:nth-of-type(2) .decorated-list-item__content").text XPATH: driver.find_element(By.XPATH, ".//section[@class='sequence-container']//li[2]//div[@class='decorated-list-item__content']").text
Selenium. NoSuchElementException
Can someone help me understand what the problem with this code is? I understand that the question is not new, but what I found just didn't help me; maybe I was searching badly. wd = webdriver.Chrome('chromedriver',options=chrome_options) wd.get('https://www.uniprot.org/uniprotkb/Q14050/entry') sleep(15) Molmass = wd.find_element('xpath','//*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2]') HTML: <div class="decorated-list-item__content">63,616</div> Selector: #sequences > div > div.card__content > section > ul > li:nth-child(2) > div > div.decorated-list-item__content XPATH: //*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2] Error: NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//*[@id="sequences"]/div/div[2]/section/ul/li[2]/div/div[2]"} (Session info: headless chrome=107.0.5304.87) I tried searching by class, selectors, and xpath, but nothing helps. I tried to set a timer so that the page had time to load, but there was no result.
[ "I am assuming you are trying the get the value '63,616', for that you can use any one of the below locators:\nCSS_SELECTOR:\n driver.find_element(By.CSS_SELECTOR, \".sequence-container li:nth-of-type(2) .decorated-list-item__content\").text\n\nXPATH:\ndriver.find_element(By.XPATH, \".//section[@class='sequence-container']//li[2]//div[@class='decorated-list-item__content']\").text\n\n" ]
[ 0 ]
[]
[]
[ "html", "html_parsing", "python", "selenium" ]
stackoverflow_0074538790_html_html_parsing_python_selenium.txt
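Because the page is rendered by JavaScript after load, an explicit wait is usually more reliable than sleep(15). A sketch using the second locator from the answer above (it may need adjusting if the site's markup changes):

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(wd, 30)
mass = wait.until(EC.visibility_of_element_located(
    (By.XPATH, ".//section[@class='sequence-container']"
               "//li[2]//div[@class='decorated-list-item__content']")))
print(mass.text)   # expected: 63,616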
Q: Condense dataset pandas I wish to condense my dataset. Essentially it is a groupby. Data id box status aa box11 hey aa box11 hey aa box11 hey aa box11 hey aa box5 hello aa box5 hello aa box5 hello aa box5 hello aa box5 hello bb box8 no bb box8 no Desired id box status aa box11 hey aa box5 hello bb box8 no Doing df1 = df.groupby(["id"])["box"]).agg() A: DataFrame.drop_duplicates() If you want to be careful and exclude "id" you can use the subset keyword: df1 = df.drop_duplicates(subset = ['box', 'status']) EDIT: To clarify, drop_duplicates() will only drop rows if the full row is duplicated. Subset just tells it which rows to consider. If you had a row where box='box8' and status='hey', this row would not drop. Both are duplicates individually but are in a unique combination.
Condense dataset pandas
I wish to condense my dataset. Essentially it is a groupby. Data id box status aa box11 hey aa box11 hey aa box11 hey aa box11 hey aa box5 hello aa box5 hello aa box5 hello aa box5 hello aa box5 hello bb box8 no bb box8 no Desired id box status aa box11 hey aa box5 hello bb box8 no Doing df1 = df.groupby(["id"])["box"]).agg()
[ "DataFrame.drop_duplicates()\nIf you want to be careful and exclude \"id\" you can use the subset keyword:\ndf1 = df.drop_duplicates(subset = ['box', 'status'])\n\nEDIT:\nTo clarify, drop_duplicates() will only drop rows if the full row is duplicated. Subset just tells it which rows to consider. If you had a row where box='box8' and status='hey', this row would not drop. Both are duplicates individually but are in a unique combination.\n" ]
[ 1 ]
[]
[]
[ "group_by", "numpy", "pandas", "python" ]
stackoverflow_0074541010_group_by_numpy_pandas_python.txt
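A sketch of the whole round trip on the sample data from the question; subset lists all three columns here because the desired output keys on the full (id, box, status) combination:

import pandas as pd

df = pd.DataFrame({
    'id': ['aa'] * 9 + ['bb'] * 2,
    'box': ['box11'] * 4 + ['box5'] * 5 + ['box8'] * 2,
    'status': ['hey'] * 4 + ['hello'] * 5 + ['no'] * 2,
})
out = df.drop_duplicates(subset=['id', 'box', 'status']).reset_index(drop=True)
print(out)
#    id    box status
# 0  aa  box11    hey
# 1  aa   box5  hello
# 2  bb   box8     no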
Q: double quoted elements in csv cant read with pandas I have an input file where every value is stored as a string. It is inside a csv file with each entry inside double quotes. Example file: "column1","column2", "column3", "column4", "column5", "column6" "AM", "07", "1", "SD", "SD", "CR" "AM", "08", "1,2,3", "PR,SD,SD", "PR,SD,SD", "PR,SD,SD" "AM", "01", "2", "SD", "SD", "SD" There are only six columns. What options do I need to enter to pandas read_csv to read this correctly? I currently am trying: import pandas as pd df = pd.read_csv(file, quotechar='"') but this gives me the error message: CParserError: Error tokenizing data. C error: Expected 6 fields in line 3, saw 14 Which obviously means that it is ignoring the '"' and parsing every comma as a field. However, for line 3, columns 3 through 6 should be strings with commas in them. ("1,2,3", "PR,SD,SD", "PR,SD,SD", "PR,SD,SD") How do I get pandas.read_csv to parse this correctly? Thanks. A: This will work. It falls back to the python parser (as you have non-regular separators, e.g. they are comma and sometimes space). If you only have commas it would use the c-parser and be much faster. In [1]: import csv In [2]: !cat test.csv "column1","column2", "column3", "column4", "column5", "column6" "AM", "07", "1", "SD", "SD", "CR" "AM", "08", "1,2,3", "PR,SD,SD", "PR,SD,SD", "PR,SD,SD" "AM", "01", "2", "SD", "SD", "SD" In [3]: pd.read_csv('test.csv',sep=',\s+',quoting=csv.QUOTE_ALL) pandas/io/parsers.py:637: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators; you can avoid this warning by specifying engine='python'. ParserWarning) Out[3]: "column1","column2" "column3" "column4" "column5" "column6" "AM" "07" "1" "SD" "SD" "CR" "AM" "08" "1,2,3" "PR,SD,SD" "PR,SD,SD" "PR,SD,SD" "AM" "01" "2" "SD" "SD" "SD" A: This worked for me: (I used Python 3.9) dataset = pd.read_csv('test.csv', sep=',', skipinitialspace=True)
double quoted elements in csv cant read with pandas
I have an input file where every value is stored as a string. It is inside a csv file with each entry inside double quotes. Example file: "column1","column2", "column3", "column4", "column5", "column6" "AM", "07", "1", "SD", "SD", "CR" "AM", "08", "1,2,3", "PR,SD,SD", "PR,SD,SD", "PR,SD,SD" "AM", "01", "2", "SD", "SD", "SD" There are only six columns. What options do I need to enter to pandas read_csv to read this correctly? I currently am trying: import pandas as pd df = pd.read_csv(file, quotechar='"') but this gives me the error message: CParserError: Error tokenizing data. C error: Expected 6 fields in line 3, saw 14 Which obviously means that it is ignoring the '"' and parsing every comma as a field. However, for line 3, columns 3 through 6 should be strings with commas in them. ("1,2,3", "PR,SD,SD", "PR,SD,SD", "PR,SD,SD") How do I get pandas.read_csv to parse this correctly? Thanks.
[ "This will work. It falls back to the python parser (as you have non-regular separators, e.g. they are comma and sometimes space). If you only have commas it would use the c-parser and be much faster.\nIn [1]: import csv\n\nIn [2]: !cat test.csv\n\"column1\",\"column2\", \"column3\", \"column4\", \"column5\", \"column6\"\n\"AM\", \"07\", \"1\", \"SD\", \"SD\", \"CR\"\n\"AM\", \"08\", \"1,2,3\", \"PR,SD,SD\", \"PR,SD,SD\", \"PR,SD,SD\"\n\"AM\", \"01\", \"2\", \"SD\", \"SD\", \"SD\"\n\nIn [3]: pd.read_csv('test.csv',sep=',\\s+',quoting=csv.QUOTE_ALL)\npandas/io/parsers.py:637: ParserWarning: Falling back to the 'python' engine because the 'c' engine does not support regex separators; you can avoid this warning by specifying engine='python'.\n ParserWarning)\nOut[3]: \n \"column1\",\"column2\" \"column3\" \"column4\" \"column5\" \"column6\"\n\"AM\" \"07\" \"1\" \"SD\" \"SD\" \"CR\"\n\"AM\" \"08\" \"1,2,3\" \"PR,SD,SD\" \"PR,SD,SD\" \"PR,SD,SD\"\n\"AM\" \"01\" \"2\" \"SD\" \"SD\" \"SD\"\n\n", "This worked for me: (I used Python 3.9)\ndataset = pd.read_csv('test.csv', sep=',', skipinitialspace=True)\n\n" ]
[ 27, 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0026595819_csv_pandas_python.txt
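The same file can also be parsed with the standard-library csv module once it is told to ignore the space after each delimiter; a small sketch:

import csv

with open('test.csv', newline='') as f:
    rows = list(csv.reader(f, skipinitialspace=True))

print(rows[2])
# ['AM', '08', '1,2,3', 'PR,SD,SD', 'PR,SD,SD', 'PR,SD,SD']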
Q: WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) Whenever I install a pip library in Python, I get a series of warnings. For example: WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) How can I avoid getting these warnings? A: The warning can be fixed as follows: go to the lib\site-packages folder, then look for folders whose names start with ~ (these are the truncated names mentioned in the warning) and remove them. This fixes the warning so it no longer appears. I solved this error by heading over to... [ START > ADD OR REMOVE PROGRAMS > SEARCH PYTHON ] Once there, I removed Python 3.10.1. That should solve the problem.
WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages)
Whenever I install a pip library in Python, I get a series of warnings. For example : WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution -ip (c:\python310\lib\site-packages) WARNING: Ignoring invalid distribution - (c:\python310\lib\site-packages) How can I avoid getting these warnings ?
[ "the warning below:\n\ncan fix as follows.\ngo to the lib\\site-packages folder, then look for folders starting with ~ like what you see in the picture below\n\nand mentioned in that warning, then remove them\nthis can be fixed this warning and no longer appears\n", "I solved this error by heading over to...\n[ START > ADD OR REMOVE PROGRAMS > SEARCH PYTHON ]\nOnce there, removed Python 3.10.1. That should solve the problem.\n" ]
[ 29, 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0070998452_pip_python.txt
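The leftover folders that trigger the warning can also be located from Python before deleting them by hand; a cautious sketch that only prints the candidates and deletes nothing:

import site
from pathlib import Path

for sp in site.getsitepackages():
    for p in Path(sp).glob('~*'):
        print('leftover distribution folder:', p)  # review, then delete manually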
Q: how to fix rows getting "None" when using .apply function in pandas dataframe? I'm working on a large dataset of 7GB where I need to use the BERT AI algorithm for text classification. I used a random dataset I found on Kaggle as an alternative example to minimise processing time, and to apply a function I created (for future use on the original dataset) to clean the text by removing punctuation, lemmatizing words, etc. So when I chose the column "Message to examine" to clean all the texts there by using .apply from the pandas library, it works fine, but when I add the result I get to a new dataframe or to the same dataframe, all rows turn into empty rows with no value. Does anyone know how I can fix this issue? Explanation of the issue: I tried the lambda function inside apply: newtext['message to examine'] = newtext['message to examine'].apply(lambda x : clean_text(x)) I tried copying the dataframe and storing it to a new one: newdataframe = pd.DataFrame(df['message to examine'].apply(cleantext)).copy() A: Usually this happens when one forgets to add a return statement to their apply function, which in this case is your clean_text. As a side-note, you can simply do .apply(clean_text) without the lambda function.
how to fix rows getting "None" when using .apply function in pandas dataframe?
I'm working on a large dataset of 7GB where I need to use the BERT AI algorithm for text classification. I used a random dataset I found on Kaggle as an alternative example to minimise the processing time and to apply a function I created (for future use on the original dataset) to clean the text by removing punctuation, lemmatizing words, etc. So when I chose the column "Message to examine" to clean all the texts there by using .apply from the pandas library, it works fine, but when I add the result I get to a new dataframe or to the same dataframe, all rows turn into empty rows with no value. Does anyone know how I can fix this issue? Explanation of the issue: I tried the lambda function inside apply newtext['message to examine'] = newtext['message to examine'].apply(lambda x : clean_text(x)) I tried copying the dataframe and storing it in a new one newdataframe = pd.DataFrame(df['message to examine'].apply(cleantext)).copy()
[ "Usually this happens when one forgets to add a return statement to their apply function, which in this case is your clean_text.\nAs a side-note, you can simply do .apply(clean_text) without the lambda function.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "series", "string" ]
stackoverflow_0074540423_dataframe_pandas_python_series_string.txt
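A minimal sketch of the fix from the record above. The body of clean_text below is a placeholder, since the original function is not shown in the question; the point is the final return statement, without which .apply() stores None in every row.
import string

import pandas as pd

def clean_text(text):
    # placeholder cleaning: lowercase and strip punctuation
    cleaned = text.lower().translate(str.maketrans("", "", string.punctuation))
    return cleaned  # this is the line the original function was missing

df = pd.DataFrame({"message to examine": ["Hello, World!", "BERT rocks."]})
df["message to examine"] = df["message to examine"].apply(clean_text)
print(df)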
Q: SQLAlchemy doesn't correctly create in-memory database Making an API using FastAPI and SQLAlchemy I'm experiencing strange behaviour when database (SQLite) is in-memory which doesn't occur when stored as file. Model: from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column, Integer, String Base = declarative_base() class Thing(Base): __tablename__ = "thing" id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String) I create two global engine objects. One with database as file, the other as in-memory database: from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker args = dict(echo=True, connect_args={"check_same_thread": False}) engine1 = create_engine("sqlite:///db.sqlite", **args) engine2 = create_engine("sqlite:///:memory:", **args) Session1 = sessionmaker(bind=engine1) Session2 = sessionmaker(bind=engine2) I create my FastAPI app and a path to add an object to database: from fastapi import FastAPI app = FastAPI() @app.get("/") def foo(x: int): with {1: Session1, 2: Session2}[x]() as session: session.add(Thing(name="foo")) session.commit() My main to simulate requests and check everything is working: from fastapi.testclient import TestClient if __name__ == "__main__": Base.metadata.create_all(engine1) Base.metadata.create_all(engine2) client = TestClient(app) assert client.get("/1").status_code == 200 assert client.get("/2").status_code == 200 thing table is created in engine1 and committed, same with engine2. On first request "foo" was successfully inserted into engine1's database (stored as file) but second request raises "sqlite3.OperationalError" claiming "no such table: thing". Why is there different behaviour between the two? Why does in-memory database claim the table doesn't exist even though SQLAlchemy logs show create table statement ran successfully and was committed? A: The docs explain this in the following https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#using-a-memory-database-in-multiple-threads To use a :memory: database in a multithreaded scenario, the same connection object must be shared among threads, since the database exists only within the scope of that connection. The StaticPool implementation will maintain a single connection globally, and the check_same_thread flag can be passed to Pysqlite as False It also shows how to get the intended behavior, so in your case from sqlalchemy.pool import StaticPool args = dict(echo=True, connect_args={"check_same_thread": False}, poolclass=StaticPool)
SQLAlchemy doesn't correctly create in-memory database
Making an API using FastAPI and SQLAlchemy I'm experiencing strange behaviour when database (SQLite) is in-memory which doesn't occur when stored as file. Model: from sqlalchemy.ext.declarative import declarative_base from sqlalchemy import Column, Integer, String Base = declarative_base() class Thing(Base): __tablename__ = "thing" id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String) I create two global engine objects. One with database as file, the other as in-memory database: from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker args = dict(echo=True, connect_args={"check_same_thread": False}) engine1 = create_engine("sqlite:///db.sqlite", **args) engine2 = create_engine("sqlite:///:memory:", **args) Session1 = sessionmaker(bind=engine1) Session2 = sessionmaker(bind=engine2) I create my FastAPI app and a path to add an object to database: from fastapi import FastAPI app = FastAPI() @app.get("/") def foo(x: int): with {1: Session1, 2: Session2}[x]() as session: session.add(Thing(name="foo")) session.commit() My main to simulate requests and check everything is working: from fastapi.testclient import TestClient if __name__ == "__main__": Base.metadata.create_all(engine1) Base.metadata.create_all(engine2) client = TestClient(app) assert client.get("/1").status_code == 200 assert client.get("/2").status_code == 200 thing table is created in engine1 and committed, same with engine2. On first request "foo" was successfully inserted into engine1's database (stored as file) but second request raises "sqlite3.OperationalError" claiming "no such table: thing". Why is there different behaviour between the two? Why does in-memory database claim the table doesn't exist even though SQLAlchemy logs show create table statement ran successfully and was committed?
[ "The docs explain this in the following https://docs.sqlalchemy.org/en/14/dialects/sqlite.html#using-a-memory-database-in-multiple-threads\n\nTo use a :memory: database in a multithreaded scenario, the same connection object must be shared among threads, since the database exists only within the scope of that connection. The StaticPool implementation will maintain a single connection globally, and the check_same_thread flag can be passed to Pysqlite as False\n\nIt also shows how to get the intended behavior, so in your case\nfrom sqlalchemy.pool import StaticPool\n\nargs = dict(echo=True, connect_args={\"check_same_thread\": False}, poolclass=StaticPool)\n\n" ]
[ 2 ]
[]
[]
[ "fastapi", "python", "sqlalchemy", "sqlite" ]
stackoverflow_0074536228_fastapi_python_sqlalchemy_sqlite.txt
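Putting the answer in the record above together, a sketch of the corrected engine setup. Only the in-memory engine strictly needs StaticPool; with it, a single connection is shared, so the :memory: database created by Base.metadata.create_all stays visible to every session.
from sqlalchemy import create_engine
from sqlalchemy.pool import StaticPool

engine2 = create_engine(
    "sqlite:///:memory:",
    echo=True,
    connect_args={"check_same_thread": False},
    poolclass=StaticPool,  # one shared connection keeps the in-memory DB alive
)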
Q: Check if module exists, if not install it I want to check if a module exists, if it doesn't I want to install it. How should I do this? So far I have this code which correctly prints f if the module doesn't exist. try: import keyring except ImportError: print 'f' A: import pip def import_or_install(package): try: __import__(package) except ImportError: pip.main(['install', package]) This code simply attempts to import a package, where package is of type str, and if it is unable to, calls pip and attempts to install it from there. A: Here is how it should be done, and if I am wrong, please correct me. However, Noufal seems to confirm it in another answer to this question, so I guess it's right. When writing the setup.py script for some scripts I wrote, I was dependent on the package manager of my distribution to install the required library for me. So, in my setup.py file, I did this: package = 'package_name' try: return __import__(package) except ImportError: return None So if package_name was installed, fine, continue. Else, install it via the package manager which I called using subprocess. A: This approach of dynamic import works really well in cases where you just want to print a message if a module is not installed. Automatically installing a module SHOULDN'T be done like issuing pip via subprocess. That's why we have setuptools (or distribute). We have some great tutorials on packaging, and the task of dependencies detection/installation is as simple as providing install_requires=[ 'FancyDependency', 'otherFancy>=1.0' ]. That's just it! But, if you really NEED to do it by hand, you can use setuptools to help you. from pkg_resources import WorkingSet , DistributionNotFound working_set = WorkingSet() # Printing all installed modules print tuple(working_set) # Detecting if module is installed try: dep = working_set.require('paramiko>=1.0') except DistributionNotFound: pass # Installing it (anyone knows a better way?) from setuptools.command.easy_install import main as install install(['django>=1.2']) A: NOTE: Ipython / Jupyter specific solution. While using notebooks / online kernels, I usually do it using system calls. try: import keyring except: !pip install keyring import keyring P.S. One may wish to call conda install or mamba install instead. A: You can use os.system as follows: import os package = "package_name" try: __import__package except: os.system("pip install "+ package) A: You can launch "pip install %s" % keyring in the except part to do this, but I don't recommend it. The correct way is to package your application using distutils so that when it's installed, dependencies will be pulled in. A: Not all modules can be installed so easily. Not all of them have easy-install support, some can only be installed by building them. Others require some non-python prerequisites, like gcc, which makes things even more complicated (and forget about it working well on Windows). So I would say you could probably make it work for some predetermined modules, but there's no chance it'll be something generic that works for any module. A: I made an import_neccessary_modules() function to fix this common issue. # ====================================================================================== # == Fix any missing Module, that need to be installed with PIP.exe. [Windows System] == # ====================================================================================== import importlib, os def import_neccessary_modules(modname:str)->None: ''' Import a Module, and if that fails, try to use the Command Window PIP.exe to install it, if that fails, because PIP in not in the Path, try find the location of PIP.exe and again attempt to install from the Command Window. ''' try: # If Module it is already installed, try to Import it importlib.import_module(modname) print(f"Importing {modname}") except ImportError: # Error if Module is not installed Yet, the '\033[93m' is just code to print in certain colors print(f"\033[93mSince you don't have the Python Module [{modname}] installed!") print("I will need to install it using Python's PIP.exe command.\033[0m") if os.system('PIP --version') == 0: # No error from running PIP in the Command Window, therefor PIP.exe is in the %PATH% os.system(f'PIP install {modname}') else: # Error, PIP.exe is NOT in the Path!! So I'll try to find it. pip_location_attempt_1 = sys.executable.replace("python.exe", "") + "pip.exe" pip_location_attempt_2 = sys.executable.replace("python.exe", "") + "scripts\pip.exe" if os.path.exists(pip_location_attempt_1): # The Attempt #1 File exists!!! os.system(pip_location_attempt_1 + " install " + modname) elif os.path.exists(pip_location_attempt_2): # The Attempt #2 File exists!!! os.system(pip_location_attempt_2 + " install " + modname) else: # Neither Attempts found the PIP.exe file, So i Fail... print(f"\033[91mAbort!!! I can't find PIP.exe program!") print(f"You'll need to manually install the Module: {modname} in order for this program to work.") print(f"Find the PIP.exe file on your computer and in the CMD Command window...") print(f" in that directory, type PIP.exe install {modname}\033[0m") exit() import_neccessary_modules('art') import_neccessary_modules('pyperclip') import_neccessary_modules('winsound') A: Here is my approach. The idea is to loop until Python has installed all the modules, using the built-in module "pip". import pip while True: try: #import your modules here. ! import seaborn import bokeh break except ImportError as err_mdl: print((err_mdl.name)) pip.main(['install', err_mdl.name]) A: I tried installing transformers using the below method and it worked fine. Similarly, you can just replace your library name instead of "transformers". import pip try: from transformers import pipeline except ModuleNotFoundError: pip.main(['install', "transformers"]) from transformers import pipeline A: I tried this in a new virtual environment with no packages installed and it installed the necessary package i.e. opencv-python Example is given below import os try: import cv2 except ImportError: os.system('pip install opencv-python')
Check if module exists, if not install it
I want to check if a module exists, if it doesn't I want to install it. How should I do this? So far I have this code which correctly prints f if the module doesn't exist. try: import keyring except ImportError: print 'f'
[ "import pip\n\ndef import_or_install(package):\n try:\n __import__(package)\n except ImportError:\n pip.main(['install', package]) \n\nThis code simply attempt to import a package, where package is of type str, and if it is unable to, calls pip and attempt to install it from there.\n", "Here is how it should be done, and if I am wrong, please correct me. However, Noufal seems to confirm it in another answer to this question, so I guess it's right. \nWhen writing the setup.py script for some scripts I wrote, I was dependent on the package manager of my distribution to install the required library for me.\nSo, in my setup.py file, I did this:\npackage = 'package_name'\ntry:\n return __import__(package)\nexcept ImportError:\n return None\n\nSo if package_name was installed, fine, continue. Else, install it via the package manager which I called using subprocess.\n", "This approach of dynamic import work really well in cases you just want to print a message if module is not installed. Automatically installing a module SHOULDN'T be done like issuing pip via subprocess. That's why we have setuptools (or distribute).\nWe have some great tutorials on packaging, and the task of dependencies detection/installation is as simple as providing install_requires=[ 'FancyDependency', 'otherFancy>=1.0' ]. That's just it!\nBut, if you really NEED to do by hand, you can use setuptools to help you.\nfrom pkg_resources import WorkingSet , DistributionNotFound\nworking_set = WorkingSet()\n\n# Printing all installed modules\nprint tuple(working_set)\n\n# Detecting if module is installed\ntry:\n dep = working_set.require('paramiko>=1.0')\nexcept DistributionNotFound:\n pass\n\n# Installing it (anyone knows a better way?)\nfrom setuptools.command.easy_install import main as install\ninstall(['django>=1.2'])\n\n", "NOTE: Ipython / Jupyter specific solution.\nWhile using notebooks / online kernels, I usually do it using systems call.\ntry:\n import keyring\nexcept:\n !pip install keyring\n import keyring\n\nP.S. One may wish to call conda install or mamba install instead.\n", "You can use os.system as follows:\nimport os\n\npackage = \"package_name\"\n\ntry:\n __import__package\nexcept:\n os.system(\"pip install \"+ package)\n\n", "You can launch pip install %s\"%keyring in the except part to do this but I don't recommend it. The correct way is to package your application using distutils so that when it's installed, dependencies will be pulled in.\n", "Not all modules can be installed so easily. Not all of them have easy-install support, some can only be installed by building them.. others require some non-python prerequisites, like gcc, which makes things even more complicated (and forget about it working well on Windows). \nSo I would say you could probably make it work for some predetermined modules, but there's no chance it'll be something generic that works for any module.\n", "I made an import_neccessary_modules() function to fix this common issue.\n# ======================================================================================\n# == Fix any missing Module, that need to be installed with PIP.exe. 
[Windows System] ==\n# ======================================================================================\nimport importlib, os\ndef import_neccessary_modules(modname:str)->None:\n '''\n Import a Module,\n and if that fails, try to use the Command Window PIP.exe to install it,\n if that fails, because PIP in not in the Path,\n try find the location of PIP.exe and again attempt to install from the Command Window.\n '''\n try:\n # If Module it is already installed, try to Import it\n importlib.import_module(modname)\n print(f\"Importing {modname}\")\n except ImportError:\n # Error if Module is not installed Yet, the '\\033[93m' is just code to print in certain colors\n print(f\"\\033[93mSince you don't have the Python Module [{modname}] installed!\")\n print(\"I will need to install it using Python's PIP.exe command.\\033[0m\")\n if os.system('PIP --version') == 0:\n # No error from running PIP in the Command Window, therefor PIP.exe is in the %PATH%\n os.system(f'PIP install {modname}')\n else:\n # Error, PIP.exe is NOT in the Path!! So I'll try to find it.\n pip_location_attempt_1 = sys.executable.replace(\"python.exe\", \"\") + \"pip.exe\"\n pip_location_attempt_2 = sys.executable.replace(\"python.exe\", \"\") + \"scripts\\pip.exe\"\n if os.path.exists(pip_location_attempt_1):\n # The Attempt #1 File exists!!!\n os.system(pip_location_attempt_1 + \" install \" + modname)\n elif os.path.exists(pip_location_attempt_2):\n # The Attempt #2 File exists!!!\n os.system(pip_location_attempt_2 + \" install \" + modname)\n else:\n # Neither Attempts found the PIP.exe file, So i Fail...\n print(f\"\\033[91mAbort!!! I can't find PIP.exe program!\")\n print(f\"You'll need to manually install the Module: {modname} in order for this program to work.\")\n print(f\"Find the PIP.exe file on your computer and in the CMD Command window...\")\n print(f\" in that directory, type PIP.exe install {modname}\\033[0m\")\n exit()\n\n\nimport_neccessary_modules('art')\nimport_neccessary_modules('pyperclip')\nimport_neccessary_modules('winsound')\n\n", "Here is my approach. The idea is loop until python has already installed all modules by built in module as \"pip\" .\nimport pip\n\nwhile True:\n \n try:\n #import your modules here. !\n import seaborn\n import bokeh\n\n break\n\n except ImportError as err_mdl:\n \n print((err_mdl.name))\n pip.main(['install', err_mdl.name])\n\n", "I tried installing transformers using the below method and it worked fine. Similarly, you can just replace your library name instead of \"transformers\".\n\nimport pip\ntry:\n from transformers import pipeline\nexcept ModuleNotFoundError:\n pip.main(['install', \"transformers\"])\n from transformers import pipeline\n\n", "I tried this in a new virtual envoirnment with no packages installed and it installed the necessary package i.e. opencv-python\nExample is given below\nimport os\n\ntry:\n import cv2\nexcept ImportError:\n os.system('pip install opencv-python')\n\n" ]
[ 47, 20, 11, 9, 2, 1, 1, 1, 1, 0, 0 ]
[]
[]
[ "import", "module", "python" ]
stackoverflow_0004527554_import_module_python.txt
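A sketch of a commonly recommended variant of the record above: importlib.util.find_spec tests for the module without importing it, and pip is invoked through the running interpreter rather than via a bare pip.main or os.system call. The pip_name parameter is an assumption for packages whose install name differs from their import name.
import importlib
import importlib.util
import subprocess
import sys

def ensure_package(module_name, pip_name=None):
    # find_spec returns None when the module cannot be found
    if importlib.util.find_spec(module_name) is None:
        subprocess.check_call(
            [sys.executable, "-m", "pip", "install", pip_name or module_name]
        )
    return importlib.import_module(module_name)

keyring = ensure_package("keyring")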
Q: How to pass a list between two functions? I have a list that's been modified in one function, and I want it to go to another function in order to be read and modified further. def get_cards_player(): deck = Deck() deck.shuffle() player1 = [] player2 = [] you_won = False #if u won, var is true, if u lose, var is false for i in range(5): player1_cards = deck.get_card() player1.append(player1_cards.get_name()) player2_cards = deck.get_card() player2.append(player2_cards.get_name()) print('Your Hand:',', '.join(player1)) print('Opponent Hand:',', '.join(player2)) return player1, player2 def calc_winner(): I want player1 and player2 lists in get_cards_player function to go to the calc_winner function so that I can read the list and do stuff with it. Thanks. A: calc_winner should take these lists as parameters. I purposely changed the names to highlight that parameters don't have to have the same name in different functions. get_cards_player already creates and returns the lists, so no need to change. Again, to show the different ways you can do this, I'm keeping the tuple containing the two players and using that in the call. def calc_winner(p1list, p2list): print(p1list, p2list) players = get_cards_player() calc_winner(players[0], players[1])
How to pass a list between two functions?
I have a list that's been modified in one function, and I want it to go to another function in order to be read and modified further. def get_cards_player(): deck = Deck() deck.shuffle() player1 = [] player2 = [] you_won = False #if u won, var is true, if u lose, var is false for i in range(5): player1_cards = deck.get_card() player1.append(player1_cards.get_name()) player2_cards = deck.get_card() player2.append(player2_cards.get_name()) print('Your Hand:',', '.join(player1)) print('Opponent Hand:',', '.join(player2)) return player1, player2 def calc_winner(): I want player1 and player2 lists in get_cards_player function to go to the calc_winner function so that I can read the list and do stuff with it. Thanks.
[ "calc_winner should take these lists as parameters. I purposely changed the names to highlight that parameters don't have to have the same name in different functions.\nget_cards_player already creates and returns the lists, so no need to change. Again, to show the different ways you can do this, I'm remembering the tuple containing the two players and using that in the call.\ndef calc_winner(p1list, p2list):\n print(p1list, p2list)\n\nplayers = get_card_player()\ncalc_winner(players[0], players[1])\n\n" ]
[ 1 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074541154_list_python.txt
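The same call from the answer above written with tuple unpacking, as a small sketch; it assumes the question's Deck class and get_cards_player function are defined. Since get_cards_player returns (player1, player2), the two lists can be unpacked directly instead of indexed.
def calc_winner(p1list, p2list):
    print(p1list, p2list)  # compare the two hands here

p1, p2 = get_cards_player()  # unpack the returned tuple
calc_winner(p1, p2)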
Q: Using """ text here"""" isn't printing items on a new line I've tried to google an answer and I probably just don't know the right thing to look for, so I'm not finding anything. Sorry if this is a newbish question, I'm still fairly new to python. Thank you in advance for your help! I'm defining a group of characters forming the words Thank You using """, but when I call it, it's not printing correctly. What am I doing wrong? This is my code: thankyou = """ ______ _ _ (_) | | | | | | | | __ _ _ | | __ | |/ \ / | / |/ | |/_) | | / \ | | (_/\_/| |/\_/|_/ | |_/| \_ \_/|/\__// \_/|_ /| \| """ How the code looks when it's called: The code and how it's supposed to look I expected the characters to print on a new line in the same way I added them. Instea, they're coming out jumbled. I've tried googling and didn't find anything. I've also tried putting it all on one line and using \n to break a new line, but that's also not working. Any advice would be appreciated! thankyou = '\n ______ _ _ \n () | | | | | \n | | | __ _ _ | | __ \n | |/ \ / | / |/ | |/) | | / \ | | \n (/_/| |/_/|/ | |/| _ _/|/_// _/|_ \n /| \n |' I also tried this: thankyou = " ______ _ _ "\ " (_) | | | | | "\ " | | | __ _ _ | | __ "\ " | |/ \ / | / |/ | |/_) | | / \ | | "\ " (_/\_/| |/\_/|_/ | |_/| \_ \_/|/\__// \_/|_ "\ " /| "\ " \|"\ edit: – jasonharper YOUR COMMENT IS THE ONE THAT WORKED! THANK YOU! A: Try resizing your window. when the terminal is to small, it will word wrap and have the characters that don't fit on the next line, thus making your output all jumbled and weird.
Using """ text here"""" isn't printing items on a new line
I've tried to google an answer and I probably just don't know the right thing to look for, so I'm not finding anything. Sorry if this is a newbish question, I'm still fairly new to python. Thank you in advance for your help! I'm defining a group of characters forming the words Thank You using """, but when I call it, it's not printing correctly. What am I doing wrong? This is my code: thankyou = """ ______ _ _ (_) | | | | | | | | __ _ _ | | __ | |/ \ / | / |/ | |/_) | | / \ | | (_/\_/| |/\_/|_/ | |_/| \_ \_/|/\__// \_/|_ /| \| """ How the code looks when it's called: The code and how it's supposed to look I expected the characters to print on a new line in the same way I added them. Instead, they're coming out jumbled. I've tried googling and didn't find anything. I've also tried putting it all on one line and using \n to break a new line, but that's also not working. Any advice would be appreciated! thankyou = '\n ______ _ _ \n () | | | | | \n | | | __ _ _ | | __ \n | |/ \ / | / |/ | |/) | | / \ | | \n (/_/| |/_/|/ | |/| _ _/|/_// _/|_ \n /| \n |' I also tried this: thankyou = " ______ _ _ "\ " (_) | | | | | "\ " | | | __ _ _ | | __ "\ " | |/ \ / | / |/ | |/_) | | / \ | | "\ " (_/\_/| |/\_/|_/ | |_/| \_ \_/|/\__// \_/|_ "\ " /| "\ " \|"\ edit: – jasonharper YOUR COMMENT IS THE ONE THAT WORKED! THANK YOU!
[ "Try resizing your window.\nwhen the terminal is to small, it will word wrap and have the characters that don't fit on the next line, thus making your output all jumbled and weird.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074540824_python.txt
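A small sketch that confirms the diagnosis in the answer above before resizing: it compares the widest line of the art against the current terminal width. The short banner below is a stand-in for the question's full thankyou string.
import shutil

thankyou = r"""
 _____ _                 _
|_   _| |__   __ _ _ __ | | __ ___
  | | | '_ \ / _` | '_ \| |/ /(_-<
  |_| |_| |_|\__,_|_| |_|_|\_\/__/
"""

art_width = max(len(line) for line in thankyou.splitlines())
term_width = shutil.get_terminal_size().columns
if term_width < art_width:
    print(f"Terminal is {term_width} columns wide but the art needs {art_width};")
    print("lines will wrap and the output will look jumbled.")
print(thankyou)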
Q: Why does C++ not need the "self" parameter that Python uses to define a class? I am a beginner of C++ with a Python background. I have some ambiguity about the process of obtaining instance attributes in Python classes and C++ classes. Below, I list two classes that have the same function in Python and C++ respectively. My problem is that I am used to using the self parameter to distinguish class attributes and instance attributes, and to ensure that instance attributes of different instances do not interfere with each other. But I do not know how C++ can do this without a self parameter. I hope someone can explain in detail how C++ achieves “self” without self, and what happens under the hood? Thanks A: C++ just doesn't write "self" explicitly; you may want to learn about the keyword "this".
Why does C++ not need the "self" parameter that Python uses to define a class?
I am a beginner of C++ with a Python background. I have some ambiguity about the process of obtaining instance attributes in Python classes and C++ classes. Below, I list two classes that have the same function in Python and C++ respectively. My problem is that I am used to using the self parameter to distinguish class attributes and instance attributes, and to ensure that instance attributes of different instances do not interfere with each other. But I do not know how C++ can do this without a self parameter. I hope someone can explain in detail how C++ achieves “self” without self, and what happens under the hood? Thanks
[ "c++ just doesn't write \"self\" explicitly, maybe you need to learn about the keyword \"this\".\n" ]
[ 0 ]
[]
[]
[ "c++", "instance_variables", "oop", "python", "self" ]
stackoverflow_0074541205_c++_instance_variables_oop_python_self.txt
Q: How would I force user input to be only 1 and 0 in my code So I'm trying to force the user to give me an input that is only 1 or 0, and I managed to do so for the most part, but it'll only work if all three inputs are above that, and my code only gives me an input for a def AND(a, b): return a and b def OR(a, b): return a and b def NOR(a, b): return a and b user=[] def main(): a= False b= False c= False n_attempts = 1 for _ in range(n_attempts): a_raw = input("for a, 1 or 0: ") try: a = int(a_raw) except ValueError: print(f"Invalid value for 'a': {a!r}") continue b_raw = input("for a, 1 or 0: ") try: b = int(b_raw) except ValueError: print(f"Invalid value for 'a': {b!r}") continue c_raw = input("for a, 1 or 0: ") try: c = int(c_raw) except ValueError: print(f"Invalid value for 'a': {c!r}") continue print ("Result of (A NOR B) OR (B AND C) is: " , int(OR(NOR(a, b), AND(b, c)))) main() I tried if and elif statements, and they also work to some degree, where it'll activate if all inputs are above 1 or 0 for _ in range(3): a=input("for a, 1 or 0: ") b=input("for b, 1 or 0: ") c=input("for c, 1 or 0: ") if a =="0" or a=="1": break else: print("wrong input") if b =="0" or b=="1": break else: print("wrong input") if c =="0" or c=="1": break else: print("wrong input") I'm supposed to write the code as blocks in functions that will perform each gate. There will be one gate per function. Pass the inputs to the functions and the outputs from the functions. Using that as a reference. A: You can use a while loop to keep asking for a valid input until it gets one. Use a for loop to iterate through the names and store input values in a dict to avoid duplicate code: values = {} for name in 'a', 'b', 'c': while True: try: value = input(f'for {name}, 1 or 0: ') value = values[name] = int(value) assert value in (0, 1) break except (ValueError, AssertionError): print(f"Invalid value for '{name}': {value!r}") print(values['a'], values['b'], values['c']) Demo: https://replit.com/@blhsing/AcclaimedYawningExpertise A: You should not directly assign the value of input to a, b or c value = input("some description here") if value in ["0", "1"]: a = int(value) else: break A: Okay, so I'm assuming a lot here, but I gather that what you want is code that does the thing in the logic gate image. I recommend working backwards from Q. (I'm using empty parentheses for placeholders.) So, the first one back from Q is or. So ()or(). A function for that would be def or_function(a, b): return a or b Using that function would make it or_function((), ()). Then, it's A nor B, or not (A or B) on the top. (not (A or B)) or () Similarly, a function for that would be def nor_function(a, b): return not(a or b) Using functions, it would now be or_function(nor_function(A, B), ()). Then, it's B and C on the bottom. The final answer is (not (A or B)) or (B and C). Since this seems to be your homework, I'll leave you to the last one - it should be fairly similar to the others. Note: If input is 0 or 1, then you need to convert 0 to False and 1 to True. A: I don't know if this is what you want to do. a=bool b=bool c=bool while True: v=["1","0"] a=input("input 1 or 0: ") b=input("input 1 or 0: ") c=input("input 1 or 0: ") if (a and b and c in v): break else: print("wrong input")
How would I force user input to be only 1 and 0 in my code
So I'm trying to force the user to give me an input that is only 1 or 0, and I managed to do so for the most part, but it'll only work if all three inputs are above that, and my code only gives me an input for a def AND(a, b): return a and b def OR(a, b): return a and b def NOR(a, b): return a and b user=[] def main(): a= False b= False c= False n_attempts = 1 for _ in range(n_attempts): a_raw = input("for a, 1 or 0: ") try: a = int(a_raw) except ValueError: print(f"Invalid value for 'a': {a!r}") continue b_raw = input("for a, 1 or 0: ") try: b = int(b_raw) except ValueError: print(f"Invalid value for 'a': {b!r}") continue c_raw = input("for a, 1 or 0: ") try: c = int(c_raw) except ValueError: print(f"Invalid value for 'a': {c!r}") continue print ("Result of (A NOR B) OR (B AND C) is: " , int(OR(NOR(a, b), AND(b, c)))) main() I tried if and elif statements, and they also work to some degree, where it'll activate if all inputs are above 1 or 0 for _ in range(3): a=input("for a, 1 or 0: ") b=input("for b, 1 or 0: ") c=input("for c, 1 or 0: ") if a =="0" or a=="1": break else: print("wrong input") if b =="0" or b=="1": break else: print("wrong input") if c =="0" or c=="1": break else: print("wrong input") I'm supposed to write the code as blocks in functions that will perform each gate. There will be one gate per function. Pass the inputs to the functions and the outputs from the functions. Using that as a reference.
[ "You can use a while loop to keep asking for a valid input until it gets one. Use a for loop to iterate through the names and store input values in a dict to avoid duplicate code:\nvalues = {}\nfor name in 'a', 'b', 'c':\n while True:\n try:\n value = input(f'for {name}, 1 or 0: ')\n value = values[name] = int(value)\n assert value in (0, 1)\n break\n except (ValueError, AssertionError):\n print(f\"Invalid value for '{name}': {value!r}\")\n\nprint(values['a'], values['b'], values['c'])\n\nDemo: https://replit.com/@blhsing/AcclaimedYawningExpertise\n", "you should not direct assignment the value of input to a or b or c\nvalue = input(\"some description here\")\nif value in [\"0\", \"1\"]:\n a = int(value)\nelse:\n break\n\n", "Okay, so I'm assuming a lot here, but I gather that what you want is code that does the thing in the logic gate image. I recommend working backwards from Q.\n(I'm using empty parentheses for placeholders.)\nSo, for the first one back from Q is or. So ()or(). A function for that would be\ndef or_function(a, b):\n return a or b\n\nUsing that function would make it or_function((), ()).\nThen, it's A nor B, or not (A or B) on the top. (not (A or B)) or () Similarly, a function for that would be\ndef nor_function(a, b):\n return not(a or b)\n\nUsing functions, it would now be or_function(nor_function(A, B), ()).\nThen, it's B and C on the bottom. The final answer is (not (A or B)) or (B and C).\nSince this seems to be your homework, I'll leave you to the last one - it should be fairly similar to the others.\nNote: If input is 0 or 1, then you need to convert 0 to False and 1 to True.\n", "I don't know if this is what you want to do.\na=bool\nb=bool\nc=bool\n\nwhile True:\n\n v=[\"1\",\"0\"]\n a=input(\"input 1 or 0: \")\n b=input(\"input 1 or 0: \")\n c=input(\"input 1 or 0: \")\n if (a and b and c in v):\n break\n else:\n print(\"wrong input\")\n\n" ]
[ 0, 0, 0, -3 ]
[]
[]
[ "function", "if_statement", "input", "loops", "python" ]
stackoverflow_0074540891_function_if_statement_input_loops_python.txt
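A compact sketch combining the accepted validation idea from the record above with corrected gate functions. Note that in the question's code OR and NOR both return a and b, which looks like a copy-paste slip; the versions below implement the actual gates.
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOR(a, b):
    return not (a or b)

def read_bit(name):
    while True:  # keep asking until the input is exactly "0" or "1"
        raw = input(f"for {name}, 1 or 0: ")
        if raw in ("0", "1"):
            return int(raw)
        print("wrong input")

a, b, c = (read_bit(n) for n in "abc")
print("Result of (A NOR B) OR (B AND C) is:", int(OR(NOR(a, b), AND(b, c))))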
Q: calling a method inside a class from a different file I am trying to implement python classes and objects in my application code. Currently, I have a file that includes all the frequently used functions. I import them in another file. funcs.py class name1(): def func1(x): return x def func2(y): return y .... file1.py from funcs import func1 from funcs import func2 I'd like to organize the code in class, method and attributes and then invoke them in different files. How do I call a method within a class from another file? What changes do I need to make in funcs.py file? A: If you want to call a method within a class, first you have to instantiate an object of that class, and then call the method in reference to the object. Below is not an ideal implementation but it's just an example. example.py class MyClass: def my_method(self): print('something') object1 = MyClass() object1.my_method() Then when you want to call the method in another file you have to first import it. another.py from .example import MyClass object2 = MyClass() object2.my_method() If you just want to call the method without having to create an object first you can use @staticmethod. class MyClass: @staticmethod def my_method(): print('something') MyClass.my_method() Yet as I said this is not the ideal implementation. As @juanpa.arrivillaga said ideally you cannot just throw in any method and bundle them into a single class. The content of a class is all related to the object you want to define as a class.
calling a method inside a class from a different file
I am trying to implement python classes and objects in my application code. Currently, I have a file that includes all the frequently used functions. I import them in another file. funcs.py class name1(): def func1(x): return x def func2(y): return y .... file1.py from funcs import func1 from funcs import func2 I'd like to organize the code in class, method and attributes and then invoke them in different files. How do I call a method within a class from another file? What changes do I need to make in funcs.py file?
[ "If you want to call a method within a class, first you have to instantiate an object of that class, and then call the method in reference to the object. Below is not an ideal implementation but it's just for example. \nexample.py\nclass MyClass:\n def my_method(self):\n print('something')\n\nobject1 = MyClass()\nobject1.my_method()\n\nThen when you want to call the method in another file you have to first import them.\n another.py\nfrom .example import MyClass\n\nobject2 = MyClass()\nobject2.my_method()\n\nIf you just want to call the method without having to create an object first you can use @staticmethod.\nclass MyClass:\n @staticmethod\n def my_method(self):\n print('something')\n\nMyClass.my_method()\n\nYet as I said this is not the ideal implementation. As @juanpa.arrivillaga said ideally you cannot just throw in any method and bundle them into a single class. The content of a class is all related to the object you want to define as a class.\n" ]
[ 1 ]
[]
[]
[ "class", "methods", "oop", "python" ]
stackoverflow_0074541145_class_methods_oop_python.txt
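A sketch using the question's own file names; the class and method names here are made up for illustration. funcs.py groups related behaviour in a class, and file1.py imports it, instantiates it, and calls the method, as the answer above describes.
# funcs.py
class TextTools:
    def shout(self, text):
        return text.upper()

# file1.py
from funcs import TextTools

tools = TextTools()
print(tools.shout("hello"))  # prints HELLO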
Q: Is there a way to specify a range of valid values for a function argument with type hinting in python? I am a big fan of the type hinting in python, however I am curious if there is a way to specify a valid range of values for a given parameter using type hinting. What I had in mind is something like from typing import * def function( number: Union[float, int], fraction: Float[0.0, 1.0] = 0.5 # give a hint that this should be between 0 and 1, ): return fraction * number I can imagine one can enforce this with an assertion, or perhaps specify what the valid range of values is within the docstring, but it feels like having something like Float[0.0, 1.0] would look more elegant. A: Python 3.9 introduced typing.Annotated: In [75]: from typing import * In [76]: from dataclasses import dataclass In [77]: @dataclass ...: class ValueRange: ...: min: float ...: max: float ...: In [78]: def function( ...: number: Union[float, int], ...: fraction: Annotated[float, ValueRange(0.0, 1.0)] = 0.5 ...: ): ...: return fraction * number ...: Like any other type hint it does not perform any runtime checks: In [79]: function(1, 2) Out[79]: 2 However you can implement your own runtime checks. The code below is just an example, it does not cover all cases and probably an overkill for your simple function: In [88]: import inspect In [89]: @dataclass ...: class ValueRange: ...: min: float ...: max: float ...: ...: def validate_value(self, x): ...: if not (self.min <= x <= self.max): ...: raise ValueError(f'{x} must be in range [{self.min}, {self.max}]') ...: In [90]: def check_annotated(func): ...: hints = get_type_hints(func, include_extras=True) ...: spec = inspect.getfullargspec(func) ...: ...: def wrapper(*args, **kwargs): ...: for idx, arg_name in enumerate(spec[0]): ...: hint = hints.get(arg_name) ...: validators = getattr(hint, '__metadata__', None) ...: if not validators: ...: continue ...: for validator in validators: ...: validator.validate_value(args[idx]) ...: ...: return func(*args, **kwargs) ...: return wrapper ...: ...: In [91]: @check_annotated ...: def function_2( ...: number: Union[float, int], ...: fraction: Annotated[float, ValueRange(0.0, 1.0)] = 0.5 ...: ): ...: return fraction * number ...: ...: In [92]: function_2(1, 2) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-92-c9345023c025> in <module> ----> 1 function_2(1, 2) <ipython-input-90-01115cb628ba> in wrapper(*args, **kwargs) 10 continue 11 for validator in validators: ---> 12 validator.validate_value(args[idx]) 13 14 return func(*args, **kwargs) <ipython-input-87-7f4ac07379f9> in validate_value(self, x) 6 def validate_value(self, x): 7 if not (self.min <= x <= self.max): ----> 8 raise ValueError(f'{x} must be in range [{self.min}, {self.max}]') 9 ValueError: 2 must be in range [0.0, 1.0] In [93]: function_2(1, 1) Out[93]: 1 A: If you can and you don't mind using third-party packages, Pydantic provides Constrained Types. For your specific example, one of the constrained types is confloat with the following parameters: ge: float = None: enforces float to be greater than or equal to the set value lt: float = None: enforces float to be less than the set value In [35]: from pydantic import confloat In [36]: def function( ...: number: Union[float, int], ...: fraction: confloat(ge=0.0, le=1.0) = 0.5, ...: ) -> float: ...: return fraction * number If only used as a type hint, it doesn't enforce it at runtime: In [38]: function(1, 0) Out[38]: 0 In [39]: function(1, 1.0) Out[39]: 1.0 In [40]: function(1, 15) Out[40]: 15 But you can use Pydantic's validate_arguments decorator which: allows the arguments passed to a function to be parsed and validated using the function's annotations before the function is called In [41]: from pydantic import confloat, validate_arguments In [42]: @validate_arguments ...: def function( ...: number: Union[float, int], ...: fraction: confloat(ge=0.0, le=1.0) = 0.5, ...: ) -> float: ...: return fraction * number ...: In [43]: function(1, 0) Out[43]: 0.0 In [44]: function(1, 1.0) Out[44]: 1.0 In [45]: function(1, 0.37) Out[45]: 0.37 In [46]: function(1, 15) --------------------------------------------------------------------------- ValidationError Traceback (most recent call last) Cell In [46], line 1 ----> 1 function(1, 15) ... ValidationError: 1 validation error for Function fraction ensure this value is less than or equal to 1.0 (type=value_error.number.not_le; limit_value=1.0) See the ConstrainedTypes section for more con* variations.
Is there a way to specify a range of valid values for a function argument with type hinting in python?
I am a big fan of the type hinting in python, however I am curious if there is a way to specify a valid range of values for a given parameter using type hinting. What I had in mind is something like from typing import * def function( number: Union[float, int], fraction: Float[0.0, 1.0] = 0.5 # give a hint that this should be between 0 and 1, ): return fraction * number I can imagine one can enforce this with an assertion, or perhaps specify what the valid range of values is within the docstring, but it feels like having something like Float[0.0, 1.0] would look more elegant.
[ "Python 3.9 introduced typing.Annotated:\nIn [75]: from typing import *\n\nIn [76]: from dataclasses import dataclass\n\nIn [77]: @dataclass\n ...: class ValueRange:\n ...: min: float\n ...: max: float\n ...:\n\nIn [78]: def function(\n ...: number: Union[float, int],\n ...: fraction: Annotated[float, ValueRange(0.0, 1.0)] = 0.5\n ...: ):\n ...: return fraction * number\n ...:\n\nLike any other type hint it does not perform any runtime checks:\nIn [79]: function(1, 2)\nOut[79]: 2\n\nHowever you can implement your own runtime checks. The code below is just an example, it does not cover all cases and probably an overkill for your simple function:\nIn [88]: import inspect\n\nIn [89]: @dataclass\n ...: class ValueRange:\n ...: min: float\n ...: max: float\n ...:\n ...: def validate_value(self, x):\n ...: if not (self.min <= x <= self.max):\n ...: raise ValueError(f'{x} must be in range [{self.min}, {self.max}]')\n ...:\n\nIn [90]: def check_annotated(func):\n ...: hints = get_type_hints(func, include_extras=True)\n ...: spec = inspect.getfullargspec(func)\n ...:\n ...: def wrapper(*args, **kwargs):\n ...: for idx, arg_name in enumerate(spec[0]):\n ...: hint = hints.get(arg_name)\n ...: validators = getattr(hint, '__metadata__', None)\n ...: if not validators:\n ...: continue\n ...: for validator in validators:\n ...: validator.validate_value(args[idx])\n ...:\n ...: return func(*args, **kwargs)\n ...: return wrapper\n ...:\n ...:\n\nIn [91]: @check_annotated\n ...: def function_2(\n ...: number: Union[float, int],\n ...: fraction: Annotated[float, ValueRange(0.0, 1.0)] = 0.5\n ...: ):\n ...: return fraction * number\n ...:\n ...:\n\nIn [92]: function_2(1, 2)\n---------------------------------------------------------------------------\nValueError Traceback (most recent call last)\n<ipython-input-92-c9345023c025> in <module>\n----> 1 function_2(1, 2)\n\n<ipython-input-90-01115cb628ba> in wrapper(*args, **kwargs)\n 10 continue\n 11 for validator in validators:\n---> 12 validator.validate_value(args[idx])\n 13\n 14 return func(*args, **kwargs)\n\n<ipython-input-87-7f4ac07379f9> in validate_value(self, x)\n 6 def validate_value(self, x):\n 7 if not (self.min <= x <= self.max):\n----> 8 raise ValueError(f'{x} must be in range [{self.min}, {self.max}]')\n 9\n\nValueError: 2 must be in range [0.0, 1.0]\n\nIn [93]: function_2(1, 1)\nOut[93]: 1\n\n", "If you can and you don't mind using third-party packages, Pydantic provides Constrained Types. 
For your specific example, one of the constrained types is confloat with the following parameters:\n\n\nge: float = None: enforces float to be greater than or equal to the set value\nlt: float = None: enforces float to be less than the set value\n\n\nIn [35]: from pydantic import confloat\n\nIn [36]: def function(\n ...: number: Union[float, int],\n ...: fraction: confloat(ge=0.0, le=1.0) = 0.5,\n ...: ) -> float:\n ...: return fraction * number\n\nIf only used as a type hint, it doesn't enforce it at runtime:\nIn [38]: function(1, 0)\nOut[38]: 0\n\nIn [39]: function(1, 1.0)\nOut[39]: 1.0\n\nIn [40]: function(1, 15)\nOut[40]: 15\n\nBut you can use Pydantic's validate_arguments decorator which:\n\nallows the arguments passed to a function to be parsed and validated using the function's annotations before the function is called\n\nIn [41]: from pydantic import confloat, validate_arguments\n\nIn [42]: @validate_arguments\n ...: def function(\n ...: number: Union[float, int],\n ...: fraction: confloat(ge=0.0, le=1.0) = 0.5,\n ...: ) -> float:\n ...: return fraction * number\n ...: \n\nIn [43]: function(1, 0)\nOut[43]: 0.0\n\nIn [44]: function(1, 1.0)\nOut[44]: 1.0\n\nIn [45]: function(1, 0.37)\nOut[45]: 0.37\n\nIn [46]: function(1, 15)\n---------------------------------------------------------------------------\nValidationError Traceback (most recent call last)\nCell In [46], line 1\n----> 1 function(1, 15)\n\n...\nValidationError: 1 validation error for Function\nfraction\n ensure this value is less than or equal to 1.0 (type=value_error.number.not_le; limit_value=1.0)\n\nSee the ConstrainedTypes section for more con* variations.\n" ]
[ 11, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0066451253_python_python_3.x.txt
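One more hedged variant of the record above: the third-party annotated-types package (the constraint markers Pydantic v2 builds on) ships ready-made range metadata, so no custom ValueRange class is needed. Like plain Annotated, this only documents the range; nothing is enforced at runtime unless a validator consumes the metadata.
from typing import Annotated

from annotated_types import Ge, Le  # pip install annotated-types

Fraction = Annotated[float, Ge(0.0), Le(1.0)]

def function(number: float, fraction: Fraction = 0.5) -> float:
    return fraction * number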
Q: Replicate plotly plot as connected scatter plot I want to plot a graph for one API, which has different versions in it throughout the years, with commits on the y axis. My current graph looks something like this: I want to connect all the scatter plot dots together, with the version name on top of it. My desired output is something like the line in the graph. My dataframe looks like this: info_version commits Year-Month \ 0 20.1.1 28 2020-08 1 18.2.8 28 2020-01 2 18.2.7 28 2019-11 3 20.1.1 28 2019-11 4 18.2.6 28 2019-10 info_title 0 Avi TestSeDatastoreLevel2 Object API 1 Avi TestSeDatastoreLevel2 Object API 2 Avi TestSeDatastoreLevel2 Object API 3 Avi TestSeDatastoreLevel2 Object API 4 Avi TestSeDatastoreLevel2 Object API This is my code as of now: import plotly.express as px fig = px.scatter(final_api.query("info_title=='Avi TestSeDatastoreLevel2 Object API'"), x="Year-Month", y="commits", color="info_version",title='Different Path Version found within one OAS file', width=1000, height=700) fig.show() fig.update_layout(yaxis_range=[0,80]) I am a bit stuck and new to plotly functions, so any guidance will be great. If there is any other library in which I could generate a similar plot, that would be helpful as well. A: To realize your question, use the graph object to create a graph with markers, line segments, and annotations. The function required for a line graph is to create a staircase-like graph, so you set the shape of the line. Next, a color scale is applied to the markers of the scatter plot in order to color-code the markers. You can change this to whatever you need. Finally, use the annotation function to rotate the text. I have changed some of the data you have presented to make the graph look better. import pandas as pd import numpy as np import io data = ''' info_version commits Year-Month info_title 0 20.1.1 32 2020-08 "Avi TestSeDatastoreLevel2 Object API" 1 18.2.8 31 2020-01 "Avi TestSeDatastoreLevel2 Object API" 2 18.2.7 30 2019-12 "Avi TestSeDatastoreLevel2 Object API" 3 20.1.1 29 2019-11 "Avi TestSeDatastoreLevel2 Object API" 4 18.2.6 28 2019-10 "Avi TestSeDatastoreLevel2 Object API" ''' df = pd.read_csv(io.StringIO(data), delim_whitespace=True) df['Year-Month'] = pd.to_datetime(df['Year-Month']) import plotly.graph_objects as go fig = go.Figure() fig.add_trace(go.Scatter(mode='lines', x=df['Year-Month'], y=df['commits'], line_color='gray', line_width=1, line_shape='vh', showlegend=False ) ) fig.add_trace(go.Scatter(mode='markers', x=df['Year-Month'], y=df['commits'], marker=dict(color=df['commits'], colorscale='Blues'), showlegend=False ) ) for _,row in df.iterrows(): fig.add_annotation( go.layout.Annotation( x=row['Year-Month'], y=row['commits'], text=row['info_version'], showarrow=False, align='center', yanchor='bottom', yshift=5, textangle=-90) ) fig.update_layout(title_text='Different Path Version found within one OAS file', template='plotly_white') fig.show()
Replicate plotly plot as connected scatter plot
I want to plot a graph for one API, which has different versions in it throughout the years, with commits on the y axis. My current graph looks something like this: I want to connect all the scatter plot dots together, with the version name on top of it. My desired output is something like the line in the graph. My dataframe looks like this: info_version commits Year-Month \ 0 20.1.1 28 2020-08 1 18.2.8 28 2020-01 2 18.2.7 28 2019-11 3 20.1.1 28 2019-11 4 18.2.6 28 2019-10 info_title 0 Avi TestSeDatastoreLevel2 Object API 1 Avi TestSeDatastoreLevel2 Object API 2 Avi TestSeDatastoreLevel2 Object API 3 Avi TestSeDatastoreLevel2 Object API 4 Avi TestSeDatastoreLevel2 Object API This is my code as of now: import plotly.express as px fig = px.scatter(final_api.query("info_title=='Avi TestSeDatastoreLevel2 Object API'"), x="Year-Month", y="commits", color="info_version",title='Different Path Version found within one OAS file', width=1000, height=700) fig.show() fig.update_layout(yaxis_range=[0,80]) I am a bit stuck and new to plotly functions, so any guidance will be great. If there is any other library in which I could generate a similar plot, that would be helpful as well.
[ "To realize your question, use the graph object to create a graph with markers, line segments, and annotations. The function required for a line graph is to create a staircase-like graph, so you set the shape of the line. Next, a color scale is applied to the markers of the scatter plot in order to color-code the markers. You can change this to whatever you need. Finally, use the annotation function to rotate the text. I have changed some of the data you have presented to make the graph look better.\nimport pandas as pd\nimport numpy as np\nimport io\n\ndata = '''\n info_version commits Year-Month info_title\n0 20.1.1 32 2020-08 \"Avi TestSeDatastoreLevel2 Object API\" \n1 18.2.8 31 2020-01 \"Avi TestSeDatastoreLevel2 Object API\" \n2 18.2.7 30 2019-12 \"Avi TestSeDatastoreLevel2 Object API\" \n3 20.1.1 29 2019-11 \"Avi TestSeDatastoreLevel2 Object API\" \n4 18.2.6 28 2019-10 \"Avi TestSeDatastoreLevel2 Object API\" \n'''\n\ndf = pd.read_csv(io.StringIO(data), delim_whitespace=True)\ndf['Year-Month'] = pd.to_datetime(df['Year-Month']) \n\nimport plotly.graph_objects as go\n\nfig = go.Figure()\n\nfig.add_trace(go.Scatter(mode='lines',\n x=df['Year-Month'],\n y=df['commits'],\n line_color='gray',\n line_width=1,\n line_shape='vh',\n showlegend=False\n )\n )\n\nfig.add_trace(go.Scatter(mode='markers',\n x=df['Year-Month'],\n y=df['commits'],\n marker=dict(color=df['commits'], colorscale='Blues'),\n showlegend=False\n )\n )\n\nfor _,row in df.iterrows():\n fig.add_annotation(\n go.layout.Annotation(\n x=row['Year-Month'],\n y=row['commits'],\n text=row['info_version'],\n showarrow=False,\n align='center',\n yanchor='bottom',\n yshift=5,\n textangle=-90)\n )\nfig.update_layout(title_text='Different Path Version found within one OAS file',\n template='plotly_white')\n\nfig.show()\n\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "plotly", "python" ]
stackoverflow_0074540093_pandas_plotly_python.txt
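A shorter plotly.express variant of the figure in the record above, as a sketch; it assumes the same df built in the answer. px.line with markers=True and a text column gives connected markers with version labels in a few lines, at the cost of the per-point colour scale used in the answer.
import plotly.express as px

fig = px.line(
    df.sort_values("Year-Month"),
    x="Year-Month", y="commits",
    text="info_version", markers=True,
    title="Different Path Version found within one OAS file",
)
fig.update_traces(textposition="top center", line_shape="vh")
fig.show()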
Q: Iterating over a dictionary (follow up to previous question) Hello, I am new to Python and I am building a small program that returns true if a string is an isogram (a word with no repeating letters, consecutive or non-consecutive), and false otherwise. It also ignores letter case. So far I have initialised an empty dictionary which will store key value pairs containing the letter (as the key) and its frequency (the value). Then I iterated with a for loop and in each iteration, the dictionary would be updated with the letter and its count. If it already has the letter, then it would increment the key value by 1, else it would remain initialised as 1. def is_isogram(string): dict = {} for letter in string.lower(): #if we have the letter if letter in dict: dict[letter] += 1 # if we don't have the letter else: dict[letter] = 1 Now for me to actually determine whether it's an isogram or not I looped over the dictionary keys, and wrote a condition. However, it keeps giving me the exact opposite output. for values in dict: if dict[values] > 1: return False else: return True OUTPUT: True I also tried list comprehensions and lambdas but I keep getting the same result; I get True every time. Does anyone know why? A: First and foremost welcome to Python! I took a look and it seems like the issue is occurring in your second code section, the for-loop over the dictionary values. Adding a print statement within the loop may help debugging these sorts of things in the future, i.e. for values in dict: print(values) if dict[values] > 1: return False else: return True HINT: This should show you that you are only looking at the first letter in any given string and returning prematurely! To fix this, you need to move the "return True" so it only runs after you are done checking every letter in the string. Fixing this should get your code working as intended! Additionally, as you get further into Python you may discover that there are many ways to approach the same problem. For this one in particular, a more 'efficient' or 'elegant' approach can be found here, but again, there are many different ways to solve any problem.
Iterating over a dictionary (follow up to previous question)
Hello, I am new to Python and I am building a small program that returns true if a string is an isogram (a word with no repeating letters, consecutive or non-consecutive), and false otherwise. It also ignores letter case. So far I have initialised an empty dictionary which will store key value pairs containing the letter (as the key) and its frequency (the value). Then I iterated with a for loop and in each iteration, the dictionary would be updated with the letter and its count. If it already has the letter, then it would increment the key value by 1, else it would remain initialised as 1. def is_isogram(string): dict = {} for letter in string.lower(): #if we have the letter if letter in dict: dict[letter] += 1 # if we don't have the letter else: dict[letter] = 1 Now for me to actually determine whether it's an isogram or not I looped over the dictionary keys, and wrote a condition. However, it keeps giving me the exact opposite output. for values in dict: if dict[values] > 1: return False else: return True OUTPUT: True I also tried list comprehensions and lambdas but I keep getting the same result; I get True every time. Does anyone know why?
[ "First and foremost welcome to Python! I took a look and it seems like the issue is occurring in your second code section, the for-loop over the dictionary values.\nAdding a print statement within the loop may help debugging these sorts of things in the future, i.e.\nfor values in dict:\n print(values)\n if dict[values] > 1:\n return False \n else:\n return True\n\nHINT: This should show you that you are only looking at the first letter in any given string and returned prematurely! To fix this, you need to move the \"return True\" section until you are done checking every letter in the string.\nFixing this should get your code working as intended! Additionally, as you get further into Python you may discover that there are many ways to approach the same problem. For this one in particular, a more 'efficient' or 'elegant' approach can be found here, but again, there's many different ways to solve any problem.\n" ]
[ 1 ]
[]
[]
[ "dictionary", "for_loop", "if_statement", "python" ]
stackoverflow_0074471431_dictionary_for_loop_if_statement_python.txt
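Following the hint in the record above, a sketch of the corrected function: return False may fire inside the loop, but return True must wait until every count has been checked. A set-based one-liner is included for comparison.
def is_isogram(string):
    counts = {}
    for letter in string.lower():
        if letter.isalpha():
            counts[letter] = counts.get(letter, 0) + 1
    for count in counts.values():
        if count > 1:
            return False  # one repeated letter is enough to decide
    return True  # reached only after every count has passed

def is_isogram_short(string):
    letters = [c for c in string.lower() if c.isalpha()]
    return len(letters) == len(set(letters))

print(is_isogram("Dermatoglyphics"))  # True
print(is_isogram("moose"))  # False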
Q: Filling 0 with previous value at index I have a df: 1 2 3 4 5 6 7 8 9 10 A 10 0 0 15 0 21 45 0 0 7 I am trying fill index A values with the current value if the next value is 0 so that the df would look like this: 1 2 3 4 5 6 7 8 9 10 A 10 10 10 15 15 21 45 45 45 7 I tried: df.loc[['A']].replace(to_replace=0, method='ffill').values But this does not work, where is my mistake? A: If you want to use your method, you need to work with Series on both sides: df.loc['A'] = df.loc['A'].replace(to_replace=0, method='ffill') Alternatively, you can mask the 0 with NaNs, and ffill the data on axis=1: df.mask(df.eq(0)).ffill(axis=1) output: 1 2 3 4 5 6 7 8 9 10 A 10.0 10.0 10.0 15.0 15.0 21.0 45.0 45.0 45.0 7.0 A: Well you should change your code a little bit and work with series: import pandas as pd df = pd.DataFrame({'1': [10], '2': [0], '3': [0], '4': [15], '5': [0], '6': [21], '7': [45], '8': [0], '9': [0], '10': [7]}, index=['A']) print(df.apply(lambda x: pd.Series(x.values).replace(to_replace=0, method='ffill').values, axis=1)) Output: A [10, 10, 10, 15, 15, 21, 45, 45, 45, 7] dtype: object This way, if you have multiple indices, the code still works: import pandas as pd df = pd.DataFrame({'1': [10, 11], '2': [0, 12], '3': [0, 0], '4': [15, 0], '5': [0, 3], '6': [21, 3], '7': [45, 0], '8': [0, 4], '9': [0, 5], '10': [7, 0]}, index=['A', 'B']) print(df.apply(lambda x: pd.Series(x.values).replace(to_replace=0, method='ffill').values, axis=1)) Output: A [10, 10, 10, 15, 15, 21, 45, 45, 45, 7] B [11, 12, 12, 12, 3, 3, 3, 4, 5, 5] dtype: object A: df.applymap(lambda x:pd.NA if x==0 else x).fillna(method='ffill',axis=1) 1 2 3 4 5 6 7 8 9 10 A 10 10 10 15 15 21 45 45 45 7
Filling 0 with previous value at index
I have a df:
    1  2  3   4  5   6   7  8  9  10
A  10  0  0  15  0  21  45  0  0   7

I am trying to fill index A values with the current value if the next value is 0, so that the df would look like this:
    1   2   3   4   5   6   7   8   9  10
A  10  10  10  15  15  21  45  45  45   7

I tried:
df.loc[['A']].replace(to_replace=0, method='ffill').values

But this does not work, where is my mistake?
[ "If you want to use your method, you need to work with Series on both sides:\ndf.loc['A'] = df.loc['A'].replace(to_replace=0, method='ffill')\n\nAlternatively, you can mask the 0 with NaNs, and ffill the data on axis=1:\ndf.mask(df.eq(0)).ffill(axis=1)\n\noutput:\n 1 2 3 4 5 6 7 8 9 10\nA 10.0 10.0 10.0 15.0 15.0 21.0 45.0 45.0 45.0 7.0\n\n", "Well you should change your code a little bit and work with series:\nimport pandas as pd\n\ndf = pd.DataFrame({'1': [10], '2': [0], '3': [0], '4': [15], '5': [0],\n '6': [21], '7': [45], '8': [0], '9': [0], '10': [7]},\n index=['A'])\nprint(df.apply(lambda x: pd.Series(x.values).replace(to_replace=0, method='ffill').values, axis=1))\n\nOutput:\nA [10, 10, 10, 15, 15, 21, 45, 45, 45, 7]\ndtype: object\n\nThis way, if you have multiple indices, the code still works:\nimport pandas as pd\n\ndf = pd.DataFrame({'1': [10, 11], '2': [0, 12], '3': [0, 0], '4': [15, 0], '5': [0, 3],\n '6': [21, 3], '7': [45, 0], '8': [0, 4], '9': [0, 5], '10': [7, 0]},\n index=['A', 'B'])\nprint(df.apply(lambda x: pd.Series(x.values).replace(to_replace=0, method='ffill').values, axis=1))\n\nOutput:\nA [10, 10, 10, 15, 15, 21, 45, 45, 45, 7]\nB [11, 12, 12, 12, 3, 3, 3, 4, 5, 5]\ndtype: object\n\n", "df.applymap(lambda x:pd.NA if x==0 else x).fillna(method='ffill',axis=1)\n\n\n 1 2 3 4 5 6 7 8 9 10\nA 10 10 10 15 15 21 45 45 45 7\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0070592685_pandas_python.txt
Q: prime number machine and if not working tkinter I am building a machine that can recognize prime numbers individually and show the result to the user, but there is a problem that I show in the text below:
>>12
>>12,is a not prime number
>>7
>>7,is a not prime number

Whether I enter a prime or a composite number, everything is reported as not prime. My code:
from tkinter import *
def rso():
    a = int(text.get())
    for i in range(a + 1):
        pass
    if a % i == 1:
        Label(app,text=(a,"is a prime number"),font=20).pack()
    else:
        Label(app,text=(a,"is a NOT prime number"),font=20).pack()
app = Tk()
text = Entry(app,font=20)
text.pack()
Button(app,text="sumbit",font=20,command=rso).pack()
app.mainloop()

A: Try this:
num = 3
flag = False

if num > 1:
    # check for factors
    for i in range(2, num):
        if (num % i) == 0:
            flag = True
            break

if flag:
    print(f"{num} is not a prime number")
else:
    print(f"{num} is a prime number")
prime number machine and if not working tkinter
I am building a machine that can recognize prime numbers individually and show the result to the user, but there is a problem that I show in the text below:
>>12
>>12,is a not prime number
>>7
>>7,is a not prime number

Whether I enter a prime or a composite number, everything is reported as not prime. My code:
from tkinter import *
def rso():
    a = int(text.get())
    for i in range(a + 1):
        pass
    if a % i == 1:
        Label(app,text=(a,"is a prime number"),font=20).pack()
    else:
        Label(app,text=(a,"is a NOT prime number"),font=20).pack()
app = Tk()
text = Entry(app,font=20)
text.pack()
Button(app,text="sumbit",font=20,command=rso).pack()
app.mainloop()
[ "Try this:\nnum = 3\nflag = False\n\nif num > 1:\n # check for factors\n for i in range(2, num):\n if (num % i) == 0:\n flag = True\n break\n\nif flag:\n print(f\"{num} is not a prime number\")\nelse:\n print(f\"{num} is a prime number\")\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074538009_python_tkinter.txt
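A sketch combining the answer's prime check with the question's Tkinter callback, so the label reports the right verdict. The widget names app and text are kept from the question; how the pieces are wired together is this sketch's assumption, not code from the answer.
from tkinter import *

def rso():
    a = int(text.get())
    is_prime = a > 1                    # 0 and 1 are not prime
    for i in range(2, a):
        if a % i == 0:                  # found a factor, so not prime
            is_prime = False
            break
    verdict = "is a prime number" if is_prime else "is NOT a prime number"
    Label(app, text=f"{a} {verdict}", font=20).pack()

app = Tk()
text = Entry(app, font=20)
text.pack()
Button(app, text="submit", font=20, command=rso).pack()
app.mainloop()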
Q: Install pip for new python version I installed Python3.11, which is located in usr/local/bin/python3 and came without pip. The old Python3.10 was located in usr/bin/python3. I tried to install pip with sudo apt-install python3-pip, but it seems to be attached to the old Python3.10. If I check pip --version, the output is this:
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

but I need it for Python3.11. For example, if I try to pip install requests now, I get
Requirements already satisfied: requests in /usr/lib/python3/dist-packages (2.25.1)

which is the Python3.10 folder.
A: Maybe you need pyenv:

What pyenv does...

Lets you change the global Python version on a per-user basis.
Provides support for per-project Python versions.
Allows you to override the Python version with an environment variable.
Searches for commands from multiple versions of Python at a time. This may be helpful to test across Python versions with tox.

I'm using it to manage my virtual environments and my global environments
❯ pyenv global 3.10.5

❯ pyenv versions
  system
  3.7.10
* 3.10.5 (set by /home/xunjie/.pyenv/version)
  3.10.8

❯ which python
/home/xunjie/.pyenv/shims/python

❯ which pip 
/home/xunjie/.pyenv/shims/pip

A: Your new python version (/usr/local/bin/python3) has pip. But your symbolic links have got twisted, you couldn't use them easily.
Try this below.
/usr/local/bin/python3 -m pip install pip

Also before you change your symbolic links, you have to use pip like this below.
/usr/local/bin/python3 -m pip install <package>

If you want to use the new python version as the python OR python3 command,
whereis python3.11

the result will be like this. The second column is the binary (but in your case /usr/local/bin/python3).
> python3.11: /usr/local/bin/python3.11 /usr/local/share/man/man1/python3.11.1

Before changing all symbolic links to link the new python3.11, let's find the newer pip binary:
which pip3.11

the result of mine is
> /usr/local/bin/pip3.11

Let's find the old python and pip symbolic link paths:
which python
which python3
which pip
which pip3

let's say the result is
which python
> /usr/bin/python
which python3
> /usr/bin/python3
which pip
> ~/.local/bin/pip
which pip3
> ~/.local/bin/pip3

Let's connect the symbolic links to the newer python:
ln -sf /usr/local/bin/python3.11 /usr/bin/python
ln -sf /usr/local/bin/python3.11 /usr/bin/python3
ln -sf /usr/local/bin/pip3.11 ~/.local/bin/pip
ln -sf /usr/local/bin/pip3.11 ~/.local/bin/pip3

ln -s creates a symbolic link.
-f option of ln overwrites an existing symbolic link.
Install pip for new python version
I installed Python3.11, which is located in usr/local/bin/python3 and came without pip. The old Python3.10 was located in usr/bin/python3. I tried to install pip with sudo apt-install python3-pip, but it seems to be attached to the old Python3.10. If I check pip --version, the output is this:
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)

but I need it for Python3.11. For example, if I try to pip install requests now, I get
Requirements already satisfied: requests in /usr/lib/python3/dist-packages (2.25.1)

which is the Python3.10 folder.
[ "Maybe you need pyenv:\n\nWhat pyenv does...\n\nLets you change the global Python version on a per-user basis.\nProvides support for per-project Python versions.\nAllows you to override the Python version with an environment variable.\nSearches for commands from multiple versions of Python at a time. This may be helpful to test across Python versions with tox.\n\n\nI'm using it to manage my virtual environments and my global environments\n❯ pyenv global 3.10.5\n\n❯ pyenv versions\n system\n 3.7.10\n* 3.10.5 (set by /home/xunjie/.pyenv/version)\n 3.10.8\n\n❯ which python\n/home/xunjie/.pyenv/shims/python\n\n❯ which pip \n/home/xunjie/.pyenv/shims/pip\n\n", "Your new python version(/usr/local/bin/python3) has pip.\nBut your symbolic links have got twisted, you couldn't use them easily.\nTry this below.\n/usr/local/bin/python3 -m pip install pip\n\nAlso before you change your symbolic links, you have to use pip like this below.\n/usr/local/bin/python3 -m pip install <package>\n\nIf you want to use new python version as python OR python3 command,\n whereis python3.11\n\nthe result will be like this. the second column is the binary (But in your case /usr/local/bin/python3).\n> python3.11: /usr/local/bin/python3.11 /usr/local/share/man/man1/python3.11.1\n\nBefore changing all symbolic links to link new python3.11,\nLet's find a newer pip binary\nwhich pip3.11\n\nthe result of mine is\n> /usr/local/bin/pip3.11\n\nLet's find the old python and pip symbolic link path.\nwhich python\nwhich python3\nwhich pip\nwhich pip3\n\nlet's say the result is\nwhich python\n> /usr/bin/python\nwhich python3\n> /usr/bin/python3\nwhich pip\n> ~/.local/bin/pip\nwhich pip3\n> ~/.local/bin/pip3\n\nLet's connect symbolic links to newer python.\nln -sf /usr/local/bin/python3.11 /usr/bin/python\nln -sf /usr/local/bin/python3.11 /usr/bin/python3\nln -sf /usr/local/bin/pip3.11 ~/.local/bin/pip\nln -sf /usr/local/bin/pip3.11 ~/.local/bin/pip3\n\nln -s creates a symbolic link.\n-f option of ln overwrites an existing symbolic link.\n" ]
[ 1, 0 ]
[]
[]
[ "linux", "pip", "python", "ubuntu" ]
stackoverflow_0074541264_linux_pip_python_ubuntu.txt
Q: Auto __repr__ method I want to have a simple representation of any class, like { property = value }. Is there an auto __repr__?
A: Simplest way:
def __repr__(self):
    return str(self.__dict__)

A: Yes, you can make a class "AutoRepr" and let all other classes extend it:
>>> class AutoRepr(object):
...     def __repr__(self):
...         items = ("%s = %r" % (k, v) for k, v in self.__dict__.items())
...         return "<%s: {%s}>" % (self.__class__.__name__, ', '.join(items))
... 
>>> class AnyOtherClass(AutoRepr):
...     def __init__(self):
...         self.foo = 'foo'
...         self.bar = 'bar'
...
>>> repr(AnyOtherClass())
"<AnyOtherClass: {foo = 'foo', bar = 'bar'}>"

Note that the above code will not act nicely on data structures that (either directly or indirectly) reference themselves. As an alternative, you can define a function that works on any type:
>>> def autoRepr(obj):
...     try:
...         items = ("%s = %r" % (k, v) for k, v in obj.__dict__.items())
...         return "<%s: {%s}." % (obj.__class__.__name__, ', '.join(items))
...     except AttributeError:
...         return repr(obj)
... 
>>> class AnyOtherClass(object):
...     def __init__(self):
...         self.foo = 'foo'
...         self.bar = 'bar'
...
>>> autoRepr(AnyOtherClass())
"<AnyOtherClass: {foo = 'foo', bar = 'bar'}>"
>>> autoRepr(7)
'7'
>>> autoRepr(None)
'None'

Note that the above function is not defined recursively, on purpose, for the reason mentioned earlier.
A: Well, I played a little bit with other answers and got a very pretty solution:
class data:
    @staticmethod
    def repr(obj):
        items = []
        for prop, value in obj.__dict__.items():
            try:
                item = "%s = %r" % (prop, value)
                assert len(item) < 20
            except:
                item = "%s: <%s>" % (prop, value.__class__.__name__)
            items.append(item)

        return "%s(%s)" % (obj.__class__.__name__, ', '.join(items))

    def __init__(self, cls):
        cls.__repr__ = data.repr
        self.cls = cls

    def __call__(self, *args, **kwargs):
        return self.cls(*args, **kwargs)

You use it as a decorator:
@data
class PythonBean:
    def __init__(self):
        self.int = 1
        self.list = [5, 6, 7]
        self.str = "hello"
        self.obj = SomeOtherClass()

and get a smart __repr__ out of the box:
PythonBean(int = 1, obj: <SomeOtherClass>, list = [5, 6, 7], str = 'hello')

This works with any recursive classes, including tree structures. If you try to put a self-reference in the class self.ref = self, the function will try (successfully) to work it out for about a second.
Of course, always mind your boss - mine would not like such a syntax sugar ))
A: Do you mean
__dict__
?
A: class MyClass:
    def __init__(self, foo: str, bar: int):
        self.foo = foo
        self.bar = bar
        self._baz: bool = True

    def __repr__(self):
        return f"{self.__class__.__name__}({', '.join([f'{k}={v!r}' for k, v in self.__dict__.items() if not k.startswith('_')])})"

mc = MyClass('a', 99)

print(mc)
# MyClass(foo='a', bar=99)
# ^^^ Note that _baz=True was hidden here

A: I use this helper function to generate repr s for my classes. It is easy to run in a unittest function, ie.
def test_makeRepr(self):
    makeRepr(Foo, Foo(), "anOptional space delimitedString ToProvideCustom Fields")

this should output a number of potential repr to the console, that you can then copy/paste into your class.
def makeRepr(classObj, instance = None, customFields = None):
    """Code writing helper function that will generate a __repr__ function that can be copy/pasted into a class definition.

    Args:
        classObj (class):
        instance (class):
        customFields (string):

    Returns:
        None:

    Always call the __repr__ function afterwards to ensure expected output.
    ie.
    print(foo)

    def __repr__(self):
        msg = "<Foo(var1 = {}, var2 = {})>"
        attributes = [self.var1, self.var2]
        return msg.format(*attributes)
    """
    if isinstance(instance, classObj):
        className = instance.__class__.__name__
    else:
        className = classObj.__name__

    print('Generating a __repr__ function for: ', className, "\n")
    print("\tClass Type: " + classObj.__name__, "has the following fields:")
    print("\t" + " ".join(classObj.__dict__.keys()), "\n")
    if instance:
        print("\tInstance of: " + instance.__class__.__name__, "has the following fields:")
        print("\t" + " ".join(instance.__dict__.keys()), "\n")
    else:
        print('\tInstance of: Instance not provided.\n')

    if customFields:
        print("\t" + "These fields were provided to makeRepr:")
        print("\t" + customFields, "\n")
    else:
        print("\t" + "These fields were provided to makeRepr: None\n")
    print("Edit the list of fields, and rerun makeRepr with the new list if necessary.\n\n")

    print("repr with class type:\n")
    classResult = buildRepr(classObj.__name__, " ".join(classObj.__dict__.keys()))
    print(classResult, "\n\n")

    if isinstance(instance, classObj):
        instanceResult = buildRepr(instance.__class__.__name__, " ".join(instance.__dict__.keys()))
    else:
        instanceResult = "\t-----Instance not provided."
    print("repr with instance of class:\n")
    print(instanceResult, "\n\n")

    if customFields:
        customResult = buildRepr(classObj.__name__, customFields)
    else:
        customResult = '\t-----Custom fields not provided'
    print("repr with custom fields and class name:\n")
    print(customResult, "\n\n")

    print('Current __repr__')
    print("Class Object: ", classObj)
    if instance:
        print("Instance: ", instance.__repr__())
    else:
        print("Instance: ", "None")


def buildRepr(typeName, fields):
    funcDefLine = "def __repr__(self):"
    msgLineBase = '    msg = "<{typename}({attribute})>"'
    attributeListLineBase = '    attributes = [{attributeList}]'
    returnLine = '    return msg.format(*attributes)'
    x = ['self.' + x for x in fields.split()]
    xResult = ", ".join(x)
    y = [x + ' = {}' for x in fields.split()]
    yResult = ', '.join(y)
    msgLine = msgLineBase.format(typename = typeName, attribute = yResult)
    attributeListLine = attributeListLineBase.format(attributeList = xResult)
    result = "{declaration}\n{message}\n{attributes}\n{returnLine}".format(declaration = funcDefLine, message = msgLine, attributes = attributeListLine, returnLine = returnLine)
    return result

A: To make @uzi's answer clearer, I have included more sample code. This is handy for a quick and dirty script:
class MyClass:
    def __repr__(self):
        return "MyClass:" + str(self.__dict__)
Auto __repr__ method
I want to have a simple representation of any class, like { property = value }. Is there an auto __repr__?
[ "Simplest way:\ndef __repr__(self):\n return str(self.__dict__)\n\n", "Yes, you can make a class \"AutoRepr\" and let all other classes extend it:\n>>> class AutoRepr(object):\n... def __repr__(self):\n... items = (\"%s = %r\" % (k, v) for k, v in self.__dict__.items())\n... return \"<%s: {%s}>\" % (self.__class__.__name__, ', '.join(items))\n... \n>>> class AnyOtherClass(AutoRepr):\n... def __init__(self):\n... self.foo = 'foo'\n... self.bar = 'bar'\n...\n>>> repr(AnyOtherClass())\n\"<AnyOtherClass: {foo = 'foo', bar = 'bar'}>\"\n\nNote that the above code will not act nicely on data structures that (either directly or indirectly) reference themselves. As an alternative, you can define a function that works on any type:\n>>> def autoRepr(obj):\n... try:\n... items = (\"%s = %r\" % (k, v) for k, v in obj.__dict__.items())\n... return \"<%s: {%s}.\" % (obj.__class__.__name__, ', '.join(items))\n... except AttributeError:\n... return repr(obj)\n... \n>>> class AnyOtherClass(object):\n... def __init__(self):\n... self.foo = 'foo'\n... self.bar = 'bar'\n...\n>>> autoRepr(AnyOtherClass())\n\"<AnyOtherClass: {foo = 'foo', bar = 'bar'}>\"\n>>> autoRepr(7)\n'7'\n>>> autoRepr(None)\n'None'\n\nNote that the above function is not defined recursively, on purpose, for the reason mentioned earlier.\n", "Well, I played a little bit with other answers and got a very pretty solution:\nclass data:\n @staticmethod\n def repr(obj):\n items = []\n for prop, value in obj.__dict__.items():\n try:\n item = \"%s = %r\" % (prop, value)\n assert len(item) < 20\n except:\n item = \"%s: <%s>\" % (prop, value.__class__.__name__)\n items.append(item)\n\n return \"%s(%s)\" % (obj.__class__.__name__, ', '.join(items))\n\n def __init__(self, cls):\n cls.__repr__ = data.repr\n self.cls = cls\n\n def __call__(self, *args, **kwargs):\n return self.cls(*args, **kwargs)\n\nYou use it as a decorator:\n@data\nclass PythonBean:\n def __init__(self):\n self.int = 1\n self.list = [5, 6, 7]\n self.str = \"hello\"\n self.obj = SomeOtherClass()\n\nand get a smart __repr__ out of the box:\nPythonBean(int = 1, obj: <SomeOtherClass>, list = [5, 6, 7], str = 'hello')\n\nThis works with any recursive classes, including tree structures. If you try to put a self-reference in the class self.ref = self, the function will try (successfully) to work it out for about a second.\nOf course, always mind your boss - mine would not like such a syntax sugar ))\n", "Do you mean\n__dict__\n\n?\n", "class MyClass:\n def __init__(self, foo: str, bar: int):\n self.foo = foo\n self.bar = bar\n self._baz: bool = True\n\n def __repr__(self):\n return f\"{self.__class__.__name__}({', '.join([f'{k}={v!r}' for k, v in self.__dict__.items() if not k.startswith('_')])})\"\n\nmc = MyClass('a', 99)\n\nprint(mc)\n# MyClass(foo='a', bar=99)\n# ^^^ Note that _baz=True was hidden here\n\n", "I use this helper function to generate repr s for my classes. It is easy to run in a unittest function, ie.\ndef test_makeRepr(self):\n makeRepr(Foo, Foo(), \"anOptional space delimitedString ToProvideCustom Fields\")\n\nthis should output a number of potential repr to the console, that you can then copy/paste into your class.\ndef makeRepr(classObj, instance = None, customFields = None):\n \"\"\"Code writing helper function that will generate a __repr__ function that can be copy/pasted into a class definition.\n\n Args:\n classObj (class):\n instance (class):\n customFields (string):\n\n Returns:\n None:\n\n Always call the __repr__ function afterwards to ensure expected output.\n ie.\nprint(foo)\n\n def __repr__(self):\n msg = \"<Foo(var1 = {}, var2 = {})>\"\n attributes = [self.var1, self.var2]\n return msg.format(*attributes)\n \"\"\" \n if isinstance(instance, classObj):\n className = instance.__class__.__name__\n else:\n className=classObj.__name__\n\n print('Generating a __repr__ function for: ', className,\"\\n\")\n print(\"\\tClass Type: \"+classObj.__name__, \"has the following fields:\")\n print(\"\\t\"+\" \".join(classObj.__dict__.keys()),\"\\n\")\n if instance:\n print(\"\\tInstance of: \"+instance.__class__.__name__, \"has the following fields:\")\n print(\"\\t\"+\" \".join(instance.__dict__.keys()),\"\\n\")\n else:\n print('\\tInstance of: Instance not provided.\\n')\n\n if customFields:\n print(\"\\t\"+\"These fields were provided to makeRepr:\")\n print(\"\\t\"+customFields,\"\\n\")\n else:\n print(\"\\t\"+\"These fields were provided to makeRepr: None\\n\")\n print(\"Edit the list of fields, and rerun makeRepr with the new list if necessary.\\n\\n\")\n\n print(\"repr with class type:\\n\")\n classResult = buildRepr( classObj.__name__, \" \".join(classObj.__dict__.keys()))\n print(classResult,\"\\n\\n\")\n\n if isinstance(instance, classObj):\n instanceResult = buildRepr( instance.__class__.__name__, \" \".join(instance.__dict__.keys()))\n else:\n instanceResult = \"\\t-----Instance not provided.\"\n print(\"repr with instance of class:\\n\")\n print(instanceResult,\"\\n\\n\") \n\n if customFields:\n customResult = buildRepr( classObj.__name__, customFields)\n else:\n customResult = '\\t-----Custom fields not provided'\n print(\"repr with custom fields and class name:\\n\")\n print(customResult,\"\\n\\n\") \n\n print('Current __repr__')\n print(\"Class Object: \",classObj)\n if instance:\n print(\"Instance: \",instance.__repr__())\n else:\n print(\"Instance: \", \"None\")\n\n\ndef buildRepr(typeName,fields):\n funcDefLine = \"def __repr__(self):\"\n msgLineBase = '    msg = \"<{typename}({attribute})>\"'\n attributeListLineBase = '    attributes = [{attributeList}]'\n returnLine = '    return msg.format(*attributes)'\n x = ['self.' + x for x in fields.split()]\n xResult = \", \".join(x)\n y = [x + ' = {}' for x in fields.split()]\n yResult = ', '.join(y)\n msgLine = msgLineBase.format(typename = typeName, attribute = yResult)\n attributeListLine = attributeListLineBase.format(attributeList = xResult) \n result = \"{declaration}\\n{message}\\n{attributes}\\n{returnLine}\".format(declaration = funcDefLine, message = msgLine, attributes = attributeListLine, returnLine =returnLine )\n return result\n\n", "To make @uzi's answer clearer, I have included more sample code. This is handy for a quick and dirty script:\nclass MyClass:\n def __repr__(self):\n return \"MyClass:\" + str(self.__dict__)\n\n" ]
[ 33, 14, 7, 5, 3, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0000750908_python.txt
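One option the answers above do not mention: on Python 3.7+ the standard-library dataclasses module generates a property = value style __repr__ automatically. A minimal sketch; the Point class and its fields are purely illustrative.
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

print(Point(x=1, y=2))  # Point(x=1, y=2) -- __repr__ is generated for us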
Q: Append column to a dataframe using list comprehension format I would like to append a column of zeros to a dataframe if the column in question is not already inside the dataframe. If the dataframe looks like this:
df = pd.DataFrame({'a':[0,1,0], 'c':[1,1,1]})
----------------------------------------------
   a  c
0  0  1
1  1  1
2  0  1

And the complete list of column names that the dataframe should have are:
col_names = ['a', 'b', 'c']

I would like the output to look like this after applying the list comprehension to the df:
   a  b  c
0  0  0  1
1  1  0  1
2  0  0  1

This is the complete code I have so far:
col_names = ['a','b','c']

df = pd.DataFrame({'a':[0,1,0], 'c':[1,1,1]})

# This is what I would like to convert into a list comprehension (one line) format if possible
for col in col_names:
    if col not in df.columns:
        df[col] = 0

# Re-order the columns into the correct order
df = df[col_names]

print(df)

A: A list comprehension would produce a list. You don't want a list, you want to add columns to your dataframe. List comprehensions should not be used for side effects, ever.
You can however, produce the columns you want to add as a list and use advanced indexing to assign all the columns at the same time:
df[[col for col in col_names if col not in df.columns]] = 0
Append column to a dataframe using list comprehension format
I would like to append a column of zeros to a dataframe if the column in question is not already inside the dataframe. If the dataframe looks like this:
df = pd.DataFrame({'a':[0,1,0], 'c':[1,1,1]})
----------------------------------------------
   a  c
0  0  1
1  1  1
2  0  1

And the complete list of column names that the dataframe should have are:
col_names = ['a', 'b', 'c']

I would like the output to look like this after applying the list comprehension to the df:
   a  b  c
0  0  0  1
1  1  0  1
2  0  0  1

This is the complete code I have so far:
col_names = ['a','b','c']

df = pd.DataFrame({'a':[0,1,0], 'c':[1,1,1]})

# This is what I would like to convert into a list comprehension (one line) format if possible
for col in col_names:
    if col not in df.columns:
        df[col] = 0

# Re-order the columns into the correct order
df = df[col_names]

print(df)
[ "A list comprehension would produce a list. You don't want a list, you want to add columns to your dataframe. List comprehensions should not be used for side effects, ever.\nYou can however, produce the columns you want to add as a list and use advanced indexing to assign all the columns at the same time:\ndf[[col for col in col_names if col not in df.columns]] = 0\n\n" ]
[ 1 ]
[]
[]
[ "append", "dataframe", "list_comprehension", "pandas", "python" ]
stackoverflow_0074541208_append_dataframe_list_comprehension_pandas_python.txt
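A sketch of an alternative that needs no comprehension at all: DataFrame.reindex creates the missing columns, fills them, and reorders in one call. df and col_names are taken from the question.
import pandas as pd

df = pd.DataFrame({'a': [0, 1, 0], 'c': [1, 1, 1]})
col_names = ['a', 'b', 'c']

# Missing columns are created and filled with 0; existing ones are kept and reordered.
df = df.reindex(columns=col_names, fill_value=0)
print(df)
#    a  b  c
# 0  0  0  1
# 1  1  0  1
# 2  0  0  1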
Q: Is there a way in python to detect if a domain does not exist or error? I want to ask whether it's possible in Python to detect that a website isn't available or can't be reached. There is also a kind of site where the browser says "The site can't be reached", and when checking the network it shows the status "(Failed)".
To detect a site I used this code:
import requests

exist=[]
for b in Phishing:
    try:
        request = requests.get(b)
        if request.status_code == 200:
            exist.append(b)
            print('Exist')
        elif request.status_code == 204:
            print('user does not exist')
        elif request.status_code == 304:
            print('Not available')
        elif request.status_code == 504:
            print('Timeout')
        elif request.status_code == (failed):
            print('failed')
    except:
        print('Not Exist')

So far this is the code that I used to detect a website. I'm open to suggestions on how to improve the code. Thank you!
A: In general, if requests.get() throws a ConnectionError exception, then the hostname does not exist, or is unreachable, or is not serving a website.
Otherwise if requests.get() does not throw an exception (regardless of the specific http return code), then a website does exist at that address.

A: import requests

sites_to_be_evaluated = [
    'http://example.com',
    'http://example.net',
    'http://example.gov'
]

sites_accessible = []
valid_dns = []

for site in sites_to_be_evaluated:
    try:
        fqdn = site.split('//')[1]  # remove before // 
        fqdn = fqdn.split('/')[0]  # remove anything after /
        if requests.get(f'https://dns.google/resolve?name={fqdn}&type=ANY').json()['Status'] == 0:
            valid_dns.append(fqdn)
        if requests.get(site).status_code == 200:  # 200 is the status code for a successful request
            sites_accessible.append(site)
    except requests.exceptions.ConnectionError:
        pass

print(f'The following sites are accessible: {sites_accessible}')
>> The following sites are accessible: ['http://example.com', 'http://example.net']
print(f'The following sites have valid DNS: {valid_dns}')
>> The following sites have valid DNS: ['example.com', 'example.net']

Just because a site returns a 200 OK does not mean the site is useful. It may be a parked website that contains nothing more than advertisements. Likewise, if the website is unresponsive, that only indicates that the requester cannot receive a response. You may need to evaluate with a tracert to determine network paths.
If you use Google's DNS API service, please read their documentation and respect their infrastructure.
Is there a way in python to detect if a domain does not exist or error?
I want to ask whether it's possible in Python to detect that a website isn't available or can't be reached. There is also a kind of site where the browser says "The site can't be reached", and when checking the network it shows the status "(Failed)".
To detect a site I used this code:
import requests

exist=[]
for b in Phishing:
    try:
        request = requests.get(b)
        if request.status_code == 200:
            exist.append(b)
            print('Exist')
        elif request.status_code == 204:
            print('user does not exist')
        elif request.status_code == 304:
            print('Not available')
        elif request.status_code == 504:
            print('Timeout')
        elif request.status_code == (failed):
            print('failed')
    except:
        print('Not Exist')

So far this is the code that I used to detect a website. I'm open to suggestions on how to improve the code. Thank you!
[ "In general, if requests.get() throws a ConnectionError exception, then the hostname does not exist, or is unreachable, or is not serving a website.\nOtherwise if requests.get() does not throw an exception (regardless of the specific http return code), then a website does exist at that address.\n", "import requests\n\nsites_to_be_evaluated = [\n 'http://example.com',\n 'http://example.net',\n 'http://example.gov'\n]\n\nsites_accessible = []\nvalid_dns = []\n\nfor site in sites_to_be_evaluated:\n try:\n fqdn = site.split('//')[1] # remove before // \n fqdn = fqdn.split('/')[0] # remove anything after /\n if requests.get(f'https://dns.google/resolve?name={fqdn}&type=ANY').json()['Status'] == 0:\n valid_dns.append(fqdn)\n if requests.get(site).status_code == 200: # 200 is the status code for a successful request\n sites_accessible.append(site)\n except requests.exceptions.ConnectionError:\n pass\n\nprint(f'The following sites are accessible: {sites_accessible}')\n>> The following sites are accessible: ['http://example.com', 'http://example.net']\nprint(f'The following sites have valid DNS: {valid_dns}')\n>> The following sites have valid DNS: ['example.com', 'example.net']\n\nJust because a site is returns a 200 OK does not mean the site is useful. It may be a parked website that contains nothing more than advertisements. Likewise, if the website is unresponsive - that only indicates that the requester cannot receive a respond. You may need to evaluate with a tracert to determine network paths.\nIf you use Google's DNS API service, please read their documentation and respect their infrastructure.\n" ]
[ 0, 0 ]
[]
[]
[ "jupyter_notebook", "python", "python_requests" ]
stackoverflow_0074541297_jupyter_notebook_python_python_requests.txt
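A sketch applying the first answer's point to the question's loop: catch the requests exception family for unreachable hosts and read the status code otherwise. The Phishing list stands in for the question's URLs, and the 5-second timeout is this sketch's assumption.
import requests

Phishing = ['http://example.com', 'http://nonexistent.invalid']
exist = []

for url in Phishing:
    try:
        response = requests.get(url, timeout=5)   # cap the wait for dead hosts
    except requests.exceptions.RequestException:  # DNS failure, refused, timeout...
        print(url, 'is not reachable')
        continue
    exist.append(url)
    print(url, 'exists with status', response.status_code)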
Q: Assign value to column and reset after nth row I have a pandas dataframe that looks like this...
index    my_column
0
1
2
3
4
5
6

What I need to do is conditionally assign values to 'my_column' depending on the index. The first three rows should have the values 'dog', 'cat', 'bird'. Then, the next three rows should also have 'dog', 'cat', 'bird'. That pattern should apply until the end of the dataset.
index    my_column
0        dog
1        cat
2        bird
3        dog
4        cat
5        bird
6        dog

I've tried the following code to no avail.
for index, row in df.iterrows():
    counter=3
    my_column='dog'
    if counter>3
        break
    else
        counter+=1
    my_column='cat'
    counter+=1
    if counter>3
        break
    else
        counter+=1
    my_column='bird'
    if counter>3
        break

A: Several problems:
Your if syntax is incorrect, you are missing colons and proper indentation
You are breaking out of your loop, terminating it early instead of using an if, elif, else structure
You are trying to update your dataframe while iterating over it.

See this question about why you shouldn't update while you iterate.
Instead, you could do
values = ["dog", "cat", "bird"]

num_values = len(values)

for index in df.index():
    df.at[index, "my_column"] = values[index % num_values]

A: Advanced indexing
One solution would be to turn dog-cat-bird into a pd.Series and use advanced indexing:
dcb = pd.Series(["dog", "cat", "bird"])

df["my_column"] = dcb[df.index % len(dcb)].reset_index(drop=True)

This works by first creating an index array from df.index % len(dcb):
In [8]: df.index % len(dcb)
Out[8]: Int64Index([0, 1, 2, 0, 1, 2, 0], dtype='int64')

Then, by using advanced indexing, you can select the elements from dcb with that index array:
In [9]: dcb[df.index % len(dcb)]
Out[9]:
0     dog
1     cat
2    bird
0     dog
1     cat
2    bird
0     dog
dtype: object

Finally, notice that the index of the above array repeats. Reset it and drop the old index with .reset_index(drop=True), and finally assign to your dataframe.
Using a generator
Here's an alternate solution using an infinite dog-cat-bird generator:
In [2]: df
Out[2]:
  my_column
0
1
2
3
4
5
6

In [3]: def dog_cat_bird():
   ...:     while True:
   ...:         yield from ("dog", "cat", "bird")
   ...:

In [4]: dcb = dog_cat_bird()

In [5]: df["my_column"].apply(lambda _: next(dcb))
Out[5]:
0     dog
1     cat
2    bird
3     dog
4     cat
5    bird
6     dog
Name: my_column, dtype: object

A: Create a dictionary:
pet_dict = {0:'dog',
            1:'cat',
            2:'bird'}

You can get the index value using the .name and modulus (%) function by 3 to get your desired result:
df.apply (lambda x: pet_dict[x.name%3],axis=1)
0     dog
1     cat
2    bird
3     dog
4     cat
5    bird
6     dog
7     cat
8    bird
9     dog
Assign value to column and reset after nth row
I have a pandas dataframe that looks like this...
index    my_column
0
1
2
3
4
5
6

What I need to do is conditionally assign values to 'my_column' depending on the index. The first three rows should have the values 'dog', 'cat', 'bird'. Then, the next three rows should also have 'dog', 'cat', 'bird'. That pattern should apply until the end of the dataset.
index    my_column
0        dog
1        cat
2        bird
3        dog
4        cat
5        bird
6        dog

I've tried the following code to no avail.
for index, row in df.iterrows():
    counter=3
    my_column='dog'
    if counter>3
        break
    else
        counter+=1
    my_column='cat'
    counter+=1
    if counter>3
        break
    else
        counter+=1
    my_column='bird'
    if counter>3
        break
[ "Several problems:\n\nYour if syntax is incorrect, you are missing colons and proper indentation\nYou are breaking out of your loop, terminating it early instead of using an if, elif, else structure\nYou are trying to update your dataframe while iterating over it.\n\nSee this question about why you shouldn't update while you iterate.\nInstead, you could do\nvalues = [\"dog\", \"cat\", \"bird\"]\n\nnum_values = len(values)\n\nfor index in df.index():\n df.at[index, \"my_column\"] = values[index % num_values]\n \n\n", "Advanced indexing\nOne solution would be to turn dog-cat-bird into a pd.Series and use advanced indexing:\ndcb = pd.Series([\"dog\", \"cat\", \"bird\"])\n\ndf[\"my_column\"] = dcb[df.index % len(dcb)].reset_index(drop=True)\n\nThis works by first creating an index array from df.index % len(dcb):\nIn [8]: df.index % len(dcb)\nOut[8]: Int64Index([0, 1, 2, 0, 1, 2, 0], dtype='int64')\n\nThen, by using advanced indexing, you can select the elements from dcb with that index array:\nIn [9]: dcb[df.index % len(dcb)]\nOut[9]:\n0 dog\n1 cat\n2 bird\n0 dog\n1 cat\n2 bird\n0 dog\ndtype: object\n\nFinally, notice that the index of the above array repeats. Reset it and drop the old index with .reset_index(drop=True), and finally assign to your dataframe.\nUsing a generator\nHere's an alternate solution using an infinite dog-cat-bird generator:\nIn [2]: df\nOut[2]:\n my_column\n0\n1\n2\n3\n4\n5\n6\n\nIn [3]: def dog_cat_bird():\n ...: while True:\n ...: yield from (\"dog\", \"cat\", \"bird\")\n ...:\n\nIn [4]: dcb = dog_cat_bird()\n\nIn [5]: df[\"my_column\"].apply(lambda _: next(dcb))\nOut[5]:\n0 dog\n1 cat\n2 bird\n3 dog\n4 cat\n5 bird\n6 dog\nName: my_column, dtype: object\n\n", "Create a dictionary:\npet_dict = {0:'dog',\n 1:'cat',\n 2:'bird'}\n\nYou can get the index value using the .name and modulus (%) function by 3 to get your desired result:\ndf.apply (lambda x: pet_dict[x.name%3],axis=1)\n0 dog\n1 cat\n2 bird\n3 dog\n4 cat\n5 bird\n6 dog\n7 cat\n8 bird\n9 dog\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074541372_pandas_python.txt
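One more idiomatic option not shown above: itertools.cycle repeats the sequence endlessly, so the column can be filled in a single pass whatever the frame length. The 7-row frame mirrors the question; everything else is a sketch.
import itertools
import pandas as pd

df = pd.DataFrame(index=range(7))
pets = itertools.cycle(["dog", "cat", "bird"])   # dog, cat, bird, dog, ...
df["my_column"] = [next(pets) for _ in range(len(df))]
print(df["my_column"].tolist())
# ['dog', 'cat', 'bird', 'dog', 'cat', 'bird', 'dog']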
Q: Is there a way to pause my if statement without pausing my entire script in pygame? Pretty simple, I just want to make a damage system in pygame and I want invincibility frames (aka a delay) so that you don't just die instantly. For reference, here's the if statement:
if pygame.Rect.colliderect(player_rect, baddude_rect):
    print('hit')
    time.sleep(0.5)

If you need the entire script I will post it, stackoverflow is picky. I've already tried async and threading.
A: One way of doing this is to use the millisecond timer provided by pygame.time.get_ticks(). This returns the number of milliseconds since PyGame started. It's handy for doing time calculations.
So, reading through the comments, you want the player to be invulnerable for some time (0.5 seconds) after taking a hit. So let's get that into a constant:
HIT_CLOCK = 500 # milliseconds player is safe for, after taking a hit

So when the player is hit, we need to wait that long before recording another hit. This is some time in the future from when the player is hit, but that's easy to calculate:
time_now = pygame.time.get_ticks()
player_hit_clock = time_now + HIT_CLOCK # time in the future

And when the code is deciding if the player should be hit, it can just compare that "future time" to ensure it's now past:
if pygame.Rect.colliderect(player_rect, baddude_rect):
    time_now = pygame.time.get_ticks()
    if time_now > player_hit_clock:
        print('hit')
        player_hit_clock = time_now + HIT_CLOCK #<<-- reset clock

It's important to ensure player_hit_clock is initialised before the first use, setting it to 0 would be enough. So, putting all that together:
HIT_CLOCK = 500 # milliseconds player is safe for, after taking a hit

# ...

player_hit_clock = 0 # controls player consecutive damage rate

# ...

if pygame.Rect.colliderect(player_rect, baddude_rect):
    time_now = pygame.time.get_ticks()
    if time_now > player_hit_clock:
        print('hit')
        player_hit_clock = time_now + HIT_CLOCK # <<-- reset the hit-clock
    else:
        print('player hit cooldown')

The benefits of using the millisecond clock are:
All code just runs normally, waiting for the timer to expire.
The cost of using the clock is only a fetch of the time, with a single integer comparison.
(If you use the clock for other things, only one fetch is needed per loop)
It's real-time, so even if your program drops a few frames, timing is preserved.
Is there a way to pause my if statement without pausing my entire script in pygame?
Pretty simple, I just want to make a damage system in pygame and I want invincibility frames (aka a delay) so that you don't just die instantly. For reference, here's the if statement:
if pygame.Rect.colliderect(player_rect, baddude_rect):
    print('hit')
    time.sleep(0.5)

If you need the entire script I will post it, stackoverflow is picky. I've already tried async and threading.
[ "One way of doing this is to use the millisecond timer provided by pygame.time.get_ticks(). This returns the number of milliseconds since PyGame started. It's handy for doing time calculations.\nSo, reading through the comments, you want the player to be invulnerable for some time (0.5 seconds) after taking a hit. So let's get that into a constant:\nHIT_CLOCK = 500 # milliseconds player is safe for, after taking a hit\n\nSo when the player is hit, we need to wait that long before recording another hit. This is some time in the future from when the player is hit, but that's easy to calculate:\ntime_now = pygame.time.get_ticks()\nplayer_hit_clock = time_now + HIT_CLOCK # time in the future\n\nAnd when the code is deciding if the player should be hit, it can just compare that \"future time\" to ensure it's now past:\nif pygame.Rect.colliderect(player_rect, baddude_rect):\n time_now = pygame.time.get_ticks()\n if time_now > player_hit_clock:\n print('hit')\n player_hit_clock = time_now + HIT_CLOCK #<<-- reset clock\n\nIt's important to ensure player_hit_clock is initialised before the first use, setting it to 0 would be enough. So, putting all that together:\nHIT_CLOCK = 500 # milliseconds player is safe for, after taking a hit\n\n# ...\n\nplayer_hit_clock = 0 # controls player consecutive damage rate\n\n# ...\n\nif pygame.Rect.colliderect(player_rect, baddude_rect):\n time_now = pygame.time.get_ticks()\n if time_now > player_hit_clock:\n print('hit')\n player_hit_clock = time_now + HIT_CLOCK # <<-- reset the hit-clock\n else:\n print('player hit cooldown')\n\nThe benefits of using the millisecond clock are:\n\nAll code just runs normally, waiting for the timer to expire.\nThe cost of using the clock is only a fetch of the time, with a single integer comparison.\n\n(If you use the clock for other things, only one fetch is needed per loop)\n\n\nIt's real-time, so even if your program drops a few frames, timing is preserved.\n\n" ]
[ 3 ]
[]
[]
[ "pause", "pygame", "python" ]
stackoverflow_0074540691_pause_pygame_python.txt
Q: How to filter row dataframe based on value of another dataframe How can I filter rows based on the Genre column using values coming from another dataframe?
I have a movies dataframe as follows:
Movie_Name           Genre                      Rating
Halloween            Crime, Horror, Thriller    6.5
Nope                 Horror, Mystery, Sci-Fi    6.9
The Midnight Club    Drama, Horror, Mystery     6.7
The Northman         Action, Adventure, Drama   7.1
Prey                 Action, Adventure, Drama   7.2
Uncharted            Action, Adventure          6.3
Sherwood             Crime, Drama, Mystery      7.4

And I have a user dataframe as follows:
User_Id    User_Name    Genre
100        Christine    Horror, Thriller, Drama

I want to get the following rows as output because the user likes the horror, thriller, and drama genres.
Movie_Name           Genre                      Rating
Halloween            Crime, Horror, Thriller    6.5
Nope                 Horror, Mystery, Sci-Fi    6.9
The Midnight Club    Drama, Horror, Mystery     6.7
The Northman         Action, Adventure, Drama   7.1
Prey                 Action, Adventure, Drama   7.2
Sherwood             Crime, Drama, Mystery      7.4

How can I get the Movie rows where a value in the Genre column matches at least one of the User's Genre preferences?
A: try this:
pattern = user['Genre'].str.replace(', ', '|')[0]
result = movies.query('Genre.str.contains(@pattern)')
print(result)

A: The example uses a for loop to get a list for each user on df2
import pandas as pd
df=pd.read_csv("db1.csv",header=[0]) # movies
df2=pd.read_csv("db2.csv",header=[0]) # users

for ir,row in df2.iterrows():
    gen=row["Genre"].replace(",","|").replace(" ","")
    filtereddf=df[df["Genre"].str.contains(gen)]
How to filter row dataframe based on value of another dataframe
How can I filter rows based on the Genre column using values coming from another dataframe?
I have a movies dataframe as follows:
Movie_Name           Genre                      Rating
Halloween            Crime, Horror, Thriller    6.5
Nope                 Horror, Mystery, Sci-Fi    6.9
The Midnight Club    Drama, Horror, Mystery     6.7
The Northman         Action, Adventure, Drama   7.1
Prey                 Action, Adventure, Drama   7.2
Uncharted            Action, Adventure          6.3
Sherwood             Crime, Drama, Mystery      7.4

And I have a user dataframe as follows:
User_Id    User_Name    Genre
100        Christine    Horror, Thriller, Drama

I want to get the following rows as output because the user likes the horror, thriller, and drama genres.
Movie_Name           Genre                      Rating
Halloween            Crime, Horror, Thriller    6.5
Nope                 Horror, Mystery, Sci-Fi    6.9
The Midnight Club    Drama, Horror, Mystery     6.7
The Northman         Action, Adventure, Drama   7.1
Prey                 Action, Adventure, Drama   7.2
Sherwood             Crime, Drama, Mystery      7.4

How can I get the Movie rows where a value in the Genre column matches at least one of the User's Genre preferences?
[ "try this:\npattern = user['Genre'].str.replace(', ', '|')[0]\nresult = movies.query('Genre.str.contains(@pattern)')\nprint(result)\n\n", "The example use a for loop to get a list for each user on df2\nimport pandas as pd\ndf=pd.read_csv(\"db1.csv\",header=[0]) # movies\ndf2=pd.read_csv(\"db2.csv\",header=[0]) # users\n\nfor ir,row in df2.iterrows():\n gen=row[\"Genre\"].replace(\",\",\"|\").replace(\" \",\"\")\n filtereddf=df[df[\"Genre\"].str.contains(gen)]\n \n\n" ]
[ 2, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074541262_pandas_python.txt
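A regex-free sketch of the same filter: split both genre strings into sets and keep rows whose intersection is non-empty. movies and user are the question's dataframes, and the user row is assumed to sit at index 0; this avoids surprises if a genre name ever contains regex metacharacters.
wanted = set(user.loc[0, 'Genre'].split(', '))  # {'Horror', 'Thriller', 'Drama'}

# keep a movie when at least one of its genres appears in the user's set
mask = movies['Genre'].apply(lambda g: bool(wanted & set(g.split(', '))))
result = movies[mask]
print(result)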
Q: How to convert Clipboard Image BMP to PNG using Pillow package without saving and then loading I would like to convert an image obtained from the Windows Clipboard to PNG format without having to save and then reload. As per the code below, I am saving the clipboard image and then reloading it. Is there a way to convert the image to PNG format without those extra steps, such that the PIL.BmpImagePlugin.DibImageFile gets converted to PIL.PngImagePlugin.PngImageFile?
Here is the current code:
from PIL import ImageGrab, Image

# Get the clipboard image
img1 = ImageGrab.grabclipboard()

# Save the image from the clipboard to file
img1.save('paste.png', 'PNG')
print("Image Type1:", type(img1))

# Load the image back in
img2 = Image.open('paste.png')
print("Image Type2:", type(img2))

OUTPUT:
Image Type1: <class 'PIL.BmpImagePlugin.DibImageFile'>
Image Type2: <class 'PIL.PngImagePlugin.PngImageFile'>

A: As per some help from Seon's comment, this got me on the right track, and fulfilled my requirements. 
As per Seon:

"the idea is to save the image to an in-memory BytesIO object, and reload it from there. We're still saving and loading, but not to disk."

Which is exactly what I wanted.
Here is the code I used:
from PIL import ImageGrab, Image
import io


def convertImageFormat(imgObj, outputFormat="PNG"):
    newImgObj = imgObj
    if outputFormat and (imgObj.format != outputFormat):
        imageBytesIO = io.BytesIO()
        imgObj.save(imageBytesIO, outputFormat)
        newImgObj = Image.open(imageBytesIO)

    return newImgObj


# Get the clipboard image and convert to PNG
img1 = ImageGrab.grabclipboard()
img2 = convertImageFormat(img1)

# Check the types
print("Image Type1:", type(img1))
print("Image Type2:", type(img2))

OUTPUT:
Image Type1: <class 'PIL.BmpImagePlugin.DibImageFile'>
Image Type2: <class 'PIL.PngImagePlugin.PngImageFile'>
How to convert Clipboard Image BMP to PNG using Pillow package without saving and then loading
I would like to convert an image obtained from the Windows Clipboard to PNG format without having to save and then reload. As per the code below, I am saving the clipboard image and then reloading it. Is there a way to convert the image to PNG format without those extra steps, such that the PIL.BmpImagePlugin.DibImageFile gets converted to PIL.PngImagePlugin.PngImageFile?
Here is the current code:
from PIL import ImageGrab, Image

# Get the clipboard image
img1 = ImageGrab.grabclipboard()

# Save the image from the clipboard to file
img1.save('paste.png', 'PNG')
print("Image Type1:", type(img1))

# Load the image back in
img2 = Image.open('paste.png')
print("Image Type2:", type(img2))

OUTPUT:
Image Type1: <class 'PIL.BmpImagePlugin.DibImageFile'>
Image Type2: <class 'PIL.PngImagePlugin.PngImageFile'>
[ "As per some help from Seon's comment, this got me on the right track, and fulfilled my requirements. \nAs per Seon:\n\n\"the idea is to save the image to an in-memory BytesIO object, and reload it from there. We're still saving and loading, but not to disk.\"\n\nWhich is exactly what I wanted.\nHere is the code I used:\nfrom PIL import ImageGrab, Image\nimport io\n\n\ndef convertImageFormat(imgObj, outputFormat=\"PNG\"):\n newImgObj = imgObj\n if outputFormat and (imgObj.format != outputFormat):\n imageBytesIO = io.BytesIO()\n imgObj.save(imageBytesIO, outputFormat)\n newImgObj = Image.open(imageBytesIO)\n \n return newImgObj\n\n\n# Get the clipboard image and convert to PNG\nimg1 = ImageGrab.grabclipboard()\nimg2 = convertImageFormat(img1)\n\n# Check the types\nprint(\"Image Type1:\", type(img1))\nprint(\"Image Type2:\", type(img2))\n\n\nOUTPUT:\nImage Type1: <class 'PIL.BmpImagePlugin.DibImageFile'>\nImage Type2: <class 'PIL.PngImagePlugin.PngImageFile'>\n\n" ]
[ 0 ]
[]
[]
[ "bmp", "clipboard", "png", "python", "python_imaging_library" ]
stackoverflow_0074520589_bmp_clipboard_png_python_python_imaging_library.txt
Q: Efficient Method to interpolate between 2 pandas date objects? I am trying to create a table that shows the months that a category of people is available, using an excel table like this one: Table
I know that I can interpolate using the following method:
import pandas as pd
data = pd.read_csv('Dataset.csv')
final = pd.DataFrame()
for index,row in data.iterrows():
    start = row['Start Date']
    end = row['End Date']
    range = pd.date_range(start,end, freq='M')
    df = pd.DataFrame(range)
    df['Name'] = str(row['Project'])
    final = pd.concat([final, df], ignore_index=True)

But I know that this method is very inefficient, and that there should be a more efficient way using pandas native methods, but I am unsure how to do this. The output should look like this: Output
A: Take a look at this answer, which seems to be doing the same thing you want: https://stackoverflow.com/a/61930008/11542834
Efficient Method to interpolate between 2 pandas date objects?
I am trying to create a table that shows the months that a category of people is available, using an excel table like this one: Table
I know that I can interpolate using the following method:
import pandas as pd
data = pd.read_csv('Dataset.csv')
final = pd.DataFrame()
for index,row in data.iterrows():
    start = row['Start Date']
    end = row['End Date']
    range = pd.date_range(start,end, freq='M')
    df = pd.DataFrame(range)
    df['Name'] = str(row['Project'])
    final = pd.concat([final, df], ignore_index=True)

But I know that this method is very inefficient, and that there should be a more efficient way using pandas native methods, but I am unsure how to do this. The output should look like this: Output
[ "Take a look at this answer, which seems to be doing the same thing you want: https://stackoverflow.com/a/61930008/11542834\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074541432_pandas_python.txt
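Since the linked answer is not quoted in the record, here is a minimal sketch of the usual vectorised pattern: build one date_range per row and let DataFrame.explode (available since pandas 0.25) expand the rows. Column names follow the question; the sample frame is an assumption.
import pandas as pd

data = pd.DataFrame({'Project': ['A', 'B'],
                     'Start Date': ['2022-01-01', '2022-03-01'],
                     'End Date': ['2022-03-31', '2022-05-31']})

# one DatetimeIndex of month-ends per row, then one row per month
data['Month'] = data.apply(
    lambda r: pd.date_range(r['Start Date'], r['End Date'], freq='M'), axis=1)
final = data.explode('Month')[['Project', 'Month']].reset_index(drop=True)
print(final)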
Q: Combine bar chart and highlight when using Pandas Styler on the same column Is there a way to combine the bar styler and a highlight styler in Pandas' DataFrame? For example, I want to red highlight a NaN value, but if it is not NaN, a green bar is shown.
Score
79    --> green bar
84    --> green bar
nan   --> red highlight

Currently, I can only use highlight_null or applymap to highlight the NaN value, but don't know how to combine it with pandas.io.formats.style.Styler.bar.
A: Finally, I can combine it by calling the bar function first, and then followed by applymap
def style_zero(v, props=''):
    return props if v == 0 or v == np.nan else None

df.style.bar(color='#5fba7d').applymap(style_zero, props='background-color:pink;color:red')

Maybe, it will help someone who have the same concern with me.
Combine bar chart and highlight when using Pandas Styler on the same column
Is there a way to combine the bar styler and a highlight styler in Pandas' DataFrame? For example, I want to red highlight a NaN value, but if it is not NaN, a green bar is shown.
Score
79    --> green bar
84    --> green bar
nan   --> red highlight

Currently, I can only use highlight_null or applymap to highlight the NaN value, but don't know how to combine it with pandas.io.formats.style.Styler.bar.
[ "Finally, I can combine it by calling the bar function first, and then followed by applymap\ndef style_zero(v, props=''):\n return props if v == 0 or v == np.nan else None\n\ndf.style.bar(color='#5fba7d').applymap(style_zero, props='background-color:pink;color:red')\n\nMaybe, it will help someone who have the same concern with me.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "pandas_styles", "python" ]
stackoverflow_0074541618_dataframe_pandas_pandas_styles_python.txt
Q: Python printing list of items Below is my function and output. I want to remove the \n present in the output.
def printInventory():
    fh = open("stock.txt","r")
    print('Current Inventory')
    print('-----------------')
    L=fh.readlines()
    print("List of all Stock Items")
    for i in L:
        L=i.split(",")
        print(L)
    CHOICE = int(input('Enter 98 to continue or 99 to exit: '))
    if CHOICE == 98:
        menuDisplay()
    else:
        exit()

Output:
List of all Stock Items
['APPLE', '100\n']
['BANANA', '50\n']
['CHILLI', '100\n']
['MANGO', '300\n']

I would like to remove the \n from the output
A: You can use the .strip() function to remove the new line character
For instance:
out = ['APPLE', '100\n']
out[1] = out[1].strip('\n')
print(out) # ['APPLE', '100']

If you have a list of values, you can just loop through and apply the same logic to each item in the list
A: Since you are reading each line, you could rewrite the code to iterate over each line once instead of reading them all at once with readlines(). This has the benefit of not modifying the list you are iterating over.
def printInventory():
    L = []

    print('Current Inventory')
    print('-----------------')

    with open("stock.txt", "r") as fh:
        for line in fh:
            line = line.strip()
            L.append(line.split(","))
            print(L)
    # ...

Using the with open() syntax also ensures that the file is closed properly, even if the program crashes: Why is `with open()` better for opening files in Python?
Python printing list of items
Below is my function and output. I want to remove the \n present in the output.
def printInventory():
    fh = open("stock.txt","r")
    print('Current Inventory')
    print('-----------------')
    L=fh.readlines()
    print("List of all Stock Items")
    for i in L:
        L=i.split(",")
        print(L)
    CHOICE = int(input('Enter 98 to continue or 99 to exit: '))
    if CHOICE == 98:
        menuDisplay()
    else:
        exit()

Output:
List of all Stock Items
['APPLE', '100\n']
['BANANA', '50\n']
['CHILLI', '100\n']
['MANGO', '300\n']

I would like to remove the \n from the output
[ "You can use the .strip() function to remove the new line character\nFor instance:\nout = ['APPLE', '100\\n']\nout[1] = out[1].strip('\\n')\nprint(out) # ['APPLE', '100']\n\nIf you have a list of values, you can just loop through and apply the same logic to each item in the list\n", "Since you are reading each line, you could rewrite the code to iterate over each line once instead of reading them all at once with readlines(). This has the benefit of not modifying the list you are iterating over.\ndef printInventory():\n L = []\n\n print('Current Inventory')\n print('-----------------')\n\n with open(\"stock.txt\", \"r\") as fh:\n for line in fh:\n line = line.strip()\n L.append(line.split(\",\"))\n print(L)\n # ...\n\nUsing the with open() syntax also ensures that the file is closed properly, even if the program crashes: Why is `with open()` better for opening files in Python?\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074531582_python.txt
Q: Pandas Multiindex columns slice: use combination of all and precise select Input: hierarchically headered dataframe (multiindex columns).
Ask: select a combination of specific column(s) [level0, level1] and broadcast [level0, :].
Example:
import numpy as np
import pandas as pd

index=pd.MultiIndex.from_product([["A", "B"], ["x", "y", "z"]])
df = pd.DataFrame(np.random.randn(8,6), columns=index)

The desired result is to select ('A','y') and everything for 'B'. I've managed to achieve this using the solution below:
df[[x for x in df.columns if x == ('A','z') or x[0]=='B']]

I've tried to use .loc[] and slice(None) but this did not work. Is there a more elegant solution to iterate over the columns' tuples? Cheers.
A: Your list comprehension should work:
cols = [ent for ent in df if ent == ('A','y') or ent[0] == 'B']
df.loc[:, cols]
          A         B
          y         x         y         z
0  0.069915  2.563734  1.034784  0.659189
1 -0.240847  1.924626  1.241827  0.973155
2 -1.091353 -1.003005  1.648075 -1.162863
3 -0.747503 -0.211539  1.861991 -1.011261
4 -0.354648 -0.117533  0.524876  0.884997
5  0.786158 -2.073479  1.374893  1.428770
6  0.597740  0.853482 -0.187112  0.000626
7 -0.749839 -1.084576 -0.327888 -0.286908

Another option is with select_columns from pyjanitor:
# pip install pyjanitor
import pandas as pd
import janitor

df.select_columns(('A','y'), 'B')
          A         B
          y         x         y         z
0  0.069915  2.563734  1.034784  0.659189
1 -0.240847  1.924626  1.241827  0.973155
2 -1.091353 -1.003005  1.648075 -1.162863
3 -0.747503 -0.211539  1.861991 -1.011261
4 -0.354648 -0.117533  0.524876  0.884997
5  0.786158 -2.073479  1.374893  1.428770
6  0.597740  0.853482 -0.187112  0.000626
7 -0.749839 -1.084576 -0.327888 -0.286908
Pandas Multiindex columns slice: use combination of all and precise select
Input: hierarchically headered dataframe (multiindex columns).
Ask: select a combination of specific column(s) [level0, level1] and broadcast [level0, :].
Example:
import numpy as np
import pandas as pd

index=pd.MultiIndex.from_product([["A", "B"], ["x", "y", "z"]])
df = pd.DataFrame(np.random.randn(8,6), columns=index)

The desired result is to select ('A','y') and everything for 'B'. I've managed to achieve this using the solution below:
df[[x for x in df.columns if x == ('A','z') or x[0]=='B']]

I've tried to use .loc[] and slice(None) but this did not work. Is there a more elegant solution to iterate over the columns' tuples? Cheers.
[ "Your list comprehension should work:\ncols = [ent for ent in df if ent == ('A','y') or ent[0] == 'B']\ndf.loc[:, cols]\n A B\n y x y z\n0 0.069915 2.563734 1.034784 0.659189\n1 -0.240847 1.924626 1.241827 0.973155\n2 -1.091353 -1.003005 1.648075 -1.162863\n3 -0.747503 -0.211539 1.861991 -1.011261\n4 -0.354648 -0.117533 0.524876 0.884997\n5 0.786158 -2.073479 1.374893 1.428770\n6 0.597740 0.853482 -0.187112 0.000626\n7 -0.749839 -1.084576 -0.327888 -0.286908\n\nAnother option is with select_columns from pyjanitor:\n# pip install pyjanitor\nimport pandas as pd\nimport janitor\n\ndf.select_columns(('A','y'), 'B')\n A B\n y x y z\n0 0.069915 2.563734 1.034784 0.659189\n1 -0.240847 1.924626 1.241827 0.973155\n2 -1.091353 -1.003005 1.648075 -1.162863\n3 -0.747503 -0.211539 1.861991 -1.011261\n4 -0.354648 -0.117533 0.524876 0.884997\n5 0.786158 -2.073479 1.374893 1.428770\n6 0.597740 0.853482 -0.187112 0.000626\n7 -0.749839 -1.084576 -0.327888 -0.286908\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "numpy", "pandas", "python" ]
stackoverflow_0074540899_dataframe_numpy_pandas_python.txt
Q: AttributeError: 'DatetimeProperties' object has no attribute 'day_name' I am using the pandas day_name() function but it's giving an attribute error, as below:
s = pd.Series(pd.date_range(start='2018-01-01', freq='D', periods=3))
s
0   2018-01-01
1   2018-01-02
2   2018-01-03
dtype: datetime64[ns]

s.dt.day_name()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-37-75cff12ad412> in <module>()
----> 1 s.dt.day_name()

AttributeError: 'DatetimeProperties' object has no attribute 'day_name'

The pandas documentation has the same example. Don't know why it's not working.
A: Worked for me when I tried in my iPython notebook.
AttributeError: 'DatetimeProperties' object has no attribute 'day_name'
I am using the pandas day_name() function but it's giving an attribute error as below:
s = pd.Series(pd.date_range(start='2018-01-01', freq='D', periods=3))

s
0   2018-01-01
1   2018-01-02
2   2018-01-03
dtype: datetime64[ns]

s.dt.day_name()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-37-75cff12ad412> in <module>()
----> 1 s.dt.day_name()

AttributeError: 'DatetimeProperties' object has no attribute 'day_name'

pandas documentation has the same example. Don't know why it's not working.
[ "\nWorked for me when I tried in my iPython notebook.\n" ]
[ 0 ]
[]
[]
[ "datetime", "pandas", "python", "weekday" ]
stackoverflow_0074537161_datetime_pandas_python_weekday.txt
Q: PyQt6 how to get black menu bar in Mac OS Dark Mode I'm developing an app with PyQt6 for Mac OS. When using Dark Mode in the Mac OS settings it applies to all widgets etc in my app however for some reason the menubar does not turn black like it does for most other apps e.g. Google Chrome when running Mac OS in Dark Mode. I've noticed that the Finder app also does not get a black menubar for some reason. Anyway, just wanted to check if anyone has successfully managed to get a black menu bar for their PyQt6 app when running in Mac OS Dark Mode? I'm using Python 3.11 and Mac OS Ventura. I'm using PyInstaller to build the bundle. A: I solved it. Seems one only gets a black menu bar when running in full screen, which I was not doing.
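For anyone wanting to verify the full-screen behaviour, here is a minimal sketch of a PyQt6 window with a menu bar shown full screen, where the dark menu bar should appear per the answer; the window title and menu contents are arbitrary placeholders:

import sys
from PyQt6.QtWidgets import QApplication, QMainWindow

app = QApplication(sys.argv)
win = QMainWindow()
win.setWindowTitle("Menu bar test")

file_menu = win.menuBar().addMenu("File")
quit_action = file_menu.addAction("Quit")
quit_action.triggered.connect(app.quit)

# Per the answer, the menu bar only turns black in full screen;
# with win.show() it keeps the default appearance.
win.showFullScreen()

sys.exit(app.exec())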
PyQt6 how to get black menu bar in Mac OS Dark Mode
I'm developing an app with PyQt6 for Mac OS. When using Dark Mode in the Mac OS settings it applies to all widgets etc in my app however for some reason the menubar does not turn black like it does for most other apps e.g. Google Chrome when running Mac OS in Dark Mode. I've noticed that the Finder app also does not get a black menubar for some reason. Anyway, just wanted to check if anyone has successfully managed to get a black menu bar for their PyQt6 app when running in Mac OS Dark Mode? I'm using Python 3.11 and Mac OS Ventura. I'm using PyInstaller to build the bundle.
[ "I solved it. Seems one only gets a black menu bar when running in full screen, which I was not doing.\n" ]
[ 0 ]
[]
[]
[ "macos", "pyqt6", "python", "qt6" ]
stackoverflow_0074541742_macos_pyqt6_python_qt6.txt
Q: How to store user input in a variable in django python
So I want to take the user input and compare it to data present in the sqlite3 db, and if it matches I'd like to print that whole row, using the django orm.
models.py
from django.db import models

class Inventory(models.Model):
    item_bc = models.CharField(max_length=100)
    item_details = models.CharField(max_length=100)

urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
    path('search', views.search, name='search'),
]

views.py
from django.shortcuts import render
from django.http import HttpResponse
from .models import Inventory

# Create your views here.
def index(request):
    return render(request, 'form.html')

def search(request):
    search_input = request.POST.get('barcode')
    data = Inventory.objects.filter(item_bc=search_input).values()
    return render(request, 'result.html', {"data": data})

I really appreciate your time and help, thank you!
I think adding logic to the search function to compare should work, but I'm extremely new to Django and don't really know how to start.

A: html
<form action="search" method="post">
    {% csrf_token %}
    BARCODE: <input type="text" name="barcode">
    <br>
    <br>
    <input type="submit">
</form>

In views.py you can fetch the searched input like this and filter.
def search(request):
    search_input = request.POST['barcode']
    data = ModelName.objects.filter(fieldname__icontains=search_input)
    return render(request, 'result.html', {"data":data})
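The view in the question is essentially right for the exact-match case; here is a lightly annotated sketch of it, showing that .values() hands back one dict per matching row, so "printing that whole row" is just printing those dicts:

from django.shortcuts import render
from .models import Inventory

def search(request):
    search_input = request.POST.get('barcode', '')
    # .values() yields one dict per row,
    # e.g. {'id': 1, 'item_bc': '...', 'item_details': '...'}
    data = Inventory.objects.filter(item_bc=search_input).values()
    for row in data:
        print(row)  # each matching row as a dict
    return render(request, 'result.html', {"data": data})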
How to store user input in a variable in django python
So I want to take the user input and compare it to data present in the sqlite3 db, and if it matches I'd like to print that whole row, using the django orm.
models.py
from django.db import models

class Inventory(models.Model):
    item_bc = models.CharField(max_length=100)
    item_details = models.CharField(max_length=100)

urls.py
from django.urls import path
from . import views

urlpatterns = [
    path('', views.index, name='index'),
    path('search', views.search, name='search'),
]

views.py
from django.shortcuts import render
from django.http import HttpResponse
from .models import Inventory

# Create your views here.
def index(request):
    return render(request, 'form.html')

def search(request):
    search_input = request.POST.get('barcode')
    data = Inventory.objects.filter(item_bc=search_input).values()
    return render(request, 'result.html', {"data": data})

I really appreciate your time and help, thank you!
I think adding logic to the search function to compare should work, but I'm extremely new to Django and don't really know how to start.
[ "html\n<form action=\"search\" method=\"post\">\n BARCODE: <input type=\"text\" name=\"barcode\">\n <br>\n <br>\n <input type=\"submit\">\n</form>\n\nin view.py you can fetch searched input like this and filter.\ndef search(request):\n search_input = request.POST['barcode']\n data = ModelName.objects.filter(fieldname__icontains=search_input)\n return render(request, 'result.html', {\"data\":data})\n\n" ]
[ 0 ]
[]
[]
[ "database", "django", "orm", "python", "user_input" ]
stackoverflow_0074541719_database_django_orm_python_user_input.txt
Q: My ball keeps leaving a trail. I did use blit and update() but it is not working
import pygame
from sys import exit

pygame.init()

widthscreen = 1440 #middle 720
heightscreen = 790 #middle 395

w_surface = 800
h_surface = 500

midalignX_lg = (widthscreen-w_surface)/2
midalignY_lg = (heightscreen-h_surface)/2

screen = pygame.display.set_mode((widthscreen,heightscreen))
pygame.display.set_caption("Collision Game")
clock = pygame.time.Clock()
test_font = pygame.font.Font('font/Pixeltype.ttf', 45)

surface = pygame.Surface((w_surface,h_surface))
surface.fill('Light Yellow')

blue_b = pygame.image.load('images/blue.png')
blue_b = pygame.transform.scale(blue_b,(35,35))
yellow_b = pygame.image.load('images/yellow.png')
yellow_b = pygame.transform.scale(yellow_b,(35,35))

text_surface = test_font.render('Ball Option:', True, 'White')

#object_1_x_pos = 20
#object_1 = pygame.draw.circle(surface, (137,204,240), (object_1_x_pos, 200), 15, 25)

barrier_1_x = 10
barrier_1 = pygame.image.load('images/yellow.png')
barrier_1 = pygame.transform.scale(barrier_1,(35,35))

while True:
    #elements & update

    #event loop
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            exit()

    screen.blit(surface, (midalignX_lg,midalignY_lg))
    screen.blit(blue_b,(150,250))
    screen.blit(yellow_b, (150,300))
    screen.blit(text_surface,(150, 200))

    barrier_1_x += 1
    surface.blit(barrier_1, (barrier_1_x, 350))

    pygame.display.update()

    clock.tick(60)

My ball keeps leaving a trail. I did use blit and update() but it is not working. How do I solve this? I did blit everything to my knowledge. When I run the code, the yellow ball within the light yellow rectangle moves, but it leaves a trail of lines and copies of itself.
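A small refinement on that idea: rather than allocating a brand-new Surface every frame, it is usually enough, and cheaper, to clear the existing one with fill() at the top of the loop. A sketch of just the reworked loop, assuming the setup above:

while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            exit()

    # Erase last frame's drawing instead of creating a new Surface
    surface.fill('Light Yellow')

    barrier_1_x += 1
    surface.blit(barrier_1, (barrier_1_x, 350))

    screen.blit(surface, (midalignX_lg, midalignY_lg))
    screen.blit(blue_b, (150, 250))
    screen.blit(yellow_b, (150, 300))
    screen.blit(text_surface, (150, 200))

    pygame.display.update()
    clock.tick(60)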
My ball keeps leaving a trail. I did use blit and update() but it is not working
import pygame
from sys import exit

pygame.init()

widthscreen = 1440 #middle 720
heightscreen = 790 #middle 395

w_surface = 800
h_surface = 500

midalignX_lg = (widthscreen-w_surface)/2
midalignY_lg = (heightscreen-h_surface)/2

screen = pygame.display.set_mode((widthscreen,heightscreen))
pygame.display.set_caption("Collision Game")
clock = pygame.time.Clock()
test_font = pygame.font.Font('font/Pixeltype.ttf', 45)

surface = pygame.Surface((w_surface,h_surface))
surface.fill('Light Yellow')

blue_b = pygame.image.load('images/blue.png')
blue_b = pygame.transform.scale(blue_b,(35,35))
yellow_b = pygame.image.load('images/yellow.png')
yellow_b = pygame.transform.scale(yellow_b,(35,35))

text_surface = test_font.render('Ball Option:', True, 'White')

#object_1_x_pos = 20
#object_1 = pygame.draw.circle(surface, (137,204,240), (object_1_x_pos, 200), 15, 25)

barrier_1_x = 10
barrier_1 = pygame.image.load('images/yellow.png')
barrier_1 = pygame.transform.scale(barrier_1,(35,35))

while True:
    #elements & update

    #event loop
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            exit()

    screen.blit(surface, (midalignX_lg,midalignY_lg))
    screen.blit(blue_b,(150,250))
    screen.blit(yellow_b, (150,300))
    screen.blit(text_surface,(150, 200))

    barrier_1_x += 1
    surface.blit(barrier_1, (barrier_1_x, 350))

    pygame.display.update()

    clock.tick(60)

My ball keeps leaving a trail. I did use blit and update() but it is not working. How do I solve this? I did blit everything to my knowledge. When I run the code, the yellow ball within the light yellow rectangle moves, but it leaves a trail of lines and copies of itself.
[ "The problem is that you have one surface that you continually blit a yellow square on.\nThe program doesn't know to remove the previously drawn square.\nWhat you can do is just redraw the surface on every loop which is fine given that your program is relatively simple.\nwhile True:\n #elements & update\n\n #event loop\n for event in pygame.event.get():\n if event.type == pygame.QUIT:\n pygame.quit()\n exit()\n \n screen.blit(surface, (midalignX_lg,midalignY_lg))\n screen.blit(blue_b,(150,250)) \n screen.blit(yellow_b, (150,300))\n screen.blit(text_surface,(150, 200))\n\n\n barrier_1_x += 1\n # Note the redrawing of the surface\n surface = pygame.Surface((w_surface,h_surface))\n surface.fill('Light Yellow')\n surface.blit(barrier_1, (barrier_1_x, 350))\n\n pygame.display.update()\n\n clock.tick(60)\n\n" ]
[ 1 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074541645_pygame_python.txt
Q: How to get all pixels in mask in C++ In python, we can use such code to fetch all pixels under mask: src_img = cv2.imread("xxx") mask = src_img > 50 fetch = src_img[mask] what we get is a ndarray including all pixels matching condition mask. How to implement the same function using C++opencv ? I've found that copyTo can select pixels under specified mask, but it can only copy those pixels to another Mat instead of what python did. A: This is not that straightforward in C++ (as expected). That operation breaks down in further, smaller operations. One way to achieve a std::vector with the same pixel values above your threshold is this, I'm using this test image: // Read the input image: std::string imageName = "D://opencvImages//grayDog.png"; cv::Mat inputImage = cv::imread( imageName ); // Convert BGR to Gray: cv::Mat grayImage; cv::cvtColor( inputImage, grayImage, cv::COLOR_RGB2GRAY ); cv::Mat mask; int thresholdValue = 50; cv::threshold( grayImage, mask, thresholdValue, 255, cv::THRESH_BINARY ); The above bit just creates a cv::Mat where each pixel above the threshold is drawn with a value of 255, 0 otherwise. It is (one possible) equivalent of mask = src_img > 50. Now, let's mask the original grayscale image with this mask. Think about an element-wise multiplication between the two cv::Mats. One possible way is this: // Create grayscale mask: cv::Mat output; grayImage.copyTo( output, mask ); Now we have the original pixel values and everything else is zero. Convenient, because we can find now the locations of the non-zero pixels: // Locate the non-zero pixel values: std::vector< cv::Point > pixelLocations; cv::findNonZero( output, pixelLocations ); Alright, we have a std::vector of cv::Points that locate each non-zero pixel. We can use this info to index the original grayscale pixels in the original matrix: // Extract each pixel value using its location: std::vector< int > pixelValues; int totalPoints = (int)pixelLocations.size(); for( int i = 0; i < totalPoints; i++ ){ // Get pixel location: cv::Point currentPoint = pixelLocations[i]; // Get pixel value: int currentPixel = (int)grayImage.at<uchar>( currentPoint ); pixelValues.push_back( currentPixel ); // Print info: std::cout<<"i: "<<i<<" currentPoint: "<<currentPoint<<" pixelValue: "<<currentPixel<<std::endl; } You end up with pixelValues, which is a std::vector containing a list of all the pixels that are above your threshold. A: Why do you hate writing loop? I think this is the easiest way: cv::Mat Img = ... //Where, this Img is 8UC1 // * In this sample, extract the pixel positions std::vector< cv::Point > ResultData; const unsigned char Thresh = 50; for( int y=0; y<Img.rows; ++y ) { const unsigned char *p = Img.ptr<unsigned char>(y); for( int x=0; x<Img.cols; ++x, ++p ) { if( *p > Thresh ) {//Here, pick up this pixel's info you want. ResultData.emplace_back( x,y ); } } } Because I received a nervous complaint, I add an example of collecting values. In the following example, a mask image Mask is input to the process. cv::Mat Img = ... //Where, this Img is 8UC1 cv::Mat Mask = ...; //Same size as Img, 8UC1 std::vector< unsigned char > ResultData; //collect pixel values for( int y=0; y<Img.rows; ++y ) { const unsigned char *p = Img.ptr<unsigned char>(y); const unsigned char *m = Mask.ptr<unsigned char>(y); for( int x=0; x<Img.cols; ++x, ++p, ++m ) { if( *m ){ ResultData.push_back( *p ); } } }
How to get all pixels in mask in C++
In Python, we can use such code to fetch all pixels under a mask:
src_img = cv2.imread("xxx")
mask = src_img > 50
fetch = src_img[mask]

What we get is an ndarray including all pixels matching the condition mask.
How to implement the same function using C++ OpenCV?
I've found that copyTo can select pixels under a specified mask, but it can only copy those pixels to another Mat instead of what Python did.
[ "This is not that straightforward in C++ (as expected). That operation breaks down in further, smaller operations. One way to achieve a std::vector with the same pixel values above your threshold is this, I'm using this test image:\n// Read the input image:\nstd::string imageName = \"D://opencvImages//grayDog.png\";\ncv::Mat inputImage = cv::imread( imageName );\n\n// Convert BGR to Gray:\ncv::Mat grayImage;\ncv::cvtColor( inputImage, grayImage, cv::COLOR_RGB2GRAY );\n\ncv::Mat mask;\nint thresholdValue = 50;\ncv::threshold( grayImage, mask, thresholdValue, 255, cv::THRESH_BINARY );\n\nThe above bit just creates a cv::Mat where each pixel above the threshold is drawn with a value of 255, 0 otherwise. It is (one possible) equivalent of mask = src_img > 50. Now, let's mask the original grayscale image with this mask. Think about an element-wise multiplication between the two cv::Mats. One possible way is this:\n// Create grayscale mask:\ncv::Mat output;\ngrayImage.copyTo( output, mask );\n\nNow we have the original pixel values and everything else is zero. Convenient, because we can find now the locations of the non-zero pixels:\n// Locate the non-zero pixel values:\nstd::vector< cv::Point > pixelLocations;\ncv::findNonZero( output, pixelLocations );\n\nAlright, we have a std::vector of cv::Points that locate each non-zero pixel. We can use this info to index the original grayscale pixels in the original matrix:\n// Extract each pixel value using its location:\nstd::vector< int > pixelValues;\nint totalPoints = (int)pixelLocations.size();\n\nfor( int i = 0; i < totalPoints; i++ ){\n // Get pixel location:\n cv::Point currentPoint = pixelLocations[i];\n\n // Get pixel value:\n int currentPixel = (int)grayImage.at<uchar>( currentPoint );\n pixelValues.push_back( currentPixel );\n\n // Print info:\n std::cout<<\"i: \"<<i<<\" currentPoint: \"<<currentPoint<<\" pixelValue: \"<<currentPixel<<std::endl;\n}\n\nYou end up with pixelValues, which is a std::vector containing a list of all the pixels that are above your threshold.\n", "Why do you hate writing loop?\nI think this is the easiest way:\ncv::Mat Img = ... //Where, this Img is 8UC1\n\n// * In this sample, extract the pixel positions\nstd::vector< cv::Point > ResultData;\n\nconst unsigned char Thresh = 50;\nfor( int y=0; y<Img.rows; ++y )\n{\n const unsigned char *p = Img.ptr<unsigned char>(y);\n for( int x=0; x<Img.cols; ++x, ++p )\n {\n if( *p > Thresh )\n {//Here, pick up this pixel's info you want.\n ResultData.emplace_back( x,y );\n }\n }\n}\n\n\nBecause I received a nervous complaint, I add an example of collecting values.\nIn the following example, a mask image Mask is input to the process.\ncv::Mat Img = ... //Where, this Img is 8UC1\ncv::Mat Mask = ...; //Same size as Img, 8UC1\n\nstd::vector< unsigned char > ResultData; //collect pixel values\nfor( int y=0; y<Img.rows; ++y )\n{\n const unsigned char *p = Img.ptr<unsigned char>(y);\n const unsigned char *m = Mask.ptr<unsigned char>(y);\n for( int x=0; x<Img.cols; ++x, ++p, ++m )\n {\n if( *m ){ ResultData.push_back( *p ); }\n }\n}\n\n" ]
[ 2, 1 ]
[]
[]
[ "c++", "opencv", "python" ]
stackoverflow_0074527114_c++_opencv_python.txt
Q: Random motion of a person generated with forward and backwards steps with proper iterations
My instructions:
-A person walks a random amount of steps forward, and then a different random number of steps backwards.
-The random steps are anywhere between 2 and 20
-The number of steps forward is always greater than the number of steps backwards
-That motion of forward / backward random steps repeats itself again and again
-The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion)
-After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started.
-The total number of steps is generated randomly and will be between 10 and 85
-You are writing a program to simulate the motion taken by the person.
-Display that motion and the number of steps he ends away from where he started.
For Example:
-If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display:
FFFFBBFFFFBBF = 5 Steps from the start
-If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display
FFFFFBBBFFFFFBBB = 4 Steps from the start
import random
x= random.randint(1,10)
y= random.randint(1,10)
total= random.randint(10,85)

while True:
    counter = 2
    while counter < total:
        print("F", end="")
        counter = counter + 1
        total = total - 1
        if total == 0 or counter > total:
            break
    counter = 2
    while counter < total:
        print("B", end="")
        counter = counter + 1
        total = total - 1
    print("\nThe total amount of steps is", total)
    if total == 0 or counter > total:
        break

I tried making variables for total and x,y and adding and subtracting but I can't find out what I'm doing wrong. I broke the steps forward and backward into x,y. 1-10 each. My loop only works sometimes, printing the steps and the statement of the total steps. Most of the time it just prints FFFFFFF with no statement whatsoever. It also only prints stuff like FFFFFBBBBB and not FFFFBBFFFFBB etc. I'm in need of guidance or corrections on my code.

A: First of all, why do you need an "infinite" loop if you know the total number of steps in advance? We can replace it with a for loop that runs the exact number of steps.
While inside the loop we want to keep track of several things:

current direction (boolean - true if moving forward)
total displacement (number of forward steps minus backward steps)
counter of steps in current direction

Here is the code that implements the above in a very straightforward (not very optimized) fashion for clarity.
import random

# backward cannot be greater than 9
# because it will break condition that forward should be greater
backward = random.randint(1, 9)

# forward has to be greater than backward
forward = random.randint(backward + 1, 10)
total = random.randint(10, 85)

# test for specific values
# forward, backward, total = 4, 2, 13

print(f"{forward} {backward} {total}")

forward_direction = True
displacement = 0
one_way = 0
string = []

for i in range(total):
    if forward_direction:
        one_way += 1
        displacement += 1
        # do we need to change direction?
        if one_way == forward:
            forward_direction = not forward_direction
            one_way = 0
        string.append('F')
    else:
        one_way += 1
        displacement -= 1
        if one_way == backward:
            forward_direction = not forward_direction
            one_way = 0
        string.append('B')

print(f'{"".join(string)} = {displacement}')

Result

4 2 13
FFFFBBFFFFBBF = 5

By the way, you can solve this problem without a loop, for arbitrary input numbers, by using the modulo operation.
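For completeness, here is the closed-form version hinted at in the last sentence: one forward/backward cycle has length forward + backward, so both the motion string and the displacement follow directly from divmod. A sketch; the formula assumes forward > backward, as the problem states:

def walk(forward, backward, total):
    cycle = forward + backward
    full, rem = divmod(total, cycle)

    # Build the motion string by repeating one cycle and trimming to length
    motion = (('F' * forward + 'B' * backward) * (full + 1))[:total]

    # Net displacement: whole cycles, plus the partial cycle at the end
    displacement = full * (forward - backward)
    displacement += min(rem, forward) - max(0, rem - forward)
    return motion, displacement

print(walk(4, 2, 13))  # ('FFFFBBFFFFBBF', 5)
print(walk(5, 3, 16))  # ('FFFFFBBBFFFFFBBB', 4)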
Random motion of a person generated with forward and backwards steps with proper iterations
My instructions: -A person walks a random amount of steps forward, and then a different random number of steps backwards. -The random steps are anywhere between 2 and 20 -The number of steps forward is always greater than the number of steps backwards -That motion of forward / backward random steps repeats itself again and again -The motion is consistent (the number of forward steps stays the same throughout the motion, and the number of backwards steps stays the same throughout the motion) -After making a specific amount of total steps the person is told to stop and will be a certain amount of steps forward from where they started. -The total number of steps is generated randomly and will be between 10 and 85 -You are writing a program to simulate the motion taken by the person. -Display that motion and the number of steps he ends away from where he started. For Example: -If the program generated the forward steps to be 4, and the backward steps to be 2, and the total number of steps to be 13, your program would display: FFFFBBFFFFBBF = 5 Steps from the start -If the program generated the forward steps to be 5, and the backward steps to be 3, and the total steps to be 16, your program would display FFFFFBBBFFFFFBBB = 4 Steps from the start import random x= random.randint(1,10) y= random.randint(1,10) total= random.randint(10,85) while True: counter = 2 while counter < total: print("F", end="") counter = counter + 1 total = total - 1 if total == 0 or counter > total: break counter = 2 while counter < total: print("B", end="") counter = counter + 1 total = total - 1 print("\nThe total amount of steps is", total) if total == 0 or counter > total: break I tried making variables for total and x,y and adding and subtracting but I can't find out what I'm doing wrong. I broke the steps forward and backward into x,y. 1-10 each. My loop only works sometimes, printing the steps and the statement of the total steps. Most of the time it just prints FFFFFFF with no statement whatsoever. It also only prints stuff like FFFFFBBBBB and not FFFFBBFFFFBB etc. I'm in need of guidance or corrections on my code.
[ "First of all, why do you need \"infinite\" loop if you know the total number of steps in advance? We can replace it with for loop that runs the exact number of steps.\nWhile inside the loop we want to keep track of several things:\n\ncurrent direction (boolean - true if moving forward)\ntotal displacement (number of forward steps minus backward steps)\ncounter of steps in current direction\n\nHere is the code that implements the above in a very straightforward (not very optimized) fashion for clarity.\nimport random\n\n# backward cannot be greater than 9\n# because it will break condition that forward should be greater\nbackward = random.randint(1, 9)\n\n# forward has to be greater than backward\nforward = random.randint(backward + 1, 10)\ntotal = random.randint(10, 85)\n\n# test for specific values\n# forward, backward, total = 4, 2, 13\n\nprint(f\"{forward} {backward} {total}\")\n\nforward_direction = True\ndisplacement = 0\none_way = 0\nstring = []\n\nfor i in range(total):\n if forward_direction:\n one_way += 1\n displacement += 1\n # do we need to change direction?\n if one_way == forward:\n forward_direction = not forward_direction\n one_way = 0\n string.append('F')\n else:\n one_way += 1\n displacement -= 1\n if one_way == backward:\n forward_direction = not forward_direction\n one_way = 0\n string.append('B')\n\nprint(f'{\"\".join(string)} = {displacement}')\n\nResult\n\n4 2 13\nFFFFBBFFFFBBF = 5\n\nBy the way, you can solve this problem without loop for arbitrary input numbers by using modulo operation.\n" ]
[ 0 ]
[]
[]
[ "iteration", "loops", "python" ]
stackoverflow_0074540770_iteration_loops_python.txt
Q: Time out waiting for launcher to connect in VS Code
I was debugging Python in VS Code. The following is the launch.json file:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "python": "${command:python.interpreterPath}",
            "program": "${file}",
            "cwd": "${workspaceFolder}",
            "env": {},
            "envFile": "${workspaceFolder}/.env",
            "debugOptions":[
                "RedirectOutput"
            ],
            "console": "integratedTerminal"
        }
    ]
}

The following is the settings.json file:
{
    "python.pythonPath": "c:\\Users\\susan\\Documents\\PythonScripts\\venv\\Scripts\\python.exe",

    // to fix 'Timeout waiting for debugger connections'
    "python.terminal.activateEnvironment" : false
}

When I debug the Python script in VS Code, I get Time out waiting for launcher to connect and cannot debug the Python script. May I know how I can solve this issue?

A: It's very simple. Open the launch.json file and add the following into it:
{
    "name": "Python: Debug Console",
    "type": "python",
    "request": "launch",
    "program": "${file}",
    "console": "internalConsole"
}

Then save and exit it. Whatever you do, DO NOT clear the text already in there or else it may make it worse

A: If for whatever reason "internalConsole" isn't a good solution:
in your shell script:
 export PROCESS_SPAWN_TIMEOUT=30

or just hack the code directly (will be reverted if you update the extension):
.vscode-server/extensions/ms-python.python-2022.18.2/pythonFiles/lib/python/debugpy/adapter/launchers.py[161]:
change:
timeout=(None if sudo else common.PROCESS_SPAWN_TIMEOUT)

to:
timeout=30,
Time out waiting for launcher to connect in VS Code
I was debugging Python in VS Code. The following is the launch.json file:
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "stopOnEntry": false,
            "python": "${command:python.interpreterPath}",
            "program": "${file}",
            "cwd": "${workspaceFolder}",
            "env": {},
            "envFile": "${workspaceFolder}/.env",
            "debugOptions":[
                "RedirectOutput"
            ],
            "console": "integratedTerminal"
        }
    ]
}

The following is the settings.json file:
{
    "python.pythonPath": "c:\\Users\\susan\\Documents\\PythonScripts\\venv\\Scripts\\python.exe",

    // to fix 'Timeout waiting for debugger connections'
    "python.terminal.activateEnvironment" : false
}

When I debug the Python script in VS Code, I get Time out waiting for launcher to connect and cannot debug the Python script. May I know how I can solve this issue?
[ "Its very simple. Open the launch.json file and add the following into it:\n{\n \"name\": \"Python: Debug Console\",\n \"type\": \"python\",\n \"request\": \"launch\",\n \"program\": \"${file}\",\n \"console\": \"internalConsole\"\n}\n\nThen save and exit it. Whatever you do, DO NOT clear the text already in there or else it may make it worser\n", "If for whatever reason \"internalConsole\" isn't a good solution:\nin your shell script:\n export PROCESS_SPAWN_TIMEOUT=30\n\nor just hack the code directly (will be reverted if you update the extension):\n.vscode-server/extensions/ms-python.python-2022.18.2/pythonFiles/lib/python/debugpy/adapter/launchers.py[161]:\nchange:\ntimeout=(None if sudo else common.PROCESS_SPAWN_TIMEOUT)\n\nto:\ntimeout=30,\n\n" ]
[ 2, 0 ]
[]
[]
[ "connection_timeout", "python", "vscode_debugger" ]
stackoverflow_0071920044_connection_timeout_python_vscode_debugger.txt
Q: Splitting data frame values at a specific word
I have a column of string values:
df['V1']

Could you please speak a little bit more slowly
Could you please speak a little bit more slowly
Could you please speak a little bit more slowly
Could you please speak a little bit more slowly

I tried to use the following code but it also includes the word I want to split after:
split_list = re.findall(r'\bspeak.*\b', df['V1'])

I want to split each row at the exact same word into two columns. In this case it would be the word speak. I would like to end up with something like this:
df

V1                       V2
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly

A: Using the regex that bobble bubble provided to answer my own question:
import re

fac_list = []

for fac in sentences:  # sentences: an iterable of tokenised rows
    test_fac = ' '.join(fac)
    y = re.findall(r'^.*\bspeak\b|\S.* ', test_fac)
    fac_list.append(y)
print(fac_list)
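A pandas-native route that avoids the manual loop: str.extract with two capture groups splits each row at the first "speak" in one shot. A sketch, assuming every row contains the word exactly once; rows without it would come back as NaN:

import pandas as pd

df = pd.DataFrame({'V1': ['Could you please speak a little bit more slowly'] * 4})

# Group 1: everything up to and including 'speak'; group 2: the rest
df[['V1', 'V2']] = df['V1'].str.extract(r'^(.*?\bspeak\b)\s*(.*)$')
print(df.head(1))
# V1: 'Could you please speak'   V2: 'a little bit more slowly'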
Splitting data frame values at a specific word
I have a column of string values:
df['V1']

Could you please speak a little bit more slowly
Could you please speak a little bit more slowly
Could you please speak a little bit more slowly
Could you please speak a little bit more slowly

I tried to use the following code but it also includes the word I want to split after:
split_list = re.findall(r'\bspeak.*\b', df['V1'])

I want to split each row at the exact same word into two columns. In this case it would be the word speak. I would like to end up with something like this:
df

V1                       V2
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly
Could you please speak   a little bit more slowly
[ "Using the regex that bobble bubble provided to answer my own question:\nfac_list = []\n\nfor fac in list:\n test_fac = ' '.join(fac)\n y = re.findall(r'^.*\\bspeak\\b|\\S.* ', test_fac)\n fac_list.append(y)\nprint(fac_list)\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074541542_python_regex.txt
Q: How to re-prompt when user input is not one of the defined options
import random

def roll_dice():
    dice_drawing = {
        1:(
            "_________",
            "| 1 |",
            "| * |",
            "----------"
        ),
        2:(
            "__________",
            "| 2 |",
            "| * * |",
            "-----------"
        ),
        3:(
            "__________",
            "| 3 |",
            "| * * * |",
            "-----------"
        ),
        4:(
            "__________",
            "| 4 |",
            "| * * * * |",
            "-----------"
        ),
        5:(
            "__________",
            "| 5 * |",
            "| * * * * |",
            "-----------"
        ),
        6:(
            "__________",
            "| * 6 * |",
            "| * * * * |",
            "-----------"
        )
    }
    roll = input('Roll the dice Yes/No: ')
    while roll.lower() == 'yes'.lower():
        dice1 = random.randint(1,6)
        dice2 = random.randint(1,6)
        print('dice rolled: {} and {}'.format(dice1,dice2))
        print("\n".join(dice_drawing[dice1]))
        print("\n".join(dice_drawing[dice2]))
        roll = input('Roll the dice Yes/No: ')
        if roll not in roll:
            roll = input('Roll the dice Yes/No: ')

roll_dice()

I am not able to work out what to do if the user types something other than yes or no: I want the iteration to happen again, saying "invalid option, please type yes or no". This code works fine when the user types yes or no, but if they type some other keyword I want the loop to run again with that "invalid option, please type yes or no" message. How do I add this handling for input outside the defined yes/no options?
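On the validation itself: the check if roll not in roll can never trigger, because a string always contains itself, so that branch is dead code. A minimal re-prompt pattern that can replace each input call (a sketch; the message wording is just an example):

def ask_roll():
    roll = input('Roll the dice Yes/No: ').strip().lower()
    while roll not in ('yes', 'no'):
        print('Invalid option, please type Yes or No')
        roll = input('Roll the dice Yes/No: ').strip().lower()
    return roll

# Usage inside roll_dice():
# roll = ask_roll()
# while roll == 'yes':
#     ... roll and draw the dice as above ...
#     roll = ask_roll()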
How to re-prompt when user input is not one of the defined options
import random

def roll_dice():
    dice_drawing = {
        1:(
            "_________",
            "| 1 |",
            "| * |",
            "----------"
        ),
        2:(
            "__________",
            "| 2 |",
            "| * * |",
            "-----------"
        ),
        3:(
            "__________",
            "| 3 |",
            "| * * * |",
            "-----------"
        ),
        4:(
            "__________",
            "| 4 |",
            "| * * * * |",
            "-----------"
        ),
        5:(
            "__________",
            "| 5 * |",
            "| * * * * |",
            "-----------"
        ),
        6:(
            "__________",
            "| * 6 * |",
            "| * * * * |",
            "-----------"
        )
    }
    roll = input('Roll the dice Yes/No: ')
    while roll.lower() == 'yes'.lower():
        dice1 = random.randint(1,6)
        dice2 = random.randint(1,6)
        print('dice rolled: {} and {}'.format(dice1,dice2))
        print("\n".join(dice_drawing[dice1]))
        print("\n".join(dice_drawing[dice2]))
        roll = input('Roll the dice Yes/No: ')
        if roll not in roll:
            roll = input('Roll the dice Yes/No: ')

roll_dice()

I am not able to work out what to do if the user types something other than yes or no: I want the iteration to happen again, saying "invalid option, please type yes or no". This code works fine when the user types yes or no, but if they type some other keyword I want the loop to run again with that "invalid option, please type yes or no" message. How do I add this handling for input outside the defined yes/no options?
[]
[]
[ "is this you are finding ?\nwhile True:\n roll = input('Roll the dice Yes/No: ')\n if roll.lower() == 'yes':\n \n ##\n ## do your stuff here \n ##\n\n elif roll.lower() =='no':\n break\n else :\n print('enter yes or no') \n\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074541867_python.txt
Q: ruamel.yaml dump contains "..." I need to sort the contents of a YAML file, and I'm learning ruamel.yaml to do this. Given this file example.yml: --- - job: name: this is the job name And this Python program: import sys import ruamel.yaml yaml = ruamel.yaml.YAML() # defaults to round-trip # Read YAML file with open('example.yml', 'r') as f: data = yaml.load(f) yaml.dump(data[0]['job']['name'], sys.stdout) I get the job name, but I also get an extra line with ...: $ python example.py this is the job name ... $ I wasn't expecting to see the ... so I'm a tad confused. Where is it coming from and why? A: It's a YAML boundary marker. It's not entirely clear what exactly you are hoping to produce here. If you don't specifically want a YAML document on output, probably just print the thing. I'm guessing you are in the middle of debugging something, because this seems unrelated to the actual task you claim to be working on. If you want to sort the YAML file internally, the PyYAML module does that by default. If you need it for Ruamel specifically, ruamel.yaml equivalent of sort_keys? has a solution.
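Some background on that marker: ... is YAML's document end marker, and ruamel.yaml emits it when the top-level node of the dumped document is a bare scalar (here, the job-name string). If the goal is just the value rather than a YAML document, print it directly; dumping a mapping also avoids the marker. A sketch reusing the objects loaded above:

name = data[0]['job']['name']
print(name)  # this is the job name   (no trailing '...')

# Dumping a mapping instead of a bare scalar:
yaml.dump({'name': name}, sys.stdout)  # name: this is the job name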
ruamel.yaml dump contains "..."
I need to sort the contents of a YAML file, and I'm learning ruamel.yaml to do this. Given this file example.yml: --- - job: name: this is the job name And this Python program: import sys import ruamel.yaml yaml = ruamel.yaml.YAML() # defaults to round-trip # Read YAML file with open('example.yml', 'r') as f: data = yaml.load(f) yaml.dump(data[0]['job']['name'], sys.stdout) I get the job name, but I also get an extra line with ...: $ python example.py this is the job name ... $ I wasn't expecting to see the ... so I'm a tad confused. Where is it coming from and why?
[ "It's a YAML boundary marker.\nIt's not entirely clear what exactly you are hoping to produce here. If you don't specifically want a YAML document on output, probably just print the thing. I'm guessing you are in the middle of debugging something, because this seems unrelated to the actual task you claim to be working on.\nIf you want to sort the YAML file internally, the PyYAML module does that by default. If you need it for Ruamel specifically, ruamel.yaml equivalent of sort_keys? has a solution.\n" ]
[ 2 ]
[]
[]
[ "python", "ruamel.yaml", "yaml" ]
stackoverflow_0074541169_python_ruamel.yaml_yaml.txt
Q: compare two dictionaries key by key I have two python dictionaries like below : d1 ={'k1':{'a':100}, 'k2':{'b':200}, 'k3':{'b':300}, 'k4':{'c':400}} d2 ={'k1':{'a':101}, 'k2':{'b':200}, 'k3':{'b':302}, 'k4':{'c':399}} I want to compare same keys and find out the difference like below: {'k1':{'diff':1}, 'k2':{'diff':0}, 'k3':{'diff':2}, 'k4':{'diff':1}} This is guaranteed that both of the input dictionaries have same keys. A: source: d1 = {'k1': {'a': 100}, 'k2': {'b': 200}, 'k3': {'b': 300}, 'k4': {'c': 400}} d2 = {'k1': {'a': 101}, 'k2': {'b': 200}, 'k3': {'b': 302}, 'k4': {'c': 399}} d3 = {} for k in d1: d_tmp = { "diff": abs(list(d1[k].values())[0] - list(d2[k].values())[0]) } d3[k] = d_tmp print(d3) output: {'k1': {'diff': 1}, 'k2': {'diff': 0}, 'k3': {'diff': 2}, 'k4': {'diff': 1}} A: You could perform something similar to this, where you iterate over all the keys in d1 and check their corresponding values in d2. You need a second inner loop that is responsible for comparing the appropriate inner key. d2 ={'k1':{'a':101}, 'k2':{'b':200}, 'k3':{'b':302}, 'k4':{'c':399}} output = {} for k, v in d1.items(): for k2, v2 in v.items(): output[k] = {'diff': abs(d2[k][k2] - v2)}
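Both answers generalise to a dict comprehension; since each inner dict here holds exactly one value (as the inputs guarantee), next(iter(...)) pulls it out regardless of its key. A sketch under that single-value assumption:

d1 = {'k1': {'a': 100}, 'k2': {'b': 200}, 'k3': {'b': 300}, 'k4': {'c': 400}}
d2 = {'k1': {'a': 101}, 'k2': {'b': 200}, 'k3': {'b': 302}, 'k4': {'c': 399}}

diff = {
    k: {'diff': abs(next(iter(d1[k].values())) - next(iter(d2[k].values())))}
    for k in d1
}
print(diff)
# {'k1': {'diff': 1}, 'k2': {'diff': 0}, 'k3': {'diff': 2}, 'k4': {'diff': 1}}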
compare two dictionaries key by key
I have two python dictionaries like below : d1 ={'k1':{'a':100}, 'k2':{'b':200}, 'k3':{'b':300}, 'k4':{'c':400}} d2 ={'k1':{'a':101}, 'k2':{'b':200}, 'k3':{'b':302}, 'k4':{'c':399}} I want to compare same keys and find out the difference like below: {'k1':{'diff':1}, 'k2':{'diff':0}, 'k3':{'diff':2}, 'k4':{'diff':1}} This is guaranteed that both of the input dictionaries have same keys.
[ "source:\nd1 = {'k1': {'a': 100}, 'k2': {'b': 200}, 'k3': {'b': 300}, 'k4': {'c': 400}}\nd2 = {'k1': {'a': 101}, 'k2': {'b': 200}, 'k3': {'b': 302}, 'k4': {'c': 399}}\n\nd3 = {}\nfor k in d1:\n d_tmp = {\n \"diff\": abs(list(d1[k].values())[0] - list(d2[k].values())[0])\n }\n d3[k] = d_tmp\n\nprint(d3)\n\noutput:\n{'k1': {'diff': 1}, 'k2': {'diff': 0}, 'k3': {'diff': 2}, 'k4': {'diff': 1}}\n\n", "You could perform something similar to this, where you iterate over all the keys in d1 and check their corresponding values in d2. You need a second inner loop that is responsible for comparing the appropriate inner key.\nd2 ={'k1':{'a':101}, 'k2':{'b':200}, 'k3':{'b':302}, 'k4':{'c':399}}\n\noutput = {}\nfor k, v in d1.items():\n for k2, v2 in v.items():\n output[k] = {'diff': abs(d2[k][k2] - v2)}\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "python", "python_3.x" ]
stackoverflow_0074541901_dictionary_python_python_3.x.txt