Dataset fields:
content: string, length 85 to 101k
title: string, length 0 to 150
question: string, length 15 to 48k
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, length 35 to 137
Q: How can I get the start and end indices of a note in a volume graph? I am trying to make a program that tells me when a note has been pressed. I have the following notes exported as a .wav file (The C Major Scale 4 times with different rhythms, dynamics and in different octaves): I can get the volumes of my sound file using the following code: from scipy.io import wavfile def get_volume(file): sr, data = wavfile.read(file) if data.ndim > 1: data = data[:, 0] return data volumes = get_volume("FILE") Here is some information about the output: Max: 27851 Min: -25664 Mean: -0.7569383391943734 A Sample from the array: [ -7987 -8615 -8983 -9107 -9019 -8750 -8324 -7752 -7033 -6156 -5115 -3920 -2610 -1245 106 1377 2520 3515 4364 5077 5659 6113 6441 6639 6708 6662 6518 6288 5962 5525 4963 4265 3420 2418 1264 -27 -1429 -2901 -4388 -5814 -7101 -8186 -9028 -9614 -9955 -10077 -10012 -9785 -9401 -8846] And here is what I get when I plot the volumes array (x is the index, y is the volume): I want to get the indices of the start and end of the notes like the ones in the image (did it by hand, not accurate): When I looked at the data I realized that it is a 1d array and I also noticed that when a note gets louder or quieter it is not smooth. It is like a ZigZag, but there is still a trend. So basically I can't just get the gradients (slope) of each point. So I thought about grouping notes into batches and getting the average gradient there and thus doing the calculations with it, like so: def get_average_gradient(arr): # Calculates average gradient return sum([i - (sum(arr) / len(arr)) for i in arr]) / len(arr) def get_note_start_end(arr_size, batch_size, arr): # Finds start and end indices ranges = [] curr_range = [0] prev_slope = curr_slope = "NO SLOPE" has_ended = False for i, j in enumerate(arr): if j > 0: curr_slope = "INCREASING" elif j < 0: curr_slope = "DECREASING" else: curr_slope = "NO SLOPE" if prev_slope == "DECREASING" and not has_ended: if i == len(arr) - 1 or arr[i + 1] < 0: if curr_slope != "DECREASING": curr_range.append((i + 1) * batch_size + batch_size) ranges.append(curr_range) curr_range = [(i + 1) * batch_size + batch_size + 1] has_ended = True if has_ended and curr_slope == "INCREASING": has_ended = False prev_slope = curr_slope ranges[-1][-1] = arr_size - 1 return ranges def get_notes(batch_size, arr): # Gets the gradients of the batches out = [] for i in range(0, len(arr), batch_size): if i + batch_size > len(arr): gradient = get_average_gradient(arr[i:]) else: gradient = get_average_gradient(arr[i: i+batch_size]) # print(gradient, i) out.append(gradient) return get_note_start_end(len(arr), batch_size, out) notes = get_notes(128, volumes) The problem with this is that if the batch size is too small, then it returns the indices of small peaks, which aren't a note on their own. If the batch size is too big, then the program misses the start and end indices. I also tried to get the notes by using the silence. Here is the code I used: from pydub import AudioSegment, silence audio = intro = AudioSegment.from_wav("C - Major - Test.wav") dBFS = audio.dBFS notes = silence.detect_nonsilent(audio, min_silence_len=50, silence_thresh=dBFS-10) This worked the best, but it still wasn't good enough. Here is what I got: It got some notes pretty well, but it wasn't able to identify notes accurately if the notes themselves didn't become very quiet before a different one was played (like in the second scale and in the fourth scale). 
I have been thinking about this problem for days and I have basically tried most if not all of the good(?) ideas I had. I am new to analysing audio files. Maybe I am using the wrong data to do what I want to do. Maybe I need to use the frequency data (I tried getting it, but couldn't make sense of it.) Frequency code: from scipy.fft import * from scipy.io import wavfile import matplotlib.pyplot as plt def get_freq(file, start_time, end_time): sr, data = wavfile.read(file) if data.ndim > 1: data = data[:, 0] else: pass # Fourier Transform N = len(data) yf = rfft(data) xf = rfftfreq(N, 1 / sr) return xf, yf FILE = "C - Major - Test.wav" plt.plot(*get_freq(FILE, 0, 10)) plt.show() And the frequency graph: And here is the .wav file: https://drive.google.com/file/d/1CERH-eovu20uhGoV1_O3B2Ph-4-uXpiP/view?usp=sharing Any help is appreciated :) A: I think this is what you need: first you convert negative numbers into positive ones and smooth the line to eliminate noise; to find the lower peaks you work with the negative values. from scipy.io import wavfile import matplotlib.pyplot as plt from scipy.signal import find_peaks import numpy as np from scipy.signal import savgol_filter def get_volume(file): sr, data = wavfile.read(file) if data.ndim > 1: data = data[:, 0] return data v1 = abs(get_volume("test.wav")) #Smooth the curve volumes=savgol_filter(v1,10000 , 3) lv=volumes*-1 #find peaks peaks,_ = find_peaks(volumes,distance=8000,prominence=300) lpeaks,_= find_peaks(lv,distance=8000,prominence=300) # plot them plt.plot(volumes) plt.plot(peaks,volumes[peaks],"x") plt.plot(lpeaks,volumes[lpeaks],"o") plt.plot(np.zeros_like(volumes), "--", color="gray") plt.show() Plot with your test file, x marks the high peaks and o the lower peaks A: This article presents two python libraries (Aubio, librosa) to achieve what you need and includes examples of how to use them: How to Use Python to Detect Music Onsets by Lynn Zheng
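A minimal onset-detection sketch with librosa, the approach the second answer's linked article describes. The file name follows the question; the spectral-flux defaults and the sample-index conversion are assumptions, not the asker's method:

import librosa

# Load the recording; librosa resamples to 22050 Hz mono by default
y, sr = librosa.load("C - Major - Test.wav")
# Detect note onsets from the spectral-flux envelope; units="time" gives seconds
onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
# Convert seconds to sample indices at the loaded sample rate
onset_samples = (onset_times * sr).astype(int)
print(onset_times, onset_samples)

Unlike smoothed-volume peak picking, onset detection keys on spectral change, so back-to-back notes that never get quiet in between can still be separated.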
How can I get the start and end indices of a note in a volume graph?
I am trying to make a program that tells me when a note has been pressed. I have the following notes exported as a .wav file (The C Major Scale 4 times with different rhythms, dynamics and in different octaves): I can get the volumes of my sound file using the following code: from scipy.io import wavfile def get_volume(file): sr, data = wavfile.read(file) if data.ndim > 1: data = data[:, 0] return data volumes = get_volume("FILE") Here is some information about the output: Max: 27851 Min: -25664 Mean: -0.7569383391943734 A Sample from the array: [ -7987 -8615 -8983 -9107 -9019 -8750 -8324 -7752 -7033 -6156 -5115 -3920 -2610 -1245 106 1377 2520 3515 4364 5077 5659 6113 6441 6639 6708 6662 6518 6288 5962 5525 4963 4265 3420 2418 1264 -27 -1429 -2901 -4388 -5814 -7101 -8186 -9028 -9614 -9955 -10077 -10012 -9785 -9401 -8846] And here is what I get when I plot the volumes array (x is the index, y is the volume): I want to get the indices of the start and end of the notes like the ones in the image (did it by hand, not accurate): When I looked at the data I realized that it is a 1d array and I also noticed that when a note gets louder or quieter it is not smooth. It is like a ZigZag, but there is still a trend. So basically I can't just get the gradients (slope) of each point. So I thought about grouping notes into batches and getting the average gradient there and thus doing the calculations with it, like so: def get_average_gradient(arr): # Calculates average gradient return sum([i - (sum(arr) / len(arr)) for i in arr]) / len(arr) def get_note_start_end(arr_size, batch_size, arr): # Finds start and end indices ranges = [] curr_range = [0] prev_slope = curr_slope = "NO SLOPE" has_ended = False for i, j in enumerate(arr): if j > 0: curr_slope = "INCREASING" elif j < 0: curr_slope = "DECREASING" else: curr_slope = "NO SLOPE" if prev_slope == "DECREASING" and not has_ended: if i == len(arr) - 1 or arr[i + 1] < 0: if curr_slope != "DECREASING": curr_range.append((i + 1) * batch_size + batch_size) ranges.append(curr_range) curr_range = [(i + 1) * batch_size + batch_size + 1] has_ended = True if has_ended and curr_slope == "INCREASING": has_ended = False prev_slope = curr_slope ranges[-1][-1] = arr_size - 1 return ranges def get_notes(batch_size, arr): # Gets the gradients of the batches out = [] for i in range(0, len(arr), batch_size): if i + batch_size > len(arr): gradient = get_average_gradient(arr[i:]) else: gradient = get_average_gradient(arr[i: i+batch_size]) # print(gradient, i) out.append(gradient) return get_note_start_end(len(arr), batch_size, out) notes = get_notes(128, volumes) The problem with this is that if the batch size is too small, then it returns the indices of small peaks, which aren't a note on their own. If the batch size is too big, then the program misses the start and end indices. I also tried to get the notes by using the silence. Here is the code I used: from pydub import AudioSegment, silence audio = intro = AudioSegment.from_wav("C - Major - Test.wav") dBFS = audio.dBFS notes = silence.detect_nonsilent(audio, min_silence_len=50, silence_thresh=dBFS-10) This worked the best, but it still wasn't good enough. Here is what I got: It got some notes pretty well, but it wasn't able to identify notes accurately if the notes themselves didn't become very quiet before a different one was played (like in the second scale and in the fourth scale). I have been thinking about this problem for days and I have basically tried most if not all of the good(?) ideas I had. 
I am new to analysing audio files. Maybe I am using the wrong data to do what I want to do. Maybe I need to use the frequency data (I tried getting it, but couldn't make sense of it.) Frequency code: from scipy.fft import * from scipy.io import wavfile import matplotlib.pyplot as plt def get_freq(file, start_time, end_time): sr, data = wavfile.read(file) if data.ndim > 1: data = data[:, 0] else: pass # Fourier Transform N = len(data) yf = rfft(data) xf = rfftfreq(N, 1 / sr) return xf, yf FILE = "C - Major - Test.wav" plt.plot(*get_freq(FILE, 0, 10)) plt.show() And the frequency graph: And here is the .wav file: https://drive.google.com/file/d/1CERH-eovu20uhGoV1_O3B2Ph-4-uXpiP/view?usp=sharing Any help is appreciated :)
[ "I think this is what you need:\nfirst you convert negative numbers into positive ones and smooth the line to eliminate noise; to find the lower peaks you work with the negative values.\nfrom scipy.io import wavfile\nimport matplotlib.pyplot as plt\nfrom scipy.signal import find_peaks\nimport numpy as np\nfrom scipy.signal import savgol_filter\n\ndef get_volume(file):\n sr, data = wavfile.read(file)\n if data.ndim > 1:\n data = data[:, 0]\n return data\n\nv1 = abs(get_volume(\"test.wav\"))\n#Smooth the curve\nvolumes=savgol_filter(v1,10000 , 3)\nlv=volumes*-1\n#find peaks\npeaks,_ = find_peaks(volumes,distance=8000,prominence=300)\nlpeaks,_= find_peaks(lv,distance=8000,prominence=300)\n# plot them\nplt.plot(volumes)\nplt.plot(peaks,volumes[peaks],\"x\")\nplt.plot(lpeaks,volumes[lpeaks],\"o\")\nplt.plot(np.zeros_like(volumes), \"--\", color=\"gray\")\nplt.show()\n\n\n\n\nPlot with your test file, x marks the high peaks and o the lower peaks\n\n", "This article presents two python libraries (Aubio, librosa) to achieve what you need and includes examples of how to use them: How to Use Python to Detect Music Onsets by Lynn Zheng\n" ]
[ 1, 1 ]
[]
[]
[ "audio", "frequency", "pyaudio", "python", "volume" ]
stackoverflow_0074491739_audio_frequency_pyaudio_python_volume.txt
Q: getting standard deviation of the values in two different dataframes I have two DataFrames and I would like to find the standard deviation per rc_id for one of the columns, i.e. the impacted_users column, in these two dataframes and create a separate column with the name std with their standard deviation value df1 : data = {"timestamp":["2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29"], "rc_id":[296,296,296,296,296,100,100,100,100], "impacted_users":[1,87,44,8,5,2,7,11,30]} df1 = pd.DataFrame(data) df1 = df1.groupby(["timestamp","rc_id"]).agg({"impacted_users": sum} ).reset_index() df1: rc_id timestamp impacted_users 296 2022-10-29 145 100 2022-10-29 50 df2 : data1 = {"rc_id":[296,296,296,100,100,100], "impacted_users":[201,202,216,300,301,350]} df2 = pd.DataFrame(data1) which creates df2: rc_id impacted_users 296 201 296 202 296 216 100 300 100 301 100 350 Expected Output: id timestamp impacted_users std 296 2022-10-29 11:00:00 145 27.21 100 2022-10-29 11:00:00 50 117.36 What I would like to have is std, put as a separate column (just as an example of what values I am looking for from these columns): std(145, 201, 202, 216) std(50, 300, 301, 350) I am unable to come up with a strategy to get this standard dev. for values from different dataframes. I tried to concat the required values and then get the std by aggregation, but I guess there is a better way. A: IIUC, use concat with an aggregate std, but because pandas Series.std defaults to ddof=1, add the parameter ddof=0 for the expected output; last, append to df1: df1 = df1.groupby(["timestamp","rc_id"], as_index=False, sort=False)["impacted_users"].sum() df = (df1.join(pd.concat([df1, df2]) .groupby('rc_id')['impacted_users'].std(ddof=0).rename('std'), on='rc_id')) print (df) timestamp rc_id impacted_users std 0 2022-10-29 296 145 27.212130 1 2022-10-29 100 50 117.367745
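As a quick sanity check of those std values, the same population standard deviation falls out of numpy directly; the two value lists are copied from the question, and this verification is not part of the original thread:

import numpy as np

# rc_id 296: df1's summed row (145) plus the three df2 rows
print(np.std([145, 201, 202, 216]))  # 27.212130..., np.std uses ddof=0 by default
# rc_id 100
print(np.std([50, 300, 301, 350]))   # 117.367745...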
getting standard deviation of the values in two different dataframes
I have two DataFrames and I would like to find the standard deviation per rc_id for one of the columns, i.e. the impacted_users column, in these two dataframes and create a separate column with the name std with their standard deviation value df1 : data = {"timestamp":["2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29","2022-10-29"], "rc_id":[296,296,296,296,296,100,100,100,100], "impacted_users":[1,87,44,8,5,2,7,11,30]} df1 = pd.DataFrame(data) df1 = df1.groupby(["timestamp","rc_id"]).agg({"impacted_users": sum} ).reset_index() df1: rc_id timestamp impacted_users 296 2022-10-29 145 100 2022-10-29 50 df2 : data1 = {"rc_id":[296,296,296,100,100,100], "impacted_users":[201,202,216,300,301,350]} df2 = pd.DataFrame(data1) which creates df2: rc_id impacted_users 296 201 296 202 296 216 100 300 100 301 100 350 Expected Output: id timestamp impacted_users std 296 2022-10-29 11:00:00 145 27.21 100 2022-10-29 11:00:00 50 117.36 What I would like to have is std, put as a separate column (just as an example of what values I am looking for from these columns): std(145, 201, 202, 216) std(50, 300, 301, 350) I am unable to come up with a strategy to get this standard dev. for values from different dataframes. I tried to concat the required values and then get the std by aggregation, but I guess there is a better way.
[ "IIUC, use concat with an aggregate std, but because pandas Series.std defaults to ddof=1, add the parameter ddof=0 for the expected output; last, append to df1:\ndf1 = df1.groupby([\"timestamp\",\"rc_id\"], as_index=False, sort=False)[\"impacted_users\"].sum()\n \ndf = (df1.join(pd.concat([df1, df2])\n .groupby('rc_id')['impacted_users'].std(ddof=0).rename('std'), on='rc_id'))\nprint (df)\n timestamp rc_id impacted_users std\n0 2022-10-29 296 145 27.212130\n1 2022-10-29 100 50 117.367745\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074527590_pandas_python.txt
Q: Visual Studio Code - problem about running code I'm new and I started to learn Python some time ago, but today I have a small problem with running code in Visual Studio Code. When I try to run code I get this (screenshot): Can you explain why I got this? And how can I fix it? I tried nothing and I just expect a fast answer A: First of all, you are not running the code but debugging the code. What you show in the picture is just some PowerShell commands to debug the code. Because VS Code uses the system's built-in terminal (PowerShell or cmd), the execution command is displayed when you run or debug the code. The advantage of this is that the code can be run without hassle anywhere. If you don't want the terminal to display these commands, you can use the Code Runner extension to output the results in the OUTPUT panel.
Visual Studio Code - problem about running code
I'm new and I started to learn Python some time ago, but today I have a small problem with running code in Visual Studio Code. When I try to run code I get this (screenshot): Can you explain why I got this? And how can I fix it? I tried nothing and I just expect a fast answer
[ "First of all, you are not running the code but debugging the code. What you show in the picture is just some PowerShell commands to debug the code.\nBecause VS Code uses the system's built-in terminal (PowerShell or cmd), the execution command is displayed when you run or debug the code. The advantage of this is that the code can be run without hassle anywhere.\n\nIf you don't want the terminal to display these commands, you can use the Code Runner extension to output the results in the OUTPUT panel.\n\n" ]
[ 0 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074523988_python_visual_studio_code.txt
Q: How to put multiple user inputs in a text file? this is the code I have right now fname = input(">>Please Enter a file name followed by .txt ") def writedata(): i=0 for i in range(3): f = open(f"{fname}", 'w') stdname = input('>>\tStudent Name: \t') marks = input('>>\tMark for exam: \t') f.write(stdname) f.write("\n") f.write(marks) f.close() def main(): writedata() the output that is intended >> Please Enter a file name, followed by .txt: studentRecord.txt >> Enter record for student 1 in the format of [1. Name, 2. Mark]: >> Student Name: James White >> Mark for exam: 100 >> Enter record for student 2 in the format of [1. Name, 2. Mark]: >> Student Name: James Brown >> Mark for exam: 85 >> Enter record for student 3 in the format of [1. Name, 2. Mark]: >> Student Name: James King >> Mark for exam: 75 >> Student record writing completed! I tried the above code and only got the last user input in the text file. I was supposed to pass file name from def main() but I don't know how to do that, I kept getting unreachable error. Can someone please help me and explain what I'm doing wrong? Thank you for your time and consideration. A: Please take note of f = open(f"{fname}", 'w') You are using the w mode, which overwrites the file every time. Instead, use a+ mode, which appends to the file, and creates the file if it does not yet exist. A: def writedata(): fname = str(input(">> Please Enter a file name, followed by .txt: ")) f = open(f"{fname}","a+") for i in range(1, 4): print(f">> Enter record for student {i} in the format of [1. Name, 2. Mark]:") stdname = str(input(">> Student Name: ")) marks = str(input(">> Mark for exam: ")) f.write(stdname) f.write("\n") f.write(marks) f.write("\n") print("Student record writing completed!") f.close() def main(): writedata() if __name__ == '__main__': main() Thank you guys for your help! This is the answer I came up with.
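An alternative sketch that keeps 'w' mode but opens the file once, before the loop, so nothing gets overwritten. The prompts and the three-student count come from the question; the with-block is an assumption about style:

def writedata():
    fname = input(">> Please Enter a file name, followed by .txt: ")
    # One open in 'w' mode: the handle persists across iterations,
    # so each f.write lands after the previous one
    with open(fname, "w") as f:
        for i in range(1, 4):
            print(f">> Enter record for student {i} in the format of [1. Name, 2. Mark]:")
            stdname = input(">> Student Name: ")
            marks = input(">> Mark for exam: ")
            f.write(f"{stdname}\n{marks}\n")
    print(">> Student record writing completed!")

writedata()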
How to put multiple user inputs in a text file?
this is the code I have right now fname = input(">>Please Enter a file name followed by .txt ") def writedata(): i=0 for i in range(3): f = open(f"{fname}", 'w') stdname = input('>>\tStudent Name: \t') marks = input('>>\tMark for exam: \t') f.write(stdname) f.write("\n") f.write(marks) f.close() def main(): writedata() the output that is intended >> Please Enter a file name, followed by .txt: studentRecord.txt >> Enter record for student 1 in the format of [1. Name, 2. Mark]: >> Student Name: James White >> Mark for exam: 100 >> Enter record for student 2 in the format of [1. Name, 2. Mark]: >> Student Name: James Brown >> Mark for exam: 85 >> Enter record for student 3 in the format of [1. Name, 2. Mark]: >> Student Name: James King >> Mark for exam: 75 >> Student record writing completed! I tried the above code and only got the last user input in the text file. I was supposed to pass file name from def main() but I don't know how to do that, I kept getting unreachable error. Can someone please help me and explain what I'm doing wrong? Thank you for your time and consideration.
[ "Please take note of\n f = open(f\"{fname}\", 'w')\n\nYou are using the w mode, which overwrites the file every time. Instead, use a+ mode, which appends to the file, and creates the file if it does not yet exist.\n", "def writedata():\n fname = str(input(\">> Please Enter a file name, followed by .txt: \"))\n f = open(f\"{fname}\",\"a+\")\n for i in range(1, 4):\n print(f\">> Enter record for student {i} in the format of [1. Name, 2. Mark]:\")\n stdname = str(input(\">> Student Name: \"))\n marks = str(input(\">> Mark for exam: \"))\n f.write(stdname)\n f.write(\"\\n\")\n f.write(marks)\n f.write(\"\\n\")\n print(\"Student record writing completed!\")\n f.close()\ndef main():\n writedata()\nif __name__ == '__main__':\n main()\n\nThank you guys for your help! This is the answer I came up with.\n" ]
[ 0, 0 ]
[ "You are using the write (w) file method, which overwrites your file with any new data you pass. You need the append (a) file method, which will append to your file each time.\nThe BSD fopen manpage defines the file methods as follows:\nThe argument mode points to a string beginning with one of the following\n sequences (Additional characters may follow these sequences.):\n\n ``r'' Open text file for reading. The stream is positioned at the\n beginning of the file.\n\n ``r+'' Open for reading and writing. The stream is positioned at the\n beginning of the file.\n\n ``w'' Truncate file to zero length or create text file for writing.\n The stream is positioned at the beginning of the file.\n\n ``w+'' Open for reading and writing. The file is created if it does not\n exist, otherwise it is truncated. The stream is positioned at\n the beginning of the file.\n\n ``a'' Open for writing. The file is created if it does not exist. The\n stream is positioned at the end of the file. Subsequent writes\n to the file will always end up at the then current end of file,\n irrespective of any intervening fseek(3) or similar.\n\n ``a+'' Open for reading and writing. The file is created if it does not\n exist. The stream is positioned at the end of the file. Subse-\n quent writes to the file will always end up at the then current\n end of file, irrespective of any intervening fseek(3) or similar.\n\nYou could also look at python's documentation for some more information: https://docs.python.org/3/library/functions.html#open\n" ]
[ -2 ]
[ "python", "python_3.x" ]
stackoverflow_0074521467_python_python_3.x.txt
Q: EmptyDataError No columns to parse from file I am getting the error "EmptyDataError: No columns to parse from file" when I am reading the data from a csv file to a json file... I want to insert the data from the csv file into the json file A: Replace "/" with "\" in your path variable A: I found the error and used "delim_whitespace=True" in read_csv
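A sketch combining both answers' fixes; the file names are placeholders, and delim_whitespace assumes the input is whitespace-separated rather than comma-separated:

import pandas as pd

# EmptyDataError means read_csv found no parsable columns: typically an empty
# file, a wrong path, or a separator that never matches the data
df = pd.read_csv("input.csv", delim_whitespace=True)
df.to_json("output.json", orient="records")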
EmptyDataError No columns to parse from file
I am getting the error "EmptyDataError: No columns to parse from file" when I am reading the data from a csv file to a json file... I want to insert the data from the csv file into the json file
[ "Replace \"/\" with \"\\\" in your path variable\n", "I found the error and used \"delim_whitespace=True\" in read_csv\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074528070_pandas_python.txt
Q: How can I classify a string with a partial string and make a boolean column Say I have the 1st dataframe with the following strings a abcd dabcd qwerty oppoupou Then I have a 2nd dataframe with the following substrings column abc qw qaz I've been looking for code that can classify the 1st dataframe and check each row against all the elements in the 2nd dataframe with a true or false solution. For example, the first element, abcd, gets checked by the 2nd dataframe, and it contains abc so abcd is true. Then the second element is also true because it contains abc. And the third element is true because it contains qw. Etc. Then there would be this column with the 1st dataframe that would return: true, true, true, false I found this code, but this only covers the individual elements and not whole dataframes df["b"] = df["a"].str.contains("abc") Any suggestions for coding 2 different string dataframes for boolean? A: You need to join the values of the 'column' column in the second DataFrame with | for a regex OR: df["b"] = df["a"].str.contains('|'.join(df2['column'])) print (df) a b 0 abcd True 1 dabcd True 2 qwerty True 3 oppoupou False
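One caveat worth adding to the answer: if a substring can contain regex metacharacters (., +, parentheses), escape it before joining. A self-contained sketch with the question's data:

import re
import pandas as pd

df = pd.DataFrame({"a": ["abcd", "dabcd", "qwerty", "oppoupou"]})
df2 = pd.DataFrame({"column": ["abc", "qw", "qaz"]})

# re.escape keeps each substring literal inside the alternation pattern
pattern = "|".join(map(re.escape, df2["column"]))
df["b"] = df["a"].str.contains(pattern)
print(df)  # True, True, True, False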
How can I classify a string with a partial string and make a boolean column
Say I have the 1st dataframe with the following strings a abcd dabcd qwerty oppoupou Then I have a 2nd dataframe with the following substrings column abc qw qaz I've been looking for code that can classify the 1st dataframe and check each row against all the elements in the 2nd dataframe with a true or false solution. For example, the first element, abcd, gets checked by the 2nd dataframe, and it contains abc so abcd is true. Then the second element is also true because it contains abc. And the third element is true because it contains qw. Etc. Then there would be this column with the 1st dataframe that would return: true, true, true, false I found this code, but this only covers the individual elements and not whole dataframes df["b"] = df["a"].str.contains("abc") Any suggestions for coding 2 different string dataframes for boolean?
[ "You need to join the values of the 'column' column in the second DataFrame with | for a regex OR:\ndf[\"b\"] = df[\"a\"].str.contains('|'.join(df2['column']))\nprint (df)\n a b\n0 abcd True\n1 dabcd True\n2 qwerty True\n3 oppoupou False\n\n" ]
[ 2 ]
[]
[]
[ "boolean", "dataframe", "pandas", "python", "string" ]
stackoverflow_0074528289_boolean_dataframe_pandas_python_string.txt
Q: How to split numbers in string type column? I have a dataframe with column df['EVENT_DTL'] that looks like this; 1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** 2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 3. ๋ฐœ๊ฒฌ์žฅ์†Œ : 1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : 2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : ์กฐ์น˜(์‚ฌ์œ  ํฌํ•จ) : 4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / 5. ๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his apartment 7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason 8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ 9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression 10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ 11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ 12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ 13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ 14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง 15. ์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง NOTE: The above is one line, not a separate line. I'm just displaying it for your convenience. I want to spilt 1. 2. 3. โ€ฆ 15. and append "\n" before the numbers. Desired output looks like this: \n1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** \n2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 \n3. ๋ฐœ๊ฒฌ์žฅ์†Œ : \n1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : \n2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : ์กฐ์น˜(์‚ฌ์œ  ํฌํ•จ) : \n4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / \n5. ๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ \n6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his apartment \n7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason \n8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ \n9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression \n10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ \n11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ \n12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ \n13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ \n14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง \n15. 
์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง I tried this (note: there are some rows that are already starts with \n): import re df3 = df.loc[~df.EVENT_DTL.str.contains('\n',na=False),'EVENT_DTL'] re.split('(?<=1.|(?<=2.||(?<=3.|(?<=1\)|(?<=2)|(?<=4.|(?<=5.|(?<=6.|(?<=7.|(?<=8.|(?<=9.|(?<=10.|(?<=11.|(?<=12.|(?<=13.|(?<=14.|(?<=15.',df3) but it cause the error such as (sorry for the long code): error Traceback (most recent call last) <ipython-input-20-3b8b06001e11> in <module> 2 3 df3 = df.loc[~df.EVENT_DTL.str.contains('\n',na=False),'EVENT_DTL'] ----> 4 re.split('(?<=1.|(?<=2.||(?<=3.|(?<=1\)|(?<=2)|(?<=4.|(?<=5.|(?<=6.|(?<=7.|(?<=8.|(?<=9.|(?<=10.|(?<=11.|(?<=12.|(?<=13.|(?<=14.|(?<=15.',df3) 35 frames /usr/lib/python3.7/re.py in split(pattern, string, maxsplit, flags) 213 and the remainder of the string is returned as the final element 214 of the list.""" --> 215 return _compile(pattern, flags).split(string, maxsplit) 216 217 def findall(pattern, string, flags=0): /usr/lib/python3.7/re.py in _compile(pattern, flags) 286 if not sre_compile.isstring(pattern): 287 raise TypeError("first argument must be string or compiled pattern") --> 288 p = sre_compile.compile(pattern, flags) 289 if not (flags & DEBUG): 290 if len(_cache) >= _MAXCACHE: /usr/lib/python3.7/sre_compile.py in compile(p, flags) 762 if isstring(p): 763 pattern = p --> 764 p = sre_parse.parse(p, flags) 765 else: 766 pattern = None /usr/lib/python3.7/sre_parse.py in parse(str, flags, pattern) 922 923 try: --> 924 p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0) 925 except Verbose: 926 # the VERBOSE flag was switched on inside the pattern. to be /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = 
_parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: 
/usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 734 if not sourcematch(")"): 735 raise source.error("missing ), unterminated subpattern", --> 736 source.tell() - start) 737 if char == "=": 738 subpatternappend((ASSERT, (dir, p))) error: missing ), unterminated subpattern at position 119 A: df['EVENT_DTL'] = "\n" + df['EVENT_DTL'].astype(str) A: You can use replace in pandas with setting regex=True: df['EVENT_DTL'].replace(r"(\d+[\.|\)] )", r"\n\1", regex=True) The regex will match any subsequences starting with a number (\d+) with either a . or ) afterwards ([\.|\)]) and then a space. It will replace this subsequence with "\n" added to the subsequence itself (see capture groups). A more detailed explanation for the regex can be found here: https://regex101.com/r/2peTg4/1 Result of applying the regex and splitting on "\n", i.e.: df['EVENT_DTL'].replace(r"( \d+[\.|\)] )", r"\n\1", regex=True).str.split("\n").explode() 1 1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** 2 2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 3 3. ๋ฐœ๊ฒฌ์žฅ์†Œ : 4 1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : 5 2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ... 6 4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / 7 5. 
๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ 8 6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his ap... 9 7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason 10 8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ 11 9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression 12 10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ 13 11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ 14 12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ 15 13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ 16 14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง 17 15. ์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง Name: EVENT_DTL, dtype: object A: I think you are looking for any place where a line starts with a digit, and you want to add a special string there, before the digit. It isn't clear to me if you want to add a newline, or want to add a slash followed by an n to it there. This will add a newline. result = re.sub(r"^(\d)", r"\n\1", df3, flags=re.MULTILINE)) print(result) This will add a "\n" as a two-character string. result = re.sub(r"^(\d)", r"\n\1", df3, flags=re.MULTILINE)) print(result) This works by searching for a newline (indicated by ^) followed by any digit (\d), and then substituting it with "\n" followed by the originally matched digit (\1 - the first matched "group")
How to split numbers in string type column?
I have a dataframe with column df['EVENT_DTL'] that looks like this; 1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** 2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 3. ๋ฐœ๊ฒฌ์žฅ์†Œ : 1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : 2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : ์กฐ์น˜(์‚ฌ์œ  ํฌํ•จ) : 4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / 5. ๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his apartment 7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason 8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ 9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression 10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ 11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ 12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ 13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ 14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง 15. ์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง NOTE: The above is one line, not a separate line. I'm just displaying it for your convenience. I want to spilt 1. 2. 3. โ€ฆ 15. and append "\n" before the numbers. Desired output looks like this: \n1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** \n2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 \n3. ๋ฐœ๊ฒฌ์žฅ์†Œ : \n1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : \n2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : ์กฐ์น˜(์‚ฌ์œ  ํฌํ•จ) : \n4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / \n5. ๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ \n6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his apartment \n7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason \n8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ \n9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression \n10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ \n11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ \n12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ \n13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ \n14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง \n15. 
์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง I tried this (note: there are some rows that are already starts with \n): import re df3 = df.loc[~df.EVENT_DTL.str.contains('\n',na=False),'EVENT_DTL'] re.split('(?<=1.|(?<=2.||(?<=3.|(?<=1\)|(?<=2)|(?<=4.|(?<=5.|(?<=6.|(?<=7.|(?<=8.|(?<=9.|(?<=10.|(?<=11.|(?<=12.|(?<=13.|(?<=14.|(?<=15.',df3) but it cause the error such as (sorry for the long code): error Traceback (most recent call last) <ipython-input-20-3b8b06001e11> in <module> 2 3 df3 = df.loc[~df.EVENT_DTL.str.contains('\n',na=False),'EVENT_DTL'] ----> 4 re.split('(?<=1.|(?<=2.||(?<=3.|(?<=1\)|(?<=2)|(?<=4.|(?<=5.|(?<=6.|(?<=7.|(?<=8.|(?<=9.|(?<=10.|(?<=11.|(?<=12.|(?<=13.|(?<=14.|(?<=15.',df3) 35 frames /usr/lib/python3.7/re.py in split(pattern, string, maxsplit, flags) 213 and the remainder of the string is returned as the final element 214 of the list.""" --> 215 return _compile(pattern, flags).split(string, maxsplit) 216 217 def findall(pattern, string, flags=0): /usr/lib/python3.7/re.py in _compile(pattern, flags) 286 if not sre_compile.isstring(pattern): 287 raise TypeError("first argument must be string or compiled pattern") --> 288 p = sre_compile.compile(pattern, flags) 289 if not (flags & DEBUG): 290 if len(_cache) >= _MAXCACHE: /usr/lib/python3.7/sre_compile.py in compile(p, flags) 762 if isstring(p): 763 pattern = p --> 764 p = sre_parse.parse(p, flags) 765 else: 766 pattern = None /usr/lib/python3.7/sre_parse.py in parse(str, flags, pattern) 922 923 try: --> 924 p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0) 925 except Verbose: 926 # the VERBOSE flag was switched on inside the pattern. to be /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = 
_parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: 
/usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 728 if lookbehindgroups is None: 729 state.lookbehindgroups = state.groups --> 730 p = _parse_sub(source, state, verbose, nested + 1) 731 if dir < 0: 732 if lookbehindgroups is None: /usr/lib/python3.7/sre_parse.py in _parse_sub(source, state, verbose, nested) 418 while True: 419 itemsappend(_parse(source, state, verbose, nested + 1, --> 420 not nested and not items)) 421 if not sourcematch("|"): 422 break /usr/lib/python3.7/sre_parse.py in _parse(source, state, verbose, nested, first) 734 if not sourcematch(")"): 735 raise source.error("missing ), unterminated subpattern", --> 736 source.tell() - start) 737 if char == "=": 738 subpatternappend((ASSERT, (dir, p))) error: missing ), unterminated subpattern at position 119
[ "df['EVENT_DTL'] = \"\\n\" + df['EVENT_DTL'].astype(str)\n", "You can use replace in pandas with setting regex=True:\ndf['EVENT_DTL'].replace(r\"(\\d+[\\.|\\)] )\", r\"\\n\\1\", regex=True)\n\nThe regex will match any subsequences starting with a number (\\d+) with either a . or ) afterwards ([\\.|\\)]) and then a space. It will replace this subsequence with \"\\n\" added to the subsequence itself (see capture groups).\nA more detailed explanation for the regex can be found here: https://regex101.com/r/2peTg4/1\nResult of applying the regex and splitting on \"\\n\", i.e.:\ndf['EVENT_DTL'].replace(r\"( \\d+[\\.|\\)] )\", r\"\\n\\1\", regex=True).str.split(\"\\n\").explode()\n\n1 1. ๋ณ€์‚ฌ์ž ์ •๋ณด : Kim_******-1****** \n2 2. ๋ฐœ๊ฒฌ์ผ์‹œ : 2013๋…„05์›”18์ผ 13:00 \n3 3. ๋ฐœ๊ฒฌ์žฅ์†Œ : \n4 1) ์ˆ˜์‚ฌ๊ธฐ๋ก ์ƒ ์ฃผ์†Œ ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ ์ฃผ์†Œ : \n5 2) ์‹ค์ œ ์กฐ์‚ฌ์›์ด ์ž…๋ ฅํ•œ ์ฃผ์†Œ์ฃผ๋ฏผ๋“ฑ๋ก์ƒ ์ฃผ์†Œ : ์‹ค๊ฑฐ์ฃผ์ง€ ์ฃผ์†Œ : ์‹œ๋„(๋ฐœ๊ฒฌ)์žฅ์†Œ...\n6 4. ๋ฐœ๊ฒฌ์žฅ์†Œ ์ฝ”๋”ฉ์‚ฌ์œ  : ์žํƒ / \n7 5. ๋ฐฉ๋ฒ•/์ˆ˜๋‹จ : ๋ชฉ๋งค๋‹ฌ๊ธฐ\n8 6. ๋ฐœ๊ฒฌ๊ฒฝ์œ„ : 2013.5.18 13:00๊ฒฝ New York, in his ap...\n9 7. ์ฃผ์›์ธ ์ฝ”๋”ฉ์‚ฌ์œ  : Family reason \n10 8. ๊ธฐ๋ณธ๋ฐฐ๊ฒฝ์ •๋ณด : ์›๋‹จ๋„๋งค์—… / ์ž๋…€ ๋ฐ ์†์ฃผ ๊ณผ ๊ฑฐ์ฃผ ๊ฒฐํ˜ผ์ƒํƒœ_๋ณ„๊ฑฐ \n11 9. ์‚ฌํšŒ๊ฒฝ์ œ์ ์ƒํƒœ : Strong depression \n12 10. ์„ฑ๊ฒฉ : ์•Œ์ˆ˜์—†์Œ \n13 11. ๋Œ€์ธ๊ด€๊ณ„ : ๋Œ€์ธ๊ด€๊ณ„๋ฌธ์ œ_๋ชจ๋ฆ„,์นœ๊ตฌ ๊ด€๋ จ \n14 12. ์ •์„œ์ƒํƒœ : ์šฐ์šธํ•œ ๊ธฐ๋ถ„ ๊ด€์ฐฐ๋จ \n15 13. ๊ฒฝ์ฐฐ ์ตœ์ข…์ž์‚ดํŒ๋‹จ์œ ๋ฌด ๋ฐ ๋‚ด์šฉ : ์ž์‚ด_๊ฐ€์กฑ๊ด€๊ณ„๋ฌธ์ œ_ ๋ชฉ๋งค๋‹ฌ๊ธฐ \n16 14. ์ฝ”๋กœ๋‚˜์™€์˜ ๊ด€๋ จ์„ฑ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง \n17 15. ์ฝ”๋กœ๋‚˜์˜ ์ž์‚ด์˜ํ–ฅ ๋ฐ ์ฃผ์š”์ธ : ์—†์Œ_2020๋…„ ์ด์ „ ์‚ฌ๋ง\nName: EVENT_DTL, dtype: object\n\n", "I think you are looking for any place where a line starts with a digit, and you want to add a special string there, before the digit.\nIt isn't clear to me if you want to add a newline, or want to add a slash followed by an n to it there.\nThis will add a newline.\nresult = re.sub(r\"^(\\d)\", r\"\\n\\1\", df3, flags=re.MULTILINE))\nprint(result)\nThis will add a \"\\n\" as a two-character string.\nresult = re.sub(r\"^(\\d)\", r\"\\n\\1\", df3, flags=re.MULTILINE))\nprint(result)\nThis works by searching for a newline (indicated by ^) followed by any digit (\\d), and then substituting it with \"\\n\" followed by the originally matched digit (\\1 - the first matched \"group\")\n" ]
[ 0, 0, 0 ]
[]
[]
[ "numpy", "pandas", "python", "regex" ]
stackoverflow_0074528041_numpy_pandas_python_regex.txt
Q: How to assign variables to a value in text file and check if it satisfies a given condition? I have a file in.txt name="XYZ_PP_0" number="0x12" bytesize="4" info="0x0000001A" name="GK_LMP_2_0" number="0xA5" bytesize="8" info="0x00000000bbae321f" name="MP_LKO_1_0" number="0x356" bytesize="4" info="0x00000234" I need to check whether it satisfies the condition, that is, check if the info value of number="0x12" + 0x00000004 equals the info value of number="0x356". If it matches, print that the resulting value matches the given info value of number="0x356"; else print not matching. How can I do this? This is my current attempt: import re pattern = r'(number=\"\w+\").*(info=\"\w+\")' with open("in.txt", "rb") as fin: for line in fin: for match_number, match_info in re.findall(pattern, line): but this will simply extract the number and info value. A: Break it into steps. Look up how to read in a text file, line by line. You'll end up with a list of lines of this file. Figure out how to extract the value from the "number" field. A simple regular expression would serve you well here I think. [Optional] Cast this value to the correct data type for your problem. Do the comparison you're interested in. You can easily google the syntax for all of these I think. Edit: posted before there was any code in the original post. I'm not entirely sure what the question is anymore. Do you need help debugging? Edit 2: Taking another stab at this since I think you're asking for RegEx syntax. Change your RegEx pattern to have parentheses around the information you want to extract. A RegEx match for such a pattern will allow you to assign the values inside this parentheses to Python variables. See this partial example. import re pattern = r'number=(\"\w+\").*info=(\"\w+\")' s = 'name="XYZ_PP_0" number="0x12" bytesize="4" info="0x0000001A"' m = re.search(pattern, s) if m: number, info = m.groups() print("number is ", number) print("info is", info) # number is "0x12" # info is "0x0000001A"
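A sketch of the full check, extending the answer's pattern: collect the info value per number into a dict, then compare as integers. Opening in text mode (not "rb") keeps re matching against str; the condition and print strings follow the question:

import re

pattern = r'number="(\w+)".*?info="(\w+)"'
info_by_number = {}
with open("in.txt") as fin:  # text mode, unlike the question's "rb"
    for line in fin:
        m = re.search(pattern, line)
        if m:
            number, info = m.groups()
            info_by_number[number] = int(info, 16)  # parse the hex string to int

if info_by_number["0x12"] + 0x00000004 == info_by_number["0x356"]:
    print('the resulted value matches the given info value of number="0x356"')
else:
    print("not matching")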
How to assign variables to a value in text file and check if it satisfies a given condition?
I have a file in.txt name="XYZ_PP_0" number="0x12" bytesize="4" info="0x0000001A" name="GK_LMP_2_0" number="0xA5" bytesize="8" info="0x00000000bbae321f" name="MP_LKO_1_0" number="0x356" bytesize="4" info="0x00000234" I need to check whether it satisfies the condition, that is, check if the info value of number="0x12" + 0x00000004 equals the info value of number="0x356". If it matches, print that the resulting value matches the given info value of number="0x356"; else print not matching. How can I do this? This is my current attempt: import re pattern = r'(number=\"\w+\").*(info=\"\w+\")' with open("in.txt", "rb") as fin: for line in fin: for match_number, match_info in re.findall(pattern, line): but this will simply extract the number and info value.
[ "Break it into steps.\n\nLook up how to read in a text file, line by line. You'll end up with a list of lines of this file.\nFigure out how to extract the value from the \"number\" field. A simple regular expression would serve you well here I think.\n[Optional] Cast this value to the correct data type for your problem.\nDo the comparison you're interested in.\n\nYou can easily google the syntax for all of these I think.\nEdit: posted before there was any code in the original post. I'm not entirely sure what the question is anymore. Do you need help debugging?\nEdit 2: Taking another stab at this since I think you're asking for RegEx syntax.\nChange your RegEx pattern to have parentheses around the information you want to extract. A RegEx match for such a pattern will allow you to assign the values inside this parentheses to Python variables.\nSee this partial example.\nimport re\npattern = r'number=(\\\"\\w+\\\").*info=(\\\"\\w+\\\")'\ns = 'name=\"XYZ_PP_0\" number=\"0x12\" bytesize=\"4\" info=\"0x0000001A\"'\nm = re.search(pattern, s)\n\nif m:\n number, info = m.groups()\n print(\"number is \", number)\n print(\"info is\", info)\n# number is \"0x12\"\n# info is \"0x0000001A\"\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074528356_python_python_3.x.txt
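A complete sketch tying the answer's steps together, assuming in.txt has exactly the layout shown in the question; the file is opened in text mode so the regex works on str lines, and the hex strings are parsed with int(..., 16). With the sample values it prints "not matching", since 0x1A + 4 is not 0x234.

import re

values = {}  # maps each number attribute to its info value as an integer
with open("in.txt") as fin:
    for line in fin:
        m = re.search(r'number="(\w+)".*info="(\w+)"', line)
        if m:
            number, info = m.groups()
            values[number] = int(info, 16)

if values.get("0x12", 0) + 0x00000004 == values.get("0x356"):
    print('the resulted value matches with given info value of number="0x356"')
else:
    print("not matching")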
Q: How to crop an image given proportional coordinates with Python PIL? I have an image with dimension (1920x1080) with proportional coordinates provided with a description of the detected person region. I want to crop only the detected person from the image using provided proportional coordinates. I looked up PIL crop documentation and tried the following: Provided in the integration documentation: x0, y0 The x, y coordinates corresponding to the lower-right corner of the person detection box. They are proportional distances from the upper-left of the image. x1, y1 The x, y coordinates corresponding to the upper-left corner of the person detection box. They are proportional distances from the upper-left of the image. Sample integration description provided: def img_crop(url, box): box = { 'x0': 0.974, 'x1': 0.922, 'y0': 0.502, 'y1': 0.315 } img = Image.open(requests.get(url, stream=True).raw) h, w = img.size print(img.size) return img.crop((box['x0']*h, box['y0']*w, box['x1']*h, box['y1']*w)) This results in the following error ValueError: Coordinate 'right' is less than 'left' A: But your drawing contradicts your own description of what x0,y0,x1,y1 are. It is said (in a picture of text btw; it is preferable to avoid that) that x0,y0 is the lower right corner, and x1,y1 the upper left corner. Just invert x0,y0 and x1,y1. Also, note fyi, that coordinates system in PIL (and generally speaking in most image processing system. Since this is how images formats are also done), starts from the upper left corner. Like an English text: pixels are organized from left to right, and from top to bottom. EDIT: (answer to your comment) One way would be to really just swap them and replace your .crop line by return img.crop((box['x1']*h, box['y1']*w, box['x0']*h, box['y0']*w)) This would work in your code. Nevertheless, there are some other changes that are preferable. First of all, you call width of the image h, and height of the image w. Of course, it is not a problem from a python point of view, but it doesn't help readability (I surmise that you did so because when images are np.array, such as opencv images, to get w and h, you would h,w,_=img.shape. But PIL .size returns w first and h second. And then, you inverted w and h in the crop line to be consistent.) Secondly, it is quite strange to rely on the fact that x0 and y0 are the biggest x and y of the box, and x1, y1 are the smallest. It would be better to do the inversion in the calling code. You did not provide it, reason why I did not try to show correction: correction has to be done in code that is not provided. (You did provide a box, to override what is passed. So in that box you could do the swap as well) box = { 'x1': 0.974, 'x0': 0.922, 'y1': 0.502, 'y0': 0.315 } But the safest way, especially since you seem unsure about where all corners are, and taking into account that sometimes, x0 could be smaller than x1, while y0 is bigger than y1, is to compute which one is min, which one is max. Like this: from PIL import Image import matplotlib.pyplot as plt def img_crop(url, box): box = { 'x0': 0.216, 'x1': 0.419, 'y0': 0.237, 'y1': 0.697 } img = Image.open(requests.get(url, stream=True).raw) w, h = img.size print(img.size) xmin=min(box['x0'], box['x1']) xmax=max(box['x0'], box['x1']) ymin=min(box['y0'], box['y1']) ymax=max(box['y0'], box['y1']) return img.crop((xmin*w, ymin*h, xmax*w, ymax*h)) There, no problem. Just pass the two x and the two y in the order x,y,x,y without bothering about which x to send first and which y to send first.
On your picture, with my version of the box it gets
How to crop an image given proportional coordinates with Python PIL?
I have an image with dimension (1920x1080) with proportional coordinates provided with a description of the detected person region. I want to crop only the detected person from the image using provided proportional coordinates. I looked up PIL crop documentation and tried the following: Provided in the integration documentation: x0, y0 The x, y coordinates corresponding to the lower-right corner of the person detection box. They are proportional distances from the upper-left of the image. x1, y1 The x, y coordinates corresponding to the upper-left corner of the person detection box. They are proportional distances from the upper-left of the image. Sample integration description provided: def img_crop(url, box): box = { 'x0': 0.974, 'x1': 0.922, 'y0': 0.502, 'y1': 0.315 } img = Image.open(requests.get(url, stream=True).raw) h, w = img.size print(img.size) return img.crop((box['x0']*h, box['y0']*w, box['x1']*h, box['y1']*w)) This results in the following error ValueError: Coordinate 'right' is less than 'left'
[ "But your drawing contradict your own description of what x0,y0,x1,y1 are. It is said (in a picture of text btw; it is preferable to avoid that) that x0,y0 is the lower right corner, and x1,y1 the upper left corner.\nJust invert x0,y0 and x1,y1.\nAlso, note fyi, that coordinates system in PIL (and generally speaking in most image processing system. Since this is how images formats are also done), starts from the upper left corner. Like an English text: pixels are organized from left to right, and from top to bottom.\nEDIT: (answer to your comment)\nOne way would be to really just swap them and replace your .crop line by\n return img.crop((box['x1']*h, box['y1']*w, box['x0']*h, box['y0']*w))\n\nThis would work in your code. Nevertheless, there are some other changes that are preferable. First of all, you call width of the image h, and height of the image w. Of course, it is not a problem from a python point of view, but it doesn't help readibility (I surmise that you did so because when images are np.array, such as opencv images, to get w and h, you would h,w,_=img.shape. But PIL .size return w first and h second. And then, you inverted w and h in the crop line to be consistent.\nSecondly, it is quite strange to rely on the fact that x0 and y0 are the biggest x and y of the box, and x1, y1 are the smallest. It would be better to do the inversion in the calling code. You did not provide it, reason why I did not try to show correction: correction has to be done in code that is not provided. (You did provide a box, to override what is passed. So in that box you could do the swap as well)\n box = {\n 'x1': 0.974, \n 'x0': 0.922, \n 'y1': 0.502, \n 'y0': 0.315\n }\n\nBut the safest way, especially since you seem unsure about where all corners are, and taking into account that sometimes, x0 could be smaller than x1, which y0 is bigger than y1, is to compute which one is min, which one is max.\nLike this:\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\ndef img_crop(url, box):\n box = {\n 'x0': 0.216,\n 'x1': 0.419,\n 'y0': 0.237,\n 'y1': 0.697\n }\n img = Image.open(requests.get(url, stream=True).raw)\n w, h = img.size\n print(img.size)\n xmin=min(box['x0'], box['x1'])\n xmax=max(box['x0'], box['x1'])\n ymin=min(box['y0'], box['y1'])\n ymax=max(box['y0'], box['y1'])\n return img.crop((xmin*w, ymin*h, xmax*w, ymax*h))\n\nThere, no problem. Just pass the two x and the two y in the order x,y,x,y without bothering about which x to send first and which y to send first.\nOn your picture, with by version of box it gets\n\n" ]
[ 1 ]
[]
[]
[ "computer_vision", "image", "image_processing", "python", "python_imaging_library" ]
stackoverflow_0074528174_computer_vision_image_image_processing_python_python_imaging_library.txt
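The same corner-normalizing idea as a local-file sketch (the filenames are placeholders); sorted() hands crop() the (left, upper, right, lower) order it expects regardless of which corner is which:

from PIL import Image

img = Image.open("person.jpg")                 # placeholder input file
w, h = img.size
x0, y0, x1, y1 = 0.974, 0.502, 0.922, 0.315    # proportional corners from the question
left, right = sorted((x0 * w, x1 * w))
upper, lower = sorted((y0 * h, y1 * h))
cropped = img.crop((left, upper, right, lower))
cropped.save("person_cropped.jpg")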
Q: Setting function if cell contains string I had a function that worked fine - it checks the title of a cell for an ipy.datagrid and then sets the color of the cell based on the header def header_bg_color(cell): if cell.value in ['Portfolio -30%','Change -30%']: return '#f3722c' elif cell.value in ['Portfolio -20%','Change -20%']: return '#f8961e' elif cell.value in ['Portfolio -10%','Change -10%']: return '#f9844a' I have changed the name of my 'Portfolio -10%' column to 'Portfolio -10%' + igwd_change ...where igwd_change is a variable I define earlier. I thought that simply changing the line from elif cell.value in ['Portfolio -10%','Change -10%']: return '#f9844a' to elif cell.value in ['Portfolio -10%' + igwd_change,'Change -10%']: return '#f9844a' would work, but I get an error Py2VegaNameError: name 'igwd_change' is not defined, available variables are ['cell', 'default_value', 'index'], note that only a subset of Python is supported However igwd_change is defined (cell above this one has definitely been run) and I can call the variable in the cell after to check. Edited to show cell working as desired (Portfolio -10%) yet cell Portfolio 0% (-3.2%) which is Portfolio 0% + igwd_change not having the required vega function applied A: You need to pass the igwd_change variable into the header_bg_color function as a parameter def header_bg_color(cell): should be def header_bg_color(cell, igwd_change): Now when calling this function, make sure you pass the same variable header_bg_color(cell, igwd_change) or header_bg_color(cell, "any custom parameter you want here")
Setting function if cell contains string
I had a function that worked fine - it checks the title of a cell for an ipy.datagrid and then sets the color of the cell based on the header def header_bg_color(cell): if cell.value in ['Portfolio -30%','Change -30%']: return '#f3722c' elif cell.value in ['Portfolio -20%','Change -20%']: return '#f8961e' elif cell.value in ['Portfolio -10%','Change -10%']: return '#f9844a' I have changed the name of my 'Portfolio -10%' column to 'Portfolio -10%' + igwd_change ...where igwd_change is a variable I define earlier. I thought that simply changing the line from elif cell.value in ['Portfolio -10%','Change -10%']: return '#f9844a' to elif cell.value in ['Portfolio -10%' + igwd_change,'Change -10%']: return '#f9844a' would work, but I get an error Py2VegaNameError: name 'igwd_change' is not defined, available variables are ['cell', 'default_value', 'index'], note that only a subset of Python is supported However igwd_change is defined (cell above this one has definitely been run) and I can call the variable in the cell after to check. Edited to show cell working as desired (Portfolio -10%) yet cell Portfolio 0% (-3.2%) which is Portfolio 0% + igwd_change not having the required vega function applied
[ "you need to pass igwd_change variable inside the header_bg_color function as a parameter\ndef header_bg_color(cell):\n\nshould be\ndef header_bg_color(cell, igwd_change):\n\nNow when calling this function, make sure you pass the same variable\nheader_bg_color(cell, igwd_change)\n\nor\nheader_bg_color(cell, \"any custom parameter you want here\")\n\n" ]
[ 0 ]
[]
[]
[ "function", "python", "string" ]
stackoverflow_0074522406_function_python_string.txt
Q: Un-structured nested JSON to CSV using python I have a structured nested JSON file that I need to use as a data frame (or CSV) to extract insight from the data. Below is a sample of one part of the JSON. I have more than 1 million records with different details and features. What would be the right way to parse this as a structured table using Python? { "CRD" : { "FG" : "ZVX", "ZPN" : "04W05BA2A", "MATCH" : "exact", "COUNT" : 4, "SUMMARY" : { "ID" : "33772", "PATHID" : "10417" }, "DETAILS" : { "PARADATA" : { "FEATURES" : [ { "FEATURENAME" : "Laptop Value", "FEATUREVALUE" : "0.9 F", "FEATUREUNIT" : "", "FEATUREID" : "22", "FEATUREVALUEDETAILS" : { "VALUE" : "0.8", "SIGN" : "", "UNIT" : "F", "MULTIPLIER" : "p", "MULTIPLIERVALUE" : "9.0E-12" } }, { "FEATURENAME" : "Product weight", "FEATUREVALUE" : "", "FEATUREUNIT" : "mm", "FEATUREID" : "1372" }, { "FEATURENAME" : "Variable", "FEATUREVALUE" : "Fixed", "FEATUREUNIT" : "", "FEATUREID" : "138", "FEATUREVALUEDETAILS" : { "VALUE" : "Fixed", "SIGN" : "", "UNIT" : "", "MULTIPLIER" : "", "MULTIPLIERVALUE" : "1.0" } } ] } } } } A: you can use json_normalize: import json your_json=json.loads(your_json) #convert string to dict df = pd.json_normalize(your_json).explode('CRD.DETAILS.PARADATA.FEATURES').reset_index(drop=True) df = df.join(pd.json_normalize(df.pop('CRD.DETAILS.PARADATA.FEATURES'))).drop_duplicates() ''' | | CRD.FG | CRD.ZPN | CRD.MATCH | CRD.COUNT | CRD.SUMMARY.ID | CRD.SUMMARY.PATHID | FEATURENAME | FEATUREVALUE | FEATUREUNIT | FEATUREID | FEATUREVALUEDETAILS.VALUE | FEATUREVALUEDETAILS.SIGN | FEATUREVALUEDETAILS.UNIT | FEATUREVALUEDETAILS.MULTIPLIER | FEATUREVALUEDETAILS.MULTIPLIERVALUE | |---:|:---------|:----------|:------------|------------:|-----------------:|---------------------:|:---------------|:---------------|:--------------|------------:|:----------------------------|:---------------------------|:---------------------------|:---------------------------------|--------------------------------------:| | 0 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Laptop Value | 0.9 F | | 22 | 0.8 | | F | p | 9e-12 | | 1 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Product weight | | mm | 1372 | nan | nan | nan | nan | nan | | 2 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Variable | Fixed | | 138 | Fixed | | | | 1 | '''
Un-structured nested JSON to CSV using python
I have a structured nested JSON file that I need to use as a data frame (or CSV) to extract insight from the data. Below is a sample of one part of the JSON. I have more than 1 million records with different details and features. What would be the right way to parse this as a structured table using Python? { "CRD" : { "FG" : "ZVX", "ZPN" : "04W05BA2A", "MATCH" : "exact", "COUNT" : 4, "SUMMARY" : { "ID" : "33772", "PATHID" : "10417" }, "DETAILS" : { "PARADATA" : { "FEATURES" : [ { "FEATURENAME" : "Laptop Value", "FEATUREVALUE" : "0.9 F", "FEATUREUNIT" : "", "FEATUREID" : "22", "FEATUREVALUEDETAILS" : { "VALUE" : "0.8", "SIGN" : "", "UNIT" : "F", "MULTIPLIER" : "p", "MULTIPLIERVALUE" : "9.0E-12" } }, { "FEATURENAME" : "Product weight", "FEATUREVALUE" : "", "FEATUREUNIT" : "mm", "FEATUREID" : "1372" }, { "FEATURENAME" : "Variable", "FEATUREVALUE" : "Fixed", "FEATUREUNIT" : "", "FEATUREID" : "138", "FEATUREVALUEDETAILS" : { "VALUE" : "Fixed", "SIGN" : "", "UNIT" : "", "MULTIPLIER" : "", "MULTIPLIERVALUE" : "1.0" } } ] } } } }
[ "you can use json_normalize:\nimport json\nyour_json=json.loads(your_json) #convert string to dict\n\ndf = pd.json_normalize(your_json).explode('CRD.DETAILS.PARADATA.FEATURES').reset_index(drop=True)\ndf = df.join(pd.json_normalize(df.pop('CRD.DETAILS.PARADATA.FEATURES'))).drop_duplicates()\n'''\n| | CRD.FG | CRD.ZPN | CRD.MATCH | CRD.COUNT | CRD.SUMMARY.ID | CRD.SUMMARY.PATHID | FEATURENAME | FEATUREVALUE | FEATUREUNIT | FEATUREID | FEATUREVALUEDETAILS.VALUE | FEATUREVALUEDETAILS.SIGN | FEATUREVALUEDETAILS.UNIT | FEATUREVALUEDETAILS.MULTIPLIER | FEATUREVALUEDETAILS.MULTIPLIERVALUE |\n|---:|:---------|:----------|:------------|------------:|-----------------:|---------------------:|:---------------|:---------------|:--------------|------------:|:----------------------------|:---------------------------|:---------------------------|:---------------------------------|--------------------------------------:|\n| 0 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Laptop Value | 0.9 F | | 22 | 0.8 | | F | p | 9e-12 |\n| 1 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Product weight | | mm | 1372 | nan | nan | nan | nan | nan |\n| 2 | ZVX | 04W05BA2A | exact | 4 | 33772 | 10417 | Variable | Fixed | | 138 | Fixed | | | | 1 |\n'''\n\n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074528333_json_pandas_python.txt
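For the million-record case, a chunked sketch of the same approach; the file name is a placeholder and the input is assumed to be one big JSON array of objects shaped like the sample, flattened in batches and appended to CSV:

import json
import pandas as pd

with open("records.json") as f:   # hypothetical input file
    records = json.load(f)

batch_size = 10_000
for start in range(0, len(records), batch_size):
    batch = records[start:start + batch_size]
    df = pd.json_normalize(batch).explode("CRD.DETAILS.PARADATA.FEATURES").reset_index(drop=True)
    df = df.join(pd.json_normalize(df.pop("CRD.DETAILS.PARADATA.FEATURES")))
    # note: this assumes every batch produces the same set of columns
    df.to_csv("records.csv", mode="a", header=(start == 0), index=False)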
Q: What is the correct way of traversing the following tree pre-order? Given the following parse tree: In: from nltk.parse import CoreNLPParser from nltk.treeprettyprinter import TreePrettyPrinter parser = CoreNLPParser(url='http://localhost:9000') next(parser.raw_parse('What is the airspeed of an unladen swallow ?')).pretty_print() Out: ROOT | SBARQ __________________________|____________________________ | SQ | | ___|_________________ | | | NP | | | _____________|______________________ | | | | PP S | | | | ____|___ | | WHNP | NP | NP VP | | | ___|_____ | ___|_____ | | WP VBZ DT NN IN DT JJ VB . | | | | | | | | | What is the airspeed of an unladen swallow ? What is the correct way of traversing it in pre-order with and without the terminal nodes? So far my main issue is that I don't understand how to iterate through the tree. When I do : for e in parse_tree: print(e) I get: (ROOT (SBARQ (WHNP (WP What)) (SQ (VBZ is) (NP (NP (DT the) (NN airspeed)) (PP (IN of) (NP (DT an) (JJ unladen))) (S (VP (VB swallow))))) (. ?))) In other words, I can't access each branch of the tree. What is the correct way of traversing this structure? A: From your description 'What is the airspeed of an unladen swallow ?'. I think you want the leaf nodes, if I am correct. You should apply a DFS (pre-order), which will output the leaf nodes of the tree. Code (to print the leaf nodes): leafnodes=[] def leafnode(node): if not node: return if not node.left and not node.right: leafnodes.append(node.val) leafnode(node.left) leafnode(node.right) leafnode(root) print(leafnodes)
What is the correct way of traversing the following tree pre-order?
Given the following parse tree: In: from nltk.parse import CoreNLPParser from nltk.treeprettyprinter import TreePrettyPrinter parser = CoreNLPParser(url='http://localhost:9000') next(parser.raw_parse('What is the airspeed of an unladen swallow ?')).pretty_print() Out: ROOT | SBARQ __________________________|____________________________ | SQ | | ___|_________________ | | | NP | | | _____________|______________________ | | | | PP S | | | | ____|___ | | WHNP | NP | NP VP | | | ___|_____ | ___|_____ | | WP VBZ DT NN IN DT JJ VB . | | | | | | | | | What is the airspeed of an unladen swallow ? What is the correct way of traversing it in pre-order with and without the terminal nodes? So far my main issue is that I don't understand how to iterate through the tree. When I do : for e in parse_tree: print(e) I get: (ROOT (SBARQ (WHNP (WP What)) (SQ (VBZ is) (NP (NP (DT the) (NN airspeed)) (PP (IN of) (NP (DT an) (JJ unladen))) (S (VP (VB swallow))))) (. ?))) In other words, I can't access each branch of the tree. What is the correct way of traversing this structure?
[ "From your description 'What is the airspeed of an unladen swallow ?'. I think you wanted leaf node all the time if i am correct.! you should apply DFS(preorder) which will give output leaf nodes of the tree.\nCode for-:[To print leaf node]\nleafnodes=[]\n\ndef leafnode(node):\n if not node:\n return \n if not node.left and not node.right:\n leafnodes.append(node.val)\n leafnode(node.left)\n leafnode(node.right)\n\nleafnode(root)\n\nprint(leafnodes)\n\n" ]
[ 0 ]
[]
[]
[ "data_structures", "python", "tree" ]
stackoverflow_0071690744_data_structures_python_tree.txt
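For nltk.Tree objects specifically (the answer above sketches a binary tree with .left/.right, which nltk trees don't have), a recursive pre-order walk can look like this sketch; children of a Tree are themselves Tree objects or plain leaf strings, and parser is reused from the question's setup:

from nltk import Tree

def preorder(tree, include_leaves=True):
    """Yield constituent labels (and optionally terminals) in pre-order."""
    if isinstance(tree, Tree):
        yield tree.label()
        for child in tree:                     # left-to-right over children
            yield from preorder(child, include_leaves)
    elif include_leaves:
        yield tree                             # a terminal such as 'What'

parse_tree = next(parser.raw_parse('What is the airspeed of an unladen swallow ?'))
print(list(preorder(parse_tree)))                         # with terminal nodes
print(list(preorder(parse_tree, include_leaves=False)))   # labels only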
Q: How could I make this password get written in a text file I want to make a program that creates passwords and then writes them to a text file, but the problem is that the program only writes 1 password to the text file even though it generates more. How could I fix this? import random, time,sys #nombre = input("Plataforma: ") Simbolo = "*><＠＆％＄＃" letra = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" numeros = "1234567890" mayusculas = letra.lower() mayus,minus,nums,sim = True,True,True,False contraseña = "" if mayus: contraseña += letra if minus: contraseña += mayusculas if nums: contraseña += numeros if sim: contraseña += Simbolo largo = 20 cantidad = 10 while 1 == 1: for i in range(cantidad): contra = "".join(random.sample(contraseña,largo)) print(contra) contra = contra +"\n" parar = input().lower() with open("prueba.txt","a") as file: file.write(contra) if parar == "s": sys.exit() I tried doing a while loop and repeating the write function, but it didn't work; it just repeated the same password over and over again and didn't write a different password. A: The problem is that you are waiting for an input (parar = input().lower()) before you write the new password to the text file. Here is the working solution. import random, time,sys Simbolo = "*><＠＆％＄＃" letra = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" numeros = "1234567890" mayusculas = letra.lower() mayus,minus,nums,sim = True,True,True,False contraseña = "" if mayus: contraseña += letra if minus: contraseña += mayusculas if nums: contraseña += numeros if sim: contraseña += Simbolo largo = 20 cantidad = 10 while True: for i in range(cantidad): contra = "".join(random.sample(contraseña,largo)) print(contra) contra = contra +"\n" with open("prueba.txt","a") as file: file.write(contra) To stop the code from running, simply press "Ctrl + C" on your keyboard.
How could I make this password get written in a text file
I want to make a program that creates passwords and then writes them to a text file, but the problem is that the program only writes 1 password to the text file even though it generates more. How could I fix this? import random, time,sys #nombre = input("Plataforma: ") Simbolo = "*><＠＆％＄＃" letra = "ABCDEFGHIJKLMNOPQRSTUVWXYZ" numeros = "1234567890" mayusculas = letra.lower() mayus,minus,nums,sim = True,True,True,False contraseña = "" if mayus: contraseña += letra if minus: contraseña += mayusculas if nums: contraseña += numeros if sim: contraseña += Simbolo largo = 20 cantidad = 10 while 1 == 1: for i in range(cantidad): contra = "".join(random.sample(contraseña,largo)) print(contra) contra = contra +"\n" parar = input().lower() with open("prueba.txt","a") as file: file.write(contra) if parar == "s": sys.exit() I tried doing a while loop and repeating the write function, but it didn't work; it just repeated the same password over and over again and didn't write a different password.
[ "The problem is that you are waiting for an input (parar = input().lower()) before you write the new password to the text file.\nHere is the working solution.\nimport random, time,sys\n\nSimbolo = \"*><๏ผ ๏ผ†๏ผ…๏ผ„๏ผƒ\"\nletra = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"\nnumeros = \"1234567890\"\nmayusculas = letra.lower()\n\nmayus,minus,nums,sim = True,True,True,False\n\ncontraseรฑa = \"\"\n\nif mayus:\n contraseรฑa += letra\nif minus:\n contraseรฑa += mayusculas\nif nums:\n contraseรฑa += numeros\nif sim:\n contraseรฑa += Simbolo\n\nlargo = 20\ncantidad = 10\n\nwhile True:\n\n for i in range(cantidad):\n contra = \"\".join(random.sample(contraseรฑa,largo))\n print(contra)\n\n contra = contra +\"\\n\"\n\n with open(\"prueba.txt\",\"a\") as file:\n file.write(contra)\n\nTo stop the code from running, simply press \"Ctrl + C\" on your keyboard.\n" ]
[ 0 ]
[]
[]
[ "file", "passwords", "python" ]
stackoverflow_0074528317_file_passwords_python.txt
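A variant sketch that keeps the question's press-s-to-stop behavior, writing each password before prompting; it reuses the setup variables (contraseña, largo, cantidad) from the question, and the prompt string is an addition for clarity:

while True:
    for i in range(cantidad):
        contra = "".join(random.sample(contraseña, largo))
        print(contra)
        with open("prueba.txt", "a") as file:   # append before asking to stop
            file.write(contra + "\n")
    if input("Type s to stop: ").lower() == "s":
        break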
Q: os.environ not getting my environment variables I have a simple python app with this file directory: C:. ├───Sample Project │ ├───project │ │ ├───.vscode │ │ ├───bin │ │ ├───models │ │ ├───projects │ │ │ └───test │ │ └───utils │ └───venv Inside C:\Users\usr\Desktop\raicom\Sample Project\project is my project.env which contains: sample=hello sample2=world Inside C:\Users\usr\Desktop\raicom\Sample Project\project\.vscode is my settings.json which contains: { "python.envFile": "${workspaceFolder}/project.env" } Inside C:\Users\usr\Desktop\raicom\Sample Project\project\projects\test is a file named test.py which contains: import os print(os.environ.get('sample')) print(os.environ.get('sample2')) this should print my environment variables. When I run debug mode, it does just that. But when I click Run Python File, it outputs None in both cases: What could I be missing or doing wrong? Follow up question, why is it working in debug mode but not in the run python file mode? A: It works in debug mode because when you run it from debug mode the Current working directory is the project root directory, but when you right click and say run python file in terminal it runs it with the current working directory as the directory containing the python script. When it is run with the current working directory as the python script directory it doesn't take into account your .vscode settings. A solution is to use a module to load your .env file for example: python-dotenv A: Please use debug mode. Environment variable definitions files can be used for scenarios such as debugging and tool execution (including linters, formatters, IntelliSense, and testing tools), but aren't applied to the terminal. Read docs for more details.
os.environ not getting my environment variables
I have a simple python app with this file directory: C:. ├───Sample Project │ ├───project │ │ ├───.vscode │ │ ├───bin │ │ ├───models │ │ ├───projects │ │ │ └───test │ │ └───utils │ └───venv Inside C:\Users\usr\Desktop\raicom\Sample Project\project is my project.env which contains: sample=hello sample2=world Inside C:\Users\usr\Desktop\raicom\Sample Project\project\.vscode is my settings.json which contains: { "python.envFile": "${workspaceFolder}/project.env" } Inside C:\Users\usr\Desktop\raicom\Sample Project\project\projects\test is a file named test.py which contains: import os print(os.environ.get('sample')) print(os.environ.get('sample2')) this should print my environment variables. When I run debug mode, it does just that. But when I click Run Python File, it outputs None in both cases: What could I be missing or doing wrong? Follow up question, why is it working in debug mode but not in the run python file mode?
[ "It works in debug mode because when you run it from debug mode the Current working directory is the project root directory, but when you right click and say run python file in terminal it runs it with the current working directory as the directory containing the python script.\nWhen it is run with the current working directory as the python script directory it doesn't take into account your .vscode settings.\nA solution is to use a module to load your .env file for example: python-dotenv\n", "Please use debug mode.\nEnvironment variable definitions files can be used for scenarios such as debugging and tool execution (including linters, formatters, IntelliSense, and testing tools), but aren't applied to the terminal.\nRead docs for more details.\n" ]
[ 1, 0 ]
[]
[]
[ "environment_variables", "python", "virtualenv", "visual_studio_code" ]
stackoverflow_0074527980_environment_variables_python_virtualenv_visual_studio_code.txt
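A sketch of the python-dotenv route for the Run Python File case, resolving the env file relative to test.py so the working directory doesn't matter (pip install python-dotenv first):

import os
from pathlib import Path
from dotenv import load_dotenv

# test.py lives in project/projects/test, so parents[2] is the project folder
load_dotenv(Path(__file__).resolve().parents[2] / "project.env")

print(os.environ.get("sample"))   # hello
print(os.environ.get("sample2"))  # world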
Q: Removing pycache in git How can I remove existing and future pycache files from a git repository in Windows? The commands I found online are not working; for example, when I send the command "git rm -r --cached __pycache__" I get the error "pathspec '__pycache__' did not match any files". A: The __pycache__ folders that you are seeing are not in your current and future Git commits. Because of the way Git works internally—which Git forces you to know, at least if you're going to understand it—understanding this is a bit tricky, even once we get past the "directory / folder confusion" we saw in your comments. The right place to start, I believe, is at the top. Git isn't about files (or even files-and-folders / files-and-directories). Those new to Git see it as storing files, so they think it's about files, but that's just not true. Or, they note the importance of the ideas behind branches, and think that Git is about branches, and that too is not really true, because people confuse one kind of "branch" (that does matter) with branch names (which don't matter). The first thing to know, then, is that Git is really all about commits. This means that you really need to know: what a commit is, and what a commit does for you (these two overlap but are not identical). We won't really cover what a commit is here, for space reasons, but let's look at the main thing that one does for you: Each commit stores a full snapshot of every file. We now need a small digression into files and folders and how Git and your OS differ in terms of how they organize files. Your computer insists that a file has a name like file.ext and lives in a folder or directory—the two terms are interchangeable—such as to, which in turn lives in another folder such as path. This produces path/to/file.ext or, on Windows, path\to\file.ext. Git, by contrast, has only files, and their names always use forward slashes and include the slashes. The file named path/to/file.ext is literally just the file, with that name. But Git does understand that your computer demands the file-in-folder format, and will convert back and forth as needed. If Git needs to extract a file whose name is some/long/file/name.ext, Git will create folders some, some/long, and so on when it must, all automatically. The strange side effect of this is that because Git stores only the files, not the folders, Git is unable to store an empty folder. This distinction actually occurs in Git's index aka staging area, which we won't get into in any detail, but it explains the problem whose answers are given in How do I add an empty directory to a Git repository? In any case, commits in Git store files, using these path names. Each commit has a full copy of every file—but the files' contents are stored in a special, Git-ized, read-only, Git-only format in which the contents are de-duplicated. So if a million commits store one particular version of one particular file, there's really only one copy, shared between all million commits. Git can do this kind of sharing because, unlike regular files on your computer, files stored in a commit, in Git, literally can't be changed. Going back to the commits now: each commit contains a full snapshot of every file (that it had when you, or whoever, made the commit). But these files are read-only—they literally can't have their contents replaced, which is what enables that sharing—and only Git itself can even read them. This makes them useless for actually getting any work done. 
They're fine as archives, but no good for real work. The solution to this problem is simple (and the same as in almost all other version control systems): when you select some commit to work on / with, Git will extract the files from that commit. This creates ordinary files, in ordinary folders, in an ordinary area in which you can do your work (whether that's ordinary or substandard or exemplary work—that's all up to you, not to Git). What this means is that you do your work in a working tree or work-tree (Git uses these two terms interchangeably). More importantly, it means this: The files you see and work on / with are not in Git. They may have just been extracted by Git, from some commit. But now they're ordinary files and you use them without Git being aware of what you're doing. Since Git has extracted these files into ordinary folders, you can create new files and/or new folders if you like. When you run Python programs, Python itself will, at various times, create __pycache__ folders and stuff *.pyc and/or *.pyo files into them. Python does this without Git's knowledge or understanding. Because these files are generated by Python, based on your source, and just used to speed up Python, it's a good idea to avoid putting them into the commits. There's no need to save a permanent snapshot of these files, especially since the format and contents may depend on the specific Python version (e.g., Python 3.7 generates *.cpython-37.pyc files, Python 3.9 generates *.cpython-39.pyc files, and so on). So we tell Git two things: Don't complain about the existence of these particular untracked files in the working tree. When I use an en-masse "add everything" operation like git add ., don't add these files to the index / staging-area, so that they won't go into the next commit either. We generally do this with the (poorly named) .gitignore file. Listing a file name in a .gitignore does not make Git ignore it; instead, it has the effect of doing the two things I listed here. This uses the Git-specific term untracked file, which has a simple definition that has a complex back-story. An untracked file is simply any file in your working tree that is not currently in Git's index (staging area). Since we're not going to get into a discussion of Git's index here, we have to stop there for now, but the general idea is that we don't allow the __pycache__ files to get into the index, which keeps them untracked, which keeps Git from committing them, which keeps them from getting into Git's index. It's all a bit circular here, and if you accidentally do get these files into Git's index, that's when you need the git rm -r --cached __pycache__ command. Since that command is failing, it means you don't have the problem this command is meant to solve. That's good! A: Well, you don't need __pycache__ files in your git repositories and you'd better ignore all related files to it by adding __pycache__/ to your .gitignore file.
Removing pycache in git
How can I remove existing and future pycache files from a git repository in Windows? The commands I found online are not working; for example, when I send the command "git rm -r --cached __pycache__" I get the error "pathspec '__pycache__' did not match any files".
[ "The __pycache__ folders that you are seeing are not in your current and future Git commits. Because of the way Git works internallyโ€”which Git forces you to know, at least if you're going to understand itโ€”understanding this is a bit tricky, even once we get past the \"directory / folder confusion\" we saw in your comments.\nThe right place to start, I believe, is at the top. Git isn't about files (or even files-and-folders / files-and-directories). Those new to Git see it as storing files, so they think it's about files, but that's just not true. Or, they note the importance of the ideas behind branches, and think that Git is about branches, and that too is not really true, because people confuse one kind of \"branch\" (that does matter) with branch names (which don't matter). The first thing to know, then, is that Git is really all about commits.\nThis means that you really need to know:\n\nwhat a commit is, and\nwhat a commit does for you\n\n(these two overlap but are not identical). We won't really cover what a commit is here, for space reasons, but let's look at the main thing that one does for you: Each commit stores a full snapshot of every file.\n\nWe now need a small digression into files and folders and how Git and your OS differ in terms of how they organize files. Your computer insists that a file has a name like file.ext and lives in a folder or directoryโ€”the two terms are interchangeableโ€”such as to, which in turn lives in another folder such as path. This produces path/to/file.ext or, on Windows, path\\to\\file.ext.\nGit, by contrast, has only files, and their names always use forward slashes and include the slashes. The file named path/to/file.ext is literally just the file, with that name. But Git does understand that your computer demands the file-in-folder format, and will convert back and forth as needed. If Git needs to extract a file whose name is some/long/file/name.ext, Git will create folders some, some/long, and so on when it must, all automatically.\nThe strange side effect of this is that because Git stores only the files, not the folders, Git is unable to store an empty folder. This distinction actually occurs in Git's index aka staging area, which we won't get into in any detail, but it explains the problem whose answers are given in How do I add an empty directory to a Git repository?\nIn any case, commits in Git store files, using these path names. Each commit has a full copy of every fileโ€”but the files' contents are stored in a special, Git-ized, read-only, Git-only format in which the contents are de-duplicated. So if a million commits store one particular version of one particular file, there's really only one copy, shared between all million commits. Git can do this kind of sharing because, unlike regular files on your computer, files stored in a commit, in Git, literally can't be changed.\n\nGoing back to the commits now: each commit contains a full snapshot of every file (that it had when you, or whoever, made the commit). But these files are read-onlyโ€”they literally can't have their contents replaced, which is what enables that sharingโ€”and only Git itself can even read them. This makes them useless for actually getting any work done. They're fine as archives, but no good for real work.\nThe solution to this problem is simple (and the same as in almost all other version control systems): when you select some commit to work on / with, Git will extract the files from that commit. 
This creates ordinary files, in ordinary folders, in an ordinary area in which you can do your work (whether that's ordinary or substandard or exemplary workโ€”that's all up to you, not to Git ). What this means is that you do your work in a working tree or work-tree (Git uses these two terms interchangeably). More importantly, it means this: The files you see and work on / with are not in Git. They may have just been extracted by Git, from some commit. But now they're ordinary files and you use them without Git being aware of what you're doing.\nSince Git has extracted these files into ordinary folders, you can create new files and/or new folders if you like. When you run Python programs, Python itself will, at various times, create __pycache__ folders and stuff *.pyc and/or *.pyo files into them. Python does this without Git's knowledge or understanding.\nBecause these files are generated by Python, based on your source, and just used to speed up Python, it's a good idea to avoid putting them into the commits. There's no need to save a permanent snapshot of these files, especially since the format and contents may depend on the specific Python version (e.g., Python 3.7 generates *.cpython-37.pyc files, Python 3.9 generates *.cpython-39.pyc files, and so on). So we tell Git two things:\n\nDon't complain about the existence of these particular untracked files in the working tree.\nWhen I use an en-masse \"add everything\" operation like git add ., don't add these files to the index / staging-area, so that they won't go into the next commit either.\n\nWe generally do this with the (poorly named) .gitignore file. Listing a file name in a .gitignore does not make Git ignore it; instead, it has the effect of doing the two things I listed here.\nThis uses the Git-specific term untracked file, which has a simple definition that has a complex back-story. An untracked file is simply any file in your working tree that is not currently in Git's index (staging area). Since we're not going to get into a discussion of Git's index here, we have to stop there for now, but the general idea is that we don't allow the __pycache__ files to get into the index, which keeps them untracked, which keeps Git from committing them, which keeps them from getting into Git's index. It's all a bit circular here, and if you accidentally do get these files into Git's index, that's when you need the git rm -r --cached __pycache__ command.\nSince that command is failing, it means you don't have the problem this command is meant to solve. That's good!\n", "Well, you don't need __pycache__ files in your git repositories and you'd better to ignore all related files to it by adding __pycache__/ to your .gitignore file.\n" ]
[ 0, 0 ]
[]
[]
[ "git", "pyc", "python" ]
stackoverflow_0074462238_git_pyc_python.txt
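Concretely, the ignore rules for the second answer look like this, placed in a .gitignore at the repository root; the *.py[cod] pattern also covers compiled and optimized byte-code files outside __pycache__ folders:

__pycache__/
*.py[cod]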
Q: Using Python to KNN: What is wrong with my code? I am working on a class assignment where I need to use KNN to construct a classifier and report accuracy. I have some code I have been working on. I received this error on the code below. Traceback (most recent call last): File "c:\Users\jazzm\OneDrive\Desktop\python\HWK6.py", line 20, in <module> classifier.fit(x_train, y_train) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\neighbors\_classification.py", line 207, in fit return self._fit(X, y) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\neighbors\_base.py", line 429, in _fit check_classification_targets(y) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\utils\multiclass.py", line 200, in check_classification_targets raise ValueError("Unknown label type: %r" % y_type) ValueError: Unknown label type: 'continuous' import pandas as PD import numpy as np import matplotlib.pyplot as mtp data_set= PD.read_csv('hw6.data.csv.gz') x= data_set.iloc[:,[2,3]].values y= data_set.iloc[:, 4].values from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test= train_test_split(x,y, test_size=.25, random_state=0) from sklearn.preprocessing import StandardScaler st_x= StandardScaler() x_train= st_x.fit_transform(x_train) x_test= st_x.transform(x_test) from sklearn.neighbors import KNeighborsClassifier classifier= KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2) classifier.fit(x_train, y_train) y_pred= classifier.predict(x_test) A: the values that you use for the response variable are continuous instead of categorical. A: The main goals are as follows: Apply StandardScaler to continuous variables Apply LabelEncoder and OnehotEncoder to categorical variables please read : link
Using Python to KNN: What is wrong with my code?
I am working on a class assignment where I need to use KNN to construct a classifier and report accuracy. I have some code I have been working on. I received this error on the code below. Traceback (most recent call last): File "c:\Users\jazzm\OneDrive\Desktop\python\HWK6.py", line 20, in <module> classifier.fit(x_train, y_train) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\neighbors\_classification.py", line 207, in fit return self._fit(X, y) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\neighbors\_base.py", line 429, in _fit check_classification_targets(y) File "C:\Users\jazzm\OneDrive\Desktop\python\.venv\lib\site-packages\sklearn\utils\multiclass.py", line 200, in check_classification_targets raise ValueError("Unknown label type: %r" % y_type) ValueError: Unknown label type: 'continuous' import pandas as PD import numpy as np import matplotlib.pyplot as mtp data_set= PD.read_csv('hw6.data.csv.gz') x= data_set.iloc[:,[2,3]].values y= data_set.iloc[:, 4].values from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test= train_test_split(x,y, test_size=.25, random_state=0) from sklearn.preprocessing import StandardScaler st_x= StandardScaler() x_train= st_x.fit_transform(x_train) x_test= st_x.transform(x_test) from sklearn.neighbors import KNeighborsClassifier classifier= KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2) classifier.fit(x_train, y_train) y_pred= classifier.predict(x_test)
[ "the values that you use for the response variable are continuous instead of categorical.\n", "The main goals are as follows:\n\nApply StandardScaler to continuous variables\nApply LabelEncoder and OnehotEncoder to categorical variables\n\nplease read : link\n" ]
[ 2, 0 ]
[]
[]
[ "knn", "pandas", "python", "python_3.x", "scikit_learn" ]
stackoverflow_0074510778_knn_pandas_python_python_3.x_scikit_learn.txt
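Two hedged ways to act on that, depending on what column 4 of hw6.data.csv.gz really holds: encode it if it is a class label that was read in as floats, or switch to regression if it is genuinely continuous.

from sklearn.preprocessing import LabelEncoder
from sklearn.neighbors import KNeighborsRegressor

# If column 4 is a class label stored as numbers, encode it before splitting:
y = LabelEncoder().fit_transform(y.astype(str))

# If it is a truly continuous target, use the regressor instead:
regressor = KNeighborsRegressor(n_neighbors=5, metric='minkowski', p=2)
regressor.fit(x_train, y_train)
y_pred = regressor.predict(x_test)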
Q: remove sample from anndata .obs and .x I can see how to remove columns from anndata, i.e. keep = ['a','b','c'] adata = adata[:, keep] How does one remove rows from anndata.obs and anndata.x? For example, remove adata.obs[Region='reg012'] Dataframe adata.obs A: If you want to remove rows where Region contains reg012 then.. Assuming Data Frame = adata.obs adata.obs= adata.obs[~adata.obs.Region.str.contains("reg012")]
remove sample from anndata .obs and .x
I can see how to remove columns from anndata, i.e. keep = ['a','b','c'] adata = adata[:, keep] How does one remove rows from anndata.obs and anndata.x? For example, remove adata.obs[Region='reg012'] Dataframe adata.obs
[ "If you want to remove row if Region contians reg012 then..\nAssuming Data Frame = adata.obs\nadata.obs= adata.obs[~adata.obs.Region.str.contains(\"reg012\")]\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074528516_dataframe_pandas_python.txt
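One caveat worth noting: assigning to adata.obs alone leaves adata.X with the old rows. To drop the rows from .obs and .X together, as the question asks, subset the AnnData object itself (a sketch):

mask = ~adata.obs["Region"].str.contains("reg012")
adata = adata[mask].copy()   # .copy() turns the view into a standalone object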
Q: pandas.read_sql_query() throws TypeError: 'NoneType' object is not iterable I am using pandas.read_sql_query function to read from a few sql files. One query throws an error at one particular bit which I have singled out. (python bit - nothing exotic and works with other queries) @contextmanager def open_db_connection(connection_string): pyodbc.pooling = False connection = pyodbc.connect(connection_string) try: yield connection except pyodbc.DatabaseError as err: error, = err.args sys.stderr.write(error.message) finally: connection.close() noCount = """ SET NOCOUNT ON; """ with open_db_connection(connection_string) as conn: res = pd.read_sql_query(noCount+queryObj, conn) The following bit of sql throws an error and I have no idea why it could be so. Preceding statements and various temp tables work and can be collected with pandas.read_sql_query(), however at the following bit it breaks. IF OBJECT_ID('tempdb..#test1') IS NOT NULL DROP TABLE #test1; select t.PositionID, b.SecurityID into #test1 from #tmp as t inner join placeholder.dbo.items as b on (b.PositionID = t.PositionID and b.StudyDate = '20191230') where t.ast = 'eq'; IF OBJECT_ID('tempdb..#test2') IS NOT NULL DROP TABLE #test2; select t.PositionID, case when count(i.beta_index)=0 then 1 else count(i.beta_index) end as noIndex into #test2 from #test1 as t left join #beta_index as i on (t.SecurityID = i.isin) group by t.PositionID; select * from #test2 This should return data from test2. One note though - it executes and runs perfectly fine with SQL Server Management Studio. A: The issue all along was that I was ignoring/disregarding warning messages in SSMS, which, I believe, results in cursor not being a query and pyodbc throwing ProgrammingError "No results. Previous SQL was not a query." and consequently pandas.read_sql_query() crashing. The warning: Warning: Null value is eliminated by an aggregate or other SET operation. "SET ANSI_WARNINGS OFF" at the beginning of the query solved the issue. I don't think this is the best practice, though in my case I can disregard these warnings. A: Got this answer while searching other websites as I was getting the same issue but for Teradata. As Teradata doesn't have either SET NOCOUNT ON or SET ANSI_WARNINGS OFF, I had to resort to other ways to solve this problem. Instead of using read_sql, I used a cursor object to run the query. This resolved my problem Issue pd.read_sql(query, session) Used below curr = session.cursor() curr.execute(query) Below is the original site from where I got the answer https://www.anycodings.com/1questions/4980247/nonetype-object-is-not-iterable-error-in-pandas
pandas.read_sql_query() throws TypeError: 'NoneType' object is not iterable
I am using pandas.read_sql_query function to read from a few sql files. One query throws an error at one particular bit which I have singled out. (python bit - nothing exotic and works with other queries) @contextmanager def open_db_connection(connection_string): pyodbc.pooling = False connection = pyodbc.connect(connection_string) try: yield connection except pyodbc.DatabaseError as err: error, = err.args sys.stderr.write(error.message) finally: connection.close() noCount = """ SET NOCOUNT ON; """ with open_db_connection(connection_string) as conn: res = pd.read_sql_query(noCount+queryObj, conn) The following bit of sql throws an error and I have no idea why it could be so. Preceding statements and various temp tables work and can be collected with pandas.read_sql_query(), however at the following bit it breaks. IF OBJECT_ID('tempdb..#test1') IS NOT NULL DROP TABLE #test1; select t.PositionID, b.SecurityID into #test1 from #tmp as t inner join placeholder.dbo.items as b on (b.PositionID = t.PositionID and b.StudyDate = '20191230') where t.ast = 'eq'; IF OBJECT_ID('tempdb..#test2') IS NOT NULL DROP TABLE #test2; select t.PositionID, case when count(i.beta_index)=0 then 1 else count(i.beta_index) end as noIndex into #test2 from #test1 as t left join #beta_index as i on (t.SecurityID = i.isin) group by t.PositionID; select * from #test2 This should return data from test2. One note though - it executes and runs perfectly fine with SQL Server Management Studio.
[ "The issue all along was that I was ignoring/disregarding warning messages in SSMS, which, I believe, results in cursor not being a query and pyodbc throwing ProgrammingError \"No results. Previous SQL was not a query.\" and consequently pandas.read_sql_query() crashing.\nThe warning:\n\nWarning: Null value is eliminated by an aggregate or other SET operation.\n\n\"SET ANSI_WARNINGS OFF\" at the beginning of the query solved the issue.\nI don't think this is the best practise, though in my case I can disregard these warnings.\n", "Got this answer while searching other webistes as I was getting the same issue but for teradata.\nAs Teradata doesnt have either SET NOCOUNT ON or SET ANSI_WARNINGS OFF hadto resort to other ways to solve this problem.\nInstead of using read_sql, used cursor object to run the query. This resolved my problem\nIssue\npd.read_sql(query, session)\nUsed below\ncurr = session.cursor()\ncurr.execute(query)\nBelow is the original site from where I got the answer\nhttps://www.anycodings.com/1questions/4980247/nonetype-object-is-not-iterable-error-in-pandas\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "pyodbc", "python", "python_3.x", "sql_server" ]
stackoverflow_0060078342_pandas_pyodbc_python_python_3.x_sql_server.txt
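A generic cursor-based fallback sketch for the pyodbc case, reusing the question's connection helper and variables; it skips any row-count-only result sets and builds the frame by hand:

import pandas as pd

with open_db_connection(connection_string) as conn:
    cur = conn.cursor()
    cur.execute("SET ANSI_WARNINGS OFF; " + noCount + queryObj)
    while cur.description is None:        # advance past non-query result sets
        if not cur.nextset():
            raise RuntimeError("query returned no result set")
    cols = [col[0] for col in cur.description]
    res = pd.DataFrame.from_records(cur.fetchall(), columns=cols)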
Q: How to test APIView in Django, Django Rest Framework I am making an API with Django + Django Rest Framework. I am trying to test the GET methods of a view: View: class StuffView(APIView): queryset = Stuff.objects.none() def get(self, request, format=None): data = Stuff.objects.all().order_by('-primaryKey') serializer = StuffSerializer(data, many=True) return Response(serializer.data, 200) In my test I create test data for the Stuff and then run this utilizing DRF's APIClient: def test_stuff_view_get_all(self): response = self.client.get('/api/stuff/') self.assertEqual(response.status_code, 200) self.assertEqual(len(response.data), len(Stuff.objects.all().order_by('-primaryKey') )) This works, but I am not sure length is the best way to compare these things. The other thing I would like to test is to make sure that it is properly ordered by primary key. Should I serialize the queryset and compare the dictionary to response.data? Is there a best practice to doing this? Are there some things I am missing out on that I should be testing? A: You need to compare the response data with your posted test data, that will be a good test for checking the posted data content. You may also check the order in which the response data is received, by using index response.data[0] for first item, and [1] for second and so on. self.assertEqual(response.data[0].get('score'), self.result.score)
How to test APIView in Django, Django Rest Framework
I am making an API with Django + Django Rest Framework. I am trying to test the GET methods of a view: View: class StuffView(APIView): queryset = Stuff.objects.none() def get(self, request, format=None): data = Stuff.objects.all().order_by('-primaryKey') serializer = StuffSerializer(data, many=True) return Response(serializer.data, 200) In my test I create test data for the Stuff and then run this utilizing DRF's APIClient: def test_stuff_view_get_all(self): response = self.client.get('/api/stuff/') self.assertEqual(response.status_code, 200) self.assertEqual(len(response.data), len(Stuff.objects.all().order_by('-primaryKey') )) This works, but I am not sure length is the best way to compare these things. The other thing I would like to test is to make sure that it is properly ordered by primary key. Should I serialize the queryset and compare the dictionary to response.data? Is there a best practice to doing this? Are there some things I am missing out on that I should be testing?
[ "You need to compare the response data with your posted test data, that will be a good test for checking the posted data content. You may also check the order in which the response data is received, by using index response.data[0] for first item, and [1] for second and so on.\nself.assertEqual(response.data[0].get('score'), self.result.score)\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python" ]
stackoverflow_0039550163_django_django_rest_framework_python.txt
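A sketch of the serialize-and-compare idea the question asks about, which checks contents and ordering in one assertion (it assumes the test class seeds Stuff rows in setUp):

def test_stuff_view_get_all(self):
    response = self.client.get('/api/stuff/')
    self.assertEqual(response.status_code, 200)
    expected = StuffSerializer(
        Stuff.objects.all().order_by('-primaryKey'), many=True
    ).data
    self.assertEqual(response.data, expected)   # same items, same order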
Q: How to segregate the column with respect to OK and not OK conditions in pyspark dataframe column? I have a dataframe df as shown below: VehNum Control_circuit control_circuit_status partnumbers errors Flag 4234456 DOC ok A567UR Software Issue 0 4234456 DOC not_okay A568UR Software Issue 1 4234456 DOC not_okay A569UR Hardware issue 2 4234457 ACR ok A234TY Hardware issue 0 4234457 ACR ok A235TY Hardware issue 0 4234457 ACR ok A234TY Hardware issue 0 4234487 QWR ok A276TY Hardware issue 0 4234487 QWR not_okay A872UR Hardware issue 1 3423448 QWR not_okay A872UR Hardware issue 1 I want to add a new column called "Control_Flag" and perform the below operation: for each VehNum and Control_circuit, if "control_circuit_status" has the status "ok" anywhere in that Control_circuit group, then the "Control_Flag" value will be 0, else 1. The result should be as below: VehNum Control_circuit control_circuit_status partnumbers errors Flag Control_Flag 4234456 DOC ok A567UR Software Issue 0 0 4234456 DOC not_okay A568UR Software Issue 1 0 4234456 DOC not_okay A569UR Hardware issue 2 0 4234457 ACR ok A234TY Hardware issue 0 0 4234457 ACR ok A235TY Hardware issue 0 0 4234457 ACR ok A234TY Hardware issue 0 0 4234487 QWR ok A276TY Hardware issue 0 1 4234487 QWR not_okay A872UR Hardware issue 1 1 3423448 QWR not_okay A872UR Hardware issue 1 1 How to achieve this using pyspark? A: here's the solution from pyspark.sql import functions as F from pyspark.sql.types import * from pyspark.sql import Window df = spark.createDataFrame( [ ("4234456", "DOC", "ok", "A567UR", "Software Issue", 0), ("4234456", "DOC", "not_okay", "A568UR", "Software Issue", 1), ("4234456", "DOC", "not_okay", "A569UR", "Hardware Issue", 2), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234457", "ACR", "ok", "A234TY", "Hardware Issue", 0), ("4234487", "QWR", "ok", "A276TY", "Hardware Issue", 0), ("4234487", "QWR", "not_okay", "A872UR", "Hardware Issue", 1), ("3423448", "QWR", "not_okay", "A872UR", "Hardware Issue", 1), ], ["VehNum", "Control_circuit", "control_circuit_status", "partnumbers", "errors", "Flag"], ) df_agg_window = Window.partitionBy( "VehNum", "Control_circuit", ) df = ( df .withColumn( "cc_status", F.when( F.lower(F.col("control_circuit_status")) == "ok", F.lit(1), ) .when( F.lower(F.col("control_circuit_status")) == "not_okay", F.lit(0), ) .otherwise(F.lit(0)), ) .withColumn( "flag_sum", F.sum("cc_status").over(df_agg_window), ) .withColumn( "Control_Flag", F.when( F.lower(F.col("flag_sum")) > 0, F.lit(0), ) .otherwise(F.lit(1)), ) .drop("cc_status", "flag_sum") ) df.show() output: +-------+---------------+----------------------+-----------+--------------+----+------------+ | VehNum|Control_circuit|control_circuit_status|partnumbers| errors|Flag|Control_Flag| +-------+---------------+----------------------+-----------+--------------+----+------------+ |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| |4234457| ACR| ok| A234TY|Hardware Issue| 0| 0| |4234487| QWR| not_okay| A872UR|Hardware Issue| 1| 0| |4234487| QWR| ok| A276TY|Hardware Issue| 0| 0| |4234456| DOC| ok| A567UR|Software Issue| 0| 0| |4234456| DOC| not_okay| A569UR|Hardware Issue| 2| 0| |4234456| DOC| not_okay| A568UR|Software Issue| 1| 0| |3423448| QWR| not_okay| A872UR|Hardware Issue| 1| 1| +-------+---------------+----------------------+-----------+--------------+----+------------+
How to segregate the column with respect to OK and not OK conditions in pyspark dataframe column?
I have a dataframe df as shown below: VehNum Control_circuit control_circuit_status partnumbers errors Flag 4234456 DOC ok A567UR Software Issue 0 4234456 DOC not_okay A568UR Software Issue 1 4234456 DOC not_okay A569UR Hardware issue 2 4234457 ACR ok A234TY Hardware issue 0 4234457 ACR ok A235TY Hardware issue 0 4234457 ACR ok A234TY Hardware issue 0 4234487 QWR ok A276TY Hardware issue 0 4234487 QWR not_okay A872UR Hardware issue 1 3423448 QWR not_okay A872UR Hardware issue 1 I want to add a new column called "Control_Flag" with the following logic: for each (VehNum, Control_circuit) group, if any row in that group has control_circuit_status equal to "ok", then the "Control_Flag" value will be 0, else 1. The result should be as below: VehNum Control_circuit control_circuit_status partnumbers errors Flag Control_Flag 4234456 DOC ok A567UR Software Issue 0 0 4234456 DOC not_okay A568UR Software Issue 1 0 4234456 DOC not_okay A569UR Hardware issue 2 0 4234457 ACR ok A234TY Hardware issue 0 0 4234457 ACR ok A235TY Hardware issue 0 0 4234457 ACR ok A234TY Hardware issue 0 0 4234487 QWR ok A276TY Hardware issue 0 1 4234487 QWR not_okay A872UR Hardware issue 1 1 3423448 QWR not_okay A872UR Hardware issue 1 1 How can I achieve this using PySpark?
[ "here's the solution\nfrom pyspark.sql import functions as F\nfrom pyspark.sql.types import *\nfrom pyspark.sql import Window\n\ndf = spark.createDataFrame(\n [\n (\"4234456\", \"DOC\", \"ok\", \"A567UR\", \"Software Issue\", 0),\n (\"4234456\", \"DOC\", \"not_okay\", \"A568UR\", \"Software Issue\", 1),\n (\"4234456\", \"DOC\", \"not_okay\", \"A569UR\", \"Hardware Issue\", 2), \n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0),\n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0),\n (\"4234457\", \"ACR\", \"ok\", \"A234TY\", \"Hardware Issue\", 0), \n (\"4234487\", \"QWR\", \"ok\", \"A276TY\", \"Hardware Issue\", 0),\n (\"4234487\", \"QWR\", \"not_okay\", \"A872UR\", \"Hardware Issue\", 1),\n (\"3423448\", \"QWR\", \"not_okay\", \"A872UR\", \"Hardware Issue\", 1),\n ],\n [\"VehNum\", \"Control_circuit\", \"control_circuit_status\", \"partnumbers\", \"errors\", \"Flag\"],\n)\n\ndf_agg_window = Window.partitionBy(\n \"VehNum\",\n \"Control_circuit\",\n)\n\ndf = (\n df\n .withColumn(\n \"cc_status\",\n F.when(\n F.lower(F.col(\"control_circuit_status\")) == \"ok\",\n F.lit(1),\n )\n .when(\n F.lower(F.col(\"control_circuit_status\")) == \"not_okay\",\n F.lit(0),\n )\n .otherwise(F.lit(0)),\n )\n .withColumn(\n \"flag_sum\",\n F.sum(\"cc_status\").over(df_agg_window),\n )\n .withColumn(\n \"Control_Flag\",\n F.when(\n F.lower(F.col(\"flag_sum\")) > 0,\n F.lit(0),\n )\n .otherwise(F.lit(1)),\n )\n .drop(\"cc_status\", \"flag_sum\")\n)\n\n\ndf.show()\n\noutput:\n+-------+---------------+----------------------+-----------+--------------+----+------------+\n| VehNum|Control_circuit|control_circuit_status|partnumbers| errors|Flag|Control_Flag|\n+-------+---------------+----------------------+-----------+--------------+----+------------+\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0|\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0|\n|4234457| ACR| ok| A234TY|Hardware Issue| 0| 0|\n|4234487| QWR| not_okay| A872UR|Hardware Issue| 1| 0|\n|4234487| QWR| ok| A276TY|Hardware Issue| 0| 0|\n|4234456| DOC| ok| A567UR|Software Issue| 0| 0|\n|4234456| DOC| not_okay| A569UR|Hardware Issue| 2| 0|\n|4234456| DOC| not_okay| A568UR|Software Issue| 1| 0|\n|3423448| QWR| not_okay| A872UR|Hardware Issue| 1| 1|\n+-------+---------------+----------------------+-----------+--------------+----+------------+\n\n" ]
[ 1 ]
[]
[]
[ "pyspark", "python", "python_3.x" ]
stackoverflow_0074527853_pyspark_python_python_3.x.txt
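The window logic in the accepted answer can be expressed more directly: the requirement is a per-group "any row is ok" test, so a single aggregate over the same window avoids summing an intermediate 0/1 column and comparing its value afterwards. A minimal sketch, assuming the same df and the F and Window imports from the answer:

w = Window.partitionBy("VehNum", "Control_circuit")

df = df.withColumn(
    "Control_Flag",
    F.when(
        # 1 if at least one row in the group has status "ok", else 0
        F.max((F.lower(F.col("control_circuit_status")) == "ok").cast("int")).over(w) > 0,
        F.lit(0),
    ).otherwise(F.lit(1)),
)

F.max of the 0/1 indicator over the window is 1 exactly when at least one row in the (VehNum, Control_circuit) group has status "ok", which is the same condition as flag_sum > 0 in the answer.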
Q: Pandas keep rows where column values change at least twice Good day, I have a large dataset with columns that keep track of the scores each person obtains. Here is a sample of the dataset: In: data = [[7, 10, 10, 10, 10], [17, 10, 10, 10, 10], [18, 8, 10, 10, 10], [20, 10, 10, 9, 9], [25, 9, 8, 8, 7]] df = pd.DataFrame(data, columns = ['person_id', 'score_1', 'score_2', 'score_3', 'score_4']) df Out: person_id score_1 score_2 score_3 score_4 0 7 10 10 10 10 1 17 10 10 10 10 2 18 8 10 10 10 3 20 10 10 9 9 4 25 9 8 8 7 I need to find the rows where the column values change at least twice. For example: Row 0: All values are the same, no change Row 1: All values are the same, no change Row 2: 1st and 2nd values differ, 1 change Row 3: 2nd and 3rd values differ, 1 change Row 4: 1st, 2nd and 4th values differ, 2 changes That means that only row 4 meets my requirements. Thus, the desired output would be: person_id score_1 score_2 score_3 score_4 4 25 9 8 8 7 All help greatly appreciated! A: IIUC, you want to count the number of unique values, per rows, limited to the "score*" columns. You can use nunique on the rows after getting the correct columns with filter. Then slice: df[df.filter(like='score').nunique(axis=1).gt(2)] If you really want the changes from left to right so that A->B->A->B counts for 3 changes: df[df.filter(like='score').diff(axis=1).ne(0).sum(axis=1).gt(2)] output: person_id score_1 score_2 score_3 score_4 4 25 9 8 8 7 A: Mozway's answer for the case A->B->A->B counts is brillant: df[df.filter(like='score').diff(axis=1).ne(0).sum(axis=1).gt(2)] A more plain answer, based on Mozway's one could be : df[ # following lines build a boolean mask df.filter(like='score') # keep only scores columns .diff(axis=1).iloc[:, 1:] # diff the columns and keep all except the first one whose values are all `NaN` .ne(0).sum(axis=1) # count changes: values that differ from 0 .ge(2) # finish the boolean mask: changes >= 2 ] A: df[((df.set_index('person_id').diff(axis=1) .fillna(0)!=0).sum(1)>1).reset_index(drop=True) ] person_id score_1 score_2 score_3 score_4 4 25 9 8 8 7
Pandas keep rows where column values change at least twice
Good day, I have a large dataset with columns that keep track of the scores each person obtains. Here is a sample of the dataset: In: data = [[7, 10, 10, 10, 10], [17, 10, 10, 10, 10], [18, 8, 10, 10, 10], [20, 10, 10, 9, 9], [25, 9, 8, 8, 7]] df = pd.DataFrame(data, columns = ['person_id', 'score_1', 'score_2', 'score_3', 'score_4']) df Out: person_id score_1 score_2 score_3 score_4 0 7 10 10 10 10 1 17 10 10 10 10 2 18 8 10 10 10 3 20 10 10 9 9 4 25 9 8 8 7 I need to find the rows where the column values change at least twice. For example: Row 0: All values are the same, no change Row 1: All values are the same, no change Row 2: 1st and 2nd values differ, 1 change Row 3: 2nd and 3rd values differ, 1 change Row 4: 1st, 2nd and 4th values differ, 2 changes That means that only row 4 meets my requirements. Thus, the desired output would be: person_id score_1 score_2 score_3 score_4 4 25 9 8 8 7 All help greatly appreciated!
[ "IIUC, you want to count the number of unique values, per rows, limited to the \"score*\" columns.\nYou can use nunique on the rows after getting the correct columns with filter. Then slice:\ndf[df.filter(like='score').nunique(axis=1).gt(2)]\n\nIf you really want the changes from left to right so that A->B->A->B counts for 3 changes:\ndf[df.filter(like='score').diff(axis=1).ne(0).sum(axis=1).gt(2)]\n\noutput:\n person_id score_1 score_2 score_3 score_4\n4 25 9 8 8 7\n\n", "Mozway's answer for the case A->B->A->B counts is brillant:\ndf[df.filter(like='score').diff(axis=1).ne(0).sum(axis=1).gt(2)]\n\nA more plain answer, based on Mozway's one could be :\ndf[ # following lines build a boolean mask\n df.filter(like='score') # keep only scores columns\n .diff(axis=1).iloc[:, 1:] # diff the columns and keep all except the first one whose values are all `NaN`\n .ne(0).sum(axis=1) # count changes: values that differ from 0 \n .ge(2) # finish the boolean mask: changes >= 2\n ]\n\n", "df[((df.set_index('person_id').diff(axis=1)\n .fillna(0)!=0).sum(1)>1).reset_index(drop=True)\n]\n\n person_id score_1 score_2 score_3 score_4\n4 25 9 8 8 7\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0071062887_pandas_python.txt
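One nuance of the diff-based variant: diff(axis=1) leaves NaN in the first score column, and NaN.ne(0) evaluates to True, so the row sum always carries one extra count; gt(2) therefore means "at least two real left-to-right changes". A quick check on the sample frame from the question:

import pandas as pd

data = [[7, 10, 10, 10, 10], [17, 10, 10, 10, 10], [18, 8, 10, 10, 10],
        [20, 10, 10, 9, 9], [25, 9, 8, 8, 7]]
df = pd.DataFrame(data, columns=['person_id', 'score_1', 'score_2', 'score_3', 'score_4'])

changes = df.filter(like='score').diff(axis=1).ne(0).sum(axis=1) - 1  # subtract the always-True NaN column
print(changes.tolist())   # [0, 0, 1, 1, 2]
print(df[changes.ge(2)])  # only row 4 (person_id 25) qualifies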
Q: How to save the swalign library output (Local Alignment - Smith-Waterman Algorithm)? I have used the below code to get the local alignment score between two strings using Smith-Waterman Algorithm. However, I'm getting the required output, I'm finding it difficult to save the result into some variable for further analysis. import swalign def Local_Alignment(string1, string2): match_score = 100 mismatch_score = -100 matrix = swalign.NucleotideScoringMatrix(match_score, mismatch_score) lalignment_object = swalign.LocalAlignment(matrix) alignment_object = lalignment_object.align(string1, string2) return alignment_object.dump() string1 = "ABCDEFGHIJKLMNOP" string2 = "CDGIKNOP" temp = Local_Alignment(string1, string2) Whenever I try to save the result into some variable, it simply stores a None value. Even though I tried storing the result in a text file, that also didn't work. A: if you got to the implementation of the library you can see in the dump function the results are dumped on the console. That is why it is returning nothing when you call the function and display temp in your case. However what you can do is go to the implementation copy the dump function and paste it there and rename it to something else, add a addition argument as filename and try to write everything in the file whatever is put on to the console A: As can be seen in the source code, the dump() method writes to a parameter out, which is sys.stdout by default. So if you don't supply the out parameter, everything is written to the standard output (console). The method does not return anything, so your statement return alignment_object.dump() doesn't do what you want, but returns None instead. Fortunately, out is expected to be a Text I/O object, which is pretty common in Python. You can pass any file-like object as out parameter, for example an actual file or an io.StringIO object. So, if you want to write the output to a file, you could pass the file as argument like this: with open('dump.txt', 'wt', encoding='utf-8') as file: return alignment_object.dump(out=file) If you want to store the output in a variable, so you can do whatever you want with it, use an io.StringIO object like this: import io import swalign def Local_Alignment(string1, string2): match_score = 100 mismatch_score = -100 matrix = swalign.NucleotideScoringMatrix(match_score, mismatch_score) lalignment_object = swalign.LocalAlignment(matrix) alignment_object = lalignment_object.align(string1, string2) with io.StringIO() as file: alignment_object.dump(out=file) return file.getvalue() string1 = "ABCDEFGHIJKLMNOP" string2 = "CDGIKNOP" temp = Local_Alignment(string1, string2)
How to save the swalign library output (Local Alignment - Smith-Waterman Algorithm)?
I have used the below code to get the local alignment score between two strings using the Smith-Waterman Algorithm. However, while I'm getting the required output, I'm finding it difficult to save the result into some variable for further analysis. import swalign def Local_Alignment(string1, string2): match_score = 100 mismatch_score = -100 matrix = swalign.NucleotideScoringMatrix(match_score, mismatch_score) lalignment_object = swalign.LocalAlignment(matrix) alignment_object = lalignment_object.align(string1, string2) return alignment_object.dump() string1 = "ABCDEFGHIJKLMNOP" string2 = "CDGIKNOP" temp = Local_Alignment(string1, string2) Whenever I try to save the result into some variable, it simply stores a None value. I also tried storing the result in a text file, but that didn't work either.
[ "if you got to the implementation of the library you can see in the dump function the results are dumped on the console.\nThat is why it is returning nothing when you call the function and display temp in your case.\nHowever what you can do is go to the implementation copy the dump function and paste it there and rename it to something else, add a addition argument as filename and try to write everything in the file whatever is put on to the console\n", "As can be seen in the source code, the dump() method writes to a parameter out, which is sys.stdout by default. So if you don't supply the out parameter, everything is written to the standard output (console).\nThe method does not return anything, so your statement return alignment_object.dump() doesn't do what you want, but returns None instead.\nFortunately, out is expected to be a Text I/O object, which is pretty common in Python. You can pass any file-like object as out parameter, for example an actual file or an io.StringIO object.\nSo, if you want to write the output to a file, you could pass the file as argument like this:\nwith open('dump.txt', 'wt', encoding='utf-8') as file:\n return alignment_object.dump(out=file)\n\nIf you want to store the output in a variable, so you can do whatever you want with it, use an io.StringIO object like this:\nimport io\n\nimport swalign\n\n\ndef Local_Alignment(string1, string2):\n match_score = 100\n mismatch_score = -100\n matrix = swalign.NucleotideScoringMatrix(match_score, mismatch_score)\n lalignment_object = swalign.LocalAlignment(matrix)\n alignment_object = lalignment_object.align(string1, string2)\n with io.StringIO() as file:\n alignment_object.dump(out=file)\n return file.getvalue()\n\n\nstring1 = \"ABCDEFGHIJKLMNOP\"\nstring2 = \"CDGIKNOP\"\ntemp = Local_Alignment(string1, string2)\n\n" ]
[ 0, 0 ]
[]
[]
[ "dna_sequence", "python", "smith_waterman" ]
stackoverflow_0074121168_dna_sequence_python_smith_waterman.txt
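The io.StringIO solution relies on dump() accepting an out parameter. A more general pattern for any library function that prints straight to standard output is contextlib.redirect_stdout from the standard library; a sketch assuming the alignment_object from the question:

import io
from contextlib import redirect_stdout

buffer = io.StringIO()
with redirect_stdout(buffer):
    alignment_object.dump()  # writes to sys.stdout, which is temporarily redirected
result = buffer.getvalue()   # the dumped text, now in a variable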
Q: Error getting after used hooks in behave framework python (before_scenario and after_scenario) I used environment.py in my code. I used hooks before_scenario and after_scenario. After the first test run. Got an error immediately. In this code am i doing something wrong? from common.selen_base import Browser def before_scenario(context,scenario): context.browser = Browser() def after_scenario(context,scenario): context.browser.close_all() [Error after run the Feature] class WebDriverManager(object): __driver = None @classmethod def get_web_driver(cls): if cls.__driver is None: cls.__driver = webdriver.Chrome(executable_path="/usr/local/bin/chromedriver") cls.__driver.maximize_window() return cls.__driver class Browser(object): __driver = None def __init__(self): self.__driver = WebDriverManager.get_web_driver() self.wait = WebDriverWait(self.__driver, 10) A: You can add this at top of environment.py file: urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) This should resolve the issue
Error getting after used hooks in behave framework python (before_scenario and after_scenario)
I used environment.py in my code. I used the hooks before_scenario and after_scenario. After the first test run, I got an error immediately. Am I doing something wrong in this code? from common.selen_base import Browser def before_scenario(context,scenario): context.browser = Browser() def after_scenario(context,scenario): context.browser.close_all() [Error after running the Feature] class WebDriverManager(object): __driver = None @classmethod def get_web_driver(cls): if cls.__driver is None: cls.__driver = webdriver.Chrome(executable_path="/usr/local/bin/chromedriver") cls.__driver.maximize_window() return cls.__driver class Browser(object): __driver = None def __init__(self): self.__driver = WebDriverManager.get_web_driver() self.wait = WebDriverWait(self.__driver, 10)
[ "You can add this at top of environment.py file:\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n\nThis should resolve the issue\n" ]
[ 0 ]
[]
[]
[ "hook", "python", "python_behave" ]
stackoverflow_0059817925_hook_python_python_behave.txt
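One plausible source of trouble with this hook setup is the class-level singleton in WebDriverManager: after the first scenario closes the browser, __driver still references the dead session, so the second scenario reuses it and fails immediately. A sketch of a reset, assuming the classes from the question and that close_all quits the driver; this is a hypothesis about the unposted error, not a confirmed diagnosis:

from selenium import webdriver

class WebDriverManager(object):
    __driver = None

    @classmethod
    def get_web_driver(cls):
        if cls.__driver is None:
            cls.__driver = webdriver.Chrome(executable_path="/usr/local/bin/chromedriver")
            cls.__driver.maximize_window()
        return cls.__driver

    @classmethod
    def quit_web_driver(cls):
        if cls.__driver is not None:
            cls.__driver.quit()
            cls.__driver = None  # next scenario gets a fresh session

after_scenario would then call WebDriverManager.quit_web_driver() instead of context.browser.close_all().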
Q: python program that records number of attempts made during a random number between 1-10 trying to create a Python program to guess a number between 1 to 9 entered by the user and count the number of attempts taken by the computer to guess the correct number. this is what I have so far, need to add on a counter that tells me how many attempts have been made, any advice? thank you import random #Python import random module in Python defines a series of functions for generating or manipulating random integers target_num, guess_num = random.randint(1, 10), 0 while target_num != guess_num: guess_num = int(input('Guess a number between 1 and 10 until you get it right : ')) print('Well guessed!') A: Just initialize a counter and increment it at each attempt. counter += 1 is a shorcut for counter = counter + 1. The print() function use a f-string to print the counter variable. import random #Python import random module in Python defines a series of functions for generating or manipulating random integers target_num, guess_num, counter = random.randint(1, 10), 0, 0 while target_num != guess_num: counter += 1 guess_num = int(input('Guess a number between 1 and 10 until you get it right : ')) print(f'Well guessed! {counter} attempts.') A: Code correction You need to check Target num and user num usung If inside the while loop as follows. import random target_num, guess_num = random.randint(1, 10), 0 attempts =0 while 1: guess_num = int(input('Guess a number between 1 and 10 until you get it right : ')) if guess_num == target_num: print('Well guessed!') print(f"You took {attempts} chances") break else: print(f"you gussed {guess_num} which is wrong") attempts = attempts + 1 pass Sample outputs# Guess a number between 1 and 10 until you get it right : 3 you gussed 3 which is wrong. Guess a number between 1 and 10 until you get it right : 3 you gussed 3 which is wrong. Guess a number between 1 and 10 until you get it right : 10 Well guessed! You took 2 chances
python program that records number of attempts made during a random number between 1-10
I'm trying to create a Python program where the user guesses a number between 1 and 10 chosen by the computer, and the program counts the number of attempts taken to guess the correct number. This is what I have so far; I need to add a counter that tells me how many attempts have been made. Any advice? Thank you. import random # Python's random module defines a series of functions for generating or manipulating random integers target_num, guess_num = random.randint(1, 10), 0 while target_num != guess_num: guess_num = int(input('Guess a number between 1 and 10 until you get it right : ')) print('Well guessed!')
[ "Just initialize a counter and increment it at each attempt.\ncounter += 1 is a shorcut for counter = counter + 1.\nThe print() function use a f-string to print the counter variable.\nimport random #Python import random module in Python defines a series of functions for generating or manipulating random integers\ntarget_num, guess_num, counter = random.randint(1, 10), 0, 0\nwhile target_num != guess_num:\n counter += 1\n guess_num = int(input('Guess a number between 1 and 10 until you get it right : '))\nprint(f'Well guessed! {counter} attempts.')\n\n", "Code correction\nYou need to check Target num and user num usung If inside the while loop as follows.\nimport random \ntarget_num, guess_num = random.randint(1, 10), 0\nattempts =0\nwhile 1: \n guess_num = int(input('Guess a number between 1 and 10 until you get it right : '))\n\n if guess_num == target_num:\n print('Well guessed!')\n print(f\"You took {attempts} chances\")\n \n break\n\n else:\n print(f\"you gussed {guess_num} which is wrong\")\n attempts = attempts + 1\n pass\n\nSample outputs#\nGuess a number between 1 and 10 until you get it right : 3\nyou gussed 3 which is wrong.\nGuess a number between 1 and 10 until you get it right : 3\nyou gussed 3 which is wrong.\nGuess a number between 1 and 10 until you get it right : 10\nWell guessed!\nYou took 2 chances\n\n" ]
[ 0, 0 ]
[]
[]
[ "numbers", "python", "random" ]
stackoverflow_0074528610_numbers_python_random.txt
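Since int(input(...)) raises ValueError on non-numeric input and would crash the game mid-count, a guard around the prompt keeps the attempt counter honest; a sketch building on the first answer:

import random

target_num, guess_num, counter = random.randint(1, 10), 0, 0
while target_num != guess_num:
    try:
        guess_num = int(input('Guess a number between 1 and 10 until you get it right : '))
    except ValueError:
        print('Please enter a whole number.')
        continue  # invalid input is not counted as an attempt
    counter += 1
print(f'Well guessed! {counter} attempts.')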
Q: pass variable to scipy curve_fit I am trying to fit a dataset using a function: def kel_voigt(x, en2, l2, en3, l3): # The first term, 300 should be a variable, from the main const = 300 * 1e-6 * math.pi / (2 * math.tan(math.radians(63.3))) return const * (((1 - (np.exp(-x / l2))) / en2) + ((1 - (np.exp(-x / l3))) / en3)) where, the fitting is called from main as: for n in range(len(sheets)): popt, pcov = sp.optimize.curve_fit(kel_voigt, np.array(tl[n]), np.array(h0l[n]), maxfev=10000) Now, the problem is, the first term of the variable load (i.e. 300) should be a variable and to be passed from main (it differs with each value of n in the main iteration). From https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html, I haven't find out a way to pass the extra parameter to the scipy.optimize.curve_fit(). How I can set the extra variable? A: You can add one additional argument for your fixed variable/constant to your function and wrap this function in each loop iteration: def kel_voigt(x, fix_var, en2, l2, en3, l3): # The first term, 300 should be a variable, from the main const = fix_var * 1e-6 * math.pi / (2 * math.tan(math.radians(63.3))) return const * (((1 - (np.exp(-x / l2))) / en2) + ((1 - (np.exp(-x / l3))) / en3)) for n in range(len(sheets)): # replace 300 with the value in the current iteration fun_to_fit = lambda x, en2, l2, en3, l3: kel_voigt(x, 300, en2, l2, en3, l3) popt, pcov = sp.optimize.curve_fit(fun_to_fit, np.array(tl[n]), np.array(h0l[n]), maxfev=10000)
pass variable to scipy curve_fit
I am trying to fit a dataset using a function: def kel_voigt(x, en2, l2, en3, l3): # The first term, 300 should be a variable, from the main const = 300 * 1e-6 * math.pi / (2 * math.tan(math.radians(63.3))) return const * (((1 - (np.exp(-x / l2))) / en2) + ((1 - (np.exp(-x / l3))) / en3)) where the fitting is called from main as: for n in range(len(sheets)): popt, pcov = sp.optimize.curve_fit(kel_voigt, np.array(tl[n]), np.array(h0l[n]), maxfev=10000) Now, the problem is that the first term, the variable load (i.e. 300), should be a variable passed from main (it differs with each value of n in the main iteration). From https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html, I haven't found a way to pass the extra parameter to scipy.optimize.curve_fit(). How can I set the extra variable?
[ "You can add one additional argument for your fixed variable/constant to your function and wrap this function in each loop iteration:\ndef kel_voigt(x, fix_var, en2, l2, en3, l3):\n # The first term, 300 should be a variable, from the main\n const = fix_var * 1e-6 * math.pi / (2 * math.tan(math.radians(63.3)))\n return const * (((1 - (np.exp(-x / l2))) / en2) +\n ((1 - (np.exp(-x / l3))) / en3))\n\nfor n in range(len(sheets)):\n # replace 300 with the value in the current iteration\n fun_to_fit = lambda x, en2, l2, en3, l3: kel_voigt(x, 300, en2, l2, en3, l3)\n popt, pcov = sp.optimize.curve_fit(fun_to_fit,\n np.array(tl[n]),\n np.array(h0l[n]),\n maxfev=10000)\n\n" ]
[ 1 ]
[]
[]
[ "curve_fitting", "python", "scipy_optimize" ]
stackoverflow_0074527404_curve_fitting_python_scipy_optimize.txt
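An alternative to defining a lambda in every iteration is a small factory function that bakes the per-iteration constant into a closure, so curve_fit sees a plain four-parameter model. A sketch, assuming the kel_voigt variant from the answer (with fix_var as its second argument) and a hypothetical fix_vars list holding one constant per sheet:

def make_model(fix_var):
    def model(x, en2, l2, en3, l3):
        # fix_var is captured by the closure; curve_fit only fits the four parameters
        return kel_voigt(x, fix_var, en2, l2, en3, l3)
    return model

for n in range(len(sheets)):
    popt, pcov = sp.optimize.curve_fit(make_model(fix_vars[n]),
                                       np.array(tl[n]),
                                       np.array(h0l[n]),
                                       maxfev=10000)

fix_vars is an assumed name, not part of the original code; any per-sheet source of the constant works the same way.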
Q: How to substitute value of variables in Python expression, but not evaluate the expression? I have a Python expression that looks like the following: var1 = 'GOOGLE' var2 = '5' expr = 'df[df[var1]>=var2]' In my workspace var1 and var2 are well defined so I can evaluate expr as follows: eval(expr) However, I want to pass this expr (as string) to another function with values of var1 and var2 substituted in it. I do not want to pass the variables var1 and var2, as I can have any number of variables, not just two. How do I accomplish this? A: You can simply use a Python f-string as demonstrated below expr = f'df[df[{var1!r}] >= {var2}]' A: You can parse the expression with ast.parse and use a subclass of ast.NodeTransformer to convert Name nodes to the corresponding values as Constant nodes, and then convert the AST back to code with ast.unparse: import ast var1 = 'GOOGLE' var2 = '5' expr = 'df[df[var1]>=var2]' class NamesToConstants(ast.NodeTransformer): def visit_Name(self, node): if node.id in globals(): # feel free to use your own dict instead of globals() value = globals()[node.id] try: # convert value to integer if viable value = int(value) except: pass return ast.Constant(value=value) return node tree = ast.parse(expr) NamesToConstants().visit(tree) print(ast.unparse(tree)) This outputs: df[df['GOOGLE'] >= 5] ast.unparse requires Python 3.9 or later. If you're using an earlier version, you can use astunparse.unparse from the astunparse package instead. Demo: https://trinket.io/python3/18cc1182d0
How to substitute value of variables in Python expression, but not evaluate the expression?
I have a Python expression that looks like the following: var1 = 'GOOGLE' var2 = '5' expr = 'df[df[var1]>=var2]' In my workspace var1 and var2 are well defined so I can evaluate expr as follows: eval(expr) However, I want to pass this expr (as string) to another function with values of var1 and var2 substituted in it. I do not want to pass the variables var1 and var2, as I can have any number of variables, not just two. How do I accomplish this?
[ "You can simply use Python f-string as demonstrated below\nexpr = f'df[df[{var1}] >= {var2}]'\n\n\n", "You can parse the expression with ast.parse and use a subclass of ast.NodeTransformer to convert Name nodes to the corresponding values as Constant nodes, and then convert the AST back to code with ast.unparse:\nimport ast\n\nvar1 = 'GOOGLE'\nvar2 = '5'\nexpr = 'df[df[var1]>=var2]'\n\nclass NamesToConstants(ast.NodeTransformer):\n def visit_Name(self, node):\n if node.id in globals(): # feel free to use your own dict instead of globals()\n value = globals()[node.id]\n try: # convert value to integer if viable\n value = int(value)\n except:\n pass\n return ast.Constant(value=value)\n return node\n\ntree = ast.parse(expr)\nNamesToConstants().visit(tree)\nprint(ast.unparse(tree))\n\nThis outputs:\ndf[df['GOOGLE'] >= 5]\n\nast.unparse requires Python 3.10 or later. If you're using an earlier version, you can use astunparse.unparse from the astunparse package instead.\nDemo: https://trinket.io/python3/18cc1182d0\n" ]
[ 1, 0 ]
[]
[]
[ "eval", "python" ]
stackoverflow_0074528551_eval_python.txt
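The f-string form handles a fixed, known set of variables; for the "any number of variables" requirement, a regex substitution driven by a dict is a possible lightweight sketch (repr plays the same quoting role as !r above):

import re

variables = {'var1': 'GOOGLE', 'var2': 5}
expr = 'df[df[var1]>=var2]'

def substitute(expr, variables):
    # replace each whole-word variable name with the repr of its value
    pattern = re.compile(r'\b(' + '|'.join(map(re.escape, variables)) + r')\b')
    return pattern.sub(lambda m: repr(variables[m.group(1)]), expr)

print(substitute(expr, variables))  # df[df['GOOGLE']>=5]

Values that should compare numerically need to be stored as numbers in the dict (5, not '5'), mirroring the int() conversion in the ast answer.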
Q: Python - remove punctuation marks at the end and at the beginning of one or more words I wanted to know how to remove punctuation marks at the end and at the beginning of one or more words. If there are punctuation marks inside the word, we don't remove them. for example input: word = "!.test-one,-" output: word = "test-one" A: use strip >>> import string >>> word = "!.test-one,-" >>> word.strip(string.punctuation) 'test-one' A: The best solution is to use the Python .strip(chars) method of the built-in class str. Another approach would be to use a regular expression and the regular expressions module. In order to understand what strip() and the regular expression do, you can take a look at two functions which duplicate the behavior of strip(). The first one using recursion, the second one using while loops: chars = '''!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~''' def cstm_strip_1(word, chars): # Approach using recursion: w = word[1 if word[0] in chars else 0: -1 if word[-1] in chars else None] if w == word: return w else: return cstm_strip_1(w, chars) def cstm_strip_2(word, chars): # Approach using a while loop: i , j = 0, -1 while word[i] in chars: i += 1 while word[j] in chars: j -= 1 return word[i:j+1] import re, string chars = string.punctuation word = "~!.test-one^&test-one--two???" wsc = word.strip(chars) assert wsc == cstm_strip_1(word, chars) assert wsc == cstm_strip_2(word, chars) assert wsc == re.sub(r"(^[^\w]+)|([^\w]+$)", "", word) word = "__~!.test-one^&test-one--two??__" wsc = word.strip(chars) assert wsc == cstm_strip_1(word, chars) assert wsc == cstm_strip_2(word, chars) # assert wsc == re.sub(r"(^[^\w]+)|([^\w]+$)", "", word) assert re.sub(r"(^[^\w]+)|([^\w]+$)", "", word) == word print(re.sub(r"(^[^\w]+)|([^\w]+$)", "", word), '!=', wsc ) print('"',re.sub(r"(^[^\w]+)|([^\w]+$)", "", "\tword\t"), '" != "', "\tword\t".strip(chars), '"', sep='' ) Notice that the result when using the given regular expression pattern can differ from the result when using .strip(string.punctuation) because the set of characters covered by the regular expression [^\w] pattern differs from the set of characters in string.punctuation. SUPPLEMENT What does the regular expression pattern: (^[^\w]+)|([^\w]+$) mean? Below is a detailed explanation: The '|' character means 'or', providing two alternatives for the sub-string (called a match) which is to be found in the provided string. '(^[^\w]+)' is the first of the two alternatives for a match '(' ')' enclose what is called a "capturing group" (^[^\w]+) The first of the two '^' asserts position at start of a line '\w' : with \ escaped 'w' means: "word character" (i.e. letters a-z, A-Z, digits 0-9 and the underscore '_'). The second of the two '^' means: logical "not" (here not a "word character") i.e. all characters except a-zA-Z0-9 and '_' (for example '~' or 'ö') Notice that the meaning of '^' depends on context: '^' outside of [ ] means start of line/string '^' inside of [ ] as first char means logical not and not as first means itself '[', ']' enclose the specification of a set of characters and mean the occurrence of exactly one of them '+' means occurrence between one and unlimited times of what was defined in the preceding token '([^\w]+$)' is the second alternative for a match, differing from the first by stating that the match should be found at the end of the string '$' means: "end of the line" (or "end of string") The regular expression pattern tells the regular expression engine to work as follows: The engine looks at the start of the string for an occurrence of a non-word character. If one is found it will be remembered as a match, and the next character will be checked and added to the already found ones if it is also a non-word character. This way the start of the string is checked for occurrences of non-word characters, which will then be removed from the string if the pattern is used in re.sub(r"(^[^\w]+)|([^\w]+$)", "", word), which replaces any found characters with an empty string (in other words it deletes the found characters from the string). After the engine hits the first word character in the string, the search at the start of the string will then jump to the end of the string because of the second alternative given for the pattern to find, as the first alternative is limited to the start of the line. This way any non-word characters in the intermediate part of the string will not be searched for. The engine then looks at the end of the string for a non-word character and proceeds like at the start, but going backwards, to ensure that the found non-word characters are at the end of the string. A: Using re.sub import re word = "!.test-one,-" out = re.sub(r"(^[^\w]+)|([^\w]+$)", "", word) print(out) Gives # test-one A: Check this example using slice import string sentence = "_blogs that are consistently updated by people that know about the trends, and market, and care about giving quality content to their readers." if sentence[0] in string.punctuation: sentence = sentence[1:] if sentence[-1] in string.punctuation: sentence = sentence[:-1] print(sentence) Output: blogs that are consistently updated by people that know about the trends, and market, and care about giving quality content to their readers
Python - remove punctuation marks at the end and at the beginning of one or more words
I wanted to know how to remove punctuation marks at the end and at the beginning of one or more words. If there are punctuation marks between the word, we don't remove. for example input: word = "!.test-one,-" output: word = "test-one"
[ "use strip\n>>> import string\n>>> word = \"!.test-one,-\"\n>>> word.strip(string.punctuation)\n'test-one'\n\n", "The best solution is to use Python .strip(chars) method of the built-in class str.\nAnother approach will be to use a regular expression and the regular expressions module.\nIn order to understand what strip() and the regular expression does you can take a look at two functions which duplicate the behavior of strip(). The first one using recursion, the second one using while loops:\n\nchars = '''!\"#$%&'()*+,-./:;<=>?@[\\]^_`{|}~'''\n\ndef cstm_strip_1(word, chars):\n # Approach using recursion: \n w = word[1 if word[0] in chars else 0: -1 if word[-1] in chars else None]\n if w == word:\n return w\n else: \n return cstm_strip_1(w, chars)\n\ndef cstm_strip_2(word, chars):\n # Approach using a while loop: \n i , j = 0, -1\n while word[i] in chars:\n i += 1\n while word[j] in chars:\n j -= 1\n return word[i:j+1]\n\nimport re, string\n\nchars = string.punctuation\nword = \"~!.test-one^&test-one--two???\"\n\nwsc = word.strip(chars)\nassert wsc == cstm_strip_1(word, chars)\nassert wsc == cstm_strip_2(word, chars)\nassert wsc == re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word)\n\nword = \"__~!.test-one^&test-one--two??__\"\n\nwsc = word.strip(chars)\nassert wsc == cstm_strip_1(word, chars)\nassert wsc == cstm_strip_2(word, chars)\n# assert wsc == re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word)\nassert re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word) == word\n\nprint(re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word), '!=', wsc )\nprint('\"',re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", \"\\tword\\t\"), '\" != \"', \"\\tword\\t\".strip(chars), '\"', sep='' )\n\nNotice that the result when using the given regular expression pattern can differ from the result when using .strip(string.punctuation) because the set of characters covered by regular expression [^\\w] pattern differs from the set of characters in string.punctuation.\nSUPPLEMENT\nWhat does the regular expression pattern:\n(^[^\\w]+)|([^\\w]+$)\n\nmean?\nBelow a detailed explanation:\nThe '|' character means 'or' providing two alternatives for the \n sub-string (called match) which is to find in the provided string. \n\n'(^[^\\w]+)' is the first of the two alternatives for a match\n\n '(' ')' enclose what is called a \"capturing group\" (^[^\\w]+)\n\n The first of the two '^' asserts position at start of a line\n\n '\\w' : with \\ escaped 'w' means: \"word character\" \n (i.e. letters a-z, A-Z, digits 0-9 and the underscore '_').\n\n The second of the two '^' means: logical \"not\" \n (here not a \"word character\")\n i.e. all characters except a-zA-z0-9 and '_'\n (for example '~' or 'รถ')\n Notice that the meaning of '^' depends on context: \n '^' outside of [ ] it means start of line/string\n '^' inside of [ ] as first char means logical not \n and not as first means itself \n\n '[', ']' enclose specification of a set of characters \n and mean the occurrence of exactly one of them\n\n '+' means occurrence between one and unlimited times\n of what was defined in preceding token\n\n '([^\\w]+$)' is the second alternative for a match \n differing from the first by stating that the match\n should be found at the end of the string\n '$' means: \"end of the line\" (or \"end of string\")\n\nThe regular expression pattern tells the regular expression engine to work as follows:\nThe engine looks at the start of the string for an occurrence of a non-word\ncharacter. 
If one if found it will be remembered as a match and next\ncharacter will be checked and added to the already found ones if it is also\na non-word character. This way the start of the string is checked for\noccurrences of non-word characters which then will be removed from the\nstring if the pattern is used in re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word)\nwhich replaces any found characters with an empty string (in other words\nit deletes found character from the string).\nAfter the engine hits first word character in the string the search at\nthe start of the string will the jump to the end of the string because\nof the second alternative given for the pattern to find as the first\nalternative is limited to the start of the line.\nThis way any non-word characters in the intermediate part of the string\nwill be not searched for.\nThe engine looks then at the end of a string for a non-word character\nand proceeds like at the start but going backwards to assure that the\nfound non-word characters are at the end of the string.\n", "Using re.sub\nimport re\nword = \"!.test-one,-\"\nout = re.sub(r\"(^[^\\w]+)|([^\\w]+$)\", \"\", word)\nprint(out)\n\nGives #\ntest-one\n\n", "Check this example using slice\nimport string\nsentence = \"_blogs that are consistently updated by people that know about the trends, and market, and care about giving quality content to their readers.\" \nif sentence[0] in string.punctuation:\n sentence = sentence[1:]\nif sentence[-1] in string.punctuation:\n sentence = sentence[:-1]\nprint(sentence)\n\nOutput:\nblogs that are consistently updated by people that know about the trends, and market, and care about giving quality content to their readers\n\n" ]
[ 6, 2, 1, 1 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0074528239_algorithm_python.txt
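Since the title asks about one or more words, the strip-based answer extends to a whole sentence by applying it per token; a short sketch:

import string

text = "!.test-one,- and 'another, word!?"
cleaned = ' '.join(w.strip(string.punctuation) for w in text.split())
print(cleaned)  # test-one and another word

Punctuation inside a token (the hyphen in test-one) is preserved, matching the requirement in the question.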
Q: python password generator loop problem print error I trying to make a password generator using python. Currently, I just want the program to print random characters from the ascii table. I will later introduce numbers and symbols. I used a for loop to print random character from a range that the user inputs. It works however, when I use the end='' to print the characters on the same line a % shows up. I think it is there to show that it printed a no character. I would like the program to not print the % because later I will add other numbers and symbols. I tried subtracting 1 from the range of number. What resulted was the same string with a % but 1 less than intended. I also tried creating a while loop that would print while the variable was less than the password number. It also printed the %. Here is the code: import random import string letters=string.ascii_letters passwordnumber=int(input("How many characters do you want your password to be? ")) for i in range(passwordnumber): print(random.choice(letters), end='') A: The % print by your shell (may be zsh), it means the string not end by "\n". It's just a reminder from the shell. There is nothing wrong with you. You can just add a print() in the end of your code to print a "\n", and % will not show again. A: Try this characters = list(string.ascii_letters + string.digits + "!@#$%^&*()") def generate_random_password(): ## length of password from the user length = 8 ## shuffling the characters random.shuffle(characters) ## picking random characters from the list password = [] for i in range(length): password.append(random.choice(characters)) ## shuffling the resultant password random.shuffle(password) ## converting the list to string ## printing the list return "".join(password) A: Your script works absolutly fine in my side. see this https://onlinegdb.com/9EagkKVW1 If you feel like it's issue with end you can simply concat outputs to string and print at once like so. import random import string letters=string.ascii_letters pas ='' passwordnumber=int(input("How many characters do you want your password to be? ")) for i in range(passwordnumber): pas += random.choice(letters) print(pas) outputs # How many characters do you want your password to be? 5 AvfYm A: we can use the random .sample() method. it requires 2 arguments: - iterable of elements to use - number of elements to take the result does not contain duplicates. import random import string letters=string.ascii_letters passwordnumber=int(input("How many characters do you want your password to be? ")) pas = ''.join(random.sample(letters, k=passwordnumber)) print(pas)
python password generator loop problem print error
I'm trying to make a password generator using Python. Currently, I just want the program to print random characters from the ASCII table. I will later introduce numbers and symbols. I used a for loop to print a random character for each position in a range that the user inputs. It works; however, when I use end='' to print the characters on the same line, a % shows up. I think it is there to show that the output did not end with a newline. I would like the program to not print the %, because later I will add other numbers and symbols. I tried subtracting 1 from the range number. What resulted was the same string with a %, but one character shorter than intended. I also tried creating a while loop that would print while the variable was less than the password number. It also printed the %. Here is the code: import random import string letters=string.ascii_letters passwordnumber=int(input("How many characters do you want your password to be? ")) for i in range(passwordnumber): print(random.choice(letters), end='')
[ "The % print by your shell (may be zsh), it means the string not end by \"\\n\". It's just a reminder from the shell. There is nothing wrong with you. You can just add a print() in the end of your code to print a \"\\n\", and % will not show again.\n", "Try this\ncharacters = list(string.ascii_letters + string.digits + \"!@#$%^&*()\")\ndef generate_random_password():\n ## length of password from the user\n length = 8\n\n ## shuffling the characters\n random.shuffle(characters)\n\n ## picking random characters from the list\n password = []\n for i in range(length):\n password.append(random.choice(characters))\n\n ## shuffling the resultant password\n random.shuffle(password)\n\n ## converting the list to string\n ## printing the list\n return \"\".join(password)\n\n", "Your script works absolutly fine in my side. see this https://onlinegdb.com/9EagkKVW1\nIf you feel like it's issue with end you can simply concat outputs to string and print at once like so.\nimport random\nimport string\nletters=string.ascii_letters\npas =''\npasswordnumber=int(input(\"How many characters do you want your password to be? \"))\nfor i in range(passwordnumber):\n pas += random.choice(letters)\n \nprint(pas)\n\noutputs #\nHow many characters do you want your password to be? 5\nAvfYm\n\n", "we can use the random .sample() method. it requires 2 arguments:\n- iterable of elements to use\n- number of elements to take\nthe result does not contain duplicates.\nimport random\nimport string\nletters=string.ascii_letters\npasswordnumber=int(input(\"How many characters do you want your password to be? \"))\npas = ''.join(random.sample(letters, k=passwordnumber))\nprint(pas)\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "loops", "printing", "python", "syntax" ]
stackoverflow_0074528585_loops_printing_python_syntax.txt
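Two details worth noting about the last answer: random.sample never repeats a character, which both caps the length at len(letters) and shrinks the space of possible passwords, and for anything security-sensitive the standard library's secrets module is the documented choice over random. A sketch using repetition-friendly, cryptographically strong picks:

import secrets
import string

letters = string.ascii_letters
passwordnumber = int(input("How many characters do you want your password to be? "))
password = ''.join(secrets.choice(letters) for _ in range(passwordnumber))
print(password)  # print() ends the line with a newline, so the shell's % marker disappears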
Q: " The 'from' keyword is not supported in this version of the language. " I am trying to run tkinter in my notebook, that has windows system, that is the problem I had, all the times that i tried That is the problem i found! I wanna run tkinter modules in my Python app. A: The 'from' keyword is not supported in this version of the language. is an error message from PowerShell, not Python. Make sure you're entering code into the Python interpreter, not the PowerShell PS command line.
" The 'from' keyword is not supported in this version of the language. "
I am trying to run tkinter on my notebook, which runs Windows. This is the error I get every time I try. I want to run tkinter modules in my Python app.
[ "\nThe 'from' keyword is not supported in this version of the language.\n\nis an error message from PowerShell, not Python.\nMake sure you're entering code into the Python interpreter, not the PowerShell PS command line.\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074528850_python_tkinter.txt
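Concretely, the fix is to save the code as a .py file and hand that file to the Python interpreter from PowerShell, rather than typing Python statements at the PS prompt. A minimal sketch with a hypothetical hello.py:

# hello.py
from tkinter import Tk, Label

root = Tk()
Label(root, text="Hello from tkinter").pack()
root.mainloop()

Then, in PowerShell: python hello.py (or py hello.py with the standard Windows launcher).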
Q: Django Many2Many constraint I am using Django with Django-Rest-Framework (no forms, no django_admin). I have the following models class Company(models.Model): ... class Sector(models.Model): ... company_id = models.ForeignKey(Company) employees = models.ManyToManyField(Employee) class Employee(models.Model): ... company_id = models.ForeignKey(Company) Employee can be in multiple Sectors and a Sector can have multiple Employees. (ManyToMany). I want to add a constraint employee.company_id == sector.company_id in order to add a record into the Many2Many table. I can do this validation in the serializers from the DRF, but I also want to handle this on the model level. I added a through table in the ManyToManyField class Sector(models.Model): ... company_id = models.ForeignKey(Company) employees = models.ManyToManyField(Employee, through='M2MTable') class M2MTable: ... def save(): # employee.company_id and sector.company_id validation is done here This will handle saving an M2MTable object, but however this will not handle related object references Sector.employees.add(Employee) From here I found out I can achieve this with m2m signals. Is there another way of handling this
Django Many2Many constraint
I am using Django with Django-Rest-Framework (no forms, no django_admin). I have the following models: class Company(models.Model): ... class Sector(models.Model): ... company_id = models.ForeignKey(Company) employees = models.ManyToManyField(Employee) class Employee(models.Model): ... company_id = models.ForeignKey(Company) An Employee can be in multiple Sectors and a Sector can have multiple Employees (ManyToMany). I want to add a constraint employee.company_id == sector.company_id in order to add a record into the Many2Many table. I can do this validation in the DRF serializers, but I also want to handle this on the model level. I added a through table in the ManyToManyField: class Sector(models.Model): ... company_id = models.ForeignKey(Company) employees = models.ManyToManyField(Employee, through='M2MTable') class M2MTable: ... def save(): # employee.company_id and sector.company_id validation is done here This will handle saving an M2MTable object; however, it will not handle related object references: Sector.employees.add(Employee) From here I found out I can achieve this with m2m signals. Is there another way of handling this?
[]
[]
[ "The through approach is good and valid on model level. Here is a related link:\nhttps://docs.djangoproject.com/en/dev/topics/db/models/#extra-fields-on-many-to-many-relationships\n" ]
[ -1 ]
[ "django", "django_orm", "django_rest_framework", "python" ]
stackoverflow_0074493162_django_django_orm_django_rest_framework_python.txt
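A sketch of the m2m_changed route mentioned at the end of the question, assuming the models shown there; the pre_add action fires before Sector.employees.add(...) commits, so raising there blocks the invalid link:

from django.core.exceptions import ValidationError
from django.db.models.signals import m2m_changed
from django.dispatch import receiver

@receiver(m2m_changed, sender=Sector.employees.through)
def validate_same_company(sender, instance, action, pk_set, **kwargs):
    if action == "pre_add":
        # employees being added must share the sector's company
        mismatched = Employee.objects.filter(pk__in=pk_set).exclude(
            company_id=instance.company_id)
        if mismatched.exists():
            raise ValidationError(
                "Employees must belong to the same company as the sector.")

This covers sector.employees.add(...); adds made from the Employee side arrive with reverse=True and instance being the Employee, so a complete guard would also branch on the reverse flag from kwargs.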
Q: multiprocessing vs. threading for communicating from osc server to gui I'm currently quite undecided on what is actually the best approach to tackle this problem. Assuming the program only consists of: GUI using imgui and glfw OSC Server that listens for incoming messages The gui cannot block and the osc server constantly needs to be able receive new messages. So first of all, would it be sufficient to use a thread for the osc server that needs to run in parallel with the gui or would a separate process be better? Then there's the communication aspect, i don't want to ui to be blocked but still have the newest available data from the osc server. How can this be achieved? Is there something better than a queue? I don't really mind if the gui skips one or two new values, as i'm smoothing anyway, but gui should never have to wait to acquire the mutex. Also recently learned about the GIL in python, is there anything related that i need to take into account? Would appreciate any input on this :). A: I'm in the same situation. Did you make some leeway? My experience is/was that if I run the osc server code in a seperate thread it doesn't run reliable and was missing messages, or wasn't completing tasks initiated by received message. Where if i remove the gui and run the server in the main thread everything works perfect. For now i'm just making console apps but it would be nice to give them a ui for settings and such
multiprocessing vs. threading for communicating from osc server to gui
I'm currently quite undecided on what is actually the best approach to tackle this problem. Assuming the program only consists of: GUI using imgui and glfw OSC Server that listens for incoming messages The GUI cannot block, and the OSC server constantly needs to be able to receive new messages. So first of all, would it be sufficient to use a thread for the OSC server that needs to run in parallel with the GUI, or would a separate process be better? Then there's the communication aspect: I don't want the UI to be blocked, but I still want the newest available data from the OSC server. How can this be achieved? Is there something better than a queue? I don't really mind if the GUI skips one or two new values, as I'm smoothing anyway, but the GUI should never have to wait to acquire the mutex. I also recently learned about the GIL in Python; is there anything related that I need to take into account? I would appreciate any input on this :).
[ "I'm in the same situation.\nDid you make some leeway?\nMy experience is/was that if I run the osc server code in a seperate thread it doesn't run reliable and was missing messages, or wasn't completing tasks initiated by received message.\nWhere if i remove the gui and run the server in the main thread everything works perfect.\nFor now i'm just making console apps but it would be nice to give them a ui for settings and such\n" ]
[ 0 ]
[]
[]
[ "architecture", "multithreading", "python", "python_multiprocessing", "user_interface" ]
stackoverflow_0070267294_architecture_multithreading_python_python_multiprocessing_user_interface.txt
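A common shape for this, assuming the python-osc package: run the OSC server in a daemon thread, push values into a queue.Queue from the handler, and drain the queue non-blockingly once per GUI frame, keeping only the newest value so the UI never waits:

import queue
import threading
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import ThreadingOSCUDPServer

values = queue.Queue()

def on_message(address, *args):
    values.put(args)  # runs on the server thread

dispatcher = Dispatcher()
dispatcher.map("/fader", on_message)  # "/fader" is an assumed example address
server = ThreadingOSCUDPServer(("127.0.0.1", 9000), dispatcher)
threading.Thread(target=server.serve_forever, daemon=True).start()

def poll_latest():
    latest = None
    while True:  # drain everything queued since the last frame
        try:
            latest = values.get_nowait()
        except queue.Empty:
            break
    return latest  # None means no new data this frame; keep the previous value

Dropping intermediate values in poll_latest() matches the question's tolerance for skipping one or two readings, and the GUI loop never blocks on a lock.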
Q: Save pandas on spark API dataframe to a new table in azure databricks Context: I have a dataframe that I queried using SQl. From this query, I saved to a dataframe using pandas on spark API. Now, after some transformations, I'd like to save this new dataframe on a new table at a given database. Example: spark = SparkSession.builder.appName('transformation').getOrCreate() df_final = spark.sql("SELECT * FROM table") df_final = ps.DataFrame(df_final) ## Write Frame out as Table spark_df_final = spark.createDataFrame(df_final) spark_df_final.write.mode("overwrite").saveAsTable("new_database.new_table") but this doesn't work. How can I save a pandas on spark API dataframe directly to a new table in a database (this database doesn't exist yet) Thanks A: You can use the following procedure. I have the following demo table. You can convert it to pandas dataframe of spark API using the following code: df_final = spark.sql("SELECT * FROM demo") pdf = df_final.to_pandas_on_spark() #print(type(pdf)) #<class 'pyspark.pandas.frame.DataFrame'> Now after performing your required operations on this pandas dataframe on spark API, you can convert it back to spark dataframe using the following code: spark_df = pdf.to_spark() print(type(spark_df)) display(spark_df) Now to write this dataframe to a table into a new database, you have to first create the database first and then write the dataframe to table. spark.sql("create database newdb") spark_df.write.mode("overwrite").saveAsTable("newdb.new_table") You can see that the table is written to the new database. The following is a reference image of the same:
Save pandas on spark API dataframe to a new table in azure databricks
Context: I have a dataframe that I queried using SQL. From this query, I saved to a dataframe using the pandas-on-Spark API. Now, after some transformations, I'd like to save this new dataframe to a new table in a given database. Example: spark = SparkSession.builder.appName('transformation').getOrCreate() df_final = spark.sql("SELECT * FROM table") df_final = ps.DataFrame(df_final) ## Write Frame out as Table spark_df_final = spark.createDataFrame(df_final) spark_df_final.write.mode("overwrite").saveAsTable("new_database.new_table") but this doesn't work. How can I save a pandas-on-Spark API dataframe directly to a new table in a database (this database doesn't exist yet)? Thanks
[ "You can use the following procedure. I have the following demo table.\n\n\nYou can convert it to pandas dataframe of spark API using the following code:\n\ndf_final = spark.sql(\"SELECT * FROM demo\")\npdf = df_final.to_pandas_on_spark()\n#print(type(pdf))\n#<class 'pyspark.pandas.frame.DataFrame'>\n\n\nNow after performing your required operations on this pandas dataframe on spark API, you can convert it back to spark dataframe using the following code:\n\nspark_df = pdf.to_spark()\nprint(type(spark_df))\ndisplay(spark_df)\n\n\n\nNow to write this dataframe to a table into a new database, you have to first create the database first and then write the dataframe to table.\n\nspark.sql(\"create database newdb\")\nspark_df.write.mode(\"overwrite\").saveAsTable(\"newdb.new_table\")\n\n\n\nYou can see that the table is written to the new database. The following is a reference image of the same:\n\n\n" ]
[ 1 ]
[]
[]
[ "apache_spark", "azure", "databricks", "python" ]
stackoverflow_0074490859_apache_spark_azure_databricks_python.txt
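The database creation and the write can also be made idempotent so reruns don't fail; a sketch following the answer's flow:

spark.sql("CREATE DATABASE IF NOT EXISTS newdb")
spark_df = pdf.to_spark()  # pandas-on-Spark frame back to a Spark DataFrame
spark_df.write.mode("overwrite").saveAsTable("newdb.new_table")

CREATE DATABASE IF NOT EXISTS avoids the error that spark.sql("create database newdb") would raise when the database already exists from a previous run.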
Q: Accepting cookies when web scraping Related to a previous question I am trying to edit the answer to apply to another website, but can't get it to work. What I want to do here is to accept the cookie, and then extract the information from the table. (I also want to scrape the table for all of 2021 later, so any tips on how to proceed there is welcomed too). from selenium import webdriver import time from bs4 import BeautifulSoup import pandas as pd from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_experimental_option("detach", True)#optional webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service,options=options) data = [] driver.get('https://www.nordpoolgroup.com/en/Market-data1/Power-system-data/Consumption1/Consumption-prognosis/SE/Hourly/?view=table') time.sleep(3) cookie = tbutton = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class="pure-button"]'))).click() driver.execute_script("arguments[0].click();", tbutton) time.sleep(1) soup = BeautifulSoup(driver.page_source,"html.parser") df = pd.read_html(str(soup))[0] print(df) JavascriptException: javascript error: Cannot read properties of null (reading 'click') (Session info: chrome=107.0.5304.107) I inspected the "I accept cookies" button and it seems that "pure-button" should be inserted in the class field. What could be the issue here? Thank you. A: The click() method returns null, so this expression WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class="pure-button"]'))).click() returns null, so cookie and tbutton are null objects. Then you trying to click a null object with driver.execute_script("arguments[0].click();", tbutton) and this line gives you an error. So, in your code you should remove cookie = tbutton = and driver.execute_script("arguments[0].click();", tbutton) while this line WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class="pure-button"]'))).click() is enough. It should close the cookies. That's it. Also, this line can be improved. Since you are clicking that button it's better to use element_to_be_clickable expected condition than visibility_of_element_located. So, I'd advice this line to be: WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@class="pure-button"]'))).click()
Accepting cookies when web scraping
Related to a previous question I am trying to edit the answer to apply to another website, but can't get it to work. What I want to do here is to accept the cookie, and then extract the information from the table. (I also want to scrape the table for all of 2021 later, so any tips on how to proceed there is welcomed too). from selenium import webdriver import time from bs4 import BeautifulSoup import pandas as pd from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC options = webdriver.ChromeOptions() options.add_experimental_option("detach", True)#optional webdriver_service = Service("./chromedriver") #Your chromedriver path driver = webdriver.Chrome(service=webdriver_service,options=options) data = [] driver.get('https://www.nordpoolgroup.com/en/Market-data1/Power-system-data/Consumption1/Consumption-prognosis/SE/Hourly/?view=table') time.sleep(3) cookie = tbutton = WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class="pure-button"]'))).click() driver.execute_script("arguments[0].click();", tbutton) time.sleep(1) soup = BeautifulSoup(driver.page_source,"html.parser") df = pd.read_html(str(soup))[0] print(df) JavascriptException: javascript error: Cannot read properties of null (reading 'click') (Session info: chrome=107.0.5304.107) I inspected the "I accept cookies" button and it seems that "pure-button" should be inserted in the class field. What could be the issue here? Thank you.
[ "The click() method returns null, so this expression WebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class=\"pure-button\"]'))).click() returns null, so cookie and tbutton are null objects.\nThen you trying to click a null object with driver.execute_script(\"arguments[0].click();\", tbutton) and this line gives you an error.\nSo, in your code you should remove cookie = tbutton = and driver.execute_script(\"arguments[0].click();\", tbutton) while this line\nWebDriverWait(driver, 20).until(EC.visibility_of_element_located((By.XPATH, '//*[@class=\"pure-button\"]'))).click()\n\nis enough. It should close the cookies.\nThat's it.\nAlso, this line can be improved.\nSince you are clicking that button it's better to use element_to_be_clickable expected condition than visibility_of_element_located. So, I'd advice this line to be:\nWebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, '//*[@class=\"pure-button\"]'))).click()\n\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "webdriverwait" ]
stackoverflow_0074528831_python_selenium_selenium_webdriver_webdriverwait.txt
Q: Can Cython compiled .so extensions be imported into other languages, eg. Java? I'm in the process of learning Cython and I wasn't able to find a direct answer to this. Also please bear with me as my understanding of C is limited as of now. As far as I understand, with the cythonize command, .pyx files are converted to C, and are compiled to platform-specific libraries (.so / .pxd). My questions are: If .pyx files are fully converted to C, does it mean that these generated extensions are no longer dependent on the Python runtime after compilation? Assuming the same host architecture, can these generated extensions be loaded into other languages, eg. via Java's JNI? If so, are there any hello world examples on this? A: Cython extensions are fully C, but they heavily use the Python C API. This means they can't run independently of libpython (and usually the Python standard library). However, it is possible to load libpython into other languages and then use a Cython extension. Also bear in mind that anything you import within Cython is not compiled but needs to be available for import. I don't plan to answer this fully, but both ImageJ (in Java) and Julia do support Python interoperability layers, so Cython extensions would work there. You're much better off searching for "Python-Java interoperability layer" than trying to create your own way of using Cython specifically.
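To make the libpython dependence concrete, here is a rough Python/ctypes sketch of the steps an embedding host (e.g. JNI native code) must mirror before a Cython-built extension can run; the libpython soname and the my_ext module are hypothetical stand-ins, and the exact names vary by platform and Python version:

import ctypes

# 1. The Python runtime must be loaded with its symbols globally visible,
#    because the Cython extension references the Python C API.
libpython = ctypes.CDLL("libpython3.10.so", mode=ctypes.RTLD_GLOBAL)  # hypothetical soname
libpython.Py_Initialize()  # real C-API call: start the interpreter

# 2. Only now can the extension be used; PyRun_SimpleString is the simplest
#    C-API entry point for exercising it (my_ext must be importable).
libpython.PyRun_SimpleString(b"import my_ext; my_ext.hello()")  # my_ext is hypothetical

libpython.Py_FinalizeEx()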
Can Cython compiled .so extensions be imported into other languages, eg. Java?
I'm in the process of learning Cython and I wasn't able to find a direct answer to this. Also please bear with me as my understanding of C is limited as of now. As far as I understand, with the cythonize command, .pyx files are converted to C, and are compiled to platform-specific libraries (.so / .pxd). My questions are: If .pyx files are fully converted to C, does it mean that these generated extensions are no longer dependent on python runtime after compilation? Assuming the same host architecture, can these generated extensions be loaded into other languages, eg. via Java's JNI? If so, are there any hello world examples on this?
[ "\nCython extensions are fully C, but they heavily use the Python C API. This means they can't be run independent of libpython (and usually the Python standard library). However it is possible to load libpython into other languages and then use a Cython extension. Also bear in mind that anything you import within Cython is not compiled but needs to be available for import.\n\nI don't plan to answer this fully, but both ImageJ (in Java) and Julia do support Python interoperability layers and do Cython extensions would work there. You're much better off searching for \"Python-Java interoperability layer\" than trying to create your own way of using Cython specifically.\n\n\n" ]
[ 1 ]
[]
[]
[ "cython", "java_native_interface", "python" ]
stackoverflow_0074528482_cython_java_native_interface_python.txt
Q: how to shift non nan value in multiple columns row wise by group? So I have a data frame as below: A1 A2 A3 A4 A5 A6 1 nan 3 7 nan 8 nan 5 nan 11 9 nan 54 6 84 12 3 nan 10 nan nan 16 nan 45 12 93 13 31 5 91 73 nan 45 nan nan 9 I want to shift the whole data frame n rows such that nan rows are skipped but still preserved. Desired output for n = 2: A1 A2 A3 A4 A5 A6 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan 7 nan nan 1 nan nan 11 nan nan 54 5 3 12 9 8 10 nan 84 nan nan 45 I tried the following: df['dummy'] = df.apply(lambda x: 1 if pd.notnull(x[column]) else 0, axis=1) df['dummy2'] = df.groupby(['dummy'])[column].shift(n) df[column] = df.apply(lambda x: x['dummy2'] if x['dummy']==1 else x[column], axis=1) which is fine if there are only a few columns I need to shift. I also tried the applymap function dummy_df = df.applymap(lambda x: 1 if pd.notnull(x) else 0) which returns a dummy data frame separating the groups I want to shift; I just have no idea what to do next with it. The problem is that there are thousands of columns I need to shift row-wise. Is there any way I can do this with minimal looping? And is there any way to do it with a groupby function using dummy_df? A: try this: tmp = df.apply( lambda s: s.sort_values( key=lambda v: pd.notnull(v) ).values ) res = tmp.shift(2) res A: Use a lambda function with Series.dropna and Series.shift: df = df.apply(lambda x: x.dropna().shift(2)) print (df) A1 A2 A3 A4 A5 A6 0 NaN NaN NaN NaN NaN NaN 1 NaN NaN NaN NaN NaN NaN 2 NaN NaN NaN 7.0 NaN NaN 3 1.0 NaN NaN 11.0 NaN NaN 4 54.0 5.0 3.0 12.0 9.0 8.0 5 10.0 NaN 84.0 NaN NaN 45.0
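A runnable version of the second answer's approach, with the question's data reconstructed as a frame (a sketch -- verify against your real data):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A1': [1, np.nan, 54, 10, 12, 73],
    'A2': [np.nan, 5, 6, np.nan, 93, np.nan],
    'A3': [3, np.nan, 84, np.nan, 13, 45],
    'A4': [7, 11, 12, 16, 31, np.nan],
    'A5': [np.nan, 9, 3, np.nan, 5, np.nan],
    'A6': [8, np.nan, np.nan, 45, 91, 9],
})
n = 2
# Per column: drop the NaNs, shift the surviving values down by n, and let
# pandas realign on the original index -- the NaN positions stay NaN.
shifted = df.apply(lambda col: col.dropna().shift(n))
print(shifted)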
how to shift non nan value in multiple columns row wise by group?
so i have data frame as below A1 A2 A3 A4 A5 A6 1 nan 3 7 nan 8 nan 5 nan 11 9 nan 54 6 84 12 3 nan 10 nan nan 16 nan 45 12 93 13 31 5 91 73 nan 45 nan nan 9 i want to shift the whole data frame n rows such that it skips nan rows but still preserve it. desire output: for n =2 A1 A2 A3 A4 A5 A6 nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan 7 nan nan 1 nan nan 11 nan nan 54 5 3 12 9 8 10 nan 84 nan nan 45 i tried the following: df['dummy'] = df.apply(lambda x: 1 if pd.notnull(x[column]) else 0, axis=1) df['dummy2'] = df.groupby(['dummy'])[column].shift(n) df[column] = df.apply(lambda x: x['dummy2'] if x['dummy']==1 else x[column], axis=1) which is good if there is only a few columns i need to shift. i also tried the applymap function dummy_df = df.applymap(lambda x: 1 if pd.notnull(x) else 0) which returns a dummy data frame to separate groups that i want to shift, just have no idea what to do next with it. the problem is that there are thousands of columns i need to shift row wise. Is there any ways i can do this using minimum loops? And are there any ways to do it with groupby function using dummy_df?
[ "try this:\ntmp = df.apply(\n lambda s: s.sort_values(\n key=lambda v: pd.notnull(v)\n ).values\n )\nres = tmp.shift(2)\nres\n\n", "Use lambda function with Series.dropna and Series.shift:\ndf = df.apply(lambda x: x.dropna().shift(2))\n\nprint (df)\n A1 A2 A3 A4 A5 A6\n0 NaN NaN NaN NaN NaN NaN\n1 NaN NaN NaN NaN NaN NaN\n2 NaN NaN NaN 7.0 NaN NaN\n3 1.0 NaN NaN 11.0 NaN NaN\n4 54.0 5.0 3.0 12.0 9.0 8.0\n5 10.0 NaN 84.0 NaN NaN 45.0\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "shift" ]
stackoverflow_0074527018_pandas_python_shift.txt
Q: Why can't My GUI program built by PyQt5 show? I referred to article 1 to build my GUI with PyQt5. The difference between the article's program and mine is the module <img_controller.py>. When I initialize my img_controller instance, I only need the parameter ui (the class I got from Qt Designer), and my img_controller will revise the attributes of ui. In article 1, the parameters of img_controller.py are initialized directly from the input attributes of ui. When I run the program from article 1, it works normally; but when I run my program, I can't get the main window, and the error message says "AttributeError: 'Img_controller' object has no attribute 'ui'". I don't know where my problem is, because in the __init__ function of the Img_controller class I state "self.ui = ui". Can anyone tell me the problem? Thank you very much. The following is my program: UI.py from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(1085, 857) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.verticalLayoutWidget = QtWidgets.QWidget(self.centralwidget) self.verticalLayoutWidget.setGeometry(QtCore.QRect(110, 20, 861, 491)) self.verticalLayoutWidget.setObjectName("verticalLayoutWidget") self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget) self.verticalLayout.setContentsMargins(0, 0, 0, 0) self.verticalLayout.setObjectName("verticalLayout") self.scrollArea = QtWidgets.QScrollArea(self.verticalLayoutWidget) self.scrollArea.setWidgetResizable(True) self.scrollArea.setObjectName("scrollArea") # self.scrollAreaWidgetContents = QtWidgets.QWidget() # self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 857, 487)) # self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents") self.image_label = QtWidgets.QLabel(self.scrollArea) # changed here self.image_label.setGeometry(QtCore.QRect(10, 10, 841, 471)) self.image_label.setObjectName("image_label") self.scrollArea.setWidget(self.image_label) # changed here self.verticalLayout.addWidget(self.scrollArea) self.btn_zoomin = QtWidgets.QPushButton(self.centralwidget) self.btn_zoomin.setGeometry(QtCore.QRect(330, 530, 75, 23)) self.btn_zoomin.setObjectName("btn_zoomin") self.btn_zoomout = QtWidgets.QPushButton(self.centralwidget) self.btn_zoomout.setGeometry(QtCore.QRect(640, 530, 75, 23)) self.btn_zoomout.setObjectName("btn_zoomout") self.slider = QtWidgets.QSlider(self.centralwidget) self.slider.setGeometry(QtCore.QRect(440, 530, 160, 22)) self.slider.setOrientation(QtCore.Qt.Horizontal) self.slider.setObjectName("slider") self.btn_open = QtWidgets.QPushButton(self.centralwidget) self.btn_open.setGeometry(QtCore.QRect(140, 530, 75, 23)) self.btn_open.setObjectName("btn_open") self.label_resolution = QtWidgets.QLabel(self.centralwidget) self.label_resolution.setGeometry(QtCore.QRect(770, 530, 75, 15)) self.label_resolution.setObjectName("label_resolution") self.label_filename = QtWidgets.QLabel(self.centralwidget) self.label_filename.setGeometry(QtCore.QRect(130, 660, 111, 41)) self.label_filename.setObjectName("label_filename") self.label_img_shape = QtWidgets.QLabel(self.centralwidget) self.label_img_shape.setGeometry(QtCore.QRect(540, 620, 411, 51)) self.label_img_shape.setObjectName("label_img_shape") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 1085, 21))
self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.image_label.setText(_translate("MainWindow", "image")) self.btn_zoomin.setText(_translate("MainWindow", "zoom_in")) self.btn_zoomout.setText(_translate("MainWindow", "zoom_out")) self.btn_open.setText(_translate("MainWindow", "open file")) self.label_resolution.setText(_translate("MainWindow", "TextLabel")) self.label_filename.setText(_translate("MainWindow", "file_name")) self.label_img_shape.setText(_translate("MainWindow", "TextLabel")) img_controller.py from PyQt5 import QtCore, QtGui import cv2 from UI import Ui_MainWindow class Img_controller(object): def __init__(self, ui:Ui_MainWindow, img_ratio:int = 50): super(Img_controller, self).__init__() self.img_path = 'sad.jpg' self.img_ratio = img_ratio self.read_img(self.img_path) self.ui = ui def read_img(self,img_path): try: self.img = cv2.imread(img_path) self.orig_h, self.orig_w, self.orig_c = self.img.shape self.img_path = img_path except: self.img = cv2.imread(self.img_path) self.orig_h, self.orig_w, self.orig_c = self.img.shape bytesPerline = self.orig_h*self.orig_c self.qimg = QtGui.QImage(self.img, self.orig_w, self.orig_h, bytesPerline, QtGui.QImage.Format_RGB888).rgbSwapped() self.origin_qpixmap = QtGui.QPixmap.fromImage(self.qimg) self.img_ratio = 50 self.set_img_ratio() def set_img_ratio(self): self.img_ratio = pow(10, (self.img_ratio - 50)/50) qpixmap_height = self.orig_h * self.img_ratio self.qpixmap = self.origin_qpixmap.scaledToHeight(qpixmap_height) # update the display on the UI self.__update_img() self.__update_text_ratio() self.__update_text_img_shape() self.__update_text_file_path() def __update_img(self): self.ui.image_label.setPixmap(self.qpixmap) self.ui.image_label.setAlignment(QtCore.Qt.AlignLeft | QtCore.Qt.AlignTop) def __update_text_file_path(self): self.ui.label_filename.setText(f"File path = {self.img_path}") def __update_text_ratio(self): self.ui.label_resolution.setText(f"{int(100*self.img_ratio)} %") def __update_text_img_shape(self): current_text = f"Current img shape = ({self.qpixmap.width()}, {self.qpixmap.height()})" origin_text = f"Origin img shape = ({self.origin_width}, {self.origin_height})" self.ui.label_img_shape.setText(current_text+"\t"+origin_text) def set_zoom_in(self): self.img_ratio = max(0, self.img_ratio - 1) self.set_img_ratio() def set_zoom_out(self): self.img_ratio = min(100, self.img_ratio + 1) self.set_img_ratio() def set_slider_value(self, value): self.img_ratio = value self.set_img_ratio() controller.py from PyQt5 import QtCore,QtWidgets,QtGui from PyQt5.QtWidgets import QMainWindow,QFileDialog from img_controller import Img_controller from UI import Ui_MainWindow class Ui_controller(QMainWindow): def __init__(self): super(Ui_controller,self).__init__() self.ui = Ui_MainWindow() self.ui.setupUi(self) self.setup_control() def setup_control(self): self.img_controller = Img_controller(ui = self.ui) self.ui.btn_open.clicked.connect(self.open_file) self.ui.btn_zoomin.clicked.connect(self.img_controller.set_zoom_in) self.ui.btn_zoomout.clicked.connect(self.img_controller.set_zoom_out) self.ui.slider.valueChanged.connect(self.getslidervalue) def
open_file(self): filename, filetype = QFileDialog.getOpenFileName(self, "Open file", "./") # start path self.init_new_picture(filename) def init_new_picture(self, filename): self.ui.slider.setProperty("value", 50) self.img_controller.read_img(filename) def getslidervalue(self): self.img_controller.set_slider_value(self.ui.slider.value()+1) A: What you DIDN'T say was the key piece of information -- the rest of the traceback. Notice that Img_controller.__init__ calls self.read_img, which calls self.set_img_ratio, which calls self.__update_img, which uses self.ui, and that all happens BEFORE you set self.ui. You need to swap the order of that initialization.
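A minimal sketch of the fix the answer points to: assign self.ui before read_img() runs, because read_img() -> set_img_ratio() -> __update_img() all dereference self.ui:

class Img_controller(object):
    def __init__(self, ui: Ui_MainWindow, img_ratio: int = 50):
        super(Img_controller, self).__init__()
        self.ui = ui                  # set this FIRST: read_img() below ends up
        self.img_path = 'sad.jpg'     # calling methods that use self.ui
        self.img_ratio = img_ratio
        self.read_img(self.img_path)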
Why can't My GUI program built by PyQt5 show?
I refer to the article1 to build my GUI by PyQt5,The difference between the program of the article and mine is the module <img_controller.py>. When I initilize my img_controller instance,I only need the parameter ui(the class I got from Qtdesigner)and my program ,img_controller. will revise the attributes of ui. Initialize the parameters of img_controller.py according to 1 are directed inputted attributes of ui. When I run the program got from 1, it can work normally; but I run my program, I can't get the mainwindow and the wrong message hints that "AttributeError: 'Img_controller' object has no attribute 'ui'".I don't know where is my problem, because in the function __ init __ of Img_controller(class), I state that "self.ui = ui",anyone can tell me the problem, thank you very much. The following is my program: UI.py from PyQt5 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(1085, 857) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.verticalLayoutWidget = QtWidgets.QWidget(self.centralwidget) self.verticalLayoutWidget.setGeometry(QtCore.QRect(110, 20, 861, 491)) self.verticalLayoutWidget.setObjectName("verticalLayoutWidget") self.verticalLayout = QtWidgets.QVBoxLayout(self.verticalLayoutWidget) self.verticalLayout.setContentsMargins(0, 0, 0, 0) self.verticalLayout.setObjectName("verticalLayout") self.scrollArea = QtWidgets.QScrollArea(self.verticalLayoutWidget) self.scrollArea.setWidgetResizable(True) self.scrollArea.setObjectName("scrollArea") # self.scrollAreaWidgetContents = QtWidgets.QWidget() # self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, 0, 857, 487)) # self.scrollAreaWidgetContents.setObjectName("scrollAreaWidgetContents") self.image_label = QtWidgets.QLabel(self.scrollArea) #ๆญค่™•ๆœ‰ๆ›ดๅ‹• self.image_label.setGeometry(QtCore.QRect(10, 10, 841, 471)) self.image_label.setObjectName("image_label") self.scrollArea.setWidget(self.image_label) #ๆญค่™•ๆœ‰ๆ›ดๅ‹• self.verticalLayout.addWidget(self.scrollArea) self.btn_zoomin = QtWidgets.QPushButton(self.centralwidget) self.btn_zoomin.setGeometry(QtCore.QRect(330, 530, 75, 23)) self.btn_zoomin.setObjectName("btn_zoomin") self.btn_zoomout = QtWidgets.QPushButton(self.centralwidget) self.btn_zoomout.setGeometry(QtCore.QRect(640, 530, 75, 23)) self.btn_zoomout.setObjectName("btn_zoomout") self.slider = QtWidgets.QSlider(self.centralwidget) self.slider.setGeometry(QtCore.QRect(440, 530, 160, 22)) self.slider.setOrientation(QtCore.Qt.Horizontal) self.slider.setObjectName("slider") self.btn_open = QtWidgets.QPushButton(self.centralwidget) self.btn_open.setGeometry(QtCore.QRect(140, 530, 75, 23)) self.btn_open.setObjectName("btn_open") self.label_resolution = QtWidgets.QLabel(self.centralwidget) self.label_resolution.setGeometry(QtCore.QRect(770, 530, 75, 15)) self.label_resolution.setObjectName("label_resolution") self.label_filename = QtWidgets.QLabel(self.centralwidget) self.label_filename.setGeometry(QtCore.QRect(130, 660, 111, 41)) self.label_filename.setObjectName("label_filename") self.label_img_shape = QtWidgets.QLabel(self.centralwidget) self.label_img_shape.setGeometry(QtCore.QRect(540, 620, 411, 51)) self.label_img_shape.setObjectName("label_img_shape") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 1085, 21)) self.menubar.setObjectName("menubar") 
MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.image_label.setText(_translate("MainWindow", "image")) self.btn_zoomin.setText(_translate("MainWindow", "zoom_in")) self.btn_zoomout.setText(_translate("MainWindow", "zoom_out")) self.btn_open.setText(_translate("MainWindow", "open file")) self.label_resolution.setText(_translate("MainWindow", "TextLabel")) self.label_filename.setText(_translate("MainWindow", "file_name")) self.label_img_shape.setText(_translate("MainWindow", "TextLabel")) img_controller.py from PyQt5 import QtCore, QtGui import cv2 from UI import Ui_MainWindow class Img_controller(object): def __init__(self, ui:Ui_MainWindow, img_ratio:int = 50): super(Img_controller, self).__init__() self.img_path = 'sad.jpg' self.img_ratio = img_ratio self.read_img(self.img_path) self.ui = ui def read_img(self,img_path): try: self.img = cv2.imread(img_path) self.orig_h, self.orig_w, self.orig_c = self.img.shape self.img_path = img_path except: self.img = cv2.imread(self.img_path) self.orig_h, self.orig_w, self.orig_c = self.img.shape bytesPerline = self.orig_h*self.orig_c self.qimg = QtGui.QImage(self.img, self.orig_w, self.orig_h, bytesPerline, QtGui.QImage.Format_RGB888).rgbSwapped() self.origin_qpixmap = QtGui.QPixmap.fromImage(self.qimg) self.img_ratio = 50 self.set_img_ratio() def set_img_ratio(self): self.img_ratio = pow(10, (self.img_ratio - 50)/50) qpixmap_height = self.orig_h * self.img_ratio self.qpixmap = self.origin_qpixmap.scaledToHeight(qpixmap_height) #ๆ›ดๆ–ฐUIไป‹้ขไธŠ็š„้กฏ็คบ self.__update_img() self.__update_text_ratio() self.__update_text_img_shape() self.__update_text_file_path() def __update_img(self): self.ui.image_label.setPixmap(self.qpixmap) self.ui.image_label.setAlignment(QtCore.Qt.AlignLeft | QtCore.Qt.AlignTop) def __update_text_file_path(self): self.ui.label_filename.setText(f"File path = {self.img_path}") def __update_text_ratio(self): self.ui.label_resolution.setText(f"{int(100*self.img_ratio)} %") def __update_text_img_shape(self): current_text = f"Current img shape = ({self.qpixmap.width()}, {self.qpixmap.height()})" origin_text = f"Origin img shape = ({self.origin_width}, {self.origin_height})" self.ui.label_img_shape.setText(current_text+"\t"+origin_text) def set_zoom_in(self): self.img_ratio = max(0, self.img_ratio - 1) self.set_img_ratio() def set_zoom_out(self): self.img_ratio = min(100, self.img_ratio + 1) self.set_img_ratio() def set_slider_value(self, value): self.img_ratio = value self.set_img_ratio() controller.py from PyQt5 import QtCore,QtWidgets,QtGui from PyQt5.QtWidgets import QMainWindow,QFileDialog from img_controller import Img_controller from UI import Ui_MainWindow class Ui_controller(QMainWindow): def __init__(self): super(Ui_controller,self).__init__() self.ui = Ui_MainWindow() self.ui.setupUi(self) self.setup_control() def setup_control(self): self.img_controller = Img_controller(ui = self.ui) self.ui.btn_open.clicked.connect(self.open_file) self.ui.btn_zoomin.clicked.connect(self.img_controller.set_zoom_in) self.ui.btn_zoomout.clicked.connect(self.img_controller.set_zoom_out) self.ui.slider.valueChanged.connect(self.getslidervalue) def open_file(self): filename, filetype = 
QFileDialog.getOpenFileName(self, "Open file", "./") # start path self.init_new_picture(filename) def init_new_picture(self, filename): self.ui.slider.setProperty("value", 50) self.img_controller.read_img(filename) def getslidervalue(self): self.img_controller.set_slider_value(self.ui.slider.value()+1)
[ "What you DIDN'T say was the key piece of information -- the rest of the traceback. Notice that Img_controller.__init__ calls self.read_img, which calls self.set_img_ratio, which calls self.__update_img, which uses self.ui, and that all happens BEFORE you set self.ui. You need to swap the order of that initialization.\n" ]
[ 0 ]
[]
[]
[ "pyqt5", "python" ]
stackoverflow_0074528626_pyqt5_python.txt
Q: Python mongodb/motor "'ObjectId' object is not iterable" error while trying to find item in collection I know that there are similar questions, but I've tried everything that was advised and still getting an error. I'm trying to fetch an item from a mongo collection by id, converting the string to an ObjectId, like this: from bson import ObjectId async def get_single_template(db, template_id): template = await db.templates.find_one({ '_id': ObjectId(template_id) }) return template And I'm getting an error: ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')] "template_id" is a valid string, like "601401887ecf2f6153bbaaad". The ObjectId created from it is valid too. It fails only inside the find_one() method. When I'm using find() with that id it works well. I've tried from bson.objectid import ObjectId too - no difference. I'm using the motor library to access mongo. Is there something that I'm missing? P.S. Links to the corresponding docs: https://pymongo.readthedocs.io/en/stable/tutorial.html#querying-by-objectid Though I'm using the motor async library, I can't find direct examples in its docs. Basically, it wraps pymongo. I can only find examples in others' source code. A: Well, I've found out what caused that issue. The problem was not in the way I queried the data, but in the way I returned it. I had forgotten to convert the ObjectId to a string in the entity retrieved from the database and tried to return it 'as is'. My bad. A: I encountered this problem as well while using Python motor for MongoDB. In your app.collection.find_one(...), add {'_id': 0} along with the dictionary which has the value you want to search for. So it should be like this: await app.collection.find_one({"value": val},{'_id': 0}) A: After long research I found one solution that works for me. I tried many ways; let's discuss one of them. DATABASE_URL = mongodb://localhost:portname/yourdbname client = mongo_client.MongoClient( settings.DATABASE_URL#, ServerSelectionTimeoutMS=5000 ) db = client[settings.MONGO_INITDB_DATABASE] Post = db.post @router.get('/') async def posts(user_id: str = Depends(oauth2.require_user)): list = [] for i in Post.find(): list.append(i) print("list", list) return {'status': 'success', "list": list} Everything works when printed, but when I return the response it shows the error mentioned in the post. I solved this error by serializing ObjectId with pydantic's JSON encoders (which FastAPI uses natively). At the top of my file I just import: from bson.objectid import ObjectId import pydantic pydantic.json.ENCODERS_BY_TYPE[ObjectId]=str Note: I am not an expert in FastAPI with MongoDB; I just started learning FastAPI 5 days ago and MongoDB 2 days ago. If you have a better practice, let me know. I also tried serializing the data another way, but that did not work in my case either, so try this way. Thank you. This comment from GitHub helped me: https://github.com/tiangolo/fastapi/issues/1515#issuecomment-782838556
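A short sketch of the fix the asker describes in the first answer -- stringify the ObjectId before returning the document (this assumes the default _id key):

from bson import ObjectId

async def get_single_template(db, template_id: str):
    template = await db.templates.find_one({'_id': ObjectId(template_id)})
    if template is not None:
        # ObjectId is not JSON-serializable; convert it before returning
        # the document from an API endpoint.
        template['_id'] = str(template['_id'])
    return template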
Python mongodb/motor "'ObjectId' object is not iterable" error while trying to find item in collection
I know that there are similar questions, but I've tried everything that was advised and still getting an error. I'm trying to fetch item from mongo collection by id, converting string to an ObjectId, like that: from bson import ObjectId async def get_single_template(db, template_id): template = await db.templates.find_one({ '_id': ObjectId(template_id) }) return template And I'm getting an error: ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')] "template_id" is a valid string, like "601401887ecf2f6153bbaaad". ObjectId created from it - too. It fails only to work inside find_one() method. When I'm using find() with that id it works well. I've tried from bson.objectid import ObjectId too - no difference. I'm using motor library to access mongo. Is there something that I'm missing? P.S. Links to the corresponding docs: https://pymongo.readthedocs.io/en/stable/tutorial.html#querying-by-objectid Though I'm using motor async library, I can't find direct examples in it's docs. Basically, it wraps pymongo. I can only find examples in other's source code.
[ "Well, I've found out what caused that issue. The problem was not in the way I've tried to query data, but in the way I've tried to return it. I've forgotten to convert ObjectId to string in the entity that I've retrieved from database and tried to return it 'as is'. My bad.\n", "I encountered this problem as well while using Python motor for Mongodb. In your app.collection.find_one(...), add {'_id': 0} along with the dictionary which has the value you want to search for.\nSo it should be like this:\nawait app.collection.find_one({\"value\": val},{'_id': 0})\n\n", "After long research i find one solution work for me\nI try many ways let's discuss with you one of them\nDATABASE_URL = mongodb://localhost:portname/yourdbname\nclient = mongo_client.MongoClient(\n settings.DATABASE_URL#, ServerSelectionTimeoutMS=5000\n)\ndb = client[settings.MONGO_INITDB_DATABASE]\nPost = db.post\n\n@router.get('/')\nasync def posts(user_id: str = Depends(oauth2.require_user)):\nlist = []\nfor i in Post.find():\n list.append(i)\nprint(\"list\", list)\nreturn {'status': 'success', \"list\": list}\n\nEverything work on print but when i return the response then show me error that mentioned in post i solve this error by doing this\nserialize ObjectId with native fastApi methods\nat top of my file i just import\nfrom bson.objectid import ObjectId\nimport pydantic\npydantic.json.ENCODERS_BY_TYPE[ObjectId]=str\n\nNote:\nI am not expert of fastapi with MongoDB i just start learning last 5 days ago about fastapi and start learning mongoDb last 2 days ago. if you have any better practice then let me know\ni also trying to serialize data but on that case also not work so try this way\nthank you\nthis comment help me from github\nhttps://github.com/tiangolo/fastapi/issues/1515#issuecomment-782838556\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "fastapi", "mongodb", "pymongo", "python" ]
stackoverflow_0065970988_fastapi_mongodb_pymongo_python.txt
Q: How to add median value labels to a Seaborn boxplot using the hue argument In addition to the solution posted in this link I would also like to add the hue parameter, and add the median values to each of the boxes. The Current Code: testPlot = sns.boxplot(x='Pclass', y='Age', hue='Sex', data=trainData) m1 = trainData.groupby(['Pclass', 'Sex'])['Age'].median().values mL1 = [str(np.round(s, 2)) for s in m1] p1 = range(len(m1)) for tick, label in zip(p1, testPlot.get_xticklabels()): print(testPlot.text(p1[tick], m1[tick] + 1, mL1[tick])) Gives an output like: I'm working on the Titanic Dataset which can be found in this link. I'm getting the required values, but only via a print statement; how do I include them in my plot? A: Place your labels manually, offset according to the hue parameter and bar width for every category, in a loop over all xticklabels: import seaborn as sns import pandas as pd import numpy as np import matplotlib.pylab as plt trainData = pd.read_csv('titanic.csv') testPlot = sns.boxplot(x='pclass', y='age', hue='sex', data=trainData) m1 = trainData.groupby(['pclass', 'sex'])['age'].median().values mL1 = [str(np.round(s, 2)) for s in m1] ind = 0 for tick in range(len(testPlot.get_xticklabels())): testPlot.text(tick-.2, m1[ind+1]+1, mL1[ind+1], horizontalalignment='center', color='w', weight='semibold') testPlot.text(tick+.2, m1[ind]+1, mL1[ind], horizontalalignment='center', color='w', weight='semibold') ind += 2 plt.show() A: This answer is nearly copy & pasted from here but fits your example code better. The linked answer is IMHO a bit misplaced there because that question is just about labeling a boxplot and not about a boxplot using the hue argument. I couldn't use your Train dataset because it is not available as a Python package. So I used Titanic instead, which has nearly the same column names. #!/usr/bin/env python3 import pandas as pd import matplotlib import matplotlib.patheffects as path_effects import seaborn as sns def add_median_labels(ax, fmt='.1f'): """Credits: https://stackoverflow.com/a/63295846/4865723 """ lines = ax.get_lines() boxes = [c for c in ax.get_children() if type(c).__name__ == 'PathPatch'] lines_per_box = int(len(lines) / len(boxes)) for median in lines[4:len(lines):lines_per_box]: x, y = (data.mean() for data in median.get_data()) # choose value depending on horizontal or vertical plot orientation value = x if (median.get_xdata()[1] - median.get_xdata()[0]) == 0 else y text = ax.text(x, y, f'{value:{fmt}}', ha='center', va='center', fontweight='bold', color='white') # create median-colored border around white text for contrast text.set_path_effects([ path_effects.Stroke(linewidth=3, foreground=median.get_color()), path_effects.Normal(), ]) df = sns.load_dataset('titanic') plot = sns.boxplot(x='pclass', y='age', hue='sex', data=df) add_median_labels(plot) plot.figure.show() As an alternative, when you create your boxplot with a figure-level function, you need to give the axes parameter to add_median_labels(). # imports and add_median_labels() unchanged df = sns.load_dataset('titanic') plot = sns.catplot(kind='box', x='pclass', y='age', hue='sex', data=df) add_median_labels(plot.axes[0][0]) plot.figure.show() The resulting plot This solution also works with more than two categories in the column used for the hue argument.
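For the manual-placement route, the hue offsets can also be computed instead of hard-coded; a sketch that assumes seaborn's default group width of 0.8 (pass order/hue_order explicitly so the loop positions match the drawn boxes):

import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset('titanic')
order, hue_order = [1, 2, 3], ['female', 'male']   # fixed so positions are predictable
ax = sns.boxplot(x='pclass', y='age', hue='sex', data=df,
                 order=order, hue_order=hue_order)

medians = df.groupby(['pclass', 'sex'])['age'].median()
width = 0.8 / len(hue_order)             # each hue box's share of the 0.8 group width
for i, pclass in enumerate(order):
    for j, sex in enumerate(hue_order):
        x = i - 0.4 + width * (j + 0.5)  # center of the j-th hue box at tick i
        m = medians[(pclass, sex)]
        ax.text(x, m + 1, f'{m:.1f}', ha='center', color='w', weight='semibold')
plt.show()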
How to add median value labels to a Seaborn boxplot using the hue argument
In addition to the solution posted in this link I would also like if I can also add the Hue Parameter, and add the Median Values in each of the plots. The Current Code: testPlot = sns.boxplot(x='Pclass', y='Age', hue='Sex', data=trainData) m1 = trainData.groupby(['Pclass', 'Sex'])['Age'].median().values mL1 = [str(np.round(s, 2)) for s in m1] p1 = range(len(m1)) for tick, label in zip(p1, testPlot.get_xticklabels()): print(testPlot.text(p1[tick], m1[tick] + 1, mL1[tick])) Gives a Output Like: I'm working on the Titanic Dataset which can be found in this link. I'm getting the required values, but only when I do a print statement, how do I include it in my Plot?
[ "Place your labels manually according to hue parameter and width of bars for every category in a cycle of all xticklabels:\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\n\ntrainData = pd.read_csv('titanic.csv')\ntestPlot = sns.boxplot(x='pclass', y='age', hue='sex', data=trainData)\nm1 = trainData.groupby(['pclass', 'sex'])['age'].median().values\nmL1 = [str(np.round(s, 2)) for s in m1]\n\nind = 0\nfor tick in range(len(testPlot.get_xticklabels())):\n testPlot.text(tick-.2, m1[ind+1]+1, mL1[ind+1], horizontalalignment='center', color='w', weight='semibold')\n testPlot.text(tick+.2, m1[ind]+1, mL1[ind], horizontalalignment='center', color='w', weight='semibold')\n ind += 2 \nplt.show()\n\n\n", "This answer is nearly copy & pasted from here but fit more to your example code. The linked answer is IMHO a bit missplaced there because that question is just about labeling a boxplot and not about a boxplot using the hue argument.\nI couldn't use your Train dataset because it is not available as Python package. So I used Titanic instead which has nearly the same column names.\n#!/usr/bin/env python3\nimport pandas as pd\nimport matplotlib\nimport matplotlib.patheffects as path_effects\nimport seaborn as sns\n\ndef add_median_labels(ax, fmt='.1f'):\n \"\"\"Credits: https://stackoverflow.com/a/63295846/4865723\n \"\"\"\n lines = ax.get_lines()\n boxes = [c for c in ax.get_children() if type(c).__name__ == 'PathPatch']\n lines_per_box = int(len(lines) / len(boxes))\n for median in lines[4:len(lines):lines_per_box]:\n x, y = (data.mean() for data in median.get_data())\n # choose value depending on horizontal or vertical plot orientation\n value = x if (median.get_xdata()[1] - median.get_xdata()[0]) == 0 else y\n text = ax.text(x, y, f'{value:{fmt}}', ha='center', va='center',\n fontweight='bold', color='white')\n # create median-colored border around white text for contrast\n text.set_path_effects([\n path_effects.Stroke(linewidth=3, foreground=median.get_color()),\n path_effects.Normal(),\n ])\n\n\ndf = sns.load_dataset('titanic')\nplot = sns.boxplot(x='pclass', y='age', hue='sex', data=df)\nadd_median_labels(plot)\nplot.figure.show()\n\nAls an alternative when you create your boxplot with a figure-based function. In that case you need to give the axes parameter to add_median_labels().\n# imports and add_median_labels() unchanged\ndf = sns.load_dataset('titanic')\nplot = sns.catplot(kind='box', x='pclass', y='age', hue='sex', data=df)\nadd_median_labels(plot.axes[0][0])\nplot.figure.show()\n\nThe resulting plot\n\nThis solution also works with more then two categories in the column used for the hue argument.\n\n" ]
[ 13, 1 ]
[]
[]
[ "boxplot", "matplotlib", "python", "seaborn" ]
stackoverflow_0045475962_boxplot_matplotlib_python_seaborn.txt
Q: Python Equivalent for R's order function According to this post np.argsort() would be the function I am looking for. However, this is not giving me my desired result. Below is the R code that I am trying to convert to Python and my current Python code. R Code data.frame %>% select(order(colnames(.))) Python Code dataframe.iloc[numpy.array(dataframe.columns).argsort()] The dataframe I am working with is 1,000,000+ rows and 42 columns, so I can not exactly re-create the output. But I believe I can re-create the order() outputs. From my understanding each number represents the original position in the columns list order(colnames(data.frame)) returns 3,2,5,6,8,4,7,10,9,11,12,13,14,15,16,17,18,19,23,20,21,22,1,25,26,28,24,27,38,29,34,33,36,30,31,32,35,41,42,39,40,37 numpy.array(dataframe.columns).argsort() returns 2,4,5,7,3,6,9,8,10,11,12,13,14,15,16,17,18,22,19,20,21,0,24,25,27,23,26,37,28,33,32,35,29,30,31,34,40,41,38,39,36,1 I know R does not use 0-based indexing like Python, so I know the first two numbers 3 and 2 are the same. I am looking for Python code that could potentially return the same ordering as the R code. A: Do you have mixed case? This is handled differently in Python and R. R: order(c('a', 'b', 'B', 'A', 'c')) # [1] 1 4 2 3 5 x <- c('a', 'b', 'B', 'A', 'c') x[order(c('a', 'b', 'B', 'A', 'c'))] # [1] "a" "A" "b" "B" "c" Python: np.argsort(['a', 'b', 'B', 'A', 'c'])+1 # array([4, 3, 1, 2, 5]) x = np.array(['a', 'b', 'B', 'A', 'c']) x[np.argsort(x)] # array(['A', 'B', 'a', 'b', 'c'], dtype='<U1') You can mimic R's behavior using numpy.lexsort and sorting by lowercase, then by the original array with swapped case: x = np.array(['a', 'b', 'B', 'A', 'c']) x[np.lexsort([np.char.swapcase(x), np.char.lower(x)])] # array(['a', 'A', 'b', 'B', 'c'], dtype='<U1') A: np.argsort is the same thing as R's order. Just experiment: > x=c(1,2,3,10,20,30,5,15,25,35) > x [1] 1 2 3 10 20 30 5 15 25 35 > order(x) [1] 1 2 3 7 4 8 5 9 6 10 >>> x=np.array([1,2,3,10,20,30,5,15,25,35]) >>> x array([ 1, 2, 3, 10, 20, 30, 5, 15, 25, 35]) >>> x.argsort()+1 array([ 1, 2, 3, 7, 4, 8, 5, 9, 6, 10]) +1 here is just to make the indices start at 1, since the outputs of argsort are 0-based indices. So maybe the problem comes from your columns (shot in the dark: you have 2d-arrays, and are passing rows to R and columns to Python, or something like that). But np.argsort is R's order.
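To apply this to the original goal of reordering DataFrame columns like R's select(order(colnames(.))), one hedged sketch (the column names here are made up; note also that df.iloc[idx] with a single indexer selects rows, so the columns must be selected explicitly):

import numpy as np
import pandas as pd

df = pd.DataFrame([[0, 1, 2, 3]], columns=['b', 'A', 'a', 'B'])  # hypothetical mixed-case names
cols = df.columns.to_numpy().astype(str)
# R's order() sorts strings case-insensitively first, so emulate it with a
# two-key lexsort instead of a plain (case-sensitive) argsort.
idx = np.lexsort([np.char.swapcase(cols), np.char.lower(cols)])
df_reordered = df.iloc[:, idx]           # same effect as select(order(colnames(.)))
print(df_reordered.columns.tolist())     # ['a', 'A', 'b', 'B']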
Python Equivalent for R's order function
According to this post np.argsort() would be the function I am looking for. However, this is not giving me my desire result. Below is the R code that I am trying to convert to Python and my current Python code. R Code data.frame %>% select(order(colnames(.))) Python Code dataframe.iloc[numpy.array(dataframe.columns).argsort()] The dataframe I am working with is 1,000,000+ rows and 42 columns, so I can not exactly re-create the output. But I believe I can re-create the order() outputs. From my understanding each number represents the original position in the columns list order(colnames(data.frame)) returns 3,2,5,6,8,4,7,10,9,11,12,13,14,15,16,17,18,19,23,20,21,22,1,25,26,28,24,27,38,29,34,33,36,30,31,32,35,41,42,39,40,37 numpy.array(dataframe.columns).argsort() returns 2,4,5,7,3,6,9,8,10,11,12,13,14,15,16,17,18,22,19,20,21,0,24,25,27,23,26,37,28,33,32,35,29,30,31,34,40,41,38,39,36,1 I know R does not have 0 index like python, so I know the first two numbers 3 and 2 are the same. I am looking for python code that could potentially return the same ordering at the R code.
[ "Do you have mixed case? This is handled differently in python and R.\nR:\norder(c('a', 'b', 'B', 'A', 'c'))\n# [1] 1 4 2 3 5\n\nx <- c('a', 'b', 'B', 'A', 'c')\nx[order(c('a', 'b', 'B', 'A', 'c'))]\n# [1] \"a\" \"A\" \"b\" \"B\" \"c\"\n\nPython:\nnp.argsort(['a', 'b', 'B', 'A', 'c'])+1\n# array([4, 3, 1, 2, 5])\n\nx = np.array(['a', 'b', 'B', 'A', 'c'])\nx[np.argsort(x)]\n# array(['A', 'B', 'a', 'b', 'c'], dtype='<U1')\n\nYou can mimick R's behavior using numpy.lexsort and sorting by lowercase, then by the original array with swapped case:\nx = np.array(['a', 'b', 'B', 'A', 'c'])\nx[np.lexsort([np.char.swapcase(x), np.char.lower(x)])]\n# array(['a', 'A', 'b', 'B', 'c'], dtype='<U1')\n\n", "np.argsort is the same thing as R's order.\nJust experiment\n> x=c(1,2,3,10,20,30,5,15,25,35)\n> x\n [1] 1 2 3 10 20 30 5 15 25 35\n> order(x)\n [1] 1 2 3 7 4 8 5 9 6 10\n\n>>> x=np.array([1,2,3,10,20,30,5,15,25,35])\n>>> x\narray([ 1, 2, 3, 10, 20, 30, 5, 15, 25, 35])\n>>> x.argsort()+1\narray([ 1, 2, 3, 7, 4, 8, 5, 9, 6, 10])\n\n+1 here is just to have index starting with 1, since output of argsort are index (0-based index).\nSo maybe the problem comes from your columns (shot in the dark: you have 2d-arrays, and are passing lines to R and columns to python, or something like that).\nBut np.argsort is R's order.\n" ]
[ 2, 1 ]
[]
[]
[ "pandas", "python", "r" ]
stackoverflow_0074528672_pandas_python_r.txt
Q: Python: create 3D array using values of another 3D array that meet a condition I'm basically trying to take the weighted mean of a 3D dataset, but only on a filtered subset of the data, where the filter is based on another (2D) array. The shape of the 2D data matches the first 2 dimensions of the 3D data, and is thus repeated for each slice in the 3rd dimension. Something like: import numpy as np myarr = np.array([[[4,6,8],[9,3,2]],[[2,7,4],[3,8,6]],[[1,6,7],[7,8,3]]]) myarr2 = np.array([[7,3],[6,7],[2,6]]) weights = np.random.rand(3,2,3) filtered = [] for k in range(len(myarr[0,0,:])): temp1 = myarr[:,:,k] temp2 = weights[:,:,k] filtered.append(temp1[np.where(myarr2 > 5)]*temp2[np.where(myarr2 > 5)]) average = np.array(np.sum(filtered,1)/len(filtered[0])) I am concerned about efficiency here. Is it possible to vectorize this so I don't need the loop, or are there other suggestions to make this more efficient? A: The most glaring efficiency issue, even leaving the loop aside, is that np.where(...) is being called multiple times inside the loop, on the same condition! You can just do this a single time beforehand. Moreover, there is no need for a loop. Your operation basically equates to: mask = myarr2 > 5 average = (myarr[mask] * weights[mask]).mean(axis=0) There is no need for an np.where either. myarr2 is an array of shape (i, j) with the same first two dims as myarr and weights, which have shape (i, j, k). So if there are n True elements in the boolean mask myarr2 > 5, you can apply it to your other arrays to obtain (n, k) elements (taking all elements along the third axis, wherever there is a True at a certain [i, j] position).
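A quick self-check on random data that the vectorized form matches the question's loop (shapes follow the question's example):

import numpy as np

rng = np.random.default_rng(0)
myarr = rng.random((3, 2, 3))
weights = rng.random((3, 2, 3))
myarr2 = rng.integers(0, 10, (3, 2))

mask = myarr2 > 5                        # boolean (i, j) mask, computed once
vectorized = (myarr[mask] * weights[mask]).mean(axis=0)

# reference: the per-slice loop from the question
looped = np.array([
    (myarr[:, :, k][mask] * weights[:, :, k][mask]).mean()
    for k in range(myarr.shape[2])
])
assert np.allclose(vectorized, looped)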
Python: create 3D array using values of another 3D array that meet a condition
I'm basically trying to take the weighted mean of a 3D dataset, but only on a filtered subset of the data, where the filter is based off of another (2D) array. The shape of the 2D data matches the first 2 dimensions of the 3D data, and is thus repeated for each slice in the 3rd dimension. Something like: import numpy as np myarr = np.array([[[4,6,8],[9,3,2]],[[2,7,4],[3,8,6]],[[1,6,7],[7,8,3]]]) myarr2 = np.array([[7,3],[6,7],[2,6]]) weights = np.random.rand(3,2,3) filtered = [] for k in range(len(myarr[0,0,:])): temp1 = myarr[:,:,k] temp2 = weights[:,:,k] filtered.append(temp1[np.where(myarr2 > 5)]*temp2[np.where(myarr2 > 5)]) average = np.array(np.sum(filtered,1)/len(filtered[0])) I am concerned about efficiency here. Is it possible to vectorize this so I don't need the loop, or are there other suggestions to make this more efficient?
[ "The most glaring efficiency issue, even the loop aside, is that np.where(...) is being called multiple times inside the loop, on the same condition! You can just do this a single time beforehand. Moreover, there is no need for a loop. Your operation basically equates to:\nmask = myarr2 > 5\naverage = (myarr[mask] * weights[mask]).mean(axis=0)\n\nThere is no need for an np.where either.\nmyarr2 is an array of shape (i, j) with same first two dims as myarr and weight, which have some shape (i, j, k).\nSo if there are n True elements in the boolean mask myarr2 > 5, you can apply it on your other arrays to obtain (n, k) elements (taking all elements along third axis, when there is a True at a certain [i, j] position).\n" ]
[ 1 ]
[]
[]
[ "arrays", "numpy", "python", "vectorization" ]
stackoverflow_0074527214_arrays_numpy_python_vectorization.txt
Q: In a given string, match all numbers where a certain word is not present either ahead or behind it [Regex, Python] I have a string like "10.0 banana 30 apple 50 TOM 70 mango 100 peach 33 TOM 4.5" and from this, I want to match only numbers which do not have the word TOM either behind or ahead of them. So the match should include only the numbers 10.0, 30, and 100; the numbers 50, 70, 33 and 4.5 should not be matched. Regex101. I have tried with negative lookbehinds and negative lookaheads, but I am missing something; it is not working as expected. A: You can use negative lookaround patterns like this: (?<!\bTOM )(?<![\d.])\d+(?:\.\d+)?(?![\d.])(?! TOM\b) Demo: https://regex101.com/r/v8IaEu/1
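Trying the pattern in Python confirms the expected matches:

import re

s = "10.0 banana 30 apple 50 TOM 70 mango 100 peach 33 TOM 4.5"
pattern = r'(?<!\bTOM )(?<![\d.])\d+(?:\.\d+)?(?![\d.])(?! TOM\b)'
# Numbers directly before or after TOM (50, 70, 33, 4.5) are skipped.
print(re.findall(pattern, s))   # ['10.0', '30', '100']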
In a given string, match all numbers where a certain word is not present either ahead or behind it [Regex, Python]
I have a string like "10.0 banana 30 apple 50 TOM 70 mango 100 peach 33 TOM 4.5" and from this, I want to match only numbers which do not have the word TOM either behind or ahead of them. So match should be only numbers 10.0, 30, 100; numbers 50, 70, 33 and 4.5 should not be matched. Regex101. I have tried with negative lookbehinds and negative lookaheads, but I am missing something, it is not working as expected.
[ "You can use negative lookaround patterns like this:\n(?<!\\bTOM )(?<![\\d.])\\d+(?:\\.\\d+)?(?![\\d.])(?! TOM\\b)\n\nDemo: https://regex101.com/r/v8IaEu/1\n" ]
[ 2 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074529040_python_regex.txt
Q: Pass python dictionary to javascript I have a Python + JS application connected to a PostgreSQL database. The database contains data about users in different countries, which is queried by the server.py file. The result of this query is a dictionary that would look something like this: {'US': 2, 'CA': 5} This dictionary needs to be passed to my map.js file, which populates a world map according to the country code (key) and volume (value). This dictionary updates with user activity, so it needs to be passed every time someone loads the map. How can I pass the data over? It seems like I need to create a JSON file. I'm not sure how to create that file within Python or how to call it from JavaScript. I want to replace the hardcoded 'var data' values from map.js with my query results from country_count on server.py. my server.py: @app.route("/map") def show_mapjs(): country_count = { "US": 0, "CA": 0, } country_code = session.get("country_code") for country_code, _ in country_count.items(): records_count = User_Records.query.filter_by(country_code=country_code).count() country_count[country_code] = records_count print(f"=== {country_count}") return country_count (US & CA are initialized at 0 and the records_count query updates the count as user activity increases over time.) my map.js: fetch('/map') anychart.onDocumentReady(function () { var data = [ {'id': 'US', 'value': 5}, {'id': 'CA', 'value': 2} ] var dataSet = anychart.data.set(data); var mapData = dataSet.mapAs({ description: 'description' }); var map = anychart.map(); A: what a fun project! Let's get the work under way. On your server side, import json @app.route('/map') def show_mapjs(): country_count = { "US": 0, "CA": 0, } #place your own code here to get the data from the database# country_list = [] for country, count in country_count.items(): country_list.append({"id": country, "value": count}) # Serializing json json_object = json.dumps(country_list) return json_object On your client side, First, include the JS libs below in the HTML so the code that follows can use them. <script src="https://cdn.anychart.com/releases/8.11.0/js/anychart-core.min.js" type="text/javascript"></script> <script src="https://cdn.anychart.com/releases/8.11.0/js/anychart-map.min.js" type="text/javascript"></script> <script src="https://cdn.anychart.com/geodata/latest/custom/world/world.js"></script> <script src="https://cdn.anychart.com/releases/v8/js/anychart-data-adapter.min.js"></script> Use the map JS function as below: <script> anychart.onDocumentReady(function () { anychart.data.loadJsonFile("/map", function (data) { var map = anychart.map(); map.geoData(anychart.maps.world); var dataSet = anychart.data.set(data); // set the series var series = map.choropleth(dataSet); // disable labels series.labels(false); // set the container map.container('container'); map.draw(); } ); }); </script> You should do it this way to avoid out-of-sync data loading and map rendering. This will ensure that the JSON is downloaded and then processed by the map. Let me know if you have issues getting this working.
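As a server-side variant, Flask's jsonify can replace the manual json.dumps and also sets the application/json content type for you (a sketch with the database query stubbed out):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/map')
def show_mapjs():
    country_count = {'US': 2, 'CA': 5}   # stand-in for the real DB query
    payload = [{'id': code, 'value': count}
               for code, count in country_count.items()]
    # jsonify serializes the list and sets the JSON mimetype, unlike
    # returning a bare json.dumps() string.
    return jsonify(payload)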
Pass python dictionary to javascript
I have a Python + JS application connected to a PostgreSQL database. The database contains data about users in different countries, which is queried by the server.py file. The result of this query is a dictionary that would look something like this: {'US': 2, 'CA': 5} This dictionary needs to be passed to my map.js file, which populates a world map according to the country code (key) and volume (value). This dictionary updates with user activity, so it needs to be passed every time someone loads the map. How can I pass the data over? It seems like I need to create a JSON file. I'm not sure how to create that file within python or how to call it from javascript. I want to replace the hardcoded 'var data' values from map.js with my query results from country_count on server.py. my server.py: @app.route("/map") def show_mapjs(): country_count = { "US": 0, "CA": 0, } country_code = session.get("country_code") for country_code, _ in country_count.items(): records_count = User_Records.query.filter_by(country_code=country_code).count() country_count[country_code] = records_count print(f"=== {country_count}") return country_count (US & CA are initialized at 0 and the records_count query updates the count as user activity increases over time.) my map.js: fetch('/map') anychart.onDocumentReady(function () { var data = [ {'id': 'US', 'value': 5}, {'id': 'CA', 'value': 2} ] var dataSet = anychart.data.set(data); var mapData = dataSet.mapAs({ description: 'description' }); var map = anychart.map();
[ "what a fun project!\nLet's get the work under way.\nOn your server side,\nimport json\n\n@app.route('/map')\ndef show_mapjs():\n country_count = {\n \"US\": 0, \"CA\": 0,\n }\n \n #place your own code here to get the data from the database#\n\n country_list = [] \n for country, count in country_count.items():\n country_list.append({\"id\": country, \"value\": count})\n \n # Serializing json \n json_object = json.dumps(country_list)\n\n return json_object\n\nOn your client side,\nFirst, include the below js libs in the HTML, so the next code can use it.\n<script src=\"https://cdn.anychart.com/releases/8.11.0/js/anychart-core.min.js\" type=\"text/javascript\"></script>\n<script src=\"https://cdn.anychart.com/releases/8.11.0/js/anychart-map.min.js\" type=\"text/javascript\"></script>\n<script src=\"https://cdn.anychart.com/geodata/latest/custom/world/world.js\"></script>\n<script src=\"https://cdn.anychart.com/releases/v8/js/anychart-data-adapter.min.js\"></script>\n\nUse the map js function as below,\n <script>\n anychart.onDocumentReady(function () {\n anychart.data.loadJsonFile(\"/map\",\n function (data) {\n var map = anychart.map();\n map.geoData(anychart.maps.world);\n\n var dataSet = anychart.data.set(data);\n // set the series\n var series = map.choropleth(dataSet);\n \n // disable labels\n series.labels(false);\n\n // set the container\n map.container('container');\n map.draw();\n }\n );\n });\n </script>\n\nYou should do this way to avoid out-of-sync data loading and map rendering. This will ensure that the json is downloaded and then processed by the map.\nLet me know if you have issues getting this working.\n" ]
[ 1 ]
[]
[]
[ "flask", "javascript", "json", "python", "visualization" ]
stackoverflow_0074528238_flask_javascript_json_python_visualization.txt
Q: Validating file paths in Python I have this code where the user has to input the name of a file which includes a message and the name of a file where the message must be written after it is encrypted via a Caesar cipher. I would like to validate the inputs, so that if there's a wrong input, the code won't crash but will ask the user for a valid file path until one is entered. I have some familiarity with validation using while loops; however, I couldn't apply it here without ruining other parts of the code. Any suggestions are appreciated. def open_file(source_path: str, dest_path: str): with open(source_path, mode='r') as fd: while line := fd.readline(): return line def write_to_file(dest_path: str, line: str): with open(dest_path, mode='a') as fd: fd.write(line) source_path = input("Enter the name of the file including the message: ") dest_path = input("Enter the name of the file where the encrypted message will be written: ") MODE_ENCRYPT = 1 def caesar(source: str, dest: str, steps, mode): alphabet = "abcdefghijklmnopqrstuvwxyzabcABCDEFGHIJKLMNOPQRSTUVWXYZABC" alpha_len: int = len(alphabet) new_data = "" file = open_file(source_path, dest_path) for char in file: index = alphabet.find(char) if index == -1: new_data += char else: # compute first part changed = index + steps if mode == MODE_ENCRYPT else index - steps # make an offset changed %= alpha_len new_data += alphabet[changed:changed + 1] write_to_file(dest_path, new_data) return new_data while True: # Validating input key key = input("Enter the key: ") try: key = int(key) except ValueError: print("Please enter a valid key: ") continue break ciphered = caesar(source_path, dest_path, key, MODE_ENCRYPT) A: Not quite sure what you meant by not being able to use a while loop, but here is a simple way of checking whether the paths exist using pathlib. from pathlib import Path while True: source_path = Path(input("Enter the name of the file including the message: ")) if source_path.exists(): break print("Please input a valid path") while True: dest_path = Path(input("Enter the name of the file where the encrypted message will be written: ")) if dest_path.exists(): break print("Please input a valid path") A: You can use the built-in os module in Python. Here is code that keeps asking until the path is valid. Note: Keeping a check for MAXIMUM RETRIES helps the code avoid getting stuck in an infinite loop for the user. import os def getPath(): MAXIMUM_RETRIES = 5 count = 0 file_path = "" while True: count += 1 file_path = input("Enter the path: ") if os.path.exists(file_path): break if count >= MAXIMUM_RETRIES: print("You have reached maximum number or re-tries. Exiting...") exit(1) print("Invalid Path. Try again.") return file_path
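One way to fold both answers into a single reusable prompt helper (the must_exist flag is an assumption: an output file may legitimately not exist yet, so the existence check is only enforced for the source file):

from pathlib import Path

def ask_for_path(prompt: str, must_exist: bool = True, max_retries: int = 5) -> Path:
    """Prompt until the user supplies a usable path (or retries run out)."""
    for _ in range(max_retries):
        path = Path(input(prompt))
        if not must_exist or path.exists():
            return path
        print("Please enter a valid file path.")
    raise SystemExit("Too many invalid attempts. Exiting.")

source_path = ask_for_path("Enter the name of the file including the message: ")
dest_path = ask_for_path("Enter the name of the file where the encrypted message will be written: ",
                         must_exist=False)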
Validating file paths in Python
I have this code where the user has to input the name of a file which includes a message and the name of a file where the message must be written after its encrypted via Caesar-Cipher. I would like to validate the inputs, so that if there's a wrong input, the code won't crash but ask the user for a valid file path until the user inputs it. I have some familiarity with validation using the while loop, however I couldn't apply it here without ruining other parts of the code. Any suggestions are appreciated. def open_file(source_path: str, dest_path: str): with open(source_path, mode='r') as fd: while line := fd.readline(): return line def write_to_file(dest_path: str, line: str): with open(dest_path, mode='a') as fd: fd.write(line) source_path = input("Enter the name of the file including the message: ") dest_path = input("Enter the name of the file where the encrypted message will be written: ") MODE_ENCRYPT = 1 def caesar(source: str, dest: str, steps, mode): alphabet = "abcdefghijklmnopqrstuvwxyzabcABCDEFGHIJKLMNOPQRSTUVWXYZABC" alpha_len: int = len(alphabet) new_data = "" file = open_file(source_path, dest_path) for char in file: index = alphabet.find(char) if index == -1: new_data += char else: # compute first parth changed = index + steps if mode == MODE_ENCRYPT else index - steps # make an offset changed %= alpha_len new_data += alphabet[changed:changed + 1] write_to_file(dest_path, new_data) return new_data while True: # Validating input key key = input("Enter the key: ") try: key = int(key) except ValueError: print("Please enter a valid key: ") continue break ciphered = caesar(source_path, dest_path, key, MODE_ENCRYPT)
[ "Not quite sure what you meant by not being able to use a while loop, but here is a simple way of checking if the paths exists using pathlib.\nfrom pathlib import Path\n\nwhile True:\n source_path = Path(input(\"Enter the name of the file including the message: \"))\n if source_path.exists():\n break\n print(\"Please input a valid path\")\n\nwhile True:\n dest_path = Path(input(\"Enter the name of the file where the encrypted message will be written: \"))\n if dest_path.exists():\n break\n print(\"Please input a valid path\")\n\n", "You can use inbuilt OS module in Python. Here is the code until the path is a valid path.\nNote: Keeping a check for MAXIMUM RETRIES will help the code to not stuck in an infinite loop for the user.\nimport os\n\ndef getPath():\n MAXIMUM_RETRIES = 5\n count = 0\n file_path = \"\"\n while True:\n count += 1\n file_path = input(\"Enter the path: \")\n if os.path.exists(file_path):\n break\n if count >= MAXIMUM_RETRIES:\n print(\"You have reached maximum number or re-tries. Exiting...\")\n exit(1)\n print(\"Invalid Path. Try again.\")\n return file_path\n\n" ]
[ 1, 1 ]
[]
[]
[ "caesar_cipher", "python", "validation" ]
stackoverflow_0074528514_caesar_cipher_python_validation.txt
Q: how to get non continuous date time in dataframe datetime column pandas I have a datetime based dataframe as below, timestamp value ... metric 36 2014-04-02 17:20:00 125.098263 ... 25.098263 14 2014-04-06 16:25:00 140.072787 ... 265.171050 10 2014-04-11 09:00:00 127.882020 ... 393.053070 45 2014-04-11 09:05:00 115.705719 ... 508.758789 24 2014-04-11 09:15:00 127.261178 ... 636.019967 17 2014-04-11 09:20:00 121.157997 ... 757.177965 49 2014-04-11 09:25:00 120.468468 ... 877.646433 8 2014-04-11 09:45:00 135.642696 ... 1013.289128 33 2014-04-11 09:55:00 125.210049 ... 1138.499178 19 2014-04-11 10:05:00 159.259713 ... 1297.758890 52 2014-04-11 10:20:00 150.082482 ... 1447.841373 I want to create a new column named 'diff_col' that contains either 'same' or 'diff' values. If a date is not continuous, it will be taken as 'diff'; otherwise it is 'same'. In the above dataframe, 2014-04-02 17:20:00 and 2014-04-06 16:25:00 are different dates compared to the remaining datetime values. How can I create the diff_col column? I tried, df['diff_col']=df.groupby(pd.Grouper(key = 'timestamp', freq='1D')) but it didn't correctly create the expected column. My required dataframe is as below, timestamp value ... metric diff_col 36 2014-04-02 17:20:00 125.098263 ... 25.098263 diff 14 2014-04-06 16:25:00 140.072787 ... 265.171050 diff 10 2014-04-11 09:00:00 127.882020 ... 393.053070 same 45 2014-04-11 09:05:00 115.705719 ... 508.758789 same 24 2014-04-11 09:15:00 127.261178 ... 636.019967 same 17 2014-04-11 09:20:00 121.157997 ... 757.177965 same 49 2014-04-11 09:25:00 120.468468 ... 877.646433 same 8 2014-04-11 09:45:00 135.642696 ... 1013.289128 same 33 2014-04-11 09:55:00 125.210049 ... 1138.499178 same 19 2014-04-11 10:05:00 159.259713 ... 1297.758890 same 52 2014-04-11 10:20:00 150.082482 ... 1447.841373 same Please provide suggestions on this. Thanks, Kumar A: You can compare the successive rows to see if this is the same date (extracted with dt.normalize) and use this as a grouper to get the size with groupby.transform('size'); if the size is > 1, set 'same', else 'diff', with the help of numpy.where: import numpy as np # ensure datetime df['timestamp'] = pd.to_datetime(df['timestamp']) # get day s = df['timestamp'].dt.normalize() # compare successive rows and identify group size df['diff_col'] = np.where(df.groupby(s.ne(s.shift()).cumsum()) .transform('size').gt(1), 'same', 'diff') Output: timestamp value ... metric diff_col 36 2014-04-02 17:20:00 125.098263 ... 25.098263 diff 14 2014-04-06 16:25:00 140.072787 ... 265.171050 diff 10 2014-04-11 09:00:00 127.882020 ... 393.053070 same 45 2014-04-11 09:05:00 115.705719 ... 508.758789 same 24 2014-04-11 09:15:00 127.261178 ... 636.019967 same 17 2014-04-11 09:20:00 121.157997 ... 757.177965 same 49 2014-04-11 09:25:00 120.468468 ... 877.646433 same 8 2014-04-11 09:45:00 135.642696 ... 1013.289128 same 33 2014-04-11 09:55:00 125.210049 ... 1138.499178 same 19 2014-04-11 10:05:00 159.259713 ... 1297.758890 same 52 2014-04-11 10:20:00 150.082482 ... 1447.841373 same
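For reference, a self-contained sketch of the accepted approach on a trimmed-down frame; the sample data is abbreviated from the question, and the variable names day/run_id are illustrative:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "timestamp": ["2014-04-02 17:20:00", "2014-04-06 16:25:00",
                  "2014-04-11 09:00:00", "2014-04-11 09:05:00"],
    "value": [125.10, 140.07, 127.88, 115.71],
})
df["timestamp"] = pd.to_datetime(df["timestamp"])

# Normalize to midnight so rows on the same calendar day compare equal,
# then label runs of consecutive equal days and measure each run's size.
day = df["timestamp"].dt.normalize()
run_id = day.ne(day.shift()).cumsum()
df["diff_col"] = np.where(df.groupby(run_id)["timestamp"].transform("size").gt(1),
                          "same", "diff")
print(df)  # first two rows -> diff, the two 2014-04-11 rows -> same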
how to get non continuous date time in dataframe datetime column pandas
I have a datetime based dataframe as below, timestamp value ... metric 36 2014-04-02 17:20:00 125.098263 ... 25.098263 14 2014-04-06 16:25:00 140.072787 ... 265.171050 10 2014-04-11 09:00:00 127.882020 ... 393.053070 45 2014-04-11 09:05:00 115.705719 ... 508.758789 24 2014-04-11 09:15:00 127.261178 ... 636.019967 17 2014-04-11 09:20:00 121.157997 ... 757.177965 49 2014-04-11 09:25:00 120.468468 ... 877.646433 8 2014-04-11 09:45:00 135.642696 ... 1013.289128 33 2014-04-11 09:55:00 125.210049 ... 1138.499178 19 2014-04-11 10:05:00 159.259713 ... 1297.758890 52 2014-04-11 10:20:00 150.082482 ... 1447.841373 I want to create a new column named 'diff_col' that contains either 'same' or 'diff' values. If a date is not continuous, it will be taken as 'diff'; otherwise it is 'same'. In the above dataframe, 2014-04-02 17:20:00 and 2014-04-06 16:25:00 are different dates compared to the remaining datetime values. How can I create the diff_col column? I tried, df['diff_col']=df.groupby(pd.Grouper(key = 'timestamp', freq='1D')) but it didn't correctly create the expected column. My required dataframe is as below, timestamp value ... metric diff_col 36 2014-04-02 17:20:00 125.098263 ... 25.098263 diff 14 2014-04-06 16:25:00 140.072787 ... 265.171050 diff 10 2014-04-11 09:00:00 127.882020 ... 393.053070 same 45 2014-04-11 09:05:00 115.705719 ... 508.758789 same 24 2014-04-11 09:15:00 127.261178 ... 636.019967 same 17 2014-04-11 09:20:00 121.157997 ... 757.177965 same 49 2014-04-11 09:25:00 120.468468 ... 877.646433 same 8 2014-04-11 09:45:00 135.642696 ... 1013.289128 same 33 2014-04-11 09:55:00 125.210049 ... 1138.499178 same 19 2014-04-11 10:05:00 159.259713 ... 1297.758890 same 52 2014-04-11 10:20:00 150.082482 ... 1447.841373 same Please provide suggestions on this. Thanks, Kumar
[ "You can compare the successive rows to see if this is the same date (extracted with dt.normalize) and use this as grouper to get the size with groupby.transform('size'), if the size is > 1, set 'same' else 'diff' with help of numpy.where:\nimport numpy as np\n\n# ensure datetime\ndf['timestamp'] = pd.to_datetime(df['timestamp'])\n\n# get day\ns = df['timestamp'].dt.normalize()\n\n# compare successive rows and identify group size\ndf['diff_col'] = np.where(df.groupby(s.ne(s.shift()).cumsum())\n .transform('size').gt(1),\n 'same', 'diff')\n\nOutput:\n timestamp value ... metric diff_col\n36 2014-04-02 17:20:00 125.098263 ... 25.098263 diff\n14 2014-04-06 16:25:00 140.072787 ... 265.171050 diff\n10 2014-04-11 09:00:00 127.882020 ... 393.053070 same\n45 2014-04-11 09:05:00 115.705719 ... 508.758789 same\n24 2014-04-11 09:15:00 127.261178 ... 636.019967 same\n17 2014-04-11 09:20:00 121.157997 ... 757.177965 same\n49 2014-04-11 09:25:00 120.468468 ... 877.646433 same\n8 2014-04-11 09:45:00 135.642696 ... 1013.289128 same\n33 2014-04-11 09:55:00 125.210049 ... 1138.499178 same\n19 2014-04-11 10:05:00 159.259713 ... 1297.758890 same\n52 2014-04-11 10:20:00 150.082482 ... 1447.841373 same\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python", "python_datetime" ]
stackoverflow_0074529166_pandas_python_python_datetime.txt
Q: Selenium scrolls to the element but does not click Trying to click the next button in the navigation bar of the website "https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe" using selenium in python. from selenium.webdriver import Chrome from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from bs4 import BeautifulSoup import time URL = "https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe" driver = Chrome(ChromeDriverManager().install()) class Scraper: def __init__(self, website): self.website = website def get_website(self): return driver.get(self.website) def ignore_cookie(self): try: ignore_cookies = driver.find_element(by=By.XPATH, value='//*[@id="onetrust-reject-all-handler"]') ignore_cookies.click() except AttributeError: pass def next_page(self): driver.find_element(by=By.NAME, value="pagination-button-next").click() The ignore_cookie function works fine, but the next_page function scrolls to the next button and does not click it. A: Include the following imports: from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import time as t Edit your next_page function like so: wait = WebDriverWait(driver, 25) next_page_button = wait.until(EC.element_to_be_clickable((By.XPATH, '//a[@name="pagination-button-next"]'))) next_page_button.location_once_scrolled_into_view t.sleep(2) next_page_button.click() See Selenium documentation at https://www.selenium.dev/documentation/ A: This should do it: from selenium.webdriver import Chrome from selenium.webdriver.common.by import By from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC url = "https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe" class Scraper: def __init__(self, website): self.driver = Chrome(ChromeDriverManager().install()) self.driver.get(website) self.wait = WebDriverWait(self.driver,20) def ignore_cookie(self): self.driver.find_element(By.CSS_SELECTOR, "button[class^='onetrust-close-btn-handler']").click() def fetch_content(self): while True: for item in self.driver.find_elements(By.CSS_SELECTOR, "section > [class*='card_card']"): shop_name = item.find_element(By.CSS_SELECTOR, "a[name='business-unit-card'] p[class*='displayName']").text yield shop_name try: self.next_page() self.wait.until(EC.staleness_of(item)) except Exception as err: self.driver.quit() return def next_page(self): next_page = self.driver.find_element(By.CSS_SELECTOR, "a[name='pagination-button-next']") self.driver.execute_script("arguments[0].click();", next_page) scrape = Scraper(url) scrape.ignore_cookie() for title in scrape.fetch_content(): print(title)
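For reference, a condensed sketch of the pattern both answers rely on: wait until the link is clickable, click it via JavaScript so an overlay cannot intercept the native click, then wait for staleness as a page-change signal. It assumes a local Chrome/chromedriver setup and needs a live browser session to run:

from selenium.webdriver import Chrome
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = Chrome()
driver.get("https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe")
wait = WebDriverWait(driver, 20)

# JavaScript click bypasses elements (e.g. the cookie banner) that would
# otherwise intercept a native click on the scrolled-to link.
next_btn = wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "a[name='pagination-button-next']")))
driver.execute_script("arguments[0].click();", next_btn)

# The old element going stale indicates the next page has rendered.
wait.until(EC.staleness_of(next_btn))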
Selenium scrolls to the element but does not click
Trying to click the next button in the navigation bar of the website "https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe" using selenium in python. from selenium.webdriver import Chrome from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from bs4 import BeautifulSoup import time URL = "https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe" driver = Chrome(ChromeDriverManager().install()) class Scraper: def __init__(self, website): self.website = website def get_website(self): return driver.get(self.website) def ignore_cookie(self): try: ignore_cookies = driver.find_element(by=By.XPATH, value='//*[@id="onetrust-reject-all-handler"]') ignore_cookies.click() except AttributeError: pass def next_page(self): driver.find_element(by=By.NAME, value="pagination-button-next").click() The ignore_cookie function works fine, but the next_page function scrolls to the next button and does not click it.
[ "Include the following imports:\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nimport time as t\n\nEdit your next_page function like so:\nwait = WebDriverWait(driver, 25)\n\nnext_page_button = wait.until(EC.element_to_be_clickable((By.XPATH, '//a[@name=\"pagination-button-next\"]')))\nnext_page_button.location_once_scrolled_into_view\nt.sleep(2)\nnext_page_button.click()\n\nSee Selenium documentation at https://www.selenium.dev/documentation/\n", "This should do it:\nfrom selenium.webdriver import Chrome\nfrom selenium.webdriver.common.by import By\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\n\nurl = \"https://uk.trustpilot.com/categories/bars_cafes?subcategories=cafe\"\n\nclass Scraper:\n def __init__(self, website):\n self.driver = Chrome(ChromeDriverManager().install())\n self.driver.get(website)\n self.wait = WebDriverWait(self.driver,20)\n\n\n def ignore_cookie(self):\n self.driver.find_element(By.CSS_SELECTOR, \"button[class^='onetrust-close-btn-handler']\").click()\n\n\n def fetch_content(self):\n while True:\n for item in self.driver.find_elements(By.CSS_SELECTOR, \"section > [class*='card_card']\"):\n shop_name = item.find_element(By.CSS_SELECTOR, \"a[name='business-unit-card'] p[class*='displayName']\").text\n yield shop_name\n\n try:\n self.next_page()\n self.wait.until(EC.staleness_of(item))\n except Exception as err:\n self.driver.quit()\n return\n\n\n def next_page(self):\n next_page = self.driver.find_element(By.CSS_SELECTOR, \"a[name='pagination-button-next']\")\n self.driver.execute_script(\"arguments[0].click();\", next_page)\n\n\nscrape = Scraper(url)\nscrape.ignore_cookie()\nfor title in scrape.fetch_content():\n print(title)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074524342_python_selenium_web_scraping.txt
Q: ChoiceField doesn't display an empty label when using a tuple What I'm trying to do I'm going to be keeping data about competitions in my database. I want to be able to search the competitions by certain criteria - competition type in particular. About competition types Competition types are kept in a tuple. A slightly shortened example: COMPETITION_TYPE_CHOICES = ( (1, 'Olympic Games'), (2, 'ISU Championships'), (3, 'Grand Prix Series'), ) These are used in the model like so (again - this is a shortened/simplified version of the model): class Competition(models.Model): name = models.CharField(max_length=256) type = models.IntegerField(choices=COMPETITION_TYPE_CHOICES) The search form I don't want the fields to be required in the search form, so the form is defined like this: class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False) The problem I'd like the select widget in ChoiceField to display an empty label, but I don't get one. Any help with this would be much appreciated :) A: I've found a solution that works the way I want it to without violating the DRY principle. Not very clean, but it'll have to do I suppose. According to the documentation choices don't have to be a tuple: Finally, note that choices can be any iterable object -- not necessarily a list or tuple. This lets you construct choices dynamically. But if you find yourself hacking choices to be dynamic, you're probably better off using a proper database table with a ForeignKey. choices is meant for static data that doesn't change much, if ever. So the solution I'm going with for the moment is: COMPETITION_TYPE_CHOICES = [ (1, 'Olympic Games'), (2, 'ISU Championships'), (3, 'Grand Prix Series'), ] COMP_TYPE_CHOICES_AND_EMPTY = [('','All')] + COMPETITION_TYPE_CHOICES And then: class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMP_TYPE_CHOICES_AND_EMPTY, required=False) The model stays the same as it was. A: I tried both Monika's and Evgeniy's solutions with no success, but Monika has a good point in that the choices do not need to be tuples. Therefore, the easiest (and DRYest) solution is to simply do what Django does already in the Model Field. Simply add the blank choice and the tuples together after converting them to a list: from django.db.models.fields import BLANK_CHOICE_DASH ... type = forms.ChoiceField(choices=BLANK_CHOICE_DASH + list(COMPETITION_TYPE_CHOICES), required=False) A: Better choice is to update field choices in form init method COMPETITION_TYPE_CHOICES = ( (1, 'Olympic Games'), (2, 'ISU Championships'), (3, 'Grand Prix Series'), ) class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False) def __init__(self, *args, **kwargs): super(CompetitionSearchForm, self).__init__(*args, **kwargs) self.fields['type'].choices.insert(0, ('','---------' ) ) A: According to the documentation: Either an iterable (e.g., a list or tuple) of 2-tuples to use as choices for this field, or a callable that returns such an iterable. (https://docs.djangoproject.com/en/dev/ref/forms/fields/) So, you can simple: sample_field = forms.ChoiceField(choices=(('', '---'),) + Model.YOUR_CHOICES) A: Try adding blank=True to the model fields (assuming that's the behavior you want), then changing the form to a ModelForm and removing the field definitions. 
Note that any fields for which you set blank=True won't be required when validating or saving the model. Again, this may not be what you want but if it is it'll allow Django to take care of a few things automatically. Otherwise just change your COMPETITION_TYPE_CHOICES to: COMPETITION_TYPE_CHOICES = ( ('', '---------'), ('1', 'Olympic Games'), ('2', 'ISU Championships'), ('3', 'Grand Prix Series'), ) A: Just a small change to Evgeniy's answer that checks if the blank alternative is not already added. Without the check (at least when running the builtin runserver) one extra empty label is added for each page reload. COMPETITION_TYPE_CHOICES = ( (1, 'Olympic Games'), (2, 'ISU Championships'), (3, 'Grand Prix Series'), ) class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False) def __init__(self, *args, **kwargs): super(CompetitionSearchForm, self).__init__(*args, **kwargs) if not self.fields['type'].choices[0][0] == '': self.fields['type'].choices.insert(0, ('','---------' ) ) A: Why don't you use ModelForm if you already have a model class? Best solution: forms.py class CompetitionSearchForm(ModelForm): class Meta: model = Competition models.py class Competition(models.Model): name = models.CharField(max_length=256) type = models.IntegerField(choices=COMPETITION_TYPE_CHOICES, default=COMPETITION_TYPE_CHOICES[0][0], blank=True) You can set blank=False to remove empty_label from the list A: A little late to the party... How about not modifying the choices at all and just handling it with a widget? from django.db.models import BLANK_CHOICE_DASH class EmptySelect(Select): empty_value = BLANK_CHOICE_DASH[0] empty_label = BLANK_CHOICE_DASH[1] @property def choices(self): yield (self.empty_value, self.empty_label,) for choice in self._choices: yield choice @choices.setter def choices(self, val): self._choices = val Then just call it: class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False, widget=EmptySelect) This is what you end up with: print(CompetitionSearchForm().as_p()) <p> <label for="id_name">Name:</label> <input id="id_name" name="name" type="text" /> </p> <p> <label for="id_type">Type:</label> <select id="id_type" name="type"> <option value="" selected="selected">------</option> <option value="1">Olympic Games</option> <option value="2">ISU Championships</option> <option value="3">Grand Prix Series</option> </select> </p> A: Extending Javier's answer. Instead of customizing the signature of choices, which would fail mypy checking, it's better to use a custom property and change it only in the display options.
class EmptySelect(Select): @property def custom_choices(self): yield BLANK_CHOICE_DASH[0] yield from self.choices def optgroups(self, name, value, attrs=None): """Return a list of optgroups for this widget.""" groups = [] has_selected = False # START_CHANGES for index, (option_value, option_label) in enumerate(self.custom_choices): # END_CHANGES if option_value is None: option_value = "" subgroup = [] if isinstance(option_label, (list, tuple)): group_name = option_value subindex = 0 choices = option_label else: group_name = None subindex = None choices = [(option_value, option_label)] groups.append((group_name, subgroup, index)) for subvalue, sublabel in choices: selected = str(subvalue) in value and (not has_selected or self.allow_multiple_selected) has_selected |= selected subgroup.append( self.create_option( name, subvalue, sublabel, selected, index, subindex=subindex, attrs=attrs, ) ) if subindex is not None: subindex += 1 return groups And use this widget anywhere like: class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False, widget=EmptySelect) Note: Don't use type as a field name, as it's a Python built-in name; name it something else for good practice.
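For reference, a minimal runnable sketch of the BLANK_CHOICE_DASH approach from the second answer; the field name competition_type is an illustrative rename to avoid shadowing the built-in type, not from the thread:

from django import forms
from django.db.models.fields import BLANK_CHOICE_DASH

COMPETITION_TYPE_CHOICES = [
    (1, 'Olympic Games'),
    (2, 'ISU Championships'),
    (3, 'Grand Prix Series'),
]

class CompetitionSearchForm(forms.Form):
    name = forms.CharField(required=False)
    # Prepending Django's own blank choice keeps the choices list DRY;
    # an empty submission coerces to '' and required=False accepts it.
    competition_type = forms.ChoiceField(
        choices=BLANK_CHOICE_DASH + COMPETITION_TYPE_CHOICES,
        required=False,
    )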
ChoiceField doesn't display an empty label when using a tuple
What I'm trying to do I'm going to be keeping data about competitions in my database. I want to be able to search the competitions by certain criteria - competition type in particular. About competition types Competition types are kept in a tuple. A slightly shortened example: COMPETITION_TYPE_CHOICES = ( (1, 'Olympic Games'), (2, 'ISU Championships'), (3, 'Grand Prix Series'), ) These are used in the model like so (again - this is a shortened/simplified version of the model): class Competition(models.Model): name = models.CharField(max_length=256) type = models.IntegerField(choices=COMPETITION_TYPE_CHOICES) The search form I don't want the fields to be required in the search form, so the form is defined like this: class CompetitionSearchForm(forms.Form): name = forms.CharField(required=False) type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False) The problem I'd like the select widget in ChoiceField to display an empty label, but I don't get one. Any help with this would be much appreciated :)
[ "I've found a solution that works the way I want it to without violating the DRY principle. Not very clean, but it'll have to do I suppose.\nAccording to the documentation choices don't have to be a tuple:\n\nFinally, note that choices can be any\n iterable object -- not necessarily a\n list or tuple. This lets you construct\n choices dynamically. But if you find\n yourself hacking choices to be\n dynamic, you're probably better off\n using a proper database table with a\n ForeignKey. choices is meant for\n static data that doesn't change much,\n if ever.\n\nSo the solution I'm going with for the moment is:\nCOMPETITION_TYPE_CHOICES = [\n (1, 'Olympic Games'),\n (2, 'ISU Championships'),\n (3, 'Grand Prix Series'),\n]\n\nCOMP_TYPE_CHOICES_AND_EMPTY = [('','All')] + COMPETITION_TYPE_CHOICES\n\nAnd then:\nclass CompetitionSearchForm(forms.Form):\n name = forms.CharField(required=False)\n type = forms.ChoiceField(choices=COMP_TYPE_CHOICES_AND_EMPTY, required=False)\n\nThe model stays the same as it was.\n", "I tried both Monika's and Evgeniy's solutions with no success, but Monika has a good point in that the choices do not need to be tuples. Therefore, the easiest (and DRYest) solution is to simply do what Django does already in the Model Field. Simply add the blank choice and the tuples together after converting them to a list:\nfrom django.db.models.fields import BLANK_CHOICE_DASH\n\n...\n\ntype = forms.ChoiceField(choices=BLANK_CHOICE_DASH + list(COMPETITION_TYPE_CHOICES), required=False)\n\n", "Better choice is to update field choices in form init method\nCOMPETITION_TYPE_CHOICES = (\n (1, 'Olympic Games'),\n (2, 'ISU Championships'),\n (3, 'Grand Prix Series'),\n)\n\n\nclass CompetitionSearchForm(forms.Form):\n name = forms.CharField(required=False)\n type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False)\n\n def __init__(self, *args, **kwargs):\n super(CompetitionSearchForm, self).__init__(*args, **kwargs)\n self.fields['type'].choices.insert(0, ('','---------' ) )\n\n", "According to the documentation:\n\nEither an iterable (e.g., a list or tuple) of 2-tuples to use as choices for this field, or a callable that returns such an iterable. (https://docs.djangoproject.com/en/dev/ref/forms/fields/)\n\nSo, you can simple:\nsample_field = forms.ChoiceField(choices=(('', '---'),) + Model.YOUR_CHOICES)\n\n", "Try adding blank=True to the model fields (assuming that's the behavior you want), then changing the form to a ModelForm and removing the field definitions. Note that any fields for which you set blank=True won't be required when validating or saving the model. 
Again, this may not be what you want but if it is it'll allow Django to take care of a few things automatically.\nOtherwise just change your COMPETITION_TYPE_CHOICES to:\nCOMPETITION_TYPE_CHOICES = (\n ('', '---------'),\n ('1', 'Olympic Games'),\n ('2', 'ISU Championships'),\n ('3', 'Grand Prix Series'),\n)\n\n", "Just a small change to Evgeniy's answer that checks if the blank alternative is not already added.\nWithout the check (at least when running the builtin runserver) one extra empty label is added for each page reload.\nCOMPETITION_TYPE_CHOICES = (\n (1, 'Olympic Games'),\n (2, 'ISU Championships'),\n (3, 'Grand Prix Series'),\n)\n\nclass CompetitionSearchForm(forms.Form):\n name = forms.CharField(required=False)\n type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False)\n\n def __init__(self, *args, **kwargs):\n super(CompetitionSearchForm, self).__init__(*args, **kwargs)\n if not self.fields['type'].choices[0][0] == '':\n self.fields['type'].choices.insert(0, ('','---------' ) )\n\n", "Why don't you use ModelForm if you are already have model class?\nBest solution:\nforms.py\nclass CompetitionSearchForm(ModelForm):\n\n class Meta:\n model = Competition\n\nmodels.py\nclass Competition(models.Model):\n name = models.CharField(max_length=256)\n type = models.IntegerField(choices=COMPETITION_TYPE_CHOICES, default=COMPETITION_TYPE_CHOICES[0][0], blank=True)\n\nYou can set blank=False to remove empty_label from list\n", "A little late to the party..\nHow about not modifying the choices at all and just handling it with a widget?\nfrom django.db.models import BLANK_CHOICE_DASH\n\nclass EmptySelect(Select):\n empty_value = BLANK_CHOICE_DASH[0]\n empty_label = BLANK_CHOICE_DASH[1]\n\n @property\n def choices(self):\n yield (self.empty_value, self.empty_label,)\n for choice in self._choices:\n yield choice\n\n @choices.setter\n def choices(self, val):\n self._choices = val\n\nThen just call it:\nclass CompetitionSearchForm(forms.Form):\n name = forms.CharField(required=False)\n type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False, widget=EmptySelect)\n\nThis is what you end up with:\nprint(CompetitionSearchForm().as_p())\n<p>\n <label for=\"id_name\">Name:</label>\n <input id=\"id_name\" name=\"name\" type=\"text\" />\n</p>\n<p>\n <label for=\"id_type\">Type:</label>\n <select id=\"id_type\" name=\"type\">\n <option value=\"\" selected=\"selected\">------</option>\n <option value=\"1\">Olympic Games</option>\n <option value=\"2\">ISU Championships</option>\n <option value=\"3\">Grand Prix Series</option>\n </select>\n</p>\n\n", "Extending Javier's answer.\nInstead of customizing the signature of choices which would fail in mypy checking, its better to use a custom property and change it only in the display options.\nclass EmptySelect(Select):\n @property\n def custom_choices(self):\n yield BLANK_CHOICE_DASH[0]\n yield from self.choices\n\n def optgroups(self, name, value, attrs=None):\n \"\"\"Return a list of optgroups for this widget.\"\"\"\n groups = []\n has_selected = False\n # START_CHANGES\n for index, (option_value, option_label) in enumerate(self.custom_choices):\n # END_CHANGES\n if option_value is None:\n option_value = \"\"\n\n subgroup = []\n if isinstance(option_label, (list, tuple)):\n group_name = option_value\n subindex = 0\n choices = option_label\n else:\n group_name = None\n subindex = None\n choices = [(option_value, option_label)]\n groups.append((group_name, subgroup, index))\n\n for subvalue, sublabel in choices:\n selected = 
str(subvalue) in value and (not has_selected or self.allow_multiple_selected)\n has_selected |= selected\n subgroup.append(\n self.create_option(\n name,\n subvalue,\n sublabel,\n selected,\n index,\n subindex=subindex,\n attrs=attrs,\n )\n )\n if subindex is not None:\n subindex += 1\n return groups\n\nAnd use this widget anywhere like:\nclass CompetitionSearchForm(forms.Form):\n name = forms.CharField(required=False)\n type = forms.ChoiceField(choices=COMPETITION_TYPE_CHOICES,required=False, widget=EmptySelect)\n\nNote: Don't use type as a field name, as it's a Python built-in name; name it something else for good practice.\n" ]
[ 36, 32, 10, 8, 7, 6, 0, 0, 0 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0001765757_django_django_forms_python.txt
Q: python Or operator not working Sorry, I'm really new to Python. I'm trying to keep the cursor within a 100x100 box, but it doesn't do that; I'm still able to move it within a T shape spanning the whole screen rather than a box in the middle of it. It seems like it's just ignoring one of the variables. What this is supposed to do is simply detect whether the mouse has left the 100x100 area; the placeholder is simply so I can put something there later. pyautogui.moveTo(550,550) while True: mos = pyautogui.position() print(mos[0],mos[1]) if (500 < mos[0] < 600) or (500 < mos[1] < 600) : pass else: print('placeholder') print('f') I've gotten this to work, but I'm still confused why the first version doesn't work: pyautogui.moveTo(550,550) while True: mos = pyautogui.position() print(mos[0],mos[1]) if (500 < mos[0] < 600): pass else: print('placeholder') print('f') if (500 < mos[1] < 600): pass else: print('placeholder') print('f') A: OK, no clue why, but I fixed it by putting not in front of it: pyautogui.moveTo(550,550) while True: mos = pyautogui.position() print(mos[0],mos[1]) if not (500 < mos[0] < 600) or not(500 < mos[1] < 600): break A: Your first version should have been if (500 < mos[0] < 600) and (500 < mos[1] < 600) : Because you want your cursor to be within the box limits both on the x and y axis. Putting not in front of both conditions, and inverting the logic, as you did in the final version, gets the expected result because not (not A or not B) is logically the same as A and B (this is known as De Morgan's law) A: I think this way is more readable rather than cramming everything into the if conditions. You might want to consider adding a sleep so this process is not overworked. Lastly, "pass" serves no purpose here, since it just lets the loop fall through; in your case you would want to restart the loop, so we use "continue" instead. import time import pyautogui pyautogui.moveTo(550,550) while True: time.sleep(0.1) LeftRightPos = pyautogui.position()[0] TopDownPos = pyautogui.position()[1] if LeftRightPos<500 or LeftRightPos>600: print("LeftRight Out of Position") break elif TopDownPos<500 or TopDownPos>600: print("TopDown Out of Position") break else: continue
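For reference, a small self-contained check of the De Morgan equivalence the second answer describes; the helper name outside_box and the sample points are illustrative:

# not (A and B)  ==  (not A) or (not B)   -- De Morgan's law
def outside_box(x, y):
    inside = (500 < x < 600) and (500 < y < 600)
    return not inside

for x, y in [(550, 550), (550, 700), (700, 550)]:
    # Both formulations agree on every point.
    assert outside_box(x, y) == (not (500 < x < 600) or not (500 < y < 600))
    print((x, y), "outside" if outside_box(x, y) else "inside")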
python Or operator not working
Sorry, I'm really new to Python. I'm trying to keep the cursor within a 100x100 box, but it doesn't do that; I'm still able to move it within a T shape spanning the whole screen rather than a box in the middle of it. It seems like it's just ignoring one of the variables. What this is supposed to do is simply detect whether the mouse has left the 100x100 area; the placeholder is simply so I can put something there later. pyautogui.moveTo(550,550) while True: mos = pyautogui.position() print(mos[0],mos[1]) if (500 < mos[0] < 600) or (500 < mos[1] < 600) : pass else: print('placeholder') print('f') I've gotten this to work, but I'm still confused why the first version doesn't work: pyautogui.moveTo(550,550) while True: mos = pyautogui.position() print(mos[0],mos[1]) if (500 < mos[0] < 600): pass else: print('placeholder') print('f') if (500 < mos[1] < 600): pass else: print('placeholder') print('f')
[ "ok no clue but i fixed it by putting not infront of it\npyautogui.moveTo(550,550)\n\nwhile True:\n mos = pyautogui.position()\n print(mos[0],mos[1])\n if not (500 < mos[0] < 600) or not(500 < mos[1] < 600):\n break\n\n", "Your first version should have been\nif (500 < mos[0] < 600) and (500 < mos[1] < 600) :\n\nBecause you want your cursor to be within the box limits both on the x and y axis.\nPutting not in front of both conditions, and inverting the logic, as you did in the final version, gets the expected result because not (not A or not B) is logically the same as A and B (this is known as the De Morgan law)\n", "I think this way is more readable rather than cramping everything into the if conditions.\nYou might want to consider adding a sleep to not overwork this process.\nLastly, the usage of \"pass\" serves no purpose here as it's meant to allow continuation of the loop. Whereas for your case you would want to restart the loop. Thus, we use \"continue\" instead.\nimport time\nimport pyautogui\n\npyautogui.moveTo(550,550)\n\nwhile True:\n time.sleep(0.1)\n LeftRightPos = pyautogui.position()[0]\n TopDownPos = pyautogui.position()[1]\n if LeftRightPos<500 or LeftRightPos>600:\n print \"LeftRight Out of Position\"\n break\n elif TopDownPos<500 or TopDownPos>600:\n print \"TopDown Out of Position\"\n break\n else:\n continue\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074527424_python.txt
Q: Split a string at uppercase letters What is the pythonic way to split a string before the occurrences of a given set of characters? For example, I want to split 'TheLongAndWindingRoad' at any occurrence of an uppercase letter (possibly except the first), and obtain ['The', 'Long', 'And', 'Winding', 'Road']. Edit: It should also split single occurrences, i.e. from 'ABC' I'd like to obtain ['A', 'B', 'C']. A: Unfortunately it's not possible to split on a zero-width match in Python. But you can use re.findall instead: >>> import re >>> re.findall('[A-Z][^A-Z]*', 'TheLongAndWindingRoad') ['The', 'Long', 'And', 'Winding', 'Road'] >>> re.findall('[A-Z][^A-Z]*', 'ABC') ['A', 'B', 'C'] A: Here is an alternative regex solution. The problem can be rephrased as "how do I insert a space before each uppercase letter, before doing the split": >>> s = "TheLongAndWindingRoad ABC A123B45" >>> re.sub( r"([A-Z])", r" \1", s).split() ['The', 'Long', 'And', 'Winding', 'Road', 'A', 'B', 'C', 'A123', 'B45'] This has the advantage of preserving all non-whitespace characters, which most other solutions do not. A: Use a lookahead and a lookbehind: In Python 3.7, you can do this: re.split('(?<=.)(?=[A-Z])', 'TheLongAndWindingRoad') And it yields: ['The', 'Long', 'And', 'Winding', 'Road'] You need the look-behind to avoid an empty string at the beginning. A: >>> import re >>> re.findall('[A-Z][a-z]*', 'TheLongAndWindingRoad') ['The', 'Long', 'And', 'Winding', 'Road'] >>> re.findall('[A-Z][a-z]*', 'SplitAString') ['Split', 'A', 'String'] >>> re.findall('[A-Z][a-z]*', 'ABC') ['A', 'B', 'C'] If you want "It'sATest" to split to ["It's", 'A', 'Test'] change the regex to "[A-Z][a-z']*" A: A variation on @ChristopheD 's solution s = 'TheLongAndWindingRoad' pos = [i for i,e in enumerate(s+'A') if e.isupper()] parts = [s[pos[j]:pos[j+1]] for j in xrange(len(pos)-1)] print parts A: I think that a better answer might be to split the string up into words that do not end in a capital. This would handle the case where the string doesn't start with a capital letter. re.findall('.[^A-Z]*', 'aboutTheLongAndWindingRoad') example: >>> import re >>> re.findall('.[^A-Z]*', 'aboutTheLongAndWindingRoadABC') ['about', 'The', 'Long', 'And', 'Winding', 'Road', 'A', 'B', 'C'] A: import re filter(None, re.split("([A-Z][^A-Z]*)", "TheLongAndWindingRoad")) or [s for s in re.split("([A-Z][^A-Z]*)", "TheLongAndWindingRoad") if s] A: Pythonic way could be: "".join([(" "+i if i.isupper() else i) for i in 'TheLongAndWindingRoad']).strip().split() ['The', 'Long', 'And', 'Winding', 'Road'] Works well for Unicode, avoiding re/re2.
"".join([(" "+i if i.isupper() else i) for i in 'ะกัƒะฟะตั€ะœะฐั€ะบะตั‚ั‹ะŸั€ะพะดะฐะถะฐะšะปะธะตะฝั‚']).strip().split() ['ะกัƒะฟะตั€', 'ะœะฐั€ะบะตั‚ั‹', 'ะŸั€ะพะดะฐะถะฐ', 'ะšะปะธะตะฝั‚'] A: src = 'TheLongAndWindingRoad' glue = ' ' result = ''.join(glue + x if x.isupper() else x for x in src).strip(glue).split(glue) A: Another without regex and the ability to keep contiguous uppercase if wanted def split_on_uppercase(s, keep_contiguous=False): """ Args: s (str): string keep_contiguous (bool): flag to indicate we want to keep contiguous uppercase chars together Returns: """ string_length = len(s) is_lower_around = (lambda: s[i-1].islower() or string_length > (i + 1) and s[i + 1].islower()) start = 0 parts = [] for i in range(1, string_length): if s[i].isupper() and (not keep_contiguous or is_lower_around()): parts.append(s[start: i]) start = i parts.append(s[start:]) return parts >>> split_on_uppercase('theLongWindingRoad') ['the', 'Long', 'Winding', 'Road'] >>> split_on_uppercase('TheLongWindingRoad') ['The', 'Long', 'Winding', 'Road'] >>> split_on_uppercase('TheLongWINDINGRoadT', True) ['The', 'Long', 'WINDING', 'Road', 'T'] >>> split_on_uppercase('ABC') ['A', 'B', 'C'] >>> split_on_uppercase('ABCD', True) ['ABCD'] >>> split_on_uppercase('') [''] >>> split_on_uppercase('hello world') ['hello world'] A: Alternative solution (if you dislike explicit regexes): s = 'TheLongAndWindingRoad' pos = [i for i,e in enumerate(s) if e.isupper()] parts = [] for j in xrange(len(pos)): try: parts.append(s[pos[j]:pos[j+1]]) except IndexError: parts.append(s[pos[j]:]) print parts A: Replace every uppercase letter 'L' in the given with an empty space plus that letter " L". We can do this using list comprehension or we can define a function to do it as follows. s = 'TheLongANDWindingRoad ABC A123B45' ''.join([char if (char.islower() or not char.isalpha()) else ' '+char for char in list(s)]).strip().split() >>> ['The', 'Long', 'A', 'N', 'D', 'Winding', 'Road', 'A', 'B', 'C', 'A123', 'B45'] If you choose to go by a function, here is how. def splitAtUpperCase(text): result = "" for char in text: if char.isupper(): result += " " + char else: result += char return result.split() In the case of the given example: print(splitAtUpperCase('TheLongAndWindingRoad')) >>>['The', 'Long', 'A', 'N', 'D', 'Winding', 'Road'] But most of the time that we are splitting a sentence at upper case letters, it is usually the case that we want to maintain abbreviations that are typically a continuous stream of uppercase letters. The code below would help. def splitAtUpperCase(s): for i in range(len(s)-1)[::-1]: if s[i].isupper() and s[i+1].islower(): s = s[:i]+' '+s[i:] if s[i].isupper() and s[i-1].islower(): s = s[:i]+' '+s[i:] return s.split() splitAtUpperCase('TheLongANDWindingRoad') >>> ['The', 'Long', 'AND', 'Winding', 'Road'] Thanks. A: An alternative way without using regex or enumerate: word = 'TheLongAndWindingRoad' list = [x for x in word] for char in list: if char != list[0] and char.isupper(): list[list.index(char)] = ' ' + char fin_list = ''.join(list).split(' ') I think it is clearer and simpler without chaining too many methods or using a long list comprehension that can be difficult to read. A: This is possible with the more_itertools.split_before tool. import more_itertools as mit iterable = "TheLongAndWindingRoad" [ "".join(i) for i in mit.split_before(iterable, pred=lambda s: s.isupper())] # ['The', 'Long', 'And', 'Winding', 'Road'] It should also split single occurrences, i.e. 
from 'ABC' I'd like to obtain ['A', 'B', 'C']. iterable = "ABC" [ "".join(i) for i in mit.split_before(iterable, pred=lambda s: s.isupper())] # ['A', 'B', 'C'] more_itertools is a third-party package with 60+ useful tools including implementations for all of the original itertools recipes, which obviates their manual implementation. A: An alternate way using enumerate and isupper() Code: strs = 'TheLongAndWindingRoad' ind =0 count =0 new_lst=[] for index, val in enumerate(strs[1:],1): if val.isupper(): new_lst.append(strs[ind:index]) ind=index if ind<len(strs): new_lst.append(strs[ind:]) print new_lst Output: ['The', 'Long', 'And', 'Winding', 'Road'] A: Sharing what came to mind when I read the post. Different from other posts. strs = 'TheLongAndWindingRoad' # grab index of uppercase letters in strs start_idx = [i for i,j in enumerate(strs) if j.isupper()] # create empty list strs_list = [] # initiate counter cnt = 1 for pos in start_idx: start_pos = pos # use counter to grab next positional element and overlook IndexError try: end_pos = start_idx[cnt] except IndexError: continue # append to empty list strs_list.append(strs[start_pos:end_pos]) cnt += 1 A: You might also wanna do it this way def camelcase(s): words = [] for char in s: if char.isupper(): words.append(':'+char) else: words.append(char) words = ((''.join(words)).split(':')) return words This will output as follows s = 'oneTwoThree' print(camelcase(s)) # ['one', 'Two', 'Three'] A: def solution(s): st = '' for c in s: if c == c.upper(): st += ' ' st += c return st A: I'm using list def split_by_upper(x): i = 0 lis = list(x) while True: if i == len(lis)-1: if lis[i].isupper(): lis.insert(i,",") break if lis[i].isupper() and i != 0: lis.insert(i,",") i+=1 i+=1 return "".join(lis).split(",") OUTPUT: data = "TheLongAndWindingRoad" print(split_by_upper(data)) >> ['The', 'Long', 'And', 'Winding', 'Road'] A: My solution for splitting on capitalized letters - keeps capitalized words text = 'theLongAndWindingRoad ABC' result = re.sub('(?<=.)(?=[A-Z][a-z])', r" ", text).split() print(result) #['the', 'Long', 'And', 'Winding', 'Road', 'ABC'] A: A little late to the party, but: In [1]: camel = "CamelCaseConfig" In [2]: parts = "".join([ f"|{c}" if c.isupper() else c for c in camel ]).lstrip("|").split("|") In [3]: screaming_snake = "_".join([ part.upper() for part in parts ]) In [4]: screaming_snake Out[4]: 'CAMEL_CASE_CONFIG' part of my answer is based on other people's answers from here
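For reference, a self-contained sketch that checks the two regex approaches from the top answers against the question's own examples (the asserts are illustrative test cases, not from the thread):

import re

s = 'TheLongAndWindingRoad'

# Each match starts at an uppercase letter and runs until the next one.
assert re.findall('[A-Z][^A-Z]*', s) == ['The', 'Long', 'And', 'Winding', 'Road']
assert re.findall('[A-Z][^A-Z]*', 'ABC') == ['A', 'B', 'C']

# Since Python 3.7 re.split also accepts a zero-width lookaround pattern;
# the lookbehind (?<=.) prevents an empty leading string.
assert re.split('(?<=.)(?=[A-Z])', s) == ['The', 'Long', 'And', 'Winding', 'Road']
assert re.split('(?<=.)(?=[A-Z])', 'ABC') == ['A', 'B', 'C']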
Split a string at uppercase letters
What is the pythonic way to split a string before the occurrences of a given set of characters? For example, I want to split 'TheLongAndWindingRoad' at any occurrence of an uppercase letter (possibly except the first), and obtain ['The', 'Long', 'And', 'Winding', 'Road']. Edit: It should also split single occurrences, i.e. from 'ABC' I'd like to obtain ['A', 'B', 'C'].
[ "Unfortunately it's not possible to split on a zero-width match in Python. But you can use re.findall instead:\n>>> import re\n>>> re.findall('[A-Z][^A-Z]*', 'TheLongAndWindingRoad')\n['The', 'Long', 'And', 'Winding', 'Road']\n>>> re.findall('[A-Z][^A-Z]*', 'ABC')\n['A', 'B', 'C']\n\n", "Here is an alternative regex solution. The problem can be reprased as \"how do I insert a space before each uppercase letter, before doing the split\":\n>>> s = \"TheLongAndWindingRoad ABC A123B45\"\n>>> re.sub( r\"([A-Z])\", r\" \\1\", s).split()\n['The', 'Long', 'And', 'Winding', 'Road', 'A', 'B', 'C', 'A123', 'B45']\n\nThis has the advantage of preserving all non-whitespace characters, which most other solutions do not.\n", "Use a lookahead and a lookbehind:\nIn Python 3.7, you can do this:\nre.split('(?<=.)(?=[A-Z])', 'TheLongAndWindingRoad')\n\nAnd it yields:\n['The', 'Long', 'And', 'Winding', 'Road']\n\nYou need the look-behind to avoid an empty string at the beginning.\n", ">>> import re\n>>> re.findall('[A-Z][a-z]*', 'TheLongAndWindingRoad')\n['The', 'Long', 'And', 'Winding', 'Road']\n\n>>> re.findall('[A-Z][a-z]*', 'SplitAString')\n['Split', 'A', 'String']\n\n>>> re.findall('[A-Z][a-z]*', 'ABC')\n['A', 'B', 'C']\n\nIf you want \"It'sATest\" to split to [\"It's\", 'A', 'Test'] change the rexeg to \"[A-Z][a-z']*\"\n", "A variation on @ChristopheD 's solution\ns = 'TheLongAndWindingRoad'\n\npos = [i for i,e in enumerate(s+'A') if e.isupper()]\nparts = [s[pos[j]:pos[j+1]] for j in xrange(len(pos)-1)]\n\nprint parts\n\n", "I think that a better answer might be to split the string up into words that do not end in a capital. This would handle the case where the string doesn't start with a capital letter.\n re.findall('.[^A-Z]*', 'aboutTheLongAndWindingRoad')\n\nexample:\n>>> import re\n>>> re.findall('.[^A-Z]*', 'aboutTheLongAndWindingRoadABC')\n['about', 'The', 'Long', 'And', 'Winding', 'Road', 'A', 'B', 'C']\n\n", "import re\nfilter(None, re.split(\"([A-Z][^A-Z]*)\", \"TheLongAndWindingRoad\"))\n\nor\n[s for s in re.split(\"([A-Z][^A-Z]*)\", \"TheLongAndWindingRoad\") if s]\n\n", "Pythonic way could be:\n\"\".join([(\" \"+i if i.isupper() else i) for i in 'TheLongAndWindingRoad']).strip().split()\n['The', 'Long', 'And', 'Winding', 'Road']\n\nWorks good for Unicode, avoiding re/re2.\n\"\".join([(\" \"+i if i.isupper() else i) for i in 'ะกัƒะฟะตั€ะœะฐั€ะบะตั‚ั‹ะŸั€ะพะดะฐะถะฐะšะปะธะตะฝั‚']).strip().split()\n['ะกัƒะฟะตั€', 'ะœะฐั€ะบะตั‚ั‹', 'ะŸั€ะพะดะฐะถะฐ', 'ะšะปะธะตะฝั‚']\n\n", "src = 'TheLongAndWindingRoad'\nglue = ' '\n\nresult = ''.join(glue + x if x.isupper() else x for x in src).strip(glue).split(glue)\n\n", "Another without regex and the ability to keep contiguous uppercase if wanted\ndef split_on_uppercase(s, keep_contiguous=False):\n \"\"\"\n\n Args:\n s (str): string\n keep_contiguous (bool): flag to indicate we want to \n keep contiguous uppercase chars together\n\n Returns:\n\n \"\"\"\n\n string_length = len(s)\n is_lower_around = (lambda: s[i-1].islower() or \n string_length > (i + 1) and s[i + 1].islower())\n\n start = 0\n parts = []\n for i in range(1, string_length):\n if s[i].isupper() and (not keep_contiguous or is_lower_around()):\n parts.append(s[start: i])\n start = i\n parts.append(s[start:])\n\n return parts\n\n>>> split_on_uppercase('theLongWindingRoad')\n['the', 'Long', 'Winding', 'Road']\n>>> split_on_uppercase('TheLongWindingRoad')\n['The', 'Long', 'Winding', 'Road']\n>>> split_on_uppercase('TheLongWINDINGRoadT', True)\n['The', 'Long', 'WINDING', 'Road', 'T']\n>>> 
split_on_uppercase('ABC')\n['A', 'B', 'C']\n>>> split_on_uppercase('ABCD', True)\n['ABCD']\n>>> split_on_uppercase('')\n['']\n>>> split_on_uppercase('hello world')\n['hello world']\n\n", "Alternative solution (if you dislike explicit regexes):\ns = 'TheLongAndWindingRoad'\n\npos = [i for i,e in enumerate(s) if e.isupper()]\n\nparts = []\nfor j in xrange(len(pos)):\n try:\n parts.append(s[pos[j]:pos[j+1]])\n except IndexError:\n parts.append(s[pos[j]:])\n\nprint parts\n\n", "Replace every uppercase letter 'L' in the given with an empty space plus that letter \" L\". We can do this using list comprehension or we can define a function to do it as follows.\ns = 'TheLongANDWindingRoad ABC A123B45'\n''.join([char if (char.islower() or not char.isalpha()) else ' '+char for char in list(s)]).strip().split()\n>>> ['The', 'Long', 'A', 'N', 'D', 'Winding', 'Road', 'A', 'B', 'C', 'A123', 'B45']\n\nIf you choose to go by a function, here is how.\ndef splitAtUpperCase(text):\n result = \"\"\n for char in text:\n if char.isupper():\n result += \" \" + char\n else:\n result += char\n return result.split()\n\nIn the case of the given example:\nprint(splitAtUpperCase('TheLongAndWindingRoad')) \n>>>['The', 'Long', 'A', 'N', 'D', 'Winding', 'Road']\n\nBut most of the time that we are splitting a sentence at upper case letters, it is usually the case that we want to maintain abbreviations that are typically a continuous stream of uppercase letters. The code below would help.\ndef splitAtUpperCase(s):\n for i in range(len(s)-1)[::-1]:\n if s[i].isupper() and s[i+1].islower():\n s = s[:i]+' '+s[i:]\n if s[i].isupper() and s[i-1].islower():\n s = s[:i]+' '+s[i:]\n return s.split()\n\nsplitAtUpperCase('TheLongANDWindingRoad')\n\n>>> ['The', 'Long', 'AND', 'Winding', 'Road']\n\nThanks.\n", "An alternative way without using regex or enumerate:\nword = 'TheLongAndWindingRoad'\nlist = [x for x in word]\n\nfor char in list:\n if char != list[0] and char.isupper():\n list[list.index(char)] = ' ' + char\n\nfin_list = ''.join(list).split(' ')\n\nI think it is clearer and simpler without chaining too many methods or using a long list comprehension that can be difficult to read.\n", "This is possible with the more_itertools.split_before tool.\nimport more_itertools as mit\n\n\niterable = \"TheLongAndWindingRoad\"\n[ \"\".join(i) for i in mit.split_before(iterable, pred=lambda s: s.isupper())]\n# ['The', 'Long', 'And', 'Winding', 'Road']\n\n\nIt should also split single occurrences, i.e. from 'ABC' I'd like to obtain ['A', 'B', 'C'].\n\niterable = \"ABC\"\n[ \"\".join(i) for i in mit.split_before(iterable, pred=lambda s: s.isupper())]\n# ['A', 'B', 'C']\n\nmore_itertools is a third-party package with 60+ useful tools including implementations for all of the original itertools recipes, which obviates their manual implementation.\n", "An alternate way using enumerate and isupper()\nCode:\nstrs = 'TheLongAndWindingRoad'\nind =0\ncount =0\nnew_lst=[]\nfor index, val in enumerate(strs[1:],1):\n if val.isupper():\n new_lst.append(strs[ind:index])\n ind=index\nif ind<len(strs):\n new_lst.append(strs[ind:])\nprint new_lst\n\nOutput:\n['The', 'Long', 'And', 'Winding', 'Road']\n\n", "Sharing what came to mind when I read the post. 
Different from other posts.\nstrs = 'TheLongAndWindingRoad'\n\n# grab index of uppercase letters in strs\nstart_idx = [i for i,j in enumerate(strs) if j.isupper()]\n\n# create empty list\nstrs_list = []\n\n# initiate counter\ncnt = 1\n\nfor pos in start_idx:\n start_pos = pos\n\n # use counter to grab next positional element and overlook IndexError\n try:\n end_pos = start_idx[cnt]\n except IndexError:\n continue\n\n # append to empty list\n strs_list.append(strs[start_pos:end_pos])\n\n cnt += 1\n\n", "You might also wanna do it this way\ndef camelcase(s):\n \n words = []\n \n for char in s:\n if char.isupper():\n words.append(':'+char)\n else:\n words.append(char)\n words = ((''.join(words)).split(':'))\n \n return words\n\nThis will output as follows\ns = 'oneTwoThree'\nprint(camelcase(s))\n# ['one', 'Two', 'Three']\n\n", "def solution(s):\n \n st = ''\n for c in s:\n if c == c.upper():\n st += ' ' \n st += c \n \n return st\n\n", "I'm using list\ndef split_by_upper(x):\n    i = 0\n    lis = list(x)\n    while True:\n        if i == len(lis)-1:\n            if lis[i].isupper():\n                lis.insert(i,\",\")\n            break\n        if lis[i].isupper() and i != 0:\n            lis.insert(i,\",\")\n            i+=1\n        i+=1\n    return \"\".join(lis).split(\",\")\n\nOUTPUT:\ndata = \"TheLongAndWindingRoad\"\nprint(split_by_upper(data))\n>> ['The', 'Long', 'And', 'Winding', 'Road']\n\n", "My solution for splitting on capitalized letters - keeps capitalized words\ntext = 'theLongAndWindingRoad ABC'\nresult = re.sub('(?<=.)(?=[A-Z][a-z])', r\" \", text).split()\nprint(result)\n#['the', 'Long', 'And', 'Winding', 'Road', 'ABC']\n\n", "A little late to the party, but:\nIn [1]: camel = \"CamelCaseConfig\"\nIn [2]: parts = \"\".join([\n f\"|{c}\" if c.isupper() else c\n for c in camel\n]).lstrip(\"|\").split(\"|\")\nIn [3]: screaming_snake = \"_\".join([\n part.upper()\n for part in parts\n])\nIn [4]: screaming_snake\nOut[4]: 'CAMEL_CASE_CONFIG'\n\npart of my answer is based on other people's answers from here\n" ]
[ 180, 42, 23, 20, 14, 10, 6, 6, 5, 5, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "python", "regex", "string" ]
stackoverflow_0002277352_python_regex_string.txt
Q: How to return the value of a while loop counter I want to return the counters of the while loops, i and b, after every loop repetition, to use in another function. I haven't found anything related to returning these values. def dis(reps, towards, back): t = 0 # move towards t times b = 0 # move back b times i = 0 # repetitions n = 1 # counts every step while i < reps: while t < towards: do_something() t += 1 n += 1 while b < back: do_something() n += 1 b += 1 t = 0 b = 0 i += 1 I thought of adding another variable like counter_towards and counter_back and adding a value each time, but that would not fix my problem since I would still have to return these values every rep. Adding the related functions to a class might work as well, but that would be a lot of work, and I thought there might be an easier answer to this question. A: Have a look at Python generators. Here's a link!
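Following up on the answer's pointer, a sketch of how the function could yield its counters instead of returning once; the for/range rewrite, the do_something stub, and the shape of the yielded tuple are illustrative choices, not from the thread:

def do_something():
    pass  # stand-in for the real work

def dis(reps, towards, back):
    n = 1  # counts every step
    for i in range(reps):
        for t in range(towards):
            do_something()
            n += 1
            yield i, t, None, n  # hand the counters back mid-loop
        for b in range(back):
            do_something()
            n += 1
            yield i, None, b, n

# The caller receives the counters after every step instead of waiting
# for the whole loop to finish:
for i, t, b, n in dis(reps=2, towards=3, back=1):
    print(i, t, b, n)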
How to return the value of a while loop counter
I want to return the counters of the while loops, i and b, after every loop repetition, to use in another function. I haven't found anything related to returning these values. def dis(reps, towards, back): t = 0 # move towards t times b = 0 # move back b times i = 0 # repetitions n = 1 # counts every step while i < reps: while t < towards: do_something() t += 1 n += 1 while b < back: do_something() n += 1 b += 1 t = 0 b = 0 i += 1 I thought of adding another variable like counter_towards and counter_back and adding a value each time, but that would not fix my problem since I would still have to return these values every rep. Adding the related functions to a class might work as well, but that would be a lot of work, and I thought there might be an easier answer to this question.
[ "You have to watch on python Generators. Here's a link!\n" ]
[ 1 ]
[]
[]
[ "loops", "python", "return", "while_loop" ]
stackoverflow_0074529177_loops_python_return_while_loop.txt
Q: Complex queryset with django content type model I have a set of models that contain content that is created and contributed by users. Model User: class User(models.Model): first_name = models.CharField(max_length=30, blank=True) last_name = models.CharField(max_length=150, blank=True) is_active = models.BooleanField(default=True) Model Tip: class Tip(models.Model): title = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) Model Example: class Example(models.Model): headline = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) Model Struggle: class Struggle(models.Model): headline = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) and model UserContribution class UserContribution(models.Model): id = models.AutoField(primary_key=True) contributed_by = models.ForeignKey( settings.AUTH_USER_MODEL, verbose_name="User that contributed the object", on_delete=models.CASCADE ) contributed_at = models.DateTimeField(auto_now_add=True) object_id = models.PositiveIntegerField( help_text="Primary key of the model", ) content_type = models.ForeignKey( ContentType, on_delete=models.CASCADE ) I want to be able to select a set of users and list the contribution objects they have contributed (created or updated). For example, [ { "user_id": 1, "first_name": "A", "last_name": "B", "tips": [ { "id": 1, "title": "abc", "content": "bcd", "contributed_at": "2021-08-10" }, { "id": 2, "title": "eabc", "content": "abcd", "contributed_at": "2021-08-09" } ], "examples": [ { "id": 1, "headline": "abc", "content": "bcd", "contributed_at": "2021-08-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contributed_at": "2021-08-09" } ], "struggles": [ { "id": 1, "headline": "abc", "content": "bcd", "contributed_at": "2021-08-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contributed_at": "2021-08-02" } ] }, { "user_id": 2, "first_name": "C", "last_name": "D", "tips": [ { "id": 1, "title": "abc", "content": "bcd", "contributed_at": "2021-09-09" }, { "id": 3, "title": "eabc", "content": "abcd", "contributed_at": "2021-09-02" } ], "examples": [ { "id": 1, "headline": "abc", "content": "bcd", "contributed_at": "2021-09-10" }, { "id": 3, "headline": "eabc", "content": "abcd", "contributed_at": "2021-08-09" } ], "struggles": [ { "id": 1, "headline": "abc", "content": "bcd", "contributed_at": "2021-09-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contributed_at": "2021-08-09" } ] } ] Is there a specific way this can be achieved using Django's ORM, or do I have to use a raw SQL query? And what would the most efficient way be to achieve this in raw SQL? A: I think you can get most of this by writing an admin class for your models and creating list_filter entries to access the content of child or sibling models, assuming the UserContribution model is the 'parent' model. As an example, in your main app's admin.py create an admin model for UserContribution and register it: eg.
#admin.py @admin.register(UserContribution) class UserContributionAdmin(admin.ModelAdmin): ordering=('contributed_by','object_id',) list_display = ('id','contributed_by','object_id','contributed_at', 'content_type',) search_fields = ('contributed_by','content_type',) readonly_fields=('contributed_by','id',) actions = [export_as_csv] #you will need to define this export_as_csv function above the current one if you require it filter_horizontal = () # below is where you create the filters that will appear on the admin panel list_filter = ('contributed_by','content_type','struggle__headline','example__headline','tip__title',) fieldsets = () list_per_page=20 These filters should appear in your admin panel for the app and create querysets which you can access from the 'actions' you will create, or you can just set up the display to show what you are looking for. The format of the items in '' is key to accessing the data fields, depending on the model you create this admin function for. This example is for the UserContribution [UC] model, so all of its model fields are accessed directly. To access a child model field (or related table) use: 'modelname__fieldname', separated with a double underscore, all lowercase, as shown above. You can even access a child field from another child field via the common parent! For example, in class TipAdmin(admin.ModelAdmin) you can write a filter to show the Tips of those who had a specific Struggle by accessing the parent, eg list_filter=('usercontribution__struggle__headline',) Very cool! If the filters have too many options you can reformat them with built-in dropdown filters that can be imported, such as from more_admin_filters import MultiSelectDropdownFilter from django_admin_listfilter_dropdown.filters import DropdownFilter #these go with the filter instantiations list_filter=('contributed_by',('content_type',DropdownFilter),) voilà!
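Since the question itself asks about the ORM rather than the admin, here is a hedged sketch of one way to gather the contributions per user without raw SQL, using one query for the contribution rows plus one in_bulk query per content type; the function name, the grouping shape, and the assumption that the models are importable are all illustrative:

from collections import defaultdict

def contributions_for(users):
    """Group each user's contributions by model, fetching objects in bulk."""
    rows = (UserContribution.objects
            .filter(contributed_by__in=users)
            .select_related('content_type'))

    # Collect the object ids per content type so each model is hit once.
    ids_by_type = defaultdict(set)
    for row in rows:  # the queryset is cached after this first evaluation
        ids_by_type[row.content_type].add(row.object_id)
    objects = {
        ct: ct.model_class().objects.in_bulk(ids)
        for ct, ids in ids_by_type.items()
        if ct.model_class() is not None  # skip stale content types
    }

    # result[user_id]['tip'] -> [(Tip instance, contributed_at), ...]
    result = defaultdict(lambda: defaultdict(list))
    for row in rows:
        obj = objects.get(row.content_type, {}).get(row.object_id)
        if obj is not None:
            result[row.contributed_by_id][row.content_type.model].append(
                (obj, row.contributed_at))
    return result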
Complex queryset with django content type model
I have a set of models that contain content that is created and contributed by users. Model User: class User(models.Model): first_name = models.CharField(max_length=30, blank=True) last_name = models.CharField(max_length=150, blank=True) is_active = models.BooleanField(default=True) Model Tip: class Tip(models.Model): title = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) Model Example: class Example(models.Model): headline = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) Model Struggle: class Struggle(models.Model): headline = models.CharField(max_length=30, blank=True) content = models.CharField(max_length=150, blank=True) and model UserContribution class UserContribution(models.Model): id = models.AutoField(primary_key=True) contributed_by = models.ForeignKey( settings.AUTH_USER_MODEL, verbose_name="User that contributed the object", on_delete=models.CASCADE ) contributed_at = models.DateTimeField(auto_now_add=True) object_id = models.PositiveIntegerField( help_text="Primary key of the model", ) content_type = models.ForeignKey( ContentType, on_delete=models.CASCADE ) I want to be able to select a set of users and list the contribution objects they have contributed (created or updated). For example, [ { "user_id": 1, "first_name": "A", "last_name": "B", "tips": [ { "id": 1, "title": "abc", "content": "bcd", "contibuted_at": "2021-08-10" }, { "id": 2, "title": "eabc", "content": "abcd", "contibuted_at": "2021-08-09" } ], "examples": [ { "id": 1, "headline": "abc", "content": "bcd", "contibuted_at": "2021-08-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contibuted_at": "2021-08-09" } ], "struggles": [ { "id": 1, "headline": "abc", "content": "bcd", "contibuted_at": "2021-08-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contibuted_at": "2021-08-02" } ] }, { "user_id": 2, "first_name": "C", "last_name": "D", "tips": [ { "id": 1, "title": "abc", "content": "bcd", "contibuted_at": "2021-09-09" }, { "id": 3, "title": "eabc", "content": "abcd", "contibuted_at": "2021-09-02" } ], "examples": [ { "id": 1, "headline": "abc", "content": "bcd", "contibuted_at": "2021-09-10" }, { "id": 3, "headline": "eabc", "content": "abcd", "contibuted_at": "2021-08-09" } ], "struggles": [ { "id": 1, "headline": "abc", "content": "bcd", "contibuted_at": "2021-09-10" }, { "id": 2, "headline": "eabc", "content": "abcd", "contibuted_at": "2021-08-09" } ] } ] Is there a specific way this can be achieved using Django's ORM, or do I have to use a raw SQL query? And what would the most efficient way be to achieve this in raw SQL?
[ "I think you can get most of this by writing an admin class object for your models and create list_filter to access the content of child or sibling models. Assuming the UserContribution model is the 'parent' model. As example in your main app's admin.py create a admin model for UserContribution and register it: eg.\n\n#admin.py\n\n @admin.register(UserContribution)\n class UserContributionAdmin(admin.ModelAdmin):\n ordering=('contributed_by','object_id',)\n list_display = ('id','contributed_by','object_id','contributed_at', 'content_type',)\n search_fields = ('contributed_by','content_type',)\n readonly_fields=('contributed_by','id',)\n actions = [export_as_csv] #you will need to define this export_as_csv function above the current one if you require it\n\n filter_horizontal = ()\n # below is where you create the filters that will appear on the admin panel\n list_filter = ('contributed_by','content_type','struggle__headline','example__headline','tip__title',)\n fieldsets = ()\n list_per_page=20\n\nThese filters should appear in your admin panel for the app and create querysets which you can access from the 'actions' you will create or you can just set up the display to show what your are looking for. The format of the items in '' is key to accessing the data fields depending on the model you create this admin function for. This example is for the UserContribution[UC] model so all of its model fields are accessed directly. To access a child model field (or related table) use: 'modelname__fieldname' separated with double underscore all lowercase as shown above. You can even access a child field from another child field via the common parent! For example in class TipAdmin(admin.ModelAdmin) you can write a filter to show the Tips of those who had a specific Struggle by accessing the parent eg\nlist_filter=('usercontribution__struggle__headline',)\n\nVCool! If the filters have too many options you can reformat them with built in dropdown filters that can be imported such as\n from more_admin_filters import MultiSelectDropdownFilter\n from django_admin_listfilter_dropdown.filters import DropdownFilter\n #these go with the filter instantiations\n list_filter=('contributed_by',('content_type',DropdownFilter),)\n\nviola!\n" ]
[ 0 ]
[]
[]
[ "django", "django_contenttypes", "django_models", "django_queryset", "python" ]
stackoverflow_0069503835_django_django_contenttypes_django_models_django_queryset_python.txt
Q: Pandas Merging Multiple Columns at the Same Between Two Dataframes I'm trying to find a way to merge in multiple columns at the same time with Pandas. I have the output I want by doing five separate merges, but it feels like there should be a more pythonic way to do it. Essentially I have a dataframe with five keyword columns in a dataframe called df_striking which I'm trying to merge in search volume data from another dataframe (called df_keyword_vol) into adjacent rows. Minimum Reproducible Example: import pandas as pd striking_data = { "KW1": ["nectarine", "apricot", "plum"], "KW1 Vol": ["", "", ""], "KW2": ["apple", "orange", "pear"], "KW2 Vol": ["", "", ""], "KW3": ["banana", "grapefruit", "cherry"], "KW3 Vol": ["", "", ""], "KW4": ["kiwi", "lemon", "peach"], "KW4 Vol": ["", "", ""], "KW5": ["raspberry", "blueberry", "berries"], "KW5 Vol": ["", "", ""], } df_striking = pd.DataFrame(striking_data) keyword_vol_data = { "Keyword": [ "nectarine", "apricot", "plum", "apple", "orange", "pear", "banana", "grapefruit", "cherry", "kiwi", "lemon", "peach", "raspberry", "blueberry", "berries", ], "Volume": [ 1000, 500, 200, 600, 800, 1000, 450, 10, 900, 1200, 150, 700, 400, 850, 1000, ], } df_keyword_vol = pd.DataFrame(keyword_vol_data) Desired Output What I've tried. I've made two functions to merge the keyword data a row a time, but it's just not very pythonic! # two functions to merge in the keyword volume data for KWs 1 - 5 def merger(col1, col2): dx = df_striking.merge(df_keyword_vol, how='left', left_on=col1, right_on=col2) return dx def volume(vol1, vol2): vol = df_striking[vol1] = df_striking[vol2] df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True) return vol df_striking = merger("KW1", "Keyword") volume("KW1 Vol", "Volume") df_striking = merger("KW2", "Keyword") volume("KW2 Vol", "Volume") df_striking = merger("KW3", "Keyword") volume("KW3 Vol", "Volume") df_striking = merger("KW4", "Keyword") volume("KW4 Vol", "Volume") df_striking = merger("KW5", "Keyword") volume("KW5 Vol", "Volume") A: If you already have the empty columns, you can use: mapping = df_keyword_vol.set_index('Keyword')['Volume'] df_striking.iloc[:, 1::2] = df_striking.iloc[:, ::2].replace(mapping) Else, if you only have the KWx columns: df2 = (pd.concat([df, df.replace(mapping)], axis=1) .sort_index(axis=1) ) output: KW1 KW1 KW2 KW2 KW3 KW3 KW4 KW4 KW5 KW5 0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400 1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850 2 plum 200 pear 1000 cherry 900 peach 700 berries 1000 A: Itโ€™s easier if you transform it all to a long format: >>> striking = df_striking.filter(regex='KW[0-9]*$').stack().rename('Keyword').reset_index() >>> joined = striking.merge(df_keyword_vol) >>> joined level_0 level_1 Keyword Volume 0 0 KW1 nectarine 1000 1 0 KW2 apple 600 2 0 KW3 banana 450 3 0 KW4 kiwi 1200 4 0 KW5 raspberry 400 5 1 KW1 apricot 500 6 1 KW2 orange 800 7 1 KW3 grapefruit 10 8 1 KW4 lemon 150 9 1 KW5 blueberry 850 10 2 KW1 plum 200 11 2 KW2 pear 1000 12 2 KW3 cherry 900 13 2 KW4 peach 700 14 2 KW5 berries 1000 Then you can get the original format with .pivot, but with a multi-index as columns: >>> joined.pivot('index', 'level_1', ['Keyword', 'Volume']) Keyword Volume level_1 KW1 KW2 KW3 KW4 KW5 KW1 KW2 KW3 KW4 KW5 index 0 nectarine apple banana kiwi raspberry 1000 600 450 1200 400 1 apricot orange grapefruit lemon blueberry 500 800 10 150 850 2 plum pear cherry peach berries 200 1000 900 700 1000 We can get around that weird format with a pd.concat: >>> 
pd.concat([ ... joined.pivot('index', 'level_1', 'Keyword'), ... joined.pivot('index', 'level_1', 'Volume').add_suffix(' Vol') ... ], axis='columns').sort_index(axis='columns') level_1 KW1 KW1 Vol KW2 KW2 Vol KW3 KW3 Vol KW4 KW4 Vol KW5 KW5 Vol index 0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400 1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850 2 plum 200 pear 1000 cherry 900 peach 700 berries 1000 A: pd.concat([v.reset_index(drop=True).drop('col1',axis=1) for k,v in df_keyword_vol.assign(col1=df_keyword_vol.index//3) .groupby('col1')] ,axis=1)\ .set_axis(df_striking.columns,axis=1) KW1 KW1 KW2 KW2 KW3 KW3 KW4 KW4 KW5 KW5 0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400 1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850 2 plum 200 pear 1000 cherry 900 peach 700 berries 1000
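If the goal is just to fill the existing 'KWx Vol' columns in place, a plain loop over the five column pairs with Series.map is also a compact option; a small sketch assuming the df_striking and df_keyword_vol frames from the question:

mapping = df_keyword_vol.set_index('Keyword')['Volume']
for i in range(1, 6):
    # Look each keyword up in the volume table; unmatched keywords become NaN.
    df_striking[f'KW{i} Vol'] = df_striking[f'KW{i}'].map(mapping)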
Pandas Merging Multiple Columns at the Same Time Between Two Dataframes
I'm trying to find a way to merge in multiple columns at the same time with Pandas. I have the output I want by doing five separate merges, but it feels like there should be a more pythonic way to do it. Essentially I have a dataframe with five keyword columns in a dataframe called df_striking which I'm trying to merge in search volume data from another dataframe (called df_keyword_vol) into adjacent rows. Minimum Reproducible Example: import pandas as pd striking_data = { "KW1": ["nectarine", "apricot", "plum"], "KW1 Vol": ["", "", ""], "KW2": ["apple", "orange", "pear"], "KW2 Vol": ["", "", ""], "KW3": ["banana", "grapefruit", "cherry"], "KW3 Vol": ["", "", ""], "KW4": ["kiwi", "lemon", "peach"], "KW4 Vol": ["", "", ""], "KW5": ["raspberry", "blueberry", "berries"], "KW5 Vol": ["", "", ""], } df_striking = pd.DataFrame(striking_data) keyword_vol_data = { "Keyword": [ "nectarine", "apricot", "plum", "apple", "orange", "pear", "banana", "grapefruit", "cherry", "kiwi", "lemon", "peach", "raspberry", "blueberry", "berries", ], "Volume": [ 1000, 500, 200, 600, 800, 1000, 450, 10, 900, 1200, 150, 700, 400, 850, 1000, ], } df_keyword_vol = pd.DataFrame(keyword_vol_data) Desired Output What I've tried. I've made two functions to merge the keyword data a row a time, but it's just not very pythonic! # two functions to merge in the keyword volume data for KWs 1 - 5 def merger(col1, col2): dx = df_striking.merge(df_keyword_vol, how='left', left_on=col1, right_on=col2) return dx def volume(vol1, vol2): vol = df_striking[vol1] = df_striking[vol2] df_striking.drop(['Keyword', 'Volume'], axis=1, inplace=True) return vol df_striking = merger("KW1", "Keyword") volume("KW1 Vol", "Volume") df_striking = merger("KW2", "Keyword") volume("KW2 Vol", "Volume") df_striking = merger("KW3", "Keyword") volume("KW3 Vol", "Volume") df_striking = merger("KW4", "Keyword") volume("KW4 Vol", "Volume") df_striking = merger("KW5", "Keyword") volume("KW5 Vol", "Volume")
[ "If you already have the empty columns, you can use:\nmapping = df_keyword_vol.set_index('Keyword')['Volume']\n\ndf_striking.iloc[:, 1::2] = df_striking.iloc[:, ::2].replace(mapping)\n\n\nElse, if you only have the KWx columns:\ndf2 = (pd.concat([df, df.replace(mapping)], axis=1)\n .sort_index(axis=1)\n )\n\noutput:\n KW1 KW1 KW2 KW2 KW3 KW3 KW4 KW4 KW5 KW5\n0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400\n1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850\n2 plum 200 pear 1000 cherry 900 peach 700 berries 1000\n\n", "Itโ€™s easier if you transform it all to a long format:\n>>> striking = df_striking.filter(regex='KW[0-9]*$').stack().rename('Keyword').reset_index()\n>>> joined = striking.merge(df_keyword_vol)\n>>> joined\n level_0 level_1 Keyword Volume\n0 0 KW1 nectarine 1000\n1 0 KW2 apple 600\n2 0 KW3 banana 450\n3 0 KW4 kiwi 1200\n4 0 KW5 raspberry 400\n5 1 KW1 apricot 500\n6 1 KW2 orange 800\n7 1 KW3 grapefruit 10\n8 1 KW4 lemon 150\n9 1 KW5 blueberry 850\n10 2 KW1 plum 200\n11 2 KW2 pear 1000\n12 2 KW3 cherry 900\n13 2 KW4 peach 700\n14 2 KW5 berries 1000\n\nThen you can get the original format with .pivot, but with a multi-index as columns:\n>>> joined.pivot('index', 'level_1', ['Keyword', 'Volume'])\n Keyword Volume \nlevel_1 KW1 KW2 KW3 KW4 KW5 KW1 KW2 KW3 KW4 KW5\nindex \n0 nectarine apple banana kiwi raspberry 1000 600 450 1200 400\n1 apricot orange grapefruit lemon blueberry 500 800 10 150 850\n2 plum pear cherry peach berries 200 1000 900 700 1000\n\nWe can get around that weird format with a pd.concat:\n>>> pd.concat([\n... joined.pivot('index', 'level_1', 'Keyword'),\n... joined.pivot('index', 'level_1', 'Volume').add_suffix(' Vol')\n... ], axis='columns').sort_index(axis='columns')\nlevel_1 KW1 KW1 Vol KW2 KW2 Vol KW3 KW3 Vol KW4 KW4 Vol KW5 KW5 Vol\nindex \n0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400\n1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850\n2 plum 200 pear 1000 cherry 900 peach 700 berries 1000\n\n", "pd.concat([v.reset_index(drop=True).drop('col1',axis=1)\n for k,v in\n df_keyword_vol.assign(col1=df_keyword_vol.index//3)\n .groupby('col1')]\n ,axis=1)\\\n .set_axis(df_striking.columns,axis=1)\n\n\n KW1 KW1 KW2 KW2 KW3 KW3 KW4 KW4 KW5 KW5\n0 nectarine 1000 apple 600 banana 450 kiwi 1200 raspberry 400\n1 apricot 500 orange 800 grapefruit 10 lemon 150 blueberry 850\n2 plum 200 pear 1000 cherry 900 peach 700 berries 1000\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0069366947_pandas_python.txt
Q: How do I shorten a for loop with arithmetic inside? I've been wondering if I can shorten a for loop with arithmetic inside of it. Here is my code: n = int(input("n: ")) string = '' for i in range(n): string += input() I want to make it a one-line statement. Is it possible? This is what I tried: [string+=input() for i in range(n)] A: Well, if you really want, you can do: string = ''.join(input() for _ in range(int(input("n: ")))) Your attempt is a syntax error because string += input() is a statement, and a comprehension may only contain expressions, so the accumulation has to go through str.join instead.
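If a strict one-liner is not required, splitting the prompt for n from the joining step keeps the same idea readable; a small sketch:

n = int(input("n: "))
string = ''.join(input() for _ in range(n))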
How do I shorten a for loop with arithmetic inside?
I've been wondering if I can shorten a for loop with arithmetic inside of it Here is my code: n = int(input("n: ")) string = '' for i in range(n): string += input() I want to make it a one line code, Is it possible? This is what I tried: [string+=input() for i in range(n)]
[ "Well, if you really want, you can do:\nstring = ''.join(input() for _ in range(int(input(\"n: \"))))\n\n" ]
[ 0 ]
[]
[]
[ "for_loop", "python", "python_3.x" ]
stackoverflow_0074529241_for_loop_python_python_3.x.txt
Q: ASAMMDF - MemoryError: Unable to allocate 16.8 MiB for an array with shape (2207220,) and data type float64 I am trying to extract data from a ".dat" file by using asammdf. After extracting the data using asammdf, I am trying to convert the data into a dataframe that can be analyzed using pandas and matplotlib. Following is the code that I am using for extracting the data and converting to dataframe: import pandas as pd import numpy as np import matplotlib.pyplot as plt import asammdf import tkinter as tk from tkinter.ttk import * Data_File_01 = asammdf.MDF(r"C:\Users\hsr4ban\Desktop\16_MC Dynamic.dat") Data_01 = Data_File_01.to_dataframe() However, I am getting the memory error as below: runfile('C:/Users/hsr4ban/Desktop/untitled0.py', wdir='C:/Users/hsr4ban/Desktop') Traceback (most recent call last): File ~\Desktop\untitled0.py:11 in <module> Data_File_01 = asammdf.MDF(r"C:\Users\hsr4ban\Desktop\16_MC Dynamic.dat").to_dataframe() File ~\AppData\Roaming\Python\Python39\site-packages\asammdf\mdf.py:4466 in to_dataframe df = pd.DataFrame(nonstrings, index=master) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\frame.py:636 in __init__ mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\construction.py:502 in dict_to_mgr return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\construction.py:156 in arrays_to_mgr return create_block_manager_from_column_arrays( File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:1959 in create_block_manager_from_column_arrays mgr._consolidate_inplace() File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:1685 in _consolidate_inplace self.blocks = tuple(_consolidate(self.blocks)) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:2084 in _consolidate merged_blocks = _merge_blocks( File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:2111 in _merge_blocks new_values = np.vstack([b.values for b in blocks]) # type: ignore[misc] File <__array_function__ internals>:180 in vstack File ~\AppData\Roaming\Python\Python39\site-packages\numpy\core\shape_base.py:282 in vstack return _nx.concatenate(arrs, 0) File <__array_function__ internals>:180 in concatenate MemoryError: Unable to allocate 5.46 GiB for an array with shape (332, 2207220) and data type float64 I checked in stack overflow. There are few answers suggested in case of ".csv" file but in this case it is ".dat" file and I could not find much help. Can someone please suggest how can it be resolved? Thanks in advance. A: You need to use the raster argument whenc alling to_dataframe because you have too many individual timestamps in the file (see https://asammdf.readthedocs.io/en/master/api.html#asammdf.mdf.MDF.iter_to_dataframe)
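A minimal sketch of that call, assuming the file from the question; the 0.01 s raster is an arbitrary example value and should be chosen to suit the signal rates:

from asammdf import MDF

mdf = MDF(r"C:\Users\hsr4ban\Desktop\16_MC Dynamic.dat")
# Resample every channel onto one shared 10 ms time base instead of the
# union of all individual timestamps, which is what blows up the memory.
df = mdf.to_dataframe(raster=0.01)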
ASAMMDF - MemoryError: Unable to allocate 16.8 MiB for an array with shape (2207220,) and data type float64
I am trying to extract data from a ".dat" file by using asammdf. After extracting the data using asammdf, I am trying to convert the data into a dataframe that can be analyzed using pandas and matplotlib. Following is the code that I am using for extracting the data and converting to dataframe: import pandas as pd import numpy as np import matplotlib.pyplot as plt import asammdf import tkinter as tk from tkinter.ttk import * Data_File_01 = asammdf.MDF(r"C:\Users\hsr4ban\Desktop\16_MC Dynamic.dat") Data_01 = Data_File_01.to_dataframe() However, I am getting the memory error as below: runfile('C:/Users/hsr4ban/Desktop/untitled0.py', wdir='C:/Users/hsr4ban/Desktop') Traceback (most recent call last): File ~\Desktop\untitled0.py:11 in <module> Data_File_01 = asammdf.MDF(r"C:\Users\hsr4ban\Desktop\16_MC Dynamic.dat").to_dataframe() File ~\AppData\Roaming\Python\Python39\site-packages\asammdf\mdf.py:4466 in to_dataframe df = pd.DataFrame(nonstrings, index=master) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\frame.py:636 in __init__ mgr = dict_to_mgr(data, index, columns, dtype=dtype, copy=copy, typ=manager) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\construction.py:502 in dict_to_mgr return arrays_to_mgr(arrays, columns, index, dtype=dtype, typ=typ, consolidate=copy) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\construction.py:156 in arrays_to_mgr return create_block_manager_from_column_arrays( File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:1959 in create_block_manager_from_column_arrays mgr._consolidate_inplace() File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:1685 in _consolidate_inplace self.blocks = tuple(_consolidate(self.blocks)) File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:2084 in _consolidate merged_blocks = _merge_blocks( File ~\AppData\Roaming\Python\Python39\site-packages\pandas\core\internals\managers.py:2111 in _merge_blocks new_values = np.vstack([b.values for b in blocks]) # type: ignore[misc] File <__array_function__ internals>:180 in vstack File ~\AppData\Roaming\Python\Python39\site-packages\numpy\core\shape_base.py:282 in vstack return _nx.concatenate(arrs, 0) File <__array_function__ internals>:180 in concatenate MemoryError: Unable to allocate 5.46 GiB for an array with shape (332, 2207220) and data type float64 I checked in stack overflow. There are few answers suggested in case of ".csv" file but in this case it is ".dat" file and I could not find much help. Can someone please suggest how can it be resolved? Thanks in advance.
[ "You need to use the raster argument whenc alling to_dataframe because you have too many individual timestamps in the file (see https://asammdf.readthedocs.io/en/master/api.html#asammdf.mdf.MDF.iter_to_dataframe)\n" ]
[ 0 ]
[]
[]
[ "asammdf", "data_files", "memory", "pandas", "python" ]
stackoverflow_0074490140_asammdf_data_files_memory_pandas_python.txt
Q: Is there a way to assign enum values from variable in Python? Here's my problem. At first, I implemented in the code something like this: class HttpMethod(enum.Enum): GET = requests.get POST = requests.post ... def __call__(self, *args, **kwargs): return self.value(*args, **kwargs) But now I want to call session.get instead of requests.get from the following class, but I don't want to make session a global variable to my module. class HttpPooling: def __init__(self, **kwargs): self.session = requests.Session() retries = Retry(**kwargs) self.session.mount("http://", HTTPAdapter(max_retries=retries)) self.session.mount("https://", HTTPAdapter(max_retries=retries)) I tried many solutions to do obtain such result, but never succeeded, any idea ? I focused my tests on __ignore__ from aenum and __init_subclass__ but I feel like there might be a simplistic way that I can't figure out myself. Is there a way to do something like this: class HttpMethod(enum.Enum): pool = HttpPooling() GET = pool.session.get POST = pool.session.post A: I did finally come to a solution with the package aenum which comes with an __ignore__ field for enums. class HttpMethod(Enum): POST, GET, PUT, PATH, DELETE = range(1, 6) __pool = HttpPooling() __ignore__ = ("__pool",) def __repr__(self): return self.value.__repr__() @property def value(self): return partial(getattr(self.__pool.session, self.name.lower())) def __call__(self, *args, **kwargs): self.value(*args, **kwargs) @classmethod def keys(cls): return cls.__members__.keys()
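An alternative sketch that avoids aenum entirely: store only the method name in the enum and resolve it against whatever session object is passed in at call time. The names here are illustrative, not from the answer above.

import enum

class HttpMethod(enum.Enum):
    GET = "get"
    POST = "post"
    PUT = "put"
    PATCH = "patch"
    DELETE = "delete"

    def __call__(self, session, *args, **kwargs):
        # Resolve e.g. session.get at call time instead of baking it in.
        return getattr(session, self.value)(*args, **kwargs)

# usage: HttpMethod.GET(pool.session, "https://example.com")

This keeps the enum free of any session state, so the pooled session never has to be global or smuggled past the enum machinery.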
Is there a way to assign enum values from variable in Python?
Here's my problem. At first, I implemented in the code something like this: class HttpMethod(enum.Enum): GET = requests.get POST = requests.post ... def __call__(self, *args, **kwargs): return self.value(*args, **kwargs) But now I want to call session.get instead of requests.get from the following class, but I don't want to make session a global variable to my module. class HttpPooling: def __init__(self, **kwargs): self.session = requests.Session() retries = Retry(**kwargs) self.session.mount("http://", HTTPAdapter(max_retries=retries)) self.session.mount("https://", HTTPAdapter(max_retries=retries)) I tried many solutions to do obtain such result, but never succeeded, any idea ? I focused my tests on __ignore__ from aenum and __init_subclass__ but I feel like there might be a simplistic way that I can't figure out myself. Is there a way to do something like this: class HttpMethod(enum.Enum): pool = HttpPooling() GET = pool.session.get POST = pool.session.post
[ "I did finally come to a solution with the package aenum which comes with an __ignore__ field for enums.\nclass HttpMethod(Enum):\n POST, GET, PUT, PATH, DELETE = range(1, 6)\n\n __pool = HttpPooling()\n __ignore__ = (\"__pool\",)\n\n def __repr__(self):\n return self.value.__repr__()\n\n @property\n def value(self):\n return partial(getattr(self.__pool.session, self.name.lower()))\n\n def __call__(self, *args, **kwargs):\n self.value(*args, **kwargs)\n\n @classmethod\n def keys(cls):\n return cls.__members__.keys()\n\n" ]
[ -1 ]
[]
[]
[ "enums", "python" ]
stackoverflow_0074518691_enums_python.txt
Q: Same label multiple times in one image - Tensorflow I'm trying to create a tf model, which can detect any handwriting in any image. In order to do that, i made the labels in all train pictures with just one label: edit. It means, one image can have this labels many times. After many hours of training using cpu i did't get the expected result. The model can't see any of the blocks i gave before training. I'm using the following model: http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d6_coco17_tpu-32.tar.gz Is the problem that i'm labeling one image with one label multiple times? Could be the problem of using cpu instead of gpu? I have currently one gpu with 4gb and it seems not enough. i trained the model with 2000 steps and learning_rate was 0.006. should i train it to be more than that? Any suggestions? Thank you in advanced. Edit Following the is a screenshot from tensorboard of the trained model: A: CPU vs GPU GPU has the only advantage that the training is faster. It shouldn't have any effect on the expected result. It just takes longer. Though, for some models, the difference could be large. Monitoring your training might give you more insight. Training monitoring What does it mean you did not get the expected result? How was it different from what you expected? I suggest you use some kind of monitoring, such as Tensorboard to monitor both the loss and metrics of the training and validation dataset if you do not already do so. This will give you invaluable information about the training in real-time. Pipeline debugging When your model seems not to be learning anything, you must start debugging. You can follow the following steps to make sure that none of those is your problem: https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607 I especially like to overfit the model on a single batch. This test tells me whether my algorithmic pipeline is correct, including preprocessing, the model, and the evaluation. The result should be that evaluation on the single training batch should give you a very good score, while for the rest of the data the score will be poor since the model will be overfitted on the single batch. Problem definition Sometimes, it is possible that the problem is wrongly defined. Can you clarify what your labels are? I do not fully understand. If it is a common problem, then you can search the internet to see the inspiration for how it is usually defined. EDIT Loss functions In general, you want all your loss functions to go down. In your case, they go up in the first few hundred steps, which might not necessarily indicate that there's something wrong because sometimes it takes a short time in the beginning before the training stabilizes. Nonetheless, the triangles indicate that the loss was NaN. That means that there is something wrong. I can recommend using the tf.keras.callbacks.TerminateOnNaN callback to detect the NaNs in your loss immediately, which will terminate your training promptly. Metrics From the loss functions themselves, it hard to tell what is the model's performance. In every machine learning task, you have to be able to understand the performance of a model. Metrics are used exactly for that purpose. In this case (object detection), I suggest using IoU to determine how well the predicted box overlaps with the target one, and precision and recall for evaluating the performance of your binary classification of whether the predicted box contains hand-written text or not.
Same label multiple times in one image - Tensorflow
I'm trying to create a tf model, which can detect any handwriting in any image. In order to do that, i made the labels in all train pictures with just one label: edit. It means, one image can have this labels many times. After many hours of training using cpu i did't get the expected result. The model can't see any of the blocks i gave before training. I'm using the following model: http://download.tensorflow.org/models/object_detection/tf2/20200711/efficientdet_d6_coco17_tpu-32.tar.gz Is the problem that i'm labeling one image with one label multiple times? Could be the problem of using cpu instead of gpu? I have currently one gpu with 4gb and it seems not enough. i trained the model with 2000 steps and learning_rate was 0.006. should i train it to be more than that? Any suggestions? Thank you in advanced. Edit Following the is a screenshot from tensorboard of the trained model:
[ "CPU vs GPU\nGPU has the only advantage that the training is faster. It shouldn't have any effect on the expected result. It just takes longer. Though, for some models, the difference could be large. Monitoring your training might give you more insight.\nTraining monitoring\nWhat does it mean you did not get the expected result? How was it different from what you expected?\nI suggest you use some kind of monitoring, such as Tensorboard to monitor both the loss and metrics of the training and validation dataset if you do not already do so. This will give you invaluable information about the training in real-time.\nPipeline debugging\nWhen your model seems not to be learning anything, you must start debugging.\nYou can follow the following steps to make sure that none of those is your problem: https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607\nI especially like to overfit the model on a single batch. This test tells me whether my algorithmic pipeline is correct, including preprocessing, the model, and the evaluation. The result should be that evaluation on the single training batch should give you a very good score, while for the rest of the data the score will be poor since the model will be overfitted on the single batch.\nProblem definition\nSometimes, it is possible that the problem is wrongly defined. Can you clarify what your labels are? I do not fully understand. If it is a common problem, then you can search the internet to see the inspiration for how it is usually defined.\nEDIT\nLoss functions\nIn general, you want all your loss functions to go down. In your case, they go up in the first few hundred steps, which might not necessarily indicate that there's something wrong because sometimes it takes a short time in the beginning before the training stabilizes. Nonetheless, the triangles indicate that the loss was NaN. That means that there is something wrong. I can recommend using the tf.keras.callbacks.TerminateOnNaN callback to detect the NaNs in your loss immediately, which will terminate your training promptly.\nMetrics\nFrom the loss functions themselves, it hard to tell what is the model's performance. In every machine learning task, you have to be able to understand the performance of a model. Metrics are used exactly for that purpose. In this case (object detection), I suggest using IoU to determine how well the predicted box overlaps with the target one, and precision and recall for evaluating the performance of your binary classification of whether the predicted box contains hand-written text or not.\n" ]
[ 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0074527315_python_tensorflow.txt
Q: Floating point exception (core dumped) for UNet implementation I am trying to do an implementation of KiuNet ( https://github.com/jeya-maria-jose/KiU-Net-pytorch ). But when I am executing the train command like so: python train.py --train_dataset "KiuNet/Train Folder/" --val_dataset "KiuNet/Validation Folder/" --direc 'KiuNet/Results/' --batch_size 1 --epoch 200 --save_freq 10 --modelname "kiunet" --learning_rate 0.0001 I am getting the following error: Traceback (most recent call last): File "KiuNet/KiU-Net-pytorch/train.py", line 235, in <module> loss.backward() File "/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward torch.autograd.backward( File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [847,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [958,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [703,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [831,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [575,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [974,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [77,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [78,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [719,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [720,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [592,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [593,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [209,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [465,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [337,0,0] Assertion `t >= 0 && t < n_classes` failed. 
When I am running the train command with CUDA_LAUNCH_BLOCKING=1 I get the following error: ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [840,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [580,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [453,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [326,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [71,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [712,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [198,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [199,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [968,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [959,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [574,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [702,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [191,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [318,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [319,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [446,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [63,0,0] Assertion `t >= 0 && t < n_classes` failed. Floating point exception (core dumped) My torch and CUDA version are: '1.13.0+cu117' My Python version: Python 3.9.12 Any help is much appreciated! A: The repository author mentions the following. "This bug occurs when the ground truth masks have more classes than the number of classes in prediction. Please make sure you ground truth images have only 0 or 1 labels of pixels if you are training for binary segmentation. The datasets usually have the ground truth as 0 or 255 labels of pixels. So, please convert them to 0's and 1's."
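A small sketch of that 0/255 to 0/1 conversion for a folder of PNG masks; the labels path is hypothetical, and the script overwrites the masks in place, so keep a backup:

import glob
import numpy as np
from PIL import Image

for path in glob.glob("KiuNet/Train Folder/labels/*.png"):
    mask = np.array(Image.open(path).convert("L"))
    # Map the 0/255 ground-truth convention to the 0/1 class ids the loss expects.
    binary = (mask > 127).astype(np.uint8)
    Image.fromarray(binary).save(path)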
Floating point exception (core dumped) for UNet implementation
I am trying to do an implementation of KiuNet ( https://github.com/jeya-maria-jose/KiU-Net-pytorch ). But when I am executing the train command like so: python train.py --train_dataset "KiuNet/Train Folder/" --val_dataset "KiuNet/Validation Folder/" --direc 'KiuNet/Results/' --batch_size 1 --epoch 200 --save_freq 10 --modelname "kiunet" --learning_rate 0.0001 I am getting the following error: Traceback (most recent call last): File "KiuNet/KiU-Net-pytorch/train.py", line 235, in <module> loss.backward() File "/miniconda3/lib/python3.9/site-packages/torch/_tensor.py", line 487, in backward torch.autograd.backward( File "/miniconda3/lib/python3.9/site-packages/torch/autograd/__init__.py", line 197, in backward Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass RuntimeError: cuDNN error: CUDNN_STATUS_INTERNAL_ERROR ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [847,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [958,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [703,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [831,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [575,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [974,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [77,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [78,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [719,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [720,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [592,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [593,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [209,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [465,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [337,0,0] Assertion `t >= 0 && t < n_classes` failed. When I am running the train command with CUDA_LAUNCH_BLOCKING=1 I get the following error: ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [840,0,0] Assertion `t >= 0 && t < n_classes` failed. 
../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [580,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [453,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [326,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [71,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [712,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [198,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [199,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [968,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [959,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [830,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [574,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [702,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [191,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [318,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [319,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [446,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/NLLLoss2d.cu:104: nll_loss2d_forward_kernel: block: [0,0,0], thread: [63,0,0] Assertion `t >= 0 && t < n_classes` failed. Floating point exception (core dumped) My torch and CUDA version are: '1.13.0+cu117' My Python version: Python 3.9.12 Any help is much appreciated!
[ "The repository author mentions the following.\n\"This bug occurs when the ground truth masks have more classes than the number of classes in prediction. Please make sure you ground truth images have only 0 or 1 labels of pixels if you are training for binary segmentation. The datasets usually have the ground truth as 0 or 255 labels of pixels. So, please convert them to 0's and 1's.\"\n" ]
[ 1 ]
[]
[]
[ "cudnn", "floating_point", "python", "pytorch", "torch" ]
stackoverflow_0074520038_cudnn_floating_point_python_pytorch_torch.txt
Q: Profile picture (Portrait) validation in web services I am developing a service which can validate input picture either it is suitable portrait (Profile picture) or not. If possible service can return scoring. Each consumer can set required accepted criteria. Some key rules I want to implement for image validation are Background of image is not busy Person face is recognizable i.e. ears, nose, eyes, mouth are visible Only one person is identified in picture I am new to image processing. I ll prefer if I can find some source in .Net core. I can also choose python A: This issue related to face Recognition. If you don't want to use the cognitive-services from Microsoft or other providers. You can check the FaceRecognitionDotNet. And here is the sample(asp.net core), you can check it. If you face the error below, please search it via google, and there are a lot of github issues. ---> System.TypeInitializationException: The type initializer for 'DlibDotNet.NativeMethods' threw an exception. ---> System.DllNotFoundException: Unable to load DLL 'DlibDotNetNativeDnn' or one of its dependencies: The specified module could not be found.
Profile picture (Portrait) validation in web services
I am developing a service which can validate input picture either it is suitable portrait (Profile picture) or not. If possible service can return scoring. Each consumer can set required accepted criteria. Some key rules I want to implement for image validation are Background of image is not busy Person face is recognizable i.e. ears, nose, eyes, mouth are visible Only one person is identified in picture I am new to image processing. I ll prefer if I can find some source in .Net core. I can also choose python
[ "This issue related to face Recognition.\nIf you don't want to use the cognitive-services from Microsoft or other providers. You can check the FaceRecognitionDotNet.\nAnd here is the sample(asp.net core), you can check it.\n\nIf you face the error below, please search it via google, and there are a lot of github issues.\n---> System.TypeInitializationException: The type initializer for 'DlibDotNet.NativeMethods' threw an exception.\n---> System.DllNotFoundException: Unable to load DLL 'DlibDotNetNativeDnn' or one of its dependencies: The specified module could not be found.\n\n" ]
[ 0 ]
[]
[]
[ "asp.net_core_webapi", "image_processing", "python" ]
stackoverflow_0074517363_asp.net_core_webapi_image_processing_python.txt
Q: Exported image black with 0 value I tried to use the following code but when exporting my map and I check my output data in arcmap. It is totally black and the value is 0. I don't know what is wrong with my code. https://code.earthengine.google.com/476db72426a67e03a604b6712ce97ef4?hl=ar // The purpose of this script is to estimate sub-pixel fractions // of identifiable spectral "endmembers". This involves finding // "pure" areas to estimate the endmembers, some matrix algebra // followed by the mapping of the fractional cover. // Use the reflective bands. var bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']; // First, let's find a cloud free scene in our area of interest. // Make a point using the geometry tools and name the import 'point'. // Import Landsat 8 TOA data and name the collection 'l8'. var image = ee.Image(l8 .filterBounds(point) .filterMetadata('CLOUD_COVER','less_than', 2) .filter(ee.Filter.calendarRange(2020,2020,'year')) .filter(ee.Filter.calendarRange(10,12,'month')) .first()) .select(bands); print(image) Map.addLayer(image, {bands: ['B4', 'B3', 'B2'], max: 0.3}, 'image'); // Now, delineate polygons of 'pure' regions. Click +New Layer for // each polygon. Name the imports 'bare', 'vegetation' and 'water'. // Get the mean spectrum in each of the endmember polygons. var bareMean = image.reduceRegion(ee.Reducer.mean(), bare, 30).values(); var waterMean = image.reduceRegion(ee.Reducer.mean(), water, 30).values(); var vegMean = image.reduceRegion(ee.Reducer.mean(), vegetation, 30).values(); var snowMean = image.reduceRegion(ee.Reducer.mean(), snow, 30).values(); // Optional: plot the endmembers print(ui.Chart.image.regions(image, ee.FeatureCollection([ ee.Feature(bare, {label: 'bare'}), ee.Feature(water, {label: 'water'}), ee.Feature(vegetation, {label: 'vegetation'}), ee.Feature(snow, {label: 'snow'})]), ee.Reducer.mean(), 30, 'label', [0.48, 0.56, 0.65, 0.86, 1.61, 3.2])); // Turn the endmember lists into an array that can be used in unmixing. // Concatenate the lists along the 1-axis to make an array. var endmembers = ee.Array.cat([bareMean, vegMean, waterMean, snowMean], 1); //print(endmembers) // Turn the image into an array image, in which each pixel has a 2-D matrix. var arrayImage = image.toArray().toArray(1); // Perform the unmixing in array space using the matrixSolve image method. // Note the need to cast the endmembers into an array image. var unmixed = ee.Image(endmembers).matrixSolve(arrayImage); // Convert the result from an array image back to a multi-band image. var unmixedImage = unmixed.arrayProject([0]) .arrayFlatten([['bare', 'veg', 'water', 'snow']]); // Display the result. Map.addLayer(unmixedImage, {}, 'fractions'); // Constrained:constraining the result to be non-negative and sum to one. var constrained = image.unmix([bareMean, vegMean, waterMean, snowMean], true, true); Map.addLayer(constrained, {}, 'constrained fractions'); //Export output to Google Drive Export.image.toDrive({ image: constrained, description: 'unmix', scale: 30, region: point, maxPixels: 1e9, fileFormat: 'GeoTIFF' }); Please guide me how to solve this problem. A: You're only exporting a single pixel - the region is set to point. All bands are actually not 0, but whatever tool you're using to visualise the image will have problems picking a good stretch, giving you a black pixel. 
You could, for instance, use image.geometry() instead of the point in this case: Export.image.toDrive({
  image: constrained,
  description: 'unmix',
  scale: 3000,
  region: image.geometry(),
  maxPixels: 1e9,
  fileFormat: 'GeoTIFF'
});
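For reference, the same fix in the Earth Engine Python API; this is a sketch that assumes constrained and image are the objects built in the script above, with the scale kept at the question's 30 m:

import ee

ee.Initialize()
task = ee.batch.Export.image.toDrive(
    image=constrained,
    description='unmix',
    scale=30,
    # Export the scene footprint rather than a single point.
    region=image.geometry(),
    maxPixels=1e9,
    fileFormat='GeoTIFF')
task.start()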
Exported image black with 0 value
I tried to use the following code but when exporting my map and I check my output data in arcmap. It is totally black and the value is 0. I don't know what is wrong with my code. https://code.earthengine.google.com/476db72426a67e03a604b6712ce97ef4?hl=ar // The purpose of this script is to estimate sub-pixel fractions // of identifiable spectral "endmembers". This involves finding // "pure" areas to estimate the endmembers, some matrix algebra // followed by the mapping of the fractional cover. // Use the reflective bands. var bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']; // First, let's find a cloud free scene in our area of interest. // Make a point using the geometry tools and name the import 'point'. // Import Landsat 8 TOA data and name the collection 'l8'. var image = ee.Image(l8 .filterBounds(point) .filterMetadata('CLOUD_COVER','less_than', 2) .filter(ee.Filter.calendarRange(2020,2020,'year')) .filter(ee.Filter.calendarRange(10,12,'month')) .first()) .select(bands); print(image) Map.addLayer(image, {bands: ['B4', 'B3', 'B2'], max: 0.3}, 'image'); // Now, delineate polygons of 'pure' regions. Click +New Layer for // each polygon. Name the imports 'bare', 'vegetation' and 'water'. // Get the mean spectrum in each of the endmember polygons. var bareMean = image.reduceRegion(ee.Reducer.mean(), bare, 30).values(); var waterMean = image.reduceRegion(ee.Reducer.mean(), water, 30).values(); var vegMean = image.reduceRegion(ee.Reducer.mean(), vegetation, 30).values(); var snowMean = image.reduceRegion(ee.Reducer.mean(), snow, 30).values(); // Optional: plot the endmembers print(ui.Chart.image.regions(image, ee.FeatureCollection([ ee.Feature(bare, {label: 'bare'}), ee.Feature(water, {label: 'water'}), ee.Feature(vegetation, {label: 'vegetation'}), ee.Feature(snow, {label: 'snow'})]), ee.Reducer.mean(), 30, 'label', [0.48, 0.56, 0.65, 0.86, 1.61, 3.2])); // Turn the endmember lists into an array that can be used in unmixing. // Concatenate the lists along the 1-axis to make an array. var endmembers = ee.Array.cat([bareMean, vegMean, waterMean, snowMean], 1); //print(endmembers) // Turn the image into an array image, in which each pixel has a 2-D matrix. var arrayImage = image.toArray().toArray(1); // Perform the unmixing in array space using the matrixSolve image method. // Note the need to cast the endmembers into an array image. var unmixed = ee.Image(endmembers).matrixSolve(arrayImage); // Convert the result from an array image back to a multi-band image. var unmixedImage = unmixed.arrayProject([0]) .arrayFlatten([['bare', 'veg', 'water', 'snow']]); // Display the result. Map.addLayer(unmixedImage, {}, 'fractions'); // Constrained:constraining the result to be non-negative and sum to one. var constrained = image.unmix([bareMean, vegMean, waterMean, snowMean], true, true); Map.addLayer(constrained, {}, 'constrained fractions'); //Export output to Google Drive Export.image.toDrive({ image: constrained, description: 'unmix', scale: 30, region: point, maxPixels: 1e9, fileFormat: 'GeoTIFF' }); Please guide me how to solve this problem.
[ "You're only exporting a single pixel - the region is set to point. All bands are actually not 0, but whatever tool you're using to visualise the image will have problems picking a good stretch, giving you a black pixel.\nYou could for instance use image.geometry() instead of pixel in this case:\nExport.image.toDrive({\n image: constrained,\n description: 'unmix',\n scale: 3000,\n region: image.geometry(),\n maxPixels: 1e9,\n fileFormat: 'GeoTIFF'\n});\n\n" ]
[ 1 ]
[]
[]
[ "arrays", "google_earth_engine", "java", "python", "python_3.x" ]
stackoverflow_0074516317_arrays_google_earth_engine_java_python_python_3.x.txt
Q: Align cell content in excel using python I am struggling to set the alignment for data in excel using python My python function loads data from excel into a pandas dataframe, calculates some new columns, then adds these columns to the original sheet. This all works well, but I now want to tidy up the result. I can set italics / bold etc using sheet['E1:J24'].font.bold = True sheet['E1:J24'].font.italic = True But I cannot set the alignment properly. I have tried the following, and several other suggestions I found online, but none of them seems to work. sheet['E1:J24'].alignment = Alignment(horizontal="center") Any help would be appreciated. Update to question, With further on-line searching I came upon this line of code which successfully adjusts the alignment. sheet.range(f'$E1:J24').api.HorizontalAlignment = -4152 I think the problem is that I connected to the worksheet using xlwings and then tried to use openpyxl to format it. Jupyter didn't give an error because I had imported 'Alignment' from openpyxl Note, for alignments use setting as follows center = -4108 right = -4152 Left = -4131 Not sure where the numbers come from A: Use 'VerticalAlignment' and/or 'HorizontalAlignment'. Import VAlign, HAlign from the Xlwings constants to use the name or just use the Excel code. I have copied these into the comments for your information. import xlwings as xw from xlwings.constants import VAlign, HAlign ### Xlwings constants """ VAlign Class xlVAlignBottom = -4107 xlVAlignCenter = -4108 xlVAlignDistributed = -4117 xlVAlignJustify = -4130 xlVAlignTop = -4160 HAlign Class xlHAlignCenter = -4108 xlHAlignCenterAcrossSelection = 7 xlHAlignDistributed = -4117 xlHAlignFill = 5 xlHAlignGeneral = 1 xlHAlignJustify = -4130 xlHAlignLeft = -4131 xlHAlignRight = -4152 """ path = "foo.xlsx" with xw.App() as app: wb = xw.Book(path) ws = wb.sheets[0] # Align text vertically ws.range(1, 1).api.VerticalAlignment = -4160 ws.range(1, 2).api.VerticalAlignment = VAlign.xlVAlignCenter ws.range(1, 3).api.VerticalAlignment = VAlign.xlVAlignBottom # Align text horizontally ws.range(2, 1).api.HorizontalAlignment = HAlign.xlHAlignLeft ws.range(2, 2).api.HorizontalAlignment = HAlign.xlHAlignCenter ws.range(2, 3).api.HorizontalAlignment = -4152 wb.save(path) wb.close()
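For anyone doing this purely through openpyxl rather than xlwings: a range like ws['E1:J24'] yields tuples of cells with no alignment attribute of their own, so each cell must be styled individually. A sketch with a hypothetical file name:

from openpyxl import load_workbook
from openpyxl.styles import Alignment

wb = load_workbook("report.xlsx")
ws = wb.active
for row in ws["E1:J24"]:
    for cell in row:
        # Alignment objects are immutable, so assign a fresh one per cell.
        cell.alignment = Alignment(horizontal="center")
wb.save("report.xlsx")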
Align cell content in excel using python
I am struggling to set the alignment for data in Excel using Python. My Python function loads data from Excel into a pandas DataFrame, calculates some new columns, then adds these columns to the original sheet. This all works well, but I now want to tidy up the result.
I can set italics / bold etc. using

sheet['E1:J24'].font.bold = True
sheet['E1:J24'].font.italic = True

But I cannot set the alignment properly. I have tried the following, and several other suggestions I found online, but none of them seems to work.

sheet['E1:J24'].alignment = Alignment(horizontal="center")

Any help would be appreciated.
Update to question:
With further online searching I came upon this line of code, which successfully adjusts the alignment.

sheet.range('$E1:J24').api.HorizontalAlignment = -4152

I think the problem is that I connected to the worksheet using xlwings and then tried to use openpyxl to format it. Jupyter didn't give an error because I had imported 'Alignment' from openpyxl.
Note: for alignments, use settings as follows:

center = -4108
right = -4152
left = -4131

Not sure where the numbers come from.
[ "Use 'VerticalAlignment' and/or 'HorizontalAlignment'.\nImport VAlign, HAlign from the Xlwings constants to use the name or just use the Excel code. I have copied these into the comments for your information.\nimport xlwings as xw\nfrom xlwings.constants import VAlign, HAlign\n\n### Xlwings constants\n\"\"\"\nVAlign Class\nxlVAlignBottom = -4107\nxlVAlignCenter = -4108\nxlVAlignDistributed = -4117\nxlVAlignJustify = -4130\nxlVAlignTop = -4160 \n\nHAlign Class\nxlHAlignCenter = -4108\nxlHAlignCenterAcrossSelection = 7\nxlHAlignDistributed = -4117\nxlHAlignFill = 5\nxlHAlignGeneral = 1\nxlHAlignJustify = -4130\nxlHAlignLeft = -4131\nxlHAlignRight = -4152\n\"\"\"\n\n\npath = \"foo.xlsx\"\n\nwith xw.App() as app:\n wb = xw.Book(path)\n ws = wb.sheets[0]\n\n # Align text vertically \n ws.range(1, 1).api.VerticalAlignment = -4160\n ws.range(1, 2).api.VerticalAlignment = VAlign.xlVAlignCenter\n ws.range(1, 3).api.VerticalAlignment = VAlign.xlVAlignBottom\n # Align text horizontally\n ws.range(2, 1).api.HorizontalAlignment = HAlign.xlHAlignLeft\n ws.range(2, 2).api.HorizontalAlignment = HAlign.xlHAlignCenter\n ws.range(2, 3).api.HorizontalAlignment = -4152\n\n wb.save(path)\n wb.close()\n\n" ]
[ 0 ]
[]
[]
[ "excel", "pandas", "python", "xlwings" ]
stackoverflow_0074518839_excel_pandas_python_xlwings.txt
Q: How can I vectorize the following algorithm?

Is there a way that I could do vectorization instead of a for loop for the following algorithm?

def test_func(df):
    idx_lst = [df.index[0]]
    end = df.loc[df.index[0], "end"]
    for idx in df.index[1:]:
        if df.loc[idx, "begin"] > end:
            end = df.loc[idx, "end"]
            idx_lst.append(idx)
    return df.loc[idx_lst]

Test case:

df = pd.DataFrame({"begin": [3, 5, 7, 8, 10, 12, 14], "end": [8, 9, 10, 12, 13, 14, 17]})

   begin  end
0      3    8
1      5    9
2      7   10
3      8   12
4     10   13
5     12   14
6     14   17

test_func(df)

   begin  end
0      3    8
4     10   13
6     14   17

A: I agree with the earlier comment that it is hard or even impossible to use vectorization.
But try instead the following function:

def myFunc(df):
    arr = df.begin.values > df.end[:, np.newaxis]
    r = 0
    idx_lst = [r]
    while True:
        wrk = np.nonzero(arr[r])[0]
        if wrk.size == 0:
            return df.iloc[idx_lst]
        r = wrk[0]
        idx_lst.append(r)

The advantage of my solution is that the "comparison array" (arr - whether the begin column from some row > the end column from another row) is computed in one go.
Another advantage is that there is no need to process each row.
Yet another advantage is that I use NumPy, which is known to operate faster than pandas.
Using %timeit on your source data sample, I found that my function takes just the same time to generate the result as yours. But try it on a greater sample of source data. Maybe my solution will be faster.
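A quick way to sanity-check the NumPy approach against the original is a small driver script. This is a sketch that builds the comparison array from plain NumPy arrays via .to_numpy(), since newer pandas versions no longer accept [:, np.newaxis] indexing on a Series:

import numpy as np
import pandas as pd

df = pd.DataFrame({"begin": [3, 5, 7, 8, 10, 12, 14],
                   "end":   [8, 9, 10, 12, 13, 14, 17]})

def myFunc(df):
    # arr[i, j] is True when row j begins after row i ends
    arr = df.begin.to_numpy() > df.end.to_numpy()[:, np.newaxis]
    r = 0
    idx_lst = [r]
    while True:
        wrk = np.nonzero(arr[r])[0]
        if wrk.size == 0:
            return df.iloc[idx_lst]
        r = wrk[0]
        idx_lst.append(r)

print(myFunc(df))  # rows 0, 4 and 6, matching test_func(df)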
How can I vectorize the following algorithm?
Is there a way that I could do vectorization instead of a for loop for the following algorithm?

def test_func(df):
    idx_lst = [df.index[0]]
    end = df.loc[df.index[0], "end"]
    for idx in df.index[1:]:
        if df.loc[idx, "begin"] > end:
            end = df.loc[idx, "end"]
            idx_lst.append(idx)
    return df.loc[idx_lst]

Test case:

df = pd.DataFrame({"begin": [3, 5, 7, 8, 10, 12, 14], "end": [8, 9, 10, 12, 13, 14, 17]})

   begin  end
0      3    8
1      5    9
2      7   10
3      8   12
4     10   13
5     12   14
6     14   17

test_func(df)

   begin  end
0      3    8
4     10   13
6     14   17
[ "I agree with the earlier comment that it is hard or even impossible to use\nvectorization.\nBut try instead the following function:\ndef myFunc(df):\n arr = df.begin.values > df.end[:, np.newaxis]\n r = 0\n idx_lst = [r]\n while True:\n wrk = np.nonzero(arr[r])[0]\n if wrk.size == 0:\n return df.iloc[idx_lst]\n r = wrk[0]\n idx_lst.append(r)\n\nThe advantage of my solution is that the \"comparison array\" (arr - whether\nbegin column from some row > end column from another row) is computed\nin one go.\nAnother advantage is that there is no need to process each row.\nYet another advantage is that I use Numpy, which is known to operate\nfaster than Pandas.\nUsing %timeit, on your source data sample, I stated that my function\ntakes just the same time to generate the result as yours.\nBut try it on a greater sample of source data. Maybe my solution will be faster.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "vectorization" ]
stackoverflow_0074527883_pandas_python_vectorization.txt
Q: make Keras 'None' batch size unchanged, using tf.scatter_nd

I need to input a pooling module to the LSTM decoder, and I'm constructing this using a custom layer with the encoder LSTM states and a Keras Input layer as inputs. In this custom layer, I need to scatter the updates to the indices:

updates: <tf.Tensor --- shape=(None, 225, 5, 32) dtype=float32>
indices: <tf.Tensor --- shape=(None, 225) dtype=int32>

with tf.scatter_nd to create a tensor with shape=(None, 960, 5, 32), something like this:

tf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[None, 960, 5, 32])

But the problem is that doing this raises an error due to the NoneType in shape, and I don't want to hard-code the batch_size in it, because it is a Keras layer and the batch size is only known during training. In this state, the working version of the code is this:

tf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[960, 5, 32])
>>> <tf.Tensor 'ScatterNd_4:0' shape=(960, 5, 32) dtype=float32>

which has ignored the batch_size in the output. Is there any alternative way to construct the needed output tensor instead of tf.scatter_nd, or a way to make this work properly?

A: I had a similar issue with the tf.scatter_nd operation. I solved it by inferring the batch size at runtime using tf.shape(input)[0]. So in your case, the following code should work:

bs = tf.shape(indices)[0]
tf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[bs, 960, 5, 32])
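One caveat worth noting: with a batched output shape, the indices usually have to carry the batch dimension as well, otherwise the updates shape no longer matches what tf.scatter_nd expects. A self-contained sketch of that pattern (function and variable names are hypothetical; the batch size is inferred with tf.shape as in the answer):

import tensorflow as tf

def batched_scatter(indices, updates, length):
    # indices: (batch, n) int32; updates: (batch, n, 5, 32)
    bs = tf.shape(indices)[0]
    n = tf.shape(indices)[1]
    # Pair each position index with its batch index so every row
    # scatters into its own batch element of the output.
    batch_idx = tf.tile(tf.range(bs)[:, None], [1, n])    # (batch, n)
    full_idx = tf.stack([batch_idx, indices], axis=-1)    # (batch, n, 2)
    return tf.scatter_nd(full_idx, updates, [bs, length, 5, 32])

out = batched_scatter(tf.constant([[0, 3], [1, 2]]), tf.ones((2, 2, 5, 32)), 960)
print(out.shape)  # (2, 960, 5, 32)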
make Keras 'None' batch size unchanged, using tf.scatter_nd
I need to input a pooling module to the LSTM decoder, and I'm constructing this using a custom layer with the encoder LSTM states and a Keras Input layer as inputs. In this custom layer, I need to scatter the updates to the indices:

updates: <tf.Tensor --- shape=(None, 225, 5, 32) dtype=float32>
indices: <tf.Tensor --- shape=(None, 225) dtype=int32>

with tf.scatter_nd to create a tensor with shape=(None, 960, 5, 32), something like this:

tf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[None, 960, 5, 32])

But the problem is that doing this raises an error due to the NoneType in shape, and I don't want to hard-code the batch_size in it, because it is a Keras layer and the batch size is only known during training. In this state, the working version of the code is this:

tf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[960, 5, 32])
>>> <tf.Tensor 'ScatterNd_4:0' shape=(960, 5, 32) dtype=float32>

which has ignored the batch_size in the output. Is there any alternative way to construct the needed output tensor instead of tf.scatter_nd, or a way to make this work properly?
[ "I had similar issue with tf.scatter_nd operation. I solved it by infering batch size during runtime using tf.shape(input)[0]. So in your case, the following code should work:\nbs = tf.shape(indices)[0]\ntf.scatter_nd(tf.expand_dims(indices, 2), updates, shape=[bs, 960, 5, 32])\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "keras", "python", "tensorflow" ]
stackoverflow_0064193001_deep_learning_keras_python_tensorflow.txt
Q: Checking input of a method if it exists on a list Python

@dataclass
class Product:
    name: str
    quantity: int
    price: float

class Transaction:
    def __init__(self):
        self.mapDict = {}
        self.mapVal = []

    def add_item(self, name, quantity, price):
        res = name not in self.mapVal
        if res:
            self.mapDict[len(self.mapVal)] = Product(name, quantity, price)
            self.mapVal.append(name)
        return res

    def check_if_not_exists(self, name):
        if name not in self.mapVal:
            return name
        else:
            raise Exception("Name already exists. Update instead?")

I am trying to replace the res = name not in self.mapVal line in add_item() by creating check_if_not_exists(), so I can use it later for different methods such as update_name() or remove_item(). I know using res = self.check_if_not_exists(name) would do the job, but is there a better way?

A: You can return True or False in check_if_not_exists.

def add_item(self, name, quantity, price):
    product_doesnt_exist = self.check_if_not_exists(name)
    if product_doesnt_exist:
        self.mapDict[len(self.mapVal)] = Product(name, quantity, price)
        self.mapVal.append(name)
        return True
    else:
        return False  # or raise Exception('Product already exists')


def check_if_not_exists(self, name):
    if name not in self.mapVal:
        return True
    else:
        return False

I suggest using a set instead of a list for the mapVal attribute, because it's faster to check if an element exists in a set than in a list.
I suggest renaming your variables and methods. mapDict and mapVal are not good names; instead you can use products and product_names or something like that. add_item and check_if_not_exists are not good names; instead you can use add_product and product_exists or something like that.

class Transaction:

    def __init__(self):
        self.products = {}
        self.product_names = []

    def add_item(self, name, quantity, price):
        product_exists = self.product_exists(name)
        if product_exists:
            return False
        else:
            self.products[len(self.product_names)] = Product(name, quantity, price)
            self.product_names.append(name)
            return True

    def product_exists(self, name):
        if name in self.product_names:
            return True
        else:
            return False

You can remove the product_names attribute and use products.keys() instead, and set product names as keys in the products dictionary.

class Transaction:

    def __init__(self):
        self.products = {}

    def add_item(self, name, quantity, price):
        product_exists = self.product_exists(name)
        if product_exists:
            return False
        else:
            self.products[name] = Product(name, quantity, price)
            return True

    def product_exists(self, name):
        return name in self.products.keys()
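A quick interactive check of the intended behaviour, as a sketch using the question's class (assuming from dataclasses import dataclass is in scope):

t = Transaction()
print(t.add_item("apple", 3, 0.5))   # True: first insertion succeeds
print(t.add_item("apple", 1, 0.5))   # False: duplicate name is rejected
print(t.mapDict)                     # {0: Product(name='apple', quantity=3, price=0.5)}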
Checking input of a method if it exists on a list Python
@dataclass
class Product:
    name: str
    quantity: int
    price: float

class Transaction:
    def __init__(self):
        self.mapDict = {}
        self.mapVal = []

    def add_item(self, name, quantity, price):
        res = name not in self.mapVal
        if res:
            self.mapDict[len(self.mapVal)] = Product(name, quantity, price)
            self.mapVal.append(name)
        return res

    def check_if_not_exists(self, name):
        if name not in self.mapVal:
            return name
        else:
            raise Exception("Name already exists. Update instead?")

I am trying to replace the res = name not in self.mapVal line in add_item() by creating check_if_not_exists(), so I can use it later for different methods such as update_name() or remove_item(). I know using res = self.check_if_not_exists(name) would do the job, but is there a better way?
[ "You can return a True or False in check_if_not_exists.\ndef add_item(self, name, quantity, price):\n product_doesnt_exists = self.check_if_not_exists(name)\n if product_doesnt_exists:\n self.mapDict[len(self.mapVal)] = Product(name, quantity, price)\n self.mapVal.append(name)\n return True\n else:\n return False # or raise Exception('Product already exists')\n\n\ndef check_if_not_exists(self, name):\n if name not in self.mapVal:\n return False\n else:\n raise True\n\nI suggest you to use set instead of list for mapVal attribute, because it's faster to check if an element exists in a set than in a list.\nI suggest you to rename your variables and methods. mapDict and mapVal are not good names, instead you can use products and product_names or something like that. add_item and check_if_not_exists are not good names, instead you can use add_product and product_exists or something like that.\nclass Transaction:\n\n def __init__(self):\n self.products = {}\n self.product_names = []\n\n def add_item(self, name, quantity, price):\n product_exists = self.product_exists(name)\n if product_exists:\n return False\n else:\n self.products[len(self.product_names)] = Product(name, quantity, price)\n self.product_names.append(name)\n return True\n\n def product_exists(self, name):\n if name in self.product_names:\n return True\n else:\n raise False\n\nYou can remove product_names attribute and use products.keys() instead, and set product names as keys in products dictionary.\nclass Transaction:\n\n def __init__(self):\n self.products = {}\n\n def add_item(self, name, quantity, price):\n product_exists = self.product_exists(name)\n if product_exists:\n return False\n else:\n self.products[name] = Product(name, quantity, price)\n return True\n\n def product_exists(self, name):\n return name in self.products.keys()\n\n" ]
[ 1 ]
[]
[]
[ "class", "dictionary", "oop", "python" ]
stackoverflow_0074529171_class_dictionary_oop_python.txt
Q: Print Latex for system of equations in SymPy?

How would I write a system of equations in SymPy and output the equivalent LaTeX? The latex function seems to accept only one expression at a time.

import sympy as sp

x, y, z = sp.symbols('x, y, z')
eq1 = sp.Eq(x + y + z, 1)
eq2 = sp.Eq(x + y + 2 * z, 3)
output = sp.latex()  # Do something here?

A: One way is to create a function and combine the latex output of each equation.

def system_to_latex(*equations):
    n = len(equations)
    if n == 0:
        return ""
    l1 = r"\left\{\begin{matrix}%s\end{matrix}\right."
    l2 = r" \\ ".join(sp.latex(eq) for eq in equations)
    return l1 % l2

print(system_to_latex(eq1, eq2))
# out: \left\{\begin{matrix}x + y + z = 1 \\ x + y + 2 z = 3\end{matrix}\right.
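To see the result rendered rather than as a raw string, the helper's output can be fed straight to IPython's math display in a Jupyter session (a small usage sketch, assuming the system_to_latex function from the answer):

from IPython.display import Math, display

display(Math(system_to_latex(eq1, eq2)))  # renders the brace-grouped system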
Print Latex for system of equations in SymPy?
How would I write a system of equations in SymPy and output the equivalent LaTeX? The latex function seems to accept only one expression at a time.

import sympy as sp

x, y, z = sp.symbols('x, y, z')
eq1 = sp.Eq(x + y + z, 1)
eq2 = sp.Eq(x + y + 2 * z, 3)
output = sp.latex()  # Do something here?
[ "One way is to create a function and combine the latex output of each equation.\ndef system_to_latex(*equations):\n n = len(equations)\n if n == 0:\n return \"\"\n l1 = r\"\\left\\{\\begin{matrix}%s\\end{matrix}\\right.\"\n l2 = r\" \\\\ \".join(sp.latex(eq) for eq in equations)\n return l1 % l2\n\nprint(system_to_latex(eq1, eq2))\n# out: \\left\\{\\begin{matrix}x + y + z = 1 \\\\ x + y + 2 z = 3\\end{matrix}\\right.\n\n" ]
[ 1 ]
[]
[]
[ "python", "sympy" ]
stackoverflow_0074527451_python_sympy.txt
Q: Multihead model based on DenseNet201 using Keras

I am trying to use this notebook where we define a 3-head model based on DenseNet201. The AlexNet-based model works correctly, but DenseNet201 throws an error. I am a PyTorch user and have not been able to figure out this error:

ValueError: Missing data for input "input_5". You passed a data dictionary with keys ['img_input']. Expected the following keys: ['input_5'].

I know somewhere in the following code snippet I should have the name 'img_input', but I cannot figure it out.

class base_model():
    def __init__(self, side_dim, n_bb, n_classes, name_model):
        self.side_dim = side_dim
        self.name_model = name_model

        # base model DenseNet
        if name_model == 'DenseNet201':
            self.base_model = keras.applications.DenseNet201(
                include_top=False,
                input_shape=(self.side_dim, self.side_dim, 3),
            )
            self.image_input = self.base_model.input
            self.flatten = keras.layers.Flatten()(self.base_model.layers[-2].output)
            self.BatcNorm = keras.layers.BatchNormalization()(self.flatten)
            print('Base model: DenseNet121 (7.2M params x 201 layers)')

        # ----------------------------------------------------------------------
        # Add head with three different outputs to last layer of the basic model
        # ----------------------------------------------------------------------
        # class output
        self.class_categorical = keras.layers.Dense((n_bb * n_classes),
                                                    activation='softmax')(self.BatcNorm)
        self.class_output = keras.layers.Reshape((n_bb, n_classes),
                                                 name='class_output')(self.class_categorical)
        # confidence output
        self.score_confidence = keras.layers.Dense((n_bb),
                                                   name='score_confidence',
                                                   activation='tanh')(self.BatcNorm)
        # bounding boxes coordinate output
        self.score_coords = keras.layers.Dense((n_bb * 4),
                                               name='score_coords')(self.BatcNorm)

The error is thrown when I run the following:

# let's start our training
train_history = myModel.fit({'img_input': X_train},
                            {'class_output': class_target,
                             'score_confidence': target_confidence,
                             'score_coords': target_coords},
                            epochs=N_ep,
                            validation_data=({'img_input': X_val},
                                             {'class_output': Val_class,
                                              'score_confidence': Val_confidence,
                                              'score_coords': Val_coords}),
                            batch_size=Batchs,
                            initial_epoch=init_ep,
                            verbose=1,
                            callbacks=[callbacks, tensorboard_callback])

In the AlexNet-based network the input name is changed directly, but I do not know how to do it for DenseNet201. Can you please help me?

A: The issue is that your input node does not have the same name as the dictionary key holding your input.
You can create your input layer beforehand with the right name, and pass it to the DenseNet201 function as the input tensor.

self.image_input = keras.Input((self.side_dim, self.side_dim, 3), name="img_input")
self.base_model = keras.applications.DenseNet201(
    include_top=False,
    input_tensor=self.image_input,
    )

Another option is to get the name of the input right in your dictionary by using the name of the input node:

myModel.fit({myModel.input.name: X_train},
            {'class_output': class_target,
             'score_confidence': target_confidence,
             'score_coords': target_coords})

A final option is to skip using a dictionary altogether, given that you have a single input:

myModel.fit(X_train,
            {'class_output': class_target,
             'score_confidence': target_confidence,
             'score_coords': target_coords})
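For completeness, a sketch of how the three heads would typically be tied together into a trainable model. The layer names come from the question; the optimizer and the per-head loss choices below are placeholder assumptions:

model = keras.Model(
    inputs=self.image_input,
    outputs=[self.class_output, self.score_confidence, self.score_coords],
)
model.compile(
    optimizer="adam",
    loss={
        "class_output": "categorical_crossentropy",  # assumed loss choices
        "score_confidence": "mse",
        "score_coords": "mse",
    },
)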
Multihead model based on DenseNet201 using Keras
I am trying to use this notebook where we define a 3-head model based on DenseNet201. The AlexNet-based model works correctly, but DenseNet201 throws an error. I am a PyTorch user and have not been able to figure out this error:

ValueError: Missing data for input "input_5". You passed a data dictionary with keys ['img_input']. Expected the following keys: ['input_5'].

I know somewhere in the following code snippet I should have the name 'img_input', but I cannot figure it out.

class base_model():
    def __init__(self, side_dim, n_bb, n_classes, name_model):
        self.side_dim = side_dim
        self.name_model = name_model

        # base model DenseNet
        if name_model == 'DenseNet201':
            self.base_model = keras.applications.DenseNet201(
                include_top=False,
                input_shape=(self.side_dim, self.side_dim, 3),
            )
            self.image_input = self.base_model.input
            self.flatten = keras.layers.Flatten()(self.base_model.layers[-2].output)
            self.BatcNorm = keras.layers.BatchNormalization()(self.flatten)
            print('Base model: DenseNet121 (7.2M params x 201 layers)')

        # ----------------------------------------------------------------------
        # Add head with three different outputs to last layer of the basic model
        # ----------------------------------------------------------------------
        # class output
        self.class_categorical = keras.layers.Dense((n_bb * n_classes),
                                                    activation='softmax')(self.BatcNorm)
        self.class_output = keras.layers.Reshape((n_bb, n_classes),
                                                 name='class_output')(self.class_categorical)
        # confidence output
        self.score_confidence = keras.layers.Dense((n_bb),
                                                   name='score_confidence',
                                                   activation='tanh')(self.BatcNorm)
        # bounding boxes coordinate output
        self.score_coords = keras.layers.Dense((n_bb * 4),
                                               name='score_coords')(self.BatcNorm)

The error is thrown when I run the following:

# let's start our training
train_history = myModel.fit({'img_input': X_train},
                            {'class_output': class_target,
                             'score_confidence': target_confidence,
                             'score_coords': target_coords},
                            epochs=N_ep,
                            validation_data=({'img_input': X_val},
                                             {'class_output': Val_class,
                                              'score_confidence': Val_confidence,
                                              'score_coords': Val_coords}),
                            batch_size=Batchs,
                            initial_epoch=init_ep,
                            verbose=1,
                            callbacks=[callbacks, tensorboard_callback])

In the AlexNet-based network the input name is changed directly, but I do not know how to do it for DenseNet201. Can you please help me?
[ "The issue is that your input node does not have the same name as the dictionary key holding your input.\nYou can create your input layer before hand wit the right name, and pass it to the DenseNet201 function as the input tensor.\nself.image_input = keras.Input((self.side_dim, self.side_dim, 3), name=\"img_input\")\nself.base_model = keras.applications.DenseNet201(\n include_top=False,\n input_tensor=self.image_input,\n )\n\nAnother option is to get the name of the input right in your dictionary by using the name of the input node:\nmyModel.fit({myModel.input.name: X_train}, \n {'class_output': class_target, \n 'score_confidence': target_confidence, \n 'score_coords': target_coords})\n\nA final option is to skip using a dictionary all together, given that you have a single input:\nmyModel.fit(X_train, \n {'class_output': class_target, \n 'score_confidence': target_confidence, \n 'score_coords': target_coords})\n\n" ]
[ 2 ]
[]
[]
[ "densenet", "keras", "python", "tensorflow" ]
stackoverflow_0074527844_densenet_keras_python_tensorflow.txt
Q: How to make user hyperlink in python telegram bot?

Stack Overflow! I'm using the telebot module for my Telegram bot (from telebot import types). I want to send messages to Telegram users. In these messages I want to paste a link to another Telegram user. My code is:

linked_user = '[username](tg://user?id=999999999)'
bot.send_message(
    admin_chat_id,
    f'{linked_user}',
    parse_mode='MarkdownV2',
    disable_web_page_preview=True)

I expect that the admin will receive a message with the username in it, and that if the admin clicks on the text, he will be redirected to the linked user's profile...
The problem is: it's not always hyperlinked text. It can be plain text... With some users' chat.ids it works well, with others it doesn't!
I tried to reason from the fact that not every Telegram user has a 9-digit chat.id, but that's not the reason either...
So I want to make the hyperlink work for EVERY user... I don't know how to do that, so please help me!

A: Some users have specific privacy settings, so even though you can PM them, you can't "publish" their usernames so that anyone else can contact them. You are not doing anything wrong.
How to make user hyperlink in python telegram bot?
Stack Overflow! I'm using the telebot module for my Telegram bot (from telebot import types). I want to send messages to Telegram users. In these messages I want to paste a link to another Telegram user. My code is:

linked_user = '[username](tg://user?id=999999999)'
bot.send_message(
    admin_chat_id,
    f'{linked_user}',
    parse_mode='MarkdownV2',
    disable_web_page_preview=True)

I expect that the admin will receive a message with the username in it, and that if the admin clicks on the text, he will be redirected to the linked user's profile...
The problem is: it's not always hyperlinked text. It can be plain text... With some users' chat.ids it works well, with others it doesn't!
I tried to reason from the fact that not every Telegram user has a 9-digit chat.id, but that's not the reason either...
So I want to make the hyperlink work for EVERY user... I don't know how to do that, so please help me!
[ "Some Users have specific privacy settings. So even though you can pm them, you cant \"publish\" their usernames so anyone else can Contact them. So you are not doing anything wrong.\n" ]
[ 0 ]
[]
[]
[ "python", "python_telegram_bot", "telebot", "telegram", "telegram_bot" ]
stackoverflow_0071180687_python_python_telegram_bot_telebot_telegram_telegram_bot.txt
Q: How can I check the loss of a model at a specific epoch in pytorch?

I was training a deep learning model (link) and it was printing the loss and robustness stats after each epoch, but when it was done executing, the terminal closed, so I could not see the stats (I am using ssh + screen, so that is expected). I did 120 epochs. After training, a folder called log was generated, which contains train_stats.npy, and a folder called resnet (the training code was in train_resnet.py) was generated, which contains 2 files for each epoch, for example:

model-res-epoch93.pt    opt-res-checkpoint_epoch93.tar
model-res-epoch94.pt    opt-res-checkpoint_epoch94.tar
model-res-epoch95.pt    opt-res-checkpoint_epoch95.tar
model-res-epoch96.pt    opt-res-checkpoint_epoch96.tar
model-res-epoch97.pt    opt-res-checkpoint_epoch97.tar
model-res-epoch98.pt    opt-res-checkpoint_epoch98.tar
model-res-epoch99.pt    opt-res-checkpoint_epoch99.tar
model-res-epoch9.pt     opt-res-checkpoint_epoch9.tar

Is there any way I could use any of these files to get back the stats at a specific epoch? Or do I have to repeat the training?

A: Those files are likely to contain only the model states and training checkpoints. If you saved your loss and metrics inside the checkpoint archives, then you will be able to retrieve this information; otherwise this information is simply not accessible anymore.
What are you saving inside the .tar archives?
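Two quick checks are worth trying before repeating the training. This is a sketch; whether it recovers anything depends on what train_resnet.py actually wrote into these files:

import numpy as np
import torch

# log/train_stats.npy may already hold the per-epoch statistics
stats = np.load("log/train_stats.npy", allow_pickle=True)
print(stats.shape, stats.dtype)

# Inspect what the checkpoint archive actually contains
ckpt = torch.load("resnet/opt-res-checkpoint_epoch93.tar", map_location="cpu")
print(ckpt.keys() if isinstance(ckpt, dict) else type(ckpt))
# If a key like 'loss' or 'epoch' shows up, the stats can be read back directly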
How can I check the loss of a model at a specific epoch in pytorch?
I was training a deep learning model (link) and it was printing the loss and robustness stats after each epoch, but when it was done executing, the terminal closed, so I could not see the stats (I am using ssh + screen, so that is expected). I did 120 epochs. After training, a folder called log was generated, which contains train_stats.npy, and a folder called resnet (the training code was in train_resnet.py) was generated, which contains 2 files for each epoch, for example:

model-res-epoch93.pt    opt-res-checkpoint_epoch93.tar
model-res-epoch94.pt    opt-res-checkpoint_epoch94.tar
model-res-epoch95.pt    opt-res-checkpoint_epoch95.tar
model-res-epoch96.pt    opt-res-checkpoint_epoch96.tar
model-res-epoch97.pt    opt-res-checkpoint_epoch97.tar
model-res-epoch98.pt    opt-res-checkpoint_epoch98.tar
model-res-epoch99.pt    opt-res-checkpoint_epoch99.tar
model-res-epoch9.pt     opt-res-checkpoint_epoch9.tar

Is there any way I could use any of these files to get back the stats at a specific epoch? Or do I have to repeat the training?
[ "Those files are likely to only contain the model states and training checkpoints. If you saved your loss and metrics inside the checkpoint archives then you will be able to retrieve this information. Else this information is simply not accessible anymore.\nWhat are you saving inside the .tar archives?\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "python", "pytorch" ]
stackoverflow_0074529449_deep_learning_python_pytorch.txt
Q: Is there any way to show the dependency trees for pip packages?

I have a project with multiple package dependencies, the main requirements being listed in requirements.txt. When I call pip freeze it prints the currently installed packages as a plain list. I would prefer to also get their dependency relationships, something like this:

Flask==0.9
  Jinja2==2.7
  Werkzeug==0.8.3
Jinja2==2.7
Werkzeug==0.8.3
Flask-Admin==1.0.6
  Flask==0.9
    Jinja2==2.7
    Werkzeug==0.8.3

The goal is to detect the dependencies of each specific package:

Werkzeug==0.8.3
Flask==0.9
Flask-Admin==1.0.6

And insert these into my current requirements.txt. For example, for this input:

Flask==0.9
Flask-Admin==1.0.6
Werkzeug==0.8.3

I would like to get:

Flask==0.9
Jinja2==2.7
Flask-Admin==1.0.6
Werkzeug==0.8.3

Is there any way to show the dependencies of installed pip packages?

A: You should take a look at pipdeptree:

$ pip install pipdeptree
$ pipdeptree -fl
Warning!!! Cyclic dependencies found:
------------------------------------------------------------------------
xlwt==0.7.5
ruamel.ext.rtf==0.1.1
xlrd==0.9.3
openpyxl==2.0.4
 - jdcal==1.0
pymongo==2.7.1
reportlab==3.1.8
 - Pillow==2.5.1
 - pip
 - setuptools

It doesn't generate a requirements.txt file as you indicated directly. However the source (255 lines of Python code) should be relatively easy to modify to your needs, or alternatively you can (as @MERose indicated in the pipdeptree 0.3 README) use:

pipdeptree --freeze --warn silence | grep -P '^[\w0-9\-=.]+' > requirements.txt

The 0.5 version of pipdeptree also allows JSON output with the --json option, which is more easily machine-parseable, at the expense of being less readable.

A: Warning: py2 only / abandonware
yolk can display dependencies for packages, provided that they

were installed via setuptools
came with metadata that includes dependency information

$ yolk -d Theano
Theano 0.6.0rc3
 scipy>=0.7.2
 numpy>=1.5.0

A: You can do it by installing the pipdeptree package.
Open a command prompt in your project folder. If you are using any virtual environment, then switch to that virtual environment.
Install the pipdeptree package using pip:

pip install pipdeptree
pipdeptree -fl

This package will list all the dependencies of your project.
For more, see pipdeptree.

A: I realize that many years have passed since this question was asked, but it showed up in my searches so I thought I'd share some knowledge.
The pip-tools package contains a tool called pip-compile that seems to also solve the original poster's problem.
pip-compile takes an input file, which can be setup.py, setup.cfg, pyproject.toml, or requirements.in. The input file is what you write by hand and contains the "direct" dependencies. It may not specify exact dependency versions, but may use version ranges (or no constraints at all). The tool outputs a new requirements.txt file with all the indirect dependencies added and also pins down the dependencies to exact versions.
If you run the pip-compile tool again after updating the source file, it will add or remove dependencies from the output file if needed. You can also choose to upgrade a specific dependency by adding a flag.
So while pip-compile does not show you the dependency tree itself, it helps you with collecting all the leaves of the dependency tree (which I assume was what the original poster wanted to do in the end).
Read more here: https://github.com/jazzband/pip-tools/
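The standard library can also answer the per-package question directly, without installing anything extra. A small sketch using importlib.metadata (Python 3.8+); the exact strings returned depend on the installed versions:

from importlib.metadata import distribution

def direct_deps(package):
    # Declared dependencies of an installed distribution
    return distribution(package).requires or []

print(direct_deps("Flask"))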
Is there any way to show the dependency trees for pip packages?
I have a project with multiple package dependencies, the main requirements being listed in requirements.txt. When I call pip freeze it prints the currently installed packages as a plain list. I would prefer to also get their dependency relationships, something like this:

Flask==0.9
  Jinja2==2.7
  Werkzeug==0.8.3
Jinja2==2.7
Werkzeug==0.8.3
Flask-Admin==1.0.6
  Flask==0.9
    Jinja2==2.7
    Werkzeug==0.8.3

The goal is to detect the dependencies of each specific package:

Werkzeug==0.8.3
Flask==0.9
Flask-Admin==1.0.6

And insert these into my current requirements.txt. For example, for this input:

Flask==0.9
Flask-Admin==1.0.6
Werkzeug==0.8.3

I would like to get:

Flask==0.9
Jinja2==2.7
Flask-Admin==1.0.6
Werkzeug==0.8.3

Is there any way to show the dependencies of installed pip packages?
[ "You should take a look at pipdeptree:\n$ pip install pipdeptree\n$ pipdeptree -fl\nWarning!!! Cyclic dependencies found:\n------------------------------------------------------------------------\nxlwt==0.7.5\nruamel.ext.rtf==0.1.1\nxlrd==0.9.3\nopenpyxl==2.0.4\n - jdcal==1.0\npymongo==2.7.1\nreportlab==3.1.8\n - Pillow==2.5.1\n - pip\n - setuptools\n\nIt doesn't generate a requirements.txt file as you indicated directly. However the source (255 lines of python code) should be relatively easy to modify to your needs, or alternatively you can (as @MERose indicated is in the pipdeptree 0.3 README ) out use:\npipdeptree --freeze --warn silence | grep -P '^[\\w0-9\\-=.]+' > requirements.txt\n\nThe 0.5 version of pipdeptree also allows JSON output with the --json option, that is more easily machine parseble, at the expense of being less readable.\n", "Warning: py2 only / abandonware\nyolk can display dependencies for packages, provided that they\n\nwere installed via setuptools\ncame with metadata that includes dependency information\n$ yolk -d Theano\nTheano 0.6.0rc3\n scipy>=0.7.2\n numpy>=1.5.0\n\n\n", "You can do it by installing pipdeptree package.\nOpen command prompt in your project folder. If you are using any virtual environment, then switch to that virtual environment.\nInstall pipdeptree package using pip\npip install pipdeptree\npipdeptree -fl\n\nThis package will list all the dependencies of your project.\nFor more pipdeptree\n\n", "I realize that many years has passed since this question was asked, but it showed up in my searches so I thought I'd share some knowledge.\nThe pip-tools package contains a tool called pip-compile that seems to also solve the original poster's problem.\npip-compile takes an input file, which can be setup.py, setup.cfg, pyproject.toml, or requirements.in. The input file is what you write by hand and contains the \"direct\" dependencies. It may not specify exact dependency versions, but may use version ranges (nor no constraints at all). The tool outputs a new rquirements.txt file with all the indirect dependencies added and also pins down the dependencies to exact versions.\nIf you run the pip-compile tool again after updating the source file, it will add or remove dependencies from the output file if needed. You can also choose to upgrade a specific dependency by adding a flag.\nSo while pip-compile does not show you the dependency tree itself, it helps you with collecting all the leafs of the dependency tree (which I assume was what the original poster wanted to do in the end).\nRead more here: https://github.com/jazzband/pip-tools/\n" ]
[ 235, 12, 5, 0 ]
[]
[]
[ "pip", "python", "requirements.txt" ]
stackoverflow_0017194301_pip_python_requirements.txt.txt
Q: How is the time complexity of a nested for loop n^2 +1?

So I was reviewing some slides my teacher gave us and we are given the following Python code:

a = 5
b = 6
c = 10
for i in range(n):
    for j in range(n):
        x = i * j
        y = j * j
        z = i * j
for k in range(n):
    w = a*k + 45
    v = b*b
d = 33

For the first part (variable declaration) the time complexity is constant, so O(1), or for the purposes of writing the whole thing as an equation at the end, 3. And the same for the last part with 1. Now, the second and third parts are where my question comes in. The second part apparently has a 3n^2 + 2 time complexity and the third one 2n + 1. I know that the 3n^2 and 2n come from the number of variables inside the loops (because they get iterated that many times, and in the nested one that makes it n*n). But I just don't know where the + 2 and + 1 come from.
I've tried looking up why a for loop in Python is n+1, but not a single site so far describes it like that. I think it's because all of them give the general time complexity, which of course I get is O(n), but part of my assignment is to give the specific count as well, and that's where the constants come in.
My guess is that the n comes from the range(n) part rather than from the for i in itself, and thus that the declaration of the for is essentially like any other variable declaration (constant), but I'm really not sure and would like to understand why. (If you don't feel like giving out a full explanation I'd be fine with just any link to some site/video that does so). Thank you :)

A: The formula for a for loop is x*n + 1.
x - number of operations performed in each iteration.
n - number of iterations.
+1 - creating the range object.
So in your case the formula is 1 + n(3n + 1) <=> 1 + 3n^2 + n:
creating the main loop's range object + n iterations * (3 operations * n iterations + creating 1 range object).
The exact operation count depends on the programming language you are computing it for.
Source: http://math.uni.wroc.pl/~jagiella/p2python/skrypt_html/wyklad2-1.html
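The answer's accounting can be checked empirically with a small counter that follows the same cost model (one unit per range object created, three units per inner-loop body):

def count_ops(n):
    ops = 1                      # outer range object
    for i in range(n):
        ops += 1                 # inner range object for this i
        for j in range(n):
            ops += 3             # the x, y, z assignments
    return ops

print(count_ops(10))             # 311 == 3*10**2 + 10 + 1, i.e. 1 + n(3n + 1)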
How is the time complexity of a nested for loop n^2 +1?
So I was reviewing some slides my teacher gave us and we are given the following Python code:

a = 5
b = 6
c = 10
for i in range(n):
    for j in range(n):
        x = i * j
        y = j * j
        z = i * j
for k in range(n):
    w = a*k + 45
    v = b*b
d = 33

For the first part (variable declaration) the time complexity is constant, so O(1), or for the purposes of writing the whole thing as an equation at the end, 3. And the same for the last part with 1. Now, the second and third parts are where my question comes in. The second part apparently has a 3n^2 + 2 time complexity and the third one 2n + 1. I know that the 3n^2 and 2n come from the number of variables inside the loops (because they get iterated that many times, and in the nested one that makes it n*n). But I just don't know where the + 2 and + 1 come from.
I've tried looking up why a for loop in Python is n+1, but not a single site so far describes it like that. I think it's because all of them give the general time complexity, which of course I get is O(n), but part of my assignment is to give the specific count as well, and that's where the constants come in.
My guess is that the n comes from the range(n) part rather than from the for i in itself, and thus that the declaration of the for is essentially like any other variable declaration (constant), but I'm really not sure and would like to understand why. (If you don't feel like giving out a full explanation I'd be fine with just any link to some site/video that does so). Thank you :)
[ "formula for a for loop: x*n+1.\nx - number of operations performs for each iteration.\nn - number of iterations\n+1 - creating range obj.\nSo in your case the formula is 1 + n(3n + 1) <=>1 + 3n^2 + n.\ncreating main loop range obj + n iterations * (3 operations * n iterations + creating 1 range object)\nThe time complexity depends on programming language you are computing it for.\nSource: http://math.uni.wroc.pl/~jagiella/p2python/skrypt_html/wyklad2-1.html\n" ]
[ 1 ]
[]
[]
[ "big_o", "python", "time_complexity" ]
stackoverflow_0074529134_big_o_python_time_complexity.txt
Q: Return json/dictionary from psycopg3 SELECT query

I've been asked to migrate a program from psycopg2 to psycopg3. In this program they use

with connection.cursor(cursor_factory=RealDictCursor) as cursor:

to obtain a dictionary that's later turned into a JSON file.
My problem is that RealDictCursor appears to be a psycopg2 extras feature, and as such I get an error when trying to use it with psycopg3. Is there any alternative for use in psycopg3?
I tried using the psycopg2 library, but it didn't work. I didn't find any suitable alternative for psycopg3 other than manually going through the returned data.

A: The way to generate rows as dictionaries in psycopg3 is by passing the dict_row row factory to the connection.

>>> from psycopg.rows import dict_row
>>>
>>> conn = psycopg.connect(dbname='test', row_factory=dict_row)
>>> cur = conn.cursor()
>>> cur.execute('select id, name from users')
<psycopg.Cursor [TUPLES_OK] [INTRANS] (user=me database=test) at 0x7f0a2bebbdc0>
>>> cur.fetchall()
[
    {'id': 1, 'name': 'Alice'},
    {'id': 2, 'name': 'Bob'},
    {'id': 3, 'name': 'Carol'},
    {'id': 4, 'name': 'Dave'},
    {'id': 5, 'name': 'Eve'}
]
>>>
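The row factory can also be set per cursor instead of on the whole connection, which is closer to the psycopg2 cursor_factory pattern being migrated. A minimal sketch (the connection string is a placeholder):

import psycopg
from psycopg.rows import dict_row

with psycopg.connect("dbname=test") as conn:
    with conn.cursor(row_factory=dict_row) as cur:
        cur.execute("SELECT id, name FROM users")
        rows = cur.fetchall()  # a list of dicts, ready for json.dumps(rows)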
Return json/dictionary from psycopg3 SELECT query
I've been asked to migrate a program from psycopg2 to psycopg3. In this program they use

with connection.cursor(cursor_factory=RealDictCursor) as cursor:

to obtain a dictionary that's later turned into a JSON file.
My problem is that RealDictCursor appears to be a psycopg2 extras feature, and as such I get an error when trying to use it with psycopg3. Is there any alternative for use in psycopg3?
I tried using the psycopg2 library, but it didn't work. I didn't find any suitable alternative for psycopg3 other than manually going through the returned data.
[ "The way to generate rows as dictionaries in psycopg3 is by passing the dict_row row factory to the connection.\n>>> from psycopg.rows import dict_row\n>>>\n>>> conn = psycopg.connect(dbname='test', row_factory=dict_row)\n>>> cur = conn.cursor()\n>>> cur.execute('select id, name from users')\n<psycopg.Cursor [TUPLES_OK] [INTRANS] (user=me database=test) at 0x7f0a2bebbdc0>\n>>> cur.fetchall()\n[\n {'id': 1, 'name': 'Alice'},\n {'id': 2, 'name': 'Bob'},\n {'id': 3, 'name': 'Carol'},\n {'id': 4, 'name': 'Dave'},\n {'id': 5, 'name': 'Eve'}\n]\n>>> \n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "psycopg3", "python" ]
stackoverflow_0074529506_postgresql_psycopg3_python.txt
Q: Some Python objects were not bound to checkpointed values

I am trying to get started with the Tensorflow 2.0 Object Detection API. I have gone through the installation following the official tutorial and I pass all the tests. However, I keep getting an error message that I don't understand when I try to run the main module. This is how I run it:

python model_main_tf2.py --model_dir=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 --pipeline_config_path=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/pipeline.config

This is the beginning of the error message:

Traceback (most recent call last):
  File "model_main_tf2.py", line 113, in <module>
    tf.compat.v1.app.run()
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 110, in main
    record_summaries=FLAGS.record_summaries)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 569, in train_loop
    unpad_groundtruth_tensors)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 383, in load_fine_tune_checkpoint
    ckpt.restore(checkpoint_path).assert_existing_objects_matched()
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/training/tracking/util.py", line 791, in assert_existing_objects_matched
    (list(unused_python_objects),))
AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [SyncOnReadVariable:{
  0: <tf.Variable 'conv2_block1_0_bn/moving_variance:0' shape=(256,) dtype=float32, numpy=
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,

In the pipeline.config, I specify a checkpoint like this:

fine_tune_checkpoint: "ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"

These are the contents of ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/:

checkpoint
ckpt-0.data-00000-of-00001
ckpt-0.index

I have searched Google but couldn't find any answer. In this issue, the suggested solution is outdated (the code they suggest replacing is not there anymore).
Question: What is the problem and how can I solve it?
I am doing this on a server with CentOS Linux 7. I am using Python 3.7. I am new to Tensorflow, so please let me know if I am missing any important information.

A: From the file name you provided (ssd_resnet50_v1_fpn_640x640_coco17_tpu-8), I can see you are trying to work with an object detection task. Therefore, in your pipeline.config file change this line:

fine_tune_checkpoint_type: "classification"

To:

fine_tune_checkpoint_type: "detection"

This should solve your problem.

A: For me it was useful to check the type of the feature extractor. I changed type: "mobilenet_v2" to type: "mobilenet_v2_fpn_sep_conv" in pipeline.config, and it started working.

A: I had the same error, but for me it was a simple copy&paste mistake.
My fine_tune_checkpoint pointed to faster_rcnn_inception_resnet_v2_640x640_coco17_tpu-8/checkpoint/ckpt-0 instead of faster_rcnn_resnet50_v1_640x640_coco17_tpu-8/checkpoint/ckpt-0

A: I've been running into the same issue trying to get MobileNet & CenterNet to work.
First of all: this error seems to be dependent on which Tensorflow version you are using. In my case, a colleague used TF 2.2 and it worked, whereas my TF 2.10 threw this error!
However, there are reasons why you would not want to downgrade. If you are training a custom dataset and don't need the pre-trained COCO weights, there is an easy workaround:
Simply don't use the fine tune checkpoint which you downloaded from the Model Zoo. To do so, in pipeline.config delete the line fine_tune_checkpoint: "your_path" and this error will disappear.
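Pulling the first answer's advice together, the relevant part of pipeline.config would end up looking roughly like this (a sketch; only the two quoted fields come from the thread, the enclosing train_config block follows the standard pipeline layout):

train_config {
  fine_tune_checkpoint: "ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"
  fine_tune_checkpoint_type: "detection"
}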
Some Python objects were not bound to checkpointed values
I am trying to get started with the Tensorflow 2.0 Object Detection API. I have gone through the installation following the official tutorial and I pass all the tests. However, I keep getting an error message that I don't understand when I try to run the main module. This is how I run it:

python model_main_tf2.py --model_dir=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 --pipeline_config_path=ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/pipeline.config

This is the beginning of the error message:

Traceback (most recent call last):
  File "model_main_tf2.py", line 113, in <module>
    tf.compat.v1.app.run()
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/absl/app.py", line 250, in _run_main
    sys.exit(main(argv))
  File "model_main_tf2.py", line 110, in main
    record_summaries=FLAGS.record_summaries)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 569, in train_loop
    unpad_groundtruth_tensors)
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/object_detection/model_lib_v2.py", line 383, in load_fine_tune_checkpoint
    ckpt.restore(checkpoint_path).assert_existing_objects_matched()
  File "/home/hd/hd_hd/hd_rs239/.conda/envs/jan_tf2/lib/python3.7/site-packages/tensorflow/python/training/tracking/util.py", line 791, in assert_existing_objects_matched
    (list(unused_python_objects),))
AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [SyncOnReadVariable:{
  0: <tf.Variable 'conv2_block1_0_bn/moving_variance:0' shape=(256,) dtype=float32, numpy=
array([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,

In the pipeline.config, I specify a checkpoint like this:

fine_tune_checkpoint: "ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/ckpt-0"

These are the contents of ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/checkpoint/:

checkpoint
ckpt-0.data-00000-of-00001
ckpt-0.index

I have searched Google but couldn't find any answer. In this issue, the suggested solution is outdated (the code they suggest replacing is not there anymore).
Question: What is the problem and how can I solve it?
I am doing this on a server with CentOS Linux 7. I am using Python 3.7. I am new to Tensorflow, so please let me know if I am missing any important information.
[ "From the file name you provided (ssd_resnet50_v1_fpn_640x640_coco17_tpu-8), I can see you are trying to work with an object detection task. Therefore, in your pipeline.config file change this line:\nfine_tune_checkpoint_type: \"classification\"\n\nTo:\nfine_tune_checkpoint_type: \"detection\"\n\nThis should solve your problem.\n", "For me it was usefull to check type of feature extractor. I change type: \"mobilenet_v2\" to type: \"mobilenet_v2_fpn_sep_conv\" in pipeline.config. And its start working.\n", "I had the same error but for me, it was a simple copy&paste mistake. My fine_tune_checkpoint pointed to faster_rcnn_inception_resnet_v2_640x640_coco17_tpu-8/checkpoint/ckpt-0 instead of faster_rcnn_resnet50_v1_640x640_coco17_tpu-8/checkpoint/ckpt-0\n", "I've been running into the same issue trying to get MobileNet & CenterNet to work.\nFirst of all: this error seems to be dependend on which Tensorflow version you are using. In my case, a colleague used TF 2.2 and it worked, whereas my TF 2.10 threw this error!\nHowever, there are reasons why you would not want to downgrade. If you are training a custom dataset and don't need the pre-trained COCO weights, there is an easy workaround:\nSimply don't use the fine tune checkpoint which you downloaded from the Model Zoo. To do so, in pipeline.config delete the line fine_tune_checkpoint: \"your_path\" and this error will disappear.\n" ]
[ 42, 4, 0, 0 ]
[]
[]
[ "deep_learning", "object_detection_api", "python", "tensorflow", "tensorflow2.0" ]
stackoverflow_0063552169_deep_learning_object_detection_api_python_tensorflow_tensorflow2.0.txt
Q: How to keep django-q run on ubuntu nginx server

I use Ubuntu with nginx & gunicorn and am trying to run django-q. How can I keep django-q running when I shut down the terminal?

A: You will need to either run it as a service (refer to this answer) or use a process manager as described in the documentation.
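For the service route, a minimal systemd unit is one common pattern. This is a sketch with hypothetical paths; django-q's worker process is started with manage.py qcluster:

# /etc/systemd/system/qcluster.service
[Unit]
Description=django-q cluster
After=network.target

[Service]
WorkingDirectory=/srv/myproject
ExecStart=/srv/myproject/venv/bin/python manage.py qcluster
Restart=always

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now qcluster so it survives logouts and reboots.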
How to keep django-q run on ubuntu nginx server
I use Ubuntu with nginx & gunicorn and am trying to run django-q. How can I keep django-q running when I shut down the terminal?
[ "You will need to either run it as service (refer to answer) or use a process manager as described in the documentation\n" ]
[ 0 ]
[]
[]
[ "django", "django_q", "python" ]
stackoverflow_0074516668_django_django_q_python.txt
Q: Plot a Pandas Pivoted table using python

I am trying to produce a line plot for the following table such that:

X-axis is the dates [shown in the columns]
Y-axis is the number value for each Region/Date
Chart legend would be the Region [index]: [ARABIAN GULF/BALTIC SEA ...]

Hence, there would be a total of 3 line plots, one for each Region, where the x-axis is the dates.
Here is the table:
Here is the code:

x = {'REGION': {0: 'ANDAMAN SEA', 1: 'ARABIAN GULF', 2: 'BALTIC SEA'},
     '2022-08-29': {0: 13, 1: 28, 2: 121},
     '2022-09-05': {0: 13, 1: 24, 2: 120},
     '2022-09-12': {0: 12, 1: 26, 2: 114},
     '2022-09-19': {0: 18, 1: 55, 2: 105},
     '2022-09-26': {0: 20, 1: 36, 2: 113},
     '2022-10-03': {0: 19, 1: 25, 2: 116},
     '2022-10-10': {0: 19, 1: 70, 2: 114},
     '2022-10-17': {0: 23, 1: 95, 2: 113}}
df = pd.DataFrame(x)
df.plot()
plt.show()

Using the above code I'm getting the following INCORRECT plot:

A: I think you want:

df.set_index('REGION').T.plot()

Output:
Intermediate:

df.set_index('REGION').T

REGION      ANDAMAN SEA  ARABIAN GULF  BALTIC SEA
2022-08-29           13            28         121
2022-09-05           13            24         120
2022-09-12           12            26         114
2022-09-19           18            55         105
2022-09-26           20            36         113
2022-10-03           19            25         116
2022-10-10           19            70         114
2022-10-17           23            95         113
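One refinement on top of the answer: parse the transposed index as datetimes so matplotlib spaces the weeks correctly instead of treating them as evenly spaced strings. A sketch (the xlabel/ylabel keywords assume pandas >= 1.1):

wide = df.set_index('REGION').T
wide.index = pd.to_datetime(wide.index)   # real date axis instead of string labels
wide.plot(xlabel='Week', ylabel='Count')
plt.show()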
Plot a Pandas Pivoted table using python
I am trying to produce a line plot for the following table such that:

X-axis is the dates [shown in the columns]
Y-axis is the number value for each Region/Date
Chart legend would be the Region [index]: [ARABIAN GULF/BALTIC SEA ...]

Hence, there would be a total of 3 line plots, one for each Region, where the x-axis is the dates.
Here is the table:
Here is the code:

x = {'REGION': {0: 'ANDAMAN SEA', 1: 'ARABIAN GULF', 2: 'BALTIC SEA'},
     '2022-08-29': {0: 13, 1: 28, 2: 121},
     '2022-09-05': {0: 13, 1: 24, 2: 120},
     '2022-09-12': {0: 12, 1: 26, 2: 114},
     '2022-09-19': {0: 18, 1: 55, 2: 105},
     '2022-09-26': {0: 20, 1: 36, 2: 113},
     '2022-10-03': {0: 19, 1: 25, 2: 116},
     '2022-10-10': {0: 19, 1: 70, 2: 114},
     '2022-10-17': {0: 23, 1: 95, 2: 113}}
df = pd.DataFrame(x)
df.plot()
plt.show()

Using the above code I'm getting the following INCORRECT plot:
[ "I think you want:\ndf.set_index('REGION').T.plot()\n\nOutput:\n\nIntermediate:\ndf.set_index('REGION').T\n\nREGION ANDAMAN SEA ARABIAN GULF BALTIC SEA\n2022-08-29 13 28 121\n2022-09-05 13 24 120\n2022-09-12 12 26 114\n2022-09-19 18 55 105\n2022-09-26 20 36 113\n2022-10-03 19 25 116\n2022-10-10 19 70 114\n2022-10-17 23 95 113\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "visualization" ]
stackoverflow_0074529558_pandas_python_visualization.txt
Q: pip subprocess to install build dependencies did not run successfully

With the following Dockerfile,

FROM python:3.9-slim-buster
WORKDIR /python-docker
COPY requirements.txt requirements.txt
RUN python3 -m pip install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 5000
CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"]

and a requirements file as follows,

boto3==1.21.32
Flask==2.2.2
Flask_Cors==3.0.10
hvac==1.0.2
PyJWT==2.6.0
PyMySQL==0.10.1
zenpy==2.0.24
gunicorn==20.1.0
pandas==1.4.2

when I tried building a multi-arch Docker image with the following command,

docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t name/flask-docker:latest --push .

it displays the following output:

× pip subprocess to install build dependencies did not run successfully.
#0 148.6   │ exit code: 1
#0 148.6   ╰─> [262 lines of output]
#0 148.6       Collecting setuptools>=51.0.0
#0 148.6         Downloading setuptools-65.6.0-py3-none-any.whl (1.2 MB)
#0 148.6            ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 7.0 MB/s eta 0:00:00
#0 148.6       Collecting wheel
#0 148.6         Downloading wheel-0.38.4-py3-none-any.whl (36 kB)
#0 148.6       Collecting Cython<3,>=0.29.24
#0 148.6         Downloading Cython-0.29.32-py2.py3-none-any.whl (986 kB)
#0 148.6            ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 986.3/986.3 kB 9.1 MB/s eta 0:00:00
#0 148.6       Collecting oldest-supported-numpy>=0.10
#0 148.6         Downloading oldest_supported_numpy-2022.11.19-py3-none-any.whl (4.9 kB)
#0 148.6       Collecting numpy==1.19.3
#0 148.6         Downloading numpy-1.19.3.zip (7.3 MB)
#0 148.6            ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 13.4 MB/s eta 0:00:00
#0 148.6       Installing build dependencies: started
#0 148.6       Installing build dependencies: finished with status 'done'
#0 148.6       Getting requirements to build wheel: started
#0 148.6       Getting requirements to build wheel: finished with status 'done'
#0 148.6       Preparing metadata (pyproject.toml): started
#0 148.6       Preparing metadata (pyproject.toml): still running...
#0 148.6       Preparing metadata (pyproject.toml): finished with status 'error'
#0 148.6       error: subprocess-exited-with-error
#0 148.6
#0 148.6       × Preparing metadata (pyproject.toml) did not run successfully.
#0 148.6       │ exit code: 1
#0 148.6       ╰─> [227 lines of output]
#0 148.6           Running from numpy source directory.
#0 148.6           setup.py:480: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
#0 148.6             run_build = parse_setuppy_commands()
#0 148.6           Processing numpy/random/_bounded_integers.pxd.in
#0 148.6           Processing numpy/random/_common.pyx
#0 148.6           Processing numpy/random/_mt19937.pyx
#0 148.6           Processing numpy/random/_philox.pyx
#0 148.6           Processing numpy/random/mtrand.pyx
#0 148.6           Processing numpy/random/bit_generator.pyx
#0 148.6           RuntimeError: Broken toolchain: cannot link a simple C program
#0 148.6           [end of output]
#0 148.6
#0 148.6       note: This error originates from a subprocess, and is likely not a problem with pip.
#0 148.6       error: metadata-generation-failed
#0 148.6
#0 148.6       × Encountered error while generating package metadata.
#0 148.6       ╰─> See above for output.
#0 148.6
#0 148.6       note: This is an issue with the package mentioned above, not pip.
#0 148.6       hint: See above for details.
#0 148.6       [end of output]
#0 148.6
#0 148.6   note: This error originates from a subprocess, and is likely not a problem with pip.
#0 148.6   error: subprocess-exited-with-error
#0 148.6
#0 148.6   × pip subprocess to install build dependencies did not run successfully.
#0 148.6   │ exit code: 1
#0 148.6   ╰─> See above for output.
#0 148.6
#0 148.6   note: This error originates from a subprocess, and is likely not a problem with pip.
------
Dockerfile:9
--------------------
   7 |     RUN python3 -m pip install --upgrade pip
   8 |
   9 | >>> RUN pip3 install -r requirements.txt

As the output above says, it is not a problem with pip, but I couldn't work out whether it is a problem with any of the underlying packages or not.
Note: I tried removing pandas from the requirements file and it works, but I need pandas because I have an import that depends on it.

A: Just try installing the packages one by one, or remove the version pin in front of every package so pip can resolve suitable versions automatically, just like this:

boto3
Flask
Flask_Cors
hvac
PyJWT
PyMySQL
zenpy
gunicorn
pandas

I think it will work fine.
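A common root cause behind "Broken toolchain: cannot link a simple C program" on slim images is that numpy has no pre-built wheel for linux/arm/v7 and must be compiled from source, while python:3.9-slim-buster ships no C compiler. A hedged sketch of that fix for line 9 of the Dockerfile (package names assume the Debian-based base image):

RUN apt-get update \
    && apt-get install -y --no-install-recommends gcc g++ \
    && pip3 install -r requirements.txt \
    && apt-get purge -y gcc g++ \
    && rm -rf /var/lib/apt/lists/*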
pip subprocess to install build dependencies did not run successfully
With the following Dockerfile, FROM python:3.9-slim-buster WORKDIR /python-docker COPY requirements.txt requirements.txt RUN python3 -m pip install --upgrade pip RUN pip3 install -r requirements.txt COPY . . EXPOSE 5000 CMD [ "python3", "-m" , "flask", "run", "--host=0.0.0.0"] and the requirements file as follows, boto3==1.21.32 Flask==2.2.2 Flask_Cors==3.0.10 hvac==1.0.2 PyJWT==2.6.0 PyMySQL==0.10.1 zenpy==2.0.24 gunicorn==20.1.0 pandas==1.4.2 When I tried building a multi-arch Docker image with the following command, docker buildx build --platform linux/amd64,linux/arm64,linux/arm/v7 -t name/flask-docker:latest --push . It displays the following output: × pip subprocess to install build dependencies did not run successfully. #0 148.6 │ exit code: 1 #0 148.6 ╰─> [262 lines of output] #0 148.6 Collecting setuptools>=51.0.0 #0 148.6 Downloading setuptools-65.6.0-py3-none-any.whl (1.2 MB) #0 148.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 7.0 MB/s eta 0:00:00 #0 148.6 Collecting wheel #0 148.6 Downloading wheel-0.38.4-py3-none-any.whl (36 kB) #0 148.6 Collecting Cython<3,>=0.29.24 #0 148.6 Downloading Cython-0.29.32-py2.py3-none-any.whl (986 kB) #0 148.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 986.3/986.3 kB 9.1 MB/s eta 0:00:00 #0 148.6 Collecting oldest-supported-numpy>=0.10 #0 148.6 Downloading oldest_supported_numpy-2022.11.19-py3-none-any.whl (4.9 kB) #0 148.6 Collecting numpy==1.19.3 #0 148.6 Downloading numpy-1.19.3.zip (7.3 MB) #0 148.6 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3/7.3 MB 13.4 MB/s eta 0:00:00 #0 148.6 Installing build dependencies: started #0 148.6 Installing build dependencies: finished with status 'done' #0 148.6 Getting requirements to build wheel: started #0 148.6 Getting requirements to build wheel: finished with status 'done' #0 148.6 Preparing metadata (pyproject.toml): started #0 148.6 Preparing metadata (pyproject.toml): still running... #0 148.6 Preparing metadata (pyproject.toml): finished with status 'error' #0 148.6 error: subprocess-exited-with-error #0 148.6 #0 148.6 × Preparing metadata (pyproject.toml) did not run successfully. #0 148.6 │ exit code: 1 #0 148.6 ╰─> [227 lines of output] #0 148.6 Running from numpy source directory. #0 148.6 setup.py:480: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates #0 148.6 run_build = parse_setuppy_commands() #0 148.6 Processing numpy/random/_bounded_integers.pxd.in #0 148.6 Processing numpy/random/_common.pyx #0 148.6 Processing numpy/random/_mt19937.pyx #0 148.6 Processing numpy/random/_philox.pyx #0 148.6 Processing numpy/random/mtrand.pyx #0 148.6 Processing numpy/random/bit_generator.pyx RuntimeError: Broken toolchain: cannot link a simple C program #0 148.6 [end of output] #0 148.6 #0 148.6 note: This error originates from a subprocess, and is likely not a problem with pip. #0 148.6 error: metadata-generation-failed #0 148.6 #0 148.6 × Encountered error while generating package metadata. #0 148.6 ╰─> See above for output. #0 148.6 #0 148.6 note: This is an issue with the package mentioned above, not pip. #0 148.6 hint: See above for details. #0 148.6 [end of output] #0 148.6 #0 148.6 note: This error originates from a subprocess, and is likely not a problem with pip. 
#0 148.6 error: subprocess-exited-with-error #0 148.6 #0 148.6 × pip subprocess to install build dependencies did not run successfully. #0 148.6 │ exit code: 1 #0 148.6 ╰─> See above for output. #0 148.6 #0 148.6 note: This error originates from a subprocess, and is likely not a problem with pip. ------ Dockerfile:9 -------------------- 7 | RUN python3 -m pip install --upgrade pip 8 | 9 | >>> RUN pip3 install -r requirements.txt As the output above says, it is not a problem with pip, but I couldn't work out whether the problem lies with one of the underlying packages. Note: I tried removing pandas from the requirements file and the build works, but I need pandas because I have an import that depends on it.
[ "Try installing the packages one by one, or remove the version pin in front of every package so pip can resolve compatible versions automatically, just like this:\nboto3\nFlask\nFlask_Cors\nhvac\nPyJWT\nPyMySQL\nzenpy\ngunicorn\npandas\n# I think it will work fine\n\n" ]
[ 0 ]
[]
[]
[ "docker", "pip", "python" ]
stackoverflow_0074529519_docker_pip_python.txt
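A note on the error above: "RuntimeError: Broken toolchain: cannot link a simple C program" typically means pip fell back to building numpy from source (no prebuilt wheel exists for the pinned versions on linux/arm/v7) and the slim base image ships no C compiler. A minimal sketch of one possible fix, assuming the same base image, is to install the build toolchain before pip runs:

FROM python:3.9-slim-buster
WORKDIR /python-docker
COPY requirements.txt requirements.txt
# build-essential provides gcc and friends for source builds of numpy/pandas on arm
RUN apt-get update && apt-get install -y --no-install-recommends build-essential && rm -rf /var/lib/apt/lists/*
RUN python3 -m pip install --upgrade pip
RUN pip3 install -r requirements.txt
COPY . .
EXPOSE 5000
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0"]

The source builds can still be very slow under QEMU emulation, so dropping linux/arm/v7 from the platform list is another option if that target is not needed.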
Q: {TypeError} Object of type Commit is not JSON serializable I've a dict with some repo information and I want to write it to a JSON file, but this error is raised by the dumps method: {TypeError} Object of type Commit is not JSON serializable. __repo_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) repo = Repo(__repo_path) __tags = sorted( (tag for tag in repo.tags if tag.commit.committed_datetime <= repo.head.commit.committed_datetime), key=lambda t: t.commit.committed_datetime,) try: __current_branch = repo.active_branch except TypeError as e: __current_branch = "release" SCM_DATA = { "CHANGESET": repo.head.commit, "BRANCH": __current_branch, "TAG": __tags[-1], "IS_DIRTY": repo.is_dirty(), } json_version = json.dumps(SCM_DATA) How can I fix it? A: Make sure to use names/text messages, not objects: import git, json repo = git.Repo('C:/data/foo') __current_branch = repo.active_branch.name __tags = repo.tags SCM_DATA = { "CHANGESET": repo.head.commit.message, "BRANCH": __current_branch, "TAG": __tags[-1].name, "IS_DIRTY": repo.is_dirty(), } json_version = json.dumps(SCM_DATA) print(json_version) Out: {"CHANGESET": "inital commit", "BRANCH": "dev", "TAG": "v0.0.1", "IS_DIRTY": true}
{TypeError} Object of type Commit is not JSON serializable
I've a dict with some repo information and I want to write it to a JSON file, but this error is raised by the dumps method: {TypeError} Object of type Commit is not JSON serializable. __repo_path = os.path.abspath(os.path.join(os.path.dirname(__file__), "..")) repo = Repo(__repo_path) __tags = sorted( (tag for tag in repo.tags if tag.commit.committed_datetime <= repo.head.commit.committed_datetime), key=lambda t: t.commit.committed_datetime,) try: __current_branch = repo.active_branch except TypeError as e: __current_branch = "release" SCM_DATA = { "CHANGESET": repo.head.commit, "BRANCH": __current_branch, "TAG": __tags[-1], "IS_DIRTY": repo.is_dirty(), } json_version = json.dumps(SCM_DATA) How can I fix it?
[ "Make sure to use names/text messages, not objects:\nimport git, json\n\nrepo = git.Repo('C:/data/foo')\n__current_branch = repo.active_branch.name\n__tags = repo.tags\n\nSCM_DATA = {\n \"CHANGESET\": repo.head.commit.message,\n \"BRANCH\": __current_branch,\n \"TAG\": __tags[-1].name,\n \"IS_DIRTY\": repo.is_dirty(),\n}\n\njson_version = json.dumps(SCM_DATA)\nprint(json_version)\n\nOut:\n{\"CHANGESET\": \"inital commit\", \"BRANCH\": \"dev\", \"TAG\": \"v0.0.1\", \"IS_DIRTY\": true}\n\n" ]
[ 1 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074529527_json_python.txt
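An alternative sketch to the accepted answer: instead of converting each field by hand, you can give json.dumps a fallback serializer so that any object it cannot encode (such as a Commit) is stringified. This assumes a plain string representation of the commit is acceptable:

import json

# str() is applied to any value json does not know how to encode
json_version = json.dumps(SCM_DATA, default=str)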
Q: How to exclude a specific element from a list comprehension with conditionals I am trying to use a list comprehension to extract specific elements from a list, using conditionals on the list indices. When the list indices differ, specific operations need to happen. When the list indices are the same, no element should be added. The latter is what I do not know how to do, except by adding '' and removing it afterwards. Example (simpler than my actual case, but conceptually the same): x = [0, 1, 2, 3, 4] i = 2 x2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] if j < i else '' for j in x] x2.remove('') x2 # [4, 3, 4, 6] How would you exclude the case where i == j a priori? I would have thought that just not having else '' at the end would work, but then I get an invalid_syntax error. I suppose in essence I am looking for a neutral element for the list comprehension. A: You can put if clauses after for to filter some elements. x2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] for j in x if j != i] A: You can apply two kind of conditionals to a list comprehension. The one you are applying is applied to every element that make it to that point of code to get a value, that is why you need the else. You also want the filter behaviour (discard values that don't meet a condition), so you have to apply another conditional after the for, which decides which values to consider for the generated list: x = [0, 1, 2, 3, 4] i = 2 x2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] for j in x if j != i]
How to exclude a specific element from a list comprehension with conditionals
I am trying to use a list comprehension to extract specific elements from a list, using conditionals on the list indices. When the list indices differ, specific operations need to happen. When the list indices are the same, no element should be added. The latter is what I do not know how to do, except by adding '' and removing it afterwards. Example (simpler than my actual case, but conceptually the same): x = [0, 1, 2, 3, 4] i = 2 x2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] if j < i else '' for j in x] x2.remove('') x2 # [4, 3, 4, 6] How would you exclude the case where i == j a priori? I would have thought that just not having else '' at the end would work, but then I get an invalid_syntax error. I suppose in essence I am looking for a neutral element for the list comprehension.
[ "You can put if clauses after for to filter some elements.\nx2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] for j in x if j != i]\n\n", "You can apply two kind of conditionals to a list comprehension. The one you are applying is applied to every element that make it to that point of code to get a value, that is why you need the else. You also want the filter behaviour (discard values that don't meet a condition), so you have to apply another conditional after the for, which decides which values to consider for the generated list:\nx = [0, 1, 2, 3, 4]\ni = 2\nx2 = [2 * x[j] - x[i] if j > i else 2 * x[i] - x[j] for j in x if j != i]\n\n" ]
[ 2, 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074529555_list_python.txt
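The same filter also works when iterating positions with enumerate, which generalizes to lists whose values are not equal to their own indices (in the question, x[j] == j, so for j in x happens to work as an index loop). A small sketch:

x = [0, 1, 2, 3, 4]
i = 2
# iterate over (index, value) pairs and skip index i entirely
x2 = [2 * v - x[i] if j > i else 2 * x[i] - v for j, v in enumerate(x) if j != i]
print(x2)  # [4, 3, 4, 6]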
Q: How do I render Displacy on Spyder Notebook? I want Spyder to display the plot of dependencies using Displacy visualizer of Spacy. Here is the code: import spacy nlp = spacy.load('en_core_web_sm') from spacy import displacy doc = nlp(u'This is a short text.') displacy.render(doc, style='dep', options={'distance':110}) The program ends without displaying anything. If I add jupyter=True, I get this: <IPython.core.display.HTML object> A: In my case running your code in Spyder 5.1.2 returns me the string for the svg of the plot. To visualize the plot while running the code from Spyder you will need to use displacy.serve method. That will run a web server serving the svg/plot. You should be able to access/view it at that point through your browser by going to http://localhost:5000/. A: By looking at the source code you will notice you may interact with the HTML yield using the renderer method set_render_wrapper. Read the following post that demonstrates how to essentially assign the render to an svg and save it. (serve is also a good solution actually but this will help save your results directly) Save SpaCy render file as SVG using DisplaCy
How do I render Displacy on Spyder Notebook?
I want Spyder to display the plot of dependencies using Displacy visualizer of Spacy. Here is the code: import spacy nlp = spacy.load('en_core_web_sm') from spacy import displacy doc = nlp(u'This is a short text.') displacy.render(doc, style='dep', options={'distance':110}) The program ends without displaying anything. If I add jupyter=True, I get this: <IPython.core.display.HTML object>
[ "In my case running your code in Spyder 5.1.2 returns me the string for the svg of the plot.\nTo visualize the plot while running the code from Spyder you will need to use displacy.serve method. That will run a web server serving the svg/plot. You should be able to access/view it at that point through your browser by going to http://localhost:5000/.\n", "By looking at the source code you will notice you may interact with the HTML yield using the renderer method set_render_wrapper. Read the following post that demonstrates how to essentially assign the render to an svg and save it. (serve is also a good solution actually but this will help save your results directly)\nSave SpaCy render file as SVG using DisplaCy\n" ]
[ 2, 0 ]
[]
[]
[ "jupyter_notebook", "python", "spacy", "spyder" ]
stackoverflow_0069078885_jupyter_notebook_python_spacy_spyder.txt
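A minimal sketch of the save-to-file route mentioned in the second answer, assuming the same model and text as the question; the resulting SVG can be opened in any browser:

import spacy
from spacy import displacy
from pathlib import Path

nlp = spacy.load('en_core_web_sm')
doc = nlp(u'This is a short text.')
# with jupyter=False, render() returns the SVG markup as a string
svg = displacy.render(doc, style='dep', options={'distance': 110}, jupyter=False)
Path('dependency_plot.svg').write_text(svg, encoding='utf-8')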
Q: foreign key dynamic filter with another foreign key in admin.py in django I have a problem with the dynamic design of the admin. I want the selected productCategory to be dynamically filtered when I select the productType. For example, I do this manually in models.py (ProductCategory.objects.filter(productType=2 or 1 or 4 ...); I can't make it dynamic. models.py class ProductType(models.Model): name = models.CharField(max_length=200) slug = models.SlugField(max_length=200, unique=True) class ProductCategory(models.Model): productType = models.ForeignKey(ProductType, on_delete=models.CASCADE) name = models.CharField(max_length=200) slug = models.SlugField(max_length=200, unique=True) class Product(models.Model): category = models.ForeignKey(Category, on_delete=models.CASCADE) brand = models.ForeignKey(Brand, on_delete=models.CASCADE) productType = models.ForeignKey(ProductType, on_delete=models.CASCADE, default=1) productCategory = models.ForeignKey(ProductCategory, on_delete=models.CASCADE) admin.py class ProductForm(forms.ModelForm): def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) if self.instance: self.fields['productCategory'].queryset = ProductCategory.objects.filter(productType=self.instance.productType.id) @admin.register(Product) class ProductAdmin(admin.ModelAdmin): form = ProductForm A: I tried to do the same thing for a long time but haven't solved it yet. But I can tell you why this does not work. The problem is that self.instance.productType.id is None because nothing has been selected yet. Try adding a print() like this and you will see why it does not work. def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) if self.instance: print(self.instance.productType.id) self.fields['productCategory'].queryset = ProductCategory.objects.filter(productType=self.instance.productType.id)
foreign key dynamic filter with another foreign key in admin.py in django
I have a problem with the dynamic design of the admin. I want the selected productCategory to be dynamically filtered when I select the productType. For example, I do this manually in models.py (ProductCategory.objects.filter(productType=2 or 1 or 4 ...); I can't make it dynamic. models.py class ProductType(models.Model): name = models.CharField(max_length=200) slug = models.SlugField(max_length=200, unique=True) class ProductCategory(models.Model): productType = models.ForeignKey(ProductType, on_delete=models.CASCADE) name = models.CharField(max_length=200) slug = models.SlugField(max_length=200, unique=True) class Product(models.Model): category = models.ForeignKey(Category, on_delete=models.CASCADE) brand = models.ForeignKey(Brand, on_delete=models.CASCADE) productType = models.ForeignKey(ProductType, on_delete=models.CASCADE, default=1) productCategory = models.ForeignKey(ProductCategory, on_delete=models.CASCADE) admin.py class ProductForm(forms.ModelForm): def __init__(self, *args, **kwargs): super(ProductForm, self).__init__(*args, **kwargs) if self.instance: self.fields['productCategory'].queryset = ProductCategory.objects.filter(productType=self.instance.productType.id) @admin.register(Product) class ProductAdmin(admin.ModelAdmin): form = ProductForm
[ "I tried to do the same thing for a long time but haven't solved it yet. But I can tell you why this does not work.\nThe problem is that self.instance.productType.id is None because nothing has been selected yet.\nTry adding a print() like this and you will see why it does not work.\ndef __init__(self, *args, **kwargs):\n super(ProductForm, self).__init__(*args, **kwargs)\n \n if self.instance:\n print(self.instance.productType.id)\n self.fields['productCategory'].queryset = ProductCategory.objects.filter(productType=self.instance.productType.id)\n\n" ]
[ 0 ]
[]
[]
[ "django", "django_admin", "django_models", "foreign_keys", "python" ]
stackoverflow_0073804134_django_django_admin_django_models_foreign_keys_python.txt
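As the answer notes, self.instance carries no productType yet on the admin "add" form. A hedged sketch of one way to guard against that, filtering only when an existing Product is being edited (reacting to a productType change without a page reload would additionally need custom JavaScript, which the stock admin does not provide):

class ProductForm(forms.ModelForm):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if self.instance.pk:  # editing an existing product: narrow the choices
            self.fields['productCategory'].queryset = ProductCategory.objects.filter(
                productType=self.instance.productType_id)
        else:  # adding a new product: no type chosen yet, show everything
            self.fields['productCategory'].queryset = ProductCategory.objects.all()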
Q: how to scrape this interactive chart data at desired datetime? I am currently attempting to scrape this website to print all data in the blue rectangle from https://mempool.jhoenicke.de/#BTC,6m,weight Desired point that I want to scrape I would like to scrape the text in all the individual tooltips because I can see that the data is under the id="tooltip" like this data under id="tooltip" I tried to scrape it with selenium by clicking and holding on the element id = "tooltip", but it doesn't work from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.common.action_chains import ActionChains import time from time import sleep from random import randint website = 'https://jochen-hoenicke.de/queue/#BTC,6m,weight' path = '/Users/LENOVO/Downloads/chromedriver' driver = webdriver.Chrome(path) driver.get(website) element = driver.find_element("xpath", '//*[@id="tooltip"]') element.click() element2 = driver.find_element("style", '//*[@id="tooltip"]') action = ActionChains(driver) action.click_and_hold(element2) action.perform() time.sleep(3) memdate = driver.find_element("xpath",'//[@id="tooltip"]/strong').text print(memdate) action.release(element2) action.perform() but it fails at element.click(). I just want to know whether I am going in the right direction, or could you guide me to the right way to get the data from the table under the tooltip at the desired datetime in the strong tag. Thank you so very much in advance. A: Essentially you need to move your mouse in a horizontal line across the page, near to the bottom of the chart, and record the tooltip content each time that it changes. # wait until key elements are loaded canvas = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, "canvas[class='flot-overlay']"))) WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, "#tooltip"))) prev_date = "UNKNOWN" action = webdriver.ActionChains(driver) action.move_to_element(canvas).move_by_offset(0, 10).click().perform() for i in range(20): action.move_by_offset(1, 0).click().perform() try: WebDriverWait(driver, 2).until_not(EC.text_to_be_present_in_element((By.CSS_SELECTOR, "#tooltip strong"), prev_date)) tooltip_date = driver.find_element(By.CSS_SELECTOR, "#tooltip strong") tooltip_table = driver.find_element(By.CSS_SELECTOR, "#tooltip table") print("%s\n%s" % (tooltip_date.text, tooltip_table.text)) prev_date = tooltip_date.text except TimeoutException: # tooltip date has not changed so keep moving continue Starting point, stopping point and increment will need to be refined in order to pull all values from the chart, but this shows the general idea.
how to scrape this interactive chart data at desired datetime?
I am currently attempting to scrape this website to print all data in the blue rectangle from https://mempool.jhoenicke.de/#BTC,6m,weight Desired point that I want to scrape I would like to scrape the text in all the individual tooltips because I can see that the data is under the id="tooltip" like this data under id="tooltip" I tried to scrape it with selenium by clicking and holding on the element id = "tooltip", but it doesn't work from selenium import webdriver from selenium.webdriver import ActionChains from selenium.webdriver.common.action_chains import ActionChains import time from time import sleep from random import randint website = 'https://jochen-hoenicke.de/queue/#BTC,6m,weight' path = '/Users/LENOVO/Downloads/chromedriver' driver = webdriver.Chrome(path) driver.get(website) element = driver.find_element("xpath", '//*[@id="tooltip"]') element.click() element2 = driver.find_element("style", '//*[@id="tooltip"]') action = ActionChains(driver) action.click_and_hold(element2) action.perform() time.sleep(3) memdate = driver.find_element("xpath",'//[@id="tooltip"]/strong').text print(memdate) action.release(element2) action.perform() but it fails at element.click(). I just want to know whether I am going in the right direction, or could you guide me to the right way to get the data from the table under the tooltip at the desired datetime in the strong tag. Thank you so very much in advance.
[ "Essentially you need to move your mouse in a horizontal line across the page, near to the bottom of the chart, and record the tooltip content each time that it changes.\n# wait until key elements are loaded\ncanvas = WebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, \"canvas[class='flot-overlay']\")))\nWebDriverWait(driver, 5).until(EC.presence_of_element_located((By.CSS_SELECTOR, \"#tooltip\")))\nprev_date = \"UNKNOWN\"\naction = webdriver.ActionChains(driver)\naction.move_to_element(canvas).move_by_offset(0, 10).click().perform()\nfor i in range(20):\n action.move_by_offset(1, 0).click().perform() \n try:\n WebDriverWait(driver, 2).until_not(EC.text_to_be_present_in_element((By.CSS_SELECTOR, \"#tooltip strong\"), prev_date))\n tooltip_date = driver.find_element(By.CSS_SELECTOR, \"#tooltip strong\")\n tooltip_table = driver.find_element(By.CSS_SELECTOR, \"#tooltip table\")\n print(\"%s\\n%s\" % (tooltip_date.text, tooltip_table.text))\n prev_date = tooltip_date.text\n except TimeoutException:\n # tooltip date has not changed so keep moving\n continue \n\nStarting point, stopping point and increment will need to be refined in order to pull all values from the chart, but this shows the general idea.\n" ]
[ 0 ]
[]
[]
[ "charts", "interactive", "python", "selenium", "web_scraping" ]
stackoverflow_0074528872_charts_interactive_python_selenium_web_scraping.txt
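The answer's snippet relies on several imports and objects that are easy to miss; for completeness, a sketch of the setup it assumes:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

driver = webdriver.Chrome()  # chromedriver must be on PATH
driver.get('https://jochen-hoenicke.de/queue/#BTC,6m,weight')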
Q: How can I make and train a custom dataset from my own dataset? I have a question about my work. I am building a multiclass classification model that classifies an input image as one of 4 labels. Currently, I have 100,000 images spread unevenly across the 4 classes, and I also have a CSV file (made with the pandas library) containing each image's file name, class and path. Given my computing power, I want to test with only 20,000 images, and those 20,000 images should contain the 4 classes in the same ratio. I think it would be good to use the per-class info from my CSV file, but I have no idea how to flesh out this idea, so I need your tips. Thank you in advance! A: Here's the solution I found, although it might not be the most optimal one. With 100,000 images and 4 classes, sampling 20,000 evenly means 5,000 images per class. If you have a dataframe (your csv file) that is structured as follows: >>> df filename class 0 one.png 1 1 two.png 2 . . . 99999 name.png 4 You can then get a subdataframe with subdf = pd.DataFrame(columns=df.columns) class_names = df['class'].unique() n_to_sample = int(20000 / len(class_names)) for class_name in class_names: subdf = pd.concat([subdf,df[df['class']==class_name].sample(n=n_to_sample)]) Hope this works!
How can I make and train a custom dataset from my own dataset?
I have a question about my work. I am building a multiclass classification model that classifies an input image as one of 4 labels. Currently, I have 100,000 images spread unevenly across the 4 classes, and I also have a CSV file (made with the pandas library) containing each image's file name, class and path. Given my computing power, I want to test with only 20,000 images, and those 20,000 images should contain the 4 classes in the same ratio. I think it would be good to use the per-class info from my CSV file, but I have no idea how to flesh out this idea, so I need your tips. Thank you in advance!
[ "Here's the solution I found, although it might not be the most optimal one. With 100,000 images and 4 classes, sampling 20,000 evenly means 5,000 images per class.\nIf you have a dataframe (your csv file) that is structured as follows:\n>>> df\n\n filename class\n0 one.png 1\n1 two.png 2\n.\n.\n.\n99999 name.png 4\n\nYou can then get a subdataframe with\nsubdf = pd.DataFrame(columns=df.columns)\nclass_names = df['class'].unique()\nn_to_sample = int(20000 / len(class_names))\n\nfor class_name in class_names:\n subdf = pd.concat([subdf,df[df['class']==class_name].sample(n=n_to_sample)])\n\nHope this works!\n" ]
[ 0 ]
[]
[]
[ "keras", "pandas", "python", "tensorflow2.0" ]
stackoverflow_0074515506_keras_pandas_python_tensorflow2.0.txt
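Recent pandas versions also offer a one-liner for this kind of per-class sampling; a sketch assuming pandas >= 1.1, where DataFrameGroupBy.sample exists:

# draw the same number of rows from every class (20,000 / 4 = 5,000 each)
subdf = df.groupby('class').sample(n=5000, random_state=42)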
Q: How to read a text file and verify it exists in Python, in a while loop? I'm trying to verify whether a file exists in the current directory and, at the same time, read the file if it exists. Below is my code import os.path def readdata(): isFileExist = False while isFileExist == False: userIN = str(input("Please Enter a file name, followed by .txt: ")) isFileExist = os.path.exists(userIN) if isFileExist == False: print(f"Checking... {userIN} DOES NOT exist in the current directory. Please retry.") else: print(f"Checking... {userIN} exist in the current directory.") print(f"\n The file < {userIN} > includes the following data:") IN = open(userIN,"r") std1N = IN.readline() std1M = IN.readline() std2N = IN.readline() std2M = IN.readline() std3N = IN.readline() std3M = IN.readline() IN.close() print(f" Student Name: {std1N.strip()}") print(f" Performance: {std1M.strip()} out of 100") print(f" Student Name: {std2N.strip()}") print(f" Performance: {std2M.strip()} out of 100") print(f" Student Name: {std3N.strip()}") print(f" Performance: {std3M.strip()} out of 100") print(f" The average exam score of the 3 students in the file < {userIN} > is {(eval(std1M.strip())+eval(std2M.strip())+eval(std3M.strip()))/3:.2f}.") def main(): readdata() if __name__ == '__main__': main() How can I use main to pass the file name to the function readdata(), then verify that it exists in the current directory, read the file, and also get the average of the 3 students in that file? I want to pass the filename from main to the readdata() function. How can I achieve that? Thank you for your time and consideration A: There are a couple of methods to check it. try except try: file = open(file_name) except FileNotFoundError: # do something when file not exist pathlib.Path object has exists method. os.path.isfile method To read data from file use: with open(file_name) as f: for line in f: # do something with current line # context manager closes file automatically or with open(file_name) as f: lines = f.readlines() for line in lines: # do something with current line The second way is worse than the first because: the first one uses a generator; the second one creates and uses a list (slower and uses more memory). But in your case you need to read 2 lines per iteration, so you can use the readline method 2 times per iteration and stop when readline returns an empty string (which it does at end of file). e.g. with open(file_name) as f: current_name = f.readline() current_performance = f.readline() while current_name: # do something current_name = f.readline() current_performance = f.readline() You can also extend the condition with and current_performance to be sure that you have both values.
How to read a text file and verify it exists in Python, in a while loop?
I'm trying to verify whether a file exists in the current directory and, at the same time, read the file if it exists. Below is my code import os.path def readdata(): isFileExist = False while isFileExist == False: userIN = str(input("Please Enter a file name, followed by .txt: ")) isFileExist = os.path.exists(userIN) if isFileExist == False: print(f"Checking... {userIN} DOES NOT exist in the current directory. Please retry.") else: print(f"Checking... {userIN} exist in the current directory.") print(f"\n The file < {userIN} > includes the following data:") IN = open(userIN,"r") std1N = IN.readline() std1M = IN.readline() std2N = IN.readline() std2M = IN.readline() std3N = IN.readline() std3M = IN.readline() IN.close() print(f" Student Name: {std1N.strip()}") print(f" Performance: {std1M.strip()} out of 100") print(f" Student Name: {std2N.strip()}") print(f" Performance: {std2M.strip()} out of 100") print(f" Student Name: {std3N.strip()}") print(f" Performance: {std3M.strip()} out of 100") print(f" The average exam score of the 3 students in the file < {userIN} > is {(eval(std1M.strip())+eval(std2M.strip())+eval(std3M.strip()))/3:.2f}.") def main(): readdata() if __name__ == '__main__': main() How can I use main to pass the file name to the function readdata(), then verify that it exists in the current directory, read the file, and also get the average of the 3 students in that file? I want to pass the filename from main to the readdata() function. How can I achieve that? Thank you for your time and consideration
[ "There are a couple of methods to check it.\n\ntry except\n\ntry:\n file = open(file_name)\nexcept FileNotFoundError:\n # do something when file not exist\n\n\npathlib.Path object has exists method.\nos.path.isfile method\n\nTo read data from file use:\nwith open(file_name) as f:\n for line in f:\n # do something with current line\n# context manager closes file automatically\n\nor\nwith open(file_name) as f:\n lines = f.readlines()\nfor line in lines:\n # do something with current line\n\nThe second way is worse than the first because:\n\nthe first one uses a generator\nthe second one creates and uses a list (slower and uses more memory)\n\nBut in your case you need to read 2 lines per iteration, so you can use the readline method 2 times per iteration and stop when readline returns an empty string (which it does at end of file).\ne.g.\nwith open(file_name) as f:\n current_name = f.readline()\n current_performance = f.readline()\n while current_name:\n # do something\n current_name = f.readline()\n current_performance = f.readline()\n\nYou can also extend the condition with and current_performance to be sure that you have both values.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074529629_python_python_3.x.txt
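A compact sketch that ties the pieces together: passing the file name in from main, checking existence with pathlib, and reading name/score pairs until end of file. It assumes the file layout from the question (alternating name and score lines):

from pathlib import Path

def readdata(filename):
    path = Path(filename)
    if not path.is_file():
        print(f"Checking... {filename} DOES NOT exist in the current directory.")
        return
    scores = []
    with path.open() as f:
        while True:
            name = f.readline().strip()
            mark = f.readline().strip()
            if not name:  # readline() returns '' at end of file
                break
            print(f" Student Name: {name}")
            print(f" Performance: {mark} out of 100")
            scores.append(float(mark))
    if scores:
        print(f" Average: {sum(scores) / len(scores):.2f}")

def main():
    filename = input("Please enter a file name, followed by .txt: ")
    readdata(filename)

if __name__ == '__main__':
    main()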
Q: Django UserCreationForm and Bootstrap Forms Layouts I am trying to extend the UserCreationForm using a Bootstrap layout style for the field username. After the input tag in the registration form, I would like to add a div element like an example that I have readapted from the Bootstrap page: i.e. suggesting that the user enter the same username as the company domain. Let's focus on the bare minimum. The form readapted from Bootstrap is: <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> <div class="col-auto"> <div class="input-group"> <input type="text" name="username" class="form-control" placeholder="your.username" id="id_username"> <div class="input-group-text">@company.domain.com</div> </div> </div> <div class="col-auto"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> Which produces the following: For the moment, I am using only {{form.as_p}} in my html template file: <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> {{form.as_p}} <div class="col-auto"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> And I don't know how to add the <div class="input-group-text">@company.domain.com</div> part embedded in a <div class="input-group">...</div> block. My actual forms.py is a bit more complex, but readapted for this minimum example it contains the widget attributes as follows: class SignupForm(UserCreationForm): username = forms.CharField(label="", widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'your.username'})) class Meta: model = User fields = ( 'username', ) Without additional libraries, is there a way to extend the widget attributes? Is it even possible to use {{form.as_p}} as I am currently doing or should I use another method? A: If you want to use only {{ form.as_p }} with bootstrap then you need to install django-bootstrap. Install it using pip: pip install django-bootstrap4 After installation, add it to INSTALLED_APPS in the settings.py file. INSTALLED_APPS = [ 'bootstrap4', ] And in templates, you need to load it. {% load bootstrap4 %} {% bootstrap_messages %} <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> {% csrf_token %} {% bootstrap_form form %} # added `form` here. we don't need to use it with as_p. <div class="form-group"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> This is the way you can use bootstrap with the whole form. OR Try another way: Render the bound field {{ form.username }} in place of the hand-written input tag; it expands to the whole <input> element (carrying the widget attrs from the form), so it must not be placed inside a value attribute. For Example: <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> <div class="col-auto"> <div class="input-group"> {{ form.username }} #Replaces the hand-written input tag <div class="input-group-text">@company.domain.com</div> </div> </div> <div class="col-auto"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form>
Django UserCreationForm and Bootstrap Forms Layouts
I am trying to extend the UserCreationForm using a Bootstrap layout style for the field username. After the input tag in the registration form, I would like to add a div element like an example that I have readapted from the Bootstrap page: i.e. suggesting that the user enter the same username as the company domain. Let's focus on the bare minimum. The form readapted from Bootstrap is: <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> <div class="col-auto"> <div class="input-group"> <input type="text" name="username" class="form-control" placeholder="your.username" id="id_username"> <div class="input-group-text">@company.domain.com</div> </div> </div> <div class="col-auto"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> Which produces the following: For the moment, I am using only {{form.as_p}} in my html template file: <form class="row gy-2 gx-3 align-items-center" method="POST" action="{% url 'register' %}"> {{form.as_p}} <div class="col-auto"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> And I don't know how to add the <div class="input-group-text">@company.domain.com</div> part embedded in a <div class="input-group">...</div> block. My actual forms.py is a bit more complex, but readapted for this minimum example it contains the widget attributes as follows: class SignupForm(UserCreationForm): username = forms.CharField(label="", widget=forms.TextInput(attrs={'class': 'form-control', 'placeholder': 'your.username'})) class Meta: model = User fields = ( 'username', ) Without additional libraries, is there a way to extend the widget attributes? Is it even possible to use {{form.as_p}} as I am currently doing or should I use another method?
[ "If you want to use only {{ form.as_p }} with bootstrap then you need to install django-bootstrap.\nInstall it using pip:\npip install django-bootstrap4\n\nAfter installation, add it to INSTALLED_APPS in the settings.py file.\nINSTALLED_APPS = [\n 'bootstrap4',\n]\n\nAnd in templates, you need to load it.\n{% load bootstrap4 %}\n{% bootstrap_messages %}\n<form class=\"row gy-2 gx-3 align-items-center\" method=\"POST\" action=\"{% url 'register' %}\">\n\n {% csrf_token %}\n \n {% bootstrap_form form %} # added `form` here. we don't need to use it with as_p.\n\n <div class=\"form-group\">\n <button type=\"submit\" class=\"btn btn-primary\">Submit</button>\n </div>\n</form>\n\nThis is the way you can use bootstrap with the whole form.\nOR\nTry another way:\nRender the bound field {{ form.username }} in place of the hand-written input tag; it expands to the whole <input> element (carrying the widget attrs from the form), so it must not be placed inside a value attribute.\nFor Example:\n<form class=\"row gy-2 gx-3 align-items-center\" method=\"POST\" action=\"{% url 'register' %}\">\n <div class=\"col-auto\">\n <div class=\"input-group\">\n {{ form.username }} #Replaces the hand-written input tag\n <div class=\"input-group-text\">@company.domain.com</div>\n </div>\n </div>\n <div class=\"col-auto\">\n <button type=\"submit\" class=\"btn btn-primary\">Submit</button>\n </div>\n</form>\n\n" ]
[ 1 ]
[]
[]
[ "bootstrap_5", "django", "python" ]
stackoverflow_0074529800_bootstrap_5_django_python.txt
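The question also asks whether the widget attributes can be extended without extra libraries; a hedged sketch of doing that in the form's __init__, which avoids repeating the attrs on every field declaration:

class SignupForm(UserCreationForm):
    class Meta:
        model = User
        fields = ('username',)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # extend the widget attrs of every visible field in one place
        for field in self.fields.values():
            field.widget.attrs.update({'class': 'form-control'})
        self.fields['username'].widget.attrs['placeholder'] = 'your.username'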
Q: Libtorrent. Answer some questions To begin with, English is not my native language, so it's hard for me to read the libtorrent documentation, and all of this question has been translated. If you know the answer to any of these questions, feel free to answer just that one. I am using libtorrent 2.0.7 and Python 3.8. It is not necessary to answer in Python; I will try to figure it out even if you answer in C++. 1. Before the torrent is loaded, how do I get the list of all the files that will be downloaded? 2. Once the torrent is loaded, how do I get the paths of the files that were downloaded? (I found a similar question, but its answer stopped working because it is deprecated.) I'm trying to use handle.get_torrent_info() to answer point 1, but it returns DeprecationWarning: get_torrent_info() is deprecated. I tried to look in the source file, but it doesn't say what to use instead of this function. Do you know? 3. I would like to set a download speed limit for the entire session. To do this, I found session.download_rate_limit() in its parameters, but when using it, it returns DeprecationWarning: download_rate_limit() is deprecated. I also tried to look in the documentation, but I didn't find it, and I couldn't figure out what parameters it accepts; I tried an int, but it returned an error. As with point 2, it is not written what to use instead of the outdated function. Do you know? 4. I would like the session to download only 1 torrent at a time, with the rest queued in the order they are resumed from the paused state. I do not know how to do this at all. Please help. A: I found the answer to the 1st and 2nd question: test = handle.status() for i in range(test.torrent_file.files().num_files()): print(test.torrent_file.files().file_path(i))
Libtorrent. Answer some questions
To begin with, English is not my native language, so it's hard for me to read the libtorrent documentation, and all of this question has been translated. If you know the answer to any of these questions, feel free to answer just that one. I am using libtorrent 2.0.7 and Python 3.8. It is not necessary to answer in Python; I will try to figure it out even if you answer in C++. 1. Before the torrent is loaded, how do I get the list of all the files that will be downloaded? 2. Once the torrent is loaded, how do I get the paths of the files that were downloaded? (I found a similar question, but its answer stopped working because it is deprecated.) I'm trying to use handle.get_torrent_info() to answer point 1, but it returns DeprecationWarning: get_torrent_info() is deprecated. I tried to look in the source file, but it doesn't say what to use instead of this function. Do you know? 3. I would like to set a download speed limit for the entire session. To do this, I found session.download_rate_limit() in its parameters, but when using it, it returns DeprecationWarning: download_rate_limit() is deprecated. I also tried to look in the documentation, but I didn't find it, and I couldn't figure out what parameters it accepts; I tried an int, but it returned an error. As with point 2, it is not written what to use instead of the outdated function. Do you know? 4. I would like the session to download only 1 torrent at a time, with the rest queued in the order they are resumed from the paused state. I do not know how to do this at all. Please help.
[ "I found the answer to the 1st and 2nd question:\ntest = handle.status()\nfor i in range(test.torrent_file.files().num_files()):\n print(test.torrent_file.files().file_path(i))\n\n" ]
[ 0 ]
[]
[]
[ "libtorrent", "python" ]
stackoverflow_0074529732_libtorrent_python.txt
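For questions 3 and 4, a hedged, untested sketch using settings_pack keys, which replaced the deprecated per-session setters; the key names below (download_rate_limit, active_downloads) are taken from the libtorrent settings reference, so double-check them against your 2.0.7 build:

import libtorrent as lt

ses = lt.session()
ses.apply_settings({
    'download_rate_limit': 500 * 1024,  # session-wide limit in bytes per second
    'active_downloads': 1,              # keep one torrent downloading; queue the rest
})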
Q: Can't install the streamlit-webrtc package When I try to install this package using this command (pip install -U streamlit-webrtc), I get an error that I don't understand. Please let me know how to resolve this issue. A: Go to https://visualstudio.microsoft.com/visual-cpp-build-tools/ and install the Microsoft C++ Build Tools (just check-mark the workload). After the installation of around 1.7 GB completes, run the "pip install streamlit-webrtc" command again; the installation will then complete.
Can't install the streamlit-webrtc package
When I try to install this package using this command (pip install -U streamlit-webrtc), I get an error that I don't understand. Please let me know how to resolve this issue.
[ "Go to https://visualstudio.microsoft.com/visual-cpp-build-tools/ and install the Microsoft C++ Build Tools (just check-mark the workload). After the installation of around 1.7 GB completes, run the \"pip install streamlit-webrtc\" command again; the installation will then complete.\n" ]
[ 0 ]
[]
[]
[ "python", "streamlit", "webrtc" ]
stackoverflow_0073126521_python_streamlit_webrtc.txt
Q: Split one excel file into multiple with specific number of rows in Pandas Let's say I have an excel file with 101 rows, I need to split and write into 11 excel files with equivalent row number 10 for each new file, except the last one since there is only one row left. This is code I have tried, but I get KeyError: 11: df = pd.DataFrame(data=np.random.rand(101, 3), columns=list('ABC')) groups = df.groupby(int(len(df.index)/10) + 1) for i, g in groups: g.to_excel("%s.xlsx" % i, index = False, index_lable = False) Someone could help with this issue? Thanks a lot. Reference related: Split pandas dataframe into multiple dataframes with equal numbers of rows A: I think you need np.arange: df = pd.DataFrame(data=np.random.rand(101, 3), columns=list('ABC')) groups = df.groupby(np.arange(len(df.index))//10) for i, g in groups: print(g) A: I solved a similar problem as follows. Backstory to my issue was that I have created an Azure Function with an HTTP trigger, but was overwhelming the endpoint when iterating through 2k rows of requests. So chunked up the origin file into rows of 50: import pandas as pd import logging INXL = pd.read_excel('split/031022.xlsx', engine="openpyxl") row_count = (len(INXL.index)) #make sure we are dealing with a table bigger than 50 if row_count >= 51: row_start = (row_count -50) else: row_start = 1 def extract(rs, rc): while rc >= 51: #loop body # set the extraction to be between the row start and ending index row_extract = INXL.iloc[rs:rc] with pd.ExcelWriter(f'output_{rc}.xlsx') as writer: row_extract.to_excel(writer,index=False) rc -= 50 rs -= 50 extract(row_start, row_count) if row_count < 51: row_extract = INXL.iloc[row_start:row_count] with pd.ExcelWriter(f'output_{row_count}.xlsx') as writer: row_extract.to_excel(writer,index=False) logging.info("extract completed")
Split one excel file into multiple with specific number of rows in Pandas
Let's say I have an excel file with 101 rows, I need to split and write into 11 excel files with equivalent row number 10 for each new file, except the last one since there is only one row left. This is code I have tried, but I get KeyError: 11: df = pd.DataFrame(data=np.random.rand(101, 3), columns=list('ABC')) groups = df.groupby(int(len(df.index)/10) + 1) for i, g in groups: g.to_excel("%s.xlsx" % i, index = False, index_lable = False) Someone could help with this issue? Thanks a lot. Reference related: Split pandas dataframe into multiple dataframes with equal numbers of rows
[ "I think you need np.arange:\ndf = pd.DataFrame(data=np.random.rand(101, 3), columns=list('ABC'))\ngroups = df.groupby(np.arange(len(df.index))//10)\nfor i, g in groups:\n print(g)\n\n", "I solved a similar problem as follows. Backstory to my issue was that I have created an Azure Function with an HTTP trigger, but was overwhelming the endpoint when iterating through 2k rows of requests. So chunked up the origin file into rows of 50:\nimport pandas as pd\nimport logging\n\nINXL = pd.read_excel('split/031022.xlsx', engine=\"openpyxl\")\n\n\nrow_count = (len(INXL.index))\n#make sure we are dealing with a table bigger than 50 \nif row_count >= 51:\n row_start = (row_count -50)\nelse:\n row_start = 1\n\n\ndef extract(rs, rc):\n while rc >= 51: #loop body\n # set the extraction to be between the row start and ending index\n row_extract = INXL.iloc[rs:rc]\n with pd.ExcelWriter(f'output_{rc}.xlsx') as writer: \n row_extract.to_excel(writer,index=False)\n rc -= 50\n rs -= 50\n \n\nextract(row_start, row_count)\nif row_count < 51:\n row_extract = INXL.iloc[row_start:row_count]\n with pd.ExcelWriter(f'output_{row_count}.xlsx') as writer: \n row_extract.to_excel(writer,index=False) \n logging.info(\"extract completed\") \n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "pandas_groupby", "python", "python_3.x" ]
stackoverflow_0060000054_dataframe_pandas_pandas_groupby_python_python_3.x.txt
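Combining the accepted idea with actual file output, a sketch that writes one Excel file per chunk of 10 rows (the chunk size assumed from the question):

import numpy as np
import pandas as pd

df = pd.DataFrame(data=np.random.rand(101, 3), columns=list('ABC'))
for i, g in df.groupby(np.arange(len(df.index)) // 10):
    # produces 0.xlsx .. 10.xlsx; the last file holds the single leftover row
    g.to_excel(f"{i}.xlsx", index=False)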
Q: Unexpected increase of memory usage on datetime index when using pandas.to_numeric with apply() I followed the example from the docs to downcast datatypes to decrease memory usage: https://pandas.pydata.org/docs/user_guide/scale.html#use-efficient-datatypes I tried to downcast two columns of a dataframe with a datetime index from float64 to float32 with pd.to_numeric. Given the following dataframe: import pandas as pd # version: 1.3.5 import numpy as np # version: 1.21.5 df = pd.DataFrame(np.random.uniform(50, 100, size=(200, 2)), columns=['x','y'], index=pd.date_range("2022-01-01", periods=200, freq="H")) Data types and memory usage: print(df.dtypes) x float64 y float64 dtype: object print(df.index.dtype) datetime64[ns] print(df.memory_usage(deep=True)) Index 1600 x 1600 y 1600 dtype: int64 If I downcast the columns x and y like this, it works as expected: df['x'] = pd.to_numeric(df['x'], downcast='float') df['y'] = pd.to_numeric(df['y'], downcast='float') Data types / memory usage: print(df.dtypes) x float32 y float32 dtype: object print(df.memory_usage(deep=True)) Index 1600 x 800 y 800 dtype: int64 If I use the apply() method to downcast the two columns (also used in the doc example), it also works: df[['x','y']] = df[['x','y']].apply(pd.to_numeric, downcast='float') Data types: print(df.dtypes) x float32 y float32 dtype: object But look at the memory usage of the datetime index. It's over 6 times larger: print(df.memory_usage(deep=True)) Index 9896 x 800 y 800 dtype: int64 Why does it behave like this? Did I miss something? A: It seems to be a bug in older pandas versions (btw, same on Windows and Ubuntu). I just installed pandas 1.5.1 and it works as expected. Unfortunately, I can't update the pandas version in my project yet, so I won't use the apply() method until I'm ready to use a newer version. Anyway, thanks to juanpa.arrivillaga for looking at this.
Unexpected increase of memory usage on datetime index when using pandas.to_numeric with apply()
I followed the example from the docs to downcast datatypes to decrease memory usage: https://pandas.pydata.org/docs/user_guide/scale.html#use-efficient-datatypes I tried to downcast two columns of a dataframe with a datetime index from float64 to float32 with pd.to_numeric. Given the following dataframe: import pandas as pd # version: 1.3.5 import numpy as np # version: 1.21.5 df = pd.DataFrame(np.random.uniform(50, 100, size=(200, 2)), columns=['x','y'], index=pd.date_range("2022-01-01", periods=200, freq="H")) Data types and memory usage: print(df.dtypes) x float64 y float64 dtype: object print(df.index.dtype) datetime64[ns] print(df.memory_usage(deep=True)) Index 1600 x 1600 y 1600 dtype: int64 If I downcast the columns x and y like this, it works as expected: df['x'] = pd.to_numeric(df['x'], downcast='float') df['y'] = pd.to_numeric(df['y'], downcast='float') Data types / memory usage: print(df.dtypes) x float32 y float32 dtype: object print(df.memory_usage(deep=True)) Index 1600 x 800 y 800 dtype: int64 If I use the apply() method to downcast the two columns (also used in the doc example), it also works: df[['x','y']] = df[['x','y']].apply(pd.to_numeric, downcast='float') Data types: print(df.dtypes) x float32 y float32 dtype: object But look at the memory usage of the datetime index. It's over 6 times larger: print(df.memory_usage(deep=True)) Index 9896 x 800 y 800 dtype: int64 Why does it behave like this? Did I miss something?
[ "It seems to be a bug in older pandas versions (btw, same on Windows and Ubuntu). I just installed pandas 1.5.1 and it works as expected. Unfortunately, I can't update the pandas version in my project yet, so I won't use the apply() method until I'm ready to use a newer version.\nAnyway, thanks to juanpa.arrivillaga for looking at this.\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074525557_pandas_python.txt
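For anyone stuck on an affected pandas version, a sketch of a workaround that avoids apply entirely and leaves the DatetimeIndex untouched:

# downcast both columns in one call; the index is neither copied nor widened
df = df.astype({'x': 'float32', 'y': 'float32'})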
Q: I was trying to see the seasonal and trend factors in the time series, but the graph is not working correctly. Tried after deleting cache memory too Hi, I was trying to do a seasonal decomposition for a time series, but wasn't getting a proper result: date value 2020-02-01 67.05 2020-03-01 69.08 2020-06-01 70.25 2020-07-01 68.74 2020-08-01 67.31 . . . till 2022-11-04 Code: from statsmodels.tsa.seasonal import seasonal_decompose df_add_decompose = seasonal_decompose(df_modified, model = 'additive', period=12) df_add_decompose.plot() A: you have to sort the index: from statsmodels.tsa.seasonal import seasonal_decompose df_modified = df_modified.sort_index() df_add_decompose = seasonal_decompose(df_modified, model = 'additive', period=12) df_add_decompose.plot()
I was trying to see the seasonal and trend factors in the time series, but the graph is not working correctly. Tried after deleting cache memory too
Hi, I was trying to do a seasonal decomposition for a time series, but wasn't getting a proper result: date value 2020-02-01 67.05 2020-03-01 69.08 2020-06-01 70.25 2020-07-01 68.74 2020-08-01 67.31 . . . till 2022-11-04 Code: from statsmodels.tsa.seasonal import seasonal_decompose df_add_decompose = seasonal_decompose(df_modified, model = 'additive', period=12) df_add_decompose.plot()
[ "you have to sort the index:\nfrom statsmodels.tsa.seasonal import seasonal_decompose\ndf_modified = df_modified.sort_index()\ndf_add_decompose = seasonal_decompose(df_modified, model = 'additive', period=12)\ndf_add_decompose.plot()\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "decomposition", "python", "time_series", "timeserieschart" ]
stackoverflow_0074529646_dataframe_decomposition_python_time_series_timeserieschart.txt
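seasonal_decompose also behaves better when the DatetimeIndex carries an explicit frequency; a sketch combining the sort with a month-start frequency (assumed from the sample dates), filling the gaps that asfreq introduces:

from statsmodels.tsa.seasonal import seasonal_decompose

# sort, force a monthly grid, and interpolate any months that were missing
df_modified = df_modified.sort_index().asfreq('MS').interpolate()
result = seasonal_decompose(df_modified, model='additive', period=12)
result.plot()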
Q: How to prevent double quotes at start and end of DAT file, while writing a pandas DF using to_csv()? My pandas DF contains a huge amount of data, and I am giving the file name a '.DAT' extension (client requirement) and using to_csv() to write the data. When I open the file in Notepad or any other text viewer, I see double quotes at the start and end of the file: " col1|Col2|Col3 D1|D2|D3 ... So On D1n|D2n|D3n " How can I remove these double quotes while writing the dataframe as a CSV file? I tried the quote and quoting parameters of to_csv and the replace function. Please suggest any parameter combination to eliminate this. A: To write to a CSV you would do it like this normally. Generally, this should not give you quotes in the file by default. import pandas as pd df = pd.read_csv('path\to\source_folder\input.dat') df.to_csv('path\to\folder\s.dat') Can we see a sample of the Code? A: Try this df.to_csv("data.dat",header=None, index=None, sep='|', escapechar='')
How to prevent double quotes at start and end of DAT file, while writing a pandas DF using to_csv()?
My pandas DF contains a huge amount of data, and I am giving the file name a '.DAT' extension (client requirement) and using to_csv() to write the data. When I open the file in Notepad or any other text viewer, I see double quotes at the start and end of the file: " col1|Col2|Col3 D1|D2|D3 ... So On D1n|D2n|D3n " How can I remove these double quotes while writing the dataframe as a CSV file? I tried the quote and quoting parameters of to_csv and the replace function. Please suggest any parameter combination to eliminate this.
[ "To write to a CSV you would do it like this normally.\nGenerally, this should not give you quotes in the file by default.\nimport pandas as pd\n\ndf = pd.read_csv('path\\to\\source_folder\\input.dat')\n\ndf.to_csv('path\\to\\folder\\s.dat')\n\nCan we see a sample of the Code?\n", "Try this\ndf.to_csv(\"data.dat\",header=None, index=None, sep='|', escapechar='')\n\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark_sql", "dataframe", "pandas", "pyspark", "python" ]
stackoverflow_0074326096_apache_spark_sql_dataframe_pandas_pyspark_python.txt
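A hedged sketch of the quoting control the question was looking for: csv.QUOTE_NONE tells to_csv never to wrap fields in quotes. It assumes the data itself contains no '|' separator characters (otherwise pandas raises unless an escapechar is given):

import csv
import pandas as pd

df.to_csv('data.DAT', sep='|', index=False, quoting=csv.QUOTE_NONE)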
Q: Python py7zr can't list files in archive - how to read 7z archive without extracting it I tried to list all the files inside a 7z archive (I don't want to extract them). I followed the documentation of the creators of py7zr. My code looks like this: def checkArchive(archivePath): for filename in os.listdir(archivePath): print("Filename is: " + filename) cmd = "py7zr l " + filename os.system(cmd) I also tried cmd = python -m "py7zr l " + filename as a cmd command. But no matter what command I use, the program always returns an error: not a 7z file. I made sure, and I know that all the files on which the command operates have the extension 7z. How can I get py7zr to start recognizing the file type? Or is there any other way to list the 7z archive? A: I think you can use this to list the files in a 7z archive. import py7zr with py7zr.SevenZipFile(r'<PATH TO 7Z FILE>.7z', 'r') as archive: all_paths = archive.getnames()
Python py7zr can't list files in archive - how to read 7z archive without extracting it
I tried to list all the files inside a 7z archive (I don't want to extract them). I followed the documentation of the creators of py7zr. My code looks like this: def checkArchive(archivePath): for filename in os.listdir(archivePath): print("Filename is: " + filename) cmd = "py7zr l " + filename os.system(cmd) I also tried cmd = python -m "py7zr l " + filename as a cmd command. But no matter what command I use, the program always returns an error: not a 7z file. I made sure, and I know that all the files on which the command operates have the extension 7z. How can I get py7zr to start recognizing the file type? Or is there any other way to list the 7z archive?
[ "I think you can use this to list the files in a 7z archive.\nimport py7zr\n\nwith py7zr.SevenZipFile(r'<PATH TO 7Z FILE>.7z', 'r') as archive:\n all_paths = archive.getnames()\n\n" ]
[ 0 ]
[]
[]
[ "7zip", "py7zr", "python", "python_3.x" ]
stackoverflow_0072306786_7zip_py7zr_python_python_3.x.txt
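A sketch of the original loop (renamed check_archive here) rewritten with the py7zr API instead of shelling out. Note that os.listdir returns bare file names, so the directory path has to be joined back on, which is worth ruling out as a cause of the "not a 7z file" error:

import os
import py7zr

def check_archive(archive_dir):
    for filename in os.listdir(archive_dir):
        if not filename.endswith('.7z'):
            continue
        full_path = os.path.join(archive_dir, filename)
        with py7zr.SevenZipFile(full_path, 'r') as archive:
            print(filename, '->', archive.getnames())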
Q: conditionally_trigger for TriggerDagRunOperator I have 2 DAGs: dag_a and dag_b (dag_a -> dag_b) After dag_a is executed, TriggerDagRunOperator is called, which starts dag_b. The problem is, when dag_b is off (paused), dag_a's TriggerDagRunOperator creates scheduled runs in dag_b that queue up for as long as dag_a is running. After turning dag_b back ON, the execution of tasks from the queue begins. I'm trying to find a solution for TriggerDagRunOperator, namely a conditionally_trigger function that would skip the execution of the TriggerDagRunOperator task if dag_b is paused (OFF). How can i do this? A: You can use ShortCircuitOperator to execute/skip the downstream dag_b. Then, use the Airflow Rest API (or shell/CLI) to figure out whether dag_b is paused or not. dag_a = TriggerDagRunOperator( trigger_dag_id='dag_a', ... ) pause_check = ShortCircuitOperator( task_id='pause_check', python_callable=is_dag_paused, op_kwargs={ 'dag_id': 'dag_b' } ) dag_b = TriggerDagRunOperator( trigger_dag_id='dag_b', ... ) dag_a >> pause_check >> dag_b and is_dag_paused function can be like this. (here I use Rest API.) def is_dag_paused(**kwargs): import requests from requests.auth import HTTPBasicAuth dag_id = kwargs['dag_id'] res = requests.get(f'http://{airflow_host}/api/v1/dags/{dag_id}/details', auth=HTTPBasicAuth('username', 'pasword')) # The auth method could be different for you. if res.status_code == 200: rjson = res.json() # if you return True, the downstream tasks will be executed # if False, it will be skipped return not rjson['is_paused'] else: print('Error: ', res) exit(1) A: import airflow.settings from airflow.models import DagModel def check_status_dag(*op_args): session = airflow.settings.Session() qry = session.query(DagModel).filter(DagModel.dag_id == op_args[0]) if not qry.value(DagModel.is_paused): return op_args[1] else: return op_args[2] Where check_status_dag is the method of making a choice decision for executing a further branch, op_args[0] is the dag_id of the dag being checked for pause status, op_args[1] and op_args[2] are the names of the tasks in accordance with the logic of the BranchPythonOperator start = DummyOperator( task_id = 'start', dag=dag ) check_dag_B = BranchPythonOperator( task_id = "check_dag_B", python_callable = check_status_dag, op_args = ['dag_B','trigger_dag_B','skip_trigger_dag_B'], trigger_rule = 'all_done', dag = dag ) trigger_dag_B = TriggerDagRunOperator( task_id = 'trigger_dag_B', trigger_dag_id = 'dag_B', dag = dag ) skip_trigger_dag_B = DummyOperator( task_id = 'skip_trigger_dag_B', dag = dag ) finish = DummyOperator( task_id = 'finish', trigger_rule = 'all_done', dag=dag ) start >> check_dag_B >> [trigger_dag_B, skip_trigger_dag_B] >> finish#or continue working
conditionally_trigger for TriggerDagRunOperator
I have 2 DAGs: dag_a and dag_b (dag_a -> dag_b) After dag_a is executed, TriggerDagRunOperator is called, which starts dag_b. The problem is, when dag_b is off (paused), dag_a's TriggerDagRunOperator creates scheduled runs in dag_b that queue up for as long as dag_a is running. After turning dag_b back ON, the execution of tasks from the queue begins. I'm trying to find a solution for TriggerDagRunOperator, namely a conditionally_trigger function that would skip the execution of the TriggerDagRunOperator task if dag_b is paused (OFF). How can i do this?
[ "You can use ShortCircuitOperator to execute/skip the downstream dag_b. Then, use the Airflow Rest API (or shell/CLI) to figure out whether dag_b is paused or not.\ndag_a = TriggerDagRunOperator(\n trigger_dag_id='dag_a',\n ...\n)\n\npause_check = ShortCircuitOperator(\n task_id='pause_check',\n python_callable=is_dag_paused,\n op_kwargs={\n 'dag_id': 'dag_b'\n }\n)\n\ndag_b = TriggerDagRunOperator(\n trigger_dag_id='dag_b',\n ...\n)\n\ndag_a >> pause_check >> dag_b\n\nand is_dag_paused function can be like this. (here I use Rest API.)\ndef is_dag_paused(**kwargs):\n import requests\n from requests.auth import HTTPBasicAuth\n \n dag_id = kwargs['dag_id']\n res = requests.get(f'http://{airflow_host}/api/v1/dags/{dag_id}/details',\n auth=HTTPBasicAuth('username', 'pasword')) # The auth method could be different for you. \n\n if res.status_code == 200:\n rjson = res.json()\n # if you return True, the downstream tasks will be executed\n # if False, it will be skipped\n return not rjson['is_paused']\n else:\n print('Error: ', res)\n exit(1)\n\n", "import airflow.settings\nfrom airflow.models import DagModel\ndef check_status_dag(*op_args):\n session = airflow.settings.Session()\n qry = session.query(DagModel).filter(DagModel.dag_id == op_args[0])\n if not qry.value(DagModel.is_paused):\n return op_args[1]\n else: return op_args[2]\n\nWhere check_status_dag is the method of making a choice decision for executing a further branch, op_args[0] is the dag_id of the dag being checked for pause status, op_args[1] and op_args[2] are the names of the tasks in accordance with the logic of the BranchPythonOperator\nstart = DummyOperator(\n task_id = 'start',\n dag=dag\n )\n\ncheck_dag_B = BranchPythonOperator(\n task_id = \"check_dag_B\",\n python_callable = check_status_dag,\n op_args = ['dag_B','trigger_dag_B','skip_trigger_dag_B'],\n trigger_rule = 'all_done',\n dag = dag\n)\n\ntrigger_dag_B = TriggerDagRunOperator(\n task_id = 'trigger_dag_B',\n trigger_dag_id = 'dag_B',\n dag = dag\n)\n\nskip_trigger_dag_B = DummyOperator(\n task_id = 'skip_trigger_dag_B',\n dag = dag\n)\n\nfinish = DummyOperator(\n task_id = 'finish',\n trigger_rule = 'all_done',\n dag=dag\n)\n\nstart >> check_dag_B >> [trigger_dag_B, skip_trigger_dag_B] >> finish#or continue working\n\n" ]
[ 1, 0 ]
[]
[]
[ "airflow", "python" ]
stackoverflow_0074492876_airflow_python.txt
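A sketch combining the two answers above — checking DagModel.is_paused from inside a ShortCircuitOperator, so no REST credentials are needed. Import paths assume Airflow 2.x; provide_session injects a metadata-database session:

from airflow.models import DagModel
from airflow.operators.python import ShortCircuitOperator
from airflow.utils.session import provide_session

@provide_session
def dag_is_unpaused(dag_id, session=None):
    dag = session.query(DagModel).filter(DagModel.dag_id == dag_id).first()
    # returning False short-circuits (skips) every downstream task,
    # including the TriggerDagRunOperator
    return dag is not None and not dag.is_paused

pause_check = ShortCircuitOperator(
    task_id='pause_check',
    python_callable=dag_is_unpaused,
    op_kwargs={'dag_id': 'dag_b'},
)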
Q: Direction of the rotation not the same as the angle The helicopter should fly according to angle 1. when a key is pressed, it should fly according to angle 2. It is working. With angle 1 = 0 the helicopter flies parallel to the x axis. The helicopter image also shows this. With angle 2 = 45 it goes diagonally down. But the picture shows diagonally upwards. How can I reconcile angle and image rotation? import pygame, sys import random import math pygame.init() clock = pygame.time.Clock() screen = pygame.display.set_mode((1000,600)) class Helikopter(pygame.sprite.Sprite): def __init__(self,x,y,speed,angle1,angle2): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("Bilder/heli1.png").convert_alpha() self.img = pygame.transform.scale(self.image,(160,60)) self.rect = self.img.get_rect() self.rect.center=(x,y) self.rot1 = angle1 self.rot2 = angle2 self.angle1 = math.radians(angle1) self.angle2 = math.radians(angle2) self.rot = self.rot1 self.angle = self.angle1 self.speed = speed self.absturz = False def update(self): if self.rect.x > 1000 or self.rect.y > 600: self.rect.x = - 20 self.rect.y = random.randrange(100,300) self.absturz = False self.rot = self.rot1 self.angle = self.angle1 if self.absturz == True: self.angle = self.angle2 self.rot = self.rot2 - 45 else: self.absturz = False self.rect.center=calculate_new_xy(self.rect.center,self.speed,self.angle) self.image = pygame.transform.rotate(self.img, self.rot) def calculate_new_xy(old_xy,speed,angle_in_radians): new_x = old_xy[0] + (speed*math.cos(angle_in_radians)) new_y = old_xy[1] + (speed*math.sin(angle_in_radians)) return new_x, new_y heli = Helikopter(300,100,3,30,60) alle_sprites = pygame.sprite.Group() heli_sprites = pygame.sprite.Group() heli = Helikopter(300,100,3,0,90) heli_sprites.add(heli) alle_sprites.add(heli) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: heli.absturz = True screen.fill((250,250,250)) alle_sprites.update() alle_sprites.draw(screen) pygame.display.flip() clock.tick(60) A: In the Pygame coordinate system the y-axis points down the screen, but the mathematical y axis points form the bottom to the top. To compansate that you have to invert the angle of rotation when you call pygame.transform.rotate: self.image = pygame.transform.rotate(self.img, self.rot) self.image = pygame.transform.rotate(self.img, -self.rot) Also see How do I rotate an image around its center using PyGame?
Direction of the rotation not the same as the angle
The helicopter should fly according to angle 1. when a key is pressed, it should fly according to angle 2. It is working. With angle 1 = 0 the helicopter flies parallel to the x axis. The helicopter image also shows this. With angle 2 = 45 it goes diagonally down. But the picture shows diagonally upwards. How can I reconcile angle and image rotation? import pygame, sys import random import math pygame.init() clock = pygame.time.Clock() screen = pygame.display.set_mode((1000,600)) class Helikopter(pygame.sprite.Sprite): def __init__(self,x,y,speed,angle1,angle2): pygame.sprite.Sprite.__init__(self) self.image = pygame.image.load("Bilder/heli1.png").convert_alpha() self.img = pygame.transform.scale(self.image,(160,60)) self.rect = self.img.get_rect() self.rect.center=(x,y) self.rot1 = angle1 self.rot2 = angle2 self.angle1 = math.radians(angle1) self.angle2 = math.radians(angle2) self.rot = self.rot1 self.angle = self.angle1 self.speed = speed self.absturz = False def update(self): if self.rect.x > 1000 or self.rect.y > 600: self.rect.x = - 20 self.rect.y = random.randrange(100,300) self.absturz = False self.rot = self.rot1 self.angle = self.angle1 if self.absturz == True: self.angle = self.angle2 self.rot = self.rot2 - 45 else: self.absturz = False self.rect.center=calculate_new_xy(self.rect.center,self.speed,self.angle) self.image = pygame.transform.rotate(self.img, self.rot) def calculate_new_xy(old_xy,speed,angle_in_radians): new_x = old_xy[0] + (speed*math.cos(angle_in_radians)) new_y = old_xy[1] + (speed*math.sin(angle_in_radians)) return new_x, new_y heli = Helikopter(300,100,3,30,60) alle_sprites = pygame.sprite.Group() heli_sprites = pygame.sprite.Group() heli = Helikopter(300,100,3,0,90) heli_sprites.add(heli) alle_sprites.add(heli) while True: for event in pygame.event.get(): if event.type == pygame.QUIT: pygame.quit() sys.exit() if event.type == pygame.KEYDOWN: heli.absturz = True screen.fill((250,250,250)) alle_sprites.update() alle_sprites.draw(screen) pygame.display.flip() clock.tick(60)
[ "In the Pygame coordinate system the y-axis points down the screen, but the mathematical y axis points form the bottom to the top. To compansate that you have to invert the angle of rotation when you call pygame.transform.rotate:\nself.image = pygame.transform.rotate(self.img, self.rot)\nself.image = pygame.transform.rotate(self.img, -self.rot) \n\nAlso see How do I rotate an image around its center using PyGame?\n" ]
[ 3 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074529739_pygame_python.txt
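A minimal sketch isolating the sign convention from the answer: keep one angle in degrees, convert it to radians for the movement, and negate it only when rotating the image (function names are illustrative):

import math
import pygame

def step_position(center, speed, angle_deg):
    # screen y grows downward, so a positive angle moves down the screen
    rad = math.radians(angle_deg)
    return center[0] + speed * math.cos(rad), center[1] + speed * math.sin(rad)

def rotate_sprite(base_image, angle_deg):
    # negate the angle so the sprite visually matches the motion vector
    return pygame.transform.rotate(base_image, -angle_deg)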
Q: Python: How to connect to Bluestacks or another emulator by ppadb? I'm trying to simulate simple gestures like tap or swipe in BlueStacks emulator by using Python and PPADB. The problem is when I'm trying to connect. Client(host="127.0.0.1", port=5037) There is no devices. Emulator have address: But when I try to connect to it by PPADB, then nothing happened and terminal stops work. Here is the same ask. I found working application where someone solved this problem but i dont understand what he exactly did. Can someone check it and write simple code in one file? Here is a link to this app and code. A: BlueStacks uses port 5037 for ADB. This means that adb = Client(host='127.0.0.1', port=5555) should instead be adb = Client(host='127.0.0.1', port=5037)
Python: How to connect to Bluestacks or another emulator by ppadb?
I'm trying to simulate simple gestures like tap or swipe in the BlueStacks emulator by using Python and PPADB. The problem is when I'm trying to connect. Client(host="127.0.0.1", port=5037) There are no devices. The emulator has this address: But when I try to connect to it with PPADB, nothing happens and the terminal stops working. Here is the same question. I found a working application where someone solved this problem, but I don't understand what exactly he did. Can someone check it and write simple code in one file? Here is a link to this app and code.
[ "BlueStacks uses port 5037 for ADB. This means that\nadb = Client(host='127.0.0.1', port=5555)\nshould instead be\nadb = Client(host='127.0.0.1', port=5037)\n" ]
[ 0 ]
[]
[]
[ "adb", "android_emulator", "bluestacks", "python" ]
stackoverflow_0074530092_adb_android_emulator_bluestacks_python.txt
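For completeness, a hedged sketch of the full ppadb flow: 5037 is the adb server's port, while the BlueStacks instance itself usually listens on 5555 and may need an explicit remote_connect first (the port and tap coordinates are assumptions to adapt):

from ppadb.client import Client

adb = Client(host="127.0.0.1", port=5037)        # talk to the adb server
adb.remote_connect("127.0.0.1", 5555)            # equivalent of `adb connect`
devices = adb.devices()
if devices:
    device = devices[0]
    device.shell("input tap 500 500")                # simulate a tap
    device.shell("input swipe 300 800 300 200 300")  # simulate a swipe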
Q: Get info from str-list My info-resource (binance-api) returns info as string list. Can you help me and explain how can I take variable 'initialLeverage': Code def long(): lever = client.futures_leverage_bracket() lever = pd.DataFrame(lever) print(lever) #vol() Terminal symbol brackets 0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion... 1 BTSUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 2 INJUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion... 3 TRXBUSD [{'bracket': 1, 'initialLeverage': 20, 'notion... 4 ZRXUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... .. ... ... 220 OCEANUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 221 LEVERBUSD [{'bracket': 1, 'initialLeverage': 20, 'notion... 222 CHZUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 223 DUSKUSDT [{'bracket': 1, 'initialLeverage': 20, 'notion... 224 CTSIUSDT [{'bracket': 1, 'initialLeverage': 20, 'notion... [225 rows x 2 columns] Endpoint - initialLeverage Thx in advance) I've tried to convert it to different formats but it is full one string, so it didn't help me I've also tried to make "double pd" as def long(): lever = client.futures_leverage_bracket() lever = pd.DataFrame(lever) lev = lever['brackets'] lev = pd.DataFrame(lev) lev = lev['initialLeverage'] print(lever) #vol() But it doesn't working and returns me KeyError: 'initialLeverage' A: you can use a lambda function. This creates a new column in the dataframe x and saves the data in a list. df['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x]) Details: #create sample df df=pd.DataFrame(data={'symbol':['SUSHIUSDT','BTSUSDT'],'brackets':[[{'bracket': 1, 'initialLeverage': 25, 'notion':'abc'}], [{'bracket': 1., 'initialLeverage': 50, 'notion':'foo'}]]}) print(df) ''' symbol brackets 0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}] 1 BTSUSDT [{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}] ''' df['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x]) print(df) ''' symbol brackets initialLeverage 0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}] [25] 1 BTSUSDT [{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}] [50] ''' #if initialLeverage is always single value. df['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x][0]) symbol brackets initialLeverage 0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}] 25 1 BTSUSDT [{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}] 50
Get info from str-list
My info-resource (binance-api) returns info as a string list. Can you help me and explain how I can take the variable 'initialLeverage': Code def long(): lever = client.futures_leverage_bracket() lever = pd.DataFrame(lever) print(lever) #vol() Terminal symbol brackets 0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion... 1 BTSUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 2 INJUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion... 3 TRXBUSD [{'bracket': 1, 'initialLeverage': 20, 'notion... 4 ZRXUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... .. ... ... 220 OCEANUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 221 LEVERBUSD [{'bracket': 1, 'initialLeverage': 20, 'notion... 222 CHZUSDT [{'bracket': 1, 'initialLeverage': 50, 'notion... 223 DUSKUSDT [{'bracket': 1, 'initialLeverage': 20, 'notion... 224 CTSIUSDT [{'bracket': 1, 'initialLeverage': 20, 'notion... [225 rows x 2 columns] Endpoint - initialLeverage Thx in advance) I've tried to convert it to different formats but it is all one string, so it didn't help me. I've also tried to make a "double pd" as def long(): lever = client.futures_leverage_bracket() lever = pd.DataFrame(lever) lev = lever['brackets'] lev = pd.DataFrame(lev) lev = lev['initialLeverage'] print(lever) #vol() But it doesn't work and returns KeyError: 'initialLeverage'
[ "you can use a lambda function. This creates a new column in the dataframe x and saves the data in a list.\ndf['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x])\n\nDetails:\n#create sample df\ndf=pd.DataFrame(data={'symbol':['SUSHIUSDT','BTSUSDT'],'brackets':[[{'bracket': 1, 'initialLeverage': 25, 'notion':'abc'}],\n [{'bracket': 1., 'initialLeverage': 50, 'notion':'foo'}]]})\nprint(df)\n'''\n symbol brackets\n0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}]\n1 BTSUSDT [{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}]\n\n'''\n\ndf['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x])\nprint(df)\n\n'''\n\n symbol brackets initialLeverage\n0 SUSHIUSDT [{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}] [25]\n1 BTSUSDT [{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}] [50]\n\n'''\n#if initialLeverage is always single value.\ndf['initialLeverage']=df['brackets'].apply(lambda x: [i['initialLeverage'] for i in x][0])\n\n\n\n\n\n\nsymbol\nbrackets\ninitialLeverage\n\n\n\n\n0\nSUSHIUSDT\n[{'bracket': 1, 'initialLeverage': 25, 'notion': 'abc'}]\n25\n\n\n1\nBTSUSDT\n[{'bracket': 1.0, 'initialLeverage': 50, 'notion': 'foo'}]\n50\n\n\n\n" ]
[ 0 ]
[]
[]
[ "api", "binance", "dataframe", "keyerror", "python" ]
stackoverflow_0074525724_api_binance_dataframe_keyerror_python.txt
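An alternative to the lambda in the answer, for readers who prefer flattening: explode the list column and expand the dicts with pandas.json_normalize (same toy data as in the answer):

import pandas as pd

df = pd.DataFrame({
    'symbol': ['SUSHIUSDT', 'BTSUSDT'],
    'brackets': [[{'bracket': 1, 'initialLeverage': 25}],
                 [{'bracket': 1, 'initialLeverage': 50}]],
})
exploded = df.explode('brackets').reset_index(drop=True)   # one row per dict
expanded = pd.json_normalize(exploded['brackets'])         # dict keys -> columns
out = pd.concat([exploded[['symbol']], expanded], axis=1)
print(out[['symbol', 'initialLeverage']])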
Q: How to select element by classpath (SELENIUM, PYTHON) I Trying select this path but not works, chrome_options = Options() caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "eager" # interactive #chrome_options.add_argument("--headless") driver = uc.Chrome(options=chrome_options, desired_capabilities=caps) driver.get('https://www.santander.com.br/emprestimo/login') time.sleep(5) driver.find_element(By.CSS_SELECTOR, 'input#cpf.my-4.ng-pristine.ng-invalid.dss-form-field__input.ng-touched').send_keys(cpf) Tried, driver.find_element(By.CSS_SELECTOR, 'input#cpf.my-4.ng-pristine.ng-invalid.dss-form-field__input.ng-touched').send_keys(cpf) //div[contains(@class, "dss-form-field dss-form-field--right-icon")] A: The field you are trying to select is inside Shadow DOM. Such elements are quite straightforward to access using Chrome and Selenium 4: shadow_host = driver.find_element(By.TAG_NAME, "pdc-juc-root") shadow_root = shadow_host.shadow_root input = shadow_root.find_element(By.ID, "cpf") action = webdriver.ActionChains(driver) action.move_to_element(input).click().send_keys("123456789").perform() # delay before closing so can see that it works! time.sleep(3)
How to select element by classpath (SELENIUM, PYTHON)
I am trying to select this path, but it does not work: chrome_options = Options() caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "eager" # interactive #chrome_options.add_argument("--headless") driver = uc.Chrome(options=chrome_options, desired_capabilities=caps) driver.get('https://www.santander.com.br/emprestimo/login') time.sleep(5) driver.find_element(By.CSS_SELECTOR, 'input#cpf.my-4.ng-pristine.ng-invalid.dss-form-field__input.ng-touched').send_keys(cpf) I also tried: driver.find_element(By.CSS_SELECTOR, 'input#cpf.my-4.ng-pristine.ng-invalid.dss-form-field__input.ng-touched').send_keys(cpf) //div[contains(@class, "dss-form-field dss-form-field--right-icon")]
[ "The field you are trying to select is inside Shadow DOM. Such elements are quite straightforward to access using Chrome and Selenium 4:\nshadow_host = driver.find_element(By.TAG_NAME, \"pdc-juc-root\")\nshadow_root = shadow_host.shadow_root\ninput = shadow_root.find_element(By.ID, \"cpf\")\naction = webdriver.ActionChains(driver)\naction.move_to_element(input).click().send_keys(\"123456789\").perform()\n# delay before closing so can see that it works!\ntime.sleep(3)\n\n" ]
[ 0 ]
[]
[]
[ "css", "python", "selenium" ]
stackoverflow_0074526629_css_python_selenium.txt
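If shadow_root is unavailable (Selenium below 4.1, or a non-Chromium driver), the same root can usually be reached through JavaScript — a hedged fallback, with the CPF value as a placeholder:

from selenium.webdriver.common.by import By

host = driver.find_element(By.TAG_NAME, "pdc-juc-root")
# ask the browser for the shadow root attached to the host element
shadow_root = driver.execute_script("return arguments[0].shadowRoot", host)
cpf_input = shadow_root.find_element(By.CSS_SELECTOR, "#cpf")
cpf_input.click()
cpf_input.send_keys("12345678900")   # placeholder CPF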
Q: How to preserve column names in scikit-learn ColumnTransformer? I'm creating some pipelines using scikit-learn but I'm having some trouble keeping the variable names as the original names, and not as the transformer_name__feature_name format. This is the scenario: I have a set of transformers, both custom and some from scikit-learn itself. The set of transformers used in each step and the columns it uses is defined in an external file, from which I don't know beforehand which transformers I'm going to apply and to which columns; for example, let's say in a python dictionary named data, it would look like this [{'transformer': MinMaxScaler(), 'columns': ['column1', 'column2'], 'name': 'MinMaxScaler'}, {'transformer': CustomTransfomer(), 'columns': ['column2', 'column5'], 'name': 'CustomTransfomer'}] Now I create the pipeline from this definition like this. transformers = [(step["name"], step["transformer"], step["columns"]) for step in data["steps"]] preprocessor = ColumnTransformer(transformers=transformers, remainder='passthrough', verbose_feature_names_out=False) pipe = Pipeline([('preprocessor', preprocessor)]) I tried to use the parameter verbose_feature_names_out=False to prevent the default prefix naming, but I get an error saying that column names are not unique. If I set verbose_feature_names_out=True then the problem in this example is that column 2 gets applied to the first transformation step, but not the second one, as the name of the column is changed to MinMaxScaler__column2, so I end up with columns named MinMaxScaler__column2 and CustomTransformer__column2, but both transformations were applied individually, not one after the other. In this example, how can I apply both transformers to the specified columns and, in the end, keep the original column count and names column1,...,column5? A: The ColumnTransformer can only perform one transform per column. If you want to perform two transformations on column2, you should define a pipeline that performs first the MinMaxScaler and then your CustomTransformer. I would modify your code as follows: from sklearn.pipeline import make_pipeline data = [ {'transformer': MinMaxScaler(), 'columns': ['column1'], 'name': 'MinMaxScaler'}, {'transformer': CustomTransformer(), 'columns': ['column5'], 'name': 'CustomTransfomer'}, { 'transformer': make_pipeline(MinMaxScaler(),CustomTransformer()), 'columns': ['column2'], 'name': 'pipeline' } ] This will define a new transformer that performs both operations.
How to preserve column names in scikit-learn ColumnTransformer?
I'm creating some pipelines using scikit-learn but I'm having some trouble keeping the variable names as the original names, and not as the transformer_name__feature_name format. This is the scenario: I have a set of transformers, both custom and some from scikit-learn itself. The set of transformers used in each step and the columns it uses is defined in an external file, from which I don't know beforehand which transformers I'm going to apply and to which columns; for example, let's say in a python dictionary named data, it would look like this [{'transformer': MinMaxScaler(), 'columns': ['column1', 'column2'], 'name': 'MinMaxScaler'}, {'transformer': CustomTransfomer(), 'columns': ['column2', 'column5'], 'name': 'CustomTransfomer'}] Now I create the pipeline from this definition like this. transformers = [(step["name"], step["transformer"], step["columns"]) for step in data["steps"]] preprocessor = ColumnTransformer(transformers=transformers, remainder='passthrough', verbose_feature_names_out=False) pipe = Pipeline([('preprocessor', preprocessor)]) I tried to use the parameter verbose_feature_names_out=False to prevent the default prefix naming, but I get an error saying that column names are not unique. If I set verbose_feature_names_out=True then the problem in this example is that column 2 gets applied to the first transformation step, but not the second one, as the name of the column is changed to MinMaxScaler__column2, so I end up with columns named MinMaxScaler__column2 and CustomTransformer__column2, but both transformations were applied individually, not one after the other. In this example, how can I apply both transformers to the specified columns and, in the end, keep the original column count and names column1,...,column5?
[ "The ColumnTransformer can only perform one transform per column.\nIf you want to perform for column2 2 transformation, you should define a pipeline that perform first the MinMaxScaler and then your CustomTransformer.\nI would modify your code as follows:\nfrom sklearn.pipeline import make_pipeline\ndata = [\n {'transformer': MinMaxScaler(), 'columns': ['column1'], 'name': 'MinMaxScaler'},\n {'transformer': CustomTransformer(), 'columns': ['column5'], 'name': 'CustomTransfomer'},\n {\n 'transformer': make_pipeline(MinMaxScaler(),CustomTransformer()),\n 'columns': ['column2'],\n 'name': 'pipeline'\n }\n]\n\nThis will define a new transformer that perform both operations.\n\n" ]
[ 0 ]
[]
[]
[ "python", "scikit_learn", "scikit_learn_pipeline" ]
stackoverflow_0074524532_python_scikit_learn_scikit_learn_pipeline.txt
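Putting the answer together into a runnable sketch — StandardScaler stands in here for the unknown CustomTransformer, and once every column maps to exactly one transformer, verbose_feature_names_out=False becomes legal and the original names survive:

from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline, make_pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

preprocessor = ColumnTransformer(
    transformers=[
        ('scale', MinMaxScaler(), ['column1']),
        ('custom', StandardScaler(), ['column5']),   # placeholder transformer
        ('chained', make_pipeline(MinMaxScaler(), StandardScaler()), ['column2']),
    ],
    remainder='passthrough',
    verbose_feature_names_out=False,   # no column is duplicated any more
)
pipe = Pipeline([('preprocessor', preprocessor)])
# after pipe.fit(df), preprocessor.get_feature_names_out() returns the
# original names: column1, column5, column2, plus the remainder columns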
Q: how to read sonar data in python I need to read sonar datatype file in python. Sonar data contains the ocean details, It used to measure the depth of the sea. The file contains binary data and extension as .s7k format. A: I downloaded a sample s7k file to test whether I could read it---and I could. (The sample file I used to test can be downloaded here.) First, in a new project folder, download this dg_formats.py file, which contains a list of reson datagram codes and helpers. Next, download this reader.py file to the same folder. Line #9 of reader.py is the following: from hyo2.openbst.lib.raw.parsers.reson.dg_formats import parse, ResonDatagrams, reson_datagram_code Change this to: from dg_formats import parse, ResonDatagrams, reson_datagram_code Basically we skip installing the hyo2-qc library, we just take the parsing and reader code. Afterwards, I can simply do: from pathlib import Path from reader import Reson from dg_formats import parse, ResonDatagrams, reson_datagram_code input_path = Path("20190730_144835.s7k") # the sample I downloaded sonar_file = Reson(input_path) sonar_file.is_mapped() We can check the attributes with the class dict: >>> sonar_file.__dict__ {'_valid': True, 'data': None, 'map': {7200: [[64, 1564498115590.0002, 334, 386]], 7022: [[466, 1564495196916.9998, 32, 0]], 7001: [[566, 1564495197011.0015, 7699, 0]], 7021: [[8333, 1564498114898.9983, 20116, 0], [2340566, 1564498116758.9988, 20116, 0], [5031198, 1564498118618.9995, 20116, 0], ... '_reson_sync_patt': 65535, 'format_type': 's7k', 'file_length': 88684918, 'file_location': 88684918, 'file_end': True} We can also see a list of map keys like this: >>> print(sorted(sonar_file.map.keys())) [1003, 1012, 1013, 7000, 7001, 7002, 7004, 7007, 7010, 7021, 7022, 7027, 7028, 7058, 7200, 7300, 7503, 7504, 7610] This is basically what information is inside. From dg_formats.py, we can match the code and see what's what. For example, 1003 is position, so: >>> position_data = sonar_file.get_datagram(ResonDatagrams.POSITION) >>> print(position_data) [<dg_formats.Data1003 at 0x7f4946c1c5e0>, <dg_formats.Data1003 at 0x7f4946c1c130>, ... ... <dg_formats.Data1003 at 0x7f493e28d880>] # let's get information of first position >>> print(position_data[0].__dict__) {'desc': 'Position', 'time': 1564498113624.0005, 'num_beams_max': 512, 'parse_check': True, 'header_fmt': '<If3d5B', 'header_size': 37, 'datum': 'WGS', 'latency': 0.0, 'latitude': 0.7517794800836651, 'longitude': -1.2341624217514013, 'datum_height': 1.32, 'position_flag': 0, 'qual_flag': 0, 'position_method': 0, 'num_of_satelites': 15} There isn't helpers for all the codes (for example, nothing for 7200, 7300, 7503, 7504, 7610), but I hope it's a start to how to get info out of the file!
how to read sonar data in python
I need to read a sonar data file in Python. Sonar data contains ocean details; it is used to measure the depth of the sea. The file contains binary data and uses the .s7k format.
[ "I downloaded a sample s7k file to test whether I could read it---and I could. (The sample file I used to test can be downloaded here.)\n\nFirst, in a new project folder, download this dg_formats.py file, which contains a list of reson datagram codes and helpers.\n\nNext, download this reader.py file to the same folder.\n\nLine #9 of reader.py is the following:\nfrom hyo2.openbst.lib.raw.parsers.reson.dg_formats import parse, ResonDatagrams, reson_datagram_code\n\nChange this to:\nfrom dg_formats import parse, ResonDatagrams, reson_datagram_code\n\nBasically we skip installing the hyo2-qc library, we just take the parsing and reader code.\n\n\nAfterwards, I can simply do:\nfrom pathlib import Path\nfrom reader import Reson\nfrom dg_formats import parse, ResonDatagrams, reson_datagram_code\n\ninput_path = Path(\"20190730_144835.s7k\") # the sample I downloaded\n\nsonar_file = Reson(input_path)\nsonar_file.is_mapped()\n\nWe can check the attributes with the class dict:\n>>> sonar_file.__dict__\n{'_valid': True,\n 'data': None,\n 'map': {7200: [[64, 1564498115590.0002, 334, 386]],\n 7022: [[466, 1564495196916.9998, 32, 0]],\n 7001: [[566, 1564495197011.0015, 7699, 0]],\n 7021: [[8333, 1564498114898.9983, 20116, 0],\n [2340566, 1564498116758.9988, 20116, 0],\n [5031198, 1564498118618.9995, 20116, 0],\n...\n '_reson_sync_patt': 65535,\n 'format_type': 's7k',\n 'file_length': 88684918,\n 'file_location': 88684918,\n 'file_end': True}\n\nWe can also see a list of map keys like this:\n>>> print(sorted(sonar_file.map.keys()))\n[1003, 1012, 1013, 7000, 7001, 7002, 7004, 7007, 7010, 7021, 7022, 7027, 7028, 7058, 7200, 7300, 7503, 7504, 7610]\n\nThis is basically what information is inside. From dg_formats.py, we can match the code and see what's what. For example, 1003 is position, so:\n>>> position_data = sonar_file.get_datagram(ResonDatagrams.POSITION)\n>>> print(position_data)\n\n[<dg_formats.Data1003 at 0x7f4946c1c5e0>,\n <dg_formats.Data1003 at 0x7f4946c1c130>,\n...\n...\n <dg_formats.Data1003 at 0x7f493e28d880>]\n\n# let's get information of first position\n>>> print(position_data[0].__dict__)\n\n{'desc': 'Position', 'time': 1564498113624.0005, 'num_beams_max': 512, 'parse_check': True, 'header_fmt': '<If3d5B', 'header_size': 37, 'datum': 'WGS', 'latency': 0.0, 'latitude': 0.7517794800836651, 'longitude': -1.2341624217514013, 'datum_height': 1.32, 'position_flag': 0, 'qual_flag': 0, 'position_method': 0, 'num_of_satelites': 15}\n\nThere isn't helpers for all the codes (for example, nothing for 7200, 7300, 7503, 7504, 7610), but I hope it's a start to how to get info out of the file!\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python", "sonarqube" ]
stackoverflow_0074514680_numpy_pandas_python_sonarqube.txt
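Independently of the helper files above, the raw records can also be walked with struct alone. The 64-byte Data Record Frame offsets below follow the public 7k format description and should be double-checked against your format revision — a hedged sketch:

import struct

def iter_record_types(path):
    with open(path, 'rb') as f:
        while True:
            header = f.read(64)        # Data Record Frame header
            if len(header) < 64:
                break
            size, = struct.unpack_from('<I', header, 8)          # total record size
            record_type, = struct.unpack_from('<I', header, 32)  # e.g. 1003, 7027
            yield record_type
            f.seek(size - 64, 1)       # skip the record body and checksum

for record_type in iter_record_types('20190730_144835.s7k'):
    print(record_type)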
Q: I need only top 20 highest review count among all the cars bar graph? Basically i have a dataset with car models and i need a bar graph where the highest review count of 20 car brands should be displayed in the bar graph! I have tried this below code but i am getting all the brand models from the dataset but i need only top 20 highest review count car brands in bar graph. Used Dataset : https://www.kaggle.com/datasets/tr1gg3rtrash/cars-2022-dataset?group=owned A: Pandas contain a feature to sort values for a DataFrame e.g.: DataFrame.sort_values(<column_name>, ascending=False). For more information on sorting values using pandas can be found in pandas.sort_values documentation. Sorting: Data=Data.sort_values('reviews_count', ascending=False).reset_index(drop=True) Slicing the DataFrame to 20: Data = Data[0:20] Overall: Data=pd.read_csv("CARS_1.csv") Data=Data.sort_values('reviews_count', ascending=False).reset_index(drop=True) Data = Data[0:20] #Data.plot.bar() plt.figure(figsize=(30,9)) x_zoom = np.linspace(-1, 1, 50) y_zoom = np.sin(x_zoom) plt.bar(Data['car_name'], Data['reviews_count']) plt.xlabel("car name") plt.ylabel("Review Count") plt.show() Result: A: import matplotlib.pyplot as plt # sort the dataframe df.sort_values(['reviews_count', 'car_name',], ascending=False, inplace=True) plt.figure(figsize=(30,9)) x_zoom = np.linspace(-1, 1, 50) y_zoom = np.sin(x_zoom) plt.bar(df['car_name'][:20], df['reviews_count'][:20]) plt.xlabel("car name") plt.ylabel("Review Count") plt.xticks(rotation = 45) plt.show()
I need only top 20 highest review count among all the cars bar graph?
Basically I have a dataset with car models and I need a bar graph where the 20 car brands with the highest review counts are displayed. I have tried the code below, but I am getting all the brand models from the dataset, while I need only the top 20 car brands by review count in the bar graph. Used dataset: https://www.kaggle.com/datasets/tr1gg3rtrash/cars-2022-dataset?group=owned
[ "Pandas contain a feature to sort values for a DataFrame e.g.: DataFrame.sort_values(<column_name>, ascending=False).\nFor more information on sorting values using pandas can be found in pandas.sort_values documentation.\nSorting:\nData=Data.sort_values('reviews_count', ascending=False).reset_index(drop=True)\n\nSlicing the DataFrame to 20:\nData = Data[0:20]\n\nOverall:\nData=pd.read_csv(\"CARS_1.csv\")\nData=Data.sort_values('reviews_count', ascending=False).reset_index(drop=True)\n\nData = Data[0:20]\n\n#Data.plot.bar() \nplt.figure(figsize=(30,9)) \nx_zoom = np.linspace(-1, 1, 50) \ny_zoom = np.sin(x_zoom) \nplt.bar(Data['car_name'], Data['reviews_count']) \nplt.xlabel(\"car name\") \nplt.ylabel(\"Review Count\") \nplt.show()\n\nResult:\n\n", "import matplotlib.pyplot as plt\n\n# sort the dataframe\ndf.sort_values(['reviews_count', 'car_name',], ascending=False, inplace=True)\n\n\nplt.figure(figsize=(30,9))\nx_zoom = np.linspace(-1, 1, 50)\ny_zoom = np.sin(x_zoom)\n\nplt.bar(df['car_name'][:20], df['reviews_count'][:20])\n\nplt.xlabel(\"car name\")\nplt.ylabel(\"Review Count\")\nplt.xticks(rotation = 45)\n\nplt.show()\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataset", "machine_learning", "python", "scikit_learn", "visualization" ]
stackoverflow_0074529863_dataset_machine_learning_python_scikit_learn_visualization.txt
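Both answers sort and then slice; pandas can also do it in one step with nlargest (column names taken from the answers above):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv('CARS_1.csv')
top20 = df.nlargest(20, 'reviews_count')   # sort + slice in one call
top20.plot.bar(x='car_name', y='reviews_count', figsize=(30, 9), rot=45)
plt.ylabel('Review Count')
plt.tight_layout()
plt.show()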
Q: how to get rid of KeyError: 'kivy.garden.matplotlib'? i am using matplotlib with kivy when i am running my file i am getting this error can anyone suggest something. Traceback (most recent call last): File "<input>", line 1, in <module> File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/root/PycharmProjects/vsts/venv/main.py", line 17, in <module> from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 640, in _load_backward_compatible KeyError: 'kivy.garden.matplotlib' A: use these commands it helped mine that had error for from kivy.garden.matplotlib : python 3.7 using anaconda pip install kivy pip install kivy-garden garden install matplotlib pip install matplotlib==2.2.2 A: If I were you, I would clone "https://github.com/kivy-garden/garden.matplotlib" to the "virt(directory with your virtual environment)/lib/python/kivy/garden/" "git clone https://github.com/kivy-garden/garden.matplotlib" and rename the directory "garden.matplotlib" to the "matplotlib"
how to get rid of KeyError: 'kivy.garden.matplotlib'?
I am using matplotlib with Kivy; when I run my file I get this error. Can anyone suggest something? Traceback (most recent call last): File "<input>", line 1, in <module> File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/root/PycharmProjects/vsts/venv/main.py", line 17, in <module> from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg File "/root/pycharm-2019.3.3/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 640, in _load_backward_compatible KeyError: 'kivy.garden.matplotlib'
[ "use these commands it helped mine that had error for\nfrom kivy.garden.matplotlib :\npython 3.7 using anaconda\n\npip install kivy\npip install kivy-garden\ngarden install matplotlib\npip install matplotlib==2.2.2\n\n", "If I were you, I would clone \"https://github.com/kivy-garden/garden.matplotlib\" to the \"virt(directory with your virtual environment)/lib/python/kivy/garden/\"\n\"git clone https://github.com/kivy-garden/garden.matplotlib\"\nand rename the directory \"garden.matplotlib\" to the \"matplotlib\"\n" ]
[ 1, 0 ]
[]
[]
[ "android", "kivy", "matplotlib", "python", "runtime_error" ]
stackoverflow_0063655196_android_kivy_matplotlib_python_runtime_error.txt
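A quick diagnostic, hedged, to confirm the cloned flower ended up where the legacy loader looks — after the rename described in the second answer, 'matplotlib' should appear in the listing:

import os
import kivy.garden

garden_dir = os.path.dirname(kivy.garden.__file__)
print('garden dir:', garden_dir)
print('contents:', os.listdir(garden_dir))   # expect 'matplotlib' after the rename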
Q: python numpy how to insert multiple rows between each row I have a numpy array like this: 26.4812 32.0000 -5.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 10.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 0.0000 10000.0000 20000.0000 2.0000... I want to change it so that the 3rd column(z value) has more steps like this: 26.4812 32.0000 -5.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 -4.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 -3.0000 10000.0000 20000.0000 2.0000 ... 26.4812 32.0000 9.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 10.0000 10000.0000 20000.0000 2.0000... the steps must be defined by a variable step = 1mm at this example. how can i achive that? A: This is little ugly maybe but it does what you want: arr1 = np.array([[1, -3, -3], [1, 3, 3], [1, 3, 7]]) z=arr1[:, 2] new_z = [] for i in range(len(z)-1): new_z.append(np.arange(z[i],z[i+1]+1)) new_z = np.unique(np.concatenate(new_z)) new_array = np.c_[np.repeat(arr1[0, 0], new_z.shape[0]), new_z, np.repeat(arr1[0, 2], new_z.shape[0])] print(new_array) A: import numpy as np def generate_array(zmin, zmax, step=1): # generate z values based on min, max and step (-5, 10, and 1 in the example) z_values= np.arange(zmin, zmax+step, step) # create an array with the same data for every z value (for now) default_value = 0 array = np.repeat([[26.4812, 32.0000, default_value, 10000.0000, 20000.0000, 2.0000]], len(z_values), axis=0) # replace the z values array[:, 2] = z_values return array This can be used like this >>> generate_array(-5, 10) >>> array([ [ 2.64812e+01, 3.20000e+01, -5.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, -4.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, -3.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, -2.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, -1.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 0.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 1.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 2.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 3.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 4.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 5.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 6.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 7.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 8.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 9.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00], [ 2.64812e+01, 3.20000e+01, 1.00000e+01, 1.00000e+04, 2.00000e+04, 2.00000e+00] ])
python numpy how to insert multiple rows between each row
I have a numpy array like this: 26.4812 32.0000 -5.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 10.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 0.0000 10000.0000 20000.0000 2.0000... I want to change it so that the 3rd column (z value) has more steps, like this: 26.4812 32.0000 -5.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 -4.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 -3.0000 10000.0000 20000.0000 2.0000 ... 26.4812 32.0000 9.0000 10000.0000 20000.0000 2.0000 26.4812 32.0000 10.0000 10000.0000 20000.0000 2.0000... The steps must be defined by a variable, step = 1 mm in this example. How can I achieve that?
[ "This is little ugly maybe but it does what you want:\narr1 = np.array([[1, -3, -3],\n [1, 3, 3],\n [1, 3, 7]])\nz=arr1[:, 2]\nnew_z = []\nfor i in range(len(z)-1):\n new_z.append(np.arange(z[i],z[i+1]+1))\nnew_z = np.unique(np.concatenate(new_z))\nnew_array = np.c_[np.repeat(arr1[0, 0], new_z.shape[0]), new_z, \nnp.repeat(arr1[0, 2], new_z.shape[0])]\nprint(new_array)\n\n", "import numpy as np\n\ndef generate_array(zmin, zmax, step=1):\n # generate z values based on min, max and step (-5, 10, and 1 in the example)\n z_values= np.arange(zmin, zmax+step, step)\n\n # create an array with the same data for every z value (for now)\n default_value = 0\n array = np.repeat([[26.4812, 32.0000, default_value, 10000.0000, 20000.0000, 2.0000]], len(z_values), axis=0)\n \n # replace the z values\n array[:, 2] = z_values\n\n return array\n\nThis can be used like this\n>>> generate_array(-5, 10)\n>>> array([\n [ 2.64812e+01, 3.20000e+01, -5.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, -4.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, -3.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, -2.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, -1.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 0.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 1.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 2.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 3.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 4.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 5.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 6.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 7.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 8.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 9.00000e+00, 1.00000e+04, 2.00000e+04, 2.00000e+00],\n [ 2.64812e+01, 3.20000e+01, 1.00000e+01, 1.00000e+04, 2.00000e+04, 2.00000e+00]\n])\n\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "numpy_ndarray", "python" ]
stackoverflow_0074529980_numpy_numpy_ndarray_python.txt
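A hedged general version of the same idea that also handles descending segments (such as 10 back down to 0 in the example), assuming the step evenly divides each gap; only the z column changes between generated rows:

import numpy as np

def densify_z(arr, step=1.0):
    rows = []
    for a, b in zip(arr[:-1], arr[1:]):
        z0, z1 = a[2], b[2]
        direction = 1.0 if z1 >= z0 else -1.0
        # half-open range, so the next segment's start is not duplicated
        for z in np.arange(z0, z1, direction * step):
            row = a.copy()
            row[2] = z
            rows.append(row)
    rows.append(arr[-1].copy())   # keep the final original row
    return np.vstack(rows)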
Q: kivymd application crashes on android when using MDRaisedButton My application works perfectly on the machine. I am using kivymd without any external libraries. but application crashes on android when using MDRaisedButton MDRaisedButton: text: 'Enter' custom_color: app.theme_cls.primary_color pos_hint: {"center_x": 0.5, "center_y": 0.35} size_hint_x: .8 text_color: 1,1,1,1 When I see the Android cat I have this return: 11-16 08:45:56.034 1950 32332 W ActivityTaskManager: Force finishing activity org.pdv.denky.android.kdem/org.kivy.android.PythonActivity 11-16 08:45:56.036 1950 32332 V WindowManager: Changing focus of displayId=0 to null from Window{16a7d70 u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity} 11-16 08:45:56.065 1950 9792 I WindowManager: WIN DEATH: Window{16a7d70 u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity} 11-16 08:45:56.065 1950 9792 W InputManager-JNI: Input channel object '16a7d70 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_left_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_top_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_right_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_bottom_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.082 1950 2025 W InputDispatcher: Letterbox_top_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} has FLAG_SLIPPERY. Please report this in b/157929241 What can it be? # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy==2.1.0,kivymd==1.1.1,pillow A: Try kivymd=1.0.2 in the buildozer.spec requirements
kivymd application crashes on android when using MDRaisedButton
My application works perfectly on my machine. I am using kivymd without any external libraries. But the application crashes on Android when using MDRaisedButton MDRaisedButton: text: 'Enter' custom_color: app.theme_cls.primary_color pos_hint: {"center_x": 0.5, "center_y": 0.35} size_hint_x: .8 text_color: 1,1,1,1 When I look at the Android logcat I see this output: 11-16 08:45:56.034 1950 32332 W ActivityTaskManager: Force finishing activity org.pdv.denky.android.kdem/org.kivy.android.PythonActivity 11-16 08:45:56.036 1950 32332 V WindowManager: Changing focus of displayId=0 to null from Window{16a7d70 u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity} 11-16 08:45:56.065 1950 9792 I WindowManager: WIN DEATH: Window{16a7d70 u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity} 11-16 08:45:56.065 1950 9792 W InputManager-JNI: Input channel object '16a7d70 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_left_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_top_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_right_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.076 1950 7623 W InputManager-JNI: Input channel object 'Letterbox_bottom_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} (client)' was disposed without first being removed with the input manager! 11-16 08:45:56.082 1950 2025 W InputDispatcher: Letterbox_top_ActivityRecord{da6a64b u0 org.pdv.denky.android.kdem/org.kivy.android.PythonActivity t2250} has FLAG_SLIPPERY. Please report this in b/157929241 What can it be? # (list) Application requirements # comma separated e.g. requirements = sqlite3,kivy requirements = python3,kivy==2.1.0,kivymd==1.1.1,pillow
[ "Try kivymd=1.0.2 in the buildozer.spec requirements\n" ]
[ 0 ]
[]
[]
[ "kivy", "kivymd", "python" ]
stackoverflow_0074459968_kivy_kivymd_python.txt
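If the pin in the answer works for you, the requirements line quoted in the question would become the following (the exact version is an assumption — verify it against the KivyMD API your code uses):

requirements = python3,kivy==2.1.0,kivymd==1.0.2,pillow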
Q: error driver.find_element or find_elements I'm trying to click "Create New Network" by using selenium. <button type="button" id="dt-refreshBtn" class="btn wc-btn--link" data-label="Create New Network" role="link"><span class="icon-button" data-testid="dnxButton-iconButtonContainer" data-awt="networkListing-button-createNew"><i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i><span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span></span></button> <span class="icon-button" data-testid="dnxButton-iconButtonContainer" data-awt="networkListing-button-createNew"><i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i><span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span></span> <i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i> <span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span> I tried several scripts to find the location of "Create New Network" button, but got failed with below reason. Message: no such element: Unable to locate element: AttributeError: 'list' object has no attribute 'send_keys' 'list' object has no attribute 'click' here are scripts I've tried. driver.find_element(By.CSS_SELECTOR, "[title='Create New Network']").click() driver.find_element(By.CSS_SELECTOR, "[data-awt='networkListing-button-createNew']").click() driver.find_element(By.CSS_SELECTOR, "[id='dt-refreshBtn']").click() driver.find_element(By.CSS_SELECTOR, "[class='dnx-btn-icon-label']").click() driver.find_elements(By.XPATH, "//*[@class='dnx-btn-icon-label']").send_keys(Keys.ENTER) driver.find_elements(By.XPATH, "//button[@class='btn wc-btn--link']")[0].send_keys(Keys.ENTER) driver.find_elements(By.XPATH, "//*[@id='dt-refreshBtn']").send_keys(Keys.ENTER) driver.find_element(By.ID, "dt-refreshBtn").send_keys(Keys.ENTER) driver.find_element(By.CSS_SELECTOR, "[data-testid='dnxButton-icon']").send_keys(Keys.ENTER) driver.find_element(By.CSS_SELECTOR, "[data-testid='dnxButton-iconLabel']").send_keys(Keys.ENTER) driver.find_elements(By.CSS_SELECTOR, "[data-awt='networkListing-button-createNew']").click() driver.find_element(By.CSS_SELECTOR, "[title='Create New Network']").click() driver.find_elements(By.XPATH, "//*[@id='dt-refreshBtn']").click() could you please help this one ? A: Now let's go through each error .find_elements() is used for multiple elements and .click() | send_keys() is used for a single element is why the majority will give 'list' object has no attribute 'click' unless you access the individual element. .send_keys() is normally used for input tags or textareas and you'd want .click() for the button tag. Now some valid xpaths would be like so: driver.find_element(By.XPATH, "//button[@class='btn wc-btn--link']").click() would be a valid xpath if that is the only button class with that class name. driver.find_element(By.XPATH, "//button[@id='dt-refreshBtn']").click() If this still doesn't find check if the element is under iframes or shadow roots. A: driver.find_element(By.XPATH, "//button[@id='dt-refreshBtn']").click() should work A: finally, I found the reason and got the solution in this code. driver.find_element(By.XPATH, "//button[@id='dt-refreshBtn' and @class='btn wc-btn--link']") with this combination, it worked. Thanks everyone.
error driver.find_element or find_elements
I'm trying to click "Create New Network" by using selenium. <button type="button" id="dt-refreshBtn" class="btn wc-btn--link" data-label="Create New Network" role="link"><span class="icon-button" data-testid="dnxButton-iconButtonContainer" data-awt="networkListing-button-createNew"><i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i><span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span></span></button> <span class="icon-button" data-testid="dnxButton-iconButtonContainer" data-awt="networkListing-button-createNew"><i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i><span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span></span> <i class="dnac-icon-add-circle" data-testid="dnxButton-icon" title="Create New Network"></i> <span class="dnx-btn-icon-label" data-testid="dnxButton-iconLabel">Create New Network</span> I tried several scripts to find the location of "Create New Network" button, but got failed with below reason. Message: no such element: Unable to locate element: AttributeError: 'list' object has no attribute 'send_keys' 'list' object has no attribute 'click' here are scripts I've tried. driver.find_element(By.CSS_SELECTOR, "[title='Create New Network']").click() driver.find_element(By.CSS_SELECTOR, "[data-awt='networkListing-button-createNew']").click() driver.find_element(By.CSS_SELECTOR, "[id='dt-refreshBtn']").click() driver.find_element(By.CSS_SELECTOR, "[class='dnx-btn-icon-label']").click() driver.find_elements(By.XPATH, "//*[@class='dnx-btn-icon-label']").send_keys(Keys.ENTER) driver.find_elements(By.XPATH, "//button[@class='btn wc-btn--link']")[0].send_keys(Keys.ENTER) driver.find_elements(By.XPATH, "//*[@id='dt-refreshBtn']").send_keys(Keys.ENTER) driver.find_element(By.ID, "dt-refreshBtn").send_keys(Keys.ENTER) driver.find_element(By.CSS_SELECTOR, "[data-testid='dnxButton-icon']").send_keys(Keys.ENTER) driver.find_element(By.CSS_SELECTOR, "[data-testid='dnxButton-iconLabel']").send_keys(Keys.ENTER) driver.find_elements(By.CSS_SELECTOR, "[data-awt='networkListing-button-createNew']").click() driver.find_element(By.CSS_SELECTOR, "[title='Create New Network']").click() driver.find_elements(By.XPATH, "//*[@id='dt-refreshBtn']").click() could you please help this one ?
[ "Now let's go through each error .find_elements() is used for multiple elements and .click() | send_keys() is used for a single element is why the majority will give 'list' object has no attribute 'click' unless you access the individual element.\n.send_keys() is normally used for input tags or textareas and you'd want .click() for the button tag.\nNow some valid xpaths would be like so:\ndriver.find_element(By.XPATH, \"//button[@class='btn wc-btn--link']\").click()\n\nwould be a valid xpath if that is the only button class with that class name.\ndriver.find_element(By.XPATH, \"//button[@id='dt-refreshBtn']\").click() \n\nIf this still doesn't find check if the element is under iframes or shadow roots.\n", "driver.find_element(By.XPATH, \"//button[@id='dt-refreshBtn']\").click() \n\nshould work\n", "finally, I found the reason and got the solution in this code.\ndriver.find_element(By.XPATH, \"//button[@id='dt-refreshBtn' and @class='btn wc-btn--link']\")\n\nwith this combination, it worked.\nThanks everyone.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "css_selectors", "python", "selenium", "selenium_webdriver" ]
stackoverflow_0074396966_css_selectors_python_selenium_selenium_webdriver.txt
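Beyond locator syntax, an explicit wait often resolves "no such element" on dynamically rendered buttons — a hedged sketch reusing the XPath combination from the accepted answer:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
button = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//button[@id='dt-refreshBtn' and @class='btn wc-btn--link']")))
button.click()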