Q: ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device

I am facing an error installing packages on an AWS EC2 instance with Ubuntu 18 using the following command:

pip install -e .

The error is:

ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device

What did I check?

RAM, using the free -h command.
Disk utilization, using the sudo ncdu -x command.
Since pip tries to download to the default location given by the TMPDIR variable, I also removed files from that location.
I removed the contents of the .cache directory.
I removed the contents of the /tmp directory.

I am still facing this issue.

A: The answer provided at https://github.com/pypa/pip/issues/5816#issuecomment-425410189 states that pip downloads files to a temporary directory, and the environment variable TMPDIR specifies that directory. pip also puts files into a cache, hence the --cache-dir option; --no-cache-dir should work too. --build specifies the directory where the wheel will be built, so specifying it is also useful.

For my user, I made a custom directory named codebase/pip_cache/ in my home directory.

First I tried --no-cache-dir using the following command:

TMPDIR=/home/deepakahire/codebase/pip_cache/ pip install -e . --no-cache-dir

This didn't work. Finally, I specified --cache-dir as well, and used the following command to install the package:

TMPDIR=/home/deepakahire/codebase/pip_cache/ pip install --cache-dir=/home/deepakahire/codebase/pip_cache/ -e .

This worked for me.

Caveat: blindly deleting everything from the /tmp directory will delete your tmux sessions, but the services/instances that were started in those tmux sessions will stay alive on the same ports.
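Before moving TMPDIR, it can help to confirm which partition is actually out of space, since / and /home are often separate filesystems on EC2. A minimal diagnostic sketch (the paths are assumptions; adjust them to your instance layout):

import shutil

# Check the partitions pip touches: its scratch space and the new cache location.
for path in ("/tmp", "/home", "."):
    usage = shutil.disk_usage(path)
    print(f"{path}: {usage.free / 1e9:.2f} GB free of {usage.total / 1e9:.2f} GB")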
Q: How to access class object when I use torch.nn.DataParallel()?

I want to train my model using PyTorch with multiple GPUs. I included the following line:

model = torch.nn.DataParallel(model, device_ids=opt.gpu_ids)

Then, I tried to access the optimizer that was defined in my model definition:

G_opt = model.module.optimizer_G

However, I got an error:

AttributeError: 'DataParallel' object has no attribute optimizer_G

I think it is related to the definition of the optimizer in my model definition. It works when I use a single GPU without torch.nn.DataParallel, but it does not work with multiple GPUs even though I go through module, and I could not find the solution. Here is the model definition:

class MyModel(torch.nn.Module):
    ...
    self.optimizer_G = torch.optim.Adam(params, lr=opt.lr, betas=(opt.beta1, 0.999))

I used the Pix2PixHD implementation on GitHub if you want to see the full code. Thank you.

Edit: I solved the problem by using model.module.module.optimizer_G.

A: Use model.module. If that still fails, the model has typically been wrapped in DataParallel twice, so the inner object is itself a DataParallel wrapper; use model.module.module in that case. Best of luck.
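To illustrate the indirection, here is a minimal hedged sketch (the model and attribute names are illustrative; DataParallel stores the wrapped network in its .module attribute, so each extra layer of wrapping adds one .module hop):

import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 2)
        # Attribute attached to the model, as in the question
        self.optimizer_G = torch.optim.Adam(self.parameters(), lr=1e-3)

wrapped = torch.nn.DataParallel(TinyModel())
print(type(wrapped.module))                # TinyModel
print(wrapped.module.optimizer_G)          # one level of wrapping: one .module

double = torch.nn.DataParallel(wrapped)    # accidentally wrapped twice
print(double.module.module.optimizer_G)    # now two .module hops are needed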
Q: Python Decimal - multiplication by zero

Why does the following code:

from decimal import Decimal
result = Decimal('0') * Decimal('0.8881783462119193534061639577')
print(result)

return 0E-28?

I've traced it to the following code in the module:

if not self or not other:
    ans = _dec_from_triple(resultsign, '0', resultexp)
    # Fixing in case the exponent is out of bounds
    ans = ans._fix(context)
    return ans

The code appears to follow the Decimal Arithmetic Specification, which doesn't explicitly suggest what to do when we multiply by zero, referring to 'special numbers' from another standard, which also doesn't specify what we do when we multiply an integer by zero :)

So the decimal library does the thing that is explicitly specified:

The coefficient of the result, before rounding, is computed by multiplying together the coefficients of the operands.
The exponent of the result, before rounding, is the sum of the exponents of the two operands.
The sign of the result is the exclusive or of the signs of the operands.

Question: what is the need to return the coefficient and exponent (i.e., 0E-28) if one of the operands is a zero? We already know what that coefficient is when calling the multiplication function. Why not just return zero?

A: Raymond Hettinger has given a comprehensive explanation on the CPython GitHub.

In Arithmetic Operations, the section on arithmetic operation rules tells us:

Trailing zeros are not removed after operations.

There are test cases covering multiplication by zero. Here are some from multiply.decTest:

-- zeros, etc.
mulx021 multiply 0 0 -> 0
mulx022 multiply 0 -0 -> -0
mulx023 multiply -0 0 -> -0
mulx024 multiply -0 -0 -> 0
mulx025 multiply -0.0 -0.0 -> 0.00
mulx026 multiply -0.0 -0.0 -> 0.00
mulx027 multiply -0.0 -0.0 -> 0.00
mulx028 multiply -0.0 -0.0 -> 0.00
mulx030 multiply 5.00 1E-3 -> 0.00500
mulx031 multiply 00.00 0.000 -> 0.00000
mulx032 multiply 00.00 0E-3 -> 0.00000 -- rhs is 0
mulx033 multiply 0E-3 00.00 -> 0.00000 -- lhs is 0
mulx034 multiply -5.00 1E-3 -> -0.00500
mulx035 multiply -00.00 0.000 -> -0.00000
mulx036 multiply -00.00 0E-3 -> -0.00000 -- rhs is 0
mulx037 multiply -0E-3 00.00 -> -0.00000 -- lhs is 0
mulx038 multiply 5.00 -1E-3 -> -0.00500
mulx039 multiply 00.00 -0.000 -> -0.00000
mulx040 multiply 00.00 -0E-3 -> -0.00000 -- rhs is 0
mulx041 multiply 0E-3 -00.00 -> -0.00000 -- lhs is 0
mulx042 multiply -5.00 -1E-3 -> 0.00500
mulx043 multiply -00.00 -0.000 -> 0.00000
mulx044 multiply -00.00 -0E-3 -> 0.00000 -- rhs is 0
mulx045 multiply -0E-3 -00.00 -> 0.00000 -- lhs is 0

And this from the examples:

mulx053 multiply 0.9 -0 -> -0.0

In the Summary of Arithmetic section, the motivation is explained at a high level:

The arithmetic was designed as a decimal extended floating-point arithmetic, directly implementing the rules that people are taught at school. Up to a given working precision, exact unrounded results are given when possible (for instance, 0.9 ÷ 10 gives 0.09, not 0.089999996), and trailing zeros are correctly preserved in most operations (1.23 + 1.27 gives 2.50, not 2.5). Where results would exceed the working precision, floating-point rules apply.

More detail is given in the FAQ section "Why are trailing fractional zeros important?".
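If the 0E-28 form is unwanted for display, one option (a suggestion beyond the original answer) is Decimal.normalize(), which strips trailing zeros and reduces a zero to plain 0:

from decimal import Decimal

result = Decimal('0') * Decimal('0.8881783462119193534061639577')
print(result)              # 0E-28
print(result.normalize())  # 0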
Q: Tkinter window completely freezes when you move it

The Tkinter window just completely freezes with all its widgets when I move it, and that's my problem. I tested it with other code and it always does the same thing. Is the problem exclusive to tkinter? Just move your tkinter window from left to right and you will see that absolutely all of the program freezes; it's incredible.

Someone said to put main in a separate thread, but how? Without an example I don't even know what it means :( How do you put the threads outside of mainloop()? What does it mean? I put root.mainloop() before

thread1 = threading.Thread(target=lambda: fct(), daemon=True)
thread1.start()

and it does nothing.

from tkinter import *
from tkinter import ttk
import time
import threading
import win32api
import pyautogui

root = Tk()
root.geometry('800x438')
root.resizable(False, False)
root.configure(bg='gray')

label = Label(root, text='Display content', fg='yellow', bg='black', font=('Arial', 13), width=20)
label.place(relx=0.5, rely=0.3)

firstentryvar = StringVar()
secondentryvar = StringVar()
firstentry = Entry(root, textvariable=firstentryvar, justify=CENTER, font=('Arial', 12))
secondentry = Entry(root, textvariable=secondentryvar, justify=CENTER, font=('Arial', 12))

def displaycontent(*args):
    firstentry.pack()
    secondentry.pack()
    label.bind('<Button-1>', hidecontent)

def hidecontent(*args):
    firstentry.pack_forget()
    secondentry.pack_forget()
    label.bind('<Button-1>', displaycontent)

label.bind('<Button-1>', displaycontent)

def function1(*args):
    count = 0
    bool = False
    while count < 10:
        for i in firstentry.get():
            if bool == False:
                count += 1
                print(i)
                bool = True
            else:
                bool = False

def function2(*args):
    while True:
        if win32api.GetKeyState(0x45) < 0:
            print('you pressed e')

thread1 = threading.Thread(target=lambda: function1(), daemon=True)
thread1.start()
thread2 = threading.Thread(target=lambda: function2(), daemon=True)
thread2.start()

root.mainloop()

The code may not mean much, but it's enough to reproduce my example. You will notice that if you click on the display label and then move the window without entering anything in the entries, the window will bug/freeze. Why?

A: I played a bit with your code, and it seems the problem is with your while loops.

Even though you used threads correctly, using while loops this way makes your program spend all its resources spinning in them. What I mean is that as soon as the program starts, even before you press the label to show the entry widgets, your loops have already iterated thousands of times, if not tens of thousands.

However, by simply adding a short sleep, you can easily stop this runaway resource consumption. Note that you shouldn't use time.sleep with tkinter outside of threads; since we are using the loops inside threads, there is no problem.

For example:

def function1(*args):
    count = 0
    bool = False
    while count < 10:
        time.sleep(0.1)
        for i in firstentry.get():
            if bool == False:
                count += 1
                print(i)
                bool = True
            else:
                bool = False
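An alternative worth mentioning (an editor's suggestion, not part of the original answer) is tkinter's own after() scheduler, which runs periodic work on the main loop without either threads or sleeps. A minimal sketch with an illustrative poll function:

import tkinter as tk

root = tk.Tk()

def poll():
    # Do one small unit of work per tick instead of a tight while loop.
    print('polling...')
    root.after(100, poll)  # reschedule this function to run again in 100 ms

root.after(100, poll)
root.mainloop()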
Q: I want to replace special symbols with other text in Python with pandas

I want to change the characters all at once, but nothing changes when I use a special symbol like [ or ( or : or -. What should I do?

My sample datatable is below:

df
      col1
0     ( red ) apple
1     [ 20220901 ] autumn
2     - gotohome
3     sample : salt bread

and I want to get this:

df
      col1
0     red apple
1     20220901 autumn
2     gotohome
3     sample salt bread

My trial is below, but it's not working:

change_word = {
    '( red )' : 'red\n',
    '[ 20220901 ]' : '20220901\n',
    '- ' : '',
    ':' : '\n'
}

regex = r'\b(?:' + r'|'.join(change_word.keys()) + r')\b'
df["col1"] = df["col1"].str.replace(regex, lambda m: change_word[m.group()], regex=True)

A: You can maybe use something like:

import re

badchars = '()[]\t-:'
df2 = (df['col1']
 .str.strip(badchars+' ')                        # strip unwanted chars at extremities
 .str.split(f'\s*[{re.escape(badchars)}]+\s*')   # split on badchars + spaces
 .explode().to_frame()                           # explode as new rows
 )

Output:

   col1
0  red
0  apple
1  20220901
1  autumn
2  gotohome
3  sample
3  salt bread
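For reference, the original attempt fails for two reasons: characters like ( and [ are regex metacharacters, and \b cannot match next to a space or punctuation. A hedged fix that keeps the mapping approach (the keys are escaped with re.escape, the word boundaries dropped, and the replacement values adjusted so the output matches the desired frame):

import re
import pandas as pd

df = pd.DataFrame({'col1': ['( red ) apple', '[ 20220901 ] autumn',
                            '- gotohome', 'sample : salt bread']})

change_word = {'( red )': 'red', '[ 20220901 ]': '20220901',
               '- ': '', ' : ': ' '}
regex = '|'.join(re.escape(key) for key in change_word)
df['col1'] = df['col1'].str.replace(regex, lambda m: change_word[m.group()],
                                    regex=True)
print(df)  # red apple / 20220901 autumn / gotohome / sample salt bread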
Q: How to compare datetime.time objects

I have a column in my DataFrame that contains datetime.time() values. Example:

df.loc[0,'tat']

output:

datetime.time(0, 21, 4)

I want to write multiple if conditions with this column. Example:

if df.loc[0,'tat'] < 2:
    df.loc[0,'SLA'] = 'less than 2 hour SLA'
else:
    df.loc[0,'SLA'] = 'greater than 2 hour SLA'

if df.loc[0,'tat'] < 4 and df.loc[0,'tat'] > 2:
    df.loc[0,'SLA'] = '2-4 hour SLA'
else:
    df.loc[0,'SLA'] = 'greater than 4 hour SLA'

When I compare df.loc[r,'tat'] < 2 it gives a TypeError: '<' not supported between instances of 'datetime.time' and 'int'

I then tried to create timedeltas:

timedelta_2 = timedelta(hours=2)
df.loc[r,'tat'] < timedelta_2

It still gives me a TypeError: '<' not supported between instances of 'datetime.time' and 'datetime.timedelta'

How else am I supposed to compare?!

A: You need to compare hours with scalars. A solution using a helper hour series with cut (the bin edges are chosen so hour 0 lands in the first bucket and hours up to 4 in the second, matching the labels):

hours = pd.to_datetime(df['tat'].astype(str)).dt.hour

hours = df['tat'].apply(lambda x: x.hour)

df['SLA'] = pd.cut(hours, bins=[-1, 2, 4, 24],
                   labels=['less than 2 hour SLA', '2-4 hour SLA', 'greater than 4 hour SLA'])

Or you can extract the hour from the datetime.time objects:

if df.loc[0,'tat'].hour < 2:
    df.loc[0,'SLA'] = 'less than 2 hour SLA'
elif (df.loc[0,'tat'].hour < 4) and (df.loc[0,'tat'].hour > 2):
    df.loc[0,'SLA'] = '2-4 hour SLA'
else:
    df.loc[0,'SLA'] = 'greater than 4 hour SLA'

Solution for a new column SLA:

def func(x):
    if x.hour < 2:
        return 'less than 2 hour SLA'
    elif (x.hour < 4) and (x.hour > 2):
        return '2-4 hour SLA'
    else:
        return 'greater than 4 hour SLA'

df['SLA'] = df['tat'].apply(func)

A: You cannot compare a datetime.time instance with an integer; that is comparing apples with oranges. I would suggest converting the int to a time instance (note that datetime.time takes hour/minute/second arguments):

import datetime as dt

if df.loc[0,'tat'] < dt.time(hour=2, minute=0, second=0):
    df.loc[0,'SLA'] = 'less than 2 hour SLA'
else:
    df.loc[0,'SLA'] = 'greater than 2 hour SLA'

if df.loc[0,'tat'] < dt.time(4, 0, 0) and df.loc[0,'tat'] > dt.time(hour=2, minute=0, second=0):
    df.loc[0,'SLA'] = '2-4 hour SLA'
else:
    df.loc[0,'SLA'] = 'greater than 4 hour SLA'

I would also suggest making sure the 'tat' column really holds datetime.time instances.

A: A datetime.time object represents a time of day, independent of any date. E.g.

datetime.time(0, 21, 4)

translates to 00:21:04 AM.

Source: https://docs.python.org/3/library/datetime.html#date-objects

So what you probably need is something to compare the 'tat' time to, in other words another datetime.time object. Note that time objects do not support subtraction directly, so to get a timedelta you must first attach a date, e.g.

from datetime import date, datetime, time, timedelta

if datetime.combine(date.min, df.loc[0,'tat']) - datetime.combine(date.min, other_time_object) < timedelta(hours=2):
    ...

Or, if 'tat' actually represents a duration of something, it should be a timedelta object instead.

EDIT: If you cannot change the object type, the dirtiest hack is to measure from 00:00:00, e.g.

if datetime.combine(date.min, df.loc[0,'tat']) - datetime.combine(date.min, time(0, 0, 0)) < timedelta(hours=2):
    ...
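A related hedged sketch that avoids per-row ifs entirely: convert each time to elapsed seconds since midnight once and bin with pd.cut (the thresholds mirror the SLA labels above; the example data is illustrative):

import datetime
import pandas as pd

df = pd.DataFrame({'tat': [datetime.time(0, 21, 4), datetime.time(3, 5, 0),
                           datetime.time(7, 0, 0)]})

seconds = df['tat'].apply(lambda t: t.hour * 3600 + t.minute * 60 + t.second)
df['SLA'] = pd.cut(seconds, bins=[-1, 2 * 3600, 4 * 3600, 24 * 3600],
                   labels=['less than 2 hour SLA', '2-4 hour SLA',
                           'greater than 4 hour SLA'])
print(df)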
Q: Difference between del, remove, and pop on lists

Is there any difference between these three methods to remove an element from a list?

>>> a = [1, 2, 3]
>>> a.remove(2)
>>> a
[1, 3]

>>> a = [1, 2, 3]
>>> del a[1]
>>> a
[1, 3]

>>> a = [1, 2, 3]
>>> a.pop(1)
2
>>> a
[1, 3]

A: The effects of the three different methods to remove an element from a list:

remove removes the first matching value, not a specific index:

>>> a = [0, 2, 3, 2]
>>> a.remove(2)
>>> a
[0, 3, 2]

del removes the item at a specific index:

>>> a = [9, 8, 7, 6]
>>> del a[1]
>>> a
[9, 7, 6]

and pop removes the item at a specific index and returns it:

>>> a = [4, 3, 5]
>>> a.pop(1)
3
>>> a
[4, 5]

Their error modes are different too:

>>> a = [4, 5, 6]
>>> a.remove(7)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: list.remove(x): x not in list
>>> del a[7]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: list assignment index out of range
>>> a.pop(7)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: pop index out of range

A: Use del to remove an element by index, pop() to remove it by index if you need the returned value, and remove() to delete an element by value. The last requires searching the list, and raises ValueError if no such value occurs in the list.

When deleting index i from a list of n elements, the computational complexities of these methods are:

del     O(n - i)
pop     O(n - i)
remove  O(n)

A: Since no-one else has mentioned it, note that del (unlike pop) allows the removal of a range of indexes because of list slicing:

>>> lst = [3, 2, 2, 1]
>>> del lst[1:]
>>> lst
[3]

This also allows avoidance of an IndexError if the index is not in the list:

>>> lst = [3, 2, 2, 1]
>>> del lst[10:]
>>> lst
[3, 2, 2, 1]

A: Already answered quite well by others. This one from my end :)

Evidently, pop is the only one which returns the value, and remove is the only one which searches the object, while del limits itself to a simple deletion.

A: Many good explanations are here, but I will try my best to simplify further. Among all these methods, remove and pop are postfix, while del is prefix.

remove(): used to remove the first occurrence of an element. remove(n) removes the first occurrence of n in the list.

>>> a = [0, 2, 3, 2, 1, 4, 6, 5, 7]
>>> a.remove(2)   # where i = 2
>>> a
[0, 3, 2, 1, 4, 6, 5, 7]

pop(): used to remove an element.

If no index is specified, pop() removes from the end of the list:

>>> a.pop()
>>> a
[0, 3, 2, 1, 4, 6, 5]

If an index is specified, pop(index) removes the element at that index:

>>> a.pop(2)
>>> a
[0, 3, 1, 4, 6, 5]

WARNING: Dangerous Method Ahead

del(): a prefix method. Keep an eye on the two different syntaxes for the same method, with [] and without. It possesses the power to:

Delete an index. del a[index] deletes an index and its associated value, just like pop:

>>> del a[1]
>>> a
[0, 1, 4, 6, 5]

Delete values in a range [index_1:index_N]. del a[0:3] deletes multiple values in a range:

>>> del a[0:3]
>>> a
[6, 5]

Last but not least, delete the whole list in one shot. del (a), as said above:

>>> del (a)
>>> a
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined

Hope this clarifies the confusion.

A: pop

Takes an index (when given, else takes the last), removes the value at that index, and returns the value.

remove

Takes a value, removes the first occurrence, and returns nothing.

del

Takes an index, removes the value at that index, and returns nothing.

A: Any operation/function on different data structures is defined for particular actions. Here in your case, i.e. removing an element: del, pop, and remove. (If you consider sets, add another operation: discard.) The other confusing case is while adding: insert/append.

For demonstration, let us implement a deque. A deque is a hybrid linear data structure where you can add elements / remove elements at both ends (rear and front):

class Deque(object):
    def __init__(self):
        self.items = []

    def addFront(self, item):
        return self.items.insert(0, item)

    def addRear(self, item):
        return self.items.append(item)

    def deleteFront(self):
        return self.items.pop(0)

    def deleteRear(self):
        return self.items.pop()

    def returnAll(self):
        return self.items[:]

In here, see the operations:

def deleteFront(self):
    return self.items.pop(0)

def deleteRear(self):
    return self.items.pop()

Operations have to return something, so: pop, with and without an index. If I don't want to return the value: del self.items[0].

Delete by value, not index: remove.

list_ez = [1, 2, 3, 4, 5, 6, 7, 8]
for i in list_ez:
    if i % 2 == 0:
        list_ez.remove(i)
print(list_ez)

Returns [1, 3, 5, 7].

Let us consider the case of sets:

set_ez = set(range(10))

set_ez.remove(11)
# Gives KeyError: 11

set_ez.discard(11)
# Does not return any errors.

A: Here is a detailed answer.

del can be used for any class object, whereas pop and remove are bound to specific classes.

For del, here are some examples:

>>> a = 5
>>> b = "this is string"
>>> c = 1.432
>>> d = myClass()

>>> del c
>>> del a, b, d  # we can use comma separated objects

We can override the __del__ method in user-created classes.

Specific uses with list:

>>> a = [1, 4, 2, 4, 12, 3, 0]
>>> del a[4]
>>> a
[1, 4, 2, 4, 3, 0]

>>> del a[1: 3]  # we can also use slicing for deleting a range of indices
>>> a
[1, 4, 3, 0]

For pop: pop takes the index as a parameter and removes the element at that index. Unlike del, pop called on a list object returns the value at that index:

>>> a = [1, 5, 3, 4, 7, 8]
>>> a.pop(3)  # will return the value at index 3
4
>>> a
[1, 5, 3, 7, 8]

For remove: remove takes a value as its parameter and removes that value from the list. If multiple matching values are present, it removes the first occurrence. Note: it will throw ValueError if that value is not present.

>>> a = [1, 5, 3, 4, 2, 7, 5]
>>> a.remove(5)  # removes first occurrence of 5
>>> a
[1, 3, 4, 2, 7, 5]
>>> a.remove(5)
>>> a
[1, 3, 4, 2, 7]

Hope this answer is helpful.

A: The remove operation on a list is given a value to remove. It searches the list to find an item with that value and deletes the first matching item it finds. It is an error if there is no matching item; a ValueError is raised.

>>> x = [1, 0, 0, 0, 3, 4, 5]
>>> x.remove(4)
>>> x
[1, 0, 0, 0, 3, 5]
>>> del x[7]
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    del x[7]
IndexError: list assignment index out of range

The del statement can be used to delete an entire list. If you have a specific list item as your argument to del (e.g. listname[7] to specifically reference the 8th item in the list), it'll just delete that item. It is even possible to delete a "slice" from a list. It is an error if the index is out of range; an IndexError is raised.

>>> x = [1, 2, 3, 4]
>>> del x[3]
>>> x
[1, 2, 3]
>>> del x[4]
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    del x[4]
IndexError: list assignment index out of range

The usual use of pop is to delete the last item from a list as you use the list as a stack. Unlike del, pop returns the value that it popped off the list. You can optionally give an index value to pop and pop from other than the end of the list (e.g. listname.pop(0) will delete the first item from the list and return that first item as its result). You can use this to make the list behave like a queue, but there are library routines available that can provide queue operations with better performance than pop(0) does; see the sketch after these answers. It is an error if the index is out of range; an IndexError is raised.

>>> x = [1, 2, 3]
>>> x.pop(2)
3
>>> x
[1, 2]
>>> x.pop(4)
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    x.pop(4)
IndexError: pop index out of range

See collections.deque for more details.

A: remove basically works on the value; del and pop work on the index.

remove removes the first matching value. del deletes the item at a specific index. pop takes an index and returns the value at that index; the next time you print the list, the value no longer appears.

A: While pop and del both take indices to remove an element, as stated in the comments above, a key difference between them is their time complexity. The time complexity for pop() with no index is O(1), but the same is not true for deleting the last element by other means. If your use case is always to delete the last element, it's preferable to use pop(). For more explanation of time complexities, you can refer to https://www.ics.uci.edu/~pattis/ICS-33/lectures/complexitypython.txt

A: remove(), del and pop() are slow... What about 'None'?

In the midst of so many responses, I didn't see anyone talking about performance. So I have a performance tip: remove(), del and pop() all shift the remaining values to the left after a deletion...

1, 2, 3, 4, 5, 6
remove(3)
1, 2, <- 4, 5, 6

...making processing slow!

Changing the value you want gone to a null for batch processing of deletions later can add a lot of speed to your program, especially when dealing with a large volume of data:

my_array[2] = None

Of course, setting a null value is different from removing it, but if you want to understand a little more about deletion, thinking about the performance of this operation also seems interesting to me.

A: Difference among del, pop and remove in terms of execution speed.

While removing an intermediate item:

import timeit
print(timeit.timeit("a=[1,2,3,4,5]\ndel a[3]", number=100000))
print(timeit.timeit("a=[1,2,3,4,5]\na.pop(3)", number=100000))
print(timeit.timeit("a=[1,2,3,4,5]\na.remove(3)", number=100000))

del vs pop vs remove:

0.019387657986953855
0.02506213402375579
0.033232167130336165

del seems significantly faster than the other two, while remove() is the slowest.

While removing the last item:

print(timeit.timeit("a=[1,2,3,4,5]\ndel a[-1]", number=100000))
print(timeit.timeit("a=[1,2,3,4,5]\na.pop()", number=100000))
print(timeit.timeit("a=[1,2,3,4,5]\na.remove(5)", number=100000))

del vs pop vs remove:

0.01974551402963698
0.020333584863692522
0.03434014297090471

del and pop() take similar time removing the last item.
[ "You can also use remove to remove a value by index as well. \nn = [1, 3, 5]\n\nn.remove(n[1])\n\nn would then refer to [1, 5]\n" ]
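Since several answers note that list.pop(0) is O(n), here is a hedged companion sketch of the collections.deque alternative they allude to, whose popleft() is O(1):

from collections import deque

queue = deque([1, 2, 3, 4, 5])
first = queue.popleft()    # O(1), unlike list.pop(0), which shifts every element
last = queue.pop()         # O(1) from the right end as well
print(first, last, queue)  # 1 5 deque([2, 3, 4])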
Q: Prophet Forecasting

My dataframe is at a weekly level, as in the sample below. I was trying to implement a Prophet model using the following code:

df.columns = ['ds', 'y']

# define the model
model = Prophet(seasonality_mode='multiplicative')
# fit the model
model1 = model.fit(df)
model1.predict(10)

I need to predict the output at a weekly level for the next 10 weeks. How can I fix this?

A: You need to use model.make_future_dataframe to create new dates:

model = Prophet()
model.fit(df)

future = model.make_future_dataframe(periods=10, freq='W')

predictions = model.predict(future)

predictions will give predicted values for the whole dataframe; you can reach the forecasted values for the next 10 weeks with simple indexing:

future_preds = predictions.iloc[-10:]
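Continuing the answer's variables, a small hedged follow-up showing Prophet's standard output columns (the import path is prophet in recent releases, fbprophet in older ones; df is assumed to be the weekly frame from the question):

from prophet import Prophet  # older versions: from fbprophet import Prophet

model = Prophet(seasonality_mode='multiplicative')
model.fit(df)  # df has columns ['ds', 'y'] as in the question

future = model.make_future_dataframe(periods=10, freq='W')
predictions = model.predict(future)

# Point forecast plus uncertainty interval for the 10 new weeks
print(predictions.iloc[-10:][['ds', 'yhat', 'yhat_lower', 'yhat_upper']])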
Q: Send and receive objects through sockets in Python

I have searched a lot on the Internet, but I haven't been able to find a solution for sending an object over a socket and receiving it as-is. I know it needs pickling, which I have already done, and that converts it to bytes that are received on the other side. But how can I convert those bytes back to that type of object?

process_time_data = (current_process_start_time, current_process_end_time)
prepared_process_data = self.prepare_data_to_send(process_time_data)
data_string = io.StringIO(prepared_process_data)
data_string = pack('>I', len(data_string)) + data_string
self.send_to_server(data_string)

This is the code which converts the object to a StringIO on the client and sends it to the server. On the server side I am getting bytes. Now I am searching for a way to convert those bytes back to a StringIO so that I can get the object's value. In the code, the object is wrapped in a StringIO and sent over the socket. Is there a better approach?

The server-side code is as follows:

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
#server.setblocking(0)
server.bind(('127.0.0.1', 50000))
server.listen(5)
inputs = [server]
outputs = []
message_queues = {}

while inputs:
    readable, writeable, exceptional = select.select(inputs, outputs, inputs)
    for s in readable:
        if s is server:
            connection, client_address = s.accept()
            print(client_address)
            connection.setblocking(0)
            inputs.append(connection)
            message_queues[connection] = queue.Queue()
            print('server started...')
        else:
            print('Getting data step 1')
            raw_msglen = s.recv(4)
            msglen = unpack('>I', raw_msglen)[0]
            final_data = b''
            while len(final_data) < msglen:
                data = s.recv(msglen - len(final_data))
                if data:
                    #print(data)
                    final_data += data
                    message_queues[s].put(data)
                    if s not in outputs:
                        outputs.append(s)
                else:
                    if s in outputs:
                        outputs.remove(s)
                    else:
                        break
            inputs.remove(connection)
            #s.close()
            del message_queues[s]
            process_data = ProcessData()
            process_screen = ProcessScreen()
            if final_data is not None:
                try:
                    deserialized_data = final_data.decode("utf-8")
                    print(deserialized_data)
                except (EOFError):
                    break
            else:
                print('final data is empty.')
            print(process_data.project_id)
            print(process_data.start_time)
            print(process_data.end_time)
            print(process_data.process_id)

The two helper functions are as follows:

def receive_all(server, message_length, message_queues, inputs, outputs):
    # Helper function to recv message_length bytes or return None if EOF is hit
    data = b''
    while len(data) < message_length:
        packet = server.recv(message_length - len(data))
        if not packet:
            return None
        data += packet
        message_queues[server].put(data)
        if server not in outputs:
            outputs.append(server)
        else:
            if server in outputs:
                outputs.remove(server)
            inputs.remove(server)
            del message_queues[server]
    return data

def receive_message(server, message_queues, inputs, outputs):
    # Read message length and unpack it into an integer
    raw_msglen = receive_all(server, 4, message_queues, inputs, outputs)
    if not raw_msglen:
        return None
    message_length = unpack('>I', raw_msglen)[0]
    return receive_all(server, message_length, message_queues, inputs, outputs)

And two of the model classes are as follows:

class ProcessData:
    process_id = 0
    project_id = 0
    task_id = 0
    start_time = 0
    end_time = 0
    user_id = 0
    weekend_id = 0

# Model class to send image data to the server
class ProcessScreen:
    process_id = 0
    image_data = bytearray()

A: You're looking for pickle and the loads and dumps operations. Sockets are basically byte streams. Let us consider the case you have:

class ProcessData:
    process_id = 0
    project_id = 0
    task_id = 0
    start_time = 0
    end_time = 0
    user_id = 0
    weekend_id = 0

An instance of this class needs to be pickled into a data string by doing data_string = pickle.dumps(ProcessData()) and unpickled by doing data_variable = pickle.loads(data), where data is what is received.

So let us consider a case where the client creates an object of ProcessData and sends it to the server. Here's a minimal example of what the client would look like.

Client:

import socket, pickle

class ProcessData:
    process_id = 0
    project_id = 0
    task_id = 0
    start_time = 0
    end_time = 0
    user_id = 0
    weekend_id = 0

HOST = 'localhost'
PORT = 50007
# Create a socket connection.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))

# Create an instance of ProcessData() to send to server.
variable = ProcessData()
# Pickle the object and send it to the server
data_string = pickle.dumps(variable)
s.send(data_string)

s.close()
print('Data Sent to Server')

Now your server, which receives this data, looks as follows.

Server:

import socket, pickle

class ProcessData:
    process_id = 0
    project_id = 0
    task_id = 0
    start_time = 0
    end_time = 0
    user_id = 0
    weekend_id = 0

HOST = 'localhost'
PORT = 50007
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen(1)
conn, addr = s.accept()
print('Connected by', addr)

data = conn.recv(4096)
data_variable = pickle.loads(data)
conn.close()
print(data_variable)
# Access the information by doing data_variable.process_id or data_variable.task_id etc.
print('Data received from client')

Running the server first creates a bind on the port, and then running the client makes the data transfer via the socket. You could also look at this answer.

A: Shameless plug here, but a friend and I have recently released tlspyo, an open-source library whose purpose is to help you transfer Python objects over the network easily and in a secure fashion.

Transferring pickled objects via Internet sockets without using something like tlspyo is basically an open door for hackers, so don't do it.

With tlspyo, your code looks like this:

Server:

from tlspyo import Relay

if __name__ == "__main__":
    my_server = Relay(port=3000, password="<same strong password>")

    # (...)

Client 1:

from tlspyo import Endpoint

if __name__ == "__main__":
    client_1 = Endpoint(
        ip_server='<ip of your server>',
        port=3000,
        password="<same strong password>",
        groups="client 1")

    # send an object to client 2:
    my_object = "my object"  # doesn't have to be a string, of course
    client_1.broadcast(my_object, "client 2")

    # (...)

Client 2:

from tlspyo import Endpoint

if __name__ == "__main__":
    client_2 = Endpoint(
        ip_server='<ip of my Relay>',
        port=3000,
        password="<same strong password>",
        groups="client 2")

    # receive the object sent by client 1:
    my_object = client_2.receive_all(blocking=True)[0]

    # (...)

(You will need to set up TLS for this code to work; check out the documentation. Alternatively, you can disable TLS using security=None, but if you are transferring over the Internet you don't want to do that.)
[ "An option is to use JSON serialization.\nHowever, Python objects are not serializable, so you have to map your class object into Dict first, using either function vars (preferred) or the built-in __dict__.\nAdapting the answer from Sudheesh Singanamalla and based on this answer:\nClient\nimport socket, json\n\nclass ProcessData:\n process_id = 0\n project_id = 0\n task_id = 0\n start_time = 0\n end_time = 0\n user_id = 0\n weekend_id = 0\n\n\nHOST = 'localhost'\nPORT = 50007\n# Create a socket connection.\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.connect((HOST, PORT))\n\n# Create an instance of ProcessData() to send to server.\nvariable = ProcessData()\n\n# Map your object into dict\ndata_as_dict = vars(variable)\n\n# Serialize your dict object\ndata_string = json.dumps(data_as_dict)\n\n# Send this encoded object\ns.send(data_string.encode(encoding=\"utf-8\"))\n\ns.close()\nprint 'Data Sent to Server'\n\nServer\nimport socket, json\n\nclass ProcessData:\n process_id = 0\n project_id = 0\n task_id = 0\n start_time = 0\n end_time = 0\n user_id = 0\n weekend_id = 0\n\n\nHOST = 'localhost'\nPORT = 50007\ns = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\ns.bind((HOST, PORT))\ns.listen(1)\nconn, addr = s.accept()\nprint 'Connected by', addr\n\ndata_encoded = conn.recv(4096)\ndata_string = data_encoded.decode(encoding=\"utf-8\")\n\ndata_variable = json.loads(data_string)\n# data_variable is a dict representing your sent object\n\nconn.close()\nprint 'Data received from client'\n\nWarning\nOne important point is that dict mapping of an object instance does not map class variable, only instance variable. See this answer for more information. Example:\nclass ProcessData:\n # class variables\n process_id = 0\n project_id = 1\n\n def __init__(self):\n # instance variables\n self.task_id = 2\n self.start_time = 3\n\nobj = ProcessData()\ndict_obj = vars(obj)\n\nprint(dict_obj)\n# outputs: {'task_id': 2, 'start_time': 3}\n\n# To access class variables:\ndict_class_variables = vars(ProcessData)\n\nprint(dict_class_variables['process_id'])\n# outputs: 0\n\n" ]
[ -1 ]
[ "marshalling", "python", "serialization", "sockets" ]
stackoverflow_0047391774_marshalling_python_serialization_sockets.txt
Q: Unable to save data from Django form I am trying to save data from a form into a database table named 'ModuleNames', but it is updating the 'ModuleType' column of the foreign (instance) table. I created an instance of said foreign table because it was giving a different error about assigning a value to the foreign key column, and from various blogs I learned that an instance is needed, but it seems that is not the correct way. I am really unsure what to do now.
models.py
class ModuleTypes(models.Model):
    ModuleType = models.CharField(max_length = 50)
    ModuleDesc = models.CharField(max_length = 256)
    Sort = models.SmallIntegerField()
    isActive = models.BooleanField()
    slug = models.SlugField(('Type'), max_length=50, blank=True)

    class Meta:
        app_label = 'zz'

    def save(self, *args, **kwargs):
        if not self.id:
            self.slug = slugify(self.Type)
        super(ModuleTypes, self).save(*args, **kwargs)

class ModuleNames(models.Model):
    ModuleName = models.CharField(max_length = 50)
    ModuleDesc = models.CharField(max_length = 256)
    ModuleSort = models.SmallIntegerField()
    isActive = models.BooleanField()
    ModuleType = models.ForeignKey(ModuleTypes, on_delete=models.CASCADE, null = True)
    slug = models.SlugField(('ModuleName'), max_length=50, blank=True)

    class Meta:
        app_label = 'zz'

    def __unicode__(self):
        return self.status

forms.py
class ModuleForm(forms.ModelForm):
    moduleName = forms.CharField(label='Module Name', max_length=50)
    ModuleDesc = forms.CharField(max_length = 256)
    ModuleSort = forms.IntegerField()
    isActive = forms.BooleanField()
    ModuleType = forms.IntegerField()

    class Meta:
        model = ModuleNames
        fields = ('moduleName','ModuleDesc','ModuleSort','isActive','ModuleType')

views.py
def addmodule(request,moduletype):
    template_name = 'module.html'
    modules = ModuleNames.objects.all()
    listmodules = ModuleTypes.objects.get(ModuleType=moduletype)
    modules = ModuleNames.objects.filter(ModuleType_id=listmodules)
    if request.method == 'GET':
        args = {'modules': modules }
        return render(request,template_name, args)
    if request.method == 'POST':
        form = ModuleForm(request.POST, instance=ModuleTypes.objects.get(ModuleType=moduletype))
        if form.is_valid():
            #form.pop('csrfmiddlewaretoken', None)  # this part is annoying, so I put it in a comment
            post = form.save(commit=False)
            post.save()
        else:
            #raise error
    return render(request, template_name, {'modules': modules})

Thanks. I don't get any error with the above code, but I get the error below when not using an 'instance' of the foreign table.
Environment:

Request Method: POST
Request URL: http://127.0.0.1:8000/module/nav-tab/new

Django Version: 2.1.3
Python Version: 3.7.1
Installed Applications:
['Comp',
 'django.contrib.admin',
 'django.contrib.auth',
 'django.contrib.contenttypes',
 'django.contrib.sessions',
 'django.contrib.messages',
 'django.contrib.staticfiles']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
 'django.contrib.sessions.middleware.SessionMiddleware',
 'django.middleware.common.CommonMiddleware',
 'django.middleware.csrf.CsrfViewMiddleware',
 'django.contrib.auth.middleware.AuthenticationMiddleware',
 'django.contrib.messages.middleware.MessageMiddleware',
 'django.middleware.clickjacking.XFrameOptionsMiddleware']

Traceback:
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/exception.py" in inner
  34. response = get_response(request)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
  126. response = self.process_exception_by_middleware(e, request)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
  124. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/utils/decorators.py" in _wrapped_view
  142. response = view_func(request, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/utils/decorators.py" in _wrapped_view
  142. response = view_func(request, *args, **kwargs)
File "/Users/cem/Documents/Projects/DevComp/Comp/views.py" in addmodule
  133. if form.is_valid():
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in is_valid
  185. return self.is_bound and not self.errors
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in errors
  180. self.full_clean()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in full_clean
  383. self._post_clean()
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/models.py" in _post_clean
  398. self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/models.py" in construct_instance
  60. f.save_form_data(instance, cleaned_data[f.name])
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/models/fields/__init__.py" in save_form_data
  854. setattr(instance, self.name, data)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py" in __set__
  210. self.field.remote_field.model._meta.object_name,

Exception Type: ValueError at /module/nav-tab/new
Exception Value: Cannot assign "1": "ModuleNames.ModuleType" must be a "ModuleTypes" instance.
A: I think the problem is here:
form = ModuleForm(request.POST, instance=ModuleTypes.objects.get(ModuleType=moduletype)) # <-- here

You are passing a ModuleTypes instance where you should be passing a ModuleNames model instance. So you should update the form like this:
form = ModuleForm(request.POST, instance=ModuleNames.objects.get(ModuleType=listmodules)) # listmodules is a ModuleType object

Also minor refactoring:
if request.method == 'POST':
    form = ModuleForm(request.POST, instance=ModuleNames.objects.get(ModuleType=listmodules))
    if form.is_valid(): # <-- valid check in post request
        #form.pop('csrfmiddlewaretoken', None)  # this part is annoying, so I put it in a comment
        post = form.save()
return render(request, template_name, {'modules': modules})

Also, please use snake_case for model field (model class attribute) names, as per the PEP 8 style guide.
A: install django==2.2.28, mssql-django==1.1.3
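Building on the accepted fix above, a minimal sketch of a corrected view is shown below. It assumes you want to create a new ModuleNames row per POST rather than edit an existing one; for it to work, ModuleType would also need to be dropped from the form's fields (or declared as a ModelChoiceField), so the form stops trying to assign a raw integer to the foreign key. Also note that ModuleNames.objects.get(ModuleType=...) in the answer would raise MultipleObjectsReturned if several modules share the same type, which is another reason to create a fresh instance here.
def addmodule(request, moduletype):
    template_name = 'module.html'
    module_type = ModuleTypes.objects.get(ModuleType=moduletype)
    modules = ModuleNames.objects.filter(ModuleType=module_type)

    if request.method == 'POST':
        form = ModuleForm(request.POST)  # no instance: we are creating a new row
        if form.is_valid():
            post = form.save(commit=False)
            post.ModuleType = module_type  # assign the related instance, not an int
            post.save()
    return render(request, template_name, {'modules': modules})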
Unable to save data from Django form
I am trying to save data from a form into a database table named 'ModuleNames', but it is updating 'ModuleType' column of foreign(instance) table. I created an instance of said foreign table because it was giving a different error about assigning value to the foreign key column and from various blogs I learned that the instance is needed, but it seems it is not the correct way. I am really unsure what to do now? models.py class ModuleTypes(models.Model): ModuleType = models.CharField(max_length = 50) ModuleDesc = models.CharField(max_length = 256) Sort = models.SmallIntegerField() isActive = models.BooleanField() slug = models.SlugField(('Type'), max_length=50, blank=True) class Meta: app_label = 'zz' def save(self, *args, **kwargs): if not self.id: self.slug = slugify(self.Type) super(ModuleTypes, self).save(*args, **kwargs) class ModuleNames(models.Model): ModuleName = models.CharField(max_length = 50) ModuleDesc = models.CharField(max_length = 256) ModuleSort = models.SmallIntegerField() isActive = models.BooleanField() ModuleType = models.ForeignKey(ModuleTypes, on_delete=models.CASCADE, null = True) slug = models.SlugField(('ModuleName'), max_length=50, blank=True) class Meta: app_label = 'zz' def __unicode__(self): return self.status forms.py class ModuleForm(forms.ModelForm): moduleName = forms.CharField(label='Module Name', max_length=50) ModuleDesc = forms.CharField(max_length = 256) ModuleSort = forms.IntegerField() isActive = forms.BooleanField() ModuleType = forms.IntegerField() class Meta: model = ModuleNames fields = ('moduleName','ModuleDesc','ModuleSort','isActive','ModuleType') views.py def addmodule(request,moduletype): template_name = 'module.html' modules = ModuleNames.objects.all() listmodules = ModuleTypes.objects.get(ModuleType=moduletype) modules = ModuleNames.objects.filter(ModuleType_id=listmodules) if request.method == 'GET': args = {'modules': modules } return render(request,template_name, args) if request.method == 'POST': form = ModuleForm(request.POST, instance=ModuleTypes.objects.get(ModuleType=moduletype)) if form.is_valid(): #form.pop('csrfmiddlewaretoken', None)It is annoying this part because of that i put in comment. post = form.save(commit=False) post.save() else: #raise error return render(request, template_name, {'modules': modules}) Thanks I don't get any error by above code but i get below error when not using 'instance' of foreign table Environment: Request Method: POST Request URL: http://127.0.0.1:8000/module/nav-tab/new Django Version: 2.1.3 Python Version: 3.7.1 Installed Applications: ['Comp', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles'] Installed Middleware: ['django.middleware.security.SecurityMiddleware', 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware'] Traceback: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/exception.py" in inner 34. response = get_response(request) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response 126. 
response = self.process_exception_by_middleware(e, request) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response 124. response = wrapped_callback(request, *callback_args, **callback_kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/utils/decorators.py" in _wrapped_view 142. response = view_func(request, *args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/utils/decorators.py" in _wrapped_view 142. response = view_func(request, *args, **kwargs) File "/Users/cem/Documents/Projects/DevComp/Comp/views.py" in addmodule 133. if form.is_valid(): File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in is_valid 185. return self.is_bound and not self.errors File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in errors 180. self.full_clean() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/forms.py" in full_clean 383. self._post_clean() File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/models.py" in _post_clean 398. self.instance = construct_instance(self, self.instance, opts.fields, opts.exclude) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/forms/models.py" in construct_instance 60. f.save_form_data(instance, cleaned_data[f.name]) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/models/fields/__init__.py" in save_form_data 854. setattr(instance, self.name, data) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/django/db/models/fields/related_descriptors.py" in __set__ 210. self.field.remote_field.model._meta.object_name, Exception Type: ValueError at /module/nav-tab/new Exception Value: Cannot assign "1": "ModuleNames.ModuleType" must be a "ModuleTypes" instance.
[ "I think the problem is here:\nform = ModuleForm(request.POST, instance=ModuleTypes.objects.get(ModuleType=moduletype)) # <-- here\n\nYou are passing ModuleTypes as instance where your should be passing ModuleNames model instance. So you should update the form like this:\nform = ModuleForm(request.POST, instance=ModuleNames.objects.get(ModuleType=listmodules)) # listmodules is a ModuleType object\n\nAlso minor refactoring:\nif request.method == 'POST':\n form = ModuleForm(request.POST, instance=ModuleNames.objects.get(ModuleType=listmodules))\n if form.is_valid(): # <-- valid check in post request\n #form.pop('csrfmiddlewaretoken', None)It is annoying this part because of that i put in comment.\n post = form.save()\nreturn render(request, template_name, {'modules': modules})\n\nAnd also please use snake_case in Model Field(Model class's attribute) definitions as per PEP8 Style Guide.\n", "install django==2.2.28, mssql-django==1.1.3\n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0054166392_django_forms_python.txt
Q: merge two dataframes on common cell values of different columns I have two dataframes
df1 = pd.DataFrame({'col1': [1,2,3], 'col2': [4,5,6]})
df2 = pd.DataFrame({'col3': [1,5,3]})

and would like to left-merge df2 onto df1. I don't have a fixed merge column in df1, though. I would like to merge on col1 if the cell value of col1 exists in df2.col3, and on col2 if the cell value of col2 exists in df2.col3. So in the above example, merge on col1, col2 and then col1. (This is just an example; I actually have more than just two columns.) I could do this, but I'm not sure if it's OK.
df1 = df1.assign(merge_col = np.where(df1.col1.isin(df2.col3), df1.col1, df1.col2))
df1.merge(df2, left_on='merge_col', right_on='col3', how='left')

Are there any better ways to solve it?
A: Perform the merges in the preferred order, and use combine_first to combine the merges:
(df1.merge(df2, left_on='col1', right_on='col3', how='left')
 .combine_first(df1.merge(df2, left_on='col2', right_on='col3', how='left')
 )
)

For a generic method with many columns:
cols = ['col1', 'col2']

from functools import reduce

out = reduce(
    lambda a,b: a.combine_first(b),
    [df1.merge(df2, left_on=col, right_on='col3', how='left')
     for col in cols]
)

Output:
   col1  col2  col3
0     1     4   1.0
1     2     5   5.0
2     3     6   3.0

Better example:
Adding another column to df2 to illustrate the merge:
df2 = pd.DataFrame({'col3': [1,5,3], 'new': ['A', 'B', 'C']})

Output:
   col1  col2  col3 new
0     1     4   1.0   A
1     2     5   5.0   B
2     3     6   3.0   C

A: I think your solution can be modified to first build the merge key as a Series, by comparing all the columns from the list, and then merge with this Series:
cols = ['col1', 'col2']

s = df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1).iloc[:, 0]
print (s)
0    1.0
1    5.0
2    3.0
Name: col1, dtype: float64

df = df1.merge(df2, left_on=s, right_on='col3', how='left')
print (df)
   col1  col2  col3
0     1     4     1
1     2     5     5
2     3     6     3

Your solution with a helper column:
cols = ['col1', 'col2']

df1 = (df1.assign(merge_col = df1[cols].where(df1[cols].isin(df2.col3))
                              .bfill(axis=1).iloc[:, 0]))
df = df1.merge(df2, left_on='merge_col', right_on='col3', how='left')

print (df)
   col1  col2  merge_col  col3
0     1     4        1.0     1
1     2     5        5.0     5
2     3     6        3.0     3

Explanation of s: compare all columns with DataFrame.isin, create missing values where there is no match with DataFrame.where, and, to keep the merge priority, back-fill the missing values and select the first column by position:
print (df1[cols].isin(df2.col3))
    col1   col2
0   True  False
1  False   True
2   True  False

print (df1[cols].where(df1[cols].isin(df2.col3)))
   col1  col2
0   1.0   NaN
1   NaN   5.0
2   3.0   NaN

print (df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1))
   col1  col2
0   1.0   NaN
1   5.0   5.0
2   3.0   NaN

print (df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1).iloc[:, 0])
0    1.0
1    5.0
2    3.0
Name: col1, dtype: float64
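A different way to build the same priority key for many candidate columns, sketched below under the assumption that column order encodes the merge priority (df1, df2 and cols are the names from the question), is to melt the candidates to long form and keep the first match per row:
import pandas as pd

df1 = pd.DataFrame({'col1': [1, 2, 3], 'col2': [4, 5, 6]})
df2 = pd.DataFrame({'col3': [1, 5, 3]})
cols = ['col1', 'col2']

# stack the candidate columns; melt preserves column order, so col1 rows come first
long = df1[cols].reset_index().melt(id_vars='index')
key = (long[long['value'].isin(df2['col3'])]
       .drop_duplicates('index')            # first matching column wins
       .set_index('index')['value']
       .reindex(df1.index))                 # NaN where no column matched

out = df1.merge(df2, left_on=key, right_on='col3', how='left')
print(out)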
merge two dataframes on common cell values of different columns
I have two dataframes df1 = pd.DataFrame({'col1': [1,2,3], 'col2': [4,5,6]}) df2 = pd.DataFrame({'col3': [1,5,3]}) and would like to left merge df1 to df2. I don't have a fixed merge column in df1 though. I would like to merge on col1 if the cell value of col1 exists in df2.col3 and on col2 if the cell value of col2 exists in df2.col3. So in the above example merge on col1, col2 and then col1. (This is just an example, I actually have more than only two columns). I could do this but I'm not sure if it's ok. df1 = df1.assign(merge_col = np.where(df1.col1.isin(df2.col3), df1.col1, df1.col2)) df1.merge(df2, left_on='merge_col', right_on='col3', how='left') Are there any better ways to solve it?
[ "Perform the merges in the preferred order, and use combine_first to combine the merges:\n(df1.merge(df2, left_on='col1', right_on='col3', how='left')\n .combine_first(df1.merge(df2, left_on='col2', right_on='col3', how='left')\n )\n)\n\nFor a generic method with many columns:\ncols = ['col1', 'col2']\n\nfrom functools import reduce\n\nout = reduce(\n lambda a,b: a.combine_first(b),\n [df1.merge(df2, left_on=col, right_on='col3', how='left')\n for col in cols]\n)\n\nOutput:\n col1 col2 col3\n0 1 4 1.0\n1 2 5 5.0\n2 3 6 3.0\n\nBetter example:\nAdding another column to df2 to illustrate the merge:\ndf2 = pd.DataFrame({'col3': [1,5,3], 'new': ['A', 'B', 'C']})\n\nOutput:\n col1 col2 col3 new\n0 1 4 1.0 A\n1 2 5 5.0 B\n2 3 6 3.0 C\n\n", "I think your solution is possible modify with get merged Series with compare all columns from list and then merge with this Series:\nExplanation of s: Compare all columns by DataFrame.isin, create missing values if no match by DataFrame.where and for priority marge back filling missing values with select first column by position:\ncols = ['col1', 'col2']\n\ns = df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1).iloc[:, 0]\nprint (s)\n0 1.0\n1 5.0\n2 3.0\nName: col1, dtype: float64\n\ndf = df1.merge(df2, left_on=s, right_on='col3', how='left')\nprint (df)\n col1 col2 col3\n0 1 4 1\n1 2 5 5\n2 3 6 3\n\nYour solution with helper column:\ncols = ['col1', 'col2']\n\ndf1 = (df1.assign(merge_col = = df1[cols].where(df1[cols].isin(df2.col3))\n .bfill(axis=1).iloc[:, 0]))\ndf = df1.merge(df2, left_on='merge_col', right_on='col3', how='left')\n\nprint (df)\n col1 col2 merge_col col3\n0 1 4 1.0 1\n1 2 5 5.0 5\n2 3 6 3.0 3\n\nExplanation of s: Compare all columns by DataFrame.isin, create missing values if no match by DataFrame.where and for priority marge back filling missing values with select first column by position:\nprint (df1[cols].isin(df2.col3))\n col1 col2\n0 True False\n1 False True\n2 True False\n\nprint (df1[cols].where(df1[cols].isin(df2.col3)))\n col1 col2\n0 1.0 NaN\n1 NaN 5.0\n2 3.0 NaN\n\nprint (df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1))\n col1 col2\n0 1.0 NaN\n1 5.0 5.0\n2 3.0 NaN\n\nprint (df1[cols].where(df1[cols].isin(df2.col3)).bfill(axis=1).iloc[:, 0])\n0 1.0\n1 5.0\n2 3.0\nName: col1, dtype: float64\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "merge", "pandas", "python" ]
stackoverflow_0074515932_dataframe_merge_pandas_python.txt
Q: (python) MNIST with local picture face img AttributeError: 'PngImageFile' object has no attribute 'reshape' environment: Google Colab, Python
goal: predict my own picture with a Python MNIST model
issue: AttributeError: 'PngImageFile' object has no attribute 'reshape'
Update: tried code, and output
import keras
from keras.datasets import mnist
import matplotlib.pyplot as plt
import PIL
from PIL import Image

(train_images,train_labels),(test_images,test_labels) = mnist.load_data()

train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels

'''plt.imshow(train_images[819], cmap=plt.get_cmap('gray'))
print(train_images[819])
print(train_labels[819])'''

from keras import models
from keras import layers

network = models.Sequential()
network.add(layers.Dense(512,activation='relu',input_shape=(28*28,)))
network.add(layers.Dense(10,activation='softmax'))

network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

train_images = train_images.reshape((60000,28*28))
train_images = train_images.astype('float32')/255
test_images = test_images.reshape((10000,28*28))
test_images = test_images.astype('float32')/255

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

network.fit(train_images,train_labels,epochs=1,batch_size=128)

test_loss , test_acc = network.evaluate(test_images,test_labels)
print('test_acc:',test_acc)

network.save('m_lenet.h5')

#########

import numpy as np
from keras.models import load_model
import matplotlib.pyplot as plt
from PIL import Image

model = load_model('/content/m_lenet.h5')

picPath = '/content/00_a.png'
img = Image.open(picPath)

reIm = img.resize((28,28),Image.ANTIALIAS)
im_arr = np.array(reIm.convert("L"))

im1 = img.reshape((1,28*28))
im1 = img.astype('float32')/255

predict = model.predict_classes(im1)

print ('predict as:')
print (predict)

output:
AttributeError: 'PngImageFile' object has no attribute 'reshape'
469/469 [==============================] - 6s 11ms/step - loss: 0.2553 - accuracy: 0.9258
313/313 [==============================] - 1s 3ms/step - loss: 0.1399 - accuracy: 0.9582
test_acc: 0.9581999778747559
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-41-8c95d72a4bf4> in <module>
     68 im_arr = np.array(reIm.convert("L"))
     69 
---> 70 im1 = img.reshape((1,28*28))
     71 im1 = img.astype('float32')/255
     72 

AttributeError: 'PngImageFile' object has no attribute 'reshape'

Link of the screenshot on Google Colab, and the pic I want MNIST to predict: https://imgur.com/a/kTtz0ei
img name: "00_a.png"
img path: '/content/00_a.png'
"ls" result:
00_a.png          03_a.png  06_a.png  09_a.png
00_a__result.png  04_a.png  07_a.png  epic_num_reader_joy.model/
02_a.png          05_a.png  08_a.png  m_lenet.h5
02_b.png          05_b.png  09_a.jpg  sample_data/

The "00_a.png" is there, and right-clicking to copy the path gives: '/content/00_a.png'
With the following code, the path "/content/00_a.png" works:
import cv2
from matplotlib import pyplot as plt
%matplotlib inline

im = cv2.imread("/content/00_a.png",1)  # load image as bgr
im2 = im[:,:,::-1]  # transform image to rgb
plt.imshow(im2)
plt.show()

I also shared the pic on this link: https://imgur.com/a/kTtz0ei so the path can be used to call the picture. (PS: cv2.imshow will fail; only plt.show() works in Google Colab.)
Tried code from the expert suggestion (I passed either (im_arr), (reIm), or (img) as input, but they all give some error):
tensor = tf.keras.utils.img_to_array(im)

I screenshot the 3 results here: https://imgur.com/a/TBU1YW2
import keras
from keras.datasets import mnist
import matplotlib.pyplot as plt
import PIL
from PIL import Image

(train_images,train_labels),(test_images,test_labels) = mnist.load_data()

train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels

'''plt.imshow(train_images[819], cmap=plt.get_cmap('gray'))
print(train_images[819])
print(train_labels[819])'''

from keras import models
from keras import layers

network = models.Sequential()
network.add(layers.Dense(512,activation='relu',input_shape=(28*28,)))
network.add(layers.Dense(10,activation='softmax'))

network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

train_images = train_images.reshape((60000,28*28))
train_images = train_images.astype('float32')/255
test_images = test_images.reshape((10000,28*28))
test_images = test_images.astype('float32')/255

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

network.fit(train_images,train_labels,epochs=1,batch_size=128)

test_loss , test_acc = network.evaluate(test_images,test_labels)
print('test_acc:',test_acc)

network.save('m_lenet.h5')

#########

import numpy as np
from keras.models import load_model
import matplotlib.pyplot as plt
from PIL import Image

model = load_model('/content/m_lenet.h5')

picPath = '/content/00_a.png'
img = Image.open(picPath)

# I thought I resized here already
reIm = img.resize((28,28),Image.ANTIALIAS)
im_arr = np.array(reIm.convert("L"))

import torch
import tensorflow as tf

# passing either (im_arr), (reIm), or (img) here; they all give some error
tensor = tf.keras.utils.img_to_array(im_arr)

im1 = tensor.reshape((1,28*28))
im1 = img.astype('float32')/255

predict = model.predict_classes(im1)

print ('predict as:')
print (predict)

Tried code from the expert suggestion: array = np.array(im). I screenshot the result here: https://imgur.com/a/TBU1YW2
import keras
from keras.datasets import mnist
import matplotlib.pyplot as plt
import PIL
from PIL import Image

(train_images,train_labels),(test_images,test_labels) = mnist.load_data()

train_images.shape
len(train_labels)
train_labels
test_images.shape
len(test_labels)
test_labels

'''plt.imshow(train_images[819], cmap=plt.get_cmap('gray'))
print(train_images[819])
print(train_labels[819])'''

from keras import models
from keras import layers

network = models.Sequential()
network.add(layers.Dense(512,activation='relu',input_shape=(28*28,)))
network.add(layers.Dense(10,activation='softmax'))

network.compile(optimizer='rmsprop',
                loss='categorical_crossentropy',
                metrics=['accuracy'])

train_images = train_images.reshape((60000,28*28))
train_images = train_images.astype('float32')/255
test_images = test_images.reshape((10000,28*28))
test_images = test_images.astype('float32')/255

from keras.utils import to_categorical

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)

network.fit(train_images,train_labels,epochs=1,batch_size=128)

test_loss , test_acc = network.evaluate(test_images,test_labels)
print('test_acc:',test_acc)

network.save('m_lenet.h5')

#########

import numpy as np
from keras.models import load_model
import matplotlib.pyplot as plt
from PIL import Image

model = load_model('/content/m_lenet.h5')

picPath = '/content/00_a.png'
img = Image.open(picPath)

reIm = img.resize((28,28),Image.ANTIALIAS)

# add tried code
im_arr = np.array(reIm)

import torch
import tensorflow as tf

tensor = tf.convert_to_tensor(im_arr)

im1 = tensor.reshape((1,28*28))
im1 = img.astype('float32')/255

predict = model.predict_classes(im1)

print ('predict as:')
print (predict)

output:
469/469 [==============================] - 7s 13ms/step - loss: 0.2557 - accuracy: 0.9263
313/313 [==============================] - 1s 3ms/step - loss: 0.1256 - accuracy: 0.9627
test_acc: 0.9627000093460083
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-8-d999f75eebea> in <module>
     77 
     78 tensor = tf.convert_to_tensor(im_arr)
---> 79 im1 = tensor.reshape((1,28*28))
     80 im1 = img.astype('float32')/255
     81 

/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in __getattr__(self, name)
    443         from tensorflow.python.ops.numpy_ops import np_config
    444         np_config.enable_numpy_behavior()
--> 445       """)
    446     self.__getattribute__(name)
    447 

AttributeError: EagerTensor object has no attribute 'reshape'. If you are looking for numpy-related methods, please run the following:
from tensorflow.python.ops.numpy_ops import np_config
np_config.enable_numpy_behavior()

A: The problem is not due to the path. When you read an image (like a PNG) with PIL, the type is PngImageFile, which does not have a reshape method; reshape is for tensor/array classes. Convert the image you read into the proper data format, then reshape and feed it to your model:
tensor = tf.keras.utils.img_to_array(im)

or
array = np.array(im)
...

And, by the way, looking at the error you printed, the image could not be reshaped to the size you are interested in: first resize, then convert to a tensor, array, etc., and only then reshape.
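Putting the answer's advice together, a minimal prediction sketch using the same model and path as above might look like this. Note that np.argmax over model.predict replaces predict_classes, which was removed in newer Keras, and that MNIST digits are white-on-black, so handwriting on a white background may also need inverting (255 - arr); both points are assumptions about this setup, not something confirmed in the question.
import numpy as np
from PIL import Image
from keras.models import load_model

model = load_model('/content/m_lenet.h5')

img = Image.open('/content/00_a.png')
img = img.resize((28, 28), Image.ANTIALIAS)  # resize first
arr = np.array(img.convert('L'))             # then convert to a grayscale NumPy array

# reshape the array (not the PIL image) and normalize, exactly as the training data was
x = arr.reshape((1, 28 * 28)).astype('float32') / 255

pred = model.predict(x)
print('predict as:', np.argmax(pred, axis=1))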
(python) MNIST with local picture face img AttributeError: 'PngImageFile' object has no attribute 'reshape'
environment: google colab, python goal: python mnist predict my own picture issue: AttributeError: 'PngImageFile' object has no attribute 'reshape' Update tried code, and output import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels '''plt.imshow(train_images[819], cmap=plt.get_cmap('gray')) print(train_images[819]) print(train_labels[819])''' from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs=1,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/00_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) im_arr = np.array(reIm.convert("L")) im1 = img.reshape((1,28*28)) im1 = img.astype('float32')/255 predict = model.predict_classes(im1) print ('predict as:') print (predict) output: AttributeError: 'PngImageFile' object has no attribute 'reshape' 469/469 [==============================] - 6s 11ms/step - loss: 0.2553 - accuracy: 0.9258 313/313 [==============================] - 1s 3ms/step - loss: 0.1399 - accuracy: 0.9582 test_acc: 0.9581999778747559 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-41-8c95d72a4bf4> in <module> 68 im_arr = np.array(reIm.convert("L")) 69 ---> 70 im1 = img.reshape((1,28*28)) 71 im1 = img.astype('float32')/255 72 AttributeError: 'PngImageFile' object has no attribute 'reshape' link of the screenshot on google colab, and the pic I want MNIST to predict https://imgur.com/a/kTtz0ei img name: "00_a.png" img path: '/content/00_a.png' "ls" result: 00_a.png 03_a.png 06_a.png 09_a.png 00_a__result.png 04_a.png 07_a.png epic_num_reader_joy.model/ 02_a.png 05_a.png 08_a.png m_lenet.h5 02_b.png 05_b.png 09_a.jpg sample_data/ the "00_a.png" is there, and I right click to copy the path is : '/content/00_a.png' with following code, the path "/content/00_a.png" is working import cv2 from matplotlib import pyplot as plt %matplotlib inline im = cv2.imread("/content/00_a.png",1) # load image as bgr im2 = im[:,:,::-1] # transform image to rgb plt.imshow(im2) plt.show() I also share the pic on share link https://imgur.com/a/kTtz0ei that I can use the path to call the picture (ps.cv2.imshow will failed , only the plt.show() is working in google colab ) tried code from export suggestion: I'm either input (im_arr) or (reIm) or (img), but they all get some error tensor = tf.keras.utils.img_to_array(im) I screenshot 3 result here https://imgur.com/a/TBU1YW2 import keras from keras.datasets import 
mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels '''plt.imshow(train_images[819], cmap=plt.get_cmap('gray')) print(train_images[819]) print(train_labels[819])''' from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs=1,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/00_a.png' img = Image.open(picPath) # I though I resize here already reIm = img.resize((28,28),Image.ANTIALIAS) im_arr = np.array(reIm.convert("L")) import torch import tensorflow as tf # either input (im_arr) or (reIm) or (img), but they all get some error tensor = tf.keras.utils.img_to_array(im_arr) im1 = tensor.reshape((1,28*28)) im1 = img.astype('float32')/255 predict = model.predict_classes(im1) print ('predict as:') print (predict) tried code from export suggestion: array = np.array(im) I screenshot result here https://imgur.com/a/TBU1YW2 import keras from keras.datasets import mnist import matplotlib.pyplot as plt import PIL from PIL import Image (train_images,train_labels),(test_images,test_labels) = mnist.load_data() train_images.shape len(train_labels) train_labels test_images.shape len(test_labels) test_labels '''plt.imshow(train_images[819], cmap=plt.get_cmap('gray')) print(train_images[819]) print(train_labels[819])''' from keras import models from keras import layers network = models.Sequential() network.add(layers.Dense(512,activation='relu',input_shape=(28*28,))) network.add(layers.Dense(10,activation='softmax')) network.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) train_images = train_images.reshape((60000,28*28)) train_images = train_images.astype('float32')/255 test_images = test_images.reshape((10000,28*28)) test_images = test_images.astype('float32')/255 from keras.utils import to_categorical train_labels = to_categorical(train_labels) test_labels = to_categorical(test_labels) network.fit(train_images,train_labels,epochs=1,batch_size=128) test_loss , test_acc = network.evaluate(test_images,test_labels) print('test_acc:',test_acc) network.save('m_lenet.h5') ######### import numpy as np from keras.models import load_model import matplotlib.pyplot as plt from PIL import Image model = load_model('/content/m_lenet.h5') picPath = '/content/00_a.png' img = Image.open(picPath) reIm = img.resize((28,28),Image.ANTIALIAS) # add tried code im_arr = np.array(reIm) import torch import tensorflow as tf tensor = tf.convert_to_tensor(im_arr) im1 = tensor.reshape((1,28*28)) im1 = img.astype('float32')/255 predict = 
model.predict_classes(im1) print ('predict as:') print (predict) output: 469/469 [==============================] - 7s 13ms/step - loss: 0.2557 - accuracy: 0.9263 313/313 [==============================] - 1s 3ms/step - loss: 0.1256 - accuracy: 0.9627 test_acc: 0.9627000093460083 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-8-d999f75eebea> in <module> 77 78 tensor = tf.convert_to_tensor(im_arr) ---> 79 im1 = tensor.reshape((1,28*28)) 80 im1 = img.astype('float32')/255 81 /usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in __getattr__(self, name) 443 from tensorflow.python.ops.numpy_ops import np_config 444 np_config.enable_numpy_behavior() --> 445 """) 446 self.__getattribute__(name) 447 AttributeError: EagerTensor object has no attribute 'reshape'. If you are looking for numpy-related methods, please run the following: from tensorflow.python.ops.numpy_ops import np_config np_config.enable_numpy_behavior()
[ "The problem is not due to path. When you are reading an image (like Png) with PIL, the type in PngImageFile which does not have reshape method. The reshape is for the tensor class. convert the Image that you read into proper data format you expect then rehsape and give to your model\ntensor = tf.keras.utils.img_to_array(im)\n\nor\narray = np.array(im)\n...\n\nand by the way looking at the error you printed, the image size could not be reshaped to the size you are interested, first you should resize then convert to tensor , array , etc and reshape.\n" ]
[ 0 ]
[]
[]
[ "keras", "matplotlib", "mnist", "python", "typeerror" ]
stackoverflow_0074514954_keras_matplotlib_mnist_python_typeerror.txt
Q: iloc[] by value columns I want to use iloc based on the value in a column.
df1 = pd.DataFrame({'col1': ['1' ,'1','1','2','2','2','2','2','3' ,'3','3'],
                    'col2': ['A' ,'B','C','D','E','F','G','H','I' ,'J','K']})

I want to select the row at index 2 within each col1 group, as a data frame, and the result should look like
col1 col2
1    C
2    F
3    K

Thank you so much
A: Use GroupBy.nth:
df2 = df1.groupby('col1', as_index=False).nth(2)

Alternative with GroupBy.cumcount:
df2 = df1[df1.groupby('col1').cumcount().eq(2)]

print (df2)
   col1 col2
2     1    C
5     2    F
10    3    K

A: Use GroupBy.nth with as_index=False:
df1.groupby('col1', as_index=False).nth(2)

output:
   col1 col2
2     1    C
5     2    F
10    3    K

A: df1.groupby('col1').agg(lambda ss:ss.iloc[2])

     col2
col1     
1       C
2       F
3       K
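If you want exactly the display from the question, without the original row labels 2, 5 and 10, one option (a small sketch building on the answers above; nth keeps the original index in many pandas versions even with as_index=False) is to reset the index afterwards:
out = df1.groupby('col1', as_index=False).nth(2).reset_index(drop=True)
print(out)
#   col1 col2
# 0    1    C
# 1    2    F
# 2    3    K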
iloc[] by value columns
I want to use iloc based on the value in a column.
df1 = pd.DataFrame({'col1': ['1' ,'1','1','2','2','2','2','2','3' ,'3','3'],
                    'col2': ['A' ,'B','C','D','E','F','G','H','I' ,'J','K']})

I want to select the row at index 2 within each col1 group, as a data frame, and the result should look like
col1 col2
1    C
2    F
3    K

Thank you so much
[ "Use GroupBy.nth:\ndf2 = df1.groupby('col1', as_index=False).nth(2)\n\nAlternative with GroupBy.cumcount:\ndf2 = df1[df1.groupby('col1').cumcount().eq(2)]\n\n\nprint (df2)\n col1 col2\n2 1 C\n5 2 F\n10 3 K\n\n", "Use GroupBy.nth with as_index=False:\ndf1.groupby('col1', as_index=False).nth(2)\n\noutput:\n col1 col2\n2 1 C\n5 2 F\n10 3 K\n\n", "df1.groupby('col1').agg(lambda ss:ss.iloc[2])\n\n col2\ncol1 \n1 C\n2 F\n3 K\n\n" ]
[ 4, 3, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0071392956_pandas_python.txt
Q: How to make a clear messages command in cog
import discord
from discord.ext import commands

class Purge(commands.Cog):
    def __init__(self, client):
        self.client = client

    @commands.command()
    async def clear(ctx, amount = 5):
        if amount == 0:
            await ctx.send("AMOUNT CANNOT BE 0!")
        else:
            await ctx.channel.purge(limit = amount + 1)

def setup(client):
    client.add_cog(Purge(client))

When I type the command -clear it doesn't do anything, nor is there an error message telling me anything.
A: async def setup(client):
    await client.add_cog(Purge(client))
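Besides the async setup shown in the answer, the cog as posted likely also needs self as the first parameter of clear; commands defined inside a Cog are called with the cog instance first, so omitting it breaks invocation. A sketch combining both fixes, assuming discord.py 2.0+, might look like this (the bot also needs the Manage Messages permission, and prefix commands need the message content intent enabled):
import discord
from discord.ext import commands

class Purge(commands.Cog):
    def __init__(self, client):
        self.client = client

    @commands.command()
    async def clear(self, ctx, amount: int = 5):  # cog commands take self first
        if amount == 0:
            await ctx.send("AMOUNT CANNOT BE 0!")
        else:
            await ctx.channel.purge(limit=amount + 1)

# discord.py 2.0+: setup must be a coroutine and add_cog must be awaited
async def setup(client):
    await client.add_cog(Purge(client))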
How to make a clear messages command in cog
import discord
from discord.ext import commands

class Purge(commands.Cog):
    def __init__(self, client):
        self.client = client

    @commands.command()
    async def clear(ctx, amount = 5):
        if amount == 0:
            await ctx.send("AMOUNT CANNOT BE 0!")
        else:
            await ctx.channel.purge(limit = amount + 1)

def setup(client):
    client.add_cog(Purge(client))

When I type the command -clear it doesn't do anything, nor is there an error message telling me anything.
[ "async def setup(client):\n\n await client.add_cog(Purge(client))\n\n" ]
[ 0 ]
[]
[]
[ "discord.py", "python" ]
stackoverflow_0068144114_discord.py_python.txt
Q: Calling Python 2 script from Python 3 I have two scripts; the main one is in Python 3, and the second one is written in Python 2 (it also uses a Python 2 library). There is one method in the Python 2 script I want to call from the Python 3 script, but I don't know how to cross this bridge.
A: Calling different Python versions from each other can be done very elegantly using execnet. The following function does the trick:
import execnet

def call_python_version(Version, Module, Function, ArgumentList):
    gw = execnet.makegateway("popen//python=python%s" % Version)
    channel = gw.remote_exec("""
        from %s import %s as the_function
        channel.send(the_function(*channel.receive()))
    """ % (Module, Function))
    channel.send(ArgumentList)
    return channel.receive()

Example: a my_module.py written in Python 2.7:
def my_function(X, Y): 
    return "Hello %s %s!" % (X, Y)

Then the following function calls
result = call_python_version("2.7", "my_module", "my_function", 
                             ["Mr", "Bear"]) 
print(result) 
result = call_python_version("2.7", "my_module", "my_function", 
                             ["Mrs", "Wolf"]) 
print(result)

result in 
Hello Mr Bear!
Hello Mrs Wolf!

What happened is that a 'gateway' was instantiated waiting for an argument list with channel.receive(). Once it came in, it was translated and passed to my_function. my_function returned the string it generated, and channel.send(...) sent the string back. On the other side of the gateway, channel.receive() catches that result and returns it to the caller. The caller finally prints the string as produced by my_function in the Python 3 module.
A: You could run python2 using subprocess (a Python module) by doing the following:
From Python 3:
#!/usr/bin/env python3
import subprocess

python2_command = "python2 py2file.py arg1 arg2"  # launch your python2 script

process = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()  # receive output from the python2 script

where output stores whatever the Python 2 script printed to stdout.
A: Maybe too late, but there is one more simple option for calling python2.7 scripts:
script = ["python2.7", "script.py", "arg1"] 
process = subprocess.Popen(" ".join(script),
                           shell=True, 
                           env={"PYTHONPATH": "."})

A: I am running my python code with python 3, but I need a tool (ocropus) that is written with python 2.7. I spent a long time trying all these options with subprocess, and kept having errors, and the script would not complete. From the command line, it runs just fine. So I finally tried something simple that worked, but that I had not found in my searches online. I put the ocropus command inside a bash script:
#!/bin/bash

/usr/local/bin/ocropus-gpageseg $1

I call the bash script with subprocess.
command = [ocropus_gpageseg_path, current_path]
process = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
output, error = process.communicate()
print('output',output,'error',error)

This really gives the ocropus script its own little world, which it seems to need. I am posting this in the hope that it will save someone else some time.
A: It works for me if I call the python 2 executable directly from a python 3 environment.
python2_command = 'C:\Python27\python.exe python2_script.py arg1'
process = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

python3_command = 'python python3_script.py arg1'
process = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)
output, error = process.communicate()

A: I ended up creating a new function in the python3 script, which wraps the python2.7 code. It correctly formats error messages created by the python2.7 code; it extends mikelsr's answer and uses run() as recommended by the subprocess docs.
in bar.py (python2.7 code):
def foo27(input):
    return input * 2

in your python3 file:
import ast
import subprocess

def foo3(parameter):
    try:
        return ast.literal_eval(subprocess.run(
            [
                "C:/path/to/python2.7/python.exe", "-c",  # run python2.7 in command mode
                "from bar import foo27;"+
                "print(foo27({}))".format(parameter)  # print the output 
            ],
            capture_output=True,
            check=True
        ).stdout.decode("utf-8"))  # evaluate the printed output
    except subprocess.CalledProcessError as e:
        print(e.stdout)
        raise Exception("foo27 errored with message below:\n\n{}"
                        .format(e.stderr.decode("utf-8")))

print(foo3(21))
# 42

This works when passing in simple python objects, like dicts, as the parameter, but does not work for objects created by classes, e.g. numpy arrays. These have to be serialized and re-instantiated on the other side of the barrier.
A: Note: This was happening when running my python 2.x s/w in the liclipse IDE. When I ran it from a bash script on the command line it didn't have the problem.
Here is a problem & solution I had when mixing python 2.x & 3.x scripts. I am running a python 2.6 process & needed to call/execute a python 3.6 script. The environment variable PYTHONPATH was set to point to 2.6 python s/w, so it was choking on the following:
File "/usr/lib64/python2.6/encodings/__init__.py", line 123
raise CodecRegistryError,\

This caused the 3.6 python script to fail. So instead of calling the 3.6 program directly, I created a bash script which nuked the PYTHONPATH environment variable.
#!/bin/bash
export PYTHONPATH=
## Now call the 3.6 python script
./36psrc/rpiapi/RPiAPI.py $1

A: Instead of calling them in python 3, you could run them in a conda env batch by creating a batch file as below:
call C:\ProgramData\AnacondaNew\Scripts\activate.bat
C:\Python27\python.exe "script27.py"
C:\ProgramData\AnacondaNew\python.exe "script3.py"
call conda deactivate
pause
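For newer Python 3 versions, the Popen examples above can usually be replaced by the higher-level subprocess.run API. A small general sketch is shown below; the script name and arguments are placeholders, not files from the question:
import subprocess

result = subprocess.run(
    ["python2", "py2file.py", "arg1", "arg2"],
    capture_output=True,  # Python 3.7+; use stdout=subprocess.PIPE on older versions
    text=True,            # decode bytes to str automatically
    check=True,           # raise CalledProcessError on a non-zero exit code
)
print(result.stdout)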
Calling Python 2 script from Python 3
I have two scripts; the main one is in Python 3, and the second one is written in Python 2 (it also uses a Python 2 library). There is one method in the Python 2 script I want to call from the Python 3 script, but I don't know how to cross this bridge.
[ "Calling different python versions from each other can be done very elegantly using execnet. The following function does the charm:\nimport execnet\n\ndef call_python_version(Version, Module, Function, ArgumentList):\n gw = execnet.makegateway(\"popen//python=python%s\" % Version)\n channel = gw.remote_exec(\"\"\"\n from %s import %s as the_function\n channel.send(the_function(*channel.receive()))\n \"\"\" % (Module, Function))\n channel.send(ArgumentList)\n return channel.receive()\n\nExample: A my_module.py written in Python 2.7:\ndef my_function(X, Y): \n return \"Hello %s %s!\" % (X, Y)\n\nThen the following function calls\nresult = call_python_version(\"2.7\", \"my_module\", \"my_function\", \n [\"Mr\", \"Bear\"]) \nprint(result) \nresult = call_python_version(\"2.7\", \"my_module\", \"my_function\", \n [\"Mrs\", \"Wolf\"]) \nprint(result)\n\nresult in \nHello Mr Bear!\nHello Mrs Wolf!\n\nWhat happened is that a 'gateway' was instantiated waiting\nfor an argument list with channel.receive(). Once it came in, it as been translated and passed to my_function. my_function returned the string it generated and channel.send(...) sent the string back. On other side of the gateway channel.receive() catches that result and returns it to the caller. The caller finally prints the string as produced by my_function in the python 3 module.\n", "You could run python2 using subprocess (python module) doing the following:\nFrom python 3:\n#!/usr/bin/env python3\nimport subprocess\n\npython3_command = \"py2file.py arg1 arg2\" # launch your python2 script\n\nprocess = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)\noutput, error = process.communicate() # receive output from the python2 script\n\nWhere output stores whatever python 2 returned\n", "Maybe to late, but there is one more simple option for call python2.7 scripts:\nscript = [\"python2.7\", \"script.py\", \"arg1\"] \nprocess = subprocess.Popen(\" \".join(script),\n shell=True, \n env={\"PYTHONPATH\": \".\"})\n\n", "I am running my python code with python 3, but I need a tool (ocropus) that is written with python 2.7. I spent a long time trying all these options with subprocess, and kept having errors, and the script would not complete. From the command line, it runs just fine. So I finally tried something simple that worked, but that I had not found in my searches online. I put the ocropus command inside a bash script:\n#!/bin/bash\n\n/usr/local/bin/ocropus-gpageseg $1\n\nI call the bash script with subprocess. \ncommand = [ocropus_gpageseg_path, current_path]\nprocess = subprocess.Popen(command,shell=True,stdout=subprocess.PIPE,stderr=subprocess.PIPE)\noutput, error = process.communicate()\nprint('output',output,'error',error)\n\nThis really gives the ocropus script its own little world, which it seems to need. I am posting this in the hope that it will save someone else some time.\n", "It works for me if I call the python 2 executable directly from a python 3 environment. \npython2_command = 'C:\\Python27\\python.exe python2_script.py arg1'\nprocess = subprocess.Popen(python2_command.split(), stdout=subprocess.PIPE)\noutput, error = process.communicate()\n\npython3_command = 'python python3_script.py arg1'\nprocess = subprocess.Popen(python3_command.split(), stdout=subprocess.PIPE)\noutput, error = process.communicate()\n\n", "I ended up creating a new function in the python3 script, which wraps the python2.7 code. 
It correctly formats error messages created by the python2.7 code and is extending mikelsr's answer and using run() as recommended by subprocess docs.\nin bar.py (python2.7 code):\ndef foo27(input):\n return input * 2\n\nin your python3 file:\nimport ast\nimport subprocess\n\ndef foo3(parameter):\n try:\n return ast.literal_eval(subprocess.run(\n [\n \"C:/path/to/python2.7/python.exe\", \"-c\", # run python2.7 in command mode\n \"from bar import foo27;\"+\n \"print(foo27({}))\".format(parameter) # print the output \n ],\n capture_output=True,\n check=True\n ).stdout.decode(\"utf-8\")) # evaluate the printed output\n except subprocess.CalledProcessError as e:\n print(e.stdout)\n raise Exception(\"foo27 errored with message below:\\n\\n{}\"\n .format(e.stderr.decode(\"utf-8\")))\n\nprint(foo3(21))\n# 42\n\nThis works when passing in simple python objects, like dicts, as the parameter but does not work for objects created by classes, eg. numpy arrays. These have to be serialized and re-instantiated on the other side of the barrier.\n", "Note: This was happening when running my python 2.x s/w in the liclipse IDE.\nWhen I ran it from a bash script on the command line it didn't have the problem.\nHere is a problem & solution I had when mixing python 2.x & 3.x scripts.\nI am running a python 2.6 process & needed to call/execute a python 3.6 script.\nThe environment variable PYTHONPATH was set to point to 2.6 python s/w, so it was choking on the followng: \nFile \"/usr/lib64/python2.6/encodings/__init__.py\", line 123\nraise CodecRegistryError,\\\n\nThis caused the 3.6 python script to fail.\nSo instead of calling the 3.6 program directly I created a bash script which nuked the PYTHONPATH environment variable.\n#!/bin/bash\nexport PYTHONPATH=\n## Now call the 3.6 python scrtipt\n./36psrc/rpiapi/RPiAPI.py $1\n\n", "Instead of calling them in python 3, you could run them in conda env batch by creating a batch file as below:\ncall C:\\ProgramData\\AnacondaNew\\Scripts\\activate.bat\nC:\\Python27\\python.exe \"script27.py\"\nC:\\ProgramData\\AnacondaNew\\python.exe \"script3.py\"\ncall conda deactivate\npause\n" ]
[ 28, 18, 12, 3, 2, 1, 0, 0 ]
[ "I recommend to convert the Python2 files to Python3:\nhttps://pythonconverter.com/\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0027863832_python.txt
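As a companion to the subprocess answers above, here is a minimal sketch of passing structured arguments and results between the two interpreters as JSON; the python2 executable name and the module/function names are assumptions for illustration, not part of any answer:
import json
import subprocess

def call_py2(module, function, args):
    # Build a tiny Python 2 one-liner that imports the target function,
    # decodes the JSON argument list, and prints the JSON-encoded result.
    code = (
        "import json, sys\n"
        "from {m} import {f}\n"
        "print(json.dumps({f}(*json.loads(sys.argv[1]))))"
    ).format(m=module, f=function)
    result = subprocess.run(
        ["python2", "-c", code, json.dumps(args)],
        capture_output=True, check=True, text=True,
    )
    return json.loads(result.stdout)

# e.g. call_py2("my_module", "my_function", ["Mr", "Bear"]) -> "Hello Mr Bear!"
Like the execnet approach, this only works for JSON-serializable values; anything else has to be serialized explicitly on both sides.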
Q: Python - multiple *args inside a tuple (is it possible at all?) I am not sure if that is possible at all. I want multiple *args to be created when I create a tuple and unpack it. For example: alabama_state="Alabama","Montgomery","Mobile","Tuscaloosa","Dothan","Huntsville","Birmingham","Madison","Auburn","Phenix City" state_name,capital,*metropolitan,*city=alabama_state print(state_name) print(capital) print(metropolitan) print(city) I want print(state_name) to print Alabama, print(capital) to print Montgomery, print(metropolitan) to print everything from Mobile to Huntsville included and print(city) to print everything from Birmingham to the end. How can I include a specific count in the *args? I didn't find useful info. A: itertools.islice may be an option, but it is not readable enough: >>> from itertools import islice >>> alabama_state = ("Alabama", "Montgomery", "Mobile", "Tuscaloosa", "Dothan", ... "Huntsville", "Birmingham", "Madison", "Auburn", "Phenix City") >>> it = iter(alabama_state) >>> (state_name, capital, *metropolitan), (*city,) = islice(it, 6), it >>> state_name, capital, metropolitan, city ('Alabama', 'Montgomery', ['Mobile', 'Tuscaloosa', 'Dothan', 'Huntsville'], ['Birmingham', 'Madison', 'Auburn', 'Phenix City']) Example of multiple islice: >>> it = iter(range(10)) >>> (a, *b), (*c,), (*d,) = islice(it, 4), islice(it, 4), it >>> a, b, c, d (0, [1, 2, 3], [4, 5, 6, 7], [8, 9]) A: You can't do this with spread syntax. You can use slices to specify specific indexes. state_name, capital, *rest = alabama_state metropolitan, city = rest[:4], rest[4:]
Python - multiple *args inside a tuple (is it possible at all?)
I am not sure if that is possible at all. I want multiple *args to be created when I create a tuple and unpack it. For example: alabama_state="Alabama","Montgomery","Mobile","Tuscaloosa","Dothan","Huntsville","Birmingham","Madison","Auburn","Phenix City" state_name,capital,*metropolitan,*city=alabama_state print(state_name) print(capital) print(metropolitan) print(city) I want print(state_name) to print Alabama, print(capital) to print Montgomery, print(metropolitan) to print everything from Mobile to Huntsville included and print(city) to print everything from Birmingham to the end. How can I include a specific count in the *args? I didn't find useful info.
[ "itertools.islice may be an option, but it is not readable enough:\n>>> from itertools import islice\n>>> alabama_state = (\"Alabama\", \"Montgomery\", \"Mobile\", \"Tuscaloosa\", \"Dothan\",\n... \"Huntsville\", \"Birmingham\", \"Madison\", \"Auburn\", \"Phenix City\")\n>>> it = iter(alabama_state)\n>>> (state_name, capital, *metropolitan), (*city,) = islice(it, 6), it\n>>> state_name, capital, metropolitan, city\n('Alabama',\n 'Montgomery',\n ['Mobile', 'Tuscaloosa', 'Dothan', 'Huntsville'],\n ['Birmingham', 'Madison', 'Auburn', 'Phenix City'])\n\nExample of multiple islice:\n>>> it = iter(range(10))\n>>> (a, *b), (*c,), (*d,) = islice(it, 4), islice(it, 4), it\n>>> a, b, c, d\n(0, [1, 2, 3], [4, 5, 6, 7], [8, 9])\n\n", "You can't do this with spread syntax. You can use slices to specify specific indexes.\nstate_name, capital, *rest = alabama_state\nmetropolitan, city = rest[:4], rest[4:]\n\n" ]
[ 1, 0 ]
[]
[]
[ "arguments", "iteration", "python", "tuples" ]
stackoverflow_0074515962_arguments_iteration_python_tuples.txt
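Since only one starred target is allowed per assignment, a small helper that slices by explicit sizes is another readable option; this is just a sketch, with the sizes chosen to match the question's data:
def split_at(seq, *sizes):
    # Consecutive chunks of the given sizes; the remainder is the last chunk.
    chunks, start = [], 0
    for size in sizes:
        chunks.append(list(seq[start:start + size]))
        start += size
    chunks.append(list(seq[start:]))
    return chunks

(state_name,), (capital,), metropolitan, city = split_at(alabama_state, 1, 1, 4)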
Q: Select all c files except file name or path contains particular string I have a folder with many subfolders including zip files. I want to select .c files if the file path does not contain particular strings. For example, exclude file paths containing "abc", "myfiles", "new" exclude C:\Users\Downloads\All_h_files\abcmln.c C:\Users\Downloads\All_h_files\myfilesos\mlo.c C:\Users\Downloads\All_h_files\newfile.c C:\Users\Downloads\All_h_files\newfile\sso.c C:\Users\Downloads\All_h_files\nno.c I tried, import os import shutil path = "C:\\Users\\Downloads\\All_h_files\\" destination_folder = "C:\\Users\\Downloads\\All_h" for root, dirs, files in os.walk(path): for file in files: destination = destination_folder + '\\' + file source = os.path.join(root,file) if "new" or "myfiles" or "abc" not in source: if(file.endswith(".c")): shutil.copy(source, destination) print("Files Copied!!!") Not giving the expected result. A: change this condition: if "new" or "myfiles" or "abc" not in source: to if all(item not in source for item in ["new", "myfiles", "abc"]): Furthermore, you could use the glob package to list the files, e.g. glob.glob(os.path.join(what_ever_pth, '*.c')) to list files with the .c extension. And use methods like os.path.join for constructing paths to ensure the behavior is correct no matter which OS.
Select all c files except file name or path contains particular string
I have a folder with many subfolders including zip files. I want to select .c files if the file path does not contain particular strings. For example, exclude file paths containing "abc", "myfiles", "new" exclude C:\Users\Downloads\All_h_files\abcmln.c C:\Users\Downloads\All_h_files\myfilesos\mlo.c C:\Users\Downloads\All_h_files\newfile.c C:\Users\Downloads\All_h_files\newfile\sso.c C:\Users\Downloads\All_h_files\nno.c I tried, import os import shutil path = "C:\\Users\\Downloads\\All_h_files\\" destination_folder = "C:\\Users\\Downloads\\All_h" for root, dirs, files in os.walk(path): for file in files: destination = destination_folder + '\\' + file source = os.path.join(root,file) if "new" or "myfiles" or "abc" not in source: if(file.endswith(".c")): shutil.copy(source, destination) print("Files Copied!!!") Not giving the expected result.
[ "change this sentence:\nif \"new\" or \"myfiles\" or \"abc\" not in source:\n\nto\nif not all([item not in source for item in [\"new\", \"myfiles\", \"abc\"]]):\n\nfurthermore you could use the glob package to list the files. like glob.glob(os.path.join(what_ever_pth, '*.c')) to list file with *.c extension.\nand use methods like os.path.join for constructing paths to ensure the behavior is gonna be correct no matter which os.\n" ]
[ 0 ]
[]
[]
[ "glob", "python", "shutil" ]
stackoverflow_0074515562_glob_python_shutil.txt
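For the same task, pathlib offers an arguably clearer variant; a sketch assuming the question's paths:
import shutil
from pathlib import Path

src_root = Path(r"C:\Users\Downloads\All_h_files")
dest_dir = Path(r"C:\Users\Downloads\All_h")
excluded = ("new", "myfiles", "abc")

for c_file in src_root.rglob("*.c"):   # recursive search for .c files
    if all(word not in str(c_file) for word in excluded):
        shutil.copy(c_file, dest_dir / c_file.name)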
Q: User input shape size, then calculate using class in python Hi I'm a beginner and this is the details to code. Create a parent class called shape. This should have following methods inputSides() – Ask user to enter the sides of the shape. Now create subclasses for a circle, rectangle and triangle. These should include an appropriate area() method that will use the side values from the shape class. This is what I came up with and still struggling class shape(): def __init__(self, r = None, s1 = None, s2 = None, b = None, h = None): self.radius = r self.side1 = s1 self.side2 = s2 self.base = b self.height = h def inputSidesC(self): self.radius = int(input("Enter radius: ")) circle() def inputSidesR(self): self.side1 = int(input("Enter side 1: ")) self.side2 = int(input("Enter side 2: ")) rectangle() def inputSidesT(self): self.base = int(input("Enter base: ")) self.height = int(input("Enter height: ")) triangle() class circle(shape): def __init__(self, r = None): self.radius = r def area(self): pi = 3.14159265359 print("Area of circle: ", pi * (self.radius * 2)) class rectangle(shape): def __init__(self, s1 = None, s2 = None): self.side1 = s1 self.side2 = s2 def area(self): print("area of rectangle", self.side1 * self.side2) class triangle(shape): def __init__(self, b = None, h = None): self.base = b self.height = h def area(self): print("Area of triangle: ", 0.5 * self.base * self.height) c = circle() c.inputSidesC() r = rectangle() r.inputSidesR() t = triangle() t.inputSidesT() Enter radius: 2 Area of circle: 12.57 Enter side 1: 2 Enter side 2: 4 Area of rectangle: 8 Enter base: 2 Enter height: 4 Area of triangle: 4 A: You should work more on inheritance. You shouldn't create instances from parent into child classes. As all of your shapes have an area attribute, let's put the area calculator inside the parent class. class shape(): def calculateArea(self,*args): a = 1 [a:= a*i for i in args] return a This calculateArea function will multiply whatever arguments it receives and return the product. This will work in more situations, like rectangles, triangles etc. class circle(shape): def __init__(self, r = None): self.radius = r def area(self): pi = 3.14159265359 return self.calculateArea(float(self.radius), float(self.radius), pi) As you inherited from the shape class, child classes can use parent class methods. However, in some cases you would need to inherit the shape class but assume that calculating an area is different than for the others. In this case you can override it by simply redefining the method class rectangle(shape): def __init__(self): self.side1 = float(input("edge 1")) self.side2 = float(input("edge 2")) def calculateArea(self): return self.side1 * self.side2 At the end, simply create instances and call methods. c = circle(input("Enter Circle Radius")) print(c.area()) r = rectangle() print(r.calculateArea())
User input shape size, then calculate using class in python
Hi I'm a beginner and this is the details to code. Create a parent class called shape. This should have following methods inputSides() – Ask user to enter the sides of the shape. Now create subclasses for a circle, rectangle and triangle. These should include an appropriate area() method that will use the side values from the shape class. This is what I came up with and still struggling class shape(): def __init__(self, r = None, s1 = None, s2 = None, b = None, h = None): self.radius = r self.side1 = s1 self.side2 = s2 self.base = b self.height = h def inputSidesC(self): self.radius = int(input("Enter radius: ")) circle() def inputSidesR(self): self.side1 = int(input("Enter side 1: ")) self.side2 = int(input("Enter side 2: ")) rectangle() def inputSidesT(self): self.base = int(input("Enter base: ")) self.height = int(input("Enter height: ")) triangle() class circle(shape): def __init__(self, r = None): self.radius = r def area(self): pi = 3.14159265359 print("Area of circle: ", pi * (self.radius * 2)) class rectangle(shape): def __init__(self, s1 = None, s2 = None): self.side1 = s1 self.side2 = s2 def area(self): print("area of rectangle", self.side1 * self.side2) class triangle(shape): def __init__(self, b = None, h = None): self.base = b self.height = h def area(self): print("Area of triangle: ", 0.5 * self.base * self.height) c = circle() c.inputSidesC() r = rectangle() r.inputSidesR() t = triangle() t.inputSidesT() Enter radius: 2 Area of circle: 12.57 Enter side 1: 2 Enter side 2: 4 Area of rectangle: 8 Enter base: 2 Enter height: 4 Area of triangle: 4
[ "You should work more on inheritance.You shouldn't create instances from parent into child classes.\nAs all of your shapes have an area attribute, let's put area calculator inside parent class.\nclass shape():\n def calculateArea(self,*args):\n a = 1\n [a:= a*i for i in args]\n return a\n\nthis calculate area function will multiply whatever it takes as arguments and return. This will work in more situation like rectangle, triangle etc.\nclass circle(shape):\n def __init__(self, r = None):\n self.radius = r\n def area(self):\n pi = 3.14159265359\n return self.calculateArea(float(self.radius),pi,2)\n\nAs you inherited from shape class, child classes can use parent class methods.\nHowever, in some cases you would need to inherit shape class but assume that calculating an area is different than others. In this case you can override it by simple redefine method\nclass rectangle(shape):\n def __init__(self):\n self.side1 = float(input(\"edge 1\"))\n self.side2 = float(input(\"edge 2\"))\n def calculateArea(self):\n return self.side1 * self.side2\n\nat the end, simply create instances and call methods.\nc = circle(input(\"Enter Circle Radius\"))\nprint(c.area())\n\nr = rectangle()\nprint(r.calculateArea())\n\n" ]
[ 0 ]
[]
[]
[ "class", "input", "math", "python", "superclass" ]
stackoverflow_0074516047_class_input_math_python_superclass.txt
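A sketch of the same assignment using an abstract base class, so every subclass is forced to provide inputSides() and area(); math.pi replaces the hand-typed constant. This is one possible structure, not the assignment's required one:
import math
from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def inputSides(self): ...

    @abstractmethod
    def area(self): ...

class Circle(Shape):
    def inputSides(self):
        self.radius = float(input("Enter radius: "))

    def area(self):
        return math.pi * self.radius ** 2

class Rectangle(Shape):
    def inputSides(self):
        self.side1 = float(input("Enter side 1: "))
        self.side2 = float(input("Enter side 2: "))

    def area(self):
        return self.side1 * self.side2

for shape in (Circle(), Rectangle()):
    shape.inputSides()
    print(f"Area: {shape.area():.2f}")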
Q: Add two legends in the same plot I've a x and y. Both are flattened 2D arrays. I've two similar arrays, one for determining the colour of datapoint, another for determining detection method ("transit" or "radial"), which is used for determining the marker shape. a=np.random.uniform(0,100,(10,10)).ravel() #My x b=np.random.uniform(0,100,(10,10)).ravel() #My y d=np.random.choice([0,1,2],(10,10)).ravel() #Color map e=np.random.choice(["radial","transit"],(10,10)).ravel() #Marker type Since there can be only one type of marker in a scatterplot and I have two types of markers, I call the scatterplot twice. a_radial=a[e=="radial"] b_radial=b[e=="radial"] d_radial=d[e=="radial"] a_transit=a[e=="transit"] b_transit=b[e=="transit"] d_transit=d[e=='transit'] fig,ax=plt.subplots() #One plot each for two types of markers. scatter1=ax.scatter(a_radial,b_radial,c=d_radial,marker='o',label="Radial") scatter2=ax.scatter(a_transit,b_transit,c=d_transit,marker='^',label="Transit") ax.legend(*scatter1.legend_elements(),loc=(1.04, 0),title="Legend") ax.legend(loc=(1.01, 0),title="Detection") plt.show() This is giving me the following plot I want a label for color map too but as soon as I add the code for it, the label for "Detection" disappears Why is that and how can I resolve it? I added command for the label legends but it only shows one at a time. I was expecting both of them to show up at the same time. A: You could just manually add the first legend to the Axes: leg1 = ax.legend(*scatter1.legend_elements(), bbox_to_anchor=(1.04, 1), loc="upper left", title="Legend") ax.add_artist(leg1) However, this is not very clear as the color legend uses the marker for Radial and the Detection legend uses just two arbitrary colors out of the three. So a better solution is to make two neutral legends that don't mix marker symbol and color: import matplotlib.patches as mpatches import matplotlib.lines as mlines # ... handles = [mpatches.Patch(color=line.get_color()) for line in scatter1.legend_elements()[0]] leg1 = ax.legend(handles, scatter1.legend_elements()[1], bbox_to_anchor=(1.04, 1), loc="upper left", title="Legend") ax.add_artist(leg1) handles = [mlines.Line2D([], [], marker=marker, mec='k', mfc='w', ls='') for marker in ['o', '^']] ax.legend(handles, ['Radial', 'Transit'], loc=(1.01, 0),title="Detection") As an alternative, you could use seaborn's scatterplot, where you can specify hue and style to get two legend categories (although in one legend), see 4th example in the seaborn scatterplot docs. In this example, however, the marker used for the different hues (days) is the same as for Lunch, so I think my solution above is a bit clearer.
Add two legends in the same plot
I've a x and y. Both are flattened 2D arrays. I've two similar arrays, one for determining the colour of datapoint, another for determining detection method ("transit" or "radial"), which is used for determining the marker shape. a=np.random.uniform(0,100,(10,10)).ravel() #My x b=np.random.uniform(0,100,(10,10)).ravel() #My y d=np.random.choice([0,1,2],(10,10)).ravel() #Color map e=np.random.choice(["radial","transit"],(10,10)).ravel() #Marker type Since there can be only one type of marker in a scatterplot and I have two types of markers, I call the scatterplot twice. a_radial=a[e=="radial"] b_radial=b[e=="radial"] d_radial=d[e=="radial"] a_transit=a[e=="transit"] b_transit=b[e=="transit"] d_transit=d[e=='transit'] fig,ax=plt.subplots() #One plot each for two types of markers. scatter1=ax.scatter(a_radial,b_radial,c=d_radial,marker='o',label="Radial") scatter2=ax.scatter(a_transit,b_transit,c=d_transit,marker='^',label="Transit") ax.legend(*scatter1.legend_elements(),loc=(1.04, 0),title="Legend") ax.legend(loc=(1.01, 0),title="Detection") plt.show() This is giving me the following plot I want a label for color map too but as soon as I add the code for it, the label for "Detection" disappears Why is that and how can I resolve it? I added command for the label legends but it only shows one at a time. I was expecting both of them to show up at the same time.
[ "You could just manually add the first legend to the Axes:\nleg1 = ax.legend(*scatter1.legend_elements(), bbox_to_anchor=(1.04, 1), loc=\"upper left\", title=\"Legend\")\nax.add_artist(leg1)\n\n\nHowever, this is not every clear as the color legend uses the marker for Radial and the Detection legend uses just two arbitrary colors out of the three.\nSo a better solution is to make two neutral legends that don't mix marker symbol and color:\nimport matplotlib.patches as mpatches\nimport matplotlib.lines as mlines\n\n# ...\n\nhandles = [mpatches.Patch(color=line.get_color()) for line in scatter1.legend_elements()[0]]\nleg1 = ax.legend(handles, scatter1.legend_elements()[1], bbox_to_anchor=(1.04, 1), loc=\"upper left\", title=\"Legend\")\nax.add_artist(leg1)\n\nhandles = [mlines.Line2D([], [], marker=marker, mec='k', mfc='w', ls='') for marker in ['o', '^']]\nax.legend(handles, ['Radial', 'Transit'], loc=(1.01, 0),title=\"Detection\")\n\n\n\nAs an alternative, you could use seaborn's scatterplot, where you can specify hue and style to get two legend categories (although in one legend), see 4th example in the seaborn scatterplot docs. In this example, however, the marker used for the different hues (days) is the same as for Lunch, so I think my solution above is a bit clearer.\n" ]
[ 2 ]
[]
[]
[ "colors", "legend", "matplotlib", "python", "scatter_plot" ]
stackoverflow_0074510820_colors_legend_matplotlib_python_scatter_plot.txt
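A self-contained toy version of the two-legend pattern from that answer, runnable without the question's data; the labels and locations are illustrative choices:
import matplotlib.pyplot as plt
import matplotlib.lines as mlines

fig, ax = plt.subplots()
sc = ax.scatter([1, 2, 3], [1, 2, 3], c=[0, 1, 2], marker='o')
ax.scatter([1, 2, 3], [3, 2, 1], c=[0, 1, 2], marker='^')

leg1 = ax.legend(*sc.legend_elements(), title="Legend", loc="upper left")
ax.add_artist(leg1)  # keep the first legend alive when the second is added

handles = [mlines.Line2D([], [], marker=m, mec='k', mfc='w', ls='')
           for m in ('o', '^')]
ax.legend(handles, ["Radial", "Transit"], title="Detection", loc="lower right")
plt.show()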
Q: I'm trying to get specific results from my Lucky Sevens program, but I'm not sure where to go from here I'm trying to calculate the number of rolls it takes to go broke, and the amount of rolls that would have left you with the most money. The program is split into several functions outside of main (not my choice) so that makes it more difficult for me. I'm very new to python, and this is an exercise for school. I'm just not really sure where to go from here, and I realize I'm probably doing some of this wrong. Here's the code I have so far: import random def displayHeader(funds): print ("--------------------------") print ("--------------------------") print ("- Lucky Sevens -") print ("--------------------------") print ("--------------------------") funds = int(input("How many dollars do you have? ")) def rollDie(newFunds): #this function is supposed to simulate the roll of two die and return results while funds > 0: diceRoll = random.randint(1,6) totalRoll = (diceRoll + diceRoll) if totalRoll == 7: funds = funds + 4 else: funds = funds - 1 if funds == 0: newFunds = funds def displayResults(): #this function is supposed to display the final results. #the number of rolls, the number of rolls you should have stopped at, and the max amount of money you would have had. def main(): #everything gathered from the last function would be printed here. main() A: import random maxmoney = [] minmoney = [] def displayHeader(): print ("--------------------------") print ("--------------------------") print ("- Lucky Sevens -") print ("--------------------------") print ("--------------------------") funds = int(input("How many dollars do you have? ")) return funds def rollDie(): #this function is supposed to simulate the roll of two die and return results funds = displayHeader() while funds > 0: diceRoll1 = random.randint(1,6) diceRoll2 = random.randint(1,6) totalRoll = (diceRoll1 + diceRoll2) if totalRoll == 7: funds = funds + 4 maxmoney.append(funds) else: funds = funds - 1 minmoney.append(funds) def displayResults(): #this function is supposed to display the final results. #the number of rolls, the number of rolls you should have stopped at, and the max amount of money you would have had. rollDie() numOfRolls = len(maxmoney) + len(minmoney) numOfRolls2Stop = (len(maxmoney) - 1 - maxmoney[::-1].index(max(maxmoney))) + (len(minmoney) - 1 - minmoney[::-1].index(max(maxmoney)-1)) + 1 if maxmoney and minmoney else 0 maxAmount = max(maxmoney) if maxmoney else 0 return numOfRolls, numOfRolls2Stop, maxAmount def main(): #everything gathered from the last function would be printed here. a, b, c = displayResults() print('The number of total rolls is : ' + str(a)) print("The number of rolls you should've stopped at is: " + str(b)) print("The maximum amount of money you would've had is: $" + str(c)) main() A: Your program uses variables that can only be accessed in certain scopes, so I think the most recommended thing is that you use a class. The input call in displayHeader could stop the execution of the program, since if you do not introduce a numerical value it will raise an exception called ValueError. If this does not help you much, I advise you to read the code carefully and add missing variables such as the input amount and the final amount and others... class rollDiceGame(): def __init__(self): self.funds = 0 self.diceRollCount = 0 def displayHeader(self): print ("--------------------------") print ("--------------------------") print ("- Lucky Sevens -") print ("--------------------------") print ("--------------------------") self.funds = int(input("How many dollars do you have? ")) def rollDice(self): while self.funds > 0: diceRoll1 = random.randint(1,6) diceRoll2 = random.randint(1,6) totalRoll = (diceRoll1 + diceRoll2) self.diceRollCount += 1 if totalRoll == 7: self.funds += 4 else: self.funds -= 1 def displayResult(self): print('Roll count %d' % (self.diceRollCount)) print('Current funds %d' % (self.funds)) def main(): test = rollDiceGame() test.displayHeader() test.rollDice() test.displayResult() main()
I'm trying to get specific results from my Lucky Sevens program, but I'm not sure where to go from here
I'm trying to calculate the number of rolls it takes to go broke, and the amount of rolls that would have left you with the most money. The program is split into several functions outside of main (not my choice) so that makes it more difficult for me. I'm very new to python, and this is an exercise for school. I'm just not really sure where to go from here, and I realize I'm probably doing some of this wrong. Here's the code I have so far: import random def displayHeader(funds): print ("--------------------------") print ("--------------------------") print ("- Lucky Sevens -") print ("--------------------------") print ("--------------------------") funds = int(input("How many dollars do you have? ")) def rollDie(newFunds): #this function is supposed to simulate the roll of two die and return results while funds > 0: diceRoll = random.randint(1,6) totalRoll = (diceRoll + diceRoll) if totalRoll == 7: funds = funds + 4 else: funds = funds - 1 if funds == 0: newFunds = funds def displayResults(): #this function is supposed to display the final results. #the number of rolls, the number of rolls you should have stopped at, and the max amount of money you would have had. def main(): #everything gathered from the last function would be printed here. main()
[ "import random\n\nmaxmoney = []\nminmoney = []\n\ndef displayHeader():\n print (\"--------------------------\")\n print (\"--------------------------\")\n print (\"- Lucky Sevens -\")\n print (\"--------------------------\")\n print (\"--------------------------\")\n funds = int(input(\"How many dollars do you have? \"))\n return funds\n\n \n\ndef rollDie(): \n #this function is supposed to simulate the roll of two die and return results\n funds = displayHeader()\n while funds > 0:\n diceRoll1 = random.randint(1,6)\n diceRoll2 = random.randint(1,6)\n totalRoll = (diceRoll1 + diceRoll2)\n if totalRoll == 7:\n funds = funds + 4\n maxmoney.append(funds)\n else:\n funds = funds - 1\n minmoney.append(funds)\n\n\ndef displayResults(): \n #this function is supposed to display the final results. \n #the number of rolls, the number of rolls you should have stopped at, and the max amount of money you would have had.\n rollDie()\n numOfRolls = len(maxmoney) + len(minmoney)\n numOfRolls2Stop = (len(maxmoney) - 1 - maxmoney[::-1].index(max(maxmoney))) + (len(minmoney) - 1 - minmoney[::-1].index(max(maxmoney)-1)) + 1 if maxmoney and minmoney else 0\n maxAmount = max(maxmoney) if maxmoney else 0\n return numOfRolls, numOfRolls2Stop, maxAmount\n\ndef main():\n #everything gathered from the last function would be printed here. \n a, b, c = displayResults()\n print('The number of total rolls is : ' + str(a))\n print(\"The number of rolls you should've stopped at is: \" + str(b))\n print(\"The maximun amount of money you would've had is: $\" + str(c))\n \n\nmain()\n\n", "Your program use variables that can only be accessed in certain scopes, so I think that the most recommended thing is that you use a class.\ndisplayHeader input method it could stop the execution of the program since if you do not introduce a numerical value it will raises an exception called ValueError, if this does not help you much, I advise you to read the code carefully and add missing variables such as the input amount and the final amount and others...\nclass rollDiceGame():\n def __init__(self): \n self.funds = 0\n self.diceRollCount = 0\n \n def displayHeader(self):\n print (\"--------------------------\")\n print (\"--------------------------\")\n print (\"- Lucky Sevens -\")\n print (\"--------------------------\")\n print (\"--------------------------\")\n self.funds = int(input(\"How many dollars do you have? \")) \n\n def rollDice(self):\n while self.funds > 0:\n diceRoll = random.randint(1,6)\n totalRoll = (diceRoll + diceRoll)\n self.diceRollCount += 1\n if totalRoll == 7:\n self.funds += 4\n else:\n self.funds -= 1\n\n def displayResult(self):\n print('Roll count %d' % (self.diceRollCount))\n print('Current funds %d' % (self.funds))\n\ndef main():\n test = RollDiceGame()\n test.displayHeader()\n test.rollDice()\n test.displayResult()\n\nmain()\n\n" ]
[ 0, 0 ]
[]
[]
[ "loops", "parameters", "python" ]
stackoverflow_0074514985_loops_parameters_python.txt
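A compact reference simulation of the statistics the question asks for, kept separate from the answers above; a sketch only, with the starting funds chosen arbitrarily:
import random

def play(funds):
    # Play until broke; return total rolls, the roll at which funds peaked,
    # and the peak amount itself.
    rolls = best_roll = 0
    max_funds = funds
    while funds > 0:
        rolls += 1
        total = random.randint(1, 6) + random.randint(1, 6)  # two separate dice
        funds += 4 if total == 7 else -1
        if funds > max_funds:
            max_funds, best_roll = funds, rolls
    return rolls, best_roll, max_funds

total, best, peak = play(10)
print(f"Broke after {total} rolls; best stop was roll {best} with ${peak}.")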
Q: Label a whole numpy array with one label on matplotlib I would like to label a whole numpy array with only one label. The following code for example creates 6 (=2+4) labels instead of only 2 labels: import numpy as np import matplotlib.pyplot as plt a = np.random.rand(10,2) b = np.random.rand(10,4) plt.figure() plt.plot(a, 'blue', label = 'a') plt.plot(b, 'red', label = 'b') plt.legend() How should the code above be modified to create only 2 legend labels, 'a' and 'b'? A: a_lines = plt.plot(a, c='blue') b_lines = plt.plot(b, c='red') plt.legend(handles=[a_lines[0], b_lines[0]], labels=['a', 'b'])
Label a whole numpy array with one label on matplotlib
I would like to label a whole numpy array with only one label. The following code for example creates 6 (=2+4) labels instead of only 2 labels: import numpy as np import matplotlib.pyplot as plt a = np.random.rand(10,2) b = np.random.rand(10,4) plt.figure() plt.plot(a, 'blue', label = 'a') plt.plot(b, 'red', label = 'b') plt.legend() How should the code above be modified to create only 2 legend labels, 'a' and 'b'?
[ "a_lines = plt.plot(a, c='blue')\nb_lines = plt.plot(b, c='red')\nplt.legend(handles=[a_lines[0], b_lines[0]], labels=['a', 'b'])\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "numpy_ndarray", "python" ]
stackoverflow_0074509789_matplotlib_numpy_ndarray_python.txt
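An alternative to building handles manually is to label only the first line of each group; matplotlib skips legend labels that start with an underscore, so '_nolegend_' hides the rest. A sketch reusing the question's arrays:
import numpy as np
import matplotlib.pyplot as plt

a = np.random.rand(10, 2)
b = np.random.rand(10, 4)

plt.figure()
for i, col in enumerate(a.T):
    plt.plot(col, color='blue', label='a' if i == 0 else '_nolegend_')
for i, col in enumerate(b.T):
    plt.plot(col, color='red', label='b' if i == 0 else '_nolegend_')
plt.legend()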
Q: How to change file names in bulk, beginner here I have file names like this in a directory; what I want to do is: ISOLUX_LL2023_864-EN-P4-500-730.JPG rename this file, for example, to: LL2023_864-EN-P4-500-700.jpg first to delete "ISOLUX_" from all, then turn the file extension to .jpg. I am a beginner by the way, don't know much about using Python. Thanks in advance. I tried the code I found online but couldn't get it to work A: This should work. As you have not mentioned how the "bulk" files should be named, I just made a list to name them. The old file names are replaced with the new ones; check the number of files in the folder and adjust range(3) accordingly. You can also use a loop to create a list of the new names. import os from os import listdir from os.path import isfile, join new_file_name = ["10.txt", "20.txt", "30.txt"] # new file names cwd = os.getcwd() old_file_name = [os.path.join(cwd, f) for f in os.listdir(cwd) if os.path.isfile(os.path.join(cwd, f))] #getting the file names in the folder for i in range(3): #Changing file name os.rename(old_file_name[i], new_file_name[i])
How to change file names in bulk, beginner here
I have file names like this in a directory; what I want to do is: ISOLUX_LL2023_864-EN-P4-500-730.JPG rename this file, for example, to: LL2023_864-EN-P4-500-700.jpg first to delete "ISOLUX_" from all, then turn the file extension to .jpg. I am a beginner by the way, don't know much about using Python. Thanks in advance. I tried the code I found online but couldn't get it to work
[ "This should work, As you have not mentioned a way the \"bulk\" files should be named I just made a list to name them.\nThe file names would change for the old file name, check the number of files in the folder for the range(3).\nYou can also use a loop to create a list of the new names.\nimport os\nfrom os import listdir\nfrom os.path import isfile, join\n\nnew_file_name = [\"10.txt\", \"20.txt\", \"30.txt\"] # new file names\n\ncwd = os.getcwd()\nold_file_name = [os.path.join(cwd, f) for f in os.listdir(cwd) if \nos.path.isfile(os.path.join(cwd, f))] #getting the file names in the folder\n\nfor i in range(3): #Changing file name\n os.rename(old_file_name[i], new_file_name[i])\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "rename", "windows" ]
stackoverflow_0074515551_python_python_3.x_rename_windows.txt
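For the renaming actually described in that question (strip the ISOLUX_ prefix and lowercase the extension), here is a pathlib sketch; the folder path is a placeholder and removeprefix needs Python 3.9+:
from pathlib import Path

folder = Path(r"C:\path\to\images")  # hypothetical location of the JPGs

for f in folder.glob("ISOLUX_*.JPG"):
    # Drop the prefix, then swap .JPG for .jpg
    target = f.with_name(f.name.removeprefix("ISOLUX_")).with_suffix(".jpg")
    f.rename(target)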
Q: brownie:ValueError: execution reverted: VM Exception while processing transaction: revert Macbook Pro : Monterey Intel Core i7 Brownie v1.17.2 I am learning solidity according to this reference (https://www.youtube.com/watch?v=M576WGiDBdQ&t=25510s). What I tried to do here is use brownie to deploy a contract (FundMe) in a script (deploy.py), then write a test script (scripts/fund_and_withdraw.py). I met the same error: the MockV3Aggregator deployed successfully, but the getEntranceFee is 0. I googled it and found this answer, which I don't quite follow (https://ethereum.stackexchange.com/questions/114889/deploying-ganache-local-w-brownie-vm-exception-while-processing-transaction-in): getPrice() isn't returning a number you want from the mock, somewhere in the vicinity of 2B. Think this is a bug with the Ganache implementation- performing the calculation (minimumUSD * precision) / price in getEntranceFee() gives you a number less than 1- and, since Solidity can't handle floats, Solidity simply sees it as a 0, and the whole thing errors out. scripts/fund_and_withdraw.py from brownie import FundMe from scripts.helpful_scripts import get_account def fund(): fund_me = FundMe[-1] account = get_account() entrance_fee = fund_me.getEntranceFee() print(f"The entrance fee is : {entrance_fee} !") print("funding") fund_me.fund({"from": account, "value": entrance_fee}) print(f"Funded {entrance_fee} !") def withdraw(): fund_me = FundMe[-1] account = get_account() fund_me.withdraw({"from": account}) def main(): fund() withdraw() deploy.py from brownie import FundMe, network, config, MockV3Aggregator from scripts.helpful_scripts import ( get_account, deploy_mocks, LOCAL_BLOCKCHAIN_ENVIRONMENT, ) def deploy_fund_me(): account = get_account() # if we have a persistent network like rinkeby,use the associated address # otherwise ,deploy mocks if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENT: price_feed_address = config["networks"][network.show_active()][ "eth_usd_price_feed" ] else: deploy_mocks() # just use the latest mockV3Aggregator price_feed_address = MockV3Aggregator[-1].address print("***********************************************************") print(f"MockVeAggrator's address is {price_feed_address}") fund_me = FundMe.deploy( price_feed_address, {"from": account}, publish_source=config["networks"][network.show_active()].get("verify"), ) print("***********************************************************") print(f"The Ether price is :{fund_me.getPrice()}\n") print(f"Contract deployed to {fund_me.address}\n") entrance_fee = fund_me.getEntranceFee() print("***********************************************************") print(f"The entrance fee is : {entrance_fee} !\n") return fund_me def main(): deploy_fund_me() FundMe.sol // SPDX-License-Identifier: MIT pragma solidity 0.8.0; // we need tell brownie @chainlink means == what input in config,need to tell compiler import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; import "@chainlink/contracts/src/v0.6/vendor/SafeMathChainlink.sol"; contract FundMe { //using SafeMathChainlink for uint256; mapping(address => uint256) public addressToAmountFunded; address[] public funders; address public owner; AggregatorV3Interface public priceFeed; // if you're following along with the freecodecamp video // Please see https://github.com/PatrickAlphaC/fund_me // to get the starting solidity contract code, it'll be slightly different than this! constructor(address _priceFeed) { // make price feed a parameter priceFeed = AggregatorV3Interface(_priceFeed); owner = msg.sender; } function fund() public payable { uint256 mimimumUSD = 50 * 10**18; require( getConversionRate(msg.value) >= mimimumUSD, "You need to spend more ETH!" ); addressToAmountFunded[msg.sender] += msg.value; funders.push(msg.sender); } function getVersion() public view returns (uint256) { return priceFeed.version(); } function getPrice() public view returns (uint256) { (, int256 answer, , , ) = priceFeed.latestRoundData(); return uint256(answer * 10000000000); } // 1000000000 function getConversionRate(uint256 ethAmount) public view returns (uint256) { uint256 ethPrice = getPrice(); uint256 ethAmountInUsd = (ethPrice * ethAmount) / 1000000000000000000; return ethAmountInUsd; } function getEntranceFee() public view returns (uint256) { // mimimumUSD uint256 mimimumUSD = 50 * 10**18; uint256 price = getPrice(); uint256 precision = 1 * 10**18; return (mimimumUSD * precision) / price; } modifier onlyOwner() { require(msg.sender == owner); _; } function withdraw() public payable onlyOwner { payable(msg.sender).transfer(address(this).balance); for ( uint256 funderIndex = 0; funderIndex < funders.length; funderIndex++ ) { address funder = funders[funderIndex]; addressToAmountFunded[funder] = 0; } funders = new address[](0); } } Update: the error disappeared magically (it may come back later); right now the error info is: list index out of range. Actually I faced the same error in another project (Brownie test IndexError: list index out of range). According to the answer and the brownie docs, I need to add an account. What confused me is: if I launched the ganache local blockchain already, why do I still need to add an account, or did I add it the wrong way? A: function getEntranceFee() public view returns (uint256) { // mimimumUSD uint256 mimimumUSD = 50 * 10**18; uint256 price = getPrice(); uint256 precision = 1 * 10**18; return (mimimumUSD * precision) / price; } You do not need to multiply with precision. Currently, assuming an ETH price of 3000, you are returning (50 * (10^18) * (10^18)) / (3000 * 10^10) (50 * (10^36)) / (3 * 10^13) (50/3)*(10^23) I think your return value should be return mimimumUSD / price; A: Hi I'm following your same course, I too had fallen into the same problem as you. I forgot to replace the address with the variable price_feed_address here: deploy.py fund_me = FundMe.deploy( price_feed_address, # <--- before there was "0x8A..etc" {"from": account}, publish_source=config["networks"][network.show_active()].get("verify") ) Now everything works. The only difference I found in our codes is here: deploy.py from brownie import FundMe, MockV3Aggregator, network, config from scripts.helpful_script import ( get_account, deploy_mocks, LOCAL_BLOCKCHAIN_ENVIRONMENTS ) def deploy_fund_me(): account = get_account() # pass the price feed address to our fundme contract # if we are on a persistent network like rinkeby, use the associated address # otherwise, deploy mocks if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENTS: price_feed_address = config["networks"][network.show_active()][ "eth_usd_price_feed" ] else: deploy_mocks() price_feed_address = MockV3Aggregator[-1].address fund_me = FundMe.deploy( price_feed_address, {"from": account}, publish_source=config["networks"][network.show_active()].get("verify") ) print(f"Contract deployed to {fund_me.address}") return fund_me def main(): deploy_fund_me() You call: entrance_fee = fund_me.getEntranceFee() A: If you're getting the getEntranceFee as 0, remove the web3.toWei function in this part right here in the helpful_scripts.py file: MockV3Aggregator.deploy( DECIMALS, Web3.toWei(STARTING_PRICE, "ether"), {"from": get_account()} ) and change to this: MockV3Aggregator.deploy(DECIMALS, STARTING_PRICE, {"from": get_account()}) Since you already specified at the top (as global variables) that the mock price output decimal place would be 8 and added the additional 8 zeros after 2000 to send the mock constructor an initial price with 8 decimal places, by leaving the toWei function you're adding an additional 18 decimal places for a total of 26 which is wrong. In addition, your output after deploying the mock is being added another 10 decimal places for a total of 36. DECIMALS = 8 STARTING_PRICE = 200000000000 My guess is the reason you're getting 0 is because you exceeded the size allowed by the uint8 decimal variable or maybe sending the initial price with 26 decimal places to the constructor takes more gas fees than the gas limit allowed to each block and transaction by ganache. Someone clarify if able! A: Please compare your FundMe.sol with the github source: https://github.com/PatrickAlphaC/brownie_fund_me/blob/main/contracts/FundMe.sol You'll see: function getEntranceFee() public view returns (uint256) return (mimimumUSD * precision) / price; it has already been changed: return ((minimumUSD * precision) / price) + 1; A: I also got this issue, resolved by fixing the entrance fee. We need to keep the brackets: function getEntranceFee() public view returns (uint256) { uint256 minimumUSD = 50 * (10**18); uint256 price = getPrice(); uint256 pricision = 1 * (10**18); return ((minimumUSD * pricision) / price); }
brownie:ValueError: execution reverted: VM Exception while processing transaction: revert
Macbook Pro : Monterey Intel Core i7 Brownie v1.17.2 I am learning solidity according to this reference (https://www.youtube.com/watch?v=M576WGiDBdQ&t=25510s). What I tried to do here is use brownie to deploy a contract (FundMe) in a script (deploy.py), then write a test script (scripts/fund_and_withdraw.py). I met the same error: the MockV3Aggregator deployed successfully, but the getEntranceFee is 0. I googled it and found this answer, which I don't quite follow (https://ethereum.stackexchange.com/questions/114889/deploying-ganache-local-w-brownie-vm-exception-while-processing-transaction-in): getPrice() isn't returning a number you want from the mock, somewhere in the vicinity of 2B. Think this is a bug with the Ganache implementation- performing the calculation (minimumUSD * precision) / price in getEntranceFee() gives you a number less than 1- and, since Solidity can't handle floats, Solidity simply sees it as a 0, and the whole thing errors out. scripts/fund_and_withdraw.py from brownie import FundMe from scripts.helpful_scripts import get_account def fund(): fund_me = FundMe[-1] account = get_account() entrance_fee = fund_me.getEntranceFee() print(f"The entrance fee is : {entrance_fee} !") print("funding") fund_me.fund({"from": account, "value": entrance_fee}) print(f"Funded {entrance_fee} !") def withdraw(): fund_me = FundMe[-1] account = get_account() fund_me.withdraw({"from": account}) def main(): fund() withdraw() deploy.py from brownie import FundMe, network, config, MockV3Aggregator from scripts.helpful_scripts import ( get_account, deploy_mocks, LOCAL_BLOCKCHAIN_ENVIRONMENT, ) def deploy_fund_me(): account = get_account() # if we have a persistent network like rinkeby,use the associated address # otherwise ,deploy mocks if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENT: price_feed_address = config["networks"][network.show_active()][ "eth_usd_price_feed" ] else: deploy_mocks() # just use the latest mockV3Aggregator price_feed_address = MockV3Aggregator[-1].address print("***********************************************************") print(f"MockVeAggrator's address is {price_feed_address}") fund_me = FundMe.deploy( price_feed_address, {"from": account}, publish_source=config["networks"][network.show_active()].get("verify"), ) print("***********************************************************") print(f"The Ether price is :{fund_me.getPrice()}\n") print(f"Contract deployed to {fund_me.address}\n") entrance_fee = fund_me.getEntranceFee() print("***********************************************************") print(f"The entrance fee is : {entrance_fee} !\n") return fund_me def main(): deploy_fund_me() FundMe.sol // SPDX-License-Identifier: MIT pragma solidity 0.8.0; // we need tell brownie @chainlink means == what input in config,need to tell compiler import "@chainlink/contracts/src/v0.6/interfaces/AggregatorV3Interface.sol"; import "@chainlink/contracts/src/v0.6/vendor/SafeMathChainlink.sol"; contract FundMe { //using SafeMathChainlink for uint256; mapping(address => uint256) public addressToAmountFunded; address[] public funders; address public owner; AggregatorV3Interface public priceFeed; // if you're following along with the freecodecamp video // Please see https://github.com/PatrickAlphaC/fund_me // to get the starting solidity contract code, it'll be slightly different than this! constructor(address _priceFeed) { // make price feed a parameter priceFeed = AggregatorV3Interface(_priceFeed); owner = msg.sender; } function fund() public payable { uint256 mimimumUSD = 50 * 10**18; require( getConversionRate(msg.value) >= mimimumUSD, "You need to spend more ETH!" ); addressToAmountFunded[msg.sender] += msg.value; funders.push(msg.sender); } function getVersion() public view returns (uint256) { return priceFeed.version(); } function getPrice() public view returns (uint256) { (, int256 answer, , , ) = priceFeed.latestRoundData(); return uint256(answer * 10000000000); } // 1000000000 function getConversionRate(uint256 ethAmount) public view returns (uint256) { uint256 ethPrice = getPrice(); uint256 ethAmountInUsd = (ethPrice * ethAmount) / 1000000000000000000; return ethAmountInUsd; } function getEntranceFee() public view returns (uint256) { // mimimumUSD uint256 mimimumUSD = 50 * 10**18; uint256 price = getPrice(); uint256 precision = 1 * 10**18; return (mimimumUSD * precision) / price; } modifier onlyOwner() { require(msg.sender == owner); _; } function withdraw() public payable onlyOwner { payable(msg.sender).transfer(address(this).balance); for ( uint256 funderIndex = 0; funderIndex < funders.length; funderIndex++ ) { address funder = funders[funderIndex]; addressToAmountFunded[funder] = 0; } funders = new address[](0); } } Update: the error disappeared magically (it may come back later); right now the error info is: list index out of range. Actually I faced the same error in another project (Brownie test IndexError: list index out of range). According to the answer and the brownie docs, I need to add an account. What confused me is: if I launched the ganache local blockchain already, why do I still need to add an account, or did I add it the wrong way?
[ "function getEntranceFee() public view returns (uint256) {\n // mimimumUSD\n uint256 mimimumUSD = 50 * 10**18;\n uint256 price = getPrice();\n uint256 precision = 1 * 10**18;\n return (mimimumUSD * precision) / price;\n }\n\nYou do not need to multiply with precison. currently, assuming eth price 3000, you are returning\n (50 * (10^18) * (10^18)) / (3000 * 10^10)\n\n (50 * (10^36)) / (3 * 10^13)\n\n (50/3)*(10^23)\n\nI think your return value should be\n return mimimumUSD / price;\n\n", "Hi I'm following your same course, I too had fallen into the same problem as you. I forgot to replace the address with the variable price_feed_address here:\ndeploy.py\n fund_me = FundMe.deploy(\n price_feed_address, # <--- before there was \"0x8A..ect\"\n {\"from\": account}, \n publish_source=config[\"networks\"][network.show_active()].get(\"verify\")\n )\n\nNow everything works.\nThe only difference I found in our codes is here:\ndeploy.py\nfrom brownie import FundMe, MockV3Aggregator, network, config\nfrom scripts.helpful_script import (\n get_account, \n deploy_mocks, \n LOCAL_BLOCKCHAIN_ENVIRONMENTS\n)\n\ndef deploy_fund_me():\n account = get_account()\n # pass the price feed address to our fundme contract\n \n # if we are on a persistent network like rinkeby, use the associated address\n # otherwise, deploy mocks\n\n if network.show_active() not in LOCAL_BLOCKCHAIN_ENVIRONMENTS:\n price_feed_address = config[\"networks\"][network.show_active()][\n \"eth_usd_price_feed\"\n ]\n else:\n deploy_mocks()\n price_feed_address = MockV3Aggregator[-1].address\n \n fund_me = FundMe.deploy(\n price_feed_address,\n {\"from\": account}, \n publish_source=config[\"networks\"][network.show_active()].get(\"verify\")\n )\n print(f\"Contract deployed to {fund_me.address}\")\n return fund_me\n\ndef main():\n deploy_fund_me()\n\nYou call:\nentrance_fee = fund_me.getEntranceFee()\n\n", "If you're getting the getEntrancePrice as 0, remove the web3.toWei function in this part right here in the helpful_scripts.py file:\nMockV3Aggregator.deploy(\n DECIMALS, Web3.toWei(STARTING_PRICE, \"ether\"), {\"from\": get_account()}\n )\n\nand change to this:\nMockV3Aggregator.deploy(DECIMALS, STARTING_PRICE, {\"from\": get_account()})\n\nSince you already specified at the top(as global variables) that the mock price output decimal place would be 8 and added the additional 8 zeros after 2000 to send the mock constructor an initial price with 8 decimal places, by leaving the toWei function your adding an additional 18 decimal places for a total of 26 which is wrong.\nIn addition, you're output after deploying the mock is being added another 10 decimal places for a total of 36.\nDECIMALS = 8\nSTARTING_PRICE = 200000000000\n\nMy guess is the reason you're getting 0 is because you exceeded the size allowed by the uint8 decimal variable or maybe sending the initial price with 26 decimal places to the constructor takes more gas fees than the gas limit allowed to each block and transaction by ganache. Someone clarify if able!\n", "Please compare your FundMe.sol with the github source:\nhttps://github.com/PatrickAlphaC/brownie_fund_me/blob/main/contracts/FundMe.sol\nYou'll see: function getEntranceFee() public view returns (uint256)\n return (mimimumUSD * precision) / price;\n\nit already be changed:\n return ((minimumUSD * precision) / price) + 1;\n\n", "I also got this issue, resolved by fixing the entrance fee. 
we need to keep the brackets\n function getEntranceFee() public view returns (uint256) {\n uint256 minimumUSD = 50 * (10**18);\n uint256 price = getPrice();\n uint256 pricision = 1 * (10**18);\n return ((minimumUSD * pricision) / price);\n }\n\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "brownie", "chainlink", "ethereum", "python", "solidity" ]
stackoverflow_0070751581_brownie_chainlink_ethereum_python_solidity.txt
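The truncation those answers describe is easy to reproduce with plain Python integers, whose floor division truncates just as Solidity does; a sketch with an assumed mock price, not taken from the course:
minimum_usd = 50 * 10**18
precision = 10**18
price = 3000 * 10**18  # pretend ETH/USD, already scaled the way getPrice() scales

fee = minimum_usd * precision // price
print(fee)                                            # 16666666666666666
print(fee * price >= minimum_usd * precision)         # False: one wei short
print((fee + 1) * price >= minimum_usd * precision)   # True: hence the "+ 1"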
Q: Unable to print TCPdump information using python subprocess I wanted to process tcpdump output in a python script and so far I was able to get to this implementation from subprocess import Popen, PIPE, CalledProcessError import os import signal import time if __name__=="__main__": cmd = ["sudo","tcpdump", "-c","1000","-i","any","port","22","-n"] with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p: try: for line in p.stdout: print(line,flush=True) # process line here except KeyboardInterrupt: print("Quitting") This is what I understood from the second answer of this previously asked question. While it is not waiting for the subprocess to complete to print the output of the tcpdump, I still get the output in chunks of 20-30 lines at a time. Is there a way to read even if there is a single line in stdout of the subprocess? PS: I am running this script on a raspberry Pi 4 with ubuntu server 22.04.1
Unable to print TCPdump information using python subprocess
I wanted to process tcpdump output in a python script and so far I was able to get to this implementation from subprocess import Popen, PIPE, CalledProcessError import os import signal import time if __name__=="__main__": cmd = ["sudo","tcpdump", "-c","1000","-i","any","port","22","-n"] with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p: try: for line in p.stdout: print(line,flush=True) # process line here except KeyboardInterrupt: print("Quitting") This is what I understood from the second answer of this previously asked question. While it is not waiting for the subprocess to complete to print the output of the tcpdump, I still get the output in chunks of 20-30 lines at a time. Is there a way to read even if there is a single line in stdout of the subprocess? PS: I am running this script on a raspberry Pi 4 with ubuntu server 22.04.1
[ "tcpdump uses a larger buffer if you connect its standard output to a pipe. You can easily see this by running the following two commands. (I changed the count from 1000 to 40 and removed port 22 in order to quickly get output on my system.)\n$ sudo tcpdump -c 40 -i any -n\n$ sudo tcpdump -c 40 -i any -n | cat\n\nThe first command prints one line at a time. The second collects everything in a buffer and prints everything when tcpdump exits. The solution is to tell tcpdump to use line buffering with the -l argument.\n$ sudo tcpdump -l -c 40 -i any -n | cat\n\nDo the same in your Python program.\nimport subprocess\n\ncmd = [\"sudo\", \"tcpdump\", \"-l\", \"-c\", \"40\", \"-i\", \"any\", \"-n\"]\nwith subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0, text=True) as p:\n for line in p.stdout:\n print(line.strip())\n\nWhen I run this, I get one line printed at a time.\nNote that universal_newlines is a backward-compatible alias for text, so the latter should be preferred.\n" ]
[ 1 ]
[]
[]
[ "networking", "python", "scripting", "subprocess", "ubuntu" ]
stackoverflow_0074499381_networking_python_scripting_subprocess_ubuntu.txt
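Once lines arrive one at a time, they can be parsed on the fly; a rough sketch that only handles typical IPv4 lines of the form "time IP src > dst: ...", building on the answer's command:
import subprocess

cmd = ["sudo", "tcpdump", "-l", "-c", "40", "-i", "any", "-n"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0, text=True) as p:
    for line in p.stdout:
        fields = line.split()
        # Skip non-matching lines (link-layer notices, truncated output, etc.)
        if len(fields) >= 5 and fields[3] == ">":
            timestamp, src, dst = fields[0], fields[2], fields[4].rstrip(":")
            print(timestamp, src, "->", dst)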
Q: how to define selection condition in regex in python I have a string in which some binary numbers appear. I want to count the number of occurrences of a given pattern, but I want to set my pattern to match 7 characters and above, so the result should show only matches of at least 7 characters. That is, how can I set my pattern selection so it counts only 7 digits and above (pattern = r"(0+1+0+1+0+1+)" {<7} )? Please note: it should select 7 digits and above, not just a fixed 7 digits. import re from collections import Counter pattern = r"0+1+0+1+0+1+" test_str = '01010100110011001100011100011110000101101110100001101011000111011001010011001001001101000011' \ '00110011001100110011010101001100110001111110010100100111010110001001100010011010110011' cnt = Counter(re.findall(pattern, test_str)) print(cnt.most_common()) # result [('010101', 2), ('00110001110001111', 1), ('011101000011', 1), ('0111011001', 1), ('010010011', 1), ('001100110011', 1), ('00011111100101', 1), ('010110001', 1)] The result should show only matches of 7 characters and above; it is not supposed to show ('010101', 2) A: The simplest solution is to filter the list of regex matches. import re from collections import Counter pattern = r"0+1+0+1+0+1+" test_str = '01010100110011001100011100011110000101101110100001101011000111011001010011001001001101000011' \ '00110011001100110011010101001100110001111110010100100111010110001001100010011010110011' cnt = Counter([p for p in re.findall(pattern, test_str) if len(p) > 6]) print(cnt.most_common()) Output: [('001100110011', 2), ('000111000111100001', 1), ('011011101', 1), ('00001101011', 1), ('000111011001', 1), ('010011001', 1), ('001001101', 1), ('00001100110011', 1), ('00110011000111111', 1), ('00101001', 1), ('0011101011', 1), ('000100110001', 1), ('001101011', 1)]
how to define selection condition in regex in python
I have a string in which some binary numbers appear. I want to count the number of occurrences of a given pattern, but I want to set my pattern to match 7 characters and above, so the result should show only matches of at least 7 characters. That is, how can I set my pattern selection so it counts only 7 digits and above (pattern = r"(0+1+0+1+0+1+)" {<7} )? Please note: it should select 7 digits and above, not just a fixed 7 digits. import re from collections import Counter pattern = r"0+1+0+1+0+1+" test_str = '01010100110011001100011100011110000101101110100001101011000111011001010011001001001101000011' \ '00110011001100110011010101001100110001111110010100100111010110001001100010011010110011' cnt = Counter(re.findall(pattern, test_str)) print(cnt.most_common()) # result [('010101', 2), ('00110001110001111', 1), ('011101000011', 1), ('0111011001', 1), ('010010011', 1), ('001100110011', 1), ('00011111100101', 1), ('010110001', 1)] The result should show only matches of 7 characters and above; it is not supposed to show ('010101', 2)
[ "The simplest solution is to filter the list of regex matches.\nimport re\nfrom collections import Counter\n\npattern = r\"0+1+0+1+0+1+\"\n\ntest_str = '01010100110011001100011100011110000101101110100001101011000111011001010011001001001101000011' \\\n '00110011001100110011010101001100110001111110010100100111010110001001100010011010110011'\n\ncnt = Counter([p for p in re.findall(pattern, test_str) if len(p) > 6])\nprint(cnt.most_common())\n\nOutput:\n[('001100110011', 2), ('000111000111100001', 1), ('011011101', 1), ('00001101011', 1), ('000111011001', 1), ('010011001', 1), ('001001101', 1), ('00001100110011', 1), ('00110011000111111', 1), ('00101001', 1), ('0011101011', 1), ('000100110001', 1), ('001101011', 1)]\n\n" ]
[ 2 ]
[]
[]
[ "python", "regex", "select", "string" ]
stackoverflow_0074515999_python_regex_select_string.txt
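The same length filter can live in a small generator around finditer, which avoids building the intermediate list; a sketch reusing pattern and test_str from above:
import re
from collections import Counter

compiled = re.compile(r"0+1+0+1+0+1+")

def long_matches(text, min_len=7):
    # The length check stays in Python: the repeated groups make a purely
    # regex-level length constraint unreliable for this pattern.
    for m in compiled.finditer(text):
        if len(m.group()) >= min_len:
            yield m.group()

cnt = Counter(long_matches(test_str))
print(cnt.most_common())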
Q: Jupyter’s kernel crash when I use groupby I’m analyzing a dataset with 200 columns and 6000 rows. I computed all the possible differences between columns using itertools and added them to the dataset. So now the number of columns has increased. Until now everything works fine and the kernel doesn’t have problems. The kernel dies when I try to group columns with the same first value and sum them. #difference between two columns,all possible combinations 1-2,1-3,..,199-200 def sb(df): comb=itertools.permutations(df.columns,2) N_f=pd.DataFrame() N_f = pd.concat([df[a]-df[b] for a,b in comb],axis=1) N_f.iloc[0,:]=[abs(number) for number in N_f.iloc[0,:]] return N_f #Here I transform the first row into columns headers and then I try to sum columns with the same head def fg(m): f.columns=f.iloc[0] f=f.iloc[1:] f=f.groupby(f.columns,axis=1).sum() return f Now I tried to run the code without the groupby part, but the kernel keeps dying. A: Kernel crashes often suggest a large spike in resource usage, which your machine and/or jupyter configuration could not handle. The question is then, "What am I doing that is using so many resources?". That's for you to figure out, but my guess is that it has to do with your list comprehension over permutations. Permutations are extremely expensive, and having in-memory data structures for each permutation is going to hurt. I suggest debugging like so: # Print out the size of this. Does it surprise you? comb=itertools.permutations(df.columns,2) N_f=pd.DataFrame() # Instead of doing these operations in one list comprehension, # instead make a for loop and print out the memory # usage at each iteration in the loop. # How is it scaling? N_f = pd.concat([df[a]-df[b] for a,b in comb],axis=1)
Jupyter’s kernel crashes when I use groupby
I’m analyzing a dataset with 200 columns and 6000 rows. I computed all the possible differences between columns using itertools and added them to the dataset, so the number of columns has increased. Up to this point everything works fine and the kernel has no problems. The kernel dies when I try to group columns with the same first value and sum them. #difference between two columns,all possible combinations 1-2,1-3,..,199-200 def sb(df): comb=itertools.permutations(df.columns,2) N_f=pd.DataFrame() N_f = pd.concat([df[a]-df[b] for a,b in comb],axis=1) N_f.iloc[0,:]=[abs(number) for number in N_f.iloc[0,:]] return N_f #Here I transform the first row into column headers and then I try to sum columns with the same head def fg(m): f.columns=f.iloc[0] f=f.iloc[1:] f=f.groupby(f.columns,axis=1).sum() return f I also tried to run the code without the groupby part, but the kernel keeps dying.
[ "Kernel crashes often suggest a large spike in resource usage, which your machine and/or juypter configuration could not handle.\nThe question is then, \"What am I doing that is using so many resources?\".\nThat's for you to figure out, but my guess is that it has to do with your list comprehension over permutations. Permutations are extremely expensive, and having in-memory data structures for each permutation is going to hurt.\nI suggest debugging like so:\n# Print out the size of this. Does it surprise you?\ncomb=itertools.permutations(df.columns,2)\n\nN_f=pd.DataFrame()\n\n# Instead of doing these operations in one list comprehension,\n# instead make a for loop and print out the memory\n# usage at each iteration in the loop.\n# How is it scaling?\nN_f = pd.concat([df[a]-df[b] for a,b in comb],axis=1)\n\n" ]
[ 0 ]
[]
[]
[ "crash", "pandas", "python" ]
stackoverflow_0074512801_crash_pandas_python.txt
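As a rough sense of scale for the answer above: permutations of 200 columns give 200*199 = 39800 difference columns of 6000 rows each, roughly 1.9 GB at float64 before the groupby even starts. The sketch below is an illustration of the suggested debugging loop using the standard tracemalloc module (the function and column naming are my own); itertools.combinations would also halve the work if the sign of the difference does not matter.

import itertools
import tracemalloc
import pandas as pd

def diff_columns(df: pd.DataFrame) -> pd.DataFrame:
    tracemalloc.start()
    pieces = []
    for i, (a, b) in enumerate(itertools.permutations(df.columns, 2)):
        pieces.append((df[a] - df[b]).rename(f"{a}-{b}"))
        if i % 5000 == 0:
            current, peak = tracemalloc.get_traced_memory()
            print(f"pair {i}: current={current / 1e6:.1f} MB, peak={peak / 1e6:.1f} MB")
    tracemalloc.stop()
    return pd.concat(pieces, axis=1)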
Q: find resources that are never tagged in aws using boto3 Using this we can't find resources that are never tagged: client = boto3.client('resourcegroupstaggingapi') How to find resources that are never tagged in AWS using boto3? A: We are using a tagging scheme that requires 5 mandatory tags, and we created an AWS Config rule that checks for these. https://docs.aws.amazon.com/config/latest/developerguide/required-tags.html
find resources that are never tagged in aws using boto3
Using this we can't find resources that are never tagged: client = boto3.client('resourcegroupstaggingapi') How to find resources that are never tagged in AWS using boto3?
[ "We are using a tagging concept that requires 5 tags as mandatory, and we created a config rule that checks for these.\nhttps://docs.aws.amazon.com/config/latest/developerguide/required-tags.html\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_lambda", "boto3", "python" ]
stackoverflow_0074514475_amazon_web_services_aws_lambda_boto3_python.txt
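As a boto3 counterpart to the AWS Config suggestion above, here is a rough sketch of the inverse check. It rests on the assumption, worth verifying against the API docs, that get_resources without tag filters also returns resources carrying no tags at all (service coverage varies), so treat it as a starting point rather than a definitive answer.

import boto3

client = boto3.client("resourcegroupstaggingapi")
paginator = client.get_paginator("get_resources")

untagged = []
for page in paginator.paginate():
    for mapping in page["ResourceTagMappingList"]:
        if not mapping.get("Tags"):  # empty tag list means the resource carries no tags
            untagged.append(mapping["ResourceARN"])

print(f"{len(untagged)} untagged resources found")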
Q: Equivalent for R / dplyr's glimpse() function in Python for Panda dataframes? I find the glimpse function very useful in R/dplyr. But as someone who is used to R and is working with Python now, I haven't found something as useful for Panda dataframes. In Python, I've tried things like .describe() and .info() and .head() but none of these give me the useful snapshot which R's glimpse() gives us. Nice features which I'm quite accustomed to having in glimpse() include: All variables/column names as rows in the output All variable/column data types The first few observations of each column Total number of observations Total number of variables/columns Here is some simple code you could work it with: R library(dplyr) test <- data.frame(column_one = c("A", "B", "C", "D"), column_two = c(1:4)) glimpse(test) # The output is as follows Rows: 4 Columns: 2 $ column_one <chr> "A", "B", "C", "D" $ column_two <int> 1, 2, 3, 4 Python import pandas as pd test = pd.DataFrame({'column_one':['A', 'B', 'C', 'D'], 'column_two':[1, 2, 3, 4]}) Is there a single function for Python which mirrors these capabilities closely (not multiple and not partly)? If not, how would you create a function that does the job precisely? A: Here is one way to do it: def glimpse(df): print(f"Rows: {df.shape[0]}") print(f"Columns: {df.shape[1]}") for col in df.columns: print(f"$ {col} <{df[col].dtype}> {df[col].head().values}") Then: import pandas as pd df = pd.DataFrame( {"column_one": ["A", "B", "C", "D"], "column_two": [1, 2, 3, 4]} ) glimpse(df) # Output Rows: 4 Columns: 2 $ column_one <object> ['A' 'B' 'C' 'D'] $ column_two <int64> [1 2 3 4]
Equivalent for R / dplyr's glimpse() function in Python for Pandas dataframes?
I find the glimpse function very useful in R/dplyr. But as someone who is used to R and is working with Python now, I haven't found something as useful for Pandas dataframes. In Python, I've tried things like .describe() and .info() and .head() but none of these give me the useful snapshot which R's glimpse() gives us. Nice features which I'm quite accustomed to having in glimpse() include: All variables/column names as rows in the output All variable/column data types The first few observations of each column Total number of observations Total number of variables/columns Here is some simple code you could work with: R library(dplyr) test <- data.frame(column_one = c("A", "B", "C", "D"), column_two = c(1:4)) glimpse(test) # The output is as follows Rows: 4 Columns: 2 $ column_one <chr> "A", "B", "C", "D" $ column_two <int> 1, 2, 3, 4 Python import pandas as pd test = pd.DataFrame({'column_one':['A', 'B', 'C', 'D'], 'column_two':[1, 2, 3, 4]}) Is there a single function for Python which mirrors these capabilities closely (not multiple and not partly)? If not, how would you create a function that does the job precisely?
[ "Here is one way to do it:\ndef glimpse(df):\n print(f\"Rows: {df.shape[0]}\")\n print(f\"Columns: {df.shape[1]}\")\n for col in df.columns:\n print(f\"$ {col} <{df[col].dtype}> {df[col].head().values}\")\n\nThen:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\"column_one\": [\"A\", \"B\", \"C\", \"D\"], \"column_two\": [1, 2, 3, 4]}\n)\n\nglimpse(df)\n\n# Output\nRows: 4\nColumns: 2\n$ column_one <object> ['A' 'B' 'C' 'D']\n$ column_two <int64> [1 2 3 4]\n\n" ]
[ 1 ]
[]
[]
[ "dplyr", "pandas", "python", "r" ]
stackoverflow_0074414355_dplyr_pandas_python_r.txt
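A small, purely illustrative tweak to the glimpse function above: truncating the value preview keeps wide frames readable, much like dplyr trims its output to the console width.

def glimpse(df, width=60):
    print(f"Rows: {df.shape[0]}")
    print(f"Columns: {df.shape[1]}")
    for col in df.columns:
        preview = ", ".join(map(str, df[col].head().tolist()))
        print(f"$ {col} <{df[col].dtype}> {preview[:width]}")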
Q: How can I allow both an IP address and a URL in a Django field? I want to allow both flutterdemo.hp.com and 12.135.720.12 in a Django field. This is what I tried. from rest_framework import serializers, viewsets from django.core.validators import URLValidator class FlutterSerializer(serializers.HyperlinkedModelSerializer): fqdn_ip = serializers.CharField(max_length = 100, validators =[URLValidator]) But it is accepting any text and just behaving as a CharField. URLField treats "flutterdemo.hp.com" as invalid. How can I achieve this? Thanks. A: You can use this third-party library for validating URLs and IPs. Validate IPv4 IPs here Validate IPv6 IPs here Validate URLs here After validating, you can save it with a CharField
How can I allow both an IP address and a URL in a Django field?
I want to allow both flutterdemo.hp.com and 12.135.720.12 in a Django field. This is what I tried. from rest_framework import serializers, viewsets from django.core.validators import URLValidator class FlutterSerializer(serializers.HyperlinkedModelSerializer): fqdn_ip = serializers.CharField(max_length = 100, validators =[URLValidator]) But it is accepting any text and just behaving as a CharField. URLField treats "flutterdemo.hp.com" as invalid. How can I achieve this? Thanks.
[ "You can use this third-party library for Validating URL and IP.\nValidate Ipv4 Ip here\nValidate Ipv6 Ip here\nValidate Url here\nAfter validate you can save with CharField\n" ]
[ 1 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "orm", "python" ]
stackoverflow_0074515515_django_django_models_django_rest_framework_orm_python.txt
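For completeness, Django's built-in validators can cover this case without a third-party package: URLValidator rejects flutterdemo.hp.com mainly because it expects a scheme such as https://, while django.core.validators.validate_ipv46_address handles both IPv4 and IPv6. The sketch below is my own illustration with hypothetical names; note also that 12.135.720.12 is not a valid IPv4 address, since 720 exceeds 255.

from django.core.exceptions import ValidationError
from django.core.validators import URLValidator, validate_ipv46_address
from rest_framework import serializers

def validate_url_or_ip(value):
    try:
        validate_ipv46_address(value)  # accepts IPv4 and IPv6 literals
        return
    except ValidationError:
        pass
    try:
        # tolerate a missing scheme by validating with one prepended
        URLValidator(schemes=["http", "https"])(f"https://{value}")
    except ValidationError:
        raise ValidationError("Enter a valid URL, hostname, or IP address.")

class FlutterSerializer(serializers.Serializer):
    fqdn_ip = serializers.CharField(max_length=100, validators=[validate_url_or_ip])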
Q: pyspark dataframe combine identical rows based on start and end column I have a dataframe contains billion records and which I want to combine identical rows into one rows based on their effective_start and effective_end date key1 key2 start end k11 k2 2000-01-01 2000-02-01 k11 k2 2000-02-01 2000-03-01 k11 k2 2000-03-01 2000-04-01 k11 k2 2000-04-01 2000-05-01 k11 k2 2000-05-01 2000-06-01 k11 k2 2000-08-01 2000-09-01 k11 k2 2000-09-01 2000-10-01 k22 k2 2000-01-01 2000-02-01 k22 k2 2000-02-01 2000-03-01 k22 k3 2000-03-01 2000-04-01 k22 k3 2000-04-01 2000-05-01 k22 k3 2000-05-01 2000-06-01 if group by key1/key2 then sort by start, you can see there are three groups key11/key2, key22/key2, key22/key3, If the previous row's end equals to next row's start, then the same group can be combined, otherwise it is not combined. The expected output is key1 key2 start end k11 k2 2000-01-01 2000-06-01 k11 k2 2000-08-01 2000-10-01 k22 k2 2000-01-01 2000-03-01 k22 k3 2000-03-01 2000-06-01 How do I achieve this? Thanks in advance. A: The logic is: Append "end" column shifted by one record. Since, spark has distributed architecture, there is no notion of "position or index" of a record. This is done with help of Window function and columns with which to order by. Compute the diff between "start" and previous record's "end" column. Group the records until this diff is zero. To identify such diff groups, compute cumulative sum. Finally group by groups identified above. The start columns is min of the group and end is max. df = spark.createDataFrame(data=[["k11","k2","2000-01-01","2000-02-01"],["k11","k2","2000-02-01","2000-03-01"],["k11","k2","2000-03-01","2000-04-01"],["k11","k2","2000-04-01","2000-05-01"],["k11","k2","2000-05-01","2000-06-01"],["k11","k2","2000-08-01","2000-09-01"],["k11","k2","2000-09-01","2000-10-01"],["k22","k2","2000-01-01","2000-02-01"],["k22","k2","2000-02-01","2000-03-01"],["k22","k3","2000-03-01","2000-04-01"],["k22","k3","2000-04-01","2000-05-01"],["k22","k3","2000-05-01","2000-06-01"]], schema=["key1","key2","start","end"]) from pyspark.sql.window import Window w = Window.partitionBy("key1", "key2").orderBy("start") df = df.withColumn("start", F.to_date("start", format="yyyy-MM-dd")).withColumn("end", F.to_date("end", format="yyyy-MM-dd")) df = df.withColumn("prev_end", F.lag("end", offset=1).over(w)) df = df.withColumn("date_diff", F.datediff(F.col("start"), F.col("prev_end"))) df = df.withColumn("is_continuous", F.when(F.col("date_diff").isNull() | (F.col("date_diff") > 0), F.lit(1)).otherwise(F.lit(0))) df = df.withColumn("cumsum", F.sum(F.col("is_continuous")).over(w)) df = df.groupBy("key1", "key2", "cumsum").agg(F.min("start").alias("start"), F.max("end").alias("end")).drop("cumsum") [Out]: +----+----+----------+----------+ |key1|key2|start |end | +----+----+----------+----------+ |k11 |k2 |2000-01-01|2000-06-01| |k11 |k2 |2000-08-01|2000-10-01| |k22 |k2 |2000-01-01|2000-03-01| |k22 |k3 |2000-03-01|2000-06-01| +----+----+----------+----------+
pyspark dataframe combine identical rows based on start and end column
I have a dataframe containing a billion records and I want to combine identical rows into one row based on their effective_start and effective_end date key1 key2 start end k11 k2 2000-01-01 2000-02-01 k11 k2 2000-02-01 2000-03-01 k11 k2 2000-03-01 2000-04-01 k11 k2 2000-04-01 2000-05-01 k11 k2 2000-05-01 2000-06-01 k11 k2 2000-08-01 2000-09-01 k11 k2 2000-09-01 2000-10-01 k22 k2 2000-01-01 2000-02-01 k22 k2 2000-02-01 2000-03-01 k22 k3 2000-03-01 2000-04-01 k22 k3 2000-04-01 2000-05-01 k22 k3 2000-05-01 2000-06-01 If you group by key1/key2 and then sort by start, you can see there are three groups: k11/k2, k22/k2, k22/k3. If the previous row's end equals the next row's start, those rows of the same group can be combined; otherwise they are not combined. The expected output is key1 key2 start end k11 k2 2000-01-01 2000-06-01 k11 k2 2000-08-01 2000-10-01 k22 k2 2000-01-01 2000-03-01 k22 k3 2000-03-01 2000-06-01 How do I achieve this? Thanks in advance.
[ "The logic is:\n\nAppend \"end\" column shifted by one record. Since, spark has distributed architecture, there is no notion of \"position or index\" of a record. This is done with help of Window function and columns with which to order by.\nCompute the diff between \"start\" and previous record's \"end\" column.\nGroup the records until this diff is zero. To identify such diff groups, compute cumulative sum.\nFinally group by groups identified above. The start columns is min of the group and end is max.\n\ndf = spark.createDataFrame(data=[[\"k11\",\"k2\",\"2000-01-01\",\"2000-02-01\"],[\"k11\",\"k2\",\"2000-02-01\",\"2000-03-01\"],[\"k11\",\"k2\",\"2000-03-01\",\"2000-04-01\"],[\"k11\",\"k2\",\"2000-04-01\",\"2000-05-01\"],[\"k11\",\"k2\",\"2000-05-01\",\"2000-06-01\"],[\"k11\",\"k2\",\"2000-08-01\",\"2000-09-01\"],[\"k11\",\"k2\",\"2000-09-01\",\"2000-10-01\"],[\"k22\",\"k2\",\"2000-01-01\",\"2000-02-01\"],[\"k22\",\"k2\",\"2000-02-01\",\"2000-03-01\"],[\"k22\",\"k3\",\"2000-03-01\",\"2000-04-01\"],[\"k22\",\"k3\",\"2000-04-01\",\"2000-05-01\"],[\"k22\",\"k3\",\"2000-05-01\",\"2000-06-01\"]], schema=[\"key1\",\"key2\",\"start\",\"end\"])\n\nfrom pyspark.sql.window import Window\nw = Window.partitionBy(\"key1\", \"key2\").orderBy(\"start\")\n\ndf = df.withColumn(\"start\", F.to_date(\"start\", format=\"yyyy-MM-dd\")).withColumn(\"end\", F.to_date(\"end\", format=\"yyyy-MM-dd\"))\ndf = df.withColumn(\"prev_end\", F.lag(\"end\", offset=1).over(w))\ndf = df.withColumn(\"date_diff\", F.datediff(F.col(\"start\"), F.col(\"prev_end\")))\ndf = df.withColumn(\"is_continuous\", F.when(F.col(\"date_diff\").isNull() | (F.col(\"date_diff\") > 0), F.lit(1)).otherwise(F.lit(0)))\ndf = df.withColumn(\"cumsum\", F.sum(F.col(\"is_continuous\")).over(w))\n\ndf = df.groupBy(\"key1\", \"key2\", \"cumsum\").agg(F.min(\"start\").alias(\"start\"), F.max(\"end\").alias(\"end\")).drop(\"cumsum\")\n\n[Out]:\n+----+----+----------+----------+\n|key1|key2|start |end |\n+----+----+----------+----------+\n|k11 |k2 |2000-01-01|2000-06-01|\n|k11 |k2 |2000-08-01|2000-10-01|\n|k22 |k2 |2000-01-01|2000-03-01|\n|k22 |k3 |2000-03-01|2000-06-01|\n+----+----+----------+----------+\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql", "pyspark", "python", "sql" ]
stackoverflow_0074515764_apache_spark_apache_spark_sql_pyspark_python_sql.txt
Q: How to mock httpx.AsyncClient() in Pytest I need to write a test case for a function used to fetch data from an API. In it I used httpx.AsyncClient() as a context manager, but I don't understand how to write a test case for that function. async def make_dropbox_request(url, payload, dropbox_token): async with httpx.AsyncClient(timeout=None, follow_redirects=True) as client: headers = { 'Content-Type': 'application/json', 'authorization': 'Bearer '+ dropbox_token } # make the api call response = await client.post(url, headers=headers, json=payload) if response.status_code not in [200]: print('Dropbox Status Code: ' + str(response.status_code)) if response.status_code in [200, 202, 303]: return json.loads(response.text) elif response.status_code == 401: raise DropboxAuthenticationError() elif response.status_code == 429: sleep_time = int(response.headers['Retry-After']) if sleep_time < 1*60: await asyncio.sleep(sleep_time) raise DropboxMaxRateLimitError() raise DropboxMaxDailyRateLimitError() raise DropboxHTTPError() I need to write test cases without calling the API. Therefore I believe I need to mock client.post(), but I do not understand how to do that. If anyone can help me figure this out, that would be really helpful. This image also includes my code block A: TL;DR: use return_value.__aenter__.return_value to mock the async context. Assuming you are using Pytest and pytest-mock, you can use the mocker fixture to mock httpx.AsyncClient. Since the post function is async, you will need to use an AsyncMock. Finally, since you use an async context, you will also need to use return_value.__aenter__.return_value to properly mock the returned context. Note for a synchronous context, simply use __enter__ instead of __aenter__. @pytest.fixture def mock_AsyncClient(mocker: MockerFixture) -> Mock: mocked_AsyncClient = mocker.patch(f"{TESTED_MODULE}.AsyncClient") mocked_async_client = Mock() response = Response(status_code=200) mocked_async_client.post = AsyncMock(return_value=response) mocked_AsyncClient.return_value.__aenter__.return_value = mocked_async_client return mocked_async_client A: I also faced the same issue and handled it with the patch decorator. I'm sharing my code so that it might help others. from unittest.mock import patch import pytest import httpx from app.services import your_service @pytest.mark.anyio @patch( 'app.services.your_service.httpx.AsyncClient.post', return_value = httpx.Response(200, json={'id': '9ed7dasdasd-08ff-4ae1-8952-37e3a323eb08'}) ) async def test_get_id(mocker): result = await your_service.get_id() assert result == '9ed7dasdasd-08ff-4ae1-8952-37e3a323eb08' A: You can try out the RESPX mocking library to test and mock your HTTPX clients. In your case, something like this should do it: async def make_dropbox_request(url, payload, dropbox_token): ... response = await client.post(url, headers=headers, json=payload) ... return response.json() @respx.mock async def test_dropbox_endpoint(): url = "https://dropbox-api/some-endpoint/" endpoint = respx.post(url).respond(json={"some": "data"}) result = await make_dropbox_request(url, ..., ...) assert endpoint.called assert result == {"some": "data"} To be DRY and not repeat the mocking in each test, you can set up your own pytest fixture, or respx instance, globally that pre-mocks all dropbox api endpoints, and then in each test just alter the response/error depending on the scenario for the test, to get full test coverage on make_dropbox_request. @pytest.fixture() async def dropbox_mock(): async with respx.mock() as dropbox: # default endpoints and their responses dropbox.post("some-endpoint", name="foo").respond(404) dropbox.post("some-other-endpoint", name="bar").respond(404) # ^ name routes for access in tests yield dropbox async def test_some_case(dropbox_mock): dropbox_mock["foo"].respond(json={}) ....
How to mock httpx.AsyncClient() in Pytest
I need to write a test case for a function used to fetch data from an API. In it I used httpx.AsyncClient() as a context manager, but I don't understand how to write a test case for that function. async def make_dropbox_request(url, payload, dropbox_token): async with httpx.AsyncClient(timeout=None, follow_redirects=True) as client: headers = { 'Content-Type': 'application/json', 'authorization': 'Bearer '+ dropbox_token } # make the api call response = await client.post(url, headers=headers, json=payload) if response.status_code not in [200]: print('Dropbox Status Code: ' + str(response.status_code)) if response.status_code in [200, 202, 303]: return json.loads(response.text) elif response.status_code == 401: raise DropboxAuthenticationError() elif response.status_code == 429: sleep_time = int(response.headers['Retry-After']) if sleep_time < 1*60: await asyncio.sleep(sleep_time) raise DropboxMaxRateLimitError() raise DropboxMaxDailyRateLimitError() raise DropboxHTTPError() I need to write test cases without calling the API. Therefore I believe I need to mock client.post(), but I do not understand how to do that. If anyone can help me figure this out, that would be really helpful. This image also includes my code block
[ "TL;DR: use return_value.__aenter__.return_value to mock the async context.\nAssuming you are using Pytest and pytest-mock, your can use the mocker fixture to mock httpx.AsyncClient.\nSince the post function is async, you will need to use an AsyncMock.\nFinally, since you use an async context, you will also need to use return_value.__aenter__.return_value to properly mock the returned context. Note for a synchronous context, simply use __enter__ instead of __aenter__.\n@pytest.fixture\ndef mock_AsyncClient(mocker: MockerFixture) -> Mock:\n mocked_AsyncClient = mocker.patch(f\"{TESTED_MODULE}.AsyncClient\")\n\n mocked_async_client = Mock()\n response = Response(status_code=200)\n mocked_async_client.post = AsyncMock(return_value=response)\n mocked_AsyncClient.return_value.__aenter__.return_value = mocked_async_client\n\n return mocked_async_client\n\n", "I also faced with same issue and handled it with patch decorator. I share my code, so that might help for others.\nfrom unittest.mock import patch\nimport pytest\nimport httpx\nfrom app.services import your_service\n\n\n@pytest.mark.anyio\n@patch(\n 'app.services.your_service.httpx.AsyncClient.post',\n return_value = httpx.Response(200, json={'id': '9ed7dasdasd-08ff-4ae1-8952-37e3a323eb08'})\n)\nasync def test_get_id(mocker): \n result = await your_service.get_id()\n assert result == '9ed7dasdasd-08ff-4ae1-8952-37e3a323eb08'\n\n", "You can try out the RESPX mocking library to test and mock your HTTPX clients.\n​\n​\nIn your case, something like this should do it:\n​\n​\nasync def make_dropbox_request(url, payload, dropbox_token):\n ...\n response = await client.post(url, headers=headers, json=payload)\n ...\n return response.json()\n​\n​\n@respx.mock\nasync def test_dropbox_endpoint():\n url = \"https://dropbox-api/some-endpoint/\"\n endpoint = respx.post(url).respond(json={\"some\": \"data\"})\n result = await make_dropbox_request(url, ..., ...)\n assert endpoint.called\n assert result == {\"some\": \"data\"}\n\n​\nTo be dry and not repeat the mocking in each test, you can set up your own pytest fixture, or respx instance, globally that pre-mocks all dropbox api endpoints, and then in each test just alter response/error depending on the scenario for the test, to get full test coverage on make_dropbox_request.\n​\n@pytest.fixture()\nasync def dropbox_mock():\n async with respx.mock() as dropbox:\n # default endpoints and their responses\n dropbox.post(\"some-endpoint\", name=\"foo\").respond(404)\n dropbox.post(\"some-other-endpoint\", name=\"bar\").respond(404)\n # ^ name routes for access in tests\n yield dropbox\n​\n​\nasync def test_some_case(dropbox_mock):\n dropbox_mock[\"foo\"].respond(json={})\n ....\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "asynchronous", "httpx", "mocking", "pytest", "python" ]
stackoverflow_0070633584_asynchronous_httpx_mocking_pytest_python.txt
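To tie the first answer above together, here is a sketch of how that fixture might be used in a test. It assumes pytest-asyncio is installed and that make_dropbox_request is importable from the tested module; both names are assumptions about the project layout rather than part of the original answers.

import json
import pytest
from httpx import Response

@pytest.mark.asyncio
async def test_make_dropbox_request_ok(mock_AsyncClient):
    # the mocked client returns a canned 200 response
    mock_AsyncClient.post.return_value = Response(200, text=json.dumps({"ok": True}))
    result = await make_dropbox_request("https://fake-url", {}, "fake-token")
    assert result == {"ok": True}
    mock_AsyncClient.post.assert_awaited_once()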
Q: Index i out of range for an array while using nested loops Starting to learn to code and I was doing the fantasy items exercise from automate boring stuff with python. I tried comparing each item of the addedItems array to the dictionary keys to see if they exist, if not I would create a new key with the default value 1. However it says that I have index out of range error, although creating a regular for loop and testing the array it seems to iterate without a problem, what am I missing? ` def displayInventory(inventory): print("Inventory: ") item_total = 0 for k, v in inventory.items(): item_total += v print(v, k) print("Total number of items: " + str(item_total)) def addToInventory(inventory, addedItems): items = [] amount = [] print(addedItems) for keys, values in inventory.items(): items.append(keys) amount.append(values) for i in range(len(addedItems)): for j in range(len(inventory)): if addedItems[i] == items[i]: inventory[items[j]] =+ 1 else: inventory.setdefault(addedItems[i], 1) inv = {'gold coin': 42, 'rope': 1} dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] inv = addToInventory(inv, dragonLoot) displayInventory(inv) ` Here is the error message ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-54-b83d92c005f4> in <module> 26 inv = {'gold coin': 42, 'rope': 1} 27 dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] ---> 28 inv = addToInventory(inv, dragonLoot) 29 displayInventory(inv) <ipython-input-54-b83d92c005f4> in addToInventory(inventory, addedItems) 19 for i in range(len(addedItems)): 20 for j in range(len(inventory)): ---> 21 if addedItems[i] == items[i]: 22 inventory[items[j]] =+ 1 23 else: IndexError: list index out of range I tried testing index i in regular for loops and it iterated through the items without issue, I am not sure why it says out of range. EDIT: Solved! Thank you very much!!! A: def displayInventory(inventory): item_total = 0 for k, v in inventory.items(): item_total += int(v) print(v, k) print("Total number of items: " + str(item_total)) def addToInventory(inventory, addedItems): items = [] amount = [] print(addedItems) for keys, values in inventory.items(): items.append(keys) amount.append(values) for i in range(len(inventory)): for j in range(len(addedItems)): if addedItems[j] == items[i]: inventory[items[i]] += 1 else: inventory.setdefault(addedItems[i], 1) return inventory inv = {'gold coin': 42, 'rope': 1} dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] inv = addToInventory(inv, dragonLoot) displayInventory(inv) A: items will take ['gold coin', 'rope'], so len items is 2, the for loop is in range(len(addedItems)) that mean in range of 5: tha loop will can compare items[0] and items[1] then the rest will be out of range of the list and lead to the error you have. and your function addToInventory must return a result otherwhise you will get an error here : for k, v in inventory.items(): because items will not be known by the function
Index i out of range for an array while using nested loops
Starting to learn to code and I was doing the fantasy items exercise from automate boring stuff with python. I tried comparing each item of the addedItems array to the dictionary keys to see if they exist, if not I would create a new key with the default value 1. However it says that I have index out of range error, although creating a regular for loop and testing the array it seems to iterate without a problem, what am I missing? ` def displayInventory(inventory): print("Inventory: ") item_total = 0 for k, v in inventory.items(): item_total += v print(v, k) print("Total number of items: " + str(item_total)) def addToInventory(inventory, addedItems): items = [] amount = [] print(addedItems) for keys, values in inventory.items(): items.append(keys) amount.append(values) for i in range(len(addedItems)): for j in range(len(inventory)): if addedItems[i] == items[i]: inventory[items[j]] =+ 1 else: inventory.setdefault(addedItems[i], 1) inv = {'gold coin': 42, 'rope': 1} dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] inv = addToInventory(inv, dragonLoot) displayInventory(inv) ` Here is the error message ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-54-b83d92c005f4> in <module> 26 inv = {'gold coin': 42, 'rope': 1} 27 dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby'] ---> 28 inv = addToInventory(inv, dragonLoot) 29 displayInventory(inv) <ipython-input-54-b83d92c005f4> in addToInventory(inventory, addedItems) 19 for i in range(len(addedItems)): 20 for j in range(len(inventory)): ---> 21 if addedItems[i] == items[i]: 22 inventory[items[j]] =+ 1 23 else: IndexError: list index out of range I tried testing index i in regular for loops and it iterated through the items without issue, I am not sure why it says out of range. EDIT: Solved! Thank you very much!!!
[ "def displayInventory(inventory):\n item_total = 0\n for k, v in inventory.items():\n item_total += int(v)\n print(v, k)\n print(\"Total number of items: \" + str(item_total))\n\ndef addToInventory(inventory, addedItems):\n items = []\n amount = []\n print(addedItems)\n for keys, values in inventory.items():\n items.append(keys)\n amount.append(values)\n for i in range(len(inventory)):\n for j in range(len(addedItems)):\n if addedItems[j] == items[i]:\n inventory[items[i]] += 1\n else:\n inventory.setdefault(addedItems[i], 1)\n return inventory\n\ninv = {'gold coin': 42, 'rope': 1}\ndragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby']\ninv = addToInventory(inv, dragonLoot)\ndisplayInventory(inv)\n\n", "items will take ['gold coin', 'rope'], so len items is 2, the for loop is in range(len(addedItems)) that mean in range of 5:\ntha loop will can compare items[0] and items[1] then the rest will be out of range of the list and lead to the error you have.\nand your function addToInventory must return a result otherwhise you will get an error here :\nfor k, v in inventory.items():\n\nbecause items will not be known by the function\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "indexing", "list", "nested", "python" ]
stackoverflow_0074516186_dictionary_indexing_list_nested_python.txt
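Beyond the index fix discussed above, the nested loops can be dropped entirely: dictionaries already test membership in constant time, so counting the loot is one line per item. A sketch of the idiomatic version (note the original also writes =+ 1 where += 1 was intended):

def addToInventory(inventory, addedItems):
    for item in addedItems:
        inventory[item] = inventory.get(item, 0) + 1
    return inventory

inv = {'gold coin': 42, 'rope': 1}
dragonLoot = ['gold coin', 'dagger', 'gold coin', 'gold coin', 'ruby']
inv = addToInventory(inv, dragonLoot)
# inv == {'gold coin': 45, 'rope': 1, 'dagger': 1, 'ruby': 1}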
Q: Azure App Service with AAD identity provider - Python & Streamlit framework for app - get logged in user Have a web app developed in Python with the Streamlit framework. Deploying as an Azure app service. Authentication to the app is via AAD. I'm unable to get details such as name/email address of the logged in user. Most welcome any suggestions (I've tried /.auth/me endpoint, looking at cookie sessions). Thanks! A: The /.auth/me endpoint gives you the information you need, i.e., it is a part of the access token (RS256 encoded) and maybe even decoded as well. You need to include the AppServiceAuthSession cookie in your get request to the endpoint. This code snippet should work in streamlit: import requests from streamlit.server.server import Server from streamlit.report_thread import get_report_ctx session_id = get_report_ctx().session_id session_info = Server.get_current()._get_session_info(session_id) session_headers = session_info.ws.request.headers ckks = session_headers['cookie'] ckkd = dict(item.split("=",1) for item in ckks.split("; ")) tokens = requests.get('https://<your_app>.azurewebsites.net/.auth/me',cookies=ckkd) tokens = tokens.json() A: The latest answer from the streamlit forum (answer from Ennui) : (streamlit version 1.14+) from streamlit.web.server.websocket_headers import _get_websocket_headers headers = _get_websocket_headers() if "X-Ms-Client-Principal-Name" in headers: user_email = headers["X-Ms-Client-Principal-Name"] st.write(headers) # have a look at what else is in the dict
Azure App Service with AAD identity provider - Python & Streamlit framework for app - get logged in user
I have a web app developed in Python with the Streamlit framework, deployed as an Azure App Service. Authentication to the app is via AAD. I'm unable to get details such as the name/email address of the logged-in user. Any suggestions are most welcome (I've tried the /.auth/me endpoint and looking at session cookies). Thanks!
[ "The /.auth/me endpoint gives you the information you need, i.e., it is a part of the access token (RS256 encoded) and maybe even decoded as well. You need to include the AppServiceAuthSession cookie in your get request to the endpoint.\nThis code snippet should work in streamlit:\nimport requests\nfrom streamlit.server.server import Server\nfrom streamlit.report_thread import get_report_ctx\n\nsession_id = get_report_ctx().session_id\nsession_info = Server.get_current()._get_session_info(session_id)\nsession_headers = session_info.ws.request.headers\nckks = session_headers['cookie']\nckkd = dict(item.split(\"=\",1) for item in ckks.split(\"; \"))\n\ntokens = requests.get('https://<your_app>.azurewebsites.net/.auth/me',cookies=ckkd)\ntokens = tokens.json()\n\n", "The latest answer from the streamlit forum (answer from Ennui) :\n(streamlit version 1.14+)\nfrom streamlit.web.server.websocket_headers import _get_websocket_headers\n\nheaders = _get_websocket_headers()\n\nif \"X-Ms-Client-Principal-Name\" in headers:\n user_email = headers[\"X-Ms-Client-Principal-Name\"]\n\nst.write(headers) # have a look at what else is in the dict\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure", "azure_web_app_service", "python" ]
stackoverflow_0070162168_azure_azure_web_app_service_python.txt
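Building on the header approach in the second answer above, App Service's Easy Auth also injects an X-Ms-Client-Principal header containing a base64-encoded JSON document with all claims, which carries more than just the name. The decoding sketch below is an illustration under that assumption; the exact claim layout should be verified against the App Service authentication docs.

import base64
import json

def decode_client_principal(headers):
    raw = headers.get("X-Ms-Client-Principal")
    if raw is None:
        return None
    principal = json.loads(base64.b64decode(raw).decode("utf-8"))
    # 'claims' is expected to be a list of {'typ': ..., 'val': ...} entries
    return {c["typ"]: c["val"] for c in principal.get("claims", [])}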
Q: Create an XML file with just a root tag using Python I'm trying to create an XML file with just one root tag, without any sub-elements. I tried the following code: import xml.etree.cElementTree as ET root = ET.Element("root") tree = ET.ElementTree(root) tree.write("filename.xml") I get filename.xml as below: <root /> But I am expecting: <root> </root> without any internal tags in root A: The element can be self-closing if it is empty, meaning your output is valid XML. See: https://www.w3schools.com/xml/xml_elements.asp
Create an XML file with just a root tag using Python
I'm trying to create an XML file with just one root tag, without any sub-elements. I tried the following code: import xml.etree.cElementTree as ET root = ET.Element("root") tree = ET.ElementTree(root) tree.write("filename.xml") I get filename.xml as below: <root /> But I am expecting: <root> </root> without any internal tags in root
[ "The element can be self-closing if it is empty, meaning your output is valid XML.\nSee: https://www.w3schools.com/xml/xml_elements.asp\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "xml" ]
stackoverflow_0074516498_python_python_3.x_xml.txt
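If an explicit closing tag is required anyway (for example, a downstream consumer insists on it), ElementTree can be told not to emit self-closing tags; this builds directly on the answer above.

import xml.etree.ElementTree as ET

root = ET.Element("root")
tree = ET.ElementTree(root)
# short_empty_elements=False writes <root></root> instead of <root />
tree.write("filename.xml", short_empty_elements=False)
# to get whitespace between the tags, as in the question, also set root.text = " "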
Q: How to save output after two layers of neural network in Pytorch I wrote a convolutional autoencoder that was supposed to work on the ORL dataset (400 images in dataset, size 32*32) in csv. format. What I want is to observe how the data changes through the autoencoder. That's why I wrote a test1 function in the class that goes through only the first two layers. class ConvAutoencoder(nn.Module): def __init__(self): super(ConvAutoencoder, self).__init__() ## encoder layers ## self.conv1 = nn.Conv2d(1, 3, 3) self.conv2 = nn.Conv2d(3 ,1, 3) self.conv3 = nn.Conv2d(1, 3, 3) self.conv4 = nn.Conv2d(3, 1, 3) ## decoder layers ## self.t_conv1 = nn.ConvTranspose2d(1, 3, 3) self.t_conv2 = nn.ConvTranspose2d(3, 1, 3) self.t_conv3 = nn.ConvTranspose2d(1, 3, 3) self.t_conv4 = nn.ConvTranspose2d(3, 1, 3) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.conv3(x)) x = F.relu(self.conv4(x)) ## decode ## x = F.relu(self.t_conv1(x)) x = F.relu(self.t_conv2(x)) x = F.relu(self.t_conv3(x)) x = (self.t_conv4(x)) return x def test1(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) return x But the problem arises when I really want to check what is in those two layers. mn_dataset_loader = torch.utils.data.DataLoader(dataset=custom_mnist_from_csv, batch_size=200, shuffle=False) for epoch in range(200): running_loss = 0 br = 0 for data in mn_dataset_loader: inputs = data[0].to(device, non_blocking=True) optimizer.zero_grad() outputs = model(inputs).to(device) i1 = model.test1(inputs).to(device) i1 = torch.squeeze(i1, 0) i1 = i1.flatten().to(device) i1=i1.unsqueeze(0) print(i1.shape) loss = lossFn(data.to(device), outputs) loss.backward() optimizer.step() running_loss += loss.item() print('[Epoch %d] loss: %.3f' % (epoch + 1, running_loss/len(mn_dataset_loader))) print('Done Training') My question is actually why is i1 what is the output after only 2 layers of the form torch.Size([1, 784])? Why not torch.Size([400, 784]) because there are actually so many images? So how to actually see what is actually the output of the first two layers? A: You specify a batch size of 200 but then take only the first element (inputs = data[0]) If you want to run it on all images change the batch size to 400 and don't take only the first element
How to save output after two layers of neural network in Pytorch
I wrote a convolutional autoencoder that was supposed to work on the ORL dataset (400 images, size 32*32) in .csv format. What I want is to observe how the data changes through the autoencoder. That's why I wrote a test1 function in the class that goes through only the first two layers. class ConvAutoencoder(nn.Module): def __init__(self): super(ConvAutoencoder, self).__init__() ## encoder layers ## self.conv1 = nn.Conv2d(1, 3, 3) self.conv2 = nn.Conv2d(3 ,1, 3) self.conv3 = nn.Conv2d(1, 3, 3) self.conv4 = nn.Conv2d(3, 1, 3) ## decoder layers ## self.t_conv1 = nn.ConvTranspose2d(1, 3, 3) self.t_conv2 = nn.ConvTranspose2d(3, 1, 3) self.t_conv3 = nn.ConvTranspose2d(1, 3, 3) self.t_conv4 = nn.ConvTranspose2d(3, 1, 3) def forward(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) x = F.relu(self.conv3(x)) x = F.relu(self.conv4(x)) ## decode ## x = F.relu(self.t_conv1(x)) x = F.relu(self.t_conv2(x)) x = F.relu(self.t_conv3(x)) x = (self.t_conv4(x)) return x def test1(self, x): x = F.relu(self.conv1(x)) x = F.relu(self.conv2(x)) return x But the problem arises when I really want to check what is in those two layers. mn_dataset_loader = torch.utils.data.DataLoader(dataset=custom_mnist_from_csv, batch_size=200, shuffle=False) for epoch in range(200): running_loss = 0 br = 0 for data in mn_dataset_loader: inputs = data[0].to(device, non_blocking=True) optimizer.zero_grad() outputs = model(inputs).to(device) i1 = model.test1(inputs).to(device) i1 = torch.squeeze(i1, 0) i1 = i1.flatten().to(device) i1=i1.unsqueeze(0) print(i1.shape) loss = lossFn(data.to(device), outputs) loss.backward() optimizer.step() running_loss += loss.item() print('[Epoch %d] loss: %.3f' % (epoch + 1, running_loss/len(mn_dataset_loader))) print('Done Training') My question is: why is i1, the output after only 2 layers, of the form torch.Size([1, 784])? Why not torch.Size([400, 784]), given that there are that many images? And how can I actually see the output of the first two layers?
[ "You specify a batch size of 200 but then take only the first element (inputs = data[0])\nIf you want to run it on all images change the batch size to 400 and don't take only the first element\n" ]
[ 1 ]
[]
[]
[ "autoencoder", "batchsize", "python", "pytorch" ]
stackoverflow_0074516025_autoencoder_batchsize_python_pytorch.txt
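As an alternative to the test1 method in the question, intermediate activations can be captured without duplicating the forward logic by registering a forward hook; this is a generic PyTorch pattern sketched here, not taken from the original answer. It also shows the shape point from the answer above: the batch dimension survives, and the question's i1 only collapsed to [1, 784] because flatten() without start_dim=1 flattens the batch axis too.

import torch

activations = {}

def save_output(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model = ConvAutoencoder()  # the class defined in the question
handle = model.conv2.register_forward_hook(save_output("after_conv2"))

x = torch.randn(400, 1, 32, 32)  # all 400 ORL images as one batch, per the question
_ = model(x)
print(activations["after_conv2"].shape)  # torch.Size([400, 1, 28, 28]) - batch axis kept
handle.remove()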
Q: Python inheritance - add argument to parent method I have a base class with function run. For example: class A: @abstractmethod def run(self, steps): ... It is possible to define class B with more arguments to the run method. class B(A): def run(self, steps, save): ... Working with typing, I can specify if a function gets either A or B as argument. By specifying the function gets A, I tell that I only need the basic interface of run. While specifying B says I need the extended one. The purpose of this design is to declare a base interface that all the children share but each one can have an extended API. This is impossible to be done in other languages. Hence I wonder, is it an anti-pattern? Is it something legit to do? A: In Python you can do something like the following. class A: def run(self, steps): print("Using class A's run.") print(f"steps are {steps}") class B(A): def run(self, steps, other_arg=None): if other_arg: print("Using class B's override.") print(f"steps are {steps}") else: # Use parent's run logic instead. super().run(steps) x = B() x.run(100) x.run(30, other_arg="something") # Using class A's run. # steps are 100 # Using class B's override. # steps are 30 Now, should you do this? There is a time and a place. You can get into trouble as well. Imagine you break the interface of the core object you're inheriting from, so the core object loses its abstraction value. You'd have been better off having two objects or rewriting your abstraction to be more robust to the differences in object you wish you represent. Edit: Note that the original question changed to make the base run method abstract. The solution posted here is mostly invalidated by that.
Python inheritance - add argument to parent method
I have a base class with function run. For example: class A: @abstractmethod def run(self, steps): ... It is possible to define class B with more arguments to the run method. class B(A): def run(self, steps, save): ... Working with typing, I can specify if a function gets either A or B as argument. By specifying the function gets A, I tell that I only need the basic interface of run. While specifying B says I need the extended one. The purpose of this design is to declare a base interface that all the children share but each one can have an extended API. This is impossible to be done in other languages. Hence I wonder, is it an anti-pattern? Is it something legit to do?
[ "In Python you can do something like the following.\nclass A:\n\n def run(self, steps):\n print(\"Using class A's run.\")\n print(f\"steps are {steps}\")\n\n\nclass B(A):\n\n def run(self, steps, other_arg=None):\n if other_arg:\n print(\"Using class B's override.\")\n print(f\"steps are {steps}\")\n else:\n # Use parent's run logic instead.\n super().run(steps)\n\nx = B()\nx.run(100)\nx.run(30, other_arg=\"something\")\n\n# Using class A's run.\n# steps are 100\n# Using class B's override.\n# steps are 30\n\nNow, should you do this? There is a time and a place. You can get into trouble as well. Imagine you break the interface of the core object you're inheriting from, so the core object loses its abstraction value. You'd have been better off having two objects or rewriting your abstraction to be more robust to the differences in object you wish you represent.\nEdit: Note that the original question changed to make the base run method abstract. The solution posted here is mostly invalidated by that.\n" ]
[ 1 ]
[]
[]
[ "inheritance", "overriding", "python" ]
stackoverflow_0074516402_inheritance_overriding_python.txt
Q: How to reate a dataframe based on excel sheet name and cell position? I have an excel table (sample.xlsx) which contains 3 sheets ('Sheet1','Sheet2','Sheet3'). Now I have read all the sheets and combine them into one dataframe. import pandas as pd data_df = pd.concat(pd.read_excel("sample.xlsx", header=None, index_col=None, sheet_name=None)) data_df looks like this: 0 1 2 Sheet1 0 val1 val2 val3 1 val11 val21 val31 Sheet2 0 val1 val2 val3 1 val11 val21 val31 Sheet3 0 val1 val2 val3 1 val11 val21 val31 Is there any way to create a new dataframe which has the same shape with data_df but each cell value is the cell position info? I have tried to get multiple index: multi_index = data_df.index.levels[:] and I get: [['Sheet1', 'Sheet2', 'Sheet3'], [0, 1]] But I don't know how to use these data to create a new dataframe like this: 0 1 2 0 Sheet1 - A1 Sheet1 - B1 Sheet1 - C1 1 Sheet1 - A2 Sheet1 - B2 Sheet1 - C2 2 Sheet2 - A1 Sheet2 - B1 Sheet2 - C1 3 Sheet2 - A2 Sheet2 - B2 Sheet2 - C2 4 Sheet3 - A1 Sheet3 - B1 Sheet3 - C1 5 Sheet3 - A2 Sheet3 - B2 Sheet3 - C2 Thanks in advance! A: Since the values in your data_df don't matter, you could build the cartesian product of index and columns and build a new dataframe of it. mapping = dict(enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZ')) mapping is needed to convert the column numbers 0,1,2,... to A,B,C,... UPDATE In case you have excel sheets with more than 26 columns, you would need to do some more calculations to get the Excel column names like AA, AB,...AZ and so on. In this post there are several answers which cover that issue. import itertools import numpy as np out = ( pd.DataFrame( np.array([f"{x[0][0]} - {mapping[int(x[1])]}{x[0][1]+1}" for x in itertools.product(data_df.index, data_df.columns)]) .reshape(len(data_df), -1) ) ) print(out) 0 1 2 0 Sheet1 - A1 Sheet1 - B1 Sheet1 - C1 1 Sheet1 - A2 Sheet1 - B2 Sheet1 - C2 2 Sheet2 - A1 Sheet2 - B1 Sheet2 - C1 3 Sheet2 - A2 Sheet2 - B2 Sheet2 - C2 4 Sheet3 - A1 Sheet3 - B1 Sheet3 - C1 5 Sheet3 - A2 Sheet3 - B2 Sheet3 - C2 A: Look into the “openpyxl” package. I think you can do something like “for sheet in sheets” then read each sheet to your data frame like that. Hope this guides you to the right place.
How to reate a dataframe based on excel sheet name and cell position?
I have an excel table (sample.xlsx) which contains 3 sheets ('Sheet1','Sheet2','Sheet3'). Now I have read all the sheets and combine them into one dataframe. import pandas as pd data_df = pd.concat(pd.read_excel("sample.xlsx", header=None, index_col=None, sheet_name=None)) data_df looks like this: 0 1 2 Sheet1 0 val1 val2 val3 1 val11 val21 val31 Sheet2 0 val1 val2 val3 1 val11 val21 val31 Sheet3 0 val1 val2 val3 1 val11 val21 val31 Is there any way to create a new dataframe which has the same shape with data_df but each cell value is the cell position info? I have tried to get multiple index: multi_index = data_df.index.levels[:] and I get: [['Sheet1', 'Sheet2', 'Sheet3'], [0, 1]] But I don't know how to use these data to create a new dataframe like this: 0 1 2 0 Sheet1 - A1 Sheet1 - B1 Sheet1 - C1 1 Sheet1 - A2 Sheet1 - B2 Sheet1 - C2 2 Sheet2 - A1 Sheet2 - B1 Sheet2 - C1 3 Sheet2 - A2 Sheet2 - B2 Sheet2 - C2 4 Sheet3 - A1 Sheet3 - B1 Sheet3 - C1 5 Sheet3 - A2 Sheet3 - B2 Sheet3 - C2 Thanks in advance!
[ "Since the values in your data_df don't matter, you could build the cartesian product of index and columns and build a new dataframe of it.\nmapping = dict(enumerate('ABCDEFGHIJKLMNOPQRSTUVWXYZ'))\n\nmapping is needed to convert the column numbers 0,1,2,... to A,B,C,...\nUPDATE\nIn case you have excel sheets with more than 26 columns, you would need to do some more calculations to get the Excel column names like AA, AB,...AZ and so on. In this post there are several answers which cover that issue.\nimport itertools\nimport numpy as np\n\nout = (\n pd.DataFrame(\n np.array([f\"{x[0][0]} - {mapping[int(x[1])]}{x[0][1]+1}\" \n for x in itertools.product(data_df.index, data_df.columns)])\n .reshape(len(data_df), -1)\n )\n)\n\nprint(out)\n\n 0 1 2\n0 Sheet1 - A1 Sheet1 - B1 Sheet1 - C1\n1 Sheet1 - A2 Sheet1 - B2 Sheet1 - C2\n2 Sheet2 - A1 Sheet2 - B1 Sheet2 - C1\n3 Sheet2 - A2 Sheet2 - B2 Sheet2 - C2\n4 Sheet3 - A1 Sheet3 - B1 Sheet3 - C1\n5 Sheet3 - A2 Sheet3 - B2 Sheet3 - C2\n\n", "Look into the “openpyxl” package. I think you can do something like “for sheet in sheets” then read each sheet to your data frame like that. Hope this guides you to the right place.\n" ]
[ 1, 0 ]
[]
[]
[ "data_analysis", "dataframe", "excel", "pandas", "python" ]
stackoverflow_0074514966_data_analysis_dataframe_excel_pandas_python.txt
Q: Use python and pandas to set key for imported data from text file to dataframe This feels like an incredibly straight forward problem, but I am new and stuck, apologies. It doesn't necessarily need a key, but that was how I thought to solve it. I have a text file whose abbreviated contents resemble this: name_of_source 128 1024.000000 225.569918 name_of_source_2 140 1120.000000 229.085200 etc etc I really need the output dataframe to resemble: name_of_source 128 1024.000000 225.569918 name_of_source_2 140 1120.000000 229.085200 I'm struggling to overcome the linebreak between the name and the data import pandas as pd import os data= pd.read_csv(path+'combined.txt', header=None, sep = "\s+|\t+|\s+\t+|\t+\s+", names='name vol1 vol2 vol3'.split(' ')) A: You can use pandas.DataFrame.join: df= pd.read_csv("test.txt", header=None) out= ( df.rename(columns= {0: "Name"}) .join(df.shift(-1).rename(columns={0: "Vals"})) .iloc[::2] ) # Output : print(out) Name Vals 0 name_of_source 128 1024.000000 225.569918 2 name_of_source_2 140 1120.000000 229.085200 If need separate values, use pandas.Series.str.split with pandas.concat : print(pd.concat([out, out.pop("Vals").str.split(expand=True).add_prefix('Vals_')], axis=1)) Name Vals_0 Vals_1 Vals_2 0 name_of_source 128 1024.000000 225.569918 2 name_of_source_2 140 1120.000000 229.085200 # .txt used:
Use python and pandas to set key for imported data from text file to dataframe
This feels like an incredibly straight forward problem, but I am new and stuck, apologies. It doesn't necessarily need a key, but that was how I thought to solve it. I have a text file whose abbreviated contents resemble this: name_of_source 128 1024.000000 225.569918 name_of_source_2 140 1120.000000 229.085200 etc etc I really need the output dataframe to resemble: name_of_source 128 1024.000000 225.569918 name_of_source_2 140 1120.000000 229.085200 I'm struggling to overcome the linebreak between the name and the data import pandas as pd import os data= pd.read_csv(path+'combined.txt', header=None, sep = "\s+|\t+|\s+\t+|\t+\s+", names='name vol1 vol2 vol3'.split(' '))
[ "You can use pandas.DataFrame.join:\ndf= pd.read_csv(\"test.txt\", header=None)\n\nout= (\n df.rename(columns= {0: \"Name\"})\n .join(df.shift(-1).rename(columns={0: \"Vals\"}))\n .iloc[::2]\n )\n\n# Output :\nprint(out)\n Name Vals\n0 name_of_source 128 1024.000000 225.569918\n2 name_of_source_2 140 1120.000000 229.085200\n\nIf need separate values, use pandas.Series.str.split with pandas.concat :\nprint(pd.concat([out, out.pop(\"Vals\").str.split(expand=True).add_prefix('Vals_')], axis=1))\n\n Name Vals_0 Vals_1 Vals_2\n0 name_of_source 128 1024.000000 225.569918\n2 name_of_source_2 140 1120.000000 229.085200\n\n# .txt used:\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074515584_pandas_python.txt
Q: "No module named 'sympy'" Error. Problem occurs after installation I am new to Python. I am trying to use the Sympy package. I am running Python 3.11 in Pycharm I am using Windows 10. It displays: ModuleNotFoundError: No module named 'sympy' I ran pip install sympy, it installed it. And when I try reinstalling it, it displays: Requirement already satisfied: sympy in c:\users\jrk\appdata\local\programs\python\python311\lib\site-packages (1.11.1) Requirement already satisfied: mpmath>=0.19 in c:\users\jrk\appdata\local\programs\python\python311\lib\site-packages (from sympy) (1.2.1) I tried going through this guide https://bobbyhadz.com/blog/python-no-module-named-sympy but it did not work. I tried uninstalling it and reinstalling it both with pip and mpip it unfortunately didnt work at the top you can see which interpreter i am using I can see that there is a file called sympy in my downloads folder, maybe that has to be relocated, but where to? A: My instructions might be a bit rusty as I don't regularly use Windows for Python but here goes: You'll notice that the path to your Pycharm Python interpreter (c:\Users\jrk\PycharmProjects..) is different than the path reported by pip in your error messages (c:\Users\jrk\appdata..). It's perfectly normal to have multiple versions of Python around, but you'll need to be a bit careful about which one you're invoking. It's unclear from your description where exactly you are running the pip commands, but usually if you open built-in terminal in Pycharm, it most likely has the right virtual environment activated automatically. The command line prompt should begin with (venv) if that is the case. If you then run pip install sympy you ought to have it installed in the correct place. If you're using a regular command prompt, you can also manually activate the virtual environment by running the activate or activate.bat file that you'll find in c:\Users\jrk\PycharmProjects\pythonProject\venv\Scripts\ folder. Might want to also read a bit about virtual envs @ https://docs.python.org/3/library/venv.html A: I found the solution. I do not know why it worked. But i moved the project from the local file on my computer to the server we use in our company, and then it worked
"No module named 'sympy'" Error. Problem occurs after installation
I am new to Python. I am trying to use the Sympy package. I am running Python 3.11 in Pycharm I am using Windows 10. It displays: ModuleNotFoundError: No module named 'sympy' I ran pip install sympy, it installed it. And when I try reinstalling it, it displays: Requirement already satisfied: sympy in c:\users\jrk\appdata\local\programs\python\python311\lib\site-packages (1.11.1) Requirement already satisfied: mpmath>=0.19 in c:\users\jrk\appdata\local\programs\python\python311\lib\site-packages (from sympy) (1.2.1) I tried going through this guide https://bobbyhadz.com/blog/python-no-module-named-sympy but it did not work. I tried uninstalling it and reinstalling it both with pip and mpip it unfortunately didnt work at the top you can see which interpreter i am using I can see that there is a file called sympy in my downloads folder, maybe that has to be relocated, but where to?
[ "My instructions might be a bit rusty as I don't regularly use Windows for Python but here goes:\nYou'll notice that the path to your Pycharm Python interpreter (c:\\Users\\jrk\\PycharmProjects..) is different than the path reported by pip in your error messages (c:\\Users\\jrk\\appdata..).\nIt's perfectly normal to have multiple versions of Python around, but you'll need to be a bit careful about which one you're invoking.\nIt's unclear from your description where exactly you are running the pip commands, but usually if you open built-in terminal in Pycharm, it most likely has the right virtual environment activated automatically. The command line prompt should begin with (venv) if that is the case. If you then run pip install sympy you ought to have it installed in the correct place.\nIf you're using a regular command prompt, you can also manually activate the virtual environment by running the activate or activate.bat file that you'll find in c:\\Users\\jrk\\PycharmProjects\\pythonProject\\venv\\Scripts\\ folder.\nMight want to also read a bit about virtual envs @ https://docs.python.org/3/library/venv.html\n", "I found the solution. I do not know why it worked. But i moved the project from the local file on my computer to the server we use in our company, and then it worked\n" ]
[ 0, 0 ]
[ "Try Uninstalling it and reinstalling\npip uninstall sympy\n\npip install sympy\n" ]
[ -2 ]
[ "module", "pip", "pycharm", "python", "sympy" ]
stackoverflow_0074357986_module_pip_pycharm_python_sympy.txt
Q: Is an abstract method a normal instance method in a non-abstract class in Python? I defined the abstract method sound() with @abstractmethod under the non-abstract class Animal which doesn't extend ABC and Cat class extends Animal class, then I could instantiate both Animal and Cat classes without any errors as shown below: from abc import ABC, abstractmethod class Animal: # Doesn't extend "ABC" @abstractmethod # Here def sound(self): print("Wow!!") class Cat(Animal): pass obj1 = Animal() # Here obj1.sound() obj2 = Cat() # Here obj2.sound() Output: Wow!! Wow!! So, is an abstract method a normal instance method in a non-abstract class in Python? A: The abstractmethod decorator just adds some annotation to the method, which is evaluated by ABC when you try to instantiate the class. The method itself doesn't change in any way, it's still a regular instance method. It's the cooperation of ABC together with those abstractmethod decorator annotations that result in the desired behaviour of preventing instantiation of "abstract classes". It's not a concept enforced at the language level in any way, it's just layered on top of regular Python classes.
Is an abstract method a normal instance method in a non-abstract class in Python?
I defined the abstract method sound() with @abstractmethod under the non-abstract class Animal which doesn't extend ABC and Cat class extends Animal class, then I could instantiate both Animal and Cat classes without any errors as shown below: from abc import ABC, abstractmethod class Animal: # Doesn't extend "ABC" @abstractmethod # Here def sound(self): print("Wow!!") class Cat(Animal): pass obj1 = Animal() # Here obj1.sound() obj2 = Cat() # Here obj2.sound() Output: Wow!! Wow!! So, is an abstract method a normal instance method in a non-abstract class in Python?
[ "The abstractmethod decorator just adds some annotation to the method, which is evaluated by ABC when you try to instantiate the class. The method itself doesn't change in any way, it's still a regular instance method. It's the cooperation of ABC together with those abstractmethod decorator annotations that result in the desired behaviour of preventing instantiation of \"abstract classes\". It's not a concept enforced at the language level in any way, it's just layered on top of regular Python classes.\n" ]
[ 3 ]
[]
[]
[ "abstract_class", "abstract_methods", "instance_methods", "python", "python_3.x" ]
stackoverflow_0074516565_abstract_class_abstract_methods_instance_methods_python_python_3.x.txt
Q: Shortest way to get first item of `OrderedDict` in Python 3 What's the shortest way to get first item of OrderedDict in Python 3? My best: list(ordered_dict.items())[0] Quite long and ugly. I can think of: next(iter(ordered_dict.items())) # Fixed, thanks Ashwini But it's not very self-describing. Any better suggestions? A: Programming Practices for Readabililty In general, if you feel like code is not self-describing, the usual solution is to factor it out into a well-named function: def first(s): '''Return the first element from an ordered collection or an arbitrary element from an unordered collection. Raise StopIteration if the collection is empty. ''' return next(iter(s)) With that helper function, the subsequent code becomes very readable: >>> extension = {'xml', 'html', 'css', 'php', 'xhmtl'} >>> one_extension = first(extension) Patterns for Extracting a Single Value from Collection The usual ways to get an element from a set, dict, OrderedDict, generator, or other non-indexable collection are: for value in some_collection: break and: value = next(iter(some_collection)) The latter is nice because the next() function lets you specify a default value if collection is empty or you can choose to let it raise an exception. The next() function is also explicit that it is asking for the next item. Alternative Approach If you actually need indexing and slicing and other sequence behaviors (such as indexing multiple elements), it is a simple matter to convert to a list with list(some_collection) or to use [itertools.islice()][2]: s = list(some_collection) print(s[0], s[1]) s = list(islice(n, some_collection)) print(s) A: Use popitem(last=False), but keep in mind that it removes the entry from the dictionary, i.e. is destructive. from collections import OrderedDict o = OrderedDict() o['first'] = 123 o['second'] = 234 o['third'] = 345 first_item = o.popitem(last=False) >>> ('first', 123) For more details, have a look at the manual on collections. It also works with Python 2.x. A: Subclassing and adding a method to OrderedDict would be the answer to clarity issues: >>> o = ExtOrderedDict(('a',1), ('b', 2)) >>> o.first_item() ('a', 1) The implementation of ExtOrderedDict: class ExtOrderedDict(OrderedDict): def first_item(self): return next(iter(self.items())) A: Code that's readable, leaves the OrderedDict unchanged and doesn't needlessly generate a potentially large list just to get the first item: for item in ordered_dict.items(): return item If ordered_dict is empty, None would be returned implicitly. An alternate version for use inside a stretch of code: for first in ordered_dict.items(): break # Leave the name 'first' bound to the first item else: raise IndexError("Empty ordered dict") The Python 2.x code corresponding to the first example above would need to use iteritems() instead: for item in ordered_dict.iteritems(): return item A: You might want to consider using SortedDict instead of OrderedDict. It provides SortedDict.peekitem to peek an item. Runtime complexity: O(log(n)) >>> sd = SortedDict({'a': 1, 'b': 2, 'c': 3}) >>> sd.peekitem(0) ('a', 1)
Shortest way to get first item of `OrderedDict` in Python 3
What's the shortest way to get first item of OrderedDict in Python 3? My best: list(ordered_dict.items())[0] Quite long and ugly. I can think of: next(iter(ordered_dict.items())) # Fixed, thanks Ashwini But it's not very self-describing. Any better suggestions?
[ " Programming Practices for Readabililty \nIn general, if you feel like code is not self-describing, the usual solution is to factor it out into a well-named function:\ndef first(s):\n '''Return the first element from an ordered collection\n or an arbitrary element from an unordered collection.\n Raise StopIteration if the collection is empty.\n '''\n return next(iter(s))\n\nWith that helper function, the subsequent code becomes very readable:\n>>> extension = {'xml', 'html', 'css', 'php', 'xhmtl'}\n>>> one_extension = first(extension)\n\n Patterns for Extracting a Single Value from Collection \nThe usual ways to get an element from a set, dict, OrderedDict, generator, or other non-indexable collection are:\nfor value in some_collection:\n break\n\nand:\nvalue = next(iter(some_collection))\n\nThe latter is nice because the next() function lets you specify a default value if collection is empty or you can choose to let it raise an exception. The next() function is also explicit that it is asking for the next item.\n Alternative Approach \nIf you actually need indexing and slicing and other sequence behaviors (such as indexing multiple elements), it is a simple matter to convert to a list with list(some_collection) or to use [itertools.islice()][2]:\ns = list(some_collection)\nprint(s[0], s[1])\n\ns = list(islice(n, some_collection))\nprint(s)\n\n", "Use popitem(last=False), but keep in mind that it removes the entry from the dictionary, i.e. is destructive.\nfrom collections import OrderedDict\no = OrderedDict()\no['first'] = 123\no['second'] = 234\no['third'] = 345\n\nfirst_item = o.popitem(last=False)\n>>> ('first', 123)\n\nFor more details, have a look at the manual on collections. It also works with Python 2.x.\n", "Subclassing and adding a method to OrderedDict would be the answer to clarity issues:\n>>> o = ExtOrderedDict(('a',1), ('b', 2))\n>>> o.first_item()\n('a', 1)\n\nThe implementation of ExtOrderedDict:\n\nclass ExtOrderedDict(OrderedDict):\n def first_item(self):\n return next(iter(self.items()))\n\n", "Code that's readable, leaves the OrderedDict unchanged and doesn't needlessly generate a potentially large list just to get the first item:\nfor item in ordered_dict.items():\n return item\n\nIf ordered_dict is empty, None would be returned implicitly.\nAn alternate version for use inside a stretch of code:\nfor first in ordered_dict.items():\n break # Leave the name 'first' bound to the first item\nelse:\n raise IndexError(\"Empty ordered dict\")\n\nThe Python 2.x code corresponding to the first example above would need to use iteritems() instead:\nfor item in ordered_dict.iteritems():\n return item\n\n", "You might want to consider using SortedDict instead of OrderedDict.\nIt provides SortedDict.peekitem to peek an item.\nRuntime complexity: O(log(n))\n>>> sd = SortedDict({'a': 1, 'b': 2, 'c': 3})\n>>> sd.peekitem(0)\n('a', 1)\n\n" ]
[ 73, 31, 5, 5, 0 ]
[ "First record:\n[key for key, value in ordered_dict][0]\n\nLast record:\n[key for key, value in ordered_dict][-1]\n\n" ]
[ -2 ]
[ "indexing", "iterable", "python", "python_3.x" ]
stackoverflow_0021062781_indexing_iterable_python_python_3.x.txt
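The top answer notes that next() accepts a default; for completeness, a small sketch (my addition, not from the thread) of the variant that is safe on an empty dict:

from collections import OrderedDict

od = OrderedDict()
print(next(iter(od.items()), None))   # None - no StopIteration on an empty dict
od['a'] = 1
print(next(iter(od.items()), None))   # ('a', 1)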
Q: Why can't dataclasses have mutable defaults in their class attributes declaration? This seems like something that is likely to have been asked before, but an hour or so of searching has yielded no results. Passing default list argument to dataclasses looked promising, but it's not quite what I'm looking for. Here's the problem: when one tries to assign a mutable value to a class attribute, there's an error: @dataclass class Foo: bar: list = [] # ValueError: mutable default <class 'list'> for field bar is not allowed: use default_factory I gathered from the error message that I'm supposed to use the following instead: @dataclass class Foo: bar: list = field(default_factory=list) But why are mutable defaults not allowed? Is it to enforce avoidance of the mutable default argument problem? A: It looks like my question was quite clearly answered in the docs (which derived from PEP 557, as shmee mentioned): Python stores default member variable values in class attributes. Consider this example, not using dataclasses: class C: x = [] def add(self, element): self.x.append(element) o1 = C() o2 = C() o1.add(1) o2.add(2) assert o1.x == [1, 2] assert o1.x is o2.x Note that the two instances of class C share the same class variable x, as expected. Using dataclasses, if this code was valid: @dataclass class D: x: List = [] def add(self, element): self.x += element it would generate code similar to: class D: x = [] def __init__(self, x=x): self.x = x def add(self, element): self.x += element This has the same issue as the original example using class C. That is, two instances of class D that do not specify a value for x when creating a class instance will share the same copy of x. Because dataclasses just use normal Python class creation they also share this behavior. There is no general way for Data Classes to detect this condition. Instead, dataclasses will raise a ValueError if it detects a default parameter of type list, dict, or set. This is a partial solution, but it does protect against many common errors. A: The above answer is not correct. A mutable default value, such as an empty list, can be defined in a data class by using default_factory. @dataclass class D: x: list = field(default_factory=list) Using default factory functions is a way to create new instances of mutable types as default values for fields: @dataclass class D: x: list = field(default_factory=list) assert D().x is not D().x The link is here
Why can't dataclasses have mutable defaults in their class attributes declaration?
This seems like something that is likely to have been asked before, but an hour or so of searching has yielded no results. Passing default list argument to dataclasses looked promising, but it's not quite what I'm looking for. Here's the problem: when one tries to assign a mutable value to a class attribute, there's an error: @dataclass class Foo: bar: list = [] # ValueError: mutable default <class 'list'> for field bar is not allowed: use default_factory I gathered from the error message that I'm supposed to use the following instead: @dataclass class Foo: bar: list = field(default_factory=list) But why are mutable defaults not allowed? Is it to enforce avoidance of the mutable default argument problem?
[ "It looks like my question was quite clearly answered in the docs (which derived from PEP 557, as shmee mentioned):\n\nPython stores default member variable values in class attributes. Consider this example, not using dataclasses:\nclass C:\n x = []\n def add(self, element):\n self.x.append(element)\n\no1 = C()\no2 = C()\no1.add(1)\no2.add(2)\nassert o1.x == [1, 2]\nassert o1.x is o2.x\n\nNote that the two instances of class C share the same class variable x, as expected.\nUsing dataclasses, if this code was valid:\n@dataclass\nclass D:\n x: List = []\n def add(self, element):\n self.x += element\n\nit would generate code similar to:\nclass D:\n x = []\n def __init__(self, x=x):\n self.x = x\n def add(self, element):\n self.x += element\n\nThis has the same issue as the original example using class C. That is, two instances of class D that do not specify a value for x when creating a class instance will share the same copy of x. Because dataclasses just use normal Python class creation they also share this behavior. There is no general way for Data Classes to detect this condition. Instead, dataclasses will raise a ValueError if it detects a default parameter of type list, dict, or set. This is a partial solution, but it does protect against many common errors.\n\n", "The above answer is not correct. A mutable default value, such as an empty list can be defined in data class by using default_factory.\n @dataclass\n class D:\n x: list = field(default_factory=list) \n\n\nUsing default factory functions is a way to create new instances of >mutable types as default values for fields:\n @dataclass\n class D:\n x: list = field(default_factory=list)\n\n assert D().x is not D().x\n\n\nThe link is here\n" ]
[ 106, 3 ]
[ "import field like dataclass.\nfrom dataclasses import dataclass, field\n\nand use this for lists:\n@dataclass\nclass Foo:\n bar: list = field(default_factory=list)\n\n" ]
[ -1 ]
[ "python", "python_3.x" ]
stackoverflow_0053632152_python_python_3.x.txt
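As a quick check of the default_factory behaviour discussed above, a minimal sketch (my addition, not from the thread) showing that each instance gets its own list, including with a custom factory:

from dataclasses import dataclass, field

@dataclass
class Foo:
    bar: list = field(default_factory=list)
    baz: list = field(default_factory=lambda: [1, 2, 3])  # any zero-argument callable works

a, b = Foo(), Foo()
a.bar.append(99)
print(b.bar)            # [] - not shared between instances
print(a.baz is b.baz)   # False - the factory runs once per instance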
Q: Python 3.10 .join function questions So let's say, I want to do something like this a = ['AB', 'CD'] s = '1. \n' print(s.join(a)) Expected Output: 1. AB 2. CD Actual Output: AB1. CD So my question is, How can I add something at the beginning of the string s? And also increase the number. example: 1. ... 2. ... A: a = ['AB', 'CD'] rs = "" for i, v in enumerate(a): rs += f"{i + 1}. {v}\n" print(rs) A: You can use enumerate and a list comprehension to create that string: a = ['AB', 'CD'] s = '\n'.join( f"{idx}. {val}" for idx, val in enumerate(a, 1))
Python 3.10 .join function questions
So let's say, I want to do something like this a = ['AB', 'CD'] s = '1. \n' print(s.join(a)) Expected Output: 1. AB 2. CD Actual Output: AB1. CD So my question is, How can I add something at the beginning of the string s? And also increase the number. example: 1. ... 2. ...
[ "a = ['AB', 'CD']\nrs = \"\"\nfor i, v in enumerate(a):\n rs += f\"{i + 1}. {v}\\n\"\nprint(rs)\n\n", "You can use enumerate and a list comprehension to create that string:\na = ['AB', 'CD']\n\ns = '\\n'.join(\n f\"{idx}. {val}\"\n for idx, val in enumerate(a, 1))\n\n" ]
[ 0, 0 ]
[]
[]
[ "list", "printing", "python", "string" ]
stackoverflow_0074516502_list_printing_python_string.txt
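A quick check (my addition) that the start argument of enumerate in the second answer is what produces the 1-based numbering:

a = ['AB', 'CD']
print('\n'.join(f"{i}. {v}" for i, v in enumerate(a, start=1)))
# 1. AB
# 2. CD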
Q: Increase performance of df.rolling(...).apply(...) for large dataframes Execution time of this code is too long. df.rolling(window=255).apply(myFunc) My dataframes shape is (500, 10000). 0 1 ... 9999 2021-11-01 0.011111 0.054242 2021-11-04 0.025244 0.003653 2021-11-05 0.524521 0.099521 2021-11-06 0.054241 0.138321 ... I make the calculation for each date with the last 255 date values. myFunc looks like: def myFunc(x): coefs = ... return np.sqrt(np.sum(x ** 2 * coefs)) I tried to use swifter but performances are the same : import swifter df.swifter.rolling(window=255).apply(myFunc) I also tried with Dask, but I think I didn't understand it well because the performance are not much better: import dask.dataframe as dd ddf = dd.from_pandas(df) ddf = ddf.rolling(window=255).apply(myFunc, raw=False) ddf.execute() I didn't manage to parallelize the execution with partitions. How can I use dask to improve performance ? I'm on Windows. A: This can be done using numpy+numba pretty efficiently. Quick MRE: import numpy as np, pandas as pd, numba df = pd.DataFrame( np.random.random(size=(500, 10000)), index=pd.date_range("2021-11-01", freq="D", periods=500) ) coefs = np.random.random(size=255) Write the function using pure numpy operations and simple loops, making use of numba.njit(parallel=True) and numba.prange: @numba.njit(parallel=True) def numba_func(values, coefficients): # define result array: size of original, minus length of # coefficients, + 1 result_tmp = np.zeros( shape=(values.shape[0] - len(coefficients) + 1, values.shape[1]), dtype=values.dtype, ) result_final = np.empty_like(result_tmp) # nested for loops are your friend with numba! # (you must unlearn what you have learned) for j in numba.prange(values.shape[1]): for i in range(values.shape[0] - len(coefficients) + 1): for k in range(len(coefficients)): result_tmp[i, j] += values[i + k, j] ** 2 * coefficients[k] result_final[:, j] = np.sqrt(result_tmp[:, j]) return result_final This runs very quickly: In [5]: %%time ...: result = pd.DataFrame( ...: numba_func(df.values, coefs), ...: index=df.index[len(coefs) - 1:], ...: ) ...: ...: CPU times: user 1.69 s, sys: 40.9 ms, total: 1.73 s Wall time: 844 ms Note: I'm a huge fan of dask. But the first rule of dask performance is don't use dask. If it's small enough to fit comfortably into memory, you'll usually get the best performance from tuning your pandas or numpy operations and leveraging speedups from cython, numba, etc. And once a problem is big enough to move to dask, these same tuning rules apply to the operations you perform on dask chunks/partitions, too! A: First, since you are using numpy functions, specify the parameter raw=True. Toy example: import pandas as pd import numpy as np def foo(x): coefs = 2 return np.sqrt(np.sum(x ** 2 * coefs)) df = pd.DataFrame(np.random.random((500, 10000))) %%time res = df.rolling(250).apply(foo) Wall time: 359.3 s # with raw=True %%time res = df.rolling(250).apply(foo, raw=True) Wall time: 15.2 s You can also easily parallelize your calculations using the parallel-pandas library. Only two additional lines of code! 
# pip install parallel-pandas import pandas as pd import numpy as np from parallel_pandas import ParallelPandas #initialize parallel-pandas ParallelPandas.initialize(n_cpu=8, disable_pr_bar=True) def foo(x): coefs = 2 return np.sqrt(np.sum(x ** 2 * coefs)) df = pd.DataFrame(np.random.random((500, 1000))) # p_apply - is parallel analogue of apply method %%time res = df.rolling(250).p_apply(foo, raw=True, executor='processes') Wall time: 2.2 s With engine='numba' %%time res = df.rolling(250).p_apply(foo, raw=True, executor='processes', engine='numba') Wall time: 1.2 s Total speedup is 359/1.2 ~ 300!
Increase performance of df.rolling(...).apply(...) for large dataframes
Execution time of this code is too long. df.rolling(window=255).apply(myFunc) My dataframe's shape is (500, 10000). 0 1 ... 9999 2021-11-01 0.011111 0.054242 2021-11-04 0.025244 0.003653 2021-11-05 0.524521 0.099521 2021-11-06 0.054241 0.138321 ... I make the calculation for each date with the last 255 date values. myFunc looks like: def myFunc(x): coefs = ... return np.sqrt(np.sum(x ** 2 * coefs)) I tried to use swifter but performance is the same: import swifter df.swifter.rolling(window=255).apply(myFunc) I also tried with Dask, but I think I didn't understand it well because the performance is not much better: import dask.dataframe as dd ddf = dd.from_pandas(df) ddf = ddf.rolling(window=255).apply(myFunc, raw=False) ddf.execute() I didn't manage to parallelize the execution with partitions. How can I use dask to improve performance? I'm on Windows.
[ "This can be done using numpy+numba pretty efficiently.\nQuick MRE:\nimport numpy as np, pandas as pd, numba\n\ndf = pd.DataFrame(\n np.random.random(size=(500, 10000)),\n index=pd.date_range(\"2021-11-01\", freq=\"D\", periods=500)\n)\n\ncoefs = np.random.random(size=255)\n\nWrite the function using pure numpy operations and simple loops, making use of numba.njit(parallel=True) and numba.prange:\n@numba.njit(parallel=True)\ndef numba_func(values, coefficients):\n # define result array: size of original, minus length of\n # coefficients, + 1\n result_tmp = np.zeros(\n shape=(values.shape[0] - len(coefficients) + 1, values.shape[1]),\n dtype=values.dtype,\n )\n\n result_final = np.empty_like(result_tmp)\n\n # nested for loops are your friend with numba!\n # (you must unlearn what you have learned)\n for j in numba.prange(values.shape[1]):\n for i in range(values.shape[0] - len(coefficients) + 1):\n for k in range(len(coefficients)):\n result_tmp[i, j] += values[i + k, j] ** 2 * coefficients[k]\n\n result_final[:, j] = np.sqrt(result_tmp[:, j])\n\n return result_final\n\nThis runs very quickly:\nIn [5]: %%time\n ...: result = pd.DataFrame(\n ...: numba_func(df.values, coefs),\n ...: index=df.index[len(coefs) - 1:],\n ...: )\n ...:\n ...:\nCPU times: user 1.69 s, sys: 40.9 ms, total: 1.73 s\nWall time: 844 ms\n\nNote: I'm a huge fan of dask. But the first rule of dask performance is don't use dask. If it's small enough to fit comfortably into memory, you'll usually get the best performance from tuning your pandas or numpy operations and leveraging speedups from cython, numba, etc. And once a problem is big enough to move to dask, these same tuning rules apply to the operations you perform on dask chunks/partitions, too!\n", "First, since you are using numpy functions, specify the parameter raw=True. Toy example:\nimport pandas as pd\nimport numpy as np\n\ndef foo(x):\n coefs = 2\n return np.sqrt(np.sum(x ** 2 * coefs)) \n\ndf = pd.DataFrame(np.random.random((500, 10000)))\n\n%%time\nres = df.rolling(250).apply(foo)\n\nWall time: 359.3 s\n\n# with raw=True\n%%time\nres = df.rolling(250).apply(foo, raw=True)\n\nWall time: 15.2 s\n\n\nYou can also easily parallelize your calculations using the parallel-pandas library. Only two additional lines of code!\n# pip install parallel-pandas\nimport pandas as pd\nimport numpy as np\nfrom parallel_pandas import ParallelPandas\n\n#initialize parallel-pandas\nParallelPandas.initialize(n_cpu=8, disable_pr_bar=True)\n\ndef foo(x):\n coefs = 2\n return np.sqrt(np.sum(x ** 2 * coefs)) \n\ndf = pd.DataFrame(np.random.random((500, 1000)))\n\n# p_apply - is parallel analogue of apply method\n%%time\nres = df.rolling(250).p_apply(foo, raw=True, executor='processes')\n\nWall time: 2.2 s\n\nWith engine='numba'\n%%time\nres = df.rolling(250).p_apply(foo, raw=True, executor='processes', engine='numba')\n\nWall time: 1.2 s\n\n\nTotal speedup is 359/1.2 ~ 300!\n" ]
[ 2, 2 ]
[]
[]
[ "dask", "pandas", "python", "swifter" ]
stackoverflow_0074487361_dask_pandas_python_swifter.txt
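Because coefs is the same for every window, the rolling computation in the question is just a windowed dot product, so it can also be fully vectorized. A sketch (my addition, assuming len(coefs) equals the window size and NumPy >= 1.20 for sliding_window_view):

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def rolling_weighted_norm(values, coefs):
    # windowed view over the squared values: (n_rows - window + 1, n_cols, window)
    windows = sliding_window_view(values ** 2, len(coefs), axis=0)
    # contract the trailing window axis against coefs, then take the square root
    return np.sqrt(windows @ coefs)

result = rolling_weighted_norm(df.to_numpy(), coefs)   # shape (500 - 255 + 1, 10000)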
Q: How to get the indexes of the same values in a list? Say I have a list like this: l = [1, 2, 3, 4, 5, 3] how do I get the indexes of those 3s that have been repeated? A: First you need to figure out which elements are repeated and where. I do it by indexing it in a dictionary. Then you need to extract all repeated values. from collections import defaultdict l = [1, 2, 3, 4, 5, 3] _indices = defaultdict(list) for index, item in enumerate(l): _indices[item].append(index) for key, value in _indices.items(): if len(value) > 1: # Do something with them print(key, value) Output: 3 [2, 5] Another would be to filter them out like so: duplicates_dict = {key: indices for key, indices in _indices.items() if len(indices) > 1} A: You could use a dictionary comprehension to get all the repeated numbers and their indexes in one go: L = [1, 2, 3, 4, 5, 3, 8, 9, 9, 8, 9] R = { n:rep[n] for rep in [{}] for i,n in enumerate(L) if rep.setdefault(n,[]).append(i) or len(rep[n])==2 } print(R) {3: [2, 5], 9: [7, 8, 10], 8: [6, 9]} The equivalent using a for loop would be: R = dict() for i,n in enumerate(L): R.setdefault(n,[]).append(i) R = {n:rep for n,rep in R.items() if len(rep)>1} Counter from collections could be used to avoid the unnecessary creation of single item lists: from collections import Counter counts = Counter(L) R = dict() for i,n in enumerate(L): if counts[n]>1: R.setdefault(n,[]).append(i) A: Find duplicates and loop through the list to find the corresponding index locations. Not the most efficient, but it works input_list = [1,4,5,7,1,2,4] duplicates = input_list.copy() for x in set(duplicates): duplicates.remove(x) duplicates = list(set(duplicates)) dict_duplicates = {} for d in duplicates: l_ind = [] dict_duplicates[d] = l_ind for i in range(len(input_list)): if d == input_list[i]: l_ind.append(i) dict_duplicates A: If you want to access and use all of them, you can iterate over the position on the list, and specify this position in the 'index' function. l = [1, 2, 3, 4, 5, 3] repeated_indexes = [] pos = 0 for item in l: if item == 3: index = l.index(item, pos) repeated_indexes.append(index) pos +=1 See documentation of the index function here: https://docs.python.org/3/library/array.html#array.array.index
How to get the indexes of the same values in a list?
Say I have a list like this: l = [1, 2, 3, 4, 5, 3] how do I get the indexes of those 3s that have been repeated?
[ "First you need to figure out which elements are repeated and where. I do it by indexing it in a dictionary.\nThen you need to extract all repeated values.\nfrom collections import defaultdict\n\nl = [1, 2, 3, 4, 5, 3]\n_indices = defaultdict(list)\n\nfor index, item in enumerate(l):\n _indices[item].append(index)\n\nfor key, value in _indices.items():\n if len(value) > 1:\n # Do something when them\n print(key, value)\n\nOutput:\n3 [2, 5]\n\nAnother would be to filter them out like so:\nduplicates_dict = {key: indices for key, indices in _indices.items() if len(indices) > 1}\n\n", "you could use a dictionary comprehension to get all the repeated numbers and their indexes in one go:\nL = [1, 2, 3, 4, 5, 3, 8, 9, 9, 8, 9]\n\nR = { n:rep[n] for rep in [{}] for i,n in enumerate(L) \n if rep.setdefault(n,[]).append(i) or len(rep[n])==2 }\n\nprint(R)\n\n{3: [2, 5], \n 9: [7, 8, 10], \n 8: [6, 9]}\n\nThe equivalent using a for loop would be:\nR = dict()\nfor i,n in enumerate(L):\n R.setdefault(n,[]).append(i)\nR = {n:rep for n,rep in R.items() if len(rep)>1}\n\nCounter from collections could be used to avoid the unnecessary creation of single item lists:\nfrom collections import Counter\ncounts = Counter(L)\nR = dict()\nfor i,n in enumerate(L):\n if counts[n]>1:\n R.setdefault(n,[]).append(i)\n\n", "find deplicates and loop through the list to find the corresponding index locations. Not the most efficient, but works\ninput_list = [1,4,5,7,1,2,4]\nduplicates = input_list.copy()\n\nfor x in set(duplicates):\n duplicates.remove(x)\n\nduplicates = list(set(duplicates))\ndict_duplicates = {}\nfor d in duplicates:\n l_ind = []\n dict_duplicates[d] = l_ind \n for i in range(len(input_list)):\n if d == input_list[i]:\n l_ind.append(i)\ndict_duplicates \n\n", "If you want to access and use all of them, you can iterate over the position on the list, and specify this position in 'index' function.\nl = [1, 2, 3, 4, 5, 3]\nrepeated_indexes = []\npos = 0 \nfor item in l:\n if item == 3:\n index = l.index(item, pos)\n repeated_indexes.append(index)\n pos +=1\n\nSee documentation of index function here : https://docs.python.org/3/library/array.html#array.array.index\n" ]
[ 1, 1, 1, 1 ]
[]
[]
[ "arrays", "list", "python" ]
stackoverflow_0070488053_arrays_list_python.txt
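For the question as literally asked (the indexes of the repeated 3s), a one-line sketch (my addition) with enumerate is enough:

l = [1, 2, 3, 4, 5, 3]
print([i for i, v in enumerate(l) if v == 3])   # [2, 5]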
Q: How to add empty/dummy row with continuous datetime index in pandas? This is my dataframe consumption hour start_time 2022-09-30 14:00:00+02:00 199.0 14.0 2022-09-30 15:00:00+02:00 173.0 15.0 2022-09-30 16:00:00+02:00 173.0 16.0 2022-09-30 17:00:00+02:00 156.0 17.0 2022-09-30 18:00:00+02:00 142.0 18.0 2022-09-30 19:00:00+02:00 163.0 19.0 2022-09-30 20:00:00+02:00 138.0 20.0 2022-09-30 21:00:00+02:00 183.0 21.0 2022-09-30 22:00:00+02:00 138.0 22.0 2022-09-30 23:00:00+02:00 143.0 23.0 I want outout like this consumption hour start_time 2022-09-30 14:00:00+02:00 199.0 14.0 2022-09-30 15:00:00+02:00 173.0 15.0 2022-09-30 16:00:00+02:00 173.0 16.0 2022-09-30 17:00:00+02:00 156.0 17.0 2022-09-30 18:00:00+02:00 142.0 18.0 2022-09-30 19:00:00+02:00 163.0 19.0 2022-09-30 20:00:00+02:00 138.0 20.0 2022-09-30 21:00:00+02:00 183.0 21.0 2022-09-30 22:00:00+02:00 138.0 22.0 2022-09-30 23:00:00+02:00 143.0 23.0 *2022-09-31 00:00:00+02:00 00.0 00.0* *2022-09-31 01:00:00+02:00 00.0 01.0* Here my index is datetime (start_time), i want to create rows with continuation of datetime and values as dummy or zero. How to do it in pandas python? A: Create helper DataFrame and add to original by concat: N = 2 df1 = (pd.DataFrame({'consumption':0}, index=pd.date_range(df.index.max() + pd.Timedelta('1h'), df.index.max() + pd.Timedelta(f'{N}h'), freq='H')) .assign(hour=lambda x: x.index.hour)) df = pd.concat([df, df1]) print (df) consumption hour 2022-09-30 14:00:00+02:00 199.0 14.0 2022-09-30 15:00:00+02:00 173.0 15.0 2022-09-30 16:00:00+02:00 173.0 16.0 2022-09-30 17:00:00+02:00 156.0 17.0 2022-09-30 18:00:00+02:00 142.0 18.0 2022-09-30 19:00:00+02:00 163.0 19.0 2022-09-30 20:00:00+02:00 138.0 20.0 2022-09-30 21:00:00+02:00 183.0 21.0 2022-09-30 22:00:00+02:00 138.0 22.0 2022-09-30 23:00:00+02:00 143.0 23.0 2022-10-01 00:00:00+02:00 0.0 0.0 2022-10-01 01:00:00+02:00 0.0 1.0 Or use DataFrame.reindex with new index with added N hours: N = 2 df = (df.reindex(pd.date_range(df.index.min(), df.index.max() + pd.Timedelta(f'{N}h'), freq='H'), fill_value=0) .assign(hour=lambda x: x.index.hour)) print (df) consumption hour 2022-09-30 14:00:00+02:00 199.0 14 2022-09-30 15:00:00+02:00 173.0 15 2022-09-30 16:00:00+02:00 173.0 16 2022-09-30 17:00:00+02:00 156.0 17 2022-09-30 18:00:00+02:00 142.0 18 2022-09-30 19:00:00+02:00 163.0 19 2022-09-30 20:00:00+02:00 138.0 20 2022-09-30 21:00:00+02:00 183.0 21 2022-09-30 22:00:00+02:00 138.0 22 2022-09-30 23:00:00+02:00 143.0 23 2022-10-01 00:00:00+02:00 0.0 0 2022-10-01 01:00:00+02:00 0.0 1
How to add empty/dummy row with continuous datetime index in pandas?
This is my dataframe consumption hour start_time 2022-09-30 14:00:00+02:00 199.0 14.0 2022-09-30 15:00:00+02:00 173.0 15.0 2022-09-30 16:00:00+02:00 173.0 16.0 2022-09-30 17:00:00+02:00 156.0 17.0 2022-09-30 18:00:00+02:00 142.0 18.0 2022-09-30 19:00:00+02:00 163.0 19.0 2022-09-30 20:00:00+02:00 138.0 20.0 2022-09-30 21:00:00+02:00 183.0 21.0 2022-09-30 22:00:00+02:00 138.0 22.0 2022-09-30 23:00:00+02:00 143.0 23.0 I want output like this consumption hour start_time 2022-09-30 14:00:00+02:00 199.0 14.0 2022-09-30 15:00:00+02:00 173.0 15.0 2022-09-30 16:00:00+02:00 173.0 16.0 2022-09-30 17:00:00+02:00 156.0 17.0 2022-09-30 18:00:00+02:00 142.0 18.0 2022-09-30 19:00:00+02:00 163.0 19.0 2022-09-30 20:00:00+02:00 138.0 20.0 2022-09-30 21:00:00+02:00 183.0 21.0 2022-09-30 22:00:00+02:00 138.0 22.0 2022-09-30 23:00:00+02:00 143.0 23.0 *2022-09-31 00:00:00+02:00 00.0 00.0* *2022-09-31 01:00:00+02:00 00.0 01.0* Here my index is datetime (start_time), and I want to create rows continuing the datetime index, with dummy or zero values. How can I do it in pandas (Python)?
[ "Create helper DataFrame and add to original by concat:\nN = 2\ndf1 = (pd.DataFrame({'consumption':0}, \n index=pd.date_range(df.index.max() + pd.Timedelta('1h'),\n df.index.max() + pd.Timedelta(f'{N}h'),\n freq='H'))\n .assign(hour=lambda x: x.index.hour))\n\ndf = pd.concat([df, df1])\nprint (df)\n consumption hour\n2022-09-30 14:00:00+02:00 199.0 14.0\n2022-09-30 15:00:00+02:00 173.0 15.0\n2022-09-30 16:00:00+02:00 173.0 16.0\n2022-09-30 17:00:00+02:00 156.0 17.0\n2022-09-30 18:00:00+02:00 142.0 18.0\n2022-09-30 19:00:00+02:00 163.0 19.0\n2022-09-30 20:00:00+02:00 138.0 20.0\n2022-09-30 21:00:00+02:00 183.0 21.0\n2022-09-30 22:00:00+02:00 138.0 22.0\n2022-09-30 23:00:00+02:00 143.0 23.0\n2022-10-01 00:00:00+02:00 0.0 0.0\n2022-10-01 01:00:00+02:00 0.0 1.0\n\nOr use DataFrame.reindex with new index with added N hours:\nN = 2\ndf = (df.reindex(pd.date_range(df.index.min(), \n df.index.max() + pd.Timedelta(f'{N}h'), \n freq='H'), fill_value=0)\n .assign(hour=lambda x: x.index.hour))\n\nprint (df)\n consumption hour\n2022-09-30 14:00:00+02:00 199.0 14\n2022-09-30 15:00:00+02:00 173.0 15\n2022-09-30 16:00:00+02:00 173.0 16\n2022-09-30 17:00:00+02:00 156.0 17\n2022-09-30 18:00:00+02:00 142.0 18\n2022-09-30 19:00:00+02:00 163.0 19\n2022-09-30 20:00:00+02:00 138.0 20\n2022-09-30 21:00:00+02:00 183.0 21\n2022-09-30 22:00:00+02:00 138.0 22\n2022-09-30 23:00:00+02:00 143.0 23\n2022-10-01 00:00:00+02:00 0.0 0\n2022-10-01 01:00:00+02:00 0.0 1\n\n" ]
[ 1 ]
[]
[]
[ "datetime", "dummy_variable", "pandas", "python", "time_series" ]
stackoverflow_0074516628_datetime_dummy_variable_pandas_python_time_series.txt
Q: Selenium element not interactable error on headless mode but works without headless I'm trying to scrape the webpage ted.europa.eu using Python with Selenium to retrieve information from the tenders. The script is supposed to be executed once a day with the new publications. The problem I have is that navigating to the new tenders I need Selenium to apply a filter to get only the ones from the same day the script is executed. I already have the script for this and it works perfectly; the problem is that when I activate the headless mode I get the following error selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: [object HTMLInputElement] has no size and location This is the code I have that applies the filter I need: import sys import time import re from datetime import datetime from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains from dotenv import load_dotenv load_dotenv("../../../../.env") sys.path.append("../src") sys.path.append("../../../../utils") from driver import * from lted import LTED from runnable import * # start print('start...') counter = 0 start = datetime.now() # get driver driver = get_driver_from_url("https://ted.europa.eu/TED/browse/browseByMap.do") actions = ActionChains(driver) # change language to spanish WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "lgId"))) driver.find_element(By.ID, "lgId").click() driver.find_element(By.XPATH, "//select[@id='lgId']/option[text()='español (es)']").click() # click on "Busqueda avanzada" WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "goToSearch"))) driver.find_element(By.ID, "goToSearch").click() # accept cookies and close tab WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "cookie-consent-banner"))) driver.find_element(By.XPATH, "//div[@id='cookie-consent-banner']/div[1]/div[1]/div[2]/a[1]").click() driver.find_element(By.XPATH, "//div[@id='cookie-consent-banner']/div[1]/div[1]/div[2]/a[1]").click() # click on specific date and set to today WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "publicationDateSpecific"))) element = driver.find_element(By.ID, "publicationDateSpecific") actions.move_to_element(element).perform() driver.find_element(By.ID, "publicationDateSpecific").click() driver.find_element(By.CLASS_NAME, "ui-state-highlight").click() # click on search driver.find_element(By.ID, "search").click() From the imports the only thing I need to explain is that from the line from driver import * comes the method get_driver_from_url() that is used later in the code. This method looks like this: def get_driver_from_url(url): options = webdriver.ChromeOptions() options.add_argument("--no-sandbox") options.add_argument("--disable-dev-shm-usage") options.add_argument("--start-maximized") options.add_argument("--headless") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(url) return driver As I said this code works perfectly without the headless mode, but when activated I get the error. 
At first I got another error and, searching on the Internet, found out that it could be because the element is not on screen, so I added the argument "--start-maximized" to make sure the Chrome tab is as big as possible and added the ActionChains to use actions.move_to_element(element).perform(), but I get this error on this exact code line. Also tried changing the line WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "publicationDateSpecific"))) to WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "publicationDateSpecific"))) but it just didn't work. Update: Also tried changing to EC.visibility_of_element_located as mentioned in this post but didn't work either What am I doing wrong? A: This is probably because of the window size. Try adding this: chrome_options = Options() chrome_options.add_argument("--window-size=1920,1080") chrome_options.add_argument("--start-maximized") chrome_options.add_argument("--headless") A: So, after a long time of trial and error, I found that adding element = driver.find_element(By.ID, "publicationDateSpecific") driver.execute_script("window.scrollTo(0,"+str(element.location["y"])+")") makes the script work both in headless mode and normal mode
Selenium element not interactable error on headless mode but works without headless
I'm trying to scrape the webpage ted.europa.eu using Python with Selenium to retrieve information from the tenders. The script is supposed to be executed once a day with the new publications. The problem I have is that navigating to the new tenders I need Selenium to apply a filter to get only the ones from the same day the script is executed. I already have the script for this and it works perfectly; the problem is that when I activate the headless mode I get the following error selenium.common.exceptions.ElementNotInteractableException: Message: element not interactable: [object HTMLInputElement] has no size and location This is the code I have that applies the filter I need: import sys import time import re from datetime import datetime from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.common.action_chains import ActionChains from dotenv import load_dotenv load_dotenv("../../../../.env") sys.path.append("../src") sys.path.append("../../../../utils") from driver import * from lted import LTED from runnable import * # start print('start...') counter = 0 start = datetime.now() # get driver driver = get_driver_from_url("https://ted.europa.eu/TED/browse/browseByMap.do") actions = ActionChains(driver) # change language to spanish WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "lgId"))) driver.find_element(By.ID, "lgId").click() driver.find_element(By.XPATH, "//select[@id='lgId']/option[text()='español (es)']").click() # click on "Busqueda avanzada" WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "goToSearch"))) driver.find_element(By.ID, "goToSearch").click() # accept cookies and close tab WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "cookie-consent-banner"))) driver.find_element(By.XPATH, "//div[@id='cookie-consent-banner']/div[1]/div[1]/div[2]/a[1]").click() driver.find_element(By.XPATH, "//div[@id='cookie-consent-banner']/div[1]/div[1]/div[2]/a[1]").click() # click on specific date and set to today WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "publicationDateSpecific"))) element = driver.find_element(By.ID, "publicationDateSpecific") actions.move_to_element(element).perform() driver.find_element(By.ID, "publicationDateSpecific").click() driver.find_element(By.CLASS_NAME, "ui-state-highlight").click() # click on search driver.find_element(By.ID, "search").click() From the imports the only thing I need to explain is that from the line from driver import * comes the method get_driver_from_url() that is used later in the code. This method looks like this: def get_driver_from_url(url): options = webdriver.ChromeOptions() options.add_argument("--no-sandbox") options.add_argument("--disable-dev-shm-usage") options.add_argument("--start-maximized") options.add_argument("--headless") driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=options) driver.get(url) return driver As I said this code works perfectly without the headless mode, but when activated I get the error. At first I got another error and, searching on the Internet, found out that it could be because the element is not on screen, so I added the argument "--start-maximized" to make sure the Chrome tab is as big as possible and added the ActionChains to use actions.move_to_element(element).perform(), but I get this error on this exact code line. 
Also tried changing the line WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "publicationDateSpecific"))) to WebDriverWait(driver, 10).until(EC.element_to_be_clickable((By.ID, "publicationDateSpecific"))) but it just didn't work. Update: Also tried changing to EC.visibility_of_element_located as mentioned in this post but didn't work either What am I doing wrong?
[ "This is probably because of the window size.\nTry adding this:\n chrome_options = Options()\n chrome_options.add_argument(\"--window-size=1920,1080\")\n chrome_options.add_argument(\"--start-maximized\")\n chrome_options.add_argument(\"--headless\")\n\n", "So, after a long time of try and error, I found that adding\nelement = driver.find_element(By.ID, \"publicationDateSpecific\")\ndriver.execute_script(\"window.scrollTo(0,\"+str(element.location[\"y\"])+\")\")\n\nmakes the script work both in headless mode and normal mode\n" ]
[ 0, 0 ]
[]
[]
[ "python", "selenium", "selenium_chromedriver", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074473765_python_selenium_selenium_chromedriver_selenium_webdriver_web_scraping.txt
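Both answers come down to the same idea: in headless mode give the page a real viewport and bring the element into it. A combined sketch (my addition, assuming the same imports as in the question; scrollIntoView is a standard DOM call, not specific to this site):

options = webdriver.ChromeOptions()
options.add_argument("--headless")
options.add_argument("--window-size=1920,1080")   # explicit size instead of --start-maximized
driver = webdriver.Chrome(options=options)

element = driver.find_element(By.ID, "publicationDateSpecific")
driver.execute_script("arguments[0].scrollIntoView(true);", element)  # scroll the element into view
element.click()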
Q: Azure Databricks workspace CLI - cannot create new folder in /Repos folder as service principal I am developing an Azure pipeline and want to create (https://docs.databricks.com/dev-tools/api/latest/repos.html#operation/create-repo) a repo in Databricks and save it to /Repos/sub_folder/repo_name To test the commands in the pipeline, I am using the Databricks CLI and Repos API (as described in the link above) locally from my PC. This all works fine and all the repo files are saved into the subfolder under the root /Repos folder. When I try this in the pipeline, running as the Service Principal, the pipeline fails when trying to create a subfolder under /Repos. In the pipeline, I am issuing the following Databricks CLI command: databricks workspace mkdirs /Repos/sub_folder The error shown is Error: Authorization failed. Your token may be expired or lack the valid scope Is there some further configuration, or permissions, required to allow the Service Principal to create a folder/save files under /Repos? PS. I would use the /Shared workspace, instead of /Repos, but saving to /Shared does not seem to work with the "Files in Repos" feature (https://learn.microsoft.com/en-us/azure/databricks/repos/work-with-notebooks-other-files), which I need to access non-notebook files and run my actual model. Any suggestions much appreciated... A: The issue was two-fold. Firstly, the service principal was not configured with admin rights, so it could not create the sub-folder in /Repos. Once this was fixed, I got a different error when issuing the POST command trying to create the repo (in the newly created sub-folder). The error I got was: {"error_code":"PERMISSION_DENIED","message":"Missing Git provider credentials. Go to User Settings > Git Integration to add your personal access token."} The solution to this permissions issue has already been answered here
Azure Databricks workspace CLI - cannot create new folder in /Repos folder as service principal
I am developing an Azure pipeline and want to create (https://docs.databricks.com/dev-tools/api/latest/repos.html#operation/create-repo) a repo in Databricks and save it to /Repos/sub_folder/repo_name To test the commands in the pipeline, I am using the Databricks CLI and Repos API (as described in the link above) locally from my PC. This all works fine and all the repo files are saved into the subfolder under the root /Repos folder. When I try this in the pipeline, running as the Service Principal, the pipeline fails when trying to create a subfolder under /Repos. In the pipeline, I am issuing the following Databricks CLI command: databricks workspace mkdirs /Repos/sub_folder The error shown is Error: Authorization failed. Your token may be expired or lack the valid scope Is there some further configuration, or permissions, required to allow the Service Principal to create a folder/save files under /Repos? PS. I would use the /Shared workspace, instead of /Repos, but saving to /Shared does not seem to work with the "Files in Repos" feature (https://learn.microsoft.com/en-us/azure/databricks/repos/work-with-notebooks-other-files), which I need to access non-notebook files and run my actual model. Any suggestions much appreciated...
[ "The issue was two-fold. Firstly the service principle was not configured with admin rights so could not create the sub-folder in /Repos. Once this was fixed, I got a different error when issuing the post command trying to create the repo (in the newly created sub-folder). The error I got was:\n{\"error_code\":\"PERMISSION_DENIED\",\"message\":\"Missing Git provider credentials. Go to User Settings > Git Integration to add your personal access token.\"}\n\nThe solution to this permissions issue has already been answered here\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_databricks", "pipeline", "python" ]
stackoverflow_0074482670_azure_azure_databricks_pipeline_python.txt
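For reference, the "Git Integration" fix from the linked answer can be automated for a service principal through the Databricks Git Credentials REST endpoint. A sketch (my addition; the host, token and username values are placeholders, and the bearer token must belong to the service principal):

import requests

host = "https://adb-<workspace-id>.azuredatabricks.net"          # placeholder workspace URL
headers = {"Authorization": "Bearer <service-principal-token>"}  # placeholder token

resp = requests.post(
    f"{host}/api/2.0/git-credentials",
    headers=headers,
    json={
        "git_provider": "gitHub",              # or azureDevOpsServices, gitLab, ...
        "git_username": "ci-bot",              # placeholder
        "personal_access_token": "<git-pat>",  # placeholder
    },
)
resp.raise_for_status()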
Q: How do I count the number of messages per day in pycord? So I basically count all the messages in a channel. I also want to count the number of messages per day. I know message.created_at returns a datetime, but how do I count how many times a date is present in this list? This is my current code: count = 0 async for message in channel.history(limit=None): count += 1 print(message.created_at) I tried to do it like this: count = 0 async for message in channel.history(limit=None): count += 1 dates.append(message.created_at) print(dates.count(dates[0])) But this just returns "1" (while there are far more different days in the list) This is my first post on stack overflow, don't be toxic please, feedback is welcome! A: Your code isn't counting the number of messages on a certain date, it's counting the message on a certain datetime.datetime object, which represents a specific point in time (could be down to a microsecond depending on the API precision). This is because message.created_at returns the time and date of the message, what you want is probably message.created_at.date() (see datetime docs). If you want to know for each date how many messages there were, you can use a dictionary to count: messages_per_date = {} # empty dictionary async for message in channel.history(limit=None): message_date = message.created_at.date() # Add the date to the dictionary if necessary if message_date not in messages_per_date: messages_per_date[message_date] = 0 # Increment the number of messages on that date messages_per_date[message_date] += 1 # Print the number of messages today print(messages_per_date[datetime.date.today()]) You can use collections.defaultdict to make the code significantly shorter: import collections # This creates a dictionary in which the default value is zero messages_per_date = collections.defaultdict(int) async for message in channel.history(limit=None): # Now the values can be incremented without checking the key messages_per_date[message.created_at.date()] += 1 print(messages_per_date[datetime.date.today()])
How do I count the number of messages per day in pycord?
So I basically count all the messages in a channel. I also want to count the number of messages per day. I know message.created_at returns a datetime, but how do I count how many times a date is present in this list? This is my current code: count = 0 async for message in channel.history(limit=None): count += 1 print(message.created_at) I tried to do it like this: count = 0 async for message in channel.history(limit=None): count += 1 dates.append(message.created_at) print(dates.count(dates[0])) But this just returns "1" (while there are far more different days in the list) This is my first post on stack overflow, don't be toxic please, feedback is welcome!
[ "Your code isn't counting the number of messages on a certain date, it's counting the message on a certain datetime.datetime object, which represents a specific point in time (could be down to a microsecond depending on the API precision).\nThis is because message.created_at returns the time and date of the message, what you want is probably message.created_at.date() (see datetime docs).\nIf you want to know for each date how many messages there were, you can use a dictionary to count:\nmessages_per_date = {} # empty dictionary\nasync for message in channel.history(limit=None):\n message_date = message.created_at.date()\n # Add the date to the dictionary if necessary\n if message_date not in messages_per_date:\n messages_per_date[message_date] = 0\n # Increment the number of messages on that date\n messages_per_date[message_date] += 1\n# Print the number of messages today\nprint(messages_per_date[datetime.date.today()])\n\nYou can use collections.defaultdict to make the code significantly shorter:\nimport collections\n\n# This creates a dictionary in which the default value is zero\nmessages_per_date = collections.defaultdict(int)\n\nasync for message in channel.history(limit=None):\n # Now the values can be incremented without checking the key\n messages_per_date[message.created_at.date()] += 1\n\nprint(messages_per_date[datetime.date.today()])\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "discord", "pycord", "python" ]
stackoverflow_0074516627_datetime_discord_pycord_python.txt
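The defaultdict pattern in the answer can also be written with collections.Counter; a sketch (my addition, to be run inside an async context with a valid channel object):

import collections
import datetime

async def messages_per_day(channel):
    dates = [m.created_at.date() async for m in channel.history(limit=None)]
    counts = collections.Counter(dates)   # maps date -> number of messages
    return counts[datetime.date.today()]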
Q: how to check right number of products and breaking my while loop? I got a mission to wright a code to grocery shopping list, every number that I put in should activete other function. I got 3 problems in this code. the greater problem is that the loop should break when I put in the number 9 and that isn't break. in the function how_moch_product_name_in_list I need a for loop that add one to how_much_products like the number of the products from the same name and it always wright 1 because the loop run over 1 time if I put in beer for example, how to chnge it and wright it right. I tried to wright the iterable number as int like number = int(input()) and the rest of the if statment also as int and he written me 'int' object is not iterable, how I need to fix this to int that work? that the full code. def print_list(grocery_list): """The function print grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ print(grocery_list) def print_len_list(grocery_list): """The function print the legth of grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ print(len(grocery_list)) def product_in_grocery_list(grocery_list): """The function check if the product is in the grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ check_product_name_in_grocery_list = input("""the ____ product is in the grocery list? """) check_product_name_in_grocery_list = check_product_name_in_grocery_list.split(",") print(check_product_name_in_grocery_list) if check_product_name_in_grocery_list in grocery_list: print(True) else: print(False) def how_moch_product_name_in_list(grocery_list): """The function check how much products there is in the grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ check_how_much_product_name_in_grocery_list = input("""how much the ____ product is in the grocery list? """) how_much_products = 0 for check_how_much_product_name_in_grocery_list in grocery_list: if check_how_much_product_name_in_grocery_list in grocery_list: how_much_products += 1 print(how_much_products) def delete_product_from_list(grocery_list): """The function delete product from grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none. :rtype: none. """ delete_product = input("""delete ___ product from grocery list """) grocery_list.remove(delete_product) print(grocery_list) def add_product_to_list(grocery_list): """The function add product to grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none. :rtype: none. """ add_product = input("""add ___ product to grocery list """) grocery_list.append(add_product) print(grocery_list) def print_ilegal_products(grocery_list): """The function check if prodoct is liegal product in the grocery shopping list. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ ilegal_products = [] for j in grocery_list: if j.isalpha() == False: ilegal_products.append(j) elif len(j) < 3: ilegal_products.append(j) print(ilegal_products) def remove_duplicates_from_list(grocery_list): """The function check if there is duplicates products is in the grocery shopping list and removing them. :param my_list: Shopping list. :type my_list: list. :return: none. :rtype: none. 
""" grocery_list = list(dict.fromkeys(grocery_list)) print(grocery_list) def process_grocery_store(grocery_list): """The function processing the grocery shopping list. you put number and this number is the action you want to process the list and it run the right function in a loop until you input 9 for a number and then the loop breaks. :param my_list: Shopping list. :type my_list: list. :return: none :rtype: none """ number = input() for i in number: if i == "1": print_list(grocery_list) elif i == "2": print_len_list(grocery_list) elif i == "3": product_in_grocery_list(grocery_list) elif i == "4": how_moch_product_name_in_list(grocery_list) elif i == "5": delete_product_from_list(grocery_list) elif i == "6": add_product_to_list(grocery_list) elif i == "7": print_ilegal_products(grocery_list) elif i == "8": remove_duplicates_from_list(grocery_list) elif i == "9": return "break" def main(): #grocery_store_list = input("""enter your grocery store list here #""") grocery_store_list = ['cucamber', 'cheese', 'egg', 'soap', 'tomato', 'chips', 'fish', 'meat', 'soymilk', 'chocklate', 'coffee', 'lettece', 'orange', 'apple', 'salmon', 'bread', 'beer', 'beer'] #grocery_store_list = grocery_store_list.split(",") while process_grocery_store(grocery_store_list) != "break": process_grocery_store(grocery_store_list) if __name__ == "__main__": main() A: The while loop in your main function calls the process_grocery_store function twice - once in the condition evaluation, and once if the condition is True, and you ignore the return value there. so if you enter "break" when the function is run the inside the loop, than the "break" has no effect. Try something like this: status = "" while status != "break": status = process_grocery_store(grocery_store_list) For the second problem, you override the value of the input() in the for loop, so the condition inside the loop is always met and you always increase how_much_products. Also, instead of checking if the item you iterate over is the one you want, you check if it is in the list. wanted_product = input("""how much the ____ product is in the grocery list? """) how_much_products = 0 for product in grocery_list: if product == wanted_product : how_much_products += 1 print(how_much_products)
how to check right number of products and breaking my while loop?
I got an assignment to write code for a grocery shopping list; every number I enter should activate a different function. I have 3 problems in this code. The biggest problem is that the loop should break when I enter the number 9, but it doesn't break. In the function how_moch_product_name_in_list I need a for loop that adds one to how_much_products for every product with the same name, but it always prints 1 because the loop only counts once if I enter beer, for example; how do I change it and write it correctly? I also tried to read the input number as an int, like number = int(input()), with the rest of the if statements compared as ints, but I got 'int' object is not iterable; how do I fix this so it works with an int? This is the full code. def print_list(grocery_list): """The function prints the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ print(grocery_list) def print_len_list(grocery_list): """The function prints the length of the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ print(len(grocery_list)) def product_in_grocery_list(grocery_list): """The function checks if the product is in the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ check_product_name_in_grocery_list = input("""the ____ product is in the grocery list? """) check_product_name_in_grocery_list = check_product_name_in_grocery_list.split(",") print(check_product_name_in_grocery_list) if check_product_name_in_grocery_list in grocery_list: print(True) else: print(False) def how_moch_product_name_in_list(grocery_list): """The function checks how many products with a given name there are in the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ check_how_much_product_name_in_grocery_list = input("""how much the ____ product is in the grocery list? """) how_much_products = 0 for check_how_much_product_name_in_grocery_list in grocery_list: if check_how_much_product_name_in_grocery_list in grocery_list: how_much_products += 1 print(how_much_products) def delete_product_from_list(grocery_list): """The function deletes a product from the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none. :rtype: none. """ delete_product = input("""delete ___ product from grocery list """) grocery_list.remove(delete_product) print(grocery_list) def add_product_to_list(grocery_list): """The function adds a product to the grocery shopping list. :param grocery_list: Shopping list. :type grocery_list: list. :return: none. :rtype: none. """ add_product = input("""add ___ product to grocery list """) grocery_list.append(add_product) print(grocery_list) def print_ilegal_products(grocery_list): """The function checks if each product in the grocery shopping list is a legal product. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ ilegal_products = [] for j in grocery_list: if j.isalpha() == False: ilegal_products.append(j) elif len(j) < 3: ilegal_products.append(j) print(ilegal_products) def remove_duplicates_from_list(grocery_list): """The function checks if there are duplicate products in the grocery shopping list and removes them. :param grocery_list: Shopping list. :type grocery_list: list. :return: none. :rtype: none. """ grocery_list = list(dict.fromkeys(grocery_list)) print(grocery_list) def process_grocery_store(grocery_list): """The function processes the grocery shopping list. You put in a number and this number is the action you want to apply to the list; it runs the right function in a loop until you input 9 as the number and then the loop breaks. :param grocery_list: Shopping list. :type grocery_list: list. :return: none :rtype: none """ number = input() for i in number: if i == "1": print_list(grocery_list) elif i == "2": print_len_list(grocery_list) elif i == "3": product_in_grocery_list(grocery_list) elif i == "4": how_moch_product_name_in_list(grocery_list) elif i == "5": delete_product_from_list(grocery_list) elif i == "6": add_product_to_list(grocery_list) elif i == "7": print_ilegal_products(grocery_list) elif i == "8": remove_duplicates_from_list(grocery_list) elif i == "9": return "break" def main(): #grocery_store_list = input("""enter your grocery store list here #""") grocery_store_list = ['cucamber', 'cheese', 'egg', 'soap', 'tomato', 'chips', 'fish', 'meat', 'soymilk', 'chocklate', 'coffee', 'lettece', 'orange', 'apple', 'salmon', 'bread', 'beer', 'beer'] #grocery_store_list = grocery_store_list.split(",") while process_grocery_store(grocery_store_list) != "break": process_grocery_store(grocery_store_list) if __name__ == "__main__": main()
[ "The while loop in your main function calls the process_grocery_store function twice - once in the condition evaluation, and once if the condition is True, and you ignore the return value there. so if you enter \"break\" when the function is run the inside the loop, than the \"break\" has no effect.\nTry something like this:\nstatus = \"\"\nwhile status != \"break\":\n status = process_grocery_store(grocery_store_list)\n\nFor the second problem, you override the value of the input() in the for loop, so the condition inside the loop is always met and you always increase how_much_products. Also, instead of checking if the item you iterate over is the one you want, you check if it is in the list.\nwanted_product = input(\"\"\"how much the ____ product is in the grocery list?\n\"\"\")\nhow_much_products = 0\nfor product in grocery_list:\n if product == wanted_product :\n how_much_products += 1\nprint(how_much_products)\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074516051_python_python_3.x.txt
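For the counting problem specifically, the built-in list.count does the same job as the corrected loop; a one-line sketch (my addition):

wanted_product = input("how much the ____ product is in the grocery list?\n")
print(grocery_list.count(wanted_product))   # e.g. 2 for 'beer' in the sample list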
Q: Can't change a value of a function's variable inside a "with" block python I want my function to return True or False but it always return False. I've tried to remove the first assignment of user(user = False) but an error occured : "user is not defined" Here is the function def findUser(name): user = False with open('users', 'rb') as usersFile: myUnpickle = pickle.Unpickler(usersFile) users = myUnpickle.load() for userName, userScore in users.items(): if(userName.lower() == name.lower()): user = True return user The file content is a dictionary When I print users, I see them but when I try to find a user using the function I get a False With this I expected to get a "True" If data's content is {"Jean":20, "Joe":10} def findUser(name): user = False with open('data', 'rb') as usersFile: myUnpickle = pickle.Unpickler(usersFile) users = myUnpickle.load() for userName, userScore in users.items(): if(userName.lower() == name.lower()): user = True return user if(findUser("Joe")): ... else ... But all I have is the else block Here is the insertion def insertUser(name) If(os.path.exists("users")): with open('users', 'rb') as usersFile: myUnpickler = pickle.Unpickler(usersFile) fileContent = myUnpickler.load() fileContent[name] = 0 with open('users', 'wb') as usersFile: myPickler = pickle.Pickler(usersFile) myPickler.dump(fileContent) else: with open('users', 'rb') as usersFile: default = {"default":0} pickle.Pickler(usersFile).dump(default) A: I was just opening the wrong file when looking for the user, I didn't add what to do if the file wasn't find that's why when there was not an initialization of the return variable I got an error. I had just to change the file name. The variable's value inside the "with" wasn't take into account because the "with" block was never called, the condition to go there was always false. The file name was "users" but my code was : Def connectUser(userName): user = False if(os.path.exists("user")): //user Without "s" with: ... user = True return user The file doesn't exist, the code inside the if doesn't exist
Can't change a value of a function's variable inside a "with" block python
I want my function to return True or False but it always return False. I've tried to remove the first assignment of user(user = False) but an error occured : "user is not defined" Here is the function def findUser(name): user = False with open('users', 'rb') as usersFile: myUnpickle = pickle.Unpickler(usersFile) users = myUnpickle.load() for userName, userScore in users.items(): if(userName.lower() == name.lower()): user = True return user The file content is a dictionary When I print users, I see them but when I try to find a user using the function I get a False With this I expected to get a "True" If data's content is {"Jean":20, "Joe":10} def findUser(name): user = False with open('data', 'rb') as usersFile: myUnpickle = pickle.Unpickler(usersFile) users = myUnpickle.load() for userName, userScore in users.items(): if(userName.lower() == name.lower()): user = True return user if(findUser("Joe")): ... else ... But all I have is the else block Here is the insertion def insertUser(name) If(os.path.exists("users")): with open('users', 'rb') as usersFile: myUnpickler = pickle.Unpickler(usersFile) fileContent = myUnpickler.load() fileContent[name] = 0 with open('users', 'wb') as usersFile: myPickler = pickle.Pickler(usersFile) myPickler.dump(fileContent) else: with open('users', 'rb') as usersFile: default = {"default":0} pickle.Pickler(usersFile).dump(default)
[ "I was just opening the wrong file when looking for the user, I didn't add what to do if the file wasn't find that's why when there was not an initialization of the return variable I got an error. I had just to change the file name. The variable's value inside the \"with\" wasn't take into account because the \"with\" block was never called, the condition to go there was always false.\nThe file name was \"users\" but my code was :\nDef connectUser(userName):\n user = False\n if(os.path.exists(\"user\")): //user Without \"s\"\n with:\n ...\n user = True\n return user\n\nThe file doesn't exist, the code inside the if doesn't exist\n" ]
[ 0 ]
[]
[]
[ "environment_variables", "python", "return_value", "variables", "with_statement" ]
stackoverflow_0074502881_environment_variables_python_return_value_variables_with_statement.txt
Q: How do I create a number of inputs from an input I'm brand new to this, 10 days in. I've been thinking about how I could solve this for 30 minutes. Please help. Find Average You need to calculate the average of a collection of values. Every value will be a valid number. The average must be printed with two digits after the decimal point. Input- On the first line, you will receive N - the number of the values you must read On the next N lines you will receive numbers. Output- On the only line of output, print the average with two digits after the decimal point. Input 4 1 1 1 1 Output 1.00 Input 3 2.5 1.25 3 Output 2.25 From what I see, I figure I need to create as many inputs as the first N says, then input the numbers I'd like to average, and then create a formula to average them. I may be completely wrong in my logic; in any case I'd be happy for some advice. So far I tried creating a while loop to create inputs from the first input. But I have no clue how to write the syntax properly and continue with making the new inputs into variables I can use a=int(input()) x=1 while x<a or x==a: float(input()) x=x+1 A: a=int(input('Total number of input: ')) total = 0.0 for i in range(a): total += float(input(f'Input #{i+1}: ')) print('average: ', round(total/a,2)) Modified a bit on your version to make it work A: There were a few things that you were doing wrong. When the numbers are decimals, use float, not int. If you are looking for a single line input, this is how it's done. When writing the code please use proper variables and add a string that's asking for the input. total = 0 first_num=int(input("Number of inputs: ")) number = input("Enter numbers: ") input_nums = number.split() for i in range(first_num): total = total + float(input_nums[i]) average = total/first_num print(average) If you are looking for a multiline input, this is how it's done. first_num=int(input("Number of inputs: ")) x=1 total = 0 while x<first_num or x==first_num: number = float(input("Enter numbers: ")) total = total + number x=x+1 avg = total/first_num print(avg)
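A supplementary note: neither answer prints the average with exactly two digits after the decimal point, which the exercise requires; a format spec handles that. A minimal sketch, assuming one number per input line as in the problem statement:

    n = int(input())
    total = 0.0
    for _ in range(n):
        total += float(input())
    print(f"{total / n:.2f}")  # prints 2.25 for the inputs 2.5, 1.25, 3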
How do I create a number of inputs from an input
I'm brand new to this, 10 days in. Ive been thinking how I could solve this for 30 min. Please help. Find Average You need to calculate the average of a collection of values. Every value will be valid number. The average must be printed with two digits after the decimal point. Input- On the first line, you will receive N - the number of the values you must read On the next N lines you will receive numbers. Output- On the only line of output, print the average with two digits after the decimal point. Input 4 1 1 1 1 Output 1.00 Input 3 2.5 1.25 3 Output 2.25 From what I see, I figure I need to create as much inputs as the N of the first one is and then input the numbers Id like to avarage and then create a formula to avarage them. I may be completely wrong in my logic, in any case Id be happy for some advice. So far I tried creating a while loop to create inputs from the first input. But have no clue how to properly sintax it and continue with making the new inputs into variables I can use a=int(input()) x=1 while x<a or x==a: float(input()) x=x+1
[ "a=int(input('Total number of input: '))\n\ntotal = 0.0\n\nfor i in range(a):\n total += float(input(f'Input #{i+1}: '))\n \nprint('average: ', round(total/a,2))\n\nModified a bit on your version to make it work\n", "There were few things that you were doing wrong. when the numbers are decimals use float not int.\nIf you are looking for a single line input this should be how it's done.\nWhen writing the code please use proper variables and add a string that's asking for the input.\ntotal = 0\n\nfirst_num=int(input(\"Number of inputs: \"))\nnumber = input(\"Enter numbers: \")\ninput_nums = number.split()\n \n\nfor i in range(first_num):\n total = total + int(input_nums[i])\n\n\naverage = total/first_num\nprint(average)\n\nIf you are looking for a multiline output This should be how it's done.\n\nfirst_num=int(input(\"Number of inputs: \"))\nx=1\ntotal = 0\nwhile x<first_num or x==first_num:\n number = float(input(\"Enter numbers: \"))\n total = total + number\n x=x+1\n\navg = total/first_num\nprint(avg)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074516401_python_python_3.x.txt
Q: ValueError: could not convert string to float: '$2,464.34' I am trying to convert the data to float in order to make it a numerical format in Excel to sort the data, but I am getting an error. Wherever float is mentioned I added it now; previously there was no float. def get_output_value(self, key, value, neutral=None): display = value if value is None and not neutral.person.is_active: return '-', '-' if value is None: return float(f"${Decimal('.00')}"), float(f"${Decimal('.00')}") if isinstance(value, Decimal): return float(f"${intcomma(value.quantize(Decimal('.00')))}"), float(f"${intcomma(display.quantize(Decimal('.00')))}") return float(value), display A: The answer is in the error message in this case. '$2,464.34' is a string with $ and , characters in it, but float() expects a number-like input. TLDR, you want float('2464.34') but you're giving float('$2,464.34')
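A supplementary note: if the function should accept currency strings rather than reject them, the $ sign and thousands separators can be stripped before converting; a minimal sketch, assuming values always look like '$2,464.34':

    def to_number(value):
        if isinstance(value, str):
            value = value.replace("$", "").replace(",", "")
        return float(value)

    print(to_number("$2,464.34"))  # 2464.34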
ValueError: could not convert string to float: '$2,464.34'
I am trying to convert the data to float in order make it as numerical format in excel to sort the data i am getting error.wherever the float in mentioned i did it now but previously there was no float . def get_output_value(self, key, value, neutral=None): display = value if value is None and not neutral.person.is_active: return '-', '-' if value is None: return float(f"${Decimal('.00')}"), float(f"${Decimal('.00')}") if isinstance(value, Decimal): return float(f"${intcomma(value.quantize(Decimal('.00')))}"), float(f"${intcomma(display.quantize(Decimal('.00')))}") return float(value), display
[ "The answer is in the error message in this case. '$2,464.34' is a string with $ and , characters in it, but float() expects a number-like input.\nTLDR, you want float('2464.34') but you're giving float('$2,464.34')\n" ]
[ 4 ]
[]
[]
[ "django", "excel", "python", "python_3.x" ]
stackoverflow_0074516822_django_excel_python_python_3.x.txt
Q: Can mark_geoshape () be used for Canadian Provinces/cities? I'm looking to somehow figure out a way to insert a geographic graph of British Columbia which is a part of Canada in my data analysis. I have made this image here explaining what tree is being planted the most in Vancouver Now I want to make a geograph kind of like this https://altair-viz.github.io/gallery/airports_count.html to answer: how the density/distribution of species planted different in different neighbourhoods look like. This is what I'm having trouble with. Thus from vega_datasets import data world_map = alt.topo_feature(data.world_110m.url, 'countries') alt.Chart(world_map).mark_geoshape().project() and it's giving me a world map! Great! I tried zooming into just British Columbia but it's not really working out. Can anyone give me any direction on where to go and how I should go about answering my question? I really wanted to use geoshape I also found this if it's helpful https://global.mapit.mysociety.org/area/960958.html Thank you and I appreciate everyones advice! A: Canadian provinces are not part of world_110m map in the example gallery. You would need to provide your own geojson and topojson file that contains that information in order to work with Altair and then follow the guidelines here How can I make a map using GeoJSON data in Altair?. You can also work with geopandas together with Altair, which in many ways is more flexible. We are working on integrating info on this into the docs, but in the meantime you can view this preview version to get started https://deploy-preview-1--spontaneous-sorbet-49ed10.netlify.app/user_guide/marks/geoshape.html A: Looks like you got your data from here import pandas as pd import numpy as np import plotly.express as px #loading data df = pd.read_csv('street-trees.csv', sep=';') #extracting coords df['coords'] = df['Geom'].str.extract('\[(.*?)\]') df['lon'] = df['coords'].str.split(',').str[0].astype(float) df['lat'] = df['coords'].str.split(',').str[1].astype(float) #getting neighborhood totals df2 = pd.merge(df[['NEIGHBOURHOOD_NAME']].value_counts().reset_index(), df[['NEIGHBOURHOOD_NAME', 'lon', 'lat']].groupby('NEIGHBOURHOOD_NAME').mean().reset_index()) #drawing figure fig = px.scatter_mapbox(df2, lat='lat', lon='lon', color=0, opacity=0.5, center=dict(lon=df2['lon'].mean(), lat=df2['lat'].mean()), zoom=11, size=0) fig.update_layout(mapbox_style='open-street-map') fig.show() A: I am definitely not an expert but using Joel's advice ... you can download a geojson from here: https://data.opendatasoft.com/explore/dataset/georef-canada-province%40public/export/?disjunctive.prov_name_en Because I downloaded it I then had to open it rather than reference a url like most of the examples so can_prov_file = 'C:/PyProjects/georef-canada-province.geojson' with open(can_prov_file) as f: var_geojson = geojson.load(f) data_geojson = alt.InlineData(values=var_geojson, format=alt.DataFormat(property='features',type='json')) # chart object provinces = alt.Chart(data_geojson).mark_geoshape( ).encode( color="properties.prov_name_en:O" ).project( type='identity', reflectY=True ) Worked for me. Best of luck.
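A supplementary note: with geopandas installed, recent Altair versions accept a GeoDataFrame directly, which avoids hand-building InlineData; a minimal sketch, assuming the provinces GeoJSON linked in the last answer (the prov_name_en column name comes from that dataset):

    import altair as alt
    import geopandas as gpd

    gdf = gpd.read_file("georef-canada-province.geojson")
    bc = gdf[gdf["prov_name_en"] == "British Columbia"]  # keep only one province

    chart = alt.Chart(bc).mark_geoshape().project(type="identity", reflectY=True)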
Can mark_geoshape () be used for Canadian Provinces/cities?
I'm looking to somehow figure out a way to insert a geographic graph of British Columbia which is a part of Canada in my data analysis. I have made this image here explaining what tree is being planted the most in Vancouver Now I want to make a geograph kind of like this https://altair-viz.github.io/gallery/airports_count.html to answer: how the density/distribution of species planted different in different neighbourhoods look like. This is what I'm having trouble with. Thus from vega_datasets import data world_map = alt.topo_feature(data.world_110m.url, 'countries') alt.Chart(world_map).mark_geoshape().project() and it's giving me a world map! Great! I tried zooming into just British Columbia but it's not really working out. Can anyone give me any direction on where to go and how I should go about answering my question? I really wanted to use geoshape I also found this if it's helpful https://global.mapit.mysociety.org/area/960958.html Thank you and I appreciate everyones advice!
[ "Canadian provinces are not part of world_110m map in the example gallery. You would need to provide your own geojson and topojson file that contains that information in order to work with Altair and then follow the guidelines here How can I make a map using GeoJSON data in Altair?.\nYou can also work with geopandas together with Altair, which in many ways is more flexible. We are working on integrating info on this into the docs, but in the meantime you can view this preview version to get started https://deploy-preview-1--spontaneous-sorbet-49ed10.netlify.app/user_guide/marks/geoshape.html\n", "Looks like you got your data from here\nimport pandas as pd\nimport numpy as np\nimport plotly.express as px\n\n#loading data\ndf = pd.read_csv('street-trees.csv', sep=';')\n#extracting coords\ndf['coords'] = df['Geom'].str.extract('\\[(.*?)\\]')\ndf['lon'] = df['coords'].str.split(',').str[0].astype(float)\ndf['lat'] = df['coords'].str.split(',').str[1].astype(float)\n\n#getting neighborhood totals\ndf2 = pd.merge(df[['NEIGHBOURHOOD_NAME']].value_counts().reset_index(), df[['NEIGHBOURHOOD_NAME', 'lon', 'lat']].groupby('NEIGHBOURHOOD_NAME').mean().reset_index())\n\n#drawing figure\nfig = px.scatter_mapbox(df2,\n lat='lat',\n lon='lon',\n color=0,\n opacity=0.5,\n center=dict(lon=df2['lon'].mean(),\n lat=df2['lat'].mean()),\n zoom=11,\n size=0)\n\nfig.update_layout(mapbox_style='open-street-map')\n\nfig.show()\n\n\n", "I am definitely not an expert but using Joel's advice ... you can download a geojson from here:\nhttps://data.opendatasoft.com/explore/dataset/georef-canada-province%40public/export/?disjunctive.prov_name_en\nBecause I downloaded it I then had to open it rather than reference a url like most of the examples so\ncan_prov_file = 'C:/PyProjects/georef-canada-province.geojson'\nwith open(can_prov_file) as f:\n var_geojson = geojson.load(f)\ndata_geojson = alt.InlineData(values=var_geojson, format=alt.DataFormat(property='features',type='json'))\n\n# chart object\nprovinces = alt.Chart(data_geojson).mark_geoshape(\n).encode(\n color=\"properties.prov_name_en:O\"\n).project(\n type='identity', reflectY=True\n) \n\nWorked for me. Best of luck.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "altair", "geojson", "python", "topojson", "vega_lite" ]
stackoverflow_0074168389_altair_geojson_python_topojson_vega_lite.txt
Q: '<' not supported between instances of 'str' and 'int' in Python When I try to create a new variable in dataframe Call08q1_09q1 by adding two float variable Call08q1_09q1['MBS']=Call08q1_09q1['RCFD8639']+Call08q1_09q1['RCFD2170'] the error below shows up: '<' not supported between instances of 'str' and 'int' in Python However, I don't have string in my dataframe. Call08q1_09q1.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 39675 entries, 0 to 39674 Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 RSSD9001 39675 non-null float64 1 RSSD9999 39675 non-null float64 2 RCFD2170 39673 non-null float64 3 RCFD8639 38166 non-null float64 4 RCFD8641 38166 non-null float64 5 RCFD8639 38166 non-null float64 6 RCFD0211 38166 non-null float64 7 RCFD1287 38166 non-null float64 8 RCON3531 1107 non-null float64 9 RCFD1289 38166 non-null float64 10 RCFD1294 38166 non-null float64 11 RCFD1293 38166 non-null float64 12 RCFD1298 38166 non-null float64 13 RCON3532 1111 non-null float64 14 RCFD3210 38443 non-null float64 15 RIAD4230 38398 non-null float64 16 RIAD4340 38441 non-null float64 17 RCFD2122 39644 non-null float64 18 RCFD2125 249 non-null float64 19 RCFD1600 52 non-null float64 dtypes: float64(20) A: You have loads of nulls in your columns as the printout tells you. How are those represented? Can you add these nulls with ints? I suggest you debug by inspecting these null values and taking appropriate action to fill them, drop them, or otherwise transform them into something useful. A: The error has not occured in the line of code below since this one does not contain any comparison operator (<, >, ..). Call08q1_09q1['MBS']= Call08q1_09q1['RCFD8639'] + Call08q1_09q1['RCFD2170'] The error has for sure occured in a line where you try to compare a string with a number (int) like the scenario below : s="1" n= 3 s < n --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In [1], line 3 1 s="1" 2 n= 3 ----> 3 s<n TypeError: '<' not supported between instances of 'str' and 'int' To fix that you need to cast the string as a number : int(s) < n #True
'<' not supported between instances of 'str' and 'int' in Python
When I try to create a new variable in dataframe Call08q1_09q1 by adding two float variable Call08q1_09q1['MBS']=Call08q1_09q1['RCFD8639']+Call08q1_09q1['RCFD2170'] the error below shows up: '<' not supported between instances of 'str' and 'int' in Python However, I don't have string in my dataframe. Call08q1_09q1.info() <class 'pandas.core.frame.DataFrame'> RangeIndex: 39675 entries, 0 to 39674 Data columns (total 20 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 RSSD9001 39675 non-null float64 1 RSSD9999 39675 non-null float64 2 RCFD2170 39673 non-null float64 3 RCFD8639 38166 non-null float64 4 RCFD8641 38166 non-null float64 5 RCFD8639 38166 non-null float64 6 RCFD0211 38166 non-null float64 7 RCFD1287 38166 non-null float64 8 RCON3531 1107 non-null float64 9 RCFD1289 38166 non-null float64 10 RCFD1294 38166 non-null float64 11 RCFD1293 38166 non-null float64 12 RCFD1298 38166 non-null float64 13 RCON3532 1111 non-null float64 14 RCFD3210 38443 non-null float64 15 RIAD4230 38398 non-null float64 16 RIAD4340 38441 non-null float64 17 RCFD2122 39644 non-null float64 18 RCFD2125 249 non-null float64 19 RCFD1600 52 non-null float64 dtypes: float64(20)
[ "You have loads of nulls in your columns as the printout tells you. How are those represented? Can you add these nulls with ints? I suggest you debug by inspecting these null values and taking appropriate action to fill them, drop them, or otherwise transform them into something useful.\n", "The error has not occured in the line of code below since this one does not contain any comparison operator (<, >, ..).\nCall08q1_09q1['MBS']= Call08q1_09q1['RCFD8639'] + Call08q1_09q1['RCFD2170']\n\nThe error has for sure occured in a line where you try to compare a string with a number (int) like the scenario below :\ns=\"1\"\nn= 3\ns < n\n---------------------------------------------------------------------------\nTypeError Traceback (most recent call last)\nCell In [1], line 3\n 1 s=\"1\"\n 2 n= 3\n----> 3 s<n\n\nTypeError: '<' not supported between instances of 'str' and 'int'\n\nTo fix that you need to cast the string as a number :\nint(s) < n\n#True\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074516633_pandas_python.txt
Q: Appium-python-client, find element function I'm new to the appium library and I'm facing issues with the methods to locate elements in an Android app, the two methods in question are: 1- driver.find_element_by_id 2- driver.find_element #desired_cap defined above driver = web.driver.remote("http://localhost:4723/wd/hub", desired_cap) driver.find_element_by_id(#element id) The problem is I can't find this method; when I type it in the IDE it doesn't work. I tried using the find_element method but it didn't work either. desired_cap = { "appium:deviceName": "emulator-5554", "appium:appActivity": ".LandingScreen", "platformName": "Android", "appium:appPackage": "com.myapp", "appium:noReset": "true" } driver = webdriver.Remote("http://localhost:4723/wd/hub",desired_cap) driver.find_element("AppiumBy.ID","com.talabat:id/ivToolbarAddressArrow") I tried to make the second method work: "AppiumBy.ID" By.ID ID "id" none of them made the function work A: First of all, I believe you have a typo and it should read webdriver instead of web.driver: driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_cap) To answer your question: Appium works with accessibility ID, like so: driver.find_element(AppiumBy.ACCESSIBILITY_ID, "SomeAccessibilityID") Appium documentation here
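A supplementary note: in Appium-Python-Client 2.x the find_element_by_* helpers are gone (they were removed from Selenium 4, which the client builds on), and the locator strategy is passed as the AppiumBy enum, not as the literal string "AppiumBy.ID"; a minimal sketch, assuming the capabilities from the question:

    from appium import webdriver
    from appium.webdriver.common.appiumby import AppiumBy

    driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_cap)
    arrow = driver.find_element(AppiumBy.ID, "com.talabat:id/ivToolbarAddressArrow")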
Appium-python-client, find element function
im new to the appium library and I'm facing issues with the methods to locate elements in an android app, the two methods in question are: 1- driver.find_element_by_id 2- driver.find_element #desired_cap defined above driver = web.driver.remote("http://localhost:4723/wd/hub", desired_cap) driver.find_element_by_id(#element id) the problem is i can't find this method. when i type it in the ide it doesn't work. I tried using the find_element method but it didn't work too. desired_cap = { "appium:deviceName": "emulator-5554", "appium:appActivity": ".LandingScreen", "platformName": "Android", "appium:appPackage": "com.myapp", "appium:noReset": "true" } driver = webdriver.Remote("http://localhost:4723/wd/hub",desired_cap) driver.find_element("AppiumBy.ID","com.talabat:id/ivToolbarAddressArrow") i tried to make the second method work: "AppiumBy.ID" By.ID ID "id" none of them made the duntion work
[ "First of all, I believe you have a typo and it should read webdriver iso web.driver:\ndriver = webdriver.remote(\"http://localhost:4723/wd/hub\", desired_cap)\n\nTo answer your question: Appium works with accessibility ID, like so:\ndriver.find_element(\"AppiumBy.ACCESSIBILITY_ID\",\"SomeAccessibilityID\")\n\nAppium documentation here\n" ]
[ 0 ]
[]
[]
[ "python", "python_appium" ]
stackoverflow_0074506206_python_python_appium.txt
Q: Data transfer for Jinja2 from UpdateView I have such a template on every HTML page into which I want to transfer data from my url processors: {% block title %} {{ title }} {% endblock %} {% block username %} <b>{{username}}</b> {% endblock %} When using regular def functions, I pass them like this: data_ = { 'form': form, 'data': data, 'username': user_name, 'title': 'Add campaign page' } return render(request, 'dashboard/add_campaign.html', data_) But when I use a class based on UpdateView: class CampaignEditor(UpdateView): model = Campaigns template_name = 'dashboard/add_campaign.html' form_class = CampaignsForm There is a slightly different data structure; could you tell me how to pass the required data through the class? A: You override get_context_data: class CampaignEditor(UpdateView): model = Campaigns template_name = 'dashboard/add_campaign.html' form_class = CampaignsForm def get_context_data(self, *args, **kwargs): return super().get_context_data( *args, **kwargs, data='some_data', title='Add campaign page', username=self.request.user ) This builds the dictionary that is passed to the template. The UpdateView will already populate this, for example with the form.
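A supplementary note: for values that do not depend on the request, Django's class-based views also accept a plain extra_context attribute, which is shorter than overriding get_context_data; request-dependent values like the username still belong in get_context_data. A minimal sketch:

    class CampaignEditor(UpdateView):
        model = Campaigns
        template_name = 'dashboard/add_campaign.html'
        form_class = CampaignsForm
        extra_context = {'title': 'Add campaign page'}  # static values only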
Data transfer for Jinja2 from UpdateView
I have such a template on every html page into which I want to transfer data from my url processors: {% block title %} {{ title }} {% endblock %} {% block username %} <b>{{username}}</b> {% endblock %} When using regular def functions, I pass them like this: data_ = { 'form': form, 'data': data, 'username': user_name, 'title': 'Add campaign page' } return render(request, 'dashboard/add_campaign.html', data_) But when I use a class based on UpdateView: class CampaignEditor(UpdateView): model = Campaigns template_name = 'dashboard/add_campaign.html' form_class = CampaignsForm There is a slightly different data structure, could you tell me how to pass the required date through the class?
[ "You override get_context_data:\nclass CampaignEditor(UpdateView):\n model = Campaigns\n template_name = 'dashboard/add_campaign.html'\n form_class = CampaignsForm\n\n def get_context_data(self, *args, **kwargs):\n return super().get_context_data(\n *args,\n **kwargs,\n data='some_data',\n title='Add campagin page',\n username=self.request.user\n )\nThis builds the dictionary that is passed to the template. The UpdateView will already populate this for example with the form.\n" ]
[ 2 ]
[]
[]
[ "django", "jinja2", "python", "updateview" ]
stackoverflow_0074516873_django_jinja2_python_updateview.txt
Q: is there a function in python to round off three digits after decimal but show all three digits even if zero I want to round off the number 7.00087 and the output should be 7.000. I tried the round() function but it eliminates the zeros. A: This function gives the right output: def my_round(x, decimals): str_decimals = str(x % int(x)) return str(int(x)) + str_decimals[1 : 2 + decimals] What does my_round() do: 1- It retrieves the decimals in string format as a variable named str_decimals 2- Concatenates the rounded int(x) with the desired decimals and returns it as output. Let's apply it: Input x = 7.00087 y = my_round(x, 3) print(y) Output >>> 7.000
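A supplementary note: turning 7.00087 into 7.000 is truncation rather than rounding (round() would give 7.001), and a format spec keeps the trailing zeros; a minimal sketch:

    import math

    def truncate(x, decimals=3):
        factor = 10 ** decimals
        return f"{math.floor(x * factor) / factor:.{decimals}f}"

    print(truncate(7.00087))  # 7.000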
is there a function in python to round off three digits after decimal but show all three digits even if zero
i want to round off number 7.00087 and output should be 7.000 I tried round() function but it eliminates zero.
[ "This function gives the right output:\ndef my_round(x, decimals):\n str_decimals = str(x % int(x))\n return str(int(x)) + str_decimals[1 : 2 + decimals]\n\nWhat does my_round() do:\n1- It retrieves decimals in a string format as a variable named str_decimals\n2- Concatenates rounded int(x) with desired decimals and return as output.\nLet's apply it:\nInput\nx = 7.00087\ny = my_round(x, 3)\nprint(y)\n\nOuput\n>>> 7.000\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074516459_python.txt
Q: How to perform replacements on a string only if it is not preceded and followed by a substring? import re, datetime input_text = "Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del 2022_-_02_-_18 llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)" print(repr(input_text)) # --> output input_date_structure = r"(?P<year>\d*)_-_(?P<month>\d{2})_-_(?P<startDay>\d{2})" identify_only_date_regex_00 = input_date_structure + r"[\s|]*" + r"(\b\d{2}:\d{2}[\s|]*[ap]m)?" #to identify if there is a time after the date identify_only_date_regex_01 = r"(\b\d{2}:\d{2}[\s|]*[ap]m)?" + r"[\s|]*" + input_date_structure #to identify if there is a time before the date date_restructuring_structure = r"\g<year>_-_\g<month>_-_\g<startDay>" restructuring_only_date = lambda x: x.group() if x.group(1) else "(" + fr"{x.expand(date_restructuring_structure)}" + " 00:00 am)" #do the replace with re.sub() method and the regex patterns instructions input_text = re.sub(identify_only_date_regex_00, restructuring_only_date, input_text) input_text = re.sub(identify_only_date_regex_01, restructuring_only_date, input_text) #print output print(repr(input_text)) # --> output The wrong output that I get: 'Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del(2022_-_02_-_18 00:00 am) llega el avion, pero no ((2022_-_02_-_18 00:00 am) 20:16 pm) a las ((2022_-_02_-_18 00:00 am) 00:16 am), de esos hay dos (22)' The correct output, where only dates that were not preceded or followed by times hh:mm am or pm, indicated as r"(\d{2}:\d{2}[ \s|]*[ap]m)?", are modified: "Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del (2022_-_02_-_18 00:00 am) llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)" I don't understand why it's failing, at least I think I'm conditioning my regex correctly using \b and ? Not replace "sdsdds 2022_-_02_-_18 00:16 am sdsddssd2 Not replace "sdsdsd 00:16 am 2022_-_02_-_18 sdsdsd" replace "sdsdds 2022_-_02_-_18 sdsdsd" A: You can merge the two regexps to form an expression like (Group1)?(...)(Group5)? (5 is due to the fact you have three capturing groups in the middle part, and even though they are named capturing groups, they are still assigned a numeric ID), and then check if Group 1 or 5 is matched inside the lambda: import re, datetime input_text = "Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del 2022_-_02_-_18 llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)" input_date_structure = r"(?P<year>\d*)_-_(?P<month>\d{2})_-_(?P<startDay>\d{2})" identify_only_date_regex = r"(\b\d{2}:\d{2}[\s|]*[ap]m)?[\s|]*" + input_date_structure + r"[\s|]*(\b\d{2}:\d{2}[\s|]*[ap]m)?" date_restructuring_structure = r"\g<year>_-_\g<month>_-_\g<startDay>" restructuring_only_date = lambda x: x.group() if x.group(1) or x.group(5) else "(" + x.expand(date_restructuring_structure) + " 00:00 am)" input_text = re.sub(identify_only_date_regex, restructuring_only_date, input_text) print(repr(input_text)) # --> output See the Python demo. The output is Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del(2022_-_02_-_18 00:00 am)llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)
How to perform replacements on a string only if it is not preceded and followed by a substring?
import re, datetime input_text = "Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del 2022_-_02_-_18 llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)" print(repr(input_text)) # --> output input_date_structure = r"(?P<year>\d*)_-_(?P<month>\d{2})_-_(?P<startDay>\d{2})" identify_only_date_regex_00 = input_date_structure + r"[\s|]*" + r"(\b\d{2}:\d{2}[\s|]*[ap]m)?" #to identify if there is a time after the date identify_only_date_regex_01 = r"(\b\d{2}:\d{2}[\s|]*[ap]m)?" + r"[\s|]*" + input_date_structure #to identify if there is a time before the date date_restructuring_structure = r"\g<year>_-_\g<month>_-_\g<startDay>" restructuring_only_date = lambda x: x.group() if x.group(1) else "(" + fr"{x.expand(date_restructuring_structure)}" + " 00:00 am)" #do the replace with re.sub() method and the regex patterns instructions input_text = re.sub(identify_only_date_regex_00, restructuring_only_date, input_text) input_text = re.sub(identify_only_date_regex_01, restructuring_only_date, input_text) #print output print(repr(input_text)) # --> output The wrong output that I get: 'Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del(2022_-_02_-_18 00:00 am) llega el avion, pero no ((2022_-_02_-_18 00:00 am) 20:16 pm) a las ((2022_-_02_-_18 00:00 am) 00:16 am), de esos hay dos (22)' The correct output, where only dates that were not preceded or followed by times hh:mm am or pm, indicated as r"(\d{2}:\d{2}[ \s|]*[ap]m)?", are modified: "Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del (2022_-_02_-_18 00:00 am) llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)" I don't understand why it's failing, at least I think I'm conditioning my regex correctly using \b and ? Not replace "sdsdds 2022_-_02_-_18 00:16 am sdsddssd2 Not replace "sdsdsd 00:16 am 2022_-_02_-_18 sdsdsd" replace "sdsdds 2022_-_02_-_18 sdsdsd"
[ "You can merge the two regexps to form an expression like (Group1)?(...)(Group5)? (5 is due to the fact you have three capturing groups in the middle part, and even though they are named capturing groups, they are still assigned a numeric ID), and then check if Group 1 or 5 is matched inside the lambda:\nimport re, datetime\n\ninput_text = \"Alrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del 2022_-_02_-_18 llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)\"\n\ninput_date_structure = r\"(?P<year>\\d*)_-_(?P<month>\\d{2})_-_(?P<startDay>\\d{2})\"\n\nidentify_only_date_regex = r\"(\\b\\d{2}:\\d{2}[\\s|]*[ap]m)?[\\s|]*\" + input_date_structure + r\"[\\s|]*(\\b\\d{2}:\\d{2}[\\s|]*[ap]m)?\"\n\ndate_restructuring_structure = r\"\\g<year>_-_\\g<month>_-_\\g<startDay>\"\nrestructuring_only_date = lambda x: x.group() if x.group(1) or x.group(5) else \"(\" + x.expand(date_restructuring_structure) + \" 00:00 am)\"\n\ninput_text = re.sub(identify_only_date_regex, restructuring_only_date, input_text)\nprint(repr(input_text)) # --> output\n\nSee the Python demo.\nThe output is\nAlrededor de las 00:16 am o las 23:30 pm 2022_-_02_-_18 , quizas cerca del(2022_-_02_-_18 00:00 am)llega el avion, pero no (2022_-_02_-_18 20:16 pm) a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)\n\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x", "regex", "regex_group", "replace" ]
stackoverflow_0074514707_python_python_3.x_regex_regex_group_replace.txt
Q: Problem at calling module paddleocr in Python with Anaconda Good morning, I have been trying to install paddleOCR(https://github.com/PaddlePaddle/PaddleOCR) with Anaconda and I tried to start it with the command line at cmd and it works fine: (paddle_env) C:\OCR>paddleocr --image_dir source/test.png --use_angle_cls true --lang en But when I try to do it by code: from paddleocr import PaddleOCR,draw_ocr ocr = PaddleOCR(use_angle_cls=True, lang='en') # need to run only once to download and load model into memory img_path = './source/test.jpg' result = ocr.ocr(img_path, cls=True) for line in result: print(line) I get the following error, as if the paddle library were not installed, but it is, so I can't figure out what the error is: File "C:\OCR\required\ocr.py", line 1, in <module> from paddleocr import PaddleOCR,draw_ocr ModuleNotFoundError: No module named 'paddleocr' In both executions I'm using the same env. Thank you for your help. A: Try this prompt command: pip install "paddleocr>=2.0.1" Link to documentation: https://pypi.org/project/paddleocr/ A: Try this prompt command: pip install "paddlepaddle"
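A supplementary note: when the CLI works but the import fails, the script is usually being run by a different interpreter than the one inside paddle_env; printing sys.executable from the failing script makes this visible. A minimal check:

    import sys
    print(sys.executable)  # should point inside the paddle_env environment

    # if it does not, install into and run with that interpreter explicitly:
    # python -m pip install paddleocr paddlepaddle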
Problem at calling module paddleocr in Python with Anaconda
Good morning, I have been trying to install paddleOCR(https://github.com/PaddlePaddle/PaddleOCR) with anaconda and I tried to start it with the command line at cmd and it works fine: (paddle_env) C:\OCR>paddleocr --image_dir source/test.png --use_angle_cls true --lang en But when I try to do it by code: from paddleocr import PaddleOCR,draw_ocr ocr = PaddleOCR(use_angle_cls=True, lang='en') # need to run only once to download and load model into memory img_path = './source/test.jpg' result = ocr.ocr(img_path, cls=True) for line in result: print(line) I get the next error like if the library of paddle was not installed but it is. So I'm not guessing which is the error: File "C:\OCR\required\ocr.py", line 1, in <module> from paddleocr import PaddleOCR,draw_ocr ModuleNotFoundError: No module named 'paddleocr' In both executions I'm using the same env. Thank you for your help.
[ "Try this prompt command:\npip install \"paddleocr>=2.0.1\"\nLink to documentation: https://pypi.org/project/paddleocr/\n", "Try this prompt command:\npip install \"paddlepadlle\"\n" ]
[ 1, 0 ]
[]
[]
[ "conda", "paddle_paddle", "pip", "python" ]
stackoverflow_0069324201_conda_paddle_paddle_pip_python.txt
Q: How do I get specific keys and their values from nested dict in python? I need help, please be kind, I'm a beginner. I have a nested dict like this: dict_ = { "timestamp": "2022-11-18T10: 10: 49.301Z", "name" : "example", "person":{ "birthyear": "2002" "birthname": "Examply" }, "order":{ "orderId": "1234" "ordername": "onetwothreefour" } } How do I get a new dict like: new_dict = {"timestamp": "2022-11-18T10: 10: 49.301Z", "birthyear": "2002", "birthname": "Examply", "orderId": "1234"} I tried the normal things I could google. But I only found solutions that return the values without the keys, or that only work for flattened dicts. Last thing I tried: new_dict = {key: msg[key] for key in msg.keys() & {'timestamp', 'birthyear', 'birthname', 'orderId'} This does not work for the nested dict. Maybe someone has an easy option for this. A: A general approach: dict_ = { "timestamp": "2022-11-18T10: 10: 49.301Z", "name": "example", "person": { "birthyear": "2002", "birthname": "Examply" }, "order": { "orderId": "1234", "ordername": "onetwothreefour" } } def nested_getitem(d, keys): current = d for key in keys: current = current[key] return current new_dict = {"timestamp": nested_getitem(dict_, ["timestamp"]), "birthyear": nested_getitem(dict_, ["person", "birthyear"]), "birthname": nested_getitem(dict_, ["person", "birthname"]), "orderId": nested_getitem(dict_, ["order", "orderId"]), } print(new_dict) Output {'timestamp': '2022-11-18T10: 10: 49.301Z', 'birthyear': '2002', 'birthname': 'Examply', 'orderId': '1234'} A: dict_ = { "timestamp": "2022-11-18T10: 10: 49.301Z", "name" : "example", "person":{ "birthyear": "2002", "birthname": "Examply" }, "order":{ "orderId": "1234", "ordername": "onetwothreefour" } } def get_new_dict(valid_dict): new_dict = {'timestamp': valid_dict['timestamp'], 'birthyear': valid_dict['person']['birthyear'], 'birthname': valid_dict['person']['birthname'], 'orderId': valid_dict['order']['orderId'] } return new_dict new_dict = get_new_dict(dict_) print(new_dict) output: {'timestamp': '2022-11-18T10: 10: 49.301Z', 'birthyear': '2002', 'birthname': 'Examply', 'orderId': '1234'}
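A supplementary note: the lookups can also be table-driven, which scales better than hard-coding each key; a minimal sketch building on the helper idea from the first answer:

    PATHS = {
        "timestamp": ("timestamp",),
        "birthyear": ("person", "birthyear"),
        "birthname": ("person", "birthname"),
        "orderId": ("order", "orderId"),
    }

    def flatten(d, paths=PATHS):
        out = {}
        for name, keys in paths.items():
            current = d
            for key in keys:  # walk down the nested dict
                current = current[key]
            out[name] = current
        return out

    print(flatten(dict_))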
How do I get specific keys and their values from nested dict in python?
I need help, please be kind I'm a beginner. I have a nested dict like this: dict_ = { "timestamp": "2022-11-18T10: 10: 49.301Z", "name" : "example", "person":{ "birthyear": "2002" "birthname": "Examply" }, "order":{ "orderId": "1234" "ordername": "onetwothreefour" } } How do I get a new dict like: new_dict = {"timestamp": "2022-11-18T10: 10: 49.301Z", "birthyear": "2002", "birthname": "Examply", "orderId": "1234"} I tried the normal things I could google. But I only found solutions like getting the values without the keys back or it only works for flatten dicts. Last thing I tried: new_dict = {key: msg[key] for key in msg.keys() & {'timestamp', 'birthyear', 'birthname', 'orderId'} This do not work for the nested dict. May someone has an easy option for it.
[ "A general approach:\ndict_ = {\n \"timestamp\": \"2022-11-18T10: 10: 49.301Z\",\n \"name\": \"example\",\n \"person\": {\n \"birthyear\": \"2002\",\n \"birthname\": \"Examply\"\n },\n \"order\": {\n \"orderId\": \"1234\",\n \"ordername\": \"onetwothreefour\"\n }\n}\n\n\ndef nested_getitem(d, keys):\n current = d\n for key in keys:\n current = current[key]\n return current\n\n\nnew_dict = {\"timestamp\": nested_getitem(dict_, [\"timestamp\"]),\n \"birthyear\": nested_getitem(dict_, [\"person\", \"birthyear\"]),\n \"birthname\": nested_getitem(dict_, [\"person\", \"birthname\"]),\n \"orderId\": nested_getitem(dict_, [\"order\", \"orderId\"]),\n }\nprint(new_dict)\n\nOutput\n{'timestamp': '2022-11-18T10: 10: 49.301Z', 'birthyear': '2002', 'birthname': 'Examply', 'orderId': '1234'}\n\n", "dict_ = {\n \"timestamp\": \"2022-11-18T10: 10: 49.301Z\",\n \"name\" : \"example\",\n \"person\":{\n \"birthyear\": \"2002\",\n \"birthname\": \"Examply\"\n },\n \"order\":{\n \"orderId\": \"1234\",\n \"ordername\": \"onetwothreefour\"\n }\n}\n\n\ndef get_new_dict(valid_dict):\n new_dict = {'timestamp': valid_dict['timestamp'],\n 'birthyear': valid_dict['person']['birthyear'], \n 'birthname': valid_dict['person']['birthname'], \n 'orderId': valid_dict['order']['orderId']\n }\n \n return new_dict\n \nnew_dict = get_new_dict(dict_)\n\nprint(new_dict)\n\noutput:\n{'timestamp': '2022-11-18T10: 10: 49.301Z', 'birthyear': '2002', 'birthname': 'Examply', 'orderId': '1234'}\n\n" ]
[ 1, 1 ]
[]
[]
[ "dictionary", "key", "nested", "python" ]
stackoverflow_0074516642_dictionary_key_nested_python.txt
Q: TypeError: Missing 1 required positional argument: 'self' I can't get past the error: Traceback (most recent call last): File "C:\Users\Dom\Desktop\test\test.py", line 7, in <module> p = Pump.getPumps() TypeError: getPumps() missing 1 required positional argument: 'self' I examined several tutorials but there doesn't seem to be anything different from my code. The only thing I can think of is that Python 3.3 requires different syntax. class Pump: def __init__(self): print("init") # never prints def getPumps(self): # Open database connection # some stuff here that never gets executed because of error pass # dummy code p = Pump.getPumps() print(p) If I understand correctly, self is passed to the constructor and methods automatically. What am I doing wrong here? A: You need to instantiate a class instance here. Use p = Pump() p.getPumps() Small example - >>> class TestClass: def __init__(self): print("in init") def testFunc(self): print("in Test Func") >>> testInstance = TestClass() in init >>> testInstance.testFunc() in Test Func A: You need to initialize it first: p = Pump().getPumps() A: Works and is simpler than every other solution I see here : Pump().getPumps() This is great if you don't need to reuse a class instance. Tested on Python 3.7.3. A: The self keyword in Python is analogous to this keyword in C++ / Java / C#. In Python 2 it is done implicitly by the compiler (yes Python does compilation internally). It's just that in Python 3 you need to mention it explicitly in the constructor and member functions. example: class Pump(): # member variable # account_holder # balance_amount # constructor def __init__(self,ah,bal): self.account_holder = ah self.balance_amount = bal def getPumps(self): print("The details of your account are:"+self.account_number + self.balance_amount) # object = class(*passing values to constructor*) p = Pump("Tahir",12000) p.getPumps() A: You can call the method like pump.getPumps(). By adding @classmethod decorator on the method. A class method receives the class as the implicit first argument, just like an instance method receives the instance. class Pump: def __init__(self): print ("init") # never prints @classmethod def getPumps(cls): # Open database connection # some stuff here that never gets executed because of error So, simply call Pump.getPumps() . In java, it is termed as static method. A: You can also get this error by prematurely taking PyCharm's advice to annotate a method @staticmethod. Remove the annotation. A: I got the same error below: TypeError: test() missing 1 required positional argument: 'self' When an instance method had self, then I called it directly by class name as shown below: class Person: def test(self): # <- With "self" print("Test") Person.test() # Here And, when a static method had self, then I called it by object or directly by class name as shown below: class Person: @staticmethod def test(self): # <- With "self" print("Test") obj = Person() obj.test() # Here # Or Person.test() # Here So, I called the instance method with object as shown below: class Person: def test(self): # <- With "self" print("Test") obj = Person() obj.test() # Here And, I removed self from the static method as shown below: class Person: @staticmethod def test(): # <- "self" removed print("Test") obj = Person() obj.test() # Here # Or Person.test() # Here Then, the error was solved: Test In detail, I explain about instance method in my answer for What is an "instance method" in Python? and also explain about @staticmethod and @classmethod in my answer for @classmethod vs @staticmethod in Python. A: If skipping parentheses for the object declaration (typo), then exactly this error occurs. # WRONG! will result in TypeError: getPumps() missing 1 required positional argument: 'self' p = Pump p.getPumps() Do not forget the parentheses for the Pump object # CORRECT! p = Pump() p.getPumps()
TypeError: Missing 1 required positional argument: 'self'
I can't get past the error: Traceback (most recent call last): File "C:\Users\Dom\Desktop\test\test.py", line 7, in <module> p = Pump.getPumps() TypeError: getPumps() missing 1 required positional argument: 'self' I examined several tutorials but there doesn't seem to be anything different from my code. The only thing I can think of is that Python 3.3 requires different syntax. class Pump: def __init__(self): print("init") # never prints def getPumps(self): # Open database connection # some stuff here that never gets executed because of error pass # dummy code p = Pump.getPumps() print(p) If I understand correctly, self is passed to the constructor and methods automatically. What am I doing wrong here?
[ "You need to instantiate a class instance here.\nUse\np = Pump()\np.getPumps()\n\nSmall example - \n>>> class TestClass:\n def __init__(self):\n print(\"in init\")\n def testFunc(self):\n print(\"in Test Func\")\n\n\n>>> testInstance = TestClass()\nin init\n>>> testInstance.testFunc()\nin Test Func\n\n", "You need to initialize it first:\np = Pump().getPumps()\n\n", "Works and is simpler than every other solution I see here :\nPump().getPumps()\n\nThis is great if you don't need to reuse a class instance. Tested on Python 3.7.3.\n", "The self keyword in Python is analogous to this keyword in C++ / Java / C#.\nIn Python 2 it is done implicitly by the compiler (yes Python does compilation internally).\nIt's just that in Python 3 you need to mention it explicitly in the constructor and member functions. example:\nclass Pump():\n # member variable\n # account_holder\n # balance_amount\n\n # constructor\n def __init__(self,ah,bal):\n self.account_holder = ah\n self.balance_amount = bal\n\n def getPumps(self):\n print(\"The details of your account are:\"+self.account_number + self.balance_amount)\n\n# object = class(*passing values to constructor*)\np = Pump(\"Tahir\",12000)\np.getPumps()\n\n", "You can call the method like pump.getPumps(). By adding @classmethod decorator on the method. A class method receives the class as the implicit first argument, just like an instance method receives the instance.\nclass Pump:\n\ndef __init__(self):\n print (\"init\") # never prints\n\n@classmethod\ndef getPumps(cls):\n # Open database connection\n # some stuff here that never gets executed because of error\n\nSo, simply call Pump.getPumps() .\nIn java, it is termed as static method.\n", "You can also get this error by prematurely taking PyCharm's advice to annotate a method @staticmethod. Remove the annotation.\n", "I got the same error below:\n\nTypeError: test() missing 1 required positional argument: 'self'\n\nWhen an instance method had self, then I called it directly by class name as shown below:\nclass Person:\n def test(self): # <- With \"self\" \n print(\"Test\")\n\nPerson.test() # Here\n\nAnd, when a static method had self, then I called it by object or directly by class name as shown below:\nclass Person:\n @staticmethod\n def test(self): # <- With \"self\" \n print(\"Test\")\n\nobj = Person()\nobj.test() # Here\n\n# Or\n\nPerson.test() # Here\n\nSo, I called the instance method with object as shown below:\nclass Person:\n def test(self): # <- With \"self\" \n print(\"Test\")\n\nobj = Person()\nobj.test() # Here\n\nAnd, I removed self from the static method as shown below:\nclass Person:\n @staticmethod\n def test(): # <- \"self\" removed \n print(\"Test\")\n\nobj = Person()\nobj.test() # Here\n\n# Or\n\nPerson.test() # Here\n\nThen, the error was solved:\nTest\n\nIn detail, I explain about instance method in my answer for What is an \"instance method\" in Python? and also explain about @staticmethod and @classmethod in my answer for @classmethod vs @staticmethod in Python.\n", "If skipping parentheses for the object declaration (typo), then exactly this error occurs.\n# WRONG! will result in TypeError: getPumps() missing 1 required positional argument: 'self'\np = Pump\np.getPumps()\n\nDo not forget the parentheses for the Pump object\n# CORRECT!\np = Pump()\np.getPumps()\n\n" ]
[ 526, 96, 20, 10, 6, 4, 1, 0 ]
[]
[]
[ "constructor", "instance_methods", "python", "python_3.x", "self" ]
stackoverflow_0017534345_constructor_instance_methods_python_python_3.x_self.txt
Q: How to append data to an existing csv file in AWS S3 using python boto3 I have a CSV file in S3 and I have to append data to that file whenever I call the function, but I am not able to do that: df = pd.DataFrame(data_list) bytes_to_write = df.to_csv(None, header=None, index=False).encode() file_name = "Words/word_dictionary.csv" # Not working the below line s3_client.put_object(Body=bytes_to_write, Bucket='recengine', Key=file_name) This code is directly replacing the data inside the file instead of appending. Any solution? A: s3 has no append functionality. You need to read the file from s3, append the data in your code, then upload the complete file to the same key in s3. Check this thread on the AWS forum for details The code will probably look like: df = pd.DataFrame(data_list) bytes_to_write = df.to_csv(None, header=None, index=False).encode() file_name = "Words/word_dictionary.csv" # get the existing file (get_object returns a dict; the bytes are under 'Body') current_obj = s3_client.get_object(Bucket='recengine', Key=file_name) current_data = current_obj['Body'].read() # append appended_data = current_data + bytes_to_write # overwrite s3_client.put_object(Body=appended_data, Bucket='recengine', Key=file_name) A: You can try using the aws data wrangler library from awslabs to append to or overwrite CSV datasets stored in S3. Check out their documentation and tutorials from here link A: You can utilize the pandas concat function to append the data and then write the csv back to the S3 bucket: from io import StringIO import pandas as pd # read current data from bucket as data frame csv_obj = s3_client.get_object(Bucket=bucket, Key=key) current_data = csv_obj['Body'].read().decode('utf-8') current_df = pd.read_csv(StringIO(current_data)) # append data appended_data = pd.concat([current_df, new_df], ignore_index=True) appended_data_encoded = appended_data.to_csv(None, index=False).encode('utf-8') # write the appended data to s3 bucket s3_client.put_object(Body=appended_data_encoded, Bucket=bucket, Key=key)
How to append data to an existing csv file in AWS S3 using python boto3
I have a csv file in s3 but I have to append the data to that file whenever I call the function but i am not able to do that, df = pd.DataFrame(data_list) bytes_to_write = df.to_csv(None, header=None, index=False).encode() file_name = "Words/word_dictionary.csv" # Not working the below line s3_client.put_object(Body=bytes_to_write, Bucket='recengine', Key=file_name) This code is directly replacing the data inside the file instead of appending, Any solution?
[ "s3 has no append functionality. You need to read the file from s3, append the data in your code, then upload the complete file to the same key in s3.\nCheck this thread on the AWS forum for details\nThe code will probably look like:\ndf = pd.DataFrame(data_list)\nbytes_to_write = df.to_csv(None, header=None, index=False).encode()\nfile_name = \"Words/word_dictionary.csv\"\n\n# get the existing file\ncurent_data = s3_client.get_object(Bucket='recengine', Key=file_name)\n# append\nappended_data = current_data + bytes_to_write\n# overwrite\ns3_client.put_object(Body=appended_data, Bucket='recengine', Key=file_name)\n\n", "You can try using aws data wrangler library from awslabs to append , overwrite csv dataset stored in s3. Check out their documentation and tutorials from here link\n", "You can utilize the pandas concat function to append the data and then write the csv back to the S3 bucket:\nfrom io import StringIO\nimport pandas as pd\n\n# read current data from bucket as data frame\ncsv_obj = s3_client.get_object(Bucket=bucket, Key=key)\ncurrent_data = csv_obj['Body'].read().decode('utf-8')\ncurrent_df = pd.read_csv(StringIO(csv_string))\n\n# append data\nappended_data = pd.concat([current_df, new_df], ignore_index=True)\nappended_data_encoded = appended_data.to_csv(None, index=False).encode('utf-8')\n\n# write the appended data to s3 bucket\ns3_client.put_object(Body=appended_data_encoded,Bucket=bucket, Key=key)\n\n" ]
[ 5, 2, 1 ]
[]
[]
[ "amazon_s3", "boto3", "python" ]
stackoverflow_0061453620_amazon_s3_boto3_python.txt
Q: Python getpixel then click on described color in RGB I've found this code at stackoverflow color = (0, 137, 241) s = pyautogui.screenshot() for x in range(s.width): for y in range(s.height): if s.getpixel((x, y)) == color: pyautogui.click(x, y) # do something here break I'm creating some bot for a game that waits for its turn, picks up a spell and then clicks on the monster's tile. The problem is that I want to click only once on the tile of the color = (0, 137, 241). Right now it searches for all of them in the for loop. How to specify x and y (width and height) to make it pyautogui.click(x, y) only once at the described RGB color? Thanks a lot for your help! A: The key here is to break both for loops once the condition is met. The problem with your code is it only stops on the current column of pixels that meets your condition and then continues to search on other columns of pixels, because the outer loop hasn't been broken. color = (0, 137, 241) s = pyautogui.screenshot() for x in range(s.width): for y in range(s.height): if s.getpixel((x, y)) == color: pyautogui.click(x, y) # do something here break else: # <---this is the key: runs only when the inner loop was NOT broken continue break
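A supplementary note: scanning a screenshot pixel by pixel with getpixel is slow; NumPy can find the first matching pixel in one pass and click it exactly once. A minimal sketch, assuming an RGB screenshot:

    import numpy as np
    import pyautogui

    color = (0, 137, 241)
    arr = np.array(pyautogui.screenshot().convert("RGB"))  # shape (height, width, 3)
    matches = np.argwhere((arr == color).all(axis=-1))  # rows of [y, x] coordinates
    if matches.size:
        y, x = matches[0]  # first match in top-to-bottom, left-to-right order
        pyautogui.click(int(x), int(y))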
Python getpixel then click on described color in RGB
I've found this code at stackoverflow color = (0, 137, 241) s = pyautogui.screenshot() for x in range(s.width): for y in range(s.height): if s.getpixel((x, y)) == color: pyautogui.click(x, y) # do something here break I'm creating some bot for a game that waits for its turn, picks up a spell and then clicks on monsters tile. The problem is that I want to click only once on the tile of the color = (0, 137, 241). Right now it searches for all of them in for loop. How to specify x and y (width and height) to make it pyautogui.click(x, y) only once at described colors RGB? Thanks a lot for help!
[ "The key here is to break both for loops once it met your condition. The problem with your code is it only stops on the current column of pixels that met your condition then continue to search on other columns of pixels because that initial loop hasn't been broken.\ncolor = (0, 137, 241)\ns = pyautogui.screenshot()\nfor x in range(s.width):\n for y in range(s.height):\n if s.getpixel((x, y)) == color:\n pyautogui.click(x, y) # do something here\n break\n else # <---this is the key. \n continue\n break\n\n" ]
[ 0 ]
[]
[]
[ "getpixel", "python", "screenshot" ]
stackoverflow_0062321751_getpixel_python_screenshot.txt
Q: Azureml - Why my environment image build status is always "already exists" I'm using a custom Dockerfile to create an environment for Azure Machine Learning. However, every time I run my code, I always get back "already exists" on the UI for my environment. I didn't find much documentation on this status, which is why I'm asking here. I assume that this means that an image with the same Dockerfile exists in my container registry. What I don't get is: if the image already exists, why is my environment unusable and set to this status? To create my environment I use this snippet: ws = Workspace.from_config() env = Environment.from_dockerfile(environment_name, f"./environment/{environment_name}/Dockerfile") env.python.user_managed_dependencies = True env.register(ws) env.build(ws) Am I doing something wrong there? Thanks for your help A: By default, all the environments will be working on a Linux machine, as it is from the docker image. With respect to the issue, we need to clear the cache of the images and then restart the run. Check out the syntaxes below, which need to be used: docker-compose build --no-cache -> to clear the cache and don't forget to restart the most updated image docker-compose up -d <service> --force-recreate The issue will be resolved. It might be because of the cache. Check out the documentation to check and recreate the entire operation. Link With respect to the UI, create the DevOps image with inference clusters.
Azureml - Why my environment image build status is always "already exists"
I'm using custom Dockerfile to create environment for Azure machine learning. However everytime I run my code, I always get back "already exists" on the UI for my environment. I didn't find much documentation on this status which is why I'm asking here. I assume that this means that an image with the same dockerfile exists in my container registry. What I don't get is: if the image already exits why my environment is unusable and set to this. To create my environment I use this snippet: ws = Workspace.from_config() env = Environment.from_dockerfile(environment_name, f"./environment/{environment_name}/Dockerfile") env.python.user_managed_dependencies = True env.register(ws) env.build(ws) Am I doing something wrong there? Thanks for your help
[ "By default, all the environments will be working on Linux machine as it is from the docker image. With respect to the issue, we need to clear the cache of the images and then restart the run. Check out the below\nsyntaxes which need to be used.\ndocker-compose build --no-cache -> to clear the cache\nand don't forget to restart the most updated image\ndocker-compose up -d <service> --force-recreate\n\nThe issue will be resolved. It might be because of the cache.\nCheckout the documentation to check and recreate the entire operation. Link\nWith respect to UI. Create the DevOps image with Inference clusters.\n\n\n\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_machine_learning_service", "azureml_python_sdk", "azuremlsdk", "python" ]
stackoverflow_0074491717_azure_azure_machine_learning_service_azureml_python_sdk_azuremlsdk_python.txt
Q: How to plot average value lines and not every single value in Plotly First of all, sorry if what I am writing here is not up to stackoverflow standards, I am trying my best. I have a dataframe with around 18k rows and 89 columns with information about football players. For example I need to plot a line graph to visualize the connection between age and overall rating of a player. But when I plot a line for that with: fig = px.line(df, x="Age", y="Overall") fig.show() This is the result: Bad Result This is obviously not a good visualization. I want to plot the average rating for every age, so it's a single line which shows the connection between age and overall rating. Is there an easy function for plotting or do I have to create the right data myself? A: It sounds like what you might want to do here is groupby() on "age" and then average on "overall" to create a final dataframe before plugging into that plotting function. Roughly, import pandas as pd data = { "age": [1, 1, 2, 2, 3, 3], "overall": [50, 100, 1, 1, 600, 700], # clarifies how to select the correct column to average "irrelevant": [1, 1, 1, 1, 1, 1] } df = pd.DataFrame(data) new_df = df.groupby('age')['overall'].mean() new_df # age # 1 75.0 # 2 1.0 # 3 650.0 # Name: overall, dtype: float64 Alternatively, you can use a scatter plot if you're comfortable having the individual points show the trend instead. Sometimes scatter plots are useful for this situation since a line of averages might have very different sample sizes at each point on the x axis, so you can lose information by making a line.
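A short sketch of the aggregate-then-plot approach with Plotly itself, assuming columns named "Age" and "Overall" as in the question; the toy data is made up for illustration.

import pandas as pd
import plotly.express as px

df = pd.DataFrame({"Age": [20, 20, 21, 21, 22], "Overall": [60, 70, 65, 75, 80]})
avg = df.groupby("Age", as_index=False)["Overall"].mean()  # one averaged row per age
fig = px.line(avg, x="Age", y="Overall")  # a single line through the averages
fig.show()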
How to plot average value lines and not every single value in Plotly
First of all, sorry if what I am writing here is not up to stackoverflow standards, I am trying my best. I have a dataframe with around 18k rows and 89 columns with information about football players. For example I need to plot a line graph to visualize the connection between age and overall rating of a player. But when I plot a line for that with: fig = px.line(df, x="Age", y="Overall") fig.show() This is the result: Bad Result This is obviously not a good visualization. I want to plot the average rating for every age, so it's a single line which shows the connection between age and overall rating. Is there an easy function for plotting or do I have to create the right data myself?
[ "It sounds like what you might want to do here is groupby() on \"age\" and then average on \"overall\" to create a final dataframe before plugging into that plotting function.\nRoughly,\nimport pandas as pd\n\ndata = {\n \"age\": [1, 1, 2, 2, 3, 3],\n \"overall\": [50, 100, 1, 1, 600, 700],\n # clarifies how to select the correct column to average\n \"irrelevant\": [1, 1, 1, 1, 1, 1]\n}\n\ndf = pd.DataFrame(data)\nnew_df = df.groupby('age')['overall'].mean()\nnew_df\n\n# age\n# 1 75.0\n# 2 1.0\n# 3 650.0\n# Name: overall, dtype: float64\n\nAlternatively, you can use a scatter plot if you're comfortable having the individual points show the trend instead. Sometimes scatter plots are useful for this situation since a line of averages might have very different samples sizes at each point on the x axis, so you can lose information by making a line.\n" ]
[ 1 ]
[]
[]
[ "average", "line", "pandas", "plotly", "python" ]
stackoverflow_0074516853_average_line_pandas_plotly_python.txt
Q: How to remove a node from a dict using jsonpath-ng? In Python I have a list of dictionaries and I want to remove a given node from each dictionary in the list. I don't know anything about those dictionaries except they all have the same (unknown) schema. The node to be removed may be anywhere in the dictionaries and it is specified by a JSONPath expression. Example: Input data: [ { "top": { "lower": 1, "other": 1 } }, { "top": { "lower": 2, "other": 4 } }, { "top": { "lower": 3, "other": 9 } } ] Node to be removed: $.*.top.lower Expected result: [ { "top": { "other": 1 } }, { "top": { "other": 4 } }, { "top": { "other": 9 } } ] Using the jsonpath library my first attempt was this: from jsonpath import JSONPath def remove_node_from_dict(data, node): node_key = JSONPath(node).segments.pop() for record in data: del record[node_key] but this works only if the node to remove is at the top level of the dictionaries. Researching for solutions I came across the jsonpath-ng library which claims to have "the ability to update or remove nodes in the tree". However, I couldn't find any documentation on this - how is it done? EDIT: Based on this answer to a related question I found a solution that works at least for simple paths (no filters etc.) using plain Python (not the jsonpath-ng library), which would be sufficient for my use case. I would still like to learn how to do it with jsonpath-ng in a more generic way. A: Here's a naive solution which I've used in the past: import copy import yaml import jsonpath_ng.ext as jp def remove_matched_element(path, spec): _new_spec = copy.deepcopy(spec) jep = jp.parse(path) for match in jep.find(spec): _t_path = "$" spec_path = _new_spec spec_path_parent = None for pe in str(match.full_path).split("."): if _t_path != "$" and type(jp.parse(_t_path).find(_new_spec)[0].value) == list: _t_path = f"{_t_path}{'.'}{pe}" _idx = int(pe.replace("[", "").replace("]", "")) spec_path_parent = spec_path spec_path = spec_path[_idx] elif _t_path != "$" and type(jp.parse(_t_path).find(_new_spec)[0].value) == dict and pe == "[0]": keyp = list(jp.parse(_t_path).find(_new_spec)[0].value.keys())[0] _idx = keyp _t_path = f"{_t_path}.{keyp}" spec_path_parent = spec_path spec_path = spec_path[keyp] else: if type(spec_path) == list: _idx = int(pe.replace("[", "").replace("]", "")) _t_path = f"{_t_path}[{_idx}]" else: _idx = pe _t_path = f"{_t_path}{'.'}{pe}" spec_path_parent = spec_path spec_path = spec_path[_idx] spec_path_parent.pop(_idx) return _new_spec def test_soc_sol(): spec = [ {"top": {"lower": 1, "other": 1}}, {"top": {"lower": 2, "other": 4}}, {"top": {"lower": 3, "other": 9}} ] print( yaml.safe_dump(remove_matched_element("$..lower", spec))) The above code results in: [ { "top": { "other": 1 } }, { "top": { "other": 4 } }, { "top": { "other": 9 } } ]
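To answer the jsonpath-ng part directly: recent jsonpath-ng releases expose a filter method on a parsed expression that deletes the matched nodes in place. Treat this as a sketch and check that your installed version actually provides filter before relying on it.

import jsonpath_ng.ext as jp

data = [
    {"top": {"lower": 1, "other": 1}},
    {"top": {"lower": 2, "other": 4}},
    {"top": {"lower": 3, "other": 9}},
]
# filter(fn, data) removes every match for which fn returns True
jp.parse("$..lower").filter(lambda value: True, data)
print(data)  # [{'top': {'other': 1}}, {'top': {'other': 4}}, {'top': {'other': 9}}]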
How to remove a node from a dict using jsonpath-ng?
In Python I have a list of dictionaries and I want to remove a given node from each dictionary in the list. I don't know anything about those dictionaries except they all have the same (unknown) schema. The node to be removed may be anywhere in the dictionaries and it is specified by a JSONPath expression. Example: Input data: [ { "top": { "lower": 1, "other": 1 } }, { "top": { "lower": 2, "other": 4 } }, { "top": { "lower": 3, "other": 9 } } ] Node to be removed: $.*.top.lower Expected result: [ { "top": { "other": 1 } }, { "top": { "other": 4 } }, { "top": { "other": 9 } } ] Using the jsonpath library my first attempt was this: from jsonpath import JSONPath def remove_node_from_dict(data, node): node_key = JSONPath(node).segments.pop() for record in data: del record[node_key] but this works only if the node to remove is at the top level of the dictionaries. Researching for solutions I came across the jsonpath-ng library which claims to have "the ability to update or remove nodes in the tree". However, I couldn't find any documentation on this - how is it done? EDIT: Based on this answer to a related question I found a solution that works at least for simple paths (no filters etc.) using plain Python (not the jsonpath-ng library). Which would be sufficient for my use case. I would still like to learn how to do it with jsonpath-ng in a more generic way.
[ "Here's a naive solution which I've used in the past:\nimport copy\nimport jsonpath_ng.ext as jp\n\ndef remove_matched_element(path, spec):\n _new_spec = copy.deepcopy(spec)\n jep = jp.parse(path)\n for match in jep.find(spec):\n _t_path = \"$\"\n spec_path = _new_spec\n spec_path_parent = None\n for pe in str(match.full_path).split(\".\"):\n if _t_path != \"$\" and type(jp.parse(_t_path).find(_new_spec)[0].value) == list:\n _t_path = f\"{_t_path}{'.'}{pe}\"\n _idx = int(pe.replace(\"[\", \"\").replace(\"]\", \"\"))\n spec_path_parent = spec_path\n spec_path = spec_path[_idx]\n elif _t_path != \"$\" and type(jp.parse(_t_path).find(_new_spec)[0].value) == dict and pe == \"[0]\":\n keyp = list(jp.parse(_t_path).find(_new_spec)[0].value.keys())[0]\n _idx = keyp\n _t_path = f\"{_t_path}.{keyp}\"\n spec_path_parent = spec_path\n spec_path = spec_path[keyp]\n else:\n if type(spec_path) == list:\n _idx = int(pe.replace(\"[\", \"\").replace(\"]\", \"\"))\n _t_path = f\"{_t_path}[{_idx}]\"\n else:\n _idx = pe\n _t_path = f\"{_t_path}{'.'}{pe}\"\n spec_path_parent = spec_path\n spec_path = spec_path[_idx]\n spec_path_parent.pop(_idx)\n return _new_spec\n\n\ndef test_soc_sol():\n spec = [\n {\"top\": {\"lower\": 1, \"other\": 1}},\n {\"top\": {\"lower\": 2, \"other\": 4}},\n {\"top\": {\"lower\": 3, \"other\": 9}}\n ]\n\n print(\n yaml.safe_dump(remove_matched_element(\"$..lower\", spec)))\n\nThe above code results in:\n[\n {\n \"top\": {\n \"other\": 1\n }\n },\n {\n \"top\": {\n \"other\": 4\n }\n },\n {\n \"top\": {\n \"other\": 9\n }\n }\n]\n\n" ]
[ 0 ]
[ "If you know the schema is fixed, you can simply remove the key like this\nl = [\n { \"top\": { \"lower\": 1, \"other\": 1 } },\n { \"top\": { \"lower\": 2, \"other\": 4 } },\n { \"top\": { \"lower\": 3, \"other\": 9 } }\n]\n\nfor d in l:\n del d[\"top\"][\"lower\"]\n\n" ]
[ -2 ]
[ "jsonpath", "jsonpath_ng", "python" ]
stackoverflow_0071500862_jsonpath_jsonpath_ng_python.txt
Q: Extract any possible combination of two strings Given these two strings x = 'abc' y = 'dc'; How can I get this output -> set()={'ac', 'ab', 'cd', 'ad', 'cb', 'bd'} Getting ab from x then ac from x then ad from x and y ... If it is possible using only set functions without additional libraries. I tried this : X = set() for i in x: for j in y: X.add(i+j) print(X) A: You can try the following code : def get_combinations(x, y): r = set() def add(s): if s[::-1] not in r and s[0] != s[1]: r.add(s) c = set(x + y) for i in c: for j in c: if i <= j: add(i + j) return r The inner add function checks that the mirror is not present and that the two letters are different. c is used to iterate over all possibilities. The condition i <= j is used to keep the mirrored options in alphabetical order. The following assertion is verified (note that I changed your "cb" to "bc" because of the alphabetical order) : x = "abc" y = "dc" assert get_combinations(x, y) == {"ac", "ab", "cd", "ad", "bc", "bd"}
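A shorter sketch of the same result using itertools.combinations, which already yields each unordered pair exactly once (itertools is in the standard library, though note the question asked for set operations only):

from itertools import combinations

x, y = "abc", "dc"
pairs = {a + b for a, b in combinations(sorted(set(x + y)), 2)}
print(sorted(pairs))  # ['ab', 'ac', 'ad', 'bc', 'bd', 'cd']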
Extract any possible combination of two strings
Given these two strings x = 'abc' y = 'dc'; How can I get this output -> set()={'ac', 'ab', 'cd', 'ad', 'cb', 'bd'} Getting ab from x then ac from x then ad from x and y ... If it is possible using only set functions without additional libraries. I tried this : X = set() for i in x: for j in y: X.add(i+j) print(X)
[ "You can try the following code :\ndef get_combinations(x, y):\n r = set()\n\n def add(s):\n if s[::-1] not in r and s[0] != s[1]:\n r.add(s)\n\n c = set(x + y)\n for i in c:\n for j in c:\n if i <= j:\n add(i + j)\n\n return r\n\nThe inner add function check that the mirror is not present and the two letters are different.\nc is used to iterate over all possibilities.\nThe condition i <= j is used to sort the mirrors option by alphabetical order.\nThe following assertion is verified (note that I changed your \"cb\" by \"bc\" because of the alphabetical order) :\nx = \"abc\"\ny = \"dc\"\nassert get_combinations(x, y) == {\"ac\", \"ab\", \"cd\", \"ad\", \"bc\", \"bd\"}\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074516597_python.txt
Q: Bigquery converts my string field into integer while loading json file with Python {"number":"1234123"} I am assigning this data to my Bigquery table using bigquery.LoadJobConfig in python. The type of my number column in my bigquery table is string. When I do the load operation, it converts the data type in my bigquery table to integer. How can I solve this? The file type I loaded: json. job_config = bigquery.LoadJobConfig( create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED, write_disposition=bigquery.WriteDisposition.WRITE_APPEND, source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,autodetect=True ) Additionally: When I set autodetect to False, I get an error like Error while reading data, error message: JSON table encountered too many errors A: I recommend passing an explicit BigQuery schema instead of using autodetect=True to prevent this situation, for example: from google.cloud import bigquery # Construct a BigQuery client object. client = bigquery.Client() # TODO(developer): Set table_id to the ID of the table to create. # table_id = "your-project.your_dataset.your_table_name" job_config = bigquery.LoadJobConfig( schema=[ bigquery.SchemaField("number", "STRING") ], create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED, write_disposition=bigquery.WriteDisposition.WRITE_APPEND, source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON, autodetect=False ) uri = "gs://cloud-samples-data/bigquery/us-states/us-states.json" load_job = client.load_table_from_uri( uri, table_id, location="US", # Must match the destination dataset location. job_config=job_config, ) # Make an API request. load_job.result() # Waits for the job to complete. destination_table = client.get_table(table_id) print("Loaded {} rows.".format(destination_table.num_rows)) In this example I set the schema of the BigQuery table and autodetect to False. If you set autodetect to True, you have no control over your field types. You can check the documentation for more information.
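A quick way to verify the fix after loading is to read the live schema back and confirm the column type; this uses only the documented get_table call, with table_id standing in for your table as in the answer.

from google.cloud import bigquery

client = bigquery.Client()
table_id = "your-project.your_dataset.your_table_name"  # same placeholder as above
table = client.get_table(table_id)
print([(field.name, field.field_type) for field in table.schema])
# expect [('number', 'STRING')] rather than [('number', 'INTEGER')]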
Bigquery converts my string field into integer while loading json file with Python
{"number":"1234123"} I am assigning this data to my Bigquery table using bigquery.LoadJobConfig in python. The type of my number column in my bigquery table is string. When I do the load operation, it converts the data type in my bigquery table to integer. How can I solve this? The file type I loaded: json. job_config = bigquery.LoadJobConfig( create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED, write_disposition=bigquery.WriteDisposition.WRITE_APPEND, source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,autodetect=True ) Additionally: When I set autodetect to False, I get an error like Error while reading data, error message: JSON table encountered too many errors
[ "I recommend you to pass a BigQuery schema to prevent this situation, instead to use autodetect=True, example :\nfrom google.cloud import bigquery\n\n# Construct a BigQuery client object.\nclient = bigquery.Client()\n\n# TODO(developer): Set table_id to the ID of the table to create.\n# table_id = \"your-project.your_dataset.your_table_name\"\n\njob_config = bigquery.LoadJobConfig(\n schema=[\n bigquery.SchemaField(\"number\", \"STRING\")\n ],\n create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED,\n write_disposition=bigquery.WriteDisposition.WRITE_APPEND,\n source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,\n autodetect=False\n)\nuri = \"gs://cloud-samples-data/bigquery/us-states/us-states.json\"\n\nload_job = client.load_table_from_uri(\n uri,\n table_id,\n location=\"US\", # Must match the destination dataset location.\n job_config=job_config,\n) # Make an API request.\n\nload_job.result() # Waits for the job to complete.\n\ndestination_table = client.get_table(table_id)\nprint(\"Loaded {} rows.\".format(destination_table.num_rows))\n\nIn this example I set the schema of BigQuery table and autodetect to False.\nIf you use autodetect to True, you can't have a control on your field types.\nYou can check the documentation to have more information.\n" ]
[ 0 ]
[]
[]
[ "google_bigquery", "json", "python" ]
stackoverflow_0074516188_google_bigquery_json_python.txt
Q: Fast Bitwise Get Column in Python Is there an efficient way to get an array of boolean values that are in the n-th position in bitwise array in Python? Create numpy array with values 0 or 1: import numpy as np array = np.array( [ [1, 0, 1], [1, 1, 1], [0, 0, 1], ] ) Compress size by np.packbits: pack_array = np.packbits(array, axis=1) Expected result - some function that could get all values from n-th column from bitwise array. For example if I would like the second column I would like to get (the same as I would call array[:,1]): array([0, 1, 0]) I have tried numba with the following function. It returns right results but it is very slow: import numpy as np from numba import njit @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res How to test it? import numpy as np import time from numba import njit array = np.random.choice(a=[False, True], size=(100000000,15)) pack_array = np.packbits(array, axis=1) start = time.time() array[:,10] print('np array') print(time.time()-start) @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res # To initialize getVector(pack_array, 10) start = time.time() getVector(pack_array, 10) print('getVector') print(time.time()-start) It returns: np array 0.00010132789611816406 getVector 0.15648770332336426 A: Besides some micro-optimisations, I don't believe that there is much that can be optimised here. There are also a few small mistakes in your code: @njit(nopython=True) is saying the same thing twice (the n in njit already stands for nopython mode); simply @njit or @jit(nopython=True) should be used fastmath is for "cutting corners" when doing floating point math; since we are only working with integers and booleans here, it can be safely removed because it does nothing for us here. My updated code (seeing a meagre 40% performance increase on my machine): import numpy as np import numba as nb from numba import njit np.random.seed(0) array = np.random.choice(a=[False, True], size=(10000000,15)) pack_array = np.packbits(array, axis=1) @njit(locals={'res': nb.boolean[:]}) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=nb.boolean) byte = j//8 bit = 128>>(j%8) for i in range(n): res[i] = bool(packed[i, byte] & bit) return res getVector(pack_array, 10) In your answer, "res" is a list of 32 bit integers; by giving np.zeros() the numba (NOT numpy) boolean datatype, we can swap it to the more efficient booleans. This is where most of the performance improvement comes from. On my machine putting j_mod and j_flr outside of the loop had no noticeable effect. But it did have an effect for the commenter @Michael Szczesny, so it might help you as well. I would not try to use strides, which @Nick ODell is suggesting, because they can be quite dangerous if used incorrectly. (See the numpy documentation). edit: I have made a few small changes that were suggested by Michael. (Thanks)
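For comparison, a loop-free sketch that does the same bit extraction with plain NumPy broadcasting; np.packbits stores the most significant bit first, so bit j of a row lives at shift 7 - (j % 8) inside byte j // 8:

import numpy as np

array = np.random.choice(a=[False, True], size=(1000, 15))
pack_array = np.packbits(array, axis=1)

j = 10
col = ((pack_array[:, j // 8] >> (7 - (j % 8))) & 1).astype(bool)
assert (col == array[:, j]).all()  # matches the unpacked column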
Fast Bitwise Get Column in Python
Is there an efficient way to get an array of boolean values that are in the n-th position in bitwise array in Python? Create numpy array with values 0 or 1: import numpy as np array = np.array( [ [1, 0, 1], [1, 1, 1], [0, 0, 1], ] ) Compress size by np.packbits: pack_array = np.packbits(array, axis=1) Expected result - some function that could get all values from n-th column from bitwise array. For example if I would like the second column I would like to get (the same as I would call array[:,1]): array([0, 1, 0]) I have tried numba with the following function. It returns right results but it is very slow: import numpy as np from numba import njit @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res How to test it? import numpy as np import time from numba import njit array = np.random.choice(a=[False, True], size=(100000000,15)) pack_array = np.packbits(array, axis=1) start = time.time() array[:,10] print('np array') print(time.time()-start) @njit(nopython=True, fastmath=True) def getVector(packed, j): n = packed.shape[0] res = np.zeros(n, dtype=np.int32) for i in range(n): res[i] = bool(packed[i, j//8] & (128>>(j%8))) return res # To initialize getVector(pack_array, 10) start = time.time() getVector(pack_array, 10) print('getVector') print(time.time()-start) It returns: np array 0.00010132789611816406 getVector 0.15648770332336426
[ "Besides some micro-optimisations, I dont believe that there is much that can be optimised here. There are also a few small mistakes in your code:\n\n@njit(nopython=True) is saying the same thing twice (the n in njit already stands for nopython mode.) simply @njit or @jit(nopython=True) should be used\nfastMath is for \"cutting corners\" when doing floating point math, since we are only working with integers and booleans here, it can be safely removed because it does nothing for us here.\n\nMy updated code (seeing a meagre 40% perfomance increase on my machine):\nnp.random.seed(0)\narray = np.random.choice(a=[False, True], size=(10000000,15))\n\npack_array = np.packbits(array, axis=1)\n\n@njit(locals={'res': nb.boolean[:]})\ndef getVector(packed, j):\n n = packed.shape[0]\n res = np.zeros(n, dtype=nb.boolean)\n byte = j//8\n bit = 128>>(j%8)\n for i in range(n):\n res[i] = bool(packed[i, byte] & bit)\n return res\n\ngetVector(pack_array, 10)\n\nIn your answer, \"res\" is a list of 32 bit integers, by giving np.zeros() the numba (NOT numpy) boolean datatype, we can swap it to the more efficient booleans. This is where most of the perfomance improvement comes from. On my machine putting j_mod and j_flr outside of the loop had no noticable effect. But it did have an effect for the commenter @Michael Szczesny, so it might help you aswell.\nI would not try to use strides, which @Nick ODell is suggesting, because they can be quite dangerous if used incorrectly. (See the numpy documentation).\nedit: I have made a few small changes that were suggested by Michael. (Thanks)\n" ]
[ 2 ]
[]
[]
[ "bit", "bit_manipulation", "numba", "numpy", "python" ]
stackoverflow_0074512005_bit_bit_manipulation_numba_numpy_python.txt
Q: Python Function to test ping I'm trying to create a function that I can call on a timed basis to check for good ping and return the result so I can update the on-screen display. I am new to python so I don't fully understand how to return a value or set a variable in a function. Here is my code that works: import os hostname = "google.com" response = os.system("ping -c 1 " + hostname) if response == 0: pingstatus = "Network Active" else: pingstatus = "Network Error" Here is my attempt at creating a function: def check_ping(): hostname = "google.com" response = os.system("ping -c 1 " + hostname) # and then check the response... if response == 0: pingstatus = "Network Active" else: pingstatus = "Network Error" And here is how I display pingstatus: label = font_status.render("%s" % pingstatus, 1, (0,0,0)) So what I am looking for is how to return pingstatus from the function. Any help would be greatly appreciated. A: It looks like you want the return keyword def check_ping(): hostname = "taylor" response = os.system("ping -c 1 " + hostname) # and then check the response... if response == 0: pingstatus = "Network Active" else: pingstatus = "Network Error" return pingstatus You need to capture/'receive' the return value of the function (pingstatus) in a variable with something like: pingstatus = check_ping() NOTE: ping -c is for Linux, for Windows use ping -n Some info on python functions: http://www.tutorialspoint.com/python/python_functions.htm http://www.learnpython.org/en/Functions It's probably worth going through a good introductory tutorial to Python, which will cover all the fundamentals. I recommend investigating Udacity.com and codeacademy.com EDIT: This is an old question now, but.. for people who have issues with pingstatus not being defined, or returning an unexpected value, first make triple sure your code is right. Then try defining pingstatus before the if block. This may help, but issues arising from this change are for a different question. All the best. A: Here is a simplified function that returns a boolean and has no output pushed to stdout: import subprocess, platform def pingOk(sHost): try: output = subprocess.check_output("ping -{} 1 {}".format('n' if platform.system().lower()=="windows" else 'c', sHost), shell=True) except Exception: return False return True A: Adding on to the other answers, you can check the OS and decide whether to use "-c" or "-n": import os, platform host = "8.8.8.8" os.system("ping " + ("-n 1 " if platform.system().lower()=="windows" else "-c 1 ") + host) This will work on Windows, OS X, and Linux You can also use sys: import os, sys host = "8.8.8.8" os.system("ping " + ("-n 1 " if sys.platform.lower()=="win32" else "-c 1 ") + host) A: Try this import subprocess def ping(server='example.com', count=1, wait_sec=1): """ :rtype: dict or None """ cmd = "ping -c {} -W {} {}".format(count, wait_sec, server).split(' ') try: output = subprocess.check_output(cmd).decode().strip() lines = output.split("\n") total = lines[-2].split(',')[3].split()[1] loss = lines[-2].split(',')[2].split()[0] timing = lines[-1].split()[3].split('/') return { 'type': 'rtt', 'min': timing[0], 'avg': timing[1], 'max': timing[2], 'mdev': timing[3], 'total': total, 'loss': loss, } except Exception as e: print(e) return None A: import platform import subprocess def myping(host): parameter = '-n' if platform.system().lower()=='windows' else '-c' command = ['ping', parameter, '1', host] response = subprocess.call(command) if response == 0: return True else: return False print(myping("www.google.com")) A: I got a problem in Windows with the response destination host unreachable, because it returns 0. Then, I did this function to solve it, and now it works fine. import os import platform def check_ping(hostname, attempts = 1, silent = False): parameter = '-n' if platform.system().lower()=='windows' else '-c' filter = ' | findstr /i "TTL"' if platform.system().lower()=='windows' else ' | grep "ttl"' if (silent): silent = ' > NUL' if platform.system().lower()=='windows' else ' >/dev/null' else: silent = '' response = os.system('ping ' + parameter + ' ' + str(attempts) + ' ' + hostname + filter + silent) if response == 0: return True else: return False Now I can call the
A: Here is a simplified function that returns a boolean and has no output pushed to stdout: import subprocess, platform def pingOk(sHost): try: output = subprocess.check_output("ping -{} 1 {}".format('n' if platform.system().lower()=="windows" else 'c', sHost), shell=True) except Exception, e: return False return True A: Adding on to the other answers, you can check the OS and decide whether to use "-c" or "-n": import os, platform host = "8.8.8.8" os.system("ping " + ("-n 1 " if platform.system().lower()=="windows" else "-c 1 ") + host) This will work on Windows, OS X, and Linux You can also use sys: import os, sys host = "8.8.8.8" os.system("ping " + ("-n 1 " if sys.platform().lower()=="win32" else "-c 1 ") + host) A: Try this def ping(server='example.com', count=1, wait_sec=1): """ :rtype: dict or None """ cmd = "ping -c {} -W {} {}".format(count, wait_sec, server).split(' ') try: output = subprocess.check_output(cmd).decode().strip() lines = output.split("\n") total = lines[-2].split(',')[3].split()[1] loss = lines[-2].split(',')[2].split()[0] timing = lines[-1].split()[3].split('/') return { 'type': 'rtt', 'min': timing[0], 'avg': timing[1], 'max': timing[2], 'mdev': timing[3], 'total': total, 'loss': loss, } except Exception as e: print(e) return None A: import platform import subprocess def myping(host): parameter = '-n' if platform.system().lower()=='windows' else '-c' command = ['ping', parameter, '1', host] response = subprocess.call(command) if response == 0: return True else: return False print(myping("www.google.com")) A: I got a problem in Windows with the response destination host unreachable, because it returns 0. Then, I did this function to solve it, and now it works fine. import os import platform def check_ping(hostname, attempts = 1, silent = False): parameter = '-n' if platform.system().lower()=='windows' else '-c' filter = ' | findstr /i "TTL"' if platform.system().lower()=='windows' else ' | grep "ttl"' if (silent): silent = ' > NUL' if platform.system().lower()=='windows' else ' >/dev/null' else: silent = '' response = os.system('ping ' + parameter + ' ' + str(attempts) + ' ' + hostname + filter + silent) if response == 0: return True else: return False Now I can call the command: print (check_ping('192.168.1.1', 2, False)) The first parameter is the host The second is the number of requests. The third is if you want to show the ping responses or not A: This function will test ping for given number of retry attempts and will return True if reachable else False - def ping(host, retry_packets): """Returns True if host (str) responds to a ping request.""" # Option for the number of packets as a function of param = '-n' if platform.system().lower() == 'windows' else '-c' # Building the command. Ex: "ping -c 1 google.com" command = ['ping', param, str(retry_packets), host] return subprocess.call(command) == 0 # Driver Code print("Ping Status : {}".format(ping(host="xx.xx.xx.xx", retry_packets=2))) Output : Pinging xx.xx.xx.xx with 32 bytes of data: Reply from xx.xx.xx.xx: bytes=32 time=517ms TTL=60 Reply from xx.xx.xx.xx: bytes=32 time=490ms TTL=60 Ping statistics for xx.xx.xx.xx: Packets: Sent = 2, Received = 2, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 490ms, Maximum = 517ms, Average = 503ms Ping Status : True Note : Change xx.xx.xx.xx with your IP
Python Function to test ping
I'm trying to create a function that I can call on a timed basis to check for good ping and return the result so I can update the on-screen display. I am new to python so I don't fully understand how to return a value or set a variable in a function. Here is my code that works: import os hostname = "google.com" response = os.system("ping -c 1 " + hostname) if response == 0: pingstatus = "Network Active" else: pingstatus = "Network Error" Here is my attempt at creating a function: def check_ping(): hostname = "google.com" response = os.system("ping -c 1 " + hostname) # and then check the response... if response == 0: pingstatus = "Network Active" else: pingstatus = "Network Error" And here is how I display pingstatus: label = font_status.render("%s" % pingstatus, 1, (0,0,0)) So what I am looking for is how to return pingstatus from the function. Any help would be greatly appreciated.
[ "It looks like you want the return keyword\ndef check_ping():\n hostname = \"taylor\"\n response = os.system(\"ping -c 1 \" + hostname)\n # and then check the response...\n if response == 0:\n pingstatus = \"Network Active\"\n else:\n pingstatus = \"Network Error\"\n \n return pingstatus\n\nYou need to capture/'receive' the return value of the function(pingstatus) in a variable with something like:\npingstatus = check_ping()\n\nNOTE: ping -c is for Linux, for Windows use ping -n\nSome info on python functions:\nhttp://www.tutorialspoint.com/python/python_functions.htm\nhttp://www.learnpython.org/en/Functions\nIt's probably worth going through a good introductory tutorial to Python, which will cover all the fundamentals. I recommend investigating Udacity.com and codeacademy.com\nEDIT: This is an old question now, but.. for people who have issues with pingstatus not being defined, or returning an unexpected value, first make triple sure your code is right. Then try defining pingstatus before the if block. This may help, but issues arising from this change are for a different question. All the best.\n", "Here is a simplified function that returns a boolean and has no output pushed to stdout:\nimport subprocess, platform\ndef pingOk(sHost):\n try:\n output = subprocess.check_output(\"ping -{} 1 {}\".format('n' if platform.system().lower()==\"windows\" else 'c', sHost), shell=True)\n\n except Exception, e:\n return False\n\n return True\n\n", "Adding on to the other answers, you can check the OS and decide whether to use \"-c\" or \"-n\":\nimport os, platform\nhost = \"8.8.8.8\"\nos.system(\"ping \" + (\"-n 1 \" if platform.system().lower()==\"windows\" else \"-c 1 \") + host)\n\nThis will work on Windows, OS X, and Linux\nYou can also use sys:\nimport os, sys\nhost = \"8.8.8.8\"\nos.system(\"ping \" + (\"-n 1 \" if sys.platform().lower()==\"win32\" else \"-c 1 \") + host)\n\n", "Try this\ndef ping(server='example.com', count=1, wait_sec=1):\n \"\"\"\n\n :rtype: dict or None\n \"\"\"\n cmd = \"ping -c {} -W {} {}\".format(count, wait_sec, server).split(' ')\n try:\n output = subprocess.check_output(cmd).decode().strip()\n lines = output.split(\"\\n\")\n total = lines[-2].split(',')[3].split()[1]\n loss = lines[-2].split(',')[2].split()[0]\n timing = lines[-1].split()[3].split('/')\n return {\n 'type': 'rtt',\n 'min': timing[0],\n 'avg': timing[1],\n 'max': timing[2],\n 'mdev': timing[3],\n 'total': total,\n 'loss': loss,\n }\n except Exception as e:\n print(e)\n return None\n\n", "import platform\nimport subprocess\n\ndef myping(host):\n parameter = '-n' if platform.system().lower()=='windows' else '-c'\n\n command = ['ping', parameter, '1', host]\n response = subprocess.call(command)\n\n if response == 0:\n return True\n else:\n return False\n \nprint(myping(\"www.google.com\"))\n\n", "I got a problem in Windows with the response destination host unreachable, because it returns 0.\nThen, I did this function to solve it, and now it works fine.\nimport os\nimport platform\n\ndef check_ping(hostname, attempts = 1, silent = False):\n parameter = '-n' if platform.system().lower()=='windows' else '-c'\n filter = ' | findstr /i \"TTL\"' if platform.system().lower()=='windows' else ' | grep \"ttl\"'\n if (silent):\n silent = ' > NUL' if platform.system().lower()=='windows' else ' >/dev/null'\n else:\n silent = ''\n response = os.system('ping ' + parameter + ' ' + str(attempts) + ' ' + hostname + filter + silent)\n\n if response == 0:\n return True\n else:\n return False\n\nNow I can call the 
command:\nprint (check_ping('192.168.1.1', 2, False))\n\nThe first parameter is the host.\nThe second is the number of requests.\nThe third is whether you want to show the ping responses or not.\n", "This function will test ping for a given number of retry attempts and will return True if reachable else False -\nimport platform\nimport subprocess\n\ndef ping(host, retry_packets):\n    \"\"\"Returns True if host (str) responds to a ping request.\"\"\"\n\n    # Option for the number of packets as a function of the operating system\n    param = '-n' if platform.system().lower() == 'windows' else '-c'\n\n    # Building the command. Ex: \"ping -c 1 google.com\"\n    command = ['ping', param, str(retry_packets), host]\n    return subprocess.call(command) == 0\n\n# Driver Code\nprint(\"Ping Status : {}\".format(ping(host=\"xx.xx.xx.xx\", retry_packets=2)))\n\n\nOutput :\n\nPinging xx.xx.xx.xx with 32 bytes of data:\nReply from xx.xx.xx.xx: bytes=32 time=517ms TTL=60\nReply from xx.xx.xx.xx: bytes=32 time=490ms TTL=60\n\nPing statistics for xx.xx.xx.xx:\n    Packets: Sent = 2, Received = 2, Lost = 0 (0% loss),\nApproximate round trip times in milli-seconds:\n    Minimum = 490ms, Maximum = 517ms, Average = 503ms\nPing Status : True\n\nNote: Replace xx.xx.xx.xx with your IP\n" ]
[ 42, 20, 14, 5, 2, 1, 0 ]
[ "This is my version of check ping function. May be if well be usefull for someone:\ndef check_ping(host):\nif platform.system().lower() == \"windows\":\nresponse = os.system(\"ping -n 1 -w 500 \" + host + \" > nul\")\nif response == 0:\nreturn \"alive\"\nelse:\nreturn \"not alive\"\nelse:\nresponse = os.system(\"ping -c 1 -W 0.5\" + host + \"> /dev/null\")\nif response == 1:\nreturn \"alive\"\nelse:\nreturn \"not alive\"\n\n" ]
[ -2 ]
[ "function", "python", "return", "variables" ]
stackoverflow_0026468640_function_python_return_variables.txt
Q: background function in Python I've got a Python script that sometimes displays images to the user. The images can, at times, be quite large, and they are reused often. Displaying them is not critical, but displaying the message associated with them is. I've got a function that downloads the image needed and saves it locally. Right now it's run inline with the code that displays a message to the user, but that can sometimes take over 10 seconds for non-local images. Is there a way I could call this function when it's needed, but run it in the background while the code continues to execute? I would just use a default image until the correct one becomes available. A: Do something like this: def function_that_downloads(my_args): # do some long download here then inline, do something like this: import threading def my_inline_function(some_args): # do some stuff download_thread = threading.Thread(target=function_that_downloads, name="Downloader", args=some_args) download_thread.start() # continue doing stuff You may want to check if the thread has finished before going on to other things by calling download_thread.isAlive() A: Typically the way to do this would be to use a thread pool and queue downloads which would issue a signal, a.k.a an event, when that task has finished processing. You can do this within the scope of the threading module Python provides. To perform said actions, I would use event objects and the Queue module. However, a quick and dirty demonstration of what you can do using a simple threading.Thread implementation can be seen below: import os import threading import time import urllib2 class ImageDownloader(threading.Thread): def __init__(self, function_that_downloads): threading.Thread.__init__(self) self.runnable = function_that_downloads self.daemon = True def run(self): self.runnable() def downloads(): with open('somefile.html', 'w+') as f: try: f.write(urllib2.urlopen('http://google.com').read()) except urllib2.HTTPError: f.write('sorry no dice') print 'hi there user' print 'how are you today?' thread = ImageDownloader(downloads) thread.start() while not os.path.exists('somefile.html'): print 'i am executing but the thread has started to download' time.sleep(1) print 'look ma, thread is not alive: ', thread.is_alive() It would probably make sense to not poll like I'm doing above. In which case, I would change the code to this: import os import threading import time import urllib2 class ImageDownloader(threading.Thread): def __init__(self, function_that_downloads): threading.Thread.__init__(self) self.runnable = function_that_downloads def run(self): self.runnable() def downloads(): with open('somefile.html', 'w+') as f: try: f.write(urllib2.urlopen('http://google.com').read()) except urllib2.HTTPError: f.write('sorry no dice') print 'hi there user' print 'how are you today?' thread = ImageDownloader(downloads) thread.start() # show message thread.join() # display image Notice that there's no daemon flag set here. A: I prefer to use gevent for this sort of thing: import gevent from gevent import monkey; monkey.patch_all() greenlet = gevent.spawn( function_to_download_image ) display_message() # ... perhaps interaction with the user here # this will wait for the operation to complete (optional) greenlet.join() # alternatively if the image display is no longer important, this will abort it: #greenlet.kill() Everything runs in one thread, but whenever a kernel operation blocks, gevent switches contexts when there are other "greenlets" running. 
Worries about locking, etc. are much reduced, as there is only one thing running at a time, yet the image will continue to download whenever a blocking operation executes in the "main" context. Depending on how much, and what kind of thing you want to do in the background, this can be either better or worse than threading-based solutions; certainly, it is much more scalable (i.e. you can do many more things in the background), but that might not be of concern in the current situation. A: A simple way: import keyboard import threading import os def killme(): while True: if keyboard.read_key() == "q": print("Bye ..........") os._exit(0) threading.Thread(target=killme, name="killer").start() If you want to handle more keys, add more defs and more threading.Thread(target=..., name="killer").start() lines. It looks bad, but it works reliably where more complex code might not.
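A present-day sketch of the same pattern with the standard library's concurrent.futures, which hides the thread bookkeeping; download_image is a hypothetical stand-in for the poster's download function.

from concurrent.futures import ThreadPoolExecutor

def download_image(url):
    ...  # fetch and save the image, return the local path

executor = ThreadPoolExecutor(max_workers=1)
future = executor.submit(download_image, "http://example.com/big.png")
# ... show the message and a default placeholder image immediately ...
local_path = future.result()  # blocks only if the download is still running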
background function in Python
I've got a Python script that sometimes displays images to the user. The images can, at times, be quite large, and they are reused often. Displaying them is not critical, but displaying the message associated with them is. I've got a function that downloads the image needed and saves it locally. Right now it's run inline with the code that displays a message to the user, but that can sometimes take over 10 seconds for non-local images. Is there a way I could call this function when it's needed, but run it in the background while the code continues to execute? I would just use a default image until the correct one becomes available.
[ "Do something like this:\ndef function_that_downloads(my_args):\n # do some long download here\n\nthen inline, do something like this:\nimport threading\ndef my_inline_function(some_args):\n # do some stuff\n download_thread = threading.Thread(target=function_that_downloads, name=\"Downloader\", args=some_args)\n download_thread.start()\n # continue doing stuff\n\nYou may want to check if the thread has finished before going on to other things by calling download_thread.isAlive()\n", "Typically the way to do this would be to use a thread pool and queue downloads which would issue a signal, a.k.a an event, when that task has finished processing. You can do this within the scope of the threading module Python provides.\nTo perform said actions, I would use event objects and the Queue module.\nHowever, a quick and dirty demonstration of what you can do using a simple threading.Thread implementation can be seen below:\nimport os\nimport threading\nimport time\nimport urllib2\n\n\nclass ImageDownloader(threading.Thread):\n\n def __init__(self, function_that_downloads):\n threading.Thread.__init__(self)\n self.runnable = function_that_downloads\n self.daemon = True\n\n def run(self):\n self.runnable()\n\n\ndef downloads():\n with open('somefile.html', 'w+') as f:\n try:\n f.write(urllib2.urlopen('http://google.com').read())\n except urllib2.HTTPError:\n f.write('sorry no dice')\n\n\nprint 'hi there user'\nprint 'how are you today?'\nthread = ImageDownloader(downloads)\nthread.start()\nwhile not os.path.exists('somefile.html'):\n print 'i am executing but the thread has started to download'\n time.sleep(1)\n\nprint 'look ma, thread is not alive: ', thread.is_alive()\n\nIt would probably make sense to not poll like I'm doing above. In which case, I would change the code to this:\nimport os\nimport threading\nimport time\nimport urllib2\n\n\nclass ImageDownloader(threading.Thread):\n\n def __init__(self, function_that_downloads):\n threading.Thread.__init__(self)\n self.runnable = function_that_downloads\n\n def run(self):\n self.runnable()\n\n\ndef downloads():\n with open('somefile.html', 'w+') as f:\n try:\n f.write(urllib2.urlopen('http://google.com').read())\n except urllib2.HTTPError:\n f.write('sorry no dice')\n\n\nprint 'hi there user'\nprint 'how are you today?'\nthread = ImageDownloader(downloads)\nthread.start()\n# show message\nthread.join()\n# display image\n\nNotice that there's no daemon flag set here.\n", "I prefer to use gevent for this sort of thing:\nimport gevent\nfrom gevent import monkey; monkey.patch_all()\n\ngreenlet = gevent.spawn( function_to_download_image )\ndisplay_message()\n# ... perhaps interaction with the user here\n\n# this will wait for the operation to complete (optional)\ngreenlet.join()\n# alternatively if the image display is no longer important, this will abort it:\n#greenlet.kill()\n\nEverything runs in one thread, but whenever a kernel operation blocks, gevent switches contexts when there are other \"greenlets\" running. 
Worries about locking, etc. are much reduced, as there is only one thing running at a time, yet the image will continue to download whenever a blocking operation executes in the \"main\" context.\nDepending on how much, and what kind of thing you want to do in the background, this can be either better or worse than threading-based solutions; certainly, it is much more scalable (i.e. you can do many more things in the background), but that might not be of concern in the current situation.\n", "A simple way:\n\nimport keyboard\nimport threading\nimport os\n\ndef killme():\n    while True:\n        if keyboard.read_key() == \"q\":\n            print(\"Bye ..........\")\n            os._exit(0)\n\nthreading.Thread(target=killme, name=\"killer\").start()\n\n\nIf you want to handle more keys, add more defs and more threading.Thread(target=..., name=\"killer\").start() lines.\nIt looks bad, but it works reliably where more complex code might not.\n" ]
[ 165, 7, 6, 0 ]
[]
[]
[ "multithreading", "python" ]
stackoverflow_0007168508_multithreading_python.txt
Q: nbdev_export fails - TypeError: _default_exp_() takes 3 positional arguments but 4 were given I'm very new to nbdev. I created an nbdev environment and worked on one notebook inside the "nbs" folder. However, I had to organize the notebooks in "nbs", so I created new folders to contain some of them (for example, I have a folder named "nbs", and inside it several notebooks and folders such as "weather_scripts" and "astrology_scripts", and each such folder contains scripts or sometimes more folders with scripts). Since then, when I try to visualize my documentation by running this in bash: nbdev_export && pip install ./ && nbdev_preview I get an error on export: (.venv) user@me:~/git/my_script$ nbdev_export Traceback (most recent call last): File "/home/user/git/my_script/.venv/bin/nbdev_export", line 8, in <module> sys.exit(nbdev_export()) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/fastcore/script.py", line 119, in _f return tfunc(**merge(args, args_from_prog(func, xtra))) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/doclinks.py", line 137, in nbdev_export for f in files: nb_export(f) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/export.py", line 49, in nb_export nb.process() File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 126, in process for proc in self.procs: self._proc(proc) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 119, in _proc for cell in self.nb.cells: self._process_cell(proc, cell) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 109, in _process_cell if f: self._process_comment(f, cell, cmd) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 115, in _process_comment return proc(cell, *args) TypeError: _default_exp_() takes 3 positional arguments but 4 were given The only thing that changed between the time it worked and when it stopped working was that I moved notebooks into a folder inside nbs. However, I don't understand the error, and I'm very new to nbdev, so I'm looking for help understanding why I get this error and how I can solve it; any idea would be helpful :) A: In the end the problem was inside one of the notebooks. The first block had a problem with the default_exp directive: ##this is error: #|default_exp my notebook ##this is correct (with _ and without space) #|default_exp my_notebook The way to debug it was to take all the notebooks out of the nbs folder and then try them one by one. I hope it will help someone...
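A small diagnostic sketch that automates the "try one by one" step: scan every notebook under nbs/ for a default_exp directive whose module name contains a space, the failure mode found here. It assumes only the standard .ipynb JSON layout and the two common directive spellings.

import json
import pathlib

for path in pathlib.Path("nbs").rglob("*.ipynb"):
    nb = json.loads(path.read_text(encoding="utf-8"))
    for cell in nb.get("cells", []):
        src = cell.get("source", [])
        lines = src.splitlines() if isinstance(src, str) else src
        for line in lines:
            stripped = line.strip()
            for prefix in ("#|default_exp", "#| default_exp"):
                if stripped.startswith(prefix):
                    module = stripped[len(prefix):].strip()
                    if " " in module:  # e.g. "my notebook" instead of "my_notebook"
                        print(f"{path}: bad directive -> {stripped}")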
nbdev_export fails - TypeError: _default_exp_() takes 3 positional arguments but 4 were given
I'm very new to nbdev. I created an nbdev environment and worked on one notebook inside the "nbs" folder. However, I had to organize the notebooks in "nbs", so I created new folders to contain some of them (for example, I have a folder named "nbs", and inside it several notebooks and folders such as "weather_scripts" and "astrology_scripts", and each such folder contains scripts or sometimes more folders with scripts). Since then, when I try to visualize my documentation by running this in bash: nbdev_export && pip install ./ && nbdev_preview I get an error on export: (.venv) user@me:~/git/my_script$ nbdev_export Traceback (most recent call last): File "/home/user/git/my_script/.venv/bin/nbdev_export", line 8, in <module> sys.exit(nbdev_export()) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/fastcore/script.py", line 119, in _f return tfunc(**merge(args, args_from_prog(func, xtra))) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/doclinks.py", line 137, in nbdev_export for f in files: nb_export(f) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/export.py", line 49, in nb_export nb.process() File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 126, in process for proc in self.procs: self._proc(proc) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 119, in _proc for cell in self.nb.cells: self._process_cell(proc, cell) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 109, in _process_cell if f: self._process_comment(f, cell, cmd) File "/home/user/git/my_script/.venv/lib/python3.9/site-packages/nbdev/process.py", line 115, in _process_comment return proc(cell, *args) TypeError: _default_exp_() takes 3 positional arguments but 4 were given The only thing that changed between the time it worked and when it stopped working was that I moved notebooks into a folder inside nbs. However, I don't understand the error, and I'm very new to nbdev, so I'm looking for help understanding why I get this error and how I can solve it; any idea would be helpful :)
[ "In the end the problem was inside one of the notebooks.\nThe first block had problem with the deafult_exp:\n##this is error:\n#|default_exp my notebook\n\n##this is correct (with _ and without space)\n#|default_exp my_notebook\n\nThe way to debug it was to take out of nbs folder all the notebooks and then try one by one.\nI hope it will help someone...\n" ]
[ 0 ]
[]
[]
[ "bash", "jupyter_notebook", "nbdev", "python" ]
stackoverflow_0074516831_bash_jupyter_notebook_nbdev_python.txt
Q: How do I get os.listdir to return files in the order I expect? [screenshot 1: the file order the poster expects] [screenshot 2: the reversed order actually printed] Hello, I have encountered a problem. When I use os.listdir, I expect the order shown in picture 1, but Python returns the list in the reverse order. How can I get the data in the order of picture 1? A: import os files = os.listdir()[::-1] print(files) somelist[::-1] reverses the list: it walks from the last element back to the first, taking each element with step=-1. See this answer A: Do not post images of text (your second image). Use reverse() files = os.listdir() files.reverse() It is unclear how the files are ordered in the screenshot given. This solution may not work in all environments, though this will solve the symptom of your problem as given.
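Worth adding that os.listdir makes no ordering promise at all — it returns entries in arbitrary, filesystem-dependent order — so rather than reversing, a sketch of making the order explicit:

import os

files = sorted(os.listdir())  # alphabetical, platform-independent
# or sort by modification time, oldest first:
files_by_mtime = sorted(os.listdir(), key=os.path.getmtime)
print(files)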
How do I get os.listdir to return files in the order I expect?
[screenshot 1: the file order the poster expects] [screenshot 2: the reversed order actually printed] Hello, I have encountered a problem. When I use os.listdir, I expect the order shown in picture 1, but Python returns the list in the reverse order. How can I get the data in the order of picture 1?
[ "import os\nfiles = os.listdir()[::-1]\nprint(files)\n\nsomelist[::-1] reverses the list\nstarts from the end towards the first taking each element as step=-1\nSee this answer\n", "Do not post images of text (your second image). Use reverse()\nfiles = os.listdir()\nfiles.reverse()\n\nIt is unclear how the files are ordered in the screenshot given. This solution may not work in all environments, though this will solve the symptom of your problem as given.\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python", "python_os" ]
stackoverflow_0074517276_list_python_python_os.txt
Q: iterate through slices of a numpy array I have a pandas dataframe e.g. df = pd.DataFrame({'dim1': ['a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y'], 'val': [2, 4, 6, 8]}) This can represent an array of N dimensions, I have chosen two here for simplicity. I will convert this to a numpy array and then want to iterate and sum over this numpy array for each 'slice' of the array. I have achieved this but I do not know how to generalise for N dimensions. Function to convert to df -> numpy.array def df_to_numpy(df: pd.DataFrame) -> np.array: # Function to convert to np.array try: shape = [len(level) for level in df.index.levels] except AttributeError: shape = [len(df.index)] ncol = df.shape[-1] if ncol > 1: shape.append(ncol) return df.to_numpy().reshape(shape) Now to convert use this and unstack(). (Not generalise yet due to column names but easy enough to at later point) arr = df_to_numpy(df.set_index(['dim1', 'dim2']).unstack()) Now use a loop and swapaxes() to iterate through the slices of this array for _ in range(len(arr.shape)): # we now iterate through the unique groupings of this dimension for ii in range(arr.shape[0]): print('Unq grouping no.: ',ii) print('Sum: ', arr[ii,:].sum()) # swap the last and first axes and repeat step arr = arr.swapaxes(0,len(arr.shape) - 1) This appears to work for my example and some higher dimension ones I've tried. However this is not generalisable. e.g. for 4 dimensions the sum would be arr[ii,:,:,:], how can I generalise this line to work for n dimensions? A: Looking at your data frame, it feels like your headers are misleading: the dimensions are x and y. In this case you have an unorganized data set. So if you want it to be 4-dimensional, you can still keep the structure of your data frame and just have 2 extra rows for each of a and b. Right now you have a: x, y; then you can have a: x, y, z, k. df = pd.DataFrame({'dim1': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], 'dim2': ['x', 'y', 'z', 'k', 'x', 'y', 'z', 'k'], 'val': [2, 4, 6, 8, 3, 2, 1, 5]}) def df_to_numpy(df: pd.DataFrame) -> np.array: # Function to convert to np.array try: shape = [len(level) for level in df.index.levels] except AttributeError: shape = [len(df.index)] ncol = df.shape[-1] if ncol > 1: shape.append(ncol) return df.to_numpy().reshape(shape) arr = df_to_numpy(df.set_index(['dim1', 'dim2']).unstack()) arr for _ in range(len(arr.shape)): # we now iterate through the unique groupings of this dimension for ii in range(arr.shape[0]): print('Unq grouping no.: ',ii) print('Sum: ', arr[ii,:].sum()) # swap the last and first axes and repeat step arr = arr.swapaxes(0,len(arr.shape) - 1)
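As a sketch of the generalisation the question actually asks for, NumPy's sum accepts a tuple of axes, so "sum over every axis except this one" covers arr[ii, :, ..., :].sum() for any number of dimensions without swapaxes:

import numpy as np

arr = np.arange(24).reshape(2, 3, 4)  # stand-in for the unstacked array
for axis in range(arr.ndim):
    other_axes = tuple(a for a in range(arr.ndim) if a != axis)
    sums = arr.sum(axis=other_axes)   # one total per slice along `axis`
    for ii, s in enumerate(sums):
        print(f"axis {axis}, slice {ii}: sum = {s}")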
iterate through slices of a numpy array
I have a pandas dataframe e.g. df = pd.DataFrame({'dim1': ['a', 'a', 'b', 'b'], 'dim2': ['x', 'y', 'x', 'y'], 'val': [2, 4, 6, 8]}) This can represent an array of N dimensions, I have chosen two here for simplicity. I will convert this to a numpy array and then want to iterate and sum over this numpy array for each 'slice' of the array. I have achieved this but I do not know how to generalise for N dimensions. Function to convert to df -> numpy.array def df_to_numpy(df: pd.DataFrame) -> np.array: # Function to convert to np.array try: shape = [len(level) for level in df.index.levels] except AttributeError: shape = [len(df.index)] ncol = df.shape[-1] if ncol > 1: shape.append(ncol) return df.to_numpy().reshape(shape) Now to convert use this and unstack(). (Not generalise yet due to column names but easy enough to at later point) arr = df_to_numpy(df.set_index(['dim1', 'dim2']).unstack()) Now use a loop and swapaxes() to iterate through the slices of this array for _ in range(len(arr.shape)): # we now iterate through the unique groupings of this dimension for ii in range(arr.shape[0]): print('Unq grouping no.: ',ii) print('Sum: ', arr[ii,:].sum()) # swap the last and first axes and repeat step arr = arr.swapaxes(0,len(arr.shape) - 1) This appears to work for my example and some higher dimension ones I've tried. However this is not generalisable. e.g. for 4 dimensions the sum would be arr[ii,:,:,:], how can I generalise this line to work for n dimensions?
[ "looking to your data frame it feels like your headers are misleading. the dimesion is x and y. In this case you have unorganized data set. So if you want to have 4 dimensional you can still keep structure of your data frame just have extra 2 rows for each a and b. like now you have a: x, y. Then you can have a: x, y, z, k.\ndf = pd.DataFrame({'dim1': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'], 'dim2': ['x', 'y', 'z', 'k', 'x', 'y', 'z', 'k'], 'val': [2, 4, 6, 8, 3, 2, 1, 5]})\n\ndef df_to_numpy(df: pd.DataFrame) -> np.array:\n # Function to convert to np.array\n try:\n shape = [len(level) for level in df.index.levels]\n except AttributeError:\n shape = [len(df.index)]\n ncol = df.shape[-1]\n if ncol > 1:\n shape.append(ncol)\n return df.to_numpy().reshape(shape)\n\narr = df_to_numpy(df.set_index(['dim1', 'dim2']).unstack())\narr\nfor _ in range(len(arr.shape)):\n # we now iterate through the unique groupings of this dimension\n for ii in range(arr.shape[0]):\n print('Unq grouping no.: ',ii)\n print('Sum: ', arr[ii,:].sum())\n # swap the last and first axes and repeat step\n arr = arr.swapaxes(0,len(arr.shape) - 1)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074517286_arrays_numpy_python.txt
Q: How to run batch command in my python script? I have to run a one-line batch command in my Python script. Currently, I am saving my command in a .bat file and executing the .bat file using subprocess. But I want to omit the .bat file and directly include the command in my Python script, because I might need to use different bat files for different use cases. I would prefer to use one dynamic Python script rather than save multiple .bat files. bat command: "C:\Program Files (x86)\temp\FL.B5.exe" /s /a "C:\Users\kuk\Downloads\B5+Typ B.2.asc" /o "C:\Users\kuk\Download\B5+Typ B.2.docx" Python script was: import subprocess as sp sp.call([r"C:\Users\kuk\Downloads\test.bat"]) What I want is: import subprocess exe = r"C:\Program Files (x86)\temp\FL.B5.exe" input = r"C:\Users\kuk\Downloads\B5+Typ B.2.asc" output = r"C:\Users\kuk\Downloads\B5+Typ B.2.docx" cmd = '{} /s /a {} /o {}'.format(soft,var1,var2) subprocess.call(cmd) I don't know what is wrong, but I am unable to execute the script. Any help would be appreciated!! A: Your question is ambiguous. If I interpreted it correctly, you can use os.system. os.system is a function which executes commands from the console. import os os.system(r'"C:\Program Files (x86)\temp\FL.B5.exe" /s /a "C:\Users\kuk\Downloads\B5+Typ B.2.asc" /o "C:\Users\kuk\Download\B5+Typ B.2.docx"') Note the raw-string prefix: without it, the \t in \temp would be interpreted as a tab character. Do not use input as a variable name. It is a Python built-in function. A: The subprocess.call method takes a list as an argument. When dealing with shell execution tasks I like to use the shlex library to quote parameters and split commands into a list. Note: I haven't had to execute scripts on Windows for about 12 years so I'm not sure how compatible shlex is with the platform. import shlex import subprocess as sp exe = r"C:\Program Files (x86)\temp\FL.B5.exe" in_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.asc" out_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.docx" params = tuple([shlex.quote(p) for p in (exe, in_file, out_file)]) command = '%s /s /a %s /o %s' % params sp.call(shlex.split(command))
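For completeness, a sketch that skips shell quoting entirely by passing an argument list straight to subprocess.run, reusing the same paths from the question:

import subprocess

exe = r"C:\Program Files (x86)\temp\FL.B5.exe"
in_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.asc"
out_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.docx"

# each argument is its own list element, so spaces in paths need no quoting
subprocess.run([exe, "/s", "/a", in_file, "/o", out_file], check=True)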
How to run batch command in my python script?
I have to run a one-line batch command in my Python script. Currently, I am saving my command in a .bat file and executing the .bat file using the subprocess. But I want to omit the .bat file and directly include the command in my python script. Because I might need to use different bat files for different use cases. I would prefer to use one dynamic python script than save multiple .bat files. bat command: "C:\Program Files (x86)\temp\FL.B5.exe" /s /a "C:\Users\kuk\Downloads\B5+Typ B.2.asc" /o "C:\Users\kuk\Download\B5+Typ B.2.docx" Python script was: import subprocess as sp sp.call([r"C:\Users\kuk\Downloads\test.bat"]) What I want is: import subprocess exe = r"C:\Program Files (x86)\temp\FL.B5.exe" input = r"C:\Users\kuk\Downloads\B5+Typ B.2.asc" output = r"C:\Users\kuk\Downloads\B5+Typ B.2.docx" cmd = '{} /s /a {} /o {}'.format(soft,var1,var2) subprocess.call(cmd) I don't know what is wrong, but unable to execute the script. Any help would be appreciated!!
[ "your question is ambiguous. if i interpreted it correctly, you can use os.system\nos.system is a function which executes commands from console.\nimport os\n\nos.system('\"C:\\Program Files (x86)\\temp\\FL.B5.exe\" /s /a \"C:\\Users\\kuk\\Downloads\\B5+Typ B.2.asc\" /o \"C:\\Users\\kuk\\Download\\B5+Typ B.2.docx\"')\n\nDo not use input as a variable name. It is a python built-in function.\n", "The subprocess.call method takes a list as an argument. When dealing with shell execution tasks I like to use the shlex library to quote parameters and split commands into a list.\nNote: I haven't had to execute scripts on Windows for about 12 years so I'm not sure how compatible shlex is with the platform.\nimport shlex\nimport subprocess as sp\n\nexe = r\"C:\\Program Files (x86)\\temp\\FL.B5.exe\"\nin_file = r\"C:\\Users\\kuk\\Downloads\\B5+Typ B.2.asc\"\nout_file = r\"C:\\Users\\kuk\\Downloads\\B5+Typ B.2.docx\" \n\nparams = tuple([shlex.quote(p) for p in (exe, in_file, out_file)])\n\ncommand = f'%s /s /a %s /o %s' % params\n\nsp.call(shlex.split(command))\n\n" ]
[ 0, 0 ]
[]
[]
[ "batch_file", "python" ]
stackoverflow_0074516912_batch_file_python.txt
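A hedged sketch of an alternative that sidesteps the quoting problem in the question (the paths are the question's own placeholders, not verified ones): when subprocess.call receives a list, each element is passed through as a separate argument, so paths containing spaces such as "Program Files (x86)" never need manual quoting.

import subprocess

exe = r"C:\Program Files (x86)\temp\FL.B5.exe"
in_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.asc"
out_file = r"C:\Users\kuk\Downloads\B5+Typ B.2.docx"

# one list element per argument; subprocess handles Windows quoting itself
subprocess.call([exe, "/s", "/a", in_file, "/o", out_file])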
Q: AssertionError: Signal dimention should be of the format of (N,) but it is (743424, 2) instead For my ML project, I'm using a Model to which I give a video and audio as input file to detect the synthetic voice in the video. But it returns an error on the audio_processing() function: Code for audio_processing() def audio_processing(wav_file, verbose=True): rate, sig = wav.read(wav_file) if verbose: print("Sig length: {}, sample_rate: {}".format(len(sig), rate)) try: mfcc_features = speechpy.feature.mfcc(sig, sampling_frequency=rate, frame_length=0.010, frame_stride=0.010) except IndexError: raise ValueError("ERROR: Index error occurred while extracting mfcc") if verbose: print("mfcc_features shape:", mfcc_features.shape) # Number of audio clips = len(mfcc_features) // length of each audio clip number_of_audio_clips = len(mfcc_features) // AUDIO_TIME_STEPS if verbose: print("Number of audio clips:", number_of_audio_clips) # Don't consider the first MFCC feature, only consider the next 12 (Checked in syncnet_demo.m) # Also, only consider AUDIO_TIME_STEPS*number_of_audio_clips features mfcc_features = mfcc_features[:AUDIO_TIME_STEPS*number_of_audio_clips, 1:] # Reshape mfcc_features from (x, 12) to (x//20, 12, 20, 1) mfcc_features = np.expand_dims(np.transpose(np.split(mfcc_features, number_of_audio_clips), (0, 2, 1)), axis=-1) if verbose: print("Final mfcc_features shape:", mfcc_features.shape) return mfcc_features Error: AssertionError: Signal dimention should be of the format of (N,) but it is (691200, 2) instead File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2548, in __call__ return self.wsgi_app(environ, start_response) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2528, in wsgi_app response = self.handle_exception(e) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "D:\VU Final Project\4 - Final Deliverable\Synthetic-Speech-Detection-in-Video\app.py", line 673, in modelprediction audio_fea = audio_processing(audio, False) File "D:\VU Final Project\4 - Final Deliverable\Synthetic-Speech-Detection-in-Video\app.py", line 49, in audio_processing mfcc_features = speechpy.feature.mfcc( File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\feature.py", line 139, in mfcc feature, energy = mfe(signal, sampling_frequency=sampling_frequency, File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\feature.py", line 185, in mfe frames = processing.stack_frames( File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\processing.py", line 90, in stack_frames assert sig.ndim == 1, s % str(sig.shape) A: From the looks of it, your audio file contains two channels, which you can check by looking at the shape of the array that the wav.read function returns: sig.shape. The speechpy.feature.mfcc function expects a single-channel audio. I believe what you can do is to convert your audio to a single channel, for example by averaging the two channels: sig = np.mean(sig, axis=1) If you want your function to work on both single-channel and multi-channel data, you can just compute the mean only if the signal of your audio is multi-channel: if sig.ndim == 2: sig = np.mean(sig, axis=1)
AssertionError: Signal dimention should be of the format of (N,) but it is (743424, 2) instead
For my ML project, I'm using a Model to which I give a video and audio as input file to detect the synthetic voice in the video. But it returns an error on the audio_processing() function: Code for audio_processing() def audio_processing(wav_file, verbose=True): rate, sig = wav.read(wav_file) if verbose: print("Sig length: {}, sample_rate: {}".format(len(sig), rate)) try: mfcc_features = speechpy.feature.mfcc(sig, sampling_frequency=rate, frame_length=0.010, frame_stride=0.010) except IndexError: raise ValueError("ERROR: Index error occurred while extracting mfcc") if verbose: print("mfcc_features shape:", mfcc_features.shape) # Number of audio clips = len(mfcc_features) // length of each audio clip number_of_audio_clips = len(mfcc_features) // AUDIO_TIME_STEPS if verbose: print("Number of audio clips:", number_of_audio_clips) # Don't consider the first MFCC feature, only consider the next 12 (Checked in syncnet_demo.m) # Also, only consider AUDIO_TIME_STEPS*number_of_audio_clips features mfcc_features = mfcc_features[:AUDIO_TIME_STEPS*number_of_audio_clips, 1:] # Reshape mfcc_features from (x, 12) to (x//20, 12, 20, 1) mfcc_features = np.expand_dims(np.transpose(np.split(mfcc_features, number_of_audio_clips), (0, 2, 1)), axis=-1) if verbose: print("Final mfcc_features shape:", mfcc_features.shape) return mfcc_features Error: AssertionError: Signal dimention should be of the format of (N,) but it is (691200, 2) instead File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2548, in __call__ return self.wsgi_app(environ, start_response) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2528, in wsgi_app response = self.handle_exception(e) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 2525, in wsgi_app response = self.full_dispatch_request() File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1822, in full_dispatch_request rv = self.handle_user_exception(e) File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1820, in full_dispatch_request rv = self.dispatch_request() File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\flask\app.py", line 1796, in dispatch_request return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) File "D:\VU Final Project\4 - Final Deliverable\Synthetic-Speech-Detection-in-Video\app.py", line 673, in modelprediction audio_fea = audio_processing(audio, False) File "D:\VU Final Project\4 - Final Deliverable\Synthetic-Speech-Detection-in-Video\app.py", line 49, in audio_processing mfcc_features = speechpy.feature.mfcc( File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\feature.py", line 139, in mfcc feature, energy = mfe(signal, sampling_frequency=sampling_frequency, File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\feature.py", line 185, in mfe frames = processing.stack_frames( File "C:\Users\DELL\AppData\Roaming\Python\Python39\site-packages\speechpy\processing.py", line 90, in stack_frames assert sig.ndim == 1, s % str(sig.shape)
[ "From the looks of it, your audio file contains two channels, which you can check by looking at the shape of the array that the wav.read function returns: sig.shape.\nThe speechpy.feature.mfcc function expects a single-channel audio.\nI believe what you can do is to convert your audio to a single channel, for example by averaging the two channels:\nsig = np.mean(sig, axis=1)\n\nIf you want your function to work on both single-channel and multi-channel data, you can just compute the mean only if the signal of your audio is multi-channel:\nif sig.ndim == 2:\n sig = np.mean(sig, axis=1)\n\n" ]
[ 1 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "scipy", "tensorflow" ]
stackoverflow_0074516426_deep_learning_machine_learning_python_scipy_tensorflow.txt
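A short, self-contained sketch of the channel-averaging fix described above, assuming a stereo WAV file named input.wav (a placeholder name):

import numpy as np
import scipy.io.wavfile as wav

rate, sig = wav.read("input.wav")
if sig.ndim == 2:               # shape (N, channels) -> average down to mono
    sig = np.mean(sig, axis=1)
print(sig.shape)                # now (N,), the shape the assertion requires

The averaged signal is float-valued; if integer samples are needed later, sig.astype(np.int16) is one option.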
Q: Video streaming with OpenCV and flask I have a flask web application that reads my camera and is supposed to display it in my web browser. But instead of displaying it, I am getting a blank image as shown here: py file import cv2 import numpy from flask import Flask, render_template, Response, stream_with_context, Request video = cv2.VideoCapture(0) app = Flask(__name__) def video_stream(): while(True): ret, frame = video.read() if not ret: break else: ret, buffer = cv2.imencode('.jpeg',frame) frame = buffer.tobytes() yield (b'--frame\r\n' b'Content-type: image/jpeg\r\n\r\n' + frame + b'\r\n') @app.route('/siteTest') def siteTest(): return render_template('siteTest.html') @app.route('/video_feed') def video_feed(): return Response(video_stream(), mimetype= 'multipart/x-mixed-replace; boundary = frame') app.run(host ='0.0.0.0', port= '5000', debug=False) (changed the IP for obvious reasons) html file <html> <head> <meta name = "viewport" content = "width = device-width, initial-scale=1"> <style> img{ display: block; margin-left: auto; margin-right: auto; } h1 {text-align: center;} </style> </head> <body> <img id="bg" src = "{{ url_for('video_feed') }}" style="width: 88%;"> </body> </html> Any help would be appreciated I tried changing the response which didn't work as well as changing the video_stream() but I think that I did something wrong. A: Remove whitespaces from boundary = frame Restart server code: def video_feed(): return Response(video_stream(), mimetype= 'multipart/x-mixed-replace; boundary=frame')
Video streaming with OpenCV and flask
I have a flask web application that reads my camera and is supposed to display it in my web browser. But instead of displaying it, I am getting a blank image as shown here: py file import cv2 import numpy from flask import Flask, render_template, Response, stream_with_context, Request video = cv2.VideoCapture(0) app = Flask(__name__) def video_stream(): while(True): ret, frame = video.read() if not ret: break else: ret, buffer = cv2.imencode('.jpeg',frame) frame = buffer.tobytes() yield (b'--frame\r\n' b'Content-type: image/jpeg\r\n\r\n' + frame + b'\r\n') @app.route('/siteTest') def siteTest(): return render_template('siteTest.html') @app.route('/video_feed') def video_feed(): return Response(video_stream(), mimetype= 'multipart/x-mixed-replace; boundary = frame') app.run(host ='0.0.0.0', port= '5000', debug=False) (changed the IP for obvious reasons) html file <html> <head> <meta name = "viewport" content = "width = device-width, initial-scale=1"> <style> img{ display: block; margin-left: auto; margin-right: auto; } h1 {text-align: center;} </style> </head> <body> <img id="bg" src = "{{ url_for('video_feed') }}" style="width: 88%;"> </body> </html> Any help would be appreciated I tried changing the response which didn't work as well as changing the video_stream() but I think that I did something wrong.
[ "\nRemove whitespaces from boundary = frame\nRestart server\n\ncode:\ndef video_feed():\n return Response(video_stream(), mimetype= 'multipart/x-mixed-replace; boundary=frame')\n\n" ]
[ 0 ]
[]
[]
[ "flask", "opencv", "python", "video_capture" ]
stackoverflow_0074515443_flask_opencv_python_video_capture.txt
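A minimal sketch of why removing the whitespace works: the browser splits the stream on the boundary token declared in the mimetype, so it must match the token written between frames byte for byte. Defining it once keeps the two from drifting apart (the constant names here are an assumption, not Flask API):

BOUNDARY = "frame"
MIMETYPE = "multipart/x-mixed-replace; boundary=" + BOUNDARY  # no spaces
FRAME_HEADER = b"--" + BOUNDARY.encode() + b"\r\nContent-Type: image/jpeg\r\n\r\n"

def wrap(jpeg_bytes: bytes) -> bytes:
    # one multipart chunk; the browser cuts the stream at each "--frame"
    return FRAME_HEADER + jpeg_bytes + b"\r\n"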
Q: Why an uninstalled module is still importable in Python I want to get rid of a module in Python and I use the "pip uninstall " command. However, for some reason the module is still importable! I am using VS Code on a Mac OS. Here is the screenshot of the code: As you can see, the yellow warning says the polars package is not installed (because I already executed the uninstall command); however, in the cell below it, the polars module has been imported successfully! Can anyone explain what is happening and how I can completely remove the module so it is not importable anymore?
Why an uninstalled module is still importable in Python
I want to get rid of a module in Python and I use the "pip uninstall " command. However, for some reason the module is still importable! I am using VS Code on a Mac OS. Here is the screenshot of the code: As you can see, the yellow warning says the polars package is not installed (because I already executed the uninstall command); however, in the cell below it, the polars module has been imported successfully! Can anyone explain what is happening and how I can completely remove the module so it is not importable anymore?
[ "1 -Try to use %pip, maybe you are using virtual machines and %pip is 'magic' command that actually runs commands to uninstall the same package across all machines in the cluster\n2- Script wrappers installed by python setup.py develop.\nYou need to remove all files manually, and also undo any other stuff that installation did manually.\nIf you don't know the list of all files, you can reinstall it with the --record option, and take a look at the list this produces. To record list of installed files, you can use:\n$ python setup.py install --record files.txt\n\n" ]
[ 0 ]
[ "All you have to do is restart your kernel.\nUse this button to restart the kernel\nhttps://i.stack.imgur.com/S9Q6L.png\n" ]
[ -2 ]
[ "python" ]
stackoverflow_0074517161_python.txt
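A small diagnostic sketch for situations like the one above (the package name polars is taken from the question): it prints which interpreter the notebook kernel runs and where the still-importable module lives, which usually reveals that pip uninstalled from a different environment.

import sys
import importlib.util

print(sys.executable)                        # interpreter the kernel is using
spec = importlib.util.find_spec("polars")
print(spec.origin if spec else "polars is not importable")

Uninstalling with that exact interpreter (python -m pip uninstall polars, invoked via the path printed above) and then restarting the kernel removes both the stale environment copy and the module already cached in memory.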
Q: How to upload folder on Google Cloud Storage using Python API I have successfully uploaded a single text file on Google Cloud Storage. But when I try to upload a whole folder, it gives a permission denied error. filename = "d:/foldername" #here test1 is the folder. Error: Traceback (most recent call last): File "test1.py", line 142, in <module> upload() File "test1.py", line 106, in upload media = MediaFileUpload(filename, chunksize=CHUNKSIZE, resumable=True) File "D:\jatin\Project\GAE_django\GCS_test\oauth2client\util.py", line 132, in positional_wrapper return wrapped(*args, **kwargs) File "D:\jatin\Project\GAE_django\GCS_test\apiclient\http.py", line 422, in __init__ fd = open(self._filename, 'rb') IOError: [Errno 13] Permission denied: 'd:/foldername' A: This works for me. Copy all content from a local directory to a specific bucket-name/full-path (recursive) in google cloud storage: import glob from google.cloud import storage def upload_local_directory_to_gcs(local_path, bucket, gcs_path): assert os.path.isdir(local_path) for local_file in glob.glob(local_path + '/**'): if not os.path.isfile(local_file): upload_local_directory_to_gcs(local_file, bucket, gcs_path + "/" + os.path.basename(local_file)) else: remote_path = os.path.join(gcs_path, local_file[1 + len(local_path):]) blob = bucket.blob(remote_path) blob.upload_from_filename(local_file) upload_local_directory_to_gcs(local_path, bucket, BUCKET_FOLDER_DIR) A: A version without a recursive function, and it works with 'top level files' (unlike the top answer): import glob import os from google.cloud import storage GCS_CLIENT = storage.Client() def upload_from_directory(directory_path: str, dest_bucket_name: str, dest_blob_name: str): rel_paths = glob.glob(directory_path + '/**', recursive=True) bucket = GCS_CLIENT.get_bucket(dest_bucket_name) for local_file in rel_paths: remote_path = f'{dest_blob_name}/{"/".join(local_file.split(os.sep)[1:])}' if os.path.isfile(local_file): blob = bucket.blob(remote_path) blob.upload_from_filename(local_file) A: A folder is a cataloging structure containing references to files and directories. The library will not accept a folder as an argument. As far as I understand, your use case is to make an upload to GCS preserving a local folder structure. To accomplish that you can use the os python module and make a recursive function (e.g process_folder) that will take path as an argument. This logic can be used for the function: Use os.listdir() method to get a list of objects within the source path (will return both files and folders). Iterate over a list from step 1 to separate files from folders via os.path.isdir() method. Iterate over files and upload them with adjusted path (e.g. path+ “/“ + file_name). Iterate over folders making a recursive call (e.g. process_folder(path+folder_name)). It’ll be necessary to work with two paths: Real system path (e.g. “/Users/User/…/upload_folder/folder_name”) used with os module. Virtual path for GCS file uploads (e.g. “upload”+”/“ + folder_name + ”/“ + file_name). Don’t forget to implement exponential backoff referenced at [1] to deal with 500 errors. You can use a Drive SDK example at [2] as a reference. [1] - https://developers.google.com/storage/docs/json_api/v1/how-tos/upload#exp-backoff [2] - https://developers.google.com/drive/web/handle-errors A: I assume the sheer filename = "D:\foldername" is not enough info about the source code. Neither am I sure that this is even possible.. via the web interface you can also just upload files or create folders where you then upload the files. You could save the folders name, then create it (I've never used the google-app-engine, but I guess that should be possible) and then upload the contents to the new folder A: Refer - https://hackersandslackers.com/manage-files-in-google-cloud-storage-with-python/ from os import listdir from os.path import isfile, join ... def upload_files(bucketName): """Upload files to GCP bucket.""" files = [f for f in listdir(localFolder) if isfile(join(localFolder, f))] for file in files: localFile = localFolder + file blob = bucket.blob(bucketFolder + file) blob.upload_from_filename(localFile) return f'Uploaded {files} to "{bucketName}" bucket.' A: The solution can also be used for windows systems. Simply provide the folder name to upload and the destination bucket name. Additionally, it can handle any level of subdirectories in a folder. import os from google.cloud import storage storage_client = storage.Client() def upload_files(bucketName, folderName): """Upload files to GCP bucket.""" bucket = storage_client.get_bucket(bucketName) for path, subdirs, files in os.walk(folderName): for name in files: path_local = os.path.join(path, name) blob_path = path_local.replace('\\','/') blob = bucket.blob(blob_path) blob.upload_from_filename(path_local) A: I just came across the gcsfs library which seems to be also about better interfaces You could copy an entire directory into a gcs location like this: def upload_to_gcs(src_dir: str, gcs_dst: str): fs = gcsfs.GCSFileSystem() fs.put(src_dir, gcs_dst, recursive=True) A: Here is my recursive implementation. We need to create a file named gdrive_utils.py and write the following. from googleapiclient.discovery import build from google_auth_oauthlib.flow import InstalledAppFlow from google.auth.transport.requests import Request from apiclient.http import MediaFileUpload, MediaIoBaseDownload import pickle import glob import os # The following scopes are required for access to google drive. # If modifying these scopes, delete the file token.pickle. SCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly', 'https://www.googleapis.com/auth/drive.metadata', 'https://www.googleapis.com/auth/drive', 'https://www.googleapis.com/auth/drive.file', 'https://www.googleapis.com/auth/drive.appdata'] def get_gdrive_service(): """ Tries to authenticate using a token. If token expires or not present creates one. :return: Returns authenticated service object :rtype: object """ creds = None # The file token.pickle stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.pickle'): with open('token.pickle', 'rb') as token: creds = pickle.load(token) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'keys/client-secret.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for the next run with open('token.pickle', 'wb') as token: pickle.dump(creds, token) # return Google Drive API service return build('drive', 'v3', credentials=creds) def createRemoteFolder(drive_service, folderName, parent_id): # Create a folder on Drive, returns the newly created folders ID body = { 'name': folderName, 'mimeType': "application/vnd.google-apps.folder", 'parents': [parent_id] } root_folder = drive_service.files().create(body = body, supportsAllDrives=True, fields='id').execute() return root_folder['id'] def upload_file(drive_service, file_location, parent_id): # Create a folder on Drive, returns the newly created folders ID body = { 'name': os.path.split(file_location)[1], 'parents': [parent_id] } media = MediaFileUpload(file_location, resumable=True) file_details = drive_service.files().create(body = body, media_body=media, supportsAllDrives=True, fields='id').execute() return file_details['id'] def upload_file_recursively(g_drive_service, root, folder_id): files_list = glob.glob(root) if files_list: for file_contents in files_list: if os.path.isdir(file_contents): # create new _folder new_folder_id = createRemoteFolder(g_drive_service, os.path.split(file_contents)[1], folder_id) upload_file_recursively(g_drive_service, os.path.join(file_contents, '*'), new_folder_id) else: # upload to given folder id upload_file(g_drive_service, file_contents, folder_id) After that use the following import os from gdrive_utils import createRemoteFolder, upload_file_recursively, get_gdrive_service g_drive_service = get_gdrive_service() FOLDER_ID_FOR_UPLOAD = "<replace with folder id where you want upload>" main_folder_id = createRemoteFolder(g_drive_service, '<name_of_main_folder>', FOLDER_ID_FOR_UPLOAD) And finally use this upload_file_recursively(g_drive_service, os.path.join("<your_path_>", '*'), main_folder_id) A: Another option is to use gsutils, the command-line tool for interacting with Google Cloud: gsutil cp -r ./my/local/directory gs://my_gcp_bucket/foo/bar The -r flag tells gsutils to copy recursively. Link gsutils to documentation. Invoking gsutils in Python can be done like this: import subprocess subprocess.check_call('gsutil cp -r ./my/local/directory gs://my_gcp_bucket/foo/bar')
How to upload folder on Google Cloud Storage using Python API
I have successfully uploaded a single text file on Google Cloud Storage. But when I try to upload a whole folder, it gives a permission denied error. filename = "d:/foldername" #here test1 is the folder. Error: Traceback (most recent call last): File "test1.py", line 142, in <module> upload() File "test1.py", line 106, in upload media = MediaFileUpload(filename, chunksize=CHUNKSIZE, resumable=True) File "D:\jatin\Project\GAE_django\GCS_test\oauth2client\util.py", line 132, in positional_wrapper return wrapped(*args, **kwargs) File "D:\jatin\Project\GAE_django\GCS_test\apiclient\http.py", line 422, in __init__ fd = open(self._filename, 'rb') IOError: [Errno 13] Permission denied: 'd:/foldername'
[ "This works for me. Copy all content from a local directory to a specific bucket-name/full-path (recursive) in google cloud storage:\nimport glob\nfrom google.cloud import storage\n\ndef upload_local_directory_to_gcs(local_path, bucket, gcs_path):\n assert os.path.isdir(local_path)\n for local_file in glob.glob(local_path + '/**'):\n if not os.path.isfile(local_file):\n upload_local_directory_to_gcs(local_file, bucket, gcs_path + \"/\" + os.path.basename(local_file))\n else:\n remote_path = os.path.join(gcs_path, local_file[1 + len(local_path):])\n blob = bucket.blob(remote_path)\n blob.upload_from_filename(local_file)\n\n\nupload_local_directory_to_gcs(local_path, bucket, BUCKET_FOLDER_DIR)\n\n", "A version without a recursive function, and it works with 'top level files' (unlike the top answer):\nimport glob\nimport os \nfrom google.cloud import storage\n\nGCS_CLIENT = storage.Client()\ndef upload_from_directory(directory_path: str, dest_bucket_name: str, dest_blob_name: str):\n rel_paths = glob.glob(directory_path + '/**', recursive=True)\n bucket = GCS_CLIENT.get_bucket(dest_bucket_name)\n for local_file in rel_paths:\n remote_path = f'{dest_blob_name}/{\"/\".join(local_file.split(os.sep)[1:])}'\n if os.path.isfile(local_file):\n blob = bucket.blob(remote_path)\n blob.upload_from_filename(local_file)\n\n", "A folder is a cataloging structure containing references to files and directories. The library will not accept a folder as an argument. \nAs far as I understand, your use case is to make an upload to GCS preserving a local folder structure. To accomplish that you can use the os python module and make a recursive function (e.g process_folder) that will take path as an argument. This logic can be used for the function:\n\nUse os.listdir() method to get a list of objects within the source path (will return both files and folders).\nIterate over a list from step 1 to separate files from folders via os.path.isdir() method.\nIterate over files and upload them with adjusted path (e.g. path+ “/“ + file_name).\nIterate over folders making a recursive call (e.g. process_folder(path+folder_name)).\n\nIt’ll be necessary to work with two paths:\n\nReal system path (e.g. “/Users/User/…/upload_folder/folder_name”) used with os module.\nVirtual path for GCS file uploads (e.g. “upload”+”/“ + folder_name + ”/“ + file_name).\n\nDon’t forget to implement exponential backoff referenced at [1] to deal with 500 errors. You can use a Drive SDK example at [2] as a reference.\n[1] - https://developers.google.com/storage/docs/json_api/v1/how-tos/upload#exp-backoff\n[2] - https://developers.google.com/drive/web/handle-errors\n", "I assume the sheer filename = \"D:\\foldername\" is not enough info about the source code. Neither am I sure that this is even possible.. via the web interface you can also just upload files or create folders where you then upload the files. 
\nYou could save the folders name, then create it (I've never used the google-app-engine, but I guess that should be possible) and then upload the contents to the new folder\n", "Refer -\nhttps://hackersandslackers.com/manage-files-in-google-cloud-storage-with-python/\nfrom os import listdir\nfrom os.path import isfile, join\n\n...\n\ndef upload_files(bucketName):\n \"\"\"Upload files to GCP bucket.\"\"\"\n files = [f for f in listdir(localFolder) if isfile(join(localFolder, f))]\n for file in files:\n localFile = localFolder + file\n blob = bucket.blob(bucketFolder + file)\n blob.upload_from_filename(localFile)\n return f'Uploaded {files} to \"{bucketName}\" bucket.'\n\n", "The solution can also be used for windows systems. Simply provide the folder name to upload the destination bucket name.Additionally, it can handle any level of subdirectories in a folder.\nimport os\nfrom google.cloud import storage\nstorage_client = storage.Client()\ndef upload_files(bucketName, folderName):\n\"\"\"Upload files to GCP bucket.\"\"\"\nbucket = storage_client.get_bucket(bucketName)\nfor path, subdirs, files in os.walk(folderName):\n for name in files:\n path_local = os.path.join(path, name)\n blob_path = path_local.replace('\\\\','/')\n blob = bucket.blob(blob_path)\n blob.upload_from_filename(path_local)\n\n", "I just came across the gcsfs library which seems to be also about better interfaces\nYou could copy an entire directory into a gcs location like this:\n\ndef upload_to_gcs(src_dir: str, gcs_dst: str):\n fs = gcsfs.GCSFileSystem()\n fs.put(src_dir, gcs_dst, recursive=True)\n\n", "Here is my recursive implementation . we need to create a file named gdrive_utils.py and write the following.\n\nfrom googleapiclient.discovery import build\nfrom google_auth_oauthlib.flow import InstalledAppFlow\nfrom google.auth.transport.requests import Request\nfrom apiclient.http import MediaFileUpload, MediaIoBaseDownload\nimport pickle\nimport glob\nimport os\n\n\n# The following scopes are required for access to google drive.\n# If modifying these scopes, delete the file token.pickle.\nSCOPES = ['https://www.googleapis.com/auth/drive.metadata.readonly',\n 'https://www.googleapis.com/auth/drive.metadata',\n 'https://www.googleapis.com/auth/drive',\n 'https://www.googleapis.com/auth/drive.file',\n 'https://www.googleapis.com/auth/drive.appdata']\n\n\ndef get_gdrive_service():\n \"\"\"\n Tries to authenticate using a token. 
If token expires or not present creates one.\n :return: Returns authenticated service object\n :rtype: object\n \"\"\"\n creds = None\n # The file token.pickle stores the user's access and refresh tokens, and is\n # created automatically when the authorization flow completes for the first\n # time.\n if os.path.exists('token.pickle'):\n with open('token.pickle', 'rb') as token:\n creds = pickle.load(token)\n # If there are no (valid) credentials available, let the user log in.\n if not creds or not creds.valid:\n if creds and creds.expired and creds.refresh_token:\n creds.refresh(Request())\n else:\n flow = InstalledAppFlow.from_client_secrets_file(\n 'keys/client-secret.json', SCOPES)\n creds = flow.run_local_server(port=0)\n # Save the credentials for the next run\n with open('token.pickle', 'wb') as token:\n pickle.dump(creds, token)\n # return Google Drive API service\n return build('drive', 'v3', credentials=creds)\n\n\ndef createRemoteFolder(drive_service, folderName, parent_id):\n # Create a folder on Drive, returns the newely created folders ID\n body = {\n 'name': folderName,\n 'mimeType': \"application/vnd.google-apps.folder\",\n 'parents': [parent_id]\n }\n\n root_folder = drive_service.files().create(body = body, supportsAllDrives=True, fields='id').execute()\n return root_folder['id']\n\n\ndef upload_file(drive_service, file_location, parent_id):\n # Create a folder on Drive, returns the newely created folders ID\n body = {\n 'name': os.path.split(file_location)[1],\n 'parents': [parent_id]\n }\n\n media = MediaFileUpload(file_location,\n resumable=True)\n\n file_details = drive_service.files().create(body = body,\n media_body=media,\n supportsAllDrives=True,\n fields='id').execute()\n return file_details['id']\n\n\ndef upload_file_recursively(g_drive_service, root, folder_id):\n\n files_list = glob.glob(root)\n if files_list:\n for file_contents in files_list:\n if os.path.isdir(file_contents):\n # create new _folder\n new_folder_id = createRemoteFolder(g_drive_service, os.path.split(file_contents)[1],\n folder_id)\n upload_file_recursively(g_drive_service, os.path.join(file_contents, '*'), new_folder_id)\n else:\n # upload to given folder id\n upload_file(g_drive_service, file_contents, folder_id)\n\n\nAfter that use the following\nimport os\n\nfrom gdrive_utils import createRemoteFolder, upload_file_recursively, get_gdrive_service\n\ng_drive_service = get_gdrive_service()\nFOLDER_ID_FOR_UPLOAD = \"<replace with folder id where you want upload>\"\nmain_folder_id = createRemoteFolder(g_drive_service, '<name_of_main_folder>', FOLDER_ID_FOR_UPLOAD)\n\n\nAnd finally use this\nupload_file_recursively(g_drive_service, os.path.join(\"<your_path_>\", '*'), main_folder_id)\n\n", "Another option is to use gsutils, the command-line tool for interacting with Google Cloud:\ngsutil cp -r ./my/local/directory gs://my_gcp_bucket/foo/bar\n\nThe -r flag tells gsutils to copy recursively. Link gsutils to documentation.\nInvoking gsutils in Python can be done like this:\nimport subprocess\n\nsubprocess.check_call('gsutil cp -r ./my/local/directory gs://my_gcp_bucket/foo/bar')\n\n" ]
[ 16, 8, 4, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "django", "google_app_engine", "python" ]
stackoverflow_0025599503_django_google_app_engine_python.txt
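A compact variant of the recursive answers above using pathlib, assuming the google-cloud-storage client library; the bucket and prefix names are placeholders:

from pathlib import Path
from google.cloud import storage

def upload_directory(local_dir: str, bucket_name: str, prefix: str) -> None:
    bucket = storage.Client().bucket(bucket_name)
    root = Path(local_dir)
    for path in root.rglob("*"):          # walks every subdirectory
        if path.is_file():
            # mirror the local layout under the given prefix
            blob = bucket.blob(prefix + "/" + path.relative_to(root).as_posix())
            blob.upload_from_filename(str(path))

upload_directory("d:/foldername", "my-bucket", "upload")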
Q: Python insert element based on condition I'm trying to make a list where based on a condition, an element may or may not exist. For example, if it's true, the list is [1, 2, 3], and otherwise, it's [1, 3]. Currently, what I could do is either initialize the list and call .insert or .append the elements individually, or alternatively, I could do something like [1] + ([2] if condition else []) + [3], but that's ugly. I was wondering if there was some sort of syntax like [1, 2 if condition, 3] for example, but I can't seem to find anything of that sort. Is there a similar syntax to this? EDIT My list isn't [1, 2, 3]. I want a general solution for any type of object because I'm not even working with numbers (these are WTForms validators) A: I'm also used to doing this in Perl with a pattern like: my @arr = (1, (condition? (2) : ()), 3); In Python you can get somewhat close to this with a solution that's pretty close to what you have with the list +, but uses * unpacking to avoid a lot of the other arrays: arr = [1, *((2,) if condition else ()), 3] Could clean that up a little with a helper that yields the value if the condition is true. T = TypeVar("T") def cond(val: T, ok: bool) -> Iterable[T]: if ok: yield val arr = [1, *cond(2, condition), 3] This has the downside of not short circuiting though, so if the creation of the used value is expensive you might want to provide a function instead of a value to be called and returned when the condition is true. Yet another option could be to use a sentinel value and filter for it when constructing the list. Can combine this with a helper to do the filtering. class _Ignore: pass IGNORE = _Ignore() def cond_list(*elements: Union[T, _Ignore]) -> list[T]: return [e for e in elements if not isinstance(e, _Ignore)] arr = cond_list(1, 2 if condition else IGNORE, 3) If you want to stay flexible for the resulting container type you could choose to return an iterable which you can pass to a container constructor: def cond_iter(*elements: Union[T, _Ignore]) -> Iterable[T]: yield from (e for e in elements if not isinstance(e, _Ignore)) arr = set(cond_iter(1, 2 if condition else IGNORE, 3)) A: You can create a list with conditions and use list comprehension: condition = [True, False, True, True, False] [i for i in range(len(condition)) if condition[i]] A: if you have a list of conditions for all elements you could do this: elems = ['a', 'b', 'c'] conditions = [True, False, True] lst = [item for item, condition in zip(elems, conditions) if condition] print(lst) that could also be done using itertools.compress: from itertools import compress elems = ['a', 'b', 'c'] conditions = [True, False, True] lst = list(compress(elems, conditions)) or you generate your list and remove the element afterwards: lst = ['a', 'b', 'c'] if condition: lst.remove('b') A: Is this what you're looking for? [i for i in range(1,4) if i!=2] UPDATE: based on the nice answer above. Making condition a function instead of a list will generalize it. def condition(x): if(x==2): return False else: return True [i for i in range(1,4) if condition(i)] A: You should consider generating the list with a generator like: def generate(condition: bool, value: int): yield 1 if condition: yield value yield 3 l = list(generate(True, 2)) A: You can use the splat/unpacking operator to expand an element either to the object you want to add or to nothing. Try changing the condition from True to False to check the results. class FirstClass: x = 5 class SecondClass: y = 10 possible_object = (SecondClass().y,) if True else () l = [ FirstClass().x, *possible_object, FirstClass().x, *possible_object, ] print(l) A: I think you got a variation of this answer before, but maybe this makes it more readable, like you wanted? def is_not_dog(item): return item != "dog" items = ["cat", "dog", "horse", "snail"] not_dog = [i for i in items if is_not_dog(i)] not_dog ['cat', 'horse', 'snail'] A: Here's an attempt using filter(): conditioned = {2} condition = True [i for i in filter(lambda x: condition if x in conditioned else True, [1,2,3,4])] # [1, 2, 3, 4] condition = False [i for i in filter(lambda x: condition if x in conditioned else True, [1,2,3,4])] [1, 3, 4] Of course you could replace the boolean condition with a function condition(), and use it as a filter instead of the lambda function. A: If I have understood your question correctly, you want to output 1 predefined list if condition is true or return the same list plus an element if condition is false (for example). If that's correct you might want something like this: a = [1, 2] my_condition = 5 > 6 print(a if my_condition else a + [3]) Or if you want to define the list in the same line: my_condition = 5 > 6 print([1,2] if my_condition else [1, 2] + [3]) In these cases 5 is not greater than 6 so the condition is false so [1, 2, 3] will be output. Edit: just reading your question it looks like you might want a single value then for true + [one or more values] or for false + [one or more values] so this might be better: a = [1] print(a + [3] if my_condition else a + [2, 3]) print([1] + [3] if my_condition else [1] + [2, 3]) (Just using prints to demonstrate but you can assign this output to a variable using brackets) I hope this answers your question.
Python insert element based on condition
I'm trying to make a list where based on a condition, an element may or may not exist. For example, if it's true, the list is [1, 2, 3], and otherwise, it's [1, 3]. Currently, what I could do is either initialize the list and call .insert or .append the elements individually, or alternatively, I could do something like [1] + ([2] if condition else []) + [3], but that's ugly. I was wondering if there was some sort of syntax like [1, 2 if condition, 3] for example, but I can't seem to find anything of that sort. Is there a similar syntax to this? EDIT My list isn't [1, 2, 3]. I want a general solution for any type of object because I'm not even working with numbers (these are WTForms validators)
[ "I'm also used to doing this in Perl with a pattern like:\nmy @arr = (1, (condition? (2) : ()), 3);\n\nIn Python you can get somewhat close to this with a solution that's pretty close to what you have with the list +, but uses * unpacking to avoid a lot of the other arrays:\narr = [1, *((2,) if condition else ()), 3]\n\nCould clean that up a little with a helper that yields the value if the condition is true.\nT = TypeVar(\"T\")\n\ndef cond(val: T, ok: bool) -> Iterable[T]:\n if ok:\n yield val\n\narr = [1, *cond(2, condition), 3]\n\nThis has the downside of not short circuiting though, so if the creation of the used value is expensive you might want to provide a function instead of a value to be called and returned when the condition is true.\nYet another option could be to use a sentinel value and filter for it when constructing the list. Can combine this with a helper to do the filtering.\nclass _Ignore: pass\nIGNORE = _Ignore()\n\ndef cond_list(*elements: Union[T, _Ignore]) -> list[T]:\n return [e for e in elements if not isinstance(e, _Ignore)]\n\n\narr = cond_list(1, 2 if condition else IGNORE, 3)\n\nIf you want to stay flexible for the resulting container type you could choose to return an iterable which you can pass to a container constructor:\ndef cond_iter(*elements: Union[T, _Ignore]) -> Iterable[T]:\n yield from (e for e in elements if not isinstance(e, _Ignore))\n\n\narr = set(cond_iter(1, 2 if condition else IGNORE, 3))\n\n", "You can create a list with conditions and use list comprehension:\ncondition = [True, False, True, True, False]\n[i for i in range(len(condition)) if condition[i]]\n\n", "if you have a list of contidions for all elements you could to this:\nelems = ['a', 'b', 'c']\nconditions = [True, False, True]\n\nlst = [item for item, condition in zip(elems, conditions) if condition]\nprint(lst)\n\nthat could also be done using itertools.compress:\nfrom itertools import compress\n\nelems = ['a', 'b', 'c']\nconditions = [True, False, True]\nlst = list(compress(elems, conditions))\n\n\nor you generate your list and remove the element afterwards:\nlst = ['a', 'b', 'c']\nif condition:\n lst.remove('b')\n\n", "Is this what you're looking for?\n[i for i in range(1,4) if i!=2]\nUPDATE: based on nice answer above.\nMaking condition a function instead of a list will generalize it. \ndef condition(x):\n if(x==2):\n return False\n else:\n return True\n\n[i for i in range(1,4) if condition(i)] \n\n", "You should consider to generate the list with a generator like:\ndef generate(condition: bool, value: int):\n yield 1\n if condition:\n yield value\n yield 3\nl = list(generate(True, 2)\n\n", "You can use the splat/unpacking operator to expand an element either to the object you want to add or to nothing. 
Try changing the condition from True to False to check the results.\nclass FirstClass:\n x = 5\n \nclass SecondClass:\n y = 10\n\n\npossible_object = (SecondClass().y,) if True else ()\nl = [\n FirstClass().x,\n *possible_object,\n FirstClass().x,\n *possible_object,\n]\nprint(l)\n\n", "I think you got a variation of this answer before, but maybe this makes it more readable, like you wanted?\ndef is_not_dog(item):\n return item != \"dog\"\n\nitems = [\"cat\", \"dog\", \"horse\", \"snail\"]\n\nnot_dog = [i for i in items if is_not_dog(i)]\nnot_dog\n\n\n['cat', 'horse', 'snail']\n\n", "Here's an attempt using filter():\nconditioned = {2}\ncondition = True\n[i for i in filter(lambda x: condition if x in conditioned else True, [1,2,3,4])]\n# [1, 2, 3, 4]\n\ncondition = False\n[i for i in filter(lambda x: condition if x in conditioned else True, [1,2,3,4])]\n[1, 3, 4]\n\nOf course you could replace the boolean condition with a function condition(), and use it as a filter instead of the lambda function.\n", "If I have understood your question correctly, you want to output 1 predefined list if condition is true or return the same list plus an element if condition is false (for example). If that's correct you might want something like this:\na = [1, 2]\n\nmy_condition = 5 > 6\n\nprint(a if my_condition else a + [3])\n\nOr if you want to define the list in the same line:\nmy_condition = 5 > 6\nprint([1,2] if my_condition else [1, 2] + [3])\n\nIn these cases 5 is not greater than 6 so the condition is false so [1, 2, 3] will be output.\nEdit: just reading your question it looks like you might want a single value then for true + [one or more values] or for false + [one or more values] so this might be better:\na = [1]\nprint(a + [3] if my_condition else a + [2, 3])\nprint([1] + [3] if my_condition else [1] + [2, 3])\n\n(Just using prints to demonstrate but you can assign this output to a variable using brackets)\nI hope this answers your question.\n" ]
[ 7, 4, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0056150856_list_python.txt
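A hedged sketch tying the splat pattern above back to the validator use case mentioned in the question's edit; the particular validators chosen here are an assumption, not from the original post:

from wtforms.validators import DataRequired, Length, Optional

strict = True
validators = [
    DataRequired() if strict else Optional(),
    *([Length(max=64)] if strict else []),  # element exists only when strict
]
print(validators)

The same *([x] if cond else []) idiom works for any object, since nothing about it is specific to numbers.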
Q: How do I increase the fontsize of the scale tick in matplotlib? I am trying to increase the fontsize of the scale tick in a matplotlib plot when using scientific notation for the tick labels. import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 100, 100) y = np.power(x, 3) plt.ticklabel_format( axis="y", style="sci", scilimits=(0,0), useMathText=True ) plt.yticks(fontsize=30) plt.xticks(fontsize=30) plt.plot(x, y) plt.show() The above is a minimal example. As you can see, the fontsize of the (x10^6) is tiny and I would like it to be the same size as the other ticks. A: You could try using plt.rc('font', size=30) to set the font size of everything on the plot? import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 100, 100) y = np.power(x, 3) plt.ticklabel_format( axis="y", style="sci", scilimits=(0,0), useMathText=True ) #set all font in plot to a given size plt.rc('font', size=30) plt.plot(x, y) plt.show() A: You need to change the size of the offset_text that belongs to the y axis: ax = plt.gca() txt = ax.yaxis.get_offset_text() txt.set_fontsize('large') Note this works also for colourbars.
How do I increase the fontsize of the scale tick in matplotlib?
I am trying to increase the fontsize of the scale tick in a matplotlib plot when using scientific notation for the tick labels. import matplotlib.pyplot as plt import numpy as np x = np.linspace(0, 100, 100) y = np.power(x, 3) plt.ticklabel_format( axis="y", style="sci", scilimits=(0,0), useMathText=True ) plt.yticks(fontsize=30) plt.xticks(fontsize=30) plt.plot(x, y) plt.show() The above is a minimal example. As you can see, the fontsize of the (x10^6) is tiny and I would like it to be the same size as the other ticks.
[ "You could try using plt.rc('font', size=30) to set the font size of everything on the plot?\nimport matplotlib.pyplot as plt \nimport numpy as np \n\nx = np.linspace(0, 100, 100)\ny = np.power(x, 3) \n\nplt.ticklabel_format(\n axis=\"y\",\n style=\"sci\",\n scilimits=(0,0),\n useMathText=True\n)\n\n#set all font in plot to a given size\nplt.rc('font', size=30)\n\nplt.plot(x, y)\nplt.show()\n\n\n", "You need to change the size of the offset_text that belongs to the y axis:\nax = plt.gca()\n\ntxt = ax.yaxis.get_offset_text()\ntxt.set_fontsize('large')\n\nNote this works also for colourbars.\n" ]
[ 1, 0 ]
[]
[]
[ "matplotlib", "plot", "python" ]
stackoverflow_0070880042_matplotlib_plot_python.txt
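The two answers combined into one runnable sketch of the question's own example: tick_params sizes the regular tick labels, while the offset text is the "x10^6" multiplier itself.

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 100, 100)
y = np.power(x, 3)

fig, ax = plt.subplots()
ax.plot(x, y)
ax.ticklabel_format(axis="y", style="sci", scilimits=(0, 0), useMathText=True)
ax.tick_params(labelsize=30)                  # x and y tick labels
ax.yaxis.get_offset_text().set_fontsize(30)   # the scale multiplier text
plt.show()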
Q: Different results in ndiffs pmdarima (Time Series) I am analyzing a time series. It clearly has a trend and a seasonal component. When I do the ADF root test I get a p-value of 0.98, meaning it's non-stationary. But when I do the ndiffs in pmdarima, Phillips-Perron and Dickey-Fuller return a 0, when it clearly has a trend. KPSS returns 1, which seems more accurate. The same happens when I do the nsdiffs, when it clearly is stationary. What am I doing wrong? Why do I get different results? Does this mean that it doesn't have a trend? from pmdarima.arima.utils import ndiffs difs_adf = ndiffs(train_, test = "adf") difs_kpss = ndiffs(train_, test = "kpss") difs_pp = ndiffs(train_, test = "pp") print(difs_adf , difs_kpss , difs_pp) On the other hand, what does the "regression" parameter in the adf test in stattools mean? Is "ct" used when the series has a trend? Is constant a synonym of heteroskedasticity? from statsmodels.tsa.stattools import adfuller, kpss adf = adfuller(train_, regression='ct', autolag='AIC') print(f'ADF Statistic: {adf[0]}') print(f'p-value: {adf[1]}') A: Not sure if it helps, but I found this example where the approach is to take the max result of these tests and continue with it: https://notebook.community/tgsmith61591/pyramid/examples/stock_market_example from pmdarima.arima import ndiffs kpss_diffs = ndiffs(y_train, alpha=0.05, test='kpss', max_d=6) adf_diffs = ndiffs(y_train, alpha=0.05, test='adf', max_d=6) n_diffs = max(adf_diffs, kpss_diffs) print(f"Estimated differencing term: {n_diffs}")
Different results in ndiffs pmdarima (Time Series)
I am analyzing a time series. It clearly has a trend and a seasonal component. When I do the ADF root test I get a p-value of 0.98, meaning it's non-stationary. But when I do the ndiffs in pmdarima, Phillips-Perron and Dickey-Fuller return a 0, when it clearly has a trend. KPSS returns 1, which seems more accurate. The same happens when I do the nsdiffs, when it clearly is stationary. What am I doing wrong? Why do I get different results? Does this mean that it doesn't have a trend? from pmdarima.arima.utils import ndiffs difs_adf = ndiffs(train_, test = "adf") difs_kpss = ndiffs(train_, test = "kpss") difs_pp = ndiffs(train_, test = "pp") print(difs_adf , difs_kpss , difs_pp) On the other hand, what does the "regression" parameter in the adf test in stattools mean? Is "ct" used when the series has a trend? Is constant a synonym of heteroskedasticity? from statsmodels.tsa.stattools import adfuller, kpss adf = adfuller(train_, regression='ct', autolag='AIC') print(f'ADF Statistic: {adf[0]}') print(f'p-value: {adf[1]}')
[ "Not sure if helps but I found this example where the approach is to take the max result of these tests and continue with it: https://notebook.community/tgsmith61591/pyramid/examples/stock_market_example\nfrom pmdarima.arima import ndiffs\n\nkpss_diffs = ndiffs(y_train, alpha=0.05, test='kpss', max_d=6)\nadf_diffs = ndiffs(y_train, alpha=0.05, test='adf', max_d=6)\nn_diffs = max(adf_diffs, kpss_diffs)\n\nprint(f\"Estimated differencing term: {n_diffs}\")\n\n" ]
[ 0 ]
[]
[]
[ "pmdarima", "python", "time_series" ]
stackoverflow_0063859508_pmdarima_python_time_series.txt
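A short sketch of the "regression" parameter asked about above: it selects the deterministic terms of the ADF test regression ('c' means a constant, not heteroskedasticity; 'ct' adds a linear trend), which is one reason different tests can disagree on a trending series. The synthetic series below is an assumption standing in for train_:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
train_ = np.cumsum(rng.normal(size=200)) + 0.5 * np.arange(200)  # drift + trend

for reg in ("c", "ct"):   # constant only vs constant + linear trend
    stat, pvalue = adfuller(train_, regression=reg, autolag="AIC")[:2]
    print(reg, round(stat, 3), round(pvalue, 3))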
Q: Python Sending email using SMTP - target machine actively refused connection I am trying to send email internally within work using the smtplib package in Python. I am running this script behind a VPN using the same proxy settings for R and Spyder. I use the following code which was adapted from mkyoung.com import smtplib to = 'foo@foo-corporate.com' corp_user = 'foo@foo-corporate.com' corp_pwd = 'password' smtpserver = smtplib.SMTP_SSL(local_hostname="smtp://foo-corporate.com", port = 25) smtpserver.connect() Once I try the last line smtpserver.connect(), I get the error message: [WinError 10061] No connection could be made because the target machine actively refused it This would suggest that the server is not accepting SMTP requests. However, if I execute the same script in R using the Blastula package, it works fine. Can anyone suggest how I can troubleshoot this? library(blastula) create_smtp_creds_key( id = "email_creds", user = "foo@foo-corporate.com", host = "smtp://foo-corporate.com", port = 25, use_ssl = TRUE ) email <- compose_email( body = md(" Hello, This is a test email ")) # Sending email by SMTP using a credentials file email %>% smtp_send( to = "foo@foo-corporate.com", from = "foo@foo-corporate.com", subject = "Testing the `smtp_send()` function", credentials = creds_key("email_creds") ) A: Seems like the context is not needed at all. This is an example using TLS. Give it a try; at least in my environment, this worked. import smtplib smtp_server = 'mail.example.com' port = 587 # For starttls sender_email = "from@mail.com" receiver_email = 'to@mail.com' password = r'password' message = f'''\ From: from-name <from@mail.com> To: to-name <to@mail.com> Subject: testmail testmail ''' try: server = smtplib.SMTP(smtp_server, port) server.ehlo() server.starttls() server.ehlo() server.login(sender_email, password) server.sendmail(sender_email, receiver_email, message) except Exception as e: # Print any error messages to stdout print(e) finally: server.quit() A: Hi @user99999 and @Ovski Thank you for investigating this for me. I managed to finally get it working with the below code import smtplib import ssl to = 'foo@foo-corporate.com' corp_user = 'foo@foo-corporate.com' corp_pwd = 'password' smtpserver = smtplib.SMTP_SSL("smtp://foo-corporate.com") smtp_server.ehlo() smtp_server.login(corp_user, corp_pwd) msg_to_send = ''' hello world! ''' smtp_server.sendmail(user,to,msg_to_send) smtp_server.quit()
Python Sending email using SMTP - target machine actively refused connection
I am trying to send email internally within work using the smtplib package in Python. I am running this script behind a VPN using the same proxy settings for R and Spyder. I use the following code which was adapted from mkyoung.com import smtplib to = 'foo@foo-corporate.com' corp_user = 'foo@foo-corporate.com' corp_pwd = 'password' smtpserver = smtplib.SMTP_SSL(local_hostname="smtp://foo-corporate.com", port = 25) smtpserver.connect() Once I try the last line smtpserver.connect(), I get the error message: [WinError 10061] No connection could be made because the target machine actively refused it This would suggest that the server is not accepting SMTP requests. However, if I execute the same script in R using the Blastula package, it works fine. Can anyone suggest how I can troubleshoot this? library(blastula) create_smtp_creds_key( id = "email_creds", user = "foo@foo-corporate.com", host = "smtp://foo-corporate.com", port = 25, use_ssl = TRUE ) email <- compose_email( body = md(" Hello, This is a test email ")) # Sending email by SMTP using a credentials file email %>% smtp_send( to = "foo@foo-corporate.com", from = "foo@foo-corporate.com", subject = "Testing the `smtp_send()` function", credentials = creds_key("email_creds") )
[ "Seems like the context is not needed at all.\nThis is an example using TLS. Give it a try, at least in my environment, this worked.\nimport smtplib\n\nsmtp_server = 'mail.example.com'\nport = 587 # For starttls\nsender_email = \"from@mail.com\"\nreceiver_email = 'to@mail.com'\npassword = r'password'\nmessage = f'''\\\nFrom: from-name <from@mail.com>\nTo: to-name <to@mail.com>\nSubject: testmail\n\ntestmail\n\n'''\ntry:\n server = smtplib.SMTP(smtp_server, port)\n server.ehlo()\n server.starttls()\n server.ehlo()\n server.login(sender_email, password)\n server.sendmail(sender_email, receiver_email, message) \nexcept Exception as e:\n # Print any error messages to stdout\n print(e)\nfinally:\n server.quit()\n\n", "Hi @user99999 and @Ovski\nThank you for investigating this for me.\nI managed to finally get it working with the below code\n\nimport smtplib\nimport ssl\n\nto = 'foo@foo-corporate.com'\ncorp_user = 'foo@foo-corporate.com'\ncorp_pwd = 'password'\nsmtpserver = smtplib.SMTP_SSL(\"smtp://foo-corporate.com\")\nsmtp_server.ehlo()\nsmtp_server.login(corp_user, corp_pwd)\nmsg_to_send = '''\nhello world!\n'''\n\nsmtp_server.sendmail(user,to,msg_to_send)\nsmtp_server.quit()\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "email", "python", "r" ]
stackoverflow_0074478118_email_python_r.txt
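The accepted snippet above mixes the names smtpserver and smtp_server and passes an smtp:// URL to SMTP_SSL, which expects a bare hostname. A cleaned-up sketch (the host, port and credentials are assumptions):

import smtplib

host = "mail.foo-corporate.com"   # hostname only, no "smtp://" prefix
user = "foo@foo-corporate.com"
password = "password"

server = smtplib.SMTP_SSL(host, 465)   # implicit TLS; ports 25/587 would use SMTP + starttls()
server.login(user, password)
server.sendmail(user, [user], "Subject: test\r\n\r\nhello world!")
server.quit()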
Q: Why cumulative sum has a drop I have a certain feature in my data which looks like this: I'm trying to introduce cumulative sum this column in the DataFrame as following (the feature is int64 type): df['Cumulative'] = df['feature'].cumsum() But for unknown reason I have a drop in this function which is weird since the min number in the original column is 0: Can someone explain why this happens and how can I fix that.Because I just want to sum the feature as it appears. Thank you in advance. A: Like in the comments suggested, sorting first and after that build the cumulative sum. Did you try it like this: df = df.sort_values(by='Date') #where "Date" is the column name of the values on the x-axis df['cumulative'] = df['feature'].cumsum()
Why cumulative sum has a drop
I have a certain feature in my data which looks like this: I'm trying to introduce cumulative sum this column in the DataFrame as following (the feature is int64 type): df['Cumulative'] = df['feature'].cumsum() But for unknown reason I have a drop in this function which is weird since the min number in the original column is 0: Can someone explain why this happens and how can I fix that.Because I just want to sum the feature as it appears. Thank you in advance.
[ "Like in the comments suggested, sorting first and after that build the cumulative sum.\nDid you try it like this:\ndf = df.sort_values(by='Date') #where \"Date\" is the column name of the values on the x-axis\ndf['cumulative'] = df['feature'].cumsum()\n\n" ]
[ 1 ]
[]
[]
[ "cumulative_sum", "dataframe", "pandas", "python" ]
stackoverflow_0074516792_cumulative_sum_dataframe_pandas_python.txt
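A minimal reproducible sketch of the effect described above, using made-up dates and values (the column names 'Date' and 'feature' are assumed from the answer): a cumulative sum follows row order, so plotting it against an unsorted date axis produces apparent drops, and sorting by date first removes them.

import pandas as pd

# Hypothetical unsorted data: rows arrive out of date order
df = pd.DataFrame({
    "Date": pd.to_datetime(["2022-01-03", "2022-01-01", "2022-01-02"]),
    "feature": [5, 2, 3],
})

# Unsorted: cumsum is 5, 7, 10 in row order, which "drops" when plotted by Date
df["cumulative_unsorted"] = df["feature"].cumsum()

# Sorted: cumsum is 2, 5, 10 and monotone along the date axis
df = df.sort_values(by="Date")
df["cumulative"] = df["feature"].cumsum()
print(df)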
Q: how to handle POST request with flask

At first I should say that I searched a lot and think that there's no problem with the code, but it doesn't work. I send a dict by the POST method on localhost through this code:

<body>
    <div class="middle">
        <form action="insert.py" method="post">
            <input type="number" class="num" name="temp" placeholder="temperature" />
            <input type="number" class="num" name="hum" placeholder="humidity"/>
            <input type="submit" class="btn" name="insert" value="send">
        </form>
    </div>
</body>

but I receive nothing in the insert.py file. Nothing is shown on the screen. Can anyone say what's wrong with the code? The insert.py file is as follows:

#!C:\Users\Edris\AppData\Local\Programs\Python\Python310\python.exe
print("Content-Type: text/html\n\n")
from flask import Flask, request
app = Flask(__name__)
@app.route('/POST', methods=['POST'])
def form():
    if request.method=="POST":
        temp = request.form.get('temp')
        print("insert OK")
        print(temp)

I searched through lots of code on the site and it seems that there should be no problem with it. The first code gets two numbers from the user, and in the second file (insert.py) the entered numbers should be shown on the screen, but they are not.

A: You are supposed to post the form to some endpoint with action. Please pay attention to where you are posting to. insert.py is not a valid endpoint in your application; on the other hand, /POST probably is.

<form action="/POST" method="post">
...
how to handle POST request with flask
At first I should say that I searched a lot and think that there's no problem with the code, but it doesn't work. I send a dict by the POST method on localhost through this code:

<body>
    <div class="middle">
        <form action="insert.py" method="post">
            <input type="number" class="num" name="temp" placeholder="temperature" />
            <input type="number" class="num" name="hum" placeholder="humidity"/>
            <input type="submit" class="btn" name="insert" value="send">
        </form>
    </div>
</body>

but I receive nothing in the insert.py file. Nothing is shown on the screen. Can anyone say what's wrong with the code? The insert.py file is as follows:

#!C:\Users\Edris\AppData\Local\Programs\Python\Python310\python.exe
print("Content-Type: text/html\n\n")
from flask import Flask, request
app = Flask(__name__)
@app.route('/POST', methods=['POST'])
def form():
    if request.method=="POST":
        temp = request.form.get('temp')
        print("insert OK")
        print(temp)

I searched through lots of code on the site and it seems that there should be no problem with it. The first code gets two numbers from the user, and in the second file (insert.py) the entered numbers should be shown on the screen, but they are not.
[ "You are supposed to post the form to some endpoint with action. Please pay attention to where you are posting to. insert.py is not a valid endpoint in your application, on the other hand /POST probably is.\n<form action=\"/POST\" method=\"post\">\n...\n\n" ]
[ 0 ]
[]
[]
[ "flask", "http", "http_post", "python" ]
stackoverflow_0074510740_flask_http_http_post_python.txt
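For context, here is a minimal self-contained sketch of the pattern the answer describes: one Flask route that both serves the form and handles its POST. The route path and field names follow the question; the inline template and return string are illustrative, not part of the original post.

from flask import Flask, request

app = Flask(__name__)

FORM = """
<form action="/POST" method="post">
  <input type="number" name="temp" placeholder="temperature" />
  <input type="number" name="hum" placeholder="humidity" />
  <input type="submit" value="send">
</form>
"""

@app.route("/POST", methods=["GET", "POST"])
def form():
    if request.method == "POST":
        temp = request.form.get("temp")
        hum = request.form.get("hum")
        # Returned string is rendered in the browser instead of print()
        return f"insert OK: temp={temp}, hum={hum}"
    return FORM  # GET: render the form

if __name__ == "__main__":
    app.run(debug=True)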
Q: KafkaSource connection to Confluent Kafka (with SSL & SchemaRegistry)

I tried to connect to Confluent Kafka with KafkaSource (from MLRun), and I historically used this simple code:

# code with usage 'kafka-python>=2.0.2'
from kafka import KafkaProducer, KafkaConsumer

consumer = KafkaConsumer(
    'ak47-data.v1',
    bootstrap_servers=[
        'cpkafka01.eu.prod:9092',
        'cpkafka02.eu.prod:9092',
        'cpkafka03.eu.prod:9092'
    ],
    client_id='test',
    auto_offset_reset='earliest',
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_password="***********",
    sasl_plain_username="***********",
    security_protocol='SASL_SSL',
    ssl_cafile="/v3io/bigdata/rootca.crt",
    ssl_certfile=None,
    ssl_keyfile=None)

# print first topic
for message in consumer:
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition, message.offset, message.key, message.value))
    break

How do I rewrite this code to use KafkaSource?

A: Let me share function code for KafkaSource (for MLRun>=1.1.0). You can specify a certificate (see rootca.crt) and a list of Kafka topics as well.

from mlrun.datastore.sources import KafkaSource

# certificate
with open('/v3io/bigdata/rootca.crt') as x:
    caCert = x.read()

# definition of KafkaSource
kafka_source = KafkaSource(
    brokers=['cpkafka01.eu.prod:9092',
             'cpkafka02.eu.prod:9092',
             'cpkafka03.eu.prod:9092'],
    topics=["ak47-data.v1"],
    initial_offset="earliest",
    group="test",
    attributes={"sasl": {
                    "enable": True,
                    "password": "******",
                    "user": "*******",
                    "handshake": True,
                    "mechanism": "SCRAM-SHA-256"},
                "tls": {
                    "enable": True,
                    "insecureSkipVerify": False
                },
                "caCert": caCert}
)
KafkaSource connection to Confluent Kafka (with SSL & SchemaRegistry)
I tried to connect to Confluent Kafka with KafkaSource (from MLRun), and I historically used this simple code:

# code with usage 'kafka-python>=2.0.2'
from kafka import KafkaProducer, KafkaConsumer

consumer = KafkaConsumer(
    'ak47-data.v1',
    bootstrap_servers=[
        'cpkafka01.eu.prod:9092',
        'cpkafka02.eu.prod:9092',
        'cpkafka03.eu.prod:9092'
    ],
    client_id='test',
    auto_offset_reset='earliest',
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_password="***********",
    sasl_plain_username="***********",
    security_protocol='SASL_SSL',
    ssl_cafile="/v3io/bigdata/rootca.crt",
    ssl_certfile=None,
    ssl_keyfile=None)

# print first topic
for message in consumer:
    print("%s:%d:%d: key=%s value=%s" % (message.topic, message.partition, message.offset, message.key, message.value))
    break

How do I rewrite this code to use KafkaSource?
[ "Let me share function code for KafkaSource (for MLRun>=1.1.0). You can specific certificate (see rootca.crt) and list of kafka topics also.\nfrom mlrun.datastore.sources import KafkaSource\n\n# certificate\nwith open('/v3io/bigdata/rootca.crt') as x: \n caCert = x.read()\n\n# definition of KafkaSource\nkafka_source = KafkaSource(\n brokers=['cpkafka01.eu.prod:9092', \n 'cpkafka02.eu.prod:9092', \n 'cpkafka03.eu.prod:9092'],\n topics=[\"ak47-data.v1\"],\n initial_offset=\"earliest\",\n group=\"test\",\n attributes={\"sasl\" : {\n \"enable\": True,\n \"password\" : \"******\",\n \"user\" : \"*******\",\n \"handshake\" : True,\n \"mechanism\" : \"SCRAM-SHA-256\"},\n \"tls\" : {\n \"enable\": True,\n \"insecureSkipVerify\" : False\n }, \n \"caCert\" : caCert}\n)\n\n" ]
[ 1 ]
[]
[]
[ "confluent_kafka_python", "mlrun", "python" ]
stackoverflow_0074511987_confluent_kafka_python_mlrun_python.txt
Q: Repost deleted images to discord using a bot

I have written a discord.py bot on repl.it that lets one quote text. I will spare you the code for that, since it does not help in answering the question, but basically, it splits a sent message on a parameter and puts these splits into an embed. To make the channel look clean, the bot then deletes the original message. As an update for my bot I want to implement quoting images, as they don't require writing that much. This is where I ran into a problem:

Using the prefix 'dq' I can attach an image and write a context for the image, separated with a space. It outputs the following: It might seem fine now, but after an hour or two, the image is either gone or loading forever: (This is a previous example, the text doesn't match) I know that this is because the original message is being deleted by the bot. The bot grabs the attachment URL and reposts it:

# checking for the prefix
if message.content.startswith("dq "):
    # delete original message
    await message.delete()
    # split the message with a parameter
    quot = message.content.split("//")
    # the sent message gets divided by the length of quot, which equals the amount of arguments given
    if len(quot) == 1:
        # 1st argument taken from message
        context = quot[0]
        # attachment taken from message
        urll = message.attachments[0].url
        # message context gets split to delete the prefix 'dq'
        text = context.split()
        text.pop(0)
        # text being put together with discord text style
        context = "*"+" ".join(text)+"*\n"
        # creates an embed with the content above
        embeda = discord.Embed(title=context, color=0xffffff)
        embeda.set_image(url=urll)
        embeda.set_footer(text="("+message.author.name+")")

What I am getting at in this horribly long description: I need a way to delete the images from the original message, but I need them to stay in the message the bot sent. It would be nice to get some quick advice on this problem!

A: This happens because you are deleting the message the image is sent in. Discord sees the relevant message is gone and removes the file from its servers to save space.

You need to save the image either locally or in memory, then upload it to either Discord's CDN or a third-party CDN and refer to it in your embed.
Repost deleted images to discord using a bot
I have written a discord.py bot on repl.it that lets one quote text. I will spare you the code for that, since it does not help in answering the question, but basically, it splits a sent message on a parameter and puts these splits into an embed. To make the channel look clean, the bot then deletes the original message. As an update for my bot I want to implement quoting images, as they don't require writing that much. This is where I ran into a problem:

Using the prefix 'dq' I can attach an image and write a context for the image, separated with a space. It outputs the following: It might seem fine now, but after an hour or two, the image is either gone or loading forever: (This is a previous example, the text doesn't match) I know that this is because the original message is being deleted by the bot. The bot grabs the attachment URL and reposts it:

# checking for the prefix
if message.content.startswith("dq "):
    # delete original message
    await message.delete()
    # split the message with a parameter
    quot = message.content.split("//")
    # the sent message gets divided by the length of quot, which equals the amount of arguments given
    if len(quot) == 1:
        # 1st argument taken from message
        context = quot[0]
        # attachment taken from message
        urll = message.attachments[0].url
        # message context gets split to delete the prefix 'dq'
        text = context.split()
        text.pop(0)
        # text being put together with discord text style
        context = "*"+" ".join(text)+"*\n"
        # creates an embed with the content above
        embeda = discord.Embed(title=context, color=0xffffff)
        embeda.set_image(url=urll)
        embeda.set_footer(text="("+message.author.name+")")

What I am getting at in this horribly long description: I need a way to delete the images from the original message, but I need them to stay in the message the bot sent. It would be nice to get some quick advice on this problem!
[ "This happens because you are deleting the message the image is sent in. Discord sees the relevant message is gone and removes the file from its servers to save space.\nYou need to save the image either locally or in memory then upload it to either discord's CDN or a third party CDN and refer to it in your embed.\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "image", "python", "repl.it" ]
stackoverflow_0074508070_discord_discord.py_image_python_repl.it.txt
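A short sketch of the "save in memory, then re-upload" approach the answer suggests, using discord.py's Attachment.read(), discord.File, and the attachment:// URL scheme for embeds. The variable names (message, context) mirror the question's handler; the ordering and filename handling are illustrative, not the original author's code.

import io
import discord

# inside the on_message handler, before deleting the original message:
attachment = message.attachments[0]
data = await attachment.read()          # download the image bytes into memory
file = discord.File(io.BytesIO(data), filename=attachment.filename)

embed = discord.Embed(title=context, color=0xffffff)
# attachment:// points the embed at the file uploaded with this bot message,
# so the image survives deletion of the original user message
embed.set_image(url=f"attachment://{attachment.filename}")
embed.set_footer(text="(" + message.author.name + ")")

await message.channel.send(file=file, embed=embed)
await message.delete()                  # safe to delete the original now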
Q: NameError: name 'sosete' is not defined

I try to get the len of all products displayed on this site https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html using this code:

import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

options = Options()
options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
driver.get("https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html")
cookies_bttn = driver.find_element(By.ID, "onetrust-accept-btn-handler")
cookies_bttn.click()
driver.implicitly_wait(10)
country_save = driver.find_element(By.CSS_SELECTOR, "#geoblocking > div > div > div.select-country-container > button.button.is-sm.confirm")
country_save.click()
hoover = ActionChains(driver)
product = driver.find_elements(By.CLASS_NAME, "grid-item normal")
z = 0
for sosete in product:
    sth = sosete.find_element(By.XPATH, '//*[@id="main-content"]/div/div/div[2]/section[1]/div/ul/li["+str(z+1)+"]/div')
    z = z+1
print(len(sth))

I don't know why it is saying sosete is not defined while I clearly defined it in the for loop. Any help, please? Also, if I try to get the number of all products by using a class name, only 20 products out of 31 are printed, like so:

whole_product = driver.find_elements(By.CLASS_NAME, "grid-card-link")
print(len(whole_product))
i = 0
product = driver.find_element(By.CLASS_NAME, "product-image")
hoover.move_to_element(product)
sosete = driver.find_elements(By.CLASS_NAME, "quick-purchase")
for purchase_bttn in sosete:
    purchase_bttn.click()
    time.sleep(1)
    i = i + 1

A: The page is being loaded dynamically as you scroll. Here is a way to (correctly) define the product range, scroll the page, wait for the products to load, and print them out:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
import time as t

chrome_options = Options()
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument('disable-notifications')
chrome_options.add_argument("window-size=1280,720")

webdriver_service = Service("chromedriver/chromedriver") ## path to where you saved chromedriver binary
driver = webdriver.Chrome(service=webdriver_service, options=chrome_options)
wait = WebDriverWait(driver, 25)
url = 'https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html'
driver.get(url)
footer = wait.until(EC.element_to_be_clickable((By.XPATH, '//div[@class="footer-social"]')))
pbody = wait.until(EC.presence_of_element_located((By.TAG_NAME, 'body')))
for x in range(5):
    pbody.send_keys(Keys.PAGE_DOWN)
    print('scrolled')
    t.sleep(5)
sosetute = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class="category-product-card"]')))

print('How many socks?', len(sosetute))
for ciorapel in sosetute:
    print(ciorapel.text)

Result in terminal:

scrolled
scrolled
scrolled
scrolled
scrolled
How many socks? 31

followed by the product names and prices: Set 3 perechi de șosete fluture 55,90 RON Șosete lungi cu dungi 39,90 RON Set 3 perechi șosete ciuperci 39,90 RON Set 3 perechi șosete Snoopy 39,90 RON Set 3 perechi șosete Fetițele Powerpuff 55,90 RON Set 3 perechi șosete Lizzie 39,90 RON Set 3 perechi șosete până la gleznă broderie 39,90 RON Jambiere denim 109,90 RON Jambiere blană 129,90 RON Jambiere tricot 55,90 RON + 1 Culoare Jambiere tricot 55,90 RON + 1 Culoare Set 3 perechi șosete print 39,90 RON Set 4 perechi șosete neon 39,90 RON Set 4 perechi șosete simple 39,90 RON Set de 3 perechi de șosete cu text 39,90 RON Set de 2 perechi de șosete texturate 55,90 RON Șosete print Art Series 69,90 RON + 1 Culoare Șosete print Art Series 69,90 RON + 1 Culoare Set 3 perechi șosete print 39,90 RON Set 3 perechi șosete curly broderie 39,90 RON Set de 3 perechi de șosete racing 39,90 RON Set 3 perechi șosete print 39,90 RON Set 3 perechi șosete print 39,90 RON Set de 3 perechi de șosete racing 39,90 RON Set 3 perechi șosete cu imprimeu varsity 39,90 RON Set 3 perechi șosete lungi varsity campus 39,90 RON Șosete text 39,90 RON Set 3 perechi șosete colorate 39,90 RON Set 3 perechi șosete print 55,90 RON Șosete broderie Sponge Bob 39,90 RON Set 3 perechi șosete scurte broderie 39,90 RON

Selenium documentation is quite good.
NameError: name 'sosete' is not defined
I try to get the len of all products displayed on this site https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html using this code:

import time
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

options = Options()
options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
driver.get("https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html")
cookies_bttn = driver.find_element(By.ID, "onetrust-accept-btn-handler")
cookies_bttn.click()
driver.implicitly_wait(10)
country_save = driver.find_element(By.CSS_SELECTOR, "#geoblocking > div > div > div.select-country-container > button.button.is-sm.confirm")
country_save.click()
hoover = ActionChains(driver)
product = driver.find_elements(By.CLASS_NAME, "grid-item normal")
z = 0
for sosete in product:
    sth = sosete.find_element(By.XPATH, '//*[@id="main-content"]/div/div/div[2]/section[1]/div/ul/li["+str(z+1)+"]/div')
    z = z+1
print(len(sth))

I don't know why it is saying sosete is not defined while I clearly defined it in the for loop. Any help, please? Also, if I try to get the number of all products by using a class name, only 20 products out of 31 are printed, like so:

whole_product = driver.find_elements(By.CLASS_NAME, "grid-card-link")
print(len(whole_product))
i = 0
product = driver.find_element(By.CLASS_NAME, "product-image")
hoover.move_to_element(product)
sosete = driver.find_elements(By.CLASS_NAME, "quick-purchase")
for purchase_bttn in sosete:
    purchase_bttn.click()
    time.sleep(1)
    i = i + 1
[ "Page is being loaded dynamically, as you scroll. here is way to (correctly) define the product range, scroll the page, wait for them to load, and print them out:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.keys import Keys\nimport time as t\n\nchrome_options = Options()\nchrome_options.add_argument(\"--no-sandbox\")\nchrome_options.add_argument('disable-notifications')\nchrome_options.add_argument(\"window-size=1280,720\")\n\nwebdriver_service = Service(\"chromedriver/chromedriver\") ## path to where you saved chromedriver binary\ndriver = webdriver.Chrome(service=webdriver_service, options=chrome_options)\nwait = WebDriverWait(driver, 25)\nurl = 'https://www.bershka.com/ro/femeie/accesorii/%C8%99osete-c1010194004.html'\ndriver.get(url)\nfooter = wait.until(EC.element_to_be_clickable((By.XPATH, '//div[@class=\"footer-social\"]')))\npbody = wait.until(EC.presence_of_element_located((By.TAG_NAME, 'body')))\nfor x in range(5):\n pbody.send_keys(Keys.PAGE_DOWN)\n print('scrolled')\n t.sleep(5)\nsosetute = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//div[@class=\"category-product-card\"]')))\n\nprint('How many socks?', len(sosetute))\nfor ciorapel in sosetute:\n print(ciorapel.text)\n\nResult in terminal:\nscrolled\nscrolled\nscrolled\nscrolled\nscrolled\nHow many socks? 31\nSet 3 perechi de șosete fluture\n55,90 RON\nȘosete lungi cu dungi\n39,90 RON\nSet 3 perechi șosete ciuperci\n39,90 RON\nSet 3 perechi șosete Snoopy\n39,90 RON\nSet 3 perechi șosete Fetițele Powerpuff\n55,90 RON\nSet 3 perechi șosete Lizzie\n39,90 RON\nSet 3 perechi șosete până la gleznă broderie\n39,90 RON\nJambiere denim\n109,90 RON\nJambiere blană\n129,90 RON\nJambiere tricot\n55,90 RON\n+ 1 Culoare\nJambiere tricot\n55,90 RON\n+ 1 Culoare\nSet 3 perechi șosete print\n39,90 RON\nSet 4 perechi șosete neon\n39,90 RON\nSet 4 perechi șosete simple\n39,90 RON\nSet de 3 perechi de șosete cu text\n39,90 RON\nSet de 2 perechi de șosete texturate\n55,90 RON\nȘosete print Art Series\n69,90 RON\n+ 1 Culoare\nȘosete print Art Series\n69,90 RON\n+ 1 Culoare\nSet 3 perechi șosete print\n39,90 RON\nSet 3 perechi șosete curly broderie\n39,90 RON\nSet de 3 perechi de șosete racing\n39,90 RON\nSet 3 perechi șosete print\n39,90 RON\nSet 3 perechi șosete print\n39,90 RON\nSet de 3 perechi de șosete racing\n39,90 RON\nSet 3 perechi șosete cu imprimeu varsity\n39,90 RON\nSet 3 perechi șosete lungi varsity campus\n39,90 RON\nȘosete text\n39,90 RON\nSet 3 perechi șosete colorate\n39,90 RON\nSet 3 perechi șosete print\n55,90 RON\nȘosete broderie Sponge Bob\n39,90 RON\nSet 3 perechi șosete scurte broderie\n39,90 RON\n\nSelenium documentation is quite good.\n" ]
[ 1 ]
[]
[]
[ "python", "selenium_chromedriver", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074517086_python_selenium_chromedriver_selenium_webdriver_web_scraping.txt
Q: ModuleNotFoundError: No module named '_ctypes' Mac M1

While installing some libraries you may find the issue:

ModuleNotFoundError: No module named '_ctypes'

A: Short version: Try installing python 3.7.13 with pyenv: pyenv install 3.7.13, and if that does not work, try python 3.7.12 (pyenv install 3.7.12). The pyenv release 2.2.3 addresses the compilation problems for 3.6.15/3.7.12 on M1 macs, specifically for ctypes.

Long version: The underlying cause for the _ctypes error seems to be that libffi cannot be found during the compilation, and is therefore (silently) skipped during the Python installation. There is a comprehensive overview for installing different versions using pyenv at this page; some specific versions require homebrew patches. Here is an overview of those patches. However, I would try installing them without the patches first, as the pyenv team has fixed various compilation problems since that post was written.

The general syntax for installing with a patch is:

pyenv install --patch X.X.X <<(curl -sSL link_to_patch)

where X.X.X is the version you want to install.

Another solution is to use an x86 version of homebrew. Officially, Python 3.7 and lower are not supported on Apple Silicon.

A: Make sure you are running python 3.8.10 +

A: Updating to Python 3.7.13 (or higher versions) should solve the problem.
ModuleNotFoundError: No module named '_ctypes' Mac M1
While installing some libraries you may find the issue ModuleNotFoundError: No module named '_ctypes'
[ "Short version:\nTry installing python 3.7.13 with pyenv:\npyenv install 3.7.13, and if that does not work, try python 3.7.12 (pyenv install 3.7.12).\nThe pyenv release 2.2.3 addresses the compilation problems for 3.6.15/3.7.12 on M1 macs, specifically for ctypes.\nLong version:\nThe underlying cause for the _ctypes error seems to be that libffi cannot be found during the compilation, and is therefore (silently) skipped during the Python installation.\nThere is a comprehensive overview for installing different versions using pyenv at this page, some specific versions require homebrew patches. Here is an overview of those patches. However, I would try installing them without the patches first, as the pyenv team has fixed various compilation problems since that post was written.\nThe general syntax for installing with a patch is:\npyenv install --patch X.X.X <<(curl -sSL link_to_patch) where X.X.X is the version you want to install.\nAnother solution is to use an x86 version of homebrew.\nOfficially, Python 3.7 and lower are not supported on Apple Silicon.\n", "Make sure you are running python 3.8.10 +\n", "Updating to Python 3.7.13 (or higher versions) should solve the problem.\n" ]
[ 17, 11, 0 ]
[]
[]
[ "macos", "python" ]
stackoverflow_0069496504_macos_python.txt
Q: Django `bulk_create` with related objects

I have a Django system that runs billing for thousands of customers on a regular basis. Here are my models:

class Invoice(models.Model):
    balance = models.DecimalField(
        max_digits=6,
        decimal_places=2,
    )

class Transaction(models.Model):
    amount = models.DecimalField(
        max_digits=6,
        decimal_places=2,
    )
    invoice = models.ForeignKey(
        Invoice,
        on_delete=models.CASCADE,
        related_name='invoices',
        null=False
    )

When billing is run, thousands of invoices with tens of transactions each are created using several nested for loops, which triggers an insert for each created record. I could run bulk_create() on the transactions for each individual invoice, but this still results in thousands of calls to bulk_create(). How would one bulk-create thousands of related models so that the relationship is maintained and the database is used in the most efficient way possible?

Notes:

I'm looking for a native Django solution that would work on all databases (with the possible exception of SQLite).
My system runs billing in a celery task to decouple long-running code from active requests, but I am still concerned with how long it takes to complete a billing cycle.
The solution should assume that other requests or running tasks are also reading from and writing to the tables in question.

A: You could bulk_create all the Invoice objects, refresh them from the db so that they all have ids, create the Transaction objects for all the invoices, and then also save them with bulk_create. All of this can be done inside a single transaction.atomic context.
Also, specifically for django 1.10 and postgres, look at this answer.

A: You can do it with two bulk create queries, with the following method.

new_invoices = []
new_transactions = []
for loop:
    invoice = Invoice(params)
    new_invoices.append(invoice)

    for loop:
        transaction = Transaction(params)
        transaction.invoice = invoice
        new_transactions.append(transaction)

Invoice.objects.bulk_create(new_invoices)

for each in new_transactions:
    each.invoice_id = each.invoice.id

Transaction.objects.bulk_create(new_transactions)

A: Another way for this purpose can be like the below code snippet:

from django.utils import timezone
from django.db import transaction

new_invoices = []
new_transactions = []
for sth in sth_else:
    ...
    invoice = Invoice(params)
    new_invoices.append(invoice)

for sth in sth_else:
    ...
    new_transactions.append(transaction)

with transaction.atomic():
    other_invoice_ids = Invoice.objects.values_list('id', flat=True)
    now = timezone.now()
    Invoice.objects.bulk_create(new_invoices)

    new_invoices = Invoice.objects.exclude(id__in=other_invoice_ids).values_list('id', flat=True)
    for invoice_id in new_invoices:
        transaction = Transaction(params, invoice_id=invoice_id)
        new_transactions.append(transaction)

    Transaction.objects.bulk_create(new_transactions)

I write this answer based on this post on another question in the community.
Django `bulk_create` with related objects
I have a Django system that runs billing for thousands of customers on a regular basis. Here are my models:

class Invoice(models.Model):
    balance = models.DecimalField(
        max_digits=6,
        decimal_places=2,
    )

class Transaction(models.Model):
    amount = models.DecimalField(
        max_digits=6,
        decimal_places=2,
    )
    invoice = models.ForeignKey(
        Invoice,
        on_delete=models.CASCADE,
        related_name='invoices',
        null=False
    )

When billing is run, thousands of invoices with tens of transactions each are created using several nested for loops, which triggers an insert for each created record. I could run bulk_create() on the transactions for each individual invoice, but this still results in thousands of calls to bulk_create(). How would one bulk-create thousands of related models so that the relationship is maintained and the database is used in the most efficient way possible?

Notes:

I'm looking for a native Django solution that would work on all databases (with the possible exception of SQLite).
My system runs billing in a celery task to decouple long-running code from active requests, but I am still concerned with how long it takes to complete a billing cycle.
The solution should assume that other requests or running tasks are also reading from and writing to the tables in question.
[ "You could bulk_create all the Invoice objects, refresh them from the db, so that they all have ids, create the Transaction objects for all the invoices and then also save them with bulk_create. All of this can be done inside a single transaction.atomic context. \nAlso, specifically for django 1.10 and postrgres, look at this answer.\n", "You can do it with two bulk create queries, with following method.\nnew_invoices = []\nnew_transactions = []\nfor loop:\n invoice = Invoice(params)\n new_invoices.append(invoice)\n\n for loop: \n transaction = Transaction(params)\n transaction.invoice = invoice\n new_transactions.append(transaction)\n\nInvoice.objects.bulk_create(new_invoices)\n\nfor each in new_transactions:\n each.invoice_id = each.invoice.id\n\nTransaction.objects.bulk_create(new_transactions) \n\n", "Another way for this purpose can be like the below code snippet:\nfrom django.utils import timezone\nfrom django.db import transaction\n\nnew_invoices = []\nnew_transactions = []\nfor sth in sth_else:\n ...\n invoice = Invoice(params)\n new_invoices.append(invoice)\n\nfor sth in sth_else:\n ...\n new_transactions.append(transaction)\n\nwith transaction.atomic():\n other_invoice_ids = Invoice.objects.values_list('id', flat=True)\n now = timezone.now()\n Invoice.objects.bulk_create(new_invoices)\n\n new_invoices = Invoice.objects.exclude(id__in=other_invoice_ids).values_list('id', flat=True)\n for invoice_id in new_invoices:\n transaction = Transaction(params, invoice_id=invoice_id)\n new_transactions.append(transaction)\n\n Transaction.objects.bulk_create(new_transactions)\n\nI write this answer based on this post on another question in the community.\n" ]
[ 12, 2, 0 ]
[]
[]
[ "bulkinsert", "django", "django_models", "python" ]
stackoverflow_0040789962_bulkinsert_django_django_models_python.txt
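A compact sketch of the first answer's two-pass approach, assuming PostgreSQL, where (per the Django docs) bulk_create populates the primary keys of the objects it returns. The field values and the customers iterable are placeholders, not part of the original question.

from decimal import Decimal
from django.db import transaction

def run_billing(customers):
    with transaction.atomic():
        # Pass 1: one INSERT for all invoices; on PostgreSQL the
        # returned objects come back with their primary keys set.
        invoices = Invoice.objects.bulk_create(
            [Invoice(balance=Decimal("0.00")) for _ in customers]
        )

        # Pass 2: build every transaction in memory, pointing at the
        # now-known invoice ids, then insert them all in one query.
        txns = [
            Transaction(amount=Decimal("9.99"), invoice_id=inv.pk)
            for inv in invoices
        ]
        Transaction.objects.bulk_create(txns)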
Q: Python - ModuleNotFoundError- No module named 'XXX'

I have the following folder hierarchy:

->Project Folder
    -Main.py
    ->modules Folder
        ->PowerSupply Folder
            - PowerSupply.py
            - SerialPort.py

In Main.py I am importing PowerSupply.py with the following command:

from modules.PowerSupply.PowerSupply import *

Then inside of PowerSupply.py, I am importing SerialPort.py with the following command:

from SerialPort import SerialPort

So, when I try to run Main.py, PowerSupply.py throws an error on the line from SerialPort import SerialPort. The error is "Exception has occurred: ModuleNotFoundError No module named 'SerialPort'"

When I modify PowerSupply.py to use from modules.PowerSupply.SerialPort import SerialPort, it is not throwing an error, but that doesn't seem like a good way to me. Is there any way to solve this error?

A: Well described here: https://docs.python.org/3/reference/import.html
When importing modules, you need to stick to the hierarchy. If the modules folder is part of the hierarchy, you cannot skip it.
You could solve it by adding the PowerSupply folder to the Python search path.

A: Inside PowerSupply.py try an explicit relative import:

from .SerialPort import SerialPort

"When I modify PowerSupply.py as from modules.PowerSupply.SerialPort import SerialPort, it is not throwing an error. But it doesn't seem like a good way to me. Is there any way to solve this error?"
Note that according to PEP 8, absolute imports (which your solution is) are actually preferred: https://peps.python.org/pep-0008/#imports
Python - ModuleNotFoundError- No module named 'XXX'
I have the following folder hierarchy:

->Project Folder
    -Main.py
    ->modules Folder
        ->PowerSupply Folder
            - PowerSupply.py
            - SerialPort.py

In Main.py I am importing PowerSupply.py with the following command:

from modules.PowerSupply.PowerSupply import *

Then inside of PowerSupply.py, I am importing SerialPort.py with the following command:

from SerialPort import SerialPort

So, when I try to run Main.py, PowerSupply.py throws an error on the line from SerialPort import SerialPort. The error is "Exception has occurred: ModuleNotFoundError No module named 'SerialPort'"

When I modify PowerSupply.py to use from modules.PowerSupply.SerialPort import SerialPort, it is not throwing an error, but that doesn't seem like a good way to me. Is there any way to solve this error?
[ "Well described here: https://docs.python.org/3/reference/import.html\nWhen importing modules, you need to stick to hierarchy.\nIf modules folder is part of hierarchy, you cannot skip it.\nYou could solve it with adding PowerSupply folder to Python search path.\n", "Inside Powersupply.py try explicit relative import:\nfrom .SerialPort import Serialport\n\n\"When I modify the PowerSupply.py as\nfrom modules.PowerSupply.SerialPort import SerialPort, it is not throwing error. But it don`t seem like a good way to me. Is there any way to solve this error\"\nNote that according to PEP 8 absolute imports (which your solution is) are actually preferred: https://peps.python.org/pep-0008/#imports\n" ]
[ 1, 1 ]
[]
[]
[ "python", "visual_studio_code" ]
stackoverflow_0074517208_python_visual_studio_code.txt
Q: Python: transforming complex data for a Sankey plot

I am trying to produce a Sankey plot of the events that take place one week before and one week after an index event of a patient. Imagine I have the following data frame:

df =
patient_id  start_date  end_date    Value  Index_event_date  Value_Index_event
1           28-12-1999  02-01-2000  A      01-01-2000        X
2           28-12-2000  02-12-2001  B      01-01-2001        X
3           28-12-2001  02-01-2002  A      01-01-2002        X

I would like to group the above data frame into "codes". For example, one week before the index event is code1, the week of the index event is code2, and the next week after the index event is code3. The resulting data frame would be:

patient_id  code1  code2  code3
1           A      X      A
2           B      X      Na
3           A      X      A

In the above example all patients except for patient 2 have observations in both weeks (one before and one after the index event). In the case of patient 2, it has only an observation in the week before the index event, and that is why for code3 (the week after the index event) we see an Na.

A: With the dataframe you provided:

import pandas as pd

df = pd.DataFrame(
    {
        "patient_id": [1, 2, 3],
        "start_date": ["28-12-1999", "28-12-2000", "28-12-2001"],
        "end_date": ["02-01-2000", "02-12-2001", "02-01-2002"],
        "Value": ["A", "B", "A"],
        "Index_event_date": ["01-01-2000", "01-01-2001", "01-01-2002"],
        "Value_Index_event": ["X", "X", "X"],
    }
)

Here is one way to do it with Pandas to_datetime and DateOffset (assuming that, by week, you mean 7 days before/after):

# Setup
for col in ["start_date", "end_date", "Index_event_date"]:
    df[col] = pd.to_datetime(df[col], format="%d-%m-%Y")

# Add new columns
df["code1"] = df.apply(
    lambda x: x["Value"]
    if x["start_date"] >= (x["Index_event_date"] - pd.DateOffset(days=7))
    else None,
    axis=1,
)
df["code2"] = df["Value_Index_event"]
df["code3"] = df.apply(
    lambda x: x["Value"]
    if x["end_date"] <= (x["Index_event_date"] + pd.DateOffset(days=7))
    else None,
    axis=1,
)

# Cleanup
df = df.drop(
    columns=["start_date", "end_date", "Value", "Index_event_date", "Value_Index_event"]
)

Then:

print(df)
# Output
   patient_id code1 code2 code3
0           1     A     X     A
1           2     B     X  None
2           3     A     X     A
Python: transforming complex data for a Sankey plot
I am trying to produce a Sankey plot of the events that take place one week before and one week after an index event of a patient. Imagine I have the following data frame:

df =
patient_id  start_date  end_date    Value  Index_event_date  Value_Index_event
1           28-12-1999  02-01-2000  A      01-01-2000        X
2           28-12-2000  02-12-2001  B      01-01-2001        X
3           28-12-2001  02-01-2002  A      01-01-2002        X

I would like to group the above data frame into "codes". For example, one week before the index event is code1, the week of the index event is code2, and the next week after the index event is code3. The resulting data frame would be:

patient_id  code1  code2  code3
1           A      X      A
2           B      X      Na
3           A      X      A

In the above example all patients except for patient 2 have observations in both weeks (one before and one after the index event). In the case of patient 2, it has only an observation in the week before the index event, and that is why for code3 (the week after the index event) we see an Na.
[ "With the dataframe you provided:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"patient_id\": [1, 2, 3],\n \"start_date\": [\"28-12-1999\", \"28-12-2000\", \"28-12-2001\"],\n \"end_date\": [\"02-01-2000\", \"02-12-2001\", \"02-01-2002\"],\n \"Value\": [\"A\", \"B\", \"A\"],\n \"Index_event_date\": [\"01-01-2000\", \"01-01-2001\", \"01-01-2002\"],\n \"Value_Index_event\": [\"X\", \"X\", \"X\"],\n }\n)\n\nHere is one way to do it with Pandas to_datetime and DateOffset (assuming that, by week, you mean 7 days before/after):\n# Setup\nfor col in [\"start_date\", \"end_date\", \"Index_event_date\"]:\n df[col] = pd.to_datetime(df[col], format=\"%d-%m-%Y\")\n\n# Add new columns\ndf[\"code1\"] = df.apply(\n lambda x: x[\"Value\"]\n if x[\"start_date\"] >= (x[\"Index_event_date\"] - pd.DateOffset(days=7))\n else None,\n axis=1,\n)\ndf[\"code2\"] = df[\"Value_Index_event\"]\ndf[\"code3\"] = df.apply(\n lambda x: x[\"Value\"]\n if x[\"end_date\"] <= (x[\"Index_event_date\"] + pd.DateOffset(days=7))\n else None,\n axis=1,\n)\n\n# Cleanup\ndf = df.drop(\n columns=[\"start_date\", \"end_date\", \"Value\", \"Index_event_date\", \"Value_Index_event\"]\n)\n\nThen:\nprint(df)\n# Output\n patient_id code1 code2 code3\n0 1 A X A\n1 2 B X None\n2 3 A X A\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python", "sankey_diagram" ]
stackoverflow_0074419763_pandas_python_sankey_diagram.txt
Q: Null Space of Large Sparse Matrix

I am working on a project that requires me to compute the null space of fairly large sparse matrices (2400 x 2400) multiple times. So far I have been using the scipy library to do so (which does not take into account that the matrix is sparse), although I am sure there must be a faster way. Looking around, I found lots of publications on different algorithms to do so, but I was hoping to take an easier route and use premade modules. Is there any python/C/C++/fortran/matlab/etc... library that could help me with this? I tried looking for such modules, but I could only find scipy.sparse, which unfortunately does not contain a function for this.

A: Look in scipy.sparse.linalg; there seems to be everything you need to find the null space. For instance:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.spsolve.html#scipy.sparse.linalg.spsolve
Null Space of Large Sparse Matrix
I am working on a project that requires me to compute the null space of fairly large sparse matrices (2400 x 2400) multiple times. So far I have been using the scipy library to do so (which does not take into account that the matrix is sparse), although I am sure there must be a faster way. Looking around, I found lots of publications on different algorithms to do so, but I was hoping to take an easier route and use premade modules. Is there any python/C/C++/fortran/matlab/etc... library that could help me with this? I tried looking for such modules, but I could only find scipy.sparse, which unfortunately does not contain a function for this.
[ "Look in scipy.sparse.linalg there seems to be everything you need to find the null space.\nFor instance:\nhttps://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.linalg.spsolve.html#scipy.sparse.linalg.spsolve\n" ]
[ 0 ]
[]
[]
[ "matrix", "performance", "python" ]
stackoverflow_0074517598_matrix_performance_python.txt
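Worth noting: the linked spsolve page solves linear systems rather than computing a null space directly; a closer fit in scipy.sparse.linalg is a truncated SVD. A rough sketch, assuming the null-space dimension k is small and the matrix is well-conditioned enough for the iterative solver to converge (ARPACK's "SM" mode can be slow near zero singular values):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import svds

def sparse_null_space(A, k=1, tol=1e-10):
    # Smallest singular triplets: right singular vectors whose singular
    # values are ~0 span the (approximate) null space of A.
    u, s, vt = svds(A, k=k, which="SM")
    mask = s <= tol
    return vt[mask].T  # columns are null-space basis vectors

# Toy example: a rank-deficient 3x3 matrix stored as CSR
A = sp.csr_matrix(np.array([[1., 2., 3.],
                            [2., 4., 6.],
                            [1., 0., 1.]]))
ns = sparse_null_space(A, k=1)
print(np.allclose(A @ ns, 0, atol=1e-8))  # True if a null vector was found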
Q: Detecting unexpected type conversion in python

I have a piece of complex Python code involving the use of 32-bit numerical values (for saving memory and bandwidth). But later I discovered many of these 32-bit numbers were implicitly converted to 64-bit in some high-level functions. For example, the sum function, by default, can transform a 32-bit array into a 64-bit number.

In [152]: x32
Out[152]: array([ 0. , 1.010101, 2.020202, 3.030303, 4.040404, 5.050505, 6.060606, 7.070707, 8.080808, 9.090909, 10.10101 , 11.111111, 12.121212, 13.131313, 14.141414, 15.151515, 16.161615, 17.171717, 18.181818, 19.19192 , 20.20202 , 21.212122, 22.222221, 23.232323, 24.242424, 25.252525, 26.262627, 27.272728, 28.282827, 29.292929, 30.30303 , 31.313131, 32.32323 , 33.333332, 34.343433, 35.353535, 36.363636, 37.373737, 38.38384 , 39.39394 , 40.40404 , 41.414143, 42.424244, 43.434345, 44.444443, 45.454544, 46.464645, 47.474747, 48.484848, 49.49495 , 50.50505 , 51.515152, 52.525253, 53.535355, 54.545456, 55.555557, 56.565655, 57.575756, 58.585857, 59.59596 , 60.60606 , 61.61616 , 62.626263, 63.636364, 64.64646 , 65.65656 , 66.666664, 67.676765, 68.68687 , 69.69697 , 70.70707 , 71.71717 , 72.72727 , 73.73737 , 74.747475, 75.757576, 76.76768 , 77.77778 , 78.78788 , 79.79798 , 80.80808 , 81.818184, 82.828285, 83.83839 , 84.84849 , 85.85859 , 86.86869 , 87.878784, 88.888885, 89.89899 , 90.90909 , 91.91919 , 92.92929 , 93.93939 , 94.94949 , 95.959595, 96.969696, 97.9798 , 98.9899 , 100. ], dtype=float32)

In [153]: sum(x32)
Out[153]: 4999.999972701073

In [154]: type(sum(x32))
Out[154]: numpy.float64

The reason sum(x32) is 64-bit in this case should be the default accumulator of sum, 0, as shown here:

In [156]: type(sum(x32, start=np.float32(0)))
Out[156]: numpy.float32

Above, I use the sum function as an example to illustrate that type conversion happens everywhere when I use 32-bit inputs. I have changed the sum part to avoid such implicit type conversion. But I would like to know whether, internally in my library calls, there is any other unexpected 32-bit -> 64-bit conversion.

Is there a general programming-language solution to monitor any possible type conversion? For example, can I run my Python code with some special debugging tool so that any type conversion from 32-bit to 64-bit will trigger an alarm or be logged?
Detecting unexpected type conversion in python
I have a piece of complex Python code involving the use of 32-bit numerical values (for saving memory and bandwidth). But later I discovered many of these 32-bit numbers were implicitly converted to 64-bit in some high-level functions. For example, the sum function, by default, can transform a 32-bit array into a 64-bit number.

In [152]: x32
Out[152]: array([ 0. , 1.010101, 2.020202, 3.030303, 4.040404, 5.050505, 6.060606, 7.070707, 8.080808, 9.090909, 10.10101 , 11.111111, 12.121212, 13.131313, 14.141414, 15.151515, 16.161615, 17.171717, 18.181818, 19.19192 , 20.20202 , 21.212122, 22.222221, 23.232323, 24.242424, 25.252525, 26.262627, 27.272728, 28.282827, 29.292929, 30.30303 , 31.313131, 32.32323 , 33.333332, 34.343433, 35.353535, 36.363636, 37.373737, 38.38384 , 39.39394 , 40.40404 , 41.414143, 42.424244, 43.434345, 44.444443, 45.454544, 46.464645, 47.474747, 48.484848, 49.49495 , 50.50505 , 51.515152, 52.525253, 53.535355, 54.545456, 55.555557, 56.565655, 57.575756, 58.585857, 59.59596 , 60.60606 , 61.61616 , 62.626263, 63.636364, 64.64646 , 65.65656 , 66.666664, 67.676765, 68.68687 , 69.69697 , 70.70707 , 71.71717 , 72.72727 , 73.73737 , 74.747475, 75.757576, 76.76768 , 77.77778 , 78.78788 , 79.79798 , 80.80808 , 81.818184, 82.828285, 83.83839 , 84.84849 , 85.85859 , 86.86869 , 87.878784, 88.888885, 89.89899 , 90.90909 , 91.91919 , 92.92929 , 93.93939 , 94.94949 , 95.959595, 96.969696, 97.9798 , 98.9899 , 100. ], dtype=float32)

In [153]: sum(x32)
Out[153]: 4999.999972701073

In [154]: type(sum(x32))
Out[154]: numpy.float64

The reason sum(x32) is 64-bit in this case should be the default accumulator of sum, 0, as shown here:

In [156]: type(sum(x32, start=np.float32(0)))
Out[156]: numpy.float32

Above, I use the sum function as an example to illustrate that type conversion happens everywhere when I use 32-bit inputs. I have changed the sum part to avoid such implicit type conversion. But I would like to know whether, internally in my library calls, there is any other unexpected 32-bit -> 64-bit conversion.

Is there a general programming-language solution to monitor any possible type conversion? For example, can I run my Python code with some special debugging tool so that any type conversion from 32-bit to 64-bit will trigger an alarm or be logged?
[ "I think you are nearly there to be honest.\noriginal_dtype = x32.dtype\n\nnew_dtype = sum(x32, start=np.float32(0))).dtype\n\nassert new_dtype == original_dtype, f\"dtypes differ, {new_dtype=} != {original_dtype=}\"\n\nTo use this method globally, you can write something like:\ndef type_checker_func(func,input_array,*args):\n\n dtype_orig = input_array.dtype\n\n result = func(input_array,*args)\n\n dtype_new = result.dtype\n\n if dtype_new != dtype_orig:\n print(f\"dtypes differ, {dtype_new=} != {dtype_orig=}\")\n\n return result\n\nmy_answer = type_checker_func(sum,x32,start=np.float32(0))\n\n\nBut I am not sure how you would best handle multiple return values (consider np.histogram), all sorts of args, etc. etc.\nI am also not sure how to invoke the type_checker_func globally / implicitly (if only for numpy fns).\nUpdate: I posted a github question asking about doing this for every function call using line_profiler - see https://github.com/pyutils/line_profiler/issues/188 - fingers crossed.\n" ]
[ 1 ]
[]
[]
[ "python", "type_conversion" ]
stackoverflow_0074516195_python_type_conversion.txt
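One way to make the answer's check reusable is a small decorator, sketched below. It assumes the wrapped function takes an array-like first argument and returns something convertible to a NumPy array — an assumption, not a general solution for multi-output functions like np.histogram.

import functools
import warnings
import numpy as np

def warn_on_widening(func):
    """Warn if func returns a wider dtype than its first array argument."""
    @functools.wraps(func)
    def wrapper(arr, *args, **kwargs):
        result = func(arr, *args, **kwargs)
        in_dt = np.asarray(arr).dtype
        out_dt = np.asarray(result).dtype
        if out_dt.itemsize > in_dt.itemsize:
            warnings.warn(f"{func.__name__}: {in_dt} -> {out_dt}")
        return result
    return wrapper

x32 = np.linspace(0, 100, 100, dtype=np.float32)
checked_sum = warn_on_widening(np.sum)
checked_sum(x32)                      # no warning: np.sum keeps float32
checked_sum(x32, dtype=np.float64)    # warns: explicit widening to float64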
Q: Python Pandas Read from column A & B instead of column name I'm relativley new to python I have a excel file where i can read,Column A "url" and Column B "name". In the future the columns will have no "column name" so i need it to read from Column A directly and column B and start iterating from cell 1. I tried using index_col(0) but can't really seem to get the hang of it. This is a simple download image script. import requests import pandas as pd df = pd.read_excel(r'C:\Users\exdata1.xlsx') for index, row in df.iterrows(): url = row['url'] file_name = url.split('/') r = requests.get(url) file_name=(row['name']+".jpeg") if r.status_code == 200: with open(file_name, "wb") as f: f.write(r.content) print (file_name) I tried this below without any good result. url = row['index_col(0)'] #0 for excel column "A" file_name=(row['index_col(1)']+".jpeg") #1 for excel Column "B" Apreciate any support! A: You can set header=None as an argument of pandas.read_excel and give names to your columns. Try this : import requests import pandas as pd df = pd.read_excel(r'C:\Users\exdata1.xlsx', header=None, names=['url', 'name']) for index, row in df.iterrows(): url = row['url'] file_name = url.split('/') r = requests.get(url) file_name=(row['name']+'.jpeg') if r.status_code == 200: with open(file_name, 'wb') as f: f.write(r.content) print(file_name) A: If your files had no columns name pandas assign values to each column such as Unnamed: 0, you can check that py printing df.info or df.head() you can assign columns names when reading from your file so you df always has columns name: df.rename( columns={"Unnamed: 0" :'url', Unnamed: 0: 'name'}, inplace=True ) then you are good to go.
Python Pandas Read from column A & B instead of column name
I'm relatively new to Python. I have an Excel file where I can read Column A "url" and Column B "name". In the future the columns will have no column name, so I need to read directly from Column A and Column B and start iterating from cell 1. I tried using index_col(0) but can't really seem to get the hang of it. This is a simple download-image script.

import requests
import pandas as pd

df = pd.read_excel(r'C:\Users\exdata1.xlsx')

for index, row in df.iterrows():
    url = row['url']
    file_name = url.split('/')
    r = requests.get(url)
    file_name = (row['name']+".jpeg")
    if r.status_code == 200:
        with open(file_name, "wb") as f:
            f.write(r.content)
    print(file_name)

I tried the following without any good result:

url = row['index_col(0)']  # 0 for Excel column "A"
file_name = (row['index_col(1)']+".jpeg")  # 1 for Excel column "B"

Appreciate any support!
[ "You can set header=None as an argument of pandas.read_excel and give names to your columns.\nTry this :\nimport requests\nimport pandas as pd\n \ndf = pd.read_excel(r'C:\\Users\\exdata1.xlsx', header=None, names=['url', 'name'])\n\nfor index, row in df.iterrows():\n url = row['url']\n file_name = url.split('/')\n r = requests.get(url) \n file_name=(row['name']+'.jpeg') \n\n if r.status_code == 200:\n with open(file_name, 'wb') as f:\n f.write(r.content)\n print(file_name)\n\n", "If your files had no columns name pandas assign values to each column such as Unnamed: 0, you can check that py printing df.info or df.head()\nyou can assign columns names when reading from your file so you df always has columns name:\ndf.rename( columns={\"Unnamed: 0\" :'url', Unnamed: 0: 'name'}, inplace=True )\n\nthen you are good to go.\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "excel", "loops", "pandas", "python" ]
stackoverflow_0074517443_dataframe_excel_loops_pandas_python.txt
Q: list of entries (files and folders) in a directory tree by os.scandir() in Python

I have used os.walk() to list all subfolders and files in a directory tree, but heard that os.scandir() does the job up to 2X - 20X faster. So I tried this code:

def tree2list (directory:str) -> list:
    import os
    tree = []
    counter = 0
    for i in os.scandir(directory):
        if i.is_dir():
            counter+=1
            tree.append ([counter,'Folder', i.name, i.path]) ## doesn't list the whole tree
            tree2list(i.path)
            #print(i.path) ## this line prints all subfolders in the tree
        else:
            counter+=1
            tree.append([counter,'File', i.name, i.path])
            #print(i.path) ## this line prints all files in the tree
    return tree

and when I test it:

## tester
folder = 'E:/Test'
print(tree2list(folder))

I got only the content of the root directory and nothing from the subdirectories lower in the tree hierarchy, while all print statements in the above code work fine.

[[1, 'Folder', 'Archive', 'E:/Test\\Archive'], [2, 'Folder', 'Source', 'E:/Test\\Source']]

What have I done wrong, and how can I fix it?

A: Using generators (yield, yield from) allows you to manage the recursion with concise code:

from pprint import pprint
from typing import Iterator, Tuple

def tree2list(directory: str) -> Iterator[Tuple[str, str, str]]:
    import os

    for i in os.scandir(directory):
        if i.is_dir():
            yield ["Folder", i.name, i.path]
            yield from tree2list(i.path)
        else:
            yield ["File", i.name, i.path]

folder = "/home/yfgy6415/dev/tmp"
pprint(list(tree2list(folder)))

Or: pprint(list(enumerate(tree2list(folder), start=1))) if you want the counter.

A: Your code almost works; just a minor modification is required:

def tree2list(directory: str) -> list:
    import os
    tree = []
    counter = 0
    for i in os.scandir(directory):
        if i.is_dir():
            counter += 1
            tree.append([counter, 'Folder', i.name, i.path])
            tree.extend(tree2list(i.path))
            # print(i.path) ## this line prints all subfolders in the tree
        else:
            counter += 1
            tree.append([counter, 'File', i.name, i.path])
            # print(i.path) ## this line prints all files in the tree
    return tree

Although I don't understand what the purpose of the counter variable is, so I'd probably remove it. Further, I have to agree with @Gelineau that your approach utilizes array copies quite heavily and is therefore most likely quite slow. An iterator-based approach as in his response is better suited for a large number of files.

A: Adding to the accepted answer. In case... getting all files in the directory and subdirectories matching some pattern (*.py for example):

import os
from fnmatch import fnmatch

def file_tree_fn(root):
    file_list = []
    for python_file in os.scandir(str(root)):
        if python_file.is_dir():
            file_list.extend(file_tree_fn(python_file.path))
        else:
            file_list.append(python_file.path) if fnmatch(python_file.path, "*.py") & python_file.is_file() else None
    return file_list

print(file_tree_fn(root))
list of entries (files and folders) in a directory tree by os.scandir() in Python
I have used "os.walk()" to list all subfolders and files in a directory tree , but heard that "os.scandir()" does the job up to 2X - 20X faster. So I tried this code: def tree2list (directory:str) -> list: import os tree = [] counter = 0 for i in os.scandir(directory): if i.is_dir(): counter+=1 tree.append ([counter,'Folder', i.name, i.path]) ## doesn't list the whole tree tree2list(i.path) #print(i.path) ## this line prints all subfolders in the tree else: counter+=1 tree.append([counter,'File', i.name, i.path]) #print(i.path) ## this line prints all files in the tree return tree and when test it: ## tester folder = 'E:/Test' print(tree2list(folder)) I got only the content of the root directory and none from sub-directories below tree hierarchy, while all print statements in above code work fine. [[1, 'Folder', 'Archive', 'E:/Test\\Archive'], [2, 'Folder', 'Source', 'E:/Test\\Source']] What have I done wrong ?, and how can I fix it?!
[ "Using generators (yield, yield from) allows to manage the recursion with concise code:\nfrom pprint import pprint\nfrom typing import Iterator, Tuple\n\n\ndef tree2list(directory: str) -> Iterator[Tuple[str, str, str]]:\n import os\n\n for i in os.scandir(directory):\n if i.is_dir():\n yield [\"Folder\", i.name, i.path]\n yield from tree2list(i.path)\n else:\n yield [\"File\", i.name, i.path]\n\n\nfolder = \"/home/yfgy6415/dev/tmp\"\npprint(list(tree2list(folder)))\n\nOr: pprint(list(enumerate(tree2list(folder), start=1))) if you want the counter.\n", "Your code almost works, just a minor modification is required:\ndef tree2list(directory: str) -> list:\n import os\n tree = []\n counter = 0\n for i in os.scandir(directory):\n if i.is_dir():\n counter += 1\n tree.append([counter, 'Folder', i.name, i.path])\n tree.extend(tree2list(i.path))\n # print(i.path) ## this line prints all subfolders in the tree\n else:\n counter += 1\n tree.append([counter, 'File', i.name, i.path])\n # print(i.path) ## this line prints all files in the tree\n return tree\n\nAlthough I don't understand what the purpose of the counter variable is, so I'd probably remove it.\nFurther, I have to agree with @Gelineau that your approach utilizes array-copies quite heavily and is therefore most likely quite slow. An iterator based approach as in his response is more suited for a large number of files.\n", "Adding to the accepted answer. In case... Getting all files in the directory and subdirectories matching some pattern (*.py for example):\nimport os\nfrom fnmatch import fnmatch\n\n\ndef file_tree_fn(root):\n file_list = []\n for python_file in os.scandir(str(root)):\n if python_file.is_dir():\n file_list.extend(file_tree_fn(python_file.path))\n else:\n file_list.append(python_file.path) if fnmatch(python_file.path, \"*.py\") & python_file.is_file() else None\n return file_list\n\nprint(file_tree_fn(root))\n\n" ]
[ 2, 2, 0 ]
[]
[]
[ "python", "python_3.x", "scandir" ]
stackoverflow_0072938098_python_python_3.x_scandir.txt
Q: AWS CDK can't find ARN of dead letter queue when creating SQS

I'm trying to create an SQS queue with a dead letter queue, but when I deploy, AWS says it can't find the ARN for the dead letter queue. My code for the SQS stack is below.

class SqsCdkStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, app_name: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        dead_letter_queue: sqs.Queue = sqs.Queue(
            self,
            id="VfAwsRtsMlinfCdkDeadLetterQueue",
            queue_name=f"{app_name}-dead-letter-queue",
            retention_period=Duration.days(14)
        )

        self.sqs_queue: sqs.Queue = sqs.Queue(
            self,
            id="VfAwsRtsMlinfCdkContactResponseQueue",
            queue_name=f"{app_name}-contact-and-response-queue",
            retention_period=Duration.days(4),
            visibility_timeout=Duration.seconds(30),
            delivery_delay=Duration.seconds(0),
            receive_message_wait_time=Duration.seconds(0),
            max_message_size_bytes=262144,  # 256 KiB
            encryption=sqs.QueueEncryption.SQS_MANAGED,
            dead_letter_queue=sqs.DeadLetterQueue(
                max_receive_count=1,
                queue=dead_letter_queue
            )
        )

A: CloudFormation should know to create the DLQ before the Queue, but try making the dependency explicit with:

self.sqs_queue.node.add_dependency(dead_letter_queue)
AWS CDK can't find ARN of dead letter queue when creating SQS
I'm trying to create an SQS queue with a dead letter queue, but when I deploy, AWS says it can't find the ARN for the dead letter queue. My code for the SQS stack is below.

class SqsCdkStack(Stack):

    def __init__(self, scope: Construct, construct_id: str, app_name: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        dead_letter_queue: sqs.Queue = sqs.Queue(
            self,
            id="VfAwsRtsMlinfCdkDeadLetterQueue",
            queue_name=f"{app_name}-dead-letter-queue",
            retention_period=Duration.days(14)
        )

        self.sqs_queue: sqs.Queue = sqs.Queue(
            self,
            id="VfAwsRtsMlinfCdkContactResponseQueue",
            queue_name=f"{app_name}-contact-and-response-queue",
            retention_period=Duration.days(4),
            visibility_timeout=Duration.seconds(30),
            delivery_delay=Duration.seconds(0),
            receive_message_wait_time=Duration.seconds(0),
            max_message_size_bytes=262144,  # 256 KiB
            encryption=sqs.QueueEncryption.SQS_MANAGED,
            dead_letter_queue=sqs.DeadLetterQueue(
                max_receive_count=1,
                queue=dead_letter_queue
            )
        )
[ "CloudFormation should know to create the DLQ before the Queue, but try making the dependency explicit with:\nself.sqs_queue.node.add_dependency(dead_letter_queue)\n\n" ]
[ 0 ]
[]
[]
[ "amazon_sqs", "amazon_web_services", "aws_cdk", "python" ]
stackoverflow_0074489043_amazon_sqs_amazon_web_services_aws_cdk_python.txt
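If the explicit dependency from the answer is adopted, the call belongs at the end of __init__, once both queues exist. A minimal sketch — only the add_dependency line is new; every other name is taken from the question:
        # ... queue definitions as in the question ...

        # node.add_dependency makes CloudFormation emit a DependsOn entry,
        # so the DLQ is fully created before the main queue resolves its ARN.
        self.sqs_queue.node.add_dependency(dead_letter_queue)

Running cdk synth afterwards should show a DependsOn attribute on the main queue's resource in the generated template.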
Q: In PySpark, how to check the format a DataFrame was read in? Delta vs Parquet
I have a function that reads in a file which could be in either Delta or Parquet format.
def getData(filename, data_format):
    if data_format == "parquet":
        return spark.read.parquet(filename)
    elif data_format == "delta":
        return spark.read.format("delta").load(filename)

I then use the returned pyspark.sql.dataframe.DataFrame in some analysis function:
def someAnalyticalFunction(df):
    if df == "parquet":
        # I know this isn't right, but how do I check the data format?
        ...  # do some analysis
    elif df == "delta":
        ...  # do some slightly different analysis

Is there a way that I can check, in the analysis function, what format the dataframe (df) was read in?

A: You can't do that with Spark itself, but you can use dbutils.fs to check whether the Delta metadata (the _delta_log directory) exists.

A: You can do it with the isDeltaTable method from the Delta Lake API.

https://docs.delta.io/latest/api/python/index.html
https://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaTable.isDeltaTable

from delta.tables import DeltaTable
from pyspark.sql import SparkSession


def check_format(spark: SparkSession, path: str) -> str:
    """Return the format of the spark path."""
    if DeltaTable.isDeltaTable(spark, path):
        return "delta"
    else:
        return "parquet"
In PySpark, how to check the format a DataFrame was read in? Delta vs Parquet
I have a function that reads in a file which could be in either Delta or Parquet format.
def getData(filename, data_format):
    if data_format == "parquet":
        return spark.read.parquet(filename)
    elif data_format == "delta":
        return spark.read.format("delta").load(filename)

I then use the returned pyspark.sql.dataframe.DataFrame in some analysis function:
def someAnalyticalFunction(df):
    if df == "parquet":
        # I know this isn't right, but how do I check the data format?
        ...  # do some analysis
    elif df == "delta":
        ...  # do some slightly different analysis

Is there a way that I can check, in the analysis function, what format the dataframe (df) was read in?
[ "You can't do that with Spark, but you can use dbutils.fs to check if delta metadata file exists\n", "You can do it with the isDeltaTable method from the Delta Lake API.\n\nhttps://docs.delta.io/latest/api/python/index.html\nhttps://docs.delta.io/latest/api/python/index.html#delta.tables.DeltaTable.isDeltaTable\n\nfrom delta.tables import DeltaTable\n\ndef check_format(spark: SparkSession, path: str) -> str:\n \"\"\" Return the format of the spark path \"\"\"\n if DeltaTable.isDeltaTable(spark, path):\n return \"delta\"\n\n else:\n return \"parquet\"\n\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark_sql", "azure_databricks", "pyspark", "python" ]
stackoverflow_0071288669_apache_spark_sql_azure_databricks_pyspark_python.txt
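One way to stitch the two answers into the asker's flow is to detect the format once at read time and pass it alongside the DataFrame, since the DataFrame itself does not retain it. A sketch under that assumption — check_format is the helper from the second answer, and spark and filename are whatever session and path you already have in scope:
def someAnalyticalFunction(df, data_format: str):
    # The format is passed in explicitly because it cannot be
    # recovered from the DataFrame object itself.
    if data_format == "parquet":
        ...  # parquet-specific analysis
    elif data_format == "delta":
        ...  # delta-specific analysis


data_format = check_format(spark, filename)  # "delta" or "parquet"
df = getData(filename, data_format)
someAnalyticalFunction(df, data_format)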
Q: libGL.so.1: cannot open shared object file: No such file or directory - even when using opencv headless
I have a Docker image I am building to run on AWS Lambda. One of the dependencies is opencv, but I am using the headless version. My requirements file is:
absl-py==1.0.0
attrs==21.4.0
cycler==0.11.0
flatbuffers==2.0
fonttools==4.33.3
imageio==2.19.2
jmespath==1.0.0
kiwisolver==1.4.2
matplotlib==3.5.2
mediapipe==0.8.10
networkx==2.8.1
numpy==1.22.3
onnxruntime==1.11.1
opencv-contrib-python-headless==4.5.5.64
packaging==21.3
Pillow==9.1.1
protobuf==3.20.1
pyparsing==3.0.9
python-dateutil==2.8.2
PyWavelets==1.3.0
scikit-image==0.19.2
scipy==1.8.1
six==1.16.0
tifffile==2022.5.4
urllib3==1.26.9

And my Dockerfile is:
FROM public.ecr.aws/lambda/python:3.8

COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt

COPY lambda_function.py ./
COPY remove.py ./
COPY detect.py ./
COPY u2net.onnx ./

CMD [ "lambda_function.lambda_handler" ]

The exact error I am getting in Lambda is:
{
  "errorMessage": "Unable to import module 'lambda_function': libGL.so.1: cannot open shared object file: No such file or directory",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}

I have tried researching what could be happening but have come up empty-handed. Why would I be getting this error when using the headless version? Thank you

A: I needed to update my container repositories and install a dependency, libgl1-mesa-glx:
RUN apt update
# Dependency for opencv-python (cv2). `import cv2` raises ImportError: libGL.so.1: cannot open shared object file: No such file or directory
# Solution from https://askubuntu.com/a/1015744
RUN apt install -y libgl1-mesa-glx
libGL.so.1: cannot open shared object file: No such file or directory - even when using opencv headless
I have a Docker image I am building to run on AWS Lambda. One of the dependencies is opencv, but I am using the headless version. My requirements file is:
absl-py==1.0.0
attrs==21.4.0
cycler==0.11.0
flatbuffers==2.0
fonttools==4.33.3
imageio==2.19.2
jmespath==1.0.0
kiwisolver==1.4.2
matplotlib==3.5.2
mediapipe==0.8.10
networkx==2.8.1
numpy==1.22.3
onnxruntime==1.11.1
opencv-contrib-python-headless==4.5.5.64
packaging==21.3
Pillow==9.1.1
protobuf==3.20.1
pyparsing==3.0.9
python-dateutil==2.8.2
PyWavelets==1.3.0
scikit-image==0.19.2
scipy==1.8.1
six==1.16.0
tifffile==2022.5.4
urllib3==1.26.9

And my Dockerfile is:
FROM public.ecr.aws/lambda/python:3.8

COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt

COPY lambda_function.py ./
COPY remove.py ./
COPY detect.py ./
COPY u2net.onnx ./

CMD [ "lambda_function.lambda_handler" ]

The exact error I am getting in Lambda is:
{
  "errorMessage": "Unable to import module 'lambda_function': libGL.so.1: cannot open shared object file: No such file or directory",
  "errorType": "Runtime.ImportModuleError",
  "stackTrace": []
}

I have tried researching what could be happening but have come up empty-handed. Why would I be getting this error when using the headless version? Thank you
[ "I needed to update my container repositories and install a dependency, libgl1-mesa-glx:\nRUN apt update\n# Dependency for opencv-python (cv2). `import cv2` raises ImportError: libGL.so.1: cannot open shared object file: No such file or directory\n# Solution from https://askubuntu.com/a/1015744\nRUN apt install -y libgl1-mesa-glx\n\n" ]
[ 0 ]
[]
[]
[ "aws_lambda", "docker", "opencv", "python" ]
stackoverflow_0072365190_aws_lambda_docker_opencv_python.txt
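One caveat on this record: the question's base image, public.ecr.aws/lambda/python:3.8, is Amazon Linux based, so the apt commands in the answer are not available in that image. A sketch of the equivalent fix, assuming mesa-libGL is the Amazon Linux package that ships libGL.so.1 (the counterpart of Debian/Ubuntu's libgl1-mesa-glx):
FROM public.ecr.aws/lambda/python:3.8

# Install the Mesa OpenGL runtime so `import cv2` can load libGL.so.1,
# then clean the yum cache to keep the image small.
RUN yum install -y mesa-libGL && yum clean all

COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt

As for why the error appears despite the headless pin: another entry in requirements.txt may declare the non-headless OpenCV build as its own dependency (mediapipe is a plausible culprit), in which case pip installs both and cv2 resolves to the GUI-linked variant.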