Dataset schema: content (string, 85 to 101k chars); title (string, 0 to 150 chars); question (string, 15 to 48k chars); answers (list); answers_scores (list); non_answers (list); non_answers_scores (list); tags (list); name (string, 35 to 137 chars).
Q: Combine two lists into one multidimensional list I would like to merge two lists into one 2d list. list1=["Peter", "Mark", "John"] list2=[1,2,3] into list3=[["Peter",1],["Mark",2],["John",3]] A: list3 = [list(a) for a in zip(list1, list2)] A: An alternative: >>> map(list,zip(list1,list2)) [['Peter', 1], ['Mark', 2], ['John', 3]] or in python3: >>> list(map(list,zip(list1,list2))) [['Peter', 1], ['Mark', 2], ['John', 3]] (you can omit the outer list()-cast in most circumstances, though) A: I actually used: list3a = np.concatenate((list1, list2)) list3 = np.reshape(list3a, (-1,2)) because otherwise I get the error: 'list indices must be integers, not tuples' when trying to reference the array. A: zip() iterates over both lists in sync and you will get a and b in parallel. Both values then create a new list as an element of the final list (list3). list3 = [[a, b] for a, b in zip(list1, list2)]
Combine two lists into one multidimensional list
I would like to merge two lists into one 2d list. list1=["Peter", "Mark", "John"] list2=[1,2,3] into list3=[["Peter",1],["Mark",2],["John",3]]
[ "list3 = [list(a) for a in zip(list1, list2)]\n\n", "An alternative:\n>>> map(list,zip(list1,list2))\n[['Peter', 1], ['Mark', 2], ['John', 3]]\n\nor in python3:\n>>> list(map(list,zip(list1,list2)))\n[['Peter', 1], ['Mark', 2], ['John', 3]]\n\n(you can omit the outer list()-cast in most circumstances, though)\n", "I actually used:\nlist3a = np.concatenate((list1, list2))\nlist3 = np.reshape(list3a, (-1,2))\n\nbecause otherwise I get the error: 'list indices must be integers, not tuples' when trying to reference the array.\n", "zip() iterates over both lists in sync and you will get a and b in parallel. Both values then create a new list as an element of the final list (list3).\nlist3 = [[a, b] for a, b in zip(list1, list2)]\n\n" ]
[ 27, 3, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0012624623_list_python.txt
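A note on the thread above: zip() silently truncates to the shorter input, so when the two lists might differ in length it helps to be explicit about that. A minimal hedged sketch (variable names mirror the question; the strict/zip_longest choices are suggestions, not part of the original answers):

from itertools import zip_longest

list1 = ["Peter", "Mark", "John"]
list2 = [1, 2, 3]

# Pair elements; raise instead of silently truncating on a length
# mismatch (strict= requires Python 3.10+).
list3 = [list(pair) for pair in zip(list1, list2, strict=True)]
print(list3)  # [['Peter', 1], ['Mark', 2], ['John', 3]]

# Alternatively, pad the shorter list with None instead of truncating.
padded = [list(pair) for pair in zip_longest(list1, list2, fillvalue=None)]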
Q: How can I make some points a different color in the last frame of plotly.graph_objects fig = go.Figure() f = [] hours = range(0, 60) pts = [rd.randrange(11, 100, 13) for i in range(1, 61)] for i in range(1, 61): f.append(go.Frame(data=[go.Scatter(x=list(hours[1:61]), y=list(pts[:i]), name="PRB Usage",line=dict(color="#1E90FF"),marker=dict(size=10,line=dict(width=3,color="yellow")))])) fig.update(frames=f) c = fig.to_html(config={'displayModeBar': False}) Here my animation has 60 frames of a joined scatter plot. After the animation finishes, I want to change the color of the points that are greater than 80 or less than 20. I tried to add_trace a new plot after the fig.update(frames=f) statement, but it shows before the animation completes. A: Your question only has partial code, so I have filled in the missing pieces with my best guess. If you want to add special processing to the last graph of the animation, create a new final frame and replace the last frame in the list of already existing frames. The color condition specifies a list of colors, beyond the standard value, in the new last frame. import plotly.graph_objects as go import random as rd f = [] hours = range(0, 60) pts = [rd.randrange(11, 100, 13) for i in range(1, 61)] data = [go.Scatter(x=[], y=[])] layout = go.Layout( xaxis=dict(range=[0,60], autorange=False), yaxis=dict(range=[0,100], autorange=False), title='Animation frames', updatemenus=[dict( type="buttons", buttons=[dict(label="Play", method="animate", args=[None])])] ) for i in range(1, 61): f.append(go.Frame(data=[ go.Scatter( mode='markers', x=list(hours[:i]), y=list(pts[:i]), name="PRB Usage", # line=dict(color="#1E90FF"), marker=dict(size=10, color="blue") ) ])) f[-1] = go.Frame(data=[go.Scatter(mode='markers', x=list(hours), y=pts, marker=dict(size=10, color=['red' if x >= 80 else 'blue' for x in pts]))]) fig = go.Figure(data=data, layout=layout, frames=f) fig.show()
How can I make some points a different color in the last frame of plotly.graph_objects
fig = go.Figure() f = [] hours = range(0, 60) pts = [rd.randrange(11, 100, 13) for i in range(1, 61)] for i in range(1, 61): f.append(go.Frame(data=[go.Scatter(x=list(hours[1:61]), y=list(pts[:i]), name="PRB Usage",line=dict(color="#1E90FF"),marker=dict(size=10,line=dict(width=3,color="yellow")))])) fig.update(frames=f) c = fig.to_html(config={'displayModeBar': False}) Here my animation has 60 frames of a joined scatter plot. After the animation finishes, I want to change the color of the points that are greater than 80 or less than 20. I tried to add_trace a new plot after the fig.update(frames=f) statement, but it shows before the animation completes.
[ "Your question only has a partial code, so I have made up the deficiency with my guess. If you want to add special processing to the last graph of the animation, create a new final frame and replace the last frame in the list of already existing frames. The color condition specifies a list of colors above and beyond the standard value in the new last frame.\nimport plotly.graph_objects as go\nimport random as rd\n\nf = []\nhours = range(0, 60)\npts = [rd.randrange(11, 100, 13) for i in range(1, 61)]\n\ndata = [go.Scatter(x=[], y=[])]\n\nlayout = go.Layout(\n xaxis=dict(range=[0,60], autorange=False),\n yaxis=dict(range=[0,100], autorange=False),\n title='Animation frames',\n updatemenus=[dict(\n type=\"buttons\",\n buttons=[dict(label=\"Play\",\n method=\"animate\",\n args=[None])])]\n)\n \nfor i in range(1, 61):\n f.append(go.Frame(data=[\n go.Scatter(\n mode='markers',\n x=list(hours[:i]),\n y=list(pts[:i]),\n name=\"PRB Usage\",\n # line=dict(color=\"#1E90FF\"),\n marker=dict(size=10, color=\"blue\")\n )\n ]))\n\nf[-1] = go.Frame(data=[go.Scatter(mode='markers', x=list(hours), y=pts, marker=dict(size=10, color=['red' if x >= 80 else 'blue' for x in pts]))])\n \nfig = go.Figure(data=data, layout=layout, frames=f)\n\nfig.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "plotly", "plotly.graph_objects", "python" ]
stackoverflow_0074559806_plotly_plotly.graph_objects_python.txt
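One gap worth flagging in the answer above: the question asks to highlight points greater than 80 or less than 20, but the posted condition only covers the >= 80 side. A hedged sketch of a color list that covers both ends (thresholds and colors are taken from the question; the rest mirrors the answer's final-frame replacement):

import plotly.graph_objects as go
import random as rd

hours = list(range(60))
pts = [rd.randrange(11, 100, 13) for _ in range(60)]

# One color per point, covering both tails of the requested range.
colors = ['red' if (y >= 80 or y <= 20) else 'blue' for y in pts]

final_frame = go.Frame(data=[go.Scatter(
    mode='markers', x=hours, y=pts,
    marker=dict(size=10, color=colors),
)])
# In the answer's code this would replace the last frame: f[-1] = final_frame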
Q: Compare an iteration with the next and next next iteration I have a dataframe with three columns (Site, EventTime, EndTime), and I need to compare each item in Site with the next value and the one after that. If those three are different, I need to copy the EndTime of the last one into the first one. I tried this: i = 0 while i <= len(df): if df['Site'][i] != df['Site'][i+1] and df['Site'][i] != df['Site'][i+2]: df['EndTime'][i]=df['EndTime'][i+2] i = i+2 if df['Site'][i] != df['Site'][i+1]: df['EndTime'][i]=df['EndTime'][i+1] i = i+1 i = i+1 and it gave this warning: C:\Users\\AppData\Local\Temp\ipykernel_19504\3633764966.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['EndTime'][i]=df['EndTime'][i+2] Output exceeds the size limit. Open the full output data in a text editor I need to replace the EndTime with the last EndTime when the next two iterations are different from each other. What I need: A: Here I am assuming you need to set the new time on the last record of site a, using the EndTime of the next first record of site c. Accordingly, I have used a lambda function to pass through line by line and check a condition, which gives True if the current row's site differs from the next row's site and the current site is a; if yes, it gets the first EndTime value of site c in the remaining data, or else takes None. Try the following code: import numpy as np df['new time'] = df[:-1].apply(lambda r: df[r.name:].loc[df['site']=='c'].groupby(['site']).first()['EndTime'].values[0] if (df.at[r.name,'site']!=df.at[r.name+1,'site'] and r.site=='a') else np.nan, axis=1) df
Compare an iteration with the next and next next iteration
I have a dataframe with three columns (Site, EventTime, EndTime), and I need to compare each item in Site with the next value and the one after that. If those three are different, I need to copy the EndTime of the last one into the first one. I tried this: i = 0 while i <= len(df): if df['Site'][i] != df['Site'][i+1] and df['Site'][i] != df['Site'][i+2]: df['EndTime'][i]=df['EndTime'][i+2] i = i+2 if df['Site'][i] != df['Site'][i+1]: df['EndTime'][i]=df['EndTime'][i+1] i = i+1 i = i+1 and it gave this warning: C:\Users\\AppData\Local\Temp\ipykernel_19504\3633764966.py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df['EndTime'][i]=df['EndTime'][i+2] Output exceeds the size limit. Open the full output data in a text editor I need to replace the EndTime with the last EndTime when the next two iterations are different from each other. What I need:
[ "Here I am assuming you need to set the new time on all last record of site a, with the next first record of site c EndTime.\nFollowinglly, I have used lambda function to pass through line by line, and check condintion, where give true if current site and next row site is different and current site is a, if yes then get the first value of c Endtime in remain in loop data, or else take None.\nTry following Code:\nimport numpy as np\ndf['new time'] = df[:-1].apply(lambda r: df[r.name:].loc[df['site']=='c'].groupby(['site']).first()['EndTime'].values[0]\n if (df.at[r.name,'site']!=df.at[r.name+1,'site'] and r.site=='a') else np.nan, axis=1)\ndf\n\n" ]
[ 0 ]
[]
[]
[ "iteration", "python" ]
stackoverflow_0074433680_iteration_python.txt
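A hedged alternative to the row-by-row approaches above: pandas' shift() compares each row with the next ones without an explicit loop, which sidesteps both the SettingWithCopyWarning and the index arithmetic. This is a sketch under the assumption that the frame has Site and EndTime columns as described; the mask reproduces the question's "current differs from the next two" condition:

import pandas as pd

df = pd.DataFrame({
    'Site': ['a', 'b', 'c', 'a', 'a'],
    'EndTime': [1, 2, 3, 4, 5],
})

# True where Site differs from both of the next two rows. Note the last
# two rows compare against NaN and therefore count as "different".
mask = (df['Site'] != df['Site'].shift(-1)) & (df['Site'] != df['Site'].shift(-2))

# Copy the EndTime from two rows ahead into the qualifying rows, via .loc
# to avoid the chained-assignment warning from the question.
df.loc[mask, 'EndTime'] = df['EndTime'].shift(-2)[mask]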
Q: 'NOT NULL constraint failed' after adding to models.py I'm using userena and after adding the following line to my models.py zipcode = models.IntegerField(_('zipcode'), max_length=5) I get the following error after I hit the submit button on the signup form: IntegrityError at /accounts/signup/ NOT NULL constraint failed: accounts_myprofile.zipcode My question is what does this error mean, and is this related to Userena? A: You must create a migration, where you will specify a default value for the new field, since you don't want it to be null. If the field is not required, simply add null=True and create and run a migration. A: coldmind's answer is correct but lacks details. The NOT NULL constraint failed occurs when something tries to set None to the zipcode property, while it has not been explicitly allowed. It usually happens when: Your field has null=False by default, so that the value in the database cannot be None (i.e. undefined) when the object is created and saved in the database (this happens after an objects_set.create() call or setting the .zipcode property and doing a .save() call). For instance, if somewhere in your code an assignment results in: model.zipcode = None This error is raised. When creating or updating the database, Django is constrained to find a default value to fill the field, because null=False by default. It does not find any because you haven't defined any. So this error can not only happen during code execution but also when creating the database. Note that the same error would be returned if you define default=None, or if your default value has an incorrect type, for instance default='00000' instead of 00000 for your field (maybe there can be an automatic conversion between chars and integers, but I would advise against relying on it. Besides, explicit is better than implicit). Most likely an error would also be raised if the default value violates the max_length property, e.g. 123456 So you'll have to define the field by one of the following: models.IntegerField(_('zipcode'), max_length=5, null=True, blank=True) models.IntegerField(_('zipcode'), max_length=5, null=False, blank=True, default=00000) models.IntegerField(_('zipcode'), max_length=5, blank=True, default=00000) and then make a migration (python3 manage.py makemigrations <app_name>) and then migrate (python3 manage.py migrate). For safety you can also delete the last failed migration files in <app_name>/migrations/, they are usually named after this pattern: <NUMBER>_auto_<DATE>_<HOUR>.py Finally, if you don't set null=True, make sure that model.zipcode = None is never done anywhere. A: If the zipcode field is not a required field then add null=True and blank=True, then run the makemigrations and migrate commands to successfully reflect the changes in the database. A: In Django, null=True means NULL values are accepted. But for some fields in Django, blank=True alone is not enough to allow blank values: DateTimeField ForeignKey If you want to allow blanks on these two fields, I recommend also adding null=True
'NOT NULL constraint failed' after adding to models.py
I'm using userena and after adding the following line to my models.py zipcode = models.IntegerField(_('zipcode'), max_length=5) I get the following error after I hit the submit button on the signup form: IntegrityError at /accounts/signup/ NOT NULL constraint failed: accounts_myprofile.zipcode My question is what does this error mean, and is this related to Userena?
[ "You must create a migration, where you will specify default value for a new field, since you don't want it to be null. If null is not required, simply add null=True and create and run migration.\n", "coldmind's answer is correct but lacks details.\nThe NOT NULL constraint failed occurs when something tries to set None to the zipcode property, while it has not been explicitly allowed.\nIt usually happens when:\n\nYour field has Null=False by default, so that the value in the database cannot be None (i.e. undefined) when the object is created and saved in the database (this happens after a objects_set.create() call or setting the .zipcode property and doing a .save() call).\nFor instance, if somewhere in your code an assignment results in:\nmodel.zipcode = None\n\nThis error is raised.\n\nWhen creating or updating the database, Django is constrained to find a default value to fill the field, because Null=False by default. It does not find any because you haven't defined any. So this error can not only happen during code execution but also when creating the database?\n\nNote that the same error would be returned if you define default=None, or if your default value with an incorrect type, for instance default='00000' instead of 00000 for your field (maybe can there be an automatic conversion between char and integers, but I would advise against relying on it. Besides, explicit is better than implicit). Most likely an error would also be raised if the default value violates the max_length property, e.g. 123456\nSo you'll have to define the field by one of the following:\nmodels.IntegerField(_('zipcode'), max_length=5, Null=True,\n blank=True)\n\nmodels.IntegerField(_('zipcode'), max_length=5, Null=False,\n blank=True, default=00000)\n\nmodels.IntegerField(_('zipcode'), max_length=5, blank=True,\n default=00000)\n\nand then make a migration (python3 manage.py makemigration <app_name>) and then migrate (python3 manage.py migrate).\nFor safety you can also delete the last failed migration files in <app_name>/migrations/, there are usually named after this pattern:\n<NUMBER>_auto_<DATE>_<HOUR>.py\n\n\n\nFinally, if you don't set Null=True, make sure that mode.zipcode = None is never done anywhere.\n", "If the zipcode field is not a required field then add null=True and blank=True, then run makemigrations and migrate command to successfully reflect the changes in the database.\n", "In Django Null=True means Null Values are accepted. But Some Filed having Django that Blank=True are not satisfied for put Blank fields.\n\nDateTimeField\nForeignKey\n\nThese two fields are used and if you want to put Blank I recommend adding NULL=TRUE\n" ]
[ 98, 17, 9, 0 ]
[ "Since you added a new property to the model, you must first delete the database. Then manage.py migrations then manage.py migrate.\n" ]
[ -5 ]
[ "django", "django_migrations", "django_models", "python" ]
stackoverflow_0025964312_django_django_migrations_django_models_python.txt
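Two small hedged notes on the field from the question: max_length is not a valid option for IntegerField (Django ignores or rejects it depending on the version), and US-style zip codes with leading zeros do not survive integer storage anyway. A minimal sketch of two common ways to define the field so the NOT NULL error cannot occur (the model name follows the question's table name; the CharField variant is a suggestion, not part of the original answers):

from django.db import models
from django.utils.translation import gettext_lazy as _


class MyProfile(models.Model):
    # Option 1: allow the column to be NULL so existing rows need no default.
    zipcode = models.IntegerField(_('zipcode'), null=True, blank=True)

    # Option 2 (often preferable): store zip codes as text so leading
    # zeros such as '02134' are preserved; max_length is valid here.
    # zipcode = models.CharField(_('zipcode'), max_length=5, blank=True, default='')

Either way, follow up with python3 manage.py makemigrations and python3 manage.py migrate.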
Q: check frequency of keyword in df in a text I have a given text string: text = """Alice has two apples and bananas. Apples are very healthy.""" and a dataframe: word apples bananas company I would like to add a column "frequency" which will count occurrences of each word in column "word" in the text. So the output should be as below: word frequency apples 2 bananas 1 company 0 A: Convert the text to lowercase and then use regex to convert it to a list of words. You might check out this page for learning purposes. Loop through each row in the dataset and use a lambda function to count the specific value in the previously created list. # Import and create the data import pandas as pd import re text = """Alice has two apples and bananas. Apples are very healthy.""" df = pd.DataFrame(data={'word':['apples','bananas','company']}) # Solution words_list = re.findall(r'\w+', text.lower()) df['Frequency'] = df['word'].apply(lambda x: words_list.count(x)) A: import pandas as pd df = pd.DataFrame(['apples', 'bananas', 'company'], columns=['word']) para = "Alice has two apples and bananas. Apples are very healthy.".lower() df['frequency'] = df['word'].apply(lambda x : para.count(x.lower())) word frequency 0 apples 2 1 bananas 1 2 company 0
check frequency of keyword in df in a text
I have a given text string: text = """Alice has two apples and bananas. Apples are very healthy.""" and a dataframe: word apples bananas company I would like to add a column "frequency" which will count occurrences of each word in column "word" in the text. So the output should be as below: word frequency apples 2 bananas 1 company 0
[ "\nConvert the text to lowercase and then use regex to convert it to a list of words. You might check out this page for learning purposes.\nLoop through each row in the dataset and use lambda function to count the specific value in the previously created list.\n\n# Import and create the data\nimport pandas as pd\nimport re\ntext = \"\"\"Alice has two apples and bananas. Apples are very healty.\"\"\"\ndf = pd.DataFrame(data={'word':['apples','bananas','company']})\n\n# Solution\nwords_list = re.findall(r'\\w+', text.lower())\ndf['Frequency'] = df['word'].apply(lambda x: words_list.count(x))\n\n", "import pandas as pd\ndf = pd.DataFrame(['apples', 'bananas', 'company'], columns=['word'])\npara = \"Alice has two apples and bananas. Apples are very healty.\".lower()\ndf['frequency'] = df['word'].apply(lambda x : para.count(x.lower()))\n\n word frequency\n0 apples 2\n1 bananas 1\n2 company 0\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074560829_dataframe_python.txt
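A hedged caveat on the second answer above: str.count() matches substrings, so a keyword like "an" would also be counted inside "bananas". Counting against a tokenized word list, as the first answer does, avoids that; here is a compact variant of the same idea using collections.Counter for O(1) lookups:

import re
from collections import Counter

import pandas as pd

text = """Alice has two apples and bananas. Apples are very healthy."""
df = pd.DataFrame({'word': ['apples', 'bananas', 'company']})

# Tokenize once, then look up each keyword; Counter returns 0 for misses.
counts = Counter(re.findall(r'\w+', text.lower()))
df['frequency'] = df['word'].map(lambda w: counts[w.lower()])
print(df)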
Q: Split numpy array into chunks I have an array x of 30 samples, and I wish to separate it out into chunks of 8 samples each in 2 different ways. First, I want to separate it avoiding any overlap so that I end up with 3 arrays of length 8 and the final array will be only 6 (due to some samples being missing). Secondly, I want to separate it so that the final array will be the last 2 samples of the previous array plus the final 6. Both methods preferably without for loops as I'm trying to optimise this for when I expand it to arrays with lengths in the ten thousands. I have tried using np.array_split as follows x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) y = np.array_split(x,np.ceil(len(x)/8)) However, that results in: y = [array([1, 1, 2, 1, 1, 2, 1, 0]), array([3, 1, 2, 2, 1, 2, 1, 1]), array([50, 1, 1, 1, 1, 4, 1]), array([11, 15, 0, 0, 1, 1, 0])] so y is clearly made up of 2x8 length arrays and 2x7 length arrays, not what I want. How do I go about achieving it the way I want? The first method is the more important, the second is a bonus. Thanks A: import numpy as np x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50 ,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) def split_remainder(x, chunk_size, axis=0): indices = np.arange(chunk_size, x.shape[axis], chunk_size) return np.array_split(x, indices, axis) split_remainder(x, 8) Check out the link below for reference: Similar answer A: """for the first you can use range""" x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) res = [x[i:i+8] for i in range(0, len(x), 8)] """for the second you could just pop the first item""" res.pop(0) print(res) A: Use utilspie from utilspie import iterutils x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) print(list(iterutils.get_chunks(x, 8))) Gives [array([1, 1, 2, 1, 1, 2, 1, 0]), #Length 8 array([3, 1, 2, 2, 1, 2, 1, 1]), #Length 8 array([50, 1, 1, 1, 1, 4, 1, 11]), #Length 8 array([15, 0, 0, 1, 1, 0])] #Length 6 Solution 2 Fill uneven array lengths with the above array elements using Bottleneck Complete code: import numpy as np from utilspie import iterutils import itertools from bottleneck import push x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) x =(list(iterutils.get_chunks(x, 8))) x_new=np.array(list(itertools.zip_longest(*x, fillvalue=np.nan))).T x_new=push(x_new, axis=0) print(x_new) Gives: [[ 1. 1. 2. 1. 1. 2. 1. 0.] #Length 8 [ 3. 1. 2. 2. 1. 2. 1. 1.] #Length 8 [50. 1. 1. 1. 1. 4. 1. 11.] #Length 8 [15. 0. 0. 1. 1. 0. 1. 11.]] #Length 8 A: You could split just the part of the array that produces your chunk size, then add back on an array of the final 8 values num = int(len(x)/8) y = np.array_split(x[:num*8], num) y += [x[-8:]]
Split numpy array into chunks
I have an array x of 30 samples, and I wish to separate it out into chunks of 8 samples each in 2 different ways. First, I want to separate it avoiding any overlap so that I end up with 3 arrays of length 8 and the final array will be only 6 (due to some samples being missing). Secondly, I want to separate it so that the final array will be the last 2 samples of the previous array plus the final 6. Both methods preferably without for loops as I'm trying to optimise this for when I expand it to arrays with lengths in the ten thousands. I have tried using np.array_split as follows x = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0]) y = np.array_split(x,np.ceil(len(x)/8)) However, that results in: y = [array([1, 1, 2, 1, 1, 2, 1, 0]), array([3, 1, 2, 2, 1, 2, 1, 1]), array([50, 1, 1, 1, 1, 4, 1]), array([11, 15, 0, 0, 1, 1, 0])] so y is clearly made up of 2x8 length arrays and 2x7 length arrays, not what I want. How do I go about achieving it the way I want? The first method is the more important, the second is a bonus. Thanks
[ "import numpy as np\n\nx = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50 ,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0])\n\ndef split_reminder(x, chunk_size, axis=0):\n indices = np.arange(chunk_size, x.shape[axis], chunk_size)\n return np.array_split(x, indices, axis)\n\nsplit_reminder(x, 8)\n\nCheckout the below link for reference:\nSimilar answer\n", "\"\"\"for the first you can use range\"\"\"\nx = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0])\nres = [x[i:i+8] for i in range(0, len(x), 8)]\n\"\"\"for the second you could just pop the first item\"\"\"\nres.pop(0)\nprint(res)\n\n", "Use utilpsace\nfrom utilspie import iterutils\n\nx = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0])\n\nprint(list(iterutils.get_chunks(x, 8)))\n\nGives\n[array([1, 1, 2, 1, 1, 2, 1, 0]), #Length 8\n array([3, 1, 2, 2, 1, 2, 1, 1]), #Length 8\n array([50, 1, 1, 1, 1, 4, 1, 11]), #Length 8\n array([15, 0, 0, 1, 1, 0])] #Length 6\n\nSolution 2\nFill uneven array lengths with above array elements using bottleNeck\nComplete code. ##\nimport numpy as np\nfrom utilspie import iterutils\nimport itertools\nfrom bottleneck import push\n\nx = np.array([1 ,1, 2 ,1 ,1 ,2 ,1, 0 ,3, 1, 2 ,2, 1, 2, 1, 1,50,1 ,1, 1, 1, 4, 1, 11, 15, 0, 0, 1, 1,0])\n\nx =(list(iterutils.get_chunks(x, 8)))\n\nx_new=np.array(list(itertools.zip_longest(*x, fillvalue=np.nan))).T\n\nx_new=push(x_new, axis=0)\nprint(x_new)\n\nGives #\n[[ 1. 1. 2. 1. 1. 2. 1. 0.] #Length 8\n [ 3. 1. 2. 2. 1. 2. 1. 1.] #Length 8\n [50. 1. 1. 1. 1. 4. 1. 11.] #Length 8\n [15. 0. 0. 1. 1. 0. 1. 11.]] #Length 8\n\n", "You could split just the part of the array will produces your chunk size then add back on an array of the final 8 values\nnum = int(len(x)/8)\ny = np.array_split(x[:num*8], num)\ny += [x[-9:-1]]\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074560670_arrays_numpy_python.txt
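For the question's second method, none of the answers above is quite exact: the final chunk should be the last 8 samples, so it overlaps the previous chunk by 2 for this 30-element input. A hedged loop-free sketch of both methods (the overlap arithmetic is an assumption based on the question's wording):

import numpy as np

x = np.array([1, 1, 2, 1, 1, 2, 1, 0, 3, 1, 2, 2, 1, 2, 1, 1, 50, 1, 1, 1,
              1, 4, 1, 11, 15, 0, 0, 1, 1, 0])
size = 8

# Method 1: non-overlapping chunks; the final chunk keeps the remainder (6 here).
chunks = np.array_split(x, np.arange(size, len(x), size))

# Method 2: same chunks, but the final one is pulled back so it is full
# length, overlapping the previous chunk (by 2 samples for this input).
chunks2 = chunks[:-1] + [x[-size:]]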
Q: Python Tkinter, Display Live Data I want to display live data in a GUI, in tkinter. The data I am getting contains a list of two integers [current, voltage]. I am getting new data every second. I managed to create a GUI, now I want to know how to display data in GUI Label widgets (python tkinter) and update labels dynamically. Any suggestions please Here is my code so far: #data getting is a list eg. [10, 12] from tkinter import * import tkinter.font #main Window using Tk win = Tk() win.title("v1.0") win.geometry('800x480') win.configure(background='#CD5C5C') #Labels voltage = Label(win, text = "voltage") voltage.place(x=15, y=100) current = Label(win, text = "current") current.place(x=15, y=200) #display measured values #how to display here !!! currentValues = Label(win, text = "want to display somewhere like this") currentValues.place(x=200, y=100) voltageValues = Label(win, text = "want to display somewhere like this") voltageValues.place(x=200, y=200) mainloop() A: If you want to graph your live data and want to avoid using other libraries to do that for you, you might find the following to be an enlightening starting point for creating your own graphs. The sample draws a full circle of values when evaluating the math.sin function that comes in the standard library. The code takes into account automatic sampling, resizing, and updating as needed and should be fairly responsive. #! /usr/bin/env python3 import math import threading import time import tkinter.ttk import uuid from tkinter.constants import EW, NSEW, SE class Application(tkinter.ttk.Frame): FPS = 10 # frames per second used to update the graph MARGINS = 10, 10, 10, 10 # internal spacing around the graph @classmethod def main(cls): tkinter.NoDefaultRoot() root = tkinter.Tk() root.title('Tkinter Graphing') # noinspection SpellCheckingInspection root.minsize(640, 480) # VGA (NTSC) cls(root).grid(sticky=NSEW) root.grid_rowconfigure(0, weight=1) root.grid_columnconfigure(0, weight=1) root.mainloop() def __init__(self, master=None, **kw): super().__init__(master, **kw) self.display = tkinter.Canvas(self, background='white') self.display.bind('<Configure>', self.draw) self.start = StatefulButton(self, 'Start Graphing', self.start_graph) self.grip = tkinter.ttk.Sizegrip(self) self.grid_widgets(padx=5, pady=5) self.data_source = DataSource() self.after_idle(self.update_graph, round(1000 / self.FPS)) self.run_graph = None def grid_widgets(self, **kw): self.display.grid(row=0, column=0, columnspan=2, sticky=NSEW, **kw) self.start.grid(row=1, column=0, sticky=EW, **kw) self.grip.grid(row=1, column=1, sticky=SE) self.grid_rowconfigure(0, weight=1) self.grid_columnconfigure(0, weight=1) def start_graph(self): self.run_graph = True threading.Thread(target=self.__simulate, daemon=True).start() return 'Stop Graphing', self.stop_graph def stop_graph(self): self.run_graph = False return 'Clear Graph', self.clear_graph def clear_graph(self): self.data_source.clear() self.reset_display() return 'Start Graphing', self.start_graph # def __simulate(self): # # simulate changing populations # for population in itertools.count(): # if not self.run_graph: # break # self.data_source.append(population, get_max_age(population, 200)) # def __simulate(self): # # simulate changing ages # for age in itertools.count(1): # if not self.run_graph: # break # self.data_source.append(age, get_max_age(250_000_000, age)) def __simulate(self): # draw a sine curve for x in range(800): time.sleep(0.01) if not self.run_graph: break self.data_source.append(x, math.sin(x * math.pi / 400)) def update_graph(self, rate, previous_version=None): if previous_version is None: self.reset_display() current_version = self.data_source.version if current_version != previous_version: data_source = self.data_source.copy() self.draw(data_source) self.after(rate, self.update_graph, rate, current_version) def reset_display(self): self.display.delete('data') self.display.create_line((0, 0, 0, 0), tag='data', fill='black') def draw(self, data_source): if not isinstance(data_source, DataSource): data_source = self.data_source.copy() if data_source: self.display.coords('data', *data_source.frame( self.MARGINS, self.display.winfo_width(), self.display.winfo_height(), True )) class StatefulButton(tkinter.ttk.Button): def __init__(self, master, text, command, **kw): kw.update(text=text, command=self.__do_command) super().__init__(master, **kw) self.__command = command def __do_command(self): self['text'], self.__command = self.__command() def new(obj): kind = type(obj) return kind.__new__(kind) def interpolate(x, y, z): return x * (1 - z) + y * z def interpolate_array(array, z): if z <= 0: return array[0] if z >= 1: return array[-1] share = 1 / (len(array) - 1) index = int(z / share) x, y = array[index:index + 2] return interpolate(x, y, z % share / share) def sample(array, count): scale = count - 1 return tuple(interpolate_array(array, z / scale) for z in range(count)) class DataSource: EMPTY = uuid.uuid4() def __init__(self): self.__x = [] self.__y = [] self.__version = self.EMPTY self.__mutex = threading.Lock() @property def version(self): return self.__version def copy(self): instance = new(self) with self.__mutex: instance.__x = self.__x.copy() instance.__y = self.__y.copy() instance.__version = self.__version instance.__mutex = threading.Lock() return instance def __bool__(self): return bool(self.__x or self.__y) def frame(self, margins, width, height, auto_sample=False, timing=False): if timing: start = time.perf_counter() x1, y1, x2, y2 = margins drawing_width = width - x1 - x2 drawing_height = height - y1 - y2 with self.__mutex: x_tuple = tuple(self.__x) y_tuple = tuple(self.__y) if auto_sample and len(x_tuple) > drawing_width: x_tuple = sample(x_tuple, drawing_width) y_tuple = sample(y_tuple, drawing_width) max_y = max(y_tuple) x_scaling_factor = max(x_tuple) - min(x_tuple) y_scaling_factor = max_y - min(y_tuple) coords = tuple( coord for x, y in zip(x_tuple, y_tuple) for coord in ( round(x1 + drawing_width * x / x_scaling_factor), round(y1 + drawing_height * (max_y - y) / y_scaling_factor))) if timing: # noinspection PyUnboundLocalVariable print(f'len = {len(coords) >> 1}; ' f'sec = {time.perf_counter() - start:.6f}') return coords def append(self, x, y): with self.__mutex: self.__x.append(x) self.__y.append(y) self.__version = uuid.uuid4() def clear(self): with self.__mutex: self.__x.clear() self.__y.clear() self.__version = self.EMPTY def extend(self, iterable): with self.__mutex: for x, y in iterable: self.__x.append(x) self.__y.append(y) self.__version = uuid.uuid4() if __name__ == '__main__': Application.main() A: You can change label text dynamically: This is a way using the textvariable option with StringVar and the .set() method str_var = tk.StringVar(value="Default") currentValues= Label(win, textvariable=str_var) currentValues.place(x=200, y=100) str_var.set("New value") Another way using simply the .configure() method currentValues = Label(win, text = "default") currentValues.configure(text="New value") Finally, to make the UI update without waiting for the rest of the loop, do an update: win.update() A: This answer describes a complete minimalistic realtime plot. It is inspired by a demo. The whole script is below: import matplotlib.pyplot as plt import numpy as np import tkinter as tk from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk from tkinter import ttk x_data, y_data = [], [] class Win(tk.Tk): def __init__(self): super().__init__() self.title('I-V liveplot') self.geometry('500x450') # Frame that holds widgets on the left side left_frame = ttk.Frame(self) left_frame.pack(side= "left", padx =10, pady = 10) #, fill="y", expand=True self.fig = plt.figure(figsize=(4, 3.5), dpi=100) self.ax = self.fig.add_subplot(1,1,1) self.line, = self.ax.plot([0], [0]) self.ax.set_xlabel('Voltage / V', fontsize = 12) self.ax.set_ylabel('Current / A', fontsize = 12) self.fig.tight_layout() self.canvas = FigureCanvasTkAgg(self.fig, master=self) self.toolbar = NavigationToolbar2Tk(self.canvas, self) self.canvas.get_tk_widget().pack(side= tk.BOTTOM) voltage_range_label = tk.Label(left_frame, text = "Voltage range") voltage_range_label.pack(side = "top", padx =10, pady =2) self.voltage_range = tk.IntVar() self.voltage_range.set(10) voltage_range_spinbox = ttk.Spinbox(left_frame, from_=-3e2, to = 5e2, textvariable = self.voltage_range, width=8) voltage_range_spinbox.pack(side="top", padx =10, pady =5) voltage_step_label = tk.Label(left_frame, text = "Step") voltage_step_label.pack(side = "top", padx =10, pady =2) self.step = tk.IntVar() self.step.set(1) step_spinbox = ttk.Spinbox(left_frame, from_=-3e2, to = 5e2, textvariable = self.step, width =9) step_spinbox.pack(side="top", padx =10, pady =5) self.start = tk.BooleanVar(value = False) start_butt = ttk.Button(left_frame, text="Start", command= lambda: self.start.set(True)) start_butt.pack(side='top', padx =10, pady =10) stop_butt = ttk.Button(left_frame, text="Resume", command=lambda: self.is_paused.set(False)) stop_butt.pack(side="top", padx =10, pady =10) self.is_paused = tk.BooleanVar() # variable to hold the pause/resume state restart_butt = ttk.Button(left_frame, text="Pause", command=lambda: self.is_paused.set(True)) restart_butt.pack(side="top", padx =10, pady =10) def update(self, k=1): if self.start.get() and not self.is_paused.get(): # quasi For Loop idx = [i for i in range(0, k, self.step.get())][-1] x_data.append(idx) y_data.append(np.sin(idx/5)) self.line.set_data(x_data, y_data) self.fig.gca().relim() self.fig.gca().autoscale_view() self.canvas.draw() #self.canvas.flush_events() k += self.step.get() if k <= self.voltage_range.get(): self.after(1000, self.update, k) if __name__ == "__main__": app = Win() app.after(1000, app.update) app.mainloop() This code works properly and results in the output shown in the graph. I hope it is helpful. A: I want to display some live data in a GUI. I think what you want to do is use the .after() method. The .after() method queues tkinter to run some code after a set time. For example: currentValues = Label(win, text = "want to display somewhere like this") currentValues.place(x=200, y=100) voltageValues = Label(win, text = "want to display somewhere like this") voltageValues.place(x=200, y=200) def live_update(): currentValues['text'] = updated_value voltageValues['text'] = updated_value win.after(1000, live_update) # 1000 is equivalent to 1 second (closest you'll get) live_update() # to start the update loop 1000 units in the after method is the closest you'll get to 1 second exactly.
Python Tkinter, Display Live Data
I want to display live data in a GUI, in tkinter. The data I am getting contains a list of two integers [current, voltage]. I am getting new data every second. I managed to create a GUI, now I want to know how to display data in GUI Label widgets (python tkinter) and update labels dynamically. Any suggestions please Here is my code so far: #data getting is a list eg. [10, 12] from tkinter import * import tkinter.font #main Window using Tk win = Tk() win.title("v1.0") win.geometry('800x480') win.configure(background='#CD5C5C') #Labels voltage = Label(win, text = "voltage") voltage.place(x=15, y=100) current = Label(win, text = "current") current.place(x=15, y=200) #display measured values #how to display here !!! currentValues = Label(win, text = "want to display somewhere like this") currentValues.place(x=200, y=100) voltageValues = Label(win, text = "want to display somewhere like this") voltageValues.place(x=200, y=200) mainloop()
[ "If you want to graph your live data and want to avoid using other libraries to do that for you, you might find the following to be an enlightening starting point for creating your own graphs. The sample draws a full circle of values when evaluating the math.sin function that comes in the standard library. The code takes into account automatic sampling, resizing, and updating as needed and should be fairly responsive.\n#! /usr/bin/env python3\nimport math\nimport threading\nimport time\nimport tkinter.ttk\nimport uuid\nfrom tkinter.constants import EW, NSEW, SE\n\n\nclass Application(tkinter.ttk.Frame):\n FPS = 10 # frames per second used to update the graph\n MARGINS = 10, 10, 10, 10 # internal spacing around the graph\n\n @classmethod\n def main(cls):\n tkinter.NoDefaultRoot()\n root = tkinter.Tk()\n root.title('Tkinter Graphing')\n # noinspection SpellCheckingInspection\n root.minsize(640, 480) # VGA (NTSC)\n cls(root).grid(sticky=NSEW)\n root.grid_rowconfigure(0, weight=1)\n root.grid_columnconfigure(0, weight=1)\n root.mainloop()\n\n def __init__(self, master=None, **kw):\n super().__init__(master, **kw)\n self.display = tkinter.Canvas(self, background='white')\n self.display.bind('<Configure>', self.draw)\n self.start = StatefulButton(self, 'Start Graphing', self.start_graph)\n self.grip = tkinter.ttk.Sizegrip(self)\n self.grid_widgets(padx=5, pady=5)\n self.data_source = DataSource()\n self.after_idle(self.update_graph, round(1000 / self.FPS))\n self.run_graph = None\n\n def grid_widgets(self, **kw):\n self.display.grid(row=0, column=0, columnspan=2, sticky=NSEW, **kw)\n self.start.grid(row=1, column=0, sticky=EW, **kw)\n self.grip.grid(row=1, column=1, sticky=SE)\n self.grid_rowconfigure(0, weight=1)\n self.grid_columnconfigure(0, weight=1)\n\n def start_graph(self):\n self.run_graph = True\n threading.Thread(target=self.__simulate, daemon=True).start()\n return 'Stop Graphing', self.stop_graph\n\n def stop_graph(self):\n self.run_graph = False\n return 'Clear Graph', self.clear_graph\n\n def clear_graph(self):\n self.data_source.clear()\n self.reset_display()\n return 'Start Graphing', self.start_graph\n\n # def __simulate(self):\n # # simulate changing populations\n # for population in itertools.count():\n # if not self.run_graph:\n # break\n # self.data_source.append(population, get_max_age(population, 200))\n\n # def __simulate(self):\n # # simulate changing ages\n # for age in itertools.count(1):\n # if not self.run_graph:\n # break\n # self.data_source.append(age, get_max_age(250_000_000, age))\n\n def __simulate(self):\n # draw a sine curve\n for x in range(800):\n time.sleep(0.01)\n if not self.run_graph:\n break\n self.data_source.append(x, math.sin(x * math.pi / 400))\n\n def update_graph(self, rate, previous_version=None):\n if previous_version is None:\n self.reset_display()\n current_version = self.data_source.version\n if current_version != previous_version:\n data_source = self.data_source.copy()\n self.draw(data_source)\n self.after(rate, self.update_graph, rate, current_version)\n\n def reset_display(self):\n self.display.delete('data')\n self.display.create_line((0, 0, 0, 0), tag='data', fill='black')\n\n def draw(self, data_source):\n if not isinstance(data_source, DataSource):\n data_source = self.data_source.copy()\n if data_source:\n self.display.coords('data', *data_source.frame(\n self.MARGINS,\n self.display.winfo_width(),\n self.display.winfo_height(),\n True\n ))\n\n\nclass StatefulButton(tkinter.ttk.Button):\n def __init__(self, master, text, command, 
**kw):\n kw.update(text=text, command=self.__do_command)\n super().__init__(master, **kw)\n self.__command = command\n\n def __do_command(self):\n self['text'], self.__command = self.__command()\n\n\ndef new(obj):\n kind = type(obj)\n return kind.__new__(kind)\n\n\ndef interpolate(x, y, z):\n return x * (1 - z) + y * z\n\n\ndef interpolate_array(array, z):\n if z <= 0:\n return array[0]\n if z >= 1:\n return array[-1]\n share = 1 / (len(array) - 1)\n index = int(z / share)\n x, y = array[index:index + 2]\n return interpolate(x, y, z % share / share)\n\n\ndef sample(array, count):\n scale = count - 1\n return tuple(interpolate_array(array, z / scale) for z in range(count))\n\n\nclass DataSource:\n EMPTY = uuid.uuid4()\n\n def __init__(self):\n self.__x = []\n self.__y = []\n self.__version = self.EMPTY\n self.__mutex = threading.Lock()\n\n @property\n def version(self):\n return self.__version\n\n def copy(self):\n instance = new(self)\n with self.__mutex:\n instance.__x = self.__x.copy()\n instance.__y = self.__y.copy()\n instance.__version = self.__version\n instance.__mutex = threading.Lock()\n return instance\n\n def __bool__(self):\n return bool(self.__x or self.__y)\n\n def frame(self, margins, width, height, auto_sample=False, timing=False):\n if timing:\n start = time.perf_counter()\n x1, y1, x2, y2 = margins\n drawing_width = width - x1 - x2\n drawing_height = height - y1 - y2\n with self.__mutex:\n x_tuple = tuple(self.__x)\n y_tuple = tuple(self.__y)\n if auto_sample and len(x_tuple) > drawing_width:\n x_tuple = sample(x_tuple, drawing_width)\n y_tuple = sample(y_tuple, drawing_width)\n max_y = max(y_tuple)\n x_scaling_factor = max(x_tuple) - min(x_tuple)\n y_scaling_factor = max_y - min(y_tuple)\n coords = tuple(\n coord\n for x, y in zip(x_tuple, y_tuple)\n for coord in (\n round(x1 + drawing_width * x / x_scaling_factor),\n round(y1 + drawing_height * (max_y - y) / y_scaling_factor)))\n if timing:\n # noinspection PyUnboundLocalVariable\n print(f'len = {len(coords) >> 1}; '\n f'sec = {time.perf_counter() - start:.6f}')\n return coords\n\n def append(self, x, y):\n with self.__mutex:\n self.__x.append(x)\n self.__y.append(y)\n self.__version = uuid.uuid4()\n\n def clear(self):\n with self.__mutex:\n self.__x.clear()\n self.__y.clear()\n self.__version = self.EMPTY\n\n def extend(self, iterable):\n with self.__mutex:\n for x, y in iterable:\n self.__x.append(x)\n self.__y.append(y)\n self.__version = uuid.uuid4()\n\n\nif __name__ == '__main__':\n Application.main()\n\n", "You can change label text dynamically:\nThis is a way using textvariable option with StringVar and .set() method\nstr_var = tk.StringVar(value=\"Default\")\n\ncurrentValues= Label(win, textvariable=my_string_var)\ncurrentValues.place(x=200, y=100)\n\nstr_var.set(\"New value\")\n\nAnother way using simply .configure() method\ncurrentValues = Label(win, text = \"default\")\ncurrentValues.configure(text=\"New value\")\n\nFinally, to make the UI update without waiting the rest of the loop do an update\nwin.update()\n\n", "This answer describes a complete minimalistic realtime plot. 
It is inspired by a demo.\nThe whole script is below:\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tkinter as tk\nfrom matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk\nfrom tkinter import ttk\n\nx_data, y_data = [], []\n\nclass Win(tk.Tk):\n\n def __init__(self):\n \n super().__init__()\n \n self.title('I-V liveplot')\n self.geometry('500x450')\n\n # Frame that holds wigets on the left side\n left_frame = ttk.Frame(self)\n left_frame.pack(side= \"left\", padx =10, pady = 10) #, fill=\"y\", expand=True\n \n self.fig = plt.figure(figsize=(4, 3.5), dpi=100)\n self.ax = self.fig.add_subplot(1,1,1)\n self.line, = self.ax.plot([0], [0])\n self.ax.set_xlabel('Voltage / V', fontsize = 12)\n self.ax.set_ylabel('Current / A', fontsize = 12)\n self.fig.tight_layout()\n self.canvas = FigureCanvasTkAgg(self.fig, master=self)\n self.toolbar = NavigationToolbar2Tk(self.canvas, self)\n self.canvas.get_tk_widget().pack(side= tk.BOTTOM)\n \n \n voltage_range_label = tk.Label(left_frame, text = \"Voltage range\")\n voltage_range_label.pack(side = \"top\", padx =10, pady =2)\n self.voltage_range = tk.IntVar()\n self.voltage_range.set(10)\n voltage_range_spinbox = ttk.Spinbox(left_frame, from_=-3e2, to = 5e2, textvariable = self.voltage_range, width=8)\n voltage_range_spinbox.pack(side=\"top\", padx =10, pady =5)\n \n voltage_step_label = tk.Label(left_frame, text = \"Step\")\n voltage_step_label.pack(side = \"top\", padx =10, pady =2)\n self.step = tk.IntVar()\n self.step.set(1)\n step_spinbox = ttk.Spinbox(left_frame, from_=-3e2, to = 5e2, textvariable = self.step, width =9)\n step_spinbox.pack(side=\"top\", padx =10, pady =5)\n \n self.start = tk.BooleanVar(value = False)\n start_butt = ttk.Button(left_frame, text=\"Start\", command= lambda: self.start.set(True))\n start_butt.pack(side='top', padx =10, pady =10)\n \n stop_butt = ttk.Button(left_frame, text=\"Resume\", command=lambda: self.is_paused.set(False))\n stop_butt.pack(side=\"top\", padx =10, pady =10) \n \n self.is_paused = tk.BooleanVar() # variable to hold the pause/resume state\n restart_butt = ttk.Button(left_frame, text=\"Pause\", command=lambda: self.is_paused.set(True))\n restart_butt.pack(side=\"top\", padx =10, pady =10)\n\n def update(self, k=1):\n\n if self.start.get() and not self.is_paused.get(): \n\n # quasi For Loop\n idx = [i for i in range(0, k, self.step.get())][-1]\n x_data.append(idx)\n y_data.append(np.sin(idx/5))\n self.line.set_data(x_data, y_data)\n self.fig.gca().relim()\n self.fig.gca().autoscale_view()\n self.canvas.draw()\n #self.canvas.flush_events()\n k += self.step.get()[![enter image description here][2]][2]\n \n if k <= self.voltage_range.get():\n \n self.after(1000, self.update, k)\n\n\nif __name__ == \"__main__\":\n app = Win()\n app.after(1000, app.update)\n app.mainloop()\n\nThis code works properly and results in an output shown in Graph. I hope it would be helpful.\n.\n", "\nI want to display some live data in a GUI.\n\nI think what you want to do is use the .after() method. 
The .after() method queues tkinter to run some code after a set time.\nFor example:\ncurrentValues = Label(win, text = \"want to display somewhere like this\")\ncurrentValues.place(x=200, y=100)\n\nvoltageValues = Label(win, text = \"want to display somewhere like this\")\nvoltageValues.place(x=200, y=200)\n\n\ndef live_update():\n currentValues['text'] = updated_value\n voltageValues['text'] = updated_value\n win.after(1000, live_update) # 1000 is equivalent to 1 second (closest you'll get)\n\nlive_update() # to start the update loop\n\n1000 units in the after method is the closest you'll get to 1 second exactly.\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "python", "python_3.x", "tkinter", "user_interface" ]
stackoverflow_0056275043_python_python_3.x_tkinter_user_interface.txt
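Distilling the .after() answer above into a complete minimal script for the original current/voltage use case (the data pairs are simulated with random values, an assumption standing in for whatever source delivers [current, voltage] each second):

import random
import tkinter as tk

win = tk.Tk()
win.title('v1.0')

current_var = tk.StringVar(value='--')
voltage_var = tk.StringVar(value='--')
tk.Label(win, text='current').grid(row=0, column=0)
tk.Label(win, textvariable=current_var).grid(row=0, column=1)
tk.Label(win, text='voltage').grid(row=1, column=0)
tk.Label(win, textvariable=voltage_var).grid(row=1, column=1)

def poll():
    # Stand-in for the real measurement, e.g. [10, 12].
    current, voltage = random.randint(0, 20), random.randint(0, 240)
    current_var.set(str(current))
    voltage_var.set(str(voltage))
    win.after(1000, poll)  # re-run once per second without blocking the GUI

poll()
win.mainloop()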
Q: How to make voice detection in Python faster? I have some voice detection code and it works! But it runs really slowly. Can I do anything to make it faster? import speech_recognition import pyttsx3 recognizer = speech_recognition.Recognizer() while True: try: with speech_recognition.Microphone() as mic: recognizer.adjust_for_ambient_noise(mic, duration=0.2) audio = recognizer.listen(mic) text = recognizer.recognize_google(audio) text = text.lower() print(f" {text}") except speech_recognition.UnknownValueError(): recognizer = speech_recognition.Recognizer() continue A: Try creating the mic once instead of on each iteration: import speech_recognition import pyttsx3 recognizer = speech_recognition.Recognizer() with speech_recognition.Microphone() as mic: while True: try: recognizer.adjust_for_ambient_noise(mic, duration=0.2) audio = recognizer.listen(mic) text = recognizer.recognize_google(audio) text = text.lower() print(f" {text}") except speech_recognition.UnknownValueError: pass
How to make voice detection in Python faster?
I have some voice detection code and it works! But it runs really slowly. Can I do anything to make it faster? import speech_recognition import pyttsx3 recognizer = speech_recognition.Recognizer() while True: try: with speech_recognition.Microphone() as mic: recognizer.adjust_for_ambient_noise(mic, duration=0.2) audio = recognizer.listen(mic) text = recognizer.recognize_google(audio) text = text.lower() print(f" {text}") except speech_recognition.UnknownValueError(): recognizer = speech_recognition.Recognizer() continue
[ "Try creating mic once instead of each iteration:\nimport speech_recognition\nimport pyttsx3\n\nrecognizer = speech_recognition.Recognizer()\n\n\nwith speech_recognition.Microphone() as mic:\n while True:\n try:\n recognizer.adjust_for_ambient_noise(mic, duration=0.2)\n audio = recognizer.listen(mic)\n\n text = recognizer.recognize_google(audio)\n text = text.lower()\n print(f\" {text}\")\n except speech_recognition.UnknownValueError():\n pass\n\n" ]
[ 1 ]
[]
[]
[ "python", "pyttsx3", "voice_recognition" ]
stackoverflow_0074560606_python_pyttsx3_voice_recognition.txt
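Two more hedged speed levers on top of moving the microphone setup out of the loop: calibrate for ambient noise once rather than every iteration, and cap how long listen() waits with phrase_time_limit (both are documented speech_recognition options; the exact timing values are guesses to tune). Note also that except must name the exception class without parentheses:

import speech_recognition

recognizer = speech_recognition.Recognizer()

with speech_recognition.Microphone() as mic:
    # Calibrate once; doing this every loop adds ~0.2 s per utterance.
    recognizer.adjust_for_ambient_noise(mic, duration=0.5)
    while True:
        try:
            # phrase_time_limit stops listening after a few seconds instead
            # of waiting for a long silence; tune to taste.
            audio = recognizer.listen(mic, phrase_time_limit=5)
            text = recognizer.recognize_google(audio).lower()
            print(text)
        except speech_recognition.UnknownValueError:
            continue

Keep in mind that recognize_google is a network call, which is usually the dominant latency; a local engine (e.g. recognize_sphinx, if its dependencies are installed) may respond faster at some accuracy cost.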
Q: How to send request to a public web service with Python? I need a guide to establish a connection to a public web service, send a request to it, and get a response back. For example, this web service: http://webservices.oorsprong.org/websamples.countryinfo/CountryInfoService.wso?WSDL I've tried to test this API with the SoapUI application. This API has a bunch of methods, such as sending you a country's capital given its ISO code (send IR as the request and get Tehran as the response). Now somehow I want to do this through Python. I want to have access to all its methods and send requests, either by just connecting to the API's address or any other way (maybe by loading each method's XML code and running it in Python? I don't know). Is it possible? Any guide? A: I'd recommend checking out this library: https://requests.readthedocs.io/en/latest/ You can send HTTP requests to URL endpoints, parse out data, etc. Hope this helps! A: I have successfully used suds and SOAPpy in the past. I see people recommend Zeep nowadays but I haven't used it.
How to send request to a public web service with Python?
I need a guide to establish a connection to a public web service, send a request to it, and get a response back. For example, this web service: http://webservices.oorsprong.org/websamples.countryinfo/CountryInfoService.wso?WSDL I've tried to test this API with the SoapUI application. This API has a bunch of methods, such as sending you a country's capital given its ISO code (send IR as the request and get Tehran as the response). Now somehow I want to do this through Python. I want to have access to all its methods and send requests, either by just connecting to the API's address or any other way (maybe by loading each method's XML code and running it in Python? I don't know). Is it possible? Any guide?
[ "I'd recommend checking out this library:\nhttps://requests.readthedocs.io/en/latest/\nYou can send HTTP requests to URL endpoints, parse out data, etc. Hope this helps!\n", "I have successfully used suds and SOAPpy in the past. I see people recommend Zeep nowadays but I haven't used it.\n" ]
[ 1, 1 ]
[]
[]
[ "python", "soap", "soapui", "web_services", "xml" ]
stackoverflow_0074549589_python_soap_soapui_web_services_xml.txt
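To make the Zeep suggestion above concrete: Zeep reads the WSDL and exposes each SOAP operation as a Python method, so no hand-written XML is needed. A hedged sketch against the CountryInfoService from the question (the CapitalCity operation and its sCountryISOCode parameter are as advertised in that WSDL at the time of writing; verify against the live service before relying on them):

from zeep import Client  # pip install zeep

wsdl = ('http://webservices.oorsprong.org/websamples.countryinfo/'
        'CountryInfoService.wso?WSDL')
client = Client(wsdl)

# Each WSDL operation becomes a method on client.service.
print(client.service.CapitalCity(sCountryISOCode='IR'))  # e.g. 'Tehran'

# Discover the other operations and their signatures without reading raw XML:
client.wsdl.dump()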
Q: plt.show() creates the graph 2 times [Extra graph] https://i.stack.imgur.com/3euVn.png plt.show() is creating the graph 3 times while I am using plt.show() only 2 times, 1 in each script. 1 graph closes immediately, after about 1 sec. The code is as follows: from ScriptsTogather import new fig, axes = plt.subplots(2, 1, figsize=(4, 4), num='pyplot') plt.show(block=False) def process_msg(msg): fig.canvas. fig.canvas.flush_events() def read_mindray(): Read data if __name__ == "__main__": try: thread_mindray = multiprocessing.Process(target=read, daemon=True) thread_mindray.start() new() except: raise Exception print('end?') A: I managed to solve it. I was calling plt.show() outside the function, which was making an empty graph, and then canvas.draw was making another graph.
plt.show() creates the graph 2 times
[Extra graph] https://i.stack.imgur.com/3euVn.png plt.show() is creating the graph 3 times while I am using plt.show() only 2 times, 1 in each script. 1 graph closes immediately, after about 1 sec. The code is as follows: from ScriptsTogather import new fig, axes = plt.subplots(2, 1, figsize=(4, 4), num='pyplot') plt.show(block=False) def process_msg(msg): fig.canvas. fig.canvas.flush_events() def read_mindray(): Read data if __name__ == "__main__": try: thread_mindray = multiprocessing.Process(target=read, daemon=True) thread_mindray.start() new() except: raise Exception print('end?')
[ "I managed to solve it. I was calling plt.show() outside the function that was making an empty graph and then for canvass.draw it was making another graph.\n" ]
[ 1 ]
[]
[]
[ "charts", "matplotlib", "python" ]
stackoverflow_0074560379_charts_matplotlib_python.txt
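A hedged sketch of the pattern the self-answer above describes: create the figure in one place and call plt.show() exactly once, letting the update code only redraw the existing canvas rather than spawning a second window (the update logic here is a stand-in for the question's message handler):

import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(4, 4), num='pyplot')

def process_msg(msg):
    # Redraw the existing figure; no extra plt.show() here.
    axes[0].plot(msg)
    fig.canvas.draw_idle()
    fig.canvas.flush_events()

process_msg([1, 2, 3])   # stand-in for real incoming data
plt.show()               # called exactly once, after everything is wired up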
Q: How to label a line in matplotlib (python)? I followed the documentation but still failed to label a line. plt.plot([min(np.array(positions)[:,0]), max(np.array(positions)[:,0])], [0,0], color='k', label='East') # West-East plt.plot([0,0], [min(np.array(positions)[:,1]), max(np.array(positions)[:,1])], color='k', label='North') # South-North In the code snippet above, I am trying to plot out the North direction and the East direction. positions contains the points to be plotted. But I end up with 2 straight lines with NO labels as follows: What went wrong? A: The argument label is used to set the string that will be shown in the legend. For example consider the following snippet: import matplotlib.pyplot as plt plt.plot([1,2,3],'r-',label='Sample Label Red') plt.plot([0.5,2,3.5],'b-',label='Sample Label Blue') plt.legend() plt.show() This will plot 2 lines as shown: The arrow function supports labels. Do check this link: http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.arrow A: When adding the label attribute, don't forget to add the .legend() method. import matplotlib.pyplot as plt plt.plot([1,2],[3,5],'ro',label='one') plt.plot([1,2],[1,2],'g^',label='two') plt.plot([1,2],[1,6],'bs',label='three') plt.axis([0,4,0,10]) plt.ylabel('x2') plt.xlabel('x1') plt.legend() plt.show() A: This should work: plt.legend(YourDataFrame.columns)
How to label a line in matplotlib (python)?
I followed the documentation but still failed to label a line. plt.plot([min(np.array(positions)[:,0]), max(np.array(positions)[:,0])], [0,0], color='k', label='East') # West-East plt.plot([0,0], [min(np.array(positions)[:,1]), max(np.array(positions)[:,1])], color='k', label='North') # South-North In the code snippet above, I am trying to plot out the North direction and the East direction. positions contains the points to be plotted. But I end up with 2 straight lines with NO labels as follows: What went wrong?
[ "The argument label is used to set the string that will be shown in the legend. For example consider the following snippet:\n import matplotlib.pyplot as plt\n plt.plot([1,2,3],'r-',label='Sample Label Red')\n plt.plot([0.5,2,3.5],'b-',label='Sample Label Blue')\n plt.legend()\n plt.show()\n\nThis will plot 2 lines as shown:\n\nThe arrow function supports labels. Do check this link:\nhttp://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.arrow\n", "when adding the label attribute, don't forget to add .legend() method.\nimport matplotlib.pyplot as plt\nplt.plot([1,2],[3,5],'ro',label='one')\nplt.plot([1,2],[1,2],'g^',label='two')\nplt.plot([1,2],[1,6],'bs',label='three')\nplt.axis([0,4,0,10])\nplt.ylabel('x2')\nplt.xlabel('x1')\nplt.legend()\nplt.show()\n\n\n", "This should work :\nplt.legend(YourDataFrame.columns)\n\n" ]
[ 48, 4, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0017941083_matplotlib_python.txt
Q: How to parse this XML file which has two root elements? <objects> <object> <record> <net_amount>3657.82</net_amount> <order_number>47004603</order_number> <invoice_source>Email</invoice_source> <invoice_capture_date>2022-11-13</invoice_capture_date> <document_type>INVOICE</document_type> <data_capture_provider_code>00001</data_capture_provider_code> <data_capture_provider_reference>594826</data_capture_provider_reference> <document_capture_provide_code>00002</document_capture_provide_code> <document_capture_provider_ref>594826</document_capture_provider_ref> </record> </object> </objects> How can I parse this XML data? This data has two "object" elements. When I remove one "object" I am able to parse it, but otherwise I cannot. for file in files: tree = ET.parse(file) root = tree.getroot() for i in root.findall("record"): net_amount = i.find("net_amount").text order_number = i.find("order_number").text When I use the above code I want to get the "net_amount" and "order_number", but it only works when I remove one object from the XML file. I have so many files like this; is there any method to make it work? Please help me A: You've done the hard part already, all you have to do is to wrap your code in a loop that will go through the object tags. for file in files: tree = ET.parse(file) root = tree.getroot() #This is the outer "objects" tags for obj in root.findall("object"): #Loop over all object in it for i in obj.findall("record"): #Resume the original search in the specific object tag rather than the outer one net_amount = i.find("net_amount").text order_number = i.find("order_number").text
How to parse this XML file which has two root elements?
<objects> <object> <record> <net_amount>3657.82</net_amount> <order_number>47004603</order_number> <invoice_source>Email</invoice_source> <invoice_capture_date>2022-11-13</invoice_capture_date> <document_type>INVOICE</document_type> <data_capture_provider_code>00001</data_capture_provider_code> <data_capture_provider_reference>594826</data_capture_provider_reference> <document_capture_provide_code>00002</document_capture_provide_code> <document_capture_provider_ref>594826</document_capture_provider_ref> </record> </object> </objects> how can i parse this xml data. this data have two "object" elements. when i remove one "object" i am able to parse this. but otherwise i cannot parse it. for file in files: tree = ET.parse(file) root = tree.getroot() for i in root.findall("record"): net_amount = i.find("net_amount").text order_number = i.find("order_number").text when i use this above code i want to get the "net_amount" and "order_number". but when i remove one object from the xml file it works fine. but i have so many files like this. is there any method to make it work. please help me
[ "You've done the hard part already, all you have to do is to wrap your code in a loop that will go through the object tags.\nfor file in files:\n tree = ET.parse(file)\n root = tree.getroot() #This is the outer \"objects\" tags\n for obj in root.findall(\"object\"): #Loop over all object in it\n for i in obj.findall(\"record\"): #Resume the original search in the specific object tag rather than the outer one\n net_amount = i.find(\"net_amount\").text\n order_number = i.find(\"order_number\").text\n\n" ]
[ 1 ]
[]
[]
[ "python", "xml" ]
stackoverflow_0074561227_python_xml.txt
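An alternative sketch, assuming the same file layout: ElementTree's XPath-style ".//" prefix matches record elements at any depth, so the per-object loop can be skipped entirely (the filename below is hypothetical):
import xml.etree.ElementTree as ET

tree = ET.parse("invoices.xml")  # hypothetical filename
root = tree.getroot()
for rec in root.findall(".//record"):  # descendant search: every record under any object
    net_amount = rec.find("net_amount").text
    order_number = rec.find("order_number").text
    print(order_number, net_amount)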
Q: Is it always necessary to use r before a path declaration in Python? I have often seen syntax like this in python code. import os os.chdir(r'C:\Users\test\Desktop') I was wondering why I would need to put r before the path. I believe it has something to do with '\' in the path. Is there any other way to give the path instead of using r''? A: It makes sure that the backslash doesn't escape the characters. It's the same as os.chdir('C:\\Users\\test\\Desktop') A: 'r' before a string literal makes Python parse it as a "raw" string, without escaping. If you don't want to use 'r' before the string literal but want to specify the path with single slashes, you can use this notation: "C:/Users/test/Desktop" As it would be in unix-based systems. Windows understands both "\" and "/" in file paths, so using "/" gives you the ability to avoid the 'r' prefix before the path string. Also, as was mentioned, you can specify the path with double backslashes, but, as I realized, this is not what you wanted: "C:\\Users\\test\\Desktop" A: Only when it has escape sequences print('C:\sys\cat\Desktop') It is better to use a raw string or forward slashes to avoid the glitches. A: You can use forward slashes in Windows as well, so you don't need raw string literals: >>> import os >>> os.stat(r'C:\Users\f3k\Desktop\excel.vbs') nt.stat_result(st_mode=33206, st_ino=0L, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=555L, st_atime=1367585162L, st_mtime=1367586148L, st_ctime=1367585162L) Same using forward slashes: >>> os.stat('C:/Users/f3k/Desktop/excel.vbs') nt.stat_result(st_mode=33206, st_ino=0L, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=555L, st_atime=1367585162L, st_mtime=1367586148L, st_ctime=1367585162L) But take care using os.path.join(): >>> os.path.join('C:/Users/f3k/Desktop', 'excel.vbs') 'C:/Users/f3k/Desktop\\excel.vbs' A: As per my knowledge, you can use a forward slash instead of a backslash and drop the r. If you use a backslash then you have to put r in front of it, or you can use a forward slash if you want to. Example -> You can try this in a Jupyter notebook: f = open(r'F:\love.txt', 'r') or f = open('F:/love.txt', 'r') Both will work fine.
Is it always necessary to use r before a path declaration in Python?
I have often seen syntax like this in python code. import os os.chdir(r'C:\Users\test\Desktop') I was wondering why I would need to put r before the path. I believe it has something to do with '\' in the path. Is there any other way to give the path instead of using r''?
[ "It makes sure that the backslash doesn't escape the characters. It's same as\nos.chdir('C:\\\\Users\\\\test\\\\Desktop')\n\n", "'r' before string literal make Python parse it as a \"raw\" string, without escaping.\nIf you want not to use 'r' before string literal, but specify path with single slashes, you can use this notation:\n\"C:/Users/test/Desktop\"\n\nAs it would be in unix-pased systems. Windows understand both \"\\\" and \"/\" in file paths, so, using \"/\" give you ability to avoid 'r' letter before the path string.\nAlso, as it was mentioned, you can specify path with double slashes, but, as I realized, this is not that you wanted:\n\"C:\\\\Users\\\\test\\\\Desktop\"\n\n", "Only when it has escape sequences\nprint('C:\\sys\\cat\\Desktop')\n\nBetter to give it as raw type to avoid the glitches or using forward slash.\n", "You can use forward slashes in Windows as well, so you dont need raw string literals:\n>>> import os\n>>> os.stat(r'C:\\Users\\f3k\\Desktop\\excel.vbs')\nnt.stat_result(st_mode=33206, st_ino=0L, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=555L, st_atime=1367585162L, st_mtime=1367586148L, st_ctime=1367585162L)\n\nSame using forward slashes:\n>>> os.stat('C:/Users/f3k/Desktop/excel.vbs')\nnt.stat_result(st_mode=33206, st_ino=0L, st_dev=0, st_nlink=0, st_uid=0, st_gid=0, st_size=555L, st_atime=1367585162L, st_mtime=1367586148L, st_ctime=1367585162L)\n\nBut take care using os.path.join():\n>>> os.path.join('C:/Users/f3k/Desktop', 'excel.vbs')\n'C:/Users/f3k/Desktop\\\\excel.vbs'\n\n", "As per knowledge, you can use forward slash instead of backward slash and put r on it. If you use a backslash then you have to put r in front of it or you can do a forward slash if you want to.\nExample - > You can try this in Jupyter notebook:\nf = open(r'F:\\love.txt', 'r') \n\nor\nf = open('F:/love.txt', 'r')\n\nBoth will work fine.\n" ]
[ 2, 1, 0, 0, 0 ]
[]
[]
[ "path", "python" ]
stackoverflow_0047010506_path_python.txt
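A small sketch of the pathlib alternative, which sidesteps the escaping question entirely; the user name in the path is a placeholder from the question:
from pathlib import Path
import os

desktop = Path("C:/Users/test/Desktop")  # forward slashes are fine on Windows
os.chdir(desktop)                        # os.chdir accepts any path-like object
print(Path.cwd())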
Q: Correct procedure to connect to a network database I am making an application with qooxdoo, and I need to connect to a sqlite database. I am not able to. A few years ago I made another application and I was able to connect, in that case it was mysql, perfectly. I have a networked server using Python and bottle, say at address.es:8080/idCars, and I can request sql commands from it without problems. But now I need to connect to qooxdoo and I can't find the way. What steps do I have to follow? If you can at least give me a working skeleton I would appreciate it. I am an amateur, not a professional. I have tried this in the playground for testing, but the server does not even receive a request. var req = new qx.io.request.Jsonp(); req.setUrl("http://url:8080/getData"); req.send(); When I run the sql server, going to the link via the web works, but from qooxdoo it does not. The sqlite server: from bottle import run, Bottle from bottle.ext import sqlite from bottle import template import json app = Bottle() plugin = sqlite.Plugin(dbfile='/home/__/manejo_db/id_Aves.sqlite') app.install( plugin) @app.route('/getData') def show( db): salida= [] row = db.execute('SELECT paxiarin, spacie_code FROM aves_identificadas group by paxiarin order by paxiarin;').fetchall() if row: for i in row: sql= "SELECT count( spacie_code) FROM aves_identificadas where spacie_code= '%s'" % i[1] conteo= db.execute( sql) res= conteo.fetchone() salida.append( [ i[ 0], res[ 0]]) # output = template('plantilla', rows=salida) # print( json.dumps( salida)) return json.dumps( { 'mensaje': "bien", 'resultados': salida}) return HTTPError(404, "Page not found") run( app, host='blabla.bla', port=8080) Thanks. A: There is a problem with CORS. I think you have a response from the server. The simple client which works on port 8080: const req = new qx.io.request.Xhr("http://localhost:8081/getData"); req.addListener("success", function(e) { const req = e.getTarget(); console.log(req.getResponse()); }, this); req.send(); Some fixes to your server: from bottle import run, request, response, Bottle from bottle.ext import sqlite from bottle import template import json app = Bottle() plugin = sqlite.Plugin(dbfile='/home/__/manejo_db/id_Aves.sqlite') app.install( plugin) @app.hook('after_request') def enable_cors(): """ You need to add some headers to each request. Don't use the wildcard '*' for Access-Control-Allow-Origin in production. """ response.headers['Access-Control-Allow-Origin'] = '*' response.headers['Access-Control-Allow-Methods'] = 'PUT, GET, POST, DELETE, OPTIONS' response.headers['Access-Control-Allow-Headers'] = 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token' @app.route('/getData') def show( db): salida= [] row = db.execute('SELECT paxiarin, spacie_code FROM aves_identificadas group by paxiarin order by paxiarin;').fetchall() if row: for i in row: sql= "SELECT count( spacie_code) FROM aves_identificadas where spacie_code= '%s'" % i[1] conteo= db.execute( sql) res= conteo.fetchone() salida.append( [ i[ 0], res[ 0]]) # output = template('plantilla', rows=salida) # print( json.dumps( salida)) return json.dumps( { 'mensaje': "bien", 'resultados': salida}) return HTTPError(404, "Page not found") run( app, host='localhost', port=8081) Notice the server works on a different port than your client, which leads to the CORS problem. If you somehow have the same origin for server and client there is no problem with CORS. This code works and has been tested. You can see the logs on the server side and on your client qooxdoo page.
Correct procedure to connect to a network database
I am making an application with qooxdoo, and I need to connect to a sqlite database. I am not able to. A few years ago I made another application and I was able to connect, in that case it was mysql, perfectly. I have a networked server using Python and bottle, say at address.es:8080/idCars, and I can request sql commands from it without problems. But now I need to connect to qooxdoo and I can't find the way. What steps do I have to follow? If you can at least give me a working skeleton I would appreciate it. I am an amateur, not a professional. I have tried this in the playground for testing, but the server does not even receive a request. var req = new qx.io.request.Jsonp(); req.setUrl("http://url:8080/getData"); req.send(); When I run the sql server, going to the link via the web works, but from qooxdoo it does not. The sqlite server: from bottle import run, Bottle from bottle.ext import sqlite from bottle import template import json app = Bottle() plugin = sqlite.Plugin(dbfile='/home/__/manejo_db/id_Aves.sqlite') app.install( plugin) @app.route('/getData') def show( db): salida= [] row = db.execute('SELECT paxiarin, spacie_code FROM aves_identificadas group by paxiarin order by paxiarin;').fetchall() if row: for i in row: sql= "SELECT count( spacie_code) FROM aves_identificadas where spacie_code= '%s'" % i[1] conteo= db.execute( sql) res= conteo.fetchone() salida.append( [ i[ 0], res[ 0]]) # output = template('plantilla', rows=salida) # print( json.dumps( salida)) return json.dumps( { 'mensaje': "bien", 'resultados': salida}) return HTTPError(404, "Page not found") run( app, host='blabla.bla', port=8080) Thanks.
[ "There is problem with CORS. I think you have a response from the server.\nThe simple client which works on port 8080:\nconst req = new qx.io.request.Xhr(\"http://localhost:8081/getData\");\nreq.addListener(\"success\", function(e) {\n const req = e.getTarget();\n console.log(req.getResponse());\n}, this);\nreq.send();\n\nSome fixes to your server:\nfrom bottle import run, request, response, Bottle\nfrom bottle.ext import sqlite\nfrom bottle import template\nimport json\n\n\napp = Bottle()\nplugin = sqlite.Plugin(dbfile='/home/__/manejo_db/id_Aves.sqlite')\napp.install( plugin)\n\n@app.hook('after_request')\ndef enable_cors():\n \"\"\"\n You need to add some headers to each request.\n Don't use the wildcard '*' for Access-Control-Allow-Origin in production.\n \"\"\"\n response.headers['Access-Control-Allow-Origin'] = '*'\n response.headers['Access-Control-Allow-Methods'] = 'PUT, GET, POST, DELETE, OPTIONS'\n response.headers['Access-Control-Allow-Headers'] = 'Origin, Accept, Content-Type, X-Requested-With, X-CSRF-Token'\n\n@app.route('/getData')\ndef show( db):\n salida= []\n row = db.execute('SELECT paxiarin, spacie_code FROM aves_identificadas group by paxiarin order by paxiarin;').fetchall()\n if row:\n for i in row:\n sql= \"SELECT count( spacie_code) FROM aves_identificadas where spacie_code= '%s'\" % i[1]\n conteo= db.execute( sql)\n res= conteo.fetchone()\n salida.append( [ i[ 0], res[ 0]])\n# output = template('plantilla', rows=salida)\n# print( json.dumps( salida))\n return json.dumps( { 'mensaje': \"bien\", 'resultados': salida})\n return HTTPError(404, \"Page not found\")\n\nrun( app, host='localhost', port=8081)\n\nNotice the server works on a different port than your client which lead to CORS problem. If you have somehow the same origin for server and client there is no problem with CORS. This code works and tested. You may see logs on server side and your client qooxdoo page.\n" ]
[ 0 ]
[]
[]
[ "bottle", "json", "python", "qooxdoo" ]
stackoverflow_0074557774_bottle_json_python_qooxdoo.txt
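One detail the after_request hook may not cover is the browser's preflight: if the request triggers an OPTIONS preflight and no route matches, bottle answers 405 before the CORS headers help. A common recipe is a catch-all OPTIONS route; a sketch, untested against the asker's setup:
@app.route('/<:re:.*>', method='OPTIONS')
def cors_preflight():
    # empty body on purpose: the after_request hook attaches the CORS headers
    return ''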
Q: dataframe group, sum and concatenate I have a dataframe dfsorted : dfsorted = df.sort_values(["sku"], ascending=[True]) print(dfsorted.head()) id sku bill qty_left 186 01-04 50469 0 16 01-20 50262 15 267 01-20 50460 1 18 01-20 50262 5 17 01-20 50262 5 How can I group / aggregate the dfsorted into this desired result: sku bill qty_left 01-04 50469 0 01-20 50262, 50460 26 So: group the dataframe by 'sku' for each 'sku', concatenate the 'bill' values (these are already formatted as strings, I don't care if there are duplicates but unique values would be nice too) for each 'sku', sum the 'qty_left' values. Thanks! A: Use agg, where you can apply both custom (lambda) functions and standard (such as sum) functions: df.groupby('sku').agg({'bill': lambda x: set(x), 'qty_left':'sum'}) set makes sure they are unique values; using list makes them just concatenated. result: bill qty_left sku 01-04 {50469} 0 01-20 {50460, 50262} 26 If you want a string instead of a set for bill you can use: df2.bill.apply(lambda s: ', '.join(list(map(str, s)))) Where df2 is the result of the groupby.agg function above. A: Use GroupBy.agg with a lambda function to remove duplicates in the original ordering: df1 = (df.groupby('sku', as_index=False) .agg({'bill': lambda x:','.join(dict.fromkeys(x)), 'qty_left':'sum'})) print (df1) sku bill qty_left 0 01-04 50469 0 1 01-20 50262,50460 26 If the bill column values are not strings, convert them first: df1 = (df.astype({'bill':str}) .groupby('sku', as_index=False) .agg({'bill': lambda x:','.join(dict.fromkeys(x)), 'qty_left':'sum'})) print (df1) sku bill qty_left 0 01-04 50469 0 1 01-20 50262,50460 26
dataframe group, sum and concatenate
I have a dataframe dfsorted : dfsorted = df.sort_values(["sku"], ascending=[True]) print(dfsorted.head()) id sku bill qty_left 186 01-04 50469 0 16 01-20 50262 15 267 01-20 50460 1 18 01-20 50262 5 17 01-20 50262 5 How can I group / aggregate the dfsorted into this desired result: sku bill qty_left 01-04 50469 0 01-20 50262, 50460 26 So : group the dataframe by 'sku' for each 'sku', concatenate the 'bill' values (these are already formatted as strings, I don't care if there are duplicates but unique values would be nice too) for each 'sku', sum the 'qty_left' values. Thanks!
[ "Use agg, where you can apply both custom (lambda) functions as standard (such as sum) functions:\ndf.groupby('sku').agg({'bill': lambda x: set(x), 'qty_left':'sum'})\n\nset makes sure they are unique values, using list makes them just concatenated.\nresult:\n bill qty_left\nsku \n01-04 {50469} 0\n01-20 {50460, 50262} 26\n\nIf you want a string instead of a set for bill you can use:\ndf2.bill.apply(lambda s: ', '.join(list(map(str, s))))\n\nWhere df2 is the result of the groupby.agg function above.\n", "Use GroupBy.agg with lambda function for remove duplicates in original ordering:\ndf1 = (df.groupby('sku', as_index=False)\n .agg({'bill': lambda x:','.join(dict.fromkeys(x)), \n 'qty_left':'sum'}))\nprint (df1)\n sku bill qty_left\n0 01-04 50469 0\n1 01-20 50262,50460 26\n\nIf bfill column are strings use:\ndf1 = (df.astype({'bill':str})\n .groupby('sku', as_index=False)\n .agg({'bill': lambda x:','.join(dict.fromkeys(x)), \n 'qty_left':'sum'}))\nprint (df1)\n sku bill qty_left\n0 01-04 50469 0\n1 01-20 50262,50460 26\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "group_by", "pandas", "python" ]
stackoverflow_0074560979_dataframe_group_by_pandas_python.txt
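If the goal is the comma-joined string of the desired output directly, both answers can be collapsed into one named-aggregation call (pandas 0.25+); a sketch assuming bill may be numeric:
import pandas as pd

df1 = (df.astype({'bill': str})
         .groupby('sku', as_index=False)
         .agg(bill=('bill', lambda x: ', '.join(x.unique())),  # unique values, original order
              qty_left=('qty_left', 'sum')))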
Q: Delete rows with a certain condition in pandas I have a data frame and I want to delete the rows in which the pattern "___" exists in the column "PHRASE". Index PHRASE Label 0 proposed by the president of the 1 1 Living ___ 1 2 "Murder, ___ Wrote" 0 But imagine that the data frame has 2,000,000 entries. import re df_clean = pd.DataFrame() z = 0 y = 0 for i in df_original["PHRASE"]: x = re.search("___", i) if x: y = y + 1 else: df_clean.append([i]) z = z + 1 This is what I came up with so far; I know it's not right. Does anyone know the answer? (By the way, append takes a lot of time.) A: df[~df['PHRASE'].str.contains('___')] Where the ~ symbol negates the operation.
Delete rows with a certain condition in pandas
I have a data frame and I want to delete the rows in which the pattern "___" exists in the column "PHRASE". Index PHRASE Label 0 proposed by the president of the 1 1 Living ___ 1 2 "Murder, ___ Wrote" 0 But imagine that the data frame has 2,000,000 entries. import re df_clean = pd.DataFrame() z = 0 y = 0 for i in df_original["PHRASE"]: x = re.search("___", i) if x: y = y + 1 else: df_clean.append([i]) z = z + 1 This is what I came up with so far; I know it's not right. Does anyone know the answer? (By the way, append takes a lot of time.)
[ "df[~df['phrase'].str.contains('___')]\n\nWhere the ~ symbol negates the operation.\n" ]
[ 0 ]
[]
[]
[ "nlp", "pandas", "python" ]
stackoverflow_0074561241_nlp_pandas_python.txt
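For completeness, a sketch of the full operation on the question's frame; regex=False skips regular-expression matching (harmless here, since _ is not a regex metacharacter, but faster and safer for literal patterns):
df_clean = df_original[~df_original["PHRASE"].str.contains("___", regex=False)]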
Q: Best way to pass multiple conditions in pandas between dataframes I have a model I am building and in the test version I am stuck on how to pass multiple conditions to generate a new dataframe from existing ones. I currently have an inefficient function that loops through my dataframes one for each period (1-5) and one for each date in the dataset. I have created a subset of the data for illustration of my problem so that it is a little less clunky to understand my problem based on feedback from previous users. In my actual dataset I have multiple IDs, periods 1-48 and many more dates. I am trying to pass through conditions between three dataframes to generate a new one that will later be fed into a new part of my model. I have the completed model in excel but am now translating it into pandas. In excel the solution is using nested IF statements with sums. What I am trying to do is apply the following conditions to my below dataframes: For every settlement period (1-10) on each row I need to assess the following conditions: if the cumulative sum (from dataframe 2) is < value x in dataframe 3 (fossil) AND if the cumulative sum (from dataframe 2) + the relating value from dataframe one is > value x in dataframe 3 (fossil) THEN: value x in dataframe 3 - the cumulative sum value (from dataframe 2) otherwise: value from dataframe 1 Dataframes: DF1 Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.4332 130.932 130.78 130.58 130.568 1 ID 2 Gas 2 0 339.45 342.325 322.525 312.4 303.775 2 ID 3 Gas 3 1 363.312 386.712 364.464 363.312 363.312 3 ID 4 Coal 4 0 334.4 419.5 436.7 441.9 440.5 4 ID 5 Gas 5 1 370.65 370.45 359.9 326.25 326.2 5 ID 6 Coal 6 0 337 423.4 423.1 427.5 427 6 ID 7 Gas 7 2 240.4065 293.169 252.2675 256.5055 261.653 7 ID 8 Gas 8 2 297.7333 360.2667 355.4 357.0667 358.6667 8 ID 9 Gas 9 3 106.624 106.112 105.964 106 106 9 ID 10 Gas 10 3 432.8 430.4 430.7 431.9 432.1 DF2 Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.4332 130.932 130.78 130.58 130.568 1 ID 2 Gas 2 0 469.8832 473.257 453.305 442.98 434.343 2 ID 3 Gas 3 1 833.1952 859.969 817.769 806.292 797.655 3 ID 4 Coal 4 0 1167.595 1279.469 1254.469 1248.192 1238.155 4 ID 5 Gas 5 1 1538.245 1649.919 1614.369 1574.442 1564.355 5 ID 6 Coal 6 0 1875.245 2073.319 2037.469 2001.942 1991.355 6 ID 7 Gas 7 2 2115.652 2366.488 2289.737 2258.448 2253.008 7 ID 8 Gas 8 2 2413.385 2726.755 2645.137 2615.514 2611.675 8 ID 9 Gas 9 3 2520.009 2832.867 2751.101 2721.514 2717.675 9 ID 10 Gas 10 3 2952.809 3263.267 3181.801 3153.414 3149.775 DF3 settlementPeriod 1 2 3 4 5 settlementDate Type 03/01/2022 Fossil 2540.10 2884.05 2322.03 2027.54 2043.56 ZE 18190.57 18261.24 18367.98 18198.04 18072.02 04/01/2022 Fossil 2772.00 3415.52 3534.11 3580.13 3501.39 ZE 16883.01 16655.47 16581.57 16322.97 16027.87 05/01/2022 Fossil 2653.98 2700.04 2186.64 1702.38 1617.53 ZE 19296.58 19774.30 20163.37 20379.58 20584.48 06/01/2022 Fossil 11556.75 11924.36 11581.64 11144.16 11358.06 ZE 11850.34 11698.00 11801.11 11592.45 11671.91 07/01/2022 Fossil 2373.65 2418.45 2221.58 2154.57 2192.19 ZE 18433.11 17909.67 17774.73 17816.40 17868.83 08/01/2022 Fossil 6407.98 6529.93 6075.51 5258.58 4559.91 ZE 15237.24 15360.68 14994.30 14741.95 14695.10 Example desired output with one input for period one: Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.43 1 ID 2 Gas 2 0 2 ID 3 Gas 3 1 3 ID 4 Coal 4 0 4 ID 5 Gas 5 1 5 ID 6 Coal 6 0 6 ID 7 Gas 7 2 7 ID 8 Gas 8 2 8 ID 9 Gas 9 3 9 ID 10 Gas 10 3 My current solution, which does not seem to be that efficient in my next step, is as follows with another loop for each day in another function: def run_loop_for_day_SP_generation(DF1,DF2,DF3): modelled_gen_df = pd.DataFrame(index=DF1.index,columns=DF1.columns) for SP in DF3.columns: for row in DF1.index: if row == 0: modelled_gen_df.loc[row,SP]=DF1.loc[row,SP] elif DF2.loc[row-1,SP]<DF3[SP][0]: if DF2.loc[row-1,SP]+DF1.loc[row,SP]>DF3[SP][0]: modelled_gen_df.loc[row,SP]=DF3[SP][0] - DF2.loc[row-1,SP] else: modelled_gen_df.loc[row,SP]=DF1.loc[row,SP] else: modelled_gen_df.loc[row,SP] = 0 modelled_gen_df[['BM Unit ID','Technology','ave rank','rank']] =DF1[['BM Unit ID','Technology','ave rank']] return modelled_gen_df What would the most pythonic way to solve this be? Using np.select? ok more questions. 1) your conditions. there are two if statements back to back, is it "if this cond AND this cond, then x, else y?" 2) why there is only 1 value in df3 and the rest x and y? Is it just for demonstration or is it really like that? 3) you write "settlementPeriod (1-10)", with that you mean for each settlementDate Type "Fossil" right? 4) we don't know Merit_order which occurs in your function. Can't run the code like that. Update: There is only one value in DF3 for demonstration. Here I used X and Y to fill it in. SettlementPeriod is the columns 1-5 as in DF1, 2 and 3. Settlement date is the date of the data, which is why I need a second function that then loops through all days. Update: Second function to go through all dates def model_all_days_generation(DF1,DF2,DF3): all_dates = DF3.index.get_level_values(0).unique() modelled_generation_dictionary = {'Date':'dataframe'} ## Top label for date in all_dates: single_day_fossil = DF3.loc[:,'Fossil',:].loc[date].to_frame().transpose() day_modelled_gen = run_loop_for_day_SP_generation(DF1,DF2,DF3) modelled_generation_dictionary[date] = day_modelled_gen return modelled_generation_dictionary EDIT: Output from debug: (('03/01/2022', 'Fossil'), settlementPeriod 1 2540.100 2 2884.050 3 2322.026 4 2027.544 5 2043.558 6 1967.350 7 2050.054 8 1917.484 9 1948.606 10 1912.418 11 1998.150 12 2441.200 13 3098.836 14 3052.854 15 3470.942 16 3844.768 17 4515.572 18 5700.036 19 7408.294 20 7944.532 21 7185.508 22 7200.348 23 7052.050 24 7807.184 25 8065.094 26 8011.100 27 8355.068 28 8567.930 29 8657.718 30 8810.142 31 9275.370 32 9910.762 33 10308.158 34 11240.784 35 11660.706 36 11624.170 37 11452.386 38 11219.704 39 10306.176 40 9785.316 41 8583.608 42 7625.128 43 6738.098 44 5965.298 45 5475.074 46 4584.388 47 3761.072 48 2774.104 Name: (03/01/2022, Fossil), dtype: float64) Error: KeyError: "None of [Index(['1,', '2,', '3,', '4,', '5,', '6,', '7,', '8,', '9,', '10,', '11,',\n '12,', '13,', '14,', '15,', '16,', '17,', '18,', '19,', '20,', '21,',\n '22,', '23,', '24,', '25,', '26,', '27,', '28,', '29,', '30,', '31,',\n '32,', '33,', '34,', '35,', '36,', '37,', '38,', '39,', '40,', '41,',\n '42,', '43,', '44,', '45,', '46,', '47,', '48'],\n dtype='object')] are in the [columns]" A: Here is my approach to your task: The function choices will calculate the new values of the columns 1-5 with its conditions. The function each_date will calculate that new dataframe for each date where Type == 'Fossil' cols = list('12345') # ['1', '2', '3', '4', '5'] # predefine all columns you need here def choices(c, thresh): col = c.name thresh = thresh[col] cond1 = df2[col].shift() < thresh cond2 = (df2[col].shift() + df1[col]) > thresh m1 = cond1 & cond2 m2 = cond1 & (~cond2) m3 = ~cond1 #no need to define m1, m2, m3 here, but easier to read imo cond = [m1, m2, m3] choices = [thresh - df2[col].shift(), df1[col], 0] return np.select(cond, choices) # Series with the length of df1 def each_date(row): tmp = df1[cols].apply(choices, thresh=row) # choices will be applied per column tmp.loc[0, :] = df1.loc[0,cols] # values of the first row of the new df get values of df1 return tmp #create your dictionary modelled_generation_dictionary = {'Date':'dataframe'} #loop through each row, apply `each_date` on each row (which is a Series) and concat some columns to it, I assumed you want to have Unit ID and Technology with it for row in df3.loc[df3.index.get_level_values(1)=='Fossil'].iterrows(): print(row) # for debugging res = pd.concat([df1[["BM Unit ID", "Technology"]], each_date(row[1])],axis=1) # change key from timestamp to Date only and make it a string for format like: "2022-03-01" modelled_generation_dictionary[f"{row[0][0].date()}"] = res print(modelled_generation_dictionary) For debugging: I added the row print(row); each row should be a tuple, containing a tuple as 1st element (with two elements, 1st the timestamp, 2nd the Type) and the 2nd element a pandas.Series, like this: ((Timestamp('2022-03-01 00:00:00'), 'Fossil'), 1 2540.10 2 2884.05 3 2322.03 4 2027.54 5 2043.56 Name: (2022-03-01 00:00:00, Fossil), dtype: float64) Output of that dict: Date dataframe 2022-03-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.580 130.568 1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.400 303.775 2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.312 363.312 3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.900 440.500 4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.250 326.200 5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.500 427.000 6 6 ID 7 Gas 240.4065 293.1690 252.2675 25.598 52.205 7 7 ID 8 Gas 297.7333 360.2667 32.2930 0.000 0.000 8 8 ID 9 Gas 106.6240 106.1120 0.0000 0.000 0.000 9 9 ID 10 Gas 20.0910 51.1830 0.0000 0.000 0.000 2022-04-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680 1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750 2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120 3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000 4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000 5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000 6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530 7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667 8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000 9 9 ID 10 Gas 251.9910 430.4000 430.7000 431.9000 432.1000 2022-05-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.932 130.780 130.580 130.568 1 1 ID 2 Gas 339.4500 342.325 322.525 312.400 303.775 2 2 ID 3 Gas 363.3120 386.712 364.464 363.312 363.312 3 3 ID 4 Coal 334.4000 419.500 436.700 441.900 440.500 4 4 ID 5 Gas 370.6500 370.450 359.900 326.250 326.200 5 5 ID 6 Coal 337.0000 423.400 423.100 127.938 53.175 6 6 ID 7 Gas 240.4065 293.169 149.171 0.000 0.000 7 7 ID 8 Gas 297.7333 333.552 0.000 0.000 0.000 8 8 ID 9 Gas 106.6240 0.000 0.000 0.000 0.000 9 9 ID 10 Gas 133.9710 0.000 0.000 0.000 0.000 2022-06-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680 1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750 2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120 3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000 4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000 5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000 6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530 7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667 8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000 9 9 ID 10 Gas 432.8000 430.4000 430.7000 431.9000 432.1000 2022-07-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.932 130.780 130.580 130.568 1 1 ID 2 Gas 339.4500 342.325 322.525 312.400 303.775 2 2 ID 3 Gas 363.3120 386.712 364.464 363.312 363.312 3 3 ID 4 Coal 334.4000 419.500 436.700 441.900 440.500 4 4 ID 5 Gas 370.6500 370.450 359.900 326.250 326.200 5 5 ID 6 Coal 337.0000 423.400 423.100 427.500 427.000 6 6 ID 7 Gas 240.4065 293.169 184.111 152.628 200.835 7 7 ID 8 Gas 257.9980 51.962 0.000 0.000 0.000 8 8 ID 9 Gas 0.0000 0.000 0.000 0.000 0.000 9 9 ID 10 Gas 0.0000 0.000 0.000 0.000 0.000 2022-08-01 Index BM Unit ID Technology 1 2 3 4 5 0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680 1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750 2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120 3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000 4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000 5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000 6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530 7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667 8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000 9 9 ID 10 Gas 432.8000 430.4000 430.7000 431.9000 432.1000
Best way to pass multiple conditions in pandas between dataframes
I have a model I am building and in the test version I am stuck on how to pass multiple conditions to generate a new dataframe from existing ones. I currently have an inefficient function that loops through my dataframes one for each period (1-5) and one for each date in the dataset. I have created a subset of the data for illustration of my problem so that it is a little less clunky to understand my problem based on feedback from previous users. In my actual dataset I have multiple IDs, periods 1-48 and many more dates. I am trying to pass through conditions between three dataframes to generate a new one that will later be fed into a new part of my model. I have the completed model in excel but am now translating it into pandas. In excel the solution is using nested IF statements with sums. What I am trying to do is apply the following conditions to my below dataframes: For every settlement period (1-10) on each row I need to assess the following conditions: if the cumulative sum (from dataframe 2) is < value x in dataframe 3 (fossil) AND if the cumulative sum (from dataframe 2) + the relating value from dataframe one is > value x in dataframe 3 (fossil) THEN: value x in dataframe 3 - the cumulative sum value (from dataframe 2) otherwise: value from dataframe 1 Dataframes: DF1 Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.4332 130.932 130.78 130.58 130.568 1 ID 2 Gas 2 0 339.45 342.325 322.525 312.4 303.775 2 ID 3 Gas 3 1 363.312 386.712 364.464 363.312 363.312 3 ID 4 Coal 4 0 334.4 419.5 436.7 441.9 440.5 4 ID 5 Gas 5 1 370.65 370.45 359.9 326.25 326.2 5 ID 6 Coal 6 0 337 423.4 423.1 427.5 427 6 ID 7 Gas 7 2 240.4065 293.169 252.2675 256.5055 261.653 7 ID 8 Gas 8 2 297.7333 360.2667 355.4 357.0667 358.6667 8 ID 9 Gas 9 3 106.624 106.112 105.964 106 106 9 ID 10 Gas 10 3 432.8 430.4 430.7 431.9 432.1 DF2 Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.4332 130.932 130.78 130.58 130.568 1 ID 2 Gas 2 0 469.8832 473.257 453.305 442.98 434.343 2 ID 3 Gas 3 1 833.1952 859.969 817.769 806.292 797.655 3 ID 4 Coal 4 0 1167.595 1279.469 1254.469 1248.192 1238.155 4 ID 5 Gas 5 1 1538.245 1649.919 1614.369 1574.442 1564.355 5 ID 6 Coal 6 0 1875.245 2073.319 2037.469 2001.942 1991.355 6 ID 7 Gas 7 2 2115.652 2366.488 2289.737 2258.448 2253.008 7 ID 8 Gas 8 2 2413.385 2726.755 2645.137 2615.514 2611.675 8 ID 9 Gas 9 3 2520.009 2832.867 2751.101 2721.514 2717.675 9 ID 10 Gas 10 3 2952.809 3263.267 3181.801 3153.414 3149.775 DF3 settlementPeriod 1 2 3 4 5 settlementDate Type 03/01/2022 Fossil 2540.10 2884.05 2322.03 2027.54 2043.56 ZE 18190.57 18261.24 18367.98 18198.04 18072.02 04/01/2022 Fossil 2772.00 3415.52 3534.11 3580.13 3501.39 ZE 16883.01 16655.47 16581.57 16322.97 16027.87 05/01/2022 Fossil 2653.98 2700.04 2186.64 1702.38 1617.53 ZE 19296.58 19774.30 20163.37 20379.58 20584.48 06/01/2022 Fossil 11556.75 11924.36 11581.64 11144.16 11358.06 ZE 11850.34 11698.00 11801.11 11592.45 11671.91 07/01/2022 Fossil 2373.65 2418.45 2221.58 2154.57 2192.19 ZE 18433.11 17909.67 17774.73 17816.40 17868.83 08/01/2022 Fossil 6407.98 6529.93 6075.51 5258.58 4559.91 ZE 15237.24 15360.68 14994.30 14741.95 14695.10 Example desired output with one input for period one: Index BM Unit ID Technology Rank Gas_Quantile Coal_Quantile 1 2 3 4 5 0 ID 1 Gas 1 0 130.43 1 ID 2 Gas 2 0 2 ID 3 Gas 3 1 3 ID 4 Coal 4 0 4 ID 5 Gas 5 1 5 ID 6 Coal 6 0 6 ID 7 Gas 7 2 7 ID 8 Gas 8 2 8 ID 9 Gas 9 3 9 ID 10 Gas 10 3 My current solution, which does not seem to be that efficient in my next step, is as follows with another loop for each day in another function: def run_loop_for_day_SP_generation(DF1,DF2,DF3): modelled_gen_df = pd.DataFrame(index=DF1.index,columns=DF1.columns) for SP in DF3.columns: for row in DF1.index: if row == 0: modelled_gen_df.loc[row,SP]=DF1.loc[row,SP] elif DF2.loc[row-1,SP]<DF3[SP][0]: if DF2.loc[row-1,SP]+DF1.loc[row,SP]>DF3[SP][0]: modelled_gen_df.loc[row,SP]=DF3[SP][0] - DF2.loc[row-1,SP] else: modelled_gen_df.loc[row,SP]=DF1.loc[row,SP] else: modelled_gen_df.loc[row,SP] = 0 modelled_gen_df[['BM Unit ID','Technology','ave rank','rank']] =DF1[['BM Unit ID','Technology','ave rank']] return modelled_gen_df What would the most pythonic way to solve this be? Using np.select? ok more questions. 1) your conditions. there are two if statements back to back, is it "if this cond AND this cond, then x, else y?" 2) why there is only 1 value in df3 and the rest x and y? Is it just for demonstration or is it really like that? 3) you write "settlementPeriod (1-10)", with that you mean for each settlementDate Type "Fossil" right? 4) we don't know Merit_order which occurs in your function. Can't run the code like that. Update: There is only one value in DF3 for demonstration. Here I used X and Y to fill it in. SettlementPeriod is the columns 1-5 as in DF1, 2 and 3. Settlement date is the date of the data, which is why I need a second function that then loops through all days. Update: Second function to go through all dates def model_all_days_generation(DF1,DF2,DF3): all_dates = DF3.index.get_level_values(0).unique() modelled_generation_dictionary = {'Date':'dataframe'} ## Top label for date in all_dates: single_day_fossil = DF3.loc[:,'Fossil',:].loc[date].to_frame().transpose() day_modelled_gen = run_loop_for_day_SP_generation(DF1,DF2,DF3) modelled_generation_dictionary[date] = day_modelled_gen return modelled_generation_dictionary EDIT: Output from debug: (('03/01/2022', 'Fossil'), settlementPeriod 1 2540.100 2 2884.050 3 2322.026 4 2027.544 5 2043.558 6 1967.350 7 2050.054 8 1917.484 9 1948.606 10 1912.418 11 1998.150 12 2441.200 13 3098.836 14 3052.854 15 3470.942 16 3844.768 17 4515.572 18 5700.036 19 7408.294 20 7944.532 21 7185.508 22 7200.348 23 7052.050 24 7807.184 25 8065.094 26 8011.100 27 8355.068 28 8567.930 29 8657.718 30 8810.142 31 9275.370 32 9910.762 33 10308.158 34 11240.784 35 11660.706 36 11624.170 37 11452.386 38 11219.704 39 10306.176 40 9785.316 41 8583.608 42 7625.128 43 6738.098 44 5965.298 45 5475.074 46 4584.388 47 3761.072 48 2774.104 Name: (03/01/2022, Fossil), dtype: float64) Error: KeyError: "None of [Index(['1,', '2,', '3,', '4,', '5,', '6,', '7,', '8,', '9,', '10,', '11,',\n '12,', '13,', '14,', '15,', '16,', '17,', '18,', '19,', '20,', '21,',\n '22,', '23,', '24,', '25,', '26,', '27,', '28,', '29,', '30,', '31,',\n '32,', '33,', '34,', '35,', '36,', '37,', '38,', '39,', '40,', '41,',\n '42,', '43,', '44,', '45,', '46,', '47,', '48'],\n dtype='object')] are in the [columns]"
[ "Here is my approach to your task:\nThe function choices will calculate the new values of the columns 1-5 with its conditions.\nThe function each_date will calculate that new dataframe for each date where Type == 'Fossil'\ncols = list('12345')\n# ['1', '2', '3', '4', '5'] # predefine all columns you need here \n\ndef choices(c, thresh):\n col = c.name\n thresh = thresh[col]\n \n cond1 = df2[col].shift() < thresh\n cond2 = (df2[col].shift() + df1[col]) > thresh \n m1 = cond1 & cond2\n m2 = cond1 & (~cond2)\n m3 = ~cond1\n #no need to definde m1, m2, m3 here, but easier to read imo\n \n cond = [m1, m2, m3]\n choices = [thresh - df2[col].shift(), df1[col], 0]\n return np.select(cond, choices) # Series with the length of df1\n\n\ndef each_date(row):\n tmp = df1[cols].apply(choices, thresh=row) # choices will be applied per column\n tmp.loc[0, :] = df1.loc[0,cols] # values of the first row of the new df get values of df1\n return tmp\n\n#create your dictionary\nmodelled_generation_dictionary = {'Date':'dataframe'}\n\n#loop through each row, apply `each_date` on each row (which is a Series) and concat some columns to it, I assumed you want to have Unit ID and Technology with it\nfor row in df3.loc[df3.index.get_level_values(1)=='Fossil'].iterrows():\n print(row) # for debugging\n res = pd.concat([df1[[\"BM Unit ID\", \"Technology\"]], each_date(row[1])],axis=1)\n\n # change key from timestamp to Date only and make it a string for format like: \"2022-03-01\"\n modelled_generation_dictionary[f\"{row[0][0].date()}\"] = res\n\nprint(modelled_generation_dictionary)\n\nFor debugging: I added the row print(row), each row should be a tuple, containing a tuple as 1st element (with two elements, 1st the timestamp, 2nd the Type) and the 2nd element a pandas.Series, like this:\n((Timestamp('2022-03-01 00:00:00'), 'Fossil'), 1 2540.10\n2 2884.05\n3 2322.03\n4 2027.54\n5 2043.56\nName: (2022-03-01 00:00:00, Fossil), dtype: float64)\n\nOutput of that dict:\nDate\ndataframe\n\n2022-03-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.580 130.568\n1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.400 303.775\n2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.312 363.312\n3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.900 440.500\n4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.250 326.200\n5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.500 427.000\n6 6 ID 7 Gas 240.4065 293.1690 252.2675 25.598 52.205\n7 7 ID 8 Gas 297.7333 360.2667 32.2930 0.000 0.000\n8 8 ID 9 Gas 106.6240 106.1120 0.0000 0.000 0.000\n9 9 ID 10 Gas 20.0910 51.1830 0.0000 0.000 0.000\n\n2022-04-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680\n1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750\n2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120\n3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000\n4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000\n5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000\n6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530\n7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667\n8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000\n9 9 ID 10 Gas 251.9910 430.4000 430.7000 431.9000 432.1000\n\n2022-05-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.932 130.780 130.580 130.568\n1 1 ID 2 Gas 339.4500 342.325 322.525 312.400 303.775\n2 2 ID 3 Gas 363.3120 386.712 364.464 363.312 363.312\n3 3 ID 4 Coal 334.4000 419.500 436.700 441.900 440.500\n4 4 ID 5 Gas 370.6500 370.450 
359.900 326.250 326.200\n5 5 ID 6 Coal 337.0000 423.400 423.100 127.938 53.175\n6 6 ID 7 Gas 240.4065 293.169 149.171 0.000 0.000\n7 7 ID 8 Gas 297.7333 333.552 0.000 0.000 0.000\n8 8 ID 9 Gas 106.6240 0.000 0.000 0.000 0.000\n9 9 ID 10 Gas 133.9710 0.000 0.000 0.000 0.000\n\n2022-06-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680\n1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750\n2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120\n3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000\n4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000\n5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000\n6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530\n7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667\n8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000\n9 9 ID 10 Gas 432.8000 430.4000 430.7000 431.9000 432.1000\n\n2022-07-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.932 130.780 130.580 130.568\n1 1 ID 2 Gas 339.4500 342.325 322.525 312.400 303.775\n2 2 ID 3 Gas 363.3120 386.712 364.464 363.312 363.312\n3 3 ID 4 Coal 334.4000 419.500 436.700 441.900 440.500\n4 4 ID 5 Gas 370.6500 370.450 359.900 326.250 326.200\n5 5 ID 6 Coal 337.0000 423.400 423.100 427.500 427.000\n6 6 ID 7 Gas 240.4065 293.169 184.111 152.628 200.835\n7 7 ID 8 Gas 257.9980 51.962 0.000 0.000 0.000\n8 8 ID 9 Gas 0.0000 0.000 0.000 0.000 0.000\n9 9 ID 10 Gas 0.0000 0.000 0.000 0.000 0.000\n\n2022-08-01\n Index BM Unit ID Technology 1 2 3 4 5\n0 0 ID 1 Gas 130.4332 130.9320 130.7800 130.5800 130.5680\n1 1 ID 2 Gas 339.4500 342.3250 322.5250 312.4000 303.7750\n2 2 ID 3 Gas 363.3120 386.7120 364.4640 363.3120 363.3120\n3 3 ID 4 Coal 334.4000 419.5000 436.7000 441.9000 440.5000\n4 4 ID 5 Gas 370.6500 370.4500 359.9000 326.2500 326.2000\n5 5 ID 6 Coal 337.0000 423.4000 423.1000 427.5000 427.0000\n6 6 ID 7 Gas 240.4065 293.1690 252.2675 256.5055 261.6530\n7 7 ID 8 Gas 297.7333 360.2667 355.4000 357.0667 358.6667\n8 8 ID 9 Gas 106.6240 106.1120 105.9640 106.0000 106.0000\n9 9 ID 10 Gas 432.8000 430.4000 430.7000 431.9000 432.1000\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "excel", "numpy", "pandas", "python" ]
stackoverflow_0074547501_dataframe_excel_numpy_pandas_python.txt
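The final KeyError hints that df3's real column labels are the strings '1,', '2,', … (note the trailing commas, likely left over from how the frame was loaded), so they can never match cols = list('12345'). A hedged cleanup sketch to run before the loop:
# hypothetical fix; adapt to however df3 was actually loaded
df3.columns = [str(c).rstrip(',').strip() for c in df3.columns]  # '1,' -> '1'
cols = list('12345')  # now matches the normalized labels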
Q: Merge N lists of tuples of counts Suppose I have N sorted lists of tuples ("val", "count_of_val") (sorted lexicographically by the character "val"). I want to merge all lists and get the total counts, e.g.: vec1: [("a", 10), ("b", 5)] vec2: [("a" , 7), ("b", 10), ("c", 2)] vec3: [("d", 2)] vec4: [] ... Now I want to merge all of them in 1 big list (not a dictionary) to count total occurrences: [("a", 17), ("b", 15), ("c", 2), ("d", 2)]. I know that I can merge each vec one by one. I have also thought of N iterators through each list. But I was wondering if there is a faster solution. Since the lists are sorted, a dictionary should be equivalent. So, is there a mechanism which is better than what I am suggesting? A: Accumulate values by key in a collections.Counter. import collections vec1= [("a", 10), ("b", 5)] vec2= [("a" , 7), ("b", 10), ("c", 2)] vec3= [("d", 2)] c = collections.Counter() for vct in (vec1,vec2,vec3): for k,v in vct: c[k] += v print(c) or use update, which adds instead of replacing for vct in (vec1,vec2,vec3): c.update(dict(vct)) you get: Counter({'a': 17, 'b': 15, 'c': 2, 'd': 2}) convert back as tuples >>> tuple(c.items()) (('c', 2), ('a', 17), ('b', 15), ('d', 2)) As a one-liner with sum (based on Stef comment) c = sum((collections.Counter(dict(v)) for v in (vec1,vec2,vec3)),start=collections.Counter()) This is the most performant method, even if it involves creating some temporary dictionaries. It could be avoided with zip if all "keys" were present in each list but that's not the case here. Performing lookups directly in the lists of tuples involves linear searches and therefore is not recommended.
Merge N lists of tuples of counts
Suppose I have N sorted lists of tuples ("val", "count_of_val") (sorted lexicographically by the character "val"). I want to merge all lists and get the total counts, e.g.: vec1: [("a", 10), ("b", 5)] vec2: [("a" , 7), ("b", 10), ("c", 2)] vec3: [("d", 2)] vec4: [] ... Now I want to merge all of them in 1 big list (not a dictionary) to count total occurrences: [("a", 17), ("b", 15), ("c", 2), ("d", 2)]. I know that I can merge each vec one by one. I have also thought of N iterators through each list. But I was wondering if there is a faster solution. Since the lists are sorted, a dictionary should be equivalent. So, is there a mechanism which is better than what I am suggesting?
[ "accumulate values by key in a collections.Counter.\nimport collections\n\nvec1= [(\"a\", 10), (\"b\", 5)]\nvec2= [(\"a\" , 7), (\"b\", 10), (\"c\", 2)]\nvec3= [(\"d\", 2)]\n\nc = collections.Counter()\nfor vct in (vec1,vec2,vec3):\n for k,v in vct:\n c[k] += v\n\nprint(c)\n\nor use update which adds instead of replacing\nfor vct in (vec1,vec2,vec3):\n c.update(dict(vct))\n\nyou get:\nCounter({'a': 17, 'b': 15, 'c': 2, 'd': 2})\n\nconvert back as tuples\n>>> tuple(c.items())\n(('c', 2), ('a', 17), ('b', 15), ('d', 2))\n\nAs a one-liner with sum (based on Stef comment)\nc = sum((collections.Counter(dict(v)) for v in (vec1,vec2,vec3)),start=collections.Counter())\n\nThis is the most performant method, even if it involves creating some temporary dictionaries. It could be avoided with zip if all \"keys\" were present in each list but that's not the case here.\nPerforming lookups directly in the lists of tuples involves linear searches and therefore is not recommended.\n" ]
[ 3 ]
[]
[]
[ "algorithm", "list", "merge", "python", "tuples" ]
stackoverflow_0074561365_algorithm_list_merge_python_tuples.txt
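Since the inputs are already sorted, a single linear pass is also possible without building any dictionaries; a sketch using only the standard library (key= on heapq.merge needs Python 3.5+):
import heapq
from itertools import groupby
from operator import itemgetter

vec4 = []  # as in the question
merged = heapq.merge(vec1, vec2, vec3, vec4, key=itemgetter(0))  # lazy k-way merge of sorted lists
result = [(val, sum(cnt for _, cnt in grp))
          for val, grp in groupby(merged, key=itemgetter(0))]
# result == [('a', 17), ('b', 15), ('c', 2), ('d', 2)], in sorted order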
Q: How to stop ruamel.yaml from sorting dict keys? I am on Python 3.11 and ruamel.yaml==0.17.21 How do I stop ruamel.yaml from sorting the dict keys when doing a dump()? If I print the dict outright, it shows the keys are ordered as I added them. But when I dump to file, the keys become alphabetically sorted. Edit: Minimal working code: import sys from typing import NamedTuple from pprint import pprint import ruamel.yaml as ryaml class Loc(NamedTuple): lat: float long: float dadata = { "EMEA": { "rating": 5, "leads": ["Jane", "Jack"], "locs": [Loc(3.0, 3.0), Loc(4.0, 4.0), Loc(0, 0)], }, "APAC": { "rating": 5, "leads": ["Jane", "John"], "locs": [Loc(1.0, 1.0), Loc(2.0, 2.0), Loc(0, 0)], }, } class TupleAsFlowSeq(ryaml.Representer): def ignore_aliases(self, data): # type: (Any) -> bool return True def represent_tuple(self, data): # type: (Any) -> Any if isinstance(data, tuple): return self.represent_sequence("tag:yaml.org,2002:seq", list(data), flow_style=True) def represent_data(self, data): # type: (Any) -> Any if isinstance(data, tuple): return self.represent_sequence("tag:yaml.org,2002:seq", list(data), flow_style=True) return super().represent_data(data) def main(): assert all(map(lambda o: isinstance(o, tuple), dadata["APAC"]["locs"])) assert all(map(lambda o: isinstance(o, tuple), dadata["EMEA"]["locs"])) pprint(dadata, sort_dicts=False) yml = ryaml.YAML() yml.Representer = TupleAsFlowSeq yml.default_flow_style = None yml.dump(dadata, sys.stdout) if __name__ == '__main__': main() the output: {'EMEA': {'rating': 5, 'leads': ['Jane', 'Jack'], 'locs': [Loc(lat=3.0, long=3.0), Loc(lat=4.0, long=4.0), Loc(lat=0, long=0)]}, 'APAC': {'rating': 5, 'leads': ['Jane', 'John'], 'locs': [Loc(lat=1.0, long=1.0), Loc(lat=2.0, long=2.0), Loc(lat=0, long=0)]}} APAC: leads: - Jane - John locs: - [1.0, 1.0] - [2.0, 2.0] - [0, 0] rating: 5 EMEA: leads: - Jane - Jack locs: - [3.0, 3.0] - [4.0, 4.0] - [0, 0] rating: 5 As you can see, in the ruamel.yaml output, APAC comes before EMEA while in the data, EMEA is first. Also the order of the dict keys in the 2nd level dict changes from rating, leads, locs to leads, locs, rating A: Ohhh I'm supposed to inherit from RoundTripRepresenter instead of Representer. Okay.
How to stop ruamel.yaml from sorting dict keys?
I am on Python 3.11 and ruamel.yaml==0.17.21 How do I stop ruamel.yaml from sorting the dict keys when doing a dump()? If I print the dict outright, it shows the keys are ordered as I added them. But when I dump to file, the keys become alphabetically sorted. Edit: Minimal working code: import sys from typing import NamedTuple from pprint import pprint import ruamel.yaml as ryaml class Loc(NamedTuple): lat: float long: float dadata = { "EMEA": { "rating": 5, "leads": ["Jane", "Jack"], "locs": [Loc(3.0, 3.0), Loc(4.0, 4.0), Loc(0, 0)], }, "APAC": { "rating": 5, "leads": ["Jane", "John"], "locs": [Loc(1.0, 1.0), Loc(2.0, 2.0), Loc(0, 0)], }, } class TupleAsFlowSeq(ryaml.Representer): def ignore_aliases(self, data): # type: (Any) -> bool return True def represent_tuple(self, data): # type: (Any) -> Any if isinstance(data, tuple): return self.represent_sequence("tag:yaml.org,2002:seq", list(data), flow_style=True) def represent_data(self, data): # type: (Any) -> Any if isinstance(data, tuple): return self.represent_sequence("tag:yaml.org,2002:seq", list(data), flow_style=True) return super().represent_data(data) def main(): assert all(map(lambda o: isinstance(o, tuple), dadata["APAC"]["locs"])) assert all(map(lambda o: isinstance(o, tuple), dadata["EMEA"]["locs"])) pprint(dadata, sort_dicts=False) yml = ryaml.YAML() yml.Representer = TupleAsFlowSeq yml.default_flow_style = None yml.dump(dadata, sys.stdout) if __name__ == '__main__': main() the output: {'EMEA': {'rating': 5, 'leads': ['Jane', 'Jack'], 'locs': [Loc(lat=3.0, long=3.0), Loc(lat=4.0, long=4.0), Loc(lat=0, long=0)]}, 'APAC': {'rating': 5, 'leads': ['Jane', 'John'], 'locs': [Loc(lat=1.0, long=1.0), Loc(lat=2.0, long=2.0), Loc(lat=0, long=0)]}} APAC: leads: - Jane - John locs: - [1.0, 1.0] - [2.0, 2.0] - [0, 0] rating: 5 EMEA: leads: - Jane - Jack locs: - [3.0, 3.0] - [4.0, 4.0] - [0, 0] rating: 5 As you can see, in the ruamel.yaml output, APAC comes before EMEA while in the data, EMEA is first. Also the order of the dict keys in the 2nd level dict changes from rating, leads, locs to leads, locs, rating
[ "Ohhh I'm supposed to inherit from RoundTripRepresenter instead of Representer. Okay.\n" ]
[ 0 ]
[]
[]
[ "python", "ruamel.yaml" ]
stackoverflow_0074561293_python_ruamel.yaml.txt
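A sketch of what the terse self-answer describes — swapping the base class so round-trip (insertion-order) representation is used; a minimal adaptation of the question's code, not verified beyond the idea:
import sys
import ruamel.yaml as ryaml
from ruamel.yaml.representer import RoundTripRepresenter

class TupleAsFlowSeq(RoundTripRepresenter):  # was ryaml.Representer, which sorts keys
    def represent_data(self, data):
        if isinstance(data, tuple):
            return self.represent_sequence("tag:yaml.org,2002:seq", list(data), flow_style=True)
        return super().represent_data(data)

yml = ryaml.YAML()            # typ='rt' by default, which preserves mapping key order
yml.Representer = TupleAsFlowSeq
yml.default_flow_style = None
yml.dump(dadata, sys.stdout)  # dadata as defined in the question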
Q: How to use GridSearchCV output for a scikit prediction? In the following code: # Load dataset iris = datasets.load_iris() X, y = iris.data, iris.target rf_feature_imp = RandomForestClassifier(100) feat_selection = SelectFromModel(rf_feature_imp, threshold=0.5) clf = RandomForestClassifier(5000) model = Pipeline([ ('fs', feat_selection), ('clf', clf), ]) params = { 'fs__threshold': [0.5, 0.3, 0.7], 'fs__estimator__max_features': ['auto', 'sqrt', 'log2'], 'clf__max_features': ['auto', 'sqrt', 'log2'], } gs = GridSearchCV(model, params, ...) gs.fit(X,y) What should be used for a prediction? gs? gs.best_estimator_? or gs.best_estimator_.named_steps['clf']? What is the difference between these 3? A: gs.predict(X_test) is equivalent to gs.best_estimator_.predict(X_test). Using either, X_test will be passed through your entire pipeline and it will return the predictions. gs.best_estimator_.named_steps['clf'].predict(), however, is only the last phase of the pipeline. To use it, the feature selection step must already have been performed. This would only work if you have previously run your data through gs.best_estimator_.named_steps['fs'].transform() Three equivalent methods for generating predictions are shown below: Using gs directly. pred = gs.predict(X_test) Using best_estimator_. pred = gs.best_estimator_.predict(X_test) Calling each step in the pipeline individually. X_test_fs = gs.best_estimator_.named_steps['fs'].transform(X_test) pred = gs.best_estimator_.named_steps['clf'].predict(X_test_fs) A: If you pass True to the value of the refit parameter of GridSearchCV (which is the default value anyway), then the estimator with the best parameters is refit on the whole dataset, so you can use gs.predict(X_test) for prediction. If the value of refit is equal to False while fitting the GridSearchCV object on your training set, then the fitted GridSearchCV instance cannot be used for prediction at all; you would have to refit an estimator yourself using gs.best_params_.
How to use GridSearchCV output for a scikit prediction?
In the following code: # Load dataset iris = datasets.load_iris() X, y = iris.data, iris.target rf_feature_imp = RandomForestClassifier(100) feat_selection = SelectFromModel(rf_feature_imp, threshold=0.5) clf = RandomForestClassifier(5000) model = Pipeline([ ('fs', feat_selection), ('clf', clf), ]) params = { 'fs__threshold': [0.5, 0.3, 0.7], 'fs__estimator__max_features': ['auto', 'sqrt', 'log2'], 'clf__max_features': ['auto', 'sqrt', 'log2'], } gs = GridSearchCV(model, params, ...) gs.fit(X,y) What should be used for a prediction? gs? gs.best_estimator_? or gs.best_estimator_.named_steps['clf']? What is the difference between these 3?
[ "gs.predict(X_test) is equivalent to gs.best_estimator_.predict(X_test). Using either, X_test will be passed through your entire pipeline and it will return the predictions.\ngs.best_estimator_.named_steps['clf'].predict(), however is only the last phase of the pipeline. To use it, the feature selection step must already have been performed. This would only work if you have previously run your data through gs.best_estimator_.named_steps['fs'].transform()\nThree equivalent methods for generating predictions are shown below:\nUsing gs directly.\npred = gs.predict(X_test)\n\nUsing best_estimator_.\npred = gs.best_estimator_.predict(X_test)\n\nCalling each step in the pipeline individual.\nX_test_fs = gs.best_estimator_.named_steps['fs'].transform(X_test)\npred = gs.best_estimator_.named_steps['clf'].predict(X_test_fs)\n\n", "If you pass True to the value of refit parameter of GridSearchCV (which is the default value anyway), then the estimator with best parameters refits on the whole dataset, so you can use gs.fit(X_test) for prediction.\nIf the value of refit is equal to False while fitting the GridSearchCV object on your training set, then for prediction, you have only one option which is using gs.best_estimator_.predict(X_test).\n" ]
[ 33, 0 ]
[]
[]
[ "grid_search", "python", "scikit_learn" ]
stackoverflow_0035388647_grid_search_python_scikit_learn.txt
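A short end-to-end sketch of the accepted pattern; the cv and scoring settings are placeholders filling in the question's elided arguments, and the import path is the modern one:
from sklearn.model_selection import GridSearchCV, train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
gs = GridSearchCV(model, params, cv=3, scoring='accuracy')  # placeholder settings
gs.fit(X_train, y_train)
print(gs.best_params_)
pred = gs.predict(X_test)  # same as gs.best_estimator_.predict(X_test) when refit=True (default)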
Q: Get all the rows of a table along with matching rows of another table in django ORM using select_related I have 2 models Model 1 class Model1(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) type = models.CharField(max_length=255) details = models.TextField(max_length=1000) price = models.FloatField() Model 2 class Model2(models.Model): id = models.IntegerField(primary_key=True) user_id = models.ForeignKey( User, on_delete=models.CASCADE ) plan_selected = models.ForeignKey(Model1) I am trying to check whether a user has selected any plans. The field plan_selected is a foreign key for Model1 - id. I want to get all details of Model1 along with details of Model2 in a single query using Django. So far, this is what I have tried: sub_details = Model1.objects.select_related('Model2').filter(user_id=id) A: For select_related(), you want to select on the field name, not the related model's name. But all this does is add a join, pull all rows resulting from that join, and then your python representations have this relation cached (no more queries when accessed). You also need to use __ to traverse relationships across lookups. Docs here: https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-related But you don't even need select_related() for your goal: "I am trying to check whether a user has selected any plans.". You don't need Model1 here. That would be, given a user's id user_id: Model2.objects.filter(user_id=user_id).exists() # OR if you have a User instance `user`: user.model2_set.exists() If however what you want is "all instances of Model1 related to user via a Model2": Model1.objects.filter(model2__user_id=user_id).all() to which you can chain prefetch_related('model2_set') (this is 1 -> Many so you're pre-fetching, not selecting - i.e fetches and caches ahead of time each results' related model2 instances in one go.) However, that'd be easier to manage with a ManyToMany field with User on Model1, bypassing the need for Model2 entirely (which is essentially just a through table here): https://docs.djangoproject.com/en/4.1/topics/db/examples/many_to_many/
Get all the rows of a table along with matching rows of another table in django ORM using select_related
I have 2 models Model 1 class Model1(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) type = models.CharField(max_length=255) details = models.TextField(max_length=1000) price = models.FloatField() Model 2 class Model2(models.Model): id = models.IntegerField(primary_key=True) user_id = models.ForeignKey( User, on_delete=models.CASCADE ) plan_selected = models.ForeignKey(Model1) I am trying to check whether a user has selected any plans. The field plan_selected is a foreign key for Model1 - id. I want to get all details of Model1 along with details of Model2 in a single line of the query set using Django. So far I have tried to get is : sub_details = Model1.objects.select_related('Model2').filter(user_id=id)
[ "For select_related(), you want to select on the field name, not the related model's name. But all this does is that it adds a join, pulls all rows resulting from that join, and then your python representations have this relation cached (no more queries when accessed).\nYou also need to use __ to traverse relationships across lookups.\nDocs here: https://docs.djangoproject.com/en/4.1/ref/models/querysets/#select-related\nBut you don't even need select_related() for your goal: \"I am trying to check whether a user has selected any plans.\". You don't need Model1 here. That would be, given a user's id user_id:\nModel2.objects.filter(user_id=user_id).exists()\n\n# OR if you have a User instance `user`:\n\nuser.model2_set.exists()\n\nIf however what you want is \"all instances of Model1 related to user via a Model2\":\nModel1.objects.filter(model2__user_id=user_id).all()\n\nto which you can chain prefetch_related('model2_set') (this is 1 -> Many so you're pre-fetching, not selecting - i.e fetches and caches ahead of time each results' related model2 instances in one go.)\nHowever, that'd be easier to manage with a ManyToMany field with User on Model1, bypassing the need for Model2 entirely (which is essentially just a through table here): https://docs.djangoproject.com/en/4.1/topics/db/examples/many_to_many/\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "orm", "postgresql", "python" ]
stackoverflow_0074561030_django_django_rest_framework_orm_postgresql_python.txt
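A short sketch of the two queries suggested in the answer; the import path myapp.models is a hypothetical app module standing in for wherever the question's Model1 and Model2 actually live:

from myapp.models import Model1, Model2  # hypothetical app module

def user_has_plan(user_id):
    # Cheapest possible check: does any Model2 row point at this user?
    return Model2.objects.filter(user_id=user_id).exists()

def plans_for_user(user_id):
    # Every plan the user selected, with the related Model2 rows
    # fetched and cached ahead of time in one extra query.
    return Model1.objects.filter(
        model2__user_id=user_id
    ).prefetch_related('model2_set')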
Q: Is it possible in SQLAlchemy to define isolation level SNAPSHOT for PostgreSQL? My web application uses SQLAlchemy with a PostgreSQL database. Now there is a need to use the equivalent of Microsoft SQL Server's SNAPSHOT transaction isolation level, but I did not find a solution in the SQLAlchemy v1.4.44 documentation. A: Microsoft SQL Server's “snapshot isolation” is documented as The term "snapshot" reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment in time when the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits other transactions to execute without being blocked by a prior uncompleted transaction. Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server. This non-blocking behavior also significantly reduces the likelihood of deadlocks for complex transactions. Snapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction will roll back and an error will be raised. You can avoid this by using UPDLOCK hints for SELECT statements that access data to be modified. That behavior is exactly what you get if you use the transaction isolation level REPEATABLE READ in PostgreSQL, so that's what you should use. Note that PostgreSQL always uses multiversioning, so there is no way in PostgreSQL to emulate the read locks that Microsoft SQL server takes for other isolation levels.
Is it possible in SQLAlchemy to define isolation level SNAPSHOT for PostgreSQL?
My web application uses SQLAlchemy with a PostgreSQL database. Now there is a need to use the equivalent of Microsoft SQL Server's SNAPSHOT transaction isolation level, but I did not find a solution in the SQLAlchemy v1.4.44 documentation.
[ "Microsoft SQL Server's “snapshot isolation” is documented as\n\nThe term \"snapshot\" reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment in time when the transaction begins. No locks are acquired on the underlying data rows or data pages in a snapshot transaction, which permits other transactions to execute without being blocked by a prior uncompleted transaction. Transactions that modify data do not block transactions that read data, and transactions that read data do not block transactions that write data, as they normally would under the default READ COMMITTED isolation level in SQL Server. This non-blocking behavior also significantly reduces the likelihood of deadlocks for complex transactions.\nSnapshot isolation uses an optimistic concurrency model. If a snapshot transaction attempts to commit modifications to data that has changed since the transaction began, the transaction will roll back and an error will be raised. You can avoid this by using UPDLOCK hints for SELECT statements that access data to be modified.\n\nThat behavior is exactly what you get if you use the transaction isolation level REPEATABLE READ in PostgreSQL, so that's what you should use. Note that PostgreSQL always uses multiversioning, so there is no way in PostgreSQL to emulate the read locks that Microsoft SQL server takes for other isolation levels.\n" ]
[ 1 ]
[]
[]
[ "postgresql", "python", "python_3.x", "sql_server", "sqlalchemy" ]
stackoverflow_0074559100_postgresql_python_python_3.x_sql_server_sqlalchemy.txt
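A minimal sketch of requesting REPEATABLE READ in SQLAlchemy 1.4, per the answer above; the connection URL is a placeholder:

from sqlalchemy import create_engine, text

# Engine-wide default isolation level for every connection it hands out.
engine = create_engine(
    "postgresql+psycopg2://user:password@localhost/dbname",
    isolation_level="REPEATABLE READ",
)

# Or per connection, leaving the engine default untouched.
with engine.connect().execution_options(
        isolation_level="REPEATABLE READ") as conn:
    conn.execute(text("SELECT 1"))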
Q: DRF APITestCase force_authenticate make request.user return tuple instead of User object I have a custom authentication class following the docs class ExampleAuthentication(authentication.BaseAuthentication): def authenticate(self, request): username = request.META.get('HTTP_X_USERNAME') if not username: return None try: user = User.objects.get(username=username) except User.DoesNotExist: raise exceptions.AuthenticationFailed('No such user') return (user, None) and I used it in my APIView: class profile(APIView): permission_classes = () authentication_classes = (ExampleAuthentication,) def get(self, request, format=None): try: print('user', request.user) serializer = GetUserSerializer(request.user) return JsonResponse({'code': 200,'data': serializer.data}, status=200) except Exception as e: return JsonResponse({'code': 500,'data': "Server error"}, status=500) when I try to call it normally from the API through postman I got the following result from the print and it worked normally: user User(143) I wrote a test using force_authenticate(): class BaseUserAPITest(APITestCase): def setUp(self): # self.factory = APIRequestFactory() self.user = models.User.objects.get_or_create( username='test_user_1', uid='test_user_1', defaults={'agent_type': 1} ) def test_details(self): url = reverse("api.profile") self.client.force_authenticate(user=self.user) response = self.client.get(url) self.assertEqual(response.status_code, 200) I got server error because the print of request.user return a tuple instead of a User object, this is the print from the test log user (<User: User(143)>, True) I tried searching up and seem like there no result or explanation on why this happening My version: django==2.2.8 djangorestframework==3.10.2 A: The problem is not force_authenticate but get_or_create method. It returns tuple. First element of the tuple is object and second one is boolean indicating if object was created or not. To fix change your code in setUp method to this: def setUp(self): # self.factory = APIRequestFactory() self.user, _ = models.User.objects.get_or_create( username='test_user_1', uid='test_user_1', defaults={'agent_type': 1} )
DRF APITestCase force_authenticate make request.user return tuple instead of User object
I have a custom authentication class following the docs class ExampleAuthentication(authentication.BaseAuthentication): def authenticate(self, request): username = request.META.get('HTTP_X_USERNAME') if not username: return None try: user = User.objects.get(username=username) except User.DoesNotExist: raise exceptions.AuthenticationFailed('No such user') return (user, None) and I used it in my APIView: class profile(APIView): permission_classes = () authentication_classes = (ExampleAuthentication,) def get(self, request, format=None): try: print('user', request.user) serializer = GetUserSerializer(request.user) return JsonResponse({'code': 200,'data': serializer.data}, status=200) except Exception as e: return JsonResponse({'code': 500,'data': "Server error"}, status=500) when I try to call it normally from the API through postman I got the following result from the print and it worked normally: user User(143) I wrote a test using force_authenticate(): class BaseUserAPITest(APITestCase): def setUp(self): # self.factory = APIRequestFactory() self.user = models.User.objects.get_or_create( username='test_user_1', uid='test_user_1', defaults={'agent_type': 1} ) def test_details(self): url = reverse("api.profile") self.client.force_authenticate(user=self.user) response = self.client.get(url) self.assertEqual(response.status_code, 200) I got server error because the print of request.user return a tuple instead of a User object, this is the print from the test log user (<User: User(143)>, True) I tried searching up and seem like there no result or explanation on why this happening My version: django==2.2.8 djangorestframework==3.10.2
[ "The problem is not force_authenticate but get_or_create method. It returns tuple. First element of the tuple is object and second one is boolean indicating if object was created or not. To fix change your code in setUp method to this:\ndef setUp(self):\n # self.factory = APIRequestFactory()\n self.user, _ = models.User.objects.get_or_create(\n username='test_user_1',\n uid='test_user_1',\n defaults={'agent_type': 1}\n )\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_rest_framework", "python", "testing" ]
stackoverflow_0074559404_django_django_rest_framework_python_testing.txt
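A condensed sketch of the fix: get_or_create returns an (object, created) pair, so unpack it inside setUp; myapp is a hypothetical module standing in for wherever the custom User model lives, and the field values are the ones from the question:

from rest_framework.test import APITestCase

from myapp import models  # hypothetical module holding the custom User

class BaseUserAPITest(APITestCase):
    def setUp(self):
        # Unpack the (object, created) pair instead of storing the tuple.
        self.user, created = models.User.objects.get_or_create(
            username='test_user_1',
            uid='test_user_1',
            defaults={'agent_type': 1},
        )

Indexing with [0] works identically when the created flag is not needed.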
Q: Auto reloading flask server on Docker I want my flask server to detect changes in code and reload automatically. I'm running this on docker container. Whenever I change something, I have to build and up again the container. I have no idea where's wrong. This is my first time using flask. Here's my tree ├── docker-compose.yml └── web ├── Dockerfile ├── app.py ├── crawler.py └── requirements.txt and code(app.py) from flask import Flask import requests app = Flask(__name__) @app.route('/') def hello_world(): return 'Hello Flask!!' if __name__ == '__main__': app.run(debug = True, host = '0.0.0.0') and docker-compose version: '2' services: web: build: ./web ports: - "5000:5000" volumes: - ./web:/code Please give me some advice. Thank you in advance. A: Flask supports code reload when in debug mode as you've already done. The problem is that the application is running on a container and this isolates it from the real source code you are developing. Anyway, you can share the source between the running container and the host with volumes on your docker-compose.yaml like this: Here is the docker-compose.yaml version: "3" services: web: build: ./web ports: ['5000:5000'] volumes: ['./web:/app'] And here the Dockerfile: FROM python:alpine EXPOSE 5000 WORKDIR app COPY * /app/ RUN pip install -r requirements.txt CMD python app.py A: I managed to achieve flask auto reload in docker using docker-compose with the following config: version: "3" services: web: build: ./web entrypoint: - flask - run - --host=0.0.0.0 environment: FLASK_DEBUG: 1 FLASK_APP: ./app.py ports: ['5000:5000'] volumes: ['./web:/app'] You have to manually specify environment variables and entrypoint in the docker compose file in order to achieve auto reload. A: Assuming your file structure is the below: Dockerfile: (note WORKING DIR) FROM python:3.6.5-slim RUN mkdir -p /home/project/bottle WORKDIR /home/project/bottle COPY requirements.txt . RUN pip install --upgrade pip --no-cache-dir -r requirements.txt COPY . . CMD ["python", "app.py"] Docker Compose: version: '3' services: web: container_name: web volumes: - './web:/home/project/bottle/' <== Local Folder:WORKDIR build: ./web ports: - "8080:8080" A: This is my example: version: "3.8" services: local-development: build: context: . dockerfile: Dockerfiles/development.Dockerfile ports: - "5000:5000" volumes: - .:/code from flask import Flask app = Flask(__name__) @app.route('/') def hello_world(): return "hello world" if __name__ in "__main__": app.run(host="0.0.0.0", port=5000, debug=True) debug=True enables Flask to change as your code changes. Docker already plugs into your fs events to change the code "in the container". A: If the compose is running different services (app and rq, for instance) you need to set up the volumes on both, or it won't work.
Auto reloading flask server on Docker
I want my Flask server to detect changes in code and reload automatically. I'm running this in a Docker container. Whenever I change something, I have to rebuild and bring the container up again. I have no idea what's wrong. This is my first time using Flask. Here's my tree ├── docker-compose.yml └── web ├── Dockerfile ├── app.py ├── crawler.py └── requirements.txt and code (app.py) from flask import Flask import requests app = Flask(__name__) @app.route('/') def hello_world(): return 'Hello Flask!!' if __name__ == '__main__': app.run(debug = True, host = '0.0.0.0') and docker-compose version: '2' services: web: build: ./web ports: - "5000:5000" volumes: - ./web:/code Please give me some advice. Thank you in advance.
[ "Flask supports code reload when in debug mode as you've already done. The problem is that the application is running on a container and this isolates it from the real source code you are developing. Anyway, you can share the source between the running container and the host with volumes on your docker-compose.yaml like this:\nHere is the docker-compose.yaml\nversion: \"3\"\nservices:\n web:\n build: ./web\n ports: ['5000:5000']\n volumes: ['./web:/app']\n\nAnd here the Dockerfile:\nFROM python:alpine\n\nEXPOSE 5000\n\nWORKDIR app\n\nCOPY * /app/\n\nRUN pip install -r requirements.txt\n\nCMD python app.py\n\n", "I managed to achieve flask auto reload in docker using docker-compose with the following config:\nversion: \"3\"\nservices:\n web:\n build: ./web\n entrypoint:\n - flask\n - run\n - --host=0.0.0.0\n environment:\n FLASK_DEBUG: 1\n FLASK_APP: ./app.py\n ports: ['5000:5000']\n volumes: ['./web:/app']\n\nYou have to manually specify environment variables and entrypoint in the docker compose file in order to achieve auto reload.\n", "Assuming your file structure is the below:\n\nDockerfile: (note WORKING DIR)\nFROM python:3.6.5-slim\nRUN mkdir -p /home/project/bottle\nWORKDIR /home/project/bottle \nCOPY requirements.txt .\nRUN pip install --upgrade pip --no-cache-dir -r requirements.txt\nCOPY . .\nCMD [\"python\", \"app.py\"]\n\nDocker Compose:\nversion: '3'\n\nservices:\n web:\n container_name: web\n volumes:\n - './web:/home/project/bottle/' <== Local Folder:WORKDIR\n build: ./web\n ports:\n - \"8080:8080\"\n\n", "This is my example:\nversion: \"3.8\"\n\nservices:\n local-development:\n build:\n context: .\n dockerfile: Dockerfiles/development.Dockerfile\n ports:\n - \"5000:5000\"\n volumes:\n - .:/code\n\nfrom flask import Flask\n\napp = Flask(__name__)\n\n\n@app.route('/')\ndef hello_world():\n return \"hello world\"\n\n\nif __name__ in \"__main__\":\n app.run(host=\"0.0.0.0\", port=5000, debug=True)\n\ndebug=True enables Flask to change as your code changes.\nDocker already plugs into your fs events to change the code \"in the container\".\n", "If the compose is running different services (app and rq, for instance) you need to set up the volumes on both, or it won't work.\n" ]
[ 35, 21, 6, 1, 0 ]
[]
[]
[ "docker", "flask", "python" ]
stackoverflow_0044342741_docker_flask_python.txt
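A small variation on the question's app.py that reads the debug flag from the environment, so the same image runs with the reloader on in development (FLASK_DEBUG=1 plus the bind-mounted ./web volume) and off in production; port 5000 matches the compose file:

import os

from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello Flask!!'

if __name__ == '__main__':
    # debug=True switches on Werkzeug's reloader; the bind mount makes
    # edits on the host visible inside the container so reloads trigger.
    app.run(debug=os.environ.get('FLASK_DEBUG') == '1',
            host='0.0.0.0', port=5000)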
Q: Extract first sequence of strings in pandas column I have a column in a DF as below | Column A | | ab, bce, bc | | bc, abcd, ab | | ab, cd, abc | and i want to create a new column that only takes the first sequence, as showed below | Column A | Column B | | ab, bce, bc | ab | | bc, abcd, ab | bc | | ab, cd, abc | ab | I tried with this code but it only gives me the first letter of the first sequence, not the entire abbrevation df.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x[0]) A: I guess the items in columnA are strings like e.g. 'ab, bce, bc', so just use split ;). df.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x.split(',')[0]) A: You can alos try vectorised str method split and use integer indexing on the list to get the first element: df['Column B'] = df['Column A'].str.split(',').str[0] Should gives Column A Column B ab, bce, bc ab bc, abcd, ab bc ab, cd, abc ab A: You're close, you just need to convert strings to lists with pandas.Series.split before the map : df["Column B"]= df["Column A"].str.split(",").map(lambda x: x[0]) You can also use pandas.Series.get : df["Column B"]= df["Column A"].str.split(",").str.get(0) Another option is list comprehension: df["Column B"]= [el[0] for el in df["Column A"].str.split(",")] # Output : print(df) Column A Column B 0 ab, bce, bc ab 1 bc, abcd, ab bc 2 ab, cd, abc ab A: So,the row is treated as string and you are getting the first index of string "ab,bce,bc". You need to convert that to a list and then take the first element which will be "ab" now. df.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x.split(",")[0]) This creates "ColumnB" as you require. Hope it helps! A: If you want the first chunk, don't split. Instead extract the initial non , characters. This will be more efficient: df['Column B'] = df['Column A'].str.extract('([^,]+)') Output: Column A Column B 0 ab, bce, bc ab 1 bc, abcd, ab bc 2 ab, cd, abc ab
Extract first sequence of strings in pandas column
I have a column in a DF as below | Column A | | ab, bce, bc | | bc, abcd, ab | | ab, cd, abc | and I want to create a new column that only takes the first sequence, as shown below | Column A | Column B | | ab, bce, bc | ab | | bc, abcd, ab | bc | | ab, cd, abc | ab | I tried with this code but it only gives me the first letter of the first sequence, not the entire abbreviation df.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x[0])
[ "I guess the items in columnA are strings like e.g. 'ab, bce, bc', so just use split ;).\ndf.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x.split(',')[0])\n\n", "You can alos try vectorised str method split and use integer indexing on the list to get the first element:\ndf['Column B'] = df['Column A'].str.split(',').str[0]\n\nShould gives\nColumn A Column B \nab, bce, bc ab \nbc, abcd, ab bc \nab, cd, abc ab \n\n", "You're close, you just need to convert strings to lists with pandas.Series.split before the map :\ndf[\"Column B\"]= df[\"Column A\"].str.split(\",\").map(lambda x: x[0])\n\nYou can also use pandas.Series.get :\ndf[\"Column B\"]= df[\"Column A\"].str.split(\",\").str.get(0)\n\nAnother option is list comprehension:\ndf[\"Column B\"]= [el[0] for el in df[\"Column A\"].str.split(\",\")]\n\n# Output :\nprint(df)\n\n Column A Column B\n0 ab, bce, bc ab\n1 bc, abcd, ab bc\n2 ab, cd, abc ab\n\n", "So,the row is treated as string and you are getting the first index of string \"ab,bce,bc\".\nYou need to convert that to a list and then take the first element which will be \"ab\" now.\ndf.loc[:, 'ColumnB'] = df.ColumnA.map(lambda x: x.split(\",\")[0])\n\nThis creates \"ColumnB\" as you require.\nHope it helps!\n", "If you want the first chunk, don't split. Instead extract the initial non , characters. This will be more efficient:\ndf['Column B'] = df['Column A'].str.extract('([^,]+)')\n\nOutput:\n Column A Column B\n0 ab, bce, bc ab\n1 bc, abcd, ab bc\n2 ab, cd, abc ab\n\n" ]
[ 2, 1, 1, 1, 0 ]
[]
[]
[ "columnsorting", "pandas", "python" ]
stackoverflow_0074560547_columnsorting_pandas_python.txt
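The split and extract approaches above, reproduced side by side on the question's three rows; expand=False makes str.extract return a Series so it can be assigned directly:

import pandas as pd

df = pd.DataFrame({'Column A': ['ab, bce, bc', 'bc, abcd, ab', 'ab, cd, abc']})

df['via_split'] = df['Column A'].str.split(',').str[0]
df['via_extract'] = df['Column A'].str.extract(r'([^,]+)', expand=False)
assert df['via_split'].equals(df['via_extract'])
print(df)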
Q: Trying to parse the div, but I get an error import requests from bs4 import BeautifulSoup from texttable import Texttable url = "https://realpython.github.io/fake-jobs/" site = requests.get(url) #send a request to the site table = Texttable() #create a table table.set_chars(['-', '|', '+', '=']) table.header(['Titel','Company','Location']) table.set_cols_dtype(['t','i','a']) table.set_cols_align(["c", "c", "c"]) table.set_cols_valign(["m", "m", "m"]) table.set_cols_width([20,20,20]) table.set_deco(Texttable.BORDER|Texttable.HEADER |Texttable.HLINES| Texttable.VLINES) with open('Shore.txt', 'w') as f: #create a file pass soup = BeautifulSoup(site.content, "html.parser") results = soup.find(id="ResultsContainer") job_elements = results.find_all("div", class_="card-content") #find all div with class "card-content" for job_element in job_elements: title_element = job_element.find("h2", class_="title") #get the different elements from divs with class "card-content" company_element = job_element.find("h3", class_="company") #get the different elements from divs with class "card-content" location_element = job_element.find("p", class_="location") #get the different elements from divs with class "card-content" item_element = job_element.find("a", class_="card-footer-item") #get the link with divs from class "card-content" item_site = requests.get(item_element["href"]) #send a request to the site from link item_soup = BeautifulSoup(item_site.content, "html.parser") results_site = item_soup.find(id="ResultsContainer") item_element_elements = results_site.find("div", class_="content") item_element_element = item_element_elements.find("p", class_=False) #get the text without class print(title_element.text.strip()) #get it all data received into the console print(company_element.text.strip()) #get all data received into the console print(location_element.text.strip()) #get all data received into the console print(item_element_element.text.strip()) # get all data received into the console table.add_row([title_element.text.strip(),company_element.text.strip(),location_element.text.strip()]) #add rows in corrects rows "add_rows" with open('Shore.txt', 'w') as f: #enter all data received into a table file f.write(table.draw()) f.write(str(len(job_elements))) f.close print(len(job_elements)) #get the number of elements with the class Error: line 38, in <module> item_element_elements = results_site.find("div", class_="content") AttributeError: 'NoneType' object has no attribute 'find' Trying to parse the div: item_element_elements = results_site.find("div", class_="content") item_element_element = item_element_elements.find("p", class_=False) But I get an error. Can't find the "find" attribute. I was able to parse all the other elements. No idea how to fix this. A: Try to select your elements more specific - Issue here is that you select the first link and not that one that is leading to the details: item_element = job_element.select_one("a.card-footer-item[href*='fake-jobs/jobs']") or item_element = job_element.find_all("a", class_="card-footer-item")[-1] You could do it also with .find() but may checkout the css selectors Example from bs4 import BeautifulSoup import requests url = "https://realpython.github.io/fake-jobs/" soup = BeautifulSoup(requests.get(url).text) for job_element in soup.select('#ResultsContainer .card-content')[:1]: #... 
item_element = job_element.select_one("a.card-footer-item[href*='fake-jobs/jobs']") #get the link with divs from class "card-content" item_soup = BeautifulSoup(requests.get(item_element.get("href")).text) item_element_element = item_soup.select_one("div.content p").text #get the text without class print(item_element_element)
Trying to parse the div, but I get an error
import requests from bs4 import BeautifulSoup from texttable import Texttable url = "https://realpython.github.io/fake-jobs/" site = requests.get(url) #send a request to the site table = Texttable() #create a table table.set_chars(['-', '|', '+', '=']) table.header(['Titel','Company','Location']) table.set_cols_dtype(['t','i','a']) table.set_cols_align(["c", "c", "c"]) table.set_cols_valign(["m", "m", "m"]) table.set_cols_width([20,20,20]) table.set_deco(Texttable.BORDER|Texttable.HEADER |Texttable.HLINES| Texttable.VLINES) with open('Shore.txt', 'w') as f: #create a file pass soup = BeautifulSoup(site.content, "html.parser") results = soup.find(id="ResultsContainer") job_elements = results.find_all("div", class_="card-content") #find all div with class "card-content" for job_element in job_elements: title_element = job_element.find("h2", class_="title") #get the different elements from divs with class "card-content" company_element = job_element.find("h3", class_="company") #get the different elements from divs with class "card-content" location_element = job_element.find("p", class_="location") #get the different elements from divs with class "card-content" item_element = job_element.find("a", class_="card-footer-item") #get the link with divs from class "card-content" item_site = requests.get(item_element["href"]) #send a request to the site from link item_soup = BeautifulSoup(item_site.content, "html.parser") results_site = item_soup.find(id="ResultsContainer") item_element_elements = results_site.find("div", class_="content") item_element_element = item_element_elements.find("p", class_=False) #get the text without class print(title_element.text.strip()) #get it all data received into the console print(company_element.text.strip()) #get all data received into the console print(location_element.text.strip()) #get all data received into the console print(item_element_element.text.strip()) # get all data received into the console table.add_row([title_element.text.strip(),company_element.text.strip(),location_element.text.strip()]) #add rows in corrects rows "add_rows" with open('Shore.txt', 'w') as f: #enter all data received into a table file f.write(table.draw()) f.write(str(len(job_elements))) f.close print(len(job_elements)) #get the number of elements with the class Error: line 38, in <module> item_element_elements = results_site.find("div", class_="content") AttributeError: 'NoneType' object has no attribute 'find' Trying to parse the div: item_element_elements = results_site.find("div", class_="content") item_element_element = item_element_elements.find("p", class_=False) But I get an error. Can't find the "find" attribute. I was able to parse all the other elements. No idea how to fix this.
[ "Try to select your elements more specific - Issue here is that you select the first link and not that one that is leading to the details:\nitem_element = job_element.select_one(\"a.card-footer-item[href*='fake-jobs/jobs']\")\n\nor\nitem_element = job_element.find_all(\"a\", class_=\"card-footer-item\")[-1]\n\nYou could do it also with .find() but may checkout the css selectors\nExample\nfrom bs4 import BeautifulSoup\nimport requests\n \nurl = \"https://realpython.github.io/fake-jobs/\"\nsoup = BeautifulSoup(requests.get(url).text)\n\nfor job_element in soup.select('#ResultsContainer .card-content')[:1]:\n #...\n item_element = job_element.select_one(\"a.card-footer-item[href*='fake-jobs/jobs']\") #get the link with divs from class \"card-content\"\n\n item_soup = BeautifulSoup(requests.get(item_element.get(\"href\")).text)\n item_element_element = item_soup.select_one(\"div.content p\").text #get the text without class\n print(item_element_element)\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "parsing", "python", "python_3.x", "web_scraping" ]
stackoverflow_0074560676_beautifulsoup_parsing_python_python_3.x_web_scraping.txt
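An offline sketch of the selector fix; the HTML fragment is invented to mirror the fake-jobs card structure, where the second footer link is the one pointing at the detail page:

from bs4 import BeautifulSoup

html = """
<div class="card-content">
  <footer class="card-footer">
    <a class="card-footer-item" href="#contact">Contact</a>
    <a class="card-footer-item"
       href="https://realpython.github.io/fake-jobs/jobs/senior-python-developer-0.html">Apply</a>
  </footer>
</div>
"""
card = BeautifulSoup(html, "html.parser").select_one("div.card-content")

# Match on the href instead of taking the first footer link.
detail_link = card.select_one("a.card-footer-item[href*='fake-jobs/jobs']")
print(detail_link["href"])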
Q: How to install nvidia apex on Google Colab what I did is follow the instruction on the official github site !git clone https://github.com/NVIDIA/apex !cd apex !pip install -v --no-cache-dir ./ it gives me the error: ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. Exception information: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 326, in run self.name, wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 268, in populate_requirement_set wheel_cache=wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/constructors.py", line 248, in install_req_from_line "nor 'pyproject.toml' found." % name pip._internal.exceptions.InstallationError: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. A: Worked for me after adding CUDA_HOME enviroment variable: %%writefile setup.sh export CUDA_HOME=/usr/local/cuda-10.1 git clone https://github.com/NVIDIA/apex pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex !sh setup.sh A: (wanted to just add a comment but I don't have enough reputation...) it works for me but the cd is actually not required. Also, I needed the two global options as suggested here: https://github.com/NVIDIA/apex/issues/86 %%writefile setup.sh git clone https://github.com/NVIDIA/apex pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex then !sh setup.sh A: Updated First, create a file e.g. setup.sh as follows: For apex with CUDA and C++ extensions: %%writefile setup.sh git clone https://github.com/NVIDIA/apex cd apex pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./ Then, install it !sh setup.sh For Python-only build %%writefile setup.sh git clone https://github.com/NVIDIA/apex cd apex pip install -v --disable-pip-version-check --no-cache-dir ./ A Python-only build omits certain Fused kernels required to use apex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc. Check apex quickstart. A: In colab instead of using "!" use "%' before cd command !git clone https://github.com/NVIDIA/apex %cd apex !pip install -v --no-cache-dir ./ The above code will work just fine. A: I tried a few options, but I liked the one in this website, which worked very well with fast_bert and torch: try: import apex except Exception: ! git clone https://github.com/NVIDIA/apex.git % cd apex !pip install --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" . %cd .. A: The problem is with !cd apex. Use %cd apex instead. Read this: https://stackoverflow.com/a/57212513/8690463 A: I use paperspace, and this worked for me: !pip install git+https://github.com/NVIDIA/apex A: The following worked for me in November, 2022. apex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc. require CUDA and C++ extensions (see e.g., here). Thus, it's not sufficient to install the Python-only built. To built apex the cuda version of PyTorch and apex must match, as explained here. Query the version Ubuntu Colab is running on: !lsb_release -a No LSB modules are available. 
Distributor ID: Ubuntu Description: Ubuntu 18.04.6 LTS Release: 18.04 Codename: bionic To get the current cuda version run: !nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_21:12:58_PST_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 Look-up the latest PyTorch built and compute plattform here. Next, got to the cuda toolkit archive and configure a version that matches the cuda-version of PyTorch and your OS-Version. Copy the installation instructions: wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ sudo apt-get update sudo apt-get -y install cuda Remove Sudo and change the last line to include your cuda-version e.g., !apt-get -y install cuda-11-7 (without exclamation mark if run in shell directly): !wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin !mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 !wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ !apt-get update Your cuda version will now be updated: !nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Wed_Jun__8_16:49:14_PDT_2022 Cuda compilation tools, release 11.7, V11.7.99 Build cuda_11.7.r11.7/compiler.31442593_0 Next, updated the outdated Pytorch version in Google Colab: !pip install torch -U Build apex. Depending on what you might require fewer global options: !git clone https://github.com/NVIDIA/apex.git && cd apex && pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_multihead_attn" . && cd .. && rm -rf apex ... Successfully installed apex-0.1 You can now import apex as desired: from apex import optimizers, normalization ...
How to install nvidia apex on Google Colab
what I did is follow the instruction on the official github site !git clone https://github.com/NVIDIA/apex !cd apex !pip install -v --no-cache-dir ./ it gives me the error: ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. Exception information: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 178, in main status = self.run(options, args) File "/usr/local/lib/python3.6/dist-packages/pip/_internal/commands/install.py", line 326, in run self.name, wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 268, in populate_requirement_set wheel_cache=wheel_cache File "/usr/local/lib/python3.6/dist-packages/pip/_internal/req/constructors.py", line 248, in install_req_from_line "nor 'pyproject.toml' found." % name pip._internal.exceptions.InstallationError: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
[ "Worked for me after adding CUDA_HOME enviroment variable:\n%%writefile setup.sh\n\nexport CUDA_HOME=/usr/local/cuda-10.1\ngit clone https://github.com/NVIDIA/apex\npip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\n\n!sh setup.sh\n\n", "(wanted to just add a comment but I don't have enough reputation...)\nit works for me but the cd is actually not required. Also, I needed the two global options as suggested here: https://github.com/NVIDIA/apex/issues/86\n%%writefile setup.sh\n\ngit clone https://github.com/NVIDIA/apex\npip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./apex\n\nthen\n!sh setup.sh\n\n", "Updated\nFirst, create a file e.g. setup.sh as follows:\nFor apex with CUDA and C++ extensions:\n%%writefile setup.sh\n\ngit clone https://github.com/NVIDIA/apex\ncd apex\npip install -v --disable-pip-version-check --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" ./\n\nThen, install it\n!sh setup.sh\n\n\nFor Python-only build\n%%writefile setup.sh\n\ngit clone https://github.com/NVIDIA/apex\ncd apex\npip install -v --disable-pip-version-check --no-cache-dir ./\n\nA Python-only build omits certain Fused kernels required to use apex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc.\nCheck apex quickstart.\n", "In colab instead of using \"!\" use \"%' before cd command\n!git clone https://github.com/NVIDIA/apex\n%cd apex\n!pip install -v --no-cache-dir ./\n\nThe above code will work just fine.\n", "I tried a few options, but I liked the one in this website, which worked very well with fast_bert and torch:\ntry:\n import apex\nexcept Exception:\n ! git clone https://github.com/NVIDIA/apex.git\n % cd apex\n !pip install --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" .\n %cd ..\n\n", "The problem is with !cd apex. Use %cd apex instead.\nRead this: https://stackoverflow.com/a/57212513/8690463\n", "I use paperspace, and this worked for me:\n!pip install git+https://github.com/NVIDIA/apex\n\n", "The following worked for me in November, 2022.\napex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc. require CUDA and C++ extensions (see e.g., here). Thus, it's not sufficient to install the Python-only built. 
To built apex the cuda version of PyTorch and apex must match, as explained here.\nQuery the version Ubuntu Colab is running on:\n!lsb_release -a\n\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.6 LTS\nRelease: 18.04\nCodename: bionic\n\nTo get the current cuda version run:\n!nvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2021 NVIDIA Corporation\nBuilt on Sun_Feb_14_21:12:58_PST_2021\nCuda compilation tools, release 11.2, V11.2.152\nBuild cuda_11.2.r11.2/compiler.29618528_0 \n\nLook-up the latest PyTorch built and compute plattform here.\n\nNext, got to the cuda toolkit archive and configure a version that matches the cuda-version of PyTorch and your OS-Version.\n\nCopy the installation instructions:\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\nsudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\nwget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\nsudo apt-get update\nsudo apt-get -y install cuda\n\nRemove Sudo and change the last line to include your cuda-version e.g., !apt-get -y install cuda-11-7 (without exclamation mark if run in shell directly):\n!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\n!mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\n!wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\n!apt-get update\n\nYour cuda version will now be updated:\n!nvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2022 NVIDIA Corporation\nBuilt on Wed_Jun__8_16:49:14_PDT_2022\nCuda compilation tools, release 11.7, V11.7.99\nBuild cuda_11.7.r11.7/compiler.31442593_0\n\nNext, updated the outdated Pytorch version in Google Colab:\n!pip install torch -U\n\nBuild apex. Depending on what you might require fewer global options:\n!git clone https://github.com/NVIDIA/apex.git && cd apex && pip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" --global-option=\"--fast_multihead_attn\" . && cd .. && rm -rf apex\n\n...\nSuccessfully installed apex-0.1\n\nYou can now import apex as desired:\nfrom apex import optimizers, normalization\n...\n\n" ]
[ 18, 17, 10, 6, 2, 1, 0, 0 ]
[]
[]
[ "google_colaboratory", "gpu", "nvidia", "python", "pytorch" ]
stackoverflow_0057284345_google_colaboratory_gpu_nvidia_python_pytorch.txt
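A quick sanity check worth running before any of the builds above, since every answer hinges on the CUDA version PyTorch was compiled against matching the installed toolkit; this snippet only reads versions and changes nothing:

import torch

print(torch.__version__)          # e.g. '1.13.0+cu117'
print(torch.version.cuda)         # CUDA version PyTorch was built against
print(torch.cuda.is_available())  # True once driver and toolkit are usable

# Compare torch.version.cuda with the release reported by `nvcc --version`;
# apex's CUDA extensions only build cleanly when the two match.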
Q: Allow only one layer at a time in Folium LayerControl I want to make an "interactive" map with multiple layers using geopandas explore() function and folium. I was able to generate exactly what I aim for, with one exception: the constraint that only one layer would be allowed at a time. In other words, I want that if someone click on the layer "Adaptation climat ☀❄️", then the layer that was previously selected is automatically unselected and only "Adaptation climat ☀❄️" is displayed. I looked online for hours and did not find a solution. I guess it's at the LayerControl() level of Folium but I can't find the solution. I found FeatureGroup layer control in Folium - only one active layer that is related but does not give an answer. Thanks in advance for your help! The dataframe is of the form: The html map obtained looks like: And the code is: list_var = ['note_vetuste', 'note_encadrement', 'note_climat', 'note_cantine', 'note_abord_securise'] list_var_display = ['Vétusté des locaux ', 'Moyens humains ‍', 'Adaptation climat ☀❄️', 'Cantine ', 'Sécurisation des abords ‍♀️'] m = gdf[['ecole', list_var[0], list_var[0]+'_count', 'geometry']].explore( column=list_var[0], cmap = 'RdYlGn', marker_kwds=dict(radius=10, fill=True), legend=False, tooltip=False, popup=True, k=5, # use 10 bins vmin=1, vmax=5, tiles=None, legend_kwds=dict(caption='',colorbar=True, fmt="{:5.2f}"), name=list_var_display[0], missing_kwds={'color': 'darkgrey'} ) list_var.pop(0) list_var_display.pop(0) for k, var in enumerate(list_var): gdf[['ecole', var, var+'_count', 'geometry']].explore( m=m, column=var, marker_kwds=dict(radius=10, fill=True), cmap='RdYlGn', tooltip=False, popup=True, legend=False, k=5, # use 10 bins vmin=1, vmax=5, name=list_var_display[k], # name of the layer in the map missing_kwds={'color': 'darkgrey'}, show=False ) folium.TileLayer('cartodbpositron', control=False).add_to(m) # use folium to add alternative tiles folium.LayerControl(position="topleft", collapsed=False).add_to(m) # use folium to add layer control m.save("carte.html") A: It seems to be possible that you differ that behaviour with the overlay-status of the layers. Now we need to find a possibility to forward overlay=False to your map as you include/create the map with geopandas (like described in your referenced answer https://stackoverflow.com/a/63189269/13843906 ). Perhaps try passing it with the second explore statement: ...explore(m=m, ... , overlay=False, ...) or Another info: In this github issue is mentioned that this functionality will be added in folium 0.14: https://github.com/python-visualization/folium/issues/1025 Looking through the topic you will find a link to a jupyter where the new behaviour seems described: https://github.com/python-visualization/folium/pull/1592#pullrequestreview-1184241705 Try updating your folium version and add these lines to your code (perhaps use list_var_displayinstead of list_var: from folium.plugins import GroupedLayerControl GroupedLayerControl( groups={'groups1': list_var}, collapsed=False, ).add_to(m) Not sure if it works as you do not use pure folium, but it is worth a try
Allow only one layer at a time in Folium LayerControl
I want to make an "interactive" map with multiple layers using geopandas explore() function and folium. I was able to generate exactly what I aim for, with one exception: the constraint that only one layer would be allowed at a time. In other words, I want that if someone click on the layer "Adaptation climat ☀❄️", then the layer that was previously selected is automatically unselected and only "Adaptation climat ☀❄️" is displayed. I looked online for hours and did not find a solution. I guess it's at the LayerControl() level of Folium but I can't find the solution. I found FeatureGroup layer control in Folium - only one active layer that is related but does not give an answer. Thanks in advance for your help! The dataframe is of the form: The html map obtained looks like: And the code is: list_var = ['note_vetuste', 'note_encadrement', 'note_climat', 'note_cantine', 'note_abord_securise'] list_var_display = ['Vétusté des locaux ', 'Moyens humains ‍', 'Adaptation climat ☀❄️', 'Cantine ', 'Sécurisation des abords ‍♀️'] m = gdf[['ecole', list_var[0], list_var[0]+'_count', 'geometry']].explore( column=list_var[0], cmap = 'RdYlGn', marker_kwds=dict(radius=10, fill=True), legend=False, tooltip=False, popup=True, k=5, # use 10 bins vmin=1, vmax=5, tiles=None, legend_kwds=dict(caption='',colorbar=True, fmt="{:5.2f}"), name=list_var_display[0], missing_kwds={'color': 'darkgrey'} ) list_var.pop(0) list_var_display.pop(0) for k, var in enumerate(list_var): gdf[['ecole', var, var+'_count', 'geometry']].explore( m=m, column=var, marker_kwds=dict(radius=10, fill=True), cmap='RdYlGn', tooltip=False, popup=True, legend=False, k=5, # use 10 bins vmin=1, vmax=5, name=list_var_display[k], # name of the layer in the map missing_kwds={'color': 'darkgrey'}, show=False ) folium.TileLayer('cartodbpositron', control=False).add_to(m) # use folium to add alternative tiles folium.LayerControl(position="topleft", collapsed=False).add_to(m) # use folium to add layer control m.save("carte.html")
[ "It seems to be possible that you differ that behaviour with the overlay-status of the layers. Now we need to find a possibility to forward overlay=False to your map as you include/create the map with geopandas (like described in your referenced answer https://stackoverflow.com/a/63189269/13843906 ). Perhaps try passing it with the second explore statement:\n ...explore(m=m,\n ... ,\n overlay=False,\n ...)\n\nor\nAnother info: In this github issue is mentioned that this functionality will be added in folium 0.14: https://github.com/python-visualization/folium/issues/1025 Looking through the topic you will find a link to a jupyter where the new behaviour seems described: https://github.com/python-visualization/folium/pull/1592#pullrequestreview-1184241705\nTry updating your folium version and add these lines to your code (perhaps use list_var_displayinstead of list_var:\nfrom folium.plugins import GroupedLayerControl\n\nGroupedLayerControl(\n groups={'groups1': list_var},\n collapsed=False,\n).add_to(m) \n\nNot sure if it works as you do not use pure folium, but it is worth a try\n" ]
[ 2 ]
[]
[]
[ "folium", "geopandas", "python" ]
stackoverflow_0074561214_folium_geopandas_python.txt
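A self-contained sketch of the GroupedLayerControl route from the answer (assuming folium >= 0.14); the two feature groups, marker coordinates, and group title are made up, and exclusive_groups defaults to True, which is what gives the one-layer-at-a-time radio behaviour:

import folium
from folium.plugins import GroupedLayerControl

m = folium.Map(location=[48.85, 2.35], zoom_start=12, tiles="cartodbpositron")

fg1 = folium.FeatureGroup(name="Vétusté des locaux").add_to(m)
fg2 = folium.FeatureGroup(name="Adaptation climat").add_to(m)
folium.Marker([48.85, 2.35]).add_to(fg1)
folium.Marker([48.86, 2.34]).add_to(fg2)

GroupedLayerControl(groups={"Notes": [fg1, fg2]}, collapsed=False).add_to(m)
m.save("carte_groupee.html")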
Q: Python OSError: [Errno 22] Invalid argument when use pd.read_csv with two csv files I am new here and I need a help. I got a trouble with OSError: [Errno 22] Invalid argument when I tried to use pd.read_csv with two csv files for dataset preprocess. I created two dummy dataset as below: test_1.csv: DATE,permno,datadate,gvkey, ....... (and a lot of features) 19260130,10006,19260130,3934, ........ 19260130,10022,19260130,3942, ........ 19260130,10030,19260130,3969, ........ 19260130,10049,19260130,3976, ........ 19260130,10057,19260130,3977, ........ 19260130,10065,19260130,3984, ........ 19260130,10073,19260130,3985, ........ test_2.csv: DATE,permno,datadate,Q's ratio 19260130,10006,19260130,1.16541374714217 19260130,10022,19260130,1.01102923080989 19260130,10030,19260130,1.06549175520466 19260130,10049,19260130,1.54355923255147 19260130,10057,19260130,3.56608118773024 19260130,10065,19260130,2.6860629359338 19260130,10073,19260130,2.0303420958083 my code here: import pandas as pd DATA_DIR = r'C:\Users\steve\Desktop\Data\test_1.csv' df = pd.read_csv(DATA_DIR, parse_dates=['DATE', 'datadate']) q = pd.read_csv(DATA_DIR + r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) I got this error: Traceback (most recent call last): File "<input>", line 1, in <module> File "C:\Program Files\JetBrains\PyCharm 2021.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm 2021.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/steve/PycharmProjects/Empirical Asset via Machine Learning/test.py", line 5, in <module> q = pd.read_csv(DATA_DIR + r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 586, in read_csv return _read(filepath_or_buffer, kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 482, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 811, in __init__ self._engine = self._make_engine(self.engine) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 1040, in _make_engine return mapping[engine](self.f, **self.options) # type: ignore[call-arg] File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py", line 51, in __init__ self._open_handles(src, kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\base_parser.py", line 222, in _open_handles self.handles = get_handle( File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\common.py", line 702, in get_handle handle = open( OSError: [Errno 22] Invalid argument: 'C:\\Users\\steve\\Desktop\\Data\\test_1.csvC:\\Users\\steve\\Desktop\\Data\\test_2.csv' I have 
searched several similar topics on Stack Overflow and tried, but it seems no one uses pd.read_csv('test_1.csv' + 'test_2.csv', .....) like me. Please help, thanks. A: Invalid argument: 'C:\\Users\\steve\\Desktop\\Data\\test_1.csvC:\\Users\\steve\\Desktop\\Data\\test_2.csv' You are trying to read the CSV of an invalid path. You cannot read two csv files at once. When you call this... pd.read_csv(DATA_DIR + r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) You are concatenating two absolute paths together, just use... pd.read_csv(r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) A perhaps better way to do this using your DATA_DIR would be this (note that a raw string cannot end with a backslash, so keep the separator with the file name)... DATA_DIR = r'C:\Users\steve\Desktop\Data' df = pd.read_csv(DATA_DIR + r'\test_1.csv', parse_dates=['DATE', 'datadate']) q = pd.read_csv(DATA_DIR + r'\test_2.csv', index_col=0, parse_dates=[1, 3]) A: As we can see, you're trying to open a file path named 'C:\Users\steve\Desktop\Data\test_1.csvC:\Users\steve\Desktop\Data\test_2.csv' and it does not exist; you have to create two different variables to open the CSVs, and one to save the results. A: Try replacing the backslashes '\' with forward slashes '/'; that fixed it for me
Python OSError: [Errno 22] Invalid argument when use pd.read_csv with two csv files
I am new here and I need a help. I got a trouble with OSError: [Errno 22] Invalid argument when I tried to use pd.read_csv with two csv files for dataset preprocess. I created two dummy dataset as below: test_1.csv: DATE,permno,datadate,gvkey, ....... (and a lot of features) 19260130,10006,19260130,3934, ........ 19260130,10022,19260130,3942, ........ 19260130,10030,19260130,3969, ........ 19260130,10049,19260130,3976, ........ 19260130,10057,19260130,3977, ........ 19260130,10065,19260130,3984, ........ 19260130,10073,19260130,3985, ........ test_2.csv: DATE,permno,datadate,Q's ratio 19260130,10006,19260130,1.16541374714217 19260130,10022,19260130,1.01102923080989 19260130,10030,19260130,1.06549175520466 19260130,10049,19260130,1.54355923255147 19260130,10057,19260130,3.56608118773024 19260130,10065,19260130,2.6860629359338 19260130,10073,19260130,2.0303420958083 my code here: import pandas as pd DATA_DIR = r'C:\Users\steve\Desktop\Data\test_1.csv' df = pd.read_csv(DATA_DIR, parse_dates=['DATE', 'datadate']) q = pd.read_csv(DATA_DIR + r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) I got this error: Traceback (most recent call last): File "<input>", line 1, in <module> File "C:\Program Files\JetBrains\PyCharm 2021.1\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm 2021.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/steve/PycharmProjects/Empirical Asset via Machine Learning/test.py", line 5, in <module> q = pd.read_csv(DATA_DIR + r'C:\Users\steve\Desktop\Data\test_2.csv', index_col=0, parse_dates=[1, 3]) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 586, in read_csv return _read(filepath_or_buffer, kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 482, in _read parser = TextFileReader(filepath_or_buffer, **kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 811, in __init__ self._engine = self._make_engine(self.engine) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\readers.py", line 1040, in _make_engine return mapping[engine](self.f, **self.options) # type: ignore[call-arg] File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\c_parser_wrapper.py", line 51, in __init__ self._open_handles(src, kwds) File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\parsers\base_parser.py", line 222, in _open_handles self.handles = get_handle( File "C:\Users\steve\PycharmProjects\Empirical Asset via Machine Learning\venv\lib\site-packages\pandas\io\common.py", line 702, in get_handle handle = open( OSError: [Errno 22] Invalid argument: 'C:\\Users\\steve\\Desktop\\Data\\test_1.csvC:\\Users\\steve\\Desktop\\Data\\test_2.csv' I have searched several similar topics on Stackoverflow, and try but seems no one use 
pd.read_csv('test_1.csv' + 'test_2.csv', .....) like me. Please help, thanks.
[ "\nInvalid argument: 'C:\\Users\\steve\\Desktop\\Data\\test_1.csvC:\\Users\\steve\\Desktop\\Data\\test_2.csv'\n\nYou are trying to read the CSV of an invalid path. You cannot read two csv files at once.\nWhen you call this...\npd.read_csv(DATA_DIR + r'C:\\Users\\steve\\Desktop\\Data\\test_2.csv', index_col=0, parse_dates=[1, 3])\n\nYou are concatenating two absolute paths together, just use...\npd.read_csv(r'C:\\Users\\steve\\Desktop\\Data\\test_2.csv', index_col=0, parse_dates=[1, 3])\n\nA perhaps better way to do this using your DATA_DIR would be this...\nDATA_DIR = r'C:\\Users\\steve\\Desktop\\Data\\'\ndf = pd.read_csv(DATA_DIR + 'test_1.csv', parse_dates=['DATE', 'datadate'])\npd.read_csv(DATA_DIR + 'test_2.csv', index_col=0, parse_dates=[1, 3])\n\n", "As we can see ure trying to open file path named 'C:\\Users\\steve\\Desktop\\Data\\test_1.csvC:\\Users\\steve\\Desktop\\Data\\test_2.csv' and its not exist, u have to try create two different variables to open csv,and one to save results.\n", "Try replacing the backslashes'' with forwardslashes'/' fixed it for me\n" ]
[ 0, 0, 0 ]
[]
[]
[ "csv", "dataframe", "pandas", "python" ]
stackoverflow_0069403282_csv_dataframe_pandas_python.txt
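A sketch of the accepted fix using os.path.join, which sidesteps the concatenated-absolute-paths mistake entirely; the directory and file names are the ones from the question:

import os

import pandas as pd

DATA_DIR = r'C:\Users\steve\Desktop\Data'

df = pd.read_csv(os.path.join(DATA_DIR, 'test_1.csv'),
                 parse_dates=['DATE', 'datadate'])
q = pd.read_csv(os.path.join(DATA_DIR, 'test_2.csv'),
                index_col=0, parse_dates=[1, 3])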
Q: How Can I Solve This No Duplicated 2 Column Calculation? Hello StackOverflow People! I have some trouble here, I do some research but I still can't make it. I have two columns that are substracted from a Dataset, the columns are "# Externo" and "Nro Envio ML". I want that the result of the code gives me only the numbers that exist in "# Externo" but no in "Nro Envio ML" For Example: If 41765931626 is only in "# Externo" column but no in "Nro Envio ML", I want to print that number. Also if no exist any number in "# Externo" that is not on "Nro Envio ML" I want to print some text print("No strange sales") Here its the code I tried. Sorry for my bad english import numpy as np df2=df2.dropna(subset=['Unnamed: 13']) df2 = df2[df2['Unnamed: 13'] != 'Nro. Envío'] df2['Nro Envio ML']=df2['Unnamed: 13'] dfn=df2[["# Externo","Nro Envio ML"]] dfn1 = dfn[dfn['# Externo'] != dfn['Nro Envio ML']] dfn1 Also with diff It gives me values that are on 'Nro Envio ML' Link for Sample: https://github.com/francoveracallorda/sample A: I would go outside of pandas and use the python built in set and compute the difference. Here is a simplified example: import pandas as pd df = pd.DataFrame({ "# Externo": [3, 5, 4, 2, 1, 7, 8], "Nro Envio ML": [4, 9, 0, 2, 1, 3, 5] }) diff = set(df["# Externo"]) - set(df["Nro Envio ML"]) # diff contains the values that are in df["# Externo"] but not in df["Nro Envio ML"]. print(f"Weird sales: {diff}" if diff else "No strange sales") # Output: # Weird sales: {8, 7} PS: If you want to stay inside pandas, you can use diff = df.loc[~df["# Externo"].isin(df["Nro Envio ML"]), "# Externo"] to compute the safe difference as a pd.Series. A: You can use ~ and isin of pandas. series1 = pd.Series([2, 4, 8, 20, 10, 47, 99]) series2= pd.Series([1, 3, 6, 4, 10, 99, 50]) series3 = pd.Series([2, 4, 8, 20, 10, 47, 99]) df = pd.concat([series1, series2,series3], axis=1) Case 1: Number in series1 but not in series2 diff = series1[~series1.isin(series2)] Case 2: No any number in series1 and not in series2 same = series1[~series1.isin(series3)]
How Can I Solve This No Duplicated 2 Column Calculation?
Hello StackOverflow People! I have some trouble here; I did some research but I still can't figure it out. I have two columns that are extracted from a dataset; the columns are "# Externo" and "Nro Envio ML". I want the code to give me only the numbers that exist in "# Externo" but not in "Nro Envio ML". For example: if 41765931626 is only in the "# Externo" column but not in "Nro Envio ML", I want to print that number. Also, if there is no number in "# Externo" that is missing from "Nro Envio ML", I want to print some text: print("No strange sales") Here is the code I tried. Sorry for my bad English. import numpy as np df2=df2.dropna(subset=['Unnamed: 13']) df2 = df2[df2['Unnamed: 13'] != 'Nro. Envío'] df2['Nro Envio ML']=df2['Unnamed: 13'] dfn=df2[["# Externo","Nro Envio ML"]] dfn1 = dfn[dfn['# Externo'] != dfn['Nro Envio ML']] dfn1 Also, with diff it gives me values that are in 'Nro Envio ML'. Link for Sample: https://github.com/francoveracallorda/sample
[ "I would go outside of pandas and use the python built in set and compute the difference. Here is a simplified example:\nimport pandas as pd\n\ndf = pd.DataFrame({\n \"# Externo\": [3, 5, 4, 2, 1, 7, 8],\n \"Nro Envio ML\": [4, 9, 0, 2, 1, 3, 5]\n})\n\ndiff = set(df[\"# Externo\"]) - set(df[\"Nro Envio ML\"])\n# diff contains the values that are in df[\"# Externo\"] but not in df[\"Nro Envio ML\"].\n\nprint(f\"Weird sales: {diff}\" if diff else \"No strange sales\")\n# Output:\n# Weird sales: {8, 7}\n\nPS: If you want to stay inside pandas, you can use diff = df.loc[~df[\"# Externo\"].isin(df[\"Nro Envio ML\"]), \"# Externo\"] to compute the safe difference as a pd.Series.\n", "You can use ~ and isin of pandas.\nseries1 = pd.Series([2, 4, 8, 20, 10, 47, 99])\nseries2= pd.Series([1, 3, 6, 4, 10, 99, 50])\nseries3 = pd.Series([2, 4, 8, 20, 10, 47, 99])\ndf = pd.concat([series1, series2,series3], axis=1)\n\nCase 1: Number in series1 but not in series2\ndiff = series1[~series1.isin(series2)]\n\nCase 2: No any number in series1 and not in series2\nsame = series1[~series1.isin(series3)]\n\n" ]
[ 2, 0 ]
[]
[]
[ "dataframe", "duplicates", "numpy", "pandas", "python" ]
stackoverflow_0074560667_dataframe_duplicates_numpy_pandas_python.txt
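A short hedged sketch tying the two answers above back to the asker's own columns. It assumes dfn is the two-column frame built in the question (columns "# Externo" and "Nro Envio ML"); the isin call is the pandas-native form of the set difference:

import pandas as pd

# rows whose "# Externo" value never appears in "Nro Envio ML"
strange = dfn.loc[~dfn['# Externo'].isin(dfn['Nro Envio ML']), '# Externo']

if strange.empty:
    print("No strange sales")
else:
    print("Strange sales:", strange.tolist())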
Q: django how to get count for manytomany field I have a model for questions: class Question(models.Model): user = models.ForeignKey(User) title = models.CharField(max_length=120) description = models.TextField() answers = models.ManyToManyField('Answer',related_name='answer_name', blank=True) post_date = models.DateTimeField(auto_now=True) def __unicode__(self): return self.title And I have a model for answers: class Answer(models.Model): user = models.ForeignKey(User) question = models.ForeignKey(Question) ans_body = models.TextField() post_date = models.DateTimeField(auto_now=True) def __unicode__(self): return self.ans_body Question creation and answer submission are working perfectly, and I can correctly show the answers for a particular question. But when I try to get the count of answers for a particular question, it doesn't show; it displays a count of 0. In my view I am getting the list of questions with: context["question_list"] = Question.objects.all() And in my template {% for question in question_list %} {{ question.title }} Ans:{{question.answers.count}} {% endfor %} When I do this I get a count of 0 even if there are answers. How can I get the count of the answers for particular questions? A: This worked: {{question.answer_set.count}} Happy.. A: You can do something like {{ question.answers.all.count }}, but if you are iterating over more than one question it will cause a database query for every question. If you want to annotate the whole queryset with the count for each question: from django.db.models import Count context['question_list'] = Question.objects.all().annotate( answer_count=Count('answers') ) Then you can access the count for each question with {{ question.answer_count }}. A: Why not use: Question.objects.all().count() For my project, I have a Field in 'Info' Model users_like = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name="%(app_label)s_%(class)s_likes", blank=True) I use the code below to count the number of likes and then show it on the admin list page. # admin.py from django.contrib import admin from .models import Info class InfoAdmin(admin.ModelAdmin): list_display = ('id', 'title', 'like_count',) def like_count(self, obj): return obj.users_like.all().count() admin.site.register(Info, InfoAdmin) The result is: Hope these help you! A: This will work for you without any problem in your case: {{question.answer_name.count}}
django how to get count for manytomany field
I have a model for questions: class Question(models.Model): user = models.ForeignKey(User) title = models.CharField(max_length=120) description = models.TextField() answers = models.ManyToManyField('Answer',related_name='answer_name', blank=True) post_date = models.DateTimeField(auto_now=True) def __unicode__(self): return self.title And I have a model for answers: class Answer(models.Model): user = models.ForeignKey(User) question = models.ForeignKey(Question) ans_body = models.TextField() post_date = models.DateTimeField(auto_now=True) def __unicode__(self): return self.ans_body Question creation and answer submission are working perfectly, and I can correctly show the answers for a particular question. But when I try to get the count of answers for a particular question, it doesn't show; it displays a count of 0. In my view I am getting the list of questions with: context["question_list"] = Question.objects.all() And in my template {% for question in question_list %} {{ question.title }} Ans:{{question.answers.count}} {% endfor %} When I do this I get a count of 0 even if there are answers. How can I get the count of the answers for particular questions?
[ "This worked:\n{{question.answer_set.count}}\n\nHappy..\n", "You can do something like {{ question.answers.all.count }}, but if you are iterating over more than question it will cause a database query for every question. \nIf you want to annotate the whole queryset with the count for each question:\nfrom django.db.models import Count\n\ncontext['question_list'] = Question.objects.all().annotate(\n answer_count=Count('answers')\n)\n\nThen you can access the count for each question with {{ question.answer_count }}.\n", "Why not use: Question.objects.all().count()\nFor my project, I have a Field in 'Info' Model\nusers_like = models.ManyToManyField(settings.AUTH_USER_MODEL, related_name=\"%(app_label)s_%(class)s_likes\", blank=True)\n\nI use below code to count the number of like then show it in Admin List Page.\n# admin.py\nfrom django.contrib import admin\n\nfrom .models import Info\nclass InfoAdmin(admin.ModelAdmin):\n list_display = ('id', 'title', 'like_count',)\n def like_count(self, obj):\n return obj.users_like.all().count()\n\nadmin.site.register(Info, InfoAdmin)\n\nThe result is:\n\nHope these can help you!\n", "This will work for you without any problem in your case:\n{{question.answer_name.count}}\n\n" ]
[ 17, 16, 5, 0 ]
[]
[]
[ "django", "django_queryset", "python" ]
stackoverflow_0027149984_django_django_queryset_python.txt
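To make the annotate approach from the second answer concrete, here is a hedged sketch of a complete view; the view and template names are made up for illustration and only the Question model from the question is assumed:

from django.db.models import Count
from django.shortcuts import render

def question_list_view(request):
    # one query; each question carries its precomputed answer_count
    questions = Question.objects.annotate(answer_count=Count('answers'))
    return render(request, 'questions.html', {'question_list': questions})

In the template you would then write {{ question.answer_count }} inside the loop instead of {{ question.answers.count }}.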
Q: Installing fbprophet on colab Hi, when I try to install fbprophet on Google Colab I get this error. Does anyone know how to fix it? Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting fbprophet Using cached fbprophet-0.7.1.tar.gz (64 kB) Requirement already satisfied: Cython>=0.22 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.29.30) Requirement already satisfied: cmdstanpy==0.9.5 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.9.5) Requirement already satisfied: pystan>=2.14 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (3.3.0) Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.21.6) Requirement already satisfied: pandas>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.3.5) Requirement already satisfied: matplotlib>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (3.2.2) Requirement already satisfied: LunarCalendar>=0.0.9 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.0.9) Requirement already satisfied: convertdate>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (2.4.0) Requirement already satisfied: holidays>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.14.2) Requirement already satisfied: setuptools-git>=1.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.2) Requirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (2.8.2) Requirement already satisfied: tqdm>=4.36.1 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (4.64.0) Requirement already satisfied: pymeeus<=1,>=0.3.13 in /usr/local/lib/python3.7/dist-packages (from convertdate>=2.1.2->fbprophet) (0.5.11) Requirement already satisfied: hijri-converter in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->fbprophet) (2.2.4) Requirement already satisfied: korean-lunar-calendar in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->fbprophet) (0.2.1) Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->fbprophet) (2022.1) Requirement already satisfied: ephem>=3.7.5.3 in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->fbprophet) (4.1.3) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (3.0.9) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (1.4.4) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (0.11.0) Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib>=2.0.0->fbprophet) (4.1.1) Requirement already satisfied: clikit<0.7,>=0.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (0.6.2) Requirement already satisfied: pysimdjson<4.0,>=3.2 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (3.2.0) Requirement already satisfied: aiohttp<4.0,>=3.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (3.8.1) Requirement already satisfied: httpstan<4.7,>=4.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (4.6.1) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from 
aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (2.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (4.0.2) Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.7.2) Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.3.0) Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (6.0.2) Requirement already satisfied: asynctest==0.13.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (0.13.0) Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (21.4.0) Requirement already satisfied: pylev<2.0,>=1.3 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (1.4.0) Requirement already satisfied: crashtest<0.4.0,>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (0.3.1) Requirement already satisfied: pastel<0.3.0,>=0.2.0 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (0.2.1) Requirement already satisfied: appdirs<2.0,>=1.4 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (1.4.4) Requirement already satisfied: webargs<9.0,>=8.0 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (8.2.0) Requirement already satisfied: setuptools>=41.0 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (57.4.0) Requirement already satisfied: marshmallow<4.0,>=3.10 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (3.17.0) Requirement already satisfied: packaging>=17.0 in /usr/local/lib/python3.7/dist-packages (from marshmallow<4.0,>=3.10->httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (21.3) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.8.0->fbprophet) (1.15.0) Requirement already satisfied: idna>=2.0 in /usr/local/lib/python3.7/dist-packages (from yarl<2.0,>=1.0->aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (2.10) Building wheels for collected packages: fbprophet Building wheel for fbprophet (setup.py) ... error ERROR: Failed building wheel for fbprophet Running setup.py clean for fbprophet Failed to build fbprophet Installing collected packages: fbprophet Running setup.py install for fbprophet ... 
error ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mghunffz/fbprophet_2d75d382214e4622a66d686904a6dfb7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mghunffz/fbprophet_2d75d382214e4622a66d686904a6dfb7/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-dqpxd5x_/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/fbprophet Check the logs for full command output. A: You can try this. Currently I am working with fbprophet in Colab and I used these packages; it works smoothly for me. This is the older version: !pip install pystan~=2.14 !pip install fbprophet For the latest version just install prophet; there is no need to install pystan: !pip install prophet import prophet A: Working in Colab, I solved it by importing first: import io, os, sys, setuptools, tokenize then, as mentioned before: !pip install prophet import prophet
Installing fbprophet on colab
Hi, when I try to install fbprophet on Google Colab I get this error. Does anyone know how to fix it? Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/ Collecting fbprophet Using cached fbprophet-0.7.1.tar.gz (64 kB) Requirement already satisfied: Cython>=0.22 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.29.30) Requirement already satisfied: cmdstanpy==0.9.5 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.9.5) Requirement already satisfied: pystan>=2.14 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (3.3.0) Requirement already satisfied: numpy>=1.15.4 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.21.6) Requirement already satisfied: pandas>=1.0.4 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.3.5) Requirement already satisfied: matplotlib>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (3.2.2) Requirement already satisfied: LunarCalendar>=0.0.9 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.0.9) Requirement already satisfied: convertdate>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (2.4.0) Requirement already satisfied: holidays>=0.10.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (0.14.2) Requirement already satisfied: setuptools-git>=1.2 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (1.2) Requirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (2.8.2) Requirement already satisfied: tqdm>=4.36.1 in /usr/local/lib/python3.7/dist-packages (from fbprophet) (4.64.0) Requirement already satisfied: pymeeus<=1,>=0.3.13 in /usr/local/lib/python3.7/dist-packages (from convertdate>=2.1.2->fbprophet) (0.5.11) Requirement already satisfied: hijri-converter in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->fbprophet) (2.2.4) Requirement already satisfied: korean-lunar-calendar in /usr/local/lib/python3.7/dist-packages (from holidays>=0.10.2->fbprophet) (0.2.1) Requirement already satisfied: pytz in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->fbprophet) (2022.1) Requirement already satisfied: ephem>=3.7.5.3 in /usr/local/lib/python3.7/dist-packages (from LunarCalendar>=0.0.9->fbprophet) (4.1.3) Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (3.0.9) Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (1.4.4) Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.7/dist-packages (from matplotlib>=2.0.0->fbprophet) (0.11.0) Requirement already satisfied: typing-extensions in /usr/local/lib/python3.7/dist-packages (from kiwisolver>=1.0.1->matplotlib>=2.0.0->fbprophet) (4.1.1) Requirement already satisfied: clikit<0.7,>=0.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (0.6.2) Requirement already satisfied: pysimdjson<4.0,>=3.2 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (3.2.0) Requirement already satisfied: aiohttp<4.0,>=3.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (3.8.1) Requirement already satisfied: httpstan<4.7,>=4.6 in /usr/local/lib/python3.7/dist-packages (from pystan>=2.14->fbprophet) (4.6.1) Requirement already satisfied: charset-normalizer<3.0,>=2.0 in /usr/local/lib/python3.7/dist-packages (from 
aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (2.1.0) Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (4.0.2) Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.7.2) Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.2.0) Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (1.3.0) Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (6.0.2) Requirement already satisfied: asynctest==0.13.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (0.13.0) Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.7/dist-packages (from aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (21.4.0) Requirement already satisfied: pylev<2.0,>=1.3 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (1.4.0) Requirement already satisfied: crashtest<0.4.0,>=0.3.0 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (0.3.1) Requirement already satisfied: pastel<0.3.0,>=0.2.0 in /usr/local/lib/python3.7/dist-packages (from clikit<0.7,>=0.6->pystan>=2.14->fbprophet) (0.2.1) Requirement already satisfied: appdirs<2.0,>=1.4 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (1.4.4) Requirement already satisfied: webargs<9.0,>=8.0 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (8.2.0) Requirement already satisfied: setuptools>=41.0 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (57.4.0) Requirement already satisfied: marshmallow<4.0,>=3.10 in /usr/local/lib/python3.7/dist-packages (from httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (3.17.0) Requirement already satisfied: packaging>=17.0 in /usr/local/lib/python3.7/dist-packages (from marshmallow<4.0,>=3.10->httpstan<4.7,>=4.6->pystan>=2.14->fbprophet) (21.3) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/dist-packages (from python-dateutil>=2.8.0->fbprophet) (1.15.0) Requirement already satisfied: idna>=2.0 in /usr/local/lib/python3.7/dist-packages (from yarl<2.0,>=1.0->aiohttp<4.0,>=3.6->pystan>=2.14->fbprophet) (2.10) Building wheels for collected packages: fbprophet Building wheel for fbprophet (setup.py) ... error ERROR: Failed building wheel for fbprophet Running setup.py clean for fbprophet Failed to build fbprophet Installing collected packages: fbprophet Running setup.py install for fbprophet ... 
error ERROR: Command errored out with exit status 1: /usr/bin/python3 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-mghunffz/fbprophet_2d75d382214e4622a66d686904a6dfb7/setup.py'"'"'; __file__='"'"'/tmp/pip-install-mghunffz/fbprophet_2d75d382214e4622a66d686904a6dfb7/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-dqpxd5x_/install-record.txt --single-version-externally-managed --compile --install-headers /usr/local/include/python3.7/fbprophet Check the logs for full command output.
[ "You can try this currently i am working on fb-prophet in colab and i used this packages for me it is working smoothly -\nThis is older version\n!pip install pystan~=2.14\n!pip install fbprophet\n\nFor latest version just install prophet there is no need for installing pystan -\n!pip install prophet\n\nimport prophet\n\n", "Working in Colab, I solved importing first:\nimport io, os, sys, setuptools, tokenize\n\nthen as they mentioned before:\n!pip install prophet\nimport prophet\n\n" ]
[ 13, 0 ]
[]
[]
[ "facebook_prophet", "google_colaboratory", "python" ]
stackoverflow_0073142498_facebook_prophet_google_colaboratory_python.txt
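Once the install above succeeds, a tiny smoke test confirms the package actually works; this is a hedged sketch with made-up toy data (not the asker's dataset), using the fixed "ds"/"y" column names the library requires:

import pandas as pd
from prophet import Prophet

df = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=30, freq="D"),  # dates column
    "y": range(30),                                           # values column
})

m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=7)
print(m.predict(future)[["ds", "yhat"]].tail())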
Q: from pymongo import MongoClient - Error: [AttributeError: module 'h11' has no attribute 'Event'] Hi I keep getting this error message, I reinstalled my ubuntu system to correct it but it didn't seem to work. Code: from pymongo import MongoClient Only package installed is pymongo I get the same error in both Anaconda by starting a new env and in my locally installed python. Python version 3.8.10 Error message: Anyone have a solution? AttributeError Traceback (most recent call last) <ipython-input-1-4aaffd5c9f5f> in <module> ----> 1 from pymongo import MongoClient ~/.local/lib/python3.8/site-packages/pymongo/__init__.py in <module> 90 from pymongo.common import MAX_SUPPORTED_WIRE_VERSION, MIN_SUPPORTED_WIRE_VERSION 91 from pymongo.cursor import CursorType ---> 92 from pymongo.mongo_client import MongoClient 93 from pymongo.operations import ( 94 DeleteMany, ~/.local/lib/python3.8/site-packages/pymongo/mongo_client.py in <module> 57 from bson.son import SON 58 from bson.timestamp import Timestamp ---> 59 from pymongo import ( 60 _csot, 61 client_session, ~/.local/lib/python3.8/site-packages/pymongo/uri_parser.py in <module> 30 ) 31 from pymongo.errors import ConfigurationError, InvalidURI ---> 32 from pymongo.srv_resolver import _HAVE_DNSPYTHON, _SrvResolver 33 from pymongo.typings import _Address 34 ~/.local/lib/python3.8/site-packages/pymongo/srv_resolver.py in <module> 19 20 try: ---> 21 from dns import resolver 22 23 _HAVE_DNSPYTHON = True ~/.local/lib/python3.8/site-packages/dns/resolver.py in <module> 36 import dns.message 37 import dns.name ---> 38 import dns.query 39 import dns.rcode 40 import dns.rdataclass ~/.local/lib/python3.8/site-packages/dns/query.py in <module> 50 _have_http2 = False 51 try: ---> 52 import httpx 53 _have_httpx = True 54 try: ~/.local/lib/python3.8/site-packages/httpx/__init__.py in <module> 1 from .__version__ import __description__, __title__, __version__ ----> 2 from ._api import delete, get, head, options, patch, post, put, request, stream 3 from ._auth import Auth, BasicAuth, DigestAuth 4 from ._client import USE_CLIENT_DEFAULT, AsyncClient, Client 5 from ._config import Limits, Proxy, Timeout, create_ssl_context ~/.local/lib/python3.8/site-packages/httpx/_api.py in <module> 2 from contextlib import contextmanager 3 ----> 4 from ._client import Client 5 from ._config import DEFAULT_TIMEOUT_CONFIG 6 from ._models import Response ~/.local/lib/python3.8/site-packages/httpx/_client.py in <module> 27 from ._transports.asgi import ASGITransport 28 from ._transports.base import AsyncBaseTransport, BaseTransport ---> 29 from ._transports.default import AsyncHTTPTransport, HTTPTransport 30 from ._transports.wsgi import WSGITransport 31 from ._types import ( ~/.local/lib/python3.8/site-packages/httpx/_transports/default.py in <module> 28 from types import TracebackType 29 ---> 30 import httpcore 31 32 from .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context ~/.local/lib/python3.8/site-packages/httpcore/__init__.py in <module> ----> 1 from ._api import request, stream 2 from ._async import ( 3 AsyncConnectionInterface, 4 AsyncConnectionPool, 5 AsyncHTTP2Connection, ~/.local/lib/python3.8/site-packages/httpcore/_api.py in <module> 3 4 from ._models import URL, Response ----> 5 from ._sync.connection_pool import ConnectionPool 6 7 ~/.local/lib/python3.8/site-packages/httpcore/_sync/__init__.py in <module> ----> 1 from .connection import HTTPConnection 2 from .connection_pool import ConnectionPool 3 from .http11 import HTTP11Connection 4 from 
.http_proxy import HTTPProxy 5 from .interfaces import ConnectionInterface ~/.local/lib/python3.8/site-packages/httpcore/_sync/connection.py in <module> 11 from ..backends.sync import SyncBackend 12 from ..backends.base import NetworkBackend, NetworkStream ---> 13 from .http11 import HTTP11Connection 14 from .interfaces import ConnectionInterface 15 ~/.local/lib/python3.8/site-packages/httpcore/_sync/http11.py in <module> 42 43 ---> 44 class HTTP11Connection(ConnectionInterface): 45 READ_NUM_BYTES = 64 * 1024 46 ~/.local/lib/python3.8/site-packages/httpcore/_sync/http11.py in HTTP11Connection() 138 139 def _send_event( --> 140 self, event: h11.Event, timeout: Optional[float] = None 141 ) -> None: 142 bytes_to_send = self._h11_state.send(event) AttributeError: module 'h11' has no attribute 'Event' A: Found a solution: pip install --force-reinstall httpcore==0.15 Fixed the error
from pymongo import MongoClient - Error: [AttributeError: module 'h11' has no attribute 'Event']
Hi I keep getting this error message, I reinstalled my ubuntu system to correct it but it didn't seem to work. Code: from pymongo import MongoClient Only package installed is pymongo I get the same error in both Anaconda by starting a new env and in my locally installed python. Python version 3.8.10 Error message: Anyone have a solution? AttributeError Traceback (most recent call last) <ipython-input-1-4aaffd5c9f5f> in <module> ----> 1 from pymongo import MongoClient ~/.local/lib/python3.8/site-packages/pymongo/__init__.py in <module> 90 from pymongo.common import MAX_SUPPORTED_WIRE_VERSION, MIN_SUPPORTED_WIRE_VERSION 91 from pymongo.cursor import CursorType ---> 92 from pymongo.mongo_client import MongoClient 93 from pymongo.operations import ( 94 DeleteMany, ~/.local/lib/python3.8/site-packages/pymongo/mongo_client.py in <module> 57 from bson.son import SON 58 from bson.timestamp import Timestamp ---> 59 from pymongo import ( 60 _csot, 61 client_session, ~/.local/lib/python3.8/site-packages/pymongo/uri_parser.py in <module> 30 ) 31 from pymongo.errors import ConfigurationError, InvalidURI ---> 32 from pymongo.srv_resolver import _HAVE_DNSPYTHON, _SrvResolver 33 from pymongo.typings import _Address 34 ~/.local/lib/python3.8/site-packages/pymongo/srv_resolver.py in <module> 19 20 try: ---> 21 from dns import resolver 22 23 _HAVE_DNSPYTHON = True ~/.local/lib/python3.8/site-packages/dns/resolver.py in <module> 36 import dns.message 37 import dns.name ---> 38 import dns.query 39 import dns.rcode 40 import dns.rdataclass ~/.local/lib/python3.8/site-packages/dns/query.py in <module> 50 _have_http2 = False 51 try: ---> 52 import httpx 53 _have_httpx = True 54 try: ~/.local/lib/python3.8/site-packages/httpx/__init__.py in <module> 1 from .__version__ import __description__, __title__, __version__ ----> 2 from ._api import delete, get, head, options, patch, post, put, request, stream 3 from ._auth import Auth, BasicAuth, DigestAuth 4 from ._client import USE_CLIENT_DEFAULT, AsyncClient, Client 5 from ._config import Limits, Proxy, Timeout, create_ssl_context ~/.local/lib/python3.8/site-packages/httpx/_api.py in <module> 2 from contextlib import contextmanager 3 ----> 4 from ._client import Client 5 from ._config import DEFAULT_TIMEOUT_CONFIG 6 from ._models import Response ~/.local/lib/python3.8/site-packages/httpx/_client.py in <module> 27 from ._transports.asgi import ASGITransport 28 from ._transports.base import AsyncBaseTransport, BaseTransport ---> 29 from ._transports.default import AsyncHTTPTransport, HTTPTransport 30 from ._transports.wsgi import WSGITransport 31 from ._types import ( ~/.local/lib/python3.8/site-packages/httpx/_transports/default.py in <module> 28 from types import TracebackType 29 ---> 30 import httpcore 31 32 from .._config import DEFAULT_LIMITS, Limits, Proxy, create_ssl_context ~/.local/lib/python3.8/site-packages/httpcore/__init__.py in <module> ----> 1 from ._api import request, stream 2 from ._async import ( 3 AsyncConnectionInterface, 4 AsyncConnectionPool, 5 AsyncHTTP2Connection, ~/.local/lib/python3.8/site-packages/httpcore/_api.py in <module> 3 4 from ._models import URL, Response ----> 5 from ._sync.connection_pool import ConnectionPool 6 7 ~/.local/lib/python3.8/site-packages/httpcore/_sync/__init__.py in <module> ----> 1 from .connection import HTTPConnection 2 from .connection_pool import ConnectionPool 3 from .http11 import HTTP11Connection 4 from .http_proxy import HTTPProxy 5 from .interfaces import ConnectionInterface 
~/.local/lib/python3.8/site-packages/httpcore/_sync/connection.py in <module> 11 from ..backends.sync import SyncBackend 12 from ..backends.base import NetworkBackend, NetworkStream ---> 13 from .http11 import HTTP11Connection 14 from .interfaces import ConnectionInterface 15 ~/.local/lib/python3.8/site-packages/httpcore/_sync/http11.py in <module> 42 43 ---> 44 class HTTP11Connection(ConnectionInterface): 45 READ_NUM_BYTES = 64 * 1024 46 ~/.local/lib/python3.8/site-packages/httpcore/_sync/http11.py in HTTP11Connection() 138 139 def _send_event( --> 140 self, event: h11.Event, timeout: Optional[float] = None 141 ) -> None: 142 bytes_to_send = self._h11_state.send(event) AttributeError: module 'h11' has no attribute 'Event'
[ "Found a solution:\npip install --force-reinstall httpcore==0.15\nFixed the error\n" ]
[ 1 ]
[]
[]
[ "pymongo", "python" ]
stackoverflow_0074561596_pymongo_python.txt
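After forcing the reinstall above, a quick version check helps confirm the pin took effect; this hedged sketch just reads installed package versions (importlib.metadata is available on Python 3.8+, which the traceback shows), and the package list is an assumption based on the import chain in the error:

import importlib.metadata as metadata

for pkg in ("httpcore", "httpx", "h11", "pymongo"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "is not installed")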
Q: Pandas: Adding a df column based on other column with multiple values map to the same new column value I have a dataframe like this: df1 = pd.DataFrame({'col1' : ['cat', 'cat', 'dog', 'green', 'blue']}) and I want a new column that gives the category, like this: dfoutput = pd.DataFrame({'col1' : ['cat', 'cat', 'dog', 'green', 'blue'], 'col2' : ['animal', 'animal', 'animal', 'color', 'color']}) I know I could do it inefficiently using .loc: df1.loc[df1['col1'] == 'cat','col2'] = 'animal' df1.loc[df1['col1'] == 'dog','col2'] = 'animal' How do I combine cat and dog to both be animal? This doesn't work: df1.loc[df1['col1'] == 'cat' | df1['col1'] == 'dog','col2'] = 'animal' A: Build your dict, then do map: d={'dog':'ani','cat':'ani','green':'color','blue':'color'} df1['col2']=df1.col1.map(d) df1 col1 col2 0 cat ani 1 cat ani 2 dog ani 3 green color 4 blue color A: Since multiple items may belong to a single category I suggest you start with a dictionary mapping category to items: cat_item = {'animal': ['cat', 'dog'], 'color': ['green', 'blue']} You'll probably find this easier to maintain. Then reverse your dictionary using a dictionary comprehension, followed by pd.Series.map: item_cat = {w: k for k, v in cat_item.items() for w in v} df1['col2'] = df1['col1'].map(item_cat) print(df1) col1 col2 0 cat animal 1 cat animal 2 dog animal 3 green color 4 blue color You can also use pd.Series.replace, but this will generally be less efficient. A: You could also try using np.select like this: options = [(df1.col1.str.contains('cat|dog')), (df1.col1.str.contains('green|blue'))] settings = ['animal', 'color'] df1['setting'] = np.select(options,settings) I've found this works quite fast even with very big dataframes.
Pandas: Adding a df column based on other column with multiple values map to the same new column value
I have a dataframe like this: df1 = pd.DataFrame({'col1' : ['cat', 'cat', 'dog', 'green', 'blue']}) and I want a new column that gives the category, like this: dfoutput = pd.DataFrame({'col1' : ['cat', 'cat', 'dog', 'green', 'blue'], 'col2' : ['animal', 'animal', 'animal', 'color', 'color']}) I know I could do it inefficiently using .loc: df1.loc[df1['col1'] == 'cat','col2'] = 'animal' df1.loc[df1['col1'] == 'dog','col2'] = 'animal' How do I combine cat and dog to both be animal? This doesn't work: df1.loc[df1['col1'] == 'cat' | df1['col1'] == 'dog','col2'] = 'animal'
[ "Build your dict then do map \nd={'dog':'ani','cat':'ani','green':'color','blue':'color'}\ndf1['col2']=df1.col1.map(d)\ndf1\n col1 col2\n0 cat ani\n1 cat ani\n2 dog ani\n3 green color\n4 blue color\n\n", "Since multiple items may belong to a single category I suggest you start with a dictionary mapping category to items:\ncat_item = {'animal': ['cat', 'dog'], 'color': ['green', 'blue']}\n\nYou'll probably find this easier to maintain. Then reverse your dictionary using a dictionary comprehension, followed by pd.Series.map:\nitem_cat = {w: k for k, v in cat_item.items() for w in v}\n\ndf1['col2'] = df1['col1'].map(item_cat)\n\nprint(df1)\n\n col1 col2\n0 cat animal\n1 cat animal\n2 dog animal\n3 green color\n4 blue color\n\nYou can also use pd.Series.replace, but this will be generally less efficient.\n", "you could also try using np.select like this:\noptions = [(df1.col1.str.contains('cat|dog')), \n (df1.col1.str.contains('green|blue'))]\n\nsettings = ['animal', 'color']\n\ndf1['setting'] = np.select(options,settings)\n\nI've found this works quite fast even with very big dataframes\n" ]
[ 6, 3, 0 ]
[]
[]
[ "dictionary", "pandas", "python", "series" ]
stackoverflow_0054031812_dictionary_pandas_python_series.txt
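For completeness, the asker's failing .loc line can also be fixed directly; this small sketch shows both the isin form and the parenthesized boolean form (operator precedence is what broke the original expression):

import pandas as pd

df1 = pd.DataFrame({'col1': ['cat', 'cat', 'dog', 'green', 'blue']})

# isin sidesteps the precedence problem entirely
df1.loc[df1['col1'].isin(['cat', 'dog']), 'col2'] = 'animal'
df1.loc[df1['col1'].isin(['green', 'blue']), 'col2'] = 'color'

# equivalently, the original comparison works once each test is parenthesized
df1.loc[(df1['col1'] == 'cat') | (df1['col1'] == 'dog'), 'col2'] = 'animal'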
Q: Tkinter Calculator can you guys tell me how to fix the problem and let the answer show up from tkinter import * def tkinter_calculator(): window=Tk() window.title("Calculator") l1=Label(window,text="Welcome to Calculator") l1.pack() e1=Entry(window,width=10,bd=4) e1.place(x=300,y=300) l2=Label(window,text="+") l2.place(x=500,y=300) e2=Entry(window,width=10,bd=4) e2.place(x=600,y=300) l3=Label(window,text="=") l3.place(x=750,y=300) def add_ops(): global e1 num1 = e1.get() global e2 num2=e2.get() addVal = int(num1) + int(num2) output=Label(window, text=w"Your answer is" + int(addVal)) output.place(x=300,y=300) add_ops() b1=Button(window,text="Calculate",fg="blue",bg="silver",command=add_ops) b1.place(x=850,y=300) window.mainloop() tkinter_calculator() I expected text showing the answer: two textboxes and a Calculate button, and when I press the button it should calculate and show the answer, so that it functions like a basic calculator. A: There are few issues in your code: you can only see the label l1 because other widgets are put in the window using .place() which will not adjust the window size. You need to specify the initial size of the window in order to see those widgets. e1 and e2 are local variables inside tkinter_calculator(), so they cannot be found when they are accessed inside add_ops() because they are declared as global variables. Remove global e1 and global e2. add_ops() is called before creating the Calculate button, so it will raise ValueError exception when calling int() on the content of the two entry boxes because there is nothing input in the two entries yet. Remove that line. extra w in w"..." new label for the result will be created when Calculate button is clicked. Create output label once outside add_ops() and update its text inside that function. you cannot concatenate integer to string, so "Your answer is" + int(addVal) will raise exception. Change it to "Your answer is "+str(addVal). Below is the updated tkinter_calculator(): def tkinter_calculator(): window = Tk() window.title("Calculator") window.geometry("1000x400") # give initial window size l1 = Label(window, text="Welcome to Calculator") l1.pack() e1 = Entry(window, width=10, bd=4) e1.place(x=300, y=300) l2 = Label(window, text="+") l2.place(x=500, y=300) e2 = Entry(window, width=10, bd=4) e2.place(x=600, y=300) l3 = Label(window, text="=") l3.place(x=750, y=300) def add_ops(): num1 = e1.get() num2 = e2.get() addVal = int(num1) + int(num2) # update output label output.config(text="Your answer is "+str(addVal)) b1 = Button(window, text="Calculate", fg="blue", bg="silver", command=add_ops) b1.place(x=850, y=300) # create output label once output = Label(window) output.place(x=300, y=350) window.mainloop() To better control the layout of the widgets, .grid() is recommended over .place(). 
Below is the modified tkinter_calculator() using .grid() instead of .place(): def tkinter_calculator(): window = Tk() window.title("Calculator") window.config(padx=10) l1 = Label(window, text="Welcome to Calculator") l1.grid(row=0, column=0, columnspan=5, pady=10) e1 = Entry(window, width=10, bd=4) e1.grid(row=1, column=0) l2 = Label(window, text="+") l2.grid(row=1, column=1, padx=5) e2 = Entry(window, width=10, bd=4) e2.grid(row=1, column=2) l3 = Label(window, text="=") l3.grid(row=1, column=3, padx=5) def add_ops(): try: num1 = int(e1.get()) num2 = int(e2.get()) output.config(text=f"Your answer is {num1+num2}") except ValueError as ex: output.config(text="Please enter valid numbers") b1 = Button(window, text="Calculate", fg="blue", bg="silver", command=add_ops) b1.grid(row=1, column=4) output = Label(window) output.grid(row=2, column=0, columnspan=5, pady=10) window.mainloop() Result:
Tkinter Calculator can you guys tell me how to fix the problem and let the answer show up
from tkinter import * def tkinter_calculator(): window=Tk() window.title("Calculator") l1=Label(window,text="Welcome to Calculator") l1.pack() e1=Entry(window,width=10,bd=4) e1.place(x=300,y=300) l2=Label(window,text="+") l2.place(x=500,y=300) e2=Entry(window,width=10,bd=4) e2.place(x=600,y=300) l3=Label(window,text="=") l3.place(x=750,y=300) def add_ops(): global e1 num1 = e1.get() global e2 num2=e2.get() addVal = int(num1) + int(num2) output=Label(window, text=w"Your answer is" + int(addVal)) output.place(x=300,y=300) add_ops() b1=Button(window,text="Calculate",fg="blue",bg="silver",command=add_ops) b1.place(x=850,y=300) window.mainloop() tkinter_calculator() I expected text showing the answer: two textboxes and a Calculate button, and when I press the button it should calculate and show the answer, so that it functions like a basic calculator.
[ "There are few issues in your code:\n\nyou can only see the label l1 because other widgets are put in the window using .place() which will not adjust the window size. You need to specify the initial size of the window in order to see those widgets.\ne1 and e2 are local variables inside tkinter_calculator(), so they cannot be found when they are accessed inside add_ops() because they are declared as global variables. Remove global e1 and global e2.\nadd_ops() is called before creating the Calculate button, so it will raise ValueError exception when calling int() on the content of the two entry boxes because there is nothing input in the two entries yet. Remove that line.\nextra w in w\"...\"\nnew label for the result will be created when Calculate button is clicked. Create output label once outside add_ops() and update its text inside that function.\nyou cannot concatenate integer to string, so \"Your answer is\" + int(addVal) will raise exception. Change it to \"Your answer is \"+str(addVal).\n\nBelow is the updated tkinter_calculator():\ndef tkinter_calculator():\n window = Tk()\n window.title(\"Calculator\")\n window.geometry(\"1000x400\") # give initial window size\n l1 = Label(window, text=\"Welcome to Calculator\")\n l1.pack()\n e1 = Entry(window, width=10, bd=4)\n e1.place(x=300, y=300)\n l2 = Label(window, text=\"+\")\n l2.place(x=500, y=300)\n e2 = Entry(window, width=10, bd=4)\n e2.place(x=600, y=300)\n l3 = Label(window, text=\"=\")\n l3.place(x=750, y=300)\n def add_ops():\n num1 = e1.get()\n num2 = e2.get()\n addVal = int(num1) + int(num2)\n # update output label\n output.config(text=\"Your answer is \"+str(addVal))\n b1 = Button(window, text=\"Calculate\", fg=\"blue\", bg=\"silver\", command=add_ops)\n b1.place(x=850, y=300)\n # create output label once\n output = Label(window)\n output.place(x=300, y=350)\n window.mainloop()\n\n\nTo better control the layout of the widgets, .grid() is recommended over .place(). Below is the modified tkinter_calculator() using .grid() instead of .place():\ndef tkinter_calculator():\n window = Tk()\n window.title(\"Calculator\")\n window.config(padx=10)\n\n l1 = Label(window, text=\"Welcome to Calculator\")\n l1.grid(row=0, column=0, columnspan=5, pady=10)\n\n e1 = Entry(window, width=10, bd=4)\n e1.grid(row=1, column=0)\n\n l2 = Label(window, text=\"+\")\n l2.grid(row=1, column=1, padx=5)\n\n e2 = Entry(window, width=10, bd=4)\n e2.grid(row=1, column=2)\n\n l3 = Label(window, text=\"=\")\n l3.grid(row=1, column=3, padx=5)\n\n def add_ops():\n try:\n num1 = int(e1.get())\n num2 = int(e2.get())\n output.config(text=f\"Your answer is {num1+num2}\")\n except ValueError as ex:\n output.config(text=\"Please enter valid numbers\")\n\n b1 = Button(window, text=\"Calculate\", fg=\"blue\", bg=\"silver\", command=add_ops)\n b1.grid(row=1, column=4)\n\n output = Label(window)\n output.grid(row=2, column=0, columnspan=5, pady=10)\n\n window.mainloop()\n\nResult:\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074561179_python_tkinter.txt
Q: How to read parquet file using Pandas I am trying to read a parquet file using Python 3.6. import pandas as pd df = pd.read_parquet('smalldata.parquet') df.head() However, this is generating an error saying that module 'pandas' has no attribute 'read_parquet'. What dependencies do I need in order to solve this problem? Edit 1: I updated Pandas and this is the stacktrace Requirement already up-to-date: pandas in /home/fatima/miniconda2/lib/python2.7/site-packages (0.24.2) Requirement already satisfied, skipping upgrade: pytz>=2011k in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (2018.9) Requirement already satisfied, skipping upgrade: numpy>=1.12.0 in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (1.16.2) Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (2.8.0) Requirement already satisfied, skipping upgrade: six>=1.5 in /home/fatima/miniconda2/lib/python2.7/site-packages (from python-dateutil>=2.5.0->pandas) (1.12.0) Edit 2: this is what conda list gives me pandas 0.24.2 pypi_0 pypi A: You will need to install the required packages: pip install pandas pyarrow s3fs fastparquet A: If you are trying to read Parquet files in Pandas, it may be that you don't have one of the engines installed for reading Parquet files, such as pyarrow or fastparquet. You would need to install those dependencies as Pandas read_parquet requires either of these engines in order to read Parquet files. For each of those dependencies, you would also need to figure out which dependencies are required for installing each of those libraries. If this isn't the issue, can you please comment on what the error you are encountering may be?
How to read parquet file using Pandas
I am trying to read a parquet file using Python 3.6. import pandas as pd df = pd.read_parquet('smalldata.parquet') df.head() However, this is generating an error saying that module 'pandas' has no attribute 'read_parquet'. What dependencies do I need in order to solve this problem? Edit 1: I updated Pandas and this is the stacktrace Requirement already up-to-date: pandas in /home/fatima/miniconda2/lib/python2.7/site-packages (0.24.2) Requirement already satisfied, skipping upgrade: pytz>=2011k in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (2018.9) Requirement already satisfied, skipping upgrade: numpy>=1.12.0 in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (1.16.2) Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /home/fatima/miniconda2/lib/python2.7/site-packages (from pandas) (2.8.0) Requirement already satisfied, skipping upgrade: six>=1.5 in /home/fatima/miniconda2/lib/python2.7/site-packages (from python-dateutil>=2.5.0->pandas) (1.12.0) Edit 2: this is what conda list gives me pandas 0.24.2 pypi_0 pypi
[ "You will need to install the required packages:\npip install pandas pyarrow s3fs fastparquet\n\n", "If you are trying to read Parquet files in Pandas, it may be that you don't have one of the engines installed for reading Parquet files, such as pyarrow or fastparquet. You would need to install those dependencies as Pandas read_parquet requires either of these engines in order to read Parquet files. For each of those dependencies, you would also need to figure out which dependencies are required for installing each of those libraries. \nIf this isn't the issue, can you please comment on what the error you are encountering may be?\n" ]
[ 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0055334346_pandas_python.txt
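A small follow-up sketch to the first answer: once one engine is installed, you can name it explicitly so any remaining failure is easier to diagnose; the file name is the one from the question:

# pip install pyarrow   (or: pip install fastparquet)
import pandas as pd

df = pd.read_parquet('smalldata.parquet', engine='pyarrow')
print(df.head())

Note that pd.read_parquet was added in pandas 0.21, so a very old pandas would also raise the "no attribute" error; the asker's 0.24.2 is recent enough.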
Q: Run python script oracle data integrator (ODI) I am looking for help to execute a Python script on Oracle Data Integrator (ODI). I have not found any documentation for this process and I would appreciate it if someone could help me with it. I don't know where in ODI I could do this type of execution. A: Essentially ODI doesn't support Python directly but there are a couple of things you can do. The things to consider are: where you need to run the code what you want the code to do how integrated into ODI do you need it to be Jython ODI does support Jython which is a Java implementation of Python. This can be embedded within procedures and Knowledge Modules which allows you to (relatively) easily make use of the ODI metadata. It isn't particularly friendly to code or debug but is functional and powerful, and you get access to ODI logging etc. Whilst this is possible I would look to do this in Groovy rather than Jython as it is much cleaner and simpler. Shell Script If your python script is already there and is completely stand alone you can use an OdiOsCommand inside of a package. You would need an agent installed on the box on which you want to run the script and you can just do something like python /path/mypythonscript.py just as you would from the command line. This is simple enough but the integration into ODI is very limited. It will handle errors just like a shell script (so handled exceptions will be swallowed and lost) and any parameters you want to pass will need to be via the command line.
Run python script oracle data integrator (ODI)
I am looking for help to execute a Python script on Oracle Data Integrator (ODI). I have not found any documentation for this process and I would appreciate it if someone could help me with it. I don't know where in ODI I could do this type of execution.
[ "Essentially ODI doesn't support Python directly but there are a couple of things you can do. The things to consider are:\n\nwhere you need to run the code\nwhat you want the code to do\nhow integrated into ODI do you need it to be\n\nJython\nODI does support Jython which is a Java implementation of Python. This can be embedded within procedures and Knowledge Modules which allows you to (relatively) easily make use of the ODI metadata. It isn't particularly friendly to code or debug but is functional and powerful, you get access to ODI logging etc.\nWhilst this is possible I would look to do this is Groovy rather than Jython as it is much cleaner and simpler\nShell Script\nIf your python script is already there and is completely stand alone you can use an OdiOsCommand inside of a package. You would need an agent installed on the box on which you want to run the script and you can just do something like\npython /path/mypythonscript.py\n\njust as you would from the command line.\nThis is simple enough but the integration into ODI is very limited. It will handle errors just like a shell script (so handled exceptions will be swallowed and lost) and any parameters you want to pass will need to be via the command line.\n" ]
[ 0 ]
[]
[]
[ "oracle_data_integrator", "python" ]
stackoverflow_0074539405_oracle_data_integrator_python.txt
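Since the answer notes that the OdiOsCommand route only sees the script's exit status, it helps to make the Python script report failure through its exit code; this is a generic hedged sketch where the work inside main() is a placeholder:

import sys

def main() -> int:
    try:
        # ... the real work of the script goes here ...
        return 0          # success: the ODI step succeeds
    except Exception as exc:
        print(f"FAILED: {exc}", file=sys.stderr)
        return 1          # nonzero: the ODI step is marked as failed

if __name__ == "__main__":
    sys.exit(main())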
Q: numpy's "linalg.eig()" and "linalg.eigh()" for the same hermitian matrix This question was due to a misunderstanding. See the answer below. numpy.linalg methods eig() and eigh() appear to return different eigenvectors for the same hermitian matrix. Here the code: import numpy as np H = [[0.6 , -1j, 0], [1j, 0.4, 0], [0, 0, -1]] evals, evects = np.linalg.eig(H) print('\nOutput of the eig function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects[i,0],'\n',evects[i][1],'\n',evects[i][2]) evals, evects = np.linalg.eigh(H) print('\nOutput of the eigh function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects[i,0],'\n',evects[i][1],'\n',evects[i][2]) A: Posting this to help anyone who might have had the same kind of misunderstanding I had: The eigenvectors are the columns of the resulting matrix for both functions. The fault was in the original code, which extracted rows from the eigenvector matrix instead of columns. The correct code is the following one. H = [[0.6 , -1j, 0], [1j, 0.4, 0], [0, 0, -1]] evals, evects = np.linalg.eig(H) print('\nOutput of the eig function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects.T[i,0],'\n',evects.T[i][1],'\n',evects.T[i][2]) evals, evects = np.linalg.eigh(H) print('\nOutput of the eigh function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects.T[i,0],'\n',evects.T[i][1],'\n',evects.T[i][2])
numpy's "linalg.eig()" and "linalg.eigh()" for the same hermitian matrix
This question was due to a misunderstanding. See the answer below. numpy.linalg methods eig() and eigh() appear to return different eigenvectors for the same Hermitian matrix. Here is the code: import numpy as np H = [[0.6 , -1j, 0], [1j, 0.4, 0], [0, 0, -1]] evals, evects = np.linalg.eig(H) print('\nOutput of the eig function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects[i,0],'\n',evects[i][1],'\n',evects[i][2]) evals, evects = np.linalg.eigh(H) print('\nOutput of the eigh function') for i in range(0,3): print('evect for eval=',evals[i],'\n',evects[i,0],'\n',evects[i][1],'\n',evects[i][2])
[ "Posting this to help anyone who might have had the same kind of misunderstanding I had:\nThe eigenvectors are the columns of the resulting matrix for both functions. The fault was in the original code, which extracted rows from the eigenvector matrix instead of columns. The correct code is the following one.\nH = [[0.6 , -1j, 0], [1j, 0.4, 0], [0, 0, -1]]\n\nevals, evects = np.linalg.eig(H)\nprint('\\nOutput of the eig function')\nfor i in range(0,3):\n print('evect for eval=',evals[i],'\\n',evects.T[i,0],'\\n',evects.T[i][1],'\\n',evects.T[i][2])\n \nevals, evects = np.linalg.eigh(H)\nprint('\\nOutput of the eigh function')\nfor i in range(0,3):\n print('evect for eval=',evals[i],'\\n',evects.T[i,0],'\\n',evects.T[i][1],'\\n',evects.T[i][2])\n\n" ]
[ 0 ]
[]
[]
[ "eigenvector", "matrix", "python" ]
stackoverflow_0074553962_eigenvector_matrix_python.txt
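A quick numerical check of the answer's point; this hedged sketch verifies that the columns (not the rows) of the returned matrix satisfy H v = lambda v for the matrix from the question:

import numpy as np

H = np.array([[0.6, -1j, 0], [1j, 0.4, 0], [0, 0, -1]])
evals, evects = np.linalg.eigh(H)

for i in range(3):
    v = evects[:, i]                        # the i-th eigenvector is the i-th column
    assert np.allclose(H @ v, evals[i] * v)
print('columns of evects verified as eigenvectors')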
Q: Save information of student object to separate .txt file line by line (Python) (I'm new to Python) I have created a School class containing a dictionary where I let the user save student objects from a Student class. class School: def __init__(self): self.students = {} class Student: def __init__(self, first_name, last_name, ssn) The program starts off by importing information from an already existing .txt file, then the user is given the option to add/delete students or modify the information of the existing ones. When I end the program I want to give the option to "save and exit", so that the changes gets written to either the .txt file it imported the information from or a new one. I would like the program to write the first name on the first line, the last name on the next line, the social security number on the line after that, and then move on to the following student until all information has been saved. This is how I would like the program to run: def save_and_exit(original_file): ... answer = input("Would you like to save to the original file or a new one?") if answer in original: ... #saves to the same file the imported information came from elif answer in new: new_file = input("What is the name of the new file?") ... #saves to a new file named by the user A: This answer might be overkill, but let me give you some advice: get familiar with dataclasses -> the way to go for classes containing data get familiar with fundamental data structures dict + json dont invent the wheel by yourself - look for and use ready to go database-solutions for python rather than messing with self-made txt.files for data persistence cli is the way to go for so many python scripts. get familiar with it what you are trying to solve is for me a database with a simple cli. I suggest looking at these third party modules: typer and tinydb with these you can make a "school.py" like this: from dataclasses import asdict, dataclass, field from tinydb import TinyDB, Query, where import typer FILE = "data.json" app = typer.Typer() @dataclass class Student: first_name: str last_name: str ssn: str @classmethod def from_dict(cls, d): return Student(**d) @dataclass class School: name: str students: list[Student] = field(default_factory=list) @classmethod def from_dict(cls, d): cls.name = d["name"] cls.students = [Student.from_dict(s) for s in d["students"]] return cls @app.command() def add_student(first_name: str, last_name: str, ssn: str): student = Student(first_name, last_name, ssn) my_school.students.append(student) db.update( {"students": [asdict(student) for student in my_school.students]}, doc_ids=[1], ) @app.command() def list_all(): print(db.get(doc_id=1)) if __name__ == "__main__": db = TinyDB("school_db.json") if not db.get(doc_id=1): print("init") my_school = School("my_school") db.insert(asdict(my_school)) my_school = School.from_dict(db.get(doc_id=1)) app() you can use it in the console by entering: python school.py add-student bruno mars 23gs44g5 and then python school.py list-all inside the program folder you will see a "data.json" file. there you have all your data in a human readable and very interchangeable form good luck
Save information of student object to separate .txt file line by line (Python)
(I'm new to Python) I have created a School class containing a dictionary where I let the user save student objects from a Student class.
class School:
    def __init__(self):
        self.students = {}

class Student:
    def __init__(self, first_name, last_name, ssn):
        ...

The program starts off by importing information from an already existing .txt file, then the user is given the option to add/delete students or modify the information of the existing ones.
When I end the program I want to give the option to "save and exit", so that the changes get written to either the .txt file the information was imported from or a new one.
I would like the program to write the first name on the first line, the last name on the next line, the social security number on the line after that, and then move on to the following student until all information has been saved.
This is how I would like the program to run:
def save_and_exit(original_file):
    ...
    answer = input("Would you like to save to the original file or a new one?")
    if answer in original:
        ...
        #saves to the same file the imported information came from
    elif answer in new:
        new_file = input("What is the name of the new file?")
        ...
        #saves to a new file named by the user
[ "This answer might be overkill, but let me give you some advice:\n\nget familiar with dataclasses -> the way to go for classes containing data\nget familiar with fundamental data structures dict + json\ndont invent the wheel by yourself - look for and use ready to go database-solutions for python rather than messing with self-made txt.files for data persistence\ncli is the way to go for so many python scripts. get familiar with it\n\nwhat you are trying to solve is for me a database with a simple cli.\nI suggest looking at these third party modules: typer and tinydb\nwith these you can make a \"school.py\" like this:\nfrom dataclasses import asdict, dataclass, field\nfrom tinydb import TinyDB, Query, where\nimport typer\n\nFILE = \"data.json\"\n\napp = typer.Typer()\n\n\n@dataclass\nclass Student:\n first_name: str\n last_name: str\n ssn: str\n\n @classmethod\n def from_dict(cls, d):\n return Student(**d)\n\n\n@dataclass\nclass School:\n name: str\n students: list[Student] = field(default_factory=list)\n\n @classmethod\n def from_dict(cls, d):\n cls.name = d[\"name\"]\n cls.students = [Student.from_dict(s) for s in d[\"students\"]]\n return cls\n\n\n@app.command()\ndef add_student(first_name: str, last_name: str, ssn: str):\n student = Student(first_name, last_name, ssn)\n my_school.students.append(student)\n db.update(\n {\"students\": [asdict(student) for student in my_school.students]},\n doc_ids=[1],\n )\n\n\n@app.command()\ndef list_all():\n print(db.get(doc_id=1))\n\n\nif __name__ == \"__main__\":\n db = TinyDB(\"school_db.json\")\n if not db.get(doc_id=1):\n print(\"init\")\n my_school = School(\"my_school\")\n db.insert(asdict(my_school))\n my_school = School.from_dict(db.get(doc_id=1))\n app()\n\nyou can use it in the console by entering:\npython school.py add-student bruno mars 23gs44g5\nand then\npython school.py list-all\ninside the program folder you will see a \"data.json\" file. there you have all your data in a human readable and very interchangeable form\ngood luck\n" ]
[ 0 ]
[]
[]
[ "class", "object", "python", "save" ]
stackoverflow_0074558761_class_object_python_save.txt
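The advice above replaces the flat file with TinyDB, but the plain-text layout the question asks for (three lines per student) is also straightforward. The sketch below is my own illustration, not part of the recorded answer; it assumes school.students maps keys to Student objects as in the question, and save_students/load_students are hypothetical helper names.
def save_students(school, path):
    with open(path, 'w') as f:
        for student in school.students.values():  # assumes dict values are Student objects
            f.write(student.first_name + '\n')
            f.write(student.last_name + '\n')
            f.write(student.ssn + '\n')

def load_students(path):
    with open(path) as f:
        lines = [line.rstrip('\n') for line in f]
    # three lines per student: first name, last name, SSN
    return [tuple(lines[i:i + 3]) for i in range(0, len(lines), 3)]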
Q: How to install Nvidia Apex I am trying to install apex on colab by Nvidia but failed several times. I tried number of different solutions including ones provided by Github official repository. I also tried answers provided here. Every time I try I encounter error like this` torch.__version__ = 1.9.0+cu102 /tmp/pip-req-build-xogkfxc5/setup.py:67: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies! warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!") Compiling cuda extensions with nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_21:12:58_PST_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 from /usr/local/cuda/bin Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-req-build-xogkfxc5/setup.py", line 171, in <module> check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME) File "/tmp/pip-req-build-xogkfxc5/setup.py", line 102, in check_cuda_torch_binary_vs_bare_metal raise RuntimeError("Cuda extensions are being compiled with a version of Cuda that does " + RuntimeError: Cuda extensions are being compiled with a version of Cuda that does not match the version used to compile Pytorch binaries. Pytorch binaries were compiled with Cuda 10.2. In some cases, a minor-version mismatch will not cause later errors: https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. You can try commenting out this check (at your own risk). Running setup.py install for apex ... error ERROR: Command errored out with exit status 1: /home/liuyuan/anaconda3/envs/yolox/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-xogkfxc5/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-xogkfxc5/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-wm_6rdtx/install-record.txt --single-version-externally-managed --compile --install-headers /home/liuyuan/anaconda3/envs/yolox/include/python3.8/apex Check the logs for full command output. As you can see I am using latest versions of pytorch and cuda and also considered previous versions but every solutions were failed. A: To built apex on Colab, the cuda version of PyTorch and your system must match, as explained here. Note that, e. g., apex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc. require CUDA and C++ extensions. You can built apex on Colab using the following simple steps: Query the version Ubuntu Colab is running on: !lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.6 LTS Release: 18.04 Codename: bionic To get the current cuda version run: !nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_21:12:58_PST_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 Look-up the latest PyTorch built and compute plattform here. Next, got to the cuda toolkit archive and configure a version that matches the cuda-version of PyTorch and your OS-Version. 
Copy the installation instructions: wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ sudo apt-get update sudo apt-get -y install cuda Remove Sudo and change the last line to include your cuda-version e.g., !apt-get -y install cuda-11-7 (without exclamation mark if run in shell directly): !wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin !mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 !wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ !apt-get update !apt-get -y install cuda-11-7 Your cuda version will now be updated: !nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Wed_Jun__8_16:49:14_PDT_2022 Cuda compilation tools, release 11.7, V11.7.99 Build cuda_11.7.r11.7/compiler.31442593_0 Next, updated the outdated Pytorch version in Google Colab: !pip install torch -U Build apex. Depending on what you might require fewer global options: !git clone https://github.com/NVIDIA/apex.git && cd apex && pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_multihead_attn" . && cd .. && rm -rf apex ... Successfully installed apex-0.1 You can now import apex as desired: from apex import optimizers, normalization ...
How to install Nvidia Apex
I am trying to install apex on colab by Nvidia but failed several times. I tried number of different solutions including ones provided by Github official repository. I also tried answers provided here. Every time I try I encounter error like this` torch.__version__ = 1.9.0+cu102 /tmp/pip-req-build-xogkfxc5/setup.py:67: UserWarning: Option --pyprof not specified. Not installing PyProf dependencies! warnings.warn("Option --pyprof not specified. Not installing PyProf dependencies!") Compiling cuda extensions with nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_21:12:58_PST_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 from /usr/local/cuda/bin Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-req-build-xogkfxc5/setup.py", line 171, in <module> check_cuda_torch_binary_vs_bare_metal(torch.utils.cpp_extension.CUDA_HOME) File "/tmp/pip-req-build-xogkfxc5/setup.py", line 102, in check_cuda_torch_binary_vs_bare_metal raise RuntimeError("Cuda extensions are being compiled with a version of Cuda that does " + RuntimeError: Cuda extensions are being compiled with a version of Cuda that does not match the version used to compile Pytorch binaries. Pytorch binaries were compiled with Cuda 10.2. In some cases, a minor-version mismatch will not cause later errors: https://github.com/NVIDIA/apex/pull/323#discussion_r287021798. You can try commenting out this check (at your own risk). Running setup.py install for apex ... error ERROR: Command errored out with exit status 1: /home/liuyuan/anaconda3/envs/yolox/bin/python3.8 -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-xogkfxc5/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-xogkfxc5/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' --cpp_ext --cuda_ext install --record /tmp/pip-record-wm_6rdtx/install-record.txt --single-version-externally-managed --compile --install-headers /home/liuyuan/anaconda3/envs/yolox/include/python3.8/apex Check the logs for full command output. As you can see I am using latest versions of pytorch and cuda and also considered previous versions but every solutions were failed.
[ "To built apex on Colab, the cuda version of PyTorch and your system must match, as explained here.\nNote that, e. g., apex.optimizers.FusedAdam, apex.normalization.FusedLayerNorm, etc. require CUDA and C++ extensions.\nYou can built apex on Colab using the following simple steps:\nQuery the version Ubuntu Colab is running on:\n!lsb_release -a\n\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.6 LTS\nRelease: 18.04\nCodename: bionic\n\nTo get the current cuda version run:\n!nvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2021 NVIDIA Corporation\nBuilt on Sun_Feb_14_21:12:58_PST_2021\nCuda compilation tools, release 11.2, V11.2.152\nBuild cuda_11.2.r11.2/compiler.29618528_0 \n\nLook-up the latest PyTorch built and compute plattform here.\n\nNext, got to the cuda toolkit archive and configure a version that matches the cuda-version of PyTorch and your OS-Version.\n\nCopy the installation instructions:\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\nsudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\nwget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\nsudo apt-get update\nsudo apt-get -y install cuda\n\nRemove Sudo and change the last line to include your cuda-version e.g., !apt-get -y install cuda-11-7 (without exclamation mark if run in shell directly):\n!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\n!mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\n!wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\n!apt-get update\n!apt-get -y install cuda-11-7\n\nYour cuda version will now be updated:\n!nvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2022 NVIDIA Corporation\nBuilt on Wed_Jun__8_16:49:14_PDT_2022\nCuda compilation tools, release 11.7, V11.7.99\nBuild cuda_11.7.r11.7/compiler.31442593_0\n\nNext, updated the outdated Pytorch version in Google Colab:\n!pip install torch -U\n\nBuild apex. Depending on what you might require fewer global options:\n!git clone https://github.com/NVIDIA/apex.git && cd apex && pip install -v --no-cache-dir --global-option=\"--cpp_ext\" --global-option=\"--cuda_ext\" --global-option=\"--fast_multihead_attn\" . && cd .. && rm -rf apex\n\n...\nSuccessfully installed apex-0.1\n\nYou can now import apex as desired:\nfrom apex import optimizers, normalization\n...\n\n" ]
[ 0 ]
[]
[]
[ "google_colaboratory", "machine_learning", "nvidia", "python" ]
stackoverflow_0068558286_google_colaboratory_machine_learning_nvidia_python.txt
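The root cause in this record is the version check in apex's setup.py: the CUDA release that built the PyTorch wheel (10.2) differs from the system nvcc (11.2). A small pre-flight comparison can catch this before starting a long build. This sketch is my own assumption-laden illustration, not part of apex itself:
import subprocess
import torch

torch_cuda = torch.version.cuda  # CUDA the wheel was built with; None on CPU-only builds
nvcc_out = subprocess.run(['nvcc', '--version'], capture_output=True, text=True).stdout
bare_metal = nvcc_out.split('release ')[1].split(',')[0]  # e.g. '11.2'

if torch_cuda is None or torch_cuda.split('.')[:2] != bare_metal.split('.')[:2]:
    print(f'mismatch: torch built with CUDA {torch_cuda}, nvcc is {bare_metal}')
else:
    print('CUDA versions match; apex should build')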
Q: how to display first_name in database on django `views.py from allauth.account.views import SignupView from .forms import HODSignUpForm class HodSignUp(SignupView): template_name = 'account/signup.html' form_class = HODSignUpForm redirect_field_name = '' view_name = 'hod_sign_up' def get_context_data(self, **kwargs): ret = super(HodSignUp, self).get_context_data(**kwargs) ret.update(self.kwargs) return ret forms.py from .models import Admin from po.models import User from allauth.account.forms import SignupForm class HODSignUpForm(SignupForm): first_name=forms.CharField(required=False) last_name=forms.CharField(required=False) class Meta: model= Admin fields = ['first_name','last_name'] def save(self,request): user = super(HODSignUpForm, self).save(request) user.is_hod = True user= User(first_name=self.cleaned_data.get('first_name'), last_name=self.cleaned_data.get('last_name')) user.save() return user models.py from po.models import User class Admin(models.Model): user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL) first_name = models.CharField(max_length=30, db_column='first_name') last_name = models.CharField(max_length=30, db_column='last_name') po.models.py from django.contrib.auth.models import AbstractUser class User(AbstractUser): is_active = models.BooleanField(default=True) is_hod= models.BooleanField(default=False) first_name = models.CharField(null=True, max_length=50) last_name=models.CharField(null=True, max_length=50) admin.py from po.models import User class Schooladmin(admin.ModelAdmin): list_display = ("id","is_active","is_hod","first_name","last_name") list_filter = ("is_active","is_hod") add_fieldsets = ( ('Personal Info', { 'fields': ('first_name', 'last_name') }), ) admin.site.register(User,Schooladmin) enter image description here i want this image show name but how does show firstname and lastname on database? enter image description here here's porblem. A: Add verbose_name when creating class: from po.models import User class Admin(models.Model): user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL) first_name = models.CharField(max_length=30, db_column='first_name', verbose_name='Name') last_name = models.CharField(max_length=30, db_column='last_name') Or you can add class Meta to models and use labels: labels = { 'first_name': 'Name' } A: Try this: from django.contrib.auth.models import AbstractUser class User(AbstractUser): list_display = ('is_active', 'is_hod', 'first_name', 'last_name') and from po.models import User class Admin(models.Model): user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL) first_name = models.CharField(max_length=30, db_column='first_name', verbose_name='Name') last_name = models.CharField(max_length=30, db_column='last_name') is_active = models.BooleanField(default=True) is_hod= models.BooleanField(default=False) A: forms.py: from .models import Admin from po.models import User from allauth.account.forms import SignupForm class HODSignUpForm(SignupForm): class Meta(SignupForm.Meta): model= Admin fields = ['first_name','last_name'] labels = {'first_name': 'Name'} def save(self,request): user = super(HODSignUpForm, self).save(request) user.is_hod = True user= User(first_name=self.cleaned_data.get('first_name'), last_name=self.cleaned_data.get('last_name')) user.save() return user
how to display first_name in database on django
views.py
from allauth.account.views import SignupView
from .forms import HODSignUpForm

class HodSignUp(SignupView):
    template_name = 'account/signup.html'
    form_class = HODSignUpForm
    redirect_field_name = ''
    view_name = 'hod_sign_up'

    def get_context_data(self, **kwargs):
        ret = super(HodSignUp, self).get_context_data(**kwargs)
        ret.update(self.kwargs)
        return ret

forms.py
from .models import Admin
from po.models import User
from allauth.account.forms import SignupForm

class HODSignUpForm(SignupForm):
    first_name = forms.CharField(required=False)
    last_name = forms.CharField(required=False)

    class Meta:
        model = Admin
        fields = ['first_name', 'last_name']

    def save(self, request):
        user = super(HODSignUpForm, self).save(request)
        user.is_hod = True
        user = User(first_name=self.cleaned_data.get('first_name'),
                    last_name=self.cleaned_data.get('last_name'))
        user.save()
        return user

models.py
from po.models import User

class Admin(models.Model):
    user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL)
    first_name = models.CharField(max_length=30, db_column='first_name')
    last_name = models.CharField(max_length=30, db_column='last_name')

po/models.py
from django.contrib.auth.models import AbstractUser

class User(AbstractUser):
    is_active = models.BooleanField(default=True)
    is_hod = models.BooleanField(default=False)
    first_name = models.CharField(null=True, max_length=50)
    last_name = models.CharField(null=True, max_length=50)

admin.py
from po.models import User

class Schooladmin(admin.ModelAdmin):
    list_display = ("id", "is_active", "is_hod", "first_name", "last_name")
    list_filter = ("is_active", "is_hod")
    add_fieldsets = (
        ('Personal Info', {
            'fields': ('first_name', 'last_name')
        }),
    )

admin.site.register(User, Schooladmin)

(screenshot of the admin page)
I want this page to show the name, but how do I display first_name and last_name from the database?
(screenshot of the result)
Here's the problem.
[ "Add verbose_name when creating class:\nfrom po.models import User\nclass Admin(models.Model):\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL)\n first_name = models.CharField(max_length=30, db_column='first_name', verbose_name='Name')\n last_name = models.CharField(max_length=30, db_column='last_name')\n\nOr you can add class Meta to models and use labels:\nlabels = {\n 'first_name': 'Name'\n}\n\n", "Try this:\nfrom django.contrib.auth.models import AbstractUser\nclass User(AbstractUser):\n list_display = ('is_active', 'is_hod', 'first_name', 'last_name')\n\nand\nfrom po.models import User\nclass Admin(models.Model):\n user = models.ForeignKey(User, blank=True, null=True, on_delete=models.SET_NULL)\n first_name = models.CharField(max_length=30, db_column='first_name', verbose_name='Name')\n last_name = models.CharField(max_length=30, db_column='last_name')\n is_active = models.BooleanField(default=True)\n is_hod= models.BooleanField(default=False)\n\n", "forms.py:\nfrom .models import Admin\nfrom po.models import User\nfrom allauth.account.forms import SignupForm\n\nclass HODSignUpForm(SignupForm):\n\n class Meta(SignupForm.Meta):\n model= Admin\n fields = ['first_name','last_name']\n\n labels = {'first_name': 'Name'}\n\n def save(self,request):\n user = super(HODSignUpForm, self).save(request)\n user.is_hod = True\n user= User(first_name=self.cleaned_data.get('first_name'),\n last_name=self.cleaned_data.get('last_name'))\n user.save()\n return user\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074557996_django_python.txt
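If the goal is simply a combined name column in the admin list, a computed display method avoids duplicating fields. The sketch below is my own variation on the question's admin.py, not one of the recorded answers; it assumes Django 3.2+ for the admin.display decorator, and full_name is a name I am introducing:
from django.contrib import admin
from po.models import User

class Schooladmin(admin.ModelAdmin):
    list_display = ('id', 'is_active', 'is_hod', 'full_name')

    @admin.display(description='Name')
    def full_name(self, obj):
        # combine the two stored columns into one admin column
        return f'{obj.first_name} {obj.last_name}'

admin.site.register(User, Schooladmin)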
Q: Using Python Docx to remove blank lines I am using Python docx to remove blank lines from documents containing text and images. Using the paragraph.clear() and paragraph.run.clear() works to a point, but the outputted file still has blank lines which only have a paragraph mark shown in Word. Is there a way of searching directly for paragraph marks? Or is there a better way of clearing the lines? # code snippet for paragraphs in document.paragraphs: if paragraphs.text == "\n": paragraphs.clear() A: Empty lines are not marked by "\n" but by empty string "". Plus, clear() removes text but not the paragraph itself. Try to test len(paragraph.text)==0 for each paragraph. A: This removed all the empty lines for me in my document file for paragraph in doc.paragraphs: if len(paragraph.text) == 0: p = paragraph._element p.getparent().remove(p) p._p = p._element = None A: Using len(paragraph.text)==1 helps as opposed to using len(paragraph.text)==0 as new line is also a character. I just wanted to copy the lines except the blank lines to a new document so it gave me the output. When I used paragraph.text=paragraph.strip('\n') the font style,bold,underlined and italic were removed.So checking the length of each paragraph and clearing that paragraph does the trick.
Using Python Docx to remove blank lines
I am using Python docx to remove blank lines from documents containing text and images. Using the paragraph.clear() and paragraph.run.clear() works to a point, but the outputted file still has blank lines which only have a paragraph mark shown in Word.
Is there a way of searching directly for paragraph marks? Or is there a better way of clearing the lines?
# code snippet
for paragraphs in document.paragraphs:
    if paragraphs.text == "\n":
        paragraphs.clear()
[ "Empty lines are not marked by \"\\n\" but by empty string \"\".\nPlus, clear() removes text but not the paragraph itself.\nTry to test len(paragraph.text)==0 for each paragraph.\n", "This removed all the empty lines for me in my document file\nfor paragraph in doc.paragraphs:\n if len(paragraph.text) == 0:\n p = paragraph._element\n p.getparent().remove(p)\n p._p = p._element = None\n\n", "Using len(paragraph.text)==1 helps as opposed to using len(paragraph.text)==0 as new line is also a character.\nI just wanted to copy the lines except the blank lines to a new document so it gave me the output.\nWhen I used paragraph.text=paragraph.strip('\\n') the font style,bold,underlined and italic were removed.So checking the length of each paragraph and clearing that paragraph does the trick.\n" ]
[ 1, 1, 0 ]
[]
[]
[ "python", "python_docx" ]
stackoverflow_0043710188_python_python_docx.txt
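Tying the answers above together: the paragraphs that survive clear() are usually whitespace-only rather than strictly empty, so stripping before testing covers both cases. This sketch is my own addition; it uses the same private _element access as the second answer (a python-docx implementation detail, not public API), and the file names are placeholders:
from docx import Document

doc = Document('input.docx')
for paragraph in list(doc.paragraphs):       # snapshot, since we mutate the tree
    if not paragraph.text.strip():           # '' or whitespace-only => blank line
        element = paragraph._element
        element.getparent().remove(element)  # drop the <w:p> node itself
doc.save('output.docx')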
Q: Confluent-Kafka Python - Describe consumer groups (to get the lag of each consumer group) I want to get the details of the consumer group using confluent-kafka. The cli equivalent of that is ` ./kafka-consumer-groups.sh --bootstrap-server XXXXXXXXX:9092 --describe --group my-group My end goal is to get the value of lag from the output. Is there any method in confluent-kafka python API to get these details. There is a method in the java API but I couldn't find it in python API. I tried using the describe_configs method in the adminClient API but it ended up throwing kafkaException with following details This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details. A: For now I have come up with the following solution. It's a work around to get the combined lag of a consumer group def get_lag(topic,numPartitions): diff = list() for i in range(numPartitions): topic_partition = TopicPartition(topic, partition=i) low, high = consumer.get_watermark_offsets(topic_partition) currentList = consumer.committed([topic_partition]) current = currentList[0].offset diff.append(high-current) return sum(diff) # Combined Lag
Confluent-Kafka Python - Describe consumer groups (to get the lag of each consumer group)
I want to get the details of a consumer group using confluent-kafka. The CLI equivalent of that is
./kafka-consumer-groups.sh --bootstrap-server XXXXXXXXX:9092 --describe --group my-group

My end goal is to get the value of lag from the output. Is there any method in the confluent-kafka Python API to get these details? There is a method in the Java API, but I couldn't find it in the Python API.
I tried using the describe_configs method in the AdminClient API, but it ended up throwing a KafkaException with the following details:
This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details.
[ "For now I have come up with the following solution. It's a work around to get the combined lag of a consumer group\n def get_lag(topic,numPartitions):\n diff = list()\n for i in range(numPartitions):\n topic_partition = TopicPartition(topic, partition=i)\n low, high = consumer.get_watermark_offsets(topic_partition)\n currentList = consumer.committed([topic_partition])\n current = currentList[0].offset\n diff.append(high-current)\n return sum(diff) # Combined Lag\n\n" ]
[ 0 ]
[]
[]
[ "apache_kafka", "confluent_kafka_python", "kafka_consumer_api", "python" ]
stackoverflow_0074558033_apache_kafka_confluent_kafka_python_kafka_consumer_api_python.txt
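For completeness, here is the same watermark/committed-offset idea with the consumer wired up and per-partition results. The broker address, group id, and the handling of not-yet-committed partitions are my own assumptions, not part of the recorded answer:
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    'bootstrap.servers': 'localhost:9092',  # placeholder broker
    'group.id': 'my-group',
})

def partition_lags(topic, num_partitions):
    lags = {}
    for p in range(num_partitions):
        tp = TopicPartition(topic, p)
        low, high = consumer.get_watermark_offsets(tp)
        committed = consumer.committed([tp])[0].offset
        if committed >= 0:  # negative sentinel means nothing committed yet
            lags[p] = high - committed
    return lags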
Q: Keras' clear_session() not working in Google colab I run a keras model for several times in Google colab. Due to the nature of tensorflow there is a new model created each time of the program run, which leads to exhausted memory after some runs. I found that clear_session() of keras should help at the problem, but it doesn't seem to work. I created an MWE for Google colab below. import numpy as np from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras import backend as K X = np.zeros([10, 10000]) y = np.zeros([10, 10000]) ######## m = Sequential([Dense(10000, input_shape=(10000,)), Dense(10000), Dense(10000), Dense(10000)]) m.compile(loss='mse') m.summary() m.fit(X,y) K.clear_session() After running the part below ######## for three times, I get the following error: --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-3-7ae5ab890fc2> in <module> 3 m.summary() 4 ----> 5 m.fit(X,y) 6 K.clear_session() 1 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: ResourceExhaustedError: Graph execution error: Detected at node 'RMSprop/RMSprop/update_2/mul_2' defined at (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance app.start() File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 612, in start self.io_loop.start() File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever self._run_once() File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.7/dist-packages/tornado/ioloop.py", line 758, in _run_callback ret = callback() File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1233, in inner self.run() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1147, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 381, in dispatch_queue yield self.process_one() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 346, in wrapper runner = Runner(result, future, yielded) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1080, in __init__ self.run() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1147, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 365, in process_one yield gen.maybe_future(dispatch(*args)) File 
"/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 545, in execute_request user_expressions, allow_stdin, File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2855, in run_cell raw_cell, store_history, silent, shell_futures) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell return runner(coro) File "/usr/local/lib/python3.7/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner coro.send(None) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3058, in run_cell_async interactivity=interactivity, compiler=compiler, result=result) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes if (await self.run_code(code, result, async_=asy)): File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-3-7ae5ab890fc2>", line 5, in <module> m.fit(X,y) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1409, in fit tmp_logs = self.train_function(iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1051, in train_function return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1040, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1030, in run_step outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 893, in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize return self.apply_gradients(grads_and_vars, name=name) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 682, in apply_gradients name=name) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 724, in _distributed_apply var, apply_grad_to_update_var, args=(grad,), group=False) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 706, in apply_grad_to_update_var update_op = self._resource_apply_dense(grad, var, **apply_kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/rmsprop.py", line 216, in _resource_apply_dense var_t = var - 
coefficients["lr_t"] * grad / ( Node: 'RMSprop/RMSprop/update_2/mul_2' failed to allocate memory [[{{node RMSprop/RMSprop/update_2/mul_2}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [Op:__inference_train_function_2465] I want to play around with slightly different data on the same model, so I run a similar part several times. I can simply restart the notebook after the error, but it takes some time to load the data, so is there an option how I can really clear an old model? Thanks for help. A: Please restart the runtime and try again as I tried replicating the above code and it's working fine. You can check the output mentioned below for the same code: import numpy as np from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras import backend as K X = np.zeros([10, 10000]) y = np.zeros([10, 10000]) ######## m = Sequential([Dense(10000, input_shape=(10000,)), Dense(10000), Dense(10000), Dense(10000)]) m.compile(loss='mse') m.summary() m.fit(X,y, epochs=2) K.clear_session() Output: Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 10000) 100010000 dense_1 (Dense) (None, 10000) 100010000 dense_2 (Dense) (None, 10000) 100010000 dense_3 (Dense) (None, 10000) 100010000 ================================================================= Total params: 400,040,000 Trainable params: 400,040,000 Non-trainable params: 0 _________________________________________________________________ Epoch 1/2 1/1 [==============================] - 10s 10s/step - loss: 0.0000e+00 Epoch 2/2 1/1 [==============================] - 6s 6s/step - loss: 0.0000e+00
Keras' clear_session() not working in Google colab
I run a keras model for several times in Google colab. Due to the nature of tensorflow there is a new model created each time of the program run, which leads to exhausted memory after some runs. I found that clear_session() of keras should help at the problem, but it doesn't seem to work. I created an MWE for Google colab below. import numpy as np from tensorflow.keras import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras import backend as K X = np.zeros([10, 10000]) y = np.zeros([10, 10000]) ######## m = Sequential([Dense(10000, input_shape=(10000,)), Dense(10000), Dense(10000), Dense(10000)]) m.compile(loss='mse') m.summary() m.fit(X,y) K.clear_session() After running the part below ######## for three times, I get the following error: --------------------------------------------------------------------------- ResourceExhaustedError Traceback (most recent call last) <ipython-input-3-7ae5ab890fc2> in <module> 3 m.summary() 4 ----> 5 m.fit(X,y) 6 K.clear_session() 1 frames /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name) 53 ctx.ensure_initialized() 54 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name, ---> 55 inputs, attrs, num_outputs) 56 except core._NotOkStatusException as e: 57 if name is not None: ResourceExhaustedError: Graph execution error: Detected at node 'RMSprop/RMSprop/update_2/mul_2' defined at (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module> app.launch_new_instance() File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 846, in launch_instance app.start() File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 612, in start self.io_loop.start() File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start self.asyncio_loop.run_forever() File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever self._run_once() File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once handle._run() File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run self._context.run(self._callback, *self._args) File "/usr/local/lib/python3.7/dist-packages/tornado/ioloop.py", line 758, in _run_callback ret = callback() File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1233, in inner self.run() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1147, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 381, in dispatch_queue yield self.process_one() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 346, in wrapper runner = Runner(result, future, yielded) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1080, in __init__ self.run() File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 1147, in run yielded = self.gen.send(value) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 365, in process_one yield gen.maybe_future(dispatch(*args)) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = 
next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 268, in dispatch_shell yield gen.maybe_future(handler(stream, idents, msg)) File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 545, in execute_request user_expressions, allow_stdin, File "/usr/local/lib/python3.7/dist-packages/tornado/gen.py", line 326, in wrapper yielded = next(result) File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 306, in do_execute res = shell.run_cell(code, store_history=store_history, silent=silent) File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 536, in run_cell return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2855, in run_cell raw_cell, store_history, silent, shell_futures) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2881, in _run_cell return runner(coro) File "/usr/local/lib/python3.7/dist-packages/IPython/core/async_helpers.py", line 68, in _pseudo_sync_runner coro.send(None) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3058, in run_cell_async interactivity=interactivity, compiler=compiler, result=result) File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3249, in run_ast_nodes if (await self.run_code(code, result, async_=asy)): File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 3326, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-3-7ae5ab890fc2>", line 5, in <module> m.fit(X,y) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 64, in error_handler return fn(*args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1409, in fit tmp_logs = self.train_function(iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1051, in train_function return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1040, in step_function outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1030, in run_step outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 893, in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 539, in minimize return self.apply_gradients(grads_and_vars, name=name) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 682, in apply_gradients name=name) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 724, in _distributed_apply var, apply_grad_to_update_var, args=(grad,), group=False) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py", line 706, in apply_grad_to_update_var update_op = self._resource_apply_dense(grad, var, **apply_kwargs) File "/usr/local/lib/python3.7/dist-packages/keras/optimizers/optimizer_v2/rmsprop.py", line 216, in _resource_apply_dense var_t = var - coefficients["lr_t"] * grad / ( Node: 'RMSprop/RMSprop/update_2/mul_2' failed to allocate 
memory [[{{node RMSprop/RMSprop/update_2/mul_2}}]] Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode. [Op:__inference_train_function_2465] I want to play around with slightly different data on the same model, so I run a similar part several times. I can simply restart the notebook after the error, but it takes some time to load the data, so is there an option how I can really clear an old model? Thanks for help.
[ "Please restart the runtime and try again as I tried replicating the above code and it's working fine.\nYou can check the output mentioned below for the same code:\nimport numpy as np\nfrom tensorflow.keras import Sequential\nfrom tensorflow.keras.layers import Dense\nfrom tensorflow.keras import backend as K\n\nX = np.zeros([10, 10000])\ny = np.zeros([10, 10000])\n\n########\nm = Sequential([Dense(10000, input_shape=(10000,)), Dense(10000), Dense(10000), Dense(10000)])\nm.compile(loss='mse')\nm.summary()\n\nm.fit(X,y, epochs=2)\nK.clear_session()\n\nOutput:\nModel: \"sequential\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n dense (Dense) (None, 10000) 100010000 \n \n dense_1 (Dense) (None, 10000) 100010000 \n \n dense_2 (Dense) (None, 10000) 100010000 \n \n dense_3 (Dense) (None, 10000) 100010000 \n \n=================================================================\nTotal params: 400,040,000\nTrainable params: 400,040,000\nNon-trainable params: 0\n_________________________________________________________________\nEpoch 1/2\n1/1 [==============================] - 10s 10s/step - loss: 0.0000e+00\nEpoch 2/2\n1/1 [==============================] - 6s 6s/step - loss: 0.0000e+00\n\n" ]
[ 0 ]
[]
[]
[ "google_colaboratory", "jupyter_notebook", "keras", "python", "tensorflow" ]
stackoverflow_0074289376_google_colaboratory_jupyter_notebook_keras_python_tensorflow.txt
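The pattern that keeps memory bounded across repeated fits is to drop every Python reference to the old model, then clear the backend state and force garbage collection before building the next one. Below is a minimal sketch of that loop; it is my own illustration, sized down from the question's model, and note that TensorFlow may still keep a memory pool on the GPU even after this:
import gc
import numpy as np
from tensorflow.keras import Sequential, backend as K
from tensorflow.keras.layers import Dense

X = np.zeros([10, 1000])
y = np.zeros([10, 1000])

for run in range(3):
    m = Sequential([Dense(1000, input_shape=(1000,)), Dense(1000)])
    m.compile(loss='mse')
    m.fit(X, y, verbose=0)
    del m             # clear_session cannot free a model you still hold a reference to
    K.clear_session()
    gc.collect()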
Q: Checking if 1D numpy array in a list of 1D numpy arrays and None I want to check whether a 1D numpy array in the list of a 1D numpy arrays and None for an if condition. I did it like this: arr = np.array([1,2]) lst = [np.array([1,2]), np.array([3,4]), None, None] if list(arr) in [list(i) for i in lst if i is not None]: print("Yes") else: print("No") but the size of the list and the numpy array can be much larger, so is there a more efficient way to do this? instead of changing every numpy array to list. A: You cannot avoid one iteration through the lst to modify its elements (numpy arrays) somehow. But instead of creating a list of lists out of the numpy arrays, you can create a set of tuples instead and store it: set_of_arrays_as_tuples = set([tuple(array) for array in lst if array is not None]) Then, any subsequent query to check presence in the set can be done in constant time, instead of linear time: tuple(arr) in set_of_arrays_as_tuples ->True
Checking if 1D numpy array in a list of 1D numpy arrays and None
I want to check whether a 1D numpy array is in a list of 1D numpy arrays and Nones, for an if condition. I did it like this:
arr = np.array([1,2])
lst = [np.array([1,2]), np.array([3,4]), None, None]

if list(arr) in [list(i) for i in lst if i is not None]:
    print("Yes")
else:
    print("No")

But the size of the list and of the numpy arrays can be much larger, so is there a more efficient way to do this, instead of converting every numpy array to a list?
[ "You cannot avoid one iteration through the lst to modify its elements (numpy arrays) somehow.\nBut instead of creating a list of lists out of the numpy arrays, you can create a set of tuples instead and store it:\nset_of_arrays_as_tuples = set([tuple(array) for array in lst if array is not None])\nThen, any subsequent query to check presence in the set can be done in constant time, instead of linear time:\ntuple(arr) in set_of_arrays_as_tuples\n->True\n" ]
[ 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074561704_numpy_python.txt
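A fully vectorized alternative to both the list and the set approaches, assuming the non-None arrays all share arr's length, is to stack them once and compare row-wise. This sketch is my own addition:
import numpy as np

arr = np.array([1, 2])
lst = [np.array([1, 2]), np.array([3, 4]), None, None]

stacked = np.stack([a for a in lst if a is not None])  # shape (n, 2)
found = np.any(np.all(stacked == arr, axis=1))         # row-wise equality
print('Yes' if found else 'No')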
Q: How to use Linear regression when my **X** values are normally distributed? To be more specific the error variance of the x value is half of the variance of error in y. I looked over sklearn and couldn't find a function which takes the error variance of x into account. A: Not 100% sure I understand the question. But if I understand it correctly, you are trying to use linear regression to find the linear model with maximum likelihood. In other words, an error for data where X and Y are uncertain is less serious that one where X and Y are very accurate. If that is so, what people do in such case, is usually to weight each sample with inverse of error variance. With sklearn, weight is simply the 3rd (optionnal) parameter of .fit So I would lin=LinearRegression() lin.fit(X, Y, 1/variance(X)) variance(X) being your evaluation of X variance. Edit (after your comment) Then I don't get the question. The fact that X and Y measure have an error, that those error do not have the same magnitude (anyway, "same magnitude" about a weight and a size would be meaningless), etc. is not a problem. If there were no error, then you wouldn't be doing a linear regression, would you? As long as those error have a 0 expected value (and if not, just remove the expected value of the error from the variable :D), and are not correlated... (so, being independent is a sufficient condition) That is Gauss-Markov hypothesis, and it is the foundation of the least square method (the one used by sklearn). If you know something else from your error, then, back to my first answer. But if all you know is that the error on Y tends to be bigger than error on X, then, there is no problem to solve.
How to use Linear regression when my **X** values are normally distributed?
To be more specific, the error variance of the x values is half of the error variance of the y values. I looked through sklearn and couldn't find a function which takes the error variance of x into account.
[ "Not 100% sure I understand the question. But if I understand it correctly, you are trying to use linear regression to find the linear model with maximum likelihood. In other words, an error for data where X and Y are uncertain is less serious that one where X and Y are very accurate.\nIf that is so, what people do in such case, is usually to weight each sample with inverse of error variance.\nWith sklearn, weight is simply the 3rd (optionnal) parameter of .fit\nSo I would\nlin=LinearRegression()\nlin.fit(X, Y, 1/variance(X))\n\nvariance(X) being your evaluation of X variance.\nEdit (after your comment)\nThen I don't get the question. The fact that X and Y measure have an error, that those error do not have the same magnitude (anyway, \"same magnitude\" about a weight and a size would be meaningless), etc. is not a problem. If there were no error, then you wouldn't be doing a linear regression, would you? As long as those error have a 0 expected value (and if not, just remove the expected value of the error from the variable :D), and are not correlated... (so, being independent is a sufficient condition)\nThat is Gauss-Markov hypothesis, and it is the foundation of the least square method (the one used by sklearn).\nIf you know something else from your error, then, back to my first answer. But if all you know is that the error on Y tends to be bigger than error on X, then, there is no problem to solve.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "python", "scikit_learn" ]
stackoverflow_0074560695_machine_learning_python_scikit_learn.txt
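When the ratio of the two error variances is known, as it is here, a standard errors-in-variables tool is Deming regression, which sklearn does not provide but which has a closed form. In the sketch below (my own addition, not part of the recorded answer), delta is var(y-error)/var(x-error), so the question's setup with x-error variance half of y's gives delta = 2:
import numpy as np

def deming_fit(x, y, delta=2.0):
    x_bar, y_bar = x.mean(), y.mean()
    s_xx = np.sum((x - x_bar) ** 2)
    s_yy = np.sum((y - y_bar) ** 2)
    s_xy = np.sum((x - x_bar) * (y - y_bar))
    slope = (s_yy - delta * s_xx
             + np.sqrt((s_yy - delta * s_xx) ** 2 + 4 * delta * s_xy ** 2)
             ) / (2 * s_xy)
    return slope, y_bar - slope * x_bar  # intercept from the means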
Q: How to install CUDA in Google Colab GPU's It seems that Google Colab GPU's doesn't come with CUDA Toolkit, how can I install CUDA in Google Colab GPU's. I am getting this error in installing mxnet in Google Colab. Installing collected packages: mxnet Successfully installed mxnet-1.2.0 ERROR: Incomplete installation for leveraging GPUs for computations. Please make sure you have CUDA installed and run the following line in your terminal and try again: pip uninstall -y mxnet && pip install mxnet-cu90==1.1.0 Adjust 'cu90' depending on your CUDA version ('cu75' and 'cu80' are also available). You can also disable GPU usage altogether by invoking turicreate.config.set_num_gpus(0). An exception has occurred, use %tb to see the full traceback. SystemExit: 1 A: Cuda is not showing on your notebook because you have not enabled GPU in Colab. The Google Colab comes with both options GPU or without GPU. You can enable or disable GPU in runtime settings Go to Menu > Runtime > Change runtime. Change hardware acceleration to GPU. To check if GPU is running or not, run the following command !nvidia-smi If the output is like the following image it means your GPU and cuda are working. You can see the CUDA version also. After that to check if PyTorch is capable of using GPU, run the following code. import torch torch.cuda.is_available() # Output would be True if Pytorch is using GPU otherwise it would be False. To check if TensorFlow is capable of using GPU, run the following code. import tensorflow as tf tf.test.gpu_device_name() # Standard output is '/device:GPU:0' A: I pretty much believe that Google Colab has Cuda pre-installed... You can make sure by opening a new notebook and type !nvcc --version which would return the installed Cuda version. Here is mine: A: Go here: https://developer.nvidia.com/cuda-downloads Select Linux -> x86_64 -> Ubuntu -> 16.04 -> deb (local) Copy link from the download button. Now you have to compose the sequence of commands. First one will be the call to wget that will download CUDA installer from the link you saved on step 3 There will be installation instruction under "Base installer" section. Copy them as well, but remove sudo from all the lines. Preface each line with commands with !, insert into a cell and run For me the command sequence was the following: !wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb !dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb !apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub !apt-get update !apt-get install cuda Now finally install mxnet. As cuda version I installed above is 9.2 I had to slighly change your command: !pip install mxnet-cu92 Successfully installed graphviz-0.8.3 mxnet-cu92-1.2.0 A: If you switch to using GPU then CUDA will be available on your VM. Basically what you need to do is to match MXNet's version with installed CUDA version. Here's what I used to install MXNet on Colab: First check the CUDA version !cat /usr/local/lib/python3.6/dist-packages/external/local_config_cuda/cuda/cuda/cuda_config.h |\ grep TF_CUDA_VERSION For me it outputted #define TF_CUDA_VERSION "8.0" Then I installed MXNet with !pip install mxnet-cu80 A: I think the easiest way here is to install mxnet-cu80. 
Just use the following code: !pip install mxnet-cu80 import mxnet as mx And you could check whether it works by: a = mx.nd.ones((2, 3), mx.gpu()) b = a * 2 + 1 b.asnumpy() I think colab right now just support cu80 and higher versions won't work. For more information, you could see the following two websites: Google Colab Free GPU Tutorial Installing mxnet A: This solution worked for me in November, 2022. Query the version of Ubuntu that Colab is running on (run in notebook using ! or in terminal without): !lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 18.04.6 LTS Release: 18.04 Codename: bionic Query the current cuda version in Colab (only for comparision): !nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2021 NVIDIA Corporation Built on Sun_Feb_14_21:12:58_PST_2021 Cuda compilation tools, release 11.2, V11.2.152 Build cuda_11.2.r11.2/compiler.29618528_0 Next, got to the cuda toolkit archive or latest builds and configure the desired cuda version and os version. The Distribution is Ubuntu. Copy the installation instructions: wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb sudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ sudo apt-get update sudo apt-get -y install cuda Change the last line to include your cuda-version e.g., apt-get -y install cuda-11-7. Otherwise a more recent version might be installed. !wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin !mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600 !wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-!repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb !cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/ !apt-get update !apt-get -y install cuda-11-7 Your cuda version will now be updated: nvcc --version nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2022 NVIDIA Corporation Built on Wed_Jun__8_16:49:14_PDT_2022 Cuda compilation tools, release 11.7, V11.7.99 Build cuda_11.7.r11.7/compiler.31442593_0
How to install CUDA on Google Colab GPUs
It seems that Google Colab GPUs don't come with the CUDA Toolkit; how can I install CUDA on Google Colab GPUs? I am getting this error when installing mxnet in Google Colab:
Installing collected packages: mxnet
Successfully installed mxnet-1.2.0

ERROR: Incomplete installation for leveraging GPUs for computations.
Please make sure you have CUDA installed and run the following line in your terminal and try again:

pip uninstall -y mxnet && pip install mxnet-cu90==1.1.0

Adjust 'cu90' depending on your CUDA version ('cu75' and 'cu80' are also available).
You can also disable GPU usage altogether by invoking turicreate.config.set_num_gpus(0).

An exception has occurred, use %tb to see the full traceback.
SystemExit: 1
[ "Cuda is not showing on your notebook because you have not enabled GPU in Colab.\nThe Google Colab comes with both options GPU or without GPU.\nYou can enable or disable GPU in runtime settings\nGo to Menu > Runtime > Change runtime.\n\nChange hardware acceleration to GPU.\n\nTo check if GPU is running or not, run the following command\n!nvidia-smi\n\nIf the output is like the following image it means your GPU and cuda are working. You can see the CUDA version also.\nAfter that to check if PyTorch is capable of using GPU, run the following code.\nimport torch\ntorch.cuda.is_available()\n# Output would be True if Pytorch is using GPU otherwise it would be False.\n\nTo check if TensorFlow is capable of using GPU, run the following code.\nimport tensorflow as tf\ntf.test.gpu_device_name()\n# Standard output is '/device:GPU:0'\n\n", "I pretty much believe that Google Colab has Cuda pre-installed... You can make sure by opening a new notebook and type !nvcc --version which would return the installed Cuda version.\nHere is mine:\n\n", "\nGo here: https://developer.nvidia.com/cuda-downloads\nSelect Linux -> x86_64 -> Ubuntu -> 16.04 -> deb (local)\nCopy link from the download button.\nNow you have to compose the sequence of commands. First one will be the call to wget that will download CUDA installer from the link you saved on step 3\nThere will be installation instruction under \"Base installer\" section. Copy them as well, but remove sudo from all the lines. \nPreface each line with commands with !, insert into a cell and run\nFor me the command sequence was the following:\n!wget https://developer.nvidia.com/compute/cuda/9.2/Prod/local_installers/cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64 -O cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb\n!dpkg -i cuda-repo-ubuntu1604-9-2-local_9.2.88-1_amd64.deb\n!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub\n!apt-get update\n!apt-get install cuda\nNow finally install mxnet. As cuda version I installed above is 9.2 I had to slighly change your command: !pip install mxnet-cu92\nSuccessfully installed graphviz-0.8.3 mxnet-cu92-1.2.0\n\n", "If you switch to using GPU then CUDA will be available on your VM. Basically what you need to do is to match MXNet's version with installed CUDA version.\nHere's what I used to install MXNet on Colab:\nFirst check the CUDA version\n!cat /usr/local/lib/python3.6/dist-packages/external/local_config_cuda/cuda/cuda/cuda_config.h |\\\ngrep TF_CUDA_VERSION\n\nFor me it outputted #define TF_CUDA_VERSION \"8.0\"\n\nThen I installed MXNet with \n!pip install mxnet-cu80\n\n", "I think the easiest way here is to install mxnet-cu80. Just use the following code:\n!pip install mxnet-cu80\nimport mxnet as mx\n\nAnd you could check whether it works by:\na = mx.nd.ones((2, 3), mx.gpu())\nb = a * 2 + 1\nb.asnumpy()\n\nI think colab right now just support cu80 and higher versions won't work.\nFor more information, you could see the following two websites:\nGoogle Colab Free GPU Tutorial\nInstalling mxnet\n", "This solution worked for me in November, 2022. Query the version of Ubuntu that Colab is running on (run in notebook using ! 
or in terminal without):\n!lsb_release -a\n\nNo LSB modules are available.\nDistributor ID: Ubuntu\nDescription: Ubuntu 18.04.6 LTS\nRelease: 18.04\nCodename: bionic\n\nQuery the current cuda version in Colab (only for comparision):\n!nvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2021 NVIDIA Corporation\nBuilt on Sun_Feb_14_21:12:58_PST_2021\nCuda compilation tools, release 11.2, V11.2.152\nBuild cuda_11.2.r11.2/compiler.29618528_0 \n\nNext, got to the cuda toolkit archive or latest builds and configure the desired cuda version and os version. The Distribution is Ubuntu.\n\nCopy the installation instructions:\nwget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\nsudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\nwget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\nsudo cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\nsudo apt-get update\nsudo apt-get -y install cuda\n\nChange the last line to include your cuda-version e.g., apt-get -y install cuda-11-7. Otherwise a more recent version might be installed.\n!wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin\n!mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600\n!wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-!repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!dpkg -i cuda-repo-ubuntu1804-11-7-local_11.7.0-515.43.04-1_amd64.deb\n!cp /var/cuda-repo-ubuntu1804-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/\n!apt-get update\n!apt-get -y install cuda-11-7\n\nYour cuda version will now be updated:\nnvcc --version\n\nnvcc: NVIDIA (R) Cuda compiler driver\nCopyright (c) 2005-2022 NVIDIA Corporation\nBuilt on Wed_Jun__8_16:49:14_PDT_2022\nCuda compilation tools, release 11.7, V11.7.99\nBuild cuda_11.7.r11.7/compiler.31442593_0\n\n" ]
[ 71, 28, 16, 2, 1, 0 ]
[ "To run in Colab, you need CUDA 8 (mxnet 1.1.0 for cuda 9+ is broken). But Google Colab runs now 9.2. There is, however the way to uninstall 9.2, install 8.0 and then install mxnet 1.1.0 cu80. \nThe complete jupyter code is here : Medium\n", "There is a guide which clearly explains that how to enable Cuda in Colab.\n" ]
[ -1, -1 ]
[ "cuda", "google_colaboratory", "machine_learning", "python", "turi_create" ]
stackoverflow_0050560395_cuda_google_colaboratory_machine_learning_python_turi_create.txt
Q: Fastapi - need to use both Body and Depends as default value I have an endpoint in which the main body parameter was defined as follows: @router.post("/myendpoint") async def delete_objectss(param: DeleteItemParams = DeleteItemParamsMetadata, .....) The reason behind this is that I needed: DeleteItemParamsMetadata = Body(None, description="my verbose description " \ " that will appear on swaggerui under the schema of this parameter") Now I need to perform some custom checks when I receive this parameter. In order to have that dependency sorted, I could replace my param definition within /myendpoint as follows: @router.post("/myendpoint") async def delete_objectss(param: DeleteItemParams = Depends(validate_param), .....) of course where I have somewhere a definition def validate_param(param: DeleteItemParams): if bla_bla: # lengthy and complex condition raise HTTPException(status_code=422, detail="Invalid item") param = possibly_alter_param_in_some_way(param) return param The problem I have now is that, as far as I know, Depends does not support a description field, so I have now lost my verbose description that would have appeared on swaggerui. Does anyone know of a way so I can have the best of both worlds and have the dependency pulled AND the description processed? Thank you to anyone who has any input! A: A parameter in a dependency can have a Body reference (or any other type) and it will be resolved correctly. Since you removed that metadata reference in your example it won't show up. You can fix that by adding it back: DeleteItemParamsMetadata = Body(None, description="my verbose description " \ " that will appear on swaggerui under the schema of this parameter") def validate_param(param: DeleteItemParams = DeleteItemParamsMetadata): or directly inline: def validate_param(param: DeleteItemParams = Body(None, description="my verbose description that will appear on swaggerui under the schema of this parameter")):
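A self-contained sketch of the accepted approach: the Body(...) metadata moves onto the dependency's own parameter, so the Swagger UI description survives while the custom check still runs. The model field and the validation rule here are hypothetical stand-ins for the real DeleteItemParams:

from fastapi import Body, Depends, FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class DeleteItemParams(BaseModel):
    item_id: int  # hypothetical field

def validate_param(
    param: DeleteItemParams = Body(
        None, description="my verbose description shown in Swagger UI"
    ),
):
    if param is not None and param.item_id < 0:  # stand-in for the real check
        raise HTTPException(status_code=422, detail="Invalid item")
    return param

@app.post("/myendpoint")
async def delete_objects(param: DeleteItemParams = Depends(validate_param)):
    return {"validated": param is not None}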
Fastapi - need to use both Body and Depends as default value
I have an endpoint in which the main body parameter was defined as follows: @router.post("/myendpoint") async def delete_objectss(param: DeleteItemParams = DeleteItemParamsMetadata, .....) The reason behind this is that I needed: DeleteItemParamsMetadata = Body(None, description="my verbose description " \ " that will appear on swaggerui under the schema of this parameter") Now I need to perform some custom checks when I receive this parameter. In order to have that dependency sorted, I could replace my param definition within /myendpoint as follows: @router.post("/myendpoint") async def delete_objectss(param: DeleteItemParams = Depends(validate_param), .....) of course where I have somewhere a definition def validate_param(param: DeleteItemParams): if bla_bla: # lengthy and complex condition raise HTTPException(status_code=422, detail="Invalid item") param = possibly_alter_param_in_some_way(param) return param The problem I have now is that, as far as I know, Depends does not support a description field, so I have now lost my verbose description that would have appeared on swaggerui. Does anyone know of a way so I can have the best of both worlds and have the dependency pulled AND the description processed? Thank you to anyone who has any input!
[ "A parameter in a dependency can have a Body reference (or any other type) and it will be resolved correctly. Since you removed that metadata reference in your example it won't show up. You can fix that by adding it back:\nDeleteItemParamsMetadata = Body(None, description=\"my verbose description \" \\\n\" that will appear on swaggerui under the schema of this parameter\")\n\ndef validate_param(param: DeleteItemParams = DeleteItemParamsMetadata):\n\nor directly inline:\ndef validate_param(param: DeleteItemParams = Body(None, description=\"my verbose description that will appear on swaggerui under the schema of this parameter\")):\n\n" ]
[ 2 ]
[]
[]
[ "depends", "fastapi", "python", "swagger_ui" ]
stackoverflow_0074560552_depends_fastapi_python_swagger_ui.txt
Q: How to get the highest element in absolute value in a numpy matrix? Here is what I am currently doing, it works but it's a little cumbersome: x = np.matrix([[1, 1], [2, -3]]) xmax = x.flat[abs(x).argmax()] A: The value you're looking for has to be either x.max() or x.min() so you could do max(x.min(), x.max(), key=abs) which is similar to aestrivex's solution but perhaps more readable? Note this will return the minimum in the case where x.min() and x.max() have the same absolute value e.g. -5 and 5. If you have a preference just order the inputs to max accordingly. A: This one computes the absolute max'es fast - respecting an arbitrary axis argument in the same way as np.max and np.argmax themselves do. def absmaxND(a, axis=None): amax = a.max(axis) amin = a.min(axis) return np.where(-amin > amax, amin, amax) For long arrays its about 2.5x faster than a.flat[abs(a).argmax()] even for the simple case axis=None - because it doesn't render abs() of the original big array. A: The most compact way would probably be: x_max = x.flat[np.abs(x).argmax()] By default, the .argmax() method operates directly on the flattened array (taken from the NumPy documentation). So the operation looks for the maximum absolute value of the n-dimensional array np.abs(x). A: I was looking for a way to get the signed values of the maximum absolute values of an N-dimensional array along a specified axis, which none of these answers handle. So, I put together a function to do it. No promises, but it works as far as I've tested it: def maxabs(a, axis=None): """Return slice of a, keeping only those values that are furthest away from 0 along axis""" maxa = a.max(axis=axis) mina = a.min(axis=axis) p = abs(maxa) > abs(mina) # bool, or indices where +ve values win n = abs(mina) > abs(maxa) # bool, or indices where -ve values win if axis == None: if p: return maxa else: return mina shape = list(a.shape) shape.pop(axis) out = np.zeros(shape, dtype=a.dtype) out[p] = maxa[p] out[n] = mina[n] return out A: EDIT: My answer is off-topic, sorry. As Ophion pointed out this would return the index, not the value - you have to use flat with my "xmax" (which is really "xmaxInd") to get the proper value. Ergo I think your solution is best. After experimenting a bit I realized you can just do this: x = np.matrix([[1,1], [2,-3]]) absX = abs(x) xmax = argmax(absX) It seems that numpy allows you to take the abs as well as the argmax of a matrix. How convenient! timeit checks: def meth1(): x = np.matrix([[1,1],[2,-3]]) xmax = x.flat[abs(x).argmax()] def meth2(): x = np.matrix([[1,1],[2,-3]]) xmax = argmax(abs(x)) t1 = timeit.Timer("meth1()","from __main__ import meth1") t2 = timeit.Timer("meth2()","from __main__ import meth2") mean(t1.repeat(1,100000)) gives Out[99]: 7.854323148727417 mean(t2.repeat(1,100000)) gives Out[98]: 7.7788529396057129 So meth2() is slightly faster. Probably because it doesn't involve calling flat. A: The only thing that I could think of, which looks even worse, is: xmax=x[np.unravel_index(abs(x).argmax(), x.shape)] A: pardon the necro, but i can't seem to comment on JoeCondron's answer. I like: max(x.min(), x.max(), key=abs) but i believe it can be simplified further: max(x, key=abs) seems to work for me (or for non-1D) : max(x.flat, key=abs) A: I use it dt = np.random.rand(50000,500)-0.5 # ur xmax = dt.flat[abs(dt).argmax()] #230 ms # new newdt = np.array([dt.min(),dt.max()]) # 56ms xmax = newdt.flat[abs(newdt).argmax()] # 4ms it is almost 4 times faster (60 ms against 230)!! 
A: If you want to get this value for a 2D numpy array, you can also do the following: x[np.arange(len(x)), np.abs(x).argmax(axis=1)] For example if your x looks like this: x = np.array([[ -1, -10], [ -40, -5], [ -10, 15], [ 11, 35]]) and you want to get the maximum value for each row disregarding the sign, the result would be np.array([-10, -40, 15, 35]).
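A quick cross-check of the main approaches above on the matrix from the question (written with np.array, since np.matrix is deprecated); all agree that -3 is the entry with the largest absolute value:

import numpy as np

x = np.array([[1, 1], [2, -3]])

flat_pick = x.flat[np.abs(x).argmax()]        # index into the flattened array
minmax_pick = max(x.min(), x.max(), key=abs)  # only the min or the max can win
# Per-column signed absolute max, as in the absmaxND answer:
signed = np.where(-x.min(0) > x.max(0), x.min(0), x.max(0))

print(flat_pick, minmax_pick, signed)  # -3 -3 [ 2 -3]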
How to get the highest element in absolute value in a numpy matrix?
Here is what I am currently doing; it works, but it's a little cumbersome: x = np.matrix([[1, 1], [2, -3]]) xmax = x.flat[abs(x).argmax()]
[ "The value you're looking for has to be either x.max() or x.min() so you could do\nmax(x.min(), x.max(), key=abs)\n\nwhich is similar to aestrivex's solution but perhaps more readable? Note this will return the minimum in the case where x.min() and x.max() have the same absolute value e.g. -5 and 5. If you have a preference just order the inputs to max accordingly. \n", "This one computes the absolute max'es fast - respecting an arbitrary axis argument in the same way as np.max and np.argmax themselves do.\ndef absmaxND(a, axis=None):\n amax = a.max(axis)\n amin = a.min(axis)\n return np.where(-amin > amax, amin, amax)\n\nFor long arrays its about 2.5x faster than a.flat[abs(a).argmax()] even for the simple case axis=None - because it doesn't render abs() of the original big array.\n", "The most compact way would probably be:\nx_max = x.flat[np.abs(x).argmax()] \n\nBy default, the .argmax() method operates directly on the flattened array (taken from the NumPy documentation). So the operation looks for the maximum absolute value of the n-dimensional array np.abs(x).\n", "I was looking for a way to get the signed values of the maximum absolute values of an N-dimensional array along a specified axis, which none of these answers handle. So, I put together a function to do it. No promises, but it works as far as I've tested it:\ndef maxabs(a, axis=None):\n \"\"\"Return slice of a, keeping only those values that are furthest away\n from 0 along axis\"\"\"\n maxa = a.max(axis=axis)\n mina = a.min(axis=axis)\n p = abs(maxa) > abs(mina) # bool, or indices where +ve values win\n n = abs(mina) > abs(maxa) # bool, or indices where -ve values win\n if axis == None:\n if p: return maxa\n else: return mina\n shape = list(a.shape)\n shape.pop(axis)\n out = np.zeros(shape, dtype=a.dtype)\n out[p] = maxa[p]\n out[n] = mina[n]\n return out\n\n", "EDIT: My answer is off-topic, sorry. As Ophion pointed out this would return the index, not the value - you have to use flat with my \"xmax\" (which is really \"xmaxInd\") to get the proper value. Ergo I think your solution is best.\n\nAfter experimenting a bit I realized you can just do this:\nx = np.matrix([[1,1], [2,-3]])\nabsX = abs(x)\nxmax = argmax(absX)\n\nIt seems that numpy allows you to take the abs as well as the argmax of a matrix. How convenient!\ntimeit checks:\ndef meth1():\n x = np.matrix([[1,1],[2,-3]])\n xmax = x.flat[abs(x).argmax()]\n\ndef meth2():\n x = np.matrix([[1,1],[2,-3]])\n xmax = argmax(abs(x))\n\nt1 = timeit.Timer(\"meth1()\",\"from __main__ import meth1\")\nt2 = timeit.Timer(\"meth2()\",\"from __main__ import meth2\")\n\nmean(t1.repeat(1,100000)) gives Out[99]: 7.854323148727417\nmean(t2.repeat(1,100000)) gives Out[98]: 7.7788529396057129\nSo meth2() is slightly faster. 
Probably because it doesn't involve calling flat.\n", "The only thing that I could think of, which looks even worse, is:\nxmax=x[np.unravel_index(abs(x).argmax(), x.shape)]\n\n", "pardon the necro, but i can't seem to comment on JoeCondron's answer.\nI like:\nmax(x.min(), x.max(), key=abs) \n\nbut i believe it can be simplified further:\nmax(x, key=abs)\n\nseems to work for me (or for non-1D) :\nmax(x.flat, key=abs)\n\n", "I use it\ndt = np.random.rand(50000,500)-0.5\n# ur\nxmax = dt.flat[abs(dt).argmax()] #230 ms\n\n# new\nnewdt = np.array([dt.min(),dt.max()]) # 56ms\nxmax = newdt.flat[abs(newdt).argmax()] # 4ms\n\nit is almost 4 times faster (60 ms against 230)!!\n", "If you want to get this value for a 2D numpy array, you can also do the following:\nx[np.arange(len(x)), np.abs(x).argmax(axis=1)]\n\nFor example if your x looks like this:\nx = np.array([[ -1, -10],\n [ -40, -5],\n [ -10, 15],\n [ 11, 35]])\n\nand you want to get the maximum value for each row disregarding the sign, the result would be np.array([-10, -40, 15, 35]).\n" ]
[ 42, 8, 8, 6, 3, 1, 1, 0, 0 ]
[ "I think this is a pretty straightforward way, which might be slightly better if code readability is your primary concern. But really, your way is just as elegant.\nnp.min(x) if np.max(abs(x)) == abs(np.min(x)) else np.max(x)\n\n" ]
[ -1 ]
[ "numpy", "python" ]
stackoverflow_0017794266_numpy_python.txt
Q: Open S3 object as a string with Boto3 I'm aware that with Boto 2 it's possible to open an S3 object as a string with: get_contents_as_string() Is there an equivalent function in boto3 ? A: read will return bytes. At least for Python 3, if you want to return a string, you have to decode using the right encoding: import boto3 s3 = boto3.resource('s3') obj = s3.Object(bucket, key) obj.get()['Body'].read().decode('utf-8') A: I had a problem to read/parse the object from S3 because of .get() using Python 2.7 inside an AWS Lambda. I added json to the example to show it became parsable :) import boto3 import json s3 = boto3.client('s3') obj = s3.get_object(Bucket=bucket, Key=key) j = json.loads(obj['Body'].read()) NOTE (for python 2.7): My object is all ascii, so I don't need .decode('utf-8') NOTE (for python 3): We moved to python 3 and discovered that read() now returns bytes so if you want to get a string out of it, you must use: j = json.loads(obj['Body'].read().decode('utf-8')) A: This isn't in the boto3 documentation. This worked for me: object.get()["Body"].read() object being an s3 object: http://boto3.readthedocs.org/en/latest/reference/services/s3.html#object A: Python3 + Using boto3 API approach. By using S3.Client.download_fileobj API and Python file-like object, S3 Object content can be retrieved to memory. Since the retrieved content is bytes, in order to convert to str, it need to be decoded. import io import boto3 client = boto3.client('s3') bytes_buffer = io.BytesIO() client.download_fileobj(Bucket=bucket_name, Key=object_key, Fileobj=bytes_buffer) byte_value = bytes_buffer.getvalue() str_value = byte_value.decode() #python3, default decoding is utf-8 A: Fastest approach As stated in the documentation here, download_fileobj uses parallelisation: This is a managed transfer which will perform a multipart download in multiple threads if necessary. Quoting aws documentation: You can retrieve a part of an object from S3 by specifying the part number in GetObjectRequest. TransferManager uses this logic to download all parts of an object asynchronously and writes them to individual, temporary files. The temporary files are then merged into the destination file provided by the user. This can be exploited keeping the data in memory instead of writing it into a file. The approach that @Gatsby Lee has shown does it and that's the reason why it is the fastest among those that are listed. Anyway, it can be improved even more using the Config parameter: import io import boto3 client = boto3.client('s3') buffer = io.BytesIO() # This is just an example, parameters should be fine tuned according to: # 1. The size of the object that is being read (bigger the file, bigger the chunks) # 2. The number of threads available on the machine that runs this code config = TransferConfig( multipart_threshold=1024 * 25, # Concurrent read only if object size > 25MB max_concurrency=10, # Up to 10 concurrent readers multipart_chunksize=1024 * 25, # 25MB chunks per reader use_threads=True # Must be True to enable multiple readers ) # This method writes the data into the buffer client.download_fileobj( Bucket=bucket_name, Key=object_key, Fileobj=buffer, Config=config ) str_value = buffer.getvalue().decode() For objects bigger than 1GB, it is already worth it in terms of speed. 
A: Decoding the whole object body to one string: obj = s3.Object(bucket, key).get() big_str = obj['Body'].read().decode() Decoding the object body to strings line-by-line: obj = s3.Object(bucket, key).get() reader = csv.reader(line.decode() for line in obj['Body'].iter_lines()) The default encoding in bytes' decode() is already 'utf-8' since Python 3. When decoding as JSON, no need to convert to string, as json.loads accepts bytes too, since Python 3.6: obj = s3.Object(bucket, key).get() json.loads(obj['Body'].read())
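The read-and-decode pattern from the answers above, wrapped into a small helper for reuse; the bucket and key passed at the bottom are placeholders:

import boto3

def s3_object_as_string(bucket: str, key: str, encoding: str = "utf-8") -> str:
    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read().decode(encoding)  # read() yields bytes

if __name__ == "__main__":
    print(s3_object_as_string("my-bucket", "path/to/object.txt")[:100])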
Open S3 object as a string with Boto3
I'm aware that with Boto 2 it's possible to open an S3 object as a string with: get_contents_as_string() Is there an equivalent function in boto3 ?
[ "read will return bytes. At least for Python 3, if you want to return a string, you have to decode using the right encoding:\nimport boto3\n\ns3 = boto3.resource('s3')\n\nobj = s3.Object(bucket, key)\nobj.get()['Body'].read().decode('utf-8') \n\n", "I had a problem to read/parse the object from S3 because of .get() using Python 2.7 inside an AWS Lambda.\nI added json to the example to show it became parsable :)\nimport boto3\nimport json\n\ns3 = boto3.client('s3')\n\nobj = s3.get_object(Bucket=bucket, Key=key)\nj = json.loads(obj['Body'].read())\n\nNOTE (for python 2.7): My object is all ascii, so I don't need .decode('utf-8')\nNOTE (for python 3): We moved to python 3 and discovered that read() now returns bytes so if you want to get a string out of it, you must use:\nj = json.loads(obj['Body'].read().decode('utf-8'))\n", "This isn't in the boto3 documentation. This worked for me:\nobject.get()[\"Body\"].read()\n\nobject being an s3 object: http://boto3.readthedocs.org/en/latest/reference/services/s3.html#object\n", "Python3 + Using boto3 API approach.\nBy using S3.Client.download_fileobj API and Python file-like object, S3 Object content can be retrieved to memory.\nSince the retrieved content is bytes, in order to convert to str, it need to be decoded.\nimport io\nimport boto3\n\nclient = boto3.client('s3')\nbytes_buffer = io.BytesIO()\nclient.download_fileobj(Bucket=bucket_name, Key=object_key, Fileobj=bytes_buffer)\nbyte_value = bytes_buffer.getvalue()\nstr_value = byte_value.decode() #python3, default decoding is utf-8\n\n", "Fastest approach\nAs stated in the documentation here, download_fileobj uses parallelisation:\n\nThis is a managed transfer which will perform a multipart download in multiple threads if necessary.\n\nQuoting aws documentation:\n\nYou can retrieve a part of an object from S3 by specifying the part number in GetObjectRequest. TransferManager uses this logic to download all parts of an object asynchronously and writes them to individual, temporary files. The temporary files are then merged into the destination file provided by the user.\n\n\nThis can be exploited keeping the data in memory instead of writing it into a file.\nThe approach that @Gatsby Lee has shown does it and that's the reason why it is the fastest among those that are listed.\nAnyway, it can be improved even more using the Config parameter:\nimport io\nimport boto3\n\nclient = boto3.client('s3')\nbuffer = io.BytesIO()\n\n# This is just an example, parameters should be fine tuned according to:\n# 1. The size of the object that is being read (bigger the file, bigger the chunks)\n# 2. 
The number of threads available on the machine that runs this code\n\nconfig = TransferConfig(\n multipart_threshold=1024 * 25, # Concurrent read only if object size > 25MB\n max_concurrency=10, # Up to 10 concurrent readers\n multipart_chunksize=1024 * 25, # 25MB chunks per reader\n use_threads=True # Must be True to enable multiple readers\n)\n\n# This method writes the data into the buffer\nclient.download_fileobj( \n Bucket=bucket_name, \n Key=object_key, \n Fileobj=buffer,\n Config=config\n)\n\nstr_value = buffer.getvalue().decode()\n\n\nFor objects bigger than 1GB, it is already worth it in terms of speed.\n", "Decoding the whole object body to one string:\nobj = s3.Object(bucket, key).get()\nbig_str = obj['Body'].read().decode()\n\nDecoding the object body to strings line-by-line:\nobj = s3.Object(bucket, key).get()\nreader = csv.reader(line.decode() for line in obj['Body'].iter_lines())\n\nThe default encoding in bytes' decode() is already 'utf-8' since Python 3.\nWhen decoding as JSON, no need to convert to string, as json.loads accepts bytes too, since Python 3.6:\nobj = s3.Object(bucket, key).get()\njson.loads(obj['Body'].read())\n\n" ]
[ 342, 162, 86, 48, 2, 1 ]
[ "If body contains a io.StringIO, you have to do like below:\nobject.get()['Body'].getvalue()\n\n" ]
[ -7 ]
[ "amazon_s3", "amazon_web_services", "boto", "boto3", "python" ]
stackoverflow_0031976273_amazon_s3_amazon_web_services_boto_boto3_python.txt
Q: 'KMeans' object has no attribute '_n_threads' Keep getting this error and I suspect it is related to the version difference between the sklearn, but I am not sure. Also I have tried to update the sklearn version, but I cannot install past 0.22 version in my Jupiter notebook Pickle and fit with sklearn version 0.22 on a Jupyter notebook Running on AWS Sagemaker model = KMeans(n_clusters=5) model.fit(df[:train]) centroids = model.cluster_centers_ centroids_label = model.labels_ #Save model model_file_name = 'model-name-v1.pkl' model_pkl = open(model_file_name, 'wb') pickle.dump(model, model_pkl) model_pkl.close() saved_model_pkl = open(model_file_name, 'rb') object = s3.Object(bucket_name, 'models/{}'.format(model_file_name)) object.put(Body=saved_model_pkl) Unpickle and predict with sklearn version 0.23 This is the code that unpickle the model from an S3 bucket, this is running on AWS lambda: import json import os import pickle def lambda_handler(event, context): parameter_for_evaluation = [ # features ] response = s3.get_object(Bucket=bucket_name, Key='models/{}'.format(model_file_name)) body = response['Body'].read() model = pickle.loads(body) result = model.predict([parameter_for_evaluation]).tolist()[0] print("model result: ", result) And this is the error: in my AWS lambda when try to predict 'KMeans' object has no attribute '_n_threads': AttributeError Traceback (most recent call last): File "/var/task/app.py", line 58, in lambda_handler result = model.predict([parameter_for_evaluation]).tolist()[0] File "/var/task/sklearn/cluster/_kmeans.py", line 1188, in predict self.cluster_centers_, self._n_threads)[0] AttributeError: 'KMeans' object has no attribute '_n_threads' There are other warnings in my Cloudwatch logs that might be relevant, however they appeared also before when the error was not happening OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k /var/task/joblib/_multiprocessing_helpers.py:45: UserWarning: [Errno 38] Function not implemented. joblib will operate in serial mode warnings.warn('%s. joblib will operate in serial mode' % (e,)) /var/task/sklearn/base.py:334: UserWarning: Trying to unpickle estimator KMeans from version 0.22.1 when using version 0.23.1. This might lead to breaking code or invalid results. Use at your own risk. This error is not happening if I do the same operation in the same Jupiter notebook This is what I have tried to install sklearn 0.23.1 # Nothing changed !conda update sklearn # Error cannot find pip3 !python3 -m pip3 install --upgrade sklearn # From the logs is installing 0.22 !pip install sklearn --upgrade !conda install scikit-learn -y # Stuck forever at: Solving environment !conda config --append channels conda-forge !conda install scikit-learn=0.23.1 A: Some of my students ran into the same error when accessing the internals of an KMeans object: kmeans2 = KMeans(n_clusters=n_clusters) kmeans2.cluster_centers_ = clusters In this scenario the problem could be worked around by running KMeans with a small subset of the original data. kmeans2 = KMeans(n_clusters=n_clusters).fit(df[:smallnumber]) kmeans2.cluster_centers_ = clusters Where clusters is a reordered set of clusters from another KMeans instance. A: The problem is due to versioning of scikit-learn, the models were trained with an older version. If you can't downgrade the version of scikit-learn you can use the following code to allow predictions with your KMeans model. 
import pickle from sklearn.utils._openmp_helpers import _openmp_effective_n_threads # Load the model model = pickle.loads(body) model._n_threads = _openmp_effective_n_threads() result = model.predict([parameter_for_evaluation]).tolist()[0] the key is in this import: from sklearn.utils._openmp_helpers import _openmp_effective_n_threads
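A defensive variant of that fix, sketched as a loader: it backfills the attribute only when the unpickled estimator lacks it, and uses 1 (single-threaded prediction) as a safe default instead of importing a private scikit-learn helper:

import pickle

def load_kmeans(pickled_bytes):
    model = pickle.loads(pickled_bytes)
    if not hasattr(model, "_n_threads"):
        # Attribute expected by newer scikit-learn's predict(); 1 means serial.
        model._n_threads = 1
    return model

Retraining and re-pickling under the runtime's scikit-learn version is still the robust fix; this only papers over the version mismatch.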
'KMeans' object has no attribute '_n_threads'
Keep getting this error and I suspect it is related to the version difference between the sklearn, but I am not sure. Also I have tried to update the sklearn version, but I cannot install past 0.22 version in my Jupiter notebook Pickle and fit with sklearn version 0.22 on a Jupyter notebook Running on AWS Sagemaker model = KMeans(n_clusters=5) model.fit(df[:train]) centroids = model.cluster_centers_ centroids_label = model.labels_ #Save model model_file_name = 'model-name-v1.pkl' model_pkl = open(model_file_name, 'wb') pickle.dump(model, model_pkl) model_pkl.close() saved_model_pkl = open(model_file_name, 'rb') object = s3.Object(bucket_name, 'models/{}'.format(model_file_name)) object.put(Body=saved_model_pkl) Unpickle and predict with sklearn version 0.23 This is the code that unpickle the model from an S3 bucket, this is running on AWS lambda: import json import os import pickle def lambda_handler(event, context): parameter_for_evaluation = [ # features ] response = s3.get_object(Bucket=bucket_name, Key='models/{}'.format(model_file_name)) body = response['Body'].read() model = pickle.loads(body) result = model.predict([parameter_for_evaluation]).tolist()[0] print("model result: ", result) And this is the error: in my AWS lambda when try to predict 'KMeans' object has no attribute '_n_threads': AttributeError Traceback (most recent call last): File "/var/task/app.py", line 58, in lambda_handler result = model.predict([parameter_for_evaluation]).tolist()[0] File "/var/task/sklearn/cluster/_kmeans.py", line 1188, in predict self.cluster_centers_, self._n_threads)[0] AttributeError: 'KMeans' object has no attribute '_n_threads' There are other warnings in my Cloudwatch logs that might be relevant, however they appeared also before when the error was not happening OpenBLAS WARNING - could not determine the L2 cache size on this system, assuming 256k /var/task/joblib/_multiprocessing_helpers.py:45: UserWarning: [Errno 38] Function not implemented. joblib will operate in serial mode warnings.warn('%s. joblib will operate in serial mode' % (e,)) /var/task/sklearn/base.py:334: UserWarning: Trying to unpickle estimator KMeans from version 0.22.1 when using version 0.23.1. This might lead to breaking code or invalid results. Use at your own risk. This error is not happening if I do the same operation in the same Jupiter notebook This is what I have tried to install sklearn 0.23.1 # Nothing changed !conda update sklearn # Error cannot find pip3 !python3 -m pip3 install --upgrade sklearn # From the logs is installing 0.22 !pip install sklearn --upgrade !conda install scikit-learn -y # Stuck forever at: Solving environment !conda config --append channels conda-forge !conda install scikit-learn=0.23.1
[ "Some of my students ran into the same error when accessing the internals of an KMeans object:\nkmeans2 = KMeans(n_clusters=n_clusters) \nkmeans2.cluster_centers_ = clusters\n\nIn this scenario the problem could be worked around by running KMeans with a small subset of the original data.\nkmeans2 = KMeans(n_clusters=n_clusters).fit(df[:smallnumber])\nkmeans2.cluster_centers_ = clusters\n\nWhere clusters is a reordered set of clusters from another KMeans instance.\n", "The problem is due to versioning of scikit-learn, the models were trained with an older version. If you can't downgrade the version of scikit-learn you can use the following code to allow predictions with your KMeans model.\nimport pickle\nfrom sklearn.utils._openmp_helpers import _openmp_effective_n_threads\n\n# Load the model\nmodel = pickle.loads(body)\nmodel._n_threads = _openmp_effective_n_threads()\n\nresult = model.predict([parameter_for_evaluation]).tolist()[0]\n\nthe key is in this import: from sklearn.utils._openmp_helpers import _openmp_effective_n_threads\n" ]
[ 1, 0 ]
[ "Retraining the model with latest package versions should solve the problem.\n" ]
[ -1 ]
[ "python", "scikit_learn" ]
stackoverflow_0062186418_python_scikit_learn.txt
Q: Filter an evaluated QuerySet in Django The requirement is for me to be able to access members of an evaluated QuerySet by a string attribute, in this case name. I don't like the idea of looping over a QuerySet as it seems like there is a more efficient way. After I've called something like: my_objects = MyObject.objects.all() And I evaluate it with something like: len(my_objects) What is the best way to get a specific result by name from an evaluated QuerySet, in this case my_objects? Ideally I'd like to see something like my_objects.filter(foo='bar'). Note I'll need all of the results in the evaluated QuerySet during the course of the process, so that's why I have it in one initial query. A: There is no direct way of doing this to get a object based on field value from queryset. But you can do one thing is to create a dictionary from queryset and set name as key (must be unique): my_objects = MyObject.objects.all() obj_dict = {obj.name: obj for obj in my_objects} print obj_dict['any_name'] FYI: If you want to just retrieve a single object from table then you can use .get method: obj = MyObject.objects.get(pk=id) or obj = MyObject.objects.get(name='unique_name') A: Evaluated queryset is a list. There is no indexation of list elements, so looping is required anyway. But evaluated queryset is assumed to be not so huge, max few hundreds of entries so looping is ok. Do not evaluate large querysets. By the way, similar lists of objects are created by prefetch_related(). There is an implementation of ListQuerySet which supports many filters, ordering and distinct, making possible to run many (not all though) queries for such lists of objects: https://github.com/Dmitri-Sintsov/django-jinja-knockout/blob/master/django_jinja_knockout/query.py A: @aamir-rind's solution is great, but it only works if name is unique. If name is not unique, you could, for example, collect the objects in a list: # map name to list of objects my_objects_dict = dict() for obj in my_objects: key = obj.name if key not in my_objects_dict: my_objects_dict[key] = [] my_objects_dict[key].append(obj) If my_objects has not been evaluated yet, the loop will evaluate it. If my_objects has already been evaluated, e.g. as in my_objects = list(MyModel.objects.all()), it will not be evaluated again.
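A compact variant of the lookup-table idea from the first and third answers, using defaultdict so non-unique names need no membership check; my_objects is assumed to be the already-evaluated queryset from the question:

from collections import defaultdict

# e.g. my_objects = list(MyObject.objects.all())
by_name = defaultdict(list)
for obj in my_objects:
    by_name[obj.name].append(obj)

matching = by_name.get("bar", [])  # every object whose name == "bar", no extra query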
Filter an evaluated QuerySet in Django
The requirement is for me to be able to access members of an evaluated QuerySet by a string attribute, in this case name. I don't like the idea of looping over a QuerySet as it seems like there is a more efficient way. After I've called something like: my_objects = MyObject.objects.all() And I evaluate it with something like: len(my_objects) What is the best way to get a specific result by name from an evaluated QuerySet, in this case my_objects? Ideally I'd like to see something like my_objects.filter(foo='bar'). Note I'll need all of the results in the evaluated QuerySet during the course of the process, so that's why I have it in one initial query.
[ "There is no direct way of doing this to get a object based on field value from queryset. But you can do one thing is to create a dictionary from queryset and set name as key (must be unique):\nmy_objects = MyObject.objects.all()\nobj_dict = {obj.name: obj for obj in my_objects}\nprint obj_dict['any_name']\n\nFYI: If you want to just retrieve a single object from table then you can use .get method:\nobj = MyObject.objects.get(pk=id)\nor\nobj = MyObject.objects.get(name='unique_name')\n\n", "Evaluated queryset is a list. There is no indexation of list elements, so looping is required anyway. But evaluated queryset is assumed to be not so huge, max few hundreds of entries so looping is ok. Do not evaluate large querysets.\nBy the way, similar lists of objects are created by prefetch_related(). There is an implementation of ListQuerySet which supports many filters, ordering and distinct, making possible to run many (not all though) queries for such lists of objects:\nhttps://github.com/Dmitri-Sintsov/django-jinja-knockout/blob/master/django_jinja_knockout/query.py\n", "@aamir-rind's solution is great, but it only works if name is unique.\nIf name is not unique, you could, for example, collect the objects in a list:\n# map name to list of objects\nmy_objects_dict = dict()\nfor obj in my_objects:\n key = obj.name\n if key not in my_objects_dict:\n my_objects_dict[key] = []\n my_objects_dict[key].append(obj)\n\nIf my_objects has not been evaluated yet, the loop will evaluate it. If my_objects has already been evaluated, e.g. as in my_objects = list(MyModel.objects.all()), it will not be evaluated again.\n" ]
[ 3, 0, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0014160647_django_python.txt
Q: Validating a BST algorithm I am trying to solve a leetcode problem and am facing an issue with my code. What i want is that prev store the value of the previous node but when i run the recursive code the value of prev always becomes None. # Definition for a binary tree node. # class TreeNode: # def __init__(self, val=0, left=None, right=None): # self.val = val # self.left = left # self.right = right class Solution: def isValidBST(self, root: Optional[TreeNode]) -> bool: if not root: return True prev = None if root: if not self.isValidBST(root.left): return False if prev is not None and prev >= root.val: return False prev = root.val return self.isValidBST(root.right) Can you please explain why this code is failing especially why the value of prev always becomes None in every recursion call A: The problem has two reasons: prev is a local name, and whatever happens to the prev in a recursive call, it doesn't affect the value of prev at the side of the caller, since that is a distinct name. Concretely, the condition if prev is not None will never be true; prev is still None. Even if somehow you would make prev a nonlocal name so that all recursive calls would access the same prev, these calls (except for the base case), all set previs back toNone. But this is undesired: you should maintain the previous value, except for the case where the top-level (first) call is made: only then should prevbe initialised toNone`. You can solve this by defining prev as a global variable, or better, as an attribute of self. This way there will be one prev that the recursive process will work with. Since you need to initialise this prev to None only once, but have recursive calls, you should separate the initial call from the recursive calls, and perform that initialisation only in the initial call. For that purpose you can separate the function into two functions: the main one will perform the initialisation and call the other, recursive one: class Solution: def isValidBSTHelper(self, root: Optional[TreeNode]) -> bool: if not root: return True if not self.isValidBSTHelper(root.left): return False if self.prev is not None and self.prev >= root.val: return False self.prev = root.val return self.isValidBSTHelper(root.right) def isValidBST(self, root: Optional[TreeNode]) -> bool: self.prev = None # Attribute has larger scope return self.isValidBSTHelper(root) You can also make the inorder traversal with a generator, which has the recursive part, and then loop over the values from that iterator and compare them: class Solution: def inorder(self, root: Optional[TreeNode]): if root: yield from self.inorder(root.left) yield root.val yield from self.inorder(root.right) def isValidBST(self, root: Optional[TreeNode]) -> bool: values = self.inorder(root) prev = next(values, None) # get first value and advance for val in values: if prev >= val: return False prev = val return True or you could launch two inorder iterations, with one step difference, and use zip in isValidBST (the inorder function remains the same): def isValidBST(self, root: Optional[TreeNode]) -> bool: previous = self.inorder(root) values = self.inorder(root) next(values, None) # move one iterator one step forward return all(a < b for a, b in zip(previous, values)) # all pairs must be in order A: Can you please explain why this code is failing especially why the value of prev always becomes None in every recursion call The variable prev is local to each frame (recursion/function call). On each recursion pass, you initialize it to None. 
This means that, for example, the condition if prev is not None and prev >= root.val: is never reached in your code, because on each evaluation prev is always None. I think this response might be valuable to you: What is the relation between stack frame and a scope? And this (it is from python1 docs but still valid): https://understanding-recursion.readthedocs.io
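For completeness, the same "previous value during an inorder walk" idea can be written iteratively with an explicit stack, which sidesteps the scoping pitfall entirely. This is an alternative sketch, not taken from the answers above:

from typing import Optional

class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(root: Optional[TreeNode]) -> bool:
    stack, prev, node = [], None, root
    while stack or node:
        while node:                # walk to the leftmost unvisited node
            stack.append(node)
            node = node.left
        node = stack.pop()
        if prev is not None and prev >= node.val:
            return False           # inorder values must be strictly increasing
        prev = node.val
        node = node.right
    return True

assert is_valid_bst(TreeNode(5, TreeNode(7))) is False  # 7 left of 5 is invalid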
Validating a BST algorithm
I am trying to solve a LeetCode problem and am facing an issue with my code. What I want is for prev to store the value of the previous node, but when I run the recursive code the value of prev always becomes None. # Definition for a binary tree node. # class TreeNode: # def __init__(self, val=0, left=None, right=None): # self.val = val # self.left = left # self.right = right class Solution: def isValidBST(self, root: Optional[TreeNode]) -> bool: if not root: return True prev = None if root: if not self.isValidBST(root.left): return False if prev is not None and prev >= root.val: return False prev = root.val return self.isValidBST(root.right) Can you please explain why this code is failing, especially why the value of prev always becomes None in every recursion call?
[ "The problem has two reasons:\n\nprev is a local name, and whatever happens to the prev in a recursive call, it doesn't affect the value of prev at the side of the caller, since that is a distinct name. Concretely, the condition if prev is not None will never be true; prev is still None.\nEven if somehow you would make prev a nonlocal name so that all recursive calls would access the same prev, these calls (except for the base case), all set previs back toNone. But this is undesired: you should maintain the previous value, except for the case where the top-level (first) call is made: only then should prevbe initialised toNone`.\n\nYou can solve this by defining prev as a global variable, or better, as an attribute of self. This way there will be one prev that the recursive process will work with.\nSince you need to initialise this prev to None only once, but have recursive calls, you should separate the initial call from the recursive calls, and perform that initialisation only in the initial call. For that purpose you can separate the function into two functions: the main one will perform the initialisation and call the other, recursive one:\nclass Solution:\n def isValidBSTHelper(self, root: Optional[TreeNode]) -> bool:\n if not root:\n return True\n if not self.isValidBSTHelper(root.left):\n return False\n if self.prev is not None and self.prev >= root.val:\n return False\n self.prev = root.val\n return self.isValidBSTHelper(root.right)\n\n def isValidBST(self, root: Optional[TreeNode]) -> bool:\n self.prev = None # Attribute has larger scope\n return self.isValidBSTHelper(root)\n\nYou can also make the inorder traversal with a generator, which has the recursive part, and then loop over the values from that iterator and compare them:\nclass Solution:\n def inorder(self, root: Optional[TreeNode]):\n if root:\n yield from self.inorder(root.left)\n yield root.val\n yield from self.inorder(root.right)\n \n def isValidBST(self, root: Optional[TreeNode]) -> bool:\n values = self.inorder(root)\n prev = next(values, None) # get first value and advance\n for val in values:\n if prev >= val:\n return False\n prev = val\n return True\n\nor you could launch two inorder iterations, with one step difference, and use zip in isValidBST (the inorder function remains the same):\n def isValidBST(self, root: Optional[TreeNode]) -> bool:\n previous = self.inorder(root)\n values = self.inorder(root)\n next(values, None) # move one iterator one step forward\n return all(a < b for a, b in zip(previous, values)) # all pairs must be in order\n\n", "\nCan you please explain why this code is failing especially why the\nvalue of prev always becomes None in every recursion call\n\nThe variable prev is local to each frame (recursion/function call). On each recursion pass, you initialize it to None. This results that for example the condition if prev is not None and prev >= root.val: is never reached in your code, because on each evaluation prev is always None.\nI think this response might be valuable to you: What is the relation between stack frame and a scope?\nAnd this (it is from python1 docs but still valid): https://understanding-recursion.readthedocs.io\n" ]
[ 1, 0 ]
[]
[]
[ "binary_search_tree", "python", "python_3.x", "tree" ]
stackoverflow_0074559762_binary_search_tree_python_python_3.x_tree.txt
Q: Python Networkx centrality measure range of nodes How do I select a range from given values when drawing a degree_centrality graph? B1: 0.64 E2: 0.61 C3: 0.60 B2: 0.58 M1: 0.50 C1: 0.328 R1: 0.228 def draw(G, pos, measures, measure_name): nodes = nx.draw_networkx_nodes(G, pos, node_size=250, cmap=plt.cm.plasma, node_color=list(measures.values()), nodelist=measures.keys()) nodes.set_norm(mcolors.SymLogNorm(linthresh=0.01, linscale=1, base=10)) # labels = nx.draw_networkx_labels(G, pos) edges = nx.draw_networkx_edges(G, pos) plt.title(measure_name) plt.colorbar(nodes) plt.axis('off') plt.show() pos = nx.spring_layout(G, seed=675) draw(G, pos, nx.degree_centrality(G), 'Degree Centrality') I am trying to visualise a network centrality measure, but I am only interested in visualising nodes within a range of values. I only want to visualise the range 0.64 to 0.60 from the values given above. B1: 0.64 E2: 0.61 C3: 0.60 A: One way is to reduce the dictionary measures: def draw(G, pos, measures, measure_name): # reduce measures min_val, max_val = 0.1, 0.4 measures = {k:v for k, v in measures.items() if v<=max_val and v>=min_val} ... Note that min_val and max_val can be added as arguments of the function; alternatively, the subsetting can be done before passing values to the function (so calculate the centrality measure separately, subset it, and only after that pass it to the function).
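A runnable sketch of the answer's filtering step applied to the centrality values listed in the question, keeping only 0.60 <= v <= 0.64:

measures = {"B1": 0.64, "E2": 0.61, "C3": 0.60, "B2": 0.58,
            "M1": 0.50, "C1": 0.328, "R1": 0.228}

min_val, max_val = 0.60, 0.64
kept = {k: v for k, v in measures.items() if min_val <= v <= max_val}
print(kept)  # {'B1': 0.64, 'E2': 0.61, 'C3': 0.6}

Inside draw(), the node subset would then come from kept, i.e. nodelist=kept.keys() and node_color=list(kept.values()).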
Python Networkx centrality measure range of nodes
How do I select a range from given values when drawing a degree_centrality graph? B1: 0.64 E2: 0.61 C3: 0.60 B2: 0.58 M1: 0.50 C1: 0.328 R1: 0.228 def draw(G, pos, measures, measure_name): nodes = nx.draw_networkx_nodes(G, pos, node_size=250, cmap=plt.cm.plasma, node_color=list(measures.values()), nodelist=measures.keys()) nodes.set_norm(mcolors.SymLogNorm(linthresh=0.01, linscale=1, base=10)) # labels = nx.draw_networkx_labels(G, pos) edges = nx.draw_networkx_edges(G, pos) plt.title(measure_name) plt.colorbar(nodes) plt.axis('off') plt.show() pos = nx.spring_layout(G, seed=675) draw(G, pos, nx.degree_centrality(G), 'Degree Centrality') I am trying to visualise a network centrality measure, but I am only interested in visualising nodes within a range of values. I only want to visualise the range 0.64 to 0.60 from the values given above. B1: 0.64 E2: 0.61 C3: 0.60
[ "One way is to reduce the dictionary measures:\ndef draw(G, pos, measures, measure_name):\n\n\n # reduce measures\n min_val, max_val = 0.1, 0.4\n measures = {k:v for k, v in measures.items() if v<=max_val and v>=min_val}\n\n ...\n\nNote that the min_val, max_val can be added as arguments of the function or alternatively, the subsetting can be done before passing values to the function (so calculate the centrality measure separately, subset it, and only after pass it to the function).\n" ]
[ 0 ]
[]
[]
[ "data_science", "dataframe", "networkx", "python", "python_3.x" ]
stackoverflow_0074561497_data_science_dataframe_networkx_python_python_3.x.txt
Q: RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x720 and 784x10) Any ideas how I can fix this run time error? I would like to create these layers to read in the mnist dataset: A 2d convolutional layer with 10 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. A 2d convolutional layer with 20 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. Fully-connected layer followed by a ReLU activation. class CNN(torch.nn.Module): def __init__(self): super().__init__() self.flatten = torch.nn.Flatten(start_dim=1) self.conv1 = torch.nn.Conv2d(in_channels = 1, out_channels = 10, kernel_size = 5, stride = 1, padding = 1, padding_mode = 'zeros') self.conv2 = torch.nn.Conv2d(in_channels = 10, out_channels = 20, kernel_size = 5, stride = 1, padding = 1, padding_mode = 'zeros') self.fc = torch.nn.Linear(input_size, output_size) self.max_pool2d = torch.nn.MaxPool2d(kernel_size = 2) self.act = torch.nn.ReLU() self.log_softmax = torch.nn.LogSoftmax(dim = 1) # ------------------ def forward(self, x): # ------------------ # Write your implementation here. x = self.conv1(x) x = self.act(x) x = self.max_pool2d(x) x = self.conv2(x) x = self.act(x) x = self.max_pool2d(x) x = self.flatten(x) # x = x.view(x.size(0), -1) x = self.act(self.fc(x)) y_output = self.log_softmax(x) return y_output # ------------------ model = CNN().to(DEVICE) # sanity check print(model) from torchsummary import summary summary(model, (1,32,32)) Running into a wall since I don't know how to fix this error. A: Based upon the network details that you provided: I need to create: A 2d convolutional layer with 10 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. A 2d convolutional layer with 20 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. Fully-connected layer followed by a ReLU activation. The following model should do the job: import torch import torch.nn as nn class ConvNet(nn.Module): def __init__(self): super(ConvNet, self).__init__() # assuming you have 1 input channels, replace 1 here with your choice of input channels self.conv1 = nn.Sequential(nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=0), nn.ReLU(), nn.MaxPool2d(2)) self.conv2 = nn.Sequential(nn.Conv2d(10, 20, kernel_size=5, stride=1, padding=0), nn.ReLU(), nn.MaxPool2d(2), nn.Flatten()) #assuming input is 1 channels with height and width 32, and output of fc is 10 self.linear = nn.Sequential(nn.Linear(500, 10), nn.ReLU()) def forward(self, x): x = self.conv1(x) x = self.conv2(x) x = self.linear(x) return x The above model should work.
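As a side note on where the 720 in the error comes from: with padding=1, a 32x32 input shrinks to 15x15 after the first conv+pool and 6x6 after the second, so the flattened batch has 20*6*6 = 720 features, while the Linear layer expected 784. One way to avoid hand-computing this is to size the Linear layer from a dummy forward pass, sketched below; the input shape (1, 32, 32) follows the summary() call in the question:

import torch
import torch.nn as nn

conv = nn.Sequential(
    nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=0), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(10, 20, kernel_size=5, stride=1, padding=0), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
)

with torch.no_grad():
    n_features = conv(torch.zeros(1, 1, 32, 32)).shape[1]  # dummy batch of one

fc = nn.Linear(n_features, 10)  # 10 output classes, as in the question
print(n_features)               # 500 for a 32x32 input with padding=0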
RuntimeError: mat1 and mat2 shapes cannot be multiplied (2x720 and 784x10)
Any ideas how I can fix this run time error? I would like to create these layers to read in the mnist dataset: A 2d convolutional layer with 10 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. A 2d convolutional layer with 20 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2. Fully-connected layer followed by a ReLU activation. class CNN(torch.nn.Module): def __init__(self): super().__init__() self.flatten = torch.nn.Flatten(start_dim=1) self.conv1 = torch.nn.Conv2d(in_channels = 1, out_channels = 10, kernel_size = 5, stride = 1, padding = 1, padding_mode = 'zeros') self.conv2 = torch.nn.Conv2d(in_channels = 10, out_channels = 20, kernel_size = 5, stride = 1, padding = 1, padding_mode = 'zeros') self.fc = torch.nn.Linear(input_size, output_size) self.max_pool2d = torch.nn.MaxPool2d(kernel_size = 2) self.act = torch.nn.ReLU() self.log_softmax = torch.nn.LogSoftmax(dim = 1) # ------------------ def forward(self, x): # ------------------ # Write your implementation here. x = self.conv1(x) x = self.act(x) x = self.max_pool2d(x) x = self.conv2(x) x = self.act(x) x = self.max_pool2d(x) x = self.flatten(x) # x = x.view(x.size(0), -1) x = self.act(self.fc(x)) y_output = self.log_softmax(x) return y_output # ------------------ model = CNN().to(DEVICE) # sanity check print(model) from torchsummary import summary summary(model, (1,32,32)) Running into a wall since I don't know how to fix this error.
[ "Based upon the network details that you provided:\n\nI need to create:\n\nA 2d convolutional layer with 10 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2.\nA 2d convolutional layer with 20 filters of size 5x5 with stride 1, zero padding, followed by a ReLU activation, then a 2d max pooling operation with size 2x2.\nFully-connected layer followed by a ReLU activation.\n\n\nThe following model should do the job:\nimport torch\nimport torch.nn as nn\n\nclass ConvNet(nn.Module):\n def __init__(self):\n super(ConvNet, self).__init__()\n # assuming you have 1 input channels, replace 1 here with your choice of input channels\n self.conv1 = nn.Sequential(nn.Conv2d(1, 10, kernel_size=5, stride=1, padding=0),\n nn.ReLU(),\n nn.MaxPool2d(2))\n \n self.conv2 = nn.Sequential(nn.Conv2d(10, 20, kernel_size=5, stride=1, padding=0),\n nn.ReLU(),\n nn.MaxPool2d(2),\n nn.Flatten())\n \n #assuming input is 1 channels with height and width 32, and output of fc is 10\n self.linear = nn.Sequential(nn.Linear(500, 10),\n nn.ReLU())\n\n def forward(self, x):\n x = self.conv1(x)\n x = self.conv2(x)\n x = self.linear(x)\n return x\n\nThe above model should work.\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "neural_network", "python", "pytorch" ]
stackoverflow_0074560241_conv_neural_network_neural_network_python_pytorch.txt
Q: How to zfill after a certain value in a list I have a list that looks like this: ls = ['DATA2022_10.csv', 'DATA2022_2.csv', 'DATA2022_3.csv', 'DATA2022_4.csv', 'DATA2022_5.csv', 'DATA2022_6.csv', 'DATA2022_7.csv', 'DATA2022_8.csv', 'DATA2022_9.csv'] I want to zfill the elements in this list in order to sort my data. Expected output: ls = ['DATA2022_02.csv', 'DATA2022_03.csv', 'DATA2022_04.csv', 'DATA2022_05.csv', 'DATA2022_06.csv', 'DATA2022_07.csv', 'DATA2022_08.csv', 'DATA2022_09.csv', 'DATA2022_10.csv'] zfill only allows me to add 0 on the left or on the right. How can I add the 0 only after the 9th character of each element of my list, i.e. after the _? A: You don't even need to add zeros and sort. If your goal is to sort the list, use natsort directly. from natsort import natsorted new = natsorted(ls) print(new) Gives # ['DATA2022_2.csv', 'DATA2022_3.csv', 'DATA2022_4.csv', 'DATA2022_5.csv', 'DATA2022_6.csv', 'DATA2022_7.csv', 'DATA2022_8.csv', 'DATA2022_9.csv', 'DATA2022_10.csv']
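If pulling in natsort is not an option, the renaming the question literally asks for — zero-padding the number that follows the underscore — only needs the standard library; a small sketch:

ls = ['DATA2022_10.csv', 'DATA2022_2.csv', 'DATA2022_3.csv']

def pad_suffix(name: str, width: int = 2) -> str:
    stem, tail = name.rsplit('_', 1)   # 'DATA2022', '2.csv'
    num, ext = tail.split('.', 1)      # '2', 'csv'
    return f"{stem}_{num.zfill(width)}.{ext}"

print(sorted(pad_suffix(n) for n in ls))
# ['DATA2022_02.csv', 'DATA2022_03.csv', 'DATA2022_10.csv']

The same split also works as a sort key (key=lambda n: int(n.rsplit('_', 1)[1].split('.')[0])) if the names themselves should stay unchanged.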
How to zfill after a certain value in a list
I have a list that looks like this: ls = ['DATA2022_10.csv', 'DATA2022_2.csv', 'DATA2022_3.csv', 'DATA2022_4.csv', 'DATA2022_5.csv', 'DATA2022_6.csv', 'DATA2022_7.csv', 'DATA2022_8.csv', 'DATA2022_9.csv'] I want to zfill the elements in this list in order to sort my data. Expected output: ls = ['DATA2022_02.csv', 'DATA2022_03.csv', 'DATA2022_04.csv', 'DATA2022_05.csv', 'DATA2022_06.csv', 'DATA2022_07.csv', 'DATA2022_08.csv', 'DATA2022_09.csv', 'DATA2022_10.csv'] zfill only allows me to add 0 on the left or on the right. How can I add the 0 only after the 9th character of each element of my list, i.e. after the _?
[ "You don't even need to add zero and sort. If your goal is to sort the list use natsort directly.\nfrom natsort import natsorted\n\nnew =natsorted(ls)\nprint(new)\n\nGives #\n['DATA2022_2.csv', 'DATA2022_3.csv', 'DATA2022_4.csv', 'DATA2022_5.csv', 'DATA2022_6.csv', 'DATA2022_7.csv', 'DATA2022_8.csv', 'DATA2022_9.csv', 'DATA2022_10.csv']\n\n" ]
[ 3 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074562066_list_python.txt
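If zero-padding the names themselves is still wanted (e.g. for stable file names on disk), a small sketch assuming every entry follows the PREFIX_NUMBER.csv pattern from the question:

def pad_name(name, width=2):
    stem, _, tail = name.partition('_')   # 'DATA2022', '_', '2.csv'
    num, dot, ext = tail.partition('.')   # '2', '.', 'csv'
    return f"{stem}_{num.zfill(width)}{dot}{ext}"

ls = sorted(pad_name(n) for n in ls)
# ['DATA2022_02.csv', 'DATA2022_03.csv', ..., 'DATA2022_10.csv']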
Q: Code not being performed before function in if-statement I am trying to run an if-statement where once a number 1 to 7 is selected the corresponding financial data is uploader as ticker. However, in my if-statement the code to import the data is not being run and it jumps directly to the function main_2(). Both the function main_2() and the code to import financial data as ticker and then just keep the 'Close' run perfect seperately but when put together in the if-statement main_2() only runs. I am using spyder to run this program. import yahoofinance as yf def main_(): print("Choose dataset") print("1.Amazon \n2.Apple \n3.Cisco \n4.Meta \n5.Microsoft \n6.Qualcomm \n7.Starbucks") choice = input("Please choose option: ") if choice == '1': ticker = yf.Ticker('AMZN') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '2': ticker = yf.Ticker('AAPL') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '3': ticker = yf.Ticker('CSCO') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '4': ticker = yf.Ticker('META') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '5': ticker = yf.Ticker('MSFT') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '6': ticker = yf.Ticker('QCOM') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '7': ticker = yf.Ticker('SBUX') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() def main_2(): typ = input('Choose type of analysis:') if typ == '1': print(ticker_.describe()) main_() elif typ == '2': print('1.Moving Averages \n2.Scatter plot \n3.Trend Lines') main_3() elif typ == '3': predictive() main_() def main_3(): graphical = input('Choose sort of graphical analysis:') if graphical == '1': moving_averages() main_4() elif graphical == '2': scatter_plot() main_4() elif graphical == '3': trend_line() main_4() def main_4(): print('1.Return to Main Menu \n2.Quit') option = input('Choose option:') if option == '1': main_() elif option == '2': quit() main() A: You should have this error: NameError: name 'ticker' is not defined Call main_2 with ticker as a parameter: main_2(ticker) In order to test it you can print ticker in main_2 to see if it works properly. def main_2(ticker): print(ticker)
Code not being performed before function in if-statement
I am trying to run an if-statement where once a number 1 to 7 is selected the corresponding financial data is uploader as ticker. However, in my if-statement the code to import the data is not being run and it jumps directly to the function main_2(). Both the function main_2() and the code to import financial data as ticker and then just keep the 'Close' run perfect seperately but when put together in the if-statement main_2() only runs. I am using spyder to run this program. import yahoofinance as yf def main_(): print("Choose dataset") print("1.Amazon \n2.Apple \n3.Cisco \n4.Meta \n5.Microsoft \n6.Qualcomm \n7.Starbucks") choice = input("Please choose option: ") if choice == '1': ticker = yf.Ticker('AMZN') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '2': ticker = yf.Ticker('AAPL') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '3': ticker = yf.Ticker('CSCO') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '4': ticker = yf.Ticker('META') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '5': ticker = yf.Ticker('MSFT') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '6': ticker = yf.Ticker('QCOM') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() if choice == '7': ticker = yf.Ticker('SBUX') ticker = ticker.history(period="5y") ticker_ = ticker[['Close']] print('1.Descriptive Analytics \n2.Visual Analysis \n3.Predictive Analysis') main_2() def main_2(): typ = input('Choose type of analysis:') if typ == '1': print(ticker_.describe()) main_() elif typ == '2': print('1.Moving Averages \n2.Scatter plot \n3.Trend Lines') main_3() elif typ == '3': predictive() main_() def main_3(): graphical = input('Choose sort of graphical analysis:') if graphical == '1': moving_averages() main_4() elif graphical == '2': scatter_plot() main_4() elif graphical == '3': trend_line() main_4() def main_4(): print('1.Return to Main Menu \n2.Quit') option = input('Choose option:') if option == '1': main_() elif option == '2': quit() main()
[ "You should have this error: NameError: name 'ticker' is not defined\nCall main_2 with ticker as a parameter: main_2(ticker)\nIn order to test it you can print ticker in main_2 to see if it works properly.\ndef main_2(ticker): \n print(ticker)\n\n" ]
[ 1 ]
[]
[]
[ "python", "spyder" ]
stackoverflow_0074562016_python_spyder.txt
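Building on the accepted fix, a sketch (assuming the yf module imported in the question) that collapses the seven near-identical branches into a lookup table and passes the data forward instead of relying on a global:

symbols = {'1': 'AMZN', '2': 'AAPL', '3': 'CSCO', '4': 'META',
           '5': 'MSFT', '6': 'QCOM', '7': 'SBUX'}
choice = input("Please choose option: ")
if choice in symbols:
    history = yf.Ticker(symbols[choice]).history(period="5y")
    ticker_ = history[['Close']]
    main_2(ticker_)  # main_2 now receives the data explicitly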
Q: How to merge two 2D convolutions together “*” means convolution Hello, I am trying to find a way to merge two 2D convolutions together. Assume that I have an image “Img” of dimensions (1x20x20) and two kernels “k1” and “k2” both of dimensions (1x3x3). Normally you would first convolve Img with k1 and then convolve the result with k2: (Img * k1) * k2 My goal is to find a kernel k3 that if applied to Img does the same thing of the expression above. Since convolutions are linear operators this is possible. In order to do that (at least mathematically speaking) we can just first convolve k1 with k2 and then apply the result over Img: k3 = k1 * k2 (Img * k1) * k2 = Img * (k1 * k2) = Img * k3 This formula although works well in the mathematical world, it doesn’t work at all at an implementation level. Take for instance the example above. Both k1 and k2 are of dimensions (1x3x3). If I just blindly apply the formula above and I convolve k1 with k2 then my output will be of dimension (1x1x1). This is clearly not what I want. Therefore, even in this very simple scenario, this formula is “wrong”. What we are supposed to do in this case is to pad k1 with 2 pixels in order to obtain the correct kernel k3 we are looking for. I’ve found a code that does this here. I’ll report the code here for simplicity: import torch def merge_conv_kernels(k1, k2): """ :input k1: A tensor of shape ``(out1, in1, s1, s1)`` :input k1: A tensor of shape ``(out2, in2, s2, s2)`` :returns: A tensor of shape ``(out2, in1, s1+s2-1, s1+s2-1)`` so that convolving with it equals convolving with k1 and then with k2. """ padding = k2.shape[-1] - 1 # Flip because this is actually correlation, and permute to adapt to BHCW k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2), padding=padding).permute(1, 0, 2, 3) return k3 However, this code doesn’t work at all when the two convolutions have different paddings and strides. I was wondering if it is still possible to merge convolutions together when paddings and strides are taken into consideration and if someone could provide a hint on how to do it or a working code for this more complicated scenario (PyTorch). Thank you A: When the first convolution pad enough (up to kernel size - 1) and no stride, you can merge your convolution with any pad/stride for the second convolution with: def merge_conv_kernels(k1, k2, s2, p2): # Assuming p1 = k1.shape[-1] - 1 and s1 = 1 kernel_pad = k2.shape[-1] - 1 k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2), padding=kernel_pad, stride=1).permute(1, 0, 2, 3) p3 = k1.shape[-1] - 1 + p2 s3 = s2 return k3, s3, p3 If you have pad=0 in the first convolution, you can find counter-example. For instance in a 3*3 image and : kernel1 = tensor([[1, 0, -1], [1, 0, -1], [1, 0, -1]]) p1, s1 = 0, 1 kernel2 = ones(3, 3) p2, s2 = 2, 1 Basically the combination of the two convolutions applies the kernel 1 and copy the value in a 3*3 image. You can't get it with only one convolution. First, the kernel should be of size 4 or 5 with padding 1 or 2 to get all the input values every time (or more than 5 but it leads to never used values in the kernel). Now by considering each pixel and any input matrix, we can see that the kernel must contain the kernel 1 in a all its 3*3 sub-matrices. But it is impossible due to the asymmetry of the kernel 1. You can find similar problems with stride > 1 or padding < kernel_size - 1 in the first convolution that decreases the size of the output.
How to merge two 2D convolutions together
“*” means convolution Hello, I am trying to find a way to merge two 2D convolutions together. Assume that I have an image “Img” of dimensions (1x20x20) and two kernels “k1” and “k2” both of dimensions (1x3x3). Normally you would first convolve Img with k1 and then convolve the result with k2: (Img * k1) * k2 My goal is to find a kernel k3 that if applied to Img does the same thing of the expression above. Since convolutions are linear operators this is possible. In order to do that (at least mathematically speaking) we can just first convolve k1 with k2 and then apply the result over Img: k3 = k1 * k2 (Img * k1) * k2 = Img * (k1 * k2) = Img * k3 This formula although works well in the mathematical world, it doesn’t work at all at an implementation level. Take for instance the example above. Both k1 and k2 are of dimensions (1x3x3). If I just blindly apply the formula above and I convolve k1 with k2 then my output will be of dimension (1x1x1). This is clearly not what I want. Therefore, even in this very simple scenario, this formula is “wrong”. What we are supposed to do in this case is to pad k1 with 2 pixels in order to obtain the correct kernel k3 we are looking for. I’ve found a code that does this here. I’ll report the code here for simplicity: import torch def merge_conv_kernels(k1, k2): """ :input k1: A tensor of shape ``(out1, in1, s1, s1)`` :input k1: A tensor of shape ``(out2, in2, s2, s2)`` :returns: A tensor of shape ``(out2, in1, s1+s2-1, s1+s2-1)`` so that convolving with it equals convolving with k1 and then with k2. """ padding = k2.shape[-1] - 1 # Flip because this is actually correlation, and permute to adapt to BHCW k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2), padding=padding).permute(1, 0, 2, 3) return k3 However, this code doesn’t work at all when the two convolutions have different paddings and strides. I was wondering if it is still possible to merge convolutions together when paddings and strides are taken into consideration and if someone could provide a hint on how to do it or a working code for this more complicated scenario (PyTorch). Thank you
[ "When the first convolution pad enough (up to kernel size - 1) and no stride, you can merge your convolution with any pad/stride for the second convolution with:\ndef merge_conv_kernels(k1, k2, s2, p2):\n # Assuming p1 = k1.shape[-1] - 1 and s1 = 1\n kernel_pad = k2.shape[-1] - 1\n k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2),\n padding=kernel_pad,\n stride=1).permute(1, 0, 2, 3)\n p3 = k1.shape[-1] - 1 + p2\n s3 = s2\n return k3, s3, p3\n\nIf you have pad=0 in the first convolution, you can find counter-example. For instance in a 3*3 image and :\nkernel1 = tensor([[1, 0, -1],\n [1, 0, -1],\n [1, 0, -1]])\np1, s1 = 0, 1\nkernel2 = ones(3, 3)\np2, s2 = 2, 1\n\nBasically the combination of the two convolutions applies the kernel 1 and copy the value in a 3*3 image. You can't get it with only one convolution. First, the kernel should be of size 4 or 5 with padding 1 or 2 to get all the input values every time (or more than 5 but it leads to never used values in the kernel). Now by considering each pixel and any input matrix, we can see that the kernel must contain the kernel 1 in a all its 3*3 sub-matrices. But it is impossible due to the asymmetry of the kernel 1.\nYou can find similar problems with stride > 1 or padding < kernel_size - 1 in the first convolution that decreases the size of the output.\n" ]
[ 0 ]
[]
[]
[ "conv_neural_network", "convolution", "linear_algebra", "python", "pytorch" ]
stackoverflow_0074559543_conv_neural_network_convolution_linear_algebra_python_pytorch.txt
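A quick numerical check of the identity for the full-padding, stride-1 case described in the answer (sketch; random kernels, float32 tolerance):

import torch

k1, k2 = torch.randn(1, 1, 3, 3), torch.randn(1, 1, 3, 3)
img = torch.randn(1, 1, 20, 20)
# merged kernel: k3 has size 3 + 3 - 1 = 5
k3 = torch.conv2d(k1.permute(1, 0, 2, 3), k2.flip(-1, -2), padding=2).permute(1, 0, 2, 3)
# sequential convolutions with p1 = p2 = kernel_size - 1
seq = torch.conv2d(torch.conv2d(img, k1, padding=2), k2, padding=2)
# merged convolution with p3 = (k1_size - 1) + p2 = 4
merged = torch.conv2d(img, k3, padding=4)
print(torch.allclose(seq, merged, atol=1e-5))  # expected: True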
Q: how to divide a column element wise in python I want to divide the first column of this table element-wise by 3.6. dict_read = { 'tractionForceTable': [']traction_V(km/h)_Force(N)', 'table']} outputdict = {key: framehandle.value_readin(value) for (key, value) in dict_read.items()} It throws an error like this: outputdict["traction_ForceTable"] = outputdict["tractionForceTable"][:, 0] / 3.6 File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\frame.py", line 3505, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\indexes\base.py", line 3636, in get_loc self._check_indexing_error(key) File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\indexes\base.py", line 5651, in _check_indexing_error raise InvalidIndexError(key) pandas.errors.InvalidIndexError: (slice(None, None, None), 0) Here's what I tried: outputdict["traction_Table"] = outputdict["tractionForceTable"][:, 1] / 3.6 A: There are several ways to do it, here are two. I gather from your error message that your data is in a pd.DataFrame. I used a shortened version of your data. import pandas as pd df = pd.DataFrame({'velocity': [1,2,3,4,5], 'mfbp': [36600000, 1800000, 1200000, 900000, 720000]}) You could use map (or apply) and define a lambda function that is applied to every cell. df['mfbp'].map(lambda x: x/3.6) Or you can use the pandas built-in method pd.Series.divide df['mfbp'].divide(3.6) Output in both cases: 0 1.016667e+07 1 5.000000e+05 2 3.333333e+05 3 2.500000e+05 4 2.000000e+05
how to divide a column element wise in python
I want to divide the first column of this table element-wise by 3.6. dict_read = { 'tractionForceTable': [']traction_V(km/h)_Force(N)', 'table']} outputdict = {key: framehandle.value_readin(value) for (key, value) in dict_read.items()} It throws an error like this: outputdict["traction_ForceTable"] = outputdict["tractionForceTable"][:, 0] / 3.6 File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\frame.py", line 3505, in __getitem__ indexer = self.columns.get_loc(key) File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\indexes\base.py", line 3636, in get_loc self._check_indexing_error(key) File "C:\Users\hppat\Desktop\venv\lib\site-packages\pandas\core\indexes\base.py", line 5651, in _check_indexing_error raise InvalidIndexError(key) pandas.errors.InvalidIndexError: (slice(None, None, None), 0) Here's what I tried: outputdict["traction_Table"] = outputdict["tractionForceTable"][:, 1] / 3.6
[ "There are several ways to do it, here are two. I suggest from your error message that your data is in a pd.DataFrame. I used a shortened version of your data.\nimport pandas as pd \ndf = pd.DataFrame({'velocity': [1,2,3,4,5],\n 'mfbp': [36600000, 1800000, 1200000, 900000, 720000]})\n \n\nYou could use map (or apply) and define a lambda function that is applied to every cell.\ndf['mfbp'].map(lambda x: x/3.6)\n\nOr you use the pandas built-in method pd.Series.divide\ndf['mfbp'].divide(3.6)\n\nOutput in both cases:\n0 1.016667e+07\n1 5.000000e+05\n2 3.333333e+05\n3 2.500000e+05\n4 2.000000e+05\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074559911_dictionary_list_python.txt
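For reference, the InvalidIndexError in the question comes from using NumPy-style [:, 0] indexing on a DataFrame; a positional fix (sketch, reusing the key name from the traceback) is:

col = outputdict["tractionForceTable"].iloc[:, 0]   # first column, by position
outputdict["tractionForceTable"].iloc[:, 0] = col / 3.6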
Q: How to access an item from S3 using boto3 and read() its contents I have a method that fetches a file from a URL and converts it to OpenCV image def my_method(self, imgurl): req = urllib.urlopen(imgurl) r = req.read() arr = np.asarray(bytearray(r), dtype=np.uint8) image = cv2.imdecode(arr,-1) # 'load it as it is' return image I would like to use boto3 to access an object from s3 bucket and convert it to an image just like above method does. However, I'm not sure how to access an item from a bucket using boto3 and then further how to read() contents of that item. Below is what I've tried >>> import botocore >>> import boto3 >>> client = boto3.client('s3',aws_access_key_id="myaccsskey",aws_secret_access_key="secretkey") >>> bucketname = "mybucket" >>> itemname = "demo.png" Questions How can I access a particular item from a bucket using boto3? Is there a way to read the contents of the accessed item similar to what I'm doing in my_method using req.read()? A: I would do 1 this way: import boto3 s3 = boto3.resource('s3', use_ssl=False, endpoint_url="http://localhost:4567", aws_access_key_id="", aws_secret_access_key="", ) obj = s3.Object(bucketname, itemname) For 2, I have never tried by this SO answer suggest: body = obj.get()['Body'].read() using the high-level ressource proposed by boto3. A: As I explained here, the following is the fastest approach to read from an S3 file: import io import boto3 client = boto3.client('s3') buffer = io.BytesIO() # This is just an example, parameters should be fine tuned according to: # 1. The size of the object that is being read (bigger the file, bigger the chunks) # 2. The number of threads available on the machine that runs this code config = TransferConfig( multipart_threshold=1024 * 25, # Concurrent read only if object size > 25MB max_concurrency=10, # Up to 10 concurrent readers multipart_chunksize=1024 * 25, # 25MB chunks per reader use_threads=True # Must be True to enable multiple readers ) client.download_fileobj( Bucket=bucket_name, Key=object_key, Fileobj=buffer, Config=config ) body = buffer.getvalue().decode()
How to access an item from S3 using boto3 and read() its contents
I have a method that fetches a file from a URL and converts it to OpenCV image def my_method(self, imgurl): req = urllib.urlopen(imgurl) r = req.read() arr = np.asarray(bytearray(r), dtype=np.uint8) image = cv2.imdecode(arr,-1) # 'load it as it is' return image I would like to use boto3 to access an object from s3 bucket and convert it to an image just like above method does. However, I'm not sure how to access an item from a bucket using boto3 and then further how to read() contents of that item. Below is what I've tried >>> import botocore >>> import boto3 >>> client = boto3.client('s3',aws_access_key_id="myaccsskey",aws_secret_access_key="secretkey") >>> bucketname = "mybucket" >>> itemname = "demo.png" Questions How can I access a particular item from a bucket using boto3? Is there a way to read the contents of the accessed item similar to what I'm doing in my_method using req.read()?
[ "I would do 1 this way:\nimport boto3\ns3 = boto3.resource('s3',\n use_ssl=False,\n endpoint_url=\"http://localhost:4567\",\n aws_access_key_id=\"\",\n aws_secret_access_key=\"\",\n)\nobj = s3.Object(bucketname, itemname)\n\nFor 2, I have never tried by this SO answer suggest:\nbody = obj.get()['Body'].read()\n\nusing the high-level ressource proposed by boto3.\n", "As I explained here, the following is the fastest approach to read from an S3 file:\nimport io\nimport boto3\n\nclient = boto3.client('s3')\nbuffer = io.BytesIO()\n\n# This is just an example, parameters should be fine tuned according to:\n# 1. The size of the object that is being read (bigger the file, bigger the chunks)\n# 2. The number of threads available on the machine that runs this code\n\nconfig = TransferConfig(\n multipart_threshold=1024 * 25, # Concurrent read only if object size > 25MB\n max_concurrency=10, # Up to 10 concurrent readers\n multipart_chunksize=1024 * 25, # 25MB chunks per reader\n use_threads=True # Must be True to enable multiple readers\n)\n\nclient.download_fileobj(\n Bucket=bucket_name, \n Key=object_key, \n Fileobj=buffer,\n Config=config\n)\n\nbody = buffer.getvalue().decode()\n\n" ]
[ 4, 0 ]
[]
[]
[ "amazon_s3", "boto3", "numpy", "python" ]
stackoverflow_0040239328_amazon_s3_boto3_numpy_python.txt
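Tying the two answers back to the original my_method, a hedged sketch (assumes cv2 and numpy are installed and the object holds an encoded image) that decodes an S3 object the same way the urllib version does:

import boto3
import cv2
import numpy as np

def s3_image(bucket, key):
    body = boto3.client('s3').get_object(Bucket=bucket, Key=key)['Body'].read()
    arr = np.frombuffer(body, dtype=np.uint8)
    return cv2.imdecode(arr, -1)  # 'load it as it is', like the URL-based method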
Q: How to create recalculating variables in Python Suppose I have the code: a = 2 b = a + 2 a = 3 The question is: how to keep b updated on each change in a? E.g., after the above code I would like to get: print(b) to be 5, not 4. Of course, b can be a function of a via def, but, say, in IPython it's more comfortable to have simple variables. Are there way to do so? Maybe via SymPy or other libraries? A: You can do a lambda, which is basically a function... The only malus is that you have to do b() to get the value instead of just b >>> a = 2 >>> b = lambda: a + 2 >>> b() 4 >>> a = 3 >>> b() 5 A: Fair warning: this is a hack only suitable for experimentation and play in a Python interpreter environment. Do not feed untrusted input into this code. You can define b as an instance of the following class: class Expression(object): def __init__(self, expression): self.expression = expression def __repr__(self): return repr(eval(self.expression)) def __str__(self): return str(eval(self.expression)) Instances of this object will evaluate the expression automatically when printed or echoed in a Python interpreter. Expressions only support references to global names. Demo: >>> a = 5 >>> b = Expression('a + 5') >>> b 10 >>> a = 20 >>> b 25 A: If you want a sympy solution you can do: >>> from sympy import Symbol >>> a = Symbol('a') >>> b = Symbol('b') >>> c = (a + 3) * b >>> c b*(a + 3) >>> c.subs({'a': 3, 'b': 4}) 24 Or you can even create your own evaluate function: >>> def e(exp): vars = globals() data = {} for atom in exp.atoms(): if atom.is_Symbol: if atom.name in vars: data[atom.name] = vars[atom.name] return exp.subs(data) >>> a = 44 >>> b = 33 >>> e(c) >>> 1551 >>> a = 1 >>> e(c) >>> 132 A: You can't do it on the same way like Java or C. However, you can wrap the variable with your own custom class in order to achieve your goal. Look at the following example: class ReferenceVariable: def __init__(self, value): self.value = value def get(self): return self.value def set(self, val): self.value = val a = ReferenceVariable(3) b = a a.set(a.get()+4) print(b.get()) // Prints 7 A: It looks like you want a property. class MyClazz(object): def __init__(self, a): self.a = a @property def b(self): return self.a + 2 if __name__ == '__main__': c = MyClazz(2) print(c.a) # prints 2 print(c.b) # prints 4 c.a = 10 print(c.b) # prints 12 Take a look at the documentation for using property as a decorator, for details on how to add setters and so on if you want that. Since you didn't specify how b is being set, I just hard-coded it, but it would be trivial to make the b-specific part customizable too; something like: class MyClazz(object): def __init__(self, a, b_part=2): self.a = a self._b = b_part @property def b(self): return self.a + self._b A: Part of what you say you want sounds a lot like how cells in most spreadsheet programs work, so I suggest you use something like what is in this highly rated ActiveState recipe. 
It's so short and simple, I'll reproduce it here and apply it to your trivial example: class SpreadSheet(object): _cells = {} tools = {} def __setitem__(self, key, formula): self._cells[key] = formula def getformula(self, key): return self._cells[key] def __getitem__(self, key ): return eval(self._cells[key], SpreadSheet.tools, self) from math import sin, cos, pi, sqrt SpreadSheet.tools.update(sin=sin, cos=cos, pi=pi, sqrt=sqrt, len=len) ss = SpreadSheet() ss['a'] = '2' ss['b'] = 'a + 2' print ss['b'] # 4 ss['a'] = '3' print ss['b'] # 5 Many of the recipe's comments describe improvements, some significant, so I'd also suggest reading them. A: I got this idea, after I worked with Qt. It use object property, rather then variable: from types import FunctionType class MyObject: def __init__(self,name): self.name= name self._a_f=None self._a=None @property def a(self): if self._a_f is not None: self._a= self._a_f() return self._a @a.setter def a(self,v): if type(v) is FunctionType: self._a_f=v else: self._a_f=None self._a=v o1,o2,o3=map(MyObject,'abc') o1.a = lambda: o2.a + o3.a o2.a = lambda: o3.a * 2 o3.a = 10 print( o1.a ) #print 30 o2.a = o1.a + o3.a #this will unbind o3.a from o2.a, setting it to 40 print( o1.a ) #print 50 But what if you want to know when o1.a changed? That's what my first desire was, but implementation is hard. Even if it probably answer other question, here have some example: class MyObject(metaclass=HaveBindableProperties): a= BindableProperty() someOther= BindableProperty() o1,o2,o3=map(MyObject,'abc') o1.a = lambda: o2.a + o3.a o2.a = lambda: o3.a * 2 @o1.subscribe_a def printa(): print( o1.a ) o3.a = 1 #prints 3 printa.unsubscribe() #to remove subscribtion A: While I am clearly very late to this party, I thought this might be helpful to someone. from random import randint x = randint(1, 10) for sample in range(10): print(f"\nsample: {sample+1}") print(f"assigned = {x}") print(f"embedded = {randint(1, 10)}") Output: sample: 1 assigned = 2 embedded = 2 sample: 2 assigned = 2 embedded = 10 sample: 3 assigned = 2 embedded = 5 sample: 4 assigned = 2 embedded = 3 sample: 5 assigned = 2 embedded = 1 sample: 6 assigned = 2 embedded = 1 sample: 7 assigned = 2 embedded = 10 sample: 8 assigned = 2 embedded = 7 sample: 9 assigned = 2 embedded = 4 sample: 10 assigned = 2 embedded = 10 As we can see, if we call the function then it will recalculate. This is a slight variation on the suggested lambda in the accepted answer, as that effectively creates the function within the variable.
How to create recalculating variables in Python
Suppose I have the code: a = 2 b = a + 2 a = 3 The question is: how to keep b updated on each change in a? E.g., after the above code I would like to get: print(b) to be 5, not 4. Of course, b can be a function of a via def, but, say, in IPython it's more comfortable to have simple variables. Are there way to do so? Maybe via SymPy or other libraries?
[ "You can do a lambda, which is basically a function... The only malus is that you have to do b() to get the value instead of just b\n>>> a = 2\n>>> b = lambda: a + 2\n>>> b()\n4\n>>> a = 3\n>>> b()\n5\n\n", "Fair warning: this is a hack only suitable for experimentation and play in a Python interpreter environment. Do not feed untrusted input into this code.\nYou can define b as an instance of the following class:\nclass Expression(object):\n def __init__(self, expression):\n self.expression = expression\n def __repr__(self):\n return repr(eval(self.expression))\n def __str__(self):\n return str(eval(self.expression))\n\nInstances of this object will evaluate the expression automatically when printed or echoed in a Python interpreter. Expressions only support references to global names.\nDemo:\n>>> a = 5\n>>> b = Expression('a + 5')\n>>> b\n10\n>>> a = 20\n>>> b\n25\n\n", "If you want a sympy solution you can do:\n>>> from sympy import Symbol\n>>> a = Symbol('a')\n>>> b = Symbol('b')\n>>> c = (a + 3) * b\n>>> c\nb*(a + 3)\n>>> c.subs({'a': 3, 'b': 4})\n24\n\nOr you can even create your own evaluate function:\n>>> def e(exp):\n vars = globals()\n data = {}\n for atom in exp.atoms():\n if atom.is_Symbol:\n if atom.name in vars:\n data[atom.name] = vars[atom.name]\n return exp.subs(data)\n>>> a = 44\n>>> b = 33\n>>> e(c)\n>>> 1551\n>>> a = 1\n>>> e(c)\n>>> 132\n\n", "You can't do it on the same way like Java or C.\nHowever, you can wrap the variable with your own custom class in order to achieve your goal.\nLook at the following example:\nclass ReferenceVariable:\n def __init__(self, value):\n self.value = value\n\n def get(self):\n return self.value\n\n def set(self, val):\n self.value = val\n\na = ReferenceVariable(3)\nb = a\na.set(a.get()+4)\nprint(b.get()) // Prints 7\n\n", "It looks like you want a property.\nclass MyClazz(object):\n def __init__(self, a):\n self.a = a\n\n @property\n def b(self):\n return self.a + 2\n\nif __name__ == '__main__':\n c = MyClazz(2)\n print(c.a) # prints 2\n print(c.b) # prints 4\n c.a = 10\n print(c.b) # prints 12\n\nTake a look at the documentation for using property as a decorator, for details on how to add setters and so on if you want that. Since you didn't specify how b is being set, I just hard-coded it, but it would be trivial to make the b-specific part customizable too; something like:\nclass MyClazz(object):\n def __init__(self, a, b_part=2):\n self.a = a\n self._b = b_part\n\n @property\n def b(self):\n return self.a + self._b\n\n", "Part of what you say you want sounds a lot like how cells in most spreadsheet programs work, so I suggest you use something like what is in this highly rated ActiveState recipe. It's so short and simple, I'll reproduce it here and apply it to your trivial example:\nclass SpreadSheet(object):\n _cells = {}\n tools = {}\n def __setitem__(self, key, formula):\n self._cells[key] = formula\n def getformula(self, key):\n return self._cells[key]\n def __getitem__(self, key ):\n return eval(self._cells[key], SpreadSheet.tools, self)\n\nfrom math import sin, cos, pi, sqrt\nSpreadSheet.tools.update(sin=sin, cos=cos, pi=pi, sqrt=sqrt, len=len)\n\nss = SpreadSheet()\nss['a'] = '2'\nss['b'] = 'a + 2'\nprint ss['b'] # 4\nss['a'] = '3'\nprint ss['b'] # 5\n\nMany of the recipe's comments describe improvements, some significant, so I'd also suggest reading them.\n", "I got this idea, after I worked with Qt. 
It use object property, rather then variable:\nfrom types import FunctionType\n\nclass MyObject:\n def __init__(self,name):\n self.name= name\n self._a_f=None\n self._a=None\n\n @property\n def a(self):\n if self._a_f is not None:\n self._a= self._a_f()\n return self._a\n @a.setter\n def a(self,v):\n if type(v) is FunctionType:\n self._a_f=v\n else:\n self._a_f=None\n self._a=v\n\no1,o2,o3=map(MyObject,'abc')\n\no1.a = lambda: o2.a + o3.a\no2.a = lambda: o3.a * 2\no3.a = 10\nprint( o1.a ) #print 30\n\no2.a = o1.a + o3.a #this will unbind o3.a from o2.a, setting it to 40\nprint( o1.a ) #print 50\n\nBut what if you want to know when o1.a changed? That's what my first desire was, but implementation is hard. Even if it probably answer other question, here have some example:\nclass MyObject(metaclass=HaveBindableProperties):\n a= BindableProperty()\n someOther= BindableProperty()\n\n\no1,o2,o3=map(MyObject,'abc')\n\no1.a = lambda: o2.a + o3.a\no2.a = lambda: o3.a * 2\n\n@o1.subscribe_a\ndef printa():\n print( o1.a )\n\no3.a = 1 #prints 3\n\nprinta.unsubscribe() #to remove subscribtion\n\n", "While I am clearly very late to this party, I thought this might be helpful to someone.\nfrom random import randint\n\nx = randint(1, 10)\n\nfor sample in range(10):\n print(f\"\\nsample: {sample+1}\")\n print(f\"assigned = {x}\")\n print(f\"embedded = {randint(1, 10)}\")\n\nOutput:\nsample: 1\nassigned = 2\nembedded = 2\n\nsample: 2\nassigned = 2\nembedded = 10\n\nsample: 3\nassigned = 2\nembedded = 5\n\nsample: 4\nassigned = 2\nembedded = 3\n\nsample: 5\nassigned = 2\nembedded = 1\n\nsample: 6\nassigned = 2\nembedded = 1\n\nsample: 7\nassigned = 2\nembedded = 10\n\nsample: 8\nassigned = 2\nembedded = 7\n\nsample: 9\nassigned = 2\nembedded = 4\n\nsample: 10\nassigned = 2\nembedded = 10\n\nAs we can see, if we call the function then it will recalculate. This is a slight variation on the suggested lambda in the accepted answer, as that effectively creates the function within the variable.\n" ]
[ 8, 4, 2, 1, 1, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0018064564_python.txt
Q: How to prevent np.where from turning 0 into '0'? I want to create an array with np.where that has strings and 0s in it. So usually its dtype would be 'object'. Minimal example: A = np.array([[1,2,1],[2,1,2],[1,1,2]]) x = np.where(A==1,0,'hello') As a result I get array([['0', 'hello', '0'], ['hello', '0', 'hello'], ['0', '0', 'hello']], dtype='<U11') I would like those '0's to be 0s. Since np.where has no argument for dtype, I don't know how to do this except by replacing them afterwards. There has to be a better way to do this. A: You could use an object array as first input values for where: x = np.where(A==1, np.zeros_like(A).astype(object), 'hello') Output: array([[0, 'hello', 0], ['hello', 0, 'hello'], [0, 0, 'hello']], dtype=object)
How to prevent np.where from turning 0 into '0'?
I want to create an array with np.where that has strings and 0s in it. So usually its dtype would be 'object'. Minimal example: A = np.array([[1,2,1],[2,1,2],[1,1,2]]) x = np.where(A==1,0,'hello') As a result I get array([['0', 'hello', '0'], ['hello', '0', 'hello'], ['0', '0', 'hello']], dtype='<U11') I would like those '0's to be 0s. Since np.where has no argument for dtype, I don't know how to do this except by replacing them afterwards. There has to be a better way to do this.
[ "You could use an object array as first input values for where:\nx = np.where(A==1, np.zeros_like(A).astype(object), 'hello')\n\nOutput:\narray([[0, 'hello', 0],\n ['hello', 0, 'hello'],\n [0, 0, 'hello']], dtype=object)\n\n" ]
[ 2 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074562254_numpy_python.txt
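An equivalent approach that sidesteps where's common-dtype promotion entirely (sketch): build the object array first, then assign in place.

out = np.full(A.shape, 'hello', dtype=object)
out[A == 1] = 0
# array([[0, 'hello', 0],
#        ['hello', 0, 'hello'],
#        [0, 0, 'hello']], dtype=object)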
Q: Is there any way to get the recording url and timeline of Microsoft Teams meeting using GRAPH API's I'm trying to fetch the recording details and timeline file using the Teams Graph API, but they don't seem to be exposed there. Is there any way to fetch them? I am able to fetch the recording via OneDrive, but the issue is that this requires granting Drive scopes, which is not good. Can't we achieve this using the Teams Graph APIs, and also get the timeline of the meeting? A: The Teams meeting recording link is available in the Graph Beta API under chat messages - callRecordingUrl (see chatMessage, eventMessageDetail, callRecordingEventMessageDetail). Please go through the List chats documentation to get the chat ID. Alternatively, you can get the chat ID directly if you create the meeting using the Graph API, and then perform the next step. Run the example in List messages in a chat, replacing the chat ID with your own. In the response you will see an eventDetail object with callRecordingUrl. For the meeting timeline, please follow these docs: https://learn.microsoft.com/en-us/graph/api/onlinemeeting-get?view=graph-rest-1.0&tabs=http https://learn.microsoft.com/en-us/graph/api/meetingattendancereport-get?view=graph-rest-1.0&tabs=http
Is there any way to get the recording url and timeline of Microsoft Teams meeting using GRAPH API's
I'm trying to fetch the recording details and timeline file using the Teams Graph API, but they don't seem to be exposed there. Is there any way to fetch them? I am able to fetch the recording via OneDrive, but the issue is that this requires granting Drive scopes, which is not good. Can't we achieve this using the Teams Graph APIs, and also get the timeline of the meeting?
[ "Teams meeting Record link is available in Graph Beta API under Chat messages - callRecordingUrl.\nchatMessage, eventMessageDetail, callRecordingEventMessageDetail\n\nPlease go through List chats documentation to get chat ID. Alternatively You can get the chat id directly if you create a meeting using Graph API. You can directly take chat id from here and perform next step \nRun this example in List messages in a chat by replacing the chat ID with your chat ID. In the response you will see eventDetail object with callRecordingUrl. Attaching a screenshot for your reference.\n\n\nPlease follow these docs for Timelines\n\nhttps://learn.microsoft.com/en-us/graph/api/onlinemeeting-get?view=graph-rest-1.0&tabs=http\nhttps://learn.microsoft.com/en-us/graph/api/meetingattendancereport-get?view=graph-rest-1.0&tabs=http\n\n" ]
[ 2 ]
[]
[]
[ "microsoft_graph_api", "microsoft_teams", "python" ]
stackoverflow_0074557628_microsoft_graph_api_microsoft_teams_python.txt
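A minimal request sketch for the chat-messages lookup described in the answer (assumes an already-acquired bearer token with the appropriate Chat.Read permission; chat_id and token are placeholders):

import requests

url = f"https://graph.microsoft.com/beta/chats/{chat_id}/messages"
resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
for msg in resp.json().get("value", []):
    detail = msg.get("eventDetail") or {}
    if detail.get("callRecordingUrl"):
        print(detail["callRecordingUrl"])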
Q: How to split a dataset with peaks and finding area under these peaks? I have two datasets, one with consumption of energy and one with production of energy. I merged these two and filtered out all of the surplus energy peaks from this. This resulted in a dataframe with lots of peaks and zeros for all moments there is no surplus energy. What I am looking for is to find the amount of energy in each peak. More or less this means finding the area under each indivivual peak, from the moment it starts from zero to returning to zero. I -unsucesfully- tried to split the peaks everytime the graph hits zero. I simply have no idea on how to code something that will split the dataset into individual peaks or -for that matter- calculate how much energy there is in every peak. It is too much data to do this by hand (almost a year of data in 15-minute intervals). Simply summing all the data and dividing by the amount of datapoints will not cut it. I hope it is clear what I'm trying to achieve here. Thanks EDIT Let's say the data looks something like this: df = pd.DataFrame() df['Value'] = [0, 0, 0, 0, 1, 1, 0, 1, 2, 4, 3, 0, 4, 0, 1, 0, 4, 0, 1, 0] df['Timestamp'] = pd.date_range(start='1/1/2018', periods=len(df), freq='15T') df.plot(x='Timestamp', y='Value') I can't split it on the zeros with partitioning, I cannot find anything online where a dataset is split everytime there is a zero. Thanks. A: I have something for you, but (a) it may have off-by-one errors, and (b) there needs to be some manual fudging at the first and last rows of the dataframe, if Value isn't zero for these rows. Disclaimers dispensed, here goes. First, (1) put in columns indicating when a row is the beginning of a shift, and when it's at the end. At least for me, this involved some fumbling around with the parameters of the shift() calls. (2) Calculate the area under the graph for the 15-min time period represented by each row, using the trapezium rule. (3) Add a column to store ID's of each peak later. So far we have: # (1) df["start_peak"] = df["Value"].shift(-1).ne(0) & df["Value"].eq(0) df["end_peak"] = (df["Value"].shift(-1).eq(0) & df["Value"].ne(0)).shift(1) # (2) df["Area"] = 0.5 * 0.25 * (df["Value"] + df["Value"].shift(-1)) # (3) df["peak_ID"] = pd.NaT # (3) Now we need to loop through the rows of the dataframe and assign peak IDs to each row. The logic I chose was: (4) if it's the start of a peak, then the ID of the peak is the timestamp, (5) if it's not in a peak, then the ID is undefined (pd.NaT), and (6) otherwise the ID is the timestamp from the beginning of the peak. Note that there are a few ways to iterate down the rows of a dataframe (iterrows, iteritems, itertuples), but in general you should avoid it when you can. I don't think we can avoid it here. previous_peak_id = pd.NaT for (i, row) in df.iterrows(): if row["start_peak"]: # (4) df.loc[i, "peak_id"] = row["Timestamp"] # Just setting row["peak_id"] does not affect the main dataframe. previous_peak_id = row["Timestamp"] elif not(row["end_peak"]): # (6) df.loc[i, "peak_id"] = previous_peak_id else: pass # (5) already assigned a pd.NaT Finally, we group the rows of the dataframes into groups where the peak_id are the same, and sum the Area for each group. This adds up the trapezium slices for each peak_id, thus summing the area under each peak. df.groupby(["peak_id"]).sum()["Area"]
How to split a dataset with peaks and finding area under these peaks?
I have two datasets, one with consumption of energy and one with production of energy. I merged these two and filtered out all of the surplus energy peaks from this. This resulted in a dataframe with lots of peaks and zeros for all moments when there is no surplus energy. What I am looking for is to find the amount of energy in each peak. More or less this means finding the area under each individual peak, from the moment it starts from zero to returning to zero. I unsuccessfully tried to split the peaks every time the graph hits zero. I simply have no idea how to code something that will split the dataset into individual peaks or -for that matter- calculate how much energy there is in every peak. It is too much data to do this by hand (almost a year of data in 15-minute intervals). Simply summing all the data and dividing by the amount of datapoints will not cut it. I hope it is clear what I'm trying to achieve here. Thanks EDIT Let's say the data looks something like this: df = pd.DataFrame() df['Value'] = [0, 0, 0, 0, 1, 1, 0, 1, 2, 4, 3, 0, 4, 0, 1, 0, 4, 0, 1, 0] df['Timestamp'] = pd.date_range(start='1/1/2018', periods=len(df), freq='15T') df.plot(x='Timestamp', y='Value') I can't split it on the zeros with partitioning, and I cannot find anything online where a dataset is split every time there is a zero. Thanks.
[ "I have something for you, but (a) it may have off-by-one errors, and (b) there needs to be some manual fudging at the first and last rows of the dataframe, if Value isn't zero for these rows. Disclaimers dispensed, here goes.\nFirst, (1) put in columns indicating when a row is the beginning of a shift, and when it's at the end. At least for me, this involved some fumbling around with the parameters of the shift() calls. (2) Calculate the area under the graph for the 15-min time period represented by each row, using the trapezium rule. (3) Add a column to store ID's of each peak later. So far we have:\n# (1)\ndf[\"start_peak\"] = df[\"Value\"].shift(-1).ne(0) & df[\"Value\"].eq(0)\ndf[\"end_peak\"] = (df[\"Value\"].shift(-1).eq(0) & df[\"Value\"].ne(0)).shift(1) \n\n# (2)\ndf[\"Area\"] = 0.5 * 0.25 * (df[\"Value\"] + df[\"Value\"].shift(-1))\n\n# (3)\ndf[\"peak_ID\"] = pd.NaT # (3)\n\nNow we need to loop through the rows of the dataframe and assign peak IDs to each row. The logic I chose was: (4) if it's the start of a peak, then the ID of the peak is the timestamp, (5) if it's not in a peak, then the ID is undefined (pd.NaT), and (6) otherwise the ID is the timestamp from the beginning of the peak. Note that there are a few ways to iterate down the rows of a dataframe (iterrows, iteritems, itertuples), but in general you should avoid it when you can. I don't think we can avoid it here.\nprevious_peak_id = pd.NaT\nfor (i, row) in df.iterrows():\n if row[\"start_peak\"]: # (4)\n df.loc[i, \"peak_id\"] = row[\"Timestamp\"] # Just setting row[\"peak_id\"] does not affect the main dataframe.\n previous_peak_id = row[\"Timestamp\"]\n elif not(row[\"end_peak\"]): # (6)\n df.loc[i, \"peak_id\"] = previous_peak_id\n else:\n pass # (5) already assigned a pd.NaT\n\nFinally, we group the rows of the dataframes into groups where the peak_id are the same, and sum the Area for each group. This adds up the trapezium slices for each peak_id, thus summing the area under each peak.\ndf.groupby([\"peak_id\"]).sum()[\"Area\"]\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python", "split", "sum" ]
stackoverflow_0074545531_pandas_python_split_sum.txt
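A vectorised alternative to the row loop in the answer (sketch; uses a simple per-sample sum scaled by the 15-minute interval rather than the trapezium rule):

nonzero = df['Value'].ne(0)
peak_id = (nonzero & ~nonzero.shift(fill_value=False)).cumsum()
# NaN out the zero rows so groupby drops them, then sum each peak
areas = df.groupby(peak_id.where(nonzero))['Value'].sum() * 0.25
print(areas)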
Q: youtube-dl :: how to listen to audio while downloading I know that I can download an audio track from YouTube through this easy command: youtube-dl -f 251 'http://www.youtube.com/watch?v=HRIF4_WzU1w' Lately YouTube has been slowing down the download speed. Is there a way I can listen to the audio while downloading the track? Where is the file located while it is downloading? On the RAM? A: As far as I know when downloading a video while precising -f 251 it is being written while downloading in VIDEO_TITLE.webm.part and at the end of the download this file is rename VIDEO_TITLE.webm. To listen to the audio of a video while downloading its track, just open in a web-browser the second URL returned by: youtube-dl -g 'HRIF4_WzU1w' If you are also interested in having the audio file at the end of your listening, use your command or use Ctrl + S in your web-browser.
youtube-dl :: how to listen to audio while downloading
I know that I can download an audio track from YouTube through this easy command: youtube-dl -f 251 'http://www.youtube.com/watch?v=HRIF4_WzU1w' Lately YouTube has been slowing down the download speed. Is there a way I can listen to the audio while downloading the track? Where is the file located while it is downloading? On the RAM?
[ "As far as I know when downloading a video while precising -f 251 it is being written while downloading in VIDEO_TITLE.webm.part and at the end of the download this file is rename VIDEO_TITLE.webm.\nTo listen to the audio of a video while downloading its track, just open in a web-browser the second URL returned by:\nyoutube-dl -g 'HRIF4_WzU1w'\n\nIf you are also interested in having the audio file at the end of your listening, use your command or use Ctrl + S in your web-browser.\n" ]
[ 1 ]
[]
[]
[ "python", "tcp", "udp", "youtube", "youtube_dl" ]
stackoverflow_0074512222_python_tcp_udp_youtube_youtube_dl.txt
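If a local player is available, piping to stdout avoids waiting on the .part file at all (sketch; assumes mpv is installed, ffplay works similarly):

youtube-dl -f 251 -o - 'http://www.youtube.com/watch?v=HRIF4_WzU1w' | mpv -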
Q: PySpark Get row with max value from multiple columns grouped I would be happy for some help here :-) I have the following dataframe: Type | Number | Date | Value | ------------------------------------ A | 1 | 2022-10-01 | 5 | A | 2 | 2022-10-01 | 8 | A | 3 | 2022-11-23 | 4 | B | 1 | 2022-02-02 | 1 | B | 2 | 2022-02-04 | 9 | B | 3 | 2022-02-04 | 3 | B | 4 | 2022-02-04 | 1 | The result should be grouped by Type and Date and the Value should be the Value from the row where Number is the maximum (with the grouping condition): Type | Number | Date | Value | ------------------------------------ A | 2 | 2022-10-01 | 8 | A | 3 | 2022-11-23 | 4 | B | 1 | 2022-02-02 | 1 | B | 4 | 2022-02-04 | 1 | I tried the following (in a few variations; also with groupBy()) but without success: from pyspark.sql.functions import * from pyspark.sql import Window w = Window.partitionBy("Type", "Date") df_result = ( df.withColumn("MaxNumber", max("Number").over(w)) .where(col("Number") == col("MaxNumber")) .drop("MaxNumber") ) df_result.display() Thanks for any help in advance. A: Your code works fine : df.withColumn("MaxNumber", F.max("Number").over(w)).where( F.col("Number") == F.col("MaxNumber") ).show() +----+------+----------+-----+---------+ |Type|Number| Date|Value|MaxNumber| +----+------+----------+-----+---------+ | A| 2|2022-10-01| 8| 2| | A| 3|2022-11-23| 4| 3| | B| 1|2022-02-02| 1| 1| | B| 4|2022-02-04| 1| 4| +----+------+----------+-----+---------+ I would do it a bit differently, using row_number instead : from pyspark.sql import functions as F from pyspark.sql import Window w = Window.partitionBy("Type", "Date").orderBy(F.col("Number").desc()) df.withColumn("MaxNumber", F.row_number().over(w)).show() +----+------+----------+-----+---------+ |Type|Number| Date|Value|MaxNumber| +----+------+----------+-----+---------+ | A| 2|2022-10-01| 8| 1| <-- | A| 1|2022-10-01| 5| 2| | A| 3|2022-11-23| 4| 1| <-- | B| 1|2022-02-02| 1| 1| <-- | B| 4|2022-02-04| 1| 1| <-- | B| 3|2022-02-04| 3| 2| | B| 2|2022-02-04| 9| 3| +----+------+----------+-----+---------+ Then you just filter where("MaxNumber = 1") : df.withColumn("MaxNumber", F.row_number().over(w)).where("MaxNumber = 1").drop( "MaxNumber" ).show() +----+------+----------+-----+ |Type|Number| Date|Value| +----+------+----------+-----+ | A| 2|2022-10-01| 8| | A| 3|2022-11-23| 4| | B| 1|2022-02-02| 1| | B| 4|2022-02-04| 1| +----+------+----------+-----+
PySpark Get row with max value from multiple columns grouped
I would be happy for some help here :-) I have the following dataframe: Type | Number | Date | Value | ------------------------------------ A | 1 | 2022-10-01 | 5 | A | 2 | 2022-10-01 | 8 | A | 3 | 2022-11-23 | 4 | B | 1 | 2022-02-02 | 1 | B | 2 | 2022-02-04 | 9 | B | 3 | 2022-02-04 | 3 | B | 4 | 2022-02-04 | 1 | The result should be grouped by Type and Date and the Value should be the Value from the row where Number is the maximum (with the grouping condition): Type | Number | Date | Value | ------------------------------------ A | 2 | 2022-10-01 | 8 | A | 3 | 2022-11-23 | 4 | B | 1 | 2022-02-02 | 1 | B | 4 | 2022-02-04 | 1 | I tried the following (in a few variations; also with groupBy()) but without success: from pyspark.sql.functions import * from pyspark.sql import Window w = Window.partitionBy("Type", "Date") df_result = ( df.withColumn("MaxNumber", max("Number").over(w)) .where(col("Number") == col("MaxNumber")) .drop("MaxNumber") ) df_result.display() Thanks for any help in advance.
[ "Your code works fine :\ndf.withColumn(\"MaxNumber\", F.max(\"Number\").over(w)).where(\n F.col(\"Number\") == F.col(\"MaxNumber\")\n).show()\n\n+----+------+----------+-----+---------+\n|Type|Number| Date|Value|MaxNumber|\n+----+------+----------+-----+---------+\n| A| 2|2022-10-01| 8| 2|\n| A| 3|2022-11-23| 4| 3|\n| B| 1|2022-02-02| 1| 1|\n| B| 4|2022-02-04| 1| 4|\n+----+------+----------+-----+---------+\n\n\nI would do it a bit differently, using row_number instead :\nfrom pyspark.sql import functions as F\nfrom pyspark.sql import Window\n\nw = Window.partitionBy(\"Type\", \"Date\").orderBy(F.col(\"Number\").desc())\n\ndf.withColumn(\"MaxNumber\", F.row_number().over(w)).show()\n\n+----+------+----------+-----+---------+\n|Type|Number| Date|Value|MaxNumber|\n+----+------+----------+-----+---------+\n| A| 2|2022-10-01| 8| 1| <--\n| A| 1|2022-10-01| 5| 2|\n| A| 3|2022-11-23| 4| 1| <--\n| B| 1|2022-02-02| 1| 1| <--\n| B| 4|2022-02-04| 1| 1| <--\n| B| 3|2022-02-04| 3| 2|\n| B| 2|2022-02-04| 9| 3|\n+----+------+----------+-----+---------+\n\nThen you just filter where(\"MaxNumber = 1\") :\ndf.withColumn(\"MaxNumber\", F.row_number().over(w)).where(\"MaxNumber = 1\").drop(\n \"MaxNumber\"\n).show()\n\n+----+------+----------+-----+\n|Type|Number| Date|Value|\n+----+------+----------+-----+\n| A| 2|2022-10-01| 8|\n| A| 3|2022-11-23| 4|\n| B| 1|2022-02-02| 1|\n| B| 4|2022-02-04| 1|\n+----+------+----------+-----+\n\n" ]
[ 0 ]
[]
[]
[ "databricks", "pyspark", "python" ]
stackoverflow_0074561382_databricks_pyspark_python.txt
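On Spark 3.3+, the same result is available without a window (sketch; max_by returns the Value associated with the maximum Number in each group):

from pyspark.sql import functions as F

df.groupBy("Type", "Date").agg(
    F.max("Number").alias("Number"),
    F.max_by("Value", "Number").alias("Value"),
).show()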
Q: About the outconverter of cx_Oracle component for python doesn't work when the value is None I have a requirement to fetch data from the database through cx_Oracle and convert it: while fetching, if the value of a Number field is None, it needs to be converted to -1. I want to use the outconverter attribute of the Variable, but I found that if the value is None, the outconverter will not be fired. Here is my example code: import cx_Oracle with cx_Oracle.connect("omrscpif" ,"omrscpif", 'ammiceng') as connect: def OutConverter(value): if value is None: return '' return value def NumberOutConverter(value): if value is None: return -1 return value def OutputTypeHandler(cursor, name, defaultType, size, precision, scale): if defaultType in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR): return cursor.var(str, size, cursor.arraysize, outconverter=OutConverter) if defaultType == cx_Oracle.DB_TYPE_NUMBER: return cursor.var(cx_Oracle.DB_TYPE_NUMBER, size, cursor.arraysize, outconverter=NumberOutConverter) # Finally, we will assign the definition to the outputtypehandler of the connect. connect.outputtypehandler = OutputTypeHandler with connect.cursor() as cursor: cursor.execute("select fordertypevalue, forderqtyfuturecommitted, fbackorderqty from so_send where ftrnflg = 'N'") result = cursor.fetchall() print(result) and the result is below: [(None, None, 1.0)] How do I solve this issue? A: The outconverter value is not called if the value is None as described in the documentation. If you want this behavior you can log an enhancement request. An issue was logged for this.
About the outconverter of cx_Oracle component for python doesn't work when the value is None
I have a requirement to fetch data from the database through cx_Oracle and convert it: while fetching, if the value of a Number field is None, it needs to be converted to -1. I want to use the outconverter attribute of the Variable, but I found that if the value is None, the outconverter will not be fired. Here is my example code: import cx_Oracle with cx_Oracle.connect("omrscpif" ,"omrscpif", 'ammiceng') as connect: def OutConverter(value): if value is None: return '' return value def NumberOutConverter(value): if value is None: return -1 return value def OutputTypeHandler(cursor, name, defaultType, size, precision, scale): if defaultType in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR): return cursor.var(str, size, cursor.arraysize, outconverter=OutConverter) if defaultType == cx_Oracle.DB_TYPE_NUMBER: return cursor.var(cx_Oracle.DB_TYPE_NUMBER, size, cursor.arraysize, outconverter=NumberOutConverter) # Finally, we will assign the definition to the outputtypehandler of the connect. connect.outputtypehandler = OutputTypeHandler with connect.cursor() as cursor: cursor.execute("select fordertypevalue, forderqtyfuturecommitted, fbackorderqty from so_send where ftrnflg = 'N'") result = cursor.fetchall() print(result) and the result is below: [(None, None, 1.0)] How do I solve this issue?
[ "The outconverter value is not called if the value is None as described in the documentation. If you want this behavior you can log an enhancement request.\nAn issue was logged for this.\n" ]
[ 2 ]
[]
[]
[ "cx_oracle", "python" ]
stackoverflow_0074557918_cx_oracle_python.txt
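Until such an enhancement exists, a post-fetch workaround (sketch) keeps the NULL-to--1 substitution out of the type handler; note it maps every NULL to -1, so string columns would need a per-column rule instead:

with connect.cursor() as cursor:
    cursor.execute("select fordertypevalue, forderqtyfuturecommitted, fbackorderqty "
                   "from so_send where ftrnflg = 'N'")
    # replace None in every fetched row before further processing
    result = [tuple(-1 if v is None else v for v in row) for row in cursor]
print(result)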
Q: How to parse ingress object in cdktf security group? Problem Unable to create security group rules in AWS using CDKTF Code import cdktf_cdktf_provider_aws.security_group as SecurityGroup_ self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}]) Error 29: "ingress": [ 30: { 31: "cidr_blocks": null, 32: "description": "smartstack_dependency", 33: "from_port": null, 34: "ipv6_cidr_blocks": null, 35: "prefix_list_ids": null, 36: "protocol": "tcp", 37: "security_groups": null, 38: "self": null, 39: "to_port": null 40: } 41: ], The argument "ingress.0.to_port" is required, but no definition was found. Tried the following code- import cdktf_cdktf_provider_aws.security_group as SecurityGroup_ self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}]) A: Change the code to self.security_group_ = SecurityGroup_.SecurityGroup( self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[SecurityGroup_.SecurityGroupIngress(from_port=3306, to_port=3306, security_groups=['test-sg'])]) Ingress takes a list of SecurityGroupIngress objects
How to parse ingress object in cdktf security group?
Problem Unable to create security group rules in aws using CDKTF Code import cdktf_cdktf_provider_aws.security_group as SecurityGroup_ self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}]) Error 29: "ingress": [ 30: { 31: "cidr_blocks": null, 32: "description": "smartstack_dependency", 33: "from_port": null, 34: "ipv6_cidr_blocks": null, 35: "prefix_list_ids": null, 36: "protocol": "tcp", 37: "security_groups": null, 38: "self": null, 39: "to_port": null 40: } 41: ], The argument "ingress.0.to_port" is required, but no definition was found. Tried the following code- import cdktf_cdktf_provider_aws.security_group as SecurityGroup_ self.security_group_ = SecurityGroup_.SecurityGroup(self.scope_object, id_=self.id, name=self.name, vpc_id=self.vpc_id, ingress=[{"from_port":"3306","to_port":"3306"}])
[ "Change the code to\nself.security_group_ = SecurityGroup_.SecurityGroup(\nself.scope_object, \nid_=self.id, \nname=self.name, \nvpc_id=self.vpc_id, \ningress=[SecurityGroup_.SecurityGroupIngress(from_port=3306,to_port=3306, \"security_groups\":['test-sg'])])\n\nIngress takes a list of class obj SecurityGroupIngress\n" ]
[ 3 ]
[]
[]
[ "amazon_web_services", "aws_security_group", "python", "terraform", "terraform_cdk" ]
stackoverflow_0074559553_amazon_web_services_aws_security_group_python_terraform_terraform_cdk.txt
Q: How to expand a list to a certain size without repeating each individual list elements that n-times? I'm looking to keep the individual elements of a list repeating for x number of times, but can only see how to repeat the full list x number of times. For example, I want to repeat the list [3, 5, 1, 9, 8] such that if x=12, then I want to produce tthe following list (i.e the list continues to repeat in order until there are 12 individual elements in the list: [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5] I can do the below but this is obviously not what I want and I'm unsure how to proceed from here. my_list = [3, 5, 1, 9, 8] x = 12 print(my_list * 12) [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8] A: Your code repeats list 12 times. You need to repeat list until length is matched. This can achieved using Itertools - Functions creating iterators for efficient looping from itertools import cycle, islice lis = [3, 5, 1, 9, 8] out = list(islice(cycle(lis), 12)) print(out) Gives # [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5] More pythonic # Use a for loop to access each element in list and iterate over 'length' times. Repeat Ith element you access through loop in same list until length matches. lis = [3, 5, 1, 9, 8] length = 12 out = [lis[i%len(lis)] for i in range(length)] print(out) Gives ## [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5] A: There are multiple ways to go about it. If x is the final length desired and lst is the list (please do not use list as a variable name because it overwrites the builtin list function), then you can do : lst = (lst * (1 + x // len(lst)))[:x] This multiplies the list by the smallest number needed to get at least N elements, and then it slice the list to keep only the first N. For your example : >>> lst = [3, 5, 1, 9, 8] >>> x = 12 >>> (lst * (1 + x // len(lst)))[:x] [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5] You could also use a loop, for example : index = 0 while len(lst) < x: lst.append(lst[index]) index += 1
How to expand a list to a certain size without repeating each individual list element n times?
I'm looking to keep the individual elements of a list repeating for x number of times, but can only see how to repeat the full list x number of times.
For example, I want to repeat the list [3, 5, 1, 9, 8] such that if x=12, then I want to produce the following list (i.e. the list continues to repeat in order until there are 12 individual elements in the list):
[3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5]

I can do the below, but this is obviously not what I want and I'm unsure how to proceed from here.
my_list = [3, 5, 1, 9, 8]
x = 12
print(my_list * 12)

[3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5, 1, 9, 8]
[ "Your code repeats list 12 times. You need to repeat list until length is matched. This can achieved using Itertools - Functions creating iterators for efficient looping\nfrom itertools import cycle, islice\n\nlis = [3, 5, 1, 9, 8]\nout = list(islice(cycle(lis), 12))\nprint(out)\n\nGives #\n[3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5]\n\nMore pythonic #\nUse a for loop to access each element in list and iterate over 'length' times. Repeat Ith element you access through loop in same list until length matches.\nlis = [3, 5, 1, 9, 8]\nlength = 12\n\nout = [lis[i%len(lis)] for i in range(length)]\nprint(out)\n\nGives ##\n[3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5]\n\n", "There are multiple ways to go about it. If x is the final length desired and lst is the list (please do not use list as a variable name because it overwrites the builtin list function), then you can do :\nlst = (lst * (1 + x // len(lst)))[:x]\n\nThis multiplies the list by the smallest number needed to get at least N elements, and then it slice the list to keep only the first N. For your example :\n>>> lst = [3, 5, 1, 9, 8]\n>>> x = 12\n>>> (lst * (1 + x // len(lst)))[:x]\n[3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5]\n\nYou could also use a loop, for example :\nindex = 0\nwhile len(lst) < x:\n lst.append(lst[index])\n index += 1\n\n" ]
[ 2, 0 ]
[]
[]
[ "list", "loops", "python", "repeat" ]
stackoverflow_0074562382_list_loops_python_repeat.txt
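A small self-contained helper wrapping the itertools approach from the accepted answer; the function name is my own:
from itertools import cycle, islice

def repeat_to_length(seq, n):
    # Cycle through seq endlessly and keep only the first n elements.
    return list(islice(cycle(seq), n))

print(repeat_to_length([3, 5, 1, 9, 8], 12))
# [3, 5, 1, 9, 8, 3, 5, 1, 9, 8, 3, 5]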
Q: how to sort pandas dataframe from one column I have a data frame like this: print(df) 0 1 2 0 354.7 April 4.0 1 55.4 August 8.0 2 176.5 December 12.0 3 95.5 February 2.0 4 85.6 January 1.0 5 152 July 7.0 6 238.7 June 6.0 7 104.8 March 3.0 8 283.5 May 5.0 9 278.8 November 11.0 10 249.6 October 10.0 11 212.7 September 9.0 As you can see, months are not in calendar order. So I created a second column to get the month number corresponding to each month (1-12). From there, how can I sort this data frame according to calendar months' order? A: Use sort_values to sort the df by a specific column's values: In [18]: df.sort_values('2') Out[18]: 0 1 2 4 85.6 January 1.0 3 95.5 February 2.0 7 104.8 March 3.0 0 354.7 April 4.0 8 283.5 May 5.0 6 238.7 June 6.0 5 152.0 July 7.0 1 55.4 August 8.0 11 212.7 September 9.0 10 249.6 October 10.0 9 278.8 November 11.0 2 176.5 December 12.0 If you want to sort by two columns, pass a list of column labels to sort_values with the column labels ordered according to sort priority. If you use df.sort_values(['2', '0']), the result would be sorted by column 2 then column 0. Granted, this does not really make sense for this example because each value in df['2'] is unique. A: I tried the solutions above and I do not achieve results, so I found a different solution that works for me. The ascending=False is to order the dataframe in descending order, by default it is True. I am using python 3.6.6 and pandas 0.23.4 versions. final_df = df.sort_values(by=['2'], ascending=False) You can see more details in pandas documentation here. A: Using column name worked for me. sorted_df = df.sort_values(by=['Column_name'], ascending=True) A: Panda's sort_values does the work. There are various parameters one can pass, such as ascending (bool or list of bool): Sort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by. As the default is ascending, and OP's goal is to sort ascending, one doesn't need to specify that parameter (see the last note below for the way to solve descending), so one can use one of the following ways: Performing the operation in-place, and keeping the same variable name. 
This requires one to pass inplace=True as follows: df.sort_values(by=['2'], inplace=True) # or df.sort_values(by = '2', inplace = True) # or df.sort_values('2', inplace = True) If doing the operation in-place is not a requirement, one can assign the change (sort) to a variable: With the same name of the original dataframe, df as df = df.sort_values(by=['2']) With a different name, such as df_new, as df_new = df.sort_values(by=['2']) All this previous operations would give the following output 0 1 2 4 85.6 January 1.0 3 95.5 February 2.0 7 104.8 March 3.0 0 354.7 April 4.0 8 283.5 May 5.0 6 238.7 June 6.0 5 152 July 7.0 1 55.4 August 8.0 11 212.7 September 9.0 10 249.6 October 10.0 9 278.8 November 11.0 2 176.5 December 12.0 Finally, one can reset the index with pandas.DataFrame.reset_index, to get the following df.reset_index(drop = True, inplace = True) # or df = df.reset_index(drop = True) [Out]: 0 1 2 0 85.6 January 1.0 1 95.5 February 2.0 2 104.8 March 3.0 3 354.7 April 4.0 4 283.5 May 5.0 5 238.7 June 6.0 6 152 July 7.0 7 55.4 August 8.0 8 212.7 September 9.0 9 249.6 October 10.0 10 278.8 November 11.0 11 176.5 December 12.0 A one-liner that sorts ascending, and resets the index would be as follows df = df.sort_values(by=['2']).reset_index(drop = True) [Out]: 0 1 2 0 85.6 January 1.0 1 95.5 February 2.0 2 104.8 March 3.0 3 354.7 April 4.0 4 283.5 May 5.0 5 238.7 June 6.0 6 152 July 7.0 7 55.4 August 8.0 8 212.7 September 9.0 9 249.6 October 10.0 10 278.8 November 11.0 11 176.5 December 12.0 Notes: If one is not doing the operation in-place, forgetting the steps mentioned above may lead one (as this user) to not be able to get the expected result. There are strong opinions on using inplace. For that, one might want to read this. One is assuming that the column 2 is not a string. If it is, one will have to convert it: Using pandas.to_numeric df['2'] = pd.to_numeric(df['2']) Using pandas.Series.astype df['2'] = df['2'].astype(float) If one wants in descending order, one needs to pass ascending=False as df = df.sort_values(by=['2'], ascending=False) # or df.sort_values(by = '2', ascending=False, inplace=True) [Out]: 0 1 2 2 176.5 December 12.0 9 278.8 November 11.0 10 249.6 October 10.0 11 212.7 September 9.0 1 55.4 August 8.0 5 152 July 7.0 6 238.7 June 6.0 8 283.5 May 5.0 0 354.7 April 4.0 7 104.8 March 3.0 3 95.5 February 2.0 4 85.6 January 1.0 A: Just as another solution: Instead of creating the second column, you can categorize your string data(month name) and sort by that like this: df.rename(columns={1:'month'},inplace=True) df['month'] = pd.Categorical(df['month'],categories=['December','November','October','September','August','July','June','May','April','March','February','January'],ordered=True) df = df.sort_values('month',ascending=False) It will give you the ordered data by month name as you specified while creating the Categorical object. A: Just adding some more operations on data. 
Suppose we have a dataframe df, we can do several operations to get desired outputs ID cost tax label 1 216590 1600 test 2 523213 1800 test 3 250 1500 experiment (df['label'].value_counts().to_frame().reset_index()).sort_values('label', ascending=False) will give sorted output of labels as a dataframe index label 0 test 2 1 experiment 1 A: This worked for me df.sort_values(by='Column_name', inplace=True, ascending=False) A: You probably need to reset the index after sorting: df = df.sort_values('2') df = df.reset_index(drop=True) A: Here is template of sort_values according to pandas documentation. DataFrame.sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)[source] In this case it will be like this. df.sort_values(by=['2']) API Reference pandas.DataFrame.sort_values A: Just adding a few more insights df=raw_df['2'].sort_values() # will sort only one column (i.e 2) but , df =raw_df.sort_values(by=["2"] , ascending = False) # this will sort the whole df in decending order on the basis of the column "2" A: This one worked for me: df=df.sort_values(by=[2]) Whereas: df=df.sort_values(by=['2']) is not working. A: If you want to sort column dynamically but not alphabetically. and dont want to use pd.sort_values(). you can try below solution. Problem : sort column "col1" in this sequence ['A', 'C', 'D', 'B'] import pandas as pd import numpy as np ## Sample DataFrame ## df = pd.DataFrame({'col1': ['A', 'B', 'D', 'C', 'A']}) >>> df col1 0 A 1 B 2 D 3 C 4 A ## Solution ## conditions = [] values = [] for i,j in enumerate(['A','C','D','B']): conditions.append((df['col1'] == j)) values.append(i) df['col1_Num'] = np.select(conditions, values) df.sort_values(by='col1_Num',inplace = True) >>> df col1 col1_Num 0 A 0 4 A 0 3 C 1 2 D 2 1 B 3
how to sort pandas dataframe from one column
I have a data frame like this:
print(df)

        0          1     2
0   354.7      April   4.0
1    55.4     August   8.0
2   176.5   December  12.0
3    95.5   February   2.0
4    85.6    January   1.0
5     152       July   7.0
6   238.7       June   6.0
7   104.8      March   3.0
8   283.5        May   5.0
9   278.8   November  11.0
10  249.6    October  10.0
11  212.7  September   9.0

As you can see, the months are not in calendar order. So I created a second column to get the month number corresponding to each month (1-12). From there, how can I sort this data frame according to calendar month order?
[ "Use sort_values to sort the df by a specific column's values:\nIn [18]:\ndf.sort_values('2')\n\nOut[18]:\n 0 1 2\n4 85.6 January 1.0\n3 95.5 February 2.0\n7 104.8 March 3.0\n0 354.7 April 4.0\n8 283.5 May 5.0\n6 238.7 June 6.0\n5 152.0 July 7.0\n1 55.4 August 8.0\n11 212.7 September 9.0\n10 249.6 October 10.0\n9 278.8 November 11.0\n2 176.5 December 12.0\n\nIf you want to sort by two columns, pass a list of column labels to sort_values with the column labels ordered according to sort priority. If you use df.sort_values(['2', '0']), the result would be sorted by column 2 then column 0. Granted, this does not really make sense for this example because each value in df['2'] is unique.\n", "I tried the solutions above and I do not achieve results, so I found a different solution that works for me. The ascending=False is to order the dataframe in descending order, by default it is True. I am using python 3.6.6 and pandas 0.23.4 versions.\nfinal_df = df.sort_values(by=['2'], ascending=False)\n\nYou can see more details in pandas documentation here.\n", "Using column name worked for me.\nsorted_df = df.sort_values(by=['Column_name'], ascending=True)\n\n", "Panda's sort_values does the work.\nThere are various parameters one can pass, such as ascending (bool or list of bool):\n\nSort ascending vs. descending. Specify list for multiple sort orders. If this is a list of bools, must match the length of the by.\n\nAs the default is ascending, and OP's goal is to sort ascending, one doesn't need to specify that parameter (see the last note below for the way to solve descending), so one can use one of the following ways:\n\nPerforming the operation in-place, and keeping the same variable name. This requires one to pass inplace=True as follows:\ndf.sort_values(by=['2'], inplace=True)\n\n# or\n\ndf.sort_values(by = '2', inplace = True)\n\n# or\n\ndf.sort_values('2', inplace = True)\n\n\nIf doing the operation in-place is not a requirement, one can assign the change (sort) to a variable:\n\nWith the same name of the original dataframe, df as\ndf = df.sort_values(by=['2'])\n\n\nWith a different name, such as df_new, as\ndf_new = df.sort_values(by=['2'])\n\n\n\n\n\nAll this previous operations would give the following output\n 0 1 2\n4 85.6 January 1.0\n3 95.5 February 2.0\n7 104.8 March 3.0\n0 354.7 April 4.0\n8 283.5 May 5.0\n6 238.7 June 6.0\n5 152 July 7.0\n1 55.4 August 8.0\n11 212.7 September 9.0\n10 249.6 October 10.0\n9 278.8 November 11.0\n2 176.5 December 12.0\n\nFinally, one can reset the index with pandas.DataFrame.reset_index, to get the following\ndf.reset_index(drop = True, inplace = True)\n\n# or\n\ndf = df.reset_index(drop = True)\n\n[Out]:\n\n 0 1 2\n0 85.6 January 1.0\n1 95.5 February 2.0\n2 104.8 March 3.0\n3 354.7 April 4.0\n4 283.5 May 5.0\n5 238.7 June 6.0\n6 152 July 7.0\n7 55.4 August 8.0\n8 212.7 September 9.0\n9 249.6 October 10.0\n10 278.8 November 11.0\n11 176.5 December 12.0\n\nA one-liner that sorts ascending, and resets the index would be as follows\ndf = df.sort_values(by=['2']).reset_index(drop = True)\n\n[Out]:\n\n 0 1 2\n0 85.6 January 1.0\n1 95.5 February 2.0\n2 104.8 March 3.0\n3 354.7 April 4.0\n4 283.5 May 5.0\n5 238.7 June 6.0\n6 152 July 7.0\n7 55.4 August 8.0\n8 212.7 September 9.0\n9 249.6 October 10.0\n10 278.8 November 11.0\n11 176.5 December 12.0\n\n\nNotes:\n\nIf one is not doing the operation in-place, forgetting the steps mentioned above may lead one (as this user) to not be able to get the expected result.\n\nThere are strong opinions on using inplace. 
For that, one might want to read this.\n\nOne is assuming that the column 2 is not a string. If it is, one will have to convert it:\n\nUsing pandas.to_numeric\n df['2'] = pd.to_numeric(df['2'])\n\n\nUsing pandas.Series.astype\n df['2'] = df['2'].astype(float)\n\n\n\n\nIf one wants in descending order, one needs to pass ascending=False as\n df = df.sort_values(by=['2'], ascending=False)\n\n # or\n\n df.sort_values(by = '2', ascending=False, inplace=True)\n\n [Out]:\n\n 0 1 2\n2 176.5 December 12.0\n9 278.8 November 11.0\n10 249.6 October 10.0\n11 212.7 September 9.0\n1 55.4 August 8.0\n5 152 July 7.0\n6 238.7 June 6.0\n8 283.5 May 5.0\n0 354.7 April 4.0\n7 104.8 March 3.0\n3 95.5 February 2.0\n4 85.6 January 1.0\n\n\n\n", "Just as another solution:\nInstead of creating the second column, you can categorize your string data(month name) and sort by that like this:\ndf.rename(columns={1:'month'},inplace=True)\ndf['month'] = pd.Categorical(df['month'],categories=['December','November','October','September','August','July','June','May','April','March','February','January'],ordered=True)\ndf = df.sort_values('month',ascending=False)\n\nIt will give you the ordered data by month name as you specified while creating the Categorical object.\n", "Just adding some more operations on data. Suppose we have a dataframe df, we can do several operations to get desired outputs\nID cost tax label\n1 216590 1600 test \n2 523213 1800 test \n3 250 1500 experiment\n\n(df['label'].value_counts().to_frame().reset_index()).sort_values('label', ascending=False)\n\nwill give sorted output of labels as a dataframe\n index label\n0 test 2\n1 experiment 1\n\n", "This worked for me\ndf.sort_values(by='Column_name', inplace=True, ascending=False)\n\n", "You probably need to reset the index after sorting:\ndf = df.sort_values('2')\ndf = df.reset_index(drop=True)\n\n", "Here is template of sort_values according to pandas documentation.\nDataFrame.sort_values(by, axis=0,\n ascending=True,\n inplace=False,\n kind='quicksort',\n na_position='last',\n ignore_index=False, key=None)[source]\n\nIn this case it will be like this.\ndf.sort_values(by=['2'])\nAPI Reference pandas.DataFrame.sort_values\n", "Just adding a few more insights\ndf=raw_df['2'].sort_values() # will sort only one column (i.e 2)\n\nbut ,\ndf =raw_df.sort_values(by=[\"2\"] , ascending = False) # this will sort the whole df in decending order on the basis of the column \"2\"\n\n", "This one worked for me:\ndf=df.sort_values(by=[2])\n\nWhereas:\ndf=df.sort_values(by=['2']) \n\nis not working.\n", "If you want to sort column dynamically but not alphabetically.\nand dont want to use pd.sort_values().\nyou can try below solution.\nProblem : sort column \"col1\" in this sequence ['A', 'C', 'D', 'B']\nimport pandas as pd\nimport numpy as np\n\n## Sample DataFrame ##\ndf = pd.DataFrame({'col1': ['A', 'B', 'D', 'C', 'A']})\n\n>>> df\n col1\n0 A\n1 B\n2 D\n3 C\n4 A\n## Solution ##\n\nconditions = []\nvalues = []\n\nfor i,j in enumerate(['A','C','D','B']):\n conditions.append((df['col1'] == j))\n values.append(i)\n\ndf['col1_Num'] = np.select(conditions, values)\n\ndf.sort_values(by='col1_Num',inplace = True)\n\n>>> df\n\n col1 col1_Num\n0 A 0\n4 A 0\n3 C 1\n2 D 2\n1 B 3\n\n" ]
[ 690, 244, 58, 27, 25, 12, 9, 8, 6, 6, 1, 0 ]
[ "Example:\nAssume you have a column with values 1 and 0 and you want to separate and use only one value, then:\n// furniture is one of the columns in the csv file.\n \n\nallrooms = data.groupby('furniture')['furniture'].agg('count')\nallrooms\n\n\nmyrooms1 = pan.DataFrame(allrooms, columns = ['furniture'], index = [1])\n\nmyrooms2 = pan.DataFrame(allrooms, columns = ['furniture'], index = [0])\n\nprint(myrooms1);print(myrooms2)\n\n" ]
[ -1 ]
[ "dataframe", "pandas", "python", "sorting", "time" ]
stackoverflow_0037787698_dataframe_pandas_python_sorting_time.txt
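A runnable end-to-end sketch of the sort_values answer, assuming the columns are labelled with the strings '0', '1' and '2' as in the printout:
import pandas as pd

df = pd.DataFrame({
    "0": [354.7, 55.4, 176.5, 95.5, 85.6, 152, 238.7, 104.8, 283.5, 278.8, 249.6, 212.7],
    "1": ["April", "August", "December", "February", "January", "July",
          "June", "March", "May", "November", "October", "September"],
    "2": [4.0, 8.0, 12.0, 2.0, 1.0, 7.0, 6.0, 3.0, 5.0, 11.0, 10.0, 9.0],
})

# Sort by the month-number column and rebuild a clean 0..11 index.
df = df.sort_values("2").reset_index(drop=True)
print(df)   # rows now run January..December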
Q: Select TIMESTAMP(6) WITH TIME ZONE using Pandas, SQLAlchemy and cx_Oracle I am trying to use pandas to select some data from an Oracle database. The column in question has the data type TIMESTAMP(6) WITH TIME ZONE. I am in the same time zone as the database, but it contains data that is recorded from a different time zone. Oracle version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production Python 3.8.13 SQLAlchemy 1.4.39 cx_Oracle 8.3.0 In PL/SQL Developer, the query works: SELECT col FROM table Returns 18-JAN-21 09.54.58.000000000 PM ASIA/BANGKOK In Python, I get this error: import sqlalchemy import cx_Oracle server = server port = port sid = sid username = username password = password dsn_tns = cx_Oracle.makedsn(server, port, sid) cnxn = cx_oracle.connect(username, password, dsn_tns) query = """ SELECT col FROM table """ df = pd.read_sql_query(query, cnxn) Output: DatabaseError: ORA-01805: possible error in date/time operation After some SO searching, I tried this: query = """ SELECT CAST(TO_TIMESTAMP_TZ( col, 'DD-MMM-YY HH.MI.SS.FF6 TZH TZR') ) AT TIME ZONE 'ASIA/BANGKOK' AS col FROM table """ df = pd.read_sql_query(query, cnxn_tds_dev) Which returns a different error message: ORA-00905: missing keyword How can I just select this timestamp column (and several others) using Python/SQLAlchemy/cx_Oracle? Because the query works in PL/SQL Developer, I am assuming it is an issue with cx_Oracle. I will try creating a new Python environment with an older version of cx_Oracle, per this post. A: For the record, the code I mentioned in the original comment thread is: # create table t (c TIMESTAMP(6) WITH TIME ZONE); # insert into t (c) values (systimestamp); # commit; # # Name: pandas # Version: 1.5.2 # Name: SQLAlchemy # Version: 1.4.44 # Name: cx-Oracle # Version: 8.3.0 # # Output is like: # 0 2022-11-24 11:49:25.505773 import os import platform from sqlalchemy import create_engine import pandas as pd import cx_Oracle if platform.system() == "Darwin": cx_Oracle.init_oracle_client(lib_dir=os.environ.get("HOME")+"/Downloads/instantclient_19_8") username = os.environ.get("PYTHON_USERNAME") password = os.environ.get("PYTHON_PASSWORD") connect_string = os.environ.get("PYTHON_CONNECTSTRING") hostname, service_name = connect_string.split("/") engine = create_engine(f'oracle://{username}:{password}@{hostname}/?service_name={service_name}') query = """select * from t""" df = pd.read_sql_query(query, engine) print(df) A: One solution is to cast the problematic columns as strings, then convert in pandas. query = "SELECT TO_CHAR(col) AS col FROM table" df = pd.read_sql_query(query, cnxn) df[col] = df[col].apply(pd.to_datetime, format="%d-%b-%y %I.%M.%S.%f %p %Z")
Select TIMESTAMP(6) WITH TIME ZONE using Pandas, SQLAlchemy and cx_Oracle
I am trying to use pandas to select some data from an Oracle database. The column in question has the data type TIMESTAMP(6) WITH TIME ZONE. I am in the same time zone as the database, but it contains data that is recorded from a different time zone.
Oracle version: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
Python 3.8.13
SQLAlchemy 1.4.39
cx_Oracle 8.3.0

In PL/SQL Developer, the query works:
SELECT col FROM table

Returns:
18-JAN-21 09.54.58.000000000 PM ASIA/BANGKOK

In Python, I get this error:
import sqlalchemy
import cx_Oracle

server = server
port = port
sid = sid
username = username
password = password

dsn_tns = cx_Oracle.makedsn(server, port, sid)
cnxn = cx_Oracle.connect(username, password, dsn_tns)

query = """
SELECT col FROM table
"""
df = pd.read_sql_query(query, cnxn)

Output:
DatabaseError: ORA-01805: possible error in date/time operation

After some SO searching, I tried this:
query = """
SELECT CAST(TO_TIMESTAMP_TZ(
    col,
    'DD-MMM-YY HH.MI.SS.FF6 TZH TZR')
) AT TIME ZONE 'ASIA/BANGKOK' AS col
FROM table
"""
df = pd.read_sql_query(query, cnxn_tds_dev)

Which returns a different error message:
ORA-00905: missing keyword

How can I just select this timestamp column (and several others) using Python/SQLAlchemy/cx_Oracle? Because the query works in PL/SQL Developer, I am assuming it is an issue with cx_Oracle. I will try creating a new Python environment with an older version of cx_Oracle, per this post.
[ "For the record, the code I mentioned in the original comment thread is:\n# create table t (c TIMESTAMP(6) WITH TIME ZONE);\n# insert into t (c) values (systimestamp);\n# commit;\n#\n# Name: pandas\n# Version: 1.5.2\n# Name: SQLAlchemy\n# Version: 1.4.44\n# Name: cx-Oracle\n# Version: 8.3.0\n#\n# Output is like:\n# 0 2022-11-24 11:49:25.505773\n\nimport os\nimport platform\n\nfrom sqlalchemy import create_engine\nimport pandas as pd\n\nimport cx_Oracle\n\nif platform.system() == \"Darwin\":\n cx_Oracle.init_oracle_client(lib_dir=os.environ.get(\"HOME\")+\"/Downloads/instantclient_19_8\")\n\nusername = os.environ.get(\"PYTHON_USERNAME\")\npassword = os.environ.get(\"PYTHON_PASSWORD\")\nconnect_string = os.environ.get(\"PYTHON_CONNECTSTRING\")\nhostname, service_name = connect_string.split(\"/\")\n\nengine = create_engine(f'oracle://{username}:{password}@{hostname}/?service_name={service_name}')\n\nquery = \"\"\"select * from t\"\"\"\ndf = pd.read_sql_query(query, engine)\nprint(df)\n\n", "One solution is to cast the problematic columns as strings, then convert in pandas.\nquery = \"SELECT TO_CHAR(col) AS col FROM table\"\ndf = pd.read_sql_query(query, cnxn)\ndf[col] = df[col].apply(pd.to_datetime, format=\"%d-%b-%y %I.%M.%S.%f %p %Z\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "cx_oracle", "oracle", "pandas", "python", "sqlalchemy" ]
stackoverflow_0074554255_cx_oracle_oracle_pandas_python_sqlalchemy.txt
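A hedged sketch of the SQLAlchemy engine approach from the first answer; the credentials, host and service name below are placeholders:
import pandas as pd
from sqlalchemy import create_engine

# cx_Oracle dialect; connecting by service name as in the answer above.
engine = create_engine(
    "oracle+cx_oracle://user:password@dbhost:1521/?service_name=orclpdb1"
)
df = pd.read_sql_query("SELECT col FROM t", engine)
print(df)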
Q: subprocess.CalledProcessError: returned non-zero exit status 1, while os.system does not raise any error
Given the following command:
newman run tests.postman_collection.json -e environment.json --reporters testrail,json,html

Raises:
RuntimeError: command 'newman run tests.postman_collection.json -e environment.json --reporters testrail,json,html ' return with error (code 1): b'\nhttps://host.testrail.io/index.php?/runs/view/1234\n'

Py code that executes the command:
try:
    newmanCLI_output = subprocess.check_output(npmCLi, shell=True).decode().strip()
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))

And yes, I do use the check_output return. The output is a URL to TestRail reports.

A: That's a misfeature of os.system; it returns the exit code so you can examine it, but doesn't raise an error if something fails.
The check in subprocess.check_output means check that the command succeeded, or raise an exception otherwise. This is generally a good thing, as you don't want processes to die underneath you without a warning.
But you can work around it with subprocess.run if you want to disable it:
import shlex
result = subprocess.run(shlex.split(npmCLi), text=True, capture_output=True)
newmanCLI_output = result.stdout

The switch to avoid shell=True and use shlex.split to parse the string instead is not crucial, but hopefully demonstrates how to do these things properly.
You should still understand why exactly your command fails, and whether it is safe to ignore the failure.
subprocess.CalledProcessError: returned non-zero exit status 1, while os.system does not raise any error
Given the following command:
newman run tests.postman_collection.json -e environment.json --reporters testrail,json,html

Raises:
RuntimeError: command 'newman run tests.postman_collection.json -e environment.json --reporters testrail,json,html ' return with error (code 1): b'\nhttps://host.testrail.io/index.php?/runs/view/1234\n'

Py code that executes the command:
try:
    newmanCLI_output = subprocess.check_output(npmCLi, shell=True).decode().strip()
except subprocess.CalledProcessError as e:
    raise RuntimeError("command '{}' return with error (code {}): {}".format(e.cmd, e.returncode, e.output))

And yes, I do use the check_output return. The output is a URL to TestRail reports.
[ "That's a misfeature of os.system; it returns the exit code so you can examine it, but doesn't raise an error if something fails.\nThe check in subprocess.check_output means check that the command succeeded, or raise an exception otherwise. This is generally a good thing, as you don't want processes to die underneath you without a warning.\nBut you can work around it with subprocess.run if you want to disable it;\nimport shlex\nresult = subprocess.run(shlex.split(npmCLi), text=True, capture_output=True)\nnewmanCLI_output = result.stdout\n\nThe switch to avoid shell=True and use shlex.split to parse the string instead is not crucial, but hopefully demonstrates how to do these things properly.\nYou should still understand why exactly your command fails, and whether it is safe to ignore the failure.\n" ]
[ 2 ]
[]
[]
[ "newman", "python", "subprocess" ]
stackoverflow_0074562214_newman_python_subprocess.txt
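A sketch of the subprocess.run workaround with the question's newman command; check=False mirrors os.system in that a non-zero exit code is reported rather than raised:
import shlex
import subprocess

cmd = ("newman run tests.postman_collection.json "
       "-e environment.json --reporters testrail,json,html")

result = subprocess.run(shlex.split(cmd), text=True,
                        capture_output=True, check=False)
if result.returncode != 0:
    # Inspect instead of raising; the TestRail URL lands in stdout here.
    print(f"newman exited with {result.returncode}: {result.stdout.strip()}")
newman_output = result.stdout.strip()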
Q: 'type' object is not subscriptable python here are the functions I defined when I try to call them I get the error note that resultmatrix is a 4x4 2d numpy array ** the function is : import numpy as np def getValues(row,column,resultMatrix): a=resultMatrix[row][column] prefix='0x' a=prefix+a an_integer = int(a, 16) return an_integer mixMatrix=np.array([['00','00','00','00'], ['00','00','00','00'], ['00','00','00','00'], ['00','00','00','00']]) def mixColumns(a, b, c, d,column): v1=(gmul(a, 2) ^ gmul(b, 3) ^ gmul(c, 1) ^ gmul(d, 1)) v2=(gmul(a, 1) ^ gmul(b, 2) ^ gmul(c, 3) ^ gmul(d, 1)) v3=(gmul(a, 1) ^ gmul(b, 1) ^ gmul(c, 2) ^ gmul(d, 3)) v4=(gmul(a, 3) ^ gmul(b, 1) ^ gmul(c, 1) ^ gmul(d, 2)) v1=hex(v1); char0=v1[2];char1=v1[3];v1=str(char0+char1) mixMatrix[0][column]=v1 v2=hex(v2); char0=v2[2];char1=v2[3];v2=str(char0+char1) mixMatrix[1][column]=v2 v3=hex(v3); char0=v3[2];char1=v3[3];v3=str(char0+char1) mixMatrix[2][column]=v3 v4=hex(v4); char0=v4[2];char1=v4[3];v4=str(char0+char1) mixMatrix[3][column]=v4 return mixMatrix def gmul(a, b): if b == 1: return a tmp = (a << 1) & 0xff if b == 2: return tmp if a < 128 else tmp ^ 0x1b if b == 3: return gmul(a, 2) ^ a when I call as shown bellow I receive the error a=getValues(0,0,resultMatrix);b=getValues(1,0,resultMatrix);c=getValues(2,0,resultMatrix);d=getValues(3,0,resultMatrix);mixColumns(a, b, c, d,0) A: The issue seems to be from the mixColumns function you cast the v1, v2, v3 and v4 to hexadecimal then you extract the third and fourth character. However you can't get the fourth character when the value is below 15 since the hex value will be on only 3 character (0 => 0x0, 15 => 0xF, 16 => 0x10). If the idea is to get the hexadecimal value without the "0x" in front of it then you can simply use v1 = v1[2:] which mean take everything from the third character. If the goal is to take the first two hexadecimal character and ignoring the rest, then you need to check if there is enough character: char0 = v1[2] char1 = v1[3] if len(v1) >= 4 else "0" So after correcting this, you'll have a mixColumns function that look like def mixColumns(a, b, c, d,column): v1=(gmul(a, 2) ^ gmul(b, 3) ^ gmul(c, 1) ^ gmul(d, 1)) v2=(gmul(a, 1) ^ gmul(b, 2) ^ gmul(c, 3) ^ gmul(d, 1)) v3=(gmul(a, 1) ^ gmul(b, 1) ^ gmul(c, 2) ^ gmul(d, 3)) v4=(gmul(a, 3) ^ gmul(b, 1) ^ gmul(c, 1) ^ gmul(d, 2)) v1=hex(v1) v1=v1[2:] mixMatrix[0][column]=v1 v2=hex(v2) v2=v2[2:] mixMatrix[1][column]=v2 v3=hex(v3)[2:] #Same thing but shorter mixMatrix[2][column]=v3 v4=hex(v4)[2:] mixMatrix[3][column]=v4 return mixMatrix
'type' object is not subscriptable python
Here are the functions I defined; when I try to call them, I get the error. Note that resultMatrix is a 4x4 2D numpy array. The functions are:
import numpy as np

def getValues(row, column, resultMatrix):
    a = resultMatrix[row][column]
    prefix = '0x'
    a = prefix + a
    an_integer = int(a, 16)
    return an_integer

mixMatrix = np.array([['00','00','00','00'],
                      ['00','00','00','00'],
                      ['00','00','00','00'],
                      ['00','00','00','00']])

def mixColumns(a, b, c, d, column):
    v1 = (gmul(a, 2) ^ gmul(b, 3) ^ gmul(c, 1) ^ gmul(d, 1))
    v2 = (gmul(a, 1) ^ gmul(b, 2) ^ gmul(c, 3) ^ gmul(d, 1))
    v3 = (gmul(a, 1) ^ gmul(b, 1) ^ gmul(c, 2) ^ gmul(d, 3))
    v4 = (gmul(a, 3) ^ gmul(b, 1) ^ gmul(c, 1) ^ gmul(d, 2))
    v1 = hex(v1); char0 = v1[2]; char1 = v1[3]; v1 = str(char0 + char1)
    mixMatrix[0][column] = v1
    v2 = hex(v2); char0 = v2[2]; char1 = v2[3]; v2 = str(char0 + char1)
    mixMatrix[1][column] = v2
    v3 = hex(v3); char0 = v3[2]; char1 = v3[3]; v3 = str(char0 + char1)
    mixMatrix[2][column] = v3
    v4 = hex(v4); char0 = v4[2]; char1 = v4[3]; v4 = str(char0 + char1)
    mixMatrix[3][column] = v4
    return mixMatrix

def gmul(a, b):
    if b == 1:
        return a
    tmp = (a << 1) & 0xff
    if b == 2:
        return tmp if a < 128 else tmp ^ 0x1b
    if b == 3:
        return gmul(a, 2) ^ a

When I call them as shown below, I receive the error:
a = getValues(0, 0, resultMatrix); b = getValues(1, 0, resultMatrix); c = getValues(2, 0, resultMatrix); d = getValues(3, 0, resultMatrix); mixColumns(a, b, c, d, 0)
[ "The issue seems to be from the mixColumns function you cast the v1, v2, v3 and v4 to hexadecimal then you extract the third and fourth character.\nHowever you can't get the fourth character when the value is below 15 since the\nhex value will be on only 3 character (0 => 0x0, 15 => 0xF, 16 => 0x10).\nIf the idea is to get the hexadecimal value without the \"0x\" in front of it then you can simply use v1 = v1[2:] which mean take everything from the third character.\nIf the goal is to take the first two hexadecimal character and ignoring the rest, then you need to check if there is enough character:\nchar0 = v1[2]\nchar1 = v1[3] if len(v1) >= 4 else \"0\"\n\nSo after correcting this, you'll have a mixColumns function that look like\ndef mixColumns(a, b, c, d,column):\n v1=(gmul(a, 2) ^ gmul(b, 3) ^ gmul(c, 1) ^ gmul(d, 1))\n v2=(gmul(a, 1) ^ gmul(b, 2) ^ gmul(c, 3) ^ gmul(d, 1))\n v3=(gmul(a, 1) ^ gmul(b, 1) ^ gmul(c, 2) ^ gmul(d, 3))\n v4=(gmul(a, 3) ^ gmul(b, 1) ^ gmul(c, 1) ^ gmul(d, 2))\n \n v1=hex(v1)\n v1=v1[2:]\n mixMatrix[0][column]=v1\n\n v2=hex(v2)\n v2=v2[2:]\n mixMatrix[1][column]=v2\n\n v3=hex(v3)[2:] #Same thing but shorter\n mixMatrix[2][column]=v3\n\n v4=hex(v4)[2:]\n mixMatrix[3][column]=v4\n return mixMatrix\n\n" ]
[ 1 ]
[]
[]
[ "numpy", "python", "python_3.x" ]
stackoverflow_0074561441_numpy_python_python_3.x.txt
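A short demonstration of why v1[3] fails for small values, plus a zero-padded alternative to the slicing fix; byte_hex is my own helper name:
# hex() does not pad, so small values yield only three characters.
print(hex(9))     # '0x9'  -> indexing [3] raises IndexError
print(hex(200))   # '0xc8'

def byte_hex(value):
    # Always two lowercase hex digits, no '0x' prefix.
    return format(value & 0xFF, "02x")

print(byte_hex(9))     # '09'
print(byte_hex(200))   # 'c8'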
Q: My buildozer suddenly reject to turn my kivy app into android apk ;( I'm making a kivy app that works on android smartphone. It colaborates with sqlite3. But as I try to transport as a android apk using my buildozer, suddenly my buildozer denied to work. The Error message is this. [DEBUG]: -> running mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 [DEBUG]: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied Exception in thread background thread for pid 463: Traceback (most recent call last): File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/usr/lib/python3.10/threading.py", line 953, in run self._target(*self._args, **self._kwargs) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 1641, in wrap fn(*rgs, **kwargs) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 2569, in background_thread handle_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 2269, in fn return self.command.handle_command_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 869, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /usr/bin/mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 STDOUT: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied STDERR: Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1297, in <module> main() File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main ToolchainCL() File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 730, in __init__ getattr(self, command)(args) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 153, in wrapper_func build_dist_from_args(ctx, dist, args) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 212, in build_dist_from_args build_recipes(build_order, python_modules, ctx, File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 491, in build_recipes recipe.prepare_build_dir(arch.arch) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 587, in prepare_build_dir self.unpack(arch) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 461, in unpack shprint(sh.mv, root_directory, directory_name) File 
"/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint for line in output: File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 915, in next self.wait() File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 845, in wait self.handle_command_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 869, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /usr/bin/mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 STDOUT: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied STDERR: # Command failed: /usr/bin/python3 -m pythonforandroid.toolchain create --dist_name=LingoAdventure --bootstrap=sdl2 --requirements=python3,kivy --arch arm64-v8a --arch armeabi-v7a --copy-libs --color=always --storage-dir="/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a" --ndk-api=21 --ignore-setup-py --debug # ENVIRONMENT: # SHELL = '/bin/bash' # WSL_DISTRO_NAME = 'Ubuntu' # WT_SESSION = '3eb1ebcd-3650-44c7-983c-054f1eff6565' # NAME = 'LeeJE-Laptop' # PWD = '/mnt/c/KivyApk/Lingo_Chans' # LOGNAME = 'leejieung' # HOME = '/home/leejieung' # LANG = 'C.UTF-8' # WSL_INTEROP = '/run/WSL/43_interop' # LS_COLORS = 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:' # WAYLAND_DISPLAY = 'wayland-0' # LESSCLOSE = '/usr/bin/lesspipe %s %s' # TERM = 'xterm-256color' # LESSOPEN = '| /usr/bin/lesspipe %s' # USER = 'leejieung' # DISPLAY = ':0' # SHLVL = '1' # XDG_RUNTIME_DIR = '/mnt/wslg/runtime-dir' # WSLENV = 'WT_SESSION::WT_PROFILE_ID' # XDG_DATA_DIRS = '/usr/local/share:/usr/share:/var/lib/snapd/desktop' # PATH = 
('/home/leejieung/.buildozer/android/platform/apache-ant-1.9.4/bin:/home/leejieung/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program ' 'Files (x86)/Common ' 'Files/Oracle/Java/javapath:/mnt/c/windows/system32:/mnt/c/windows:/mnt/c/windows/System32/Wbem:/mnt/c/windows/System32/WindowsPowerShell/v1.0/:/mnt/c/windows/System32/OpenSSH/:/mnt/c/Program ' 'Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/Program Files/NVIDIA ' 'Corporation/NVIDIA NvDLISR:/mnt/c/Program Files/MySQL/MySQL Server ' '8.0/bin:/mnt/c/Program Files/PowerShell/7/:/mnt/c/Program ' 'Files/Docker/Docker/resources/bin:/mnt/c/Users/lje64/AppData/Local/Programs/Python/Python310/Scripts/:/mnt/c/Users/lje64/AppData/Local/Programs/Python/Python310/:/mnt/c/Users/lje64/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/lje64/AppData/Local/Programs/Microsoft ' 'VS Code/bin:/snap/bin') # HOSTTYPE = 'x86_64' # PULSE_SERVER = '/mnt/wslg/PulseServer' # WT_PROFILE_ID = '{61c54bbd-c2c6-5271-96e7-009a87ff44bf}' # _ = '/home/leejieung/.local/bin/buildozer' # PACKAGES_PATH = '/home/leejieung/.buildozer/android/packages' # ANDROIDSDK = '/home/leejieung/.buildozer/android/platform/android-sdk' # ANDROIDNDK = '/home/leejieung/.buildozer/android/platform/android-ndk-r23b' # ANDROIDAPI = '27' # ANDROIDMINAPI = '21' # # Buildozer failed to execute the last command # The error might be hidden in the log above this error # Please read the full log, and search for it before # raising an issue with buildozer itself. # In case of a bug report, please add a full log with log_level = 2 What's the problem? Please tell me the solution as quick as possible... I searched into whole internet but I cannot find any clues A: Two ideas: Try to run buildozer with sudo Change your project directory to your /usr/ location. Maybe the permission error is a result of your /mnt/c/ folder. If you want to publish the app in the appstore you have to raise the API to 31. A: Problem solved. The ultimal-fundamental problem was "A Vaccine program's prohibition"! I have 'AhnLab V3' vaccine program and it disturbed my buildozer from working. Anyway thank you
My buildozer suddenly refuses to turn my kivy app into an android apk ;(
I'm making a kivy app that works on android smartphone. It colaborates with sqlite3. But as I try to transport as a android apk using my buildozer, suddenly my buildozer denied to work. The Error message is this. [DEBUG]: -> running mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 [DEBUG]: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied Exception in thread background thread for pid 463: Traceback (most recent call last): File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner self.run() File "/usr/lib/python3.10/threading.py", line 953, in run self._target(*self._args, **self._kwargs) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 1641, in wrap fn(*rgs, **kwargs) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 2569, in background_thread handle_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 2269, in fn return self.command.handle_command_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 869, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /usr/bin/mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 STDOUT: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied STDERR: Traceback (most recent call last): File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1297, in <module> main() File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main ToolchainCL() File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 730, in __init__ getattr(self, command)(args) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 153, in wrapper_func build_dist_from_args(ctx, dist, args) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 212, in build_dist_from_args build_recipes(build_order, python_modules, ctx, File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 491, in build_recipes recipe.prepare_build_dir(arch.arch) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 587, in prepare_build_dir self.unpack(arch) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 461, in unpack shprint(sh.mv, root_directory, directory_name) File "/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint for line in output: 
File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 915, in next self.wait() File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 845, in wait self.handle_command_exit_code(exit_code) File "/home/leejieung/.local/lib/python3.10/site-packages/sh.py", line 869, in handle_command_exit_code raise exc sh.ErrorReturnCode_1: RAN: /usr/bin/mv sqlite-amalgamation-3350500 /mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3 STDOUT: /usr/bin/mv: cannot move 'sqlite-amalgamation-3350500' to '/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a/build/other_builds/sqlite3/armeabi-v7a__ndk_target_21/sqlite3': Permission denied STDERR: # Command failed: /usr/bin/python3 -m pythonforandroid.toolchain create --dist_name=LingoAdventure --bootstrap=sdl2 --requirements=python3,kivy --arch arm64-v8a --arch armeabi-v7a --copy-libs --color=always --storage-dir="/mnt/c/KivyApk/Lingo_Chans/.buildozer/android/platform/build-arm64-v8a_armeabi-v7a" --ndk-api=21 --ignore-setup-py --debug # ENVIRONMENT: # SHELL = '/bin/bash' # WSL_DISTRO_NAME = 'Ubuntu' # WT_SESSION = '3eb1ebcd-3650-44c7-983c-054f1eff6565' # NAME = 'LeeJE-Laptop' # PWD = '/mnt/c/KivyApk/Lingo_Chans' # LOGNAME = 'leejieung' # HOME = '/home/leejieung' # LANG = 'C.UTF-8' # WSL_INTEROP = '/run/WSL/43_interop' # LS_COLORS = 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:' # WAYLAND_DISPLAY = 'wayland-0' # LESSCLOSE = '/usr/bin/lesspipe %s %s' # TERM = 'xterm-256color' # LESSOPEN = '| /usr/bin/lesspipe %s' # USER = 'leejieung' # DISPLAY = ':0' # SHLVL = '1' # XDG_RUNTIME_DIR = '/mnt/wslg/runtime-dir' # WSLENV = 'WT_SESSION::WT_PROFILE_ID' # XDG_DATA_DIRS = '/usr/local/share:/usr/share:/var/lib/snapd/desktop' # PATH = ('/home/leejieung/.buildozer/android/platform/apache-ant-1.9.4/bin:/home/leejieung/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/lib/wsl/lib:/mnt/c/Program ' 'Files (x86)/Common ' 
'Files/Oracle/Java/javapath:/mnt/c/windows/system32:/mnt/c/windows:/mnt/c/windows/System32/Wbem:/mnt/c/windows/System32/WindowsPowerShell/v1.0/:/mnt/c/windows/System32/OpenSSH/:/mnt/c/Program ' 'Files (x86)/NVIDIA Corporation/PhysX/Common:/mnt/c/Program Files/NVIDIA ' 'Corporation/NVIDIA NvDLISR:/mnt/c/Program Files/MySQL/MySQL Server ' '8.0/bin:/mnt/c/Program Files/PowerShell/7/:/mnt/c/Program ' 'Files/Docker/Docker/resources/bin:/mnt/c/Users/lje64/AppData/Local/Programs/Python/Python310/Scripts/:/mnt/c/Users/lje64/AppData/Local/Programs/Python/Python310/:/mnt/c/Users/lje64/AppData/Local/Microsoft/WindowsApps:/mnt/c/Users/lje64/AppData/Local/Programs/Microsoft ' 'VS Code/bin:/snap/bin') # HOSTTYPE = 'x86_64' # PULSE_SERVER = '/mnt/wslg/PulseServer' # WT_PROFILE_ID = '{61c54bbd-c2c6-5271-96e7-009a87ff44bf}' # _ = '/home/leejieung/.local/bin/buildozer' # PACKAGES_PATH = '/home/leejieung/.buildozer/android/packages' # ANDROIDSDK = '/home/leejieung/.buildozer/android/platform/android-sdk' # ANDROIDNDK = '/home/leejieung/.buildozer/android/platform/android-ndk-r23b' # ANDROIDAPI = '27' # ANDROIDMINAPI = '21' # # Buildozer failed to execute the last command # The error might be hidden in the log above this error # Please read the full log, and search for it before # raising an issue with buildozer itself. # In case of a bug report, please add a full log with log_level = 2 What's the problem? Please tell me the solution as quick as possible... I searched into whole internet but I cannot find any clues
[ "Two ideas:\n\nTry to run buildozer with sudo\nChange your project directory to your /usr/ location. Maybe the permission error is a result of your /mnt/c/ folder.\n\nIf you want to publish the app in the appstore you have to raise the API to 31.\n", "Problem solved. The ultimal-fundamental problem was \"A Vaccine program's prohibition\"! I have 'AhnLab V3' vaccine program and it disturbed my buildozer from working. Anyway thank you\n" ]
[ 0, 0 ]
[]
[]
[ "buildozer", "kivy", "python" ]
stackoverflow_0074559753_buildozer_kivy_python.txt
Q: How to only get datetime without hours minutes and seconds python pandas
I am doing a forecast with FBProphet, and suddenly, when I do the forecast, only the forecasted dates (ds) are being displayed with hours, minutes and seconds. See pictures for more information. Any ideas on how to fix that?

A: use:
df['ds']=pd.to_datetime(df['ds'])
df['ds']=df['ds'].dt.date
How to only get datetime without hours minutes and seconds python pandas
I am doing a forecast with FBProphet, and suddenly, when I do the forecast, only the forecasted dates (ds) are being displayed with hours, minutes and seconds. See pictures for more information. Any ideas on how to fix that?
[ "use:\ndf['ds']=pd.to_datetime(df['ds'])\ndf['ds']=df['ds'].dt.date\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "forecast", "pandas", "python" ]
stackoverflow_0074562594_datetime_forecast_pandas_python.txt
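A minimal sketch with made-up data showing the .dt.date conversion on a Prophet-style ds column:
import pandas as pd

df = pd.DataFrame({"ds": ["2021-01-18 21:54:58", "2021-01-19 21:54:58"]})
df["ds"] = pd.to_datetime(df["ds"]).dt.date   # drop the time component
print(df)
#            ds
# 0  2021-01-18
# 1  2021-01-19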
Q: How to Pass Arguments (EntryPointArguments) in spark JOB using EMR Serverless? **I'm trying to pass some arguments to run my pyspark script by the parameter of boto3 (emr-serverless client) EntryPointArguments, however, it doesn't work at all, I would like to know if I'm doing it the right way. ** **my python code is like this:** ` import argparse parser = argparse.ArgumentParser() parser.add_argument('-env', nargs='?', metavar='Environment', type=str, help='String: Environment to run. Options: [dev, prd]', choices=['dev', 'prd'], required=True, default="prd") # Capture args args = parser.parse_args() env = args.env print(f"HELLO WOLRD FROM {env}")` **and my script that runs emr-serverless looks like this:** jobDriver={ "sparkSubmit": { "entryPoint": "s3://example-bucket-us-east-1-codes-prd/hello_world.py", "entryPointArguments": ["-env prd"], "sparkSubmitParameters": "--conf spark.executor.cores=2 \ --conf spark.executor.memory=4g \ --conf spark.driver.cores=2 \ --conf spark.driver.memory=8g \ --conf spark.executor.instances=1 \ --conf spark.dynamicAllocation.maxExecutors=12 \ ", } **I've already tried putting single quotes, double quotes, I've tried to pass along these parameters in the "sparkSubmitParameters" and so far, nothing works, there aren't many examples of how to do this on the internet, so my hope is that someone has already done it, and achieved, thank you!** A: I was testing it out, and I ended up figuring out how to do this. From what I understand, when it's a param like this: -env prd you have to pass in the EntryPointArguments like this: ["-env", "prd"] separating the arg, then passing the value, each one separately.
How to Pass Arguments (EntryPointArguments) in spark JOB using EMR Serverless?
I'm trying to pass some arguments to run my pyspark script via the entryPointArguments parameter of the boto3 emr-serverless client; however, it doesn't work at all, and I would like to know if I'm doing it the right way.

My python code is like this:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-env', nargs='?', metavar='Environment', type=str,
                    help='String: Environment to run. Options: [dev, prd]',
                    choices=['dev', 'prd'], required=True, default="prd")

# Capture args
args = parser.parse_args()
env = args.env

print(f"HELLO WORLD FROM {env}")

And my script that runs emr-serverless looks like this:
jobDriver={
    "sparkSubmit": {
        "entryPoint": "s3://example-bucket-us-east-1-codes-prd/hello_world.py",
        "entryPointArguments": ["-env prd"],
        "sparkSubmitParameters": "--conf spark.executor.cores=2 \
            --conf spark.executor.memory=4g \
            --conf spark.driver.cores=2 \
            --conf spark.driver.memory=8g \
            --conf spark.executor.instances=1 \
            --conf spark.dynamicAllocation.maxExecutors=12 \
            ",
    }
}

I've already tried putting single quotes and double quotes, and I've tried passing these parameters along in "sparkSubmitParameters"; so far, nothing works. There aren't many examples of how to do this on the internet, so my hope is that someone has already done it. Thank you!
[ 0 ]
[]
[]
[ "amazon_web_services", "apache_spark", "emr_serverless", "pyspark", "python" ]
stackoverflow_0074562238_amazon_web_services_apache_spark_emr_serverless_pyspark_python.txt
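A sketch of the corrected job driver: the flag and its value are separate items in entryPointArguments, as the answer describes; the bucket and script names are the question's examples:
job_driver = {
    "sparkSubmit": {
        "entryPoint": "s3://example-bucket-us-east-1-codes-prd/hello_world.py",
        # One list item per token: the flag, then its value.
        "entryPointArguments": ["-env", "prd"],
        "sparkSubmitParameters": (
            "--conf spark.executor.cores=2 "
            "--conf spark.executor.memory=4g"
        ),
    }
}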
Q: change only numeric values to binary in dataframe
I would like to change the values in my dataframe into binary, given df:

summary  word1  word2
xyz      0      56
abc      32     0
..       ..     ..

I would like to convert ONLY NUMERIC values to binary, meaning: if the value in word1/2 etc. is greater than 0 -> 1, and when it's 0 it stays 0.

category   summary  word1  word2
category1  xyz      0      1
category2  abc      1      0
..         ..       ..     ..

A: Check if the values in your columns 'word' are greater than 0 and convert to int:
(df[['word1','word2']] > 0)

   word1  word2
0  False   True
1   True  False

(df[['word1','word2']] > 0).astype(int)

   word1  word2
0      0      1
1      1      0

And assign back:
df[['word1','word2']] = (df[['word1','word2']] > 0).astype(int)
change only numeric values to binary in dataframe
I would like to change the values in my dataframe into binary, given df:

summary  word1  word2
xyz      0      56
abc      32     0
..       ..     ..

I would like to convert ONLY NUMERIC values to binary, meaning: if the value in word1/2 etc. is greater than 0 -> 1, and when it's 0 it stays 0.

category   summary  word1  word2
category1  xyz      0      1
category2  abc      1      0
..         ..       ..     ..
[ "Check if the values in your columns 'word' are greater than 0 and convert to int\n(df[['word1','word2']] > 0)\n\n word1 word2\n0 False True\n1 True False\n\n(df[['word1','word2']] > 0).astype(int)\n\n word1 word2\n0 0 1\n1 1 0\n\nAnd assign back:\ndf[['word1','word2']] = (df[['word1','word2']] > 0).astype(int)\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "python" ]
stackoverflow_0074562339_dataframe_python.txt
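A sketch that generalizes the answer to every numeric column via select_dtypes, so the word columns don't have to be listed by hand:
import pandas as pd

df = pd.DataFrame({"summary": ["xyz", "abc"],
                   "word1": [0, 32],
                   "word2": [56, 0]})

# Apply the >0 test only to numeric columns; 'summary' is left untouched.
num_cols = df.select_dtypes("number").columns
df[num_cols] = (df[num_cols] > 0).astype(int)
print(df)
#   summary  word1  word2
# 0     xyz      0      1
# 1     abc      1      0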
Q: how to declare dynamic variables in python with a test
I know it's possible to declare dynamic variables using this method:
for x in range(0, 7):
    globals()[f"variable1{x}"] = x

What I want to do is something like:
ls = [1,31,42,56, ...]
for x in range(0, len(ls)):
    globals()[f"variable{x}"] = 10*x if ls[x] % 2 == 0 else globals()[f"variable{x}"] = 11*x

The code is random because I tried to make it as simple as possible, apologies. What I am actually trying to do is:
for i in range(0,4):
    globals()[f"numb{i}"] = tk.Label(root, text=(best[i] + "\n" + dates[0]), width=10, height=3, bg="#F4F5B7") if len(best) >= i else tk.Label(root, text=('UNDEFINED'), width=10, height=3, bg="#F4F5B7")
    numb[i].grid(row=1, column=(i+5))

Where best[] is user-inputted.

A: You try to assign the value in both the if and the else. The "x if condition else y" expression is itself a value, so you assign it once; you don't assign inside the if and else parts. So you should use:
ls = [1,31,42,56, ...]
for x in range(0, len(ls)):
    globals()[f"variable{x}"] = 10*x if ls[x] % 2 == 0 else 11*x

Please also note that in 9 out of 10 cases, dynamic variable allocation is a bad practice; in most situations it can be replaced with a dictionary like:
ls = [1,31,42,56, ...]
values = {}
for x in range(0, len(ls)):
    values[f"variable{x}"] = 10*x if ls[x] % 2 == 0 else 11*x

# Then later access the values this way
print(values["variable1"])  # prints 11*1 = 11, since ls[1] == 31 is odd
how to declare dynamic variables in python with a test
I know it's possible to declare dynamic variables using this method:
for x in range(0, 7):
    globals()[f"variable1{x}"] = x

What I want to do is something like:
ls = [1,31,42,56, ...]
for x in range(0, len(ls)):
    globals()[f"variable{x}"] = 10*x if ls[x] %2 == 0 else globals()[f"variable{x}"] = 11*x

The code is random because I tried to make it as simple as possible; apologies.
What I am actually trying to do is:
for i in range(0,4):
    globals()[f"numb{i}"] = tk.Label(root, text=(best[i] +"\n"+dates[0]),width=10,height=3, bg="#F4F5B7") if len(best) >= i else tk.Label(root, text=('UNDEFINED'),width=10,height=3, bg="#F4F5B7")
    numb[i].grid(row=1,column=(i+5))

where best[] is input by the user.
[ "You try to assign the value in the if and the else.\nThe \"x if condition else y\" is by itself a value so you can assign it to something you don't have to assign in the if and else part. So you should use :\nls = [1,31,42,56, ...]\nfor x in range(0, len(ls)):\n globals()[f\"variable{x}\"] = 10*x if ls[x] % 2 == 0 else 11*x\n\nPlease also note that in 9 out of 10 cases, dynamic variable allocation is a bad practice, in most situation it can be replaced with a dictionary like :\nls = [1,31,42,56, ...]\nvalues = {}\nfor x in range(0, len(ls)):\n values[f\"variable{x}\"] = 10*x if ls[x] % 2 == 0 else 11*x\n\n#Then later access the values this way\nprint(values[\"variable31\"]) #print whatever is 11**31\n\n" ]
[ 1 ]
[]
[]
[ "dynamic", "python", "tkinter", "variable_assignment" ]
stackoverflow_0074562602_dynamic_python_tkinter_variable_assignment.txt
Q: How to add a value and its index to a series? I googled, and most of the answers are about adding a value to a series without updating the index. Here is my series, with a date string as its index, like this:
2022-01-01 1
2022-01-02 7
2022-01-03 3

Now I'd like to add a new value of 10 to this series with the new index being the date string 2022-01-04, so the series becomes:
2022-01-01 1
2022-01-02 7
2022-01-03 3
2022-01-04 10

How do I do it? Thanks
A: Just use the index value as a subscript, for example:
>>> aa = pd.Series({"foo": 1})
>>> aa
foo 1
dtype: int64
>>> aa["bar"] = 2
>>> aa
foo 1
bar 2
dtype: int64

A: Is it not just something like:
new_row = pd.Series(new_value, index=[index_of_new_value])
series = pd.concat([series, new_row])

I may have misunderstood your question.
A: Assuming your series is called ser, here is a proposition with pandas.DatetimeIndex and pandas.Series.reindex:
idx = pd.date_range('01-01-2022', '01-04-2022')
ser.index = pd.DatetimeIndex(ser.index)
ser = ser.reindex(idx, fill_value=10)

# Output:
print(ser)

2022-01-01 1
2022-01-02 7
2022-01-03 3
2022-01-04 10
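Another common idiom worth knowing, as a minimal sketch: assigning through .loc with a label that does not exist yet enlarges the series in place:

    import pandas as pd

    ser = pd.Series([1, 7, 3],
                    index=["2022-01-01", "2022-01-02", "2022-01-03"])

    # a new label on the left-hand side appends the value to the series
    ser.loc["2022-01-04"] = 10
    print(ser)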
How to add a value and its index to a series?
I googled, and most of the answers are about adding a value to a series without updating the index. Here is my series, with a date string as its index, like this:
2022-01-01 1
2022-01-02 7
2022-01-03 3

Now I'd like to add a new value of 10 to this series with the new index being the date string 2022-01-04, so the series becomes:
2022-01-01 1
2022-01-02 7
2022-01-03 3
2022-01-04 10

How do I do it? Thanks
[ "Just use the index value as a subscript, for example:\n>>> aa = pd.Series({\"foo\": 1})\n>>> aa\nfoo 1\ndtype: int64\n>>> aa[\"bar\"] = 2\n>>> aa\nfoo 1\nbar 2\ndtype: int64\n\n", "Is it not just something like:\nnew_row = pd.Series(new_value, index=[index_of_new_value])\nseries = pd.concat([series, new_row])\n\nI may have misunderstood your question.\n", "Assuming your series is called ser, here is a proposition with pandas.DatetimeIndex and pandas.Series.reindex :\nidx = pd.date_range('01-01-2022', '01-04-2022')\nser.index = pd.DatetimeIndex(ser.index)\nser = ser.reindex(idx, fill_value=10)\n\n# Output :\nprint(ser)\n\n2022-01-01 1\n2022-01-02 7\n2022-01-03 3\n2022-01-04 10\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074562556_pandas_python.txt
Q: Decorators with parameters? I have a problem with the transfer of the variable insurance_mode by the decorator. I would do it by the following decorator statement:
@execute_complete_reservation(True)
def test_booking_gta_object(self):
    self.test_select_gta_object()

but unfortunately, this statement does not work. Perhaps there is a better way to solve this problem.
def execute_complete_reservation(test_case,insurance_mode):
    def inner_function(self,*args,**kwargs):
        self.test_create_qsf_query()
        test_case(self,*args,**kwargs)
        self.test_select_room_option()
        if insurance_mode:
            self.test_accept_insurance_crosseling()
        else:
            self.test_decline_insurance_crosseling()
        self.test_configure_pax_details()
        self.test_configure_payer_details

    return inner_function

A: The syntax for decorators with arguments is a bit different - the decorator with arguments should return a function that will take a function and return another function. So it should really return a normal decorator. A bit confusing, right? What I mean is:
def decorator_factory(argument):
    def decorator(function):
        def wrapper(*args, **kwargs):
            funny_stuff()
            something_with_argument(argument)
            result = function(*args, **kwargs)
            more_funny_stuff()
            return result
        return wrapper
    return decorator

Here you can read more on the subject - it's also possible to implement this using callable objects and that is also explained there.
A: Edit: for an in-depth understanding of the mental model of decorators, take a look at this awesome Pycon Talk. Well worth the 30 minutes.
One way of thinking about decorators with arguments is
@decorator
def foo(*args, **kwargs):
    pass

translates to
foo = decorator(foo)

So if the decorator had arguments,
@decorator_with_args(arg)
def foo(*args, **kwargs):
    pass

translates to
foo = decorator_with_args(arg)(foo)

decorator_with_args is a function which accepts a custom argument and which returns the actual decorator (that will be applied to the decorated function).
I use a simple trick with partials to make my decorators easy
from functools import partial

def _pseudo_decor(fun, argument):
    def ret_fun(*args, **kwargs):
        # do stuff here, for eg.
        print("decorator arg is %s" % str(argument))
        return fun(*args, **kwargs)
    return ret_fun

real_decorator = partial(_pseudo_decor, argument=arg)

@real_decorator
def foo(*args, **kwargs):
    pass

Update:
Above, foo becomes real_decorator(foo)
One effect of decorating a function is that the name foo is overridden upon decorator declaration. foo is "overridden" by whatever is returned by real_decorator. In this case, a new function object.
All of foo's metadata is overridden, notably docstring and function name.
>>> print(foo)
<function _pseudo_decor.<locals>.ret_fun at 0x10666a2f0>

functools.wraps gives us a convenient method to "lift" the docstring and name to the returned function.
from functools import partial, wraps

def _pseudo_decor(fun, argument):
    # magic sauce to lift the name and doc of the function
    @wraps(fun)
    def ret_fun(*args, **kwargs):
        # pre function execution stuff here, for eg.
        print("decorator argument is %s" % str(argument))
        returned_value = fun(*args, **kwargs)
        # post execution stuff here, for eg.
print("returned value is %s" % returned_value) return returned_value return ret_fun real_decorator1 = partial(_pseudo_decor, argument="some_arg") real_decorator2 = partial(_pseudo_decor, argument="some_other_arg") @real_decorator1 def bar(*args, **kwargs): pass >>> print(bar) <function __main__.bar(*args, **kwargs)> >>> bar(1,2,3, k="v", x="z") decorator argument is some_arg returned value is None A: Here is a slightly modified version of t.dubrownik's answer. Why? As a general template, you should return the return value from the original function. This changes the name of the function, which could affect other decorators / code. So use @functools.wraps(): from functools import wraps def create_decorator(argument): def decorator(function): @wraps(function) def wrapper(*args, **kwargs): funny_stuff() something_with_argument(argument) retval = function(*args, **kwargs) more_funny_stuff() return retval return wrapper return decorator A: I'd like to show an idea which is IMHO quite elegant. The solution proposed by t.dubrownik shows a pattern which is always the same: you need the three-layered wrapper regardless of what the decorator does. So I thought this is a job for a meta-decorator, that is, a decorator for decorators. As a decorator is a function, it actually works as a regular decorator with arguments: def parametrized(dec): def layer(*args, **kwargs): def repl(f): return dec(f, *args, **kwargs) return repl return layer This can be applied to a regular decorator in order to add parameters. So for instance, say we have the decorator which doubles the result of a function: def double(f): def aux(*xs, **kws): return 2 * f(*xs, **kws) return aux @double def function(a): return 10 + a print function(3) # Prints 26, namely 2 * (10 + 3) With @parametrized we can build a generic @multiply decorator having a parameter @parametrized def multiply(f, n): def aux(*xs, **kws): return n * f(*xs, **kws) return aux @multiply(2) def function(a): return 10 + a print function(3) # Prints 26 @multiply(3) def function_again(a): return 10 + a print function(3) # Keeps printing 26 print function_again(3) # Prints 39, namely 3 * (10 + 3) Conventionally the first parameter of a parametrized decorator is the function, while the remaining arguments will correspond to the parameter of the parametrized decorator. An interesting usage example could be a type-safe assertive decorator: import itertools as it @parametrized def types(f, *types): def rep(*args): for a, t, n in zip(args, types, it.count()): if type(a) is not t: raise TypeError('Value %d has not type %s. %s instead' % (n, t, type(a)) ) return f(*args) return rep @types(str, int) # arg1 is str, arg2 is int def string_multiply(text, times): return text * times print(string_multiply('hello', 3)) # Prints hellohellohello print(string_multiply(3, 3)) # Fails miserably with TypeError A final note: here I'm not using functools.wraps for the wrapper functions, but I would recommend using it all the times. A: I presume your problem is passing arguments to your decorator. This is a little tricky and not straightforward. 
Here's an example of how to do this:
class MyDec(object):
    def __init__(self, flag):
        self.flag = flag
    def __call__(self, original_func):
        decorator_self = self
        def wrappee(*args, **kwargs):
            print 'in decorator before wrapee with flag ', decorator_self.flag
            original_func(*args, **kwargs)
            print 'in decorator after wrapee with flag ', decorator_self.flag
        return wrappee

@MyDec('foo de fa fa')
def bar(a, b, c):
    print 'in bar', a, b, c

bar('x', 'y', 'z')

Prints:
in decorator before wrapee with flag foo de fa fa
in bar x y z
in decorator after wrapee with flag foo de fa fa

See Bruce Eckel's article for more details.
A: Writing a decorator that works both with and without parameters is a challenge because Python expects completely different behavior in these two cases! Many answers have tried to work around this, and below is an improvement of the answer by @norok2. Specifically, this variation eliminates the use of locals().
Following the same example as given by @norok2:
import functools

def multiplying(f_py=None, factor=1):
    assert callable(f_py) or f_py is None
    def _decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return factor * func(*args, **kwargs)
        return wrapper
    return _decorator(f_py) if callable(f_py) else _decorator


@multiplying
def summing(x): return sum(x)

print(summing(range(10)))
# 45


@multiplying()
def summing(x): return sum(x)

print(summing(range(10)))
# 45


@multiplying(factor=10)
def summing(x): return sum(x)

print(summing(range(10)))
# 450

Play with this code.
The catch is that the user must supply parameters as key=value pairs instead of positional parameters, and the first parameter is reserved.
A: def decorator(argument):
    def real_decorator(function):
        def wrapper(*args):
            for arg in args:
                assert type(arg)==int, f'{arg} is not an integer'
            result = function(*args)
            result = result*argument
            return result
        return wrapper
    return real_decorator

Usage of the decorator
@decorator(2)
def adder(*args):
    sum=0
    for i in args:
        sum+=i
    return sum

Then
adder(2,3)

produces
10

but 
adder('hi',3)

produces
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-143-242a8feb1cc4> in <module>
----> 1 adder('hi',3)

<ipython-input-140-d3420c248ebd> in wrapper(*args)
      3     def wrapper(*args):
      4         for arg in args:
----> 5             assert type(arg)==int, f'{arg} is not an integer'
      6         result = function(*args)
      7         result = result*argument

AssertionError: hi is not an integer

A: This is a template for a function decorator that does not require () if no parameters are to be given and supports both positional and keyword parameters (but requires checking on locals() to find out if the first parameter is the function to be decorated or not):
import functools


def decorator(x_or_func=None, *decorator_args, **decorator_kws):
    def _decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kws):
            if 'x_or_func' not in locals() \
                    or callable(x_or_func) \
                    or x_or_func is None:
                x = ... 
# <-- default `x` value else: x = x_or_func return func(*args, **kws) return wrapper return _decorator(x_or_func) if callable(x_or_func) else _decorator an example of this is given below: def multiplying(factor_or_func=None): def _decorator(func): @functools.wraps(func) def wrapper(*args, **kwargs): if 'factor_or_func' not in locals() \ or callable(factor_or_func) \ or factor_or_func is None: factor = 1 else: factor = factor_or_func return factor * func(*args, **kwargs) return wrapper return _decorator(factor_or_func) if callable(factor_or_func) else _decorator @multiplying def summing(x): return sum(x) print(summing(range(10))) # 45 @multiplying() def summing(x): return sum(x) print(summing(range(10))) # 45 @multiplying(10) def summing(x): return sum(x) print(summing(range(10))) # 450 Alternatively, if one does not need positional arguments, one can relax the need for checking on the first parameter within the wrapper() (thus removing the need to use locals()): import functools def decorator(func_=None, **decorator_kws): def _decorator(func): @functools.wraps(func) def wrapper(*args, **kws): return func(*args, **kws) return wrapper if callable(func_): return _decorator(func_) elif func_ is None: return _decorator else: raise RuntimeWarning("Positional arguments are not supported.") an example of this is given below: import functools def multiplying(func_=None, factor=1): def _decorator(func): @functools.wraps(func) def wrapper(*args, **kwargs): return factor * func(*args, **kwargs) return wrapper if callable(func_): return _decorator(func_) elif func_ is None: return _decorator else: raise RuntimeWarning("Positional arguments are not supported.") @multiplying def summing(x): return sum(x) print(summing(range(10))) # 45 @multiplying() def summing(x): return sum(x) print(summing(range(10))) # 45 @multiplying(factor=10) def summing(x): return sum(x) print(summing(range(10))) # 450 @multiplying(10) def summing(x): return sum(x) print(summing(range(10))) # RuntimeWarning Traceback (most recent call last) # .... # RuntimeWarning: Positional arguments are not supported. (partially reworked from @ShitalShah's answer) A: Simple as this def real_decorator(any_number_of_arguments): def pseudo_decorator(function_to_be_decorated): def real_wrapper(function_arguments): print(function_arguments) result = function_to_be_decorated(any_number_of_arguments) return result return real_wrapper return pseudo_decorator Now @real_decorator(any_number_of_arguments) def some_function(function_arguments): return "Any" A: Here we ran display info twice with two different names and two different ages. Now every time we ran display info, our decorators also added the functionality of printing out a line before and a line after that wrapped function. def decorator_function(original_function): def wrapper_function(*args, **kwargs): print('Executed Before', original_function.__name__) result = original_function(*args, **kwargs) print('Executed After', original_function.__name__, '\n') return result return wrapper_function @decorator_function def display_info(name, age): print('display_info ran with arguments ({}, {})'.format(name, age)) display_info('Mr Bean', 66) display_info('MC Jordan', 57) output: Executed Before display_info display_info ran with arguments (Mr Bean, 66) Executed After display_info Executed Before display_info display_info ran with arguments (MC Jordan, 57) Executed After display_info So now let's go ahead and get our decorator function to accept arguments. 
For example, let's say that I wanted a customizable prefix to all of these print statements within the wrapper.
Now this would be a good candidate for an argument to the decorator.
The argument that we pass in will be that prefix. Now in order to do this, we're just going to add another outer layer to our decorator, so I'm going to call this function a prefix decorator.

def prefix_decorator(prefix):
    def decorator_function(original_function):
        def wrapper_function(*args, **kwargs):
            print(prefix, 'Executed Before', original_function.__name__)
            result = original_function(*args, **kwargs)
            print(prefix, 'Executed After', original_function.__name__, '\n')
            return result
        return wrapper_function
    return decorator_function


@prefix_decorator('LOG:')
def display_info(name, age):
    print('display_info ran with arguments ({}, {})'.format(name, age))


display_info('Mr Bean', 66)
display_info('MC Jordan', 57)

output:
LOG: Executed Before display_info
display_info ran with arguments (Mr Bean, 66)
LOG: Executed After display_info 

LOG: Executed Before display_info
display_info ran with arguments (MC Jordan, 57)
LOG: Executed After display_info 


Now we have that LOG: prefix before our print statements in our wrapper function, and you can change it any time you want.
A: Great answers above. This one also illustrates @wraps, which takes the doc string and function name from the original function and applies it to the new wrapped version:
from functools import wraps

def decorator_func_with_args(arg1, arg2):
    def decorator(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            print("Before original function with decorator args:", arg1, arg2)
            result = f(*args, **kwargs)
            print("Ran after the original function")
            return result
        return wrapper
    return decorator

@decorator_func_with_args("foo", "bar")
def hello(name):
    """A function which prints a greeting to the name provided.
    """
    print('hello ', name)
    return 42

print("Starting script..")
x = hello('Bob')
print("The value of x is:", x)
print("The wrapped functions docstring is:", hello.__doc__)
print("The wrapped functions name is:", hello.__name__)

Prints:
Starting script..
Before original function with decorator args: foo bar
hello Bob
Ran after the original function
The value of x is: 42
The wrapped functions docstring is: A function which prints a greeting to the name provided.
The wrapped functions name is: hello

A: In my instance, I decided to solve this via a one-line lambda to create a new decorator function:
def finished_message(function, message="Finished!"):

    def wrapper(*args, **kwargs):
        output = function(*args, **kwargs)
        print(message)
        return output

    return wrapper

@finished_message
def func():
    pass

my_finished_message = lambda f: finished_message(f, "All Done!")

@my_finished_message
def my_func():
    pass

if __name__ == '__main__':
    func()
    my_func()

When executed, this prints:
Finished!
All Done!

Perhaps not as extensible as other solutions, but it worked for me.
A: It is well known that the following two pieces of code are nearly equivalent:
@dec
def foo():
    pass

############################################
foo = dec(foo)

A common mistake is to think that @ simply hides the leftmost argument.
@dec(1, 2, 3)
def foo():
    pass

###########################################
foo = dec(foo, 1, 2, 3)

It would be much easier to write decorators if the above is how @ worked. Unfortunately, that’s not the way things are done.

Consider a decorator Wait, which halts program execution for a few seconds. If you don't pass in a Wait-time, then the default value is 1 second.
Use-cases are shown below.
##################################################
@Wait
def print_something(something):
    print(something)

##################################################
@Wait(3)
def print_something_else(something_else):
    print(something_else)

##################################################
@Wait(delay=3)
def print_something_else(something_else):
    print(something_else)

When Wait has an argument, such as @Wait(3), then the call Wait(3) is executed before anything else happens. That is, the following two pieces of code are equivalent:
@Wait(3)
def print_something_else(something_else):
    print(something_else)

###############################################
return_value = Wait(3)
@return_value
def print_something_else(something_else):
    print(something_else)

This is a problem.
if `Wait` has no arguments:
    `Wait` is the decorator.
else: # `Wait` receives arguments
    `Wait` is not the decorator itself.
    Instead, `Wait` ***returns*** the decorator


One solution is shown below:
Let us begin by creating the following class, DelayedDecorator:
class DelayedDecorator:
    def __init__(i, cls, *args, **kwargs):
        print("Delayed Decorator __init__", cls, args, kwargs)
        i._cls = cls
        i._args = args
        i._kwargs = kwargs
    def __call__(i, func):
        print("Delayed Decorator __call__", func)
        if not (callable(func)):
            import io
            with io.StringIO() as ss:
                print(
                    "If only one input, input must be callable",
                    "Instead, received:",
                    repr(func),
                    sep="\n",
                    file=ss
                )
                msg = ss.getvalue()
            raise TypeError(msg)
        return i._cls(func, *i._args, **i._kwargs)

Now we can write things like:
    dec = DelayedDecorator(Wait, delay=4)
    @dec
    def delayed_print(something):
        print(something)

Note that:

dec does not accept multiple arguments.
dec only accepts the function to be wrapped.

import inspect
class PolyArgDecoratorMeta(type):
    def __call__(Wait, *args, **kwargs):
        try:
            arg_count = len(args)
            if (arg_count == 1):
                if callable(args[0]):
                    SuperClass = inspect.getmro(PolyArgDecoratorMeta)[1]
                    r = SuperClass.__call__(Wait, args[0])
                else:
                    r = DelayedDecorator(Wait, *args, **kwargs)
            else:
                r = DelayedDecorator(Wait, *args, **kwargs)
        finally:
            pass
        return r

import time
class Wait(metaclass=PolyArgDecoratorMeta):
    def __init__(i, func, delay=2):
        i._func = func
        i._delay = delay

    def __call__(i, *args, **kwargs):
        time.sleep(i._delay)
        r = i._func(*args, **kwargs)
        return r


The following two pieces of code are equivalent:
@Wait
def print_something(something):
    print(something)

##################################################

def print_something(something):
    print(something)
print_something = Wait(print_something)

We can print "something" to the console very slowly, as follows:
print_something("something")

#################################################
@Wait(delay=1)
def print_something_else(something_else):
    print(something_else)

##################################################
def print_something_else(something_else):
    print(something_else)

dd = DelayedDecorator(Wait, delay=1)
print_something_else = dd(print_something_else)

##################################################

print_something_else("something")


Final Notes
It may look like a lot of code, but you don't have to write the classes DelayedDecorator and PolyArgDecoratorMeta every time.
The only code you have to write yourself is something like the following, which is fairly short:
from PolyArgDecoratorMeta import PolyArgDecoratorMeta
import time
class Wait(metaclass=PolyArgDecoratorMeta):
    def __init__(i, func, delay=2):
        i._func = func
        i._delay = delay

    def __call__(i, *args, **kwargs):
        time.sleep(i._delay)
        r = i._func(*args, **kwargs)
        return r

A: Here is a decorator that can be called in a variety of ways (tested in python3.7):
import functools


def my_decorator(*args_or_func, **decorator_kwargs):

    def _decorator(func):

        @functools.wraps(func)
        def wrapper(*args, **kwargs):

            if not args_or_func or callable(args_or_func[0]):
                # Here you can set default values for positional arguments
                decorator_args = ()
            else:
                decorator_args = args_or_func

            print(
                "Available inside the wrapper:",
                decorator_args, decorator_kwargs
            )

            # ...
            result = func(*args, **kwargs)
            # ...

            return result

        return wrapper

    return _decorator(args_or_func[0]) \
        if args_or_func and callable(args_or_func[0]) else _decorator


@my_decorator
def func_1(arg): print(arg)

func_1("test")
# Available inside the wrapper: () {}
# test


@my_decorator()
def func_2(arg): print(arg)

func_2("test")
# Available inside the wrapper: () {}
# test


@my_decorator("any arg")
def func_3(arg): print(arg)

func_3("test")
# Available inside the wrapper: ('any arg',) {}
# test


@my_decorator("arg_1", 2, [3, 4, 5], kwarg_1=1, kwarg_2="2")
def func_4(arg): print(arg)

func_4("test")
# Available inside the wrapper: ('arg_1', 2, [3, 4, 5]) {'kwarg_1': 1, 'kwarg_2': '2'}
# test


PS thanks to user @norok2 - https://stackoverflow.com/a/57268935/5353484
UPD Decorator for validating arguments and/or the result of functions and methods of a class against annotations. Can be used in a synchronous or asynchronous version: https://github.com/EvgeniyBurdin/valdec
A: Here is a Flask example using decorators with parameters. Suppose we have a route '/user/name' and we want to map it to the user's home page.
def matchR(dirPath):
    def decorator(func):
        def wrapper(msg):
            if dirPath[0:6] == '/user/':
                print(f"User route '{dirPath}' match, calling func {func}")
                name = dirPath[6:]
                return func(msg2=name, msg3=msg)
            else:
                print(f"Input dirPath '{dirPath}' does not match route '/user/'")
                return
        return wrapper
    return decorator

#@matchR('/Morgan_Hills')
@matchR('/user/Morgan_Hills')
def home(**kwMsgs):
    for arg in kwMsgs:
        if arg == 'msg2':
            print(f"In home({arg}): Hello {kwMsgs[arg]}, welcome home!")
        if arg == 'msg3':
            print(f"In home({arg}): {kwMsgs[arg]}")

home('This is your profile rendered as in index.html.')

Output:
User route '/user/Morgan_Hills' match, calling func <function home at 0x000001DD5FDCD310>
In home(msg2): Hello Morgan_Hills, welcome home!
In home(msg3): This is your profile rendered as in index.html.

A: This is a great use case for a curried function.
Curried functions essentially delay a function from being called until all inputs have been supplied.
This can be used for a variety of things like wrappers or functional programming. In this case let's create a wrapper that takes in inputs.
I will use a simple package pamda that includes a curry function for python. This can be used as a wrapper for other functions.
Install Pamda:
pip install pamda

Create a simple curried decorator function with two inputs:
@pamda.curry()
def my_decorator(input, func):
    print("Executing Decorator")
    print(f"input:{input}")
    return func

Apply your decorator with the first input supplied to your target function:
@my_decorator('Hi!')
def foo(input):
    print('Executing Foo!')
    print(f"input:{input}")

Execute your wrapped function:
x=foo('Bye!')

Putting everything together:
from pamda import pamda

@pamda.curry()
def my_decorator(input, func):
    print("Executing Decorator")
    print(f"input:{input}")
    return func

@my_decorator('Hi!')
def foo(input):
    print('Executing Foo!')
    print(f"input:{input}")

x=foo('Bye!')

Would give:
Executing Decorator
input:Hi!
Executing Foo!
input:Bye!

A: Define this "decoratorize" function to generate a customized decorator function:
def decoratorize(FUN, **kw):
    def foo(*args, **kws):
        return FUN(*args, **kws, **kw)
    return foo

Use it this way:
    @decoratorize(FUN, arg1 = , arg2 = , ...)
    def bar(...):
        ...

A: The decorator with arguments should return a function that will take a function and return another function. You can do that like this:
def decorator_factory(argument):
    def decorator(function):
        def wrapper(*args, **kwargs):
            """
            add something
            """
            return function(*args, **kwargs)
        return wrapper
    return decorator

Or you can use partial from the functools module:
from functools import partial

def decorator(function=None, *, argument):
    if function is None:
        return partial(decorator, argument=argument)
    def wrapper(*args, **kwargs):
        """
        add something
        """
        return function(*args, **kwargs)
    return wrapper

In the second option, just make sure you pass the arguments like this:
@decorator(argument = 'args')
def func():
    pass

A: I think a working, real-world example, with usage examples of the most generic use-case, can be valuable here.

The following is a decorator for functions, which prints to the log upon entering and exiting the function.
Parameters control whether or not to print input and output values, the log level, and so on.
import logging 
from functools import wraps


def log_in_out(logger=logging.getLogger(), is_print_input=True, is_print_output=True, is_method=True, log_level=logging.DEBUG):
    """
    @param logger-
    @param is_print_input- toggle printing input arguments
    @param is_print_output- toggle printing output values
    @param is_method- True for methods, False for functions. Makes "self" not printed in case of is_print_input==True
    @param log_level-

    @returns- a decorator that logs to logger when entering or exiting the decorated function.
    Don't uglify your code!
""" def decor(fn): @wraps(fn) def wrapper(*args, **kwargs): if is_print_input: logger.log( msg=f"Entered {fn.__name__} with args={args[1:] if is_method else args}, kwargs={kwargs}", level=log_level ) else: logger.log( msg=f"Entered {fn.__name__}", level=log_level ) result = fn(*args, **kwargs) if is_print_output and result is not None: logger.log( msg=f"Exited {fn.__name__} with result {result}", level=log_level, ) else: logger.log( msg=f"Exited {fn.__name__}", level=log_level ) return result return wrapper return decor usage: @log_in_out(is_method=False, is_print_input=False) def foo(a, b=5): return 3, a foo(2) --> prints Entered foo Exited foo with result (3, 2) class A(): @log_in_out(is_print_output=False) def bar(self, c, m, y): return c, 6 a = A() a.bar(1, 2, y=3) --> prints Entered bar with args=(1, 2), kwargs={y:3} Exited bar A: Suppose you have a function def f(*args): print(*args) And you want to add a decorator that accepts arguments to it like this: @decorator(msg='hello') def f(*args): print(*args) This means Python will modify f as follows: f = decorator(msg='hello')(f) So, the return of the part decorator(msg='hello') should be a wrapper function that accepts the function f and returns the modified function. then you can execute the modified function. def decorator(**kwargs): def wrap(f): def modified_f(*args): print(kwargs['msg']) # use passed arguments to the decorator return f(*args) return modified_f return wrap So, when you call f, it is like you are doing: decorator(msg='hello')(f)(args) === wrap(f)(args) === modified_f(args) but modified_f has access to kwargs passed to the decorator The output of f(1,2,3) will be: hello (1, 2, 3) A: For example, I created multiply() below which can accept one or no argument or no parentheses from the decorator and I created sum() below: from numbers import Number def multiply(num=1): def _multiply(func): def core(*args, **kwargs): result = func(*args, **kwargs) if isinstance(num, Number): return result * num else: return result return core if callable(num): return _multiply(num) else: return _multiply def sum(num1, num2): return num1 + num2 Now, I put @multiply(5) on sum(), then called sum(4, 6) as shown below: # (4 + 6) x 5 = 50 @multiply(5) # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) Then, I could get the result below: 50 Next, I put @multiply() on sum(), then called sum(4, 6) as shown below: # (4 + 6) x 1 = 10 @multiply() # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) Or, I put @multiply on sum(), then called sum(4, 6) as shown below: # 4 + 6 = 10 @multiply # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) Then, I could get the result below: 10
Decorators with parameters?
I have a problem with the transfer of the variable insurance_mode by the decorator. I would do it by the following decorator statement:
@execute_complete_reservation(True)
def test_booking_gta_object(self):
    self.test_select_gta_object()

but unfortunately, this statement does not work. Perhaps there is a better way to solve this problem.
def execute_complete_reservation(test_case,insurance_mode):
    def inner_function(self,*args,**kwargs):
        self.test_create_qsf_query()
        test_case(self,*args,**kwargs)
        self.test_select_room_option()
        if insurance_mode:
            self.test_accept_insurance_crosseling()
        else:
            self.test_decline_insurance_crosseling()
        self.test_configure_pax_details()
        self.test_configure_payer_details

    return inner_function
[ "The syntax for decorators with arguments is a bit different - the decorator with arguments should return a function that will take a function and return another function. So it should really return a normal decorator. A bit confusing, right? What I mean is:\ndef decorator_factory(argument):\n def decorator(function):\n def wrapper(*args, **kwargs):\n funny_stuff()\n something_with_argument(argument)\n result = function(*args, **kwargs)\n more_funny_stuff()\n return result\n return wrapper\n return decorator\n\nHere you can read more on the subject - it's also possible to implement this using callable objects and that is also explained there.\n", "Edit : for an in-depth understanding of the mental model of decorators, take a look at this awesome Pycon Talk. well worth the 30 minutes.\nOne way of thinking about decorators with arguments is\n@decorator\ndef foo(*args, **kwargs):\n pass\n\ntranslates to\nfoo = decorator(foo)\n\nSo if the decorator had arguments,\n@decorator_with_args(arg)\ndef foo(*args, **kwargs):\n pass\n\ntranslates to\nfoo = decorator_with_args(arg)(foo)\n\ndecorator_with_args is a function which accepts a custom argument and which returns the actual decorator (that will be applied to the decorated function).\nI use a simple trick with partials to make my decorators easy\nfrom functools import partial\n\ndef _pseudo_decor(fun, argument):\n def ret_fun(*args, **kwargs):\n #do stuff here, for eg.\n print (\"decorator arg is %s\" % str(argument))\n return fun(*args, **kwargs)\n return ret_fun\n\nreal_decorator = partial(_pseudo_decor, argument=arg)\n\n@real_decorator\ndef foo(*args, **kwargs):\n pass\n\nUpdate:\nAbove, foo becomes real_decorator(foo)\nOne effect of decorating a function is that the name foo is overridden upon decorator declaration. foo is \"overridden\" by whatever is returned by real_decorator. In this case, a new function object.\nAll of foo's metadata is overridden, notably docstring and function name.\n>>> print(foo)\n<function _pseudo_decor.<locals>.ret_fun at 0x10666a2f0>\n\nfunctools.wraps gives us a convenient method to \"lift\" the docstring and name to the returned function.\nfrom functools import partial, wraps\n\ndef _pseudo_decor(fun, argument):\n # magic sauce to lift the name and doc of the function\n @wraps(fun)\n def ret_fun(*args, **kwargs):\n # pre function execution stuff here, for eg.\n print(\"decorator argument is %s\" % str(argument))\n returned_value = fun(*args, **kwargs)\n # post execution stuff here, for eg.\n print(\"returned value is %s\" % returned_value)\n return returned_value\n\n return ret_fun\n\nreal_decorator1 = partial(_pseudo_decor, argument=\"some_arg\")\nreal_decorator2 = partial(_pseudo_decor, argument=\"some_other_arg\")\n\n@real_decorator1\ndef bar(*args, **kwargs):\n pass\n\n>>> print(bar)\n<function __main__.bar(*args, **kwargs)>\n\n>>> bar(1,2,3, k=\"v\", x=\"z\")\ndecorator argument is some_arg\nreturned value is None\n\n", "Here is a slightly modified version of t.dubrownik's answer. 
Why?\n\nAs a general template, you should return the return value from the original function.\nThis changes the name of the function, which could affect other decorators / code.\n\nSo use @functools.wraps():\nfrom functools import wraps\n\ndef create_decorator(argument):\n def decorator(function):\n @wraps(function)\n def wrapper(*args, **kwargs):\n funny_stuff()\n something_with_argument(argument)\n retval = function(*args, **kwargs)\n more_funny_stuff()\n return retval\n return wrapper\n return decorator\n\n", "I'd like to show an idea which is IMHO quite elegant. The solution proposed by t.dubrownik shows a pattern which is always the same: you need the three-layered wrapper regardless of what the decorator does.\nSo I thought this is a job for a meta-decorator, that is, a decorator for decorators. As a decorator is a function, it actually works as a regular decorator with arguments:\ndef parametrized(dec):\n def layer(*args, **kwargs):\n def repl(f):\n return dec(f, *args, **kwargs)\n return repl\n return layer\n\nThis can be applied to a regular decorator in order to add parameters. So for instance, say we have the decorator which doubles the result of a function:\ndef double(f):\n def aux(*xs, **kws):\n return 2 * f(*xs, **kws)\n return aux\n\n@double\ndef function(a):\n return 10 + a\n\nprint function(3) # Prints 26, namely 2 * (10 + 3)\n\nWith @parametrized we can build a generic @multiply decorator having a parameter\n@parametrized\ndef multiply(f, n):\n def aux(*xs, **kws):\n return n * f(*xs, **kws)\n return aux\n\n@multiply(2)\ndef function(a):\n return 10 + a\n\nprint function(3) # Prints 26\n\n@multiply(3)\ndef function_again(a):\n return 10 + a\n\nprint function(3) # Keeps printing 26\nprint function_again(3) # Prints 39, namely 3 * (10 + 3)\n\nConventionally the first parameter of a parametrized decorator is the function, while the remaining arguments will correspond to the parameter of the parametrized decorator.\nAn interesting usage example could be a type-safe assertive decorator:\nimport itertools as it\n\n@parametrized\ndef types(f, *types):\n def rep(*args):\n for a, t, n in zip(args, types, it.count()):\n if type(a) is not t:\n raise TypeError('Value %d has not type %s. %s instead' %\n (n, t, type(a))\n )\n return f(*args)\n return rep\n\n@types(str, int) # arg1 is str, arg2 is int\ndef string_multiply(text, times):\n return text * times\n\nprint(string_multiply('hello', 3)) # Prints hellohellohello\nprint(string_multiply(3, 3)) # Fails miserably with TypeError\n\nA final note: here I'm not using functools.wraps for the wrapper functions, but I would recommend using it all the times.\n", "I presume your problem is passing arguments to your decorator. 
This is a little tricky and not straightforward.\nHere's an example of how to do this:\nclass MyDec(object):\n def __init__(self,flag):\n self.flag = flag\n def __call__(self, original_func):\n decorator_self = self\n def wrappee( *args, **kwargs):\n print 'in decorator before wrapee with flag ',decorator_self.flag\n original_func(*args,**kwargs)\n print 'in decorator after wrapee with flag ',decorator_self.flag\n return wrappee\n\n@MyDec('foo de fa fa')\ndef bar(a,b,c):\n print 'in bar',a,b,c\n\nbar('x','y','z')\n\nPrints:\nin decorator before wrapee with flag foo de fa fa\nin bar x y z\nin decorator after wrapee with flag foo de fa fa\n\nSee Bruce Eckel's article for more details.\n", "Writing a decorator that works with and without parameter is a challenge because Python expects completely different behavior in these two cases! Many answers have tried to work around this and below is an improvement of answer by @norok2. Specifically, this variation eliminates the use of locals().\nFollowing the same example as given by @norok2:\nimport functools\n\ndef multiplying(f_py=None, factor=1):\n assert callable(f_py) or f_py is None\n def _decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n return factor * func(*args, **kwargs)\n return wrapper\n return _decorator(f_py) if callable(f_py) else _decorator\n\n\n@multiplying\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying()\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying(factor=10)\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 450\n\nPlay with this code.\nThe catch is that the user must supply key,value pairs of parameters instead of positional parameters and the first parameter is reserved.\n", "def decorator(argument):\n def real_decorator(function):\n def wrapper(*args):\n for arg in args:\n assert type(arg)==int,f'{arg} is not an interger'\n result = function(*args)\n result = result*argument\n return result\n return wrapper\n return real_decorator\n\nUsage of the decorator\n@decorator(2)\ndef adder(*args):\n sum=0\n for i in args:\n sum+=i\n return sum\n\nThen the\nadder(2,3)\n\nproduces\n10\n\nbut \nadder('hi',3)\n\nproduces\n---------------------------------------------------------------------------\nAssertionError Traceback (most recent call last)\n<ipython-input-143-242a8feb1cc4> in <module>\n----> 1 adder('hi',3)\n\n<ipython-input-140-d3420c248ebd> in wrapper(*args)\n 3 def wrapper(*args):\n 4 for arg in args:\n----> 5 assert type(arg)==int,f'{arg} is not an interger'\n 6 result = function(*args)\n 7 result = result*argument\n\nAssertionError: hi is not an interger\n\n", "This is a template for a function decorator that does not require () if no parameters are to be given and supports both positional and keyword parameters (but requires cheching on locals() to find out if the first parameter is the function to be decorated or not):\nimport functools\n\n\ndef decorator(x_or_func=None, *decorator_args, **decorator_kws):\n def _decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kws):\n if 'x_or_func' not in locals() \\\n or callable(x_or_func) \\\n or x_or_func is None:\n x = ... 
# <-- default `x` value\n else:\n x = x_or_func\n return func(*args, **kws)\n\n return wrapper\n\n return _decorator(x_or_func) if callable(x_or_func) else _decorator\n\nan example of this is given below:\ndef multiplying(factor_or_func=None):\n def _decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n if 'factor_or_func' not in locals() \\\n or callable(factor_or_func) \\\n or factor_or_func is None:\n factor = 1\n else:\n factor = factor_or_func\n return factor * func(*args, **kwargs)\n return wrapper\n return _decorator(factor_or_func) if callable(factor_or_func) else _decorator\n\n\n@multiplying\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying()\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying(10)\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 450\n\n\nAlternatively, if one does not need positional arguments, one can relax the need for checking on the first parameter within the wrapper() (thus removing the need to use locals()):\nimport functools\n\n\ndef decorator(func_=None, **decorator_kws):\n def _decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kws):\n return func(*args, **kws)\n return wrapper\n\n if callable(func_):\n return _decorator(func_)\n elif func_ is None:\n return _decorator\n else:\n raise RuntimeWarning(\"Positional arguments are not supported.\")\n\nan example of this is given below:\nimport functools\n\n\ndef multiplying(func_=None, factor=1):\n def _decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n return factor * func(*args, **kwargs)\n return wrapper\n\n if callable(func_):\n return _decorator(func_)\n elif func_ is None:\n return _decorator\n else:\n raise RuntimeWarning(\"Positional arguments are not supported.\")\n\n\n@multiplying\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying()\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 45\n\n\n@multiplying(factor=10)\ndef summing(x): return sum(x)\n\nprint(summing(range(10)))\n# 450\n\n\n@multiplying(10)\ndef summing(x): return sum(x)\nprint(summing(range(10)))\n# RuntimeWarning Traceback (most recent call last)\n# ....\n# RuntimeWarning: Positional arguments are not supported.\n\n(partially reworked from @ShitalShah's answer)\n", "Simple as this\ndef real_decorator(any_number_of_arguments):\n def pseudo_decorator(function_to_be_decorated):\n\n def real_wrapper(function_arguments):\n print(function_arguments)\n result = function_to_be_decorated(any_number_of_arguments)\n return result\n\n return real_wrapper\n return pseudo_decorator\n\nNow\n@real_decorator(any_number_of_arguments)\ndef some_function(function_arguments):\n return \"Any\"\n\n", "\n\nHere we ran display info twice with two different names and two different ages.\nNow every time we ran display info, our decorators also added the functionality of printing out a line before and a line after that wrapped function.\n\ndef decorator_function(original_function):\n def wrapper_function(*args, **kwargs):\n print('Executed Before', original_function.__name__)\n result = original_function(*args, **kwargs)\n print('Executed After', original_function.__name__, '\\n')\n return result\n return wrapper_function\n\n\n@decorator_function\ndef display_info(name, age):\n print('display_info ran with arguments ({}, {})'.format(name, age))\n\n\ndisplay_info('Mr Bean', 66)\ndisplay_info('MC Jordan', 57)\n\noutput:\nExecuted Before display_info\ndisplay_info ran with arguments 
(Mr Bean, 66)\nExecuted After display_info \n\nExecuted Before display_info\ndisplay_info ran with arguments (MC Jordan, 57)\nExecuted After display_info \n\n\nSo now let's go ahead and get our decorator function to accept arguments.\n\nFor example let's say that I wanted a customizable prefix to all of these print statements within the wrapper.\n\nNow this would be a good candidate for an argument to the decorator.\n\nThe argument that we pass in will be that prefix. Now in order to do, this we're just going to add another outer layer to our decorator, so I'm going to call this a function a prefix decorator.\n\n\ndef prefix_decorator(prefix):\n def decorator_function(original_function):\n def wrapper_function(*args, **kwargs):\n print(prefix, 'Executed Before', original_function.__name__)\n result = original_function(*args, **kwargs)\n print(prefix, 'Executed After', original_function.__name__, '\\n')\n return result\n return wrapper_function\n return decorator_function\n\n\n@prefix_decorator('LOG:')\ndef display_info(name, age):\n print('display_info ran with arguments ({}, {})'.format(name, age))\n\n\ndisplay_info('Mr Bean', 66)\ndisplay_info('MC Jordan', 57)\n\noutput:\nLOG: Executed Before display_info\ndisplay_info ran with arguments (Mr Bean, 66)\nLOG: Executed After display_info \n\nLOG: Executed Before display_info\ndisplay_info ran with arguments (MC Jordan, 57)\nLOG: Executed After display_info \n\n\nNow we have that LOG: prefix before our print statements in our wrapper function and you can change this any time that you want.\n\n", "Great answers above. This one also illustrates @wraps, which takes the doc string and function name from the original function and applies it to the new wrapped version:\nfrom functools import wraps\n\ndef decorator_func_with_args(arg1, arg2):\n def decorator(f):\n @wraps(f)\n def wrapper(*args, **kwargs):\n print(\"Before orginal function with decorator args:\", arg1, arg2)\n result = f(*args, **kwargs)\n print(\"Ran after the orginal function\")\n return result\n return wrapper\n return decorator\n\n@decorator_func_with_args(\"foo\", \"bar\")\ndef hello(name):\n \"\"\"A function which prints a greeting to the name provided.\n \"\"\"\n print('hello ', name)\n return 42\n\nprint(\"Starting script..\")\nx = hello('Bob')\nprint(\"The value of x is:\", x)\nprint(\"The wrapped functions docstring is:\", hello.__doc__)\nprint(\"The wrapped functions name is:\", hello.__name__)\n\nPrints:\nStarting script..\nBefore orginal function with decorator args: foo bar\nhello Bob\nRan after the orginal function\nThe value of x is: 42\nThe wrapped functions docstring is: A function which prints a greeting to the name provided.\nThe wrapped functions name is: hello\n\n", "In my instance, I decided to solve this via a one-line lambda to create a new decorator function:\ndef finished_message(function, message=\"Finished!\"):\n\n def wrapper(*args, **kwargs):\n output = function(*args,**kwargs)\n print(message)\n return output\n\n return wrapper\n\n@finished_message\ndef func():\n pass\n\nmy_finished_message = lambda f: finished_message(f, \"All Done!\")\n\n@my_finished_message\ndef my_func():\n pass\n\nif __name__ == '__main__':\n func()\n my_func()\n\nWhen executed, this prints:\nFinished!\nAll Done!\n\nPerhaps not as extensible as other solutions, but worked for me.\n", "It is well known that the following two pieces of code are nearly equivalent: \n@dec\ndef foo():\n pass foo = dec(foo)\n\n############################################\nfoo = dec(foo)\n\nA common 
mistake is to think that @ simply hides the leftmost argument.\n@dec(1, 2, 3)\ndef foo():\n pass \n###########################################\nfoo = dec(foo, 1, 2, 3)\n\nIt would be much easier to write decorators if the above is how @ worked. Unfortunately, that’s not the way things are done.\n\nConsider a decorator Waitwhich haults \nprogram execution for a few seconds.\nIf you don't pass in a Wait-time\nthen the default value is 1 seconds.\nUse-cases are shown below.\n##################################################\n@Wait\ndef print_something(something):\n print(something)\n\n##################################################\n@Wait(3)\ndef print_something_else(something_else):\n print(something_else)\n\n##################################################\n@Wait(delay=3)\ndef print_something_else(something_else):\n print(something_else)\n\nWhen Wait has an argument, such as @Wait(3), then the call Wait(3)\nis executed before anything else happens.\nThat is, the following two pieces of code are equivalent\n@Wait(3)\ndef print_something_else(something_else):\n print(something_else)\n\n###############################################\nreturn_value = Wait(3)\n@return_value\ndef print_something_else(something_else):\n print(something_else)\n\nThis is a problem. \nif `Wait` has no arguments:\n `Wait` is the decorator.\nelse: # `Wait` receives arguments\n `Wait` is not the decorator itself.\n Instead, `Wait` ***returns*** the decorator\n\n\nOne solution is shown below: \nLet us begin by creating the following class, DelayedDecorator: \nclass DelayedDecorator:\n def __init__(i, cls, *args, **kwargs):\n print(\"Delayed Decorator __init__\", cls, args, kwargs)\n i._cls = cls\n i._args = args\n i._kwargs = kwargs\n def __call__(i, func):\n print(\"Delayed Decorator __call__\", func)\n if not (callable(func)):\n import io\n with io.StringIO() as ss:\n print(\n \"If only one input, input must be callable\",\n \"Instead, received:\",\n repr(func),\n sep=\"\\n\",\n file=ss\n )\n msg = ss.getvalue()\n raise TypeError(msg)\n return i._cls(func, *i._args, **i._kwargs)\n\nNow we can write things like:\n dec = DelayedDecorator(Wait, delay=4)\n @dec\n def delayed_print(something):\n print(something)\n\nNote that: \n\ndec does not not accept multiple arguments.\ndec only accepts the function to be wrapped. 
\nimport inspect\nclass PolyArgDecoratorMeta(type):\n def call(Wait, *args, **kwargs):\n try:\n arg_count = len(args)\n if (arg_count == 1):\n if callable(args[0]):\n SuperClass = inspect.getmro(PolyArgDecoratorMeta)[1]\n r = SuperClass.call(Wait, args[0])\n else:\n r = DelayedDecorator(Wait, *args, **kwargs)\n else:\n r = DelayedDecorator(Wait, *args, **kwargs)\n finally:\n pass\n return r\nimport time\nclass Wait(metaclass=PolyArgDecoratorMeta):\n def init(i, func, delay = 2):\n i._func = func\n i._delay = delay\ndef __call__(i, *args, **kwargs):\n time.sleep(i._delay)\n r = i._func(*args, **kwargs)\n return r \n\n\nThe following two pieces of code are equivalent:\n@Wait\ndef print_something(something):\n print (something)\n\n##################################################\n\ndef print_something(something):\n print(something)\nprint_something = Wait(print_something)\n\nWe can print \"something\" to the console very slowly, as follows:\nprint_something(\"something\")\n\n#################################################\n@Wait(delay=1)\ndef print_something_else(something_else):\n print(something_else)\n\n##################################################\ndef print_something_else(something_else):\n print(something_else)\n\ndd = DelayedDecorator(Wait, delay=1)\nprint_something_else = dd(print_something_else)\n\n##################################################\n\nprint_something_else(\"something\")\n\n\nFinal Notes\nIt may look like a lot of code, but you don't have to write the classes DelayedDecorator and PolyArgDecoratorMeta every-time. The only code you have to personally write something like as follows, which is fairly short:\nfrom PolyArgDecoratorMeta import PolyArgDecoratorMeta\nimport time\nclass Wait(metaclass=PolyArgDecoratorMeta):\n def __init__(i, func, delay = 2):\n i._func = func\n i._delay = delay\n\n def __call__(i, *args, **kwargs):\n time.sleep(i._delay)\n r = i._func(*args, **kwargs)\n return r\n\n", "It is a decorator that can be called in a variety of ways (tested in python3.7):\nimport functools\n\n\ndef my_decorator(*args_or_func, **decorator_kwargs):\n\n def _decorator(func):\n\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n\n if not args_or_func or callable(args_or_func[0]):\n # Here you can set default values for positional arguments\n decorator_args = ()\n else:\n decorator_args = args_or_func\n\n print(\n \"Available inside the wrapper:\",\n decorator_args, decorator_kwargs\n )\n\n # ...\n result = func(*args, **kwargs)\n # ...\n\n return result\n\n return wrapper\n\n return _decorator(args_or_func[0]) \\\n if args_or_func and callable(args_or_func[0]) else _decorator\n\n\n@my_decorator\ndef func_1(arg): print(arg)\n\nfunc_1(\"test\")\n# Available inside the wrapper: () {}\n# test\n\n\n@my_decorator()\ndef func_2(arg): print(arg)\n\nfunc_2(\"test\")\n# Available inside the wrapper: () {}\n# test\n\n\n@my_decorator(\"any arg\")\ndef func_3(arg): print(arg)\n\nfunc_3(\"test\")\n# Available inside the wrapper: ('any arg',) {}\n# test\n\n\n@my_decorator(\"arg_1\", 2, [3, 4, 5], kwarg_1=1, kwarg_2=\"2\")\ndef func_4(arg): print(arg)\n\nfunc_4(\"test\")\n# Available inside the wrapper: ('arg_1', 2, [3, 4, 5]) {'kwarg_1': 1, 'kwarg_2': '2'}\n# test\n\n\nPS thanks to user @norok2 - https://stackoverflow.com/a/57268935/5353484\nUPD Decorator for validating arguments and/or result of functions and methods of a class against annotations. 
Can be used in synchronous or asynchronous version: https://github.com/EvgeniyBurdin/valdec\n", "Here is a Flask example using decorators with parameters. Suppose we have a route '/user/name' and we want to map to his home page.\ndef matchR(dirPath):\n def decorator(func):\n def wrapper(msg):\n if dirPath[0:6] == '/user/':\n print(f\"User route '{dirPath}' match, calling func {func}\")\n name = dirPath[6:]\n return func(msg2=name, msg3=msg)\n else:\n print(f\"Input dirPath '{dirPath}' does not match route '/user/'\")\n return\n return wrapper\n return decorator\n\n#@matchR('/Morgan_Hills')\n@matchR('/user/Morgan_Hills')\ndef home(**kwMsgs):\n for arg in kwMsgs:\n if arg == 'msg2':\n print(f\"In home({arg}): Hello {kwMsgs[arg]}, welcome home!\")\n if arg == 'msg3':\n print(f\"In home({arg}): {kwMsgs[arg]}\")\n\nhome('This is your profile rendered as in index.html.')\n\nOutput:\nUser route '/user/Morgan_Hills' match, calling func <function home at 0x000001DD5FDCD310>\nIn home(msg2): Hello Morgan_Hills, welcome home!\nIn home(msg3): This is your profile rendered as in index.html.\n\n", "This is a great use case for a curried function.\nCurried functions essentially delay a function from being called until all inputs have been supplied.\nThis can be used for a variety of things like wrappers or functional programming. In this case lets create a wrapper that takes in inputs.\nI will use a simple package pamda that includes a curry function for python. This can be used as a wrapper for other functions.\nInstall Pamda:\npip install pamda\n\nCreate a simple curried decorator function with two inputs:\n@pamda.curry()\ndef my_decorator(input, func):\n print (\"Executing Decorator\")\n print(f\"input:{input}\")\n return func\n\nApply your decorator with the first input supplied to your target function:\n@my_decorator('Hi!')\ndef foo(input):\n print('Executing Foo!')\n print(f\"input:{input}\")\n\nExecute your wrapped function:\nx=foo('Bye!')\n\nPutting everything together:\nfrom pamda import pamda\n\n@pamda.curry()\ndef my_decorator(input, func):\n print (\"Executing Decorator\")\n print(f\"input:{input}\")\n return func\n\n@my_decorator('Hi!')\ndef foo(input):\n print('Executing Foo!')\n print(f\"input:{input}\")\n\nx=foo('Bye!')\n\nWould give:\nExecuting Decorator\ninput:Hi!\nExecuting Foo!\ninput:Bye!\n\n", "define this \"decoratorize function\" to generate customized decorator function:\ndef decoratorize(FUN, **kw):\n def foo(*args, **kws):\n return FUN(*args, **kws, **kw)\n return foo\n\nuse it this way:\n @decoratorize(FUN, arg1 = , arg2 = , ...)\n def bar(...):\n ...\n\n", "the decorator with arguments should return a function that will take a function and return another function you can do that\ndef decorator_factory(argument):\n def decorator(function):\n def wrapper(*args, **kwargs):\n \"\"\"\n add somhting\n \"\"\"\n return function(*args, **kwargs)\n return wrapper\n return decorator\n\nor you can use partial from functools module\ndef decorator(function =None,*,argument ):\n if function is None :\n return partial(decorator,argument=argument)\n def wrapper(*args, **kwargs):\n \"\"\"\n add somhting\n \"\"\"\n return function(*args, **kwargs)\n return wrapper\n\nin the second option just make sure you pass the arguments like this :\n@decorator(argument = 'args')\ndef func():\n pass\n\n", "I think a working, real-world example, with usage examples of the most generic use-case can be valuable here.\n\nThe following is a decorator for functions, which prints to log upon entering and exiting 
the function.\nParameters control whether or not to print input and output values, log level and so on.\nimport logging \nfrom functools import wraps\n\n\ndef log_in_out(logger=logging.getLogger(), is_print_input=True, is_print_output=True, is_method=True, log_level=logging.DEBUG):\n \"\"\"\n @param logger-\n @param is_print_input- toggle printing input arguments\n @param is_print_output- toggle printing output values\n @param is_method- True for methods, False for functions. Makes \"self\" not printed in case of is_print_input==True\n @param log_level-\n\n @returns- a decorator that logs to logger when entering or exiting the decorated function.\n Don't uglify your code!\n \"\"\"\n\n def decor(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n if is_print_input:\n logger.log(\n msg=f\"Entered {fn.__name__} with args={args[1:] if is_method else args}, kwargs={kwargs}\",\n level=log_level\n )\n else:\n logger.log(\n msg=f\"Entered {fn.__name__}\",\n level=log_level\n )\n\n result = fn(*args, **kwargs)\n\n if is_print_output and result is not None:\n logger.log(\n msg=f\"Exited {fn.__name__} with result {result}\",\n level=log_level,\n )\n else:\n logger.log(\n msg=f\"Exited {fn.__name__}\",\n level=log_level\n )\n\n return result\n\n return wrapper\n\n return decor\n\n\nusage:\n @log_in_out(is_method=False, is_print_input=False)\n def foo(a, b=5):\n return 3, a\n\nfoo(2) --> prints\n\nEntered foo\nExited foo with result (3, 2)\n\n class A():\n @log_in_out(is_print_output=False)\n def bar(self, c, m, y):\n return c, 6\n\na = A()\na.bar(1, 2, y=3) --> prints\n\nEntered bar with args=(1, 2), kwargs={y:3}\nExited bar\n\n", "Suppose you have a function\ndef f(*args):\n print(*args)\n\nAnd you want to add a decorator that accepts arguments to it like this:\n@decorator(msg='hello')\ndef f(*args):\n print(*args)\n\nThis means Python will modify f as follows:\nf = decorator(msg='hello')(f)\n\nSo, the return of the part decorator(msg='hello') should be a wrapper function that accepts the function f and returns the modified function. 
Then you can execute the modified function.\ndef decorator(**kwargs):\n def wrap(f):\n def modified_f(*args):\n print(kwargs['msg']) # use passed arguments to the decorator\n return f(*args)\n return modified_f\n return wrap\n\nSo, when you call f, it is like you are doing:\ndecorator(msg='hello')(f)(args)\n=== wrap(f)(args) === modified_f(args)\nbut modified_f has access to the kwargs passed to the decorator.\nThe output of\nf(1,2,3)\n\nwill be:\nhello\n(1, 2, 3)\n\n", "For example, I created multiply() below, which can accept one argument, no arguments, or no parentheses at all on the decorator, and I created sum() below:\nfrom numbers import Number\n\ndef multiply(num=1):\n def _multiply(func):\n def core(*args, **kwargs):\n result = func(*args, **kwargs)\n if isinstance(num, Number):\n return result * num\n else:\n return result\n return core\n if callable(num):\n return _multiply(num)\n else:\n return _multiply\n\ndef sum(num1, num2):\n return num1 + num2\n\nNow, I put @multiply(5) on sum(), then called sum(4, 6) as shown below:\n# (4 + 6) x 5 = 50\n\n@multiply(5) # Here\ndef sum(num1, num2):\n return num1 + num2\n\nresult = sum(4, 6)\nprint(result)\n\nThen, I could get the result below:\n50\n\nNext, I put @multiply() on sum(), then called sum(4, 6) as shown below:\n# (4 + 6) x 1 = 10\n\n@multiply() # Here\ndef sum(num1, num2):\n return num1 + num2\n \nresult = sum(4, 6)\nprint(result)\n\nOr, I put @multiply on sum(), then called sum(4, 6) as shown below:\n# 4 + 6 = 10\n\n@multiply # Here\ndef sum(num1, num2):\n return num1 + num2\n \nresult = sum(4, 6)\nprint(result)\n\nThen, I could get the result below:\n10\n\n" ]
[ 1083, 482, 133, 119, 43, 36, 21, 17, 17, 10, 7, 4, 4, 2, 2, 2, 1, 1, 0, 0, 0 ]
[ "In case both the function and the decorator have to take arguments you can follow the below approach.\nFor example there is a decorator named decorator1 which takes an argument\n@decorator1(5)\ndef func1(arg1, arg2):\n print (arg1, arg2)\n\nfunc1(1, 2)\n\nNow if the decorator1 argument has to be dynamic, or passed while calling the function, \ndef func1(arg1, arg2):\n print (arg1, arg2)\n\n\na = 1\nb = 2\nseconds = 10\n\ndecorator1(seconds)(func1)(a, b)\n\nIn the above code \n\nseconds is the argument for decorator1 \na, b are the arguments of func1\n\n", "Decoration with parameters in an anonymous setting.\nAmong of the many possibilities two variations of a \"nested\" syntactic sugar decoration are presented. They differ from each other by the order of execution wrt to the target function and their effects are, in general, independent (non interacting).\nThe decorators allow an \"injection\" a of custom function either before or after the execution of the target function.\nThe calls of both functions take place in a tuple. As default, the return value is the result of the target function.\nThe syntactic sugar decoration @first_internal(send_msg)('...end') required version >= 3.9, see PEP 614 Relaxing Grammar Restrictions On Decorators.\nUsed functools.wraps to keep the doc-string of the target function.\nfrom functools import wraps\n\n\ndef first_external(f_external):\n return lambda *args_external, **kwargs_external:\\\n lambda f_target: wraps(f_target)(\n lambda *args_target, **kwargs_target:\n (f_external(*args_external, **kwargs_external),\n f_target(*args_target, **kwargs_target))[1]\n )\n\n\ndef first_internal(f_external):\n return lambda *args_external, **kwargs_external:\\\n lambda f_target: wraps(f_target)(\n lambda *args_target, **kwargs_target:\n (f_target(*args_target, **kwargs_target),\n f_external(*args_external, **kwargs_external))[0]\n )\n\n\ndef send_msg(x):\n print('msg>', x)\n\n\n@first_internal(send_msg)('...end') # python >= 3.9\n@first_external(send_msg)(\"start...\") # python >= 3.9\ndef test_function(x):\n \"\"\"Test function\"\"\"\n print('from test_function')\n return x\n\n\ntest_function(2)\n\nOutput\nmsg> start...\nfrom test_function\nmsg> ...end\n\nRemarks\n\ncomposition decorators, such as pull-back and push-forward (maybe in a more Computer Science terminology: co- and resp. contra-variant decorator), could more useful but need ad-hoc care, for example composition rules, check which parameters go where, etc\n\nsyntactic sugar acts as a kind of partial of the target function: once decorated there is no way back (without extra imports) but it is not mandatory, a decorator can be used also in its extended forms, i.e. first_external(send_msg)(\"start...\")(test_function)(2)\n\nthe results of a workbench with timeit.repeat(..., repeat=5, number=10000) which compare the classical def and lambda decoration shows that are almost equivalent:\n\nfor lambda: [6.200810984999862, 6.035239247000391, 5.346362481000142, 5.987880147000396, 5.5331550319997405] - mean -> 5.8206\n\nfor def: [6.165001932999985, 5.554595884999799, 5.798066574999666, 5.678178028000275, 5.446507932999793] - mean -> 5.7284\n\n\n\nnaturally an non-anonymous counterpart is possible and provides more flexibility\n\n\n" ]
[ -1, -1 ]
[ "decorator", "python" ]
stackoverflow_0005929107_decorator_python.txt
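The answers above converge on one idea: inspect how the decorator was invoked before deciding what to return. A minimal self-contained sketch of that callable-check pattern (the name optional_args_decorator is hypothetical, and the check breaks if the decorator's first real argument can itself be callable):
import functools

def optional_args_decorator(*dargs, **dkwargs):
    # Bare usage: @optional_args_decorator  ->  dargs == (func,)
    if len(dargs) == 1 and not dkwargs and callable(dargs[0]):
        func = dargs[0]
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        return wrapper
    # Parameterized usage: @optional_args_decorator(1, x=2)
    def real_decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print("decorator args:", dargs, dkwargs)  # whatever the decorator was given
            return func(*args, **kwargs)
        return wrapper
    return real_decorator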
Q: How to print high numbered unicode characters in python i am trying to write a python program that prints music notes (like -> u1d15e). However, i cant quite get it to work. Here is what i get using the following code note = '\U0001d15e' bytes = note.encode('utf-8') print(bytes) >>> b'\xf0\x9d\x85\x9e' If i try to print the string directly i get note = '\U0001d15e' # bytes = note.encode('utf-8') print(note) Traceback (most recent call last): File "c:\___\___\___\file.py", line 34, in <module> print(note) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\U0001d15e' in position 0: character maps to <undefined> I am not sure what the problem is. I know that similar questions have been asked before, however the proposed solution did not work for me. Thank you for your help in advance :) i am using python 3.10.4 A: My system doesn't like that representation either, but can directly put it into a string and print() it To me, this is some artifact of your system being Windows and you need to set your console to use UTF-8 instead of cp1252 Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10) >>> "" == "\U0001d15e" True >>> note = "" >>> note.encode() b'\xf0\x9d\x85\x9e' >>> print(note) 
How to print high numbered unicode characters in python
i am trying to write a python program that prints music notes (like -> u1d15e). However, i cant quite get it to work. Here is what i get using the following code note = '\U0001d15e' bytes = note.encode('utf-8') print(bytes) >>> b'\xf0\x9d\x85\x9e' If i try to print the string directly i get note = '\U0001d15e' # bytes = note.encode('utf-8') print(note) Traceback (most recent call last): File "c:\___\___\___\file.py", line 34, in <module> print(note) File "C:\Users\user\AppData\Local\Programs\Python\Python310\lib\encodings\cp1252.py", line 19, in encode return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode character '\U0001d15e' in position 0: character maps to <undefined> I am not sure what the problem is. I know that similar questions have been asked before, however the proposed solution did not work for me. Thank you for your help in advance :) i am using python 3.10.4
[ "My system doesn't like that representation either, but can directly put it into a string and print() it\nTo me, this is some artifact of your system being Windows and you need to set your console to use UTF-8 instead of cp1252\nUsing UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10)\n>>> \"\" == \"\\U0001d15e\"\nTrue\n>>> note = \"\"\n>>> note.encode()\nb'\\xf0\\x9d\\x85\\x9e'\n>>> print(note)\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "unicode", "windows" ]
stackoverflow_0074562616_python_unicode_windows.txt
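Besides CHCP 65001, the stream encoding can be switched from inside the program on Python 3.7+ with sys.stdout.reconfigure; whether the terminal font can actually draw the glyph is a separate assumption:
import sys

if sys.stdout.encoding and sys.stdout.encoding.lower() != "utf-8":
    sys.stdout.reconfigure(encoding="utf-8")  # Python 3.7+
print("\U0001d15e")  # MUSICAL SYMBOL QUARTER NOTE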
Q: Skip blank lines in dat file with for loop in Python I'm trying to write a bit of code that reads in a dat file that has a bunch of blank lines and put them into lists to be manipulated later. Ex: 0.92; 0.70 1.53; 1.41; 1.00 1.47; 1.08 ; 0.73; 0.18 1.50; 1.17 ; ; 1.68; I would like to skip the lines that have blank spaces. This is what I have so far... file_name2 = 'Gliese.dat' filein2 = open(file_name2, 'r') lines2 = filein2.readlines() data2 = [] filein2.close() for line in lines2: data2.append(line) for i in data2: items2 = i.split(';') if items2[0]=='' or items2[1]=='': items2.strip() else: ms_U_B.append(float(items2[1])) ms_B_V.append(float(items2[0])) I'm getting the error "ValueError: could not convert string to float:' ' ". Rather new to python and would appreciate any help :) A: You get the error cause you try to convert the empty spaces to float. Try this: with open("Gliese.dat", "r")as f: data = [] for line in f: line = line.strip() if line.split(";")[0] and line.split(";")[1]: data.append(line.rstrip("\n"))
Skip blank lines in dat file with for loop in Python
I'm trying to write a bit of code that reads in a dat file that has a bunch of blank lines and put them into lists to be manipulated later. Ex: 0.92; 0.70 1.53; 1.41; 1.00 1.47; 1.08 ; 0.73; 0.18 1.50; 1.17 ; ; 1.68; I would like to skip the lines that have blank spaces. This is what I have so far... file_name2 = 'Gliese.dat' filein2 = open(file_name2, 'r') lines2 = filein2.readlines() data2 = [] filein2.close() for line in lines2: data2.append(line) for i in data2: items2 = i.split(';') if items2[0]=='' or items2[1]=='': items2.strip() else: ms_U_B.append(float(items2[1])) ms_B_V.append(float(items2[0])) I'm getting the error "ValueError: could not convert string to float:' ' ". Rather new to python and would appreciate any help :)
[ "You get the error cause you try to convert the empty spaces to float.\nTry this:\nwith open(\"Gliese.dat\", \"r\")as f:\n\n data = []\n\n for line in f:\n line = line.strip()\n if line.split(\";\")[0] and line.split(\";\")[1]:\n data.append(line.rstrip(\"\\n\"))\n\n" ]
[ 0 ]
[]
[]
[ "python", "readlines" ]
stackoverflow_0074562628_python_readlines.txt
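A variant of the same idea that also converts the two fields to floats in one pass; a sketch assuming every wanted row has at least two non-empty semicolon-separated fields, as in the sample data:
pairs = []
with open("Gliese.dat") as f:
    for line in f:
        fields = [p.strip() for p in line.split(";")]
        # skip rows with a missing first or second value
        if len(fields) >= 2 and fields[0] and fields[1]:
            pairs.append((float(fields[0]), float(fields[1])))

ms_B_V = [a for a, b in pairs]   # first column, as in the question
ms_U_B = [b for a, b in pairs]   # second column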
Q: Why, after replacing every value in a row, are the first two rows completely different? | Python, Pandas I've got a simple script to remove characters from the left & right of a string containing a datetime value. The reason for this is that there are unnecessary characters on each side of the actual value I want. It works by looping through all items in a column (called Time), removing the characters & then replacing the old value with the new one. This works for the most part, excluding weirdly the first two rows in the dataframe. For some odd reason, in the .csv files I am using, the 'Time' column has the values as strings, whereas the 'Closing Time' throws an error unless I specify they are strings, even though they have the exact same structure. Here is a screenshot of what the input fields on the .csv look like: Please note: the second row, first value not having a speech mark before it is a weird excel thing & the actual value has it on as seen above in the same screenshot. Here is the code I am using: import pandas as pd df = pd.read_csv("file.csv") # reading file for item in df['Time']: item2 = item[1:] item3 = item2[:-8] df.replace(item, item3, inplace=True) for item21 in df['Closing Time']: item22 = str(item21)[1:] item23 = str(item22)[:-8] df.replace(item21, item23, inplace=True) print(df['Closing Time']) print(df['Time']) input("\nScript executed successfully | Press ENTER to Exit. ") Here is the output: Is this a bug? Because I see no reason why the first two columns specifically are coming out as different to the rest. A: If you only want to extract the timestamps as string, I would suggest using regex. Furthermore, iterating over a dataset with a for loop is highly inefficient (and with big datasets, you will notice the slowness); I suggest using an str.extract function: import pandas as pd df = pd.read_csv("file.csv") # reading file match_string = r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})' df['Time'] = df['Time'].str.extract(match_string) df['Closing Time'] = df['Closing Time'].str.extract(match_string) print(df['Closing Time']) print(df['Time']) input("\nScript executed successfully | Press ENTER to Exit. ")
Why, after replacing every value in a row, are the first two rows completely different? | Python, Pandas
I've got a simple script to remove characters from the left & right of a string containing a datetime value. The reason for this is that there are unnecessary characters on each side of the actual value I want. It works by looping through all items in a column (called Time), removing the characters & then replacing the old value with the new one. This works for the most part, excluding weirdly the first two rows in the dataframe. For some odd reason, in the .csv files I am using, the 'Time' column has the values as strings, whereas the 'Closing Time' throws an error unless I specify they are strings, even though they have the exact same structure. Here is a screenshot of what the input fields on the .csv look like: Please note: the second row, first value not having a speech mark before it is a weird excel thing & the actual value has it on as seen above in the same screenshot. Here is the code I am using: import pandas as pd df = pd.read_csv("file.csv") # reading file for item in df['Time']: item2 = item[1:] item3 = item2[:-8] df.replace(item, item3, inplace=True) for item21 in df['Closing Time']: item22 = str(item21)[1:] item23 = str(item22)[:-8] df.replace(item21, item23, inplace=True) print(df['Closing Time']) print(df['Time']) input("\nScript executed successfully | Press ENTER to Exit. ") Here is the output: Is this a bug? Because I see no reason why the first two columns specifically are coming out as different to the rest.
[ "If you only want to extract the timestamps as string, I would suggest using regex. Furthermore, iterating over a dataset with a for loop is highly inefficient (and with big datasets, you will notice the slowness); I suggest using an str.extract function:\nimport pandas as pd\n\ndf = pd.read_csv(\"file.csv\") # reading file\n\nmatch_string = r'(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2})'\ndf['Time'] = df['Time'].str.extract(match_string)\ndf['Closing Time'] = df['Closing Time'].str.extract(match_string)\n\nprint(df['Closing Time'])\nprint(df['Time'])\n\ninput(\"\\nScript executed successfully | Press ENTER to Exit. \")\n\n" ]
[ 0 ]
[]
[]
[ "loops", "pandas", "python" ]
stackoverflow_0074562609_loops_pandas_python.txt
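Building on the str.extract answer, the extracted strings can also be parsed into real datetimes so later comparisons are not string-based; a sketch assuming the same "YYYY-MM-DD HH:MM:SS" shape (errors="coerce" turns non-matching rows into NaT instead of raising):
import pandas as pd

df = pd.read_csv("file.csv")
pattern = r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
for col in ("Time", "Closing Time"):
    # extract the timestamp, then convert the whole column at once
    df[col] = pd.to_datetime(
        df[col].astype(str).str.extract(pattern, expand=False),
        errors="coerce")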
Q: RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 100 but got size 1 for tensor number 1 in the list I'm new to PyTorch not able to figure out what I'm doing wrong, below is the code x_np, y_np = datasets.make_regression(n_samples=100,n_features=1,noise=20,random_state=0) x = torch.from_numpy(x_np.astype(np.float32)) y = torch.from_numpy(y_np.astype(np.float32)) y = y.view(y.shape[0],1) n_samples, n_features = x.shape class Regression(nn.Module): def __init__(self, inputsize, outputsize, hiddensize): super(Regression, self).__init__() self.hidden_size = hiddensize self.input_size = inputsize self.output_size = outputsize self.i2h = nn.Linear(self.input_size+self.hidden_size, self.hidden_size) self.h2o = nn.Linear(self.input_size+self.hidden_size, self.output_size) def forward(self, x): hidden = torch.zeros(1, self.hidden_size) print(x.shape) print(hidden.shape) combined = torch.cat((x,hidden), 1) hidden = self.i2h(combined) output = self.h2o(combined) return output model = Regression(n_features, n_features, 16) lr = 0.01 loss = nn.MSELoss() opt = torch.optim.SGD(model.parameters(), lr = lr) for epoch in range(1000): ypred = model(x) l = loss(y, ypred) l.backward() opt.step() opt.zero_grad() if epoch % 100 == 0: [w, b] = model.parameters() print(f'epoch {epoch+1}: w = {w[0][0].item():.3f}, loss = {l:.8f}') While training, I am getting this error RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 100 but got size 1 for tensor number 1 in the list A: i2h maps self.input_size + self.hidden_size dimension to self.hidden_size, so for h2o, you have to define a mapping starting from self.hidden dimension. Also, you have to update the forward accordingly. Here is the complete code: class Regression(nn.Module): def __init__(self, inputsize, outputsize, hiddensize): super(Regression, self).__init__() self.hidden_size = hiddensize self.input_size = inputsize self.output_size = outputsize self.i2h = nn.Linear(self.input_size+self.hidden_size, self.hidden_size) self.h2o = nn.Linear(self.hidden_size, self.output_size) def forward(self, x): hidden = torch.zeros(1, self.hidden_size) print(x.shape) print(hidden.shape) combined = torch.cat((x,hidden), 1) hidden = self.i2h(combined) output = self.h2o(hidden) return output
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 100 but got size 1 for tensor number 1 in the list
I'm new to PyTorch not able to figure out what I'm doing wrong, below is the code x_np, y_np = datasets.make_regression(n_samples=100,n_features=1,noise=20,random_state=0) x = torch.from_numpy(x_np.astype(np.float32)) y = torch.from_numpy(y_np.astype(np.float32)) y = y.view(y.shape[0],1) n_samples, n_features = x.shape class Regression(nn.Module): def __init__(self, inputsize, outputsize, hiddensize): super(Regression, self).__init__() self.hidden_size = hiddensize self.input_size = inputsize self.output_size = outputsize self.i2h = nn.Linear(self.input_size+self.hidden_size, self.hidden_size) self.h2o = nn.Linear(self.input_size+self.hidden_size, self.output_size) def forward(self, x): hidden = torch.zeros(1, self.hidden_size) print(x.shape) print(hidden.shape) combined = torch.cat((x,hidden), 1) hidden = self.i2h(combined) output = self.h2o(combined) return output model = Regression(n_features, n_features, 16) lr = 0.01 loss = nn.MSELoss() opt = torch.optim.SGD(model.parameters(), lr = lr) for epoch in range(1000): ypred = model(x) l = loss(y, ypred) l.backward() opt.step() opt.zero_grad() if epoch % 100 == 0: [w, b] = model.parameters() print(f'epoch {epoch+1}: w = {w[0][0].item():.3f}, loss = {l:.8f}') While training, I am getting this error RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 100 but got size 1 for tensor number 1 in the list
[ "i2h maps self.input_size + self.hidden_size dimension to self.hidden_size, so for h2o, you have to define a mapping starting from self.hidden dimension. Also, you have to update the forward accordingly. Here is the complete code:\nclass Regression(nn.Module):\n def __init__(self, inputsize, outputsize, hiddensize):\n super(Regression, self).__init__()\n self.hidden_size = hiddensize\n self.input_size = inputsize\n self.output_size = outputsize\n self.i2h = nn.Linear(self.input_size+self.hidden_size, self.hidden_size)\n self.h2o = nn.Linear(self.hidden_size, self.output_size)\n def forward(self, x):\n hidden = torch.zeros(1, self.hidden_size)\n print(x.shape)\n print(hidden.shape)\n combined = torch.cat((x,hidden), 1)\n hidden = self.i2h(combined)\n output = self.h2o(hidden)\n return output\n\n" ]
[ 0 ]
[]
[]
[ "neural_network", "python", "pytorch" ]
stackoverflow_0074561833_neural_network_python_pytorch.txt
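The error message itself ("Expected size 100 but got size 1") comes from the torch.cat call: x has batch size 100 while hidden is created with batch size 1. A minimal reproduction and fix, independent of the model class:
import torch

x = torch.randn(100, 1)               # batch of 100 samples, 1 feature
hidden = torch.zeros(1, 16)           # batch dim 1 -> cat along dim=1 fails
hidden = torch.zeros(x.size(0), 16)   # match the batch dimension instead
combined = torch.cat((x, hidden), dim=1)
print(combined.shape)                 # torch.Size([100, 17])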
Q: Measure CPU Usage (in Cores) and Memory Usage of Compiled Programs I have two programs, one in go and one in python that I am trying to characterize. For this, I'd like to measure the CPU usage and Memory Usage by regularly measuring the amounts consumed by the two programs at regular intervals (say, every 0.1 seconds) for some given amount of time. I have been looking everywhere for any sort of solution to this problem, but I can't find any. Does a good solution to this exist? If so, what? A: You can also try Check server load with top, htop, iotop. A: For my particular case, the best choice is to instrument the each of the programs for something like Prometheus. Then I can scrape the data at regular intervals to get what I am looking for. In this case, I would follow off of something like: https://prometheus.io/docs/guides/go-application/ Or: https://linuxhint.com/monitor-python-applications-prometheus/
Measure CPU Usage (in Cores) and Memory Usage of Compiled Programs
I have two programs, one in go and one in python that I am trying to characterize. For this, I'd like to measure the CPU usage and Memory Usage by regularly measuring the amounts consumed by the two programs at regular intervals (say, every 0.1 seconds) for some given amount of time. I have been looking everywhere for any sort of solution to this problem, but I can't find any. Does a good solution to this exist? If so, what?
[ "You can also try Check server load with top, htop, iotop.\n", "For my particular case, the best choice is to instrument the each of the programs for something like Prometheus. Then I can scrape the data at regular intervals to get what I am looking for.\nIn this case, I would follow off of something like: https://prometheus.io/docs/guides/go-application/\nOr: https://linuxhint.com/monitor-python-applications-prometheus/\n" ]
[ 1, 1 ]
[]
[]
[ "go", "linux", "python" ]
stackoverflow_0074552123_go_linux_python.txt
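For sampling an already-running process every 0.1 seconds without instrumenting it, psutil covers both metrics; a sketch (pid is whichever Go or Python process you launched, and cpu_percent measures usage since the previous call, so it is primed once):
import time
import psutil

def sample(pid, interval=0.1, duration=10.0):
    proc = psutil.Process(pid)
    proc.cpu_percent(None)                       # prime the counter
    samples = []
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        time.sleep(interval)
        cores = proc.cpu_percent(None) / 100.0   # approx. CPU cores in use
        rss = proc.memory_info().rss             # resident memory in bytes
        samples.append((cores, rss))
    return samples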
Q: How to join 2 tables with getting all rows from left table and only matching ones in right Table 1 Table 2 Table2.plan_selected shows what plan did the user choose. Eg: The user with user_id=4 in Table2 choose the id = 2 plan from Table1. I want to get all the rows from Table1 and only matching rows from Table2 for a particular user_id. The expected result is like this. I want to fetch all the rows of Table1 and only the selected plan from Table2 for a particular user_id lets say 4. The expected result will be like this: id name plantype plandetails requestpermonth price isdeleted planselected 1 EXECUTIVE MONTHLY {1000 MAY REQUSTS} 1000 50 0 NULL 2 BASIC MONTHLY {500 MAY REQUSTS} 1000 25 0 2 3 FREEEE MONTHLY {10 MAY REQUSTS} 1000 0 0 NULL 4 EXECUTIVE YEARLY {1000 MAY REQUSTS} 1000 500 0 NULL 5 BASIC YEARLY {500 MAY REQUSTS} 1000 250 0 NULL 6 FREEEE YEARLY {10 MAY REQUSTS} 1000 0 0 NULL What I have tried to do was use a simple left join. select plans.id, name, plan_details, plan_type, request_per_month, price,is_deleted, plan_selected from SubscriptionsPlans as plans left join SubscriptionsOrder as orders on plans.id=orders.plan_selected where orders.user_id = 4 These are my 2 models. ORM queryset or SQL query will help class SubscriptionsPlans(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) plan_type = models.CharField(max_length=255) plan_details = models.TextField(max_length=1000) request_per_month = models.IntegerField() price = models.FloatField() is_deleted = models.BooleanField(default=False) class SubscriptionsOrder(models.Model): id = models.IntegerField(primary_key=True) user_id = models.ForeignKey( AppUser, null=True, on_delete=models.SET_NULL ) plan_selected = models.ForeignKey(SubscriptionsPlans, null=True, on_delete=models.SET_NULL) billing_info = models.IntegerField() A: You can query with: SubscriptionsPlans.objects.filter(subscriptionsorder__user_id=4) This will list all SubscriptionPlans for which there is a SubscriptionOrder with 4 as user_id.
How to join 2 tables with getting all rows from left table and only matching ones in right
Table 1 Table 2 Table2.plan_selected shows what plan did the user choose. Eg: The user with user_id=4 in Table2 choose the id = 2 plan from Table1. I want to get all the rows from Table1 and only matching rows from Table2 for a particular user_id. The expected result is like this. I want to fetch all the rows of Table1 and only the selected plan from Table2 for a particular user_id lets say 4. The expected result will be like this: id name plantype plandetails requestpermonth price isdeleted planselected 1 EXECUTIVE MONTHLY {1000 MAY REQUSTS} 1000 50 0 NULL 2 BASIC MONTHLY {500 MAY REQUSTS} 1000 25 0 2 3 FREEEE MONTHLY {10 MAY REQUSTS} 1000 0 0 NULL 4 EXECUTIVE YEARLY {1000 MAY REQUSTS} 1000 500 0 NULL 5 BASIC YEARLY {500 MAY REQUSTS} 1000 250 0 NULL 6 FREEEE YEARLY {10 MAY REQUSTS} 1000 0 0 NULL What I have tried to do was use a simple left join. select plans.id, name, plan_details, plan_type, request_per_month, price,is_deleted, plan_selected from SubscriptionsPlans as plans left join SubscriptionsOrder as orders on plans.id=orders.plan_selected where orders.user_id = 4 These are my 2 models. ORM queryset or SQL query will help class SubscriptionsPlans(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) plan_type = models.CharField(max_length=255) plan_details = models.TextField(max_length=1000) request_per_month = models.IntegerField() price = models.FloatField() is_deleted = models.BooleanField(default=False) class SubscriptionsOrder(models.Model): id = models.IntegerField(primary_key=True) user_id = models.ForeignKey( AppUser, null=True, on_delete=models.SET_NULL ) plan_selected = models.ForeignKey(SubscriptionsPlans, null=True, on_delete=models.SET_NULL) billing_info = models.IntegerField()
[ "You can query with:\nSubscriptionsPlans.objects.filter(subscriptionsorder__user_id=4)\nThis will list all SubscriptionPlans for which there is a SubscriptionOrder with 4 as user_id.\n" ]
[ 0 ]
[]
[]
[ "django", "django_rest_framework", "python", "sql" ]
stackoverflow_0074561745_django_django_rest_framework_python_sql.txt
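The same conditional LEFT JOIN can also be expressed in a single queryset with FilteredRelation (Django 2.0+); a sketch against the models above, where the reverse relation name subscriptionsorder is the Django default and an assumption here:
from django.db.models import F, FilteredRelation, Q

plans = (
    SubscriptionsPlans.objects
    .annotate(user_order=FilteredRelation(
        "subscriptionsorder",
        condition=Q(subscriptionsorder__user_id=4)))
    .annotate(plan_selected=F("user_order__plan_selected"))
)
# plan_selected is NULL for plans this user never ordered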
Q: Using lxml to parse text and break it into a list of sentences using some tags to add structure Consider the following text in custom xml: <?xml version="1.0"?> <body> <heading><b>This is a title</b></heading> <p>This is a first <b>paragraph</b>.</p> <p>This is a second <b>paragraph</b>. With a list: <ul> <li>first item</li> <li>second item</li> </ul> And the end. </p> <p>This is a third paragraph. <ul> <li>This is a first long sentence.</li> <li>This is a second long sentence.</li> </ul> And the end of the paragraph.</p> </body> I would like to convert that in a list of plain strings with the following rules: Discard some tags like <b></b> Each heading and each paragraph are distinct elements in the list. Add a final period if missing at the end of the element. When a list is preceded by a colon ":", just add a line break between elements and add dashes. When a list is not preceded by a colon, act as if the paragraph was split into several paragraphs The result would be: [ "This is a title.", # Note the period "This is a first paragraph.", "This is a second paragraph. With a list:\n- first item\n- second item\nAnd the end.", "This is a third paragraph.", "This is a first long sentence.", "This is a second long sentence.", "And the end of the paragraph." ] I would like to do that by iterating on the result of the lmxl etree etree.fromstring(text). My first few trials are overly complicated and slow, and I'm sure there is a nice approach to this problem. How to do it? A: Interesting exercise... The following is a bit convoluted and won't give you the exact output you indicated, but maybe it'll be close enough for you (or someone else) to modify it: from lxml import etree stuff = """[your xml]""" doc = etree.XML(stuff) #we need this in order to count how many <li> elements meet the condition #in your xml there are only two, but this will take care of more elements comms = len(doc.xpath('//p[contains(.,":")]//ul//li')) final = [] for t in doc.xpath('//*'): line = "".join(list(t.itertext())) allin = [l.strip() for l in line.split('\n ') if len(l.strip())>0] for l in allin: ind = allin.index(l) for c in range(comms): if ":" in allin[ind-(c+1)]: final.append("- "+l) if l[-1] =="." or l[-1] ==":": final.append(l) else: if not ("- "+l in final): final.append(l+".") break final Output: ['This is a title.', 'This is a first paragraph.', 'This is a second paragraph. With a list:', '- first item', '- second item', 'And the end.', 'This is a third paragraph.', 'This is a first long sentence.', 'This is a second long sentence.', 'And the end of the paragraph.']
Using lxml to parse text and break it into a list of sentences using some tags to add structure
Consider the following text in custom xml: <?xml version="1.0"?> <body> <heading><b>This is a title</b></heading> <p>This is a first <b>paragraph</b>.</p> <p>This is a second <b>paragraph</b>. With a list: <ul> <li>first item</li> <li>second item</li> </ul> And the end. </p> <p>This is a third paragraph. <ul> <li>This is a first long sentence.</li> <li>This is a second long sentence.</li> </ul> And the end of the paragraph.</p> </body> I would like to convert that in a list of plain strings with the following rules: Discard some tags like <b></b> Each heading and each paragraph are distinct elements in the list. Add a final period if missing at the end of the element. When a list is preceded by a colon ":", just add a line break between elements and add dashes. When a list is not preceded by a colon, act as if the paragraph was split into several paragraphs The result would be: [ "This is a title.", # Note the period "This is a first paragraph.", "This is a second paragraph. With a list:\n- first item\n- second item\nAnd the end.", "This is a third paragraph.", "This is a first long sentence.", "This is a second long sentence.", "And the end of the paragraph." ] I would like to do that by iterating on the result of the lmxl etree etree.fromstring(text). My first few trials are overly complicated and slow, and I'm sure there is a nice approach to this problem. How to do it?
[ "Interesting exercise...\nThe following is a bit convoluted and won't give you the exact output you indicated, but maybe it'll be close enough for you (or someone else) to modify it:\nfrom lxml import etree\nstuff = \"\"\"[your xml]\"\"\"\n \ndoc = etree.XML(stuff)\n \n#we need this in order to count how many <li> elements meet the condition\n#in your xml there are only two, but this will take care of more elements\ncomms = len(doc.xpath('//p[contains(.,\":\")]//ul//li'))\nfinal = []\n \nfor t in doc.xpath('//*'):\n line = \"\".join(list(t.itertext())) \n allin = [l.strip() for l in line.split('\\n ') if len(l.strip())>0]\n for l in allin:\n ind = allin.index(l)\n for c in range(comms):\n if \":\" in allin[ind-(c+1)]:\n final.append(\"- \"+l)\n if l[-1] ==\".\" or l[-1] ==\":\":\n final.append(l)\n else:\n if not (\"- \"+l in final):\n final.append(l+\".\")\n break\n \nfinal\n\nOutput:\n['This is a title.',\n 'This is a first paragraph.',\n 'This is a second paragraph. With a list:',\n '- first item',\n '- second item',\n 'And the end.',\n 'This is a third paragraph.',\n 'This is a first long sentence.',\n 'This is a second long sentence.',\n 'And the end of the paragraph.']\n\n" ]
[ 0 ]
[]
[]
[ "elementtree", "lxml", "parsing", "python" ]
stackoverflow_0074554835_elementtree_lxml_parsing_python.txt
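If the only job of tags like <b> is to be discarded, lxml can remove them up front with etree.strip_tags, which merges their text into the parent and simplifies the later sentence logic; a small self-contained sketch:
from lxml import etree

xml = b"<body><heading><b>This is a title</b></heading><p>This is a first <b>paragraph</b>.</p></body>"
doc = etree.fromstring(xml)
etree.strip_tags(doc, "b")                     # drop <b> in place, keep its text
for el in doc.iter("heading", "p"):
    text = " ".join("".join(el.itertext()).split())
    print(text if text.endswith(".") else text + ".")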
Q: Using default dictionaries problem (python) I have a slightly weird input of data that is in this format: data = { 'sensor1': {'units': 'x', 'values': [{'time': 17:00, 'value': 10}, {'time': 17:10, 'value': 12}, {'time': 17:20, 'value' :7}, ...]} 'sensor2': {'units': 'x', 'values': [{'time': 17:00, 'value': 9}, {'time': 17:20, 'value': 11}, ...]} } And I want to collect the output to look like: {'17:00': [10,9], '17:10': [12,], '17:20': [7,11], ... } So the keys are the unique timestamps (ordered) and the values are a list of the values of each sensor, in order they come in the original dictionary. If there is no value for the timestamp in one sensor, it is just left as an empty element ''. I know I might need to use defaultdict but I've not had any success. e.g. s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)] d = defaultdict(list) for k, v in s: d[k].append(v) sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] d = defaultdict(default_factory=list) values_list = data.values() for item in values_list: for k, v in item['values']: d[k].append(v) result = sorted(d.items()) Encounters key error as each item in values_list is not a tuple but a dict. A: You can also use dict in this way: data = {'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10}, {'time': '17:10', 'value': 12}, {'time': '17:20', 'value': 7}, ]}, 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9}, {'time': '17:20', 'value': 11}, ]} } d = {} for item in data.values(): for pair in item['values']: if pair["time"] in d: d[pair["time"]].append(pair["value"]) else: d[pair["time"]] = [pair["value"]] result = sorted(d.items()) print(result) Output: [('17:00', [10, 9]), ('17:10', [12]), ('17:20', [7, 11])] Using defaultdict defaultdict example with list in Python documentation : from collections import defaultdict data = {'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10}, {'time': '17:10', 'value': 12}, {'time': '17:20', 'value': 7}, ]}, 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9}, {'time': '17:20', 'value': 11}, ]} } d = defaultdict(list) for item in data.values(): for pair in item['values']: d[pair["time"]].append(pair["value"]) result = sorted(d.items()) print(result) Output: [('17:00', [10, 9]), ('17:10', [12]), ('17:20', [7, 11])]
Using default dictionaries problem (python)
I have a slightly weird input of data that is in this format: data = { 'sensor1': {'units': 'x', 'values': [{'time': 17:00, 'value': 10}, {'time': 17:10, 'value': 12}, {'time': 17:20, 'value' :7}, ...]} 'sensor2': {'units': 'x', 'values': [{'time': 17:00, 'value': 9}, {'time': 17:20, 'value': 11}, ...]} } And I want to collect the output to look like: {'17:00': [10,9], '17:10': [12,], '17:20': [7,11], ... } So the keys are the unique timestamps (ordered) and the values are a list of the values of each sensor, in order they come in the original dictionary. If there is no value for the timestamp in one sensor, it is just left as an empty element ''. I know I might need to use defaultdict but I've not had any success. e.g. s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)] d = defaultdict(list) for k, v in s: d[k].append(v) sorted(d.items()) [('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])] d = defaultdict(default_factory=list) values_list = data.values() for item in values_list: for k, v in item['values']: d[k].append(v) result = sorted(d.items()) Encounters key error as each item in values_list is not a tuple but a dict.
[ "You can also use dict in this way:\ndata = {'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10},\n {'time': '17:10', 'value': 12},\n {'time': '17:20', 'value': 7},\n ]},\n 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9},\n {'time': '17:20', 'value': 11},\n ]}\n }\n\nd = {}\nfor item in data.values():\n for pair in item['values']:\n if pair[\"time\"] in d:\n d[pair[\"time\"]].append(pair[\"value\"])\n else:\n d[pair[\"time\"]] = [pair[\"value\"]]\n\nresult = sorted(d.items())\nprint(result)\n\nOutput:\n[('17:00', [10, 9]), ('17:10', [12]), ('17:20', [7, 11])]\n\nUsing defaultdict defaultdict example with list in Python documentation :\nfrom collections import defaultdict\n\ndata = {'sensor1': {'units': 'x', 'values': [{'time': '17:00', 'value': 10},\n {'time': '17:10', 'value': 12},\n {'time': '17:20', 'value': 7},\n ]},\n 'sensor2': {'units': 'x', 'values': [{'time': '17:00', 'value': 9},\n {'time': '17:20', 'value': 11},\n ]}\n }\n\nd = defaultdict(list)\nfor item in data.values():\n for pair in item['values']:\n d[pair[\"time\"]].append(pair[\"value\"])\nresult = sorted(d.items())\nprint(result)\n\nOutput:\n[('17:00', [10, 9]), ('17:10', [12]), ('17:20', [7, 11])]\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "python", "resourcedictionary", "sorting" ]
stackoverflow_0074562798_dictionary_python_resourcedictionary_sorting.txt
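The same grouping also works with plain dict.setdefault, which avoids the collections import; a sketch over the structure from the question:
data = {"sensor1": {"units": "x", "values": [{"time": "17:00", "value": 10},
                                             {"time": "17:10", "value": 12}]},
        "sensor2": {"units": "x", "values": [{"time": "17:00", "value": 9}]}}

d = {}
for sensor in data.values():
    for pair in sensor["values"]:
        d.setdefault(pair["time"], []).append(pair["value"])
print(dict(sorted(d.items())))   # {'17:00': [10, 9], '17:10': [12]}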
Q: Assigning player in an interactive game I am making a solution to a simple game where you can choose the amount of players with maximum 5 players and minimum 2. Each player is idenitied by their first and last name. max_players = int(input(" Insert the number of players there are there? : ")) while len(players_list) < max_players: player1 = input(" What is your first and last name? : ") players_list.append(players) print("players so far : ") player2 = input ("What is your first and last name? :") players_list.append(players) print("players so far : ") print(players_lists) The code works partially. The problem is that although I mentioned the maximum amount of players you can still unse a number higher than 5. Also, when I insert the first name of the player, it shows that the "players" are not defined? The exepcted output is Hello how many players are in the game? insert number between 2 and 5 Please insert your name Then the same commad for the other players. A: Also, when I insert the first name of the player, it shows that the "players" are not defined You try to append players, which indeed does not exist before you call it, to players_list, which also does not exist. You need to define these two first. For the number of players limit, you can add a simple check, whether max_players is between 2 and 5. max_players = int(input(" Insert the number of players there are ? : ")) while (max_players <2) or (max_players > 5) : max_players = int(input(" Number of players must be between 2 and 5. Number of players ? ")) players_list = [] while len(players_list) < max_players: player1 = input(" What is your first and last name? : ") players_list.append(player1) print("players so far : ") print(players_list)
Assigning player in an interactive game
I am making a solution to a simple game where you can choose the amount of players with maximum 5 players and minimum 2. Each player is identified by their first and last name. max_players = int(input(" Insert the number of players there are there? : ")) while len(players_list) < max_players: player1 = input(" What is your first and last name? : ") players_list.append(players) print("players so far : ") player2 = input ("What is your first and last name? :") players_list.append(players) print("players so far : ") print(players_lists) The code works partially. The problem is that although I mentioned the maximum amount of players you can still use a number higher than 5. Also, when I insert the first name of the player, it shows that the "players" are not defined? The expected output is Hello how many players are in the game? insert number between 2 and 5 Please insert your name Then the same command for the other players.
[ "\nAlso, when I insert the first name of the player, it shows that the \"players\" are not defined\n\nYou try to append players, which indeed does not exist before you call it, to players_list, which also does not exist. You need to define these two first.\nFor the number of players limit, you can add a simple check, whether max_players is between 2 and 5.\nmax_players = int(input(\" Insert the number of players there are ? : \"))\nwhile (max_players <2) or (max_players > 5) :\n max_players = int(input(\" Number of players must be between 2 and 5. Number of players ? \"))\n\nplayers_list = []\nwhile len(players_list) < max_players:\n player1 = input(\" What is your first and last name? : \")\n players_list.append(player1)\n print(\"players so far : \")\n print(players_list)\n\n\n" ]
[ 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074562397_list_python.txt
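A sketch of the same flow with the range check factored into a helper (the name ask_int is hypothetical), which also survives non-numeric input instead of crashing on int():
def ask_int(prompt, lo, hi):
    while True:
        try:
            n = int(input(prompt))
        except ValueError:
            print("Please enter a whole number.")
            continue
        if lo <= n <= hi:
            return n
        print(f"Please enter a number between {lo} and {hi}.")

max_players = ask_int("How many players are in the game (2-5)? ", 2, 5)
players_list = [input(f"Player {i + 1}, what is your first and last name? ")
                for i in range(max_players)]
print("players:", players_list)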
Q: What is the use of `pip install -e .` if I can simply run the python script using the environment? Based on this answer, I can fully understand the use of: pip install -e /path/to/locations/repo However, I am yet to see the use of: pip install -e . I can understand it from the perspective of doing pip install -e /path/to/locations/repo, but from the working directory of the project dependency. But that's the only use case I can see. In what use case would I want to install locally the same package I am now working on? A: pip install -e will just create a projekt_name.egg-info file in the venv\Lib\site-packages folder with a link to the repo location. Nothing is copied. You can continue developing and you can access your project packages as if the repo was properly installed. No dirty sys.path.append-hacks needed.
What is the use of `pip install -e .` if I can simply run the python script using the environment?
Based on this answer, I can fully understand the use of: pip install -e /path/to/locations/repo However, I am yet to see the use of: pip install -e . I can understand it from the perspective of doing pip install -e /path/to/locations/repo, but from the working directory of the project dependency. But that's the only use case I can see. In what use case would I want to install locally the same package I am now working on?
[ "pip install -e\n\nwill just create a projekt_name.egg-info file in the venv\\Lib\\site-packages folder with a link to the repo location. Nothing is copied.\nYou can continue developing and you can access your project packages as if the repo was properly installed. No dirty sys.path.append-hacks needed.\n" ]
[ 0 ]
[]
[]
[ "pip", "python" ]
stackoverflow_0074561368_pip_python.txt
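For pip install -e . to have something to install, the repo root needs build metadata; a minimal hypothetical setup.py (a pyproject.toml-based setup works just as well):
# setup.py at the repo root, then run: pip install -e .
from setuptools import setup, find_packages

setup(
    name="myproject",      # hypothetical project name
    version="0.1.0",
    packages=find_packages(),
)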
Q: filter custom spans overlaps in spacy doc I have a bunch of regex in this way: (for simplicity the regex patters are very easy, the real case the regex are very long and barely incomprehensible since they are created automatically from other tool) I want to create spans in a doc based on those regex. This is the code: import spacy from spacy.tokens import Doc, Span, Token import re rx1 = ["blue","blue print"] text = " this is blue but there is a blue print. The light is red and the heat is in the infra red." my_regexes = {'blue':["blue","blue print"], 'red': ["red", "infra red"] } nlp = spacy.blank("en") doc = nlp(text) print(doc.text) for name, rxs in my_regexes.items(): doc.spans[name] = [] for rx in rxs: for i, match in enumerate(re.finditer(rx, doc.text)): start, end = match.span() span = doc.char_span(start, end, alignment_mode="expand") # This is a Span object or None if match doesn't map to valid token sequence span_to_add = Span(doc, span.start, span.end, label=name +str(i)) doc.spans[name].append(span_to_add) if span is not None: print("Found match:", name, start, end, span.text ) It works. Now I want to filter the spans in a way that when a series of tokens (for instance "infra red") contain another span ("red") only the longest one is kept. I saw this: How to avoid double-extracting of overlapping patterns in SpaCy with Matcher? but that looks to be for a matcher, and I can not make it work in my case. Since I would like to eliminate the token Span out of the document. Any idea? A: spacy.util.filter_spans will do this. The answer is the same as the linked question, where matcher results are converted to spans in order to filter them with this function. docs.spans[name] = spacy.util.filter_spans(doc.spans[name])
filter custom spans overlaps in spacy doc
I have a bunch of regex in this way: (for simplicity the regex patters are very easy, the real case the regex are very long and barely incomprehensible since they are created automatically from other tool) I want to create spans in a doc based on those regex. This is the code: import spacy from spacy.tokens import Doc, Span, Token import re rx1 = ["blue","blue print"] text = " this is blue but there is a blue print. The light is red and the heat is in the infra red." my_regexes = {'blue':["blue","blue print"], 'red': ["red", "infra red"] } nlp = spacy.blank("en") doc = nlp(text) print(doc.text) for name, rxs in my_regexes.items(): doc.spans[name] = [] for rx in rxs: for i, match in enumerate(re.finditer(rx, doc.text)): start, end = match.span() span = doc.char_span(start, end, alignment_mode="expand") # This is a Span object or None if match doesn't map to valid token sequence span_to_add = Span(doc, span.start, span.end, label=name +str(i)) doc.spans[name].append(span_to_add) if span is not None: print("Found match:", name, start, end, span.text ) It works. Now I want to filter the spans in a way that when a series of tokens (for instance "infra red") contain another span ("red") only the longest one is kept. I saw this: How to avoid double-extracting of overlapping patterns in SpaCy with Matcher? but that looks to be for a matcher, and I can not make it work in my case. Since I would like to eliminate the token Span out of the document. Any idea?
[ "spacy.util.filter_spans will do this. The answer is the same as the linked question, where matcher results are converted to spans in order to filter them with this function.\ndocs.spans[name] = spacy.util.filter_spans(doc.spans[name])\n\n" ]
[ 1 ]
[]
[]
[ "python", "spacy" ]
stackoverflow_0074560365_python_spacy.txt
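A self-contained illustration of filter_spans on exactly the "infra red" vs "red" overlap from the question; the token boundaries here assume the blank English tokenizer:
import spacy
from spacy.tokens import Span
from spacy.util import filter_spans

nlp = spacy.blank("en")
doc = nlp("the heat is in the infra red")
spans = [Span(doc, 5, 7, label="red0"),   # "infra red"
         Span(doc, 6, 7, label="red1")]   # "red", overlapping
doc.spans["red"] = filter_spans(spans)    # keeps only the longest span
print([(s.text, s.label_) for s in doc.spans["red"]])  # [('infra red', 'red0')]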
Q: How to convert a python dictionary with tuples into a pandas dataframe? I have a python dictionary in which the keys of the dictionary are tuples of two strings and the values are integers. It looks like this: mydic = { ('column1', 'index1'):33, ('column1', 'index2'):34, ('column2', 'index1'):35, ('column2', 'index2'):36 } The first string of the tuples should be used as the column-name in the dataframe and the second string in the tuple should be used as the index. The dataframe from this should look like this: (index) column1 column 2 index1 33 35 index2 34 36 Is there any way to do this? (Or do I have to loop through all elements of the dictionary and build the dataframe one value at a time by hand?) A: Build a pd.Series first (which will have a MultiIndex), then use pd.Series.unstack to get the column names. df = pd.Series(mydic).unstack(0) print(df) column1 column2 index1 33 35 index2 34 36 A: You can use pd.MultiIndex.from_tuples. mydic = { ('column1', 'index1'):33, ('column1', 'index2'):34, ('column2', 'index1'):35, ('column2', 'index2'):36 } df = pd.DataFrame(mydic.values(), index = pd.MultiIndex.from_tuples(mydic)) 0 column1 index1 33 index2 34 column2 index1 35 index2 36 What comes after that is just a workaround. df.T.stack() column1 column2 0 index1 33 35 index2 34 36 Notice that the index contains two rows. Do not forget to reset it. df.T.stack().reset_index().drop('level_0', axis = 1) level_1 column1 column2 0 index1 33 35 1 index2 34 36 You can rename the level_1 if you want to. Hope it helps.
How to convert a python dictionary with tuples into a pandas dataframe?
I have a python dictionary in which the keys of the dictionary are tuples of two strings and the values are integers. It looks like this: mydic = { ('column1', 'index1'):33, ('column1', 'index2'):34, ('column2', 'index1'):35, ('column2', 'index2'):36 } The first string of the tuples should be used as the column-name in the dataframe and the second string in the tuple should be used as the index. The dataframe from this should look like this: (index) column1 column 2 index1 33 35 index2 34 36 Is there any way to do this? (Or do I have to loop through all elements of the dictionary and build the dataframe one value at a time by hand?)
[ "Build a pd.Series first (which will have a MultiIndex), then use pd.Series.unstack to get the column names.\ndf = pd.Series(mydic).unstack(0)\nprint(df)\n\n column1 column2\nindex1 33 35\nindex2 34 36\n\n", "You can use pd.MultiIndex.from_tuples.\nmydic = { ('column1', 'index1'):33, \n ('column1', 'index2'):34, \n ('column2', 'index1'):35, \n ('column2', 'index2'):36 }\n\ndf = pd.DataFrame(mydic.values(), index = pd.MultiIndex.from_tuples(mydic))\n\n 0\ncolumn1 index1 33\n index2 34\ncolumn2 index1 35\n index2 36\n\nWhat comes after that is just a workaround.\ndf.T.stack()\n\n column1 column2\n0 index1 33 35\n index2 34 36\n\nNotice that the index contains two rows. Do not forget to reset it.\ndf.T.stack().reset_index().drop('level_0', axis = 1)\n\n level_1 column1 column2\n0 index1 33 35\n1 index2 34 36\n\nYou can rename the level_1 if you want to. Hope it helps.\n" ]
[ 3, 0 ]
[]
[]
[ "dataframe", "dictionary", "pandas", "python", "tuples" ]
stackoverflow_0074562785_dataframe_dictionary_pandas_python_tuples.txt
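Equivalently, the tuple keys can be regrouped into a dict of dicts first, since pd.DataFrame reads outer keys as columns and inner keys as the index; a sketch:
import pandas as pd

mydic = {("column1", "index1"): 33, ("column1", "index2"): 34,
         ("column2", "index1"): 35, ("column2", "index2"): 36}

nested = {}
for (col, idx), val in mydic.items():
    nested.setdefault(col, {})[idx] = val
print(pd.DataFrame(nested))   # columns: column1/column2, index: index1/index2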
Q: how to search for a list in a list of lists in python I have this list: t = [['1', '0', '1', '0', '0', '0', 'up', 5], ['1', '0', '1', '0', '0', '1', 'up', 5], ['1', '0', '1', '0', '1', '0', 'down', 5]] I want to be able to find the following from that list: o = ['1', '0', '1', '0', '1', '0'] u = "up" y = "down to make it clearer, i want to find out if o exists in t, and to find out whether or not u exists in the sublist where o exists i tried : t = [['1', '0', '1', '0', '0', '0', 'up', 5], ['1', '0', '1', '0', '0', '1', 'up', 5], ['1', '0', '1', '0', '1', '0', 'down', 5]] o = ['1', '0', '1', '0', '1', '0'] u = "up" if o and u in t: print("the list you're looking for is present and the position of that sublist is up") elif o and y in t: print("the list you're looking for is present and the position of that sublist is down") else: print("it's not there") i get this result: it's not there what i am trying to get is: the list you're looking for is present and the position of that sublist is down. A: When you use in, Python will check if the entire object in an element of the list, whereas you are only searching for an element that begin with o. You could for instance do something like : matching_sub_list = None for sub_list in t: if sub_list[:len(o)] == o: # Check if the first elements of sub_list are the same as the elements of o matching_sub_list = sub_list break # This assumes there is only one list that matches if matching_sub_list is None: print("it's not there") else: if u in matching_sub_list: print("the list you're looking for is present and the position of that sublist is up") elif y in matching_sub_list: print("the list you're looking for is present and the position of that sublist is down") But I would recommand to rethink your date structure, and maybe store the sub_lists in t in another way, such as 2 lists of lists : one containing the elements, and one containing the up/down and the following number. You could also create a class dedicated to holding those values, with custom function to check, etc... A: Assuming the number of '1', '0's are fixed in each sublist and is a smaller number like 6 as in your example, its not too much to ask to build a look up set like: look_up = {(''.join(a[:6]), a[6]) for a in t} and then use that to look up like: if (''.join(o), u) in look_up: print("up") elif (''.join(o), y) in look_up: print("down") For your example it should print "down".
how to search for a list in a list of lists in python
I have this list: t = [['1', '0', '1', '0', '0', '0', 'up', 5], ['1', '0', '1', '0', '0', '1', 'up', 5], ['1', '0', '1', '0', '1', '0', 'down', 5]] I want to be able to find the following from that list: o = ['1', '0', '1', '0', '1', '0'] u = "up" y = "down to make it clearer, i want to find out if o exists in t, and to find out whether or not u exists in the sublist where o exists i tried : t = [['1', '0', '1', '0', '0', '0', 'up', 5], ['1', '0', '1', '0', '0', '1', 'up', 5], ['1', '0', '1', '0', '1', '0', 'down', 5]] o = ['1', '0', '1', '0', '1', '0'] u = "up" if o and u in t: print("the list you're looking for is present and the position of that sublist is up") elif o and y in t: print("the list you're looking for is present and the position of that sublist is down") else: print("it's not there") i get this result: it's not there what i am trying to get is: the list you're looking for is present and the position of that sublist is down.
[ "When you use in, Python will check if the entire object in an element of the list, whereas you are only searching for an element that begin with o.\nYou could for instance do something like :\nmatching_sub_list = None\nfor sub_list in t:\n if sub_list[:len(o)] == o: # Check if the first elements of sub_list are the same as the elements of o\n matching_sub_list = sub_list\n break # This assumes there is only one list that matches\n\nif matching_sub_list is None:\n print(\"it's not there\")\nelse:\n if u in matching_sub_list:\n print(\"the list you're looking for is present and the position of that sublist is up\")\n elif y in matching_sub_list:\n print(\"the list you're looking for is present and the position of that sublist is down\")\n\nBut I would recommand to rethink your date structure, and maybe store the sub_lists in t in another way, such as 2 lists of lists : one containing the elements, and one containing the up/down and the following number. You could also create a class dedicated to holding those values, with custom function to check, etc...\n", "Assuming the number of '1', '0's are fixed in each sublist and is a smaller number like 6 as in your example, its not too much to ask to build a look up set like:\nlook_up = {(''.join(a[:6]), a[6]) for a in t}\n\nand then use that to look up like:\nif (''.join(o), u) in look_up:\n print(\"up\")\nelif (''.join(o), y) in look_up:\n print(\"down\")\n\nFor your example it should print \"down\".\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074562764_list_python.txt
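The prefix search in the first answer compresses nicely with next() over a generator; a sketch on the question's data:
t = [["1", "0", "1", "0", "0", "0", "up", 5],
     ["1", "0", "1", "0", "0", "1", "up", 5],
     ["1", "0", "1", "0", "1", "0", "down", 5]]
o = ["1", "0", "1", "0", "1", "0"]

match = next((row for row in t if row[:len(o)] == o), None)
if match is None:
    print("it's not there")
else:
    print(f"the list is present and its position is {match[len(o)]}")  # -> down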
Q: How to create a decorator that can be used either with or without parameters? I'd like to create a Python decorator that can be used either with parameters: @redirect_output("somewhere.log") def foo(): .... or without them (for instance to redirect the output to stderr by default): @redirect_output def foo(): .... Is that at all possible? Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve. A: I know this question is old, but some of the comments are new, and while all of the viable solutions are essentially the same, most of them aren't very clean or easy to read. Like thobe's answer says, the only way to handle both cases is to check for both scenarios. The easiest way is simply to check to see if there is a single argument and it is callable (NOTE: extra checks will be necessary if your decorator only takes 1 argument and it happens to be a callable object): def decorator(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): # called as @decorator else: # called as @decorator(*args, **kwargs) In the first case, you do what any normal decorator does, return a modified or wrapped version of the passed in function. In the second case, you return a 'new' decorator that somehow uses the information passed in with *args, **kwargs. This is fine and all, but having to write it out for every decorator you make can be pretty annoying and not as clean. Instead, it would be nice to be able to automagically modify our decorators without having to re-write them... but that's what decorators are for! Using the following decorator decorator, we can decorate our decorators so that they can be used with or without arguments: def doublewrap(f): ''' a decorator decorator, allowing the decorator to be used as: @decorator(with, arguments, and=kwargs) or @decorator ''' @wraps(f) def new_dec(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): # actual decorated function return f(args[0]) else: # decorator arguments return lambda realf: f(realf, *args, **kwargs) return new_dec Now, we can decorate our decorators with @doublewrap, and they will work with and without arguments, with one caveat: I noted above but should repeat here, the check in this decorator makes an assumption about the arguments that a decorator can receive (namely that it can't receive a single, callable argument). Since we are making it applicable to any decorator now, it needs to be kept in mind, or modified if it will be contradicted. The following demonstrates its use: def test_doublewrap(): from util import doublewrap from functools import wraps @doublewrap def mult(f, factor=2): '''multiply a function's return value''' @wraps(f) def wrap(*args, **kwargs): return factor*f(*args,**kwargs) return wrap # try normal @mult def f(x, y): return x + y # try args @mult(3) def f2(x, y): return x*y # try kwargs @mult(factor=5) def f3(x, y): return x - y assert f(2,3) == 10 assert f2(2,5) == 30 assert f3(8,1) == 5*7 A: Using keyword arguments with default values (as suggested by kquinn) is a good idea, but will require you to include the parentheses: @redirect_output() def foo(): ... If you would like a version that works without the parentheses on the decorator you will have to account for both scenarios in your decorator code. 
If you were using Python 3.0 you could use keyword only arguments for this: def redirect_output(fn=None,*,destination=None): destination = sys.stderr if destination is None else destination def wrapper(*args, **kwargs): ... # your code here if fn is None: def decorator(fn): return functools.update_wrapper(wrapper, fn) return decorator else: return functools.update_wrapper(wrapper, fn) In Python 2.x this can be emulated with varargs tricks: def redirected_output(*fn,**options): destination = options.pop('destination', sys.stderr) if options: raise TypeError("unsupported keyword arguments: %s" % ",".join(options.keys())) def wrapper(*args, **kwargs): ... # your code here if fn: return functools.update_wrapper(wrapper, fn[0]) else: def decorator(fn): return functools.update_wrapper(wrapper, fn) return decorator Any of these versions would allow you to write code like this: @redirected_output def foo(): ... @redirected_output(destination="somewhere.log") def bar(): ... A: I know this is an old question, but I really don't like any of the techniques proposed so I wanted to add another method. I saw that django uses a really clean method in their login_required decorator in django.contrib.auth.decorators. As you can see in the decorator's docs, it can be used alone as @login_required or with arguments, @login_required(redirect_field_name='my_redirect_field'). The way they do it is quite simple. They add a kwarg (function=None) before their decorator arguments. If the decorator is used alone, function will be the actual function it is decorating, whereas if it is called with arguments, function will be None. Example: from functools import wraps def custom_decorator(function=None, some_arg=None, some_other_arg=None): def actual_decorator(f): @wraps(f) def wrapper(*args, **kwargs): # Do stuff with args here... if some_arg: print(some_arg) if some_other_arg: print(some_other_arg) return f(*args, **kwargs) return wrapper if function: return actual_decorator(function) return actual_decorator @custom_decorator def test1(): print('test1') >>> test1() test1 @custom_decorator(some_arg='hello') def test2(): print('test2') >>> test2() hello test2 @custom_decorator(some_arg='hello', some_other_arg='world') def test3(): print('test3') >>> test3() hello world test3 I find this approach that django uses to be more elegant and easier to understand than any of the other techniques proposed here. A: Several answers here already address your problem nicely. With respect to style, however, I prefer solving this decorator predicament using functools.partial, as suggested in David Beazley's Python Cookbook 3: from functools import partial, wraps def decorator(func=None, foo='spam'): if func is None: return partial(decorator, foo=foo) @wraps(func) def wrapper(*args, **kwargs): # do something with `func` and `foo`, if you're so inclined pass return wrapper While yes, you can just do @decorator() def f(*args, **kwargs): pass without funky workarounds, I find it strange looking, and I like having the option of simply decorating with @decorator. As for the secondary mission objective, redirecting a function's output is addressed in this Stack Overflow post. If you want to dive deeper, check out Chapter 9 (Metaprogramming) in Python Cookbook 3, which is freely available to be read online. Some of that material is live demoed (plus more!) in Beazley's awesome YouTube video Python 3 Metaprogramming. 
A: You need to detect both cases, for example using the type of the first argument, and accordingly return either the wrapper (when used without parameter) or a decorator (when used with arguments). from functools import wraps import inspect def redirect_output(fn_or_output): def decorator(fn): @wraps(fn) def wrapper(*args, **args): # Redirect output try: return fn(*args, **args) finally: # Restore output return wrapper if inspect.isfunction(fn_or_output): # Called with no parameter return decorator(fn_or_output) else: # Called with a parameter return decorator When using the @redirect_output("output.log") syntax, redirect_output is called with a single argument "output.log", and it must return a decorator accepting the function to be decorated as an argument. When used as @redirect_output, it is called directly with the function to be decorated as an argument. Or in other words: the @ syntax must be followed by an expression whose result is a function accepting a function to be decorated as its sole argument, and returning the decorated function. The expression itself can be a function call, which is the case with @redirect_output("output.log"). Convoluted, but true :-) A: A python decorator is called in a fundamentally different way depending on whether you give it arguments or not. The decoration is actually just a (syntactically restricted) expression. In your first example: @redirect_output("somewhere.log") def foo(): .... the function redirect_output is called with the given argument, which is expected to return a decorator function, which itself is called with foo as an argument, which (finally!) is expected to return the final decorated function. The equivalent code looks like this: def foo(): .... d = redirect_output("somewhere.log") foo = d(foo) The equivalent code for your second example looks like: def foo(): .... d = redirect_output foo = d(foo) So you can do what you'd like but not in a totally seamless way: import types def redirect_output(arg): def decorator(file, f): def df(*args, **kwargs): print 'redirecting to ', file return f(*args, **kwargs) return df if type(arg) is types.FunctionType: return decorator(sys.stderr, arg) return lambda f: decorator(arg, f) This should be ok unless you wish to use a function as an argument to your decorator, in which case the decorator will wrongly assume it has no arguments. It will also fail if this decoration is applied to another decoration that does not return a function type. An alternative method is just to require that the decorator function is always called, even if it is with no arguments. In this case, your second example would look like this: @redirect_output() def foo(): .... The decorator function code would look like this: def redirect_output(file = sys.stderr): def decorator(file, f): def df(*args, **kwargs): print 'redirecting to ', file return f(*args, **kwargs) return df return lambda f: decorator(file, f) A: To complete the other answers: "Is there a way to build a decorator that can be used both with and without arguments ?" No there is no generic way because there is currently something missing in the python language to detect the two different use cases. However Yes as already pointed out by other answers such as bj0s, there is a clunky workaround that is to check the type and value of the first positional argument received (and to check if no other arguments have non-default value). If you are guaranteed that users will never pass a callable as first argument of your decorator, then you can use this workaround. 
Note that this is the same for class decorators (replace callable by class in the previous sentence). To be sure of the above, I did quite a bit of research out there and even implemented a library named decopatch that uses a combination of all strategies cited above (and many more, including introspection) to perform "whatever is the most intelligent workaround" depending on your need. It comes bundled with two modes: nested and flat. In "nested mode" you always return a function from decopatch import function_decorator @function_decorator def add_tag(tag='hi!'): """ Example decorator to add a 'tag' attribute to a function. :param tag: the 'tag' value to set on the decorated function (default 'hi!'). """ def _apply_decorator(f): """ This is the method that will be called when `@add_tag` is used on a function `f`. It should return a replacement for `f`. """ setattr(f, 'tag', tag) return f return _apply_decorator while in "flat mode" your method is directly the code that will be executed when the decorator is applied. It is injected with the decorated function object f: from decopatch import function_decorator, DECORATED @function_decorator def add_tag(tag='hi!', f=DECORATED): """ Example decorator to add a 'tag' attribute to a function. :param tag: the 'tag' value to set on the decorated function (default 'hi!'). """ setattr(f, 'tag', tag) return f But frankly the best would be not to need any library here and to get that feature straight from the Python language. If, like myself, you think that it is a pity that the Python language is not as of today capable of providing a neat answer to this question, do not hesitate to support this idea in the Python bug tracker: https://bugs.python.org/issue36553 ! A: In fact, the caveat case in @bj0's solution can be checked easily: def meta_wrap(decor): @functools.wraps(decor) def new_decor(*args, **kwargs): if len(args) == 1 and len(kwargs) == 0 and callable(args[0]): # this is the double-decorated f. # Its first argument should not be a callable doubled_f = decor(args[0]) @functools.wraps(doubled_f) def checked_doubled_f(*f_args, **f_kwargs): if callable(f_args[0]): raise ValueError('meta_wrap failure: ' 'first positional argument cannot be callable.') return doubled_f(*f_args, **f_kwargs) return checked_doubled_f else: # decorator arguments return lambda real_f: decor(real_f, *args, **kwargs) return new_decor Here are a few test cases for this fail-safe version of meta_wrap. @meta_wrap def baddecor(f, caller=lambda x: -1*x): @functools.wraps(f) def _f(*args, **kwargs): return caller(f(args[0])) return _f @baddecor # used without arg: no problem def f_call1(x): return x + 1 assert f_call1(5) == -6 @baddecor(lambda x : 2*x) # bad case def f_call2(x): return x + 1 f_call2(5) # raises ValueError # explicit keyword: no problem @baddecor(caller=lambda x : 100*x) def f_call3(x): return x + 1 assert f_call3(5) == 600 A: Since no one mentioned this, there is also a solution utilizing a callable class which I find more elegant, especially in cases where the decorator is complex and one may wish to split it into multiple methods (functions). This solution utilizes the __new__ magic method to do essentially what others have pointed out. First detect how the decorator was used, then adjust the return appropriately.
class decorator_with_arguments(object): def __new__(cls, decorated_function=None, **kwargs): self = super().__new__(cls) self._init(**kwargs) if not decorated_function: return self else: return self.__call__(decorated_function) def _init(self, arg1="default", arg2="default", arg3="default"): self.arg1 = arg1 self.arg2 = arg2 self.arg3 = arg3 def __call__(self, decorated_function): def wrapped_f(*args): print("Decorator arguments:", self.arg1, self.arg2, self.arg3) print("decorated_function arguments:", *args) decorated_function(*args) return wrapped_f @decorator_with_arguments(arg1=5) def sayHello(a1, a2, a3, a4): print('sayHello arguments:', a1, a2, a3, a4) @decorator_with_arguments() def sayHello(a1, a2, a3, a4): print('sayHello arguments:', a1, a2, a3, a4) @decorator_with_arguments def sayHello(a1, a2, a3, a4): print('sayHello arguments:', a1, a2, a3, a4) If the decorator is used with arguments, then this equals: result = decorator_with_arguments(arg1=5)(sayHello)(a1, a2, a3, a4) One can see that the argument arg1 is correctly passed to the constructor and the decorated function is passed to __call__ But if the decorator is used without arguments, then this equals: result = decorator_with_arguments(sayHello)(a1, a2, a3, a4) You see that in this case the decorated function is passed directly to the constructor and the call to __call__ is entirely omitted. That is why we need to employ logic to take care of this case in the __new__ magic method. Why can't we use __init__ instead of __new__? The reason is simple: Python prohibits returning any other values than None from __init__ WARNING This approach has one side effect. It will not preserve function signature! A: This does the job without any fuss: from functools import wraps def memoize(fn=None, hours=48.0): def deco(fn): @wraps(fn) def wrapper(*args, **kwargs): return fn(*args, **kwargs) return wrapper if callable(fn): return deco(fn) return deco A: this works for me: def redirect_output(func=None, /, *, output_log='./output.log'): def out_wrapper(func): def wrapper(*args, **kwargs): res = func(*args, **kwargs) print(f"{func.__name__} finished, output_log:{output_log}") return res return wrapper if func is None: return out_wrapper # @redirect_output() return out_wrapper(func) # @redirect_output @redirect_output def test1(): print("running test 1") @redirect_output(output_log="new.log") def test2(): print("running test 2") test1() print('-----') test2() A: The example code multiply() below can accept one argument or no parentheses from the decorator and sum() below can sum 2 numbers: from numbers import Number def multiply(num): def _multiply(func): def core(*args, **kwargs): result = func(*args, **kwargs) if isinstance(num, Number): return result * num else: return result return core if callable(num): return _multiply(num) else: return _multiply def sum(num1, num2): return num1 + num2 So, if you put @multiply(5) on sum(), then call sum(4, 6) as shown below: # (4 + 6) x 5 = 50 @multiply(5) # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) You can get the result below: 50 And, if you put @multiply on sum(), then call sum(4, 6) as shown below: # 4 + 6 = 10 @multiply # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) You can get the result below: 10 But, if you put @multiply() on sum(), then call sum(4, 6) as shown below: @multiply() # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) The error below occurs: TypeError: multiply() missing 1 required positional
argument: 'num' So, if you want @multiply() to run without error, you need to add the default value 1 to num as shown below: from numbers import Number # Here def multiply(num=1): def _multiply(func): def core(*args, **kwargs): # ... Then, if you put @multiply() on sum(), then call sum(4, 6) as shown below: # (4 + 6) x 1 = 10 @multiply() # Here def sum(num1, num2): return num1 + num2 result = sum(4, 6) print(result) You can get the result below: 10
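A short, self-contained sketch that ties the partial-based pattern above back to the question's original redirect_output example. This is only an illustration, assuming Python 3 and that redirecting print output with contextlib.redirect_stdout is acceptable; the destination handling is my own addition, not taken from any answer above:
import functools
import sys
from contextlib import redirect_stdout

def redirect_output(func=None, *, destination=None):
    # Bare @redirect_output passes the function directly; @redirect_output(...)
    # calls us with func=None, so we return a partially-applied decorator.
    if func is None:
        return functools.partial(redirect_output, destination=destination)
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        target = open(destination, 'a') if destination else sys.stderr
        try:
            with redirect_stdout(target):
                return func(*args, **kwargs)
    finally:
            if destination:
                target.close()
    return wrapper

@redirect_output
def foo():
    print('goes to stderr by default')

@redirect_output(destination='somewhere.log')
def bar():
    print('appended to somewhere.log')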
How to create a decorator that can be used either with or without parameters?
I'd like to create a Python decorator that can be used either with parameters: @redirect_output("somewhere.log") def foo(): .... or without them (for instance to redirect the output to stderr by default): @redirect_output def foo(): .... Is that at all possible? Note that I'm not looking for a different solution to the problem of redirecting output, it's just an example of the syntax I'd like to achieve.
[ "I know this question is old, but some of the comments are new, and while all of the viable solutions are essentially the same, most of them aren't very clean or easy to read.\nLike thobe's answer says, the only way to handle both cases is to check for both scenarios. The easiest way is simply to check to see if there is a single argument and it is callabe (NOTE: extra checks will be necessary if your decorator only takes 1 argument and it happens to be a callable object):\ndef decorator(*args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):\n # called as @decorator\n else:\n # called as @decorator(*args, **kwargs)\n\nIn the first case, you do what any normal decorator does, return a modified or wrapped version of the passed in function.\nIn the second case, you return a 'new' decorator that somehow uses the information passed in with *args, **kwargs.\nThis is fine and all, but having to write it out for every decorator you make can be pretty annoying and not as clean. Instead, it would be nice to be able to automagically modify our decorators without having to re-write them... but that's what decorators are for! \nUsing the following decorator decorator, we can deocrate our decorators so that they can be used with or without arguments:\ndef doublewrap(f):\n '''\n a decorator decorator, allowing the decorator to be used as:\n @decorator(with, arguments, and=kwargs)\n or\n @decorator\n '''\n @wraps(f)\n def new_dec(*args, **kwargs):\n if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):\n # actual decorated function\n return f(args[0])\n else:\n # decorator arguments\n return lambda realf: f(realf, *args, **kwargs)\n\n return new_dec\n\nNow, we can decorate our decorators with @doublewrap, and they will work with and without arguments, with one caveat:\nI noted above but should repeat here, the check in this decorator makes an assumption about the arguments that a decorator can receive (namely that it can't receive a single, callable argument). Since we are making it applicable to any generator now, it needs to be kept in mind, or modified if it will be contradicted.\nThe following demonstrates its use:\ndef test_doublewrap():\n from util import doublewrap\n from functools import wraps \n\n @doublewrap\n def mult(f, factor=2):\n '''multiply a function's return value'''\n @wraps(f)\n def wrap(*args, **kwargs):\n return factor*f(*args,**kwargs)\n return wrap\n\n # try normal\n @mult\n def f(x, y):\n return x + y\n\n # try args\n @mult(3)\n def f2(x, y):\n return x*y\n\n # try kwargs\n @mult(factor=5)\n def f3(x, y):\n return x - y\n\n assert f(2,3) == 10\n assert f2(2,5) == 30\n assert f3(8,1) == 5*7\n\n", "Using keyword arguments with default values (as suggested by kquinn) is a good idea, but will require you to include the parenthesis:\n@redirect_output()\ndef foo():\n ...\n\nIf you would like a version that works without the parenthesis on the decorator you will have to account both scenarios in your decorator code.\nIf you were using Python 3.0 you could use keyword only arguments for this:\ndef redirect_output(fn=None,*,destination=None):\n destination = sys.stderr if destination is None else destination\n def wrapper(*args, **kwargs):\n ... 
# your code here\n if fn is None:\n def decorator(fn):\n return functools.update_wrapper(wrapper, fn)\n return decorator\n else:\n return functools.update_wrapper(wrapper, fn)\n\nIn Python 2.x this can be emulated with varargs tricks:\ndef redirected_output(*fn,**options):\n destination = options.pop('destination', sys.stderr)\n if options:\n raise TypeError(\"unsupported keyword arguments: %s\" % \n \",\".join(options.keys()))\n def wrapper(*args, **kwargs):\n ... # your code here\n if fn:\n return functools.update_wrapper(wrapper, fn[0])\n else:\n def decorator(fn):\n return functools.update_wrapper(wrapper, fn)\n return decorator\n\nAny of these versions would allow you to write code like this:\n@redirected_output\ndef foo():\n ...\n\n@redirected_output(destination=\"somewhere.log\")\ndef bar():\n ...\n\n", "I know this is an old question, but I really don't like any of the techniques proposed so I wanted to add another method. I saw that django uses a really clean method in their login_required decorator in django.contrib.auth.decorators. As you can see in the decorator's docs, it can be used alone as @login_required or with arguments, @login_required(redirect_field_name='my_redirect_field').\nThe way they do it is quite simple. They add a kwarg (function=None) before their decorator arguments. If the decorator is used alone, function will be the actual function it is decorating, whereas if it is called with arguments, function will be None.\nExample:\nfrom functools import wraps\n\ndef custom_decorator(function=None, some_arg=None, some_other_arg=None):\n def actual_decorator(f):\n @wraps(f)\n def wrapper(*args, **kwargs):\n # Do stuff with args here...\n if some_arg:\n print(some_arg)\n if some_other_arg:\n print(some_other_arg)\n return f(*args, **kwargs)\n return wrapper\n if function:\n return actual_decorator(function)\n return actual_decorator\n\n\n@custom_decorator\ndef test1():\n print('test1')\n\n>>> test1()\ntest1\n\n\n@custom_decorator(some_arg='hello')\ndef test2():\n print('test2')\n\n>>> test2()\nhello\ntest2\n\n\n@custom_decorator(some_arg='hello', some_other_arg='world')\ndef test3():\n print('test3')\n\n>>> test3()\nhello\nworld\ntest3\n\nI find this approach that django uses to be more elegant and easier to understand than any of the other techniques proposed here.\n", "Several answers here already address your problem nicely. With respect to style, however, I prefer solving this decorator predicament using functools.partial, as suggested in David Beazley's Python Cookbook 3:\nfrom functools import partial, wraps\n\ndef decorator(func=None, foo='spam'):\n if func is None:\n return partial(decorator, foo=foo)\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n # do something with `func` and `foo`, if you're so inclined\n pass\n \n return wrapper\n\nWhile yes, you can just do\n@decorator()\ndef f(*args, **kwargs):\n pass\n\nwithout funky workarounds, I find it strange looking, and I like having the option of simply decorating with @decorator.\nAs for the secondary mission objective, redirecting a function's output is addressed in this Stack Overflow post.\n\nIf you want to dive deeper, check out Chapter 9 (Metaprogramming) in Python Cookbook 3, which is freely available to be read online.\nSome of that material is live demoed (plus more!) 
in Beazley's awesome YouTube video Python 3 Metaprogramming.\n", "You need to detect both cases, for example using the type of the first argument, and accordingly return either the wrapper (when used without parameter) or a decorator (when used with arguments).\nfrom functools import wraps\nimport inspect\n\ndef redirect_output(fn_or_output):\n    def decorator(fn):\n        @wraps(fn)\n        def wrapper(*args, **kwargs):\n            # Redirect output\n            try:\n                return fn(*args, **kwargs)\n            finally:\n                # Restore output\n        return wrapper\n\n    if inspect.isfunction(fn_or_output):\n        # Called with no parameter\n        return decorator(fn_or_output)\n    else:\n        # Called with a parameter\n        return decorator\n\nWhen using the @redirect_output(\"output.log\") syntax, redirect_output is called with a single argument \"output.log\", and it must return a decorator accepting the function to be decorated as an argument. When used as @redirect_output, it is called directly with the function to be decorated as an argument.\nOr in other words: the @ syntax must be followed by an expression whose result is a function accepting a function to be decorated as its sole argument, and returning the decorated function. The expression itself can be a function call, which is the case with @redirect_output(\"output.log\"). Convoluted, but true :-)\n", "A Python decorator is called in a fundamentally different way depending on whether you give it arguments or not. The decoration is actually just a (syntactically restricted) expression.\nIn your first example:\n@redirect_output(\"somewhere.log\")\ndef foo():\n    ....\n\nthe function redirect_output is called with the\ngiven argument, which is expected to return a decorator\nfunction, which itself is called with foo as an argument,\nwhich (finally!) is expected to return the final decorated function.\nThe equivalent code looks like this:\ndef foo():\n    ....\nd = redirect_output(\"somewhere.log\")\nfoo = d(foo)\n\nThe equivalent code for your second example looks like:\ndef foo():\n    ....\nd = redirect_output\nfoo = d(foo)\n\nSo you can do what you'd like but not in a totally seamless way:\nimport types\ndef redirect_output(arg):\n    def decorator(file, f):\n        def df(*args, **kwargs):\n            print 'redirecting to ', file\n            return f(*args, **kwargs)\n        return df\n    if type(arg) is types.FunctionType:\n        return decorator(sys.stderr, arg)\n    return lambda f: decorator(arg, f)\n\nThis should be ok unless you wish to use a function as an\nargument to your decorator, in which case the decorator\nwill wrongly assume it has no arguments. 
It will also fail\nif this decoration is applied to another decoration that\ndoes not return a function type.\nAn alternative method is just to require that the\ndecorator function is always called, even if it is with no arguments.\nIn this case, your second example would look like this:\n@redirect_output()\ndef foo():\n    ....\n\nThe decorator function code would look like this:\ndef redirect_output(file = sys.stderr):\n    def decorator(file, f):\n        def df(*args, **kwargs):\n            print 'redirecting to ', file\n            return f(*args, **kwargs)\n        return df\n    return lambda f: decorator(file, f)\n\n", "To complete the other answers:\n\n\"Is there a way to build a decorator that can be used both with and without arguments ?\"\n\nNo, there is no generic way, because there is currently something missing in the Python language to detect the two different use cases.\nHowever, yes, as already pointed out by other answers such as bj0's, there is a clunky workaround that is to check the type and value of the first positional argument received (and to check if no other arguments have non-default values). If you are guaranteed that users will never pass a callable as the first argument of your decorator, then you can use this workaround. Note that this is the same for class decorators (replace callable by class in the previous sentence).\nTo be sure of the above, I did quite a bit of research out there and even implemented a library named decopatch that uses a combination of all strategies cited above (and many more, including introspection) to perform \"whatever is the most intelligent workaround\" depending on your need. It comes bundled with two modes: nested and flat.\nIn \"nested mode\" you always return a function\nfrom decopatch import function_decorator\n\n@function_decorator\ndef add_tag(tag='hi!'):\n    \"\"\"\n    Example decorator to add a 'tag' attribute to a function. \n    :param tag: the 'tag' value to set on the decorated function (default 'hi!').\n    \"\"\"\n    def _apply_decorator(f):\n        \"\"\"\n        This is the method that will be called when `@add_tag` is used on a \n        function `f`. It should return a replacement for `f`.\n        \"\"\"\n        setattr(f, 'tag', tag)\n        return f\n    return _apply_decorator\n\nwhile in \"flat mode\" your method is directly the code that will be executed when the decorator is applied. It is injected with the decorated function object f:\nfrom decopatch import function_decorator, DECORATED\n\n@function_decorator\ndef add_tag(tag='hi!', f=DECORATED):\n    \"\"\"\n    Example decorator to add a 'tag' attribute to a function.\n    :param tag: the 'tag' value to set on the decorated function (default 'hi!').\n    \"\"\"\n    setattr(f, 'tag', tag)\n    return f\n\nBut frankly the best would be not to need any library here and to get that feature straight from the Python language. If, like myself, you think that it is a pity that the Python language is not as of today capable of providing a neat answer to this question, do not hesitate to support this idea in the Python bug tracker: https://bugs.python.org/issue36553 !\n", "In fact, the caveat case in @bj0's solution can be checked easily:\ndef meta_wrap(decor):\n    @functools.wraps(decor)\n    def new_decor(*args, **kwargs):\n        if len(args) == 1 and len(kwargs) == 0 and callable(args[0]):\n            # this is the double-decorated f. 
\n            # Its first argument should not be a callable\n            doubled_f = decor(args[0])\n            @functools.wraps(doubled_f)\n            def checked_doubled_f(*f_args, **f_kwargs):\n                if callable(f_args[0]):\n                    raise ValueError('meta_wrap failure: '\n                            'first positional argument cannot be callable.')\n                return doubled_f(*f_args, **f_kwargs)\n            return checked_doubled_f \n        else:\n            # decorator arguments\n            return lambda real_f: decor(real_f, *args, **kwargs)\n\n    return new_decor\n\nHere are a few test cases for this fail-safe version of meta_wrap.\n    @meta_wrap\n    def baddecor(f, caller=lambda x: -1*x):\n        @functools.wraps(f)\n        def _f(*args, **kwargs):\n            return caller(f(args[0]))\n        return _f\n\n    @baddecor  # used without arg: no problem\n    def f_call1(x):\n        return x + 1\n    assert f_call1(5) == -6\n\n    @baddecor(lambda x : 2*x)  # bad case\n    def f_call2(x):\n        return x + 1\n    f_call2(5)  # raises ValueError\n\n    # explicit keyword: no problem\n    @baddecor(caller=lambda x : 100*x)\n    def f_call3(x):\n        return x + 1\n    assert f_call3(5) == 600\n\n", "Since no one mentioned this, there is also a solution utilizing a callable class which I find more elegant, especially in cases where the decorator is complex and one may wish to split it into multiple methods (functions). This solution utilizes the __new__ magic method to do essentially what others have pointed out. First detect how the decorator was used, then adjust the return appropriately.\nclass decorator_with_arguments(object):\n\n    def __new__(cls, decorated_function=None, **kwargs):\n\n        self = super().__new__(cls)\n        self._init(**kwargs)\n\n        if not decorated_function:\n            return self\n        else:\n            return self.__call__(decorated_function)\n\n    def _init(self, arg1=\"default\", arg2=\"default\", arg3=\"default\"):\n        self.arg1 = arg1\n        self.arg2 = arg2\n        self.arg3 = arg3\n\n    def __call__(self, decorated_function):\n\n        def wrapped_f(*args):\n            print(\"Decorator arguments:\", self.arg1, self.arg2, self.arg3)\n            print(\"decorated_function arguments:\", *args)\n            decorated_function(*args)\n\n        return wrapped_f\n\n@decorator_with_arguments(arg1=5)\ndef sayHello(a1, a2, a3, a4):\n    print('sayHello arguments:', a1, a2, a3, a4)\n\n@decorator_with_arguments()\ndef sayHello(a1, a2, a3, a4):\n    print('sayHello arguments:', a1, a2, a3, a4)\n\n@decorator_with_arguments\ndef sayHello(a1, a2, a3, a4):\n    print('sayHello arguments:', a1, a2, a3, a4)\n\nIf the decorator is used with arguments, then this equals:\nresult = decorator_with_arguments(arg1=5)(sayHello)(a1, a2, a3, a4)\n\nOne can see that the argument arg1 is correctly passed to the constructor and the decorated function is passed to __call__\nBut if the decorator is used without arguments, then this equals:\nresult = decorator_with_arguments(sayHello)(a1, a2, a3, a4)\n\nYou see that in this case the decorated function is passed directly to the constructor and the call to __call__ is entirely omitted. That is why we need to employ logic to take care of this case in the __new__ magic method.\nWhy can't we use __init__ instead of __new__? The reason is simple: Python prohibits returning any other values than None from __init__\nWARNING\nThis approach has one side effect. 
It will not preserve function signature!\n", "This does the job without any fuss:\nfrom functools import wraps\n\ndef memoize(fn=None, hours=48.0):\n    def deco(fn):\n        @wraps(fn)\n        def wrapper(*args, **kwargs):\n            return fn(*args, **kwargs)\n        return wrapper\n\n    if callable(fn): return deco(fn)\n    return deco\n\n", "this works for me:\ndef redirect_output(func=None, /, *, output_log='./output.log'):\n    def out_wrapper(func):\n        def wrapper(*args, **kwargs):\n            res = func(*args, **kwargs)\n            print(f\"{func.__name__} finished, output_log:{output_log}\")\n            return res\n\n        return wrapper\n\n    if func is None:\n        return out_wrapper  # @redirect_output()\n    return out_wrapper(func)  # @redirect_output\n\n\n@redirect_output\ndef test1():\n    print(\"running test 1\")\n\n\n@redirect_output(output_log=\"new.log\")\ndef test2():\n    print(\"running test 2\")\n\ntest1()\nprint('-----')\ntest2()\n\n", "The example code multiply() below can accept one argument or no parentheses from the decorator and sum() below can sum 2 numbers:\nfrom numbers import Number\n\ndef multiply(num):\n    def _multiply(func):\n        def core(*args, **kwargs):\n            result = func(*args, **kwargs)\n            if isinstance(num, Number):\n                return result * num\n            else:\n                return result\n        return core\n    if callable(num):\n        return _multiply(num)\n    else:\n        return _multiply\n\ndef sum(num1, num2):\n    return num1 + num2\n\nSo, if you put @multiply(5) on sum(), then call sum(4, 6) as shown below:\n# (4 + 6) x 5 = 50\n\n@multiply(5) # Here\ndef sum(num1, num2):\n    return num1 + num2\n\nresult = sum(4, 6)\nprint(result)\n\nYou can get the result below:\n50\n\nAnd, if you put @multiply on sum(), then call sum(4, 6) as shown below:\n# 4 + 6 = 10\n\n@multiply # Here\ndef sum(num1, num2):\n    return num1 + num2\n \nresult = sum(4, 6)\nprint(result)\n\nYou can get the result below:\n10\n\nBut, if you put @multiply() on sum(), then call sum(4, 6) as shown below:\n@multiply() # Here\ndef sum(num1, num2):\n    return num1 + num2\n \nresult = sum(4, 6)\nprint(result)\n\nThe error below occurs:\n\nTypeError: multiply() missing 1 required positional argument: 'num'\n\nSo, if you want @multiply() to run without error, you need to add the default value 1 to num as shown below:\nfrom numbers import Number\n # Here\ndef multiply(num=1):\n    def _multiply(func):\n        def core(*args, **kwargs):\n# ...\n\nThen, if you put @multiply() on sum(), then call sum(4, 6) as shown below:\n# (4 + 6) x 1 = 10\n\n@multiply() # Here\ndef sum(num1, num2):\n    return num1 + num2\n \nresult = sum(4, 6)\nprint(result)\n\nYou can get the result below:\n10\n\n" ]
[ 98, 37, 31, 17, 13, 8, 3, 2, 2, 1, 1, 0 ]
[ "Have you tried keyword arguments with default values? Something like\ndef decorate_something(foo=bar, baz=quux):\n pass\n\n", "Generally you can give default arguments in Python...\ndef redirect_output(fn, output = stderr):\n # whatever\n\nNot sure if that works with decorators as well, though. I don't know of any reason why it wouldn't.\n", "Building on vartec's answer:\nimports sys\n\ndef redirect_output(func, output=None):\n if output is None:\n output = sys.stderr\n if isinstance(output, basestring):\n output = open(output, 'w') # etc...\n # everything else...\n\n" ]
[ -2, -3, -3 ]
[ "decorator", "python" ]
stackoverflow_0000653368_decorator_python.txt
Q: How to only fetch new rows from a database in SQL? How to only insert rows that are not already in the database? I have two problems that are fairly similar. I am using Python to deal with SQL databases. First, I want to only fetch the new data from a SQL database (that continuously gets updated with new entries). If I have already selected that entire row I don't want it again, just get the new ones. The code I have right now is: sql = ''' SELECT * FROM table WHERE time BETWEEN ? AND ? ''' #Select all columns of the database between the two timestamps cur.execute(sql,[start_time,end_time]) Then I want to insert other data into another database but I don't want to add rows that are already there. My code at the moment is: query = 'INSERT INTO table_2 (col_1, col_2, col_3, col_4, col_5, col_6) VALUES(%s, %s, %s, %s, %s, %s)' my_data = [] for row in data_df: my_data.append(tuple(row)) cur.executemany(query, my_data) I have tried to use the WHERE NOT EXISTS feature but I am unsure of the syntax and I keep getting errors. A: I'll try to help you. What I'm suggesting is to add an integer column to your table that you can name STATUS, initialized to 0. Then you add a "WHERE" condition like: "WHERE STATUS = 0" After fetching, you UPDATE the selected rows to STATUS = 1 (you should do this inside the transaction). If you can't alter the table, no problem: you can make a temporary table that contains all the data you need plus the status column. That is, you dump all your data into a temporary structure with one more column.
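A minimal sketch of both ideas, the STATUS-column fetch from the answer above and one common way to skip duplicate inserts in MariaDB. The column name status, the table names, and the UNIQUE key are my own assumptions for illustration, not given in the question:
# Fetch only unseen rows, then mark them as seen (run inside one transaction)
cur.execute("SELECT * FROM table_1 WHERE status = 0")
rows = cur.fetchall()
cur.execute("UPDATE table_1 SET status = 1 WHERE status = 0")
conn.commit()

# For the second problem, MariaDB can silently skip rows that would violate
# a UNIQUE key, assuming table_2 actually defines one:
query = 'INSERT IGNORE INTO table_2 (col_1, col_2, col_3, col_4, col_5, col_6) VALUES (%s, %s, %s, %s, %s, %s)'
cur.executemany(query, my_data)
conn.commit()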
How to only fetch new rows from a database in SQL? How to only insert rows that are not already in the database?
I have two problems that are fairly similar. I am using Python to deal with SQL databases. First, I want to only fetch the new data from a SQL database (that continuously gets updated with new entries). If I have already selected that entire row I don't want it again, just get the new ones. The code I have right now is: sql = ''' SELECT * FROM table WHERE time BETWEEN ? AND ? ''' #Select all columns of the database between the two timestamps cur.execute(sql,[start_time,end_time]) Then I want to insert other data into another database but I don't want to add rows that are already there. My code at the moment is: query = 'INSERT INTO table_2 (col_1, col_2, col_3, col_4, col_5, col_6) VALUES(%s, %s, %s, %s, %s, %s)' my_data = [] for row in data_df: my_data.append(tuple(row)) cur.executemany(query, my_data) I have tried to use the WHERE NOT EXISTS feature but I am unsure of the syntax and I keep getting errors.
[ "I'll try to help you\nWhat I'm suggesting is to add a column integer in your Table that you can name as STATUS, and initialize it at value 0..\nThen you will add a \"WHERE\" condition like : \"WHERE STATUS = 0\"\nThen you'll UPDATE the selected row at STATUS = 1 (you should do it inside the transition)\nIf you can't handle the table no problem => you can make a temporary table that contains all the data you're needing and the column with the status\nI mean you'll dump all your data in a temporary structure with one more column..\n" ]
[ 0 ]
[]
[]
[ "mariadb", "python", "sql" ]
stackoverflow_0074561903_mariadb_python_sql.txt
Q: Modify rows between two flags (values) in dataframe columns I want to create a new dataframe with the same shape based on two existing dataframes. I have one dataframe that represents the flags and another one with the values I want to replace. The flag dataframe has only 1, -1 and NaNs, and always after a 1 I'll have a -1. So basically it's a "changing state" kind of dataframe. What I want to do is: in between the interval of 1 and -1, I need to fill in the average of the same interval in the second dataframe, PR. flag = pd.DataFrame({'col1': [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan], 'col2': [np.nan,1,-1,np.nan,1,np.nan,np.nan,np.nan,np.nan,np.nan,-1], 'col3': [np.nan,np.nan,np.nan,1,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,-1], 'col4': [np.nan,np.nan,np.nan,np.nan,np.nan,1,np.nan,-1,np.nan,np.nan,np.nan] }) PR = pd.DataFrame({'col1': [81,81.3,80.7,81.5,81,80.4,80.3,81,79.5,80.7], 'col2': [80.9,81.6,81.2,81.7,80.9,79.7,79.3,79.1,79,77.5], 'col3': [81.1,81.3,81,81.6,80.8,79.5,79.2,78.8,78.8,77.4], 'col4': [80.1,80.6,79.9,80.4,80.4,79.3,79,78.8,78.4,77] }) This would have to give me: col1 col2 col3 col4 0 NaN NaN NaN NaN 1 NaN 81.40 NaN NaN 2 NaN 81.40 NaN NaN 3 NaN NaN 79.44 NaN 4 NaN 79.25 79.44 NaN 5 NaN 79.25 79.44 79.03 6 NaN 79.25 79.44 79.03 7 NaN 79.25 79.44 79.03 8 NaN 79.25 79.44 NaN 9 NaN 79.25 79.44 NaN Any help is much appreciated! A: I would use a custom function: def process(s, ref=flag): f = ref[s.name] # get matching flag # create group and mask data outside of 1 -> -1 m = (f.map({1: True, -1: False}).ffill() | f.eq(-1) ) group = f.eq(1).cumsum().where(m) # transform to mean return s.groupby(group).transform('mean') out = PR.apply(process, ref=flag).round(2) Output: col1 col2 col3 col4 0 NaN NaN NaN NaN 1 NaN 81.40 NaN NaN 2 NaN 81.40 NaN NaN 3 NaN NaN 79.44 NaN 4 NaN 79.25 79.44 NaN 5 NaN 79.25 79.44 79.03 6 NaN 79.25 79.44 79.03 7 NaN 79.25 79.44 79.03 8 NaN 79.25 79.44 NaN 9 NaN 79.25 79.44 NaN
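To see why the mask in the answer works, it can help to print the intermediate steps for a single column. This small illustration uses only objects already defined above:
f = flag['col2']                              # NaN, 1, -1, NaN, 1, ..., -1
inside = f.map({1: True, -1: False}).ffill()  # True from each 1 up to (but excluding) the next -1
m = inside | f.eq(-1)                         # re-include the -1 rows so they close the interval
group = f.eq(1).cumsum().where(m)             # 1 for the first interval, 2 for the second, NaN elsewhere
print(pd.concat([f, inside, m, group], axis=1))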
Modify rows between two flags (values) in dataframe columns
I want to create a new dataframe with the same shape based on two existing dataframes. I have one dataframe that represents the flags and another one with the values I want to replace. The flag dataframe has only 1, -1 and NaNs, and always after a 1 I'll have a -1. So basically it's a "changing state" kind of dataframe. What I want to do is: in between the interval of 1 and -1, I need to fill in the average of the same interval in the second dataframe, PR. flag = pd.DataFrame({'col1': [np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan], 'col2': [np.nan,1,-1,np.nan,1,np.nan,np.nan,np.nan,np.nan,np.nan,-1], 'col3': [np.nan,np.nan,np.nan,1,np.nan,np.nan,np.nan,np.nan,np.nan,np.nan,-1], 'col4': [np.nan,np.nan,np.nan,np.nan,np.nan,1,np.nan,-1,np.nan,np.nan,np.nan] }) PR = pd.DataFrame({'col1': [81,81.3,80.7,81.5,81,80.4,80.3,81,79.5,80.7], 'col2': [80.9,81.6,81.2,81.7,80.9,79.7,79.3,79.1,79,77.5], 'col3': [81.1,81.3,81,81.6,80.8,79.5,79.2,78.8,78.8,77.4], 'col4': [80.1,80.6,79.9,80.4,80.4,79.3,79,78.8,78.4,77] }) This would have to give me: col1 col2 col3 col4 0 NaN NaN NaN NaN 1 NaN 81.40 NaN NaN 2 NaN 81.40 NaN NaN 3 NaN NaN 79.44 NaN 4 NaN 79.25 79.44 NaN 5 NaN 79.25 79.44 79.03 6 NaN 79.25 79.44 79.03 7 NaN 79.25 79.44 79.03 8 NaN 79.25 79.44 NaN 9 NaN 79.25 79.44 NaN Any help is much appreciated!
[ "I would use a custom function:\ndef process(s, ref=flag):\n f = ref[s.name] # get matching flag\n\n # create group and mask data outside of 1 -> -1\n m = (f.map({1: True, -1: False}).ffill()\n | f.eq(-1)\n )\n group = f.eq(1).cumsum().where(m)\n\n # transform to mean\n return s.groupby(group).transform('mean') \n\nout = PR.apply(process, ref=flag).round(2)\n\nOutput:\n col1 col2 col3 col4\n0 NaN NaN NaN NaN\n1 NaN 81.40 NaN NaN\n2 NaN 81.40 NaN NaN\n3 NaN NaN 79.44 NaN\n4 NaN 79.25 79.44 NaN\n5 NaN 79.25 79.44 79.03\n6 NaN 79.25 79.44 79.03\n7 NaN 79.25 79.44 79.03\n8 NaN 79.25 79.44 NaN\n9 NaN 79.25 79.44 NaN\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074562881_dataframe_pandas_python.txt
Q: How do I add an image to my Tk window and make it automatically resize the window to the image size? I have no idea how to add images to the window, and I have very little experience with Python/coding in general. This is the code I'm currently using; it works to color the window, but I don't know how to make it show a picture, and after 30 minutes of searching on Google I couldn't figure it out. import random from time import sleep from tkinter import * x = random.randint(7,20) y = random.randint(7,20) h = random.randint(200,300) w = random.randint(200,300) class Window(Tk): def __init__(self): Tk.__init__(self) self.width = w self.height = h self.velx = x self.vely = y self.pos = (250,250) self.geometry(f"{self.width}x{self.height}+{self.pos[0]}+{self.pos[1]}") self.configure(background='#29a32d') def moveWin(self): x = self.pos[0] + self.velx y = self.pos[1] + self.vely downx, downy = x+self.width, y+self.height sWidth = self.winfo_screenwidth() # gives 1366 sHeight = self.winfo_screenheight() # gives 1080 if x <= 0 or downx >= sWidth: self.velx = -self.velx if y <= 0 or downy >= sHeight: self.vely = -self.vely self.pos = (x,y) self.geometry(f"+{x}+{y}") return [x, y, downx, downy] root = Window() while True: root.update() pos = root.moveWin() print(pos) sleep(0.01) A: To add an image to the window, you can use PhotoImage() to load a supported image (PNG, GIF), then use a Label to show the image: ... class Window(Tk): def __init__(self): Tk.__init__(self) self.width = w self.height = h self.velx = x self.vely = y self.pos = (250,250) self.geometry(f"{self.width}x{self.height}+{self.pos[0]}+{self.pos[1]}") self.configure(background='#29a32d') # load the image self.image = PhotoImage(file="/path/to/your/image.png") # show the image using label Label(self, image=self.image).pack() ... Note that it is not recommended to use a while loop in a tkinter application; replace the while loop by using .after(): ... root = Window() def move(): root.moveWin() root.after(10, move) # start the after loop move() root.mainloop()
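To also make the window resize to the image size, which is the part of the title that the answer above does not cover, here is a sketch that could go right after loading the image: PhotoImage exposes width() and height(), and updating self.width/self.height keeps moveWin() bouncing off the correct screen edges (the file path is a placeholder):
self.image = PhotoImage(file="/path/to/your/image.png")
self.width = self.image.width()
self.height = self.image.height()
self.geometry(f"{self.width}x{self.height}+{self.pos[0]}+{self.pos[1]}")
Label(self, image=self.image).pack()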
How do I add an image to my Tk window and make it automatically resize the window to the image size?
I have no idea how to add images to the window, and I have very little experience with Python/coding in general. This is the code I'm currently using; it works to color the window, but I don't know how to make it show a picture, and after 30 minutes of searching on Google I couldn't figure it out. import random from time import sleep from tkinter import * x = random.randint(7,20) y = random.randint(7,20) h = random.randint(200,300) w = random.randint(200,300) class Window(Tk): def __init__(self): Tk.__init__(self) self.width = w self.height = h self.velx = x self.vely = y self.pos = (250,250) self.geometry(f"{self.width}x{self.height}+{self.pos[0]}+{self.pos[1]}") self.configure(background='#29a32d') def moveWin(self): x = self.pos[0] + self.velx y = self.pos[1] + self.vely downx, downy = x+self.width, y+self.height sWidth = self.winfo_screenwidth() # gives 1366 sHeight = self.winfo_screenheight() # gives 1080 if x <= 0 or downx >= sWidth: self.velx = -self.velx if y <= 0 or downy >= sHeight: self.vely = -self.vely self.pos = (x,y) self.geometry(f"+{x}+{y}") return [x, y, downx, downy] root = Window() while True: root.update() pos = root.moveWin() print(pos) sleep(0.01)
[ "To add an image to the window, you can use PhotoImage() to load supported image (PNG, GIF), then using a Label to show the image:\n...\nclass Window(Tk):\n def __init__(self):\n Tk.__init__(self)\n self.width = w\n self.height = h\n self.velx = x\n self.vely = y\n self.pos = (250,250)\n self.geometry(f\"{self.width}x{self.height}+{self.pos[0]}+{self.pos[1]}\")\n self.configure(background='#29a32d')\n\n # load the image\n self.image = PhotoImage(file=\"/path/to/your/image.png\")\n # show the image using label\n Label(self, image=self.image).pack()\n...\n\nNote that it is not recommended to use while loop in a tkinter application, replace the while loop by using .after():\n...\nroot = Window()\n\ndef move():\n root.moveWin()\n root.after(10, move)\n# start the after loop\nmove()\nroot.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074562651_python_tkinter.txt
Q: Making rows NaN based on many conditions If I have a dataframe with some index and some value as follows: import pandas as pd from random import random my_index = [] my_vals = [] for i in range(1000): my_index.append(i+random()) my_vals.append(random()) df_vals = pd.DataFrame({'my_index': my_index, 'my_vals': my_vals}) And I have a second dataframe with a column start and end, a row must be read as an interval, so the first row would be interval from 1 to 4 (including 1 and 4). It is the following dataframe: df_intervals = pd.DataFrame({'start': [1, 7, 54, 73, 136, 235, 645, 785, 968], 'end': [4, 34, 65, 90, 200, 510, 700, 805, 988]}) I would like to make all values in the my_vals column of df_vals a NaN if the row's index (my_index) does not fall into one of the intervals specified in the df_intervals dataframe. What is the best way to go about this automatically rather than specifying each condition manually? (In my actual data set there are more than 9 intervals, this is some example data) EDIT: in my actual data these indices are not strictly integers, these can also be random floats A: One way would be to create an array that would include all the values of your intervals, for example when start = 1 and end = 4, the array would be [1,2,3,4]. Similarly when start = 7 and end = 34, the array would be [7,8,9,10 ... , 34]. intervals_exp = df_intervals.apply(lambda row: [n for n in range(row['start'],row['end']+1)],axis=1).explode().values intervals_exp array([1, 2, 3, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, .. 986, 987], dtype=object)] Then you can use isin to check which values of 'my_index' are in the above array using np.where: df_vals['my_vals refined'] = np.where(df_vals['my_index'].isin(intervals_exp),df_vals['my_index'],np.nan) which prints: my_index my_vals my_vals refined 0 0 0.564172 NaN 1 1 0.852806 1.0 2 2 0.643407 2.0 3 3 0.719642 3.0 4 4 0.233949 4.0 .. ... ... ... 995 995 0.355014 NaN 996 996 0.842957 NaN 997 997 0.143479 NaN 998 998 0.915176 NaN 999 999 0.147195 NaN [1000 rows x 3 columns] A: I believe this is a possible solution, def index_in_range(index, df): for index_, row in df.iterrows(): if (index >= row['start']) and (index <= row['end']): return True return False df_vals['my_vals'] = df_vals.apply(lambda row: row['my_vals'] if index_in_range(row['my_index'], df_intervals) else None, axis=1) To accomplish this without using a lambda function, you can do the following, def index_in_range(index, df): for index_, row in df.iterrows(): if (index >= row['start']) and (index <= row['end']): return True return False for index_, row in df_vals.iterrows(): if not index_in_range(row['my_index'], df_intervals): df_vals.at[index_, 'my_vals'] = None Output: my_index my_vals 0 0 NaN 1 1 0.126647 2 2 0.769215 3 3 0.819891 4 4 0.674466 ... ... ... 995 995 NaN 996 996 NaN 997 997 NaN 998 998 NaN 999 999 NaN
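Since the edit above says the indices can be random floats, the integer-range lookup in the first answer will miss them; a float-safe, vectorized sketch using pandas' IntervalIndex, assuming the intervals never overlap:
intervals = pd.IntervalIndex.from_arrays(df_intervals['start'], df_intervals['end'], closed='both')
pos = intervals.get_indexer(df_vals['my_index'])  # -1 where my_index falls in no interval
df_vals['my_vals'] = df_vals['my_vals'].where(pos != -1)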
Making rows NaN based on many conditions
If I have a dataframe with some index and some value as follows: import pandas as pd from random import random my_index = [] my_vals = [] for i in range(1000): my_index.append(i+random()) my_vals.append(random()) df_vals = pd.DataFrame({'my_index': my_index, 'my_vals': my_vals}) And I have a second dataframe with a column start and end, a row must be read as an interval, so the first row would be interval from 1 to 4 (including 1 and 4). It is the following dataframe: df_intervals = pd.DataFrame({'start': [1, 7, 54, 73, 136, 235, 645, 785, 968], 'end': [4, 34, 65, 90, 200, 510, 700, 805, 988]}) I would like to make all values in the my_vals column of df_vals a NaN if the row's index (my_index) does not fall into one of the intervals specified in the df_intervals dataframe. What is the best way to go about this automatically rather than specifying each condition manually? (In my actual data set there are more than 9 intervals, this is some example data) EDIT: in my actual data these indices are not strictly integers, these can also be random floats
[ "One way would be to create an array that would include all the values of your intervals, for example when start = 1 and end = 4, the array would be [1,2,3,4]. Similarly when start = 7 and end = 34, the array would be [7,8,9,10 ... , 34].\nintervals_exp = df_intervals.apply(lambda row: [n for n in range(row['start'],row['end']+1)],axis=1).explode().values\nintervals_exp \n\narray([1, 2, 3, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21,\n 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 54, 55, 56, 57, 58,\n 59, 60, 61, 62, 63, 64, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, .. 986, 987], dtype=object)]\n\nThen you can use isin to check which values of 'my_index' are in the above array using np.where:\ndf_vals['my_vals refined'] = np.where(df_vals['my_index'].isin(intervals_exp),df_vals['my_index'],np.nan)\n\nwhich prints:\n my_index my_vals my_vals refined\n0 0 0.564172 NaN\n1 1 0.852806 1.0\n2 2 0.643407 2.0\n3 3 0.719642 3.0\n4 4 0.233949 4.0\n.. ... ... ...\n995 995 0.355014 NaN\n996 996 0.842957 NaN\n997 997 0.143479 NaN\n998 998 0.915176 NaN\n999 999 0.147195 NaN\n\n[1000 rows x 3 columns]\n\n", "I believe this is a possible solution,\ndef index_in_range(index, df):\n for index_, row in df.iterrows():\n if (index >= row['start']) and (index <= row['end']):\n return True\n \n return False\n \ndf_vals['my_vals'] = df_vals.apply(lambda row: row['my_vals'] if index_in_range(row['my_index'], df_intervals) else None, axis=1)\n\nTo accomplish this without using a lambda function, you can do the following,\ndef index_in_range(index, df):\n for index_, row in df.iterrows():\n if (index >= row['start']) and (index <= row['end']):\n return True\n \n return False\n\nfor index_, row in df_vals.iterrows():\n if not index_in_range(row['my_index'], df_intervals):\n df_vals.at[index_, 'my_vals'] = None\n\nOutput:\n my_index my_vals\n0 0 NaN\n1 1 0.126647\n2 2 0.769215\n3 3 0.819891\n4 4 0.674466\n... ... ...\n995 995 NaN\n996 996 NaN\n997 997 NaN\n998 998 NaN\n999 999 NaN\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074562700_dataframe_pandas_python.txt
Q: TypeError: __init__() got an unexpected keyword argument 'mapbox_key' Hello everyone! I'm trying to understand how to make maps with PyDeck, but I have a recurring error message: "TypeError: __init__() got an unexpected keyword argument 'mapbox_key'". After hours, I don't know how to resolve it. Do you have any ideas? Code: Python 3.9.12 Pydeck version: 0.8.0 pandas version: 1.4.2 vega_datasets version: 0.9.0 ipywidgets version: 7.6.5 import pydeck as pdk import pandas as pd from vega_datasets import data as vds import ipywidgets from palettable.cartocolors.sequential import BrwnYl_3 import json # Public API key MAPBOX_API_KEY = "pk.eyJ1IjoiZXphYW45MDIiLCJhIjoiY2xhdHI4NzI3MDQwazNwcDg1bDdyN3ZzMCJ9.8BOAE-IFmp6PeellMppXsA" # data data = 'https://raw.githubusercontent.com/groundhogday321/dataframe-datasets/master/fake_commute_data.csv' commute_pattern = pd.read_csv(data) print(commute_pattern.head(2)) # view (location, zoom level, etc.) view = pdk.ViewState(latitude=32.800382, longitude=-97.040728, pitch=50, zoom=9) # layer # from home (orange) to work (purple) arc_layer = pdk.Layer('ArcLayer', data=commute_pattern, get_source_position=['from_lon', 'from_lat'], get_target_position=['to_lon', 'to_lat'], get_width=5, get_tilt=15, # RGBA colors (red, green, blue, alpha) get_source_color=[255, 165, 0, 80], get_target_color=[128, 0, 128, 80]) # render map arc_layer_map = pdk.Deck(map_style='mapbox://styles/mapbox/light-v10', layers=arc_layer, initial_view_state=view, mapbox_key=MAPBOX_API_KEY) # display and save map (to_html(), show()) arc_layer_map.to_html(r'C:\Users\Admin loc\OneDrive\Bureau\arc_layer_map.html') arc_layer_map.show() I tried to load my Mapbox token and run this tutorial... A: According to the documentation, pdk.Deck doesn't take a mapbox_key argument. You probably need to provide it like this: arc_layer_map = pdk.Deck(... api_keys={'mapbox': MAPBOX_API_KEY})
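Besides passing api_keys as in the answer above, pydeck also documents picking the key up from the MAPBOX_API_KEY environment variable. A sketch, assuming pydeck 0.8 and that the variable is set before the Deck is created:
import os
os.environ['MAPBOX_API_KEY'] = MAPBOX_API_KEY  # read automatically by pydeck
arc_layer_map = pdk.Deck(map_style='mapbox://styles/mapbox/light-v10',
                         layers=arc_layer,
                         initial_view_state=view,
                         map_provider='mapbox')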
TypeError: __init__() got an unexpected keyword argument 'mapbox_key'
Hello everyone! I'm trying to understand how to make maps with PyDeck, but I have a recurring error message: "TypeError: __init__() got an unexpected keyword argument 'mapbox_key'". After hours, I don't know how to resolve it. Do you have any ideas? Code: Python 3.9.12 Pydeck version: 0.8.0 pandas version: 1.4.2 vega_datasets version: 0.9.0 ipywidgets version: 7.6.5 import pydeck as pdk import pandas as pd from vega_datasets import data as vds import ipywidgets from palettable.cartocolors.sequential import BrwnYl_3 import json # Public API key MAPBOX_API_KEY = "pk.eyJ1IjoiZXphYW45MDIiLCJhIjoiY2xhdHI4NzI3MDQwazNwcDg1bDdyN3ZzMCJ9.8BOAE-IFmp6PeellMppXsA" # data data = 'https://raw.githubusercontent.com/groundhogday321/dataframe-datasets/master/fake_commute_data.csv' commute_pattern = pd.read_csv(data) print(commute_pattern.head(2)) # view (location, zoom level, etc.) view = pdk.ViewState(latitude=32.800382, longitude=-97.040728, pitch=50, zoom=9) # layer # from home (orange) to work (purple) arc_layer = pdk.Layer('ArcLayer', data=commute_pattern, get_source_position=['from_lon', 'from_lat'], get_target_position=['to_lon', 'to_lat'], get_width=5, get_tilt=15, # RGBA colors (red, green, blue, alpha) get_source_color=[255, 165, 0, 80], get_target_color=[128, 0, 128, 80]) # render map arc_layer_map = pdk.Deck(map_style='mapbox://styles/mapbox/light-v10', layers=arc_layer, initial_view_state=view, mapbox_key=MAPBOX_API_KEY) # display and save map (to_html(), show()) arc_layer_map.to_html(r'C:\Users\Admin loc\OneDrive\Bureau\arc_layer_map.html') arc_layer_map.show() I tried to load my Mapbox token and run this tutorial...
[ "According to the documentation pdk.Deck doesn't take a mapbox_key argument.\nYou probably need to provide it like this:\narc_layer_map = pdk.Deck(...\n api_keys={'mapbox': MAPBOX_API_KEY})\n\n" ]
[ 0 ]
[]
[]
[ "mapping", "pydeck", "python" ]
stackoverflow_0074559302_mapping_pydeck_python.txt
Q: How to add missing paragraph into HTML code? I would like to add missing paragraph tags <p></p> in broken HTML code. Example: this is my broken HTML code: <strong>My Headline</strong> This text has a missing paragraph <strong>Some more text <a href="#">maybe with a link</a></strong> <p>this one is right</p> I'd like to add the missing paragraph tags like this: <p> <strong>My Headline</strong> </p> <p> This text has a missing paragraph </p> <p> <strong>Some more text <a href="#">maybe with a link</a></strong> </p> <p>this one is right</p> What would be the best solution to fix this problem using Python 3? A: You can use methods of the str class for that. Something like this: >>> s = '''<strong>My Headline</strong> ... This text has a missing paragraph ... <strong>Some more text <a href="#">maybe with a link</a></strong> ... <p>this one is right</p>''' >>> >>> for line in s.splitlines(): ... print(f'<p>{line}</p>' if not line.startswith('<p>') else line) ... <p><strong>My Headline</strong></p> <p>This text has a missing paragraph</p> <p><strong>Some more text <a href="#">maybe with a link</a></strong></p> <p>this one is right</p> >>> A: If you want to read it from the HTML file, try this: with open("example.html", "r") as old, open("new.html", "w") as new: for line in old: if line.strip().startswith("<p>"): new.write(line) else: new.write("<p>\n" + line + "</p>\n")
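For markup messier than one fragment per line, a parser-based approach can be more robust than the string methods shown above. A sketch assuming BeautifulSoup is installed (pip install beautifulsoup4) and that only top-level nodes need wrapping:
from bs4 import BeautifulSoup

html = '''<strong>My Headline</strong>
This text has a missing paragraph
<p>this one is right</p>'''

soup = BeautifulSoup(html, 'html.parser')
for node in list(soup.contents):
    # wrap any top-level node that is not already a <p>, skipping blank whitespace
    if getattr(node, 'name', None) != 'p' and str(node).strip():
        node.wrap(soup.new_tag('p'))
print(soup)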
How to add missing paragraph into HTML code?
I would like to add missing paragraph tags <p></p> in broken HTML code. Example: this is my broken HTML code: <strong>My Headline</strong> This text has a missing paragraph <strong>Some more text <a href="#">maybe with a link</a></strong> <p>this one is right</p> I'd like to add the missing paragraph tags like this: <p> <strong>My Headline</strong> </p> <p> This text has a missing paragraph </p> <p> <strong>Some more text <a href="#">maybe with a link</a></strong> </p> <p>this one is right</p> What would be the best solution to fix this problem using Python 3?
[ "You can use methods of the str class for that.\nSomething like this:\n>>> s = '''<strong>My Headline</strong>\n... This text has a missing paragraph\n... <strong>Some more text <a href=\"#\">maybe with a link</a></strong>\n... <p>this one is right</p>'''\n>>> \n>>> for line in s.splitlines():\n... print(f'<p>{line}</p>' if not line.startswith('<p>') else line)\n... \n<p><strong>My Headline</strong></p>\n<p>This text has a missing paragraph</p>\n<p><strong>Some more text <a href=\"#\">maybe with a link</a></strong></p>\n<p>this one is right</p>\n>>> \n\n", "If you want to read it from the html file, try this:\nwith open(\"example.html\", \"r\")as old, open(\"new.html\", \"w\") as new:\n\n for line in old:\n if line.strip().startswith(\"<p>\"):\n new.write(line)\n else:\n new.write(\"<p>\\n\" + line + \"</p>\\n\")\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "regexp_replace" ]
stackoverflow_0074562932_python_regexp_replace.txt