Dataset columns:

    content             string (85 to 101k chars)
    title               string (0 to 150 chars)
    question            string (15 to 48k chars)
    answers             list
    answers_scores      list
    non_answers         list
    non_answers_scores  list
    tags                list
    name                string (35 to 137 chars)
Q: Using Pandas to dynamically replace values found in other columns

I have a dataset that looks like this:

    Car                        Make        Model  Engine
    Toyota Rav 4 8cyl6L        Toyota             8cyl6L
    Mitsubishi Eclipse 2.1T    Mitsubishi         2.1T
    Monster Gravedigger 25Lsc  Monster            25Lsc

The data was clearly concatenated from Make + Model + Engine at some point, but the car Model was not provided to me. I've been trying to use Pandas to say that if we take Car, replace instances of Make with nothing, replace instances of Engine with nothing, then trim the spaces around the result, we will have Model.

    Car                        Make        Model        Engine
    Toyota Rav 4 8cyl6L        Toyota      Rav 4        8cyl6L
    Mitsubishi Eclipse 2.1T    Mitsubishi  Eclipse      2.1T
    Monster Gravedigger 25Lsc  Monster     Gravedigger  25Lsc

There's something I'm doing wrong when I'm trying to reference another column in this manner.

    df['Model'] = df['Car'].str.replace(df['Make'], '')

gives me an error of "unhashable type: 'Series'". I'm guessing I'm accidentally inputting the entire 'Make' column. At every row I want to make a different substitution using data from other columns in that row. How would I accomplish this?

A: you can use:

    df['Model'] = df.apply(lambda x: x['Car'].replace(x['Make'], "").replace(x['Engine'], ""), axis=1)
    print(df)
    '''
                             Car        Make        Model  Engine
    0        Toyota Rav 4 8cyl6L      Toyota        Rav 4  8cyl6L
    1    Mitsubishi Eclipse 2.1T  Mitsubishi      Eclipse    2.1T
    2  Monster Gravedigger 25Lsc     Monster  Gravedigger   25Lsc
    '''

A: A regex proposition using re.sub:

    import re

    df['Model'] = [re.sub(f'{b}|{c}', '', a) for a, b, c in zip(df['Car'], df['Make'], df["Engine"])]

    # Output :
    print(df)

                             Car        Make        Model  Engine
    0        Toyota Rav 4 8cyl6L      Toyota        Rav 4  8cyl6L
    1    Mitsubishi Eclipse 2.1T  Mitsubishi      Eclipse    2.1T
    2  Monster Gravedigger 25Lsc     Monster  Gravedigger   25Lsc
[ "you can use:\ndf['Model']=df.apply(lambda x: x['Car'].replace(x['Make'],\"\").replace(x['Engine'],\"\"),axis=1)\nprint(df)\n'''\n Car Make Model Engine\n0 Toyota Rav 4 8cyl6L Toyota Rav 4 8cyl6L\n1 Mitsubishi Eclipse 2.1T Mitsubishi Eclipse 2.1T\n2 Monster Gravedigger 25Lsc Monster Gravedigger 25Lsc\n'''\n\n", "A regex proposition using re.sub :\nimport re\n\ndf['Model'] = [re.sub(f'{b}|{c}', '', a) for a,b,c in zip(df['Car'], df['Make'], df[\"Engine\"])]\n\n# Output :\nprint(df)\n\n Car Make Model Engine\n0 Toyota Rav 4 8cyl6L Toyota Rav 4 8cyl6L\n1 Mitsubishi Eclipse 2.1T Mitsubishi Eclipse 2.1T\n2 Monster Gravedigger 25Lsc Monster Gravedigger 25Lsc\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "pandas", "python", "replace" ]
stackoverflow_0074502929_dataframe_pandas_python_replace.txt
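A runnable sketch of the apply-based answer above; the frame is reconstructed from the question's table, and a .strip() is added for the space-trimming step the question mentions but both answers leave out:

    import pandas as pd

    df = pd.DataFrame({
        "Car": ["Toyota Rav 4 8cyl6L", "Mitsubishi Eclipse 2.1T", "Monster Gravedigger 25Lsc"],
        "Make": ["Toyota", "Mitsubishi", "Monster"],
        "Engine": ["8cyl6L", "2.1T", "25Lsc"],
    })

    # Row-wise replacement: each row uses its own Make/Engine values, which is
    # exactly what Series.str.replace(df['Make'], ...) cannot express.
    df["Model"] = df.apply(
        lambda row: row["Car"].replace(row["Make"], "").replace(row["Engine"], "").strip(),
        axis=1,
    )
    print(df["Model"].tolist())  # ['Rav 4', 'Eclipse', 'Gravedigger']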
Q: SKLearn VotingClassifier is throwing an issue about argument not being iterable?

This might seem a bit simplistic and, to be honest, I have spent a few hours looking at this and trying back and forth and now cannot see the wood for the trees. I keep falling into the same error of a zip argument not being iterable when trying to fit a DataFrame and Series to a VotingClassifier. I currently have the following arrangement:

    def MethodName(Data, Models):
        YColumn = list(Data["target"].values)  # Pandas.Series = Data["target"]
        XColumn = Data.drop([~ArrayOfColumnsToDrop~], axis=1).to_records()  # Pandas.DataFrame = Data[...]
        model = VotingClassifier(estimators=Models, voting='hard')
        model.fit(XColumn, YColumn)
        return (model, score)

Currently the following error is returned:

    /opt/conda/lib/python3.7/site-packages/sklearn/ensemble/_voting.py in fit(self, X, y, sample_weight)
        322         transformed_y = self.le_.transform(y)
        323 
    --> 324         return super().fit(X, transformed_y, sample_weight)
        325 
        326     def predict(self, X):

    /opt/conda/lib/python3.7/site-packages/sklearn/ensemble/_voting.py in fit(self, X, y, sample_weight)
         63     def fit(self, X, y, sample_weight=None):
         64         """Get common fit operations."""
    ---> 65         names, clfs = self._validate_estimators()
         66 
         67         if self.weights is not None and len(self.weights) != len(self.estimators):

    /opt/conda/lib/python3.7/site-packages/sklearn/ensemble/_base.py in _validate_estimators(self)
        245                 " of (string, estimator) tuples."
        246             )
    --> 247         names, estimators = zip(*self.estimators)
        248         # defined by MetaEstimatorMixin
        249         self._validate_names(names)

    TypeError: zip argument #1 must support iteration

My understanding of this is that it wants either the X or Y argument to be iterable, and both are:

    for item in YColumn:
        print(item)
        print("Iterable")
        break

    for item in XColumn:
        print(item)
        print("Iterable")
        break

    0
    Iterable
    (0, 0.93846872, [...])
    Iterable

My limited understanding of that would suggest both are iterable arrays, where the YColumn array is a 1d [entry, entry, [...], entry] and the XColumn array is a 1d of [(tuple), (tuple), [...], (tuple)].

Previous attempts:

Just passing both in as a pandas.Series and pandas.DataFrame respectively.
Having the YColumn be an array of tuples in the format of [(0,Y), (1,Y), ..., (D,Y)]. This rightfully complained about not being a simple 1d array.
Using train_test_split. This resulted in the same error message about the zip iterator as now.

Looking at the documentation, it states that it needs an "X{array-like, sparse matrix} of shape (n_samples, n_features)". Could that be the issue? Surely passing the DataFrame in plain is array-like of shape? What am I missing on this?

A: It turns out I am an arrogant coder who is too reliant on following where the error messages are thrown rather than actually reading them all the way through. Kaggle Notebooks doesn't have a debugger in the IDE sense, but it turns out I can add %debug to the start of a code block and it will provide a traditional debugger.

The issue was not with the .fit() method and data types as implied above, but rather I was passing the wrong estimators in. I was passing a list of [estimator1, estimator2, estimator3] when in fact it needed [("str", estimator1), ("str", estimator2), ...]. It even states this in the example, but I was too fixated on it being a data type issue.

I will now go and think long and hard about how my arrogance and procrastination have led to me wasting a week. Although, in fairness to them, I did decide that if I hadn't got the answer by 19:30 I was going to just write the VotingClassifier myself.

TL;DR: Don't trust the error message and only read the last portion; try and find a debugger; read the damn docs and error message through thoroughly. This was solely me being stupid.
[ "It turns out I am an arrogant coder who is too reliant on following where the error messages are thrown than actually reading them all the way through. Kaggle Notebooks doesn't have a debugger in the IDE sense, but it turns out I can add %debug to the start of a code block and it will provide a traditional debugger.\nThe issue was not with the .fit() method and data types as implied above, but rather I was passing the wrong estimators in. I was passing a list of [estimator1,estimator2,estimator3] when in fact it needed [(\"str\",estimator1),(\"str\",estimator2),...]. It even states in the example, but I was too fixated on it being a data type issue.\nI will now go an think long and hard about how my arrogance and procrastination has lead to me wasting a week. Although in fairness to them, I did decide that if I hadn't got the answer by 19:30 I was going to just write the VotingClassifier myself.\nTL:DR: Don't trust the error message and only read the last portion, try and find a debugger, read the damn docs and error message through thoroughly.\nThis was solely me being stupid.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "pandas", "python", "scikit_learn" ]
stackoverflow_0074461779_machine_learning_pandas_python_scikit_learn.txt
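A minimal self-contained sketch of the fix described above; the estimators and dataset are placeholders rather than the asker's models, and the point is only the (name, estimator) tuple shape:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=100, n_features=4, random_state=0)

    # Passing bare estimators, as in the question, raises
    # "zip argument #1 must support iteration" inside _validate_estimators:
    # models = [LogisticRegression(), DecisionTreeClassifier()]

    models = [("lr", LogisticRegression()), ("dt", DecisionTreeClassifier())]
    clf = VotingClassifier(estimators=models, voting="hard")
    clf.fit(X, y)
    print(clf.score(X, y))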
Q: wxPython: Is it possible to manage the hover color of a button?

I'm trying to create a dark theme for my wxPython app, and I'm wondering if I can control the hover color of a button.

    import wx

    class AppButton(wx.Button):

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

            self.SetLabel('Test')
            self.SetOwnBackgroundColour('#131313')
            self.SetOwnForegroundColour('white')
            self.SetPosition((100,100))

    class Exampple(wx.Frame):

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

            self.SetSize(300,300)
            self.SetBackgroundColour('#151719')
            self.Center()
            self.initUI()

        def initUI(self):
            panel = wx.Panel(self)
            self.button = AppButton(panel)

    if __name__ == '__main__':
        app = wx.App()
        ex = Exampple(None)
        ex.Show()
        app.MainLoop()

There are SetOwnBackgroundColour and SetOwnForegroundColour methods, but is there something like SetOwnHoverColor or maybe a special event like wx.EVT_HOVER? Because the default light blue hover color is not what I want in my dark theme. I tried using EVT_ENTER_WINDOW and EVT_LEAVE_WINDOW and also EVT_MOTION, but it didn't work.

A: So close! wx.EVT_ENTER_WINDOW and wx.EVT_LEAVE_WINDOW will do the job; I guess you missed the implementation somehow.

    import wx

    class AppButton(wx.Button):

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

            self.SetLabel('Test')
            self.SetBackgroundColour('black')
            self.SetForegroundColour('white')
            self.SetPosition((100,100))
            self.Bind(wx.EVT_ENTER_WINDOW, self.OnHover)
            self.Bind(wx.EVT_LEAVE_WINDOW, self.OnUnHover)

        def OnHover(self, event):
            print("Enter", event.Entering(), "Leave", event.Leaving())
            self.SetBackgroundColour('maroon')
            self.SetForegroundColour('yellow')
            event.Skip()

        def OnUnHover(self, event):
            print("Enter", event.Entering(), "Leave", event.Leaving())
            self.SetBackgroundColour('black')
            self.SetForegroundColour('white')
            event.Skip()

    class Exampple(wx.Frame):

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)

            self.SetSize(300,300)
            self.SetBackgroundColour('#151719')
            self.Center()
            self.initUI()

        def initUI(self):
            panel = wx.Panel(self)
            self.button = AppButton(panel)

    if __name__ == '__main__':
        app = wx.App()
        ex = Exampple(None)
        ex.Show()
        app.MainLoop()
[ "So Close!\nwx.EVT_ENTER_WINDOW and wx.EVT_LEAVE_WINDOW will do the job, I guess you missed the implementation somehow.\nimport wx\n\nclass AppButton(wx.Button):\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n \n self.SetLabel('Test')\n self.SetBackgroundColour('black')\n self.SetForegroundColour('white')\n self.SetPosition((100,100))\n self.Bind(wx.EVT_ENTER_WINDOW, self.OnHover)\n self.Bind(wx.EVT_LEAVE_WINDOW, self.OnUnHover)\n \n def OnHover(self, event):\n print(\"Enter\",event.Entering(), \"Leave\",event.Leaving())\n self.SetBackgroundColour('maroon') \n self.SetForegroundColour('yellow')\n event.Skip()\n \n def OnUnHover(self, event):\n print(\"Enter\",event.Entering(), \"Leave\",event.Leaving())\n self.SetBackgroundColour('black') \n self.SetForegroundColour('white')\n event.Skip()\n\nclass Exampple(wx.Frame):\n \n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n \n self.SetSize(300,300)\n self.SetBackgroundColour('#151719')\n self.Center()\n self.initUI()\n \n def initUI(self):\n \n panel = wx.Panel(self)\n self.button = AppButton(panel)\n \nif __name__ == '__main__':\n app = wx.App()\n ex = Exampple(None)\n ex.Show()\n app.MainLoop()\n\n \n" ]
[ 0 ]
[]
[]
[ "python", "user_interface", "wxpython" ]
stackoverflow_0074500316_python_user_interface_wxpython.txt
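One hedged caveat on the handlers above: on some platforms a colour set inside an event handler is not repainted until the next paint event, so an explicit Refresh() may be needed for the hover change to show immediately. A small variant of the handler, untested here:

    def OnHover(self, event):
        self.SetBackgroundColour('maroon')
        self.SetForegroundColour('yellow')
        self.Refresh()  # force an immediate repaint of the button
        event.Skip()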
Q: Installing pygame through pip error

I'm trying to install pygame to work with Python through pip; however, when I use the command pip install pygame, it begins working and seems alright until it throws an error. This is the output I get. I'm not sure if I'm doing it correctly or what; I'm new to pip so I'm just not sure. Any help would be appreciated!

    C:\Users\Matthew>pip install pygame
    Collecting pygame
      Using cached pygame-1.9.2.tar.gz
        Complete output from command python setup.py egg_info:
        WARNING, No "Setup" File Exists, Running "config.py"
        Using WINDOWS configuration...

        Path for SDL not found.
        Too bad that is a requirement! Hand-fix the "Setup"
        Path for FONT not found.
        Path for IMAGE not found.
        Path for MIXER not found.
        Path for PNG not found.
        Path for JPEG not found.
        Path for PORTMIDI not found.
        Path for COPYLIB_tiff not found.
        Path for COPYLIB_z not found.
        Path for COPYLIB_vorbis not found.
        Path for COPYLIB_ogg not found.

        If you get compiler errors during install, doublecheck
        the compiler flags in the "Setup" file.

        Continuing With "setup.py"
        Error with the "Setup" file,
        perhaps make a clean copy from "Setup.in".
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "C:\Users\Matthew\AppData\Local\Temp\pip-build-kzk4_t2_\pygame\setup.py", line 165, in <module>
            extensions = read_setup_file('Setup')
          File "c:\users\matthew\appdata\local\programs\python\python36-32\lib\distutils\extension.py", line 171, in read_setup_file
            line = expand_makefile_vars(line, vars)
          File "c:\users\matthew\appdata\local\programs\python\python36-32\lib\distutils\sysconfig.py", line 410, in expand_makefile_vars
            s = s[0:beg] + vars.get(m.group(1)) + s[end:]
        TypeError: must be str, not NoneType

        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in C:\Users\Matthew\AppData\Local\Temp\pip-build-kzk4_t2_\pygame\

A: This is an old question, but the error is usually related to incompatible versions of Python and PyGame. You should check if the version of Python you're using is compatible with the version of PyGame. You may need to install a specific version of PyGame, e.g. python -m pip install pygame==1.9.3. I've been using PyGame 1.9.3 with Python 3.6.8 and it's working just fine.

A: Currently (Nov. 2022), Python v. 3.11 for Windows x64 is incompatible with the current version of Pygame (2.1.2). In order to make it work, you have to roll back to Python 3.10.2 (confirmed through testing).
[ "This is an old question, but the error is usually related with incompatible versions of Python and PyGame. You should check if the version of Python you're using is compatible with the version of PyGame.\nYou may need to install a specific version of PyGame, e.g. python -m pip install pygame==1.9.3\nI've been using PyGame 1.9.3 with Python 3.6.8 and it's working just fine\n", "Currently (Nov. 2022), Python v. 3.11 for Windows x64 is incompatible with the current version of Pygame (2.1.2). In order to make it work, you have to roll back to Python 3.10.2 (confirmed through testing).\n" ]
[ 2, 0 ]
[]
[]
[ "command_prompt", "pip", "pygame", "python" ]
stackoverflow_0041339661_command_prompt_pip_pygame_python.txt
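Both answers come down to pairing the interpreter with a compatible PyGame build. A small sketch for checking what is actually being paired before pinning; the pin shown is the pairing from the first answer, not a general recommendation:

    import sys
    print(sys.version_info)  # e.g. (3, 6, 8, ...) pairs with pygame 1.9.3

    # Then, from the same prompt that runs this interpreter:
    #   python -m pip install --upgrade pip   # a newer pip can pick up prebuilt wheels
    #   python -m pip install pygame==1.9.3   # pin a build known to match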
Q: Dashboard Plotly ValueError: Invalid value

I have a Dash app that plots several graphs. When the Dash app starts, some plots do not get displayed, and I see the error below. This only occurs on the initial startup of the app. When the webpage is refreshed, the error does not re-appear, and all plots get displayed without errors.

    Callback error updating {"index":1,"tag":"bar-9-graph"}.figure

    @app.callback(
        ServersideOutput("filtered-data", "data"),
        Input({"tag": "v2", "index": 1}, "value"),
        Input({"tag": "v3", "index": 1}, "value"),
        Input({"tag": "v4", "index": 1}, "value"),
        Input({"tag": "date-range", "index": 1}, "start_date"),
        Input({"tag": "date-range", "index": 1}, "end_date"),
        memoize=True
    )
    def filter_data(v2, v3, v4, start_date, end_date):
        data = hc._select_filter(df, labels_dict.keys(), [v2, v3, v4])
        data = hc._date_filter(data, "fecha", start_date, end_date)
        return data

    @app.callback(
        Output({"tag": "bar-9-graph", "index": 1}, "figure"),
        Input("filtered-data", "data"),
    )
    def make_bar_2(data):
        data_aux = data.copy()
        data_aux = data_aux.loc[:, ['nit', 'frequency', 'group']]
        data_aux = data_aux.drop_duplicates(subset=['nit'])
        data_aux = data_aux.groupby(['frequency'], as_index=False).size()
        return hc.generic_bar_graphB(data_aux, "frequency")

    def generic_bar_graphB(data: pd.Series, column: str,):
        fig = px.bar(data, x=column, y='size', title="", labels={column: ''})
        fig.update_xaxes(tickangle=330)
        fig.update_layout()
        return fig

Thanks!!

A: I had a similar issue, but found a solution of sorts from LeoWY on the Plotly forums. LeoWY suggests calling the graph within a try/except block in which you add the same function call after both the try statement and the except statement. As LeoWY explains, this method should allow the graph to render properly the second time if it doesn't render correctly at first.

For instance, you could update your generic_bar_graphB function as follows:

    def generic_bar_graphB(data: pd.Series, column: str,):
        try:
            fig = px.bar(data, x=column, y='size', title="", labels={column: ''})
        except:
            fig = px.bar(data, x=column, y='size', title="", labels={column: ''})
        fig.update_xaxes(tickangle=330)
        fig.update_layout()
        return fig
[ "I had a similar issue, but found a solution of sorts from LeoWY on the Plotly forums. LeoWY suggests calling the graph within a try/except block in which you add the same function call after both the try statement and the except statement. As LeoWY explains, this method should allow the graph to render properly the second time if it doesn't render correctly at first.\nFor instance, you could update your generic_bar_graph function as follows:\ndef generic_bar_graphB(data: pd.Series, column: str,):\ntry:\n fig = px.bar(data, x=column, y='size', title=\"\", labels={column:''})\nexcept:\n fig = px.bar(data, x=column, y='size', title=\"\", labels={column:''})\nfig.update_xaxes(tickangle = 330)\nfig.update_layout()\nreturn fig\n\n" ]
[ 0 ]
[]
[]
[ "callback", "dashboard", "plotly", "plotly_dash", "python" ]
stackoverflow_0074367104_callback_dashboard_plotly_plotly_dash_python.txt
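An alternative sketch, offered as an assumption about this app rather than a confirmed fix: since the failure only happens on the callback's first firing, Dash's prevent_initial_call flag stops the figure callback from running before the filtered-data store holds anything:

    @app.callback(
        Output({"tag": "bar-9-graph", "index": 1}, "figure"),
        Input("filtered-data", "data"),
        prevent_initial_call=True,
    )
    def make_bar_2(data):
        ...  # body unchanged from the question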
Q: Getting error "cannot convert the series to <class 'int'>" when trying to convert Unix timestamp (int) to datetime

I have a table called 'Generic' with a date column called 'createdDate' with values as Unix timestamps. The datatype of the Unix timestamp values is currently int64. I would like to create another column in the dataframe called 'createdDate2' which would contain the Unix dates in a datetime format (e.g. YY/MM/DD). I am running the following code:

    import datetime

    generic['createdDate2'] = datetime.datetime.fromtimestamp(generic.createdDate).strftime('%Y-%m-%d %H:%M:%S')

However, I keep getting the following error:

    cannot convert the series to <class 'int'>

Any idea what I am doing wrong? Thanks a lot!

A: You are passing an entire Series to a function that only takes a single integer, so you get an error. You can use the code below.

    generic['createdDate2'] = pd.to_datetime(generic['createdDate'], unit='s')
[ "You give the entire dataframe to the function that only needs to take a single integer, so you get an error. You can use the code below.\ngeneric['createdDate2']=pd.to_datetime(generic['createdDate'], unit='s')\n\n" ]
[ 0 ]
[]
[]
[ "datetime", "python", "unix" ]
stackoverflow_0074502981_datetime_python_unix.txt
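A short sketch extending the answer to the YY/MM/DD string format the question mentions; the sample timestamps are made up:

    import pandas as pd

    generic = pd.DataFrame({"createdDate": [1669248000, 1669334400]})
    generic["createdDate2"] = pd.to_datetime(generic["createdDate"], unit="s")
    # dt.strftime produces the string form, e.g. 22/11/24
    generic["createdDate2_str"] = generic["createdDate2"].dt.strftime("%y/%m/%d")
    print(generic)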
Q: python3 / sqlite3 does not actually insert into DB?

Currently, I'm setting up an SQLite database using Python. I seem to lack something fundamental, because rows are not actually inserted in the database on the disk. I'm walking through a DigitalOcean tutorial from https://www.digitalocean.com/community/tutorials/how-to-use-the-sqlite3-module-in-python-3

This is the code I'm attempting to run:

    import sqlite3

    connection = sqlite3.connect("aquarium.db")
    print(connection.total_changes)
    cursor = connection.cursor()
    # cursor.execute("CREATE TABLE fish (name TEXT, species TEXT, tank_number INTEGER)")
    print(connection.total_changes)
    cursor = connection.cursor()
    #cursor.execute("INSERT INTO fish VALUES ('Sammy', 'shark', 1)")
    #cursor.execute("INSERT INTO fish VALUES ('Jamie', 'cuttlefish', 7)")
    print(connection.total_changes)
    rows = cursor.execute("SELECT name, species, tank_number FROM fish").fetchall()
    print(rows)
    connection.close()

Imagine running the code with all lines active in the first execution. I get the output:

    0
    0
    2
    [('Sammy', 'shark', 1), ('Jamie', 'cuttlefish', 7)]

I now have a file called aquarium.db and it has a correct schema, but the rows are never stored on disk. Re-running the same code, omitting the lines I have commented out, I see that the file is indeed empty:

    0
    0
    0
    []

What am I missing here?

BR, Michael

A: You need to commit(); see the docs. I slightly modified your code to be more clear:

    import sqlite3

    connection = sqlite3.connect("aquarium.db")
    print(f"Number of changes {connection.total_changes}")
    cursor = connection.cursor()

    update = False
    if update:
        try:
            cursor.execute("CREATE TABLE fish (name TEXT, species TEXT, tank_number INTEGER)")
            print(f"Number of changes {connection.total_changes}")
        except:
            print("Couldn't create table, it probably exists.")

        cursor.execute("INSERT INTO fish VALUES ('Sammy', 'shark', 1)")
        cursor.execute("INSERT INTO fish VALUES ('Jamie', 'cuttlefish', 7)")
        # now you need to commit your inserts
        connection.commit()
        print(f"Number of changes {connection.total_changes}")
    rows = cursor.execute("SELECT name, species, tank_number FROM fish").fetchall()
    print(rows)
    connection.close()
[ "you need to commit() see docs\nI slightly modified your code to be more clear\nimport sqlite3\n\nconnection = sqlite3.connect(\"aquarium.db\")\nprint(f\"Number of changes {connection.total_changes}\")\ncursor = connection.cursor()\n\nupdate = False\nif update:\n try:\n cursor.execute(\"CREATE TABLE fish (name TEXT, species TEXT, tank_number INTEGER)\")\n print(f\"Number of changes {connection.total_changes}\")\n except:\n print(\"Couldn't create table, it probably exists.\")\n\n cursor.execute(\"INSERT INTO fish VALUES ('Sammy', 'shark', 1)\")\n cursor.execute(\"INSERT INTO fish VALUES ('Jamie', 'cuttlefish', 7)\")\n # now you need to commit your inserts\n connection.commit()\n print(f\"Number of changes {connection.total_changes}\")\nrows = cursor.execute(\"SELECT name, species, tank_number FROM fish\").fetchall()\nprint(rows)\nconnection.close()\n\n" ]
[ 2 ]
[]
[]
[ "python", "sqlite" ]
stackoverflow_0074503029_python_sqlite.txt
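An equivalent sketch using the connection as a context manager, which commits when the block exits cleanly and rolls back on an exception; it assumes the fish table from the question already exists:

    import sqlite3

    connection = sqlite3.connect("aquarium.db")
    with connection:  # implicit commit at the end of the block
        connection.execute("INSERT INTO fish VALUES ('Nemo', 'clownfish', 2)")
    print(connection.execute("SELECT * FROM fish").fetchall())
    connection.close()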
Q: Printing different things on different SSH sessions

I'm making a TUI card game for a school project, where each player would take turns sitting at the machine to play a card, get up, and let the next player play, and so on. The layout I have for the moment is printing the table (on which you can find all of the previously played cards), then a line of cards in the hand of the player.

What I want to do now is make it properly multiplayer, where multiple people can connect to a machine with ssh, start the program, and have each player see his cards at all times (the only thing changing being the table each time another player puts down a card). My problem is I don't even know where to start to do that. Is there a way to attribute an ID to a terminal, and print certain things but not others? For example:

    if (id == SSHid_1):
        print(CardsPlayer1)
    if (id == SSHid_2):
        print(CardsPlayer2)
    if (id == SSHid_3):
        print(CardsPlayer3)

Also, is there a way to prompt input from a specific SSH session?

A: You may find it convenient to append (timestamp, cards) tuples to files that follow a certain naming scheme:

    /tmp/player1.csv
    /tmp/player2.csv
    /tmp/player3.csv

By convention player1 will only write to that first file, player2 to the 2nd file and so on. Now, to obtain the state of the game at some timestamp t, we read all rows from all files having stamps <= t. Stat each file and read its mtime, the modification time, to learn of new moves. Consider having all players append to a single file in order to make fewer stat calls. Or use one of the more advanced APIs to await events, such as epoll, poll, or kqueue.

https://docs.python.org/3/library/select.html#select.epoll

There are many IPC techniques available. Using an RDBMS, a redis server, or a pub-sub service such as kafka might fit your use case better than communicating via a shared filesystem.
[ "You may find it convenient to append (timestamp, cards) tuples\nto files that follow a certain naming scheme:\n\n/tmp/player1.csv\n/tmp/player2.csv\n/tmp/player3.csv\n\nBy convention player1 will only write to that first file,\nplayer2 to the 2nd file and so on.\nNow, to obtain the state of the game at some timestamp t,\nwe read all rows from all files having stamps <= t.\nStat each file and read its mtime,\nthe modification time, to learn of new moves.\nConsider having all players append to a single file\nin order to make fewer stat calls.\nOr use one of the more advanced APIs to await events,\nsuch as epoll, poll, or kqueue.\nhttps://docs.python.org/3/library/select.html#select.epoll\n\nThere are many IPC techniques available.\nUsing an RDBMS, a redis server,\nor a pub-sub service such as kafka\nmight fit your use case better than\ncommunicating via common FS.\n" ]
[ 0 ]
[]
[]
[ "python", "ssh" ]
stackoverflow_0074502233_python_ssh.txt
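A minimal sketch of the single-shared-file variant suggested above; the file path and polling interval are arbitrary choices:

    import csv
    import os
    import time

    MOVES = "/tmp/moves.csv"
    open(MOVES, "a").close()  # make sure the file exists before the first stat()

    def append_move(player, card):
        with open(MOVES, "a", newline="") as f:
            csv.writer(f).writerow([time.time(), player, card])

    def wait_for_change(last_mtime, interval=0.5):
        # poll the modification time to learn of other players' moves
        while True:
            mtime = os.stat(MOVES).st_mtime
            if mtime != last_mtime:
                return mtime
            time.sleep(interval)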
Q: spacy doc.char_span raises error whenever there is any number in string

I was trying to train a model with spaCy. I have strings and their token offsets saved in a JSON file. I read that file using utf-8 encoding and there is no special character in it. But it raises TypeError: object of type 'NoneType' has no len().

    # code for reading file
    with open("data/results.json", "r", encoding="utf-8") as file:
        training_data = json.loads(file.read())

I have also tried changing alignment_mode from strict to contract & expand. The expand option works but produces incorrect spans.

    span = doc.char_span(start, end, label, alignment_mode="contract")

The code that I'm using:

    import spacy
    from spacy.tokens import DocBin

    nlp = spacy.blank("en")
    db = DocBin()

    training_dataset = [[
        "Department of Chemistry,Central University of Las Villas,Santa Clara,Villa Clara,54830,Cuba.",
        [
            [57, 68, "city_name"],
            [87, 91, "country_name"]
        ]
    ]]

    for text, annotations in training_dataset:
        doc = nlp(text)
        ents = []
        for start, end, label in annotations:
            span = doc.char_span(start, end, label)
            ents.append(span)
        doc.ents = ents
        db.add(doc)

I have pasted the JSON object that is read from the file directly into the program for debugging purposes. When I tried after removing the "54830," part, the program ran successfully. I have also referred to this issue, but that issue involves a special character, and this string doesn't have any special character. Does anyone know why this is happening with all strings that contain a number in them?

A: The error TypeError: object of type 'NoneType' has no len() occurs in line doc.ents = ents when one of the entries in ents is None.

The reason for having a None in the list is that doc.char_span(start, end, label) returns None when the start and end provided don't align with token boundaries.

The tokenizer of the model (spacy.blank("en")) doesn't behave as needed for this use case. It seems that it doesn't produce an end of token after a comma that follows a number without a space after the comma.

Examples:

Tokenizing a number with decimals:

    >>> import spacy
    >>> nlp = spacy.blank("en")
    >>> nlp.tokenizer.explain("5,1")
    [('TOKEN', '5,1')]

One single token.

Tokenizing a number + comma + letter:

    >>> nlp.tokenizer.explain("5,a")
    [('TOKEN', '5,a')]

One single token.

Tokenizing a letter + comma + letter:

    >>> nlp.tokenizer.explain("a,a")
    [('TOKEN', 'a'), ('INFIX', ','), ('TOKEN', 'a')]

Three tokens.

Tokenizing a number + comma + space + letter:

    >>> nlp.tokenizer.explain("5, a")
    [('TOKEN', '5'), ('SUFFIX', ','), ('TOKEN', 'a')]

Three tokens.

Tokenizing a number + comma + space + number:

    >>> nlp.tokenizer.explain("5, 1")
    [('TOKEN', '5'), ('SUFFIX', ','), ('TOKEN', '1')]

Three tokens.

Therefore, with the default tokenizer, a space is needed after a comma following a number so the comma is used to create the token boundaries.

Workarounds:

Preprocess your text to add a space after the commas you desire to split tokens by. This would also require updating the start and end values of the annotations.
Create your custom tokenizer as described in the spaCy documentation: https://spacy.io/usage/linguistic-features#native-tokenizers
[ "The error TypeError: object of type 'NoneType' has no len() occurs in line doc.ents = ents when one of the entries in ents is None.\nThe reason for having a None in the list is that doc.char_span(start, end, label) returns None when the start and end provided don't align with token boundaries.\nThe tokenizer of the model (spacy.blank(\"en\")) doesn't behave as needed for this use case. It seems that it doesn't produce an end of token after a comma that follows a number without space after the comma.\nExamples:\nTokenizing a number with decimals:\n>>> import spacy\n>>> nlp = spacy.blank(\"en\")\n>>> nlp.tokenizer.explain(\"5,1\")\n[('TOKEN', '5,1')]\n\nOne single token.\nTokenizing a number + comma + letter:\n>>> nlp.tokenizer.explain(\"5,a\")\n[('TOKEN', '5,a')]\n\nOne single token.\nTokenizing a letter + comma + letter:\n>>> nlp.tokenizer.explain(\"a,a\")\n[('TOKEN', 'a'), ('INFIX', ','), ('TOKEN', 'a')]\n\nThree tokens.\nTokenizing a number + comma + space + letter:\n>>> nlp.tokenizer.explain(\"5, a\")\n[('TOKEN', '5'), ('SUFFIX', ','), ('TOKEN', 'a')]\n\nThree tokens.\nTokenizing a number + comma + space + number:\n>>> nlp.tokenizer.explain(\"5, 1\")\n[('TOKEN', '5'), ('SUFFIX', ','), ('TOKEN', '1')]\n\nThree tokens.\nTherefore, with the default tokenizer, a space is needed after a comma following a number so the comma is used to create the token boundaries.\nWorkarounds:\n\nPreprocess your text to add a space after the commas you desire to split tokens by. This would also require to update the start and end values of the annotations.\nCreate your custom tokenizer as described in Spacy documentation: https://spacy.io/usage/linguistic-features#native-tokenizers\n\n" ]
[ 1 ]
[]
[]
[ "json", "nlp", "python", "spacy", "spacy_3" ]
stackoverflow_0074494620_json_nlp_python_spacy_spacy_3.txt
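A minimal sketch of the first workaround, preprocessing plus offset shifting; the helper name is an assumption, and it assumes no entity span itself contains a bare comma:

    def space_after_commas(text, annotations):
        # positions of commas not already followed by a space
        bare_commas = {i for i, ch in enumerate(text)
                       if ch == "," and i + 1 < len(text) and text[i + 1] != " "}
        chars = []
        for i, ch in enumerate(text):
            chars.append(ch)
            if i in bare_commas:
                chars.append(" ")
        shifted = []
        for start, end, label in annotations:
            # every insertion made before the span pushes it right by one
            shift = sum(1 for c in bare_commas if c < start)
            shifted.append([start + shift, end + shift, label])
        return "".join(chars), shifted

On the sample sentence this moves the "Santa Clara" span from [57, 68] to [59, 70] and "Cuba" from [87, 91] to [92, 96], after which doc.char_span aligns with token boundaries.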
Q: Difficulties with call a method from a subclass in python - AttributeError: 'str' object has no attribute

I want to call a method in a subclass using threading. This method is a while loop that executes a method in the main class. I don't understand the error; as I interpret it, I am doing something wrong with the inheritance. A minimal example of my code:

    class Echo(WebSocket):
        def __init__(self, client, server, sock, address):
            super().__init__(server, sock, address)
            self.modbus = client

        def temp_control(self, value):
            do_something(value)
            return True

    class Temperature_Control3(Echo):
        def __init__(self):
            super().__init__()
            self.value

        def control(self, value):
            while True:
                print("temp_controll")
                self.temp_control(self, value)  # call the method in Echo class
                time.sleep(4)

    def main():
        with ModbusClient(host=HOST, port=PORT) as client:
            client.connect()
            time.sleep(0.01)
            print("Websocket server on port %s" % PORTNUM)
            server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client))
            control = Temperature_Control3()
            t3 = threading.Thread(target=control.control, args=('', 'get'), daemon=False)
            t3.start()
            try:
                t1 = threading.Thread(target=server.serveforever())
                t1.start()
                for thread in threading.enumerate():
                    print(thread.name)
            finally:
                server.close()

t1 is starting well but t3 can't because of the error above. I have little experience with OOP programming; maybe someone here can help. Thanks!

Edit: I got a new error:

    Traceback (most recent call last):
      File "websocketserver6_threading.py", line 566, in <module>
        main()
      File "websocketserver6_threading.py", line 545, in main
        control = Temperature_Control3()
      File "websocketserver6_threading.py", line 516, in __init__
        super().__init__()
    TypeError: __init__() missing 4 required positional arguments: 'client', 'server', 'sock', and 'address'

A: One problem is that you're referring to Temperature_Controll3.temp_controll directly, which is the wrong way to do it. Instead, you need to create an instance of that class:

    t3 = Temperature_Controll3()

And then refer to the method of the instance:

    target=t3.temp_controll

However, there are also other problems. The temp_controll() function is infinitely recursive:

    def temp_controll(self, value):
        while True:
            print("temp_controll")
            self.temp_controll(self, value)
            time.sleep(4)

The function calls itself, which calls itself, which calls itself... forever. I have no idea what you're trying to do here.
[ "One problem is that you're referring to Temperature_Controll3.temp_controll directly, which is the wrong way to do it.\nInstead, you need to create an instance of that class:\nt3 = Temperature_Controll3()\n\nAnd then refer to the method of the instance:\ntarget=t3.temp_controll\n\nHowever, there are also other problems. The temp_controll() function is infinitely recursive:\ndef temp_controll(self, value):\n while True:\n print(\"temp_controll\")\n self.temp_controll(self, value)\n time.sleep(4)\n\nThe function calls itself, which calls itself, which calls itself... forever. I have no idea what you're trying to do here.\n" ]
[ 0 ]
[]
[]
[ "class", "inheritance", "multithreading", "python" ]
stackoverflow_0074503178_class_inheritance_multithreading_python.txt
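Two points worth adding, flagged as an observation and a sketch rather than part of the original answer. First, threading.Thread(target=server.serveforever()) calls serveforever immediately and passes its return value as the thread target; it should be target=server.serveforever, without parentheses. Second, a composition-based sketch that avoids inheriting from Echo entirely, so no extra constructor arguments are needed (names are placeholders):

    import threading
    import time

    class TemperatureControl:
        def __init__(self, echo):
            self.echo = echo  # an already-constructed Echo handler

        def control(self, value):
            while True:
                self.echo.temp_control(value)  # no explicit self argument
                time.sleep(4)

    # usage sketch; echo would come from the running websocket server:
    # control = TemperatureControl(echo)
    # t3 = threading.Thread(target=control.control, args=("get",), daemon=True)
    # t3.start()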
Q: Filter out thin shapes produced by skimage.measure

I'm trying to detect large blob objects. I've used skimage.measure to sort out connected components with a connectivity of 1 and counts greater than 9.

    from skimage import measure

    all_labels = measure.label(np.isnan(arr), connectivity=1)
    unique, counts = np.unique(all_labels, return_counts=True)
    filtered_labels = measure.label(np.isin(all_labels, list(np.argwhere(counts > 9))[1:]) * all_labels)

My labels show up like this: some of them are lines, and some of them are blobs. Is there a good way to check the true thickness of a shape? I would go by the max height and max width of the box around the shape, but some of the lines run diagonally.

A: You can use cv functions that return rotated bounding boxes and ellipses: you are basically interested to see if the minimum bounding box around your blobs has an area < threshold. You can get that (possibly rotated) bounding box or ellipse with cv2.minAreaRect(cnt) after you got your contours, or cv2.fitEllipse(cnt) for the ellipse.

Another viable approach is to use erosion to check which "thin" shapes have been totally eroded/deleted.
[ "You can use cv functions that return rotated bounding boxes and ellipses: you are basically interested to see if the minimum bounding box around your blobs has an area < threshold. You can get that (possibly rotated) bounding box or ellipse with cv2.minAreaRect(cnt)\nafter you got your contours, or cv2.fitEllipse(cnt) for the ellipse.\nAnother viable approach is to use erosion to check which \"thin\" shapes have been totally eroded/deleted.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074503117_python.txt
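Staying within skimage, an alternative to the cv2 route above: regionprops fits an ellipse to each labeled region, and its minor axis length tracks true thickness even for diagonal lines. The threshold below is an assumed value, and the attribute is spelled minor_axis_length in skimage releases before 0.19:

    from skimage import measure

    props = measure.regionprops(filtered_labels)
    # keep only regions whose fitted-ellipse minor axis is thick enough
    blob_labels = [p.label for p in props if p.axis_minor_length > 3]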
Q: Sorting keys in a map

The program gets an input at the beginning. That input string can contain capital letters or any other ASCII letters. We don't differentiate between them, so we just use the lower() method. Also, any characters other than letters from the alphabet (numbers etc.) are used as separators between strings. The function is supposed to analyse the input, sort it, and count it.

Output: {'idk': 2, 'idc': 1, 'idf': 1}
Input: print(word_frequency("Idk, Idc, Idk, Idf"))

I tried this and it's sorting the input, but I can't find a way to separate strings. This is what I did:

    def word_frequency(text):
        f = {}
        pendens = ""
        for s in text:
            if s.isalpha():
                pendens += s
            else:
                pendens = pendens.lower()
                if pendens != " ":
                    if f.get(pendens, -1) != -1:
                        f[pendens] += 1
                    else:
                        f[pendens] = 1
        pendens = pendens.lower()
        if pendens != " ":
            if f.get(pendens, -1) != -1:
                f[pendens] += 1
            else:
                f[pendens] = 1
        return f

    print(word_frequency("Idk, Idc, Idk, Idf"))
    print(word_frequency("Idk,+Idc,Idk;;-;Idf"))
    print(word_frequency("help me please"))

I'm trying to get better at coding so any form of help will be appreciated :)

A: The easiest solution would involve regex and Counter, which is a type of dictionary specifically tailored to counting occurrences of values like this:

    >>> import re
    >>> from collections import Counter

    >>> words = 'Idk, Idc, Idk, Idf'

    >>> re.findall('[a-z]+', words.lower())
    ['idk', 'idc', 'idk', 'idf']

    >>> Counter(re.findall('[a-z]+', words.lower()))
    Counter({'idk': 2, 'idc': 1, 'idf': 1})

If you cannot use Counter, then a plain dictionary would also work. We can use dict.get to handle words that both are and are not in the dict yet:

    def count_words(words):
        counts = {}
        for word in re.findall('[a-z]+', words.lower()):
            counts[word] = counts.get(word, 0) + 1
        return counts

Results in:

    >>> count_words('Idk, Idc, Idk, Idf')
    {'idk': 2, 'idc': 1, 'idf': 1}

If you cannot use regex, then the problem becomes more complicated, but still doable. A generator like the following would work:

    def split_words(words):
        word = ''
        for c in words.lower():
            if 97 <= ord(c) <= 122:  # ord('a') thru ord('z')
                word += c
            elif word:
                yield word
                word = ''
        if word:
            yield word

    def count_words(words):
        counts = {}
        for word in split_words(words):
            counts[word] = counts.get(word, 0) + 1
        return counts

Results in:

    >>> count_words('Idk, Idc, Idk, Idf')
    {'idk': 2, 'idc': 1, 'idf': 1}
[ "The easiest solution would involve regex and Counter, which is a type of dictionary specifically tailored to counting occurrences of values like this:\n>>> import re\n>>> from collections import Counter\n\n>>> words = 'Idk, Idc, Idk, Idf'\n\n>>> re.findall('[a-z]+', words.lower())\n['idk', 'idc', 'idk', 'idf']\n\n>>> Counter(re.findall('[a-z]+', words.lower()))\nCounter({'idk': 2, 'idc': 1, 'idf': 1})\n\nIf you cannot use Counter, then a plain dictionary would also work. We can use dict.get to handle words that both are and are not in the dict yet:\ndef count_words(words):\n counts = {}\n for word in re.findall('[a-z]+', words.lower()):\n counts[word] = counts.get(word, 0) + 1\n return counts\n\nResults in:\n>>> count_words('Idk, Idc, Idk, Idf')\n{'idk': 2, 'idc': 1, 'idf': 1}\n\nIf you cannot use regex, then the problem becomes more complicated, but still doable. A generator like the following would work:\ndef split_words(words):\n word = ''\n for c in words.lower():\n if 97 <= ord(c) <= 122: # ord('a') thru ord('z')\n word += c\n elif word:\n yield word\n word = ''\n if word:\n yield word\n\n\ndef count_words(words):\n counts = {}\n for word in split_words(words):\n counts[word] = counts.get(word, 0) + 1\n return counts\n\nResults in:\n>>> count_words('Idk, Idc, Idk, Idf')\n{'idk': 2, 'idc': 1, 'idf': 1}\n\n" ]
[ 2 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074503013_dictionary_list_python.txt
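If the "sort" part of the task means ordering the result by count, as the expected output suggests, a sketch building on the plain-dict version above (ties keep their order of first appearance, since sorted is stable):

    def word_frequency(text):
        counts = {}
        word = ""
        for ch in text.lower():
            if ch.isalpha():
                word += ch
            elif word:
                counts[word] = counts.get(word, 0) + 1
                word = ""
        if word:
            counts[word] = counts.get(word, 0) + 1
        return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))

    print(word_frequency("Idk, Idc, Idk, Idf"))  # {'idk': 2, 'idc': 1, 'idf': 1}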
Q: How to compare dataframes with the same size but different information

I have two data frames where each row is a product and each column is a different month. They always have the same size and look something like this:

    data1 = {
        "product": ['A', "B", "C", "D"],
        "2022-01": [1, 2, 3, 4],
        "2022-02": [1, 2, 3, 4],
        "2022-03": [1, 2, 3, 4]
    }

    data2 = {
        "product": ['A', "B", "C", "D"],
        "2022-01": [13, "None", 15, 16],
        "2022-02": [17, 18, "None", 20],
        "2022-03": ["None", 22, 23, "None"]
    }

The difference between them is that the second one can sometimes contain None values. I would like to first create a third dataframe with the interleaved data, like this (the flag is to indicate that it was inserted):

    data3 = {
        "product": ['A', "B", "C", "D"],
        "2022-01": [1, 2, 3, 4],
        "2022-01 - flag": [13, "None", 15, 16],
        "2022-02": [5, 6, 7, 8],
        "2022-02 - flag": [17, 18, "None", 20],
        "2022-03": [9, 10, 11, 12],
        "2022-03 - flag": ["None", 22, 23, "None"]
    }

And also another dataframe where I'm going to take the None values from data2 and put them in data1. Basically I think I need to iterate over the columns of dataframes 1 and 2 based on dates (since they have the same products) but I don't know how to do that properly. The final dataframe would look something like this:

    data4 = {
        "product": ['A', "B", "C", "D"],
        "2022-01": [1, "None", 3, 4],
        "2022-02": [5, 6, "None", 8],
        "2022-03": ["None", 10, 11, "None"]
    }

A: You can use pandas.DataFrame.mask to get directly the output you're looking for:

    df1 = pd.DataFrame(data1)
    df2 = pd.DataFrame(data2)

    out = df1.mask(df2.replace("None", None).isna())
    #out = df1.mask(df2.eq("None")) --- another alternative

    # Output :
    print(out)

      product  2022-01  2022-02  2022-03
    0       A      1.0      5.0      NaN
    1       B      NaN      6.0     10.0
    2       C      3.0      NaN     11.0
    3       D      4.0      8.0      NaN
[ "You can use pandas.DataFrame.mask to get directly the output you're looking for:\ndf1= pd.DataFrame(data1)\ndf2= pd.DataFrame(data2)\n\nout = df1.mask(df2.replace(\"None\", None).isna())\n#out = df1.mask(df2.eq(\"None\")) --- another alternative\n\n# Output :\nprint(out)\n\n product 2022-01 2022-02 2022-03\n0 A 1.0 5.0 NaN\n1 B NaN 6.0 10.0\n2 C 3.0 NaN 11.0\n3 D 4.0 8.0 NaN\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_3.x" ]
stackoverflow_0074503228_dataframe_pandas_python_python_3.x.txt
Q: How can I split Flask search engine results into different pages? I've created a search engine using Flask that returns search results from a Wikipedia corpus generated from articles relating to the topic of health. Some queries return hundreds of results, so I would like to add a feature that splits the results up into multiple pages. Below is the index.html code that generates the webpage: {% extends "base.html" %} {% block title %}Search Page{% endblock %} {% block contents %} <div class="container"> <div class="row"> <div class="col-lg-12"> <div class="search-result-box card-box"> <div class="row"> <div class="col-md-8 offset-md-2"> <div class="pt-3 pb-4"> <div class="search-form"> <form action="#" method="POST"> <div class="input-group"> <input type="text" name="msg" class="form-control input-lg"> <div class="input-group-btn"> <button class="btn btn-primary" type="submit">Search</button> </div> </div> </form> </div> <div class="mt-4 text-center"><h4>Search Results For {{user_query}}</h4></div> </div> </div> </div> <!-- end row --> <ul class="nav nav-tabs tabs-bordered"> <li class="nav-item"><a href="#home" data-toggle="tab" aria-expanded="true" class="nav-link active">All results <span class="badge badge-success ml-1">{{search_results_list|length}}</span></a></li> </ul> <div class="tab-content"> <div class="tab-pane active" id="home"> <div class="row"> <div class="col-md-12"> <div class="search-item"> {% if search_results_list|length > 0 %} {% for r in search_results_list %} <div class="font-13 text-success mb-3"><a href='{{r[0]}}' target="_blank">{{r[0]}}</a></div> <div class="font-13 text-success mb-3"><p target="_blank">{{r[1]}}</p></div> {% endfor %} {% else %} <p class="mt-4 text-center">No search result</P> {% endif %} </div> <ul class="pagination justify-content-end pagination-split mt-0"> <li class="page-item"><a class="page-link" href="#" aria-label="Previous"><span aria-hidden="true">«</span> <span class="sr-only">Previous</span></a></li> <li class="page-item active"><a class="page-link" href="#">1</a></li> <li class="page-item"><a class="page-link" href="#" aria-label="Next"><span aria-hidden="true">»</span> <span class="sr-only">Next</span></a></li> </ul> <div class="clearfix"></div> </div> </div> </div> <!-- end All results tab --> </div> </div> </div> </div> <!-- end row --> </div> <!-- Footer --> <footer id="main-footer" class="pt-2 py-4 bg-dark text-white text-center"> Copyright &copy; <span class="year"></span> Team Dream </footer> <!-- container --> {% endblock %} I don't have a lot of experience creating web pages with HTML so I'm not totally sure how to add this feature. 
Also here is the Python code that generates the app: from flask import Flask, render_template, request from search_engine import query_prep, OkapiBM25 import pickle app = Flask(__name__) @app.route('/') def results(): return render_template('index.html') @app.route('/', methods=['POST']) def process_res(): with open("inv_index.pickle", "rb") as file: inv_ind = pickle.load(file) user_search_query = request.form['msg'] queries = {'q': query_prep(user_search_query)} ranking = OkapiBM25(inv_ind, queries)['q'] seen = set() newRes= [] myDict = pickle.load(open('text_summaries.pickle','rb')) for r in ranking: newRes.append((r[1], myDict[r[1]])) return render_template('index.html', search_results_list = newRes, user_query=user_search_query) if __name__ == "__main__": app.run(debug=True) The code opens a pickle file with the inverted index of the corpus and then runs the query against the index, ranks the results, and returns them. There are also text summaries of each Wikipedia article that appear below each search result. Here is the deployed version of the web page if you want to play around with it and see what I'm trying to do: https://searchenginecapstone.herokuapp.com/ If you search "health," you'll see that there are 442 results on one page. I want to adapt the code (and I think this would be done solely in the HTML portion--but not totally sure) to split the results into chunks of 10-20. There already is a button at the bottom right of the page for Pages, but it currently doesn't do anything. I appreciate any help or ideas you can offer. Let me know if there is anything else you need to see from the coding side to answer this question. I see that Flask has native support for pagination with the paginate() method, so if I modify what's returned in render_template() with the following: from flask_paginate import Pagination, get_page_parameter def process_res(): with open("inv_index.pickle", "rb") as file: inv_ind = pickle.load(file) user_search_query = request.form['msg'] queries = {'q': query_prep(user_search_query)} ranking = OkapiBM25(inv_ind, queries)['q'] seen = set() newRes= [] myDict = pickle.load(open('text_summaries.pickle','rb')) for r in ranking: newRes.append((r[1], myDict[r[1]])) page = request.args.get(get_page_parameter(), type=int, default=1) pagination = Pagination(page=page, total=len(newRes), search=user_search_query, record_name='Search Results') return render_template('index.html', search_results_list = newRes, user_query=user_search_query, Pagination = pagination) The page seems to work (without the pagination appearing), so I think I have to update the index file to reflect the pagination changes but I don't know how. A: If you are using the Flask-SQLAlchemy extension, you can use the paginate() method to split the search engine results into different pages.
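A sketch (not confirmed against the deployed app) of the two pieces the asker's flask-paginate attempt is missing: slicing the result list to the current page, and rendering the widget. per_page, css_framework and pagination.links are real flask-paginate features; PER_PAGE is a made-up constant. The tail of process_res might become:
from flask_paginate import Pagination, get_page_parameter

PER_PAGE = 20  # hypothetical page size

page = request.args.get(get_page_parameter(), type=int, default=1)
start = (page - 1) * PER_PAGE
pagination = Pagination(page=page, per_page=PER_PAGE, total=len(newRes),
                        css_framework='bootstrap4', record_name='results')
return render_template('index.html',
                       search_results_list=newRes[start:start + PER_PAGE],
                       user_query=user_search_query,
                       pagination=pagination)
In index.html the hard-coded <ul class="pagination"> block would then be replaced by {{ pagination.links }}. One caveat: the query arrives via POST, while the page links issue plain GETs, so the search term would also need to be carried in the URL or session for pages beyond the first to work.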
How can I split Flask search engine results into different pages?
I've created a search engine using Flask that returns search results from a Wikipedia corpus generated from articles relating to the topic of health. Some queries return hundreds of results, so I would like to add a feature that splits the results up into multiple pages. Below is the index.html code that generates the webpage: {% extends "base.html" %} {% block title %}Search Page{% endblock %} {% block contents %} <div class="container"> <div class="row"> <div class="col-lg-12"> <div class="search-result-box card-box"> <div class="row"> <div class="col-md-8 offset-md-2"> <div class="pt-3 pb-4"> <div class="search-form"> <form action="#" method="POST"> <div class="input-group"> <input type="text" name="msg" class="form-control input-lg"> <div class="input-group-btn"> <button class="btn btn-primary" type="submit">Search</button> </div> </div> </form> </div> <div class="mt-4 text-center"><h4>Search Results For {{user_query}}</h4></div> </div> </div> </div> <!-- end row --> <ul class="nav nav-tabs tabs-bordered"> <li class="nav-item"><a href="#home" data-toggle="tab" aria-expanded="true" class="nav-link active">All results <span class="badge badge-success ml-1">{{search_results_list|length}}</span></a></li> </ul> <div class="tab-content"> <div class="tab-pane active" id="home"> <div class="row"> <div class="col-md-12"> <div class="search-item"> {% if search_results_list|length > 0 %} {% for r in search_results_list %} <div class="font-13 text-success mb-3"><a href='{{r[0]}}' target="_blank">{{r[0]}}</a></div> <div class="font-13 text-success mb-3"><p target="_blank">{{r[1]}}</p></div> {% endfor %} {% else %} <p class="mt-4 text-center">No search result</P> {% endif %} </div> <ul class="pagination justify-content-end pagination-split mt-0"> <li class="page-item"><a class="page-link" href="#" aria-label="Previous"><span aria-hidden="true">«</span> <span class="sr-only">Previous</span></a></li> <li class="page-item active"><a class="page-link" href="#">1</a></li> <li class="page-item"><a class="page-link" href="#" aria-label="Next"><span aria-hidden="true">»</span> <span class="sr-only">Next</span></a></li> </ul> <div class="clearfix"></div> </div> </div> </div> <!-- end All results tab --> </div> </div> </div> </div> <!-- end row --> </div> <!-- Footer --> <footer id="main-footer" class="pt-2 py-4 bg-dark text-white text-center"> Copyright &copy; <span class="year"></span> Team Dream </footer> <!-- container --> {% endblock %} I don't have a lot of experience creating web pages with HTML so I'm not totally sure how to add this feature. Also here is the Python code that generates the app: from flask import Flask, render_template, request from search_engine import query_prep, OkapiBM25 import pickle app = Flask(__name__) @app.route('/') def results(): return render_template('index.html') @app.route('/', methods=['POST']) def process_res(): with open("inv_index.pickle", "rb") as file: inv_ind = pickle.load(file) user_search_query = request.form['msg'] queries = {'q': query_prep(user_search_query)} ranking = OkapiBM25(inv_ind, queries)['q'] seen = set() newRes= [] myDict = pickle.load(open('text_summaries.pickle','rb')) for r in ranking: newRes.append((r[1], myDict[r[1]])) return render_template('index.html', search_results_list = newRes, user_query=user_search_query) if __name__ == "__main__": app.run(debug=True) The code opens a pickle file with the inverted index of the corpus and then runs the query against the index, ranks the results, and returns them. 
There are also text summaries of each Wikipedia article that appear below each search result. Here is the deployed version of the web page if you want to play around with it and see what I'm trying to do: https://searchenginecapstone.herokuapp.com/ If you search "health," you'll see that there are 442 results on one page. I want to adapt the code (and I think this would be done solely in the HTML portion--but not totally sure) to split the results into chunks of 10-20. There already is a button at the bottom right of the page for Pages, but it currently doesn't do anything. I appreciate any help or ideas you can offer. Let me know if there is anything else you need to see from the coding side to answer this question. I see that Flask has native support for pagination with the paginate() method, so if I modify what's returned in render_template() with the following: from flask_paginate import Pagination, get_page_parameter def process_res(): with open("inv_index.pickle", "rb") as file: inv_ind = pickle.load(file) user_search_query = request.form['msg'] queries = {'q': query_prep(user_search_query)} ranking = OkapiBM25(inv_ind, queries)['q'] seen = set() newRes= [] myDict = pickle.load(open('text_summaries.pickle','rb')) for r in ranking: newRes.append((r[1], myDict[r[1]])) page = request.args.get(get_page_parameter(), type=int, default=1) pagination = Pagination(page=page, total=len(newRes), search=user_search_query, record_name='Search Results') return render_template('index.html', search_results_list = newRes, user_query=user_search_query, Pagination = pagination) The page seems to work (without the pagination appearing), so I think I have to update the index file to reflect the pagination changes but I don't know how.
[ "If you are using the Flask-SQLAlchemy extension, you can use the paginate() method to split the search engine results into different pages.\n" ]
[ 0 ]
[]
[]
[ "css", "flask", "heroku", "html", "python" ]
stackoverflow_0074503054_css_flask_heroku_html_python.txt
Q: Is there a quick way to turn a pandas DataFrame into a pretty HTML table? Problem: the output of df.to_html() is a plain html table, which isn't much to look at: Meanwhile, the visual representation of dataframes in the Jupyter Notebook is much nicer, but if there's an easy way to replicate it, I haven't found it. I know it should be possible to generate a more aesthetically-pleasing table by fiddling around with df.style, but before I go off learning CSS, has anyone already written a function to do this? A: Consider my dataframe df df = pd.DataFrame(np.arange(9).reshape(3, 3), list('ABC'), list('XYZ')) df X Y Z A 0 1 2 B 3 4 5 C 6 7 8 I ripped this style off of my jupyter notebook my_style = """background-color: rgba(0, 0, 0, 0); border-bottom-color: rgb(0, 0, 0); border-bottom-style: none; border-bottom-width: 0px; border-collapse: collapse; border-image-outset: 0px; border-image-repeat: stretch; border-image-slice: 100%; border-image-source: none; border-image-width: 1; border-left-color: rgb(0, 0, 0); border-left-style: none; border-left-width: 0px; border-right-color: rgb(0, 0, 0); border-right-style: none; border-right-width: 0px; border-top-color: rgb(0, 0, 0); border-top-style: none; border-top-width: 0px; box-sizing: border-box; color: rgb(0, 0, 0); display: table; font-family: "Helvetica Neue", Helvetica, Arial, sans-serif; font-size: 12px; height: 1675px; line-height: 20px; margin-left: 0px; margin-right: 0px; margin-top: 12px; table-layout: fixed; text-size-adjust: 100%; width: 700px; -webkit-border-horizontal-spacing: 0px; -webkit-border-vertical-spacing: 0px; -webkit-tap-highlight-color: rgba(0, 0, 0, 0);""" I got this from my post def HTML_with_style(df, style=None, random_id=None): from IPython.display import HTML import numpy as np import re df_html = df.to_html() if random_id is None: random_id = 'id%d' % np.random.choice(np.arange(1000000)) if style is None: style = """ <style> table#{random_id} {{color: blue}} </style> """.format(random_id=random_id) else: new_style = [] s = re.sub(r'</?style>', '', style).strip() for line in s.split('\n'): line = line.strip() if not re.match(r'^table', line): line = re.sub(r'^', 'table ', line) new_style.append(line) new_style = ['<style>'] + new_style + ['</style>'] style = re.sub(r'table(#\S+)?', 'table#%s' % random_id, '\n'.join(new_style)) df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html) return HTML(style + df_html) Then I implement HTML_with_style(df, '<style>table {{{}}}</style>'.format(my_style)) You can modify the code to dump the html def HTML_with_style(df, style=None, random_id=None): import numpy as np import re df_html = df.to_html() if random_id is None: random_id = 'id%d' % np.random.choice(np.arange(1000000)) if style is None: style = """ <style> table#{random_id} {{color: blue}} </style> """.format(random_id=random_id) else: new_style = [] s = re.sub(r'</?style>', '', style).strip() for line in s.split('\n'): line = line.strip() if not re.match(r'^table', line): line = re.sub(r'^', 'table ', line) new_style.append(line) new_style = ['<style>'] + new_style + ['</style>'] style = re.sub(r'table(#\S+)?', 'table#%s' % random_id, '\n'.join(new_style)) df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html) return style + df_html And now HTML_with_style(df, '<style>table {{{}}}</style>'.format(my_style)) '<style>\ntable#id850184 {background-color: rgba(0, 0, 0, 0);\ntable#id850184 border-bottom-color: rgb(0, 0, 0);\ntable#id850184 border-bottom-style: none;\ntable#id850184 border-bottom-width: 
0px;\ntable#id850184 border-collapse: collapse;\ntable#id850184 border-image-outset: 0px;\ntable#id850184 border-image-repeat: stretch;\ntable#id850184 border-image-slice: 100%;\ntable#id850184 border-image-source: none;\ntable#id850184 border-image-width: 1;\ntable#id850184 border-left-color: rgb(0, 0, 0);\ntable#id850184 border-left-style: none;\ntable#id850184 border-left-width: 0px;\ntable#id850184 border-right-color: rgb(0, 0, 0);\ntable#id850184 border-right-style: none;\ntable#id850184 border-right-width: 0px;\ntable#id850184 border-top-color: rgb(0, 0, 0);\ntable#id850184 border-top-style: none;\ntable#id850184 border-top-width: 0px;\ntable#id850184 box-sizing: border-box;\ntable#id850184 color: rgb(0, 0, 0);\ntable#id850184 display: table#id850184;\ntable#id850184 font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;\ntable#id850184 font-size: 12px;\ntable#id850184 height: 1675px;\ntable#id850184 line-height: 20px;\ntable#id850184 margin-left: 0px;\ntable#id850184 margin-right: 0px;\ntable#id850184 margin-top: 12px;\ntable#id850184-layout: fixed;\ntable#id850184 text-size-adjust: 100%;\ntable#id850184 width: 700px;\ntable#id850184 -webkit-border-horizontal-spacing: 0px;\ntable#id850184 -webkit-border-vertical-spacing: 0px;\ntable#id850184 -webkit-tap-highlight-color: rgba(0, 0, 0, 0);}\n</style><table id=id850184 border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>X</th>\n <th>Y</th>\n <th>Z</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>A</th>\n <td>0</td>\n <td>1</td>\n <td>2</td>\n </tr>\n <tr>\n <th>B</th>\n <td>3</td>\n <td>4</td>\n <td>5</td>\n </tr>\n <tr>\n <th>C</th>\n <td>6</td>\n <td>7</td>\n <td>8</td>\n </tr>\n </tbody>\n</table>' A: After some research I found the prettiest and easiest solution to be https://pypi.org/project/pretty-html-table/ import pandas as pd from pretty_html_table import build_table df = pd.DataFrame(np.arange(9).reshape(3, 3), list('ABC'), list('XYZ')) html_table_blue_light = build_table(df, 'blue_light') print(html_table_blue_light) with open('styled_table.html', 'w') as f: f.write(html_table_blue_light) A: You could also add the following Raw NBConvert cell at the top of your notebook: <link rel="stylesheet" href="https://cdn.jupyter.org/notebook/5.1.0/style/style.min.css"> NBConvert seems to fail adding these styles into the exported HTML. The line above will add them explicitly. Source A: <link rel="stylesheet" href="https://cdn.jupyter.org/notebook/5.1.0/style/style.min.css"> you need to add class="rendered_html" to <body> otherwise it won't work. A: from pretty_html_table import build_table df = pd.read_excel('df.xlsx') html_table_blue_light = build_table(df, 'blue_light')
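Since pandas 1.3, the built-in Styler can also emit standalone styled HTML via Styler.to_html(), which avoids hand-rolling CSS injection; a minimal sketch with arbitrary colors and padding:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(9).reshape(3, 3), list('ABC'), list('XYZ'))

styled = df.style.set_table_styles([
    # selector/props pairs are plain CSS; these particular values are arbitrary
    {'selector': 'th', 'props': 'background-color: #f2f2f2; padding: 4px 10px;'},
    {'selector': 'td', 'props': 'padding: 4px 10px; border-bottom: 1px solid #ddd;'},
])

with open('styled_table.html', 'w') as f:
    f.write(styled.to_html())  # standalone HTML with an embedded <style> block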
Is there a quick way to turn a pandas DataFrame into a pretty HTML table?
Problem: the output of df.to_html() is a plain html table, which isn't much to look at: Meanwhile, the visual representation of dataframes in the Jupyter Notebook is much nicer, but if there's an easy way to replicate it, I haven't found it. I know it should be possible to generate a more aesthetically-pleasing table by fiddling around with df.style, but before I go off learning CSS, has anyone already written a function to do this?
[ "Consider my dataframe df\ndf = pd.DataFrame(np.arange(9).reshape(3, 3), list('ABC'), list('XYZ'))\n\ndf\n\n X Y Z\nA 0 1 2\nB 3 4 5\nC 6 7 8\n\nI ripped this style off of my jupyter notebook\nmy_style = \"\"\"background-color: rgba(0, 0, 0, 0);\nborder-bottom-color: rgb(0, 0, 0);\nborder-bottom-style: none;\nborder-bottom-width: 0px;\nborder-collapse: collapse;\nborder-image-outset: 0px;\nborder-image-repeat: stretch;\nborder-image-slice: 100%;\nborder-image-source: none;\nborder-image-width: 1;\nborder-left-color: rgb(0, 0, 0);\nborder-left-style: none;\nborder-left-width: 0px;\nborder-right-color: rgb(0, 0, 0);\nborder-right-style: none;\nborder-right-width: 0px;\nborder-top-color: rgb(0, 0, 0);\nborder-top-style: none;\nborder-top-width: 0px;\nbox-sizing: border-box;\ncolor: rgb(0, 0, 0);\ndisplay: table;\nfont-family: \"Helvetica Neue\", Helvetica, Arial, sans-serif;\nfont-size: 12px;\nheight: 1675px;\nline-height: 20px;\nmargin-left: 0px;\nmargin-right: 0px;\nmargin-top: 12px;\ntable-layout: fixed;\ntext-size-adjust: 100%;\nwidth: 700px;\n-webkit-border-horizontal-spacing: 0px;\n-webkit-border-vertical-spacing: 0px;\n-webkit-tap-highlight-color: rgba(0, 0, 0, 0);\"\"\"\n\nI got this from my post\ndef HTML_with_style(df, style=None, random_id=None):\n from IPython.display import HTML\n import numpy as np\n import re\n\n df_html = df.to_html()\n\n if random_id is None:\n random_id = 'id%d' % np.random.choice(np.arange(1000000))\n\n if style is None:\n style = \"\"\"\n <style>\n table#{random_id} {{color: blue}}\n </style>\n \"\"\".format(random_id=random_id)\n else:\n new_style = []\n s = re.sub(r'</?style>', '', style).strip()\n for line in s.split('\\n'):\n line = line.strip()\n if not re.match(r'^table', line):\n line = re.sub(r'^', 'table ', line)\n new_style.append(line)\n new_style = ['<style>'] + new_style + ['</style>']\n\n style = re.sub(r'table(#\\S+)?', 'table#%s' % random_id, '\\n'.join(new_style))\n\n df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html)\n\n return HTML(style + df_html)\n\nThen I implement\nHTML_with_style(df, '<style>table {{{}}}</style>'.format(my_style))\n\n\nYou can modify the code to dump the html\ndef HTML_with_style(df, style=None, random_id=None):\n import numpy as np\n import re\n\n df_html = df.to_html()\n\n if random_id is None:\n random_id = 'id%d' % np.random.choice(np.arange(1000000))\n\n if style is None:\n style = \"\"\"\n <style>\n table#{random_id} {{color: blue}}\n </style>\n \"\"\".format(random_id=random_id)\n else:\n new_style = []\n s = re.sub(r'</?style>', '', style).strip()\n for line in s.split('\\n'):\n line = line.strip()\n if not re.match(r'^table', line):\n line = re.sub(r'^', 'table ', line)\n new_style.append(line)\n new_style = ['<style>'] + new_style + ['</style>']\n\n style = re.sub(r'table(#\\S+)?', 'table#%s' % random_id, '\\n'.join(new_style))\n\n df_html = re.sub(r'<table', r'<table id=%s ' % random_id, df_html)\n\n return style + df_html\n\nAnd now\nHTML_with_style(df, '<style>table {{{}}}</style>'.format(my_style))\n\n'<style>\\ntable#id850184 {background-color: rgba(0, 0, 0, 0);\\ntable#id850184 border-bottom-color: rgb(0, 0, 0);\\ntable#id850184 border-bottom-style: none;\\ntable#id850184 border-bottom-width: 0px;\\ntable#id850184 border-collapse: collapse;\\ntable#id850184 border-image-outset: 0px;\\ntable#id850184 border-image-repeat: stretch;\\ntable#id850184 border-image-slice: 100%;\\ntable#id850184 border-image-source: none;\\ntable#id850184 border-image-width: 1;\\ntable#id850184 
border-left-color: rgb(0, 0, 0);\\ntable#id850184 border-left-style: none;\\ntable#id850184 border-left-width: 0px;\\ntable#id850184 border-right-color: rgb(0, 0, 0);\\ntable#id850184 border-right-style: none;\\ntable#id850184 border-right-width: 0px;\\ntable#id850184 border-top-color: rgb(0, 0, 0);\\ntable#id850184 border-top-style: none;\\ntable#id850184 border-top-width: 0px;\\ntable#id850184 box-sizing: border-box;\\ntable#id850184 color: rgb(0, 0, 0);\\ntable#id850184 display: table#id850184;\\ntable#id850184 font-family: \"Helvetica Neue\", Helvetica, Arial, sans-serif;\\ntable#id850184 font-size: 12px;\\ntable#id850184 height: 1675px;\\ntable#id850184 line-height: 20px;\\ntable#id850184 margin-left: 0px;\\ntable#id850184 margin-right: 0px;\\ntable#id850184 margin-top: 12px;\\ntable#id850184-layout: fixed;\\ntable#id850184 text-size-adjust: 100%;\\ntable#id850184 width: 700px;\\ntable#id850184 -webkit-border-horizontal-spacing: 0px;\\ntable#id850184 -webkit-border-vertical-spacing: 0px;\\ntable#id850184 -webkit-tap-highlight-color: rgba(0, 0, 0, 0);}\\n</style><table id=id850184 border=\"1\" class=\"dataframe\">\\n <thead>\\n <tr style=\"text-align: right;\">\\n <th></th>\\n <th>X</th>\\n <th>Y</th>\\n <th>Z</th>\\n </tr>\\n </thead>\\n <tbody>\\n <tr>\\n <th>A</th>\\n <td>0</td>\\n <td>1</td>\\n <td>2</td>\\n </tr>\\n <tr>\\n <th>B</th>\\n <td>3</td>\\n <td>4</td>\\n <td>5</td>\\n </tr>\\n <tr>\\n <th>C</th>\\n <td>6</td>\\n <td>7</td>\\n <td>8</td>\\n </tr>\\n </tbody>\\n</table>'\n\n", "After some research I found the prettiest and easiest solution to be https://pypi.org/project/pretty-html-table/\nimport pandas as pd\nfrom pretty_html_table import build_table\ndf = pd.DataFrame(np.arange(9).reshape(3, 3), list('ABC'), list('XYZ'))\nhtml_table_blue_light = build_table(df, 'blue_light')\nprint(html_table_blue_light)\nwith open('styled_table.html', 'w') as f:\n f.write(html_table_blue_light)\n\n", "You could also add the following Raw NBConvert cell at the top of your notebook:\n<link rel=\"stylesheet\" href=\"https://cdn.jupyter.org/notebook/5.1.0/style/style.min.css\">\n\nNBConvert seems to fail adding these styles into the exported HTML. The line above will add them explicitly.\nSource\n", "<link rel=\"stylesheet\" href=\"https://cdn.jupyter.org/notebook/5.1.0/style/style.min.css\">\nyou need to add class=\"rendered_html\" to <body> otherwise it won't work.\n", "from pretty_html_table import build_table\ndf = pd.read_excel('df.xlsx')\nhtml_table_blue_light = build_table(df, 'blue_light')\n\n" ]
[ 13, 8, 2, 2, 0 ]
[]
[]
[ "css", "html", "jupyter", "pandas", "python" ]
stackoverflow_0045422378_css_html_jupyter_pandas_python.txt
Q: S3 PreSigned URL is cut when sent in an email I have a script which generates an S3 PreSigned URL and sends it in an email. The script works fine, but when the email is sent, it adds a new-line to the URL, which breaks it and makes it unusable in the email. The only packages installed: boto3 Jinja2 The script: import boto3 from botocore.config import Config from jinja2 import Environment, FileSystemLoader from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText import smtplib # AWS my_config = Config( region_name = 'eu-west-1' ) s3 = boto3.client('s3', config=my_config) bucket = "name of bucket" # Jinja2 loader = FileSystemLoader('templates') env = Environment(loader=loader) email_template = env.get_template('test_template.html') def create_presigned_url(bucket, object_name, expiration=259200): response = s3.generate_presigned_url('get_object', Params={'Bucket': bucket, 'Key': object_name}, ExpiresIn=expiration) return response def sendEmail(download_link): toEmail = 'me@email.com' msg = MIMEMultipart('alternative') title = 'Test email' sender = 'me@email.com' rendered_template = MIMEText(email_template.render({'download_link':download_link}), 'html') msg['Subject'] = title msg['From'] = sender msg['To'] = toEmail receivers = toEmail msg.attach(rendered_template) try: smtpObj = smtplib.SMTP('mail.server.com', 25) smtpObj.sendmail(sender, receivers, msg.as_string()) print ("Successfully sent email") smtpObj.quit() except Exception as e: print (e, "Error: unable to send email") if __name__ == "__main__": selectedFile = 'file.txt' # Download link downloadURL = create_presigned_url(bucket, selectedFile) # print(downloadURL) # Send email sendEmail(downloadURL) Results When I run the script, for some reason I get a newline somewhere inside of this long URL, which breaks the URL: Here's the source from Outlook: <meta http-equiv="Content-Type" content="text/html; charset=us-ascii">Download link: https://redacted/9%2FJOUw%3D&amp;x-amz-security-token=IQoJb3JpZ2luX2VjEH0aCWV1LXdlc3QtMSJHMEUCIEBMYHD9wKhOfrR00jTv7RIcsRe3NbOU31QniJpEdps8AiEAi377qRmvQQXb5dXhRGJcXulFhRunGTSd0GRyGXHR2kMqzAQIdhAEGgwzNTgyODkxNTgxMTIiDH3bqAZMxVyKH3f6xyqpBL40WhPQMShoV8x1epn85Ml6qQ8Y1xdHe16xyoMWKqylbLGrndMFtYyOgs6LAlDvGJPrcF9xymGf8BqGsGHwCuWGdEcisxvwR%2FUoigjHBXP55fHpF%2FXnVupCRYDIVA5N%2BVOKW5%2BcljVN9KC3RKKvEeUncTGnXIaW5UHNAPiFSrgbbj9X%2FyBptkFmj5f4x2Zblm8crQS0rMTveCuoki3E06NO%2FKiDNiJQpF1vVphb%2F0spIR3CUxSx9HJHjvRBWTWQn9bmmT8rhp0lx%2Bzx9RLlpmE6hRRF6KBpNW%2B86y3EB%2BtMxVBuEhC5M1rCyjou6efK%2FA96wuwBN%2FmjD663vyZipiOGrj4yOFIMPklBu4L1SnfkxhZN8%2BNWXzwc%2B%2B5%2FNfL%2BVzyFpWS7TbGIM4A9TEvL3bPlgafvIl%2Fi24MOrN47UshpdpHGAjG20PBr0cbi75G7D%2B3UoSn%2Bzp0hZkAEACnwWtWtzEpVWbwatx%2FL1T8XF43o8OiKWqCfVxBjoZSc1itxRDOUqonYbCGY2Y0NlkXpvpHBZMcg7530dIFRBBxhTZo4RVXqkymTM4hEvDUw74R%2BDovr%2F%2BG5ji52Wpcng 954ESTpzMjOtuBXKcPtmEWTqx4au99ZP8lxbqKjq3BO%2FJLqrzTCPSEs6CTv7YbtzUqQ0r%2BkFyAU2RnpTTcYPJ5SD8ytlb4qUHb5RhEcn3bbJ5fsIRx%2B6q3LrhWkorDNKp5jh6oth1roRxXQM0swgN%2BzmwY6qQGnjWLAgUSUB9yf3heEdiFZo4DnC7ipW6BgsnkoeZJcPz5Ysx5PG4kzelCP89AsXQGD%2BtFqweusgWJVLo3dfyK3iLJ5myohn7mjSf1YVE%2B5CGlajc2HZl%2BoUOhI5gMMxpFXtpIL6jgTyY5r6ZwCKZ9g1afHO1kUF4VYir2M2BWYHTcB%2Bu8TANzoc15RJih8XmE%2FAWd%2FMQM7SQOQxsbmCiRSv5AeYMuok%2FSw&amp;Expires=1668345190 I tried: using | safe inside of my Jinja2 template. I tried using the href HTML tag, no dice. I don't know what else I can check and have no idea why it's happening. 
People mentioned this might be the cause: https://www.rfc-editor.org/rfc/rfc2822#section-2.1.1 A: It looks like the problem is related to max line length defined in the "Internet Message Format" RFC document 5322 2.1.1. Line Length Limits There are two limits that this standard places on the number of characters in a line. Each line of characters MUST be no more than 998 characters, and SHOULD be no more than 78 characters, excluding the CRLF. ... The more conservative 78 character recommendation is to accommodate the many implementations of user interfaces that display these messages which may truncate, or disastrously wrap, the display of more than 78 characters per line, in spite of the fact that such implementations are non-conformant to the intent of this specification (and that of [RFC2821] if they actually cause information to be lost). Again, even though this limitation is put on messages, it is encumbant upon implementations which display messages Your line (with redacted information) has 961 characters, so adding those redacted information I assume you go over that limit of 998 characters. Now, my thinking is that your local SMTP server is not splitting the line but the one on Amazon server may do that. I assume that reducing the line length is not really an option :) I would try with changing the markup, because now Outlook is looking for links and highlighting them in your message. I would try to add html markup for that with new line character inside, which hopefully will be ignored by the email client. <a href="https://redacted/9%2FJOUw%3D&amp;x-amz...... ........">Download link</a> A: try to change your jinja template to get something like <meta http-equiv="Content-Type" content="text/html; charset=us-ascii"> Download link: <a href="https://redacted/9%2FJOUw%3D&amp;x-amz-security-token=IQoJb3JpZ2luX2VjEH0aCWV1LXdlc3QtMSJHMEUCIEBMYHD9wKhOfrR00jTv7RIcsRe3NbOU31QniJpEdps8AiEAi377qRmvQQXb5dXhRGJcXulFhRunGTSd0GRyGXHR2kMqzAQIdhAEGgwzNTgyODkxNTgxMTIiDH3bqAZMxVyKH3f6xyqpBL40WhPQMShoV8x1epn85Ml6qQ8Y1xdHe16xyoMWKqylbLGrndMFtYyOgs6LAlDvGJPrcF9xymGf8BqGsGHwCuWGdEcisxvwR%2FUoigjHBXP55fHpF%2FXnVupCRYDIVA5N%2BVOKW5%2BcljVN9KC3RKKvEeUncTGnXIaW5UHNAPiFSrgbbj9X%2FyBptkFmj5f4x2Zblm8crQS0rMTveCuoki3E06NO%2FKiDNiJQpF1vVphb%2F0spIR3CUxSx9HJHjvRBWTWQn9bmmT8rhp0lx%2Bzx9RLlpmE6hRRF6KBpNW%2B86y3EB%2BtMxVBuEhC5M1rCyjou6efK%2FA96wuwBN%2FmjD663vyZipiOGrj4yOFIMPklBu4L1SnfkxhZN8%2BNWXzwc%2B%2B5%2FNfL%2BVzyFpWS7TbGIM4A9TEvL3bPlgafvIl%2Fi24MOrN47UshpdpHGAjG20PBr0cbi75G7D%2B3UoSn%2Bzp0hZkAEACnwWtWtzEpVWbwatx%2FL1T8XF43o8OiKWqCfVxBjoZSc1itxRDOUqonYbCGY2Y0NlkXpvpHBZMcg7530dIFRBBxhTZo4RVXqkymTM4hEvDUw74R%2BDovr%2F%2BG5ji52Wpcng954ESTpzMjOtuBXKcPtmEWTqx4au99ZP8lxbqKjq3BO%2FJLqrzTCPSEs6CTv7YbtzUqQ0r%2BkFyAU2RnpTTcYPJ5SD8ytlb4qUHb5RhEcn3bbJ5fsIRx%2B6q3LrhWkorDNKp5jh6oth1roRxXQM0swgN%2BzmwY6qQGnjWLAgUSUB9yf3heEdiFZo4DnC7ipW6BgsnkoeZJcPz5Ysx5PG4kzelCP89AsXQGD%2BtFqweusgWJVLo3dfyK3iLJ5myohn7mjSf1YVE%2B5CGlajc2HZl%2BoUOhI5gMMxpFXtpIL6jgTyY5r6ZwCKZ9g1afHO1kUF4VYir2M2BWYHTcB%2Bu8TANzoc15RJih8XmE%2FAWd%2FMQM7SQOQxsbmCiRSv5AeYMuok%2FSw&amp;Expires=1668345190">Download link</a> as you didn't add <a href= your link is interpretated as long text => and in long text editor can add line breaks
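A related fix at the MIME layer (an assumption, not something confirmed in the thread): giving MIMEText an explicit charset switches the transfer encoding to base64, so no raw line on the wire exceeds RFC 5322's 998-character limit and intermediate servers have nothing to fold:
from email.mime.text import MIMEText

# 'utf-8' forces Content-Transfer-Encoding: base64; the encoded payload is
# wrapped safely instead of the raw HTML (and its long URL) being folded.
rendered_template = MIMEText(
    email_template.render({'download_link': download_link}),
    'html',
    'utf-8',
)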
S3 PreSigned URL is cut when sent in an email
I have a script which generates an S3 PreSigned URL and sends it in an email. The script works fine, but when the email is sent, it adds a new-line to the URL, which breaks it and makes it unusable in the email. The only packages installed: boto3 Jinja2 The script: import boto3 from botocore.config import Config from jinja2 import Environment, FileSystemLoader from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText import smtplib # AWS my_config = Config( region_name = 'eu-west-1' ) s3 = boto3.client('s3', config=my_config) bucket = "name of bucket" # Jinja2 loader = FileSystemLoader('templates') env = Environment(loader=loader) email_template = env.get_template('test_template.html') def create_presigned_url(bucket, object_name, expiration=259200): response = s3.generate_presigned_url('get_object', Params={'Bucket': bucket, 'Key': object_name}, ExpiresIn=expiration) return response def sendEmail(download_link): toEmail = 'me@email.com' msg = MIMEMultipart('alternative') title = 'Test email' sender = 'me@email.com' rendered_template = MIMEText(email_template.render({'download_link':download_link}), 'html') msg['Subject'] = title msg['From'] = sender msg['To'] = toEmail receivers = toEmail msg.attach(rendered_template) try: smtpObj = smtplib.SMTP('mail.server.com', 25) smtpObj.sendmail(sender, receivers, msg.as_string()) print ("Successfully sent email") smtpObj.quit() except Exception as e: print (e, "Error: unable to send email") if __name__ == "__main__": selectedFile = 'file.txt' # Download link downloadURL = create_presigned_url(bucket, selectedFile) # print(downloadURL) # Send email sendEmail(downloadURL) Results When I run the script, for some reason I get a newline somewhere inside of this long URL, which breaks the URL: Here's the source from Outlook: <meta http-equiv="Content-Type" content="text/html; charset=us-ascii">Download link: https://redacted/9%2FJOUw%3D&amp;x-amz-security-token=IQoJb3JpZ2luX2VjEH0aCWV1LXdlc3QtMSJHMEUCIEBMYHD9wKhOfrR00jTv7RIcsRe3NbOU31QniJpEdps8AiEAi377qRmvQQXb5dXhRGJcXulFhRunGTSd0GRyGXHR2kMqzAQIdhAEGgwzNTgyODkxNTgxMTIiDH3bqAZMxVyKH3f6xyqpBL40WhPQMShoV8x1epn85Ml6qQ8Y1xdHe16xyoMWKqylbLGrndMFtYyOgs6LAlDvGJPrcF9xymGf8BqGsGHwCuWGdEcisxvwR%2FUoigjHBXP55fHpF%2FXnVupCRYDIVA5N%2BVOKW5%2BcljVN9KC3RKKvEeUncTGnXIaW5UHNAPiFSrgbbj9X%2FyBptkFmj5f4x2Zblm8crQS0rMTveCuoki3E06NO%2FKiDNiJQpF1vVphb%2F0spIR3CUxSx9HJHjvRBWTWQn9bmmT8rhp0lx%2Bzx9RLlpmE6hRRF6KBpNW%2B86y3EB%2BtMxVBuEhC5M1rCyjou6efK%2FA96wuwBN%2FmjD663vyZipiOGrj4yOFIMPklBu4L1SnfkxhZN8%2BNWXzwc%2B%2B5%2FNfL%2BVzyFpWS7TbGIM4A9TEvL3bPlgafvIl%2Fi24MOrN47UshpdpHGAjG20PBr0cbi75G7D%2B3UoSn%2Bzp0hZkAEACnwWtWtzEpVWbwatx%2FL1T8XF43o8OiKWqCfVxBjoZSc1itxRDOUqonYbCGY2Y0NlkXpvpHBZMcg7530dIFRBBxhTZo4RVXqkymTM4hEvDUw74R%2BDovr%2F%2BG5ji52Wpcng 954ESTpzMjOtuBXKcPtmEWTqx4au99ZP8lxbqKjq3BO%2FJLqrzTCPSEs6CTv7YbtzUqQ0r%2BkFyAU2RnpTTcYPJ5SD8ytlb4qUHb5RhEcn3bbJ5fsIRx%2B6q3LrhWkorDNKp5jh6oth1roRxXQM0swgN%2BzmwY6qQGnjWLAgUSUB9yf3heEdiFZo4DnC7ipW6BgsnkoeZJcPz5Ysx5PG4kzelCP89AsXQGD%2BtFqweusgWJVLo3dfyK3iLJ5myohn7mjSf1YVE%2B5CGlajc2HZl%2BoUOhI5gMMxpFXtpIL6jgTyY5r6ZwCKZ9g1afHO1kUF4VYir2M2BWYHTcB%2Bu8TANzoc15RJih8XmE%2FAWd%2FMQM7SQOQxsbmCiRSv5AeYMuok%2FSw&amp;Expires=1668345190 I tried: using | safe inside of my Jinja2 template. I tried using the href HTML tag, no dice. I don't know what else I can check and have no idea why it's happening. People mentioned this might be the cause: https://www.rfc-editor.org/rfc/rfc2822#section-2.1.1
[ "It looks like the problem is related to max line length defined in the \"Internet Message Format\" RFC document 5322\n\n2.1.1. Line Length Limits\nThere are two limits that this standard places on the number of characters in a line. Each line of characters MUST be no more than 998 characters, and SHOULD be no more than 78 characters, excluding the CRLF.\n...\nThe more conservative 78 character recommendation is to accommodate the many implementations of user interfaces that display these messages which may truncate, or disastrously wrap, the display of more than 78 characters per line, in spite of the fact that such implementations are non-conformant to the intent of this specification (and that of [RFC2821] if they actually cause information to be lost). Again, even though this limitation is put on messages, it is encumbant upon implementations which display messages\n\nYour line (with redacted information) has 961 characters, so adding those redacted information I assume you go over that limit of 998 characters.\nNow, my thinking is that your local SMTP server is not splitting the line but the one on Amazon server may do that.\nI assume that reducing the line length is not really an option :)\nI would try with changing the markup, because now Outlook is looking for links and highlighting them in your message. I would try to add html markup for that with new line character inside, which hopefully will be ignored by the email client.\n<a href=\"https://redacted/9%2FJOUw%3D&amp;x-amz......\n........\">Download link</a>\n\n", "try to change your jinja template to get something like\n<meta http-equiv=\"Content-Type\" content=\"text/html; charset=us-ascii\">\nDownload link: \n<a href=\"https://redacted/9%2FJOUw%3D&amp;x-amz-security-token=IQoJb3JpZ2luX2VjEH0aCWV1LXdlc3QtMSJHMEUCIEBMYHD9wKhOfrR00jTv7RIcsRe3NbOU31QniJpEdps8AiEAi377qRmvQQXb5dXhRGJcXulFhRunGTSd0GRyGXHR2kMqzAQIdhAEGgwzNTgyODkxNTgxMTIiDH3bqAZMxVyKH3f6xyqpBL40WhPQMShoV8x1epn85Ml6qQ8Y1xdHe16xyoMWKqylbLGrndMFtYyOgs6LAlDvGJPrcF9xymGf8BqGsGHwCuWGdEcisxvwR%2FUoigjHBXP55fHpF%2FXnVupCRYDIVA5N%2BVOKW5%2BcljVN9KC3RKKvEeUncTGnXIaW5UHNAPiFSrgbbj9X%2FyBptkFmj5f4x2Zblm8crQS0rMTveCuoki3E06NO%2FKiDNiJQpF1vVphb%2F0spIR3CUxSx9HJHjvRBWTWQn9bmmT8rhp0lx%2Bzx9RLlpmE6hRRF6KBpNW%2B86y3EB%2BtMxVBuEhC5M1rCyjou6efK%2FA96wuwBN%2FmjD663vyZipiOGrj4yOFIMPklBu4L1SnfkxhZN8%2BNWXzwc%2B%2B5%2FNfL%2BVzyFpWS7TbGIM4A9TEvL3bPlgafvIl%2Fi24MOrN47UshpdpHGAjG20PBr0cbi75G7D%2B3UoSn%2Bzp0hZkAEACnwWtWtzEpVWbwatx%2FL1T8XF43o8OiKWqCfVxBjoZSc1itxRDOUqonYbCGY2Y0NlkXpvpHBZMcg7530dIFRBBxhTZo4RVXqkymTM4hEvDUw74R%2BDovr%2F%2BG5ji52Wpcng954ESTpzMjOtuBXKcPtmEWTqx4au99ZP8lxbqKjq3BO%2FJLqrzTCPSEs6CTv7YbtzUqQ0r%2BkFyAU2RnpTTcYPJ5SD8ytlb4qUHb5RhEcn3bbJ5fsIRx%2B6q3LrhWkorDNKp5jh6oth1roRxXQM0swgN%2BzmwY6qQGnjWLAgUSUB9yf3heEdiFZo4DnC7ipW6BgsnkoeZJcPz5Ysx5PG4kzelCP89AsXQGD%2BtFqweusgWJVLo3dfyK3iLJ5myohn7mjSf1YVE%2B5CGlajc2HZl%2BoUOhI5gMMxpFXtpIL6jgTyY5r6ZwCKZ9g1afHO1kUF4VYir2M2BWYHTcB%2Bu8TANzoc15RJih8XmE%2FAWd%2FMQM7SQOQxsbmCiRSv5AeYMuok%2FSw&amp;Expires=1668345190\">Download link</a>\n\nas you didn't add <a href= your link is interpretated as long text => and in long text editor can add line breaks\n" ]
[ 0, 0 ]
[]
[]
[ "boto3", "jinja2", "python", "python_3.x" ]
stackoverflow_0074389781_boto3_jinja2_python_python_3.x.txt
Q: Confusion between commands.Bot and discord.Client | Which one should I use? Whenever you look at YouTube tutorials or code from this website there is a real variation. Some developers use client = discord.Client(intents=intents) while the others use bot = commands.Bot(command_prefix="something", intents=intents). Now I know slightly about the difference, but I get errors from different places in my code when I use either of them, and it's confusing. Especially since there have been a few changes over the years in discord.py, it is hard to find the real difference. I tried sticking to discord.Client, then I found that there are more features in commands.Bot. Then I found errors when using commands.Bot. An example of this is: When I try to use commands.Bot client = commands.Bot(command_prefix=">",intents=intents) async def load(): for filename in os.listdir("./Cogs"): if filename.endswith(".py"): client.load_extension(f"Cogs.{filename[:-3]}") The above doesn't give any response from my Cogs and also says RuntimeWarning: coroutine 'BotBase.load_extension' was never awaited client.load_extension(f"Cogs.{filename[:-3]}") RuntimeWarning: Enable tracemalloc to get the object allocation traceback. Then when I try to use discord.Client client = discord.Client(command_prefix=">",intents=intents) async def load(): for filename in os.listdir("./Cogs"): if filename.endswith(".py"): client.load_extension(f"Cogs.{filename[:-3]}") The above also gives me an error: Exception has occurred: AttributeError 'Client' object has no attribute 'load_extension' Which one is better in the long run? What is the exact difference? A: The difference is that commands.Bot provides a lot more functionality (like Commands), which Client doesn't do. Bot is a subclass of Client, so it can do everything that a Client can do, but not the other way around. In the long run you should use the one that you need. If you want to use Bot features then use a Bot. If you don't care about Bot features then use a Client, or you can still use a Bot and just not use the extra features. The first error in your post says coroutine load_extension was never awaited, which tells you exactly what the problem is. load_extension is a coroutine (an async function), and you're not awaiting it. Extensions were made async in 2.0, and you should adapt your code to that. Migration guide explains what to do: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous The second error in your post appears because you're trying to use a Bot feature (extensions) in a Client. It's telling you that a Client doesn't have load_extension(), because it doesn't. Considering you're using out-of-date code, I assume you're following a YouTube tutorial. This is one of the major issues with them. They teach terrible code, and are instantly outdated. They also teach you to do one specific thing, and then you won't be able to create any features of your own. Don't follow YouTube tutorials, just read the docs.
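A sketch of the async loading pattern the answer points to, using the discord.py 2.x subclass/setup_hook style (the token and the Cogs path are placeholders from the question):
import os
import discord
from discord.ext import commands

class MyBot(commands.Bot):
    async def setup_hook(self):
        # Runs once before the bot connects; in 2.x load_extension is a
        # coroutine, so it must be awaited rather than called bare.
        for filename in os.listdir("./Cogs"):
            if filename.endswith(".py"):
                await self.load_extension(f"Cogs.{filename[:-3]}")

intents = discord.Intents.default()
bot = MyBot(command_prefix=">", intents=intents)
bot.run("YOUR_TOKEN_HERE")  # placeholder token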
Confusion between commands.Bot and discord.Client | Which one should I use?
Whenever you look at YouTube tutorials or code from this website there is a real variation. Some developers use client = discord.Client(intents=intents) while the others use bot = commands.Bot(command_prefix="something", intents=intents). Now I know slightly about the difference, but I get errors from different places in my code when I use either of them, and it's confusing. Especially since there have been a few changes over the years in discord.py, it is hard to find the real difference. I tried sticking to discord.Client, then I found that there are more features in commands.Bot. Then I found errors when using commands.Bot. An example of this is: When I try to use commands.Bot client = commands.Bot(command_prefix=">",intents=intents) async def load(): for filename in os.listdir("./Cogs"): if filename.endswith(".py"): client.load_extension(f"Cogs.{filename[:-3]}") The above doesn't give any response from my Cogs and also says RuntimeWarning: coroutine 'BotBase.load_extension' was never awaited client.load_extension(f"Cogs.{filename[:-3]}") RuntimeWarning: Enable tracemalloc to get the object allocation traceback. Then when I try to use discord.Client client = discord.Client(command_prefix=">",intents=intents) async def load(): for filename in os.listdir("./Cogs"): if filename.endswith(".py"): client.load_extension(f"Cogs.{filename[:-3]}") The above also gives me an error: Exception has occurred: AttributeError 'Client' object has no attribute 'load_extension' Which one is better in the long run? What is the exact difference?
[ "The difference is that commands.Bot provides a lot more functionality (like Commands), which Client doesn't do. Bot is a subclass of Client, so it can do everything that a Client can do, but not the other way around.\nIn the long run you should use the one that you need. If you want to use Bot features then use a Bot. If you don't care about Bot features then use a Client, or you can still use a Bot and just not use the extra features.\nThe first error in your post says coroutine load_extension was never awaited, which tells you exactly what the problem is. load_extension is a coroutine (an async function), and you're not awaiting it. Extensions were made async in 2.0, and you should adapt your code to that. Migration guide explains what to do: https://discordpy.readthedocs.io/en/stable/migrating.html#extension-and-cog-loading-unloading-is-now-asynchronous\nThe second error in your post appears because you're trying to use a Bot feature (extensions) in a Client. It's telling you that a Client doesn't have load_extension(), because it doesn't.\nConsidering you're using out-of-date code I assume you're following a YouTube tutorial. This is one of the major issues with them. They teach terrible code, and are instantly outdated. They also teach you to do one specific thing, and then you won't be able to create any features of your own. Don't follow YouTube tutorials, just read the docs.\n" ]
[ 3 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074503328_discord_discord.py_python.txt
Q: How to automatically get user in django admin through form I have a form in my django website where the user requests coins and the information is sent to the admin for me to process. I want to automatically get the user who filled the form without them doing it themselves. Here's the models.py file: class Requestpayment (models.Model): username= models.ForeignKey(User, on_delete= models.CASCADE, null=True) useremail= models.CharField(max_length=100) accountmail= models.CharField(max_length=100) accountphonenumber=models.CharField(max_length=15) coinsrequested=models.ForeignKey(Requestamount, on_delete= models.SET_NULL, null=True) created= models.DateTimeField(auto_now_add=True) def __str__(self): return self.accountmail the forms.py: class Requestpaymentform (ModelForm): class Meta: model = Requestpayment fields = '__all__' and the views.py: @login_required(login_url='login') def redeemcoins (request): form = Requestpaymentform if request.method =='POST': form = Requestpaymentform(request.POST) if form.is_valid(): form = form.save(commit=False) username = request.user form.save() return redirect ('home') I am pretty sure something is wrong but I don't know what it is (I'm very new at Django). Anyway, the form always shows all the users on the website for the current user to pick who they are. redeem coins page I also tried excluding that part of the form, but it didn't work; it just shows up empty in the admin. Thank you. A: You need to assign it to the instance wrapped in the form, so: @login_required(login_url='login') def redeemcoins(request): form = Requestpaymentform() if request.method == 'POST': form = Requestpaymentform(request.POST) if form.is_valid(): form.instance.username = request.user form.save() return redirect('home') # … It makes more sense, however, to name this field user rather than username. In the model you can also make the username field non-editable, such that it does not appear in the form: from django.conf import settings class Requestpayment(models.Model): user = models.ForeignKey( settings.AUTH_USER_MODEL, editable=False, on_delete=models.CASCADE ) # … Note: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation. When you use username= models.ForeignKey(User, on_delete= models.CASCADE, null=True), Django adds a column named username_id in your database, which allows Django to find the User object for a Requestpayment. You can set that username_id field to attach a User to a Requestpayment. You don't need to include the username field in your fields list if you want to set the user in the view (and created, being auto_now_add, is non-editable and can't go in a form either): class Requestpaymentform (ModelForm): class Meta: model = Requestpayment #fields = '__all__' fields = ['useremail', 'accountmail', 'accountphonenumber', 'coinsrequested'] Now do this to set the user in your view: @login_required(login_url='login') def redeemcoins(request): form = Requestpaymentform() if request.method == 'POST': form = Requestpaymentform(request.POST) if form.is_valid(): requestpayment = form.save(commit=False) requestpayment.username_id = request.user.id requestpayment.save() return redirect('home') And it's better to name the field user instead of username, because it holds a User object and not a simple text field. Please excuse my English!
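The form-side half of the fix, as a small sketch (field names follow the asker's models; the import path is an assumption):
from django.forms import ModelForm
from .models import Requestpayment  # assumed app-local import

class Requestpaymentform(ModelForm):
    class Meta:
        model = Requestpayment
        # Hidden from the rendered form; the view fills it in from request.user.
        exclude = ['username']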
How to automatically get user in django admin through form
I have a form in my django website where the user requests coins and the information is sent to the admin for me to process. I want to automatically get the user who filled the form without them doing it themselves. Here's the models.py file: class Requestpayment (models.Model): username= models.ForeignKey(User, on_delete= models.CASCADE, null=True) useremail= models.CharField(max_length=100) accountmail= models.CharField(max_length=100) accountphonenumber=models.CharField(max_length=15) coinsrequested=models.ForeignKey(Requestamount, on_delete= models.SET_NULL, null=True) created= models.DateTimeField(auto_now_add=True) def __str__(self): return self.accountmail the forms.py: class Requestpaymentform (ModelForm): class Meta: model = Requestpayment fields = '__all__' and the views.py: @login_required(login_url='login') def redeemcoins (request): form = Requestpaymentform if request.method =='POST': form = Requestpaymentform(request.POST) if form.is_valid(): form = form.save(commit=False) username = request.user form.save() return redirect ('home') I am pretty sure something is wrong but I don't know what it is (I'm very new at Django). Anyway, the form always shows all the users on the website for the current user to pick who they are. redeem coins page I also tried excluding that part of the form, but it didn't work; it just shows up empty in the admin. Thank you.
[ "You need to assign it to the instance wrapped in the form, so:\n@login_required(login_url='login')\ndef redeemcoins(request):\n form = Requestpaymentform()\n if request.method == 'POST':\n form = Requestpaymentform(request.POST)\n if form.is_valid():\n form.instance.username = request.user\n form.save()\n return redirect('home')\n # …\nIt makes more sense however to name this field user than username. In the model you can also make the username field non-editable, such that it does not appear in the form:\nfrom django.conf import settings\n\n\nclass Requestpayment(models.Model):\n user = models.ForeignKey(\n settings.AUTH_USER_MODEL, editable=False, on_delete=models.CASCADE\n )\n # …\n\n\nNote: It is normally better to make use of the settings.AUTH_USER_MODEL [Django-doc] to refer to the user model, than to use the User model [Django-doc] directly. For more information you can see the referencing the User model section of the documentation.\n\n", "When you use username= models.ForeignKey(User, on_delete= models.CASCADE, null=True), Django add a field named user_id in your database which allow django to find User object for Requestpayment.\nYou can use user_id field to add a User object in Requestpayment.\n\nYou don't need to pass username field in your fields list if you want to get user in view.\nclass Requestpaymentform (ModelForm):\n class Meta:\n model = Requestpayment\n\n #fields = '__all__'\n fields = ['useremail',\n 'accountmail',\n 'accountphonenumber',\n 'coinsrequested',\n 'created']\n\n\nNow do this to get user in your view.\n@login_required(login_url='login')\ndef redeemcoins(request):\n form = Requestpaymentform()\n if request.method == 'POST':\n form = Requestpaymentform(request.POST)\n if form.is_valid():\n requestpayment = form.save(commit=False)\n requestpayment.user_id = request.user.id \n requestpayment.save()\n return redirect('home')\n\n\n\nAnd it's great to use user instead username because it's a User object and not a simple field.\nPlease for my English !!!\n" ]
[ 0, 0 ]
[]
[]
[ "django", "django_forms", "python" ]
stackoverflow_0074502951_django_django_forms_python.txt
Q: Increment Integer Field in Django I am creating a simple blogging application and would like users to be able to like a post. In terms of scalability I've decided it would be best to have likes as a separate table made up of pointers to both the user and post. I have managed to enable the post request adding a like to the model, however the likes field in the post model is not incrementing. I've tried using a simple likes += 1 technique in the serializer but that made no changes, and have now used an F() expression but still no changes are being made. I am still fairly new to Django and suspect it may be because I'm trying to update a field on a different model within a CreateAPIView serializer but I'm not sure. This is what I have so far # views.py class LikeView(generics.CreateAPIView): permission_classes = [ permissions.IsAuthenticated, ] queryset = Like.objects.all() serializer_class = LikeSerializer def like(self, request, format=None): serializer = self.serializer_class(data=request.data) if(serializer.is_valid()): user_id = serializer.data.get('user_id') post_id = serializer.data.get('post_id') l = Like(user_id=user_id, post_id=post_id) l.save() # likes field not updating with this post = Post.objects.get(id=post_id) post.likes = F('likes') + 1 post.save() return Response(LikeSerializer(l).data, status=status.HTTP_200_OK) return Response(serializer.errors(), status=status.HTTP_400_BAD_REQUEST) #models.py class Post(models.Model): id = models.CharField(max_length=36, default=generate_unique_id, primary_key=True) title = models.CharField(max_length=50) content = models.TextField() likes = models.IntegerField(default=0, blank=True) pub_date = models.DateTimeField(default=timezone.now) def __str__(self): return self.title class Like(models.Model): user_id = models.ForeignKey(User, related_name='user_id', on_delete=models.CASCADE) post_id = models.ForeignKey(Post, related_name='post_id', on_delete=models.CASCADE) def __str__(self): return "%s %s" % (self.user_id, self.post_id) #serializers.py class LikeSerializer(serializers.ModelSerializer): class Meta: fields= ( 'user_id', 'post_id' ) model = Like Thank you A: An F() expression is resolved by the database, so after save() the in-memory post.likes still holds the unevaluated expression (until you call post.refresh_from_db()). If you just want the attribute updated in Python, increment it directly: post = Post.objects.get(id=post_id) post.likes = post.likes + 1 post.save() or if you don't mind doing one more query, but also want to make sure that the post always has the correct number of likes: post = Post.objects.get(id=post_id) post.likes = Like.objects.filter(post_id=post).count() post.save() The main reason the likes were not getting updated is that, in the LikeView, you wrote def like(self, request, format=None): instead of def post(self, request, *args, **kwargs): Update the view as follows: class LikeView(generics.CreateAPIView): permission_classes = [ permissions.IsAuthenticated, ] queryset = Like.objects.all() serializer_class = LikeSerializer def post(self, request, *args, **kwargs): serializer = self.serializer_class(data=request.data) if (serializer.is_valid()): like = serializer.save() post = like.post_id post.likes = post.post_id.count() post.save() return Response(LikeSerializer(like).data, status=status.HTTP_200_OK) return Response(serializer.errors(), status=status.HTTP_400_BAD_REQUEST) In the above code, you saved the Like object to a variable like, then got the post object, and found the count using the related name of the post foreign key. Additionally, your model structure enables one user to like a post multiple times. To prevent that, you can add unique_together to the Like model.
class Like(models.Model): user_id = models.ForeignKey(User, related_name='user_id', on_delete=models.CASCADE) post_id = models.ForeignKey(Post, related_name='post_id', on_delete=models.CASCADE) def __str__(self): return "%s %s" % (self.user_id, self.post_id) class Meta: unique_together = ('user_id', 'post_id',)
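For the counter itself, a race-free alternative to the read-modify-write pattern (a sketch, not from the answers; field names follow the asker's models): push the arithmetic into a single SQL UPDATE with an F() expression:
from django.db.models import F
from .models import Like, Post  # assumed app-local import

def add_like(user, post):
    # With unique_together in place, a duplicate like raises IntegrityError
    # instead of silently double-counting.
    Like.objects.create(user_id=user, post_id=post)
    # One atomic UPDATE ... SET likes = likes + 1; no stale read in between.
    Post.objects.filter(pk=post.pk).update(likes=F('likes') + 1)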
Increment Integer Field in Django
I am creating a simple blogging application and would like users to be able to like a post. In terms of scalability I've decided it would be best to have likes as a separate table made up of pointers to both the user and post. I have managed to enable the post request adding a like to the model, however the likes field in the post model is not incrementing. I've tried using a simple likes += 1 technique in the serializer but that made no changes, and have now used an F() expression but still no changes are being made. I am still fairly new to Django and suspect it may be because I'm trying to update a field on a different model within a CreateAPIView serializer but I'm not sure. This is what I have so far # views.py class LikeView(generics.CreateAPIView): permission_classes = [ permissions.IsAuthenticated, ] queryset = Like.objects.all() serializer_class = LikeSerializer def like(self, request, format=None): serializer = self.serializer_class(data=request.data) if(serializer.is_valid()): user_id = serializer.data.get('user_id') post_id = serializer.data.get('post_id') l = Like(user_id=user_id, post_id=post_id) l.save() # likes field not updating with this post = Post.objects.get(id=post_id) post.likes = F('likes') + 1 post.save() return Response(LikeSerializer(l).data, status=status.HTTP_200_OK) return Response(serializer.errors(), status=status.HTTP_400_BAD_REQUEST) #models.py class Post(models.Model): id = models.CharField(max_length=36, default=generate_unique_id, primary_key=True) title = models.CharField(max_length=50) content = models.TextField() likes = models.IntegerField(default=0, blank=True) pub_date = models.DateTimeField(default=timezone.now) def __str__(self): return self.title class Like(models.Model): user_id = models.ForeignKey(User, related_name='user_id', on_delete=models.CASCADE) post_id = models.ForeignKey(Post, related_name='post_id', on_delete=models.CASCADE) def __str__(self): return "%s %s" % (self.user_id, self.post_id) #serializers.py class LikeSerializer(serializers.ModelSerializer): class Meta: fields= ( 'user_id', 'post_id' ) model = Like Thank you
[ "You can use F() only when doing queries.\npost = Post.objects.get(id=post_id)\npost.likes = post.likes + 1\npost.save()\n\nor if you don't mind doing one more query, but also make sure that the post has always the correct number of likes:\npost = Post.objects.get(id=post_id)\npost.likes = Like.objects.filter(post=post).count()\npost.save()\n\n", "The main reason for the likes were not getting updated was because, in the LikeView, you wrote\ndef like(self, request, format=None):\ninstead of\ndef post(self, request, *args, **kwargs):\nUpdate the view as follows:\nclass LikeView(generics.CreateAPIView):\n permission_classes = [\n permissions.IsAuthenticated,\n ]\n queryset = Like.objects.all()\n serializer_class = LikeSerializer\n\n def post(self, request, *args, **kwargs):\n serializer = self.serializer_class(data=request.data)\n if (serializer.is_valid()):\n like = serializer.save()\n post = like.post_id\n post.likes = post.post_id.count()\n post.save()\n\n return Response(LikeSerializer(like).data,\n status=status.HTTP_200_OK)\n return Response(serializer.errors(),\n status=status.HTTP_400_BAD_REQUEST)\n\nIn the above code, you saved the Like object to a variable like\nThen got the post object, and found the count using related name of post foreign key.\nAdditionally, your model structure enables one user to like a post multiple times. To prevent that you can add the unique together to the Likes model.\nclass Like(models.Model):\n user_id = models.ForeignKey(User, related_name='user_id', on_delete=models.CASCADE)\n post_id = models.ForeignKey(Post, related_name='post_id', on_delete=models.CASCADE)\n\n def __str__(self):\n return \"%s %s\" % (self.user_id, self.post_id)\n\n class Meta:\n unique_together = ('user_id', 'post_id',)\n \n\n" ]
[ 0, 0 ]
[]
[]
[ "django", "increment", "python", "serialization" ]
stackoverflow_0074500748_django_increment_python_serialization.txt
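A note on the F() approach above: it does increment atomically, but after post.save() the in-memory post.likes holds an unevaluated expression until re-read. A minimal sketch of the race-free pattern (assuming the Post model from the question; the helper name add_like is illustrative, not from the thread):
from django.db.models import F

def add_like(post_id):
    # Let the database do the increment; avoids the read-modify-write race
    Post.objects.filter(id=post_id).update(likes=F('likes') + 1)
    post = Post.objects.get(id=post_id)  # re-read to get the concrete integer
    return post.likes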
Q: Python: mark method as implementing/overriding Given a 'contract' of sorts that I want to implement, I want the code to tell the reader what the intent is allow the type checker to correct me (fragile base class problem) E.g. in C++, you can class X: public Somethingable { int get_something() const override { return 10; } }; Now when I rename Somethingable::get_something (to plain something for instance), the compiler will error on my X::get_something because it is not an override (anymore). In C# the reader gets even more information: class X : Somethingable { int GetSomething() implements Somethingable.GetSomething { return 10; } } In Python, we can use abc.ABC and @abstractmethod to annotate that subclasses have to define this and that member, but is there a standardised way to annotate this relation on the implementation site? class X(Somethingable): @typing.implements(Somethingable.get_something) # does not exist def get_something(self): return 10 A: I was overestimating the complexity of such solution, it is shorter: import warnings def override(func): if hasattr(func, 'fget'): # We see a property, go to actual callable func.fget.__overrides__ = True else: func.__overrides__ = True return func class InterfaceMeta(type): def __new__(mcs, name, bases, attrs): for name, a in attrs.items(): f = getattr(a, 'fget', a) if not getattr(f, '__overrides__', None): continue f = getattr(f, '__wrapped__', f) try: base_class = next(b for b in bases if hasattr(b, name)) ref = getattr(base_class, name) if type(ref) is not type(a): warnings.warn(f'Overriding method {name} messes with class/static methods or properties') continue if _check_lsp(f, ref): warnings.warn(f'LSP violation for method {name}') continue except StopIteration: warnings.warn(f'Overriding method {name} does not have parent implementation') return super().__new__(mcs, name, bases, attrs) override decorator can mark overriding methods, and InterfaceMeta confirms that these methods do exist in superclass. _check_lsp is the most complex part of this, I'll explain it below. What is actually going on? First, we take a callable and add an attribute to it from the decorator. Then metaclass looks for methods with this marker and: confirms, that at least one of base classes implements it checks, that property remains property, classmethod remains classmethod and staticmethod remains staticmethod checks, that implementation does not break Liskov substitution principle. Usage def stupid_decorator(func): """Stupid, because doesn't use `wrapt` or `functools.wraps`.""" def inner(*args, **kwargs): return func(*args, **kwargs) return inner class IFoo(metaclass=InterfaceMeta): def foo(self): return 'foo' @property def bar(self): return 'bar' @classmethod def cmethod(cls): return 'classmethod' @staticmethod def smethod(): return 'staticmethod' def some_1(self): return 1 def some_2(self): return 2 def single_arg(self, arg): return arg def two_args_default(self, arg1, arg2): return arg1 def pos_only(self, arg1, /, arg2, arg3=1): return arg1 def kwonly(self, *, arg1=1): return arg1 class Foo(IFoo): @override @stupid_decorator # Wrong signature now: "self" not mentioned. With "self" in decorator won't fail. 
def foo(self): return 'foo2' @override @property def baz(self): return 'baz' @override def quak(self): return 'quak' @override @staticmethod def cmethod(): return 'Dead' @override @classmethod def some_1(cls): return None @override def single_arg(self, another_arg): return 1 @override def pos_only(self, another_arg, / , arg2, arg3=1): return 1 @override def two_args_default(self, arg1, arg2=1): return 1 @override def kwonly(self, *, arg2=1): return 1 This warns: LSP violation for method foo Overriding method baz does not have parent implementation Overriding method quak does not have parent implementation Overriding method cmethod messes with class/static methods or properties Overriding method some_1 messes with class/static methods or properties LSP violation for method single_arg LSP violation for method kwonly You can set the metaclass on Foo as well with the same result. LSP LSP (Liskov substitution principle) is a very important concept that, in particular, postulates that any parent class can be substituted with any child class without interface incompatibilities. _check_lsp performs only the very simple checking, ignoring type annotations (it is mypy area, I won't touch it!). It confirms that *args and **kwargs do not disappear positional-only args count is same all parent's regular (positional-or-keyword) args are present with the same name, do not lose default values (but may change) and all added have defaults same for keyword-only args Implementation follows: from inspect import signature, Parameter from itertools import zip_longest, chain def _check_lsp(child, parent): child = signature(child).parameters parent = signature(parent).parameters def rearrange(params): return { 'posonly': sum(p.kind == Parameter.POSITIONAL_ONLY for p in params.values()), 'regular': [(name, p.default is Parameter.empty) for name, p in params.items() if p.kind == Parameter.POSITIONAL_OR_KEYWORD], 'args': next((p for p in params.values() if p.kind == Parameter.VAR_POSITIONAL), None) is not None, 'kwonly': [(name, p.default is Parameter.empty) for name, p in params.items() if p.kind == Parameter.KEYWORD_ONLY], 'kwargs': next((p for p in params.values() if p.kind == Parameter.VAR_KEYWORD), None) is not None, } child, parent = rearrange(child), rearrange(parent) if ( child['posonly'] != parent['posonly'] or not child['args'] and parent['args'] or not child['kwargs'] and parent['kwargs'] ): return True for new, orig in chain(zip_longest(child['regular'], parent['regular']), zip_longest(child['kwonly'], parent['kwonly'])): if new is None and orig is not None: return True elif orig is None and new[1]: return True elif orig[0] != new[0] or not orig[1] and new[1]: return True A: This is a duplicate of the question In Python, how do I indicate I'm overriding a method? That question has an answer by @mkorpela, where he created a pip-installable package, overrides, that handles this: https://github.com/mkorpela/overrides Basically, annotate a method with @override from overrides import override class Foo: @override def some_method(self): ...
Python: mark method as implementing/overriding
Given a 'contract' of sorts that I want to implement, I want the code to tell the reader what the intent is allow the type checker to correct me (fragile base class problem) E.g. in C++, you can class X: public Somethingable { int get_something() const override { return 10; } }; Now when I rename Somethingable::get_something (to plain something for instance), the compiler will error on my X::get_something because it is not an override (anymore). In C# the reader gets even more information: class X : Somethingable { int GetSomething() implements Somethingable.GetSomething { return 10; } } In Python, we can use abc.ABC and @abstractmethod to annotate that subclasses have to define this and that member, but is there a standardised way to annotate this relation on the implementation site? class X(Somethingable): @typing.implements(Somethingable.get_something) # does not exist def get_something(self): return 10
[ "I was overestimating the complexity of such solution, it is shorter:\nimport warnings\n\ndef override(func):\n if hasattr(func, 'fget'): # We see a property, go to actual callable\n func.fget.__overrides__ = True\n else:\n func.__overrides__ = True\n return func\n\n\nclass InterfaceMeta(type):\n def __new__(mcs, name, bases, attrs):\n for name, a in attrs.items():\n f = getattr(a, 'fget', a)\n if not getattr(f, '__overrides__', None): continue\n f = getattr(f, '__wrapped__', f)\n try:\n base_class = next(b for b in bases if hasattr(b, name))\n ref = getattr(base_class, name)\n if type(ref) is not type(a):\n warnings.warn(f'Overriding method {name} messes with class/static methods or properties')\n continue\n if _check_lsp(f, ref):\n warnings.warn(f'LSP violation for method {name}')\n continue\n except StopIteration:\n warnings.warn(f'Overriding method {name} does not have parent implementation')\n return super().__new__(mcs, name, bases, attrs)\n\noverride decorator can mark overriding methods, and InterfaceMeta confirms that these methods do exist in superclass. _check_lsp is the most complex part of this, I'll explain it below.\nWhat is actually going on? First, we take a callable and add an attribute to it from the decorator. Then metaclass looks for methods with this marker and:\n\nconfirms, that at least one of base classes implements it\nchecks, that property remains property, classmethod remains classmethod and staticmethod remains staticmethod\nchecks, that implementation does not break Liskov substitution principle.\n\nUsage\ndef stupid_decorator(func):\n \"\"\"Stupid, because doesn't use `wrapt` or `functools.wraps`.\"\"\"\n def inner(*args, **kwargs):\n return func(*args, **kwargs)\n return inner\n\nclass IFoo(metaclass=InterfaceMeta):\n def foo(self): return 'foo'\n @property\n def bar(self): return 'bar'\n @classmethod\n def cmethod(cls): return 'classmethod'\n @staticmethod\n def smethod(): return 'staticmethod'\n def some_1(self): return 1\n def some_2(self): return 2\n\n def single_arg(self, arg): return arg\n def two_args_default(self, arg1, arg2): return arg1\n def pos_only(self, arg1, /, arg2, arg3=1): return arg1\n def kwonly(self, *, arg1=1): return arg1\n\nclass Foo(IFoo):\n @override\n @stupid_decorator # Wrong signature now: \"self\" not mentioned. With \"self\" in decorator won't fail.\n def foo(self): return 'foo2'\n \n @override\n @property\n def baz(self): return 'baz'\n\n @override\n def quak(self): return 'quak'\n\n @override\n @staticmethod\n def cmethod(): return 'Dead'\n\n @override\n @classmethod\n def some_1(cls): return None\n\n @override\n def single_arg(self, another_arg): return 1\n\n @override\n def pos_only(self, another_arg, / , arg2, arg3=1): return 1\n\n @override\n def two_args_default(self, arg1, arg2=1): return 1\n\n @override\n def kwonly(self, *, arg2=1): return 1\n\nThis warns:\nLSP violation for method foo\nOverriding method baz does not have parent implementation\nOverriding method quak does not have parent implementation\nOverriding method cmethod messes with class/static methods or properties\nOverriding method some_1 messes with class/static methods or properties\nLSP violation for method single_arg\nLSP violation for method kwonly\n\nYou can set the metaclass on Foo as well with the same result.\nLSP\nLSP (Liskov substitution principle) is a very important concept that, in particular, postulates that any parent class can be substituted with any child class without interface incompatibilities. 
_check_lsp performs only the very simple checking, ignoring type annotations (it is mypy area, I won't touch it!). It confirms that\n\n*args and **kwargs do not disappear\npositional-only args count is same\nall parent's regular (positional-or-keyword) args are present with the same name, do not lose default values (but may change) and all added have defaults\nsame for keyword-only args\n\nImplementation follows:\nfrom inspect import signature, Parameter\nfrom itertools import zip_longest, chain\n\ndef _check_lsp(child, parent):\n    child = signature(child).parameters\n    parent = signature(parent).parameters\n\n    def rearrange(params):\n        return {\n            'posonly': sum(p.kind == Parameter.POSITIONAL_ONLY for p in params.values()),\n            'regular': [(name, p.default is Parameter.empty) \n                        for name, p in params.items() \n                        if p.kind == Parameter.POSITIONAL_OR_KEYWORD],\n            'args': next((p for p in params.values() \n                          if p.kind == Parameter.VAR_POSITIONAL), \n                         None) is not None,\n            'kwonly': [(name, p.default is Parameter.empty) \n                       for name, p in params.items() \n                       if p.kind == Parameter.KEYWORD_ONLY],\n            'kwargs': next((p for p in params.values() \n                            if p.kind == Parameter.VAR_KEYWORD), \n                           None) is not None,\n        }\n    \n    child, parent = rearrange(child), rearrange(parent)\n    if (\n        child['posonly'] != parent['posonly'] \n        or not child['args'] and parent['args'] \n        or not child['kwargs'] and parent['kwargs']\n    ):\n        return True\n\n\n    for new, orig in chain(zip_longest(child['regular'], parent['regular']), \n                           zip_longest(child['kwonly'], parent['kwonly'])):\n        if new is None and orig is not None:\n            return True\n        elif orig is None and new[1]:\n            return True\n        elif orig[0] != new[0] or not orig[1] and new[1]:\n            return True\n\n", "This is a duplicate of the question In Python, how do I indicate I'm overriding a method? That question has an answer by @mkorpela, where he created a pip-installable package, overrides, that handles this: https://github.com/mkorpela/overrides\nBasically, annotate a method with @override\nfrom overrides import override\n\nclass Foo:\n    @override\n    def some_method(self):\n        ...\n\n" ]
[ 2, 1 ]
[]
[]
[ "abstract_base_class", "class_design", "python", "type_annotation" ]
stackoverflow_0072316756_abstract_base_class_class_design_python_type_annotation.txt
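For readers who want the core idea of the metaclass answer without the LSP machinery, here is a stripped-down, runnable sketch that uses a class decorator instead of a metaclass (all names here are illustrative):
def override(method):
    # Mark the method; verification happens when the class is decorated below
    method.__overrides__ = True
    return method

def check_overrides(cls):
    for name, attr in vars(cls).items():
        if getattr(attr, '__overrides__', False):
            if not any(hasattr(base, name) for base in cls.__mro__[1:]):
                raise TypeError(f"{cls.__name__}.{name} overrides nothing")
    return cls

class Base:
    def greet(self):
        return "hi"

@check_overrides
class Child(Base):
    @override
    def greet(self):
        return "hello"

Renaming Base.greet would then make the creation of Child raise TypeError, which is the fragile-base-class protection the question asks for.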
Q: Plotting multiple data values inside function call I want to plot multiple subplots of scatter plots inside a function, after calling the *args parameter to unpack (x,y) input values. However, I keep getting a simple error: ValueError: s must be a scalar, or float array-like with the same size as x and y I cannot seem to solve it even after changing the function into alternative orders of args. Here is my sample code: import pandas as pd import numpy as np import matplotlib.pyplot as plt x = np.array([[1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4], [0.3, 0.5, 0.6, 0.2, 0.4, 0.5, 0.6, 0.5, 0.8, 0.9, 0.9, 0.8, 0.2, 0.1, 0.5, 0.6], ['r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b']]) values = pd.DataFrame(x.T, columns=['a', 'b', 'c']) X = values[values['c'] == 'r'].iloc[ : , 0:2 ].values Y = values[values['c'] == 'b'].iloc[ : , 0:2 ].values def test(*args): figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) ) for xy , ax in zip( args , axs.flat ) : print(xy) ax.scatter(*xy) test(X, Y) plt.show() A: I have achieved it with the following, so perhaps there is a cleaner alternative? def test(*args): figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) ) xy = np.array(args) for x_y , ax in zip( xy , axs.flat ) : (x, y) = np.hsplit(np.ndarray.flatten(x_y), 2) ax.scatter(x, y) test(X, Y) plt.show() A: Okay ... this is the solution to your problem ... this is probably as incomprehensible as your code. def t(*args): figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) ) for xy , ax in zip( zip(*args) , axs.flat ) : print(xy) ax.scatter(*xy) t(X.transpose(), Y.transpose()) Now let's convert this to Python code... You should know everything about the function just by looking at the function, so *args is useful for something "extra" than the intended behavior of the function. Separate X and Y from args, because they are used inside the function explicitly. Avoid zip unless the arguments are simple; while nesting generators is performant, it's bad for code quality. In your case I'd go for enumerate instead since I am indexing into an array, which needs to be as descriptive as possible. Make your calls descriptive; *xy is not descriptive. What is this variable? Is it a tuple? An ndarray? What will unpacking it result in? If you are passing X and Y in it then pass X and Y directly. def t(*args): X,Y = args figs, axs = plt.subplots(1, 2, figsize=(8, 8)) for i, ax in enumerate(axs): print(X[:, i], Y[:, i]) ax.scatter(X[:, i], Y[:, i]) t(X, Y)
Plotting multiple data values inside function call
I want to plot multiple subplots of scatter plots inside a function, after calling the *args parameter to unpack (x,y) input values. However, I keep getting a simple error: ValueError: s must be a scalar, or float array-like with the same size as x and y I cannot seem to solve it even after changing the function into alternative orders of args. Here is my sample code: import pandas as pd import numpy as np import matplotlib.pyplot as plt x = np.array([[1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4], [0.3, 0.5, 0.6, 0.2, 0.4, 0.5, 0.6, 0.5, 0.8, 0.9, 0.9, 0.8, 0.2, 0.1, 0.5, 0.6], ['r', 'r', 'r', 'r', 'r', 'r', 'r', 'r', 'b', 'b', 'b', 'b', 'b', 'b', 'b', 'b']]) values = pd.DataFrame(x.T, columns=['a', 'b', 'c']) X = values[values['c'] == 'r'].iloc[ : , 0:2 ].values Y = values[values['c'] == 'b'].iloc[ : , 0:2 ].values def test(*args): figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) ) for xy , ax in zip( args , axs.flat ) : print(xy) ax.scatter(*xy) test(X, Y) plt.show()
[ "I have achieved it with the follow, so perhaps there is a cleaner alternative?\ndef test(*args):\n figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) )\n xy = np.array(args)\n for x_y , ax in zip( xy , axs.flat ) :\n (x, y) = np.hsplit(np.ndarray.flatten(x_y), 2)\n ax.scatter(x, y)\n\ntest(X, Y)\n\nplt.show()\n\n", "okay ... this is the solution to your problem ... this is probably as incomprehensible as your code.\ndef t(*args):\n figs, axs = plt.subplots( 1 , 2 , figsize = ( 8 , 8 ) )\n for xy , ax in zip( zip(*args) , axs.flat ) :\n print(xy)\n ax.scatter(*xy)\n\nt(X.transpose(), Y.transpose())\n\nnow let's convert this to python code...\nyou should know everything about the function just by looking at the function, so *args is useful for something \"extra\" than the intended behavior of the function.\n\nseparate X and Y from args, because they are used inside the function explicitly.\navoid zip unless the arguments are simple, while nesting generators is performant it's bad for code quality, in your case i'd go for enumerate instead since i am indexing into an array, which needs to be as descriptive as possible.\nmake your calls descriptive, *xy is not descriptive, what is this variable ? is it a tuple ? an ndarray ? what will unpacking it result in ? if you are passing X and Y in it then pass X and Y directly.\n\ndef t(*args):\n X,Y = args\n figs, axs = plt.subplots(1, 2, figsize=(8, 8))\n for i, ax in enumerate(axs):\n print(X[:, i], Y[:, i])\n ax.scatter(X[:, i], Y[:, i])\n\nt(X, Y)\n\n" ]
[ 0, 0 ]
[]
[]
[ "matplotlib", "plot", "python" ]
stackoverflow_0074503195_matplotlib_plot_python.txt
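To make the second answer's advice concrete, here is a self-contained variant with explicit parameters instead of *args (random points stand in for the question's DataFrame-derived arrays):
import numpy as np
import matplotlib.pyplot as plt

def plot_groups(X, Y):
    # X and Y are (n, 2) arrays of x/y pairs; one subplot per group
    fig, axs = plt.subplots(1, 2, figsize=(8, 4))
    for data, ax in zip((X, Y), axs):
        ax.scatter(data[:, 0], data[:, 1])
    return fig

plot_groups(np.random.rand(8, 2), np.random.rand(8, 2))
plt.show()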
Q: 'list','_AtIndexer' object is not callable PandaPy mp = pd.read_csv("Stock price over the last 24 months of Adidas, Nike, and Puma.csv",index_col=0) mr = pd.DataFrame() # compute monthly returns for s in mp.columns: date = mp.index[0] pr0 = mp[s][date] for t in range(1,len(mp.index)): date = mp.index[t] pr1 = mp[s][date] ret = (pr1-pr0)/pr0 mr.set_value(date,s,ret) pr0 = pr1 I try to predict the Stock price over the last 24 months of Adidas, Nike, and Puma by using pandas and I get this error TypeError Traceback (most recent call last) Input In [60], in <cell line: 3>() 8 pr1 = mp[s][date] 9 ret = (pr1-pr0)/pr0 ---> 10 mr.set_value(date,s,ret) 11 pr0 = pr1 TypeError: 'list' object is not callable PS: I tried to solve this by reading about the same error, but I can't solve it. Trying to use mr.at instead of set_value, I got another error TypeError Traceback (most recent call last) Input In [3], in <cell line: 5>() 10 pr1 = mp[s][date] 11 ret = (pr1-pr0)/pr0 ---> 12 mr.at(date,s,ret) 13 pr0 = pr1 TypeError: '_AtIndexer' object is not callable Please help. A: You should use .at like this: mr.at[date,s]=ret full code: mp = pd.read_csv("Stock price over the last 24 months of Adidas, Nike, and Puma.csv",index_col=0) mr = pd.DataFrame() # compute monthly returns for s in mp.columns: date = mp.index[0] pr0 = mp[s][date] for t in range(1,len(mp.index)): date = mp.index[t] pr1 = mp[s][date] ret = (pr1-pr0)/pr0 mr.at[date,s]=ret pr0 = pr1
'list','_AtIndexer' object is not callable PandaPy
mp = pd.read_csv("Stock price over the last 24 months of Adidas, Nike, and Puma.csv",index_col=0) mr = pd.DataFrame() # compute monthly returns for s in mp.columns: date = mp.index[0] pr0 = mp[s][date] for t in range(1,len(mp.index)): date = mp.index[t] pr1 = mp[s][date] ret = (pr1-pr0)/pr0 mr.set_value(date,s,ret) pr0 = pr1 I try to predict the Stock price over the last 24 months of Adidas, Nike, and Puma by using pandas and I get this error TypeError Traceback (most recent call last) Input In [60], in <cell line: 3>() 8 pr1 = mp[s][date] 9 ret = (pr1-pr0)/pr0 ---> 10 mr.set_value(date,s,ret) 11 pr0 = pr1 TypeError: 'list' object is not callable PS: I tried to solve this by reading about the same error, but I can't solve it. Trying to use mr.at instead of set_value, I got another error TypeError Traceback (most recent call last) Input In [3], in <cell line: 5>() 10 pr1 = mp[s][date] 11 ret = (pr1-pr0)/pr0 ---> 12 mr.at(date,s,ret) 13 pr0 = pr1 TypeError: '_AtIndexer' object is not callable Please help.
[ "you should use .at like this:\nmr.at[date,s]=ret\n\nfull code:\nmp = pd.read_csv(\"Stock price over the last 24 months of Adidas, Nike, and Puma.csv\",index_col=0)\nmr = pd.DataFrame()\n# compute monthly returns\nfor s in mp.columns:\n date = mp.index[0]\n pr0 = mp[s][date] \n for t in range(1,len(mp.index)):\n date = mp.index[t]\n pr1 = mp[s][date]\n ret = (pr1-pr0)/pr0\n mr.at[date,s]=ret\n pr0 = pr1\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074503352_dataframe_pandas_python.txt
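Worth noting alongside the .at fix: pandas can compute (pr1-pr0)/pr0 for every column at once, which removes both loops. A sketch with made-up prices (the original CSV is not reproduced here):
import pandas as pd

prices = pd.DataFrame(
    {"ADS": [150.0, 155.0, 149.0], "NKE": [100.0, 102.0, 101.0]},
    index=pd.to_datetime(["2022-01-31", "2022-02-28", "2022-03-31"]),
)
mr = prices.pct_change().dropna()  # same monthly returns as the manual loop
print(mr)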
Q: Trying to open a csv file that lives in the same directory as my Python script, but getting Errno 2, file doesn't exist? I am trying to open a CSV file that I recently saved using Python. Here is the global directory: So the folder is called Parsing Data Using Python, and there are 3 files inside; we only concern ourselves with the codealong.py file, which is some Python code that I want to run to open the 'patrons.csv' file. Here is the code in the codealong.py file: import csv html_output = '' with open('patrons.csv','r') as data_file: csv_data = csv.reader(data_file) print(list(csv_data)) When I run this, I get [Errno 2] No such file or directory: 'patrons.csv' Any ideas why I am getting this? Because I am saving patrons.csv in the same directory as codealong.py, I thought the file would be detected! A: One approach would be to set the working directory to be the same location as where your script is. To do this for any script, add this to the top: import os os.chdir(os.path.dirname(os.path.abspath(__file__))) This takes the full name of where your script is located, takes just the path element and sets the current working directory to that. You then would not need to specify the full path to your file. A: Another approach using pathlib. import csv from pathlib import Path file = Path(__file__).parent / "patrons.csv" with file.open("r", encoding="utf-8") as data_file: csv_data = csv.reader(data_file) print(list(csv_data))
Trying to open a csv file that lives in the same directory as my Python script, but getting Errno 2, file doesn't exist?
I am trying to open a CSV file that I recently saved using Python. Here is the global directory: So the folder is called Parsing Data Using Python, and there are 3 files inside; we only concern ourselves with the codealong.py file, which is some Python code that I want to run to open the 'patrons.csv' file. Here is the code in the codealong.py file: import csv html_output = '' with open('patrons.csv','r') as data_file: csv_data = csv.reader(data_file) print(list(csv_data)) When I run this, I get [Errno 2] No such file or directory: 'patrons.csv' Any ideas why I am getting this? Because I am saving patrons.csv in the same directory as codealong.py, I thought the file would be detected!
[ "One approach would be to set the working directory to be the same location as where your script is. To do this for any script, add this to the top:\nimport os \nos.chdir(os.path.dirname(os.path.abspath(__file__)))\n\nThis takes the full name of where your script is located, takes just the path element and sets the current working directory to that.\nYou then would not need to specify the full path to your file.\n", "Another approach using pathlib.\nimport csv \nfrom pathlib import Path\n\nfile = Path(__file__).parent / \"patrons.csv\"\n\nwith file.open(\"r\", encoding=\"utf-8\") as data_file:\n csv_data = csv.reader(data_file)\n print(list(csv_data))\n\n" ]
[ 0, 0 ]
[]
[]
[ "csv", "python" ]
stackoverflow_0072159867_csv_python.txt
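A quick diagnostic that complements both answers: relative paths such as 'patrons.csv' resolve against the process's current working directory, not the script's folder, so printing both usually explains the Errno 2 immediately:
import os

print("cwd:", os.getcwd())
print("script dir:", os.path.dirname(os.path.abspath(__file__)))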
Q: Pyautogui cursor not working after selecting a text box and clicking another object I'm currently having trouble with pyautogui's cursor. When I execute the code, it clicks the text box and writes text there. But after it's done writing and moves to another object I also want it to click, but the cursor doesn't change; it stays a text cursor and cannot click anything after that, despite the cursor having moved to the object that I need to click. Pydirectinput didn't work either. Can anybody help me? Thank you. Code here: import random as r import pyautogui as pg import pydirectinput as pg2 import time as t pg.FAILSAFE = False letter="1234567890QWERTYUIOPASDFGHJKLZXCVBNM" generated=None def gen(): code=''.join(r.sample(letter,12)) print("Generated code: ",code) return code def redeem(): pass def check(): pass def mine(): empty = pg.locateCenterOnScreen('empty.png', confidence=.9) redeem = pg.locateCenterOnScreen('redeem.png',confidence=.9) codeis = None ok = None x=None print(empty) if empty != None: print(empty) pg.moveTo(empty) t.sleep(0.001) pg2.click() t.sleep(0.001) x=gen() t.sleep(0.1) pg.write(x) t.sleep(.1) pg2.press('esc') if redeem != None: print(redeem) t.sleep(0.1) pg.moveTo(redeem) t.sleep(.1) pg2.click() t.sleep(0.1) else: pass #main if __name__ == '__main__': while True: mine() A: If you are trying to click on Roblox, there are issues with Pyautogui clicking the mouse in Roblox but I've found a workaround for that: import autoit if empty != None: print(empty) pg.moveTo(empty) t.sleep(0.001) autoit.mouse_click("left") #Instead of using pyautogui to click, we are gonna use autoit t.sleep(0.001) x=gen() t.sleep(0.1) pg.write(x) t.sleep(.1) pg2.press('esc') if redeem != None: print(redeem) t.sleep(0.1) pg.moveTo(redeem) t.sleep(.1) autoit.mouse_click("left") #Instead of using pyautogui to click, we are gonna use autoit t.sleep(0.1) else: pass
Pyautogui cursor not working after selecting a text box and clicking another object
I'm currently having trouble with pyautogui's cursor. When I execute the code, it clicks the text box and writes text there. But after it's done writing and moves to another object I also want it to click, but the cursor doesn't change; it stays a text cursor and cannot click anything after that, despite the cursor having moved to the object that I need to click. Pydirectinput didn't work either. Can anybody help me? Thank you. Code here: import random as r import pyautogui as pg import pydirectinput as pg2 import time as t pg.FAILSAFE = False letter="1234567890QWERTYUIOPASDFGHJKLZXCVBNM" generated=None def gen(): code=''.join(r.sample(letter,12)) print("Generated code: ",code) return code def redeem(): pass def check(): pass def mine(): empty = pg.locateCenterOnScreen('empty.png', confidence=.9) redeem = pg.locateCenterOnScreen('redeem.png',confidence=.9) codeis = None ok = None x=None print(empty) if empty != None: print(empty) pg.moveTo(empty) t.sleep(0.001) pg2.click() t.sleep(0.001) x=gen() t.sleep(0.1) pg.write(x) t.sleep(.1) pg2.press('esc') if redeem != None: print(redeem) t.sleep(0.1) pg.moveTo(redeem) t.sleep(.1) pg2.click() t.sleep(0.1) else: pass #main if __name__ == '__main__': while True: mine()
[ "If you are trying to click on Roblox, there are issues with Pyautogui clicking the mouse in Roblox but i've found a workaround for that:\nimport autoit\n if empty != None:\n print(empty)\n pg.moveTo(empty)\n t.sleep(0.001)\n autoit.mouse_click(\"left\") #Instead of using pyautogui to click, we are gonna use autoit\n t.sleep(0.001)\n x=gen()\n t.sleep(0.1)\n pg.write(x)\n t.sleep(.1)\n pg2.press('esc')\n if redeem != None:\n print(redeem)\n t.sleep(0.1)\n pg.moveTo(redeem)\n t.sleep(.1)\n autoit.mouse_click(\"left\") #Instead of using pyautogui to click, we are gonna use autoit\n t.sleep(0.1)\n \nelse:\n pass\n\n" ]
[ 1 ]
[]
[]
[ "cursor", "debugging", "pyautogui", "python" ]
stackoverflow_0074501236_cursor_debugging_pyautogui_python.txt
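Independent of the autoit workaround, the locate-then-click sequence from the question can be wrapped in a small retry helper so a momentarily missing button never passes None along (a sketch only; it does not fix games that ignore synthetic clicks):
import time
import pyautogui as pg

def click_image(image, confidence=0.9, retries=3, delay=0.2):
    # Locate an on-screen image and click its center, retrying briefly;
    # the confidence= argument requires opencv-python to be installed
    for _ in range(retries):
        pos = pg.locateCenterOnScreen(image, confidence=confidence)
        if pos is not None:
            pg.moveTo(pos)
            pg.click()
            return True
        time.sleep(delay)
    return False

click_image('redeem.png')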
Q: Installing my project from testpypi gives me an error I am learning how to package Python projects and publish them and I ran into a problem I have been trying to solve, but failed. I have this small project and I am trying to upload it to TestPyPI. I managed to upload it there and I can even find it at (https://test.pypi.org/project/cli-assistant/) Problem: When I try to install it using pip install -i https://test.pypi.org/simple/ cli-assistant I get this error: Looking in indexes: https://test.pypi.org/simple/ ERROR: Could not find a version that satisfies the requirement cli-assistant (from versions: none) ERROR: No matching distribution found for cli-assistant Here is the full setup.py file from setuptools import setup, find_packages with open("Description.rst", "r", encoding="utf-8") as fh: long_description = fh.read() with open("requirements.txt", "r", encoding="utf-8") as fh: requirements = fh.read() setup( name= 'cli-assistant', version= '0.0.5', author= 'my name', author_email= 'my email', license= 'MIT License', description='guide you with terminal and git commands', long_description=long_description, url='https://github.com/willsketch/Helper', py_modules=[ 'my_helper'], packages= find_packages(), install_requires = [requirements], classifiers=[ 'Programming Language :: Python', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.8', ], include_package_data=True, package_data={'helper':['examples.txt']}, entry_points= { 'console_scripts':[ 'helper = my_helper:cli', ] } ) A: You have uploaded only an .egg file. Pip cannot install eggs. You should upload a source distribution (.tar.gz or .zip) and/or a wheel (.whl).
Installing my project from testpypi gives me an error
I am learning how to package Python projects and publish them and I ran into a problem I have been trying to solve, but failed. I have this small project and I am trying to upload it to TestPyPI. I managed to upload it there and I can even find it at (https://test.pypi.org/project/cli-assistant/) Problem: When I try to install it using pip install -i https://test.pypi.org/simple/ cli-assistant I get this error: Looking in indexes: https://test.pypi.org/simple/ ERROR: Could not find a version that satisfies the requirement cli-assistant (from versions: none) ERROR: No matching distribution found for cli-assistant Here is the full setup.py file from setuptools import setup, find_packages with open("Description.rst", "r", encoding="utf-8") as fh: long_description = fh.read() with open("requirements.txt", "r", encoding="utf-8") as fh: requirements = fh.read() setup( name= 'cli-assistant', version= '0.0.5', author= 'my name', author_email= 'my email', license= 'MIT License', description='guide you with terminal and git commands', long_description=long_description, url='https://github.com/willsketch/Helper', py_modules=[ 'my_helper'], packages= find_packages(), install_requires = [requirements], classifiers=[ 'Programming Language :: Python', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.8', ], include_package_data=True, package_data={'helper':['examples.txt']}, entry_points= { 'console_scripts':[ 'helper = my_helper:cli', ] } )
[ "You have uploaded only an .egg file. Pip cannot install eggs. You should upload a source distribution (.tar.gz or .zip) and/or a wheel (.whl).\n" ]
[ 0 ]
[]
[]
[ "pypi", "python", "python_packaging", "setup.py" ]
stackoverflow_0074502468_pypi_python_python_packaging_setup.py.txt
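For completeness, the standard commands that produce the missing sdist and wheel before uploading (run from the project root, using the stock build and twine tools rather than anything project-specific):
python -m pip install --upgrade build twine
python -m build                                  # writes dist/*.tar.gz and dist/*.whl
python -m twine upload --repository testpypi dist/*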
Q: How to find nearest nodes in a driving network considering streets directions I am trying to compute the distance between 2 points (lat, lon), using osmnx package. While testing osmnx.nearest_nodes() to first find the nearest node from a point, I noticed that it doesn't seem to take into account the street direction when computing the nearest node (for example when the point is on a one-way street). For instance: if I take this (lat, lon) point (48.921281, 2.517598), located on a one-way street, the nearest node is found only considering the distance from it (the node n°OSMID 288965181 found with nearest_nodes(): 48.9217176, 2.5180361). nearest_nodes() doesn't seem to consider the type of network that I have chosen (drive). If so, the found node should have been node 288964629 (48.9203235, 2.5166429). Why? Because from the point (48.921281, 2.517598), the node 288965181 is not the closest node considering the driving network, while the node 288964629 is. Here is the code example: import osmnx as ox import networkx as nx #creating the graph from a point coordinates centreCoord = (48.920576, 2.529185) G= ox.graph_from_point(centreCoord, dist=2000, network_type='drive') #origin and destination point originCoord = (48.921281, 2.517598) destinationCoord = (48.921454, 2.518696) #calculating the nearest nodes from origin and destination points origin_node = ox.nearest_nodes(G, originCoord[1], originCoord[0]) destination_node = ox.nearest_nodes(G, destinationCoord[1], destinationCoord[0]) #computing the driving path route = nx.shortest_path(G, source=origin_node, target=destination_node, weight='length') Here is a simple plot where: the red dot is the origin point (48.921281, 2.517598) the blue dot is the destination point (48.921454, 2.518696) the green dot is the node 288965181 (48.9217176, 2.5180361) the black dot is the node 288965599, the closest dot from the destination point the magenta line is the route computed with networkx.shortest_path() I have added: the orange dot which is the node 288964629 (48.9203235, 2.5166429) -> the node that I would like to have as the closest node from the origin point, regarding the direction of traffic (the chosen network type is 'drive') the blue path -> the route that I should have obtained, regarding the direction of traffic Plot image I might have missed something or done something wrong. I have read the osmnx documentation. I tried several combinations (parameters network_type, ox.settings.bidirectional_network_types), tried to debug the computation in order to understand how the nodes are selected. Before asking, I have searched for similar topics. A lot of interesting stuff but I didn't find relevant answers. I have been stuck on this for days. Any help would be great! A: A possible workaround could be the following: find the nearest edge from the origin point, using ox.nearest_edges() function. You will get the one-way road; the edge is defined by u,v nodes: u is the starting node of the way, v the ending node, following the street direction; the v node is the node you want to be considered (288964629), so compute the route from the v node to the destination node. 
# Get the nearest edge from the origin point u, v, k = ox.nearest_edges(G, originCoord[1], originCoord[0]) destination_node = ox.nearest_nodes(G, destinationCoord[1], destinationCoord[0]) # computing the driving path route = nx.shortest_path(G, source=v, target=destination_node, weight='length') Here is the plot, using ox.plot_graph_route(G, route): Note that the described approach works fine with one-way roads made of only two nodes.
How to find nearest nodes in a driving network considering streets directions
I am trying to compute the distance between 2 points (lat, lon), using osmnx package. While testing osmnx.nearest_nodes() to first find the nearest node from a point, I noticed that it doesn't seem to take into account the street direction when computing the nearest node (for example when the point is on a one-way street). For instance: if I take this (lat, lon) point (48.921281, 2.517598), located on a one-way street, the nearest node is found only considering the distance from it (the node n°OSMID 288965181 found with nearest_nodes(): 48.9217176, 2.5180361). nearest_nodes() doesn't seem to consider the type of network that I have chosen (drive). If so, the found node should have been node 288964629 (48.9203235, 2.5166429). Why? Because from the point (48.921281, 2.517598), the node 288965181 is not the closest node considering the driving network, while the node 288964629 is. Here is the code example: import osmnx as ox import networkx as nx #creating the graph from a point coordinates centreCoord = (48.920576, 2.529185) G= ox.graph_from_point(centreCoord, dist=2000, network_type='drive') #origin and destination point originCoord = (48.921281, 2.517598) destinationCoord = (48.921454, 2.518696) #calculating the nearest nodes from origin and destination points origin_node = ox.nearest_nodes(G, originCoord[1], originCoord[0]) destination_node = ox.nearest_nodes(G, destinationCoord[1], destinationCoord[0]) #computing the driving path route = nx.shortest_path(G, source=origin_node, target=destination_node, weight='length') Here is a simple plot where: the red dot is the origin point (48.921281, 2.517598) the blue dot is the destination point (48.921454, 2.518696) the green dot is the node 288965181 (48.9217176, 2.5180361) the black dot is the node 288965599, the closest dot from the destination point the magenta line is the route computed with networkx.shortest_path() I have added: the orange dot which is the node 288964629 (48.9203235, 2.5166429) -> the node that I would like to have as the closest node from the origin point, regarding the direction of traffic (the chosen network type is 'drive') the blue path -> the route that I should have obtained, regarding the direction of traffic Plot image I might have missed something or done something wrong. I have read the osmnx documentation. I tried several combinations (parameters network_type, ox.settings.bidirectional_network_types), tried to debug the computation in order to understand how the nodes are selected. Before asking, I have searched for similar topics. A lot of interesting stuff but I didn't find relevant answers. I have been stuck on this for days. Any help would be great!
[ "A possible workaround could be the following:\n\nfind the nearest edge from the origin point, using ox.nearest_edges() function. You will get the one-way road;\nthe edge is defined by u,v nodes: u is the starting node of the way, v the ending node, following the street direction;\nthe v node is the node you want to be considered (288964629), so compute the route from the v node to the destination node.\n\n# Get the nearest edge from the origin point\nu, v, k = ox.nearest_edges(G, originCoord[1], originCoord[0])\n\ndestination_node = ox.nearest_nodes(G, destinationCoord[1], destinationCoord[0])\n\n# computing the driving path\nroute = nx.shortest_path(G, source=v, target=destination_node, weight='length')\n\nHere the plot, by using ox.plot_graph_route(G, route): \n\nNote that the described approach works fine with one-way roads made of only two nodes.\n" ]
[ 0 ]
[]
[]
[ "networkx", "osmnx", "python" ]
stackoverflow_0074495397_networkx_osmnx_python.txt
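One caveat about taking v unconditionally: on a long one-way segment the other endpoint u could still yield the shorter legal route. A hedged sketch that tries both endpoints of the nearest edge and keeps the better one (reusing the answer's u, v from ox.nearest_edges):
import networkx as nx

def route_from_nearest_edge(G, u, v, destination, weight="length"):
    # Compare routes that start from each endpoint of the snapped edge
    candidates = []
    for start in (u, v):
        try:
            dist = nx.shortest_path_length(G, start, destination, weight=weight)
            candidates.append((dist, start))
        except nx.NetworkXNoPath:
            continue
    _, best = min(candidates)  # ValueError here means neither endpoint reaches
    return nx.shortest_path(G, best, destination, weight=weight)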
Q: MultiValueDictKeyError at /"" when uploading empty file I have a Django 4 application, and if a user tries to upload an empty file, then this error occurs: MultiValueDictKeyError at /controlepunt140 'upload_file' I searched for that error. But it seems that the template is okay, because the enctype is set correctly: enctype="multipart/form-data" <form class="form-inline" action="/controlepunt140" method="POST" enctype="multipart/form-data"> <div class="d-grid gap-3"> <div class="form-group">{% csrf_token %} {{form}} <button type="submit" name="form_pdf" class="btn btn-warning">Upload!</button> </div> <div class="form-outline"> <div class="form-group"> <textarea class="inline-txtarea form-control" id="content" cols="70" rows="25"> {{content}}</textarea> </div> </div> </div> </form> and views.py: from __future__ import print_function import os from django.conf import settings from django.shortcuts import render from django.views import View from .extracting_text_from_excel import ExtractingTextFromExcel from .filter_text import FilterText from .forms import ProfileForm from .models import UploadFile def post(self, *args, **kwargs): filter_text = FilterText() extract_excel_instance = ExtractingTextFromExcel() types_of_encoding = ["utf8", "cp1252"] submitted_form = ProfileForm(self.request.POST, self.request.FILES) content = '' content_excel = '' if self.request.POST.get('form_pdf') is not None: if submitted_form.is_valid() and self.request.POST: uploadfile = UploadFile( image=self.request.FILES["upload_file"]) uploadfile.save() for encoding_type in types_of_encoding: with open(os.path.join(settings.MEDIA_ROOT, f"{uploadfile.image}"), 'r', encoding=encoding_type) as f: if uploadfile.image.path.endswith('.pdf'): content = filter_text.show_extracted_data_from_file( uploadfile.image.path) else: content = f.read() return render(self.request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(self.request, "main/controle_punt140.html", { "form": submitted_form, }) and forms.py: class ProfileForm(forms.Form): upload_file = forms.FileField(required=False) So how to tackle this? A: cf MultiValueDictKeyError in Django You're executing uploadfile = UploadFile( image=self.request.FILES["upload_file"]) Either use try / except to catch the error, or rely on .get(): self.request.FILES.get("upload_file", None) You might want to check for None before calling UploadFile(). As written, this doesn't make sense: if submitted_form.is_valid() and self.request.POST: uploadfile = UploadFile( image=self.request.FILES["upload_file"]) uploadfile.save() That is, we conditionally define the uploadfile variable, and then unconditionally call one of its methods. That call won't work if the variable is undefined.
MultiValueDictKeyError at /"" when uploading empty file
I have a Django 4 application, and if a user tries to upload an empty file, then this error occurs: MultiValueDictKeyError at /controlepunt140 'upload_file' I searched for that error. But it seems that the template is okay, because the enctype is set correctly: enctype="multipart/form-data" <form class="form-inline" action="/controlepunt140" method="POST" enctype="multipart/form-data"> <div class="d-grid gap-3"> <div class="form-group">{% csrf_token %} {{form}} <button type="submit" name="form_pdf" class="btn btn-warning">Upload!</button> </div> <div class="form-outline"> <div class="form-group"> <textarea class="inline-txtarea form-control" id="content" cols="70" rows="25"> {{content}}</textarea> </div> </div> </div> </form> and views.py: from __future__ import print_function import os from django.conf import settings from django.shortcuts import render from django.views import View from .extracting_text_from_excel import ExtractingTextFromExcel from .filter_text import FilterText from .forms import ProfileForm from .models import UploadFile def post(self, *args, **kwargs): filter_text = FilterText() extract_excel_instance = ExtractingTextFromExcel() types_of_encoding = ["utf8", "cp1252"] submitted_form = ProfileForm(self.request.POST, self.request.FILES) content = '' content_excel = '' if self.request.POST.get('form_pdf') is not None: if submitted_form.is_valid() and self.request.POST: uploadfile = UploadFile( image=self.request.FILES["upload_file"]) uploadfile.save() for encoding_type in types_of_encoding: with open(os.path.join(settings.MEDIA_ROOT, f"{uploadfile.image}"), 'r', encoding=encoding_type) as f: if uploadfile.image.path.endswith('.pdf'): content = filter_text.show_extracted_data_from_file( uploadfile.image.path) else: content = f.read() return render(self.request, "main/controle_punt140.html", { 'form': ProfileForm(), "content": content }) return render(self.request, "main/controle_punt140.html", { "form": submitted_form, }) and forms.py: class ProfileForm(forms.Form): upload_file = forms.FileField(required=False) So how to tackle this?
[ "cf MultiValueDictKeyError in Django\nYou're executing\n uploadfile = UploadFile(\n image=self.request.FILES[\"upload_file\"])\n\nEither use try / except to catch the error,\nor rely on .get(): self.request.FILES.get(\"upload_file\", None)\nYou might want to check for None before calling UploadFile().\n\nAs written, this doesn't make sense:\n if submitted_form.is_valid() and self.request.POST:\n uploadfile = UploadFile(\n image=self.request.FILES[\"upload_file\"])\n\n uploadfile.save()\n\nThat is, we conditionally define the uploadfile variable,\nand then unconditionally call one of its methods.\nThat call won't work if the variable is undefined.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074503345_django_python.txt
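Applying the answer's .get() advice to the view in question reduces to a short guard (a sketch; the form, model and template names are taken from the post):
def post(self, *args, **kwargs):
    form = ProfileForm(self.request.POST, self.request.FILES)
    upload = self.request.FILES.get("upload_file")  # None instead of KeyError
    if form.is_valid() and upload is not None:
        UploadFile(image=upload).save()
    return render(self.request, "main/controle_punt140.html", {"form": form})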
Q: Making an if statement in antlr4 not working I've been trying to create an if statement in my programming language in antlr4. My grammar that is failing is: if_stmt: IF conditional_block stmt_block (ELSE IF conditional_block stmt_block)* (ELSE conditional_block stmt_block)?; But it gives the error: line 3:2 extraneous input 'else' expecting {<EOF>, '!', BOOLEAN, 'null', 'func', 'if', 'while', 'for', ID, INTEGER, FLOAT, STRING} line 4:27 extraneous input ')' expecting {<EOF>, '!', BOOLEAN, 'null', 'func', 'if', 'while', 'for', ID, INTEGER, FLOAT, STRING} It expects 'else'. My code that goes into the program is: if false { println("Hello World!") } else { println("This is true") } A: This looks suspicious: (ELSE conditional_block stmt_block)? which should probably be: (ELSE stmt_block)? Not sure if that would solve your problem. If not, you'll need to edit your question and add enough of your grammar so that others are able to reproduce the error you mention.
Making an if statement in antlr4 not working
I've been trying to create an if statement in my programming language in antlr4. My grammar that is failing is: if_stmt: IF conditional_block stmt_block (ELSE IF conditional_block stmt_block)* (ELSE conditional_block stmt_block)?; But it gives the error: line 3:2 extraneous input 'else' expecting {<EOF>, '!', BOOLEAN, 'null', 'func', 'if', 'while', 'for', ID, INTEGER, FLOAT, STRING} line 4:27 extraneous input ')' expecting {<EOF>, '!', BOOLEAN, 'null', 'func', 'if', 'while', 'for', ID, INTEGER, FLOAT, STRING} It expects 'else'. My code that goes into the program is: if false { println("Hello World!") } else { println("This is true") }
[ "This look suspicious:\n(ELSE conditional_block stmt_block)?\n\nwhich should probably be:\n(ELSE stmt_block)?\n\nNot sure if that would solve your problem. If not, you'll need to edit your question and add enough of your grammar so that other are able to reproduce the error you mention.\n" ]
[ 0 ]
[]
[]
[ "antlr4", "python" ]
stackoverflow_0074502698_antlr4_python.txt
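Putting the suggested change in context, the whole rule would read as below (assuming the IF/ELSE tokens and the other rules exist as in the question; untested against the rest of the grammar):
if_stmt
    : IF conditional_block stmt_block
      (ELSE IF conditional_block stmt_block)*
      (ELSE stmt_block)?
    ;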
Q: Looking for Simple Python Help: Counting the Number of Vehicles in a CSV by their Fuel Type MY DATA IN EXCEL MY CODE Hello Everyone! I am brand new to Python and have some simple data I want to separate and graph in a bar chart. I have a data set on the cars currently being driven in California. They are separated by Year, Fuel type, Zip Code, Make, and 'Light/Heavy'. I want to tell Python to count the number of Gasoline cars, the number of diesel cars, the number of battery electric cars, etc. How could I separate this data, and then graph it on a bar chart? I am assuming it is quite easy, but I have been learning Python myself for maybe a week. I attached the data set, as well as some code that I have so far. It is returning 'TRUE' when I tried to make subseries of the data as 'gas', 'diesel', etc. I am assuming Python is just telling me "yes, it says gasoline there". I now just hope to gather all the "Gasoline"s in the 'Fuel' column, and add them all up by the number in the 'Vehicle' column. Any help would be very much appreciated!!! import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('~/Desktop/PYTHON/californiavehicles.csv') print(df.head()) print(df.describe()) X = df['Fuel'] y = df['Vehicles'] gas = df[(df['Fuel']=='Gasoline','Flex-Fuel')] diesel = df[(df['Fuel']=='Diesel and Diesel Hybrid')] hybrid = df[(df['Fuel']=='Hybrid Gasoline', 'Plug-in Hybrid')] electric = df[(df['Fuel']=='Battery Electric')] I tried to create a subseries of the data. I haven't tried to include the numbers in 'vehicles' yet because I don't know how.
Looking for Simple Python Help: Counting the Number of Vehicles in a CSV by their Fuel Type
MY DATA IN EXCEL MY CODE Hello Everyone! I am brand new to Python and have some simple data I want to separate and graph in a bar chart. I have a data set on the cars currently being driven in California. They are separated by Year, Fuel type, Zip Code, Make, and 'Light/Heavy'. I want to tell Python to count the number of Gasoline cars, the number of diesel cars, the number of battery electric cars, etc. How could I separate this data, and then graph it on a bar chart? I am assuming it is quite easy, but I have been learning Python myself for maybe a week. I attached the data set, as well as some code that I have so far. It is returning 'TRUE' when I tried to make subseries of the data as 'gas', 'diesel', etc. I am assuming Python is just telling me "yes, it says gasoline there". I now just hope to gather all the "Gasoline"s in the 'Fuel' column, and add them all up by the number in the 'Vehicle' column. Any help would be very much appreciated!!! import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('~/Desktop/PYTHON/californiavehicles.csv') print(df.head()) print(df.describe()) X = df['Fuel'] y = df['Vehicles'] gas = df[(df['Fuel']=='Gasoline','Flex-Fuel')] diesel = df[(df['Fuel']=='Diesel and Diesel Hybrid')] hybrid = df[(df['Fuel']=='Hybrid Gasoline', 'Plug-in Hybrid')] electric = df[(df['Fuel']=='Battery Electric')] I tried to create a subseries of the data. I haven't tried to include the numbers in 'vehicles' yet because I don't know how.
[ "You mentioned it's a CSV specifically. Read in the file line by line, split the data by comma (which produces a list for the current row), then if currentrow[3] == fuel type increment your count.\nExample:\ngas_cars=0\nwith open(\"data.csv\", \"r\") as file:\n for line in file:\n row = line.split(\",\")\n if row[3] == \"Gasoline\":\n gas_cars += int(row[6]) # num cars for that car make\n # ...\n # ...\n # ...\n\n", "This will let you use the built-in conveniences of pandas. Short answer is, use this line:\ndf.groupby(\"Fuel\").sum().plot.bar()\n\nLong answer with home made data:\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nN = 1000\n\nplaceholder = [pd.NA]*N\n\ntypes = np.random.choice([\"Gasoline\", \"Diesel\", \"Hybrid\", \"Battery\"], size=N)\nnr_vehicles = np.random.randint(low=1, high=100, size=N)\n\ndf = pd.DataFrame(\n {\n \"Date\": placeholder,\n \"Zip\": placeholder,\n \"Model year\": placeholder,\n \"Fuel\": types,\n \"Make\": placeholder,\n \"Duty\": placeholder,\n \"Vehicles\": nr_vehicles\n }\n)\n\ndf.groupby(\"Fuel\").sum().plot.bar()\n\n\n" ]
[ 1, 1 ]
[]
[]
[ "bar_chart", "dataframe", "matplotlib", "python" ]
stackoverflow_0074503284_bar_chart_dataframe_matplotlib_python.txt
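A small refinement to the groupby answer: selecting the Vehicles column before summing keeps any unrelated numeric columns out of the totals. Sketch with toy rows (the real California CSV is not included):
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "Fuel": ["Gasoline", "Diesel", "Gasoline", "Battery Electric"],
    "Vehicles": [120, 40, 75, 30],
})
df.groupby("Fuel")["Vehicles"].sum().plot.bar()  # one bar per fuel type
plt.show()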
Q: How to list all folders inside aws s3 bucket via python boto3 I am trying to get only all the folders/directories in an AWS S3 Bucket, not its files. I have multiple date folders in S3 Bucket like [dt=20190926,dt=20191017,dt=20191128,dt=20200127,dt=20200128,dt=20200629,dt=20201108,dt=20210918,dt=20201121] But, it is reading some random folder dt=20210918. But what I am getting is: it is going inside the random folder and fetching all the files. Lambda.py import json import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('test') def lambda_handler(event, context): for object_summary in my_bucket.objects.filter(Prefix="sample/"): print(object_summary.key) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } How should I achieve my use case? A: EDIT: I agree with @lipeiran, there is no such thing as a directory in S3 structure, so there is no way for boto3 to return only "folders". You will have to iterate through all objects and extract their path on your own. According to boto3 documentation, filter() can take some parameters, including MaxKeys: Sets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more. Perhaps the only prefix you get, dt=20210918, matches more than 1000 objects, hence you don't see more prefixes in the response. Try increasing this parameter gradually, e.g. to 2000: for object_summary in my_bucket.objects.filter(Prefix="sample/", MaxKeys=2000): print(object_summary.key) and check if it helps.
How to list all folders inside aws s3 bucket via python boto3
I am trying to get only all the folders/directories in an AWS S3 Bucket, not its files. I have multiple date folders in S3 Bucket like [dt=20190926,dt=20191017,dt=20191128,dt=20200127,dt=20200128,dt=20200629,dt=20201108,dt=20210918,dt=20201121] But, it is reading some random folder dt=20210918. But what I am getting is: it is going inside the random folder and fetching all the files. Lambda.py import json import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('test') def lambda_handler(event, context): for object_summary in my_bucket.objects.filter(Prefix="sample/"): print(object_summary.key) return { 'statusCode': 200, 'body': json.dumps('Hello from Lambda!') } How should I achieve my use case?
[ "EDIT: I agree with @lipeiran, there is no such thing as a directory in S3 structure, so there is no way for boto3 to return only \"folders\". You will have to iterate through all objects and extract their path on your own.\nAccording to boto3 documentation, filter() can take some parameters, including\nMaxKeys:\n\nSets the maximum number of keys returned in the response. By default, the action returns up to 1,000 key names. The response might contain fewer keys but will never contain more.\n\nPerhaps the only prefix you get, dt=20210918, matches more than 1000 objects, hence you don't see more prefixes in the response. Try increasing this parameter gradually, e.g. to 2000:\nfor object_summary in my_bucket.objects.filter(Prefix=\"CAI_DW-ExperianCC/\", MaxKeys=2000):\n print(object_summary.key)\n\nand check if it helps.\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "aws_lambda", "boto3", "python" ]
stackoverflow_0074498408_amazon_s3_amazon_web_services_aws_lambda_boto3_python.txt
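Since S3 has no real directories (as the answer's edit notes), a common way to get only the date "folders" is the Delimiter parameter of list_objects_v2: S3 then returns them as CommonPrefixes instead of objects. A sketch using the question's bucket and prefix:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='test', Prefix='sample/', Delimiter='/'):
    for cp in page.get('CommonPrefixes', []):
        print(cp['Prefix'])  # e.g. sample/dt=20190926/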
Q: How to reorder data from a character string with re.sub only in cases where it detects a certain regex pattern, and not in other cases import re #example input_text = 'Alrededor de las 00:16 am o las 23:30 pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)' identify_time_regex = r"(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>(?:am|pm))" restructuring_structure_00 = r"(\g<hh>----\g<mm>----\g<am_or_pm>)" #replacement input_text = re.sub(identify_time_regex, restructuring_structure_00, input_text) print(repr(input_text)) # --> output I have to change things in this regex identify_time_regex so that it extracts the hour numbers, but not when they are inside a structure like the following (2022_-_02_-_18 00:16 am), which can be generalized as follows: r"(\d*_-_\d{2}_-_\d{2}) " + identify_time_regex In the output that I need, you can see that only the hours with no date before them were modified: input_text = 'Alrededor de las 00----16----am o las 23----30----pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)' A: You can use import re input_text = 'Alrededor de las 00:16 am o las 23:30 pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)' identify_time_regex = r"(\b\d{4}_-_\d{2}_-_\d{2}\s+)?(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>[ap]m)" restructuring_structure_00 = lambda x: x.group() if x.group(1) else fr"{x.group('hh')}----{x.group('mm')}----{x.group('am_or_pm')}" input_text = re.sub(identify_time_regex, restructuring_structure_00, input_text) print(input_text) # Alrededor de las 00----16----am o las 23----30----pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22) See the Python demo. The logic is the following: if the (\b\d{4}_-_\d{2}_-_\d{2}\s+)? optional capturing group matches, the replacement is the whole match (i.e. no replacement occurs), and if it does not, your replacement takes place. The restructuring_structure_00 must be a lambda expression since the match structure needs to be evaluated before replacement. The \b\d{4}_-_\d{2}_-_\d{2}\s+ pattern matches a word boundary, four digits, _-_, two digits, _-_, two digits, and one or more whitespaces.
How to reorder data from a character string with re.sub only in cases where it detects a certain regex pattern, and not in other cases
import re #example input_text = 'Alrededor de las 00:16 am o las 23:30 pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)' identify_time_regex = r"(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>(?:am|pm))" restructuring_structure_00 = r"(\g<hh>----\g<mm>----\g<am_or_pm>)" #replacement input_text = re.sub(identify_time_regex, restructuring_structure_00, input_text) print(repr(input_text)) # --> output I have to change things in this regex identify_time_regex so that it extracts the hour numbers, but not when they are inside a structure like the following (2022_-_02_-_18 00:16 am), which can be generalized as follows: r"(\d*_-_\d{2}_-_\d{2}) " + identify_time_regex In the output that I need, you can see that only the hours with no date before them were modified: input_text = 'Alrededor de las 00----16----am o las 23----30----pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)'
[ "You can use\nimport re\n\ninput_text = 'Alrededor de las 00:16 am o las 23:30 pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)'\nidentify_time_regex = r\"(\\b\\d{4}_-_\\d{2}_-_\\d{2}\\s+)?(?P<hh>\\d{2}):(?P<mm>\\d{2})[\\s|]*(?P<am_or_pm>[ap]m)\"\nrestructuring_structure_00 = lambda x: x.group() if x.group(1) else fr\"{x.group('hh')}----{x.group('mm')}----{x.group('am_or_pm')}\"\ninput_text = re.sub(identify_time_regex, restructuring_structure_00, input_text)\nprint(input_text)\n# Alrededor de las 00----16----am o las 23----30----pm , quizas cerca del 2022_-_02_-_18 llega el avion, pero no a las (2022_-_02_-_18 00:16 am), de esos hay dos (22)\n\nSee the Python demo.\nThe logic is the following: if the (\\b\\d{4}_-_\\d{2}_-_\\d{2}\\s+)? optional capturing group matches, the replacement is the whole match (i.e. no replacement occurs), and if it does not, your replacement takes place.\nThe restructuring_structure_00 must be a lambda expression since the match structure needs to be evaluated before replacement.\nThe \\b\\d{4}_-_\\d{2}_-_\\d{2}\\s+ pattern matches a word boundary, four digits, _-_, two digits, _-_, two digits, and one or more whitespaces.\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "regex", "regex_group", "replace" ]
stackoverflow_0074503120_python_python_3.x_regex_regex_group_replace.txt
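A quick sanity check of the accepted answer's lambda replacement on a trimmed sample; the date-led time is left untouched while the bare time is rewritten:

import re

pattern = r"(\b\d{4}_-_\d{2}_-_\d{2}\s+)?(?P<hh>\d{2}):(?P<mm>\d{2})[\s|]*(?P<am_or_pm>[ap]m)"
repl = lambda m: m.group() if m.group(1) else f"{m.group('hh')}----{m.group('mm')}----{m.group('am_or_pm')}"
s = "a las 23:30 pm , pero no a las (2022_-_02_-_18 00:16 am)"
print(re.sub(pattern, repl, s))
# a las 23----30----pm , pero no a las (2022_-_02_-_18 00:16 am)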
Q: deepface ResourceExhaustedError: failed to allocate memory [Op:AddV2] I am new to deeplearning. I am trying to use the deepface library in my local machine. I used pip install deepface to install the library, tried on python 3.7.13, 3.8.13 and 3.9.13 which were all created using conda virtual environment. However when running the code snippet below, I am getting the same error when running on my local machine. Do I need a GPU to run the library? If yes, how do I set it up? Because from the online guides/ articles, none of them mentioned the need of installing / setup a GPU. I have a GeForce MX450 on my local pc. code import cv2 from deepface import DeepFace import numpy as np def analyse_face(): imagepath = "happy_face_woman.png" image = cv2.imread(imagepath) face_analysis = DeepFace.analyze(image) print(face_analysis) print(analyse_face()) Error: ResourceExhaustedError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in <module> 12 print(face_analysis) 13 ---> 14 analyse_face() ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in analyse_face() 9 imagepath = "happy_face_woman.png" 10 image = cv2.imread(imagepath) ---> 11 face_analysis = DeepFace.analyze(image) 12 print(face_analysis) 13 c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in analyze(img_path, actions, models, enforce_detection, detector_backend, prog_bar) 352 353 if 'age' in actions and 'age' not in built_models: --> 354 models['age'] = build_model('Age') 355 356 if 'gender' in actions and 'gender' not in built_models: c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in build_model(model_name) 61 model = models.get(model_name) 62 if model: ---> 63 model = model() ... -> 1922 seed=self.make_legacy_seed()) 1923 1924 def truncated_normal(self, shape, mean=0., stddev=1., dtype=None): ResourceExhaustedError: failed to allocate memory [Op:AddV2] Different Error output ResourceExhaustedError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in <module> 12 print(face_analysis) 13 ---> 14 analyse_face() ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in analyse_face() 9 imagepath = "happy_face_woman.png" 10 image = cv2.imread(imagepath) ---> 11 face_analysis = DeepFace.analyze(image) 12 print(face_analysis) 13 c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in analyze(img_path, actions, models, enforce_detection, detector_backend, prog_bar) 352 353 if 'age' in actions and 'age' not in built_models: --> 354 models['age'] = build_model('Age') 355 356 if 'gender' in actions and 'gender' not in built_models: c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in build_model(model_name) 61 model = models.get(model_name) 62 if model: ---> 63 model = model() ... -> 1922 seed=self.make_legacy_seed()) 1923 1924 def truncated_normal(self, shape, mean=0., stddev=1., dtype=None): ResourceExhaustedError: OOM when allocating tensor with shape[7,7,512,4096] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomUniform] Additional Info I've ran the command to check my GPU usage and the details is as follows: !nvidia-smi A: Why do not you disable GPU? import os os.environ["CUDA_VISIBLE_DEVICES"]=""
deepface ResourceExhaustedError: failed to allocate memory [Op:AddV2]
I am new to deeplearning. I am trying to use the deepface library in my local machine. I used pip install deepface to install the library, tried on python 3.7.13, 3.8.13 and 3.9.13 which were all created using conda virtual environment. However when running the code snippet below, I am getting the same error when running on my local machine. Do I need a GPU to run the library? If yes, how do I set it up? Because from the online guides/ articles, none of them mentioned the need of installing / setup a GPU. I have a GeForce MX450 on my local pc. code import cv2 from deepface import DeepFace import numpy as np def analyse_face(): imagepath = "happy_face_woman.png" image = cv2.imread(imagepath) face_analysis = DeepFace.analyze(image) print(face_analysis) print(analyse_face()) Error: ResourceExhaustedError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in <module> 12 print(face_analysis) 13 ---> 14 analyse_face() ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in analyse_face() 9 imagepath = "happy_face_woman.png" 10 image = cv2.imread(imagepath) ---> 11 face_analysis = DeepFace.analyze(image) 12 print(face_analysis) 13 c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in analyze(img_path, actions, models, enforce_detection, detector_backend, prog_bar) 352 353 if 'age' in actions and 'age' not in built_models: --> 354 models['age'] = build_model('Age') 355 356 if 'gender' in actions and 'gender' not in built_models: c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in build_model(model_name) 61 model = models.get(model_name) 62 if model: ---> 63 model = model() ... -> 1922 seed=self.make_legacy_seed()) 1923 1924 def truncated_normal(self, shape, mean=0., stddev=1., dtype=None): ResourceExhaustedError: failed to allocate memory [Op:AddV2] Different Error output ResourceExhaustedError Traceback (most recent call last) ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in <module> 12 print(face_analysis) 13 ---> 14 analyse_face() ~\AppData\Local\Temp\ipykernel_14196\3829791526.py in analyse_face() 9 imagepath = "happy_face_woman.png" 10 image = cv2.imread(imagepath) ---> 11 face_analysis = DeepFace.analyze(image) 12 print(face_analysis) 13 c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in analyze(img_path, actions, models, enforce_detection, detector_backend, prog_bar) 352 353 if 'age' in actions and 'age' not in built_models: --> 354 models['age'] = build_model('Age') 355 356 if 'gender' in actions and 'gender' not in built_models: c:\Users\user_name\anaconda3\envs\deepFacepy37\lib\site-packages\deepface\DeepFace.py in build_model(model_name) 61 model = models.get(model_name) 62 if model: ---> 63 model = model() ... -> 1922 seed=self.make_legacy_seed()) 1923 1924 def truncated_normal(self, shape, mean=0., stddev=1., dtype=None): ResourceExhaustedError: OOM when allocating tensor with shape[7,7,512,4096] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:RandomUniform] Additional Info I've ran the command to check my GPU usage and the details is as follows: !nvidia-smi
[ "Why do not you disable GPU?\nimport os\nos.environ[\"CUDA_VISIBLE_DEVICES\"]=\"\"\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "deepface", "oom", "python", "tensorflow" ]
stackoverflow_0074379226_deep_learning_deepface_oom_python_tensorflow.txt
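A hedged alternative to hiding the GPU completely: let TensorFlow allocate GPU memory on demand instead of reserving it all at start-up. This must run before any DeepFace model is built, and whether the MX450's small VRAM fits every model is not guaranteed.

import tensorflow as tf

# Enable on-demand memory growth for every visible GPU.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)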
Q: How do I pull a single object from a nested array? I've got Mongo running in a Docker container. In the Mongo shell I can pull the entire JSON file with db.FilmList.find(), but I can't isolate a single object with the code below: db.FilmList.find({"Films" : {Title : "Clue"}}) or any variation I can think of. Thanks. A: This fixed it: db.FilmList.find({"Films.Title" : "Clue"}, {"Films.$": 1, _id : 0})
How do I pull a single object from a nested array?
I've got Mongo running in a Docker container. In the Mongo shell I can pull the entire JSON file with db.FilmList.find(), but I can't isolate a single object with the code below: db.FilmList.find({"Films" : {Title : "Clue"}}) or any variation I can think of. Thanks.
[ "This fixed it:\ndb.FilmList.find({\"Films.Title\" : \"Clue\"}, {\"Films.$\": 1, _id : 0})\n\n" ]
[ 0 ]
[]
[]
[ "mongodb", "python" ]
stackoverflow_0074503507_mongodb_python.txt
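Since the tags include python, a sketch of the same positional-$ projection through PyMongo; the connection string and database name are placeholders:

from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
# Projection {'Films.$': 1} returns only the first array element that matched.
doc = client['mydb']['FilmList'].find_one(
    {'Films.Title': 'Clue'},
    {'Films.$': 1, '_id': 0},
)
print(doc)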
Q: list comprehension always returns empty list making a function that recieves numbers separated by spaces and adds the first element to the rest, the ouput should be a list of numbers if the first element is a number i'm trying to remove all non numeric elements of list b examples- input: 1 2 3 4 output: [3, 4, 5] (2+1, 3+1, 4+1) input: 1 2 b 4 output: [3, 5] (2+1,b is purged, 4+1) input: a 1 2 3 output: Sucessor invalido linha = input() a = linha.split() b = [x for x in (a[1:]) if type(x)==int] b = [eval(x) for x in b] c = eval(a[0]) d = [] d.append(c) f = d*len(b) def soma(): if type(c)!= int: return print("Sucessor invalido") else: return list(map(lambda x, y: x + y, b, f)) g = soma() g > this condition always returns an empty list if type(x)==int sorry if im not clear, i started learning recently A: Two things: The results of input are always strings. When you split the string, you end up with more strings. So even if that string is '7', it is the string 7, not the integer 7. If you want to check if an object is of a type, use isinstance(x,int) rather than type(x)==int. To accomplish what it looks like you are doing, I dunno if you can get it with list comprehension, since you probably want a try:...except:... block, like this. linha = input() a = linha.split() b = [] #empty list for x in a[1:]: # confusing thing you are doing here, also... you want to skip the first element? try: x_int = int(x) b.append(x_int) except ValueError: pass ... A: You need to convert the numbers separated by lines to integers, after checking that they are valid numberic values like this: b = [int(x) for x in (a[1:]) if x.isnumeric()] this is because your input will be a string by default, and split() will be just a list of strings A: Your input is string, you need to cast it to int and then do calculation to append it to a new list. The function should check for the index 0 first using str.islpha() method. If it's an alphabet, return invalid input. Use try except when iterating the input list. If some element can't be cast to int it will continue to the next index. def soma(linha): if linha[0].isalpha(): return f"Sucessor invalido" result = [] for i in range(len(linha) - 1): try: result.append(int(linha[0]) + int(linha[i+1])) except ValueError: continue return result linha = input().split() print(soma(linha)) Output: 1 2 3 4 [3, 4, 5] 1 2 b 4 [3, 5] a 1 2 3 Sucessor invalido
list comprehension always returns empty list
I am making a function that receives numbers separated by spaces and adds the first element to the rest; the output should be a list of numbers if the first element is a number. I'm trying to remove all non-numeric elements of list b. Examples - input: 1 2 3 4 output: [3, 4, 5] (2+1, 3+1, 4+1) input: 1 2 b 4 output: [3, 5] (2+1, b is purged, 4+1) input: a 1 2 3 output: Sucessor invalido linha = input() a = linha.split() b = [x for x in (a[1:]) if type(x)==int] b = [eval(x) for x in b] c = eval(a[0]) d = [] d.append(c) f = d*len(b) def soma(): if type(c)!= int: return print("Sucessor invalido") else: return list(map(lambda x, y: x + y, b, f)) g = soma() g > the condition if type(x)==int always returns an empty list. Sorry if I'm not clear, I started learning recently.
[ "Two things:\n\nThe results of input are always strings. When you split the string, you end up with more strings. So even if that string is '7', it is the string 7, not the integer 7.\n\nIf you want to check if an object is of a type, use isinstance(x,int) rather than type(x)==int.\n\n\nTo accomplish what it looks like you are doing, I dunno if you can get it with list comprehension, since you probably want a try:...except:... block, like this.\nlinha = input()\na = linha.split()\nb = [] #empty list\nfor x in a[1:]: # confusing thing you are doing here, also... you want to skip the first element?\n try:\n x_int = int(x)\n b.append(x_int)\n except ValueError:\n pass\n...\n\n", "You need to convert the numbers separated by lines to integers, after checking that they are valid numberic values like this:\nb = [int(x) for x in (a[1:]) if x.isnumeric()] \n\nthis is because your input will be a string by default, and split() will be just a list of strings\n", "Your input is string, you need to cast it to int and then do calculation to append it to a new list.\nThe function should check for the index 0 first using str.islpha() method. If it's an alphabet, return invalid input.\nUse try except when iterating the input list. If some element can't be cast to int it will continue to the next index.\ndef soma(linha):\n if linha[0].isalpha():\n return f\"Sucessor invalido\"\n result = []\n for i in range(len(linha) - 1):\n try:\n result.append(int(linha[0]) + int(linha[i+1]))\n except ValueError:\n continue\n return result\n\n\nlinha = input().split()\n\nprint(soma(linha))\n\nOutput:\n1 2 3 4\n[3, 4, 5]\n\n1 2 b 4\n[3, 5]\n\na 1 2 3\nSucessor invalido\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074503151_python.txt
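One way to keep the comprehension style from the question while still handling bad tokens is to move the try/except into a helper; unlike str.isnumeric(), this sketch also accepts negatives like "-3".

def to_int(token):
    try:
        return int(token)
    except ValueError:
        return None  # non-numeric tokens are marked, then filtered out

tokens = input().split()
head = to_int(tokens[0])
rest = [to_int(t) for t in tokens[1:]]
print('Sucessor invalido' if head is None else [head + n for n in rest if n is not None])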
Q: Got error TypeError: 'numpy.ndarray' object is not callable I am trying to use pandas with this code dat = pd.DataFrame() return_data = dat.values().T And I got this error TypeError Traceback (most recent call last) Input In [27], in <cell line: 1>() ----> 1 return_data = dat.values().T TypeError: 'numpy.ndarray' object is not callable I tried to solve the error but have no idea about it. A: Replace dat.values().T by dat.values.T. This error happens when you call a numpy array as a function.
Got error TypeError: 'numpy.ndarray' object is not callable
I am trying to use pandas with this code dat = pd.DataFrame() return_data = dat.values().T And I got this error TypeError Traceback (most recent call last) Input In [27], in <cell line: 1>() ----> 1 return_data = dat.values().T TypeError: 'numpy.ndarray' object is not callable I tried to solve the error but have no idea about it.
[ "Replace dat.values().T by dat.values.T. This error happens when you call a numpy array as function.\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074503624_pandas_python.txt
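For context, values is an attribute rather than a method, and the pandas docs recommend to_numpy() as the more explicit spelling:

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
print(df.values.T)      # attribute access, then transpose
print(df.to_numpy().T)  # equivalent and more explicit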
Q: Removing every nth element in an array How do I remove every nth element in an array? import numpy as np x = np.array([0,10,27,35,44,32,56,35,87,22,47,17]) n = 3 # remove every 3rd element ...something like the opposite of x[0::n]? I've tried this, but of course it doesn't work: for i in np.arange(0,len(x),n): x = np.delete(x,i) A: You're close... Pass the entire arange as subslice to delete instead of attempting to delete each element in turn, eg: import numpy as np x = np.array([0,10,27,35,44,32,56,35,87,22,47,17]) x = np.delete(x, np.arange(0, x.size, 3)) # [10 27 44 32 35 87 47 17] A: I just add another way with reshaping if the length of your array is a multiple of n: import numpy as np x = np.array([0,10,27,35,44,32,56,35,87,22,47,17]) x = x.reshape(-1,3)[:,1:].flatten() # [10 27 44 32 35 87 47 17] On my computer it runs almost twice faster than the solution with np.delete (between 1.8x and 1.9x to be honnest). You can also easily perfom fancy operations, like m deletions each n values etc. A: Here's a super fast version for 2D arrays: Remove every m-th row and n-th column from a 2D array (assuming the shape of the array is a multiple of (n, m)): array2d = np.arange(60).reshape(6, 10) m, n = (3, 5) remove = lambda x, q: x.reshape(x.shape[0], -1, q)[..., 1:].reshape(x.shape[0], -1).T remove(remove(array2d, n), m) returns: array([[11, 12, 13, 14, 16, 17, 18, 19], [21, 22, 23, 24, 26, 27, 28, 29], [41, 42, 43, 44, 46, 47, 48, 49], [51, 52, 53, 54, 56, 57, 58, 59]]) To generalize for any shape use padding or reduce the input array depending on your situation. Speed comparison: from time import time 'remove' start = time() for _ in range(100000): res = remove(remove(array2d, n), m) time() - start 'delete' start = time() for _ in range(100000): tmp = np.delete(array2d, np.arange(0, array2d.shape[0], m), axis=0) res = np.delete(tmp, np.arange(0, array2d.shape[1], n), axis=1) time() - start """ 'remove' 0.3835930824279785 'delete' 3.173515558242798 """ So, compared to numpy.delete the above method is significantly faster.
Removing every nth element in an array
How do I remove every nth element in an array? import numpy as np x = np.array([0,10,27,35,44,32,56,35,87,22,47,17]) n = 3 # remove every 3rd element ...something like the opposite of x[0::n]? I've tried this, but of course it doesn't work: for i in np.arange(0,len(x),n): x = np.delete(x,i)
[ "You're close... Pass the entire arange as subslice to delete instead of attempting to delete each element in turn, eg:\nimport numpy as np\n\nx = np.array([0,10,27,35,44,32,56,35,87,22,47,17])\nx = np.delete(x, np.arange(0, x.size, 3))\n# [10 27 44 32 35 87 47 17]\n\n", "I just add another way with reshaping if the length of your array is a multiple of n:\nimport numpy as np\n\nx = np.array([0,10,27,35,44,32,56,35,87,22,47,17])\nx = x.reshape(-1,3)[:,1:].flatten()\n# [10 27 44 32 35 87 47 17]\n\nOn my computer it runs almost twice faster than the solution with np.delete (between 1.8x and 1.9x to be honnest).\nYou can also easily perfom fancy operations, like m deletions each n values etc.\n", "Here's a super fast version for 2D arrays: Remove every m-th row and n-th column from a 2D array (assuming the shape of the array is a multiple of (n, m)):\narray2d = np.arange(60).reshape(6, 10)\nm, n = (3, 5)\nremove = lambda x, q: x.reshape(x.shape[0], -1, q)[..., 1:].reshape(x.shape[0], -1).T\n\nremove(remove(array2d, n), m)\n\nreturns:\narray([[11, 12, 13, 14, 16, 17, 18, 19],\n [21, 22, 23, 24, 26, 27, 28, 29],\n [41, 42, 43, 44, 46, 47, 48, 49],\n [51, 52, 53, 54, 56, 57, 58, 59]])\n\nTo generalize for any shape use padding or reduce the input array depending on your situation.\n\nSpeed comparison:\nfrom time import time\n\n'remove'\nstart = time()\nfor _ in range(100000):\n res = remove(remove(array2d, n), m)\ntime() - start\n\n'delete'\nstart = time()\nfor _ in range(100000):\n tmp = np.delete(array2d, np.arange(0, array2d.shape[0], m), axis=0)\n res = np.delete(tmp, np.arange(0, array2d.shape[1], n), axis=1)\ntime() - start\n\n\"\"\"\n'remove'\n0.3835930824279785\n'delete'\n3.173515558242798\n\"\"\"\n\nSo, compared to numpy.delete the above method is significantly faster.\n" ]
[ 20, 5, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0021922314_arrays_numpy_python.txt
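Another common idiom for the same task is a boolean mask, which leaves the original array untouched and reads as "keep everything except every 3rd element":

import numpy as np

x = np.array([0, 10, 27, 35, 44, 32, 56, 35, 87, 22, 47, 17])
mask = np.ones(x.size, dtype=bool)
mask[::3] = False          # mark indices 0, 3, 6, ... for removal
print(x[mask])             # [10 27 44 32 35 87 47 17]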
Q: Python: pandas dataframe: Remove "  " BOM character I used Scrapy on a Linux machine to crawl some websites and saved in a CSV. When I retrieve the dataset and view on a Windows machine, I saw these characters . Here is what I do to re-encode them to UTF-8-SIG: import pandas as pd my_data = pd.read_csv("./dataset/my_data.csv") output = "./dataset/my_data_converted.csv" my_data.to_csv(output, encoding='utf-8-sig', index=False) So now they become ? if viewed on VSCode. But if I view on Notepad++, I don't see these. How do I actually remove them all? A: Given your comment, I suppose that you ended up having two BOMs. Let's look at a small example. I'm using built-in open instead of pd.read_csv/pd.to_csv, but the meaning of the encoding parameter is the same. Let's create a file saved as UTF-8 with a BOM: >>> text = 'foo' >>> with open('/tmp/foo', 'w', encoding='utf-8-sig') as f: ... f.write(text) Now let's read it back in. But we use a different encoding: "utf-8" instead of "utf-8-sig". In your case, you didn't specify the encoding parameter at all, but the default value is most probably "utf-8" or "cp-1252", which both keep the BOM. So the following is more or less equivalent to your code snippet: >>> with open('/tmp/foo', 'r', encoding='utf8') as f: ... text = f.read() ... >>> text '\ufefffoo' >>> with open('/tmp/foo_converted', 'w', encoding='utf-8-sig') as f: ... f.write(text) The BOM is read as part of the the text; it's the first character (here represented as "\ufeff"). Let's see what's actually in the files, using a suitable command-line tool: $ hexdump -C /tmp/foo 00000000 ef bb bf 66 6f 6f |...foo| 00000006 $ hexdump -C /tmp/foo_converted 00000000 ef bb bf ef bb bf 66 6f 6f |......foo| 00000009 In UTF-8, the BOM is encoded as the three bytes EF BB BF. Clearly, the second file has two of them. So even a BOM-aware program will find some non-sense character in the beginning of foo_converted, as the BOM is only stripped once. A: For me the BOM was prepended to the first column name. Fortunately Pandas was able to read it into a dataframe, with the BOM still prepended to the first column name. I iterate over ALL columns to remove the BOM from the first column name (since I deal with many different csv files sources, I can't be sure of the first column name): for column in df.columns: #Need to remove Byte Order Marker at beginning of first column name new_column_name = re.sub(r"[^0-9a-zA-Z.,-/_ ]", "", column) df.rename(columns={column: new_column_name}, inplace=True) Hope this helps someone..
Python: pandas dataframe: Remove "  " BOM character
I used Scrapy on a Linux machine to crawl some websites and saved the results in a CSV. When I retrieve the dataset and view it on a Windows machine, I see these characters: . Here is what I do to re-encode them to UTF-8-SIG: import pandas as pd my_data = pd.read_csv("./dataset/my_data.csv") output = "./dataset/my_data_converted.csv" my_data.to_csv(output, encoding='utf-8-sig', index=False) So now they become ? if viewed in VSCode. But if I view the file in Notepad++, I don't see them. How do I actually remove them all?
[ "Given your comment, I suppose that you ended up having two BOMs.\nLet's look at a small example.\nI'm using built-in open instead of pd.read_csv/pd.to_csv, but the meaning of the encoding parameter is the same.\nLet's create a file saved as UTF-8 with a BOM:\n>>> text = 'foo'\n>>> with open('/tmp/foo', 'w', encoding='utf-8-sig') as f:\n... f.write(text)\n\nNow let's read it back in.\nBut we use a different encoding: \"utf-8\" instead of \"utf-8-sig\".\nIn your case, you didn't specify the encoding parameter at all, but the default value is most probably \"utf-8\" or \"cp-1252\", which both keep the BOM.\nSo the following is more or less equivalent to your code snippet:\n>>> with open('/tmp/foo', 'r', encoding='utf8') as f:\n... text = f.read()\n... \n>>> text\n'\\ufefffoo'\n>>> with open('/tmp/foo_converted', 'w', encoding='utf-8-sig') as f:\n... f.write(text)\n\nThe BOM is read as part of the the text; it's the first character (here represented as \"\\ufeff\").\nLet's see what's actually in the files, using a suitable command-line tool:\n$ hexdump -C /tmp/foo\n00000000 ef bb bf 66 6f 6f |...foo|\n00000006\n$ hexdump -C /tmp/foo_converted \n00000000 ef bb bf ef bb bf 66 6f 6f |......foo|\n00000009\n\nIn UTF-8, the BOM is encoded as the three bytes EF BB BF.\nClearly, the second file has two of them.\nSo even a BOM-aware program will find some non-sense character in the beginning of foo_converted, as the BOM is only stripped once.\n", "For me the BOM was prepended to the first column name. Fortunately Pandas was able to read it into a dataframe, with the BOM still prepended to the first column name. I iterate over ALL columns to remove the BOM from the first column name (since I deal with many different csv files sources, I can't be sure of the first column name):\n for column in df.columns: #Need to remove Byte Order Marker at beginning of first column name\n new_column_name = re.sub(r\"[^0-9a-zA-Z.,-/_ ]\", \"\", column)\n df.rename(columns={column: new_column_name}, inplace=True)\n\nHope this helps someone..\n" ]
[ 6, 0 ]
[]
[]
[ "pandas", "python", "python_3.x", "utf_8" ]
stackoverflow_0060064238_pandas_python_python_3.x_utf_8.txt
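The complementary fix to the first answer, sketched: declare utf-8-sig when reading, so the existing BOM is stripped instead of being carried into the data, then write plain UTF-8:

import pandas as pd

# utf-8-sig on read consumes a leading BOM; writing utf-8 adds none back.
my_data = pd.read_csv('./dataset/my_data.csv', encoding='utf-8-sig')
my_data.to_csv('./dataset/my_data_converted.csv', encoding='utf-8', index=False)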
Q: If I install anaconda, do I still have to use vscode? I'm new in programming, actually I use it for Machine Learning. I have installed python and anaconda (I don't know if that is right, or I have to install only anaconda?). And I can see in start menu: (Anaconda powershell, Jupyter, Spyder, Anaconda navigator, Anaconda prompt). So my question is: Do I still have to use vscode as IDE, or one of the listed programs that come with anaconda? If the answer is the second choice, I will ask, which one of them? Thanks. I'm using python just because I have a project in ML, So I must to set the necessary things for ML, like libraries, dataset, and algorithms. Then I have to learn how to use them. Any help will be very apprecheated. A: Anaconda is a Python distribution, that not only comes with Python itself, but a lot of additional Python packages from the "scientific stack", like numpy, pandas, matplotlib, scipy, scikit-learn: exactly what you need for ML. You don't have to install anything else from python.org. Anaconda also comes with the Spyder IDE. This is the perfect choice for a Python beginner. You don't need VSCode. VSCode is way more flexible than Spyder, but you have to get used to it. Conda is the package manager that comes with Anaconda. Do yourself a favour an learn some conda basics and how to use virtual environments here: https://conda.io/projects/conda/en/latest/user-guide/getting-started.html The important difference between Anaconda and pure Python is that you have to activate a conda environment - even the "base" environemnt - before you can use it. This is not obvious to a beginner. A: If I were you I would use vcscode. Anaconda is only a python with extra features, but to code you would appreciate the VSCOde. It comes with many features and you can install extensions to burst your experience. Go for it.
If I install anaconda, do I still have to use vscode?
I'm new to programming; I actually use it for machine learning. I have installed Python and Anaconda (I don't know if that is right, or whether I have to install only Anaconda?). And I can see in the Start menu: (Anaconda Powershell, Jupyter, Spyder, Anaconda Navigator, Anaconda Prompt). So my question is: do I still have to use VSCode as my IDE, or one of the listed programs that come with Anaconda? If the answer is the second choice, which one of them? Thanks. I'm using Python just because I have a project in ML, so I must set up the necessary things for ML, like libraries, a dataset, and algorithms. Then I have to learn how to use them. Any help will be much appreciated.
[ "Anaconda is a Python distribution, that not only comes with Python itself, but a lot of additional Python packages from the \"scientific stack\", like numpy, pandas, matplotlib, scipy, scikit-learn: exactly what you need for ML. You don't have to install anything else from python.org.\nAnaconda also comes with the Spyder IDE. This is the perfect choice for a Python beginner. You don't need VSCode. VSCode is way more flexible than Spyder, but you have to get used to it.\nConda is the package manager that comes with Anaconda. Do yourself a favour an learn some conda basics and how to use virtual environments here: https://conda.io/projects/conda/en/latest/user-guide/getting-started.html\nThe important difference between Anaconda and pure Python is that you have to activate a conda environment - even the \"base\" environemnt - before you can use it. This is not obvious to a beginner.\n", "If I were you I would use vcscode. Anaconda is only a python with extra features, but to code you would appreciate the VSCOde. It comes with many features and you can install extensions to burst your experience. Go for it.\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "dataset", "machine_learning", "python" ]
stackoverflow_0074503272_anaconda_dataset_machine_learning_python.txt
Q: Combine awaitables like Promise.all In asynchronous JavaScript, it is easy to run tasks in parallel and wait for all of them to complete using Promise.all: async function bar(i) { console.log('started', i); await delay(1000); console.log('finished', i); } async function foo() { await Promise.all([bar(1), bar(2)]); } // This works too: async function my_all(promises) { for (let p of promises) await p; } async function foo() { await my_all([bar(1), bar(2), bar(3)]); } I tried to rewrite the latter in python: import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) async def aio_all(seq): for f in seq: await f async def main(): await aio_all([bar(i) for i in range(10)]) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() But it executes my tasks sequentially. What is the simplest way to await multiple awaitables? Why doesn't my approach work? A: The equivalent would be using asyncio.gather: import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) async def main(): await asyncio.gather(*[bar(i) for i in range(10)]) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() Why doesn't my approach work? Because when you await each item in seq, you block that coroutine. So in essence, you have synchronous code masquerading as async. If you really wanted to, you could implement your own version of asyncio.gather using loop.create_task or asyncio.ensure_future. EDIT The original answer used the lower-level asyncio.wait. A: I noticed that asyncio.gather() may be a better way to await other than asyncio.wait() if we want ordered results. As the docs indicates, the order of result values from asyncio.gather() method corresponds to the order of awaitables in aws. However, the order of result values from asyncio.wait() won't do the same thing.You can test it. A: https://docs.python.org/3/library/asyncio-task.html#asyncio.gather asyncio.gather() will return the list of output from each async function calls. import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) return i async def main(): values = await asyncio.gather(*[bar(i) for i in range(10)]) print(values) asyncio.run(main()) This method, gather, takes arbitrary number of args for the concurrent jobs instead of a list, so we unpack. It's very common to need this intermediate value, values in my eg, instead of designing your function/method to have side effects. A: As of Python 3.11: A more modern way to create and run tasks concurrently and wait for their completion is asyncio.TaskGroup. ... although, I couldn't find any reason why TaskGroup should be preferred over gather
Combine awaitables like Promise.all
In asynchronous JavaScript, it is easy to run tasks in parallel and wait for all of them to complete using Promise.all: async function bar(i) { console.log('started', i); await delay(1000); console.log('finished', i); } async function foo() { await Promise.all([bar(1), bar(2)]); } // This works too: async function my_all(promises) { for (let p of promises) await p; } async function foo() { await my_all([bar(1), bar(2), bar(3)]); } I tried to rewrite the latter in python: import asyncio async def bar(i): print('started', i) await asyncio.sleep(1) print('finished', i) async def aio_all(seq): for f in seq: await f async def main(): await aio_all([bar(i) for i in range(10)]) loop = asyncio.get_event_loop() loop.run_until_complete(main()) loop.close() But it executes my tasks sequentially. What is the simplest way to await multiple awaitables? Why doesn't my approach work?
[ "The equivalent would be using asyncio.gather:\nimport asyncio\n\nasync def bar(i):\n print('started', i)\n await asyncio.sleep(1)\n print('finished', i)\n\nasync def main():\n await asyncio.gather(*[bar(i) for i in range(10)])\n\nloop = asyncio.get_event_loop()\nloop.run_until_complete(main())\nloop.close()\n\n\nWhy doesn't my approach work?\n\nBecause when you await each item in seq, you block that coroutine. So in essence, you have synchronous code masquerading as async. If you really wanted to, you could implement your own version of asyncio.gather using loop.create_task or asyncio.ensure_future.\nEDIT\nThe original answer used the lower-level asyncio.wait.\n", "I noticed that asyncio.gather() may be a better way to await other than asyncio.wait() if we want ordered results.\nAs the docs indicates, the order of result values from asyncio.gather() method corresponds to the order of awaitables in aws. However, the order of result values from asyncio.wait() won't do the same thing.You can test it.\n", "https://docs.python.org/3/library/asyncio-task.html#asyncio.gather\nasyncio.gather() will return the list of output from each async function calls.\nimport asyncio\n\nasync def bar(i):\n print('started', i)\n await asyncio.sleep(1)\n print('finished', i)\n return i\n\nasync def main():\n values = await asyncio.gather(*[bar(i) for i in range(10)])\n print(values)\n\nasyncio.run(main())\n\nThis method, gather, takes arbitrary number of args for the concurrent jobs instead of a list, so we unpack.\nIt's very common to need this intermediate value, values in my eg, instead of designing your function/method to have side effects.\n", "As of Python 3.11:\nA more modern way to create and run tasks concurrently and wait for their completion is asyncio.TaskGroup.\n... although, I couldn't find any reason why TaskGroup should be preferred over gather\n" ]
[ 98, 23, 14, 1 ]
[]
[]
[ "async_await", "future", "python", "python_3.x", "python_asyncio" ]
stackoverflow_0034377319_async_await_future_python_python_3.x_python_asyncio.txt
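A sketch of the asyncio.TaskGroup variant mentioned in the last answer (Python 3.11+); results are read from the task objects once the group exits:

import asyncio

async def bar(i):
    await asyncio.sleep(1)
    return i

async def main():
    async with asyncio.TaskGroup() as tg:
        tasks = [tg.create_task(bar(i)) for i in range(10)]
    # All tasks are done when the async-with block exits.
    print([t.result() for t in tasks])

asyncio.run(main())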
Q: How to parse the regex text between multiline and between two braces? I am new to python and trying to learn the regex by example. In this example I am trying the extract the dictionary parts from the multiline text. How to extract the parts between the two braces in the following example? MWE: How to get pandas dataframe from this data? import re s = """ [ { specialty: "Anatomic/Clinical Pathology", one: " 12,643 ", two: " 8,711 ", three: " 385 ", four: " 520 ", five: " 3,027 ", }, { specialty: "Nephrology", one: " 11,407 ", two: " 9,964 ", three: " 140 ", four: " 316 ", five: " 987 ", }, { specialty: "Vascular Surgery", one: " 3,943 ", two: " 3,586 ", three: " 48 ", four: " 13 ", five: " 296 ", }, ] """ m = re.match('({.*})', s, flags=re.S) data = m.groups() df = pd.DataFrame(data) A: I suggest to add double quotes around the keys, then cast the string to a list of dictionaries and then simply read the structure into pandas dataframe using pd.from_dict: import pandas as pd from ast import literal_eval import re s = "YOU STRING HERE" fixed_s = re.sub(r"^(\s*)(\w+):", r'\1"\2":', s, flags=re.M) df = pd.DataFrame.from_dict( ast.literal_eval(fixed_s) ) The ^(\s*)(\w+): regex matches zero or more whitespaces at the start of any line (see the flags=re.M that makes ^ match start of any line positions) capturing them into Group 1, and then matches one or more word chars capturing them into Group 2 and then matches a : and then replaces the match with Group 1 + " + Group 2 + ":. The result is cast to a list of dictionaries using ast.literal_eval. Then, the list is used to initialize the dataframe.
How to parse the regex text between multiline and between two braces?
I am new to python and trying to learn the regex by example. In this example I am trying the extract the dictionary parts from the multiline text. How to extract the parts between the two braces in the following example? MWE: How to get pandas dataframe from this data? import re s = """ [ { specialty: "Anatomic/Clinical Pathology", one: " 12,643 ", two: " 8,711 ", three: " 385 ", four: " 520 ", five: " 3,027 ", }, { specialty: "Nephrology", one: " 11,407 ", two: " 9,964 ", three: " 140 ", four: " 316 ", five: " 987 ", }, { specialty: "Vascular Surgery", one: " 3,943 ", two: " 3,586 ", three: " 48 ", four: " 13 ", five: " 296 ", }, ] """ m = re.match('({.*})', s, flags=re.S) data = m.groups() df = pd.DataFrame(data)
[ "I suggest to add double quotes around the keys, then cast the string to a list of dictionaries and then simply read the structure into pandas dataframe using pd.from_dict:\nimport pandas as pd\nfrom ast import literal_eval\nimport re\n\ns = \"YOU STRING HERE\"\nfixed_s = re.sub(r\"^(\\s*)(\\w+):\", r'\\1\"\\2\":', s, flags=re.M)\ndf = pd.DataFrame.from_dict( ast.literal_eval(fixed_s) )\n\nThe ^(\\s*)(\\w+): regex matches zero or more whitespaces at the start of any line (see the flags=re.M that makes ^ match start of any line positions) capturing them into Group 1, and then matches one or more word chars capturing them into Group 2 and then matches a : and then replaces the match with Group 1 + \" + Group 2 + \":.\nThe result is cast to a list of dictionaries using ast.literal_eval.\nThen, the list is used to initialize the dataframe.\n" ]
[ 2 ]
[]
[]
[ "python", "python_re", "regex" ]
stackoverflow_0074503346_python_python_re_regex.txt
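The answer's substitution, run end-to-end on a trimmed version of the question's string as a quick check:

import re
import pandas as pd
from ast import literal_eval

s = '''
[
  {
    specialty: "Nephrology",
    one: " 11,407 ",
  },
]
'''
fixed_s = re.sub(r'^(\s*)(\w+):', r'\1"\2":', s, flags=re.M)
df = pd.DataFrame.from_dict(literal_eval(fixed_s))
print(df)  # one row, with columns specialty and one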
Q: What kind of machine learning solves this? i have some basic knowledge in AI and machine learning, but a bit confused solving a concrete problem. i have the following scenario: Given are features and labeled data (0 or 1). I want to predict the probability, new data takes 0 based on the feature values of this new data. I know this is supervised learning, but what method can I use for predicting here (i think logistic regression or neural networks should be an option?) and if theres any preimplemented libaries in Python I can just fed the data to and it will presict? A: For both quick overview and example implementations, please visit scikit-learn estimators. Your task is more likely of classification, which you can think it as how it's possible to fit (from 0.0 to 1.0) to a specific category. There are vastly available models to use. Most of them work through minimizing (or iterative optimizing) the cost function to obtain a valid prediction model. In particular cases, E.g., low dimensional feature, a decision tree which is one of the primitive algorithms, also can give reasonable results. I would recommend that you dig further into how well the model can be generalized and interpretable; both are just as important as model accuracy. A: Given the nature of your problem, there are many supervised learning models that you can use. Here are some of the algorithms with what you want to achieve. Do you prefer Speed or Accuracy? For Accuracy Kernel SVM Random Forest Classifier Neural Networks Gradient Boosting Classifier For Speed, is your data explanable? If yes, use Decision Trees Logistic Regression If not, use Naive Bayes if dataset too large Linear SVM if dataset is small These are for Supervised learning (Classification), for other forms of learning, there are other models as well. You can view this link for more information.
What kind of machine learning solves this?
I have some basic knowledge in AI and machine learning, but I am a bit confused solving a concrete problem. I have the following scenario: given are features and labeled data (0 or 1). I want to predict the probability that new data takes 0, based on the feature values of this new data. I know this is supervised learning, but what method can I use for predicting here (I think logistic regression or neural networks should be an option?), and are there any pre-implemented libraries in Python I can just feed the data to so that it will predict?
[ "For both quick overview and example implementations, please visit scikit-learn estimators. Your task is more likely of classification, which you can think it as how it's possible to fit (from 0.0 to 1.0) to a specific category.\nThere are vastly available models to use. Most of them work through minimizing (or iterative optimizing) the cost function to obtain a valid prediction model. In particular cases, E.g., low dimensional feature, a decision tree which is one of the primitive algorithms, also can give reasonable results.\nI would recommend that you dig further into how well the model can be generalized and interpretable; both are just as important as model accuracy.\n", "Given the nature of your problem, there are many supervised learning models that you can use. Here are some of the algorithms with what you want to achieve.\nDo you prefer Speed or Accuracy?\nFor Accuracy\nKernel SVM\nRandom Forest Classifier\nNeural Networks\nGradient Boosting Classifier\nFor Speed, is your data explanable?\nIf yes, use\nDecision Trees\nLogistic Regression\nIf not, use\nNaive Bayes if dataset too large\nLinear SVM if dataset is small\nThese are for Supervised learning (Classification), for other forms of learning, there are other models as well. You can view this link for more information.\n" ]
[ 0, 0 ]
[]
[]
[ "artificial_intelligence", "machine_learning", "python" ]
stackoverflow_0074500016_artificial_intelligence_machine_learning_python.txt
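A minimal scikit-learn sketch of the "probability that new data takes 0" part; the tiny X and y below stand in for the question's labeled features:

from sklearn.linear_model import LogisticRegression

X = [[0.2, 1.1], [0.9, 0.4], [0.5, 0.8], [1.0, 0.1]]
y = [0, 1, 0, 1]
clf = LogisticRegression().fit(X, y)
# predict_proba columns follow clf.classes_, so index 0 is P(label == 0).
print(clf.predict_proba([[0.3, 0.9]])[0][0])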
Q: [PYTHON/BINARY FILE]: Sorting the bits read The file 'binary_file.bin' contains the following binary data (hexadecimal base used for simplicity): A43CB90F Each 2 bytes correspond to an unsigned integer of 16 bits: first number is A43C and second number is B90F, which in decimal base correspond respectively to 42044 and to 47375. I'm trying to read the binary input stream in Python using the method fromfile() from the numpy library as follows: import numpy as np binary_stream = open('binary_file.bin', 'rb') numbers_to_read=2 numbers = np.fromfile(binary_stream,dtype=np.uint16,count=numbers_to_read,sep="") print(numbers[0]) # Result -----> DECIMAL: 15524 / HEXADECIMAL: 3CA4 print(numbers[1]) # Result -----> DECIMAL: 4025 / HEXADECIMAL: 0FB9 The first number read corresponds to 3CA4 instead of A43C, and the second number read corresponds to 0FB9 instead of B90F. So, it looks like the bytes are flipped when reading them with fromfile(). Is there any alternative to make sure that the bytes come in the same order as in the binary file? First number should be A43C (42044) and the second number should be B90F and the bytes shouldn't get flipped. A: You need to specify the byteorder of the data type. For example: import numpy as np binary_stream = open('/tmp/binary_file.bin', 'rb') numbers_to_read=2 numbers = np.fromfile(binary_stream, dtype=np.dtype('>H'), count=numbers_to_read, sep="") for num in numbers: print(f"Decimal = {num} | Hex = {num:04X}") Which gives the following output: Decimal = 42044 | Hex = A43C Decimal = 47375 | Hex = B90F
[PYTHON/BINARY FILE]: Sorting the bits read
The file 'binary_file.bin' contains the following binary data (hexadecimal base used for simplicity): A43CB90F Each 2 bytes correspond to an unsigned integer of 16 bits: first number is A43C and second number is B90F, which in decimal base correspond respectively to 42044 and to 47375. I'm trying to read the binary input stream in Python using the method fromfile() from the numpy library as follows: import numpy as np binary_stream = open('binary_file.bin', 'rb') numbers_to_read=2 numbers = np.fromfile(binary_stream,dtype=np.uint16,count=numbers_to_read,sep="") print(numbers[0]) # Result -----> DECIMAL: 15524 / HEXADECIMAL: 3CA4 print(numbers[1]) # Result -----> DECIMAL: 4025 / HEXADECIMAL: 0FB9 The first number read corresponds to 3CA4 instead of A43C, and the second number read corresponds to 0FB9 instead of B90F. So, it looks like the bytes are flipped when reading them with fromfile(). Is there any alternative to make sure that the bytes come in the same order as in the binary file? First number should be A43C (42044) and the second number should be B90F and the bytes shouldn't get flipped.
[ "You need to specify the byteorder of the data type.\nFor example:\nimport numpy as np\nbinary_stream = open('/tmp/binary_file.bin', 'rb')\nnumbers_to_read=2\nnumbers = np.fromfile(binary_stream,\n dtype=np.dtype('>H'),\n count=numbers_to_read,\n sep=\"\") \nfor num in numbers:\n print(f\"Decimal = {num} | Hex = {num:04X}\")\n\nWhich gives the following output:\nDecimal = 42044 | Hex = A43C\nDecimal = 47375 | Hex = B90F\n\n" ]
[ 1 ]
[]
[]
[ "binary_data", "numpy", "python" ]
stackoverflow_0074501863_binary_data_numpy_python.txt
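The same big-endian read with only the standard library, as a cross-check of the answer: in struct notation ">" forces big-endian and "H" is an unsigned 16-bit integer.

import struct

with open('binary_file.bin', 'rb') as f:
    numbers = struct.unpack('>2H', f.read(4))  # two big-endian uint16 values
for num in numbers:
    print(f'Decimal = {num} | Hex = {num:04X}')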
Q: Match in Python a LARGE set of (x, y) points to another set with outliers I have two large sets of (x, y) points and I want to associate in Python each point of one set with "the corresponding point" of the other. The second set can also contain outliers, i.e. extra noise points, as you can see in this picture, where there are more green dots than red dots: The association between the two sets of points is not a simple translation, as you can see in this image: In these two links you can find the red dots and green dots (list of image coordinates with origin in top-left): https://drive.google.com/file/d/1fptkxEDYbIJ93r_OXJSstDHMfk67DDYo/view?usp=share_link https://drive.google.com/file/d/1Z_ghWIzUZv8sxfawOBoGG3fJz4h_z7Qv/view?usp=share_link My problem is similar to these two: Match set of x,y points to another set that is scaled, rotated, translated, and with missing elements How to align two sets of points (translation+rotation) when those sets contain noise? but I have a large set of points, so the solutions proposed here don't work for my case. My points have a certain structure in rows so it's difficult to compute a Roto-Scale-Translation function because the rows of points get confused with each other. A: I found a method which can recover which points correspond to which other points fairly accurately, using two phases. The first phase corrects for affine transformation, and the second phase corrects for nonlinear distortion. Note: I chose to match red points to green points, rather than the other way around. Assumptions The method makes three assumptions: It knows three or more green points and the matching red points. The differences between the two are mostly linear. The nonlinear portion of the difference is locally similar - i.e. if one point has the match offset by (-10, 10), the neighboring point will have a similar offset. This is controlled by max_search_radius. Code Start by loading both datasets: import json import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.neighbors import NearestNeighbors from scipy.spatial import KDTree from collections import Counter with open('red_points.json', 'rb') as f: red_points = json.load(f) red_points = pd.DataFrame(red_points, columns=list('xy')) with open('green_points.json', 'rb') as f: green_points = json.load(f) green_points = pd.DataFrame(green_points, columns=list('xy')) I found it useful to have a function to visualize both datasets: def plot_two(green, red): if isinstance(red, np.ndarray): red = pd.DataFrame(red, columns=list('xy')) if isinstance(green, np.ndarray): green = pd.DataFrame(green, columns=list('xy')) both = pd.concat([green.assign(hue='green'), red.assign(hue='red')]) ax = both.plot.scatter('x', 'y', c='hue', alpha=0.5, s=0.5) ax.ticklabel_format(useOffset=False) Next, pick three points in green, and provide their XY coordinates. Find the corresponding points in red, and provide their XY coordinates. green_sample = np.array([ [5221, 12460], [2479, 2497], [6709, 6303], ]) red_sample = np.array([ [5274, 12597], [2375, 2563], [6766, 6406], ]) Next, use those points to find an affine matrix. This affine matrix will cover rotation, translation, scaling and skewing. Since it has six unknowns, you need at least six constraints, or the equation is underdetermined. This is why we needed at least three points earlier. def add_implicit_ones(matrix): b = np.ones((matrix.shape[0], 1)) return np.concatenate((matrix,b), axis=1) def transform_points_affine(points, matrix): return add_implicit_ones(points) @ matrix def fit_affine_matrix(red_sample, green_sample): red_sample = add_implicit_ones(red_sample) X, _, _, _ = np.linalg.lstsq(red_sample, green_sample, rcond=None) return X X = fit_affine_matrix(red_sample, green_sample) red_points_transformed = transform_points_affine(red_points.values, X) Now we get to the nonlinear matching step. This is run after red's values are transformed to match green's values. Here's the algorithm: Start at a red point which has no nonlinear component. One near one of the green_sample points is a good choice for this - the affine step will prioritize getting those points right. Search around this point in a radius for the corresponding green point. Record the difference between the red point and the corresponding green point as "drift." Look at all of the red neighbors of that red point. Add those to a list to process. At one of those red neighbors, take the average drift of all neighboring red points. Add that drift to the red point, and search in a radius for a green point. The difference between the red point and corresponding green point is the drift for this red point. Add all this point's red neighbors to the list to process, and go back to step 3. def find_nn_graph(red_points_np): nbrs = NearestNeighbors(n_neighbors=8, algorithm='ball_tree').fit(red_points_np) _, indicies = nbrs.kneighbors(red_points_np) return indicies def point_search(red_points_np, green_points_np, starting_point, max_search_radius): starting_point_idx = (((red_points_np - starting_point)**2).mean(axis=1)).argmin() green_tree = KDTree(green_points_np) dirty = Counter() visited = set() indicies = find_nn_graph(red_points_np) # Mark starting point as dirty dirty[starting_point_idx] += 1 match = {} drift = np.zeros(red_points_np.shape) # NaN = unknown drift drift[:] = np.nan while len(dirty) > 0: point_idx, num_neighbors = dirty.most_common(1)[0] neighbors = indicies[point_idx] if point_idx != starting_point_idx: neighbor_drift_all = drift[neighbors] if np.isnan(neighbor_drift_all).all(): # All neighbors have no drift # Unmark as dirty and come back to this one del dirty[point_idx] continue neighbor_drift = np.nanmean(neighbor_drift_all, axis=0) assert not np.isnan(neighbor_drift).any(), "No neighbor drift found" else: neighbor_drift = np.array([0, 0]) # Find the point in the green set red_point = red_points_np[point_idx] green_points_idx = green_tree.query_ball_point(red_point + neighbor_drift, r=max_search_radius) assert len(green_points_idx) != 0, f"No green point found near {red_point}" assert len(green_points_idx) == 1, f"Too many green points found near {red_point}" green_point = green_points_np[green_points_idx[0]] real_drift = green_point - red_point match[point_idx] = green_points_idx[0] # Save drift drift[point_idx] = real_drift # Mark unvisited neighbors as dirty if point_idx not in visited: neighbors = indicies[point_idx, 1:] neighbors = [n for n in neighbors if n not in visited] dirty.update(neighbors) # Remove this point from dirty del dirty[point_idx] # Mark this point as visited visited.add(point_idx) # Check that there are no duplicates assert len(set(match.values())) == len(match) # Check that every point in red_points_np was matched assert len(match) == red_points_np.shape[0] return match, drift # This point is assumed to have a drift of zero # Pick one of the points which was used for the linear correction starting_point = green_sample[0] # Maximum distance that a point can be found from where it is expected max_search_radius = 10 green_points_np = green_points.values match, drift = point_search(red_points_transformed, green_points_np, starting_point, max_search_radius) Next, here's a tool which you can use to audit the quality of the match. This is showing the first thousand matches. Underneath that is a quiver plot, where the arrow points from the red point in the direction of the matching green point. (Note: arrows are not to scale.) red_idx, green_idx = zip(*match.items()) def show_match_subset(start_idx, length): end_idx = start_idx + length plot_two(green_points_np[np.array(green_idx)][start_idx:end_idx], red_points_np[np.array(red_idx)][start_idx:end_idx]) plt.show() red_xy = red_points_np[np.array(red_idx)][start_idx:end_idx] red_drift_direction = drift[np.array(red_idx)][start_idx:end_idx] plt.quiver(red_xy[:, 0], red_xy[:, 1], red_drift_direction[:, 0], red_drift_direction[:, 1]) show_match_subset(0, 1000) Plots: Match Here's a copy of the match I found. It's in JSON format, where the keys represent indexes of points in the red point file, and the values represent indexes of points in the green point file. https://pastebin.com/SBezpstu
Match in Python a LARGE set of (x, y) points to another set with outliers
I have two large sets of (x, y) points and I want to associate in Python each point of one set with "the corresponding point" of the other. The second set can also contain outliers, i.e. extra noise points, as you can see in this picture, where there are more green dots than red dots: The association between the two sets of points is not a simple translation, as you can see in this image: In these two links you can find the red dots and green dots (list of image coordinates with origin in top-left): https://drive.google.com/file/d/1fptkxEDYbIJ93r_OXJSstDHMfk67DDYo/view?usp=share_link https://drive.google.com/file/d/1Z_ghWIzUZv8sxfawOBoGG3fJz4h_z7Qv/view?usp=share_link My problem is similar to these two: Match set of x,y points to another set that is scaled, rotated, translated, and with missing elements How to align two sets of points (translation+rotation) when those sets contain noise? but I have a large set of points, so the solutions proposed here don't work for my case. My points have a certain structure in rows so it's difficult to compute a Roto-Scale-Translation function because the rows of points get confused with each other.
[ "I found a method which can recover which points correspond to which other points fairly accurately, using two phases. The first phase corrects for affine transformation, and the second phase corrects for nonlinear distortion.\nNote: I chose to match red points to green points, rather than the other way around.\nAssumptions\nThe method makes three assumptions:\n\nIt knows three or more green points and the matching red points.\nThe differences between the two are mostly linear.\nThe nonlinear portion of the difference is locally similar - i.e. if one point has the match offset by (-10, 10), the neighboring point will have a similar offset. This is controlled by max_search_dist.\n\nCode\nStart by loading both datasets:\nimport json\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.neighbors import NearestNeighbors\nfrom scipy.spatial import KDTree\nfrom collections import Counter\n\nwith open('red_points.json', 'rb') as f:\n red_points = json.load(f)\nred_points = pd.DataFrame(red_points, columns=list('xy'))\nwith open('green_points.json', 'rb') as f:\n green_points = json.load(f)\ngreen_points = pd.DataFrame(green_points, columns=list('xy'))\n\nI found it useful to have a function to visualize both datasets:\ndef plot_two(green, red):\n if isinstance(red, np.ndarray):\n red = pd.DataFrame(red, columns=list('xy'))\n if isinstance(green, np.ndarray):\n green = pd.DataFrame(green, columns=list('xy'))\n both = pd.concat([green.assign(hue='green'), red.assign(hue='red')])\n ax = both.plot.scatter('x', 'y', c='hue', alpha=0.5, s=0.5)\n ax.ticklabel_format(useOffset=False)\n\nNext, pick three points in green, and provide their XY coordinates. Find the corresponding points in red, and provide their XY coordinates.\ngreen_sample = np.array([\n [5221, 12460],\n [2479, 2497],\n [6709, 6303],\n])\nred_sample = np.array([\n [5274, 12597],\n [2375, 2563],\n [6766, 6406],\n])\n\nNext, use those points to find an affine matrix. This affine matrix will cover rotation, translation, scaling and skewing. Since it has six unknowns, you need at least six constraints, or the equation is underdetermined. This is why we needed at least three points earlier.\ndef add_implicit_ones(matrix):\n b = np.ones((matrix.shape[0], 1))\n return np.concatenate((matrix,b), axis=1)\n\ndef transform_points_affine(points, matrix):\n return add_implicit_ones(points) @ matrix\n\ndef fit_affine_matrix(red_sample, green_sample):\n red_sample = add_implicit_ones(red_sample)\n X, _, _, _ = np.linalg.lstsq(red_sample, green_sample, rcond=None)\n return X\n\nX = fit_affine_matrix(red_sample, green_sample)\nred_points_transformed = transform_points_affine(red_points.values, X)\n\nNow we get to the nonlinear matching step. This is run after red's values are transformed to match green's values. Here's the algorithm:\n\nStart at a red point which has no nonlinear component. One near one of the green_sample points is a good choice for this - the affine step will prioritize getting those points right. Search around this point in a radius for the corresponding green point. Record the difference between the red point and the corresponding green point as \"drift.\"\nLook at all of the red neighbors of that red point. Add those to a list to process.\nAt one of those red neighbors, take the average drift of all neighboring red points. 
Add that drift to the red point, and search in a radius for a green point.\nThe difference between the red point and corresponding green point is the drift for this red point.\nAdd all this point's red neighbors to the list to process, and go back to step 3.\n\ndef find_nn_graph(red_points_np):\n nbrs = NearestNeighbors(n_neighbors=8, algorithm='ball_tree').fit(red_points_np)\n _, indicies = nbrs.kneighbors(red_points_np)\n return indicies\n\ndef point_search(red_points_np, green_points_np, starting_point, max_search_radius):\n starting_point_idx = (((red_points_np - starting_point)**2).mean(axis=1)).argmin()\n green_tree = KDTree(green_points_np)\n dirty = Counter()\n visited = set()\n indicies = find_nn_graph(red_points_np)\n # Mark starting point as dirty\n dirty[starting_point_idx] += 1\n\n match = {}\n\n drift = np.zeros(red_points_np.shape)\n # NaN = unknown drift\n drift[:] = np.nan\n while len(dirty) > 0:\n point_idx, num_neighbors = dirty.most_common(1)[0]\n neighbors = indicies[point_idx]\n if point_idx != starting_point_idx:\n neighbor_drift_all = drift[neighbors]\n if np.isnan(neighbor_drift_all).all():\n # All neighbors have no drift\n # Unmark as dirty and come back to this one\n del dirty[point_idx]\n continue\n neighbor_drift = np.nanmean(neighbor_drift_all, axis=0)\n assert not np.isnan(neighbor_drift).any(), \"No neighbor drift found\"\n else:\n neighbor_drift = np.array([0, 0])\n # Find the point in the green set\n red_point = red_points_np[point_idx]\n green_points_idx = green_tree.query_ball_point(red_point + neighbor_drift, r=max_search_radius)\n\n assert len(green_points_idx) != 0, f\"No green point found near {red_point}\"\n assert len(green_points_idx) == 1, f\"Too many green points found near {red_point}\"\n green_point = green_points_np[green_points_idx[0]]\n real_drift = green_point - red_point\n match[point_idx] = green_points_idx[0]\n\n # Save drift\n drift[point_idx] = real_drift\n # Mark unvisited neighbors as dirty\n if point_idx not in visited:\n neighbors = indicies[point_idx, 1:]\n neighbors = [n for n in neighbors if n not in visited]\n dirty.update(neighbors)\n # Remove this point from dirty\n del dirty[point_idx]\n # Mark this point as visited\n visited.add(point_idx)\n # Check that there are no duplicates\n assert len(set(match.values())) == len(match)\n # Check that every point in red_points_np was matched\n assert len(match) == red_points_np.shape[0]\n return match, drift\n\n\n# This point is assumed to have a drift of zero\n# Pick one of the points which was used for the linear correction\nstarting_point = green_sample[0]\n# Maximum distance that a point can be found from where it is expected\nmax_search_radius = 10\ngreen_points_np = green_points.values\nmatch, drift = point_search(red_points_transformed, green_points_np, starting_point, max_search_radius)\n\nNext, here's a tool which you can use to audit the quality of the match. This is showing the first thousand matches. Underneath that is a quiver plot, where the arrow points from the red point in the direction of the matching green point. 
(Note: arrows are not to scale.)\nred_idx, green_idx = zip(*match.items())\ndef show_match_subset(start_idx, length):\n    end_idx = start_idx + length\n    plot_two(green_points_np[np.array(green_idx)][start_idx:end_idx], red_points_transformed[np.array(red_idx)][start_idx:end_idx])\n    plt.show()\n    red_xy = red_points_transformed[np.array(red_idx)][start_idx:end_idx]\n    red_drift_direction = drift[np.array(red_idx)][start_idx:end_idx]\n    plt.quiver(red_xy[:, 0], red_xy[:, 1], red_drift_direction[:, 0], red_drift_direction[:, 1])\n    \nshow_match_subset(0, 1000)\n\nPlots:\n\n\nMatch\nHere's a copy of the match I found. It's in JSON format, where the keys represent indexes of points in the red point file, and the values represent indexes of points in the green point file. https://pastebin.com/SBezpstu\n" ]
[ 2 ]
[]
[]
[ "affinetransform", "computer_vision", "opencv", "performance", "python" ]
stackoverflow_0074493193_affinetransform_computer_vision_opencv_performance_python.txt
Q: Subscription fees to use blpapi package
I want to know whether there are ways to get Bloomberg data into Python. I see we can connect through the blpapi/pdblp packages, so I wanted to check what the pricing for this is. I'd appreciate it if anyone can help me here.
Looking for ways to connect Python to Bloomberg data
A: Bloomberg has a number of products which support the real-time API known as the BLP API. This is a microservice-based API: there are microservices for streaming market data (//blp/mktdata), requesting static reference data (//blp/refdata), contributing OTC pricing (//firm/c-gdco), submitting orders (//blp/emsx), etc. The API supports a number of languages including Python, Perl, C++ and .NET. The API pattern requires setting up a session where you 'target'/connect to a delivery point. There are several flavours of delivery points depending on which Bloomberg products you buy. For the Bloomberg (Professional) Terminal there is the Desktop API (DAPI); there is also the Server API (SAPI), something called B-PIPE, and another called EMSX. They all present delivery points, and they all support the same BLP API.
The Bloomberg Terminal's delivery point is localhost:8194. No Bloomberg Terminal, no localhost delivery point. However, maybe your organisation has bought an Enterprise B-PIPE product, in which case you don't need a Bloomberg Terminal, and the delivery point will sit on at least two servers (IPs), again on port 8194.
So, bottom line, the API library is available and you can develop against it. The problem is that the first few lines of creating a session object and connecting to the delivery point will fail unless you have a Bloomberg product. There's no sandbox, sadly.
Pricing depends on the product, and unfortunately you'll also need to consider your application's use-case. As an example, if you're writing a systematic trading application, the licensing of the Bloomberg (Professional) Terminal will not permit that; however, a B-PIPE includes a licence that will permit it (plus hefty exchange fees if not OTC).
Good luck.
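To make the delivery-point idea concrete, here is a minimal connection sketch with the blpapi Python library. It assumes a running Bloomberg Terminal on the same machine; without a licensed product, session.start() simply fails, exactly as described above.
import blpapi

# Target the Desktop API delivery point exposed by a running Bloomberg Terminal
options = blpapi.SessionOptions()
options.setServerHost("localhost")
options.setServerPort(8194)

session = blpapi.Session(options)
if not session.start():
    raise RuntimeError("Could not start session - is a licensed Bloomberg product running?")
if not session.openService("//blp/refdata"):
    raise RuntimeError("Could not open //blp/refdata")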
Subscription fees to use blpapi package
I want to know whether there are ways to get Bloomberg data into Python. I see we can connect through the blpapi/pdblp packages, so I wanted to check what the pricing for this is. I'd appreciate it if anyone can help me here.
Looking for ways to connect Python to Bloomberg data
[ "Bloomberg has a number of products, which support the real-time API known as the BLP API. This API is a microservice based API. They have microservices for streaming marketdata (//blp/mktdata), requesting static reference (//blp/refdata), contributing OTC pricing (//firm/c-gdco), submitting orders (//blp/emsx), etc etc. The API supports a number of languages including Python, Perl, C++, .NET, etc. The API pattern requires setting up a session where you 'target'/connect to a delivery point. There are several flavours of delivery points depending on what Bloomberg products you buy. For the Bloomberg (Professional) Terminal, you have something called Desktop API (DAPI), they have something called the Server (SAPI), they have something called B-PIPE, another is EMSX. They all present delivery points. They all support the same BLP API.\nThe Bloomberg Terminal's delivery point is localhost:8194. No Bloomberg Terminal, no localhost delivery point. However, maybe your organisation has bought an Enterprise B-PIPE product, in which case you don't need a Bloomberg Terminal, and the delivery point will sit on at least two servers (IPs), again on port 8194.\nSo, bottom line, the API library is available and you can develop against it. Problem is, the first few lines of creating a session object and connecting to the end point will fail unless you have a Bloomberg product. There's no sandbox, sadly.\nPricing depends on product, and unfortunately you'll also need to consider your application use-case. As an example, if you're writing a systematic trading application, then the licensing of the Bloomberg (Professional) Terminal will not permit that, however, a B-PIPE will include a licence that will permit that (plus hefty exchange fees if not OTC).\nGood luck.\n" ]
[ 1 ]
[]
[]
[ "bloomberg", "blpapi", "python" ]
stackoverflow_0074284145_bloomberg_blpapi_python.txt
Q: How to solve (cid:x) pdfplumber python text extraction
PDF_Doc I've been working with the pdfplumber library to extract text from pdf documents and it's been fine; however, in the documents I'm working on now, I just get spaces and lots of (cid:x) instead of text. Any solution? Thanks
with pdfplumber.open(fatura) as pdf:
    lista_paginas = pdf.pages
    fatura_individual = ''
    for pagina in lista_paginas[:len(lista_paginas)]:
        fatura_individual += pagina.extract_text()

(cid:12)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:16)

I just want to extract the full text.
A: Try PyPDF2: https://pypdf2.readthedocs.io/en/latest/user/extract-text.html
from PyPDF2 import PdfReader

reader = PdfReader("example.pdf")
for page in reader.pages:
    print(page.extract_text())
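If PyPDF2 returns the same output, the (cid:x) placeholders most likely mean the PDF's embedded fonts carry no ToUnicode mapping, so any extractor can only report raw glyph IDs; in that case OCR is the usual fallback. A sketch with pdf2image and pytesseract - both my own suggestions, not part of the original answer - could look like this:
from pdf2image import convert_from_path
import pytesseract

# Rasterize each page, then OCR the images instead of reading the broken text layer
pages = convert_from_path("fatura.pdf", dpi=300)
texto = "\n".join(pytesseract.image_to_string(pagina) for pagina in pages)
print(texto)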
How to solve (cid:x) pdfplumber python text extraction
PDF_Doc I've been working with the pdfplumber library to extract text from pdf documents and it's been fine; however, in the documents I'm working on now, I just get spaces and lots of (cid:x) instead of text. Any solution? Thanks
with pdfplumber.open(fatura) as pdf:
    lista_paginas = pdf.pages
    fatura_individual = ''
    for pagina in lista_paginas[:len(lista_paginas)]:
        fatura_individual += pagina.extract_text()

(cid:12)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0),(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:0)(cid:16)

I just want to extract the full text.
[ "Try PyPDF2 : https://pypdf2.readthedocs.io/en/latest/user/extract-text.html\nfrom PyPDF2 import PdfReader\n\nreader = PdfReader(\"example.pdf\")\nfor page in reader.pages:\n print(page.extract_text())\n\n" ]
[ 0 ]
[]
[]
[ "pdfplumber", "pdftotext", "pypdf2", "python" ]
stackoverflow_0074416930_pdfplumber_pdftotext_pypdf2_python.txt
Q: Switch the values from x-axis to y-axis while using the correct labels (Python Matplotlib.pyplot OR Seaborn)
I have a small table of information that I'm trying to turn into a histogram. It has one column of Department names and a second column of totals. I would like the x-axis to use the Department names and the y-axis to use the numbers from the totals column. When I try to code it, the x-axis is the totals and the y-axis is a count of how many of those totals fit into the bins.
Title: deptgroups (my dataframe)
department    count
Admin         857
Engineering   26
IT            49
Marketing     16
Operations    1013
Sales         1551

Data as "datagroups.csv"
department,count,
Admin,857,
Engineering,26,
IT,49,
Marketing,16,
Operations,1013,
Sales,1551

plt.hist(x=deptgroups)
plt.show()

Incorrect Graph
I've tried specifying x and y values, but it throws an error. I would like it to look more like this (ish):
Qty  Dept 1  Dept 2
500  XXXXXX  ......
400  XXXXXX  ......
300  XXXXXX  XXXXXX
200  XXXXXX  XXXXXX
100  XXXXXX  XXXXXX

The original data for example would look like this:
|Department | Count|
|-----------|------|
|Dept 1     | 500  |
|Dept 2     | 300  |

A: I think the main confusion comes from the name of the plot. What you want is a bar chart; a histogram is something else.
import matplotlib.pyplot as plt

data = {
    'Admin': 857,
    'Engineering': 26,
    'IT': 49,
    'Marketing': 16,
    'Operations': 1013,
    'Sales': 1551
}
departments = data.keys()
count = data.values()

# you don't need a histogram, you need a bar plot
plt.bar(departments, count, color='blue', width=0.4)
plt.xlabel('Department')
plt.ylabel('Count')
plt.show()
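Since the title also allows Seaborn, here is an equivalent sketch with seaborn.barplot, reading the counts straight from the deptgroups dataframe (this assumes the CSV's trailing commas are ignored via usecols and the columns are named department and count):
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# Load only the two named columns; the trailing commas in the CSV create an
# extra unnamed column that usecols skips
deptgroups = pd.read_csv("datagroups.csv", usecols=["department", "count"])
sns.barplot(data=deptgroups, x="department", y="count", color="blue")
plt.show()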
Switch the values from x-axis to y-axis while using the correct labels(Python Matplotlib.pyplot OR Seaborn)
I have a small table of information that I'm trying to turn into a histogram. It has one column of Department names and a second column of totals. I would like the x-axis to use the Department names and the y-axis to use the numbers from the totals column. When I try to code it, the x-axis is the totals and the y-axis is a count of how many of those totals fit into the bins.
Title: deptgroups (my dataframe)
department    count
Admin         857
Engineering   26
IT            49
Marketing     16
Operations    1013
Sales         1551

Data as "datagroups.csv"
department,count,
Admin,857,
Engineering,26,
IT,49,
Marketing,16,
Operations,1013,
Sales,1551

plt.hist(x=deptgroups)
plt.show()

Incorrect Graph
I've tried specifying x and y values, but it throws an error. I would like it to look more like this (ish):
Qty  Dept 1  Dept 2
500  XXXXXX  ......
400  XXXXXX  ......
300  XXXXXX  XXXXXX
200  XXXXXX  XXXXXX
100  XXXXXX  XXXXXX

The original data for example would look like this:
|Department | Count|
|-----------|------|
|Dept 1     | 500  |
|Dept 2     | 300  |
[ "I think main confusion is coming from the name of the plot. It's called bar chart, histogram is something else.\nimport matplotlib.pyplot as plt\n\ndata = {\n 'Admin': 857,\n 'Engineering': 26,\n 'IT': 49,\n 'Marketing': 16,\n 'Operations': 1013,\n 'Sales': 1551\n}\ndepartments = data.keys()\ncount = data.values()\n\n# you don't need histogram, you need bar plot\nplt.bar(departments, count, color='blue', width=0.4)\nplt.xlabel('Department')\nplt.ylabel('Count')\nplt.show()\n\n\n" ]
[ 0 ]
[]
[]
[ "axis", "histogram", "matplotlib", "python", "python_3.x" ]
stackoverflow_0074503294_axis_histogram_matplotlib_python_python_3.x.txt
Q: How can I remove a string from a txt file?
I am trying to create a contact info txt file with python
what_you_want = input("Do you want to add or remove (if add write add), (if remove write remove): ")
if what_you_want == "remove":
    what_you_want_remove = input("What contact number you want to remove: ")
    with open("All Contact.txt", "r") as f:
        contact_info = f.readlines()
    if what_you_want_remove in contact_info:
        with open("All Contact.txt", "a") as f:
            if what_you_want_remove in contact_info:
                new_contact_info = contact_info.replace(what_you_want_remove, "")
                f.write(new_contact_info)

I couldn't find a way to directly remove something from a txt file, so I want to put it into a list and then write it back to the txt file, but when I try to use the remove command it doesn't work. I want to ask if there is a way to remove something from a text file directly.
A: You can read the file into a string, remove the substring, and write it back to the file. For instance,
with open("file.txt", "r") as f:
    string = f.read()
string = string.replace(substring, "")
with open("file.txt", "w") as f:
    f.write(string)

If substring does not exist in file.txt, the replace function will not perform any action. If you want the replacement to be case-insensitive, you could try lowering the entire buffer as well as the substring via string = string.lower() and substring = substring.lower().
A: Use del to delete that entry. Then overwrite the original file, making it one line shorter.
if what_you_want_remove in contact_info:
    i = contact_info.index(what_you_want_remove)
    del contact_info[i]

    with open("All Contact.txt", "w") as fout:
        fout.write("\n".join(contact_info))
        fout.write("\n")
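A compact end-to-end sketch of the read-filter-rewrite pattern both answers describe, assuming one contact per line in the file:
what_you_want_remove = input("What contact number you want to remove: ")

with open("All Contact.txt", "r") as f:
    lines = f.read().splitlines()

# keep every line that is not the contact being removed
kept = [line for line in lines if line.strip() != what_you_want_remove]

with open("All Contact.txt", "w") as f:
    f.write("\n".join(kept) + "\n")

Note that readlines() keeps the trailing newline on each element, which is why the question's membership test rarely matches; splitlines() avoids that pitfall.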
How can I remove a string from a txt file?
I am trying to create a contact info txt file with python
what_you_want = input("Do you want to add or remove (if add write add), (if remove write remove): ")
if what_you_want == "remove":
    what_you_want_remove = input("What contact number you want to remove: ")
    with open("All Contact.txt", "r") as f:
        contact_info = f.readlines()
    if what_you_want_remove in contact_info:
        with open("All Contact.txt", "a") as f:
            if what_you_want_remove in contact_info:
                new_contact_info = contact_info.replace(what_you_want_remove, "")
                f.write(new_contact_info)

I couldn't find a way to directly remove something from a txt file, so I want to put it into a list and then write it back to the txt file, but when I try to use the remove command it doesn't work. I want to ask if there is a way to remove something from a text file directly.
[ "You can read the file into a string or list, remove the substring, and write back to the file. For instance,\nwith open(\"file.txt\", \"w\") as f:\n string = f.readlines()\n string = string.replace(substring, \"\")\n f.write(string)\n\nIf substring does not exist in file.txt, the replace function will not perform any action. If you want the replacement to be case-insensitive, you could try lowering the entire buffer as well as the subtring via string = string.lower() and substring = substring.lower().\n", "Use del to delete that entry.\nThen overwrite the original file, making it one line shorter.\nif what_you_want_remove in contact_info:\n i = contact_info.index(what_you_want_remove)\n del contact_info[i]\n\n with open(\"All Contact.txt\", \"w\") as fout:\n fout.write(\"\\n\".join(contact_info)\n fout.write(\"\\n\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "file", "python", "python_3.x", "text" ]
stackoverflow_0074503739_file_python_python_3.x_text.txt
Q: Importing rotated text from a PDF table such as with tabula-py in python
Is there a way to import rotated text from a PDF table, such as with tabula-py in python? I realize I can just rename the column headers in this case, but I was wondering if there is a way to set a parameter for importing rotated text. I don't see any mention of rotation in the readthedocs for tabula-py and haven't found other packages that would do this yet either (although I did see a mention of rotating an entire page - which doesn't fit this use case exactly, as renaming the columns would be easier).
Example:
import tabula
list_df = tabula.read_pdf(
    'https://sos.oregon.gov/elections/Documents/statistics/G22-Daily-Ballot-Returns.pdf',
    pages=3
)
list_df[0]

A: I just tried using camelot and it correctly reads the rotated text in the columns header: this is the result.
A: As @Francesco mentioned, there is a particular way in which camelot is better than tabula-py, since camelot finds the rotated text.
It was a difficult process to install camelot, so I thought to share some of my learnings here.

Dependency for camelot: Ghostscript
https://camelot-py.readthedocs.io/en/master/user/install-deps.html
For a mac: brew install ghostscript tcl-tk and then troubleshoot any errors (many errors for me, but after copy-pasting each error, there was gold at the end of the rainbow).

Overview of camelot install: https://camelot-py.readthedocs.io/en/master/user/install.html
On a mac:
pip install "camelot-py[cv]"
The documentation page currently actually says [base] rather than [cv], but above in the comments it says [cv] (and stack overflow articles say [cv]).

In python (if you are using jupyter notebook, restart the notebook kernel):
With the following, the rotated column headers are read in just fine.
import camelot
tables = camelot.read_pdf(
    'https://sos.oregon.gov/elections/Documents/statistics/G22-Daily-Ballot-Returns.pdf',
    pages='all')
tables[3].df
Importing rotated text from a PDF table such as with tabula-py in python
Is there a way to import rotated text from a PDF table, such as with tabula-py in python? I realize I can just rename the column headers in this case, but I was wondering if there is a way to set a parameter for importing rotated text. I don't see any mention of rotation in the readthedocs for tabula-py and haven't found other packages that would do this yet either (although I did see a mention of rotating an entire page - which doesn't fit this use case exactly, as renaming the columns would be easier).
Example:
import tabula
list_df = tabula.read_pdf(
    'https://sos.oregon.gov/elections/Documents/statistics/G22-Daily-Ballot-Returns.pdf',
    pages=3
)
list_df[0]
[ "I just tried using camelot and it correctly reads the rotated text in the columns header: this is the result.\n", "As @Francesco mentioned, there is a particular way in which camelot is a better than tabula-py, since camelot finds the rotated text.\nIt was a difficult process to install camelot, so I thought to share some of my learnings here.\n\nDependency for camelot: ghostwriter\nhttps://camelot-py.readthedocs.io/en/master/user/install-deps.html\n\nFor a mac brew install ghostscript tcl-tk and then troubleshoot any errors (many errors for me, but after copy-pasting each error, there was gold at the end of the rainbow).\n\nOverview of camelot install https://camelot-py.readthedocs.io/en/master/user/install.html\n\nOn a mac:\npip install \"camelot-py[cv]\"\nThe documentation page currently actually says [base] rather than [cv], but above in the comments it says [cv] (and stack overflow articles say [cv]).\n\nIn python (if you are using jupyter notebook, restart the notebook kernel)\n\nWith the following, the rotated column headers are read in just fine.\nimport camelot\ntables = camelot.read_pdf(\n 'https://sos.oregon.gov/elections/Documents/statistics/G22-Daily-Ballot-Returns.pdf',\n pages='all')\ntables[3].df\n\n" ]
[ 1, 1 ]
[]
[]
[ "pdf", "python", "tabula_py" ]
stackoverflow_0074392817_pdf_python_tabula_py.txt
Q: List - take elements with equal name
Having a List like this:
[utc1_1.tga, utc1_2.tga, utc1_3.tga, utc1_4.tga, utc2_1.tga, utc2_2.tga, utc2_3.tga, utc2_4.tga, utc3_1.tga, utc3_2.tga, utc3_3.tga, utc3_4.tga,..]

I separated it with this:
images = list(sorted([int(name.split('_')[0]) for name in directory_files]))

so only the timestamp names remain:
[utc1, utc1, utc1, utc1, utc2, utc2, utc2, utc2, utc3, utc3,...]

Basically this list of images are numpy arrays. I would like to add together the arrays with equal timestamps. You can check the actual files at this google drive link
import cv2
from numpy import asarray
from multiprocessing import Process
import glob
from PIL import Image

images = glob.glob(f"{directory_files}*.tga")
for img_name in images:
    with open(img_name, 'rb') as ldr:
        print(img_name)
        image = Image.open(ldr)
        data = asarray(image)
        print(data.shape)

#images = list(sorted([int(name.split('_')[0]) for name in directory_files]))
#print(images)
#same_utc_arrays =
#np.sum(same_utc_arrays)

A: Not sure if I got it. But if you want to separate them by timestamp, I would map them into a dict using the timestamp as the key.
Basically you would need to grab all the different timestamps and start a dict; after that you can separate the files by pushing each one to its timestamp key in the dict.
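A small sketch of that dict-based grouping, plus the summing the question hints at in its commented-out lines (this assumes filenames like utc1_1.tga and images of identical shape):
from collections import defaultdict
import glob

import numpy as np
from PIL import Image

groups = defaultdict(list)
for path in glob.glob("*.tga"):
    key = path.split("_")[0]  # e.g. "utc1"
    groups[key].append(path)

# Sum the pixel arrays that share a timestamp
same_utc_arrays = {}
for key, paths in groups.items():
    arrays = [np.asarray(Image.open(p), dtype=np.int64) for p in paths]
    same_utc_arrays[key] = np.sum(arrays, axis=0)

The int64 cast is deliberate: summing uint8 image arrays directly would silently overflow.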
List - take elements with equal name
Having a List like this:
[utc1_1.tga, utc1_2.tga, utc1_3.tga, utc1_4.tga, utc2_1.tga, utc2_2.tga, utc2_3.tga, utc2_4.tga, utc3_1.tga, utc3_2.tga, utc3_3.tga, utc3_4.tga,..]

I separated it with this:
images = list(sorted([int(name.split('_')[0]) for name in directory_files]))

so only the timestamp names remain:
[utc1, utc1, utc1, utc1, utc2, utc2, utc2, utc2, utc3, utc3,...]

Basically this list of images are numpy arrays. I would like to add together the arrays with equal timestamps. You can check the actual files at this google drive link
import cv2
from numpy import asarray
from multiprocessing import Process
import glob
from PIL import Image

images = glob.glob(f"{directory_files}*.tga")
for img_name in images:
    with open(img_name, 'rb') as ldr:
        print(img_name)
        image = Image.open(ldr)
        data = asarray(image)
        print(data.shape)

#images = list(sorted([int(name.split('_')[0]) for name in directory_files]))
#print(images)
#same_utc_arrays =
#np.sum(same_utc_arrays)
[ "Not sure if I got it.\nBut if you want to separate them by timestamp. I would map them into a dict using timestamp as key.\nBasically you would need to grab all different timestamp then start a dict, after that you could separate them by pushing to the respective timestamp key from the dict.\n" ]
[ 0 ]
[]
[]
[ "combining_marks", "extract", "list", "python" ]
stackoverflow_0074503715_combining_marks_extract_list_python.txt
Q: Pygame Buttons, First button working, second button not working
I have recently been working on a game using python and pygame and have started on buttons for various things. An issue that I have been running into is that when I create a button class and create two objects of that class, one for each button, both of them will turn darker when hovered over by the mouse as expected, but only the first one to process will detect when the player clicks it. I started a new python file with the same button class and mostly the same code and it still has the same issue. I have tried processing the buttons separately and changing various other things, but the only way I can get the second button to work is by processing it first, and then the first button won't work. Here is that simplified code.
import pygame

class Button:
    def __init__(self, x, y, width, height, color, hover_color, click_func):
        self.rect = pygame.Rect(x, y, width, height)
        self.color = color
        self.hover_color = hover_color
        self.display_color = self.color
        self.click_func = click_func

    def process(self):
        events = pygame.event.get()
        mouse_pos = pygame.mouse.get_pos()
        if self.rect.collidepoint(mouse_pos):
            self.display_color = self.hover_color
            for event in events:
                if event.type == pygame.MOUSEBUTTONDOWN:
                    self.click_func()
        else:
            self.display_color = self.color

def press_func1():
    print('press func 1')

def press_func2():
    print('press func 2')

WIN = pygame.display.set_mode((500, 500))

buttons = [
    Button(200, 100, 100, 50, (255, 0, 0), (150, 0, 0), press_func1),
    Button(200, 300, 100, 50, (0, 255, 0), (0, 150, 0), press_func2)
]

running = True
fps = pygame.time.Clock()
while running:
    fps.tick(60)

    WIN.fill((255, 255, 255))

    [button.process() for button in buttons]
    [pygame.draw.rect(WIN, button.display_color, button.rect) for button in buttons]

    pygame.display.update()

If you know what the problem is, I would love to hear it.
A: pygame.event.get() gets all the events and removes them from the queue. See the documentation:

This will get all the messages and remove them from the queue. [...]

If pygame.event.get() is called multiple times per frame, each event is only returned once, so no single call will see all the events. As a result, some events appear to be missed.
Get the list of events once per frame and pass the list of events to Button.process:
import pygame

class Button:
    def __init__(self, x, y, width, height, color, hover_color, click_func):
        self.rect = pygame.Rect(x, y, width, height)
        self.color = color
        self.hover_color = hover_color
        self.display_color = self.color
        self.click_func = click_func

    def process(self, events):
        mouse_pos = pygame.mouse.get_pos()
        if self.rect.collidepoint(mouse_pos):
            self.display_color = self.hover_color
            for event in events:
                if event.type == pygame.MOUSEBUTTONDOWN:
                    self.click_func()

        else:
            self.display_color = self.color

def press_func1():
    print('press func 1')

def press_func2():
    print('press func 2')


WIN = pygame.display.set_mode((500, 500))

buttons = [
    Button(200, 100, 100, 50, (255, 0, 0), (150, 0, 0), press_func1),
    Button(200, 300, 100, 50, (0, 255, 0), (0, 150, 0), press_func2)
]

running = True
fps = pygame.time.Clock()
while running:
    fps.tick(60)

    WIN.fill((255, 255, 255))

    events = pygame.event.get()
    [button.process(events) for button in buttons]
    [pygame.draw.rect(WIN, button.display_color, button.rect) for button in buttons]

    pygame.display.update()
Pygame Buttons, First button working, second button not working
I have recently been working on a game using python and pygame and have started on buttons for various things. An issue that I have been running into is that when I create a button class and create two objects of that class, one for each button, both of them will turn darker when hovered over by the mouse as expected, but only the first one to process will detect when the player clicks it. I started a new python file with the same button class and mostly the same code and it still has the same issue. I have tried processing the buttons separately and changing various other things, but the only way I can get the second button to work is by processing it first, and then the first button won't work. Here is that simplified code.
import pygame

class Button:
    def __init__(self, x, y, width, height, color, hover_color, click_func):
        self.rect = pygame.Rect(x, y, width, height)
        self.color = color
        self.hover_color = hover_color
        self.display_color = self.color
        self.click_func = click_func

    def process(self):
        events = pygame.event.get()
        mouse_pos = pygame.mouse.get_pos()
        if self.rect.collidepoint(mouse_pos):
            self.display_color = self.hover_color
            for event in events:
                if event.type == pygame.MOUSEBUTTONDOWN:
                    self.click_func()
        else:
            self.display_color = self.color

def press_func1():
    print('press func 1')

def press_func2():
    print('press func 2')

WIN = pygame.display.set_mode((500, 500))

buttons = [
    Button(200, 100, 100, 50, (255, 0, 0), (150, 0, 0), press_func1),
    Button(200, 300, 100, 50, (0, 255, 0), (0, 150, 0), press_func2)
]

running = True
fps = pygame.time.Clock()
while running:
    fps.tick(60)

    WIN.fill((255, 255, 255))

    [button.process() for button in buttons]
    [pygame.draw.rect(WIN, button.display_color, button.rect) for button in buttons]

    pygame.display.update()

If you know what the problem is, I would love to hear it.
[ "pygame.event.get() get all the events and remove them from the queue. See the documentation:\n\nThis will get all the messages and remove them from the queue. [...]\n\nIf pygame.event.get() is called multiple times per frame, the events are only retuned once, but never all calls will return all all events. As a result, some events appear to be missed.\nGet the list of events once per frame and pass the list of events to Button.process:\nimport pygame\n\nclass Button:\n def __init__(self, x, y, width, height, color, hover_color, click_func):\n self.rect = pygame.Rect(x, y, width, height)\n self.color = color\n self.hover_color = hover_color\n self.display_color = self.color\n self.click_func = click_func\n\n def process(self, events):\n mouse_pos = pygame.mouse.get_pos()\n if self.rect.collidepoint(mouse_pos):\n self.display_color = self.hover_color\n for event in events:\n if event.type == pygame.MOUSEBUTTONDOWN:\n self.click_func()\n\n else:\n self.display_color = self.color\n\ndef press_func1():\n print('press func 1')\n\ndef press_func2():\n print('press func 2')\n\n\nWIN = pygame.display.set_mode((500, 500))\n\nbuttons = [\n Button(200, 100, 100, 50, (255, 0, 0), (150, 0, 0), press_func1),\n Button(200, 300, 100, 50, (0, 255, 0), (0, 150, 0), press_func2)\n]\n \nrunning = True\nfps = pygame.time.Clock()\nwhile running:\n fps.tick(60)\n \n WIN.fill((255, 255, 255))\n \n events = pygame.event.get()\n [button.process(events) for button in buttons]\n [pygame.draw.rect(WIN, button.display_color, button.rect) for button in buttons]\n \n pygame.display.update()\n\n" ]
[ 1 ]
[]
[]
[ "button", "pygame", "python" ]
stackoverflow_0074503738_button_pygame_python.txt
Q: Applying a function for multiple dataframes
I am trying to apply the function on multiple data frames. I created a list for the data frames. If the ranking is less than 100, the High Performance column would be assigned values copied over from the ranking column; if the ranking is between 100 and 200, the Average column would be assigned the values copied over from the ranking column; and if the ranking is between 200 and 300, the Low Performance column gets assigned values copied from the ranking column. I do not get any error messages when I run the script, but the function does not get applied to the data frames. Any suggestions would be helpful.
for file in tests: #tests would be a list of data frame
    def func (file):
        if (file['ranking']) < 100:
            (file['ranking']) == (file['High Performance'])
        elif (file['ranking']) > 100 & (file['ranking'] < 200):
            (file['ranking'])== (file['Average'])
        elif (file ['ranking']) > 200& (file['ranking'] < 300):
            (file['ranking']) == (file ['Low Performance'])
        else:
            return ''
    file['High Performance'] = file.apply(func, axis=1)
    file['Average'] = file.apply(func, axis=1)
    file['Low Performance'] = file.apply(func, axis=1)

A: You get no error because your code is syntactically correct. But watch out for the logic. I hope the below code change helps:
def func (file):
    if (file['ranking']) < 100:
        (file['ranking']) == (file['High Performance'])
    elif (file['ranking']) > 100 & (file['ranking'] < 200):
        (file['ranking'])== (file['Average'])
    elif (file ['ranking']) > 200& (file['ranking'] < 300):
        (file['ranking']) == (file ['Low Performance'])
    else:
        return ''

for file in tests: #tests would be a list of data frame
    file['High Performance'] = file.apply(func, axis=1)
    file['Average'] = file.apply(func, axis=1)
    file['Low Performance'] = file.apply(func, axis=1)

A: I would suggest considering the processing option without apply.
A frame is passed to the function, and the entire processed column is returned.
import numpy as np
import pandas as pd

def func(file):
    result = file['ranking'].copy()
    result[:] = ''
    result.loc[mask] = file.loc[(mask := file['ranking'].lt(100)), 'High Performance']
    result.loc[mask] = file.loc[(mask := file['ranking'].between(100, 200, inclusive='left')), 'Average']
    result.loc[mask] = file.loc[(mask := file['ranking'].between(200, 300, inclusive='both')), 'Low Performance']
    return result


print('\nOriginal frames:\n')
lst = []  # Data preparation
for _ in range(2):  # adjust
    df = pd.DataFrame(
        {'ranking': np.random.randint(0, 400, 100), 'High Performance': np.random.randint(1000, 10000, 100),
         'Average': np.random.randint(10000, 100000, 100), 'Low Performance': np.random.randint(100000, 1000000, 100)})
    lst.append(df)
    print(df.head(5))

print('\nProcessed frames:\n')
for i, file in enumerate(lst):
    lst[i]['ranking'] = func(file)
    print(lst[i].head(5))

Original frames:

   ranking  High Performance  Average  Low Performance
0      340              7674    53049           893702
1       58              6838    38181           653512
2      313              2383    66811           794135
3      260              3930    24911           968317
4      377              6543    80905           599571
   ranking  High Performance  Average  Low Performance
0      223              6044    77461           237517
1      250              6128    24633           112060
2      396              3701    26695           767052
3      261              9031    64877           415611
4      313              1298    52726           782069

Processed frames:

  ranking  High Performance  Average  Low Performance
0                      7674    53049           893702
1    6838              6838    38181           653512
2                      2383    66811           794135
3  968317              3930    24911           968317
4                      6543    80905           599571
  ranking  High Performance  Average  Low Performance
0  237517              6044    77461           237517
1  112060              6128    24633           112060
2                      3701    26695           767052
3  415611              9031    64877           415611
4                      1298    52726           782069

A: I think you just needed to indent your code correctly.
And I would use a map.
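Reading the question literally (copy the ranking into whichever performance column matches its band), a mask-based sketch that avoids apply entirely could be:
for file in tests:  # tests is the list of dataframes
    r = file['ranking']
    file.loc[r < 100, 'High Performance'] = r
    file.loc[(r >= 100) & (r < 200), 'Average'] = r
    file.loc[(r >= 200) & (r < 300), 'Low Performance'] = r

The boolean masks modify each frame in place, so no assignment back into the list is needed.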
Applying a function for multiple dataframes
I am trying to apply the function on multiple data frames. I created a list for the data frames. If the ranking is less than 100, the High Performance column would be assigned values copied over from the ranking column; if the ranking is between 100 and 200, the Average column would be assigned the values copied over from the ranking column; and if the ranking is between 200 and 300, the Low Performance column gets assigned values copied from the ranking column. I do not get any error messages when I run the script, but the function does not get applied to the data frames. Any suggestions would be helpful.
for file in tests: #tests would be a list of data frame
    def func (file):
        if (file['ranking']) < 100:
            (file['ranking']) == (file['High Performance'])
        elif (file['ranking']) > 100 & (file['ranking'] < 200):
            (file['ranking'])== (file['Average'])
        elif (file ['ranking']) > 200& (file['ranking'] < 300):
            (file['ranking']) == (file ['Low Performance'])
        else:
            return ''
    file['High Performance'] = file.apply(func, axis=1)
    file['Average'] = file.apply(func, axis=1)
    file['Low Performance'] = file.apply(func, axis=1)
[ "You get no error bc your code is syntactically correct. But watch out for the logic. I hope the below code change helps:\ndef func (file):\n if (file['ranking']) < 100:\n (file['ranking']) == (file['High Performance'])\n elif (file['ranking']) > 100 & (file['ranking'] < 200):\n (file['ranking'])== (file['Average'])\n elif (file ['ranking']) > 200& (file['ranking'] < 300):\n (file['ranking']) == (file ['Low Performance'])\n else: \n return ''\n \nfor file in tests: #tests would be a list of data frame\n file['High Performance'] = file.apply(func, axis=1)\n file['Average'] = file.apply(functionss, axis=1)\n file['Low Performance'] = file.apply(functionss, axis=1)\n\n", "I would suggest considering the processing option without apply. A frame is passed to the function, and the entire processed column is returned\nimport numpy as np\nimport pandas as pd\n\ndef func(file):\n result = file['ranking'].copy()\n result[:] = ''\n result.loc[mask] = file.loc[(mask := file['ranking'].lt(100)), 'High Performance']\n result.loc[mask] = file.loc[(mask := file['ranking'].between(100, 200, inclusive='left')), 'Average']\n result.loc[mask] = file.loc[(mask := file['ranking'].between(200, 300, inclusive='both')), 'Low Performance']\n return result\n\n\nprint('\\nOriginal frames:\\n')\nlst = [] # Data preparation\nfor _ in range(2): # adjust\n df = pd.DataFrame(\n {'ranking': np.random.randint(0, 400, 100), 'High Performance': np.random.randint(1000, 10000, 100),\n 'Average': np.random.randint(10000, 100000, 100), 'Low Performance': np.random.randint(100000, 1000000, 100)})\n lst.append(df)\n print(df.head(5))\n\nprint('\\nProcessed frames:\\n')\nfor i, file in enumerate(lst):\n lst[i]['ranking'] = func(file)\n print(lst[i].head(5))\n\nOriginal frames:\n\n ranking High Performance Average Low Performance\n0 340 7674 53049 893702\n1 58 6838 38181 653512\n2 313 2383 66811 794135\n3 260 3930 24911 968317\n4 377 6543 80905 599571\n ranking High Performance Average Low Performance\n0 223 6044 77461 237517\n1 250 6128 24633 112060\n2 396 3701 26695 767052\n3 261 9031 64877 415611\n4 313 1298 52726 782069\n\nProcessed frames:\n\n ranking High Performance Average Low Performance\n0 7674 53049 893702\n1 6838 6838 38181 653512\n2 2383 66811 794135\n3 968317 3930 24911 968317\n4 6543 80905 599571\n ranking High Performance Average Low Performance\n0 237517 6044 77461 237517\n1 112060 6128 24633 112060\n2 3701 26695 767052\n3 415611 9031 64877 415611\n4 1298 52726 782069\n\n", "I think you just needed to indent your code correctly.\nAnd I would you a map.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074501622_list_python.txt
Q: Python, Twitter Sentiment analysis
I am getting this error upon running my code.
text = str(text.encode("utf-8"))

AttributeError: 'float' object has no attribute 'encode'

I tried to convert my data into strings using
df['Translated_message'] = df['Translated_message'].values.astype('string')

but that didn't work.
A: Text is a float. Check to cast as str before encoding.
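A short sketch of that cast, assuming the floats are the NaN placeholders pandas uses for missing values (NaN is itself a float, which is the usual reason a text column suddenly contains floats):
import pandas as pd

# Replace missing values, then force the whole column to str before encoding
df['Translated_message'] = df['Translated_message'].fillna('').astype(str)

text = df['Translated_message'].iloc[0]
data = text.encode("utf-8")  # now safe: every entry is a str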
Python, Twitter Sentiment analysis
I am getting this error upon running my code.
text = str(text.encode("utf-8"))

AttributeError: 'float' object has no attribute 'encode'

I tried to convert my data into strings using
df['Translated_message'] = df['Translated_message'].values.astype('string')

but that didn't work.
[ "Text is a float. Check to cast as str before encoding.\n" ]
[ 0 ]
[]
[]
[ "nltk", "numpy", "pandas", "python" ]
stackoverflow_0074503855_nltk_numpy_pandas_python.txt
Q: 3D Phase portrait of Rössler System using Python
I'm running into a specific problem when attempting to plot the 3D phase portrait of the Rössler system in Python. The problem is that certain arrows are excessively long, and thus the visualization isn't a good one at all: Bad 3d phase portrait
This is my code so far, and I don't really know what to alter to make an appropriate plot. Any help would be much appreciated.
import numpy as np
import matplotlib.pyplot as plt

# Axes, grid
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection ='3d')
x, y, z = np.meshgrid(np.arange(-20, 20, 4), np.arange(-20, 20, 4), np.arange(0, 20, 4))

# Define vector field
u = -y - z
v = x + (1/5)*y
w = 1/5 + (x - 5/2)*z

# Plot vector field
ax.quiver(x, y, z, u, v, w, length=0.1, normalize = False)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()

I haven't been able to try any alternatives, largely because I'm not really sure of what to do.
A: I believe that you won't be able to accurately visualize this vector field with quivers, because there is quite a big variation in magnitude in your view area. A better way is to visualize streamlines, and that's not easy either:

matplotlib doesn't support 3D streamlines.
Plotly supports streamtubes, but when I tried them on my vector fields I usually got "mehhh" results, meaning it was really difficult to get them to proper scale.
I believe that the easiest solution is to use the SymPy Plotting Backend module, which exposes a plot_vector function that also plots streamlines. Note that I am the developer of this module, so I'm going to showcase it for you. In particular, this module exposes a backend to create plots with K3D-Jupyter which, for this particular vector field, produces excellent results.

from sympy import *
from spb import *
var("x:z, u:w")

u = -y - z
v = x + (1/5)*y
w = 1/5 + (x - 5/2)*z

plot_vector(
    [u, v, w], (x, -20, 20), (y, -20, 20), (z, 0, 20), # vector field and ranges
    backend=KB, # choose the plotting library: K3D-Jupyter
    streamlines=True, # enable or disable streamlines
    stream_kw={ # customize streamlines
        "starts": True, # Ask the streamlines to start from random points
        "npoints": 150 # number of starting points for the streamlines
    },
    n=50, # number of discretization points. Don't go too crazy, as the memory requirement is at least n^3
    use_cm=True, # use a color map. Colors indicate the magnitude of the vector field.
)

In order to use the streamline functionalities, you will have to install all the requirements, which are quite heavy (if I recall correctly, at least 200MB).
pip install sympy_plot_backends[all]
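If you'd rather stay inside matplotlib, one common workaround for the wildly varying magnitudes is to let quiver normalize every arrow to the same length, so only direction is shown (a sketch built on the question's own grid):
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection='3d')
x, y, z = np.meshgrid(np.arange(-20, 20, 4), np.arange(-20, 20, 4), np.arange(0, 20, 4))

u = -y - z
v = x + (1/5)*y
w = 1/5 + (x - 5/2)*z

# normalize=True gives every arrow the same length; magnitude information
# is lost, but the direction field stays readable
ax.quiver(x, y, z, u, v, w, length=1.5, normalize=True)
plt.show()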
3D Phase portrait of Rössler System using Python
I'm running into a specific problem when attempting to plot the 3D phase portrait of the Rössler system in Python. The problem is that certain arrows are excessively long, and thus the visualization isn't a good one at all: Bad 3d phase portrait
This is my code so far, and I don't really know what to alter to make an appropriate plot. Any help would be much appreciated.
import numpy as np
import matplotlib.pyplot as plt

# Axes, grid
fig = plt.figure(figsize=(10, 10))
ax = plt.axes(projection ='3d')
x, y, z = np.meshgrid(np.arange(-20, 20, 4), np.arange(-20, 20, 4), np.arange(0, 20, 4))

# Define vector field
u = -y - z
v = x + (1/5)*y
w = 1/5 + (x - 5/2)*z

# Plot vector field
ax.quiver(x, y, z, u, v, w, length=0.1, normalize = False)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.show()

I haven't been able to try any alternatives, largely because I'm not really sure of what to do.
[ "I believe that you won't be able to accurately visualize this vector field with quivers, because there is quite a big variation in magnitude in your view area. A better way is to visualize streamlines, and that's not easy either:\n\nmatplotlib doesn't support 3D streamlines.\nPlotly support streamtubes, but when I tried it on my vector fields I usually got \"mehhh\" results, meaning it was really difficult to get them to proper scale.\nI believe that the easiest solution is to use the SymPy Plotting Backend module, which exposes a plot_vector function that also plot streamlines. Note that I am the developer of this module, so I'm going to showcase it for you. In particular, this module exposes a backend to create plots with K3D-Jupyter which, for this particular vector field, produces excellent results.\n\nfrom sympy import *\nfrom spb import *\nvar(\"x:z, u:w\")\n\nu = -y - z\nv = x + (1/5)*y\nw = 1/5 + (x - 5/2)*z\n\nplot_vector(\n [u, v, w], (x, -20, 20), (y, -20, 20), (z, 0, 20), # vector field and ranges\n backend=KB, # chose the plotting library: K3D-Jupyter\n streamlines=True, # enable or disable streamlines\n stream_kw={ # customize streamlines\n \"starts\": True, # Ask the streamlines to start from random points\n \"npoints\": 150 # number of starting points for the streamlines\n },\n n=50, # number of discretization points. Don't go too crazy as memory requirements is at least n^3\n use_cm=True, # use color map. Colors indicates the magnitude of the vector field.\n)\n\n\nIn order to use the streamline functionalities, you will have to install all the requirements, which are quite heavy (If I recall correctly, at least 200MB).\npip install sympy_plot_backends[all]\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074503174_matplotlib_python.txt
Q: I need a way to compare two strings in python without using sets in a pandas dataframe
I am currently working on a huge csv file with pandas, and I need to find and print the similarity between the selected row and every other row. For example, if the string is "Card" and the second string is "Credit Card Debit Card", it should return 2; or if the first string is "Credit Card" and the second string is "Credit Card Debit Card", it should return 3, because 3 of the words match the first string. I tried solving this using sets, but because sets are unique and contain no duplicates, the first example returns 1 instead of 2 - as a set, "Credit Card Debit Card" is {"Credit", "Card", "Debit"}. Is there any way that I can calculate this? The formula of similarity is ((numberOfSameWords)/whichStringisLonger)*100 as explained in this photo:
I tried so many things like Jaccard Similarity, but they all work with sets and return wrong answers. Thanks for any help.
The code I tried running:
def test(row1, row2):
    return str(round(len(np.intersect1d(row1.split(), row2.split())) / max(len(row1.split()), len(row2.split()))*100, 2))

data = int(input("Which index should be tested:"))
for j in range(0,10):
    print(test(dff['Product'].iloc[data], dff['Product'].iloc[j]))

and my dataframe currently looks like this: print(df.sample(10).to_dict("list")) returned me:
{'Product': ['Bank account or service', 'Credit card', 'Credit reporting', 'Credit reporting credit repair services or other personal consumer reports', 'Credit reporting', 'Mortgage', 'Debt collection', 'Mortgage', 'Mortgage', 'Credit reporting'], 'Issue': ['Deposits and withdrawals', 'Billing disputes', 'Incorrect information on credit report', "Problem with a credit reporting company's investigation into an existing problem", 'Incorrect information on credit report', 'Applying for a mortgage or refinancing an existing mortgage', 'Disclosure verification of debt', 'Loan servicing payments escrow account', 'Loan servicing payments escrow account', 'Incorrect information on credit report'], 'Company': ['CITIBANK NA', 'FIRST NATIONAL BANK OF OMAHA', 'EQUIFAX INC', 'Experian Information Solutions Inc', 'Experian Information Solutions Inc', 'BANK OF AMERICA NATIONAL ASSOCIATION', 'AllianceOne Recievables Management', 'SELECT PORTFOLIO SERVICING INC', 'OCWEN LOAN SERVICING LLC', 'Experian Information Solutions Inc'], 'State': ['CA', 'WA', 'FL', 'UT', 'MI', 'CA', 'WA', 'IL', 'TX', 'CA'], 'ZIP_code': ['92606', '98272', '329XX', '84321', '486XX', '94537', '984XX', '60473', '76247', '91401'], 'Complaint_ID': [90452, 2334443, 1347696, 2914771, 1788024, 2871939, 1236424, 1619712, 2421373, 1803691]}
A: You can use numpy.intersect1d to get the common words, but the % is different for the third row.
import numpy as np

df["Similarity_%"] = (
    df.apply(lambda x: "%" + str(round(len(np.intersect1d(x['Col1'].split(), x['Col2'].split()))
                                       / max(len(x["Col1"].split()), len(x["Col2"].split()))
                                       *100, 2)), axis=1)
    )

# Output :
print(df)
                                                                 Col1                                  Col2 Similarity_%
0                                                     Debt collection                       Debt collection       %100.0
1                                                     Debt collection                              Mortgage         %0.0
2                                                 Managing loan lease               Problems end loan lease        %50.0
3                                                 Managing loan lease                   Struggling pay loan       %33.33
4  Credit reporting credit repair services personal consumer reports  Payday loan title loan personal loan        %12.5

# Input used:
import pandas as pd

df = pd.DataFrame({'Col1': ['Debt collection', 'Debt collection', 'Managing loan lease', 'Managing loan lease',
                            'Credit reporting credit repair services personal consumer reports'],
                   'Col2': ['Debt collection', 'Mortgage', 'Problems end loan lease', 'Struggling pay loan',
                            'Payday loan title loan personal loan']})

# Update:
Based on the second given dataframe in your question, you can use a cross join (with pandas.DataFrame.merge) to compare each row of the column Product to the rest of the rows of the same column.
Try this:
out = df[["Product"]].merge(df[["Product"]], how="cross", suffixes=("", "_cross"))

out["Similarity_%"] = (
    out.apply(lambda x: "%" + str(round(len(np.intersect1d(x['Product'].split(), x['Product_cross'].split()))
                                        / max(len(x["Product"].split()), len(x["Product_cross"].split()))
                                        *100, 2)), axis=1)
    )

With a dataframe/column of 10 rows, the result will have 100 rows plus a similarity column.
A: You can try this:
import pandas as pd

l1 = ["Debt collection", "Debt collection", "Managing loan lease", "Managing loan lease",
      "Credit reporting credit repair services personal consumer reports", "Credit reporting credit repair services personal consumer report"]
l2 = ["Debt collection", "Mortgage", "Problems end loan lease", "Struggling pay loan",
      "Payday loan title loan personal loan", "Credit card prepaid card"]

df = pd.DataFrame(l1, columns=["col1"])
df["col2"] = l2


def similarity(row1, row2):
    # calculate longest row
    longestSentence = 0
    commonWords = 0
    wordsRow1 = [x.upper() for x in row1.split()]
    wordsRow2 = [x.upper() for x in row2.split()]
    # calculate similar words in both sentences
    common = list(set(wordsRow1).intersection(wordsRow2))
    if len(wordsRow1) > len(wordsRow2):
        longestSentence = len(wordsRow1)
        commonWords = calculate(common, wordsRow1)
    else:
        longestSentence = len(wordsRow2)
        commonWords = calculate(common, wordsRow2)
    return (commonWords / longestSentence) * 100

def calculate(common, longestRow):
    sum = 0
    for word in common:
        sum += longestRow.count(word)
    return sum

df['similarity'] = df.apply(lambda x: similarity(x.col1, x.col2), axis=1)

print(df)
A: You may try this:
def countCommonWords( string1, string2):
    words1 = string1.lower().split()
    words2 = string2.lower().split()
    n = 0
    for word1 in words1:
        if word1 in words2: n += 1
    return n

Please note that:
countCommonWords( 'a b c a', 'c b a') will return 3,
but:
countCommonWords( 'c b a', 'a b c a') will return 4 and may be your solution.
We do not know if your search string may contain doubled words.
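As a supplement, here is a compact sketch that follows the question's formula exactly - count how many words of the longer string appear in the shorter one (duplicates included), divided by the longer length:
def similarity(a: str, b: str) -> float:
    wa, wb = a.lower().split(), b.lower().split()
    longer, shorter = (wa, wb) if len(wa) >= len(wb) else (wb, wa)
    # duplicates in the longer string are counted, unlike a plain set intersection
    common = sum(1 for word in longer if word in set(shorter))
    return common / len(longer) * 100

print(similarity("Credit Card", "Credit Card Debit Card"))  # 3 of 4 words match -> 75.0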
I need a way to compare two strings in python without using sets in a pandas dataframe
I am currently working on a huge csv file with pandas, and I need to find and print the similarity between the selected row and every other row. For example, if the string is "Card" and the second string is "Credit Card Debit Card", it should return 2; or if the first string is "Credit Card" and the second string is "Credit Card Debit Card", it should return 3, because 3 of the words match the first string. I tried solving this using sets, but because sets are unique and contain no duplicates, the first example returns 1 instead of 2 - as a set, "Credit Card Debit Card" is {"Credit", "Card", "Debit"}. Is there any way that I can calculate this? The formula of similarity is ((numberOfSameWords)/whichStringisLonger)*100 as explained in this photo:
I tried so many things like Jaccard Similarity, but they all work with sets and return wrong answers. Thanks for any help.
The code I tried running:
def test(row1, row2):
    return str(round(len(np.intersect1d(row1.split(), row2.split())) / max(len(row1.split()), len(row2.split()))*100, 2))

data = int(input("Which index should be tested:"))
for j in range(0,10):
    print(test(dff['Product'].iloc[data], dff['Product'].iloc[j]))

and my dataframe currently looks like this: print(df.sample(10).to_dict("list")) returned me:
{'Product': ['Bank account or service', 'Credit card', 'Credit reporting', 'Credit reporting credit repair services or other personal consumer reports', 'Credit reporting', 'Mortgage', 'Debt collection', 'Mortgage', 'Mortgage', 'Credit reporting'], 'Issue': ['Deposits and withdrawals', 'Billing disputes', 'Incorrect information on credit report', "Problem with a credit reporting company's investigation into an existing problem", 'Incorrect information on credit report', 'Applying for a mortgage or refinancing an existing mortgage', 'Disclosure verification of debt', 'Loan servicing payments escrow account', 'Loan servicing payments escrow account', 'Incorrect information on credit report'], 'Company': ['CITIBANK NA', 'FIRST NATIONAL BANK OF OMAHA', 'EQUIFAX INC', 'Experian Information Solutions Inc', 'Experian Information Solutions Inc', 'BANK OF AMERICA NATIONAL ASSOCIATION', 'AllianceOne Recievables Management', 'SELECT PORTFOLIO SERVICING INC', 'OCWEN LOAN SERVICING LLC', 'Experian Information Solutions Inc'], 'State': ['CA', 'WA', 'FL', 'UT', 'MI', 'CA', 'WA', 'IL', 'TX', 'CA'], 'ZIP_code': ['92606', '98272', '329XX', '84321', '486XX', '94537', '984XX', '60473', '76247', '91401'], 'Complaint_ID': [90452, 2334443, 1347696, 2914771, 1788024, 2871939, 1236424, 1619712, 2421373, 1803691]}
[ "You can use numpy.intersect1d to get the common words but the % is different for the third row.\nimport numpy as np\n\ndf[\"Similarity_%\"] = (\n df.apply(lambda x: \"%\" + str(round(len(np.intersect1d(x['Col1'].split(), x['Col2'].split()))\n / max(len(x[\"Col1\"].split()), len(x[\"Col2\"].split()))\n *100, 2)), axis=1)\n )\n\n# Output :\nprint(df)\n Col1 Col2 Similarity_%\n0 Debt collection Debt collection %100.0\n1 Debt collection Mortgage %0.0\n2 Managing loan lease Problems end loan lease %50.0\n3 Managing loan lease Struggling pay loan %33.33\n4 Credit reporting credit repair services personal consumer reports Payday loan title loan personal loan %12.5\n\n# Input used:\nimport pandas as pd\n\ndf= pd.DataFrame({'Col1': ['Debt collection', 'Debt collection', 'Managing loan lease', 'Managing loan lease', \n 'Credit reporting credit repair services personal consumer reports'],\n 'Col2': ['Debt collection', 'Mortgage', 'Problems end loan lease', 'Struggling pay loan',\n 'Payday loan title loan personal loan']})\n\n# Update:\nBased on the second given dataframe in your question, you can use a cross join (with pandas.DataFrame.merge) to compare each row of the column Product to the rest of the rows of the same colum.\nTry this :\nout = df[[\"Product\"]].merge(df[[\"Product\"]], how=\"cross\", suffixes=(\"\", \"_cross\"))\n\nout[\"Similarity_%\"] = (\n out.apply(lambda x: \"%\" + str(round(len(np.intersect1d(x['Product'].split(), x['Product_cross'].split()))\n / max(len(x[\"Product\"].split()), len(x[\"Product_cross\"].split()))\n *100, 2)), axis=1)\n )\n\nWith a dataframe/colum of 10 rows, the result will have 100 rows plus a similarity column.\n", "You can try this:\nimport pandas as pd\n\nl1 = [\"Debt collection\", \"Debt collection\", \"Managing loan lease\", \"Managing loan lease\",\n \"Credit reporting credit repair services personal consumer reports\", \"Credit reporting credit repair services personal consumer report\"]\nl2 = [\"Debt collection\", \"Mortgage\", \"Problems end loan lease\", \"Struggling pay loan\",\n \"Payday loan title loan personal loan\", \"Credit card prepaid card\"]\n\ndf = pd.DataFrame(l1, columns=[\"col1\"])\ndf[\"col2\"] = l2\n\n\ndef similarity(row1, row2):\n # calculate longest row\n longestSentence = 0\n commonWords = 0\n wordsRow1 = [x.upper() for x in row1.split()]\n wordsRow2 = [x.upper() for x in row2.split()]\n # calculate similar words in both sentences\n common = list(set(wordsRow1).intersection(wordsRow2))\n if len(wordsRow1) > len(wordsRow2):\n longestSentence = len(wordsRow1)\n commonWords = calculate(common, wordsRow1)\n else:\n longestSentence = len(wordsRow2)\n commonWords = calculate(common, wordsRow2)\n return (commonWords / longestSentence) * 100\n\ndef calculate(common, longestRow):\n sum = 0\n for word in common:\n sum += longestRow.count(word)\n return sum\n\ndf['similarity'] = df.apply(lambda x: similarity(x.col1, x.col2), axis=1)\n\nprint(df)\n\n", "You may try this:\ndef countCommonWords( string1, string2):\n words1 = string1.lower().split()\n words2 = string2.lower().split()\n n = 0\n for word1 in words1:\n if word1 in words2: n+=1\n return n\n\nPlease note that:\ncountCommonWords( 'a b c a', 'c b a') will return 3,\nbut:\ncountCommonWords( 'c b a', 'a b c a') will return 4 and may be your solution.\nWe do not know if your search string may contain doubled words\n" ]
[ 0, 0, 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074500458_pandas_python.txt
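A duplicate-aware sketch for the question above, counting word occurrences with plain lists instead of sets; the two-row dataframe below is a made-up example built from the strings in the question:
import pandas as pd

def similarity(s1, s2):
    # word lists, not sets, so duplicates are kept
    w1, w2 = s1.lower().split(), s2.lower().split()
    shorter, longer = sorted((w1, w2), key=len)
    # count every word of the longer list that also appears in the shorter one
    common = sum(1 for word in longer if word in shorter)
    return common / len(longer) * 100

df = pd.DataFrame({'Col1': ['Card', 'Credit Card'],
                   'Col2': ['Credit Card Debit Card', 'Credit Card Debit Card']})
df['Similarity_%'] = df.apply(lambda x: similarity(x['Col1'], x['Col2']), axis=1)
print(df)  # 'Card' matches 2 of 4 words -> 50.0; 'Credit Card' matches 3 of 4 -> 75.0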
Q: Sorting a list of dictionaries based on one couple key-value One of the dictionary in the list is: [{' School': 'GP', 'Age': '18', 'StudyTime': '2', 'Failures': '0', 'Health': '3', 'Absences': '6', 'G1': '5', 'G2': '6', 'G3': '6'} ………………….] I want to sort them by Age so the output should be like: Range for age is 15 to 22 { 15 : [ {'School': 'GP', 'StudyTime': 4.2, 'Failures': 3, 'Health': 3, 'Absences': 6, 'G1': 7, 'G2': 8, 'G3': 10 }, { ... other dictionary }, ... ], 16 : [ {'School': 'MS', 'StudyTime': 1, 'Failures': 1.2, 'Health': 4, 'Absences': 10, 'G1': 9, 'G2': 11, 'G3': 7 }, { ... other dictionary }, ... ], ... } I have tried to solve this problem with the code below, but the index for age_list goes out of range: age_list = [15,16,17,18,19,20,21,22] #dict_list is the list of dictionaries that need to be sorted ` res = defaultdict(list) for i in age_list: for j in dict_list: if age_list[i] == j['Age']: res[i].append(j)` print(res) A: Based on the expected input and output, it appears that there are three tasks involved: Digit strings should be converted to integers when possible (e.g., "15" should become 15). Dictionary entries should be collated based on the "Age" key. In the sorted result, each entry should no longer have the "Age" key. Below is a function that receives a single dictionary entry and performs the first task. def process_dictionary(dictionary): result = {} for key, value in dictionary.items(): # might want to use .isnumeric or .isdecimal instead if value.isdigit(): result[key] = int(value) return result The second and third tasks can be achieved by slightly modifying your original code. from collections import defaultdict def collate_by_age(entries): result = defaultdict(list) for entry in entries: # pop (remove) "Age" key from entry age = entry.pop("Age") # append to result result[age].append(entry) return result Putting these together, entries = [process_dictionary(dictionary) for dictionary in dictionaries] result = collate_by_age(entries) If you're only interested in a specific set of ages, e.g., from 15 to 22, you can simply loop over the result dictionary. target_age = {age: result[age] for age in range(15, 23)}
Sorting a list of dictionaries based on one couple key-value
One of the dictionaries in the list is:
[{' School': 'GP', 'Age': '18', 'StudyTime': '2', 'Failures': '0', 'Health': '3', 'Absences': '6', 'G1': '5', 'G2': '6', 'G3': '6'} ………………….]

I want to sort them by Age, so the output should be like:
Range for age is 15 to 22
{ 15 : [ {'School': 'GP', 'StudyTime': 4.2, 'Failures': 3, 'Health': 3, 'Absences': 6, 'G1': 7, 'G2': 8, 'G3': 10 }, { ... other dictionary }, ... ],
16 : [ {'School': 'MS', 'StudyTime': 1, 'Failures': 1.2, 'Health': 4, 'Absences': 10, 'G1': 9, 'G2': 11, 'G3': 7 }, { ... other dictionary }, ... ],
... }

I have tried to solve this problem with the code below, but the index for age_list goes out of range:
age_list = [15,16,17,18,19,20,21,22]
#dict_list is the list of dictionaries that need to be sorted
res = defaultdict(list)
for i in age_list:
    for j in dict_list:
        if age_list[i] == j['Age']:
            res[i].append(j)
print(res)
[ "Based on the expected input and output, it appears that there are three tasks involved:\n\nDigit strings should be converted to integers when possible (e.g., \"15\" should become 15).\nDictionary entries should be collated based on the \"Age\" key.\nIn the sorted result, each entry should no longer have the \"Age\" key.\n\nBelow is a function that receives a single dictionary entry and performs the first task.\ndef process_dictionary(dictionary):\n result = {}\n for key, value in dictionary.items():\n # might want to use .isnumeric or .isdecimal instead\n if value.isdigit():\n result[key] = int(value)\n return result\n\nThe second and third tasks can be achieved by slightly modifying your original code.\nfrom collections import defaultdict\n\ndef collate_by_age(entries):\n result = defaultdict(list)\n for entry in entries:\n # pop (remove) \"Age\" key from entry\n age = entry.pop(\"Age\")\n # append to result\n result[age].append(entry)\n return result\n\n\nPutting these together,\nentries = [process_dictionary(dictionary) for dictionary in dictionaries]\nresult = collate_by_age(entries)\n\nIf you're only interested in a specific set of ages, e.g., from 15 to 22, you can simply loop over the result dictionary.\ntarget_age = {age: result[age] for age in range(15, 23)}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "python" ]
stackoverflow_0074486182_dictionary_python.txt
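For reference on the out-of-range error in the question above: the loop indexes age_list by the age value itself (age_list[15] on an 8-element list) and compares an int against the string ages in the data. A minimal corrected sketch, assuming the same dict_list variable:
from collections import defaultdict

dict_list = [{'School': 'GP', 'Age': '18', 'StudyTime': '2', 'Failures': '0',
              'Health': '3', 'Absences': '6', 'G1': '5', 'G2': '6', 'G3': '6'}]

res = defaultdict(list)
for age in range(15, 23):              # iterate over the ages themselves, not list indices
    for entry in dict_list:
        if age == int(entry['Age']):   # the CSV values are strings, so convert before comparing
            res[age].append(entry)
print(dict(res))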
Q: How to stop a loop triggered by tkinter in Python I'm new to Python and even more so to tkinter, and I decided to try to create a start and stop button for an infinite loop through Tkinter. Unfortunately, once I click start, it won't allow me to click stop. The start button remains indented, and I assume this is because the function it triggered is still running. How can I make this 2nd button stop the code?
import tkinter

def loop():
    global stop
    stop = False
    while True:
        if stop == True:
            break
        #The repeating code

def start():
    loop()

def stop():
    global stop
    stop = True

window = tkinter.Tk()
window.title("Loop")
startButton = tkinter.Button(window, text = "Start", command = start)
stopButton = tkinter.Button(window, text = "Pause", command = stop)
startButton.pack()

A: You're calling while True. Long story short, Tk() has its own event loop. So, whenever you call some long-running process, it blocks this event loop and you can't do anything. You should probably use after.
I avoided using global here by just giving an attribute to window.
e.g. -
import tkinter

def stop():

    window.poll = False

def loop():

    if window.poll:
        print("Polling")
        window.after(100, loop)
    else:
        print("Stopped long running process.")

window = tkinter.Tk()
window.poll = True
window.title("Loop")
startButton = tkinter.Button(window, text = "Start", command = loop)
stopButton = tkinter.Button(window, text = "Pause", command = stop)
startButton.pack()
stopButton.pack()
window.mainloop()

A: Thank you, @Pythonista! It works for me for stopping a for loop in tkinter. But it is likely not an effective way to do it, because the frame gets stuck after I press the stop button, as demonstrated below with part of my script:
    def start_scan(self): 
        self.voltage_scan(
            self.start.get(),
            self.stop.get(),
            self.step.get(),
            self.delay.get()
        
        ) 
        
    def start_butt(self): 
        
        if self.poll:
            self.start_scan()
        
    def stop_butt(self):
        self.poll = False

    def voltage_scan(self, start, stop, step, delay):
        
        
        def make_plot():
            plt.scatter(mvol,mcurr)
            plt.xlabel('Voltage / V', fontsize = 12)
            plt.ylabel('Current / A', fontsize = 12)
            plt.tight_layout()
        
        if start < stop:
            stop = stop + 1
        
        
        plt.ion() # enable interactivity
        fig = plt.figure(figsize = (5,3.5), dpi = 100)
        ax = fig.add_subplot(111)
        curr = 0
        mvol = []
        mcurr = []
        
        for vol in range(start, stop, step):
            if self.poll:
                mvol.append(vol)
                #time.sleep(1 / delay)
                self.after(1000*delay) # 1 second delay
                curr += 1
                mcurr.append(curr)
                #print(vol, curr)
                drawnow(make_plot)
            else:
                break

There might be a better way to do that, or at least a way to refresh the frame without closing the program, because I have to restart the program every time I press the stop button.
How to stop a loop triggered by tkinter in Python
I'm new to Python and even more so to tkinter, and I decided to try to create a start and stop button for an infinite loop through Tkinter. Unfortunately, once I click start, it won't allow me to click stop. The start button remains indented, and I assume this is because the function it triggered is still running. How can I make this 2nd button stop the code? import tkinter def loop(): global stop stop = False while True: if stop == True: break #The repeating code def start(): loop() def stop(): global stop stop = True window = tkinter.Tk() window.title("Loop") startButton = tkinter.Button(window, text = "Start", command = start) stopButton = tkinter.Button(window, text = "Pause", command = stop) startButton.pack()
[ "You're calling while True. Long story short, Tk() has it's own event loop. So, whenever you call some long running process it blocks this event loop and you can't do anything. You should probably use after\nI avoided using global here by just giving an attribute to window.\ne.g. -\nimport tkinter\n\ndef stop():\n\n window.poll = False\n\ndef loop():\n\n if window.poll:\n print(\"Polling\")\n window.after(100, loop)\n else:\n print(\"Stopped long running process.\")\n\nwindow = tkinter.Tk()\nwindow.poll = True\nwindow.title(\"Loop\")\nstartButton = tkinter.Button(window, text = \"Start\", command = loop)\nstopButton = tkinter.Button(window, text = \"Pause\", command = stop)\nstartButton.pack()\nstopButton.pack()\nwindow.mainloop()\n\n", "Thank you! @Pythonista, It works for me stopping a for loop in the tkinter. But it is likely not an effective way to do that because the frame was stuck after I pressed the stop button as demonstrated below with part of script: `\n def start_scan(self): \n self.voltage_scan(\n self.start.get(),\n self.stop.get(),\n self.step.get(),\n self.delay.get()\n \n ) \n \n def start_butt(self): \n \n if self.poll:\n self.start_scan()\n \n def stop_butt(self):\n self.poll = False\n\n def voltage_scan(self, start, stop, step, delay):\n \n \n def make_plot():\n plt.scatter(mvol,mcurr)\n plt.xlabel('Voltage / V', fontsize = 12)\n plt.ylabel('Current / A', fontsize = 12)\n plt.tight_layout()\n \n if start < stop:\n stop = stop + 1\n \n \n plt.ion() # enable interactivity\n fig = plt.figure(figsize = (5,3.5), dpi = 100)\n ax = fig.add_subplot(111)\n curr = 0\n mvol = []\n mcurr = []\n \n for vol in range(start, stop, step):\n if self.poll:\n mvol.append(vol)\n #time.sleep(1 / delay)\n self.after(1000*delay) # 1 second delay\n curr += 1\n mcurr.append(curr)\n #print(vol, curr)\n drawnow(make_plot)\n else:\n break\n\n`\n\nThere might be better way to do that. or at least there would be a way to refresh the frame without closing the program. bcz I have to restart the program after evrytime I press the stop button.\n" ]
[ 2, 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0036847769_python_tkinter.txt
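An alternative sketch for the question above using a background thread and threading.Event; this keeps the Tk event loop free without rescheduling via after(). The 0.5-second interval is arbitrary, pressing Start twice would spawn a second worker, and any tkinter calls must still happen on the main thread:
import threading
import tkinter

stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        print("looping")          # the repeating code goes here
        stop_event.wait(0.5)      # sleeps, but wakes up immediately once stop is set

def start():
    stop_event.clear()
    threading.Thread(target=worker, daemon=True).start()

window = tkinter.Tk()
window.title("Loop")
tkinter.Button(window, text="Start", command=start).pack()
tkinter.Button(window, text="Pause", command=stop_event.set).pack()
window.mainloop()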
Q: I want to pass side input AsDIct but getting error "ValueError: dictionary update sequence element #0 has length 101; 2 is required" class load_side_input(beam.DoFn): def process(self,pubsub_message): message = pubsub_message.decode("utf8") output:typing.Dict={} for key in message.keys(): output[key] = self.tag_model[key] return [output] side_input = (p | "AMM Events" >> beam.io.ReadFromPubSub(subscription=opts.ammSub) | "Trigger event" >> beam.WindowInto(window.GlobalWindows(), trigger=trigger.Repeatedly(trigger.AfterCount(1)), accumulation_mode=trigger.AccumulationMode.DISCARDING) | "Parse and Update Cache" >> beam.ParDo(load_side_input()) ) enrichment = (rows | 'Data Validation and Enrichment' >> beam.ParDo(validation(),y_side=AsDict(side_input)) ) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 434, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) ValueError: dictionary update sequence element #0 has length 101; 2 is required [while running 'Data Enrichment-ptransform-128'] A: You feed the function beam.pvalue.AsDict the incorrect input format. According to the documentation: Parameters: pcoll – Input pcollection. All elements should be key-value pairs (i.e. 2-tuples) with unique keys. Here is a minimum working example, which can be run at Apache Play import apache_beam as beam def use_side_input(main, side_input): return side_input[main] class BadSideInputCreator(beam.DoFn): def process(self, element): output = {} output['1'] = 'value1' output['2'] = 'value2' yield [output] # this is a list of an dict and not a 2-tuple class GoodSideInputCreator(beam.DoFn): def process(self, element): output = {} output['1'] = 'value1' output['2'] = 'value2' for key, value in output.items(): yield (key, value) # this is a 2-tuple with beam.Pipeline() as pipeline: main = ( pipeline | "init main" >> beam.Create(['1', '2']) ) side = ( pipeline | "init side" >> beam.Create(['dummy']) | beam.ParDo(BadSideInputCreator()) # replace with GoodSideInputCreator ) ( main | "use side input" >> beam.Map(use_side_input, side_input=beam.pvalue.AsDict(side)) | "print" >> beam.Map(print) ) Running with BadSideInputCreator throws your error ValueError: dictionary update sequence element #0 has length 1; 2 is required while with GoodSideInputCreator we get the expected result value1 value2
I want to pass side input AsDIct but getting error "ValueError: dictionary update sequence element #0 has length 101; 2 is required"
class load_side_input(beam.DoFn): def process(self,pubsub_message): message = pubsub_message.decode("utf8") output:typing.Dict={} for key in message.keys(): output[key] = self.tag_model[key] return [output] side_input = (p | "AMM Events" >> beam.io.ReadFromPubSub(subscription=opts.ammSub) | "Trigger event" >> beam.WindowInto(window.GlobalWindows(), trigger=trigger.Repeatedly(trigger.AfterCount(1)), accumulation_mode=trigger.AccumulationMode.DISCARDING) | "Parse and Update Cache" >> beam.ParDo(load_side_input()) ) enrichment = (rows | 'Data Validation and Enrichment' >> beam.ParDo(validation(),y_side=AsDict(side_input)) ) File "/usr/local/lib/python3.9/site-packages/apache_beam/runners/worker/bundle_processor.py", line 434, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) ValueError: dictionary update sequence element #0 has length 101; 2 is required [while running 'Data Enrichment-ptransform-128']
[ "You feed the function beam.pvalue.AsDict the incorrect input format. According to the documentation:\n\nParameters: pcoll – Input pcollection. All elements should be key-value pairs (i.e. 2-tuples) with unique keys.\n\nHere is a minimum working example, which can be run at Apache Play\nimport apache_beam as beam\n\ndef use_side_input(main, side_input):\n return side_input[main]\n\n\nclass BadSideInputCreator(beam.DoFn):\n def process(self, element):\n output = {}\n output['1'] = 'value1'\n output['2'] = 'value2'\n yield [output] # this is a list of an dict and not a 2-tuple\n \nclass GoodSideInputCreator(beam.DoFn):\n def process(self, element):\n output = {}\n output['1'] = 'value1'\n output['2'] = 'value2'\n for key, value in output.items():\n yield (key, value) # this is a 2-tuple\n\nwith beam.Pipeline() as pipeline:\n main = (\n pipeline\n | \"init main\" >> beam.Create(['1', '2'])\n )\n \n side = (\n pipeline\n | \"init side\" >> beam.Create(['dummy'])\n | beam.ParDo(BadSideInputCreator()) # replace with GoodSideInputCreator\n )\n\n (\n main\n | \"use side input\" >> beam.Map(use_side_input, side_input=beam.pvalue.AsDict(side))\n | \"print\" >> beam.Map(print)\n )\n\nRunning with BadSideInputCreator throws your error\nValueError: dictionary update sequence element #0 has length 1; 2 is required\n\nwhile with GoodSideInputCreator we get the expected result\nvalue1\nvalue2\n\n" ]
[ 2 ]
[]
[]
[ "apache_beam", "python" ]
stackoverflow_0074497838_apache_beam_python.txt
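Applying the accepted fix to the asker's load_side_input from the question above: yield 2-tuples instead of a one-element list holding a dict. This sketch assumes the Pub/Sub payload is a JSON object (the original also calls .keys() on a plain str, which would fail on its own):
import json
import apache_beam as beam

class LoadSideInput(beam.DoFn):
    def process(self, pubsub_message):
        # assumption: the message body is a JSON object mapping keys to values
        message = json.loads(pubsub_message.decode("utf8"))
        for key, value in message.items():
            yield (key, value)  # 2-tuples with unique keys, as AsDict expects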
Q: Python how to convert monthly employment data into annual, csv, panda I've been stuck on this problem for two days. Below is the csv file.
df = pd.read_csv('/14100017.csv')
df = pd.DataFrame(data)
df.head()

df_year = df.groupby('REF_DATE')['REF_DATE'].count()
print(df_year)

This is my code. Could you please tell me, give me a hint, or show me a website that has similar questions? How do I convert monthly employment data into annual data by taking the average? This is so confusing. Thank you very much! Much appreciated.
I tried searching for similar questions on Reddit and Stack Overflow; they all used resample and got the result.

A: Just convert REF_DATE to datetime and then extract the year:
df['date'] = pd.to_datetime(df['REF_DATE'])
df['year'] = pd.DatetimeIndex(df['date']).year

Afterwards, you need to aggregate the value by year:
monthly_year_avg = df.groupby('year')['VALUE'].mean()
Python how to convert monthly employment data into annual, csv, panda
I've been stuck on this problem for two days. Below is the csv file.
df = pd.read_csv('/14100017.csv')
df = pd.DataFrame(data)
df.head()

df_year = df.groupby('REF_DATE')['REF_DATE'].count()
print(df_year)

This is my code. Could you please tell me, give me a hint, or show me a website that has similar questions? How do I convert monthly employment data into annual data by taking the average? This is so confusing. Thank you very much! Much appreciated.
I tried searching for similar questions on Reddit and Stack Overflow; they all used resample and got the result.
[ "Just convert REF_DATE to datetime and then extract year:\ndf['date'] = pd.to_datetime(df['REF_DATE'])\ndf['year'] = pd.DatetimeIndex(df['date']).year\n\nAfter, you need to aggregate the value by year:\nmonthly_year_avg = df.groupby('year')['VALUE'].mean()\n\n" ]
[ 2 ]
[]
[]
[ "dataframe", "pandas", "python", "rsample" ]
stackoverflow_0074503891_dataframe_pandas_python_rsample.txt
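Since the question above mentions resample, an equivalent sketch using a DatetimeIndex; it assumes REF_DATE parses to a date and that the figures live in a VALUE column, as in the answer:
import pandas as pd

df = pd.read_csv('14100017.csv')
df['REF_DATE'] = pd.to_datetime(df['REF_DATE'])      # e.g. '2021-03' -> 2021-03-01
annual_avg = (df.set_index('REF_DATE')['VALUE']
                .resample('Y')                       # yearly buckets
                .mean())
print(annual_avg)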
Q: Avoid reCAPTCHA using Selenium I tried to use Selenium (chromedriver) for web scraping, but I always get reCaptchas (around 5-8 in a row) which I have to solve. When I visit the same website manually with Google Chrome, I don't even get one Captcha. I don't use the headless option... Is there any solution to avoid these Captchas? Or to get a maximum of 1-2 Captchas for one request? I mean, it's not a problem for me to solve Captchas, but 5-8 in a row takes too much time.

A: There are captcha solvers like 2captcha that solve them in around 15-40 seconds each. Captcha was made to detect bots in various shapes and forms and, well... that's what it has done. The simple answer is: no, there is no "bypass".
There are some workarounds to avoid the system as a whole, such as using an alt-login, like an app that maybe uses a different API. This can be achieved via appium, which is similar to selenium, or by using an HTTP request library.

A: I ran into the same issue. On the net there are a lot of tips that used to work, like the suggestion in the comments of using specific headers, especially setting the user agent explicitly, or slowing down the actions on the page (like clicking) to mimic real user actions. I found none of them working currently with the newest reCaptcha versions, and fell back to using non-headless mode and manually solving the captcha; my script takes over and does its magic once I have passed the captcha.
Avoid reCAPTCHA using Selenium
I tried to use Selenium (chromedriver) for web scraping, but I always get reCaptchas (around 5-8 in a row) which I have to solve. When I visit the same website manually with Google Chrome, I don't even get one Captcha. I don't use the headless option... Is there any solution to avoid these Captchas? Or to get a maximum of 1-2 Captchas for one request? I mean, it's not a problem for me to solve Captchas, but 5-8 in a row takes too much time.
[ "There are captcha solvers like 2captcha that solve them at around 15-40 seconds each captcha. Captcha was made to detect bots in various shapes and forms and well... that's what it has done. The simple answer is: no, there is no \"bypass\"\nThere are some workarounds to avoid the system as a whole such as using an alt-login, like an app that maybe uses a different API. This can be achieved via appium which is similar to selenium, or by using a HTTPRequest library.\n", "I ran into the same issue. On the net there is a lot of tips that used to work like the suggestion in the comment of using specific headers, especially set the user agent explicitly or slowing down the actions on the page (like clicking) to mock real user actions. I found all of them not working currently with the newest reCaptcha versions and fell back to using non headless mode and manually solve the captcha before my script takes over and does its magic once I passed the captcha.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "recaptcha", "selenium", "selenium_chromedriver", "web_scraping" ]
stackoverflow_0066755142_python_recaptcha_selenium_selenium_chromedriver_web_scraping.txt
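One commonly suggested mitigation for the question above, with no guarantee it still works since detection keeps evolving, is the undetected-chromedriver package, which patches the usual chromedriver fingerprints; a minimal sketch with a placeholder URL:
import undetected_chromedriver as uc   # pip install undetected-chromedriver

driver = uc.Chrome()                   # drop-in replacement for selenium's Chrome driver
driver.get("https://www.example.com")  # placeholder URL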
Q: (Guizero) Writing code that so that when the user clicks on the button a new window will open containg detail from a listbox (not using tkinter) I am having trouble with code where I take a csv file and populate a listbox with it. Then I am using another csv file that relates to the items in the first csv. When the user selects a car and pushes the button called "Get Info" under the listbox, a new window will open. That new window will contain detail on the selected item from my first csv file. When the user clicks on the second button called "Last 3 drivers", a new window will open that gives information relating to the first csv file from my second csv file.
Here is the full code
from guizero import App, ListBox, Text, TextBox, PushButton, Window

app = App("Cars", layout='grid')
app.height = 350
app.width =350
app.bg = 'gray'

text = Text(app, grid =[0,0], text="Buy a Car here!!!!!!\n(Cars are guaranteed to breakdown,no refunds)")
text2 = Text(app, grid= [0,70], text="Select the Car to learn more\n information about it")
text2.size = '10'

listbox = ListBox(app, grid = [0,50])
listbox.bg = 'white'
listbox.text_size = "10"

with open('Cars database.csv') as fh:
    for items in fh:
        items = items.split(",")
        listbox.append(items[1])

def Get_Info():
    with open('Cars database.csv') as fh:
        for items in fh:
            if listbox.value:
                window = Window(app)
                window.height = 250
                window.width = 250
                app.info = listbox.value
                for items in fh:
                    items = items.split(",")
                    output= "\nCar:" + items[1] + "\nModel:" + items[2] + "\nColor:" + items[3] + "\nBody-Style:" + items[4] + "\nAge:" + items[5] + "\nYear:" + items[6] + "\nEngine:" + items[7]
                    if items[1] == listbox.value:
                        txt = Text(window, text=output)
                        txt.size = '13'
            else:
                app.error('Error', text='You must select a car first')

def Last_Driv():
    with open('Cars Drivers.csv') as fh:
        for items2 in fh:
            if listbox.value:
                newwindow = Window(app)
                newwindow.height = 250
                newwindow.width = 250
                app.info = listbox.value
                for items2 in fh:
                    items2 = items2.split(",")
                    output2= "\nName:" + items2[1] + "\nCrashes:" + items2[2] + "\nName2:" + items2[3] + "\nCrashes2:" + items2[4] + "\nName3:" + items2[5] + "\nCrashes3:" + items2[6]
                    if items2[1] == listbox.value:
                        txt2 = Text(window, text=output2)
                        txt2.size = '13'
            else:
                app.error('Error', text='You must select a car first')

my_button = PushButton(app, text='Get Info', grid = [0,80], command = Get_Info)
my_button2 = PushButton(app, text='Last 3 Drivers', grid = [0,90], command = Last_Driv)

app.display()

On the second function, when you press the second button called "Last 3 drivers", it only displays a blank window; it doesn't display the items from my "Cars Drivers" csv file. Here's what my window looks like: [screenshot]. And here's what the window looks like when you press the second button: [blank window screenshot]. There is also another problem, if anyone can help me: I made an error window in case someone presses the button without selecting an item, but once the pop-up error window comes up and you hit "ok", it will pop up another. How do I fix this?

A: You are using the wrong window name
 txt2 = Text(window, text=output2)

should be:
txt2 = Text(newwindow, text=output2)

For the error window repetition, remove the for in this place; you already have the right one later:
 with open('Cars database.csv') as fh:
     for items in fh: #REMOVE THIS
         if listbox.value:
(Guizero) Writing code that so that when the user clicks on the button a new window will open containg detail from a listbox (not using tkinter)
I am having trouble with code where I take a csv file and populate a listbox with it. Then I am using another csv file that relates to the items in the first csv. When the user selects a car and pushes the button called "Get Info" under the listbox, a new window will open. That new window will contain detail on the selected item from my first csv file. When the user clicks on the second button called "Last 3 drivers", a new window will open that gives information relating to the first csv file from my second csv file.
Here is the full code
from guizero import App, ListBox, Text, TextBox, PushButton, Window

app = App("Cars", layout='grid')
app.height = 350
app.width =350
app.bg = 'gray'

text = Text(app, grid =[0,0], text="Buy a Car here!!!!!!\n(Cars are guaranteed to breakdown,no refunds)")
text2 = Text(app, grid= [0,70], text="Select the Car to learn more\n information about it")
text2.size = '10'

listbox = ListBox(app, grid = [0,50])
listbox.bg = 'white'
listbox.text_size = "10"

with open('Cars database.csv') as fh:
    for items in fh:
        items = items.split(",")
        listbox.append(items[1])

def Get_Info():
    with open('Cars database.csv') as fh:
        for items in fh:
            if listbox.value:
                window = Window(app)
                window.height = 250
                window.width = 250
                app.info = listbox.value
                for items in fh:
                    items = items.split(",")
                    output= "\nCar:" + items[1] + "\nModel:" + items[2] + "\nColor:" + items[3] + "\nBody-Style:" + items[4] + "\nAge:" + items[5] + "\nYear:" + items[6] + "\nEngine:" + items[7]
                    if items[1] == listbox.value:
                        txt = Text(window, text=output)
                        txt.size = '13'
            else:
                app.error('Error', text='You must select a car first')

def Last_Driv():
    with open('Cars Drivers.csv') as fh:
        for items2 in fh:
            if listbox.value:
                newwindow = Window(app)
                newwindow.height = 250
                newwindow.width = 250
                app.info = listbox.value
                for items2 in fh:
                    items2 = items2.split(",")
                    output2= "\nName:" + items2[1] + "\nCrashes:" + items2[2] + "\nName2:" + items2[3] + "\nCrashes2:" + items2[4] + "\nName3:" + items2[5] + "\nCrashes3:" + items2[6]
                    if items2[1] == listbox.value:
                        txt2 = Text(window, text=output2)
                        txt2.size = '13'
            else:
                app.error('Error', text='You must select a car first')

my_button = PushButton(app, text='Get Info', grid = [0,80], command = Get_Info)
my_button2 = PushButton(app, text='Last 3 Drivers', grid = [0,90], command = Last_Driv)

app.display()

On the second function, when you press the second button called "Last 3 drivers", it only displays a blank window; it doesn't display the items from my "Cars Drivers" csv file. Here's what my window looks like: [screenshot]. And here's what the window looks like when you press the second button: [blank window screenshot]. There is also another problem, if anyone can help me: I made an error window in case someone presses the button without selecting an item, but once the pop-up error window comes up and you hit "ok", it will pop up another. How do I fix this?
[ "You are using the wrong window name\n txt2 = Text(window, text=output2)\n\n\nshould be :\ntxt2 = Text(newwindow, text=output2)\n\nFor the error window repetition remove the for in this place, you already have the right one later\n with open('Cars database.csv') as fh:\n for items in fh: #REMOVE THIS\n if listbox.value:\n\n" ]
[ 0 ]
[]
[]
[ "guizero", "python" ]
stackoverflow_0074503523_guizero_python.txt
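Putting both fixes from the answer above together, a sketch of Last_Driv with the outer loop removed and the Text widget attached to newwindow; note that items2[1] is printed as a driver name but also compared against the selected car, which may need adjusting to however the drivers CSV keys rows to cars:
def Last_Driv():
    if not listbox.value:
        app.error('Error', text='You must select a car first')
        return
    newwindow = Window(app)
    newwindow.height = 250
    newwindow.width = 250
    with open('Cars Drivers.csv') as fh:
        for line in fh:
            items2 = line.split(",")
            if items2[1] == listbox.value:
                output2 = ("\nName:" + items2[1] + "\nCrashes:" + items2[2]
                           + "\nName2:" + items2[3] + "\nCrashes2:" + items2[4]
                           + "\nName3:" + items2[5] + "\nCrashes3:" + items2[6])
                txt2 = Text(newwindow, text=output2)  # attach to newwindow, not window
                txt2.size = '13'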
Q: How to upload file using many to many field in django rest framework I have two models. One is article and other is documents model. Document model contains the filefield for uploading document along with some other metadata of uploaded document. Article has a m2m field that relates to Document Model. Article model has a field user according to which article is being which article is being posted. I want to upload file using m2m field, but it gives two errors: "files": [ "Incorrect type. Expected pk value, received InMemoryUploadedFile."] I also tried using slug field, the document does not exists. but i am uploading new file then why saying document does not exist. Please guide me how i can achieve this? Article Model class Article(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="ARTICLE_ID") headline=models.CharField(max_length=250) abstract=models.TextField(max_length=1500, blank=True) content=models.TextField(max_length=10000, blank=True) files=models.ManyToManyField('DocumentModel', related_name='file_documents',related_query_name='select_files', blank=True) published=models.DateTimeField(auto_now_add=True) tags=models.ManyToManyField('Tags', related_name='tags', blank=True) isDraft=models.BooleanField(blank=True, default=False) isFavourite=models.ManyToManyField(User, related_name="favourite", blank=True) created_by=models.ForeignKey(User, on_delete=mode Document Model class DocumentModel(models.Model): id=models.AutoField(primary_key=True, auto_created=True, verbose_name="DOCUMENT_ID") document=models.FileField(max_length=350, validators=[FileExtensionValidator(extensions)], upload_to=uploaded_files) filename=models.CharField(max_length=100, blank=True) filesize=models.IntegerField(default=0, blank=True) mimetype=models.CharField(max_length=100, blank=True) created_at=models.DateField(auto_now_add=True) Article Serializer class ArticleSerializer(serializers.ModelSerializer): #serializer for getting username of User created_by=serializers.CharField(source='created_by.username', read_only=True) isFavourite=serializers.PrimaryKeyRelatedField(many=True, read_only=True) tags=serializers.SlugRelatedField(many=True, queryset=Tags.objects.all(), slug_field="tag") readtime=serializers.IntegerField(read_only=True) class Meta: model= Article fields = ["id" , "headline", "abstract", "content", "readtime", "get_published_timestamp", "isDraft", "isFavourite", "tags", 'files', 'created_by' ] Document Serializer class DocumentSerializer(serializers.ModelSerializer): filesize=serializers.ReadOnlyField(source='sizeoffile') class Meta: model=DocumentModel fields = ['id', 'document', 'filesize', 'filename', 'mimetype', 'created_at' ] A: You cannot add files to M2M fields, because M2M fields hold objects of other models. If you have a M2M of DocumentModel, it will only accept the objects of DocumentModel. It only expects the primary keys of DocumentModel objects. To upload files, you can use a FileField instead.
How to upload file using many to many field in django rest framework
I have two models. One is an article model and the other is a documents model. The Document model contains the filefield for uploading a document, along with some other metadata of the uploaded document. Article has an m2m field that relates to the Document model. The Article model has a user field according to which the article is being posted.
I want to upload a file using the m2m field, but it gives this error:
"files": [
"Incorrect type. Expected pk value, received InMemoryUploadedFile."]
I also tried using a slug field; then it says the document does not exist. But I am uploading a new file, so why is it saying the document does not exist? Please guide me on how I can achieve this.
Article Model
class Article(models.Model):
    id=models.AutoField(primary_key=True, auto_created=True, verbose_name="ARTICLE_ID")
    headline=models.CharField(max_length=250)
    abstract=models.TextField(max_length=1500, blank=True)
    content=models.TextField(max_length=10000, blank=True)
    files=models.ManyToManyField('DocumentModel', related_name='file_documents',related_query_name='select_files', blank=True)
    published=models.DateTimeField(auto_now_add=True)
    tags=models.ManyToManyField('Tags', related_name='tags', blank=True)
    isDraft=models.BooleanField(blank=True, default=False)
    isFavourite=models.ManyToManyField(User, related_name="favourite", blank=True)
    created_by=models.ForeignKey(User, on_delete=mode

Document Model
class DocumentModel(models.Model):
    id=models.AutoField(primary_key=True, auto_created=True, verbose_name="DOCUMENT_ID")
    document=models.FileField(max_length=350, validators=[FileExtensionValidator(extensions)], upload_to=uploaded_files)
    filename=models.CharField(max_length=100, blank=True)
    filesize=models.IntegerField(default=0, blank=True)
    mimetype=models.CharField(max_length=100, blank=True)
    created_at=models.DateField(auto_now_add=True)

Article Serializer
class ArticleSerializer(serializers.ModelSerializer):
    #serializer for getting username of User
    created_by=serializers.CharField(source='created_by.username', read_only=True)
    isFavourite=serializers.PrimaryKeyRelatedField(many=True, read_only=True)
    tags=serializers.SlugRelatedField(many=True, queryset=Tags.objects.all(), slug_field="tag")
    readtime=serializers.IntegerField(read_only=True)

    class Meta:
        model= Article
        fields = ["id" , "headline", "abstract", "content", "readtime", "get_published_timestamp", "isDraft", "isFavourite", "tags", 'files', 'created_by' ]

Document Serializer
class DocumentSerializer(serializers.ModelSerializer):
    filesize=serializers.ReadOnlyField(source='sizeoffile')

    class Meta:
        model=DocumentModel
        fields = ['id', 'document', 'filesize', 'filename', 'mimetype', 'created_at' ]
[ "You cannot add files to M2M fields, because M2M fields hold objects of other models.\nIf you have a M2M of DocumentModel, it will only accept the objects of DocumentModel. It only expects the primary keys of DocumentModel objects.\nTo upload files, you can use a FileField instead.\n" ]
[ 0 ]
[]
[]
[ "django", "django_models", "django_rest_framework", "django_serializer", "python" ]
stackoverflow_0074429890_django_django_models_django_rest_framework_django_serializer_python.txt
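If the goal in the question above is to upload new files and attach them to the article in one request, one common pattern (a sketch, not the only way) is a write-only list of files on the serializer, creating the DocumentModel rows in create(); the upload_files field name is made up here:
class ArticleSerializer(serializers.ModelSerializer):
    upload_files = serializers.ListField(              # hypothetical field name
        child=serializers.FileField(), write_only=True, required=False)

    class Meta:
        model = Article
        fields = ["id", "headline", "abstract", "content",
                  "files", "upload_files", "created_by"]

    def create(self, validated_data):
        uploads = validated_data.pop("upload_files", [])
        article = super().create(validated_data)
        for f in uploads:
            doc = DocumentModel.objects.create(
                document=f, filename=f.name, filesize=f.size)
            article.files.add(doc)                     # the m2m now receives pks, not files
        return article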
Q: New variable in plt.streamplot() broken_streamlines does not work, how to fix this? I need my streamlines on the plot in Python to be continuous. I am using plt.streamplot(), which by default plots broken lines. I have found that in the source code someone has already added a variable called broken_streamlines, which can be True or False; by default it is True (broken_streamlines). In the documentation of matplotlib.pyplot.streamplot it does not exist, but in the documentation of matplotlib.axes.Axes.streamplot it does. Unfortunately, it works in neither plt.streamplot() nor Axes.streamplot(), and I get the error:
TypeError: streamplot() got an unexpected keyword argument 'broken_streamlines'

What can I do about this? I have been looking for other solutions to force the lines to be continuous (e.g. density, start points, minlength, etc.), but it turns out that my data is very difficult to manipulate.
What could I possibly be doing wrong (Python 3 is already up to date)? I have already tried it on macOS and Linux.

A: Upgrade your version of the matplotlib package by running pip install matplotlib --upgrade
The broken_streamlines parameter was introduced in Matplotlib 3.6.0 - see the relevant changelog entry below:
https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.6.0.html#streamplot-can-disable-streamline-breaks
Also see the changes in the documentation of the streamplot function:

version 3.5.3 https://matplotlib.org/3.5.3/api/_as_gen/matplotlib.pyplot.streamplot.html
version 3.6.0 https://matplotlib.org/3.6.0/api/_as_gen/matplotlib.pyplot.streamplot.html
New variable in plt.streamplot() broken_streamlines does not work, how to fix this?
I need my streamlines on the plot in Python to be continuous. I am using plt.streamplot(), which by default plots broken lines. I have found that in the source code someone has already added a variable called broken_streamlines, which can be True or False; by default it is True (broken_streamlines). In the documentation of matplotlib.pyplot.streamplot it does not exist, but in the documentation of matplotlib.axes.Axes.streamplot it does. Unfortunately, it works in neither plt.streamplot() nor Axes.streamplot(), and I get the error:
TypeError: streamplot() got an unexpected keyword argument 'broken_streamlines'

What can I do about this? I have been looking for other solutions to force the lines to be continuous (e.g. density, start points, minlength, etc.), but it turns out that my data is very difficult to manipulate.
What could I possibly be doing wrong (Python 3 is already up to date)? I have already tried it on macOS and Linux.
[ "Upgrade your version of the matplotlib package by running pip install matplotlib --upgrade\nThe broken_streamlines parameter was introduced in Matplotlib 3.6.0 - see the relevant changelog entry below:\nhttps://matplotlib.org/stable/users/prev_whats_new/whats_new_3.6.0.html#streamplot-can-disable-streamline-breaks\nAlso see the changes in the documentation of the streamplot function:\n\nversion 3.5.3 https://matplotlib.org/3.5.3/api/_as_gen/matplotlib.pyplot.streamplot.html\nversion 3.6.0 https://matplotlib.org/3.6.0/api/_as_gen/matplotlib.pyplot.streamplot.html\n\n" ]
[ 0 ]
[]
[]
[ "github", "matplotlib", "plot", "python", "variables" ]
stackoverflow_0073119873_github_matplotlib_plot_python_variables.txt
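If upgrading is not possible everywhere, a crude version guard keeps a script from the question above running on both sides of 3.6.0; a self-contained sketch with a dummy circular field:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np

y, x = np.mgrid[-2:2:20j, -2:2:20j]
u, v = -y, x                                  # simple circular vector field

kwargs = {}
if tuple(int(p) for p in matplotlib.__version__.split(".")[:2]) >= (3, 6):
    kwargs["broken_streamlines"] = False      # only available from Matplotlib 3.6.0
plt.streamplot(x, y, u, v, **kwargs)
plt.show()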
Q: How to calculate average of monthly sales data from python pandas dataframe I have below pandas dataframe which has employees sales data for october month. Employee Timerange Dials Conn Conv Mtg Bkd Talk Dial 0 Ricky Ponting 10/3 - 10/7 1,869 102 60.0 2.0 3h:08m 5h:23m 1 Adam Gilchrist 10/3 - 10/7 1,336 53 30.0 1.0 1h:10m 3h:58m 2 Michael Clarke 10/3 - 10/7 1,960 74 42.0 1.0 2h:02m 5h:28m 3 Shane Warne 10/3 - 10/7 1,478 62 45.0 1.0 1h:55m 4h:07m Schema - # Column Non-Null Count Dtype --- ------ -------------- ----- 1 Timerange 40 non-null object 2 Dials 40 non-null object 3 Conn 40 non-null int64 4 Conv 39 non-null float64 5 Mtg Bkd 39 non-null float64 6 Talk 40 non-null object 7 Dial︎ 40 non-null object I basically want to check the dials-to-connection and the dials-to-conversation average rates of the whole team for the month. Example output like below - Month Dials Conn Dials -> Conn Dials -> Conv October 60517 2702 0.045 0.026 I tried using pd.DatetimeIndex(df['Timerange']).Month and separate the column but it is giving me error dateutil.parser._parser.ParserError: Unknown string format: 10/3 - 10/7. Please help me guys A: I will assume that your Timerange always starts with the month you are interested in, and that all data comes from the same year (this year). If these are reasonable assumptions, this works: emps = [ "Ricky Ponting", "Adam Gilchrist", "Michael Clarke", "Shane Warne" ] timeranges = [ "10/3 - 10/7", "10/3 - 10/7", "10/3 - 10/7", "10/3 - 10/7" ] dials = [1869, 1336, 1960, 1478] conn = [102, 53, 74, 62] conv = [60, 30, 42, 45] import pandas as pd df = pd.DataFrame( { "Employee": emps, "Timerange": timeranges, "Dials": dials, "Conn": conn, "Conv": conv } ) import datetime def get_month(row): month = int(row["Timerange"].split("/")[0]) return datetime.date(year=2022, month=month, day=1).strftime("%B") df["Month"] = df.apply(get_month, axis=1) sums = df.groupby("Month").sum() sums["Dials -> Conn"] = sums["Conn"] / sums["Dials"] sums["Dials -> Conv"] = sums["Conv"] / sums["Dials"] sums A: Here is a proposition using pandas.DataFrame.groupby and pandas.DataFrame.apply : #Extract the month number from the start date and convert it to a month name df["Month"]= pd.to_datetime(df["Timerange"].str.extract(r"(\d+)/\d+", expand=False), format="%m").dt.month_name() #Convert comma separated strings to numbers df["Dials"]= df["Dials"].str.replace(",", "").astype(float) out = ( df.groupby("Month", as_index=False) .apply(lambda x: pd.Series({"Dials": x["Dials"].sum(), "Conn": x["Conn"].sum(), "Dials -> Conn": x["Conn"].sum()/x["Dials"].sum(), "Dials -> Conv": x["Conv"].sum()/x["Dials"].sum()})) ) # Output : print(out) Month Dials Conn Dials -> Conn Dials -> Conv 0 October 6643.0 291.0 0.043806 0.026645
How to calculate average of monthly sales data from python pandas dataframe
I have the below pandas dataframe, which has employees' sales data for the month of October.
 Employee Timerange Dials Conn Conv Mtg Bkd Talk Dial
0 Ricky Ponting 10/3 - 10/7 1,869 102 60.0 2.0 3h:08m 5h:23m
1 Adam Gilchrist 10/3 - 10/7 1,336 53 30.0 1.0 1h:10m 3h:58m
2 Michael Clarke 10/3 - 10/7 1,960 74 42.0 1.0 2h:02m 5h:28m
3 Shane Warne 10/3 - 10/7 1,478 62 45.0 1.0 1h:55m 4h:07m

Schema -
 # Column Non-Null Count Dtype
--- ------ -------------- -----
 1 Timerange 40 non-null object
 2 Dials 40 non-null object
 3 Conn 40 non-null int64
 4 Conv 39 non-null float64
 5 Mtg Bkd 39 non-null float64
 6 Talk 40 non-null object
 7 Dial 40 non-null object

I basically want to check the dials-to-connection and the dials-to-conversation average rates of the whole team for the month.
Example output like below -
Month Dials Conn Dials -> Conn Dials -> Conv
October 60517 2702 0.045 0.026

I tried using pd.DatetimeIndex(df['Timerange']).Month to separate the column, but it gives me the error dateutil.parser._parser.ParserError: Unknown string format: 10/3 - 10/7.
Please help me, guys
[ "I will assume that your Timerange always starts with the month you are interested in, and that all data comes from the same year (this year). If these are reasonable assumptions, this works:\nemps = [\n \"Ricky Ponting\", \"Adam Gilchrist\", \"Michael Clarke\", \"Shane Warne\"\n]\n\ntimeranges = [\n \"10/3 - 10/7\", \"10/3 - 10/7\", \"10/3 - 10/7\", \"10/3 - 10/7\"\n]\n\ndials = [1869, 1336, 1960, 1478]\nconn = [102, 53, 74, 62]\nconv = [60, 30, 42, 45]\n\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"Employee\": emps,\n \"Timerange\": timeranges,\n \"Dials\": dials,\n \"Conn\": conn,\n \"Conv\": conv\n }\n)\n\nimport datetime\n\ndef get_month(row):\n month = int(row[\"Timerange\"].split(\"/\")[0])\n return datetime.date(year=2022, month=month, day=1).strftime(\"%B\")\n\ndf[\"Month\"] = df.apply(get_month, axis=1)\n\nsums = df.groupby(\"Month\").sum()\nsums[\"Dials -> Conn\"] = sums[\"Conn\"] / sums[\"Dials\"]\nsums[\"Dials -> Conv\"] = sums[\"Conv\"] / sums[\"Dials\"]\nsums\n\n\n", "Here is a proposition using pandas.DataFrame.groupby and pandas.DataFrame.apply :\n#Extract the month number from the start date and convert it to a month name\ndf[\"Month\"]= pd.to_datetime(df[\"Timerange\"].str.extract(r\"(\\d+)/\\d+\", expand=False), format=\"%m\").dt.month_name()\n\n#Convert comma separated strings to numbers\ndf[\"Dials\"]= df[\"Dials\"].str.replace(\",\", \"\").astype(float)\n\nout = (\n df.groupby(\"Month\", as_index=False)\n .apply(lambda x: pd.Series({\"Dials\": x[\"Dials\"].sum(),\n \"Conn\": x[\"Conn\"].sum(),\n \"Dials -> Conn\": x[\"Conn\"].sum()/x[\"Dials\"].sum(),\n \"Dials -> Conv\": x[\"Conv\"].sum()/x[\"Dials\"].sum()}))\n\n )\n\n# Output :\nprint(out)\n\n Month Dials Conn Dials -> Conn Dials -> Conv\n0 October 6643.0 291.0 0.043806 0.026645\n\n" ]
[ 1, 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074503501_pandas_python.txt
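The same aggregation from the answers above can also be written with named aggregation, which avoids building a Series by hand; this assumes the Month column and the cleaned numeric Dials column from the second answer:
out = (df.groupby("Month")
         .agg(Dials=("Dials", "sum"), Conn=("Conn", "sum"), Conv=("Conv", "sum"))
         .reset_index())
out["Dials -> Conn"] = out["Conn"] / out["Dials"]
out["Dials -> Conv"] = out["Conv"] / out["Dials"]
print(out)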
Q: Streamlit: Using Multiple Conditions and Colors for Bars in a Bar Chart I'm using Streamlit to create a dashboard. I have a bar graph using altair, and in their docs they show how to color a bar if it meets a condition. I don't see anything on how to color multiple bars with multiple, different conditions. I am aiming to use three different colors based on three different conditions, but I can't get it to work. I've tried variations of the following:
color = alt.condition(
        alt.datum.Team == ['Arsenal', 'Manchester City'],
        alt.value('orange'),
        alt.value('steelblue')
)

I'm also not sure how to include multiple conditions for different colors. The top four teams should be one color, the 5th-placed team another color, and the last three teams another color.

A: Rather than
 alt.datum.Team == ['Arsenal', 'Manchester City'],

I think you want
 alt.datum.Team in ['Arsenal', 'Manchester City'],
Streamlit: Using Multiple Conditions and Colors for Bars in a Bar Chart
I'm using Streamlit to create a dashboard. I have a bar graph using altair, and in their docs they show how to color a bar if it meets a condition. I don't see anything on how to color multiple bars with multiple, different conditions. I am aiming to use three different colors based on three different conditions, but I can't get it to work. I've tried variations of the following:
color = alt.condition(
        alt.datum.Team == ['Arsenal', 'Manchester City'],
        alt.value('orange'),
        alt.value('steelblue')
)

I'm also not sure how to include multiple conditions for different colors. The top four teams should be one color, the 5th-placed team another color, and the last three teams another color.
[ "Rather than\n alt.datum.Team == ['Arsenal', 'Manchester City'],\n\nI think you want\n alt.datum.Team in ['Arsenal', 'Manchester City'],\n\n" ]
[ 0 ]
[]
[]
[ "altair", "python", "streamlit" ]
stackoverflow_0074503952_altair_python_streamlit.txt
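For the three-group coloring asked about above, note that Python's in does not translate into a Vega expression (alt.FieldOneOfPredicate(field='Team', oneOf=[...]) is the documented membership test), so a sketch that computes a band per team and maps bands to colors with an explicit scale may be more reliable; it assumes Team and Points columns, and the club names are placeholders:
import altair as alt

top_four = ['Arsenal', 'Manchester City', 'Newcastle', 'Tottenham']   # placeholder names
fifth = 'Manchester United'                                           # placeholder name

chart = alt.Chart(df).transform_calculate(
    band=f"indexof({top_four}, datum.Team) >= 0 ? 'Top 4' : "
         f"datum.Team == '{fifth}' ? '5th place' : 'Rest'"
).mark_bar().encode(
    x='Team:N',
    y='Points:Q',
    color=alt.Color('band:N',
                    scale=alt.Scale(domain=['Top 4', '5th place', 'Rest'],
                                    range=['green', 'orange', 'steelblue']))
)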
Q: Finding the number of combinations possible, given 4 dictionaries Given the following dictionaries:
dict_first_attempt = {'Offense': ['Jack','Jill','Tim'],
                      'Defense':['Robert','Kevin','Sam']}

dict_second_attempt = {'Offense': ['Jack','McKayla','Heather'],
                       'Defense':['Chris','Tim','Julia']}

From these dictionaries, my focus is just the offense, so if I just wanted the lists of those, I would do this:
first = dict_first_attempt['Offense']
second = dict_second_attempt['Offense']

For each of those lists, I am trying to create code that can do the following:

Tell me all the possible combinations of first-attempt offense and second-attempt offense.
Output them in a list, with lists of the combinations. The first element within each list has to be from the first-attempt offense, and the second element has to be from the second-attempt offense.

An example of the type of output I want is:
[['Jack','Jack'],['Jack','McKayla'],['Jack','Heather'],
['Jill','Jack'],['Jill','McKayla'],['Jill','Heather'],
['Tim','Jack'],['Tim','McKayla'],['Tim','Heather']]

A: import itertools
list(itertools.product(first, second))
Finding the number of combinations possible, given 4 dictionaries
Given the following dictionaries:
dict_first_attempt = {'Offense': ['Jack','Jill','Tim'],
                      'Defense':['Robert','Kevin','Sam']}

dict_second_attempt = {'Offense': ['Jack','McKayla','Heather'],
                       'Defense':['Chris','Tim','Julia']}

From these dictionaries, my focus is just the offense, so if I just wanted the lists of those, I would do this:
first = dict_first_attempt['Offense']
second = dict_second_attempt['Offense']

For each of those lists, I am trying to create code that can do the following:

Tell me all the possible combinations of first-attempt offense and second-attempt offense.
Output them in a list, with lists of the combinations. The first element within each list has to be from the first-attempt offense, and the second element has to be from the second-attempt offense.

An example of the type of output I want is:
[['Jack','Jack'],['Jack','McKayla'],['Jack','Heather'],
['Jill','Jack'],['Jill','McKayla'],['Jill','Heather'],
['Tim','Jack'],['Tim','McKayla'],['Tim','Heather']]
[ "import itertools\nlist(itertools.product(first, second))\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074504019_dictionary_list_python.txt
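Expanding the one-liner above into the exact shape the question asks for (inner lists rather than tuples), using only the data given in the question:
import itertools

dict_first_attempt = {'Offense': ['Jack', 'Jill', 'Tim'],
                      'Defense': ['Robert', 'Kevin', 'Sam']}
dict_second_attempt = {'Offense': ['Jack', 'McKayla', 'Heather'],
                       'Defense': ['Chris', 'Tim', 'Julia']}

first = dict_first_attempt['Offense']
second = dict_second_attempt['Offense']

combos = [list(pair) for pair in itertools.product(first, second)]
print(combos)      # 9 pairs: len(first) * len(second)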
Q: Python - HTML Parser - Narrow Down Scrape I am new to HTML Parser. I have written a Spider in Python which aims to crawl a website. I have included my code below. This code specifically looks for all URLs which are identified with an "a" start tag and a href attribute. However, I would like to further filter this by only scraping URLs which contain a specific word. I am currently working around this by outputting my "crawled" URLs to a txt file. From there i read the content of this file, filter it by my key word and then write the results to a new txt file. However, I feel it would be more efficient if I could narrow the focus of my crawler to only look at "a" tags, href attributes and "where word XXX" exists. Is there a way in which I can expand the "if" statement within the def handle_starttag function to only scrape urls which contain a specific word? The word is usually contained in the href link in the html also. from html.parser import HTMLParser from urllib import parse class LinkFinder(HTMLParser): def __init__(self, base_url, page_url): super().__init__() self.base_url = base_url self.page_url = page_url self.links = set() # When we call HTMLParser feed() this function is called when it encounters an opening tag <a> def handle_starttag(self, tag, attrs): if tag == 'a': for (attribute, value) in attrs: if attribute == 'href': url = parse.urljoin(self.base_url, value) self.links.add(url) def page_links(self): return self.links def error(self, message): pass A: You may have an easier time using BeautifulSoup than the lower level HTMLParser. To add the additional filter to your current implementation, you could add an additional parameter to your LinkFinder class, store the value, and use it in the conditional: class LinkFinder(HTMLParser): def __init__(self, base_url, page_url, url_filter): super().__init__() self.base_url = base_url self.page_url = page_url self.links = set() self.url_filter = url_filter def handle_starttag(self, tag, attrs): if tag == 'a': for (attribute, value) in attrs: if attribute == 'href' and self.url_filter in value: url = parse.urljoin(self.base_url, value) self.links.add(url)
Python - HTML Parser - Narrow Down Scrape
I am new to HTML Parser. I have written a Spider in Python which aims to crawl a website. I have included my code below. This code specifically looks for all URLs which are identified with an "a" start tag and a href attribute. However, I would like to further filter this by only scraping URLs which contain a specific word. I am currently working around this by outputting my "crawled" URLs to a txt file. From there i read the content of this file, filter it by my key word and then write the results to a new txt file. However, I feel it would be more efficient if I could narrow the focus of my crawler to only look at "a" tags, href attributes and "where word XXX" exists. Is there a way in which I can expand the "if" statement within the def handle_starttag function to only scrape urls which contain a specific word? The word is usually contained in the href link in the html also. from html.parser import HTMLParser from urllib import parse class LinkFinder(HTMLParser): def __init__(self, base_url, page_url): super().__init__() self.base_url = base_url self.page_url = page_url self.links = set() # When we call HTMLParser feed() this function is called when it encounters an opening tag <a> def handle_starttag(self, tag, attrs): if tag == 'a': for (attribute, value) in attrs: if attribute == 'href': url = parse.urljoin(self.base_url, value) self.links.add(url) def page_links(self): return self.links def error(self, message): pass
[ "You may have an easier time using BeautifulSoup than the lower level HTMLParser.\nTo add the additional filter to your current implementation, you could add an additional parameter to your LinkFinder class, store the value, and use it in the conditional:\nclass LinkFinder(HTMLParser):\n def __init__(self, base_url, page_url, url_filter):\n super().__init__()\n self.base_url = base_url\n self.page_url = page_url\n self.links = set()\n self.url_filter = url_filter\n\n def handle_starttag(self, tag, attrs):\n if tag == 'a':\n for (attribute, value) in attrs:\n if attribute == 'href' and self.url_filter in value:\n url = parse.urljoin(self.base_url, value)\n self.links.add(url)\n\n" ]
[ 0 ]
[]
[]
[ "html_parser", "python", "web_crawler" ]
stackoverflow_0074503859_html_parser_python_web_crawler.txt
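A quick usage sketch of the filtered LinkFinder from the answer above, fed a tiny inline page so it runs standalone; example.com and the 'blog' filter word are placeholders:
html = '''
<a href="/blog/post-1">Post</a>
<a href="/about">About</a>
<a href="/blog/post-2">Another post</a>
'''

finder = LinkFinder("https://example.com", "https://example.com/index.html", "blog")
finder.feed(html)
print(finder.page_links())   # set order may vary:
# {'https://example.com/blog/post-1', 'https://example.com/blog/post-2'}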
Q: how do i use a random function in this I wrote this code to split a big number into smaller parts in a certain range. Now i am trying to randomize it but I'm not sure what module to use in random function and I'm stuck. Pardon my English import random op = '' start = 664613997892457936451903530140172288 step = 9223372036854775808 stop = 1329227995784915872903807060280344575 while start <= stop: print( hex(start).lstrip("0x") + ':' + hex(start+step).lstrip("0x") ) start += step f = open("op.txt", "a") f.write(op) f.close() The start is 800000000000000000000000000000 in hexadecimal, the stop is FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF and the difference between each step is 8000000000000000 in hexadecimal. When I run the code the numbers are generated in sequence like this: 800000000000000000000000000000:800000000000008000000000000000 800000000000008000000000000000:800000000000010000000000000000 800000000000010000000000000000:800000000000018000000000000000 800000000000018000000000000000:800000000000020000000000000000 800000000000020000000000000000:800000000000028000000000000000 800000000000028000000000000000:800000000000030000000000000000 800000000000030000000000000000:800000000000038000000000000000 800000000000038000000000000000:800000000000040000000000000000 800000000000040000000000000000:800000000000048000000000000000 800000000000048000000000000000:800000000000050000000000000000 800000000000050000000000000000:800000000000058000000000000000 800000000000058000000000000000:800000000000060000000000000000 800000000000060000000000000000:800000000000068000000000000000 800000000000068000000000000000:800000000000070000000000000000 800000000000070000000000000000:800000000000078000000000000000 800000000000078000000000000000:800000000000080000000000000000 800000000000080000000000000000:800000000000088000000000000000 800000000000088000000000000000:800000000000090000000000000000 800000000000090000000000000000:800000000000098000000000000000 800000000000098000000000000000:8000000000000a0000000000000000 8000000000000a0000000000000000:8000000000000a8000000000000000 8000000000000a8000000000000000:8000000000000b0000000000000000 8000000000000b0000000000000000:8000000000000b8000000000000000 8000000000000b8000000000000000:8000000000000c0000000000000000 Is it possible to generate random values between 800000000000000000000000000000 and FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF but with same step (8000000000000000 ) value, where the numbers generated are random, not in sequence? I tried using random.randint(start,stop) but it just doesn't seem to work. 
I am trying to achieve my output from

800000000000000000000000000000:800000000000008000000000000000
800000000000008000000000000000:800000000000010000000000000000
800000000000010000000000000000:800000000000018000000000000000
800000000000018000000000000000:800000000000020000000000000000
800000000000020000000000000000:800000000000028000000000000000
800000000000028000000000000000:800000000000030000000000000000
800000000000030000000000000000:800000000000038000000000000000
800000000000038000000000000000:800000000000040000000000000000
800000000000040000000000000000:800000000000048000000000000000
800000000000048000000000000000:800000000000050000000000000000
800000000000050000000000000000:800000000000058000000000000000
800000000000058000000000000000:800000000000060000000000000000
800000000000060000000000000000:800000000000068000000000000000
800000000000068000000000000000:800000000000070000000000000000
800000000000070000000000000000:800000000000078000000000000000

to

b4459257eda6b45af82611586bf8c6:b4459257eda6b48000000000000000
ca60da4d4a4ee38000000000000000:ca60da4d4a4ee31000000000000000
c8572366d667810000000000000000:c8572366d667818000000000000000
e5dc56311189018000000000000000:e5dc56311189020000000000000000
a324eb157a00e20000000000000000:a324eb157a00e28000000000000000
ff5b961d3b87d28000000000000000:ff5b961d3b87d30000000000000000

I'm sorry I had to put in examples, I just wanted to make sure that everyone understood my question as my English is bad.

A: Convert your hexadecimal start and stop to decimal, then use random.randint(start, stop) to generate a random number. Convert it back to hexadecimal afterwards.
Convert hex string to integer in Python
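As a minimal sketch of that idea (assuming the same start, step and stop values as above), one could draw a random step-aligned value inside the range and print the slice it starts:

import random

start = 0x800000000000000000000000000000
stop = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
step = 0x8000000000000000

for _ in range(6):
    # pick a random multiple of step between start and stop
    low = random.randrange(start, stop, step)
    print(format(low, 'x') + ':' + format(low + step, 'x'))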
how do i use a random function in this
I wrote this code to split a big number into smaller parts in a certain range. Now i am trying to randomize it but I'm not sure what module to use in random function and I'm stuck. Pardon my English import random op = '' start = 664613997892457936451903530140172288 step = 9223372036854775808 stop = 1329227995784915872903807060280344575 while start <= stop: print( hex(start).lstrip("0x") + ':' + hex(start+step).lstrip("0x") ) start += step f = open("op.txt", "a") f.write(op) f.close() The start is 800000000000000000000000000000 in hexadecimal, the stop is FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF and the difference between each step is 8000000000000000 in hexadecimal. When I run the code the numbers are generated in sequence like this: 800000000000000000000000000000:800000000000008000000000000000 800000000000008000000000000000:800000000000010000000000000000 800000000000010000000000000000:800000000000018000000000000000 800000000000018000000000000000:800000000000020000000000000000 800000000000020000000000000000:800000000000028000000000000000 800000000000028000000000000000:800000000000030000000000000000 800000000000030000000000000000:800000000000038000000000000000 800000000000038000000000000000:800000000000040000000000000000 800000000000040000000000000000:800000000000048000000000000000 800000000000048000000000000000:800000000000050000000000000000 800000000000050000000000000000:800000000000058000000000000000 800000000000058000000000000000:800000000000060000000000000000 800000000000060000000000000000:800000000000068000000000000000 800000000000068000000000000000:800000000000070000000000000000 800000000000070000000000000000:800000000000078000000000000000 800000000000078000000000000000:800000000000080000000000000000 800000000000080000000000000000:800000000000088000000000000000 800000000000088000000000000000:800000000000090000000000000000 800000000000090000000000000000:800000000000098000000000000000 800000000000098000000000000000:8000000000000a0000000000000000 8000000000000a0000000000000000:8000000000000a8000000000000000 8000000000000a8000000000000000:8000000000000b0000000000000000 8000000000000b0000000000000000:8000000000000b8000000000000000 8000000000000b8000000000000000:8000000000000c0000000000000000 Is it possible to generate random values between 800000000000000000000000000000 and FFFFFFFFFFFFFFFFFFFFFFFFFFFFFF but with same step (8000000000000000 ) value, where the numbers generated are random, not in sequence? I tried using random.randint(start,stop) but it just doesn't seem to work. 
I am trying to achieve my output from 800000000000000000000000000000:800000000000008000000000000000 800000000000008000000000000000:800000000000010000000000000000 800000000000010000000000000000:800000000000018000000000000000 800000000000018000000000000000:800000000000020000000000000000 800000000000020000000000000000:800000000000028000000000000000 800000000000028000000000000000:800000000000030000000000000000 800000000000030000000000000000:800000000000038000000000000000 800000000000038000000000000000:800000000000040000000000000000 800000000000040000000000000000:800000000000048000000000000000 800000000000048000000000000000:800000000000050000000000000000 800000000000050000000000000000:800000000000058000000000000000 800000000000058000000000000000:800000000000060000000000000000 800000000000060000000000000000:800000000000068000000000000000 800000000000068000000000000000:800000000000070000000000000000 800000000000070000000000000000:800000000000078000000000000000 to b4459257eda6b45af82611586bf8c6:b4459257eda6b48000000000000000 ca60da4d4a4ee38000000000000000:ca60da4d4a4ee31000000000000000 c8572366d667810000000000000000:c8572366d667818000000000000000 e5dc56311189018000000000000000:e5dc56311189020000000000000000 a324eb157a00e20000000000000000:a324eb157a00e28000000000000000 ff5b961d3b87d28000000000000000:ff5b961d3b87d30000000000000000 I'm sorry I had to put in examples, I just wanted to make sure that everyone understood my question as my English is bad.
[ "Convert your hexadecimal start and stop to decimal then use random.randint(start,stop) to generate a random number. Convert it back to hexadecimal afterwards.\nConvert hex string to integer in Python\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "random" ]
stackoverflow_0074503964_python_python_3.x_random.txt
Q: What exactly does += do?
I need to know what += does in Python. It's that simple. I also would appreciate links to definitions of other shorthand tools in Python.

A: In Python, += is sugar coating for the __iadd__ special method, or __add__ or __radd__ if __iadd__ isn't present. The __iadd__ method of a class can do anything it wants. The list object implements it and uses it to iterate over an iterable object, appending each element to itself in the same way that the list's extend method does.
Here's a simple custom class that implements the __iadd__ special method. You initialize the object with an int, then can use the += operator to add a number. I've added a print statement in __iadd__ to show that it gets called. Also, __iadd__ is expected to return an object, so I returned the addition of itself plus the other number, which makes sense in this case.

>>> class Adder(object):
        def __init__(self, num=0):
            self.num = num

        def __iadd__(self, other):
            print 'in __iadd__', other
            self.num = self.num + other
            return self.num

>>> a = Adder(2)
>>> a += 3
in __iadd__ 3
>>> a
5

Hope this helps.

A: += adds another value to the variable's value and assigns the new value to the variable.

>>> x = 3
>>> x += 2
>>> print x
5

-=, *=, /= do similar for subtraction, multiplication and division.

A: x += 5 is not exactly the same as saying x = x + 5 in Python.
Note here:

In [1]: x = [2, 3, 4]
In [2]: y = x
In [3]: x += 7, 8, 9
In [4]: x
Out[4]: [2, 3, 4, 7, 8, 9]
In [5]: y
Out[5]: [2, 3, 4, 7, 8, 9]
In [6]: x += [44, 55]
In [7]: x
Out[7]: [2, 3, 4, 7, 8, 9, 44, 55]
In [8]: y
Out[8]: [2, 3, 4, 7, 8, 9, 44, 55]
In [9]: x = x + [33, 22]
In [10]: x
Out[10]: [2, 3, 4, 7, 8, 9, 44, 55, 33, 22]
In [11]: y
Out[11]: [2, 3, 4, 7, 8, 9, 44, 55]

See for reference: Why does += behave unexpectedly on lists?

A: += adds a number to a variable, changing the variable itself in the process (whereas + would not). Similar to this, there are the following that also modify the variable:

-=, subtracts a value from the variable, setting the variable to the result
*=, multiplies the variable and a value, making the outcome the variable
/=, divides the variable by the value, making the outcome the variable
%=, performs modulus on the variable, with the variable then being set to the result of it

There may be others. I am not a Python programmer.

A: It is not merely syntactic sugar. Try this:

x = []  # empty list
x += "something"  # iterates over the string and appends to list
print(x)  # ['s', 'o', 'm', 'e', 't', 'h', 'i', 'n', 'g']

versus

x = []  # empty list
x = x + "something"  # TypeError: can only concatenate list (not "str") to list

The += operator invokes the __iadd__() list method, while the + one invokes the __add__() one. They do different things with lists.

A: It adds the right operand to the left. x += 2 means x = x + 2
It can also add elements to a list -- see this SO thread.

A: Notionally a += b "adds" b to a, storing the result in a. This simplistic description would describe the += operator in many languages.
However the simplistic description raises a couple of questions.

What exactly do we mean by "adding"?
What exactly do we mean by "storing the result in a"? Python variables don't store values directly; they store references to objects.

In Python the answers to both of these questions depend on the data type of a.
So what exactly does "adding" mean?

For numbers it means numeric addition.
For lists, tuples, strings etc. it means concatenation.

Note that for lists += is more flexible than +; the + operator on a list requires another list, but the += operator will accept any iterable.
So what does "storing the value in a" mean?
If the object is mutable then it is encouraged (but not required) to perform the modification in-place. So a points to the same object it did before, but that object now has different content.
If the object is immutable then it obviously can't perform the modification in-place. Some mutable objects may also not have an implementation of an in-place "add" operation. In this case the variable "a" will be updated to point to a new object containing the result of an addition operation.
Technically this is implemented by looking for __IADD__ first; if that is not implemented then __ADD__ is tried and finally __RADD__.
Care is required when using += in Python on variables where we are not certain of the exact type, and in particular where we are not certain if the type is mutable or not. For example, consider the following code.

def dostuff(a):
    b = a
    a += (3,4)
    print(repr(a)+' '+repr(b))

dostuff((1,2))
dostuff([1,2])

When we invoke dostuff with a tuple then the tuple is copied as part of the += operation and so b is unaffected. However when we invoke it with a list the list is modified in place, so both a and b are affected.
In Python 3, similar behaviour is observed with the "bytes" and "bytearray" types.
Finally note that reassignment happens even if the object is not replaced. This doesn't matter much if the left hand side is simply a variable, but it can cause confusing behaviour when you have an immutable collection referring to mutable collections, for example:

a = ([1,2],[3,4])
a[0] += [5]

In this case [5] will successfully be added to the list referred to by a[0], but then afterwards an exception will be raised when the code tries and fails to reassign a[0].

A: Note x += y is not the same as x = x + y in some situations where an additional operator is included, because of the operator precedence combined with the fact that the right hand side is always evaluated first, e.g.

>>> x = 2
>>> x += 2 and 1
>>> x
3

>>> x = 2
>>> x = x + 2 and 1
>>> x
1

Note the first case expands to:

>>> x = 2
>>> x = x + (2 and 1)
>>> x
3

You are more likely to encounter this in the 'real world' with other operators, e.g.

x *= 2 + 1  ==  x = x * (2 + 1)  !=  x = x * 2 + 1

A: The short answer is += can be translated as "add whatever is to the right of the += to the variable on the left of the +=".
Ex. If you have a = 10 then a += 5 would be: a = a + 5
So, "a" now equals 15.

A: += is just a shortcut for writing

number = 4
number = number + 1

So instead you would write

number = 4
number += 1

Both ways are correct, but example two helps you write a little less code.

A: According to the documentation

x += y is equivalent to x = operator.iadd(x, y). Another way to put it is to say that z = operator.iadd(x, y) is equivalent to the compound statement z = x; z += y.

So x += 3 is the same as x = x + 3.

x = 2
x += 3
print(x)

will output 5.
Notice that there's also

&=
//=
<<=
%=
*=
@=
|=
**=
>>=
-=
/=
^=

A: Let's look at the byte code that CPython generates for x += y and x = x + y. (Yes, this is implementation-dependent, but it gives you an idea of the language-defined semantics being implemented.)

>>> import dis
>>> dis.dis("x += y")
  1           0 LOAD_NAME                0 (x)
              2 LOAD_NAME                1 (y)
              4 INPLACE_ADD
              6 STORE_NAME               0 (x)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE
>>> dis.dis("x = x + y")
  1           0 LOAD_NAME                0 (x)
              2 LOAD_NAME                1 (y)
              4 BINARY_ADD
              6 STORE_NAME               0 (x)
              8 LOAD_CONST               0 (None)
             10 RETURN_VALUE

The only difference between the two is the bytecode used for the operator: INPLACE_ADD for +=, and BINARY_ADD for +.
BINARY_ADD is implemented using x.__add__ (or y.__radd__ if necessary), so x = x + y is roughly the same as x = x.__add__(y). Both __add__ and __radd__ typically return new instances, without modifying either argument.
INPLACE_ADD is implemented using x.__iadd__. If that does not exist, then x.__add__ is used in its place. x.__iadd__ typically returns x, so that the resulting STORE_NAME does not change the referent of x, though that object may have been mutated. (Indeed, the purpose of INPLACE_ADD is to provide a way to mutate an object rather than always create a new object.)
For example, int.__iadd__ is not defined, so x += 7 when x is an int is the same as x = x.__add__(7), setting x to a new instance of int.
On the other hand, list.__iadd__ is defined, so x += [7] when x is a list is the same as x = x.__iadd__([7]). list.__iadd__ effectively calls extend to add the elements of its argument to the end of x. It's not really possible to tell by looking at the value of x before and after the augmented assignment that x was reassigned, because the same object was assigned to the name.

A: As others also said, the += operator is a shortcut.
An example:

var = 1;
var = var + 1;
#var = 2

It could also be written like so:

var = 1;
var += 1;
#var = 2

So instead of writing the first example, you can just write the second one, which would work just fine.

A: Remember when you used to sum, for example 2 & 3, on your old calculator, and every time you hit the = you see 3 added to the total; the += does a similar job. Example:

>>> orange = 2
>>> orange += 3
>>> print(orange)
5
>>> orange += 3
>>> print(orange)
8

A: I'm seeing a lot of answers that don't bring up using += with multiple integers.
One example:

x -= 1 + 3

This would be similar to:

x = x - (1 + 3)

and not:

x = (x - 1) + 3

A: The += cuts down on the redundancy of adding two objects with a given variable:
Long version:

a = 10
a = a + 7
print(a)  # result is 17

Short version:

a = 10
a += 7
print(a)  # result is 17

A: It's basically a simplification of saying (variable) = (variable) + x
For example:

num = num + 2

Is the same as:

num += 2
What exactly does += do?
I need to know what += does in Python. It's that simple. I also would appreciate links to definitions of other shorthand tools in Python.
[ "In Python, += is sugar coating for the __iadd__ special method, or __add__ or __radd__ if __iadd__ isn't present. The __iadd__ method of a class can do anything it wants. The list object implements it and uses it to iterate over an iterable object appending each element to itself in the same way that the list's extend method does.\nHere's a simple custom class that implements the __iadd__ special method. You initialize the object with an int, then can use the += operator to add a number. I've added a print statement in __iadd__ to show that it gets called. Also, __iadd__ is expected to return an object, so I returned the addition of itself plus the other number which makes sense in this case.\n>>> class Adder(object):\n def __init__(self, num=0):\n self.num = num\n\n def __iadd__(self, other):\n print 'in __iadd__', other\n self.num = self.num + other\n return self.num\n \n>>> a = Adder(2)\n>>> a += 3\nin __iadd__ 3\n>>> a\n5\n\nHope this helps.\n", "+= adds another value with the variable's value and assigns the new value to the variable.\n>>> x = 3\n>>> x += 2\n>>> print x\n5\n\n-=, *=, /= does similar for subtraction, multiplication and division.\n", "x += 5 is not exactly the same as saying x = x + 5 in Python.\nNote here:\nIn [1]: x = [2, 3, 4] \n\nIn [2]: y = x \n\nIn [3]: x += 7, 8, 9 \n\nIn [4]: x\nOut[4]: [2, 3, 4, 7, 8, 9] \n\nIn [5]: y\nOut[5]: [2, 3, 4, 7, 8, 9] \n\nIn [6]: x += [44, 55] \n\nIn [7]: x\nOut[7]: [2, 3, 4, 7, 8, 9, 44, 55] \n\nIn [8]: y\nOut[8]: [2, 3, 4, 7, 8, 9, 44, 55] \n\nIn [9]: x = x + [33, 22] \n\nIn [10]: x\nOut[10]: [2, 3, 4, 7, 8, 9, 44, 55, 33, 22] \n\nIn [11]: y\nOut[11]: [2, 3, 4, 7, 8, 9, 44, 55]\n\nSee for reference: Why does += behave unexpectedly on lists?\n", "+= adds a number to a variable, changing the variable itself in the process (whereas + would not). Similar to this, there are the following that also modifies the variable:\n\n-=, subtracts a value from variable, setting the variable to the result\n*=, multiplies the variable and a value, making the outcome the variable\n/=, divides the variable by the value, making the outcome the variable\n%=, performs modulus on the variable, with the variable then being set to the result of it\n\nThere may be others. I am not a Python programmer.\n", "It is not mere a syntactic sugar. Try this:\nx = [] # empty list\nx += \"something\" # iterates over the string and appends to list\nprint(x) # ['s', 'o', 'm', 'e', 't', 'h', 'i', 'n', 'g']\n\nversus\nx = [] # empty list\nx = x + \"something\" # TypeError: can only concatenate list (not \"str\") to list\n\nThe += operator invokes the __iadd__() list method, while + one invokes the __add__() one. They do different things with lists.\n", "It adds the right operand to the left. x += 2 means x = x + 2\nIt can also add elements to a list -- see this SO thread.\n", "Notionally a += b \"adds\" b to a storing the result in a. This simplistic description would describe the += operator in many languages.\nHowever the simplistic description raises a couple of questions.\n\nWhat exactly do we mean by \"adding\"?\nWhat exactly do we mean by \"storing the result in a\"? 
python variables don't store values directly they store references to objects.\n\nIn python the answers to both of these questions depend on the data type of a.\n\nSo what exactly does \"adding\" mean?\n\nFor numbers it means numeric addition.\nFor lists, tuples, strings etc it means concatenation.\n\nNote that for lists += is more flexible than +, the + operator on a list requires another list, but the += operator will accept any iterable.\n\nSo what does \"storing the value in a\" mean? \nIf the object is mutable then it is encouraged (but not required) to perform the modification in-place. So a points to the same object it did before but that object now has different content.\nIf the object is immutable then it obviously can't perform the modification in-place. Some mutable objects may also not have an implementation of an in-place \"add\" operation . In this case the variable \"a\" will be updated to point to a new object containing the result of an addition operation.\nTechnically this is implemented by looking for __IADD__ first, if that is not implemented then __ADD__ is tried and finally __RADD__.\n\nCare is required when using += in python on variables where we are not certain of the exact type and in particular where we are not certain if the type is mutable or not. For example consider the following code.\ndef dostuff(a):\n b = a\n a += (3,4)\n print(repr(a)+' '+repr(b))\n\ndostuff((1,2))\ndostuff([1,2])\n\nWhen we invoke dostuff with a tuple then the tuple is copied as part of the += operation and so b is unaffected. However when we invoke it with a list the list is modified in place, so both a and b are affected. \nIn python 3, similar behaviour is observed with the \"bytes\" and \"bytearray\" types. \n\nFinally note that reassignment happens even if the object is not replaced. This doesn't matter much if the left hand side is simply a variable but it can cause confusing behaviour when you have an immutable collection referring to mutable collections for example: \na = ([1,2],[3,4])\na[0] += [5]\n\nIn this case [5] will successfully be added to the list referred to by a[0] but then afterwards an exception will be raised when the code tries and fails to reassign a[0].\n", "Note x += y is not the same as x = x + y in some situations where an additional operator is included because of the operator precedence combined with the fact that the right hand side is always evaluated first, e.g.\n>>> x = 2\n>>> x += 2 and 1\n>>> x\n3\n\n>>> x = 2\n>>> x = x + 2 and 1\n>>> x\n1\n\nNote the first case expand to:\n>>> x = 2\n>>> x = x + (2 and 1)\n>>> x\n3\n\nYou are more likely to encounter this in the 'real world' with other operators, e.g.\nx *= 2 + 1 == x = x * (2 + 1) != x = x * 2 + 1\n", "The short answer is += can be translated as \"add whatever is to the right of the += to the variable on the left of the +=\".\nEx. If you have a = 10 then a += 5 would be: a = a + 5\nSo, \"a\" now equal to 15. \n", "+= is just a shortcut for writing \nnumber = 4\nnumber = number + 1\n\nSo instead you would write\nnumbers = 4\nnumbers += 1\n\nBoth ways are correct but example two helps you write a little less code\n", "According to the documentation\n\nx += y is equivalent to x = operator.iadd(x, y). 
Another way to\nput it is to say that z = operator.iadd(x, y) is equivalent to the\ncompound statement z = x; z += y.\n\nSo x += 3 is the same as x = x + 3.\nx = 2\n\nx += 3\n\nprint(x)\n\nwill output 5.\nNotice that there's also\n\n&=\n//=\n<<=\n%=\n*=\n@=\n|=\n**=\n>>=\n-=\n/=\n^=\n\n", "Let's look at the byte code that CPython generates for x += y and x = x = y. (Yes, this is implementation-depenent, but it gives you an idea of the language-defined semantics being implemented.)\n>>> import dis\n>>> dis.dis(\"x += y\")\n 1 0 LOAD_NAME 0 (x)\n 2 LOAD_NAME 1 (y)\n 4 INPLACE_ADD\n 6 STORE_NAME 0 (x)\n 8 LOAD_CONST 0 (None)\n 10 RETURN_VALUE\n>>> dis.dis(\"x = x + y\")\n 1 0 LOAD_NAME 0 (x)\n 2 LOAD_NAME 1 (y)\n 4 BINARY_ADD\n 6 STORE_NAME 0 (x)\n 8 LOAD_CONST 0 (None)\n 10 RETURN_VALUE\n\nThe only difference between the two is the bytecode used for the operator: INPLACE_ADD for +=, and BINARY_ADD for +.\nBINARY_ADD is implemented using x.__add__ (or y.__radd__ if necessary), so x = x + y is roughly the same as x = x.__add__(y). Both __add__ and __radd__ typically return new instances, without modifying either argument.\nINPLACE_ADD is implemented using x.__iadd__. If that does not exist, then x.__add__ is used in its place. x.__iadd__ typically returns x, so that the resulting STORE_NAME does not change the referent of x, though that object may have been mutated. (Indeed, the purpose of INPLACE_ADD is to provide a way to mutate an object rather than always create a new object.)\nFor example, int.__iadd__ is not defined, so x += 7 when x is an int is the same as x = x.__add__(y), setting x to a new instance of int.\nOn the other hand, list.__iadd__ is defined, so x += [7] when x is a list is the same as x = x.__iadd__([9]). list.__iadd__ effectively calls extend to add the elements of its argument to the end of x. It's not really possible to tell by looking at the value of x before and after the augmented assignment that x was reassigned, because the same object was assigned to the name.\n", "As others also said, the += operator is a shortcut.\nAn example:\nvar = 1;\nvar = var + 1;\n#var = 2\n\nIt could also be written like so:\nvar = 1;\nvar += 1;\n#var = 2\n\nSo instead of writing the first example, you can just write the second one, which would work just fine.\n", "Remember when you used to sum, for example 2 & 3, in your old calculator and every time you hit the = you see 3 added to the total, the += does similar job. Example:\n>>> orange = 2\n>>> orange += 3\n>>> print(orange)\n5\n>>> orange +=3\n>>> print(orange)\n8\n\n", "I'm seeing a lot of answers that don't bring up using += with multiple integers.\nOne example:\nx -= 1 + 3\n\nThis would be similar to:\nx = x - (1 + 3)\n\nand not:\nx = (x - 1) + 3\n\n", "The += cuts down on the redundancy in adding two objects with a given variable:\nLong Version:\na = 10\na = a + 7\nprint(a) # result is 17\n\nShort Version:\na = 10\na += 7\nprint(a) # result is 17\n\n", "It’s basically a simplification of saying (variable) = (variable) + x\nFor example:\nnum = num + 2\n\nIs the same as:\nnum += 2\n\n" ]
[ 186, 166, 66, 32, 29, 14, 10, 7, 3, 2, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "compound_assignment", "operators", "python" ]
stackoverflow_0004841436_compound_assignment_operators_python.txt
Q: YTMUSIC ERROR: 'YTMusic' object has no attribute 'parser'
I have a problem with ytmusicapi. I used it for some tests and everything was OK, but now when I search for a song I get this error: 'YTMusic' object has no attribute 'parser'. Here is a test:

from ytmusicapi import YTMusic

ytmusic = YTMusic()
ric = ytmusic.search("fix you coldplay")
print(ric)

I tried to analyze the script and I created an isolated one, but nothing. Can you help me?

A: You are probably using an older version of ytmusicapi. Just update it with pip install -U ytmusicapi and it will work. I tested it on my machine and it works.
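As a quick sanity check (a minimal sketch using only the standard library; the exact version that fixed this is not stated in the record), you can print the installed version before and after upgrading:

from importlib.metadata import version

# compare against the latest release on PyPI before/after `pip install -U ytmusicapi`
print(version("ytmusicapi"))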
YTMUSIC ERROR: 'YTMusic' object has no attribute 'parser'
I've a problem with ytmusicapi. I used it for some tests and was all ok, but now when I search for a song I get this error: 'YTMusic' object has no attribute 'parser'. Here is a test: from ytmusicapi import YTMusic ytmusic = YTMusic() ric = ytmusic.search("fix you coldplay") print(ric) I tried to analyze the script and I created an isoleted one but nothing. Can you help me?
[ "You are probably using an older version of ytmusicapi. Just update it with pip install -U ytmusicapi and it will work. I tested it on my machine and it works\n" ]
[ 0 ]
[]
[]
[ "attributeerror", "python", "python_3.x" ]
stackoverflow_0074504038_attributeerror_python_python_3.x.txt
Q: Reading csv with scrapy
I am trying to read the CSV file, but I am getting the error TypeError: Request url must be str, got list. How do I solve this and read through the list? Any suggestions are welcome.

import scrapy
from scrapy.http import Request
import pandas as pd
from scrapy_selenium import SeleniumRequest
from io import open

class SampleSpider(scrapy.Spider):
    name = 'sample'
    with open("category.csv") as file:
        start_urls = [line.strip() for line in file]

    def start_requests(self):
        request = Request(url=self.start_urls, callback=self.parse)
        yield request

    def parse(self, response):
        pass

A: You need to iterate through the list and pass each URL as a request.

    def start_requests(self):
        for url in self.start_urls:
            yield Request(url, callback=self.parse)
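Putting the fix into the spider above, a minimal runnable sketch (same file name, imports trimmed to what is actually used) could look like this:

import scrapy
from scrapy.http import Request

class SampleSpider(scrapy.Spider):
    name = 'sample'

    with open("category.csv") as file:
        start_urls = [line.strip() for line in file]

    def start_requests(self):
        # yield one Request per URL instead of passing the whole list at once
        for url in self.start_urls:
            yield Request(url, callback=self.parse)

    def parse(self, response):
        pass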
Reading csv with scrapy
Trying to read the csv file but I am getting error that TypeError: Request url must be str, got list how to solve that kindly how to read list any suggestion recommend me import scrapy from scrapy.http import Request import pandas as pd from scrapy_selenium import SeleniumRequest from io import open class SampleSpider(scrapy.Spider): name = 'sample' with open("category.csv") as file: start_urls=[line.strip() for line in file] def start_requests(self): request=Request(url=self.start_urls,callback=self.parse) yield request def parse(self, response): pass
[ "You need to iterate through the list and pass each url as a request.\n def start_requests(self):\n for url in self.start_urls:\n yield Request(url,callback=self.parse)\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "scrapy", "web_scraping" ]
stackoverflow_0074503930_python_scrapy_web_scraping.txt
Q: pip not installing modules
As per the title. I'm running Python 2.7.10 under Windows 7 64 bit. I added C:\Python27\Scripts to my PATH, and I can run pip, but it's not able to install modules. For example pip install numpy gives

Collecting numpy
Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', gaierror(11004,'getaddrinfo failed'))': /simple/numpy/

It keeps retrying and failing for a while, and then it exits with

Could not find a version that satisfies the requirement numpy (from versions: )
No matching distribution found for numpy

Probably I'm behind a firewall, but I'm quite disappointed because I can install packages under R perfectly fine with install.packages, and I don't see why I can't do the same with Python. If I install packages manually (in the case of NumPy, from here: NumPy), what do I miss with respect to using pip?
As per suggestions in the comments, I downloaded the .whl file for NumPy from NumPy. I navigated to the downloads dir and executed

pip install numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl

I only got

numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl is not a supported wheel on this platform.

What should I do?

A: A proxy shall be used. For example:

python.exe -m pip install numpy --proxy="proxy.com:8080"

where "proxy.com:8080" is the proxy server address and port. This can be found in OS settings.
How to get them:

Windows: What Is a Proxy or Proxy Server
Linux: How can I find out the proxy address I am behind?
Mac OS X: How can I get Mac OS X's proxy information in a Bash script?

A: To bypass the firewall, you can use a proxy

pip install numpy --proxy <domain\user:password@proxyaddress:port>

For example,

pip install numpy --proxy http://<username>:<password>@proxy.xyz.com:2180

A: If you use Anaconda:
I was trying to install Django using cmd, and it just was not working! I opened up the Anaconda prompt and ran the usual

py -m pip install Django

command, and hey presto! Django was installed!

A: Personally, it was the configuration file in ~/.config/pip/pip.conf, which contained an extra-index-url, preventing the download, because it made pip search for all the packages on this extra URL instead of the main pip repository.
I experimented with old pip 8, because upgrading was even worse for this extra-index-url needed for another project.

A: You could try this one as well! Set the pip configuration to use a proxy, so that you do not need to worry about the proxy again when you install packages via pip:

pip config set global.proxy http://restrictedproxy.xxx.com:70

You could probably ask IT for the proxy domain and port if you work for a company.
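Another common workaround (a sketch, not taken from the answers above) is to set the standard proxy environment variables, which pip honors; the proxy address below is a placeholder:

import os
import subprocess

# placeholder proxy address -- replace with your real proxy host and port
os.environ["HTTPS_PROXY"] = "http://user:password@proxy.example.com:8080"
os.environ["HTTP_PROXY"] = os.environ["HTTPS_PROXY"]
subprocess.call(["python", "-m", "pip", "install", "numpy"])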
pip not installing modules
As per object. I'm running Python 2.7.10 under Windows 7 64 bit. I added C:\Python27\Scripts to my PATH, and I can run pip, but it's not able to install modules. For example pip install numpy gives Collecting numpy Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', gaierror(11004,'getaddrinfo failed'))': /simple/numpy/ It keeps retrying and failing for a while, and then it exits with Could not find a version that satisfies the requirement numpy (from versions: ) No matching distribution found for numpy Probably I'm behind a firewall, but I'm quite disappointed because I can install packages under R perfectly fine with install.packages, and I don't see why I can't do the same with Python. If I install packages manually (in the case of NumPy, from here NumPy what do I miss, with respect to using pip? As per suggestions in the comments, I downloaded the .whl file for NumPy from NumPy. I navigated to the downloads dir and executed pip install numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl I only got numpy-1.10.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl is not a supported wheel on this platform. What should I do?
[ "A proxy shall be used. For example:\npython.exe -m pip install numpy --proxy=\"proxy.com:8080\"\n\nwhere \"proxy.com:8080\" is the proxy server address and port. This can be found in OS settings.\nHow to get them:\n\nWindows: What Is a Proxy or Proxy Server\nLinux How can I find out the proxy address I am behind?\nMac OS X: How can I get Mac OS X's proxy information in a Bash script?\n\n", "To bypass the firewall, you can use a proxy\npip install numpy --proxy <domain\\user:password@proxyaddress:port>\n\nFor example,\npip install numpy --proxy http://<username>:<password>@proxy.xyz.com:2180\n\n", "If you use Anaconda:\nI was trying to install Django using cmd, and it just was not working! I opened up the Anaconda prompt and ran the usual\npy -m pip install Django\n\ncommand, and hey presto! Django was installed!\n", "Personally, it was the configuration file in ~/.config/pip/pip.conf, which contained an extra-index-url, preventing the download, because it made pip search for all the packages on this extra url instead of the main pip repository.\nI experimented with old pip 8, because upgrading was even worse for this extra-index-url needed for another project.\n", "You could try this one as well! Setting pip configuration using a proxy so that you do not need to concern the proxy issue again when you install packages via pip.\npip config set global.proxy http://restrictedproxy.xxx.com:70\nhttp://restrictedproxy.xxx.com \n:70\n\nyou could probably ask the proxy domain and the port from IT if you work for a company.\n" ]
[ 19, 2, 1, 0, 0 ]
[]
[]
[ "numpy", "pip", "python" ]
stackoverflow_0033996026_numpy_pip_python.txt
Q: How to append item from list in front of specific list in list of lists?
I am trying to get the subset of list_a with the highest summation of both lists b and c that are below or equal to the threshold. (List b and list c correspond to list a.) For example:

list_a = [1, 2, 3, 4, 5]
list_b = [3,4,7,8,2]
list_c = [4,6,1,5,8]
Threshold = 12

In descending order, I want to obtain a list with all the possible subsets of list_a that, for both lists b and c, have a total value less than or equal to the threshold. So, list b has to be lower than or equal to the threshold, and list c has to be lower than or equal to the threshold. I don't know how I can obtain this with the least amount of calculation time.
With the following function, I am trying to make subsets. Eventually, I want to obtain a list with only those subsets for which the totals of list b and list c are below the threshold. For example, subset (1, 2) has 7 as the value for list b (3+4), and list c has a value of 10 (4+6).
I tried the following for making the subsets:

def powerset(s):
    if s:
        tail1 = s[1:]
        for e in chain.from_iterable(combinations(tail1, r) for r in range(len(tail1) + 1, -1, -1)):
            yield (s[0],) + e
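As a minimal sketch of the filtering idea (same lists and threshold as above, ignoring efficiency for now), one can enumerate index subsets and keep only those whose b- and c-sums both fit under the threshold:

from itertools import chain, combinations

list_a = [1, 2, 3, 4, 5]
list_b = [3, 4, 7, 8, 2]
list_c = [4, 6, 1, 5, 8]
threshold = 12

indices = range(len(list_a))
valid = []
for subset in chain.from_iterable(combinations(indices, r) for r in range(len(indices) + 1)):
    b_sum = sum(list_b[i] for i in subset)
    c_sum = sum(list_c[i] for i in subset)
    if b_sum <= threshold and c_sum <= threshold:
        valid.append(([list_a[i] for i in subset], b_sum + c_sum))

# sort by the combined total, descending
valid.sort(key=lambda t: t[1], reverse=True)
print(valid[0])  # the qualifying subset with the highest combined sum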
How to append item from list in front of specific list in list of lists?
I am trying to get the subset of list_a with the highest summation of both lists b and c that are below or equal to the threshold. (List b and list c corresponds to list a) For example list_a = [1, 2, 3, 4, 5] list_b = [3,4,7,8,2] list_c = [4,6,1,5,8] Threshold = 12 In descending order, I want to obtain a list with all the possible subsets of list_a that, for both lists b and c has a total value less than or equal to the threshold. So, list b has to be lower or equal to the threshold, and list c has to be lower or equal to the threshold. I don't know how I can obtain this with the least amount of calculation time. With the following function, I am trying to make subsets. Eventually, I want to obtain a list with only those subsets for which the total of list b and list c is below the threshold. for example subset (1,2,) has 7 as value for list b (3+4) and list c has a value of 10 (4+6). I tried the following for making the subsets: def powerset(s): if s: tail1 = s[1:] for e in chain.from_iterable(combinations(tail1, r) for r in range(len(tail1) + 1, -1, -1)): yield (s[0],) + e
[ "Here's my (perhaps not totally complete, because your goal is not perfectly clear to me) method:\nlist_a = [1, 2, 3, 4, 5]\nlist_b = [3,4,7,8,2]\nlist_c = [4,6,1,5,8]\nThreshold = 12\n\nl = len(list_a)\n\ndef total(n):\n # convert n to binary form with l digits\n mask = bin(n)[2:].zfill(l)\n # calculates and returns the corresponding sums from list_b and list_c\n return(sum(int(mask[i])*list_b[i] for i in range(l)),sum(int(mask[i])*list_c[i] for i in range(l)))\n\ndic = {n:total(n) for n in range(2**l-1) if max(total(n)) <= Threshold}\n\nprint(dic)\n\n# {0: (0, 0), 1: (2, 8), 2: (8, 5), 4: (7, 1), 5: (9, 9), 8: (4, 6), 10: (12, 11), 12: (11, 7), 16: (3, 4), 17: (5, 12), 18: (11, 9), 20: (10, 5), 24: (7, 10)}\n\nThis dict's keys are the integers corresponding to the subsets (for example: 24 is 11000 in binary, which corresponds to the {1,2} subset of list_a), and the values are the corresponding sums from list_b and list_c.\nNow if all you want are the relevant subsets of list_a, this one will print them (using the same method as in my total(n) function):\nprint([{list_a[i] for i in range(l) if int(bin(n)[2:].zfill(l)[i])} for n in dic])\n# [set(), {5}, {4}, {3}, {3, 5}, {2}, {2, 4}, {2, 3}, {1}, {1, 5}, {1, 4}, {1, 3}, {1, 2}]\n\n" ]
[ 0 ]
[]
[]
[ "python", "subset" ]
stackoverflow_0074502849_python_subset.txt
Q: Pydantic - Upgrading object to another model
I have a NewUser model that is something that the end user inputs. I want to update the object to a UserInDB so that I can pass it to my db engine (DynamoDB, which expects a dict).
At the moment I'm calling .dict twice, which doesn't feel like the correct way to do it.

from pydantic import BaseModel, Field
from datetime import datetime
from typing import Optional
from uuid import uuid4

class NewUser(BaseModel):
    name: str
    email: str
    company_name: Optional[str]

class UserInDB(NewUser):
    hash: str = Field(default_factory= lambda: uuid4())
    range = 'DATA'
    created_at: datetime = Field(default_factory= lambda: datetime.now())

#...
# Emulating what an end user would send
user = NewUser(name='Example', company_name='example', email='example@example.com')

# Is calling dict twice the way to do it?
user_in_db = UserInDB(**user.dict()).dict()
db.create_user(user_in_db)
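One possible alternative sketch (an assumption on my part, not the only idiom): give UserInDB a small classmethod constructor instead of overriding __init__. Note that str(uuid4()) is used here so the default actually matches the str annotation:

from pydantic import BaseModel, Field
from datetime import datetime
from typing import Optional
from uuid import uuid4

class NewUser(BaseModel):
    name: str
    email: str
    company_name: Optional[str]

class UserInDB(NewUser):
    hash: str = Field(default_factory=lambda: str(uuid4()))
    range = 'DATA'
    created_at: datetime = Field(default_factory=datetime.now)

    @classmethod
    def from_new_user(cls, user: NewUser) -> "UserInDB":
        # build the DB model from the incoming model's fields
        return cls(**user.dict())

user = NewUser(name='Example', company_name='example', email='example@example.com')
user_in_db = UserInDB.from_new_user(user)
# db.create_user(user_in_db.dict())  # db is the asker's engine object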
Pydantic - Upgrading object to another model
I have a NewUser model that is something that the end user inputs, I want to update the object to a UserInDB so that I can pass it to my db engine (DynamoDB, which expects a dict) At the moment I'm calling .dict twice, which doesn't feel like the correct way to do it from pydantic import BaseModel, Field from datetime import datetime from typing import Optional from uuid import uuid4 class NewUser(BaseModel): name: str email: str company_name: Optional[str] class UserInDB(NewUser): hash: str = Field(default_factory= lambda: uuid4()) range = 'DATA' created_at: datetime = Field(default_factory= lambda: datetime.now()) #... #Emulating what an end user would send user = NewUser(name='Example', company_name='example', email='example@example.com') #Is calling dict twice way to do it? user_in_db = UserInDB(**user.dict()).dict() db.create_user(user_in_db)
[ "You could try to define an __init__ and the code would look nicer (to me at least).\nfrom pydantic import BaseModel, Field\nfrom datetime import datetime\nfrom typing import Optional\nfrom uuid import uuid4\n\nclass NewUser(BaseModel):\n name: str\n email: str\n company_name: Optional[str]\n\nclass UserInDB(NewUser):\n hash: str = Field(default_factory= lambda: uuid4())\n range = 'DATA'\n created_at: datetime = Field(default_factory= lambda: datetime.now())\n\n def __init__(self, user: NewUser):\n super().__init__(**user.dict())\n\n#...\n# Emulating what an end user would send\nuser = NewUser(name='Example', company_name='example', email='example@example.com')\n\n# This looks probably better\nuser_in_db = UserInDB(user)\n\n" ]
[ 1 ]
[]
[]
[ "fastapi", "pydantic", "python" ]
stackoverflow_0064446491_fastapi_pydantic_python.txt
Q: How to lookup different dataframes and return the values?
I'm trying to look up the index in two different dataframes and return the values. For example, for df1 I would like to look up in df2 and return the same index and values.
[image: DF1]
[image: DF2]
I would like my result to be like this.
[image: RESULTS]
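Since the example tables are only available as images, here is a hedged sketch assuming both frames share an 'ID' column (as the posted answers do); an inner merge keeps the matching rows with their values:

import pandas as pd

# hypothetical frames standing in for the screenshots
df1 = pd.DataFrame({"ID": [1, 3]})
df2 = pd.DataFrame({"ID": [1, 2, 3], "value": ["a", "b", "c"]})

result = df2.merge(df1, on="ID", how="inner")
print(result)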
How to lookup different dataframes and return the values?
Im trying to lookup the index in two different datframes and return the values. For example, in df1 i would like to lookup in df2 and return the same index and values. DF1 DF2 I would like my result to be like this. RESULTS
[ "Get the IDs from df2 where the ID is in df1\nfiltered_df = df2[(df2['ID'].isin(df1['ID']))]\n\n", "Try this\nnew_df = df2[df2.ID == [list(df1.ID)]]\n\n" ]
[ 1, 1 ]
[]
[]
[ "anaconda", "dataframe", "list", "python" ]
stackoverflow_0074504099_anaconda_dataframe_list_python.txt
Q: KeyError on adjacency list
Getting a KeyError and can't figure out why. I'm importing data from an Excel sheet using pandas and using it to create a graph using an adjacency list. The data imports fine, but when using the add_edge function I created, I keep getting a KeyError.
Link to a sample of the dataset: https://www.dropbox.com/s/80v3dhdf0c0o7cs/London%20Underground%20data%20fixed%20copy2.csv?dl=0
[image: Excel Data]

import pandas as pd

df = pd.read_excel(r'my_file_path.xlsx')

class Graph:
    def __init__(self, nodes):
        self.nodes = nodes  # all nodes in graph
        self.adj_list = {}  # adjacency list
        # loops through nodes, removes duplicates using sets, then adds the result as keys of the adjacency list
        self.adj_list = {node: set() for node in self.nodes}

    def add_edge(self, node_1, node_2, weight):
        # adds node_2 and the corresponding weight to the location of node_1
        self.adj_list[node_1].add((node_2, weight))
        # adds node_1 and the corresponding weight to the location of node_2
        self.adj_list[node_2].add((node_1, weight))

    # prints graph in readable format
    def print_graph(self):
        for node in self.nodes:
            print(node, ":", self.adj_list[node])

nodes = []
for index, row, in df.iterrows():
    station_a = row['Station A']
    nodes.append(station_a)

graph = Graph(nodes)

for index, row, in df.iterrows():
    station_a = row['Station A']
    station_b = row['Station B']
    weight = row['Weight']
    graph.add_edge(station_a, station_b, weight)

graph.print_graph()
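For context, a minimal sketch of the usual fix (collect node names from both columns before building the graph, since terminal stations may appear only in 'Station B'; column names as in the code above):

nodes = set()
for _, row in df.iterrows():
    nodes.add(row['Station A'])
    nodes.add(row['Station B'])  # terminal stations may only appear here

graph = Graph(list(nodes))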
KeyError on adjacency list
Getting a KeyError and can't figure out why. I'm importing data from an excel sheet using pandas and using it to create a graph using an adjacency list. The data imports fine, but when using the add_edge function I created I keep getting a KeyError. Link to a sample of the dataset: https://www.dropbox.com/s/80v3dhdf0c0o7cs/London%20Underground%20data%20fixed%20copy2.csv?dl=0 Excel Data import pandas as pd df = pd.read_excel(r'my_file_path.xlsx') class Graph: def __init__(self, nodes): self.nodes = nodes #all nodes in graph self.adj_list = {} #adjacency list #loops through nodes, and removes duplicates using sets, then adds result as key of adjacency list self.adj_list = {node: set() for node in self.nodes} def add_edge(self, node_1, node_2, weight): #adds node2 to the location of node1 #adds node_2 and the corresponding weight to the location of node_1 self.adj_list[node_1].add((node_2, weight)) #adds node_1 and the corresponding weight to the location of node_1 self.adj_list[node_2].add((node_1, weight)) #prints graph in readable format def print_graph(self): for node in self.nodes: print(node, ":", self.adj_list[node]) nodes = [] for index, row, in df.iterrows(): station_a = row['Station A'] nodes.append(station_a) graph = Graph(nodes) for index, row, in df.iterrows(): station_a = row['Station A'] station_b = row['Station B'] weight = row['Weight'] graph.add_edge(station_a, station_b, weight) graph.print_graph()
[ "The last station in a line (Like Elephant & Castle) is not on Station A column that you use to create your dictionary, and it should be in your dictionary (nodes). That is why you get the error.\nyou could change to this:\nfor index, row, in df.iterrows():\n station_a = row['Station A']\n station_b = row['Station B']\n nodes.append(station_a)\n nodes.append(station_b)\nnodes=list(set(nodes))\n\n\n" ]
[ 0 ]
[]
[]
[ "excel", "keyerror", "pandas", "python" ]
stackoverflow_0074503911_excel_keyerror_pandas_python.txt
Q: mypy does not use narrowed types inside function definitions
I have a problem with mypy. mypy does not use narrowed types inside function definitions. I have the following code:

from typing import Callable

def foo(a: str | int) -> list[str]:
    x: list[str] = ["abc", "def"]
    if isinstance(a, int):
        x.insert(a, "ghi")
    elif isinstance(a, str):
        x.insert(0, a)
    return x

def bar(a: str | int) -> Callable[[list[str]], list[str]]:
    if isinstance(a, int):
        def modify(x: list[str]) -> list[str]:
            x.insert(a, "ghi")
            return x
    elif isinstance(a, str):
        def modify(x: list[str]) -> list[str]:
            x.insert(0, a)
            return x
    return modify

foo is correctly identified as well-typed. I believe that bar should also be well-typed, but mypy gives this error:

16: error: Argument 1 to "insert" of "list" has incompatible type "Union[str, int]"; expected "SupportsIndex"
20: error: Argument 2 to "insert" of "list" has incompatible type "Union[str, int]"; expected "str"

Is this a bug in mypy? Is there a way for me to type this program otherwise? Comparing with alternative type checkers suggests this is mypy-specific. Pyright does not complain about anything here, but does if x.insert(0, a) is replaced with x.insert(a, "ghi").
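For reference, a sketch of the usual workaround: rebind the narrowed value to a fresh, precisely typed name and let the closure capture that instead of a:

from typing import Callable

def bar(a: str | int) -> Callable[[list[str]], list[str]]:
    if isinstance(a, int):
        idx: int = a  # narrowed copy that mypy can track inside the closure
        def modify(x: list[str]) -> list[str]:
            x.insert(idx, "ghi")
            return x
    else:
        s: str = a
        def modify(x: list[str]) -> list[str]:
            x.insert(0, s)
            return x
    return modify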
mypy does not use narrowed types inside function definitions
I have a problem with mypy. mypy does not use narrowed types inside function definitions. I have the following code: from typing import Callable def foo(a: str | int) -> list[str]: x: list[str] = ["abc", "def"] if isinstance(a, int): x.insert(a, "ghi") elif isinstance(a, str): x.insert(0, a) return x def bar(a: str | int) -> Callable[[list[str]], list[str]]: if isinstance(a, int): def modify(x: list[str]) -> list[str]: x.insert(a, "ghi") return x elif isinstance(a, str): def modify(x: list[str]) -> list[str]: x.insert(0, a) return x return modify foo is correctly identified as well-typed. I believe that bar should also be well-typed, but mypy gives this error: 16: error: Argument 1 to "insert" of "list" has incompatible type "Union[str, int]"; expected "SupportsIndex" 20: error: Argument 2 to "insert" of "list" has incompatible type "Union[str, int]"; expected "str" Is this a bug in mypy? Is there a way for me to type this program otherwise? Comparing with alternative type checkers suggests this is mypy-specific. Pyright does not complain about anything here, but does if x.insert(0, a) is replaced with x.insert(a, "ghi").
[ "I suspect this is because nested function definitions make name resolution and type narrowing pretty complex. You can fix it by just re-assigning a to a well-typed variable and then close over that with modify in each branch:\nfrom typing import Callable\n\ndef bar(a: str | int) -> Callable[[list[str]], list[str]]:\n if isinstance(a, int):\n # create a new variable with the correct type while mypy\n # can keep track of what's going on\n idx: int = a\n def modify(x: list[str]) -> list[str]:\n # close over the new variable instead of `a`\n x.insert(idx, \"ghi\")\n return x\n elif isinstance(a, str):\n # do the same thing here\n # the explicit type annotation isn't even actually necessary\n # I just put it for clarity\n s: str = a\n def modify(x: list[str]) -> list[str]:\n x.insert(0, s)\n return x\n return modify\n\n" ]
[ 2 ]
[]
[]
[ "mypy", "python", "type_hinting" ]
stackoverflow_0074504085_mypy_python_type_hinting.txt
Q: How to code for the value of "month" which produced the highest profit in a year?
Out of all the months in the year, I need to code the month with the largest total balance (the month whose entries together have the biggest "amount" value).

lst = [
    {'account': 'x\\*', 'amount': 300, 'day': 3, 'month': 'June'},
    {'account': 'y\\*', 'amount': 550, 'day': 9, 'month': 'May'},
    {'account': 'z\\*', 'amount': -200, 'day': 21, 'month': 'June'},
    {'account': 'g', 'amount': 80, 'day': 10, 'month': 'May'},
    {'account': 'x\\*', 'amount': 30, 'day': 16, 'month': 'August'},
    {'account': 'x\\*', 'amount': 100, 'day': 5, 'month': 'June'},
]

The problem is that both "amount" and the names of the months are values. I tried to find the total for each month, but I need to use a for loop to code the highest month "amount". My attempt:

get_sum = lambda my_list, month: sum(d['amount'] for d in my_list if d['month'] == month)
total_June = get_sum(my_list, 'June')
total_August = get_sum(my_list, 'August')
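A minimal sketch of the usual approach (same data as above): accumulate a per-month total, then take the month with the maximum:

from collections import defaultdict

totals = defaultdict(int)
for entry in lst:
    totals[entry['month']] += entry['amount']

best_month = max(totals, key=totals.get)
print(best_month, totals[best_month])  # May 630 for the sample data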
How to code for the value of "month" which produced the highest profit in a year?
Out of all the months in the year, I need to code the month with largest total balance (it's June as all together June has the biggest "amount" value) lst = [ {'account': 'x\\*', 'amount': 300, 'day': 3, 'month': 'June'}, {'account': 'y\\*', 'amount': 550, 'day': 9, 'month': 'May'}, {'account': 'z\\*', 'amount': -200, 'day': 21, 'month': 'June'}, {'account': 'g', 'amount': 80, 'day': 10, 'month': 'May'}, {'account': 'x\\*', 'amount': 30, 'day': 16, 'month': 'August'}, {'account': 'x\\*', 'amount': 100, 'day': 5, 'month': 'June'}, ] The problem is that both "amount" and the name of the months are values. I tried to find the total for each month, but I need to use for loop to code the highest month "amount". My attempt: get_sum = lambda my_dict, month: sum(d['amount'] for d in my_list if d['month'] == month) total_June = get_sum(my_list,'June') total_August = get_sum(my_list),'August')
[ "A simple solution with pandas.\nimport pandas as pd\n\nlst = [\n {'account': 'x\\\\*', 'amount': 300, 'day': 3, 'month': 'June'},\n {'account': 'y\\\\*', 'amount': 550, 'day': 9, 'month': 'May'},\n {'account': 'z\\\\*', 'amount': -200, 'day': 21, 'month': 'June'},\n {'account': 'g', 'amount': 80, 'day': 10, 'month': 'May'},\n {'account': 'x\\\\*', 'amount': 30, 'day': 16, 'month': 'August'},\n {'account': 'x\\\\*', 'amount': 100, 'day': 5, 'month': 'June'},\n]\n\n# convert list of dictionaries to dataframe\ndf = pd.DataFrame(lst)\n\n# Get the row / series that has max amount. \n# idxmax returns an index for loc.\nmax_series_by_amount = df.loc[df['amount'].idxmax(axis=\"index\")]\n\n# Get only month and amount in a plain list\nprint(max_series_by_amount[[\"month\", \"amount\"]].tolist())\n['May', 550]\n\nPlease note that using pandas adds a substantial amount of dependencies to the project, that said, pandas is commonly imported anyway for data science or data manipulation tasks. Pierre D solutions here are definitively faster.\n", "One possibility (among many):\nfrom itertools import groupby\nfrom operator import itemgetter\n\nmo_total = {\n k: sum([d.get('amount', 0) for d in v])\n for k, v in groupby(sorted(lst, key=itemgetter('month')), key=itemgetter('month'))\n}\n>>> mo_total\n{'August': 30, 'June': 200, 'May': 630}\n\n>>> max(mo_total.items(), key=lambda kv: kv[1])\n('May', 630)\n\nWithout itemgetter:\nbymonth = lambda d: d.get('month')\nmo_total = {\n k: sum([d.get('amount', 0) for d in v])\n for k, v in groupby(sorted(lst, key=bymonth), key=bymonth)\n}\n\nYet another way, using defaultdict:\nfrom collections import defaultdict\n\ntot = defaultdict(int)\n\nfor d in lst:\n tot[d['month']] += d.get('amount', 0)\n>>> tot\ndefaultdict(int, {'June': 200, 'May': 630, 'August': 30})\n\n>>> max(tot, key=lambda k: tot[k])\n'May'\n\n" ]
[ 1, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074503697_list_python.txt
Q: Getting rid of unwanted panel plot in subplots in a loop
I have daily data, shared here as data_link. I've done all the necessary operations on it and I want to draw a bar chart for each of the needed eleven (11) columns separately using a panel plot (3x4). My code worked correctly until I plotted my desired results in subplots. Since I am plotting results from eleven columns in a 3x4 panel plot I got IndexError: index 11 is out of bounds for axis 0 with size 11. My question is how to remove the unwanted last empty panel as shown in the image below.
This is the code I've been using:

import matplotlib.pyplot as plt
import pandas as pd
from math import ceil

csv_path_cont = 'path_to_my_data/data.csv'
fname = pd.read_csv(csv_path_cont)
fname['time'] = pd.to_datetime(fname['time'])
fname['month'] = fname['time'].dt.strftime('%b')
fname.set_index('time')

#=== setting 3x4 panel plot
fname_col = fname.columns[1:-2]
month_name = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']
# fixed number of columns
cols = 4
# number of rows, based on cols
rows = ceil(len(fname_col) / cols)
fig, ax = plt.subplots(rows, cols, figsize=(45,24))
#plt.figure(j)
m = 0
for i in range(3):
    for j in range(4):
        event_occurrence = fname[[fname_col[m],'month']][fname[fname_col[m]]>0]
        num_event = event_occurrence.groupby('month').count().reindex(month_name)
        num_event = num_event.fillna(0)
        ax[i,j].bar(num_event.index, num_event[fname_col[m]])
        plt.title(m)
        m += 1
        print(m)
fig.savefig('bar_chart', dpi=300)

[image: bar_chart_plot]
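For illustration, a sketch of one way to sidestep both the IndexError and the blank panel: loop over ax.flat next to the column list and switch off whatever axes are left over (variable names as in the code above):

for m, axis in enumerate(ax.flat):
    if m < len(fname_col):
        event_occurrence = fname[[fname_col[m], 'month']][fname[fname_col[m]] > 0]
        num_event = event_occurrence.groupby('month').count().reindex(month_name).fillna(0)
        axis.bar(num_event.index, num_event[fname_col[m]])
        axis.set_title(fname_col[m])
    else:
        axis.set_axis_off()  # hide the unused twelfth panel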
Getting rid of unwanted panel plot in subplots in a loop
I have daily data, shared here as data_link. I've done all the necessary operations on it and I want to bar-chart each of the needed eleven (11) columns separately using a panel plot (3x4). My code worked correctly until I plotted my desired results in subplots. Since I am plotting results from eleven columns in a 3x4 panel plot, I got '''IndexError: index 11 is out of bounds for axis 0 with size 11'''. My question is how to remove the unwanted last empty panel as shown in the image below. This is the code I've been using:
import matplotlib.pyplot as plt
import pandas as pd
from math import ceil

csv_path_cont = 'path_to_my_data/data.csv'
fname = pd.read_csv(csv_path_cont)
fname['time'] = pd.to_datetime(fname['time'])
fname['month'] = fname['time'].dt.strftime('%b')
fname.set_index('time')

#=== setting 3x4 panel plot
fname_col = fname.columns[1:-2]
month_name = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec']

# fixed number of columns
cols = 4
# number of rows, based on cols
rows = ceil(len(fname_col) / cols)

fig, ax = plt.subplots(rows, cols, figsize=(45,24))
#plt.figure(j)
m = 0
for i in range(3):
    for j in range(4):
        event_occurrence = fname[[fname_col[m],'month']][fname[fname_col[m]]>0]
        num_event = event_occurrence.groupby('month').count().reindex(month_name)
        num_event = num_event.fillna(0)
        ax[i,j].bar(num_event.index, num_event[fname_col[m]])
        plt.title(m)
        m += 1
        print(m)
fig.savefig('bar_chart', dpi=300)

bar_chart_plot
[ "You can use matplotlib.axes.Axes.set_axis_off:\nfor i in range (3):\n for j in range (4):\n try:\n event_occurrence = fname[[fname_col[m],'month']][fname[fname_col[m]]>0]\n num_event = event_occurrence.groupby('month').count().reindex(month_name)\n num_event = num_event.fillna(0)\n ax[i,j].bar(num_event.index,num_event[fname_col[m]])\n plt.title(m)\n m+=1\n # print(m)\n except IndexError:\n ax[i,j].set_axis_off()\n\n\n" ]
[ 0 ]
[]
[]
[ "loops", "math", "matplotlib", "pandas", "python" ]
stackoverflow_0074504144_loops_math_matplotlib_pandas_python.txt
Q: Celery auto reload on ANY changes I could make celery reload itself automatically when there is changes on modules in CELERY_IMPORTS in settings.py. I tried to give mother modules to detect changes even on child modules but it did not detect changes in child modules. That make me understand that detecting is not done recursively by celery. I searched it in the documentation but I did not meet any response for my problem. It is really bothering me to add everything related celery part of my project to CELERY_IMPORTS to detect changes. Is there a way to tell celery that "auto reload yourself when there is any changes in anywhere of project". Thank You! A: Celery --autoreload doesn't work and it is deprecated. Since you are using django, you can write a management command for that. Django has autoreload utility which is used by runserver to restart WSGI server when code changes. The same functionality can be used to reload celery workers. Create a seperate management command called celery. Write a function to kill existing worker and start a new worker. Now hook this function to autoreload as follows. import shlex import subprocess from django.core.management.base import BaseCommand from django.utils import autoreload def restart_celery(): cmd = 'pkill celery' subprocess.call(shlex.split(cmd)) cmd = 'celery worker -l info -A foo' subprocess.call(shlex.split(cmd)) class Command(BaseCommand): def handle(self, *args, **options): print('Starting celery worker with autoreload...') # For Django>=2.2 autoreload.run_with_reloader(restart_celery) # For django<2.1 # autoreload.main(restart_celery) Now you can run celery worker with python manage.py celery which will autoreload when codebase changes. This is only for development purposes and do not use it in production. Code taken from my other answer here. A: You can manually include additional modules with -I|--include. Combine this with GNU tools like find and awk and you'll be able to find all .py files and include them. $ celery -A app worker --autoreload --include=$(find . -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//') Lets explain it: find . -name "*.py" -type f find searches recursively for all files containing .py. The output looks something like this: ./app.py ./some_package/foopy ./some_package/bar.py Then: awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' This line takes output of find as input and removes all occurences of ./. Then it replaces all / with a .. The last sub() removes replaces .py with an empty string. ORS replaces all newlines with ,. This outputs: app,some_package.foo,some_package.bar, The last command, sed removes the last ,. So the command that is being executed looks like: $ celery -A app worker --autoreload --include=app,some_package.foo,some_package.bar If you have a virtualenv inside your source you can exclude it by adding -path .path_to_your_env -prune -o: $ celery -A app worker --autoreload --include=$(find . -path .path_to_your_env -prune -o -name "*.py" -type f | awk '{sub("\./",""); gsub("/", "."); sub(".py",""); print}' ORS=',' | sed 's/.$//') A: You can use watchmedo pip install watchdog Start celery worker indirectly via watchmedo watchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO More detailed A: I used watchdog watchdemo utility, it works great but for some reason the PyCharm debugger was not able to debug the subprocess spawned by watchdemo. 
So if your project has werkzeug as dependency, you can use the werkzeug._reloader.run_with_reloader function to autoreload celery worker on code change. Plus it works with PyCharm debugger. """ Filename: celery_dev.py """ import sys from werkzeug._reloader import run_with_reloader # this is the celery app path in my application, change it according to your project from web.app import celery_app def run(): # create copy of "argv" and remove script name argv = sys.argv.copy() argv.pop(0) # start the celery worker celery_app.worker_main(argv) if __name__ == '__main__': run_with_reloader(run) Sample PyCharm debug configuration. NOTE: This is a private werkzeug API and is working as of Werkzeug==2.0.3. It may stop working in future versions. Use at you own risk. A: OrangeTux's solution didn't work out for me, so I wrote a little Python script to achieve more or less the same. It monitors file changes using inotify, and triggers a celery restart if it detects a IN_MODIFY, IN_ATTRIB, or IN_DELETE. #!/usr/bin/env python """Runs a celery worker, and reloads on a file change. Run as ./run_celery [directory]. If directory is not given, default to cwd.""" import os import sys import signal import time import multiprocessing import subprocess import threading import inotify.adapters CELERY_CMD = tuple("celery -A amcat.amcatcelery worker -l info -Q amcat".split()) CHANGE_EVENTS = ("IN_MODIFY", "IN_ATTRIB", "IN_DELETE") WATCH_EXTENSIONS = (".py",) def watch_tree(stop, path, event): """ @type stop: multiprocessing.Event @type event: multiprocessing.Event """ path = os.path.abspath(path) for e in inotify.adapters.InotifyTree(path).event_gen(): if stop.is_set(): break if e is not None: _, attrs, path, filename = e if filename is None: continue if any(filename.endswith(ename) for ename in WATCH_EXTENSIONS): continue if any(ename in attrs for ename in CHANGE_EVENTS): event.set() class Watcher(threading.Thread): def __init__(self, path): super(Watcher, self).__init__() self.celery = subprocess.Popen(CELERY_CMD) self.stop_event_wtree = multiprocessing.Event() self.event_triggered_wtree = multiprocessing.Event() self.wtree = multiprocessing.Process(target=watch_tree, args=(self.stop_event_wtree, path, self.event_triggered_wtree)) self.wtree.start() self.running = True def run(self): while self.running: if self.event_triggered_wtree.is_set(): self.event_triggered_wtree.clear() self.restart_celery() time.sleep(1) def join(self, timeout=None): self.running = False self.stop_event_wtree.set() self.celery.terminate() self.wtree.join() self.celery.wait() super(Watcher, self).join(timeout=timeout) def restart_celery(self): self.celery.terminate() self.celery.wait() self.celery = subprocess.Popen(CELERY_CMD) if __name__ == '__main__': watcher = Watcher(sys.argv[1] if len(sys.argv) > 1 else ".") watcher.start() signal.signal(signal.SIGINT, lambda signal, frame: watcher.join()) signal.pause() You should probably change CELERY_CMD, or any other global variables. A: This is the way I made it work in Django: # worker_dev.py (put it next to manage.py) from django.utils import autoreload def run_celery(): from projectname import celery_app celery_app.worker_main(["-Aprojectname", "-linfo", "-Psolo"]) print("Starting celery worker with autoreload...") autoreload.run_with_reloader(run_celery) Then run python worker_dev.py. This has an advantage of working inside docker container. A: This is a huge adaptation from Suor's code. 
I made a custom Django command which can be called like this: python manage.py runcelery So, every time the code changes, celery's main process is gracefully killed and then executed again. Change the CELERY_COMMAND variable as you wish. # File: runcelery.py import os import signal import subprocess import time import psutil from django.core.management.base import BaseCommand from django.utils import autoreload DELAY_UNTIL_START = 5.0 CELERY_COMMAND = 'celery --config my_project.celeryconfig worker --loglevel=INFO' class Command(BaseCommand): help = '' def kill_celery(self, parent_pid): os.kill(parent_pid, signal.SIGTERM) def run_celery(self): time.sleep(DELAY_UNTIL_START) subprocess.run(CELERY_COMMAND.split(' ')) def get_main_process(self): for process in psutil.process_iter(): if process.ppid() == 0: # PID 0 has no parent continue parent = psutil.Process(process.ppid()) if process.name() == 'celery' and parent.name() == 'celery': return parent return def reload_celery(self): parent = self.get_main_process() if parent is not None: self.stdout.write('[*] Killing Celery process gracefully..') self.kill_celery(parent.pid) self.stdout.write('[*] Starting Celery...') self.run_celery() def handle(self, *args, **options): autoreload.run_with_reloader(self.reload_celery)
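If you would rather not depend on Django's autoreloader at all, the same restart-on-change idea can be wired up with watchdog's Observer API directly. A rough, framework-agnostic sketch (the celery command line here is a placeholder; adapt it to your app, and note that a single save often fires several filesystem events in a row, so expect back-to-back restarts unless you debounce):
import shlex
import subprocess
import time

from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

CELERY_CMD = shlex.split('celery -A yourproject worker -l info')  # placeholder app name

worker = subprocess.Popen(CELERY_CMD)

def restart(event):
    # kill the old worker and start a fresh one on every .py change
    global worker
    worker.terminate()
    worker.wait()
    worker = subprocess.Popen(CELERY_CMD)

handler = PatternMatchingEventHandler(patterns=['*.py'])
handler.on_any_event = restart

observer = Observer()
observer.schedule(handler, '.', recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
    worker.terminate()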
Celery auto reload on ANY changes
I could make Celery reload itself automatically when there are changes to the modules listed in CELERY_IMPORTS in settings.py. I tried passing parent modules so that changes in their child modules would also be detected, but changes in the child modules were not picked up. That makes me understand that the detection is not done recursively by Celery. I searched the documentation but did not find an answer to my problem. It is really bothering me to add every Celery-related part of my project to CELERY_IMPORTS just to detect changes. Is there a way to tell Celery to "auto-reload yourself when there is any change anywhere in the project"? Thank you!
[ "Celery --autoreload doesn't work and it is deprecated.\nSince you are using django, you can write a management command for that. \nDjango has autoreload utility which is used by runserver to restart WSGI server when code changes.\nThe same functionality can be used to reload celery workers. Create a seperate management command called celery. Write a function to kill existing worker and start a new worker. Now hook this function to autoreload as follows.\nimport shlex\nimport subprocess\n\nfrom django.core.management.base import BaseCommand\nfrom django.utils import autoreload\n\n\ndef restart_celery():\n cmd = 'pkill celery'\n subprocess.call(shlex.split(cmd))\n cmd = 'celery worker -l info -A foo'\n subprocess.call(shlex.split(cmd))\n\n\nclass Command(BaseCommand):\n\n def handle(self, *args, **options):\n print('Starting celery worker with autoreload...')\n\n # For Django>=2.2\n autoreload.run_with_reloader(restart_celery) \n\n # For django<2.1\n # autoreload.main(restart_celery)\n\nNow you can run celery worker with python manage.py celery which will autoreload when codebase changes.\nThis is only for development purposes and do not use it in production. Code taken from my other answer here.\n", "You can manually include additional modules with -I|--include. Combine this with GNU tools like find and awk and you'll be able to find all .py files and include them.\n$ celery -A app worker --autoreload --include=$(find . -name \"*.py\" -type f | awk '{sub(\"\\./\",\"\"); gsub(\"/\", \".\"); sub(\".py\",\"\"); print}' ORS=',' | sed 's/.$//')\n\nLets explain it:\nfind . -name \"*.py\" -type f\n\nfind searches recursively for all files containing .py. The output looks something like this:\n./app.py\n./some_package/foopy\n./some_package/bar.py\n\nThen:\nawk '{sub(\"\\./\",\"\"); gsub(\"/\", \".\"); sub(\".py\",\"\"); print}' ORS=','\n\nThis line takes output of find as input and removes all occurences of ./. Then it replaces all / with a .. The last sub() removes replaces .py with an empty string. ORS replaces all newlines with ,. This outputs:\napp,some_package.foo,some_package.bar,\n\nThe last command, sed removes the last ,. \nSo the command that is being executed looks like:\n$ celery -A app worker --autoreload --include=app,some_package.foo,some_package.bar\n\nIf you have a virtualenv inside your source you can exclude it by adding -path .path_to_your_env -prune -o:\n$ celery -A app worker --autoreload --include=$(find . -path .path_to_your_env -prune -o -name \"*.py\" -type f | awk '{sub(\"\\./\",\"\"); gsub(\"/\", \".\"); sub(\".py\",\"\"); print}' ORS=',' | sed 's/.$//')\n\n", "You can use watchmedo\npip install watchdog\n\nStart celery worker indirectly via watchmedo\nwatchmedo auto-restart --directory=./ --pattern=*.py --recursive -- celery worker --app=worker.app --concurrency=1 --loglevel=INFO\n\nMore detailed\n", "I used watchdog watchdemo utility, it works great but for some reason the PyCharm debugger was not able to debug the subprocess spawned by watchdemo.\nSo if your project has werkzeug as dependency, you can use the werkzeug._reloader.run_with_reloader function to autoreload celery worker on code change. 
Plus it works with PyCharm debugger.\n\"\"\"\nFilename: celery_dev.py\n\"\"\"\n\nimport sys\n\nfrom werkzeug._reloader import run_with_reloader\n\n# this is the celery app path in my application, change it according to your project\nfrom web.app import celery_app\n\n\ndef run():\n # create copy of \"argv\" and remove script name\n argv = sys.argv.copy()\n argv.pop(0)\n\n # start the celery worker\n celery_app.worker_main(argv)\n\n\nif __name__ == '__main__':\n run_with_reloader(run)\n\nSample PyCharm debug configuration.\n\nNOTE:\nThis is a private werkzeug API and is working as of Werkzeug==2.0.3. It may stop working in future versions. Use at you own risk.\n", "OrangeTux's solution didn't work out for me, so I wrote a little Python script to achieve more or less the same. It monitors file changes using inotify, and triggers a celery restart if it detects a IN_MODIFY, IN_ATTRIB, or IN_DELETE. \n#!/usr/bin/env python\n\"\"\"Runs a celery worker, and reloads on a file change. Run as ./run_celery [directory]. If\ndirectory is not given, default to cwd.\"\"\"\nimport os\nimport sys\nimport signal\nimport time\n\nimport multiprocessing\nimport subprocess\nimport threading\n\nimport inotify.adapters\n\n\nCELERY_CMD = tuple(\"celery -A amcat.amcatcelery worker -l info -Q amcat\".split())\nCHANGE_EVENTS = (\"IN_MODIFY\", \"IN_ATTRIB\", \"IN_DELETE\")\nWATCH_EXTENSIONS = (\".py\",)\n\ndef watch_tree(stop, path, event):\n \"\"\"\n @type stop: multiprocessing.Event\n @type event: multiprocessing.Event\n \"\"\"\n path = os.path.abspath(path)\n\n for e in inotify.adapters.InotifyTree(path).event_gen():\n if stop.is_set():\n break\n\n if e is not None:\n _, attrs, path, filename = e\n\n if filename is None:\n continue\n\n if any(filename.endswith(ename) for ename in WATCH_EXTENSIONS):\n continue\n\n if any(ename in attrs for ename in CHANGE_EVENTS):\n event.set()\n\n\nclass Watcher(threading.Thread):\n def __init__(self, path):\n super(Watcher, self).__init__()\n self.celery = subprocess.Popen(CELERY_CMD)\n self.stop_event_wtree = multiprocessing.Event()\n self.event_triggered_wtree = multiprocessing.Event()\n self.wtree = multiprocessing.Process(target=watch_tree, args=(self.stop_event_wtree, path, self.event_triggered_wtree))\n self.wtree.start()\n self.running = True\n\n def run(self):\n while self.running:\n if self.event_triggered_wtree.is_set():\n self.event_triggered_wtree.clear()\n self.restart_celery()\n time.sleep(1)\n\n def join(self, timeout=None):\n self.running = False\n self.stop_event_wtree.set()\n self.celery.terminate()\n self.wtree.join()\n self.celery.wait()\n super(Watcher, self).join(timeout=timeout)\n\n def restart_celery(self):\n self.celery.terminate()\n self.celery.wait()\n self.celery = subprocess.Popen(CELERY_CMD)\n\n\nif __name__ == '__main__':\n watcher = Watcher(sys.argv[1] if len(sys.argv) > 1 else \".\")\n watcher.start()\n\n signal.signal(signal.SIGINT, lambda signal, frame: watcher.join())\n signal.pause()\n\nYou should probably change CELERY_CMD, or any other global variables.\n", "This is the way I made it work in Django:\n# worker_dev.py (put it next to manage.py)\nfrom django.utils import autoreload\n\n\ndef run_celery():\n from projectname import celery_app\n\n celery_app.worker_main([\"-Aprojectname\", \"-linfo\", \"-Psolo\"])\n\n\nprint(\"Starting celery worker with autoreload...\")\nautoreload.run_with_reloader(run_celery)\n\nThen run python worker_dev.py. 
This has an advantage of working inside docker container.\n", "This is a huge adaptation from Suor's code.\nI made a custom Django command which can be called like this:\npython manage.py runcelery\n\nSo, every time the code changes, celery's main process is gracefully killed and then executed again.\nChange the CELERY_COMMAND variable as you wish.\n# File: runcelery.py\nimport os\nimport signal\nimport subprocess\nimport time\n\nimport psutil\nfrom django.core.management.base import BaseCommand\nfrom django.utils import autoreload\n\n\nDELAY_UNTIL_START = 5.0\nCELERY_COMMAND = 'celery --config my_project.celeryconfig worker --loglevel=INFO'\n\n\nclass Command(BaseCommand):\n\n help = ''\n\n def kill_celery(self, parent_pid):\n os.kill(parent_pid, signal.SIGTERM)\n\n def run_celery(self):\n time.sleep(DELAY_UNTIL_START)\n subprocess.run(CELERY_COMMAND.split(' '))\n\n def get_main_process(self):\n for process in psutil.process_iter():\n if process.ppid() == 0: # PID 0 has no parent\n continue\n\n parent = psutil.Process(process.ppid())\n\n if process.name() == 'celery' and parent.name() == 'celery':\n return parent\n\n return\n\n def reload_celery(self):\n parent = self.get_main_process()\n\n if parent is not None:\n self.stdout.write('[*] Killing Celery process gracefully..')\n self.kill_celery(parent.pid)\n\n self.stdout.write('[*] Starting Celery...')\n self.run_celery()\n\n def handle(self, *args, **options):\n autoreload.run_with_reloader(self.reload_celery)\n\n" ]
[ 31, 19, 19, 3, 2, 0, 0 ]
[]
[]
[ "celery", "django_celery", "python" ]
stackoverflow_0021666229_celery_django_celery_python.txt
Q: Got warning: warnings.warn(msg, UserWarning) I am trying to use CVXPY with this code:
# Number of variables
n = len(symbols)
# The variables vector
x = Variable(n)
# The minimum return
req_return = 0.02
# The return
ret = r.T*x
# The risk in xT.Q.x format
risk = quad_form(x, C)
# The core problem definition with the Problem class from CVXPY
prob = Problem(Minimize(risk), [sum(x)==1, ret >= req_return, x >= 0])
And I got this warning:
C:\Users\LENOVO\anaconda3\lib\site-packages\cvxpy\expressions\expression.py:593: UserWarning: This use of ``*`` has resulted in matrix multiplication. Using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1. Use ``*`` for matrix-scalar and vector-scalar multiplication. Use ``@`` for matrix-matrix and matrix-vector multiplication. Use ``multiply`` for elementwise multiplication. This code path has been hit 2 times so far. warnings.warn(msg, UserWarning)
I am trying to resolve the warning but have no idea what it is about.
A: It's a warning, which means your code still runs properly, so let's look at what the warning says. It tells us that using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1, so you are already using a CVXPY version above 1.1. How to solve it: use ``*`` for matrix-scalar and vector-scalar multiplication, use ``@`` for matrix-matrix and matrix-vector multiplication, and use ``multiply`` for elementwise multiplication. If ret = r.T*x does not run, you can try ret = r.T @ x (see the CVXPY docs).
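Putting the answer's advice together, here is a sketch of the model with @ in place of * (it assumes the question's symbols, r and C are already defined, and uses the cvxpy namespace explicitly):
import cvxpy as cp

n = len(symbols)
x = cp.Variable(n)
ret = r.T @ x                      # matrix-vector product: @ instead of *
risk = cp.quad_form(x, C)          # x^T C x
prob = cp.Problem(cp.Minimize(risk),
                  [cp.sum(x) == 1, ret >= 0.02, x >= 0])
prob.solve()
print(x.value)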
Got warning: warnings.warn(msg, UserWarning)
I am trying to use CVXPY with this code:
# Number of variables
n = len(symbols)
# The variables vector
x = Variable(n)
# The minimum return
req_return = 0.02
# The return
ret = r.T*x
# The risk in xT.Q.x format
risk = quad_form(x, C)
# The core problem definition with the Problem class from CVXPY
prob = Problem(Minimize(risk), [sum(x)==1, ret >= req_return, x >= 0])
And I got this warning:
C:\Users\LENOVO\anaconda3\lib\site-packages\cvxpy\expressions\expression.py:593: UserWarning: This use of ``*`` has resulted in matrix multiplication. Using ``*`` for matrix multiplication has been deprecated since CVXPY 1.1. Use ``*`` for matrix-scalar and vector-scalar multiplication. Use ``@`` for matrix-matrix and matrix-vector multiplication. Use ``multiply`` for elementwise multiplication. This code path has been hit 2 times so far. warnings.warn(msg, UserWarning)
I am trying to resolve the warning but have no idea what it is about.
[ "It's Warning mean your code run properly so Let's see the warning\n\nThe Warning tell us that\nUsing ``*`` for matrix multiplication has been deprecated since CVXPY 1.1.\n\nso you already use CVXPY version upper 1.1\nHow to solve:\n Use ``*`` for matrix-scalar and vector-scalar multiplication.\nUse ``@`` for matrix-matrix and matrix-vector multiplication.\nUse ``multiply`` for elementwise multiplication.\n\nIf ret = r.T*x cannot run you can try ret = r.T @x from CVXPY DOC\n" ]
[ 0 ]
[]
[]
[ "cvxpy", "python" ]
stackoverflow_0074503922_cvxpy_python.txt
Q: how to upper case every other word in string in python I am wondering how to uppercase every other word in a string. For example, i want to change "Here is my dog" to "Here IS my DOG" Can anyone help me get it started? All i can find is how to capitalize the first letter in each word. A: ' '.join( w.upper() if i%2 else w for (i, w) in enumerate(sentence.split(' ')) ) A: I think the method you are looking for is upper(). You can use split() to split your string into words and the call upper() on every other word and then join the strings back together, using join() A: words = sentence.split(' ') sentence = ' '.join(sum(zip(words[::2], map(str.upper, words[1::2])), ())) A: It's not the most compact function but this would do the trick. string = "Here is my dog" def alternateUppercase(s): i = 0 a = s.split(' ') l = [] for w in a: if i: l.append(w.upper()) else: l.append(w) i = int(not i) return " ".join(l) print alternateUppercase(string) A: Another method that uses regex to handle any non-alphanumeric characters. import re text = """The 1862 Derby was memorable due to the large field (34 horses), the winner being ridden by a 16-year-old stable boy and Caractacus' near disqualification for an underweight jockey and a false start.""" def selective_uppercase(word, index): if index%2: return str.upper(word) else: return word words, non_words = re.split("\W+", text), re.split("\w+", text) print "".join(selective_uppercase(words[i],i) + non_words[i+1] \ for i in xrange(len(words)-1) ) Output: The 1862 Derby WAS memorable DUE to THE large FIELD (34 HORSES), the WINNER being RIDDEN by A 16-YEAR-old STABLE boy AND Caractacus' NEAR disqualification FOR an UNDERWEIGHT jockey AND a FALSE start. A: user_word_2 = "I am passionate to code" #block code to split the sentence user_word_split_2 = user_word_2.split() #Empty list to store a to be split list of words, words_sep ="" #block code to make every alternate word for i in range(0, len(user_word_split_2)): #loop to make every second letter upper if i % 2 == 0: words_sep = words_sep + " "+ user_word_split_2[i].lower() else: words_sep = words_sep +" " + user_word_split_2[i].upper() #Block code to join the individual characters final_string_2 = "".join(words_sep) #Block code to product the final results print(final_string_2)
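For readers on Python 3, the accepted one-liner idea can be packaged as a small function; enumerate gives each word its position, and words at odd positions get upper():
def shout_every_other(sentence):
    return ' '.join(word.upper() if i % 2 else word
                    for i, word in enumerate(sentence.split(' ')))

print(shout_every_other('Here is my dog'))  # Here IS my DOG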
how to upper case every other word in string in python
I am wondering how to uppercase every other word in a string. For example, I want to change "Here is my dog" to "Here IS my DOG". Can anyone help me get started? All I can find is how to capitalize the first letter of each word.
[ "' '.join( w.upper() if i%2 else w\n for (i, w) in enumerate(sentence.split(' ')) )\n\n", "I think the method you are looking for is upper().\nYou can use split() to split your string into words and the call upper() on every other word and then join the strings back together, using join()\n", "words = sentence.split(' ')\nsentence = ' '.join(sum(zip(words[::2], map(str.upper, words[1::2])), ()))\n\n", "It's not the most compact function but this would do the trick.\nstring = \"Here is my dog\"\n\ndef alternateUppercase(s):\n i = 0\n a = s.split(' ')\n l = []\n for w in a:\n if i:\n l.append(w.upper())\n else:\n l.append(w)\n i = int(not i)\n return \" \".join(l)\n\nprint alternateUppercase(string)\n\n", "Another method that uses regex to handle any non-alphanumeric characters.\nimport re\n\ntext = \"\"\"The 1862 Derby was memorable due to the large field (34 horses), \nthe winner being ridden by a 16-year-old stable boy and Caractacus' \nnear disqualification for an underweight jockey and a false start.\"\"\"\n\ndef selective_uppercase(word, index):\n if index%2: \n return str.upper(word)\n else: \n return word\n\nwords, non_words = re.split(\"\\W+\", text), re.split(\"\\w+\", text)\nprint \"\".join(selective_uppercase(words[i],i) + non_words[i+1] \\\n for i in xrange(len(words)-1) )\n\nOutput:\nThe 1862 Derby WAS memorable DUE to THE large FIELD (34 HORSES), \nthe WINNER being RIDDEN by A 16-YEAR-old STABLE boy AND Caractacus' \nNEAR disqualification FOR an UNDERWEIGHT jockey AND a FALSE start.\n\n", "user_word_2 = \"I am passionate to code\"\n\n\n#block code to split the sentence\nuser_word_split_2 = user_word_2.split() \n\n\n#Empty list to store a to be split list of words, \n\nwords_sep =\"\"\n\n#block code to make every alternate word\n\nfor i in range(0, len(user_word_split_2)): #loop to make every second letter upper\n\n if i % 2 == 0:\n words_sep = words_sep + \" \"+ user_word_split_2[i].lower() \n\n else:\n words_sep = words_sep +\" \" + user_word_split_2[i].upper() \n \n\n#Block code to join the individual characters\nfinal_string_2 = \"\".join(words_sep)\n\n#Block code to product the final results\nprint(final_string_2)\n\n" ]
[ 7, 2, 1, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0008452616_python.txt
Q: How do I create an object with subvalues without creating a class? I want to create an object, A, with x and y values without creating a class. #<Code I am looking for goes here.> print(A.x, A.y) Is there an easy way to do this that I am missing, or is it too hacky? A: Another way to accomplish this would be: import types A = types.SimpleNamespace(x=5, y=2) print(A.x, A.y) A: I found an answer to this question: A = type('any name', (), {'x': 15, 'y': 23}) print(A.x, A.y) Read more about it here.
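If the object should be immutable, collections.namedtuple is another standard-library option alongside types.SimpleNamespace; a quick sketch:
from collections import namedtuple

Point = namedtuple('Point', ['x', 'y'])
A = Point(x=15, y=23)
print(A.x, A.y)  # 15 23
Note that A.x = 0 would raise an AttributeError, since namedtuple instances are read-only; prefer SimpleNamespace when the fields need to be reassigned later.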
How do I create an object with subvalues without creating a class?
I want to create an object, A, with x and y values without creating a class. #<Code I am looking for goes here.> print(A.x, A.y) Is there an easy way to do this that I am missing, or is it too hacky?
[ "Another way to accomplish this would be:\nimport types\n\nA = types.SimpleNamespace(x=5, y=2)\nprint(A.x, A.y)\n\n", "I found an answer to this question:\nA = type('any name', (), {'x': 15, 'y': 23})\n\nprint(A.x, A.y)\n\nRead more about it here.\n" ]
[ 1, 0 ]
[]
[]
[ "class", "oop", "python" ]
stackoverflow_0074504174_class_oop_python.txt
Q: How to Scroll the Left Quadrant How can I make Selenium run scroll only in the left quadrant? when I use the command below it is executed in the zoom of the map and that is not my intention, because I want to scrape the links of the companies that are in the left column driver.execute_script("window.scrollBy(0, 200)") A: You need to find the scrollable div element and then you can apply JavaScript as following: element = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[contains(@aria-label,'lanchonet')]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", element) The code above works for me. The entire code is: from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, service=webdriver_service) wait = WebDriverWait(driver, 5) url = "https://www.google.com.br/maps/search/lanchonete,/@-27.0027727,-48.6293259,15z" driver.get(url) element = wait.until(EC.presence_of_element_located((By.XPATH, "//div[@role='main']//div[contains(@aria-label,'lanchonet')]"))) driver.execute_script("arguments[0].scroll(0, arguments[0].scrollHeight);", element) you can, of course, scroll for other lengths, not only for the entire height.
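A related trick, once you hold one of the result elements, is to scroll it into view instead of scrolling the pane by a fixed offset. A sketch (the XPath for a single result card is a guess; inspect the page for the real locator):
from selenium.webdriver.common.by import By

card = driver.find_element(By.XPATH, "//div[@role='article']")  # hypothetical locator
driver.execute_script("arguments[0].scrollIntoView({block: 'end'});", card)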
How to Scroll the Left Quadrant
How can I make Selenium scroll only the left quadrant? When I use the command below, it zooms the map instead, which is not my intention, because I want to scrape the links of the companies listed in the left column.
driver.execute_script("window.scrollBy(0, 200)")
[ "You need to find the scrollable div element and then you can apply JavaScript as following:\nelement = wait.until(EC.presence_of_element_located((By.XPATH, \"//div[@role='main']//div[contains(@aria-label,'lanchonet')]\")))\ndriver.execute_script(\"arguments[0].scroll(0, arguments[0].scrollHeight);\", element)\n\nThe code above works for me.\nThe entire code is:\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 5)\n\nurl = \"https://www.google.com.br/maps/search/lanchonete,/@-27.0027727,-48.6293259,15z\"\ndriver.get(url)\n\nelement = wait.until(EC.presence_of_element_located((By.XPATH, \"//div[@role='main']//div[contains(@aria-label,'lanchonet')]\")))\ndriver.execute_script(\"arguments[0].scroll(0, arguments[0].scrollHeight);\", element)\n\nyou can, of course, scroll for other lengths, not only for the entire height.\n" ]
[ 1 ]
[]
[]
[ "python", "scroll", "selenium", "selenium_webdriver" ]
stackoverflow_0074504065_python_scroll_selenium_selenium_webdriver.txt
Q: Nested loop code to create right triangle in Python Professor gave us a simple code that executes a square and we need to add/change the code to output the right triangle shape as shown below. It's just a simple loop within a loop code, but I can't find tips or help anywhere for creating shapes with Python without the code looking extremely confusing/difficult. I need a simple explanation what to do and why I need to make those changes. (Nested loop code to create right triangle in Python) The code given that executes a square: Draw Square size = input('Please enter the size: ') chr = raw_input('Please enter the drawing character: ') row = 1 while row <= size: # Output a single row col = 1 while col <= size: # Output a single character, the comma suppresses the newline output print chr, col = col + 1 # Output a newline to end the row print '' row = row + 1 print '' The shape I need to output..... x x x x x x x x x x x x x x x x x x x x x x x x x x x x Once again, just a simple code explanation, it's an introduction to Python course. A: Just change while col <= size: to while col <= row: This will print out row number of X. If rowis 1the ouput is: X If rowis 2the ouput is: X X If rowis 3the ouput is: X X X If rowis 4the ouput is: X X X X A: Here is some code: size = int(raw_input("Enter the size: ")) #Instead of input, #convert it to integer! char = raw_input("Enter the character to draw: ") for i in range(1, size+1): print char*i #on the first iteration, prints 1 'x' #on the second iteration, prints 2 'x', and so on Result: >>> char = raw_input("Enter the character to draw: ") Enter the character to draw: x >>> size = int(raw_input("Enter the size: ")) Enter the size: 10 >>> for i in range(1, size+1): print char*i x xx xxx xxxx xxxxx xxxxxx xxxxxxx xxxxxxxx xxxxxxxxx xxxxxxxxxx Also, avoid using input in Python 2, as it evaluates the string passed as code, it's unsafe and a bad practice. Hope this helps! A: Code: def triangle(i, t=0): if i == 0: return 0 else: print ' ' * ( t + 1 ) + '*' * ( i * 2 - 1 ) return triangle( i - 1, t + 1 ) triangle(5) Output: * * * * * * * * * * * * * * * * * * * * * * * * * A: values = [0,1,2,3] for j in values: for k in range (j): print "*",; print "*"; define array start first for loop to one by one initialize values of array in variable j start second (nested) for loop to initialize ranje of variable j in variable k end second (nested) for loop to print * as par initialized range of j assigned to k i.e. if range is 1 then print one * end first for loop and print * for no of initialized array A: for i in range(1,8): stars="" for star in range(1,i+1): stars+= " x" print(stars) Output: x x x x x x x x x x x x x x x x x x x x x x x x x x x x A: I just got this in a lab and figured i would post my solution chrter = input('Input Character ', ) tri_height = int(input('Input Triangle Height ', )) new_chrter = '' for i in range(1, tri_height + 1): new_chrter += chrter + ' ' print(new_chrter)
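To stay as close as possible to the professor's template, the first answer's one-line change looks like this in Python 3 (input replaces raw_input, and print(..., end=' ') replaces the trailing comma); a sketch:
size = int(input('Please enter the size: '))
ch = input('Please enter the drawing character: ')

row = 1
while row <= size:
    col = 1
    while col <= row:      # was: col <= size -- this is the whole fix
        print(ch, end=' ')
        col = col + 1
    print()                # newline ends the row
    row = row + 1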
Nested loop code to create right triangle in Python
Professor gave us a simple code that draws a square, and we need to add/change the code to output the right-triangle shape shown below. It's just a simple loop-within-a-loop, but I can't find tips or help anywhere for creating shapes with Python without the code looking extremely confusing/difficult. I need a simple explanation of what to do and why I need to make those changes. (Nested loop code to create right triangle in Python)
The code given that draws a square:
# Draw Square
size = input('Please enter the size: ')
chr = raw_input('Please enter the drawing character: ')

row = 1
while row <= size:
    # Output a single row
    col = 1
    while col <= size:
        # Output a single character, the comma suppresses the newline output
        print chr,
        col = col + 1
    # Output a newline to end the row
    print ''
    row = row + 1
print ''
The shape I need to output:
x
x x
x x x
x x x x
x x x x x
x x x x x x
x x x x x x x
Once again, just a simple code explanation; it's an introduction to Python course.
[ "Just change while col <= size: to while col <= row:\nThis will print out row number of X.\nIf rowis 1the ouput is: X \nIf rowis 2the ouput is: X X \nIf rowis 3the ouput is: X X X \nIf rowis 4the ouput is: X X X X \n", "Here is some code:\nsize = int(raw_input(\"Enter the size: \")) #Instead of input, \n#convert it to integer!\nchar = raw_input(\"Enter the character to draw: \")\nfor i in range(1, size+1):\n print char*i #on the first iteration, prints 1 'x'\n #on the second iteration, prints 2 'x', and so on\n\nResult:\n>>> char = raw_input(\"Enter the character to draw: \")\nEnter the character to draw: x\n>>> size = int(raw_input(\"Enter the size: \"))\nEnter the size: 10\n>>> for i in range(1, size+1):\n print char*i\n\n\nx\nxx\nxxx\nxxxx\nxxxxx\nxxxxxx\nxxxxxxx\nxxxxxxxx\nxxxxxxxxx\nxxxxxxxxxx\n\nAlso, avoid using input in Python 2, as it evaluates the string passed as code, it's unsafe and a bad practice.\nHope this helps! \n", "Code:\ndef triangle(i, t=0):\n if i == 0:\n return 0\n else:\n print ' ' * ( t + 1 ) + '*' * ( i * 2 - 1 )\n return triangle( i - 1, t + 1 )\ntriangle(5)\n\nOutput:\n* * * * * * * * *\n * * * * * * *\n * * * * *\n * * * \n *\n\n", "values = [0,1,2,3]\nfor j in values:\n for k in range (j):\n print \"*\",;\n print \"*\";\n\n\ndefine array\nstart first for loop to one by one initialize values of array in variable j\nstart second (nested) for loop to initialize ranje of variable j in variable k\nend second (nested) for loop to print * as par initialized range of j assigned to k i.e. if range is 1 then print one *\nend first for loop and print * for no of initialized array \n\n", "for i in range(1,8):\n stars=\"\"\n for star in range(1,i+1):\n stars+= \" x\"\n print(stars)\n\nOutput:\nx\nx x\nx x x\nx x x x\nx x x x x\nx x x x x x\nx x x x x x x\n\n", "I just got this in a lab and figured i would post my solution\nchrter = input('Input Character ', )\ntri_height = int(input('Input Triangle Height ', ))\n\nnew_chrter = ''\nfor i in range(1, tri_height + 1):\n new_chrter += chrter + ' '\n print(new_chrter)\n\n" ]
[ 2, 1, 0, 0, 0, 0 ]
[ " def pattStar():\n print 'Enter no. of rows of pattern'\n noOfRows=input()\n for i in range(1,noOfRows+1):\n for j in range(i):\n print'*',\n print''\n\n", "for x in range(10,0,-1):\n print x*\"*\"\n\noutput:\n**********\n*********\n********\n*******\n******\n*****\n****\n***\n**\n*\n\n", "you can obtain it by simply use this:\nsize = input('Please enter the size: ')\nchr = raw_input('Please enter the drawing character: ')\ni=0\nstr =''\nwhile i< size:\n str = str +' '+ chr\n print str\n i=i+1\n\n" ]
[ -1, -1, -2 ]
[ "geometry", "loops", "nested", "python" ]
stackoverflow_0019784772_geometry_loops_nested_python.txt
Q: pandas dropna dropping the whole dataframe, need only to drop empty rows I'm using this piece of code: import pandas as pd df = pd.read_excel('input.xls', sheet_name='Nouveau concept') print(f"Dataframe:\n{df}") new_df = df.dropna() print(f"Dataframe now:\n{new_df}") To read an Excel file (it has to be xls and not xlsx) and drop all empty rows, i.e., rows that contain no data at all. When I run the above, I get this: Anibals-New-MacBook-Air:UCNI anibal$ python3 test.py Dataframe: Source Terminology Version Requestor Internal ID Parent ID Parent FSN ... Unnamed: 77 Unnamed: 78 Unnamed: 79 Unnamed: 80 0 september 2022 NaN 283403005.0 Cut of ear region (disorder) ... NaN NaN NaN NaN 1 september 2022 NaN 283403005.0 Cut of ear region (disorder) ... NaN NaN NaN NaN 2 september 2022 NaN 283412007.0 Cut of upper arm (disorder) ... NaN NaN NaN NaN 3 september 2022 NaN 283412007.0 Cut of upper arm (disorder) ... NaN NaN NaN NaN 4 september 2022 NaN 283413002.0 Cut of elbow (disorder) ... NaN NaN NaN NaN ... ... ... ... ... ... ... ... ... ... 5056 NaN NaN NaN NaN ... NaN NaN NaN NaN 5057 NaN NaN NaN NaN ... NaN NaN NaN NaN 5058 NaN NaN NaN NaN ... NaN NaN NaN NaN 5059 NaN NaN NaN NaN ... NaN NaN NaN NaN 5060 NaN NaN NaN NaN ... NaN NaN NaN NaN [5061 rows x 81 columns] Dataframe now: Empty DataFrame Columns: [Source Terminology Version, Requestor Internal ID, Parent ID, Parent FSN, FSN (*), Semantic Tag (*), PT (*), Synonym (1), Synonym (2), Definition, Reason for Change, Notes, References, Unnamed: 13, Unnamed: 14, Unnamed: 15, Unnamed: 16, Unnamed: 17, Unnamed: 18, Unnamed: 19, Unnamed: 20, Unnamed: 21, Unnamed: 22, Unnamed: 23, Unnamed: 24, Unnamed: 25, Unnamed: 26, Unnamed: 27, Unnamed: 28, Unnamed: 29, Unnamed: 30, Unnamed: 31, Unnamed: 32, Unnamed: 33, Unnamed: 34, Unnamed: 35, Unnamed: 36, Unnamed: 37, Unnamed: 38, Unnamed: 39, Unnamed: 40, Unnamed: 41, Unnamed: 42, Unnamed: 43, Unnamed: 44, Unnamed: 45, Unnamed: 46, Unnamed: 47, Unnamed: 48, Unnamed: 49, Unnamed: 50, Unnamed: 51, Unnamed: 52, Unnamed: 53, Unnamed: 54, Unnamed: 55, Unnamed: 56, Unnamed: 57, Unnamed: 58, Unnamed: 59, Unnamed: 60, Unnamed: 61, Unnamed: 62, Unnamed: 63, Unnamed: 64, Unnamed: 65, Unnamed: 66, Unnamed: 67, Unnamed: 68, Unnamed: 69, Unnamed: 70, Unnamed: 71, Unnamed: 72, Unnamed: 73, Unnamed: 74, Unnamed: 75, Unnamed: 76, Unnamed: 77, Unnamed: 78, Unnamed: 79, Unnamed: 80] Index: [] So, the second dataframe is completely empty. Why? I just need to read the rows that contain any data, i.e., if a row is just empty, skip it. The input file input.xls can be found here: https://docs.google.com/spreadsheets/d/1pXfhPHklnd0v45yLbff5E5dp2ypVIbxG/edit?usp=share_link&ouid=117900420544251849196&rtpof=true&sd=true Any ideas? I can't clean up the file by the way. This input file is generated by another system and my piece is supposed to automate handling this file, so I can't just load it in Excel and clean it up. I tried a whole bunch of combinations of dropna to no avail. I also tried several other solutions found in stackoverflow and again, to no avail. A: First thing, import only the required columns (i.e. exclude blank ones by using use_cols) df = pd.read_excel('input.xls', sheet_name='Nouveau concept',usecols="A:M") Then, to drop the empty rows, you have to consider a subset of columns. Currently, there are a few columns that are completely empty, so that is the reason why all rows are dropped. 
To combat this, use the following: new_df =df.dropna(subset=['Source Terminology Version'], how = 'all') # In this example, I used only one column, but you can pass in a list.
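For the literal ask, dropping only rows that are empty across the board, pandas' how='all' is the direct tool. A sketch chaining it with a column-wise pass that also discards the all-empty Unnamed columns:
import pandas as pd

df = pd.read_excel('input.xls', sheet_name='Nouveau concept')
df = df.dropna(axis='columns', how='all')  # drop columns with no data (the Unnamed ones)
df = df.dropna(how='all')                  # then drop rows with no data at all
print(df.shape)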
pandas dropna dropping the whole dataframe, need only to drop empty rows
I'm using this piece of code: import pandas as pd df = pd.read_excel('input.xls', sheet_name='Nouveau concept') print(f"Dataframe:\n{df}") new_df = df.dropna() print(f"Dataframe now:\n{new_df}") To read an Excel file (it has to be xls and not xlsx) and drop all empty rows, i.e., rows that contain no data at all. When I run the above, I get this: Anibals-New-MacBook-Air:UCNI anibal$ python3 test.py Dataframe: Source Terminology Version Requestor Internal ID Parent ID Parent FSN ... Unnamed: 77 Unnamed: 78 Unnamed: 79 Unnamed: 80 0 september 2022 NaN 283403005.0 Cut of ear region (disorder) ... NaN NaN NaN NaN 1 september 2022 NaN 283403005.0 Cut of ear region (disorder) ... NaN NaN NaN NaN 2 september 2022 NaN 283412007.0 Cut of upper arm (disorder) ... NaN NaN NaN NaN 3 september 2022 NaN 283412007.0 Cut of upper arm (disorder) ... NaN NaN NaN NaN 4 september 2022 NaN 283413002.0 Cut of elbow (disorder) ... NaN NaN NaN NaN ... ... ... ... ... ... ... ... ... ... 5056 NaN NaN NaN NaN ... NaN NaN NaN NaN 5057 NaN NaN NaN NaN ... NaN NaN NaN NaN 5058 NaN NaN NaN NaN ... NaN NaN NaN NaN 5059 NaN NaN NaN NaN ... NaN NaN NaN NaN 5060 NaN NaN NaN NaN ... NaN NaN NaN NaN [5061 rows x 81 columns] Dataframe now: Empty DataFrame Columns: [Source Terminology Version, Requestor Internal ID, Parent ID, Parent FSN, FSN (*), Semantic Tag (*), PT (*), Synonym (1), Synonym (2), Definition, Reason for Change, Notes, References, Unnamed: 13, Unnamed: 14, Unnamed: 15, Unnamed: 16, Unnamed: 17, Unnamed: 18, Unnamed: 19, Unnamed: 20, Unnamed: 21, Unnamed: 22, Unnamed: 23, Unnamed: 24, Unnamed: 25, Unnamed: 26, Unnamed: 27, Unnamed: 28, Unnamed: 29, Unnamed: 30, Unnamed: 31, Unnamed: 32, Unnamed: 33, Unnamed: 34, Unnamed: 35, Unnamed: 36, Unnamed: 37, Unnamed: 38, Unnamed: 39, Unnamed: 40, Unnamed: 41, Unnamed: 42, Unnamed: 43, Unnamed: 44, Unnamed: 45, Unnamed: 46, Unnamed: 47, Unnamed: 48, Unnamed: 49, Unnamed: 50, Unnamed: 51, Unnamed: 52, Unnamed: 53, Unnamed: 54, Unnamed: 55, Unnamed: 56, Unnamed: 57, Unnamed: 58, Unnamed: 59, Unnamed: 60, Unnamed: 61, Unnamed: 62, Unnamed: 63, Unnamed: 64, Unnamed: 65, Unnamed: 66, Unnamed: 67, Unnamed: 68, Unnamed: 69, Unnamed: 70, Unnamed: 71, Unnamed: 72, Unnamed: 73, Unnamed: 74, Unnamed: 75, Unnamed: 76, Unnamed: 77, Unnamed: 78, Unnamed: 79, Unnamed: 80] Index: [] So, the second dataframe is completely empty. Why? I just need to read the rows that contain any data, i.e., if a row is just empty, skip it. The input file input.xls can be found here: https://docs.google.com/spreadsheets/d/1pXfhPHklnd0v45yLbff5E5dp2ypVIbxG/edit?usp=share_link&ouid=117900420544251849196&rtpof=true&sd=true Any ideas? I can't clean up the file by the way. This input file is generated by another system and my piece is supposed to automate handling this file, so I can't just load it in Excel and clean it up. I tried a whole bunch of combinations of dropna to no avail. I also tried several other solutions found in stackoverflow and again, to no avail.
[ "First thing, import only the required columns (i.e. exclude blank ones by using use_cols)\ndf = pd.read_excel('input.xls', sheet_name='Nouveau concept',usecols=\"A:M\")\n\nThen, to drop the empty rows, you have to consider a subset of columns. Currently, there are a few columns that are completely empty, so that is the reason why all rows are dropped. To combat this, use the following:\nnew_df =df.dropna(subset=['Source Terminology Version'], how = 'all')\n# In this example, I used only one column, but you can pass in a list.\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074504298_dataframe_pandas_python.txt
Q: how to group AWS dynamodb table and get latest value of partition key using boto3(lambda)? I am new to AWS dynamodb, lambda. i have pretty good knowledge in RDB(MySQL). here is my sample table partitian key sort key attribute Device TimeStamp REMARKS D1 2022-12-12 12:13:14 hello D1 2022-12-12 12:14:14 testing D2 2022-12-12 12:18:14 hello D2 2022-12-12 12:19:14 testing D3 2022-11-12 12:13:14 hello D3 2022-12-12 12:14:14 testing i want to extract following output using python boto3 in lambda function using query statement. Latest timestamp value of each'partitian key' Output D1 2022-12-12 12:14:14 testing D2 2022-12-12 12:19:14 testing D3 2022-12-12 12:14:14 testing i tried using aws lambda tutorial but i could get all the data using scan method A: For this you would need to issue a single Query for each device and set ScanIndexForward=False and Limit=1. However, if for example you need all of the devices latest info then that would require you to create a Global Secondary Index (GSI). It was also require you to keep a "meta" record for each device which would be the latest item. pk sk attribute gsi_pk gsi_sk Pk sk REMARKS gsi_pk gsi_sk D1 2022-12-12 12:13:14 hello D1 2022-12-12 12:14:14 testing D1 meta testing D1_latest 2022-12-12 12:14:14 D2 2022-12-12 12:18:14 hello D2 2022-12-12 12:19:14 testing D2 meta testing D2_latest 2022-12-12 12:19:14 D3 2022-11-12 12:13:14 hello D3 2022-12-12 12:14:14 testing D3 meta testing D3_latest 2022-12-12 12:14:14 Now your GSI will have a partition key of Meta and will hold only the info you need: gsi partition key gsi sort key attribute gsi_pk pk gsi_sk remarks gsi_pk D1 2022-12-12 12:14:14 testing D1_latest D2 2022-12-12 12:19:14 testing D2_latest D3 2022-12-12 12:14:14 testing D3_latest This will allow you to efficiently Scan the GSI to get the items you need. However, it will require your writes to use Transactions. For every latest device you add you will also need to update the metadata item so that the GSI is updated with the latest value also. A: Using a GSI like Lee suggested is the general approach to take for a situation where you want to do bulk retrieval of items that match a certain characteristic. You mark items with that characteristic in an attribute and use that attribute as a GSI partition key. Then the GSI is pre-filtered. In this case I think it's a little tricky because when one item gains the characteristic (of being latest) another has to lose it (no longer latest), which requires two writes and coordination between those two if you have lots of potential writes concurrently to the same item collection. You'll probably want to use transactions, as Lee says, which means 2 writes at 2x the cost = 4 WCUs. Is there another way? The best choice in situations like this depends on details you didn't specify. How large is an item? How often do they update? How often do multiple clients write to the same item collection concurrently? How often do you do the bulk query? Is your scale such that costs are what matter or are costs trivial and you want to optimize for simplicity? (I wish every StackOverflow question about DynamoDB included these facts!) One design that can work (if the item data tends to be small, and you want to lower write costs at the expense of higher read costs) is to just store an array of values in a single item. You can safely add new values by directly appending to the array (which will cost just 1 write unit as long as the data set stays below 1 KB, and 2 write units if 1-2 KB, etc). 
So that's a 4x write cost savings over updating two items in a transaction. Then you can scan the table and for each item let the client pull the last item out of the array. The scan will return more data so the bulk read will cost a bit more. That's why the design choice depends on usage. If we assume you want to store the last N many data values per item, then this is an especially nice approach because otherwise you'd have to insert, remove the old latest flag, and delete the oldest record. Here you'd read the item, change the array as needed, write the new version, and use optimistic locking to handle concurrency. 1 WCU instead of 3, or really 6 (if you must use transactions). In other words: "It depends"
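Here is a sketch of the per-device Query route from the first answer, in boto3 (the table name is hypothetical; ScanIndexForward=False sorts TimeStamp descending so Limit=1 returns the newest item):
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('DeviceEvents')  # hypothetical table name

def latest_for(device_id):
    resp = table.query(
        KeyConditionExpression=Key('Device').eq(device_id),
        ScanIndexForward=False,  # newest TimeStamp first
        Limit=1,
    )
    items = resp.get('Items', [])
    return items[0] if items else None

for device in ('D1', 'D2', 'D3'):
    print(latest_for(device))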
how to group AWS dynamodb table and get latest value of partition key using boto3(lambda)?
I am new to AWS DynamoDB and Lambda; I have pretty good knowledge of relational databases (MySQL). Here is my sample table (partition key: Device, sort key: TimeStamp, attribute: REMARKS):
D1  2022-12-12 12:13:14  hello
D1  2022-12-12 12:14:14  testing
D2  2022-12-12 12:18:14  hello
D2  2022-12-12 12:19:14  testing
D3  2022-11-12 12:13:14  hello
D3  2022-12-12 12:14:14  testing
I want to extract the following output using Python boto3 in a Lambda function, using a query statement: the latest TimeStamp value for each partition key.
Output:
D1  2022-12-12 12:14:14  testing
D2  2022-12-12 12:19:14  testing
D3  2022-12-12 12:14:14  testing
I tried following the AWS Lambda tutorial, but I could only get all of the data, using the scan method.
[ "For this you would need to issue a single Query for each device and set ScanIndexForward=False and Limit=1.\nHowever, if for example you need all of the devices latest info then that would require you to create a Global Secondary Index (GSI). It was also require you to keep a \"meta\" record for each device which would be the latest item.\n\n\n\n\npk\nsk\nattribute\ngsi_pk\ngsi_sk\n\n\n\n\nPk\nsk\nREMARKS\ngsi_pk\ngsi_sk\n\n\nD1\n2022-12-12 12:13:14\nhello\n\n\n\n\nD1\n2022-12-12 12:14:14\ntesting\n\n\n\n\nD1\nmeta\ntesting\nD1_latest\n2022-12-12 12:14:14\n\n\nD2\n2022-12-12 12:18:14\nhello\n\n\n\n\nD2\n2022-12-12 12:19:14\ntesting\n\n\n\n\nD2\nmeta\ntesting\nD2_latest\n2022-12-12 12:19:14\n\n\nD3\n2022-11-12 12:13:14\nhello\n\n\n\n\nD3\n2022-12-12 12:14:14\ntesting\n\n\n\n\nD3\nmeta\ntesting\nD3_latest\n2022-12-12 12:14:14\n\n\n\n\nNow your GSI will have a partition key of Meta and will hold only the info you need:\n\n\n\n\ngsi partition key\ngsi sort key\nattribute\ngsi_pk\n\n\n\n\npk\ngsi_sk\nremarks\ngsi_pk\n\n\nD1\n2022-12-12 12:14:14\ntesting\nD1_latest\n\n\nD2\n2022-12-12 12:19:14\ntesting\nD2_latest\n\n\nD3\n2022-12-12 12:14:14\ntesting\nD3_latest\n\n\n\n\nThis will allow you to efficiently Scan the GSI to get the items you need. However, it will require your writes to use Transactions. For every latest device you add you will also need to update the metadata item so that the GSI is updated with the latest value also.\n", "Using a GSI like Lee suggested is the general approach to take for a situation where you want to do bulk retrieval of items that match a certain characteristic. You mark items with that characteristic in an attribute and use that attribute as a GSI partition key. Then the GSI is pre-filtered.\nIn this case I think it's a little tricky because when one item gains the characteristic (of being latest) another has to lose it (no longer latest), which requires two writes and coordination between those two if you have lots of potential writes concurrently to the same item collection. You'll probably want to use transactions, as Lee says, which means 2 writes at 2x the cost = 4 WCUs.\nIs there another way? The best choice in situations like this depends on details you didn't specify. How large is an item? How often do they update? How often do multiple clients write to the same item collection concurrently? How often do you do the bulk query? Is your scale such that costs are what matter or are costs trivial and you want to optimize for simplicity? (I wish every StackOverflow question about DynamoDB included these facts!)\nOne design that can work (if the item data tends to be small, and you want to lower write costs at the expense of higher read costs) is to just store an array of values in a single item. You can safely add new values by directly appending to the array (which will cost just 1 write unit as long as the data set stays below 1 KB, and 2 write units if 1-2 KB, etc). So that's a 4x write cost savings over updating two items in a transaction. Then you can scan the table and for each item let the client pull the last item out of the array. The scan will return more data so the bulk read will cost a bit more. That's why the design choice depends on usage.\nIf we assume you want to store the last N many data values per item, then this is an especially nice approach because otherwise you'd have to insert, remove the old latest flag, and delete the oldest record. 
Here you'd read the item, change the array as needed, write the new version, and use optimistic locking to handle concurrency. 1 WCU instead of 3, or really 6 (if you must use transactions).\nIn other words: \"It depends\"\n" ]
[ 0, 0 ]
[]
[]
[ "amazon_dynamodb", "aws_lambda", "boto3", "python" ]
stackoverflow_0074502349_amazon_dynamodb_aws_lambda_boto3_python.txt
Q: Replacing new vector after it gets empty on Python I have an original vector. I would like to put the first 3 elements into a new vector, do some math, and get new elements back from that math. Then I put those new elements into a new vector, delete the first 3 elements from the original vector, and repeat this exact procedure until the original vector is empty. This is what I have done so far:
OR = np.array([1,2,3,4,5,6])
new = OR[0:3]

while (True):
    tran = -2*c_[new]
    OR = delete(OR, [0,1,2])
    new = OR[0:3]
    if (OR == []):
        break
However, it is not working out properly; do you have any suggestions?
A: Not sure what c_ is in your code, but regardless, since numpy arrays are not dynamic, you can't remove or add elements to them. Deleting elements creates a new array without those elements, which is not optimal. I think you should either use a Python deque, which has fast pop methods for removing one element from the front/end, or just iterate over the original numpy array, for example like this:
def modify_array(arr):
    # your code for modifying the array here

result = []
original_array = np.arange(1, 10)

for idx in range(0, len(original_array), 3):
    result.append(modify_array(original_array[idx:idx+3]))

result = np.concatenate(result)
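Concretely, here is the chunked-iteration idea from the answer applied to the question's data, with -2 * chunk standing in as a placeholder for whatever the tran computation really does:
import numpy as np

OR = np.array([1, 2, 3, 4, 5, 6])
pieces = []
for i in range(0, len(OR), 3):
    chunk = OR[i:i + 3]        # the "first 3 elements" of what remains
    pieces.append(-2 * chunk)  # placeholder for the real math on each chunk
result = np.concatenate(pieces)
print(result)  # [ -2  -4  -6  -8 -10 -12]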
Replacing new vector after it gets empty on Python
I have an original vector. I would like to put the first 3 elements into a new vector, do some math, and get new elements back from that math. Then I put those new elements into a new vector, delete the first 3 elements from the original vector, and repeat this exact procedure until the original vector is empty. This is what I have done so far:
OR = np.array([1,2,3,4,5,6])
new = OR[0:3]

while (True):
    tran = -2*c_[new]
    OR = delete(OR, [0,1,2])
    new = OR[0:3]
    if (OR == []):
        break
However, it is not working out properly; do you have any suggestions?
[ "Not sure what c_ is in your code, but regardless since numpy arrays are not dynamic, you can't remove or add elements to them. Deleting elements creates a new array without those elements, which is not optimal. I think you should either use a python deque which has fast pop methods for removing one element from the front/end, or just iterate over the original numpy array, for example like this:\ndef modify_array(arr):\n # your code for modifying the array here\n\nresult = []\noriginal_array = np.arange(1, 10)\n\nfor idx in range(0, len(original_array), 3):\n result.append(modify_array(original_array[idx:idx+3]))\n\nresult = np.concatenate(result)\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "matrix_multiplication", "python", "vector" ]
stackoverflow_0074504293_arrays_matrix_multiplication_python_vector.txt
Q: How to declare instance variables in abstract class? class ILinkedListElem: @property def value(self): raise NotImplementedError @property def next(self): raise NotImplementedError class ListElem(ILinkedListElem): def __init__(self, value, next_node=None): self.value = value self.next = next_node I wanna something like this. This abstract variables definition works for class vars, but not for instance I want to all instances of ILinkedListElem subclass must has "value" and "next" attributes A: If you want to force/require all instances of any subclass of ILinkedListElem to have the attributes "value" and "nxt", the following standard implementation with abstractmethod seems to do what you're after: from abc import ABC, abstractmethod class ILinkedListElem (ABC): @property @abstractmethod def value(self): raise NotImplementedError @property @abstractmethod def nxt(self): raise NotImplementedError This is the abstract class, from which we create a compliant subclass: class ListElem_good (ILinkedListElem): def __init__(self, value, next_node=None): self._value = value self._nxt = next_node @property def value(self): return self._value @property def nxt(self): return self._nxt We create an instance of this compliant subclass and test it: x = ListElem_good('foo', 'bar') print (x.value) print (x.nxt) #result: # foo # bar If we create a non-compliant subclass that omits an implementation of nxt, like so: class ListElem_bad (ILinkedListElem): def __init__(self, value): self._value = value @property def value(self): return self._value when we try to create an instance of this non-compliant subclass: y = ListElem_bad('foo') print (y.value) it fails: y = ListElem_bad('foo') TypeError: Can't instantiate abstract class ListElem_bad with abstract methods nxt This relies on essentially the same solution offered here, which you suggested in a comment-exchange does not meet your requirements. But when applied to your specific use-case above, it appears to precisely address the issue you've raised - or have I misunderstood?
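One quick complement to the answer above, as a sketch: with the ABC/abstractmethod version, the abstract base itself also refuses to be instantiated, not just incomplete subclasses:

try:
    ILinkedListElem()
except TypeError as exc:
    # message wording varies by Python version, roughly:
    # Can't instantiate abstract class ILinkedListElem with abstract methods nxt, value
    print(exc)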
How to declare instance variables in abstract class?
class ILinkedListElem: @property def value(self): raise NotImplementedError @property def next(self): raise NotImplementedError class ListElem(ILinkedListElem): def __init__(self, value, next_node=None): self.value = value self.next = next_node I want something like this. This abstract variable definition works for class vars, but not for instances. I want all instances of ILinkedListElem subclasses to have "value" and "next" attributes.
[ "If you want to force/require all instances of any subclass of ILinkedListElem to have the attributes \"value\" and \"nxt\", the following standard implementation with abstractmethod seems to do what you're after:\nfrom abc import ABC, abstractmethod\n\nclass ILinkedListElem (ABC):\n @property\n @abstractmethod\n def value(self):\n raise NotImplementedError\n\n @property\n @abstractmethod\n def nxt(self):\n raise NotImplementedError\n\nThis is the abstract class, from which we create a compliant subclass:\nclass ListElem_good (ILinkedListElem):\n \n def __init__(self, value, next_node=None):\n self._value = value\n self._nxt = next_node\n \n @property\n def value(self):\n return self._value\n \n @property\n def nxt(self):\n return self._nxt\n\nWe create an instance of this compliant subclass and test it:\nx = ListElem_good('foo', 'bar')\nprint (x.value)\nprint (x.nxt)\n\n#result:\n # foo\n # bar\n\nIf we create a non-compliant subclass that omits an implementation of nxt, like so:\nclass ListElem_bad (ILinkedListElem):\n \n def __init__(self, value):\n self._value = value\n \n \n @property\n def value(self):\n return self._value\n\nwhen we try to create an instance of this non-compliant subclass:\ny = ListElem_bad('foo')\nprint (y.value)\n\nit fails:\n y = ListElem_bad('foo')\n\nTypeError: Can't instantiate abstract class ListElem_bad with abstract methods nxt\n\nThis relies on essentially the same solution offered here, which you suggested in a comment-exchange does not meet your requirements. But when applied to your specific use-case above, it appears to precisely address the issue you've raised - or have I misunderstood?\n" ]
[ 0 ]
[]
[]
[ "abstract_class", "inheritance", "oop", "python", "python_3.x" ]
stackoverflow_0074504278_abstract_class_inheritance_oop_python_python_3.x.txt
Q: how to use the venv in pycharm for solving my problem? First, I should say that I am a newcomer to programming with Python. My problem is that I am trying to make a Telegram bot in Python with PyCharm. I installed the telegram and telegram-python-bot packages with pip in the terminal of PyCharm, but when I run my project, the error shown is that the telegram module is not found. I tried to solve this problem with a venv, based on my friend's suggestion, but it did not work. I hope you can solve my problem :)
how to use the venv in pycharm for solving my problem?
First, I should say that I am a newcomer to programming with Python. My problem is that I am trying to make a Telegram bot in Python with PyCharm. I installed the telegram and telegram-python-bot packages with pip in the terminal of PyCharm, but when I run my project, the error shown is that the telegram module is not found. I tried to solve this problem with a venv, based on my friend's suggestion, but it did not work. I hope you can solve my problem :)
[]
[]
[ "\nYou should create a local virtual environment with python3 -m venv venv\n\nConfigure this Pycharm project to select this environment: Pycharm - Preferences - Project - Python interpreter. Then select the gear, select add; choose this local environment.\n\nQuit and reopen your Pycharm project\n\n\nThen you can install your packages on this environment and it will be recognised by Pycharm\n" ]
[ -1 ]
[ "pycharm", "python", "telegram_bot", "virtualenv" ]
stackoverflow_0074504335_pycharm_python_telegram_bot_virtualenv.txt
Q: Python - Loop through lists and join them when they match I have a dict of lists. I have to loop through join them where possible. When joining them I have to add two columns together. I can either use the dict or list. Depending on what is easiest/recommended. e.g. id name date value 1 hotel1 22-11-22 90 2 hotel2 22-11-22 90 3 hotel3 22-11-22 90 4 hotel1 23-11-22 10 5 hotel2 23-11-22 60 6 hotel3 23-11-22 90 So I want to loop through the dict and provide the following outcome: { "hotelName": "hotel1", "date": "22-11-22", "value": "100" } { "hotelName": "hotel2", "date": "22-11-22", "value": "150" } { "hotelName": "hotel3", "date": "22-11-22", "value": "180" } Any tips of guidance is welcome I tried looping through the lists, but I can only output { "hotelName": "hotel1", "date": "22-11-22", "value": "90" } { "hotelName": "hotel2", "date": "22-11-22", "value": "90" } { "hotelName": "hotel3", "date": "22-11-22", "value": "90" } { "hotelName": "hotel1", "date": "23-11-22", "value": "10" } { "hotelName": "hotel2", "date": "23-11-22", "value": "60" } { "hotelName": "hotel3", "date": "23-11-22", "value": "90" } A: Here is a stab at it =) hotels = {} for ind,row in df.iterrows(): hotel = row['name'] if hotel in hotels: hotels[hotel]['value'] += row['value'] hotels[hotel]['date'].append(row['date']) else: hotels[hotel] = { 'value': row['value'], 'date': [row['date']] } print(hotels) Outputs: {'hotel1': {'value': 100, 'date': ['22.11.2022', '23.11.2022']}, 'hotel2': {'value': 150, 'date': ['22.11.2022', '23.11.2022']}, 'hotel3': {'value': 180, 'date': ['22.11.2022', '23.11.2022']}}
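If pandas is already in play (the answer above iterates a DataFrame), the same aggregation can be written without an explicit loop — a sketch assuming the table from the question and that the first date per hotel is the one to keep, since the expected output shows a single date per hotel:

out = (df.groupby("name", as_index=False)
         .agg(date=("date", "first"), value=("value", "sum"))
         .rename(columns={"name": "hotelName"})
         .to_dict("records"))
# [{'hotelName': 'hotel1', 'date': '22-11-22', 'value': 100}, ...]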
Python - Loop through lists and join them when they match
I have a dict of lists. I have to loop through and join them where possible. When joining them I have to add two columns together. I can either use the dict or list, depending on what is easiest/recommended. e.g. id name date value 1 hotel1 22-11-22 90 2 hotel2 22-11-22 90 3 hotel3 22-11-22 90 4 hotel1 23-11-22 10 5 hotel2 23-11-22 60 6 hotel3 23-11-22 90 So I want to loop through the dict and provide the following outcome: { "hotelName": "hotel1", "date": "22-11-22", "value": "100" } { "hotelName": "hotel2", "date": "22-11-22", "value": "150" } { "hotelName": "hotel3", "date": "22-11-22", "value": "180" } Any tips or guidance are welcome. I tried looping through the lists, but I can only output { "hotelName": "hotel1", "date": "22-11-22", "value": "90" } { "hotelName": "hotel2", "date": "22-11-22", "value": "90" } { "hotelName": "hotel3", "date": "22-11-22", "value": "90" } { "hotelName": "hotel1", "date": "23-11-22", "value": "10" } { "hotelName": "hotel2", "date": "23-11-22", "value": "60" } { "hotelName": "hotel3", "date": "23-11-22", "value": "90" }
[ "Here is a stab at it =)\nhotels = {}\nfor ind,row in df.iterrows():\n hotel = row['name']\n if hotel in hotels:\n hotels[hotel]['value'] += row['value']\n hotels[hotel]['date'].append(row['date'])\n else:\n hotels[hotel] = {\n 'value': row['value'],\n 'date': [row['date']]\n }\nprint(hotels)\n\nOutputs:\n{'hotel1': {'value': 100, 'date': ['22.11.2022', '23.11.2022']},\n 'hotel2': {'value': 150, 'date': ['22.11.2022', '23.11.2022']},\n 'hotel3': {'value': 180, 'date': ['22.11.2022', '23.11.2022']}}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074504266_dictionary_list_python.txt
Q: tkinter: NameError: name 'root' is not defined I've split my tkinter app into more files, and right now I have two files: main.py import tkinter as tk from login_info import LoginInfo class Main(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_page = LoginInfo(self) self.login_page.pack(expand='True') if __name__ == "__main__": root = tk.Tk() Main(root).pack(side="top", fill="both", expand=True) root.mainloop() login_page.py import tkinter as tk class LoginInfo(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_frame = tk.Frame(self) self.login_frame.pack() self.username_label = tk.Label(self.login_frame, text='Username:') self.username_label.grid(row=0, column=0, padx=(10,0), pady=(10,0)) self.username_entry = tk.Entry(self.login_frame) self.username_entry.grid(row=0, column=1, padx=(10,0), pady=(10,0)) self.password_label = tk.Label(self.login_frame, text='Password:') self.password_label.grid(row=1, column=0, padx=(10,0), pady=(10,0)) self.password_entry = tk.Entry(self.login_frame) self.password_entry.grid(row=1, column=1, padx=(10,0), pady=(10,0)) self.login_button = tk.Button(self.login_frame, text='Login', command=self.login) self.login_button.grid(row=2, column=0, columnspan=2, pady=20) root.bind('<Return>', self.login) def login(self,event): print('Logged In') if __name__ == "__main__": root = tk.Tk() LoginInfo(root).pack(side="top", fill="both", expand=True) root.mainloop() In the login page I'm trying to bind the return button to a function but I'm getting this error: NameError: name 'root' is not defined This happens only if I launch the code from main.py; if I launch it from login_page.py it works A: Remove the duplicated ... if __name__ == "__main__": root = tk.Tk() LoginInfo(root).pack(side="top", fill="both", expand=True) root.mainloop() ... from the login_page.py and launch the code from main.py. Use if __name__ == '__main__':... only once and only in the module which starts the program, because only in this module the special variable __name__ will have the value '__main__'. Also, make sure that you import from login_page, not from login_info in the main.py, as you named that module login_page.py and not login_info.py.
Concrete code I was able to get your example running by doing this: # main.py import tkinter as tk from login_page import LoginPage class Main(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_page = LoginPage(self) self.login_page.pack(expand='True') if __name__ == "__main__": root = tk.Tk() main = Main(root) main.pack(side="top", fill="both", expand=True) root.mainloop() # login_page.py import tkinter as tk class LoginPage(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_frame = tk.Frame(self) self.login_frame.pack() self.username_label = tk.Label(self.login_frame, text='Username:') self.username_label.grid(row=0, column=0, padx=(10,0), pady=(10,0)) self.username_entry = tk.Entry(self.login_frame) self.username_entry.grid(row=0, column=1, padx=(10,0), pady=(10,0)) self.password_label = tk.Label(self.login_frame, text='Password:') self.password_label.grid(row=1, column=0, padx=(10,0), pady=(10,0)) self.password_entry = tk.Entry(self.login_frame) self.password_entry.grid(row=1, column=1, padx=(10,0), pady=(10,0)) self.login_button = tk.Button(self.login_frame, text='Login', command=self.login) self.login_button.grid(row=2, column=0, columnspan=2, pady=20) self.bind_all('<Return>', self.login) def login(self, event=None): print('Logged In') The trick is to use self.bind_all instead of root.bind, near the bottom of login_page.py, which binds the key to the whole application. This function seems not to be documented in the official Python tkinter docs, which may indicate that there is a more pythonic solution. I personally would prefer to put the declaration of this binding in the main.py. We have a sub unit here, which silently modifies a parent unit. The bigger the project gets, the more this leads to problems.
tkinter: NameError: name 'root' is not defined
I've split my tkinter app in more file, and right now I've two file: main.py import tkinter as tk from login_info import LoginInfo class Main(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_page = LoginInfo(self) self.login_page.pack(expand='True') if __name__ == "__main__": root = tk.Tk() Main(root).pack(side="top", fill="both", expand=True) root.mainloop() login_page.py import tkinter as tk class LoginInfo(tk.Frame): def __init__(self, parent, *args, **kwargs): tk.Frame.__init__(self, parent, *args, **kwargs) self.login_frame = tk.Frame(self) self.login_frame.pack() self.username_label = tk.Label(self.login_frame, text='Username:') self.username_label.grid(row=0, column=0, padx=(10,0), pady=(10,0)) self.username_entry = tk.Entry(self.login_frame) self.username_entry.grid(row=0, column=1, padx=(10,0), pady=(10,0)) self.password_label = tk.Label(self.login_frame, text='Password:') self.password_label.grid(row=1, column=0, padx=(10,0), pady=(10,0)) self.password_entry = tk.Entry(self.login_frame) self.password_entry.grid(row=1, column=1, padx=(10,0), pady=(10,0)) self.login_button = tk.Button(self.login_frame, text='Login', command=self.login) self.login_button.grid(row=2, column=0, columnspan=2, pady=20) root.bind('<Return>', self.login) def login(self,event): print('Logged In') if __name__ == "__main__": root = tk.Tk() LoginInfo(root).pack(side="top", fill="both", expand=True) root.mainloop() In the login page I'm trying to bind the return button to a function but I'm gettin this error: NameError: name 'root' is not defined This happen only if I launch the code from main.py, if I launch it from login_page.py it works
[ "Remove the duplicated ...\nif __name__ == \"__main__\":\n root = tk.Tk()\n LoginInfo(root).pack(side=\"top\", fill=\"both\", expand=True)\n root.mainloop()\n\n... from the login_page.py and launch the code from main.py.\nUse if __name__ == '__main__':... only once and only in the module which starts the program. Because only in this module the special variable __main__ will have the value '__main__'.\nAlso, make sure that you import from login_page, not from login_info in the main.py, as you named that module login_page.py and not login_info.py.\nConcrete code\nI was able to get your example running by doing this:\n# main.py\n\nimport tkinter as tk\n\nfrom login_page import LoginPage\n\nclass Main(tk.Frame):\n def __init__(self, parent, *args, **kwargs):\n tk.Frame.__init__(self, parent, *args, **kwargs)\n self.login_page = LoginPage(self)\n self.login_page.pack(expand='True')\n\nif __name__ == \"__main__\":\n root = tk.Tk()\n main = Main(root)\n main.pack(side=\"top\", fill=\"both\", expand=True)\n root.mainloop()\n\n# login_page.py\n\nimport tkinter as tk\n\nclass LoginPage(tk.Frame):\n def __init__(self, parent, *args, **kwargs):\n tk.Frame.__init__(self, parent, *args, **kwargs)\n \n self.login_frame = tk.Frame(self)\n self.login_frame.pack()\n \n self.username_label = tk.Label(self.login_frame, text='Username:')\n self.username_label.grid(row=0, column=0, padx=(10,0), pady=(10,0))\n \n self.username_entry = tk.Entry(self.login_frame)\n self.username_entry.grid(row=0, column=1, padx=(10,0), pady=(10,0))\n \n self.password_label = tk.Label(self.login_frame, text='Password:')\n self.password_label.grid(row=1, column=0, padx=(10,0), pady=(10,0))\n \n self.password_entry = tk.Entry(self.login_frame)\n self.password_entry.grid(row=1, column=1, padx=(10,0), pady=(10,0))\n \n self.login_button = tk.Button(self.login_frame, text='Login', command=self.login)\n \n self.login_button.grid(row=2, column=0, columnspan=2, pady=20)\n \n self.bind_all('<Return>', self.login)\n\n def login(self, event=None):\n print('Logged In')\n\nThe trick is to use self.bind_all instead of root.bind, near the bottom of login_page.py, which binds the key to the whole application. This function seems not to be documented in the official Python tkinter docs, which may indicate that there is a more pythonic solution.\nI personally would prefer to put the declaration of this binding in the main.py. We have a sub unit here, which silently modifies a parent unit. The bigger the project gets, the more this leads to problems.\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074504480_python_tkinter.txt
Q: Class method called in __init__ not giving same output as the same function used outside the class I'm sure I'm missing something in how classes work here, but basically this is my class: import pandas as pd import numpy as np import scipy #example DF with OHLC columns and 100 rows gold = pd.DataFrame({'Open':[i for i in range(100)],'Close':[i for i in range(100)],'High':[i for i in range(100)],'Low':[i for i in range(100)]}) class Backtest: def __init__(self, ticker, df): self.ticker = ticker self.df = df self.levels = pivot_points(self.df) def pivot_points(self,df,period=30): highs = scipy.signal.argrelmax(df.High.values,order=period) lows = scipy.signal.argrelmin(df.Low.values,order=period) return list(df.High[highs[0]]) + list(df.Low[lows[0]]) inst = Backtest('gold',gold) #gold is a Pandas Dataframe with Open High Low Close columns and data inst.levels # This give me the whole dataframe (inst.df) instead of the expected output of the pivot_point function (a list of integers) The problem is inst.levels returns the whole DataFrame instead of the return value of the function pivot_points (which is supposed to be a list of integers) When I called the pivot_points function on the same DataFrame outside this class I got the list I expected I expected to get the result of the pivot_points() function after assigning it to self.levels inside the init but instead I got the entire DataFrame A: You would have to address pivot_points() as self.pivot_points() And there is no need to add period as an argument if you are not changing it, if you are, its okay there. I'm not sure if this helps, but here are some tips about your class: class Backtest: def __init__(self, ticker, df): self.ticker = ticker self.df = df # no need to define a instance variable here, you can access the method directly # self.levels = pivot_points(self.df) def pivot_points(self): period = 30 # period is a local variable to pivot_points so I can access it directly print(f'period inside Backtest.pivot_points: {period}') # df is an instance variable and can be accessed in any method of Backtest after it is instantiated print(f'self.df inside Backtest.pivot_points(): {self.df}') # to get any values out of pivot_points we return some calcualtions return 1 + 1 # if you do need an attribute like level to access it by inst.level you could create a property @property def level(self): return self.pivot_points() gold = 'some data' inst = Backtest('gold', gold) # gold is a Pandas Dataframe with Open High Low Close columns and data print(f'inst.pivot_points() outside the class: {inst.pivot_points()}') print(f'inst.level outside the class: {inst.level}') This would be the result: period inside Backtest.pivot_points: 30 self.df inside Backtest.pivot_points(): some data inst.pivot_points() outside the class: 2 period inside Backtest.pivot_points: 30 self.df inside Backtest.pivot_points(): some data inst.level outside the class: 2 A: Thanks to the commenter Henry Ecker I found that I had the function by the same name defined elsewhere in the file where the output is the df. After changing that my original code is working as expected
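Putting the two answers together, a minimal corrected version of the class — the only functional changes are calling the method through self and ensuring no module-level function shadows the name (this is a sketch, not the OP's final code):

import scipy.signal

class Backtest:
    def __init__(self, ticker, df):
        self.ticker = ticker
        self.df = df
        # lookup goes through the instance, not whatever global
        # happens to share the name pivot_points
        self.levels = self.pivot_points(self.df)

    def pivot_points(self, df, period=30):
        highs = scipy.signal.argrelmax(df.High.values, order=period)
        lows = scipy.signal.argrelmin(df.Low.values, order=period)
        return list(df.High[highs[0]]) + list(df.Low[lows[0]])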
Class method called in __init__ not giving same output as the same function used outside the class
I'm sure I'm missing something in how classes work here, but basically this is my class: import pandas as pd import numpy as np import scipy #example DF with OHLC columns and 100 rows gold = pd.DataFrame({'Open':[i for i in range(100)],'Close':[i for i in range(100)],'High':[i for i in range(100)],'Low':[i for i in range(100)]}) class Backtest: def __init__(self, ticker, df): self.ticker = ticker self.df = df self.levels = pivot_points(self.df) def pivot_points(self,df,period=30): highs = scipy.signal.argrelmax(df.High.values,order=period) lows = scipy.signal.argrelmin(df.Low.values,order=period) return list(df.High[highs[0]]) + list(df.Low[lows[0]]) inst = Backtest('gold',gold) #gold is a Pandas Dataframe with Open High Low Close columns and data inst.levels # This give me the whole dataframe (inst.df) instead of the expected output of the pivot_point function (a list of integers) The problem is inst.levels returns the whole DataFrame instead of the return value of the function pivot_points (which is supposed to be a list of integers) When I called the pivot_points function on the same DataFrame outside this class I got the list I expected I expected to get the result of the pivot_points() function after assigning it to self.levels inside the init but instead I got the entire DataFrame
[ "You would have to address pivot_points() as self.pivot_points()\nAnd there is no need to add period as an argument if you are not changing it, if you are, its okay there.\nI'm not sure if this helps, but here are some tips about your class:\nclass Backtest:\n\n def __init__(self, ticker, df):\n self.ticker = ticker\n self.df = df\n\n # no need to define a instance variable here, you can access the method directly\n # self.levels = pivot_points(self.df)\n\n def pivot_points(self):\n period = 30\n # period is a local variable to pivot_points so I can access it directly\n print(f'period inside Backtest.pivot_points: {period}')\n # df is an instance variable and can be accessed in any method of Backtest after it is instantiated\n print(f'self.df inside Backtest.pivot_points(): {self.df}')\n # to get any values out of pivot_points we return some calcualtions\n return 1 + 1\n\n # if you do need an attribute like level to access it by inst.level you could create a property\n @property\n def level(self):\n return self.pivot_points()\n\n\ngold = 'some data'\ninst = Backtest('gold', gold) # gold is a Pandas Dataframe with Open High Low Close columns and data\nprint(f'inst.pivot_points() outside the class: {inst.pivot_points()}')\nprint(f'inst.level outside the class: {inst.level}')\n\nThis would be the result:\nperiod inside Backtest.pivot_points: 30\nself.df inside Backtest.pivot_points(): some data\ninst.pivot_points() outside the class: 2\nperiod inside Backtest.pivot_points: 30\nself.df inside Backtest.pivot_points(): some data\ninst.level outside the class: 2\n\n", "Thanks to the commenter Henry Ecker I found that I had the function by the same name defined elsewhere in the file where the output is the df. After changing that my original code is working as expected\n" ]
[ 1, 0 ]
[]
[]
[ "dataframe", "oop", "pandas", "python" ]
stackoverflow_0074504314_dataframe_oop_pandas_python.txt
Q: How to turn this into O(nlogn)? From a list of distinct numbers, I want to find the sum of the largest len(a)//3 numbers. Examples: if len(a) = 9, you need to find the sum of the largest 3 numbers. If len(a)=40, you need to find the sum of the largest 13 numbers. I was able to code it as such: def largestthree(a): max2 = 0 for i in range(len(a)//3): max1 = max(a) a.remove(max1) max2+= max1 return max2 The problem is I need to have it as O(nlog2n), which it currently is not. Could you modify it into O(nlog2n)? I don't want you to redo the code, just modify it to O(nlog2n). Thanks in advance :) UPDATE: def largestthird(a): max2 = 0 for i in range(len(a)): if len(a)>=3: for j in range(len(a)//3): max1 = max(a) a.remove(max1) max2+= max1 return max2 Would this be considered O(nlog2n)? Thanks, A: This one has about the same efficiency as sorted(list), which is n log(n). sum(sorted(a, reverse=True)[:len(a)//3]) A: Use a min heap, then if size > len(a)//3, pop. After iterating through all items, you are left with the biggest len(a)//3 numbers. Sum up said numbers. import heapq l = [100, 1,2,3,4 ,545 , 5434 , 34] minheap = [] heapq.heapify(minheap) for num in l: heapq.heappush(minheap, num) if len(minheap) > len(l) // 3: heapq.heappop(minheap) print(minheap) print(sum(minheap))
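For completeness, a one-liner sketch on top of the two answers: heapq.nlargest does the bounded-heap bookkeeping from the second answer internally and runs in O(n log k) with k = len(a)//3:

import heapq

def largestthird(a):
    return sum(heapq.nlargest(len(a) // 3, a))

print(largestthird([3, 1, 4, 1, 5, 9, 2, 6, 8]))  # top 3 of 9 values: 9 + 8 + 6 = 23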
How to turn this into O(nlogn)?
From a list of distinct numbers, I want to find the sum of the largest len(a)//3 numbers. Examples: if len(a) = 9, you need to find the sum of the largest 3 numbers. If len(a)=40, you need to find the sum of the largest 13 numbers. I was able to code it as such: def largestthree(a): max2 = 0 for i in range(len(a)//3): max1 = max(a) a.remove(max1) max2+= max1 return max2 The problem is I need to have it as O(nlog2n), which it currently is not. Could you modify it into O(nlog2n)? I don't want you to redo the code, just modify it to O(nlog2n). Thanks in advance :) UPDATE: def largestthird(a): max2 = 0 for i in range(len(a)): if len(a)>=3: for j in range(len(a)//3): max1 = max(a) a.remove(max1) max2+= max1 return max2 Would this be considered O(nlog2n)? Thanks,
[ "This one has about the same efficiency as sorted(list), which is n log(n).\nsum(sorted(a, reverse=True)[:len(a)//3])\n\n", "Use a min heap, then if size > len(a)//3, pop. After iterating through all items, you are left with the biggest len(a)//3 numbers. Sum up said numbers.\nimport heapq\nl = [100, 1,2,3,4 ,545 , 5434 , 34]\n\nminheap = []\nheapq.heapify(minheap)\n\nfor num in l:\n heapq.heappush(minheap, num)\n if len(minheap) > len(l) // 3:\n heapq.heappop(minheap)\n\nprint(minheap)\nprint(sum(minheap))\n\n" ]
[ 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074504345_python.txt
Q: Counting contiguous sawtooth subarrays Given an array of integers arr, your task is to count the number of contiguous subarrays that represent a sawtooth sequence of at least two elements. For arr = [9, 8, 7, 6, 5], the output should be countSawSubarrays(arr) = 4. Since all the elements are arranged in decreasing order, it won’t be possible to form any sawtooth subarray of length 3 or more. There are 4 possible subarrays containing two elements, so the answer is 4. For arr = [10, 10, 10], the output should be countSawSubarrays(arr) = 0. Since all of the elements are equal, none of subarrays can be sawtooth, so the answer is 0. For arr = [1, 2, 1, 2, 1], the output should be countSawSubarrays(arr) = 10. All contiguous subarrays containing at least two elements satisfy the condition of the problem. There are 10 possible contiguous subarrays containing at least two elements, so the answer is 10. What would be the best way to solve this question? I saw a possible solution here:https://medium.com/swlh/sawtooth-sequence-java-solution-460bd92c064 But this code fails for the case [1,2,1,3,4,-2] where the answer should be 9 but it comes as 12. I have even tried a brute force approach but I am not able to wrap my head around it. Any help would be appreciated! EDIT: Thanks to Vishal for the response, after a few tweaks, here is the updated solution in python. Time Complexity: O(n) Space Complexity: O(1) def samesign(a,b): if a/abs(a) == b/abs(b): return True else: return False def countSawSubarrays(arr): n = len(arr) if n<2: return 0 s = 0 e = 1 count = 0 while(e<n): sign = arr[e] - arr[s] while(e<n and arr[e] != arr[e-1] and samesign(arr[e] - arr[e-1], sign)): sign = -1*sign e+=1 size = e-s if (size==1): e+=1 count += (size*(size-1))//2 s = e-1 e = s+1 return count arr1 = [9,8,7,6,5] print(countSawSubarrays(arr1)) arr2 = [1,2,1,3,4,-2] print(countSawSubarrays(arr2)) arr3 = [1,2,1,2,1] print(countSawSubarrays(arr3)) arr4 = [10,10,10] print(countSawSubarrays(arr4)) Result: 4 9 10 0 A: Here's my solution using dynamic programming. This is a bit more readable to me than the accepted answer (or the added answer in the OP), although there's probably still room for improvement. O(n) time and O(1) space. 
def solution(arr): # holds the count of sawtooths at each index of our input array, # for sawtooth lengths up to that index saws = [0 for x in range(0, len(arr))] # the resulting total sawtooth counts totalSawCounts = 0 previousCount = 0 for currIdx in range(1, len(arr)): currCount = 0 before = currIdx -1 if (arr[currIdx] > arr[before]): goingUp = True elif (arr[currIdx] < arr[before]): goingUp = False else: break # if we made it here, we have at least one sawtooth currCount = 1 # see if there was a previous solution (the DP part) # and if it continues our current sawtooth if before >= 1: if goingUp: if arr[before-1] > arr[before]: currCount = previousCount + currCount else: if arr[before-1] < arr[before]: currCount = previousCount + currCount previousCount = currCount totalSawCounts = totalSawCounts + currCount return totalSawCounts Test cases: arr = [9,8,7,6,5] print(solution(arr)) # 4 arr2 = [1,2,1,3,4,-2] print(solution(arr2)) # 9 arr3 = [1,2,1,2,1] print(solution(arr3)) # 10 arr4 = [10,10,10] print(solution(arr4)) # 0 # from medium article comments arr5 = [-442024811,447425003,365210904,823944047,943356091,-781994958,872885721,-296856571,230380705,944396167,-636263320,-942060800,-116260950,-126531946,-838921202] print(solution(arr5)) # 31 A: This can be solved by just splitting the array into multiple sawtooth sequences..which is O(n) operation. For example [1,2,1,3,4,-2] can be splitted into two sequence [1,2,1,3] and [3,4,-2] and now we just have to do C(size,2) operation for both the parts. Here is psedo code explaining the idea ( does not have all corner cases handled ) public int countSeq(int[] arr) { int len = arr.length; if (len < 2) { return 0; } int s = 0; int e = 1; int sign = arr[e] - arr[s]; int count = 0; while (e < len) { while (e < len && arr[e] - arr[e-1] != 0 && isSameSign(arr[e] - arr[e-1], sign)) { sign = -1 * sign; e++; } // the biggest continue subsequence starting from s ends at e-1; int size = e - s; count = count + (size * (size - 1)/2); // basically doing C(size,2) s = e - 1; e = s + 1; } return count; } A: Below is a very simple and straightforward solution with a single for loop import math def comb(x): st = 0 total_comb = 0 if len(x) < 2: #edge case return 0 if len(x) == 2: #edge case return 2 seq_s = 0 for i in range(1, len(x)-1): if (x[i]<x[i-1] and x[i]<x[i+1]) or (x[i]>x[i-1] and x[i]>x[i+1]): continue else: print(x[seq_s:i+1]) if i+1-seq_s == 2 and x[i] == x[i-1]: #means we got two same nums like 10, 10 pass else: total_comb+=math.comb(i+1-seq_s,2) seq_s=i i+=1 print(x[seq_s:]) if i+1-seq_s == 2 and x[i] == x[i-1]: #means we got two same nums like 10, 10 pass else: total_comb+=math.comb(len(x)-seq_s,2) return total_comb x= [1,2,1,3,4,-2] print(comb(x)) A: I think this is a simple DP problem. The idea is to know the number of subarrays ending at i that can be extended from the previous alternating state (increasing/decreasing). If the current element is lower than the previous element it can contribute to the already increasing subarray ending at the previous state (i-1) or vice-versa. 
#include <bits/stdc++.h> using namespace std; void solve(vector<int> arr) { int n = arr.size(), ans = 0; // vector<vector<int>> dp(n, vector<int>(2, 0)); int inc = 0, dec = 0; for(int i = 1; i < n; i++) { if (arr[i] > arr[i-1]) { // dp[i][0] = dp[i-1][1] + 1; inc = dec + 1; dec = 0; } else if (arr[i] < arr[i-1]) { // dp[i][1] = dp[i-1][0] + 1; dec = inc + 1; inc = 0; } else { inc = 0, dec = 0; } // ans += dp[i][0] + dp[i][1]; ans += (inc + dec); } cout << ans << endl; } int main() { auto inp = {-442024811,447425003,365210904,823944047,943356091,-781994958,872885721,-296856571,230380705,944396167,-636263320,-942060800,-116260950,-126531946,-838921202}; solve(inp); return 0; } A: Used gradient method. Passes all test cases from math import comb def solution(arr): n=len(arr) if arr[1]!=arr[0]: l=2 else: l=0 pre=arr[1]-arr[0] ans=0 for i in range(2,n): cur=arr[i]-arr[i-1] if cur*pre<0: l+=1 else: if l==2: ans+=1 elif l==0: ans+=0 else: ans+=comb(l,2) if cur!=0: l=2 else: l=0 pre=cur if l==2: ans+=1 elif l==0: ans+=0 else: ans+=comb(l,2) return ans A: I think the trick here is to realise that: assuming you have a valid sawtooth sequence of length x, adding one additional valid element would increase the number of subsequences by x too. Example: [1,2,1] is a valid sawtooth sequence. adding 2 to this valid sequence of [1,2,1] forms [1,2,1,2]. We see here that adding a new element to a valid sequence of length 3 here adds 3 new valid subsequences which are: [1,2,1,2],[2,1,2], and [1,2]. Correspondingly, adding another valid element such as -1 to [1,2,1,2] would add 4 new subsequences which are: [1,2,1,2,-1], [2,1,2,-1],[1,2,-1],and [2,-1]. Thus, what we can use a moving window with left and right pointers l and r to keep track of the length of valid sequence, reseting the l pointer when an invalid sequence is detected. . def solution(arr: list) -> int: ''' for every char, check if still current sawtooth if still currently sawtooth, numberOfWays += length else reset temp counter ''' l, r = 0, 1 ways = 0 while r < len(arr): # check if current char + past 2 chars are sawtooth if r-l > 1 and (arr[r-2] < arr[r-1] > arr[r] or arr[r-2] > arr[r-1] < arr[r]): ways += r-l # check if current char + past 1 chars are sawtooth elif arr[r-1] != arr[r]: ways += 1 l = r-1 else: # reset left pointer l = r r += 1 return ways A: I did something extremely simple but it gave the correct answers for all the test cases you provided: function sawtooth(arr) { if (arr.length < 2) return 0; let previousLongest = 1; let result = 0; for (let i = 1; i < arr.length; i++) { if (i >= arr.length) break; if (arr[i - 1] === arr[i]) continue; if (arr[i - 1] > arr[i] && arr[i] < arr[i + 1] || arr[i - 1] < arr[i] && arr[i] > arr[i + 1]) { previousLongest += 1; } else { previousLongest = 1; } result += previousLongest; } return result; }
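Since the answers above disagree on edge cases, a small brute-force checker is handy for validating any of the O(n) solutions against random inputs — a sketch (roughly O(n^3), for testing only):

def is_sawtooth(sub):
    if len(sub) < 2:
        return False
    for i in range(1, len(sub)):
        if sub[i] == sub[i - 1]:
            return False
        # consecutive steps must alternate direction
        if i >= 2 and (sub[i] > sub[i - 1]) == (sub[i - 1] > sub[i - 2]):
            return False
    return True

def brute_force(arr):
    n = len(arr)
    return sum(is_sawtooth(arr[i:j]) for i in range(n) for j in range(i + 2, n + 1))

print(brute_force([9, 8, 7, 6, 5]))      # 4
print(brute_force([1, 2, 1, 3, 4, -2]))  # 9
print(brute_force([1, 2, 1, 2, 1]))      # 10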
Counting contiguous sawtooth subarrays
Given an array of integers arr, your task is to count the number of contiguous subarrays that represent a sawtooth sequence of at least two elements. For arr = [9, 8, 7, 6, 5], the output should be countSawSubarrays(arr) = 4. Since all the elements are arranged in decreasing order, it won’t be possible to form any sawtooth subarray of length 3 or more. There are 4 possible subarrays containing two elements, so the answer is 4. For arr = [10, 10, 10], the output should be countSawSubarrays(arr) = 0. Since all of the elements are equal, none of subarrays can be sawtooth, so the answer is 0. For arr = [1, 2, 1, 2, 1], the output should be countSawSubarrays(arr) = 10. All contiguous subarrays containing at least two elements satisfy the condition of the problem. There are 10 possible contiguous subarrays containing at least two elements, so the answer is 10. What would be the best way to solve this question? I saw a possible solution here:https://medium.com/swlh/sawtooth-sequence-java-solution-460bd92c064 But this code fails for the case [1,2,1,3,4,-2] where the answer should be 9 but it comes as 12. I have even tried a brute force approach but I am not able to wrap my head around it. Any help would be appreciated! EDIT: Thanks to Vishal for the response, after a few tweaks, here is the updated solution in python. Time Complexity: O(n) Space Complexity: O(1) def samesign(a,b): if a/abs(a) == b/abs(b): return True else: return False def countSawSubarrays(arr): n = len(arr) if n<2: return 0 s = 0 e = 1 count = 0 while(e<n): sign = arr[e] - arr[s] while(e<n and arr[e] != arr[e-1] and samesign(arr[e] - arr[e-1], sign)): sign = -1*sign e+=1 size = e-s if (size==1): e+=1 count += (size*(size-1))//2 s = e-1 e = s+1 return count arr1 = [9,8,7,6,5] print(countSawSubarrays(arr1)) arr2 = [1,2,1,3,4,-2] print(countSawSubarrays(arr2)) arr3 = [1,2,1,2,1] print(countSawSubarrays(arr3)) arr4 = [10,10,10] print(countSawSubarrays(arr4)) Result: 4 9 10 0
[ "Here's my solution using dynamic programming. This is a bit more readable to me than the accepted answer (or the added answer in the OP), although there's probably still room for improvement.\nO(n) time and O(1) space.\ndef solution(arr):\n # holds the count of sawtooths at each index of our input array,\n # for sawtooth lengths up to that index\n saws = [0 for x in range(0, len(arr))]\n # the resulting total sawtooth counts\n totalSawCounts = 0\n previousCount = 0\n\n for currIdx in range(1, len(arr)):\n currCount = 0\n before = currIdx -1\n if (arr[currIdx] > arr[before]):\n goingUp = True\n elif (arr[currIdx] < arr[before]):\n goingUp = False\n else:\n break\n\n # if we made it here, we have at least one sawtooth\n currCount = 1\n\n # see if there was a previous solution (the DP part)\n # and if it continues our current sawtooth\n if before >= 1:\n if goingUp:\n if arr[before-1] > arr[before]:\n currCount = previousCount + currCount\n else:\n if arr[before-1] < arr[before]:\n currCount = previousCount + currCount\n previousCount = currCount\n totalSawCounts = totalSawCounts + currCount\n\n return totalSawCounts\n\nTest cases:\narr = [9,8,7,6,5]\nprint(solution(arr)) # 4\n\narr2 = [1,2,1,3,4,-2]\nprint(solution(arr2)) # 9\n\narr3 = [1,2,1,2,1]\nprint(solution(arr3)) # 10\n\narr4 = [10,10,10]\nprint(solution(arr4)) # 0\n\n# from medium article comments\narr5 = [-442024811,447425003,365210904,823944047,943356091,-781994958,872885721,-296856571,230380705,944396167,-636263320,-942060800,-116260950,-126531946,-838921202]\nprint(solution(arr5)) # 31\n\n", "This can be solved by just splitting the array into multiple sawtooth sequences..which is O(n) operation. For example [1,2,1,3,4,-2] can be splitted into two sequence\n[1,2,1,3] and [3,4,-2] and now we just have to do C(size,2) operation for both the parts.\nHere is psedo code explaining the idea ( does not have all corner cases handled )\n public int countSeq(int[] arr) {\nint len = arr.length;\nif (len < 2) {\n return 0;\n}\n\nint s = 0;\nint e = 1;\nint sign = arr[e] - arr[s];\nint count = 0;\n\nwhile (e < len) {\n while (e < len && arr[e] - arr[e-1] != 0 && isSameSign(arr[e] - arr[e-1], sign)) {\n sign = -1 * sign;\n e++;\n }\n // the biggest continue subsequence starting from s ends at e-1;\n int size = e - s;\n count = count + (size * (size - 1)/2); // basically doing C(size,2)\n s = e - 1;\n e = s + 1;\n}\n\nreturn count;\n\n}\n", "Below is a very simple and straightforward solution with a single for loop\nimport math\n\ndef comb(x):\n st = 0\n total_comb = 0\n if len(x) < 2: #edge case\n return 0\n if len(x) == 2: #edge case\n return 2\n \n seq_s = 0\n for i in range(1, len(x)-1): \n if (x[i]<x[i-1] and x[i]<x[i+1]) or (x[i]>x[i-1] and x[i]>x[i+1]):\n continue\n else:\n print(x[seq_s:i+1])\n if i+1-seq_s == 2 and x[i] == x[i-1]: #means we got two same nums like 10, 10\n pass\n else: total_comb+=math.comb(i+1-seq_s,2)\n seq_s=i\n i+=1\n \n print(x[seq_s:])\n if i+1-seq_s == 2 and x[i] == x[i-1]: #means we got two same nums like 10, 10\n pass\n else: total_comb+=math.comb(len(x)-seq_s,2)\n return total_comb\n \n\nx= [1,2,1,3,4,-2]\nprint(comb(x))\n\n", "I think this is a simple DP problem. The idea is to know the number of subarrays ending at i that can be extended from the previous alternating state (increasing/decreasing). 
If the current element is lower than the previous element it can contribute to the already increasing subarray ending at the previous state (i-1) or vice-versa.\n#include <bits/stdc++.h>\nusing namespace std;\n\nvoid solve(vector<int> arr) {\n int n = arr.size(), ans = 0;\n // vector<vector<int>> dp(n, vector<int>(2, 0));\n int inc = 0, dec = 0;\n for(int i = 1; i < n; i++) {\n if (arr[i] > arr[i-1]) {\n // dp[i][0] = dp[i-1][1] + 1;\n inc = dec + 1;\n dec = 0;\n } else if (arr[i] < arr[i-1]) {\n // dp[i][1] = dp[i-1][0] + 1;\n dec = inc + 1;\n inc = 0;\n } else {\n inc = 0, dec = 0;\n }\n // ans += dp[i][0] + dp[i][1];\n ans += (inc + dec);\n }\n cout << ans << endl;\n}\n\nint main() {\n auto inp = {-442024811,447425003,365210904,823944047,943356091,-781994958,872885721,-296856571,230380705,944396167,-636263320,-942060800,-116260950,-126531946,-838921202};\n solve(inp);\n return 0;\n}\n\n", "Used gradient method.\nPasses all test cases\nfrom math import comb\ndef solution(arr):\n \n n=len(arr)\n \n if arr[1]!=arr[0]:\n l=2\n \n else:\n l=0\n pre=arr[1]-arr[0]\n ans=0\n \n for i in range(2,n):\n cur=arr[i]-arr[i-1]\n \n if cur*pre<0:\n l+=1\n else:\n if l==2:\n ans+=1\n elif l==0:\n ans+=0\n else:\n ans+=comb(l,2)\n if cur!=0:\n l=2\n else:\n l=0\n pre=cur\n if l==2:\n ans+=1\n elif l==0:\n ans+=0\n else:\n ans+=comb(l,2)\n return ans\n\n", "I think the trick here is to realise that: assuming you have a valid sawtooth sequence of length x, adding one additional valid element would increase the number of subsequences by x too.\nExample:\n\n[1,2,1] is a valid sawtooth sequence.\n\nadding 2 to this valid sequence of [1,2,1] forms [1,2,1,2]. We see here that adding a new element to a valid sequence of length 3 here adds 3 new valid subsequences which are: [1,2,1,2],[2,1,2], and [1,2].\n\nCorrespondingly, adding another valid element such as -1 to [1,2,1,2] would add 4 new subsequences which are: [1,2,1,2,-1], [2,1,2,-1],[1,2,-1],and [2,-1].\n\n\nThus, what we can use a moving window with left and right pointers l and r to keep track of the length of valid sequence, reseting the l pointer when an invalid sequence is detected.\n.\ndef solution(arr: list) -> int:\n '''\n for every char, check if still current sawtooth\n if still currently sawtooth, numberOfWays += length\n else reset temp counter\n '''\n l, r = 0, 1\n ways = 0\n while r < len(arr):\n\n # check if current char + past 2 chars are sawtooth\n if r-l > 1 and (arr[r-2] < arr[r-1] > arr[r] or\n arr[r-2] > arr[r-1] < arr[r]): \n ways += r-l\n\n # check if current char + past 1 chars are sawtooth\n elif arr[r-1] != arr[r]: \n ways += 1\n l = r-1\n\n else: \n # reset left pointer\n l = r\n\n r += 1\n return ways\n\n", "I did something extremely simple but it gave the correct answers for all the test cases you provided:\nfunction sawtooth(arr) {\n if (arr.length < 2) return 0;\n \n let previousLongest = 1;\n let result = 0;\n for (let i = 1; i < arr.length; i++) {\n if (i >= arr.length) break;\n if (arr[i - 1] === arr[i]) continue;\n if (arr[i - 1] > arr[i] && arr[i] < arr[i + 1] || arr[i - 1] < arr[i] && arr[i] > arr[i + 1]) {\n previousLongest += 1;\n } else {\n previousLongest = 1;\n }\n result += previousLongest;\n }\n return result;\n}\n\n" ]
[ 3, 2, 0, 0, 0, 0, 0 ]
[]
[]
[ "algorithm", "arrays", "dynamic_programming", "python" ]
stackoverflow_0069356332_algorithm_arrays_dynamic_programming_python.txt
Q: How to find a total year sales from a dictionary? I have this dictionary, and when I code for it, I only have the answer for June, May, September. How would I code for the months that are not given in the dictionary? Obviously, I have zero for them. {'account': 'Amazon', 'amount': 300, 'day': 3, 'month': 'June'} {'account': 'Facebook', 'amount': 550, 'day': 5, 'month': 'May'} {'account': 'Google', 'amount': -200, 'day': 21, 'month': 'June'} {'account': 'Amazon', 'amount': -300, 'day': 12, 'month': 'June'} {'account': 'Facebook', 'amount': 130, 'day': 7, 'month': 'September'} {'account': 'Google', 'amount': 250, 'day': 27, 'month': 'September'} {'account': 'Amazon', 'amount': 200, 'day': 5, 'month': 'May'} The method I used for months mentioned in the dictionary: year_balance=sum(d["amount"] for d in my_dict) print(f"The total year balance is {year_balance} $.") A: import calendar months = calendar.month_name[1:] results = dict(zip(months, [0]*len(months))) for d in data: results[d["month"]] += d["amount"] # then you have results dict with monthly amounts # sum everything to get yearly total total = sum(results.values()) A: This might help: from collections import defaultdict mydict = defaultdict(lambda: 0) print(mydict["January"]) Also, given the comments you have written, is this what you are looking for? your_list_of_dicts = [ {"January": 3, "March": 5}, {"January": 3, "April": 5} ] import calendar months = calendar.month_name[1:] month_totals = dict() for month in months: month_totals[month] = 0 for d in your_list_of_dicts: month_totals[month] += d[month] if month in d else 0 print(month_totals) {'January': 6, 'February': 0, 'March': 5, 'April': 5, 'May': 0, 'June': 0, 'July': 0, 'August': 0, 'September': 0, 'October': 0, 'November': 0, 'December': 0}
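A sketch of the same tally with collections.Counter, filling the missing months with zero afterwards — assuming the records sit in a list called transactions:

import calendar
from collections import Counter

monthly = Counter()
for d in transactions:
    monthly[d["month"]] += d["amount"]

by_month = {m: monthly.get(m, 0) for m in calendar.month_name[1:]}
year_balance = sum(by_month.values())
print(f"The total year balance is {year_balance} $.")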
How to find a total year sales from a dictionary?
I have this dictionary, and when I code for it, I only have the answer for June, May, September. How would I code for the months that are not given in the dictionary? Obviously, I have zero for them. {'account': 'Amazon', 'amount': 300, 'day': 3, 'month': 'June'} {'account': 'Facebook', 'amount': 550, 'day': 5, 'month': 'May'} {'account': 'Google', 'amount': -200, 'day': 21, 'month': 'June'} {'account': 'Amazon', 'amount': -300, 'day': 12, 'month': 'June'} {'account': 'Facebook', 'amount': 130, 'day': 7, 'month': 'September'} {'account': 'Google', 'amount': 250, 'day': 27, 'month': 'September'} {'account': 'Amazon', 'amount': 200, 'day': 5, 'month': 'May'} The method I used for months mentioned in the dictionary: year_balance=sum(d["amount"] for d in my_dict) print(f"The total year balance is {year_balance} $.")
[ "import calendar\n\nmonths = calendar.month_name[1:]\nresults = dict(zip(months, [0]*len(months)))\n\nfor d in data:\n results[d[\"month\"]] += d[\"amount\"]\n\n# then you have results dict with monthly amounts\n# sum everything to get yearly total\ntotal = sum(results.values())\n\n", "This might help:\nfrom collections import defaultdict\nmydict = defaultdict(lambda: 0)\nprint(mydict[\"January\"])\n\nAlso, given the comments you have written, is this what you are looking for?\nyour_list_of_dicts = [\n {\"January\": 3, \"March\": 5},\n {\"January\": 3, \"April\": 5}\n]\n\nimport calendar\nmonths = calendar.month_name[1:]\n\nmonth_totals = dict()\nfor month in months:\n month_totals[month] = 0\n for d in your_list_of_dicts:\n month_totals[month] += d[month] if month in d else 0\n\nprint(month_totals)\n\n\n{'January': 6, 'February': 0, 'March': 5, 'April': 5, 'May': 0, 'June': 0, 'July': 0, 'August': 0, 'September': 0, 'October': 0, 'November': 0, 'December': 0}\n\n" ]
[ 0, 0 ]
[ "You can read the following blog regarding the usage of dictionaries and how to perform calculations.\n5 best ways to sum dictionary values in python\nThis is on of the examples given in the blog.\nwages = {'01': 910.56, '02': 1298.68, '03': 1433.99, '04': 1050.14, '05': 877.67}\ntotal = sum(wages.values())\nprint('Total Wages: ${0:,.2f}'.format(total))\n\nHere is the result with 100,000 records.\nResult with 100,000 records\n" ]
[ -1 ]
[ "dictionary", "python", "sum" ]
stackoverflow_0074504560_dictionary_python_sum.txt
Q: itertools combination can only work for one copy when I use combinations from itertools, I find that I can only use it once, and afterwards I must repeat the line of code for it to work again. For example, from itertools import combinations comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] >[('PCA0', 'PCA1'), ('PCA0', 'PCA2'), ('PCA0', 'PCA3'), ('PCA0', 'PCA4'), ('PCA1', 'PCA2'), ('PCA1', 'PCA3'), ('PCA1', 'PCA4'), ('PCA2', 'PCA3'), ('PCA2', 'PCA4'), ('PCA3', 'PCA4')] Whereas if I do the following: comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] yxLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] print(yxLabels) > [] Printing the second list will only produce an empty list. However, to solve this I have to do the following: comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] comb = combinations( range( 0 , 5 ) , 2 ) yxLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] print(yxLabels) What is the reason behind it and how can I get it to work with only one comb? A: You need to define comb as a list instead of a generator - like this: comb = list(combinations( range( 0 , 5 ) , 2 )) That will then give you the result you expect. It will, however, increase your memory utilisation because you evaluate comb fully, instead of having it wait in the wings to hand you values on demand. Whether the memory/convenience tradeoff is worth it is your call.
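Besides materialising comb into a list, itertools.tee can duplicate the iterator when you would rather stay lazy — a sketch, assuming from the name yxLabels that the reversed (y, x) order was actually intended:

from itertools import combinations, tee

comb_a, comb_b = tee(combinations(range(5), 2))
xyLabels = [(f'PCA{x}', f'PCA{y}') for x, y in comb_a]
yxLabels = [(f'PCA{y}', f'PCA{x}') for x, y in comb_b]

Note that tee buffers internally, so if one copy is consumed completely before the other starts, the memory cost is about the same as building a list.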
itertools combination can only work for one copy
when I use combinations from itertools, I find that I can only use it once, and afterwards I must repeat the line of code for it to work again. For example, from itertools import combinations comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] >[('PCA0', 'PCA1'), ('PCA0', 'PCA2'), ('PCA0', 'PCA3'), ('PCA0', 'PCA4'), ('PCA1', 'PCA2'), ('PCA1', 'PCA3'), ('PCA1', 'PCA4'), ('PCA2', 'PCA3'), ('PCA2', 'PCA4'), ('PCA3', 'PCA4')] Whereas if I do the following: comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] yxLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] print(yxLabels) > [] Printing the second list will only produce an empty list. However, to solve this I have to do the following: comb = combinations( range( 0 , 5 ) , 2 ) xyLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] comb = combinations( range( 0 , 5 ) , 2 ) yxLabels = [ (f'PCA{x}', f'PCA{y}') for x , y in comb ] print(yxLabels) What is the reason behind it and how can I get it to work with only one comb?
[ "You need to define comb as a list instead of a generator - like this:\ncomb = list(combinations( range( 0 , 5 ) , 2 ))\n\nThat will then give you the result you expect. it will however increase your memory utilisation because you evaluate comb fully, instead of having it wait in the wings to hand you values on demand. Whether the memory/convenience tradeoff is worth it is your call.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074504575_python.txt
Q: Is there a way to install Anki Add ons programmatically? I would like to install Anki add-ons programmatically without resorting to the GUI, like "anki install 2055492159" or within python like: import anki anki.addons.install("2055492159") This way I would be able to use the CLI and create bash scripts to port my installation configurations between systems easily. I tried using the Python module anki with "pip install anki" but did not find anything related to Add-ons there. A: I don't know of a way to install add-ons programmatically, but the process is simple enough that you can include instructions. The Anki manual has instructions on how to install add-ons.
Is there a way to install Anki Add ons programmatically?
I would like to install Anki add-ons programmatically without resorting to the GUI, like "anki install 2055492159" or within python like: import anki anki.addons.install("2055492159") This way I would be able to use the CLI and create bash scripts to port my installation configurations between systems easily. I tried using the Python module anki with "pip install anki" but did not find anything related to Add-ons there.
[ "I don't know of a way to install add-ons programatically, but the process is simple enough that you can include instructions. The Anki manual has instructions on how to install add-ons.\n" ]
[ 0 ]
[]
[]
[ "anki", "python" ]
stackoverflow_0074504658_anki_python.txt
Q: Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip I'm getting an error Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? when trying to install lxml through pip. c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 I don't find any libxml2 dev packages to install via pip. Using Python 2.7 and Python 3.x on x86 in a virtualenv under Windows 10. A: I had this issue and realised that whilst I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. Installing them solved the problem: sudo apt-get install libxml2-dev libxslt1-dev sudo pip install lxml A: Install lxml from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml for your python version. It's a precompiled WHL with required modules/dependencies. The site lists several packages, when e.g. using Win32 Python 3.9, use lxml‑4.5.2‑cp39‑cp39‑win32.whl. Download the file, and then install with: pip install C:\path\to\downloaded\file\lxml‑4.5.2‑cp39‑cp39‑win32.whl A: Try to use: easy_install lxml That works for me, win10, python 2.7. A: On Mac OS X El Capitan I had to run these two commands to fix this error: xcode-select --install pip install lxml Which ended up installing lxml-3.5.0 When you run the xcode-select command you may have to sign a EULA (so have an X-Term handy for the UI if you're doing this on a headless machine). A: In case anyone else has the same issue as this on Centos, try: yum install python-lxml Ubuntu sudo apt-get install -y python-lxml worked for me. A: set STATICBUILD=true && pip install lxml run this command instead, must have VS C++ compiler installed first https://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/ It works for me with Python 3.5.2 and Windows 7 A: I tried install a lib that depends lxml and nothing works. I see a message when build was started: "Building without Cython", so after install cython with apt-get install cython, lxml was installed. A: For some reason it doesn't work in python 3.11, but 3.10 works. On windows, to install a module with a previous version, use py -3.10 -m pip install lxml if you want to install it in a venv, then use py -3.10 -m venv .venv .venv/Scripts/pip.exe install lxml if you've set up the venv, then you can just use pip install lxml You also need to run the python program with that version. If you set up a venv, then you don't need to do this. py -3.10 file.py A: It is not strange for me that none of the solutions above came up, but I saw how the igd installation removed the new version and installed the old one, for the solution I downloaded this archive:https://pypi.org/project/igd/#files and changed the recommended version of the new version: 'lxml==4.3.0' in setup.py It works! A: I had this issue and realized that while I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. 
1) Installing them solved the problem: The site to download the file: Download 2) After downloading the file, save it in an accessible folder, then: pip install *path to that file* A: I got the same error for 32-bit Python. After installing 64-bit Python, the problem was fixed.
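After any of the installs above succeed, a quick sanity check (a sketch) confirms the import works and which libxml2 the binding was built against:

from lxml import etree
print(etree.LXML_VERSION)
print(etree.LIBXML_VERSION, etree.LIBXML_COMPILED_VERSION)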
Getting "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?" when installing lxml through pip
I'm getting an error Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? when trying to install lxml through pip. c:\users\f\appdata\local\temp\xmlXPathInitqjzysz.c(1) : fatal error C1083: Cannot open include file: 'libxml/xpath.h': No such file or directory ********************************************************************************* Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed? ********************************************************************************* error: command 'C:\\Users\\f\\AppData\\Local\\Programs\\Common\\Microsoft\\Visual C++ for Python\\9.0\\VC\\Bin\\cl.exe' failed with exit status 2 I don't find any libxml2 dev packages to install via pip. Using Python 2.7 and Python 3.x on x86 in a virtualenv under Windows 10.
[ "I had this issue and realised that whilst I did have libxml2 installed, I didn't have the necessary development libraries required by the python package. Installing them solved the problem:\nsudo apt-get install libxml2-dev libxslt1-dev\nsudo pip install lxml\n\n", "Install lxml from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml for your python version. It's a precompiled WHL with required modules/dependencies.\nThe site lists several packages, when e.g. using Win32 Python 3.9, use lxml‑4.5.2‑cp39‑cp39‑win32.whl.\nDownload the file, and then install with:\npip install C:\\path\\to\\downloaded\\file\\lxml‑4.5.2‑cp39‑cp39‑win32.whl\n\n", "Try to use:\neasy_install lxml\nThat works for me, win10, python 2.7.\n", "On Mac OS X El Capitan I had to run these two commands to fix this error:\nxcode-select --install\npip install lxml\n\nWhich ended up installing lxml-3.5.0\nWhen you run the xcode-select command you may have to sign a EULA (so have an X-Term handy for the UI if you're doing this on a headless machine).\n", "In case anyone else has the same issue as this on \n\nCentos, try:\n\nyum install python-lxml\n\n\nUbuntu\n\nsudo apt-get install -y python-lxml\n\nworked for me.\n", "set STATICBUILD=true && pip install lxml\n\nrun this command instead, must have VS C++ compiler installed first\nhttps://blogs.msdn.microsoft.com/pythonengineering/2016/04/11/unable-to-find-vcvarsall-bat/\nIt works for me with Python 3.5.2 and Windows 7\n", "I tried install a lib that depends lxml and nothing works. I see a message when build was started: \"Building without Cython\", so after install cython with apt-get install cython, lxml was installed.\n", "For some reason it doesn't work in python 3.11, but 3.10 works.\nOn windows, to install a module with a previous version, use\npy -3.10 -m pip install lxml\n\nif you want to install it in a venv, then use\npy -3.10 -m venv .venv\n.venv/Scripts/pip.exe install lxml\n\nif you've set up the venv, then you can just use\npip install lxml\n\nYou also need to run the python program with that version. If you set up a venv, then you don't need to do this.\npy -3.10 file.py\n\n", "It is not strange for me that none of the solutions above came up, but I saw how the igd installation removed the new version and installed the old one, for the solution I downloaded this archive:https://pypi.org/project/igd/#files\nand changed the recommended version of the new version: 'lxml==4.3.0' in setup.py\nIt works!\n", "I had this issue and realized that while I did have libxml2 installed, I didn't have the necessary development libraries required by the python package.\n1) Installing them solved the problem:\nThe site to download the file: Download\n2) After Installing the file save it in a accessible folder\npip install *path to that file*\n\n", "I got the same error for python 32 bit. After install 64bit, the problem was fixed.\n" ]
[ 167, 157, 38, 24, 21, 10, 2, 2, 1, 1, 0 ]
[ "I am using venv.\nIn my case it was enough to add lxml==4.6.3 to requirements.txt.\nOne library wanted earlier version and this was causing this error, so when I forced pip to use newest version (currently 4.6.3) installation was successful.\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0033785755_python.txt
Q: Streamlit via Google Colab through LocalTunnel does not work anymore I am using LocalTunnel on Colab. It worked perfectly until yesterday. But it has stopped working since then. My code has this structure: ! pip install streamlit -q Then %%writefile app.py import streamlit as st st.write('# test') Finally !streamlit run /content/app.py & npx localtunnel --port 8501 I now get this output: Traceback (most recent call last): File "/usr/local/bin/streamlit", line 5, in <module> from streamlit.web.cli import main File "/usr/local/lib/python3.7/dist-packages/streamlit/__init__.py", line 55, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "/usr/local/lib/python3.7/dist-packages/streamlit/delta_generator.py", line 45, in <module> from streamlit.elements.arrow_altair import ArrowAltairMixin File "/usr/local/lib/python3.7/dist-packages/streamlit/elements/arrow_altair.py", line 42, in <module> from streamlit.elements.utils import last_index_for_melted_dataframes File "/usr/local/lib/python3.7/dist-packages/streamlit/elements/utils.py", line 82, in <module> ) -> LabelVisibilityMessage.LabelVisibilityOptions.ValueType: File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/enum_type_wrapper.py", line 115, in __getattr__ self._enum_type.name, name)) AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType' npx: installed 22 in 4.266s your url is: https://eight-ties-drop-34-80-129-36.loca.lt When I follow the link and click on the "continue" button, the page does not load and I get: "504 Gateway Time-out". I can see in the output that there is an AttributeError on Enum LabelVisibilityOptions. I created a new notebook with very simple code, as above, and I get the same error. Any idea where it comes from? And how to fix this? Thanks in advance for your input! A: I was getting the same error; I resolved it by going back to a previous version of Streamlit, like so: pip install streamlit==1.13.0 You can see in the changelog that with version 1.14.0 some changes were made regarding Enum classes.
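Putting the accepted fix together with the cells from the question, a working Colab sequence would look like the sketch below, with each labeled block run as its own cell (the "# Cell n" lines are annotations, not part of the cells, since a cell magic like %%writefile must be the first line of its cell). The only change from the question is the pinned Streamlit version taken from the answer; whether a later Streamlit release has since resolved the protobuf Enum incompatibility is not verified here.
# Cell 1: pin Streamlit to a release from before the 1.14.0 Enum changes
!pip install streamlit==1.13.0 -q

# Cell 2: write the app (unchanged from the question)
%%writefile app.py
import streamlit as st
st.write('# test')

# Cell 3: serve the app and tunnel Streamlit's default port
!streamlit run /content/app.py & npx localtunnel --port 8501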
Streamlit via Google Colab through LocalTunnel does not work anymore
I am using LocalTunnel on Colab. It worked perfectly until yesterday. But it has stopped working since then. My code has this structure: ! pip install streamlit -q Then %%writefile app.py import streamlit as st st.write('# test') Finally !streamlit run /content/app.py & npx localtunnel --port 8501 I now get this output: Traceback (most recent call last): File "/usr/local/bin/streamlit", line 5, in <module> from streamlit.web.cli import main File "/usr/local/lib/python3.7/dist-packages/streamlit/__init__.py", line 55, in <module> from streamlit.delta_generator import DeltaGenerator as _DeltaGenerator File "/usr/local/lib/python3.7/dist-packages/streamlit/delta_generator.py", line 45, in <module> from streamlit.elements.arrow_altair import ArrowAltairMixin File "/usr/local/lib/python3.7/dist-packages/streamlit/elements/arrow_altair.py", line 42, in <module> from streamlit.elements.utils import last_index_for_melted_dataframes File "/usr/local/lib/python3.7/dist-packages/streamlit/elements/utils.py", line 82, in <module> ) -> LabelVisibilityMessage.LabelVisibilityOptions.ValueType: File "/usr/local/lib/python3.7/dist-packages/google/protobuf/internal/enum_type_wrapper.py", line 115, in __getattr__ self._enum_type.name, name)) AttributeError: Enum LabelVisibilityOptions has no value defined for name 'ValueType' npx: installed 22 in 4.266s your url is: https://eight-ties-drop-34-80-129-36.loca.lt When I follow the link and click on the "continue" button, the page does not load and I get: "504 Gateway Time-out". I can see in the output that there is an AttributeError on Enum LabelVisibilityOptions. I created a new notebook with very simple code, as above, and I get the same error. Any idea where it comes from? And how to fix this? Thanks in advance for your input!
[ "I was getting the same error, I resolved it by going back to a previous version of Streamlit like so:\npip install streamlit==1.13.0\n\nYou can see in the changelog that with version 1.14.0 some changes were made regarding Enum classes.\n" ]
[ 1 ]
[]
[]
[ "google_colaboratory", "localtunnel", "python", "streamlit" ]
stackoverflow_0074500526_google_colaboratory_localtunnel_python_streamlit.txt
Q: Simulating orbit of planet around the sun with RK4 I am trying to simulate a planet going around the sun with the RK4 algorithm. This is the code that I tried: import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt def calcvec(x1,y1,x2,y2): r = np.array([0,0,0]) r[0]=x2-x1 r[1]=y2-y1 r[2]= (r[0]**2 + r[1]**2)**(3/2) return r def orbit(): dt = 0.001 sx = 0.0 sy = 0.0 t = np.arange(0,100,dt) rx = np.zeros(len(t)) ry = np.zeros(len(t)) vx = np.zeros(len(t)) vy = np.zeros(len(t)) rx[0]=15.0 ry[0]=0.0 vx[0]=1.0 vy[0]=1.0 ms = 1 for i in range(0,len(t)-1): k1x = vx[i] r = calcvec(rx[i],ry[i],sx,sy) k1vx = - (ms*r[0]/r[2]) k2x = vx[i] + (dt/2)*k1vx r = calcvec((rx[i]+(dt/2)*k1x),ry[i],sx,sy) k2vx = -(ms*r[0]/r[2]) k3x = vx[i] + (dt/2)*k2vx r = calcvec((rx[i]+(dt/2)*k2x),ry[i],sx,sy) k3vx = -(ms*r[0]/r[2]) k4x = vx[i] + dt*k3vx r = calcvec((rx[i]+(dt)*k3x),ry[i],sx,sy) k4vx = -(ms*r[0]/r[2]) rx[i+1] = rx[i] + (dt/6)*(k1x + 2*k2x + 2*k3x + k4x) vx[i+1] = vx[i] + (dt/6)*(k1vx + 2*k2vx + 2*k3vx + k4vx) print(str(k1vx) + ", " +str(k2vx) + ", " +str(k3vx) + ", " +str(k4vx)) k1y = vy[i] r = calcvec(rx[i],ry[i],sx,sy) k1vy = - (ms*r[1]/r[2]) k2y = vy[i] + (dt/2)*k1vy r = calcvec(rx[i],(ry[i]+(dt/2)*k1y),sx,sy) k2vy = -(ms*r[1]/r[2]) k3y = vy[i] + (dt/2)*k2vy r = calcvec(rx[i],(ry[i]+(dt/2)*k2y),sx,sy) k3vy = -(ms*r[1]/r[2]) k4y = vy[i] + dt*k3vy r = calcvec(rx[i],(ry[i]+(dt)*k3y),sx,sy) k4vy = -(ms*r[1]/r[2]) ry[i+1] = ry[i] + (dt/6)*(k1y + 2*k2y + 2*k3y + k4y) vy[i+1] = vy[i] + (dt/6)*(k1vy + 2*k2vy + 2*k3vy + k4vy) fig, ax = plt.subplots() ax.plot(rx,ry, label='x(t)') ax.scatter(sx,sy) plt.title("orbit") plt.xticks(fontsize=10) plt.grid(color='black', linestyle='-', linewidth=0.5) plt.xlabel(r'x', fontsize=15) plt.ylabel(r'y', fontsize=15) plt.savefig("testtwobody.pdf") plt.show() if __name__=="__main__": orbit() When running this code I receive an "orbit" like this, which is obviously wrong, because I would expect an elliptical orbit around the sun. Therefore, there must be a grave error or some sort of misunderstanding on my part. Thanks for your help in advance! Yours sincerely, chwu A: First of all, good evening. OK! First, the star is fixed at the origin of the Cartesian coordinate system and the planet describes a flat orbit around the star due to the mutual interaction of the two. The equations of motion are obtained by applying Newton's laws of dynamics in conjunction with the Newtonian theory of gravitation. After the physical analysis of the problem and calculations, we have an initial value problem. Note: the dynamic equations of motion used in the code were taken from Computational Physics by Nicholas J. Giordano and Hisao Nakanishi, second edition, Chapter 4, page 94. As we are using the Python language, we can use methods from the scipy package to integrate the system of ordinary differential equations. The initial conditions are the same as what you provided in your code. # Author : Carlos Eduardo da Silva Lima # Theme : Movement of a Planet around a fixed star # Language : Python # Date : 11/19/2022 # Environment : Google Colab # Bibliography: Computational Physics, Nicholas J. Giordano and Hisao Nakanishi, Chapter 4, page 94, second edition. 
import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint, solve_ivp # Initial conditions of the planet x0 = 15.0 # x component of the initial position y0 = 0.0 # y component of the initial position vx0 = 1.0 # vx component of the initial velocity vy0 = 1.0 # vy component of the initial velocity t_initial = 0.0 t_final = 55.0 N = 1000 # Dynamic equations of motion of the planet around the Sun # for odeint def edo(r,t): x,y,vx,vy = r r3 = np.power((x**2+y**2),3/2) return np.array([vx,vy,-4*np.power(np.pi,2)*(x/r3),-4*np.power(np.pi,2)*(y/r3)]) # for solve_ivp def edo_(t,r): x,y,vx,vy = r r3 = np.power((x**2+y**2),3/2) return np.array([vx,vy,-4*np.power(np.pi,2)*(x/r3),-4*np.power(np.pi,2)*(y/r3)]) # Integration of dynamic equations using odeint t = np.linspace(t_initial,t_final,N) r0 = np.array([x0,y0,vx0,vy0]) sol_0 = odeint(edo,r0,t) #sol_1 = solve_ivp(edo_, t_span = [t_initial,t_final], y0 = r0, method='DOP853') # New variables x = sol_0[:,0] y = sol_0[:,1] vx = sol_0[:,2] vy = sol_0[:,3] # Plot plt.style.use('dark_background') ax = plt.figure(figsize = (10,10)).add_subplot(projection='3d') ax.plot(x,y,0,'bo',0,0,0,'yo',lw=0.5) ax.set_xlabel("X", color = 'white') ax.set_ylabel("Y", color = 'white') ax.set_zlabel("Z", color = 'white') ax.set_title("star and planet") plt.show() Plot: star and planet Hope I helped, see you :).
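For completeness, the hand-written RK4 in the question fails for two concrete reasons that the scipy rewrite above sidesteps. First, calcvec builds r with np.array([0,0,0]), which is an integer array, so the float components assigned into it are silently truncated. Second, the x and y equations are stepped independently even though the acceleration couples them through r. A minimal corrected sketch, keeping the question's units (ms = 1) but treating (x, y, vx, vy) as one state vector, could look like this; the initial speed below is an assumed value chosen to stay below escape speed, since the question's v0 = (1, 1) at r = 15 exceeds it and gives an open trajectory:
import numpy as np

def deriv(state, ms=1.0):
    # state = [x, y, vx, vy]; returns d(state)/dt for a sun fixed at the origin
    x, y, vx, vy = state
    r3 = (x**2 + y**2)**1.5      # |r|**3, computed in float arithmetic
    return np.array([vx, vy, -ms*x/r3, -ms*y/r3])

def rk4_step(state, dt):
    # one classical RK4 step on the full coupled state vector
    k1 = deriv(state)
    k2 = deriv(state + 0.5*dt*k1)
    k3 = deriv(state + 0.5*dt*k2)
    k4 = deriv(state + dt*k3)
    return state + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)

state = np.array([15.0, 0.0, 0.0, 0.2])  # assumed bound initial conditions
xs, ys = [state[0]], [state[1]]
for _ in range(250_000):                 # dt = 0.001 over t = 0..250, longer than one period here
    state = rk4_step(state, 0.001)
    xs.append(state[0])
    ys.append(state[1])
# xs, ys now trace a closed ellipse and can be plotted as in the question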
Simulating orbit of planet around the sun with RK4
I am trying to simulate a planet going around the sun with the RK4 algorithm. This is the code that I tried: import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt def calcvec(x1,y1,x2,y2): r = np.array([0,0,0]) r[0]=x2-x1 r[1]=y2-y1 r[2]= (r[0]**2 + r[1]**2)**(3/2) return r def orbit(): dt = 0.001 sx = 0.0 sy = 0.0 t = np.arange(0,100,dt) rx = np.zeros(len(t)) ry = np.zeros(len(t)) vx = np.zeros(len(t)) vy = np.zeros(len(t)) rx[0]=15.0 ry[0]=0.0 vx[0]=1.0 vy[0]=1.0 ms = 1 for i in range(0,len(t)-1): k1x = vx[i] r = calcvec(rx[i],ry[i],sx,sy) k1vx = - (ms*r[0]/r[2]) k2x = vx[i] + (dt/2)*k1vx r = calcvec((rx[i]+(dt/2)*k1x),ry[i],sx,sy) k2vx = -(ms*r[0]/r[2]) k3x = vx[i] + (dt/2)*k2vx r = calcvec((rx[i]+(dt/2)*k2x),ry[i],sx,sy) k3vx = -(ms*r[0]/r[2]) k4x = vx[i] + dt*k3vx r = calcvec((rx[i]+(dt)*k3x),ry[i],sx,sy) k4vx = -(ms*r[0]/r[2]) rx[i+1] = rx[i] + (dt/6)*(k1x + 2*k2x + 2*k3x + k4x) vx[i+1] = vx[i] + (dt/6)*(k1vx + 2*k2vx + 2*k3vx + k4vx) print(str(k1vx) + ", " +str(k2vx) + ", " +str(k3vx) + ", " +str(k4vx)) k1y = vy[i] r = calcvec(rx[i],ry[i],sx,sy) k1vy = - (ms*r[1]/r[2]) k2y = vy[i] + (dt/2)*k1vy r = calcvec(rx[i],(ry[i]+(dt/2)*k1y),sx,sy) k2vy = -(ms*r[1]/r[2]) k3y = vy[i] + (dt/2)*k2vy r = calcvec(rx[i],(ry[i]+(dt/2)*k2y),sx,sy) k3vy = -(ms*r[1]/r[2]) k4y = vy[i] + dt*k3vy r = calcvec(rx[i],(ry[i]+(dt)*k3y),sx,sy) k4vy = -(ms*r[1]/r[2]) ry[i+1] = ry[i] + (dt/6)*(k1y + 2*k2y + 2*k3y + k4y) vy[i+1] = vy[i] + (dt/6)*(k1vy + 2*k2vy + 2*k3vy + k4vy) fig, ax = plt.subplots() ax.plot(rx,ry, label='x(t)') ax.scatter(sx,sy) plt.title("orbit") plt.xticks(fontsize=10) plt.grid(color='black', linestyle='-', linewidth=0.5) plt.xlabel(r'x', fontsize=15) plt.ylabel(r'y', fontsize=15) plt.savefig("testtwobody.pdf") plt.show() if __name__=="__main__": orbit() When running this code I receive an "orbit" like this, which is obviously wrong, because I would expect an elliptical orbit around the sun. Therefore, there must be a grave error or some sort of misunderstanding on my part. Thanks for your help in advance! Yours sincerely, chwu
[ "First good night. OK! first the star is fixed at the origin of the Cartesian coordinate system and the planet describes a flat orbit around the star due to the mutual iteration of the two. The equations of motion are obtained by applying Newton's laws of dynamics in conjunction with the Newtonian theory of gravitation. After the physical analysis of the problem and calculations, we have an initial value problem. Observation! The dynamic equations of motion used in the code were taken from computational Physics Nicholas J. Giordano and Hisao Nakanishi Chapter 4 page 94 second edition. As we are using the python language, we can use methods from the scipy package to integrate the system of ordinary differential equations. The initial conditions are the same as what you provided in your code.\n# Author : Carlos Eduardo da Silva Lima\n# Theme : Movement of a Plant around a fixed star\n# Language : Python\n# date : 11/19/2022\n# Environment : Google Colab\n# Bibliography: computational Physics Nicholas J. Giordano and Hisao Nakanishi Chapter 4 page 94 second edition.\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint, solve_ivp\n\n# Initial conditions of the planet\nx0 = 15.0 # componente da posição inicial - x\ny0 = 0.0 # componente da posição inicial - y\nvx0 = 1.0 # componente da velôcidade inicial - vx\nvy0 = 1.0 # componente da velôcidade inicial - vy\nt_initial = 0.0\nt_final = 55.0\nN = 1000\n\n# Dynamic equations of motion of the planet around the Sun\n# for odeint\ndef edo(r,t):\n x,y,vx,vy = r\n r3 = np.power((x**2+y**2),3/2)\n return np.array([vx,vy,-4*np.power(np.pi,2)*(x/r3),-4*np.power(np.pi,2)*(y/r3)])\n\n# for solve_ivp\ndef edo_(t,r):\n x,y,vx,vy = r\n r3 = np.power((x**2+y**2),3/2)\n return np.array([vx,vy,-4*np.power(np.pi,2)*(x/r3),-4*np.power(np.pi,2)*(y/r3)])\n\n# Integration of dynamic equations using odeint\nt = np.linspace(t_initial,t_final,N)\nr0 = np.array([x0,y0,vx0,vy0])\nsol_0 = odeint(edo,r0,t)\n#sol_1 = solve_ivp(edo_, t_span = [t_initial,t_final], y0 = r0, method='DOP853')\n\n# New variables\nx = sol_0[:,0]\ny = sol_0[:,1]\nvx = sol_0[:,2]\nvy = sol_0[:,3]\n\n# Plot\nplt.style.use('dark_background')\nax = plt.figure(figsize = (10,10)).add_subplot(projection='3d')\nax.plot(x,y,0,'bo',0,0,0,'yo',lw=0.5)\nax.set_xlabel(\"X\", color = 'white')\nax.set_ylabel(\"Y\", color = 'white')\nax.set_zlabel(\"Z\", color = 'white')\nax.set_title(\"star and planet\")\nplt.show()\n\nPlot: star and planet\nHope I helped, see you :).\n" ]
[ 0 ]
[]
[]
[ "orbital_mechanics", "python", "runge_kutta" ]
stackoverflow_0074302139_orbital_mechanics_python_runge_kutta.txt