Q: How can I count the keys in a list of dictionaries? | python

For example, there is a list:

my_list = [{'brand': 'Ford', 'Model': 'Mustang', 'year': 1964}, {'brand': 'Nissan', 'model': 'Skyline', 'year': 1969}, ...]

I want to count how many of each model there are. How can I do it? (By the way, sorry for the bad formatting, I am new here.) I tried this method:

model_count = {}
for i in my_list:
    if i['Model'] in model_count:
        model_count[i] += 1
    else:
        model_count[i] = 1

And I got this error:

TypeError: unhashable type: 'dict'

A: Assuming the key "Model" appears in every dictionary capitalized like this, you can use the code below. (The error in your attempt comes from using the whole dictionary i as the counter key instead of the model string i['Model'].)

my_list = [{'brand': 'Ford', 'Model': 'Mustang', 'year': 1964}, {'brand': 'Nissan', 'Model': 'Skyline', 'year': 1969}]

models = {}
for dictionary in my_list:
    model = dictionary.get('Model', None)
    if model is not None:
        models[model] = models.get(model, 0) + 1

print(models)

Output:
{'Mustang': 1, 'Skyline': 1}

A: The simplest way would be with collections.Counter:

from collections import Counter
models = Counter(i['Model'] for i in my_list)
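Note that the question's sample data mixes 'Model' and 'model' as keys. If that inconsistency is real rather than a typo, one option is to normalize the keys before counting; a minimal sketch:

from collections import Counter

# lower-case every key so 'Model' and 'model' count as the same field
normalized = [{k.lower(): v for k, v in d.items()} for d in my_list]
models = Counter(d['model'] for d in normalized if 'model' in d)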
Q: Is it possible to use pandas.DataFrame.rolling with a step greater than 1?

In R you can compute a rolling mean with a specified window that shifts by a specified amount each time. Maybe I just haven't found it, but it doesn't seem like you can do this in pandas or another Python library. Does anyone know of a way around this? I'll give you an example of what I mean: here we have bi-weekly data, and I am computing the two-month moving average that shifts by one month, which is 2 rows. So in R I would do something like:

two_month_movavg = rollapply(mydata, 4, mean, by = 2, na.pad = FALSE)

Is there no equivalent in Python?

EDIT1:

                  DATE    A DEMAND  ...   AA DEMAND  A Price
0  2006/01/01 00:30:00  8013.27833  ...  5657.67500    20.03
1  2006/01/01 01:00:00  7726.89167  ...  5460.39500    18.66
2  2006/01/01 01:30:00  7372.85833  ...  5766.02500    20.38
3  2006/01/01 02:00:00  7071.83333  ...  5503.25167    18.59
4  2006/01/01 02:30:00  6865.44000  ...  5214.01500    17.53

A: I know it is a long time since the question was asked, but I bumped into this same problem, and when dealing with long time series you really want to avoid the unnecessary calculation of the values you are not interested in. Since the pandas rolling method does not implement a step argument, I wrote a workaround using numpy. It is basically a combination of the solution in this link and the indexing proposed by BENY.

import numpy as np


def apply_rolling_data(data, col, function, window, step=1, labels=None):
    """Perform a rolling window analysis at the column `col` from `data`

    Given a dataframe `data` with time series, call `function` at
    sections of length `window` at the data of column `col`. Append
    the results to `data` as new columns named by `labels`.

    Parameters
    ----------
    data : DataFrame
        Data to be analyzed; the dataframe must store time series
        columnwise, i.e., each column represents a time series and
        each row a time index
    col : str
        Name of the column from `data` to be analyzed
    function : callable
        Function to be called to calculate the rolling window
        analysis; the function must receive as input an array or
        pandas series. Its output must be either a number or a pandas
        series
    window : int
        length of the window to perform the analysis
    step : int
        step to take between two consecutive windows
    labels : str
        Name of the column for the output; if None it defaults to
        'MEASURE'. It is only used if `function` outputs a number; if
        it outputs a Series then each index of the series is going to
        be used as the names of their respective columns in the output

    Returns
    -------
    data : DataFrame
        Input dataframe with added columns with the result of the
        analysis performed

    """

    x = _strided_app(data[col].to_numpy(), window, step)
    rolled = np.apply_along_axis(function, 1, x)

    if labels is None:
        labels = [f"metric_{i}" for i in range(rolled.shape[1])]

    for col in labels:
        data[col] = np.nan

    data.loc[
        data.index[
            [False]*(window-1)
            + list(np.arange(len(data) - (window-1)) % step == 0)],
        labels] = rolled

    return data


def _strided_app(a, L, S):  # Window len = L, Stride len/stepsize = S
    """returns an array that is strided
    """
    nrows = ((a.size-L)//S)+1
    n = a.strides[0]
    return np.lib.stride_tricks.as_strided(
        a, shape=(nrows, L), strides=(S*n, n))

A: You can use rolling again; you just need a little work with how you assign the index. Here by = 2:

by = 2

df.loc[df.index[np.arange(len(df)) % by == 1], 'New'] = df.Price.rolling(window=4).mean()
df
    Price    New
0      63    NaN
1      92    NaN
2      92    NaN
3       5  63.00
4      90    NaN
5       3  47.50
6      81    NaN
7      98  68.00
8     100    NaN
9      58  84.25
10     38    NaN
11     15  52.75
12     75    NaN
13     19  36.75

A: If the data size is not too large, here is an easy way:

by = 2
win = 4
start = 3  # the index of your first valid value

df.rolling(win).mean()[start::by]  # calculate all, choose what you need

A: Now this is a bit of overkill for a 1D array of data, but you can simplify it and pull out what you need. Since pandas can rely on numpy, you might want to check to see how their rolling/strided function is implemented.

Results for 20 sequential numbers, a 7-element window, striding/sliding by 2:

z = np.arange(20)
z  # array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
s = stride(z, (7,), (2,))

np.mean(s, axis=1)  # array([ 3.,  5.,  7.,  9., 11., 13., 15.])

Here is the code I use, without the major portion of the documentation. It is derived from many implementations of strided functions in numpy that can be found on this site. There are variants and incarnations; this is just another.

def stride(a, win=(3, 3), stepby=(1, 1)):
    """Provide a 2D sliding/moving view of an array.
    There is no edge correction for outputs. Use the `pad_` function first."""
    err = """Array shape, window and/or step size error.
    Use win=(3,) with stepby=(1,) for 1D array
    or win=(3,3) with stepby=(1,1) for 2D array
    or win=(1,3,3) with stepby=(1,1,1) for 3D
    ---- a.ndim != len(win) != len(stepby) ----
    """
    from numpy.lib.stride_tricks import as_strided
    a_ndim = a.ndim
    if isinstance(win, int):
        win = (win,) * a_ndim
    if isinstance(stepby, int):
        stepby = (stepby,) * a_ndim
    assert (a_ndim == len(win)) and (len(win) == len(stepby)), err
    shp = np.array(a.shape)    # array shape (r, c) or (d, r, c)
    win_shp = np.array(win)    # window      (3, 3) or (1, 3, 3)
    ss = np.array(stepby)      # step by     (1, 1) or (1, 1, 1)
    newshape = tuple(((shp - win_shp) // ss) + 1) + tuple(win_shp)
    newstrides = tuple(np.array(a.strides) * ss) + a.strides
    a_s = as_strided(a, shape=newshape, strides=newstrides, subok=True).squeeze()
    return a_s

I failed to point out that you can create an output that you could append as a column into pandas. Going back to the original definitions used above:

nans = np.full_like(z, np.nan, dtype='float')  # z is the 20 number sequence
means = np.mean(s, axis=1)                     # results from the strided mean
# assign the means to the output array, skipping the first and last 3 and striding by 2
nans[3:-3:2] = means

nans  # array([nan, nan, nan,  3., nan,  5., nan,  7., nan,  9., nan, 11., nan, 13., nan, 15., nan, nan, nan, nan])

A: Use Pandas.asfreq() after rolling.

A: Since pandas 1.5.0, there is a step parameter to rolling() that should do the trick. See: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html
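A minimal sketch of that last option, assuming pandas 1.5 or newer; it reproduces a window of 4 rows advancing by 2:

import pandas as pd

s = pd.Series(range(14))
# window of 4 rows, advancing 2 rows between windows; incomplete windows yield NaN
print(s.rolling(window=4, step=2).mean())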
Q: If Python doesn't find a certain value inside JSON, append something else to the list

I'm making a script with Python to search for competitors with a Google API. Just so you can see how it works, first I make a request and save the data as JSON:

# make the http GET request to Scale SERP
api_result = requests.get('https://api.scaleserp.com/search', params)

# Save data inside Json
dados = api_result.json()

Then I create some lists to hold position, title, domain and things like that, and a for loop to append the information about my competitors into those lists:

# Create the lists
sPositions = []
sDomains = []
sUrls = []
sTitles = []
sDescription = []
sType = []

# Create a for loop to look for information about competitors
for sCompetitors in dados['organic_results']:
    sPositions.append(sCompetitors['position'])
    sDomains.append(sCompetitors['domain'])
    sUrls.append(sCompetitors['link'])
    sTitles.append(sCompetitors['title'])
    sDescription.append(sCompetitors['snippet'])
    sType.append(sCompetitors['type'])

The problem is that not every bracket of my JSON is going to have the same values. Some of them won't have the "domain" value. So I need something like "when there is no 'domain' value, append 'no domain' to the sDomains list". I'd be glad if anyone could help. Thanks!!

A: You should use the get method for dicts so you can set a default value in case the key doesn't exist:

for sCompetitors in dados['organic_results']:
    sPositions.append(sCompetitors.get('position', 'no position'))
    sDomains.append(sCompetitors.get('domain', 'no domain'))
    sUrls.append(sCompetitors.get('link', 'no link'))
    sTitles.append(sCompetitors.get('title', 'no title'))
    sDescription.append(sCompetitors.get('snippet', 'no snippet'))
    sType.append(sCompetitors.get('type', 'no type'))
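As a quick illustration of how the default argument behaves (the row dict here is made up for the example):

row = {'position': 1, 'link': 'https://example.com'}
print(row.get('domain', 'no domain'))      # no domain   (key absent, default returned)
print(row.get('position', 'no position'))  # 1           (key present, real value returned)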
Q: MSEdgeDriver - "session not created: No matching capabilities found" error on Selenium with Python

Having some trouble getting our automation to run on Microsoft Edge. I have the correct browser-version driver installed and have tried a few other 'fixes' to no avail. This is using Selenium with Python 3 on PyCharm. Going back to the beginning, this is my code...

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions
from selenium.webdriver.edge.options import Options
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.keys import Keys

options = Options()
driver = webdriver.Edge(executable_path='/Users/james.stott/PycharmProjects/venv/Selenium/Remote/msedgedriver')

And the following is the error raised...

selenium.common.exceptions.SessionNotCreatedException: Message: session not created: No matching capabilities found

Any help at all would be greatly appreciated.

A: I guess you're using Edge Chromium. You can refer to the steps below to automate the Edge browser using Selenium Python code:

Download and install Python from this link.

Launch the command prompt as an Administrator.

Run the command below to install the Edge Selenium tools:

pip install msedge-selenium-tools selenium==3.141

Install the correct version of the Edge web driver from this link. (The WebDriver version should be the same as the Edge browser version.)

Create a Python file using the code below and modify it as per your own requirements.

from msedge.selenium_tools import Edge, EdgeOptions

options = EdgeOptions()
options.use_chromium = True
options.binary_location = r"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe"
driver = Edge(executable_path = r"D:\selenium web drivers\edge driver\msedgedriver.exe", options = options) # Modify the path here...
driver.get("https://example.com")

Update: You need to send capabilities if you're using macOS. You can try to send an empty capability:

desired_cap = {}

driver = webdriver.Edge(executable_path='/Users/james.stott/PycharmProjects/venv/Selenium/Remote/msedgedriver', capabilities=desired_cap)

A: For Linux users using executable_path with EdgeChromiumDriverManager or any given path, use the following snippet:

from selenium import webdriver
from webdriver_manager.microsoft import EdgeChromiumDriverManager

driver = webdriver.Edge(executable_path=EdgeChromiumDriverManager().install(), capabilities={"platform": "LINUX"})
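Note that in Selenium 4 the executable_path argument shown above is deprecated in favor of a Service object. A minimal sketch of the modern form (the driver path is reused from the question and should be adjusted to your own install):

from selenium import webdriver
from selenium.webdriver.edge.service import Service
from selenium.webdriver.edge.options import Options

options = Options()
service = Service('/Users/james.stott/PycharmProjects/venv/Selenium/Remote/msedgedriver')
driver = webdriver.Edge(service=service, options=options)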
Q: Understanding NumPy split function to extract a sub-grid

So I was given a worksheet exercise as follows: given the following grid of 25 values, extract the central 3 x 3 sub-grid of 9s from the larger grid using the split() function:

1 2 3 4 5
1 9 9 9 5
1 9 9 9 5
1 9 9 9 5
1 2 3 4 5

And the solution is as follows:

x = np.array([[1, 2, 3, 4, 5],
              [1, 9, 9, 9, 5],
              [1, 9, 9, 9, 5],
              [1, 9, 9, 9, 5],
              [1, 2, 3, 4, 5]])

x1, x2, x3 = np.split(x, [1, 4])
y1, y2, y3 = np.split(x2, [1, 4], axis=1)
print(y2)

My question is: why is it [1, 4] in the brackets? Does this refer to the element number, and if so, should it not be [1, 3]? Sorry if this seems like a very simple question - am still super new to coding!! Thanks in advance :)

A:

In [755]: x = np.array([[1, 2, 3, 4, 5],
     ...:               [1, 9, 9, 9, 5],
     ...:               [1, 9, 9, 9, 5],
     ...:               [1, 9, 9, 9, 5],
     ...:               [1, 2, 3, 4, 5]])

If all you need is one block, just slice directly:

In [756]: x[1:4, 1:4]
Out[756]:
array([[9, 9, 9],
       [9, 9, 9],
       [9, 9, 9]])
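To answer the "why [1, 4]" part directly: np.split(x, [1, 4]) cuts before index 1 and before index 4, producing the sections x[:1], x[1:4] and x[4:]. As with Python slices, the stop index is exclusive, which is why the middle section covering rows 1, 2 and 3 needs a 4, not a 3. For example:

import numpy as np

x = np.arange(5)
a, b, c = np.split(x, [1, 4])
print(a, b, c)  # [0] [1 2 3] [4]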
Q: Is it possible to use Tweepy to gather a random sample of all Tweets on Twitter for a given time interval?

I'm currently using Tweepy (academic access) to obtain Tweets over a given time interval. I am using a general query, and I only want 100,000 Tweets. The Twitter API gives back the most recent 100,000 Tweets for the given time interval. Instead, I want 100,000 random Tweets from the given time interval. Here is what I have tried with my current code in order to get the 100,000 most recent Tweets. Is this possible? If so, how should the code be modified?

# Imports
import tweepy
import json
import csv

# Store bearer_token in variable
bearer_token = "Input Bearer Token Here"
client = tweepy.Client(bearer_token=bearer_token)

# Replace with your own search query
query = ' "Air Pollution" "2.5" place_country:US'

# Replace with time period of your choice
start_time = '2021-11-20T00:00:00Z'

# Replace with time period of your choice
end_time = '2022-11-20T00:00:00Z'

tweets = client.search_all_tweets(query=query,
                                  tweet_fields=['context_annotations', 'created_at', 'geo'],
                                  place_fields=['place_type', 'geo'],
                                  expansions='geo.place_id',
                                  start_time=start_time,
                                  end_time=end_time,
                                  max_results=100000)

# Prepare to write to csv file
f = open('tweetData.csv', 'w')
writer = csv.writer(f)

# Write to csv file
for tweet in tweets.data:
    print(tweet.text)
    print(tweet.created_at)
    writer.writerow(['0', tweet.id, tweet.created_at, tweet.text])

# Close csv file
f.close()

A: Measure the recent mean daily tweet rate, so you have a conversion factor to go back and forth between num_tweets and interval_duration.

Exhaustively query some recent timeframe, obtaining some tens of thousands of tweets. This is ground truth; it captures the distributions of interest. Verify it by re-querying part of the timeframe and observing that the identical tweets come back.

Decide on Q query interval starts, randomly chosen, spaced far enough apart that we have (mostly) non-overlapping intervals (see the sketch after this answer). Decide on R result tweets per query. This gives Q × R raw result tweets. Your requirement is Q × R > 100,000 final filtered tweets.

Identify ground-truth "threads", sets of interacting tweets. A tweet interacts with a previous tweet if it is any of:

from the same author and "near" in time
a reply
a retweet, from any author
repeats quoted text, from any author

For example, we might see an hourly pattern from a weather station that reports 2.5 M at 9:01, 10 M at 9:02, NOX at 9:03. Or another station that reports on one location, then another, then another.

Identify heavy hitters, authors who frequently participate in threads. Identify whether threads are usually extended by single or multiple authors. Make R small enough that thread length within each query interval is "short".

Now do a production run, obtaining Q × R raw tweets. Within a given query interval, identify each thread, and discard all but one of its tweets. Tweets that survive such filtering are mostly independent of one another. Common-mode events, e.g. a sudden weather event, may still induce coupling among filtered tweets that are near in time.

If (threaded) heavy hitters author a significant fraction of the tweets of interest, you might want to break your queries into two parts: ask for heavy authors within the query interval, retaining at most 1 tweet from each, and then ask for all other authors within the query interval. Combine them to obtain the desired number of filtered tweets. This optimization lets you make R a bit bigger.

In the ideal case, you would ask for just the first tweet within each random query interval, and send at least 100,000 API calls. That is, R would be 1. The point of the threading and filtering is to obtain similar results, with R big enough that we can reduce the number of calls.
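A minimal sketch of drawing the Q random interval starts; the interval length and Q here are assumptions for illustration:

import random
from datetime import datetime, timedelta

start = datetime(2021, 11, 20)
end = datetime(2022, 11, 20)
interval = timedelta(hours=1)   # assumed per-query interval length
Q = 2000                        # assumed number of random intervals

span = (end - start - interval).total_seconds()
starts = sorted(start + timedelta(seconds=random.uniform(0, span)) for _ in range(Q))

Each element of starts then becomes a (start_time, start_time + interval) pair for search_all_tweets, with R controlled via max_results.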
Q: Finding specific values in a text file with Python

I have a text file that looks like the one above. How do I get Python to read the file such that I only obtain the values for 'gps_alt', 'lat' and 'lon'?

A: I can't really see how the text file is structured exactly, but it looks like a JSON file, so you could try something like this to load the text file into a dict:

import json

with open('file.txt') as f:
    data = json.load(f)

Then you have a normal Python dictionary and can loop through the keys/values and get the ones you need.
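If the file does parse as JSON, pulling out just those three fields could look like this (the key names come from the question and are assumed to sit at the top level of the document):

gps_alt = data.get('gps_alt')
lat = data.get('lat')
lon = data.get('lon')
print(gps_alt, lat, lon)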
Q: list() method in API causing TypeError: 'list' object is not callable in unit test

I'm working on code that retrieves information from Twilio's Flow system through their API. That part of the code functions fine, but when I try to mock it for unit testing, it's throwing an error from the mocked API response. Here is the code being tested:

from twilio.rest import Client


class FlowChecker:
    def __init__(self, twilio_sid, twilio_auth_token):
        self.twilio_SID = twilio_sid
        self.twilio_auth_token = twilio_auth_token
        self.client = Client(self.twilio_SID, self.twilio_auth_token)
        self.calls = self.client.calls.list()
        self.flows = self.client.studio.v2.flows

    def get_active_executions(self):
        active_executions = []
        for flow in self.flows.list():
            executions = self.client.studio.v2.flows(flow.sid).executions.list()
            for execution in executions:
                if execution._properties['status'] != 'ended':
                    active_executions.append({'flow_sid': flow.sid, 'execution': execution})

And here is my unit test code that's throwing the error:

import unittest
from unittest.mock import Mock, patch
from flows.twilio_flows import FlowChecker


class FlowCheckerTest(unittest.TestCase):
    @patch('flows.twilio_flows.Client')
    def test_get_active_flows(self, mock_client):
        flow_checker = FlowChecker('fake_sid', 'fake_auth_token')
        mock_call = Mock()
        mock_flow = Mock()
        mock_flow.sid = 0
        mock_execution = Mock()
        mock_execution._properties = {'status': 'ended'}
        mock_client.calls.list().return_value = [mock_call]
        mock_client.studio.v2.flows = [mock_flow]
        mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]
        self.assertEqual(flow_checker.get_active_executions(), [])

And here is the error traceback:

Ran 2 tests in 0.045s

FAILED (errors=1)

Error
Traceback (most recent call last):
  File "C:\Users\Devon\AppData\Local\Programs\Python\Python310\lib\unittest\mock.py", line 1369, in patched
    return func(*newargs, **newkeywargs)
  File "C:\Users\Devon\PycharmProjects\Day_35\tests\twilio_flows_test'.py", line 19, in test_get_active_flows_when_empty
    mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]
TypeError: 'list' object is not callable

Process finished with exit code 1

As you can see, "mock_client.calls.list().return_value = [mock_call]" doesn't throw any errors during init, and the first code block runs fine. It's only the mocked executions.list() that's throwing the error in the test. Can anyone clear this up? Thank you!

I've tried researching this specific issue and was unable to find information addressing it. It's a very specific, deeply nested function in a vendor-supplied client that I need to test, so I don't know what to try.

A: The problem isn't with .list(), it's with .flows().

mock_client.studio.v2.flows = [mock_flow]
mock_client.studio.v2.flows(mock_flow.sid).executions.list().return_value = [mock_execution]

You assign .flows to be a list, and then you try to call it like a function, which causes the error.

I think maybe you intended to say .flows[mock_flow.sid] instead of .flows(mock_flow.sid)? Although even that doesn't make sense: .flows is a one-element list, so you would use .flows[0] to access the first (and only) item.
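For what it's worth, a sketch of a mock setup that satisfies both access patterns in the code under test (self.flows.list() and self.client.studio.v2.flows(sid)). Two details matter: the patched class produces its instance via mock_client.return_value, and .list.return_value (not .list().return_value) is how you configure what list() returns:

mock_flows = mock_client.return_value.studio.v2.flows  # a plain Mock: both callable and attribute-bearing
mock_flows.list.return_value = [mock_flow]
mock_flows.return_value.executions.list.return_value = [mock_execution]
mock_client.return_value.calls.list.return_value = [mock_call]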
Q: How does ZoneInfo handle DST in the distant future?

I'm trying to understand how the zoneinfo module figures out daylight saving time transitions in the distant future, while it seems that dateutil and pytz both give up on daylight saving time transitions. I know these transitions aren't really meaningful that far in the future, but the inconsistency is potentially a problem and a source of confusion.

import datetime
import zoneinfo
import pytz
from dateutil.tz import gettz as dateutil_gettz

eastern_zone = zoneinfo.ZoneInfo('America/New_York')
eastern_pytz = pytz.timezone('America/New_York')
eastern_dateutil = dateutil_gettz('America/New_York')

dates_7000 = [datetime.datetime(7000, month, 1) for month in range(1, 13)]
dates_7000_zone = [d.replace(tzinfo=eastern_zone) for d in dates_7000]
dates_7000_pytz = [eastern_pytz.localize(d) for d in dates_7000]
dates_7000_dateutil = [d.replace(tzinfo=eastern_dateutil) for d in dates_7000]

# for zoneinfo, there are two utcoffsets in this set
# datetime.timedelta(days=-1, seconds=68400)
# datetime.timedelta(days=-1, seconds=72000)
{d.utcoffset() for d in dates_7000_zone}

# for pytz and dateutil there is only one
# datetime.timedelta(days=-1, seconds=68400)
{d.utcoffset() for d in dates_7000_pytz}
{d.utcoffset() for d in dates_7000_dateutil}

I believe that zoneinfo is just carrying the final rule forward indefinitely. Is there any way to figure out what that rule is and create a pytz or dateutil timezone that would follow it?

A: Unlike dateutil.tz and pytz, the zoneinfo module is capable of parsing Version 3 TZif files, and Version 3 files MAY (read: usually do) contain a footer describing a recurring rule for DST transitions. The part of the Python implementation of zoneinfo relating to these rule-based transitions can be found here. This capability is important for a number of reasons:

Version 1 TZif files store transition times as 32-bit offsets from the Unix Epoch, and as such are subject to the Epochalypse. Right now, 2038 is in the far future, but in 16 years it will be the present, and it will not be possible to express new transitions using Version 1 files.

The tz project offers both "fat" and "slim" binaries: fat binaries include hard-coded transitions for some decently large number of years, even for zones that have simple rule-based offsets, but "slim" binaries store the minimum number of transition times necessary to accurately describe the zone. This means that for zones with rule-based transitions, the Version 1 files truncate in the past, and dateutil.tz and pytz won't even work for current datetimes. Some distros have tried transitioning from fat to slim binaries but have been bitten by patchy support for Version 3 files. Presumably when some threshold is reached, they will all transition to slim files, since there's no particular reason not to and it involves shipping less data.

Of course, my normal disclaimer here applies (one that you have clearly internalized, but it bears repeating): the time zone data available is decently accurate for the period from 1970 to today, and the further away from that time period you go in either direction (the past or the future), the less meaningful the data gets. For dates in even the near future, zoneinfo is showing you the best guess that the tz project maintainers have for what time it's going to be in a different zone, and those guesses become less accurate the further into the future you go.
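As to the closing question: the rule zoneinfo carries forward is the POSIX TZ string stored in the TZif footer, and dateutil can be made to follow it via tz.tzstr. A minimal sketch, assuming a system tzdata install at the usual path (for America/New_York the footer is currently EST5EDT,M3.2.0,M11.1.0):

import datetime
from dateutil import tz

# TZif v2+/v3 files end with "\n<TZ string>\n", so the footer is easy to pull out
with open("/usr/share/zoneinfo/America/New_York", "rb") as f:
    posix_rule = f.read().rsplit(b"\n", 2)[-2].decode()

print(posix_rule)  # EST5EDT,M3.2.0,M11.1.0

eastern_rule = tz.tzstr(posix_rule)
print(datetime.datetime(7000, 7, 1, tzinfo=eastern_rule).utcoffset())  # -1 day, 20:00:00 (EDT)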
Q: Unable to use Python keyring module from systemd service

I want a Python script to start automatically after boot on a Linux computer. To achieve this I set up a systemd service:

[Unit]
Description=My Script Service
Wants=network-online.target
After=network-online.target
After=multi-user.target
StartLimitIntervalSec=3600
StartLimitBurst=60

[Service]
Type=idle
User=masterofpuppets
Restart=on-failure
RestartSec=60s
WorkingDirectory=/home/masterofpuppets
ExecStart=/home/masterofpuppets/mypythonscript.py

[Install]
WantedBy=multi-user.target

But I get an error:

sudo systemctl status mysystemd.service

● transfer_DB_remote_to_local.service - My Script Service
     Loaded: loaded (/etc/systemd/system/mysystemd.service; enabled; vendor preset: enabled)
     Active: activating (auto-restart) (Result: exit-code) since Mon 2022-11-21 11:49:46 CET; 55s ago
    Process: 19283 ExecStart=/home/masterofpuppets/mypythonscript.py (code=exited, status=1/FAILURE)
   Main PID: 19283 (code=exited, status=1/FAILURE)

The Python script is:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import subprocess
import keyring
import time

DB_backup_from_server = f"mysqldump --single-transaction --quick -v -h 192.168.0.97 -u {keyring.get_password('serverDB', 'user')} -p'{keyring.get_password('serverDB', 'pw')}' testDB > ~/testDB_backup.sql"
restore_backup_to_local_DB = f"mysql -v -u {keyring.get_password('mysqlDB', 'user')} -p'{keyring.get_password('mysqlDB', 'pw')}' testDB < ~/testDB_backup.sql"

commands = [DB_backup_from_server, restore_backup_to_local_DB]

execution_interval = 60*60
t0 = time.time() - execution_interval

while True:
    if time.time() - t0 > execution_interval:
        t0 = time.time()
        for cmd in commands:
            subprocess.run(cmd, stdout=subprocess.PIPE, universal_newlines=True, shell=True)
    time.sleep(60)

There are no errors if I start it manually. This is a similar issue, but the suggested solution doesn't help in my case.

Edit:

journalctl -u mysystemd.service

Nov 21 14:36:37 masterofpuppets-pc systemd[1]: Started My Script Service.
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: Traceback (most recent call last):
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:   File "/home/masterofpuppets/mypythonscript.py
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:     DB_backup_from_server = f"mysqldump --single-transaction --quick -v -h 192.168.0.38 -u {keyring.get_>
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:   File "/usr/lib/python3/dist-packages/keyring/core.py", line 57, in get_password
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:     return _keyring_backend.get_password(service_name, username)
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:   File "/usr/lib/python3/dist-packages/keyring/backends/fail.py", line 25, in get_password
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]:     raise NoKeyringError(msg)
Nov 21 14:36:38 masterofpuppets-pc mypythonscript.py[47110]: keyring.errors.NoKeyringError: No recommended backend was available. Install a recommended 3rd party bac>
Nov 21 14:36:38 masterofpuppets-pc systemd[1]: mysystemd.service: Main process exited, code=exited, status=1/FAILURE
Nov 21 14:36:38 masterofpuppets-pc systemd[1]: mysystemd.service: Failed with result 'exit-code'.
Nov 21 14:37:38 masterofpuppets-pc systemd[1]: mysystemd.service: Scheduled restart job, restart counter is at 143.
Nov 21 14:37:38 masterofpuppets-pc systemd[1]: Stopped My Script Service.

A: Thanks to the hints, I now got it working via a user unit service.

The file is located at ~/.config/systemd/user/myuserunit.service

[Unit]
StartLimitIntervalSec=3600
StartLimitBurst=60

[Service]
Type=simple
Restart=on-failure
RestartSec=60s
ExecStart=/home/masterofpuppets/mypythonscript.py

[Install]
WantedBy=default.target

Enabling the service:

systemctl --user daemon-reload
systemctl --user enable myuserunit.service

Now it automatically starts after reboot/login.
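One caveat worth adding: user units start when the user logs in, not at boot. To have the service come up at boot without an interactive login, enable lingering for the user:

loginctl enable-linger masterofpuppets

Note that a keyring backend that needs an unlocked login session (e.g. gnome-keyring over D-Bus) may still be unavailable until an actual login, so test this on your setup.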
Q: How to split input text into equal-size chunks of tokens, not characters, and then concatenate the summarization results for Hugging Face transformers

I am using the methodology below to summarize texts longer than the 1024-token limit. The current method splits the text in half; I took this from another user's post and modified it slightly. What I want to do instead is split the whole text into equal-sized chunks of 1024 tokens, summarize each of them, and at the end concatenate the summaries in the correct order and write them to a file. How can I do this tokenization and get the correct output? Splitting the text with split(" ") doesn't work the same as tokenization; it produces a different count.

import logging
from transformers import pipeline

f = open("TextFile1.txt", "r")
ARTICLE = f.read()

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

counter = 1

def summarize_text(text: str, max_len: int) -> str:
    global counter
    try:
        # logging.warning("max_len " + str(max_len))
        summary = summarizer(text, min_length=30, do_sample=False)
        with open('parsed_' + str(counter) + '.txt', 'w') as f:
            f.write(text)
        counter += 1
        return summary[0]["summary_text"]
    except IndexError as ex:
        logging.warning("Sequence length too large for model, cutting text in half and calling again")
        return summarize_text(text=text[:(len(text) // 2)], max_len=max_len) + " " + summarize_text(text=text[(len(text) // 2):], max_len=max_len)

gg = summarize_text(ARTICLE, 1024)

with open('summarized.txt', 'w') as f:
    f.write(gg)

A: I like splitting text using nltk. You can also do it with spacy and the quality is better, but it takes a bit longer. nltk and spacy allow you to cut text into sentences, and this is better because the text pieces are more coherent. You want to cut to less than 1024 tokens to be on the safe side; 512 should be better, and it's what the original BERT uses, so it shouldn't be too bad. You just summarize the summarizations in the end. Here's an example:

import nltk
from nltk.tokenize import sent_tokenize

def split_in_segments(text):
    tokens = 0
    mystring = list()
    segments = []
    for sent in sent_tokenize(text):
        newtokens = len(sent.split())
        tokens += newtokens
        mystring.append(str(sent).strip())
        if tokens > 512:
            segments.append(" ".join(mystring))
            mystring = []
            tokens = 0
    if mystring:
        segments.append(" ".join(mystring))
    return(segments)

def summarize_4_plotly(text):
    segments = split_in_segments(text)
    summarylist = summarizer(segments, max_length=100, min_length=30, do_sample=False)
    summary = summarizer(" ".join([summarylist[i]['summary_text'] for i in range(len(summarylist))]), max_length=120, min_length=30, do_sample=False)
    return(summary)

summarize_4_plotly(text)
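If the chunks must be measured in the model's own tokens rather than whitespace-separated words, a sketch using the model's tokenizer directly (512 leaves headroom under BART's 1024-token limit):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
ids = tokenizer(ARTICLE, add_special_tokens=False)["input_ids"]
# decode fixed-size token windows back to text, in order
chunks = [tokenizer.decode(ids[i:i + 512]) for i in range(0, len(ids), 512)]
summaries = summarizer(chunks, min_length=30, do_sample=False)
result = " ".join(s["summary_text"] for s in summaries)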
How to split input text into equal size of tokens, not character length, and then concatenate the summarization results for Hugging Face transformers
I am using the below methodology to summarize texts longer than 1024 tokens. The current method splits the text in half. I took this from another user's post and modified it slightly. So what I want to do is, instead of splitting it in half, split the whole text into equally sized chunks of 1024 tokens, get a summarization of each of them, and then at the end concatenate them in the correct order and write them to a file. How can I do this tokenization and get the correct output? Splitting the text with split(" ") doesn't work the same as tokenization; it produces a different count.
import logging
from transformers import pipeline

f = open("TextFile1.txt", "r")
ARTICLE = f.read()

summarizer = pipeline("summarization", model="facebook/bart-large-cnn" )

counter = 1

def summarize_text(text: str, max_len: int) -> str:
    global counter
    try:
        #logging.warning("max_len " + str(max_len))
        summary = summarizer(text, min_length=30, do_sample=False)
        with open('parsed_'+str(counter)+'.txt', 'w') as f:
            f.write(text)
        counter += 1
        return summary[0]["summary_text"]
    except IndexError as ex:
        logging.warning("Sequence length too large for model, cutting text in half and calling again")
        return summarize_text(text=text[:(len(text) // 2)], max_len=max_len) + " " + summarize_text(text=text[(len(text) // 2):], max_len=max_len)

gg = summarize_text(ARTICLE, 1024)

with open('summarized.txt', 'w') as f:
    f.write(gg)
[ "I like splitting text using nltk. You can also do it with spacy and the quality is better, but it takes a bit longer. nltk and spacy allow you to cut text into sentences and this is better because the text pieces are more coherent. You want to cut it less than 1024 to be on the safe side. 512 should be better and it's what the original BERT uses, so it shouldn't be too bad. You just summarize the summarizations in the end. Here's an example:\nimport nltk\nfrom nltk.tokenize import sent_tokenize\n\ndef split_in_segments(text):\n tokens = 0\n mystring = list()\n segments = []\n for sent in sent_tokenize(text):\n newtokens = len(sent.split())\n tokens += newtokens\n mystring.append(str(sent).strip())\n if tokens > 512:\n segments.append(\" \".join(mystring))\n mystring = []\n tokens = 0\n if mystring:\n segments.append(\" \".join(mystring))\n return(segments)\n\ndef summarize_4_plotly(text):\n segments = split_in_segments(text)\n summarylist = summarizer(segments, max_length=100, min_length=30, do_sample=False)\n summary = summarizer(\" \".join([summarylist[i]['summary_text'] for i in range(len(summarylist))]), max_length = 120, min_length = 30, do_sample = False)\n return(summary)\n\nsummarize_4_plotly(text)\n\n" ]
[ 1 ]
[]
[]
[ "huggingface", "huggingface_tokenizers", "huggingface_transformers", "nlp", "python" ]
stackoverflow_0074244702_huggingface_huggingface_tokenizers_huggingface_transformers_nlp_python.txt
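A hedged sketch of the token-exact chunking asked for above: it uses the model's own tokenizer from transformers, so chunk boundaries are measured in real model tokens rather than split(" ") words. The 900-token chunk size is an assumption chosen to leave headroom below the 1024 limit, since the pipeline re-adds special tokens when it re-tokenizes the decoded text, and token-boundary chunks may split mid-sentence:

from transformers import AutoTokenizer, pipeline

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def split_by_tokens(text, max_tokens=900):
    # encode once, slice the token ids into equal-sized chunks, decode back
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunks = [ids[i:i + max_tokens] for i in range(0, len(ids), max_tokens)]
    return [tokenizer.decode(chunk, skip_special_tokens=True) for chunk in chunks]

ARTICLE = open("TextFile1.txt", "r").read()
segments = split_by_tokens(ARTICLE)
summaries = summarizer(segments, min_length=30, do_sample=False)
result = " ".join(s["summary_text"] for s in summaries)  # original order is kept

with open('summarized.txt', 'w') as f:
    f.write(result)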
Q: How to get the items inside of an OpenAIobject in python? I would like to get the text inside this data structure that is outputted via GPT3 OpenAI. I'm using Python. When I print the object I get:
<OpenAIObject text_completion id=cmpl-6F7ScZDu2UKKJGPXTiTPNKgfrikZ at 0x7f7648cacef0> JSON: {
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\nWhat was Malcolm X's original name?\nMalcolm X's original name was Malcolm Little.\n\nWhere was Malcolm X born?\nMalcolm X was born in Omaha, Nebraska.\n\nWhat was the profession of Malcolm X's father?\nMalcolm X's father was a Baptist minister.\n\nWhat did Malcolm X do after he stopped attending school?\nMalcolm X became involved in petty criminal activities."
    }
  ],
  "created": 1669061618,
  "id": "cmpl-6F7ScZDu2gJJHKZSPXTiTPNKgfrikZ",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 86,
    "prompt_tokens": 1200,
    "total_tokens": 1286
  }
}

How do I get the 'text' component of this? For example, if this object is called: qa ... I can output qa['choices'] And I get the same items as above... but adding a .text or ['text'] to this does not do it and gets an error. But I am not sure how to isolate the 'text'.
I've read the docs, but cannot find this... https://beta.openai.com/docs/api-reference/files/delete?lang=python
Thank you
A: x = {"choices": [{"finish_reason": "length",
 "text": ", everyone, and welcome to the first installment of the new opening"}], }

text = x['choices'][0]['text']
print(text) # , everyone, and welcome to the first installment of the new opening
How to get the items inside of an OpenAIobject in python?
I would like to get the text inside this data structure that is outputted via GPT3 OpenAI. I'm using Python. When I print the object I get:
<OpenAIObject text_completion id=cmpl-6F7ScZDu2UKKJGPXTiTPNKgfrikZ at 0x7f7648cacef0> JSON: {
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\nWhat was Malcolm X's original name?\nMalcolm X's original name was Malcolm Little.\n\nWhere was Malcolm X born?\nMalcolm X was born in Omaha, Nebraska.\n\nWhat was the profession of Malcolm X's father?\nMalcolm X's father was a Baptist minister.\n\nWhat did Malcolm X do after he stopped attending school?\nMalcolm X became involved in petty criminal activities."
    }
  ],
  "created": 1669061618,
  "id": "cmpl-6F7ScZDu2gJJHKZSPXTiTPNKgfrikZ",
  "model": "text-davinci-002",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 86,
    "prompt_tokens": 1200,
    "total_tokens": 1286
  }
}

How do I get the 'text' component of this? For example, if this object is called: qa ... I can output qa['choices'] And I get the same items as above... but adding a .text or ['text'] to this does not do it and gets an error. But I am not sure how to isolate the 'text'.
I've read the docs, but cannot find this... https://beta.openai.com/docs/api-reference/files/delete?lang=python
Thank you
[ "x = {&quot;choices&quot;: [{&quot;finish_reason&quot;: &quot;length&quot;,\n &quot;text&quot;: &quot;, everyone, and welcome to the first installment of the new opening&quot;}], }\n\ntext = x['choices'][0]['text']\nprint(text) # , everyone, and welcome to the first installment of the new opening\n\n" ]
[ 1 ]
[]
[]
[ "gpt_3", "openai", "python" ]
stackoverflow_0074524530_gpt_3_openai_python.txt
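A small follow-up sketch, assuming the pre-1.0 openai Python client that produced the OpenAIObject above: such response objects typically allow attribute access as well as dict-style indexing, so either spelling should reach the text. This is an assumption about the client version, not something confirmed in the thread:

# qa is the OpenAIObject shown in the question
text = qa["choices"][0]["text"]  # dict-style indexing, as in the answer
text = qa.choices[0].text        # attribute access usually works on OpenAIObject too
print(text)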
Q: Trying to run Tortoise-TTS program, getting errors I am trying to run this text-to-speech program. I followed the instructions verbatim, but when I go to run the first line of code (below)
python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast

I get the following error:
C:\Users\chase\anaconda3\lib\site-packages\torchaudio\_internal\module_utils.py:99: UserWarning: Failed to import soundfile. 'soundfile' backend is not available.
  warnings.warn("Failed to import soundfile. 'soundfile' backend is not available.")
C:\Users\chase\anaconda3\lib\site-packages\paramiko\transport.py:219: CryptographyDeprecationWarning: Blowfish has been deprecated
  "class": algorithms.Blowfish,
Traceback (most recent call last):
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 152, in <module>
    _snd = _ffi.dlopen(_libname)
OSError: cannot load library 'C:\Users\chase\anaconda3\Library\bin\sndfile.dll': error 0x7e

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 178, in <module>
    _snd = _ffi.dlopen(_os.path.join(_path, '_soundfile_data', _packaged_libname))
OSError: cannot load library 'C:\Users\chase\anaconda3\lib\site-packages\_soundfile_data\libsndfile_64bit.dll': error 0x7e

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\chase\tortoise-tts\tortoise\do_tts.py", line 7, in <module>
    from api import TextToSpeech, MODELS_DIR
  File "C:\Users\chase\tortoise-tts\tortoise\api.py", line 22, in <module>
    from tortoise.utils.audio import wav_to_univnet_mel, denormalize_tacotron_mel
  File "C:\Users\chase\anaconda3\lib\site-packages\tortoise-2.4.2-py3.9.egg\tortoise\utils\audio.py", line 4, in <module>
    import librosa
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\__init__.py", line 209, in <module>
    from . import core
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\core\__init__.py", line 6, in <module>
    from .audio import * # pylint: disable=wildcard-import
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\core\audio.py", line 8, in <module>
    import soundfile as sf
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 189, in <module>
    _snd = _ffi.dlopen(_libname)
OSError: cannot load library 'libsndfile.dll': error 0x7e

I expected it to run
A: Try on Linux, it works fine for me.
Trying to run Tortoise-TTS program, getting errors
I am trying to run this text-to-speech program. I followed the instructions verbatim, but when I go to run the first line of code (below)
python tortoise/do_tts.py --text "I'm going to speak this" --voice random --preset fast

I get the following error:
C:\Users\chase\anaconda3\lib\site-packages\torchaudio\_internal\module_utils.py:99: UserWarning: Failed to import soundfile. 'soundfile' backend is not available.
  warnings.warn("Failed to import soundfile. 'soundfile' backend is not available.")
C:\Users\chase\anaconda3\lib\site-packages\paramiko\transport.py:219: CryptographyDeprecationWarning: Blowfish has been deprecated
  "class": algorithms.Blowfish,
Traceback (most recent call last):
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 152, in <module>
    _snd = _ffi.dlopen(_libname)
OSError: cannot load library 'C:\Users\chase\anaconda3\Library\bin\sndfile.dll': error 0x7e

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 178, in <module>
    _snd = _ffi.dlopen(_os.path.join(_path, '_soundfile_data', _packaged_libname))
OSError: cannot load library 'C:\Users\chase\anaconda3\lib\site-packages\_soundfile_data\libsndfile_64bit.dll': error 0x7e

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\chase\tortoise-tts\tortoise\do_tts.py", line 7, in <module>
    from api import TextToSpeech, MODELS_DIR
  File "C:\Users\chase\tortoise-tts\tortoise\api.py", line 22, in <module>
    from tortoise.utils.audio import wav_to_univnet_mel, denormalize_tacotron_mel
  File "C:\Users\chase\anaconda3\lib\site-packages\tortoise-2.4.2-py3.9.egg\tortoise\utils\audio.py", line 4, in <module>
    import librosa
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\__init__.py", line 209, in <module>
    from . import core
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\core\__init__.py", line 6, in <module>
    from .audio import * # pylint: disable=wildcard-import
  File "C:\Users\chase\anaconda3\lib\site-packages\librosa-0.9.2-py3.9.egg\librosa\core\audio.py", line 8, in <module>
    import soundfile as sf
  File "C:\Users\chase\anaconda3\lib\site-packages\soundfile.py", line 189, in <module>
    _snd = _ffi.dlopen(_libname)
OSError: cannot load library 'libsndfile.dll': error 0x7e

I expected it to run
[ "Try on linux, it works fine for me.\n" ]
[ 0 ]
[]
[]
[ "conda", "github", "pip", "python", "text_to_speech" ]
stackoverflow_0074503208_conda_github_pip_python_text_to_speech.txt
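Error 0x7e on Windows is ERROR_MOD_NOT_FOUND, i.e. libsndfile.dll (or one of its dependencies) cannot be loaded. A fix often reported for this — an assumption here, not something confirmed by the thread — is reinstalling the native library into the active environment, e.g. conda install -c conda-forge libsndfile or pip install --force-reinstall soundfile, and then verifying the backend loads:

# quick sanity check after reinstalling: if this prints a version string,
# the libsndfile backend that torchaudio/librosa need is importable
import soundfile as sf
print(sf.__libsndfile_version__)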
Q: Adding string after each vowel I am currently on a project to develop a small, fun program that takes a name as an input and returns the name with the string "bi" after each vowel in the name. I am encountering the problem that my program runs in an infinite loop when I have a name that has the same vowel twice, for example the name "aya". Technically it should return "abiyabi".
"""Welcome to the code of BoBi Sprache. This Sprache aka Language will put the letter "bi" after each vowel letter in your name"""
print("Welcome to the BoBiSprache programm")
Name = input("Please enter your name to be BoBied :D : ")
NameList = list(Name.lower())
vowels = ["a", "e", "i", "o", "u"]


def VowelCheck(NameList):
    for i in NameList:
        index = NameList.index(i)
        for j in vowels:
            if i == j and index == 0:
                NameList.insert(index + 1, "bi")
            elif i == j and (str(NameList[index - 1]) + str(NameList[index])) != "bi":
                NameList.insert(index + 1, "bi")


VowelCheck(NameList)
NewName = ""
NewName = (NewName.join(NameList)).title()
print("Your New Name is: %s" % NewName)

At first I thought it was a problem with the first letter being a vowel, but I added an if statement that should solve that. I'm honestly out of answers now, and seeking help. You guys might see something I don't see.
A: You can also do this by using str.translate which you can give multiple-characters to change one character into many:
username = input("Please enter your name to be BoBied :D : ")
vowels = ["a", "e", "i", "o", "u"]
vowels += [i.upper() for i in vowels]
translation_table = str.maketrans({i: i+"bi" for i in vowels})

print((f"Your BoBied name is: {username.translate(translation_table)}"))

Demo:
Please enter your name to be BoBied :D : Hampus
Your BoBied name is: Habimpubis

I also added upper-case letters, so that it doesn't matter in what case that the user inputs their name.
A: The reason your function keeps on looping is as you said that you have the same vowel twice while you also search for the index of the vowel. The problem with that is the index method returns the first instance of the value being searched. When you iterate to the next element of your list it remains the same value as before being that you're editing your list while iterating through it. It also won't be "bi" because that was added after the first instance of the vowel.
It's not recommended to edit a list during iteration. Best practice would be to create a new list and append values to it.
A: Build the Bobied name independent of the user-submitted name. Also note that strings are iterables in Python and there's no need to convert the user-submitted name to a list.

username = input("Please enter your name to be BoBied :D : ")
vowels = ["a", "e", "i", "o", "u"]


def VowelCheck(name):
    bobified_name = ""
    for i in name:
        bobified_name += i
        if i in vowels:
            bobified_name += "bi"
    return bobified_name


print("Your New Name is: %s" % VowelCheck(username).title())


Now, if you want to avoid the loops and conditionals, or have large input: use str.translate(). First make a translation table, just a dict which maps vowels to bobified vowels. Then call translate() on the name to be bobified.
username = input("Please enter your name to be Bobied :D : ")
bobi_table = str.maketrans({
    'a': 'abi',
    'e': 'ebi',
    'i': 'ibi',
    'o': 'obi',
    'u': 'ubi'
})
print("Your new name is: %s" % username.translate(bobi_table))
Adding string after each vowel
I am currently on a project to develop a small, fun program that takes a name as an input and returns the name with the string "bi" after each vowel in the name. I am encountering the problem that my program runs in an infinite loop when I have a name that has the same vowel twice, for example the name "aya". Technically it should return "abiyabi".
"""Welcome to the code of BoBi Sprache. This Sprache aka Language will put the letter "bi" after each vowel letter in your name"""
print("Welcome to the BoBiSprache programm")
Name = input("Please enter your name to be BoBied :D : ")
NameList = list(Name.lower())
vowels = ["a", "e", "i", "o", "u"]


def VowelCheck(NameList):
    for i in NameList:
        index = NameList.index(i)
        for j in vowels:
            if i == j and index == 0:
                NameList.insert(index + 1, "bi")
            elif i == j and (str(NameList[index - 1]) + str(NameList[index])) != "bi":
                NameList.insert(index + 1, "bi")


VowelCheck(NameList)
NewName = ""
NewName = (NewName.join(NameList)).title()
print("Your New Name is: %s" % NewName)

At first I thought it was a problem with the first letter being a vowel, but I added an if statement that should solve that. I'm honestly out of answers now, and seeking help. You guys might see something I don't see.
[ "You can also do this by using str.translate which you can give multiple-characters to change one character into many:\nusername = input(\"Please enter your name to be BoBied :D : \")\nvowels = [\"a\", \"e\", \"i\", \"o\", \"u\"]\nvowels += [i.upper() for i in vowels]\ntranslation_table = str.maketrans({i: i+\"bi\" for i in vowels})\n\nprint((f\"Your BoBied name is: {username.translate(translation_table)}\"))\n\nDemo:\nPlease enter your name to be BoBied :D : Hampus\nYour BoBied name is: Habimpubis\n\nI also added upper-case letters, so that it doesn't matter in what case that the user inputs their name.\n", "The reason your function keeps on looping is as you said that you have the same vowel twice while you also search for the index of the vowel. The problem with that is the index method returns the first instance of the value being searched. When you iterate to the next element of your list it remains the same value as before being that you're editing your list while iterating through it. It also won't be \"bi\" because that was added after the first instance of the vowel.\nIt's not recommended to edit a list during iteration. Best practice would be to create a new list and append values to it.\n", "Build the Bobied name independent of the user-submitted name. Also note that strings are iterables in Python and there's no need to convert the user-submitted name to a list.\n\nusername = input(\"Please enter your name to be BoBied :D : \")\nvowels = [\"a\", \"e\", \"i\", \"o\", \"u\"]\n\n\ndef VowelCheck(name):\n bobified_name = \"\"\n for i in name:\n bobified_name += i\n if i in vowels:\n bobified_name += \"bi\"\n return bobified_name\n\n\nprint(\"Your New Name is: %s\" % VowelCheck(username).title())\n\n\nNow, if you want to avoid the loops and conditionals, or have large input: use str.translate(). First make a translation table, just a dict which maps vowels to bobified vowels. Then call translate() on the name to be bobified.\nusername = input(\"Please enter your name to be Bobied :D : \")\nbobi_table = str.maketrans({\n 'a': 'abi',\n 'e': 'ebi',\n 'i': 'ibi',\n 'o': 'obi',\n 'u': 'ubi'\n})\nprint(\"Your new name is: %s\" % username.translate(bobi_table))\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "for_loop", "if_statement", "nested_for_loop", "python" ]
stackoverflow_0074523921_for_loop_if_statement_nested_for_loop_python.txt
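A compact alternative sketch for the same task, using a regular expression: the backreference \1 re-inserts the matched vowel and appends "bi" right after it, which avoids mutating a list while iterating over it entirely.

import re

username = input("Please enter your name to be BoBied :D : ")
# every lowercase vowel is matched and kept, with "bi" appended after it
new_name = re.sub(r"([aeiou])", r"\1bi", username.lower()).title()
print("Your New Name is: %s" % new_name)  # "aya" becomes "Abiyabi"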
Q: multiprocessing loop over a simple list? I have a function that calls a custom function that compares rows in a dataframe and calculates some stats. vt.make_breakpts needs a dataframe (data), a key (unique identifier), and a datefield (date) to do its thing. I can run this and wait a very long time and it will go through an entire dataframe and output a dataframe of stats calculated by comparing the rows in a sequence (in this case by date). I have a list of all unique key values and want to pass it to multiprocessing so that each item in the list is used to subset the input df and then pass that work to a processor. So I created a def function that will pass the values to the custom function.
def taska(id, data, key, date):
    cdata = data[data[key]==id]
    return vt.make_breakpts (data=cdata, key=key, date=date)

Then I used functools to set the unchanging variables and an empty list to capture the results and use unique() to get a list of unique key values.
partialA = functools.partial(taska, data=pgdf, key=VID, date=PDATE)
resultList = []
vidList = list(pgdf['VESSEL_ID'].unique())

How do I pass the list values to the multicore processor and return the results from each process to the list? I used...
with Pool(14) as pool:
    for results in pool.imap_unordered(partial_task, bwedf.iterrows()):
        ResultsList.append(results[0])

.iterrows() worked because in that example I was using a dataframe; is there a similar approach for a simple list?
A: you just pass the list itself.
with Pool(14) as pool:
    for results in pool.imap_unordered(partial_task, vidList):
        ResultsList.append(results[0])

explanation: imap expects an iterable, both lists and df.iterrows are iterables ... specifically, anything that can be put in a for loop is an iterable, i.e.:
for i in iterable:
multiprocessing loop over a simple list?
I have a function that calls a custom function that compares rows in a dataframe and calculates some stats. vt.make_breakpts needs a dataframe (data), a key (unique identifier), and a datefield (date) to do its thing. I can run this and wait a very long time and it will go through an entire dataframe and output a dataframe of stats calculated by comparing the rows in a sequence (in this case by date). I have a list of all unique key values and want to pass it to multiprocessing so that each item in the list is used to subset the input df and then pass that work to a processor. So I created a def function that will pass the values to the custom function.
def taska(id, data, key, date):
    cdata = data[data[key]==id]
    return vt.make_breakpts (data=cdata, key=key, date=date)

Then I used functools to set the unchanging variables and an empty list to capture the results and use unique() to get a list of unique key values.
partialA = functools.partial(taska, data=pgdf, key=VID, date=PDATE)
resultList = []
vidList = list(pgdf['VESSEL_ID'].unique())

How do I pass the list values to the multicore processor and return the results from each process to the list? I used...
with Pool(14) as pool:
    for results in pool.imap_unordered(partial_task, bwedf.iterrows()):
        ResultsList.append(results[0])

.iterrows() worked because in that example I was using a dataframe; is there a similar approach for a simple list?
[ "you just pass the list itself.\nwith Pool(14) as pool:\n for results in pool.imap_unordered(partial_task, vidList):\n ResultsList.append(results[0])\n\nexplaination: imap expects an iterable, both lists and df.iterrows are iterables ... specifically anything that you can be put in a for loop is an iterable ie:\nfor i in iterable:\n\n" ]
[ 1 ]
[]
[]
[ "list", "multiprocessing", "python" ]
stackoverflow_0074523667_list_multiprocessing_python.txt
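A self-contained end-to-end sketch of the pattern from the answer. The stand-in make_breakpts and the toy dataframe below are illustrative assumptions replacing vt.make_breakpts and pgdf from the question; the Pool/partial plumbing is the part that carries over unchanged:

import functools
from multiprocessing import Pool

import pandas as pd

# toy stand-in for vt.make_breakpts: any per-vessel computation works here
def make_breakpts(data, key, date):
    return data.sort_values(date)[key].iloc[0], len(data)

def taska(vessel_id, data, key, date):
    cdata = data[data[key] == vessel_id]
    return make_breakpts(data=cdata, key=key, date=date)

if __name__ == "__main__":
    pgdf = pd.DataFrame({
        "VESSEL_ID": [1, 1, 2, 2, 3],
        "PDATE": pd.date_range("2022-01-01", periods=5),
    })
    partial_task = functools.partial(taska, data=pgdf, key="VESSEL_ID", date="PDATE")
    vid_list = list(pgdf["VESSEL_ID"].unique())
    results = []
    with Pool(4) as pool:
        # each list item becomes taska's first positional argument
        for res in pool.imap_unordered(partial_task, vid_list):
            results.append(res)
    print(results)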
Q: How do I put an array into dynamoDB as a nested String Set? I've got a dynamoDB table with 5 columns. First 2 columns are PK and SK, third column is a boolean, and I want my 4th and 5th columns to be String Sets.
I've got a python function that looks something like this so far.
def dbupload(id,imgid,isDuplicate,labeltypes,labelnames,):
    response = client.put_item(
        TableName='labels',
        Item={
            'instanceID':{
                'S':"{}".format(id),
            },
            'imageID':{
                'S':"{}".format(imgid),
            },
        }
    )

I've got an array of labeltypes, and an equal sized array of corresponding labelnames for each imageid that exists.
Can I just pass an array in as an item? Or do I have to loop through the array, and put the nested attributes in 1 by 1?
I have seen batchwriter examples, but every available example is just manually importing hardcoded items instead of useful functions like sending arrays, or data from CSVs, or lists.
A: A StringSet can be saved by just providing a list of strings:
"SS": ["Giraffe", "Hippo", "Zebra"]
def dbupload(id,imgid,isDuplicate,labeltypes,labelnames,):
    response = client.put_item(
        TableName='labels',
        Item={
            'instanceID':{
                'S':"{}".format(id),
            },
            'imageID':{
                'S':"{}".format(imgid),
            },
            'mylabels':{
                'SS': labelnames
            },
            'mytypes':{
                'SS': labeltypes
            }
        }
    )
How do I put an array into dynamoDB as a nested String Set?
I've got a dynamoDB table with 5 columns. First 2 columns are PK and SK, third column is a boolean, and I want my 4th and 5th columns to be String Sets. I've got a python function that looks something like this so far. def dbupload(id,imgid,isDuplicate,labeltypes,labelnames,): response = client.put_item( TableName='labels', Item={ 'instanceID':{ 'S':"{}".format(id), }, 'imageID':{ 'S':"{}".format(imgid), }, } ) I've got an array of labeltypes, and an equal sized array of corresponding labelnames for each imageid that exists. Can I just pass an array in as an item? Or do I have to loop through the array, and put the nested attributes in 1 by 1? I have seen batchwriter examples, but every available example is just manually importing hardcoded items instead of useful functions like sending arrays, or data from csv's, or lists.
[ "A StringSet can be saved by just providing a list of strings:\n\"SS\": [\"Giraffe\", \"Hippo\" ,\"Zebra\"]\ndef dbupload(id,imgid,isDuplicate,labeltypes,labelnames,):\nresponse = client.put_item(\n TableName='labels',\n Item={\n 'instanceID':{\n 'S':\"{}\".format(id),\n },\n 'imageID':{\n 'S':\"{}\".format(imgid),\n },\n 'mylabels':{\n 'SS': labelnames\n },\n 'mytypes':{\n 'SS': labeltypes\n }\n \n }\n)\n\n\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "aws_lambda", "boto3", "python" ]
stackoverflow_0074523479_amazon_dynamodb_aws_lambda_boto3_python.txt
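A sketch of the same upload through the higher-level boto3 resource interface, under the assumption that plain Python sets are acceptable here: the resource serializer turns a set of strings into a DynamoDB string set (SS) automatically, so the low-level type descriptors disappear. Attribute names mirror the answer above; note that DynamoDB sets cannot be empty.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("labels")

def dbupload(id, imgid, labeltypes, labelnames):
    table.put_item(
        Item={
            "instanceID": str(id),
            "imageID": str(imgid),
            # Python sets of strings become DynamoDB string sets (SS)
            "mylabels": set(labelnames),
            "mytypes": set(labeltypes),
        }
    )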
Q: Dynamically call staticmethod in Python I have a python object that has various attrs, one of them is check=None.
class MenuItem:
    check: bool = True

During the __init__() process, it parses its own attrs and checks if they are callable. If so, it calls the function, and replaces its instance variable with the result of the function:
def __init__(self):
    self.request = ...
    if callable(self.check):
        self.check = self.check(self.request)

The purpose is to have subclasses that may replace class attrs by lambda functions:
class MainMenuItem(MenuItem):
    check = lambda request: request.user.is_authenticated

So far, so good. But as the calling of instance methods implicitly adds self as the first parameter, I would have to write every lambda (or external function) as lambda self, request: ... - external functions must be def check_superuser(self, request): ... - which IMHO looks bad.
Is there a way in Python to call a function from a method "as staticmethod"? Like
if callable(self.check):
    self.check = staticmethod(self.check)(self.request)

(this obviously doesn't work)
Any hints are welcome. Am I thinking about this completely wrong?
A: Is this what you are looking for?
class A:
    check: bool = True
    def __init__(self):
        self.request = 'request'
        if callable(self.check):
            self.check = self.__class__.check(self.request)

class B(A):
    check = lambda request: len(request)

b = B()
print(b.check)

outputs 7
Dynamically call staticmethod in Python
I have a python object that has various attrs, one of them is check=None.
class MenuItem:
    check: bool = True

During the __init__() process, it parses its own attrs and checks if they are callable. If so, it calls the function, and replaces its instance variable with the result of the function:
def __init__(self):
    self.request = ...
    if callable(self.check):
        self.check = self.check(self.request)

The purpose is to have subclasses that may replace class attrs by lambda functions:
class MainMenuItem(MenuItem):
    check = lambda request: request.user.is_authenticated

So far, so good. But as the calling of instance methods implicitly adds self as the first parameter, I would have to write every lambda (or external function) as lambda self, request: ... - external functions must be def check_superuser(self, request): ... - which IMHO looks bad.
Is there a way in Python to call a function from a method "as staticmethod"? Like
if callable(self.check):
    self.check = staticmethod(self.check)(self.request)

(this obviously doesn't work)
Any hints are welcome. Am I thinking about this completely wrong?
[ "Is this what you are looking for?\nclass A:\n check: bool = True\n def __init__(self):\n self.request = 'request'\n if callable(self.check):\n self.check = self.__class__.check(self.request)\n\nclass B(A):\n check = lambda request: len(request)\n\nb = B()\nprint(b.check)\n\noutputs 7\n" ]
[ 2 ]
[]
[]
[ "callable", "instance", "python" ]
stackoverflow_0074524547_callable_instance_python.txt
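A hedged variant of the idea raised in the question itself: wrapping the subclass lambda in staticmethod means instance access already yields the plain function, so the original __init__ can stay untouched. The example request value and length check are made up for illustration:

class MenuItem:
    check = True

    def __init__(self):
        self.request = "request"
        # because subclasses declare check as a staticmethod, self.check
        # resolves to the plain function -- no implicit self is inserted
        if callable(self.check):
            self.check = self.check(self.request)

class MainMenuItem(MenuItem):
    # subclasses write single-argument callables without a self parameter
    check = staticmethod(lambda request: len(request))

print(MainMenuItem().check)  # 7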
Q: Python pandas : How to find difference between two dataframe based on single column I have two dataframes
df1 = pd.DataFrame({
    'Date':['2013-11-24','2013-11-24','2013-11-25','2013-11-25'],
    'Fruit':['Banana','Orange','Apple','Celery'],
    'Num':[22.1,8.6,7.6,10.2],
    'Color':['Yellow','Orange','Green','Green'],
})
print(df1)

        Date   Fruit   Num   Color
0 2013-11-24  Banana  22.1  Yellow
1 2013-11-24  Orange   8.6  Orange
2 2013-11-25   Apple   7.6   Green
3 2013-11-25  Celery  10.2   Green

df2 = pd.DataFrame({
    'Date':['2013-11-25','2013-11-25','2013-11-25','2013-11-25','2013-11-25','2013-11-25'],
    'Fruit':['Banana','Orange','Apple','Celery','X','Y'],
    'Num':[22.1,8.6,7.6,10.2,22.1,8.6],
    'Color':['Yellow','Orange','Green','Green','Red','Orange'],
})
print(df2)

        Date   Fruit   Num   Color
0 2013-11-25  Banana  22.1  Yellow
1 2013-11-25  Orange   8.6  Orange
2 2013-11-25   Apple   7.6   Green
3 2013-11-25  Celery  10.2   Green
4 2013-11-25       X  22.1     Red
5 2013-11-25       Y   8.6  Orange

I am trying to find out the difference between these two dataframes based on the column Fruit. This is what I am doing now, but I am not getting the expected output
mapped_df = pd.concat([df1,df2],ignore_index=True).drop_duplicates(keep=False)
print(mapped_df)

Expected output
        Date Fruit   Num   Color
8 2013-11-25     X  22.1     Red
9 2013-11-25     Y   8.6  Orange

A: You can use the negated isin:
output = df2.loc[~df2['Fruit'].isin(df1['Fruit'])]
Python pandas : How to find difference between two dataframe based on single column
I have two dataframes
df1 = pd.DataFrame({
    'Date':['2013-11-24','2013-11-24','2013-11-25','2013-11-25'],
    'Fruit':['Banana','Orange','Apple','Celery'],
    'Num':[22.1,8.6,7.6,10.2],
    'Color':['Yellow','Orange','Green','Green'],
})
print(df1)

        Date   Fruit   Num   Color
0 2013-11-24  Banana  22.1  Yellow
1 2013-11-24  Orange   8.6  Orange
2 2013-11-25   Apple   7.6   Green
3 2013-11-25  Celery  10.2   Green

df2 = pd.DataFrame({
    'Date':['2013-11-25','2013-11-25','2013-11-25','2013-11-25','2013-11-25','2013-11-25'],
    'Fruit':['Banana','Orange','Apple','Celery','X','Y'],
    'Num':[22.1,8.6,7.6,10.2,22.1,8.6],
    'Color':['Yellow','Orange','Green','Green','Red','Orange'],
})
print(df2)

        Date   Fruit   Num   Color
0 2013-11-25  Banana  22.1  Yellow
1 2013-11-25  Orange   8.6  Orange
2 2013-11-25   Apple   7.6   Green
3 2013-11-25  Celery  10.2   Green
4 2013-11-25       X  22.1     Red
5 2013-11-25       Y   8.6  Orange

I am trying to find out the difference between these two dataframes based on the column Fruit. This is what I am doing now, but I am not getting the expected output
mapped_df = pd.concat([df1,df2],ignore_index=True).drop_duplicates(keep=False)
print(mapped_df)

Expected output
        Date Fruit   Num   Color
8 2013-11-25     X  22.1     Red
9 2013-11-25     Y   8.6  Orange
[ "You can use the negated isin:\noutput = df2.loc[~df2['Fruit'].isin(df1['Fruit'])]\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074524559_dataframe_pandas_python.txt
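A merge-based alternative sketch, reusing df1 and df2 exactly as constructed above: the indicator column marks rows that exist only in df2, which is another common way to take a one-column difference.

merged = df2.merge(df1[['Fruit']], on='Fruit', how='left', indicator=True)
# 'left_only' rows are the fruits present in df2 but absent from df1
output = merged.loc[merged['_merge'] == 'left_only'].drop(columns='_merge')
print(output)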
Q: Eigen Matrix vs Numpy Array multiplication performance I read in this question that eigen has very good performance. However, I tried to compare eigen MatrixXi multiplication speed vs numpy array multiplication. And numpy performs better (~26 seconds vs. ~29). Is there a more efficient way to do this in Eigen?
Here is my code:
Numpy:
import numpy as np
import time

n_a_rows = 4000
n_a_cols = 3000
n_b_rows = n_a_cols
n_b_cols = 200

a = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)
b = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)

start = time.time()
d = np.dot(a, b)
end = time.time()

print "time taken : {}".format(end - start)

Result:
time taken : 25.9291000366

Eigen:
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{

    int n_a_rows = 4000;
    int n_a_cols = 3000;
    int n_b_rows = n_a_cols;
    int n_b_cols = 200;

    MatrixXi a(n_a_rows, n_a_cols);

    for (int i = 0; i < n_a_rows; ++ i)
        for (int j = 0; j < n_a_cols; ++ j)
            a (i, j) = n_a_cols * i + j;

    MatrixXi b (n_b_rows, n_b_cols);
    for (int i = 0; i < n_b_rows; ++ i)
        for (int j = 0; j < n_b_cols; ++ j)
            b (i, j) = n_b_cols * i + j;

    MatrixXi d (n_a_rows, n_b_cols);

    clock_t begin = clock();

    d = a * b;

    clock_t end = clock();
    double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
    std::cout << "Time taken : " << elapsed_secs << std::endl;

}

Result:
Time taken : 29.05

I am using numpy 1.8.1 and eigen 3.2.0-4.
A: My question has been answered by @Jitse Niesen and @ggael in the comments.
I need to add a flag to turn on the optimizations when compiling: -O2 -DNDEBUG (O is capital o, not zero).
After including this flag, eigen code runs in 0.6 seconds as opposed to ~29 seconds without it.
A: Change:
a = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)
b = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)

into:
a = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)*1.0
b = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)*1.0

This gives at least a factor-100 boost on my laptop:
time taken : 11.1231250763

vs:
time taken : 0.124922037125

Unless you really want to multiply integers. In Eigen it is also quicker to multiply double precision numbers (amounts to replacing MatrixXi with MatrixXd three times), but there I see just a factor of 1.5: Time taken : 0.555005 vs 0.846788.
A:
Is there a more efficient way to do this in Eigen?

Whenever you have a matrix multiplication where the matrix on the left side of the = does not also appear on the right side, you can safely tell the compiler that there is no aliasing taking place. This will save you one unnecessary temporary variable and assignment operation, which for big matrices can make an important difference in performance. This is done with the .noalias() function as follows.
d.noalias() = a * b;

This way a*b is directly evaluated and stored in d. Otherwise, to avoid aliasing problems, the compiler will first store the product into a temporary variable and then assign this variable to your target matrix d.
So, in your code, the line:
d = a * b;

is actually compiled as follows:
temp = a*b;
d = temp;
Eigen Matrix vs Numpy Array multiplication performance
I read in this question that eigen has very good performance. However, I tried to compare eigen MatrixXi multiplication speed vs numpy array multiplication. And numpy performs better (~26 seconds vs. ~29). Is there a more efficient way to do this in Eigen?
Here is my code:
Numpy:
import numpy as np
import time

n_a_rows = 4000
n_a_cols = 3000
n_b_rows = n_a_cols
n_b_cols = 200

a = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)
b = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)

start = time.time()
d = np.dot(a, b)
end = time.time()

print "time taken : {}".format(end - start)

Result:
time taken : 25.9291000366

Eigen:
#include <iostream>
#include <Eigen/Dense>

using namespace Eigen;

int main()
{

    int n_a_rows = 4000;
    int n_a_cols = 3000;
    int n_b_rows = n_a_cols;
    int n_b_cols = 200;

    MatrixXi a(n_a_rows, n_a_cols);

    for (int i = 0; i < n_a_rows; ++ i)
        for (int j = 0; j < n_a_cols; ++ j)
            a (i, j) = n_a_cols * i + j;

    MatrixXi b (n_b_rows, n_b_cols);
    for (int i = 0; i < n_b_rows; ++ i)
        for (int j = 0; j < n_b_cols; ++ j)
            b (i, j) = n_b_cols * i + j;

    MatrixXi d (n_a_rows, n_b_cols);

    clock_t begin = clock();

    d = a * b;

    clock_t end = clock();
    double elapsed_secs = double(end - begin) / CLOCKS_PER_SEC;
    std::cout << "Time taken : " << elapsed_secs << std::endl;

}

Result:
Time taken : 29.05

I am using numpy 1.8.1 and eigen 3.2.0-4.
[ "My question has been answered by @Jitse Niesen and @ggael in the comments.\nI need to add a flag to turn on the optimizations when compiling: -O2 -DNDEBUG (O is capital o, not zero).\nAfter including this flag, eigen code runs in 0.6 seconds as opposed to ~29 seconds without it.\n", "Change:\na = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)\nb = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)\n\ninto:\na = np.arange(n_a_rows * n_a_cols).reshape(n_a_rows, n_a_cols)*1.0\nb = np.arange(n_b_rows * n_b_cols).reshape(n_b_rows, n_b_cols)*1.0\n\nThis gives factor 100 boost at least at my laptop:\ntime taken : 11.1231250763\n\nvs:\ntime taken : 0.124922037125\n\nUnless you really want to multiply integers. In Eigen it is also quicker to multiply double precision numbers (amounts to replacing MatrixXi with MatrixXd three times), but there I see just 1.5 factor: Time taken : 0.555005 vs 0.846788.\n", "\nIs there a more efficient way to do this eigen?\n\nWhenever you have a matrix multiplication where the matrix on the left side of the = does not also appear on the right side, you can safely tell the compiler that there is no aliasing taking place. This will safe you one unnecessary temporary variable and assignment operation, which for big matrices can make an important difference in performance. This is done with the .noalias() function as follows.\nd.noalias() = a * b;\n\nThis way a*b is directly evaluated and stored in d. Otherwise, to avoid aliasing problems, the compiler will first store the product into a temporary variable and then assign the this variable to your target matrix d.\nSo, in your code, the line:\nd = a * b;\n\nis actually compiled as follows:\ntemp = a*b;\nd = temp;\n\n" ]
[ 7, 5, 0 ]
[]
[]
[ "c++", "eigen", "numpy", "python" ]
stackoverflow_0024566920_c++_eigen_numpy_python.txt
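The integer-versus-float effect described in the second answer can be reproduced from Python alone, since np.dot on float64 arrays dispatches to an optimized BLAS routine while integer matrix products do not. A minimal timing sketch (exact numbers are machine-dependent):

import time
import numpy as np

a_int = np.arange(4000 * 3000).reshape(4000, 3000)
b_int = np.arange(3000 * 200).reshape(3000, 200)

# multiplying by 1.0 converts to float64, which hits the BLAS path
for name, a, b in [("int", a_int, b_int), ("float64", a_int * 1.0, b_int * 1.0)]:
    t0 = time.perf_counter()
    np.dot(a, b)
    print(name, "time taken :", time.perf_counter() - t0)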
Q: Use Python Selenium to get CLASS_NAME text I'm trying to find a WhatsApp "attachment" icon through the class name and enter the code below
link = f'https://web.whatsapp.com/send?phone={numero}&text={texto}'
navegador.get(link)
sleep(10)
navegador.find_element(By.CLASS_NAME, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER)

I'm automating sending a PDF over WhatsApp and I want to open the attachment menu so I can attach the PDF.
When trying to look for the class_name, my code does not find that icon and instead returns:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1) Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5 element.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16
A: I have no idea if this locator is valid li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1) but it definitely does not look like a class name. It looks like a CSS selector.
So, instead of navegador.find_element(By.CLASS_NAME, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER) try changing it to:
navegador.find_element(By.CSS_SELECTOR, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER)
A: The Class of the object looks different on my end
<div aria-disabled="false" role="button" tabindex="0" class="_26lC3" data-tab="10" title="Attach" aria-label="Attach">
<span data-testid="clip" data-icon="clip" class="">...

You might want to consider that the class names and css selectors might change over time and there are other methods.
To me, the div with role="button" looks like a better candidate to go for, since it's already the button that when clicked will trigger the action of choosing what to attach, so you could try to use the find by title functionality to get to the button called Attach directly.
You can see how to go about that method here and see if it's a better option for your needs.
Use Python Selenium to get CLASS_NAME text
I'm trying to find a WhatsApp "attachment" icon through the class name and enter the code below
link = f'https://web.whatsapp.com/send?phone={numero}&text={texto}'
navegador.get(link)
sleep(10)
navegador.find_element(By.CLASS_NAME, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER)

I'm automating sending a PDF over WhatsApp and I want to open the attachment menu so I can attach the PDF.
When trying to look for the class_name, my code does not find that icon and instead returns:
selenium.common.exceptions.NoSuchElementException: Message: Unable to locate element: .li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1) Stacktrace: RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8 WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:182:5 NoSuchElementError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:394:5 element.find/</<@chrome://remote/content/marionette/element.sys.mjs:280:16
[ "I have no idea if this locator valid li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1) but it definitely not looks like a class name. It looks like a CSS Selector.\nSo, instead of navegador.find_element(By.CLASS_NAME, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER) try changing it to:\nnavegador.find_element(By.CSS_SELECTOR, 'li._2qR8G:nth-child(4) > button:nth-child(1) > span:nth-child(1)').send_keys(Keys.ENTER)\n\n", "The Class of the object looks different on my end\n<div aria-disabled=\"false\" role=\"button\" tabindex=\"0\" class=\"_26lC3\" data-tab=\"10\" title=\"Attach\" aria-label=\"Attach\">\n<span data-testid=\"clip\" data-icon=\"clip\" class=\"\">...\n\nYou might want to consider that the class names and css selectors might change over time and there are other methods.\nTo me , the div with role=\"button\" looks like a better candidate to go for , since its already the button that when clicked will trigger the action of choosing what to attach, so you could try to use the find by title functionality to get to the button called Attach directly.\nYou can see how to go about that method here and see if its a better option for your needs.\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "python", "selenium", "selenium_firefoxdriver", "selenium_webdriver" ]
stackoverflow_0074524492_automation_python_selenium_selenium_firefoxdriver_selenium_webdriver.txt
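Building on the title-based suggestion in the second answer, a hedged sketch with an explicit wait. The title value "Attach" comes from the HTML snippet above and may differ between WhatsApp Web versions and interface languages; navegador is the driver object from the question.

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(navegador, 30)
# match the attach button by its role and title instead of a brittle class name
attach_button = wait.until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, 'div[role="button"][title="Attach"]'))
)
attach_button.click()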
Q: How to pass volume to docker container? I am running the below docker command:
docker run -d -v /Users/gowthamkrishnaaddluri/Documents/dfki_sse/demo:/quantum-demo/ -it demo python3 /quantum-demo/circuit.py --res './'

I am trying to run the above command in python and I have the code as follows:
container = client.create_container(
    image='demo',
    stdin_open=True,
    tty=False,
    command="python3 /quantum-demo/circuit.py --res='./'",
    volumes=['/Users/gowthamkrishnaaddluri/Documents/dfki_sse/demo', '/quantum-demo/'],
    detach=True,
)
client.start(container=container.get('Id'))

I am not able to see the files which get generated when the python file (circuit.py) is run. The files get generated when I just run the docker command, but when I use the container API the file is not seen in the directory.
Am I doing something wrong in using the volumes in the client create_container?
Thanks!
How can I rectify the above problem so that I can map the volume properly? Or please let me know how I can use docker volumes so that a file generated in the docker folder can be copied to the local directory (such as a saved neural network model after training).
Thanks!
A: Hi Hope you are doing well!
So instead of the list, you should use dict and also you should use another method. An example is below:
import docker

client = docker.from_env()
client.containers.run(
    image="python",
    auto_remove=True,
    detach=True,
    tty=False,
    stdin_open=True,
    volumes={
        # path on your machine/host
        "/Users/volodymyr/Development/sandbox/stackoverflow/file.py": {
            "bind": "/mnt/file.py", # path inside the container
            "mode": "rw",
        }
    },
    command=["python", "/mnt/file.py", "-w", "world!"],
)

# file.py
import argparse


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-w", "--word")
    args = parser.parse_args()

    print(f"Hello {args.word!s}.")

I tested this code on my machine and it works. (docker sdk version is 6.0.1)
Docs: https://docker-py.readthedocs.io/en/stable/containers.html
How to pass volume to docker container?
I am running the below docker command:
docker run -d -v /Users/gowthamkrishnaaddluri/Documents/dfki_sse/demo:/quantum-demo/ -it demo python3 /quantum-demo/circuit.py --res './'

I am trying to run the above command in python and I have the code as follows:
container = client.create_container(
    image='demo',
    stdin_open=True,
    tty=False,
    command="python3 /quantum-demo/circuit.py --res='./'",
    volumes=['/Users/gowthamkrishnaaddluri/Documents/dfki_sse/demo', '/quantum-demo/'],
    detach=True,
)
client.start(container=container.get('Id'))

I am not able to see the files which get generated when the python file (circuit.py) is run. The files get generated when I just run the docker command, but when I use the container API the file is not seen in the directory.
Am I doing something wrong in using the volumes in the client create_container?
Thanks!
How can I rectify the above problem so that I can map the volume properly? Or please let me know how I can use docker volumes so that a file generated in the docker folder can be copied to the local directory (such as a saved neural network model after training).
Thanks!
[ "Hi Hope you are doing well!\nSo instead of the list, you should use dict and also you should use another method. An example is below:\nimport docker\n\nclient = docker.from_env()\nclient.containers.run(\n image=\"python\",\n auto_remove=True,\n detach=True,\n tty=False,\n stdin_open=True,\n volumes={\n # path on your machine/host\n \"/Users/volodymyr/Development/sandbox/stackoverflow/file.py\": {\n \"bind\": \"/mnt/file.py\", # path inside the container\n \"mode\": \"rw\",\n }\n },\n command=[\"python\", \"/mnt/file.py\", \"-w\", \"world!\"],\n)\n\n# file.py\nimport argparse\n\n\nif __name__ == \"__main__\":\n parser = argparse.ArgumentParser()\n parser.add_argument(\"-w\", \"--word\")\n args = parser.parse_args()\n\n print(f\"Hello {args.word!s}.\")\n\nI tested this code on my machine and it works. (docker sdk version is 6.0.1)\nDocs: https://docker-py.readthedocs.io/en/stable/containers.html\n" ]
[ 0 ]
[]
[]
[ "docker", "python", "volumes" ]
stackoverflow_0074517669_docker_python_volumes.txt
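If you want to stay with the low-level API from the question instead of containers.run, bind mounts go through host_config rather than the volumes argument alone. A hedged sketch using the question's paths (the client construction is an assumption; yours may use a base_url):

import docker

client = docker.APIClient()  # low-level client, matching create_container usage

host_config = client.create_host_config(
    binds=["/Users/gowthamkrishnaaddluri/Documents/dfki_sse/demo:/quantum-demo:rw"]
)
container = client.create_container(
    image="demo",
    stdin_open=True,
    tty=False,
    command="python3 /quantum-demo/circuit.py --res='./'",
    volumes=["/quantum-demo"],  # container-side mount point only
    host_config=host_config,
    detach=True,
)
client.start(container=container.get("Id"))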
Q: Python - Selenium is complaining about element not being scrolled into view after scrolling to that element I have the following code which is supposed to scroll down the page and then click a button. When I run my script, I can see that the page does scroll until the element is at the very bottom of the page, but then the script fails when it gets time to click on that button and I get this error: selenium.common.exceptions.ElementNotInteractableException: Message: Element could not be scrolled into view I have tried using these methods: driver.execute_script("arguments[0].scrollIntoView();", element) driver.execute_script("arguments[0].scrollIntoView();", driver.find_element_by_xpath(xpath of element)) actions.move_to_element(element) All of these methods have the same result: the page will scroll to the element, but when it is time to click on the element it complains that the element could not be scrolled into view. CODE: hide_partial_rows_button_per_100 = driver.find_element_by_xpath('//button[@id="per_poss_toggle_partial_table"]') #driver.execute_script("arguments[0].scrollIntoView();",driver.find_element_by_xpath('//button[@id="per_poss_toggle_partial_table"]')) #driver.execute_script("arguments[0].scrollIntoView();",hide_partial_rows_button_per_100) actions.move_to_element(hide_partial_rows_button_per_100) actions.perform hide_partial_rows_button_per_100.click() LINK TO PAGE I'M WORKING ON: https://www.basketball-reference.com/players/v/valanjo01.html Someone on this site had a similar question but they were using JavaScript instead of Python and they added a time.sleep(1) between scrolling to the element and clicking on it, this did not work for me. As said above, I've tried both the driver.execute_script("arguments[0].scrollIntoView();",element) and actions.move_to_element(element) methods and both work for scrolling, but when it is time to click on the element it complains that it cannot be scrolled into view. A: You can apply location_once_scrolled_into_view method to perform scrolling here. The following code worked for me: from selenium import webdriver from selenium.webdriver import DesiredCapabilities from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By from selenium.webdriver.support import expected_conditions as EC options = Options() options.add_argument("start-maximized") options.add_argument('--disable-notifications') caps = DesiredCapabilities().CHROME caps["pageLoadStrategy"] = "eager" webdriver_service = Service('C:\webdrivers\chromedriver.exe') driver = webdriver.Chrome(options=options, desired_capabilities=caps, service=webdriver_service) wait = WebDriverWait(driver, 20) url = "https://www.basketball-reference.com/players/v/valanjo01.html" driver.get(url) element = wait.until(EC.presence_of_element_located((By.XPATH, '//button[@id="per_poss_toggle_partial_table"]'))) element.location_once_scrolled_into_view
Python - Selenium is complaining about element not being scrolled into view after scrolling to that element
I have the following code which is supposed to scroll down the page and then click a button. When I run my script, I can see that the page does scroll until the element is at the very bottom of the page, but then the script fails when it gets time to click on that button and I get this error: selenium.common.exceptions.ElementNotInteractableException: Message: Element could not be scrolled into view I have tried using these methods: driver.execute_script("arguments[0].scrollIntoView();", element) driver.execute_script("arguments[0].scrollIntoView();", driver.find_element_by_xpath(xpath of element)) actions.move_to_element(element) All of these methods have the same result: the page will scroll to the element, but when it is time to click on the element it complains that the element could not be scrolled into view. CODE: hide_partial_rows_button_per_100 = driver.find_element_by_xpath('//button[@id="per_poss_toggle_partial_table"]') #driver.execute_script("arguments[0].scrollIntoView();",driver.find_element_by_xpath('//button[@id="per_poss_toggle_partial_table"]')) #driver.execute_script("arguments[0].scrollIntoView();",hide_partial_rows_button_per_100) actions.move_to_element(hide_partial_rows_button_per_100) actions.perform hide_partial_rows_button_per_100.click() LINK TO PAGE I'M WORKING ON: https://www.basketball-reference.com/players/v/valanjo01.html Someone on this site had a similar question but they were using JavaScript instead of Python and they added a time.sleep(1) between scrolling to the element and clicking on it, this did not work for me. As said above, I've tried both the driver.execute_script("arguments[0].scrollIntoView();",element) and actions.move_to_element(element) methods and both work for scrolling, but when it is time to click on the element it complains that it cannot be scrolled into view.
[ "You can apply location_once_scrolled_into_view method to perform scrolling here.\nThe following code worked for me:\nfrom selenium import webdriver\nfrom selenium.webdriver import DesiredCapabilities\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\noptions.add_argument('--disable-notifications')\n\ncaps = DesiredCapabilities().CHROME\ncaps[\"pageLoadStrategy\"] = \"eager\"\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, desired_capabilities=caps, service=webdriver_service)\nwait = WebDriverWait(driver, 20)\n\nurl = \"https://www.basketball-reference.com/players/v/valanjo01.html\"\ndriver.get(url)\nelement = wait.until(EC.presence_of_element_located((By.XPATH, '//button[@id=\"per_poss_toggle_partial_table\"]')))\nelement.location_once_scrolled_into_view\n\n" ]
[ 0 ]
[]
[]
[ "python", "scroll", "selenium", "selenium_webdriver", "webdriverwait" ]
stackoverflow_0074524569_python_scroll_selenium_selenium_webdriver_webdriverwait.txt
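If the native .click() still raises after scrolling, one common fallback — an assumption worth testing rather than a guaranteed fix, since it bypasses Selenium's visibility checks — is clicking through JavaScript; scrolling with block: 'center' also helps when a sticky header overlaps the element:

from selenium.webdriver.common.by import By

button = driver.find_element(By.XPATH, '//button[@id="per_poss_toggle_partial_table"]')
# center the element so fixed headers/footers cannot cover it
driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", button)
# JS click fires the click event directly on the node
driver.execute_script("arguments[0].click();", button)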
Q: Chain df.str.split() in pandas dataframe Edit: 2022NOV21 How do we chain df.col.str.split() since this returns the split columns if expand = True I am trying to split a column after performing .melt(). If I use assign I end up using the original column and the melted column actually does not even exist. df = pd.DataFrame().from_dict({ 'id' : [1,2,3,4], '2022_amt' : [10.1,20.2,30.3, 40.4], '2022_qty' : [10,20,30,40] }) df = ( df .melt( id_vars=['id'], value_vars=['2022_amt', '2022_qty'], var_name='fy', value_name='num' ) # can i chain any pd.Series.str.[METHOD] here # .assign( # year=df.fy.str.split('_', expand=True)[0], # t=df.fy.str.split('_', expand=True)[1] # ) ) # i can add the two columns in this way but can we use chain to expand dataframe df df[['year', 't']] = df.fy.str.split('_', expand=True) df = df.drop(columns = ['fy']) A: Using expand converts it into a DataFrame, which you do not really want here; secondly with chaining, use an anonymous function to refer to the previous dataframe: (df .melt(id_vars='id',var_name='fy',value_name='num') assign(year = lambda df: df.fy.str.split('_').str[0], t = lambda df: df.fy.str.split('_').str[1]) ) id fy num year t 0 1 2022_amt 10.1 2022 amt 1 2 2022_amt 20.2 2022 amt 2 3 2022_amt 30.3 2022 amt 3 4 2022_amt 40.4 2022 amt 4 1 2022_qty 10.0 2022 qty 5 2 2022_qty 20.0 2022 qty 6 3 2022_qty 30.0 2022 qty 7 4 2022_qty 40.0 2022 qty For your use case, there are simpler, more efficient ways to do this: with pd.stack: df = df.set_index('id') df.columns = df.columns.str.split('_', expand = True) df.columns.names = ['year', 't'] df.stack(['year', 't']).reset_index(name='num') id year t num 0 1 2022 amt 10.1 1 1 2022 qty 10.0 2 2 2022 amt 20.2 3 2 2022 qty 20.0 4 3 2022 amt 30.3 5 3 2022 qty 30.0 6 4 2022 amt 40.4 7 4 2022 qty 40.0 with pivot_longer from pyjanitor: # pip install pyjanitor import pandas as pd import janitor as jn df.pivot_longer(index = 'id', names_to = ('year','t'), names_sep = '_') id year t value 0 1 2022 amt 10.1 1 2 2022 amt 20.2 2 3 2022 amt 30.3 3 4 2022 amt 40.4 4 1 2022 qty 10.0 5 2 2022 qty 20.0 6 3 2022 qty 30.0 7 4 2022 qty 40.0 A: Not sure what you are trying to do. But what I am sure of, is that you cannot use [0] (at least not to do what you want to) directly on a series. But you can call .str again so that you can use [0] operators Example df=pd.DataFrame({'s':['abc-def|ghi', 'one-two|three']}) df.s.str.split('-').str[0] #0 abc #1 one #Name: s, dtype: object df.s.str.split('-').str[1].str.split('|').str[0] #0 def #1 two #Name: s, dtype: object df.s.str.split('-').str[1].str.split('|').str[1] #0 ghi #1 three #Name: s, dtype: object Note that half of the .str here are counter intuitive, since we are not really using string functions on the result (which are arrays). But .str also works on arrays, and on anything that have usage of the [..] indexing. As long as you are not call string specific function on its. So it is a trick: .str on a series allows to call string methods on the elements of the series. And some of the string methods, including indexation, happen to have a meaning on arrays too.
Chain df.str.split() in pandas dataframe
Edit: 2022NOV21 How do we chain df.col.str.split() since this returns the split columns if expand = True I am trying to split a column after performing .melt(). If I use assign I end up using the original column and the melted column actually does not even exist. df = pd.DataFrame().from_dict({ 'id' : [1,2,3,4], '2022_amt' : [10.1,20.2,30.3, 40.4], '2022_qty' : [10,20,30,40] }) df = ( df .melt( id_vars=['id'], value_vars=['2022_amt', '2022_qty'], var_name='fy', value_name='num' ) # can i chain any pd.Series.str.[METHOD] here # .assign( # year=df.fy.str.split('_', expand=True)[0], # t=df.fy.str.split('_', expand=True)[1] # ) ) # i can add the two columns in this way but can we use chain to expand dataframe df df[['year', 't']] = df.fy.str.split('_', expand=True) df = df.drop(columns = ['fy'])
[ "Using expand converts it into a DataFrame, which you do not really want here; secondly with chaining, use an anonymous function to refer to the previous dataframe:\n(df\n.melt(id_vars='id',var_name='fy',value_name='num')\nassign(year = lambda df: df.fy.str.split('_').str[0],\n t = lambda df: df.fy.str.split('_').str[1])\n)\n\n id fy num year t\n0 1 2022_amt 10.1 2022 amt\n1 2 2022_amt 20.2 2022 amt\n2 3 2022_amt 30.3 2022 amt\n3 4 2022_amt 40.4 2022 amt\n4 1 2022_qty 10.0 2022 qty\n5 2 2022_qty 20.0 2022 qty\n6 3 2022_qty 30.0 2022 qty\n7 4 2022_qty 40.0 2022 qty\n\nFor your use case, there are simpler, more efficient ways to do this:\n\nwith pd.stack:\n\ndf = df.set_index('id')\ndf.columns = df.columns.str.split('_', expand = True)\ndf.columns.names = ['year', 't']\ndf.stack(['year', 't']).reset_index(name='num')\n\n id year t num\n0 1 2022 amt 10.1\n1 1 2022 qty 10.0\n2 2 2022 amt 20.2\n3 2 2022 qty 20.0\n4 3 2022 amt 30.3\n5 3 2022 qty 30.0\n6 4 2022 amt 40.4\n7 4 2022 qty 40.0\n\n\nwith pivot_longer from pyjanitor:\n\n# pip install pyjanitor\nimport pandas as pd\nimport janitor as jn\ndf.pivot_longer(index = 'id', names_to = ('year','t'), names_sep = '_')\n\n id year t value\n0 1 2022 amt 10.1\n1 2 2022 amt 20.2\n2 3 2022 amt 30.3\n3 4 2022 amt 40.4\n4 1 2022 qty 10.0\n5 2 2022 qty 20.0\n6 3 2022 qty 30.0\n7 4 2022 qty 40.0\n\n", "Not sure what you are trying to do.\nBut what I am sure of, is that you cannot use [0] (at least not to do what you want to) directly on a series. But you can call .str again so that you can use [0] operators\nExample\ndf=pd.DataFrame({'s':['abc-def|ghi', 'one-two|three']})\ndf.s.str.split('-').str[0]\n#0 abc\n#1 one\n#Name: s, dtype: object\n\ndf.s.str.split('-').str[1].str.split('|').str[0]\n#0 def\n#1 two\n#Name: s, dtype: object\n\ndf.s.str.split('-').str[1].str.split('|').str[1]\n#0 ghi\n#1 three\n#Name: s, dtype: object\n\nNote that half of the .str here are counter intuitive, since we are not really using string functions on the result (which are arrays). But .str also works on arrays, and on anything that have usage of the [..] indexing. As long as you are not call string specific function on its. So it is a trick: .str on a series allows to call string methods on the elements of the series. And some of the string methods, including indexation, happen to have a meaning on arrays too.\n" ]
[ 1, 0 ]
[]
[]
[ "chain", "melt", "pandas", "python", "split" ]
stackoverflow_0074496425_chain_melt_pandas_python_split.txt
Q: How to delete an AMI using boto? (cross posted to boto-users) Given an image ID, how can I delete it using boto? A: You use the deregister() API. There are a few ways of getting the image id (i.e. you can list all images and search their properties, etc) Here is a code fragment which will delete one of your existing AMIs (assuming it's in the EU region) connection = boto.ec2.connect_to_region('eu-west-1', \ aws_access_key_id='yourkey', \ aws_secret_access_key='yoursecret', \ proxy=yourProxy, \ proxy_port=yourProxyPort) # This is a way of fetching the image object for an AMI, when you know the AMI id # Since we specify a single image (using the AMI id) we get a list containing a single image # You could add error checking and so forth ... but you get the idea images = connection.get_all_images(image_ids=['ami-cf86xxxx']) images[0].deregister() (edit): and in fact having looked at the online documentation for 2.0, there is another way. Having determined the image ID, you can use the deregister_image(image_id) method of boto.ec2.connection ... which amounts to the same thing I guess. A: With newer boto (Tested with 2.38.0), you can run: ec2_conn = boto.ec2.connect_to_region('xx-xxxx-x') ec2_conn.deregister_image('ami-xxxxxxx') or ec2_conn.deregister_image('ami-xxxxxxx', delete_snapshot=True) The first will delete the AMI; the second will also delete the attached EBS snapshot. A: For Boto2, see katriel's answer. Here, I am assuming you are using Boto3. If you have the AMI (an object of class boto3.resources.factory.ec2.Image), you can call its deregister function. For example, to delete an AMI with a given ID, you can use: import boto3 ec2 = boto3.resource('ec2') ami_id = 'ami-1b932174' ami = list(ec2.images.filter(ImageIds=[ami_id]).all())[0] ami.deregister(DryRun=True) If you have the necessary permissions, you should see a Request would have succeeded, but DryRun flag is set exception. To actually delete the AMI, leave out DryRun and use: ami.deregister() # WARNING: This will really delete the AMI This blog post elaborates on how to delete AMIs and snapshots with Boto3. A: This script deletes the AMI and the snapshots associated with it. Make sure you have the right privileges to run this script. Inputs - please pass the region and AMI id(s) as inputs import boto3 import sys def main(region,images): region = sys.argv[1] images = sys.argv[2].split(',') ec2 = boto3.client('ec2', region_name=region) snapshots = ec2.describe_snapshots(MaxResults=1000,OwnerIds=['self'])['Snapshots'] # loop through list of image IDs for image in images: print("====================\nderegistering {image}\n====================".format(image=image)) amiResponse = ec2.deregister_image(DryRun=True,ImageId=image) for snapshot in snapshots: if snapshot['Description'].find(image) > 0: snap = ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'],DryRun=True) print("Deleting snapshot {snapshot} \n".format(snapshot=snapshot['SnapshotId'])) main(region,images) A: using the EC2.Image resource you can simply call deregister(): Example: for i in ec2res.images.filter(Owners=['self']): print("Name: {}\t Id: {}\tState: {}\n".format(i.name, i.id, i.state)) i.deregister() See this for using different filters: What are valid values documented for ec2.images.filter command? See also: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Image.deregister
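For completeness, a minimal hedged sketch of the boto3 client-based variant as well (the region and AMI id below are placeholders, not values from the question):
import boto3

ec2 = boto3.client('ec2', region_name='eu-west-1')
# DryRun=True only checks permissions; drop it to actually deregister
ec2.deregister_image(ImageId='ami-xxxxxxxx', DryRun=True)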
How to delete an AMI using boto?
(cross posted to boto-users) Given an image ID, how can I delete it using boto?
[ "You use the deregister() API.\nThere are a few ways of getting the image id (i.e. you can list all images and search their properties, etc)\nHere is a code fragment which will delete one of your existing AMIs (assuming it's in the EU region)\nconnection = boto.ec2.connect_to_region('eu-west-1', \\\n aws_access_key_id='yourkey', \\\n aws_secret_access_key='yoursecret', \\\n proxy=yourProxy, \\\n proxy_port=yourProxyPort)\n\n\n# This is a way of fetching the image object for an AMI, when you know the AMI id\n# Since we specify a single image (using the AMI id) we get a list containing a single image\n# You could add error checking and so forth ... but you get the idea\nimages = connection.get_all_images(image_ids=['ami-cf86xxxx'])\nimages[0].deregister()\n\n(edit): and in fact having looked at the online documentation for 2.0, there is another way.\nHaving determined the image ID, you can use the deregister_image(image_id) method of boto.ec2.connection ... which amounts to the same thing I guess.\n", "With newer boto (Tested with 2.38.0), you can run:\nec2_conn = boto.ec2.connect_to_region('xx-xxxx-x')\nec2_conn.deregister_image('ami-xxxxxxx')\n\nor\nec2_conn.deregister_image('ami-xxxxxxx', delete_snapshot=True)\n\nThe first will delete the AMI, the second will also delete the attached EBS snapshot\n", "For Boto2, see katriels answer. Here, I am assuming you are using Boto3.\nIf you have the AMI (an object of class boto3.resources.factory.ec2.Image), you can call its deregister function. For example, to delete an AMI with a given ID, you can use:\nimport boto3\n\nec2 = boto3.resource('ec2')\n\nami_id = 'ami-1b932174'\nami = list(ec2.images.filter(ImageIds=[ami_id]).all())[0]\n\nami.deregister(DryRun=True)\n\nIf you have the necessary permissions, you should see an Request would have succeeded, but DryRun flag is set exception. To get rid of the example, leave out DryRun and use:\nami.deregister() # WARNING: This will really delete the AMI\n\nThis blog post elaborates on how to delete AMIs and snapshots with Boto3.\n", "Script delates the AMI and associated Snapshots with it. Make sure you have right privileges to run this script.\n\nInputs - Please pass region and AMI ids(n) as inputs\n\nimport boto3\nimport sys\n\ndef main(region,images):\n region = sys.argv[1]\n images = sys.argv[2].split(',') \n ec2 = boto3.client('ec2', region_name=region)\n snapshots = ec2.describe_snapshots(MaxResults=1000,OwnerIds=['self'])['Snapshots']\n # loop through list of image IDs\n for image in images:\n print(\"====================\\nderegistering {image}\\n====================\".format(image=image))\n amiResponse = ec2.deregister_image(DryRun=True,ImageId=image)\n for snapshot in snapshots:\n if snapshot['Description'].find(image) > 0:\n snap = ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'],DryRun=True)\n print(\"Deleting snapshot {snapshot} \\n\".format(snapshot=snapshot['SnapshotId']))\n \nmain(region,images)\n\n", "using the EC2.Image resource you can simply call deregister():\nExample:\nfor i in ec2res.images.filter(Owners=['self']):\n print(\"Name: {}\\t Id: {}\\tState: {}\\n\".format(i.name, i.id, i.state))\n i.deregister()\n\nSee this for using different filters:\nWhat are valid values documented for ec2.images.filter command?\nSee also: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Image.deregister\n" ]
[ 7, 7, 6, 0, 0 ]
[]
[]
[ "amazon_ec2", "boto", "python" ]
stackoverflow_0005313726_amazon_ec2_boto_python.txt
Q: Django: ValueError: Cannot create form field because its related model has not been loaded yet I'm having some trouble with a Django project I'm working on. I now have two applications, which require a fair bit of overlap. I've really only started the second project (called workflow) and I'm trying to make my first form for that application. My first application is called po. In the workflow application I have a class called WorkflowObject, which (for now) has only a single attribute--a foreign key to a PurchaseOrder, which is defined in po/models.py. I have imported that class with from po.models import PurchaseOrder. What I'm trying to do is have a page where a user creates a new PurchaseOrder. This works fine (it's the same form that I used in my PurchaseOrder application), and then uses that instance of the class to create a WorkflowObject. The problem now is that I get the error: ValueError: Cannot create form field for 'purchase' yet, because its related model 'PurchaseOrder' has not been loaded yet. I'm really not sure where to start with this. It was working ok (allowing me to create a new PurchaseOrder and forward to a url with its primary key in the url) until I added the view that should allow me to create a new WorkflowObject. I'll put that specific view here: from django.http import HttpResponse, HttpResponseRedirect from django.shortcuts import render, get_object_or_404 from django_tables2 import RequestConfig from po.models import PurchaseOrderForm, PurchaseOrder from workflow.models import POObject, WorkflowForm def new2(request, number): po=PurchaseOrder.objects.get(pk=number) if request.method == 'POST': form = WorkflowForm(request.POST) if form.is_valid(): new_flow = form.save() return HttpResponse('Good') else: return render(request, 'new-workflow.html', {'form': form, 'purchase': po}) else: form = WorkflowForm() return render(request, 'new-workflow.html', {'form': form, 'purchase': po}) The lines of code that seem to be causing the error (or at least, one of the lines that is shown in the traceback) is: class WorkflowForm(ModelForm): purchase = forms.ModelChoiceField(queryset = PurchaseOrder.objects.all()) EDIT: I seem to have made a very noob mistake, and included quotation marks in my definition of WorkflowObject; that is, I had said purchase=models.ForeignKey('PurchaseOrder'), instead of purchase=models.ForeignKey(PurchaseOrder) A: I had a similar problem and was able to resolve this by declaring all my modelForm classes below all my class models in my models.py file. This way the model classes were loaded before the modelForm classes. A: Firstly, you can try to reduce the code to: def new2(request, number): po=PurchaseOrder.objects.get(pk=number) form = WorkflowForm(request.POST or None) if form.is_valid(): new_flow = form.save() return HttpResponse('Good') else: return render(request, 'new-workflow.html', {'form': form, 'purchase': po}) Secondly, I did not understand why in one case you wrote forms.ModelChoiceField(...) and in another case a ModelForm instance (forms.ModelForm)? A: It seems that there is nothing special in your WorkflowForm, so you can define it as follows: class WorkflowForm(ModelForm): class Meta: model = WorkflowObject Field for relation will be created automatically. Documentation: Creating forms from models A: Just ran into this problem. I had a string value in the to= value of a ForeignKey (intentionally). 
The error was thrown because I changed my app's name from messages to messaging (because messages conflicted with django.contrib.messages), but forgot to change the model's ForeignKey string value. For example: # messaging/models.py class Thread(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, unique=True) class Message(models.Model): user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE) thread = models.ForeignKey('messages.Thread', on_delete=models.CASCADE) # <- here! The error: ValueError: Cannot create form field for 'thread' yet, because its related model 'messages.Thread' has not been loaded yet The solution was simply to change: models.ForeignKey('messages.Thread', ... to: models.ForeignKey('messaging.Thread', ...
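As a hedged illustration of the string-reference form mentioned in the last answer: when the related model lives in another app, the lazy reference needs the app label as well. The app and model names below are taken from the question; on_delete is required in modern Django:
from django.db import models

class WorkflowObject(models.Model):
    # '<app_label>.<ModelName>' -- the lazy string form also avoids
    # circular imports between the workflow and po apps
    purchase = models.ForeignKey('po.PurchaseOrder', on_delete=models.CASCADE)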
Django: ValueError: Cannot create form field because its related model has not been loaded yet
I'm having some trouble with a Django project I'm working on. I now have two applications, which require a fair bit of overlap. I've really only started the second project (called workflow) and I'm trying to make my first form for that application. My first application is called po. In the workflow application I have a class called WorkflowObject, which (for now) has only a single attribute--a foreign key to a PurchaseOrder, which is defined in po/models.py. I have imported that class with from po.models import PurchaseOrder. What I'm trying to do is have a page where a user creates a new PurchaseOrder. This works fine (it's the same form that I used in my PurchaseOrder application), and then uses that instance of the class to create a WorkflowObject. The problem now, is that I get the error: ValueError: Cannot create form field for 'purchase' yet, because its related model 'PurchaseOrder' has not been loaded yet. I'm really not sure where to start with this. It was working ok (allowing me to create a new PurchaseOrder and forward to a url with its primary key in the url) until I added the view that should allow me to create a new WorkflowObject. I'll put that specific view here: from django.http import HttpResponse, HttpResponseRedirect from django.shortcuts import render, get_object_or_404 from django_tables2 import RequestConfig from po.models import PurchaseOrderForm, PurchaseOrder from workflow.models import POObject, WorkflowForm def new2(request, number): po=PurcchaseOrder.objects.get(pk=number) if request.method == 'POST': form = WorkflowForm(request.POST) if form.is_valid(): new_flow = form.save() return HttpResponse('Good') else: return render(request, 'new-workflow.html', {'form': form, 'purchase': po}) else: form = WorkflowForm() return render(request, 'new-workflow.html', {'form': form, 'purchase': po}) The lines of code that seem to be causing the error (or at least, one of the lines that is shown in the traceback) is: class WorkflowForm(ModelForm): purchase = forms.ModelChoiceField(queryset = PurchaseOrder.objects.all()) EDIT: I seem to have made a very noob mistake, and included parentheses in my definition of WorkflowObject, that is, I had said purchase=models.ForeignKey('PurchaseOrder'), instead of purchase=models.ForeignKey(PurchaseOrder)
[ "I had a similar problem and was able to resolve this by declaring all my modelForm classes below all my class models in my models.py file. This way the model classes were loaded before the modelForm classes.\n", "Firstly, you can try reduce code to: \n\ndef new2(request, number):\n po=PurcchaseOrder.objects.get(pk=number)\n\n form = WorkflowForm(request.POST or None)\n if form.is_valid():\n new_flow = form.save()\n return HttpResponse('Good')\n else:\n return render(request, 'new-workflow.html', {'form': form, 'purchase': po})\n\n\nSecondly, I not understood why you at other case wrote forms.ModelChoiceField(...) and another case ModelForm instance forms.ModelForm ?\n", "Seems, that there are nothing special in your WorkflowForm, so you can define it as follows:\nclass WorkflowForm(ModelForm):\n class Meta:\n model = WorkflowObject\n\nField for relation will be created automatically.\nDocumentation: Creating forms from models\n", "Just ran into this problem. I had a string value in the to= value of a ForeignKey (intentionally). The error was thrown because I changed my app's name from messages to messaging (because messages conflicted with django.contrib.messages), but forgot to change the model's ForeignKey string value.\nFor example:\n# messaging/models.py\n\nclass Thread(models.Model):\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False, unique=True)\n\nclass Message(models.Model):\n user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)\n thread = models.ForeignKey('messages.Thread', on_delete=models.CASCADE) # <- here!\n\nThe error:\nValueError: Cannot create form field for 'thread' yet, because its related model 'messages.Thread' has not been loaded yet\n\nThe solution was simply to change:\nmodels.ForeignKey('messages.Thread', ...\n\nto:\nmodels.ForeignKey('messaging.Thread', ...\n\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "django", "forms", "python" ]
stackoverflow_0017155379_django_forms_python.txt
Q: IntegrityError at /admin/api/user/6/change/ FOREIGN KEY constraint failed I am developing a website on django. When I am trying to delete a user via the admin panel I get an error. I can change e.g. staff status (while still getting an error, but the changes are getting applied). The code is below: models.py from django.contrib.auth.models import AbstractUser from django.db import models class User(AbstractUser): emailSpam = models.BooleanField(default=True) email = models.EmailField('email', unique=True) first_name = None last_name = None confirmedEmail = models.BooleanField(default=False) REQUIRED_FIELDS = ["emailSpam"] forms.py from django.contrib.auth.forms import UserCreationForm, UserChangeForm from .models import User class CustomUserCreationForm(UserCreationForm): class Meta: model = User fields = ('email',) class CustomUserChangeForm(UserChangeForm): class Meta: model = User fields = ('email',) admin.py from django.contrib import admin from django.contrib.auth.admin import UserAdmin from .forms import CustomUserCreationForm, CustomUserChangeForm from .models import User class Admin(UserAdmin): add_form = CustomUserCreationForm form = CustomUserChangeForm model = User list_display = ('email', 'is_staff', 'is_active',) list_filter = ('email', 'is_staff', 'is_active',) fieldsets = ( (None, {'fields': ('email', 'password')}), ('Permissions', {'fields': ('is_staff', 'is_active')}), ) add_fieldsets = ( (None, { 'classes': ('wide',), 'fields': ('email', 'password1', 'password2', 'is_staff', 'is_active')} ), ) search_fields = ('email',) ordering = ('email',) admin.site.register(User, Admin) A: Possible solutions There are three things that might be causing the issue, at least as far as I can tell. The first you already discounted. I hope it's the second solution, since that will be easier, but I fear it might be the third, which would be hardest to get around. Cause One As my comment stated, perhaps there is another model with a field that has User as a ForeignKey with on_delete = models.CASCADE. When you try to delete the User, all instances of this class that has that ForeignKey will need to be deleted as well (because of on_delete=models.CASCADE), and that's what's causing the issue. You have already stated you have no such models, so let's move on to solution 2. Cause Two I hope it's this one, since it might be easier to fix. I noticed you have email = models.EmailField('email', unique=True) as one of your fields for your User model, but AbstractUser should already have an email field. Try removing that field, makemigrations and migrate, and see if the issue is resolved. Cause Three Did you change from the default User model to the custom user you are now using in mid-project? In other words, did you, in this project, ever run makemigrations before you switched to a custom user model? That can be a big problem. However, there are two solutions for this, not so easy or desirable, but doable. Solution A: If this is a new project with no valuable data yet, you can simply delete your database, delete all migrations in all folders, as well as all __pycache__ folders as described here. Then re do python manage.py makemigrations and python manage.py migrate. Of course this will erase all your tables, so you wouldn't want to do this mid-project. Solution B: There is a way to handle this mid-project without losing data. The steps are laid out in this django ticket 25313, or in these more readable instructions.
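If Cause One turns out to apply, a hedged sketch of the usual fix is to loosen the on_delete behavior on the referencing model (Profile here is a hypothetical example, not a model from the question):
from django.conf import settings
from django.db import models

class Profile(models.Model):
    # SET_NULL lets the User row be deleted without violating the FK;
    # CASCADE would instead try to delete this row along with the user
    user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True,
                             on_delete=models.SET_NULL)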
IntegrityError at /admin/api/user/6/change/ FOREIGN KEY constraint failed
I am developing a website on django. When I am trying to delete a user via admin panel i get an error. I can change e.g. staff status (while still getting an error, but changes are getting apllied) The code is below: models.py from django.contrib.auth.models import AbstractUser from django.db import models class User(AbstractUser): emailSpam = models.BooleanField(default=True) email = models.EmailField('email', unique=True) first_name = None last_name = None confirmedEmail = models.BooleanField(default=False) REQUIRED_FIELDS = ["emailSpam"] forms.py from django.contrib.auth.forms import UserCreationForm, UserChangeForm from .models import User class CustomUserCreationForm(UserCreationForm): class Meta: model = User fields = ('email',) class CustomUserChangeForm(UserChangeForm): class Meta: model = User fields = ('email',) admin.py from django.contrib import admin from django.contrib.auth.admin import UserAdmin from .forms import CustomUserCreationForm, CustomUserChangeForm from .models import User class Admin(UserAdmin): add_form = CustomUserCreationForm form = CustomUserChangeForm model = User list_display = ('email', 'is_staff', 'is_active',) list_filter = ('email', 'is_staff', 'is_active',) fieldsets = ( (None, {'fields': ('email', 'password')}), ('Permissions', {'fields': ('is_staff', 'is_active')}), ) add_fieldsets = ( (None, { 'classes': ('wide',), 'fields': ('email', 'password1', 'password2', 'is_staff', 'is_active')} ), ) search_fields = ('email',) ordering = ('email',) admin.site.register(User, Admin)
[ "Possible solutions\nThere are three things that might be causing the issue, at least as far as I can tell. The first you already discounted. I hope it's the second solution, since that will be easier, but I fear it might be the third, which would be hardest to get around.\nCause One\nAs my comment stated, perhaps there is another model with a field that has User as a ForeignKey with on_delete = models.CASCADE. When you try to delete the User, all instances of this class that has that ForeignKey will need to be deleted as well (because of on_delete=models.CASCADE), and that's what's causing the issue. You have already stated you have no such models, so let's move on to solution 2.\nCause Two\nI hope it's this one, since it might be easier to fix. I noticed you have email = models.EmailField('email', unique=True) as one of your fields for your User model, but AbstractUser should already have an email field. Try removing that field, makemigrations and migrate, and see if the issue is resolved.\nCause Three\nDid you change from the default User model to the custom user you are now using in mid-project? In other words, did you, in this project, ever run makemigrations before you switched to a custom user model? That can be a big problem. However, there are two solutions for this, not so easy or desirable, but doable.\nSolution A: If this is a new project with no valuable data yet, you can simpy delete your database, delete all migrations in all folders, as well as all __pycache__ folders as described here. Then re do python manage.py makemigrations and python manage.py migrate. Of course this will erase all your tables, so you wouldn't want to do this mid-project.\nSolution B: There is a way to handle this mid-project without losing data. The steps are out in this django ticket 25313, or these, more readable instructions.\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074521342_django_python.txt
Q: Is there a way in python to extract only the CORE TEXT (without boxes, footer etc.) from a pdf? I am trying to extract only the core text from a "rich" pdf document, meaning that it has a lot of tables, graphs, boxes, footers etc. in which I am not interested. I tried with some common python packages like PyPDF2, pdfplumber or pdfreader. The problem is that apparently they extract all the text present in the pdf, including those parts listed above in which I am not interested. As an example: from PyPDF2 import PdfReader file = PdfReader(file) page = file.pages[10] text = page.extract_text() This code will get me the whole text from page 11, including footers, boxes, text from a table and the number of the page, while what I would like is only the core text. Unluckily, the only solution I have found up to now is to copy-paste the core text into another file. Is there any method/package which can automatically recognize the main text from the other parts of the pdf and return me only that? Thank you for your help!!! A: Per D.L's comment, please add some reproducible code and, preferably, a pdf to work with. However, I think I can answer at least part of your question. jsvine's pdfplumber is an incredibly robust python pdf processing package. pdfplumber contains a bounding box functionality that lets you extract text from within (.within_bbox(...)) or from outside (.outside_bbox) the 'bounding box' -- or geographical area -- delineated on the Page object. Every character object extracted from the page contains location information such as y1 (distance of the top of the character from the bottom of the page) and x0 (distance of the left side of the character from the left side of the page). If the majority of pages within the .pdf you are trying to extract text from contain footnotes, I would recommend only extracting text above the y1 value. Given that footnotes are typically well below the end of a page, except for academic papers using Chicago Style citations, you should still be able to set a standard .bbox for where you want to extract text (within a set .bbox that does not include footnotes or outside a set .bbox that does not include footnotes). To your question about tables, that poses a trickier problem. Tables are by far the trickiest thing to detect and/or extract from. pdfplumber offers, to my knowledge, the most robust open source table detection/extraction capabilities out there. To extract the area outside a table, I would call the .find_tables(...) function on each Page object to return a .bbox of the table and extract around that. However -- this is not perfect. It is not always able to detect tables. Regarding your 3rd question, how to exclude boxes, are you referring to text boxes? Please provide further clarification! Finally -- to reiterate my first point -- pdfplumber is an incredibly robust package. That being said, extracting text from .pdf files is really tough. Good luck -- please provide more information and I will be happy to help as best I can.
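To make the bounding-box idea concrete, a minimal hedged sketch (the file name and the 60-point footer band are placeholder assumptions you would tune to your layout):
import pdfplumber

with pdfplumber.open("document.pdf") as pdf:
    page = pdf.pages[10]
    # bbox is (x0, top, x1, bottom) in points; here we crop away a footer band
    body = page.within_bbox((0, 0, page.width, page.height - 60))
    text = body.extract_text()
    # tables can be located and then excluded via their bounding boxes
    table_bboxes = [t.bbox for t in page.find_tables()]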
Is there a way in python to extract only the CORE TEXT (without boxes, footer etc.) from a pdf?
I am trying to extract only the core text from a "rich" pdf document, meaning that it has a lot of tables, graphs, boxes, footers etc. in which I am not interested in. I tried with some common python packages like PyPDF2, pdfplumber or pdfreader.The problem is that apparently they extract all the text present in the pdf, including those parts listed above in which I am not interested. As an example: from PyPDF2 import PdfReader file = PdfReader(file) page = file.pages[10] text = page.extract_text() This code will get me the whole text from page 11, including footers, box, text from a table and the number of the page, while what I would like is only the core text. Unluckily the only solution I found up to now is to copy paste in another file the core text. Is there any method/package which can automatically recognize the main text from the other parts of the pdf and return me only that? Thank you for your help!!!
[ "per D.L's comment, please add some reproducible code and, preferably, a pdf to work with.\nHowever, I think I can answer at least part of your question. jsvine's pdfplumber is an incredibly robust python pdf processing package. pdfplumber contains a bounding box functionality that lets you extract text from within (.within_bbox(...)) or from outside (.outside_bbox) the 'bounding box' -- or geographical area -- delineated on the Page object. Every character object extracted from the page contains location information such as y1 - Distance of top of character from bottom of page and Distance of left side of character from left side of page. If the majority of pages within the .pdf you are trying to extract text from contain footnotes, I would recommend only extracting text above the y1 value. Given that footnotes are typically well below the end of a page, except for academic papers using Chicago Style citations, you should still be able to set a standard .bbox for where you want to extract text (within a set .bbox that does not include footnotes or out of a set .bbox that does not include footnotes).\nTo your question about tables, that poses a trickier question. Tables are by far the trickiest thing to detect and/or extract from. pdfplumber offers, to my knowledge, the most robust open source table detection/extraction capabilities out there. To extract the area outside a table, I would call the .find_tables(...) function on each Page object to return a .bbox of the table and extract around that. However -- this is not perfect. It is not always able to detect tables.\nRegarding your 3rd question, how to exclude boxes, are you referring to text boxes? Please provide further clarification!\nFinally -- to reiterate my first point -- pdfplumber is an incredibly robust package. That being said, extracting text from .pdf files is really tough. Good luck -- please provide more information and I will be happy to help as best I can.\n" ]
[ 0 ]
[]
[]
[ "pdfplumber", "python", "text", "text_extraction", "text_mining" ]
stackoverflow_0074344614_pdfplumber_python_text_text_extraction_text_mining.txt
Q: Calculate crc32 with seed using Python In linux/crc32.h there is a crc32 function defined as: crc32(seed, data, length) How can I calculate a crc32 with a seed using Python? A: Go to the docs: import zlib help(zlib.crc32) Help on built-in function crc32 in module zlib: crc32(data, value=0, /) Compute a CRC-32 checksum of data. value Starting value of the checksum. The returned checksum is an integer. Data are the same between the two implementations. Seed in the C implementation is value in the Python implementation. Note that it defaults to 0 in zlib.crc32.
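A hedged usage sketch of the Python side, where value plays the role of the seed and a running checksum can be chained:
import zlib

data = b"hello"
seed = 0xDEADBEEF  # arbitrary starting value, for illustration only
checksum = zlib.crc32(data, seed)

# chaining: feed the data in pieces, passing the running value as the seed
running = zlib.crc32(b"hel")
assert zlib.crc32(b"lo", running) == zlib.crc32(b"hello")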
Calculate crc32 with seed using Python
In linux/crc32.h there is crc32 that define: crc32(seed, data, length) How can I calculate crc32 with seed using Python?
[ "Go to the docs:\nimport zlib\nhelp(zlib.crc32)\n\n\nHelp on built-in function crc32 in module zlib:\n\ncrc32(data, value=0, /)\n Compute a CRC-32 checksum of data.\n\n value\n Starting value of the checksum.\n\n The returned checksum is an integer.\n\nData are the same between the two implementations. Seed in the C implementation is value in the Python implementation. Note that it defaults to 0 in zlib.crc32.\n" ]
[ 2 ]
[]
[]
[ "crc32", "linux_kernel", "python" ]
stackoverflow_0074524815_crc32_linux_kernel_python.txt
Q: High memory allocation when using dask.bag.map I am using dask to extend dask bag items with information from an external, previously computed object arg. Dask seems to allocate memory for arg for each partition at once at the beginning of the computation process. Is there a workaround to prevent Dask from duplicating arg multiple times (and allocating a lot of memory)? Here is a simplified example: from pathlib import Path import numpy as np import pandas as pd from dask import bag in_dir = Path.home() / 'in_dir' out_dir = Path.home() / 'out_dir' in_dir.mkdir(parents=True, exist_ok=True) out_dir.mkdir(parents=True, exist_ok=True) n_files = 100 n_lines_per_file = int(1e6) df = pd.DataFrame({ 'a': np.arange(n_lines_per_file).astype(str) }) for i in range(n_files): df.to_csv(in_dir / f'{i}.txt', index=False, header=False) def mapper(x, arg): y = x # map x to y using arg return y arg = np.zeros(int(1e7)) ( bag .read_text(str(in_dir / '*.txt')) .map((lambda x, y: x), arg) .to_textfiles(str(out_dir / '*.txt')) ) A: One strategy for dealing with this is to scatter your data to workers first: import dask.bag, dask.distributed client = dask.distributed.Client() arg = np.zeros(int(1e7)) arg_f = client.scatter(arg, broadcast=True) ( dask.bag .read_text(str(in_dir / '*.txt')) .map((lambda x, y: x), arg_f) .to_textfiles(str(out_dir / '*.txt')) ) This sends a copy of the data to each worker, but does not create a copy for each task.
High memory allocation when using dask.bag.map
I am using dask for extending dask bag items by information from an external, previously computed object arg. Dask seems to allocate memory for arg for each partition at once in the beginning of the computation process. Is there a workaround to prevent Dask from duplicating the arg multiple times (and allocating a lot of memory)? Here is a simplified example: from pathlib import Path import numpy as np import pandas as pd from dask import bag in_dir = Path.home() / 'in_dir' out_dir = Path.home() / 'out_dir' in_dir.mkdir(parents=True, exist_ok=True) out_dir.mkdir(parents=True, exist_ok=True) n_files = 100 n_lines_per_file = int(1e6) df = pd.DataFrame({ 'a': np.arange(n_lines_per_file).astype(str) }) for i in range(n_files): df.to_csv(in_dir / f'{i}.txt', index=False, header=False) def mapper(x, arg): y = x # map x to y using arg return y arg = np.zeros(int(1e7)) ( bag .read_text(str(in_dir / '*.txt')) .map((lambda x, y: x), arg) .to_textfiles(str(out_dir / '*.txt')) )
[ "One strategy for dealing with this is to scatter your data to workers first:\nimport dask.bag, dask.distributed\n\nclient = dask.distributed.Client()\n\narg = np.zeros(int(1e7))\narg_f = client.scatter(arg, broadcast=True)\n\n(\n dask.bag\n .read_text(str(in_dir / '*.txt'))\n .map((lambda x, y: x), arg_f)\n .to_textfiles(str(out_dir / '*.txt'))\n)\n\nThis sends a copy of the data to each worker, but does not create a copy for each task.\n" ]
[ 0 ]
[]
[]
[ "dask", "memory_management", "python" ]
stackoverflow_0074520150_dask_memory_management_python.txt
Q: Rotating list of lists searching for words, only some of the words are appended I'm trying to get every word in a 15x15 matrix both vertically and horizontally. I get all of the words in the horizontal search. However, after I flip, I only get some of the words. Is there any obvious flaw I just can't see, or is there a less redundant way to do this? This is the code I have currently: words = [] def stuff(b): for line in b: word = "" for tile in line: if tile != " ": word += tile elif tile == " ": if word != "": words.append(word) word = "" stuff(board) print_board(board) t_board = [list(row) for row in zip(*reversed(board))] print_board(t_board) stuff(t_board) print(words) With the output, as is pretty clear, the single letters i,j,k,l get appended into the list. However HBA and the bottom letters aren't appended. ['A', 'B', 'C', 'D', ' ', ' ', 'E', 'F', 'G', ' ', ' ', ' ', ' ', ' ', ' '] ['B', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] ['H', 'I', 'J', 'K', 'L', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'H', 'B', 'A'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'I', ' ', 'B'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'J', ' ', 'C'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'K', ' ', 'D'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'L', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'E'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'F'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'G'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] ['ABCD', 'EFG', 'B', 'HIJKL', 'I', 'J', 'K', 'L'] I have tried multiple numpy rotations, however these turn the "matrix" into an array of tuples, which is not what I'm searching for. A: It seems like you are trying to flatten data and ignore "empties". You can do this in one line: words = [cell for row in board for cell in row if cell.strip()] Below is the "long-form" version of the above. 
Both just iterate over the entire board and store cells that contain more than whitespace. words = [] for row in board: for cell in row: if cell.strip(): words.append(cell)
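For what it's worth, the words missing in the question (HBA and the letters in the last column) are most likely lost because the original loop only flushes a word when it hits a space, so a word that runs to the end of a line is never appended. A hedged sketch of stuff with that one extra flush, assuming the same board and words globals as the question:
def stuff(b):
    for line in b:
        word = ""
        for tile in line:
            if tile != " ":
                word += tile
            elif word:
                words.append(word)
                word = ""
        if word:  # flush a word that reaches the end of the line
            words.append(word)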
Rotating list of lists searching for words, only some of the words are appended
I'm trying to get every word in a 15x15 matrix both vertically and horizontally. I get all of the words in the horizontal search. However after I flip I only get some of the words. Is there any obvious flaw I just can't see or is there a less redundant way to do this? This is code I have currently: words = [] def stuff(b): for line in b: word = "" for tile in line: if tile != " ": word += tile elif tile == " ": if word != "": words.append(word) word = "" stuff(board) print_board(board) t_board = [list(row) for row in zip(*reversed(board))] print_board(t_board) stuff(t_board) print(words) With the output, as is pretty clear, the single letters i,j,k,l gets appended into the list. However HBA and the bottom letters aren't appended. ['A', 'B', 'C', 'D', ' ', ' ', 'E', 'F', 'G', ' ', ' ', ' ', ' ', ' ', ' '] ['B', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] ['H', 'I', 'J', 'K', 'L', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'H', 'B', 'A'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'I', ' ', 'B'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'J', ' ', 'C'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'K', ' ', 'D'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'L', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'E'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'F'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', 'G'] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] [' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' ', ' '] ['ABCD', 'EFG', 'B', 'HIJKL', 'I', 'J', 'K', 'L'] I have tried multiple numpy rotations, however these turn the "matrix" into an array of tuples which is not what I'm searching for
[ "It seems like you are trying to flatten data and ignore \"empties\". You can do this in one line.\nwords = [cell for row in board for cell in row if cell.strip()]\n\nBelow is the \"long-form\" version of above. Both just iterate over the entire board and store cells that contain more than whitespace.\nwords = []\nfor row in board:\n for cell in row:\n if cell.strip():\n words.append(cell) \n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074524764_python.txt
Q: Confusing Button/PhotoImage/tkinter class behavior In my code, the first implementation correctly shows "some_img.png" as a button background but the second does not. class QuizInterface: def __init__(self): self.window = Tk() self.window.title("Quizzler") self.window.config(bg=THEME_COLOR, padx=20, pady=20) # Example 1: Works as expected true_image = PhotoImage(file="./images/true.png") self.true_button = Button(image=true_image) self.true_button.grid(row=2, column=0) # Example 2: Does not work as expected self.true_button = Button(image=PhotoImage(file="./images/true.png")) self.true_button.grid(row=2, column=0) self.window.mainloop() # QuizInterface object is created and called in my main.py with no error. No error is thrown for the second example, which is confusing. Additionally, I've noticed that I cannot define an object and then in the same line call .grid(..) on that object without a "function does not return anything" warning. It seems as though tkinter does not like: defining multiple objects in a single line calling pack()/grid()/place() in the same line as object construction Why? A: Tkinter images get garbage collected if there is not a reference to them. This is why your first example works and the second does not. When you create a widget you get a reference to that widget. You can then call pack/ place/grid on that reference, but these functions themselves do not return anything, so assigning to them gives None. It's like how a = [1,2,3].append(4) gives None because append does not return anything but a = [1,2,3] and a.append(4) does work.
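A hedged sketch of the usual fix, keeping the image alive as an instance attribute (same star-import style as the question):
# storing the PhotoImage on self keeps a reference, so it is not
# garbage collected and the button background renders correctly
self.true_image = PhotoImage(file="./images/true.png")
self.true_button = Button(image=self.true_image)
self.true_button.grid(row=2, column=0)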
Confusing Button/PhotoImage/tkinter class behavior
In my code, the second implementation correctly shows "some_img.png" as a button background but the first does not. class QuizInterface: def __init__(self): self.window = Tk() self.window.title("Quizzler") self.window.config(bg=THEME_COLOR, padx=20, pady=20) # Example 1: Works as expected true_image = PhotoImage(file="./images/true.png") self.true_button = Button(image=true_image) self.true_button.grid(row=2, column=0) # Example 2: Does not work as expected self.true_button = Button(image=PhotoImage(file="./images/true.png")) self.true_button.grid(row=2, column=0) self.window.mainloop() # QuizInterface object is created and called in my main.py with no error. No error is thrown for the first example which is confusing. Additionally, I've noticed that I cannot define an object and then in the same line call .grid(..) on that object without a "function does not return anything" warning. It seems as though tkinter does not like: defining multiple objects in a single line pack()/grid()/place() 'ing, in the same line as object construction Why?
[ "Tkinter images get garbage collected if there is not a reference to them. This is why your first example works and the second does not.\nWhen you create a widget you get a reference to that widget. You can then call pack/ place/grid on that reference, but these functions themselves do not return anything, so assigning to them gives None. It's like how a = [1,2,3].append(4) gives None because append does not return anything but a = [1,2,3] and a.append(4) does work.\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074524854_python_tkinter.txt
Q: Setting global JsonEncoder in Python Basically, I'm fighting with the age-old problem that Python's default json encoder does not support datetime. However all the solutions I can find call json.dumps and manually pass the "proper" encoder on each invocation. And honestly, that can't be the best way to do it. Especially if you want to use a wrapper like jsonify to set up your response object properly, where you can't even specify these parameters. So: long story short: how to override the global default encoder in Python's JSON implementation to a custom one, that actually does support the features I want? EDIT: ok so I figured out how to do this for my specific use case (inside Flask). You can do app.json_encoder = MyCustomJSONEncoder there. However how to do this outside of Flask would still be an interesting question. A: Unfortunately, I could not find a way to set default encoders or decoders for the json module. So the best way is to do what Flask does, that is, wrap the calls to dump or dumps and provide a default in that wrapper. A: I don't remember where I got this solution from but I was searching for it again today and stumbled upon this unanswered question. This works for me: import json from datetime import datetime from uuid import UUID from pydantic import BaseModel def setup_custom_json_encoder(): JSONEncoder_olddefault = json.JSONEncoder.default def JSONEncoder_newdefault(self, obj): if isinstance(obj, UUID): # if the obj is uuid, we simply return the value of uuid return str(obj) if isinstance(obj, BaseModel): return obj.dict() if isinstance(obj, datetime): return datetime.isoformat(obj) return JSONEncoder_olddefault(self, obj) json.JSONEncoder.default = JSONEncoder_newdefault However, this feels "hacky", which is why I was trying to revisit the SO thread I found this in to see if anyone updated it with a cleaner solution.
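A hedged sketch of the wrapper approach suggested in the first answer: a module-level dumps that always supplies a default (the datetime handling is just an example policy):
import json
from datetime import datetime

def _default(obj):
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"Object of type {type(obj).__name__} is not JSON serializable")

def dumps(obj, **kwargs):
    # callers can still override default explicitly
    kwargs.setdefault("default", _default)
    return json.dumps(obj, **kwargs)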
Setting global JsonEncoder in Python
Basically, I'm fighting with the age-old problem that Python's default json encoder does not support datetime. However all the solutions I can find call to json.dumps and manually pass the "proper" encoder on each invocation. And honestly, that can't be the best way to do it. Especially if you want to use a wrapper like jsonify to set up your response object properly, where you can't even specify these parameters. So: long story short: how to override the global default encoder in Python's JSON implementation to a custom one, that actually does support the features I want? EDIT: ok so I figured out how to do this for my specific use case (inside Flask). You can do app.json_encoder = MyCustomJSONEncoder there. However how to do this outside of flask would still be an interesting question.
[ "Unfortunately, I could not find a way to set default encoders or decoders for the json module.\nSo the best way is to do what flask do, that is wrapping the calls to dump or dumps, and provide a default in that wrapper.\n", "I don't remember where I got this solution from but I was searching for it again today and stumbled upon this unanswered question. This works for me:\nimport json\nfrom datetime import datetime\nfrom uuid import UUID\n\nfrom pydantic import BaseModel\n\n\ndef setup_custom_json_encoder():\n JSONEncoder_olddefault = json.JSONEncoder.default\n\n def JSONEncoder_newdefault(self, obj):\n if isinstance(obj, UUID):\n # if the obj is uuid, we simply return the value of uuid\n return str(obj)\n if isinstance(obj, BaseModel):\n return obj.dict()\n if isinstance(obj, datetime):\n return datetime.isoformat(obj)\n return JSONEncoder_olddefault(self, obj)\n\n json.JSONEncoder.default = JSONEncoder_newdefault\n\nHowever, this feels \"hacky\" which is why I was trying to revisit the SO thread I found this in to see if anyone updated it with a cleaner solution\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0060170355_python.txt
Q: Regular expression to extract the last part without domain Good afternoon, I would like to know how to extract the last part of the path from a URL as a string, but without the extension, using a Python-style regex. The url is: 'https://ncd.soft.com/lags/prime-amazon.png' (prime-amazon is my objective) I tried the following with no success, because we need to exclude the extension (.png, .com, etc.) ([^/]+(?!.png))/?$ Bad result prime-amazon.png I expect: prime-amazon A: You can use [^/]+(?=\.png/?$) See the regex demo. Details: [^/]+ - one or more chars other than / (?=\.png/?$) - a positive lookahead that requires .png or .png/ till end of string immediately to the right of the current location.
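A hedged usage sketch in Python (re.search returns None when there is no match, hence the guard):
import re

url = 'https://ncd.soft.com/lags/prime-amazon.png'
m = re.search(r'[^/]+(?=\.png/?$)', url)
if m:
    print(m.group())  # prime-amazon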
Regular expression to extract the last part without domain
Good afternoon, I would like to know how to extract the last part of the path from URL as string, but without domain using Regex from Python style. The url is: 'https://ncd.soft.com/lags/prime-amazon.png' (prime-amazon is my objective) I tried with no exit because we need to exclude the domain (.png or .com, etc) ([^/]+(?!.png))/?$ Bad result prime-amazon.png I expect: prime-amazon
[ "You can use\n[^/]+(?=\\.png/?$)\n\nSee the regex demo.\nDetails:\n\n[^/]+ - one or more chars other than /\n(?=\\.png/?$) - a positive lookahead that requires .png or .png/ till end of string immediately to the right of the current location.\n\n" ]
[ 1 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074524647_python_regex.txt
Q: Not getting a proper fit Getting an error of "IndexError: index 1 is out of bounds for axis 0 with size 1". I am a newbie. Please help. Thanks in advance. def logistic(x, l, k, x1): return l / 1+np.exp(-k*(x-x1)) distance= [1.000*70, 2.000*70, 3.000*70, 4.000*70, 5.000*70, 6.000*70, 7.000*70, 8.000*70, 9.000*70, 11.00*70, 12.000*70, 13.000*70, 14.000*70, 15.000*70, 16.000*70, 17.000*70, 18.000*70, 19.000*70, 21.000*70, 22.000*70, 23.000*70, 24.000*70, 25.000*70, 26.000*70, 27.000*70, 28.000*70, 29.000*70, 30.000*70, 31.000*70, 32.000*70, 33.000*70, 34.000*70, 35.000*70, 36.000*70] amplitude= [26, 31, 29, 26, 27, 24, 24, 28, 24, 24, 28, 31, 24, 26, 55, 30, 73, 101, 168, 219, 448, 833, 1280, 1397, 1181, 1311, 1715, 1975, 2003, 2034, 2178, 2180, 2182] plt.plot(distance,amplitude, 'o') popt, pcov = curve_fit(logistic, distance, amplitude,maxfev=100, bounds=((100, 10, 0), (200000, 200000, 200000)),p0=[2700, 3000, 1200]) print(popt) plt.plot(distance, logistic(distance, *popt), 'r', label='logistic fit') plt.show() A: There are a few points in your code that had to be fixed. distance and amplitude are not the same size both are lists; however, inside your logistic definition you treat them as numpy arrays on which you do vector operations the bounds value for k is wrong; it should be below 1, but you set the minimum value to 10, which makes optimizing difficult in the logistic definition you could add some safety measures for stability from scipy.optimize import curve_fit def logistic(x, l, k, x1): e = -k * (x - x1) e[ e > 30 ] = 30 return l / (1 + np.exp(e)) distance= [1.000*70, 2.000*70, 3.000*70, 4.000*70, 5.000*70, 6.000*70, 7.000*70, 8.000*70, 9.000*70, 11.00*70, 12.000*70, 13.000*70, 14.000*70, 15.000*70, 16.000*70, 17.000*70, 18.000*70, 19.000*70, 21.000*70, 22.000*70, 23.000*70, 24.000*70, 25.000*70, 26.000*70, 27.000*70, 28.000*70, 29.000*70, 30.000*70, 31.000*70, 32.000*70, 33.000*70, 34.000*70, 35.000*70, 36.000*70] amplitude= [26, 31, 29, 26, 27, 24, 24, 28, 24, 24, 28, 31, 24, 26, 55, 30, 73, 101, 168, 219, 448, 833, 1280, 1397, 1181, 1311, 1715, 1975, 2003, 2034, 2178, 2180, 2182] plt.plot(distance[1:], amplitude, 'o') popt, pcov = curve_fit(logistic, np.array(distance[1:]), np.array(amplitude[:]),maxfev=100, bounds=((100, 0, 0), (3000, 1, 200000)),p0=[2700, 0.1, 1200]) print(popt) plt.plot(distance, logistic(np.array(distance), *popt), 'r', label='logistic fit') plt.show() output:
Not getting a proper fit
Getting an error of "IndexError: index 1 is out of bounds for axis 0 with size 1". I am a newbie. Please help. Thanks in advance. def logistic(x, l, k, x1): return l / 1+np.exp(-k*(x-x1)) distance= [1.000*70, 2.000*70, 3.000*70, 4.000*70, 5.000*70, 6.000*70, 7.000*70, 8.000*70, 9.000*70, 11.00*70, 12.000*70, 13.000*70, 14.000*70, 15.000*70, 16.000*70, 17.000*70, 18.000*70, 19.000*70, 21.000*70, 22.000*70, 23.000*70, 24.000*70, 25.000*70, 26.000*70, 27.000*70, 28.000*70, 29.000*70, 30.000*70, 31.000*70, 32.000*70, 33.000*70, 34.000*70, 35.000*70, 36.000*70] amplitude= [26, 31, 29, 26, 27, 24, 24, 28, 24, 24, 28, 31, 24, 26, 55, 30, 73, 101, 168, 219, 448, 833, 1280, 1397, 1181, 1311, 1715, 1975, 2003, 2034, 2178, 2180, 2182] plt.plot(distance,amplitude, 'o') popt, pcov = curve_fit(logistic, distance, amplitude,maxfev=100, bounds=((100, 10, 0), (200000, 200000, 200000)),p0=[2700, 3000, 1200]) print(popt) plt.plot(distance, logistic(distance, *popt), 'r', label='logistic fit') plt.show()
[ "there are few points on your code which had to be fixed.\n\ndistance and amplititle are not same size\nboth are list however inside you logistics definition you treat them as numpy array which you do vector operations\nbounds value for k is wrong, it should be below 1 but you set minimum value to be 10, which make optimizing difficult\nlogistic definition you could add some safety measure for stability\n\nfrom scipy.optimize import curve_fit\nfrom scipy.optimize import curve_fit \ndef logistic(x, l, k, x1):\n e = -k * (x - x1)\n e[ e > 30 ] = 30\n return l / (1 + np.exp(e))\n\ndistance= [1.000*70, 2.000*70, 3.000*70, 4.000*70, 5.000*70, 6.000*70, 7.000*70, 8.000*70,\n 9.000*70, 11.00*70, 12.000*70, 13.000*70, 14.000*70, 15.000*70, 16.000*70,\n 17.000*70, 18.000*70, 19.000*70, 21.000*70, 22.000*70, 23.000*70, 24.000*70, 25.000*70, 26.000*70,\n 27.000*70, 28.000*70, 29.000*70, 30.000*70, 31.000*70, 32.000*70, 33.000*70,\n 34.000*70, 35.000*70, 36.000*70]\namplitude= [26, 31, 29, 26, 27, 24, 24, 28, 24, 24, 28, 31, 24, 26, 55, 30, 73, 101, 168, 219, 448, 833, 1280, 1397, 1181, 1311,\n 1715, 1975, 2003, 2034, 2178, 2180, 2182]\nplt.plot(distance[1:], amplitude, 'o')\npopt, pcov = curve_fit(logistic, np.array(distance[1:]), np.array(amplitude[:]),maxfev=100, bounds=((100, 0, 0), (3000, 1, 200000)),p0=[2700, 0.1, 1200])\n\nprint(popt)\nplt.plot(distance, logistic(distance, *popt), 'r', label='logistic fit')\n\nplt.show()\n\noutput:\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074524730_python.txt
Q: Why are the UUIDs from this AWS network socket backwards? When you use the AWS API to run a command on a remote docker container (ECS), the AWS API gives you back a websocket to read the output of your command from. When using the aws command line utility (which also uses the AWS API), reading the websocket stream is handled by session-manager-plugin. session-manager-plugin is written in GoLang, and I've been trying to rewrite parts of it in Python. I don't speak GoLang, but I fumbled my way through adding some code to session-manager-plugin to output the raw binary data it is sending, and receiving when the binary is being used. Essentially, the output of the command you ran is split up into messages, each one with headers and a payload. One of the headers for each message is a messageID, which is a UUID. Each message needs to be acknowledged by telling the server that you've received the message with that UUID. The issue I'm having is that when analyzing the raw binary data, I can see that a message that was received with UUID b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' is being acknowledged by session-manager-plugin with a packet that says this: b'{"AcknowledgedMessageType":"output_stream_data","AcknowledgedMessageId":"fdba1b38-1a73-4d64-85c3-12500a08af29","AcknowledgedMessageSequenceNumber":4,"IsSequentialMessage":true}' To figure out what UUID b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' is in Python, I do this: import uuid print(str(uuid.UUID(bytes=b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd'))) # 85c31250-0a08-af29-fdba-1b381a734d64 At first glance, the UUID of the message that was received, and the UUID of the message being acknowledged do not match, but if you look closely, you'll see that the UUID of the original message that was received is reversed from the UUID being acknowledged. Sort of. In the 16-byte UUID, the first 8 bytes come after the last 8 bytes. 85c31250-0a08-af29-fdba-1b381a734d64 fdba1b38-1a73-4d64-85c3-12500a08af29 Is there any reason this would be happening? Am I decoding b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' wrong? Note: As you can see from above, the UUID in the Acknowledgement packet is inside of JSON. If I was decoding it wrong, the whole thing would be gibberish. Also note that this is just an analysis of a perfectly working session-manager-plugin communication stream. One way or another, this actually works. I'm just trying to figure out how so I can re-create it.
func getUuid(log log.T, byteArray []byte, offset int) (result uuid.UUID, err error) { byteArrayLength := len(byteArray) if offset > byteArrayLength-1 || offset+16-1 > byteArrayLength-1 || offset < 0 { log.Error("getUuid failed: Offset is invalid.") return nil, errors.New("Offset is outside the byte array.") } leastSignificantLong, err := getLong(log, byteArray, offset) if err != nil { log.Error("getUuid failed: failed to get uuid LSBs Long value.") return nil, errors.New("Failed to get uuid LSBs long value.") } leastSignificantBytes, err := longToBytes(log, leastSignificantLong) if err != nil { log.Error("getUuid failed: failed to get uuid LSBs bytes value.") return nil, errors.New("Failed to get uuid LSBs bytes value.") } mostSignificantLong, err := getLong(log, byteArray, offset+8) if err != nil { log.Error("getUuid failed: failed to get uuid MSBs Long value.") return nil, errors.New("Failed to get uuid MSBs long value.") } mostSignificantBytes, err := longToBytes(log, mostSignificantLong) if err != nil { log.Error("getUuid failed: failed to get uuid MSBs bytes value.") return nil, errors.New("Failed to get uuid MSBs bytes value.") } uuidBytes := append(mostSignificantBytes, leastSignificantBytes...) return uuid.New(uuidBytes), nil } Source Code
Why are the UUIDs from this AWS network socket backwards?
When you use the AWS API to run a command on a remote docker container (ECS), the AWS API gives you back a websocket to read the output of your command from. When using the aws command line utility (which also uses the AWS API), reading the websocket stream is handled by session-manager-plugin. session-manager-plugin is written in GoLang, and I've been trying to rewrite parts of it in Python. I don't speak GoLang, but I fumbled my way through adding some code to session-manager-plugin to output the raw binary data it is sending, and receiving when the binary is being used. Essentially, the output of the command you ran is split up into messages, each one with headers, and a payload. One of the headers for each message is a messageID, which is a UUID. Each message needs to be acknowledged by telling the server that you've received the message with that UUID. The issue I'm having is that when analyzing the raw binary data, I can see that a message that was received with UUID b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' is being acknowledged by session-manager-plugin with a packet that says this: b'{"AcknowledgedMessageType":"output_stream_data","AcknowledgedMessageId":"fdba1b38-1a73-4d64-85c3-12500a08af29","AcknowledgedMessageSequenceNumber":4,"IsSequentialMessage":true}' To figure out what UUID b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' is in Python, I do this: import uuid print(str(uuid.UUID(bytes=b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd'))) # 85c31250-0a08-af29-fdba-1b381a734d64 At first glance, the UUID of the message that was received, and the UUID of the message being acknowledged do not match, but if you look closely, you'll see that the UUID of the original message that was received is reversed from the UUID being acknowledged. Sort of. In the 16 byte UUID, the first 8 bytes come after the last 8 bytes. 85c31250-0a08-af29-fdba-1b381a734d64 fdba1b38-1a73-4d64-85c3-12500a08af29 Is there any reason this would be happening? Am I decoding b'\x85\xc3\x12P\n\x08\xaf)\xfd\xba\x1b8\x1asMd' wrong? Note: As you can see from above, the UUID in the Acknowledgement packet is inside of JSON. If I was decoding it wrong, the whole thing would be gibberish. Also note that this is just an analysis of a perfectly working session-manager-plugin communication stream. One way or another, this actually works. I'm just trying to figure out how so I can re-create it.
[ "Looking at the source code for session-manager-plugin, it would appear it reads the first eight bytes as the least significant bytes, then reads the next eight bytes as the most significant bytes, then appends it in the order MSB, LSB. Seems to me like that would produce the behavior you're seeing.\n// getUuid gets the 128bit uuid from an array of bytes starting from the offset.\nfunc getUuid(log log.T, byteArray []byte, offset int) (result uuid.UUID, err error) {\n byteArrayLength := len(byteArray)\n if offset > byteArrayLength-1 || offset+16-1 > byteArrayLength-1 || offset < 0 {\n log.Error(\"getUuid failed: Offset is invalid.\")\n return nil, errors.New(\"Offset is outside the byte array.\")\n }\n\n leastSignificantLong, err := getLong(log, byteArray, offset)\n if err != nil {\n log.Error(\"getUuid failed: failed to get uuid LSBs Long value.\")\n return nil, errors.New(\"Failed to get uuid LSBs long value.\")\n }\n\n leastSignificantBytes, err := longToBytes(log, leastSignificantLong)\n if err != nil {\n log.Error(\"getUuid failed: failed to get uuid LSBs bytes value.\")\n return nil, errors.New(\"Failed to get uuid LSBs bytes value.\")\n }\n\n mostSignificantLong, err := getLong(log, byteArray, offset+8)\n if err != nil {\n log.Error(\"getUuid failed: failed to get uuid MSBs Long value.\")\n return nil, errors.New(\"Failed to get uuid MSBs long value.\")\n }\n\n mostSignificantBytes, err := longToBytes(log, mostSignificantLong)\n if err != nil {\n log.Error(\"getUuid failed: failed to get uuid MSBs bytes value.\")\n return nil, errors.New(\"Failed to get uuid MSBs bytes value.\")\n }\n\n uuidBytes := append(mostSignificantBytes, leastSignificantBytes...)\n\n return uuid.New(uuidBytes), nil\n}\n\nSource Code\n" ]
[ 2 ]
[]
[]
[ "amazon_web_services", "python" ]
stackoverflow_0074524858_amazon_web_services_python.txt
Q: Creating a new boolean column based on another dataframe in Spark I have a big dataset with many columns: df = my_id attr_1 attr_2 ... attr_n 13900 null USA 384.24 13900 null UK 399.24 13999 3467 USA 314.25 13911 3556 CND 386.77 13922 5785 USA 684.21 I also have a smaller dataframe whose first column is null: df_2 = col_1 col_2 null 13900 null 13999 null 34002 I want to add a new column to df that indicates whether the respective my_id is present in df_2: my_id attr_1 attr_2 ... attr_n check 13900 null USA 384.24 yes 13900 null UK 399.24 yes 13999 3467 USA 314.25 yes 13911 3556 CND 386.77 no 13922 5785 USA 684.21 no I was thinking of left joining df_2 to df, creating a column that is yes when col_2 is populated and no when it isn't, and then dropping col_2, but is there any more elegant way? A: Your reasoning is correct: you can do a left join and then, using the conditional function when, derive the column check based on the left-joined column. A sample could look something like this: from pyspark.sql.functions import col, when, lit # 1. Do a left join df_3 = df.join(df_2, col("my_id") == col("col_2"), how="left") # 2. Derive the value of `check` column df_3.withColumn("check", when(col("col_2").isNotNull(), lit("yes")).otherwise(lit("no")))
Creating a new boolean column based on another dataframe in Spark
I have a big dataset with many columns: df = my_id attr_1 attr_2 ... attr_n 13900 null USA 384.24 13900 null UK 399.24 13999 3467 USA 314.25 13911 3556 CND 386.77 13922 5785 USA 684.21 I also have a smaller dataframe whose first column is null: df_2 = col_1 col_2 null 13900 null 13999 null 34002 I want to add a new column to df that indicates whether the respective my_id is present in df_2: my_id attr_1 attr_2 ... attr_n check 13900 null USA 384.24 yes 13900 null UK 399.24 yes 13999 3467 USA 314.25 yes 13911 3556 CND 386.77 no 13922 5785 USA 684.21 no I was thinking of left joining df_2 to df, creating a column that is yes when col_2 is populated and no when it isn't, and then dropping col_2, but is there any more elegant way?
[ "Your reasoning is correct: you can do a left join and then using conditional function when, derive the column check basing on the left-joined column. A sample could could look something like this:\nfrom pyspark.sql.functions import col, when, lit\n\n# 1. Do a left join\ndf_3 = df.join(df_2, col(\"my_id\") == col(\"col_2\"), how=\"left\")\n\n# 2. Derive the value of `check` column \ndf_3.withColumn(\"check\", when(col(\"col_2\").isNotNull(), lit(\"yes\")).otherwise(lit(\"no\")\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "join", "pyspark", "python" ]
stackoverflow_0074519579_dataframe_join_pyspark_python.txt
Q: Parse simple XML to pandas dataframe I hope you are well. I am looking to convert the following XML URL into a pandas dataframe. You can view the XML here; https://clients2.google.com/complete/search?hl=en&output=toolbar&q=how%20garage%20doors Here is the Python 3 code, which currently returns an empty dataframe. from bs4 import BeautifulSoup import requests import pandas as pd response = requests.get('https://clients2.google.com/complete/search?hl=en&output=toolbar&q=how%20garage%20doors') bs = BeautifulSoup(response.text, ['xml']) print(bs) obs = bs.find_all("CompleteSuggestion") print(obs) df = pd.DataFrame(columns=['suggestion data','Keyword']) for node in obs: df = df.append({'suggestion data': node.get("suggestion data")}, ignore_index=True) df.head() Any suggestions would be welcome. I am open to doing it with other modules if there are any better alternatives. Also the expected output would be a dataframe containing a list of autosuggest search terms related to "garage doors". I could not get Python ElementTree XML conversion to work. A: You need to get the attribute of the suggestion tag, not the text/string inside the tag. Try this df = pd.DataFrame(columns=['suggestion data','Keyword']) for node in obs: for suggestion in node: df = df.append({'suggestion data': suggestion.attrs['data']}, ignore_index=True) df.head() A: I always use ElementTree to parse XML, this should work for you. import xml.etree.ElementTree as ET import pandas as pd tree = ET.parse('YOUR_DATA.xml') root = tree.getroot() df = pd.DataFrame() for child in root: for child2 in child: line = child2.attrib df = df.append(line, ignore_index=True)
Parse simple XML to pandas dataframe
I hope you are well. I am looking to convert the following XML URL into a pandas dataframe. You can view the XML here; https://clients2.google.com/complete/search?hl=en&output=toolbar&q=how%20garage%20doors Here is the Python 3 code, which currently returns an empty dataframe. from bs4 import BeautifulSoup import requests import pandas as pd response = requests.get('https://clients2.google.com/complete/search?hl=en&output=toolbar&q=how%20garage%20doors') bs = BeautifulSoup(response.text, ['xml']) print(bs) obs = bs.find_all("CompleteSuggestion") print(obs) df = pd.DataFrame(columns=['suggestion data','Keyword']) for node in obs: df = df.append({'suggestion data': node.get("suggestion data")}, ignore_index=True) df.head() Any suggestions would be welcome. I am open to doing it with other modules if there are any better alternatives. Also the expected output would be a dataframe containing a list of autosuggest search terms related to "garage doors". I could not get Python ElementTree XML conversion to work.
[ "You need to get the attribute of suggestion tag, not the text/string inside the tag. Try this\ndf = pd.DataFrame(columns=['suggestion data','Keyword'])\n\nfor node in obs:\n for suggestion in node:\n df = df.append({'suggestion data': suggestion.attrs['data']}, ignore_index=True)\ndf.head()\n\n", "I always use ElementTree to parse an xml, this should work for you.\nimport xml.etree.ElementTree as ET\nimport pandas as pd\n\ntree = ET.parse('YOUR_DATA.xml')\nroot = tree.getroot()\n\ndf = pd.DataFrame()\nfor child in root:\n for child2 in child:\n line = child2.attrib\n df = df.append(line, ignore_index=True)\n \n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "xml_parsing" ]
stackoverflow_0074524739_pandas_python_xml_parsing.txt
Q: Remove [] in Python I have a list like this: data = [[[1, 2], [1, 1], [2, 3], [5, 5], [6, 6]]] I would like to get it like this: data = [[1, 2], [1, 1], [2, 3], [5, 5], [6, 6]] How can I do it using Python? A: Reassign data with the 0 index of your array. data = data[0]
Remove [] in Python
I have a list like this: data = [[[1, 2], [1, 1], [2, 3], [5, 5], [6, 6]]] I would like to get it like this: data = [[1, 2], [1, 1], [2, 3], [5, 5], [6, 6]] How can I do it using Python?
[ "Reassign data with the 0 index of your array.\ndata = data [0]\n" ]
[ 3 ]
[]
[]
[ "python" ]
stackoverflow_0074525024_python.txt
Q: Can you use the name of a turtle in the parameters of a variable? import turtle as trtl def position(hold): hold.forward(200) position('trtl') I'm trying to make a program which has multiple turtles use a similar function between all of them, is something like what is shown in the image possible? A: If you want to refer to things by name, store them in a dict; use the name as keys. import turtle turtles = { "one": turtle.Turtle(), "two": turtle.Turtle(), } def position(turtle_name): return turtles[turtle_name].forward(200) position('one') ...but it's unclear why you'd do that at all instead of... import turtle as turtle_mod turtle_one = turtle_mod.Turtle() turtle_two = turtle_mod.Turtle() def position(turtle): return turtle.forward(200) position(turtle_one)
Can you use the name of a turtle in the parameters of a variable?
import turtle as trtl def position(hold): hold.forward(200) position('trtl') I'm trying to make a program which has multiple turtles use a similar function between all of them, is something like what is shown in the image possible?
[ "If you want to refer to things by name, store them in a dict; use the name as keys.\nimport turtle\n\nturtles = {\n \"one\": turtle.Turtle(),\n \"two\": turtle.Turtle(),\n}\n\ndef position(turtle_name):\n return turtles[turtle_name].forward(200)\n\nposition('one')\n\n...but it's unclear why you'd do that at all instead of...\nimport turtle as turtle_mod\n\nturtle_one = turtle_mod.Turtle()\nturtle_two = turtle_mod.Turtle()\n\ndef position(turtle):\n return turtle.forward(200)\n\nposition(turtle_one)\n\n" ]
[ 1 ]
[]
[]
[ "python", "turtle_graphics" ]
stackoverflow_0074524992_python_turtle_graphics.txt
Q: Python pandas why does my code change columns when I import a dataframe from a csv file and then use concat to merge the two dataframes together? I am trying to create a program in which every time I enter data, it stores it in a dataframe and the dataframe is stored in a csv file. Now, this whole process is in a loop. When I keep on entering the data without importing the data from the csv file, it works fine and the two dataframes are joined together perfectly. Now when I call the csv file, the data shifts into a different column. For example, I create a series having 6 columns. It will create a dataframe using that series. Now, if I want to add another row, the whole thing is on a loop so it will create another dataframe from the series and concat the two dataframes together. After every loop, the dataframe is exported into a csv file and the csv file is updated every loop. Suppose I used the loop twice and then imported the csv file: the data starts from the 3rd row, but it creates a 5th column, and all the data from the 1st column of the csv file is shifted over, such that I end up having 8 columns. name=input("Please Enter the Employee's Name:") age=int(input("Please Enter the Employee's Age:")) gender=input("Please Enter the Employee's Gender:") Loc=input("Please Enter the Employee's Location:") Des=input("Please Enter the Employee's Designation:") Sal=int(input("Please Enter the Employee's Salary:")) l1=[name,age,gender,Loc,Des,Sal] df=pd.DataFrame(l1,index=index1) df_Employee=pd.concat([df_Employee,df.T],ignore_index=True) This is the original code which works inside an if statement and the to_csv function is used after exiting from the if...elif...else statement. All of this works inside a loop. I did not face a problem in concatenating the two dataframes and they merge together as I want them to, like enter image description here df_Employee.to_csv("Employee Details.csv") Now, when I import the csv file, it shifts the columns in the csv file to a different column altogether. Below is the code that I used: df=pd.read_csv("Employee Details.csv",header=None) df_Employee=pd.concat([df_Employee,df],ignore_index=True) When I use the above code after I have created the dataframe given in the above image, it gives me this output: enter image description here However, when I terminate the program with the data saved in the csv file and on starting a new run, if the first thing that I do is import the csv file, it imports it as I wish, but when I make the third entry while running the program, it ends up creating a different column for that entry like: enter image description here Why does this happen and how can I fix this?
Python pandas why does my code change columns when I import a dataframe from a csv file and then use concat to merge the two dataframes together?
I am trying to create a program in which every time I enter data, it stores it in a dataframe and the dataframe is stored in a csv file. Now, this whole process is in a loop. When I keep on entering the data without importing the data from the csv file, it works fine and the two dataframes are joined together perfectly. Now when I call the csv file, the data shifts into a different column. For example, I create a series having 6 columns. It will create a dataframe using that series. Now, if I want to add another row, the whole thing is on a loop so it will create another dataframe from the series and concat the two dataframes together. After every loop, the dataframe is exported into a csv file and the csv file is updated every loop. Suppose I used the loop twice and then imported the csv file: the data starts from the 3rd row, but it creates a 5th column, and all the data from the 1st column of the csv file is shifted over, such that I end up having 8 columns. name=input("Please Enter the Employee's Name:") age=int(input("Please Enter the Employee's Age:")) gender=input("Please Enter the Employee's Gender:") Loc=input("Please Enter the Employee's Location:") Des=input("Please Enter the Employee's Designation:") Sal=int(input("Please Enter the Employee's Salary:")) l1=[name,age,gender,Loc,Des,Sal] df=pd.DataFrame(l1,index=index1) df_Employee=pd.concat([df_Employee,df.T],ignore_index=True) This is the original code which works inside an if statement and the to_csv function is used after exiting from the if...elif...else statement. All of this works inside a loop. I did not face a problem in concatenating the two dataframes and they merge together as I want them to, like enter image description here df_Employee.to_csv("Employee Details.csv") Now, when I import the csv file, it shifts the columns in the csv file to a different column altogether. Below is the code that I used: df=pd.read_csv("Employee Details.csv",header=None) df_Employee=pd.concat([df_Employee,df],ignore_index=True) When I use the above code after I have created the dataframe given in the above image, it gives me this output: enter image description here However, when I terminate the program with the data saved in the csv file and on starting a new run, if the first thing that I do is import the csv file, it imports it as I wish, but when I make the third entry while running the program, it ends up creating a different column for that entry like: enter image description here Why does this happen and how can I fix this?
[ "Try to create an empty dataframe with the column names and then append the row you want.\ndf = pd.DataFrame(columns=index1)\ndf = df.append(l1)\n\n" ]
[ 0 ]
[]
[]
[ "concatenation", "csv", "dataframe", "pandas", "python" ]
stackoverflow_0074524959_concatenation_csv_dataframe_pandas_python.txt
Q: Shiny for python - adding an icon to the input_action_button With R Shiny, adding an icon to an actionButton uses the icon() function. actionButton( ... , icon = shiny::icon(icon_name) ) How can this be achieved with shiny.ui.input_action_button? ui.input_action_button( ... icon = ? ) Whatever I try in (?) seems to make it into a label instead of an icon. A: The only example I found used an emoji directly, like this ui.input_action_button("go", "Go!", icon="") Not sure you can use icon like in R Shiny.
Shiny for python - adding an icon to the input_action_button
With R Shiny, adding an icon to an actionButton uses icon() function. actionButton( ... , icon = shiny::icon(icon_name) ) How can this be achieved with shiny.ui.input_action_button? ui.input_action_button( ... icon = ? ) Whatever I try in (?) seems to make it into a label instead of an icon.
[ "Only example I found used emoji directly like this\nui.input_action_button(\"go\", \"Go!\", icon=\"\")\n\nNot sure you can use icon like R shiny.\n" ]
[ 0 ]
[]
[]
[ "py_shiny", "python", "shiny" ]
stackoverflow_0074506566_py_shiny_python_shiny.txt
Q: How do I access the Salesforce API when single sign-on is enabled? I'm attempting to make SOQL queries to the Salesforce API using the Python salesforce_api and simple-salesforce modules. I had been making these requests with a client object: client = Salesforce(username='MY_USERNAME', password='MY_PASSWORD', security_token='MY_SALESFORCE_SECURITY_TOKEN') a = client.query("SELECT something FROM some_object_table WHERE some_condition") However, my company recently restricted Salesforce sign-in through SSO only (you used to be able to login directly to Salesforce without SSO), and the function is throwing either: simple_salesforce.exceptions.SalesforceAuthenticationFailed: INVALID_SSO_GATEWAY_URL: the single sign on gateway url for the org is invalid Or: salesforce_api.exceptions.AuthenticationMissingTokenError: Missing or invalid security-token provided. depending on which module I use. I suspect this is because of the SSO implementation. I've seen the docs about creating a new app through Okta, but I need to authenticate and access the API of an existing app. What is the best way to access this API with Okta IdP enabled? Is there a way to have a get request to Okta return an access token for Salesforce? A: Uh. It's doable but it's an art. I'll try to write it up but you should have a look at "Identity and Access Management" Salesforce certification, study guides etc. Try also asking at salesforce.stackexchange.com, might get better answers and Okta specialists. I don't know if there's pure server-side access to Okta where you'd provide OAuth2 client, secret, username and password and it'd be silently passed to login. If your app is a proper web application that needs human to operate - you can still make it work with SSO. You'd have to read about OAuth2 in general (you saw it on the web, all the "login with Google/Facebook/LinkedIn/Twitter/..." buttons) and then implement something like this or this. Human starts in your app, gets redirected to SF to enter username and password (you don't see password and you don't care whether he encountered normal SF login page or some SSO), on success he/she is redirected back and you receive info that'll let you obtain session id (sometimes called access token). Once you have access token you can make queries etc, it's just a matter of passing it as HTTP Authorization Bearer header (simple-salesforce docs mention session id at top of the examples). Look, I know what I've written doesn't make much sense. Download Data Loader and try to use it. You might have to make it use custom domain on login but there is a way for it to still work, even though you have SSO enforced. Your goal would be to build similar app to how Data Loader does it. This might help a bit: https://stackoverflow.com/a/61820476/313628 If you need a true backend integration without human involved... tricky. That might be a management problem though. They should not enforce SSO on everybody. When Okta's down you're locked out of the org, no way to disable SSO. You should have a backup plan, some service account(s) that don't have SSO enforced. They might have crazy password requirements, maybe login only from office IP address, whatever. It's not a good idea to enforce SSO on everybody. https://help.salesforce.com/articleView?id=sso_tips.htm We recommend that you don’t enable SSO for Salesforce admins. If your Salesforce admins are SSO users and your SSO server has an outage, they have no way to log in to Salesforce.
Make sure that Salesforce admins can log in to Salesforce so that they can disable SSO if problems occur. (If you have a web app and it's embedded as Canvas in SF - there's another clean way to have the session id passed to you. Again - this works only if you have a human rather than backend integration) A: If you check the profiles in SFDC and uncheck the box that requires SSO. "is single sign-on Enabled [] Delegate username and password authentication to a corporate database instead of the salesforce.com user database. "
How do I access the Salesforce API when single sign-on is enabled?
I'm attempting to make SOQL queries to the Salesforce API using the Python salesforce_api and simple-salesforce modules. I had been making these requests with a client object: client = Salesforce(username='MY_USERNAME', password='MY_PASSWORD', security_token='MY_SALESFORCE_SECURITY_TOKEN') a = client.query("SELECT something FROM some_object_table WHERE some_condition") However, my company recently restricted Salesforce sign-in through SSO only (you used to be able to login directly to Salesforce without SSO), and the function is throwing either: simple_salesforce.exceptions.SalesforceAuthenticationFailed: INVALID_SSO_GATEWAY_URL: the single sign on gateway url for the org is invalid Or: salesforce_api.exceptions.AuthenticationMissingTokenError: Missing or invalid security-token provided. depending on which module I use. I suspect this is because of the SSO implementation. I've seen the docs about creating a new app through Okta, but I need to authenticate and access the API of an existing app. What is the best way to access this API with Okta IdP enabled? Is there a way to have a get request to Okta return an access token for Salesforce?
[ "Uh. It's doable but it's an art. I'll try to write it up but you should have a look at \"Identity and Access Management\" Salesforce certification, study guides etc. Try also asking at salesforce.stackexchange.com, might get better answers and Okta specialists.\nI don't know if there's pure server-side access to Okta where you'd provide OAuth2 client, secret, username and password and it'd be silently passed to login.\nIf your app is a proper web application that needs human to operate - you can still make it work with SSO. You'd have to read about OAuth2 in general (you saw it on the web, all the \"login with Google/Facebook/LinkedIn/Twitter/...\" buttons) and then implement something like this or this. Human starts in your app, gets redirected to SF to enter username and password (you don't see password and you don't care whether he encountered normal SF login page or some SSO), on success he/she is redirected back and you receive info that'll let you obtain session id (sometimes called access token). Once you have access token you can make queries etc, it's just a matter of passing it as HTPP Authorization Bearer header (simple-salesforce docs mention session id at top of the examples).\nLook, I know what I've written doesn't make much sense. Download Data Loader and try to use it. You might have to make it use custom domain on login but there is a way for it to still work, even though you have SSO enforced. Your goal would be to build similar app to how Data Loader does it. This might help a bit: https://stackoverflow.com/a/61820476/313628\nIf you need a true backend integration without human involved... tricky. That might be a management problem though. They should not enforce SSO on everybody. When Okta's down you're locked out of the org, no way to disable SSO. You should have a backup plan, some service account(s) that don't have SSO enforced. They might have crazy password requirements, maybe login only from office IP address, whatever. It's not a good idea to enforce SSO on everybody.\nhttps://help.salesforce.com/articleView?id=sso_tips.htm\n\nWe recommend that you don’t enable SSO for Salesforce admins. If your\nSalesforce admins are SSO users and your SSO server has an outage,\nthey have no way to log in to Salesforce. Make sure that Salesforce\nadmins can log in to Salesforce so that they can disable SSO if\nproblems occur.\n\n(If you have a web app and it's embedded as Canvas in SF - there's another clean way to have the session id passed to you. Again - this works only if you have a human rather than backend integration)\n", "If you check the profiles in SFDC and uncheck the box that requires SSO.\n\"is single sign-on Enabled [] Delegate username and password authentication to a corporate database instead of the salesforce.com user database. \"\n" ]
[ 1, 0 ]
[]
[]
[ "okta", "python", "python_3.x", "salesforce" ]
stackoverflow_0062563315_okta_python_python_3.x_salesforce.txt
Q: Integrating PySpark and Python in the same Notebook I work in the team of Analytics in X company. We use Microsoft Azure - Databricks. There we have to use PySpark. Let's say, after different chunks, we had a final data frame. I have to make use of visualisations based on this data frame. I think the library Seaborn from Python should be more useful than any library from PySpark for data visualization. Is there a way in which I can integrate both programming languages in the same Notebook? Thanks for your answer. A: Databricks includes extra Python libraries natively, so the Seaborn you have mentioned will work out of the box with up-to-date runtime releases. Depending on whether you use an ML Databricks runtime or just a regular one, the runtime will include a different set of extra Python libraries. You can find the complete list of all of them in the documentation - I am attaching links to the currently newest runtime versions (11.3 LTS) Databricks Runtime 11.3 LTS Databricks Runtime 11.3 LTS for Machine Learning If you want to see an example Databricks notebook that integrates these libraries, this one should give you a hint on how to start.
Integrating PySpark and Python in the same Notebook
I work in the team of Analytics in X company. We use Microsoft Azure - Databricks. There we have to use PySpark. Let's say, after different chunks, we had a final data frame. I have to make use of visualisations based on this data frame. I think the library Seaborn from Python should be more useful than any library from PySpark for data visualization. Is there a way in which I can integrate both programming languages in the same Notebook? Thanks for your answer.
[ "The Databricks includes extra Python libraries natively, so the Seaborn you have mentioned, will work out of the box, with up-to-date runtime releases. Depending whether you use an ML Databricks runtime or just a regular one, the runtime will include a different set of extra Python lib. You can find the complete list of all of them in the documentation - I am attaching link to currently newest runtime versions (11.3 LTS)\n\nDatabricks Runtime 11.3 LTS\nDatabricks Runtime 11.3 LTS for Machine Learning\n\nIf you want to see an example Databricks notebook that integrates these libraries, this one should give you some hint on how to start.\n" ]
[ 1 ]
[]
[]
[ "analytics", "databricks", "pyspark", "python" ]
stackoverflow_0074489542_analytics_databricks_pyspark_python.txt
Q: Keep some delimiters and others not with pattern with Regex split I have code like this: string splitttt ="This week rained all day long but next day will be a sunny day if the news are correct" string[] splitttt = Regex.Split(StringX, @"\s(week|if|)\s"); I get this output: rained all day long but next day will be a sunny day (in this case delimiters are not included in the pattern) This works fine but in the next case I have a problem: I want a case where some delimiters are included and others not: for example using three delimiters: week|if|next I want to keep delimiters |week| and |if| and I do not want to keep delimiter |next| but still want it to work as a delimiter: for example: "This week rained all day long but next day will be a sunny day if the news are correct" I want a regular expression like this @"\s(week.+|.+if|next)\s" This should be the output: week rained all day long but day will be a sunny day if so in this case: week.+ - splits the text but remains in the beginning of the matched pattern next - splits the text but does not remain in the returned pattern .+if - splits the text but remains in the end of the matched pattern I have been struggling for almost five days to find a solution for it. Tried many regular expression combinations and didn't find a working solution. Exactly what regular expression should I use to achieve this? A: You can include matched text in your output by defining the delimiters with lookaround: import re string = "This week rained all day long but next day will be a sunny day if the news are correct" pattern = r" (?=week)|(?<=if) | next " split = re.split(pattern, string) for word in split: print(word) This way, the word "week" isn't part of the delimiter itself; it simply constrains the pattern to match spaces that are followed by "week".
Keep some delimiters and others not with pattern with Regex split
I have code like this: string splitttt ="This week rained all day long but next day will be a sunny day if the news are correct" string[] splitttt = Regex.Split(StringX, @"\s(week|if|)\s"); I get this output: rained all day long but next day will be a sunny day (in this case delimiters are not included in the pattern) This works fine but in the next case I have a problem: I want a case where some delimiters are included and others not: for example using three delimiters: week|if|next I want to keep delimiters |week| and |if| and I do not want to keep delimiter |next| but still want it to work as a delimiter: for example: "This week rained all day long but next day will be a sunny day if the news are correct" I want a regular expression like this @"\s(week.+|.+if|next)\s" This should be the output: week rained all day long but day will be a sunny day if so in this case: week.+ - splits the text but remains in the beginning of the matched pattern next - splits the text but does not remain in the returned pattern .+if - splits the text but remains in the end of the matched pattern I have been struggling for almost five days to find a solution for it. Tried many regular expression combinations and didn't find a working solution. Exactly what regular expression should I use to achieve this?
[ "You can include matched text in your output by defining the delimiters with lookaround:\nimport re\n\nstring = \"This week rained all day long but next day will be a sunny day if the news are correct\"\n\npattern = r\" (?=week)|(?<=if) | next \"\n\nsplit = re.split(pattern, string)\n\nfor word in split:\n print(word)\n\nThis way, the word \"week\" isn't part of the delimiter itself; it simply constrains the pattern to match spaces that are followed by \"week\".\n" ]
[ 0 ]
[]
[]
[ ".net", "python", "regex" ]
stackoverflow_0074524883_.net_python_regex.txt
Q: Number formatting fraction in Python I'm used to formatting fractions in Google Sheets as '# ##/##', is there any way to do the same in Python or do I have to program it? I have tried: F'Value: {Fraction(a / b).limit_denominator()}' Gives for example: '3/2' I would like: '1 1/2' in this case. A: You can use divmod to separate the integer and fractional parts. From there it's a simple matter of using the format method. >>> f = Fraction(3, 2) >>> '{} {}'.format(*divmod(f, 1)) '1 1/2'
Number formatting fraction in Python
I'm used to formatting fractions in Google Sheets as '# ##/##', is there any way to do the same in Python or do I have to program it? I have tried: F'Value: {Fraction(a / b).limit_denominator()}' Gives for example: '3/2' I would like: '1 1/2' in this case.
[ "You can use divmod to separate the integer and fractional parts. From there it's a simple matter of using the format method.\n>>> f = Fraction(3, 2)\n>>> '{} {}'.format(*divmod(f, 1))\n'1 1/2'\n\n" ]
[ 3 ]
[]
[]
[ "fractions", "number_formatting", "python" ]
stackoverflow_0074525107_fractions_number_formatting_python.txt
Q: Sorting one array by sorting two other arrays together I apologise for the title of this question that I know is very unclear, I tried my best. I have three arrays that need to be sorted, but the tricky rule is the following: the first array needs to increment every time, and when the maximum is obtained, goes back to zero. the second array has to be sorted starting from the minimum to the maximum. The third array is the most complicated one: each position MUST correspond to the doublet of numbers that are in the first two arrays. For example, if before sorting, the letter 'L' in array3 was at the same position as the doublet (0, 1) in the first two arrays before sorting, it should be the same after sorting. Because the explanation may not be very clear, here is an example of the starting point: import numpy as np array1 = np.array([ 0 , 0 , 1 , 1 , 2 ]) array2 = np.array([ 1 , 0 , 1 , 0 , 0 ]) array3 = np.array(['L', 'H', 'O', 'E', 'L']) This is the desired output: array1 = np.array([ 0 , 1 , 2 , 0 , 1 ]) array2 = np.array([ 0 , 0 , 0 , 1 , 1 ]) array3 = np.array(['H', 'E', 'L', 'L', 'O']) This looks like a very simple problem, but at the moment I haven't found a solution to it. A: This is a non-numpy solution, but the desired order can be obtained by doing sorted(zip(array2, array1)) To obtain the list of indices (to reorder the word) you could do from operator import itemgetter indices, sorted_arrays = zip(*sorted(enumerate(zip(array2, array1)), key=itemgetter(1))) indices is then (1, 3, 4, 0, 2) to get the output "".join(array3[x] for x in indices) A: Create a fourth array which is array2 * (array1.max()+1) + array1. Then argsort that as the indices for array3. import numpy as np array1 = np.array([ 0 , 0 , 1 , 1 , 2 ]) array2 = np.array([ 1 , 0 , 1 , 0 , 0 ]) array3 = np.array(['L', 'H', 'O', 'E', 'L']) to_sort = ( array1.max() + 1 ) * array2 + array1 array3[ np.argsort( to_sort ) ] # array(['H', 'E', 'L', 'L', 'O'], dtype='<U1')
Sorting one array by sorting two other arrays together
I apologise for the title of this question that I know is very unclear, I tried my best. I have three arrays that need to be sorted, but the tricky rule is the following: the first array needs to increment every time, and when the maximum is obtained, goes back to zero. the second array has to be sorted starting from the minimum to the maximum. The third array is the most complicated one: each position MUST correspond to the doublet of numbers that are in the first two arrays. For example, if before sorting, the letter 'L' in array3 was at the same position as the doublet (0, 1) in the first two arrays before sorting, it should be the same after sorting. Because the explanation may not be very clear, here is an example of the starting point: import numpy as np array1 = np.array([ 0 , 0 , 1 , 1 , 2 ]) array2 = np.array([ 1 , 0 , 1 , 0 , 0 ]) array3 = np.array(['L', 'H', 'O', 'E', 'L']) This is the desired output: array1 = np.array([ 0 , 1 , 2 , 0 , 1 ]) array2 = np.array([ 0 , 0 , 0 , 1 , 1 ]) array3 = np.array(['H', 'E', 'L', 'L', 'O']) This looks like a very simple problem, but at the moment I haven't found a solution to it.
[ "This is a non-numpy solution. But\nThe desired order can be obtained by doing\nsorted(zip(array2, array1))\n\nTo obtain the list of indices (to reorder the word) you could do\nindices, sorted_arrays = zip(*sorted(enumerate(zip(array2, array1)), key=itemgetter(1)))\n\nindices is then\n(1, 3, 4, 0, 2)\n\nto get the output\n\"\".join(array3[x] for x in indices)\n\n", "Create a fourth array which is array2 * (array1.max()+1) + array1. Then argsort that as the indices for array3.\nimport numpy as np\narray1 = np.array([ 0 , 0 , 1 , 1 , 2 ])\narray2 = np.array([ 1 , 0 , 1 , 0 , 0 ])\narray3 = np.array(['L', 'H', 'O', 'E', 'L'])\n\nto_sort = ( array1.max() + 1 ) * array2 + array1\n\narray3[ np.argsort( to_sort ) ]\n# array(['H', 'E', 'L', 'L', 'O'], dtype='<U1')\n\n" ]
[ 1, 1 ]
[]
[]
[ "arrays", "numpy", "python", "sorting" ]
stackoverflow_0074522277_arrays_numpy_python_sorting.txt
Q: How to redirect 'print' output to a file? I want to redirect the print to a .txt file using Python. I have a for loop, which will print the output for each of my .bam files while I want to redirect all output to one file. So I tried to put: f = open('output.txt','w') sys.stdout = f at the beginning of my script. However I get nothing in the .txt file. My script is: #!/usr/bin/python import os,sys import subprocess import glob from os import path f = open('output.txt','w') sys.stdout = f path= '/home/xxx/nearline/bamfiles' bamfiles = glob.glob(path + '/*.bam') for bamfile in bamfiles: filename = bamfile.split('/')[-1] print 'Filename:', filename samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools","view",bamfile], stdout=subprocess.PIPE,bufsize=1) linelist= samtoolsin.stdout.readlines() print 'Readlines finished!' So what's the problem? Any other way besides this sys.stdout? I need my result to look like: Filename: ERR001268.bam Readlines finished! Mean: 233 SD: 10 Interval is: (213, 252) A: The most obvious way to do this would be to print to a file object: with open('out.txt', 'w') as f: print('Filename:', filename, file=f) # Python 3.x print >> f, 'Filename:', filename # Python 2.x However, redirecting stdout also works for me. It is probably fine for a one-off script such as this: import sys orig_stdout = sys.stdout f = open('out.txt', 'w') sys.stdout = f for i in range(2): print('i = ', i) sys.stdout = orig_stdout f.close() Since Python 3.4 there's a simple context manager available to do this in the standard library: from contextlib import redirect_stdout with open('out.txt', 'w') as f: with redirect_stdout(f): print('data') Redirecting externally from the shell itself is another option, and often preferable: ./script.py > out.txt Other questions: What is the first filename in your script? I don't see it initialized. My first guess is that glob doesn't find any bamfiles, and therefore the for loop doesn't run. Check that the folder exists, and print out bamfiles in your script. Also, use os.path.join and os.path.basename to manipulate paths and filenames. A: You can redirect print with the file argument (in Python 2 there was the >> operator instead). f = open(filename,'w') print('whatever', file=f) # Python 3.x print >>f, 'whatever' # Python 2.x In most cases, you're better off just writing to the file normally. f.write('whatever') or, if you have several items you want to write with spaces between, like print: f.write(' '.join(('whatever', str(var2), 'etc'))) A: Python 2 or Python 3 API reference: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False) The file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. Since printed arguments are converted to text strings, print() cannot be used with binary mode file objects. For these, use file.write(...) instead. Since file object normally contains write() method, all you need to do is to pass a file object into its argument. Write/Overwrite to File with open('file.txt', 'w') as f: print('hello world', file=f) Write/Append to File with open('file.txt', 'a') as f: print('hello world', file=f) A: This works perfectly: import sys sys.stdout=open("test.txt","w") print ("hello") sys.stdout.close() Now the hello will be written to the test.txt file.
Make sure to close the stdout with a close, without it the content will not be saved in the file A: Don't use print, use logging You can change sys.stdout to point to a file, but this is a pretty clunky and inflexible way to handle this problem. Instead of using print, use the logging module. With logging, you can print just like you would to stdout, or you can also write the output to a file. You can even use the different message levels (critical, error, warning, info, debug) to, for example, only print major issues to the console, but still log minor code actions to a file. A simple example Import logging, get the logger, and set the processing level: import logging logger = logging.getLogger() logger.setLevel(logging.DEBUG) # process everything, even if everything isn't printed If you want to print to stdout: ch = logging.StreamHandler() ch.setLevel(logging.INFO) # or any other level logger.addHandler(ch) If you want to also write to a file (if you only want to write to a file skip the last section): fh = logging.FileHandler('myLog.log') fh.setLevel(logging.DEBUG) # or any level you want logger.addHandler(fh) Then, wherever you would use print use one of the logger methods: # print(foo) logger.debug(foo) # print('finishing processing') logger.info('finishing processing') # print('Something may be wrong') logger.warning('Something may be wrong') # print('Something is going really bad') logger.error('Something is going really bad') To learn more about using more advanced logging features, read the excellent logging tutorial in the Python docs. A: The easiest solution isn't through python; it's through the shell. From the first line of your file (#!/usr/bin/python) I'm guessing you're on a UNIX system. Just use print statements like you normally would, and don't open the file at all in your script. When you go to run the file, instead of ./script.py to run the file, use ./script.py > <filename> where you replace <filename> with the name of the file you want the output to go in to. The > token tells (most) shells to set stdout to the file described by the following token. One important thing that needs to be mentioned here is that "script.py" needs to be made executable for ./script.py to run. So before running ./script.py, execute this command chmod a+x script.py (make the script executable for all users) A: If you are using Linux I suggest you use the tee command. The implementation goes like this: python python_file.py | tee any_file_name.txt If you don't want to change anything in the code, I think this might be the best possible solution. You can also implement logger but you need to do some changes in the code. A: You may not like this answer, but I think it's the RIGHT one. Don't change your stdout destination unless it's absolutely necessary (maybe you're using a library that only outputs to stdout??? clearly not the case here). I think as a good habit you should prepare your data ahead of time as a string, then open your file and write the whole thing at once. This is because of input/output operations: the longer you have a file handle open, the more likely an error is to occur with this file (file lock error, i/o error, etc). Just doing it all in one operation leaves no question for when it might have gone wrong.
Here's an example: out_lines = [] for bamfile in bamfiles: filename = bamfile.split('/')[-1] out_lines.append('Filename: %s' % filename) samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools","view",bamfile], stdout=subprocess.PIPE,bufsize=1) linelist= samtoolsin.stdout.readlines() print 'Readlines finished!' out_lines.extend(linelist) out_lines.append('\n') And then when you're all done collecting your "data lines" one line per list item, you can join them with some '\n' characters to make the whole thing outputtable; maybe even wrap your output statement in a with block, for additional safety (will automatically close your output handle even if something goes wrong): out_string = '\n'.join(out_lines) out_filename = 'myfile.txt' with open(out_filename, 'w') as outf: outf.write(out_string) print "YAY MY STDOUT IS UNTAINTED!!!" However if you have lots of data to write, you could write it one piece at a time. I don't think it's relevant to your application but here's the alternative: out_filename = 'myfile.txt' outf = open(out_filename, 'w') for bamfile in bamfiles: filename = bamfile.split('/')[-1] outf.write('Filename: %s' % filename) samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools","view",bamfile], stdout=subprocess.PIPE,bufsize=1) mydata = samtoolsin.stdout.read() outf.write(mydata) outf.close() A: If redirecting stdout works for your problem, Gringo Suave's answer is a good demonstration for how to do it. To make it even easier, I made a version utilizing contextmanagers for a succinct generalized calling syntax using the with statement: from contextlib import contextmanager import sys @contextmanager def redirected_stdout(outstream): orig_stdout = sys.stdout try: sys.stdout = outstream yield finally: sys.stdout = orig_stdout To use it, you just do the following (derived from Suave's example): with open('out.txt', 'w') as outfile: with redirected_stdout(outfile): for i in range(2): print('i =', i) It's useful for selectively redirecting print when a module uses it in a way you don't like. The only disadvantage (and this is the dealbreaker for many situations) is that it doesn't work if one wants multiple threads with different values of stdout, but that requires a better, more generalized method: indirect module access. You can see implementations of that in other answers to this question. A: Here's another method I've used for printing to a file/log... Modify the built-in print function so that it logs to a file in the temp directory with the current time stamp, as well as print to stdout. The only real advantage to doing this within a script is not having to go and modify existing print statements. print('test') test Copy original print function to new variable og_print = print og_print('test2') test2 Overwrite existing print function def print(*msg): '''print and log!''' # import datetime for timestamps import datetime as dt # convert input arguments to strings for concatenation message = [] for m in msg: message.append(str(m)) message = ' '.join(message) # append to the log file with open('/tmp/test.log','a') as log: log.write(f'{dt.datetime.now()} | {message}\n') # print the message using the copy of the original print function to stdout og_print(message) print('test3') test3 display file cat /tmp/test.log 2022-01-25 10:19:11.045062 | test3 remove file rm /tmp/test.log A: I am able to crack this using the following method. It will use this print function instead of builtin print function and save the content to a file. 
from __future__ import print_function import builtins as __builtin__ log = open("log.txt", "a") def print(*args): newLine = "" for item in args: newLine = newLine + str(item) + " " newLine = ( newLine + """ """ ) log.write(newLine) log.flush() __builtin__.print(*args) return A: Changing the value of sys.stdout does change the destination of all calls to print. If you use an alternative way to change the destination of print, you will get the same result. Your bug is somewhere else: it could be in the code you removed for your question (where does filename come from for the call to open?) it could also be that you are not waiting for data to be flushed: if you print on a terminal, data is flushed after every new line, but if you print to a file, it's only flushed when the stdout buffer is full (4096 bytes on most systems). A: In python 3, you can reassign print: #!/usr/bin/python3 def other_fn(): #This will use the print function that's active when the function is called print("Printing from function") file_name = "test.txt" with open(file_name, "w+") as f_out: py_print = print #Need to use this to restore builtin print later, and to not induce recursion print = lambda out_str : py_print(out_str, file=f_out) #If you'd like, for completeness, you can include args+kwargs print = lambda *args, **kwargs : py_print(*args, file=f_out, **kwargs) print("Writing to %s" %(file_name)) other_fn() #Writes to file #Must restore builtin print, or you'll get 'I/O operation on closed file' #If you attempt to print after this block print = py_print print("Printing to stdout") other_fn() #Writes to console/stdout Note that the print from other_fn only switches outputs because print is being reassigned in the global scope. If we assign print within a function, the print in other_fn is normally not affected. We can use the global keyword if we want to affect all print calls: import builtins def other_fn(): #This will use the print function that's active when the function is called print("Printing from function") def main(): global print #Without this, other_fn will use builtins.print file_name = "test.txt" with open(file_name, "w+") as f_out: print = lambda *args, **kwargs : builtins.print(*args, file=f_out, **kwargs) print("Writing to %s" %(file_name)) other_fn() #Writes to file #Must restore builtin print, or you'll get 'I/O operation on closed file' #If you attempt to print after this block print = builtins.print print("Printing to stdout") other_fn() #Writes to console/stdout Personally, I'd prefer sidestepping the requirement to use the print function by baking the output file descriptor into a new function: file_name = "myoutput.txt" with open(file_name, "w+") as outfile: fprint = lambda pstring : print(pstring, file=outfile) print("Writing to stdout") fprint("Writing to %s" % (file_name)) A: Something that I have used in the past to output some dictionaries is the following: # sample dictionary the_dict = {'a': 'no', 'c': 'yes', 'b': 'try again'} # path to output to dict_path = "D:/path.txt" # script to output file with open(dict_path, "w") as f: for idx, data in the_dict.items(): print(idx, data, file=f) The outputted file will look something like below: a no c yes b try again
How to redirect 'print' output to a file?
I want to redirect the print to a .txt file using Python. I have a for loop, which will print the output for each of my .bam files while I want to redirect all output to one file. So I tried to put: f = open('output.txt','w') sys.stdout = f at the beginning of my script. However I get nothing in the .txt file. My script is: #!/usr/bin/python import os,sys import subprocess import glob from os import path f = open('output.txt','w') sys.stdout = f path= '/home/xxx/nearline/bamfiles' bamfiles = glob.glob(path + '/*.bam') for bamfile in bamfiles: filename = bamfile.split('/')[-1] print 'Filename:', filename samtoolsin = subprocess.Popen(["/share/bin/samtools/samtools","view",bamfile], stdout=subprocess.PIPE,bufsize=1) linelist= samtoolsin.stdout.readlines() print 'Readlines finished!' So what's the problem? Any other way besides this sys.stdout? I need my result to look like: Filename: ERR001268.bam Readlines finished! Mean: 233 SD: 10 Interval is: (213, 252)
[ "The most obvious way to do this would be to print to a file object:\nwith open('out.txt', 'w') as f:\n print('Filename:', filename, file=f) #Β Python 3.x\n print >> f, 'Filename:', filename # Python 2.x\n\nHowever, redirecting stdout also works for me. It is probably fine for a one-off script such as this:\nimport sys\n\norig_stdout = sys.stdout\nf = open('out.txt', 'w')\nsys.stdout = f\n\nfor i in range(2):\n print('i = ', i)\n\nsys.stdout = orig_stdout\nf.close()\n\nSince Python 3.4 there's a simple context manager available to do this in the standard library:\nfrom contextlib import redirect_stdout\n\nwith open('out.txt', 'w') as f:\n with redirect_stdout(f):\n print('data')\n\nRedirecting externally from the shell itself is another option, and often preferable:\n./script.py > out.txt\n\nOther questions:\nWhat is the first filename in your script? I don't see it initialized.\nMy first guess is that glob doesn't find any bamfiles, and therefore the for loop doesn't run. Check that the folder exists, and print out bamfiles in your script.\nAlso, use os.path.join and os.path.basename to manipulate paths and filenames.\n", "You can redirect print with the file argument (in Python 2 there was the >> operator instead).\nf = open(filename,'w')\nprint('whatever', file=f) # Python 3.x\nprint >>f, 'whatever' # Python 2.x\n\nIn most cases, you're better off just writing to the file normally.\nf.write('whatever')\n\nor, if you have several items you want to write with spaces between, like print:\nf.write(' '.join(('whatever', str(var2), 'etc')))\n\n", "Python 2 or Python 3 API reference:\n\nprint(*objects, sep=' ', end='\\n', file=sys.stdout, flush=False)\nThe file argument must be an object with a write(string) method; if it is not present or None, sys.stdout will be used. Since printed arguments are converted to text strings, print() cannot be used with binary mode file objects. For these, use file.write(...) instead.\n\nSince file object normally contains write() method, all you need to do is to pass a file object into its argument.\nWrite/Overwrite to File\nwith open('file.txt', 'w') as f:\n print('hello world', file=f)\n\nWrite/Append to File\nwith open('file.txt', 'a') as f:\n print('hello world', file=f)\n\n", "This works perfectly:\nimport sys\nsys.stdout=open(\"test.txt\",\"w\")\nprint (\"hello\")\nsys.stdout.close()\n\nNow the hello will be written to the test.txt file. Make sure to close the stdout with a close, without it the content will not be save in the file\n", "Don't use print, use logging\nYou can change sys.stdout to point to a file, but this is a pretty clunky and inflexible way to handle this problem. Instead of using print, use the logging module.\nWith logging, you can print just like you would to stdout, or you can also write the output to a file. 
You can even use the different message levels (critical, error, warning, info, debug) to, for example, only print major issues to the console, but still log minor code actions to a file.\nA simple example\nImport logging, get the logger, and set the processing level:\nimport logging\nlogger = logging.getLogger()\nlogger.setLevel(logging.DEBUG) # process everything, even if everything isn't printed\n\nIf you want to print to stdout:\nch = logging.StreamHandler()\nch.setLevel(logging.INFO) # or any other level\nlogger.addHandler(ch)\n\nIf you want to also write to a file (if you only want to write to a file skip the last section):\nfh = logging.FileHandler('myLog.log')\nfh.setLevel(logging.DEBUG) # or any level you want\nlogger.addHandler(fh)\n\nThen, wherever you would use print use one of the logger methods:\n# print(foo)\nlogger.debug(foo)\n\n# print('finishing processing')\nlogger.info('finishing processing')\n\n# print('Something may be wrong')\nlogger.warning('Something may be wrong')\n\n# print('Something is going really bad')\nlogger.error('Something is going really bad')\n\nTo learn more about using more advanced logging features, read the excellent logging tutorial in the Python docs.\n", "The easiest solution isn't through python; it's through the shell. From the first line of your file (#!/usr/bin/python) I'm guessing you're on a UNIX system. Just use print statements like you normally would, and don't open the file at all in your script. When you go to run the file, instead of\n./script.py\n\nto run the file, use\n./script.py > <filename>\n\nwhere you replace <filename> with the name of the file you want the output to go into. The > token tells (most) shells to set stdout to the file described by the following token.\nOne important thing that needs to be mentioned here is that \"script.py\" needs to be made executable for ./script.py to run.\nSo before running ./script.py, execute this command\nchmod a+x script.py\n(make the script executable for all users)\n", "If you are using Linux I suggest you use the tee command. The implementation goes like this:\npython python_file.py | tee any_file_name.txt\n\nIf you don't want to change anything in the code, I think this might be the best possible solution. You can also implement a logger, but you need to make some changes in the code.\n", "You may not like this answer, but I think it's the RIGHT one. Don't change your stdout destination unless it's absolutely necessary (maybe you're using a library that only outputs to stdout??? clearly not the case here).\nI think as a good habit you should prepare your data ahead of time as a string, then open your file and write the whole thing at once. This is because with input/output operations, the longer you have a file handle open, the more likely an error is to occur with this file (file lock error, i/o error, etc). 
Just doing it all in one operation leaves no question for when it might have gone wrong.\nHere's an example:\nout_lines = []\nfor bamfile in bamfiles:\n filename = bamfile.split('/')[-1]\n out_lines.append('Filename: %s' % filename)\n samtoolsin = subprocess.Popen([\"/share/bin/samtools/samtools\",\"view\",bamfile],\n stdout=subprocess.PIPE,bufsize=1)\n linelist= samtoolsin.stdout.readlines()\n print 'Readlines finished!'\n out_lines.extend(linelist)\n out_lines.append('\\n')\n\nAnd then when you're all done collecting your \"data lines\" one line per list item, you can join them with some '\\n' characters to make the whole thing outputtable; maybe even wrap your output statement in a with block, for additional safety (will automatically close your output handle even if something goes wrong):\nout_string = '\\n'.join(out_lines)\nout_filename = 'myfile.txt'\nwith open(out_filename, 'w') as outf:\n outf.write(out_string)\nprint \"YAY MY STDOUT IS UNTAINTED!!!\"\n\nHowever if you have lots of data to write, you could write it one piece at a time. I don't think it's relevant to your application but here's the alternative:\nout_filename = 'myfile.txt'\noutf = open(out_filename, 'w')\nfor bamfile in bamfiles:\n filename = bamfile.split('/')[-1]\n outf.write('Filename: %s' % filename)\n samtoolsin = subprocess.Popen([\"/share/bin/samtools/samtools\",\"view\",bamfile],\n stdout=subprocess.PIPE,bufsize=1)\n mydata = samtoolsin.stdout.read()\n outf.write(mydata)\noutf.close()\n\n", "If redirecting stdout works for your problem, Gringo Suave's answer is a good demonstration for how to do it.\nTo make it even easier, I made a version utilizing contextmanagers for a succinct generalized calling syntax using the with statement:\nfrom contextlib import contextmanager\nimport sys\n\n@contextmanager\ndef redirected_stdout(outstream):\n orig_stdout = sys.stdout\n try:\n sys.stdout = outstream\n yield\n finally:\n sys.stdout = orig_stdout\n\nTo use it, you just do the following (derived from Suave's example):\nwith open('out.txt', 'w') as outfile:\n with redirected_stdout(outfile):\n for i in range(2):\n print('i =', i)\n\nIt's useful for selectively redirecting print when a module uses it in a way you don't like. The only disadvantage (and this is the dealbreaker for many situations) is that it doesn't work if one wants multiple threads with different values of stdout, but that requires a better, more generalized method: indirect module access. You can see implementations of that in other answers to this question.\n", "Here's another method I've used for printing to a file/log... Modify the built-in print function so that it logs to a file in the temp directory with the current time stamp, as well as print to stdout. 
The only real advantage to doing this within a script is not having to go and modify existing print statements.\nprint('test')\n\ntest\n\nCopy original print function to new variable\nog_print = print\nog_print('test2')\n\ntest2\n\nOverwrite existing print function\ndef print(*msg):\n '''print and log!'''\n # import datetime for timestamps\n import datetime as dt\n # convert input arguments to strings for concatenation\n message = []\n for m in msg:\n message.append(str(m))\n message = ' '.join(message)\n # append to the log file\n with open('/tmp/test.log','a') as log:\n log.write(f'{dt.datetime.now()} | {message}\\n')\n # print the message using the copy of the original print function to stdout\n og_print(message)\n\nprint('test3')\n\ntest3\n\ndisplay file\ncat /tmp/test.log\n\n2022-01-25 10:19:11.045062 | test3\n\nremove file\nrm /tmp/test.log\n\n", "I am able to crack this using the following method. It will use this print function instead of builtin print function and save the content to a file.\nfrom __future__ import print_function\nimport builtins as __builtin__\n\nlog = open(\"log.txt\", \"a\")\n\ndef print(*args):\n newLine = \"\"\n for item in args:\n newLine = newLine + str(item) + \" \"\n newLine = (\n newLine\n + \"\"\"\n\"\"\"\n )\n log.write(newLine)\n log.flush()\n __builtin__.print(*args)\n return\n\n", "Changing the value of sys.stdout does change the destination of all calls to print. If you use an alternative way to change the destination of print, you will get the same result.\nYour bug is somewhere else:\n\nit could be in the code you removed for your question (where does filename come from for the call to open?)\nit could also be that you are not waiting for data to be flushed: if you print on a terminal, data is flushed after every new line, but if you print to a file, it's only flushed when the stdout buffer is full (4096 bytes on most systems).\n\n", "In python 3, you can reassign print:\n#!/usr/bin/python3\n\ndef other_fn():\n #This will use the print function that's active when the function is called\n print(\"Printing from function\")\n\nfile_name = \"test.txt\"\nwith open(file_name, \"w+\") as f_out:\n py_print = print #Need to use this to restore builtin print later, and to not induce recursion\n \n print = lambda out_str : py_print(out_str, file=f_out)\n \n #If you'd like, for completeness, you can include args+kwargs\n print = lambda *args, **kwargs : py_print(*args, file=f_out, **kwargs)\n \n print(\"Writing to %s\" %(file_name))\n\n other_fn() #Writes to file\n\n #Must restore builtin print, or you'll get 'I/O operation on closed file'\n #If you attempt to print after this block\n print = py_print\n\nprint(\"Printing to stdout\")\nother_fn() #Writes to console/stdout\n\nNote that the print from other_fn only switches outputs because print is being reassigned in the global scope. If we assign print within a function, the print in other_fn is normally not affected. 
We can use the global keyword if we want to affect all print calls:\nimport builtins\n\ndef other_fn():\n #This will use the print function that's active when the function is called\n print(\"Printing from function\")\n\ndef main():\n global print #Without this, other_fn will use builtins.print\n file_name = \"test.txt\"\n with open(file_name, \"w+\") as f_out:\n\n print = lambda *args, **kwargs : builtins.print(*args, file=f_out, **kwargs)\n\n print(\"Writing to %s\" %(file_name))\n\n other_fn() #Writes to file\n\n #Must restore builtin print, or you'll get 'I/O operation on closed file'\n #If you attempt to print after this block\n print = builtins.print\n\n print(\"Printing to stdout\")\n other_fn() #Writes to console/stdout\n\nPersonally, I'd prefer sidestepping the requirement to use the print function by baking the output file descriptor into a new function:\nfile_name = \"myoutput.txt\"\nwith open(file_name, \"w+\") as outfile:\n fprint = lambda pstring : print(pstring, file=outfile)\n print(\"Writing to stdout\")\n fprint(\"Writing to %s\" % (file_name))\n\n", "Something that I have used in the past to output some dictionaries is the following:\n# sample dictionary\nthe_dict = {'a': 'no', 'c': 'yes', 'b': 'try again'}\n\n# path to output to\ndict_path = \"D:/path.txt\"\n\n# script to output file\nwith open(dict_path, \"w\") as f:\n for idx, data in the_dict.items():\n print(idx, data, file=f)\n\nThe outputted file will look something like below:\na no\nc yes\nb try again\n\n" ]
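For this question's loop specifically, a compact sketch on Python 3.4+ using the standard-library context manager mentioned in the first answer (the samtools path and the .bam directory are taken from the question and assumed to be correct):

from contextlib import redirect_stdout
import glob
import subprocess

with open('output.txt', 'w') as f, redirect_stdout(f):
    for bamfile in glob.glob('/home/xxx/nearline/bamfiles/*.bam'):
        # everything print()ed inside this block goes to output.txt
        print('Filename:', bamfile.split('/')[-1])
        samtoolsin = subprocess.Popen(['/share/bin/samtools/samtools', 'view', bamfile],
                                      stdout=subprocess.PIPE)
        linelist = samtoolsin.stdout.readlines()
        print('Readlines finished!')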
[ 403, 97, 58, 41, 39, 15, 12, 5, 4, 3, 2, 0, 0, 0 ]
[ "Something to extend print function for loops\nx = 0\nwhile x <=5:\n x = x + 1\n with open('outputEis.txt', 'a') as f:\n print(x, file=f)\n f.close()\n\n" ]
[ -2 ]
[ "file_writing", "io", "python" ]
stackoverflow_0007152762_file_writing_io_python.txt
Q: Python multiprocessing.Queue apparently losing data I am trying to make use of multiprocessing to speed up a program. For this I at some point need to parallelize a task between as many processes as possible, let's say n. Because I don't want to create any more processes than I absolutely have to, I create n-1 new ones, start them, then run the last of the work on the current process and finally join everything together. All of these communicate through a Queue. Each process is passed its 'share of work' by argument, so each of them only needs to put the results on the Queue when they're done (each of these results can be around 6600 5-letter words long). def play(chosen_word): l=[chosen_word, chosen_word] return l def partial_test(id, words, queue): print(f'Process {id} started and allocated {len(words)} words.') guesses=[] for word in words: guesses.append(play(word)) print(f"Process {id} has finished ALL WORDS.") #debugging only queue.put((id, guesses)) print(f'Process {id} added results to queue') queue.cancel_join_thread() print(f'Process {id} closed the queue and exited. Queue has aproximately {queue.qsize()} elements') def full_test(): #do stuff #create Queue for results queue=Queue() #initialize auxiliary processes processes=[Process(target=partial_test, args=(x, word_list[x*words_per_process:(x+1)*words_per_process], queue)) for x in range(process_count-1)] #start processes for process in processes: process.start() #run last process on the current thread partial_test(process_count-1, word_list[(process_count-1)*words_per_process:], queue) #join processes i=0 for process in processes: process.join() print(f'Joined process {i} with main thread.') i+=1 print("All processes finished!") #get results (they need to be in order) results=[[] for _ in range(process_count)] i=0 while not queue.empty(): res=queue.get() results[res[0]]=res[1].copy() i+=1 print(f"Got {i} results!") #do stuff with results The problem arises when I try to read the data on the queue. Every process reports that it puts data on the queue, so before the last one joins it has n elements on it. However, when I try to get them and put them in the results list, I only ever pull a single element which contains the data recorded by the n-th process (the one I had running on the main thread). I initially didn't use queue.cancel_join_thread(), but found that to prevent the processes from joining, even after finishing execution, something about them waiting for a buffer to actually write to the queue, which, in the case of great amounts of data, would not do so until the queue.get() method was called. But since I only get the data once all the processes were finished, that would never be called and the program would get stuck. I suppose it could have something to do with this (though I don't see why it wouldn't affect the n-th process), but I found no way to 'force-flush' the data from the buffer to the queue. I am also certain that every other function this part of the code might depend on returns the correct data, as I have tested everything on the same data in a single-process version. Edit: The play function is just a stand-in, but is for all intents and purposes of this post equivalent to the original one, as using this gives the exact same problem. Posting the original one along with all of its dependencies would have meant posting most of my code, which would have made it hard to focus on the problem. 
A: So, here's your problem: I initially didn't use queue.cancel_join_thread(), but found that to prevent the processes from joining, even after finishing execution, something about them waiting for a buffer to actually write to the queue, which, in the case of great amounts of data, would not do so until the queue.get() method was called. But since I only get the data once all the processes were finished, that would never be called and the program would get stuck. I suppose it could have something to do with this (though I don't see why it wouldn't affect the n-th process), but I found no way to 'force-flush' the data from the buffer to the queue. With multiprocessing.Queue, the reader(s) need to read in parallel with the writer(s) writing. The queue is implemented on top of bounded-size OS-level pipes, and if nothing is reading, writers will quickly fill the pipe's buffer and become unable to write. The errors you were getting were due to that. You didn't find a "force-flush" option because there's nowhere to flush to - the pipe is full, and no one is reading to clear it. cancel_join_thread doesn't solve the problem. By calling cancel_join_thread, you're telling Python, "no, I'm actually totally fine with you throwing away my data", so the worker processes happily exit without finishing writing to the pipe, and throw away your data. The documentation explicitly warns that cancel_join_thread should only be used if you don't care about lost data. You could try using a managed queue with manager = multiprocessing.Manager() and queue = manager.Queue(), since IIRC, managed queues don't have the same limitation, but the way they avoid this limitation involves creating an extra server process to manage the queue, and the whole point of designing your program the way you did was to avoid creating extra processes. Plus, I think manager-based queues have extra inter-process communication overhead. I would recommend just using multiprocessing.Queue the way it was designed to be used - read from it in parallel with the writes. Instead of using the master process as an extra worker, have it start reading data as soon as it's done starting workers, and only join the workers once it's read all data.
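A minimal sketch of that last recommendation, reusing the question's own names (word_list, words_per_process, partial_test and process_count are assumed to be defined as in the question; the remainder handling for the last slice is elided for brevity):

# start all workers; the parent no longer doubles as a worker
processes = [Process(target=partial_test,
                     args=(x, word_list[x * words_per_process:(x + 1) * words_per_process], queue))
             for x in range(process_count)]
for p in processes:
    p.start()

# drain the queue while the workers are still writing
results = [None] * process_count
for _ in range(process_count):
    pid, guesses = queue.get()  # blocks until some worker finishes
    results[pid] = guesses

# only join once every result has been read
for p in processes:
    p.join()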
Python multiprocessing.Queue apparently losing data
I am trying to make use of multiprocessing to speed up a program. For this I at some point need to parallelize a task between as many processes as possible, let's say n. Because I don't want to create any more processes than I absolutely have to, I create n-1 new ones, start them, then run the last of the work on the current process and finally join everything together. All of these communicate through a Queue. Each process is passed its 'share of work' by argument, so each of them only needs to put the results on the Queue when they're done (each of these results can be around 6600 5-letter words long). def play(chosen_word): l=[chosen_word, chosen_word] return l def partial_test(id, words, queue): print(f'Process {id} started and allocated {len(words)} words.') guesses=[] for word in words: guesses.append(play(word)) print(f"Process {id} has finished ALL WORDS.") #debugging only queue.put((id, guesses)) print(f'Process {id} added results to queue') queue.cancel_join_thread() print(f'Process {id} closed the queue and exited. Queue has aproximately {queue.qsize()} elements') def full_test(): #do stuff #create Queue for results queue=Queue() #initialize auxiliary processes processes=[Process(target=partial_test, args=(x, word_list[x*words_per_process:(x+1)*words_per_process], queue)) for x in range(process_count-1)] #start processes for process in processes: process.start() #run last process on the current thread partial_test(process_count-1, word_list[(process_count-1)*words_per_process:], queue) #join processes i=0 for process in processes: process.join() print(f'Joined process {i} with main thread.') i+=1 print("All processes finished!") #get results (they need to be in order) results=[[] for _ in range(process_count)] i=0 while not queue.empty(): res=queue.get() results[res[0]]=res[1].copy() i+=1 print(f"Got {i} results!") #do stuff with results The problem arises when I try to read the data on the queue. Every process reports that it puts data on the queue, so before the last one joins it has n elements on it. However, when I try to get them and put them in the results list, I only ever pull a single element which contains the data recorded by the n-th process (the one I had running on the main thread). I initially didn't use queue.cancel_join_thread(), but found that to prevent the processes from joining, even after finishing execution, something about them waiting for a buffer to actually write to the queue, which, in the case of great amounts of data, would not do so until the queue.get() method was called. But since I only get the data once all the processes were finished, that would never be called and the program would get stuck. I suppose it could have something to do with this (though I don't see why it wouldn't affect the n-th process), but I found no way to 'force-flush' the data from the buffer to the queue. I am also certain that every other function this part of the code might depend on returns the correct data, as I have tested everything on the same data in a single-process version. Edit: The play function is just a stand-in, but is for all intents and purposes of this post equivalent to the original one, as using this gives the exact same problem. Posting the original one along with all of its dependencies would have meant posting most of my code, which would have made it hard to focus on the problem.
[ "So, here's your problem:\n\nI initially didn't use queue.cancel_join_thread(), but found that to prevent the processes from joining, even after finishing execution, something about them waiting for a buffer to actually write to the queue, which, in the case of great amounts of data, would not do so until the queue.get() method was called. But since I only get the data once all the processes were finished, that would never be called and the program would get stuck. I suppose it could have something to do with this (though I don't see why it wouldn't affect the n-th process), but I found no way to 'force-flush' the data from the buffer to the queue.\n\nWith multiprocessing.Queue, the reader(s) need to read in parallel with the writer(s) writing. The queue is implemented on top of bounded-size OS-level pipes, and if nothing is reading, writers will quickly fill the pipe's buffer and become unable to write. The errors you were getting were due to that. You didn't find a \"force-flush\" option because there's nowhere to flush to - the pipe is full, and no one is reading to clear it.\ncancel_join_thread doesn't solve the problem. By calling cancel_join_thread, you're telling Python, \"no, I'm actually totally fine with you throwing away my data\", so the worker processes happily exit without finishing writing to the pipe, and throw away your data. The documentation explicitly warns that cancel_join_thread should only be used if you don't care about lost data.\n\nYou could try using a managed queue with manager = multiprocessing.Manager() and queue = manager.Queue(), since IIRC, managed queues don't have the same limitation, but the way they avoid this limitation involves creating an extra server process to manage the queue, and the whole point of designing your program the way you did was to avoid creating extra processes. Plus, I think manager-based queues have extra inter-process communication overhead.\nI would recommend just using multiprocessing.Queue the way it was designed to be used - read from it in parallel with the writes. Instead of using the master process as an extra worker, have it start reading data as soon as it's done starting workers, and only join the workers once it's read all data.\n" ]
[ 0 ]
[]
[]
[ "python", "python_multiprocessing", "queue" ]
stackoverflow_0074520564_python_python_multiprocessing_queue.txt
Q: Matplotlib y axis is not ordered I'm getting data from a serial port and drawing it with matplotlib. But there is a problem: I cannot order the y-axis values. import matplotlib.pyplot as plt import matplotlib.animation as animation from deneme_serial import serial_reader collect = serial_reader() fig = plt.figure() ax = fig.add_subplot(1, 1, 1) xs=[] ys=[] def animate(i, xs, ys): xs = collect.collector()[0] ys = collect.collector()[1] ax.clear() ax.plot(xs) ax.plot(ys) axes=plt.gca() plt.xticks(rotation=45, ha='right') plt.subplots_adjust(bottom=0.30) plt.title('TMP102 Temperature over Time') plt.ylabel('Temperature (deg C)') ani = animation.FuncAnimation(fig, animate, fargs=(xs,ys), interval=1000) plt.show() The graph below is the result of the above code A: This happened to me following the same tutorial. My issue was that the variables coming from my instrument were strings, so matplotlib treated them as categorical values with no numeric order. I changed my variables to float and that fixed the problem xs.append(float(FROM_INSTRUMENT))
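Applied to this question's animate callback, a sketch of the same fix (the deneme_serial module is not shown, so the collector return format is an assumption):

def animate(i, xs, ys):
    # cast each serial reading to float so matplotlib plots a numeric,
    # ordered y axis instead of categorical strings
    xs = [float(v) for v in collect.collector()[0]]
    ys = [float(v) for v in collect.collector()[1]]
    ax.clear()
    ax.plot(xs)
    ax.plot(ys)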
Matplotlib y axis is not ordered
I'm getting data from a serial port and drawing it with matplotlib. But there is a problem: I cannot order the y-axis values. import matplotlib.pyplot as plt import matplotlib.animation as animation from deneme_serial import serial_reader collect = serial_reader() fig = plt.figure() ax = fig.add_subplot(1, 1, 1) xs=[] ys=[] def animate(i, xs, ys): xs = collect.collector()[0] ys = collect.collector()[1] ax.clear() ax.plot(xs) ax.plot(ys) axes=plt.gca() plt.xticks(rotation=45, ha='right') plt.subplots_adjust(bottom=0.30) plt.title('TMP102 Temperature over Time') plt.ylabel('Temperature (deg C)') ani = animation.FuncAnimation(fig, animate, fargs=(xs,ys), interval=1000) plt.show() The graph below is the result of the above code
[ "This happened to me following the same tutorial.\nMy issue was the variables coming from my instrument were strings. Therefore, there is no order. I changed my variables to float and that fixed the problem\nxs.append(float(FROM_INSTRUMENT))\n\n" ]
[ 1 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0060696798_matplotlib_python.txt
Q: How do you scrape a website that doesn't have specific html tags represented with class names So I am scraping a used car website. I've got the make, model, year, and miles, but I don't know how to get the others because they are in li tags as well. I've put all my code here from bs4 import BeautifulSoup import requests import pandas as pd url = 'https://jammer.ie/used-cars' response = requests.get(url) response.status_code soup = BeautifulSoup(response.content, 'html.parser') soup results = soup.find_all('div', {'class': 'span-9 right-col'}) len(results) results[0].find('h6',{'class':'car-make'}).get_text() results[0].find('p', {'class':'model'}).get_text() results[0].find('p', {'class': 'year'}).get_text() results[0].find('li').get_text().replace('\n', "") I get the information I want from the above code, but the other li tags contain img tags and span tags, so how can I get the information from each of the li tags? I am new to Python, so I would like it explained somewhat simply, please. I tried using the img tag but don't think I used it right. A: To get all features into a dataframe you can do: import requests import pandas as pd from bs4 import BeautifulSoup url = "https://jammer.ie/used-cars" soup = BeautifulSoup(requests.get(url).text, "html.parser") all_data = [] for car in soup.select(".car"): info = car.select_one(".top-info").get_text(strip=True, separator="|") make, model, year, price = info.split("|") features = {} for feature in car.select(".car--features li"): k = feature.img["src"].split("/")[-1].split(".")[0] v = feature.span.text features[f"feature_{k}"] = v all_data.append( {"make": make, "model": model, "year": year, "price": price, **features} ) df = pd.DataFrame(all_data) print(df.to_markdown(index=False)) Prints: make model year price feature_speed feature_engine feature_transmission feature_owner feature_door-icon1 feature_petrol5 feature_paint feature_hatchback Ford Fiesta 2010 €5,950 113144 miles 1.4 litres Manual 4 previous owners 5 doors Diesel Silver Hatchback Volkswagen Polo 2013 Price on application 41000 miles 1.2 litres Automatic nan 5 doors Petrol Blue Hatchback Volkswagen Polo 2015 Price on application 27000 miles 1.2 litres Automatic nan 5 doors Petrol Red Hatchback Audi A1 2014 Price on application 45000 miles 1.4 litres Automatic nan 3 doors Petrol White Hatchback Audi A3 2014 Price on application 79000 miles 1.4 litres Automatic nan 5 doors Petrol White Hatchback Audi A3 2008 €4,450 147890 miles 1.6 litres Automatic 3 previous owners 3 doors Petrol Black Hatchback SEAT Alhambra 2018 €29,950 134000 miles 2.0 litres Manual 2 previous owners 5 doors Diesel White MPV Volkswagen Jetta 2014 €8,950 138569 miles 1.6 litres Manual 3 previous owners 4 doors Diesel Grey Saloon Volkswagen Beetle 2014 Price on application 66379 miles 1.2 litres Automatic 1 previous owners 2 doors Petrol Black Hatchback Volvo XC60 2019 €44,950 38214 miles 2.0 litres Automatic 1 previous owners 5 doors Diesel Black Estate Toyota Aqua 2014 Price on application 67405 miles 1.5 litres Automatic 1 previous owners 5 doors nan White Hatchback Audi A3 2014 Price on application 51182 miles 1.4 litres Automatic 1 previous owners 4 doors Petrol Black Saloon Volkswagen Golf 2014 Price on application 68066 miles 1.2 litres Automatic 1 previous owners 5 doors Petrol Blue Hatchback
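If some listings lack an icon or a value, the feature loop in the answer would raise on feature.img["src"]; a defensive variant (a sketch reusing the same class names as the answer) is:

for feature in car.select(".car--features li"):
    img, span = feature.find("img"), feature.find("span")
    if img is None or span is None:
        continue  # skip features without an icon/value pair
    key = img["src"].rsplit("/", 1)[-1].rsplit(".", 1)[0]
    features[f"feature_{key}"] = span.get_text(strip=True)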
How do you scrape a website that doesn't have specific html tags represented with class names
So I am scraping a used car website. I've got the make, model, year, and miles, but I don't know how to get the others because they are in li tags as well. I've put all my code here from bs4 import BeautifulSoup import requests import pandas as pd url = 'https://jammer.ie/used-cars' response = requests.get(url) response.status_code soup = BeautifulSoup(response.content, 'html.parser') soup results = soup.find_all('div', {'class': 'span-9 right-col'}) len(results) results[0].find('h6',{'class':'car-make'}).get_text() results[0].find('p', {'class':'model'}).get_text() results[0].find('p', {'class': 'year'}).get_text() results[0].find('li').get_text().replace('\n', "") I get the information I want from the above code, but the other li tags contain img tags and span tags, so how can I get the information from each of the li tags? I am new to Python, so I would like it explained somewhat simply, please. I tried using the img tag but don't think I used it right.
[ "To get all features into a dataframe you can do:\nimport requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\n\nurl = \"https://jammer.ie/used-cars\"\nsoup = BeautifulSoup(requests.get(url).text, \"html.parser\")\n\nall_data = []\nfor car in soup.select(\".car\"):\n info = car.select_one(\".top-info\").get_text(strip=True, separator=\"|\")\n make, model, year, price = info.split(\"|\")\n\n features = {}\n for feature in car.select(\".car--features li\"):\n k = feature.img[\"src\"].split(\"/\")[-1].split(\".\")[0]\n v = feature.span.text\n features[f\"feature_{k}\"] = v\n\n all_data.append(\n {\"make\": make, \"model\": model, \"year\": year, \"price\": price, **features}\n )\n\ndf = pd.DataFrame(all_data)\nprint(df.to_markdown(index=False))\n\nPrints:\n\n\n\n\nmake\nmodel\nyear\nprice\nfeature_speed\nfeature_engine\nfeature_transmission\nfeature_owner\nfeature_door-icon1\nfeature_petrol5\nfeature_paint\nfeature_hatchback\n\n\n\n\nFord\nFiesta\n2010\n€5,950\n113144 miles\n1.4 litres\nManual\n4 previous owners\n5 doors\nDiesel\nSilver\nHatchback\n\n\nVolkswagen\nPolo\n2013\nPrice on application\n41000 miles\n1.2 litres\nAutomatic\nnan\n5 doors\nPetrol\nBlue\nHatchback\n\n\nVolkswagen\nPolo\n2015\nPrice on application\n27000 miles\n1.2 litres\nAutomatic\nnan\n5 doors\nPetrol\nRed\nHatchback\n\n\nAudi\nA1\n2014\nPrice on application\n45000 miles\n1.4 litres\nAutomatic\nnan\n3 doors\nPetrol\nWhite\nHatchback\n\n\nAudi\nA3\n2014\nPrice on application\n79000 miles\n1.4 litres\nAutomatic\nnan\n5 doors\nPetrol\nWhite\nHatchback\n\n\nAudi\nA3\n2008\n€4,450\n147890 miles\n1.6 litres\nAutomatic\n3 previous owners\n3 doors\nPetrol\nBlack\nHatchback\n\n\nSEAT\nAlhambra\n2018\n€29,950\n134000 miles\n2.0 litres\nManual\n2 previous owners\n5 doors\nDiesel\nWhite\nMPV\n\n\nVolkswagen\nJetta\n2014\n€8,950\n138569 miles\n1.6 litres\nManual\n3 previous owners\n4 doors\nDiesel\nGrey\nSaloon\n\n\nVolkswagen\nBeetle\n2014\nPrice on application\n66379 miles\n1.2 litres\nAutomatic\n1 previous owners\n2 doors\nPetrol\nBlack\nHatchback\n\n\nVolvo\nXC60\n2019\n€44,950\n38214 miles\n2.0 litres\nAutomatic\n1 previous owners\n5 doors\nDiesel\nBlack\nEstate\n\n\nToyota\nAqua\n2014\nPrice on application\n67405 miles\n1.5 litres\nAutomatic\n1 previous owners\n5 doors\nnan\nWhite\nHatchback\n\n\nAudi\nA3\n2014\nPrice on application\n51182 miles\n1.4 litres\nAutomatic\n1 previous owners\n4 doors\nPetrol\nBlack\nSaloon\n\n\nVolkswagen\nGolf\n2014\nPrice on application\n68066 miles\n1.2 litres\nAutomatic\n1 previous owners\n5 doors\nPetrol\nBlue\nHatchback\n\n\n\n" ]
[ 1 ]
[]
[]
[ "beautifulsoup", "python", "web_scraping" ]
stackoverflow_0074525317_beautifulsoup_python_web_scraping.txt
Q: AttributeError: module 'keras.utils' has no attribute 'get_file' using segmentation_models I'm trying to use segmentation models but I can't fix this error. I've searched for this particular one but couldn't find an answer. I'm using pycharm and this error is linked to this specific line of code BACKBONE = 'resnet34' model1 = sm.Unet(BACKBONE, weights=None, encoder_weights='imagenet', classes=num_classes, activation='softmax', decoder_block_type = 'upsampling') which is also the 83rd. I searched in the documentation and apparently the versions of tensorflow keras etc satisfy the requirements.I really don't know what to do given the fact that I really tried to install and uninstall everything in many combinations in order to get this piece of code to work.Thank you all for your help and time! Below there's the complete error, hoping it might help you! `Traceback (most recent call last): File "C:\Users\Giulia\PycharmProjects\multiclass_new\main.py", line 83, in <module> model1 = sm.Unet('resnet34', weights=None, File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\__init__.py", line 34, in wrapper return func(*args, **kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\models\unet.py", line 221, in Unet backbone = Backbones.get_backbone( File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\backbones\backbones_factory.py", line 103, in get_backbone model = model_fn(*args, **kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models_factory.py", line 78, in wrapper return func(*args, **new_kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models\resnet.py", line 314, in ResNet34 return ResNet( File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models\resnet.py", line 280, in ResNet load_model_weights(model, model_params.model_name, File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\weights.py", line 25, in load_model_weights weights_path = keras_utils.get_file( AttributeError: module 'keras.utils' has no attribute 'get_file' A: You can try: import segmentation_models as sm sm.set_framework('tf.keras') sm.framework() Worked for me on google colab! A: To solve this issue, try importing the module EfficientNetB0 directly, as the code below: import efficientnet.tfkeras as efn A: I had same problem but with vgg u-net model , This worked for me !apt-get install -y libsm6 libxext6 libxrender-dev !pip install opencv-python !pip install git+https://github.com/divamgupta/image-segmentation-keras from keras_segmentation.models.unet import vgg_unet or check here how it's implemented https://gitee.com/sanyanjie/image-segmentation-keras
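Putting the accepted fix into the question's own call, a minimal sketch (num_classes is the variable from the question; whether your segmentation_models version still accepts decoder_block_type is uncertain, so it is omitted here):

import segmentation_models as sm

# point segmentation_models at tf.keras rather than standalone keras,
# so its internal keras_utils.get_file lookup resolves correctly
sm.set_framework('tf.keras')

model1 = sm.Unet('resnet34', encoder_weights='imagenet',
                 classes=num_classes, activation='softmax')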
AttributeError: module 'keras.utils' has no attribute 'get_file' using segmentation_models
I'm trying to use segmentation models but I can't fix this error. I've searched for this particular one but couldn't find an answer. I'm using pycharm and this error is linked to this specific line of code BACKBONE = 'resnet34' model1 = sm.Unet(BACKBONE, weights=None, encoder_weights='imagenet', classes=num_classes, activation='softmax', decoder_block_type = 'upsampling') which is also the 83rd. I searched in the documentation and apparently the versions of tensorflow keras etc satisfy the requirements.I really don't know what to do given the fact that I really tried to install and uninstall everything in many combinations in order to get this piece of code to work.Thank you all for your help and time! Below there's the complete error, hoping it might help you! `Traceback (most recent call last): File "C:\Users\Giulia\PycharmProjects\multiclass_new\main.py", line 83, in <module> model1 = sm.Unet('resnet34', weights=None, File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\__init__.py", line 34, in wrapper return func(*args, **kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\models\unet.py", line 221, in Unet backbone = Backbones.get_backbone( File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\segmentation_models\backbones\backbones_factory.py", line 103, in get_backbone model = model_fn(*args, **kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models_factory.py", line 78, in wrapper return func(*args, **new_kwargs) File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models\resnet.py", line 314, in ResNet34 return ResNet( File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\models\resnet.py", line 280, in ResNet load_model_weights(model, model_params.model_name, File "C:\Users\Giulia\PycharmProjects\multiclass_new\venv\lib\site- packages\classification_models\weights.py", line 25, in load_model_weights weights_path = keras_utils.get_file( AttributeError: module 'keras.utils' has no attribute 'get_file'
[ "You can try:\nimport segmentation_models as sm\n\nsm.set_framework('tf.keras')\n\nsm.framework()\n\nWorked for me on google colab!\n", "To solve this issue, try importing the module EfficientNetB0 directly, as the code below:\nimport efficientnet.tfkeras as efn\n\n", "I had same problem but with vgg u-net model , This worked for me\n!apt-get install -y libsm6 libxext6 libxrender-dev\n\n!pip install opencv-python\n\n!pip install git+https://github.com/divamgupta/image-segmentation-keras\n\nfrom keras_segmentation.models.unet import vgg_unet\n\nor check here how it's implemented\nhttps://gitee.com/sanyanjie/image-segmentation-keras\n" ]
[ 18, 0, 0 ]
[ "The other answer provided here didn't work for me. Instead, upgrading keras did the trick for me via:\npip install --upgrade keras\n\n" ]
[ -1 ]
[ "keras", "python" ]
stackoverflow_0067792138_keras_python.txt
Q: How to run external python file in html I want to use my html file to take user input and then I want to use my python program to process my input and then I want my html shows the answer HTML Part ` {% extends 'base.html' %} {% block title %}Home{% endblock title %}Home {% block body %} <style> #body { padding-left:100px; padding-top:10px; } </style> <div id="body"> <br> <marquee width="750px"> <h4>My name is ChatXBot. I'm your Childs Friend. Talk to me. If you want to exit, type Bye!</h4> </marquee> <br> <form action="/external" method="post"> {% csrf_token %} <textarea id="askchat" name="askchat" rows="10" cols="100" placeholder="Start Typing Here" required></textarea> {{data_external}}<br><br> {{data1}} <br> <input class="btn btn-secondary btn-lg" type="submit" value="Ask"> </form> </div> {% endblock body %} ` Python part ` #importing the neccesary libraries import nltk import numpy as np import random import string # to process standard python strings nltk.download('omw-1.4') #wow.txt is text collected from https://en.wikipedia.org/wiki/Pediatricsn f=open('F:\Ayan\WORK STATION (III)\Python\Python Project\chatbot.txt','r',errors = 'ignore') raw = f.read() raw=raw.lower()# converts to lowercase nltk.download('punkt') # first-time use only nltk.download('wordnet') # first-time use only sent_tokens = nltk.sent_tokenize(raw)# converts to list of sentences word_tokens = nltk.word_tokenize(raw)# converts to list of word sent_tokens[:2] word_tokens[:2] lemmer = nltk.stem.WordNetLemmatizer() #WordNet is a semantically-oriented dictionary of English included in NLTK. def LemTokens(tokens): return [lemmer.lemmatize(token) for token in tokens] remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation) def LemNormalize(text): return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict))) GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey",) GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I am glad! You are talking to me"] def greeting(sentence): for word in sentence.split(): if word.lower() in GREETING_INPUTS: return random.choice(GREETING_RESPONSES) from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import cosine_similarity def response(user_response): ChatBot_response='' sent_tokens.append(user_response) TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfidfVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx=vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] if(req_tfidf==0): ChatBot_response=ChatBot_response+"I am sorry! I don't understand you" return ChatBot_response else: ChatBot_response = ChatBot_response+sent_tokens[idx] return ChatBot_response flag=True print("ChatXBot: My name is ChatXBot. I'm your Friends. Talk to me. 
If you want to exit, type Bye!") print(".") while(flag==True): user_response = input() user_response=user_response.lower() if(user_response!='bye'): if(user_response=='thanks' or user_response=='thank you' ): flag=False print("ChatBot: You are welcome..") else: if(greeting(user_response)!=None): print("ChatBot: "+greeting(user_response)) else: print("ChatBot: ",end="") print(response(user_response)) sent_tokens.remove(user_response) else: flag=False print("ChatBot: Bye!") ` I have tried using Django but got lots of errors. This block of code, which I took from GitHub, takes user input, tries to understand it, and then shows a relevant answer based on the text file I have provided. A: If you want everything to run on the client side, I think you should use PyScript; otherwise you must execute all your Python code in a server-side program like Django and send the result to the client as proper HTML/CSS/JS.
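As a rough illustration of the server-side route the answer describes, a hypothetical Django view wiring the question's form to the chatbot (the view name, template name, and the assumption that response() from the script is importable are all illustrative, not code from the answer):

# views.py (hypothetical wiring for the form posted to /external)
from django.shortcuts import render

def external(request):
    answer = ''
    if request.method == 'POST':
        user_text = request.POST.get('askchat', '').lower()
        # response() is the matching function from the question's script
        answer = response(user_text)
    return render(request, 'home.html', {'data_external': answer})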
How to run external python file in html
I want to use my html file to take user input and then I want to use my python program to process my input and then I want my html shows the answer HTML Part ` {% extends 'base.html' %} {% block title %}Home{% endblock title %}Home {% block body %} <style> #body { padding-left:100px; padding-top:10px; } </style> <div id="body"> <br> <marquee width="750px"> <h4>My name is ChatXBot. I'm your Childs Friend. Talk to me. If you want to exit, type Bye!</h4> </marquee> <br> <form action="/external" method="post"> {% csrf_token %} <textarea id="askchat" name="askchat" rows="10" cols="100" placeholder="Start Typing Here" required></textarea> {{data_external}}<br><br> {{data1}} <br> <input class="btn btn-secondary btn-lg" type="submit" value="Ask"> </form> </div> {% endblock body %} ` Python part ` #importing the neccesary libraries import nltk import numpy as np import random import string # to process standard python strings nltk.download('omw-1.4') #wow.txt is text collected from https://en.wikipedia.org/wiki/Pediatricsn f=open('F:\Ayan\WORK STATION (III)\Python\Python Project\chatbot.txt','r',errors = 'ignore') raw = f.read() raw=raw.lower()# converts to lowercase nltk.download('punkt') # first-time use only nltk.download('wordnet') # first-time use only sent_tokens = nltk.sent_tokenize(raw)# converts to list of sentences word_tokens = nltk.word_tokenize(raw)# converts to list of word sent_tokens[:2] word_tokens[:2] lemmer = nltk.stem.WordNetLemmatizer() #WordNet is a semantically-oriented dictionary of English included in NLTK. def LemTokens(tokens): return [lemmer.lemmatize(token) for token in tokens] remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation) def LemNormalize(text): return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict))) GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up","hey",) GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I am glad! You are talking to me"] def greeting(sentence): for word in sentence.split(): if word.lower() in GREETING_INPUTS: return random.choice(GREETING_RESPONSES) from sklearn.feature_extraction.text import TfidfVectorizer from sklearn.metrics.pairwise import cosine_similarity def response(user_response): ChatBot_response='' sent_tokens.append(user_response) TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english') tfidf = TfidfVec.fit_transform(sent_tokens) vals = cosine_similarity(tfidf[-1], tfidf) idx=vals.argsort()[0][-2] flat = vals.flatten() flat.sort() req_tfidf = flat[-2] if(req_tfidf==0): ChatBot_response=ChatBot_response+"I am sorry! I don't understand you" return ChatBot_response else: ChatBot_response = ChatBot_response+sent_tokens[idx] return ChatBot_response flag=True print("ChatXBot: My name is ChatXBot. I'm your Friends. Talk to me. If you want to exit, type Bye!") print(".") while(flag==True): user_response = input() user_response=user_response.lower() if(user_response!='bye'): if(user_response=='thanks' or user_response=='thank you' ): flag=False print("ChatBot: You are welcome..") else: if(greeting(user_response)!=None): print("ChatBot: "+greeting(user_response)) else: print("ChatBot: ",end="") print(response(user_response)) sent_tokens.remove(user_response) else: flag=False print("ChatBot: Bye!") ` I have tried using Django but got lots of errors. This block of code, which I took from GitHub, takes user input, tries to understand it, and then shows a relevant answer based on the text file I have provided.
[ "if you want every thing run in client side I think you should use pyscript otherwise you must execute all your python code in server side program like django and send result to client with proper html/css/js libraries\n" ]
[ 0 ]
[]
[]
[ "django", "external", "html", "python" ]
stackoverflow_0074524364_django_external_html_python.txt
Q: PyPI index vs simple index I've seen mention of both an index and a simple index in relation to PyPI; an example is here in the devpi documentation. Is there some difference between the two indexes? Are they the same, or do they have different access controls or functions, for example? A: The "simple" index protocol is read-only, intended for automated use, and is defined in PEP 503. Other protocols with more functionality may be defined by particular repository servers, but are probably only usable with that server's own tools. A: As for https://pypi.org/ and some other Python repositories: https://pypi.org/pypi index (XML RPC URL) is used by the pip search command. For example: pip search --index https://pypi.org/pypi twine https://pypi.org/ deprecated this feature in their repo. But you can still use it in private repositories. https://pypi.org/simple is an index that is used by the pip install command. For example: pip install --index-url https://pypi.org/simple twine To access the simple index via the web, add a slash at the end of https://pypi.org/simple/ if there is no automatic redirect. By the way, --index has a corresponding PIP_INDEX environment variable and --index-url has a corresponding PIP_INDEX_URL variable.
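To make an index the default for pip rather than passing --index-url on every call, pip also reads a configuration file; for example (the URL here is the public PyPI simple index, which you would swap for a private mirror):

# ~/.config/pip/pip.conf (Linux), ~/Library/Application Support/pip/pip.conf (macOS)
# or %APPDATA%\pip\pip.ini (Windows)
[global]
index-url = https://pypi.org/simple

or, equivalently, via the environment variable mentioned in the answer:

export PIP_INDEX_URL=https://pypi.org/simple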
PyPI index vs simple index
I've seen mention of both an index and a simple index in relation to PyPI; an example is here in the devpi documentation. Is there some difference between the two indexes? Are they the same, or do they have different access controls or functions, for example?
[ "The \"simple\" index protocol is read-only, intended for automated use, and is defined in PEP 503. Other protocols with more functionality may be defined by particular repository servers, but are probably only usable with that server's own tools.\n", "As for https://pypi.org/ and some other Python repositories:\n\nhttps://pypi.org/pypi index (XML RPC URL) is used by the pip search command.\nFor example:\n\npip search --index https://pypi.org/pypi twine\n\nhttps://pypi.org/ deprecated this feature in their repo. But you can still use it in private repositories.\n\nhttps://pypi.org/simple is an index, that is used by the pip install command.\nFor example:\n\npip install --index-url https://pypi.org/simple twine\n\nTo assess simple via web, add a slash at the end of https://pypi.org/simple/ if there is no automatic redirect.\nBy the way, --index has a corresponding PIP_INDEX environment variable and --index-url has a corresponding PIP_INDEX_URL variable.\n" ]
[ 6, 0 ]
[]
[]
[ "pypi", "python" ]
stackoverflow_0024816148_pypi_python.txt
Q: Google resource manager get all exceptions - Python I'm making a Python script that can manage my Google projects. I'm having an issue with one part: when I try to delete a project, it can return many errors. I wrote a piece of code to catch this exception: try: # Initialize request argument(s) request = DeleteProjectRequest( name=project, ) self.project_manager.delete_project(request=request) except PermissionDenied as exc: # GCP returns PermissionDenied whether we actually does # not have permissions to perform the get_project call # or when the project does not exist. Due to this reason, # the PermissionDenied exception catch won't be deterministic. logger.error(f"Project '{project_id}' does not exist", exc) return False I need to get the error message for all types of errors, so I changed except PermissionDenied as exc: to except Exception as exc: and it works, but I need to call the logger only if the error is PermissionDenied, and in all cases I need to call another function passing the message as a parameter, like return_to_db(error_message). My question is: how can I run the logger only if the error is PermissionDenied? A: You can also catch multiple Exceptions by adding additional blocks, though it will choose the first isinstance() match (so if you put Exception first, it will be selected instead, while TypeError would be continued past) try: self.project_manager.delete_project( request=DeleteProjectRequest(name=project)) except PermissionDenied as exc: # GCP returns PermissionDenied whether we actually does # not have permissions to perform the get_project call # or when the project does not exist. Due to this reason, # the PermissionDenied exception catch won't be deterministic. logger.error(f"Project '{project_id}' does not exist", exc) except Exception: # FIXME other handling to go here pass # fall to return False else: # didn't raise return True # opportunity for finally: block here too # if any Exception was raised, continue to return False return False
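Combining the two answers with the question's requirement to always report the message, a sketch (return_to_db is the asker's own function, assumed to exist elsewhere in their code):

try:
    request = DeleteProjectRequest(name=project)
    self.project_manager.delete_project(request=request)
except Exception as exc:
    # log only for the permission/not-found case
    if isinstance(exc, PermissionDenied):
        logger.error(f"Project '{project_id}' does not exist", exc)
    # in every error case, hand the message back to the database
    return_to_db(str(exc))
    return False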
Google resource manager get all exceptions - Python
I'm making a Python script that can manage my Google projects. I'm having an issue with one part: when I try to delete a project, it can return many errors. I wrote a piece of code to catch this exception: try: # Initialize request argument(s) request = DeleteProjectRequest( name=project, ) self.project_manager.delete_project(request=request) except PermissionDenied as exc: # GCP returns PermissionDenied whether we actually does # not have permissions to perform the get_project call # or when the project does not exist. Due to this reason, # the PermissionDenied exception catch won't be deterministic. logger.error(f"Project '{project_id}' does not exist", exc) return False I need to get the error message for all types of errors, so I changed except PermissionDenied as exc: to except Exception as exc: and it works, but I need to call the logger only if the error is PermissionDenied, and in all cases I need to call another function passing the message as a parameter, like return_to_db(error_message). My question is: how can I run the logger only if the error is PermissionDenied?
[ "You can also catch multiple Exceptions by adding additional blocks, though it will choose the first isinstance() match (so if you put Exception first, it will be selected instead, while TypeError would be continued past)\ntry:\n self.project_manager.delete_project(\n request=DeleteProjectRequest(name=project))\nexcept PermissionDenied as exc:\n # GCP returns PermissionDenied whether we actually does\n # not have permissions to perform the get_project call\n # or when the project does not exist. Due to this reason,\n # the PermissionDenied exception catch won't be deterministic.\n logger.error(f\"Project '{project_id}' does not exist\", exc)\nexcept Exception:\n # FIXME other handling to go here\n pass # fall to return False\nelse: # didn't raise\n return True\n# opportunity for finally: block here too\n\n# if any Exception was raised, continue to return False\nreturn False\n\n", "You can add a condition of the instance type of the current exception in Python, example :\n try:\n # Initialize request argument(s)\n\n request = DeleteProjectRequest(\n name=project,\n )\n self.project_manager.delete_project(request=request)\n\n except Exception as exc:\n if isinstance(exc, PermissionDenied):\n logger.error(f\"Project '{project_id}' does not exist\", exc)\n \n return False\n\nAs expected, the logger is executed only if the exception instance is PermissionDenied.\n" ]
[ 3, 2 ]
[]
[]
[ "google_cloud_platform", "python" ]
stackoverflow_0074524363_google_cloud_platform_python.txt
Q: How to get phase DC offset and amplitude of sine wave in Python I have a sine wave of the known frequency with some noise with uniform samples near Nyquist frequency. I want to get approximate values of amplitude, phase, and DC offset. I searched for an answer and found a couple of answers close to what I needed, but still was unable to write a proper code that achieves what I need. When I run the code below, I get the wrong phase and amplitude. Would be happy to get some help. import sys import numpy import pylab as plt def cosfunc(time, amplitude, omega, phase, offset): ''' Function to create sine wave. Phase in radians ''' return amplitude * numpy.cos(omega*time + phase) + offset def get_cosine_approx(timeline,sine_data): points_num=len(timeline) fft_freq = numpy.fft.fftfreq(points_num-1, timeline[1]-timeline[0]) # assume uniform spacing fft_result=numpy.fft.fft(sine_data) #Remove negative frequencies for i in range(len(fft_freq)): if fft_freq[i]<0: fft_result[i]=0 ampl=numpy.abs(fft_result)/points_num*2 max_index=numpy.argmax(ampl) guess_amplitude=ampl[max_index] phase_unwrapped=numpy.unwrap(numpy.angle(fft_result)) guess_phase=phase_unwrapped[max_index] guess_phase_dig=guess_phase*180./numpy.pi print("freq",fft_freq[max_index]) print("amplitude",guess_amplitude) print("phase",guess_phase_dig) plt.plot(timeline, sine_data, "ok", label="sine") new_timeline=numpy.linspace(timeline[0], timeline[-1], len(timeline)*1000) plt.plot(new_timeline, cosfunc(new_timeline,guess_amplitude,2.*numpy.pi*56e9,guess_phase,0), "r-", label="fit") plt.legend(loc="best") plt.show() return {"amp":guess_amplitude, "ph":guess_phase,"ph_dig":guess_phase_dig} N = 256 # Sample points f=56e9 #56GHz t = numpy.linspace(0.0, 100./f, N) # Time omega = 2.*numpy.pi*f offset=0 phase=0 A=1. cos=cosfunc(t,A,omega,phase,offset) result=get_cosine_approx(t,cos) A: You are catching the phase at an inflection point, where the phase is suddenly transitioning from +pi/2 to -pi/2, and the bin you are looking at is just partway through the downhill slide. This is just because the FFT results are not continuous. A single bin spans a range of frequencies. Notice when we plot the phase and the amplitude: import sys import numpy as np import matplotlib.pyplot as plt def cosfunc(time, amplitude, omega, phase, offset): ''' Function to create sine wave. Phase in radians ''' return amplitude * np.cos(omega*time + phase) + offset def get_cosine_approx(timeline,sine_data): points_num=len(timeline) fft_freq = np.fft.fftfreq(points_num, timeline[1]-timeline[0]) fft_result=np.fft.fft(sine_data) fft_freq = np.fft.fftshift(fft_freq) fft_result = np.fft.fftshift(fft_result) ampl = np.abs(fft_result) * 2 / points_num phase = np.angle(fft_result) plt.plot(fft_freq, ampl, label='ampl' ) plt.plot(fft_freq, phase, label='phase' ) plt.legend(loc="best") plt.show() return 0 N = 256 # Sample points f=56e9 #56GHz t = np.linspace(0.0, 100./f, N) # Time omega = 2.*np.pi*f offset=0 phase=0 A=1. cos=cosfunc(t,A,omega,phase,offset) result=get_cosine_approx(t,cos) The plot shows the inflection point right at the peak frequency bin.
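When the frequency is known exactly, as it is in this question, a linear least-squares fit against cos/sin basis functions is a common alternative to reading a single FFT bin, and it recovers the DC offset as well; a sketch using the question's variable names t and cos for the samples:

import numpy as np

def fit_cosine(t, y, f):
    # model: y = a*cos(w*t) + b*sin(w*t) + c
    w = 2.0 * np.pi * f
    basis = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(basis, y, rcond=None)
    amplitude = np.hypot(a, b)
    phase = np.arctan2(-b, a)  # matches amplitude*cos(w*t + phase)
    return amplitude, phase, c

amp, ph, offset = fit_cosine(t, cos, 56e9)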
How to get phase DC offset and amplitude of sine wave in Python
I have a sine wave of the known frequency with some noise with uniform samples near Nyquist frequency. I want to get approximate values of amplitude, phase, and DC offset. I searched for an answer and found a couple of answers close to what I needed, but still was unable to write a proper code that achieves what I need. When I run the code below, I get the wrong phase and amplitude. Would be happy to get some help. import sys import numpy import pylab as plt def cosfunc(time, amplitude, omega, phase, offset): ''' Function to create sine wave. Phase in radians ''' return amplitude * numpy.cos(omega*time + phase) + offset def get_cosine_approx(timeline,sine_data): points_num=len(timeline) fft_freq = numpy.fft.fftfreq(points_num-1, timeline[1]-timeline[0]) # assume uniform spacing fft_result=numpy.fft.fft(sine_data) #Remove negative frequencies for i in range(len(fft_freq)): if fft_freq[i]<0: fft_result[i]=0 ampl=numpy.abs(fft_result)/points_num*2 max_index=numpy.argmax(ampl) guess_amplitude=ampl[max_index] phase_unwrapped=numpy.unwrap(numpy.angle(fft_result)) guess_phase=phase_unwrapped[max_index] guess_phase_dig=guess_phase*180./numpy.pi print("freq",fft_freq[max_index]) print("amplitude",guess_amplitude) print("phase",guess_phase_dig) plt.plot(timeline, sine_data, "ok", label="sine") new_timeline=numpy.linspace(timeline[0], timeline[-1], len(timeline)*1000) plt.plot(new_timeline, cosfunc(new_timeline,guess_amplitude,2.*numpy.pi*56e9,guess_phase,0), "r-", label="fit") plt.legend(loc="best") plt.show() return {"amp":guess_amplitude, "ph":guess_phase,"ph_dig":guess_phase_dig} N = 256 # Sample points f=56e9 #56GHz t = numpy.linspace(0.0, 100./f, N) # Time omega = 2.*numpy.pi*f offset=0 phase=0 A=1. cos=cosfunc(t,A,omega,phase,offset) result=get_cosine_approx(t,cos)
[ "You are catching the phase at an inflection point, where the phase is suddenly transitioning from +pi/2 to -pi/2, and the bin you are looking at is just partway through the downhill slide. This is just because the FFT results are not continuous. A single bin spans a range of frequencies.\nNotice when we plot the phase and the amplitude:\nimport sys\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef cosfunc(time, amplitude, omega, phase, offset):\n ''' Function to create sine wave. Phase in radians '''\n return amplitude * np.cos(omega*time + phase) + offset\n\ndef get_cosine_approx(timeline,sine_data):\n points_num=len(timeline)\n \n fft_freq = np.fft.fftfreq(points_num, timeline[1]-timeline[0])\n fft_result=np.fft.fft(sine_data)\n fft_freq = np.fft.fftshift(fft_freq)\n fft_result = np.fft.fftshift(fft_result)\n \n ampl = np.abs(fft_result) * 2 / points_num\n phase = np.angle(fft_result)\n\n plt.plot(fft_freq, ampl, label='ampl' )\n plt.plot(fft_freq, phase, label='phase' )\n plt.legend(loc=\"best\")\n plt.show()\n\n return 0\n\nN = 256 # Sample points\nf=56e9 #56GHz\nt = np.linspace(0.0, 100./f, N) # Time\nomega = 2.*np.pi*f\noffset=0\nphase=0\nA=1.\n\ncos=cosfunc(t,A,omega,phase,offset)\nresult=get_cosine_approx(t,cos)\n\nThe plot shows the inflection point right at the peak frequency bin.\n\n" ]
[ 0 ]
[]
[]
[ "fft", "python" ]
stackoverflow_0074514831_fft_python.txt
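Editor's note: since the frequency in the record above is known in advance, a least-squares projection onto cosine and sine at that frequency sidesteps the FFT bin-edge problem the answer diagnoses. This is an illustrative sketch only (function and variable names are the editor's, not from the thread), assuming numpy is available:

```python
import numpy as np

def fit_known_freq(t, y, f):
    """Least-squares fit of y ~ a*cos(2*pi*f*t) + b*sin(2*pi*f*t) + c."""
    w = 2.0 * np.pi * f
    M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, y, rcond=None)
    amplitude = np.hypot(a, b)      # a = A*cos(phi), b = -A*sin(phi)
    phase = np.arctan2(-b, a)       # so y = amplitude*cos(w*t + phase) + c
    return amplitude, phase, c

f = 56e9
t = np.linspace(0.0, 100.0 / f, 256)
y = 1.0 * np.cos(2.0 * np.pi * f * t + 0.3) + 0.1
print(fit_known_freq(t, y, f))      # ~ (1.0, 0.3, 0.1)
```

Unlike reading a single FFT bin, this fit is exact for noiseless data and degrades gracefully as noise is added.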
Q: How to start Airflow Dag with a past Data Interval Date I am working in Airflow 2.2.3 and I can't figure out how to trigger my dag with a past execution date. When I click Trigger dag with Config, I changed the calendar to the date I wanted, but when I clicked run, I saw the run but it didn't run. I also tried putting the date in the config section with {"start_date":"date"} but that didn't work either. Any idea how to trigger a dag with a date in the past? A: To create a past Airflow run you have multiple options, but most of them need you to update the start date of your dag to be older than the desired run date (first four options), otherwise the run will be marked as succeeded without being executed. Via Airflow UI: you can click on the run icon, and choose trigger DAG w/ config, then choose the logical date you want, and create the run. Via Airflow REST API: here is the doc Via Airflow python lib: DagBag(read_dags_from_db=True).get_dag(<your dag name>).create_dagrun( run_id=<run id>, run_type=DagRunType.MANUAL, execution_date=<execution date>, state=State.QUEUED, external_trigger=True, data_interval=(<desired start date>, <desired end date>), conf={<any conf>: <any value>, ...}, ) Via Airflow CLI: here is the doc airflow dags trigger -e <execution date> <dag id> Via Airflow CLI backfill command: here is the doc. It doesn't need an old start date, since the run and the task are managed by the CLI and not the Airflow scheduler. airflow dags backfill -s <start date> -e <end date> <dag id> P.S: this concept is based on the schedule interval, so if you want a single run the end date should be the start date plus one schedule interval, otherwise you will have multiple runs between the two dates. This is different from option 3, which creates a single run with the start and end dates you provided.
How to start Airflow Dag with a past Data Interval Date
I am working in Airflow 2.2.3 and I can't figure out how to trigger my dag with a past execution date. When I click Trigger dag with Config, I changed the calendar to the date I wanted, but when I clicked run, I saw the run but it didn't run. I also tried putting the date in the config section with {"start_date":"date"} but that didn't work either. Any idea how to trigger a dag with a date in the past?
[ "To create a past Airflow run you have multiple option, but most of them needs to update the start date of your dag to be older than the date of the desired run date (first four options), otherwise the run will be marked as succeeded without being executed.\n\nVia Airflow UI: you can click on the run icon, and choose trigger DAG w/ config, then choose the logical date you want, and create the run.\nVia Airflow REST API: here is the doc\nVia Airflow python lib:\nDagBag(read_dags_from_db=True).get_dag(<your dag name>).create_dagrun(\n run_id=<run id>,\n run_type=DagRunType.MANUAL,\n execution_date=<execution date>,\n state=State.QUEUED,\n external_trigger=True,\n data_interval=(<desired start date>, <desired end date>),\n conf={<any conf>: <any value>, ...},\n)\n\n\nVia Airflow CLI: here is the doc\nairflow dags trigger -e <execution date> <dag id>\n\n\nVia Airlfow CLI backfill command: here is the doc. It doesn't need an old start date, where the run and the task are managed by the CLI and not the Airflow scheduler.\nairflow dags backfill -s <start date> -e <end date> <dag id>\n\nP.S: this concept is based on schedule interval, so if you want a signle run, should be + , otherwise you will have multiple runs between the two days. This is different in the option 3 which create a single run with the and you provided.\n\n" ]
[ 0 ]
[]
[]
[ "airflow", "directed_acyclic_graphs", "gcs", "python" ]
stackoverflow_0074525338_airflow_directed_acyclic_graphs_gcs_python.txt
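Editor's note: as a concrete illustration of the REST API option in the record above, Airflow's stable REST API exposes POST /api/v1/dags/{dag_id}/dagRuns. The sketch below is hedged: host, credentials and DAG id are placeholders, and the logical_date field assumes Airflow >= 2.2 with basic-auth enabled:

```python
import requests
from requests.auth import HTTPBasicAuth

resp = requests.post(
    "http://localhost:8080/api/v1/dags/my_dag/dagRuns",  # placeholder host/DAG id
    auth=HTTPBasicAuth("airflow", "airflow"),            # placeholder credentials
    json={
        "logical_date": "2021-06-01T00:00:00Z",          # the past date to run for
        "conf": {},
    },
)
resp.raise_for_status()
print(resp.json()["state"])
```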
Q: How to skip if empty item in column in Django DB I'm new to learning Django and ran into a small issue: I'm working on a product display page where some products are in a subcategory. I want to be able to display this subcategory when needed but I do not want it to show up when unused. Right now it will show up on my page as 'NONE' which I do not want. How do I fix this? My model looks like this: class Category(models.Model): category_name = models.CharField(max_length=200) sub_category = models.CharField(max_length=200,blank=True,null=True) def __str__(self): return f" {self.category_name} {self.sub_category}" On my webpage I use the {{category}} in a for loop to display the different categories. Unfortunately it shows 'NONE' when there is no subcategory. I have tried the following: {% for category in categories %} <p> {% if category.sub_category == "NULL" %} {{category.category_name}} {% else %} {{category}} {% endif %} </p> {% endfor %} A: If the value is NULL at the database side, it is None at the Django/Python side, so you can work with: {% for category in categories %} <p> {% if category.sub_category is None %} {{ category.category_name }} {% else %} {{ category }} {% endif %} </p> {% endfor %} But instead of that, it makes more sense to fix this in the model itself: class Category(models.Model): category_name = models.CharField(max_length=200) sub_category = models.CharField(max_length=200, blank=True, null=True) def __str__(self): if self.sub_category is None: return self.category_name else: return f'{self.category_name} {self.sub_category}' This simplifies rendering to: {% for category in categories %} <p> {{ category }} </p> {% endfor %}
How to skip if empty item in column in Django DB
I'm new to learning Django and ran into a small issue: I'm working on a product display page where some products are in a subcategory. I want to be able to display this subcategory when needed but I do not want it to show up when unused. Right now it will show up on my page as 'NONE' which I do not want. How do I fix this? My model looks like this: class Category(models.Model): category_name = models.CharField(max_length=200) sub_category = models.CharField(max_length=200,blank=True,null=True) def __str__(self): return f" {self.category_name} {self.sub_category}" On my webpage I use the {{category}} in a for loop to display the different categories. Unfortunately it shows 'NONE' when there is no subcategory. I have tried the following: {% for category in categories %} <p> {% if category.sub_category == "NULL" %} {{category.category_name}} {% else %} {{category}} {% endif %} </p> {% endfor %}
[ "If the value is NULL at the database side, it is None at the Django/Python side, so you can work with:\n{% for category in categories %}\n<p>\n {% if category.sub_category is None %}\n {{ category.category_name }}\n {% else %}\n {{ category }}\n {% endif %}\n</p>\n{% endfor %}\nBut instead of that, it makes more sense to fix this in the model itself:\nclass Category(models.Model):\n category_name = models.CharField(max_length=200)\n sub_category = models.CharField(max_length=200, blank=True, null=True)\n\n def __str__(self):\n if self.sub_category is None:\n return self.category_name\n else:\n return f'{self.category_name} {self.sub_category}'\nThis simplfies rendering to:\n{% for category in categories %}\n<p>\n {{ category }}\n</p>\n{% endfor %}\n" ]
[ 1 ]
[]
[]
[ "django", "python" ]
stackoverflow_0074525466_django_python.txt
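Editor's note: an even shorter alternative for the record above, if you would rather leave __str__ untouched, is Django's built-in default_if_none template filter. A minimal sketch in the record's own template language:

```django
{% for category in categories %}
<p>{{ category.category_name }} {{ category.sub_category|default_if_none:"" }}</p>
{% endfor %}
```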
Q: Find all possible variants of max pair of 2 Given a string of numbers like 123456, I want to find all the ways its digits can be grouped into pairs or left single. For example, from the string 123456 I would like to get the following: 12 3 4 5 6, 12 34 5 6, 1 23 4 56, etc. The nearest I was able to come to was this: strr = list("123456") x = list("123456") for i in range(int(len(strr)/2)): newlist = [] for j in range(i): newlist.append(x[j]) newlist.append(x[i] + x[i+1]) for j in range(len(x))[i+2:]: newlist.append(x[j]) x = newlist.copy() b = x.copy() for f in range(len(b))[i:]: if f == i: print(b) continue b[f] = b[f - 1][1] + b[f] b[f - 1] = b[f - 1][0] print(b) This code gives the output:
Find all possible variants of max pair of 2
Given a string of numbers like 123456, I want to find all the ways its digits can be grouped into pairs or left single. For example, from the string 123456 I would like to get the following: 12 3 4 5 6, 12 34 5 6, 1 23 4 56, etc. The nearest I was able to come to was this: strr = list("123456") x = list("123456") for i in range(int(len(strr)/2)): newlist = [] for j in range(i): newlist.append(x[j]) newlist.append(x[i] + x[i+1]) for j in range(len(x))[i+2:]: newlist.append(x[j]) x = newlist.copy() b = x.copy() for f in range(len(b))[i:]: if f == i: print(b) continue b[f] = b[f - 1][1] + b[f] b[f - 1] = b[f - 1][0] print(b) This code gives the output:
[ "It's easy to solve this problem with a recursive generator. This is similar to how you solve change-making problems, just here we have only two \"coins\", either two characters together, or one character at a time. The total change we're trying to make is the length of the input string. The fact that the characters are digits in a numeric string is irrelevant.\ndef singles_and_pairs(string):\n if len(string) <= 1: # base case\n yield list(string) # yield either [] or [string] and then quit\n return\n\n for result in singles_and_pairs(string[:-1]): # first recursion\n result.append(string[-1:])\n yield result\n\n for result in singles_and_pairs(string[:-2]): # second recursion\n result.append(string[-2:])\n yield result\n\nIf you plan on running this on large input strings, you might want to add memoization, since the recursive calls recalculate the same results quite often.\n", "Pheew, this one took me some time to get right, but it seems to finally work (edited for prettier ordering):\ndef max_2_partitions(my_string):\n if not my_string:\n return [[]]\n if len(my_string) == 1:\n return [[my_string]]\n ret = []\n for i in range(len(my_string)):\n for l in max_2_partitions(my_string[:i] + my_string[i + 1:]):\n li = sorted([my_string[i]]+l, key = lambda x: (len(x),x))\n if li not in ret:\n ret.append(li)\n for j in range(i+1,len(my_string)):\n for l in max_2_partitions(my_string[:i]+my_string[i+1:j]+my_string[j+1:]):\n li = sorted([my_string[i] + my_string[j]] + l, key = lambda x: (len(x),x))\n if li not in ret:\n ret.append(li)\n return sorted(ret, key=lambda x: (-len(x),x))\n\nExample:\nprint(max_2_partitions(\"1234\"))\n# [['1', '2', '3', '4'], ['1', '2', '34'], ['1', '3', '24'], ['1', '4', '23'], ['2', '3', '14'], ['2', '4', '13'], ['3', '4', '12'], ['12', '34'], ['13', '24'], ['14', '23']]\n\n", "12 lines of code, full permutations:\nYou can first create permutations of the string, and then add spacing:\nfrom itertools import permutations\n\ndef solution(A):\n result = []\n def dfs(A,B):\n if not B:\n result.append(A)\n else:\n for i in range(1,min(2,len(B))+1):\n dfs(A+[B[:i]],B[i:])\n \n for x in permutations(A):\n dfs([],''.join(x))\n return result\n\nprint(f\"{solution('123') = }\")\n# solution('123') = [['1', '2', '3'], ['1', '23'], ['12', '3'], ['1', '3', '2'], ['1', '32'], ['13', '2'], ['2', '1', '3'], ['2', '13'], ['21', '3'], ['2', '3', '1'], ['2', '31'], ['23', '1'], ['3', '1', '2'], ['3', '12'], ['31', '2'], ['3', '2', '1'], ['3', '21'], ['32', '1']]\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "list", "python", "range", "string" ]
stackoverflow_0074524232_list_python_range_string.txt
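Editor's note: the first answer above suggests adding memoization for large inputs. One hedged way to do that is to recast the generator as a function returning tuples so functools.lru_cache can store and share results; this is an illustrative variant, not code from the thread:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def singles_and_pairs_cached(string):
    # Tuples (hashable, immutable) let cached results be shared safely.
    if len(string) <= 1:
        return ((string,),) if string else ((),)
    results = []
    for rest in singles_and_pairs_cached(string[:-1]):
        results.append(rest + (string[-1:],))
    for rest in singles_and_pairs_cached(string[:-2]):
        results.append(rest + (string[-2:],))
    return tuple(results)

print(singles_and_pairs_cached("1234"))
# (('1', '2', '3', '4'), ('12', '3', '4'), ('1', '23', '4'), ('1', '2', '34'), ('12', '34'))
```

The recursion only ever sees prefixes of the input, so the same subproblems recur Fibonacci-style and the cache pays off quickly.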
Q: Is there a quicker way than having a for loop in a for loop I am trying to find a quicker way than using a for loop in a for loop to replace the variables in column a in one table with the variables in column b in another table. for x in range(len(a["a"])): for y in range(len(b["a"])): if a["a"][x] == b["a"][y]: a["a"] = out['a'].replace([a["a"][x]],b["b"][y]]) This currently works but is super slow; is there any way to do the same thing but make it faster? Sample Data: a = pd.DataFrame({'a': ['a','b','c','d','e','f','g', 'h', 'i']}) b = pd.DataFrame({'a': ['a','b','c','d','e','f','g'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie' 'charlie']}) Basically I am trying to replace the value in a["a"] with the values in b["b"] if a["a"] == b["a"]
Is there a quicker way than having a for loop in a for loop
I am trying to find a quicker way than using a for loop in a for loop to replace the variables in column a in one table with the variables in column b in another table. for x in range(len(a["a"])): for y in range(len(b["a"])): if a["a"][x] == b["a"][y]: a["a"] = out['a'].replace([a["a"][x]],b["b"][y]]) This currently works but is super slow; is there any way to do the same thing but make it faster? Sample Data: a = pd.DataFrame({'a': ['a','b','c','d','e','f','g', 'h', 'i']}) b = pd.DataFrame({'a': ['a','b','c','d','e','f','g'], 'b': ['alpha', 'alpha', 'alpha', 'beta', 'beta', 'charlie' 'charlie']}) Basically I am trying to replace the value in a["a"] with the values in b["b"] if a["a"] == b["a"]
[ "You cannot use the pandas where function because your two dataframes have different numbers of elements. But the code below will work (I renamed your dataframes df1 and df2 for clarity)\ndf1['a'].loc[df1['a'].isin(df2['a'])] = df2['b']\n\nwhich for your sample data results in\n a\n0 alpha\n1 alpha\n2 alpha\n3 beta\n4 beta\n5 charlie\n6 charlie\n7 h\n8 i\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074525419_dataframe_pandas_python.txt
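Editor's note: a common alternative for the record above that does not depend on positional alignment between the two frames is Series.map with a lookup built from the second frame. A minimal sketch using the question's sample data (note the comma added between the two 'charlie' entries, which was missing in the question):

```python
import pandas as pd

df1 = pd.DataFrame({'a': list('abcdefghi')})
df2 = pd.DataFrame({'a': list('abcdefg'),
                    'b': ['alpha', 'alpha', 'alpha', 'beta',
                          'beta', 'charlie', 'charlie']})

lookup = df2.set_index('a')['b']              # map each key in a to its value in b
df1['a'] = df1['a'].map(lookup).fillna(df1['a'])  # keep originals with no match
print(df1)   # alpha, alpha, alpha, beta, beta, charlie, charlie, h, i
```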
Q: I'm trying to create a website for reserving tickets. I need to add as many passengers as needed using one single form. How is that possible? `@views.route('/flight.html',methods = ['GET','POST']) def flight(): if request.method == 'POST': global no_of_passenger no_of_passengers = request.form.get('no_of_passengers')` In the above view, I'm getting the passenger count from an earlier html page which I'm using later. I need to get the input from the user as many times as the no_of_passengers. `@views.route('/passengers.html',methods = ['GET','POST']) def passenger(): if request.method == 'POST': return render_template('passengers.html') return render_template('passengers.html') @views.route('/passengersinfo.html',methods = ['GET','POST']) def passenger_information(): passengercount = no_of_passengers passengercount = int(passengercount) print(passengercount) if request.method == 'POST': for i in range(0,passengercount): passenger_info = {} passenger_info['passengername'] = request.form.get('Passenger_Name') passenger_info['Street'] = request.form.get('Street') passenger_info['City'] = request.form.get('City') passenger_info['State'] = request.form.get('State') passenger_info['ZipCode'] = request.form.get('ZipCode') return redirect(url_for("views.passenger")) return render_template('passengersinfo.html')` In this view I'm trying to run the form as per the user input using a for loop. The below attached code is the HTML form which is used to get the user form data. `{% extends 'base.html'%} {% block title %}Passenger Information Page{% endblock %} {% block content %} <form id="Form1" action = 'passengersinfo.html' method = 'POST'> <div> <label for ='Passenger_Name' >Passenger Name</label> <input type = 'text' name = 'Passenger_Name' id='Passenger_Name' id="Form1"> <br> <label for ='Street' >Street</label> <input type = 'text' name = 'Street' id='Street' id="Form1"> <br> <label for ='City' >City</label> <input type = 'text' name = 'City' id='City' id="Form1"> <br> <label for ='State' >State</label> <input type = 'text' name = 'State' id='State' id="Form1"> <br> <label for ='Zip' >Zip Code</label> <input type = 'text' name = 'Zip' id='Zip' id="Form1"> </div> <button type = 'submit' id="Form1" >Next</button> </form> {%endblock%} ` A: An easy way to implement your requirements is to use Flask-WTF. Using a FieldList and a FormField, it is possible to create a list of a predefined form. In this way you create a form for your address details and, depending on the required number, you duplicate this. In addition, you can validate the entries made. If you only want to display one nested form at a time, you can use JavaScript to navigate forward or back. The following example uses the session to avoid using global variables and stay as close to your defaults as possible. 
Flask from flask import ( Flask, redirect, render_template, request, session, url_for ) from flask_wtf import FlaskForm, Form from wtforms import ( FieldList, FormField, IntegerField, StringField, SubmitField ) from wtforms.validators import ( InputRequired, NumberRange ) app = Flask(__name__) app.secret_key = 'your secret here' class PassengersForm(FlaskForm): passenger_count = IntegerField('Ticket Count', validators=[NumberRange(min=1)] ) submit = SubmitField('Next') class PassengerForm(Form): name = StringField('Name', validators=[InputRequired()] ) street = StringField('Street/No') city = StringField('City') state = StringField('State') zipcode = StringField('Zip') class PassengerDetailsForm(FlaskForm): passengers = FieldList(FormField(PassengerForm)) submit = SubmitField() @app.route('/passengers', methods=['GET', 'POST']) def passengers(): form = PassengersForm(request.form, data={'passenger_count': 1}) if form.validate_on_submit(): session['count'] = form.passenger_count.data return redirect(url_for('.passengers_info')) return render_template('passengers.html', **locals()) @app.route('/passengers-info', methods=['GET', 'POST']) def passengers_info(): form = PassengerDetailsForm(request.form) form.passengers.min_entries = max(1, int(session.get('count', 1))) while len(form.passengers.entries) < form.passengers.min_entries: form.passengers.append_entry() if form.validate_on_submit(): for passenger in form.passengers.data: print(passenger) return redirect(url_for('.passengers')) return render_template('passengers_info.html', **locals()) HTML (./templates/passengers.html) <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Passengers</title> </head> <body> <form method="post"> {{ form.csrf_token }} <div> {{ form.passenger_count.label() }} {{ form.passenger_count() }} {% if form.passenger_count.errors -%} <ul> {% for error in form.passenger_count.errors -%} <li>{{ error }}</li> {% endfor -%} </ul> {% endif -%} </div> {{ form.submit }} </form> </body> </html> HTML (./templates/passengers_info.html) <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1"> <title>Passenger Informations</title> <style type="text/css"> .step { display: none; } .step.active { display: block; } </style> </head> <body> <form method="post"> {{ form.csrf_token }} {% for subform in form.passengers -%} <div class="step {%if loop.first %}active{% endif %}" id="step-{{loop.index0}}"> {% for field in subform -%} <div> {{ field.label() }} {{ field() }} {% if field.errors -%} <ul> {% for error in field.errors -%} <li>{{ error }}</li> {% endfor -%} </ul> {% endif -%} </div> {% endfor -%} {%if not loop.first %} <button type="button" class="btn-prev">Prev</button> {% endif %} {%if not loop.last %} <button type="button" class="btn-next">Next</button> {% else %} {{ form.submit() }} {% endif %} </div> {% endfor -%} </form> <script type="text/javascript"> (function() { let step = 0; const btns_next = document.querySelectorAll('.btn-next'); btns_next.forEach(btn => { btn.addEventListener('click', evt => { [`step-${step}`, `step-${++step}`].forEach(sel => { const elem = document.getElementById(sel); elem && elem.classList.toggle('active'); }); }); }); const btns_prev = document.querySelectorAll('.btn-prev'); btns_prev.forEach(btn => { btn.addEventListener('click', function(evt) { [`step-${step}`, `step-${--step}`].forEach(sel => { const elem = document.getElementById(sel); elem && 
elem.classList.toggle('active'); }); }); }); })(); </script> </body> </html>
I'm trying to create a website for reserving tickets. I need to add as many passengers as needed using one single form. How is that possible?
`@views.route('/flight.html',methods = ['GET','POST']) def flight(): if request.method == 'POST': global no_of_passenger no_of_passengers = request.form.get('no_of_passengers')` In the above view, I'm getting the passenger count from an earlier html page which I'm using later. I need to get the input from the user as many times as the no_of_passengers. `@views.route('/passengers.html',methods = ['GET','POST']) def passenger(): if request.method == 'POST': return render_template('passengers.html') return render_template('passengers.html') @views.route('/passengersinfo.html',methods = ['GET','POST']) def passenger_information(): passengercount = no_of_passengers passengercount = int(passengercount) print(passengercount) if request.method == 'POST': for i in range(0,passengercount): passenger_info = {} passenger_info['passengername'] = request.form.get('Passenger_Name') passenger_info['Street'] = request.form.get('Street') passenger_info['City'] = request.form.get('City') passenger_info['State'] = request.form.get('State') passenger_info['ZipCode'] = request.form.get('ZipCode') return redirect(url_for("views.passenger")) return render_template('passengersinfo.html')` In this view I'm trying to run the form as per the user input using a for loop. The below attached code is the HTML form which is used to get the user form data. `{% extends 'base.html'%} {% block title %}Passenger Information Page{% endblock %} {% block content %} <form id="Form1" action = 'passengersinfo.html' method = 'POST'> <div> <label for ='Passenger_Name' >Passenger Name</label> <input type = 'text' name = 'Passenger_Name' id='Passenger_Name' id="Form1"> <br> <label for ='Street' >Street</label> <input type = 'text' name = 'Street' id='Street' id="Form1"> <br> <label for ='City' >City</label> <input type = 'text' name = 'City' id='City' id="Form1"> <br> <label for ='State' >State</label> <input type = 'text' name = 'State' id='State' id="Form1"> <br> <label for ='Zip' >Zip Code</label> <input type = 'text' name = 'Zip' id='Zip' id="Form1"> </div> <button type = 'submit' id="Form1" >Next</button> </form> {%endblock%} `
[ "An easy way to implement your requirements is to use Flask-WTF.\nUsing a FieldList and a FormField, it is possible to create a list of a predefined form.\nIn this way you create a form for your address details and, depending on the required number, you duplicate this. In addition, you can validate the entries made.\nIf you only want to display one nested form at a time, you can use JavaScript to navigate forward or back.\nThe following example uses the session to avoid using global variables and stay as close to your defaults as possible.\nFlask\nfrom flask import (\n Flask, \n redirect, \n render_template, \n request, \n session, \n url_for\n)\nfrom flask_wtf import FlaskForm, Form\nfrom wtforms import (\n FieldList, \n FormField, \n IntegerField, \n StringField, \n SubmitField\n)\nfrom wtforms.validators import (\n InputRequired, \n NumberRange\n)\n\napp = Flask(__name__)\napp.secret_key = 'your secret here'\n\nclass PassengersForm(FlaskForm):\n passenger_count = IntegerField('Ticket Count', \n validators=[NumberRange(min=1)]\n )\n submit = SubmitField('Next')\n\nclass PassengerForm(Form):\n name = StringField('Name', \n validators=[InputRequired()]\n )\n street = StringField('Street/No')\n city = StringField('City')\n state = StringField('State')\n zipcode = StringField('Zip')\n\nclass PassengerDetailsForm(FlaskForm):\n passengers = FieldList(FormField(PassengerForm))\n submit = SubmitField()\n\n@app.route('/passengers', methods=['GET', 'POST'])\ndef passengers():\n form = PassengersForm(request.form, data={'passenger_count': 1})\n if form.validate_on_submit():\n session['count'] = form.passenger_count.data\n return redirect(url_for('.passengers_info'))\n return render_template('passengers.html', **locals())\n\n@app.route('/passengers-info', methods=['GET', 'POST'])\ndef passengers_info():\n form = PassengerDetailsForm(request.form)\n form.passengers.min_entries = max(1, int(session.get('count', 1)))\n while len(form.passengers.entries) < form.passengers.min_entries:\n form.passengers.append_entry()\n \n if form.validate_on_submit():\n for passenger in form.passengers.data:\n print(passenger)\n return redirect(url_for('.passengers'))\n\n return render_template('passengers_info.html', **locals())\n\nHTML (./templates/passengers.html)\n<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <title>Passengers</title>\n</head>\n<body>\n <form method=\"post\">\n {{ form.csrf_token }}\n <div>\n {{ form.passenger_count.label() }}\n {{ form.passenger_count() }}\n {% if form.passenger_count.errors -%}\n <ul>\n {% for error in form.passenger_count.errors -%}\n <li>{{ error }}</li>\n {% endfor -%}\n </ul>\n {% endif -%}\n </div>\n {{ form.submit }}\n </form>\n</body>\n</html>\n\nHTML (./templates/passengers_info.html)\n<!DOCTYPE html>\n<html>\n<head>\n <meta charset=\"utf-8\">\n <meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n <title>Passenger Informations</title>\n <style type=\"text/css\">\n .step {\n display: none;\n }\n .step.active {\n display: block;\n }\n </style>\n</head>\n<body>\n <form method=\"post\">\n {{ form.csrf_token }}\n {% for subform in form.passengers -%}\n <div class=\"step {%if loop.first %}active{% endif %}\" id=\"step-{{loop.index0}}\">\n {% for field in subform -%}\n <div>\n {{ field.label() }}\n {{ field() }}\n {% if field.errors -%}\n <ul>\n {% for error in field.errors -%}\n <li>{{ error }}</li>\n {% endfor -%}\n </ul>\n {% endif -%}\n </div>\n {% endfor -%}\n\n 
{%if not loop.first %}\n <button type=\"button\" class=\"btn-prev\">Prev</button>\n {% endif %}\n {%if not loop.last %}\n <button type=\"button\" class=\"btn-next\">Next</button>\n {% else %}\n {{ form.submit() }}\n {% endif %}\n </div>\n {% endfor -%}\n </form>\n\n <script type=\"text/javascript\">\n (function() {\n let step = 0;\n\n const btns_next = document.querySelectorAll('.btn-next');\n btns_next.forEach(btn => {\n btn.addEventListener('click', evt => {\n [`step-${step}`, `step-${++step}`].forEach(sel => {\n const elem = document.getElementById(sel); \n elem && elem.classList.toggle('active');\n });\n });\n });\n\n const btns_prev = document.querySelectorAll('.btn-prev');\n btns_prev.forEach(btn => {\n btn.addEventListener('click', function(evt) {\n [`step-${step}`, `step-${--step}`].forEach(sel => {\n const elem = document.getElementById(sel); \n elem && elem.classList.toggle('active');\n });\n });\n });\n\n })();\n </script>\n</body>\n</html>\n\n" ]
[ 0 ]
[]
[]
[ "flask", "html", "python" ]
stackoverflow_0074523373_flask_html_python.txt
Q: Use same Airflow task in multiple branches Is there a way I can re-use an Airflow task that needs to be executed in each branch execution? For example, I have the tasks below; task_1 and task_2 need to run in the 1st flow and task_3 in the 2nd flow, but task_comm needs to run in both cases. How can I create one task and call it in both flows? flow_1 = DummyOperator(task_id = 'flow_1') task_1 = DummyOperator(task_id = 'task_1') task_2 = DummyOperator(task_id = 'task_2') flow_2 = DummyOperator(task_id = 'flow_2') task_3 = DummyOperator(task_id = 'task_3') task_comm = DummyOperator(task_id = 'task_comm') branch >> flow_1 >> task_1 >> task2 >> task_comm branch >> flow_2 >> task_3 >> task_comm A: In your code, you have two different branches; one of them will succeed and the second will be skipped. To run the task_comm after any one of them, you just need to update its trigger rule: from airflow.utils.trigger_rule import TriggerRule task_comm = DummyOperator(task_id = 'task_comm', trigger_rule=TriggerRule.NONE_FAILED) In this case, task_comm will be executed when the tasks [task2, task_3] have one of the states [succeeded, skipped]. In your case, one of them will have succeeded and the second will have been skipped, which is enough to trigger the task.
Use same Airflow task in multiple branches
Is there a way I can re-use an Airflow task that needs to be executed in each branch execution? For example, I have the tasks below; task_1 and task_2 need to run in the 1st flow and task_3 in the 2nd flow, but task_comm needs to run in both cases. How can I create one task and call it in both flows? flow_1 = DummyOperator(task_id = 'flow_1') task_1 = DummyOperator(task_id = 'task_1') task_2 = DummyOperator(task_id = 'task_2') flow_2 = DummyOperator(task_id = 'flow_2') task_3 = DummyOperator(task_id = 'task_3') task_comm = DummyOperator(task_id = 'task_comm') branch >> flow_1 >> task_1 >> task2 >> task_comm branch >> flow_2 >> task_3 >> task_comm
[ "In your code, you have two different branches, one of them will be succeeded and the second will be skipped. To run the task_comm after any one of them, you just need to update its trigger rule:\nfrom airflow.utils.trigger_rule import TriggerRule\ntask_comm = DummyOperator(task_id = 'task_comm', trigger_rule=TriggerRule.NONE_FAILED)\n\nIn this case, task_comm will be executed when the tasks [task2, task_3] have one of the states [succeeded, skipped]. In your case, one of them will have succeeded and the second will have skipped, which is enough to trigger the task.\n" ]
[ 0 ]
[]
[]
[ "airflow", "python", "task" ]
stackoverflow_0074523502_airflow_python_task.txt
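Editor's note: putting the answer above together into a runnable skeleton. Operator import paths assume Airflow 2.x, and the branch callable here is a stand-in for whatever logic picks the flow:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.operators.python import BranchPythonOperator
from airflow.utils.trigger_rule import TriggerRule

with DAG("branch_demo", start_date=datetime(2022, 1, 1), schedule_interval=None) as dag:
    branch = BranchPythonOperator(task_id="branch",
                                  python_callable=lambda: "flow_1")  # stand-in choice
    flow_1 = DummyOperator(task_id="flow_1")
    task_1 = DummyOperator(task_id="task_1")
    task_2 = DummyOperator(task_id="task_2")
    flow_2 = DummyOperator(task_id="flow_2")
    task_3 = DummyOperator(task_id="task_3")
    # NONE_FAILED fires task_comm once all upstreams are succeeded or skipped
    task_comm = DummyOperator(task_id="task_comm", trigger_rule=TriggerRule.NONE_FAILED)

    branch >> flow_1 >> task_1 >> task_2 >> task_comm
    branch >> flow_2 >> task_3 >> task_comm
```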
Q: Convert OME-TIFF to DZI using python Basically I have an ome.tiff image file that comes from ImageJ, and I want to transform it into a .dzi file. Currently I do: (ome.tiff -> jpg -> dzi). But I want to transform it directly into .dzi. Is that possible in python? And how? I can't find anything related to this, so I decided to ask here if anyone has any information about it. A: You should be able to read an OME-TIFF with tifffile, and write a DZI file with vips. As both of these are Python, I guess there's a way of doing what you want, but you didn't share a representative input image so I cannot suggest much more.
Convert OME-TIFF to DZI using python
Basically I have an ome.tiff image file that comes from ImageJ, and I want to transform it into a .dzi file. Currently I do: (ome.tiff -> jpg -> dzi). But I want to transform it directly into .dzi. Is that possible in python? And how? I can't find anything related to this, so I decided to ask here if anyone has any information about it.
[ "You should be able to read an OME-TIFF with tifffile, and write a DZI file with vips.\nAs both of these are Python, I guess there's a way of doing what you want, but you didn't share a representative input image so I cannot suggest much more.\n" ]
[ 0 ]
[]
[]
[ "data_conversion", "image", "python", "tiff" ]
stackoverflow_0074523872_data_conversion_image_python_tiff.txt
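Editor's note: the vips route the answer above mentions is available from Python as pyvips, whose dzsave writer emits Deep Zoom pyramids directly. A minimal sketch, assuming pyvips is installed and the TIFF is readable by the local libvips build:

```python
import pyvips

image = pyvips.Image.new_from_file("input.ome.tiff", access="sequential")
image.dzsave("output")   # writes output.dzi plus an output_files/ tile folder
```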
Q: Reading and writing multiple files in a consistent order I'm trying to read multiple files and then save them after I have processed each one. Right now I'm able to do so, but the order is not correct. In the text file I'm accessing, every third line corresponds to a frame in order (Frame 1=line3, Frame2=line6), so I need my code to read the images in order. path = '/Users/Desktop/FFMPEG/results25' for i, image in enumerate(glob.glob("/Users/Desktop/FFMPEG/test25/*.png"), 1): img = cv2.imread(image) data_number =i #frame number and max is 918 n = data_number*3 with open('FS_25FULL.txt') as f: lines = f.readlines() #Returns a list where each line is a list item data = lines[n].split(")") #returns a list list_data = [string.replace('(', '').replace(' ', '').split(",") for string in data] list_data.pop(-1) cv2.line(img, P_1, P_2, color=(0,255,255), thickness=2) cv2.line(img, P_1, P_3, color=(0,255,255), thickness=2) def global_vector_cal(LIST_DATA): counter_vector = 0 #rest of my code cv2.imwrite(os.path.join(path, 'img_{}.png'.format(i)), img) The image names I have are results25_00001.png, results25_00002 and so on. The files are saved as img_1.png, img_2.png and so on, but they do not follow the incremental order of the results25 images. How can I change that? A: Expanding on the comment by @jasonharper: instead of using glob.glob("/Users/Desktop/FFMPEG/test25/*.png") you could try to use: sorted(glob.glob("/Users/Desktop/FFMPEG/test25/*.png")) It is a bit hard to provide an answer without a fully reproducible example, and output (desired and obtained).
Reading and writing multiple files in a consistent order
I'm trying to read multiple files and then save them after I have processed each one. Right now I'm able to do so, but the order is not correct. In the text file I'm accessing, every third line corresponds to a frame in order (Frame 1=line3, Frame2=line6), so I need my code to read the images in order. path = '/Users/Desktop/FFMPEG/results25' for i, image in enumerate(glob.glob("/Users/Desktop/FFMPEG/test25/*.png"), 1): img = cv2.imread(image) data_number =i #frame number and max is 918 n = data_number*3 with open('FS_25FULL.txt') as f: lines = f.readlines() #Returns a list where each line is a list item data = lines[n].split(")") #returns a list list_data = [string.replace('(', '').replace(' ', '').split(",") for string in data] list_data.pop(-1) cv2.line(img, P_1, P_2, color=(0,255,255), thickness=2) cv2.line(img, P_1, P_3, color=(0,255,255), thickness=2) def global_vector_cal(LIST_DATA): counter_vector = 0 #rest of my code cv2.imwrite(os.path.join(path, 'img_{}.png'.format(i)), img) The image names I have are results25_00001.png, results25_00002 and so on. The files are saved as img_1.png, img_2.png and so on, but they do not follow the incremental order of the results25 images. How can I change that?
[ "Expanding on the comment by @jasonharper: instead of using\nglob.glob(\"/Users/Desktop/FFMPEG/test25/*.png\")\n\nyou could try to use:\nsorted(glob.glob(\"/Users/Desktop/FFMPEG/test25/*.png\"))\n\nIt is a bit hard to provide an answer without a fully reproducible example, and output (desired and obtained).\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074525192_python.txt
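Editor's note: combining the sorted() suggestion above with reading the annotation file once (instead of on every iteration) gives a sketch like the one below. Paths and the line-indexing scheme are taken from the question and may need adjusting:

```python
import glob
import os
import cv2

out_path = '/Users/Desktop/FFMPEG/results25'

with open('FS_25FULL.txt') as f:        # read the text file once, up front
    lines = f.readlines()

for i, image in enumerate(sorted(glob.glob('/Users/Desktop/FFMPEG/test25/*.png')), 1):
    img = cv2.imread(image)
    data = lines[i * 3].split(")")      # every third line is one frame
    # ... draw lines / compute vectors as in the question ...
    cv2.imwrite(os.path.join(out_path, 'img_{}.png'.format(i)), img)
```

Since the input names are zero-padded (results25_00001.png, ...), plain lexicographic sorting already gives the correct numeric order.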
Q: Python calling a property from inside a class I'm trying to call the property protocolo on a new imagefield's upload_to argument What I'm trying to accomplish is to have the saved images use a custom filename. class biopsia(models.Model): paciente = models.CharField(max_length=50) creado = models.DateTimeField(auto_now_add=True) foto = models.ImageField(upload_to=f'fotos_biopsias/%Y/{protocolo}', blank=True) def __str__(self): return str(self.protocolo) @property def protocolo(self): return 'BIO' + str(self.creado.year) + '-' + str(biopsia._base_manager.filter( creado__year=self.creado.year, creado__lt=self.creado ).count() + 1) File "models.py", line 30, in biopsia foto = models.ImageField(upload_to=f'fotos_biopsias/%Y/{protocolo}', blank=True) NameError: name 'protocolo' is not defined I've tried defining an outside method for upload_to but still I cannot use it inside my class A: You can follow the official documentation on how to use a function as the path for ImageField. Basically, you need to define a function in the outer scope of the model class. For your case, you can try the following code: def protocolo(instance, filename): return f'fotos_biopsias/{timezone.now().year}/BIO' + str(instance.creado.year) + '-' + str(instance.__class__._base_manager.filter( creado__year=instance.creado.year, creado__lt=instance.creado ).count() + 1) + "/" + filename class biopsia(models.Model): paciente = models.CharField(max_length=50) creado = models.DateTimeField(auto_now_add=True) foto = models.ImageField(upload_to=protocolo, blank=False, null=False) def __str__(self): foto_path = self.foto.path.split('/')[3:] return '/'.join(foto_path) To be honest, this feels like a lot of computation at the DB level just to store images. If the goal is only to serve an image at a particular URL path, you can consider writing a view that acts as a wrapper at that URL path for downloading images. You can follow this example given here: Django: Serving Media Behind Custom URL.
Python calling a property from inside a class
I'm trying to call the property protocolo on a new imagefield's upload_to argument What I'm trying to accomplish is to have the saved images use a custom filename. class biopsia(models.Model): paciente = models.CharField(max_length=50) creado = models.DateTimeField(auto_now_add=True) foto = models.ImageField(upload_to=f'fotos_biopsias/%Y/{protocolo}', blank=True) def __str__(self): return str(self.protocolo) @property def protocolo(self): return 'BIO' + str(self.creado.year) + '-' + str(biopsia._base_manager.filter( creado__year=self.creado.year, creado__lt=self.creado ).count() + 1) File "models.py", line 30, in biopsia foto = models.ImageField(upload_to=f'fotos_biopsias/%Y/{protocolo}', blank=True) NameError: name 'protocolo' is not defined I've tried defining an outside method for upload_to but still I cannot use it inside my class
[ "You can follow the official documentation on how to use function as path for ImageField. Basically, you need to define a function in outer scope of the Model class. For your case, you can try the following code:\ndef protocolo(instance, filename):\n return f'fotos_biopsias/{timezone.now().year}/BIO' + str(instance.creado.year) + '-' + str(instance.__class__._base_manager.filter(\n creado__year=instance.creado.year,\n creado__lt=instance.creado\n ).count() + 1) + \"/\" + filename\n\n\nclass biopsia(models.Model):\n paciente = models.CharField(max_length=50)\n creado = models.DateTimeField(auto_now_add=True)\n foto = models.ImageField(upload_to=protocolo, blank=False, null=False)\n\n def __str__(self):\n foto_path = self.foto.path.split('/')[3:]\n return '/'.join(foto_path)\n\nTo be honest, it feels over calculating in DB level for simple storing the images. If it is only about how to show image in a particular url path, you can consider writing a view which acts as wrapper at that url path for downloading images. You can follow this example given here: Django: Serving Media Behind Custom URL.\n" ]
[ 1 ]
[]
[]
[ "django", "properties", "python" ]
stackoverflow_0074525494_django_properties_python.txt
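Editor's note: for the "serving media behind a custom URL" idea the answer above links to, a hedged sketch of such a wrapper view might look like this. The view name, URL wiring and hard-coded content type are hypothetical, not from the thread:

```python
from django.http import FileResponse, Http404
from django.shortcuts import get_object_or_404

def biopsia_foto(request, pk):
    obj = get_object_or_404(biopsia, pk=pk)
    if not obj.foto:
        raise Http404("No image for this biopsia")
    return FileResponse(obj.foto.open('rb'), content_type='image/png')
```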
Q: Error when using pyrealsense2 with multithreading I'm trying to write a program in Python, where the main thread will read depth frames from a RealSense camera and put them in a queue, and another thread that will run inference on them with a YoloV5 TensorRT model. The program runs on a Jetson Nano. For some reason, after reading about 15 frames the program crashes with the following error: Traceback (most recent call last): File "test2.py", line 59, in <module> img = np.asanyarray(c.colorize(DEPTH).get_data()) RuntimeError: Error occured during execution of the processing block! See the log for more info Here is the full code: from queue import Queue import numpy as np from ObjectDetection.objectDetectionV2 import ODModel, letterbox import torch import time from threading import Thread import cv2 from Camera.Realsense import RealSense # custom class for reading from Realsense camera def detect(queue): while True: if not queue.empty(): img0 = queue.get() if img0 is None: break img = letterbox(img0, 416, stride=32, auto=False)[0] # YoloV5 preprocessing img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB img = np.ascontiguousarray(img) print("loading image...") img = torch.tensor(img) print("loaded image") img = img.float() # uint8 to fp16/32 img /= 255 # 0 - 255 to 0.0 - 1.0 result = model(img) print(result) if __name__ == '__main__': queue = Queue() print("loading model") model = ODModel() print("model loaded") rs = RealSense() p = Thread(target=detect, args=(queue,)) c = rs.colorizer p.start() for i in range(100): RGB, DEPTH = rs.getData() img = np.asanyarray(c.colorize(DEPTH).get_data()) queue.put(img) queue.put(None) p.join() model.destroy() print("Exiting Main Thread") I tried commenting everything out and checking line by line, and I think the error is because of the c.colorizer taking too much time? When I deleted it the error went away (but of course the inference failed). If I don't remove it then the error appears after the line img = np.ascontiguousarray(img). But then why is the error not on this line? If I limit the size of the queue to at most 14, the problem stops, but then the queue is blocking so everything slows down. Also the error mentions a log, but I have no idea where it is. Can anyone help me understand what I did wrong? Thank you in advance. A: That line allocates memory, and limiting the queue size also limits memory usage, so you most likely ran out of memory. A possible solution is to just limit the queue size to 1 sample; you always get the most recent result that is within the timeframe of your processing time. Another solution is to use a deque of, say, 5 elements: your producer will append, and your consumer will pop to get the most recent item; if the deque length is greater than 3 elements, your producer will popleft to keep the deque bounded. The "counting" should be in the worker thread instead of the main thread, while the main thread runs an infinite loop to guarantee 100 images were processed before breaking out (simply switching roles with the worker thread).
Error when using pyrealsense2 with multithreading
I'm trying to write a program in Python, where the main thread will read depth frames from a RealSense camera and put them in a queue, and another thread that will run inference on them with a YoloV5 TensorRT model. The program runs on a Jetson Nano. For some reason, after reading about 15 frames the program crashes with the following error: Traceback (most recent call last): File "test2.py", line 59, in <module> img = np.asanyarray(c.colorize(DEPTH).get_data()) RuntimeError: Error occured during execution of the processing block! See the log for more info Here is the full code: from queue import Queue import numpy as np from ObjectDetection.objectDetectionV2 import ODModel, letterbox import torch import time from threading import Thread import cv2 from Camera.Realsense import RealSense # custom class for reading from Realsense camera def detect(queue): while True: if not queue.empty(): img0 = queue.get() if img0 is None: break img = letterbox(img0, 416, stride=32, auto=False)[0] # YoloV5 preprocessing img = img.transpose((2, 0, 1))[::-1] # HWC to CHW, BGR to RGB img = np.ascontiguousarray(img) print("loading image...") img = torch.tensor(img) print("loaded image") img = img.float() # uint8 to fp16/32 img /= 255 # 0 - 255 to 0.0 - 1.0 result = model(img) print(result) if __name__ == '__main__': queue = Queue() print("loading model") model = ODModel() print("model loaded") rs = RealSense() p = Thread(target=detect, args=(queue,)) c = rs.colorizer p.start() for i in range(100): RGB, DEPTH = rs.getData() img = np.asanyarray(c.colorize(DEPTH).get_data()) queue.put(img) queue.put(None) p.join() model.destroy() print("Exiting Main Thread") I tried commenting everything out and checking line by line, and I think the error is because of the c.colorizer taking too much time? When I deleted it the error went away (but of course the inference failed). If I don't remove it then the error appears after the line img = np.ascontiguousarray(img). But then why is the error not on this line? If I limit the size of the queue to at most 14, the problem stops, but then the queue is blocking so everything slows down. Also the error mentions a log, but I have no idea where it is. Can anyone help me understand what I did wrong? Thank you in advance.
[ "this line reserves memory, and limiting the queue size also limits memory usage so you most likely ran out of memory.\na possible solution is to just limit the queue size to 1 sample, you always get the most recent result that is within the timeframe of your processing time.\nanother solution is to use a deque of say, 5 elements, your producer will append, and your consumer will pop to get the most recent item, and if the deque length is greater than 3 elements then your producer will popleft to keep the deque bounded, and the \"counting\" should be in the worker thread instead of the main thread, while the main thread will have an infinite loop to guarantee 100 images were processed before breaking out of the infinite loop. (simply switching role with the worker thread.)\n" ]
[ 0 ]
[]
[]
[ "multithreading", "numpy", "nvidia_jetson_nano", "python", "realsense" ]
stackoverflow_0074524107_multithreading_numpy_nvidia_jetson_nano_python_realsense.txt
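Editor's note: the "keep only the latest frame" idea from the answer above can be sketched with collections.deque(maxlen=1), whose append and pop are thread-safe. Frame numbers stand in for real camera frames here; this is an illustrative sketch, not the thread's code:

```python
import time
from collections import deque
from threading import Event, Thread

frames = deque(maxlen=1)        # appending to a full deque drops the oldest item
done = Event()

def consumer():
    while not (done.is_set() and not frames):
        try:
            frame = frames.pop()        # always the most recent frame
        except IndexError:
            time.sleep(0.001)           # nothing available yet
            continue
        print("processing frame", frame)

t = Thread(target=consumer)
t.start()
for i in range(100):            # producer: stand-in for rs.getData()/colorize
    frames.append(i)
    time.sleep(0.01)
done.set()
t.join()
```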
Q: Remove traceback in Python on Ctrl-C Is there a way to keep tracebacks from coming up when you hit Ctrl+c, i.e. raise KeyboardInterrupt in a Python script? A: Try this: import signal import sys signal.signal(signal.SIGINT, lambda x, y: sys.exit(0)) This way you don't need to wrap everything in an exception handler. A: import sys try: # your code except KeyboardInterrupt: sys.exit(0) # or 1, or whatever Is the simplest way, assuming you still want to exit when you get a Ctrl+c. If you want to trap it without a try/except, you can use a recipe like this using the signal module, except it doesn't seem to work for me on Windows.. A: Catch the KeyboardInterrupt: try: # do something except KeyboardInterrupt: pass A: According to Proper handling of SIGINT/SIGQUIT, the only way to exit correctly if you're catching a SIGINT is to subsequently kill yourself with a SIGINT signal (see the section titled "How to be a proper program"). It is incorrect to attempt to fake the proper exit code. For more info about that, check also Why is "Doing an exit 130 is not the same as dying of SIGINT"? over on Unix Stack Exchange. It seems all the answers here which are exiting zero actually demonstrate programs that misbehave. It's preferable not to hide that you were interrupted, so that the caller (usually a shell) knows what to do. In Python, the traceback printout on stderr comes from the default sys.excepthook behaviour. So, to suppress the traceback spam, I've opted to tackle the problem directly at the cause by replacing the except hook on keyboard interrupt, and then preserve the correct exit code by re-raising the original exception: import sys, time def _no_traceback_excepthook(exc_type, exc_val, traceback): pass def main(): try: # whatever your program does here... print("hello world..") time.sleep(42) except KeyboardInterrupt: # whatever cleanup code you need here... print("bye") if sys.excepthook is sys.__excepthook__: sys.excepthook = _no_traceback_excepthook raise if __name__ == "__main__": main() The result is like this: $ python3 /tmp/example.py hello world.. ^Cbye $ echo $? 130 If you don't have any cleanup actions to execute, and all you wanted to do is suppress printing the traceback on interrupt, it may be simpler just to install that by default: import sys def _no_traceback_excepthook(exc_type, exc_val, traceback): if isinstance(exc_val, KeyboardInterrupt): return sys.__excepthook__(exc_type, exc_val, traceback) sys.excepthook = _no_traceback_excepthook A: Catch it with a try/except block: while True: try: print "This will go on forever" except KeyboardInterrupt: pass A: try: your_stuff() except KeyboardInterrupt: print("no traceback") A: Also note that by default the interpreter exits with the status code 128 + the value of SIGINT on your platform (which is 2 on most systems). import sys, signal try: # code... except KeyboardInterrupt: # Suppress tracebacks on SIGINT sys.exit(128 + signal.SIGINT) # http://tldp.org/LDP/abs/html/exitcodes.html A: suppress exception using context manager: from contextlib import suppress def output_forever(): while True: print('endless script output. Press ctrl + C to exit') if __name__ == '__main__': with suppress(KeyboardInterrupt): output_forever()
Remove traceback in Python on Ctrl-C
Is there a way to keep tracebacks from coming up when you hit Ctrl+c, i.e. raise KeyboardInterrupt in a Python script?
[ "Try this:\nimport signal\nimport sys\nsignal.signal(signal.SIGINT, lambda x, y: sys.exit(0))\n\nThis way you don't need to wrap everything in an exception handler.\n", "import sys\ntry:\n # your code\nexcept KeyboardInterrupt:\n sys.exit(0) # or 1, or whatever\n\nIs the simplest way, assuming you still want to exit when you get a Ctrl+c.\nIf you want to trap it without a try/except, you can use a recipe like this using the signal module, except it doesn't seem to work for me on Windows..\n", "Catch the KeyboardInterrupt:\ntry:\n # do something\nexcept KeyboardInterrupt:\n pass\n\n", "According to Proper handling of SIGINT/SIGQUIT, the only way to exit correctly if you're catching a SIGINT is to subsequently kill yourself with a SIGINT signal (see the section titled \"How to be a proper program\"). It is incorrect to attempt to fake the proper exit code. For more info about that, check also Why is \"Doing an exit 130 is not the same as dying of SIGINT\"? over on Unix Stack Exchange.\nIt seems all the answers here which are exiting zero actually demonstrate programs that misbehave. It's preferable not to hide that you were interrupted, so that the caller (usually a shell) knows what to do.\nIn Python, the traceback printout on stderr comes from the default sys.excepthook behaviour. So, to suppress the traceback spam, I've opted to tackle the problem directly at the cause by replacing the except hook on keyboard interrupt, and then preserve the correct exit code by re-raising the original exception:\nimport sys, time\n\ndef _no_traceback_excepthook(exc_type, exc_val, traceback):\n pass\n\ndef main():\n try:\n # whatever your program does here...\n print(\"hello world..\")\n time.sleep(42)\n except KeyboardInterrupt:\n # whatever cleanup code you need here...\n print(\"bye\")\n if sys.excepthook is sys.__excepthook__:\n sys.excepthook = _no_traceback_excepthook\n raise\n\nif __name__ == \"__main__\":\n main()\n\nThe result is like this:\n$ python3 /tmp/example.py\nhello world..\n^Cbye\n\n$ echo $?\n130\n\nIf you don't have any cleanup actions to execute, and all you wanted to do is suppress printing the traceback on interrupt, it may be simpler just to install that by default:\nimport sys\n\ndef _no_traceback_excepthook(exc_type, exc_val, traceback):\n if isinstance(exc_val, KeyboardInterrupt):\n return\n sys.__excepthook__(exc_type, exc_val, traceback)\n\nsys.excepthook = _no_traceback_excepthook\n\n", "Catch it with a try/except block:\nwhile True:\n try:\n print \"This will go on forever\"\n except KeyboardInterrupt:\n pass\n\n", "try:\n your_stuff()\nexcept KeyboardInterrupt:\n print(\"no traceback\")\n\n", "Also note that by default the interpreter exits with the status code 128 + the value of SIGINT on your platform (which is 2 on most systems).\n import sys, signal\n\n try:\n # code...\n except KeyboardInterrupt: # Suppress tracebacks on SIGINT\n sys.exit(128 + signal.SIGINT) # http://tldp.org/LDP/abs/html/exitcodes.html\n\n", "suppress exception using context manager:\nfrom contextlib import suppress\n\ndef output_forever():\n while True:\n print('endless script output. Press ctrl + C to exit')\n\n\nif __name__ == '__main__':\n with suppress(KeyboardInterrupt):\n output_forever()\n\n" ]
[ 40, 32, 8, 3, 2, 2, 1, 0 ]
[ "import sys\ntry:\n print(\"HELLO\")\n english = input(\"Enter your main launguage: \")\n print(\"GOODBYE\")\nexcept KeyboardInterrupt:\n print(\"GET LOST\")\n\n" ]
[ -6 ]
[ "keyboardinterrupt", "python", "traceback" ]
stackoverflow_0007073268_keyboardinterrupt_python_traceback.txt
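Editor's note: the "die properly of SIGINT" approach described in the fourth answer above can also be written without touching sys.excepthook, by restoring the default handler and re-delivering the signal after cleanup. A minimal sketch (POSIX behavior; os.kill with SIGINT is limited on Windows):

```python
import os
import signal
import time

try:
    time.sleep(60)                                   # stand-in for real work
except KeyboardInterrupt:
    print("cleaning up...")                          # no traceback is printed
    signal.signal(signal.SIGINT, signal.SIG_DFL)     # restore default handler
    os.kill(os.getpid(), signal.SIGINT)              # re-deliver SIGINT to self
```

The shell then sees a process killed by SIGINT (reported as exit status 130 on most systems) instead of a faked exit code.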
Q: Why do saved pytorch models retrain after loading? import torch import torchvision n_epochs = 3 batch_size_train = 64 batch_size_test = 1000 learning_rate = 0.01 momentum = 0.5 log_interval = 10 random_seed = 1 torch.backends.cudnn.enabled = False torch.manual_seed(random_seed) train_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) network = Net() optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) train_losses = [] train_counter = [] test_losses = [] test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)] def train(epoch): network.train() for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = network(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) train_losses.append(loss.item()) train_counter.append( (batch_idx*64) + ((epoch-1)*len(train_loader.dataset))) def test(): network.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: output = network(data) test_loss += F.nll_loss(output, target, size_average=False).item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() test_loss /= len(test_loader.dataset) test_losses.append(test_loss) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) test() for epoch in range(1, n_epochs + 1): train(epoch) test() torch.save(network.state_dict(), './results/model.pth') Other file: PATH = "results/model.pth" model = torch.load(PATH) When this is called, instead of loading the model parameters, Pytorch retrains the entire model. The model is just retrained the same way (ie. they take the exact same steps to get to the same local minimum). PATH = "results/model.pth" model = Net() model.load_state_dict(torch.load(PATH)) has the same result. Is there any way I can load the model without retraining the whole thing? A: I just tried executing the code, and it works perfect. 
load_state_dict did not retrain the model: import torch import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) network = Net() PATH = "results/model.pth" network.load_state_dict(torch.load(PATH)) # works perfect By the way, state_dict only contains the model weights and not the dataset, so load_state_dict can never re-train the model. I think the problem is how the original code is organized. The tranining procedure starts running inmediately after Class Net is defined, so you cannot import Net from this file without re-running everything. Ideally, the training and the testing procedure should be wrapped inside an if __name__=='__main__' statement (at the end of the file), so that you can safely import Net without re-running any calculations: # source_file.py import torch import torchvision import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) def train(epoch): network.train() for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = network(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) train_losses.append(loss.item()) train_counter.append( (batch_idx*64) + ((epoch-1)*len(train_loader.dataset))) def test(): network.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: output = network(data) test_loss += F.nll_loss(output, target, size_average=False).item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() test_loss /= len(test_loader.dataset) test_losses.append(test_loss) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. 
* correct / len(test_loader.dataset))) if __name__ == '__main__': n_epochs = 3 batch_size_train = 64 batch_size_test = 1000 learning_rate = 0.01 momentum = 0.5 log_interval = 10 random_seed = 1 torch.backends.cudnn.enabled = False torch.manual_seed(random_seed) train_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) network = Net() optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) train_losses = [] train_counter = [] test_losses = [] test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)] test() for epoch in range(1, n_epochs + 1): train(epoch) test() PATH = './results/model.pth' torch.save(network.state_dict(), PATH) Then, when you reload the model in a second file, you can write: from my_source_file import Net network = Net() network.load_state_dict(torch.load(PATH)) Here is a website with more information about if __name__ == '__main__': https://realpython.com/if-name-main-python/ PS. Another option, that I personally use, is to define the neural network in a separate file than the training procedure. This is useful to make big projects look more organized, or even to experiment with different neural network designs. Previous answer: We should use load_state_dict to restore models: model = TheModelClass(*args, **kwargs) model.load_state_dict(torch.load(PATH)) model.eval() https://pytorch.org/tutorials/beginner/saving_loading_models.html
Why do saved pytorch models retrain after loading?
import torch import torchvision n_epochs = 3 batch_size_train = 64 batch_size_test = 1000 learning_rate = 0.01 momentum = 0.5 log_interval = 10 random_seed = 1 torch.backends.cudnn.enabled = False torch.manual_seed(random_seed) train_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=True, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_train, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST('./files', train=False, download=True, transform=torchvision.transforms.Compose([ torchvision.transforms.ToTensor(), torchvision.transforms.Normalize( (0.1307,), (0.3081,)) ])), batch_size=batch_size_test, shuffle=True) examples = enumerate(test_loader) batch_idx, (example_data, example_targets) = next(examples) import torch.nn as nn import torch.nn.functional as F import torch.optim as optim class Net(nn.Module): def __init__(self): super(Net, self).__init__() self.conv1 = nn.Conv2d(1, 10, kernel_size=5) self.conv2 = nn.Conv2d(10, 20, kernel_size=5) self.conv2_drop = nn.Dropout2d() self.fc1 = nn.Linear(320, 50) self.fc2 = nn.Linear(50, 10) def forward(self, x): x = F.relu(F.max_pool2d(self.conv1(x), 2)) x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2)) x = x.view(-1, 320) x = F.relu(self.fc1(x)) x = F.dropout(x, training=self.training) x = self.fc2(x) return F.log_softmax(x) network = Net() optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum) train_losses = [] train_counter = [] test_losses = [] test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)] def train(epoch): network.train() for batch_idx, (data, target) in enumerate(train_loader): optimizer.zero_grad() output = network(data) loss = F.nll_loss(output, target) loss.backward() optimizer.step() if batch_idx % log_interval == 0: print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( epoch, batch_idx * len(data), len(train_loader.dataset), 100. * batch_idx / len(train_loader), loss.item())) train_losses.append(loss.item()) train_counter.append( (batch_idx*64) + ((epoch-1)*len(train_loader.dataset))) def test(): network.eval() test_loss = 0 correct = 0 with torch.no_grad(): for data, target in test_loader: output = network(data) test_loss += F.nll_loss(output, target, size_average=False).item() pred = output.data.max(1, keepdim=True)[1] correct += pred.eq(target.data.view_as(pred)).sum() test_loss /= len(test_loader.dataset) test_losses.append(test_loss) print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format( test_loss, correct, len(test_loader.dataset), 100. * correct / len(test_loader.dataset))) test() for epoch in range(1, n_epochs + 1): train(epoch) test() torch.save(network.state_dict(), './results/model.pth') Other file: PATH = "results/model.pth" model = torch.load(PATH) When this is called, instead of loading the model parameters, Pytorch retrains the entire model. The model is just retrained the same way (ie. they take the exact same steps to get to the same local minimum). PATH = "results/model.pth" model = Net() model.load_state_dict(torch.load(PATH)) has the same result. Is there any way I can load the model without retraining the whole thing?
[ "I just tried executing the code, and it works perfect. load_state_dict did not retrain the model:\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x)\n\nnetwork = Net()\nPATH = \"results/model.pth\"\nnetwork.load_state_dict(torch.load(PATH))\n# works perfect\n\nBy the way, state_dict only contains the model weights and not the dataset, so load_state_dict can never re-train the model.\nI think the problem is how the original code is organized. The tranining procedure starts running inmediately after Class Net is defined, so you cannot import Net from this file without re-running everything.\nIdeally, the training and the testing procedure should be wrapped inside an if __name__=='__main__' statement (at the end of the file), so that you can safely import Net without re-running any calculations:\n# source_file.py\nimport torch\nimport torchvision\n\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\n\nclass Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv1 = nn.Conv2d(1, 10, kernel_size=5)\n self.conv2 = nn.Conv2d(10, 20, kernel_size=5)\n self.conv2_drop = nn.Dropout2d()\n self.fc1 = nn.Linear(320, 50)\n self.fc2 = nn.Linear(50, 10)\n\n def forward(self, x):\n x = F.relu(F.max_pool2d(self.conv1(x), 2))\n x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))\n x = x.view(-1, 320)\n x = F.relu(self.fc1(x))\n x = F.dropout(x, training=self.training)\n x = self.fc2(x)\n return F.log_softmax(x)\n\n\ndef train(epoch):\n network.train()\n for batch_idx, (data, target) in enumerate(train_loader):\n optimizer.zero_grad()\n output = network(data)\n loss = F.nll_loss(output, target)\n loss.backward()\n optimizer.step()\n if batch_idx % log_interval == 0:\n print('Train Epoch: {} [{}/{} ({:.0f}%)]\\tLoss: {:.6f}'.format(\n epoch, batch_idx * len(data), len(train_loader.dataset),\n 100. * batch_idx / len(train_loader), loss.item()))\n train_losses.append(loss.item())\n train_counter.append(\n (batch_idx*64) + ((epoch-1)*len(train_loader.dataset)))\n\ndef test():\n network.eval()\n test_loss = 0\n correct = 0\n with torch.no_grad():\n for data, target in test_loader:\n output = network(data)\n test_loss += F.nll_loss(output, target, size_average=False).item()\n pred = output.data.max(1, keepdim=True)[1]\n correct += pred.eq(target.data.view_as(pred)).sum()\n test_loss /= len(test_loader.dataset)\n test_losses.append(test_loss)\n print('\\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\\n'.format(\n test_loss, correct, len(test_loader.dataset),\n 100. 
* correct / len(test_loader.dataset)))\n\nif __name__ == '__main__':\n \n n_epochs = 3\n batch_size_train = 64\n batch_size_test = 1000\n learning_rate = 0.01\n momentum = 0.5\n log_interval = 10\n \n random_seed = 1\n torch.backends.cudnn.enabled = False\n torch.manual_seed(random_seed)\n \n train_loader = torch.utils.data.DataLoader(\n torchvision.datasets.MNIST('./files', train=True, download=True,\n transform=torchvision.transforms.Compose([\n torchvision.transforms.ToTensor(),\n torchvision.transforms.Normalize(\n (0.1307,), (0.3081,))\n ])),\n batch_size=batch_size_train, shuffle=True)\n \n test_loader = torch.utils.data.DataLoader(\n torchvision.datasets.MNIST('./files', train=False, download=True,\n transform=torchvision.transforms.Compose([\n torchvision.transforms.ToTensor(),\n torchvision.transforms.Normalize(\n (0.1307,), (0.3081,))\n ])),\n batch_size=batch_size_test, shuffle=True)\n \n examples = enumerate(test_loader)\n batch_idx, (example_data, example_targets) = next(examples)\n \n \n network = Net()\n optimizer = optim.SGD(network.parameters(), lr=learning_rate,\n momentum=momentum)\n \n train_losses = []\n train_counter = []\n test_losses = []\n test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)]\n \n \n test()\n for epoch in range(1, n_epochs + 1):\n train(epoch)\n test()\n \n PATH = './results/model.pth'\n torch.save(network.state_dict(), PATH)\n\nThen, when you reload the model in a second file, you can write:\nfrom my_source_file import Net\nnetwork = Net()\nnetwork.load_state_dict(torch.load(PATH))\n\nHere is a website with more information about if __name__ == '__main__':\nhttps://realpython.com/if-name-main-python/\nPS. Another option, that I personally use, is to define the neural network in a separate file than the training procedure. This is useful to make big projects look more organized, or even to experiment with different neural network designs.\n\nPrevious answer:\nWe should use load_state_dict to restore models:\nmodel = TheModelClass(*args, **kwargs)\nmodel.load_state_dict(torch.load(PATH))\nmodel.eval()\n\nhttps://pytorch.org/tutorials/beginner/saving_loading_models.html\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "mnist", "python", "pytorch" ]
stackoverflow_0074525661_deep_learning_mnist_python_pytorch.txt
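A note on saving and loading for the record above: if the goal is to resume training later rather than only run inference, it helps to checkpoint the optimizer state alongside the weights. A minimal sketch, assuming Net lives in a module whose training code is guarded by if __name__ == '__main__', and reusing the names network, optimizer, epoch and PATH from above; the checkpoint keys ('model_state', 'optimizer_state', 'epoch') are arbitrary names of my own choosing, not a PyTorch convention:

import torch

torch.save({
    'model_state': network.state_dict(),
    'optimizer_state': optimizer.state_dict(),
    'epoch': epoch,
}, PATH)

# later, in a fresh process
checkpoint = torch.load(PATH)
network = Net()
network.load_state_dict(checkpoint['model_state'])
network.eval()  # turn off dropout before running inference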
Q: Change value of dictionary if it is in another dictionary I have two lists of generated dictionaries. One is like a template structured like: list_of_dicts_template = [{'year': 0, 'week': 38, 'count_tickets': 0}, {'year': 0, 'week': 39, 'count_tickets': 0}]... And the other is a list of dictionaries with values that we know: known_values_list = [{'year': 2022, 'week': 39, 'tickets': 47}, {'year': 2022, 'week': 40, 'tickets': 3}]... My problem is, I want to merge them into one list of dictionaries: if the value of the key 'week' appears in known_values_list, that dict should replace the whole dict in list_of_dicts_template. So the expected list of dicts would look like: final_list = [{'year': 0, 'week': 38, 'count_tickets': 0}, {'year': 2022, 'week': 39, 'count_tickets': 47}, {'year': 2022, 'week': 40, 'tickets': 3}]... ` I actually don't know how to approach this problem. If I had only dicts without the lists, I would do something like: for sub in dicts_template: if(sub in known_values): dicts_template[sub] = known_values[sub] But with lists of dicts, I'm completely lost. A: You could use the following code: for i,d1 in enumerate(list_of_dicts_template): for j, known_value_d in enumerate(known_values_list): if known_value_d['week'] == d1['week']: list_of_dicts_template[i] = known_value_d del known_values_list[j] break Only delete the element from known_values_list (and break out of the inner loop) if there are no duplicate weeks in list_of_dicts_template; with duplicates, a later template entry would no longer find its match.
Change value of dictionary if it is in another dictionary
I have two lists of generated dictionaries. One is like a template structured like: list_of_dicts_template = [{'year': 0, 'week': 38, 'count_tickets': 0}, {'year': 0, 'week': 39, 'count_tickets': 0}]... And the other is a list of dictionaries with values that we know: known_values_list = [{'year': 2022, 'week': 39, 'tickets': 47}, {'year': 2022, 'week': 40, 'tickets': 3}]... My problem is, I want to merge them into one list of dictionaries: if the value of the key 'week' appears in known_values_list, that dict should replace the whole dict in list_of_dicts_template. So the expected list of dicts would look like: final_list = [{'year': 0, 'week': 38, 'count_tickets': 0}, {'year': 2022, 'week': 39, 'count_tickets': 47}, {'year': 2022, 'week': 40, 'tickets': 3}]... ` I actually don't know how to approach this problem. If I had only dicts without the lists, I would do something like: for sub in dicts_template: if(sub in known_values): dicts_template[sub] = known_values[sub] But with lists of dicts, I'm completely lost.
[ "You could use the following code:\nfor i,d1 in enumerate(list_of_dicts_template):\n    for j, known_value_d in enumerate(known_values_list):\n        if known_value_d['week'] == d1['week']:\n            list_of_dicts_template[i] = known_value_d\n            del known_values_list[j]\n            break\n\nOnly delete the element from known_values_list (and break out of the inner loop) if there are no duplicate weeks in list_of_dicts_template; with duplicates, a later template entry would no longer find its match.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074525639_dictionary_list_python.txt
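A shorter alternative to the loop in the answer above: build a lookup keyed on 'week' so each template entry is matched in constant time. This sketch assumes weeks are unique within known_values_list:

known_by_week = {d['week']: d for d in known_values_list}
final_list = [known_by_week.get(d['week'], d) for d in list_of_dicts_template]

dict.get falls back to the template dict when the week has no known value, which reproduces the final_list shown in the question.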
Q: How to add emojis in the code for automating with PYAUTOGUI import time from datetime import datetime import pyautogui import os import emoji text = emoji.emojize(":thumbs_up:") Time = input("Enter your time here:") while(True): present = datetime.now() present = present.strftime("%H:%M") if (present == Time): pyautogui.write(text , interval=0.25) time.sleep(2) pyautogui.press("enter") time.sleep(2) break Can anyone help me make pyautogui type emojis too? I'm a beginner.
How to add emojis in the code for automating with PYAUTOGUI
import time from datetime import datetime import pyautogui import os import emoji text = emoji.emojize(":thumbs_up:") Time = input("Enter your time here:") while(True): present = datetime.now() present = present.strftime("%H:%M") if (present == Time): pyautogui.write(text , interval=0.25) time.sleep(2) pyautogui.press("enter") time.sleep(2) break Can anyone help me make pyautogui type emojis too? I'm a beginner.
[ "You can use\npyautogui.hotkey(\"alt\", \"ALT CODE HERE\") \n\nand place the Alt code of the emoji in the \"ALT CODE HERE\" section\n" ]
[ 0 ]
[]
[]
[ "pyautogui", "python" ]
stackoverflow_0071222250_pyautogui_python.txt
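pyautogui.write can only press keys that exist on the keyboard, so emojis are usually pasted from the clipboard instead. A minimal sketch, assuming the pyperclip package is installed (pip install pyperclip), that the focused window accepts Ctrl+V (use "command" instead of "ctrl" on macOS), and reusing the text variable from the question:

import pyperclip
import pyautogui

pyperclip.copy(text)            # put the emoji string on the clipboard
pyautogui.hotkey("ctrl", "v")   # paste it into the focused window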
Q: Finding a column within Multi-index How would I refer to the column for Price and small? Example from the code below: ` idx = pd.MultiIndex.from_product([['Quantity', 'Price'], ['medium', 'large', 'small']]) idx MultiIndex([('Quantity', 'medium'), ('Quantity', 'large'), ('Quantity', 'small'), ( 'Price', 'medium'), ( 'Price', 'large'), ( 'Price', 'small')], ) df[idx] ` I tried df('Price','small') but honestly I'm a bit new at this and not sure how to refer to it. A: When you have a single-level / flat index, the column coordinate is a simple string: df["ColumnName"] When your dataframe columns are a multi-index, the coordinate is an n-tuple: df[("NameAtLevel0", "NameAtLevel1", "NameAtLevel2")] Following that pattern, retrieve your Price-small column with: df[("Price", "small")]
Finding a column within Multi-index
How would I refer to the column for Price and small? Example from the code below: ` idx = pd.MultiIndex.from_product([['Quantity', 'Price'], ['medium', 'large', 'small']]) idx MultiIndex([('Quantity', 'medium'), ('Quantity', 'large'), ('Quantity', 'small'), ( 'Price', 'medium'), ( 'Price', 'large'), ( 'Price', 'small')], ) df[idx] ` I tried df('Price','small') but honestly I'm a bit new at this and not sure how to refer to it.
[ "When you have a single-level / flat index, the column coordinate is a simple string:\ndf[\"ColumnName\"]\n\nWhen your dataframe columns are a multi-index, the coordinate is an n-tuple:\ndf[(\"NameAtLevel0\", \"NameAtLevel1\", \"NameAtLevel2\")]\n\nFollowing that pattern, retrieve your Price-small column with:\ndf[(\"Price\", \"small\")]\n\n" ]
[ 2 ]
[]
[]
[ "multi_index", "pandas", "python" ]
stackoverflow_0074524925_multi_index_pandas_python.txt
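To make the answer above concrete, here is a small self-contained example; the random data is only there to give the frame something to hold:

import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([['Quantity', 'Price'], ['medium', 'large', 'small']])
df = pd.DataFrame(np.random.rand(3, 6), columns=idx)

price_small = df[("Price", "small")]           # one column, as a Series
all_small   = df.xs("small", axis=1, level=1)  # 'small' under every top level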
Q: Can spacy's text categorizer learn the logic of recognizing two words in order? I'm trying to determine if Spacy's text categorizer can learn a simple logic to detect the presence of two consecutive words in order: "jhon died". After training, for this experiment, the only results that matter are the output for the same texts used in the training samples, but I have been unable to have it match only "jhon died" and not "died jhon". Is spacy's textcat unable to consider the order of the tokens during categorization? The training, evaluation and test sets are repetitions of this 4 samples: rows.append(["jhon died", 1]) rows.append(["died jhon", 0]) rows.append(["died", 0]) rows.append(["jhon", 0]) These are the set sizes: Total: 76 - Train: 57 - Dev: 11 - Test: 8 I populate all sets with: db = spacy.tokens.DocBin() docs = [] for doc, label in nlp.pipe(data, as_tuples=True): doc.cats["POS"] = label == 1 doc.cats["NEG"] = label == 0 db.add(doc) db.to_disk(outfile) Training command is: python -m spacy init config --lang en --pipeline textcat --optimize efficiency --force config.cfg When testing this: texts = ["jhon", "jhon died", "died", "died jhon", "died fast", "fast jhon"] nlp = spacy.load("./model/model-best") for text in texts: doc = nlp(text) diff = doc.cats['POS'] - doc.cats['NEG'] print("yes" if diff > 0 else ("no" if diff < 0 else "neither") , "-", text, doc.cats) I get: no - jhon {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - jhon died {'POS': 0.4730854034423828, 'NEG': 0.5269145965576172} no - died {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - died jhon {'POS': 0.4730854034423828, 'NEG': 0.5269145965576172} no - died fast {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - fast jhon {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} If i change the "died jhon" to classify (rows.append(["died jhon", 0])), the I get this: no - jhon {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} yes - jhon died {'POS': 0.8561566472053528, 'NEG': 0.1438433676958084} no - died {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} yes - died jhon {'POS': 0.8561566472053528, 'NEG': 0.1438433676958084} no - died fast {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} no - fast jhon {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} The result I'm expecting should match the original samples like this: no - jhon {...} yes - jhon died {...} no - died {...} no - died jhon {...} no - died fast {...} // Result doesn't matter here. no - fast jhon {...} // Result doesn't matter here. Here is the colab I'm working on for reference: https://colab.research.google.com/drive/1rnYhc-h4e0VlgatWzy1Z3-1rNbd0bGvM#scrollTo=tzXLe-IahuA5 A: Yes it can, it seems impractical to use the train command for trivial examples. The following code does exactly what is requested. 
Just using the default optimizer and basic updates on the model: import spacy from spacy.training import Example samples = [ ["jhon died", 1], ["died jhon", 0], ["died", 0], ["jhon", 0] ] for r in samples: print(r) def train(samples, repetitions): nlp = spacy.blank("en") textcat = nlp.add_pipe( "textcat") textcat.add_label("POS") textcat.add_label("NEG") optimizer = nlp.initialize() for i in range(0, repetitions): for raw_text, label in samples: predicted = nlp(raw_text) reference = nlp(raw_text) reference.cats["POS"] = label == 1 reference.cats["NEG"] = label == 0 example = Example(predicted=predicted, reference=reference) nlp.update([example], sgd=optimizer) return nlp train(samples, 5)import spacy from spacy.training import Example samples = [ ["jhon died", 1], ["died jhon", 0], ["died", 0], ["jhon", 0] ] for r in samples: print(r) def train(samples, repetitions): nlp = spacy.blank("en") textcat = nlp.add_pipe( "textcat") textcat.add_label("POS") textcat.add_label("NEG") optimizer = nlp.initialize() for i in range(0, repetitions): for raw_text, label in samples: predicted = nlp(raw_text) reference = nlp(raw_text) reference.cats["POS"] = label == 1 reference.cats["NEG"] = label == 0 example = Example(predicted=predicted, reference=reference) nlp.update([example], sgd=optimizer) return nlp # It seems 10 iterations gives better results than 5 for new words train(samples, 10).to_disk('./test-model') Testing the model: import spacy nlp = spacy.blank("en") nlp.add_pipe( "textcat") nlp.from_disk("./test-model") texts = ["jhon died", "died jhon", "died", "jhon", "jhon walked", "died smiled"] for text in texts: doc = nlp(text) diff = doc.cats['POS'] - doc.cats['NEG'] print("yes" if diff > 0 else ("no" if diff < 0 else "neither") , "-", text, doc.cats) Final Output: yes - jhon died {'POS': 0.997473418712616, 'NEG': 0.002526533557102084} no - died jhon {'POS': 0.0009508572984486818, 'NEG': 0.9990491271018982} no - died {'POS': 0.0012573363492265344, 'NEG': 0.9987426400184631} no - jhon {'POS': 0.0008163611637428403, 'NEG': 0.9991835951805115} no - jhon walked {'POS': 0.44277048110961914, 'NEG': 0.5572295188903809} no - died smiled {'POS': 0.014941525645554066, 'NEG': 0.9850584268569946} Here is the working colab for reference: https://colab.research.google.com/drive/1rnYhc-h4e0VlgatWzy1Z3-1rNbd0bGvM
Can spacy's text categorizer learn the logic of recognizing two words in order?
I'm trying to determine if Spacy's text categorizer can learn a simple logic to detect the presence of two consecutive words in order: "jhon died". After training, for this experiment, the only results that matter are the output for the same texts used in the training samples, but I have been unable to have it match only "jhon died" and not "died jhon". Is spacy's textcat unable to consider the order of the tokens during categorization? The training, evaluation and test sets are repetitions of this 4 samples: rows.append(["jhon died", 1]) rows.append(["died jhon", 0]) rows.append(["died", 0]) rows.append(["jhon", 0]) These are the set sizes: Total: 76 - Train: 57 - Dev: 11 - Test: 8 I populate all sets with: db = spacy.tokens.DocBin() docs = [] for doc, label in nlp.pipe(data, as_tuples=True): doc.cats["POS"] = label == 1 doc.cats["NEG"] = label == 0 db.add(doc) db.to_disk(outfile) Training command is: python -m spacy init config --lang en --pipeline textcat --optimize efficiency --force config.cfg When testing this: texts = ["jhon", "jhon died", "died", "died jhon", "died fast", "fast jhon"] nlp = spacy.load("./model/model-best") for text in texts: doc = nlp(text) diff = doc.cats['POS'] - doc.cats['NEG'] print("yes" if diff > 0 else ("no" if diff < 0 else "neither") , "-", text, doc.cats) I get: no - jhon {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - jhon died {'POS': 0.4730854034423828, 'NEG': 0.5269145965576172} no - died {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - died jhon {'POS': 0.4730854034423828, 'NEG': 0.5269145965576172} no - died fast {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} no - fast jhon {'POS': 0.1631753146648407, 'NEG': 0.8368247151374817} If i change the "died jhon" to classify (rows.append(["died jhon", 0])), the I get this: no - jhon {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} yes - jhon died {'POS': 0.8561566472053528, 'NEG': 0.1438433676958084} no - died {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} yes - died jhon {'POS': 0.8561566472053528, 'NEG': 0.1438433676958084} no - died fast {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} no - fast jhon {'POS': 0.21423980593681335, 'NEG': 0.785760223865509} The result I'm expecting should match the original samples like this: no - jhon {...} yes - jhon died {...} no - died {...} no - died jhon {...} no - died fast {...} // Result doesn't matter here. no - fast jhon {...} // Result doesn't matter here. Here is the colab I'm working on for reference: https://colab.research.google.com/drive/1rnYhc-h4e0VlgatWzy1Z3-1rNbd0bGvM#scrollTo=tzXLe-IahuA5
[ "Yes it can, it seems impractical to use the train command for trivial examples.\nThe following code does exactly what is requested. Just using the default optimizer and basic updates on the model:\nimport spacy\nfrom spacy.training import Example\n\nsamples = [\n [\"jhon died\", 1],\n [\"died jhon\", 0],\n [\"died\", 0],\n [\"jhon\", 0]\n]\n\nfor r in samples:\n print(r)\n\ndef train(samples, repetitions):\n nlp = spacy.blank(\"en\")\n\n textcat = nlp.add_pipe( \"textcat\")\n textcat.add_label(\"POS\")\n textcat.add_label(\"NEG\")\n\n optimizer = nlp.initialize()\n for i in range(0, repetitions):\n for raw_text, label in samples:\n predicted = nlp(raw_text)\n reference = nlp(raw_text)\n reference.cats[\"POS\"] = label == 1\n reference.cats[\"NEG\"] = label == 0\n example = Example(predicted=predicted, reference=reference)\n nlp.update([example], sgd=optimizer)\n\n return nlp\n\ntrain(samples, 5)import spacy\nfrom spacy.training import Example\n\nsamples = [\n [\"jhon died\", 1],\n [\"died jhon\", 0],\n [\"died\", 0],\n [\"jhon\", 0]\n]\n\nfor r in samples:\n print(r)\n\ndef train(samples, repetitions):\n nlp = spacy.blank(\"en\")\n\n textcat = nlp.add_pipe( \"textcat\")\n textcat.add_label(\"POS\")\n textcat.add_label(\"NEG\")\n\n optimizer = nlp.initialize()\n for i in range(0, repetitions):\n for raw_text, label in samples:\n predicted = nlp(raw_text)\n reference = nlp(raw_text)\n reference.cats[\"POS\"] = label == 1\n reference.cats[\"NEG\"] = label == 0\n example = Example(predicted=predicted, reference=reference)\n nlp.update([example], sgd=optimizer)\n\n return nlp\n\n# It seems 10 iterations gives better results than 5 for new words\ntrain(samples, 10).to_disk('./test-model')\n\nTesting the model:\nimport spacy\nnlp = spacy.blank(\"en\")\nnlp.add_pipe( \"textcat\")\nnlp.from_disk(\"./test-model\")\n\ntexts = [\"jhon died\", \"died jhon\", \"died\", \"jhon\", \"jhon walked\", \"died smiled\"]\nfor text in texts:\n doc = nlp(text)\n diff = doc.cats['POS'] - doc.cats['NEG']\n print(\"yes\" if diff > 0 else (\"no\" if diff < 0 else \"neither\") , \"-\", text, doc.cats)\n\nFinal Output:\nyes - jhon died {'POS': 0.997473418712616, 'NEG': 0.002526533557102084}\nno - died jhon {'POS': 0.0009508572984486818, 'NEG': 0.9990491271018982}\nno - died {'POS': 0.0012573363492265344, 'NEG': 0.9987426400184631}\nno - jhon {'POS': 0.0008163611637428403, 'NEG': 0.9991835951805115}\nno - jhon walked {'POS': 0.44277048110961914, 'NEG': 0.5572295188903809}\nno - died smiled {'POS': 0.014941525645554066, 'NEG': 0.9850584268569946}\n\nHere is the working colab for reference: https://colab.research.google.com/drive/1rnYhc-h4e0VlgatWzy1Z3-1rNbd0bGvM\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "machine_learning", "python", "spacy_3" ]
stackoverflow_0074514910_deep_learning_machine_learning_python_spacy_3.txt
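A small convenience wrapper over the trained pipeline from the answer above, turning doc.cats into a single label; the 0.5 threshold is an arbitrary choice of mine, not anything spaCy prescribes:

def predict(nlp, text, threshold=0.5):
    # return the highest-scoring category, or None if nothing is confident
    doc = nlp(text)
    label, score = max(doc.cats.items(), key=lambda kv: kv[1])
    return label if score >= threshold else None

for t in ["jhon died", "died jhon"]:
    print(t, "->", predict(nlp, t))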
Q: How to solve extracting data with scrapy because from contacts doesn't do anything? import scrapy import pycountry from locations. Items import GeojsonPointItem from locations. Categories import Code from typing import List, Dict import uuid creating the metadata #class class TridentSpider(scrapy.Spider): name: str = 'trident_dac' spider_type: str = 'chain' spider_categories: List[str] = [Code.MANUFACTURING] spider_countries: List[str] = [pycountry.countries.lookup('in').alpha_3] item_attributes: Dict[str, str] = {'brand': 'Trident Group'} allowed_domains: List[str] = ['tridentindia.com'] #start script def start_requests(self): url: str = "https://www.tridentindia.com/contact" yield scrapy.Request( url=url, callback=self.parse_contacts ) `parse data from the website using xpath` def parse_contacts(self, response): email: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[2]/div/div[2]/div/ul/li[1]/a[2]/text()").get() ] phone: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[2]/div/div[2]/div/ul/li[1]/a[1]/text()").get(), ] address: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[1]/div/div[2]/div/ul/li[1]/address/text()").get(), ] dataUrl: str = 'https://www.tridentindia.com/contact' yield scrapy.Request( dataUrl, callback=self. Parse, cb_kwargs=dict(email=email, phone=phone, address=address) ) Parsing data from above def parse(self, response, email: List[str], phone: List[str], address: List[str]): ''' @url https://www.tridentindia.com/contact' @returns items 1 6 @cb_kwargs {"email": ["corp@tridentindia.com"], "phone": ["0161-5038888 / 5039999"], "address": ["E-212, Kitchlu Nagar Ludhiana - 141001, Punjab, India"]} @scrapes ref addr_full website ''' responseData = response.json() `response trom data` for row in responseData['data']: data = { "ref": uuid.uuid4().hex, 'addr_full': address, 'website': 'https://www.tridentindia.com', 'email': email, 'phone': phone, } yield GeojsonPointItem(**data) I want to extract the address (location) with the phone number and email of the 6 offices from html because I couldn't find a json with data. At the end of the extraction I want to save it as json to be able to load it on a map and check if the extracted addresses match their real location. I use scrapy because I want to learn it. I am new to web scraping using scrapy. A: There are 6 offices and none of them contain email. It didn't make sense, why have you included email item where it's clear to look that there are no email in 6 offices and the way that you are using to extract data isn't correct and perpect. So you can try yhe next example. 
Code: import scrapy class TestSpider(scrapy.Spider): name = "test" def start_requests(self): url = 'https://www.tridentindia.com/contact' yield scrapy.Request(url, callback=self.parse) def parse(self, response): for card in response.xpath('//*[@class="cp-correspondence typ-need-asst"]/ul/li'): yield { 'phone':''.join(card.xpath('.//*[@class="address"]/span[2]//text()').getall()).split(':')[-1].replace('\xad','').strip(), 'address':card.xpath('.//*[@class="address"]/span[1]/text()').get(), 'url':response.url } Output as json format: [ { "phone": "+91 - 161 - 5039999", "address": "E-212, Kitchlu Nagar Ludhiana - 141001, Punjab, India", "url": "https://www.tridentindia.com/contact" }, { "phone": "1800 180 2999", "address": "Trident Group, Sanghera – 148101, India", "url": "https://www.tridentindia.com/contact" }, { "phone": "0124 - 2350399", "address": "25, A, 15 Shahtoot Marg, DLF Phase-1, Sector 26A, Gurugram, Haryana-122002", "url": "https://www.tridentindia.com/contact" }, { "phone": "0172 - 4602593 / 2742612", "address": "SCO 20 - 21, Sector 9D, Madhya Marg, Chandigarh - 160009", "url": "https://www.tridentindia.com/contact" }, { "phone": "0755 - 2660479", "address": "Trident Limited, H.NO. - 3, Nadir Colony, Shyamla Hills, Bhopal - 462013", "url": "https://www.tridentindia.com/contact" }, { "phone": "01679 - 244700 - 703 - 707", "address": "Trident Limited, Sanghera Complex, Raikot Road, Barnala - 148101, Punjab", "url": "https://www.tridentindia.com/contact" } ]
How to solve extracting data with scrapy because from contacts doesn't do anything?
import scrapy import pycountry from locations. Items import GeojsonPointItem from locations. Categories import Code from typing import List, Dict import uuid creating the metadata #class class TridentSpider(scrapy.Spider): name: str = 'trident_dac' spider_type: str = 'chain' spider_categories: List[str] = [Code.MANUFACTURING] spider_countries: List[str] = [pycountry.countries.lookup('in').alpha_3] item_attributes: Dict[str, str] = {'brand': 'Trident Group'} allowed_domains: List[str] = ['tridentindia.com'] #start script def start_requests(self): url: str = "https://www.tridentindia.com/contact" yield scrapy.Request( url=url, callback=self.parse_contacts ) `parse data from the website using xpath` def parse_contacts(self, response): email: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[2]/div/div[2]/div/ul/li[1]/a[2]/text()").get() ] phone: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[2]/div/div[2]/div/ul/li[1]/a[1]/text()").get(), ] address: List[str] = [ response.xpath( "//*[@id='gatsby-focus- wrapper']/main/div[2]/div[1]/div/div[2]/div/ul/li[1]/address/text()").get(), ] dataUrl: str = 'https://www.tridentindia.com/contact' yield scrapy.Request( dataUrl, callback=self. Parse, cb_kwargs=dict(email=email, phone=phone, address=address) ) Parsing data from above def parse(self, response, email: List[str], phone: List[str], address: List[str]): ''' @url https://www.tridentindia.com/contact' @returns items 1 6 @cb_kwargs {"email": ["corp@tridentindia.com"], "phone": ["0161-5038888 / 5039999"], "address": ["E-212, Kitchlu Nagar Ludhiana - 141001, Punjab, India"]} @scrapes ref addr_full website ''' responseData = response.json() `response trom data` for row in responseData['data']: data = { "ref": uuid.uuid4().hex, 'addr_full': address, 'website': 'https://www.tridentindia.com', 'email': email, 'phone': phone, } yield GeojsonPointItem(**data) I want to extract the address (location) with the phone number and email of the 6 offices from html because I couldn't find a json with data. At the end of the extraction I want to save it as json to be able to load it on a map and check if the extracted addresses match their real location. I use scrapy because I want to learn it. I am new to web scraping using scrapy.
[ "There are 6 offices and none of them contain email. It didn't make sense, why have you included email item where it's clear to look that there are no email in 6 offices and the way that you are using to extract data isn't correct and perpect. So you can try yhe next example.\nCode:\nimport scrapy\nclass TestSpider(scrapy.Spider):\n name = \"test\"\n\n def start_requests(self):\n url = 'https://www.tridentindia.com/contact'\n yield scrapy.Request(url, callback=self.parse)\n\n\n def parse(self, response):\n\n for card in response.xpath('//*[@class=\"cp-correspondence typ-need-asst\"]/ul/li'):\n yield {\n\n 'phone':''.join(card.xpath('.//*[@class=\"address\"]/span[2]//text()').getall()).split(':')[-1].replace('\\xad','').strip(),\n 'address':card.xpath('.//*[@class=\"address\"]/span[1]/text()').get(),\n 'url':response.url\n }\n\nOutput as json format:\n[\n {\n \"phone\": \"+91 - 161 - 5039999\",\n \"address\": \"E-212, Kitchlu Nagar Ludhiana - 141001, Punjab, India\",\n \"url\": \"https://www.tridentindia.com/contact\"\n },\n {\n \"phone\": \"1800 180 2999\",\n \"address\": \"Trident Group, Sanghera – 148101, India\",\n \"url\": \"https://www.tridentindia.com/contact\"\n },\n {\n \"phone\": \"0124 - 2350399\",\n \"address\": \"25, A, 15 Shahtoot Marg, DLF Phase-1, Sector 26A, Gurugram, Haryana-122002\",\n \"url\": \"https://www.tridentindia.com/contact\"\n },\n {\n \"phone\": \"0172 - 4602593 / 2742612\",\n \"address\": \"SCO 20 - 21, Sector 9D, Madhya Marg, Chandigarh - 160009\",\n \"url\": \"https://www.tridentindia.com/contact\"\n },\n {\n \"phone\": \"0755 - 2660479\",\n \"address\": \"Trident Limited, H.NO. - 3, Nadir Colony, Shyamla Hills, Bhopal - 462013\",\n \"url\": \"https://www.tridentindia.com/contact\"\n },\n {\n \"phone\": \"01679 - 244700 - 703 - 707\",\n \"address\": \"Trident Limited, Sanghera Complex, Raikot Road, Barnala - 148101, Punjab\",\n \"url\": \"https://www.tridentindia.com/contact\"\n }\n]\n\n" ]
[ 0 ]
[]
[]
[ "python", "scrapy" ]
stackoverflow_0074520769_python_scrapy.txt
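To get the JSON file the question asks for, Scrapy's feed exports can write the yielded items directly, with no extra code in parse. A sketch assuming a reasonably recent Scrapy (2.4 or later, where the -O flag and the FEEDS overwrite option exist):

import scrapy

# from the command line:
#   scrapy crawl test -O contacts.json
# or baked into the spider itself:
class TestSpider(scrapy.Spider):
    name = "test"
    custom_settings = {
        "FEEDS": {"contacts.json": {"format": "json", "overwrite": True}},
    }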
Q: Why is this throwing an exception when I try to save the attachment from Outlook? I am trying to iterate through the contents of a subfolder, and if the message contains an .xlsx attachment, download the attachment to a local directory. I have confirmed all other parts of this program work until that line, which throws an exception each time. I am running the following code in a Jupyter notebook through VSCode: # import libraries import win32com.client import re import os # set up connection to outlook path = os.path.expanduser("~\\Desktop\\SBD_DB") print(path) outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder(6) target_folder = inbox.Folders['SBD - Productivity'].Folders['Productivity Data Request'] target_folder.Name messages = target_folder.Items message = messages.GetLast() # while True: x=0 while x < 100: try: # print(message.subject) # get the subject of the email for attachment in message.attachments: if 'xlsx' in attachment.FileName: # print("reached") attachment.SaveAsFile(os.path.join(path, str(attachment.FileName))) print("found excel:", attachment.FileName) message = messages.GetPrevious() x+=1 except: print("exception") message = messages.GetPrevious() x+=1 A: Looks like the following line of code throws an exception at runtime: attachment.SaveAsFile(os.path.join(path, str(attachment.FileName))) First, make sure that you deal with an attached file, not a link to the actual file. The Attachment.Type property returns an OlAttachmentType constant indicating the type of the specified object. You are interested in the olByValue value when the attachment is a copy of the original file and can be accessed even if the original file is removed. Second, you need to make sure that the file path (especially the FileName property) doesn't contain forbidden symbols, see What characters are forbidden in Windows and Linux directory names? for more information. Third, make sure that a target folder exists on the disk and points to the local folder. According to the exception message: 'Cannot save the attachment. Path does not exist. Verify the path is correct.' That is it. Try to open the folder manually first, according to the error message the path doesn't exist. Before calling the SaveAsFile method you need to created the target folder or make sure it exists before.
Why is this throwing an exception when I try to save the attachment from Outlook?
I am trying to iterate through the contents of a subfolder, and if the message contains an .xlsx attachment, download the attachment to a local directory. I have confirmed all other parts of this program work until that line, which throws an exception each time. I am running the following code in a Jupyter notebook through VSCode: # import libraries import win32com.client import re import os # set up connection to outlook path = os.path.expanduser("~\\Desktop\\SBD_DB") print(path) outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI") inbox = outlook.GetDefaultFolder(6) target_folder = inbox.Folders['SBD - Productivity'].Folders['Productivity Data Request'] target_folder.Name messages = target_folder.Items message = messages.GetLast() # while True: x=0 while x < 100: try: # print(message.subject) # get the subject of the email for attachment in message.attachments: if 'xlsx' in attachment.FileName: # print("reached") attachment.SaveAsFile(os.path.join(path, str(attachment.FileName))) print("found excel:", attachment.FileName) message = messages.GetPrevious() x+=1 except: print("exception") message = messages.GetPrevious() x+=1
[ "Looks like the following line of code throws an exception at runtime:\nattachment.SaveAsFile(os.path.join(path, str(attachment.FileName)))\n\nFirst, make sure that you deal with an attached file, not a link to the actual file. The Attachment.Type property returns an OlAttachmentType constant indicating the type of the specified object. You are interested in the olByValue value when the attachment is a copy of the original file and can be accessed even if the original file is removed.\nSecond, you need to make sure that the file path (especially the FileName property) doesn't contain forbidden symbols, see What characters are forbidden in Windows and Linux directory names? for more information.\nThird, make sure that a target folder exists on the disk and points to the local folder. According to the exception message:\n 'Cannot save the attachment. Path does not exist. Verify the path is correct.'\n\nThat is it. Try to open the folder manually first, according to the error message the path doesn't exist. Before calling the SaveAsFile method you need to created the target folder or make sure it exists before.\n" ]
[ 1 ]
[]
[]
[ "email_attachments", "office_automation", "outlook", "python", "win32com" ]
stackoverflow_0074525643_email_attachments_office_automation_outlook_python_win32com.txt
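Following the answer above, a sketch that creates the target folder if it is missing and strips the characters Windows forbids before saving; safe_name is a hypothetical helper of my own, not part of win32com, and path and attachment are the names from the question:

import os
import re

os.makedirs(path, exist_ok=True)  # make sure the destination folder exists

def safe_name(filename):
    # replace characters Windows forbids in file names
    return re.sub(r'[<>:"/\\|?*]', '_', filename)

attachment.SaveAsFile(os.path.join(path, safe_name(str(attachment.FileName))))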
Q: Python- Convert string which have numbers and letters to float for np.list I have a text that I use for taking data. I want to take this "line" and make it numpy list. My data is string but it has numbers and E letters. Because of this I can't convert it to float and taking it to list. import numpy as np import re with open("FEMMeshGmsh.inp", "r") as file: for line in file.readlines(): if "+" in line: line = line[:-1] a = np.array(line) print(a) 10,1,0.0000000000000E+00 11,1,0.0000000000000E+00 26,1,0.0000000000000E+00 27,1,0.0000000000000E+00 80,1,6.2500000000000E+01 152,1,0.0000000000000E+00 153,1,0.0000000000000E+00 154,1,0.0000000000000E+00 155,1,6.2500000000000E+01 156,1,6.2500000000000E+01 157,1,6.2500000000000E+01 158,1,6.2500000000000E+01 159,1,0.0000000000000E+00 160,1,0.0000000000000E+00 161,1,0.0000000000000E+00 162,1,6.2500000000000E+01 163,1,6.2500000000000E+01 164,1,6.2500000000000E+01 165,1,6.2500000000000E+01 166,1,6.2500000000000E+01 424,1,1.2500000000000E+02 425,1,1.2500000000000E+02 426,1,1.2500000000000E+02 427,1,1.2500000000000E+02 428,1,1.2500000000000E+02 429,1,1.2500000000000E+02 430,1,1.2500000000000E+02 I tried this code but the output is not in the list. I tried to convert this string to float using astype. But I took ValueError: could not convert string to float: '10,1,0.0000000000000E+00' this error. A: You can do that : import numpy as np import re with open("FEMMeshGmsh.inp", "r") as file: for line in file.readlines(): if "+" in line: line = line[:-1] line_array = line.split(",") number_array = line_array[-1].split("E+") line_array[-1] = float(number_array[0]) * 10 ** int(number_array[1]) a = np.array(line_array) print(a) A: In [757]: lines="""10,1,0.0000000000000E+00 ...: 11,1,0.0000000000000E+00 ...: 26,1,0.0000000000000E+00 ...: 27,1,0.0000000000000E+00 ...: 80,1,6.2500000000000E+01""".splitlines() Just split the lines on the comma; conversion to floats is easy from that: In [758]: lines1 = [l.split(',') for l in lines] In [759]: lines1 Out[759]: [['10', '1', '0.0000000000000E+00'], ['11', '1', '0.0000000000000E+00'], ['26', '1', '0.0000000000000E+00'], ['27', '1', '0.0000000000000E+00'], ['80', '1', '6.2500000000000E+01']] In [760]: arr = np.array(lines1,float) In [761]: arr Out[761]: array([[10. , 1. , 0. ], [11. , 1. , 0. ], [26. , 1. , 0. ], [27. , 1. , 0. ], [80. , 1. , 62.5]]) Or just using list and float: In [762]: lines[0].split(',') Out[762]: ['10', '1', '0.0000000000000E+00'] In [763]: float(lines[0].split(',')[2]) Out[763]: 0.0 In [764]: float(lines[-1].split(',')[2]) Out[764]: 62.5
Python- Convert string which have numbers and letters to float for np.list
I have a text that I use for taking data. I want to take this "line" and make it numpy list. My data is string but it has numbers and E letters. Because of this I can't convert it to float and taking it to list. import numpy as np import re with open("FEMMeshGmsh.inp", "r") as file: for line in file.readlines(): if "+" in line: line = line[:-1] a = np.array(line) print(a) 10,1,0.0000000000000E+00 11,1,0.0000000000000E+00 26,1,0.0000000000000E+00 27,1,0.0000000000000E+00 80,1,6.2500000000000E+01 152,1,0.0000000000000E+00 153,1,0.0000000000000E+00 154,1,0.0000000000000E+00 155,1,6.2500000000000E+01 156,1,6.2500000000000E+01 157,1,6.2500000000000E+01 158,1,6.2500000000000E+01 159,1,0.0000000000000E+00 160,1,0.0000000000000E+00 161,1,0.0000000000000E+00 162,1,6.2500000000000E+01 163,1,6.2500000000000E+01 164,1,6.2500000000000E+01 165,1,6.2500000000000E+01 166,1,6.2500000000000E+01 424,1,1.2500000000000E+02 425,1,1.2500000000000E+02 426,1,1.2500000000000E+02 427,1,1.2500000000000E+02 428,1,1.2500000000000E+02 429,1,1.2500000000000E+02 430,1,1.2500000000000E+02 I tried this code but the output is not in the list. I tried to convert this string to float using astype. But I took ValueError: could not convert string to float: '10,1,0.0000000000000E+00' this error.
[ "You can do that :\nimport numpy as np\nimport re \n\n\nwith open(\"FEMMeshGmsh.inp\", \"r\") as file: \n for line in file.readlines():\n if \"+\" in line:\n line = line[:-1]\n line_array = line.split(\",\")\n number_array = line_array[-1].split(\"E+\") \n line_array[-1] = float(number_array[0]) * 10 ** int(number_array[1])\n a = np.array(line_array)\n print(a)\n\n", "In [757]: lines=\"\"\"10,1,0.0000000000000E+00\n ...: 11,1,0.0000000000000E+00\n ...: 26,1,0.0000000000000E+00\n ...: 27,1,0.0000000000000E+00\n ...: 80,1,6.2500000000000E+01\"\"\".splitlines()\n\nJust split the lines on the comma; conversion to floats is easy from that:\nIn [758]: lines1 = [l.split(',') for l in lines]\n\nIn [759]: lines1\nOut[759]: \n[['10', '1', '0.0000000000000E+00'],\n ['11', '1', '0.0000000000000E+00'],\n ['26', '1', '0.0000000000000E+00'],\n ['27', '1', '0.0000000000000E+00'],\n ['80', '1', '6.2500000000000E+01']]\n\nIn [760]: arr = np.array(lines1,float)\n\nIn [761]: arr\nOut[761]: \narray([[10. , 1. , 0. ],\n [11. , 1. , 0. ],\n [26. , 1. , 0. ],\n [27. , 1. , 0. ],\n [80. , 1. , 62.5]])\n\nOr just using list and float:\nIn [762]: lines[0].split(',')\nOut[762]: ['10', '1', '0.0000000000000E+00']\n\nIn [763]: float(lines[0].split(',')[2])\nOut[763]: 0.0\n\nIn [764]: float(lines[-1].split(',')[2])\nOut[764]: 62.5\n\n" ]
[ 0, 0 ]
[]
[]
[ "arraylist", "numpy", "python", "readfile", "type_conversion" ]
stackoverflow_0074525668_arraylist_numpy_python_readfile_type_conversion.txt
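np.loadtxt parses scientific notation such as 6.2500000000000E+01 natively, so the whole job can be done in two steps; a sketch assuming, as in the question, that the lines of interest are exactly the ones containing '+':

import numpy as np

with open("FEMMeshGmsh.inp") as f:
    rows = [line for line in f if "+" in line]

arr = np.loadtxt(rows, delimiter=",")  # loadtxt accepts any iterable of lines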
Q: Calculate totals from text file in python How would I split the following: Sample Text file1:(items bought) Rosa,Chocolate,Banana,Strawberry,Apple Carol,Banana,Chocolate,Chocolate,Apple Sample Text File2: (price of items) Apple,$2 Banana,$5 Chocolate,$7 Strawberry,$4 (Question: Would it be easier to convert this into a csv file?) List containing names: names = [Rosa, Carol] Calculate totals for each name using dictionary: totals = {Rosa: 18, Carol: 21} A: I don't usually like to do people's homework, but you seem to be pretty far afield here. This does not need to be complicated. First, we read file 2 into a dictionary. We do it line by line, splitting on the commas, and throwing away the dollar sign. I probably should have checked for the dollar sign, but I just throw away the first character. Next, for each line in the first file, we again split on commas. I .pop the name into a separate variable, then sum up the price lookups on the other items. This would fail if they entered an item that was not present (like "Tomato"). That's an exercise left to the student. prices = {} for line in open('file2.txt'): name,price = line.strip().split(',') prices[name] = int(price[1:]) totals = {} for line in open('file1.txt'): parts = line.strip().split(',') name = parts.pop(0) totals[name] = sum(prices[k] for k in parts) print("totals =", totals) Output: totals = {'Rosa': 18, 'Carol': 21}
Calculate totals from text file in python
How would I split the following: Sample Text file1:(items bought) Rosa,Chocolate,Banana,Strawberry,Apple Carol,Banana,Chocolate,Chocolate,Apple Sample Text File2: (price of items) Apple,$2 Banana,$5 Chocolate,$7 Strawberry,$4 (Question: Would it be easier to convert this into a csv file?) List containing names: names = [Rosa, Carol] Calculate totals for each name using dictionary: totals = {Rosa: 18, Carol: 21}
[ "I don't usually like to do people's homework, but you seem to be pretty far afield here.\nThis does not need to be complicated. First, we read file 2 into a dictionary. We do it line by line, splitting on the commas, and throwing away the dollar sign. I probably should have checked for the dollar sign, but I just throw away the first character.\nNext, for each line in the first file, we again split on commas. I .pop the name into a separate variable, then sum up the price lookups on the other items. This would fail if they entered an item that was not present (like \"Tomato\"). That's an exercise left to the student.\nprices = {}\nfor line in open('file2.txt'):\n name,price = line.strip().split(',')\n prices[name] = int(price[1:])\n\ntotals = {}\nfor line in open('file1.txt'):\n parts = line.strip().split(',')\n name = parts.pop(0)\n totals[name] = sum(prices[k] for k in parts)\n\nprint(\"totals =\", totals)\n\nOutput:\ntotals = {'Rosa': 18, 'Carol': 21}\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list", "python", "split" ]
stackoverflow_0074525601_dictionary_list_python_split.txt
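The same idea using the csv module, which answers the "would it be easier as CSV" aside: csv.reader handles the comma splitting, and prices.get makes an unknown item (the 'Tomato' case mentioned above) count as zero instead of raising KeyError:

import csv

prices = {}
with open('file2.txt', newline='') as f:
    for row in csv.reader(f):
        if row:  # skip blank lines
            prices[row[0]] = int(row[1].lstrip('$'))

totals = {}
with open('file1.txt', newline='') as f:
    for row in csv.reader(f):
        if row:
            totals[row[0]] = sum(prices.get(item, 0) for item in row[1:])

print(totals)  # {'Rosa': 18, 'Carol': 21}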
Q: I want to sort a dictionary inside a list by number I need to sort a ranking of points by descending order. The users and points are inside lista_ranking which includes de following code [{'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '123', 'nombre': 'Juan', 'apellido': 'Hardoy', 'fecha': '(2003, 3, 12)', 'puntaje': 0}, 'goles_local': 1, 'goles_visitante': 0}, {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '1234', 'nombre': 'Santi', 'apellido': 'Stev', 'fecha': '(2003, 3, 12)', 'puntaje': 8}, 'goles_local': 0, 'goles_visitante': 1}] I want to print a ranking for the most points to the least with the nombre(name) and apellido(surname) + the puntaje(points). I tried the following but it is not working for numeros in lista_ranking: sorted(str(numeros['usuario']['puntaje'])) print(numeros["usuario"]["nombre"], numeros["usuario"]["apellido"], str(numeros["usuario"]["puntaje"]),"pts") A: This will order tuples of points, first name and last name from highest to lowest; sorted([(d['usuario']['puntaje'], d['usuario']['nombre'], d['usuario']['apellido']) for d in lista_ranking], reverse=True)
I want to sort a dictionary inside a list by number
I need to sort a ranking of points by descending order. The users and points are inside lista_ranking which includes de following code [{'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '123', 'nombre': 'Juan', 'apellido': 'Hardoy', 'fecha': '(2003, 3, 12)', 'puntaje': 0}, 'goles_local': 1, 'goles_visitante': 0}, {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '1234', 'nombre': 'Santi', 'apellido': 'Stev', 'fecha': '(2003, 3, 12)', 'puntaje': 8}, 'goles_local': 0, 'goles_visitante': 1}] I want to print a ranking for the most points to the least with the nombre(name) and apellido(surname) + the puntaje(points). I tried the following but it is not working for numeros in lista_ranking: sorted(str(numeros['usuario']['puntaje'])) print(numeros["usuario"]["nombre"], numeros["usuario"]["apellido"], str(numeros["usuario"]["puntaje"]),"pts")
[ "This will order tuples of points, first name and last name from highest to lowest;\nsorted([(d['usuario']['puntaje'], d['usuario']['nombre'], d['usuario']['apellido']) for d in lista_ranking], reverse=True)\n\n" ]
[ 0 ]
[]
[]
[ "dictionary", "list", "python", "sorting" ]
stackoverflow_0074525663_dictionary_list_python_sorting.txt
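To print the ranking in the exact format the question asks for (name, surname, points), sorting the list itself keeps each user's fields together; a sketch using the structure shown above:

ranking = sorted(lista_ranking,
                 key=lambda d: d['usuario']['puntaje'],
                 reverse=True)
for entry in ranking:
    u = entry['usuario']
    print(u['nombre'], u['apellido'], u['puntaje'], 'pts')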
Q: How to copy image from sub_subfolders to only one folder using python I want to copy images from multi subfolders into only one folder using python or any library that can do with python framework my folders as described in tree below $ tree . β”œβ”€β”€ main_folder β”‚ β”œβ”€β”€ Subfolder_1 β”‚ β”‚ └── Subfolder1_1 β”‚ β”‚ └── β”œβ”€β”€ 0.png β”‚ β”‚ β”œβ”€β”€ 1.png β”‚ β”‚ β”œβ”€β”€ 2.png β”‚ β”‚ └── 3.png β”‚ β”‚ └── Subfolder1_2 β”‚ β”‚ └── β”œβ”€β”€ 4.png β”‚ β”‚ β”œβ”€β”€ 5.png β”‚ β”‚ β”œβ”€β”€ 6.png β”‚ β”‚ └── 7.png ..... β”‚ β”œβ”€β”€ Subfolder_2 β”‚ β”‚ └── Subfolder2_1 β”‚ β”‚ └── |____.png β”‚ β”‚ β”œβ”€β”€ 8.png β”‚ β”‚ └── 9.png β”‚ β”‚ └── Subfolder2_2 β”‚ β”‚ └── β”œβ”€β”€ 10.png β”‚ β”‚ β”œβ”€β”€ 11.png β”‚ β”‚ β”œβ”€β”€ 12.png β”‚ β”‚ └── 13.png β”‚ └── Subfolder_3 β”‚ └── Subfolder3_1 β”‚ └── |___ .png β”‚ β”œβ”€β”€ 14.png β”‚ β”œβ”€β”€ 15.png β”‚ β”œβ”€β”€ 16.png β”‚ β”‚ └── Subfolder3_2 β”‚ β”‚ └── β”œβ”€β”€ 17.png β”‚ β”‚ β”œβ”€β”€ 18.png β”‚ β”‚ β”œβ”€β”€ 19.png β”‚ β”‚ └── 20.png β”‚ └── script.py The expected results destination_folder will look like the tree below ── destination_folder β”œβ”€β”€ 0.png β”œβ”€β”€ 1.png β”œβ”€β”€ 2.png └── 3.png ......... β”œβ”€β”€ n-1.png └── n.png A: just use a mixture of glob for finding files and shutil for copying files. import glob import os import shutil dest_folder = 'destination_folder' if not os.path.isdir(dest_folder): os.mkdir(dest_folder) for item in glob.glob('**/*.png',recursive=True): filename = os.path.basename(item) full_path = os.path.abspath(item) shutil.copy(full_path, os.path.join(dest_folder,filename)) if you only want pictures with numbers you can add an if condition to it for item in glob.glob('**/*.png',recursive=True): filename = os.path.basename(item) if filename.split('.')[0].isdigit(): full_path = os.path.abspath(item) shutil.copy(full_path,os.path.join(dest_folder,filename))
How to copy image from sub_subfolders to only one folder using python
I want to copy images from multiple subfolders into only one folder using Python, or any library that works within a Python framework. My folders are as described in the tree below: $ tree . β”œβ”€β”€ main_folder β”‚ β”œβ”€β”€ Subfolder_1 β”‚ β”‚ └── Subfolder1_1 β”‚ β”‚ └── β”œβ”€β”€ 0.png β”‚ β”‚ β”œβ”€β”€ 1.png β”‚ β”‚ β”œβ”€β”€ 2.png β”‚ β”‚ └── 3.png β”‚ β”‚ └── Subfolder1_2 β”‚ β”‚ └── β”œβ”€β”€ 4.png β”‚ β”‚ β”œβ”€β”€ 5.png β”‚ β”‚ β”œβ”€β”€ 6.png β”‚ β”‚ └── 7.png ..... β”‚ β”œβ”€β”€ Subfolder_2 β”‚ β”‚ └── Subfolder2_1 β”‚ β”‚ └── |____.png β”‚ β”‚ β”œβ”€β”€ 8.png β”‚ β”‚ └── 9.png β”‚ β”‚ └── Subfolder2_2 β”‚ β”‚ └── β”œβ”€β”€ 10.png β”‚ β”‚ β”œβ”€β”€ 11.png β”‚ β”‚ β”œβ”€β”€ 12.png β”‚ β”‚ └── 13.png β”‚ └── Subfolder_3 β”‚ └── Subfolder3_1 β”‚ └── |___ .png β”‚ β”œβ”€β”€ 14.png β”‚ β”œβ”€β”€ 15.png β”‚ β”œβ”€β”€ 16.png β”‚ β”‚ └── Subfolder3_2 β”‚ β”‚ └── β”œβ”€β”€ 17.png β”‚ β”‚ β”œβ”€β”€ 18.png β”‚ β”‚ β”œβ”€β”€ 19.png β”‚ β”‚ └── 20.png β”‚ └── script.py The expected result, destination_folder, will look like the tree below: ── destination_folder β”œβ”€β”€ 0.png β”œβ”€β”€ 1.png β”œβ”€β”€ 2.png └── 3.png ......... β”œβ”€β”€ n-1.png └── n.png
[ "just use a mixture of glob for finding files and shutil for copying files.\nimport glob\nimport os\nimport shutil\n\ndest_folder = 'destination_folder'\nif not os.path.isdir(dest_folder):\n os.mkdir(dest_folder)\n\nfor item in glob.glob('**/*.png',recursive=True):\n filename = os.path.basename(item)\n full_path = os.path.abspath(item)\n shutil.copy(full_path, os.path.join(dest_folder,filename))\n\nif you only want pictures with numbers you can add an if condition to it\nfor item in glob.glob('**/*.png',recursive=True):\n filename = os.path.basename(item)\n if filename.split('.')[0].isdigit():\n full_path = os.path.abspath(item)\n shutil.copy(full_path,os.path.join(dest_folder,filename))\n\n" ]
[ 1 ]
[]
[]
[ "operating_system", "python", "subdirectory" ]
stackoverflow_0074525745_operating_system_python_subdirectory.txt
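A minimal pathlib-based sketch of the same copy, assuming the main_folder layout from the question; note that files sharing a name would overwrite each other in the destination.

from pathlib import Path
import shutil

src_root = Path('main_folder')        # assumed source root from the question's tree
dest = Path('destination_folder')
dest.mkdir(exist_ok=True)

# rglob('*.png') walks every sub-subfolder recursively
for png in src_root.rglob('*.png'):
    shutil.copy(png, dest / png.name)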
Q: How to send a json object pretty printed as an email python I have a python script that gets a cluster health as json and sends me a mail. The issue is that the json is not pretty printed. These are the methods I have tried already: Simple --> json.dumps(health) json.dumps(health, indent=4, sort_keys=True) But the output in gmail is still unformatted, somewhat like this { "active_primary_shards": 25, "active_shards": 50, "active_shards_percent_as_number": 100.0, "cluster_name": "number_of_pending_tasks": 0, "relocating_shards": 0, "status": "green", "task_max_waiting_in_queue_millis": 0, "timed_out": false, "unassigned_shards": 0 } Mail was sent to gmail A: I can't say for certain, but it would seem like your email-sending code is defaulting to sending an "HTML" email, and in HTML consecutive spaces collapse into one, that way HTML code like: <p> This is a paragraph, but it's long so I'll break to a new line, and indented so I know it's within the `p` tag, etc. </p> Looks like "This is a paragraph, but it's long so I'll break to a new line, and indented so I know it's within the p tag, etc." to the user. So, I'd say your two options are: Change the email sending code to send the Content-type header as text/plain, or Replace all your spaces with the &nbsp; (non-breaking space) character and newlines with <br> (breaks), for example: email_body = json.dumps( health, indent=4, sort_keys=True).replace(' ', '&nbsp;').replace('\n', '<br>') A: >>> import json >>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4) >>> print s { "4": 5, "6": 7 } mine works fine Python 2.7.3 (default, Jan 17 2015, 17:10:37) [GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2 maybe it's version related, post your version and output A: The suggested solutions did not work for me. I dumped the JSON pretty-printed into a variable, and the email shows the expected pretty format. import json, pprint body=json.dumps(message, indent=4) pprint.pformat(body, indent=4) # pprint body but assign the output to the body variable instead of printing to console. This will introduce newlines and other expected characters in the message string body. body=body.replace("\n","<br/>") # I just replaced newlines with <br/> message = MIMEText(body, "HTML") # create a MIME message and send the body to the intended email recipient.
How to send a json object pretty printed as an email python
I have a python script that gets a cluster health as json and sends me a mail. The issue is that the json is not pretty printed. These are the methods I have tried already: Simple --> json.dumps(health) json.dumps(health, indent=4, sort_keys=True) But the output in gmail is still unformatted, somewhat like this { "active_primary_shards": 25, "active_shards": 50, "active_shards_percent_as_number": 100.0, "cluster_name": "number_of_pending_tasks": 0, "relocating_shards": 0, "status": "green", "task_max_waiting_in_queue_millis": 0, "timed_out": false, "unassigned_shards": 0 } Mail was sent to gmail
[ "I can't say for certain, but it would seem like your email-sending code is defaulting to sending an \"HTML\" email, and in HTML consecutive spaces collapse into one, that way HTML code like:\n<p>\n This is a paragraph, but it's long so\n I'll break to a new line, and indented\n so I know it's within the `p` tag, etc.\n</p>\n\nLooks like \"This is a paragraph, but it's long so I'll break to a new line, and indented so I know it's within the p tag, etc.\" to the user.\nSo, I'd say your two options are:\n\nChange the email sending code to send the Content-type header as text/plain, or\nReplace all your spaces with the &nbsp; (non-breaking space) character and newlines with <br> (breaks), for example:\nemail_body = json.dumps(\n health, indent=4, sort_keys=True).replace(' ', '&nbsp;').replace('\\n', '<br>')\n\n\n", ">>> import json\n>>> s = json.dumps({'4': 5, '6': 7}, sort_keys=True, indent=4)\n>>> print s\n{\n \"4\": 5, \n \"6\": 7\n}\n\nmine works fine\nPython 2.7.3 (default, Jan 17 2015, 17:10:37) \n[GCC 3.4.5 20051201 (Red Hat 3.4.5-2)] on linux2\nmaybe it's version related, post your version and output\n", "The suggested solutions did not work for me. I tried json pretty printed into a variable and I see the email with expected json pretty format.\nimport json, pprint\nbody=json.dumps(message, indent=4) \npprint.pformat(body, indent=4) # pprint body but assign the output to body variable instead of printing to console. This will introduce new line and other expected characters in the message string body. \nbody=body.replace(\"\\n\",\"<br/>\") # I just replaced new line with <br/>\nmessage = MIMEText(body, \"HTML\") # create an MIME message and send the body to intented email recepient.\n\n" ]
[ 7, 0, 0 ]
[]
[]
[ "elasticsearch", "email", "json", "python" ]
stackoverflow_0041458580_elasticsearch_email_json_python.txt
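A minimal sketch of the first option above (sending the body as text/plain so whitespace survives); health_dict and the recipient address are hypothetical placeholders, not names from the question.

import json
from email.mime.text import MIMEText

health_dict = {'status': 'green', 'active_shards': 50}    # stand-in for the real health dict
body = json.dumps(health_dict, indent=4, sort_keys=True)

# 'plain' keeps the indentation; most clients render text/plain in a fixed-width font
msg = MIMEText(body, 'plain')
msg['Subject'] = 'Cluster health'
msg['To'] = 'me@example.com'                              # hypothetical recipient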
Q: How to remove duplicate rows with a condition in pandas I.e., I want to drop duplicate pairs, using col1 and col2 as the subset, only if the values in col3 are opposite (one negative and one positive). This is similar to the drop_duplicates function, but I want to impose a condition and only remove the first matching pair (i.e., if there are 3 duplicates, just remove 2 and leave 1). My dataset (df): col1 col2 col3 0 1 1 1 1 2 2 2 2 1 1 1 3 3 5 7 4 1 2 -1 5 1 2 1 6 1 2 1 I want: col1 col2 col3 0 1 1 1 1 2 2 2 2 1 1 1 3 3 5 7 6 1 2 1 Rows 4 and 5 are duplicated in col1 and col2 but the value in col3 is the opposite, therefore we remove both. Row 0 and row 2 have duplicate values in col1 and col2 but col3 is the same, so we don't remove those rows. I've tried using drop_duplicates but realised it wouldn't work, as it would only remove all duplicates and not consider anything else. A: We can do transform out = df[df.groupby(['col1','col2']).col3.transform('sum').ne(0) & df.col3.ne(0)] Out[252]: col1 col2 col3 0 1 1 1 1 2 2 2 2 1 1 1 3 3 5 7 A: Recreating the dataset: import pandas as pd data = [ [1, 1, 1], [2, 2, 2], [1, 1, 1], [3, 5, 7], [1, 2, -1], [1, 2, 1], [1, 2, 1], ] df = pd.DataFrame(data, columns=['col1', 'col2', 'col3']) if your data is not massive, you can use an iterrows function on a subset of the data. The subset contains all duplicate values after all values have been turned into absolute values. Next, we check if col3 is negative and if the opposite of col3 is in the duplicate subset. If so, we drop the row from df. df_dupes = df[df.abs().duplicated(keep=False)] df_dupes_list = df_dupes.to_numpy().tolist() for i, row in df_dupes.iterrows(): if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list: df.drop(labels=i, axis=0, inplace=True) This code should remove row 4. In your desired output, you left row 5 for some reason. If you can explain why you left row 5 but kept row 0, then I can adjust my code to more accurately match your desired output. A: I used @Petar Luketina code here with an adjustment and it worked. However I would like to use it for a massive dataset -> 1million rows and 43 columns. This code takes forever: df_dupes = df[df['col3'].abs().duplicated(keep=False)] df_dupes_list = df_dupes.to_numpy().tolist() for i, row in df_dupes.iterrows(): if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list: print(row.col3) try: c = np.where((df['col1'] ==row.col1) & (df['col2'] ==row.col2) & (df['col3'] ==-row.col3))[0][0] df.drop(labels=[i,df.index.values[c]], axis=0, inplace=True) except: pass
How to remove duplicate rows with a condition in pandas
I.e., I want to drop duplicate pairs, using col1 and col2 as the subset, only if the values in col3 are opposite (one negative and one positive). This is similar to the drop_duplicates function, but I want to impose a condition and only remove the first matching pair (i.e., if there are 3 duplicates, just remove 2 and leave 1). My dataset (df): col1 col2 col3 0 1 1 1 1 2 2 2 2 1 1 1 3 3 5 7 4 1 2 -1 5 1 2 1 6 1 2 1 I want: col1 col2 col3 0 1 1 1 1 2 2 2 2 1 1 1 3 3 5 7 6 1 2 1 Rows 4 and 5 are duplicated in col1 and col2 but the value in col3 is the opposite, therefore we remove both. Row 0 and row 2 have duplicate values in col1 and col2 but col3 is the same, so we don't remove those rows. I've tried using drop_duplicates but realised it wouldn't work, as it would only remove all duplicates and not consider anything else.
[ "We can do transform\nout = df[df.groupby(['col1','col2']).col3.transform('sum').ne(0) & df.col3.ne(0)]\nOut[252]: \n col1 col2 col3\n0 1 1 1\n1 2 2 2\n2 1 1 1\n3 3 5 7\n\n", "Recreating the dataset:\nimport pandas as pd\n\ndata = [\n [1, 1, 1],\n [2, 2, 2],\n [1, 1, 1],\n [3, 5, 7],\n [1, 2, -1],\n [1, 2, 1],\n [1, 2, 1],\n]\n\ndf = pd.DataFrame(data, columns=['col1', 'col2', 'col3'])\n\nif your data is not massive, you can use an iterrows function on a subset of the data. \nThe subset contains all duplicate values after all values have been turned into absolute values. \nNext, we check if col3 is negative and if the opposite of col3 is in the duplicate subset. \nIf so, we drop the row from df.\ndf_dupes = df[df.abs().duplicated(keep=False)]\ndf_dupes_list = df_dupes.to_numpy().tolist()\nfor i, row in df_dupes.iterrows():\n if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list:\n df.drop(labels=i, axis=0, inplace=True)\n\nThis code should remove row 4.\nIn your desired output, you left row 5 for some reason. \nIf you can explain why you left row 5 but kept row 0, then I can adjust my code to more accurately match your desired output.\n", "I used @Petar Luketina code here with an adjustment and it worked. However I would like to use it for a massive dataset -> 1million rows and 43 columns. This code takes forever:\ndf_dupes = df[df['col3'].abs().duplicated(keep=False)]\ndf_dupes_list = df_dupes.to_numpy().tolist()\nfor i, row in df_dupes.iterrows():\n if row.col3 < 0 and [row.col1, row.col2, -row.col3] in df_dupes_list:\n print(row.col3)\n try:\n c = np.where((df['col1'] ==row.col1) & (df['col2'] ==row.col2) & \n (df['col3'] ==-row.col3))[0][0]\n\n df.drop(labels=[i,df.index.values[c]], axis=0, inplace=True)\n except:\n pass\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "drop_duplicates", "pandas", "python" ]
stackoverflow_0074513714_drop_duplicates_pandas_python.txt
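A sketch that cancels opposite-signed rows strictly in pairs, so that with values -1, 1, 1 exactly one 1 survives, matching the asker's desired output; this is an illustrative approach under those assumptions, not one of the posted answers.

import pandas as pd

df = pd.DataFrame({'col1': [1, 2, 1, 3, 1, 1, 1],
                   'col2': [1, 2, 1, 5, 2, 2, 2],
                   'col3': [1, 2, 1, 7, -1, 1, 1]})

d = df.copy()
# rank repeats of the same signed value so opposite rows can pair off one-to-one
d['k'] = d.groupby(['col1', 'col2', 'col3']).cumcount()

# a row is cancelled when an opposite-signed row shares its keys and rank
opposite = d.assign(col3=-d['col3'], cancel=True)
paired = d.merge(opposite[['col1', 'col2', 'col3', 'k', 'cancel']],
                 on=['col1', 'col2', 'col3', 'k'], how='left')

out = df[paired['cancel'].isna().to_numpy()]
print(out)    # keeps rows 0, 1, 2, 3 and one of the (1, 2, 1) rows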
Q: Send a QWeb Odoo report to a mobile app via API in Odoo 14 I have generated an Odoo QWeb report and converted it to base64 encoding using the following code: base64.b64encode(pdf) And I get a string like "b'JVBERi0xLjQKMSAwIG9iago8PAovVGl0bGUgKP7/KQovQ3JlYXRvciAo/v8AdwBrAGgAdABtAGwAdAB " Now I want to pass this string to a mobile application via an API. When I use it in the mobile app, it shows: " The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters. " A: Base64 conversion is fine. Note that Python's base64.b64encode returns a binary (bytes) output. Consider decoding to UTF-8 after encoding into base64: base64.b64encode(pdf).decode('utf-8') In browser environments (for ex. hybrid mobile or progressive web apps) you should indicate the correct MIME type before the base64 hash: data:application/pdf;base64,JVBERi0xLjQKMS...
Send a QWeb Odoo report to a mobile app via API in Odoo 14
I have generated an Odoo QWeb report and converted it to base64 encoding using the following code: base64.b64encode(pdf) And I get a string like "b'JVBERi0xLjQKMSAwIG9iago8PAovVGl0bGUgKP7/KQovQ3JlYXRvciAo/v8AdwBrAGgAdABtAGwAdAB " Now I want to pass this string to a mobile application via an API. When I use it in the mobile app, it shows: " The input is not a valid Base-64 string as it contains a non-base 64 character, more than two padding characters, or an illegal character among the padding characters. "
[ "Base64 conversion is fine.\nNote that python base64.base64encode returns a binary output.\nConsider decoding to UTF-8 after encoding into base64.\nbase64.base64encode(pdf).decode('utf-8')\nIn browser environments (for ex. hybrid mobile or progressive web apps) you should indicate correct mime-type before base64 hash.\ndata:application/pdf;base64,JVBERi0xLjQKMS...\n" ]
[ 0 ]
[]
[]
[ "odoo", "python" ]
stackoverflow_0074515115_odoo_python.txt
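A small sketch tying the answer's two points together (bytes-to-str decoding plus the data-URI prefix); pdf_bytes is a placeholder standing in for the bytes returned by Odoo's report renderer.

import base64

pdf_bytes = b'%PDF-1.4 ...'    # placeholder for the rendered QWeb report bytes

# b64encode returns bytes, so decode before embedding the value in a JSON API response
payload = 'data:application/pdf;base64,' + base64.b64encode(pdf_bytes).decode('utf-8')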
Q: How to import a numerical variable inside a function from another file - Python I'm trying to create a welding throat calculator just to practice my Python skills. I have 2 files for the same project, throat_size.py and support.py. I use support.py, just to make the calculus that I need, and I want to call these results throat_size, but I'm getting: name 'a' is not defined. throat_size.py: import tkinter as tk import support as sp firstWindowResults = [] firstWindow = tk.Tk() w = 650 p = 470 ws = firstWindow.winfo_screenwidth() # width of the screen hs = firstWindow.winfo_screenheight() # height of the screen x = (ws/2) - (w/2) y = (hs/2) - (p/2) firstWindowTitle = tk.Label(firstWindow, text="CordΓ΅es de soldadura") firstWindowTitle.grid(column=1, row=0, padx=10, pady=10) firstInput = tk.Label(firstWindow, text="Menor espessura a soldar") firstInput.grid(column=0, row=3, padx=10, pady=10) espessura = tk.Entry(firstWindow) espessura.grid(column=1, row=3, padx=10, pady=10) firstWindowValues = [espessura] def validateFirstWindowValues(): invalid = 0 firstWindowResults.clear() for value in firstWindowValues: if value == espessura: if float(value.get()) <= 0: entry = firstWindowValues.index(value) + 1 tk.messagebox.showerror("Valor invΓ‘lido", "Ver linha " + str(entry) + "!") invalid += 1 break else: firstWindowResults.append(value.get()) if invalid == 0: sp.thickness(firstWindowResults[0]) Res = "a = " + sp.a1 #Passo 3: Loop nextButton = tk.Button(firstWindow, text="Enter", command=validateFirstWindowValues) nextButton.grid(column=1, row=9, padx=10, pady=10) firstWindow.geometry('%dx%d+%d+%d' % (w, p, x, y)) firstWindow.mainloop() support.py: import math as mt def thickness(espessura): global a1, rz1, z1, a2, rz2, z2 rz1, rz2 = 0.5, 0.7 a1 = float(espessura) * rz1 z1 = 2 * mt.cos(mt.radians(45)) * a1 a2 = float(espessura) * rz2 z2 = 2 * mt.cos(mt.radians(45)) * a2 A: The function must return values that can be used. import math as mt def thickness(espessura): """ espessura: string return a tuple of numbers: a1, rz1, z1, a2, rz2, z2, esp >>> # unpacking values: >>> a1, rz1, z1, a2, rz2, z2, esp = thickness("2") >>> a1 1.0 >>> z1 1.4142135623730951 >>> rz1 0.5 >>> # or: >>> thickness_values = thickness("2") >>> thickness_values[0] # a1 1.0 >>> thickness_values[6] # espessura 2.0 """ rz1, rz2 = 0.5, 0.7 esp = float(espessura) a1 = esp * rz1 z1 = 2 * mt.cos(mt.radians(45)) * a1 a2 = esp * rz2 z2 = 2 * mt.cos(mt.radians(45)) * a2 return a1, rz1, z1, a2, rz2, z2, esp An abbreviated code example for using a function. 
import tkinter as tk import support as sp firstWindow = tk.Tk() w = 650 p = 470 ws = firstWindow.winfo_screenwidth() # width of the screen hs = firstWindow.winfo_screenheight() # height of the screen x = (ws/2) - (w/2) y = (hs/2) - (p/2) firstWindowTitle = tk.Label(firstWindow, text="CordΓ΅es de soldadura") firstWindowTitle.grid(column=1, row=0, padx=10, pady=10) firstInput = tk.Label(firstWindow, text="Menor espessura a soldar") firstInput.grid(column=0, row=3, padx=10, pady=10) espessura = tk.Entry(firstWindow) espessura.grid(column=1, row=3, padx=10, pady=10) def validateFirstWindowValues(): a1, rz1, z1, a2, rz2, z2, esp = sp.thickness(espessura.get()) print(a1, rz1, z1, a2, rz2, z2, esp) espessuraLabel["text"] = f"espessura: {esp}" firstLabel["text"] = f"rz1: {rz1}, a1: {a1}, z1: {z1}" secondLabel["text"] = f"rz2: {rz2}, a2: {a2}, z2: {z2}" resultsFrame = tk.Frame(firstWindow) resultsFrame.grid(column=0, row=10, columnspan=2 , padx=10, pady=10) espessuraLabel = tk.Label(resultsFrame, text="") espessuraLabel.grid(column=0, row=0) firstLabel = tk.Label(resultsFrame, text="") firstLabel.grid(column=0, row=1) secondLabel = tk.Label(resultsFrame, text="") secondLabel.grid(column=0, row=2) #Passo 3: Loop nextButton = tk.Button(firstWindow, text="Enter", command=validateFirstWindowValues) nextButton.grid(column=1, row=9, padx=10, pady=10) firstWindow.geometry('%dx%d+%d+%d' % (w, p, x, y)) firstWindow.mainloop()
How to import a numerical variable inside a function from another file - Python
I'm trying to create a welding throat calculator just to practice my Python skills. I have 2 files for the same project, throat_size.py and support.py. I use support.py, just to make the calculus that I need, and I want to call these results throat_size, but I'm getting: name 'a' is not defined. throat_size.py: import tkinter as tk import support as sp firstWindowResults = [] firstWindow = tk.Tk() w = 650 p = 470 ws = firstWindow.winfo_screenwidth() # width of the screen hs = firstWindow.winfo_screenheight() # height of the screen x = (ws/2) - (w/2) y = (hs/2) - (p/2) firstWindowTitle = tk.Label(firstWindow, text="CordΓ΅es de soldadura") firstWindowTitle.grid(column=1, row=0, padx=10, pady=10) firstInput = tk.Label(firstWindow, text="Menor espessura a soldar") firstInput.grid(column=0, row=3, padx=10, pady=10) espessura = tk.Entry(firstWindow) espessura.grid(column=1, row=3, padx=10, pady=10) firstWindowValues = [espessura] def validateFirstWindowValues(): invalid = 0 firstWindowResults.clear() for value in firstWindowValues: if value == espessura: if float(value.get()) <= 0: entry = firstWindowValues.index(value) + 1 tk.messagebox.showerror("Valor invΓ‘lido", "Ver linha " + str(entry) + "!") invalid += 1 break else: firstWindowResults.append(value.get()) if invalid == 0: sp.thickness(firstWindowResults[0]) Res = "a = " + sp.a1 #Passo 3: Loop nextButton = tk.Button(firstWindow, text="Enter", command=validateFirstWindowValues) nextButton.grid(column=1, row=9, padx=10, pady=10) firstWindow.geometry('%dx%d+%d+%d' % (w, p, x, y)) firstWindow.mainloop() support.py: import math as mt def thickness(espessura): global a1, rz1, z1, a2, rz2, z2 rz1, rz2 = 0.5, 0.7 a1 = float(espessura) * rz1 z1 = 2 * mt.cos(mt.radians(45)) * a1 a2 = float(espessura) * rz2 z2 = 2 * mt.cos(mt.radians(45)) * a2
[ "The function must return values that can be used.\nimport math as mt\n\n\ndef thickness(espessura):\n \"\"\"\n espessura: string\n return a tuple of numbers: a1, rz1, z1, a2, rz2, z2, esp\n \n >>> # unpacking values:\n >>> a1, rz1, z1, a2, rz2, z2, esp = thickness(\"2\")\n >>> a1\n 1.0\n >>> z1\n 1.4142135623730951\n >>> rz1\n 0.5\n >>> # or:\n >>> thickness_values = thickness(\"2\")\n >>> thickness_values[0] # a1\n 1.0\n >>> thickness_values[6] # espessura\n 2.0\n \"\"\"\n rz1, rz2 = 0.5, 0.7\n esp = float(espessura)\n \n a1 = esp * rz1\n z1 = 2 * mt.cos(mt.radians(45)) * a1\n\n a2 = esp * rz2\n z2 = 2 * mt.cos(mt.radians(45)) * a2\n\n return a1, rz1, z1, a2, rz2, z2, esp\n\nAn abbreviated code example for using a function.\nimport tkinter as tk\nimport support as sp\n\n\nfirstWindow = tk.Tk()\n\nw = 650\np = 470\n\nws = firstWindow.winfo_screenwidth() # width of the screen\nhs = firstWindow.winfo_screenheight() # height of the screen\nx = (ws/2) - (w/2)\ny = (hs/2) - (p/2)\n\nfirstWindowTitle = tk.Label(firstWindow, text=\"CordΓ΅es de soldadura\")\nfirstWindowTitle.grid(column=1, row=0, padx=10, pady=10)\n\nfirstInput = tk.Label(firstWindow, text=\"Menor espessura a soldar\")\nfirstInput.grid(column=0, row=3, padx=10, pady=10)\n\nespessura = tk.Entry(firstWindow)\nespessura.grid(column=1, row=3, padx=10, pady=10)\n\n\ndef validateFirstWindowValues():\n a1, rz1, z1, a2, rz2, z2, esp = sp.thickness(espessura.get())\n print(a1, rz1, z1, a2, rz2, z2, esp)\n espessuraLabel[\"text\"] = f\"espessura: {esp}\"\n firstLabel[\"text\"] = f\"rz1: {rz1}, a1: {a1}, z1: {z1}\"\n secondLabel[\"text\"] = f\"rz2: {rz2}, a2: {a2}, z2: {z2}\"\n\n \nresultsFrame = tk.Frame(firstWindow)\nresultsFrame.grid(column=0, row=10, columnspan=2 , padx=10, pady=10)\nespessuraLabel = tk.Label(resultsFrame, text=\"\")\nespessuraLabel.grid(column=0, row=0)\nfirstLabel = tk.Label(resultsFrame, text=\"\")\nfirstLabel.grid(column=0, row=1)\nsecondLabel = tk.Label(resultsFrame, text=\"\")\nsecondLabel.grid(column=0, row=2)\n\n#Passo 3: Loop\nnextButton = tk.Button(firstWindow, text=\"Enter\", command=validateFirstWindowValues)\nnextButton.grid(column=1, row=9, padx=10, pady=10)\n\nfirstWindow.geometry('%dx%d+%d+%d' % (w, p, x, y))\nfirstWindow.mainloop()\n\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter", "variables" ]
stackoverflow_0074511439_python_tkinter_variables.txt
Q: How to get Json key name if its value is equal to "x" - Python I am working on a Python practice exercise about checking the availability of products in a JSON file. The condition is that if a key's value is equal to 1, it means that the product is available, and if the product is available, then its key name should be printed. The Json format looks like: product={"FooBox": "1", "ZeroB": "0", "Birk": "1", "pjy": "0", "dimbo": "1"} I would like to get something like the following: According to the previous file, if a key's value is "1" then return the key name, like the following: "Foobox","Birk","dimbo" Could someone explain how I can get this working? I tried using something like: product='["FooBox": "1", "ZeroB": "0", "Birk": "1", "pjy": "0", "dimbo": "1"]' for x in product: if x=="1": print(x) else: print("Not Available") But its output is just the number "1", not the key name, which is what I require. A: In your attempted solution product is a string, not a dictionary like you showed in the first snippet. And even if it were a dictionary, for x in product: would set x to the keys, not the values. Use product.items() to iterate over the keys and values of the dictionary. Then you can check the value and collect the key. product={"FooBox": "1", "ZeroB": "0", "Birk": "1", "pjy": "0", "dimbo": "1"} available = [name for name, avail in product.items() if avail == "1"] print(available) A: stuff = [k for k,v in product.items() if v=="1"] print(stuff) A: for key,value in product.items(): if value == "1": print(key)
How to get Json key name if its value is equal to "x" - Python
I am working on a Python practice exercise about checking the availability of products in a JSON file. The condition is that if a key's value is equal to 1, it means that the product is available, and if the product is available, then its key name should be printed. The Json format looks like: product={"FooBox": "1", "ZeroB": "0", "Birk": "1", "pjy": "0", "dimbo": "1"} I would like to get something like the following: According to the previous file, if a key's value is "1" then return the key name, like the following: "Foobox","Birk","dimbo" Could someone explain how I can get this working? I tried using something like: product='["FooBox": "1", "ZeroB": "0", "Birk": "1", "pjy": "0", "dimbo": "1"]' for x in product: if x=="1": print(x) else: print("Not Available") But its output is just the number "1", not the key name, which is what I require.
[ "In your attempted solution product is a string, not a dictionary like you showed in the first snippet.\nAnd even if it were a dictionary, for x in product: would set x to the keys, not the values.\nUse product.items() to iterate over the keys and values of the dictionary. Then you can check the value and collect the key.\nproduct={\"FooBox\": \"1\", \"ZeroB\": \"0\", \"Birk\": \"1\", \"pjy\": \"0\", \"dimbo\": \"1\"}\n\navailable = [name for name, avail in product.items() if avail == \"1\"]\nprint(available)\n\n", "stuff = [k for k,v in product.items() if v==\"1\"]\nprint(stuff)\n", "for key,value in product.items():\n if value == \"1\":\n print(key)\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074525861_json_python.txt
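Since the question mentions a JSON file rather than an in-memory dict, a short sketch with a hypothetical products.json path; the comprehension is the same one shown in the answers.

import json

with open('products.json') as f:    # hypothetical file name
    product = json.load(f)

available = [name for name, flag in product.items() if flag == '1']
print(available)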
Q: django.urls.exceptions.NoReverseMatch URLS path seem to be correct I receive the error message: django.urls.exceptions.NoReverseMatch: Reverse for 'journalrep' with arguments '('',)' not found. 2 pattern(s) tried: ['reports/journalrep/(?P[^/]+)/(?P[^/]+)\Z', 'reports/journalrep/\Z'] My urls.py contains: from django.urls import path from . import views urlpatterns = [ path('', views.index, name='reports'), path('sumlist/', views.summary_list,name='sumlist'), path('overallsummary',views.overallsummary,name='overallsummary'), path('checkreg', views.checkreg, name='checkreg'), path('checkdet/<chkno>/', views.checkdet, name='checkdet'), path('journalrep/', views.journalrep, name='journalrep'), path('journalrep/<column>/<direction>', views.journalrep, name='journalrep'), path('journaldet/<tranid>', views.journaldet, name='journaldet'), path('accountrep', views.accountrep, name='accountrep') ] The view that renders the template is a function view: @login_required def journalrep(request,column = 'date', direction = 'D'): ''' Produce journal register Will display information for a chart of accounts account if provided. If the value is 0 all journal entries will be shown ''' # # Get list of accounts (Chart of acconts) to be used for account selection box coa = ChartOfAccounts.objects.all().filter(COA_account__gt=0) coa_account = request.session.get('coa_account', None) if len(request.GET) != 0: coa_account = request.GET.get('coa_account') else: if coa_account == None: coa_account = '0' if direction == 'D': direction = '-' else: direction = "" if coa_account == '0': journal = Journal.objects.all().order_by(direction + column) else: journal = Journal.objects.filter(account__COA_account = coa_account).order_by(direction + column) context = { 'coa' : coa, 'journal' : journal , 'coa_account' : Decimal(coa_account)} request.session['coa_account'] = coa_account return render(request, 'reports/journal.html', context) And the template that is rendered is: <div class="container shadow min-vh-100 py-2"> <h2>Journal Register</h2> <select name="coa_account" hx-get="{% url 'journalrep' row.transactionID %}" hx-target="#requestcontent" > <option value="0">All</option> {% for option in coa %} <option value="{{option.COA_account}}" {% if option.COA_account == coa_account %} selected {% endif %}> {{option.COA_account_name}} {% if option.COA_account_subgroup != "" %} - {{option.COA_account_subgroup}} {% endif %} </option> {% endfor %} </select> <div class="table-responsive"> <table class="table table-hover "> <thead> <tr> <th scope="col"></th> <th scope="col">Date <br> <a hx-get="{% url 'journalrep' 'date' 'D' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-down"> </i> </a> <a hx-get="{% url 'journalrep' 'date' 'A' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-up" ></i> </a> </th> <th scope="col">Account<br>&nbsp; </th> <th scope="col">Description<br>&nbsp;</th> <th scope="col">Amount<br> <a hx-get="{% url 'journalrep' 'amount' 'D' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-down"> </i> </a> <a hx-get="{% url 'journalrep' 'amount' 'A' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-up" ></i> </a> </th> </tr> </thead> <tbody> {% for row in journal %} <tr data-bs-toggle="collapse" data-bs-target="#detail-"> <th scope="row"> {% if request.user.is_superuser %} <button hx-get="{% url 'journaldet' row.transactionID %}" hx-target="#dialog" > <i class="bi bi-eye"></i> </button> {% else %} &nbsp; {% endif %} </th> <td>{{ row.date }}</td> <td> {{ 
row.account.COA_account}}<br> {{ row.account.COA_account_name}} </td> <td> {{ row.description }} {% if row.transactionID != "" %} <br>{{ row.transactionID}} {% endif %} </td> <td align="right">${{ row.amount | floatformat:2 }}</td> </tr> {% endfor %} </tbody> </table> </div> </div> <div id="modal" class="modal fade"> <div id="dialog" class="modal-dialog" hx-target="this"></div> </div> <script> const modal = new bootstrap.Modal(document.getElementById("modal")) htmx.on("htmx:afterSwap", (e) => { // Response targeting #dialog => show the modal if (e.detail.target.id == "dialog") { modal.show() } }) </script> A: Try using this instead: {% url 'journalrep' column='date' direction='D' %} And also in urls.py: path('journalrep/<str:column>/<str:direction>', views.journalrep, name='journalrep') And potentially removing the line above this also, as I'm not sure it's required. It's possible that Django is matching the first one, but it's hard to say with the information provided. A: In this line (line 3 of the template) <select name="coa_account" hx-get="{% url 'journalrep' row.transactionID %}" hx-target="#requestcontent" > You are not, at that point, looping through rows, so the value row.transactionID is empty, creating the empty argument of the error. Based on what occurs later and your urls.py, you are probably also wanting to reference journaldet rather than journalrep for that URL structure to work.
django.urls.exceptions.NoReverseMatch URLS path seem to be correct
I receive the error message: django.urls.exceptions.NoReverseMatch: Reverse for 'journalrep' with arguments '('',)' not found. 2 pattern(s) tried: ['reports/journalrep/(?P[^/]+)/(?P[^/]+)\Z', 'reports/journalrep/\Z'] My urls.py contains: from django.urls import path from . import views urlpatterns = [ path('', views.index, name='reports'), path('sumlist/', views.summary_list,name='sumlist'), path('overallsummary',views.overallsummary,name='overallsummary'), path('checkreg', views.checkreg, name='checkreg'), path('checkdet/<chkno>/', views.checkdet, name='checkdet'), path('journalrep/', views.journalrep, name='journalrep'), path('journalrep/<column>/<direction>', views.journalrep, name='journalrep'), path('journaldet/<tranid>', views.journaldet, name='journaldet'), path('accountrep', views.accountrep, name='accountrep') ] The view that renders the template is a function view: @login_required def journalrep(request,column = 'date', direction = 'D'): ''' Produce journal register Will display information for a chart of accounts account if provided. If the value is 0 all journal entries will be shown ''' # # Get list of accounts (Chart of acconts) to be used for account selection box coa = ChartOfAccounts.objects.all().filter(COA_account__gt=0) coa_account = request.session.get('coa_account', None) if len(request.GET) != 0: coa_account = request.GET.get('coa_account') else: if coa_account == None: coa_account = '0' if direction == 'D': direction = '-' else: direction = "" if coa_account == '0': journal = Journal.objects.all().order_by(direction + column) else: journal = Journal.objects.filter(account__COA_account = coa_account).order_by(direction + column) context = { 'coa' : coa, 'journal' : journal , 'coa_account' : Decimal(coa_account)} request.session['coa_account'] = coa_account return render(request, 'reports/journal.html', context) And the template that is rendered is: <div class="container shadow min-vh-100 py-2"> <h2>Journal Register</h2> <select name="coa_account" hx-get="{% url 'journalrep' row.transactionID %}" hx-target="#requestcontent" > <option value="0">All</option> {% for option in coa %} <option value="{{option.COA_account}}" {% if option.COA_account == coa_account %} selected {% endif %}> {{option.COA_account_name}} {% if option.COA_account_subgroup != "" %} - {{option.COA_account_subgroup}} {% endif %} </option> {% endfor %} </select> <div class="table-responsive"> <table class="table table-hover "> <thead> <tr> <th scope="col"></th> <th scope="col">Date <br> <a hx-get="{% url 'journalrep' 'date' 'D' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-down"> </i> </a> <a hx-get="{% url 'journalrep' 'date' 'A' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-up" ></i> </a> </th> <th scope="col">Account<br>&nbsp; </th> <th scope="col">Description<br>&nbsp;</th> <th scope="col">Amount<br> <a hx-get="{% url 'journalrep' 'amount' 'D' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-down"> </i> </a> <a hx-get="{% url 'journalrep' 'amount' 'A' %}" hx-target="#requestcontent" > <i class="bi bi-sort-alpha-up" ></i> </a> </th> </tr> </thead> <tbody> {% for row in journal %} <tr data-bs-toggle="collapse" data-bs-target="#detail-"> <th scope="row"> {% if request.user.is_superuser %} <button hx-get="{% url 'journaldet' row.transactionID %}" hx-target="#dialog" > <i class="bi bi-eye"></i> </button> {% else %} &nbsp; {% endif %} </th> <td>{{ row.date }}</td> <td> {{ row.account.COA_account}}<br> {{ row.account.COA_account_name}} </td> <td> {{ 
row.description }} {% if row.transactionID != "" %} <br>{{ row.transactionID}} {% endif %} </td> <td align="right">${{ row.amount | floatformat:2 }}</td> </tr> {% endfor %} </tbody> </table> </div> </div> <div id="modal" class="modal fade"> <div id="dialog" class="modal-dialog" hx-target="this"></div> </div> <script> const modal = new bootstrap.Modal(document.getElementById("modal")) htmx.on("htmx:afterSwap", (e) => { // Response targeting #dialog => show the modal if (e.detail.target.id == "dialog") { modal.show() } }) </script>
[ "Try using this instead:\n{% url 'journalrep' column='date' direction='D' %}\n\nAnd also in urls.py:\npath('journalrep/<str:column>/<str:direction>', views.journalrep, name='journalrep')\n\nAnd potentially removing the line above this also as I'm not sure its required.\nIt's possible that django is arching the first one, but hard to say with the information provided.\n", "In this line (line 3 of the template)\n <select name=\"coa_account\" hx-get=\"{% url 'journalrep' row.transactionID %}\" hx-target=\"#requestcontent\" >\n\nYou are not, at that point, looping through rows, so the value row.transactionID is empty, creating the empty argument of the error.\nBased on what occurs later and your urls.py , you are prabably also wanting to reference journaldet rather than journaldep for that URL structure to work.\n" ]
[ 1, 1 ]
[]
[]
[ "django", "django_templates", "django_urls", "python" ]
stackoverflow_0074524245_django_django_templates_django_urls_python.txt
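A sketch of the disambiguation both answers point toward: give the two routes distinct names so reverse() cannot pick the wrong pattern (the name journalrep_sorted is illustrative, not from the question). The template sort links would then use {% url 'journalrep_sorted' 'date' 'D' %}, while the select element keeps a plain {% url 'journalrep' %}.

# urls.py - distinct names avoid ambiguous reverse() lookups
urlpatterns = [
    path('journalrep/', views.journalrep, name='journalrep'),
    path('journalrep/<str:column>/<str:direction>/', views.journalrep, name='journalrep_sorted'),
]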
Q: Why can't a Telegram bot on Python with webhooks process messages from many users simultaneously, unlike a bot with long polling? I use aiogram. The logic of my bot is very simple - it receives messages from users and sends an echo message after 10 seconds. This is a test bot, but in general I want to make a bot for buying movies with a very big database of users. So my bot must be able to process messages from many users simultaneously and must receive messages using webhooks. Here are two Python scripts: Telegram-bot on Long Polling: import asyncio import logging from aiogram import Bot, Dispatcher, executor, types from bot_files.config import * # Configure logging logging.basicConfig(level=logging.INFO) # Initialize bot and dispatcher bot = Bot(token=bot_token) dp = Dispatcher(bot) @dp.message_handler() async def echo(message: types.Message): await asyncio.sleep(10) await message.answer(message.text) if __name__ == '__main__': executor.start_polling(dp, skip_updates=True) Telegram-bot on Webhooks: import asyncio import logging from aiogram import Bot, Dispatcher, executor, types from bot_files.config import * # Configure logging logging.basicConfig(level=logging.INFO) # Initialize bot and dispatcher bot = Bot(token=bot_token) dp = Dispatcher(bot) WEBHOOK_HOST = f'https://7417-176-8-60-184.ngrok.io' WEBHOOK_PATH = f'/webhook/{bot_token}' WEBHOOK_URL = f'{WEBHOOK_HOST}{WEBHOOK_PATH}' # webserver settings WEBAPP_HOST = '0.0.0.0' WEBAPP_PORT = os.getenv('PORT', default=5000) async def on_startup(dispatcher): await bot.set_webhook(WEBHOOK_URL, drop_pending_updates=True) async def on_shutdown(dispatcher): await bot.delete_webhook() @dp.message_handler() async def echo(message: types.Message): await asyncio.sleep(10) await message.answer(message.text) if __name__ == '__main__': executor.start_webhook( dispatcher=dp, webhook_path=WEBHOOK_PATH, skip_updates=True, on_startup=on_startup, on_shutdown=on_shutdown, host=WEBAPP_HOST, port=WEBAPP_PORT ) In the first case, if two users send messages simultaneously, both messages are also processed simultaneously (asynchronously) - 10 seconds. In the second case messages are processed sequentially (not asynchronously) - one of the two users must wait 20 seconds. Why can't a Telegram bot on Python with webhooks process messages from many users simultaneously, unlike a bot with long polling? A: Actually, a Telegram bot on Python with webhooks can process messages from many users simultaneously. You just need to put @dp.async_task after the handler decorator: @dp.message_handler() @dp.async_task async def echo(message: types.Message): await asyncio.sleep(10) await message.answer(message.text)
Why can't a Telegram bot on Python with webhooks process messages from many users simultaneously, unlike a bot with long polling?
I use aiogram. The logic of my bot is very simple - it receives messages from users and sends an echo message after 10 seconds. This is a test bot, but in general I want to make a bot for buying movies with a very big database of users. So my bot must be able to process messages from many users simultaneously and must receive messages using webhooks. Here are two Python scripts: Telegram-bot on Long Polling: import asyncio import logging from aiogram import Bot, Dispatcher, executor, types from bot_files.config import * # Configure logging logging.basicConfig(level=logging.INFO) # Initialize bot and dispatcher bot = Bot(token=bot_token) dp = Dispatcher(bot) @dp.message_handler() async def echo(message: types.Message): await asyncio.sleep(10) await message.answer(message.text) if __name__ == '__main__': executor.start_polling(dp, skip_updates=True) Telegram-bot on Webhooks: import asyncio import logging from aiogram import Bot, Dispatcher, executor, types from bot_files.config import * # Configure logging logging.basicConfig(level=logging.INFO) # Initialize bot and dispatcher bot = Bot(token=bot_token) dp = Dispatcher(bot) WEBHOOK_HOST = f'https://7417-176-8-60-184.ngrok.io' WEBHOOK_PATH = f'/webhook/{bot_token}' WEBHOOK_URL = f'{WEBHOOK_HOST}{WEBHOOK_PATH}' # webserver settings WEBAPP_HOST = '0.0.0.0' WEBAPP_PORT = os.getenv('PORT', default=5000) async def on_startup(dispatcher): await bot.set_webhook(WEBHOOK_URL, drop_pending_updates=True) async def on_shutdown(dispatcher): await bot.delete_webhook() @dp.message_handler() async def echo(message: types.Message): await asyncio.sleep(10) await message.answer(message.text) if __name__ == '__main__': executor.start_webhook( dispatcher=dp, webhook_path=WEBHOOK_PATH, skip_updates=True, on_startup=on_startup, on_shutdown=on_shutdown, host=WEBAPP_HOST, port=WEBAPP_PORT ) In the first case, if two users send messages simultaneously, both messages are also processed simultaneously (asynchronously) - 10 seconds. In the second case messages are processed sequentially (not asynchronously) - one of the two users must wait 20 seconds. Why can't a Telegram bot on Python with webhooks process messages from many users simultaneously, unlike a bot with long polling?
[ "Actually telegram-bot on Python with Webhooks can process messages from many users simultaneously. You need just to put @dp.async_task after handler\n@dp.message_handler()\n@dp.async_task\nasync def echo(message: types.Message):\n await asyncio.sleep(10)\n await message.answer(message.text)\n\n" ]
[ 0 ]
[]
[]
[ "aiogram", "python", "simultaneous", "telegram_bot", "webhooks" ]
stackoverflow_0074500287_aiogram_python_simultaneous_telegram_bot_webhooks.txt
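An alternative sketch that behaves the same on either transport: return from the handler immediately and do the slow work in a background task, so the webhook request is acknowledged at once. This assumes aiogram 2.x, like the question's code.

import asyncio

async def delayed_reply(message: types.Message):
    await asyncio.sleep(10)
    await message.answer(message.text)

@dp.message_handler()
async def echo(message: types.Message):
    # hand the slow part to the event loop and return immediately
    asyncio.create_task(delayed_reply(message))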
Q: how to make easy and efficient plots on Python I use matplotlib for my plots; I find it great, but sometimes too complicated. Here is an example: import matplotlib.pyplot as plt import numpy as np idx1 = -3 idx2 = 3 x = np.arange(-3, 3, 0.01) y = np.sin(np.pi*x*7)/(np.pi*x*7) major_ticks = np.arange(idx1, idx2, 1) minor_ticks = np.arange(idx1, idx2, 0.1) fig = plt.figure() ax = fig.add_subplot(111) ax.set_ylim(-0.3, 1.2) ax.set_xlim(idx1, idx2) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, minor = True) ax.grid(True, which = 'both') ax.tick_params(axis = 'x', labelsize = 18) ax.tick_params(axis = 'y', labelsize = 18) ax.plot(x, y) plt.show() Is there anything implemented in matplotlib and/or seaborn where I can provide all these plot settings just as arguments to a single function? It would considerably reduce the number of code lines and make the script easier both to write and to understand. A: Matplotlib provides an object oriented API. This means that all the elements of the figure are actually objects for which one can get and set properties and which can be easily manipulated. This makes matplotlib really flexible such that it can produce almost any plot you'd imagine. Since a plot may consist of a hundred or more elements, a function that would allow the same flexibility would need that amount of possible arguments. It is not necessarily easier to remember all possible arguments of a function than all possible attributes of a class. Having a single function call that does all of this does not necessarily mean that you have to type in fewer characters. The commands would just be ordered differently. Furthermore the object oriented approach allows one to keep things separate. Some properties of the axes, like the grid or the axis labels, are completely independent of what you plot to the axes. So you wouldn't want to set the xticks in the call to plot, because they are simply not related and it may be very confusing to set the same ticklabels twice when plotting two lines in the same axes. On the other hand, matplotlib is really easy. In order to produce a plot you need two lines import matplotlib.pyplot as plt plt.plot([1,2,3],[2,1,3]) which sets most of the parameters exactly as they are needed. The more you want to customize this plot, the more settings you have to apply. Which is fine, as it allows the users themselves to determine how much in depth they want to control the appearance of the plot. Most matplotlib code can be separated into three parts. Setting the style Creating the plot Customizing the plot Setting the style in the case of the code from the question involves e.g. the ticklabel size and the use of a grid. Those properties can be set as is done in the code, but it may indeed be that one always wants to use the same properties here and finds it annoying to type in the same parameters every time one creates a plot. Therefore matplotlib provides general style settings, called rcParams. They can be set at the beginning of a script, e.g. plt.rcParams['lines.linewidth'] = 2 plt.rcParams['axes.grid'] = True plt.rcParams['axes.labelsize'] = 18 and will be applied to all plots within the script. It is also possible to define a complete stylesheet using those parameters. For more information see the Customizing matplotlib article. It is equally possible to use predefined stylesheets for certain applications. Simply importing import seaborn is also a possible way to change the style. Creating the plot cannot be simplified much more.
It's clear that one needs as many plotting commands as items to plot. Creating the figure and axes like fig, ax = plt.subplots() saves one line though. Equally, no simplification is possible if customizing ticks or tickmarks is required. One may however consider using Tickers and Formatters for this purpose. In the end one may of course consider writing a custom function which performs many of those tasks, but everyone can decide whether that is useful for themselves. A: Browsing around I saw this web page. This line of code can summarise many settings import matplotlib as mpl mpl.rc('lines', linewidth=2, color='r') A: ax.set is very useful for this: ax.set(xlim=(idx1, idx2), ylim=(-0.3, 1.2), xticks=major_ticks, ...) You can only set simple single-argument properties (e.g. those which don't need further keywords), but it's a nice timesaver.
how to make easy and efficient plots on Python
I use matplotlib for my plots; I find it great, but sometimes too complicated. Here is an example: import matplotlib.pyplot as plt import numpy as np idx1 = -3 idx2 = 3 x = np.arange(-3, 3, 0.01) y = np.sin(np.pi*x*7)/(np.pi*x*7) major_ticks = np.arange(idx1, idx2, 1) minor_ticks = np.arange(idx1, idx2, 0.1) fig = plt.figure() ax = fig.add_subplot(111) ax.set_ylim(-0.3, 1.2) ax.set_xlim(idx1, idx2) ax.set_xticks(major_ticks) ax.set_xticks(minor_ticks, minor = True) ax.grid(True, which = 'both') ax.tick_params(axis = 'x', labelsize = 18) ax.tick_params(axis = 'y', labelsize = 18) ax.plot(x, y) plt.show() Is there anything implemented in matplotlib and/or seaborn where I can provide all these plot settings just as arguments to a single function? It would considerably reduce the number of code lines and make the script easier both to write and to understand.
[ "Matplotlib provides an object oriented API. This means that all the elements of the figure are acutally objects for which one can get and set properties and which can be easily manipulated. This makes matplotlib really flexible such that it can produce almost any plot you'd imagine. \nSince a plot may consist of a hundred or more elements, a function that would allow the same flexibility would need that amount of possible arguments. It is not necessarily easier to remember all possible arguments of a function than all possible attributes of a class. \nHaving a single function call that does all of this, does not necessarily mean that you have to type in less characters. The commands would just be ordered differently.\nFurthermore the object oriented approach allows to keep things seperate. Some properties of the axes, like the grid or the axis labels are completely independend on what you plot to the axes. So you wouldn't want to set the xticks in the call to plot, because they are simply not related and it may be very confusing to set twice the same ticklabels when plotting two lines in the same axes.\nOn the other hand, matplotlib is really easy. In order to produce a plot you need two lines\nimport matplotlib.pyplot as plt\nplt.plot([1,2,3],[2,1,3])\n\nwhich sets most of the parameters exactly as they are needed. The more you want to customize this plot, the more settings you have to apply. Which is fine as it allows the user himself to determine how much in depth he wants to control the appearance of the plot.\nMost matplotlib codes can be separated into three parts. \n\nSetting the style\nCreating the plot\nCustomizing the plot\n\nSetting the style in the case of the code from the question involves e.g. the ticklabel size and the use of a grid. Those properties can set as it's done in the code but it may indeed be that one always wants to use the same properities here and finds it annoying to type the same parameters in every time one creates a plot. Therefore matplotlib provides general style settings, called rcParams. They can be set at the beginning of a script, e.g.\nplt.rcParams['lines.linewidth'] = 2\nplt.rcParams['axes.grid '] = True\nplt.rcParams['axes.labelsize'] = 18\n\nand will be applied to all plots within the script. It is also possible to define a complete stylesheet using those parameters. For more information see the Customizing matplotlib article.\nIt is equally possible to use predefined stylesheets for certain applications.\nSimply importing import seaborn is also a possible way to change the style.\nCreating the plot can not be simplified much more. It's clear that one needs as many plotting commands as items to plot. Creating the figure and axes like\nfig, ax = plt.subplots()\n\nsaves one line though.\nEqually no simplification is possible if customizing ticks or tickmarks are required. One may however consider to use Tickers and Formatters for this purpose.\nAt the end one may of course consider to write a custom function which performs much of those tasks, but everyone can decide if that is useful for himself.\n", "Browsing around I saw this wabe page. \nThis line of code can summarise many settings\nimport matplotlib as mpl\nmpl.rc('lines', linewidth=2, color='r')\n\n", "ax.set is very useful for this:\nax.set(xlim=(idx1, idx2), ylim=(-0.3, 1.2),\n xticks=major_ticks, ...)\n\nYou can only set simple single-argument properties (e.g. those which don't need further keywords), but it's a nice timesaver.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "matplotlib", "python", "seaborn" ]
stackoverflow_0042996834_matplotlib_python_seaborn.txt
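One way to get the single-call interface the question asks for is a small wrapper of one's own; a minimal sketch reproducing the question's settings (the function and parameter names here are made up).

import matplotlib.pyplot as plt
import numpy as np

def styled_axes(xlim, ylim, major=1, minor=0.1, labelsize=18):
    fig, ax = plt.subplots()
    ax.set(xlim=xlim, ylim=ylim, xticks=np.arange(xlim[0], xlim[1], major))
    ax.set_xticks(np.arange(xlim[0], xlim[1], minor), minor=True)
    ax.grid(True, which='both')
    ax.tick_params(axis='both', labelsize=labelsize)
    return fig, ax

x = np.arange(-3, 3, 0.01)
fig, ax = styled_axes(xlim=(-3, 3), ylim=(-0.3, 1.2))
ax.plot(x, np.sin(np.pi * x * 7) / (np.pi * x * 7))
plt.show()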
Q: Calculate Monthly Churn I am wanting to obtain a monthly customer churn rate by using the following formula: (Number of Customers Lost within 1-month period / Number of Active Customers at the beginning of the 1-month period) Say I have the following data (this is just a small sample of it - note that if "Boolean == True" the customer has left, otherwise False) start_date Boolean 2015-10-02 False 2015-10-04 False 2015-10-05 True 2015-10-06 True 2015-10-08 True 2015-10-08 True 2015-10-08 False 2015-10-08 False 2015-10-08 True 2015-10-08 False What I would like to do is use the above-stated formula to obtain a time-series graph to plot the monthly customer churn overtime (which would be the monthly customer churn across all years in the dataset) How would I go about doing this? A: Assuming you have the data stored in an array: Boolean = [False, False, True, True, True, True, False, False, True, False] the solution is a simple one-liner: sum([1 for x in Boolean if x])/len(Boolean)
Calculate Monthly Churn
I am wanting to obtain a monthly customer churn rate by using the following formula: (Number of Customers Lost within 1-month period / Number of Active Customers at the beginning of the 1-month period) Say I have the following data (this is just a small sample of it - note that if "Boolean == True" the customer has left, otherwise False) start_date Boolean 2015-10-02 False 2015-10-04 False 2015-10-05 True 2015-10-06 True 2015-10-08 True 2015-10-08 True 2015-10-08 False 2015-10-08 False 2015-10-08 True 2015-10-08 False What I would like to do is use the above-stated formula to obtain a time-series graph to plot the monthly customer churn overtime (which would be the monthly customer churn across all years in the dataset) How would I go about doing this?
[ "Assuming you have the data stored in an array:\nBoolean = [False, False, True, True, True, True, False, False, True, False]\n\nthe solution is a simple one-liner:\nsum([1 for x in Boolean if x])/len(Boolean)\n\n" ]
[ 0 ]
[]
[]
[ "churn", "pandas", "python" ]
stackoverflow_0074525832_churn_pandas_python.txt
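A sketch of a monthly breakdown with pandas, under the strong assumption that the Boolean flag marks customers of that month's cohort who were lost; the sample data has no separate "active at month start" count, so this only approximates the stated formula.

import pandas as pd

df = pd.DataFrame({
    'start_date': pd.to_datetime(['2015-10-02', '2015-10-04', '2015-10-05',
                                  '2015-10-06', '2015-10-08', '2015-10-08']),
    'Boolean': [False, False, True, True, True, False],
})

monthly = df.set_index('start_date').resample('MS')['Boolean'].agg(['sum', 'count'])
monthly['churn_rate'] = monthly['sum'] / monthly['count']   # lost / customers that month
monthly['churn_rate'].plot()                                # monthly churn time series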
Q: a data collection with web scraping I'm trying to extract data from a site and then create a DataFrame out of it, but the program doesn't work properly. I'm new to web scraping. I hope someone can help me out and find the problem. from urllib.request import urlopen from bs4 import BeautifulSoup url = 'https://www.imdb.com/chart/top/?sort=rk,asc&mode=simple&page=1' page = urlopen(url) soup = BeautifulSoup(page, 'html.parser') #print(soup) film_in= soup.find('tbody').findAll('tr') #print(film_in) film = film_in[0] #print(film) titre = film.find("a",{'title':'Frank Darabont (dir.), Tim Robbins, Morgan Freeman'}) print(titre.text) rang = film.find("td",{'class':'ratingColumn imdbRating'}).find('strong').text #print(rang) def remove_parentheses(string): return string.replace("(","").replace(")","") annΓ©e = film.find("span",{'class':'secondaryInfo'}).text #print(annΓ©e) imdb =[] for films in film_in: titre = film.find("a",{'title':'Frank Darabont (dir.), Tim Robbins, Morgan Freeman'}) rang = film.find("td",{'class':'ratingColumn imdbRating'}).find('strong').text annΓ©e =(remove_parentheses(film.find("span",{'class':'secondaryInfo'}).text)) dictionnaire = {'film': film, 'rang': rang, 'annΓ©e':annΓ©e } imdb.append(dictionnaire) df_imdb = pd.DataFrame(imdb) print(df_imdb) I need to solve it using urllib - is there a way? Thanks in advance. A: You can try the next example: from bs4 import BeautifulSoup from urllib.request import urlopen import requests import pandas as pd url = 'https://www.imdb.com/chart/top/?sort=rk,asc&mode=simple&page=1' #soup = BeautifulSoup(requests.get(url).text,'html.parser')# It's the perfect and powerful page = urlopen(url) soup = BeautifulSoup(page, 'html.parser') imdb = [] film_in = soup.select('table[class="chart full-width"] tr') for film in film_in[1:]: titre = film.select_one('.titleColumn a').get_text(strip=True) rang = film.select_one('[class="ratingColumn imdbRating"] > strong').text annΓ©e =film.find("span",{'class':'secondaryInfo'}).get_text(strip=True) dictionnaire = {'titre': titre, 'rang': rang, 'annΓ©e':annΓ©e } imdb.append(dictionnaire) df_imdb = pd.DataFrame(imdb) print(df_imdb) Output: titre rang annΓ©e 0 The Shawshank Redemption 9.2 (1994) 1 The Godfather 9.2 (1972) 2 The Dark Knight 9.0 (2008) 3 The Godfather Part II 9.0 (1974) 4 12 Angry Men 9.0 (1957) .. ... ... ... 245 Dersu Uzala 8.0 (1975) 246 Aladdin 8.0 (1992) 247 The Help 8.0 (2011) 248 The Iron Giant 8.0 (1999) 249 Gandhi 8.0 (1982) [250 rows x 3 columns]
a data collection with web scraping
I'm trying to extract data from a site and then create a DataFrame out of it, but the program doesn't work properly. I'm new to web scraping. I hope someone can help me out and find the problem. from urllib.request import urlopen from bs4 import BeautifulSoup url = 'https://www.imdb.com/chart/top/?sort=rk,asc&mode=simple&page=1' page = urlopen(url) soup = BeautifulSoup(page, 'html.parser') #print(soup) film_in= soup.find('tbody').findAll('tr') #print(film_in) film = film_in[0] #print(film) titre = film.find("a",{'title':'Frank Darabont (dir.), Tim Robbins, Morgan Freeman'}) print(titre.text) rang = film.find("td",{'class':'ratingColumn imdbRating'}).find('strong').text #print(rang) def remove_parentheses(string): return string.replace("(","").replace(")","") annΓ©e = film.find("span",{'class':'secondaryInfo'}).text #print(annΓ©e) imdb =[] for films in film_in: titre = film.find("a",{'title':'Frank Darabont (dir.), Tim Robbins, Morgan Freeman'}) rang = film.find("td",{'class':'ratingColumn imdbRating'}).find('strong').text annΓ©e =(remove_parentheses(film.find("span",{'class':'secondaryInfo'}).text)) dictionnaire = {'film': film, 'rang': rang, 'annΓ©e':annΓ©e } imdb.append(dictionnaire) df_imdb = pd.DataFrame(imdb) print(df_imdb) I need to solve it using urllib - is there a way? Thanks in advance.
[ "You can try the next example:\n from bs4 import BeautifulSoup\n from urllib.request import urlopen\n import requests\n import pandas as pd\n \n url = 'https://www.imdb.com/chart/top/?sort=rk,asc&mode=simple&page=1'\n \n #soup = BeautifulSoup(requests.get(url).text,'html.parser')# It's the perfect and powerful \n page = urlopen(url)\n soup = BeautifulSoup(page, 'html.parser')\n \n imdb = []\n film_in = soup.select('table[class=\"chart full-width\"] tr')\n for film in film_in[1:]:\n titre = film.select_one('.titleColumn a').get_text(strip=True)\n rang = film.select_one('[class=\"ratingColumn imdbRating\"] > strong').text\n \n annΓ©e =film.find(\"span\",{'class':'secondaryInfo'}).get_text(strip=True)\n \n dictionnaire = {'titre': titre,\n 'rang': rang,\n 'annΓ©e':annΓ©e\n }\n imdb.append(dictionnaire)\n \n df_imdb = pd.DataFrame(imdb)\n print(df_imdb)\n\nOutput:\n titre rang annΓ©e\n0 The Shawshank Redemption 9.2 (1994)\n1 The Godfather 9.2 (1972)\n2 The Dark Knight 9.0 (2008)\n3 The Godfather Part II 9.0 (1974)\n4 12 Angry Men 9.0 (1957)\n.. ... ... ...\n245 Dersu Uzala 8.0 (1975)\n246 Aladdin 8.0 (1992)\n247 The Help 8.0 (2011)\n248 The Iron Giant 8.0 (1999)\n249 Gandhi 8.0 (1982)\n\n[250 rows x 3 columns]\n\n" ]
[ 0 ]
[]
[]
[ "python", "urllib", "web_scraping" ]
stackoverflow_0074525787_python_urllib_web_scraping.txt
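Note: the bug in the original question is the loop header for films in film_in while the body keeps reading from film (the first row), so every appended dictionary holds the same movie. A minimal sketch of a fix that stays with urllib and keeps the asker's variable names; the td/class selectors are assumptions, since the live IMDB markup may have changed:
from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd

url = 'https://www.imdb.com/chart/top/?sort=rk,asc&mode=simple&page=1'
soup = BeautifulSoup(urlopen(url), 'html.parser')

imdb = []
for film in soup.find('tbody').findAll('tr'):  # iterate over the loop variable itself
    titre = film.find('td', {'class': 'titleColumn'}).find('a').text
    rang = film.find('td', {'class': 'ratingColumn imdbRating'}).find('strong').text
    année = film.find('span', {'class': 'secondaryInfo'}).text.strip('()')
    imdb.append({'titre': titre, 'rang': rang, 'année': année})

df_imdb = pd.DataFrame(imdb)
print(df_imdb)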
Q: Convert df.uint8 to df.float32 in TensorFlow
I have a ds_train of MNIST data of data type uint8 and I want to convert it to float32, but I am getting the following error.
ValueError                                Traceback (most recent call last)
<ipython-input-14-ac6926bc60db> in <module>
----> 1 tf.image.convert_image_dtype(ds_trn,dtype=tf.float32, saturate=False)

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    100   dtype = dtypes.as_dtype(dtype).as_datatype_enum
    101   ctx.ensure_initialized()
--> 102   return ops.EagerTensor(value, ctx.device_name, dtype)
    103 
    104 

ValueError: Attempt to convert a value (<PrefetchDataset element_spec=(TensorSpec(shape=(28, 28, 1), dtype=tf.uint8, name=None), TensorSpec(shape=(), dtype=tf.int64, name=None))>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.PrefetchDataset'>) to a Tensor.

I was trying to convert it using tf.cast in order to normalize it and get it ready for further use of the data.
A: There are multiple possible causes:

A mismatch between graph-mode and eager execution
The target conversion does not support the input type (an image-type array is expected)
A variable update

Sample: resizing, grayscale conversion and dtype conversion are ordered deliberately in a program written with image-processing knowledge. To limit information loss, order the operations from the least lossy conversion to the most lossy one.

import tensorflow as tf

import matplotlib.pyplot as plt

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Functions
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
@tf.function
def f( ):
    image = plt.imread( "F:\\datasets\\downloads\\dark\\train\\01.jpg" )
    image = tf.keras.utils.img_to_array( image )
    image = tf.convert_to_tensor(image, dtype=tf.int64)
    image = tf.image.resize(image, [32,32], method='nearest')
    image = tf.image.rgb_to_grayscale( image )

    return image

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Tasks
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
image = f( )
print( image )
plt.imshow( image )
plt.show()

Output: conversion, resized RGB:
[[ 23]
 [ 19]
 [ 21]
 ...
 [ 15]
 [ 44]
 [ 42]]], shape=(32, 32, 1), dtype=int64)
Convert df.uint8 to df.float32 in TensorFlow
I have a ds_train of MNIST data of data type uint8 and I want to convert it to float32, but I am getting the following error.
ValueError                                Traceback (most recent call last)
<ipython-input-14-ac6926bc60db> in <module>
----> 1 tf.image.convert_image_dtype(ds_trn,dtype=tf.float32, saturate=False)

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py in convert_to_eager_tensor(value, ctx, dtype)
    100   dtype = dtypes.as_dtype(dtype).as_datatype_enum
    101   ctx.ensure_initialized()
--> 102   return ops.EagerTensor(value, ctx.device_name, dtype)
    103 
    104 

ValueError: Attempt to convert a value (<PrefetchDataset element_spec=(TensorSpec(shape=(28, 28, 1), dtype=tf.uint8, name=None), TensorSpec(shape=(), dtype=tf.int64, name=None))>) with an unsupported type (<class 'tensorflow.python.data.ops.dataset_ops.PrefetchDataset'>) to a Tensor.

I was trying to convert it using tf.cast in order to normalize it and get it ready for further use of the data.
[ "there are multiple causes\n\nIt is between the process and the eager process\nTarget conversion does not support, image type array *\nVariable update\n\n\nSample: Resizes is a lossless process, grays scales and conversion the command are line in order of the program designed with image process knowledge. To protect information loss in conversion you need to order less information loss to most conversions for the answer.\n\nimport tensorflow as tf\n\nimport matplotlib.pyplot as plt\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Functions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n@tf.function\ndef f( ):\n image = plt.imread( \"F:\\\\datasets\\\\downloads\\\\dark\\\\train\\\\01.jpg\" )\n image = tf.keras.utils.img_to_array( image )\n image = tf.convert_to_tensor(image, dtype=tf.int64)\n image = tf.image.resize(image, [32,32], method='nearest')\n image = tf.image.rgb_to_grayscale( image )\n\n return image\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Tasks\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nimage = f( )\nprint( image )\nplt.imshow( image )\nplt.show()\n\n\nOutput: Conversion, Resizes RGB.!\n\n[[ 23]\n [ 19]\n [ 21]\n ...\n [ 15]\n [ 44]\n [ 42]]], shape=(32, 32, 1), dtype=int64)\n\n\n" ]
[ 0 ]
[]
[]
[ "python", "tensorflow" ]
stackoverflow_0074523390_python_tensorflow.txt
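Note: the error itself happens because tf.image.convert_image_dtype (and tf.cast) operate on tensors, while ds_train is a tf.data.Dataset object. A minimal sketch of the usual fix, mapping the cast over the dataset; the name ds_train and the 1/255 normalization are assumptions based on the MNIST setup described in the question:
import tensorflow as tf

def normalize(image, label):
    # cast uint8 pixels to float32 and scale to [0, 1]
    return tf.cast(image, tf.float32) / 255.0, label

ds_train = ds_train.map(normalize, num_parallel_calls=tf.data.AUTOTUNE)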
Q: R's mutate() and grepl() in Python
I have a column in a dataset that lists all of the software that a given computer has installed. I have created multiple binary columns from this column so each software package has its own column. My R code is below:
data <- data %>% 
  mutate(MS_Office_installed = ifelse(grepl("MS Office", installed_software), 1, 0),
         Adobe_Acrobat_installed = ifelse(grepl("Adobe Acrobat", installed_software), 1, 0),
         Slack_installed = ifelse(grepl("Slack", installed_software), 1, 0),
         Mathcard_installed = ifelse(grepl("Mathcard", installed_software), 1, 0),
         Google_Chrome_installed = ifelse(grepl("Google Chrome", installed_software), 1, 0))

How can I duplicate this in Python? Some observations have no software installed and are NaN.
A: You may use str.contains here. For example:
df["MS_Office_installed"] = df["installed_software"].str.contains(r'\bMS Office\b', regex=True, na=False).astype(int)

Use similar logic for the other desired boolean columns. Passing na=False treats rows with no installed software (NaN) as 0.
R's mutate() and grepl() in Python
I have a column in a dataset that lists all of the software that a given computer has installed. I have created multiple binary columns from this column so each software package has its own column. My R code is below:
data <- data %>% 
  mutate(MS_Office_installed = ifelse(grepl("MS Office", installed_software), 1, 0),
         Adobe_Acrobat_installed = ifelse(grepl("Adobe Acrobat", installed_software), 1, 0),
         Slack_installed = ifelse(grepl("Slack", installed_software), 1, 0),
         Mathcard_installed = ifelse(grepl("Mathcard", installed_software), 1, 0),
         Google_Chrome_installed = ifelse(grepl("Google Chrome", installed_software), 1, 0))

How can I duplicate this in Python? Some observations have no software installed and are NaN.
[ "You may use str.contains here. For example:\ndf[\"MS_Office_installed\"] = df[\"installed_software\"].str.contains(r'\\bMS Office\\b', regex=True).astype(int)\n\nUse similar logic for the other desired boolean columns.\n" ]
[ 1 ]
[]
[]
[ "grepl", "mutate", "python", "r" ]
stackoverflow_0074526024_grepl_mutate_python_r.txt
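Note: to mirror the whole mutate() call rather than a single column, a small loop over a dict of column/pattern pairs works. A sketch assuming the frame is df and that missing values should count as not installed:
software = {
    "MS_Office_installed": "MS Office",
    "Adobe_Acrobat_installed": "Adobe Acrobat",
    "Slack_installed": "Slack",
    "Mathcard_installed": "Mathcard",
    "Google_Chrome_installed": "Google Chrome",
}
for col, name in software.items():
    # regex=False does a plain substring match, like grepl with fixed=TRUE
    df[col] = df["installed_software"].str.contains(name, regex=False, na=False).astype(int)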
Q: Please help me with Selenium on Firefox
Good evening. I'm trying to do some tests with Selenium in Firefox, but I'm stuck: I cannot click on a button because a message about accepting cookies appears, and that does not allow me to continue with the test. I do not know how to make Selenium accept cookies. This is the message it gave me:
An exception occurred: ElementClickInterceptedException
Message: Element <select id="tramiteGrupo[1]" class="mf-input__l" name="tramiteGrupo[1]"> is not clickable at point (470,571) because another element <a class="small cli-plugin-button cli-plugin-main-button" href="#"> obscures it

I want to get Selenium to accept cookies and be able to continue entering parameters.
A: Get Selenium to pause (check the docs on how to pause; in Python, something like time.sleep(2)) for a moment just before it clicks the "Accept cookies" button, and try clicking the button yourself while it's paused. That way you will be able to see what element is blocking it.
Then use "page scroll up"/"page scroll down"/x/y/whatever to move it so it will not be blocked when trying to click the button.
Please help me with Selenium on Firefox
Good evening. I'm trying to do some tests with Selenium in Firefox, but I'm stuck: I cannot click on a button because a message about accepting cookies appears, and that does not allow me to continue with the test. I do not know how to make Selenium accept cookies. This is the message it gave me:
An exception occurred: ElementClickInterceptedException
Message: Element <select id="tramiteGrupo[1]" class="mf-input__l" name="tramiteGrupo[1]"> is not clickable at point (470,571) because another element <a class="small cli-plugin-button cli-plugin-main-button" href="#"> obscures it

I want to get Selenium to accept cookies and be able to continue entering parameters.
[ "Get Selenium to pause(check docs on how to pause, something like Thread.sleep(2000);) for a minute just before it clicks the \"Accept cookies\" button and try click the button yourself when it's paused. So you will be able to see what element is blocking it.\nThen use \"page scroll up\"/\"page scroll down\"/x/y/whatever to move it so it will not be blocked when trying to click the button.\n" ]
[ 0 ]
[]
[]
[ "geckodriver", "python", "selenium" ]
stackoverflow_0074526008_geckodriver_python_selenium.txt
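Note: rather than pausing, the usual fix is to click the consent button first; its CSS class is visible right in the exception message. A sketch assuming the WebDriver instance is named driver (the selector comes straight from the error text, but the site's actual markup is an assumption):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)
# the cookie banner's button, taken from the class names in the error message
accept = wait.until(EC.element_to_be_clickable(
    (By.CSS_SELECTOR, "a.cli-plugin-button.cli-plugin-main-button")))
accept.click()
# once the banner is gone, the <select> is no longer obscured
wait.until(EC.element_to_be_clickable((By.ID, "tramiteGrupo[1]"))).click()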
Q: Vector arithmetic
I am trying to create an array of evenly spaced elements ranging from -n to n (ex: -2 to 2, up to 1000 evenly spaced elements). Then I use the array to create 2 new arrays from 2 equations by doing vector arithmetic.
import numpy as np
from math import sqrt

width = 4
intervals = 1000
xCoords = np.linspace(-width/2, width/2, intervals+1)
yList1 = sqrt(1 - ((abs(xCoords) - 1)**2))
yList2 = -3 * sqrt(1 - sqrt((abs(xCoords)/2)))
print(yList1)

I am getting the following error:
TypeError: only size-1 arrays can be converted to Python scalars
A: Use numpy functions on numpy arrays instead of the math library functions. Try np.sqrt and np.abs.
Vector arithmetic
I am trying to create an array of evenly spaced elements ranging from -n to n (ex: -2 to 2, up to 1000 evenly spaced elements). Then I use the array to create 2 new arrays from 2 equations by doing vector arithmetic.
import numpy as np
from math import sqrt

width = 4
intervals = 1000
xCoords = np.linspace(-width/2, width/2, intervals+1)
yList1 = sqrt(1 - ((abs(xCoords) - 1)**2))
yList2 = -3 * sqrt(1 - sqrt((abs(xCoords)/2)))
print(yList1)

I am getting the following error:
TypeError: only size-1 arrays can be converted to Python scalars
[ "Use numpy functions on numpy arrays instead of the math library functions. Try np.sqrt and np.abs\n" ]
[ 2 ]
[]
[]
[ "arrays", "python", "vector", "vectorization" ]
stackoverflow_0074526105_arrays_python_vector_vectorization.txt
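Note: a sketch of the answer applied to the question's code; math.sqrt only accepts scalars, while np.sqrt and np.abs broadcast over the whole array:
import numpy as np

width = 4
intervals = 1000
xCoords = np.linspace(-width/2, width/2, intervals + 1)
yList1 = np.sqrt(1 - (np.abs(xCoords) - 1)**2)
yList2 = -3 * np.sqrt(1 - np.sqrt(np.abs(xCoords) / 2))
print(yList1)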
Q: Python program to extract the combination of elements (like Co & Fe) available in the Composition line of a data set
This is a sample of the data set; I need to extract the combination of elements available in the Composition line (like Co & Fe) from each record:
{
    "Au": 0.9789814953164448,
    "Az": 2.398972844060257,
    "B prime": 4.016727605471411,
    "B/G": 2.3640597506841443,
    "Bulk modulus": 165.36806388061723,
    "C11": 220.59548272352293,
    "C12": 137.75435445916438,
    "C44": 99.3668085387544,
    "Composition": "Co0.3030303 Fe0.27272727 W0.18181818 Zr0.24242424",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.043437825654109474,
    "Group": "validation set",
    "Lattice constant": 3.1677375834361996,
    "Poisson ratio": 0.31463584118955934,
    "Shear modulus": 69.9508816698735,
    "Total energy": -9166.132333038346,
    "Wigner-Seitz radius": 2.947420065302169,
    "Youngs modulus": 183.91987233205091
},
{
    "Au": 1.8997164025697,
    "Az": 3.2780363186086467,
    "B prime": 4.6337844536189445,
    "B/G": 2.1704056819788873,
    "Bulk modulus": 143.42189861171937,
    "C11": 186.59093474024905,
    "C12": 121.83738054745453,
    "C44": 106.13225120148684,
    "Composition": "Al0.33333333 Co0.16666667 Fe0.16666667 Nb0.33333333",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.00939231493839543,
    "Group": "validation set",
    "Lattice constant": 3.153272722671855,
    "Poisson ratio": 0.30029867452552794,
    "Shear modulus": 66.08068703586932,
    "Total energy": -3599.475639732862,
    "Wigner-Seitz radius": 2.933961241856199,
    "Youngs modulus": 171.8492595289542
},
{
    "Au": 1.93135861191619,
    "Az": 3.30708435057905,
    "B prime": 4.294116013034859,
    "B/G": 1.9451228338196076,
    "Bulk modulus": 147.91852499467285,
    "C11": 197.32920545644592,
    "C12": 123.21318476378632,
    "C44": 122.5539660799438,
    "Composition": "Al0.3030303 Mn0.24242424 Mo0.3030303 Ni0.15151515",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.006908657781423244,
    "Group": "validation set",
    "Lattice constant": 3.071993266900589,
    "Poisson ratio": 0.2805531626159049,
    "Shear modulus": 76.04585295223106,
    "Total energy": -3625.653696273156,
    "Wigner-Seitz radius": 2.8583348073656243,
    "Youngs modulus": 194.7615150036071
},

I need a Python program to extract the combination of elements (like Co & Fe) available in the Composition line.
A: First, you will need to get your data as a list of dictionaries. I am not sure how you are loading your data, so I am calling such a list as list_of_dicts. If you need help how to do that, I'd suggest you submit a different question. Then it's just a matter of looping through the dictionaries, finding the Composition key and parsing the Elements from the string values. I am showing a solution using regular expression module.
import re

list_of_dicts = [{
    "Au": 0.9789814953164448,
    "Az": 2.398972844060257,
    "B prime": 4.016727605471411,
    "B/G": 2.3640597506841443,
    "Bulk modulus": 165.36806388061723,
    "C11": 220.59548272352293,
    "C12": 137.75435445916438,
    "C44": 99.3668085387544,
    "Composition": "Co0.3030303 Fe0.27272727 W0.18181818 Zr0.24242424",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.043437825654109474,
    "Group": "validation set",
    "Lattice constant": 3.1677375834361996,
    "Poisson ratio": 0.31463584118955934,
    "Shear modulus": 69.9508816698735,
    "Total energy": -9166.132333038346,
    "Wigner-Seitz radius": 2.947420065302169,
    "Youngs modulus": 183.91987233205091},
{
    "Au": 1.8997164025697,
    "Az": 3.2780363186086467,
    "B prime": 4.6337844536189445,
    "B/G": 2.1704056819788873,
    "Bulk modulus": 143.42189861171937,
    "C11": 186.59093474024905,
    "C12": 121.83738054745453,
    "C44": 106.13225120148684,
    "Composition": "Al0.33333333 Co0.16666667 Fe0.16666667 Nb0.33333333",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.00939231493839543,
    "Group": "validation set",
    "Lattice constant": 3.153272722671855,
    "Poisson ratio": 0.30029867452552794,
    "Shear modulus": 66.08068703586932,
    "Total energy": -3599.475639732862,
    "Wigner-Seitz radius": 2.933961241856199,
    "Youngs modulus": 171.8492595289542},
{
    "Au": 1.93135861191619,
    "Az": 3.30708435057905,
    "B prime": 4.294116013034859,
    "B/G": 1.9451228338196076,
    "Bulk modulus": 147.91852499467285,
    "C11": 197.32920545644592,
    "C12": 123.21318476378632,
    "C44": 122.5539660799438,
    "Composition": "Al0.3030303 Mn0.24242424 Mo0.3030303 Ni0.15151515",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.006908657781423244,
    "Group": "validation set",
    "Lattice constant": 3.071993266900589,
    "Poisson ratio": 0.2805531626159049,
    "Shear modulus": 76.04585295223106,
    "Total energy": -3625.653696273156,
    "Wigner-Seitz radius": 2.8583348073656243,
    "Youngs modulus": 194.7615150036071}]

for d in list_of_dicts:
    # store the value of the Composition key in composition
    composition = d['Composition']  # is a string
    # split the composition string wherever spaces are present
    composition = composition.split()
    # use regular expressions to substitute digits and period by nothing
    # obtain the composition as a list of elements
    composition = [re.sub(r'\d+|\.', '', i) for i in composition]
    print(composition)

Outputs:
['Co', 'Fe', 'W', 'Zr']
['Al', 'Co', 'Fe', 'Nb']
['Al', 'Mn', 'Mo', 'Ni']
Python program to extract the combination of elements (like Co & Fe) available in the Composition line of a data set
This is a sample of the data set; I need to extract the combination of elements available in the Composition line (like Co & Fe) from each record:
{
    "Au": 0.9789814953164448,
    "Az": 2.398972844060257,
    "B prime": 4.016727605471411,
    "B/G": 2.3640597506841443,
    "Bulk modulus": 165.36806388061723,
    "C11": 220.59548272352293,
    "C12": 137.75435445916438,
    "C44": 99.3668085387544,
    "Composition": "Co0.3030303 Fe0.27272727 W0.18181818 Zr0.24242424",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.043437825654109474,
    "Group": "validation set",
    "Lattice constant": 3.1677375834361996,
    "Poisson ratio": 0.31463584118955934,
    "Shear modulus": 69.9508816698735,
    "Total energy": -9166.132333038346,
    "Wigner-Seitz radius": 2.947420065302169,
    "Youngs modulus": 183.91987233205091
},
{
    "Au": 1.8997164025697,
    "Az": 3.2780363186086467,
    "B prime": 4.6337844536189445,
    "B/G": 2.1704056819788873,
    "Bulk modulus": 143.42189861171937,
    "C11": 186.59093474024905,
    "C12": 121.83738054745453,
    "C44": 106.13225120148684,
    "Composition": "Al0.33333333 Co0.16666667 Fe0.16666667 Nb0.33333333",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.00939231493839543,
    "Group": "validation set",
    "Lattice constant": 3.153272722671855,
    "Poisson ratio": 0.30029867452552794,
    "Shear modulus": 66.08068703586932,
    "Total energy": -3599.475639732862,
    "Wigner-Seitz radius": 2.933961241856199,
    "Youngs modulus": 171.8492595289542
},
{
    "Au": 1.93135861191619,
    "Az": 3.30708435057905,
    "B prime": 4.294116013034859,
    "B/G": 1.9451228338196076,
    "Bulk modulus": 147.91852499467285,
    "C11": 197.32920545644592,
    "C12": 123.21318476378632,
    "C44": 122.5539660799438,
    "Composition": "Al0.3030303 Mn0.24242424 Mo0.3030303 Ni0.15151515",
    "Crystal structure": "bcc",
    "EOS": "birchmurnaghan",
    "Formation enthalpy": 0.006908657781423244,
    "Group": "validation set",
    "Lattice constant": 3.071993266900589,
    "Poisson ratio": 0.2805531626159049,
    "Shear modulus": 76.04585295223106,
    "Total energy": -3625.653696273156,
    "Wigner-Seitz radius": 2.8583348073656243,
    "Youngs modulus": 194.7615150036071
},

I need a Python program to extract the combination of elements (like Co & Fe) available in the Composition line.
[ "First, you will need to get your data as a list of dictionaries. I am not sure how you are loading your data, so I am calling such a list as list_of_dicts. If you need help how to do that, I'd suggest you submit a different question. Then it's just a matter of looping through the dictionaries, finding the Composition key and parsing the Elements from the string values. I am showing a solution using regular expression module.\nimport re\n\nlist_of_dicts = [{\n \"Au\": 0.9789814953164448,\n \"Az\": 2.398972844060257,\n \"B prime\": 4.016727605471411,\n \"B/G\": 2.3640597506841443,\n \"Bulk modulus\": 165.36806388061723,\n \"C11\": 220.59548272352293,\n \"C12\": 137.75435445916438,\n \"C44\": 99.3668085387544,\n \"Composition\": \"Co0.3030303 Fe0.27272727 W0.18181818 Zr0.24242424\",\n \"Crystal structure\": \"bcc\",\n \"EOS\": \"birchmurnaghan\",\n \"Formation enthalpy\": 0.043437825654109474,\n \"Group\": \"validation set\",\n \"Lattice constant\": 3.1677375834361996,\n \"Poisson ratio\": 0.31463584118955934,\n \"Shear modulus\": 69.9508816698735,\n \"Total energy\": -9166.132333038346,\n \"Wigner-Seitz radius\": 2.947420065302169,\n \"Youngs modulus\": 183.91987233205091},\n{\n \"Au\": 1.8997164025697,\n \"Az\": 3.2780363186086467,\n \"B prime\": 4.6337844536189445,\n \"B/G\": 2.1704056819788873,\n \"Bulk modulus\": 143.42189861171937,\n \"C11\": 186.59093474024905,\n \"C12\": 121.83738054745453,\n \"C44\": 106.13225120148684,\n \"Composition\": \"Al0.33333333 Co0.16666667 Fe0.16666667 Nb0.33333333\",\n \"Crystal structure\": \"bcc\",\n \"EOS\": \"birchmurnaghan\",\n \"Formation enthalpy\": 0.00939231493839543,\n \"Group\": \"validation set\",\n \"Lattice constant\": 3.153272722671855,\n \"Poisson ratio\": 0.30029867452552794,\n \"Shear modulus\": 66.08068703586932,\n \"Total energy\": -3599.475639732862,\n \"Wigner-Seitz radius\": 2.933961241856199,\n \"Youngs modulus\": 171.8492595289542},\n{\n \"Au\": 1.93135861191619,\n \"Az\": 3.30708435057905,\n \"B prime\": 4.294116013034859,\n \"B/G\": 1.9451228338196076,\n \"Bulk modulus\": 147.91852499467285,\n \"C11\": 197.32920545644592,\n \"C12\": 123.21318476378632,\n \"C44\": 122.5539660799438,\n \"Composition\": \"Al0.3030303 Mn0.24242424 Mo0.3030303 Ni0.15151515\",\n \"Crystal structure\": \"bcc\",\n \"EOS\": \"birchmurnaghan\",\n \"Formation enthalpy\": 0.006908657781423244,\n \"Group\": \"validation set\",\n \"Lattice constant\": 3.071993266900589,\n \"Poisson ratio\": 0.2805531626159049,\n \"Shear modulus\": 76.04585295223106,\n \"Total energy\": -3625.653696273156,\n \"Wigner-Seitz radius\": 2.8583348073656243,\n \"Youngs modulus\": 194.7615150036071}]\n\nfor d in list_of_dicts:\n # store the value of the Composition key in composition\n composition = d['Composition'] # is a string\n # split the composition string wherever spaces are present\n composition = composition.split()\n # use regular expressions to substitute digits and period by nothing\n # obtain the composition as a list of elements\n composition = [re.sub(r'\\d+|\\.', '', i) for i in composition]\n print(composition)\n\nOutputs:\n['Co', 'Fe', 'W', 'Zr']\n['Al', 'Co', 'Fe', 'Nb']\n['Al', 'Mn', 'Mo', 'Ni']\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074525837_python.txt
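Note: if the goal is to count how often each combination of elements occurs, a Counter over sorted element tuples works. A sketch that reuses the parsing from the answer, assuming list_of_dicts is defined as there:
import re
from collections import Counter

combos = Counter()
for d in list_of_dicts:
    # sort so that the same set of elements always maps to the same key
    elements = tuple(sorted(re.sub(r'\d+|\.', '', part)
                            for part in d['Composition'].split()))
    combos[elements] += 1

for combo, count in combos.most_common():
    print(' & '.join(combo), count)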
Q: How do I convert a struct_time output into DD/MM/YY, Hour:Minute:Second format?
I'm relatively uninitiated when it comes to Python, and I'm trying to figure out how to take an output I'm getting from a sensor into proper day, month, year and hour, minute, second format. An example of the output, which also includes a basic counter (the first output), and a timestamp (the third output) is shown below:
(305, struct_time(tm_year=2022, tm_mon=11, tm_mday=9, tm_hour=16, tm_min=42, tm_sec=8, tm_wday=2, tm_yday=313, tm_isdst=-1), 7.036)

I've seen a lot of questions and answers for this, but I'm left feeling kind of stumped on all of them because I'm not sure how to take the output I have (real_time, which gives a struct_time output) and turn it into this format. Any help (and understanding about my lack of fluency in this field) would be really appreciated!
A: time.strftime exists for exactly this purpose:
import time

now_local = time.localtime()

fmt = "%d/%m/%Y %H:%M:%S"
out = time.strftime(fmt, now_local)

print(out)

However, two words of warning:

time.struct_time is not "timezone aware". This will turn out to matter when you least expect it. Unless you are very sure that you know the timezone of the incoming data, and have the correct safeguards in your application and database for managing time zone information, use the datetime.datetime class instead.

D/M/Y date format can be ambiguous. Y-M-D format is substantially safer. It is not ambiguous in any widely-used locale, and it has the extra benefit that lexical ordering of Y-M-D strings is also a correct ordering of the dates that they represent. This format is laid out by RFC 3339 and has become widely accepted as the standard, correct formatting for datetime strings.

A: So as it turns out, I was able to find a solution after all. Essentially I just used this function:
def _format_datetime(datetime):
    return "{:02}/{:02}/{} {:02}:{:02}:{:02}".format(
        datetime.tm_mon,
        datetime.tm_mday,
        datetime.tm_year,
        datetime.tm_hour,
        datetime.tm_min,
        datetime.tm_sec,
    )

And then applied it to the struct_time output as such (with real_time being said output):
real_time = time.localtime()
current_time = time.monotonic()

formatted_time = _format_datetime(real_time)

Hopefully this helps other people using CircuitPython for similar purposes!
How do I convert a struct_time output into DD/MM/YY, Hour:Minute:Second format?
I'm relatively uninitiated when it comes to Python, and I'm trying to figure out how to take an output I'm getting from a sensor into proper day, month, year and hour, minute, second format. An example of the output, which also includes a basic counter (the first output), and a timestamp (the third output) is shown below:
(305, struct_time(tm_year=2022, tm_mon=11, tm_mday=9, tm_hour=16, tm_min=42, tm_sec=8, tm_wday=2, tm_yday=313, tm_isdst=-1), 7.036)

I've seen a lot of questions and answers for this, but I'm left feeling kind of stumped on all of them because I'm not sure how to take the output I have (real_time, which gives a struct_time output) and turn it into this format. Any help (and understanding about my lack of fluency in this field) would be really appreciated!
[ "time.strftime exists for exactly this purpose:\nimport time\n\nnow_local = time.localtime()\n\nfmt = \"%d/%m/%Y %H:%M:%S\"\nout = time.strftime(fmt, now_local)\n\nprint(out)\n\nHowever, two words of warning:\n\ntime.struct_time is not \"timezone aware\". This will turn out to matter when you least expect it. Unless you are very sure that you know the timezone of the incoming data, and have the correct safeguards in your application and database for managing time zone iformation, use the datetime.datetime class instead.\n\nD/M/Y date format can be ambiguous. Y-M-D format is substantially safer. It is not ambiguous in any widely-used locale, and it has the extra benefit that lexical ordering of Y-M-D strings is also a correct ordering of the dates that they represent. This format is laid out by RFC 3339 and has become widely accepted as the standard, correct formatting for datetime strings.\n\n\n", "So as it turns out, I was able to find a solution after all. Essentially I just used this function:\n def _format_datetime(datetime):\n return \"{:02}/{:02}/{} {:02}:{:02}:{:02}\".format(\n datetime.tm_mon,\n datetime.tm_mday,\n datetime.tm_year,\n datetime.tm_hour,\n datetime.tm_min,\n datetime.tm_sec,\n )\n\nAnd then applied it to the struct_time output as such (with real_time being said output):\n real_time = time.localtime()\n current_time = time.monotonic()\n\n formatted_time = _format_datetime(real_time)\n\nHopefully this helps other people using CircuitPython for similar purposes!\n" ]
[ 1, 1 ]
[]
[]
[ "python", "sensors", "time" ]
stackoverflow_0074467436_python_sensors_time.txt
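Note: the first answer recommends datetime for timezone safety; a sketch converting the sensor's struct_time (real_time, as named in the question) into a datetime and formatting it, assuming the sensor's clock is local time:
import time
from datetime import datetime

real_time = time.localtime()          # stand-in for the sensor's struct_time
dt = datetime(*real_time[:6])         # first 6 fields: year, month, day, hour, minute, second
print(dt.strftime("%d/%m/%Y %H:%M:%S"))
print(dt.isoformat())                 # the safer Y-M-D form the answer recommends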
Q: Visualising the last layer node embeddings of a model in torch geometric
I'm doing my first graph convolutional neural network project with torch_geometric. I want to visualize the last layer node embeddings of my model and don't know how I should get them. I trained my model on the CiteSeer dataset. You can get the full dataset as easily as this:
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures

dataset = Planetoid(root="data/Planetoid", name='CiteSeer', transform=NormalizeFeatures())

My model is a simple two-layered model as this:
class GraphClassifier(torch.nn.Module):
    def __init__(self, dataset, hidden_dim):
        super(GraphClassifier, self).__init__()
        self.conv1 = GCNConv(dataset.num_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, dataset.num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return F.log_softmax(x, dim=1)

If you print my model you will get this:
model = GraphClassifier(dataset, 64)
print(model)

>>> GraphClassifier(
  (conv1): GCNConv(3703, 64)
  (conv2): GCNConv(64, 6)
)

My model is trained successfully. I only want to visualize its last-layer node embeddings. To visualize that I have this function to use:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import torch

# emb: (nNodes, hidden_dim)
# node_type: (nNodes,). Entries are torch.int64 ranged from 0 to num_class - 1
def visualize(emb: torch.tensor, node_type: torch.tensor):
    z = TSNE(n_components=2).fit_transform(emb.detach().cpu().numpy())
    plt.figure(figsize=(10,10))
    plt.scatter(z[:, 0], z[:, 1], s=70, c=node_type, cmap="Set2")
    plt.show()

I don't know how I should extract emb and node_type from my model to give to the visualize function. emb is the last layer of node embeddings of the model. How can I get these from my model?
A: It is solved by changing the model to this:
class GraphClassifier(torch.nn.Module):
    def __init__(self, dataset, hidden_dim):
        super(GraphClassifier, self).__init__()
        self.conv1 = GCNConv(dataset.num_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, dataset.num_classes)

    def forward(self, data, do_visualize=False):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        if do_visualize:  # NEW LINE
            visualize(x, data.y)  # NEW LINE
        return F.log_softmax(x, dim=1)

Now if you call the forward function with do_visualize=True it will visualize, like this:
model = GraphClassifier(dataset, hidden_dim)
model.to(device)
model(dataset[0].to(device), do_visualize=True)
Visualising the last layer node embeddings of a model in torch geometric
I'm doing my first graph convolutional neural network project with torch_geometric. I want to visualize the last layer node embeddings of my model and don't know how I should get it. I trained my model on the CiteSeer dataset. You can get the full dataset as easily as this:
from torch_geometric.datasets import Planetoid
from torch_geometric.transforms import NormalizeFeatures

dataset = Planetoid(root="data/Planetoid", name='CiteSeer', transform=NormalizeFeatures())

My model is a simple two-layered model as this:
class GraphClassifier(torch.nn.Module):
    def __init__(self, dataset, hidden_dim):
        super(GraphClassifier, self).__init__()
        self.conv1 = GCNConv(dataset.num_features, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, dataset.num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        return F.log_softmax(x, dim=1)

If you print my model you will get this:
model = GraphClassifier(dataset, 64)
print(model)

>>> GraphClassifier(
  (conv1): GCNConv(3703, 64)
  (conv2): GCNConv(64, 6)
)

My model is trained successfully. I only want to visualize its last-layer node embeddings. To visualize that I have this function to use:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import torch

# emb: (nNodes, hidden_dim)
# node_type: (nNodes,). Entries are torch.int64 ranged from 0 to num_class - 1
def visualize(emb: torch.tensor, node_type: torch.tensor):
    z = TSNE(n_components=2).fit_transform(emb.detach().cpu().numpy())
    plt.figure(figsize=(10,10))
    plt.scatter(z[:, 0], z[:, 1], s=70, c=node_type, cmap="Set2")
    plt.show()

I don't know how I should extract emb and node_type from my model to give to the visualize function. emb is the last layer of node embeddings of the model. How can I get these from my model?
[ "It is solve by changing the model to this:\nclass GraphClassifier(torch.nn.Module):\n def __init__(self, dataset, hidden_dim):\n super(GraphClassifier, self).__init__()\n self.conv1 = GCNConv(dataset.num_features, hidden_dim)\n self.conv2 = GCNConv(hidden_dim, dataset.num_classes)\n\n def forward(self, data, do_visualize=False):\n x, edge_index = data.x, data.edge_index\n x = F.relu(self.conv1(x, edge_index))\n x = F.relu(self.conv2(x, edge_index))\n if do_visualize: # NEW LINE\n visualize(x, data.y) # NEW LINE\n return F.log_softmax(x, dim=1)\n\nNow if you call the forward function with do_visualize=Ture it will visualize. like this:\nmodel = GraphClassifier(dataset, hidden_dim)\nmodel.to(device)\nmodel(dataset[0].to(device), do_visualize=True)\n\n" ]
[ 0 ]
[]
[]
[ "graph_neural_network", "python", "pytorch", "pytorch_geometric" ]
stackoverflow_0074498230_graph_neural_network_python_pytorch_pytorch_geometric.txt
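Note: a forward hook is an alternative that avoids editing forward() at all; it captures conv2's raw output (before the final relu/log_softmax), which is a slightly different embedding than the answer visualizes. A sketch assuming the trained model, dataset and visualize function from the question:
embeddings = {}

def hook(module, inputs, output):
    embeddings['last'] = output.detach()

handle = model.conv2.register_forward_hook(hook)
model(dataset[0])  # one forward pass fills embeddings['last']
handle.remove()

visualize(embeddings['last'], dataset[0].y)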
Q: My threading is not exactly working like I want it to
OK, so I am doing a school project for which I am using threads to go from a phone home screen to a chat app, and I have used threads in both applications.
import pygame as pyg, sys, cv2, random, os, handDetector, time, threading
import pywhatkit, pyjokes, pyttsx3 as pyt
import speech_recognition as sr, chatApp, server1

class homeScreen:
    def __init__(self):
        self.width, self.height = 400, 600
        self.window = pyg.display.set_mode((self.width, self.height))
        self.webcam = cv2.VideoCapture(0)
        self.hand = handDetector.handDetector(maxhands=1)
        self.run = True
        self.image = pyg.image.load(f"/Users/surya/Desktop/SuryaFolder/SuryaAssets/nature/Image_10.jpg")
        self.googleWidget = pyg.image.load("SuryaAssets/googleWidget.png")
        self.googleWidget = pyg.transform.smoothscale(self.googleWidget, (self.width-50, self.height*(self.width/self.googleWidget.get_width())))
        self.image = pyg.transform.smoothscale(self.image, (self.width, self.height))
        self.chatsappIcon = pyg.image.load("SuryaAssets/download-16.jpg")
        self.chatsappIcon = pyg.transform.smoothscale(self.chatsappIcon, (40, 40))
        self.pTime = 0
        self.microphoneRect = pyg.Rect(320, 52, 25, 20)
        self.x = y = 0
        self.select = False
        self.googleAssistant = False
        self.c = ''
        self.a1 = False
        self.run1 = False
        self.engine = pyt.Engine()
        self.a = False
        self.listener = sr.Recognizer()
        self.run1 = False
        self.command = ''
        self.newserver = False
        self.d = threading.Thread(target=self.mainScreen,).start()
        self.b = threading.Thread(target=self.function,).start()
        self.g = threading.Thread(target=(server1.function(),)).start()

    def function(self):
        if threading.active_count() != 1:
            while self.run:
                try:
                    with sr.Microphone(len(sr.Microphone.list_microphone_names())-1) as source:
                        self.listener.adjust_for_ambient_noise(source)
                        print("Listening....")
                        audio = self.listener.listen(source, None, 2)
                        self.command = self.listener.recognize_google(audio).lower()
                    if self.command.lower() == 'ok google' or self.command.lower() == 'hey google' or self.command.lower() == 'google':
                        self.a = True
                    if self.a:
                        if 'bye' in self.command or 'goodbye' in self.command or 'kill yourself' in self.command:
                            a = False
                        if 'tell' in self.command and 'joke' in self.command and "don't" not in self.command and 'do not' not in self.command:
                            self.engine.say(self.command)
                            self.engine.endLoop()
                            #engine.endLoop()
                        if 'play' in self.command or 'youtube' in self.command:
                            self.command = self.command.replace('play', '')
                            self.command = self.command.replace('youtube', '')
                            if self.command.strip() != '':
                                pywhatkit.playonyt(self.command)
                            else:
                                pywhatkit.playonyt('never gonna give you up')
                        if 'open' in self.command or 'run' in self.command:
                            self.command = self.command.replace('open', '')
                            self.command = self.command.replace('run', '')
                            if 'whatsapp' in self.command or "chatsappp" in self.command:
                                if 'in a new group' in self.command:
                                    self.newserver = True
                                else:
                                    self.newserver = False
                                self.run1 = True
                                #threading.Thread(target = (chatApp.run(), )).start
                                #threading.Condition.wait()
                except Exception as s:
                    print(s)

    def mainScreen(self):
        #self.g.join()
        while self.run:
            #global a,command
            _, frame = self.webcam.read()
            frame = cv2.flip(frame, 1)
            self.window.blit(self.image, (0, 0))
            self.hand.findHands(frame)
            lmList = self.hand.findPosition(frame)
            fingerup = self.hand.fingersUp()
            self.window.blit(self.googleWidget, (20,0))
            if lmList:
                if fingerup == [0, 1, 0, 0, 0]:
                    x = lmList[8][1]/self.webcam.get(3)
                    y = lmList[8][2]/self.webcam.get(4)
                    self.select = False
                    pyg.draw.circle(self.window, (0, 0, 0), ((self.width+100)*x, (self.height+100)*y), 5)
                elif fingerup == [0, 1, 1, 0, 0]:
                    x = lmList[8][1]/self.webcam.get(3)
                    y = lmList[8][2]/self.webcam.get(4)
                    self.select = True
                    pyg.draw.circle(self.window, (255, 255, 255), ((self.width+100)*x, (self.height+100)*y), 8)
            self.cTime = time.time()
            self.fps = 1 / (self.cTime-self.pTime)
            self.pTime = self.cTime
            for event in pyg.event.get():
                if event.type == pyg.QUIT:
                    self.run = False
                    pyg.quit()
                    sys.exit()
            if self.select:
                if self.microphoneRect.collidepoint((self.width+100)*x, (self.height+100)*y):
                    a1 = True
                    a = True
                    self.select = False
            self.window.blit(self.chatsappIcon, (100, self.height//2))
            self.a1 = self.a
            if self.run1:
                print("hello")
                chatApp.run()
            if self.a or self.a1:
                pyg.draw.rect(self.window, (255, 255, 255), (0, self.height/2, self.width, self.height/2), 0, 5)
                google_logo = pyg.image.load('SuryaAssets/Google_Assistant_logo.png')
                google_logo = pyg.transform.scale(google_logo, (100, 100))
                google_logo_rect = google_logo.get_rect()
                google_logo_rect.center = (self.width/2, self.height*3/4)
                self.window.blit(google_logo, google_logo_rect)
            buttons = pyg.Surface((self.image.get_width(), 50))
            buttons.set_alpha(100)
            buttons.fill((255, 255, 255))
            self.window.blit(buttons, (0, self.image.get_height()-50))
            pyg.display.update()
            #cv2.imshow("window", frame)

def main():
    phone = homeScreen()
    #phone.mainScreen()

if __name__ == "__main__":
    main()

Individually both applications work like a charm, but when I try opening a thread to start server1.function() it gives this error:
2022-11-21 22:34:30.493 Python[3275:87431] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'nextEventMatchingMask should only be called from the Main Thread!'
*** First throw call stack:
(
    0   CoreFoundation   0x00007fff204b4beb __exceptionPreprocess + 242
    1   libobjc.A.dylib   0x00007fff201edd92 objc_exception_throw + 48
    2   AppKit   0x00007fff22c40eb6 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 4389
    3   libSDL2-2.0.0.dylib   0x0000000104de18bd Cocoa_PumpEvents + 125
    4   libSDL2-2.0.0.dylib   0x0000000104d212c1 SDL_PumpEvents_REAL + 33
    5   event.cpython-310-darwin.so   0x0000000108f56910 pg_event_get + 160
    6   Python   0x0000000104611244 cfunction_call + 52
    7   Python   0x00000001045a9fb4 _PyObject_MakeTpCall + 132
    8   Python   0x00000001046ebb53 call_function + 371
    9   Python   0x00000001046e32f8 _PyEval_EvalFrameDefault + 28872
    10  Python   0x00000001046da8bf _PyEval_Vector + 383
    11  Python   0x00000001045ae1b1 method_vectorcall + 481
    12  Python   0x00000001046dcda7 _PyEval_EvalFrameDefault + 2935
    13  Python   0x00000001046da8bf _PyEval_Vector + 383
    14  Python   0x00000001046eba8f call_function + 175
    15  Python   0x00000001046e1d8e _PyEval_EvalFrameDefault + 23390
    16  Python   0x00000001046da8bf _PyEval_Vector + 383
    17  Python   0x00000001046eba8f call_function + 175
    18  Python   0x00000001046e1d8e _PyEval_EvalFrameDefault + 23390
    19  Python   0x00000001046da8bf _PyEval_Vector + 383
    20  Python   0x00000001045ae1b1 method_vectorcall + 481
    21  Python   0x00000001047e2516 thread_run + 198
    22  Python   0x00000001047655c4 pythread_wrapper + 36
    23  libsystem_pthread.dylib   0x00007fff203428fc _pthread_start + 224
    24  libsystem_pthread.dylib   0x00007fff2033e443 thread_start + 15
)
libc++abi: terminating with uncaught exception of type NSException
zsh: abort      /usr/local/bin/python3 /Users/surya/Desktop/SuryaFolder/phoneHomeScreen.py

So I tried putting default threads in, in the hope of just starting the server beforehand and adding a chat-app client later on in the program, but the main thread itself is not running. I know it is stuck in the c, addr = s.accept() call, but I am not able to make the thread continue without getting a client.
A: I don't know every version of Python that's out there, but in the version I'm running (3.9.6), this doesn't do anything useful:
threading.Thread(target=(server1.function(),)).start()

That statement is the same as if you did this:
temp_a = server1.function()  # call function()
temp_b = (temp_a,)           # create a new tuple
temp_c = threading.Thread(target=temp_b)
temp_c.start()

I don't know what server1.function() returns, but the Python 3.9.6 that I'm running does not appear to allow me to use any tuple as the target of a Thread. It lets me construct a new Thread object, but then it throws an exception when I try to start it. Yours isn't throwing the same exception, but it appears to be throwing from within start().
My threading is not exactly working like I want it to
OK, so I am doing a school project for which I am using threads to go from a phone home screen to a chat app, and I have used threads in both applications.
import pygame as pyg, sys, cv2, random, os, handDetector, time, threading
import pywhatkit, pyjokes, pyttsx3 as pyt
import speech_recognition as sr, chatApp, server1

class homeScreen:
    def __init__(self):
        self.width, self.height = 400, 600
        self.window = pyg.display.set_mode((self.width, self.height))
        self.webcam = cv2.VideoCapture(0)
        self.hand = handDetector.handDetector(maxhands=1)
        self.run = True
        self.image = pyg.image.load(f"/Users/surya/Desktop/SuryaFolder/SuryaAssets/nature/Image_10.jpg")
        self.googleWidget = pyg.image.load("SuryaAssets/googleWidget.png")
        self.googleWidget = pyg.transform.smoothscale(self.googleWidget, (self.width-50, self.height*(self.width/self.googleWidget.get_width())))
        self.image = pyg.transform.smoothscale(self.image, (self.width, self.height))
        self.chatsappIcon = pyg.image.load("SuryaAssets/download-16.jpg")
        self.chatsappIcon = pyg.transform.smoothscale(self.chatsappIcon, (40, 40))
        self.pTime = 0
        self.microphoneRect = pyg.Rect(320, 52, 25, 20)
        self.x = y = 0
        self.select = False
        self.googleAssistant = False
        self.c = ''
        self.a1 = False
        self.run1 = False
        self.engine = pyt.Engine()
        self.a = False
        self.listener = sr.Recognizer()
        self.run1 = False
        self.command = ''
        self.newserver = False
        self.d = threading.Thread(target=self.mainScreen,).start()
        self.b = threading.Thread(target=self.function,).start()
        self.g = threading.Thread(target=(server1.function(),)).start()

    def function(self):
        if threading.active_count() != 1:
            while self.run:
                try:
                    with sr.Microphone(len(sr.Microphone.list_microphone_names())-1) as source:
                        self.listener.adjust_for_ambient_noise(source)
                        print("Listening....")
                        audio = self.listener.listen(source, None, 2)
                        self.command = self.listener.recognize_google(audio).lower()
                    if self.command.lower() == 'ok google' or self.command.lower() == 'hey google' or self.command.lower() == 'google':
                        self.a = True
                    if self.a:
                        if 'bye' in self.command or 'goodbye' in self.command or 'kill yourself' in self.command:
                            a = False
                        if 'tell' in self.command and 'joke' in self.command and "don't" not in self.command and 'do not' not in self.command:
                            self.engine.say(self.command)
                            self.engine.endLoop()
                            #engine.endLoop()
                        if 'play' in self.command or 'youtube' in self.command:
                            self.command = self.command.replace('play', '')
                            self.command = self.command.replace('youtube', '')
                            if self.command.strip() != '':
                                pywhatkit.playonyt(self.command)
                            else:
                                pywhatkit.playonyt('never gonna give you up')
                        if 'open' in self.command or 'run' in self.command:
                            self.command = self.command.replace('open', '')
                            self.command = self.command.replace('run', '')
                            if 'whatsapp' in self.command or "chatsappp" in self.command:
                                if 'in a new group' in self.command:
                                    self.newserver = True
                                else:
                                    self.newserver = False
                                self.run1 = True
                                #threading.Thread(target = (chatApp.run(), )).start
                                #threading.Condition.wait()
                except Exception as s:
                    print(s)

    def mainScreen(self):
        #self.g.join()
        while self.run:
            #global a,command
            _, frame = self.webcam.read()
            frame = cv2.flip(frame, 1)
            self.window.blit(self.image, (0, 0))
            self.hand.findHands(frame)
            lmList = self.hand.findPosition(frame)
            fingerup = self.hand.fingersUp()
            self.window.blit(self.googleWidget, (20,0))
            if lmList:
                if fingerup == [0, 1, 0, 0, 0]:
                    x = lmList[8][1]/self.webcam.get(3)
                    y = lmList[8][2]/self.webcam.get(4)
                    self.select = False
                    pyg.draw.circle(self.window, (0, 0, 0), ((self.width+100)*x, (self.height+100)*y), 5)
                elif fingerup == [0, 1, 1, 0, 0]:
                    x = lmList[8][1]/self.webcam.get(3)
                    y = lmList[8][2]/self.webcam.get(4)
                    self.select = True
                    pyg.draw.circle(self.window, (255, 255, 255), ((self.width+100)*x, (self.height+100)*y), 8)
            self.cTime = time.time()
            self.fps = 1 / (self.cTime-self.pTime)
            self.pTime = self.cTime
            for event in pyg.event.get():
                if event.type == pyg.QUIT:
                    self.run = False
                    pyg.quit()
                    sys.exit()
            if self.select:
                if self.microphoneRect.collidepoint((self.width+100)*x, (self.height+100)*y):
                    a1 = True
                    a = True
                    self.select = False
            self.window.blit(self.chatsappIcon, (100, self.height//2))
            self.a1 = self.a
            if self.run1:
                print("hello")
                chatApp.run()
            if self.a or self.a1:
                pyg.draw.rect(self.window, (255, 255, 255), (0, self.height/2, self.width, self.height/2), 0, 5)
                google_logo = pyg.image.load('SuryaAssets/Google_Assistant_logo.png')
                google_logo = pyg.transform.scale(google_logo, (100, 100))
                google_logo_rect = google_logo.get_rect()
                google_logo_rect.center = (self.width/2, self.height*3/4)
                self.window.blit(google_logo, google_logo_rect)
            buttons = pyg.Surface((self.image.get_width(), 50))
            buttons.set_alpha(100)
            buttons.fill((255, 255, 255))
            self.window.blit(buttons, (0, self.image.get_height()-50))
            pyg.display.update()
            #cv2.imshow("window", frame)

def main():
    phone = homeScreen()
    #phone.mainScreen()

if __name__ == "__main__":
    main()

Individually both applications work like a charm, but when I try opening a thread to start server1.function() it gives this error:
2022-11-21 22:34:30.493 Python[3275:87431] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'nextEventMatchingMask should only be called from the Main Thread!'
*** First throw call stack:
(
    0   CoreFoundation   0x00007fff204b4beb __exceptionPreprocess + 242
    1   libobjc.A.dylib   0x00007fff201edd92 objc_exception_throw + 48
    2   AppKit   0x00007fff22c40eb6 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 4389
    3   libSDL2-2.0.0.dylib   0x0000000104de18bd Cocoa_PumpEvents + 125
    4   libSDL2-2.0.0.dylib   0x0000000104d212c1 SDL_PumpEvents_REAL + 33
    5   event.cpython-310-darwin.so   0x0000000108f56910 pg_event_get + 160
    6   Python   0x0000000104611244 cfunction_call + 52
    7   Python   0x00000001045a9fb4 _PyObject_MakeTpCall + 132
    8   Python   0x00000001046ebb53 call_function + 371
    9   Python   0x00000001046e32f8 _PyEval_EvalFrameDefault + 28872
    10  Python   0x00000001046da8bf _PyEval_Vector + 383
    11  Python   0x00000001045ae1b1 method_vectorcall + 481
    12  Python   0x00000001046dcda7 _PyEval_EvalFrameDefault + 2935
    13  Python   0x00000001046da8bf _PyEval_Vector + 383
    14  Python   0x00000001046eba8f call_function + 175
    15  Python   0x00000001046e1d8e _PyEval_EvalFrameDefault + 23390
    16  Python   0x00000001046da8bf _PyEval_Vector + 383
    17  Python   0x00000001046eba8f call_function + 175
    18  Python   0x00000001046e1d8e _PyEval_EvalFrameDefault + 23390
    19  Python   0x00000001046da8bf _PyEval_Vector + 383
    20  Python   0x00000001045ae1b1 method_vectorcall + 481
    21  Python   0x00000001047e2516 thread_run + 198
    22  Python   0x00000001047655c4 pythread_wrapper + 36
    23  libsystem_pthread.dylib   0x00007fff203428fc _pthread_start + 224
    24  libsystem_pthread.dylib   0x00007fff2033e443 thread_start + 15
)
libc++abi: terminating with uncaught exception of type NSException
zsh: abort      /usr/local/bin/python3 /Users/surya/Desktop/SuryaFolder/phoneHomeScreen.py

So I tried putting default threads in, in the hope of just starting the server beforehand and adding a chat-app client later on in the program, but the main thread itself is not running. I know it is stuck in the c, addr = s.accept() call, but I am not able to make the thread continue without getting a client.
[ "I don't know every version of Python that's out there, but in the version I'm running (3.9.6), this doesn't do anything useful:\nthreading.Thread(target=(server1.function(),)).start()\n\nThat statement is the same as if you did this:\ntemp_a = server1.function() # call function()\ntemp_b = (temp_a,) # create a new tuple\ntemp_c = threading.Thread(target=temp_b)\ntemp_c.start()\n\nI don't know what server1.function() returns, but the Python 3.9.6 that I'm running does not appear to allow me to use any tuple as the target of a Thread. It lets me construct a new Thread object, but then it throws an exception when I try to it. Yours isn't throwing the same exception, but it appears to be throwing from within start().\n" ]
[ 0 ]
[]
[]
[ "multithreading", "pygame", "python" ]
stackoverflow_0074522649_multithreading_pygame_python.txt
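Note: a sketch of the corrected thread creation the answer points to; pass the callable itself rather than the result of calling it (server1 and chatApp are the question's own modules):
import threading

# pass the function, not the result of calling it
server_thread = threading.Thread(target=server1.function, daemon=True)
server_thread.start()

# with arguments, use args= instead of calling:
# threading.Thread(target=chatApp.run, args=(), daemon=True).start()

Separately, the NSInternalInconsistencyException in the traceback comes from pumping pygame events off the main thread; on macOS the UI loop must stay on the main thread, which the question's commented-out call already hints at:
phone = homeScreen()   # start only the worker threads in __init__
phone.mainScreen()     # run the pygame loop here, on the main thread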
Q: How to print a list value one below another
I need to sort a ranking of points in descending order. The users and points are inside lista_ranking, which includes the following code:
[{'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '123', 'nombre': 'Gon', 'apellido': 'Henderson', 'fecha': '(2003, 3, 12)', 'puntaje': 5}, 'goles_local': 1, 'goles_visitante': 0},
 {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '1234', 'nombre': 'George', 'apellido': 'Stev', 'fecha': '(2003, 3, 12)', 'puntaje': 8}, 'goles_local': 0, 'goles_visitante': 1}]

With the code
ranking_high_to_low = sorted([(numeros['usuario']['puntaje'],
                               numeros['usuario']['nombre'],
                               numeros['usuario']['apellido']) for numeros in lista_ranking], reverse=True)
print(ranking_high_to_low)

it prints the ranking from highest to lowest like this:
[(8, 'George', 'Stev'), (5, 'Gon', 'Henderson')]

Which for loop should I use in order for it to print the ranking as follows?
George Stev 8
Gon Henderson 5

UPDATE: @arsho When I use your updated code:
def ranking():
    ranking_high_to_low = sorted([(numeros['usuario']['puntaje'],
                                   numeros['usuario']['nombre'],
                                   numeros['usuario']['apellido']) for numeros in lista_ranking], reverse=True)
    players = {}
    for info in ranking_high_to_low:
        player_name = ' '.join(info[1:])
        players[player_name] = players.get(player_name, 0) + info[0]
    for player, score in sorted(players.items(), key=lambda x: x[1], reverse=True):
        print(f"{player} {score}")
    for info in ranking_ordenado:
        print(f"{' '.join(info[1:])} {info[0]}")

The output of this is:
Juan Ha 8
Santi Stev 8
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 8
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 16
Gon Va 10
Gon Va 10
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
A: You are close to the solution. You can use a loop and join the part of the name to print the rank of the users.
import datetime

lista_ranking = [
    {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),
                 'hora': '13:00hs', 'equipo_local': 'Catar',
                 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',
                 'goles_local': 0, 'goles_visitante': 1},
     'usuario': {'cedula': '123', 'nombre': 'Gon',
                 'apellido': 'Henderson',
                 'fecha': '(2003, 3, 12)', 'puntaje': 5},
     'goles_local': 1,
     'goles_visitante': 0}, {
        'partido': {'codigo': 'AAA',
                    'fecha': datetime.date(2022, 11, 20),
                    'hora': '13:00hs', 'equipo_local': 'Catar',
                    'equipo_visitante': 'Ecuador',
                    'estado': 'Finalizado',
                    'goles_local': 0, 'goles_visitante': 1},
        'usuario': {'cedula': '1234', 'nombre': 'George',
                    'apellido': 'Stev', 'fecha': '(2003, 3, 12)',
                    'puntaje': 8}, 'goles_local': 0,
        'goles_visitante': 1}]
ranking_high_to_low = sorted([(numeros['usuario']['puntaje'],
                               numeros['usuario']['nombre'],
                               numeros['usuario']['apellido']) for numeros in
                              lista_ranking], reverse=True)
for info in ranking_high_to_low:
    print(f"{' '.join(info[1:])} {info[0]}")

Output:
George Stev 8
Gon Henderson 5

The benefit of the join method here is how it can print the names with multiple parts (first name, middle name, last name).
Update: If you want to store unique players and sort them by their total scores, you need to use a dictionary. Then sort the dictionary in reverse order by the scores.
import datetime

lista_ranking = [
    {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),
                 'hora': '13:00hs', 'equipo_local': 'Catar',
                 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',
                 'goles_local': 0, 'goles_visitante': 1},
     'usuario': {'cedula': '123', 'nombre': 'Gon',
                 'apellido': 'Henderson',
                 'fecha': '(2003, 3, 12)', 'puntaje': 5},
     'goles_local': 1,
     'goles_visitante': 0},
    {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),
                 'hora': '13:00hs', 'equipo_local': 'Catar',
                 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',
                 'goles_local': 0, 'goles_visitante': 1},
     'usuario': {'cedula': '123', 'nombre': 'Gon',
                 'apellido': 'Henderson',
                 'fecha': '(2003, 3, 12)', 'puntaje': 5},
     'goles_local': 1,
     'goles_visitante': 0}, {
        'partido': {'codigo': 'AAA',
                    'fecha': datetime.date(2022, 11, 20),
                    'hora': '13:00hs', 'equipo_local': 'Catar',
                    'equipo_visitante': 'Ecuador',
                    'estado': 'Finalizado',
                    'goles_local': 0, 'goles_visitante': 1},
        'usuario': {'cedula': '1234', 'nombre': 'George',
                    'apellido': 'Stev', 'fecha': '(2003, 3, 12)',
                    'puntaje': 8}, 'goles_local': 0,
        'goles_visitante': 1}]
ranking_high_to_low = [(numeros['usuario']['puntaje'],
                        numeros['usuario']['nombre'],
                        numeros['usuario']['apellido']) for numeros in
                       lista_ranking]

players = {}
for info in ranking_high_to_low:
    player_name = ' '.join(info[1:])
    players[player_name] = players.get(player_name, 0) + info[0]

for player, score in sorted(players.items(), key=lambda x: x[1], reverse=True):
    print(f"{player} {score}")

Output:
Gon Henderson 10
George Stev 8

References:
Blog article on ordering dictionary elements by values
A: I think that this should do the trick:
for (num, first, last) in ranking_high_to_low:
    print("{} {} {}".format(first, last, num))
How to print a list value one below another
I need to sort a ranking of points in descending order. The users and points are inside lista_ranking, which includes the following code:
[{'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '123', 'nombre': 'Gon', 'apellido': 'Henderson', 'fecha': '(2003, 3, 12)', 'puntaje': 5}, 'goles_local': 1, 'goles_visitante': 0},
 {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20), 'hora': '13:00hs', 'equipo_local': 'Catar', 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado', 'goles_local': 0, 'goles_visitante': 1}, 'usuario': {'cedula': '1234', 'nombre': 'George', 'apellido': 'Stev', 'fecha': '(2003, 3, 12)', 'puntaje': 8}, 'goles_local': 0, 'goles_visitante': 1}]

With the code
ranking_high_to_low = sorted([(numeros['usuario']['puntaje'],
                               numeros['usuario']['nombre'],
                               numeros['usuario']['apellido']) for numeros in lista_ranking], reverse=True)
print(ranking_high_to_low)

it prints the ranking from highest to lowest like this:
[(8, 'George', 'Stev'), (5, 'Gon', 'Henderson')]

Which for loop should I use in order for it to print the ranking as follows?
George Stev 8
Gon Henderson 5

UPDATE: @arsho When I use your updated code:
def ranking():
    ranking_high_to_low = sorted([(numeros['usuario']['puntaje'],
                                   numeros['usuario']['nombre'],
                                   numeros['usuario']['apellido']) for numeros in lista_ranking], reverse=True)
    players = {}
    for info in ranking_high_to_low:
        player_name = ' '.join(info[1:])
        players[player_name] = players.get(player_name, 0) + info[0]
    for player, score in sorted(players.items(), key=lambda x: x[1], reverse=True):
        print(f"{player} {score}")
    for info in ranking_ordenado:
        print(f"{' '.join(info[1:])} {info[0]}")

The output of this is:
Juan Ha 8
Santi Stev 8
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 8
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 16
Gon Va 10
Gon Va 10
Gon Va 20
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Santi Stev 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
Juan Ha 16
Gon Va 10
Gon Va 10
Santi Stev 8
Santi Stev 8
Juan Ha 8
Juan Ha 8
[ "You are close to the solution. You can use a loop and join the part of the name to print the rank of the users.\nimport datetime\n\nlista_ranking = [\n {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),\n 'hora': '13:00hs', 'equipo_local': 'Catar',\n 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',\n 'goles_local': 0, 'goles_visitante': 1},\n 'usuario': {'cedula': '123', 'nombre': 'Gon',\n 'apellido': 'Henderson',\n 'fecha': '(2003, 3, 12)', 'puntaje': 5},\n 'goles_local': 1,\n 'goles_visitante': 0}, {\n 'partido': {'codigo': 'AAA',\n 'fecha': datetime.date(2022, 11, 20),\n 'hora': '13:00hs', 'equipo_local': 'Catar',\n 'equipo_visitante': 'Ecuador',\n 'estado': 'Finalizado',\n 'goles_local': 0, 'goles_visitante': 1},\n 'usuario': {'cedula': '1234', 'nombre': 'George',\n 'apellido': 'Stev', 'fecha': '(2003, 3, 12)',\n 'puntaje': 8}, 'goles_local': 0,\n 'goles_visitante': 1}]\nranking_high_to_low = sorted([(numeros['usuario']['puntaje'],\n numeros['usuario']['nombre'],\n numeros['usuario']['apellido']) for numeros in\n lista_ranking], reverse=True)\nfor info in ranking_high_to_low:\n print(f\"{' '.join(info[1:])} {info[0]}\")\n\nOutput:\nGeorge Stev 8\nGon Henderson 5\n\nThe benefit of the join method here is how it can print the names with multiple parts (first name, middle name, last name).\nUpdate:\nIf you want to store unique players and sort them by their total scores, you need to use a dictionary. Then sort the dictionary in reverse order by the scores.\nimport datetime\n\nlista_ranking = [\n {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),\n 'hora': '13:00hs', 'equipo_local': 'Catar',\n 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',\n 'goles_local': 0, 'goles_visitante': 1},\n 'usuario': {'cedula': '123', 'nombre': 'Gon',\n 'apellido': 'Henderson',\n 'fecha': '(2003, 3, 12)', 'puntaje': 5},\n 'goles_local': 1,\n 'goles_visitante': 0},\n {'partido': {'codigo': 'AAA', 'fecha': datetime.date(2022, 11, 20),\n 'hora': '13:00hs', 'equipo_local': 'Catar',\n 'equipo_visitante': 'Ecuador', 'estado': 'Finalizado',\n 'goles_local': 0, 'goles_visitante': 1},\n 'usuario': {'cedula': '123', 'nombre': 'Gon',\n 'apellido': 'Henderson',\n 'fecha': '(2003, 3, 12)', 'puntaje': 5},\n 'goles_local': 1,\n 'goles_visitante': 0}\n , {\n 'partido': {'codigo': 'AAA',\n 'fecha': datetime.date(2022, 11, 20),\n 'hora': '13:00hs', 'equipo_local': 'Catar',\n 'equipo_visitante': 'Ecuador',\n 'estado': 'Finalizado',\n 'goles_local': 0, 'goles_visitante': 1},\n 'usuario': {'cedula': '1234', 'nombre': 'George',\n 'apellido': 'Stev', 'fecha': '(2003, 3, 12)',\n 'puntaje': 8}, 'goles_local': 0,\n 'goles_visitante': 1}]\nranking_high_to_low = [(numeros['usuario']['puntaje'],\n numeros['usuario']['nombre'],\n numeros['usuario']['apellido']) for numeros in\n lista_ranking]\n\nplayers = {}\nfor info in ranking_high_to_low:\n player_name = ' '.join(info[1:])\n players[player_name] = players.get(player_name, 0) + info[0]\n\nfor player, score in sorted(players.items(), key=lambda x: x[1], reverse=True):\n print(f\"{player} {score}\")\n\nOutput:\nGon Henderson 10\nGeorge Stev 8\n\nReferences:\n\nBlog article on ordering dictionary elements by values\n\n", "I think that this should do the trick:\n for (num, first, last) in ranking_high_to_low:\n print(\"{} {} {}\".format(first, last, num))\n\n" ]
[ 1, 0 ]
[]
[]
[ "dictionary", "list", "python", "sorting" ]
stackoverflow_0074526183_dictionary_list_python_sorting.txt
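A related sketch, not from the original answers above: the manual players.get(name, 0) + score bookkeeping can be replaced with collections.Counter, which treats missing keys as 0 and already knows how to sort by value. It assumes lista_ranking has the list-of-dicts shape shown in the question.

from collections import Counter

scores = Counter()
for entry in lista_ranking:
    user = entry['usuario']
    # Missing keys default to 0, so no .get(..., 0) dance is needed.
    scores[f"{user['nombre']} {user['apellido']}"] += user['puntaje']

# most_common() returns (name, total) pairs sorted from highest to lowest.
for name, total in scores.most_common():
    print(f"{name} {total}")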
Q: How does merging two pandas dataframes work using the assignment operation? The phenomenon that I am not able to understand is how pandas is able to join two dataframes using the assignment operation as in the following code: import pandas as pd import numpy as np from IPython.display import display df1 = pd.DataFrame({"A": np.arange(1, 5), "B": np.arange(11, 15)}) df1.index = (np.arange(1, 5) + 1).tolist() df2 = pd.DataFrame({"A": np.arange(1, 7), "C": np.arange(21, 27)}) display(df1) display(df2) df1[["C"]] = df2[["C"]] display(df1) I cannot understand how merging happened in this case. I would appreciate it if someone can guide me toward the original documentation and provide some further explanation for this behavior. Many thanks in advance! A: This is a basic feature of pandas, automatic index alignment. This is indeed one of the core features which distinguishes it from just numpy (on top of which it is built). Briefly, at index 2 of df1, the new column will get the value 23 (from index 2 in df2['C']). At index 3, the new column will get the value 24 from index 3 in df2['C'], and so on. So, one way you can think of this is that there is no need to do manual index alignment, i.e.: df1['C'] = df2.loc[df1.index, 'C'] Because df1['C'] = df2['C'] does that alignment automatically for you (we could envision an API where this wasn't the case, and the above, for example, would throw an error because df2 is bigger than df1, so it would be ambiguous what you want to do without automatic alignment). See the introductory tutorial: Fundamentally, data alignment is intrinsic. The link between labels and data will not be broken unless done so explicitly by you. Some more useful parts of the tutorial: vectorized operations and label alignment with series
How does merging two pandas dataframes work using the assignment operation?
The phenomenon that I am not able to understand is how pandas is able to join two dataframes using the assignment operation as in the following code: import pandas as pd import numpy as np from IPython.display import display df1 = pd.DataFrame({"A": np.arange(1, 5), "B": np.arange(11, 15)}) df1.index = (np.arange(1, 5) + 1).tolist() df2 = pd.DataFrame({"A": np.arange(1, 7), "C": np.arange(21, 27)}) display(df1) display(df2) df1[["C"]] = df2[["C"]] display(df1) I cannot understand how merging happened in this case. I would appreciate it if someone can guide me toward the original documentation and provide some further explanation for this behavior. Many thanks in advance!
[ "This is a basic feature of pandas, automatic index alignment. This is indeed one of the core features which distinguishes it from just numpy (on top of which it is built). Briefly, at index 2 of df1, the new column will get the value 23 (from index 2 in df2['C']). At index 3, the new column will get the value 24 from index 3 in df2['C'], and so on.\nSo, one way you can think of this is that there is no need to do manual index alignment, i.e.:\ndf1['C'] = df2.loc[df1.index, 'C']\n\nBecause\ndf1['C'] = df2['C']\n\ndoes that alignment automatically for you (we could envision an API where this wasn't the case, and the above, for example, would throw an error because df2 is bigger than df1, so it would be ambiguous what you want to do without automatic alignment).\nSee the introductory tutorial:\n\nFundamentally, data alignment is intrinsic. The link between labels and data will not be broken unless done so explicitly by you.\n\nSome more useful parts of the tutorial:\n\nvectorized operations and label alignment with series\n\n" ]
[ 1 ]
[]
[]
[ "merge", "pandas", "python" ]
stackoverflow_0074526154_merge_pandas_python.txt
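A hedged illustration of the alignment rule described in that answer (this snippet is illustrative, not part of the original thread): assigning a Series as a column and an explicit reindex on the frame's labels produce the same result, which makes the implicit alignment visible. It reuses the frames built in the question.

import numpy as np
import pandas as pd

df1 = pd.DataFrame({"A": np.arange(1, 5), "B": np.arange(11, 15)})
df1.index = (np.arange(1, 5) + 1).tolist()          # labels 2, 3, 4, 5
df2 = pd.DataFrame({"A": np.arange(1, 7), "C": np.arange(21, 27)})  # labels 0..5

auto = df1.copy()
auto["C"] = df2["C"]                        # pandas aligns on index labels

manual = df1.copy()
manual["C"] = df2["C"].reindex(df1.index)   # the same alignment, spelled out

assert auto.equals(manual)                  # both yield C = [23, 24, 25, 26]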
Q: How to apply str.contains to every column in pandas? I have a dataframe like this: my data I want to apply this function to every column of the dataframe: data3 = data2.str.contains('|'.join(features)) but I got this error: AttributeError: 'DataFrame' object has no attribute 'str' "features is a list of words" How can I do this and solve this problem? A: The dataframe doesn't have a contains method, but the columns do. Iterate over the columns and assign to the result. feature_str = '|'.join(features) data3 = pd.DataFrame() for name, col in data2.items(): data3[name] = col.str.contains(feature_str)
How to apply str.contains to every column in pandas?
I have a dataframe like this: my data I want to apply this function to every column of the dataframe: data3 = data2.str.contains('|'.join(features)) but I got this error: AttributeError: 'DataFrame' object has no attribute 'str' "features is a list of words" How can I do this and solve this problem?
[ "The dataframe doesn't have a contains method, but the columns do. Iterate over the columns and assign to the result.\nfeature_str = '|'.join(features)\ndata3 = pd.DataFrame()\nfor name, col in data2.items():\n    data3[name] = col.str.contains(feature_str)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074525827_pandas_python.txt
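A one-expression variant of the loop in that answer, offered as a sketch (the data2 frame here is a made-up stand-in, since the question only shows a screenshot): DataFrame.apply calls the function once per column, and each column is a Series with a .str accessor. na=False guards against missing values, which plain .str.contains would propagate as NaN.

import pandas as pd

features = ["cat", "dog"]                      # hypothetical feature words
pattern = "|".join(features)

data2 = pd.DataFrame({                         # stand-in for the question's data
    "a": ["black cat", "bird", None],
    "b": ["dog park", "fish", "catalog"],
})

# apply runs column-wise, so .str.contains works inside the lambda.
data3 = data2.apply(lambda col: col.str.contains(pattern, na=False))
print(data3)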
Q: How to prevent pytest using local module For reference, this is the opposite of this question. That question asks how to get pytest to use your local module. I want to avoid pytest using the local module. I need to test my module's installation procedure. We should all be testing our modules' installation procedures. Therefore, I want my test suite to pretend like it's any other python module trying to import my module, which I have installed (whether or not it's up to date with my latest edits and/or an editable install is my business). My project layout looks like this: . ├── my_package │ ├── __init__.py │ └── my_module.py ├── setup.py ├── pyproject.toml └── tests ├── conftest.py ├── __init__.py └── test_my_package.py If I pip install . cd tests python -c "import my_package" it all works. However, if I pip install . pytest it does not. This is because pytest automatically adds the calling directory to the PYTHONPATH, making it impossible to test that pip install has worked. I want it to not do that. I need to do this because I am using setuptools-scm (which has different behaviour in editable and non-editable installs) and setuptools.find_packages, which makes it easy to ignore subpackages. However, to reiterate, my issue is with pytest's discovery, not with the use of these two utilities. A: A workaround is to manually edit the PYTHONPATH by changing tests/conftest.py to include import sys sys.path.pop(0) before the first time my_module is imported, but it's not pretty and makes the assumption about where in the PYTHONPATH that item is going to show up. Of course, more code could be added to check explicitly, but that's really ugly: import sys from pathlib import Path project_dir = str(Path(__file__).resolve().parent.parent) sys.path = [p for p in sys.path if not p.startswith(project_dir)] A: See docs for pytest import mechanisms and sys.path/PYTHONPATH, which describe three import modes, which control this behavior. For instance, try this: $ pytest --import-mode=importlib which uses importlib to import test modules, rather than manipulating sys.path or PYTHONPATH.
How to prevent pytest using local module
For reference, this is the opposite of this question. That question asks how to get pytest to use your local module. I want to avoid pytest using the local module. I need to test my module's installation procedure. We should all be testing our modules' installation procedures. Therefore, I want my test suite to pretend like it's any other python module trying to import my module, which I have installed (whether or not it's up to date with my latest edits and/or an editable install is my business). My project layout looks like this: . ├── my_package │ ├── __init__.py │ └── my_module.py ├── setup.py ├── pyproject.toml └── tests ├── conftest.py ├── __init__.py └── test_my_package.py If I pip install . cd tests python -c "import my_package" it all works. However, if I pip install . pytest it does not. This is because pytest automatically adds the calling directory to the PYTHONPATH, making it impossible to test that pip install has worked. I want it to not do that. I need to do this because I am using setuptools-scm (which has different behaviour in editable and non-editable installs) and setuptools.find_packages, which makes it easy to ignore subpackages. However, to reiterate, my issue is with pytest's discovery, not with the use of these two utilities.
[ "A workaround is to manually edit the PYTHONPATH by changing tests/conftest.py to include\nimport sys\nsys.path.pop(0)\n\nbefore the first time my_module is imported, but it's not pretty and makes the assumption about where in the PYTHONPATH that item is going to show up. Of course, more code could be added to check explicitly, but that's really ugly:\nimport sys\nfrom pathlib import Path\nproject_dir = str(Path(__file__).resolve().parent.parent)\nsys.path = [p for p in sys.path if not p.startswith(project_dir)]\n\n", "See docs for pytest import mechanisms and sys.path/PYTHONPATH, which describe three import modes, which control this behavior. For instance, try this:\n$ pytest --import-mode=importlib\n\nwhich uses importlib to import test modules, rather than manipulating sys.path or PYTHONPATH.\n" ]
[ 0, 0 ]
[]
[]
[ "pytest", "python", "pythonpath", "unit_testing" ]
stackoverflow_0067176036_pytest_python_pythonpath_unit_testing.txt
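The concern in that question, whether the import resolved to the installed distribution or the source checkout, can also be asserted directly in the test suite. A heuristic sketch (the package name comes from the question's layout; the site-packages check is an assumption, not a guarantee, since installs can land elsewhere):

from pathlib import Path

import my_package  # the package under test, per the question's layout


def test_import_resolves_to_installed_copy():
    # Installed (non-editable) packages normally live under a
    # site-packages or dist-packages directory; the source checkout
    # does not, so this catches the sys.path leak described above.
    pkg_path = Path(my_package.__file__).resolve()
    assert any(part in ("site-packages", "dist-packages")
               for part in pkg_path.parts), f"imported from source tree: {pkg_path}"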
Q: Confused on how the multiple variables work and how to get all 4 values from 1st item in list testdata = ["One,For,The,Money", "Two,For,The,Show", "Three,To,Get,Ready", "Now,Go,Cat,Go"] #My Code: def chop(string): x = 0 y = 0 while x < 5: y = string.find(",") + 1 z = string.find(",", y) x = x + 1 return y, z #My Code Ends for i in range(4): uno, dos, tres, cuatro = chop(testdata[i]) print(uno + ":" + dos + ":" + tres + ":" + cuatro) It says I don't have enough values, I previously tried appending similar code to a list and it said I had too many A: I can't figure out why you need to do it that way, but maybe this helps. testdata = ["One,For,The,Money", "Two,For,The,Show", "Three,To,Get,Ready", "Now,Go,Cat,Go"] for i in testdata: uno, dos, tres, cuatro = i.split(',') print(uno + ":" + dos + ":" + tres + ":" + cuatro) Result One:For:The:Money Two:For:The:Show Three:To:Get:Ready Now:Go:Cat:Go Just iterate through the array and split the words by ,. The result is as expected. A: You can search for the position of the , (comma) in the given line and apply a sweeping technique to insert the words into a list. You were getting too few values as the function was not returning the 4 elements that are being extracted in the for loop. testdata = ["One,For,The,Money", "Two,For,The,Show", "Three,To,Get,Ready", "Now,Go,Cat,Go"] # My Code: def chop(line): start_position = 0 words = [] for i, c in enumerate(line): if c == ",": words.append(line[start_position:i]) start_position = i+1 words.append(line[start_position:]) return words # My Code Ends for i in range(4): uno, dos, tres, cuatro = chop(testdata[i]) print(uno + ":" + dos + ":" + tres + ":" + cuatro) Output: One:For:The:Money Two:For:The:Show Three:To:Get:Ready Now:Go:Cat:Go Explanation (updated): Here we are keeping the start position of each word in the start_position variable. The enumerate method returns a tuple of the index and the value in each iteration. We are using the value for checking if it is equal to , and the index to chop the word from the line, thus using the enumerate method. References: Documentation on enumerate
Confused on how the multiple variables work and how to get all 4 values from 1st item in list
testdata = ["One,For,The,Money", "Two,For,The,Show", "Three,To,Get,Ready", "Now,Go,Cat,Go"] #My Code: def chop(string): x = 0 y = 0 while x < 5: y = string.find(",") + 1 z = string.find(",", y) x = x + 1 return y, z #My Code Ends for i in range(4): uno, dos, tres, cuatro = chop(testdata[i]) print(uno + ":" + dos + ":" + tres + ":" + cuatro) It says I don't have enough values, I previously tried appending similar code to a list and it said I had too many
[ "I can't figure out why you need to do it that way, but maybe this helps.\ntestdata = [\"One,For,The,Money\", \"Two,For,The,Show\",\n            \"Three,To,Get,Ready\", \"Now,Go,Cat,Go\"]\n\nfor i in testdata:\n    uno, dos, tres, cuatro = i.split(',')\n    print(uno + \":\" + dos + \":\" + tres + \":\" + cuatro)\n\nResult\nOne:For:The:Money\nTwo:For:The:Show\nThree:To:Get:Ready\nNow:Go:Cat:Go\n\nJust iterate through the array and split the words by ,. The result is as expected.\n", "You can search for the position of the , (comma) in the given line and apply a sweeping technique to insert the words into a list. You were getting too few values as the function was not returning the 4 elements that are being extracted in the for loop.\ntestdata = [\"One,For,The,Money\", \"Two,For,The,Show\", \"Three,To,Get,Ready\",\n            \"Now,Go,Cat,Go\"]\n\n\n# My Code:\ndef chop(line):\n    start_position = 0\n    words = []\n    for i, c in enumerate(line):\n        if c == \",\":\n            words.append(line[start_position:i])\n            start_position = i+1\n    words.append(line[start_position:])\n    return words\n\n\n# My Code Ends\n\nfor i in range(4):\n    uno, dos, tres, cuatro = chop(testdata[i])\n    print(uno + \":\" + dos + \":\" + tres + \":\" + cuatro)\n\nOutput:\nOne:For:The:Money\nTwo:For:The:Show\nThree:To:Get:Ready\nNow:Go:Cat:Go\n\nExplanation (updated):\nHere we are keeping the start position of each word in the start_position variable. The enumerate method returns a tuple of the index and the value in each iteration. We are using the value for checking if it is equal to , and the index to chop the word from the line, thus using the enumerate method.\nReferences:\n\nDocumentation on enumerate\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074526091_python_python_3.x.txt
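Since the items are comma-separated records, the csv module is another option the answers do not mention; this sketch uses the question's own data and would also cope with quoted fields that contain commas.

import csv

testdata = ["One,For,The,Money", "Two,For,The,Show",
            "Three,To,Get,Ready", "Now,Go,Cat,Go"]

# csv.reader accepts any iterable of lines, so the list works directly;
# each row parses into exactly four fields here, so unpacking is safe.
for uno, dos, tres, cuatro in csv.reader(testdata):
    print(f"{uno}:{dos}:{tres}:{cuatro}")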