question_id int64 59.5M 79.7M | creation_date stringdate 2020-01-01 00:00:00 2025-07-15 00:00:00 | link stringlengths 60 163 | question stringlengths 53 28.9k | accepted_answer stringlengths 26 29.3k | question_vote int64 1 410 | answer_vote int64 -9 482 |
|---|---|---|---|---|---|---|
79,153,992 | 2024-11-04 | https://stackoverflow.com/questions/79153992/do-dictionaries-have-the-same-implicit-line-continuation-as-parentheses | I was surprised that this seems to work without parentheses: dict = { "a": 1 if True else 2, "b": 2 } I know that dictionaries have implicit line continuation, but I presumed that only let you split between commas. Does it simply function like parentheses, where everything is treated as one line? | Yes, implicit line continuation/joining takes place within parentheses, square brackets and curly braces in Python as documented: Expressions in parentheses, square brackets or curly braces can be split over more than one physical line without using backslashes. | 1 | 3 |
79,153,372 | 2024-11-03 | https://stackoverflow.com/questions/79153372/how-where-does-pytorch-max-documentation-show-that-you-can-pass-in-2-tensors-for | I am learning pytorch and deep learning. The documentation for torch.max doesn't make sense in that it looks like we can compare 2 tensors but I don't see where in the documentation I could have determined this. I had this code at first where I wanted to check ReLU values against the maximum. I thought that 0 could be ... | Check the bottom of the documentation page you linked where it says: torch.max(input, other, *, out=None) → Tensor See torch.maximum(). When the second argument is a tensor, torch.max computes torch.maximum | 1 | 4 |
79,153,401 | 2024-11-03 | https://stackoverflow.com/questions/79153401/difference-between-time-sleep-and-threading-timer-accuracy-and-efficiency-in | Let's say I have a Python script which is designed to wait for a specific amount of time in milliseconds and then press a key on the keyboard. Should I use threading.Timer() or should I just do it with time.sleep()? Which one is more accurate in terms of pressing the key on time and which one is more efficient? Also, p... | threading.Timer() runs in a separate thread, which means it won't block your main program. However, it can be less precise due to thread scheduling overhead and context switching. time.sleep() is more accurate for short durations (milliseconds range) since it doesn't involve thread creation overhead. However, it blocks... | 1 | 3 |
79,153,289 | 2024-11-03 | https://stackoverflow.com/questions/79153289/generate-a-stacked-bar-chart-in-python-out-of-groupby-based-on-multi-index | I would like to generate a stacked bar chart (ideally in seaborn), but happy to go with the native pandas plotting functionality. Let me introduce some test data to make things clear. In [353]: import pandas as pd In [354]: import seaborn as sns In [355]: df = pd.DataFrame({"Cat":["A", "B","A","B","A","B","A","B","C"],... | You would need to unstack the Cat (also only aggregate Time): (df.groupby(['ID', 'Cat'])['Time'].count().unstack('Cat') .plot(kind='bar', stacked=True) ) Output: Alternatively, with seaborn's object interface, for which you don't need to pre-aggregate the data. See Stack for more examples: import seaborn.objects as s... | 2 | 2 |
79,148,566 | 2024-11-01 | https://stackoverflow.com/questions/79148566/with-python-how-to-apply-vector-operations-to-a-neighborhood-in-an-n-d-image | I have a 3D image with vector components (i.e., a mapping from R3 to R3). My goal is to replace each vector with the vector of maximum norm within its 3x3x3 neighborhood. This task is proving to be unexpectedly challenging. I attempted to use scipy.ndimage.generic_filter, but despite its name, this filter only handles ... | DIPlib has this function implemented: dip.SelectionFilter(). This is how you'd use it: grad = ... # OP's grad array norm = dip.Norm(grad) out = dip.SelectionFilter(grad, norm, dip.Kernel(3, "rectangular"), mode="maximum") You can cast the dip.Image object out to a NumPy array with np.asarray(out) (no copy of the data ... | 3 | 3 |
79,147,443 | 2024-11-01 | https://stackoverflow.com/questions/79147443/is-it-more-efficient-to-highlight-cells-in-qabstracttablemodels-data-method-or | I'm building a Qt table in Python to display a large pandas DataFrame. The table uses a custom PandasTableModel (subclassing QAbstractTableModel) to connect to the DataFrame, and I want to highlight cells conditionally—e.g., red for False values and green for True. I have found two ways of doing this: Using the data m... | Premise: the proper comparison First of all, there is no such thing as more/better/optimal/faster/etc. in their absolute meaning. What's "best" is up to your requirements and context: it may be good for you and terrible for others. And that doesn't even consider user expectations in most cases. Then, before considering... | 1 | 3 |
79,151,731 | 2024-11-02 | https://stackoverflow.com/questions/79151731/error-computing-phase-angle-between-two-time-series-using-hilbert-transform | I'm trying to compute the phase angle between two time-series of real numbers. To check if my function is working without errors I have created two sine waves with a phase of 17 degrees. However, when I compute the phase angle between those two sine waves I do not get the 17 degrees. Here's my script: import numpy as n... | Instead of using arctan2(hilbert.imag, x), you can use np.angle(), which returns the argument (angle) of a complex number (always in the range [-π, π]). It's essentially doing arctan2(y, x) for a complex number x + iy. Also, after phase_angle = phase_y - phase_x, we need to ensure again that it lies in [-π, π], so we d... | 2 | 3 |
79,151,303 | 2024-11-02 | https://stackoverflow.com/questions/79151303/how-to-add-the-plane-y-x-to-a-3d-surface-plot-in-plotly | I am currently working on a 3D surface plot using Plotly in Python. Below is the code I have so far: import numpy as np import plotly.graph_objects as go # Definition of the domain x = np.linspace(-5, 5, 100) y = np.linspace(-5, 5, 100) X, Y = np.meshgrid(x, y) # Definition of the function, avoiding division by zero Z ... | To add '𝑦=𝑥' to your 3D surface plot in Plotly, you can define a separate surface for this plane and add it to the figure using another 'go.Surface' object. In Plotly, the Surface plot requires explicit values for 𝑍 to display a plane. To achieve the 𝑦=𝑥 plane across a given 𝑍 range, we need to set both 𝑋 and 𝑌... | 1 | 1 |
79,151,228 | 2024-11-02 | https://stackoverflow.com/questions/79151228/how-can-i-specify-the-directory-i-want-to-get-using-the-os-library | I have a directory called "Data" and inside it I have 35 other directories with another bunch of directories each. I need to check if these last directories have .txt files and, if so, I want to get the name of the specific directory that is one of the aforementioned 35. After this, I want to use the pandas library to ... | To check if each main directory contains any .txt files in its subdirectories, we can combine the logic you've started with and streamline it to match only the main directory name (one of the 35). Here's the code to achieve this and generate a yes/no spreadsheet using pandas. import os import pandas as pd main_dir = r'... | 2 | 1 |
79,151,259 | 2024-11-02 | https://stackoverflow.com/questions/79151259/django-model-filter-for-string | I have table which has string such as id \| url 1 \| /myapi/1241/ 2 \| /myapi/ 3 \| /myapi/1423/ 4 \| /myapi/ Now I want to filter them like this below myModel.Objects.filter(url="/myapi/****/") Is it possible , or is there any method to do this? | Filter with the __regex lookup [Django-doc]: MyModel.objects.filter(url__regex=r'/myapi/\d+/') | 2 | 2 |
79,151,289 | 2024-11-02 | https://stackoverflow.com/questions/79151289/too-large-dataframe-python-spider | I tried to load an excel file on Pandas and process it. I use large excel files. For example I'm working on a file with 50 columns. I'd like to see it on spider but when I use Print functions, everytime a part of columns will be hidden with 3 points. See the attached picture. I see someone suggest use Jupiter, is this ... | To see all columns, you can adjust the pandas display settings to avoid truncated output. 1. Option 1: Adjust pandas display options You can change the display option for maximum columns. pd.set_option("display.max_columns", None) print("Monthly sales") print(df) 2. Option 2: Use to_string() for full dataframe display... | 2 | 0 |
79,150,393 | 2024-11-02 | https://stackoverflow.com/questions/79150393/how-can-i-do-one-hot-encoding-from-multiple-columns | When I search for this topic, I get answers that do not match what I want to do. Let's say I have a table like this: Item N1 N2 N3 N4 Item1 1 2 4 8 Item2 2 3 6 7 Item3 4 5 7 9 Item4 1 5 6 7 Item5 3 4 7 8 I would like to one-hot encode this to get: Item 1 2 3 4 5 6 7 8 9 Item1 1 1 0 1 0 0 0 1 0 ... | Use melt and crosstab. tmp = df.melt('Item') result = pd.crosstab(tmp['Item'], tmp['value']).reset_index().rename_axis(None, axis=1) Item 1 2 3 4 5 6 7 8 9 0 Item1 1 1 0 1 0 0 0 1 0 1 Item2 0 1 1 0 0 1 1 0 0 2 Item3 0 0 0 1 1 0 1 0 1 3 Item4 1 0 0 0 1 1 1 0 0 4 Item5 0 0 1 1 0 0 1 1 0 | 3 | 2 |
79,147,681 | 2024-11-01 | https://stackoverflow.com/questions/79147681/how-to-separate-the-string-by-specific-symbols-and-write-it-to-list | I have the following string: my_string='11AB2AB33' I'd like to write this string in a list, so 'AB' is a single element of this list in the following way: ['1', '1', 'AB', '2', 'AB', '3', '3'] I tried to do it by list(my_string) but the result wasn't what I expected: ['1', '1', 'A', 'B', '2', 'A', 'B', '3', '3'] I a... | You can use re.findall with an alternation using the pipe \| matching either AB or a non whitespace character \S If you also want to match spaces you can use a . instead of \S You can see the matches here on regex101. my_string='11AB2AB33' print(re.findall(r'AB\|\S', my_string)) Output ['1', '1', 'AB', '2', 'AB', '3', '... | 2 | 2 |
79,149,913 | 2024-11-02 | https://stackoverflow.com/questions/79149913/cannot-install-python-packages-because-of-urllib3 | I am trying to run a Python script: python main.py obtaining this output: Traceback (most recent call last): File "main.py", line 8, in <module> import stateful File "/path/to/s.py", line 599, in <module> import policy as offloading_policy File "/path/to/p.py", line 2, in <module> from pacsltk import perfmodel ModuleN... | A few days ago I faced a similar problem with python-requests and these steps worked for me [Don't use sudo while installing, because this seems to work when you move to sudo but installing a package with sudo is harmful who knows what you're installing is a malware in a package form]: Remove the package manually from ... | 2 | 4 |
79,149,707 | 2024-11-02 | https://stackoverflow.com/questions/79149707/how-to-obtain-n-th-even-triangle-number-using-recursive-algorithm | I know there is a formula for n-th even triangle, but it's just a matter of interest for me to try to write a recursive algorithm. def even_triangle(n): value = n * (n + 1) // 2 if value % 2 == 0: return value return even_triangle(n + 1) for i in range(1, 12): print(f"{i})", even_triangle(i)) I tried to write a functi... | The reason you get duplicates follows from your recursive call: return even_triangle(n + 1) When this gets executed, then consequently even_triangle(n) === even_triangle(n + 1), which is not what you ever want to have (since it enforces the duplicate value). When using recursion, you would typically make a recursive c... | 3 | 3 |
79,149,709 | 2024-11-02 | https://stackoverflow.com/questions/79149709/efficient-way-to-delete-columns-and-rows-from-a-numpy-array-using-slicing-and-no | Would it be possible given an array A, bad row indices and bad column indices to use slicing to make a new array that does not have these rows or columns? This can be done with np.delete as follows: import numpy as np A=np.random.rand(20,16) bad_col=np.arange(0,A.shape[1],4)[1:] bad_row=np.arange(0,A.shape[0],4)[1:] An... | You cannot remove items of an array without either moving all items (which is slow for large arrays) or creating a new one. There is no other solution. In Numba or Cython, you can directly create a new array with one operation instead of 2 so it should be about twice faster for large arrays. It should be even faster fo... | 5 | 3 |
79,149,626 | 2024-11-01 | https://stackoverflow.com/questions/79149626/how-to-fold-python-code-using-sed-and-or-awk | I have a large python code base. I want to get a feel of how a BaseClass is subclassed by 'grepping-out' the name of the sub class and the functions in the class, but only for classes that inherit from SomeBaseClass. So if we have multiple files that have multiple classes, and some of the classes look like: class SubCl... | Using any awk, this may be what you're trying to do: $ awk '/^class/{ f=/\(BaseClass)/ } f && $1 ~ /^(class\|def\|async)$/' file class SubClass_A(BaseClass): def foo(self): async def goo(self): class SubClass_B(BaseClass): def foo(self): async def goo(self): | 2 | 4 |
79,149,172 | 2024-11-01 | https://stackoverflow.com/questions/79149172/python-replace-period-if-its-the-only-value-in-column | How do I replace '.' in a dataframe if it's the only value in a cell without also replacing it if it's part of a decimal number? Here's the dataframe id 2.2222 . . 3.2 1.0 I tried this but it removes all decimals df = pd.DataFrame({'id':['2.2222','.','.','3.2','1.0']}) df['id'] = df['id'].str.replace(... | You could use .loc to do the replace: df.loc[df['id'] == '.', 'id'] = '' | 2 | 4 |
79,148,924 | 2024-11-01 | https://stackoverflow.com/questions/79148924/how-to-replace-xml-special-characters-from-text-within-html-tags-using-python | I am quite new to Python. I've been working on a web-scraping project that extracts data from various web pages, constructs a new HTML page using the data, and sends the page to a document management system The document management system has some XML-based parser for validating the HTML. It will reject it if XML specia... | BeautifulSoup tames unruly HTML and presents it as unbroken HTML. You can use it to fix references like this. from bs4 import BeautifulSoup doc = """<body> <p>The price of apples & oranges in New York is > the price of apples and oranges in Chicago</p> </body>""" soup = BeautifulSoup(doc, features="lxml") print(soup.pr... | 1 | 3 |
79,147,445 | 2024-11-01 | https://stackoverflow.com/questions/79147445/pandas-pivot-data-fill-mult-index-column-horizontally | i have the following code: import pandas as pd data = { 'name': ['Comp1', 'Comp1', 'Comp2', 'Comp2', 'Comp3'], 'entity_type': ['type1', 'type1', 'type2', 'type2', 'type3'], 'code': ['code1', 'code2', 'code3', 'code1', 'code2'], 'date': ['2024-01-31', '2024-01-31', '2024-01-29', '2024-01-31', '2024-01-29'], 'value': [10... | You could (1) reset_index with drop=True, (2) set the display.multi_sparse to False (with pandas.option_context) and (3) fillna with '': df = pd.DataFrame(data) out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value') .rename_axis([('name', 'entity_type', 'source', 'date')]) .res... | 2 | 4 |
79,148,025 | 2024-11-01 | https://stackoverflow.com/questions/79148025/create-random-partition-inside-a-pandas-dataframe-and-create-a-field-that-identi | I have created the following pandas dataframe: ds = {'col1':[1.0,2.1,2.2,3.1,41,5.2,5.0,6.1,7.1,10]} df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 0 1.0 1 2.1 2 2.2 3 3.1 4 41.0 5 5.2 6 5.0 7 6.1 8 7.1 9 10.0 I need to create a random 80% / 20% partition of the dataset and I also need to cr... | SOLUTION (PANDAS + NUMPY) A possible solution, which: First, using np.random.choice to randomly choose 80% of df indices without replacement. The df.index.isin function then checks each row's index to see if it was selected. Finally, np.where assigns a 1 to the Flag column for selected indices and a 0 for the others... | 2 | 2 |
79,147,500 | 2024-11-01 | https://stackoverflow.com/questions/79147500/glxplatform-object-has-no-attribute-osmesa | I am testing whether OSMesa functions properly, but I encountered the following error. How can this error be resolved? The full error message: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[1], line 2 1 from OpenGL import GL ----> 2 f... | To resolve this, the environment variable PYOPENGL_PLATFORM should be set to osmesa before importing pyrender. The pyrender documentation suggests to do this either in the shell when executing the program: PYOPENGL_PLATFORM=osmesa python render.py or by adding the following lines at the top of the program: import os o... | 2 | 0 |
79,146,569 | 2024-10-31 | https://stackoverflow.com/questions/79146569/modify-numpy-array-of-arrays | I have a numpy array of numpy arrays and I need to add a leading zero to each of the inner arrays: a = [[1 2] [3 4] [5 6]] --> b = [[0 1 2] [0 3 4] [0 5 6]] Looping through like this: for item in a: item = np.insert(item, 0, 0) doesn't help. Numpy.put() flattens the array, which I don't want. Any suggestions how I acco... | np.insert - insert values along the given axis before the given indices. If axis is None then array is flattened first. import numpy as np a = np.array([[1, 2], [3, 4], [5, 6]]) b = np.insert(a, 0, 0, axis=1) print(b) Result: [[0 1 2] [0 3 4] [0 5 6]] | 3 | 4 |
79,145,336 | 2024-10-31 | https://stackoverflow.com/questions/79145336/stripna-pandas-dropna-but-behaves-like-strip | It is a pretty common occurrence to have leading and trailing NaN values in a table or DataFrame. This is particularly true after joins and in timeseries data. import numpy as np import pandas as pd df1 = pd.DataFrame({ 'a': [1, 2, 3, 4, 5, 6, 7], 'b': [np.NaN, 2, np.NaN, 4, 5, np.NaN, np.NaN], }) Out[0]: a b 0 1 NaN 1... | Another way, that might be a little more readable is using the pd.Series.first_valid_index and pd.Series.last_valid_index with index slicing using loc: df1.loc[df1['b'].first_valid_index():df1['b'].last_valid_index()] Output: a b 1 2 2.0 2 3 NaN 3 4 4.0 4 5 5.0 And, this should be really fast. Using @LittleBobbyTabl... | 3 | 4 |
79,145,689 | 2024-10-31 | https://stackoverflow.com/questions/79145689/check-if-value-is-in-enum-fails | I am super confused how neither of these work. Can someone help me understand what's going on and why it prints "BAD" and "Value does not exist"? from enum import Enum class EventType(Enum): USER_LOGIN = 1, USER_LOGOUT = 2, @classmethod def has_value(cls, value): return value in cls._value2member_map_ eventType = 2 if ... | As @msanford said in the comments, remove the trailing commas from your values -- they are creating tuples. | 2 | 4 |
79,142,218 | 2024-10-30 | https://stackoverflow.com/questions/79142218/how-to-calculate-transfer-function-with-iout-iin-instead-for-parallel-rlc-tank | I am trying to calculate the RLC tank transfer function using LCAPY. I know what the answer should be but I want to do it with code. The issue is there isn't a current() or voltage() function in LCAPY and only impedance() and transfer(). Here is the code so far: import lcapy from lcapy import s, expr from IPython.displ... | You don't need a voltage here, since both currents will be proportional to it. You just need the resistance (effectively the impedance of the resistor) and the total impedance (resistor+capacitor+inductor) between the two lines. If you want your frequency response then you will need to specify this explicitly. Since th... | 3 | 4 |
79,141,599 | 2024-10-30 | https://stackoverflow.com/questions/79141599/with-pandas-how-do-i-use-np-where-with-nullable-datetime-colums | np.where is great for pandas since it's a vectorized way to change the column based on a condition. But while it seems to work great with np native types, it doesn't play nice with dates. This works great: >>> df1 = pd.DataFrame([["a", 1], ["b", np.nan]], columns=["name", "num"]) >>> df1 name num 0 a 1.0 1 b NaN >>> np... | The best answer is to use Series.where: df2['out'] = df2['date'].where(df2["date"] < datetime.datetime(2024,3,1)) As a second best answer, you can use NaT. numpy.where returns an array with a single dtype, you should not use NaN as an empty value but NaT: np.where(df2["date"] < datetime.datetime(2024,3,1), df2["date"]... | 2 | 3 |
79,142,186 | 2024-10-30 | https://stackoverflow.com/questions/79142186/how-do-i-flatten-the-elements-of-a-column-of-type-list-of-lists-so-that-it-is-a | Consider the following example: import polars as pl pl.DataFrame(pl.Series("x", ["1, 0", "2,3", "5 4"])).with_columns( pl.col("x").str.split(",").list.eval(pl.element().str.split(" ")) ) shape: (3, 1) ┌────────────────────┐ │ x │ │ --- │ │ list[list[str]] │ ╞════════════════════╡ │ [["1"], ["", "0"]] │ │ [["2"], ["3"]... | You can use Expr.explode(), Expr.list.explode(), or Expr.flatten() to return one row for each list element, and using it inside of Expr.list.eval() lets you expand each row's nested lists instead of exploding the series itself. import polars as pl df = pl.DataFrame(pl.Series("x", ["1, 0", "2,3", "5 4"])) print(df.with_... | 2 | 3 |
79,144,429 | 2024-10-31 | https://stackoverflow.com/questions/79144429/seaborn-can-i-add-a-second-hue-or-similar-within-a-stripplot-with-dodge-tru | Let's say I have a plot that looks like so: import numpy as np df = sns.load_dataset('iris') dfm = pd.melt(df, id_vars=["species"]) dfm = dfm.query('variable in ["sepal_length", "sepal_width"]') sns.stripplot(data=dfm, x="species", y="value", hue="variable", dodge=True) plt.legend(bbox_to_anchor=(1.05, 1), loc=2) Let... | You could overlay multiple stripplots with different alphas: start, end = dfm['potency'].agg(['min', 'max']) for k, v in dfm.groupby('potency'): sns.stripplot(data=v.assign(variable=v['variable']+f' / potency={k}'), x="species", y="value", hue="variable", dodge=True, alpha=(k-start+1)/(end-start+1) ) plt.legend(bbox_to... | 2 | 1 |
79,139,603 | 2024-10-30 | https://stackoverflow.com/questions/79139603/how-can-i-keep-track-of-which-function-returned-which-value-with-concurrent-fut | My code is as follows: with concurrent.futures.ThreadPoolExecutor() as executor: length = len(self.seq) futures = [None] * length results = [] for i in range(length): f,args,kwargs = self.seq[i] future = executor.submit(f, *args, **kwargs) futures[i] = future for f in concurrent.futures.as_completed(starts): results.ap... | The simplest answer is to pass a counter argument along with the rest of your args and kwargs, where you set counter to be i. You then need to modify f to return the counter along with whatever other values its supposed to return. Your results can then be sorted by the counter. Alternatively, create a results array of ... | 2 | 0 |
79,141,781 | 2024-10-30 | https://stackoverflow.com/questions/79141781/get-maximum-previous-nonmissing-value-within-group-in-pandas-dataframe | I have a pandas dataframe with a group structure where the value of interest, val, is guaranteed to be sorted within the group. However, there are missing values in val which I need to bound. The data I have looks like this: group_id id_within_group val 1 1 3.2 1 2 4.8 1 3 5.2 1 4 NaN 1 5 7.5 2 1 1.8 2 2 2.8 2 3 NaN 2 ... | You could use a custom groupby.transform with ffill/bfill+shift: g = df.groupby('group_id')['val'] df['max_prev'] = g.transform(lambda x: x.ffill().shift()) df['min_next'] = g.transform(lambda x: x[::-1].ffill().shift()) # or df['min_next'] = g.transform(lambda x: x.bfill().shift(-1)) If your values are not sorted, ad... | 1 | 4 |
79,141,475 | 2024-10-30 | https://stackoverflow.com/questions/79141475/how-to-project-a-3d-point-from-a-3d-world-with-a-specific-camera-position-into-a | I have some hard time understanding, how a 3D point from a scene can be translated into a 2d Image. I have created a scene in Blender where my camera is positioned at P_cam(0|0|0) at is looking between the x and y axis (rotation of x=90° and z=-45°). I spawned a test cube at pos_c (5|5|0). The img I see looks like this... | Your issue is here: cam_rot_in_deg = (90, 0, -45) cam_rot_in_rad = np.radians(cam_rot_in_deg) rvec = np.array([[cam_rot_in_rad]], np.float32).reshape((3,1)) OpenCV's "rvec" is an axis-angle representation, not Euler angles. What you did there is Euler angles. That's no good. Instead of rvecs (and tvecs), you can just ... | 2 | 1 |
79,141,089 | 2024-10-30 | https://stackoverflow.com/questions/79141089/saving-numpy-array-after-indexing-is-much-slower | I am running into an issue that saving a numpy array after indexing results in much slower saving. A minimal reproducible example can be seen below: import time import numpy as np def mre(save_path): array = np.zeros((245, 233, 6)) start = time.time() for i in range(1000): with open(save_path + '/array1_' + str(i), "wb... | Have you tried to use the numpy.ascontiguousarray() function ? This function is useful when working with arrays that have a non-contiguous memory layout, as it can improve performance by ensuring that the data is stored in contiguous memory locations. Example array2 = np.ascontiguousarray(array[:,:,[0,1,2,3,4,5]]) Out... | 2 | 1 |
79,140,661 | 2024-10-30 | https://stackoverflow.com/questions/79140661/how-to-sum-values-based-on-a-second-index-array-in-a-vectorized-manner | Let' say I have a value array values = np.array([0.0, 1.0, 2.0, 3.0, 4.0]) and an index array indices = np.array([0,1,0,2,2]) Is there a vectorized way to sum the values for each unique index in indices? I mean a vectorized version to get sums in this snippet: sums = np.zeros(np.max(indices)+1) for index, value in zi... | Another possible solution, which: First, creates an array of zeros b with a length equal to the number of unique elements in the indices array It then uses the np.add.at function to accumulate the values from the values array into the corresponding positions in b as specified by the indices array. b = np.zeros(1 + ... | 3 | 3 |
79,139,406 | 2024-10-30 | https://stackoverflow.com/questions/79139406/executing-an-scheduled-task-if-the-script-is-run-too-late-and-after-the-schedule | Imagine I want to run this function: def main(): pass at these scheduled times (not every random 3 hours): import schedule schedule.every().day.at("00:00").do(main) schedule.every().day.at("03:00").do(main) schedule.every().day.at("06:00").do(main) schedule.every().day.at("09:00").do(main) schedule.every().day.at("12:... | I think you need just to add main() call before the while loop, with check of current time import datetime # other schedules ... schedule.every().day.at("21:00").do(main) now = datetime.datetime.now() if 0 <= now.hour <= 20: main() # <-- this will be called once, at script start if current time is between 00:00 and 21:... | 2 | 0 |
79,139,273 | 2024-10-29 | https://stackoverflow.com/questions/79139273/django-warning-accessing-the-database-during-app-initialization-is-discourage | Recently, I’ve updated Django to the latest version, 5.1.2, and since then, every time I start the server, I get this warning: RuntimeWarning: Accessing the database during app initialization is discouraged. To fix this warning, avoid executing queries in AppConfig.ready() or when your app modules are imported. From ... | Because I'm only updating my settings and not making any changes at the database level, I concluded that, in my case, it's safe to keep using this. To silence the warning, I added the following at the beginning of settings.py: import warnings warnings.filterwarnings( 'ignore', message='Accessing the database during app... | 3 | 0 |
79,126,171 | 2024-10-25 | https://stackoverflow.com/questions/79126171/contextvar-set-and-reset-in-the-same-function-fails-created-in-a-different-con | I have this function: async_session = contextvars.ContextVar("async_session") async def get_async_session() -> AsyncGenerator[AsyncSession, None]: async with async_session_maker() as session: try: _token = async_session.set(session) yield session finally: async_session.reset(_token) This fails with: ValueError: <Token... | For anyone who comes after me, the actual problem is that my generator function is not async (the example in the question is misleading). FastAPI runs async and sync dependencies in different contexts to avoid the sync dependencies holding up the async loop thread, which is why it is then cleaned up in the wrong contex... | 2 | 0 |
79,126,228 | 2024-10-25 | https://stackoverflow.com/questions/79126228/pyinstaller-doesnt-launch-my-exe-no-module-pydantic-deprecated-decorator | I compiled my code with pyinstaller to make a .exe but when I launch it, it told me the module 'pydantic.deprecated.decorator' wasn't found. that seems logic because I have nothing with that name. So I don't know what to do to solve this issue I already tried to reinstal pydantic Traceback (most recent call last): File... | I have faced a similar error message. What has fixed it for me was to add hidden imports. With the following hidden imports that problem went away: pyinstaller --hidden-import=pydantic --hidden-import=pydantic-core --hidden-import=pydantic.deprecated.decorator app.py The langchain modules use pydantic under the hood a... | 2 | 3 |
79,135,993 | 2024-10-29 | https://stackoverflow.com/questions/79135993/convert-easting-northing-coordinates-to-latitude-and-longitude-in-scala-spark-wi | I’m working on a project in Scala and Apache Spark where I need to convert coordinates from Easting/Northing (EPSG:27700) to Latitude/Longitude (EPSG:4326). I have a Python script that uses in built libraries pyproj (transformer) to handle this, but I haven’t found an equivalent way to do it in Scala/Spark. Here’s the ... | @MartinHH Thanks for providing the reference. Looks like it is possible to create same code in scala/spark package com.test.job.function_testing import org.apache.spark.sql.SparkSession import geotrellis.proj4.CRS import geotrellis.proj4.Transform object TestCode { def main(args: Array[String]) = { val runLocally = tru... | 2 | 1 |
79,136,294 | 2024-10-29 | https://stackoverflow.com/questions/79136294/how-to-run-tests-using-each-package-pyproject-toml-configuration-instead-of-the | I'm using uv workspace with a structure that looks like the following example from the doc. albatross ├── packages │ ├── bird-feeder │ │ ├── pyproject.toml │ │ └── src │ │ └── bird_feeder │ │ ├── __init__.py │ │ └── foo.py │ └── seeds │ ├── pyproject.toml │ └── src │ └── seeds │ ├── __init__.py │ └── bar.py ├── pyproje... | You can tell pytest the directory: uv run pytest packages/bird-feeder uv run --package bird-feeder doesn't work: uv run --package bird-feeder pytest doesn't pass pyproject.toml info to pytest. pytest merely infers dir=pathlib.Path.cwd(). uv run --directory packages/bird-feeder works: uv run --directory packages/bird... | 2 | 1 |
79,138,896 | 2024-10-29 | https://stackoverflow.com/questions/79138896/how-to-conditionally-update-a-column-in-a-polars-dataframe-with-values-from-a-li | I am trying to update specific rows in a python-polars DataFrame where two columns ("Season" and "Wk") meet certain conditions, using values from a list or Series that should align with the filtered rows. In pandas, I would use .loc[] to do this, but I haven't found a way to achieve the same result with Polars. import ... | You could generate the indices with when/then and .cum_sum() df.with_columns( idx = pl.when(Season=2024, Wk=29).then(1).cum_sum() - 1 ) shape: (4, 4) ┌────────┬─────┬──────────┬──────┐ │ Season ┆ Wk ┆ position ┆ idx │ │ --- ┆ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 ┆ i32 │ ╞════════╪═════╪══════════╪══════╡ │ 2024 ┆ 28 ┆ ... | 2 | 1 |
79,138,384 | 2024-10-29 | https://stackoverflow.com/questions/79138384/use-np-where-to-create-a-list-with-same-number-of-elements-but-different-conten | I have a pandas dataframe where a value sometimes gets NA. I want to fill this column with a list of strings with the same length as another column: import pandas as pd import numpy as np df = pd.DataFrame({"a": ["one", "two"], "b": ["three", "four"], "c": [[1, 2], [3, 4]], "d": [[5, 6], np.nan]}) a b c d one t... | A possible solution consists in using np.where. df.assign(d = np.where( df['d'].isna(), pd.Series([['no_value'] * len(lst) for lst in df['c']]), df['d'])) Another possible solution, which uses: apply on df, iterating through each row with a lambda function that checks whether the value in column d is NaN. If the co... | 3 | 1 |
79,137,381 | 2024-10-29 | https://stackoverflow.com/questions/79137381/how-to-split-an-audio-file-to-have-chunks-that-are-less-than-a-certain-dimension | I need to divide an audio file to have chunks that are less than 25mb. I would like to not have to save the file on the disk. This is the code I have for now, but it is not working as expected, as it splits the audio into chunks of around 2mb def audio_splitter(audio_file): audio = AudioSegment.from_file(audio_file) # Set... | You can use io.BytesIO for size estimation. In my test case I used a 1/2 MB limit and the files were all exactly 512kb, 512kb, 426kb, 71.1kb from pydub import AudioSegment import io import sys import os def audio_splitter(audio_file): audio = AudioSegment.from_file(audio_file) test = audio[0:len(audio)] test_io = io.BytesIO() test.export(t... | 2 | 0 |
79,125,305 | 2024-10-25 | https://stackoverflow.com/questions/79125305/how-to-measure-a-server-process-maximum-memory-usage-from-launch-to-sigint-using | I have a server program (let's call it my-server) I cannot alter and I am interested in measuring its maximum memory usage from when it starts to when a certain external event occurs. I am able to do it manually on my shell by: running /usr/bin/time -f "%M" my-server triggering the external event sending an INT sign... | It turns out I misunderstood /usr/bin/time's behavior. I thought that when receiving an INT signal time would relay it to the sub-process being measured, wait for it to terminate, and then terminate normally, outputting its results. What happens instead is that time simply terminates before outputting any result. Whe... | 3 | 1 |
79,135,329 | 2024-10-28 | https://stackoverflow.com/questions/79135329/why-is-pandas-itertuples-slower-than-iterrows-on-dataframes-with-many-100-col | In the unfortunate situation where looping over the rows of a Pandas dataframe is the only way to proceed, it's usually mentioned that itertuples() is to be preferred over iterrows() in terms of computational speed. This assertion appears valid for dataframes with few columns ('narrow dataframes'), but doesn't seem to ... | First of all, you should not use either of them. Iterating through a dataframe in a Python loop is painfully slow no matter how you do it. You can only use them if you don't care about the performance. That being said, I will answer your question based on my technical curiosity. Both APIs are helper functions implement... | 2 | 4 |
79,120,600 | 2024-10-24 | https://stackoverflow.com/questions/79120600/passing-a-python-function-in-a-container-to-a-c-object-and-pybind-wrappers | I am writing a set of Python wrappers via pybind11 for an optimization library whose heavy-lifting code was written in C++. The abstract class hierarchy of my C++ code to be wrapped currently looks something like this (multivariate.h): typedef std::function<double(const double*)> multivariate; struct multivariate_probl... | If you would like to wrap your C-style interfaces in modern c++ compatible with pybind11, you can do like this: Considering that we have some dummy impl of your optimizer: typedef std::function<double(const double *)> multivariate; struct multivariate_problem { // objective multivariate _f; int _n; // bound constraints... | 2 | 1 |
79,136,314 | 2024-10-29 | https://stackoverflow.com/questions/79136314/how-to-let-pip-install-show-progress | In the past, I installed my personal package with setup.py: python setup.py install Now this method is deprecated, and I can only use pip: python -m pip install . However, the method with setup.py can show install messages, but the pip method cannot. For example, when there is C++ code which requires compiling the source c... | You can force pip to be more verbose using pip -v install. The option is additive, and can be used up to 3 times to increase verbosity: pip -v install pip -vv install pip -vvv install | 2 | 2 |
79,136,400 | 2024-10-29 | https://stackoverflow.com/questions/79136400/filter-rows-by-condition-on-columns-with-certain-names | I have a dataframe: df = pd.DataFrame({"ID": ["ID1", "ID2", "ID3", "ID4", "ID5"], "Item": ["Item1", "Item2", "Item3", "Item4","Item5"], "Catalog1": ["cat1", "1Cat12", "Cat35", "1cat3","Cat5"], "Catalog2": ["Cat11", "Cat12", "Cat35", "1Cat1","2cat5"], "Catalog3": ["cat6", "Ccat2", "1Cat9", "1cat3","Cat7"], "Price": ["71... | In your code, filter_col is a list. You can't use str with it. You can make use of pandas functions to do the operations faster. Here's the code to solve it: import pandas as pd # Create the DataFrame df = pd.DataFrame({"ID": ["ID1", "ID2", "ID3", "ID4", "ID5"], "Item": ["Item1", "Item2", "Item3", "Item4","Item5"], "Ca... | 4 | 4 |
79,136,350 | 2024-10-29 | https://stackoverflow.com/questions/79136350/numpy-array-slice-then-assign-to-itself | Why the following code does not return [1,4,3,4]? Hasn't a already changed during the reversed order assignment? a=np.array([1,2,3,4]) a[1::]=a[:0:-1] The result is: array([1, 4, 3, 2]) | You're right that a changes during the assignment, which in turn affects the view whose elements you're assigning into a. If NumPy didn't have special handling for this case, you could indeed see array([1, 4, 3, 4]) as a result. However, NumPy checks for this case. If NumPy detects that the RHS of the assignment may sh... | 2 | 6 |
79,126,205 | 2024-10-25 | https://stackoverflow.com/questions/79126205/how-to-split-a-pyspark-dataframe-taking-a-portion-of-data-for-each-different-id | I'm working with a pyspark dataframe (in Python) containing time series data. Data got a structure like this: event_time variable value step ID 1456942945 var_a 123.4 1 id_1 1456931076 var_b 857.01 1 id_1 1456932268 var_b 871.74 1 id_1 1456940055 var_b 992.3 2 id_1 1456932781 var_c 861.3 2 id_1 1456937186 var_c 959.6 3... | Group the data by ID and use percentile_approx as aggregation function to calculate the threshold for step=4. Then create a where clause with these values to filter the data: from pyspark.sql import functions as F df = ... threshold = df.where('step = 4') \ .groupBy('ID') \ .agg(F.percentile_approx('event_time', 0.25))... | 3 | 1 |
79,131,057 | 2024-10-27 | https://stackoverflow.com/questions/79131057/why-do-i-get-a-networkxerror-node-has-no-position-when-trying-to-draw-a-graph | I'm trying to create a simulation (powered by Python) to analyze the density and curvature of hyperbolic lattices and also their relations with Anti-de Sitter (AdS) spaces and the AdS/CFT correspondence for a research project. I'm using matplotlib, numpy, networkx, random and collections. While everything seems fine when I try to run ... | Your current code does not assign positions to the nodes at depth == max_depth, resulting in nodes without defined positions. Switching the first two statements in add_hyperbolic_lattice resolves the issue. def add_hyperbolic_lattice(G, center, radius, depth, max_depth): # Add current center node G.add_node(center, pos... | 2 | 0 |
79,119,390 | 2024-10-23 | https://stackoverflow.com/questions/79119390/micropython-aioble-esp32-charactrictic-written-never-returns-even-upon-a-wri | I am working with ESP32-C3. I created a BLE GATT Server, and I want to exchange data with it bidirectionally. I am using the nRF Connect Android app for debugging. Characteristic #1 is used to send data from the ESP to nRF Connect. It works fine. Characteristic #2 is used to receive data from nRF Connect on the ESP32. It doesn't work: c... | I solved my own problem. The mistake was: GATT_UUID = bluetooth.UUID(0x1802) I changed that UUID to a 128-bit unique one, and it worked. Hope this post helps someone. | 2 | 2 |
79,132,812 | 2024-10-28 | https://stackoverflow.com/questions/79132812/find-intersection-of-dates-in-grouped-polars-dataframe | Consider the following pl.DataFrame: import polars as pl data = { "symbol": ["AAPL"] * 5 + ["GOOGL"] * 3 + ["MSFT"] * 4, "date": [ "2023-01-01", "2023-01-02", "2023-01-03", "2023-01-04", "2023-01-05", # AAPL has 5 dates "2023-01-01", "2023-01-02", "2023-01-03", # GOOGL has 3 dates "2023-01-01", "2023-01-02", "2023-01-0... | You could count the number of unique (n_unique) symbols over date and filter the rows that have all symbols: df.filter(pl.col('symbol').n_unique().over('date') .eq(pl.col('symbol').n_unique())) Output: ┌────────┬────────────┐ │ symbol ┆ date │ │ --- ┆ --- │ │ str ┆ str │ ╞════════╪════════════╡ │ AAPL ┆ 2023-01-01 │ │... | 2 | 6 |
79,130,996 | 2024-10-27 | https://stackoverflow.com/questions/79130996/programmatically-change-components-of-pytorch-model | I am training a model in pytorch and would like to be able to programmatically change some components of the model architecture to check which works best without any if-blocks in the forward(). Consider a toy example: import torch class Model(torch.nn.Module): def __init__(self, layers: str, d_in: int, d_out: int): supe... | You could just use the if statement in the init function or in another function, for example: from enum import Enum class ModelType(Enum): Parallel = 1 Sequential = 2 class Model(torch.nn.Module): def __init__(self, layers: str, d_in: int, d_out: int, model_type: ModelType): super().__init__() self.layers = layers linea... | 2 | 1 |
79,129,975 | 2024-10-27 | https://stackoverflow.com/questions/79129975/how-to-sum-delimited-text-in-python-in-excel | I have slowly started using Python in Excel. I have been managing the code somehow, but this time the output data I received is delimited with ",". How do I do it with the PY option in Excel? Based on the picture below, the sum of aa,dd,ee is 18, bb,gg is 9, and so on. I can sum individual values but am not sure how to deal with the delimited text.... | Enter in E2 with =PY: df_nv = xl("A1:B10", headers=True) names = xl("D2:D5").iloc[:,0] nv = dict(zip(df_nv["Name"],df_nv["Value"])) def sum_split(a): return sum(map(lambda a: nv[a], a.split(","))) list(map(sum_split, names)) Siddharth Rout's answer has better checks - for example xl returns a DataFrame only for multi-... | 2 | 3 |
79,130,783 | 2024-10-27 | https://stackoverflow.com/questions/79130783/difficulty-to-scrape-html-page-from-a-dynamic-generated-website-with-python | I'm trying to retrieve some data from a website with Python. The website seems to generate its content with JavaScript, so I cannot use the standard requests library. I tried the requests-html module and Selenium, which both handle JavaScript content, but the problem is that I still cannot get the html page of this websit... | Your approach using selenium is not optimal. Don't try to get the page source but rather use selenium's built-in functionality for navigation. For example: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selen... | 2 | 1 |
79,117,084 | 2024-10-23 | https://stackoverflow.com/questions/79117084/using-python-to-imageclip-an-image-in-autocad | I want to automate the creation of a clipping boundary of an image (external references) in AutoCAD using known coordinates. I'm able to do it manually using the native IMAGECLIP function in AutoCAD by drawing a polygon. It seems the key parameter is ClipBoundary, but I'm not able to use it properly. For your informat... | Note: I don't have AutoCAD installed, so I can't do any testing to support my statement(s). But as the very URL from the question ([AutoDesk.Help]: ClipBoundary Method (ActiveX)) states, ClipBoundary is a method, which means it has to be invoked. raster.ClipBoundary = points doesn't make any sense; instead you should:... | 4 | 0 |
79,129,809 | 2024-10-27 | https://stackoverflow.com/questions/79129809/use-beautiful-soup-to-count-title-links | I am attempting to write code that keeps track of the text for the links in the left-hand gray box on this webpage. In this case the code should return Valykrie, The Acid Baby Here is the code I am trying to use: import requests from bs4 import BeautifulSoup url = 'https://www.mountainproject.com/area/109928429/aas... | Search for the table with a specific id, then for the rows: import requests from bs4 import BeautifulSoup url = 'https://www.mountainproject.com/area/109928429/aasgard-sentinel' page = requests.get(url) soup = BeautifulSoup(page.text, "html.parser") table = soup.find(lambda tag: tag.name=='table' and tag.has_attr('id') an... | 2 | 1 |
79,128,733 | 2024-10-26 | https://stackoverflow.com/questions/79128733/hiding-grouped-slash-commands-in-dms | How can I hide grouped slash commands in DMs? I provided a little python code sample below, with a normal slash command (bot.tree) and a grouped slash command (class TestGroup). @discord.app_commands.guild_only() works perfectly fine with the hidden-test command but not the other one, I've tried many approaches to the ... | You were using the decorator @discord.app_commands.guild_only() on a group made with the subclass method; however, it applies only to individual commands. For a Group or GroupCog defined that way, pass guild_only=True in super().__init__() instead of using the decorator. That's the only change I have made import dis... | 2 | 0 |
79,129,581 | 2024-10-26 | https://stackoverflow.com/questions/79129581/how-to-determine-if-a-large-integer-is-a-power-of-3-in-python | I'm trying to determine if a given positive integer N is a power of 3, i.e., if there exists a nonnegative integer x such that 3^x = N. The challenge is that N can be extremely large, with up to 10^7 digits. Here's the logic I want to implement: If N is less than or equal to 0, return -1. Use l... | Any approach that involves repeated division is going to be slow with numbers that large. Instead, consider using the bit length of the number (effectively the ceiling of its base 2 logarithm) to approximate the corresponding power of three, then check to see if it is indeed equal: import math def what_power_of_3(n): i... | 8 | 13 |
79,129,491 | 2024-10-26 | https://stackoverflow.com/questions/79129491/writing-interdependent-if-else-statements | Is there any advantage to one of these over the other? Also, is there any better code than these to achieve the goal? My intuition is that in number 2, since it has already checked for x or y, the check for y is more efficient? if x or y: do some stuff if y: do some OTHER stuff if x or y: do some stuff if y: do so... | The second one is marginally more efficient, because it won't redundantly test for y again in the case where x and y are both False. In cases where x and y are often False (i.e. the first if is rarely entered), this results in a slight improvement in performance. However, it should be noted that this improvement is ver... | 2 | 3 |
79,127,647 | 2024-10-26 | https://stackoverflow.com/questions/79127647/tensorflow-docker-not-using-gpu | I'm trying to get Tensorflow working on my Ubuntu 24.04.1 with a GPU. According to this page: Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver So I'm trying to use Docker. I'm checking to ensure my GPU is working with Docker by running docker run --gpus all ... | I don't think you're doing anything wrong, but I'm concerned that the image may be a "pip install" short of a complete image. I'm running a different flavor of linux, but to start off with I had to make sure I had my gpu available to docker (see here Add nvidia runtime to docker runtimes ) and I upgraded my cuda versio... | 2 | 1 |
79,126,303 | 2024-10-25 | https://stackoverflow.com/questions/79126303/how-to-efficiently-multiply-all-non-diagonal-elements-by-a-constant-in-a-pandas | I have a square cost matrix stored as a pandas DataFrame. Rows and columns represent positions [i, j], and I want to multiply all off-diagonal elements (where i != j) by a constant c, without using any for loops for performance reasons. Is there an efficient way to achieve this in pandas or do I have to switch to numpy... | Build a boolean mask with numpy.identity and update the underlying array in place: cost_matrix.values[np.identity(n=len(cost_matrix))==0] *= c output: 0 1 2 0 1 8 12 1 16 5 24 2 28 32 9 Intermediate: np.identity(n=len(cost_matrix))==0 array([[False, True, True], [ True, False, True], [ True, True, False]]) NB. for ... | 4 | 7 |
79,126,042 | 2024-10-25 | https://stackoverflow.com/questions/79126042/how-to-efficiently-remove-overlapping-circles-from-the-dataset | I have a dataset of about 20,000 records that represent global cities with population > 20,000. I have an estimated radius which more or less describes the size of each city. It's not exactly accurate, but for my purposes it will be enough. I'm loading it into my pandas DataFrame object. Below is the sample name_city,coun... | Once you have the circles as polygons, determining intersections between polygons is very fast if you use a spatial index to do so. So, you can: buffer the points to circles. In WGS84 this would be imprecise, so the buffering needs to be done via a sidestep to an equidistant projection. calculate which circles inter... | 3 | 4 |
79,127,523 | 2024-10-25 | https://stackoverflow.com/questions/79127523/singleton-different-behavior-when-using-class-and-dict-to-store-the-instance | Why do these two base classes result in the child objects having different behavior? class Base: _instance: "Base" = None def __new__(cls) -> "Base": if cls._instance is None: cls._instance = super().__new__(cls) return cls._instance class A(Base): def foo(self): return "foo" class B(Base): def quz(self): return "quz" ... | When a = A() is executed, the __new__ method is called with the class A as its argument. This sets the value of the class attribute A._instance. Likewise, b = B() sets the value of B._instance. In the first case, the original value of Base._instance, A.instance and B._instance is None, which is a non-mutable object, so... | 3 | 3 |
79,126,854 | 2024-10-25 | https://stackoverflow.com/questions/79126854/how-to-hint-argument-to-a-function-as-dictionary-with-parent-class-in-python | I would like to hint a function as a mapping between instances of a class or its children and a value. takes_mapping below is an example. However, I am getting a static typing error when I use the following: from collections.abc import Mapping class Parent: pass class Child(Parent): pass assert issubclass(Child, Parent... | The problem is that if the function is typed as taking a Mapping[Parent, ...] the body of the function is expected to try to access that dict with Parent keys, and that's likely not going to work if you pass in a dict[Child, ...]. (It could work depending how you implement __hash__, and you can make an argument for why... | 2 | 2 |
79,123,288 | 2024-10-24 | https://stackoverflow.com/questions/79123288/how-can-i-reliably-get-the-module-and-class-of-the-current-class-method-in-pytho | I've encountered situations where standard methods like __module__ and __class__ become unreliable due to inheritance hierarchies or metaclass-based class creation. I need a robust approach that can accurately identify the module and class, regardless of the complexity of the class structure. Here's an example of a pot... | To get the module and class of the defining type, you can look at the qualified name. With this approach, the derived class could be defined in a separate module entirely. import logging class BaseClass: @classmethod def mymethod(cls): _, _, methodname, _ = logging.getLogger().findCaller() func = getattr(cls, methodnam... | 3 | 5 |
79,126,618 | 2024-10-25 | https://stackoverflow.com/questions/79126618/formatting-nested-square-roots-in-sympy | I'm working with sympy to obtain symbolic solutions of equation. This is my code: import sympy as sp # Define the symbolic variable x = sp.symbols('x') # Define f(x) f = ((x**2 - 2)**2 - 2)**2 - 2 # Solve the equation f(x) = 0 solutions = sp.solve(f, x) # Filter only the positive solutions positive_solutions = [sol for... | Using your current code (applying reverse lexicographic order) print("The positive solutions of the equation f(x) = 0 are:") for sol in positive_solutions: pretty_print(sol, order='rev-lex') Resulting The positive solutions of the equation f(x) = 0 are: ________________ ╱ ________ ╲╱ 2 - ╲╱ 2 - √2 ________________ ╱ _... | 3 | 1 |
79,127,002 | 2024-10-25 | https://stackoverflow.com/questions/79127002/what-kind-of-sequence-are-range-and-bytearray | In the "Fluent Python" book by Luciano Ramalho (2nd ed.) he defines the concepts of container sequences and flat sequences: A container sequence holds references to the objects it contains, which may be of any type, while a flat sequence stores the value of its contents in its own memory space, not as distinct Python ... | You have it right about bytearray. It references an internal mutable memory space to hold objects of just one simple type (bytes), so it's a flat sequence, by the author's definition. A range object is a bit more tricky though. I'd say it doesn't match either of the criteria given by the author, though it's certainly a... | 3 | 5 |
79,123,379 | 2024-10-24 | https://stackoverflow.com/questions/79123379/read-multiple-excel-files-to-dataframes-using-for-loop-by-reading-the-month-in-e | I have 12 Excel files. Each is based on a month of the year and therefore each filename ends in 'Month YYYY'. For example, the file for March of 2021 ends in 'March 2021.xlsx'. I want to read each Excel file, select certain columns, drop empty rows, then merge each file into one excel file as a named worksheet. However... | I think your question is similar to this: Extract month, day and year from date regex. An advanced way to do this would be using regex, which is laid out a little in that prior post. A simpler way would be to split (or rsplit) the filename on (' '), assuming that there is a space in front of the month as well as after... | 2 | 1 |
79,123,556 | 2024-10-24 | https://stackoverflow.com/questions/79123556/seeking-clarification-on-the-sql-alchemy-connection-pool-status | I am running a python (v3.9.16) application from a main thread while a separate worker thread runs an asyncio loop that makes SQL queries to a database (using aioodbc v0.5.0). Currently there are 4 asyncio tasks running in the worker thread. With the create_async_engine command, I have configured the connection pool si... | First, the easy ones: Pool size: it indicates the maximum number of connections that can be made in the pool without going into overflow. Connections in pool: it indicates the number of idle connections (available for new tasks) in the pool. A connection returns to the pool once the task using it is completed. Cur... | 2 | 0 |
79,121,678 | 2024-10-24 | https://stackoverflow.com/questions/79121678/how-to-run-computations-on-other-rows-efficiently | I am working with a Polars DataFrame and need to perform computations on each row using values from other rows. Currently, I am using the map_elements method, but it is not efficient. In the following example, I add two new columns to a DataFrame: sum_lower: The sum of all elements that are smaller than the current el... | For your specific use case you don't really need join, you can calculate values with window functions. pl.Expr.shift() to exclude current row. pl.Expr.cum_sum() to calculate sum of all elements up to the current row. pl.Expr.max() to calculate max. pl.Expr.bottom_k() to calculate 2 largest elements so then we can take... | 8 | 4 |
79,125,764 | 2024-10-25 | https://stackoverflow.com/questions/79125764/find-intersection-of-columns-from-different-polars-dataframes | I have a variable number of pl.DataFrames which share some columns (e.g. symbol and date). Each pl.DataFrame has a number of additional columns, which are not important for the actual task. The symbol columns do have exactly the same content (the different str values exist in every dataframe). The date columns are some... | You can use pl.DataFrame.join() with how="semi" parameter: semi Returns rows from the left table that have a match in the right table. on = ["symbol","date"] df1.join(df2, on=on, how="semi").join(df3, on=on, how="semi") df2.join(df1, on=on, how="semi").join(df3, on=on, how="semi") df3.join(df1, on=on, how="semi").joi... | 2 | 1 |
79,125,266 | 2024-10-25 | https://stackoverflow.com/questions/79125266/pandas-html-generation-reproducible-output | I am writing a Pandas dataframe as HTML using this code import pandas as pd df = pd.DataFrame({ "a": [1] }) print(df.style.to_html()) I ran it once and it produced this output <style type="text/css"> </style> <table id="T_f9297"> <thead> <tr> <th class="blank level0" > </th> <th id="T_f9297_level0_col0" class="co... | pandas.io.formats.style.Styler.to_html has a table_uuid parameter: table_uuid str, optional: Id attribute assigned to the <table> HTML element in the format: <table id="T_<table_uuid>" ..> If not provided it generates a random uuid. If set it will use the uuid provided: print(df.style.to_html(table_uuid="my_table_id"... | 2 | 1 |
79,118,378 | 2024-10-23 | https://stackoverflow.com/questions/79118378/how-to-save-and-load-spacy-encodings-in-a-polars-dataframe | I want to use Spacy to generate embeddings of text stored in a polars DataFrame and store the results in the same DataFrame. Next, I want to save this DataFrame to the disk and be able to load it again as a polars DataFrame. The backtransformation from pandas to polars results in an error. This is the error message: Arrow... | SpaCy doc objects can be serialized and deserialized for storage within a polars DataFrame by using spaCy's native DocBin class. The following code generates doc objects, saves them locally, and successfully loads them afterwards. from io import StringIO from spacy.tokens import DocBin import polars as pl import spacy nlp = sp... | 4 | 1 |
79,119,492 | 2024-10-23 | https://stackoverflow.com/questions/79119492/bar-chart-with-multiple-bars-using-xoffset-when-the-x-axis-is-temporal | Here's a small example: import altair as alt import polars as pl source = pl.DataFrame( { "Category": list("AAABBBCCC"), "Value": [0.1, 0.6, 0.9, 0.7, 0.2, 1.1, 0.6, 0.1, 0.2], "Date": [f"2024-{m+1}-1" for m in range(3)] * 3, } ).with_columns(pl.col("Date").str.to_date()) bars = alt.Chart(source).mark_bar().encode( x=a... | If you use the ordinal or nominal data type, you can supply a timeUnit to get date formatting. There are many options depending on what kind of data you are working with. import altair as alt import polars as pl source = pl.DataFrame( { "Category": list("AAABBBCCC"), "Value": [0.1, 0.6, 0.9, 0.7, 0.2, 1.1, 0.6, 0.1, 0.... | 3 | 3 |
79,123,305 | 2024-10-24 | https://stackoverflow.com/questions/79123305/numpy-array-does-not-correctly-update-in-gaussian-elimination-program | I am trying to write a function gaussian_elim which takes in an n x n numpy array A and an n x 1 numpy array b and performs Gaussian elimination on the augmented matrix [A|b]. It should return an n x (n+1) matrix of the form M = [U|c], where U is an n x n upper triangular matrix. However, when I test my code on a simpl... | Because you use a NumPy array of integers, all your results will be rounded off to integers. You need to define A and b as A = np.array([[3,-2],[1,5]], dtype=np.float64) b = np.array([1,1], dtype=np.float64) Doing so allows for float values in your matrices. | 3 | 4 |
79,122,612 | 2024-10-24 | https://stackoverflow.com/questions/79122612/unable-to-use-list-for-positional-arguements | So I am working on a small project and I can't for the life of me figure out why this doesn't work.... I am using a list for positional arguments, yet it reports that parameters are missing. I know it's probably something basic but I can't seem to figure it out. If I just write out the list directly in the func... | You need to unpack the list to pass the arguments: class Tester: def __init__(self, first: int, second: int, third: int) -> None: self.first = first self.second = second self.third = third def __str__(self) -> str: return f"Tester(first={self.first}, second={self.second}, third={self.third})" contestants = [54, 56, 32]... | 2 | 3 |
79,122,259 | 2024-10-24 | https://stackoverflow.com/questions/79122259/python-flask-app-doesnt-serve-images-used-in-react-app | I have a React project that is hosted in a Flask app. These are the project folders: /simzol/ └── backend/ ├── app.py └── build/ ├── index.html ├── images/ ├── static/ │ ├── css/ │ └── js/ └── other_files... P.S. The build folder is generated by React using the command "yarn build". Here's the Flask app initializer: app ... | I wanted the images to load up in both the Flask app in production and in the React app (using npm start) for development. At the same time, I wanted unknown pages to be handled by React. The most convenient solution I found is to put all the images under public/static/images instead of public/images and to change the src path... | 3 | 0 |
79,119,480 | 2024-10-23 | https://stackoverflow.com/questions/79119480/conda-cannot-find-a-specific-package-dedalus-in-conda-forge-that-is-explicitly | The package in question is the Dedalus spectral CFD library, here. For posterity I'll also link the project homepage. When I run conda search --channel conda-forge dedalus or variations with options I cannot find the dedalus package. This isn't a general issue with conda because I can find all default channels packages... | It's not available on win64; check the OS tags, which only include linux-64, osx-64, osx-arm64 | 2 | 1 |
79,117,805 | 2024-10-23 | https://stackoverflow.com/questions/79117805/silence-mypy-arg-type-error-when-using-stategy-pattern | Minimal example: from typing import overload, TypeVar, Generic class EventV1: pass class EventV2: pass class DataGathererV1: def process(self, event: EventV1): pass def process2(self, event: EventV1): pass class DataGathererV2: def process(self, event: EventV2): pass def process2(self, event: EventV2): pass class Dispa... | The issue here is that I think there are two different types being conflated. Consider the protocol: T = TypeVar("T", contravariant=True) class DataGatherer(Generic[T], Protocol): def process(self, event: T): pass def process2(self, event: T): pass We can say DataGathererV1 is a subtype of DataGatherer[EventV1], and D... | 2 | 1 |
79,120,587 | 2024-10-24 | https://stackoverflow.com/questions/79120587/optimization-of-pyspark-code-to-do-comparisons-of-rows | I want to iteratively compare 2 sets of rows in a PySpark dataframe, and find the common values in another column. For example, I have the dataframe (df) below. Column1 Column2 abc 111 def 666 def 111 tyu 777 abc 777 def 222 tyu 333 ewq 888 The output I want is abc,def,CommonRow <-- because of 111 abc,ewq,NoCommonRow ... | You can do this without loops using self join as follows: from pyspark.sql import SparkSession from pyspark.sql import functions as F spark = SparkSession.builder.getOrCreate() data = [("abc", 111), ("def", 666), ("def", 111), ("tyu", 777), ("abc", 777), ("def", 222), ("tyu", 333), ("ewq", 888)] df = spark.createDataFr... | 2 | 1 |
79,120,520 | 2024-10-24 | https://stackoverflow.com/questions/79120520/fastest-way-to-combine-image-patches-given-as-4d-array-in-python | Given a 4D array of size (N,W,H,3), where N is the number of patches, W,H are the width and height of an image patch and 3 is the number of color channels. Assume that these patches were generated by taking and original image I and dividing it up into small squares. The order by which this division happen is row by row... | Here's a fast pure numpy 1-liner way to do it: def reconstruct_image_2(): return patches.reshape(num_rows, num_cols, W, H, C).swapaxes(1, 2).reshape(num_rows*W, num_cols*H, C) reconstructed_image_2 = reconstruct_image_2() assert np.all(reconstructed_image == reconstructed_image_2) # True Explanation: First reshape res... | 4 | 3 |
79,119,610 | 2024-10-23 | https://stackoverflow.com/questions/79119610/getting-the-module-and-class-of-currently-executing-classmethod-in-python | For code that exists in a module named some.module and looks like this: class MyClass: @classmethod def method(cls): # do something pass I'd like to know in the block marked as "do something" what the module name, the class name, and the method name are. In the above example, that would be: module some.module class M... | These are the most direct ways: import inspect class MyClass: @classmethod def method(cls): print("module:", __name__) print("class:", cls.__name__) print("method:", inspect.currentframe().f_code.co_name) Doing the same from a utility function requires traversing "back" (up) in the call stack to find the calling frame... | 2 | 3 |
79,119,637 | 2024-10-23 | https://stackoverflow.com/questions/79119637/what-is-the-equivalent-of-np-polyval-in-the-new-np-polynomial-api | I can't find a direct answer in NumPy documentation. This snippet will populate y with polynomial p values on domain x: p = [1, 2, 3] x = np.linspace(0, 1, 10) y = [np.polyval(p, i) for i in x] What is the new API equivalent when p = Polynomial(p)? | You can simply evaluate values with p(x). Documentation can be found on "Using the convenience classes" under "Evaluation": p = [1, 2, 3] p = np.polynomial.Polynomial(p) x = np.linspace(0, 1, 10) y = p(x) Note: Coefficients are in reverse order compared to legacy API i.e. coefficients go from lowest order to highest o... | 2 | 3 |
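The legacy/new API equivalence from that row can be checked directly, including the coefficient-order reversal the answer notes:

```python
import numpy as np

# Legacy API: np.polyval takes the highest-order coefficient first.
p_legacy = [1, 2, 3]            # x**2 + 2*x + 3
x = np.linspace(0, 1, 10)
y_legacy = np.polyval(p_legacy, x)

# New API: Polynomial takes coefficients lowest order first, so reverse them,
# then evaluate by simply calling the instance.
p_new = np.polynomial.Polynomial(p_legacy[::-1])
y_new = p_new(x)
```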
79,118,016 | 2024-10-23 | https://stackoverflow.com/questions/79118016/how-to-preserve-data-types-when-working-with-pandas-and-sklearn-transformers | While working with a large sklearn Pipeline (fit using a DataFrame) I ran into an error that lead back to a wrong data type of my input. The problem occurred on an a single observation coming from an API that is supposed to interface the model in production. Missing information in a single line makes pandas (obviously)... | The issue you are running into is how pandas handles None in a column. If the column has other float or integer values, the None is coerced to a numpy.nan which is an instance of float. This coercion maintains the column's type as a numeric column. However, if no other values are present in the column, just None values... | 3 | 3 |
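The coercion behaviour that answer describes is easy to demonstrate; the explicit `astype` at the end is one hedged workaround for keeping a stable dtype, not necessarily the fix the answer goes on to recommend:

```python
import numpy as np
import pandas as pd

# A column mixing None with numbers: None is coerced to NaN, keeping a float dtype.
mixed = pd.DataFrame({"a": [1.5, None]})

# A column containing only None stays object-typed, which is what can trip up
# sklearn transformers downstream.
only_none = pd.DataFrame({"a": [None]})

# One workaround: cast explicitly so the dtype is stable either way.
fixed = only_none.astype("float64")
```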
79,117,673 | 2024-10-23 | https://stackoverflow.com/questions/79117673/can-i-reuse-output-field-instance-in-django-orm-or-i-should-always-create-a-dupl | I have a Django codebase that does a lot of Case/When/ExpressionWrapper/Coalesce/Cast ORM functions and some of them sometimes need a field as an argument - output_field. from django.db.models import FloatField, F some_param1=Sum(F('one_value')*F('second_value'), output_field=FloatField()) some_param2=Sum(F('one_value'... | You can reuse the field. Using this output_field=… [Django-doc] serves two purposes: the type sometimes requires specific formatting, typically for GIS columns, since a point, polygon, etc. needs to be converted to text so that Django can understand it; and to know what lookups transformations, etc. can be applied on ... | 2 | 1 |
79,117,121 | 2024-10-23 | https://stackoverflow.com/questions/79117121/polars-get-all-possible-categories-as-physical-representation | Given a DataFrame with categorical column: import polars as pl df = pl.DataFrame({ "id": ["a", "a", "a", "b", "b", "b", "b"], "value": [1,1,1,6,6,6,6], }) res = df.with_columns(bucket = pl.col.value.cut([1,3])) shape: (7, 3) ┌─────┬───────┬───────────┐ │ id ┆ value ┆ bucket │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ cat │ ╞═... | I don't see any direct way. However, you could combine pl.Expr.cat.get_categories and pl.Expr.to_physical as follows. res.select( pl.col("bucket").cat.get_categories().cast(res.schema["bucket"]).to_physical() ) shape: (3, 1) ┌────────┐ │ bucket │ │ --- │ │ u32 │ ╞════════╡ │ 0 │ │ 1 │ │ 2 │ └────────┘ Here, it would ... | 2 | 2 |
79,112,091 | 2024-10-22 | https://stackoverflow.com/questions/79112091/how-to-highlight-values-per-column-in-polars | I have a Polars DataFrame, and I want to highlight the top 3 values for each column using the style and loc features in Polars. I can achieve this for individual columns, but my current approach involves a lot of repetition, which is not scalable to many variables. import polars as pl import polars.selectors as cs from... | Update. Since the writing of my original answer, a mask parameter was added to great_tables.loc.body providing a native solution to the problem. import polars as pl import polars.selectors as cs from great_tables import loc, style df = pl.DataFrame({ "id": [1, 2, 3, 4, 5], "variable1": [15, 25, 5, 10, 20], "variable2":... | 3 | 4 |
79,114,550 | 2024-10-22 | https://stackoverflow.com/questions/79114550/is-mydict-getx-x-eqivalent-to-mydict-getx-or-x | When using a dictionary to occasionally replace values, are .get(x, x) and .get(x) or x equivalent? For example: def current_brand(brand_name): rebrands = { "Marathon": "Snickers", "Opal Fruits": "Starburst", "Jif": "Cif", "Thomson": "TUI", } return rebrands.get(brand_name, brand_name) # or return rebrands.get(brand_na... | In answer to your original question: or will short-circuit if your value is Falsey, so there are plenty of values where the two statements will behave differently. my_dict = { 'foo': 0, 'bar': "", 'baz': [], } x = 'foo' print(repr(my_dict.get(x, x))) print(repr(my_dict.get(x) or x)) In answer to your edit, if you can ... | 2 | 5 |
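The short-circuit difference described in that answer can be shown in a few lines:

```python
rebrands = {"Marathon": "Snickers", "Jif": "Cif"}

# For keys whose mapped value is truthy, both spellings agree.
assert rebrands.get("Marathon", "Marathon") == (rebrands.get("Marathon") or "Marathon")

# But `or` short-circuits on falsey values, so the two differ when a key maps
# to something like 0, "" or [].
falsey = {"foo": 0}
via_default = falsey.get("foo", "foo")   # the stored value, 0
via_or = falsey.get("foo") or "foo"      # falsey 0 is replaced by "foo"
```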
79,095,041 | 2024-10-16 | https://stackoverflow.com/questions/79095041/detectron2-installation-no-module-named-torch | I am trying to install detectron2 on ubuntu and face a weird python dependency problem. In short - pytorch is installed (with pip), torchvision is installed (with pip), but when I run pip install 'git+https://github.com/facebookresearch/detectron2.git' I get error ModuleNotFoundError: No module named 'torch' as for de... | This is probably due to the isolation mechanism of the pip building process. Basically, the installation requires torch to be installed to work, but recent versions of pip use some isolation that does not allow the build process to access installed packages. You can disable that isolation by using this command: $ pip i... | 3 | 13 |
79,099,281 | 2024-10-17 | https://stackoverflow.com/questions/79099281/firebase-firestore-client-cannot-be-deployed | I have an Firebase Cloud Functions codebase that uses Firestore database. Everything below works when I use it in local emulator using firebase emulators:start but when I need to deploy it to Firebase I got below error: Error: User code failed to load. Cannot determine backend specification main.py import json from fir... | Figured out the below answer in the Firebase repo fixes the issue: I've discovered if I move the db = firestore.client() into my cloud function, I'm able to deploy. https://github.com/firebase/firebase-functions-python/issues/126#issuecomment-1682542027 It's really weird that Google does not update the docs or fixes ... | 4 | 0 |
79,102,534 | 2024-10-18 | https://stackoverflow.com/questions/79102534/save-to-disk-training-dataset-and-validation-dataset-separately-in-pytorch | I want to save train dataset, test dataset, and validation dataset in 3 separate folders. Doing this for training and testing is easy # Get training and testing data all_training_data = getattr(datasets, config['Pytorch_Dataset']['dataset'])( root= path_to_data + "/data_all_train_" + config['Pytorch_Dataset']['dataset'... | You don't have to save the split results on separate folders to maintain reproducibility, which is what I am assuming you really care for. You could instead fix the seed before calling split like this: torch.manual_seed(42) data_train, data_val = torch.utils.data.random_split(data_all_train, (0.7, 0.3)) Then you get t... | 2 | 2 |
79,112,040 | 2024-10-21 | https://stackoverflow.com/questions/79112040/dealing-with-interlacing-lock-in-python3 | I am trying to implement the following logic in Python3: # Clarification: # function f() is the only function that would acquire both locks # It is protected by other locks so f() itself has no concurrency. # It always acquires lock1 first and then acquire lock2 inside lock1 # In other words, NO thread will own lock2 a... | You can interleave context managers by using contextlib.ExitStack, with a "stack" of just one context manager, because it lets you exit it early with the close() method: with ExitStack() as es: es.enter_context(lock_a) protected_by_a() with lock_b: protected_by_a_and_b() es.close() protected_by_b() You can even push m... | 3 | 2 |
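The interleaved-lock pattern from that answer can be sketched single-threaded, using `Lock.locked()` to record which locks are held at each step; `es.close()` releases the first lock early while the second stays held:

```python
import threading
from contextlib import ExitStack

lock_a = threading.Lock()
lock_b = threading.Lock()
events = []

with ExitStack() as es:
    es.enter_context(lock_a)                 # acquire lock_a via the stack
    events.append(("a_held", lock_a.locked()))
    with lock_b:                             # acquire lock_b inside lock_a
        events.append(("both_held", lock_a.locked() and lock_b.locked()))
        es.close()                           # releases lock_a here, before lock_b
        events.append(("only_b", not lock_a.locked() and lock_b.locked()))
```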
79,115,545 | 2024-10-22 | https://stackoverflow.com/questions/79115545/unexpected-output-in-python-repl-using-vs-code-on-windows | I have VS Code set up with Python extension, but without Jupyter extension. VS Code lets you send Python code from the editor to what it calls the "Native REPL" (a Jupyter-type interface without full notebook capabilities) or the "Terminal REPL" (the good old Python >>> prompt). See https://code.visualstudio.com/docs/p... | Roll back your python extension to 14.0 and disable auto-update will fix the issue. Reference: https://github.com/microsoft/vscode-python/issues/24251 | 2 | 0 |
79,114,440 | 2024-10-22 | https://stackoverflow.com/questions/79114440/ttk-frames-not-filling-properly | I am making a python application that uses 4 ttk Frames within its main window. The first two frames should expand both vertically and horizontally to fill available space. Frames 3 and 4 should only expand horizontally and be a fixed height vertically. This is my code so far (minimum working example): import tkinter a... | acw1668 gave the answer in a comment: Remove expand=True for frame 3 and 4. That did the trick. | 1 | 2 |
79,115,254 | 2024-10-22 | https://stackoverflow.com/questions/79115254/raise-exception-in-map-elements | Update: This was fixed by pull/20417 in Polars 1.18.0 I'm using .map_elements to apply a complex Python function to every element of a polars series. This is a toy example: import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) def sum_cols(row): return row["A"] + row["B"] df.with_columns( pl.struct(p... | I'm pretty sure this is a bug in Polars. https://github.com/pola-rs/polars/issues/19315 https://github.com/pola-rs/polars/issues/14821 As a workaround, you could use .map_batches() to pass the whole "column" instead: import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}) def sum_cols(col): raise Exce... | 2 | 0 |
79,115,080 | 2024-10-22 | https://stackoverflow.com/questions/79115080/how-to-use-ruff-as-fixer-in-vim-with-ale | I'm using ale in vim, and I want to add ruff as fixer for python. So, in .vimrc, I added: let g:ale_fixers = { \ 'python': ['ruff'], \ 'javascript': ['eslint'], \ 'typescript': ['eslint', 'tsserver', 'typecheck'], \} Then, when executing in vim the command ALEFix, I've go this error: 117: Unknown function: ale#fixers... | https://docs.astral.sh/ruff/editors/setup/#vim , open "With the ALE plugin for Vim or Neovim." The docs shows " Linter let g:ale_linters = { "python": ["ruff"] } " Formatter let g:ale_fixers = { "python": ["ruff_format"] } | 2 | -1 |
79,115,170 | 2024-10-22 | https://stackoverflow.com/questions/79115170/operation-on-all-columns-of-a-type-in-modern-polars | I have a piece of code that works in Polars 0.20.19, but I don't know how to make it work in Polars 1.10. The working code (in Polars 0.20.19) is very similar to the following: def format_all_string_fields_polars() -> pl.Expr: return ( pl.when( (pl.col(pl.Utf8).str.strip().str.lengths() == 0) | # ERROR ON THIS LINE (pl... | pl.Utf8 was renamed to pl.String .str.strip() was renamed to .str.strip_chars() .str.lengths() was split into .str.len_chars() and .str.len_bytes() .keep_name() was renamed to .name.keep() def format_all_string_fields_polars() -> pl.Expr: return ( pl.when( (pl.col(pl.String).str.strip_chars().str.len_chars() == 0) | ... | 2 | 2 |
79,111,779 | 2024-10-21 | https://stackoverflow.com/questions/79111779/how-do-i-iterate-through-table-rows-in-python | How would I loop through HTML Table Rows in Python? Just to let y'all know, I am working on the website: https://schools.texastribune.org/districts/. What I'm trying to do is click each link in the table body (?) and extract the total number of students: What I have so far: response = requests.get("https://schools.te... | You should not have class_="td" when finding the <td> elements, they don't have any class. There's no <ul> elements in the table, so view = match.find('ul',class_="tr") won't find anything. You need to find the <a> element, gets its href, and load that to get the total students. d = {} for match in soup.find_all('td'):... | 2 | 2 |
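The step that answer describes (find each `<a>` inside a `<td>` and read its `href`) can be sketched with the standard library's `html.parser` instead of BeautifulSoup; the HTML below is a made-up stand-in for the districts table, whose real markup may differ:

```python
from html.parser import HTMLParser

# Hypothetical snippet standing in for the districts table at the question's URL.
html = """
<table><tbody>
  <tr><td><a href="/districts/austin/">Austin ISD</a></td></tr>
  <tr><td><a href="/districts/houston/">Houston ISD</a></td></tr>
</tbody></table>
"""

class LinkCollector(HTMLParser):
    """Collect the href of every <a> found inside a <td>."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
        elif tag == "a" and self.in_td:
            self.hrefs.append(dict(attrs).get("href"))

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False

collector = LinkCollector()
collector.feed(html)
```

Each collected href could then be joined with the base URL and fetched to read the student count, as the answer outlines.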