Dataset schema (observed value ranges / string lengths):

question_id: int64 (59.5M to 79.7M)
creation_date: string (date, 2020-01-01 00:00:00 to 2025-07-15 00:00:00)
link: string (length 60 to 163)
question: string (length 53 to 28.9k)
accepted_answer: string (length 26 to 29.3k)
question_vote: int64 (1 to 410)
answer_vote: int64 (-9 to 482)
79,114,445
2024-10-22
https://stackoverflow.com/questions/79114445/filter-the-earliest-and-latest-date-in-each-month
Given a dataframe like the one below, how do I filter for the earliest and latest date in each month? Note the actual data runs to tens of thousands of rows. Input: Date Deg 02/01/1990 1210.92 13/01/1990 1226.83 14/01/1990 1224.52 15/01/1990 1220.77 08/02/1990 1164.32 09/02/1990 1156.72 12/02/1990 114...
Your data looks sorted. Try this:

df["year"] = df["Date"].dt.year
df["month"] = df["Date"].dt.month
out = pd.concat([
    df.groupby(["month", "year"]).last(),
    df.groupby(["month", "year"]).first(),
]).reset_index(drop=True).sort_values(by="Date")
1
2
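A runnable sketch of the answer's approach on toy data invented here (column names Date and Deg taken from the question, which states the data is sorted by date):

```python
import pandas as pd

# Toy frame in the question's shape, already sorted by date
df = pd.DataFrame({
    "Date": pd.to_datetime(["1990-01-02", "1990-01-15", "1990-02-08", "1990-02-12"]),
    "Deg": [1210.92, 1220.77, 1164.32, 1156.72],
})

# Group by (year, month); on sorted data first()/last() are the
# earliest/latest row of each month
g = df.groupby([df["Date"].dt.year.rename("year"),
                df["Date"].dt.month.rename("month")])
out = (pd.concat([g.first(), g.last()])
         .sort_values("Date")
         .reset_index(drop=True))
print(out)
```

Each month contributes exactly two rows (they coincide for single-row months), which matches the question's "earliest and latest date in each month" requirement.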
79,114,033
2024-10-22
https://stackoverflow.com/questions/79114033/whats-the-advantage-of-newtype-over-typealias
Consider the following example: UserId = NewType('UserId', int) ProductId = NewType('ProductId', int) But I can also do, the following: UserId: TypeAlias = int ProductId: TypeAlias = int So why should I use NewType over TypeAlias or vice versa? Are they both interchangeable?
Aliases don't distinguish between each other. New types do. Consider this example: Meter = NewType('Meter', float) Gram = NewType('Gram', float) MeterSquared = NewType('MeterSquared', float) def area(length: Meter, width: Meter) -> MeterSquared: return MeterSquared(length * width) Now area(3, 5) won't type check, nor ...
1
2
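The distinction in the answer is purely static, but the mechanics can still be shown in a runnable sketch (names Meter/MeterSquared/area come from the answer):

```python
from typing import NewType

Meter = NewType("Meter", float)
MeterSquared = NewType("MeterSquared", float)

def area(length: Meter, width: Meter) -> MeterSquared:
    return MeterSquared(length * width)

# At runtime a NewType is just an identity function, so values behave
# like plain floats; only a static type checker rejects area(3, 5).
a = area(Meter(3.0), Meter(5.0))
print(a)  # 15.0
```

Running this under mypy with `area(3, 5)` would produce the type error the answer describes; a TypeAlias version would not.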
79,106,088
2024-10-19
https://stackoverflow.com/questions/79106088/correct-python-dbus-connection-syntax
I'm having trouble getting dbus to connect: try: logging.debug("Attempting to connect to D-Bus.") self.bus = SessionBus() self.keepass_service = self.bus.get("org.keepassxc.KeePassXC.MainWindow", "/org/keepassxc/KeePassXC/MainWindow") # self.keepass_service = self.bus.get("org.keepassxc.KeePassXC", "/org/keepassxc/Kee...
You are assuming that the object path always follows the naming of the service itself. That's not always the case – a service can export many different object paths, and does not strictly need to follow any naming style (i.e. there isn't an enforced rule that all object paths start with the service name, much less that...
2
1
79,112,299
2024-10-22
https://stackoverflow.com/questions/79112299/how-to-change-an-element-in-one-array-based-on-conditions-at-the-same-index-elem
I have two arrays containing 60 0's or 1's. One is defined as result and the other is defined as daysinfected. The goal is to look at each element of result and set that element to -1 IF it is > 0 AND IF the corresponding element in the daysinfected element is 0. By printing result for debugging after this code it is o...
Your outer loop is redundant here; just this is sufficient:

for i in range(len(daysinfected)):
    if result[i] > 0 and daysinfected[i] == 0:
        i in result == -1

Now, in the 3rd line, i in result == -1 is an expression, not an assignment. It's interpreted like so: (i in result) == -1, where i in result is checking if i ...
1
2
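With the expression replaced by the intended assignment, the loop works; a minimal sketch with made-up values (list names from the question):

```python
# Illustrative values; 0/1-style lists as described in the question
result = [2, 0, 3, 1]
daysinfected = [0, 1, 0, 0]

for i in range(len(daysinfected)):
    if result[i] > 0 and daysinfected[i] == 0:
        result[i] = -1   # assignment, not the expression `i in result == -1`

print(result)  # [-1, 0, -1, -1]
```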
79,111,951
2024-10-21
https://stackoverflow.com/questions/79111951/python-protocol-using-keyword-only-arguments-requires-implementation-to-have-dif
I'm on python 3.10. I'm using PyCharm's default type checker and MyPy. Here is the protocol I defined: class OnSubscribeFunc(Protocol): def __call__(self, instrument: str, *, x: int) -> AsyncGenerator: ... When I create a method that implements it like this: class A: async def subscribe(self, instrument: str, *, x: int)...
This is a known bug of PyCharm. Mypy and Pyright both accept your code as it is. Put a # type: ignore or # noqa there and move on.
2
3
79,111,615
2024-10-21
https://stackoverflow.com/questions/79111615/i-am-trying-to-do-a-board-outline-with-and-but-i-am-getting-unexpected-o
I mentioned the condition to print at particular places but instead of printing at those locations code just appends "+" to "-" even with range condition it exceeds the limit. I wanted to print "+" at 0, 8, 16, 24 index locations and print "-" in between them. def display(board): for row in range(25): print("-", end = ...
You've just got to use an if/else to only print 1 char per iteration of your loop. Without the else, you are printing twice on each multiple of 8.

def display(board):
    for row in range(25):
        if row == 0 or row == 8 or row == 16 or row == 24:
            print("+", end = "")
        else:
            print("-", end = "")

You could also simplify the code...
1
4
79,110,939
2024-10-21
https://stackoverflow.com/questions/79110939/break-up-a-sparse-2d-array-or-table-into-multiple-subarrays-or-subtables
I want to find a way to "lasso around" a bunch of contiguous/touching values in a sparse table, and output a set of new tables. If any values are "touching", they should be part of a subarray together. For example: if I have the following sparse table/array: [[0 0 0 1 1 0 0 0 1 1 1 1 0 0 0 0 0 0 0] [0 0 0 1 1 0 0 0 1 1...
A possible solution, which is based on the following ideas: First, measure.label assigns a unique label to each connected component in the array based on an 8-connectivity criterion (connectivity=2). Second, measure.regionprops retrieves properties of these labeled regions, such as their bounding boxes. Then, the code ...
3
3
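The label-then-crop idea can also be sketched without scikit-image, as a pure-Python stand-in (my own flood-fill variant, not the answer's code): find each 8-connected component of non-zero cells and cut out its bounding box.

```python
from collections import deque

def components(grid):
    """Return the bounding-box subarray of each 8-connected component."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not seen[r][c]:
                # BFS flood fill over the 8 neighbours of each cell
                q = deque([(r, c)])
                seen[r][c] = True
                rmin = rmax = r
                cmin = cmax = c
                while q:
                    y, x = q.popleft()
                    rmin, rmax = min(rmin, y), max(rmax, y)
                    cmin, cmax = min(cmin, x), max(cmax, x)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < rows and 0 <= nx < cols \
                                    and grid[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                # crop the subarray covered by this component's bounding box
                boxes.append([row[cmin:cmax + 1] for row in grid[rmin:rmax + 1]])
    return boxes

grid = [[0, 1, 1, 0, 0],
        [0, 1, 0, 0, 1],
        [0, 0, 0, 0, 1]]
subs = components(grid)
print(len(subs))  # 2
```

For large sparse arrays the scikit-image route in the answer (measure.label with connectivity=2 plus measure.regionprops) will be considerably faster.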
79,110,878
2024-10-21
https://stackoverflow.com/questions/79110878/i-want-to-match-6-or-fewer-digits-in-a-string-if-there-are-or-between-t
It should match: "abc 12-34 def" precisely "12-34" "Phone number: 123/45", precisely "123/45" "sequence: 12//-34", precisely "12//-34" "My code is 1-2-3-4", precisely "1-2-3-4" It should not match: "too many: 1234-567-89" "too many; 1234-567" Here is what I have tried: pattern = r'\d([\/-]\d){1,5}' but didn't su...
In your pattern, this part \d([\/-]\d expects a match for either / or - You might use a single digit, and repeat 1 - 5 times a digit with zero or more occurrences of - or / in between. On the left you can place a negative lookbehind and on the right a negative lookahead to assert not - / or a digit to mark the boundari...
2
7
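An alternative two-step sketch of the same idea (this is my variant, not the answer's lookaround pattern): grab maximal digit/separator runs bounded by the lookarounds the answer describes, then keep only runs with at most 6 digits and at least one separator.

```python
import re

def find_codes(text):
    # Maximal runs of digits with - or / inside, not touching more
    # digits/separators on either side
    runs = re.findall(r'(?<![\d/-])\d[\d/-]*\d(?![\d/-])', text)
    # Keep runs with <= 6 digits and at least one - or /
    return [r for r in runs
            if sum(ch.isdigit() for ch in r) <= 6 and re.search(r'[/-]', r)]

print(find_codes("abc 12-34 def"))          # ['12-34']
print(find_codes("too many: 1234-567-89"))  # []
```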
79,110,294
2024-10-21
https://stackoverflow.com/questions/79110294/polars-use-value-from-column-as-column-name-in-when-then-expression
In a polars dataframe I have a column that contains the names of other columns (column "id_column_name"). I want to use those names in a when-then expression with pl.col() to create a new column ("id") which gets its values out of these other columns ("id_column1", "id_column2"). Every row can gets its value from anoth...
pl.Series.unique() to get all possible values of id_column_name column. pl.when() to create conditional results. pl.coalesce() to fill the final result with first non-empty value. df.with_columns( id = pl.coalesce( pl.when(pl.col.id_column_name == col).then(pl.col(col)) for col in df.schema if col not in ("id_column_...
2
1
79,110,306
2024-10-21
https://stackoverflow.com/questions/79110306/django-circular-import-models-views-forms
Watched all the previous related topics devoted to this problem, but haven't found a proper solution, so decided to create my own question. I'm creating a forum project (as a part of the site project). Views are made via class-based views: SubForumListView leads to a main forum page, where its main sections ("subforums...
Don't import views in the models. Views should depend on models, never in the opposite way. You can use the name of the path, so: class Subforum(models.Model): # … def get_absolute_url(self): return reverse( 'subforum', kwargs={'name': self.title, 'subforum_slug': self.slug}, ) The same for ShowTopic, and thus remove t...
1
2
79,109,487
2024-10-21
https://stackoverflow.com/questions/79109487/how-to-check-whether-an-sklearn-estimator-is-a-scaler
I'm writing a function that needs to determine whether an object passed to it is an imputer (can check with isinstance(obj, _BaseImputer)), a scaler, or something else. While all imputers have a common base class that identifies them as imputers, scalers do not. I found that all scalers in sklearn.preprocessing._data i...
Unfortunately, the cleanest way to do this is to check each scaler type individually, any other check will potentially let through non-scaler objects as well. Nevertheless, I'll offer some "hack-jobs" too. The most failsafe solution is to import your scalers and then check if your object is any of these scalers or not....
1
4
79,109,524
2024-10-21
https://stackoverflow.com/questions/79109524/dataframe-manipulation-explode-rows-on-new-dataframe-with-repeated-indices
I have two dataframes say df1 and df2, for example import pandas as pd col_1= ["A", ["B","C"], ["A","C","D"], "D"] col_id = [1,2,3,4] col_2 = [1,2,2,3,3,4,4] d1 = {'ID': [1,2,3,4], 'Labels': col_1} d2 = {'ID': col_2, } d_2_get = {'ID': col_2, "Labels": ["A", "B", "C", "A", "C", "D", np.nan] } df1 = pd.DataFrame(data=d1...
IIUC, you could explode and perform a merge after deduplication with groupby.cumcount: out = (df2 .assign(n=df2.groupby('ID').cumcount()) .merge(df1.explode('Labels').assign(n=lambda x: x.groupby('ID').cumcount()), on=['ID', 'n'], how='left' ) #.drop(columns='n') ) Output: ID n Labels 0 1 0 A 1 2 0 B 2 2 1 C 3 3 0 A ...
4
6
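The answer's explode-and-merge approach runs end to end on the question's data like this (toy frames rebuilt from the question):

```python
import pandas as pd

df1 = pd.DataFrame({"ID": [1, 2, 3, 4],
                    "Labels": ["A", ["B", "C"], ["A", "C", "D"], "D"]})
df2 = pd.DataFrame({"ID": [1, 2, 2, 3, 3, 4, 4]})

# Deduplicate repeated IDs with a per-ID counter n, then merge on (ID, n)
out = (df2.assign(n=df2.groupby("ID").cumcount())
          .merge(df1.explode("Labels")
                    .assign(n=lambda x: x.groupby("ID").cumcount()),
                 on=["ID", "n"], how="left")
          .drop(columns="n"))
print(out["Labels"].tolist())  # ['A', 'B', 'C', 'A', 'C', 'D', nan]
```

The final NaN appears because ID 4 occurs twice in df2 but only contributes one label, matching the expected d_2_get in the question.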
79,107,659
2024-10-20
https://stackoverflow.com/questions/79107659/how-to-pass-aggregation-functions-as-function-argument-in-polars
How can we pass aggregation functions as argument to a custom aggregation function in Polars? You should be able to pass a single function for all columns or a dictionary if you have different aggregations by column. import polars as pl # Sample DataFrame df = pl.DataFrame({ "category": ["A", "A", "B", "B", "B"], "valu...
You can pass it as anonymous function with expression as parameter (I simplified your example just to illustrate the point): def agg_with_expr(df, agg_expr): return df.group_by("category").agg(agg_expr(pl.col("*"))) agg_with_expr(df, lambda x: x.sum()) shape: (2, 2) ┌──────────┬───────┐ │ category ┆ value │ │ --- ┆ --...
3
2
79,103,866
2024-10-18
https://stackoverflow.com/questions/79103866/stopping-asyncio-program-using-file-input
What specific code needs to change in the Python 3.12 example below in order for the program myReader.py to be successfully halted every time the line "Stop, damnit!" gets printed into sourceFile.txt by the program myWriter.py? THE PROBLEM: The problem is that myReader.py only sometimes stops when the line "Stop, damni...
Things to know: at least in my testing, this line line = await source_file.readline() will happily return an empty string if it didn't find a new line immediately. you have line = await source_file.readline() twice in your main loop, with the first call's result being thrown out That first call would return a line, a...
2
3
79,102,719
2024-10-18
https://stackoverflow.com/questions/79102719/how-would-i-write-a-function-similiar-to-np-logspace-but-with-a-given-first-inte
I am trying to write a function that returns an array that has in the first section linearly increasing elements and in the second section ever increasing distances between the elements. As inputs, I would like to give the starting value, the final value and the length of the array. This would be solveable with np.logs...
There are several changes in order: You should frame the first segment as a half-open interval, including 0 but excluding the threshold. The second segment should include both the threshold and the endpoint. You should enforce the threshold, endpoint, and a constraint of first-order differential continuity. Because o...
2
0
79,108,089
2024-10-20
https://stackoverflow.com/questions/79108089/how-to-get-item-in-text-based-game
I have been trying to come up with the code to add an item to my inventory in my text based game but so far I haven't been able to figure it out. Here is my current code: rooms = { 'Entrance': {'west': 'Catacombs A', 'north': 'Main Hall'}, 'Catacombs A': {'east': 'Entrance', 'item': 'Lesser Artifact'}, 'Main Hall': {'n...
So I've been experimenting with this and I think I may have found a method that will work. I made a function that takes the current room and your inventory. It searches that room's dictionary to see if it has an item and, if it does, appends it to the inventory list. The code is as follows: rooms = { 'Ent...
2
2
79,108,099
2024-10-20
https://stackoverflow.com/questions/79108099/understanding-what-a-python-3-12-bytecode-does-call-0-after-get-iter
I have this python function and the bytecode it translates to: Text code: x = "-".join(str(z) for z in range(5)) assert x == "0-1-2-3-4" print("Assert test case for generator_expression_in_join") Disassembled code: 0 0 RESUME 0 2 2 LOAD_CONST 0 ('-') 4 LOAD_ATTR 1 (NULL|self + join) 24 LOAD_CONST 1 (<code object <genex...
argc is the total of the positional and named arguments, excluding self when a NULL is not present. Here’s one call: 28 PUSH_NULL 30 LOAD_NAME 1 (range) 32 LOAD_CONST 2 (5) 34 CALL 1 And here’s another: 24 LOAD_CONST 1 (<code object <genexpr> at 0x1026abe30, file "<dis>", line 2>) 26 MAKE_FUNCTION 0 ⋮ 42 GET_ITER ...
2
1
79,107,149
2024-10-20
https://stackoverflow.com/questions/79107149/pylance-incorrectly-flagging-sklearn-mean-squared-error-function-as-deprecated
I haven't been able to find anything online about this. Pylance seems to be marking the mean_squared_error function from sklearn.metrics as deprecated, although only the squared parameter is deprecated. I am running Python through micromamba and have the latest version of both sklearn (1.5.2) and Pylance (v2024.10.1). ...
Pyright/Pylance made no mistake, and neither did you. This is a problem with the type stubs for sklearn.metrics, in which mean_squared_error is defined as: @deprecated() def mean_squared_error( y_true: MatrixLike | ArrayLike, y_pred: MatrixLike | ArrayLike, *, sample_weight: None | ArrayLike = None, multioutput: ArrayL...
2
3
79,105,904
2024-10-19
https://stackoverflow.com/questions/79105904/approximating-logarithm-using-harmonic-mean
Here is a function to approximate log10(x+1) for (x+1) < ~1.2: a = 1.097 b = 0.085 c = 2.31 ans = 1 / (a - b*x + c/x) It should look like that: It works by adjusting harmonic mean to match log10, but the problem is in values of a, b, c. The question is how to get just right a, b and c and how to make better approxima...
Here are two approaches. Method 1: series expansion in x (better for negative and positive x) Method 2: fit the curve that passes through 3 points (here, x=0, ½ and 1) Method 1. If you expand them by Taylor series as powers of x then Equating coefficients of x, x^2 and x^3 gives In code: import math import numpy as n...
2
3
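Method 2 (fit through the points x = 0, ½ and 1) can be sketched concretely. The details below are my reconstruction, not the answer's exact code: near x = 0, log10(1+x) ≈ x/ln(10), so matching 1/(a - b·x + c/x) forces c = ln(10); a and b then come from a 2×2 linear solve at the other two points.

```python
import math
import numpy as np

c = math.log(10)                              # x -> 0 limit fixes c = ln(10)
xs = np.array([0.5, 1.0])
rhs = 1 / np.log10(1 + xs) - c / xs           # a - b*x must equal this
A = np.column_stack([np.ones_like(xs), -xs])  # unknowns (a, b)
a, b = np.linalg.solve(A, rhs)

def approx_log10p1(x):
    return 1 / (a - b * x + c / x)

x = 0.2
print(abs(approx_log10p1(x) - math.log10(1 + x)) < 1e-3)  # True
```

The solved values (a ≈ 1.128, b ≈ 0.109, c ≈ 2.3026) land close to the question's hand-tuned a = 1.097, b = 0.085, c = 2.31.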
79,106,642
2024-10-20
https://stackoverflow.com/questions/79106642/how-to-webscrape-elements-using-beautifulsoup-properly
I am not from a web scraping or website/HTML background and am new to this field. Trying out scraping elements from this link that contains containers/cards. I have tried below code and find a little success but not sure how to do it properly to get just informative content without getting html/css elements in the results. f...
As Manos Kounelakis suggested, what you're likely looking for is the text attribute of BeautifulSoup HTML elements. Also, it is more natural to split up the html based on the elements with the class card rather than the row elements, as the card elements correspond to each visual card unit on the screen. Here is some c...
1
0
79,106,107
2024-10-19
https://stackoverflow.com/questions/79106107/is-this-benchmark-valid-tinygrad-is-impossibly-fast-vs-torch-or-numpy-for-medi
I ran the following benchmark code on google collab CPU with high ram enabled. Please point out any errors in the way I am benchmarking, (if any) as well as why there is a such a high performance boost with tinygrad. # Set the size of the matrices size = 10000 # Generate a random 10000x10000 matrix with NumPy np_array ...
Tinygrad performs operations in a "lazy" way, so the matrix multiplication hasn't been performed yet. Change your matrix multiplication line to: tg_result = (tg_tensor @ tg_tensor).realize() or tg_result = (tg_tensor @ tg_tensor).numpy()
4
5
79,106,128
2024-10-19
https://stackoverflow.com/questions/79106128/how-to-filter-out-a-dataframe-based-on-another-dataframe
My dataframe loads from a csv file that looks like this RepID Account Rank 123 Abcd 1 345 Zyxw 2 567 Hijk 3 ... ... 837 Kjsj 8 and I have another csv that has only one column RepID 345 488 I load the first csv in a dataframe df and the other csv in dataframe dE I want to have a new dataframe dX that is all records fr...
A possible solution, which uses boolean indexing and isin: df[df['RepID'].isin(dE['RepID'])] # dY df[~df['RepID'].isin(dE['RepID'])] # dX Output: # dY RepID Account Rank 1 345 Zyxw 2 # dX RepID Account Rank 0 123 Abcd 1 2 567 Hijk 3 3 837 Kjsj 8
2
3
79,105,679
2024-10-19
https://stackoverflow.com/questions/79105679/python-regular-expression-for-multiple-split-criteria
I'm struggling to split some text in a piece of code that I'm writing. This software is scanning through about 3.5 million lines of text of which there are varying formats throughout. I'm kind of working my way through everything still, but the line below appears to be fairly standard within the file: EXAMPLE_FILE_TEXT...
I suggest a tokenizing approach with regex: create a regex with alternations, starting with the most specific ones, and ending with somewhat generic ones. In your case, you may try import re x = 'EXAMPLE_FILE_TEXT ID="20211111.111111 11111"' res = re.findall(r'"([^"]*)"|(\d+(?:\.\d+)*)|(\w+)', x) print( ["".join(r) for...
2
2
79,100,317
2024-10-18
https://stackoverflow.com/questions/79100317/how-can-i-align-the-numbers-to-the-top-of-the-cells
I want to align the numbers to the top left corners of each cell. I was able to find the set_text_props, but it isn't doing the alignment as I expect. import matplotlib.pyplot as plt fig = plt.figure() ax = fig.add_subplot() ax.axis('off') table = ax.table( cellText=[[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14]], ...
(This is a hack, not a solution) Add a newline to your text and adjust the cell.PAD. colab import matplotlib.pyplot as plt fig = plt.figure(figsize=(4,3)) ax = fig.add_subplot() ax.axis('off') cellText=[[1, 2, 3, 4, 5, 6, 7], [8, 9, 10, 11, 12, 13, 14]] for i in range(len(cellText)): for j in range(len(cellText[i])): ...
4
3
79,093,014
2024-10-16
https://stackoverflow.com/questions/79093014/moviepy-is-unable-to-load-video
Using python 3.11.10 and moviepy 1.0.3 on ubuntu 24.04.1 (in a VirtualBox 7.1.3 on windows 10) I have problems to load a video clip. The test code is just from moviepy.editor import VideoFileClip clip = VideoFileClip("testvideo.ts") but the error is Traceback (most recent call last): File "/home/alex/.cache/pypoetry/v...
FFMPEG_BINARY Normally you can leave it to its default (‘ffmpeg-imageio’) in which case imageio will download the right ffmpeg binary (on first use) and then always use that binary. The second option is "auto-detect". In this case ffmpeg will be whatever binary is found on the computer: generally ffmpeg (on Linux/macO...
5
5
79,104,005
2024-10-19
https://stackoverflow.com/questions/79104005/using-hist-to-bin-data-while-grouping-with-over
Consider the following example: import polars as pl df = pl.DataFrame( [ pl.Series( "name", ["A", "B", "C", "D"], dtype=pl.Enum(["A", "B", "C", "D"]) ), pl.Series("month", [1, 2, 12, 1], dtype=pl.Int8()), pl.Series( "category", ["x", "x", "y", "z"], dtype=pl.Enum(["x", "y", "z"]) ), ] ) print(df) shape: (4, 3) ┌──────...
Here's one approach using Expr.over: bins = range(1,12) out = df.select( pl.col('month').hist( bins=bins, include_breakpoint=True ) .over(partition_by='category', mapping_strategy='explode') .alias('binned'), pl.col('category').unique(maintain_order=True).repeat_by(len(bins)+1).flatten() ).unnest('binned').with_columns...
4
2
79,104,578
2024-10-19
https://stackoverflow.com/questions/79104578/pd-to-datetime-fails-with-old-dates
I have a csv file with very old dates, and pd.to_datetime fails. It works in polars. Is this an inherent limitation in pandas, a bug or something else? import pandas as pd dates = ["12/31/1672","12/31/1677","10/19/2024"] df = pd.DataFrame(dates, columns=['Date']) df['Date'] = pd.to_datetime(df['Date'], format='%m/%d/%Y...
pandas has timestamp limitations; the docs suggests use of period for such cases (of course it depends if the period data type covers your use case): df.assign(new_dates=pd.PeriodIndex(df.Date, freq='D')) Date new_dates 0 12/31/1672 1672-12-31 1 12/31/1677 1677-12-31 2 10/19/2024 2024-10-19
4
3
79,096,544
2024-10-17
https://stackoverflow.com/questions/79096544/what-is-the-thon-executable
On Ubuntu or other Linux-based systems, Python 3.14's venv creates an extra executable named 𝜋thon: $ python --version Python 3.13.0 $ python -m venv .venv $ cd .venv/bin && ls Activate.ps1 activate activate.csh activate.fish pip pip3 pip3.13 python python3 python3.13 $ python --version Python 3.14.0a1+ $ python -m v...
This is an easter egg. The 𝜋thon executable works exactly the same as python, python3 and python3.14. The name 𝜋thon itself is a pun on "Python" and 𝜋 ("Pi") the mathematical constant, whose decimal representation starts with "3.14". This executable was originally named python𝜋 as a parallelism to python3.14 and ot...
1
6
79,103,936
2024-10-18
https://stackoverflow.com/questions/79103936/merging-numpy-arrays-converts-int-to-decimal
I need to merge 2 arrays together, so if a = [] and b is array([76522, 82096], dtype=int64), the merge will be [76522, 82096], but I am getting this as a decimal array([76522., 82096.]). Here is my code: a = np.concatenate((a, b)). How can I merge both arrays with the same datatype?
Since a is empty, when it gets converted to a numpy array, it chooses a default dtype=float64. Do the conversion explicitly so you can specify the dtype. np.concatenate((np.array(a, dtype=np.int64), b))
1
2
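The answer's fix in runnable form (values taken from the question):

```python
import numpy as np

a = []  # empty list: converted alone, it would default to float64
b = np.array([76522, 82096], dtype=np.int64)

# Convert a explicitly so the concatenation stays integer
merged = np.concatenate((np.asarray(a, dtype=np.int64), b))
print(merged, merged.dtype)  # [76522 82096] int64
```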
79,103,687
2024-10-18
https://stackoverflow.com/questions/79103687/how-to-easily-perform-this-random-matrix-multiplication-with-numpy
I want to produce 2 random 3x4 matrices where the entries are normally distributed, A and B. After that, I have a 2x2 matrix C = [[a,b][c,d]], and I would like to use it to produce 2 new 3x4 matrices A' and B', where A' = a A + b B, B' = c A + d B. In order to produce the matrices A and B, I was thinking to use this li...
I think you can use np.einsum np.einsum("ij, jkl -> ikl", C, Z) where "ij, jkl -> ikl" specifies the contraction pattern, where i and j are the indices of the C matrix, and j, k, and l are the indices of the Z array. Example Given dummy data like below np.random.seed(0) Z = np.random.normal(0.0, 1.0, [2, 3, 4]) C = [...
1
1
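A self-contained version of the einsum contraction (dummy data generated here; Z stacks the question's A and B):

```python
import numpy as np

rng = np.random.default_rng(0)
Z = rng.normal(size=(2, 3, 4))           # Z[0] plays A, Z[1] plays B
C = np.array([[1.0, 2.0],
              [3.0, 4.0]])               # [[a, b], [c, d]]

# out[0] = a*A + b*B and out[1] = c*A + d*B in one contraction over j
out = np.einsum("ij, jkl -> ikl", C, Z)
print(out.shape)  # (2, 3, 4)
```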
79,102,009
2024-10-18
https://stackoverflow.com/questions/79102009/how-to-load-tests-from-some-files-and-not-others
I want to run a suite of unit tests in the tests folder. The basic code is: suite = unittest.defaultTestLoader.discover('tests') I want only some of these tests to run, for example test_e1 if file e1.py is present, test_e5 if e5.py is present, but not test_e2 and test_e11 (because files e2.py and e11.py are missing). ...
You are passing hybrid file-path/object names to loadTestsFromNames. Drop the tests from the name, and ensure that tests appears on your module search path, either by Modifying sys.path before calling the method, or Adding tests to the PYTHONPATH environment variable before running your tests.
2
2
79,100,411
2024-10-18
https://stackoverflow.com/questions/79100411/pyopengl-calling-glend-gives-opengl-error-1282-after-modifying-imports
I'm trying to follow this tutorial for PyOpenGL, but I get an OpenGL error 1282 when calling glEnd(): import OpenGL import OpenGL.GL from OpenGL.GLUT import * import OpenGL.GLU from OpenGL.raw.GL.VERSION.GL_1_1 import * import time def square(): glBegin(GL_QUADS) glVertex2f(100,100) glVertex2f(200,100) glVertex2f(200, ...
Note that the OpenGL-related imports are different from the ones in the tutorial. The tutorial uses from OpenGL.GL import * from OpenGL.GLUT import * from OpenGL.GLU import * In particular, adding import OpenGL imports PyOpenGL's default error-checking mechanism, which accompanies every call to the OpenGL library ...
2
1
79,096,787
2024-10-17
https://stackoverflow.com/questions/79096787/incorrect-calculation-in-the-list-processing-logic-based-on-dependencies-between
My code gives me this incorrect output: ['CT', 'X', 'Z'] [100, 1.0583, 1.0633] [200, 3.012, 5.873600000000001] [300, 1.79, 2.5220000000000002] ['Total', 0, 0] The reason is that 5.873600000000001 is incorrectly calculated because according to list_components_and_hierarchical_relationships, it is indicated that Z is a ...
For this task I'd use networkx module: import networkx as nx def create_hierarchical_graph(relationships_list): for relationships in relationships_list: G = nx.DiGraph() root = None for item in relationships: node = item[0] connections = item[1] if connections[0] == "PRODUCT": G.add_edge("PRODUCT", node, weight=connect...
3
1
79,099,118
2024-10-17
https://stackoverflow.com/questions/79099118/override-value-in-pydantic-model-with-environment-variable
I am building some configuration logic for a Python 3 app, and trying to use pydantic and pydantic-settings to manage validation etc. I'm able to load raw settings from a YAML file and create my settings object from them. I'm also able to read a value from an environment variable. But I can't figure out how to make the...
Are you sure you want the environment to take precedence? While not ubiquitous, it is very common for environment variables to have the lowest precedence (typically, the ordering is built-in defaults, then environment variables, then configuration files, then command line options). Deviating from this convention can be...
2
2
79,099,138
2024-10-17
https://stackoverflow.com/questions/79099138/usage-of-retain-graph-in-pytorch
I get error if I don't supply retain_graph=True in y1.backward() import torch x = torch.tensor([2.0], requires_grad=True) y = torch.tensor([3.0], requires_grad=True) f = x+y z = 2*f y1 = z**2 y2 = z**3 y1.backward() y2.backward() Traceback (most recent call last): File "/Users/a0m08er/pytorch/pytorch_tutorial/tensor....
basically the error Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time ...
3
3
79,098,721
2024-10-17
https://stackoverflow.com/questions/79098721/fixing-badly-formatted-floats-with-numpy
I am reading a text file only containing floating point numbers using numpy.loadtxt. However, some of the data is corrupted and reads something like X.XXXXXXX+YYY instead of X.XXXXXXXE+YY (Missing E char). I'd like to interpret them as the intended floating point number (or NaN if impossible) and wondered if there was ...
The following works: import numpy as np import re def converter(txt): txt = re.sub(r"(?<=\d)(?<!E)[\+\-]", lambda x: 'E'+x[0], txt.decode()) return float(txt) np.loadtxt("path/to/datafile", converters = converter)
2
2
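The converter from the answer can be exercised on its own, without a data file. This sketch takes str input directly (the .decode() in the answer is there because np.loadtxt may hand the converter bytes):

```python
import re

def converter(txt):
    # Insert the missing E before a +/- sign that directly follows a digit
    return float(re.sub(r"(?<=\d)(?<!E)[+\-]", lambda m: "E" + m[0], txt))

print(converter("1.2345678+123"))  # 1.2345678e+123
print(converter("3.14E+02"))       # 314.0  (well-formed input passes through)
```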
79,098,997
2024-10-17
https://stackoverflow.com/questions/79098997/python-date-time-missing-month-but-it-is-there
I've been trying to create this machine learning tool to make predictions on the amount of orders in the next year per month but I have been getting this error: ValueError: to assemble mappings requires at least that [year, month, day] be specified: [month] is missing here is my code. I am passing in the month and it ...
If you check to_datetime documentation, you will find that it requires the column called month. Your month column contains the month names. You should rename the columns before using to_datetime like this: df=df.rename(columns={"Month": "MonthName", "MonthNum": "Month"}). This way, pandas will look for the month numeri...
3
1
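After the rename the answer suggests, the assembly works because to_datetime requires numeric year/month/day columns; a minimal sketch with invented values:

```python
import pandas as pd

# Column names year/month/day are what to_datetime's column-assembly
# mode looks for; month must be numeric, not a month name
df = pd.DataFrame({"year": [2024, 2024], "month": [1, 2], "day": [15, 1]})
dates = pd.to_datetime(df)
print(dates.iloc[0])  # 2024-01-15 00:00:00
```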
79,098,592
2024-10-17
https://stackoverflow.com/questions/79098592/how-to-identify-cases-where-both-elements-of-a-pair-are-greater-than-others-res
I have a case where I have a list of pairs, each with two numerical values. I want to find the subset of these elements containing only those pairs that are not exceeded by both elements of another (let's say "eclipsed" by another). For example, the pair (1,2) is eclipsed by (4,5) because both elements are less than th...
What we can do, from my perspective, is: first, we remove duplicates and sort the pairs, first element in descending order and, for ties in the first element, second element in descending order: unique_pairs = sorted(set(pairs), reverse=True) By keeping the condition for each pair: if b is greater than the maximum second ele...
4
4
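The sort-then-scan idea can be sketched as follows (my own completion of the truncated answer; ties in the first element are handled as a group, since equal first elements cannot eclipse each other):

```python
from itertools import groupby

def not_eclipsed(pairs):
    """Keep pairs not strictly exceeded in BOTH elements by another pair."""
    unique = sorted(set(pairs), reverse=True)   # desc by first, then second
    kept, best_b = [], float("-inf")
    for a, group in groupby(unique, key=lambda p: p[0]):
        group = list(group)
        # only pairs with a strictly larger first element can eclipse these,
        # and best_b holds their largest second element so far
        kept.extend((a, b) for _, b in group if b > best_b)
        best_b = max(best_b, *(b for _, b in group))
    return kept

print(not_eclipsed([(1, 2), (4, 5), (3, 7), (3, 2), (5, 1)]))
# [(5, 1), (4, 5), (3, 7)]
```

Here (1, 2) and (3, 2) are dropped because both elements are exceeded by (4, 5), matching the question's example.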
79,098,013
2024-10-17
https://stackoverflow.com/questions/79098013/precision-of-jax
I have a question regarding the precision of float in JAX. For the following code, import numpy as np import jax.numpy as jnp print('jnp.arctan(10) is:','%.60f' % jnp.arctan(10)) print('np.arctan(10) is:','%.60f' % np.arctan(10)) jnp.arctan(10) is: 1.471127629280090332031250000000000000000000000000000000000000 np.arcta...
JAX defaults to float32 computation, which has a relative precision of about 1E-7. This means that your two inputs are effectively identical: >>> np.float32(10) == np.float32(10 + 1E-7) True If you want 64-bit precision like NumPy, you can enable it as discussed at JAX sharp bits: double precision, and then the result...
2
4
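The collision the answer describes can be reproduced with NumPy alone (a stand-in here; JAX's default float32 computations obey the same precision limits):

```python
import numpy as np

# float32 carries ~7 decimal digits, so adding 1e-7 to 10 is lost;
# float64 keeps the difference
print(np.float32(10.0) == np.float32(10.0 + 1e-7))  # True
print(np.float64(10.0) == np.float64(10.0 + 1e-7))  # False
```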
79,097,421
2024-10-17
https://stackoverflow.com/questions/79097421/rolling-sum-with-right-closed-interval-in-duckdb
In Polars / pandas I can do a rolling sum where row each row the window is (row - 10 minutes, row]. For example: import polars as pl data = { "timestamp": [ "2023-08-04 10:00:00", "2023-08-04 10:05:00", "2023-08-04 10:10:00", "2023-08-04 10:10:00", "2023-08-04 10:20:00", "2023-08-04 10:20:00", ], "value": [1, 2, 3, 4, ...
Maybe not particularly neat, but off the top of my head you could exclude the rows which are exactly 10 minutes back by an additional window clause import duckdb rel = duckdb.sql(""" SELECT timestamp, value, SUM(value) OVER roll - coalesce(SUM(value) OVER exclude, 0) AS rolling_sum FROM df WINDOW roll AS ( ORDER BY times...
5
2
79,097,636
2024-10-17
https://stackoverflow.com/questions/79097636/looping-if-statement
I want to loop through an array with an if statement and only after it has looped through the entire array execute the else. This is how I have my code now: for index, nameList in enumerate(checkedName): if record["first_name"] == nameList["first_name"] and record["last_name"] == nameList["last_name"]: print("Match name"...
Using any() is more Pythonic and readable. The logic: check whether any record in the list matches your conditions.

if any(record["first_name"] == nameList["first_name"] and record["last_name"] == nameList["last_name"] for nameList in checkedName):
    print("Match name")
else:
    print("No match name")
    checkedName.append({ "id...
1
5
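A minimal sketch of the any() approach with made-up records (names invented for illustration):

```python
checkedName = [{"first_name": "Ada", "last_name": "Lovelace"}]
record = {"first_name": "Alan", "last_name": "Turing"}

def has_match(record, checkedName):
    # True as soon as one entry matches; False only after the whole scan
    return any(n["first_name"] == record["first_name"]
               and n["last_name"] == record["last_name"]
               for n in checkedName)

print(has_match(record, checkedName))  # False
checkedName.append(record)
print(has_match(record, checkedName))  # True
```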
79,096,016
2024-10-16
https://stackoverflow.com/questions/79096016/how-do-i-get-the-methods-with-parameters-of-a-python-class-while-keeping-the-o
The dir() function prints the methods in alphabetical order. Is there a way to get the methods of a class (with their parameters) but keeping the original order of the methods? Here's my code return [ (m, getattr(PythonClass, m).__code__.co_varnames) for m in dir(PythonClass) ]
As @Barmar mentioned in the comments, you can use the __dict__ attribute of a class to access its attributes. Since Python 3.7 dict keys are guaranteed to retain their insertion order, so by iterating over PythonClass.__dict__ you can obtain attributes of PythonClass in the order of definition. It is also more idiomati...
1
3
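A runnable sketch of the __dict__ approach (class and method names invented; inspect.signature used instead of __code__.co_varnames, which also lists local variables):

```python
import inspect

class Demo:
    def zebra(self, x): ...
    def apple(self, y, z): ...

# __dict__ preserves definition order (guaranteed since Python 3.7),
# unlike dir(), which sorts names alphabetically
methods = [(name, inspect.signature(fn))
           for name, fn in vars(Demo).items()
           if inspect.isfunction(fn)]
print([name for name, _ in methods])  # ['zebra', 'apple']
```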
79,096,452
2024-10-17
https://stackoverflow.com/questions/79096452/what-does-mean-in-python
I have been told that in Python ''' is used to indicate the start of a multi-line string. However I have also been taught that this syntax also allows for the documentation of functions and modules. Googling, surprisingly, doesn't give a clear answer on what ''' definitively refers to. So how should I remember, as a begin...
Triple quotes ''' (and """) is a marker for a string literal just like quote characters ' and ", which you can see in Python's grammar for String and Bytes literals. Its only difference from regular quotes is that newlines and unescaped quote characters are allowed within a triple-quoted string literal, which makes tripl...
2
1
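The distinction the answer draws can be shown in a few lines (the example strings are illustrative):

```python
# A triple-quoted literal is just a string that may span lines and may
# contain unescaped quote characters.
multi = '''line one
line two with "double" and 'single' quotes'''
print(multi)

# The same kind of literal, placed as the first statement of a function,
# becomes its docstring and is stored on __doc__:
def greet():
    '''Return a friendly greeting.'''
    return "hello"

print(greet.__doc__)
```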
79,095,809
2024-10-16
https://stackoverflow.com/questions/79095809/using-pyparsing-for-parsing-filter-expressions
I'm currently trying to write a parser (using pyparsing) that can parse strings that can then be applied to a (pandas) dataframe to filter data. I've already got it working after much trial & error for all kinds of example strings, however I am having trouble with extending it further from this point on. First, here is...
Rather than define a term that includes the spaces, better to define a term that parses words, so that it can detect and stop if it finds a word that shouldn't be included (like one of the logical operator words). I did this and then wrapped it in a Combine that a) allows for whitespace between the words (adjacent=Fals...
2
1
79,095,934
2024-10-16
https://stackoverflow.com/questions/79095934/how-to-extract-a-cell-value-from-a-dataframe
I am trying to extract a cell value from a dataframe, but I always get a Series instead of a value. For example: df_test=pd.DataFrame({'Well':['test1','test2','test3'],'Region':['east','west','east']}) df_test Well Region 0 test1 east 1 test2 west 2 test3 east well='test2' region_thiswell=df_test.loc[df_test['Well'...
A potential issue with item/values/iloc is that it will yield an exception of there is no match. squeeze will return an empty Series: df_test.loc[df_test["Well"] == 'test999', "Region"].item() # ValueError: can only convert an array of size 1 to a Python scalar df_test.loc[df_test["Well"] == 'test999', "Region"].values...
2
3
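A runnable comparison of .item() and .squeeze() on the question's df_test (the no-match behaviour is the point of the answer):

```python
import pandas as pd

df_test = pd.DataFrame(
    {"Well": ["test1", "test2", "test3"], "Region": ["east", "west", "east"]}
)

# .item() returns the scalar when exactly one row matches:
region = df_test.loc[df_test["Well"] == "test2", "Region"].item()
print(region)

# With no matching row, .item() raises ValueError, while .squeeze()
# quietly hands back an empty Series:
no_match = df_test.loc[df_test["Well"] == "test999", "Region"].squeeze()
print(len(no_match))
```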
79,096,122
2024-10-17
https://stackoverflow.com/questions/79096122/call-function-within-a-function-but-keep-default-values-if-not-specified
I have two sub functions that feed into one main functions as defined below: Sub function 1: def func(x=1, y=2): z = x + y return z Sub function 2: def func2(a=3, b=4): c = a - b return c Main function: def finalFunc(lemons, input1, input2, input3, input4): result = func(input1, input2) + func2(input3, input4) + lemo...
My recommendation: def func(x=None, y=None): if x is None: x = 1 if y is None: y = 2 z = x + y return z def func2(a=None, b=None): if a is None: a = 3 if b is None: b = 4 c = a-b return c def finalFunc(lemons, input1=None, input2=None, input3=None, input4=None): result = func(input1, input2) + func2(input3, input4) + l...
1
3
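The answer's None-sentinel pattern, completed into a runnable sketch (the numeric defaults follow the question's sub functions):

```python
# Passing None means "use the inner function's own default", so the main
# function does not have to duplicate those defaults in its signature.
def func(x=None, y=None):
    x = 1 if x is None else x
    y = 2 if y is None else y
    return x + y

def func2(a=None, b=None):
    a = 3 if a is None else a
    b = 4 if b is None else b
    return a - b

def finalFunc(lemons, input1=None, input2=None, input3=None, input4=None):
    return func(input1, input2) + func2(input3, input4) + lemons

print(finalFunc(10))            # all defaults: (1+2) + (3-4) + 10
print(finalFunc(10, input2=7))  # override only y: (1+7) + (3-4) + 10
```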
79,093,236
2024-10-16
https://stackoverflow.com/questions/79093236/how-to-create-multiple-columns-in-output-on-when-condition-in-polars
I am trying to create 2 new columns in output on checking condition but not sure how to do that. sample df: so_df = pl.DataFrame({"low_limit": [1, 3, 0], "high_limit": [3, 4, 2], "value": [0, 5, 1]}) low_limit high_limit value i64 i64 i64 1 3 0 3 4 5 0 2 1 Code for single column creation that works: so_df.with_column...
You can select into a pl.struct and then extract multiple values out using .struct.field(...): df = so_df.with_columns( pl.when(pl.col("value") > pl.col("high_limit")) .then(pl.struct(Flag=pl.lit("High"), Normality=pl.lit("Abnormal"))) .when(pl.col("value") < pl.col("low_limit")) .then(pl.struct(Flag=pl.lit("Low"), Nor...
2
1
79,092,715
2024-10-16
https://stackoverflow.com/questions/79092715/calculate-new-column-value-based-on-max-for-group-in-pandas-dataframe
I have dataframe containing list of subjects + dates of dispensing, one subject has more Dates of Dispensing and one single Date of dispensing for one subject can occur several times. Here is example: {'Subject': {1449: 'CZ100030006', 1786: 'CZ100030006', 1958: 'CZ100030006', 1964: 'CZ100030006', 4067: 'CZ100030006', 4...
@Dmitry543 has the correct logic, but this should use groupby.transform and a comparison with itself in the function: # ensure datetime df['Date Dispensed'] = pd.to_datetime(df['Date Dispensed']) # find largest second(s) for each group df['new'] = (df.groupby('Subject')['Date Dispensed'] .transform(lambda x: x==x.drop...
2
0
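The answer's transform lambda is truncated above, so this reconstruction is an assumption: it marks, per subject, the row holding the second-largest dispensing date (the data is illustrative, and each group is assumed to have at least two distinct dates):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Subject": ["A", "A", "A", "B", "B"],
        "Date Dispensed": ["2024-01-01", "2024-03-01", "2024-02-01",
                           "2024-05-01", "2024-04-01"],
    }
)
df["Date Dispensed"] = pd.to_datetime(df["Date Dispensed"])

# groupby.transform keeps the original index, so the boolean mask lines up
# row-for-row with the frame; iloc[-2] picks the second-largest unique date.
df["new"] = df.groupby("Subject")["Date Dispensed"].transform(
    lambda x: x == x.drop_duplicates().sort_values().iloc[-2]
)
print(df["new"].tolist())
```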
79,078,236
2024-10-11
https://stackoverflow.com/questions/79078236/capturing-matplotlib-coordinates-with-mouse-clicks-using-ipywidgets-in-jupyter-n
Short question I want to capture coordinates by clicking different locations with a mouse on a Matplotlib figure inside a Jupyter Notebook. I want to use ipywidgets without using any Matplotlib magic command (like %matplotlib ipympl) to switch the backend and without using extra packages apart from Matplotlib, ipywidge...
Short answer Capturing mouse clicks on a non-interactive Matplotlib figure is not possible – that's what the interactive backends are for. If you want to avoid switching back and forth between non-interactive and interactive backends, maybe try the reverse approach: Rather than trying to get interactivity from non-inte...
3
2
79,084,728
2024-10-14
https://stackoverflow.com/questions/79084728/how-do-to-camera-calibration-using-charuco-board-for-opencv-4-10-0
I am trying to do a camera calibration using OpenCV, version 4.10.0. I already got a working version for the usual checkerboard, but I can't figure out how it works with charuco. I would be grateful for any working code example. What I tried: I tried following this tutorial: https://medium.com/@ed.twomey1/using-charuco...
I managed to get the calibration done with the official non-contrib opencv code. Here is a minimal working example: The problem: Be careful when defining your detection board. An 11x8 charuco board has a different order of the aruco markers than an 8x11, even if they look very similar when printed. The detection will fail....
1
1
79,088,388
2024-10-15
https://stackoverflow.com/questions/79088388/configuring-django-testing-in-pycharm
I have a simple django project that I'm making in pycharm. The directory structure is the following: zelda_botw_cooking_simulator |-- cooking_simulator_project |---- manage.py |---- botw_cooking_simulator # django app |------ init.py |------ logic.py |------ tests.py |------ all_ingredients.py |------ other standard dj...
Figured it out! The problem lay in how PyCharm was interpreting the test. The question and answer here were super helpful, but I had to do the opposite and add a working directory: Replace from django.test import TestCase with from unittest import TestCase in my tests.py file. Add the working directory to the template...
2
0
79,091,469
2024-10-15
https://stackoverflow.com/questions/79091469/how-to-convert-java-serialization-data-into-json
A vendor-provided application we're maintaining stores (some of) its configuration in the form of "Java serialization data, version 5". A closer examination shows, that the actual contents is a java.util.ArrayList with several dozens of elements of the same vendor-specific type (vendor.apps.datalayer.client.navs.shared...
The documentation on the format is available here. To explain the example more thoroughly: 00: ac ed 00 05 73 72 00 04 4c 69 73 74 69 c8 8a 15 >....sr..Listi...< 10: 40 16 ae 68 02 00 02 49 00 05 76 61 6c 75 65 4c >Z......I..valueL< 20: 00 04 6e 65 78 74 74 00 06 4c 4c 69 73 74 3b 78 >..nextt..LList;x< 30: 70 00 00 00 ...
1
3
79,084,176
2024-10-13
https://stackoverflow.com/questions/79084176/polars-python-api-read-json-fails-to-parse-date
I want to read in a polars dataframe from a json string containing dates in the standard iso-format "yyyy-mm-dd". When I try to read the string in and set the dtype of the date column witheither schema or schema_override this results in only NULL values. MRE from datetime import datetime, timedelta from io import Strin...
After having a bit of a play around, it looks like unfortunately dates being read from a JSON file have a bit of a quirk. It seems to me that currently they must be written in days since the unix epoch (which is how Polars internally represents dates) for things to work as you expect. I have raised this feature request...
2
2
79,078,422
2024-10-11
https://stackoverflow.com/questions/79078422/looking-for-a-function-equivalent-to-resizerowtoindex-for-qtableview
I am working with a QTableView, and I'd like for the row to change its height to accommodate the content of the selected index. The function resizeRowToContents is not really what I am looking for. If I click on a cell [A] that doesn't need to change its height to display everything but a cell [B] in the row needs to, ...
Basically, I wrote the function heightHintForIndex in Python. def on_selection_changed(self, selected: QModelIndex , deselected: QModelIndex): self.setRowHeight(deselected.row(), 1) editor = self.indexWidget(selected) hint = 0 if (editor is not None) and (self.isPersistentEditorOpen(selected)): hint = max(hint, editor....
2
0
79,091,436
2024-10-15
https://stackoverflow.com/questions/79091436/using-unpacked-typeddict-for-specifying-function-arguments
I was wondering whether following was possible with TypedDict and Unpack, inspired by PEP 692... Regular way of using TypedDict would be: class Config(TypedDict): a: str b: int def inference(name, **config: Unpack[Config]): c = config['a']+str(config["b"]) config: Config = {"a": "1", "b": 2} inference("example", **conf...
No, there is no python syntax that will allow you to "unpack the dictionary to make it behave like they were directly introduced as variables into function signature". You could technically do it at runtime with something like this: def inference(name, **config: Unpack[Config]): vars().update(config) dosomething(a, b) ...
1
2
79,090,251
2024-10-15
https://stackoverflow.com/questions/79090251/confused-in-how-to-do-web-scraping-for-the-first-time
There is a website like bonbast.com and I'm trying to get values but I'm just confused about how to do it. Values should be something like the output of "US Dollar" and "Euro". My code: import requests from bs4 import BeautifulSoup r = requests.get("https://bonbast.com/") soup = BeautifulSoup(r.content, "html.parser") ...
From the Reference of this Repo - https://github.com/drstreet/Currency-Exchange-Rates-Scraper/blob/master/scrapper.py Focusing only on the necessary parts to retrieve exchange rates USD and Euro. import requests from bs4 import BeautifulSoup import re def get_currency_data(): url = 'https://www.bonbast.com' session = r...
1
2
79,089,003
2024-10-15
https://stackoverflow.com/questions/79089003/no-solution-found-or-tools-vrp
I am having an issue with no solution found for an instance of the OR-Tools VRP. I am new to OR-Tools. After consulting the docs, my understanding if no first solution is found then no solution at all will be found. To help this, I should somehow loosen constraints of finding the first solution. However, I have tried m...
You cannot have multiple pickups or deliveries on the same node. Only one action per node.
2
1
79,089,082
2024-10-15
https://stackoverflow.com/questions/79089082/merging-tables-using-python
I want to merge some 'n' tables in python. Each table has 2 columns in it. Currently, I'm trying with these 3 tables (table12, table13, table23). Context: I have certain image files, each image has some objects inside it which are labelled as 'A', 'B', 'C', etc. If I'm comparing 2 images, I get an equivalence table wh...
Your logic is unclear, but assuming that the tables are to be kept in their original order and are top aligned, you should probably concat the individual columns: pd.concat([table12[['img1']], table23[['img2']], table13[['img3']]], axis=1) Or, taking all columns as input and removing the duplicates: pd.concat([table12...
1
2
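A self-contained sketch of the concat approach with made-up equivalence tables (the column names img1/img2/img3 follow the question):

```python
import pandas as pd

# Hypothetical pairwise equivalence tables between three images.
table12 = pd.DataFrame({"img1": ["A", "B"], "img2": ["B", "C"]})
table23 = pd.DataFrame({"img2": ["B", "C"], "img3": ["C", "A"]})
table13 = pd.DataFrame({"img1": ["A", "B"], "img3": ["C", "A"]})

# axis=1 places one column from each table side by side, top-aligned:
merged = pd.concat(
    [table12[["img1"]], table23[["img2"]], table13[["img3"]]], axis=1
)
print(merged)
```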
79,088,735
2024-10-15
https://stackoverflow.com/questions/79088735/i-would-like-to-know-how-to-avoid-unnecessary-duplication-when-comparing-two-dat
import pandas df1 = pandas.DataFrame( { 'code': ['001', '001'], 'name': ['test1', 'test1'], 'date': ['2024-01-01', '2024-01-01'], 'value1': [1, 2], 'value2': [1, 2], 'sum': [2, 4] } ) df2 = pandas.DataFrame( { 'code': ['001', '001', '001', '002'], 'name': ['test1', 'test1', 'test1', 'test2'], 'date': ['2024-01-01', '20...
It looks like you should include the sum columns as keys, optionally renaming them before the merge: result = pandas.merge(df1.rename(columns={'sum': 'sum_x'}) .drop(columns=['value1', 'value2']), df2.rename(columns={'sum': 'sum_y'}) .drop(columns=['value1', 'value2']), left_on=['code', 'name', 'date', 'sum_x'], right_...
1
2
79,085,795
2024-10-14
https://stackoverflow.com/questions/79085795/batched-matrix-multiplication-with-jax-on-gpu-faster-with-larger-matrices
I'm trying to perform batched matrix multiplication with JAX on GPU, and noticed that it is ~3x faster to multiply shapes (1000, 1000, 3, 35) @ (1000, 1000, 35, 1) than it is to multiply (1000, 1000, 3, 25) @ (1000, 1000, 25, 1) with f64 and ~5x with f32. What explains this difference, considering that on cpu neither J...
The difference seems to come from the compiler emitting a kLoop fusion for smaller sizes, and a kInput fusion for larger sizes. You can read about the effect of these in this source comment: https://github.com/openxla/xla/blob/e6b6e61b29cc439350a6ad2f9d39535cb06011e5/xla/hlo/ir/hlo_instruction.h#L639-L656 The compiler ...
1
5
79,086,873
2024-10-14
https://stackoverflow.com/questions/79086873/pandas-polars-write-list-of-jsons-to-database-fails-with-ndarray-is-not-json
I have multiple json columns which I concat to an array of json columns. The DataFarme looks like this ┌─────────────────────────────────┐ │ json_concat │ │ --- │ │ list[str] │ ╞═════════════════════════════════╡ │ ["{"integer_col":52,"string_co… │ │ ["{"integer_col":93,"string_co… │ │ ["{"integer_col":15,"string_co… │...
There is unfortunately no polars function to serialize a list to a JSON array. Here's how you can do it manually: df = df.select( pl.struct(pl.col("integer_col", "string_col")).struct.json_encode().alias("json1"), pl.struct(pl.col("float_col", "bool_col")).struct.json_encode().alias("json2"), pl.struct(pl.col("datetime...
2
1
79,087,381
2024-10-14
https://stackoverflow.com/questions/79087381/define-a-regex-pattern-to-remove-all-special-characters-but-with-an-exception-in
This is the problem: I'm trying to clean a text from all the special characters but want to keep the compound words like 'self-restraint' or 'e-mail' united as they are, with the middle dash. The problem is that this hyphen is recognized as a special character. I have python 3.10 I used several regex patterns to do it,...
You could try something like this: import re text = "This is -a -! -sample- ? e-mail text?classification? example- ! ?\} {]}[¿ with !self-restraint( - like @, #, and $." cleaned_text = re.sub(r'\s+', ' ', re.sub(r'(?<!\S)[-]|[-](?!\S)|[^\w\s-]', ' ', text)).strip() print(cleaned_text) Output This is a sample e-mail te...
1
1
79,087,436
2024-10-14
https://stackoverflow.com/questions/79087436/how-to-convert-python-list-into-ast-list
How can I convert Python object - a list, into an ast.List object, so I can appent it to the main AST tree as a node huge_list [ 1, "ABC", 4.5 ] object = ast.Assign([ast.Name(huge_list_name, ast.Store())], (ast.List(huge_list, ast.Load()))) object.lineno = None result = ast.unparse(object) print(result) tree.body.appe...
Assuming your list is made up of simple objects like strings and numbers, you can parse its representation back into an ast.Module, then dig the ast.List out of the module body: >>> huge_list = [1, "ABC", 4.5] >>> mod = ast.parse(repr(huge_list)) >>> [expr] = mod.body >>> expr.value <ast.List at 0x7fffed231190>
3
2
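The round trip from the answer, fleshed out to show the node being embedded in an assignment (requires Python 3.9+ for ast.unparse):

```python
import ast

huge_list = [1, "ABC", 4.5]

# Parse the list's repr back into a Module, then dig out the ast.List:
mod = ast.parse(repr(huge_list))
[expr] = mod.body          # the module body holds a single Expr statement
list_node = expr.value     # ...whose value is the ast.List we want

# The node can now be grafted into a larger tree, e.g. an assignment:
assign = ast.Assign(
    targets=[ast.Name(id="huge_list", ctx=ast.Store())],
    value=list_node,
)
ast.fix_missing_locations(assign)  # fill in lineno/col_offset fields
print(ast.unparse(assign))
```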
79,086,014
2024-10-14
https://stackoverflow.com/questions/79086014/concat-list-with-null-values-or-how-to-fill-null-in-pl-liststr
I want to concat three list columns in a pl.LazyFrame. However the Lists often contain NULL values. Resulting in NULL for pl.concat_list MRE import polars as pl # Create the data with some NULLs data = { "a": [["apple", "banana"], None, ["cherry"]], "b": [None, ["dog", "elephant"], ["fish"]], "c": [["grape"], ["honeyde...
You can use pl.Expr.fill_null() as follows: lazy_df.with_columns( pl.concat_list( pl.col(list_cols).fill_null([]) ).alias("merge") )
3
1
79,081,361
2024-10-12
https://stackoverflow.com/questions/79081361/echo-y-from-within-python
I'm trying to import and use a module originally made as a standalone script. Preferably without altering it, to keep tool commonality with the authors. One function I'm using prompts for a y/n response, to which I want to always answer "y": input("\nContinue? (y/n): ").lower() What's the best way to automate this ans...
Try replacing sys.stdin (the place where input reads its data from) with something that always outputs y. import io import sys one_hundred_yesses = io.StringIO('y\n'*100) original_stdin = sys.stdin sys.stdin = one_hundred_yesses # Run the external module print(input("\nContinue? (y/n): ")) # Outputs "y" # When you are...
1
2
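A variant of the answer that restores the real stdin afterwards (the try/finally wrapper is an addition, not part of the original answer):

```python
import io
import sys

def with_auto_yes(func):
    """Run func with stdin replaced by a stream of 'y' answers."""
    original_stdin = sys.stdin
    sys.stdin = io.StringIO("y\n" * 100)
    try:
        return func()
    finally:
        sys.stdin = original_stdin  # always restore the real stdin

# input() falls back to sys.stdin.readline() when stdin is replaced:
answer = with_auto_yes(lambda: input("\nContinue? (y/n): ").lower())
print(answer)
```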
79,070,227
2024-10-09
https://stackoverflow.com/questions/79070227/calendarviewrequestbuilder-msgraph-python-sdk-query-parameter-for-start-da
I've been playing around with the start_date_time and the end_date_time like this: # Create GraphServiceClient graph_client = GraphServiceClient(credential, scopes) # Define date range for fetching events start_date = datetime.now(pytz.utc) end_date = start_date + timedelta(days=365) # Format dates correctly for the Gr...
Seems like you are calling the wrong method. Checking your code events_page = await graph_client.users.by_user_id(user_id).calendars.by_calendar_id(calendar_id).events.get(request_configuration=request_configuration) you are calling the endpoint v1.0/users/{user_id}/calendars/{calendar_id}/events For calendarView you need...
2
2
79,085,604
2024-10-14
https://stackoverflow.com/questions/79085604/polars-python-is-in-for-lazyframe-typeerror
I am getting the following TypeError Traceback (most recent call last): File "/my/path/my_project/src/my_project/exploration/mre_lazyframe_error.py", line 39, in <module> current.with_columns(pl.col("foo_bar").is_in(reference["foo_bar"])) File "/my/path/.cache/pypoetry/virtualenvs/my_project-p95GORRi-py3.10/lib/python3...
Subscripting a DataFrame returns a Series, but subscripting (or the get_column() method) is not implemented for LazyFrame (what would it even return?). There is a closed issue where it was decided that it's not going to be implemented. If the number of distinct values in the reference frame is not large (as in your case), you could ...
2
1
79,085,456
2024-10-14
https://stackoverflow.com/questions/79085456/vs-code-fails-to-run-script-which-has-quotes-in-its-filename
I am trying to run a simple "Hello World" script which is named print("hello world!").py by pressing the "play" button in the top-right of the VS Code window. I'm using Python 3.13.0, macOS 12.6.1 and VS Code 1.94. The VS Code terminal prints: dafyddpowell@Dafydds-MBP ~ % cd "/Volumes/SAMSUNG/Python projects" dafyddpo...
It seems like VSCode isn't escaping the file name that contains special characters. The simplest way around this would be to use a filename that doesn't have any special characters like hello_world.py.
1
1
79,085,124
2024-10-14
https://stackoverflow.com/questions/79085124/extract-a-class-from-a-static-method
Given a function which is a staticmethod of a class, is there a way to extract the parent class object from this? class A: @staticmethod def b(): ... ... f = A.b ... assert get_parent_object_from(f) is A I can see this buried in the __qualname__, but can't figure out how to extract this. The function get_parent_object...
One approach would be to use a custom static method descriptor that sets the owner class as an attribute of the wrapped function: class StaticMethod(staticmethod): def __set_name__(self, owner, name): self.__wrapped__.owner = owner class A: @StaticMethod def b(): ... f = A.b assert f.owner is A Demo here
2
2
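A runnable sketch of the descriptor trick; the answer sets the attribute via __wrapped__ (available on staticmethod objects from Python 3.10), while this version uses the older __func__ alias for the wrapped function:

```python
# __set_name__ is invoked when the owning class body finishes executing,
# handing us the class so we can stash it on the wrapped function.
class StaticMethod(staticmethod):
    def __set_name__(self, owner, name):
        self.__func__.owner = owner

class A:
    @StaticMethod
    def b():
        return "hello"

f = A.b
print(f.owner)  # the class A
print(f())
```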
79,078,700
2024-10-11
https://stackoverflow.com/questions/79078700/error-when-deploying-django-drf-backend-on-google-cloud-no-matching-distributio
I am trying to deploy my Django backend (using Django Rest Framework) on a Google Cloud VM instance. However, when I run pip install -r requirements.txt, I encounter the following error: Collecting asgiref==3.8.1 Using cached asgiref-3.8.1-py3-none-any.whl (23 kB) Collecting attrs==23.2.0 Using cached attrs-23.2.0-py3-...
Provide the full requirements.txt and Python version you plan to use to run the code. The errors indicate that PIP cannot install the version requested in the requirements.txt file as it conflicts with other requirements or is not supported by the Python version you have installed. If you are running something you didn...
2
1
79,084,064
2024-10-13
https://stackoverflow.com/questions/79084064/i-cant-get-the-input-and-output-to-work-with-my-function-in-python-tkinter
I can't get my tkinter project to work. i haven't been succesful into getting the input from the entry label from a user to then that info get worked on through my function of encryptation, so it can print the output in the output label. from tkinter import * from cryptography.fernet import Fernet raiz= Tk() #def outpu...
The problem with your code is: def Encriptador(): mensaje= cuadroinput.get # <------ here key= Fernet.generate_key() cifrar= Fernet(key) mensaje_cifrado = cifrar.encrypt(mensaje.encode()) cuadroutput.config(Text= mensaje_cifrado) The .get that you are calling is a method, and calling a method in Python requires parentheses (). Th...
1
2
79,080,076
2024-10-12
https://stackoverflow.com/questions/79080076/how-to-set-a-qwidget-hidden-when-mouse-hovering-and-reappear-when-mouse-leaving
I am trying to create a small widget to display information. This widget is intended to be always on top, and set hidden when the mouse hovers over it so you can click or see whatever is underneath it without disruption, and then reappear once your mouse leaves this widget. The problem I am currently facing is that onc...
You could poll the cursor position to see if it's contained by the current window geometry. This has the side benefit of allowing for a short delay, so that the window isn't continually shown/hidden when the cursor moves quickly over it. The delay could be configurable by the user. I think this should work on all platf...
4
1
79,081,940
2024-10-12
https://stackoverflow.com/questions/79081940/python-flask-if-block-jinja2-exceptions-templatesyntaxerror-unexpected
<div class="mb-3"> {{ form.ism.label(class="form-label") }} {{% if form.ism.errors %}} {{ form.ism(class="form-control form-control-lg is-invalid") }} <div class="invalid-feedback"> {% for error in form.ism.errors %} <span>{{ error }}</span> {% endfor %} </div> {% else %} {{ form.ism(class="form-control form-control-l...
The syntax for variables is {{ myVar }} The syntax for expressions such as if and for is {% if ... %} Thus, instead of writing {{% %}}, you should go with {% %} which is the appropriate syntax.
1
2
79,081,575
2024-10-12
https://stackoverflow.com/questions/79081575/trying-to-solve-problem-19-on-euler-project
The question is: You are given the following information, but you may prefer to do some research for yourself. 1 Jan 1900 was a Monday. Thirty days has September, April, June and November. All the rest have thirty-one, Saving February alone, Which has twenty-eight, rain or shine. And on leap years, twenty-nine. A leap...
Your code has an issue in the following condition, where you are also checking month == 1, which means it'll only count Sundays if it's the 1st of January of the year. if day_name == day == month == 1: Instead you should use: if day_name == day == 1: Just a suggestion if you are open to code changes, as you are only concerned about 1s...
1
2
79,080,117
2024-10-12
https://stackoverflow.com/questions/79080117/openpyxl-is-not-able-to-understand-my-2-sub-header
I have an Excel sheet with data: when I try to create a chart like this [clustered chart] openpyxl is not able to read the data properly from openpyxl.chart import BarChart, Reference from openpyxl import load_workbook, workbook wb = load_workbook("Book1.xlsx") ws = wb.active data = Reference(ws, min_col=2, min_row=1...
You can add each row as a separate Series. For each row m2 to m10 add as a Series after the initial data row m1 is added. The categories which contains the two level Header is taken from the Rows 1 & 2. Then enable 'Multi-Level Category Labels' for the X Axis. The code sample below assumes your data is contained in the...
3
4
79,079,685
2024-10-11
https://stackoverflow.com/questions/79079685/tkinter-text-widget-unexpected-overflow-inside-grid-manager
Assume a simple 3row-1col grid, where 2nd widget is a label, while 1st and 3rd widgets are Text. sticky and weight settings are most certainly correct. Grid dimensions are defined and shouldn't be dictated by its content. The problem is that Texts in 1st and 3rd rows share the space as if the Label in the 2nd row didn'...
Overview The factors that contribute to the problem: You didn't specify a width or height for the text widget, so it will default to 80x24 characters wide/tall. You are forcing the window to be a specific size that is too small to fit everything at their requested size. grid will attempt to fit everything into the win...
2
3
79,079,693
2024-10-11
https://stackoverflow.com/questions/79079693/how-to-reset-a-key-in-a-defaultdict-many-times-in-a-row
d = defaultdict(str) d['one'] = 'hi' del d['one'] del d['one'] The second del raises a KeyError. d.pop('one') has the same problem. Is there a concise way to make the defaultdict reset-to-default a keypair? if 'one' in d: del d['one'] is more verbose than I would like.
The below should work (pop with None) The pop() method allows you to specify a default value that gets returned if the key is not found, avoiding a KeyError exception. from collections import defaultdict d = defaultdict(str) d['one'] = 'hi' d.pop('one', None) d.pop('one', None) print('Done')
2
5
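The idempotent reset in a runnable form, including what happens when the key is read back afterwards:

```python
from collections import defaultdict

d = defaultdict(str)
d["one"] = "hi"

# pop with a default never raises, even if the key is already gone:
d.pop("one", None)
d.pop("one", None)

# Reading the key afterwards re-creates it via the str factory:
print(repr(d["one"]))
```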
79,078,515
2024-10-11
https://stackoverflow.com/questions/79078515/pandas-vectorized-function-to-find-time-to-grow-n-from-starting-cell
I have a pandas DataFrame with a time series (in this case price of a used car model) and am looking for a vectorized function to map each cell to the time it takes the price to grow n percent from that cell (if it never reaches n% more then return nan) It should in theory be possible to execute in a vectorized way as ...
Another option is to use numba (you can even easily parallelize the task): import numba @numba.njit(parallel=True) def search(price, n, out): for idx in numba.prange(len(price)): p = price[idx] search_for = p * n for idx2, v in enumerate(price[idx:]): if v >= search_for: out[idx] = idx2 break df["out"] = np.nan search(...
1
2
79,078,287
2024-10-11
https://stackoverflow.com/questions/79078287/groupby-a-df-column-based-on-other-column-and-add-a-default-value-to-everylist
I have a df which has 2 columns, let's say Region and Country. Region Country ================================ AMER US AMER CANADA APJ INDIA APJ CHINA I have grouped the unique Country list for each Region using the code and o/p like below: df.drop_duplicates().groupby("Region")['Country'].agg(lambda x: sorted(x.unique...
You can add ['ALL'] in your agg: (df.drop_duplicates().groupby("Region")['Country'] .agg(lambda x: sorted(x.unique().tolist())+['ALL']) .to_dict() ) Also note you don't really need agg and could use a dictionary comprehension: out = {k: sorted(v.unique().tolist())+['ALL'] for k, v in df.drop_duplicates().groupby("...
2
4
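The agg variant from the answer, run end-to-end on the question's sample data:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "Region": ["AMER", "AMER", "APJ", "APJ"],
        "Country": ["US", "CANADA", "INDIA", "CHINA"],
    }
)

# Appending ['ALL'] inside the agg lambda adds the default entry to every
# region's sorted country list in one pass:
out = (
    df.drop_duplicates()
    .groupby("Region")["Country"]
    .agg(lambda x: sorted(x.unique().tolist()) + ["ALL"])
    .to_dict()
)
print(out)
```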
79,077,573
2024-10-11
https://stackoverflow.com/questions/79077573/sort-all-rows-with-a-certain-value-in-a-group-to-the-last-place-i-the-group
I try to sort all rows with a certain value in a group to the last place in every group. data = {'a':[1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 3, 3, 3, 3], 'b':[100, 300, 200, 222, 500, 300, 222, 100, 200, 222, 300, 500, 400, 100], 'c':[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]} df1 = pd.DataFrame(data) df1 Out[29]: a b c 0 ...
Use double DataFrame.sort_values - first by b with key parameter and then by a column with kind parameter: out = (df1.sort_values('b', key = lambda x: x==222) .sort_values('a', ignore_index=True, kind='stable')) print (out) a b c 0 1 100 1 1 1 300 2 2 1 200 3 3 1 500 5 4 1 222 4 5 2 300 6 6 2 100 8 7 2 222 7 8 3 200 9 ...
3
4
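The double-sort idea on a trimmed-down version of the question's data; kind='stable' is added to the first sort as well, so the order of equal keys is guaranteed at both steps:

```python
import pandas as pd

df1 = pd.DataFrame(
    {
        "a": [1, 1, 1, 2, 2, 2],
        "b": [100, 222, 300, 222, 300, 100],
        "c": [1, 2, 3, 4, 5, 6],
    }
)

# key makes rows with b == 222 compare as True, i.e. sort last; the second
# stable sort by 'a' then preserves that within-group ordering.
out = (
    df1.sort_values("b", key=lambda s: s == 222, kind="stable")
    .sort_values("a", kind="stable", ignore_index=True)
)
print(out["b"].tolist())
```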
79,076,480
2024-10-11
https://stackoverflow.com/questions/79076480/groupby-and-aggregate-based-on-condition
My input data: df=pd.DataFrame({'ID':['A','B','C','D'], 'Group':['group1','group1','group2','group2'], 'Flag_1':[1,0,0,1], 'Flag_2':[1,1,0,1], 'Value':[30,40,60,70] }) I am trying to add up "Value" per group when flag is equal to 1. My expected output is: df_value_group=pd.DataFrame({ 'Flag_1 Sum':[1,1], 'Flag_2 Sum':...
For a generic approach, you could use a custom groupby.agg (named aggregation): cols = df.columns[df.columns.str.startswith('Flag_')] val = df['Value'] out = (df.groupby('Group', as_index=False) .agg(**({f'{c} Sum': (c, lambda x: x.sum()) for c in cols} |{f'Value{c[4:]} Sum': (c, lambda x: val[x.index][x==1].sum()) for...
1
1
79,075,564
2024-10-10
https://stackoverflow.com/questions/79075564/what-is-the-best-way-to-fit-a-quadratic-polynomial-to-p-dimensional-data-and-com
I have been trying to use the scikit-learn library to solve this problem. Roughly: from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression # Make or load an n x p data matrix X and n x 1 array y of the corresponding # function values. poly = PolynomialFeatures(degree=2) Xp...
To compute the gradient or the Hessian of a polynomial, one needs to know exponents of variables in each monomial and the corresponding monomial coefficients. The first piece of this information is provided by poly.powers_, the second by model.coef_: from sklearn.preprocessing import PolynomialFeatures from sklearn.lin...
6
3
79,075,204
2024-10-10
https://stackoverflow.com/questions/79075204/qt-pyside6-why-is-worker-signal-stop-not-received-in-worker-thread
I've read the following material to get a better understanding what things must be considered when working with threads in Qt: https://doc.qt.io/qt-6/thread-basics.html https://wiki.qt.io/Threads_Events_QObjects https://www.haccks.com/posts/how-to-use-qthread-correctly-p1/ https://www.haccks.com/posts/how-to-use-qthre...
First of all, you have to remember that PySide (as PyQt, on which it is based) is a Python binding around the Qt library, which is a compiled library written in C++: this means that the "Qt side" has absolutely no knowledge of what Python does on its own, unless it directly interacts with the library. According to mov...
2
2
79,072,098
2024-10-9
https://stackoverflow.com/questions/79072098/solving-leetcodes-1813-sentence-similarity-iii-using-regexes
I'm trying to solve this problem from Leetcode using regexes (just for fun): You are given two strings sentence1 and sentence2, each representing a sentence composed of words. A sentence is a list of words that are separated by a single space with no leading or trailing spaces. Each word consists of only uppercase and...
My idea needs the strings to be sorted descending by length before using regex. Then concatenate s1 and s2 by newline and check what single part of s1 could be omitted to match s2 (in next line). import re def isSimilar(s1, s2): regex = r'(?is)\A(?!.* )(\b[a-z ]*)(\b[a-z ]*)(\b[a-z ]*)\n\1\b\3\Z' # sort strings by leng...
2
2
79,071,739
2024-10-9
https://stackoverflow.com/questions/79071739/optimizing-variable-combinations-to-maximize-a-classification
I am working with a dataset where users interact via an app or a website, and I need to determine the optimal combination of variables (x1, x2, ... xn) that will maximize the number of users classified as "APP Lovers." According to the business rule, a user is considered an "APP Lover" if they use the app more than 66%...
Here's my suggestion, take the data: df = pl.DataFrame({ "id": [1, 2, 3, 1, 2, 3, 1, 2, 3], "variable": ["x1", "x1", "x1", "x2", "x2", "x2", "x3", "x3", "x3"], "favorite": ["APP", "APP", "WEB", "APP", "WEB", "APP", "APP", "APP", "WEB"] }) and pivot it such that column xi is true if user id uses that action primarily t...
1
2
79,074,775
2024-10-10
https://stackoverflow.com/questions/79074775/issue-with-latex-rendering-in-the-title-of-colorbar-plots-using-pythons-matplot
I am facing an issue with the title of the following colorbar plots using Python's Matplotlib library. There are two subplots. The title of the first one works well, i.e., LaTeX rendering is done successfully. However, it returns an error for the second one.

    fig, axs = plt.subplots(1, 2, figsize=(24, 10))
    c1 = axs[0].c...
This is LaTeX syntax clashing with the behavior of Python's .format() method. The latter looks for curly brackets {...} in the string and operates on them, but there are two sets of curly brackets in r'$\sqrt{U_c^2 + V_c^2}$ at t={}', and the first contains something that does not fit the .format() syntax. I suggest y...
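If the goal is to keep using .format(), one common fix (shown here as a standalone sketch, independent of the plotting code) is to double the literal LaTeX braces so only the placeholder pair is left for formatting:

```python
# {{ and }} are escaped braces for str.format(); the bare {} receives t.
t = 0.25
title = r'$\sqrt{{U_c^2 + V_c^2}}$ at t={}'.format(t)
print(title)  # $\sqrt{U_c^2 + V_c^2}$ at t=0.25
```

(f-strings use the same escaping rule, so the double braces work there too.)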
1
2
79,074,215
2024-10-10
https://stackoverflow.com/questions/79074215/how-to-draw-a-rectangle-at-x-y-in-a-pyqt-graphicsview
I'm learning Python and Qt, and as an exercise I designed a simple window in QT Designer with a QGraphicsView which would represent a stack data structure. It should hold the items on the stack as a rectangle with a label representing the item, but my problem is that I can't position the rectangle at (x, y). Google and t...
    from PyQt5 import uic
    from PyQt5.QtCore import *
    from PyQt5.QtWidgets import *
    from PyQt5.QtCore import Qt

    class Ui(QMainWindow):
        def __init__(self):
            super().__init__()
            uic.loadUi('Stack.ui', self)
            self.scene = QGraphicsScene()
            self.graphicsView.setScene(self.scene)
            self.graphicsView.setSceneRect(0, 0, 250, 250)  # adju...
1
1
79,074,164
2024-10-10
https://stackoverflow.com/questions/79074164/is-there-a-way-to-run-a-file-that-is-in-a-directory-with-a-special-character-in
My directory structure is C:\Users\...\[MATH] foldername So to change directories, as per some SE post I saw, I have to use: cd 'C:\Users\...\`[MATH`] foldername' which indeed changes to the required directory. Once here, I use py -m venv Project, and it creates the Project folder with the venv stuff correctly. When I ...
Python's bundled venv module (v3.4+), as of v3.12.3, has a bug that prevents it from working properly in directories whose names contain [ / ] in PowerShell. These characters are metacharacters (characters with special meaning) in PowerShell wildcard expressions, and arguments passed to the -Path parameter (which is al...
3
5
79,072,235
2024-10-09
https://stackoverflow.com/questions/79072235/plot-a-partially-transparent-plane-in-matplotlib
I want to plot a sequence of three colormaps in a 3D space, with a line crossing all the planes of the colormaps, as shown in the figure below. https://i.sstatic.net/65yOib6B.png To do that, I am using mpl.plot_surface to generate the planes and LinearSegmentedColormap to create a colormap that transitions from transpa...
I think there's nothing you could have done better in matplotlib, great job! I think to solve your problem, it is better to change the library and approach your problem using plotly. Please see my code:

    import plotly.graph_objects as go
    import numpy as np

    # Testing Data
    sigma = 1.0
    mu = np.linspace(0, 2, 10)
    x = np.lin...
1
2
79,072,427
2024-10-10
https://stackoverflow.com/questions/79072427/using-re-to-match-a-digit-any-contiguous-duplicates-and-storing-the-duplicates
I'm trying to use re.findall(pattern, string) to match all numbers and however many duplicates follow in a string. E.g. "1222344" matches "1", "222", "3", "44". I can't seem to find a pattern to do so though. I tried using the pattern "(\d)\1+" to match a digit 1 or more times, but it doesn't seem to be working. But when...
You're on the right track, but your pattern (\d)\1+ actually matches two or more contiguous digits: the first digit is matched by \d, and then the + quantifier matches one or more further copies of that digit. So what you want is (\d)\1*, where the * matches zero or more repeats of the previous digit. The other thing that is perhaps confu...
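A sketch of the corrected pattern; `re.finditer` is used so the full run is available via `group(0)` (plain `re.findall` with a capture group would return only the captured first digit):

```python
import re

s = "1222344"
# (\d) captures one digit; \1* then matches zero or more repeats of it.
runs = [m.group(0) for m in re.finditer(r'(\d)\1*', s)]
print(runs)  # ['1', '222', '3', '44']
```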
2
3
79,072,381
2024-10-10
https://stackoverflow.com/questions/79072381/python-if-statements-containing-multiple-boolean-conditions-how-is-flow-hand
I'm curious about how Python handles "if" statements with multiple conditions. Does it evaluate the total boolean expression, or will it "break" from the expression with the first False evaluation? So, for example, if I have:

    if (A and B and C):
        Do_Something()

Will it evaluate "A and B and C" to be True/False (obvious...
It's the latter.

    if A and B and C:
        Do_Something()

is equivalent to

    if A:
        if B:
            if C:
                Do_Something()

This behavior is called short-circuit evaluation.
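A small demonstration of the short-circuiting; the `check` helper is purely illustrative (not from the question):

```python
calls = []

def check(name, value):
    """Record that this condition was evaluated, then return its value."""
    calls.append(name)
    return value

if check("A", True) and check("B", False) and check("C", True):
    pass

print(calls)  # ['A', 'B'] -- C was never evaluated
```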
3
3
79,072,035
2024-10-09
https://stackoverflow.com/questions/79072035/python-datetime-format-utc-time-zone-offset-with-colon
I'm trying to format a datetime object in Python using either the strftime method or an f-string. I would like to include the time zone offset from UTC with a colon between the hour and minute. According to the documentation, a format code, %:z, may be available to do exactly what I want. The documentation does warn, h...
Per documentation:

    Added in version 3.12: %:z was added.

Example on Windows 10:

    Python 3.12.6 (tags/v3.12.6:a4a2d2b, Sep 6 2024, 20:11:23) [MSC v.1940 64 bit (AMD64)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import datetime as dt
    >>> print(f"{dt.datetime.now().astimezone():%...
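A runnable sketch (the version guard is my addition; `%:z` only exists on Python 3.12+, while `isoformat()` has produced a colon-separated offset on every supported version):

```python
import datetime as dt
import sys

tz = dt.timezone(dt.timedelta(hours=-4))
ts = dt.datetime(2024, 10, 9, 12, 0, tzinfo=tz)

if sys.version_info >= (3, 12):
    # %:z was added in 3.12; older versions reject it or pass it
    # through unchanged, depending on the platform.
    print(ts.strftime("%Y-%m-%dT%H:%M%:z"))  # 2024-10-09T12:00-04:00

# isoformat() is a version-independent way to get the colon:
print(ts.isoformat())  # 2024-10-09T12:00:00-04:00
```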
1
2
79,069,211
2024-10-09
https://stackoverflow.com/questions/79069211/pandas-groupby-show-non-matching-values
I have the following dataframe:

    data = [['123456ABCD234567', 'A'],
            ['8502', 'A'],
            ['74523654894WRZI3', 'B'],
            ['85CGNK6987541236', 'B'],
            ['WF85Z4HJ95R4CF2V', 'C'],
            ['VB52FG85RT74DF96', 'C'],
            ['WERTZ852146', 'D'],
            ['APUNGF', 'D']
            ]
    df = pd.DataFrame(data, columns=['CODE', 'STOCK'])
    df

    CODE STOCK
    0 123456ABCD234567 A
    1 85...
You could create a boolean column for values not matching the condition and groupby.transform with all to identify the STOCK with all non matching rows:

    out = df[df['CODE'].str.len().ne(16).groupby(df['STOCK']).transform('all')]

Output:

    CODE STOCK
    6 WERTZ852146 D
    7 APUNGF D

Intermediates:

    CODE STOCK str.len ne(16) ...
2
2
79,049,807
2024-10-03
https://stackoverflow.com/questions/79049807/genetic-algorithm-for-kubernetes-allocation
I am trying to allocate Kubernetes pods to nodes using a genetic algorithm, where each pod is assigned to one node. Below is my implementation:

    from string import ascii_lowercase
    import numpy as np
    import random
    from itertools import compress
    import math
    import pandas as pd
    import random

    def create_pods_and_nodes(n_pod...
This is a straightforward bin packing problem. https://en.wikipedia.org/wiki/Bin_packing_problem Why tackle it with a genetic algorithm!?! That is going to be horribly slow, especially if you use python. Implementing a standard bin packing algorithm in a native language with a decent optimizing compiler will give perfo...
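For contrast, here is a minimal first-fit-decreasing sketch in pure Python (a single CPU dimension with made-up capacities; a real scheduler would also track memory and other resources):

```python
def first_fit_decreasing(pod_cpu, node_capacity):
    """Assign each pod to the first node with enough remaining CPU.

    Returns {pod_index: node_index}; raises if some pod cannot fit.
    """
    remaining = list(node_capacity)
    assignment = {}
    # Classic FFD heuristic: place the largest pods first.
    for pod in sorted(range(len(pod_cpu)), key=lambda i: -pod_cpu[i]):
        for node, free in enumerate(remaining):
            if pod_cpu[pod] <= free:
                remaining[node] -= pod_cpu[pod]
                assignment[pod] = node
                break
        else:
            raise ValueError(f"pod {pod} does not fit on any node")
    return assignment

print(first_fit_decreasing([4, 3, 2, 2, 1], [6, 6]))  # {0: 0, 1: 1, 2: 0, 3: 1, 4: 1}
```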
2
3
79,057,745
2024-10-05
https://stackoverflow.com/questions/79057745/cant-create-objects-on-start
I created a function and want to run it automatically on start. The function creates several objects. I have an error:

    AppRegistryNotReady("Apps aren't loaded yet.")

The reason is clear - the function imports objects from another application (parser_app). I am starting the app like this:

    gunicorn --bind 0.0.0.0:8000 core_app.wsgi...
Solved with the help of the ready() function in apps.py.

My solution:

    class ParserAppConfig(AppConfig):
        default_auto_field = 'django.db.models.BigAutoField'
        name = 'parser_app'

        def ready(self):
            from scripts.create_schedules import create_cron_templates
            create_cron_templates()
3
1
79,064,048
2024-10-08
https://stackoverflow.com/questions/79064048/issues-with-using-extra-index-url-in-uv-with-google-cloud-artifact-registr
I'm trying to create a uv project that uses an --extra-index-url with Google Cloud Artifact Registry. According to the uv documentation, this should be possible. I am using uv 0.4.18. Here's what I've tried so far:

    gcloud auth application-default login --project ${PROJECT_ID}
    uv venv
    source .venv/bin/activate
    uv pip in...
Pointing to the solution of the issue here: The code that works is:

    gcloud auth application-default login --project ${PROJECT_ID}
    uv venv
    source .venv/bin/activate
    uv pip install keyring keyrings.google-artifactregistry-auth
    uv pip install ${MY_PACKAGE} --keyring-provider subprocess --extra-index-url https://oauth2acce...
2
1
79,065,461
2024-10-08
https://stackoverflow.com/questions/79065461/typing-polars-dataframe-with-pandera-and-mypy-validation
I am considering pandera to implement strong typing in my project, which uses polars dataframes. I am puzzled about how I can type my functions correctly. As an example let's have:

    import polars as pl
    import pandera.polars as pa
    from pandera.typing.polars import LazyFrame as PALazyFrame

    class MyModel(pa.DataFrameModel):
        a: int ...
It's quite often an issue that underlying libraries don't express types as precisely as they could - fortunately there are a few ways around it:

1. The Cast Way

As discussed in the comments, using typing.cast is always an option. If an external library does not produce a specific enough type this is often wh...
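The cast approach in its simplest form (the names below are illustrative, not from pandera):

```python
from typing import cast

def load_raw() -> object:
    # Stand-in for a library call whose declared return type is too loose.
    return 42

# cast() only informs the type checker; at runtime it returns its
# argument unchanged, so pair it with validation if you don't trust it.
value: int = cast(int, load_raw())
print(value)  # 42
```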
3
3
79,058,740
2024-10-06
https://stackoverflow.com/questions/79058740/how-to-process-data-internally-so-that-it-becomes-equivalent-to-what-it-would-be
I have this string: "birthday_balloons.\u202egpj" If I execute print("birthday_balloons.\u202egpj") it outputs birthday_balloons.jpg Note how the last three characters are reversed. I want to process the string "birthday_balloons.\u202egpj" in such a way that I get the string "birthday_balloons.jpg", with the order of...
U+202E is RIGHT-TO-LEFT OVERRIDE (RLO), it marks the start of a bidirectional override forcing the following text to be rendered right-to-left regardless of the direction of the characters. It is closed by U+202C POP DIRECTIONAL FORMATTING (PDF). Its presence in a filename would be indicative of malicious intent, in a ...
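A hedged sketch for detecting and stripping such controls (note this keeps the logical order, so the extension stays "gpj"; recovering the displayed "jpg" would require actually running the bidi reordering):

```python
import unicodedata

name = "birthday_balloons.\u202egpj"

# U+202E and the other bidi controls fall in Unicode category "Cf"
# (format characters), so they are easy to detect and filter out.
has_controls = any(unicodedata.category(ch) == "Cf" for ch in name)
cleaned = "".join(ch for ch in name if unicodedata.category(ch) != "Cf")

print(has_controls)  # True
print(cleaned)       # birthday_balloons.gpj (logical order, not display order)
```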
3
1
79,063,494
2024-10-07
https://stackoverflow.com/questions/79063494/how-to-accelerate-getting-points-within-distance-using-two-dataframes
I have two DataFrames (df and locations_df), and both have longitude and latitude values. I'm trying to find the df's points within 2 km of each row of locations_df. I tried to vectorize the function, but the speed is still slow when locations_df is a big DataFrame (nrows>1000). Any idea how to accelerate?

    import panda...
You need to use a spatial index to make this fast... You can accomplish that like this:

- convert your locations_df to a GeoDataFrame with polygons the size of your search distance by buffering them with this distance. As you don't seem to be working in a projected crs, check out this post how to do this: buffer circle ...
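The spatial-index idea can be illustrated dependency-free with a coarse grid (a toy sketch in planar coordinates; the real solution should use geopandas' sindex or a KD-tree, plus a projected CRS for metric distances):

```python
from collections import defaultdict

def build_grid(points, cell):
    """Bucket (x, y) points into square cells of side `cell`."""
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // cell), int(y // cell))].append(i)
    return grid

def within(points, grid, cell, qx, qy, dist):
    """Indices of points within `dist` of (qx, qy), scanning only nearby cells."""
    cx, cy = int(qx // cell), int(qy // cell)
    reach = int(dist // cell) + 1
    hits = []
    for gx in range(cx - reach, cx + reach + 1):
        for gy in range(cy - reach, cy + reach + 1):
            for i in grid.get((gx, gy), ()):
                x, y = points[i]
                if (x - qx) ** 2 + (y - qy) ** 2 <= dist ** 2:
                    hits.append(i)
    return sorted(hits)

pts = [(0.0, 0.0), (1.5, 0.0), (10.0, 10.0)]
grid = build_grid(pts, cell=2.0)
print(within(pts, grid, 2.0, 0.0, 0.0, 2.0))  # [0, 1]
```

Only the handful of cells around each query point are scanned, which is what turns the all-pairs O(n·m) comparison into something close to linear.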
2
2
79,067,886
2024-10-08
https://stackoverflow.com/questions/79067886/inner-kws-having-no-effect-on-seaborn-violin-plot
I generated a bunch of violin plots, here is an example of one and the code that generates it:

    plt.figure(figsize=(8, 4))
    ax = sns.violinplot(
        x=data,               # `data` is a few thousand float values between 0 and 1
        orient='h',
        color=get_color(ff),  # `get_color` returns a color based on the dataset, #FFBE0B in this case
        cut=0
    )...
The inner_kws argument was introduced in version 0.13.0; if you have an older version of seaborn installed it has no effect. I had seaborn v0.12.2 (installed via conda) and your example printed with normal boxplot dimensions until I upgraded seaborn to v0.13.2. E.g.

    #!/usr/bin/env python
    import random
    import seaborn as...
1
2