Column schema (7 columns):
question_id: int64, range 59.5M to 79.7M
creation_date: string (date), 2020-01-01 00:00:00 to 2025-07-15 00:00:00
link: string, length 60 to 163 characters
question: string, length 53 to 28.9k characters
accepted_answer: string, length 26 to 29.3k characters
question_vote: int64, range 1 to 410
answer_vote: int64, range -9 to 482
Each record below lists: question_id, creation_date, link, question, accepted_answer, question_vote, answer_vote.
79,294,496
2024-12-19
https://stackoverflow.com/questions/79294496/writing-windows-registry-with-backslash-in-sub-key-name-in-python
I'm trying to write into the Windows registry key "SOFTWARE/Microsoft/DirectX/UserGpuPreferences" using the Python winreg module. The name of the sub-key needs to be the python executable path, which contains backslashes. However when using winreg.SetValue(), this instead adds a tree of keys following the components of...
It works correctly with SetValueEx()
1
2
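A minimal sketch of the SetValueEx() approach the answer points to, assuming HKEY_CURRENT_USER, an example interpreter path, and a placeholder preference string (all hypothetical):

```python
import winreg

exe_path = r"C:\Python312\python.exe"  # hypothetical executable path
with winreg.CreateKey(
    winreg.HKEY_CURRENT_USER,
    r"SOFTWARE\Microsoft\DirectX\UserGpuPreferences",
) as key:
    # SetValueEx writes a named value, so the backslashes in the name are
    # kept literal instead of being split into a tree of sub-keys
    winreg.SetValueEx(key, exe_path, 0, winreg.REG_SZ, "GpuPreference=2;")
```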
79,291,758
2024-12-18
https://stackoverflow.com/questions/79291758/trying-to-time-the-in-operator
I have a question about how the in operator works internally. So to test it I tried to graph the time it takes to find a member of a list, in different parts of the list. import time import matplotlib.pyplot as plt # list size size = 100000 # number of points n = 100000 # display for not having to count zeros if size >...
This test seems to suggest to me that searching for a number in a list takes longer and longer based on where the item falls in the list. import timeit setup = """ size = 1_000_000 size_minus_one = size - 1 data = list(range(size)) """ print(timeit.timeit("0 in data", setup=setup, number=1_000)) print(timeit.timeit("si...
2
2
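A compact, runnable version of the answer's benchmark, as a sketch; the second timing should be far larger, confirming the linear scan of list membership tests:

```python
import timeit

setup = "size = 1_000_000; data = list(range(size))"
# membership near the front is found almost immediately...
print(timeit.timeit("0 in data", setup=setup, number=1_000))
# ...while membership at the back scans the whole list
print(timeit.timeit("(size - 1) in data", setup=setup, number=1_000))
```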
79,293,171
2024-12-19
https://stackoverflow.com/questions/79293171/import-of-a-function-from-nested-structure
Consider this directory structure: a-a | b | c | print.py Basically a-a/b/c, with print.py inside that directory. The contents of print.py look like this: def print_5(): print("5") def print_10(): print("10") I want to import and use these functions in my current file at the level of a-a. Structure: ls a-a test.py ...
Assuming a-a is the root directory, you first need to update to a_a as you have already done. Python modules cannot contain -. You should also add an __init__.py in each directory to mark it as package that can be imported. Your directory structure should look like: a_a/ ├── __init__.py ├── b/ │ ├── __init__.py │ ├── c...
2
2
79,290,203
2024-12-18
https://stackoverflow.com/questions/79290203/groupby-a-df-column-based-on-more-than-3-columns
I have a df which has 3 columns: Region, Country and AREA_CODE. Region Country AREA_CODE AREA_SUB_CODE_1 AREA_SUB_CODE_2 =========================================================================== AMER US A1 A1_US_1 A1_US_2 AMER CANADA A1 A1_CA_1 A1_CA_2 AMER US B1 B1_US_1 B1_US_2 AMER US A1 A1_US_1 A1_US_2 Is there ...
I think we can achieve this with the following recursive function: f = lambda s: ({k: f(s[k]) for k in s.index.levels[0]} if s.index.nlevels > 1 else {k: s.loc[[k]].unique().tolist() for k in s.index.unique()}) Here, s is expected to be a pandas.Series with hierarchical indexing. At each indexing level, we map the key...
2
2
79,291,770
2024-12-18
https://stackoverflow.com/questions/79291770/fill-pandas-columns-based-on-datetime-condition
Here is the sample code to generate a dataframe. import pandas as pd import numpy as np dates = pd.date_range("20241218", periods=9600,freq='1MIN') df = pd.DataFrame(np.random.randn(9600, 4), index=dates, columns=list("ABCD")) I want to fill all the columns with -1 for time between 1:35 to 1:45 for all the dates. Simi...
Try: df.loc[df.between_time('01:35', '01:45').index] = -1 df.loc[df.index.time == pd.Timestamp('01:00').time()] = -2 Output can be verified with the similar: print(df.between_time('1:35', '1:45').head(15) ) print(df.loc[df.index.time == pd.Timestamp('01:00').time()])
1
2
79,291,557
2024-12-18
https://stackoverflow.com/questions/79291557/python-pyproject-toml-arch-dependency-solved-on-install
In pyproject.toml we have an optional-dependencies entry for a Windows package: [project.optional-dependencies] windows = [ "pywinpty>=2.0.14" ] To install: # on windows pip install .[windows] # on linux/mac we use the enclosed pty pip install . Is it possible to have pip install . do this check automatically? Or uv pip install . ...
You can use PEP 496 – Environment Markers and PEP 508 – Dependency specification for Python Software Packages; they're usable in setup.py, setup.cfg, pyproject.toml, requirements.txt. In particular in pyproject.toml: [project] dependencies = [ "pywinpty>=2.0.14; sys_platform == 'win32'" ]
1
3
79,289,700
2024-12-18
https://stackoverflow.com/questions/79289700/how-to-add-labels-to-3d-plot
I have the following code which generates a 3D plot. I am trying to label the plot lines, but they are not ending up where I expect them. import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D import numpy as np # Data names = [ "A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L" ] wins = [ [0, ...
It is easier to separate the drawing and the text annotation: # Plot each line for i, y_week in enumerate(y): ax.plot(x, wins_array[i], zs=y_week, zdir='x', label=weeks[i]) # Add shaded area below the line ax.bar(x, wins_array[i], zs=y_week, zdir='x', alpha=0.1) # Add labels directly on the curves for i in y: for j, k ...
2
1
79,279,190
2024-12-13
https://stackoverflow.com/questions/79279190/how-to-conditionally-format-data-in-great-tables
I am trying to conditionally format table data using Great Tables but am not sure how to do it. I want to highlight (as a sort of heatmap) all those cells whose values are higher than the Upper Range column. Data: import polars as pl gt_sample_df = pl.DataFrame({'Test': ['Test A','Test B','Test C','Test D','Test Z','Test E','...
Edit: As of the last Great Tables release, this is now more natively supported with a mask argument to loc.body See the PR for further detail. import polars as pl import polars.selectors as cs from great_tables import GT, loc, style # define `gt_sample_df` as per example snippet required_columns = gt_sample_df.drop("Te...
3
2
79,280,500
2024-12-14
https://stackoverflow.com/questions/79280500/how-to-save-the-clicked-map-coordinates-in-a-reactive-variable
I have a Shiny for Python app that I would like to make interactive to allow user to see various statistics depending on where they click on the map. I am using folium to display the map. I can't find a way to return the coordinates of the clicked spot back to shiny for further processing. I found this process relative...
You can add a folium.MacroElement() to the map which captures the coordinates of the click location and sends them to Shiny. It can use a jinja2.Template() having this content: {% macro script(this, kwargs) %} function getLatLng(e){ var lat = e.latlng.lat.toFixed(6), lng = e.latlng.lng.toFixed(6); parent.Shiny.setInput...
3
3
79,288,622
2024-12-17
https://stackoverflow.com/questions/79288622/django-vs-code-custom-model-manager-method-typing
I'm trying custom model managers to add annotations to querysets. My problem, which started as a little annoyance but I now realize can be an actual problem, is that VS Code does not recognise the methods defined in the custom model manager/queryset. Example: from django.db import models from rest_framework.generics im...
Add a few type annotations to your class methods: from django.db import models from rest_framework.generics import ListAPIView # models.py class CarQuerySet(models.QuerySet): def with_wheels(self) -> "CarQuerySet": # quotes here pass class CarManager(models.Manager): def get_queryset(self) -> CarQuerySet: # update the...
1
2
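A self-contained sketch of the annotations the answer describes, to be placed inside a configured Django app; the annotate() body, model, and field names are stand-ins:

```python
from django.db import models

class CarQuerySet(models.QuerySet):
    def with_wheels(self) -> "CarQuerySet":  # quoted forward reference
        return self.annotate(wheels=models.Value(4))

class CarManager(models.Manager):
    def get_queryset(self) -> CarQuerySet:  # tells VS Code what comes back
        return CarQuerySet(self.model, using=self._db)

class Car(models.Model):
    objects = CarManager()
```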
79,285,419
2024-12-16
https://stackoverflow.com/questions/79285419/deployed-my-fastapi-app-to-azure-but-cannot-access-the-routes
I successfully deployed my FastAPI app to Azure, but when I try to access the routes, it says 404 not found. However, when I tested the same routes locally, they worked. My db is hosted on Azure. I tried adding a startup.sh file with this command: gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app, without any luck...
I've successfully deployed your code to Azure Web App and have been able to access the routes. I configured the below startup command in my Azure App Service, and then it worked. gunicorn --worker-class uvicorn.workers.UvicornWorker --timeout 600 --access-logfile '-' --error-logfile '-' main:app My workflow file: nam...
2
1
79,289,598
2024-12-17
https://stackoverflow.com/questions/79289598/finding-all-1-d-arrays-within-a-numpy-array
Given a numpy array of dimension n with each direction having length m, I would like to iterate through all 1-dimensional arrays of length m. For example, consider: import numpy as np x = np.identity(4) array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]) then I would like to find all 1-...
Although the comprehensions are readable, for large arrays you probably want to use what numpy gives you: import numpy as np n = 4 # random n*n numpy array arr = np.random.rand(n, n) print(arr) # this has all the data you need, relatively efficiently - but not in 1D shape result = ( np.vsplit(arr, n) + np.hsplit(arr, n...
2
1
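A sketch of the vsplit/hsplit idea for a square array, flattened to true 1-D arrays (n is an assumed size):

```python
import numpy as np

n = 4
arr = np.random.rand(n, n)
# every row and every column of the array as a 1-D array of length n
lines = [r.ravel() for r in np.vsplit(arr, n)] + \
        [c.ravel() for c in np.hsplit(arr, n)]
```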
79,288,467
2024-12-17
https://stackoverflow.com/questions/79288467/python-selenium-impossible-to-close-a-frame-using-xpath-or-class-name
I'm trying to close a frame on this page. What I want is to click in here: It seems to be easy, but so far the following code (which should work) has failed: import selenium.webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support imp...
When you do driver.find_element(By.CLASS_NAME, 'sc-843139d2-14 iVPGqd'), Selenium translates (poorly, by adding one dot) your class name into a selector, so it becomes driver.find_element(By.CSS_SELECTOR, '.sc-843139d2-14 iVPGqd'), which means: find an iVPGqd tag (yes, a tag) contained within something with a class sc-8431...
1
1
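A sketch of the fix: 'sc-843139d2-14 iVPGqd' is two classes, so it must be written as a CSS selector with a dot before each class and no space between them (the driver setup and URL are placeholders):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()         # assumes a local driver setup
driver.get("https://example.com")   # placeholder URL
# both class names, each prefixed with a dot, no space in between
button = driver.find_element(By.CSS_SELECTOR, ".sc-843139d2-14.iVPGqd")
button.click()
```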
79,275,886
2024-12-12
https://stackoverflow.com/questions/79275886/speed-up-numpy-looking-for-best-indices
I have a numpy array that maps x-y-coordinates to the appropriate z-coordinates. For this I use a 2D array that represents x and y as its axes and contains the corresponding z values: import numpy as np x_size = 2000 y_size = 2500 z_size = 400 rng = np.random.default_rng(123) z_coordinates = np.linspace(0, z_size, y_si...
This answer provides an algorithm with optimal complexity: O(x_size * (y_size + z_size)). This algorithm is the fastest one proposed so far (by a large margin). It is implemented in Numba using multiple threads. Explanation of the approach: The idea is that there is no need to iterate over all Z values: we can itera...
8
3
79,289,546
2024-12-17
https://stackoverflow.com/questions/79289546/type-hints-lost-when-a-decorator-is-wrapped-as-a-classmethod
Consider the following code: from typing import Any, Callable, Coroutine class Cache[**P, R]: @classmethod def decorate(cls, **params): def decorator(f: Callable[P, Coroutine[Any, Any, R]]) -> Callable[P, Coroutine[Any, Any, R]]: return f # in the real world, we instantiate a Cache here return decorator @Cache.decorate...
decorator depends on two contextual type variables: P and R. In the context of the Cache class, these are not known, but assumed to be known by Pyright at call time. However, when Cache.decorate() is called, Pyright does not get enough information to resolve P and R (no explicit type arguments and no arguments), so the...
1
2
79,289,373
2024-12-17
https://stackoverflow.com/questions/79289373/are-there-alternatives-to-a-for-loop-when-parsing-free-text-in-python-pyspark
I have to read in data in Databricks Python/PySpark but the format is not the usual CSV or JSON so I have to iterate over a for loop. As a result it's very slow. The data looks like this, for millions of rows. It's not the same format each row, although there are certain common formats: HEADER0123 a bunch of spaces ACC...
You're looking for substring(). my_list = [] input = raw_text.collect() for row in input: line = row[0].strip() header = line[0:6] acct = line[6:9] my_list.append((header, acct)) df = spark.createDataFrame(my_list, "header string, acct int") is the same as df = ( raw_text .withColumn('header', F.substring('value', 0, 6)) .w...
1
3
79,277,656
2024-12-13
https://stackoverflow.com/questions/79277656/error-post-got-an-unexpected-keyword-argument-proxies
I'm using youtube-search-python to get the URLs of an array with song names but this error keeps popping up: post() got an unexpected keyword argument 'proxies' I'm new to Python and I've been looking around but I couldn't find anything useful for fixing this error (at least nothing that I understood). This is the code that is...
I believe you are using httpx version 0.28.0 or above. In this version, the post method really doesn't have the proxies parameter declared. Compare this: >>> httpx.__version__ '0.27.2' >>> 'proxies' in inspect.getargs(httpx.post.__code__).args True versus >>> httpx.__version__ '0.28.1' >>> 'proxies' in inspect.getargs...
3
6
79,281,902
2024-12-15
https://stackoverflow.com/questions/79281902/regex-to-match-a-whole-number-not-ending-in-some-digits
I've not been able to construct a pattern which can return whole numbers that don't end in a sequence of digits. The numbers could be of any length, even single digits, but they will always be whole. Additionally, multiple numbers could be on the same line of text and I want to match them all. The numbers are always...
You can use a negative lookbehind: .*(?<!a) ensures the string does not end with a. \d++(?<!175) Test here. Note that the possessive quantifier (++) was introduced in Python 3.11. Your 2nd approach from revision 1 was close, but not correct, since the greedy quantifier (+) would eat up all the digits, and then ...
1
3
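A runnable check of the pattern from the answer (Python 3.11+ for the possessive quantifier):

```python
import re

# numbers ending in 175 are rejected outright: the possessive \d++ cannot
# backtrack, and every later start position still trips the lookbehind
text = "123 4175 88 175 42"
print(re.findall(r"\d++(?<!175)", text))  # ['123', '88', '42']
```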
79,287,522
2024-12-17
https://stackoverflow.com/questions/79287522/compute-percentage-of-positive-rows-in-a-group-by-polars-dataframe
I need to compute the percentage of positive values in the value column grouped by the group column. import polars as pl df = pl.DataFrame( { "group": ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"], "value": [2, -1, 3, 1, -2, 1, 2, -1, 3, 2], } ) shape: (10, 2) ┌───────┬───────┐ │ group ┆ value │ │ --- ┆ --- │ │ st...
You could use a custom group_by.agg with Expr.ge and Expr.mean. This will convert the values to False/True depending on the sign, then compute the proportion of True by taking the mean: df.group_by('group').agg(positive_percent=pl.col('value').ge(0).mean()) Output: ┌───────┬──────────────────┐ │ group ┆ positive_perce...
2
3
79,286,464
2024-12-17
https://stackoverflow.com/questions/79286464/uv-python-packing-how-to-set-environment-variables-in-virtual-envrionments
How do I set environment variables in a virtual environment created by uv? I tried setting them in .venv/scripts/activate_this.py and it doesn't work.
You can tell the uv run command to load environment variables from a file, either by using the --env-file option: uv run --env-file=.env myscript.py Or by setting the UV_ENV_FILE environment variable: export UV_ENV_FILE=.env uv run myscript.py You will find more details in the documentation.
1
3
79,285,068
2024-12-16
https://stackoverflow.com/questions/79285068/setting-slice-of-column-to-list-of-values-on-polars-dataframe
In the code below I'm creating a polars- and a pandas dataframe with identical data. I want to select a set of rows based on a condition on column A, then update the corresponding rows for column C. I've included how I would do this with the pandas dataframe, but I'm coming up short on how to get this working with pola...
If you wrap the values in a pl.lit Series, you can index the values with Expr.get values = pl.lit(pl.Series([-1, -2, -3, -4])) idxs = pl.when(pl.col.A == 'x').then(1).cum_sum() - 1 df.with_columns(C = pl.coalesce(values.get(idxs), 'C')) shape: (8, 3) ┌─────┬─────┬─────┐ │ A ┆ B ┆ C │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ ...
3
3
79,279,855
2024-12-14
https://stackoverflow.com/questions/79279855/have-numpy-concatenate-return-proper-subclass-rather-than-plain-ndarray
I have a numpy array subclass, and I'd like to be able to concatenate them. import numpy as np class BreakfastArray(np.ndarray): def __new__(cls, n=1): dtypes=[("waffles", int), ("eggs", int)] obj = np.zeros(n, dtype=dtypes).view(cls) return obj b1 = BreakfastArray(n=1) b2 = BreakfastArray(n=2) con_b1b2 = np.concatenat...
Expanding simon's solution, this is what I settled on so other numpy functions fall-back to standard ndarray (so, numpy.unique(b2["waffles"]) works as expected). Also a slight change to concatenate so it will work for any subclasses as well. import numpy as np HANDLED_FUNCTIONS = {} class BreakfastArray(np.ndarray): de...
1
0
79,285,449
2024-12-16
https://stackoverflow.com/questions/79285449/find-average-rate-per-group-in-specific-years-using-groupby-transform
I'm trying to find a better/faster way to do this. I have a rather large dataset (~200M rows) with individual dates per row. I want to find the average yearly rate per group from 2018 to 2019. I know I could create a small df with the results and merge it back in but, I was trying to find a way to use transform. Not su...
You can't perform multiple filtering/aggregations efficiently with a groupby.transform. You will have to loop. A more efficient approach would be to combine a pivot_table + merge: cols = ['foo', 'bar'] years = [2018, 2019] tmp = (df[df['year'].isin(years)] .pivot_table(index='group', columns='year', values=cols, aggfun...
2
2
79,284,760
2024-12-16
https://stackoverflow.com/questions/79284760/pymongo-async-client-not-raising-exception-when-connection-fails
It seems that a pymongo 4.10 async client does not raise an exception when there is a problem with the connection. Taken from the doc, a test without any mongo DB running locally yields: >>> import asyncio >>> from pymongo import AsyncMongoClient >>> client = AsyncMongoClient('mongodb://localhost:27017/') >>> asyncio.r...
The Mongo client uses connection pools etc. in the background; even though you tell it to explicitly connect (why?), it doesn't raise an exception for failing to connect until you actually try to read or write from/to the DB. But you can check if/where it's connected: >>> list(client.nodes) [('10.0.0.1', 27017)] The res...
1
2
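A sketch of forcing an actual round-trip so the failure surfaces early; the ping command and timeout option are standard pymongo, and the URI is a placeholder:

```python
import asyncio
from pymongo import AsyncMongoClient
from pymongo.errors import ConnectionFailure

async def check() -> None:
    client = AsyncMongoClient("mongodb://localhost:27017/",
                              serverSelectionTimeoutMS=2000)
    try:
        await client.admin.command("ping")  # real I/O, raises if unreachable
        print("connected:", client.nodes)
    except ConnectionFailure as exc:
        print("MongoDB not reachable:", exc)

asyncio.run(check())
```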
79,278,950
2024-12-13
https://stackoverflow.com/questions/79278950/how-should-i-configure-a-pathfinding-algororithim-for-my-new-level-generation-pr
my problem is that I have a 2D array like this: [["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#", "#", "#", "#", "#", "#", "#", "#", "#"], ["#", "#",...
Your approach is to generate a random grid, and then verify whether it is one where you can reach the exit from the entry point. I suppose if this is not possible, you will generate another random grid, and continue like that until you have found a valid one. I would suggest to approach this differently. You can enhanc...
2
1
79,282,130
2024-12-15
https://stackoverflow.com/questions/79282130/split-a-pandas-column-of-lists-with-different-lengths-into-multiple-columns
I have a Pandas DataFrame that looks like: ID result 1 [.1,.5] 2 [.4,-.2,-.3,.1,0] 3 [0,.1,.6] How can I split this column of lists into multiple columns? Desired result: ID result_1 result_2 result_3 result_4 result_5 1 .1 .5 NaN NaN NaN 2 .4 -.2 -.3 .1 0 3 0 .1 .6 NaN NaN I have dug into it a little and found this: Spl...
You can do this as suggested in the linked post. import pandas as pd # your example code data = {"ID": [1, 2, 3], "result": [[0.1, 0.5], [0.4, -0.2, -0.3, 0.1, 0], [0, 0.1, 0.6]]} df = pd.DataFrame(data) print(df) # answer out = df[['ID']].join( pd.DataFrame(df['result'].tolist()) .rename(columns=lambda x: f'result_{x + 1}'...
1
1
79,281,240
2024-12-14
https://stackoverflow.com/questions/79281240/why-does-the-basehttprequesthandler-rfile-read-delay-execution
I am making a simple server in python using http.server package. My goal is to log the data of POST from client to server. The problem I am having is rfile.read() is delaying execution until next POST request or if the connection is disconnected. However this problem doesn't occur if the length of the content is specif...
This happens because rfile.read() waits indefinitely until the client closes the connection or signals the end of the data stream. This behavior is by design when the Content-Length header is not provided, as the server does not know how much data to expect. To handle this, you should always rely on the Content-Length ...
1
2
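A minimal handler illustrating the Content-Length-bounded read the answer recommends:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # read exactly Content-Length bytes so read() returns immediately
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print("received:", body)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```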
79,280,773
2024-12-14
https://stackoverflow.com/questions/79280773/runtimeerror-trying-to-backward-through-the-graph-a-second-time-on-loss-tensor
I have the following training code. I am quite sure I call loss.backward() just once, and yet I am getting the error from the title. What am I doing wrong? Note that the X_train_tensor is output from another graph calculation, so it has requires_grad=True as you can see in the print statement. Is this the source of the...
From what I understand, the X_train_tensor is output from the autoencoder. When you do not run torch.no_grad() during the encoding step, a computational graph is created for the outputs of the autoencoder, which links the autoencoder's operations and weights to the encoded tensors. In your code, since the model's outpu...
2
2
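A self-contained sketch of the two ways to cut the encoded tensor out of the autoencoder's graph (a plain Linear layer stands in for the encoder):

```python
import torch
from torch import nn

encoder = nn.Linear(8, 3)        # stand-in for the autoencoder's encoder
X = torch.randn(10, 8)

with torch.no_grad():            # no graph is recorded at all
    X_train = encoder(X)

X_train = encoder(X).detach()    # or: record, then detach the output
assert not X_train.requires_grad  # safe to reuse across backward passes
```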
79,279,588
2024-12-13
https://stackoverflow.com/questions/79279588/interpolating-time-series-data-for-step-values
I have time series data that looks like this (mm/dd hh:mm): 3.100 12/14 05:42 3.250 12/14 05:24 3.300 12/14 05:23 3.600 12/14 02:45 3.700 12/13 10:54 3.600 12/12 13:19 3.900 12/12 10:43 I need to interpolate it at 1 minute intervals. It will be a step chart, so the values should be the same until the new value.
If your goal is to make a step plot, no need to interpolate, just use matplotlib.pyplot.step: import matplotlib.pyplot as plt s = pd.Series(['12/14 05:42', '12/14 05:24', '12/14 05:23', '12/14 02:45', '12/13 10:54', '12/12 13:19', '12/12 10:43'], index=[3.1, 3.25, 3.3, 3.6, 3.7, 3.6, 3.9]) plt.step(pd.to_datetime(s, fo...
2
0
79,280,091
2024-12-14
https://stackoverflow.com/questions/79280091/behavior-of-df-map-inside-another-df-apply
I find this code very interesting. I modified the code a little to improve the question. Essentially, the code uses a DataFrame to format the style of another DataFrame using pd.style. t1 = pd.DataFrame({'x':[300,200,700], 'y':[100,300,200]}) t2 = pd.DataFrame({'x':['A','B','C'], 'y':['C','B','D']}) def highlight_cell(...
DataFrame.style.apply is used here, not DataFrame.apply. By using the parameter axis=None, the callable is applied once (not per cell) on the whole DataFrame. Since the callable is a lambda, this essentially means we run: t1.map(highlight_cell, props='background-color:yellow') and use the output as format. x y 0 back...
1
3
79,277,671
2024-12-13
https://stackoverflow.com/questions/79277671/waiting-for-a-pyqt-pyside-qtcore-qthread-to-finish-before-doing-something
I have a data acquisition thread which samples and processes data which it then emits as a signal to a receiver. Now, when that thread is stopped, how can I ensure it has finished the current loop and emitted its signal before proceeding (and e.g. emitting a summary signal)? import sys import time from PySide6.QtCore i...
The problem is caused by the fact that you're emitting the signal from another object, and, more specifically, from another thread. In general, it's normally preferred to emit signals directly from "within" their object, and emitting them externally is generally discouraged (but not forbidden nor completely wrong in pr...
2
1
79,279,966
2024-12-14
https://stackoverflow.com/questions/79279966/why-this-nested-loop-generator-does-not-seem-to-be-working
I was trying this: tuple(map(tuple, tuple(((x,y) for x in range(5)) for y in range(3)))) I got this: (((0, 2), (1, 2), (2, 2), (3, 2), (4, 2)), ((0, 2), (1, 2), (2, 2), (3, 2), (4, 2)), ((0, 2), (1, 2), (2, 2), (3, 2), (4, 2))) but I expect: (((0, 0), (1, 0), (2, 0), (3, 0), (4, 0)), ((0, 1), (1, 1), (2, 1), (3, 1), ...
You're forcing the evaluations in the wrong order. You're building a generator of generators, building a tuple out of the outer generator to build a tuple of generators, and then building tuples out of the inner generators. By the time you start working with the inner generators, the outer generator has finished iterat...
2
2
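One fix, following the answer's reasoning: materialize each inner tuple while its y is still current, instead of deferring the inner generators:

```python
# the inner tuple() runs during the outer iteration, capturing each y in turn
result = tuple(tuple((x, y) for x in range(5)) for y in range(3))
print(result[0])  # ((0, 0), (1, 0), (2, 0), (3, 0), (4, 0))
print(result[1])  # ((0, 1), (1, 1), (2, 1), (3, 1), (4, 1))
```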
79,279,831
2024-12-13
https://stackoverflow.com/questions/79279831/meaning-of-in-python-in-reserved-classes-of-identifiers
The Python documentation writes about _*: "Not imported by from module import *." What do they mean by that? https://docs.python.org/3/reference/lexical_analysis.html#:~:text=_*,import%20*.
The documentation uses * as a wildcard, meaning it substitutes for anything, similar to the way wildcards work in the shell. So when it says _*, it means any identifier beginning with _. So when you do from module import *, it imports all the top-level names in the module except those that begin with _. When writing a ...
1
3
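A two-file sketch of the behavior (the module name is hypothetical):

```python
# mod.py
_private = "hidden"
public = "visible"

# main.py
from mod import *
print(public)     # fine: public names are star-imported
print(_private)   # NameError: names starting with _ are skipped
```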
79,279,391
2024-12-13
https://stackoverflow.com/questions/79279391/predecessors-from-scipy-depth-first-order
I use scipy version 1.14.1 to traverse the minimum spanning tree in depth-first order, but I do not understand some results, namely the predecessors returned by scipy are not correct. Here is an illustration for the following graph: The following code import numpy as np from scipy.sparse import coo_matrix from scipy.s...
From the documentation, the function depth_first_order() returns the following two lists: node_array ndarray, one dimension The depth-first list of nodes, starting with specified node. The length of node_array is the number of nodes reachable from the specified node. predecessors ndarray, one dimension Returned only i...
2
1
79,279,025
2024-12-13
https://stackoverflow.com/questions/79279025/how-to-process-a-massive-file-in-parallel-in-python-while-maintaining-order-and
I'm working on a Python project where I need to process a very large file (e.g., a multi-gigabyte CSV or log file) in parallel to speed up processing. However, I have three specific requirements that make this task challenging: Order Preservation: The output must strictly maintain the same line order as the input file....
No one can really answer whether using ThreadPoolExecutor or ProcessPoolExecutor will be faster without knowing exactly what each task does. You need to try both and benchmark the time taken by each to find which is better. This code can help you figure that out yourself; it is based on this answer, but it uses a queue...
2
3
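As a simpler baseline than queue-based code, executor.map already satisfies the order requirement, since it yields results in input order regardless of which worker finishes first (file names and the work function here are placeholders):

```python
from concurrent.futures import ProcessPoolExecutor

def work(line: str) -> str:
    return line.upper()  # stand-in for the real per-line processing

if __name__ == "__main__":
    with open("big.log") as src, open("out.log", "w") as dst, \
            ProcessPoolExecutor() as pool:
        # chunksize batches lines per worker; output order == input order
        for result in pool.map(work, src, chunksize=10_000):
            dst.write(result)
```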
79,279,060
2024-12-13
https://stackoverflow.com/questions/79279060/how-to-use-numpy-where-in-a-pipe-function-for-pandas-dataframe-groupby
Here is a script to simulate the issue I am facing: import pandas as pd import numpy as np data = { 'a':[1,2,1,1,2,1,1], 'b':[10,40,20,10,40,10,20], 'c':[0.3, 0.2, 0.6, 0.4, 0.5, 0.2, 0.8], 'd':[3, 1, 5, 1, 7, 2., 2.], } df = pd.DataFrame.from_dict(data) # I apply some custom function to populate column 'e'. # For demo...
You're presenting the XY problem. Here's one approach: cond = df['c'] <= 0.3 df['f'] = ( df.assign(filtered_d=df['d'].where(cond)) .groupby(['a', 'b'])['filtered_d'] .transform('min') .fillna(0) ) Output: a b c d e f 0 1 10 0.3 3.0 True 2.0 1 2 40 0.2 1.0 True 1.0 2 1 20 0.6 5.0 False 0.0 3 1 10 0.4 1.0 False 2.0 4 2...
2
2
79,278,351
2024-12-13
https://stackoverflow.com/questions/79278351/polars-how-to-field-fill-null-for-whole-column
This code does not fill null values in the column. I want to forward- and backward-fill nulls for some fields. import polars as pl df1 = pl.LazyFrame({ "dt": [ "2024-08-30", "2024-08-02", "2024-09-03", "2024-09-04" ], "df1": { "a": [0.1, 0.2, 0.3, 0.1], "b": [0, -1, 2, 1] }, }).with_columns( pl.col("dt").str.to_datetime("%Y-%m-%d...
The reason .struct.with_fields doesn't do what you want is because structs still have outer nullability, and with_fields does not have a special case where the outer nullability is ignored if all fields are replaced. So instead of using with_fields to update fields, completely replace the struct column with a new one w...
1
3
79,276,109
2024-12-12
https://stackoverflow.com/questions/79276109/poetry-install-failing-with-sslerror-max-retries-exceeded-on-github-httpsconnec
I am encountering an error when running the poetry install command in my Python project. The error message is as follows: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: ... (Caused by SSLError(FileNotFoundError(2, 'No such file or directory'))) I have tried the following troubleshooti...
Here’s how I resolved the issue: 1. Install or update certifi in the virtual environment. Make sure the certifi package is installed and up-to-date: pip install --upgrade certifi 2. Locate the cacert.pem file. Use the following command to find where the cacert.pem file is located: python -m certifi 3. Copy the cacert.pem ...
1
2
79,277,527
2024-12-13
https://stackoverflow.com/questions/79277527/why-does-the-getattrclassinstance-http-method-names-function-in-django
I have a view being inherited by APIView and I have only implemented the GET method in it. class MyView(APIView): def get(self, request, id): # do something But when I call getattr(ClassInstance, "http_method_names", []) I get a list of all the HTTP methods even though they are not implemented.
The .http_method_names [Django-doc] is an attribute that, by default, lists all HTTP methods, so: ["get", "post", "put", "patch", "delete", "head", "options", "trace"] Normally it will always return that list, unless you override it. This can be useful if you inherit from a view that, for example, has defined a .post(…...
1
2
79,276,400
2024-12-12
https://stackoverflow.com/questions/79276400/how-to-get-the-index-of-a-text-node-in-beautifulsoup
How can I get the source index of a text node in an HTML string? Tags have sourceline and sourcepos which is useful for this, but NavigableString does not have any directly-helpful properties like that (as far as I can find) I've thought about using def get_index(text_node: NavigableString) -> int: return text_node.nex...
Using html.parser: class MyHTMLParser(HTMLParser): def handle_data(self, data: str): line, col = self.getpos() previous_lines = ''.join(html_string.splitlines(True)[:line - 1]) index = len(previous_lines) + col print(data, 'at', index) parser = MyHTMLParser() parser.feed(html_string)
1
2
79,276,761
2024-12-12
https://stackoverflow.com/questions/79276761/how-to-convert-the-column-with-lists-into-one-hot-encoded-columns
Assume there is a DataFrame such as the following: import pandas as pd import numpy as np df = pd.DataFrame({'id':range(1,4), 'items':[['A', 'B'], ['A', 'B', 'C'], ['A', 'C']]}) df id items 1 [A, B] 2 [A, B, C] 3 [A, C] Is there an efficient way to convert the above DataFrame into the following (one-hot encoded columns)? Ma...
SOLUTION 1 A possible solution, whose steps are: First, the explode function is used to transform each item of a list-like to a row, replicating the index values. Then, the to_numpy method converts the resulting dataframe to a numpy array, and .T transposes this array. The crosstab function computes a simple cross-t...
2
2
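A sketch of the explode + crosstab steps the answer outlines:

```python
import pandas as pd

df = pd.DataFrame({'id': range(1, 4),
                   'items': [['A', 'B'], ['A', 'B', 'C'], ['A', 'C']]})
s = df['items'].explode()             # one row per (index, item) pair
one_hot = pd.crosstab(s.index, s)     # counts -> 0/1 indicator matrix
out = df[['id']].join(one_hot.rename_axis(None, axis=1))
print(out)
```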
79,276,013
2024-12-12
https://stackoverflow.com/questions/79276013/how-to-increase-the-space-between-the-subplots-and-the-figure
I'm using Python code to plot a 3D surface. However, the z-axis label gets cut off by the figure. Here is the code: import matplotlib.pyplot as plt import numpy as np fig = plt.figure(figsize=(12, 10), facecolor='lightblue') x = np.linspace(0, 10) y = np.linspace(0, 10) X, Y = np.meshgrid(x, y) for idx in range(4): Z = np....
One solution is to zoom out to decrease the size of each subplot (set_box_aspect). One can also play with the three angles that defines the view: elevation, azimuth, and roll (view_init). fig = plt.figure(figsize=(12/2, 10/2), facecolor='lightblue') x = np.linspace(0, 10) y = np.linspace(0, 10) X, Y = np.meshgrid(x, y)...
2
2
79,276,537
2024-12-12
https://stackoverflow.com/questions/79276537/calling-a-wrapped-static-method-using-self-instead-of-class-name-passes-self-as
This question is related to Calling a static method with self vs. class name but I'm trying to understand the behavior when you wrap a static method so I can fix my wrapper. For example: import functools def wrap(f): @functools.wraps(f) def wrapped(*args, **kwargs): print(f"{f.__name__} called with args: {args}, kwargs...
Method access in Python works by using the descriptor protocol to customize attribute access. When you access a staticmethod, it uses the descriptor protocol to make the attribute access return the underlying function. That's why isinstance(f, staticmethod) reported False, in the versions of your code where you tried t...
2
4
79,274,712
2024-12-12
https://stackoverflow.com/questions/79274712/numpy-matrix-tiling-and-multiplication-combination
I'm looking for a function capable of taking a m x n array, which repeats each row n times over a identity-like grid of m size. For demo: input = [[a1, b1, c1], [a2, b2, c2]] output = [[a1, b1, c1, 0, 0, 0], [a1, b1, c1, 0, 0, 0], [a1, b1, c1, 0, 0, 0], [ 0, 0, 0, a2, b2, c2], [ 0, 0, 0, a2, b2, c2], [ 0, 0, 0, a2, b2,...
While looking similar, it appears to me that the given problem cannot be solved using a Kronecker product: with the latter, you could only manage to get repetitions of your complete input matrix as blocks of the result matrix. I stand corrected: For a solution that employs the Kronecker product, see @ThomasIsCoding's a...
4
4
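For comparison, scipy.linalg.block_diag produces the block-diagonal layout directly, with each row tiled before placement (n, the per-row repeat count, is assumed from the demo above):

```python
import numpy as np
from scipy.linalg import block_diag

a = np.array([[1, 2, 3],
              [4, 5, 6]])
n = 3  # repeats of each row
# one n-by-k block per row of `a`, placed along the diagonal
out = block_diag(*(np.tile(row, (n, 1)) for row in a))
print(out.shape)  # (6, 6)
```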
79,273,432
2024-12-11
https://stackoverflow.com/questions/79273432/python-multiprocessing-gets-slower-with-additional-cpus
I'm trying to parallelize code that should be embarrassingly parallel and it just seems to get slower the more processes I use. Here is a minimally (dys)functional example: import os import time import random import multiprocessing from multiprocessing import Pool, Manager, Process import numpy as np import pandas as p...
One thing that can cause this kind of problem is nested parallelism, when each process in your process pool starts multiple threads to speed up operations within that process. You can investigate whether this is happening by looking at the one-minute load average. A program like htop can show you this. Run your program ...
3
3
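If nested parallelism turns out to be the culprit, one common mitigation is to pin the BLAS thread pools to one thread per process before numpy loads (these environment variable names are the standard ones for OpenMP/OpenBLAS/MKL):

```python
import os

# must be set before numpy (and its BLAS) is imported in each worker
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["OPENBLAS_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy as np  # noqa: E402  (imported after the env is configured)
```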
79,275,860
2024-12-12
https://stackoverflow.com/questions/79275860/joining-two-dataframes-that-share-index-columns-id-columns-but-not-data-col
I find myself doing this: import polars as pl import sys red_data = pl.DataFrame( [ pl.Series("id", [0, 1, 2], dtype=pl.UInt8()), pl.Series("red_data", [1, 0, 1], dtype=pl.UInt8()), ] ) blue_data = pl.DataFrame( [ pl.Series("id", [0, 2, 3], dtype=pl.UInt8()), pl.Series("blue_data", [0, 1, 1], dtype=pl.UInt8()), ] ) # i...
It seems like you are looking for a simple pl.DataFrame.join with how="full" and coalesce=True. red_data.join(blue_data, on="id", how="full", coalesce=True) shape: (4, 3) ┌─────┬──────────┬───────────┐ │ id ┆ red_data ┆ blue_data │ │ --- ┆ --- ┆ --- │ │ u8 ┆ u8 ┆ u8 │ ╞═════╪══════════╪═══════════╡ │ 0 ┆ 1 ┆ 0 │ │ 1 ┆...
1
3
79,275,745
2024-12-12
https://stackoverflow.com/questions/79275745/odd-boolean-expression
I'm trying to debug (rewrite?) someone else's Python/cherrypy web app, and I ran across the following 'if' statement: if not filename.endswith(".dat") and ( filename.endswith(".dat") or not filename.endswith(".cup") ): raise RuntimeError( "Waypoint file {} has an unsupported format.".format( waypoint_file.filename ) ) ...
As you have two variables which can each only have one of two values, you can easily test each case, for example by doing for a in [False, True]: for b in [False, True]: print(a, b) if not a and (a or not b): print("Condition hold") else: print("Condition does not hold") which gives output False False Condition hold False...
2
2
79,275,441
2024-12-12
https://stackoverflow.com/questions/79275441/do-programers-need-to-manually-implement-optimization-such-as-loop-unfolding-et
I am recently learning some HPC topics and have learned that modern C/C++ compilers are able to detect places where optimization is warranted and perform it using corresponding techniques such as SIMD, loop unfolding, etc., especially under the -O3 flag, with a tradeoff between runtime performance vs compile time and object fi...
Loop unrolling is useful because it can (1) reduce the overhead spent managing the loop and (2) at the assembly level, it let the processor run faster by eliminating branch penalties, keeping the instruction pipeline full, etc. (2) doesn't really apply to an interpreted language implementation like Python - it's alread...
1
3
79,275,036
2024-12-12
https://stackoverflow.com/questions/79275036/pandas-dataframe-multiindex-calculate-mean-and-add-additional-column-to-each-l
Given the following dataframe: Year 2024 2023 2022 Header N Result SD N Result SD N Result SD Vendor A 5 20 3 5 22 4 1 21 3 B 4 25 2 4 25 3 4 26 5 C 9 22 3 9 27 1 3 23 3 D 3 23 5 3 16 2 5 13 4 E 5 27 2 5 21 3 3 19 5 I would like to calculate for each year the mean value of the results column and then create a column, ...
Use DataFrame.xs to select the Result labels in the MultiIndex, divide by the mean, and append to the original with concat; finally, for the correct position, add DataFrame.sort_index with parameter sort_remaining=False: df1 = df.xs('Result', axis=1, level=1, drop_level=False) out = (pd.concat([df, df1.div(df1.mean()).mul(100) .rename(columns={'...
1
1
79,274,733
2024-12-12
https://stackoverflow.com/questions/79274733/ifdrational-is-not-json-serializable-using-pillow
I am using PIL in python to extract the metadata of an image. Here is my code: import json from PIL import Image, TiffImagePlugin import PIL.ExifTags img = Image.open("/home/user/DSCN0010.jpg") dct = { PIL.ExifTags.TAGS[k]: float(v) if isinstance(v, TiffImagePlugin.IFDRational) else v for k, v in img._getexif().items()...
The problem is that you are just casting to float the IFDRational values that are directly in the root of the EXIF items. However, it looks like one of those items, called GPSInfo, is a dict that contains internally more IFDRational values. You would need a function to sanitise the values, which would ideally iterate r...
2
2
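A sketch of the recursive sanitiser the answer calls for, handling IFDRational values nested inside dicts or tuples such as GPSInfo:

```python
from PIL import TiffImagePlugin

def sanitize(value):
    # recursively convert EXIF values into JSON-serializable types
    if isinstance(value, TiffImagePlugin.IFDRational):
        return float(value)
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [sanitize(v) for v in value]
    if isinstance(value, bytes):
        return value.decode(errors="replace")
    return value
```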
79,274,376
2024-12-12
https://stackoverflow.com/questions/79274376/slice-a-numpy-2d-array-using-another-2d-array
I have a 2D array of (4,5) and another 2D array of (4,2) shape. The second array contains the start and end indices that I need to filter out from first array i.e., I want to slice the first array using second array. np.random.seed(0) a = np.random.randint(0,999,(4,5)) a array([[684, 559, 629, 192, 835], [763, 707, 359...
Another possible solution, which uses: np.arange to create a range of column indices based on the number of columns in a. A boolean mask m is created using logical operations to check if each column index falls within the range specified by idx. The np.newaxis is used to align dimensions for broadcasting. np.where i...
5
4
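A sketch of those steps with hypothetical start/end indices (the question's idx values are truncated above):

```python
import numpy as np

np.random.seed(0)
a = np.random.randint(0, 999, (4, 5))
idx = np.array([[1, 3], [0, 2], [2, 5], [1, 4]])  # assumed [start, end) rows

cols = np.arange(a.shape[1])                      # (5,)
m = (cols >= idx[:, :1]) & (cols < idx[:, 1:])    # broadcasts to (4, 5)
out = np.where(m, a, np.nan)                      # keep only the slices
```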
79,273,994
2024-12-12
https://stackoverflow.com/questions/79273994/pandas-multi-index-subset-selection
import pandas as pd import numpy as np # Sample data index = pd.MultiIndex.from_tuples([ ('A', 'a1', 'x'), ('A', 'a1', 'y'), ('A', 'a2', 'x'), ('A', 'a2', 'y'), ('B', 'b1', 'x'), ('B', 'b1', 'y'), ('B', 'b2', 'x'), ('B', 'b2', 'y') ], names=['level_1', 'level_2', 'level_3']) data = np.random.randn(len(index)) df = pd.D...
Use DataFrame.droplevel to remove the third level, so it is possible to filter the subset with Index.isin in boolean indexing: S = [('A', 'a1'), ('B', 'b2')] out = df[df.droplevel(2).index.isin(S)] print(out) value level_1 level_2 level_3 A a1 x 0.545790 y -1.298511 B b2 x 0.018436 y -1.076408
2
2
79,273,312
2024-12-11
https://stackoverflow.com/questions/79273312/mismatch-between-the-volume-shape-and-the-axes-grid-in-matplotlib
I have written a script to visualize a 3D volume using Matplotlib. The decay volume is explicitly centered at x = y = 0, but the grid displayed appears displaced relative to the volume. This seems to be an issue with the grid, not the decay volume definition. The script is provided below, and I also attach the result o...
This is simply a result of your choice of z-limits. You've chosen to adjust the limits so the volume is not on the bottom plane, so the perspective makes it look like the volume isn't centered. If you adjust ax.set_zlim(z_min - 5, z_max + 5) to be ax.set_zlim(z_min, z_max + 5) you will see that the volume appears cente...
2
4
79,272,800
2024-12-11
https://stackoverflow.com/questions/79272800/rearrange-and-encode-columns-in-pandas
I have data structured like this (working with pandas): ID|comp_1_name|comp_1_percentage|comp_2_name|comp_2_percentage| 1| name_1 | 13 | name_2 | 33 | 2| name_3 | 15 | name_1 | 46 | There are six comp_name/comp_percentage couples. Names are not equally distributed in all six "*_name" columns. I would like to obtain th...
You can try using pd.wide_to_long with a little column renaming and shaping dataframe: # Renaming columns to move name and percentage to the front for pd.wide_to_long dfr = df.rename(columns=lambda x: '_'.join(x.rsplit('_', 1)[::-1])) (pd.wide_to_long(dfr, ['name_', 'percentage_'], 'ID', 'No', suffix='.*') .reset_index...
1
2
79,273,078
2024-12-11
https://stackoverflow.com/questions/79273078/python-dataframe-slicing-by-row-number
all Python experts, I'm a Python newbie, stuck with a problem which may look very simple to you. Say I have a data frame of 100 rows, how can I split it into 5 sub-frames, each of which contains the rows of 5n+0, 5n+1, 5n+2, 5n+3 and 5n+4 respectively? For instance, the 0th, 5th, 10th up to the 95th will go to one sub-...
Just use iloc and slice as you would do with a list i.e. start:end:step. Example: df = pd.DataFrame({"A":range(100)}) display(df.T) display(df.iloc[0::5].T) display(df.iloc[1::5].T) display(df.iloc[2::5].T) # ...
2
0
79,273,153
2024-12-11
https://stackoverflow.com/questions/79273153/convert-float-base-2-to-base-10-to-100-decimal-places
I'm trying to find more decimal places to the 'Prime Constant'. The output maxes out at 51 decimal places after using getcontext().prec=100 from decimal import * getcontext().prec = 100 base = 2 s = "0.0110101000101000101000100000101000001000101000100000100000101000001000101000001000100000100000001000101000101000100000...
Try base10 = Decimal(int(joinsplit, base=base)) / Decimal(divisor) You need the Decimal class to perform the division.
1
1
79,273,143
2024-12-11
https://stackoverflow.com/questions/79273143/what-does-colon-do-in-python-dictionaries
I stumbled upon this code while debugging something. Is the second line a valid Python code? I tried running this and it ran successfully without any errors. Shouldn't I be getting a syntax error since = is used for assigning values in dictionaries? dict1 = {'temp':10} dict1['temp'] : 5
What you're seeing is the type annotation syntax. In your case it does absolutely nothing but is not invalid. Annotations must be valid expressions that evaluate without raising exceptions at the time the function is defined (but see below for forward references).
1
2
79,270,470
2024-12-11
https://stackoverflow.com/questions/79270470/error-during-python-setup-py-develop-and-pip-install-r-requirements-txt
I'm encountering an issue while trying to install the dependencies for my Python project using setup.py and pip install -r requirements.txt in Windows PowerShell. Here's my setup.py: import setuptools with open("README.md","r") as fh: long_description = fh.read() setuptools.setup( name = "titanic-prediction", version = ...
You used the wrong character for the -r option; Python's option parser (and any other option parser) expects a simple dash (ASCII minus) but you used an en dash. Compare these: -r (simple dash, minus) –r (en dash) —r (em dash) Use the simple dash, minus: pip install -r
1
1
79,271,959
2024-12-11
https://stackoverflow.com/questions/79271959/upos-mappings-tensorflow-datasets-tdfs
I am using the tensorflow tfds dataset extreme/pos which I retrieve using the code below. It is annotated with universal part of speech (POS) labels. These are int values. It's fairly easy to map them back to their part of speech by creating my own mapping (0 = ADJ, 7 = NOUN, etc.) but I was wondering if there is a way of...
One way is to dig into the TensorFlow code to see where the list of POS tags is defined and then import it to use in your code. You can find the list of POS tags in the GitHub code of TensorFlow Datasets here (the UPOS constant): https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conl...
1
2
79,271,631
2024-12-11
https://stackoverflow.com/questions/79271631/why-reference-count-of-none-object-is-fixed
I was experimenting with refcount of objects, and I noticed the reference count of None object does not change when I bind identifiers to None. I observed this behavior in python version 3.13. Python 3.13.0 (main, Oct 7 2024, 05:02:14) [Clang 16.0.0 (clang-1600.0.26.4)] on darwin Type "help", "copyright", "credits" or ...
This is due to PEP 683. To avoid any need to ever write to the memory of certain "immortal" objects, like None, those objects now have a fixed refcount that never changes, no matter how many actual references to the object exist. This helps with multi-threaded and multi-process performance, avoiding issues with things ...
2
3
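A quick way to see the immortality in practice on 3.12+ (including the asker's 3.13):

```python
import sys

before = sys.getrefcount(None)
many = [None] * 1_000_000       # a million new "references" to None
after = sys.getrefcount(None)
print(before == after)          # True: the immortal refcount never moves
```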
79,271,271
2024-12-11
https://stackoverflow.com/questions/79271271/fill-in-rows-to-dataframe-based-on-another-dataframe
I have 2 dataframes that look like this: import pandas as pd data = {'QuarterYear': ["Q3 2023", "Q4 2023", "Q1 2024", 'Q2 2024', "Q3 2024", "Q4 2024"], 'data1': [5, 6, 2, 1, 10, 3], 'data2': [12, 4, 2, 7, 2, 9], 'data3': [2, 42, 2, 6, 2, 4]} df = pd.DataFrame(data) This looks like: QuarterYear data1 data2 data3 0 Q3 20...
Use DataFrame.set_index with DataFrame.reindex: out = (df1.set_index('QuarterYear') .reindex(df['QuarterYear'], fill_value=0) .reset_index()) print (out) QuarterYear data1 data2 data3 0 Q3 2023 0 0 0 1 Q4 2023 5 7 2 2 Q1 2024 0 0 0 3 Q2 2024 9 7 11 4 Q3 2024 10 3 3 5 Q4 2024 0 0 0 Another idea: out = df1.merge(df[['Qu...
1
3
79,270,601
2024-12-11
https://stackoverflow.com/questions/79270601/error-during-installation-with-pip-install
I'm encountering an issue while trying to install the dependencies for my Python project using pip install pandas==1.1.4 in Windows PowerShell. Then I got an error like the picture below: DEPRECATION: Loading egg at d:\data analyst\github\lokalhangatt\pacmann - git\minggu_4\4. advanced data manipulation\pertemuan 8 - d...
You are getting the DEPRECATION warning and subsequent error message because you're trying to install an older release of this package with an unsupported Python version. You seem to be using Python 3.12 (as seen from the warning) but pandas 1.1.4 is compatible with Python 3.6 - 3.9. One way to fix this is to install t...
2
1
79,263,771
2024-12-9
https://stackoverflow.com/questions/79263771/where-does-scipys-adaptive-step-size-method-for-finite-differences-originate
Inside the KrylovJacobian class from SciPy, there is this method: def _update_diff_step(self): mx = abs(self.x0).max() mf = abs(self.f0).max() self.omega = self.rdiff * max(1, mx) / max(1, mf) which is equivalent to omega = rdiff * max(1, max|x0|) / max(1, max|f0|). This modifies the step size that the finite difference method uses; however, I cannot find the or...
Reading the journal articles which SciPy cites on the documentation page,* I cannot find any choice of omega which is exactly equivalent to what SciPy is doing. However, there are a couple of cases which are similar. High-level rationale Does anybody know the source of this method or the reasoning behind it? Reading ...
2
1
79,264,683
2024-12-9
https://stackoverflow.com/questions/79264683/error-loading-pytorch-model-checkpoint-pickle-unpicklingerror-invalid-load-ke
I'm trying to load the weights of a Pytorch model but getting this error: _pickle.UnpicklingError: invalid load key, '\x1f'. Here is the weights loading code: import os import torch import numpy as np # from data_loader import VideoDataset import timm device = torch.device('cuda' if torch.cuda.is_available() else 'cpu'...
The error is typical when trying to open a gzip file as if it was a pickle or pytorch file, because gzips start with a 1f byte. But this is not a valid gzip: it looks like a corrupted pytorch file. Indeed, looking at hexdump -C file.pt | head (shown below), most of it looks like a pytorch file (which should be a ZIP ar...
3
6
79,255,009
2024-12-5
https://stackoverflow.com/questions/79255009/memory-problem-when-serializing-zipped-files-in-pyspark-on-databricks
I want to unzip many files in 7z format in PySpark on Databricks. The zip files contain several thousand tiny files. I read the files using binary File and I use a UDF to unzip the files: schema = ArrayType(StringType()) @F.udf(returnType=schema) def unzip_content_udf(content): extracted_file_contents= [] with py7zr.Se...
The typical Stack Overflow answer would be: you're doing it wrong. This seems like a misuse of Spark, as you're not really using Spark features. You're mostly using it to distribute unzipping across multiple nodes of a cluster. E.g. you could've used dispy instead. can not serialize object larger than 2G IMO it's very reas...
1
1
79,250,961
2024-12-4
https://stackoverflow.com/questions/79250961/joining-with-condition-in-pandas-like-in-sql-on-clause
I want to write the below type of query in Python. But basic Python filtering acts as if I used a WHERE clause in SQL, not ON, for filtering. Could anyone please help? I appreciate your support. select * from t1 left join t2 on t1.key = t2.key and t2.x2 <= t1.x and t2.y2 > t1.y I tried the below Python code, and it i...
Merge the dataframes, then check which rows do not match the condition and set those rows to NaN. While converting to NaN the datatype becomes float, so you might need to convert them back to integer. t1_data = {'key': [1, 2, 3, 4], 'x': [5, 6, 7, 8], 'y': [10, 11, 12, 13]} t2_data = {'key': [2, 3, 4], 'x2': [6, 7, 8], ...
2
2
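A sketch of the merge-then-mask approach on the question's data (t2's y2 values are assumed, since the snippet above is truncated):

```python
import numpy as np
import pandas as pd

t1 = pd.DataFrame({'key': [1, 2, 3, 4], 'x': [5, 6, 7, 8],
                   'y': [10, 11, 12, 13]})
t2 = pd.DataFrame({'key': [2, 3, 4], 'x2': [6, 7, 8],
                   'y2': [12, 14, 10]})  # y2 values assumed

out = t1.merge(t2, on='key', how='left')
keep = (out['x2'] <= out['x']) & (out['y2'] > out['y'])
out.loc[~keep, ['x2', 'y2']] = np.nan  # ON-clause misses become NULLs
```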
79,269,901
2024-12-10
https://stackoverflow.com/questions/79269901/custom-link-on-column
I am working with django-tables2 to display some patient information on a page. I am creating the table like this: class PatientListView(tables.Table): name = tables.Column('Practice') patientid = tables.Column() firstname = tables.Column() lastname = tables.Column() dob = tables.Column() addressline1 = tables.Column()...
Option 1: turn every column into a link You can make a callable that converts the record to the link, and add that to all columns, so: def get_link(record): return f'www.example.com/patients/{record.patientid}' class PatientListView(tables.Table): name = tables.Column('Practice', linkify=get_link) patientid = tables.Co...
1
1
79,261,741
2024-12-8
https://stackoverflow.com/questions/79261741/text-recognition-with-pytesseract-and-cv2-or-other-libs
Please download the png file and save it as 'sample.png'. I want to extract english characters in the png file. import cv2 import pytesseract img = cv2.imread("sample.png") gry = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) thr = cv2.adaptiveThreshold(gry, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 23, 100) bnt ...
When working with images containing grid lines and noise, it's important to preprocess the image effectively to improve OCR accuracy. I've added some line removal, denoising, and text amplification. You might need to tweak the parameters a little bit, but I've got the expected result from your sample image and also trie...
4
0
79,263,593
2024-12-9
https://stackoverflow.com/questions/79263593/unable-to-login-subdomain-of-django-tenant-rest-framework-token-doesnt-query
I have a multi tenant app using the library django-tenants. When logging into the core URL with the created superuser, the login page works completely fine. When using the subdomain address to login, the page returns a 500 error when using the correct username and password that exists and the console says Uncaught (in ...
The Uncaught (in promise) SyntaxError: Unexpected token '<', " <!doctype "... is not valid JSON error occurs because the Token call raises an uncaught exception, as you noticed. Which in turn means the token object doesn't contain a token, it contains the HTML response displaying the exception response. And you have ri...
1
1
79,268,152
2024-12-10
https://stackoverflow.com/questions/79268152/why-does-beautifulsoup-output-self-closing-tags-in-html
I've tried with 3 different parsers: lxml, html5lib, html.parser All of them output invalid HTML: >>> BeautifulSoup('<br>', 'html.parser') <br/> >>> BeautifulSoup('<br>', 'lxml') <html><body><br/></body></html> >>> BeautifulSoup('<br>', 'html5lib') <html><head></head><body><br/></body></html> >>> BeautifulSoup('<br>', ...
Use the html5 formatter: If you pass in formatter="html5", it’s the same as formatter="html", but Beautiful Soup will omit the closing slash in HTML void tags like “br”: from bs4 import BeautifulSoup BeautifulSoup('<br>', 'html.parser').decode(formatter="html5") Which outputs: '<br>'
2
1
79,269,686
2024-12-10
https://stackoverflow.com/questions/79269686/alternate-background-colors-in-styled-pandas-df-that-also-apply-to-multiindex-in
SETUP I have the following df: import pandas as pd import numpy as np arrays = [ np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]), np.array(["one", "two", "total", "one", "two", "total"]), ] df = pd.DataFrame(np.random.randn(6, 4), index=arrays) df.index.set_names(['item','count'],inplace=Tru...
Here's one approach: Using np.resize See below for itertools.cycle option. def style_total_index(s, colors, color_map, total): # `s` is series with index level values as *values*, name 0 for level 0, etc. level = s.name if level == 0: # level 0 is quite easy: # check if `s` equals its shift + `cumsum` result. # `resu...
1
4
79,269,716
2024-12-10
https://stackoverflow.com/questions/79269716/attributeerror-pathway-object-has-no-attribute-hidden
I am attempting to create a nested dictionary 'world'. Due to the planned size and complexity I am hoping to automate the creation a bit. However, when I attempt to run it, I get "AttributeError: 'Pathway' object has no attribute 'hidden'". The intended structure of the dictionary is as follows **world village a. room...
This is Python, not JavaScript. class Pathway(dict): def __init__(self, hidden, travelType): self.dict = { self.hidden : hidden, self.travelType : travelType } PEP 8 asks you to instead name it travel_type. It's not strictly necessary to define all attributes in the __init__() constructor, but it's good practice; it's...
1
1
79,269,012
2024-12-10
https://stackoverflow.com/questions/79269012/how-to-style-all-cells-in-a-row-of-a-specific-multiindex-value-in-pandas
SETUP I have the following df: import pandas as pd import numpy as np arrays = [ np.array(["fruit", "fruit", "fruit","vegetable", "vegetable", "vegetable"]), np.array(["one", "two", "total", "one", "two", "total"]), ] df = pd.DataFrame(np.random.randn(6, 4), index=arrays) df.index.set_names(['item','count'],inplace=Tru...
Here's one approach: # used `np.random.seed(0)` for reproducibility def highlight_total(s): m = s.index.get_level_values('count') == 'total' return np.where(m, 'font-weight: bold', None) df.style.apply(highlight_total) Output: Explanation For level 'count' (or: '1'), check where index.get_level_values equals 'total'...
1
2
79,268,477
2024-12-10
https://stackoverflow.com/questions/79268477/sort-normalized-stacked-bar-chart-by-dataframe-order-with-altair
How can I keep the order of my stacked bars chart from my Dataframe ? The head of my Dataframe looks like this : The countries are ordered as I want them to be and I can handle it by setting sort=None. But I want to order lineages in the stacked bar by sequences_number, only keeping the 'Others' value at the end, as i...
You can use the order encoding instead of sort to order the stacked segments as in this example: import altair as alt from vega_datasets import data source = data.barley() alt.Chart(source).mark_bar().encode( x='sum(yield)', y='variety', color='site', order=alt.Order( # Sort the segments of the bars by this field 'site...
1
2
79,268,222
2024-12-10
https://stackoverflow.com/questions/79268222/pyspark-subset-array-based-on-other-column-value
I use Pyspark in Azure Databricks to transform data before sending it to a sink. In this sink any array must at most have a length of 100. In my data I have an array that is always length 300 and a field specifying how many values of these are relevant (n_relevant). n_relevant values might be: below 100 -> then I want ...
You can filter by index as follows from pyspark.sql.types import StructField, StructType, IntegerType, ArrayType df = spark.createDataFrame( [[list(range(300)), 4], [list(range(300)), 200], [list(range(300)), 300], [list(range(300)), 800]], schema=StructType( [ StructField("array", ArrayType(IntegerType())), StructFiel...
1
1
79,267,898
2024-12-10
https://stackoverflow.com/questions/79267898/camera-pose-estimation-using-opencvs-solvepnp
I have a grayscale camera for which I have already calculated intrinsic parameters with standard methods of calibrations. I have then position this camera in a particular stationary setup and put a plate with 8 marker points in front of the camera. I have calculated the camera pose with respect to the coordinate system...
-np.matrix(rotM).T * np.matrix(tvec) is the reverse operation from converting a point (xw, yw, zw) in world coordinates into camera coordinates, when that point is (0,0,0) in the camera system.

[R R R tx] [xw]
[R R R ty] [yw]
[R R R tz] [zw]
[0 0 0 1 ] [1 ]

is Xc = rotM @ Xw + tvec (Xw = [xw, yw, zw], tvec = [tx, ty, tz], rotM = R). So ...
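In code, the world-frame camera position follows directly from the rvec/tvec returned by cv2.solvePnP (a sketch assuming those variables are in scope):

import cv2
import numpy as np

rotM, _ = cv2.Rodrigues(rvec)      # 3x3 rotation matrix from the rotation vector
cam_pos_world = -rotM.T @ tvec     # camera origin expressed in world coordinates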
1
4
79,267,542
2024-12-10
https://stackoverflow.com/questions/79267542/how-to-get-information-of-a-function-and-its-arguments-in-python
I ran below to get information about the list of arguments and default values of a function/method. import pandas as pd import inspect inspect.getfullargspec(pd.drop_duplicates) Results: Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'pandas' has no attribute 'drop_duplic...
The issue is that drop_duplicates is a method on pandas.DataFrame objects, i.e., pandas.DataFrame.drop_duplicates, not an attribute of the pandas module itself. With that being said, you might also want to check out inspect.signature as an alternative to inspect.getfullargspec: >>> import inspect >>> import panda...
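Since the answer's snippet is truncated, here is a minimal version of the same check (the printed signature can differ between pandas versions):

import inspect
import pandas as pd

print(inspect.signature(pd.DataFrame.drop_duplicates))
# e.g. (self, subset=None, *, keep='first', inplace=False, ignore_index=False)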
1
2
79,266,819
2024-12-10
https://stackoverflow.com/questions/79266819/faster-glossary-generation
I am trying to make a table of contents for my queryset in Django like this: def get_toc(self): toc = {} qs = self.get_queryset() idx = set() for q in qs: idx.add(q.title[0]) idx = list(idx) idx.sort() for i in idx: toc[i] = [] for q in qs: if q.title[0] == i: toc[i].append(q) return toc But it has time complexity O(n...
This doesn't look like a table of contents, but a glossary, where you map the first character of a term to a list of terms. We can work with .groupby(…) [python-doc] here: from itertools import groupby result = { k: list(vs) for k, vs in groupby( self.get_queryset().order_by('title'), lambda x: x.title[0] ) }
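If you'd rather avoid itertools, the same O(n) pass inside get_toc can use a defaultdict (a sketch, not from the answer; order_by keeps the keys sorted):

from collections import defaultdict

toc = defaultdict(list)
for q in self.get_queryset().order_by('title'):
    toc[q.title[0]].append(q)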
2
2
79,265,773
2024-12-9
https://stackoverflow.com/questions/79265773/python-opc-ua-authentification
I'm trying to set up authentication for my OPC-UA server. I don't want my clients to be able to connect to my server in ‘anonymous’ mode. So I used this configuration in my opc-ua server: I'm testing with UaExpert in client mode and the password login works (I have to enter the right login + the right password). The c...
The Python opcua library is no longer supported. There was a fix for this issue, but the pip package never got updated. So either use the current master from GitHub, or switch to asyncua, which has a sync layer for easier porting, though I would recommend using it async if possible.
1
2
79,266,741
2024-12-10
https://stackoverflow.com/questions/79266741/writing-multiple-polars-dataframes-to-separate-worksheets-of-excel-workbook
I am trying to get the following code to work: import polars as pl # Create sample DataFrames df1 = pl.DataFrame({ "Name": ["Alice", "Bob", "Charlie"], "Age": [25, 30, 35] }) df2 = pl.DataFrame({ "Product": ["Laptop", "Phone", "Tablet"], "Price": [1000, 500, 300] }) df3 = pl.DataFrame({ "City": ["New York", "San Franci...
The workbook object needs to be closed. This is usually best achieved with a with context manager like in this polars/xlsxwriter example. However, you can also call an explicit close() on the workbook. Like this: import polars as pl import xlsxwriter # Create sample DataFrames df1 = pl.DataFrame({ "Name": ["Alice", "Bo...
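A condensed sketch of the recommended context-manager form (file and sheet names are illustrative):

import polars as pl
import xlsxwriter

df1 = pl.DataFrame({"Name": ["Alice", "Bob"], "Age": [25, 30]})
df2 = pl.DataFrame({"Product": ["Laptop", "Phone"], "Price": [1000, 500]})

# the with-block closes the workbook on exit, which finalizes the file
with xlsxwriter.Workbook("report.xlsx") as wb:
    df1.write_excel(workbook=wb, worksheet="People")
    df2.write_excel(workbook=wb, worksheet="Products")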
2
1
79,263,433
2024-12-8
https://stackoverflow.com/questions/79263433/multiprocessing-and-sourcing-shell-file-for-every-subprocess
I am working on a code which aims to gather simulation commands and multiprocess them, sourcing a shell file for each subprocess before running a simulation command in the subprocess. For this I gather the commands in another function in a dictionary which is used by the functions below: def source_shell_script(self, ...
Kindly check now after these changes. Changes: 1. The source_shell_script method sources the shell script and returns its environment variables. These variables are passed into the subprocess.run call via the env parameter. If the script is missing, a FileNotFoundError is raised; if sourcing fails, a RuntimeError is raised....
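A minimal sketch of the sourcing helper described in point 1, assuming a bash-compatible shell and that the script path is supplied by the caller:

import subprocess

def source_shell_script(script_path):
    # run `source <script>; env -0` in bash and parse the NUL-separated output
    out = subprocess.run(
        ["bash", "-c", f'source "{script_path}" && env -0'],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(item.split("=", 1) for item in out.split("\0") if "=" in item)

# env = source_shell_script("setup_env.sh")
# subprocess.run(sim_command, shell=True, env=env)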
1
2
79,266,557
2024-12-9
https://stackoverflow.com/questions/79266557/python-heatmap-with-categorical-color-and-continuous-transparency
I want to make a heatmap in python (seaborn, matplotlib, etc) with two dimensions of information. I have a categorical value I want to assign to color, and a continuous variable (i.e. between 0-100 or 0-1) I want to assign to transparency, so each box has its own color and transparency (or intensity). for example: colo...
You could draw individual rectangles, giving each a specific color and transparency: import matplotlib.pyplot as plt from matplotlib.patches import Rectangle, Patch import pandas as pd colors = pd.DataFrame([['b', 'g', 'r'], ['black', 'orange', 'purple'], ['r', 'yellow', 'white']]) transparency = pd.DataFrame([[0.1, 0....
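Because the snippet above is truncated, here is a self-contained version of the same rectangle idea; all transparency values past the first are invented for illustration:

import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
import pandas as pd

colors = pd.DataFrame([['b', 'g', 'r'],
                       ['black', 'orange', 'purple'],
                       ['r', 'yellow', 'white']])
transparency = pd.DataFrame([[0.1, 0.5, 0.9],
                             [0.3, 0.7, 1.0],
                             [0.2, 0.6, 0.8]])

fig, ax = plt.subplots()
for i in range(colors.shape[0]):
    for j in range(colors.shape[1]):
        # row 0 of the frame is drawn at the top of the plot
        ax.add_patch(Rectangle((j, colors.shape[0] - 1 - i), 1, 1,
                               facecolor=colors.iat[i, j],
                               alpha=transparency.iat[i, j]))
ax.set_xlim(0, colors.shape[1])
ax.set_ylim(0, colors.shape[0])
plt.show()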
2
2
79,266,262
2024-12-9
https://stackoverflow.com/questions/79266262/when-i-navigate-to-the-url-and-get-the-contents-of-the-table-tag-its-empty
I am trying to scrape data from this website https://data.anbima.com.br/debentures/AALM11/agenda?page=1&size=100& and when I look at the DevTools > Elements, it has a TABLE tag with the data inside TR and TD tags (dates, values, etc.), but when I try to parse the HTML with Selenium or bs4 the data disappear and instead...
The problem is that the table data is dynamically loaded. When the browser is loading the page, it signals to Selenium that the page is done loading but the content of the page is still loading in the background. So your code is executed and it scrapes the partially loaded page. To fix this, we need to wait for somethi...
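A typical explicit wait for this looks like the sketch below; the CSS selector is an assumption and should be adjusted to the page's actual table markup:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

wait = WebDriverWait(driver, 15)
# block until at least one data row has been rendered into the table body
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "table tbody tr")))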
1
1
79,265,874
2024-12-9
https://stackoverflow.com/questions/79265874/generating-a-dataframe-of-combinations-not-permutations
Suppose I have a bag of items {a, b}. Then I can choose pairs out of it in a variety of ways. One way might be to pick all possible permutations: [a, a], [a, b], [b, a], [b, b]. But I might disallow repetition, in which case the possible permutations are: [a, b], [b, a]. I might go further and declare that [a, b] is th...
You can use .join_where() with a row index predicate to prevent "duplicates". (choices .with_row_index() .join_where(choices.with_row_index(), pl.col.flavor == pl.col.flavor_right, pl.col.index <= pl.col.index_right ) ) shape: (9, 5) ┌───────┬────────┬────────┬─────────────┬──────────────┐ │ index ┆ flavor ┆ choice ┆ ...
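For reference, the same distinction in plain Python (combinations_with_replacement allows [a, a]; combinations forbids it):

from itertools import combinations, combinations_with_replacement

print(list(combinations_with_replacement("ab", 2)))  # [('a', 'a'), ('a', 'b'), ('b', 'b')]
print(list(combinations("ab", 2)))                   # [('a', 'b')]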
1
1
79,265,302
2024-12-9
https://stackoverflow.com/questions/79265302/sum-up-column-values-by-special-logic
Say we have an array like: a = np.array([ [k11, k12, k13, k14, k15, k16, k17, k18], [k21, k22, k23, k24, k25, k26, k27, k28], [k31, k32, k33, k34, k35, k36, k37, k38], [k41, k42, k43, k44, k45, k46, k47, k48] ]) const = C I need to create a vector from this array like this (runge kutta 4): result = np.array([ const * ...
np.einsum solution: result = const * np.einsum('ij,i->j', a, [1, 2, 2, 1]) ij,i are the dimensions of a and the coefficients. The result, j is missing i, which means that that dimension is multiplied and summed across the arrays. This solution is nice because it is very explicit about dimensions without requiring any ...
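A quick check of the equivalence against an ordinary matrix-vector product, with random data and an arbitrary constant:

import numpy as np

a = np.random.rand(4, 8)
const = 0.5
w = np.array([1, 2, 2, 1])

r1 = const * np.einsum('ij,i->j', a, w)
r2 = const * (w @ a)  # the same weighted column sum
assert np.allclose(r1, r2)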
2
2
79,265,502
2024-12-9
https://stackoverflow.com/questions/79265502/how-to-import-multiple-records-with-merge-function-to-an-oracle-db-via-python
I am trying to import data from a .csv file into an Oracle DB using Python. So far it works fine if the .csv file contains 10 records. If I increase the number of records in the .csv file to 1.000.000, the script takes far too long and does not end even after an hour. Can anyone tell me how I can optimise my source cod...
A merge is meant to modify one table based on the data in another table. It is not intended for single-row processing from the client like this. The proper design would be to use a normal bulk-bind insert to load a work table and then you can do a single merge execution to sync the target table with the work table. Als...
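A sketch of that design with the python-oracledb driver; all table and column names are hypothetical:

import oracledb

with oracledb.connect(user="u", password="p", dsn="host/service") as conn:
    cur = conn.cursor()
    # bulk-bind the CSV rows into a work table, one round trip per batch
    cur.executemany(
        "INSERT INTO work_table (id, val) VALUES (:1, :2)",
        rows,  # list of tuples parsed from the CSV
    )
    # a single MERGE execution syncs the target with the work table
    cur.execute("""
        MERGE INTO target_table t
        USING work_table w ON (t.id = w.id)
        WHEN MATCHED THEN UPDATE SET t.val = w.val
        WHEN NOT MATCHED THEN INSERT (id, val) VALUES (w.id, w.val)
    """)
    conn.commit()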
2
3
79,264,751
2024-12-9
https://stackoverflow.com/questions/79264751/pandas-outofboundsdatetime-out-of-scope-issue
Am getting the following issue "pandas._libs.tslibs.np_datetime.OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 3036-12-31 00:00:00, at position 45100" I dont want to do the following as this will coerce all errors to NaT (not a time) s = pd.to_datetime(s, errors='coerce') Is there no way to keep the dates an...
You can't have dates above Timestamp('2262-04-11 23:47:16.854775807'): Assuming dates like: df = pd.DataFrame({'date': ['2036-12-31 00:00:00', '3036-12-31 00:00:00']}) You could convert to periods with PeriodIndex df['periods'] = pd.PeriodIndex(df['date'].str.extract(r'^(\d{4}-\d\d-\d\d)', expand=False), freq='D') Ou...
1
1
79,263,329
2024-12-8
https://stackoverflow.com/questions/79263329/how-to-change-text-color-of-facet-category-in-plotly-charts-in-python
I have created few Plotly charts with facets on basis of category variable and would like to change the color of facet text in the chart. Have searched alot even on plotly website but couldn't figure out the property that can be used to change the color for facet text. Using below image as an example I would like to ch...
Facet labels are annotations stored in the figure layout, so you can restyle them by checking each annotation's text. In the following example, "No" has been changed to red. import plotly.express as px fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker") fig.for_each_annotation(l...
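A runnable version of the truncated call; matching on the full annotation text "smoker=No" is an assumption about how the labels are rendered:

import plotly.express as px

fig = px.scatter(px.data.tips(), x="total_bill", y="tip", facet_col="smoker")
# facet labels are annotations, so restyle each one based on its text
fig.for_each_annotation(
    lambda a: a.update(font_color="red") if a.text == "smoker=No" else a
)
fig.show()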
1
2
79,262,249
2024-12-8
https://stackoverflow.com/questions/79262249/asizeof-appears-to-be-inaccurate
Take this MWE: from pympler import asizeof from random import randint, choice from string import printable from heapq import heappush ascii = printable[:-5] pq = [] for _ in range(10_000_000): heappush(pq, (randint(0, 31), randint(0, 31), randint(0, 31), ''.join(choice(ascii) for _ in range(16)))) print(asizeof.asizeof...
asizeof rather accurately does what it's supposed to do: Measure the total size of the object structure. That's just not all the memory that Python uses. I get the exact same total 1,449,096,184 bytes with this minified test (Attempt This Online!): from sys import getsizeof def size(obj, align=8): return getsizeof(obj)...
1
3
79,256,095
2024-12-5
https://stackoverflow.com/questions/79256095/problems-plotting-timestamps-on-the-x-axis-with-matplotlib
I am working on a Python script that loads several CSV files containing timestamps and ping data and then displays them on a plot. The X-axis is supposed to display the timestamps in HH:MM format, with the timestamps coming from multiple CSV files that record different ping values for different addresses. The challenge...
Every time you add a new plot, a new axis is added for both 'x' and 'y', and I'm unsure whether you can control which axis ends up on top. So the workaround I can think of is to set the tick params for the 'x' axis every time you add a new plot: for idx, (address, df) in enumerate(data.items()): df['Time_diff'] = (d...
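As an alternative to resetting tick params (my suggestion, not part of the answer), a fixed date formatter on a single shared axis keeps HH:MM labels regardless of plotting order, assuming the x-values are datetimes:

import matplotlib.dates as mdates
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# ...plot every address's series onto this same `ax`...
ax.xaxis.set_major_formatter(mdates.DateFormatter("%H:%M"))
plt.show()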
1
1
79,258,240
2024-12-6
https://stackoverflow.com/questions/79258240/scrapy-script-does-not-start-spiders
I have created a new scrapy projects with a spider (multiple to be added). The spider works without any issues if started with scrapy crawl myspider However, when I try to run the scraper from a custom script, it does not start. I have broken down the script to a bare minimum that does not work: from scrapy.spiderloade...
You requested a non-default reactor in your settings so you need to install it explicitly when using CrawlerRunner (this is mentioned in https://docs.scrapy.org/en/latest/topics/asyncio.html#installing-the-asyncio-reactor). You also need to do it before importing twisted.internet.reactor as that installs the default re...
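A sketch of that ordering; the reactor path shown is the asyncio reactor commonly named in the TWISTED_REACTOR setting:

from scrapy.utils.reactor import install_reactor

# must run before anything imports twisted.internet.reactor
install_reactor("twisted.internet.asyncioreactor.AsyncioSelectorReactor")

from twisted.internet import reactor  # safe to import now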
1
1
79,262,465
2024-12-8
https://stackoverflow.com/questions/79262465/how-to-invert-ohlc-data
I have a OHLC (Open, High, Low, Close financial data). An upward bar (bullish) is when the close price is higher than the open price. A downward bar (bearish) is when the close price is lower than the open price. I am trying to find a way to invert the dataset in order to have the following behavior: Original data: In...
You could take the negative values: import plotly.graph_objects as go fig = go.Figure(data=[go.Candlestick(x=df['Date_Time'], open=df['Open'], high=df['High'], low=df['Low'], close=df['Close'] )]) fig.show() fig = go.Figure(data=[go.Candlestick(x=df['Date_Time'], open=-df['Open'], high=-df['High'], low=-df['Low'], cl...
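One detail worth double-checking (my note, not from the answer): after negation the old High becomes the smallest value, so the High and Low columns likely need to swap roles for the candles to render correctly:

import plotly.graph_objects as go

fig = go.Figure(data=[go.Candlestick(x=df['Date_Time'],
                                     open=-df['Open'],
                                     high=-df['Low'],    # the negated Low is the new High
                                     low=-df['High'],    # the negated High is the new Low
                                     close=-df['Close'])])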
1
2
79,258,896
2024-12-6
https://stackoverflow.com/questions/79258896/how-to-do-an-advanced-grouping-in-pandas
The easiest way is to demonstrate my question with an example. Suppose I have the following long format data frame In [284]: import pandas as pd In [285]: data = pd.DataFrame({"day": [0,0,0,0,0,0,1,1,1,1,1,1], "cat1": ["A", "A", "A", "B", "B", "B", "A", "A", "B", "B", "B", "B"], "cat2":["1", "1", "2", "1", "2", "2", "1...
You can create a new column with the squared values and then do the groupby: data["value2"] = data["value"] * data["value"] gb = data.groupby(["day", "cat1", "cat2"])["value2"].mean() display(gb)
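The same thing as one chained expression, if you'd rather not mutate data in place:

gb = (
    data.assign(value2=data["value"] ** 2)
        .groupby(["day", "cat1", "cat2"])["value2"]
        .mean()
)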
1
1
79,261,474
2024-12-7
https://stackoverflow.com/questions/79261474/python-generic-type-on-function-getting-lost-somewhere
Getting this typing error: error: Incompatible types in assignment (expression has type "object", variable has type "A | B") [assignment] With this code: from dataclasses import dataclass from typing import TypeVar, Mapping, reveal_type @dataclass class A: foo: str = "a" @dataclass class B: bar: str = "b" lookup_table...
This is a mypy bug present in 1.13.0 and below (previously reported here and by OP here). pyright and basedmypy both accept the given snippet. mypy stores type[A | B] types as a union of types internally (type[A] | type[B]). This is usually convenient, but causes trouble when solving type[T] <: type[A] | type[B] for T...
4
2
79,259,509
2024-12-6
https://stackoverflow.com/questions/79259509/ffmpeg-piped-output-producing-incorrect-metadata-frame-count
The short version: Using piped output from ffmpeg produces a file with incorrect metadata. ffmpeg -y -i .\test_mp4.mp4 -f avi -c:v libx264 - > output.avi to make an AVI file using the pipe output. ffprobe -v error -count_frames -show_entries stream=duration,nb_read_frames,r_frame_rate .\output.avi The output will show ...
As I commented above, it is 100% due to outputting an AVI file to a pipe. Check out this part of the FFmpeg source code: https://github.com/FFmpeg/FFmpeg/blob/c893dcce312af152f21a54874f88576ad279e722/libavformat/avienc.c#L911 Specifically, the if block starting on Line 924 is skipped if you write to a pipe: if (pb->seekable &...
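A practical follow-up (my suggestion, not from the answer): remuxing the piped output into a seekable file lets ffmpeg rewrite the header, which should restore sane metadata:

import subprocess

# stream-copy into a real file; writing to a seekable target lets the
# muxer seek back and finalize the AVI header
subprocess.run(["ffmpeg", "-y", "-i", "output.avi", "-c", "copy", "fixed.avi"],
               check=True)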
1
1
79,261,408
2024-12-7
https://stackoverflow.com/questions/79261408/how-can-i-clean-a-year-column-with-messy-values
I have a project I'm working on for a data analysis course, where we pick a data set and go through the steps of cleaning and exploring the data with a question to answer in mind. I want to be able to see how many instances of the data occur in different years, but right now the Year column in the data set is set to a ...
Assuming your Year columns are strings, I would write a normalize function like this: import re import pandas as pd data = [ {"year": "early 1990's"}, {"year": "89 or 90"}, {"year": "2011-2012"}, {"year": "approx 2001"}, ] def normalize(row): year = row["year"] # Count the number of digits count = len(re.findall("\\d",...
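Since the function body is cut off, here is a completed version of the same idea, simplified to take a plain string; the 19xx pivot for two-digit years is an assumption:

import re

def normalize(year: str):
    digits = re.findall(r"\d+", year)
    if not digits:
        return None
    token = digits[0]
    if len(token) == 4:            # "approx 2001", "2011-2012" -> 2011
        return int(token)
    if len(token) == 2:            # "89 or 90" -> 1989
        return 1900 + int(token)
    return None

print(normalize("early 1990's"))   # 1990
print(normalize("89 or 90"))       # 1989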
1
1
79,261,490
2024-12-7
https://stackoverflow.com/questions/79261490/how-to-adjust-the-size-of-one-subplot-independently-of-other-subplots-in-a-matpl
I want to have horizontally aligned 3D and 2D plots, where the y-axis of the 2D plot is the same height as the z-axis of the 3D plot. The following code produces a default output: import matplotlib.pyplot as plt fig = plt.figure() fig.set_size_inches(10, 4) fig.subplots_adjust(wspace=0.5) # 3D surface plot ax1 = fig.ad...
You can use set_position() to change the dimensions of one of the subplots: plt.figure(1).axes[1].set_position([0.6, 0.4, 0.25, 0.3]) # left, bottom, width, height
2
1
79,261,137
2024-12-7
https://stackoverflow.com/questions/79261137/how-to-create-a-connected-2d-grid-graph
I have a 2-dimensional array that represents a grid. The (numpy) array is as follows: dx_grid =[[ "A", "B", "C"], [ "L", "M", "N"], [ "X", "Y", "Z"]] I want to convert that into the following: I know that grid_2d_graph can connect 4 adjacent nodes. For example, it would connect node M to B, L, N and Y BUT NOT to A, C...
Something like this should work: import networkx as nx dx_grid =[[ "A", "B", "C"], [ "L", "M", "N"], [ "X", "Y", "Z"]] r, c = len(dx_grid), len(dx_grid[0]) g = nx.Graph() # add nodes for i in range(r): for j in range(c): g.add_node(dx_grid[i][j], pos=(j,r-i)) # add edges # for all nodes for i in range(r): for j in rang...
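Here is the truncated edge loop completed, assuming it enumerates the eight neighbour offsets:

import networkx as nx

dx_grid = [["A", "B", "C"],
           ["L", "M", "N"],
           ["X", "Y", "Z"]]
r, c = len(dx_grid), len(dx_grid[0])

g = nx.Graph()
for i in range(r):
    for j in range(c):
        g.add_node(dx_grid[i][j], pos=(j, r - i))

# connect every cell to all of its (up to) 8 neighbours, diagonals included
for i in range(r):
    for j in range(c):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < r and 0 <= nj < c:
                    g.add_edge(dx_grid[i][j], dx_grid[ni][nj])

print(sorted(g.neighbors("M")))  # all eight: A B C L N X Y Z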
1
1
79,259,256
2024-12-6
https://stackoverflow.com/questions/79259256/how-to-create-an-altair-faceted-and-layered-chart-with-dual-axis
I'm trying to create an Altair faceted bar chart with lines that are represents other measure and would better use of a second y-axis on the right side. I don't know if it is possible using Atair. The code is as follows: import altair as alt import pandas as pd data = { 'Date': ['2023-01-01', '2023-01-02', '2023-01-01'...
As per https://github.com/vega/vega-lite/issues/4373#issuecomment-617153232, you need some additional resolves: chart = ( (bars+lines) .resolve_scale(y='independent') # Create dual axis .facet(column='Company') .resolve_axis(y='independent') # Make sure dual axis works with facet (redraws the axis for each subplot) .re...
2
1
79,261,118
2024-12-7
https://stackoverflow.com/questions/79261118/turtle-snake-chain-project
Below is a Python code for my "Snake Project" (with Turtle and Tkinter). The purpose of this program is to create a chain of turtles, called followers, that follow each other, with the first turtle in the chain following a special turtle: the leader. The leader itself follows the movement of the mouse. It removes the l...
The linked list in add_follower isn't updated correctly. Change: self.Last.Next = new_follower self.Last = new_follower # self.Last updated new_follower.Prev = self.Last # should be the *old* version of self.Last To: self.Last.Next = new_follower temp = self.Last # save previous Last self.Last = new_follower # assign ...
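Equivalently (attribute names taken from the question), the fix reads more simply if the back-pointer is wired before the tail moves:

def add_follower(self, new_follower):
    # wire the new node to the old tail first, then advance the tail
    new_follower.Prev = self.Last
    self.Last.Next = new_follower
    self.Last = new_follower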
2
1