question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
|---|---|---|---|---|---|---|
79,024,431 | 2024-9-25 | https://stackoverflow.com/questions/79024431/subclass-of-pathlib-path-doesnt-support-operator | I'm attempting to create a subclass of pathlib.Path that will do some manipulation to the passed string path value before passing it along to the base class. class MyPath(Path): def __init__(self, str_path): str_path = str_path.upper() # just representative, not what I'm actually doing super().__init__(str_path) Howev... | MyPath("foo") / "bar" goes through several steps: It invokes MyPath("foo").__truediv__("bar"), which Invokes MyPath("foo").with_segments("bar"), which Invokes MyPath(MyPath("foo"), "bar"), which Raises a TypeError because you overrode MyPath.__init__ to only accept a single additional positional argument, which Causes... | 1 | 6 |
79,024,212 | 2024-9-25 | https://stackoverflow.com/questions/79024212/scale-pygame-button-size-based-on-font-size | I want to take a font size (14) and, using pygame_widgets.button.Button(), scale the button to match the font size up to a maximum. Almost every question I've seen on here has been the other way around. At the moment I'm unfamiliar with the maths that would be used, but I would greatly appreciate the help. import pygame import pygam... | For scaling a button's size dynamically based on its font size you can use a simple proportional scaling method. I've implemented it in the code below: import pygame import pygame_widgets from pygame_widgets.button import Button import sys pygame.init() screen = pygame.display.set_mode((640, 480)) base_font_size = 14 ... | 2 | 1 |
79,024,010 | 2024-9-25 | https://stackoverflow.com/questions/79024010/pandas-return-corresponding-column-based-on-date-being-between-two-values | I have a Pandas dataframe that is setup like so: Code StartDate EndDate A 2024-07-01 2024-08-03 B 2024-08-06 2024-08-10 C 2024-08-11 2024-08-31 I have a part of my code that iterates through each day (starting from 2024-07-01) and I am trying to return the corresponding Code given a date (with a fallback if the date d... | A possible solution, which converts the StartDate and EndDate columns to datetime format (to allow for comparison of dates). It then checks if a specific date (e.g., 2024-07-03) falls within any of the date ranges defined by StartDate and EndDate. If it does, it retrieves the first corresponding Code from those rows; i... | 1 | 3 |
79,023,481 | 2024-9-25 | https://stackoverflow.com/questions/79023481/parsing-xml-document-with-namespaces-in-python | I am trying to parse XML with namespaces and attributes. I'm using the XML library in Python and, since I'm new to this, I cannot find a solution even though I checked this forum; there are similar questions but not with the same XML document structure as mine. This is my XML: <?xml version='1.0' encoding='UTF-8'?> <Invoice xmlns="u... | This worked: namespaces = { 'cbc': 'urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2', 'cac': 'urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2', } # Extract the parameters you need for invoice_line in root.findall('.//cac:InvoiceLine', namespaces): invoiced_quantity = invoice... | 2 | 2 |
79,023,460 | 2024-9-25 | https://stackoverflow.com/questions/79023460/handling-circular-imports-in-pydantic-models-with-fastapi | I'm developing a FastAPI application organized with the following module structure. ... │ ├── modules │ │ ├── box │ │ │ ├── routes.py │ │ │ ├── services.py │ │ │ ├── models.py # the sqlalchemy classes │ │ │ ├── schemas.py # the pydantic schemas │ │ ├── toy │ │ │ ├── routes.py │ │ │ ├── services.py │ │ │ ├── models.py │... | I had this same issue and spent hours trying to figure it out. In the end I just stopped type-annotating the specific circular imports, and I've lived happily ever after (so far). Maybe you could benefit from doing the same ;) That being said, there are multiple ways of fixing circular imports. As highlighted here ... | 6 | 2 |
79,023,187 | 2024-9-25 | https://stackoverflow.com/questions/79023187/enumerating-all-possible-lists-of-any-length-of-non-negative-integers | I would like to generate/enumerate all possible lists of non-negative integers such that the algorithm will generate lists like the following at some point [1] [24542,0] [245,904609,848,24128,350,999] In other words, for all possible non-negative integers, generate all possible lists which contain that many non-negati... | One of infinitely many ways to do this: Imagine a number line with cells 1, 2, 3, and up to infinity. Now think of a binary number representation, with bits indicating if there is a "break" at the cell border. So, 1 -> [1] 10 -> [2] 11 -> [1,1] 100 -> [3] 101 -> [2, 1] 110 -> [1, 2] Note how number of bits is the same... | 3 | 4 |
79,023,519 | 2024-9-25 | https://stackoverflow.com/questions/79023519/how-to-check-the-variable-is-of-certain-type | I have some JSON data in Python and need to iterate through the key-value pairs, check if the value is of a certain type, and perform an operation. Here is the example: total = 0 grades = { 'Math': 90, 'Science': None, 'English': 85, 'History': 'A', 'Art': 88 } for grade in grades.values(): total += grade I need to check if... | You can check the variable type using isinstance() like this: for grade in grades.values(): if isinstance(grade, (int, float)): # Check if the grade is a number total += grade This will add the values only if the variable is of type int or float. | 1 | 2 |
79,023,227 | 2024-9-25 | https://stackoverflow.com/questions/79023227/how-to-split-pandas-series-into-two-based-on-whether-or-not-the-index-contains-a | I have pandas series that I want to split into two: one with all the entries of the original series where index contains a certain word and the other with all the remaining entries. Getting a series of entries which do contain a certain word in their index is easy: foo_series = original_series.filter(like = "foo") But... | You could drop those indices from the original Series: foo_series = original_series.filter(like = "foo") non_foo_series = original_series.drop(foo_series.index) Or use boolean indexing: m = original_series.index.str.contains('foo') foo_series = original_series[m] non_foo_series = original_series[~m] Example: # input ... | 3 | 3 |
79,022,782 | 2024-9-25 | https://stackoverflow.com/questions/79022782/return-all-rows-that-have-at-least-one-null-in-one-of-the-columns-using-polars | I need all the rows that have null in one of the predefined columns. I basically need this but I have one more requirement that I can't seem to figure out. Not every column needs to be checked. I have a function that returns the names of the columns that need to be checked in a list. Assume this is my dataframe: data = ... | If you want to exclude some columns you can use .exclude(): import polars as pl data.filter(pl.any_horizontal(pl.exclude("c").is_null())) ┌─────┬──────┬─────┬──────┐ │ a ┆ b ┆ c ┆ d │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ bool │ ╞═════╪══════╪═════╪══════╡ │ abc ┆ null ┆ u ┆ true │ │ mno ┆ xyz ┆ y ┆ null │ │ qrs ┆... | 2 | 2 |
79,022,245 | 2024-9-25 | https://stackoverflow.com/questions/79022245/opencv-doesnt-read-images-from-some-directories | I'm trying to read a 16-bit grayscale PNG with OpenCV. This image is stored on a network share. When I try to load the image with cv.imread() nothing is returned: import cv2 as cv print(img_paths[0]) print(type(img_paths[0])) print(img_paths[0].exists()) img = cv.imread(img_paths[0].as_posix(), cv.IMREAD_ANYDEPTH) prin... | Yes, OpenCV on Windows has issues with encodings. Usually, anything non-ASCII in the path string can cause you trouble. Shortest possible solution: im = cv.imdecode(np.fromfile(the_path, dtype=np.uint8), cv.IMREAD_UNCHANGED) | 1 | 4 |
79,021,052 | 2024-9-25 | https://stackoverflow.com/questions/79021052/how-can-i-create-parameterized-matrices-and-generate-the-final-matrix-on-demand | I am in a situation where I need to work with parameterized matrices. For example, say I start with two matrices A and B, A = [[1, 2], [3, 4]] and B = [[a, b], [5, 6]]. Here matrix B is parameterized with the variables a and b. I might at some point need to combine the matrices, say using matrix multiplication, to get AB = C: C = AB = ... | I can see several solutions here. 1. SymPy (your current solution) SymPy is great for this! It allows you to cache expressions like matrix multiplications, but it's really primarily a symbolic library, so it is not as well optimised for performance as other numeric libraries. 2. NumPy + numexpr NumPy is extremely fast and you could... | 2 | 1 |
79,021,772 | 2024-9-25 | https://stackoverflow.com/questions/79021772/poor-precision-when-plotting-small-wedge-compared-to-axis-size | I am trying to recreate the charts on this website: https://bumps.live/torpids/2022. I am using matplotlib and am running into an issue when drawing the logos, which I have recreated with the MWE below. I am drawing two semicircles next to each other, and the result is as expected when they are around the same size as ... | When you set the color, you are setting both facecolor (which is the color of the inside of the shape) and the edgecolor (which is the color of the outline). Matplotlib then draws the outline with a default line width of 1 point. That linewidth is preserved for the interactive zoom in plt.show (so remains too small to ... | 1 | 4 |
79,009,542 | 2024-9-21 | https://stackoverflow.com/questions/79009542/python-3-13-with-free-thread-is-slow | I was trying this new free-threaded version of the interpreter, but found out that it actually takes longer than the GIL-enabled version. I did observe that the usage on the CPU increases a lot for the free-threaded interpreter; is there something I misunderstand about this new interpreter? Version downloaded: python-3.13.0r... | The primary culprit appears to be the random module's randint function, as it is a static import and appears to share a mutex between threads. Another problem is that you're only able to process 4 tables at a time. Since you want to create 10 tables in total, you'll be running batches of 4-4-2. Here is the code with the randint problem ... | 2 | 2 |
79,006,642 | 2024-9-20 | https://stackoverflow.com/questions/79006642/multiply-elements-of-list-column-in-polars-dataframe-with-elements-of-regular-py | I have a pl.DataFrame with a column comprising lists like this: import polars as pl df = pl.DataFrame( { "symbol": ["A", "A", "B", "B"], "roc": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]], } ) shape: (4, 2) ┌────────┬────────────┐ │ symbol ┆ roc │ │ --- ┆ --- │ │ str ┆ list[f64] │ ╞════════╪════════════╡ │ A ┆ [0.... | Update: Broadcasting of literals/scalars for the List type was added in 1.10.0 df.with_columns(roc_wgt = pl.col.roc * weights) shape: (4, 3) ┌────────┬────────────┬──────────────┐ │ symbol ┆ roc ┆ roc_wgt │ │ --- ┆ --- ┆ --- │ │ str ┆ list[f64] ┆ list[f64] │ ╞════════╪════════════╪══════════════╡ │ A ┆ [0.1, 0.2] ┆ [0... | 3 | 4 |
79,003,772 | 2024-9-19 | https://stackoverflow.com/questions/79003772/why-does-decoding-a-large-base64-string-appear-to-be-faster-in-single-threaded-p | I have a number of large base64 strings to decode, ranging from a few hundred of MB up to ~5 GB each. The obvious solution is a single call to base64.b64decode ("reference implementation"). I'm trying to speed up the process by using multiprocessing, but, surprisingly, it is much slower than the reference implementatio... | TL;DR: Python parallelism sucks due to the global interpreter lock and inter-process communication. Data copies also introduce overheads making your parallel implementations even slower, especially since the operation tends to be memory-bound. A native CPython module can be written to overpass the CPython's limits and ... | 5 | 5 |
79,019,204 | 2024-9-24 | https://stackoverflow.com/questions/79019204/too-many-positional-arguments-on-one-machine-but-does-not-know-the-error-on-t | I am trying to set up a GitHub Actions workflow (definition below) checking for pylint requirements. I fixed this all on my local machine. Then I noticed the workflow reporting a too-many-positional-arguments error, but pylint on my local machine doesn't recognize that specific error. Now I tried to fix this by using pylint: disable=too-m... | I also experienced a similar issue; try setting max-positional-arguments=10 in your .pylintrc. Also, have a look at https://pylint.readthedocs.io/en/latest/user_guide/messages/refactor/too-many-positional-arguments.html Problematic code: # +1: [too-many-positional-arguments] def calculate_drag_force(velocity, area, den... | 2 | 3 |
79,020,533 | 2024-9-24 | https://stackoverflow.com/questions/79020533/using-multiple-client-certificates-with-python-and-selenium | I’m working on a web-scrape project using Python and Selenium with a Chrome driver, which requires client certificates to access pages. I have 2 scenarios it must handle: Different certificates allow access to different URLs (e.g. Certificate A accesses URLs 1, 2 and 3, and Certificate B accesses URLs 4, 5 and 6) Mult... | import sqlite3 import win32crypt from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.chrome.options import Options DATABASE_PATH = 'path/to/database.db' # Database with URLs and cert thumbprints CHROMEDRIVER_PATH = 'path/to/chromedriver' def fetch_thumbprint_for_... | 3 | 1 |
78,998,888 | 2024-9-18 | https://stackoverflow.com/questions/78998888/matplotlib-issue-with-mosaic-and-colorbars | I am facing a strange behaviour with my code. I don't understand why the subplot at the top left has a different space between the imshow and the colorbar compared to the subplot at the top right. And also I don't understand why the colorbar at the bottom is not aligned with the one at the top right. Can you explain th... | I posted this issue on the Matplotlib GitHub page and got an answer from jklymak. This is not the most elegant way to do it, but it works. However, if you want to place the colorbar at the bottom you have to adapt the code, updating the value in the inset_axes and also the orientation of the colorbar. img = ax['A'].imshow(... | 2 | 1 |
79,017,946 | 2024-9-24 | https://stackoverflow.com/questions/79017946/breaking-the-json-decode | Update: PanicException was fixed by pull/18249 in Polars 1.6.0 Broadcast length was fixed by pull/19148 in Polars 1.10.0 shape: (1, 1) ┌───────────┐ │ meta_data │ │ --- │ │ struct[0] │ ╞═══════════╡ │ {} │ └───────────┘ I want to make the string of empty dictionary to be a struct, but json_decode fails as there are... | At the moment Polars doesn't support structs without fields - see related issue. So it cannot work with only empty json objects. However, if you know which fields can be in your data you can supply appropriate dtype parameter to pl.Expr.str.json_decode(): df = pl.DataFrame({"a": ["{}"]}) df.with_columns( pl.col('a').st... | 3 | 2 |
79,019,656 | 2024-9-24 | https://stackoverflow.com/questions/79019656/hashed-cross-product-transformation-in-pytorch | I want to implement a hashed cross product transformation like the one Keras uses: >>> layer = keras.layers.HashedCrossing(num_bins=5, output_mode='one_hot') >>> feat1 = np.array([1, 5, 2, 1, 4]) >>> feat2 = np.array([2, 9, 42, 37, 8]) >>> layer((feat1, feat2)) <tf.Tensor: shape=(5, 5), dtype=float32, numpy= array([[0.... | I could trace it to this part of the code where they simply use "X" as a string separator on one set of crossed values from various features. I'm struggling to understand the concatenate(features) part. Do I have to do the hash of each "pair" of features? If you are crossing two features, for each pair of values from... | 3 | 4 |
79,019,484 | 2024-9-24 | https://stackoverflow.com/questions/79019484/error-table-already-exits-when-using-to-sql-if-exists-append-with-p | Using Pandas 2.2.3, sqlite3.version 2.6.0 and Python 3.12.5, I get an error "table ... already exists" when using to_sql with if_exists='append'. I am just trying to append some data from a Pandas df to a SQLite DB table. Using if_exists='replace' produces the same result. In order to make sure that the db connection is acti... | We can see what is happening by logging the SQL statements made by Pandas. This minimal example: import sqlite3 from sqlite3 import Error import pandas as pd table_name = 'tbl' df = pd.DataFrame([(1,)], columns=['a']) with sqlite3.connect(':memory:') as conn: # Log all (successful) SQL statements. conn.set_trace_callba... | 2 | 1 |
79,019,523 | 2024-9-24 | https://stackoverflow.com/questions/79019523/is-typing-assert-never-removed-by-command-line-option-o-similar-to-assert-st | In Python, the assert statement produces no code if the command-line optimization options -O or -OO are passed. Does the same happen for typing.assert_never()? Is it safe to declare runtime assertions that will not be optimized out? Consider the case: from typing import assert_never def func(item: int | str): match item: case int(): ... | No. assert_never() is only a normal function consisting of a raise statement, at least in CPython. Like other typing features, it is not removed at runtime. It should be emphasized that there is nothing special about assert_never(): it just takes in an argument of type Never and never returns anything (i.e., it raises): def a... | 1 | 2 |
79,004,567 | 2024-9-19 | https://stackoverflow.com/questions/79004567/selenium-headless-broke-after-chrome-update | After updating google chrome this weekend, headless mode using Selenium python API is bringing up a blank window when running in windows. The identical code I had running on a Debian VM does not work any longer. Here is a code snippet: chrome_options = Options() chrome_options.add_argument("--headless=new") #previousl... | It's a new bug in Chrome / Chromedriver 129 when using the new headless mode on Windows: https://github.com/SeleniumHQ/selenium/issues/14514#issuecomment-2357777800. https://issues.chromium.org/issues/359921643#comment2 In the meantime, use --window-position=-2400,-2400 to hide the window. chrome_options.add_argument("... | 3 | 6 |
79,020,229 | 2024-9-24 | https://stackoverflow.com/questions/79020229/creating-a-default-value-recursively-given-a-type-types-genericalias | Given a type t (originally comes from function annotations), I need to create a default value of that type. Normally, t() will do just that and work for many types, including basic types such as bool or int. However, tuple[bool, int]() returns an empty tuple, which is not a correct default value. It can get slightly tr... | Since tuple is the only instantiable standard type where the default value should not be just an instantiation of the type with no argument, you can simply special-case it in a function that recursively creates a default value for a given type: from types import GenericAlias def get_default_value(typ): if isinstance(ty... | 2 | 2 |
79,020,484 | 2024-9-24 | https://stackoverflow.com/questions/79020484/topic-modelling-many-documents-with-low-memory-overhead | I've been working on a topic modelling project using BERTopic 0.16.3, and the preliminary results were promising. However, as the project progressed and the requirements became apparent, I ran into a specific issue with scalability. Specifically: For development/testing, it needs to train reasonably quickly on a moder... | In general, advanced techniques like UMAP and HDBSCAN are helpful in producing high quality results on larger datasets but will take more memory. Unless it's absolutely required, you may want to consider relaxing this constraint for the sake of performance, real-world human time, and actual cost (hourly instance or oth... | 3 | 1 |
79,020,378 | 2024-9-24 | https://stackoverflow.com/questions/79020378/do-i-need-to-use-timezones-with-timedelta-and-datetime-now | If I only use datetime.now() with timedelta to calculate deltas, is it safe to ignore time zones? For example, is there a case where if a start time is before daylight savings, and an end time is after, that I will get the wrong result if I don't use a time zone aware call to datetime.now()? | No, it is not safe. Do calculations in UTC if you want the actual time between timezone-aware values. Two values in the same timezone without UTC will give "wall time". For example, DST ends Nov 3, 2024 at 2am: # "pip install tzdata" for up-to-date Windows IANA database. import datetime as dt import zoneinfo as zi zone... | 2 | 5 |
79,020,232 | 2024-9-24 | https://stackoverflow.com/questions/79020232/assign-multi-index-variable-values-based-on-the-number-of-elements-in-a-datafram | I have a large csv dataset the looks like the following: id,x,y,z 34295,695.117,74.0177,70.6486 20915,800.784,98.5225,19.3014 30369,870.428,98.742,23.9953 48151,547.681,53.055,174.176 34026,1231.02,73.7678,203.404 34797,782.725,73.9831,218.592 15598,983.502,82.9373,314.081 34076,614.738,86.3301,171.316 20328,889.016,98... | You could cut and value_counts: tmp = df[['x', 'y', 'z']] bins = np.arange(0, np.ceil(np.max(tmp)/100)*100, 100) tmp.apply(lambda s: pd.cut(s, bins, labels=bins[1:])).value_counts().to_dict() Output: {(900.0, 100.0, 100.0): 3, (600.0, 100.0, 200.0): 1, (700.0, 100.0, 100.0): 1, (700.0, 100.0, 200.0): 1, (800.0, 100.0,... | 1 | 1 |
79,019,358 | 2024-9-24 | https://stackoverflow.com/questions/79019358/converting-pandas-dataframe-to-wiki-markup-table | I'm automating some data processing and creating jira tickets out of it. Pandas does have to_html or to_csv or even to_markdown. But jira supports only wiki markup for creating a table. e.g. <!-- wiki markup --> ||header1||header2||header3||\r\n|cell 11|cell 12|cell 13|\r\n|cell 21|cell 22|cell 23| will create head... | Don't reinvent the wheel, tabulate supports a jira template: from tabulate import tabulate tabulate(df, headers='keys', tablefmt='jira', showindex=False) Output: '|| header1 || header2 || header3 ||\n| cell 11 | cell 12 | cell 13 |\n| cell 21 | cell 22 | cell 23 |' If you really want the \r\n line separator: tabulate... | 3 | 1 |
79,019,231 | 2024-9-24 | https://stackoverflow.com/questions/79019231/how-to-reduce-the-dimension-of-csv-file | Suppose I have one CSV file with dimension m×n, meaning m rows and n columns. I want to reduce its dimension by replacing each sub-matrix with its average value. Example 1: Given a 6×6 matrix (CSV file): col1,col2,col3,col4,col5,col6 a1,b1,c1,d1,e1, f1 a2,b2,c2,d2,e2, f2 a3,b3,c3,d3,e3, f3 a4,b4,c4,d4,e4, f4 a5,b... | Assuming this example: col1 col2 col3 col4 col5 col6 0 0 1 2 3 4 5 1 6 7 8 9 10 11 2 12 13 14 15 16 17 3 18 19 20 21 22 23 4 24 25 26 27 28 29 5 30 31 32 33 34 35 You could rename the indexes (with set_axis), stack, and groupby.mean: import math n, m = 2, 2 # desired shape out = (df .set_axis(np.arange(df.shape[0])//... | 1 | 2 |
79,015,399 | 2024-9-23 | https://stackoverflow.com/questions/79015399/qstate-assignproperty-not-working-in-pyside | I saw this example using QState, which seems to work with PyQt5. However on trying to use PySide for this, I get this error; Traceback (most recent call last): File ".../qstate-error.py", line 16, in <module> state1.assignProperty(widget, b'test', 1) TypeError: 'PySide2.QtCore.QState.assignProperty' called with wrong a... | It's a bug in the signature definition, reported in PYSIDE-2444. Based on the PyQt/PySide convention, a C++ char should be a bytes in Python, in fact the function works as expected in PyQt5/6, and also the latest PySide6 releases (due to the fix related to the above bug). In reality, many functions that accept char als... | 1 | 2 |
79,018,992 | 2024-9-24 | https://stackoverflow.com/questions/79018992/score-number-of-true-instances-with-python-polars | I am working on a dataframe with the following structure: df = pl.DataFrame({ "datetime": [ "2024-09-24 00:00", "2024-09-24 01:020", "2024-09-24 02:00", "2024-09-24 03:00", ], "Bucket1": [2.5, 8, 0.7, 12], "Bucket2": [3.7, 10.1, 25.9, 9.9], "Bucket3": [40.0, 15.5, 10.7, 56], }) My goal is to output a table that counts... | In the latest version 1.8.1 of Polars, your code runs as expected after replacing the arr namespace with the list namespace. Moreover, it can be simplified to avoid the list comprehension as follows. cols = ["Bucket1", "Bucket2", "Bucket3"] df.with_columns( pl.concat_list(pl.col(cols).is_between(0, 10, closed="left")).... | 2 | 2 |
79,019,014 | 2024-9-24 | https://stackoverflow.com/questions/79019014/column-is-not-accessible-using-groupby-and-applylambda | I'm encountering a KeyError when trying to use the .apply() method on a pandas DataFrame after performing a groupby. The goal is to calculate the weighted average baced on the Industry_adjusted_return column. The error indicates that the 'Industry_adjusted_return' column cannot be found. Below is a minimal example that... | You should access the columns directly from grouped when calculating the weighted average. No need to use .apply() in this case since you're applying a vectorized operation: import pandas as pd data = { 'ISIN': ['DE000A1DAHH0', 'DE000KSAG888'], 'Date': ['2017-03-01', '2017-03-01'], 'MP_quintile': [0, 0], 'Mcap_w': [808... | 1 | 2 |
79,018,528 | 2024-9-24 | https://stackoverflow.com/questions/79018528/exec-inside-a-function-and-generator | I need to write a custom exec function in python (for several purposes, but this is not the problem here, so this custom exec, which is called myExec, will do exactly what exec does for now). I ran into this problem: def myExec(code): exec(code) code = """ a = 1 print(a) u = [a for x in range(3)] print(u) """ myExec(code) Run... | We can explain this behavior if we take a look at the decompiled code. from dis import dis def myExec(code): dis(code) a = 1 is compiled to STORE_NAME, so it stores a as a local variable here. print(a) uses LOAD_NAME to load the local a. It is a local variable, so LOAD_NAME finds it. The list comprehension is compiled... | 2 | 1 |
79,010,439 | 2024-9-21 | https://stackoverflow.com/questions/79010439/async-server-and-client-scripts-stopped-working-after-upgrading-to-python3-12 | So I have two scripts that use asyncio's servers for communication. The scripts work by the server opening an asyncio server and listening for connections, the client script connecting to that server, the server script stopping listening for new connections and assigning the reader and the writer to global variables ... | To stop serving, server.close does the job. The wait_closed has a broader meaning. Let me quote directly from the asyncio code, it explains everything, also why it was working on 3.11: async def wait_closed(self): """Wait until server is closed and all connections are dropped. - If the server is not closed, wait. - If ... | 2 | 3 |
79,016,972 | 2024-9-24 | https://stackoverflow.com/questions/79016972/killable-socket-in-python | My goal is to emit an interface to listen on a socket forever ... until someone up the decision chain decides it's enough. This is my implementation, it does not work. Mixing threads, sockets, object lifetime, default params and a language I do not speak too well is confusing. I tested individually different aspects of... | I'm scared of globals. What I am attempting to do is pass a reference to "something somewhere that can be evaluated to bool". Global variables can be problematic - but sometimes they are the correct thing to use. "bool"s are scalar values in Python - when you pass alive as a parameter to your function, it will have i... | 1 | 2 |
79,000,759 | 2024-9-19 | https://stackoverflow.com/questions/79000759/is-it-possible-to-define-class-methods-as-shiny-python-modules | I'm trying to build a Shiny for Python app where portions of the form code can be broken out using Shiny Modules where I can define the specific ui and server logic as a class method, but ultimately inherit some base capability. I have the following shiny app: import pandas as pd from shiny import App, reactive, ui, mo... | I found a way to do all the things desired, but I'm not sure it's the best way; suggestions are welcome if there is a better one. Instead of explicitly defining a ui or server module AS a class method, it's possible to achieve similar functionality if the ui and server modules are defined INSIDE a class method and then cal... | 3 | 1 |
79,015,728 | 2024-9-23 | https://stackoverflow.com/questions/79015728/why-am-i-getting-runtimeerror-trying-to-backward-through-the-graph-a-second-ti | My code: import torch import random image_width, image_height = 128, 128 def apply_ellipse_mask(img, pos, axes): r = torch.arange(image_height)[:, None] c = torch.arange(image_width)[None, :] val_array = ((c - pos[0]) ** 2) / axes[0] ** 2 + ((r - pos[1]) ** 2) / axes[1] ** 2 mask = torch.where((0.9 < val_array) & (val_... | You have created a lot of leaf nodes (gradient-requiring variables), including: ref_image = apply_ellipse_mask(torch.zeros(image_width, image_height, requires_grad=True), sphere_position, [sphere_radius, sphere_radius, sphere_radius]) which creates a leaf node (with torch.zeros(image_width, image_height, requires_grad... | 4 | 2 |
79,015,652 | 2024-9-23 | https://stackoverflow.com/questions/79015652/implementing-eulers-form-of-trigonometric-interpolation | At the moment I am struggling to correctly implement Euler's form of trigonometric interpolation. To guide you through the work I have already done and information I have gathered, I will pair code with the mathematical definitions. The code will be written in Python and will make use of numpy functions. Please note, t... | There are many separate problems with what you have done. The most important is that the discrete Fourier transform does NOT produce the order of frequencies that you require. The order is roughly (see https://numpy.org/doc/stable/reference/routines.fft.html ): freq[0] .... positive frequencies ... negative frequencies... | 1 | 2 |
79,015,532 | 2024-9-23 | https://stackoverflow.com/questions/79015532/data-transformation-on-pandas-dataframe-to-connect-related-rows-based-on-shared | I have a table of company data that links subsidiary to parent companies as shown in the left hand side table of the screenshot. I need to transform the data into the table on the right hand side of the screenshot. This requires tracing through the two columns of the table and making the link between individual rows. ... | You can use networkx to form a directed graph, then loop over the paths with all_simple_paths: import numpy as np import networkx as nx # create the directed graph G = nx.from_pandas_edgelist(df, source='Subsidiary Company', target='Parent Company', create_using=nx.DiGraph) # find roots (final level companies) roots = ... | 1 | 1 |
79,015,047 | 2024-9-23 | https://stackoverflow.com/questions/79015047/ollama-multimodal-gemma-not-seeing-image | This sample multimodal/main.py appears to show Ollama working with images. I am trying to do the same with an image loaded from my machine. I am using the gemma2:27b model. The model is working with chat so that is not the issue. My code: import os.path import PIL.Image from dotenv import load_dotenv from ollama import generate load_dotenv(... | The Gemma2 models are not multimodal. They accept only text as input. If you want to process images, you need to use PaliGemma which is not supported by Ollama yet (you can follow this issue about it). You may find some PaliGemma examples at the Gemma cookbook github repo. | 1 | 2 |
79,015,439 | 2024-9-23 | https://stackoverflow.com/questions/79015439/how-do-i-fill-null-on-a-struct-column | I am trying to compare two dataframes via dfcompare = (df0 == df1) and nulls are never considered identical (unlike join there is no option to allow nulls to match). My approach with other fields is to fill them in with an "empty value" appropriate to their datatype. What should I use for structs? import polars as pl d... | A struct literal to use in the context of pl.Expr.fill_null can be created with pl.struct as follows. df.with_columns( pl.col("data").fill_null( pl.struct(a=pl.lit(1), b=pl.lit("MISSING")) ) ) shape: (3, 2) ┌──────┬───────────────┐ │ int ┆ data │ │ --- ┆ --- │ │ i64 ┆ struct[2] │ ╞══════╪═══════════════╡ │ 1 ┆ {1,"b"}... | 2 | 2 |
79,014,724 | 2024-9-23 | https://stackoverflow.com/questions/79014724/chang-pandas-dataframe-from-long-to-wide | I have a dataframe in the following format data = {'regions':["USA", "USA", "USA", "FRANCE", "FRANCE","FRANCE"], 'dates':['2024-08-03', '2024-08-10', '2024-08-17','2024-08-03', '2024-08-10', '2024-08-17'], 'values': [3, 4, 5, 7, 8,0], } df = pd.DataFrame(data) regions dates values 0 USA 2024-08-03 3 1 USA 2024-08-10 4 2 USA ... | If the same dates exist for each region, it is possible to convert the column to datetimes, pivot, change the column names, and add a column with the maximal dates: df['dates'] = pd.to_datetime(df['dates']) out = df.pivot(index='regions', columns='dates', values='values') out.columns = [f'values_lag{i-1}' if i!=1 else 'values' for i in range(len... | 1 | 1 |
79,008,061 | 2024-9-20 | https://stackoverflow.com/questions/79008061/proper-way-to-process-larger-than-memory-datasets-in-polars | I have begun to learn and implement Polars because of (1) the potential speed improvements and (2) for the promise of being able to process larger-than-memory datasets. However, I'm struggling to see how the second promise is actually delivered in specific scenarios that my use case requires. One specific example I'm s... | As was mentioned in comments, the streaming engine is undergoing a significant revamp to address the shortcomings of the current implementation. The details of that revamp, as far as I'm aware, haven't been documented so I can't say that this exact use case will be addressed in that revamp. It's not clear, to me, what ... | 4 | 3 |
79,013,954 | 2024-9-23 | https://stackoverflow.com/questions/79013954/how-to-add-up-value-in-column-v-by-multiplying-previous-cell-with-a-fixed-facto | Starting with a "pd.DataFrame" df: n v 0 1 0.0 1 2 0.0 2 3 0.0 3 4 0.0 4 5 0.0 5 6 0.0 I'd like to fill in column "v", whereby a cell in column "v" is produced by multiplying the previous cell of "v" by a fixed factor, then adding the current cell value of column "n". (See sample calculation table below) ## sample ... | Since n is variable, you can't easily vectorize this (you could use a matrix operation, see below, but this would take O(n^2) space). A good tradeoff might be to use numba to speed up the operation: from numba import jit @jit(nopython=True) def fnv(n, factor=0.5): out = [] prev = 0 for x in n: out.append(x + prev*factor... | 3 | 2 |
79,014,004 | 2024-9-23 | https://stackoverflow.com/questions/79014004/how-to-fill-blank-cells-created-by-join-but-keep-original-null-in-pandas | I have two dataframe, one is daily and one is quarterly idx = pd.date_range("2023-03-31", periods=100, freq="D") idx_q = idx.to_series().resample("QE").last() df1 = pd.DataFrame({"A": [1, "a", None], "B": [4, None, 6]}, index=idx_q) np.random.seed(42) df2 = pd.DataFrame({"C": np.random.randn(100), "D": np.random.randn(... | IIUC, you can build a mask from df1 with isna and reindex: df = df1.join(df2) df = df.ffill().mask(df1.isna().reindex(columns=df.columns, fill_value=False)) Another approach could be to use a placeholder instead of NaNs in df1: df = df1.fillna('PLACEHOLDER').join(df2).ffill().replace('PLACEHOLDER', np.nan) Output: A... | 2 | 2 |
79,013,736 | 2024-9-23 | https://stackoverflow.com/questions/79013736/fill-numpy-array-to-the-right-based-on-previous-column | I have the following states and transition matrix import numpy as np n_states = 3 states = np.arange(n_states) T = np.array([ [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0, 1] ]) I would like to simulate n_sims paths where each path consist of n_steps. Each path starts at 0. Therefore, I write n_sims = 100 n_steps = 10 paths = ... | IIUC, your process being inherently iterative, you won't benefit much from numpy's vectorization. You might want to consider using pure python: def simulation(T, n_states=3, n_sims=100, n_steps=10, seed=123): rng = np.random.default_rng(seed) start = np.zeros(n_steps, dtype=np.int64) out = [start] for i in range(n_sims... | 2 | 1 |
79,013,239 | 2024-9-23 | https://stackoverflow.com/questions/79013239/preventing-date-conversion-in-excel-with-xlwings | The problem is as the title states. I have a column AX filled with values. The name of the column is "Remarks" and it will contain remarks but some of those remarks are dates and some are full blown notes like "Person A owes Person B X amount." The problem I'm currently facing now is that in xlwings the columns that ar... | Dates, or date-looking strings, are always a pain. I think the simplest way to deal with it in your case is to insert a ' as the first character in every cell to force Excel to treat it like a string, something like .range(f'A{i}').value = f"'{value}" (not tested, assuming that the column of data in question is in col... | 2 | 0 |
79,013,441 | 2024-9-23 | https://stackoverflow.com/questions/79013441/pandas-period-for-a-custom-time-period | I want to create pandas.Period for a custom time period, for example for a duration starting_time = pd.Timestamp('2024-01-01 09:15:00') and ending_time = pd.Timestamp('2024-01-05 08:17:00'). One way to achieving this is by first getting the pandas.Timedelta and then create pandas.Period. import pandas as pd # Define st... | You don't need to compute the duration in minutes, just pass the subtraction: pd.Period(starting_time, freq=ending_time-starting_time) Which is almost your ideal straightforward approach. Output: Period('2024-01-01 09:15', '5702min') Note that you could also use a function to have the desired parameters: def cust_per... | 3 | 2 |
79,012,126 | 2024-9-22 | https://stackoverflow.com/questions/79012126/error-when-trying-add-filter-to-gmail-using-python-script | I am trying to create a simple Python app that would be able to delete and add filters to Gmail. Using different SCOPES I can easily list labels, filters, and so on but when trying to add a new filter I am getting an error. The code below is a simplification of my actual code (that is broken into set of functions) but ... | Please modify your script as follows. From: filter_body = {'criteria': {'from': 'example@example.com'}, 'action': {'removeLabelsIds': ['SPAM']}} To: filter_body = {'criteria': {'from': 'example@example.com'}, 'action': {'removeLabelIds': ['SPAM']}} In this modification, Labels of removeLabelsIds is modified to Label... | 2 | 1 |
79,012,503 | 2024-9-22 | https://stackoverflow.com/questions/79012503/how-to-properly-track-gradients-with-mygrad-when-using-scipys-rectbivariatespli | I'm working on a project where I need to interpolate enthalpy values using scipy.interpolate.RectBivariateSpline and then perform automatic differentiation using mygrad. However, I'm encountering an issue where the gradient is not tracked at all across the interpolation. Here is a simplified version of my code: import ... | The problem here is that MyGrad doesn't know how to differentiate this operation. You can get around this by defining a custom operation with a backwards pass. The MyGrad docs explain this here. In order to implement the backward pass, you need to be able to evaluate a partial derivative of the spline. The SciPy docs e... | 1 | 2 |
79,012,469 | 2024-9-22 | https://stackoverflow.com/questions/79012469/how-do-i-change-a-variable-while-inheriting | I'm using OOP for the first time, I want to make the SubWindow class inherit all properties from the MainWindow class, but self.root would be tk.Toplevel() instead of tk.Tk(): import tkinter as tk class MainWindow: def __init__(self, size, title): self.root = tk.Tk() self.root.geometry(size) self.root.title(title) def ... | What you can do is to pass a required value for self.root to the base class constructor, with a default where the base class chooses tk.Tk(): import tkinter as tk class MainWindow: def __init__(self, size, title, root=None): if root == None: root = tk.Tk() self.root = root self.root.geometry(size) self.root.title(title... | 3 | 3 |
79,009,647 | 2024-9-21 | https://stackoverflow.com/questions/79009647/how-to-calculate-the-exponential-moving-average-ema-through-record-iterations | I have created a pandas dataframe as follows: import pandas as pd import numpy as np ds = { 'trend' : [1,1,1,1,2,2,3,3,3,3,3,3,4,4,4,4,4], 'price' : [23,43,56,21,43,55,54,32,9,12,11,12,23,3,2,1,1]} df = pd.DataFrame(data=ds) The dataframe looks as follows: display(df) trend price 0 1 23 1 1 43 2 1 56 3 1 21 4 2 43 5 ... | I slightly changed the example that you asked about (RSI). I added -1 in the first prev, the cycle of filling by slices, price, and in setting the values by slice of the data frame. You can also try numba or cython, but most likely the code will need to be rewritten (not all functions in them are available from numpy, I do... | 2 | 1 |
79,011,621 | 2024-9-22 | https://stackoverflow.com/questions/79011621/sqlite-gives-a-value-which-i-cannot-recreate | Using django: here is some values and the query: max_played, avg_days_last_played = get_next_song_priority_values() # Calculate time since played using raw SQL time_since_played_expr = RawSQL("(julianday('now') - julianday(main_song.played_at))", []) query = Song.objects # Annotate priority songs_with_priority = query... | In SQLite, integer division produces an integer value, so 1 / 6 = 0: sqlite> select 1 / 6; 0 You can multiply one of the value by 1.0 to convert it to a float value and then this should work: sqlite> select 1 / 6.0; 0.166666666666667 priority=( F('rating') - (F('count_played') / Value(max_played * 1.0)) # <- this li... | 1 | 2 |
79,010,903 | 2024-9-22 | https://stackoverflow.com/questions/79010903/counting-the-number-of-positive-integers-that-are-lexicographically-smaller-than | Say I have a number num and I want to count the number of positive integers in the range [1, n] which are lexicographically smaller than num and n is some arbitrary large integer. A number x is lexicographically smaller than a number y if the converted string str(x) is lexicographically smaller than the converted strin... | I couldn't follow the logic in your explanation, but this shouldn't need dynamic programming. In essence you want to do a separate count for each possible width of an integer. For instance, when calling count(7, 13), you'd want to count: integers with one digit: [1, 6] = 6 integers integers with two digits: [10, 13] =... | 3 | 2 |
79,010,931 | 2024-9-22 | https://stackoverflow.com/questions/79010931/how-to-trigger-a-post-request-api-to-add-a-record-in-a-sqlite-database-table-usi | I am trying to submit an HTML form from the browser to create a new user in a SQLite database table. Clicking on the Submit button triggers a POST request using FastAPI and Sqlalchemy 2.0. The API works perfectly when executed from the Swagger UI. But it does not work when triggered from an actual HTML form, returning a... | I can see a couple of potential problems here: Use a Form dependency for each form field: @users_router.post("/create", response_class=HTMLResponse, status_code=201) async def create_user( request: Request, first_name: str = Form(), last_name: str = Form(), gender: str = Form(), email: str = Form(), db: Session = Depe... | 2 | 1 |
79,003,067 | 2024-9-19 | https://stackoverflow.com/questions/79003067/how-to-check-if-a-specific-list-element-is-a-number | I have 2 lists, one that contains both numbers and strings and one only numbers: list1 = [1, 2, 'A', 'B', 3, '4'] list2 = [1, 2, 3, 4, 5, 6] My goal is to print from list2 only the numbers that have another number (both as number or string) in the same index in list1. Expected output: [1,2,5,6] I have tried the follo... | This can be solved with a one-liner, like so: list1 = [1,2,'A','B',3,'4'] list2 = [1,2,3,4,5,6] print([list2[index] for index, x in enumerate(list1) if isinstance(x, int)]) But we can't check if a string can become an int in this specific case. Basically, using list comprehension, we filter the first list and... | 3 | 6 |
79,009,790 | 2024-9-21 | https://stackoverflow.com/questions/79009790/executing-a-market-buy-order-through-kucoin-api | I have just started with the Kucoin API but I'm having trouble executing a market buy order in the futures market using Python. I'm using this GitHub repo as a reference: https://github.com/Kucoin/kucoin-futures-python-sdk Here's the code I tried: from kucoin_futures.client import Trade client = Trade(key='myKey', secret='... | By default, orders are limit orders, so you need to specify the price and size attributes. This is according to the API manual: https://www.kucoin.com/docs/rest/spot-trading/orders/place-order | 2 | 1 |
79,008,530 | 2024-9-20 | https://stackoverflow.com/questions/79008530/mutating-cells-in-a-large-polars-python-dataframe-with-iter-rows-yields-segmen | I have a large dataframe that looks like this: df_large = pl.DataFrame({'x':['h1','h2','h2','h3'], 'y':[1,2,3,4], 'ind1':['0/0','1/0','1/1','0/1'], 'ind2':['0/1','0/2','1/1','0/0'] }).lazy() df_large.collect() | x | y | ind_1 | ind_2 | |_______|_______|_______|_________| | "h1" | 1 | "0/0" | '0/1' | | "h2" | 2 | "1/0"... | If you reshape the small frame with .pivot() df_rep.with_columns(value=True).pivot(on="ind", index=["x", "y"]) shape: (2, 4) ┌─────┬─────┬──────┬──────┐ │ x ┆ y ┆ ind1 ┆ ind2 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ bool ┆ bool │ ╞═════╪═════╪══════╪══════╡ │ h1 ┆ 1 ┆ true ┆ null │ │ h2 ┆ 2 ┆ true ┆ true │ └─────┴───... | 1 | 2 |
79,009,454 | 2024-9-21 | https://stackoverflow.com/questions/79009454/convert-a-list-of-time-string-to-unique-string-format | I have a list of time string with different formats as shown time = ["1:5 am", "1:35 am", "8:1 am", "9:14 am", "14:23 pm", "20:2 pm"] dict = {'time': time} df = pd.DataFrame(dict) and wanted to replace strings in list as shown below. ["01:05 am", "01:35 am", "08:01 am", "09:14 am", "14:23 pm", "20:02 pm"] Not sure ho... | A possible solution, which is based on regex. (df['time'].str.replace(r'^(\d):', r'0\1:', regex=True) .str.replace(r':(\d)\s', r':0\1 ', regex=True)) The main ideas are: With r'^(\d):', one matches a single digit at the beginning of the string followed by a colon (e.g., 1: in 1:5 am). With r'0\1:', one adds a 0 befo... | 3 | 1 |
79,008,533 | 2024-9-20 | https://stackoverflow.com/questions/79008533/how-do-i-count-blank-an-filled-cells-in-each-column-in-a-csv-file | What I want is to count the filled and blank cells in each column of a .csv file. This is my code: import pandas as pd file_path = r"C:\Users\andre\OneDrive\Documentos\Farmácia\Python\Cadastro_clientes\cadastro_cli.csv" df = pd.read_csv(file_path, sep='|', header=None) # No names argument, read all columns filled_count... | If you want to tally the number of filled and blank cells for each column individually, use: summary = pd.DataFrame({ 'Filled': df.notnull().sum(), # Count non-null (filled) cells 'Blank': df.isnull().sum() # Count null (blank) cells }) print(summary, "\n") If you want to tally for the whole csv, use: total_filled = s... | 1 | 3 |
79,008,391 | 2024-9-20 | https://stackoverflow.com/questions/79008391/grabbing-a-specific-url-from-a-webpage-with-re-and-requests | import requests, re r = requests.get('example.com') p = re.compile('\d') print(p.match(str(r.text))) This always prints None, even though r.text definitely contains numbers, but print(p.match('12345')) works. What do I need to do to r.text to make it readable by re.compile.match()? Casting to str is clearly insufficie... | It is because re.match only checks for a match at the beginning of the string, and r.text does not start with a number. If you want to find the first match, then use re.search instead: import requests, re r = requests.get('https://example.com') p = re.compile(r'\d') print(p.search(r.text)) Output: <re.Match object; sp... | 2 | 0 |
79,007,387 | 2024-9-20 | https://stackoverflow.com/questions/79007387/python-3-superclass-instantiation-via-derived-classs-default-constructor | In this code: class A(): def __init__(self, x): self.x = x def __str__(self): return self.x class B(A): def __str__(self): return super().__str__() b = B("Hi") print(b) The output is: Hi. What is happening under the hood? How does the default constructor in the derived class invoke the super class constructor? How are... | How does the default constructor in the derived class invoke the super class constructor? You didn't override it in B, so you inherited it from A. That's what inheritance is for. >>> B.__init__ is A.__init__ True In the same vein, you may as well not define B.__str__ at all here, since it doesn't do anything (other ... | 1 | 7 |
79,004,202 | 2024-9-19 | https://stackoverflow.com/questions/79004202/minimum-number-of-operations-for-array-of-numbers-to-all-equal-one-number | You have one array of numbers, for example [2, 5, 1]. You have a second array of numbers, for example [8, 4, 3]. For each of the numbers in the second array, how many operations would it take to make the first array all equal that number? You can only increment or decrement by 1 at a time. To get to 8, it would take (8... | Same approach as the others (sort, use bisect to split into the i smaller and the n-i larger values, and look up their sums with precomputed prefix sums), just less code: from itertools import accumulate from bisect import bisect def solve(a1, a2): a1.sort() p1 = 0, *accumulate(a1) n = len(a1) total = p1[-1] return [ (... | 2 | 1 |
79,004,666 | 2024-9-19 | https://stackoverflow.com/questions/79004666/why-does-ismethod-return-false-for-a-method-when-accessed-via-the-class | Define a simple method: class Foo: def bar(self): print('bar!') Now use inspect.ismethod: >>> from inspect import ismethod >>> ismethod(Foo.bar) False Why is this? Isn't bar a method? | Consult the documentation: inspect.ismethod(object) Return True if the object is a bound method written in Python. So ismethod is actually only testing for bound methods. That means if you create an instance and access the method through the instance, ismethod will return True: >>> obj = Foo() >>> ismethod(obj.bar) T... | 3 | 4 |
78,997,019 | 2024-9-18 | https://stackoverflow.com/questions/78997019/in-python-3-12-why-does-%c3%96l-take-less-memory-than-%c3%96 | I just read PEP 393 and learned that Python's str type uses different internal representations, depending on the content. So, I experimented a little bit and was a bit surprised by the results: >>> sys.getsizeof('') 41 >>> sys.getsizeof('H') 42 >>> sys.getsizeof('Hi') 43 >>> sys.getsizeof('Ö') 61 >>> sys.getsizeof('Öl'... | This test code (the structures are only correct according to 3.12.4 source, and even so I didn't quite double-check them) import ctypes import sys class PyUnicodeObject(ctypes.Structure): _fields_ = [ ("ob_refcnt", ctypes.c_ssize_t), ("ob_type", ctypes.c_void_p), ("length", ctypes.c_ssize_t), ("hash", ctypes.c_ssize_t)... | 36 | 19 |
79,002,792 | 2024-9-19 | https://stackoverflow.com/questions/79002792/numpy-random-0-and-1-matrix-with-bias-towards-0 | Is there a smart, fast way to create an n x n numpy array filled with 0's and 1's with bias towards 0's? I did np.random.randint(2, (size,size)) but the bias is not accounted for here. I could do for loop but i want a faster cleaner way to populate the matrix. Thanks! | I would use numpy.random.choice with custom probabilities: n = 10 p = 0.9 # probability of 0s out = np.random.choice([0, 1], size=(n, n), p=[p, 1-p]) Another option, generating numbers in the 0-1 range, comparing to the probability of 0s and converting the booleans to integer: out = (np.random.random((n, n))>p).astype... | 3 | 3 |
78,999,687 | 2024-9-18 | https://stackoverflow.com/questions/78999687/polars-make-all-groups-the-same-size | Question I'm trying to make all groups for a given data frame have the same size. In Starting point below, I show an example of a data frame that I wish to transform. In Goal I try to demonstrate what I'm trying to achieve. I want to group by the column group, make all groups have a size of 4, and fill 'missing' value... | The advantage of the approach below is that we don't transform the original DataFrame (except maybe sorting, if you want to rearrange the groups); we only create additional rows and append them back to the original DataFrame. I've adjusted my answer a bit, based on the assumption that you want the size of the group to be the max of th... | 3 | 3 |
79,002,206 | 2024-9-19 | https://stackoverflow.com/questions/79002206/concatenate-polars-dataframe-with-columns-of-dtype-enum | Consider having two pl.DataFrames with identical schema. One of the columns has dtype=pl.Enum. import polars as pl enum_col1 = pl.Enum(["type1"]) enum_col2 = pl.Enum(["type2"]) df1 = pl.DataFrame( {"enum_col": "type1", "value": 10}, schema={"enum_col": enum_col1, "value": pl.Int64}, ) df2 = pl.DataFrame( {"enum_col": "... | you can cast enum_col to combined enum type: enum_col = enum_col1 | enum_col2 pl.concat( df.with_columns(pl.col.enum_col.cast(enum_col)) for df in [df1, df2] ) shape: (2, 2) ┌──────────┬───────┐ │ enum_col ┆ value │ │ --- ┆ --- │ │ enum ┆ i64 │ ╞══════════╪═══════╡ │ type1 ┆ 10 │ │ type2 ┆ 200 │ └──────────┴───────┘ ... | 2 | 3 |
79,000,778 | 2024-9-19 | https://stackoverflow.com/questions/79000778/how-to-expand-a-single-index-dataframe-to-a-multiindex-dataframe-in-an-efficient | import pandas as pd concordance_region = pd.DataFrame( { "country 1": pd.Series([1, 0], index=["region a", "region b"]), "country 2": pd.Series([0, 1], index=["region a", "region b"]), "country 3": pd.Series([0, 1], index=["region a", "region b"]), } ) display(concordance_region) country_index = concordance_region.colu... | Code use np.kron and identity matrix (identity matrix can be created with np.eye.) import pandas as pd import numpy as np # taken from questioner's code sector_index = ['sector a', 'sector b'] country_sector = pd.MultiIndex.from_product( [country_index, sector_index], names=["country", "sector"]) region_sector = pd.Mul... | 1 | 4 |
78,999,867 | 2024-9-18 | https://stackoverflow.com/questions/78999867/django-change-form-prefix-separator | I'm using form prefix to render the same django form twice in the same template and avoid identical fields id's. When you do so, the separator between the prefix and the field name is '-', I would like it to be '_' instead. Is it possible ? Thanks | You could "monkey patch" [wiki] the BaseForm code, for example in some AppConfig: # app_name/config.py from django.apps import AppConfig class MyAppConfig(AppConfig): def ready(self): from django.forms.forms import BaseForm def add_prefix(self, field_name): return f'{self.prefix}_{field_name}' if self.prefix else field... | 2 | 2 |
78,998,333 | 2024-9-18 | https://stackoverflow.com/questions/78998333/python-inheritance-not-returning-new-class | I'm having problems understanding why the inheritance is not working in the following example: import vlc class CustomMediaPlayer(vlc.MediaPlayer): def __init__(self, *args): super().__init__(*args) def custom_method(self): print("EUREKA") custom_mp = CustomMediaPlayer() print(custom_mp) custom_mp.custom_method() This... | This can happen if the base class overrides the __new__ method, which controls what happens when someone attempts to instantiate the class. The typical behaviour is that a new instance of the given class argument is created; but the __new__ function is free to return an existing object, or create an object according to... | 4 | 3 |
78,998,783 | 2024-9-18 | https://stackoverflow.com/questions/78998783/sum-of-corresponding-values-from-different-arrays-of-the-same-size-with-python | I'm rather new to Python so it's quite possible that my question has already been asked on the net but when I find things that seem relevant, I don't always know how to use them in my code (especially if it's a function definition), so I apologise if there's any redundancy. I work with daily temperature data from the C... | I've added empty_array = np.array([]) before my for loop but I don't know what to do next in the loop. Almost right. You need to instantiate an array of the same shape as the arrays you will sum. With Numpy, you can only sum arrays of the same shape. You can inspect shape using arr.shape. The array should initially b... | 2 | 2 |
78,997,513 | 2024-9-18 | https://stackoverflow.com/questions/78997513/why-is-there-typeerror-string-indices-must-be-integers-when-using-negative-in | I would like to understand why this works fine: >>> test_string = 'long brown fox jump over a lazy python' >>> 'formatted "{test_string[0]}"'.format(test_string=test_string) 'formatted "l"' Yet this fails: >>> 'formatted "{test_string[-1]}"'.format(test_string=test_string) Traceback (most recent call last): File "<std... | Why it doesn't work This is explained in the spec of str.format(): The arg_name can be followed by any number of index or attribute expressions. An expression of the form '.name' selects the named attribute using getattr(), while an expression of the form '[index]' does an index lookup using __getitem__(). That is, y... | 4 | 4 |
78,996,869 | 2024-9-18 | https://stackoverflow.com/questions/78996869/random-adjacendy-matrix-from-list-of-degrees | I want to do exactly the same thing as this post, but in Python: given a list of natural integers, generate a random adjacency matrix whose degrees match the list. I had high hopes, as the proposed solution uses a function from igraph, sample_degseq. However, it seems like this function does not exist in the py... | The equivalent of R/igraph's sample_degseq() in python-igraph is Graph.Degree_Sequence(). Note that not all methods sample uniformly, and not all methods produce the same kind of graph (simple graph vs multigraph). "configuration_simple" and "edge_switching_simple" sample simple graphs uniformly. The former is exactly ... | 2 | 1 |
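A minimal sketch with python-igraph; the method name follows the answer, but check your installed version, since the set of available methods has changed across releases:

```python
import igraph as ig

degrees = [2, 2, 2, 1, 1]  # example degree sequence; the sum must be even

# samples a simple graph whose degree sequence matches `degrees`
g = ig.Graph.Degree_Sequence(degrees, method="edge_switching_simple")
print(g.degree())  # [2, 2, 2, 1, 1]
```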
78,981,196 | 2024-9-13 | https://stackoverflow.com/questions/78981196/can-i-access-directories-in-palantir-and-use-for-to-get-names-of-all-tables-insi | I need to simplify the process of downloading datasets from Palantir. My idea was to use it like a directory on a local PC, but the problem is that when I make a codespace to use my own code, it seems to use a virtual Python environment, so I can't access directories outside of the environment, which has the datasets I want ... | If what you want to do is the following (best guess; as the comment below your post notes, it would help to clarify what exactly is meant by datasets, files, etc.): I have a lot of files on my local laptop, I need to upload them to Foundry, process them, and this will generate another dataset of lots of files; how can I dow... | 2 | 0 |
78,972,060 | 2024-9-11 | https://stackoverflow.com/questions/78972060/how-to-extract-values-based-on-column-names-and-put-it-in-another-column-in-pola | I would like to fill a value in a column based on another column's name, using the Polars library in Python (I obtained the following DF by exploding my variables' column names): Input: df = pl.from_repr(""" ┌────────┬─────────┬────────┬─────┬──────────┐ │ Name ┆ Average ┆ Median ┆ Q1 ┆ Variable │ │ --- ┆ --- ┆ --- ┆ --... | You can use when/then() to check whether the value of the column Variable is the same as the column name, and coalesce() to choose the first non-empty result. df.with_columns( value = pl.coalesce( pl.when(pl.col.Variable == col).then(pl.col(col)) for col in df["Variable"].unique() ) ) ┌────────┬─────────┬────────┬─────┬─────... | 4 | 2 |
78,975,421 | 2024-9-11 | https://stackoverflow.com/questions/78975421/how-do-i-filter-across-multiple-model-relationships | My models: class Order (models.Model): customer = models.ForeignKey("Customer", on_delete=models.RESTRICT) request_date = models.DateField() price = models.DecimalField(max_digits=10, decimal_places=2) @property def agent_name(self): assignment = Assignment.objects.get(assig_year = self.request_date.year, customer = se... | class Assignment (models.Model): assig_year = models.PositiveSmallIntegerField() customer = models.ForeignKey("Customer", on_delete=models.CASCADE) sales_agent = models.ForeignKey("Agent", on_delete=models.CASCADE) class Meta: #unique key year + customer constraints = [ UniqueConstraint( fields=['assig_year', 'customer... | 2 | 1 |
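One way to turn the agent_name property into a queryset annotation is a Subquery with OuterRef; this is a sketch that assumes Agent has a name field and a Django version supporting transforms such as __year inside OuterRef:

```python
from django.db.models import OuterRef, Subquery

agent_qs = Assignment.objects.filter(
    assig_year=OuterRef("request_date__year"),
    customer=OuterRef("customer"),
).values("sales_agent__name")[:1]  # `name` on Agent is an assumption

orders = Order.objects.annotate(agent_name=Subquery(agent_qs))
for order in orders:
    print(order.agent_name)  # resolved in a single query, no N+1
```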
78,972,238 | 2024-9-11 | https://stackoverflow.com/questions/78972238/celery-tasks-with-psycopg-programmingerror-the-last-operation-didnt-produce-a | I'm working on a project in which I have: a PostgreSQL 16.2 database; a Python 3.12 backend using psycopg 3.2.1 and psycopg_pool 3.2.2; Celery for handling asynchronous tasks. The Celery tasks use the database pool through the following code: import os from psycopg_pool import ConnectionPool from contextlib import con... | If both the main target row of test and the additional one selected based on its test.resource_id foreign key aren't shareable, lock them. Otherwise, concurrent workers are likely bumping into each other, taking on the processing of the same row and altering its fields and the fields of the one it's associated wi... | 4 | 3 |
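A sketch of the locking idea with psycopg: a worker claims a row with FOR UPDATE SKIP LOCKED so concurrent Celery workers never pick up the same row. The table and column names are assumptions based on the question, and `pool` is the ConnectionPool from the code above:

```python
with pool.connection() as conn:
    with conn.cursor() as cur:
        # claim one unprocessed row; SKIP LOCKED makes concurrent
        # transactions pass over rows already claimed by another worker
        cur.execute(
            """
            SELECT id FROM test
            WHERE processed = false
            ORDER BY id
            FOR UPDATE SKIP LOCKED
            LIMIT 1
            """
        )
        row = cur.fetchone()
        if row is not None:
            cur.execute(
                "UPDATE test SET processed = true WHERE id = %s", (row[0],)
            )
# leaving the connection() block commits and releases the lock
```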
78,987,685 | 2024-9-15 | https://stackoverflow.com/questions/78987685/abnormal-interpolating-spline-with-odd-number-of-points | I have implemented a cubic B-Spline interpolation, not approximation, as follows: import numpy as np import math from geomdl import knotvector def cox_de_boor( d_, t_, k_, knots_): if (d_ == 0): if ( knots_[k_] <= t_ <= knots_[k_+1]): return 1.0 return 0.0 denom_l = (knots_[k_+d_] - knots_[k_]) left = 0.0 if (denom_l !... | The abnormal behavior described is very interesting and its cause is subtle. Basically, the root cause of this behavior is the Cox-DeBoor function implementation with the <= fix. In my answer to the OP's previous SO question I give a detailed explanation of why this fix is wrong. In short, this implementation construct... | 2 | 1 |
78,982,732 | 2024-9-13 | https://stackoverflow.com/questions/78982732/extracting-data-from-two-nested-columns-in-one-dataframe | I have a pandas dataframe that contains transactions. A transaction is either booked as a payment, or a ledger_account_booking. A single transaction can have multiple payments and/or multiple ledger account bookings. Therefore, my columns payments and ledger_account_bookings contain a list of dicts, where the number of... | Universal solution for processing data by tuples: #in tuple set original and new columns names prefixes cols = [('payments', 'payments'),('ledger_account_bookings', 'ledger')] L = [] for col, prefix in cols: df_explode = df_financial_mutations.pop(col).explode() #Normalize the json column into separate columns df_norma... | 3 | 3 |
78,979,548 | 2024-9-12 | https://stackoverflow.com/questions/78979548/create-list-column-out-of-column-names | I have a simple pl.DataFrame with a number of columns that only contain boolean values. import polars as pl df = pl.DataFrame( {"s1": [True, True, False], "s2": [False, True, True], "s3": [False, False, False]} ) shape: (3, 3) ┌───────┬───────┬───────┐ │ s1 ┆ s2 ┆ s3 │ │ --- ┆ --- ┆ --- │ │ bool ┆ bool ┆ bool │ ╞══════... | List API You could build a list of when/then expressions and then remove the nulls. df.with_columns( pl.concat_list( pl.when(col).then(pl.lit(col)) for col in df.columns ) .list.drop_nulls() .alias("list") ) shape: (3, 4) ┌───────┬───────┬───────┬──────────────┐ │ s1 ┆ s2 ┆ s3 ┆ list │ │ --- ┆ --- ┆ --- ┆ --- │ │ bool... | 3 | 1 |
78,991,877 | 2024-9-16 | https://stackoverflow.com/questions/78991877/checking-count-discrepancies-from-one-date-to-another-in-dataframe | Suppose I have this data data = {'site': ['ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY', 'ACY'... | Whenever I see a problem that requires comparison acrosss dates with particular properties I immediately think "what is the correct dataframe index?". In this case, using a good index and some restructuring makes the problem pretty easy. I did indexed = df_sample.set_index(["site", "item_id", "usage_date"]).unstack("us... | 3 | 2 |
78,976,838 | 2024-9-12 | https://stackoverflow.com/questions/78976838/python-selenium-issue-with-google-chrome-version-128-0-6613-138-profil-screen | I recently ran into the problem that all my Python scripts which utilize the Selenium module are broken, apparently due to a Google Chrome update. It seems like Selenium/Google Chrome always asks to select a user profile no matter what options are given inside the Python script, e.g. "user-data-dir" has no effect at all... | This is not an actual answer to the Selenium issue, but for now my solution is to abandon Selenium and instead use Playwright, which I can use in just the same way I need. | 2 | 1 |
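A minimal Playwright sketch of the switch described, using a persistent profile directory in place of Chrome's user-data-dir (the profile path is hypothetical):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    # a persistent context plays the role of Chrome's "user-data-dir",
    # so no profile-picker screen appears
    context = p.chromium.launch_persistent_context(
        user_data_dir="/tmp/my-profile",  # hypothetical path
        headless=False,
    )
    page = context.new_page()
    page.goto("https://example.com")
    print(page.title())
    context.close()
```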
78,992,288 | 2024-9-17 | https://stackoverflow.com/questions/78992288/parameter-problems-of-iloc-functions-in-pandas | I just started to learn pandas, and the call df.iloc[[1][0]] (df is a pd.DataFrame with a shape of (60935, 54)) appeared in some code. I understand that df.iloc[[1][0]] should be a row of df, but how should we understand [[1][0]]? Why do the parameters in iloc[] allow the acceptance of two adjacent lists? How... | You misunderstood the syntax [0][-1] and so on. [0] is a list of length 1 containing the number 0. [1] is likewise a list of length 1 containing the number 1. [0][-1] means "take the last element of the list [0]", which is equivalent to .iloc[0]. [1][0] means "take the first element of the list [1]", which is equivalent to... | 2 | 1 |
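A quick demonstration that the two brackets are just a list literal followed by normal indexing:

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30]})

print([1][0])           # 1 -- the first element of the one-item list [1]
print(df.iloc[[1][0]])  # therefore identical to df.iloc[1]
print(df.iloc[1].equals(df.iloc[[1][0]]))  # True
```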
78,992,321 | 2024-9-17 | https://stackoverflow.com/questions/78992321/getting-argument-of-typing-optional-in-python | I would like to create a typed DataFrame from a Pydantic BaseModel class, let's call it MyModel, that has Optional fields. As I create multiple instances of MyModel, some will have Optional fields with None values, and if I initialize a DataFrame with such rows, they may have inconsistent column dtypes. I'd like th... | As long as Optional[X] is equivalent to Union[X, None]: from typing import Union, get_args, get_origin def get_optional_arg(typ: type) -> type | None: # make sure typ is really Optional[...], otherwise return None if get_origin(typ) is Union: args = get_args(typ) if len(args) == 2 and args[1] is type(None): return args... | 2 | 1 |
78,987,693 | 2024-9-15 | https://stackoverflow.com/questions/78987693/polars-how-to-find-out-the-number-of-columns-in-a-polars-expression | I'm building a package on top of Polars, and one of the functions looks like this def func(x: IntoExpr, y: IntoExpr): ... The business logic requires that x can include multiple columns, but y must be a single column. What should I do to check and validate this? | You can use the polars.selectors.expand_selector function which lets you evaluate selected columns using either selectors or simple expressions. Note that the drawback here is that you can’t pass in arbitrary expressions, or else the evaluation fails (see the final examples). import polars as pl import polars.selectors... | 2 | 2 |
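A small sketch of expand_selector in action; the validation idea is to expand the selector against the frame and check how many concrete column names come back:

```python
import polars as pl
import polars.selectors as cs

df = pl.DataFrame({"a": [1], "b": [2.0], "c": ["x"]})

print(cs.expand_selector(df, cs.numeric()))  # ('a', 'b')

# "y must be a single column" becomes a simple length check
y_cols = cs.expand_selector(df, cs.by_name("c"))
if len(y_cols) != 1:
    raise ValueError("y must resolve to exactly one column")
```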
78,993,284 | 2024-9-17 | https://stackoverflow.com/questions/78993284/how-to-improve-pandas-df-processing-time-on-different-combinations-of-calculated | I have a big dataset, something like 100K or 1 million rows, and a function that performs vector calculations taking 0.03 sec. All my columns before the process can be the same for every iteration. I want to calculate the 2^n combinations of conditions I make, so currently it will take me 2^n * 0.03 s to run it a... | I think the best method, without touching your code too much, is to use Polars. When I tested your code it took 7.52 seconds; now it takes 0.45 seconds. import polars as pl import numpy as np from itertools import combinations import time # Generate similar data with Polars data = { 'Height': np.random.uniform(15... | 3 | 3 |
78,992,244 | 2024-9-17 | https://stackoverflow.com/questions/78992244/how-can-i-subclass-logging-logger-without-breaking-filename-in-logging-format | I am trying to write a custom logging.Logger subclass which is mostly working, but I run into issues when trying to use a logging.Formatter that includes the interpolated value %(filename) in the custom format: it prints the filename where my custom subclass is, rather than the filename of the code that called the logg... | This is already answered in https://stackoverflow.com/a/59492341/2138700. The solution is to use the stacklevel keyword argument when calling super().debug in your custom logger code. Here is the relevant section from the documentation: The third optional keyword argument is stacklevel, which defaults to 1. If great... | 4 | 1 |
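A minimal sketch of the stacklevel fix applied to a Logger subclass (class and logger names are illustrative):

```python
import logging

class CustomLogger(logging.Logger):
    def debug(self, msg, *args, **kwargs):
        # stacklevel=2 makes %(filename)s / %(lineno)d point at the
        # caller of this wrapper instead of this module
        kwargs.setdefault("stacklevel", 2)
        super().debug(msg, *args, **kwargs)

logging.setLoggerClass(CustomLogger)
logging.basicConfig(format="%(filename)s:%(lineno)d %(message)s",
                    level=logging.DEBUG)

logger = logging.getLogger("demo")
logger.debug("hello")  # reports the calling file, not the subclass module
```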
78,992,094 | 2024-9-16 | https://stackoverflow.com/questions/78992094/access-class-properties-or-methods-from-within-a-commands-command | I'm building a Discord bot. The bot should store some information into some internal variables to be accessed at a later time. To do so I'm structuring it as a class (as opposed to many examples where the commands are outside a class definition). However, I discovered that when you use the @commands.command(name='test'... | My recommendation is that you avoid defining commands inside your bot class. There is a more appropriate way to do this, which is using cogs/extensions. See this topic where commands are created in a separate file (extension) and only loaded into the bot class: https://stackoverflow.com/a/78166456/14307703 Also know th... | 3 | 2 |
78,992,395 | 2024-9-17 | https://stackoverflow.com/questions/78992395/how-to-pandas-fast-nested-for-loop-for-non-numeric-columns | How can pandas speed up a nested for loop over "non-numeric" columns? This for loop is way too slow: for i in range(len(df1[column_A])): for j in range(len(df2[column_A])): if df1[column_A][i] == df2[column_A][j]: df1[column_B][i] = df2[column_B][j] else: pass Is there any other way to do it with pandas itself or other libraries... | The nested loop you provided results in O(n^2) complexity, making it slow for larger datasets, and looping over the same range for both i and j is unnecessary. Instead you can use pd.merge: import pandas as pd # Merge file1 and file2 on column A merged_df = pd.merge(file1, file2, on='column_A') # assuming it is a pan... | 2 | 2 |
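When the loop is really a lookup-and-assign, Series.map is another vectorized option besides merge; a sketch with small hypothetical frames:

```python
import pandas as pd

df1 = pd.DataFrame({"column_A": ["a", "b", "c"], "column_B": [None] * 3})
df2 = pd.DataFrame({"column_A": ["a", "c"], "column_B": [1, 3]})

# build a lookup Series from df2 and map it onto df1;
# keys missing from df2 simply stay NaN
lookup = df2.set_index("column_A")["column_B"]
df1["column_B"] = df1["column_A"].map(lookup)
print(df1)
```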
78,991,975 | 2024-9-16 | https://stackoverflow.com/questions/78991975/get-an-a-tag-content-using-beautifulsoup | I'd like to get the content of an <a> tag using BeautifulSoup (version 4.12.3) in Python. I have this code and HTML example: h = """ <a id="0"> <table> <thead> <tr> <th scope="col">Person</th> <th scope="col">Most interest in</th> <th scope="col">Age</th> </tr> </thead> <tbody> <tr> <th scope="row">Chris</th> <td>HTML ... | Explicitly specify the parser you want to use (use html.parser). By default it will use the "best" parser available (I presume lxml), which doesn't parse this document well: import bs4 h = """ <a id="0"> <table> <thead> <tr> <th scope="col">Person</th> <th scope="col">Most interest in</th> <th scope="col">Age</th> </tr... | 2 | 1 |
78,984,046 | 2024-9-14 | https://stackoverflow.com/questions/78984046/partialdependencedisplay-from-estimator-plots-having-lines-with-0-values | Need to evaluate the two-way interaction between two variables after a regressor model. Used PartialDependenceDisplay.from_estimator to plot, but the contour lines inside the plot all have value 0. Not sure what might cause this. Checked the data and model and there are no problems while loading the model and data. Checked... | Most probably your contour values are all < 0.005. Contour labels are formatted as "%2.2f" and there appears to be no documented way of changing this format. The only workaround I could think of is to retrieve the labels and their values and replace the label texts: import matplotlib.pyplot as plt from matplotlib.text ... | 3 | 2 |
78,984,405 | 2024-9-14 | https://stackoverflow.com/questions/78984405/find-duplicate-group-of-rows-in-pandas-dataframe | How can I find duplicates of a group of rows inside of a DataFrame? Or in other words, how can I find the indices of a specific duplicated DataFrame inside of a larger DataFrame? The larger DataFrame: index 0 1 0 0 1 1 2 3 2 4 4 3 0 1 4 2 3 5 2 3 6 0 1 The specific duplicated DataFrame (or group o... | You need to check all combinations with a sliding window, using numpy.lib.stride_tricks.sliding_window_view to create a mask and extend the mask with numpy.convolve: import numpy as np from numpy.lib.stride_tricks import sliding_window_view as swv n = len(dup_df) mask = (swv(lrg_df, n, axis=0) == dup_df.to_numpy().T ).... | 2 | 1 |
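For comparison, an equivalent brute-force scan shows what the vectorized sliding-window mask computes (a sketch on small example frames; the answer's NumPy version is the fast path):

```python
import pandas as pd

lrg_df = pd.DataFrame({0: [0, 2, 4, 0, 2, 2, 0], 1: [1, 3, 4, 1, 3, 3, 1]})
dup_df = pd.DataFrame({0: [0, 2], 1: [1, 3]})

n = len(dup_df)
starts = [
    i
    for i in range(len(lrg_df) - n + 1)
    if (lrg_df.iloc[i : i + n].to_numpy() == dup_df.to_numpy()).all()
]
print(starts)  # [0, 3] -- the block appears at rows 0-1 and 3-4
```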
78,991,070 | 2024-9-16 | https://stackoverflow.com/questions/78991070/column-manipulation-based-on-headline-values-within-rows | I have a Pandas dataframe with a column that contains different types of values and I want to create a new column out of it based on the information inside that column. Every few rows there is a kind of "headline" row that should define that values for the following rows until the next headline row that then defines th... | You can use where and ffill (forward fill) without the need for loops: df['AA'].where(df['AA'].str.startswith('V_')).ffill().fillna('') str.startswith to identify rows where AA column starts with 'V_'. where to keep identified headline rows in BB column and set other rows to NaN. ffill to forward fill the last valid ... | 1 | 4 |
78,990,589 | 2024-9-16 | https://stackoverflow.com/questions/78990589/how-to-merge-and-match-different-length-dataframes-lists-in-python-pandas | I have over 12 dataframes that I want to merge into a single dataframe, where row values match for each column (or null if they don't exist). Each dataframe has a different number of rows, but will never repeat values. The goal is to both identify common values and missing values. Eg.df1 id label 1 a-1 2 b-2 3 z-10 Eg... | For multiple dataframes, you can use merge with reduce: from functools import reduce reduce(lambda left, right: pd.merge(left, right, on='label', how='outer'), map(lambda d: d[1].drop(columns='id') .assign(**{ f'df{d[0]}':lambda x: x['label'] }), enumerate(dfs, 1)) ).assign(id=lambda x:range(1, 1+len(x))).drop(columns=... | 3 | 1 |
78,987,052 | 2024-9-15 | https://stackoverflow.com/questions/78987052/simplify-polygon-shapefile-to-reduce-file-size-in-python | I have a polygon shapefile which was vectorized from a raster layer (second image). This shapefile contains thousands of features with just one column, which represents the polygon level (ranging 1-5). The file size is massive, so I was trying to use Shapely's simplify tool to reduce it. The aim is to get to a ... | The problem you are facing is that Shapely at the moment can only simplify geometries one by one. Because of this, gaps and slivers can/will appear between adjacent polygons because different points might be removed on the adjacent borders of the polygons. To avoid this, you need "topology-aware" simplification. This t... | 2 | 1 |
78,989,038 | 2024-9-16 | https://stackoverflow.com/questions/78989038/why-do-imaginary-number-calculations-behave-differently-for-exponents-up-to-100 | The imaginary number i, or j in Python, means the square root of -1. So, i to the 4th or any multiple of 4 should be positive 1. >>> (1j)**4 (1+0j) >>> (1j)**96 (1+0j) >>> (1j)**100 (1+0j) Up until this point all is good, but once we get past 100, Python just bugs out. For example: >>> (1j)**104 (1+7.842691359635767e-... | See the CPython source; for integer exponents up to 100 it uses a different algorithm: complex_pow(PyObject *v, PyObject *w, PyObject *z) { ... // Check whether the exponent has a small integer value, and if so use // a faster and more accurate algorithm. if (b.imag == 0.0 && b.real == floor(b.real) && fabs(b.real) <= 100.0) { p... | 5 | 13 |
78,980,518 | 2024-9-13 | https://stackoverflow.com/questions/78980518/pandas-generate-columns-of-cumsums-based-on-variable-names-in-two-different-colu | I have a dataframe as follows: import pandas import numpy df = pandas.DataFrame( data= {'s1' :numpy.random.choice( ['A', 'B', 'C', 'D', 'E'], size=20 ), 's2' :numpy.random.choice( ['A', 'B', 'C', 'D', 'E'], size=20 ), 'val':numpy.random.randint(low=-1, high=3, size=20)}, ) I want to generate two result columns that p... | What you want is a shifted cumulated sum after flattening the input dataset. Use melt, groupby.transform with shift+cumsum, then restore the original shape with pivot df[['ans1', 'ans2']] = (df .melt('val', ['s1', 's2'], ignore_index=False).sort_index(kind='stable') .assign(S=lambda x: x.groupby('value')['val'].transfo... | 4 | 2 |
78,989,203 | 2024-9-16 | https://stackoverflow.com/questions/78989203/calling-another-py-from-the-converted-exe-file | I just want to ask how I can call another .py from the converted .exe file? I have a main Python code called main_2.py file that acts as the main interface for the program that I created, which in this case, is the one that will be converted into .exe. One of its main functions is to open up another python code called ... | It should be enough to import the functions from the second file, as explained here! After that, auto-py-to-exe will automatically create an exe file with everything included needed to run it! Another way is to add other py scripts as additional files! | 2 | 2 |
78,988,304 | 2024-9-15 | https://stackoverflow.com/questions/78988304/split-on-regex-more-than-a-character-maybe-variable-width-and-keep-the-separa | In GNU awk, there is a four argument version of split that can optionally keep all the separators from the split in a second array. This is useful if you want to reconstruct a select subset of columns from a file where the delimiter may be more complicated than just a single character. Suppose I have the following file... | With splitting you always have one more field than the delimiters, which is why you have to fill in an empty string as the delimiter for the last field. A simpler way to achieve the filling would be to always append an empty string to the list returned by the split so that you can use the itertools.batched function (av... | 10 | 7 |
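A small sketch of the Python approach: re.split with a capturing group keeps the separators, and appending one empty string lets itertools.batched (Python 3.12+; on older versions, zip over slices works) pair every field with its following delimiter:

```python
import re
from itertools import batched  # Python 3.12+

line = "a, b;  c\td"
parts = re.split(r"([,;\s]+)", line)  # the capture group keeps separators
parts.append("")                      # pad: the last field has no separator

# pairs of (field, following_separator), like gawk's seps array
for field, sep in batched(parts, 2):
    print(repr(field), repr(sep))
```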