question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
|---|---|---|---|---|---|---|
79,615,990 | 2025-5-10 | https://stackoverflow.com/questions/79615990/how-to-concatenate-n-rows-of-content-to-current-row-in-a-rolling-window-in-pa | I'm looking to transform a dataframe containing [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]] into [[1, 2, 3, []], [4, 5, 6, [1, 2, 3, 4, 5, 6]], [7, 8, 9, [4, 5, 6, 7, 8, 9]], [10, 11, 12, [7, 8, 9, 10, 11, 12]]] So far the only working solution I've come up with is: import pandas as pd import numpy as np # Create t... | This doesn't need windowing, IIUC, you can use df.shift: x = df.apply(lambda x: x.tolist(), axis=1) df[3] = (x.shift() + x) Output: 0 1 2 3 0 1 2 3 NaN 1 4 5 6 [1, 2, 3, 4, 5, 6] 2 7 8 9 [4, 5, 6, 7, 8, 9] 3 10 11 12 [7, 8, 9, 10, 11, 12] Adding window sizing: import pandas as pd import numpy as np from functools im... | 3 | 1 |
79,617,835 | 2025-5-12 | https://stackoverflow.com/questions/79617835/python-3-13-threading-lock-acquire-vs-lock-acquire-lock | In Python 3.13 (haven't checked lower versions) there seem to be two locking mechanisms for the threading.Lock class. I've looked online but found no mentions of acquire_lock or release_lock and wanted to ask if anyone knows what the difference is between them and the standard acquire and release methods. Here's the th... | Currently, they are just aliases, and according to GitHub history they have been like that for the past 15 years. You shouldn't be using undocumented functions; they can be removed at any time. {"acquire_lock", _PyCFunction_CAST(lock_PyThread_acquire_lock), ... {"acquire", _PyCFunction_CAST(lock_PyThread_acquire_lock)... | 2 | 8 |
79,615,872 | 2025-5-10 | https://stackoverflow.com/questions/79615872/why-is-array-manipulation-in-jax-much-slower | I'm working on converting a transformation-heavy numerical pipeline from NumPy to JAX to take advantage of JIT acceleration. However, I’ve found that some basic operations like broadcast_to and moveaxis are significantly slower in JAX—even without JIT—compared to NumPy, and even for large batch sizes like 3,000,000 whe... | There are a couple things happening here that come from the different execution models of NumPy and JAX. First, NumPy operations like broadcasting, transposing, reshaping, slicing, etc. typically return views of the original buffer. In JAX, it is not possible for two array objects to share memory, and so the equivalent... | 3 | 4 |
79,618,258 | 2025-5-12 | https://stackoverflow.com/questions/79618258/sns-histplot-does-not-fully-show-the-legend-when-setting-the-legend-outside-the | I tried to create a histogram with a legend outside the axes. Here is my code: import pandas as pd import seaborn as sns df_long = pd.DataFrame({ "Category": ["A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D", "A", "B", "C", "D"], "Round": ["Round1", "Round1", "Round1", "Round1", "Round2", "Round2", "Round2", ... | The solution is to use bbox_inches = 'tight' in the plt.savefig() function: import matplotlib.pyplot as plt plt.savefig("histogram.png",bbox_inches='tight') plt.savefig("histogram.pdf", bbox_inches='tight') | 2 | 3 |
79,618,176 | 2025-5-12 | https://stackoverflow.com/questions/79618176/matplotlib-plot-continuous-time-series-of-data | I'm trying to continuously plot data received via network using matplotlib. On the y-axis, I want to plot a particular entity, while the x-axis is the current time. The x-axis should cover a fixed period of time, ending with the current time. Here's my current test code, which simulates the data received via network wi... | Basically there are small errors. For example, don't call ax.plot() in the loop because it adds a new line each time, which is inefficient and causes multiple lines to be drawn. I would suggest using a single Line2D object: create it once, then update its data with set_data() inside your loop. Additionally, use ... | 2 | 3 |
79,615,662 | 2025-5-10 | https://stackoverflow.com/questions/79615662/how-to-replace-all-occurrences-of-a-string-in-python-and-why-str-replace-mi | I want to replace all patterns 0 in a string by 00 in Python. For example, turning: '28 5A 31 34 0 0 0 F0' into '28 5A 31 34 00 00 00 F0'. I tried with str.replace(), but for some reason it misses some "overlapping" patterns: i.e.: $ python3 Python 3.12.3 (main, Feb 4 2025, 14:48:35) [GCC 13.3.0] on linux Type "help", ... | A better tactic would be to not look for spaces around the individual zeros, but to use regex substitution and look for word boundaries (\b): >>> import re >>> re.sub(r'\b0\b', '00', '28 5A 31 34 0 0 0 F0') '28 5A 31 34 00 00 00 F0' This has the added benefit that a 0 at the start or end of the string would get replac... | 2 | 7 |
79,617,903 | 2025-5-12 | https://stackoverflow.com/questions/79617903/renaming-automatic-aggregation-name-for-density-heatmaps-2d-histograms | When creating density heatmaps / 2d histograms, there is an automatic aggregation that can take place, which also sets the name as it appears on the legend. I'm trying to change how that aggregation is displayed on the legend. Consider the following example, taken directly from the plotly docs: import plotly.express as... | Try setting the title.text property of coloraxis_colorbar inside layout. df = px.data.tips() fig = px.density_heatmap(df, x="total_bill", y="tip") fig.update_layout(coloraxis_colorbar=dict( title=dict( text="Number of Bills per Cell") ) ) fig.show() You can also define this in a single line using coloraxis_colorbar_ti... | 2 | 1 |
79,616,857 | 2025-5-11 | https://stackoverflow.com/questions/79616857/desired-frequency-in-discrete-fourier-transform-gets-shifted-by-the-factor-of-in | I have written a Python script to compute the DFT of a simple sine wave with frequency 3. I made the following choices when sampling the sine wave: sin function for test = sin( 2 * pi * 3 * t ) sample_rate = 15 time interval = 1/sample_rate = 1/15 = ~ 0.07 second sample_duration = 1 second (for test1) and... | Your frequency axis is wrong: the lowest frequency on the DFT axis should be 1/N, which translates to the time domain as 1/T. That is, when the total time is 2 seconds, the first point after zero will be at 0.5 Hz, not 1 Hz. The longest sine wave a DFT can represent (the lowest frequency) is a sine wave that does 1 cy... | 2 | 2 |
79,616,449 | 2025-5-11 | https://stackoverflow.com/questions/79616449/how-do-i-do-a-specific-aggregation-on-a-table-based-on-row-column-values-on-anot | I have loaded two fact tables CDI and Population and a couple dimension tables in DuckDB. I did joins on the CDI fact table and its respective dimension tables which yields a snippet of the table below And below is the Population fact table merged with its other dimension tables yielding this snippet below Now what ... | I agree with Chris Maurer's comment; here is a SQL query to achieve what you are looking for: SELECT YearStart, YearEnd, LocationDesc, AgeStart, AgeEnd, Sex, Ethnicity, Origin, Sum(Population) AS TotalPopulation FROM CDI LEFT JOIN Population AS pop ON (pop.Year BETWEEN CDI.YearStart AND CDI.YearEnd) AND (CDI.Sex=pop.Sex... | 2 | 1 |
79,616,550 | 2025-5-11 | https://stackoverflow.com/questions/79616550/selenium-4-25-opens-chrome-136-with-existing-profile-to-new-tab-instead-of-nav | I'm using Python with Selenium 4.25.0 to automate Google Chrome 136. My goal is to have Selenium use my existing, logged-in "Default" Chrome profile to navigate to a specific URL (https://aistudio.google.com/prompts/new_chat) and perform actions. The Problem: When I execute my script: Chrome launches successfully. It c... | The root cause could be that the ChromeDriver (≥ v113 with “Chrome for Testing”) intentionally limits automation on “default” or regular profiles for security and stability. This is reflected in the warning: "DevTools remote debugging requires a non-default data directory" This means: ChromeDriver can't fully contro... | 2 | 1 |
79,616,310 | 2025-5-11 | https://stackoverflow.com/questions/79616310/firebase-admin-taking-an-infinite-time-to-work | I recently started using firebase admin in python. I created this example script: import firebase_admin from firebase_admin import credentials from firebase_admin import firestore cred = credentials.Certificate("./services.json") options = { "databaseURL": 'https://not_revealing_my_url.com' } app = firebase_admin.initi... | Yes, you're on the right track with setting up Firebase Admin in Python. The error you're seeing: grpc._channel._MultiThreadedRendezvous: status = StatusCode.UNAVAILABLE details = "failed to connect to all addresses; last error: UNKNOWN: ipv6:[::1]:8081: tcp handshaker shutdown" strongly suggests that the client is tr... | 1 | 2 |
79,610,568 | 2025-5-7 | https://stackoverflow.com/questions/79610568/store-numpy-array-in-pandas-dataframe | I want to store a numpy array in pandas cell. This does not work: import numpy as np import pandas as pd bnd1 = np.random.rand(74,8) bnd2 = np.random.rand(74,8) df = pd.DataFrame(columns = ["val", "unit"]) df.loc["bnd"] = [bnd1, "N/A"] df.loc["bnd"] = [bnd2, "N/A"] But this does: import numpy as np import pandas as pd... | The issue is that when you try to insert a numpy array into a pandas DataFrame, pandas can't process the data correctly. To fix this, you can use either a pd.Series or a dictionary for better alignment: first way: Using pd.Series: df.loc["bnd"] = pd.Series([bnd2, "N/A"], index=["val", "unit"]) OR second way: Using dic... | 2 | 1 |
79,616,218 | 2025-5-11 | https://stackoverflow.com/questions/79616218/typeerror-sequence-item-0-expected-str-instance-int-found-what-should-i-do-t | matrix1=[[1,2,3,4],[5,6,7,8],[9,10,11,12],[13,14,15,16]] m2="\n".join(["\t".join([ritem for ritem in item]) for item in matrix1]) print(m2) where am i wrong that i receive this error? | The values you try to join with str.join must be strings themselves. You're trying to join ints and this is causing the error you're seeing. You want: m2 = "\n".join(["\t".join([str(ritem) for ritem in item]) for item in matrix1]) Note that you can pass any iterable, and not just a list, so you can remove some extrane... | 1 | 2 |
79,615,098 | 2025-5-10 | https://stackoverflow.com/questions/79615098/is-there-simpler-way-to-get-all-nested-text-inside-of-elementtree | I am currently using the xml.etree Python library to parse HTML. After finding a target DOM element, I am attempting to extract its text. Unfortunately, it seems that the .text attribute is severely limited in its functionality and will only return the immediate inner text of an element (and not anything nested). Do I ... | You can use itertext(), too. If you don't like the whitespace, indentation and line breaks, you can use strip(). import xml.etree.ElementTree as ET html = """<html> <head> <title>Example page</title> </head> <body> <p>Moved to <a href="http://example.org/">example.org</a> or <a href="http://example.com/">example.com</a>.<... | 1 | 1 |
79,615,560 | 2025-5-10 | https://stackoverflow.com/questions/79615560/how-to-select-save-rows-with-multiple-same-value-in-pandas | I have financial data where I need to save / find rows that have multiple same value and a condition where the same value happened more than / = 2 and not (value)equal to 0 or < 1. Say I have this: A B C D E F G H I 5/7/2025 21:00 0 0 0 0 0 0 0 0 5/7/2025 21:15 0 0 19598.8 0 19598.8 0 0 0 5/7/2025 21:30 0 0 0 0 0 0 0 ... | A simple approach could be to select the columns of interest, then identify if any value is duplicated within a row. Then select the matching rows with boolean indexing: mask = df.loc[:, 'B':].T out = df[mask.apply(lambda x: x.duplicated(keep=False)).where(mask >= 1).any()] A potentially more efficient approach could ... | 2 | 0 |
79,615,397 | 2025-5-10 | https://stackoverflow.com/questions/79615397/how-to-locate-elements-simultaneously | By nature, Playwright locator is blocking, so whenever it's trying to locate for an element X, it stops and waits until that element is located or it times out. However, I want to see if it is possible to make it so that it locates two elements at once, and, if either one is found, proceed forward, based on whichever w... | or_ Added in: v1.33 Creates a locator matching all elements that match one or both of the two locators. Note that when both locators match something, the resulting locator will have multiple matches, potentially causing a locator strictness violation. Usage Consider a scenario where you'd like to click a "New email" b... (a runnable sketch follows the table) | 3 | 4 |
79,615,284 | 2025-5-10 | https://stackoverflow.com/questions/79615284/how-to-remove-duplicates-from-this-nested-dataframe | I have a dataframe as below and I want remove the duplicates and want the output as mentioned below. Tried few things but not working as expected. New to pandas. import pandas as pd # Sample DataFrame data = { "some_id": "xxx", "some_email": "abc.xyz@somedomain.com", "This is Sample": [ { "a": "22", "b": "Y", "c": "33"... | The issue you're encountering (e.g., unhashable type: 'dict') happens because dictionaries are mutable and unhashable, so drop_duplicates() doesn't work directly on them. To deduplicate rows where one of the columns contains dictionaries, you can: Convert dictionaries to strings, use drop_duplicates(), then Convert t... | 2 | 1 |
79,614,976 | 2025-5-9 | https://stackoverflow.com/questions/79614976/does-file-obj-close-nicely-close-file-objects-in-other-modules-that-have-been | I have a file main_file.py that creates a global variable file_obj by opening a text file and imports a module imported_module.py which has functions that write to this file and therefore also has a global variable file_obj which I set equal to file_obj in main_file.py: main_file.py import imported_module as im file_ob... | Yes. The two variables refer to the same file object. Closing either closes the object itself, it doesn't matter which variable you use to refer to it. This is no different from having two variable referring to the same list, a modification of one is visible through the other: a = [1, 2, 3] b = a a.append(4) print(b) ... | 1 | 1 |
79,613,844 | 2025-5-9 | https://stackoverflow.com/questions/79613844/tkinter-widget-not-appearing-on-form | I’m having trouble working out why a widget doesn’t appear on my tkinter form. Here is what I’m doing: Create a form Create a widget (a label) with the form as the master. Create a Notebook and Frame and add them to the form. Create additional widgets with the form as the master. Add the widgets to the form using grid... | The behavior has to do with stacking order. Widgets created before the notebook are lower in the stacking order; in effect, the label is behind the notebook. As you correctly observed, a row has been allocated, but since the widget is behind the notebook it isn't visible. You can make it appear by calling lift on... | 1 | 3 |
79,614,850 | 2025-5-9 | https://stackoverflow.com/questions/79614850/how-to-replace-string-values-in-a-strict-way-in-polars | I'm working with a Polars DataFrame that contains a column with string values. I aim to replace specific values in this column using the str.replace_many() method. My dataframe: import polars as pl df = (pl.DataFrame({"Products": ["cell_foo","cell_fooFlex","cell_fooPro"]})) Current approach: mapping= { "cell_foo" : "c... | The top-level Expr.replace() and .replace_strict() are for replacing entire "values". df.with_columns(pl.col("Products").replace(mapping).alias("Replaced")) shape: (3, 2) ┌──────────────┬──────────┐ │ Products ┆ Replaced │ │ --- ┆ --- │ │ str ┆ str │ ╞══════════════╪══════════╡ │ cell_foo ┆ cell │ │ cell_fooFlex ┆ cel... | 3 | 1 |
79,614,700 | 2025-5-9 | https://stackoverflow.com/questions/79614700/how-to-display-years-on-the-the-y-axis-of-horizontal-bar-chart-subplot-when-th | I'm plotting date vs frequency horizontal bar charts that compares the monthly distribution pattern over time for a selection of crimes as subplots. The problem is the tick labels of the y-axis, which represents the date, display all the months over period of 2006-2023. I want to instead display the year whilst preserv... | pandas makes the assumption that the major axis of a bar-chart is always categorical, and therefore converts your values to strings prior to plotting. This means that it forces matplotlib to render a label for every bar you have. The way to do this with minimal changes to your code would be to manually override the yti... | 1 | 0 |
79,614,770 | 2025-5-9 | https://stackoverflow.com/questions/79614770/how-can-i-get-all-thing-names-from-a-thing-group-in-aws-iot-core-using-a-lambda | I'm trying to get all the thing names that are part of a specific thing group in AWS IoT Core using a Python Lambda function. I checked the Boto3 documentation looking for a function that retrieves the names of things inside a specific thing group, but I couldn't find anything that does exactly that. Is there a way to ... | You can use the BOTO3 client to retrieve IoT things in a thing group. Here is the Python code. You need to use this Python code in an AWS Lambda function to address your use case. For additional AWS code examples, refer to the AWS Code Library -- where you will find thousands of examples in various SDKs, CLI, etc. impo... | 1 | 0 |
79,609,220 | 2025-5-6 | https://stackoverflow.com/questions/79609220/documenting-a-script-step-by-step-with-sphinx | I am documenting a python library with Sphinx. I have a couple of example scripts which I'd like to document in a narrative way, something like this: #: Import necessary package and define :meth:`make_grid` import numpy as np def make_grid(a,b): """ Make a grid for constant by piece functions """ x = np.linspace(0,np.p... | Take a look at the sphinx-gallery extension, which seems to do what you require. With this extension, if you have a Python script, you must start it with a header docstring, and then you can add comments that will be formatted as text rather than code using the # %% syntax, e.g., """ My example script. """ import numpy... | 7 | 3 |
79,614,033 | 2025-5-9 | https://stackoverflow.com/questions/79614033/what-explains-pattern-matching-in-python-not-matching-for-0-0-but-matching-for | I would like to understand how pattern matching works in Python. I know that I can match a value like so: >>> t = 12.0 >>> match t: ... case 13.0: ... print("13") ... case 12.0: ... print("12") ... 12 But I notice that when I use matching with a type like float(), it matches 12.0: >>> t = 12.0 >>> match t: ... case fl... | The thing that follows the case keyword is not an expression, but special syntax called a pattern. 0.0 is a literal pattern. It checks equality with 0.0. float() is a class pattern. It checks that the type is float. Since it is not an expression, it isn't evaluated and therefore is different from 0.0. (A short demo follows the table.) | 14 | 20 |
79,610,653 | 2025-5-7 | https://stackoverflow.com/questions/79610653/python-pynput-the-time-module-do-not-seem-to-work-together-in-a-loop | So I have written this Python script to vote repeatedly (It's allowed) for a friend on a show at a local TV station. import os import time from pynput.keyboard import Key, Controller os.system("open -a Messages") time.sleep(3) keyboard = Controller() for i in range(50): keyboard.type("Example Message") print("Message t... | Solved by user @furas in the comments: keyboard.press() keeps the button pressed, so the code needed keyboard.release() to avoid the initial keyboard.press() being held down for the rest of the loop iterations. | 2 | 3 |
79,613,425 | 2025-5-9 | https://stackoverflow.com/questions/79613425/get-media-created-timestamp-with-python-for-mp4-and-m4a-video-audio-files | Trying to get "Media created" timestamp and insert as the "Last modified date" with python for .mp4 and .m4a video, audio files (no EXIF). The "Media created" timestamp shows up and correctly in Windows with right click file inspection, but I can not get it with python. What am I doing wrong? (This is also a working fi... | As described here, the "Media created" value is not filesystem metadata. It's accessible in the API as a Windows Property. You can use os.utime to set "Media created" timestamp as the "Last modified date". Like import pytz import datetime import os from win32com.propsys import propsys, pscon file = 'path/to/your/file' ... | 1 | 2 |
79,613,107 | 2025-5-8 | https://stackoverflow.com/questions/79613107/pyspark-udf-mapping-is-returning-empty-columns | Given a dataframe, I want to apply a mapping with UDF but getting empty columns. data = [(1, 3), (2, 3), (3, 5), (4, 10), (5, 20)] df = spark.createDataFrame(data, ["int_1", "int_2"]) df.show() +-----+-----+ |int_1|int_2| +-----+-----+ | 1| 3| | 2| 3| | 3| 5| | 4| 10| | 5| 20| +-----+-----+ I have a mapping: def test_... | Your problem is that your UDF is registered to return an integer (defined to return an IntegerType()) while your Python function intends to return a string ("low" or "high"), so what you need to do is to set StringType() in your UDF return type: test_udf = F.udf(test_map, StringType()) Let me know if you want more exp... | 1 | 2 |
79,613,039 | 2025-5-8 | https://stackoverflow.com/questions/79613039/assign-a-number-for-every-matching-value-in-list | I have a long list of items that I want to assign a number to that increases by one every time the value in the list changes. Basically I want to categorize the values in the list. It can be assumed that the values in the list are always lumped together, but I don't know the number of instances it's repeating. The list... | Use pandas.factorize, and add 1 if you need the category numbers to start with 1 instead of 0: import pandas as pd my_list = ['Apple', 'Apple', 'Orange', 'Orange','Orange','Banana'] grouping = pd.DataFrame(my_list, columns=['List']) grouping['code'] = pd.factorize(grouping['List'])[0] + 1 print(grouping) Output: List... | 4 | 9 |
79,612,757 | 2025-5-8 | https://stackoverflow.com/questions/79612757/scipys-wrappedcauchy-function-wrong | I'd like someone to check my understanding on the wrapped cauchy function in Scipy... From Wikipedia "a wrapped Cauchy distribution is a wrapped probability distribution that results from the "wrapping" of the Cauchy distribution around the unit circle." It's similar to the Von Mises distribution in that way. I use the... | Is there a way to correctly "wrap" the outputs of the Scipy call? You can use the modulo operator. The operation number % x wraps all output to the range [0, x). If you want the range to begin at a value other than 0, you can add and subtract a constant before and after the modulo operation to center it somewhere el... (a worked example follows the table) | 3 | 3 |
79,612,625 | 2025-5-8 | https://stackoverflow.com/questions/79612625/underlining-fails-in-matplotlib | My matplotlib.__version__ is 3.10.1. I'm trying to underline some text and can not get it to work. As far as I can tell, Latex is installed and accessible in my system: import subprocess result = subprocess.run( ["pdflatex", "--version"], check=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) print(result.stdou... | As you have correctly found, \underline is not a currently supported MathText command. But matplotlib's MathText is not the same as LaTeX. To instead use LaTeX, you can do, e.g., import matplotlib.pyplot as plt # turn on use of LaTeX rather than MathText plt.rcParams["text.usetex"] = True plt.text(.5, .5, r'Some $\und... | 1 | 4 |
79,612,007 | 2025-5-8 | https://stackoverflow.com/questions/79612007/undefined-reference-to-py-initialize-when-build-a-simple-demo-c-on-a-linux-con | I am testing of running a Python thread in a c program with a simple example like the below # demo.py import time for i in range(1, 101): print(i) time.sleep(0.1) // demo.c #include <Python.h> #include <pthread.h> #include <stdio.h> void *run_python_script(void *arg) { Py_Initialize(); if (!Py_IsInitialized()) { fprin... | You need to pass --embed to python3-config because you are embedding a Python interpreter in your program. Observe the difference: $ python3-config --ldflags -L/usr/lib/python3.10/config-3.10-x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -lcrypt -ldl -lm -lm $ python3-config --embed --ldflags -L/usr/lib/python3.10/confi... | 1 | 3 |
79,611,667 | 2025-5-8 | https://stackoverflow.com/questions/79611667/how-do-i-handle-sigterm-inside-python-async-methods | Based on this code, I'm trying to catch SIGINT and SIGTERM. It works perfectly for SIGINT: I see it enter the signal handler, then my tasks do their cleanup before the whole program exits. On SIGTERM, though, the program simply exits immediately. My code is a bit of a hybrid of the two examples from the link above, as ... | That gist is very old, and asyncio/python has evolved since. Your code sort of works, but the way it's designed, the signal handling will create two coroutines, one of which will not be awaited when the other signal is received. This is because the coroutines are eagerly created, but they're only launched (ensure_futu... | 1 | 1 |
79,611,884 | 2025-5-8 | https://stackoverflow.com/questions/79611884/how-to-pass-a-dynamic-list-of-csv-files-from-snakemake-input-to-a-pandas-datafra | I'm working on a Snakemake workflow where I need to combine multiple CSV files into a single Pandas DataFrame. The list of CSV files is dynamic—it depends on upstream rules and wildcard patterns. Here's a simplified version of what I have in my Snakefile: rule combine_tables: input: expand("results/{sample}/data.csv", ... | rule combine_tables: input: # Static sample list (use checkpoints if dynamically generated) expand("results/{sample}/data.csv", sample=SAMPLES) output: "results/combined/all_data.csv" run: import pandas as pd dfs = [] missing_files = [] corrupt_files = [] # Process files in consistent order for file_path in sorted(inpu... | 2 | 1 |
79,611,544 | 2025-5-8 | https://stackoverflow.com/questions/79611544/multiprocessing-with-scipy-optimize | Question: Does scipy.optimize have minimizing functions that can divide their workload among multiple processes to save time? If so, where can I find the documentation? I've looked a fair amount online, including here, for answers: Scipy's optimization incompatible with Multiprocessing? Parallel optimizations in SciPy... | With respect to scipy.optimize.differential_evolution, it does seem to offer multiprocessing through multiprocessing.Pool via the optional "workers" call parameter, according to the official documentation at https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.differential_evolution.html#scipy.optimize.d... | 2 | 2 |
79,610,188 | 2025-5-7 | https://stackoverflow.com/questions/79610188/how-should-i-take-a-matrix-from-input | As we know, input() returns a string. How can I take a matrix like ([[1,2],[3,4]]) from input() and have it as a normal 2D list so I can do something with it? It should work like this: data = input([[1,2],[3,4]]) print(data) output : [[1,2],[3,4]] I tried data = list(input()) but it was completely wrong. | Using AST You can use ast Literals to parse your input string into a list. import ast raw_input = input("Enter the matrix (e.g., [[1,2],[3,4]]): ") # Parse the input string as a list matrix = ast.literal_eval(raw_input) Using numpy In order to use numpy you would have to enter the matrix in a slightly different format... | 2 | 1 |
79,609,709 | 2025-5-7 | https://stackoverflow.com/questions/79609709/how-to-adjust-size-of-violin-plot-based-on-number-of-hues-available-for-each-cat | I need to create a violin plot based on two categories. But, some of the combination of categories are not available in the data. So it creates a white space, when i try to make the plot. I remember long ago i was able to adjust the size of the violins when the categories were not available in r using geom_violin(posit... | I'm not aware of a way to do this automatically, but you can easily overlay several violinplots, manually synchronizing the hue colors. An efficient way would be to use groupby to split the groups per number of "hues" per X-axis category, and loop over the categories. Then manually create a legend: # for reproducibilit... | 2 | 1 |
79,608,184 | 2025-5-6 | https://stackoverflow.com/questions/79608184/wrong-column-assignment-with-np-genfromtxt-if-passed-column-order-is-not-the-sam | This problem appeared in some larger code but I will give a simple example: from io import StringIO import numpy as np example_data = "A B\na b\na b" data1 = np.genfromtxt(StringIO(example_data), usecols=["A", "B"], names=True, dtype=None) print(data1["A"], data1["B"]) # ['a' 'a'] ['b' 'b'] which is correct data2 = np.ge... | In [62]: text = "A B\na b\na b".splitlines() In [63]: np.genfromtxt(text,dtype=None, usecols=[1,0],names=True) Out[63]: array([('b', 'a'), ('b', 'a')], dtype=[('A', '<U1'), ('B', '<U1')]) In [64]: np.genfromtxt(text3,dtype=None, usecols=[1,0]) Out[64]: array([['B', 'A'], ['b', 'a'], ['b', 'a']], dtype='<U1') So it uses ... | 1 | 1 |
79,609,245 | 2025-5-6 | https://stackoverflow.com/questions/79609245/polars-unusual-query-plan-for-lazyframe-custom-function-apply-takes-extremely-l | I have a spacy nlp function nlp(<string>).vector that I need to apply to a string column in a dataframe. This function takes on average 13 milliseconds to return. The function returns a ndarray that contains 300 Float64s. I need to expand these Floats to their own columns. This is the sketchy way I've done this: import... | It's due to how expression expansion works. The expression level unnest expands into multiple expressions (one for each field) pl.col("x").struct.unnest() Would turn into pl.col("x").struct.field("a") pl.col("x").struct.field("b") pl.col("x").struct.field("c") Normally you don't notice as Polars caches expressions (C... | 2 | 1 |
79,608,280 | 2025-5-6 | https://stackoverflow.com/questions/79608280/cannot-read-files-list-from-a-specific-channel-from-slack-using-python | I used to have a working python function to fetch files from a specific Slack channel, but that stopped working a few months ago. I tested the same request to the slack API (files.list) using Postman which does give me an array with a number of files. The following code used to work but no longer does: import requests ... | I think it has something to do with the content type you send with requests.post. Have you tried using json=requestData instead of data=requestData? Even though the content type is correctly set in your headers, requests.post might still send "data" as a dictionary; this is maybe why the Slack API is ignoring yo... | 1 | 1 |
79,608,369 | 2025-5-6 | https://stackoverflow.com/questions/79608369/bars-not-fitting-to-x-axis-ticks-in-a-seaborn-distplot | I do generate that figure with seaborn.distplot(). My problem is that the ticks on the X-axis do not fit to the bars, in all cases. I would expect a relationship between bars and ticks like you can see at 11 and 15. This is the MWE import numpy as np import pandas as pd import seaborn as sns # Data np.random.seed(42) ... | You need discrete=True to tell seaborn that the x values are discrete. Adding shrink=0.8 will leave some space between the bars. import numpy as np import pandas as pd import seaborn as sns from matplotlib import pyplot as plt # Data np.random.seed(42) n = 5000 df = pd.DataFrame({ 'PERSON': np.random.randint(100000, 99... | 3 | 4 |
79,619,027 | 2025-5-13 | https://stackoverflow.com/questions/79619027/why-do-results-from-adjustable-quadratic-volterra-filter-mapping-not-enhance-dar | Based on this paper Adjustable quadratic filters for image enhancement, Reinhard Bernstein, Michael Moore and Sanjit Mitra, 1997, I am trying to reproduce the image enhancement results. I followed the described steps, including implementing the nonlinear mapping functions (e.g., f_map_2 = x^2) and applying the 2D Teage... | I think I found your error. In enhance_image() where you compose the final image, i.e. enhanced = np.clip(img_norm + alpha * teager_output, 0, 1) you accidentally use your normalized image img_norm instead of the mapped image mapped_img. Replacing this line by enhanced = np.clip(mapped_img + alpha * teager_output, 0, 1... | 1 | 1 |
79,620,550 | 2025-5-13 | https://stackoverflow.com/questions/79620550/python-global-variable-changes-depending-on-how-script-is-run | I have a short example python script that I'm calling glbltest.py: a = [] def fun(): global a a = [20,30,40] print("before ",a) fun() print("after ",a) If I run it from the command line, I get what I expect: $ python glbltest.py before [] after [20, 30, 40] I open a python shell and run it by importing, and I get bas... | global a refers to the name a in the glbltest module's namespace. When you set a by hand, it refers to the name a in the __main__ module's namespace. When you use from glbltest import * the names in the module are imported into the __main__ module's namespace. Those are different names but refer to the same objects. Wh... (a self-contained sketch follows the table) | 1 | 3 |
79,620,294 | 2025-5-13 | https://stackoverflow.com/questions/79620294/how-can-i-share-one-requests-session-across-all-flask-routes-and-close-it-cleanl | I’m building a small Flask 3.0 / Python 3.12 micro-service that calls an external REST API on almost every request Right now each route makes a new requests.Session which is slow and leaks sockets under load from flask import Flask, jsonify import requests app = Flask(__name__) @app.get("/info") def info(): with reques... | Use serving-lifecycle hooks @app.before_serving – runs once per worker, right before the first request is accepted. @app.after_serving – runs once on a clean shutdown Create the requests.Session in the first hook, stash it on the application object and close it in the second. | 1 | 1 |
79,620,333 | 2025-5-13 | https://stackoverflow.com/questions/79620333/insert-new-column-of-blanks-into-an-existing-dataframe | I have an existing dataframe: data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) output: 0 1 0 5011025 234 1 5012025 937 2 5013025 625 What I need to do is insert a new column at 0 (the same # of rows) that contains 3 spaces. Recreating the dataframe, from scratch, it would be something ... | Based on your comment, you could shift all cols up one and add a col 0 like this: import pandas as pd data = [[5011025, 234], [5012025, 937], [5013025, 625]] df = pd.DataFrame(data) df.columns = df.columns + 1 df[0] = ' ' df = df.sort_index(axis=1) | 2 | 2 |
79,620,088 | 2025-5-13 | https://stackoverflow.com/questions/79620088/how-can-i-make-a-simple-idempotent-post-endpoint-in-a-flask-micro-service | I'm building a small internal micro-service in Python 3.12 / Flask 3.0. The service accepts POST /upload requests that insert a record into PostgreSQL. Problem Mobile clients sometimes retry the request when the network is flaky, so I end up with duplicate rows: @app.post("/upload") def upload(): payload = request.get_... | Give the table a uniqueness guarantee so duplicates physically can't happen. Use an UPSERT (INSERT … ON CONFLICT) with RETURNING so you know whether the row was really inserted. Map that to HTTP status codes. (A sketch of how these steps fit together follows the table.) | 1 | 1 |
79,619,950 | 2025-5-13 | https://stackoverflow.com/questions/79619950/is-there-a-way-to-filter-columns-of-a-pandas-dataframe-which-include-elements-of | In the below dataframe I would like to filter the columns based on a list called 'animals' to select all the columns that include the list elements. animal_data = { "date": ["2023-01-22","2023-11-16","2024-06-30","2024-08-16","2025-01-22"], "cats_fostered": [1,2,3,4,5], "cats_adopted":[1,2,3,4,5], "dogs_fostered":[1,2,... | The issue with both attempts is that you are looking for a substring of the column names. Except for the date column, there is no full match between the strings in the animals list and the actual column names. One possibility is to construct the regex with .join if using .filter, or a "manual" list comp... | 1 | 0 |
79,619,717 | 2025-5-13 | https://stackoverflow.com/questions/79619717/how-to-count-consecutive-increases-in-a-1d-array | I have a 1d numpy array It's mostly decreasing, but it increases in a few places I'm interested in the places where it increases in several consecutive elements, and how many consecutive elements it increases for in each case In other words, I'm interested in the lengths of increasing contiguous sub-arrays I'd like to... | Here is another solution: import numpy as np def count_consecutive_increases(y: np.ndarray) -> np.ndarray: increases = np.diff(y, prepend=y[0]) > 0 all_summed = np.cumsum(increases) return all_summed - np.maximum.accumulate(all_summed * ~increases) y = np.array([9, 8, 7, 9, 6, 5, 6, 7, 8, 4, 3, 1, 2, 3, 0]) c = count_c... | 3 | 3 |
79,619,760 | 2025-5-13 | https://stackoverflow.com/questions/79619760/polars-list-eval-difference-between-pl-element-and-pl-all | the Polars user guide on Lists and Arrays explains how to manipulate Lists with common expression syntax using .list.eval(), i.e. how to operate on list elements. More specifically, the user guide states: The function eval gives us access to the list elements and pl.element refers to each individual element, but we ca... | The method pl.all() called without arguments refers to all columns available in the context. It does not have a special meaning within list.eval(), but since the only column available inside it is the column of list elements, it works the same as pl.element(). You could also get the same behavior using either pl.col(''), a ... (a quick check follows the table) | 2 | 1 |
79,619,061 | 2025-5-13 | https://stackoverflow.com/questions/79619061/replacing-values-in-columns-with-values-from-another-columns-according-to-mappin | I have this kind of dataframe: df = pd.DataFrame({ "A1": [1, 11, 111], "A2": [2, 22, 222], "A3": [3, 33, 333], "A4": [4, 44, 444], "A5": [5, 55, 555] }) A1 A2 A3 A4 A5 0 1 2 3 4 5 1 11 22 33 44 55 2 111 222 333 444 555 and this kind of mapping: mapping = { "A1": ["A2", "A3"], "A4": ["A5"] } which means that I want al... | You could rework the dictionary and use assign: out = df.assign(**{col: df.get(k) for k, v in mapping.items() for col in v}) NB. assign is not in place, either use this in chained commands, or reassign to df. Or you could reindex and rename/set_axis: dic = {v: k for k, l in mapping.items() for v in l} out = (df.reinde... | 5 | 4 |
79,618,775 | 2025-5-13 | https://stackoverflow.com/questions/79618775/how-to-add-new-feature-to-torch-geometric-data-object | I am using the Zinc graph dataset via torch geometric which I access as zinc_dataset = ZINC(root='my_path', split='train') Each data element is a graph zinc_dataset[0] looks like Data(x=[33, 1], edge_index=[2, 72], edge_attr=[72], y=[1]) I have computed a tensor valued feature for each graph in the dataset. I have st... | To add your list of new features (e.g. List[Tensor], with each tensor corresponding to a graph in the dataset) to each torch_geometric.data.Data object in a Dataset like ZINC, you can do this by simply assigning your new tensor as an attribute of each Data object. Here’s how you can do it step-by-step: import torch from ... | 2 | 2 |
79,621,854 | 2025-5-14 | https://stackoverflow.com/questions/79621854/compute-cumulative-mean-std-on-polars-dataframe-using-over | I want to compute the cumulative mean & std on a polars dataframe column. For the mean I tried this: import polars as pl df = pl.DataFrame({ 'value': [4, 6, 8, 11, 5, 6, 8, 15], 'class': ['A', 'A', 'B', 'A', 'B', 'A', 'B', 'B'] }) df.with_columns(cum_mean=pl.col('value').cum_sum().over('class') / pl.int_range(pl.len())... | I might have a solution which is more clean. You can get to it using rolling-functions like rolling_mean or rolling_std. Here is my proposal: df.with_columns( cum_mean=pl.col('value').cum_sum().over('class')/pl.col('value').cum_count().over('class'), cum_mean_by_rolling=pl.col('value').rolling_mean(window_size=df.shape... | 2 | 2 |
79,620,883 | 2025-5-14 | https://stackoverflow.com/questions/79620883/how-do-i-repeat-one-dataframe-to-match-the-length-of-another-dataframe | I want to combine two DataFrames of unequal length to a new DataFrame with the size of the larger one. Now, specifically, I want to pad the values of the shorter array by repeating it until it is large enough. I know this is possible for lists using itertools.cycle as follows: from itertools import cycle x = range(7) y... | If you want to combine the two DataFrames to obtain an output DataFrame of the length of the longest input with repetitions of the smallest input that restart like itertools.cycle, you could compute a common key (with numpy.arange and the modulo (%) operator) to perform a merge: out = (df1.merge(df2, left_on=np.arange(... | 1 | 1 |
79,620,845 | 2025-5-14 | https://stackoverflow.com/questions/79620845/how-is-np-repeat-so-fast | I am implementing the Poisson bootstrap in Rust and wanted to benchmark my repeat function against numpy's. Briefly, repeat takes in two arguments, data and weight, and repeats each element of data by the weight, e.g. [1, 2, 3], [1, 2, 0] -> [1, 2, 2]. My naive version was around 4.5x slower than np.repeat. pub fn repe... | TL;DR: the gap is certainly due to the use of wider loads/stores in Numpy than your Rust code, and you should avoid indexing if you can for sake of performance. Performance of the Numpy code VS your Rust code First of all, we can analyse the assembly code generated from your Rust code (I am not very familiar with Rust... | 4 | 6 |
79,618,810 | 2025-5-13 | https://stackoverflow.com/questions/79618810/fielderror-at-chat-search-unsupported-lookup-groupchat-name-for-charfield-or | I'm trying to be able to search chat groups by looking up the chatroom name. I'm using Django Q query... models.py class ChatGroup(models.Model): group_name = models.CharField(max_length=128, unique=True, default=shortuuid.uuid) groupchat_name = models.CharField(max_length=128, null=True, blank=True) picture = models.I... | According to the OP in a comment: using a class based view was triggering a query when I opened the page. I had to create a new page with just the input query then use the query results on a separate page | 1 | 0 |
79,622,589 | 2025-5-15 | https://stackoverflow.com/questions/79622589/ndb-python-error-returning-object-has-no-attribute-connection-from-host | I have the code below which is built on top of ndb. When running I receive the two errors below. Can I ask for some guidance, specifically what is the connection_from_host referring to? import flask import config import util app = flask.Flask(__name__) from google.appengine.api import app_identity from google.appengine... | I think the presence of requests_toolbelt dependency in your project caused the issue. It may have forced the requests library to use Google App Engine’s URLFetch service (urllib3), which requires URLFetch to be present. I think that was often necessary in the Python 2 runtime environment on GAE, but not in Python 3. Y... | 1 | 1 |
79,624,185 | 2025-5-15 | https://stackoverflow.com/questions/79624185/how-to-substitute-variable-value-within-another-string-variable-in-python | I have HTML template in database column that is shared with another platform. The HTML template has placeholder variables. This value is pulled from DB in my Python script but for some reason, it is not replacing the placeholder variables with it's value. Here is what the HTML string that is in DB. <html> Dear {FullNa... | I think you just need to call the .format() string method on mail_body, and pass in the value of FullName. You can either do it at the end when you call send_mail(): mailbody=mail_body.format(FullName=FullName) Or you can do it when you first read mail_body from the database: mail_body = f''' {data.Value} '''.format(F... | 1 | 1 |
79,624,117 | 2025-5-15 | https://stackoverflow.com/questions/79624117/wrap-class-method-with-arguments-only-once | There are two classes and I want to wrap item.foo method only once, to prevent cache = {param1: 'param_value'} being reinited class Foo: _count = 0 def foo(self, param2): self._count += param2 class Bar: _collection = [Foo(), Foo(), Foo()] def bar(self, param1, param2): for item in self._collection: wrapped_function = ... | It sounds like you want to wrap Foo.foo, not item.foo (which is a different bound method for each item). Something like class Bar: _collection = [Foo(), Foo(), Foo()] def bar(self, param1, param2): wrapped_function = wrapper(Foo.foo, param1) for item in self._collection: wrapped_function(item, param2) It's more c... | 2 | 2 |
79,623,642 | 2025-5-15 | https://stackoverflow.com/questions/79623642/python-threading-tkinter-event-set-doesnt-terminate-thread-if-bound-to-tk | I'm writing an app that generates a live histogram to be displayed in a Tkinter window. This is more or less how the app works: A Histogram class is responsible for generating the embedded histogram inside the Tk window, collecting data and update the histogram accordingly. There is a 'Start' button that creates a thr... | The issue stems from a combination of thread synchronization problems and blocking behavior in the main (GUI) thread during window closure. The primary flaw is that your kill() method uses a non-thread-safe busy-wait loop (while not self.stopped) to monitor the background thread's state. This introduces three critical ... | 2 | 1 |
79,623,174 | 2025-5-15 | https://stackoverflow.com/questions/79623174/calculating-a-pct-change-between-3-values-in-a-pandas-series-where-one-of-more | Scenario: I have a pandas series that contains 3 values. These values can vary between nan, 0 and any value above zero. I am trying to get the pct_change among the series whenever possible. Examples: [0,nan,50] [0,0,0] [0,0,50] [nan,nan,50] [nan,nan,0] [0,0,nan] [0,nan,0] What I tried: from other SO questions I was ab... | I think you could dropna, then compute the pct_change and only keep the max finite value: series_test.dropna().pct_change().loc[np.isfinite].max() Or maybe: s.pct_change().where(np.isfinite, 0).max() Example output for the second approach: [0, nan, 50] - 0.0 [0, 0, 0] - 0.0 [0, 0, 50] - 0.0 [nan, nan, 50] - 0.0 [nan,... | 1 | 2 |
79,622,579 | 2025-5-15 | https://stackoverflow.com/questions/79622579/type-annotate-decorator-that-changes-decorated-function-arguments | I want to design a decorator that will allow the wrapped method to take a float or a numpy array of floats. If the passed argument was a float then a float should be returned and if it was a numpy array then a numpy array should be returned. Below is my MWE and latest attempt. I am using VSCode with pylance version v20... | Define a protocol with overloaded signatures for your wrapper and then use that as the return type of your decorator: import numpy as np from numpy import float64 from numpy.typing import NDArray from typing import Callable, Protocol, overload class AsFloatOrArray(Protocol): @overload def __call__(self, arg: float) -> ... | 1 | 0 |
79,622,744 | 2025-5-15 | https://stackoverflow.com/questions/79622744/can-not-find-shadow-root-using-selenium | i try to find a shadow root on a website and clicking a button using the following code: import time from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.... | '//bahf-cd-modal[@class="modal-open sc-bahf-cd-modal-h sc-bahf-cd-modal-s hydrated"]' is for an element that is inside the shadow root, you need to locate an element that contains the shadow root shadow_root = driver.find_element(By.TAG_NAME, 'bahf-cookie-disclaimer-dpl3').shadow_root shadow_root.find_element(By.CSS_SE... | 1 | 2 |
79,622,540 | 2025-5-15 | https://stackoverflow.com/questions/79622540/how-to-send-receive-binary-data-to-an-web-application-in-python | I am learning web development, I have some experience in python scripting. This time, I wanted to create an api in python, so review fast api docs. I have the following set up (example contrived for the purpose of this post). Based on the examples in fast api site, I created following python code, which has two methods... | You're on the right track with FastAPI and Pydantic, but binary file uploads via curl -F (i.e., multipart/form-data) don't get automatically mapped to a Pydantic model like FileContent. When dealing with file uploads, FastAPI provides a special way to handle binary data using UploadFile, not bytes directly in a Pydanti... | 1 | 1 |
79,624,459 | 2025-5-16 | https://stackoverflow.com/questions/79624459/merge-dataframes-with-repeated-ids | I have 2 dataframes, dfA & dfB. dfA contains purchases of certain products & dfB contains info on said products. For instance, dfA: purchaseID productID quantity 1 432 1 2 432 4 3 567 7 and dfB: productID name 432 'mower' 567 'cat' I wish to merge the two datasets on productID to produce somet... | You can use merge along with the how parameter: pd.merge(dfA, dfB, on='productID', how='inner') | 1 | 1 |
79,626,356 | 2025-5-17 | https://stackoverflow.com/questions/79626356/split-a-column-of-string-into-list-of-list | How could I split a column of string into list of list? Minimum example: df = pl.DataFrame({'test': "A,B,C,1\nD,E,F,2\nG,H,I,3\nJ,K,L,4"}) I try the following, somehow I stop after the first split df = df.with_columns(pl.col('test').str.split('\n')) My desired result would be it return a list of list inside the dataf... | df.with_columns( pl.col('test') .str.split('\n') .list.eval( pl.element() .str.split(",") ) ) In your example you have a list of mixed strings and numbers which polars doesn't support so your output has to have the numbers as strings. You say you want to use these lists from other columns readily so you might want to ... | 3 | 2 |
79,626,384 | 2025-5-17 | https://stackoverflow.com/questions/79626384/scipy-bpoly-from-derivatives-compared-with-numpy | I implemented this comparison between numpy and scipy for doing the same function interpolation. The results show how numpy crushes scipy. Python version: 3.11.7 NumPy version: 2.1.3 SciPy version: 1.15.2 Custom NumPy interpolation matches SciPy BPoly for 10 000 points. SciPy coeff time: 0.218046 s SciPy eval time : 0.... | Analysis of the coefficient computations Regarding BPoly.from_derivatives, Scipy uses a generic code with some (slow) pure-Python one inside and several calls to Numpy each ones having a small overhead. The Numpy functions used are also generic so the code is sub-optimal. For example, a low-level profiler on my Linux m... | 3 | 2 |
79,626,166 | 2025-5-17 | https://stackoverflow.com/questions/79626166/why-cv-convexitydefects-fails-in-this-example-is-it-a-bug | This script: import numpy as np, cv2 as cv contour = np.array([[0, 0], [1, 0], [1, 1], [0.5, 0.2], [0, 0]], dtype='f4') hull = cv.convexHull(contour, returnPoints=False) defects = cv.convexityDefects(contour, hull) fails and produces this error message: File "/home/paul/upwork/pickleball/code/so-65-ocv-convexity-defe... | I couldn't find confirmation for that in the documentation, but convexityDefects expects the points coordinates in contour to be int32 rather than floating points. The following code works as expected: import cv2 as cv import numpy as np contour = np.array([[0, 0], [10, 0], [10, 10], [5, 2], [0, 0]], dtype=np.int32) hu... | 1 | 1 |
79,627,506 | 2025-5-18 | https://stackoverflow.com/questions/79627506/turtle-window-automatically-closes-after-opening-the-python-script | When I double-click on this Python script, it automatically closes the turtle window after receiving the input. import math import turtle def arc(t, radius, angle): arc_length = 2 * math.pi * radius * angle / 360 n = int(arc_length / 3) + 1 step_length = arc_length / n step_angle = float(angle) / n polyline(t, n, step_leng... | radius is set to a string, producing an error in arc when computing arc_length, so execution stops and the window is closed. For instance, replace radius=input("Enter the radius of the circle to be printed: ") with radius=float(input("Enter the radius of the circle to be printed: ")). Launching the execution by hand ... | 1 | 2 |
79,628,870 | 2025-5-19 | https://stackoverflow.com/questions/79628870/why-does-allisinstancex-str-for-x-in-value-not-help-pyright-infer-iterable | I'm working with Pyright in strict mode and want to check if a function parameter value of type object is an Iterable[str]. I tried using: if isinstance(value, Iterable) and all(isinstance(v, str) for v in value): # Pyright complains: 'Type of "v" is unknown' However, looking at the elements, Pyright complains that th... | No, you cannot do it with pyright without additional structures. A plain all(isinstance(...)) is not a type guard that pyright supports. Unlike filter, an all type guard cannot be written in the stubs and needs special handling; see some discussion here or here. The type checker would not only need to ... (a workaround sketch follows the table) | 2 | 5 |
79,628,442 | 2025-5-19 | https://stackoverflow.com/questions/79628442/is-using-mutex-in-my-class-redundant-because-of-gil | I have a class with two threads MainThread - solving tasks one by one (getting self.cur_task_id, and changing self.cur_task_status) Thread self.report_status_thread - read self.cur_task_id, self.cur_task_status and send values via http I am using mutex in my class to synchronize these threads. Is that redundant becaus... | If you had a single field, which you either read or wrote, relying on the GIL would be sufficient (it does not allow torn reads / torn writes). However, here that is not the case: you have two fields, and the GIL can be released at any bytecode operation, so a separate thread could observe inconsistent values for cur_t... (a sketch follows the table) | 1 | 3 |
79,628,910 | 2025-5-19 | https://stackoverflow.com/questions/79628910/improve-code-that-finds-nan-values-with-a-condition-and-removes-them | I have a dataframe where each column starts and finished with certain number of nan values. Somewhere in the middle of a column there is a continuous list of values. It can happen that a nan value "interrupts" the data. I want to iterate over each column, find such values and then remove the whole row. For example, I w... | You could use a vectorial approach with isna and cummin to perform boolean indexing. First let's use one column as example: # identify NaNs m1 = df['A'].isna() # Identify external NaNs m2 = (m1.cummin()|m1[::-1].cummin()) out= df.loc[m2 | ~m1, 'A'] Output: 0 NaN 1 NaN 2 NaN 3 1.0 4 4.0 5 6.0 6 6.0 7 9.0 9 13.0 10 NaN ... | 1 | 2 |
79,628,093 | 2025-5-19 | https://stackoverflow.com/questions/79628093/can-this-similarity-measure-between-different-size-numpy-arrays-be-expressed-ent | This script: import numpy as np from numpy.linalg import norm a = np.array([(1, 2, 3), (1, 4, 9), (2, 4, 4)]) b = np.array([(1, 3, 3), (1, 5, 9)]) r = sum([min(norm(a-e, ord=1, axis=1)) for e in b]) computes a similarity measure r between different size NumPy arrays a and b. Is there a way to express it entirely with ... | You can do this: r = np.linalg.norm(a[:,None,:]-b[None], ord=1, axis=2).min(0).sum() Here, a[:,None,:] - b[None] will add extra dimensions to a and b for the proper subtraction broadcasting. The norm is unchanged, the .min(0) takes the row-wise minimum, and then .sum() gets the final result. | 1 | 2 |
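As a sanity check, here are the loop and the broadcasted one-liner side by side on the question's data:

```python
import numpy as np
from numpy.linalg import norm

a = np.array([(1, 2, 3), (1, 4, 9), (2, 4, 4)])
b = np.array([(1, 3, 3), (1, 5, 9)])

loop = sum(min(norm(a - e, ord=1, axis=1)) for e in b)
# (3,1,3) - (1,2,3) broadcasts to (3,2,3); norm over axis 2 gives (3,2)
vec = norm(a[:, None, :] - b[None], ord=1, axis=2).min(0).sum()
assert np.isclose(loop, vec)
```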
79,630,089 | 2025-5-20 | https://stackoverflow.com/questions/79630089/how-to-display-a-legend-when-plotting-a-geodataframe | I have a GeoDataFrame I want to plot. This works fine, however somehow I cannot easily plot its legend. I have tried a number of alternatives and checked solutions from googling and LLM, but I do not understand why this does not work. Code: import geopandas as gpd from shapely.geometry import box, Polygon, LineString i... | The issue is that plot_obj = gdf.plot(...) returns an Axes object, not a plot "handle" that can be passed to legend(). To display a legend, you need to create a proxy artist (e.g., a matplotlib.patches.Patch) that mimics the appearance of your GeoDataFrame's geometry (in your case, a red-bordered polygon with no fill),... | 1 | 1 |
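A minimal runnable sketch of the proxy-artist approach (the label text is mine):

```python
import geopandas as gpd
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
from shapely.geometry import box

gdf = gpd.GeoDataFrame(geometry=[box(0, 0, 1, 1)])
ax = gdf.plot(edgecolor="red", facecolor="none")

# Proxy artist mimicking the layer's style, since gdf.plot returns an Axes
proxy = Patch(edgecolor="red", facecolor="none", label="Area of interest")
ax.legend(handles=[proxy])
plt.show()
```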
79,632,156 | 2025-5-21 | https://stackoverflow.com/questions/79632156/how-to-mark-a-class-as-abstract-in-python-no-abstract-methods-and-in-a-mypy-com | I'm trying to make it impossible to instantiate a class directly, without it having any unimplemented abstract methods. Based on other solutions online, a class should have something along the lines of: class Example: def __new__(cls,*args,**kwargs): if cls is Example: raise TypeError("...") return super().__new__(cls,... | May I suggest a mixin instead of a decorator? You basically want to check if cls.mro()[1] is the abstract class (i.e., in the current class is a direct subclass of Abstract) from typing import Self, final class Abstract: @final def __new__(cls, *args, **kwargs) -> Self: if cls.mro()[1] is Abstract: raise TypeError("...... | 2 | 2 |
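A self-contained version of the mixin with both paths exercised (requires Python 3.11+ for typing.Self):

```python
from typing import Self, final

class Abstract:
    @final
    def __new__(cls, *args, **kwargs) -> Self:
        # Direct subclasses of Abstract are the "abstract" classes
        if cls.mro()[1] is Abstract:
            raise TypeError(f"{cls.__name__} is abstract and cannot be instantiated")
        return super().__new__(cls)

class Base(Abstract):      # abstract: direct subclass of Abstract
    pass

class Concrete(Base):      # concrete: mro()[1] is Base, not Abstract
    pass

Concrete()                 # fine
try:
    Base()                 # raises TypeError
except TypeError as e:
    print(e)
```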
79,633,258 | 2025-5-22 | https://stackoverflow.com/questions/79633258/how-to-make-plotly-text-bold-using-scatter | I'm trying to make a graph using plotly library and I want to make some texts in bold here's the code used : import plotly.express as px import pandas as pd data = { "lib_acte":["test 98lop1", "test9665 opp1", "test QSDFR1", "test ABBE1", "testtest21","test23"], "x":[12.6, 10.8, -1, -15.2, -10.4, 1.6], "y":[15, 5, 44, ... | Not sure if there is a better solution, but, as mentioned in furas's comment you can use the HTML tag <b>…</b> for the elements that should be bold-faced. You can achieve this, for example, by adding the following line: # Create your data dict (as before) data = { ... } # Add HTML tag for boldface data["lib_acte"] = [(... | 1 | 2 |
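The truncated line above builds the labels list; a full, runnable variant of the same idea (bolding every label here, but any per-item condition can be swapped in):

```python
import pandas as pd
import plotly.express as px

data = {
    "lib_acte": ["test 98lop1", "test9665 opp1", "test QSDFR1",
                 "test ABBE1", "testtest21", "test23"],
    "x": [12.6, 10.8, -1, -15.2, -10.4, 1.6],
    "y": [15, 5, 44, 37, 24, 38],
}
# Plotly renders HTML tags in text, so wrap the labels to bold them
data["lib_acte"] = [f"<b>{s}</b>" for s in data["lib_acte"]]

df = pd.DataFrame(data)
fig = px.scatter(df, x="x", y="y", text="lib_acte")
fig.update_traces(textposition="top center")
fig.show()
```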
79,634,830 | 2025-5-23 | https://stackoverflow.com/questions/79634830/seeking-for-help-illustrate-this-y-combinator-python-implementation | I once read this Python implementation of Y-combinator in a legacy code deposit: def Y_combinator(f): return (lambda x: f(lambda *args: x(x)(*args)))( lambda x: f(lambda *args: x(x)(*args)) ) And there exists an example usage: factorial = Y_combinator( lambda f: lambda n: 1 if n == 0 else n * f(n - 1) ) Could anyone ... | With some hesitation I will try to answer your question. I have been programming in Python for more than twenty years and it took me more than an hour to unravel this. I'll start with a simple observation about lambda - one that we all know. Lambda expressions create anonymous functions, and therefore they can always b... | 2 | 3 |
79,636,956 | 2025-5-24 | https://stackoverflow.com/questions/79636956/setup-dj-rest-auth-and-all-allauth-not-working | Hello i'm trying to setup dj_rest_auth and allauth with custom user model for login for my nextjs app but it seems not working the backend part besides it not working i get this warning /usr/local/lib/python3.12/site-packages/dj_rest_auth/registration/serializers.py:228: UserWarning: app_settings.USERNAME_REQUIRED is d... | The warnings are an issue with recent releases of django-allauth and you can resolve it by downgrading django-allauth to version 65.2.0 The docs said this in version 65.6.0: A check is in place to verify that ACCOUNT_LOGIN_METHODS is aligned with ACCOUNT_SIGNUP_FIELDS. The severity level of that check has now been low... | 1 | 1 |
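If staying on the newer release, the check can instead be satisfied by aligning the two new-style settings; an email-only sketch (setting names and the asterisk syntax follow allauth 65.6+, so treat the exact values as an assumption to verify against your version's docs):

```python
# settings.py sketch, assuming an email-only login flow
ACCOUNT_LOGIN_METHODS = {"email"}
ACCOUNT_SIGNUP_FIELDS = ["email*", "password1*", "password2*"]  # * marks required
```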
79,636,381 | 2025-5-24 | https://stackoverflow.com/questions/79636381/chromium-web-extension-with-nodriver | I downloaded chromium version 136 and im using it on macOS 64 ARM. I opened it manually and added a chrome web extension and it works. When I close chrome, quit, and open it the extension is always active. But when I run my code with NoDriver from a python script, the chromium tab thats opened does not have the extensi... | Try: args = [ f"--window-size={window_width},{window_height}", f"--window-position={x_position},{y_position}", "--disable-sync", "--no-first-run", "--no-default-browser-check", "--disable-backgrounding-occluded-windows", "--disable-renderer-backgrounding", "--disable-background-timer-throttling", "--disable-breakpad", ... | 2 | 2 |
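For completeness, unpacked extensions are usually loaded through startup flags rather than persisting from a manual session; a sketch using nodriver's browser_args (the extension path is hypothetical):

```python
import nodriver as uc

EXT_PATH = "/path/to/unpacked_extension"  # hypothetical path

async def main():
    browser = await uc.start(
        browser_args=[
            f"--load-extension={EXT_PATH}",  # load the unpacked extension each run
            "--disable-sync",
            "--no-first-run",
        ],
    )
    page = await browser.get("https://example.com")

if __name__ == "__main__":
    uc.loop().run_until_complete(main())
```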
79,637,271 | 2025-5-25 | https://stackoverflow.com/questions/79637271/python-text-tokenize-code-to-output-results-from-horizontal-to-vertical-with-gra | Below code tokenises the text and identifies the grammar of each tokenised word. import nltk from nltk.tokenize import sent_tokenize, word_tokenize from nltk.corpus import wordnet as wn #nltk.download() text = "Natural language processing is fascinating" # tokenise the sentence words = word_tokenize(text) print(words) ... | You can achieve it this way : # Lists to store parts of speech nouns = [] verbs = [] for w in words: synsets = wn.synsets(w) if synsets: pos = synsets[0].pos() if pos == 'n': nouns.append(w.lower()) elif pos == 'v': verbs.append(w.lower()) full solution: import nltk from nltk.tokenize import word_tokenize from nltk.co... | 1 | 0 |
79,639,284 | 2025-5-26 | https://stackoverflow.com/questions/79639284/output-is-not-what-it-is-supposed-to-be | I am making a todo app in Python in which the output is not working according to the code while True: user_action = input("type add, show, edit, remove or exit ") user_action = user_action.strip() if 'add'in user_action: todo = user_action[4:] with open('todos.txt', 'r') as file: todos = file.readlines() todos.app... | As OldBoy said in the comment, you are currently removing the newline characters with the strip() Python method. The strip method removes leading and trailing characters. This means you append all the inputs from the add functionality onto a single line. I fixed it by adding the '\n', which is the new line charac... | 1 | 1 |
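A condensed, runnable sketch of the fixed loop (opening in append mode also avoids re-reading and re-writing the whole file):

```python
while True:
    user_action = input("type add, show or exit ").strip()
    if 'add' in user_action:
        todo = user_action[4:] + '\n'        # re-add the newline that strip() removed
        with open('todos.txt', 'a') as file:  # append mode: no read/rewrite needed
            file.write(todo)
    elif 'show' in user_action:
        with open('todos.txt', 'r') as file:
            for line in file:
                print(line.strip())
    elif 'exit' in user_action:
        break
```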
79,638,451 | 2025-5-26 | https://stackoverflow.com/questions/79638451/error-while-running-constraints-optimization-using-cvxpy | I faced some issues when doing constrained optimization. Use CVXPY Variables in Optimization: I'll set the unknown values (NaNs) to be part of the CVXPY optimization variable. import cvxpy as cp import numpy as np def optimize_x_simple(A, x_values): # Convert the list to a numpy array x_values = np.array(x_values, dty... | You can't set an element of a NumPy array to a CVXPY variable. A workaround is to use a dummy matrix multiplication to inflate your x_unknown to the size of x_full and then add the inflated array to the known values. Additionally, you need to replace the NaNs in x_full with zeros. IDX_unknown2full = np.zeros((len(x_... | 1 | 3 |
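A compact, runnable sketch of that scatter-matrix workaround with made-up data (variable names are mine):

```python
import cvxpy as cp
import numpy as np

x_full = np.array([1.0, np.nan, 3.0, np.nan])
known = ~np.isnan(x_full)

x_unknown = cp.Variable(int((~known).sum()))

# Scatter matrix: places each unknown entry at its position in the full vector
S = np.zeros((len(x_full), x_unknown.size))
S[np.flatnonzero(~known), np.arange(x_unknown.size)] = 1.0

# Known values (NaNs zeroed out) plus the inflated unknowns
x = np.nan_to_num(x_full) + S @ x_unknown

prob = cp.Problem(cp.Minimize(cp.sum_squares(x)), [x >= 0])
prob.solve()
print(x_unknown.value)
```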
79,642,264 | 2025-5-28 | https://stackoverflow.com/questions/79642264/numpy-testing-assert-array-equal-fails-when-comparing-structured-numpy-arrays | I was comparing some data using numpy.testing.assert_array_equal. The data was read from a MAT-file using scipy.io.loadmat. The MAT-file was generated as follows: a = [1, 2; 3, 4]; b = struct('MyField', 10); c = struct('MyField', [1, 2; 3, 4]); save('example.mat', 'a', 'b', 'c'); For testing, I manually generated the ... | This happens because the field in c contains a Numpy array, and assert_array_equal tries to compare structured arrays using ==, which fails when it encounters arrays inside object fields. In b, the field is a scalar (10.0), so it works fine, but in c, MyField holds an array, and comparing two arrays with == returns an ... | 3 | 1 |
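One workaround is to compare object fields element by element instead of relying on == over the whole structured array; a sketch with made-up data mirroring c:

```python
import numpy as np
import numpy.testing as npt

dt = np.dtype([("MyField", "O")])

expected = np.empty((1, 1), dtype=dt)
expected["MyField"][0, 0] = np.array([[1.0, 2.0], [3.0, 4.0]])

actual = np.empty((1, 1), dtype=dt)
actual["MyField"][0, 0] = np.array([[1.0, 2.0], [3.0, 4.0]])

# Compare each field's payload directly, avoiding == on object fields
for name in expected.dtype.names:
    npt.assert_array_equal(actual[name][0, 0], expected[name][0, 0])
```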
79,642,049 | 2025-5-28 | https://stackoverflow.com/questions/79642049/how-to-use-ak-array-with-index-arrays-to-create-a-custom-masked-output | I have these arrays in Awkward Array: my_indices = ak.Array([[0,1],[0],[1]]) my_dummy_arr = ak.Array([[1,1],[1,1],[1,1]]) I want to generate this result: [[1,1],[1,0],[0,1]] Basically, for each sub-array, I want to keep 1 at the positions in my_indices and set 0 elsewhere, matching the structure of my_dummy_arr. How ... | You can achieve that via a local index for the dummy array: local_index = ak.local_index(my_dummy_arr) mask = ak.any(local_index[:, :, None] == my_indices[:, None, :], axis=-1) result = ak.where(mask, 1, 0) | 1 | 0 |
79,643,186 | 2025-5-29 | https://stackoverflow.com/questions/79643186/can-i-index-class-types-in-a-python-list | The intended functionality for the code I'm working on is to be able to load in molecules to a two-player game at runtime, and I've largely implemented a good chunk of the physics. I'm wondering if the functionality to look up a specific class type can be accessed in python for the part of the project I'm currently wor... | Yes, this is absolutely valid and feasible in Python! You're on the right track. In Python, classes are first-class objects, meaning you can store them in lists, dictionaries, and variables, then call them to create instances. In your case, I recommend the Factory Pattern because it's Extensible, Error-safe, Game-frien... | 2 | 2 |
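A minimal sketch of that registry/factory idea (the molecule classes here are stand-ins):

```python
class Water:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

class CarbonDioxide:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

# Classes are first-class objects, so they can live in a dict
MOLECULES = {"water": Water, "co2": CarbonDioxide}

def spawn(name: str, x: float, y: float):
    cls = MOLECULES[name]   # look up the class at runtime
    return cls(x, y)        # call it to create an instance

mol = spawn("water", 1.0, 2.0)
print(type(mol).__name__)   # Water
```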
79,645,288 | 2025-5-30 | https://stackoverflow.com/questions/79645288/how-to-efficiently-retrieve-xy-coordinates-from-image | I have an image img with 1000 rows and columns each. Now I would like to consider each pixel as x- and y-coordinates and extract the respective value. An illustrated example of what I want to achieve: In theory, the code snippet below should work. But it is super slow (I stopped the execution after some time). img = n... | A possible solution: pd.DataFrame(img).stack().reset_index().to_numpy() Method pd.DataFrame creates a dataframe where rows represent y-coordinates and columns represent x-coordinates. stack then compresses the dataframe's columns into a single column, turning the x-values into part of a hierarchical index alongside th... | 1 | 4 |
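A pure-NumPy alternative that avoids pandas entirely (columns ordered y, x, value; swap as needed):

```python
import numpy as np

img = np.random.default_rng(0).random((1000, 1000))

ys, xs = np.indices(img.shape)                 # coordinate grids, shape (1000, 1000)
out = np.column_stack((ys.ravel(), xs.ravel(), img.ravel()))
print(out.shape)                               # (1000000, 3)
```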
79,644,894 | 2025-5-30 | https://stackoverflow.com/questions/79644894/python-doctests-for-a-colored-text-output | Is it possible to write a docstring test for a function that prints out the colored text into the command line? I want to test only the content ignoring the color or to add somehow the information on color into the docstring. In the example below the test has failed, but it should not. Example from colorama import Fore... | Yes, it is possible to encode the color information into the docstring using wrapper that alters the docstrings. Below is the example code that passes the test from colorama import Fore def wrapper(func): # wrapper func that alters the docstring func.__doc__ = func.__doc__.format(**{"GREEN": Fore.GREEN, "RESET": Fore.R... | 1 | 1 |
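A self-contained variant of that docstring-altering wrapper (names are mine):

```python
from colorama import Fore

def with_colors(func):
    # Substitute the escape codes into the docstring so doctest can match them
    func.__doc__ = func.__doc__.format(GREEN=Fore.GREEN, RESET=Fore.RESET)
    return func

@with_colors
def greet():
    """
    >>> greet()
    {GREEN}hello{RESET}
    """
    print(Fore.GREEN + "hello" + Fore.RESET)

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```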
79,648,605 | 2025-6-2 | https://stackoverflow.com/questions/79648605/how-to-define-nullable-fields-for-sqltransform | I'm using Beam SqlTransform in python, trying to define/pass nullable fields. This code works just fine: with beam.Pipeline(options=options) as p: # ... # Use beam.Row to create a schema-aware PCollection | "Create beam Row" >> beam.Map(lambda x: beam.Row( user_id=int(x['user_id']), user_name=str(x['user_name]) )) | 'S... | When working with Apache Beam's SqlTransform in Python, you need to properly define nullable fields in your schema. Here are the correct approaches: Option 1: Using beam.Row with Optional Types The most straightforward way is to use Python's Optional type hint with beam.Row: from typing import Optional with beam.Pipeli... | 1 | 1 |
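Another common route to nullable schema fields in Beam's Python SDK is a typed NamedTuple registered with RowCoder; a sketch, assuming an Optional annotation maps to a nullable schema column:

```python
import typing
import apache_beam as beam

class User(typing.NamedTuple):
    user_id: int
    user_name: typing.Optional[str]  # nullable column in the Beam schema

beam.coders.registry.register_coder(User, beam.coders.RowCoder)

with beam.Pipeline() as p:
    rows = (
        p
        | beam.Create([{"user_id": 1, "user_name": None}])
        | beam.Map(lambda x: User(**x)).with_output_types(User)
    )
```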
79,648,218 | 2025-6-2 | https://stackoverflow.com/questions/79648218/how-can-i-efficiently-parallelize-and-optimize-a-large-scale-graph-traversal-alg | I'm working on a Python project that involves processing a very large graph - it has millions of nodes and edges. The goal is to perform a breadth-first search (BFS) or depth-first search (DFS) from a given start node to compute shortest paths or reachability. Here's the challenge : The graph is too large to fit comfort... | You can use a combination of PySpark and GraphFrames to achieve this. Sample code: from pyspark.sql import SparkSession from graphframes import GraphFrame # Create Spark session spark = SparkSession.builder \ .appName("BFS") \ .config("spark.jars.packages", "graphframes:graphframes:0.8.2-spark3.0-s_2.12") \ .getOrCreate()... | 2 | 1 |
79,648,050 | 2025-6-2 | https://stackoverflow.com/questions/79648050/updating-current-value-in-flet | Good evening. Based on the tutorial here: Flet tutorial, I am following the code and would like to ask one question. Here is my code: import flet as ft from flet import * def main(page:ft.Page): greeting =ft.Ref[ft.Column]() def hello_here(e): greeting.current.controls.append(ft.Text(f'hello {first_name.cu... | Simply assign a new list instead of appending to the old one def hello_here(e): greeting.current.controls = [ft.Text(f'hello {first_name.current.value} {last_name.current.value}')] # ... rest ... EDIT: If Flet had a memory-leak problem when assigning a new list, you could also use .clear() to remove old ele... | 4 | 1 |
79,651,164 | 2025-6-3 | https://stackoverflow.com/questions/79651164/why-did-the-mocking-api-failed | My project is huge; I tried to write a unit test for one API session import unittest from unittest.mock import patch, MagicMock from alm_commons_utils.mylau.lau_client import lauApiSession class TestlauApiSession(unittest.TestCase): @patch("alm_commons_utils.mylau.lau_client.lauApiSession.get_component") def test_mock_get_... | Your output in line #4 suggests that you are not retrieving the token. So basically your test is trying to retrieve a token from the URL https://mock-lau-login-url, your lau_login_url. You'll also need to mock the token when you want to run the lauApiSession, since the constructor of lauApiSession is still making real... | 2 | 1 |
79,651,120 | 2025-6-3 | https://stackoverflow.com/questions/79651120/formatting-integers-in-pandas-dataframe | I've read the documentation and simply cannot understand why I can't seem to achieve my objective. All I want to do is output integers with a thousands separator where appropriate. I'm loading a spreadsheet from my local machine that is in the public domain here Here's my MRE: import pandas as pd WORKBOOK = "/Volumes/S... | Your approach works fine, however style does not modify the DataFrame in place. Instead it returns a special object that can be displayed (for instance in a notebook) or exported to a file. You could see the HTML version in jupyter with: df.style.format(my_formatter) (this should be the last statement of the current c... | 2 | 4 |
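A sketch showing the returned Styler being captured and exported, with a thousands-separator format:

```python
import pandas as pd

df = pd.DataFrame({"population": [1234567, 89012, 3456]})

styled = df.style.format("{:,.0f}")   # returns a Styler; df itself is unchanged
styled.to_html("formatted.html")      # or display `styled` as the last expression
                                      # of a notebook cell
```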
79,652,536 | 2025-6-4 | https://stackoverflow.com/questions/79652536/matplotlib-hover-coordinates-with-labelled-xticks | I've got a matplotlib graph with labelled X-ticks: The labels repeat (in case that's relevant). In the real graph, there is a multi-level X-axis with more clarification in the lower layers. That works fine, but I want to be able to hover the mouse and see the X-coordinate in the top-right of the graph. Whenever I set ... | From [ Matplotlib mouse cursor coordinates not shown for empty tick labels ] adding ax.format_coord = lambda x, y: 'x={:g}, y={:g}'.format(x, y) anywhere before plt.show() makes it show the right coordinates. If you want to display A, B or C as x coordinates, you can make a custom coordinates format string function in ... | 1 | 2 |
79,653,909 | 2025-6-5 | https://stackoverflow.com/questions/79653909/how-i-can-color-a-st-data-editor-cell-based-on-a-condition | I'm using streamlit's st.data_editor, and I have this DataFrame: import streamlit as st import pandas as pd df = pd.DataFrame( [ {"command": "test 1", "rating": 4, "is_widget": True}, {"command": "test 2", "rating": 5, "is_widget": False}, {"command": "test 3", "rating": 3, "is_widget": True}, ] ) edited_df = st.data_e... | According to the docs, you can't color a cell unless it is in a disabled column. Styles from pandas.Styler will only be applied to non-editable columns. If disabling the rating column is acceptable to you, you can use pandas.Styler like this: import streamlit as st import pandas as pd df = pd.DataFrame( [ {"command": "test 1", "r... | 2 | 0 |
79,657,102 | 2025-6-7 | https://stackoverflow.com/questions/79657102/how-to-pass-several-variables-in-for-a-pandas-groupby | This code works: cohort = r'priority' result2025 = df.groupby([cohort],as_index=False).agg({'resolvetime': ['count','mean']}) and this code works cohort = r'impactedservice' result2025 = df.groupby([cohort],as_index=False).agg({'resolvetime': ['count','mean']}) and this code works result2025 = df.groupby(['impactedse... | The issue is that when you do: cohort = r'impactedservice,priority' You're creating a single string, not a list of column names. Pandas treats that as a single column name (which doesn’t exist), hence the KeyError. Correct way: Define cohort as a list of column names: cohort = ['impactedservice', 'priority'] result202... | 1 | 2 |
79,657,990 | 2025-6-8 | https://stackoverflow.com/questions/79657990/why-is-my-shiny-express-app-having-trouble-controlling-the-barchart-plot | I'm having trouble setting the y-axis to start at zero for the following Shiny Express Python script. Instead it starts at 4.1; set_ylim is having no effect. from shiny.express import input, render, ui import matplotlib.pyplot as plt import numpy as np data = { "Maturity": ['1Y', '2Y', '3Y', '4Y', '5Y', '6Y', '7Y',... | This is an issue which results from how you process your data. Avoid the array conversion: from shiny.express import render import matplotlib.pyplot as plt data = { '1Y': 4.1, '2Y': 4.3, '3Y': 4.5, '4Y': 4.7, '5Y': 4.8, '6Y': 4.9, '7Y': 5.0, '8Y': 5.1 } @render.plot def create_line_plot(): x = list(data.keys()) y = lis... | 1 | 0 |
79,657,426 | 2025-6-8 | https://stackoverflow.com/questions/79657426/estimation-internet-speed-test-app-in-flet | I have completely followed the following link: YouTube tutorial for speed test, and also the given link: GitHub page. Here is my code: import flet as ft from flet import * from time import sleep import speedtest def main(page:ft.Page): page.title ="Internet Speed Test" page.theme_mode ="dark" page.vertical_alignment... | There were multiple issues with your code. Fixes: .text_to_print was used incorrectly (replaced with .value); speedtest might silently fail (added try/except for debugging); the font might not load (use a system font or make sure the asset path is correct); sleep() might block the UI (use asyncio.sleep() if moving to the async version later). Here is... | 1 | 1 |
79,658,364 | 2025-6-9 | https://stackoverflow.com/questions/79658364/asyncio-run-coroutine-from-a-synchronous-function | How can I call task2 from func without declaring func async and awaiting it? My first thought was to create a thread and use run_coroutine_threadsafe but it deadlocks. Same as not using a thread. Do I have to start a new loop? import asyncio from threading import Thread async def task2(): print("starting task2...") awa... | Python threading synchronization primitives such as Thread.join don't work well with asyncio because they suspend the thread and therefore block the running event loop, so run_coroutine_threadsafe cannot use the blocked event loop. Instead you have loop.run_in_executor for creating threaded tasks that don't block the eve... | 1 | 3 |
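A runnable sketch of that pattern, matching the question's task2:

```python
import asyncio

async def task2():
    print("starting task2...")
    await asyncio.sleep(1)
    print("task2 done")

def func(loop):
    # Runs in a worker thread, so blocking on the future here is safe:
    # the event loop stays free to actually execute task2.
    fut = asyncio.run_coroutine_threadsafe(task2(), loop)
    fut.result()

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor moves the blocking sync function off the loop's thread
    await loop.run_in_executor(None, func, loop)

asyncio.run(main())
```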
79,661,702 | 2025-6-11 | https://stackoverflow.com/questions/79661702/how-to-specify-relevant-columns-with-read-excel | As far as I can tell, the following MRE conforms to the relevant documentation: import polars df = polars.read_excel( "/Volumes/Spare/foo.xlsx", engine="calamine", sheet_name="natsav", read_options={"header_row": 2}, columns=(1,2,4,5,6,7), # columns 0 and 3 are not needed ) print(df.head()) The issue here is that the ... | When you use Calamine, which is the default engine and which you've specified explicitly, the docs say (emphasis mine): this engine can be used for reading all major types of Excel Workbook (.xlsx, .xlsb, .xls) and is dramatically faster than the other options, using the fastexcel module to bind the Rust-based Calamin... | 1 | 1 |
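One workaround consistent with that: push the column selection down to fastexcel via read_options (assuming fastexcel's use_columns option; untested sketch):

```python
import polars

df = polars.read_excel(
    "/Volumes/Spare/foo.xlsx",
    engine="calamine",
    sheet_name="natsav",
    # forwarded to fastexcel, which handles column selection for calamine
    read_options={"header_row": 2, "use_columns": [1, 2, 4, 5, 6, 7]},
)
print(df.head())
```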