question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote
|---|---|---|---|---|---|---|
79,260,585 | 2024-12-7 | https://stackoverflow.com/questions/79260585/why-selenium-webdriver-is-not-imported | >>> import selenium >>> selenium.webdriver Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'selenium' has no attribute 'webdriver' >>> from selenium import * >>> webdriver Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'webdriver' is ... | According to my current understanding, if not asked, in python all classes/functions should load. The flaw in your understanding is that selenium.webdriver is not a class or function. In reality, selenium.webdriver is a module, and it has to be imported, rather than used as if it was an object. Note that importing th... | 1 | 4 |
79,260,345 | 2024-12-7 | https://stackoverflow.com/questions/79260345/try-to-write-a-generic-free-function-in-cython-but-get-an-error | error info: .\tools.c(17775): error C2069: Cast from "void" to non-void .\tools.c(17775): error C2036: "void *" : unknown size my code: This function attempts to free the memory of one-dimensional array or two-dimensional array, but do not work. cdef void free_memory(void* arr, int ndim, int rows): cdef int i if ndim... | This: free(<double*>arr[i]) is attempting to index a void*, and then cast the result to the type you're expecting. But you can't index a void*. The compiler doesn't have any information about what type arr points to. You're casting too late. What you need to do is cast arr itself to the right type, and then index that... | 1 | 3 |
79,259,261 | 2024-12-6 | https://stackoverflow.com/questions/79259261/generic-of-class-ignored-by-generic-pydantic-constructor | I have a generic class with a function that returns a Pydantic model where one of the fields is the generic type. What follows is a code snippet that defines two classes the generic GetValue and GetInt. I don't understand why the behavior of GetValue[int] is not the same as GetInt. from typing import Any, Generic, Type... | Well, Python doesn't really support a simple way to determine the type of a TypeVar at runtime. pydantic models are a bit special because they override the standard behaviour of generic classes such that they can more or less easily determine a generic type. However, I just realized that in your code you're not trying ... | 2 | 3 |
79,251,301 | 2024-12-4 | https://stackoverflow.com/questions/79251301/creating-a-decaying-halo-around-a-cluster-in-an-image-with-python | I'm trying to modify the surrounding values around clusters in an image so that the neighbouring pixels decrease to zero following an exponential decay. I need to find a way to control the decay rate with a parameter. I also want to keep the original non-zero values in the image. I have created an example where I used c... | Note that convolution with a Gaussian kernel represents an isotropic diffusion process that will decrease the intensity of objects isotropically, so pasting the original image on top will destroy the halo. Something like the following, using mathematical morphology, should work. Iteratively expand the contour of the objec... | 1 | 1
79,259,917 | 2024-12-7 | https://stackoverflow.com/questions/79259917/python-type-for-dict-like-object | I have some function that accepts a dict-like object. from typing import Dict def handle(x: Dict[str, str]): pass # Do some processing here... I still get a type warning if I try passing a Shelf to the function, even though the function supports it. How do I specify the type of a dict-like object? | Use collections.abc.Mapping (typing.Mapping is deprecated): from collections.abc import Mapping def handle(x: Mapping[str, str]): pass # Do some processing here... | 1 | 1
79,258,525 | 2024-12-6 | https://stackoverflow.com/questions/79258525/plotting-quiver-plots-in-matplotlib | I want to plot the slope field for: 0.5*sin(0.5*pi*x)*sqrt(y+7) import numpy as np import matplotlib.pyplot as plt # Specify the grid of dots x = np.arange(-3,3,0.3) y = np.arange(-2,4,0.3) X, Y = np.meshgrid(x,y) # Create unit vectors at each dot with correct slope dy = 0.5*(np.sin(x*np.pi*0.5))*np.sqrt(y+7) dx = np.o... | So, the first thing to do is remove the arrowheads, which @jasonharper shows in the comments can be done by adding these options to your quiver call: headwidth=0, headlength=0, headaxislength=0. Next is to deal with the length. You're currently normalizing by X and Y when you should be normalizing by dx and dy. I would... | 3 | 2 |
79,256,403 | 2024-12-5 | https://stackoverflow.com/questions/79256403/telebot-bot-callback-query-handler-isnt-working | Can somebody tell me what I am doing wrong? The bot sends me an inline keyboard, but after clicking the button, I do not receive any callbacks (even the logger doesn't send anything). Also, if I pass a URL to InlineKeyboardButton it works properly @bot.message_handler() def add_city(message): if user_status[message.chat.... | The solution was updating the token. Hope it will help someone | 2 | 0
79,257,762 | 2024-12-6 | https://stackoverflow.com/questions/79257762/how-to-run-dependence-tasks-concurrently-with-non-dependence-ones-and-tasks-ins | I am learning asyncio and there is a problem of running a dependent task concurrently with non-dependent ones. So far I couldn't make it work. This is my code: import asyncio import random def first_execution(choice): if choice==1: print(f"First result {choice}") return choice else: print(f"First result {0}") return ... | First, when I run your code I get either First result 0 -> Third result 2 -> First check of value 1 complete or First result 1 -> Second result 2 -> First check of value 1 complete depending on the random choice generated, which is my understanding of what you want. So I cannot duplicate what you say is happening as fa... | 1 | 1
79,258,814 | 2024-12-6 | https://stackoverflow.com/questions/79258814/numpythonic-way-of-float-to-signed-integer-normalization | What is the faster numpythonic way of this normalization: def normalize_vector(x, b, axis): """ Normalize real vector x and outputs an integer vector y. Parameters: x (numpy.ndarray): Input real vector. (batch_size, seq_len) b (int): Unsigned integer defining the scaling factor. axis (int/None): if None, perform flaten... | there is np.piecewise to transform data based on multiple conditions. def normalize_vector2(x, b, axis): # Step 1: Find the maximum absolute value in `x` m = np.max(np.abs(x), axis=axis) y = np.piecewise(x, [x > 0, x < 0], [ lambda xi: ((2**b - 1) * xi / m), lambda xi: (2**b * xi / m) ]) return y.astype(int) if your p... | 1 | 3 |
79,257,679 | 2024-12-6 | https://stackoverflow.com/questions/79257679/dropping-duplicates-by-column-in-pyspark | I have a PySpark dataframe like this but with a lot more data: user_id event_date 123 '2024-01-01 14:45:12.00' 123 '2024-01-02 14:45:12.00' 456 '2024-01-01 14:45:12.00' 456 '2024-03-01 14:45:12.00' I drop duplicates of users, leaving the last event. I am using something like this: df = df.orderBy(['user... | When you call dropDuplicates() without passing any columns to it - it just drops the identical rows, no matter what, for all columns (so we may kind of call it "deterministic" - as all columns will have same values in different rows being dropped and only one is kept - doesn't matter which one). This non-deterministic ... | 2 | 2 |
79,255,413 | 2024-12-5 | https://stackoverflow.com/questions/79255413/how-does-python-threadpoolexecutor-switch-between-concurrent-threads | How does Python ThreadPoolExecutor switch between concurrent threads? In the case of the async/await event-loop, the switching between different pieces of the code happens at the await calls. Does the ThreadPoolExecutor run each submitted task for a random amount of time? Or until something somewhere calls Thread.slee... | Does the ThreadPoolExecutor run each submitted task for a random amount of time? No. The executor runs a thread pool with a work queue. When you add a new task, a thread will pick it up and run it to completion. Individual threads do not switch tasks before the previous task has completed. As long as the global inter... | 1 | 2
79,258,033 | 2024-12-6 | https://stackoverflow.com/questions/79258033/how-to-query-a-reverse-foreign-key-multiple-times-in-django-orm | Assuming the following models: class Store(models.Model): name = models.CharField(max_length=255) class Stock(models.Model): name = models.CharField(max_length=255) store = models.ForeignKey(Store, on_delete=models.PROTECT) class Consignment(models.Model): cost = models.FloatField() class Order(models.Model): quantity ... | You query in reverse by spanning over multiple relations with the __ separator: Consignment.objects.filter(order__stock__store__name__startswith='One') | 1 | 1 |
79,255,406 | 2024-12-5 | https://stackoverflow.com/questions/79255406/python-wagtail-crashes-6-3-1-streamfield-object-has-no-attribute-bind-to-model | While updating an old Wagtail website to the current version, I encounter this error, in admin/panels/group.py line 74: AttributeError: 'StreamField' object has no attribute 'bind_to_model' Since this is apparently in the Wagtail software as distributed, I am quite confused. The full traceback is as follows: Exception... | The error indicates that you have a StreamField object inside a panels definition such as content_panels. This isn't valid - it should only contain panel objects such as FieldPanel. As part of the Wagtail 2.x to 6.x upgrade process, you would have to replace any instances of StreamFieldPanel with a plain FieldPanel - m... | 1 | 1 |
79,256,797 | 2024-12-6 | https://stackoverflow.com/questions/79256797/plot-variables-from-xml-file-in-python | I have an XML file (please see attached). I made a Python script to try to obtain information from the XML to later do some plots. The aim of this code is: a. To iterate over the XML file to find the </event> and then </origin> and then <quality> to reach <azimuthalGap>348.000</azimuthalGap> b. To save each azimuthalGa... | If you have a list of events, you can create the list of gaps you're interested in with root.findall(). import xml.etree.ElementTree as ET root = ET.parse("your_file.xml").getroot() gaps = root.findall(".//azimuthalGap") print([x.text for x in gaps]) Output: ['353.000', '453.000', …] For the third point of your question, you will fi... | 1 | 2
79,256,824 | 2024-12-6 | https://stackoverflow.com/questions/79256824/why-are-enums-incompatible-across-python-packages | An enum is declared in an imported package and identically in the importer. Same value, but Python treats the imported enum value as different for some reason. Package 1 is a parser that I wrote which outputs a dictionary containing some values from this enum declared in the parser package: class NodeFace(Enum): TOP = ... | This is just how Python normally works. Defining two classes the same way doesn't make them the same class, or make their instances equal. Even if they print the same, Python doesn't compare objects based on how they print. You have two separate enums with separate members. Unless you implement different behavior (whic... | 2 | 2 |
79,256,675 | 2024-12-6 | https://stackoverflow.com/questions/79256675/dividing-nested-calls-into-several-lines | I have a function like this in Python (the capital letters can represent constants, functions, anything, but not function calls): def f(x): a = foo1(A, B, foo3(E, foo2(A, B))) b = foo3(a, E) return b and I want to break it up into "atomic" operations like this: def f(x): tmp1 = foo2(A, B) tmp2 = foo3(E, tmp1) a = foo1... | I think this is what you want exactly: s = """def f(x): a = foo1(A, B, foo3(E, foo2(A, B))) b = foo3(a, E) return b""" import re result = [] reg = re.compile(r'^(\s+).*?(\w+\([^\(\)]*\))') count = 1 for l in s.splitlines(): r = reg.search(l) if not r: result.append(l) continue while r: indent = r[1] var = f'tmp_{count}... | 2 | 4 |
79,255,383 | 2024-12-5 | https://stackoverflow.com/questions/79255383/python-ctypes-and-np-array-ctypes-data-as-unexpected-behavior-when-indexing | Using Python ctypes and numpy library, I pass data to a shared library and encounter a very weird behavior C function : #include <stdio.h> typedef struct { double *a; double *b; } s_gate; void printdouble(s_gate*, int); void printdouble(s_gate *gate, int n) { for (int i =0; i < n; i++) { printf("gate->a[%d] = %f\n", i,... | The memory is being freed for the masked arrays creating undefined behavior. Likely the a pointer's memory is reused but the b pointer happens to still be the same. Both are freed though. Create the masked arrays and hold a reference to them in the s_gate object, then it works: import ctypes as ct import numpy as np PD... | 2 | 1 |
79,249,769 | 2024-12-4 | https://stackoverflow.com/questions/79249769/why-is-the-plot-of-the-refractive-index-wavelength-dependent-fresnels-equation | I want to reproduce the reflectance spectrum of a thin film whose complex refractive index is wavelength-dependent (the complex refractive index data, named N2 in code, can be obtained from here). Using Fresnel's equations for medium 1: air, medium 2: thin film, and medium 3: air. The reflection coefficient where... | You can try this. The major changes are: N2 is given by n + i.k, rather than n + i.n as you had; there is just N2 ** 2 in the expression for delta, not abs(N2)**2, which would lose an important complex part; I think the reflection coefficients should have a '+', not a '-' sign in the denominator (though I could be wro... | 2 | 2
79,251,417 | 2024-12-4 | https://stackoverflow.com/questions/79251417/in-polars-how-can-you-update-several-columns-simultaneously | Suppose we have a Polars frame something like this lf = pl.LazyFrame([ pl.Series("a", ...), pl.Series("b", ...), pl.Series("c", ...), pl.Series("i", ...) ]) and a function something like this def update(a, b, c, i): s = a + b + c + i a /= s b /= s c /= s return a, b, c that depends on elements of columns a, b, c and ... | update lf.select( pl.exclude(["a","b","c"]), pl .struct(pl.all()).map_elements(update, return_dtype=pl.List(pl.Float64)) .list.to_struct(fields=["a","b","c"]) .alias("result") ).unnest("result").collect() shape: (4, 5) ┌─────┬─────┬──────────┬──────────┬────────┐ │ i ┆ o ┆ a ┆ b ┆ c │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │... | 1 | 1
79,255,551 | 2024-12-5 | https://stackoverflow.com/questions/79255551/pandas-load-json-from-file-and-save-to-excel-valueerror | I'm running a simple python test to read json and output to excel. I'm getting the following error: "ValueError("All arrays must be of the same length")" JSON file example (test data) { "content": [ { "id": "test_id", "url": "http://www.google.com/", "path1": "dir1/dir1_subdir_1/hello.txt", "path2": "external://hello.txt",... | Assuming you only need content from the json file, you can first use json.load and then use it to create dataframe: import pandas as pd import json with open('simple.json') as json_data: data = json.load(json_data) df = pd.DataFrame(data['content']) # output to excel df.to_excel("json_output.xlsx", index=False) | 1 | 1
79,252,863 | 2024-12-4 | https://stackoverflow.com/questions/79252863/is-it-possible-to-speed-up-my-set-implementation | I am trying to make a fast and space efficient set implementation for 64 bit unsigned ints. I don't want to use set() as that converts everything into Python ints that use much more space than 8 bytes per int. Here is my effort: import numpy as np from numba import njit class HashSet: def __init__(self, capacity=1024):... | As requested, here is the class, but using jitclass. I'm not sure how much value all the type annotations add. I had been playing around to see if could get any improvements. Overall, your original code had peak performance of 20 μs. Whereas, the code below had a peak performance of 2.3 μs (an order of magnitude faster... | 1 | 1 |
79,252,957 | 2024-12-4 | https://stackoverflow.com/questions/79252957/bars-almost-disappear-when-i-layer-and-facet-charts | import numpy as np import pandas as pd import altair as alt np.random.seed(0) model_keys = ['M1', 'M2'] data_keys = ['D1', 'D2'] scene_keys = ['S1', 'S2'] layer_keys = ['L1', 'L2'] ys = [] models = [] dataset = [] layers = [] scenes = [] for sc in scene_keys: for m in model_keys: for d in data_keys: for l in layer_keys... | This is not caused by the error bars but comes from using facet instead of row and column encodings. It's possible that this is a bug, but there is an easy enough work around: If you set the width as a step instead of a fixed size it works fine. Sharing the X scale also works, but I'm sure there are situations where th... | 2 | 2 |
79,251,752 | 2024-12-4 | https://stackoverflow.com/questions/79251752/altair-layered-chart-bars-and-lines-cannot-change-color-of-the-lines | I'm using Altair to create a layered chart with bars and lines. In this simple example, I'm trying to make the strokedash with different colors without success. I have tried using scale scheme for the strokedash, but it didn't change the colors. Can anyone help? import altair as alt import pandas as pd data = { 'Date':... | strokeDash controls the dash of the line but has no impact on the color. To change the color of the line you can either use the stroke or color encoding. import altair as alt import pandas as pd data = { 'Date': ['2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01-02', '2023-01-01', '2023-01... | 2 | 1 |
79,252,144 | 2024-12-4 | https://stackoverflow.com/questions/79252144/scipy-curve-fit-how-would-i-go-about-improving-this-fit | I've been working on a standard potential which I am trying to fit with a given model: ax^2 - bx^3 + lx^4 The x and y values for the fit are generated from the code as well; the x values are generated by numpy.linspace. I have bounded the a,b,c parameters such that they are always positive. I needed the fit to mimic the d... | You can also use scipy.optimize.leastsq. Here, you can also impose constraints (through a penalty added to the residuals when they are not satisfied). You asked for the following constraints: the maximum of the function is correct; the position of the global minimum is correct. It's a little sensitive to how you imp... | 3 | 3
79,251,914 | 2024-12-4 | https://stackoverflow.com/questions/79251914/convert-list-of-integers-into-a-specific-string-format | I'm dealing with a list of integers that represent the pages in which a keyword was found. I would like to build block of code that converts this list into a string with a specific format that follow some simple rules. Single integers are converted into string. Consecutive integers are considered as intervals (left bou... | Track first and last of a range of consecutive numbers, collect the string of the range when a number is non-consecutive. I used a sub-function to avoid repeated code: res = [4, 5, 6, 7, 8, 9, 10, 22, 23, 26, 62, 63, 113, 137, 138, 139] def span(numbers): if len(numbers) == 0: # empty case return '' result = [] def _pr... | 1 | 1 |
79,252,245 | 2024-12-4 | https://stackoverflow.com/questions/79252245/pandas-filter-and-sum-but-apply-to-all-rows | I have a dataframe that has user ID, code, and value. user code value 0001 P 10 0001 P 20 0001 N 10 0002 N 40 0002 N 30 0003 P 10 I am trying to add a new column that groups by User ID, filters for code = P and sums the value. However I want this value to be applied to every row. So for the example above, the output I... | Use a mask and where rather than loc: df['Sum_of_P'] = (df['value'].where(df['code'].eq('P'), 0) .groupby(df['user']).transform('sum') ) Variant with NaNs as masked values: df['Sum_of_P'] = (df['value'].where(df['code'].eq('P')) .groupby(df['user']).transform('sum') .convert_dtypes() ) If you want to use loc you shou... | 2 | 2 |
79,251,266 | 2024-12-4 | https://stackoverflow.com/questions/79251266/space-and-time-complexity-of-flattening-a-nested-list-of-arbitrary-depth | Given a python list that contains nested lists of arbitrary levels of nesting, the goal is to return a completely flattened list i.e for the sample input [1, [2], [[[3]]], 1], the output should be [1, 2, 3, 1]. My solution: def flatten(lst): stack = [[lst, 0]] result = [] while stack: current_lst, start_index = stack[-... | Yes, the solution is correct. Yes, the time and space analysis are correct ... if you don't count the space used by result as auxiliary space, which is reasonable. Although note that result overallocates/reallocates, which you could regard as taking O(n) auxiliary space. You could optimize that by doing two passes over... | 3 | 1 |
79,251,724 | 2024-12-4 | https://stackoverflow.com/questions/79251724/multiply-within-a-group-in-polars | I would like to take the product within a group in Polars. The following works, but I suspect there is a more elegant/efficient way to perform this operation. Thank you import polars as pl import numpy as np D = pl.DataFrame({'g':['a','a','b','b'],'v':[1,2,3,4],'v2':[2,3,4,5]}) D.group_by('g').agg(pl.all().map_elements( ... | You could use Expr.product: D.group_by('g').agg(pl.all().product()) Output: ┌─────┬─────┬─────┐ │ g ┆ v ┆ v2 │ │ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ b ┆ 12 ┆ 20 │ │ a ┆ 2 ┆ 6 │ └─────┴─────┴─────┘ If you want Floats: D.group_by('g').agg(pl.all().product().cast(pl.Float64)) ┌─────┬──────┬──────... | 5 | 3
79,250,549 | 2024-12-4 | https://stackoverflow.com/questions/79250549/python-error-rv-generic-interval-missing-1-required-positional-argument-con | I have been trying to run the below code to calculate upper and lower confidence intervals using t distribution, but it keeps throwing the error in the subject. The piece of code is as below: def trans_threshold(Day): Tran_Cnt=Tran_Cnt_DF[['Sample',Day]].dropna() Tran_Cnt=Tran_Cnt.astype({'Sample':'str'}) Tran_Cnt.dtyp... | Note that the signature of scipy.stats.t is interval(confidence, df, loc=0, scale=1). There is no alpha keyword, pass it as positional or relabel it to confidence. | 1 | 2 |
79,250,415 | 2024-12-4 | https://stackoverflow.com/questions/79250415/calculate-relative-difference-of-elements-in-a-1d-numpy-array | Say I have a 1D numpy-array given by np.array([1,2,3]). Is there a built-in command for calculating the relative difference between each element and display it in a 2D-array? The result would then be given by np.array([[0,-50,-100*2/3], [100,0,-100*1/3], [200,50,0]]) Alternatively I would have to use a for-loop. | Use numpy broadcasting: a = np.array([1,2,3]) out = (a[:, None]-a)/a*100 Output: array([[ 0. , -50. , -66.66666667], [100. , 0. , -33.33333333], [200. , 50. , 0. ]]) | 3 | 2 |
79,242,142 | 2024-12-1 | https://stackoverflow.com/questions/79242142/how-can-i-create-a-stylized-tree-chart | I have been making a tree of fraternity-adjacent bigs and littles and was looking for a way to automate it for changes as more people join. Everyone's names and years, big, and littles are in an Excel spreadsheet. What could I use to emulate the design I did here? Specifically, the stem style and ability to space nodes... | Solved it. from anytree import Node, RenderTree from collections import Counter import os import openpyxl from PIL import Image, ImageDraw, ImageFont import re # Create a directory to store the individual name card images cards_dir = "C:/Users/Chris Fitz/Documents/Fun/Trumpet History/trumpettree/cards" os.makedirs(card... | 2 | 0 |
79,243,117 | 2024-12-2 | https://stackoverflow.com/questions/79243117/moviepy-subtitlesclip-will-not-accept-any-provided-font-python | I am currently trying out moviepy to burn subtitles on a video. However I keep getting the same error message no matter what I do. This is the code I am using: from moviepy import TextClip from moviepy.video.tools.subtitles import SubtitlesClip ... generator = lambda txt: TextClip( text = self.txt, font = self.font_pat... | I was facing the same issue and I managed to solve it. Corrected code from moviepy import TextClip from moviepy.video.tools.subtitles import SubtitlesClip ... generator = lambda txt: TextClip( self.font_path, text = self.txt, font_size = 100, color= self.text_color, stroke_color="black", stroke_width=5, ) subtitles = ... | 2 | 1 |
79,240,178 | 2024-11-30 | https://stackoverflow.com/questions/79240178/python-set-and-get-windows-11-volume | I have this script: from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume from ctypes import cast, POINTER from comtypes import CLSCTX_ALL, CoInitialize, CoUninitialize CLSCTX_ALL = 7 import time def set_windows_volume(value_max_100): devices = AudioUtilities.GetSpeakers() interface = devices.Activate(IAudioEndp... | I have downloaded some tools from https://www.nirsoft.net/ SoundVolumeView.exe svcl.exe nircmd.exe python script: import csv import subprocess import os def find_active_audio_device(): try: subprocess.run("SoundVolumeView.exe /scomma devices_list.csv", shell=True, check=True) if not os.path.exists("devices_list.csv")... | 2 | 0 |
79,249,385 | 2024-12-3 | https://stackoverflow.com/questions/79249385/xlsxwriter-creating-table-formulas-using-structural-references | I am trying to create a table in Excel using XLSX writer where a lot of the data is precomputed, but a few columns need running formulas. I am trying to use structural references (headers as reference) to improve readability of the formulas in the table. However, upon opening the generated file, I get a warning that ... | The structured formula that Excel displays in a Table isn't the formula that it stores. In addition to this Excel has changed the syntax of referring to table elements from [#This Row],[Column] to @[Column] over time but it still uses the former in the stored formula. XlsxWriter tries to account for this and in most ca... | 1 | 2 |
79,246,369 | 2024-12-3 | https://stackoverflow.com/questions/79246369/detect-filled-in-black-rectangles-on-patterned-background-with-python-opencv | I'm trying to detect the location of these filled-in black rectangles using OpenCV. Black rectangles on paper I have tried to find the contours of these, but I think the background lines are also detected as objects. Also the rectangles aren't fully separated (sometimes they touch a corner), and then they are detected ... | As I said in the comments, no need for adaptive thresholding. Simple problems are best solved with simple solutions. A simple threshold would suffice in your case, but I guess you did not want to do that because of the lines? Is there any reason why you went with adaptive thresholding? Here's my approach: Read image ... | 2 | 2
79,248,409 | 2024-12-3 | https://stackoverflow.com/questions/79248409/how-to-conditionally-fill-between-two-line-charts-with-different-colours-using-p | I'm trying to fill with colour the space between two line charts (between col1 and col2) as follows: Desired output: when col1 is above col2, fill with color green. if col1 is under col2, fill with color red. I have tried this: def DisplayPlot(df): month = df['month'].tolist() col1 = df['col1'].tolist() col2 = df['c... | You'll have to understand that the line plot contains segments and you'll have to handle those segments separately to get the desired result. By segments I mean the regions between the lines of col1 and col2. The segments are separated from each other by the intersections of the lines. The following conditions are need... | 2 | 2 |
79,248,880 | 2024-12-3 | https://stackoverflow.com/questions/79248880/networkx-find-largest-fully-connected-subset-of-points | I am new to graph theory and am trying to find the largest subset of x,y coordinate points in which all points in the subset are within a specified distance of one another. I am currently using the nx.k_core(G) function in the following code: import networkx as nx import geopandas as gpd import numpy as np # Initialize... | The K-core is not "the largest subset of x,y coordinate points in which all points in the subset are within a specified distance of one another". It is the maximal subgraph that contains nodes of degree k or more. In networkx's implementation of k_core, the default k=None will yield the subgraph with the largest core n... | 1 | 1 |
79,247,091 | 2024-12-3 | https://stackoverflow.com/questions/79247091/how-to-efficiently-compute-and-process-3x3x3-voxel-neighborhoods-in-a-3d-numpy-a | I am working on a function to process 3D images voxel-by-voxel. For each voxel, I compute the difference between the voxel value and its 3x3x3 neighborhood, apply distance-based scaling, and determine a label based on the maximum scaled difference. However, my current implementation is slow, and I'm looking for ways to... | You could use a sliding_window_view, and vectorize the for loops. One attempt def create_label2(image_input): # Define neighborhood offsets and distances locations = np.array([tuple([dx, dy, dz]) for dx, dy, dz in itertools.product(range(-1, 2), repeat=3)]) euclidian_distances = np.linalg.norm(locations, axis=1) euclid... | 1 | 2 |
79,249,406 | 2024-12-3 | https://stackoverflow.com/questions/79249406/is-it-possible-to-find-the-average-of-a-group-of-numbers-and-return-the-index-of | Specifically: Write a function (in python, java or pseudocode) average(nums) that prints the mean of the numbers in list nums, and returns a tuple: The tuple's first element is the index of the number furthest from the mean. The tuple's second element is the number furthest from the mean. Can this be done with one lo... | Is it possible to find the average of a group of numbers and return the index of the number furthest from the average with one loop? Yes. The number furthest from the mean will be either the largest or the smallest in the sample. With a single pass through the data, you can compute the average and track the indices o... | 1 | 4 |
79,249,078 | 2024-12-3 | https://stackoverflow.com/questions/79249078/efficient-algorithm-that-returns-a-list-of-unique-lists-given-a-list-of-of-lists | Given a python list of lists containing numbers i.e lists = [ [1, 2], [2, 1], [3, 4] ], the problem is to return a list of all unique lists from the input list. A list is considered a duplicate if it can be generated from another list by reordering the items in the list. i.e [2, 1] is a duplicate of [1, 2]. Given the i... | You can get an O(m*n) solution as long as the "key" you are using is O(m*n). This can be accomplished in two ways. If the inner lists can't contain duplicates, then a set of frozen sets is an elegant solution. Note, frozenset(mylist) is O(n): def unique(lists): seen = set() result = [] for sub in lists: key = frozenset... | 4 | 3 |
79,249,060 | 2024-12-3 | https://stackoverflow.com/questions/79249060/is-there-a-portable-way-to-deduce-the-current-python-interpreter | I have a Python script that attempts to run a subprocess that runs the same interpreter as the currently-running interpreter. The subprocess's interpreter needs to be the same executable as the currently running interpreter's. Specifically, it needs to be the correct version even when multiple versions of Python are in... | psutil package claims to be portable across several platforms Can be tested with a little recursive script import psutil import subprocess,os proc = psutil.Process().cmdline() print(f"{os.getpid()} {psutil.Process().exe()} {proc}") # set next command interpreter to the one found proc[0]=psutil.Process().exe() # run tha... | 1 | 1 |
79,248,789 | 2024-12-3 | https://stackoverflow.com/questions/79248789/polars-python-how-to-change-the-number-of-conditions-inputted-when-making-a-ne | I have large datasets (ranging from 100k - 4 million rows) where I am looking for different relevant codes across multiple columns. For example, if I wanted to identify each row which has some start to a string '302' I would do: import polars as pl df = pl.DataFrame({ 'Codes_1': ['302E513', '301E513', '302E512'], 'Code... | You could replace the multiple str.starts_with with a single regex and str.contains: df.with_columns( pl.when(pl.any_horizontal( pl.col(column_names).str.contains(f"^({'|'.join(conditions)})"), )) .then(1.0) .otherwise(0.0) .alias('Column_name') ) Or use a loop: df.with_columns( pl.when(pl.any_horizontal( pl.col(colum... | 3 | 2 |
79,248,296 | 2024-12-3 | https://stackoverflow.com/questions/79248296/pandas-dataframe-combine-cell-values-as-strings | I have a dataframe: Email | Col1 | Col2 | Col3 | Name -------------------------------------------------------------------- john.cena@gmail.com | CellStr11 | 1.4 | CellStr13 | John Cena damian.doe@gmail.com | CellStr11 | 1.2 | CellStr13 | Matt Smith john.smith@gmail.com | CellStr21 | 1.2 | CellStr23 | John Cena I need ... | If your dataframe is df: df['Col2'] = df['Col2'].astype('str') # all the columns must be strings gp_col = df.groupby(["Name"])[['Col1', 'Col2', 'Col3']] \ .agg(lambda x: " / ".join(x)) \ .reset_index() display(gp_col) gives: | 1 | 1 |
79,247,499 | 2024-12-3 | https://stackoverflow.com/questions/79247499/pandas-dataframe-finding-row-comparing-two-cell-values | I have a dataframe: Email | ... | Name -------------------------------------- john.cena@gmail.com | ... | John Cena john.smith@gmail.com | ... | John Cena I need to find a row that matches the Name column with the Email column when email_cell.split("@")[0] == name_cell.lower().replace(" ", ".") I tried dataframe.loc[datafra... | Take the email usernames by splitting on '@', and convert the names to lowercase with spaces replaced by dots: df[df['Email'].str.split('@').str[0] == df['Name'].str.lower().str.replace(' ', '.')] Output Matched rows: Email Name 0 john.cena@gmail.com John Cena 2 randy.orton@gmail.com Randy Orton You can also try apply with la... | 1 | 2 |
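The comparison itself is plain string manipulation; a stdlib-only sketch of the same condition, using sample rows from the question (matches is a hypothetical helper name):

```python
def matches(email: str, name: str) -> bool:
    # "John Cena" -> "john.cena", compared against the part before "@"
    return email.split("@")[0] == name.lower().replace(" ", ".")

rows = [
    ("john.cena@gmail.com", "John Cena"),
    ("john.smith@gmail.com", "John Cena"),
]
matched = [row for row in rows if matches(*row)]
print(matched)  # only the first row satisfies the condition
```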
79,246,676 | 2024-12-3 | https://stackoverflow.com/questions/79246676/plot-contours-from-discrete-data-in-matplotlib | How do I make a contourf plot where the areas are supposed to be discrete (integer array instead of float)? The values should discretely mapped to color indices. Instead matplotlib just scales the result across the whole set of colors. Example: import numpy as np from matplotlib import pyplot as plt axes = (np.linspace... | Now I got it :) Thanks @jared, pcolormesh was the right function, but I have to explicitly map the colors as the plotted variable: import numpy as np from matplotlib import pyplot as plt axes = (np.linspace(-2, 2, 100), np.linspace(-2, 2, 100)) xx, yy = np.meshgrid(*axes, indexing="xy") fig, ax = plt.subplots() z = np.... | 1 | 1 |
79,245,922 | 2024-12-3 | https://stackoverflow.com/questions/79245922/how-to-get-parameter-name-type-and-default-value-of-oracle-plsql-function-body | I have the following PLSQL code which I am processing with Antlr4 in Python. I am having trouble extracting the function parameter name and related details. CREATE OR REPLACE FUNCTION getcost ( p_prod_id IN VARCHAR2 , p_date IN DATE) RETURN number AS The ParseTree output for this: ╚═ sql_script ╠═ unit_statement ║ ╚═ ... | The parameter_name in the context is a function, because the context can contain more than one parameter and the integer argument defines the index of it. So please change your code to this: ctx.parameter_name().identifier() And I wonder why your function has no return value, so for multiple parameters the param_name va... | 1 | 1 |
79,245,886 | 2024-12-2 | https://stackoverflow.com/questions/79245886/how-can-i-efficiently-scan-multiple-remote-parquet-files-in-parallel | Suppose I have urls, a list of s3 Parquet urls (on S3). I observe that this collect_all runs in O(urls). Is there a better way to parallelize this task? import polars as pl pl.collect_all(( pl.scan_parquet(url).filter(expr) for url in urls) )) | Depending on what your expr is and what you're doing next, you might be better off with pl.concat([ pl.scan_parquet(url) for url in urls ]).filter(expr).collect() One difference is that instead of getting a list of distinct dfs this one assumes you want them all combined into one and that they have the same schema. An... | 1 | 1 |
79,245,770 | 2024-12-2 | https://stackoverflow.com/questions/79245770/where-is-scipy-stats-dirichlet-multinomial-rvs | I wanted to draw samples from a Dirichlet-multinomial distribution using SciPy. Unfortunately it seems that scipy.stats.dirichlet_multinomial does not define the rvs method that other distributions use to generate random samples. I think this would be equivalent to the following for a single sample: import scipy.stats ... | Is the above implementation correct? This looks correct to me. Based on the discussion in the PR implementing multinomial, SciPy did implement a bit of code to generate samples from a multinomial Dirichlet, but the code is only part of a test, not a public API. One of the reviewers briefly touches on what you mention... | 2 | 2 |
79,244,847 | 2024-12-2 | https://stackoverflow.com/questions/79244847/scipy-spatial-how-to-use-convexhull-with-points-containing-an-attribute | Having a list of points (x, y), the function ConvexHull() of SciPy.spatial is great to calculate the points that form the hull. In my case, every point (x, y) also has a string as attribute. Is it possible to calculate the hull and returning its points including their respective attribute (x, y, str)? I checked both ht... | I would suggest: hull = ConvexHull([(p[0], p[1]) for p in points_with_attribute]) hull_points = [points_with_attribute[i] for i in hull.vertices] This has the advantage of avoiding an O(N^2) loop, by using the index of the points that has already been found. | 1 | 2 |
79,245,015 | 2024-12-2 | https://stackoverflow.com/questions/79245015/pydantic-objects-as-elements-in-a-polars-dataframe-get-automatically-converted-t | What puzzles me is that when running class Cat(pydantic.BaseModel): name: str age: int cats = [Cat(name="a", age=1), Cat(name="b", age=2)] df = pl.DataFrame({"cats": cats}) df = df.with_columns(pl.lit(0).alias("acq_num")) def wrap(batch): return Cat(name="c", age=3) df = df.group_by("acq_num").agg(pl.col("cats").map_ba... | Polars already infers the dtype at pl.DataFrame: cats = [Cat(name="a", age=1), Cat(name="b", age=2)] df = pl.DataFrame({"cats": cats}) df shape: (2, 1) ┌───────────┐ │ cats │ │ --- │ │ struct[2] │ ╞═══════════╡ │ {"a",1} │ │ {"b",2} │ └───────────┘ df.schema Schema([('cats', Struct({'name': String, 'age': Int64}))]) ... | 1 | 1 |
79,241,319 | 2024-12-1 | https://stackoverflow.com/questions/79241319/autoflake-prints-unused-imports-variables-but-doesnt-remove-them | I'm using the autoflake tool to remove unused imports and variables in a Python file, but while it prints that unused imports/variables are detected, it doesn't actually remove them from the file. Here's the command I'm running: autoflake --in-place --remove-unused-variables portal/reports.py Printed Output: portal/re... | A similar issue was raised here and the OP was able to solve the issue by removing [tool.autoflake] check = true in the pyproject.toml. See Remove unused imports not working in place. | 1 | 1 |
79,244,459 | 2024-12-2 | https://stackoverflow.com/questions/79244459/how-to-use-aggregation-functions-as-an-index-in-a-polars-dataframe | I have a Polars DataFrame, and I want to create a summarized view where aggregated values (e.g., unique IDs, total sends) are displayed in a format that makes comparison across months easier. Here's an example of my dataset: My example dataframe: import polars as pl df = pl.DataFrame({ "Channel": ["X", "X", "Y", "Y", "... | You can aggregate multiple aggregates while pivoting and then explode the lists: ( df.pivot( on="Month", values="ID", aggregate_function= pl.concat_list( pl.element().n_unique().alias("value"), pl.element().len().alias("value") ) ) .with_columns(agg_func=["Uniques ID","Total sends"]) .explode(pl.exclude("Channel")) ) ... | 1 | 1 |
79,243,808 | 2024-12-2 | https://stackoverflow.com/questions/79243808/python-tkinter-grid-column-width-not-expanding-need-first-row-not-scorallable | I have three columns with a scrollbar; they need to expand and stretch to the window size, but I am unable to increase the column width to match the screen size. I also need the top first row to stay fixed, like a heading. The code is below: import tkinter as tk from tkinter import * from tkinter... | If you want to expand the columns to fit the canvas width, you need to: save the item ID of canvas.create_window((0, 0), window=frame3s, ...) set the width of frame3s to the same as that of canvas inside the callback of event <Configure> on canvas using canvas.itemconfig() set weight=1 on column 0 to 2 using frame3s.c... | 1 | 1 |
79,243,356 | 2024-12-2 | https://stackoverflow.com/questions/79243356/how-do-i-convert-a-csv-file-to-an-apache-arrow-ipc-file-with-dictionary-encoding | I am trying to use pyarrow to convert a csv to an apache arrow ipc with dictionary encoding turned on. The following appears to convert the csv to an arrow ipc file: file = "./in.csv" arrowFile = "./out.arrow" with pa.OSFile(arrowFile, 'wb') as arrow: with pa.csv.open_csv(file) as reader: with pa.RecordBatchFileWriter(... | It looks like the IPC protocol only supports unified dictionary encoding. In your example each batch has a different dictionary encoding, which IPC doesn't support. You'd have to load the whole table and call unify_dictionaries. import pyarrow as pa schema = pa.schema([pa.field("col1", pa.dictionary(pa.int32(), pa.str... | 1 | 2 |
79,231,097 | 2024-11-27 | https://stackoverflow.com/questions/79231097/incorrect-syntax-near-cast-when-using-pandas-to-sql | I'm trying to write some code to update an sql table from the values in a pandas dataframe. The code I'm using to do this is: df.to_sql(name='streamlit_test', con=engine, schema='dbo', if_exists='replace', index=False) Using an sqlalchemy engine. However, I'm getting the error: ProgrammingError: (pyodbc.ProgrammingErr... | Microsoft SQL Server offers limited support for using databases that are compatible with earlier versions. For example, SQL Server 2008 creates databases with compatibility level 100 (SQL Server 2008) by default, but it also allows us to use databases that are compatible with SQL Server 2005 - database compatibility le... | 2 | 1 |
79,244,132 | 2024-12-2 | https://stackoverflow.com/questions/79244132/placing-label-next-to-a-slider-handle | I want a label near the slider's handle, that displays the current value. I used the code provided on the forum and tried to adapt it to Python from PyQt5 import QtCore, QtGui, QtWidgets class test(QtWidgets.QWidget): def __init__(self, parent = None): super().__init__(parent) self.main_layout = QtWidgets.QHBoxLayout(s... | With the help of @ekhumoro and @simon, I fixed the problem from PyQt5 import QtCore, QtGui, QtWidgets class test(QtWidgets.QWidget): def __init__(self, parent = None): super().__init__(parent) self.main_layout = QtWidgets.QHBoxLayout(self) self.main_layout.setContentsMargins(0, 0, 0, 0) self.main_layout.setSpacing(0) s... | 2 | 1 |
79,242,541 | 2024-12-1 | https://stackoverflow.com/questions/79242541/in-polars-how-do-you-create-a-group-counter-group-id | How do you get a group_id column like this, grouping by columns col1 and col2 ? col1 col2 group_id A Z 1 A Y 2 A Z 1 B Z 3 based on such a DataFrame : df = pl.DataFrame({ 'col1': ['A', 'A', 'A', 'B'], 'col2': ['Z', 'Y', 'Z', 'Z']} ) In other words, I'm looking for polars equivalent to R data.table .GRP... | If you don't care about "first occurrence has lower rank" then you can just rank combination of col1 and col2 columns. pl.struct() to create one column out of col1, col2. pl.Expr.rank() to assign rank to rows. df.with_columns(group_id = pl.struct("col1","col2").rank("dense")) shape: (4, 3) ┌──────┬──────┬──────────┐... | 3 | 2 |
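If the ids should follow first-occurrence order (as in the question's expected output of [1, 2, 1, 3]) rather than sorted order, the same idea can be sketched in plain Python — group_ids is a hypothetical helper, not a polars API:

```python
def group_ids(rows):
    # Assign each distinct (col1, col2) pair the next integer id,
    # in order of first appearance; len(ids) is read before insertion.
    ids = {}
    return [ids.setdefault(tuple(row), len(ids) + 1) for row in rows]

print(group_ids([("A", "Z"), ("A", "Y"), ("A", "Z"), ("B", "Z")]))  # → [1, 2, 1, 3]
```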
79,244,168 | 2024-12-2 | https://stackoverflow.com/questions/79244168/order-pandas-dataframe-rows-with-custom-order-defined-by-list | I'm trying to order this dataframe in quarterly order using the list sortTo as reference to put it into a table. import pandas as pd # Sample DataFrame data = {'QuarterYear': ["Q1 2024", "Q2 2024", "Q3 2023", 'Q3 2024', "Q4 2023", "Q4 2024"], 'data1': [5, 6, 2, 1, 10, 3], 'data2': [12, 4, 2, 7, 2, 9]} sortTo = ["Q3 202... | Code use key parameter for custom sorting. out = df.sort_values( 'QuarterYear', key=lambda x: x.map({k: n for n, k in enumerate(sortTo)}) ) out: QuarterYear data1 data2 2 Q3 2023 2 2 4 Q4 2023 10 2 0 Q1 2024 5 12 1 Q2 2024 6 4 3 Q3 2024 1 7 5 Q4 2024 3 9 If your data consists of years and quarters, the following co... | 1 | 1 |
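The key trick generalizes beyond pandas: build a position map from the reference list and sort by it. A stdlib sketch with the question's labels:

```python
sort_to = ["Q3 2023", "Q4 2023", "Q1 2024", "Q2 2024", "Q3 2024", "Q4 2024"]
quarters = ["Q1 2024", "Q2 2024", "Q3 2023", "Q3 2024", "Q4 2023", "Q4 2024"]

# Map each label to its position in the desired order, then sort by position.
order = {label: i for i, label in enumerate(sort_to)}
result = sorted(quarters, key=order.__getitem__)
print(result)  # → same sequence as sort_to
```

Using `order.__getitem__` (rather than `order.get`) makes an unknown label fail loudly with a KeyError instead of sorting it silently.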
79,243,345 | 2024-12-2 | https://stackoverflow.com/questions/79243345/linking-python-to-c-something-not-shown-on-the-console-of-c | I have the following in Python: def ask_ollama(messages): """ Function to send the conversation to the Ollama API and get the response. """ payload = { 'model': 'llama3.2:1b', # Model ID 'messages': messages, # Pass the entire conversation history 'stream': False } response = requests.post( "http://localhost:11434/api/... | To put comments into an answer: First of all: you should consider calling the REST API directly from C#. That would probably solve a lot of problems revolving around this issue. But let's say you couldn't for a moment. Then you still have control over both parts: C# and Python. Which means you can do the following: In... | 2 | 1 |
79,233,046 | 2024-11-28 | https://stackoverflow.com/questions/79233046/python-ssl-issue-with-azure-cosmos-db-emulator-in-github-actions | I am trying to make unit tests for my azure functions, written in Python. I have a python file that does some setup (making the cosmos db databases and containers) and I do have a github actions yaml file to pull a docker container and then run the scripts. The error: For some reason, I do get an error when running the... | After bits of puzzling around for a few days I got it to work: jobs: test: runs-on: ubuntu-latest steps: - name: Checkout repository uses: actions/checkout@v3 - name: Start Cosmos DB Emulator run: docker run --detach --publish 8081:8081 --publish 1234:1234 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:vnext-pr... | 2 | 3 |
79,241,291 | 2024-12-1 | https://stackoverflow.com/questions/79241291/how-to-save-a-matplotlib-figure-with-automatic-height-to-pdf | I have the following problem: I want to save a figure with a specific width, but auto-determine its height. Let's look at an example: import matplotlib.pyplot as plt import numpy as np fig,ax=plt.subplots(figsize=(5,5),layout='constrained') x=np.linspace(0,2*np.pi) y=np.sin(x) ax.set_aspect('equal') ax.plot(x,y) plt.sh... | Thanks to RuthC for providing the answer in a comment, the following seems to solve my problem: fig.savefig('test.pdf', format='pdf',bbox_inches='tight', pad_inches='layout') https://matplotlib.org/stable/users/prev_whats_new/whats_new_3.8.0.html#pad-inches-layout-for-savefig | 2 | 2 |
79,238,475 | 2024-11-29 | https://stackoverflow.com/questions/79238475/logging-inheritance-in-python | I am currently developing a core utils package where I want to set some logging properties (I know that this is not best practice, but it's for internal purposes and intended to generate logs). When I import the package, nothing gets logged: # core.__main__.py class BaseLoggerConfig(BaseModel): LOG_FORMAT: str = "%(l... | There are a couple of tweaks that need to be made here. First, the file in which you configure your loggers should not be core/__main__.py, it should be core/__init__.py. __main__.py is used when you want to run python -m core, which would run __main__.py. Second, the fmt key in your formatter config should be called forma... | 1 | 2 |
79,237,327 | 2024-11-29 | https://stackoverflow.com/questions/79237327/how-to-bound-typevar-correctly-to-protocol | I want to type annotate a simple sorting function that receives a list of any values that have either __lt__, __gt__, or both, but not mixed methods (i.e., all values should have the same comparison method) and returns a list containing the same elements it received in sorted order. What I've done so far: from typing i... | Union bound Your first attempt is indeed unsafe. Let's see that: class HasGT: def __init__(self, value: int) -> None: self.value = value def __gt__(self, other: Self) -> bool: return self.value > other.value class HasLT: def __init__(self, value: int) -> None: self.value = value def __lt__(self, other: Self) -> bool: r... | 3 | 1 |
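A runnable sketch of the Protocol-based bound discussed above (SupportsLT and my_sorted are illustrative names, not from the answer):

```python
from typing import Any, Protocol, TypeVar, runtime_checkable

@runtime_checkable
class SupportsLT(Protocol):
    def __lt__(self, other: Any) -> bool: ...

T = TypeVar("T", bound=SupportsLT)

def my_sorted(items: list[T]) -> list[T]:
    # sorted() only ever needs "<" between elements.
    return sorted(items)

print(my_sorted([3, 1, 2]))
print(isinstance(3, SupportsLT))  # structural runtime check: int defines __lt__
```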
79,239,871 | 2024-11-30 | https://stackoverflow.com/questions/79239871/cant-access-tracks-audio-features-using-spotipy-fetching-from-spotify-api | Doing a project in university data science course and I'm working with API-s for the first time. I need to fetch different data about tracks using Spotify API, but I have encountered a problem early on. I can get access to some basic data about tracks popularity, duration, etc but I get 403 error when trying to fetch a... | Spotify deprecated several API endpoints on November 27th 2024. The get-audio-features was one of those endpoints. This in turn affected spotipy as well. | 3 | 6 |
79,239,590 | 2024-11-30 | https://stackoverflow.com/questions/79239590/how-to-declare-the-type-of-the-attribute-of-an-undeclared-variable | I want to declare the type of a variable which I do not declare myself, but which I know exists. Motivation I am currently working with the kivy library, which does a poor job indicating to static type checkers what types its fields have. I would like to indicate the types myself so I can get autocompletions. In the ex... | You should annotate ids with a type for which it is known that it has an attribute task_list_area of type BoxLayout. If kivy doesn't provide such a type, you can create one using typing.Protocol: from typing import Protocol class HasTaskListArea(Protocol): task_list_area: BoxLayout class DetailsScreen(Screen): ids: Has... | 1 | 3 |
79,239,332 | 2024-11-30 | https://stackoverflow.com/questions/79239332/getting-the-uuid-of-an-entries-list-of-a-group-using-pykeepass | I'm looking to get the UUID of the entries of a group with pykeepass. #!/usr/bin/env python3 from pykeepass import PyKeePass kp = PyKeePass('dbtest.kdbx', password='password') kpGroup = kp.find_groups(name='GroupTest', first=True) print(kpGroup.entries) The code will return this: [Entry: "GroupTest/test (test@test.co... | It is the difference between the real content of kpGroup.entries and the brief info which you get with print(kpGroup.entries). So you can obtain any other info from this list of group entries. You wanted this: [(e.group, e.title, e.uuid) for e in kpGroup.entries] You will obtain something like this: [(Group: "... | 2 | 1 |
79,239,232 | 2024-11-30 | https://stackoverflow.com/questions/79239232/how-to-implement-softmax-in-python-whereby-the-input-are-signed-8-integers | I am trying to implement a softmax function that takes in signed int8 input and returns a signed int8 output array. The current implementation I have going is this, import numpy as np def softmax_int8(inputs): inputs = np.array(inputs, dtype=np.int8) x = inputs.astype(np.int32) x_max = np.max(x) x_shifted = x - x_max ... | I think there are different mathematical definitions of softmax in different contexts. Wikipedia definition (on real numbers): exp(z) / sum(exp(z)) What I inferred from your code: (1<<(z-z_max + 16)) / sum((1 << (z-z_max + 16))) or something similar. 1<< === 2** obviously. The major difference is the base number of t... | 4 | 2 |
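For contrast with the fixed-point version, the standard real-valued softmax (the Wikipedia definition the answer cites) can be written with the stdlib only, using max-subtraction for numerical stability:

```python
import math

def softmax(xs):
    # Subtracting the max leaves the result unchanged (the common factor
    # cancels) but keeps exp() from overflowing for large inputs.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([1.0, 2.0, 3.0])
print(probs)  # strictly increasing, sums to 1
```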
79,235,770 | 2024-11-29 | https://stackoverflow.com/questions/79235770/python-how-to-set-dict-key-as-01-02-03-when-using-dpath | Let's say we want to create/maintain a dict with below key structure { "a": {"bb": {"01": "some value 01", "02": "some value 02", } }, } We use dpath .new() as below to do this import dpath d=dict() ; print(d) # d={} dpath.new(d,'a/bb/00/c', '00val') ; print(d) # d={'a': {'bb': [{'c': '00val'}]}} # d # NOTE this will ... | I traced into the source, and noticed that it calls a Creator, documented in the function as: creator allows you to pass in a creator method that is responsible for creating missing keys at arbitrary levels of the path (see the help for dpath.path.set) I copied _default_creator from dpath.segments, and just eliminate... | 1 | 1 |
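The desired behaviour — intermediate segments such as "01" becoming plain string dict keys rather than list indices — can also be sketched without dpath; set_path here is a hypothetical helper, not the dpath API:

```python
def set_path(d, path, value, sep="/"):
    # Create intermediate dicts for every segment except the last;
    # numeric-looking segments like "01" stay ordinary string keys.
    keys = path.split(sep)
    for k in keys[:-1]:
        d = d.setdefault(k, {})
    d[keys[-1]] = value

data = {}
set_path(data, "a/bb/00/c", "00val")
set_path(data, "a/bb/01/c", "01val")
print(data)  # → {'a': {'bb': {'00': {'c': '00val'}, '01': {'c': '01val'}}}}
```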
79,237,559 | 2024-11-29 | https://stackoverflow.com/questions/79237559/azure-function-not-able-to-index-functions | Hi, I have three Azure Functions written in Python. Yesterday I made a minor change to one of them and deployed it. However, as we don't have separate deployments for the three of them, the other two got updated as well. After deploying what I thought would be a small change, all the functions stopped working. When checkin... | Please check your Health History in Service Health -> Health History. A global Azure Functions service issue has been occurring for the last 23 hours. Most probably you are affected as well. | 1 | 2 |
79,237,895 | 2024-11-29 | https://stackoverflow.com/questions/79237895/python-polars-create-pairwise-elements-of-two-lists | In Python Polars I am trying to "group" two lists together, for example in a Pl.Struct, removing the group when one of the elements is Null. For example, given this DataFrame: df = pl.DataFrame({ "Movie": [[None, "IT",]], "Count": [[30, 27]], }) shape: (1, 2) ┌──────────────┬───────────┐ │ Movie ┆ Count │ │ --- ┆ --- │... | df.with_columns( pl.struct( pl.col("Movie", "Count").list.gather( pl.col("Movie").list.eval(pl.element().is_not_null().arg_true()) ) .list.first() ) .alias("data") ) shape: (1, 3) ┌──────────────┬───────────┬───────────┐ │ Movie ┆ Count ┆ data │ │ --- ┆ --- ┆ --- │ │ list[str] ┆ list[i64] ┆ struct[2] │ ╞══════════════... | 1 | 1 |
79,229,633 | 2024-11-27 | https://stackoverflow.com/questions/79229633/an-iterative-plot-is-not-drawn-in-jupyter-notebook-while-python-works-fine-why | I have written iterative plotting code for Python using plt.draw(). While it works fine in the Python interpreter and also in IPython, it does not work in Jupyter Notebook. I am running Python in a virtualenv on Debian. The code is: import matplotlib.pyplot as plt import numpy as np N=5 x = np.arange(-3, 3, 0.01) fig = plt... | Jupyter Notebook handles plot display differently. If you’re able to modify the code, you can try the following approach: %matplotlib widget import matplotlib.pyplot as plt import numpy as np from matplotlib.animation import FuncAnimation # Parameters for the plot N = 5 x = np.arange(-3, 3, 0.01) # Create a figure and ... | 2 | 3 |
79,235,819 | 2024-11-29 | https://stackoverflow.com/questions/79235819/how-to-access-the-last-10-rows-and-first-two-columns-of-a-dataframe | I was given a file to practice pandas on, and was asked this question: Q: Access the last 10 rows and the first two columns of the index dataframe. So, I tried this code: df = index[(index.tail(10)) & (index.iloc[:, :2])] df but it gave me an error. How do I access the last 10 rows and first two columns of a dataframe... | One way is to try with .iloc df = index.iloc[-10: , :2] -10: takes last 10 rows. Note the negative and look more into how iloc (section Indexing both axes). :2 after comma takes first two column i.e. 0 and 1. | 1 | 3 |
79,231,593 | 2024-11-27 | https://stackoverflow.com/questions/79231593/return-two-closest-rows-above-and-below-a-target-value-in-polars | I'm trying to figure out the most elegant way in Polars to find the two bracketing rows (first above and first below) a specific target. Essentially the Min > 0 & the Max < 0. data = { "strike": [5,10,15,20,25,30], "target": [16] * 6, } df = (pl.DataFrame(data) .with_columns( diff = pl.col('strike') - pl.col('target'))... | data = {"strike": [5,10,15,20,25,30]} target = 16 diff = pl.col("strike") - target ( pl.DataFrame(data) # diff and target can be added as columns, but this is not actually needed # .with_columns(target=target, diff=diff) # filter for where the diff equals the min above 0, or the max below 0 .filter( (diff == diff.filte... | 1 | 2 |
79,235,198 | 2024-11-28 | https://stackoverflow.com/questions/79235198/pandas-merge-elegant-way-to-deal-with-filling-and-dropping-columns | Assume we have two data frames with columns as follows: df1[['name', 'year', 'col1', 'col2', 'col3']] df2[['name', 'year', 'col2', 'col3', 'col4']] I want to merge df1 and df2 by name and year, keeping all values of col2 and col3 from df1; if a value is None, use the value from df2. I know how to do this ... | Assuming unique combinations of name/year, you could concat and groupby.first: out = pd.concat([df1, df2]).groupby(['name', 'year'], as_index=False).first() For a more generic merge, you could perform two merges, excluding the common, non-key, columns, then combine_first: cols = ['name', 'year'] common = df1.columns.i... | 1 | 1 |
79,234,492 | 2024-11-28 | https://stackoverflow.com/questions/79234492/flattening-pandas-columns-in-a-non-trivial-way | I have a pandas dataframe which looks like the following: site pay delta over under phase a a b ID D01 London 12.3 10.3 -2.0 0.0 -2.0 D02 Bristol 7.3 13.2 5.9 5.9 0.0 D03 Bristol 17.3 19.2 1.9 1.9 0.0 I'd like to flatten the column multiindex so the columns are ID site a b delta over under D01 London 12.3 10.3 -2.0 0.... | As the above answer mentioned, the requirement is not clear, so I have tried out just the flattening of the data; let us know. import pandas as pd data = { 'index': ['D01', 'D02', 'D03'], 'columns': [('site', 'a'), ('pay', 'a'), ('pay', 'b'), ('delta', ''), ('over', ''), ('under', '')], 'data': [['London', 12.3, 10.3, -2.0,... | 1 | 2 |
79,233,300 | 2024-11-28 | https://stackoverflow.com/questions/79233300/generate-multiple-disjunct-samples-from-a-dataframe | I am doing some statistic of a very large dataframe that takes sums of multiple random samples. I would like the samples to be disjuct (no number should be present in two different samples). Minimal example that might use some numbers multiple times: import polars as pl import numpy as np df = pl.DataFrame( {"a": np.ra... | You can sample them all at once using with_replacement = False (which is default) and then aggregate into N_samples sums: ( df .sample(N_samples * N_logs) .group_by(pl.int_range(pl.len()) // N_logs) .sum() .get_column("a") ) shape: (50,) Series: 'a' [f64] [ 9.993712 10.667377 9.983055 7.092786 10.780031 … 9.384218 8.5... | 2 | 1 |
79,232,831 | 2024-11-28 | https://stackoverflow.com/questions/79232831/how-to-use-django-querystring-with-get-form | In Django 5.1 {% querystring %} was added. Is there some way to use it with GET form? For example, let's say we have template with: <span>Paginate by:</span> <a href="{% querystring paginate_by=50 %}">50</a> {# ... #} <form method="GET"> <input name="query" value="{{ request.GET.query }}"> <button type="submit">Search<... | Is there some way to use it with GET form? No., the {% querystring … %} template tag [Django-doc] is used to generate querystrings. This is not what a form element is supposed to do. If you add a querystring at the end of the action=".." [mdn-doc] for a form that makes a GET request, then the browser will normally st... | 4 | 3 |
79,233,242 | 2024-11-28 | https://stackoverflow.com/questions/79233242/how-to-get-relative-frequencies-from-pandas-groupby-with-two-grouping-variables | Suppose my data look as follows: import datetime import pandas as pd df = pd.DataFrame({'datetime': [datetime.datetime(2024, 11, 27, 0), datetime.datetime(2024, 11, 27, 1), datetime.datetime(2024, 11, 28, 0), datetime.datetime(2024, 11, 28, 1), datetime.datetime(2024, 11, 28, 2)], 'product': ['Apple', 'Banana', 'Banana... | You can use a crosstab with normalize: ct = pd.crosstab(df['datetime'].dt.normalize(), df['product'], normalize='index') Output: product Apple Banana datetime 2024-11-27 0.500000 0.500000 2024-11-28 0.333333 0.666667 As a graph: ct.plot.bar() Output: | 2 | 1 |
79,233,050 | 2024-11-28 | https://stackoverflow.com/questions/79233050/django-model-has-manytomany-field-how-to-get-all-ids-without-fetching-the-objec | I have a data structure like this: class Pizza(models.Model): name = models.CharField(max_length=100) toppings = models.ManyToManyField(Topping, related_name="pizzas") class Topping(models.Model): name = models.CharField(max_length=100) And to get all topping IDs related to a pizza I can do this: list(map(lambda t: t.... | Use .values_list() methods to get only the IDs as tuples: topping_ids = pizza.toppings.all().values_list("id") # Example of result ( (4), (5), (12), (54) ) | 2 | 2 |
79,232,452 | 2024-11-28 | https://stackoverflow.com/questions/79232452/perform-a-binary-op-on-values-in-a-pandas-dataframe-column-by-a-value-in-that-sa | Sorry for the mouthful title. I think this is best illustrated by an example. Let's say we have an item that has different rarity levels, all of which have dfferent prices in different shops. I want to know how much more expensive a given rarity is than the base "normal" rarity in each shop separately. How could I add ... | Here's one approach, assuming a single 'normal' price per item / shop. Data used # adding another item, and deleting 1 shop import pandas as pd data = {'item': {0: 'bread', 1: 'bread', 2: 'bread', 3: 'bread', 4: 'bread', 5: 'bread'}, 'quality': {0: 'normal', 1: 'rare', 2: 'legendary', 3: 'normal', 4: 'rare', 5: 'legend... | 1 | 2 |
79,232,565 | 2024-11-28 | https://stackoverflow.com/questions/79232565/app-registration-error-aadsts500011-show-tenant-is-as-domain-instead-of-long-str | I've tried numerous times to register an app and connect to in in python: app_id = '670...' tenant_id = '065...' client_secret_value = 'YJr...' import requests import msal authority = f'https://login.microsoftonline.com/{tenant_id}' scopes = ['https://analysis.microsoft.net/powerbi/api/.default'] app = msal.Confidentia... | The error occurred as you are using wrong scope value to generate access token for Power BI API. Initially, I too got same error when I tried to generate token with scope as https://analysis.microsoft.net/powerbi/api/.default like this: Make sure to use https://analysis.windows.net/powerbi/api/.default/ as scope value... | 1 | 2 |
79,231,405 | 2024-11-27 | https://stackoverflow.com/questions/79231405/no-enum-for-numpy-uintp | I am trying to wrap a C pointer array of type size_t with a numpy ndarray via Cython using the following: cimport numpy as cnp from libcpp.vector cimport vector cnp.import_array() cdef size_t num_layers = 10 cdef vector[size_t] steps_taken_vec = vector[size_t]() steps_taken_vec.resize(3 * num_layers) cdef size_t* steps... | The easiest solution is probably to declare NPY_UINTP yourself. cdef extern from *: cdef enum: NPY_UINTP This just tells Cython that some enum-like constant called NPY_UINTP exists. You can pass that to PyArray_SimpleNewFromData. It doesn't matter that it's not from the "official" Numpy pxd file. As I said in a comm... | 1 | 2 |
79,230,796 | 2024-11-27 | https://stackoverflow.com/questions/79230796/skipping-empty-row-and-fixing-length-mismatch | I am working with an Excel data set that looks like this Col 1 Col 2 Col 3 Col 4 Col 5 Col 6 Col 7 Col 8 1 2 A\n \nB C\n \nD E\n \nF G\n \nH I\n \nJ K\n \nL 3 4 5 6 a\n \nb c e\n \nf g\n \nh i\n \nj k\n \nl I want to use Col 3 to determine how many new rows need to be created, and then split Col 3 t... | Assuming this input: df = pd.DataFrame({'Col 1': [1, 3, 5], 'Col 2': [2, 4, 6], 'Col 3': ['A\\n \\nB', None, 'a\\n \\nb'], 'Col 4': ['C\\n \\nD', None, 'c'], 'Col 5': ['E\\n \\nF', None, 'e\\n \\nf'], 'Col 6': ['G\\n \\nH', None, 'g\\n \\nh'], 'Col 7': ['I\\n \\nJ', None, 'i\\n \\nj'], 'Col 8': ['K\\n \\nL', None, 'k\\... | 3 | 3 |
79,209,183 | 2024-11-20 | https://stackoverflow.com/questions/79209183/problem-with-mismatched-length-when-using-a-mask | I'm writing a code and I have a function that calculates the values that are not fulfilling a condition with the values that are fulfilling the condition, but I'm having a lot of trouble with managing the shape of the arrays. I have a similar function, but with other logical structure that does this (MWE for the functi... | I believe you're trying to use the Taylor series to approximate the function at a position beyond the last known value. The function values are stored in the var array, and the first derivatives are in the dvar array. It seems like you're using mask to identify the last known function value, and h represents the argume... | 2 | 0 |
79,224,765 | 2024-11-25 | https://stackoverflow.com/questions/79224765/ansible-python-interpreter-fallback-is-not-working | I have a playbook with a mix of connection: local tasks and remote tasks that use an AWS dynamic inventory. The Python interpreter has different paths on local and remote systems. Through another question python3 venv - how to sync ansible_python_interpreter for playbooks that mix connection:local and target system, I ... | You cannot mix and match a dynamic inventory and a "regular" (YAML) inventory in the same file like that, you will have to create two different inventory files. The reason being the inventory is inferred by Ansible when it loads the inventory files. You can actually realise this if you run Ansible in verbose mode (usin... | 2 | 2 |
79,215,387 | 2024-11-22 | https://stackoverflow.com/questions/79215387/extract-the-original-paper-links-from-google-scholar-author-profiles | I put together the following python code to get the links of the papers published by a random author (from google scholar): import requests from bs4 import BeautifulSoup as bs import pandas as pd def fetch_scholar_links_from_url(url: str) -> pd.DataFrame: pd.set_option('display.max_columns', None) pd.set_option('displa... | TL;DR: I know you expect Python code but Google blocked most of my attempts so to get the job done I used a combination of other tools and python. Scroll down to the bottom to view the results. Also, I'm on macOS and you might not have the following tools: - curl - jq - ripgrep - awk - sed The final python script uses... | 1 | 3 |
79,226,797 | 2024-11-26 | https://stackoverflow.com/questions/79226797/how-to-build-a-cffi-extension-for-inclusion-in-binary-python-wheel-with-uv | I am migrating a library of mine to use uv to manage the package. My package, however, includes a C extension that is wrapped and compiled through CFFI (there's a build script for this). Below is the current version of pyproject.toml that fails to run the build script and, If I build the extension manually, uv build --... | After a few days, bumping my head against the lack of specific documentation about this, here is the final form of a pyproject.toml that works : [project] authors = [ { name = "Flavio Codeco Coelho", email = "fccoelho@gmail.com" }, { name = "Sandro Loch", email = "es.loch@gmail.com" }, ] license = { text = "AGPL-3.0" }... | 3 | 1 |
79,219,057 | 2024-11-23 | https://stackoverflow.com/questions/79219057/how-to-efficiently-represent-a-matrix-product-with-repeated-elements | I have a tensor a that is of shape (n/f, c, c) that I want to multiply by another tensor b of shape (n, c, 1). Each row of a represents f rows of b, such that the naiive way of implementing this would be to simply repeat each row of a f times before performing the multiplication: n = 100 c = 5 f = 10 a = tf.constant(np... | We can do this by reshaping tensors and utilizing broadcasting, We can perform matrix multiplication more efficiently by eliminating the need for explicit repetition. import tensorflow as tf import numpy as np n = 100 c = 5 f = 10 a = tf.constant(np.random.rand(n // f, c, c)) b = tf.constant(np.random.rand(n, c, c)) #R... | 3 | 2 |
79,223,097 | 2024-11-25 | https://stackoverflow.com/questions/79223097/settings-py-cannot-find-psycopg2-squlite3-and-postgresql-problem-no-such-t | I'm trying to deploy a simple app (learning log from Python Crash Course) to heroku. The app runs but upon login, I get a 500 error, and in debug=true the error is: no such table: auth_user. I realise this has something to do with squlite3 and Postgresql, but when I try to build the code in settings: DATABASES = { 'def... | You have not correctly configured your DATABASES in settings.py. You are attempting to use the default sqlite3 configuration with a PostgreSQL engine which wouldn't work. Also Heroku does not work well with sqlite3 and it wouldn't be a good idea to use it in production. See Why are my file uploads missing/deleted from ... | 1 | 2 |
79,223,478 | 2024-11-25 | https://stackoverflow.com/questions/79223478/era5-pressure-values-on-theta-levels | How do I interpret the Pressure values in this dataset? I downloaded ERA5 Potential Vorticity and Pressure(Monthly means of Daily means, analysis, Potential Temperature levels). The pressure data doesn’t make sense to me. Please see the attached image. The potential vorticity data makes sense. It is a 2D grid with the ... | So if you convert your grib file to netcdf like this cdo -f nc4 copy test.grb test.nc4 and look at the header with ncdump ncdump -h test.nc4 you will see that the PV field (var60) is retrieved on an unstructured reduced Gaussian grid and the pressure field (var54) is a spectral field, that's why you have the zeros as... | 1 | 3 |
79,221,167 | 2024-11-24 | https://stackoverflow.com/questions/79221167/blip2-type-mismatch-exception | I'm trying to create an image captioning model using hugging face blip2 model on colab. My code was working fine till last week (Nov 8) but it gives me an exception now. To install packages I use the following command: !pip install -q git+https://github.com/huggingface/peft.git transformers bitsandbytes datasets To lo... | I had the same issue. You need to add a prompt in the processor: prompt = " " inputs = processor(images=image, text=prompt, return_tensors="pt").to(device="cuda", dtype=torch.float16) Hope it helps. | 2 | 1 |
79,227,338 | 2024-11-26 | https://stackoverflow.com/questions/79227338/case-insensitive-uniqueconstraint-using-sqlalchemy-with-postgresql-db | I am using SQLAlchemy and PostgreSQL, and I am trying to create a case-insensitive unique constraint that works like this UniqueConstraint( 'country', 'organisation_id', func.lower(func.trim('name')), name='uq_country_orgid_lower_trim_name' ) Ensuring a unique combination of name, country and organisation id, regardl... | This is mostly the ORM version of @snakecharmerb and avoids the use of text (nothing wrong with sanitized text, it is just a preference). This answer suggests using a unique index, instead of unique constraint because SQLA lacks support. from sqlalchemy import create_engine, func, Index, select, column from sqlalchemy.... | 1 | 2 |
79,226,508 | 2024-11-26 | https://stackoverflow.com/questions/79226508/pyspark-groupeddata-chain-several-different-aggregation-methods | I am playing with GroupedData in pyspark. This is my environment. Welcome to ____ __ / __/__ ___ _____/ /__ _\ \/ _ \/ _ `/ __/ '_/ /___/ .__/\_,_/_/ /_/\_\ version 3.5.1 /_/ Using Scala version 2.12.18, OpenJDK 64-Bit Server VM, 11.0.24 Branch HEAD https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql... | I do not think that this is possible. Looking at the source of GroupedData, we see that all functions like avg, max, min and sum return a DataFrame, so chaining is not possible. | 1 | 1 |
79,225,773 | 2024-11-26 | https://stackoverflow.com/questions/79225773/in-polars-what-is-the-correct-equivalent-code-for-row-number-overpartition-b | I am trying to refactor (translate) a given SQL query to python script using polars library. I am stuck in one line of query where ROW_NUMBER() function is used followed by OVER(PARTITION BY) function. Below is the table schema: product_id (INTEGER) variant_id (INTEGER) client_code (VARCHAR) transaction_date (DATE) cu... | You could use pl.Expr.rank() but it is applied to one pl.Expr/column. You can, of course, create this column out of sequence of columns with pl.struct() and rank it: partition_by_keys = ["product_id", "variant_id", "store_id", "customer_id", "client_code"] order_by_keys = ["transaction_date", "invoice_id", "invoice_lin... | 4 | 1 |
79,227,254 | 2024-11-26 | https://stackoverflow.com/questions/79227254/shift-column-in-dataframe-without-deleting-one | Here is my dataframe: A B C First row to delete row to shift Second row to delete row to shift And I want this output : A B C First row to shift Second row to shift I tried this code : df.shift(-1, axis=1) A B C row to delete row to shift row to delete row to shift The fact is, i... | Be explicit, chose the columns to affect and reassign (or update): df[['B', 'C']] = df[['B', 'C']].shift(-1, axis=1, fill_value='') Or: cols = ['B', 'C'] df[cols] = df[cols].shift(-1, axis=1, fill_value='') # or # df.update(df[cols].shift(-1, axis=1, fill_value='')) Output: A B C 0 First row to shift 1 Second row to... | 3 | 3 |
79,226,735 | 2024-11-26 | https://stackoverflow.com/questions/79226735/pandas-replace-and-downcasting-deprecation-since-version-2-2-0 | Replacing strings by numerical values used to be easy, but since pandas 2.2. the simple approach below throws a warning. What is the "correct" way to do this now? >>> s = pd.Series(["some", "none", "all", "some"]) >>> s.dtypes dtype('O') >>> s.replace({"none": 0, "some": 1, "all": 2}) FutureWarning: Downcasting behavio... | When you run: s.replace({"none": 0, "some": 1, "all": 2}) The dtype of the output is currently int64, as pandas inferred that the values are all integers. print(s.replace({"none": 0, "some": 1, "all": 2}).dtype) # int64 In a future pandas version this won't happens anymore automatically, the dtype will remain object ... | 3 | 3 |
79,226,130 | 2024-11-26 | https://stackoverflow.com/questions/79226130/gps-tracking-streamlit-in-mobile-device | I'm running a Streamlit app where I try to retrieve the user's geolocation in streamlit. However, when using geocoder.ip("me"), the coordinates returned are 45, -121, which point to Oregon, USA, rather than my actual location. This is the function I use: def get_lat_lon(): # Use geocoder to get the location based on IP... | Consider the following simple Streamlit app: import streamlit as st import requests import geocoder from typing import Optional, Tuple def get_location_geocoder() -> Tuple[Optional[float], Optional[float]]: """ Get location using geocoder library """ g = geocoder.ip('me') if g.ok: return g.latlng[0], g.latlng[1] return... | 1 | 2 |
79,225,406 | 2024-11-26 | https://stackoverflow.com/questions/79225406/how-do-you-express-the-identity-expression | How do you express the identity expression in Polars? By this I mean the expression idexpr that when you do lf.filter(idexpr) you get the entirety of lf. Similar to SELECT(*) in SQL. I'm resorting to a logical expression like idexpr = (pl.col("a") == 0) | (pl.col("a") != 0) | According to Documentation, what's passed to filter as a predicate needs to be an "Expression(s) that evaluates to a boolean Series." This you already know, since you are passing a logical expression to circumvent it. Easily enough, there's a very simple expression that always evaluates to true: pl.lit(True) or just Tr... | 4 | 4 |
79,225,757 | 2024-11-26 | https://stackoverflow.com/questions/79225757/how-to-replace-the-deprecated-webelement-get-attribute-function-in-selenium-4 | On some web pages, due to rendering issues (e.g., hidden element), WebElement.text may not reveal the underlying text whereas WebElement.get_attribute("textContent") will. Therefore I have written the following utility function: from selenium.webdriver.remote.webelement import WebElement def text(e: WebElement) -> str:... | Further research reveals that I can use WebElement.get_property() This mean a slight change to the utility function because this function can return any one of str | bool | WebElement | dict. Therefore, it becomes: from selenium.webdriver.remote.webelement import WebElement def text(e: WebElement) -> str | bool | dict ... | 1 | 1 |
79,218,845 | 2024-11-23 | https://stackoverflow.com/questions/79218845/altair-chart-mark-text-background-color-visual-clarity | I've been playing with Altair charts in the context of Streamlit for making simple calculators. In one case I wanted to plot 3 bar charts next to each other to make an explainer for how progressive tax brackets work. In one column is the total income, in the next is a bar drawn between the start of the bracket and the ... | In addition to Joel's suggestion of using a background box, I have also had good results adding a copy of the text with a stroke behind the main text. This automatically adjusts the background to the right size. import altair as alt import pandas as pd # Example data data = pd.DataFrame({ 'category': ['A', 'B', 'C', '... | 3 | 3 |