question_id | creation_date | link | question | accepted_answer | question_vote | answer_vote |
|---|---|---|---|---|---|---|
79,218,490 | 2024-11-23 | https://stackoverflow.com/questions/79218490/problem-with-recognizing-single-cell-tables-in-pdfplumber | I have a sample medical report and on top of each page in the pdf there is a table that contains personal information. I have been trying to remove/crop the personal information table from that sample_pdf on all pages by finding the layout values of the table. I am new to pdfplumber and not sure if that's the right appr... | Understanding the Issue pdfplumber 0.11.4 The issue arises because pdfplumber filters out tables with a single cell. This behavior is controlled by the following line in the library's source code: # File: pdfplumber/table.py def cells_to_tables(cells: List[T_bbox]) -> List[List[T_bbox]]: ... filtered = [t for t in _sor... | 3 | 1 |
79,224,478 | 2024-11-25 | https://stackoverflow.com/questions/79224478/how-can-i-enforce-a-minimum-age-constraint-and-manage-related-models-in-django | I am working on a Django project where I need to validate a model before saving it, based on values in its related models. I ran into this issue while extracting an app from a project using an old Django version (3.1) into a separate Django 5.1 project, and then the error "ValueError: 'Model...' instance needs to have... | You can add a MinValueValidator [Django-doc]: from django.core.validators import MinValueValidator class Guest(models.Model): name = models.CharField(max_length=255) age = models.PositiveIntegerField( validators=[ MinValueValidator(18, 'All guests must be at least 18 years old.') ] ) reservation = models.ForeignKey( Re... | 4 | 3 |
79,223,802 | 2024-11-25 | https://stackoverflow.com/questions/79223802/how-do-i-get-custom-colors-using-color-chooser-from-tkinter | In the pop up of the askcolor function of class Chooser you can add colors to a custom colors list. I was wondering if I can somehow get the values of those colors, but I could not find a way to do it. Code for the pop up: from tkinter import colorchooser color_code = colorchooser.askcolor() Pop up screen shot: | The function tkinter.colorchooser.askcolor() doesn't directly provide access to the custom colors list in its popup. The function's main role is to return the selected color. The closest you can get is keeping track of the colors by creating a list (custom_colors = []) and then every time color_code = colorchooser.askc... | 1 | 2 |
79,219,311 | 2024-11-24 | https://stackoverflow.com/questions/79219311/async-python-function-as-a-c-callback | I'm in a situation where I need to use a C library that calls a python function for IO purposes, so I don't have to create a C equivalent for the IO. The python code base makes extensive use of asyncio and all the IO goes through queues, and the C code is loaded as a dll using ctypes. The problem is you can't await from ... | From a bit of searching, I believe async def python_callback(test_num): await outgoing_queue.put(test_num) test_num = await incoming_queue.get() return test_num can be wrapped with something like loop = None # No loop has been created initially def non_async_callback(test_num): global loop if not loop: # Only create a... | 1 | 1 |
79,221,744 | 2024-11-25 | https://stackoverflow.com/questions/79221744/ipopt-solution-by-gekko-in-comparison-to-grg-algorithm-used-within-the-excel-sol | The aim is to compute the thermodynamically equilibrated composition of the mixture at 1000K based on Gibbs' energy of formation of the reaction products and educts (steam+C2H6 in a molar ratio of 4:1) for the ethane steam gasification reaction, as follows: from gekko import GEKKO m = GEKKO(remote=True) x = m.Array(m.... | The variables lamda1, lamda2, lamda3, summe are defined twice and the equations associated with those variables are only used to initialize the second definition of the variables. H2,H2O,CO,O2,CO2,CH4,C2H6,C2H4,C2H2,lamda1,lamda2,lamda3,summe= x summe = m.Var(H2 + O2 + H2O + CO + CO2 + CH4 + C2H6 + C2H4 + C2H2) lamda2 ... | 3 | 1 |
79,221,652 | 2024-11-25 | https://stackoverflow.com/questions/79221652/polars-how-to-extract-last-non-null-value-on-a-given-column | I'd like to perform the following: Input: df = pl.DataFrame({ "a": [1,15,None,20,None] }) Output: df = pl.DataFrame({ "a": [1,15,None,20,None], "b": [0,14,None,5,None] }) That is, from: A 1 15 None 20 None to: A B 1 0 15 14 None None 20 5 None None So, what it does: If the value o... | You can use a combination of shift and forward_fill to get the last non-null value. So with your input, this looks like df = pl.DataFrame({ "a": [1, 15, None, 20, None] }) df.with_columns( # current row value for "a" minus the last non-null value # as the first row has no previous non-null value, # fill it with the fir... | 4 | 2 |
79,221,813 | 2024-11-25 | https://stackoverflow.com/questions/79221813/snakemake-how-to-implement-function-using-wildcards | I am trying to use snakemake to output some files from a specific job. Basically I have different channels of a process that span different mass ranges. Depending on the {channel, mass} pair, I have to run jobs for different values of "norms". I wanted to do this using: import numpy as np import pickle as pkl pa... | Rule all cannot have wildcards (i.e. its Wildcards object doesn't have attributes representing wildcard values). This comes from the fact that it doesn't have an output "consumed" by downstream rules. Wildcards are determined by matching patterns in output sections of rules with files required in the input of downstrea... | 2 | 1 |
79,212,337 | 2024-11-21 | https://stackoverflow.com/questions/79212337/vscode-add-custom-autocomplete-to-known-external-classes | I'm working with Python in an environment where there are some classes which are defined externally (as in, I don't have access to the files where these classes are defined). So I can import these classes and use them, but since VSCode can't resolve the import, there's no autocomplete for them. What I would want: a way... | Autocomplete and IntelliSense are provided for all files within the current working folder in VSCode. They're also available for Python packages that are installed in standard locations. To enable IntelliSense for packages that are installed in non-standard locations, add those locations to the python.autoComplete.extr... | 1 | 2 |
79,220,668 | 2024-11-24 | https://stackoverflow.com/questions/79220668/is-it-possible-to-teach-toml-kit-how-to-dump-an-object | I am generating TOML files with several tables using TOML Kit without any problem in general. So far all the values were either strings or numbers, but today I first bumped into a problem. I was trying to dump a pathlib.Path object and it fails with a ConvertError Unable to convert an object of <class 'pathlib.WindowsP... | You want https://tomlkit.readthedocs.io/en/latest/api/#tomlkit.register_encoder Which you would use like this: from pathlib import Path from typing import Any import tomlkit from tomlkit.items import Item, String, ConvertError class PathItem(String): def unwrap(self) -> Path: return Path(super().unwrap()) def path_enco... | 1 | 2 |
79,220,482 | 2024-11-24 | https://stackoverflow.com/questions/79220482/how-to-get-information-out-of-onetoonefield-and-use-this-information-in-admin | I am making my first site; this is my first big project, so I am stuck on one issue and can't solve it. Here is the part of my code with the error: models.py ... class GField(models.Model): golf_club_name = models.CharField( primary_key=True, max_length=100, help_text='Enter a golf club (Gfield) name', ) golf_field_par = models.Positiv... | Define a property [python-doc]: class PlayerInstance(models.Model): # … golf_field_playing = models.OneToOneField( GField, default='for_nothoing', on_delete=models.RESTRICT, help_text='where is it taking part?', ) now_at_hole = models.IntegerField() today = models.IntegerField() R1 = models.IntegerField() R2 = models.I... | 2 | 2 |
79,217,465 | 2024-11-23 | https://stackoverflow.com/questions/79217465/odoo-api-returns-invalid-credentials-even-with-correct-username-and-password | I’m working with Odoo (version 15) and trying to implement a login API. I have created a controller that checks the username and password, but it always returns "Invalid credentials" even when I use the correct login information. Here’s my code: # -*- coding: utf-8 -*- from odoo import http class TestApi(http.Controll... | The issue is that the original code expected data in kwargs, which only works for query parameters or form-encoded data. Since the request sent contains a JSON payload in the request body, the username and password were not being retrieved. The modified code resolves this by explicitly parsing the JSON data from the re... | 2 | 2 |
79,219,211 | 2024-11-24 | https://stackoverflow.com/questions/79219211/matplotlib-polar-chart-not-showing-all-xy-ticks | Issue 1: The x-ticks (pie pieces) aren't ordered from 0 to 24 (my bad, should be 1) Issue 2: all y-ticks (rings) aren't showing Issue 3: Someone seems to have eaten a part of the polar chart ... I expect to see 31 rings, and 24 "pie pieces". | import matplotlib.pyplot as plt import numpy as np fig = plt.figure() ax = fig.add_subplot(111, projection='polar') ax.set_xticks(np.arange(1, 25) * np.pi / 12) ax.set_xticklabels([str(number) for number in range(1, 25)]) ax.set_yticks(range(1, 32)) ax.set_yticklabels([str(number) for number in range(1, 32)]) ax.grid(T... | 1 | 1 |
79,218,266 | 2024-11-23 | https://stackoverflow.com/questions/79218266/is-there-a-way-to-return-the-highest-value-in-excel | The Problem I'm working directly on an Excel sheet with the Python extension for Excel and I'm trying to find out which is the highest number in a list of cells. This is the code for the function that I wrote: def getMax(celle_voti_lista) -> int: valori = [v for v in celle_voti_lista if isinstance(v, (int, float))] retu... | Try something like this: from openpyxl import load_workbook def xl(range_str, file_path, sheet_name): wb = load_workbook(file_path) sheet = wb[sheet_name] start_cell, end_cell = range_str.split(":") start_row, start_col = int(start_cell[1:]), start_cell[0].upper() end_row, end_col = int(end_cell[1:]), end_cell[0].upper... | 2 | 0 |
79,218,262 | 2024-11-23 | https://stackoverflow.com/questions/79218262/filter-pandas-dataframe-by-multiple-thresholds-defined-in-a-dictionary | I want to filter a DataFrame against multiple thresholds, based on the ID's prefix. Ideally I'd configure these thresholds with a dictionary e.g. minimum_thresholds = { 'alpha': 3, 'beta' : 5, 'gamma': 7, 'default': 4 } For example: data = { 'id': [ 'alpha-164232e7-75c9-4e2e-9bb2-b6ba2449beba', 'alpha-205acbf0-64ba-40... | Another possible solution, whose steps are: First, the id column is split at each hyphen using the str.split method, extracting the first part of each split with str[0]. Then, the resulting first parts are mapped to their corresponding threshold values using the map function, referencing the thresholds dictionary. If... | 1 | 2 |
79,216,975 | 2024-11-23 | https://stackoverflow.com/questions/79216975/how-to-uninstall-specific-opencv | I am getting an error on running cv2.imshow() cv2.imshow("Image", image) cv2.error: OpenCV(4.9.0) /io/opencv/modules/highgui/src/window.cpp:1272: error: (-2:Unspecified error) The function is not implemented. Rebuild the library with Windows, GTK+ 2.x or Cocoa support. If you are on Ubuntu or Debian, install libgtk2.0... | Your problem isn't the way the package was installed. Your problem is that you installed a headless package when you didn't want that. Run pip3 list. Look for your headless OpenCV. That step is optional but you'll learn something. Remove the headless package with pip3 uninstall opencv-python-headless. Install a regu... | 1 | 3 |
79,216,200 | 2024-11-22 | https://stackoverflow.com/questions/79216200/linear-interpolation-lookup-of-a-dataframe | I have a dataframe using pandas, something like: d = {'X': [1, 2, 3], 'Y': [220, 187, 170]} df = pd.DataFrame(data=d) the dataframe ends up like X Y 1 220 2 187 3 170 I can get the y value for an x value of 1.0 using df[df['X'] == 1.0]['Y'] which returns 220 But is there a way to get a linearly interpol... | You can use np.interp for this: import numpy as np X_value = 1.5 np.interp(X_value, df['X'], df['Y']) # 203.5 Make sure that df['X'] is monotonically increasing. You can use the left and right parameters to customize the return value for out-of-bounds values: np.interp(0.5, df['X'], df['Y'], left=np.inf, right=-np.inf... | 5 | 2 |
79,215,119 | 2024-11-22 | https://stackoverflow.com/questions/79215119/how-can-i-convert-the-datatype-of-a-numpy-array-sourced-from-an-awkward-array | I have a numpy array I converted from awkward array by to_numpy() function, and the resulting array has the datatype: dtype=[('phi', '<f8'), ('eta', '<f8')]). I want to make it a regular tuple of (float32, float32) because otherwise this does not convert into a tensorflow tensor I tried the regular asdtype functions bu... | I believe your problem is equivalent to this: you have some Awkward Array with record structure, >>> array = ak.Array([{"phi": 1.1, "eta": 2.2}, {"phi": 3.3, "eta": 4.4}]) and when you convert that with ak.to_numpy, it turns the record fields into NumPy structured array fields: >>> ak.to_numpy(array) array([(1.1, 2.2)... | 2 | 4 |
79,212,852 | 2024-11-21 | https://stackoverflow.com/questions/79212852/constraint-to-forbid-nan-in-postgres-numeric-columns-using-django-orm | Postgresql allows NaN values in numeric columns according to its documentation here. When defining Postgres tables using Django ORM, a DecimalField is translated to numeric column in Postgres. Even if you define the column as bellow: from django.db import models # You can insert NaN to this column without any issue num... | I don't have a PostgreSQL database to test against but you can try creating a database constraint using a lookup based on the IsNull looukup: from decimal import Decimal from django.db.models import ( CheckConstraint, DecimalField, Field, Model, Q, ) from django.db.models.lookups import ( BuiltinLookup, ) @Field.regist... | 1 | 2 |
79,214,742 | 2024-11-22 | https://stackoverflow.com/questions/79214742/polars-python-filter-list-column-using-a-boolean-list-column-but-keeping-list | I would like to get elements from a list dtype column using another boolean list column and keeping the original size of the list (as oppose to this solution). Starting from this dataframe: df = pl.DataFrame({ 'identity_vector': [[True, False], [False, True]], 'string_vector': [['name1', 'name2'], ['name3', 'name4']] }... | Kind of standard pl.Expr.explode() / calculate / pl.Expr.implode() route: df.with_columns( pl.when( pl.col.identity_vector.explode() ).then( pl.col.string_vector.explode() ).otherwise(None) .implode() .over(pl.int_range(pl.len())) .alias("filtered_strings") ) shape: (2, 3) ┌─────────────────┬────────────────────┬─────... | 2 | 3 |
79,206,427 | 2024-11-20 | https://stackoverflow.com/questions/79206427/remove-unused-node-in-python-plotly | What I'm trying to do here is create a relationship between Tasks. Some of them are connected directly to each other while others pass through this big box I circled instead of connecting directly (which is what I need). How can I remove this node? def generate_links_and_nodes(dataframe): cleaned_links = [] for _, ro... | To eliminate the intermediary nodes you need to identify nodes acting as unnecessary passthroughs and bypass them to create direct links between relevant tasks. So, in my example, intermediary nodes from the q3 column are identified as those that connect q10, the starting tasks, to q11, the ending tasks, and that a... | 3 | 2 |
79,210,901 | 2024-11-21 | https://stackoverflow.com/questions/79210901/methods-to-reduce-a-tensor-embedding-to-x-y-z-coordinates | I have a model from hugging face and would like to use it for performing word comparisons. At first I thought of performing a series of similarity calculations across words of interest, but I quickly found that this problem would grow exponentially as the number of words expanded as well. A solution I thought about is p... | After more thorough reading, it was brought to my attention that it would be impossible to use TSNE in the manner I was hoping, as the dimensions generated by TSNE are only representative of the training data. Further fitting with new data or transformation of data not within the training set would result in outputs ... | 2 | 0 |
79,213,461 | 2024-11-22 | https://stackoverflow.com/questions/79213461/what-makes-printnp-half500-2-differs-from-printfnp-half500-2 | everyone. I've been learning floating-point truncation errors recently. But I found print(np.half(500.2)) and print(f"{np.half(500.2)}") yield different results. Here are the logs I got in IPython. In [11]: np.half(500.2) Out[11]: np.float16(500.2) In [12]: print(np.half(500.2)) 500.2 In [13]: print(f"{np.half(500.2)}"... | print calls __str__, while an f-string calls __format__. __format__ with an empty format spec is usually equivalent to __str__, but not all types implement it that way, and numpy.half is one of the types that implements different behavior: In [1]: import numpy In [2]: x = numpy.half(500.2) In [3]: str(x) Out[3]: '500.2... | 1 | 4 |
79,208,817 | 2024-11-20 | https://stackoverflow.com/questions/79208817/get-a-single-series-of-classes-instead-of-one-series-for-each-class-with-pandas | I have a DataFrame with 3 column of zeroes and ones corresponding to 3 different classes. I want to get a single series of zeroes, ones, and twos depending of the class of the entry (0 for the first class, 1 for the second one and 2 for the third one): >>> results.head() HOME_WINS DRAW AWAY_WINS ID 0 0 0 1 1 0 1 0 2 0 ... | Multiply by a dictionary, sum and convert to_frame: d = {'HOME_WINS': 0, 'DRAW': 1, 'AWAY_WINS': 2} out = df.mul(d).sum(axis=1).to_frame(name='SCORE') Or using a dot product: d = {'HOME_WINS': 0, 'DRAW': 1, 'AWAY_WINS': 2} out = df.dot(pd.Series(d)).to_frame(name='SCORE') Or, if there is exactly one 1 per row, with f... | 3 | 5 |
79,212,904 | 2024-11-21 | https://stackoverflow.com/questions/79212904/why-is-tz-naive-timestamp-converted-to-integer-while-tz-aware-is-kept-as-timesta | Understandable and expected (tz-aware): import datetime import numpy as np import pandas as pd aware = pd.DatetimeIndex(["2024-11-21", "2024-11-21 12:00"], tz="UTC") eod = datetime.datetime.combine(aware[-1].date(), datetime.time.max, aware.tz) aware, eod, np.concat([aware, [eod]]) returns (DatetimeIndex(['2024-11-21 ... | From Data type promotion in NumPy When mixing two different data types, NumPy has to determine the appropriate dtype for the result of the operation. This step is referred to as promotion or finding the common dtype. In typical cases, the user does not need to worry about the details of promotion, since the promotion ... | 3 | 1 |
79,211,584 | 2024-11-21 | https://stackoverflow.com/questions/79211584/no-solution-found-in-gekko-when-modelling-a-tank-level | I'm trying to simulate the level of a tank that has two inlet flows and one outlet. The idea is that after this will be used in a control problem. However, I can't get it to work with gekko but it did work with scypy odeint. #set time tmax=60*6 i = 0 t = np.linspace(i,tmax,int(tmax/10)+1) # minutes #Assign values d2_h0... | There is no solution because the lower bound on tank level height is set to 0 with: h1 = m.Var(value=d2_h0, lb=0) When this constraint is removed, it is solved successfully but with an unrealistic solution with a negative height. Overflow or complete drainage can be included in the model by adding slack variables: # ... | 2 | 0 |
79,212,853 | 2024-11-21 | https://stackoverflow.com/questions/79212853/swig-hello-world-importerror-dynamic-module-does-not-define-module-export-func | This is supposed to be the absolute minimum Hello World using SWIG, C, and setuptools. But the following exception is raised when the module is imported: >>> import hello Traceback (most recent call last): File "<python-input-0>", line 1, in <module> import hello ImportError: dynamic module does not define module expor... | Your extension module should be named _hello (notice the leading "_"). pyproject.toml: # ... [tool.setuptools] ext-modules = [ { name = "_hello", sources = ["src/hc/hello.c", "src/hc/hello.i"] } ] Check: [SO]: c program SWIG to python gives 'ImportError: dynamic module does not define init function' (@CristiFati's an... | 2 | 2 |
79,208,862 | 2024-11-20 | https://stackoverflow.com/questions/79208862/in-a-jupyter-notebook-open-in-vs-code-how-can-i-quickly-navigate-to-the-current | This feels like a useful feature, but I haven't been able to find a setting / extension that offers this capability. | Just figured it out - you can use the "Go To" button in the top toolbar in vscode (pic below). You'll need the jupyter extension installed. | 2 | 0 |
79,212,165 | 2024-11-21 | https://stackoverflow.com/questions/79212165/how-does-pandas-series-nbytes-work-for-strings-results-dont-seem-to-match-expe | The help doc for pandas.Series.nbytes shows the following example: s = pd.Series(['Ant', 'Bear', 'Cow']) s 0 Ant 1 Bear 2 Cow dtype: object s.nbytes 24 << end example >> How is that 24 bytes? I tried looking at three different encodings, none of which seems to yield that total. print(s.str.encode('utf-8').str.len().s... | Pandas nbytes does not refer to the bytes required to store the string data encoded in specific formats like UTF-8, UTF-16, or ASCII. It refers to the total number of bytes consumed by the underlying array of the Series data in memory. Pandas stores a NumPy array of pointers to these Python objects when using the objec... | 1 | 3 |
79,206,684 | 2024-11-20 | https://stackoverflow.com/questions/79206684/how-to-mark-repeated-entries-as-true-starting-from-the-second-occurrence-using-n | Problem I have a NumPy array and need to identify repeated elements, marking the second occurrence and beyond as True, while keeping the first occurrence as False. For example, given the following array: np.random.seed(100) a = np.random.randint(0, 5, 10) # Output: [0 0 3 0 2 4 2 2 2 2] I want to get the following out... | To reveal that the problem is actually less complicated than it may seem at first glance, the question could be rephrased as follows: Mark all first occurrences of values with False. This leads to a bit of a simplified version of EuanG's answer¹: def find_repeated(a): mask = np.ones_like(a, dtype=bool) mask[np.unique(a... | 4 | 3 |
79,211,816 | 2024-11-21 | https://stackoverflow.com/questions/79211816/value-based-partial-slicing-with-non-existing-keys-is-now-deprecated | When running the snippet of example code below with pandas 2.2.3, I get an error saying KeyError: 'D' index = pd.MultiIndex.from_tuples( [('A', 1), ('A', 2), ('A', 3), ('B', 1), ('B', 2), ('B', 2)], names=['letter', 'number'] ) df = pd.DataFrame({'value': [10, 20, 30, 40, 50, 60]}, index=index) idx = pd.IndexSlice resu... | When you run this code with pandas 1.5.3 you should in fact receive a FutureWarning: FutureWarning: The behavior of indexing on a MultiIndex with a nested sequence of labels is deprecated and will change in a future version. series.loc[label, sequence] will raise if any members of 'sequence' or not present in the inde... | 2 | 0 |
79,207,488 | 2024-11-20 | https://stackoverflow.com/questions/79207488/how-do-i-represent-sided-boxplot-in-seaborn-when-boxplots-are-already-grouped | I'm seeking a way to represent two-sided box plots in seaborn. I have 2 indexes (index1 and index2) that I want to represent according to two pieces of information: info1 (a number) and info2 (a letter). My issue is that the boxplots I have are already grouped together, and I don't understand how to manage the last dimension. For now I ... | To create an additional grouping in Seaborn, the idea is to let Seaborn create a grid of subplots (called FacetGrid in Seaborn). The function sns.catplot(kind='box', ...) creates such a FacetGrid for boxplots. The col= parameter takes care of putting each Info1 in a separate subplot. To use Index1/Index2 as hue, both c... | 2 | 2 |
79,205,654 | 2024-11-20 | https://stackoverflow.com/questions/79205654/rounding-coordinates-to-centre-of-grid-square | Im currently trying to collect some weather data from an API, and to reduce the amount of API calls im trying to batch the calls on 0.5degree longitude and latitude chunks due to its resolution. I had this code def round_to_grid_center(coordinate,grid_spacing=0.5 ): offset = grid_spacing / 2 return round(((coordinate -... | Given: round half to even The round() function uses "round half to even" rounding mode, as mentioned in the Built-in Types doc, section Numeric types (emphasis by me): Operation Result round(x[, n]) x rounded to n digits, rounding half to even. If n is omitted, it defaults to 0. "Rounding half to even" mean... | 1 | 3 |
79,209,784 | 2024-11-21 | https://stackoverflow.com/questions/79209784/conversationsummarybuffermemory-is-not-fully-defined-you-should-define-basec | I am attempting to use LangChain's ConversationSummaryBufferMemory and running into this error: pydantic.errors.PydanticUserError: `ConversationSummaryBufferMemory` is not fully defined; you should define `BaseCache`, then call `ConversationSummaryBufferMemory.model_rebuild()`. This is what my code looks like: memory ... | The pydantic library has been updated; pinning it with pip install pydantic==2.9.2 should help | 2 | 3 |
79,209,425 | 2024-11-21 | https://stackoverflow.com/questions/79209425/build-a-wheel-and-install-package-version-depending-on-os | I have several python packages that need to be installed on various os/environments. These packages have dependencies and some of them like Polars needs a different package depending on the OS, for example: polars-lts-cpu on MacOS (Darwin) and polars on all the other OS. I use setuptools to create a whl file, but the d... | Use declarative environment markers as described in PEP 496 and PEP 508: install_requires=[ "polars>=1.12.0; platform_system!='Darwin'", "polars-lts-cpu>=1.12.0; platform_system=='Darwin'", ] | 3 | 3 |
79,208,029 | 2024-11-20 | https://stackoverflow.com/questions/79208029/how-can-i-fix-filenotfounderror-in-python-when-writing-to-a-file-in-a-onedrive-d | My OS is Windows, newest version and all updated, I think the issue lies in the path or something to do with OneDrive. Using the code: file = "dataset.csv" with open(file, "w") as f: f.write(data) I get the error: FileNotFoundError: [Errno 2] No such file or directory: 'dataset.csv', but this is an error I have never ... | I found out that it was a Windows Security issue in the end... I enabled controlled folder access. To disable this follow these steps: Settings > Update & Security (Windows 10) or Privacy & Security (Windows 11) > Windows Security > Virus & threat protection. Under Virus & threat protection settings, select Manage set... | 1 | 3 |
79,208,182 | 2024-11-20 | https://stackoverflow.com/questions/79208182/segmentation-fault-when-executing-a-python-script-in-a-c-program | I need to execute some python and C at the same time. I tried using Python.h: #include <Python.h> int python_program(char* cwd) { char* python_file_path; FILE* fd; int run; python_file_path = malloc(sizeof(char) * (strlen(cwd) + strlen("src/query.py") + 1)); strcpy(python_file_path, cwd); strcat(python_file_path, "src/... | Golden rule: error handling is not an option but a hard requirement in programming (pointed out by answers and comments). Failing to include it might work for a while, but almost certainly will come back and bite in the ass at a later time, and it will do it so hard that someone (unfortunately, often not the same perso... | 2 | 0 |
79,208,254 | 2024-11-20 | https://stackoverflow.com/questions/79208254/how-to-specify-column-name-with-the-suffix-based-on-another-column-value | #Column X contains the suffix of one of V* columns. Need to put the value from V(X) in column Y. import pandas as pd import numpy as np # sample dataframes df = pd.DataFrame({ 'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'V1': [2,3,4,50,6,7,8], 'V2': [3,3,3,3,3,3,3], 'V3': [7,15,8,9,10,11,12], 'X': [2,3,1,3,3,1,2] }) # Expe... | The canonical way would be to use indexing lookup, however since the column names and X values are a bit different, you first need to convert to string and prepend V: idx, cols = pd.factorize('V'+df['X'].astype(str)) df['Y'] = df.reindex(cols, axis=1).to_numpy()[np.arange(len(df)), idx] Output: EMPLID V1 V2 V3 X Y 0 ... | 1 | 2 |
79,207,951 | 2024-11-20 | https://stackoverflow.com/questions/79207951/python-wheel-entry-point-not-working-as-expected-on-windows | I'm trying to set up a python wheel for my testrunner helper script to make it easily accessible from everywhere on my Windows machine. Therefore I configure a console entry point in my setup.py. I can see the generated entry point in the entry_points.txt, but if I try to invoke my script I get the error message: No... | No module named testrunner.__main__; 'testrunner' is a package and cannot be directly executed You don't have __main__.py so you cannot do python -m testrunner. To fix the problem: echo "from .testrunner import runTests" >testrunner/__main__.py echo "runTests()" >>testrunner/__main__.py The second problem: from .te... | 3 | 1 |
79,207,871 | 2024-11-20 | https://stackoverflow.com/questions/79207871/replace-last-two-row-values-in-a-grouped-polars-dataframe | I need to replace the last two values in the value column of a pl.DataFrame with zeros, whereby I need to group_by the symbol column. import polars as pl df = pl.DataFrame( {"symbol": [*["A"] * 4, *["B"] * 4], "value": range(8)} ) shape: (8, 2) ┌────────┬───────┐ │ symbol ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════... | You can use pl.Expr.head() with pl.len() to get data without last two rows. pl.Expr.append() and pl.repeat() to pad it with zeroes. df.with_columns( pl.col.value.head(pl.len() - 2).append(pl.repeat(0, 2)) .over("symbol") ) shape: (8, 2) ┌────────┬───────┐ │ symbol ┆ value │ │ --- ┆ --- │ │ str ┆ i64 │ ╞════════╪════... | 3 | 1 |
79,207,102 | 2024-11-20 | https://stackoverflow.com/questions/79207102/python-opencv-projectpoints-neutral-position-off-center | I want to draw 3D Positions in a webcam Image using OpenCV's projectPoints function. During testing I noticed, that I always have a certain offset from the real object. This is most obvious when trying to project the origin (0,0,0) to the image center. The image shape is (2988, 5312, 3), the big red dot is the image c... | The image shape is (2988, 5312, 3), the big red dot is the image center at (1494, 2656) You calibrated that camera, right? Then that's not the optical center. The optical center is in the camera matrix, along with the focal length. (1476, 2732) That looks more like it, although I'd have expected these numbers to al... | 3 | 1 |
79,198,230 | 2024-11-17 | https://stackoverflow.com/questions/79198230/django-dask-integration-usage-and-progress | About performance & best practice Note, the entire code for the question below is public on Github. Feel free to check out the project! https://github.com/b-long/moose-dj-uv/pull/3 I'm trying to workout a simple Django + Dask integration, where one view starts a long-running process and another view is able to check ... | I think the solution you are looking for is as_completed: https://docs.dask.org/en/latest/futures.html#waiting-on-futures. You can also iterate over the futures as they complete using the as_completed function | 1 | 1 |
79,201,789 | 2024-11-19 | https://stackoverflow.com/questions/79201789/why-does-pandas-rolling-method-return-a-series-with-a-different-dtype-to-the-ori | Just curious why the Pandas Series rolling window method doesn't preserve the data-type of the original series: import numpy as np import pandas as pd x = pd.Series(np.ones(6), dtype='float32') x.dtype, x.rolling(window=3).mean().dtype Output: (dtype('float32'), dtype('float64')) | x.rolling(window=3) gives you a pandas.core.window.rolling.Rolling object. help(pandas.core.window.rolling.Rolling.mean) includes the note: Returns ------- Series or DataFrame Return type is the same as the original object with ``np.float64`` dtype. that's the little why. The big why it would do such a thing, I don't... | 1 | 1 |
79,204,288 | 2024-11-19 | https://stackoverflow.com/questions/79204288/handle-dns-timeout-with-call-to-blob-client-upload-blob | While using the Azure storage SDK in Python, I have been unable to override what appears to be a default 90-second timeout to catch a DNS exception occurring within a call to blob_client.upload_blob(). I am looking for a way to override this with a shorter time interval (i.e. 5 seconds). The following code illustrates ... | While using the Azure storage SDK in Python, I have been unable to override what appears to be a default 90-second timeout to catch a DNS exception occurring within a call to blob_client.upload_blob(). I am looking for a way to override this with a shorter time interval (i.e. 5 seconds). As I mentioned in the comment, T... | 1 | 2 |
79,203,282 | 2024-11-19 | https://stackoverflow.com/questions/79203282/setting-rgb-value-for-a-numpy-array-using-boolean-indexing | I have an array with shape (100, 80, 3) which is an rgb image. I have a boolean mask with shape (100, 80). I want each pixel where the mask is True to have value of pix_val = np.array([0.1, 0.2, 0.3]). cols = 100 rows = 80 img = np.random.rand(rows, cols, 3) mask = np.random.randint(2, size=(rows, cols), dtype=np.bool_... | I'll attempt to explain why your indexing attempt didn't work. Make a smaller 3d array, and 2d mask: In [1]: import numpy as np In [2]: img = np.arange(24).reshape(2,3,4) In [3]: mask = np.array([[1,0,1],[0,1,1]],bool);mask Out[3]: array([[ True, False, True], [False, True, True]]) Using @mozway's indexing, produces a... | 1 | 2 |
79,204,500 | 2024-11-19 | https://stackoverflow.com/questions/79204500/mypy-with-pydantic-field-validator | With pydantic, is there a way for mypy to be hinted so it doesn't raise an error in this scenario, where there's a field_validator modifying the type? class MyModel(BaseModel): x: int @field_validator("x", mode="before") @classmethod def to_int(cls, v: str) -> int: return len(v) MyModel(x='test') | If you're always inserting a string there, it might be better to use a computed_field. Something along this, maybe? class MyModel(BaseModel): input: str @computed_field def x(self) -> int: return len(self.input) I think it's very counterintuitive if you see the model with the int declaration while it would raise type ... | 1 | 2 |
79,201,663 | 2024-11-18 | https://stackoverflow.com/questions/79201663/split-columns-containing-lists-from-csv-into-separate-csv-files-with-pandas | I have CSV files with multiple columns of data retrieved from APIs, where each cell may contain either a single value or a list/array. The size of these lists is consistent across each column (e.g., a column named ALPHANUMS having a row containing a list like "['A', 'B', '4']" has the same list size of a column named C... | I ended up using a combination of explode,map, and ast.literal_eval to break out the columns with string lists into different CSV files. Instead of hard-coding column names like NAME or CARS, the program now dynamically checks which columns contain string representations of lists. This is done by iterating over all col... | 1 | 0 |
79,204,623 | 2024-11-19 | https://stackoverflow.com/questions/79204623/correct-way-to-parallelize-request-processing-in-flask | I have a Flask service that receives GET requests, and I want to scale the QPS on that endpoint (on a single machine/container). Should I use a python ThreadPoolExecutor or ProcessPoolExecutor, or something else? The GET request just retrieves small pieces of data from a cache backed by a DB. Is there anything specific... | Neither. Flask will serve one request per worker (or more, but depending on the worker type) - the way you set-it up, either with gunicorn, wsgi or awsgi is what is responsible for the number of parallel requests your app can process. Inside your app, you don't change anything - your views will be called as independent... | 1 | 2 |
79,204,622 | 2024-11-19 | https://stackoverflow.com/questions/79204622/how-to-create-dataframe-that-is-the-minimal-values-based-on-2-other-dataframes | Let's say I have DataFrames df1 and df2: >>> df1 = pd.DataFrame({'A': [0, 2, 4], 'B': [2, 17, 7], 'C': [4, 9, 11]}) >>> df1 A B C 0 0 2 4 1 2 17 9 2 4 7 11 >>> df2 = pd.DataFrame({'A': [9, 2, 32], 'B': [1, 3, 8], 'C': [6, 2, 41]}) >>> df2 A B C 0 9 1 6 1 2 3 2 2 32 8 41 What I want is the 3rd DataFrame that will have ... | You can mask df1 with df2 when df2['B'] < df1['B']: out = df1.mask(df2['B'].lt(df1['B']), df2) Output: A B C 0 9 1 6 1 2 3 2 2 4 7 11 | 1 | 2 |
79,198,298 | 2024-11-17 | https://stackoverflow.com/questions/79198298/improving-safety-when-a-sqlalchemy-relationship-adds-conditions-that-refer-to-ta | I have a situation where I want to set up relationships between tables, mapped with the SQLAlchemy ORM layer, where these relationships have an extra join key. As far as I know, setting this up by hand requires embedding strings that are eval'd; I'm trying to figure out to what extent that can be avoided, or at least v... | Pretty sure you can use lambdas or functions like this: (untested) class User(Base): __tablename__ = "user" id: Mapped[UUID] = mapped_column(Uuid(), primary_key=True) tenant_id: Mapped[UUID] = mapped_column(Uuid()) actions: Mapped["Action"] = relationship(lambda: Action, back_populates="user", foreign_keys=lambda: Act... | 1 | 2 |
79,204,156 | 2024-11-19 | https://stackoverflow.com/questions/79204156/pandas-dataframe-ffill-with-one-greater-the-previous-nonzero-value | I have a pandas DataFrame with a column like: 0 1 1 2 2 3 4 5 5 0 0 0 I would like to leave any leading zeros, but ffill to replace the trailing zeros with one greater than the previous, nonzero value. In this case, I'd like the output to be: 0 1 1 2 2 3 4 5 5 6 6 6 How can I go about doing this? | You could mask, increment and ffill: m = df['col'].eq(0) s = df['col'].mask(m) df['out'] = s.fillna(s.add(1).ffill().fillna(0)).convert_dtypes() Or, if you really want to only target the trailing zeros: df['out'] = df['col'].mask(df['col'].eq(0)[::-1].cummin(), df['col'].max()+1) Output: col out 0 0 0 1 1 1 2 1 1 3 ... | 1 | 2 |
79,203,876 | 2024-11-19 | https://stackoverflow.com/questions/79203876/why-does-my-google-service-account-not-see-files-shared-with-it-but-can-see-fil | I am using the Python Client to amend a Google Sheet shared to a Google Service Account. Everything works as expected when using a Sheet that is created by the Service Account, but returns a Requested entity was not found error when accessing a Google Sheet that is owned by someone else but shared with the service acco... | Can anyone point to what I could be doing wrong that would mean that the client can find a sheet that it creates, but not one that is shared with it? There is a scope in Google drive, which is characteristic of your description: https://www.googleapis.com/auth/drive.file See, edit, create, and delete only the specifi... | 1 | 2 |
79,200,874 | 2024-11-18 | https://stackoverflow.com/questions/79200874/column-assignment-with-alias-or | What is the preferred way to assign/add a new column to a polars dataframe in .select() or .with_columns()? Are there any differences between the below column assignments using .alias() or the = sign? import polars as pl df = pl.DataFrame({"A": [1, 2, 3], "B": [1, 1, 7]}) df = df.with_columns(pl.col("A").sum().alias("a... | The advantage of alias is that it allows you to specify a column name that wouldn't be a valid Python identifier. For example, you could use "a sum!". This can also be achieved by creating a dictionary and using ** to unpack it, passing the items as keyword arguments. Assignment with = cannot work in this way, as it re... | 2 | 5 |
79,203,100 | 2024-11-19 | https://stackoverflow.com/questions/79203100/plotting-intersecting-planes-in-3d-space-plotly | I'm trying to plot 3 planes in 3D space using plotly, I can only define a surface along the XY plane, whilst ZY and XZ do not appear. I'm including a simple example below, I would expect the code to produce three planes intersecting at the point (1, 1, 1), instead there is only one surface at (x, y) = 1. Any help would... | All of your x, y, z should have the same shape. They don't, which is why only one of your planes even shows, and none of them are correctly specified. What you meant is import plotly.graph_objects as go import numpy as np xx,yy=np.meshgrid([0,1,2], [0,1,2]) zsurf = go.Surface(y=xx, x=yy, z=np.ones((3, 3))) ysurf = ... | 1 | 1 |
79,199,490 | 2024-11-18 | https://stackoverflow.com/questions/79199490/what-is-the-number-at-the-end-of-the-django-get-log | I am using Django to serve an application, and I noticed some slowing down recently. So I went and checked the console that serves the server, that usually logs lines of this format : <date_time> "GET <path> HTTP/1.1" <HTTP_STATUS> <response_time> What I thought was the response time in milliseconds is apparently not, ... | It's the response size and it's printed by Python http-server which Django inherits. That explains why it's not documented by Django, because it's not the Django code that prints it. You can verify that by looking at this Django module. This is the line that starts the http-server. It inherits from Python http-server. ... | 1 | 3 |
79,202,327 | 2024-11-19 | https://stackoverflow.com/questions/79202327/importing-from-another-directory-located-in-parent-directory-in-python | Suppose we have a project structure like: project/ public_app/ __init__.py dir/ __init__.py config.py subdir/ __init__.py functions.py utils.py my_app/ main.py In my_app/main.py, I would like to import some functions from public_app/dir/subdir/functions.py. A solution I found was to add the following: # main.py import... | Run the main.py from the parent directory as a reference. That will resolve relative errors and import errors as the interpreter will get the idea of the relations between modules. Thus, Instead of doing: .../myapp$ python3 main.py Do: <parentfolderofproject>$ python3 -m project.myapp.main | 2 | 1 |
79,191,769 | 2024-11-15 | https://stackoverflow.com/questions/79191769/how-to-resolve-latency-issue-with-django-m2m-and-filter-horizontal-in-modeladmin | I have used django ModelAdmin with M2M relationship and formfield filtering code as follows: But for superuser or any other login where the number of mailboxes is more than 100k. I have sliced the available after filtering. But loading the m2m field takes time and times out for superuser login: def formfield_for_manyto... | Limited the number of objects loading during form init as: class GroupMailIdsForm(forms.ModelForm): class Meta: model = GroupMailIds fields = "__all__" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) request = getattr(self, 'request', None) if not request: return email = request.user.email qs = M... | 2 | 1 |
79,198,397 | 2024-11-18 | https://stackoverflow.com/questions/79198397/how-to-redraw-figure-on-event-in-matplotlib | I'm trying to pre-generate and store matplotlib figures in python, and then display them on a keyboard event (left-right cursor keys). It partially seems working, but fails after the first keypress. Any idea, what am I doing wrong? import matplotlib.pyplot as plt import numpy as np def new_figure(title, data): fig,ax =... | Not sure about the root cause of the error, but one way to avoid this is to fully replace the figure and reconnect the key_press_event: def redraw(event, cnt): event.canvas.figure = figs[cnt] event.canvas.mpl_connect('key_press_event', keypress) event.canvas.draw() | 3 | 1 |
79,202,065 | 2024-11-19 | https://stackoverflow.com/questions/79202065/how-to-sum-data-by-input-date-month-and-previous-month | I'm trying to sum up data of selected date, month of selected date and previous month of selected date but don't know how to do. Below is my sample data and my expected Output: Sample data: import pandas as pd import numpy as np df = pd.read_excel('https://github.com/hoatranobita/hoatranobita/raw/refs/heads/main/Check%... | The exact generalization of your question is not fully clear, but assuming that you want to group by COA Code, you could ensure everything is a datetime/periods, then select the appropriate rows with boolean indexing and between, finally perform a groupby.sum of those rows and concat to the original date rows. Here as ... | 1 | 1 |
79,201,815 | 2024-11-19 | https://stackoverflow.com/questions/79201815/in-python-polars-how-to-search-string-across-multiple-columns-and-create-a-new | To search over multiple columns, and create a new column of flag if string found, the following codes work, but is there any compact way inside with_columns() to achieve the same? df = pl.DataFrame({ "col1": ["hello", "world", "polars"], "col2": ["data", "science", "hello"], "col3": ["test", "string", "match"], "col4":... | You can use .any_horizontal() df.with_columns( pl.any_horizontal(pl.all().str.contains(search_string)) .alias("string_found") ) shape: (3, 5) ┌────────┬─────────┬────────┬─────────┬──────────────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 ┆ string_found │ │ --- ┆ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str ┆ bool │ ╞════════╪══... | 3 | 6 |
79,201,256 | 2024-11-18 | https://stackoverflow.com/questions/79201256/python-and-json-files | I am fairly new to python but i know the basics, I am not well versed in .json files and the commands used in python for them. I would like to create a personal app for creating character sheets for D&D and storing the data in json files so you can access it later. I watched the "python tutorial: working with JSON data... | Welcome to Stack Overflow! You should definitely use and be comfortable with JSON. It is super-duper powerful in programming, computer science, math, etc. JSON is used all over the programming and software world, so a solid understanding will serve you well :) Stack Overflow has a ton of information on JSON, and you ca... | 1 | 2 |
79,200,847 | 2024-11-18 | https://stackoverflow.com/questions/79200847/proper-way-to-extract-xml-elements-from-a-namespace | In a Python script I make a call to a SOAP service which returns an XML reply where the elements have a namespace prefix, let's say <ns0:foo xmlns:ns0="SOME-URI"> <ns0:bar>abc</ns0:bar> </ns0:foo> I can extract the content of ns0:bar with the method call doc.getElementsByTagName('ns0:bar') However, the name ns0 is on... | Namespace support is needed if searching by element name doc.getElementsByTagNameNS('SOME-URI','bar') If using a package with namespace support like lxml tree.findall('{http://schemas.xmlsoap.org/soap/envelope/}Body') or by local name tree.xpath('//*[local-name()="bar"]' lxml example from lxml import etree tree = e... | 1 | 2 |
79,200,092 | 2024-11-18 | https://stackoverflow.com/questions/79200092/mock-patch-function-random-random-with-return-value-depending-on-the-module | Intention: I am trying to create a unit test for a complex class, where a lot of values are randomly generated by using random.random(). To create a unit test, I want to use mock.patch to set fixed values for random.random(), to receive always same values (same configuration) and then I can run my test which must have ... | There is only one function to patch, random.random, and it's shared by both modules. The best you can do is use side_effect to provide two values for it to return, one per call, but that requires you to know the order in which modul1.function and modul2.function will be called, and that may not be predictable. Better would... | 2 | 1 |
79,199,863 | 2024-11-18 | https://stackoverflow.com/questions/79199863/reorder-numpy-array-by-given-index-list | I have an array of indexes: test_idxs = np.array([4, 2, 7, 5]) I also have an array of values (which is longer): test_vals = np.array([13, 19, 31, 6, 21, 45, 98, 131, 11]) So I want to get an array with the length of the array of indexes, but with values from the array of values in the order of the array of indexes. ... | This is actually extremely simple with numpy, just index your test_vals array with test_idx (integer array indexing): out = test_vals[test_idxs] Output: array([ 21, 31, 131, 45]) Note that this requires the indices to be valid. If you have indices that could be too high you would need to handle them explicitly. Examp... | 1 | 1 |
79,189,688 | 2024-11-14 | https://stackoverflow.com/questions/79189688/plotly-python-how-to-properly-add-shapes-to-subplots | How does plotly add shapes to figures with multiple subplots and what best practices are around that? Let's take the following example: from plotly.subplots import make_subplots fig = make_subplots(rows=2, cols=1, shared_xaxes=True) fig.add_vrect(x0=1, x1=2, row=1, col=1, opacity=0.5, fillcolor="grey") fig.add_scatter(... | The problem with shapes in subplots arises from how plotly assigns axis references (xref and yref). add_vrect automatically maps shapes to subplots using row and col which can fail in complex layouts. For more control, use add_shape. For a rectangle that always stretches vertically across the entire plot, similar to vr... | 1 | 2 |
79,199,298 | 2024-11-18 | https://stackoverflow.com/questions/79199298/python-pandas-str-extract-with-one-capture-group-only-works-in-some-cases | I have a single column in a big datasheet which I want to change, by extracting substrings from the string in that column. I do this by using str.extract on that column like so: Groups Group (A) Group (B) Group (CA) Group (CB) Group (G) Group (XP) What I want to get is the following: Groups (... | This works fine in my hands, you have to use a raw string (r'...') to avoid the DeprecationWarning: rule = r'(\(A\)|\(B\)|\(G\)|\(CA\)|\(CB\)|\(XP\))' df['out'] = df['Groups'].str.extract(rule, expand=True) Another, more generic, option could be to allow anything between the parentheses, except parentheses themselves:... | 1 | 1 |
79,199,034 | 2024-11-18 | https://stackoverflow.com/questions/79199034/how-to-read-a-part-of-parquet-dataset-into-pandas | I have a huge dataframe and want to split it into small files for better performance. Here is the example code to write. BUT I can not just read a small pieces from it without loading whole dataframe into memory. import pandas as pd import os # Create a sample DataFrame with daily frequency data = { "timestamp": pd.dat... | You can load your dataset selectively if your parquet output is properly partitioned. You can use libraries like PyArrow to let you filter the data at the file or partition level to make sure only the relevant data is loaded in to memory. Here's how you can do it using pyarrow.dataset: import pyarrow.dataset as ds data... | 1 | 3 |
79,192,572 | 2024-11-15 | https://stackoverflow.com/questions/79192572/how-can-i-abbreviate-phrases-using-polars-built-in-methods | I need to abbreviate a series or expression of phrases by extracting the capitalized words and then creating an abbreviation based on their proportional lengths. Here's what I'm trying to achieve: Extract capitalized words from each phrase. Calculate proportional lengths based on the total length of the capitalized wo... | It is possible, but it's probably not an ideal task to perform this way. This is the fastest approach I've found so far. It runs ~2.5x faster for me with 1_000_000 phrases. (1.5s vs 3.9s) def abbreviate_phrases(s: pl.Series, length: int = 4): phrases = pl.col(s.name) is_upper = pl.element().filter(pl.element().str.cont... | 3 | 2 |
79,198,324 | 2024-11-17 | https://stackoverflow.com/questions/79198324/why-does-pythons-structural-pattern-matching-not-support-multiple-assignment | I was experimenting with Python Structural Pattern Matching and wanted to write a match statement capable of matching repeated occurrences in a sequence. Suppose that I have the following list of tuples and was trying to match every pair in which the first element of the tuple is the same. from itertools import combina... | From PEP 635 – Structural Pattern Matching: Motivation and Rationale, section Capture Patterns: A name used for a capture pattern must not coincide with another capture pattern in the same pattern. This, again, is similar to parameters, which equally require each parameter name to be unique within the list of paramete... | 2 | 5 |
79,198,199 | 2024-11-17 | https://stackoverflow.com/questions/79198199/how-do-i-stop-legends-from-being-merged-when-vertically-concatenating-two-plots | Consider the following small example (based off of this gallery example]): import altair as alt from vega_datasets import data import polars as pl # add a column indicating the year associated with each date source = pl.from_pandas(data.stocks()).with_columns(year=pl.col.date.dt.year()) # an MSFT specific plot msft_plo... | You can use (msft_plot & all_plot).resolve_scale(color='independent'): More info about the resolve_ methods can be found in this section of the docs. | 1 | 2 |
79,197,104 | 2024-11-17 | https://stackoverflow.com/questions/79197104/difference-between-single-and-table-methods-in-pandas-dataframe-quantile | I hope someone can help me understand the difference between the "single" and "table" methods in pandas.DataFrame.quantile? Whether to compute quantiles per-column (‘single’) or over all columns (‘table’). https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.quantile.html For example, the following code yiel... | If you invert the order of the values in b the result will be different: df['b'] = df['b'].values[::-1] print(df.quantile(method='single', interpolation='nearest')) print(df.quantile(method='table', interpolation='nearest')) Output: a 3 b 100 Name: 0.5, dtype: int64 a 3 b 10 Name: 0.5, dtype: int64 The exact differen... | 1 | 1 |
79,186,201 | 2024-11-13 | https://stackoverflow.com/questions/79186201/converting-pl-duration-to-human-string | When printing a polars data frame, pl.Duration are printed in a "human format" by default. What function is used to do this conversion? Is it possible to use it? Trying "{}".format() returns something readable but not as good. import polars as pl data = {"end": ["2024/11/13 10:28:00", "2024/10/10 10:10:10", "2024/09/13... | Polars 1.14.0 added Duration type support to .dt.to_string() It can produce iso and polars formatted strings. pl.select( pl.duration(hours=1, minutes=2).dt.to_string() # format="iso" ).item() # 'PT1H2M' pl.select( pl.duration(hours=1, minutes=2).dt.to_string(format="polars") ).item() # '1h 2m' | 3 | 2 |
79,186,983 | 2024-11-13 | https://stackoverflow.com/questions/79186983/how-to-render-latex-in-shiny-for-python | I'm trying to find if there is a way to render LaTeX formulas in Shiny for Python or any low-hanging fruit workaround for that. Documentation doesn't have any LaTeX mentions, so looks like there's no dedicated functionality to support it. Also double-checked different variations of Latex in their playground. Tried this... | You can import Katex. I got here via https://stackoverflow.com/a/65540803/5599595. Running in shinylive from shiny.express import ui from shiny import render with ui.tags.head(): # Link KaTeX CSS ui.tags.link( rel="stylesheet", href="https://cdn.jsdelivr.net/npm/katex@0.16.0/dist/katex.min.css" ), ui.tags.script(src="h... | 3 | 2 |
79,196,656 | 2024-11-17 | https://stackoverflow.com/questions/79196656/networkx-graph-get-groups-of-linked-connected-values-with-multiple-values | If I use such data import networkx as nx G = nx.Graph() G.add_nodes_from([1, 2, 3, 4, 5, 6, 7]) G.add_edges_from([(1, 2), (1, 3), (2, 4), (5, 6)]) print(list(nx.connected_components(G))) Everything works fine. But what if I need to get connected values from multiple tuple, such as the folowing import networkx as nx G ... | The issue is that add_edges_from only takes a list of (source, target) tuples. You could use itertools: import networkx as nx from itertools import chain, pairwise G = nx.Graph() G.add_nodes_from([1, 2, 3, 4, 5, 6, 7]) edges = [(1, 2), (1, 3, 7), (2, 4, 1, 6), (5, 6)] G.add_edges_from(chain.from_iterable(map(pairwise, ... | 2 | 2 |
79,195,581 | 2024-11-16 | https://stackoverflow.com/questions/79195581/json-normalization-record-path-key-not-found | This post was edited to grab the actual JSON file (large) instead of the sample snippet that I extracted,(which works in this post). I was wondering why I get a key error when i use record_path on this data set. under the results key there are 2 nested keys named 'active_ingredients' and 'packaging' when i normalize i ... | When you specify multiple record_path entries (like "packaging" and "active_ingredients"), pandas expects that the second record_path ("active_ingredients") exists within every element of the first record_path ("packaging"), but, in your data, active_ingredients is not a nested property of packaging. Do this to solve th... | 3 | 0 |
79,196,626 | 2024-11-17 | https://stackoverflow.com/questions/79196626/python-polars-recursion | I've used Polars for some time now but this is something that often makes me go from Polars DataFrames to native Python calculations. I've spent resonable time looking for solutions that (tries) to use shift(), rolling(), group_by_dynamic() and so on but none is successful. Task Do calculation that depends on previous ... | How can I do a calculation that depends on previous (row's) calculation result that is in the same column? The short answer is that you can't without falling back into Python. To do this, any library would need to essentially need to iterate over the rows, only calculating a single row at a time. This means any sort ... | 5 | 4 |
79,196,393 | 2024-11-17 | https://stackoverflow.com/questions/79196393/pip-requirements-syntax-highlighting-in-github-markdown | According to GitHub syntax highlighting, keyword for pip requirements syntax highlighting can be found on languages.yml. According to the link, the keyword is Pip Requirements, but the following markdown snippet isn't highlighted on GitHub: ```Pip Requirements pandas==2.2.3 ``` How to syntax highlight pip requirements... | Use "pip-requirements" instead of "Pip Requirements": ```pip-requirements --editable . foo==1.2 bar==3.4 baz[quux]>=1.0.1 ``` It does not seem to render on Stack Overflow, but here is a screencap of markdown rendered from Github: Source: https://gist.github.com/wimglenn/204dd744240848d8cbf2b9beb6eb4a83 | 1 | 1 |
79,196,228 | 2024-11-16 | https://stackoverflow.com/questions/79196228/pydirectinput-entering-incorrect-key | Given the below code I am expecting the keyboard to press the page down key. However, instead it is pressing the number 3. All other keys I have used appear to be working correctly. How do I get this mapped correctly? import time import pydirectinput time.sleep(2) pydirectinput.keyDown('pagedown') time.sleep(0.01) pydi... | You want the sequence of keystrokes to include NumLock. | 1 | 1 |
79,195,973 | 2024-11-16 | https://stackoverflow.com/questions/79195973/how-to-access-unknown-fields-in-python-protobuf-version-5-38-3-with-upb-backend | I'm using Python protobuf package version 5.38.3 for deserializing some packets and I need to check if the messages I deserialize are conformant or not to a specific protobuf message structure. For some checks I want to obtain the list of unknown fields. This post points to an API UnknownFields() supported by messages,... | How can I get access to the list of unknown fields Here, let me google that for you. https://protobuf.dev/news/2023-08-15 Python Breaking Change In v25 message.UnknownFields() will be deprecated in pure Python and C++ extensions. It will be removed in v26. Use the new UnknownFieldSet(message) support in unknown_fiel... | 1 | 1 |
79,195,042 | 2024-11-16 | https://stackoverflow.com/questions/79195042/handling-complex-parentheses-structures-to-get-the-expected-data | We have data from a REST API call stored in an output file that looks as follows: Sample Input File: test test123 - test (bla bla1 (On chutti)) test test123 bla12 teeee (Rinku Singh) balle balle (testagain) (Rohit Sharma) test test123 test1111 test45345 (Surya) (Virat kohli (Lagaan)) testagain blae kaun hai ye banda (R... | Purely based on your shown input and your comments reflecting that you need to capture 1 or 2 values per line, here is an optimized regex solution: ^(?:\([^)(]*\)|[^()])*\(([^)(]+)(?:\([^)(]*\)[, ]*(?:([^)(]+))?)? RegEx Demo RegEx Details: This regex solution does the following: match everythng before last (...) then... | 7 | 6 |
79,195,659 | 2024-11-16 | https://stackoverflow.com/questions/79195659/issue-with-toggling-sign-of-the-last-entered-number-in-calculator-using-%e2%81%ba%e2%88%95%e2%82%8b-in-p | I am developing a calculator using Python. The problem I'm facing is that when I try to toggle the sign of the last number entered by the user using the ⁺∕₋ button, all similar numbers in the text get toggled as well. I believe the reason for this is Python's memory optimization, which causes similar strings to be stor... | replace() method replaces all of the occurrences of a substring, not just the last one. So when the following line executes: current_text.replace(current_text_list[-1], ...) It replaces all instances of current_text_list[-1] in current_text. That's why 2 x 2 is becoming (-2) x (-2), both 2s being are replaced. A possi... | 1 | 1 |
79,190,072 | 2024-11-14 | https://stackoverflow.com/questions/79190072/collecting-joining-waiting-for-parallel-depth-first-ops-in-dagster | After a much-appreciated assist from @zyd in this answer to parallel, deep-first execution in Dagster, I am now looking for a way to run an @op on the collected results of the graph run, or at least one that waits until they have all finished, since they don't have hard dependencies per se. My working code is as follow... | Here's an example that worked for me: from dagster import Definitions, op, DynamicOutput, graph, GraphOut, DynamicOut @op def a_op(path): return path @op def op2(path): return path @op def op3(path): return path @op(out=DynamicOut(str)) def mapper(): for i in range(10): yield DynamicOutput(str(i), mapping_key=str(i)) #... | 2 | 2 |
79,195,523 | 2024-11-16 | https://stackoverflow.com/questions/79195523/error-select-one-object-and-all-float-int-in-pandas-groupby | I have this dataframe. import pandas as pd x = { "year": ["2012", "2012", "2013", "2014", "2012", "2014", "2013", "2013", "2012", "2013", "2012", "2014", "2014", "2013", "2012", "2014"], "class": ["A", "B", "C", "A", "C", "B", "B", "C", "A", "C", "B", "C", "A", "C", "B", "A"], "gender": ["M", "F", "F", "M", "F", "M", "... | The issue is due to the columns score1, score2, score3, and coree4 in your DataFrame being stored as strings, not as numeric types. Do this import pandas as pd x = { "year": ["2012", "2012", "2013", "2014", "2012", "2014", "2013", "2013", "2012", "2013", "2012", "2014", "2014", "2013", "2012", "2014"], "class": ["A", "B"... | 1 | 1 |
79,194,023 | 2024-11-15 | https://stackoverflow.com/questions/79194023/how-to-insert-string-data-as-enum-into-postgres-database-in-python | I'm trying to add a String type data into a postgresql database in Python by Prisma. The culumn in database is defined as a specific enum type in Prisma schema. I tried to insert by a String mapping. However the insert was failed with not support with String type. What should I do? from enum import Enum from prisma imp... | missing enumerated value Your prisma schema should also include LUCK. A discrepancy between python and prisma seems like a Bad Thing. read the diagnostic prisma.errors.DataError: Error converting field "difficulty" of expected non-nullable type "String", found incompatible value of "EASY".' The question['difficulty'] ... | 2 | 0 |
79,192,371 | 2024-11-15 | https://stackoverflow.com/questions/79192371/how-to-combine-slice-assignment-mask-assignment-and-broadcasting-in-pytorch | To be more specific, I'm wondering how to assign a tensor by slice and by mask at different dimension(s) simultaneously in PyTorch. Here's a small example about what I want to do: With the tensors and masks below: x = torch.zeros(2, 3, 4, 6) mask = torch.tensor([[ True, True, False], [True, False, True]]) y = torch.ran... | You can do the following: x = torch.zeros(2, 3, 4, 6) mask = torch.tensor([[ True, True, False], [True, False, True]]) y = torch.rand(2, 3, 1, 3) x[..., :3][mask] = y[mask] This produces the same result as i, j = mask.nonzero(as_tuple = True) x[i, j, :, :3] = y[i, j] For the 2D mask scenario. This method also works f... | 1 | 1 |
79,192,549 | 2024-11-15 | https://stackoverflow.com/questions/79192549/why-is-jaxs-jit-compilation-slower-on-the-second-run-in-my-example | I am new to using JAX, and I’m still getting familiar with how it works. From what I understand, when using Just-In-Time (JIT) compilation (jax.jit), the first execution of a function might be slower due to the compilation overhead, but subsequent executions should be faster. However, I am seeing the opposite behavior.... | For general tips on running JAX microbenchmarks effectively, see FAQ: Benchmarking JAX code. I cannot reproduce the timings from your snippet, but in your more complicated case, I suspect you are getting fooled by JAX's Asynchronous dispatch, which means that the timing method you're using will not actually reflect the... | 1 | 2 |
79,191,165 | 2024-11-15 | https://stackoverflow.com/questions/79191165/django-is-not-importing-models-sublime-text-cant-find-django-windows-11 | I've been working through Python Crash Course e2, and got stuck on Ch.18. Having opened models.py, and entered the code, the error message is: ModuleNotFoundError: No module named 'django' I have spent some time working on this, without a solution. Could it be that PCCe2 is out of date, or is there a workaround solutio... | You can create a Build System for your project where you can specify a python environment to use: { "cmd": ["/full/path/to/your/specific/python", "$file"], "selector": "source.python", "file_regex": "^\\s*File \"(...*?)\", line ([0-9]*)" } Generally though, you don't need to run any files when working with Django. Jus... | 3 | 1 |
79,192,393 | 2024-11-15 | https://stackoverflow.com/questions/79192393/torch-randn-vector-differs | I am trying to generate a torch vector with a specific length. I want the vector to have the same beginning elements when increasing its length using the same seed. This works when the vector's length ranges from 1 to 15 for example: For length 14 torch.manual_seed(1) torch.randn(14) tensor([ 0.6614, 0.2669, 0.0617, 0.... | This is due to the behaviour of PRNG. Different code paths might be used. There is no guarantee that all different sequence length will produce exactly the same output samples from the PRNG. The outputs from 1-15 match while starting from 16 another (probably vectorized) code path will be used. Changes in the sequence ... | 1 | 2 |
79,191,501 | 2024-11-15 | https://stackoverflow.com/questions/79191501/polars-rolling-window-on-time-series-with-custom-filter-based-on-the-current-row | How do I use polars' native API to do a rolling window on a datetime column, but filter out rows in the window based on the value of a column of the "current" row? My polars dataframe of financial transactions has the following schema: For each transaction and a duration d, I want to: grab the source_acct and its tim... | df_in = ( df.join_where( df, pl.col.source_acct == pl.col.dest_acct_right, pl.col.timestamp.dt.offset_by("-1h") <= pl.col.timestamp_right, pl.col.timestamp >= pl.col.timestamp_right ) .group_by("timestamp", "source_acct") .agg(amount_in = pl.col.amount_right.sum()) ) ( df .join(df_in, on=["timestamp","source_acct"], ho... | 2 | 2 |
79,187,368 | 2024-11-14 | https://stackoverflow.com/questions/79187368/how-to-use-init-py-to-create-a-clean-api | Problem Description: I am trying to create a local API for my team. I think I understand broadly the mechanics of _init_.py. Let's say we have the below package structure: API/ ├── __init__.py # Top-level package init file └── core/ ├── __init__.py # Core module init file ├── calculator.py └── exceptions.py Now if I b... | The typical best practice is to define an __all__ list in each sub-module with names you want the sub-module to export so that the parent module can import just those names with a star import. Names that you don't want exposed should be named with a leading underscore by convention so that the linters will warn the use... | 2 | 1 |
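A self-contained illustration of the `__all__` convention the answer recommends (using `exec` on a string in place of real package files, purely so the example is runnable here):

```python
# Only names listed in __all__ are bound by `from module import *`;
# underscore-prefixed names are private by convention and flagged by linters.
module_source = """
__all__ = ["add"]

def add(a, b):
    return a + b

def _helper():          # private by naming convention
    pass

def subtract(a, b):     # public name, but excluded from star imports
    return a - b
"""

namespace = {}
exec(module_source, namespace)

# The names a star import would bind:
star_imported = namespace["__all__"]
print(star_imported)
```

In a real package, `API/__init__.py` would typically do `from .core.calculator import *` so that only the curated `__all__` names surface at the top level.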
79,191,312 | 2024-11-15 | https://stackoverflow.com/questions/79191312/how-to-sort-python-pandas-dataframe-in-repetitive-order-after-groupby | I have a dataset which is sorted in this order: col1 col2 col3 a 1 r a 1 s a 2 t a 2 u a 3 v a 3 w b 4 x b 4 y b 5 z b 5 q b 6 w b 6 e I want it to be sorted in the following order: col1 col2 col3 a 1 r a 2 t a 3 v a 1 s a 2 u a 3 w b 4 x b 5 z b 6 w b 4 y b 5 ... | Use groupby.cumcount to form a secondary key for sorting: out = (df.assign(key=lambda x: x.groupby(['col1', 'col2']).cumcount()) .sort_values(by=['col1', 'key', 'col2']) .drop(columns='key') ) Note that you can avoid creating the intermediate column using numpy.lexsort: import numpy as np out = df.iloc[np.lexsort([df[... | 2 | 3 |
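The `cumcount` approach from the answer, run end to end on the question's data:

```python
import pandas as pd

# Within each (col1, col2) pair, cumcount() numbers the repeats (0, 1, ...);
# sorting by (col1, repeat-number, col2) interleaves the pairs as requested.
df = pd.DataFrame({
    "col1": list("aaaaaabbbbbb"),
    "col2": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "col3": list("rstuvwxyzqwe"),
})

out = (df.assign(key=lambda x: x.groupby(["col1", "col2"]).cumcount())
         .sort_values(by=["col1", "key", "col2"])
         .drop(columns="key"))
print(out)
```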
79,191,017 | 2024-11-15 | https://stackoverflow.com/questions/79191017/how-to-check-if-one-dictionary-containing-list-is-a-subset-of-another-dictionary | Hi I have two dictionaries defined which contains lists dict_1 = {'V1': ['2024-11-07', '2024-11-08'], 'V2': ['2024-11-07', '2024-11-08']} dict_2 = {'V1': ['2024-11-08'], 'V2': ['2024-11-07']} both items (key and val) from dict_2 above are subset of dict_1 so I wanted return true in both cases. I tried to use res = set... | You can check for each key of dict_2 that its sublist is a subset of the corresponding sublist of dict_1: def dict_issubset(maybe_subset, maybe_superset): return { key: set(sublist).issubset(maybe_superset[key]) for key, sublist in maybe_subset.items() } so that: dict_1 = {'V1': ['2024-11-07', '2024-11-08'], 'V2': ['2... | 3 | 2 |
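A runnable version of the per-key subset check, plus a single boolean (`all(...)`) for "every key's list is a subset":

```python
# For each key of the candidate subset dict, test whether its list of
# values is a subset of the corresponding list in the superset dict.
def dict_issubset(maybe_subset, maybe_superset):
    return {
        key: set(sublist).issubset(maybe_superset[key])
        for key, sublist in maybe_subset.items()
    }

dict_1 = {'V1': ['2024-11-07', '2024-11-08'], 'V2': ['2024-11-07', '2024-11-08']}
dict_2 = {'V1': ['2024-11-08'], 'V2': ['2024-11-07']}

per_key = dict_issubset(dict_2, dict_1)
overall = all(per_key.values())
print(per_key, overall)
```

Note this assumes every key of the candidate subset also exists in the superset dict; keys missing from `maybe_superset` would raise a `KeyError`, which `maybe_superset.get(key, ())` would turn into `False` instead.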
79,190,189 | 2024-11-14 | https://stackoverflow.com/questions/79190189/extract-uncaptured-raw-text-from-regex | I am given a regex expression that consists of raw text and capture groups. How can I extract all raw text snippets from it? For example: pattern = r"Date: (\d{4})-(\d{2})-(\d{2})" assert extract(pattern) == ["Date: ", "-", "-", ""] Here, the last entry in the result is an empty string, indicating that there is no raw... | What you are asking for is to extract literal tokens from a parsed regex pattern at the top level. If you don't mind tapping into the internals of the re package, you can see from the list of tokens of a given pattern parsed by re._parser.parse: import re pattern = r"\(born in (.*)\)" print(*re._parser.parse(pattern).d... | 2 | 3 |
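A sketch of an `extract()` built on the approach in this row. It walks the top-level tokens of the parsed pattern, accumulating `LITERAL` characters and flushing the buffer at every non-literal token (such as a capture group). Note this taps CPython's private regex parser (`re._parser` on 3.11+, `sre_parse` before), so it may break across interpreter versions.

```python
import re

# Private-internals access: re._parser on Python 3.11+, sre_parse earlier.
try:
    _parser = re._parser
except AttributeError:  # Python < 3.11
    import sre_parse as _parser

def extract(pattern):
    pieces, buf = [], []
    for op, arg in _parser.parse(pattern):
        if op is _parser.LITERAL:
            buf.append(chr(arg))        # arg is the literal's codepoint
        else:
            pieces.append("".join(buf))  # flush at each non-literal token
            buf = []
    pieces.append("".join(buf))          # trailing raw text ("" if none)
    return pieces

print(extract(r"Date: (\d{4})-(\d{2})-(\d{2})"))
```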
79,189,825 | 2024-11-14 | https://stackoverflow.com/questions/79189825/use-brush-for-transform-calculate-in-interactive-altair-char | I have an interactive plot in altair/vega where I can select points and I see a pie chart with the ratio of the colors of the selected points. import altair as alt import numpy as np import polars as pl selection = alt.selection_interval(encodings=["x"]) base = ( alt.Chart( pl.DataFrame( { "x": list(np.random.rand(100)... | There might be an easier way to do this, but the following works: alt.hconcat( base.encode(x="x:Q", y="y:Q"), ( base.mark_arc(theta=4) .transform_joinaggregate(class_total='count()', groupby=['class']) .transform_filter(selection) # Including class_total in the groupby just so that column is not dropped since we need i... | 1 | 2 |
79,190,321 | 2024-11-14 | https://stackoverflow.com/questions/79190321/resampling-by-group-in-polars | I'm trying to build a Monte Carlo simulator for my data in Polars. I am attempting to group by a column, resample the groups and then, unpack the aggregation lists back in their original sequence. I've got it worked out up until the last step and I'm stuck and beginning to think I've gone about this in the wrong way. d... | As @jqurious pointed out in the comments, this is easily solved with... df_resampled.explode(pl.exclude("colA")) | 2 | 2 |
79,190,108 | 2024-11-14 | https://stackoverflow.com/questions/79190108/how-to-generate-an-array-which-is-a-multiple-of-original | I'm trying to upsize OpenCV images in Python in such a manner that the individual pixels are spread out by an integral factor; I use this to visually examine deep detail and individual pixel values can be seen (using cv2.imshow in this instance). For example, an array: [[1,2], [3,4]] And a factor of 2 means I'd get: ... | You can use opencv's cv.resize with nearest-neighbor as interpolation method (cv.INTER_NEAREST) to achieve what you need: import cv2 as cv import numpy as np src = np.array([[1,2], [3,4]]) dst = cv.resize(src, (4,4), interpolation=cv.INTER_NEAREST) print(dst) Output: [[1 1 2 2] [1 1 2 2] [3 3 4 4] [3 3 4 4]] Live dem... | 1 | 3 |
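For integer factors, a pure-NumPy alternative to `cv.resize(..., interpolation=cv.INTER_NEAREST)` is to repeat each pixel along both axes — equivalent output for this case, and it needs no OpenCV install:

```python
import numpy as np

# Nearest-neighbor upscaling by an integer factor: each pixel becomes a
# factor x factor block. np.kron(a, np.ones((f, f), a.dtype)) also works.
def upscale(a, factor):
    return np.repeat(np.repeat(a, factor, axis=0), factor, axis=1)

src = np.array([[1, 2],
                [3, 4]])
print(upscale(src, 2))
```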
79,188,746 | 2024-11-14 | https://stackoverflow.com/questions/79188746/presenting-complex-table-data-in-chart-for-a-single-slide | Tables allow us to summarise complex information. I have a table similar to the following one (produced for this question) in my LaTeX document, like so: \documentclass{article} \usepackage{graphicx} % Required for inserting images \usepackage{tabularx} \usepackage{booktabs} \usepackage{makecell} \begin{document} \begin{t... | I would definitely go for some kind of heatmap. Any barplot-like graphic would be cluttered. import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = { 'Fruit': ['Apple', 'Banana', 'Orange', 'Grapes', 'Mango'], 'Data1-Precision': [0.61, 0.90, 0.23, 0.81, 0.31], 'Data1-Recall': [0.91, 0.32, 0.35,... | 1 | 2 |
79,188,329 | 2024-11-14 | https://stackoverflow.com/questions/79188329/pandas-find-duplicate-pairs-between-2-columns-of-data | I have a dataset that contains 3 columns. These are edge connections between nodes and the strength of the connection. What I am trying to do is find and merge the extra edges that can occur when the direction goes in the opposite direction. as a short example data_frame = pd.DataFrame({"A":["aa", "aa", "aa", "bb", "bb... | You can aggregate as frozenset, then perform a groupby.sum: out = (data_frame['C'] .groupby(data_frame[['A', 'B']].agg(frozenset, axis=1)) .sum() .reset_index() ) Output: index C 0 (bb, aa) 9 1 (cc, aa) 7 2 (dd, aa) 9 3 (bb, dd) 3 4 (ee, dd) 2 Variant to get the original columns: cols = ['A', 'B'] out = (data_frame ... | 1 | 3 |
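The `frozenset` trick from this row, run end to end. An undirected edge {A, B} hashes the same regardless of direction, so a groupby-sum over the frozensets merges both orientations. The sample values below are reconstructed to reproduce the answer's displayed totals, since the row truncates the question's original data.

```python
import pandas as pd

# Aggregate each (A, B) row into a frozenset so {aa, bb} == {bb, aa},
# then sum C per undirected edge.
data_frame = pd.DataFrame({
    "A": ["aa", "aa", "aa", "bb", "bb", "cc", "dd", "ee"],
    "B": ["bb", "cc", "dd", "aa", "dd", "aa", "aa", "dd"],
    "C": [5, 4, 2, 4, 3, 3, 7, 2],
})

out = (data_frame["C"]
       .groupby(data_frame[["A", "B"]].agg(frozenset, axis=1))
       .sum()
       .reset_index())
print(out)
```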
79,188,007 | 2024-11-14 | https://stackoverflow.com/questions/79188007/polars-equivalent-of-numpy-tile | df = pl. DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]}) print(df) shape: (3, 2) ┌──────┬──────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ i64 ┆ i64 │ ╞══════╪══════╡ │ 1 ┆ 4 │ │ 2 ┆ 5 │ │ 3 ┆ 6 │ └──────┴──────┘ I am looking for the polars equivalent of numpy.tile. Something along the line such as df.tile(2) or df.select(pl.... | You could concat several times the same DataFrame: out = pl.concat([df]*2) # for contiguous memory out = pl.concat([df]*2, rechunk=True) Or, for just 2 repeats vstack: out = df.vstack(df) Alternatively, using Expr.repeat_by + explode: N = 2 (df.with_columns(pl.all().repeat_by(pl.lit(N)), pl.int_ranges(pl.lit(N))) .ex... | 3 | 1 |
79,187,488 | 2024-11-14 | https://stackoverflow.com/questions/79187488/groupby-count-after-conditional-python | I'm trying to perform a groupby sum on a specific column in a pandas df, but I only want to perform the count above a certain threshold. For this example, it will be where B > 2. The groupby is on A and the count is on C. The correct output should be: x = 3 y = 9 df = pd.DataFrame(dict(A=list('ababaa'), B=[1, 1, 3, 4, ... | Use the mask on both sides: m = df['B'] > 2 df['Count'] = 0 df.loc[m, 'Count'] = df[m].groupby('A')['C'].transform('sum') print (df) A B C Count 0 a 1 9 0 1 b 1 9 0 2 a 3 0 3 3 b 4 9 9 4 a 5 1 3 5 a 6 2 3 Another idea is to use Series.where: m = df['B'] > 2 df['Count'] = m.groupby(df['A']).transform('sum').where(m, 0) Or nu... | 1 | 2 |
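The masked `transform('sum')` from the answer, run on the question's data (the `B` and `C` values are recovered from the output table shown in the answer):

```python
import pandas as pd

# Rows failing the threshold (B <= 2) keep Count 0; rows passing it get
# their group's sum of C computed over only the passing rows.
df = pd.DataFrame(dict(A=list("ababaa"), B=[1, 1, 3, 4, 5, 6], C=[9, 9, 0, 9, 1, 2]))

m = df["B"] > 2
df["Count"] = 0
df.loc[m, "Count"] = df[m].groupby("A")["C"].transform("sum")
print(df)
```

Index alignment is what makes the `.loc[m, ...]` assignment work: the transform result carries the masked rows' original index, so each value lands back on its own row.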
79,187,234 | 2024-11-14 | https://stackoverflow.com/questions/79187234/how-can-i-fix-the-python-repl-in-vs-code-with-python-3-13 | I'm having trouble sending code in Python file to the interactive REPL in VS Code (using Shift + Enter). Single line code and functions with one line work fine, but any code chunks with multiple lines raise IndentationErrors. Also KeyboardInterrupt shows up in the REPL every time code is sent. Tried updating and restar... | It's a known issue: https://github.com/microsoft/vscode-python/issues/24256 Yes, that KeyboardInterrupt is handled via #24422 So you give it a try tomorrow (it just got merged today Nov 12), and it wont be there! So apparently it will be fixed in the next vscode-python build (v2024.20.0 is currently the latest). | 2 | 2 |