Dataset schema (column name, dtype, observed min … max):

question_id      int64             59.5M … 79.7M
creation_date    string (date)     2020-01-01 00:00:00 … 2025-07-15 00:00:00
link             string (length)   60 … 163
question         string (length)   53 … 28.9k
accepted_answer  string (length)   26 … 29.3k
question_vote    int64             1 … 410
answer_vote      int64             -9 … 482
79,184,421
2024-11-13
https://stackoverflow.com/questions/79184421/converting-an-array-of-floats-into-rgba-values-in-an-efficient-way
I am trying to create a system to take an array of floats which range from 0.0 to 1.0 and convert them into RGBA values based on a lookup table. The output should be an array that is one dimension larger than the input, with the last dimension being size 4 and consisting of the RGBA values. Currently I have only been ab...
Use the following: shape = input_array.shape index = input_array[*np.indices(shape).reshape(2, -1)].astype(int) colour_array1 = cyan[index].reshape(4, *shape) Confirm the two are equal: np.allclose(colour_array, colour_array1,atol=0) Out[62]: True USE THE OTHER SOLUTION!!!
1
3
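The direct fancy-indexing approach the answer alludes to ("the other solution") can be sketched as follows; the lookup table `lut`, its size, and the input values are assumptions, not the poster's actual `cyan` table:

```python
import numpy as np

# Hypothetical 256-entry RGBA lookup table (name and contents are assumptions)
lut = np.linspace(0.0, 1.0, 256 * 4).reshape(256, 4)

values = np.array([[0.0, 0.5], [0.25, 1.0]])  # floats in [0, 1]

# Scale to table indices and index the LUT directly; the result gains one
# trailing dimension of size 4 (the RGBA channels)
idx = (values * (len(lut) - 1)).astype(int)
rgba = lut[idx]
print(rgba.shape)  # (2, 2, 4)
```

Plain integer-array indexing like `lut[idx]` broadcasts over the whole input at once, which is the efficient path the question was after.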
79,185,792
2024-11-13
https://stackoverflow.com/questions/79185792/mypy-doesnt-detect-a-type-guard-why
I am trying to teach myself how to use type guards in my new Python project in combination with pydantic-settings, and mypy doesn't seem to pick up on them. What am I doing wrong here? Code: import logging from logging.handlers import SMTPHandler from functools import lru_cache from typing import Final, Literal, TypeGu...
Make a separate ProdSettings class. ProdSettings will raise an error if any of those values are missing. class ProdSettings(BaseSettings): ENVIRONMENT: Literal["production"] BOT_TOKEN: SecretStr TOPIC_ID: int GROUP_CHAT_ID: int SMTP_HOST: str SMTP_USER: EmailStr SMTP_PASSWORD: SecretStr model_config = SettingsConfigDic...
1
2
79,186,037
2024-11-13
https://stackoverflow.com/questions/79186037/what-is-the-pandas-version-of-np-select
I feel very silly asking this. I want to set a value in a DataFrame depending on some other columns. I.e: (Pdb) df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=["animal"]) (Pdb) df animal 0 cow 1 dog 2 trout 3 salmon (Pdb) df["animal"] = np.select(df["animal"] == "dog", "canine", "not-canine") But...
np.select can be used here, but you need to wrap the conditions/assigned values in a list: df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=['animal']) df['animal'] = np.select([df['animal'] == 'dog'], ['canine'], 'not-canine') Since you have a single condition here, better use numpy.where: df['ani...
2
3
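A runnable version of the answer's two snippets, showing that `np.select` needs list-wrapped conditions/choices while `np.where` handles the single-condition case more simply:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([['cow'], ['dog'], ['trout'], ['salmon']], columns=['animal'])

# np.select expects *lists* of conditions and choices, plus a default
sel = np.select([df['animal'] == 'dog'], ['canine'], 'not-canine')

# with a single condition, np.where is the simpler tool
whe = np.where(df['animal'] == 'dog', 'canine', 'not-canine')

print(sel.tolist())  # ['not-canine', 'canine', 'not-canine', 'not-canine']
```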
79,185,216
2024-11-13
https://stackoverflow.com/questions/79185216/why-does-this-implementation-of-the-block-tridiagonal-thomas-algorithm-give-such
See below my attempt at implementing the block tridiagonal Thomas algorithm. If you run this however, you get a relatively large (10^-2) error in the block TDMA compared to the np direct solve (10^-15), even for this very simple case. More complicated test cases give larger errors - I think the errors start growing on ...
It is dangerous to define a and c with a different number of elements to b, as it could lead to the wrong indexing in the TDMA. If you change the line setting the last element of d_star to the following: d_star[-1] = np.linalg.solve(b[-1] - a[-1] @ C_star[-1], d[-1] - a[-1] @ d_star[-2]) this should now work. Note the...
1
2
79,185,543
2024-11-13
https://stackoverflow.com/questions/79185543/curly-brace-expansion-fails-on-bash-in-linux-when-called-from-python
Consider this curly brace expansion in bash: for i in {1..10}; do echo $i; done; I call this script from the shell (on macOS or Linux) and the curly brace does expand: $ ./test.sh 1 2 3 4 5 6 7 8 9 10 I want to call this script from Python, for example: import subprocess print(subprocess.check_output("./test.sh", she...
{1..10} is a bash feature, it is not defined in POSIX sh. It seems that subprocess.check_output("./test.sh", shell=True) invokes bash (or another shell which supports this feature) in the first example (macOS) and sh in your second example (Linux). See: https://www.shellcheck.net/wiki/SC3009 Actual meaning of 'shell=T...
1
1
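The answer's point can be demonstrated from Python by pinning the shell with the `executable` argument of `subprocess`; the `/bin/bash` path and the inline script here are platform assumptions rather than the poster's `test.sh`:

```python
import subprocess

script = 'for i in {1..3}; do echo $i; done'

# shell=True runs /bin/sh by default; on Debian/Ubuntu that is dash,
# where {1..3} is NOT expanded (on macOS /bin/sh is often bash, hence
# the behaviour difference the question observed)
default_sh = subprocess.check_output(script, shell=True, text=True)

# forcing bash makes the brace expansion work regardless of what sh is
bash = subprocess.check_output(script, shell=True,
                               executable='/bin/bash', text=True)
print(bash)  # "1\n2\n3\n"
```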
79,184,836
2024-11-13
https://stackoverflow.com/questions/79184836/python-tkinter-notebook-gets-resized-when-i-pack-a-ttk-treeview-on-the-second-ta
I am trying to write a program to time sailing races, and to simplify adding the boats to a race by selecting from a list. I have used a Treeview linked to a tk.StringVar() so that typing part of the person's or boat's name filters the list of boats to select. This works very well. I am writing this pro...
Without calling tree2.pack(...), the size of RaceFrame is still around 1600x880, so the frames inside the notebook will all have the size of the biggest frame, that is RaceFrame. However, when tree2.pack(...) is called, the size of RaceFrame will shrink to the size of tree2. So the size of the notebook will also be ...
2
1
79,184,247
2024-11-13
https://stackoverflow.com/questions/79184247/best-practices-for-using-property-with-enum-values-on-a-django-model-for-drf-se
Question: I'm looking for guidance on using @property on a Django model, particularly when the property returns an Enum value and needs to be exposed in a Django REST Framework (DRF) serializer. Here’s my setup: I’ve defined an Enum, AccountingType, to represent the possible accounting types: from enum import Enum clas...
Is there a reason serializers.ReadOnlyField() doesn’t work with @property when it returns an Enum? An Enum cannot be JSON serialized. Make AccountingType JSON serializable by making it a subclass of str as well: class AccountingType(str, Enum): ASSET = 'Asset' LIABILITY = 'Liability' UNKNOWN = 'Unknown' then it ...
2
2
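The str-mixin fix from the answer can be checked in isolation with the standard library alone (`PlainType` is a made-up counter-example, not from the question):

```python
import json
from enum import Enum

class PlainType(Enum):
    ASSET = 'Asset'

class AccountingType(str, Enum):
    ASSET = 'Asset'
    LIABILITY = 'Liability'
    UNKNOWN = 'Unknown'

# a plain Enum member is not JSON serializable
try:
    json.dumps(PlainType.ASSET)
except TypeError as exc:
    print('plain Enum fails:', exc)

# the str subclass serializes as its value
serialized = json.dumps({'accounting_type': AccountingType.ASSET})
print(serialized)  # {"accounting_type": "Asset"}
```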
79,183,948
2024-11-13
https://stackoverflow.com/questions/79183948/how-can-i-override-a-method-where-the-ellipsis-is-assigned-as-the-default-value
In Python v3.10, the following code generates a Pylance error stating (Expression of type "EllipsisType" cannot be assigned to parameter of type "int") from typing import Any from PySide6.QtGui import QStandardItem class A(QStandardItem): def data(self, role: int = ...) -> Any: return super().data(role) pass In QtGui....
... as the default value in a .pyi file does not mean a literal Ellipsis object. Rather, role: int = ... means that the parameter role is of type int and has a default value of that same type at runtime, but that value is omitted in the stub file. That said, you need to provide a default value of your own: class A(QSta...
2
2
79,183,144
2024-11-13
https://stackoverflow.com/questions/79183144/discard-rows-with-a-string-value-from-a-dataframe
I have a dataframe like this: C1 C2 C3 C4 1 foo asd 23 foo foo asd 43 3 foo asd 1 4 foo asd bar I'm trying to filter (and discard) all rows that have strings in C1 or C4 columns, my final dataframe must be: C1 C2 C3 C4 1 foo asd 23 3 foo asd 1 I'm trying to do this using "isNaN" but I'm not ...
You can try something like this: df.loc[df[['C1', 'C4']].apply(pd.to_numeric, errors='coerce').dropna(how='any').index] Output: C1 C2 C3 C4 0 1 foo asd 23 2 3 foo asd 1 Use pd.to_numeric with errors='coerce' to turn non-numeric values into NaN, which can then be used with dropna.
1
4
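The answer's one-liner, expanded into a runnable sketch on data reconstructed from the question's table:

```python
import pandas as pd

df = pd.DataFrame({'C1': ['1', 'foo', '3', '4'],
                   'C2': ['foo'] * 4,
                   'C3': ['asd'] * 4,
                   'C4': ['23', '43', '1', 'bar']})

# coerce non-numeric strings to NaN, then keep only rows where both
# C1 and C4 parsed successfully
num = df[['C1', 'C4']].apply(pd.to_numeric, errors='coerce')
out = df.loc[num.dropna(how='any').index]
print(out['C1'].tolist())  # ['1', '3']
```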
79,172,444
2024-11-09
https://stackoverflow.com/questions/79172444/accessing-the-end-of-of-a-file-being-written-while-live-plotting-of-high-speed-d
My question refers to the great answer of the following question: Real time data plotting from a high throughput source As the gen.py code of this answer was growing fast, I wrote own version gen_own.py below, which essentially imposes a delay of 1 ms before writing a new data on the file. I also adapted the code plot....
Even without the delay, note that only about 1 in 2000 lines is being read, printed and displayed; with a delay of 1 ms it is about 1 in 20 lines. But there is some issue in seeking to the end and reading, which causes the data to be empty several times; you can implement the tail function from this nice answer therefor...
3
1
79,161,068
2024-11-06
https://stackoverflow.com/questions/79161068/how-to-listen-for-hotkeys-in-a-separate-thread-using-python-with-win32-api-and-p
I’m setting up a hotkey system for Windows in Python, using the Win32 API and PySide6. I want to register hotkeys in a HotkeyManager class and listen for them in a separate thread, so the GUI remains responsive. However, when I move the listening logic to a thread, the hotkey events are not detected correctly. Here’s t...
Regarding the statement: "Here’s the code that works without using threads", there's nothing about code in the question that actually works. Let me detail: from win32con import VK_NUMPAD0, MOD_NOREPEAT would yield ImportError. Check [GitHub]: mhammond/pywin32 - feat: add MOD_NOREPEAT (RegisterHotKey) constant (that I ...
2
2
79,180,832
2024-11-12
https://stackoverflow.com/questions/79180832/custom-tk-scrollable-frame-scrolls-when-the-window-is-not-full
I've been trying to create a scrollable frame with Tkinter / ttk using the many answers on here and guides scattered around. The frame is to hold a table which can have rows added and deleted. One issue I'm having that I can't find elsewhere is that if there is space for more content within the frame, I can scroll t...
I found an answer thanks to the hint from @acw1668; I adjusted the <Configure> command to check the size of the scrollregion and compare it to the parent ScrollableFrame class window. If it is smaller, I change the scroll region to be the size of the ScrollableFrame. Here are the adjustments I made to my class, includi...
1
2
79,182,114
2024-11-12
https://stackoverflow.com/questions/79182114/error-importing-python-modules-in-nextflow-script-block
I have a similar problem to those described here and here. The code is as follows: process q2_predict_dysbiosis { publishDir 'results', mode: 'copy' input: path abundance_file path species_abundance_file path stratified_pathways_table path unstratified_pathways_table output: path "${abundance_file.baseName}_q2pd.tsv" ...
The Python import system uses the following sequence to locate packages and modules to import: The current working directory (i.e. $PWD): This is the directory from which the Python interpreter was launched. The PYTHONPATH environment variable: If set, this environment variable can specify additional directories for ...
1
2
79,182,908
2024-11-12
https://stackoverflow.com/questions/79182908/how-can-i-implement-email-verification-in-django
Completely stumped! I'm using the console as my email backend. I end up with False in token_generator.check_token as a result "Invalid or expired token." is displayed in my homepage when I navigate to say "http://localhost:8000/user/verify-email/?token=cgegv3-ec1fe9eb2cebc34e240791d72fb10d7d&email=test16@example.com" H...
The code you posted seems fine. The check_token method performs several checks and figuring out which exact one fails should lead you to a solution. You can add breakpoints or print statements in place where the Django package is installed or bring the code into your project. Since you're already subclassing PasswordRe...
2
2
79,172,747
2024-11-9
https://stackoverflow.com/questions/79172747/polars-ndcg-optimized-calculation
The problem here is to implement NDCG calculation on Polars that would be efficient for huge datasets. Main idea of NDCG is to calculate DCG and IDCG, let's skip the gain part and only think about discount part, which depends on ranks from ideal and proposed orderings. So the tricky part for me here is to properly and ...
In my testing, I got the fastest results by first getting the intersection: .list.set_intersection() There is currently no List API method to get the index positions, but it can be emulated if we .flatten() the lists .over() each "row". (we use with_row_index as the group id) .arg_true() is used to get the indexes w...
2
1
79,180,366
2024-11-12
https://stackoverflow.com/questions/79180366/unable-to-set-any-property-when-protecting-an-excel-worksheet-using-pywin32
I am using pywin32 for a project to automate a few excel files using python. In the excel file, I want to protect all the cells that contain a formula. So, I first unlock all the cells and then only lock those cells which have a formula. When I protect the worksheet with a password, I also pass all the relevant protect...
I tested originally on a Windows 10 PC with Excel 2013 and as stated your code worked. I then tried a Windows 10 PC with Excel 2021 and it appeared to exhibit the same issue as you, the protections remained as False after being set to True. But another run later and it all worked fine on that Excel version. I tested w...
2
0
79,182,496
2024-11-12
https://stackoverflow.com/questions/79182496/how-do-i-find-all-combinations-of-pairs-such-that-no-elements-of-the-combinatio
I have a list of number-letter pairs like: all_edges_array = [ [1,'a'],[1,'b'],[1,'c'], [2,'c'],[2,'d'], [3,'b'],[3,'c'] ] Notice that the input pairs are not a cross-product of the letters and numbers used - for example, [2, 'a'] is missing. I want to efficiently find the combinations of some number of pairs, such th...
It is always inefficient to generate all combinations and throw away unwanted combinations when you can selectively generate wanted combinations to begin with. The desired combinations can be most efficiently generated by recursively yielding combinations of unused letters paired with the number of the current recursio...
3
2
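The question's full output spec is truncated, so the following is one reading of the answer's recursive approach: recurse over the numbers in order, pairing each with a still-unused letter, and yield only complete selections. Names (`combos`, `letters`) are mine:

```python
from collections import defaultdict

all_edges = [[1, 'a'], [1, 'b'], [1, 'c'],
             [2, 'c'], [2, 'd'],
             [3, 'b'], [3, 'c']]

# group the available letters by number
letters = defaultdict(list)
for n, letter in all_edges:
    letters[n].append(letter)
numbers = sorted(letters)

def combos(i=0, used=frozenset(), acc=()):
    """Yield one pair per number, never reusing a letter.

    Only wanted combinations are generated, so nothing is thrown away."""
    if i == len(numbers):
        yield acc
        return
    n = numbers[i]
    for letter in letters[n]:
        if letter not in used:
            yield from combos(i + 1, used | {letter}, acc + ((n, letter),))

result = list(combos())
print(len(result))  # 5 valid selections for this input
```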
79,181,977
2024-11-12
https://stackoverflow.com/questions/79181977/in-python-pytz-how-can-i-add-a-day-to-a-datetime-in-a-dst-aware-fashion-s
I'm doing some datetime math in python with the pytz library (although I'm open to using other libraries if necessary). I have an iterator that needs to increase by one day for each iteration of the loop. The problem comes when transitioning from November 3rd to November 4th in the Eastern timezone, which crosses the d...
It looks like you want "wall time", which is the same "wall clock" time the next day, regardless of daylight savings time transitions. I would use the built-in zoneinfo module. You may need to install the "1st party" tzdata module if using Windows to have up-to-date time zone information (pip install tzdata): import da...
1
2
79,179,293
2024-11-11
https://stackoverflow.com/questions/79179293/parallel-depth-first-ops-in-dagster-with-ops-graphs-and-jobs-together
(also posted on r/dagster) Dagster N00b here. I have a very specific use-case. My ETL executes the following steps: Query a DB to get a list of CSV files Go to a filesystem and for each CSV file: load it into DuckDB transform some columns to date transform some numeric codes to text categories export clean table to ...
If I understand correctly, you want depth-first processing, instead of breadth first? I think you might be able to trigger depth-first processing using a nested graph after the dynamic output step. You're also conceptually missing how to set dependencies between ops in Dagster. Something like this should work: @op def ...
2
2
79,181,689
2024-11-12
https://stackoverflow.com/questions/79181689/polars-read-csv-to-read-from-string-and-not-from-file
Is it possible to read from string with pl.read_csv()? Something like this, which would work: content = """c1, c2 A,1 B,3 C,2""" pl.read_csv(content) I know of course about this: pl.DataFrame({"c1":["A", "B", "C"],"c2" :[1,3,2]}) But it is error-prone with long tables and you have to count numbers to know which va...
pl.read_csv() accepts IO as source parameter. source: str | Path | IO[str] | IO[bytes] | bytes So you can use io.StringIO: from io import StringIO content = """ c1,c2 A,1 B,3 C,2 """ data = StringIO(content) pl.read_csv(data) shape: (3, 2) ┌─────┬─────┐ │ c1 ┆ c2 │ │ --- ┆ --- │ │ str ┆ i64 │ ╞═════╪═════╡ │ A ┆ 1 │...
2
5
79,180,528
2024-11-12
https://stackoverflow.com/questions/79180528/draw-a-circle-with-periodic-boundary-conditions-matplotlib
I am doing a project that involves lattices. A point of coordinates (x0, y0) is chosen randomly and I need to color blue all the points that are in the circle of center (x0, y0) and radius R and red all the other points and then draw a circle around. The tricky part is that there is periodic boundary conditions, meanin...
The comment from Tino_D gave me the answer. I imagined a bigger lattice: my lattice plus the 8 lattices surrounding it. I drew a total of 9 circles, with centers translated into each surrounding sublattice, and then restricted my plot to the original lattice. def draw_zone(self, filename, R): colormap = np.where(self.di...
3
3
79,177,394
2024-11-11
https://stackoverflow.com/questions/79177394/evaluate-expression-inside-custom-class-in-polars
I am trying to extend the functionality of polars to manipulate categories of Enum. I am following this guide and this section of documentation orig_df = pl.DataFrame({ 'idx': pl.int_range(5, eager=True), 'orig_series': pl.Series(['Alpha', 'Omega', 'Alpha', 'Beta', 'Gamma'], dtype=pl.Enum(['Alpha', 'Beta', 'Gamma', 'Om...
You can use .map_batches() @pl.api.register_expr_namespace('fct') class CustomEnumMethodsCollection: def __init__(self, expr: pl.Expr): self._expr = expr def rev(self) -> pl.Expr: return self._expr.map_batches(lambda s: s.cast(pl.Enum(s.cat.get_categories().reverse())) ) df = pl.DataFrame({ 'idx': pl.int_range(5, eage...
1
2
79,179,193
2024-11-11
https://stackoverflow.com/questions/79179193/calculating-the-correlation-coefficient-of-time-series-data-of-unqual-length
Suppose you have a dataframe like this data = {'site': ['A', 'A', 'B', 'B', 'C', 'C'], 'item': ['x', 'x', 'x', 'x', 'x', 'x'], 'date': ['2023-03-01', '2023-03-10', '2023-03-20', '2023-03-27', '2023-03-5', '2023-03-12'], 'quantity': [10,20,30, 20, 30, 50]} df_sample = pd.DataFrame(data=data) df_sample.head() Where you ...
Reshape to wide form with pivot_table and add zeros to missing data points, this will allow a correct comparison. You can then select the item you want and compute the correlation of all combinations of columns with corr: tmp = df_sample.pivot_table(index='date', columns=['item', 'site'], values='quantity', fill_value=...
1
1
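The answer's pipeline, assembled into a runnable form from the question's sample data: pivot to wide with zero-fill so the sites share a common date index, then correlate:

```python
import pandas as pd

data = {'site': ['A', 'A', 'B', 'B', 'C', 'C'],
        'item': ['x'] * 6,
        'date': ['2023-03-01', '2023-03-10', '2023-03-20',
                 '2023-03-27', '2023-03-05', '2023-03-12'],
        'quantity': [10, 20, 30, 20, 30, 50]}
df_sample = pd.DataFrame(data)

# wide form: one column per (item, site); zeros where a site has no sale
tmp = df_sample.pivot_table(index='date', columns=['item', 'site'],
                            values='quantity', fill_value=0)

# site-vs-site correlation matrix for item 'x'
corr = tmp['x'].corr()
print(corr.shape)  # (3, 3)
```

Zero-filling is the answer's modelling choice: a date with no record is treated as zero quantity rather than missing, which makes the unequal-length series comparable.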
79,161,150
2024-11-06
https://stackoverflow.com/questions/79161150/how-to-publish-an-update-to-pip-just-for-older-python-versions
I have a library published on pip which previously had a minimum Python version of 3.7, and now has a minimum Python version of 3.9. This means that, when a user with Python 3.7 or 3.8 does pip install my-package, they silently get the last version that was published with 3.7 support, rather than the most recent versio...
Note: pip is an installer, not a package index. In this answer I'm guessing that "published on pip" means "published to a repository which pip can install from", such as PyPI. Since you're publishing to a package index, you must have version strings because it is a required field in the packaging metadata. Since you'...
3
3
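The standard mechanism this situation calls for (an assumption about where the truncated answer is heading) is declaring `requires-python` in the packaging metadata, so that pip on 3.7/3.8 refuses new releases instead of silently resolving to an old one. A minimal pyproject.toml fragment, with a placeholder project name:

```toml
[project]
name = "my-package"          # placeholder, not the poster's package
version = "2.0.0"
requires-python = ">=3.9"    # pip on Python 3.7/3.8 will not select this release
```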
79,178,550
2024-11-11
https://stackoverflow.com/questions/79178550/python-polars-method-find-returns-an-incorrect-value-for-strings-when-using-u
The behavior of the str.find() method in polars differs from str.find() in pandas in Python. Is there a parameter for processing utf-8 characters? Or is it a bug? Example code in python: import polars as pl # Define a custom function that wraps the str.find() method def find_substring(s, substring): return int(s.find(s...
There is an open issue on the Github tracker. https://github.com/pola-rs/polars/issues/14190 I think we should update the docs to make clear that we return byte offsets. As for the actual goal - it seems you want split a string into 2 parts and take the right hand side. You could use regex e.g. with .str.extract()...
2
1
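The byte-offset vs character-offset distinction at the heart of the answer can be shown with plain Python, no polars needed:

```python
# Python's str.find returns *character* indices; a byte-oriented engine
# (like polars) reports *byte* offsets, which diverge as soon as a
# multi-byte UTF-8 character precedes the match.
s = 'héllo'
char_idx = s.find('llo')                    # 'h', 'é' = 2 characters
byte_idx = s.encode('utf-8').find(b'llo')   # 'é' is 2 bytes in UTF-8
print(char_idx, byte_idx)  # 2 3
```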
79,179,299
2024-11-11
https://stackoverflow.com/questions/79179299/split-dataframe-according-to-sub-lists-by-cutoff-value
I want to split the dataframe according to the sublists that result from dividing a list into parts where the only value above a cutoff is the first. e.g. Cutoff = 3 [4,2,3,5,2,1,6,7] => [4,2,3], [5,2,1], [6], [7] I still need to keep track of the other fields in the dataframe. I should get the given result from this df d...
If you explode/flatten the lists, you can use .cum_sum() of the comparison .over() each "group" to assign a group id/index to identify each sublist. (df.explode("time_deltas", "other_field") .with_columns(bool = pl.col("time_deltas") > cutoff) .with_columns(index = pl.col("bool").cum_sum().over("uid")) ) shape: (13, 5...
2
1
79,177,302
2024-11-11
https://stackoverflow.com/questions/79177302/polars-read-excel-incorrectly-adds-suffix-to-column-names
I am using polars v1.12.0 to read data from an Excel sheet. pl.read_excel( "test.xlsx", sheet_name="test", has_header=True, columns=list(range(30, 49)) ) The requested columns are being imported correctly. However, polars adds a suffix _1 to every column name. There's one column header where a _3 has been added. In th...
I don't think you have made a mistake, the behaviour just seems to differ wildly between different engines, and none of them do what you want to do. I have the following excel: alpha | bravo | charlie | charlie | delta | echo | foxtrot | alfa 1 | a | 1 | a | 1 | a | 1 | a For the following code snippet: df = pl.read_...
1
2
79,178,919
2024-11-11
https://stackoverflow.com/questions/79178919/count-elements-in-a-row-and-create-column-counter-in-pandas
I have created the following pandas dataframe: import pandas as pd ds = {'col1' : ['A','A','B','C','C','D'], 'col2' : ['A','B','C','D','D','A']} df = pd.DataFrame(data=ds) The dataframe looks like this: print(df) col1 col2 0 A A 1 A B 2 B C 3 C D 4 C D 5 D A The possible values in col1 and col2 are A, B, C and D. I n...
Here is a way using pd.get_dummies() df.join(pd.get_dummies(df,prefix='',prefix_sep='').T.groupby(level=0).sum().T.rename('count{}'.format,axis=1)) and here is a way using value_counts() df.join(df.stack().groupby(level=0).value_counts().unstack(fill_value = 0).rename('count{}'.format,axis=1)) Output: col1 col2 coun...
3
3
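The answer's first (`get_dummies`) route, run end to end on the question's data:

```python
import pandas as pd

ds = {'col1': ['A', 'A', 'B', 'C', 'C', 'D'],
      'col2': ['A', 'B', 'C', 'D', 'D', 'A']}
df = pd.DataFrame(ds)

# one indicator column per value per source column; transposing and
# grouping on the (duplicate) value names sums the two sources per row
counts = (pd.get_dummies(df, prefix='', prefix_sep='')
            .T.groupby(level=0).sum().T
            .rename('count{}'.format, axis=1))
out = df.join(counts)
print(out.loc[0, 'countA'])  # 2 -- row 0 is ('A', 'A')
```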
79,176,006
2024-11-10
https://stackoverflow.com/questions/79176006/why-are-parameterized-queries-not-possible-with-do-end
The following works fine: conn = psycopg.connect(self.conn.params.conn_str) cur = conn.cursor() cur.execute(""" SELECT 2, %s; """, (1,), ) But inside a DO: cur.execute(""" DO $$ BEGIN SELECT 2, %s; END$$; """, (1,), ) it causes psycopg.errors.UndefinedParameter: there is no parameter $1 LINE 1: SELECT 2, $1 ^ QUERY: ...
import psycopg from psycopg import sql con = psycopg.connect("postgresql://postgres:postgres@127.0.0.1:5432/test") cur = con.cursor() cur.execute(sql.SQL(""" DO $$ BEGIN PERFORM 2, {}; END$$; """).format(sql.Literal(1)) ) This uses the sql module of psycopg to build a dynamic SQL statement using proper escaping. DO ca...
3
2
79,177,845
2024-11-11
https://stackoverflow.com/questions/79177845/matplotlib-patches-rectangle-produces-rectangles-with-unequal-size-of-linewidth
I am using matplotlib to plot the columns of a matrix as separate rectangles using matplotlib.patches.Rectangle. Somehow, all the "inner" lines are wider than the "outer" lines? Does somebody know what's going on here? Is this related to this Github issue? Here's an MRE: import numpy as np import matplotlib.pyplot as p...
Your rectangles' edges are getting clipped by the axis boundaries. Add clip_on=False to Rectangle: rect = patches.Rectangle( (j * (cell_size + column_gap), i * cell_size), # x position with gap applied to columns only cell_size, # width of each cell cell_size, # height of each cell facecolor=color, edgecolor=edgecolor...
3
2
79,177,384
2024-11-11
https://stackoverflow.com/questions/79177384/count-occurrences-of-each-type-of-event-within-a-time-window-in-pandas
I have a DataFrame with the following structure: event_timestamp: Timestamp of each event. event_type: Type of the event. I need to add a column for each unique event_type to count how many events of that event_type occurred within a 10ms window before each row's event_timestamp. data = { 'event_timestamp': [ '2024-0...
IIUC, you could produce the columns with get_dummies, then perform a rolling.sum on 10ms to get the counts, finally merge back to the original DataFrame: out = df.merge(pd .get_dummies(df['event_type']).add_prefix('count_') .set_axis(df['event_timestamp']).sort_index() .rolling('10ms').sum().convert_dtypes(), left_on='...
2
1
79,176,155
2024-11-11
https://stackoverflow.com/questions/79176155/how-to-bump-python-package-version-using-uv
Poetry has the version command to increment a package version. Does the uv package manager have anything similar?
Currently uv package manager does not have a built-in command to bump package versions like poetry's version command. You can manually edit pyproject.toml or automate it with a script. For example: import toml from typing import Literal def bump_version(file_path: str, part: Literal["major", "minor", "patch"] = "patch"...
4
3
79,175,533
2024-11-10
https://stackoverflow.com/questions/79175533/rolling-sum-using-duckdbs-python-relational-api
Say I have data = {'id': [1, 1, 1, 2, 2, 2], 'd': [1, 2, 3, 1, 2, 3], 'sales': [1, 4, 2, 3, 1, 2]} I want to compute a rolling sum with window of 2 partitioned by 'id' ordered by 'd' Using SQL I can do: duckdb.sql(""" select *, sum(sales) over w as rolling_sales from df window w as (partition by id order by d rows bet...
I'm not an expert in DuckDB relational API but this works: rel.sum( 'sales', projected_columns='*', window_spec='over (partition by id order by d rows between 1 preceding and current row) as rolling_sales' ) ┌───────┬───────┬───────┬───────────────┐ │ id │ d │ sales │ rolling_sales │ │ int64 │ int64 │ int64 │ int128 │...
3
1
79,175,860
2024-11-10
https://stackoverflow.com/questions/79175860/invert-colors-of-a-mask-in-pygame
I have a pygame mask of a text surface that I would like to invert, in the sense that the black becomes white, and the white becomes black. The black is the transparent part of the text, and the white is the non-transparent part, but I'd like it flipped so I can make a text outline. I can't really figure it out. If any...
There are different possibilities. You can invert the mask with invert. mask = pygame.mask.from_surface(location_text) mask.invert() mask_surf = mask.to_surface() mask_surf.set_colorkey((255, 255, 255)) You can also set the colors when you turn the mask into a surface. Make the setcolor black and the unsetcolor white ...
3
3
79,175,794
2024-11-10
https://stackoverflow.com/questions/79175794/is-a-lock-recommended-in-python-when-updating-a-bool-in-one-direction-in-a-threa
Is a lock necessary in a situation where thread 1 checks if a bool has flipped from False to True periodically within a loop, the bool being updated in thread 2? As I understand a bool is atomic in python, it should not be possible for the bool to be incorrectly updated or take on a garbage value like in C++ for exampl...
"As I understand a bool is atomic in python, it should not be possible for the bool to be incorrectly updated or take on a garbage value like in C++ for example" Practically, a bool variable in CPython is going to be a pointer to one of the two objects Py_True or Py_False. The implementation details shouldn't matter, b...
1
2
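Whatever the answer concludes about raw bool flags, the idiomatic stdlib tool for a one-way "done" flip is `threading.Event`: it makes the intent explicit and replaces the polling loop with a blocking wait. A minimal sketch (the 0.05 s sleep just simulates work):

```python
import threading
import time

done = threading.Event()

def worker():
    time.sleep(0.05)   # simulate some work
    done.set()         # flip the flag exactly once, thread-safely

t = threading.Thread(target=worker)
t.start()

# instead of spinning on a bare bool, block with a timeout
finished = done.wait(timeout=2.0)
t.join()
print(finished)  # True
```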
79,175,123
2024-11-10
https://stackoverflow.com/questions/79175123/adding-previous-rows-to-generate-another-row-in-pandas
I am trying to solve a problem in my data-frame df.head() 0 Key value 1 10 500 2 11 500 3 12 600 4 12 800 5 13 1000 6 13 1200 . . . 200++ output is to put the values in the above data-frame or have another data-frame with all the values of above with additional info show as below. Expected Output: 0 Key value 1 10 500...
A possible solution, which takes the following steps: First, groupby is used to organize df by Key, and then filter is applied with a lambda function that selects only groups with more than one row, ensuring sums are computed only for repeated Key values. Next, this filtered group is re-aggregated with groupby to cal...
1
2
79,174,726
2024-11-10
https://stackoverflow.com/questions/79174726/is-there-a-way-to-show-which-line-of-a-cell-is-being-proccesed-in-jupyter-lab
I want to know if there's a way to show an indicator or something that tells me at which line my Jupyter Lab code is while executing. Google Colab does this with a little green arrow next to the line (see image below), and I'm wondering if there's something similar for JL.
Not quite like the way that corporation adapted the open source product, but some of these options may get you to what you want.... JupyterLab lets you attach a console to monitor everything as it runs. You want to make sure you activate Show All Kernel Activity. See here, here, and here. See more about JupyterLab's co...
1
2
79,173,053
2024-11-09
https://stackoverflow.com/questions/79173053/how-to-convert-character-indices-to-bert-token-indices
I am working with a question-answer dataset UCLNLP/adversarial_qa. from datasets import load_dataset ds = load_dataset("UCLNLP/adversarial_qa", "adversarialQA") How do I map character-based answer indices to token-based indices after tokenizing the context and question together using a tokenizer like BERT. Here's an e...
You should encode both the question and context, locate the token span for the answer within the tokenized context, and update the dataset with the token-level indices. The following function does the above for you: def get_token_indices(example): # Tokenize with `return_offsets_mapping=True` to get character offsets f...
2
2
79,164,983
2024-11-07
https://stackoverflow.com/questions/79164983/numerically-integrating-signals-with-absolute-value
Suppose I have a numpy s array of acceleration values representing some signal sampled at a fixed rate dt. I want to compute the cumulative absolute velocity, i.e. np.trapz(np.abs(s), dx=dt). This is great except if dt is "large" (e.g. 0.01) and the signal s is both long and crossing between positive and negative value...
This answer focuses more on the performance/vectorization aspects than numerical integration. Faster implementation Is ideally vectorised so that integration can be done for tens of thousands of signals at once in a reasonable time? Technically, Numba code that can run with njit (and without errors) is always vectorized b...
4
1
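The numerical issue the question describes (trapezoidal integration of |s| over-estimating near sign changes) can be fixed by splitting each sampling interval at its linear-interpolation zero crossing before applying the trapezoidal rule. A plain-numpy sketch; the function name is mine and this addresses the accuracy side, not the answer's performance discussion:

```python
import numpy as np

def cav(s, dt):
    """Cumulative absolute velocity of `s`, treating the signal as
    piecewise linear and splitting each interval at its zero crossing."""
    s = np.asarray(s, dtype=float)
    a, b = s[:-1], s[1:]
    cross = (a * b) < 0                      # sign change inside the interval
    t = np.where(cross, a / (a - b), 0.5)    # crossing position in [0, 1]
    # |s| over a crossing interval forms two triangles meeting at the zero
    piece = np.where(cross,
                     0.5 * (np.abs(a) * t + np.abs(b) * (1 - t)),
                     0.5 * (np.abs(a) + np.abs(b)))
    return float(np.sum(piece) * dt)

# s goes 1 -> -1 linearly: the exact integral of |s| is two triangles
# of area 0.25 each, i.e. 0.5; the naive trapezoid of |s| gives 1.0
print(cav([1.0, -1.0], 1.0))  # 0.5
```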
79,172,783
2024-11-09
https://stackoverflow.com/questions/79172783/polars-sql-case
Is this a bug, non-conformant behavior, or standardized behavior? A Polars SQL statement is calculating the average of values based on a condition. The CASE WHEN doesn't include an ELSE because those values should be ignored. Polars complains that an ELSE is required. If I include an ELSE, with no value, it's a syntax ...
Is this a bug, non-conformant behavior, or standardized behavior? Fun-fact: zero RDBMS today implement the ISO SQL specification de-jure. In my mind the spec is both aspirational but also something that no-one should actually conform to (because ISO SQL is just so horribly unergonomic, and the ISO itself can be bette...
2
3
79,171,202
2024-11-08
https://stackoverflow.com/questions/79171202/jinja-templating-with-recursive-in-dict-doesnt-works
I'm stuck in a Jinja implementation problem. Here is my little python script: path = Path(__file__).parent env = Environment( loader=FileSystemLoader(path / "templates") ) template = env.get_template("template1.rst") rendered = template.render(sample={"a": {"b": "c"}}) And here is my template for jinja: .. toctree:: :...
immediate fix When you start the loop, you use sample.items() iterator - {% for k, v in sample.items() recursive %} ^^^^^^^^^^^^^^ When you recur, you are passing the dict itself - {%- else %} {{ loop(v) }} ^ Simply change this to - {%- else %} {{ loop(v.items()) }} ^^^^^^^^^ naive test An improvement to the enti...
2
1
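The fix the answer names (pass `v.items()` to `loop()`, matching the iterator the `for` started with) in a minimal, self-contained template; the bracket output format is made up for illustration, not the poster's toctree template (assumes Jinja2 is installed):

```python
from jinja2 import Environment

env = Environment()
t = env.from_string(
    "{% for k, v in d.items() recursive %}"
    "{{ k }}{% if v is mapping %}[{{ loop(v.items()) }}]{% endif %}"
    "{% endfor %}"
)

# loop(v.items()) recurses with the same (key, value) shape as the outer
# for; loop(v) alone would hand the dict's keys to the tuple unpacking
print(t.render(d={'a': {'b': 'c'}}))  # a[b]
```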
79,172,182
2024-11-09
https://stackoverflow.com/questions/79172182/why-does-sympy-perfect-power-64-return-false
The documentation for sympy.perfect_power says: Return (b, e) such that n == b**e if n is a unique perfect power with e > 1, else False (e.g. 1 is not a perfect power). A ValueError is raised if n is not Rational. Yet evaluating sympy.perfect_power(-64) results in False. However, -64 == (-4)**3, so sympy.perfect_powe...
At the documentation, click on [source] and you'll find this: if n < 0: pp = perfect_power(-n) if pp: b, e = pp if e % 2: return -b, e return False Given -64, that first computes that 64 is 2^6 and then gives up because 6 isn't odd. I do think it's a bug and it should try to remove the factors of 2 from the exponent. May...
4
6
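The suggested fix (fold factors of 2 out of the exponent until it is odd) can be sketched as a wrapper; `signed_perfect_power` is a hypothetical helper, not part of SymPy:

```python
import sympy

def signed_perfect_power(n):
    # hypothetical workaround: for negative n, halve the exponent
    # (squaring the base) until it is odd, so n == (-b)**e can hold
    if n >= 0:
        return sympy.perfect_power(n)
    pp = sympy.perfect_power(-n)
    if not pp:
        return False
    b, e = pp
    while e % 2 == 0:          # e.g. 64 = 2**6  ->  4**3
        b, e = b * b, e // 2
    return (-b, e) if e > 1 else False
```

For -64 this yields (-4, 3); for -16 (= -2**4, no odd exponent > 1) it correctly returns False.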
79,171,631
2024-11-08
https://stackoverflow.com/questions/79171631/how-do-i-determine-whether-a-zoneinfo-is-an-alias
I am having trouble identifying whether a ZoneInfo is built with an alias: > a = ZoneInfo('Atlantic/Faeroe') > b = ZoneInfo('Atlantic/Faroe') > a == b False It seems like these ZoneInfos are identical in practice. How do I identify that they are the same, as opposed to e.g. EST and UTC which are different?
The tzdata package publishes the data for the IANA time zone database. If installed, one could compare the data files for those zones located in: <PYTHON_DIR>\Lib\site-packages\tzdata\zoneinfo\Atlantic The data files for Faeroe and Faroe compare as binary same. Programmatically, the binary data can be directly read an...
2
3
79,169,874
2024-11-08
https://stackoverflow.com/questions/79169874/performance-issues-of-using-lambda-for-assigning-variables-in-pandas-in-a-method
When working with pandas dataframes, I like to use method chains, because it makes the workflow similar to the tidyverse approach in R, where you use a string of pipes. Consider the example in this answer: N = 10 df = ( pd.DataFrame({"x": np.random.random(N)}) .assign(y=lambda d: d['x']*0.5) .assign(z=lambda d: d.y * 2...
Your operations are vectorized, the lambda is not operating as the level of the values but rather for the column names. The running time of the function will be negligible for large enough datasets. However, each assign call is generating a new DataFrame. You could use a single assign call, this would avoid generating ...
2
5
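The single-assign suggestion can be sketched as follows; later keyword arguments in one `assign` call may reference columns created by earlier ones, so only one intermediate DataFrame is built:

```python
import numpy as np
import pandas as pd

N = 10
df = pd.DataFrame({"x": np.random.random(N)}).assign(
    y=lambda d: d["x"] * 0.5,
    z=lambda d: d["y"] * 2,   # 'y' already exists at this point
)
```

Since z = (x * 0.5) * 2, the z column equals x exactly.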
79,169,451
2024-11-08
https://stackoverflow.com/questions/79169451/calculating-sums-of-nested-dictionaries-into-the-dictionary
I'm writing a program that helps collate data from a few sources to perform analysis on. I currently have a dictionary that looks like this: output = { "main": { "overall": { "overall": { "total": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Profit": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, "Loss": {"q1": 0,"q2": 0,"q3": 0,"q4": 0}, ...
You appear to have two levels of nested dictionaries within output["main"] and want to generate totals for each quarter and each permutation of level: for main_key, main_value in output["main"].items(): if main_key == "overall": continue for sub_key, sub_value in main_value.items(): if sub_key == "overall": continue fo...
1
1
79,169,880
2024-11-08
https://stackoverflow.com/questions/79169880/how-to-add-a-row-for-sorted-multi-index-dataframe
I have a multiindex dataframe, which comes from groupby. Here is a demo: In [54]: df = pd.DataFrame({'color': ['blue', 'grey', 'blue', 'grey', 'black'], 'name': ['pen', 'pen', 'pencil', 'pencil', 'box'],'price':[2.5, 2.3, 1.5, 1.3, 5.2],'bprice':[2.2, 2, 1.3, 1.2, 5.0]}) In [55]: df Out[55]: color name price bprice 0 b...
Compute a groupby.sum on a, then append a level with * and concat, finally sort_index based on color: # compute the sum per color/name # sort by descending price a = (df.groupby(['color', 'name'])[['price', 'bprice']].sum() .sort_values(by='price', ascending=False) ) # compute the sum per color # concatenate, sort_inde...
1
3
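A condensed sketch of the steps described in the answer, using the question's demo data (the final ordering is simplified to a plain `sort_index`):

```python
import pandas as pd

df = pd.DataFrame({
    "color": ["blue", "grey", "blue", "grey", "black"],
    "name": ["pen", "pen", "pencil", "pencil", "box"],
    "price": [2.5, 2.3, 1.5, 1.3, 5.2],
    "bprice": [2.2, 2.0, 1.3, 1.2, 5.0],
})

# per color/name sums, sorted by descending price
a = (df.groupby(["color", "name"])[["price", "bprice"]].sum()
       .sort_values(by="price", ascending=False))

# per-color totals, labelled "*" on the name level, appended and sorted
tot = df.groupby("color")[["price", "bprice"]].sum()
tot.index = pd.MultiIndex.from_product([tot.index, ["*"]],
                                       names=["color", "name"])
out = pd.concat([a, tot]).sort_index()
```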
79,168,495
2024-11-08
https://stackoverflow.com/questions/79168495/interpolating-battery-capacity-data-in-logarithmic-scale-with-python
I'm working on interpolating battery capacity data based on the relationships between hour_rates, capacities and currents. Here’s a sample of my data: import numpy as np import pandas as pd from scipy.interpolate import interp1d import matplotlib.pyplot as plt # Data from Rolls S-480 flooded battery capacity_data = [ [...
Looking on your currency data described relations: hour_rates (h) = capacities (Ah) / currents (A) capacities (Ah) = hour_rates (h) * currents (A) currents (A) = capacities (Ah) / hour_rates (h) These are not met explicitly in the data you presented. I've created the data which are exactly like the presented results: ...
2
1
79,166,062
2024-11-07
https://stackoverflow.com/questions/79166062/is-pythons-cffi-an-adequate-tool-to-parse-c-definitions-from-a-header-file
From python, I want to fetch the details of structures/arrays/enums defined in C headers: the list of defined types, the list and types of struct members, the names and values defined in enums, the size of arrays, etc. I don't plan to link a C lib in python, but I wanted to use a battle-tested tool to "parse" C definit...
Self reply here! I made progress! The CFFI documentation didn't help much :D using dir() on the returned objects did. Two options were found; the easiest one is this snippet (more complete answer at the end): if k.kind == 'struct' : for f in k.fields : name, obj = f print(name, obj.type, obj.offset) where k is obtained...
2
0
79,164,605
2024-11-06
https://stackoverflow.com/questions/79164605/pyqt6-dbus-signal-not-being-received
I'm trying to create a system to keep track of the currently-playing media via mpris. Adapted from this question's PyQt6 answer, I have tried the following code: from PyQt6 import QtCore, QtWidgets, QtDBus import sys class MainWindow(QtWidgets.QMainWindow): def __init__ (self): super().__init__() service = 'org.mpris.M...
There are two problems: You won't find MPRIS players (nor any other desktop apps) on the system bus¹. They are all connected to the user's individual session bus i.e. .sessionBus(). Generally the only services you will find on the system bus are those which are global to the system (e.g. NetworkManager), whereas multi...
1
3
79,168,650
2024-11-08
https://stackoverflow.com/questions/79168650/why-doesnt-python-put-the-iterator-class-in-mro-when-using-map-mro-bu
I don't understand why in Python when I write: from collections.abc import * print(map.__mro__) print(issubclass(map,Iterator)) The output I get is: (<class 'map'>, <class 'object'>) True but the Iterator class is not displayed in the return of map.__mro__. Why does that happen? Is it related to how MRO works? I expe...
This is because collections.abc.Iterator implements a __subclasshook__ method that considers a given class to be its subclass as long as it has the __iter__ and __next__ methods defined: class Iterator(Iterable): __slots__ = () @abstractmethod def __next__(self): 'Return the next item from the iterator. When exhausted,...
3
4
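The `__subclasshook__` mechanism can be demonstrated with any class that merely defines the two required methods; it is recognised as an Iterator "subclass" without ever inheriting from it:

```python
from collections.abc import Iterator

class MyIter:
    def __iter__(self):
        return self
    def __next__(self):
        raise StopIteration

# recognised structurally via __subclasshook__ ...
assert issubclass(MyIter, Iterator)
# ... but not via real inheritance, so it is absent from the MRO
assert Iterator not in MyIter.__mro__
```

The same applies to map: `issubclass(map, Iterator)` is True even though Iterator never appears in `map.__mro__`.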
79,168,379
2024-11-07
https://stackoverflow.com/questions/79168379/pandas-slowing-way-down-after-processing-10-000-rows
I am working on a small function to do a simple cleanup of a csv using pandas. Here is the code: def clean_charges(conn, cur): charges = pd.read_csv('csv/all_charges.csv', parse_dates=['CreatedDate', 'PostingDate', 'PrimaryInsurancePaymentPostingDate', 'SecondaryInsurancePaymentPostingDate', 'TertiaryInsurancePaymentPo...
Dropping records from a dataframe is reported to be slow so it would be better to use pandas filtering features. Generating a 70000 records csv and processing only first 10000 def clean_charges(charges): flt_date = datetime(2024, 9, 1) count = 0 total = 0 # for cur_charge in charges_split: for index, charge in charges....
1
2
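The filtering idea can be sketched with a hypothetical mini-version of the charges frame (only two of the question's columns, invented values); one vectorised boolean filter replaces the per-row drop loop:

```python
import pandas as pd

charges = pd.DataFrame({
    "PostingDate": pd.to_datetime(["2024-08-15", "2024-09-10", "2024-08-01"]),
    "TotalBalance": [100.0, 50.0, 0.0],
})

# one vectorised filter instead of dropping rows one at a time
cutoff = pd.Timestamp("2024-09-01")
kept = charges[(charges["PostingDate"] < cutoff)
               & (charges["TotalBalance"] != 0)]
```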
79,167,901
2024-11-07
https://stackoverflow.com/questions/79167901/why-do-i-need-another-pair-of-curly-braces-when-using-a-variable-in-a-format-spe
I'm learning how Python f-strings handle formatting and came across this syntax: a = 5.123 b = 2.456 width = 10 result = f"The result is {(a + b):<{width}.2f}end" print(result) This works as expected, however I don't understand why {width} needs its own curly braces within the format specification. Why can't I just us...
The braces are needed because the part starting with : is the format specification mini-language. That is parsed as its own f-string, where all is literal, except if put in braces. That format specification language would bump into ambiguities if those braces were not required: that language gives meaning to certain le...
14
22
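The substitution order can be seen in a tiny example: the inner braces are resolved first, turning the spec `<{width}.2f` into `<10.2f` before the value is formatted:

```python
width = 10
value = 7.654

# inner {width} is substituted into the format spec first
formatted = f"{value:<{width}.2f}"   # spec becomes "<10.2f"
```

The result is "7.65" left-aligned and padded to 10 characters.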
79,167,346
2024-11-07
https://stackoverflow.com/questions/79167346/strange-rendering-behaviour-with-selection-interval
I'm generating a plot with the following code (in an ipython notebook): import altair as alt import pandas as pd events = pd.DataFrame( [ {"event": "Task A", "equipment": "SK-101", "start": 10.2, "finish": 11.3}, {"event": "Task B", "equipment": "SK-102", "start": 6.5, "finish": 10.2}, {"event": "Task C", "equipment": ...
I'm unsure why that happens, but another approach would be to use the brush as a filter instead of to set the y-domain. As long as you are able to set a fixed x-domain I think this can work well for what you need: detail = ( alt.Chart(events) .mark_bar() .encode( x=alt.X("start:Q", title="time (hr)").scale(domain=(0, 2...
2
2
79,167,429
2024-11-07
https://stackoverflow.com/questions/79167429/encoding-in-utf-16be-and-decoding-in-utf-8-print-the-correct-output-but-cannot-b
If I'm encoding a string using utf-16be and decoding the encoded string using utf-8, I'm not getting any error and the output seems to be correctly getting printed on the screen as well but still I'm not able to convert the decoded string into Python representation using json module. import json str = '{"foo": "bar"}' ...
Why does encoding str with utf-16be and decoding encoded_str with utf-8 not result in an error? Because in this case, the resulting bytes of str.encode("utf-16be") are also valid UTF-8. This is in fact always the case with ASCII characters; you really need to go above U+007F to trigger possible errors here (e.g. use ...
1
2
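The mechanism can be verified directly: UTF-16BE renders each ASCII character as a NUL byte plus the character, and both bytes happen to be valid UTF-8, so the decode "succeeds" silently but leaves embedded NULs that break json parsing:

```python
s = '{"foo": "bar"}'

# each ASCII char becomes b"\x00" + char in UTF-16BE;
# both bytes are valid UTF-8, so decoding raises nothing
mojibake = s.encode("utf-16be").decode("utf-8")
```

Stripping the NULs recovers the original string exactly.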
79,166,985
2024-11-07
https://stackoverflow.com/questions/79166985/increment-value-based-on-condition
I want to increment a column value based on a certain condition within a polars dataframe, while considering how many times that condition was met. Example data. import polars as pl df = pl.DataFrame({ "before": [0, 0, 0, 0, 0, 0, 0, 0, 0], "cdl_type": ["REC", "REC", "GEC", None, None, "GEC", None, "REC", "GEC"], }) C...
Based on the current approach and the expected result, I take that the condition is that cdcl_type equals either "REC" or "GEC". The expected output can the be obtained as follows. For each contiguous block of rows satisfying the condition, we obtain a corresponding id using pl.Expr.rle_id on expression for the condit...
3
2
79,166,664
2024-11-07
https://stackoverflow.com/questions/79166664/2-date-columns-comparison-to-indicate-whether-a-record-occured-after-another
I have a dataframe where I want to return the number (proportion) of patients that have had a subsequent follow up after diagnosis of disease. Original DF (1 Patient example) | patient_id | app_date | diag_date | cancer_yn | |------------|------------|------------|-----------| | 1 | 2024-01-11 | NaT | NaN | | 1 | 2024-...
Assuming you have a typo in your data and that 2022-03-14 is 2024-03-14, you can identify the subsequent appointment with groupby.transform: # ensure datetime df[['app_date', 'diag_date']] = df[['app_date', 'diag_date'] ].apply(pd.to_datetime) df['fup_yn'] = (df.groupby('patient_id')['diag_date'] .transform('first').lt...
2
2
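The transform approach can be sketched on a one-patient frame (dates simplified from the question); `transform('first')` broadcasts the first non-null diagnosis date to every row of the group, which is then compared to each appointment date:

```python
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 1, 1],
    "app_date": pd.to_datetime(["2024-01-11", "2024-02-20", "2024-03-14"]),
    "diag_date": pd.to_datetime([None, "2024-02-20", None]),
})

# broadcast the first diagnosis date per patient, compare to app_date
df["fup_yn"] = (df.groupby("patient_id")["diag_date"]
                  .transform("first")
                  .lt(df["app_date"]))
```

Only the appointment strictly after the diagnosis date is flagged.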
79,164,734
2024-11-07
https://stackoverflow.com/questions/79164734/scraping-data-from-website-with-multiple-steps-to-get-to-the-data-and-that-preve
I'm trying to scrape data from the following website: https://nfeweb.sefaz.go.gov.br/nfeweb/sites/nfe/consulta-completa STEP 1 STEP 2 When the Access Key is inserted I need to press the "Pesquisar" button: In this case I've used the following access key: 52241061585865236600650040001896941930530252 and it returns the...
The website may be blocking you because it's trying to stop scraping. However, the following code works for me even after multiple attempts: from selenium import webdriver from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.com...
1
2
79,162,280
2024-11-06
https://stackoverflow.com/questions/79162280/python-3-13-generic-classes-with-type-parameters-and-inheritance
I'm exploring types in Python 3.13 and can't get the generic typing hints as strict as I would like. The code below defines a generic Predicate class, two concrete subclasses, and a generic negation predicate. class Predicate[T](Callable[[T], bool]): """ Base class for predicates: a function that takes a 'T' and evalua...
If you change your Predicate definition to: class Predicate[T]: it works. I think this is because the "new style" Generics are inheriting from the old Generic class, for backward compatibility, this means that internally your class looks something like this: class Predicate(Callable[[T], bool], Generic[T]): And thank...
2
2
79,164,771
2024-11-07
https://stackoverflow.com/questions/79164771/extract-multiple-sparsely-packed-responses-to-yes-no-identifiers-while-preservin
I have some data from Google Sheets that has a multi-response question, like so: Q1 Q2 ... Multi-Response 0 ... ... ... "A; B" 1 ... ... ... "B; C" 2 ... ... ... "D; F" 3 ... ... ... "A; B; F" (Note the whitespace, the separator is '; ' for weird workaround reasons with the way the survey writer wrote the questions a...
Use str.get_dummies with sep='; ': out = (df.drop(columns='Multi-Response') .join(df['Multi-Response'].str.get_dummies(sep='; ')) ) Since the separator must be a fixed string, if you have a variable number of spaces in the input, you should pre-process with str.replace: out = (df.drop(columns='Multi-Response') .join(d...
2
2
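The get_dummies approach can be sketched end-to-end on the question's sample responses; splitting on the literal '; ' yields one indicator column per response:

```python
import pandas as pd

df = pd.DataFrame({"Multi-Response": ["A; B", "B; C", "D; F", "A; B; F"]})

# one 0/1 column per distinct response, split on '; '
out = (df.drop(columns="Multi-Response")
         .join(df["Multi-Response"].str.get_dummies(sep="; ")))
```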
79,165,399
2024-11-07
https://stackoverflow.com/questions/79165399/reading-c-struct-dumped-into-a-file-into-python-dataclass
Is there a consistent way of reading a c struct from a file back into a Python Dataclass? E.g. I have this c struct struct boh { uint32_t a; int8_t b; boolean c; }; And I want to read it's data into its own python Dataclass @dataclass class Boh: a: int b: int c: bool Is there a way to decorate this class to make this...
Since you already have the data read as bytes, you can use struct.unpack to unpack the bytes into a tuple, which can then be unpacked as arguments to your data class constructor. The formatting characters for struct.unpack can be found here, where L denotes an unsigned long, b denotes a signed char, and ? denotes a bool...
2
1
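A round-trip sketch of the idea, packing sample values as a stand-in for bytes read from the file; note the '<' prefix disables padding, so the actual on-disk layout depends on how the C compiler padded the struct:

```python
import struct
from dataclasses import dataclass

@dataclass
class Boh:
    a: int
    b: int
    c: bool

# '<' = little-endian, no padding; L/b/? match uint32_t/int8_t/boolean
FMT = "<Lb?"
raw = struct.pack(FMT, 7, -3, True)   # stand-in for file bytes
boh = Boh(*struct.unpack(FMT, raw))
```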
79,165,506
2024-11-07
https://stackoverflow.com/questions/79165506/expand-list-of-struct-column-in-polars
I have a pl.DataFrame with a column that is a list of struct entries. The lengths of the lists might differ: pl.DataFrame( { "id": [1, 2, 3], "s": [ [ {"a": 1, "b": 1}, {"a": 2, "b": 2}, {"a": 3, "b": 3}, ], [ {"a": 10, "b": 10}, {"a": 20, "b": 20}, {"a": 30, "b": 30}, {"a": 40, "b": 40}, ], [ {"a": 100, "b": 100}, {"a...
First explode, then unnest: df.explode('s').unnest('s') Output: ┌─────┬─────┬─────┐ │ id ┆ a ┆ b │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞═════╪═════╪═════╡ │ 1 ┆ 1 ┆ 1 │ │ 1 ┆ 2 ┆ 2 │ │ 1 ┆ 3 ┆ 3 │ │ 2 ┆ 10 ┆ 10 │ │ 2 ┆ 20 ┆ 20 │ │ … ┆ … ┆ … │ │ 3 ┆ 100 ┆ 100 │ │ 3 ┆ 200 ┆ 200 │ │ 3 ┆ 300 ┆ 300 │ │ 3 ┆ 400 ┆ 400 │ ...
4
2
79,163,896
2024-11-06
https://stackoverflow.com/questions/79163896/how-to-detect-task-cancellation-by-task-group
Given a taskgroup and number of running tasks, per taskgroup docs if any of the tasks raises an error, rest of the tasks in group will be cancelled. If some of these tasks need to perform cleanup upon cancellation, then how would one go about detecting within the task it's being cancelled? Was hoping some exception is ...
Casually, I might avoid a design where a task group is intentionally cancelled in this way: the order and timing of exceptions may be difficult to predict; cleanup can be potentially awkward; new work may be created during cancellation. However, you can except asyncio.CancelledError or use a finally block async def task_that_needs_to_clean...
3
1
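The cleanup pattern can be sketched without a TaskGroup (plain task cancellation triggers the same CancelledError inside the task); both the except block and the finally block run before the cancellation propagates:

```python
import asyncio

async def worker(log):
    try:
        await asyncio.sleep(10)
    except asyncio.CancelledError:
        log.append("cancelled")   # task-specific cleanup on cancellation
        raise                     # re-raise so cancellation completes
    finally:
        log.append("finally")     # runs on any exit path

async def main():
    log = []
    task = asyncio.create_task(worker(log))
    await asyncio.sleep(0)        # let the worker start and block
    task.cancel()
    try:
        await task
    except asyncio.CancelledError:
        pass
    return log

log = asyncio.run(main())
```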
79,164,737
2024-11-07
https://stackoverflow.com/questions/79164737/how-to-make-a-django-url-case-insensitive
For example, if I visit http://localhost:8000/detail/PayPal I get a Page not found error 404 with the following message: Using the URLconf ... Django tried these URL patterns, in this order: ... detail/<slug:slug> [name='processor_detail'] The current path, detail/PayPal, matched the last one. Here is my code: views.p...
You're getting 404, not because urls.py couldn't find a match, but because ProcessorDetailView couldn't find a slug named "PayPal", even if "paypal" was in the database. So the problem isn't with urls.py, it's with the view trying to look for the slug you've specified. After some research, it turned out that you could ...
2
2
79,164,756
2024-11-07
https://stackoverflow.com/questions/79164756/remove-specific-indices-in-each-row-of-a-numpy-ndarray
I have integer arrays of the type: import numpy as np seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]], dtype=np.int_) target_idx = np.asarray([[2,9,4,1,8], [9,7,6,2,4], [1,0,0,4,9], [7,1,2,3,8]], dtype=np.int_) For each row of target_idx, I want to select the elements whose indices are not the ones in seed_idx....
You can mask out the values you don't want with np.put_along_axis and then index the others: >>> np.put_along_axis(target_idx, seed_idx, -1, axis=1) >>> target_idx[np.where(target_idx != -1)].reshape(len(target_idx), -1) array([[4, 1, 8], [9, 2, 4], [1, 0, 9], [7, 1, 2]]) If -1 is a valid value, use target_idx.min() -...
1
1
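The masking approach can be run end-to-end on the question's arrays; the seed positions are overwritten with a sentinel, then everything else is kept:

```python
import numpy as np

seed_idx = np.asarray([[0, 1], [1, 2], [2, 3], [3, 4]])
target_idx = np.asarray([[2, 9, 4, 1, 8],
                         [9, 7, 6, 2, 4],
                         [1, 0, 0, 4, 9],
                         [7, 1, 2, 3, 8]])

# overwrite the seed positions with -1, then keep the rest row-wise
t = target_idx.copy()
np.put_along_axis(t, seed_idx, -1, axis=1)
out = t[t != -1].reshape(len(t), -1)
```

As the answer notes, if -1 could be a legitimate value, use something like `target_idx.min() - 1` as the sentinel instead.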
79,164,352
2024-11-06
https://stackoverflow.com/questions/79164352/polars-arg-unique-for-list-column
How can I obtain the (first occurence) indices of unique elements for a column of type list in polars dataframe? I am looking for something similar to arg_unique, but that only exists for pl.Series, such as to be performed over a whole column. I need this to work one level below that, so on every list that is inside th...
.list.eval() can be used as a fallback when there is no specific .list.* method currently implemented. df.with_columns( pl.col("fruits").list.eval(pl.element().arg_unique()).alias("idxs") ) shape: (3, 2) ┌────────────────────────────────────────┬───────────┐ │ fruits ┆ idxs │ │ --- ┆ --- │ │ list[str] ┆ list[u32] │ ╞═...
2
2
79,163,783
2024-11-06
https://stackoverflow.com/questions/79163783/failed-to-find-out-the-source-of-a-certain-portion-of-a-link
I've created a script in python to scrape certain fields from a webpage. When I use this link in the script, it produces all the data in json format and I can parse it accordingly. import requests link = 'https://api-emis.kemenag.go.id/v1/institutions/pontren/public/identity/K1hOenRreVRmaWYwSGVzcERWVFpjZz09' headers = ...
So, developer has used aes encryption along with base64 encoding. import base64 from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes from cryptography.hazmat.backends import default_backend from cryptography.hazmat.primitives import padding def aes_encrypt_cbc(plaintext: str) -> str: secret_key ...
2
4
79,163,041
2024-11-06
https://stackoverflow.com/questions/79163041/text-inside-a-pygame-gui-textbox-disappears-after-changing-it-pygame-gui-libra
I am using the pygame_gui library. I am trying to make a textbox that, when Enter is pressed, will print the text inside the box to the console and reset it (so the textbox will be empty). It does indeed reset the textbox, but for some reason the textbox doesn't get any new input until I click somewhere in the background ...
I have gone through the source and I found out that if we add the line text_box.redraw() right after the set_text it fixes the problem, so the updated code is: import pygame_gui, pygame pygame.init() screen = pygame.display.set_mode((500,500)) clock = pygame.time.Clock() manager = pygame_gui.UIManager((500,500)) manage...
3
2
79,163,372
2024-11-06
https://stackoverflow.com/questions/79163372/python-equivalent-of-the-perl-flip-flop-operator
What is the Python equivalent of the Perl ".." (range, or flip-flop) operator? for ( qw( foo bar barbar baz bazbaz bletch ) ) { print "$_\n" if /ar.a/ .. /az\w/; } Output: barbar baz bazbaz The Python workaround that I am aware of includes generator expression and indexing with the help of enumerate, but this seems c...
When the operands aren't simple numbers, EXPR1 .. EXPR2 in scalar context is equivalent to following (except for the scope created by do { }): do { state $hidden_state = 0; if ( $hidden_state ) { ++$hidden_state; } else { $hidden_state = 1 if EXPR1; } my $rv = $hidden_state; # Or `$hidden_state > 1 && EXPR2` for `...`....
2
2
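That hidden-state description translates fairly directly into a Python generator; `flip_flop` is a hypothetical helper mimicking Perl's scalar `..` (which, unlike `...`, also tests the right operand on the very item that switched the state on):

```python
import re

def flip_flop(start, stop, iterable):
    # stateful generator: once `start` matches, items are emitted
    # until `stop` matches (inclusive), then the state resets
    state = False
    for item in iterable:
        if not state and start(item):
            state = True
        if state:
            yield item
            if stop(item):
                state = False

words = ["foo", "bar", "barbar", "baz", "bazbaz", "bletch"]
out = list(flip_flop(lambda s: re.search(r"ar.a", s),
                     lambda s: re.search(r"az\w", s),
                     words))
```

This reproduces the Perl output from the question: barbar, baz, bazbaz.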
79,162,500
2024-11-06
https://stackoverflow.com/questions/79162500/gaps-in-a-matplotlib-plot-of-categorical-data
When I have numerical data, say index by some kind of time, it is straightforward to plot gaps in the data. For instance, if I have values at times 1, 2, 3, 5, 6, 7, I can set an np.nan at time 4 to break up the plot. import numpy as np import matplotlib.pyplot as plt x = [1, 2, 3, 4, 5, 6, 7] y = [10, 20, 30, np.nan, ...
To create a gap in a categorical plot, handle np.nan differently, as matplotlib doesn't natively interpret np.nan in categorical contexts (it gets converted to the string 'nan'). Solution: split the data and then plot each segment separately. x = [1, 2, 3, 5, 6, 7] y = ["cat", "cat", "dog", "dog", "cat", "cat"] # Plot each ...
1
3
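The split-and-plot-separately idea can be sketched headlessly; each contiguous run becomes its own line, leaving a visual gap between x=3 and x=5:

```python
import matplotlib
matplotlib.use("Agg")           # headless backend for the sketch
import matplotlib.pyplot as plt

x = [1, 2, 3, 5, 6, 7]
y = ["cat", "cat", "dog", "dog", "cat", "cat"]

fig, ax = plt.subplots()
# split at the gap, plot each run as a separate line in the same color
for xs, ys in [(x[:3], y[:3]), (x[3:], y[3:])]:
    ax.plot(xs, ys, color="C0")
```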
79,162,993
2024-11-06
https://stackoverflow.com/questions/79162993/how-to-select-column-range-based-on-partial-column-names-in-pandas
I have pandas dataframe and I am trying to select multiple columns (column range starting from Test to Bio Ref). Selection has to start from column Test to any column whose name starts with Bio. Below is the sample dataframe. In reality it can contain: any number of columns before Test column, any number of columns be...
You can create masks for boolean indexing: m1 = np.maximum.accumulate(df_chunk.columns=='Test') # array([False, True, True, True, True, True]) m2 = np.maximum.accumulate(df_chunk.columns.str.startswith('Bio')[::-1])[::-1] # array([ True, True, True, True, True, False]) # m1 & m2 # array([False, True, True, True, True,...
2
2
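The two-mask trick can be sketched with a hypothetical column layout (names invented around the Test .. Bio* range); m1 turns on at 'Test' and stays on, m2 does the same backwards from the last 'Bio*' column:

```python
import numpy as np
import pandas as pd

df_chunk = pd.DataFrame(columns=["Id", "Test", "Value", "Unit",
                                 "Bio Ref", "Note"])

# True from 'Test' onwards
m1 = np.maximum.accumulate(df_chunk.columns == "Test")
# True up to (and including) the last column starting with 'Bio'
m2 = np.maximum.accumulate(
    df_chunk.columns.str.startswith("Bio")[::-1])[::-1]

out = df_chunk.loc[:, m1 & m2]
```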
79,162,666
2024-11-06
https://stackoverflow.com/questions/79162666/how-to-type-polars-altair-plots-in-python
I use polars dataframes (new to this module) and I'm using some static typing, to keep my code tidy and clean for debugging purposes, and to allow auto-completion of methods and attributes on my editor. Everything goes well. However, when plotting things from dataframes with the altair API, as shown in the doc, I am un...
You can do from __future__ import annotations from typing import TYPE_CHECKING if TYPE_CHECKING: import altair as alt def my_plot(df: pl.DataFrame, col: str) -> alt.Chart:
4
4
79,162,352
2024-11-06
https://stackoverflow.com/questions/79162352/how-to-filter-a-dataframe-with-different-conditions-for-each-column
I have a DataFrame where for each column I only want to show specific values based on the index, but these conditions are different for each column. This is what it looks like: data = {'a': [1,2,3,4,5], 'b': [10,20,30,40,50], 'c': [1,1,1,1,1]} df = pd.DataFrame(data) df: a b c 0 1 10 1 1 2 20 1 2 3 30 1 3 4 40 1 4 5 50...
Another possible solution, whose steps are: The np.stack function stacks these boolean conds, resulting in a 2D array, which is then transposed to align the mask correctly with the original df. The np.where function then evaluates these conds, returning the element of df if True, and NaN otherwise. conds = [df.in...
3
3
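A runnable sketch of the stack-and-where approach, using the question's frame with hypothetical per-column index conditions (one boolean mask per column, invented for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3, 4, 5],
                   "b": [10, 20, 30, 40, 50],
                   "c": [1, 1, 1, 1, 1]})

# hypothetical per-column conditions on the index
conds = [df.index < 3, df.index % 2 == 0, df.index > 2]

# stack -> shape (3, 5); transpose -> (5, 3) to match df
out = pd.DataFrame(np.where(np.stack(conds).T, df, np.nan),
                   index=df.index, columns=df.columns)
```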
79,161,804
2024-11-06
https://stackoverflow.com/questions/79161804/what-is-the-best-way-to-filter-the-groups-that-have-at-least-n-rows-that-meets-t
This is my DataFrame: import pandas as pd df = pd.DataFrame({ 'a': [10, 20, 30, 50, 50, 50, 4, 100], 'b': [30, 3, 200, 25, 24, 31, 29, 2], 'd': list('aaabbbcc') }) Expected output: a b d 0 10 30 a 1 20 3 a 2 30 200 a The grouping is by column d. I want to return the groups that have at least two instances of this ma...
With pandas You could use a groupby.transform on the mask with sum to produce a boolean Series: m = df['b'].gt(df['a']) out = df[m.groupby(df['d']).transform('sum').ge(2)] Output: a b d 0 10 30 a 1 20 3 a 2 30 200 a Intermediates: a b d m transform('sum') ge(2) 0 10 30 a True 2 True 1 20 3 a False 2 True 2 30 200 a...
2
3
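The transform approach made runnable on the question's data; the per-group count of rows where b > a is broadcast back to every row, then compared to 2:

```python
import pandas as pd

df = pd.DataFrame({
    "a": [10, 20, 30, 50, 50, 50, 4, 100],
    "b": [30, 3, 200, 25, 24, 31, 29, 2],
    "d": list("aaabbbcc"),
})

# count matches per group of 'd', broadcast to each row, keep >= 2
m = df["b"].gt(df["a"])
out = df[m.groupby(df["d"]).transform("sum").ge(2)]
```

Only group 'a' has two matching rows, so all of its rows survive.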
79,161,450
2024-11-06
https://stackoverflow.com/questions/79161450/join-where-with-starts-with-in-polars
I have two data frames, df = pl.DataFrame({'url': ['https//abc.com', 'https//abcd.com', 'https//abcd.com/aaa', 'https//abc.com/abcd']}) conditions_df = pl.DataFrame({'url': ['https//abc.com', 'https//abcd.com', 'https//abcd.com/aaa', 'https//abc.com/aaa'], 'category': [['a'], ['b'], ['c'], ['d']]}) now I want a df, ...
I would expect pl.DataFrame.join_where() to work, but apparently it doesn't allow pl.Expr.str.starts_with() condition yet - I get only 1 binary comparison allowed as join condition error. So you can use pl.DataFrame.join() and pl.DataFrame.filter() instead: ( df .join(conditions_df, how="cross") .filter(pl.col("url").s...
2
1
79,160,696
2024-11-05
https://stackoverflow.com/questions/79160696/converting-python-logic-to-sql-query-pairing-two-status-from-one-column
I need help with converting my python code to SQL: req_id_mem = "" req_workflow_mem = "" collect_state_main = [] collect_state_temp = [] for req_id, req_datetime, req_workflow in zip(df["TICKET_ID"], df["DATETIMESTANDARD"], df["STATUS"]): if req_id_mem == "" or req_id_mem != req_id: req_id_mem = req_id req_workflow_mem...
One way of doing it: SELECT FOD.TICKET_ID , FOD.FIRSTOPENDATETIME , (SELECT NC.DATETIMESTANDARD from MyTbl NC -- Nearest future close date where NC.TICKET_ID=FOD.TICKET_ID and NC.STUS='Closed' and NC.DATETIMESTANDARD>FOD.FIRSTOPENDATETIME and DATETIMESTANDARD=(select min(DATETIMESTANDARD) from MyTbl NCb where NCb.TICKE...
2
2
79,152,912
2024-11-03
https://stackoverflow.com/questions/79152912/how-to-select-a-particular-div-or-pragraph-tag-from-html-content-using-beautiful
I'm using Beautiful Soup to extract some text content from HTML data. I have a div and several paragraph tags, and the last paragraph is the copyright information with a copyright logo, the year and some more info. The year differs based on what year the content was from, so I can't look for exact text, but the rest is ...
Just store elements in a list and then pop the last item. soup = BeautifulSoup(text_content, "html.parser") # Find all the paragraph tags paragraphs = soup.find_all("p") # Check if the last paragraph contains a match if "copyright" in paragraphs[-1].text.lower(): # Remove the last paragraph paragraphs.pop() # Join the ...
1
1
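The pop-the-last-paragraph approach can be sketched with a small invented HTML snippet; the year-insensitive check just looks for the word "copyright":

```python
from bs4 import BeautifulSoup

html = ("<div><p>First paragraph.</p><p>Second paragraph.</p>"
        "<p>Copyright \u00a9 2024 Example Corp.</p></div>")

soup = BeautifulSoup(html, "html.parser")
paragraphs = soup.find_all("p")

# drop the trailing copyright line, whatever year it carries
if "copyright" in paragraphs[-1].text.lower():
    paragraphs.pop()

text = " ".join(p.text for p in paragraphs)
```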
79,157,298
2024-11-04
https://stackoverflow.com/questions/79157298/python-gekko-step-wise-variable-in-a-time-domain-problem-with-differential-equat
I am coding an MPC problem in Python Gekko to heat a building on a winter day (for now I am working with a simple building with 2 zones only). The building is expressed with a Resistor-Capacitor (RC) model and the objective is to minimize a combination of maximum power and the total energy used. The RC model is express...
Try using an option for the MV type that specifies the number of time steps between each allowable movement of the Manipulated Variable. SP_1.MV_STEP_HOR=8 SP_2.MV_STEP_HOR=8 There is more information on MPC tuning options in the Dynamic Optimization Course and in the documentation. A global option can be set for all...
2
0
79,155,108
2024-11-04
https://stackoverflow.com/questions/79155108/one-progress-bar-for-a-parallel-job-python
The loop runs over some number of models (n_mod) and is distributed among the n_cpus. As you will note, running this code as mpirun -np 4 python test_mpi.py produces 4 progress bars. This is understandable. But is there a way to use tdqm to get one progress bar which tells me how many models have been completed? from t...
This can be achieved with a main node and worker node(s) setup. Essentially only the rank == 0 node will be updating a progress bar whilst the worker nodes will simply be informing the main node that they have completed the task. Worker: (Defined similarly to your above code) def worker(n_mod, size, rank): comm = MPI.C...
1
2
79,154,980
2024-11-04
https://stackoverflow.com/questions/79154980/any-way-to-check-if-a-number-is-already-within-a-3x3-grid-in-a-9x9-grid-of-neste
Programming a sudoku game and I've got a nested list acting as my board and the logic written out however I am stumped on how to check if a number is specifically within a 3x3 grid within the 9x9 nested list board I've got #board initilization board = [ [0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 2, 0, 0, 0, 0, 0, 0, 2], [0, 0, ...
First you just need a given cell (row,col) which is the top left corner (the start) of the 3x3 subgrid you are trying to check: start_row = (row // 3) * 3 start_col = (col // 3) * 3 Then simply iterate through the 3x3 subgrid. Function implementation: def is_valid_move(board, row, col, num): # Check row and column if ...
1
3
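The subgrid arithmetic can be wrapped in a small checker (`in_subgrid` is a hypothetical helper name); integer division by 3 finds the top-left corner of the box containing (row, col):

```python
def in_subgrid(board, row, col, num):
    # top-left corner of the 3x3 box containing (row, col)
    start_row, start_col = (row // 3) * 3, (col // 3) * 3
    return any(board[r][c] == num
               for r in range(start_row, start_row + 3)
               for c in range(start_col, start_col + 3))

board = [[0] * 9 for _ in range(9)]
board[4][4] = 7   # place a 7 in the centre box
```

Any cell in rows 3-5 / columns 3-5 now "sees" that 7; cells in other boxes do not.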
79,157,061
2024-11-04
https://stackoverflow.com/questions/79157061/python3-venv-how-to-sync-ansible-python-interpreter-for-playbooks-that-mix-con
I'm running ansible playbooks in python venv My playbooks often involve a mix of cloud infrastructure (AWS) and system engineering. I have configured them to run cloud infrastructure tasks with connection: local - this is to minimize access rights required on the target system. However since using venv I have a conflic...
I had a similar problem, and I resolved it with interpreter fallback which can use a list of locations (attempted in order) unlike the interpreter config which is just a single path. In your inventory set this variable: ... ... ansible_python_interpreter_fallback: - /Users/jd/projects/mgr2/ansible/bin/python3 - /usr/bi...
1
2
79,159,747
2024-11-05
https://stackoverflow.com/questions/79159747/is-there-a-way-to-render-gt-tables-as-pngs-with-a-no-browser-and-b-without-w
I have a really pretty gt table that we'd like to automate production of. Running this on our remote server has some limitations: enterprise policy is that no browsers, headless or otherwise, may be installed on the server; the admin has been unwilling to install wkhtmltopdf. So I can either run this locally, which I'd...
Figured it out! foo <- <a gt_tbl object> library(ggplot2) library(ggplotify) library(rsvg) #required to render SVG cells foo_1 <- as_gtable(foo) foo_2 <- as.ggplot(foo_1) ggsave('path/to/file.png', foo_2)
2
1
79,157,011
2024-11-04
https://stackoverflow.com/questions/79157011/python-3-0-apply-conditional-formatting-to-a-cell-based-off-the-calculated-valu
I am trying to automate processing of a form with; (1) two calculated fields in columns F and G, and (2) one manually-entered field in column H. If both these values in a row are calculated to be >=30, I would like to highlight the corresponding cell in column A. Alternatively, if the value in column H is "Warranty P...
PART 1 To fix your first issue; If both these values in a row are calculated to be >=30, I would like to highlight the corresponding cell in column A You just need to add another CF rule for column A using 'FormulaRule' Using a Sheet that contains your example data which I'll just use to fill the first 3 rows so the d...
2
3
79,158,826
2024-11-05
https://stackoverflow.com/questions/79158826/printing-nested-html-tables-in-pyqt6
I have an issue when trying to print the contents of a QTableWidget in a PyQt6 application. It actually works, but there is a small problem: I have tables embedded in the main table and I'd like those tables to completely fill the parent cells (100% of their widths), but the child tables don't expand as expected. This ...
The rich text in Qt only provides a limited subset of HTML and CSS. More specifically, it only provides what QTextDocument allows, and the HTML parsing is therefore completely based on the QTextDocument capabilities. Complex layouts that may be easier to achieve with standard HTML and CSS in a common web browser, may b...
1
1
79,160,774
2024-11-05
https://stackoverflow.com/questions/79160774/how-to-disable-the-caret-characters-in-the-python-stacktrace
I have a script for doing test readouts that worked fine with Python 3.9, but now that we have upgraded to Python 3.12, those carets break the script. So the easiest fix would be to disable them. Is there a way to disable the carets (^^^^^^^^^^^^^^^^^^^^) in the Python stacktrace? ERROR: test_email (tests.test_emails.Ema...
You can run Python with the -X no_debug_ranges command-line option, or set the PYTHONNODEBUGRANGES environment variable (to any nonempty string) before running your program, to disable these indicators.
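The effect can be verified from a script by running a child interpreter with and without the option (a small sketch; the fine-grained caret markers only exist on Python 3.11+):

```python
import os
import subprocess
import sys
import tempfile

# A script whose traceback normally includes ^/~ location markers
src = "def f(x):\n    return 1 / x\n\nf(0)\n"
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as fh:
    fh.write(src)
    path = fh.name
try:
    default = subprocess.run([sys.executable, path],
                             capture_output=True, text=True).stderr
    plain = subprocess.run([sys.executable, "-X", "no_debug_ranges", path],
                           capture_output=True, text=True).stderr
finally:
    os.unlink(path)

print("^" in default)  # True on 3.11+: indicators under "1 / x"
print("^" in plain)    # False: -X no_debug_ranges suppresses them
```

Setting `PYTHONNODEBUGRANGES=1` in the child's environment instead of passing `-X no_debug_ranges` gives the same result.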
2
5
79,154,880
2024-11-4
https://stackoverflow.com/questions/79154880/why-do-i-lose-doc-on-a-parameterized-generic
Issue The docstring is lost when setting type parameters on a generic. Minimal example: from typing import TypeVar, Generic T = TypeVar("T") class Test(Generic[T]): """My docstring""" assert Test.__doc__ == "My docstring" This works fine as expected. However, this fails as __doc__ is now None: assert Test[int].__doc__...
Parameterising user-defined generic types with type arguments results in instances of typing._GenericAlias, which inherits from typing._BaseGenericAlias. >>> type(Test[int]) <class 'typing._GenericAlias'> >>> type(Test[int]).mro() [<class 'typing._GenericAlias'>, <class 'typing._BaseGenericAlias'>, <class 'typing._Fina...
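A short illustration of the behaviour: the subscripted form is a `typing._GenericAlias`, but the class docstring is still reachable through the alias's `__origin__` attribute:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Test(Generic[T]):
    """My docstring"""

alias = Test[int]
# The subscripted form is an object from the typing module, not the class
print(type(alias).__module__)      # typing
# __origin__ points back at the original class, docstring intact
print(alias.__origin__ is Test)    # True
print(alias.__origin__.__doc__)    # My docstring
```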
3
2
79,159,200
2024-11-5
https://stackoverflow.com/questions/79159200/how-to-fill-spaces-between-subplots-with-a-color-in-matplotlib
With the following code : nb_vars=4 fig, axs = plt.subplots(4,4,figsize=(8,8), gridspec_kw = {'wspace':0.20, 'hspace':0.20}, dpi= 100) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_yticklabels([]) for i_ax in axs: for ii_ax in i_ax: ii_ax.set_xticklabels([]) The space between the subplots is white. How is it possible ...
You could add patches in between the axes: from matplotlib import patches nb_vars=4 # colors between two axes in a row r_colors = [['#CC0000', '#CC0000', '#CC0000'], ['#0293D8', '#0293D8', '#0293D8'], ['#FF8E00', '#FF8E00', '#FF8E00'], ['#ABB402', '#ABB402', '#ABB402'], ] # colors between two axes in a column c_colors ...
3
3
79,158,742
2024-11-5
https://stackoverflow.com/questions/79158742/i-want-to-change-values-of-a-string-to-upper-case-if-its-even-what-am-i-doing-w
def func(string): stringlist= string.split() stringlist =[] for pos in string: stringlist.append(pos) for pan in stringlist: if pan %2 ==0: pan.upper() print(stringlist) func("Hello") Tried everything I know. I just get an error about TypeError: not all arguments converted during string formatting
You could make use of enumerate, which returns index and value within the string: def func(string): return ''.join(char.upper() if index % 2 == 0 else char for index, char in enumerate(string)) print(func("Hello")) HeLlO
1
2
79,157,939
2024-11-5
https://stackoverflow.com/questions/79157939/how-can-i-rewrite-the-complex-number-z-5i-into-standard-form-z-coslog5
I would like to write complex numbers z into the standard form z = a + i b with a and b real numbers. For most of my cases, the sympy construct z.expand(complex=True) does what I am expecting but not in all cases. For instance, I fail to rewrite z = 5**sp.I and SymPy just gives back the input: In [1]: import sympy as s...
First convert it to exponential form, then convert to trig. from sympy import I, cos, exp expr = 5 ** I expr = expr.rewrite(exp).rewrite(cos) print( expr ) Output: cos(log(5)) + I*sin(log(5))
1
4
79,158,209
2024-11-5
https://stackoverflow.com/questions/79158209/groupby-a-df-column-based-on-2-other-columns
I have a df which has 3 columns, let's say Region, Country and AREA_CODE. Region Country AREA_CODE =================================== AMER US A1 AMER CANADA A1 AMER US B1 AMER US A1 I want to get the output as a list of AREA_CODE for each country under each Region, with 'ALL' as a list value as well. something like { "AM...
In order to have an extra level of dictionary nesting, you need to perform an additional groupby. This is most easily done in a dictionary comprehension: out = {k: {k2: ['ALL']+sorted(v2.unique().tolist()) for k2, v2 in v.groupby('Country')['AREA_CODE'] } for k, v in df.drop_duplicates().groupby('Region') } Or using a...
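Run against the sample frame from the question, the dictionary comprehension produces the nested structure (a sketch assuming the three columns shown):

```python
import pandas as pd

df = pd.DataFrame({
    "Region": ["AMER", "AMER", "AMER", "AMER"],
    "Country": ["US", "CANADA", "US", "US"],
    "AREA_CODE": ["A1", "A1", "B1", "A1"],
})

# Outer groupby on Region, inner groupby on Country,
# prepending 'ALL' to the sorted unique area codes
out = {
    region: {
        country: ["ALL"] + sorted(codes.unique().tolist())
        for country, codes in grp.groupby("Country")["AREA_CODE"]
    }
    for region, grp in df.drop_duplicates().groupby("Region")
}
print(out)
# {'AMER': {'CANADA': ['ALL', 'A1'], 'US': ['ALL', 'A1', 'B1']}}
```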
2
1
79,157,442
2024-11-5
https://stackoverflow.com/questions/79157442/how-to-create-multi-channel-tif-image-with-python
I have a set of microscopy images; each subset has two channels which I need to merge into one .tif image (each merged image should contain two channels). import cv2 as cv im1 = cv.imread('/img1.tif', cv.IMREAD_UNCHANGED) im2 = cv.imread('/img1.tif', cv.IMREAD_UNCHANGED) im3 = im1 + im2 cv.imwrite('img3.tif', im3) That w...
If you wanted to generate a TIFF file containing a "stack" of images, you don't need tifffile. You can use OpenCV's own function cv.imwritemulti(): import cv2 as cv im1 = cv.imread('img1.tif', cv.IMREAD_UNCHANGED) im2 = cv.imread('img1.tif', cv.IMREAD_UNCHANGED) im3 = [im1, im2] # a stack of two cv.imwritemulti('img3.t...
2
3
79,157,914
2024-11-5
https://stackoverflow.com/questions/79157914/why-io-bytesio-is-not-a-subclass-of-typing-binaryio-and-io-stringio-is-neither
When using the match-case pattern, I found that case typing.BinaryIO(): cannot match an object of type io.BytesIO. So I tried this: import io import typing assert issubclass(list, typing.Sequence) assert issubclass(list, typing.List) assert issubclass(dict, typing.Mapping) assert issubclass(dict, typing.Dict) # assert issubcla...
Classes in the typing module are not meant to be used for instance / subclass checks at runtime. Their purpose is type hinting. If you want to use instance / subclass checks like that, check out collections.abc, especially this overview. For io.StringIO and io.BytesIO, their abstract base classes are io.TextIOBase and ...
1
3
79,157,504
2024-11-5
https://stackoverflow.com/questions/79157504/how-to-append-an-executable-sum-formula-to-google-sheets-using-python
I'm writing a Python script that uploads a Polars DataFrame to Google Sheets and formats it accordingly. One of my goals is to create a summary row at the bottom of the table that sums the numerical values for each column. Currently, I have the following code snippet that successfully constructs a summary row: # Add a ...
The default value_input_option is set to RAW in gspread's append_row method. The input data won't be parsed, if set to RAW according to sheets API. Set it to USER_ENTERED: new_sheet.append_row(total_row, value_input_option='USER_ENTERED') Or new_sheet.append_row(total_row, value_input_option=gspread.utils.ValueInputOp...
1
2
79,157,015
2024-11-4
https://stackoverflow.com/questions/79157015/how-to-place-dataframe-data-in-one-unique-index
I've got the following code: data = [{'TpoMoneda': 'UYU'}, {'MntNetoIvaTasaMin': '3825.44'}, {'IVATasaMin': '10.000'}, {'IVATasaBasica': '22.000'}, {'MntIVATasaMin': '382.54'}, {'MntTotal': '4207.98'}, {'MntTotRetenido': '133.90'}, {'CantLinDet': '2'}, {'RetencPercep': None}, {'RetencPercep': None}, {'MontoNF': '0.12'}, {'M...
You can use map + pd.transpose like below pd.DataFrame( map(lambda x: df[x].dropna().values, df.columns), index=df.columns ).transpose() which gives TpoMoneda MntNetoIvaTasaMin IVATasaMin IVATasaBasica MntIVATasaMin MntTotal \ 0 UYU 3825.44 10.000 22.000 382.54 4207.98 MntTotRetenido CantLinDet RetencPercep MontoNF M...
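A self-contained version of that approach, building the frame from a subset of the single-key dicts first (a sketch; note that duplicate keys such as RetencPercep would leave more than one surviving value in a column):

```python
import pandas as pd

data = [{"TpoMoneda": "UYU"}, {"MntNetoIvaTasaMin": "3825.44"},
        {"IVATasaMin": "10.000"}, {"MntTotal": "4207.98"}]
df = pd.DataFrame(data)  # one sparse row per dict, NaN everywhere else

# Collapse each column to its non-null values, then flip back to one row
out = pd.DataFrame(
    map(lambda c: df[c].dropna().values, df.columns),
    index=df.columns,
).transpose()
print(out)
```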
2
0
79,148,243
2024-11-1
https://stackoverflow.com/questions/79148243/transposing-within-a-polars-df-lead-to-typeerror-not-yet-implemented-nested-o
I have this data: ┌────────────┬─────────────────────────────────────┐ │ col1 ┆ col2 │ │ --- ┆ --- │ │ list[str] ┆ list[list[str]] │ ╞════════════╪═════════════════════════════════════╡ │ ["a"] ┆ [["a"]] │ │ ["b", "c"] ┆ [["b", "c"], ["b", "c"], ["b", "c"] │ │ ["d"] ┆ [["d"]] │ └────────────┴───────────────────────────...
As you've mentioned performance in the comments - this may also be of interest. Building upon @Herick's answer - when you add an index and explode, you are guaranteed to have sorted data. With sorted data, you can add a "row number per group" using rle() - which can be significantly faster. (some discussion here: http...
3
0
79,155,921
2024-11-4
https://stackoverflow.com/questions/79155921/wagtail-cms-programmatically-enabling-user-access-to-manage-pages-within-the-a
Context Wagtail CMS has a permission system that builds on that of Django's. However, customizing it for users that are neither an admin nor using the pre-made groups Moderator or Editor is unclear. Presently, I have: A custom user class, StudentUser Pages arranged in the below hierarchy: Program | Course / | \ Repo...
Page permissions in Wagtail are determined by position within the page hierarchy rather than by page type. So, while it's not possible to directly give students edit permission over the Report page model - if all your Report pages exist under a single ReportIndex (something that can be enforced through the use of paren...
2
2
79,152,821
2024-11-3
https://stackoverflow.com/questions/79152821/python-selenium-script-not-retrieving-product-price-from-a-webpage
I'm trying to scrape product prices from the website Ultra Liquors using Python and Selenium, but I'm unable to retrieve the price despite the HTML containing the expected elements. My goal is to compare prices from several shops to find the best deals or any ongoing specials for our venue. Here's the code I'm using: f...
You should use bs4 because bs4 is faster than selenium (if you don't have to deal with bot protection). I used this endpoint https://george.ultraliquors.co.za/getFilteredProducts, which accepts a POST request with a JSON body. You can get all the product prices under the category SPIRITS --> BRANDY, and your catagoryId ...
2
1
79,155,737
2024-11-4
https://stackoverflow.com/questions/79155737/join-differently-nested-lists-in-polars-columns
As you might have recognized from my other questions I am transitioning from pandas to polars right now. I have a polars df with differently nested lists like this: ┌────────────────────────────────────┬────────────────────────────────────┬─────────────────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- ...
You could use list.eval() and .list.join() df.with_columns( pl.col(nested_list_cols).list.eval(pl.element().list.join("+")).list.join("-"), pl.col(list_cols).list.join("-") ) shape: (3, 4) ┌─────────────┬─────────────┬───────┬──────┐ │ col1 ┆ col2 ┆ col3 ┆ col4 │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ str ┆ str ┆ str │ ╞══...
2
1
79,153,112
2024-11-3
https://stackoverflow.com/questions/79153112/keyerror-default-when-attempting-to-create-a-table-using-magic-line-sql-in-j
I am attempting to create a new database and create a table using magic line %sql in Jupyter Notebook, but I've been getting a KeyError and I'm struggling to work out why; my code is as follows. import sqlite3 as sql import pandas as pd %load_ext sql %sql sqlite:///dataProgramming.db %%sql sqlite:// CREATE TABLE users...
This issue reported here looks to be the same with very related pointers to similar code in the last parts and the same error. Following the advice given there: Try running a cell like %config SqlMagic.style = '_DEPRECATED_DEFAULT' before running the cell magic one in your post.
4
8
79,154,580
2024-11-4
https://stackoverflow.com/questions/79154580/check-if-string-does-not-contain-strings-from-the-list-with-wildcard-when-symb
newlist = ['test', '%ing', 'osh', '16fg'] tartext = 'Singing' I want to check that my tartext value doesn't match any value in newlist. If a newlist string contains the % symbol in its value, then I need to treat it as a wildcard character. I want to achieve the condition below. if (tartext != 'test' and tartext not ...
One approach would be to use regular expressions. In this approach, we can replace % in the newlist by .*. Then, use re.search along with a list comprehension to find any matches. newlist = ['test', '%ing', 'osh', '16fg'] tartext = 'Singing' newlist = [r'^' + x.replace('%', '.*') + r'$' for x in newlist] num_matches = ...
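A complete runnable version of that idea; escaping the literal parts first protects any regex metacharacters before % is turned into .*:

```python
import re

newlist = ["test", "%ing", "osh", "16fg"]
tartext = "Singing"

# Escape literals first, then turn the % wildcard into .*
patterns = [r"^" + re.escape(x).replace("%", ".*") + r"$" for x in newlist]
matched = any(re.search(p, tartext) for p in patterns)
print(matched)        # True: '%ing' becomes '.*ing', which matches 'Singing'
print(not matched)    # the "does not match any value" condition
```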
2
0
79,155,290
2024-11-4
https://stackoverflow.com/questions/79155290/dutch-sentiment-analysis-robbertje-outputs-just-positive-negative-labels-netura
When I run the Dutch sentiment analysis model RobBERTje, it outputs just positive/negative labels; the neutral label is missing in the data. https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment There are obviously neutral sentences/words, e.g. 'Fhdf' (nonsense) and 'Als gisteren inclusief blauw' (neutral), but they both eval...
This model was trained only on negative and positive labels. Therefore, it will try to categorize every input as positive or negative, even if it is nonsensical or neutral. What you can do is: 1. Find other models that were trained to include a neutral label. 2. Fine-tune this model on a dataset that includes neutral l...
2
3