question_id (int64, 59.5M to 79.7M) | creation_date (date, 2020-01-01 to 2025-07-15) | link (string, 60 to 163 chars) | question (string, 53 to 28.9k chars) | accepted_answer (string, 26 to 29.3k chars) | question_vote (int64, 1 to 410) | answer_vote (int64, -9 to 482) |
|---|---|---|---|---|---|---|
78,901,362 | 2024-8-22 | https://stackoverflow.com/questions/78901362/why-are-dict-keys-dict-values-and-dict-items-not-subscriptable | Referring to an item of a dict_keys, dict_values, or dict_items object by index raises a type error. For example: > my_dict = {"foo": 0, "bar": 1, "baz": 2} > my_dict.items()[0] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ----> 1 my_dict.items(... | "Ordered" doesn't mean "efficiently indexable". There is no more efficient way to retrieve the Nth item of a dict view than to iterate N steps. Allowing indexing would give the impression that views can support this operation efficiently, which would in turn encourage extremely inefficient code. Trying to change the im... | 6 | 5 |
78,902,666 | 2024-8-22 | https://stackoverflow.com/questions/78902666/rolling-rank-groupby | I am trying to get rolling rank based on group by. This is my demo data and "Rolling 2Y Rank" is the expected column. The way it works is I intend to group by each "ID" and calculate rank based on its own historical "Value". df = pd.DataFrame({ 'Year': [2000,2000,2000,2001,2001,2001,2002,2002,2002,2003,2003,2003], 'ID'... | You want the reversed (descending) rank, so pass that to the rank function. Then you can do a merge to assign back to the original dataframe: df = df.merge( df.groupby('ID').rolling(2, on='Year')['Value'].rank(ascending=False), on=['ID','Year'], ) Output: Year ID Value_x Value_y 0 2000 A 5 NaN 1 2000 B 1 NaN 2 2000 C... | 2 | 3 |
78,902,630 | 2024-8-22 | https://stackoverflow.com/questions/78902630/is-there-an-efficient-way-to-include-every-remaining-unselected-column-in-a-pyth | I'm trying to reorder the columns in a Polars dataframe and put 5 columns out of 100 first (the document must unfortunately be somewhat readable in excel). I can't seem to find an easy way to do this. Ideally, I'd like something simple like df.select( 'col2', 'col1', r'^.*$', # the rest of the columns, but this throws ... | You can combine pl.exclude with the walrus operator. Suppose you have something like df=pl.DataFrame( [ pl.Series('c', [1, 2, 3], dtype=pl.Int64), pl.Series('b', [2, 3, 4], dtype=pl.Int64), pl.Series('fcvem', [4, 5, 6], dtype=pl.Int64), pl.Series('msoy', [4, 5, 6], dtype=pl.Int64), pl.Series('smrn', [4, 5, 6], dtype=pl... | 5 | 2 |
78,901,940 | 2024-8-22 | https://stackoverflow.com/questions/78901940/can-i-subset-an-iterator-and-keep-it-an-iterator | I have a use case where I need the permutations of booleans. However I do not need them when they are reversed. So I can do something like this: import itertools [ p for p in itertools.product([True,False],repeat=4) if p!=p[::-1] ] The issue is that this is a list, stored in memory, and with increasing repeats I will ... | You use a generator expression, syntactically identical to a listcomp, but using parentheses rather than square brackets as the delimiter: ( p for p in itertools.product([True,False],repeat=4) if p!=p[::-1] ) If the generator expression is the sole argument to a function, you don't even need the extra parentheses (the... | 2 | 4 |
78,897,975 | 2024-8-21 | https://stackoverflow.com/questions/78897975/python-idiom-if-name-main-in-uwsgi | What is the Python idiom in uwsgi for if __name__ == '__main__': main() I found here a long string in uwsgi instead of __main__: What __name__ string does uwsgi use?. But it looks like a workaround. Is there a better way to call a function once, when uwsgi starts my Python script? | It depends what you are trying to guard against. There are three use cases I can think of: Do something only if running a real server, and not unit tests. Do something in a module if it is the main WSGI file. Do something after uwsgi has forked a worker process. Do something only if running a real server, and not uni... | 2 | 2 |
78,900,257 | 2024-8-22 | https://stackoverflow.com/questions/78900257/plotting-differently-sized-subplots-in-pyplot | I want to plot a figure in pyplot that has 3 subplots, arranged vertically. The aspect ratio of the first one is 1:1, while for the other two it is 2:1. And the heights of each of these plots should be the same. This would mean that the left and right boundaries of the 1st plot are quite far away from the boundaries o... | Create 3 vertically arranged subplots; their heights will be equal by default. Then set the aspect ratios as desired. import matplotlib.pyplot as plt fig, axes = plt.subplots(nrows=3) axes[0].set_aspect(1) axes[1].set_aspect(.5) axes[2].set_aspect(.5) | 3 | 2 |
78,898,497 | 2024-8-21 | https://stackoverflow.com/questions/78898497/create-categorical-series-from-physical-values | I want to create a categorical column, where each category has a descriptive name for self-documentation. I have a list of integers equivalent to the physical values in the categorical column, and I want to make the categorical column without creating an intermediate list of strings to pass to pl.Series. import polars ... | You can simply cast to the enum datatype. assert s1.equals(s1.to_physical().cast(dt)) # True | 3 | 2 |
78,898,081 | 2024-8-21 | https://stackoverflow.com/questions/78898081/groupby-index-and-keep-the-max-column-value-given-a-single-column | Scenario: With a dataframe with duplicated indices, I want to groupby while keeping the max value. I found the solution to this in Drop duplicates by index, keeping max for each column across duplicates; however, this gets the max value of each column, which mixes the data of different rows while keeping the max values. Quest... | You don't provide a sample of your data so I'm just going for a general approach. That said, you can sort the dataframe by C, then groupby with head: # this assumes that index has only one level df.sort_values('C', ascending=False).groupby(level=0).head(1) Or: df.sort_values('C').groupby(level=0).tail(1) Also take a ... | 2 | 2 |
78,897,960 | 2024-8-21 | https://stackoverflow.com/questions/78897960/annotate-a-scatterplot-with-text-and-position-taken-from-a-pandas-dataframe-dire | What I want to achieve is a more elegant and direct method of annotating points with x and y position from a pandas dataframe with a corresponding label from the same row. This working example results in what I want, but I feel there must be a more elegant solution out there without having to store individual... | You probably can't avoid the iteration, but you can remove the need to create lists by using df.iterrows(). This has the added benefit that you are not decoupling any data from your DataFrame. import pandas as pd import matplotlib.pyplot as plt d = {'Date': ['15-08-24', '16-08-24', '17-08-24'], 'Temperature': [24, 26, ... | 2 | 1 |
78,894,891 | 2024-8-21 | https://stackoverflow.com/questions/78894891/polars-combining-sales-and-purchases-fifo-method | I have two dataframes: One with buys df_buy = pl.DataFrame( { "BuyId": [1, 2], "Item": ["A", "A"], "BuyDate": [date.fromisoformat("2023-01-01"), date.fromisoformat("2024-03-07")], "Quantity": [40, 50], } ) BuyId Item BuyDate Quantity 1 A 2023-01-01 40 2 A 2024-03-07 50 And another with sells: df_sell = pl.Da... | Calculate total quantity bought so far and total quantity sold so far. cum_sum() to calculate running total. df_buy_total = ( df_buy .with_columns(QuantityTotal = pl.col.Quantity.cum_sum().over("Item")) ) df_sell_total = ( df_sell .with_columns(QuantityTotal = pl.col.Quantity.cum_sum().over("Item")) ) ┌───────┬────... | 3 | 1 |
78,897,279 | 2024-8-21 | https://stackoverflow.com/questions/78897279/confirming-what-happens-when-importing-a-package | I was surprised to find out that both of the two calls in main.py work: import package_top package_top.module.hello() # I thought this won't work ... package_top.hello() # I thought this is the only way package_top package_top/ ├── __init__.py └── module.py __init__.py from .module import hello module.py def hello(): p... | The Python import machinery modifies the package_top module object and adds the module attribute as part of importing the child module. See the Python import docs on submodules: When a submodule is loaded using any mechanism (e.g. importlib APIs, the import or import-from statements, or built-in __import__()) a bindin... | 5 | 4 |
78,896,943 | 2024-8-21 | https://stackoverflow.com/questions/78896943/accessing-the-values-used-to-impute-and-normalize-new-data-based-upon-scikit-lea | Using scikit-learn I'm building machine learning models on a training set, and then evaluating them on a test set. On the train set I perform data imputation and scaling with the ColumnTransformer, then build a logistic regression model using Kfold CV, and the final model is used to predict the values on the test set. ... | Pipelines, ColumnTransformers, GridSearch, and others all have attributes (and sometimes a custom __getitem__ to access these like dictionaries) exposing their component parts, and similarly each of the transformers has fitted statistics as attributes, so it's just a matter of chaining these all together, e.g.: ( train... | 2 | 1 |
78,896,486 | 2024-8-21 | https://stackoverflow.com/questions/78896486/spurious-zero-printed-in-seaborn-barplot-while-plotting-pandas-dataframe | The following is the minimal code. A spurious zero is printed between the second and third bars that I am unable to get rid of in the plot. Please help me fix the code. A minimal working example is below: import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = { 'Temp': [5, 10, 25, 50, 100, 5, 10,... | The additional 0.0 is from the legend patches (in fact four 0.0s on top of each other). You can iterate over the bar containers instead: for c in barplot.containers: for p in c: height = p.get_height() x = p.get_x() + p.get_width() / 2. if height >= 0: barplot.text(x, height + 0.1, f'{height:.1f}', ha='center', va='bot... | 2 | 2 |
78,894,954 | 2024-8-21 | https://stackoverflow.com/questions/78894954/polars-dataframe-via-django-query | I am exploring a change from pandas to polars. I like what I see. Currently, it is simple to get the data into Pandas. cf = Cashflow.objects.filter(acct=acct).values() df = pd.DataFrame(cf) So I figured it would be a simple change - but this will not work for me. df = pl.DataFrame(cf) What is the difference between u... | You just need to check the input parameters of Polars and the output data type of the Django queryset. In Polars the pl.DataFrame() constructor expects a list of dictionaries or a list of other data structures. Also when you run Cashflow.objects.filter(acct=acct).values() in Django, the result is a queryset of dictiona... | 2 | 1 |
78,895,383 | 2024-8-21 | https://stackoverflow.com/questions/78895383/how-to-use-apply-on-dataframe-using-a-custom-function | I have the following Pandas DataFrame: import pandas as pd from collections import Counter print(sentences) the output is (yes, the column name is 0): 0 0 A 1 B 2 C 3 D 4 EEE ... ... 462064467 FGH 462064468 QRS 462064469 EEE 462064470 VWXYZ 462064471 !!! [462064472 rows x 1 columns] I have a custom function to check... | You could filter with str.len and boolean indexing then pass to value_counts: out = sentences.loc[sentences[0].str.len()>1, 0].value_counts() Or count everything, then filter the keys: out = sentences[0].value_counts() out = out[out.index.str.len()>1] Output: 0 EEE 2 FGH 1 QRS 1 VWXYZ 1 !!! 1 Name: count, dtype: int6... | 3 | 3 |
78,894,984 | 2024-8-21 | https://stackoverflow.com/questions/78894984/why-can-hexadecimal-python-integers-access-properties-but-not-regular-ints | Decimal (i.e. non-prefixed) integers in Python seem to have fewer features than prefixed integers. If I do 1.real I get a SyntaxError: invalid decimal literal. However, if I do 0x1.real, then I get no error and 1 is the result. (Same for 0b1.real and 0o1.real, though in Python 2 01.real gives a syntax error as well). | It's because the 1. lead-in is being treated as a floating-point literal and r is not a valid decimal digit. Hex literals, of the form you show, are integers, so are not ambiguous in being treated as a possible floating point (there is no 0x1.1). If you use (1).real to specify that the literal is just the 1, it works f... | 4 | 7 |
78,894,794 | 2024-8-21 | https://stackoverflow.com/questions/78894794/regex-to-match-starting-numbering-or-alphabet-bullets-like-a | I am trying to find whether a string (sentence) starts with numbering or alphabet bullets followed by a dot (.) or space. I have regex like: r'^(\(\d|\[a-z]\))\s +' and r"^(?:\(\d+\)|\\[a-z]\.)\s*" I tried it on example strings: (a). this is bullet Not a bullet, (b) its bullet again I am so relaxed that its not bullet. (1)... | Executable code in Python Solution import re content = """ (a). this is bullet Not a bullet, (b) its bullet again I am so relaxed that its not bullet. (1) Bullet again. """ pattern_str = r'\((?:\w+)\)\.?\s.*' pattern = re.compile(pattern_str) matches = pattern.findall(content) for item in matches: print(item) Output (a)... | 2 | 1 |
78,886,125 | 2024-8-19 | https://stackoverflow.com/questions/78886125/vscode-python-extension-loading-forever-saying-reactivating-terminals | After updating VS code to v1.92, the Python extension consistently fails to launch, indefinitely showing a spinner next to “Reactivating terminals…” on the status bar. Selecting OUTPUT > Python reveals the error Failed to resolve env "/mnt/data-linux/miniconda3". Here’s the error trace: 2024-08-07 18:35:35.873 [error] ... | This appears to be a bug related to the new "native" Python locator. You can go back to the old working version by adding the following line to the user settings JSON (until the bug in the native locator is fixed): "python.locator": "js", Note that this workaround pins you to the legacy version which is not something ... | 51 | 50 |
78,874,958 | 2024-8-15 | https://stackoverflow.com/questions/78874958/invalid-filter-length-is-error-in-django-template-how-to-fix | I'm encountering a TemplateSyntaxError in my Django project when rendering a template. The error message I'm seeing is: TemplateSyntaxError at /admin/dashboard/program/add/ Invalid filter: 'length_is' Django Version: 5.1 Python Version: 3.12.4 Error Location: This error appears in a Django template at line 22 of the fi... | Don't copy the old length_is template tag from Django 5.0.x. If this is in an upstream repository, propose the appropriate change to the project (or use the approach provided here: https://github.com/farridav/django-jazzmin/issues/593#issuecomment-2288096357). Update: The specific fix has been merged in: https://github.c... | 1 | 4 |
78,875,297 | 2024-8-15 | https://stackoverflow.com/questions/78875297/horizontal-sum-in-duckdb | In Polars I can do: import polars as pl df = pl.DataFrame({'a': [1,2,3], 'b': [4, 5, 6]}) df.select(pl.sum_horizontal('a', 'b')) shape: (3, 1) ┌─────┐ │ a │ │ --- │ │ i64 │ ╞═════╡ │ 5 │ │ 7 │ │ 9 │ └─────┘ Is there a way to do this with DuckDB? | COLUMNS() can now be unpacked as of DuckDB 1.1.0 This does allow you to use the list_* functions e.g. duckdb.sql(""" from df select list_sum(list_value(*columns(*))) """) ┌──────────────────────────────────┐ │ list_sum(list_value(df.a, df.b)) │ │ int128 │ ├──────────────────────────────────┤ │ 5 │ │ 7 │ │ 9 │ └───────... | 3 | 1 |
78,875,431 | 2024-8-15 | https://stackoverflow.com/questions/78875431/how-to-disable-functools-lru-cache-when-developing-locally | How do you disable caching when working with Python's functools? We have a boolean setting where you can disable the cache. This is for working locally. I thought something like this: @lru_cache(disable=settings.development_mode) But there is no setting like that. Am I missing something? How are people doing this? | If you want to disable caching conditionally while using functools.lru_cache, you need to manage this manually: you need a decorator that can conditionally apply lru_cache based on your settings. I am using the functools docs as the guide. Quotes from the docs: If maxsize is set to None, the LRU feature is disabled an... | 2 | 2 |
78,889,486 | 2024-8-19 | https://stackoverflow.com/questions/78889486/preserving-dataframe-subclass-type-during-pandas-groupby-aggregate | I'm subclassing pandas DataFrame in a project of mine. Most pandas operations preserve the subclass type, but df.groupby().agg() does not. Is this a bug? Is there a known workaround? import pandas as pd class MySeries(pd.Series): pass class MyDataFrame(pd.DataFrame): @property def _constructor(self): return MyDataFrame... | It turns out that groupby().agg() combines Series to build a DataFrame, so the subclassed Series constructor needs to be properly defined. See this documentation. The following code runs with no errors: import pandas as pd class MySeries(pd.Series): @property def _constructor(self): return MySeries @property def _const... | 2 | 0 |
78,889,002 | 2024-8-19 | https://stackoverflow.com/questions/78889002/make-image-from-uint8-rgb-pixel-data | I am trying to make a library related to RGB pixel data, but cannot seem to save the image data correctly. That is my output image. This is my code: pixelmanager.py from PIL import Image import numpy as np class MakeImgFromPixelRGB: def createIMG(PixelArray: list, ImgName: str, SaveImgAsFile: bool): # Convert the pixe... | Mark Setchell was ALMOST correct. His code did help me get an image, but it was repeated 4 times in one. Mark's line of code had a switchup (with h as height and w as width): array = np.array(PixelArray, dtype=np.uint8).reshape(h,w,3) This is my line of code: array = np.array(PixelArray, dtype=np.uint8).reshape(w, h, ... | 2 | 1 |
78,891,643 | 2024-8-20 | https://stackoverflow.com/questions/78891643/how-to-make-gtk-interface-work-with-asyncio | I'm trying to write a Python program with a GTK interface that gets output from functions using async/await that take a few seconds to execute, what I'm asking for is the best solution for running this while not freezing the GUI.. I was using threads before and sort of got it working, but both the GUI and the backend n... | One option to implement this is to merge event loops as shown in the other answer. The downside of that approach is that it tends to lead to CPU churn, as each event loop busy-loops to avoid blocking the other. An alternative approach is to run both event loops normally, each in its own thread. The GTK event loop insis... | 4 | 2 |
78,890,441 | 2024-8-20 | https://stackoverflow.com/questions/78890441/wordcloud-with-2-background-colors | I generated this on wordcloud.com using one of the "themes". I'd like to be able to do this with the python wordcloud library, but so far all I can achieve is a single background color (so all black, not grey and black). Can anyone give me a hint on how to add the additional background color, using matlab or imshow? H... | You can draw the wordcloud with the desired background color (e.g."grey") and then overlay this plot with a uniformly colored image (e.g. "black") masked using the wordcloud mask. import numpy as np import matplotlib.pyplot as plt from wordcloud import WordCloud from PIL import Image text = "some text about Dynasty TV ... | 3 | 3 |
78,878,107 | 2024-8-16 | https://stackoverflow.com/questions/78878107/how-to-add-file-name-pattern-in-aws-glue-etl-job-python-script | I wanted to add a file name pattern in an AWS Glue ETL job Python script where it should generate the files in the S3 bucket with the pattern dostrp*.csv.gz, but could not find a way to provide this file pattern in the Python script: import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark... | Use pandas: import pandas as pd # Convert DynamicFrame to Pandas DataFrame df = dynamic_frame.toDF().toPandas() # Define S3 bucket and prefix s3_bucket = 'your-s3-bucket' s3_prefix = 'your/s3/prefix/' # Define the S3 path for the output file s3_output_path = f"s3://{s3_bucket}/{s3_prefix}output_file.csv.gz" # Create a... | 2 | 1 |
78,893,363 | 2024-8-20 | https://stackoverflow.com/questions/78893363/extract-continuous-cell-values-from-multiple-excel-files-using-python | The aim of my task is firstly to extract values from continuous cells of a single Excel file. Then the same extraction method will be performed on the remaining Excel files of the same folder until the loop ends. For example, I want to extract values from row 'A283:A9000' at Excel file 1. After the extraction at excel f... | Yes, you are trying to use 'A283:A9000' as a single cell's coordinate, hence the AttributeError. An alternative is you can treat every element of your 'cells' list as a range cells = ['C11', 'C15', 'D15', 'C16', 'A283:A9000'] so for each element the code extracts all the cells that the range covers; for 'C11' that wo... | 2 | 1 |
78,894,080 | 2024-8-20 | https://stackoverflow.com/questions/78894080/syncing-matplotlib-imshow-coordinates | I'm trying to create an image using networkx, save that image to use later, and then overlay a plot over top of it later. However, when I try to load the image in and make new points, the scale seems off. I've tried everything I can find to make them sync, and I'm not sure what else to try at this point. Here's a simpl... | It seems that matplotlib adds padding to the sides of the image when saving the data from the plot. You can remove this padding by adding fig.tight_layout(pad=0) to the code like so: import networkx as nx import matplotlib.pyplot as plt import numpy as np fig = plt.figure() G = nx.dodecahedral_graph() pos = nx.spring_l... | 2 | 1 |
78,893,923 | 2024-8-20 | https://stackoverflow.com/questions/78893923/polars-set-missing-value-from-another-row | The following data frame represents basic flatten tree structure, as shown below, where pairs (id, sub-id) and (sub-id, key) are always unique and key always represents the same thing under the same id id1 └─┬─ sub-id │ │ └─── key1 │ │ └─── value │ └─ sub-id2 │ └─── key1 │ └─── None id2 └─── sub-id3 └─── key2 └─── valu... | There is pl.Expr.fill_null to fill missing values. As fill value, we use the first non-null value with the same id and key. As we assume that all values for the same id and key are the same, taking the first value is reasonable. It can be constructed as follows: pl.Expr.filter and pl.Expr.is_not_null to filter for non... | 2 | 3 |
78,893,691 | 2024-8-20 | https://stackoverflow.com/questions/78893691/capture-integer-in-string-and-use-it-as-part-of-regular-expression | I've got a string: s = ".,-2gg,,,-2gg,-2gg,,,-2gg,,,,,,,,t,-2gg,,,,,,-2gg,t,,-1gtt,,,,,,,,,-1gt,-3ggg" and a regular expression I'm using import re delre = re.compile('-[0-9]+[ACGTNacgtn]+') #this is almost correct print (delre.findall(s)) This returns: ['-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-2gg', '-1gtt', '-1gt'... | You can't do this with the regex pattern directly, but you can use capture groups to separate the integer and character portions of the match, and then trim the character portion to the appropriate length. import re # surround [0-9]+ and [ACGTNacgtn]+ in parentheses to create two capture groups delre = re.compile('-([0... | 5 | 4 |
78,888,948 | 2024-8-19 | https://stackoverflow.com/questions/78888948/how-to-get-mypy-to-raise-errors-warnings-about-using-the-typing-package-inst | Currently I have a bunch of code that does this: from typing import Dict foo: Dict[str, str] = [] In Python 3.9+, it is preferable to use the built-in types (source): foo: dict[str, str] = [] Is there a way to configure mypy to raise an error/warning when my code uses Dict instead of dict? | According to the mypy maintainers, this functionality will not be implemented because it has already been implemented by formatters like ruff: This is already implemented by linters such as Ruff (https://docs.astral.sh/ruff/rules/non-pep585-annotation/). That doesn't mean mypy can't also implement support for checks li... | 2 | 1 |
78,892,568 | 2024-8-20 | https://stackoverflow.com/questions/78892568/polars-split-column-and-get-n-th-or-last-element | I have the following code and output. Code. import polars as pl df = pl.DataFrame({ 'type': ['A', 'O', 'B', 'O'], 'id': ['CASH', 'ORB.A123', 'CHECK', 'OTC.BV32'] }) df.with_columns(sub_id=pl.when(pl.col('type') == 'O').then(pl.col('id').str.split('.')).otherwise(None)) Output. shape: (4, 3) ┌──────┬──────────┬────────... | You can simply append .list.last() to select the last element of each list. Alternatively, there exists .list.get() to get list elements by index. import polars as pl df = pl.DataFrame({ 'type': ['A', 'O', 'B', 'O'], 'id': ['CASH', 'ORB.A123', 'CHECK', 'OTC.BV32'] }) df.with_columns( sub_id=pl.when( pl.col('type') == '... | 2 | 3 |
78,891,481 | 2024-8-20 | https://stackoverflow.com/questions/78891481/pandas-string-replace-with-regex-argument-for-non-regex-replacements | Suppose I have a dataframe in which I want to replace a non-regex substring consisting only of characters (i.e. a-z, A-Z) and/or digits (i.e. 0-9) via pd.Series.str.replace. The docs state that this function is equivalent to str.replace or re.sub(), depending on the regex argument (default False). Apart from most likel... | Special characters! Using regular expressions with plain words is generally fine (aside from efficiency concerns), there will however be an issue when you have special characters. This is an often overlooked issue and I've seen many people not understanding why their str.replace failed. Pandas even changed the default ... | 3 | 3 |
78,889,556 | 2024-8-19 | https://stackoverflow.com/questions/78889556/create-date-range-with-predefined-number-of-periods-in-polars | When I create a date range in pandas, I often use the periods argument. Something like this: pd.date_range(start='1/1/2018', periods=8) DatetimeIndex(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-04', '2018-01-05', '2018-01-06', '2018-01-07', '2018-01-08'], dtype='datetime64[ns]', freq='D') What would be the eq... | Adding a periods argument has been an open feature request for a while now. Until the request has been implemented, you can make start an expression and create end by offsetting start by the desired number of periods (using pl.Expr.dt.offset_by). start = pl.lit("1/1/2018").str.to_date() pl.date_range(start=start, end=s... | 3 | 4 |
78,879,247 | 2024-8-16 | https://stackoverflow.com/questions/78879247/ignoring-undefined-symbol-errors-in-ctypes-library-load | I'm loading a not-owned-by-me library with Python's ctypes module as ctypes.CDLL("libthirdparty.so") which produces an error undefined symbol: g_main_context_push_thread_default because libthirdparty.so was overlinking a lot of unneeded / unused stuff like glib. In this particular case, I can work around this problem s... | Listing [Python.Docs]: ctypes - A foreign function library for Python. What you're after, is [Man7]: DLOPEN (3) (emphasis is mine): RTLD_LAZY Perform lazy binding. Resolve symbols only as the code that references them is executed. If the symbol is never referenced, then it is never resolved. (Lazy binding is performed... | 2 | 2 |
78,890,348 | 2024-8-20 | https://stackoverflow.com/questions/78890348/merging-the-two-replace-methods | Is it possible to merge the two "replace" methods in the example below into one? The intention is for scalability. import pandas as pd custom_groups = {"odd": [1, 3, 5], "even": [2, 4]} s = pd.Series([1, 2, 3, 4, 5]) s.replace(custom_groups["odd"], "odd").replace(custom_groups["even"], "even") | Most pythonic/efficient approach: reverse the dictionary with a dictionary comprehension and replace once: d = {v:k for k, l in custom_groups.items() for v in l} s.replace(d) Output: 0 odd 1 even 2 odd 3 even 4 odd dtype: object Intermediate d: {1: 'odd', 3: 'odd', 5: 'odd', 2: 'even', 4: 'even'} timings This is ~2x... | 2 | 2 |
78,889,767 | 2024-8-19 | https://stackoverflow.com/questions/78889767/polars-chain-multiple-operations-on-select-with-values-counts | I'm working with a Polars dataframe and I want to perform a series of operations using the .select() method. However, I'm facing problems when I try to apply value_counts() followed by unnest() to get separate columns instead of a struct column. If I just use the method alone, then I don't have any issues: ( df .select... | Note that in your first example, you didn't call .unnest() directly on the value_counts() expression, but on the select context. This can also be done if the select context contains multiple expressions. ( df .select( pl.col("CustomerID"), pl.col("Country").value_counts(sort=True).struct.rename_fields(["Country", "Stat... | 2 | 1 |
78,889,055 | 2024-8-19 | https://stackoverflow.com/questions/78889055/in-python-why-does-preallocation-of-a-numpy-array-fail-to-limit-its-printed-pre | Here is a minimal example: import numpy as np np.set_printoptions(linewidth=1000, precision=3) # First attempt fails to limit the printed precision of x x = np.array([None]) x[0] = 1/3 print(x) # Second attempt succeeds x = [None] x[0] = 1/3 x = np.array(x) print(x) Running this script yields [0.3333333333333333] [0.3... | When running: x = np.array([None]) x[0] = 1/3 print(x) x is an object array (that contains python floats), not an array with a float dtype like your second attempt: array([0.3333333333333333], dtype=object) This ignores the print options. You can reproduce this simply with: print(np.array([1/3], dtype=object), np.arr... | 2 | 5 |
78,888,863 | 2024-8-19 | https://stackoverflow.com/questions/78888863/compute-the-number-of-unique-combinations-while-excluding-those-containing-missi | I'd like to count the number of unique values when combining several columns at once. My idea so far was to use pl.struct(...).n_unique(), which works fine when I consider missing values as a unique value: import polars as pl df = pl.DataFrame({ "x": ["a", "a", "b", "b"], "y": [1, 1, 2, None], }) df.with_columns(foo=pl... | pl.Expr.drop_nulls does not drop the row as the entirety of the struct is indeed not null. To still achieve the desired result, you can filter out all rows which contain a null values in any of the columns of interest using pl.Expr.filter. ( df .with_columns( foo=pl.struct("x", "y").filter( ~pl.any_horizontal(pl.col("x... | 2 | 2 |
78,888,400 | 2024-8-19 | https://stackoverflow.com/questions/78888400/how-do-i-override-django-db-backends-logging-to-work-when-debug-false | In django's LOGGING configuration for the builtin django.db.backends it states that: "For performance reasons, SQL logging is only enabled when settings.DEBUG is set to True, regardless of the logging level or handlers that are installed." As a result the following LOGGING configuration, which is correctly set up to is... | Fortunately we can override this. Indeed, by setting the .force_debug_cursor of the connection to True, for example in one of the AppConfigs (any app config): # my_app/apps.py from django.apps import AppConfig from django.db import connection class MyAppConfig(AppConfig): name = 'my_app' def ready(self): connect... | 2 | 2 |
78,888,819 | 2024-8-19 | https://stackoverflow.com/questions/78888819/comparing-multiple-values-across-columns-of-pandas-dataframe-based-on-column-nam | I have a pandas dataframe with a number of thresholds and values associated with epochs. I want to compare the all of the thresholds with their associated values simultaneously to remove rows as needed. I will be doing this many times and the letter designations can change each time I create this dataframe, but there w... | Here's one approach: Use df.filter for both 'value*' and 'threshold*' columns. For 'threshold*', chain df.values to allow element-wise comparison on shape, rather than on column labels. Afterwards check df.all row-wise (axis=1) and use for boolean indexing. Minimal reproducible example import pandas as pd data = {'ep... | 2 | 2 |
78,886,008 | 2024-8-19 | https://stackoverflow.com/questions/78886008/handling-multiple-operations-on-dataframe-columns-with-polars | I'm trying to select all columns of a DataFrame and perform multiple operations on each column using Polars. For example, I discovered that I can use the following code to count non-null values in each column: df.select(pl.col("*").is_not_null().sum() However, when I attempt to concatenate multiple operations like thi... | Let's use a simple test input df = pl.DataFrame({ "InvoiceNo": [1,2,3], "StockCode": [1,None,3], "Description": [None,None,6] }) To get your desired output, you can use unpivot() to transpose the DataFrame. Also, you don't really need to calculate both values, as soon as you calculated count of null values, you can u... | 3 | 1 |
78,888,584 | 2024-8-19 | https://stackoverflow.com/questions/78888584/compute-difference-between-dates-and-convert-into-weeks-months-years-in-polars-d | I have a pl.DataFrame with a start_date and end_date column. I need to compute the difference between those two columns and add new columns representing the result in days, weeks, months and years. I would be fine getting an approximate result, meaning dividing the days by 7 / 30 / 365. My problem is to convert the dura... | you can use dt.total_days() to extract days from the Duration datatype: df.with_columns( days = (pl.col.end_date - pl.col.start_date).dt.total_days() ) ┌────────────┬────────────┬──────┐ │ start_date ┆ end_date ┆ days │ │ --- ┆ --- ┆ --- │ │ date ┆ date ┆ i64 │ ╞════════════╪════════════╪══════╡ │ 2024-01-01 ┆ 2024-07-31 ┆ 2... | 3 | 2 |
78,888,041 | 2024-8-19 | https://stackoverflow.com/questions/78888041/cast-multiple-columns-with-unix-epoch-to-datetime | I have a dataframe with multiple columns containing unix epochs. In my example I only use 2 of 13 columns I have. I'd like to cast all those columns to a datetime with UTC timezone in a single call to with_columns(). df = pl.from_repr(""" ┌─────┬────────────┬────────────┬────────────┐ │ id ┆ start_date ┆ end_date ┆ can... | pl.from_epoch() accepts a single column name, but also Expr objects. column: str | Expr | Series | Sequence[int] pl.col() can select multiple columns, which we can pass directly: df.with_columns(pl.from_epoch(pl.col("start_date", "end_date"))) shape: (5, 4) ┌─────┬─────────────────────┬─────────────────────┬────────... | 2 | 2 |
78,887,466 | 2024-8-19 | https://stackoverflow.com/questions/78887466/largest-product-in-a-grid-not-in-the-same-direction | https://projecteuler.net/problem=11 Here is the question: int grid[20][20] = { {8, 2, 22, 97, 38, 15, 0, 40, 0, 75, 4, 5, 7, 78, 52, 12, 50, 77, 91, 8}, {49, 49, 99, 40, 17, 81, 18, 57, 60, 87, 17, 40, 98, 43, 69, 48, 4, 56, 62, 0}, {81, 49, 31, 73, 55, 79, 14, 29, 93, 71, 40, 67, 53, 88, 30, 3, 49, 13, 36, 65}, {52, 70, 95, 23, 4, 60, 11, 42, 69, 24, 68, 56, 1, 32, 56, 71, 37, 2, 36, 91}, {22, 31, 16, 71, 51, 67, 63, 89, 41, 92, 36, 54, 22, 40, 40, 28, 66, 33, 13, 80}, {24, 47, 32, 60, 99, 3, 45, 2, 44, 75, 33, 53, 78, 36, 84, 20, 35, 17, 12, 50}, {32, 98, 81, 28, 64, 23, 67, 10, 26, 38, 40, 67, 59, 54, 70, 66, 18, 38, 64, 70}, {67, 26, 20, 68, 2, 62, 12, 20, 95, 63, 94, 39, 63, 8, 40, 91, 66, 49, 94, 21}, {24, 55, 58, 5, 66, 73, 99, 26, 97, 17, 78, 78, 96, 83, 14, 88, 34, 89, 63, 72}, {21, 36, 23, 9, 75, 0, 76, 44, 20, 45, 35, 14, 0, 61, 33, 97, 34, 31, 33, 95}, {78, 17, 53, 28, 22, 75, 31, 67, 15, 94, 3, 80, 4, 62, 16, 14, 9, 53, 56, 92}, {16, 39, 5, 42, 96, 35, 31, 47, 55, 58, 88, 24, 0, 17, 54, 24, 36, 29, 85, 57}, {86, 56, 0, 48, 35, 71, 89, 7, 5, 44, 44, 37, 44, 60, 21, 58, 51, 54, 17, 58}, {19, 80, 81, 68, 5, 94, 47, 69, 28, 73, 92, 13, 86, 52, 17, 77, 4, 89, 55, 40}, {4, 52, 8, 83, 97, 35, 99, 16, 7, 97, 57, 32, 16, 26, 26, 79, 33, 27, 98, 66}, {88, 36, 68, 87, 57, 62, 20, 72, 3, 46, 33, 67, 46, 55, 12, 32, 63, 93, 53, 69}, {4, 42, 16, 73, 38, 25, 39, 11, 24, 94, 72, 18, 8, 46, 29, 32, 40, 62, 76, 36}, {20, 69, 36, 41, 72, 30, 23, 88, 34, 62, 99, 69, 82, 67, 59, 85, 74, 4, 36, 16}, {20, 73, 35, 29, 78, 31, 90, 1, 74, 31, 49, 71, 48, 86, 81, 16, 23, 57, 5, 54}, {1, 70, 54, 71, 83, 51, 54, 69, 16, 92, 33, 48, 61, 43, 52, 1, 89, 19, 67, 48} }; What is the greatest product of four adjacent numbers in the "same direction" (up, down, left, right, or diagonally) in the 20 by 20 grid? At first I read the question wrong and did not see the "same direction" part. I thought the rule is every number has to be adjacent to at least one other number in the grid, not at most. I was able to solve it fairly quickly (I also used ChatGpt to fix the syntax errors) because I am kind of new to this. Solution: #include <stdio.h> // Function to find the maximum product of four adjacent numbers in a grid int findMaxProduct(int grid[20][20], int* pos) { int maxProduct = 0; // Check horizontally for (int i = 0; i < 20; i++) { for (int j = 0; j < 20 - 3; j++) { int product = grid[i][j] * grid[i][j + 1] * grid[i][j + 2] * grid[i][j + 3]; printf("Checking horizontally at (%d, %d) -> product: %d\n", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i; pos[3] = j + 1; pos[4] = i; pos[5] = j + 2; pos[6] = i; pos[7] = j + 3; printf("New max horizontal product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\n", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check vertically for (int i = 0; i < 20 - 3; i++) { for (int j = 0; j < 20; j++) { int product = grid[i][j] * grid[i + 1][j] * grid[i + 2][j] * grid[i + 3][j]; printf("Checking vertically at (%d, %d) -> product: %d\n", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j; pos[4] = i + 2; pos[5] = j; pos[6] = i + 3; pos[7] = j; printf("New max vertical product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\n", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check diagonally (down-right) for (int i = 0; i < 20 - 3; i++) { for (int j = 0; j < 20 - 3; j++) { int product = grid[i][j] * grid[i + 1][j + 1] * grid[i + 2][j + 2] * grid[i + 3][j + 3]; printf("Checking diagonally down-right at (%d, %d) -> product: %d\n", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j + 1; pos[4] = i + 2; pos[5] = j + 2; pos[6] = i + 3; pos[7] = j + 3; printf("New max diagonal down-right product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\n", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } // Check diagonally (down-left) for (int i = 0; i < 20 - 3; i++) { for (int j = 3; j < 20; j++) { int product = grid[i][j] * grid[i + 1][j - 1] * grid[i + 2][j - 2] * grid[i + 3][j - 3]; printf("Checking diagonally down-left at (%d, %d) -> product: %d\n", i, j, product); if (product > maxProduct) { maxProduct = product; pos[0] = i; pos[1] = j; pos[2] = i + 1; pos[3] = j - 1; pos[4] = i + 2; pos[5] = j - 2; pos[6] = i + 3; pos[7] = j - 3; printf("New max diagonal down-left product found: %d at (%d, %d), (%d, %d), (%d, %d), (%d, %d)\n", maxProduct, pos[0], pos[1], pos[2], pos[3], pos[4], pos[5], pos[6], pos[7]); } } } return maxProduct; } int main(void) { int grid[20][20] = { {8, 2, 22, 97, 38, 15, 0, 40, 0, 75, 4, 5, 7, 78, 52, 12, 50, 77, 91, 8}, {49, 49, 99, 40, 17, 81, 18, 57, 60, 87, 17, 40, 98, 43, 69, 48, 4, 56, 62, 0}, {81, 49, 31, 73, 55, 79, 14, 29, 93, 71, 40, 67, 53, 88, 30, 3, 49, 13, 36, 65}, {52, 70, 95, 23, 4, 60, 11, 42, 69, 24, 68, 56, 1, 32, 56, 71, 37, 2, 36, 91}, {22, 31, 16, 71, 51, 67, 63, 89, 41, 92, 36, 54, 22, 40, 40, 28, 66, 33, 13, 80}, {24, 47, 32, 60, 99, 3, 45, 2, 44, 75, 33, 53, 78, 36, 84, 20, 35, 17, 12, 50}, {32, 98, 81, 28, 64, 23, 67, 10, 26, 38, 40, 67, 59, 54, 70, 66, 18, 38, 64, 70}, {67, 26, 20, 68, 2, 62, 12, 20, 95, 63, 94, 39, 63, 8, 40, 91, 66, 49, 94, 21}, {24, 55, 58, 5, 66, 73, 99, 26, 97, 17, 78, 78, 96, 83, 14, 88, 34, 89, 63, 72}, {21, 36, 23, 9, 75, 0, 76, 44, 20, 45, 35, 14, 0, 61, 33, 97, 34, 31, 33, 95}, {78, 17, 53, 28, 22, 75, 31, 67, 15, 94, 3, 80, 4, 62, 16, 14, 9, 53, 56, 92}, {16, 39, 5, 42, 96, 35, 31, 47, 55, 58, 88, 24, 0, 17, 54, 24, 36, 29, 85, 57}, {86, 56, 0, 48, 35, 71, 89, 7, 5, 44, 44, 37, 44, 60, 21, 58, 51, 54, 17, 58}, {19, 80, 81, 68, 5, 94, 47, 69, 28, 73, 92, 13, 86, 52, 17, 77, 4, 89, 55, 40}, {4, 52, 8, 83, 97, 35, 99, 16, 7, 97, 57, 32, 16, 26, 26, 79, 33, 27, 98, 66}, {88, 36, 68, 87, 57, 62, 20, 72, 3, 46, 33, 67, 46, 55, 12, 32, 63, 93, 53, 69}, {4, 42, 16, 73, 38, 25, 39, 11, 24, 94, 72, 18, 8, 46, 29, 32, 40, 62, 76, 36}, {20, 69, 36, 41, 72, 30, 23, 88, 34, 62, 99, 69, 82, 67, 59, 85, 74, 4, 36, 16}, {20, 73, 35, 29, 78, 31, 90, 1, 74, 31, 49, 71, 48, 86, 81, 16, 23, 57, 5, 54}, {1, 70, 54, 71, 83, 51, 54, 69, 16, 92, 33, 48, 61, 43, 52, 1, 89, 19, 67, 48} }; int pos[8]; // To store the positions of the 4 adjacent numbers with the highest product int maxProduct = findMaxProduct(grid, pos); printf("The maximum product of four adjacent numbers is %d.\n", maxProduct); printf("The positions of these numbers are:\n"); for (int i = 0; i < 8; i += 2) { printf("(%d, %d)\n", pos[i], pos[i + 1]); } return 0; } What if we consider numbers that aren't in the same direction? The only rule is every number has to be adjacent to at least one other number, instead of being adjacent to only one other number. Like the answer itself could be 4 numbers that are all adjacent to each other like a square 2 by 2 matrix? Now we have to search for a lot more things. Manually finding shapes in python def is_in_bounds(x, y, rows, cols): return 0 <= x < rows and 0 <= y < cols def generate_combinations(x, y): return [ [(x, y), (x+1, y), (x+2, y), (x+3, y)], # Horizontal line [(x, y), (x, y+1), (x, y+2), (x, y+3)], # Vertical line [(x, y), (x+1, y+1), (x+2, y+2), (x+3, y+3)], # Diagonal down-right [(x, y), (x-1, y+1), (x-2, y+2), (x-3, y+3)], # Diagonal down-left [(x, y), (x+1, y), (x, y+1), (x+1, y+1)], # 2x2 square [(x, y), (x+1, y), (x+2, y), (x+2, y+1)], # L-shape 1 [(x, y), (x, y+1), (x, y+2), (x+1, y+2)], # L-shape 2 [(x, y), (x+1, y), (x+2, y), (x, y+1)], # L-shape 3 [(x, y), (x, y+1), (x, y+2), (x+1, y)], # L-shape 4 [(x, y), (x+1, y), (x+2, y), (x+1, y+1)], # T-shape 1 [(x, y), (x, y+1), (x, y+2), (x+1, y+1)], # T-shape 2 [(x, y), (x+1, y), (x+1, y+1), (x+1, y+2)], # T-shape 3 [(x, y), (x, y+1), (x-1, y+1), (x+1, y+1)] # T-shape 4 ] def calculate_product(grid, combination): product = 1 for x, y in combination: product *= grid[x][y] return product def find_max_product(grid): max_product = 0 max_combination = [] rows = len(grid) cols = len(grid[0]) for i in range(rows): for j in range(cols): combinations = generate_combinations(i, j) valid_combinations = [comb for comb in combinations if all(is_in_bounds(x, y, rows, cols) for x, y in comb)] for comb in valid_combinations: product = calculate_product(grid, comb) print(f"Combination: {comb} => Values: {[grid[x][y] for x, y in comb]} => Product: {product}") if product > max_product: max_product = product max_combination = comb print(f"Max product combination: {max_combination} => Product: {max_product}") return max_product # Example usage grid = [[int(x) for x in line.split()] for line in """ 08 02 22 97 38 15 00 40 00 75 04 05 07 78 52 12 50 77 91 08 49 49 99 40 17 81 18 57 60 87 17 40 98 43 69 48 04 56 62 00 81 49 31 73 55 79 14 29 93 71 40 67 53 88 30 03 49 13 36 65 52 70 95 23 04 60 11 42 69 24 68 56 01 32 56 71 37 02 36 91 22 31 16 71 51 67 63 89 41 92 36 54 22 40 40 28 66 33 13 80 24 47 32 60 99 03 45 02 44 75 33 53 78 36 84 20 35 17 12 50 32 98 81 28 64 23 67 10 26 38 40 67 59 54 70 66 18 38 64 70 67 26 20 68 02 62 12 20 95 63 94 39 63 08 40 91 66 49 94 21 24 55 58 05 66 73 99 26 97 17 78 78 96 83 14 88 34 89 63 72 21 36 23 09 75 00 76 44 20 45 35 14 00 61 33 97 34 31 33 95 78 17 53 28 22 75 31 67 15 94 03 80 04 62 16 14 09 53 56 92 16 39 05 42 96 35 31 47 55 58 88 24 00 17 54 24 36 29 85 57 86 56 00 48 35 71 89 07 05 44 44 37 44 60 21 58 51 54 17 58 19 80 81 68 05 94 47 69 28 73 92 13 86 52 17 77 04 89 55 40 04 52 08 83 97 35 99 16 07 97 57 32 16 26 26 79 33 27 98 66 88 36 68 87 57 62 20 72 03 46 33 67 46 55 12 32 63 93 53 69 04 42 16 73 38 25 39 11 24 94 72 18 08 46 29 32 40 62 76 36 20 69 36 41 72 30 23 88 34 62 99 69 82 67 59 85 74 04 36 16 20 73 35 29 78 31 90 01 74 31 49 71 48 86 81 16 23 57 05 54 01 70 54 71 83 51 54 69 16 92 33 48 61 43 52 01 89 19 67 48 """.strip().split('\n')] print(find_max_product(grid)) But if the problem asks for n=5, I will need to manually find more shapes on pen and paper. Is there another way to find all possible "n" number combinations (where at least one number is adjacent to another one)? Do we have a formula for this, where I plug in n and it will tell me how many possible shapes are there? | Given any connected shape with n tiles, you can: Find the tiles with the smallest y coordinate, and choose the one of those with the smallest x coordinate. This is the "starting tile" Traverse the remaining tiles using BFS. As you consider the neighboring cells of each tile, keep track of which ones you examine, m... | 2 | 0 |
78,887,911 | 2024-8-19 | https://stackoverflow.com/questions/78887911/is-there-an-efficient-way-to-sum-over-numpy-splits | I would like to split array into chunks, sum the values in each chunk, and return the result as another array. The chunks can have different sizes. This can be naively done by using numpy split function like this def split_sum(a: np.ndarray, breakpoints: np.ndarray) -> np.ndarray: return np.array([np.sum(subarr) for su... | You wouldn't really split an array in numpy unless this is the last step. Numpy can handle your operation natively with numpy.add.reduceat (a minor difference with your function is how the breakpoints are defined, you will need to prepend 0 with reduceat): arr = np.arange(20) breakpoints = np.array([2, 5, 10, 12]) def ... | 3 | 5 |
78,886,152 | 2024-8-19 | https://stackoverflow.com/questions/78886152/how-can-pandas-loc-take-three-arguments | I am looking at someone's code and this is what they wrote from financetoolkit import Toolkit API_KEY = "FINANCIAL_MODELING_PREP_API_KEY" companies = Toolkit(["AAPL", "MSFT", "GOOGL", "AMZN"], api_key=API_KEY, start_date="2005-01-01") income_statement_growth = companies.get_income_statement(growth=True) display(income_s... | The return value of companies.get_income_statement(growth=True) is a pandas DataFrame with a multi-index. The columns are indexed by period ('2019', '2020', etc.) and the rows are indexed by a combination of company ticker and data item (e.g. ('AAPL', 'Revenue')). You could access a single element like this: print(inco... | 2 | 2 |
78,885,636 | 2024-8-18 | https://stackoverflow.com/questions/78885636/optimize-identification-of-quantiles-along-array-columns | I have an array A (of size m x n), and a percentage p in [0,1]. I need to produce an m x n boolean array B, with True in the (i,j) entry if A[i,j] is in the p-th quantile of the column A[:,j]. Here is the code I have used so far. import numpy as np m = 200 n = 300 A = np.random.rand(m, n) p = 0.3 quant_levels = np.zer... | I'm not sure it's much faster but you should at least be aware that numpy.quantile has an axis keyword argument so you can compute all the quantiles with one command: quant_levels = np.quantile(A, p, axis=0) B = (A >= quant_levels) | 2 | 3 |
78,873,119 | 2024-8-14 | https://stackoverflow.com/questions/78873119/how-to-prevent-ruff-formatter-from-adding-a-newline-after-module-level-docstring | I'm using ruff as a replacement to black formatter but I wanna keep the diff at minimum. I'm noticing that it automatically inserts a newline between the module-level docstring and the first import statement. For example, given this code: """base api extension.""" import abc from typing import List, Optional, Type Aft... | If you update your black version you'll find the same issue. This formatting change is internally called module_docstring_newlines: Black version 24.1.0 (26 Jan 2024) introduced the new "2024 stable style" which includes this change (which was previously only in "preview style") - see Black's changelogs (look for #393... | 2 | 2 |
78,884,251 | 2024-8-18 | https://stackoverflow.com/questions/78884251/unable-to-install-wordnet-with-nltk-3-9-0-as-importing-nltk-requires-installed-w | It is not possible to import nltk, and the solution given by the output required me to import nltk: >>>import nltk Traceback (most recent call last): File "D:\project\Lib\site-packages\nltk\corpus\util.py", line 84, in __load root = nltk.data.find(f"{self.subdir}/{zip_name}") ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^... | This bug was introduced in nltk 3.9.0 (released on 18 August 2024) and is a known issue. It was fixed in 3.9.1: python3 -m pip install nltk~=3.9.1 The most recent full release prior to 3.9.x was nltk 3.8.1. However, be aware that this version is vulnerable to remote code execution. python3 -m pip install nltk==3.8.1 | 5 | 9 |
78,880,609 | 2024-8-16 | https://stackoverflow.com/questions/78880609/polars-gives-exception-for-empty-json-but-pandas-works | While reading JSON from files, Polars raises an exception when the JSON is empty, but pandas does not. I have some files with only a key and no value. Is Polars still primitive and not as mature as pandas? Which package should I use? Polars from io import StringIO import polars as pl print(pl.read_json(StringIO(... | Pandas is mature and Polars is a comparatively new library. There is an existing issue, the same as yours; refer to this bug link: Polars Issue Link. Either you can use exception handling with Polars for this scenario or go with pandas. Update: The issue is closed now and it is working in Polars | 3 | 0 |
78,883,445 | 2024-8-17 | https://stackoverflow.com/questions/78883445/find-absolute-difference-value-between-elements-using-just-numpy-array-operation | a = np.array([101,105,90,102,90,10,50]) b = np.array([99,110,85,110,85,90,60]) expected result = np.array([2,5,5,8,5,20,10]) How can I find the minimum absolute difference between elements using just numpy operations, with a modulus of 100 when two values straddle 100? | The answer by Derek Roberts with numpy.minimum is almost correct. However, since the input values can be greater than 100, they should first be rescaled to 0-100 with %100 (mod). I'm adding an extra pair of values to demonstrate this: a = np.array([101,105,90,102,90,10,50,1001]) b = np.array([99,110,85,110,85,90,60,2])... | 3 | 2 |
78,877,560 | 2024-8-16 | https://stackoverflow.com/questions/78877560/liboqs-python-throws-attributeerror-module-oqs-has-no-attribute-get-enabled | I'm trying to get this Open Quantum Safe example working: https://github.com/open-quantum-safe/liboqs-python/blob/main/examples/kem.py I'm getting this error: (myenv) user@mx:~ $ python3 '/home/user/Documents/Dev/quantum_algo_tests/# Key encapsulation Python example.py' Enabled KEM mechanisms: Traceback (most recent ca... | Chances are, liboqs just wasn't installed correctly. The GitHub page gives several installation steps, some of which are contradictory and could break your install. (When I tried following the instructions from top to bottom, I got a version conflict due to incompatible installs of oqs and liboqs-python.) The safest op... | 3 | 2 |
78,882,910 | 2024-8-17 | https://stackoverflow.com/questions/78882910/complicated-nested-integrals-in-terms-of-python-scipy-integrate | I want to write the following complicated function using python scipy.integrate: where \phi and \Phi are the pdf and cdf of the standard normal respectively and the sum i != 1,2 is over the thetas in *theta, and the product s != 1,i,2 is over the thetas in *theta that is not theta_i To make things simpler, I break down... | The first thing I did was move prod outside of f since it isn't a function of v. I realized that your prod includes the ti term, so I added a check for ts (renamed from t) equal to ti. You also have way too many variables for your functions. g is a function of u and the thetas and f is a function of v and the thetas, s... | 2 | 1 |
78,881,821 | 2024-8-17 | https://stackoverflow.com/questions/78881821/how-to-implement-enum-with-mutable-members-comparable-and-hashable-by-their-ind | I am writing a class to represent sequential stages of an industrial process, composed of ADMISSION, PROCESSING, QC and DELIVERY stages. Each stage has a unique, progressive sequence number, a mnemonic name and a field keeping track of the number of instances going through it: @dataclass class Stage: seq_id: int name: ... | Both dataclass and Enum do a lot of work to make things simple for the user -- when you start extending and/or combining them you need to be careful. Working code: from dataclasses import dataclass from enum import Enum, auto, unique from functools import total_ordering from typing import override @dataclass class Stag... | 2 | 1 |
78,882,187 | 2024-8-17 | https://stackoverflow.com/questions/78882187/dataframe-set-all-in-group-with-value-that-occurs-first-in-corresponding-multi-c | I have multiple A* columns with corresponding B* columns (A and B have the corresponding numbers at the end of the column names). When the REFNO value = A# value and the 'MNGR' is not BOB, I need to put the value from corresponding B# column into the ‘AGE’ column and the corresponding # number in the 'FLAG' column. The... | Without the need for a loop: m = df.filter(regex=r'^A\d').eq(df['REFNO'], axis=0).values df['AGE'] = df.filter(regex=r'^B\d').where(m).max(axis=1).astype(int) df['FLAG'] = m.argmax(axis=1) + 1 df[['AGE', 'FLAG']] = (df.groupby(['MNGR', 'YEAR'])[['AGE', 'FLAG']] .transform('first') .where(df['MNGR'].ne('BOB'), 0) ) Out... | 2 | 2 |
78,882,352 | 2024-8-17 | https://stackoverflow.com/questions/78882352/plotting-a-timeseries-as-bar-plot-with-pandas-results-in-an-incorrect-year | I have the following dataframe (except my actual data is over 25 years): import pandas as pd df = pd.DataFrame( dict( date=pd.date_range(start="2020-01-01", end="2020-12-31", freq="MS"), data=[1,2,3,4,5,6,7,8,9,10,11,12] ), ) df Output: date data 0 2020-01-01 1 1 2020-02-01 2 2 2020-03-01 3 3 2020-04-01 4 4 2020-05-0... | When you are using a bar plot, the x-coordinates become 0, 1, 2, 3, etc. That's why mdates.DateFormatter returns 1970, as it treats these coordinates as seconds since epoch time. You can set the tick labels manually: ax.set_xticklabels(df["date"].dt.strftime("%Y - %b")) | 3 | 1 |
78,880,721 | 2024-8-16 | https://stackoverflow.com/questions/78880721/iterate-over-groups-created-using-group-by-on-date-column | Update: This was fixed by pull/18251 I am new to Polars. I want to iterate over the groups created by grouping over the column where each cell of that column contains a list of two dates. I used the following (sample) piece of code to achieve this, and it used to work fine with polars==0.20.18: import polars as pl i... | You could group by the .hash() (or cast) as a workaround. (df.group_by(pl.col("period_bins").hash().alias("key")) .all() ) shape: (2, 4) ┌─────────────────────┬─────────────────────────────────┬─────────────────────────────────┬─────────────────────────────────┐ │ key ┆ ts ┆ files ┆ period_bins │ │ --- ┆ --- ┆ --- ┆ -... | 5 | 1 |
78,880,945 | 2024-8-16 | https://stackoverflow.com/questions/78880945/weighted-sum-on-multiple-dataframes | I have several dataframes with an ID, a time of day, and a number. I would like to weight each dataframe number and then sum them for each id/time of day. As an example: weighted 0.2 ID TOD M 0 10 morning 1 1 13 afternoon 3 2 32 evening 2 3 10 evening 2 weighted 0.4 ID TOD W 0 10 morning 1 1 13 morning 3 2 32 afternoon... | IIUC, you can try pd.concat the dataframes after you set index and multiply by your weights for each dataframe, then use groupby and sum: df_out = pd.concat([df1_2.set_index(['ID', 'TOD']).mul(.2), df2_4.set_index(['ID', 'TOD']).mul(.4)])\ .sum(axis=1)\ .groupby(level=[0,1])\ .sum()\ .reset_index() df_out Output: ID ... | 2 | 5 |
78,877,618 | 2024-8-16 | https://stackoverflow.com/questions/78877618/polars-struct-fieldliststr-returns-a-single-column-when-dealing-with-listst | Some of my columns in my Polars Dataframe have the dtype pl.List(pl.Struct). I'm trying to replace these columns so that I get multiple columns that are lists of scalar values. Here's an example of a column I'm trying to change: import polars as pl df = pl.DataFrame({ "column_0": [ [{"field_1": "a", "field_2": 1}, {"fi... | You could unpack the lists/structs with .explode() + .unnest() and group the rows back together. (df.with_row_index() .explode("column_0") .unnest("column_0") .group_by("index", maintain_order=True) .all() ) shape: (2, 3) ┌───────┬────────────┬───────────┐ │ index ┆ field_1 ┆ field_2 │ │ --- ┆ --- ┆ --- │ │ u32 ┆ list... | 3 | 2 |
78,878,048 | 2024-8-16 | https://stackoverflow.com/questions/78878048/shaky-zoom-with-opencv-python | I want to apply a zoom-in and zoom-out effect on a video using opencv, but as opencv doesn't come with a built-in zoom, I try cropping the frame to the interpolated value's width, height, x and y and then resizing the frame back to the original video size, i.e. 1920 x 1080. But when I rendered the final video, there is shakiness ... | First, let's look at why your approach jitters. Then I'll show you an alternative that doesn't jitter. In your approach, you zoom by first cropping the image, and then resizing it. That cropping only happens by whole pixel rows/columns, not in finer steps. You saw that especially well near the end of the ease, where th... | 3 | 4 |
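The answer's concrete alternative is cut off above; one common jitter-free approach is to skip the crop-then-resize round trip and let `cv2.warpAffine` do a single subpixel-accurate interpolation per frame. A sketch under that assumption (`scale`, `cx`, `cy` stand for the interpolated ease values):

```python
import cv2
import numpy as np

def zoom_frame(frame, scale, cx, cy):
    """Zoom by `scale` around (cx, cy), keeping the output size unchanged."""
    h, w = frame.shape[:2]
    # Forward affine map: the point (cx, cy) lands on the frame centre.
    # Coordinates are floats, so motion stays smooth between whole pixels.
    M = np.float32([
        [scale, 0.0, w / 2 - scale * cx],
        [0.0, scale, h / 2 - scale * cy],
    ])
    return cv2.warpAffine(frame, M, (w, h), flags=cv2.INTER_LINEAR)

# e.g. zoomed = zoom_frame(frame, 1.5, 960.0, 540.0) for a 1920x1080 frame
```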
78,876,878 | 2024-8-15 | https://stackoverflow.com/questions/78876878/system-identification-using-an-arx-model-with-gekko | The following is related to this question: Why doesn't the Gekko solver adapt to variations in the system? What I still don't understand is why, sometimes, when the outside temperature rises, the inside temperature remains constant. Normally, it should also increase since beta remains constant. Here are examples of thi... | There is a small error where m.d is redefined, breaking the connection with the m.arx() model. Here is a correction: # disturbance and parameters #m.d = m.Param(T_externel[0]) m.d.value = T_externel[0] There is also a dynamics issue. The response to beta (and likely external temperature) is very slow as shown in t... | 3 | 2 |
78,879,657 | 2024-8-16 | https://stackoverflow.com/questions/78879657/selecting-rows-that-compose-largest-group-within-another-group-in-a-pandas-dat | Main problem I'm trying to find a way to select the rows that make up the largest sub-group inside another group in a Pandas DataFrame and I'm having a bit of a hard time. Visual example (code below) Here is a sample dataset to help explain exactly what it is I'm trying to do. There is code below to recreate this datas... | A short code to perform this could value_counts + idxmax, then merge: keep = my_df[['Col1', 'Col2']].value_counts().groupby(level='Col1').idxmax() out = my_df.merge(pd.DataFrame(keep.tolist(), columns=['Col1', 'Col2'])) Output: RowID Col1 Col2 Col3 Col4 0 10005 Group A Type 3 500 Earl 1 10006 Group A Type 3 600 Fred ... | 2 | 1 |
78,879,293 | 2024-8-16 | https://stackoverflow.com/questions/78879293/define-an-unused-new-symbol-inside-function-scope | I find it difficult to encapsulate a lot of logic in a reusable manner when writing sympy code. I would like to write a function in the following way: def compute_integral(fn): s = symbols_new() # do not apply lexicographic identity, this symbol identity is unique return integrate( fn(s), (s, 0, 1)) If I just use t... | I'm using the latest sympy, version 1.13.2, and the code block you provided doesn't run: it raises an error. There seem to be some misconceptions that I'd like to clarify. Consider this code block: import sympy as sp from sympy.abc import s, x t = sp.sin(3 + s) print("t:", t) e = x**2 + t * s * x print("e:", e) # t: s... | 2 | 1 |
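The accepted answer is truncated above, but sympy already ships a primitive with exactly the property the question asks for: `sympy.Dummy`, whose identity is unique regardless of its printed name. A sketch of the function written that way (the truncated answer may take a different route):

```python
import sympy as sp

def compute_integral(fn):
    s = sp.Dummy("s")  # unique identity: never equal to any other symbol
    return sp.integrate(fn(s), (s, 0, 1))

print(compute_integral(lambda t: t**2))  # 1/3
```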
78,877,275 | 2024-8-16 | https://stackoverflow.com/questions/78877275/python-class-mimic-makefile-dependency | Q: Is there a better way to do this, or is the idea itself wrong? I have a processing class that creates something with multiple construction steps, such that the next function depends on the previous ones. I want to have dependencies specified like those in a makefile, and when the dependency does not exist, construct ... | This is a good use case for properties, with which you can generate values on demand so there's no need to build a dependency tree. As a convention the generated values are cached in attributes of names prefixed with an underscore. Since all your getter methods and setter methods are going to access attributes in a unif... | 2 | 2 |
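A compact sketch of the pattern the answer describes, using `functools.cached_property` (a stdlib shortcut for the property-plus-underscore-cache convention; the class and step names here are made up):

```python
from functools import cached_property

class Build:
    @cached_property
    def base(self):
        print("constructing base")
        return [1, 2, 3]

    @cached_property
    def derived(self):
        # Touching self.base constructs it on demand, like a make dependency
        return [x * 2 for x in self.base]

b = Build()
print(b.derived)  # constructs base, then derived
print(b.derived)  # both cached now; nothing is rebuilt
```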
78,875,835 | 2024-8-15 | https://stackoverflow.com/questions/78875835/polars-rolling-corr-giving-weird-results | I was trying to implement rolling autocorrelation in polars, but got some weird results when there're nulls involved. The code is pretty simple. Let's say I have two dataframes df1 and df2: df1 = pl.DataFrame({'a': [1.06, 1.07, 0.93, 0.78, 0.85], 'lag_a': [1., 1.06, 1.07, 0.93, 0.78]}) df2 = pl.DataFrame({'a': [1., 1.0... | I think this is a bug in the Rust implementation of rolling_corr (in fairness, it is marked unstable in python). It looks it naively applies rolling_mean without first applying the joint null mask. So the rolling mean of a that's used in the computation is df2.get_column("a").rolling_mean(window_size=10, min_periods=5)... | 3 | 1 |
78,876,898 | 2024-8-15 | https://stackoverflow.com/questions/78876898/using-the-python-protobuf-library-how-can-i-load-a-desc-or-proto-file-dynami | I've tried using a few different methods but all end up not working. I'm trying to read from a directory and iterate all the .desc files and inspect all the fields in each class. I'm trying to build a dependency tree with the outer classes being the parent nodes and the leaves being the nested classes. I'm trying to do... | The Protobuf hierarchy is confusing/complex. The following is a basic approach to enumerating the contents of a Protobuf descriptor file: from google.protobuf.descriptor_pb2 import FileDescriptorSet with open("descriptor.pb", 'rb') as descriptor: data = descriptor.read() fds = FileDescriptorSet.FromString(data) # file:... | 3 | 3 |
78,877,001 | 2024-8-15 | https://stackoverflow.com/questions/78877001/fast-index-mapping-between-two-numpy-arrays-with-duplicate-values | I'm trying to write a join operation between two Numpy arrays and was surprised to find Numpy's recfunctions.join_by doesn't handle duplicate values. The approach I'm taking is using the column to be joined and finding an index mapping between them. From looking online, the majority of Numpy-only solutions suffer the same ... | Using np.isin() we can create a mask that shows us which values are already in the other array; once you have that, you only need to figure out the indices. import numpy as np # Arrays to be joined x = np.array([1, 1, 2, 100, 4, 5, 3, 75]) y = np.array([1, 2, 3, 4, 5, 6, 7]) # Get mask with True and False valu... | 3 | 2 |
78,869,326 | 2024-8-14 | https://stackoverflow.com/questions/78869326/why-is-it-that-calling-standard-sum-on-a-numpy-array-produces-a-different-result | Observe in the following code that creating a numpy array and calling the builtin python sum function produces different results than numpy.sum. How is numpy's sum function implemented? And why is the result different? test = [.1]*10 test = [np.float64(x) for x in test] test[5]= np.float64(-.9) d = [np.asarray(test) for x ... | Numpy uses pairwise summation: https://github.com/numpy/numpy/pull/3685 but Python's built-in sum uses a sequential (reduce-style) summation. The answer is only partially related to FP inaccuracy, because if I have an array of FP numbers and use the same algorithm to sum them, I should expect the same result if I sum them in the same order. | 3 | 2 |
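To make the order-of-summation point concrete, here is a toy pairwise scheme next to Python's left-to-right sum (numpy's real implementation additionally unrolls in blocks of 8, so its result can still differ from this sketch in the last bits):

```python
import numpy as np

def pairwise_sum(a):
    # Halve the list recursively and add the partial sums
    if len(a) <= 2:
        return sum(a)
    mid = len(a) // 2
    return pairwise_sum(a[:mid]) + pairwise_sum(a[mid:])

test = [np.float64(0.1)] * 10
test[5] = np.float64(-0.9)
print(sum(test))                 # sequential left-to-right reduction
print(pairwise_sum(test))        # pairwise order: last bits can differ
print(np.sum(np.asarray(test)))  # numpy's blocked pairwise order
```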
78,873,113 | 2024-8-14 | https://stackoverflow.com/questions/78873113/different-behavior-in-custom-class-between-left-addition-right-addition-with-a-n | I am writing a class where one of the stored attributes is cast to an integer in the constuctor. I am also overloading left/right addition, where adding/subtracting an integer means to shift this attribute over by that integer. In principle, addition commutes in all contexts with my class, so as a user I would not expe... | Numpy provides some hooks for this. In this case you probably want to implement class.__array_ufunc__() on your class. If you simply define it with None it will raise an exception: __array_ufunc__ = None Alternatively, you can actually implement something: def __array_ufunc__(self, ufunc, method, *inputs): print(f"{uf... | 2 | 3 |
78,872,477 | 2024-8-14 | https://stackoverflow.com/questions/78872477/pytorch-conv2d-outputs-infinity | My input x is a [1,256,60,120] shaped tensor. My Conv2d is defined as follows import torch.nn as nn conv2d = nn.Conv2d( 256, 256, kernel_size=2, stride=2, bias=False, ), Some instances I see that conv2d(x).isinf().any() is True. Note that x.max() = tensor(5140., device='cuda:0', dtype=torch.float16). x.min() = tensor(... | This is a case of numerical overflow. Consider: import torch import torch.nn as nn # set seed torch.manual_seed(42) # random values between 0 and 5140, like your values x = (torch.rand(1, 256, 60, 120) * 5140) # create conv layer conv2d = nn.Conv2d( 256, 256, kernel_size=2, stride=2, bias=False, ) # set conv weights to... | 2 | 3 |
78,872,144 | 2024-8-14 | https://stackoverflow.com/questions/78872144/is-it-expected-to-have-a-epsilon-named-attribute-of-a-python-object-turned-into | If I create an attribute named ϵ for a Python object using a dot expression, the name is converted to ε without any warning. Is this expected? If it is, where can I find a reference about this? Python 3.12.5 (main, Aug 7 2024, 13:49:14) [GCC 14.2.0] on linux Type "help", "copyright", "credits" or "license" for more inf... | Those two are the same identifier in normal form NFKC. >>> unicodedata.normalize("NFKC", 'ε') == unicodedata.normalize("NFKC", 'ϵ') True And as the docs say, All identifiers are converted into the normal form NFKC while parsing; comparison of identifiers is based on NFKC. So – yes, it's expected. | 3 | 4 |
78,870,792 | 2024-8-14 | https://stackoverflow.com/questions/78870792/how-to-avoid-stubtests-symbol-is-not-present-at-runtime-error | Example Given the file ./mylib/minimalistic_repro.py class Foo: def __init__(self, param): self.param = param class Bar: def __init__(self, param): self.param = param def foobar(par1, par2): return par1.param + par2.param and its stub ./mylib/minimalistic_repro.pyi from typing import TypeAlias FooBar: TypeAlias = Foo ... | stubtest is complaining because it thinks your FooBar is a public API symbol, which might cause type checkers/IDE autocomplete to make incorrect assumptions and suggestions. The "correct" way to fix it is to make it private; that is, precede the name with an underscore: _FooBar: TypeAlias = Foo | Bar def foobar(par1: _... | 3 | 4 |
78,871,375 | 2024-8-14 | https://stackoverflow.com/questions/78871375/selenium-screenshot-of-table-with-custom-font | I have the following Python code for loading the given page in selenium and taking a screenshot of the table. from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.s... | Note: Changed the font to Matemasie to make it stand out more in the screenshot. The issue is caused by the screenshot being taken before the custom font has loaded. There are some ways to wait for that to finish before continuing, eg: Selenium ChromeDriver "get" doesn't reliably load @import fonts time.sleep(2) als... | 2 | 2 |
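Instead of a fixed sleep, the browser can report when web fonts have finished loading via the CSS Font Loading API. A hedged sketch (assumes the `driver` object from the question and that the page's fonts are tracked by `document.fonts`):

```python
from selenium.webdriver.support.ui import WebDriverWait

# document.fonts.status flips to "loaded" once all requested fonts are ready
WebDriverWait(driver, 10).until(
    lambda d: d.execute_script("return document.fonts.status === 'loaded'")
)
driver.save_screenshot("table.png")  # safe to screenshot now
```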
78,869,863 | 2024-8-14 | https://stackoverflow.com/questions/78869863/how-fix-found-array-with-dim-4error-when-using-ml-algorthims-to-classify-image | I have a simple ML classification problem. I have 8 folders, each one representing a class, so I first load these images from the folders, assign labels, and then save them as a CSV file (code below) def load_images_from_folder(root_folder): image_paths = [] images = [] labels = [] for label in os.listdir(root_folder): label... | In the example using sklearn.svm.SVC.fit(), the input is expected to be of shape (n_samples, n_features) (thus being 2-dimensional). In your case, each sample would be an image. To make your code technically work, you would thus need to flatten your X_train input and make each "raw" pixel value a feature, X_train_flat ... | 3 | 1 |
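A sketch of the flattening step the (truncated) answer is leading up to, reusing the question's `X_train`/`y_train` names, which are assumed here to be a 4-D image array and a label vector:

```python
from sklearn.svm import SVC

# (n_samples, height, width, channels) -> (n_samples, height*width*channels)
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)

clf = SVC()
clf.fit(X_train_flat, y_train)
print(clf.score(X_test_flat, y_test))
```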
78,868,961 | 2024-8-14 | https://stackoverflow.com/questions/78868961/how-to-get-the-value-of-a-specified-index-number-from-the-sorting-of-a-column-an | import polars as pl df = pl.DataFrame( {"name": list("abcdef"), "age": [21, 31, 32, 53, 45, 26], "country": list("AABBBC")} ) df.group_by("country").agg( pl.col("name").sort_by("age").first().alias("age_sort_1"), pl.col("name").sort_by("age").get(2).alias("age_sort_2"), # OutOfBoundsError: index out of bounds # pl.col(... | There's an open issue related to your question now - polars.Expr.take returns null if ComputeError: index out of bounds. For now you can either use shift() (maintain_order = True just to make it more readable): exp = pl.col.name.sort_by("age") ( df .group_by("country", maintain_order = True).agg( exp.first().alias("age... | 7 | 6 |
78,868,163 | 2024-8-13 | https://stackoverflow.com/questions/78868163/does-poetry-for-python-use-a-nonstandard-pyproject-toml-how | I am considering introducing my organization to Poetry for Python, and I came across this claim: Avoid using the Poetry tool for new projects. Poetry uses non-standard implementations of key features. For example, it does not use the standard format in pyproject.toml files, which may cause compatibility issues with ot... | Update: Poetry 2.0.0 is released and has support for pyproject.toml project settings! Yes, Poetry as of writing still uses its own [tool.poetry.dependencies] table in pyproject.toml. This is in conflict with PEP621, which specifies among other things that a project's dependencies ought to be listed in [project.depende... | 3 | 6 |
78,842,605 | 2024-8-7 | https://stackoverflow.com/questions/78842605/16-byte-offset-in-mpeg-4-video-export-from-dicom-file | Short version: Where is the 16-byte offset coming from when exporting an MPEG-4 video stream from a DICOM file with Pydicom via the following code? (And, bonus question, is it always a 16-byte offset?) from pathlib import Path import pydicom in_dcm_filename: str = ... out_mp4_filename: str = ... ds = pydicom.dcmread(in... | The reason why you see these bytes is that the pixel data is encapsulated. Using dcmdump shows this clearly: (7fe0,0010) OB (PixelSequence #=2) # u/l, 1 PixelData (fffe,e000) pi (no value available) # 0, 1 Item (fffe,e000) pi 00\00\00\20\66\74\79\70\69\73\6f\6d\00\00\02\00\69\73\6f\6d\69\73... # 13548606, 1 Item (fffe,... | 3 | 4 |
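Rather than slicing PixelData past a hand-counted offset, pydicom can walk the encapsulated items for you. A sketch assuming pydicom 2.x, where `pydicom.encaps.generate_pixel_data_frame` yields frame bytes with the offset table and item headers already stripped (file names are placeholders):

```python
import pydicom
from pydicom.encaps import generate_pixel_data_frame

ds = pydicom.dcmread("in.dcm")
with open("out.mp4", "wb") as f:
    # An MPEG-4 stream is typically encapsulated as a single frame
    for frame in generate_pixel_data_frame(ds.PixelData):
        f.write(frame)
```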
78,859,635 | 2024-8-12 | https://stackoverflow.com/questions/78859635/py2app-error-17-file-exists-when-running-py2app-for-the-first-time | I'm trying to make a simple password app on my desktop from code I wrote. I've done this before with no trouble on another simple app. This is my setup: from setuptools import setup APP = ['main.py'] OPTIONS = { 'argv_emulation': True, 'iconfile':"logo.png", 'packages': ['tkinter', 'random', 'json', 'pyperclip'] } s... | This is a known issue in setuptools >= 70.0.2. Pin 70.3.0 to get past it. For example in a pyproject.toml: [tool.poetry.dependencies] python = "^3.12" pyqt5 = "^5.15.10" setuptools = "==70.3.0" # add this line | 4 | 4 |
78,866,188 | 2024-8-13 | https://stackoverflow.com/questions/78866188/ctypes-bitfield-sets-whole-byte | I have the following structure: class KeyboardModifiers(Structure): _fields_ = [ ('left_control', c_bool, 1), ('right_control', c_bool, 1), ('left_shift', c_bool, 1), ('right_shift', c_bool, 1), ('left_alt', c_bool, 1), ('right_alt', c_bool, 1), ('left_meta', c_bool, 1), ('right_meta', c_bool, 1), ('left_super', c_bool... | It results from using c_bool. I'm not seeing the whole byte set to 1; rather, a set bit that makes the byte non-zero reads back as a value of 1, not 0xFF as described. It may be an OS or implementation detail. For me, it seems to normalize the boolean value to 0/1 based on the zero/non-zero value of the byte. For your implementatio... | 4 | 3 |
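One natural fix (the answer's concrete recommendation is truncated above) is to declare the bitfields as unsigned integers instead of c_bool, so each flag occupies exactly one bit and the packed byte reads back as set. A sketch with the field list shortened from the question:

```python
from ctypes import Structure, c_uint8

class KeyboardModifiers(Structure):
    _fields_ = [
        ("left_control", c_uint8, 1),
        ("right_control", c_uint8, 1),
        ("left_shift", c_uint8, 1),
        ("right_shift", c_uint8, 1),
    ]

m = KeyboardModifiers()
m.left_shift = 1
print(bytes(m))  # b'\x04' -- only bit 2 is set
```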
78,864,879 | 2024-8-13 | https://stackoverflow.com/questions/78864879/make-a-full-width-table-using-borb | I'm trying to reduce the margins of a page using borb; that works, but my Table then is not taking the full width of the page. Please note that it's not only Table that is shifted and not covering the full width; HorizontalRule is affected as well, etc. What's the fix for that? from decimal import Decimal from borb.pdf import Document, Pa... | disclaimer: I am the author of borb The "issue" you are running into is that borb applies a sensible default for margins (on Image objects). Looking at the (constructor) code in Image: margin_bottom=margin_bottom if margin_bottom is not None else Decimal(5), margin_left=margin_left if margin_left is not None else Deci... | 2 | 0 |
78,850,669 | 2024-8-8 | https://stackoverflow.com/questions/78850669/create-a-custom-pydantic-class-requiring-value-to-be-in-an-interval | I'm working to build a function parameter validation library using pydantic. We want to be able to validate parameters' types and values. Types are easy enough, but I'm having a hard time creating a class to validate values. Specifically, the first class I want to build is one that requires the value to be in a user-de... | I was able to find a solution - I had indeed messed up when writing the __get_pydantic_core_schema__ method. Instead of calling handler as a function, its .generate_schema() method must be used, i.e. I replaced schema = handler(self.type_definition) with schema = handler.generate_schema(self.type_definition) So here ... | 2 | 0 |
78,845,834 | 2024-8-7 | https://stackoverflow.com/questions/78845834/selenium-webdriver-unexpectedly-exits-in-aws-mwaa | I'm trying to run selenium periodically within AWS MWAA but chromium crashes with status code -5 every single time. I've tried to google this status code without success. Any ideas as to what's causing this error? Alternatively, how should I be running selenium with AWS MWAA? One suggestion I saw was to run a selenium ... | Setup I mainly tried to make this work without root privileges due to a misunderstanding. Now there are two methods to set up the environment! And yes, you need Chrome. Setting up without root privileges I'm proud to say this method does not require root privileges. The way you indicated it to me was that you couldn't run ... | 2 | 1 |
78,864,778 | 2024-8-13 | https://stackoverflow.com/questions/78864778/how-can-i-serialize-versionedtransaction-in-solana-with-python | I built code that sends several txs in one Jito bundle with Node.js, and I want to convert it to Python. When I use a Jito bundle, I have to serialize the transactions. In Node.js, I can use transaction.serialize(). But when I call transaction.serialize() in Python, it raises an error. 'solders.transaction.VersionedTransact... | It looks like VersionedTransaction implements bytes at https://github.com/kevinheavey/solders/blob/c10419c29b7890e69572f0160e5e74406814048b/python/solders/transaction.pyi#L121, so you should be able to serialize the transaction with: bytes(transaction) | 2 | 2 |
78,863,608 | 2024-8-12 | https://stackoverflow.com/questions/78863608/django-5-update-or-create-reverse-one-to-one-field | On Django 4.x Code is working as expected from django.db import models class Project(models.Model): rough_data = models.OneToOneField( "Data", related_name="rough_project", on_delete=models.SET_NULL, null=True, blank=True, ) final_data = models.OneToOneField( "Data", related_name="final_project", on_delete=models.SET_N... | In Django 4.x, it did not raise an exception but also did not save the relationship if the object was created — so it was actually not working as expected. Django ticket: https://code.djangoproject.com/ticket/34586 Django PR that fixed1 it to raise exception: https://github.com/django/django/pull/17112/files It appea... | 3 | 5 |
78,856,057 | 2024-8-10 | https://stackoverflow.com/questions/78856057/fasthtml-htmx-attribute-is-not-translated-in-final-html | I was making a simple Input field in FastHTML. A function MakeInput returns the input field as I wish to clear the input field after processing the form. def MakeInput(): return Input( placeholder='Add new Idea', id='title', hx_swap_oob='true' ) @route('/todos') def get(): form = Form( Group( MakeInput(), Button('Save'... | This is a bug that was fixed in version 0.3.1, see the following issue: https://github.com/AnswerDotAI/fasthtml/issues/254 Update to version 0.3.1 or higher and you should be fine. | 2 | 3 |
78,868,024 | 2024-8-13 | https://stackoverflow.com/questions/78868024/how-to-know-when-to-use-map-elements-map-batches-lambda-and-struct-when-using | import polars as pl import numpy as np df_sim = pl.DataFrame({ "daily_n": [1000, 2000, 3000, 4000], "prob": [.5, .5, .5, .6], "size": 1 }) df_sim = df_sim.with_columns( pl.struct(["daily_n", "prob", "size"]) .map_elements(lambda x: np.random.binomial(n=x['daily_n'], p=x['prob'], size=x['size'])) .cast(pl.Int32) .alias(... | Let's go back to what a context is/does. polars DataFrames (or LazyFrame) have contexts which is just a generic way of referring to with_columns, select, agg, and group_by. The inputs to contexts are Expressions. To a limited extent, the python side of polars can convert python objects into polars expressions. For exam... | 5 | 6 |
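Since `np.random.binomial` broadcasts over array-valued `n` and `p`, the per-row `map_elements` in the question can be replaced by a single `map_batches` call over the whole struct column. A sketch of that batch-wise alternative (size=1 per row is equivalent to one draw per row):

```python
import numpy as np
import polars as pl

df_sim = pl.DataFrame({
    "daily_n": [1000, 2000, 3000, 4000],
    "prob": [0.5, 0.5, 0.5, 0.6],
})

df_sim = df_sim.with_columns(
    pl.struct("daily_n", "prob")
    .map_batches(  # receives one Series of structs, returns one Series
        lambda s: pl.Series(
            np.random.binomial(
                n=s.struct.field("daily_n"), p=s.struct.field("prob")
            )
        )
    )
    .alias("samples")
)
print(df_sim)
```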
78,867,160 | 2024-8-13 | https://stackoverflow.com/questions/78867160/pypdf2-stalling-while-parsing-pdf-for-unknown-reason | I have a script in which I go through and parse a large collection of PDFs. I noticed that when I tried to parse a particular PDF, the script just stalls forever. But it doesn't throw up an error and as far as I can tell, the PDF is not corrupted. I can't tell what the issue is, but I can see that it happens on page 4.... | You should not use PyPDF2 any more unless really required and switch to pypdf instead - see the note on PyPI as well: https://pypi.org/project/PyPDF2/ Running the corresponding migrated code with the latest release does not show any performance issues: from pypdf import PdfReader doc = "78867160.pdf" doc_text = "" try:... | 2 | 1 |
78,864,414 | 2024-8-13 | https://stackoverflow.com/questions/78864414/webpage-only-scrolls-once-using-selenium-despite-new-content-loading | I'm trying to scrape URLs from a dynamically allocated webpage that requires continuous scrolling to load all the content into the DOM. My approach involves running window.scrollTo(0, document.body.scrollHeight); in a loop using Selenium's execute_script function. After each scroll, I compare the number of URLs loaded ... | This code below works well in scrolling down the page, try to embed it into your code: ele = driver.find_element(By.XPATH, '//div[contains(@class,"Pane-module_u1-pane__content")]') driver.execute_script('arguments[0].scrollIntoView(false);', ele) | 2 | 1 |
78,860,938 | 2024-8-12 | https://stackoverflow.com/questions/78860938/why-doesnt-the-gekko-solver-adapt-to-variations-in-the-system | The following is related to this question: predictive control model using GEKKO. I am still trying to apply MPC to maintain the temperature of a room within a defined range (16, 18), as Professor @John explained last time. The gain is normally listed as K=array([[0.93705481,−12.24012156]]). Thus, an increase in β by +1 l... | The solution that you showed was able to find a solution with a single step to keep within the temperature range (17-19). Tightening the temperature range (16.5-17.5) shows additional movement of the MV over the time period and shows that it is responsive. Unless there is a violation of this temperature range, the cont... | 2 | 1 |
78,850,049 | 2024-8-8 | https://stackoverflow.com/questions/78850049/pyinstaller-hidden-import-ffmpeg-python-not-found | Trying to convert Python scripts to exe with PyInstaller. In my code, I use ffmpeg-python: import ffmpeg .... def ffmpeg_save_clip(self,output_video: str, clip_start: str, clip_end: str): (ffmpeg .input(self.file.get_videopath(), ) .output(output_video, vcodec='copy', ss=clip_start, to=clip_end, acodec='copy') .global_... | For anyone who has encountered this issue, I finally found a way to make it work. Thank you to @happy_code_egg for the support (and the capacity for explaining). To get ffmpeg-python included by PyInstaller, follow what @happy_code_egg said. Anyway (as I wrote in the update section of my question), one problem still remained: using ffmpeg... | 4 | 1 |
78,867,121 | 2024-8-13 | https://stackoverflow.com/questions/78867121/fill-several-polars-columns-with-a-constant-value | I am working with the following code... import polars as pl df = pl.DataFrame({ 'region': ['GB', 'FR', 'US'], 'qty': [3, 6, -8], 'price': [100, 102, 95], 'tenor': ['1Y', '6M', '2Y'], }) cols_to_set = ['price', 'tenor'] fill_val = '-' df.with_columns([pl.lit(fill_val).alias(c) for c in cols_to_set]) ...with the followi... | you could use replace_strict(): df.with_columns(pl.col(cols_to_set).replace_strict(None, None, default = '-')) ┌────────┬─────┬───────┬───────┐ │ region ┆ qty ┆ price ┆ tenor │ │ --- ┆ --- ┆ --- ┆ --- │ │ str ┆ i64 ┆ str ┆ str │ ╞════════╪═════╪═══════╪═══════╡ │ GB ┆ 3 ┆ - ┆ - │ │ FR ┆ 6 ┆ - ┆ - │ │ US ┆ -8 ┆ - ┆ - │... | 3 | 3 |
78,845,218 | 2024-8-7 | https://stackoverflow.com/questions/78845218/fastapi-testclient-not-starting-lifetime-in-test | Example code: import os import asyncio from contextlib import asynccontextmanager from fastapi import FastAPI, Request @asynccontextmanager async def lifespan(app: FastAPI): print(f'Lifetime ON {os.getpid()=}') app.state.global_rw = 0 _ = asyncio.create_task(infinite_1(app.state), name='my_task') yield app = FastAPI(li... | The main reason that the written test fails is that it doesn't handle the asynchronous nature of the FastAPI app's lifespan context properly. In fact, the global_rw is not set due to improper initialization. If you don't want to utilize an AsyncClient like the one by httpx you can use pytest_asyncio and the async fixtu... | 2 | 1 |
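The accepted answer goes the async-fixture route; a shorter, commonly used alternative relies on the fact that Starlette's TestClient only runs startup/shutdown (and therefore the lifespan) when entered as a context manager. A sketch, assuming `app` is imported from the module under test:

```python
from fastapi.testclient import TestClient
# `app` is assumed to be imported from the module under test

def test_lifespan_state():
    with TestClient(app) as client:      # __enter__ runs the lifespan
        assert app.state.global_rw == 0  # set inside lifespan()
    # __exit__ runs shutdown, ending the lifespan context
```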
78,865,667 | 2024-8-13 | https://stackoverflow.com/questions/78865667/how-to-draw-a-rectangle-with-one-side-in-matplotlib | I want to draw a rectangle in matplotlib and I want only the top edge to show. I tried to draw a line on top of the rectangle to make it work, but I was not satisfied with the result. Here's my code: import matplotlib.pyplot as plt from matplotlib.axes import Axes from matplotlib.figure import Figure from matplotlib.pa... | You can set the CapStyle for the ax.plot line to butt (instead of the default, which is projecting). You can use the solid_capstyle kwarg for this. To demonstrate, I increased the line width to make the differences more obvious: solid_capstyle='projecting' ax.plot([-width / 2, width / 2], [0, 0], color=color, solid_cap... | 1 | 4 |
78,863,932 | 2024-8-12 | https://stackoverflow.com/questions/78863932/runtimeerror-numpy-is-not-available-transformers | I basically just want to use the transformers pipeline() to classify data, but independent of which model I try to use, it returns the same error, stating Numpy is not available Code I'm running: pipe = pipeline("text-classification", model="AdamLucek/roberta-llama3.1405B-twitter-sentiment") sentiment_pipeline('Today i... | Try: pip install "numpy<2" then restart the kernel. | 3 | 4 |
78,859,203 | 2024-8-11 | https://stackoverflow.com/questions/78859203/python-sqlite3-dump-source-to-mariadb-unknown-collation-utf8mb4-0900-ai-ci | I'm following Head First Python (3ed). I'm at the last chapter and have a bug I can't get past. I've got an sqlite3 database that I need to port to MariaDB. I've got the schema and data in separate files: sqlite3 CoachDB.sqlite3 .schema > schema.sql sqlite3 CoachDB.sqlite3 '.dump swimmers events times --data-only' > da... | MySQL Connector appears to be forcing a collation that does not exist in MariaDB (utf8mb4_0900_ai_ci). Per comments, attempting to force a later MariaDB collation utf8mb4_uca1400_ai_ci on the connection appears not to resolve the problem. MariaDB Connector/Python is actually tested with MariaDB serve... | 3 | 2 |
78,863,540 | 2024-8-12 | https://stackoverflow.com/questions/78863540/force-pyarrow-table-write-to-ignore-null-type-and-use-original-schema-type-for-a | I have this piece of code that appends two parts of the same data to a PyArrow table. The second write fails because the column gets assigned null type. I understand why it is doing that. Is there a way to force it to use the type in the table's schema, and not use the inferred one from the data in second write? import... | The problem is that the assignment of pd.NA leads to the incorrect dtype (object): df2 = df1.copy() df2['col3'] = pd.NA print(df2.dtypes) col1 object col2 int64 col3 object dtype: object Simply change it to Int64 first, using Series.astype: df2['col3'] = pd.NA df2['col3'] = df2['col3'].astype('Int64') Or in one state... | 2 | 2 |
78,863,088 | 2024-8-12 | https://stackoverflow.com/questions/78863088/dataframe-multi-columns-condition-with-groupby | I need to set a flag to 1 for every employee in the group (groupby MNGR, YEAR) if any of the employees in the group has any of the columns V1, V2, V3, or V4 greater than 14. I can make it work if only checking 1 column, but there can be more than 10 V columns. I have this code: DF1['flg'] = [1 for i in range... | You can break this transformation into two discrete steps: Find the rows where ANY V{1..4} columns are greater than 14 transform the groups ['MNGR', 'YEAR'] where ANY row meets the above import pandas as pd import numpy as np df = pd.DataFrame({ 'EMPLID': [12, 13, 14, 15, 16, 17, 18], 'MNGR': ['BOB', 'JIM', 'RHONDA',... | 2 | 4 |
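A sketch of the two steps with a regex wide enough for ten-plus V columns (the sample data here is invented to keep it runnable):

```python
import pandas as pd

df = pd.DataFrame({
    "EMPLID": [12, 13, 14, 15],
    "MNGR": ["BOB", "JIM", "BOB", "JIM"],
    "YEAR": [2020, 2020, 2020, 2020],
    "V1": [1, 20, 3, 4],
    "V2": [5, 6, 7, 8],
})

# Step 1: rows where ANY V-column exceeds 14
over = df.filter(regex=r"^V\d+$").gt(14).any(axis=1)
# Step 2: broadcast the group-level ANY back onto every row
df["flg"] = over.groupby([df["MNGR"], df["YEAR"]]).transform("any").astype(int)
print(df)
```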
78,855,983 | 2024-8-10 | https://stackoverflow.com/questions/78855983/transform-an-exploded-data-frame-into-a-deeply-nested-dictionary-with-headers | The function I am using to convert my data frame into a nested dictionary strips the column names from the hierarchy, making the dictionary difficult to navigate. I have a large dataframe that looks similar to this: exploded_df = pd.DataFrame({ 'school_code': [1, 1, 1, 1, 2, 2, 2, 2], 'school_name': ['A', 'A', 'A', 'A'... | I found the following solution def fold_dataframe(df): columns = df.columns.tolist() if len(columns) < 2: return df columns = df.columns.tolist() grouped = df.groupby(columns[:-1], as_index=False).agg({columns[-1]: list}) grouped[grouped.columns[-1]] = grouped.apply( lambda row: {columns[-1]: row[columns[-1]]}, axis=1)... | 2 | 0 |
78,861,446 | 2024-8-12 | https://stackoverflow.com/questions/78861446/scrape-the-latitude-and-longitude-from-the-website | I want to convert a list of zip codes into a DataFrame of latitude and longitude using data from this website: Free Map Tools. https://www.freemaptools.com/convert-us-zip-code-to-lat-lng.htm#google_vignette Here’s my code, but it’s not returning the latitude and longitude data. How can I improve it? import requests fr... | Try: import requests api_url = ( "https://api.promaptools.com/service/us/zip-lat-lng/get/?zip={}&key=17o8dysaCDrgvlc" ) zips = ["97048", "63640", "63628"] headers = { "Origin": "https://www.freemaptools.com", } for z in zips: url = api_url.format(z) data = requests.get(url, headers=headers).json() print(z, data) Print... | 2 | 1 |