QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
βŒ€
79,703,727
2,894,535
Data class with argument optional only in init
<p>I have the following simple class in Python:</p> <pre><code>class Point: def __init__(x: int, y: int | None = None): self.x = x self.y = y if y is not None else x </code></pre> <p>How can the same thing be implemented using a <code>@dataclass</code>? The obvious would be to do the following:</p> <pre><code>from dataclasses import dataclass @dataclass class Point: x: int y: int | None def __post_init__(self): if self.y is None: self.y = self.x </code></pre> <p>This appears to work, but changes type of <code>y</code> to an optional, resulting in false positive warning issued by e.g. <code>Point(1, 2).y + 0</code>.</p> <p>How can I make <code>y</code> optional only in <code>__init__</code>? The only thing that comes to mind is to split it in half:</p> <pre><code>@dataclass class Point: x: int y: int = field(init=False) _y: InitVar[int | None] def __post_init__(self, _y: int | None): self.y = _y if _y is not None else self.x </code></pre> <p>but it seems overly verbose</p>
<python><python-3.x><python-dataclasses>
2025-07-16 16:20:04
2
3,116
Dominik Kaszewski
79,703,712
13,350,341
Validating ES query_string upfront, namely without connecting to an Elasticsearch server
<p>I am looking for a Python library (if any) that could help <strong>validate</strong> the <code>query_string</code> field of Elasticsearch queries<sup>1</sup> upfront, namely without connecting to an Elasticsearch server and without having to define any custom validation logic. <sup>1</sup>I mean queries of the kind</p> <pre><code>{Β Β    &quot;query&quot;: {Β Β      &quot;query_string&quot;: {Β Β        &quot;query&quot;: query_string     }Β Β    }Β Β  } </code></pre> <p> To give a clearer picture, I'd like to implement a simple utility that - based on a library of the kind described above - can help me build and let pass tests like the ones below:</p> <pre><code>@pytest.mark.parametrize(Β Β    &quot;data, expectation&quot;,Β Β    [Β Β      (       ['(&quot;Race cars&quot; AND &quot;Sport cars&quot;)'],       None     ),Β Β      # missing closed parenthesisΒ Β      (       ['(&quot;Race cars&quot; AND &quot;Sport cars&quot;'],       pytest.raises(ValueError)     ),Β Β      # empty queryΒ Β      ([''], pytest.raises(ValueError)),Β Β      # invalid queryΒ Β      (       ['(&quot;Race cars&quot; AND'],       pytest.raises(ValueError)     ),Β Β    ],Β Β  ) </code></pre> <p>I've tried to follow the suggestions of LLM-based tools which, however, have been proven wrong. For instance, I've tried out different suggested options exploiting <code>elasticsearch_dsl</code> and either the class <code>Query</code></p> <pre><code>Query(query_str).to_dict() != {None: {}} </code></pre> <p>or classes <code>Q</code> and <code>Search</code> in combination</p> <pre><code>query = Q('query_string', query=query_string) search=Search().query(query) search.to_dict() != {None: {}} </code></pre> <p>, but none of them do work (as indeed <code>elasticsearch_dsl</code> is not meant for validation). Thank you!</p>
<python><elasticsearch><elasticsearch-dsl><elasticsearch-py>
2025-07-16 16:06:17
0
3,157
amiola
79,703,702
11,999,452
ModuleNotFoundError: No module named 'hidapi'
<p>I want to run the following code:</p> <pre><code>import hidapi # Find the device devices = hidapi.DeviceManager().devices() for device in devices: if device.vendor_id == 0x2341 and device.product_id == 0x0042: gamepad = device break # Read data from the device data = gamepad.read(64) print(data) </code></pre> <p>I have installed hidapi using pip but it still says it can not find the module</p> <pre><code>PS C:\Users\felix\OneDrive\Desktop\UE5 Space Ship Project&gt; pip3 install hidapi Defaulting to user installation because normal site-packages is not writeable Collecting hidapi Using cached hidapi-0.14.0.post4-cp312-cp312-win_amd64.whl.metadata (3.7 kB) Requirement already satisfied: setuptools&gt;=19.0 in c:\python312\lib\site-packages (from hidapi) (75.6.0) Using cached hidapi-0.14.0.post4-cp312-cp312-win_amd64.whl (70 kB) Installing collected packages: hidapi Successfully installed hidapi-0.14.0.post4 PS C:\Users\felix\OneDrive\Desktop\UE5 Space Ship Project&gt; python -u &quot;c:\Users\felix\OneDrive\Desktop\UE5 Space Ship Project\HID_test.py&quot; Traceback (most recent call last): File &quot;c:\Users\felix\OneDrive\Desktop\UE5 Space Ship Project\HID_test.py&quot;, line 1, in &lt;module&gt; import hidapi ModuleNotFoundError: No module named 'hidapi' PS C:\Users\felix\OneDrive\Desktop\UE5 Space Ship Project&gt; </code></pre> <p>I'm running Python 3.12.6 on Windows 11.</p>
<python>
2025-07-16 15:57:21
2
400
Akut Luna
79,703,565
4,408,232
Anaconda install of geopy but module is not found
<p>I am trying to use geopy on my laptop running Ubuntu 24.04.2 LTS. Anaconda is installed and geopy is installed.</p> <pre><code>(base) igor@XPS-13:~$ conda list | grep geopy geopy 2.4.1 pyhd8ed1ab_2 conda-forge </code></pre> <p>and searchig for installation folder:</p> <pre><code>(base) igor@XPS-13:~$ find -type d -name &quot;*geopy*&quot; ./anaconda3/pkgs/geopy-2.4.1-pyhd8ed1ab_2 ./anaconda3/pkgs/geopy-2.4.1-pyhd8ed1ab_2/site-packages/geopy-2.4.1.dist-info ./anaconda3/pkgs/geopy-2.4.1-pyhd8ed1ab_2/site-packages/geopy ./anaconda3/lib/python3.12/site-packages/geopy-2.4.1.dist-info ./anaconda3/lib/python3.12/site-packages/geopy </code></pre> <p>but trying to use it python3 I get:</p> <pre><code>(base) igor@XPS-13:~$ python3 Python 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import geopy Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'geopy' </code></pre> <p>and looking at the path in my python terminal I do not see any anaconda3 folders</p> <pre><code>&gt;&gt;&gt; print(sys.path) ['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/usr/local/lib/python3.12/dist-packages', '/usr/lib/python3/dist-packages'] </code></pre> <p>What have I missed to do on my system?</p>
<python><anaconda><geopy>
2025-07-16 14:16:00
1
301
IgorLopez
79,703,525
4,054,573
Reportlab canvas.DrawImage resizing not working
<p>I'd like to add a logo to to the canvas so that it repeats with each page of a report, but the problem is that the PNG image is quite large. I've tried resizing it inside the <code>canvas.DrawImage</code> command, but each time the image comes back in its original size.</p> <p>Here is what I am trying, but it has no effect:</p> <pre><code>def on_page(canvas, doc, pagesize=A4): page_num = canvas.getPageNumber() image = Image('/opt/rspro/home/e8409/projects/CRAMM logo.png') image._restrictSize(1 * inch, 2 * inch) canvas.drawImage(image, 0,0) canvas.showPage() from reportlab.platypus import PageTemplate portrait_template = PageTemplate( id='portrait', frames=portrait_frame, onPage=on_page, pagesize=A4) from reportlab.platypus import BaseDocTemplate doc = BaseDocTemplate( 'report.pdf', pageTemplates=[ portrait_template ] ) </code></pre> <p>Appreciate any assistance -- I'd rather not have to write a bunch of separate code to resize the image separate from the <code>reportlab</code> command.</p>
<python><reportlab>
2025-07-16 13:56:03
1
1,179
vashts85
79,703,347
10,423,341
Unable to Load Extensions in nodriver Proxy Context
<p>I need to load user profile, extensions and proxy in a single page context. Looks like it is not possible to do so right now using nodriver, any help/suggestion would be appreciated. Thanks</p> <p>Right now the user profile gets loaded just fine, but no extension gets loaded within the proxy context.</p> <pre><code># import nodriver as nd from nodriver import * import asyncio import subprocess path_to_extension_1 = r'D:\chrome_profile\User Data\Default\Extensions\hoklmmgfnpapgjgcpechhaamimifchmp\6.12.9_0' path_to_extension_2 = r'D:\chrome_profile\User Data\Default\Extensions\hlkenndednhfkekhgcdicdfddnkalmdm\1.13.0_0' async def main(): config = Config( headless=False, user_data_dir=r&quot;D:\chrome_profile\User Data&quot;, lang=&quot;en-US&quot;, browser_args=[ &quot;--disable-extensions-except=D:/chrome_profile/User Data/Default/Extensions/hoklmmgfnpapgjgcpechhaamimifchmp/6.12.9_0,D:/chrome_profile/User Data/Default/Extensions/hlkenndednhfkekhgcdicdfddnkalmdm/1.13.0_0&quot;, &quot;--load-extension=D:/chrome_profile/User Data/Default/Extensions/hoklmmgfnpapgjgcpechhaamimifchmp/6.12.9_0,D:/chrome_profile/User Data/Default/Extensions/hlkenndednhfkekhgcdicdfddnkalmdm/1.13.0_0&quot; ] ) browser = await Browser.create(config) tab = await browser.create_context( url = &quot;https://jsonip.com/&quot;, proxy_server = &quot;socks5://xyz:xyz@123:123&quot;, # new_tab=True, # new_window=False ) await tab.get('https://jsonip.com/') await asyncio.sleep(10) browser.stop() if __name__ == '__main__': # since asyncio.run never worked (for me) loop().run_until_complete(main()) </code></pre>
<python><google-chrome><playwright-python><chrome-devtools-protocol><nodriver>
2025-07-16 11:37:10
0
309
Jawad Ahmad Khan
79,703,332
393,010
What is the difference between xpath() and findall()?
<p>Very often I see that calls to xpath could as well be replaced by calls to findall, when can this be done? What is the main differences between the two functions?</p> <ol> <li>The first argument to <code>path</code> findall is a <code>path</code>, while to xpath the first argument <code>_path</code> is an <code>xpath</code>.</li> </ol> <p>lxml docs for <code>findall()</code>: <a href="https://lxml.de/apidoc/lxml.etree.html#lxml.etree._Element.findall" rel="nofollow noreferrer">https://lxml.de/apidoc/lxml.etree.html#lxml.etree._Element.findall</a></p> <pre class="lang-py prettyprint-override"><code>findall(self, path, namespaces=None): &quot;&quot;&quot;Finds all matching subelements, by tag name or path. The optional namespaces argument accepts a prefix-to-namespace mapping that allows the usage of XPath prefixes in the path expression.&quot;&quot;&quot; </code></pre> <p>lxml docs for <code>xpath()</code>: <a href="https://lxml.de/apidoc/lxml.etree.html#lxml.etree.XPath" rel="nofollow noreferrer">https://lxml.de/apidoc/lxml.etree.html#lxml.etree.XPath</a></p> <pre class="lang-py prettyprint-override"><code>xpath(self, _path, namespaces=None, extensions=None, smart_strings=True, **_variables) &quot;&quot;&quot;Evaluate an xpath expression using the element as context node.&quot;&quot;&quot; </code></pre> <p>However most of the arguments are not documented what they do. 
And a non-listed argument <code>error_log</code> is supplied with and empty description.</p> <p>This seems to be the specification of an xpath: <a href="https://www.w3.org/TR/xpath-31/" rel="nofollow noreferrer">https://www.w3.org/TR/xpath-31/</a></p> <p>But what is the <code>path</code> object supplied to findall?</p> <p>The python package has this to say about xpath support in xml.etree.elementtree (ElementTree is not the same as the lxml package mentioned above, see <a href="https://stackoverflow.com/questions/47229309/what-are-the-differences-between-lxml-and-elementtree">What are the differences between lxml and ElementTree?</a>), but is the limited xpath related? <a href="https://docs.python.org/3.13/library/xml.etree.elementtree.html#xpath-support" rel="nofollow noreferrer">https://docs.python.org/3.13/library/xml.etree.elementtree.html#xpath-support</a></p>
<python><lxml>
2025-07-16 11:24:59
2
5,626
Moberg
79,703,329
6,805,396
How to rotate a single label in a plotly treemap?
<p>Suppose we have a treemap like this:</p> <pre><code>import plotly.graph_objects as go fig = go.Figure(go.Treemap( parents=['', '', 'A', 'A', 'B'], labels=['A', 'B', 'a1', 'a2', 'b1'] )) fig.show() </code></pre> <p>And we need to rotate the <code>b1</code> label to 90 degrees. Is it possible to do in plotly?</p>
<python><plotly><treemap>
2025-07-16 11:23:53
0
609
Vlad
79,703,196
1,926,221
Print only assert message in Python
<p>Is there any way print only assert message:</p> <p><code>assert 5==4, &quot;test&quot;</code></p> <p>will print:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\user\temp\test.py&quot;, line 4, in &lt;module&gt; assert 5==4, &quot;test&quot; AssertionError [Finished in 221ms] </code></pre> <p>How to set it up, so it will print only:</p> <p><code>&quot;test&quot;</code></p>
<python><python-3.x><assert><assertion>
2025-07-16 09:19:10
2
3,726
IGRACH
79,703,043
6,312,979
Best way to convert FastAPI/SQLmodel into Polars Dataframe?
<p>What is best way to convert a FastAPI query into a Polars (or pandas) dataframe.</p> <p>Co-pilot give this.</p> <pre><code>with Session(engine) as session: questions = session.exec(select(Questions)).all() questions_json = [q.dict() for q in questions] df = pl.DataFrame(questions_json) </code></pre> <p>Do we always have to convert the results into json first? or is there some Polars way to read the data?</p> <p>Thanks.</p>
<python><pandas><fastapi><python-polars><sqlmodel>
2025-07-16 07:14:24
1
2,181
diogenes
79,702,999
14,250,641
Unsupervised Time Series Segmentation Without Predefined Number of Segments
<p>I'm working with time series data where I need to identify distinct segments without prior knowledge of how many segments exist. The data looks like:</p> <p><a href="https://i.sstatic.net/53jfNxpH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/53jfNxpH.png" alt="sample data" /></a></p> <p>I've tried the claspy and ruptures packages, but they require # of change points as an input. In this example, I would expect 3 segments. Please advise, thank you!</p>
<python><time-series><cluster-analysis>
2025-07-16 06:30:56
0
514
youtube
79,702,749
5,312,606
sphinxcontrib-bibtex and sphinx-multiversion
<p>I have a strange bug when building our documentation using sphinxcontrib-bibtex and sphinx-multiversion.</p> <p>In my <code>docs/source/conf.py</code> I have</p> <pre class="lang-py prettyprint-override"><code>from pathlib import Path extensions = [ &quot;sphinx_multiversion&quot;, &quot;sphinxcontrib.bibtex&quot;, ] # Get the directory where conf.py is located CONF_DIR = Path(__file__).parent bibtex_bibfiles = [str(CONF_DIR / &quot;literature.bib&quot;)] </code></pre> <p>In the same directory I have my <code>literature.bib</code>.</p> <p>If I use the citation in the rst files everything works when using <code>sphinx-build -W --keep-going -n source build/html</code> and when using <code>sphinx-multiversion -W --keep-going -n source build/html</code>. <strong>But</strong> if I use a citation in a python docstring, only the <code>sphinx-build</code> command works.</p> <p>Does someone have a solution for this?</p> <hr /> <p>To add more details:</p> <p>Something such as</p> <pre class="lang-none prettyprint-override"><code>The whole software package was published in :cite:`cho_quemb_2025`. </code></pre> <p>in the <code>index.rst</code> file works. But</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot; ... If you use :python:`&quot;chemgen&quot;` in your work please credit :cite:`weser_automated_2023`. 
&quot;&quot;&quot; </code></pre> <p>in a docstring fails and gives</p> <pre><code>/home/mcocdawc/code/quemb/src/quemb/molbe/fragment.py:docstring of quemb.molbe.fragment.fragmentate:19: WARNING: could not find bibtex key &quot;weser_automated_2023&quot; [bibtex.key_not_found] </code></pre> <p>Note, the citation key exists in the bib file and the error also persists if I use a citation key that is used in a RST file (and works there).</p> <p>The actual code can be found <a href="https://github.com/troyvvgroup/quemb/pull/181" rel="nofollow noreferrer">here</a>.</p> <p>The error can be reproduced via</p> <pre><code>git clone git@github.com:troyvvgroup/quemb.git cd quemb git checkout improve_doc pip install . cd docs pip install -r requirements.txt # additional requirements for docs make html # works sphinx-build -W --keep-going -n source build/html # works sphinx-multiversion -W --keep-going -n source build/html # fails with the given error message. </code></pre>
<python><python-sphinx><bibtex>
2025-07-15 23:18:11
0
1,897
mcocdawc
79,702,696
967,621
Enable the strictest `ruff check` in GitHub Actions
<p>How do I enable the most stringent <code>ruff check</code> in GitHub Actions? I am looking for the equivalent of:</p> <pre><code>ruff check --select ALL </code></pre> <p>The docs (<a href="https://github.com/astral-sh/ruff-action?tab=readme-ov-file#specify-multiple-files" rel="nofollow noreferrer">astral-sh/ruff-action: A GitHub Action to run Ruff</a>) apparently specify the default <code>ruff check</code>, which is not strict enough (for example, it allows trailing whitespace):</p> <pre><code>- uses: astral-sh/ruff-action@v3 with: src: &gt;- path/to/file1.py path/to/file2.py </code></pre>
<python><github-actions><ruff>
2025-07-15 21:46:13
1
12,712
Timur Shtatland
79,702,608
2,711,059
Why is the condition in the Langraph not working
<p>I am trying to build a langraph and call the relevant node as per the condition. Here is my code:</p> <pre><code>class PersonDetails(TypedDict): name: str age: int # Nodes def greet(state: PersonDetails): print(f&quot;Hello, {state['name']}!&quot;) return state def check_age(state: PersonDetails): age = state['age'] if age &gt;= 21: return &quot;can_drink&quot; elif age &gt;= 16: return &quot;can_drive&quot; else: return &quot;minor&quot; # return state def can_drink(state: PersonDetails): print(&quot;You can legally drink 🍺&quot;) return state def can_drive(state: PersonDetails): print(&quot;You can drive πŸš—&quot;) return state def minor(state: PersonDetails): print(&quot;You're a minor 🚫&quot;) return state # Build graph graph = StateGraph(PersonDetails) graph.add_node(&quot;greet&quot;, greet) graph.add_node(&quot;check_age&quot;, RunnableLambda(check_age)) # must wrap conditional node graph.add_node(&quot;can_drink&quot;, can_drink) graph.add_node(&quot;can_drive&quot;, can_drive) graph.add_node(&quot;minor&quot;, minor) # Edges graph.add_edge(START, &quot;greet&quot;) graph.add_edge(&quot;greet&quot;, &quot;check_age&quot;) # graph.add_edge(&quot;check_age&quot;, END) # Conditional branching graph.add_conditional_edges( &quot;check_age&quot;, { &quot;can_drink&quot;: &quot;can_drink&quot;, &quot;can_drive&quot;: &quot;can_drive&quot;, &quot;minor&quot;: &quot;minor&quot; } ) graph.add_edge(&quot;can_drink&quot;, END) graph.add_edge(&quot;can_drive&quot;, END) graph.add_edge(&quot;minor&quot;, END) # Compile graph app = graph.compile() </code></pre> <p>But when I run the code, I get the following error:</p> <blockquote> <p>TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: &lt;class 'str'&gt;</p> </blockquote> <p>I'm not sure what I'm doing wrong, help would be appreciated</p>
<python><langchain><langgraph><google-generativeai>
2025-07-15 20:13:54
1
5,268
Lijin Durairaj
79,702,590
494,134
How to have separate logging for instances of a class
<p>I have a class that does some logging, and I want to be able to easily distinguish log messages that are from different instances of the class.</p> <p>So, I thought I would create a logger object in the class <code>__init__</code> method that has a unique identifier in the message formatter:</p> <pre><code>import logging class Job: def __init__(self, jobid): self.logger = logging.getLogger(__name__) self.logger.setLevel(logging.INFO) handler = logging.StreamHandler() formatter = logging.Formatter( f'%(asctime)s %(levelname)-8s [{jobid}] %(message)s' ) handler.setFormatter(formatter) self.logger.addHandler(handler) def logit(self, message): self.logger.info(message) j = Job(123) k = Job(456) j.logit('j starting') k.logit('k starting') j.logit('j ending') k.logit('k ending') </code></pre> <p>When the code runs, I expect to see this output:</p> <pre><code>2025-07-15 14:46:01,192 INFO [123] j starting 2025-07-15 14:46:01,192 INFO [456] k starting 2025-07-15 14:46:01,192 INFO [123] j ending 2025-07-15 14:46:01,192 INFO [456] k ending </code></pre> <p>But instead, I see this:</p> <pre><code>2025-07-15 14:45:11,960 INFO [123] j starting 2025-07-15 14:45:11,960 INFO [456] j starting 2025-07-15 14:45:11,960 INFO [123] k starting 2025-07-15 14:45:11,960 INFO [456] k starting 2025-07-15 14:45:11,961 INFO [123] j ending 2025-07-15 14:45:11,961 INFO [456] j ending 2025-07-15 14:45:11,961 INFO [123] k ending 2025-07-15 14:45:11,961 INFO [456] k ending </code></pre> <p>All of the messages are logged twice, once with jobid 123 and again with jobid 456.</p> <p>I suppose this happens because <code>logging.getLogger(__name__)</code> always produces the same logger object, which then gets multiple handlers added to it.</p> <p>What is the solution here?</p> <p>Do I need to pass a unique name to <code>getLogger()</code>, like <code>f'{__name__}::{jobid}'</code>, to ensure that each logger object is unique?</p>
<python><logging>
2025-07-15 19:53:14
1
33,765
John Gordon
79,702,441
7,253,674
Replace all non-empty strings in a column with a constant
<p>I have a data frame with a variety of string values. For a given column, if there is any string entered, I would like to replace it with the same value (say 'fruit').</p> <p>Example:</p> <pre class="lang-py prettyprint-override"><code>data = {'item_name': ['apple', 'banana', 'cherry', 'pineapple', 'apple pie', 'banana split', 'hafts to chos', 'lov_frutz', 'I always pick apples', None, None, None]} df = pd.DataFrame(data) </code></pre> <p>Result I want:</p> <pre><code>item_name 'fruit' 'fruit' 'fruit' 'fruit' 'fruit' 'fruit' 'fruit' 'fruit' 'fruit' nan nan nan response = 'fruit' </code></pre> <p>I have tried using regex but don't seem to understand it correctly, so tried:</p> <pre class="lang-py prettyprint-override"><code>df[col] = df[col].replace(to_replace=r'\w\b', value = response, regex = True) </code></pre>
<python><pandas><string><replace>
2025-07-15 17:16:18
1
365
Liz
79,702,381
7,589,775
Ignore some default values in pydantic during JSON schema generation
<p>(At the time of writing, this is on pydantic version 2.11.7) I have the following MRE:</p> <pre class="lang-py prettyprint-override"><code>import json import time from pydantic import BaseModel, Field class SeededModel(BaseModel): seed: int = Field(default_factory=lambda _: int(time.time() * 1000)) sensible_default: bool = False class Base(BaseModel): settings: SeededModel = SeededModel() config_schema = Base.model_json_schema() print(json.dumps(config_schema, indent=2)) </code></pre> <p>When I go to run this, I get the following JSON schema:</p> <pre class="lang-json prettyprint-override"><code>{ &quot;$defs&quot;: { &quot;SeededModel&quot;: { &quot;properties&quot;: { &quot;seed&quot;: { &quot;title&quot;: &quot;Seed&quot;, &quot;type&quot;: &quot;integer&quot; }, &quot;sensible_default&quot;: { &quot;default&quot;: false, &quot;title&quot;: &quot;Sensible Default&quot;, &quot;type&quot;: &quot;boolean&quot; } }, &quot;title&quot;: &quot;SeededModel&quot;, &quot;type&quot;: &quot;object&quot; } }, &quot;properties&quot;: { &quot;settings&quot;: { &quot;$ref&quot;: &quot;#/$defs/SeededModel&quot;, &quot;default&quot;: { &quot;seed&quot;: 1752596882987, &quot;sensible_default&quot;: false } } }, &quot;title&quot;: &quot;Base&quot;, &quot;type&quot;: &quot;object&quot; } </code></pre> <p>Is there a nice way to provide a default for <code>Base#settings</code> that preserves the default value for <code>SeededModel#sensible_default</code> but not for <code>SeededModel#seed</code>?</p>
<python><pydantic-v2>
2025-07-15 16:29:25
1
4,336
Tristan F.-R.
79,702,280
6,041,915
Is it right to raise an error in except block in python?
<p>I often see code like this:</p> <pre><code>try: some_operation() except Exception: logger.error(&quot;An error occurred while running the operation&quot;) raise Exception(&quot;A custom message&quot;) </code></pre> <p>Please ignore using the general Exception in this example, I know it's a bad practice. You can think of any other more specific subclasses of Exception.</p> <p>I don't feel fine with code like this. What is the purpose of catching an exception if it's raised again, maybe with modified message? If a developer does it for logging or changing the class of the exception is it OK and good approach? I understand try...except blocks (in any language) to really handle exceptions. The example I showed is for me not good, but maybe it's a common practice in python world? Is it? Is there a more proper way for logging in such cases?</p>
<python><exception><error-handling>
2025-07-15 15:09:54
1
702
Jakub MaΕ‚ecki
79,701,980
5,402,618
Pycharm fails to find package in the defined PYTHONPATH
<p>I have a Python monorepo. One of the services in this monorepo is &quot;poller-service&quot;. Its general structure is:</p> <pre><code> services/ └── poller-service/ β”œβ”€β”€ .venv/ | └─ ... β”œβ”€β”€ main.py β”œβ”€β”€ src/ | └─── mycompany/ | β”œβ”€β”€ __init__.py | └── poller/ | β”œβ”€β”€ __init__.py | └── api/ | β”œβ”€β”€ __init__.py | └── app.py β”œβ”€β”€ pyproject.toml </code></pre> <p>main.py is:</p> <pre><code>import asyncio from mycompany.poller.api.app import app async def main(): ... if __name__ == '__main__': asyncio.run(main()) </code></pre> <p>As you can see, main.py imports variable from app.py, which lies under the package &quot;mycompany.poller.api&quot; in &quot;/src&quot; sub-folder.</p> <p>When I launch the service from Pycharm, I use the Python interpreter defined in .venv sub-folder. This is how the launcher looks:</p> <p><a href="https://i.sstatic.net/UmSTYGDE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UmSTYGDE.png" alt="enter image description here" /></a></p> <p>When I run the launcher I get the error:</p> <pre><code>/Users/me/repositories/services/poller-service/.venv/bin/python /Users/me/repositories/services/poller-service/main.py -m univorn --port 8006 --reload Traceback (most recent call last): File &quot;/Users/me/repositories/Droxi-services/Python/droxi-poller/main.py&quot;, line 12, in &lt;module&gt; from mycompany.poller.api.app import app # noqa: E402 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'mycompany.poller' </code></pre> <p>I tried to check whether /src subfolder is included in PYTHONPATH by running the lines inside main.py:</p> <pre><code>import sys import pprint pprint.pprint(sys.path) </code></pre> <p>and I definitely see that /src sub-folder is included in PYTHONPATH. 
If so, why do I get this error?</p> <p>The weirdest part is that when I run the command that the launcher executes under the hood:</p> <pre><code>/Users/me/repositories/services/poller-service/.venv/bin/python /Users/me/repositories/services/poller-service/main.py -m univorn --port 8006 --reload </code></pre> <p>I don't get this error.</p> <p>This is probably a problem within Pycharm. Do you know why it happens?</p>
<python><pycharm><pythonpath>
2025-07-15 11:18:15
0
15,182
CrazySynthax
79,701,884
11,405,174
Indicating which column wins in a df.min() call
<p>I want to find the minimum value per row and create a new column indicating which of those columns has the lowest number. Unfortunately, it seems like pandas isn't immediately able to help in this regard. My research has led to the <code>min()</code> function, which does find the lowest for each row (when axis=1), but there's no further information beyond the number itself.</p> <pre class="lang-py prettyprint-override"><code>initialDict = {&quot;A&quot;:[6.53,11.47,92.08],&quot;B&quot;:[9.11,8.15,12.49]} initialDf = pd.DataFrame.from_dict(initialDict,orient=&quot;index&quot;,columns=[&quot;Value 1&quot;,&quot;Value 2&quot;,&quot;Value 3&quot;]) &gt;&gt;&gt; initialDf Value 1 Value 2 Value 3 A 6.53 11.47 92.08 B 9.11 8.15 12.49 &gt;&gt;&gt; initialDf.min(axis=1,numeric_only=True) A 6.53 # Value 1 - just a number is useless to me. B 8.15 # Value 2 - how do i access which columns these are? </code></pre> <p>My sample data is a lot larger than two rows, so ideally I'd want a vectorised solution.</p> <p>Can I somehow access which column has the lowest number <strong>per row</strong> and assign it to a new value?</p>
<python><pandas><dataframe>
2025-07-15 10:06:52
1
464
Corsaka
79,701,849
13,682,559
How to type hint an attribute to be a dataclass?
<p>I want a class that encapsulates data with some meta information. The data changes during runtime. I want to safe it for evaluation. The class looks like this:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, replace, _DataclassT @dataclass class Encapsulated[P: ???]: # &lt;- What to specify here? type: int payload: P def record(self) -&gt; P: return replace(self.payload) </code></pre> <p>As you can see, I have no requirements with regard to P except that is has to be a Dataclass that can be used in <code>dataclasses.replace</code>. For example:</p> <pre class="lang-py prettyprint-override"><code>@dataclass class Payload: val: int = 0 </code></pre> <p>I tried <code>object</code> which fails for obvious reasons. I find <code> _DataclassT</code> inappropriate because of it being private. Also: It doesn't work.</p>
<python><python-typing><python-dataclasses>
2025-07-15 09:36:29
1
1,108
Durtal
79,701,824
509,868
Does ArgumentParser support different arguments per file, ffmpeg style?
<p>I want my application to work on several files and have a different set of options for each input file:</p> <pre><code>python my.py -a file_a -b file_b --do-stuff=x file_c </code></pre> <p><code>ffmpeg</code> uses this idea for its command line arguments.</p> <p>I tried the following:</p> <pre><code>parser = argparse.ArgumentParser() parser.add_argument(&quot;-a&quot;, action=&quot;store_true&quot;) parser.add_argument(&quot;-b&quot;, action=&quot;store_true&quot;) parser.add_argument(&quot;--do-stuff&quot;, type=str) parser.add_argument(&quot;files&quot;, type=pathlib.Path, nargs=&quot;+&quot;) args = parser.parse_args() </code></pre> <p>And I get an error:</p> <pre><code>error: unrecognized arguments: file_b file_c </code></pre> <p>Can I use <code>ArgumentParser</code> to parse such a command line?</p> <p>The goal is to get a mapping like this:</p> <pre><code>file_a: {a: True} file_b: {b: True} file_c: {do_stuff: 'x'} </code></pre>
<python><command-line-arguments>
2025-07-15 09:22:38
2
28,630
anatolyg
79,701,742
20,895,654
Map one type to another and make type checker understand
<p>I have the following piece of code:</p> <pre><code>from typing import Any class RawA: pass class A: pass class RawB: pass class B: pass # Example of a kind of mapping (that doesn't work) mapping = { RawA: A, RawB: B } def unraw[TRaw: Any](obj: TRaw) -&gt; Any # -&gt; the unraw type ... </code></pre> <p>Now how would I create some sort of mapping (of course a dictionary like here won't work) that the type checker then uses to find the appropriate return type?</p> <p>For example, if I pass an instance of <code>RawA</code> into my function, the type checker understands the return value will be an instance of the type <code>RawA</code> is mapped to, in this case <code>A</code>.</p> <p>I know could just exhaust all possibilities via <code>overload</code>s, but that is way to cumbersome and error prone for my case.</p>
<python><python-typing>
2025-07-15 08:11:36
2
346
JoniKauf
79,701,584
5,567,893
How can I remove the brackets and parentheses at the end of words?
<p>My task is to parse the protein names by removing the brackets and parentheses in the row.<br /> In short, I want to retain the words in front of any parentheses and brackets.<br /> Note that I need to keep symbols in the main words like <code>H(+)/Cl(-) exchange transporter 6</code>.</p> <pre class="lang-py prettyprint-override"><code>data = { &quot;Entry&quot;: [&quot;A0A087X1C5&quot;, &quot;A0A0B4J2F0&quot;, &quot;O00468&quot;, &quot;P51797&quot;, &quot;O75164&quot;], &quot;Reviewed&quot;: [&quot;reviewed&quot;] * 5, &quot;Entry Name&quot;: [&quot;CP2D7_HUMAN&quot;, &quot;PIOS1_HUMAN&quot;, &quot;AGRIN_HUMAN&quot;, &quot;CLCN6_HUMAN&quot;, &quot;KDM4A_HUMAN&quot;], &quot;Protein names&quot;: [ &quot;Putative cytochrome P450 2D7 (EC 1.14.14.1)&quot;, &quot;Protein PIGBOS1 (PIGB opposite strand protein 1)&quot;, &quot;Agrin [Cleaved into: Agrin N-terminal 110 kDa subunit; Agrin C-terminal 110 kDa subunit; Agrin C-terminal 90 kDa fragment (C90); Agrin C-terminal 22 kDa fragment (C22)]&quot;, &quot;H(+)/Cl(-) exchange transporter 6 (Chloride channel protein 6) (ClC-6) (Chloride transport protein 6)&quot;, &quot;Lysine-specific demethylase 4A (EC 1.14.11.66) (EC 1.14.11.69) (JmjC domain-containing histone demethylation protein 3A) (Jumonji domain-containing protein 2A) ([histone H3]-trimethyl-L-lysine(36) demethylase 4A) ([histone H3]-trimethyl-L-lysine(9) demethylase 4A)&quot; ], &quot;Gene Names&quot;: [&quot;CYP2D7&quot;, &quot;PIGBOS1&quot;, &quot;AGRN AGRIN&quot;, &quot;CLCN6 KIAA0046&quot;, &quot;KDM4A JHDM3A JMJD2 JMJD2A KIAA0677&quot;], &quot;Length&quot;: [515, 54, 2068, 869, 1064], &quot;STRING&quot;: [None, &quot;9606.ENSP00000484893&quot;, &quot;9606.ENSP00000368678&quot;, &quot;9606.ENSP00000234488&quot;, &quot;9606.ENSP00000361473&quot;] } # Load into DataFrame df = pd.DataFrame(data) </code></pre> <pre class="lang-py prettyprint-override"><code>result = { &quot;Entry&quot;: [&quot;A0A087X1C5&quot;, &quot;A0A0B4J2F0&quot;, &quot;O00468&quot;, 
&quot;P51797&quot;, &quot;O75164&quot;], &quot;Reviewed&quot;: [&quot;reviewed&quot;] * 5, &quot;Entry Name&quot;: [&quot;CP2D7_HUMAN&quot;, &quot;PIOS1_HUMAN&quot;, &quot;AGRIN_HUMAN&quot;, &quot;CLCN6_HUMAN&quot;, &quot;KDM4A_HUMAN&quot;], &quot;Protein names&quot;: [ &quot;Putative cytochrome P450 2D7&quot;, &quot;Protein PIGBOS1&quot;, &quot;Agrin&quot;, &quot;H(+)/Cl(-) exchange transporter 6&quot;, &quot;Lysine-specific demethylase 4A&quot; ], &quot;Gene Names&quot;: [&quot;CYP2D7&quot;, &quot;PIGBOS1&quot;, &quot;AGRN AGRIN&quot;, &quot;CLCN6 KIAA0046&quot;, &quot;KDM4A JHDM3A JMJD2 JMJD2A KIAA0677&quot;], &quot;Length&quot;: [515, 54, 2068, 869, 1064], &quot;STRING&quot;: [None, &quot;9606.ENSP00000484893&quot;, &quot;9606.ENSP00000368678&quot;, &quot;9606.ENSP00000234488&quot;, &quot;9606.ENSP00000361473&quot;] } # Expected result result_df = pd.DataFrame(result) </code></pre> <p>When I tried the first approach, it removed all parentheses and brackets without considering the intermediate symbols.</p> <pre class="lang-py prettyprint-override"><code>df['Protein names'] = df[&quot;Protein names&quot;].str.replace(r'\s*(\(|\[).*?(\)|\])\s*$', '', regex=True).str.strip() # Before # H(+)/Cl(-) exchange transporter 6 (Chloride channel protein 6) (ClC-6) (Chloride transport protein 6) # After # H </code></pre> <p>I also tried the step-by-step process using the second code, but it removed the intermediate symbols too.</p> <pre class="lang-py prettyprint-override"><code>df[&quot;Protein names&quot;] = df[&quot;Protein names&quot;].str.split(' \(').str[0].str.strip() df[&quot;Protein names&quot;] = df[&quot;Protein names&quot;].str.split(' \[').str[0].str.strip() # Before # Very-long-chain (3R)-3-hydroxyacyl-CoA dehydratase 1 (EC 4.2.1.134) (3-hydroxyacyl-CoA dehydratase 1) (HACD1) (Cementum-attachment protein) (CAP) (Protein-tyrosine phosphatase-like member A) # After # Very-long-chain </code></pre>
<python><pandas><regex>
2025-07-15 05:14:15
4
466
Ssong
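A sketch of one way to attack the protein-name question above (the input strings and expected outputs are taken from the question's sample data): strip a parenthesized or bracketed group only while it sits at the very end of the string, so inner symbols such as `(+)` survive. The one-level-of-nesting alternation in the pattern is an assumption that UniProt-style annotation groups never nest deeper than things like `(...(9)...)`.

```python
import re

# One trailing "(...)" group, allowing one nested level like "(... (9) ...)",
# and one trailing "[...]" group, which may itself contain parentheses.
PAREN = re.compile(r'\s*\((?:[^()]|\([^()]*\))*\)\s*$')
BRACKET = re.compile(r'\s*\[[^\[\]]*\]\s*$')

def strip_trailing_groups(name: str) -> str:
    """Repeatedly remove trailing "(...)"/"[...]" groups; keep inner ones."""
    prev = None
    while prev != name:
        prev = name
        name = BRACKET.sub('', PAREN.sub('', name))
    return name.strip()
```

Applied with something like `df["Protein names"] = df["Protein names"].map(strip_trailing_groups)`, this keeps `H(+)/Cl(-) exchange transporter 6` intact because its parentheses are no longer at the end of the string once the trailing annotation groups are gone.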
79,701,463
17,246,545
Why nothing written in AWS Lambda logs?
<p>Originally, an AWS Lambda (A) that I made worked well. But when I deploy a new ECR image to Lambda A and then invoke it, sometimes nothing happens!</p> <p>So I checked the Lambda logs every time the issue occurred, and the logs look like the following:</p> <pre><code>2025-07-14T08:51:28.206+09:00 START RequestId: 24e982ca-5161-4f02-82d7-0bd94d289fc5 Version: $LATEST 2025-07-14T08:56:28.255+09:00 2025-07-13T23:56:28.254Z 24e982ca-5161-4f02-82d7-0bd94d289fc5 Task timed out after 300.05 seconds 2025-07-14T08:56:28.255+09:00 END RequestId: 24e982ca-5161-4f02-82d7-0bd94d289fc5 2025-07-14T08:56:28.255+09:00 REPORT RequestId: 24e982ca-5161-4f02-82d7-0bd94d289fc5 Duration: 300048.73 ms Billed Duration: 301192 ms Memory Size: 128 MB Max Memory Used: 119 MB Init Duration: 1191.06 ms </code></pre> <p>What's wrong with this Lambda?</p> <p>Is the memory size one of the issues?</p>
<python><amazon-web-services><aws-lambda><amazon-ecr>
2025-07-15 01:23:54
1
389
SecY
79,701,408
753,558
Python requests failed to verify certificate, while urllib3 and curl do it successfully
<p>Trying to do a simple GET request. The site has a self-signed certificate. I exported it using Firefox (downloaded the chained certificates) as a &quot;pem&quot; file. Here are the versions of the libraries:</p> <pre><code>[user@host]$ pip list | grep -E &quot;urllib3|requests&quot; requests 2.32.4 urllib3 2.5.0 </code></pre> <p>This is the code using requests:</p> <pre><code>import requests requests.get(&quot;https://bvs.skala&quot;, verify='/home/user/bvs-skala-chain.pem') </code></pre> <p>And got the following error:</p> <pre><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='bvs.skala', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:979)'))) </code></pre> <p>This is code that uses the same certificate, but urllib3 instead of requests:</p> <pre><code>import urllib3 http = urllib3.PoolManager(ca_certs='/home/user/bvs-skala-chain.pem') response = http.request('GET', 'https://bvs.skala/') print(response.status) </code></pre> <p>And it runs successfully:</p> <pre><code>[user@host]$ python using_urllib3.py 200 </code></pre> <p>And curl runs successfully too:</p> <pre><code>[user@host]$ curl -X GET https://bvs.skala --cacert /home/user/bvs-skala-chain.pem &lt;!doctype html&gt; &lt;html&gt; &lt;head&gt; ... </code></pre> <p>What else was tried, without success:</p> <pre><code>export REQUESTS_CA_BUNDLE=/home/user/bvs-skala-chain.pem export CURL_CA_BUNDLE=/home/user/bvs-skala-chain.pem </code></pre> <p>Can't figure out where to dig.</p>
<python><python-3.x><ssl><python-requests>
2025-07-14 22:57:07
0
302
Renat Zaripov
79,701,380
7,121,783
Python: Running tests with all combinations of feature flags
<p>We have several modules that require mandatory feature / backout flags. These flags are defined at module level.</p> <p>module.py:</p> <pre><code>from enabled import is_enabled FLAGS = {flag : is_enabled(flag) for flag in (&quot;foo&quot;, &quot;bar&quot;)} if FLAGS.get(&quot;foo&quot;): def baz(): print(&quot;New baz&quot;) else: def baz(): print(&quot;Old baz&quot;) print(f&quot;Loaded module with {FLAGS=}&quot;) print(&quot;#&quot;*100) </code></pre> <p>enabled.py:</p> <pre><code>def is_enabled(_): return True </code></pre> <p>I can use mock.patch to patch <code>is_enabled</code> and by using <code>importlib.reload</code> I can test all combinations of <code>True</code> and <code>False</code> for all flags for a module(max number of flags at the same time is 5 or 6 so combination is not going to explode).</p> <p>My main problem is now running the test with all combinations; I have tried:</p> <ol> <li><p>Creating a decorator : this fails because setup and teardown methods are not called between repeated runs in the decorator causing assertions to fail. I can either call <code>self.setUp</code> and <code>self.tearDown</code> in each test OR pass those methods to the decorator and call them but it feels like a messy approach which creates dependencies and is fragile. I'm going to avoid this if possible.</p> </li> <li><p>Creating a test runner (subclass of <code>TextTestRunner</code>) : Unfortunately a test runner already exists and is setup. I'm not sure if it is possible / how to use my own test runner instead of the existing test runner (which can't be easily swapped out / refactored as it is used a lot). Is there a way for me to override / ignore that and use my own? 
This approach is cleaner and technically more correct as the test suite is run multiple times with all the necessary setup/tear-down, captured results, etc.</p> </li> </ol> <p>Given the unit test below, can you please demonstrate how to run the <code>test_baz</code> with all combinations of <code>&quot;foo&quot;</code> and <code>&quot;bar&quot;</code> being <code>True</code> and <code>False</code>?</p> <p>module_test.py</p> <pre><code>import module from importlib import reload from unittest import TestCase from itertools import product from functools import wraps, partial from unittest.mock import patch, DEFAULT def run_all_combinations_decorator(func): flags = sorted(module.FLAGS.keys()) combinations = list(product((True, False), repeat=len(flags))) def is_enabled(new_flags, flag): return new_flags.get(flag, DEFAULT) @wraps(func) def wrapper(*args, **kwargs): for combination in combinations: new_flags = {flag: enabled for flag, enabled in zip(flags, combination)} print(f&quot;Running with {new_flags=}&quot;) with patch(&quot;enabled.is_enabled&quot;, side_effect=partial(is_enabled, new_flags)): reload(module) result = func(*args, **kwargs) # Not ideal as only the last result is captured return result return wrapper class ModuleTests(TestCase): def setUp(self): self.x = 0 @run_all_combinations_decorator def test_baz(self): # self.setUp() # Uncommenting this makes the test pass but is not ideal module.baz() self.x += 1 self.assertEqual(self.x, 1) # Fails because setup is not called between runs </code></pre> <p>If there is a better approach to this that I have not yet tried, please suggest that as well.</p>
<python><testing><python-unittest>
2025-07-14 21:54:54
0
1,003
OM222O
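One pattern that works with the existing test runner untouched is `subTest`: loop over the flag combinations inside a single test and redo the per-combination setup explicitly at the top of the loop, so each combination is reported separately without needing a custom runner. This is a sketch; the `patch`/`reload(module)` step from the question is stubbed out with a plain state reset.

```python
import unittest
from itertools import product

class FlagCombinationTests(unittest.TestCase):
    def _fresh_state(self) -> None:
        # stand-in for the real per-combination work: patch
        # enabled.is_enabled, reload(module), reset fixtures
        self.x = 0

    def test_baz_all_combinations(self):
        for foo, bar in product((True, False), repeat=2):
            # each combination gets its own pass/fail report
            with self.subTest(foo=foo, bar=bar):
                self._fresh_state()
                self.x += 1
                self.assertEqual(self.x, 1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(FlagCombinationTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The trade-off versus a decorator is that setup lives in a helper called inside the loop rather than in `setUp`, but failures no longer bleed between combinations and the existing `TextTestRunner` needs no changes.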
79,701,195
2,153,235
Should plt.ion() eliminate the need for plt.show()?
<p>I was using <code>matplotlib</code>'s Tk back end on Spyder and never had to issue <code>plt.show()</code> (or more specifically, <code>plt.show(block=False)</code>). I now have to prepare for analysis work on a closed system where the only access to Python is via QGIS. From much Googling, my <em>impression</em> is that the Qt back end is foundational to QGIS. This is corroborated by the fact that <code>matplotlib.use('TkAgg')</code> yields the message:</p> <blockquote> <p>ImportError: Cannot load backend 'TkAgg' which requires the 'tk' interactive framework, as 'qt' is currently running</p> </blockquote> <p>I sought other ways to avoid having to repeatedly type <code>plt.show(block=False)</code>. The <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.ion.html#matplotlib.pyplot.ion" rel="nofollow noreferrer">ion()</a> method provides an example, from which it seems that (say) <code>with plt.ion(): plt.plot([1,2],[3,4])</code> should create a figure window immediately. It doesn't. Even issuing <code>plt.show()</code> doesn't make the figure window appear. However, <code>plt.plot([1,2],[3,4]); plt.show(block=False)</code> does create the plot right away.</p> <p>What am I not understanding about the <code>ion()</code> documentation example?</p> <p>P.S. Considering that Qt is foundational to QGIS, I do not want to venture down the path of creating a QGIS startup script that sets the back end to Tk. I'm a complete newbie to QGIS and only want to use it for its Python support (and I'm not all that much of a Python veteran either).</p> <p><strong>Would someone please re-examine the closure of this question?</strong></p> <p>The code is provided right in the question, as is the link to <code>ion()</code>, an explanation of possible links to the back end, and the research determining that <code>TkAgg</code> is <em>not</em> advisable. What is missing?</p>
<python><matplotlib>
2025-07-14 17:22:01
1
1,265
user2153235
79,701,140
3,892,866
pip install --no-index can't find setuptools
<p>I have a computer that for security reasons has no public network access. The system starts with all necessary public Python dependencies installed; I just need to add the latest version of my own Python package and run. I'm trying to install my package from the .tar.gz file with pip3 --no-index; --no-index is needed because otherwise pip tries to go out to a remote repository and fails. When I run &quot;pip3 install --no-deps --no-index mypackage-1.0.0.tar.gz&quot;, I'm given an error like:</p> <pre><code> Γ— pip subprocess to install build dependencies did not run successfully. β”‚ exit code: 1 ╰─&gt; [2 lines of output] ERROR: Could not find a version that satisfies the requirement setuptools&gt;=57 (from versions: none) ERROR: No matching distribution found for setuptools&gt;=57 [end of output] </code></pre> <p>But setuptools is there! When I run python, then &quot;import setuptools,&quot; there it is. Version 59.6.0.</p> <p>I think maybe I need to add a --find-links option to tell it where to find setuptools, but that also seems to want a .whl file, and I don't have a .whl file - just the setuptools install in /usr/lib/python3/dist-packages. I tried adding a bunch of different --find-links local paths, none of which helped.</p> <p>Any idea how to fix this? Is the right --find-links option what I need, or do I need to download the .whl file for setuptools, or...?</p>
<python><pip>
2025-07-14 16:33:10
1
568
Bill Shubert
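A likely explanation for the pip failure above: when the sdist has a `pyproject.toml`, pip builds it in an *isolated* environment and tries to download `setuptools>=57` into that environment, which is exactly what fails offline, even though setuptools is importable system-wide. A sketch of the fix (the sdist filename is the one from the question; substitute your real path) is to disable build isolation so the already-installed setuptools is used:

```shell
# --no-build-isolation makes pip build against the packages already
# installed (the system setuptools 59.6.0) instead of creating an
# isolated build env and fetching setuptools>=57 from an index.
SDIST=mypackage-1.0.0.tar.gz
[ -f "$SDIST" ] && pip3 install --no-build-isolation --no-index --no-deps "$SDIST" || true
```

With this flag, no `--find-links` directory or downloaded setuptools wheel should be needed.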
79,701,038
20,895,654
Make type checker understand that class and instance attributes share names but have diffent types dynamically
<p>I have a piece of code in Python that in essence looks and works the following way:</p> <pre class="lang-py prettyprint-override"><code>from typing import Any class Column: def __init__(self, s: str) -&gt; None: self.name = s def __repr__(self): return f'Column(name={self.name!r})' class Meta(type): def __new__(cls, name: str, bases: tuple[type, ...], namespace: dict[str, Any]) -&gt; type: annotations = namespace.get('__annotations__', {}) namespace.update({k: Column(k) for k in annotations}) return super().__new__(cls, name, bases, namespace) class Base(metaclass=Meta): def __init__(self, data: dict[str, Any]) -&gt; None: if self.__class__ == Base: raise RuntimeError self.__dict__ = data # Example: inherit from Base and create an object for testing # (and we assume the data passed is correct). class Human(Base): name: str height: int tom = Human({'name': 'Tom', 'height': 180}) print(Human.height) # Column(name='height') print(tom.height) # 180 </code></pre> <p>Now when we access a field from the class, we get back a Column with the field's name. When we access the field from an instance like <code>tom</code> we get back the actual value for that field of that instance, in our case <code>180</code>. We also kind of tricked the type checker that these are both valid fields, because it just thinks we are working with class fields. (At least this is the case for Pylance, which is what I use)</p> <p>Now comes the question: Is there a trick or workaround for the type checker to understand that <code>Human.height</code> is of type <code>Column</code>, <strong>without</strong> specifying the same field twice like the following I would have to do here:</p> <pre><code>class Human(Base): name: str name: ClassVar[Column] height: int height: ClassVar[Column] </code></pre> <p>So the goal is to have my type checker think the class has the annotated instance attributes, while all class attributes have the same name but are all of type <code>Column</code>.</p>
<python><python-typing><class-variables><pyright>
2025-07-14 14:41:54
1
346
JoniKauf
79,701,017
1,172,907
Mocking a class method attribute returns AttributeError
<p>How can I mock the value of <code>x</code> to &quot;bar&quot; instead of &quot;foo&quot;?</p> <pre class="lang-py prettyprint-override"><code>import pytest class Command(): def run(self): x = &quot;foo&quot; return x def test(mocker): mocker.patch(&quot;myapp.tests.test_mocker.Command.run.x&quot;, &quot;bar&quot;) c = Command() print(c.run()) </code></pre> <p>pytest returns <code>AttributeError: &lt;function Command.run at 0x7f4910e23f70&gt; does not have the attribute 'x'</code></p> <p><a href="https://pypi.org/project/pytest-mock/" rel="nofollow noreferrer">https://pypi.org/project/pytest-mock/</a></p>
<python><pytest><pytest-mock>
2025-07-14 14:16:16
1
605
jjk
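For the mocking question above: `x` is a local variable inside `run`, so there is no attribute path like `Command.run.x` to patch — pytest-mock (a thin wrapper around `unittest.mock`) can only replace attributes, such as the method `run` itself. A stdlib sketch of the equivalent:

```python
from unittest.mock import patch

class Command:
    def run(self):
        x = "foo"  # a local variable, not an attribute: it cannot be patched
        return x

# Patch the whole method instead; the pytest-mock equivalent is
# mocker.patch.object(Command, "run", return_value="bar")
with patch.object(Command, "run", return_value="bar"):
    patched = Command().run()
unpatched = Command().run()
```

If only `x` must change while the rest of `run` stays real, the value has to be lifted out of the local scope first (e.g. into a class attribute or a helper function) so that there is something addressable to patch.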
79,701,003
2,912,349
Generating blue noise with values sampled from a log normal distribution
<h1>Aim</h1> <p>I am trying to generate random signals with the following two properties:</p> <ol> <li><p>The values should be approximately log-normally distributed (any long-tailed distribution bounded form below with non-zero mode would do).</p> </li> <li><p>The power spectral density (PSD) of the signal should have low power in the lower frequencies and increase in power with increasing frequency.</p> </li> </ol> <h1>Current approach</h1> <p>Generating samples from a log-normal distribution is straightforward: sample from a normal distribution and apply the exponential function <a href="https://en.wikipedia.org/wiki/Log-normal_distribution" rel="noreferrer">1</a>.</p> <p>Python code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt size = 100_000 samples = np.exp(np.random.randn(size)) plt.hist(samples, bins=np.linspace(0, 10, 101), histtype=&quot;step&quot;, density=True) plt.show() </code></pre> <p><a href="https://i.sstatic.net/9tNrn0KN.png" rel="noreferrer"><img src="https://i.sstatic.net/9tNrn0KN.png" alt="lognormally distributed samples" /></a></p> <p>Generating signals with normally distributed values and a defined power spectra is covered well in <a href="https://stackoverflow.com/a/67127726/2912349">this SO answer</a>.</p> <h1>The Problem</h1> <p>Combining both approaches works well for white noise and pink noise. However, when applying the exponential to violet or blue noise, the PSD shape is not preserved. 
Instead, the PSDs closely resemble the PSD of white noise.</p> <p>Any insights or suggestions for alternative approaches would be much appreciated.</p> <p><a href="https://i.sstatic.net/Cbf4JZ4r.png" rel="noreferrer"><img src="https://i.sstatic.net/Cbf4JZ4r.png" alt="Comparison of different noise signals with normally and lognormally distributed values" /></a></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from scipy.signal import welch # from https://stackoverflow.com/a/67127726/2912349 def noise_psd(N, psd = lambda f: 1): X_white = np.fft.rfft(np.random.randn(N)); S = psd(np.fft.rfftfreq(N)) # Normalize S S = S / np.sqrt(np.mean(S**2)) X_shaped = X_white * S; return np.fft.irfft(X_shaped); def PSDGenerator(f): return lambda N: noise_psd(N, f) @PSDGenerator def get_white_noise(f): return 1; @PSDGenerator def get_blue_noise(f): return np.sqrt(f); @PSDGenerator def get_violet_noise(f): return f; @PSDGenerator def get_brownian_noise(f): return 1/np.where(f == 0, float('inf'), f) @PSDGenerator def get_pink_noise(f): return 1/np.where(f == 0, float('inf'), np.sqrt(f)) def get_colored_noise(size, color=&quot;white&quot;): if color == &quot;white&quot;: return get_white_noise(size) elif color == &quot;blue&quot;: return get_blue_noise(size) elif color == &quot;violet&quot;: return get_violet_noise(size) elif color == &quot;brownian&quot;: return get_brownian_noise(size) elif color == &quot;pink&quot;: return get_pink_noise(size) # -------------------------------------------------------------------------------- # Sample the different noise distributions, and determine the spectral density of each series. 
size = 100_000 fig, axes = plt.subplots(2, 2, sharex=False, sharey=&quot;row&quot;) bins = np.linspace(-5, 5, 101) noise_types = [&quot;white&quot;, &quot;pink&quot;, &quot;blue&quot;, &quot;violet&quot;] colors = [&quot;lightgray&quot;, &quot;magenta&quot;, &quot;blue&quot;, &quot;violet&quot;] for noise_type, color in zip(noise_types, colors): samples = get_colored_noise(size, noise_type) axes[0, 0].hist(samples, bins=bins, color=color, label=noise_type, histtype=&quot;step&quot;, density=True) f, p = welch(samples) axes[1, 0].loglog(f, p, color=color, label=noise_type) # -------------------------------------------------------------------------------- # The samples above are normally distributed. Applying the exponential function yields lognormally distributed samples. # However, the shape of the power spectrum is not preserved in every case. # Specifically, noise with a low power in the lower frequencies and high power in high frequencies (blue, violet) appears indistinguishable from white noise. 
# # [1] https://en.wikipedia.org/wiki/Log-normal_distribution bins = np.linspace(0, 10, 101) for noise_type, color in zip(noise_types, colors): samples = np.exp(get_colored_noise(size, noise_type)) axes[0, 1].hist(samples, bins=bins, color=color, label=noise_type, histtype=&quot;step&quot;, density=True) f, p = welch(samples) axes[1, 1].loglog(f, p, color=color, label=noise_type) # -------------------------------------------------------------------------------- # decorate axes axes[0, 0].set_title(&quot;Normally distributed samples&quot;) axes[0, 1].set_title(&quot;Lognormally distributed samples&quot;) axes[0, 0].set_ylabel(&quot;Density&quot;) axes[0, 0].set_xlabel(&quot;Value&quot;) axes[0, 1].set_xlabel(&quot;Value&quot;) axes[1, 0].set_ylabel(&quot;Power&quot;) axes[1, 0].set_xlabel(&quot;Frequency [Hz]&quot;) axes[1, 1].set_xlabel(&quot;Frequency [Hz]&quot;) axes[0, 0].legend(loc=&quot;upper left&quot;) fig.tight_layout() plt.show() </code></pre> <h1>Test results for proposed solutions</h1> <p>@jlandercy's answer below matches the PSDs remarkably well. However, at least in the case of lognormal target distributions with non-white noise PSD, the distribution of the values is no longer matched very well. While the resulting distributions are long-tailed, the mode appears to be zero (even though it should be around 1).</p> <p><a href="https://i.sstatic.net/T8yoDDJj.png" rel="noreferrer"><img src="https://i.sstatic.net/T8yoDDJj.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt from scipy import fft, stats def get_colored_noise(size, power=lambda f: 1., distribution=stats.norm): X = fft.rfft(distribution.rvs(size=size)) S = power(fft.rfftfreq(size)) S /= np.sqrt(np.mean(S ** 2)) X *= S return fft.irfft(X) powers = { &quot;white&quot; : lambda f: 1., &quot;blue&quot; : lambda f: np.sqrt(f), &quot;violet&quot; : lambda f: f, &quot;brown&quot; : lambda f: 1. 
/ np.where(f == 0, float('inf'), f), &quot;pink&quot; : lambda f: 1. / np.where(f == 0, float('inf'), np.sqrt(f)) } # -------------------------------------------------------------------------------- # Test with a normal target distribution. size = 100_000 noise_types = [&quot;white&quot;, &quot;pink&quot;, &quot;blue&quot;, &quot;violet&quot;] colors = [&quot;lightgray&quot;, &quot;magenta&quot;, &quot;blue&quot;, &quot;violet&quot;] fig, axes = plt.subplots(2, 2, sharex=False, sharey=&quot;row&quot;) target_distribution = stats.norm(loc=0, scale=0.5) bins = np.linspace(-2.5, 2.5, 101) for noise_type, color in zip(noise_types, colors): samples = get_colored_noise(size, powers[noise_type], target_distribution) axes[0, 0].hist(samples, bins=bins, color=color, label=noise_type, histtype=&quot;step&quot;, density=True) f, p = welch(samples) axes[1, 0].loglog(f, p, color=color, label=noise_type) # plot expected distribution of values expected_values = target_distribution.rvs(size) axes[0, 0].hist(expected_values, bins=bins, color=&quot;black&quot;, linestyle=&quot;--&quot;, label=&quot;expectation&quot;, histtype=&quot;step&quot;, density=True) # -------------------------------------------------------------------------------- # Test with a lognormal target distribution. 
target_distribution = stats.lognorm(loc=0, s=0.5) bins = np.linspace(0, 5, 101) for noise_type, color in zip(noise_types, colors): samples = get_colored_noise(size, powers[noise_type], stats.lognorm(loc=0, s=0.5)) axes[0, 1].hist(samples, bins=bins, color=color, label=noise_type, histtype=&quot;step&quot;, density=True) f, p = welch(samples) axes[1, 1].loglog(f, p, color=color, label=noise_type) # plot expected distribution of values expected_values = target_distribution.rvs(size) axes[0, 1].hist(expected_values, bins=bins, color=&quot;black&quot;, linestyle=&quot;--&quot;, label=&quot;expectation&quot;, histtype=&quot;step&quot;, density=True) # -------------------------------------------------------------------------------- # decorate axes axes[0, 0].set_title(&quot;Normally distributed samples&quot;) axes[0, 1].set_title(&quot;Lognormally distributed samples&quot;) axes[0, 0].set_ylabel(&quot;Density&quot;) axes[0, 0].set_xlabel(&quot;Value&quot;) axes[0, 1].set_xlabel(&quot;Value&quot;) axes[1, 0].set_ylabel(&quot;Power&quot;) axes[1, 0].set_xlabel(&quot;Frequency [Hz]&quot;) axes[1, 1].set_xlabel(&quot;Frequency [Hz]&quot;) axes[0, 0].legend(loc=&quot;upper left&quot;) fig.tight_layout() plt.show() </code></pre>
<python><numpy><scipy><statistics><signal-processing>
2025-07-14 14:02:04
1
12,703
Paul Brodersen
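For the noise question above, one more family of approaches not yet in the question: rank remapping (the idea behind IAAFT surrogates). Impose the target marginal on the colored series by sorted substitution; this guarantees the marginal exactly while preserving the rank ordering of the colored noise, and hence much of its spectral shape. Iterating between re-imposing the spectrum and re-imposing the marginal (full IAAFT) tightens both. A minimal single-pass sketch:

```python
import numpy as np
from scipy import stats

def rank_remap(colored: np.ndarray, dist) -> np.ndarray:
    """Impose dist's marginal on `colored` by rank mapping (sorted substitution)."""
    target = np.sort(dist.rvs(size=colored.size, random_state=0))
    out = np.empty_like(colored)
    out[np.argsort(colored)] = target  # k-th smallest value -> k-th smallest target
    return out

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)  # stand-in for any colored series
samples = rank_remap(white, stats.lognorm(s=0.5))
```

Because the map from ranks to values is monotone, the remapped series keeps the temporal structure of the input; unlike applying `np.exp`, the marginal is exact by construction, at the price of some spectral distortion that the iterative version reduces.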
79,700,885
14,380,704
Pandas Pivot_table KeyError when Key is Present
<p>I've run this code in an older version of Python, with success; however, we've recently switched to Python 3.9 and I'm getting a KeyError:'RegIndex' on a pivot step for a column that exists in the original dataframe...below is the code sample and dataframe sample.</p> <pre><code>myData=df[['ModelYear','RegIndex','ModelIndex','Model','ModelQty','ModelWgt']] total = myData.groupby(['RegIndex','Model'])[['ModelQty','ModelWgt']].sum().round(2).reset_index() myData = myData.pivot_table(index='ModelYear', columns = ['ModelIndex','Model'], margins=True, margins_name='Total', aggfunc=sum, fill_value=0) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Index</th> <th style="text-align: left;">ModelYear</th> <th style="text-align: center;">RegIndex</th> <th style="text-align: right;">ModelIndex</th> <th style="text-align: right;">Model</th> <th style="text-align: right;">ModelQty</th> <th style="text-align: right;">ModelWgt</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">1</td> <td style="text-align: left;">2001</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> <td style="text-align: right;">F150</td> <td style="text-align: right;">1000</td> <td style="text-align: right;">7500</td> </tr> <tr> <td style="text-align: left;">2</td> <td style="text-align: left;">2002</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> <td style="text-align: right;">F150</td> <td style="text-align: right;">2000</td> <td style="text-align: right;">7500</td> </tr> <tr> <td style="text-align: left;">3</td> <td style="text-align: left;">2003</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> <td style="text-align: right;">F150</td> <td style="text-align: right;">1200</td> <td style="text-align: right;">7500</td> </tr> <tr> <td style="text-align: left;">4</td> <td style="text-align: left;">2004</td> <td style="text-align: center;">0</td> <td style="text-align: right;">1</td> <td style="text-align: right;">F150</td> <td style="text-align: right;">1750</td> <td style="text-align: right;">7500</td> </tr> <tr> <td style="text-align: left;">5</td> <td style="text-align: left;">2001</td> <td style="text-align: center;">0</td> <td style="text-align: right;">2</td> <td style="text-align: right;">Silverado</td> <td style="text-align: right;">2000</td> <td style="text-align: right;">6500</td> </tr> </tbody> </table></div> <p>Is the error due to 'RegIndex' not being included in the index of the pivot_table()? I added it like this: index=['ModelYear','RegIndex'] and it removed the KeyError; however, I don't know if that will alter the original design that worked in the prior version and I'm unsure of the implications. Not sure if there's a better way to go about this.</p> <p>A possible solution could be to simply drop the column from the dataframe prior to the pivot_table step. It's strange this error never occurred in the prior version.</p>
<python><pandas>
2025-07-14 12:32:36
1
307
2020db9
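For the pivot_table question above, one way to make the intent explicit — and stop pandas from trying to aggregate every leftover column, including RegIndex — is to name the value columns with `values=`. A sketch on data shaped like the sample table:

```python
import pandas as pd

df = pd.DataFrame({
    "ModelYear": [2001, 2002, 2003, 2004, 2001],
    "RegIndex": [0, 0, 0, 0, 0],
    "ModelIndex": [1, 1, 1, 1, 2],
    "Model": ["F150", "F150", "F150", "F150", "Silverado"],
    "ModelQty": [1000, 2000, 1200, 1750, 2000],
    "ModelWgt": [7500, 7500, 7500, 7500, 6500],
})

# Naming the value columns keeps pivot_table away from RegIndex entirely,
# so it no longer matters whether RegIndex is in the index or the frame.
out = df.pivot_table(
    index="ModelYear",
    columns=["ModelIndex", "Model"],
    values=["ModelQty", "ModelWgt"],
    margins=True, margins_name="Total",
    aggfunc="sum", fill_value=0,
)
```

This sidesteps the KeyError without changing the pivot's shape, which is safer than widening the index, since adding RegIndex to `index=` changes the output layout for downstream code.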
79,700,637
5,698,125
Python3 dictionary being modified at another thread does not show changes after those modifications at the original thread
<p>Python version: (3.9, but the same result with python 3.12)</p> <p>The goal was for another thread to modify a dictionary and for those modifications to be available in the original thread.</p> <pre class="lang-py prettyprint-override"><code>import multiprocessing as mp import sys def my_func(result: dict): print(f'variable address at worker: {hex(id(result))}', file=sys.stderr) result[&quot;test&quot;] = &quot;test&quot; print(f'{result}', file=sys.stderr) result = {} print(f'variable address at main thread: {hex(id(result))}', file=sys.stderr) my_worker = lambda : my_func(result) # execution at another Thread p = mp.Process(target=my_worker) p.start() p.join() print(f'result at main thread after execution: {result}', file=sys.stderr) # manual execution my_worker() print(f'result at main thread after manual execution: {result}', file=sys.stderr) print(sys.version) </code></pre> <p>And the output is:</p> <pre><code>variable address at main thread: 0x6ffffff39580 variable address at worker: 0x6ffffff39580 {'test': 'test'} result at main thread after execution: {} variable address at worker: 0x6ffffff39580 {'test': 'test'} result at main thread after manual execution: {'test': 'test'} 3.9.16 (main, Mar 8 2023, 22:47:22) [GCC 11.3.0] </code></pre> <p>My expectation was that the result dictionary would show the changes made by the worker, but it does not.</p> <p>What am I doing wrong?</p>
<python><multithreading>
2025-07-14 08:58:11
1
410
Francisco Javier Rojas
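What's happening in the question above: `mp.Process` starts a separate *process*, not a thread, so the child mutates a copy of the dictionary (the matching `id()` values are an artifact of each process having its own address space). A sketch of the thread-based variant, which actually shares memory:

```python
import threading

def my_func(result: dict) -> None:
    result["test"] = "test"

result: dict = {}
# threading.Thread shares the parent's memory, so the in-place mutation
# is visible after join(); a multiprocessing.Process mutates a copy.
# To share a dict across processes instead, one would use
# multiprocessing.Manager().dict().
t = threading.Thread(target=my_func, args=(result,))
t.start()
t.join()
```

If separate processes are genuinely needed (e.g. to bypass the GIL), the `Manager().dict()` proxy mentioned in the comment is the usual replacement for a plain dict.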
79,700,632
3,933,475
Python pickle scipy.interpolate.RBFInterpolator across operating systems (windows, mac)
<p>I have created an <code>RBFInterpolator</code> object from the package <code>scipy.interpolate</code> and I can easily pickle it and unpickle it using the <code>pickle</code> package along with binary read and write.</p> <p>However, when I create it under windows and try to use it on a mac (and vice versa), I get following error message:</p> <pre><code>TypeError: Invalid call to pythranized function `_build_evaluation_coefficients(float64[:, :] (is a view), float64[:, :] (is a view), str, float, int64[:, :], float64[:], float64[:])' Candidates are: - _build_evaluation_coefficients(float[:,:], float[:,:], str, float, int[:,:], float[:], float[:]) </code></pre> <p>The pickled object <code>rbf</code> loads well, and I can even inspect it, e.g. <code>rbf.y</code>, <code>rbf.kernel</code> etc. The above error is thrown when I try to interpolate at a point, <code>rbf([[0., 0., 0.]])</code>.</p> <p>Using python 3.12 on both win and mac. <strong>Edit:</strong> The SciPy versions match, both 1.14.1</p> <p>Can you help to find a different way of pickling / unpickling to become os-independent?</p>
<python><windows><macos><scipy><pickle>
2025-07-14 08:55:24
0
394
Philip Harding
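A workaround sketch for the pickling question above, for as long as the interpolator objects themselves are not portable: pickle only the training data and constructor arguments, and rebuild the `RBFInterpolator` on the target machine, so the pythran-compiled internals always come from the local scipy build. The arrays below are synthetic stand-ins for the real training data.

```python
import pickle
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
y = rng.random((20, 3))   # observation points (stand-in data)
d = y.sum(axis=1)         # observed values (stand-in data)

# Portable payload: plain arrays + kwargs, no compiled internals.
payload = pickle.dumps({"y": y, "d": d, "kwargs": {"kernel": "thin_plate_spline"}})

# On the other OS: rebuild instead of unpickling the object itself.
state = pickle.loads(payload)
rbf = RBFInterpolator(state["y"], state["d"], **state["kwargs"])
```

Fitting is repeated on load, so this trades some startup time for portability; for large fits, `numpy.savez` for the arrays plus a small JSON of kwargs is an equally portable variant.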
79,700,582
219,153
VS Code doesn't open Python virtual environment
<p>I'm using Python 3.13.5 with VS Code 1.102.0 on Ubuntu 24.04.2. With VS Code updates, there were often problems with opening <code>venv</code> environment, but usually reloading VS Code was sufficient to make it work. Not with the newest version. Here are the extensions I have installed:</p> <p><a href="https://i.sstatic.net/M6RhixEp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6RhixEp.png" alt="enter image description here" /></a></p> <p>Opening a Python virtual environment project shows that <code>venv</code> is active, both in command prompt and in the bottom bar:</p> <p><a href="https://i.sstatic.net/itJdWf2j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/itJdWf2j.png" alt="enter image description here" /></a></p> <p>but it is misleading as evidenced by <code>pip list</code>, which lists system Python modules, or <code>deactivate</code>, which fails:</p> <pre><code>.venvpaul@cube:~/st-python/test$ pip list ... ufw 0.36.2 unattended-upgrades 0.1 urllib3 2.0.7 wadllib 1.3.6 wheel 0.42.0 xdg 5 xkit 0.0.0 .venvpaul@cube:~/st-python/test$ deactivate deactivate: command not found .venvpaul@cube:~/st-python/test$ source .venv/bin/activate (.venv) .venvpaul@cube:~/st-python/test$ pip list Package Version ------------------ ------- Levenshtein 0.27.1 pip 25.1.1 python-Levenshtein 0.27.1 RapidFuzz 3.13.0 (.venv) .venvpaul@cube:~/st-python/test$ </code></pre> <p>Only after manual activation and getting a weird <code>(.venv) .venv</code> prompt, I can issue commands inside virtual environment. I replicated this behavior on another system with Python 3.12.7 with VS Code 1.102.0 on Ubuntu 22.04.5. How to fix this problem?</p>
<python><visual-studio-code><python-venv>
2025-07-14 08:07:59
0
8,585
Paul Jurczak
79,700,418
68,736
sympy mod on formulas with custom functions
<p>I would like to define custom <code>sympy</code> functions which cannot be directly evaluated, but which I do want to be able to compute some things about. For example, modular remainders. Here's my code attempt:</p> <pre class="lang-py prettyprint-override"><code>from sympy import Function class Foo(Function): @classmethod def eval(cls, n): pass def __mod__(self, modulus: int) -&gt; int: n, = self.args return (n + 1) % modulus print(Foo(7) % 3) # 2 print((Foo(7) + 1) % 3) # Mod(Foo(7) + 1, 3) print((Foo(7) + 1) % 3 == 0) # False </code></pre> <p>So <code>Foo(7) % 3</code> works as expected, but <code>(Foo(7) + 1) % 3</code> does not, I wanted <code>sympy</code> to notice that it can compute <code>(a + b) % 3</code> by computing <code>a%3</code> and <code>b%3</code>. Is there some way to implement this? Perhaps I need to be more clever and write a new function that walks the symbolic tree of the formula? Or maybe I'm going about this the wrong way? I'm also open to other python libraries (I just guessed that <code>sympy</code> might be a good one to use).</p> <p>In case it helps, my real use-case for this is representing integers that are too large to store in memory and so can only be represented by formulas. But where we can still calculate some properties like modular remainders. See for example <a href="https://www.sligocki.com/2022/06/21/bb-6-2-t15.html" rel="nofollow noreferrer">https://www.sligocki.com/2022/06/21/bb-6-2-t15.html</a></p>
<python><sympy><symbolic-math><largenumber>
2025-07-14 03:43:50
1
6,427
sligocki
79,700,371
13,413,858
In-place, strictly In-place, and O(1) space algorithms
<p>How does one categorize an algorithm like this:</p> <pre class="lang-py prettyprint-override"><code>def foo(arr): # original_len = len(arr) tmp = [] while arr: tmp.append(arr.pop()) while tmp: arr.append(tmp.pop()) </code></pre> <p>The definitions get kind of confusing because theoretically, at any point, <code>len(arr) + len(tmp) ~= original_len</code>, so I am not using extra memory beyond the extra list object itself, which is constant. But here are some of my thoughts:</p> <ul> <li>I do actually need extra memory, as if I only had 32 buckets of memory laid out sequentially, if the original array requires 32 buckets, then when I put the 31st bucket into the new list after putting the 32nd, I will need a 33rd bucket while the 31st bucket would remain empty</li> <li>It is definitely in-place as it literally alters the data in-place, but perhaps not strictly in-place, if that distinction exists</li> <li>Perhaps the fact that I am just allocating a new list already rules out it being strictly in-place, but if I had more control over the memory, one could imagine building the second list as one that grows toward lower memory addresses, thus reutilizing the same space, would that then not be strictly in-place?</li> <li>Dynamic lists don't actually work that way anyway, since they allocate contiguous blocks of memory, so the above is irrelevant</li> </ul> <p>My issue is that I have seen some claim (perhaps incorrectly) that snippets similar to this one are <code>O(1)</code> in space, and also that, in these broad-stroke &quot;analyses&quot;, some implementation details are generally ignored.</p>
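One way to make the first bullet concrete is to instrument the snippet and record the peak length reached by the auxiliary list; the peak grows with the input, which is exactly the sense in which the auxiliary space is O(n) rather than O(1):

```python
def foo_with_peak(arr):
    """Same moves as foo(), but returns the peak size reached by tmp."""
    tmp = []
    peak = 0
    while arr:
        tmp.append(arr.pop())
        peak = max(peak, len(tmp))
    while tmp:
        arr.append(tmp.pop())
    return peak

a = list(range(32))
print(foo_with_peak(a))      # 32: at its peak, tmp holds every element once
print(a == list(range(32)))  # True: the two pop/append passes cancel out
```

So even though `len(arr) + len(tmp)` stays roughly constant, the allocator must at some point hold a second n-element buffer alongside the (shrinking) original one.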
<python><space-complexity><in-place>
2025-07-14 01:59:39
1
494
Mathias Sven
79,700,110
503,456
scipy 1.16 bug, invalid matrices with incorrect indptr sizes
<p>I have just upgraded from 1.15 to 1.16, but ran across an issue with CSC matrices and multiplication.</p> <p>Demonstration of scipy sparse matrix multiply bug affecting CSC matrix indptr size.</p> <p>This script demonstrates a bug in scipy where the .multiply() operation on CSC matrices can produce invalid CSC matrices with incorrect indptr array sizes. This bug was first noticed in scipy 1.16.0 and affects downstream operations that rely on the CSC format invariant: len(indptr) == n_columns + 1.</p> <p>The bug manifests when:</p> <ol> <li>Two CSC matrices are multiplied using .multiply()</li> <li>The result has len(indptr) != shape[1] + 1</li> <li>This breaks downstream operations like matrix reconstruction</li> </ol> <p>This script provides a minimal reproducible example and shows the fix.</p> <pre><code>&quot;&quot;&quot; Demonstration of scipy sparse matrix multiply bug affecting CSC matrix indptr size. This script demonstrates a bug in scipy where the .multiply() operation on CSC matrices can produce invalid CSC matrices with incorrect indptr array sizes. This bug was first noticed in scipy 1.16.0 and affects downstream operations that rely on the CSC format invariant: len(indptr) == n_columns + 1. The bug manifests when: 1. Two CSC matrices are multiplied using .multiply() 2. The result has len(indptr) != shape[1] + 1 3. This breaks downstream operations like matrix reconstruction This script provides a minimal reproducible example and shows the fix. 
&quot;&quot;&quot; import numpy as np from scipy.sparse import coo_matrix, csc_matrix import scipy import sys def get_version_info(): &quot;&quot;&quot;Get and display version information.&quot;&quot;&quot; print(&quot;=&quot; * 60) print(&quot;SCIPY MULTIPLY BUG DEMONSTRATION&quot;) print(&quot;=&quot; * 60) print(f&quot;Python version: {sys.version}&quot;) print(f&quot;NumPy version: {np.__version__}&quot;) print(f&quot;SciPy version: {scipy.__version__}&quot;) print(&quot;=&quot; * 60) print() def create_test_matrices(): &quot;&quot;&quot; Create test matrices that trigger the scipy multiply bug. These matrices are designed to produce the bug when multiplied. The data comes from actual sparse matrix operations that exhibited this issue. &quot;&quot;&quot; print(&quot;Creating test matrices...&quot;) # Matrix 1 data (example sparse matrix) # Raw data: [7, 0, 8, 2, 9, 3, 1, 1, 2, 2, 3, 3, 0, 1, 3, 2, 3, 4, 3, 1, 3, 0, 4, 4, 2, 4, 5, 4, 2, 3, 4, 3, 5, 0, 8, 4, 5, 0, 7, 0, 3, 4] # Reshaped to (row_indices, col_indices) pairs b0_raw = np.array([7, 0, 8, 2, 9, 3, 1, 1, 2, 2, 3, 3, 0, 1, 3, 2, 3, 4, 3, 1, 3, 0, 4, 4, 2, 4, 5, 4, 2, 3, 4, 3, 5, 0, 8, 4, 5, 0, 7, 0, 3, 4]) b0_reshaped = b0_raw.reshape((int(len(b0_raw) / 2), 2)).T # Matrix 2 data (example sparse matrix) # Raw data: [1, 1, 8, 3, 4, 2, 9, 3, 5, 1, 7, 4, 1, 1, 2, 0, 0, 0, 4, 4, 1, 3, 0, 3, 5, 1, 5, 0, 7, 4, 4, 0, 5, 3] # Reshaped to (row_indices, col_indices) pairs b1_raw = np.array([1, 1, 8, 3, 4, 2, 9, 3, 5, 1, 7, 4, 1, 1, 2, 0, 0, 0, 4, 4, 1, 3, 0, 3, 5, 1, 5, 0, 7, 4, 4, 0, 5, 3]) b1_reshaped = b1_raw.reshape((int(len(b1_raw) / 2), 2)).T print(f&quot;Matrix 1 shape: {b0_reshaped.shape}&quot;) print(f&quot;Matrix 2 shape: {b1_reshaped.shape}&quot;) print() # Create sparse matrices in CSC format # Use ones for data (incidence matrices) mat1 = coo_matrix((np.ones(b0_reshaped.shape[1]), b0_reshaped), shape=(10, 5)).tocsc() mat2 = coo_matrix((np.ones(b1_reshaped.shape[1]), b1_reshaped), shape=(10, 5)).tocsc() 
return mat1, mat2 def analyze_matrix(matrix, name): &quot;&quot;&quot; Analyze a sparse matrix and display its properties. Args: matrix: The sparse matrix to analyze name: Name for display purposes &quot;&quot;&quot; print(f&quot;{name}:&quot;) print(f&quot; Shape: {matrix.shape}&quot;) print(f&quot; nnz (non-zero elements): {matrix.nnz}&quot;) print(f&quot; indptr size: {len(matrix.indptr)}&quot;) print(f&quot; indices size: {len(matrix.indices)}&quot;) print(f&quot; data size: {len(matrix.data)}&quot;) print(f&quot; Expected indptr size: {matrix.shape[1] + 1}&quot;) print(f&quot; indptr valid: {len(matrix.indptr) == matrix.shape[1] + 1}&quot;) print(f&quot; indptr: {matrix.indptr}&quot;) print(f&quot; indices: {matrix.indices}&quot;) print(f&quot; data: {matrix.data}&quot;) print() def demonstrate_bug(mat1, mat2): &quot;&quot;&quot; Demonstrate the scipy multiply bug. This function shows how .multiply() can produce an invalid CSC matrix with incorrect indptr size in certain scipy versions. &quot;&quot;&quot; print(&quot;=&quot; * 60) print(&quot;DEMONSTRATING THE BUG&quot;) print(&quot;=&quot; * 60) # Show original matrices print(&quot;Original matrices:&quot;) analyze_matrix(mat1, &quot;Matrix 1&quot;) analyze_matrix(mat2, &quot;Matrix 2&quot;) # Perform the problematic operation print(&quot;Performing element-wise multiplication...&quot;) result = mat1.multiply(mat2) print(&quot;Result after .multiply():&quot;) analyze_matrix(result, &quot;Result&quot;) # Check if the result is valid is_valid = len(result.indptr) == result.shape[1] + 1 print(f&quot;Result matrix is valid: {is_valid}&quot;) if not is_valid: print(&quot;❌ BUG DETECTED: Invalid indptr size!&quot;) print(f&quot; Expected: {result.shape[1] + 1}, Got: {len(result.indptr)}&quot;) else: print(&quot;βœ… No bug detected: Valid indptr size&quot;) print() def test_matrix_reconstruction(matrix, name): &quot;&quot;&quot; Test if a matrix can be reconstructed from its components. 
This simulates what happens when loading matrix data from storage. &quot;&quot;&quot; print(f&quot;Testing reconstruction of {name}...&quot;) try: reconstructed = csc_matrix((matrix.data, matrix.indices, matrix.indptr), shape=matrix.shape) print(f&quot; βœ… Reconstruction successful&quot;) print(f&quot; Reconstructed shape: {reconstructed.shape}&quot;) print(f&quot; Reconstructed nnz: {reconstructed.nnz}&quot;) print(f&quot; Reconstructed indptr size: {len(reconstructed.indptr)}&quot;) except Exception as e: print(f&quot; ❌ Reconstruction failed: {e}&quot;) print() def demonstrate_fix(mat1, mat2): &quot;&quot;&quot; Demonstrate the fix for the scipy multiply bug. The fix converts the invalid CSC matrix to COO format and back to CSC, which reconstructs the matrix with correct indptr size. &quot;&quot;&quot; print(&quot;=&quot; * 60) print(&quot;DEMONSTRATING THE FIX&quot;) print(&quot;=&quot; * 60) # Perform the problematic operation result_broken = mat1.multiply(mat2) print(&quot;Before fix:&quot;) analyze_matrix(result_broken, &quot;Broken Result&quot;) # Apply the fix if len(result_broken.indptr) != result_broken.shape[1] + 1: print(&quot;Applying fix: .tocoo().tocsc()&quot;) result_fixed = result_broken.tocoo().tocsc() else: print(&quot;No fix needed: Matrix is already valid&quot;) result_fixed = result_broken print(&quot;After fix:&quot;) analyze_matrix(result_fixed, &quot;Fixed Result&quot;) # Test reconstruction test_matrix_reconstruction(result_fixed, &quot;Fixed Matrix&quot;) # Show dense matrix for comparison print(&quot;Dense matrix representation:&quot;) dense_result = result_fixed.toarray() print(dense_result) print() def show_dense_comparison(mat1, mat2): &quot;&quot;&quot; Show the dense matrix comparison to prove the mathematical result is correct. 
&quot;&quot;&quot; print(&quot;=&quot; * 60) print(&quot;DENSE MATRIX COMPARISON&quot;) print(&quot;=&quot; * 60) print(&quot;Matrix 1 (dense):&quot;) print(mat1.toarray()) print() print(&quot;Matrix 2 (dense):&quot;) print(mat2.toarray()) print() # Element-wise multiplication in dense format dense_result = mat1.toarray() * mat2.toarray() print(&quot;Element-wise multiplication result (dense):&quot;) print(dense_result) print() # Show sparse result for comparison sparse_result = mat1.multiply(mat2) print(&quot;Element-wise multiplication result (sparse):&quot;) print(sparse_result.toarray()) print() # Check if they match dense_sparse_match = np.allclose(dense_result, sparse_result.toarray()) print(f&quot;Dense and sparse results match: {dense_sparse_match}&quot;) print() def main(): &quot;&quot;&quot; Main function to run the complete demonstration. &quot;&quot;&quot; get_version_info() # Create test matrices mat1, mat2 = create_test_matrices() # Demonstrate the bug demonstrate_bug(mat1, mat2) # Show dense comparison show_dense_comparison(mat1, mat2) # Demonstrate the fix demonstrate_fix(mat1, mat2) print(&quot;=&quot; * 60) print(&quot;SUMMARY&quot;) print(&quot;=&quot; * 60) print(&quot;This script demonstrates a bug in scipy where .multiply() on CSC matrices&quot;) print(&quot;can produce invalid matrices with incorrect indptr sizes.&quot;) print() print(&quot;The bug affects:&quot;) print(&quot;- Matrix reconstruction from components&quot;) print(&quot;- Matrix storage and loading&quot;) print(&quot;- Any operation that relies on CSC format invariants&quot;) print() print(&quot;The fix (.tocoo().tocsc()) restores the matrix to valid CSC format&quot;) print(&quot;while preserving the mathematical result.&quot;) print() print(&quot;This bug was first noticed in scipy 1.16.0 and may affect other versions.&quot;) print(&quot;=&quot; * 60) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>And here is the output:</p> 
<pre><code>============================================================ SCIPY MULTIPLY BUG DEMONSTRATION ============================================================ Python version: 3.12.0 (main, Mar 15 2024, 15:20:13) [Clang 15.0.0 (clang-1500.1.0.2.5)] NumPy version: 1.26.4 SciPy version: 1.16.0 ============================================================ Creating test matrices... Matrix 1 shape: (2, 21) Matrix 2 shape: (2, 17) ============================================================ DEMONSTRATING THE BUG ============================================================ Original matrices: Matrix 1: Shape: (10, 5) nnz (non-zero elements): 18 indptr size: 6 indices size: 18 data size: 18 Expected indptr size: 6 indptr valid: True indptr: [ 0 3 6 9 13 18] indices: [3 5 7 0 1 3 2 3 8 2 3 4 9 2 3 4 5 8] data: [1. 2. 2. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 2. 1. 1. 1.] Matrix 2: Shape: (10, 5) nnz (non-zero elements): 14 indptr size: 6 indices size: 14 data size: 14 Expected indptr size: 6 indptr valid: True indptr: [ 0 4 6 7 12 14] indices: [0 2 4 5 1 5 4 0 1 5 8 9 4 7] data: [1. 1. 1. 1. 2. 2. 1. 1. 1. 1. 1. 1. 1. 2.] Performing element-wise multiplication... Result after .multiply(): Result: Shape: (10, 5) nnz (non-zero elements): 4 indptr size: 11 indices size: 4 data size: 4 Expected indptr size: 6 indptr valid: False indptr: [0 0 1 1 1 2 3 3 3 3 4] indices: [1 4 0 3] data: [2. 1. 2. 1.] Result matrix is valid: False ❌ BUG DETECTED: Invalid indptr size! Expected: 6, Got: 11 ============================================================ DENSE MATRIX COMPARISON ============================================================ Matrix 1 (dense): [[0. 1. 0. 0. 0.] [0. 1. 0. 0. 0.] [0. 0. 1. 1. 1.] [1. 1. 1. 1. 2.] [0. 0. 0. 1. 1.] [2. 0. 0. 0. 1.] [0. 0. 0. 0. 0.] [2. 0. 0. 0. 0.] [0. 0. 1. 0. 1.] [0. 0. 0. 1. 0.]] Matrix 2 (dense): [[1. 0. 0. 1. 0.] [0. 2. 0. 1. 0.] [1. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [1. 0. 1. 0. 1.] [1. 2. 0. 1. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 2.] [0. 0. 0. 1. 
0.] [0. 0. 0. 1. 0.]] Element-wise multiplication result (dense): [[0. 0. 0. 0. 0.] [0. 2. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 1.] [2. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 1. 0.]] Element-wise multiplication result (sparse): [[0. 0. 0. 0. 0.] [0. 2. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 1.] [2. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 1. 0.]] Dense and sparse results match: True ============================================================ DEMONSTRATING THE FIX ============================================================ Before fix: Broken Result: Shape: (10, 5) nnz (non-zero elements): 4 indptr size: 11 indices size: 4 data size: 4 Expected indptr size: 6 indptr valid: False indptr: [0 0 1 1 1 2 3 3 3 3 4] indices: [1 4 0 3] data: [2. 1. 2. 1.] Applying fix: .tocoo().tocsc() After fix: Fixed Result: Shape: (10, 5) nnz (non-zero elements): 4 indptr size: 6 indices size: 4 data size: 4 Expected indptr size: 6 indptr valid: True indptr: [0 1 2 2 3 4] indices: [5 1 9 4] data: [2. 2. 1. 1.] Testing reconstruction of Fixed Matrix... βœ… Reconstruction successful Reconstructed shape: (10, 5) Reconstructed nnz: 4 Reconstructed indptr size: 6 Dense matrix representation: [[0. 0. 0. 0. 0.] [0. 2. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 1.] [2. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 0. 0.] [0. 0. 0. 1. 0.]] ============================================================ SUMMARY ============================================================ This script demonstrates a bug in scipy where .multiply() on CSC matrices can produce invalid matrices with incorrect indptr sizes. The bug affects: - Matrix reconstruction from components - Matrix storage and loading - Any operation that relies on CSC format invariants The fix (.tocoo().tocsc()) restores the matrix to valid CSC format while preserving the mathematical result. 
This bug was first noticed in scipy 1.16.0 and may affect other versions. ============================================================ </code></pre> <p>Prior versions work fine.</p>
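For reference, the invariant the script keeps checking can be stated as one standalone predicate on the raw CSC arrays (pure-Python sketch, independent of scipy; the data values in the first example are simplified):

```python
def csc_is_valid(shape, indptr, indices, data):
    """Check the structural invariants of a CSC matrix given its raw arrays."""
    n_rows, n_cols = shape
    return (
        len(indptr) == n_cols + 1                         # one pointer per column, plus a sentinel
        and indptr[0] == 0
        and indptr[-1] == len(indices) == len(data)       # last pointer equals nnz
        and all(indptr[i] <= indptr[i + 1] for i in range(n_cols))
        and all(0 <= r < n_rows for r in indices)
    )

# A healthy (10, 5) matrix like the input above passes:
print(csc_is_valid((10, 5), [0, 3, 6, 9, 13, 18],
                   [3, 5, 7, 0, 1, 3, 2, 3, 8, 2, 3, 4, 9, 2, 3, 4, 5, 8],
                   [1.0] * 18))                           # True
# The broken .multiply() result fails: 11 pointers for only 5 columns:
print(csc_is_valid((10, 5), [0, 0, 1, 1, 1, 2, 3, 3, 3, 3, 4],
                   [1, 4, 0, 3], [2.0, 1.0, 2.0, 1.0]))   # False
```

Such a predicate makes a compact guard before serializing or reconstructing matrices, triggering the `.tocoo().tocsc()` workaround only when actually needed.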
<python><scipy>
2025-07-13 16:41:07
1
928
mattjvincent
79,700,040
626,063
Performing Union Operation using Inkscape Extension on 3 Groups But Weird Result
<p>I made an extension to perform union operation on each selected group, but I got a weird result.</p> <p>For example, there are three selected groups of rectangles:</p> <p><a href="https://i.sstatic.net/Z4iyiw8m.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4iyiw8m.jpg" alt="Example - Three Groups of Rectangles" /></a></p> <p>After using the extension on them, only the first group gets converted into a union path, the rest get duplicated and they aren't in their original positions anymore:</p> <p><a href="https://i.sstatic.net/bhZKeJUr.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bhZKeJUr.jpg" alt="Weird Result" /></a></p> <p>The expected result should be three union paths:</p> <p><a href="https://i.sstatic.net/6HzWFWCB.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HzWFWCB.jpg" alt="Expected Result" /></a></p> <p>Here's the extension (save it as &quot;<em>union_operation_on_each_selected_group.py</em>&quot;):</p> <pre class="lang-py prettyprint-override"><code>import inkex import tempfile, os, shutil from uuid import uuid4 from inkex import Group, Circle, Ellipse, Line, PathElement, Polygon, Polyline, Rectangle, Use from inkex.paths import Path from inkex.command import call from math import ceil from lxml import etree def process_svg(svg, action_string): temp_folder = tempfile.mkdtemp() # Create a random filename for svg svg_temp_filepath = os.path.join(temp_folder, f'original_{str(uuid4())}.svg') with open(svg_temp_filepath, 'w') as output_file: svg_string = svg.tostring().decode('utf-8') output_file.write(svg_string) processed_svg_temp_filepath = os.path.join(temp_folder, f'processed_{str(uuid4())}.svg') my_actions = '--actions=' export_action_string = my_actions + f'export-filename:{processed_svg_temp_filepath};{action_string}export-do;' # Run the command line cmd_selection_list = inkex.command.inkscape(svg_temp_filepath, export_action_string) # Replace the current document with the processed document with 
open(processed_svg_temp_filepath, 'rb') as processed_svg: loaded_svg = inkex.elements.load_svg(processed_svg).getroot() shutil.rmtree(temp_folder) return loaded_svg def element_list_union(svg, element_list): group = None action_string = '' for element in element_list: group = element.getparent() action_string = action_string + f'select-by-id:{element.get_id()};' action_string = action_string + f'path-union;select-clear;' processed_svg = process_svg(svg, action_string) to_remove = [] for child in group: to_remove.append(child) for item in to_remove: group.remove(item) for elem in processed_svg: group.add(elem) class UnionizeEachGroupMembers(inkex.EffectExtension): def add_arguments(self, pars): pars.add_argument(&quot;--selection_type_radio&quot;, type=str, dest=&quot;selection_type_radio&quot;, default='all') def effect(self): SELECTION_TYPE = self.options.selection_type_radio if SELECTION_TYPE == 'all': selection_list = self.svg.descendants() else: selection_list = self.svg.selected # Filter for only shapes selected_elements = selection_list.filter(inkex.ShapeElement) if len(selected_elements) &lt; 1: inkex.errormsg('Please select at least one object / No Objects found') return for elem in selected_elements: if elem.tag == inkex.addNS('g', 'svg'): # Check if the tag is 'g' in the SVG namespace group = elem group_id = group.get_id() elements_to_union = [child for child in group if isinstance(child, (Circle, Ellipse, Line, PathElement, Polygon, Polyline, Rectangle, Use))] if len(elements_to_union) &gt; 0: element_list_union(self.svg, elements_to_union) if __name__ == '__main__': UnionizeEachGroupMembers().run() </code></pre> <p>And here's the <em>.inx</em> file:</p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;inkscape-extension xmlns=&quot;http://www.inkscape.org/namespace/inkscape/extension&quot;&gt; &lt;name&gt;Union Operation on Each Selected Group&lt;/name&gt; 
&lt;id&gt;union_operation_on_each_selected_group&lt;/id&gt; &lt;!-- Parameters Here --&gt; &lt;param name=&quot;selection_type_radio&quot; type=&quot;optiongroup&quot; appearance=&quot;radio&quot; gui-text=&quot;Selection Type&quot;&gt; &lt;option value=&quot;selected&quot;&gt;Selected Objects&lt;/option&gt; &lt;option value=&quot;all&quot;&gt;All Objects&lt;/option&gt; &lt;/param&gt; &lt;effect&gt; &lt;object-type&gt;all&lt;/object-type&gt; &lt;effects-menu&gt; &lt;!-- &lt;submenu name=&quot;Submenu Name&quot;/&gt;--&gt; &lt;/effects-menu&gt; &lt;/effect&gt; &lt;script&gt; &lt;command location=&quot;inx&quot; interpreter=&quot;python&quot;&gt;union_operation_on_each_selected_group.py&lt;/command&gt; &lt;/script&gt; &lt;/inkscape-extension&gt; </code></pre> <p>You can download the example file here:</p> <p><a href="https://mega.nz/file/J3YH3BzR#bCHG5TyUNgsRN6XlFSIHo-C4GsJmXdJp2wf2wZh7-MY" rel="nofollow noreferrer">Example File</a></p> <p>Is it possible to get the expected result using an extension? How could I fix the extension?</p>
<python><inkscape>
2025-07-13 14:49:12
1
469
Bayu
79,700,024
5,490,316
How to work with different python library with different interpreter
<p>I have several Python interpreters installed on my computer:</p> <ul> <li>Python 3.13.5 (<em>Recommended</em>)</li> <li>Python 3.13.1 (<em>Global</em>)</li> <li>Python 3.12.10</li> <li>Python 3.12.3</li> <li>Python 3.11.9</li> </ul> <p>But some libraries only work with one interpreter. For example, if I import these 2 libraries:</p> <pre><code>import pyaudio import numpy as p </code></pre> <p>With <code>python 3.13.5</code> as the recommended version, the <code>numpy</code> import gets a squiggly underline that says <code>Import 'numpy' could not be resolved Pylance(ReportMissingImports)</code>. This error also appears when I choose <code>python 3.13.1</code>.</p> <p>On the other hand, if I choose <code>python 3.12.10</code> or <code>python 3.12.3</code> as the interpreter, a similar error shows, but on the <code>pyaudio</code> library.</p> <p>It seems <code>numpy</code> is installed for Python 3.12, and <code>pyaudio</code> for Python 3.13. How can I fix this so I can work with both libraries? Should I uninstall/reinstall the libraries or upgrade them? Or is there a way to install libraries for one specific interpreter?</p>
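To see which interpreter a script actually runs under, and whether a given package is importable there, a small stdlib-only check helps (run it once per interpreter selected in VS Code):

```python
import importlib.util
import sys

print("interpreter:", sys.executable)
for name in ("numpy", "pyaudio"):
    # find_spec() reports importability without actually importing the package.
    found = importlib.util.find_spec(name) is not None
    print(f"  {name}: {'importable' if found else 'NOT installed for this interpreter'}")
```

Each interpreter has its own `site-packages`, so installing for a specific one means invoking that interpreter's own pip, e.g. `python3.13 -m pip install numpy` (the exact launcher name depends on the setup and is an assumption here); creating one venv per project achieves the same effect more cleanly.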
<python>
2025-07-13 14:25:49
2
387
louislugas
79,699,857
5,378,816
How to parametrize all async tests?
<p>I have many test functions, sync and async.</p> <p>I'm using <code>pytest-asyncio</code> in the &quot;auto&quot; mode, so it detects all async tests and I don't have to mark them as such.</p> <p>My problem is that I want to parametrize all my async tests. I want to run them twice, using two different task factories.</p> <p>As a quick fix I'm using a fixture with the &quot;autouse&quot; parameter. Regarding the asyncio task factories it works well. But it also runs the sync tests twice and that wastes time.</p> <pre><code>@pytest.fixture(scope=&quot;module&quot;, params=[False, True], autouse=True) async def task_factories(request): factory = asyncio.eager_task_factory if request.param else None asyncio.get_running_loop().set_task_factory(factory) </code></pre> <p>How can I apply such fixture only to async tests, i.e. tests that the <code>pytest-asyncio</code> is responsible for?</p> <hr /> <p>UPDATE:</p> <p>I removed the <code>autouse=True</code>, changed the fixture scope to &quot;function&quot; and added:</p> <pre><code># in conftest.py def pytest_collection_modifyitems(items): for item in items: if any(m.name == 'asyncio' for m in item.iter_markers()): item.add_marker(pytest.mark.usefixtures(&quot;task_factories&quot;)) </code></pre> <p>I verified, that the markers were added to async tests. But it doesn't change anything. The <code>task_factories</code> fixture is not used.</p>
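The marker test in the update can be pulled out into a plain helper. One alternative sometimes suggested when a `usefixtures` marker added at collection time has no effect is to append the fixture name to `item.fixturenames` directly; that part is an untested assumption here and is shown only as comments:

```python
def is_asyncio_item(marker_names):
    """True for tests pytest-asyncio is responsible for (they carry the 'asyncio' marker)."""
    return "asyncio" in set(marker_names)

# Hypothetical conftest.py hook built on the helper (not verified against a real session):
# def pytest_collection_modifyitems(items):
#     for item in items:
#         if is_asyncio_item(m.name for m in item.iter_markers()):
#             item.fixturenames.append("task_factories")

print(is_asyncio_item(["asyncio", "parametrize"]))  # True
print(is_asyncio_item(["parametrize"]))             # False
```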
<python><pytest><pytest-asyncio>
2025-07-13 09:20:57
1
17,998
VPfB
79,699,841
7,456,317
VSCode devcontainer and UV
<p>I'm developing using VSCode's devcontainer, and I used pip with a <code>requirements.txt</code> file for a few years with no problems. Works like a charm. I'd like to upgrade to using uv, but I'm encountering a problem. My <code>Dockerfile</code> has the following lines:</p> <pre class="lang-none prettyprint-override"><code>ADD https://astral.sh/uv/install.sh /install.sh RUN chmod -R 655 /install.sh &amp;&amp; /install.sh &amp;&amp; rm /install.sh ENV PATH=&quot;/root/.local/bin:$PATH&quot; WORKDIR /app COPY ./pyproject.toml . RUN uv sync --dev </code></pre> <p>These commands install all the modules under <code>/app/.venv</code>. However, when I open the container via VSCode, my working directory is <code>/workspaces/</code>, which is the mount directory, and I need to run <code>uv sync --dev</code> again to install the modules under <code>/workspaces/&lt;project&gt;/.venv</code>.</p> <p>I guess it worked well with pip because there's no <code>.venv</code> there. So my question is: how can I avoid this duplication?</p>
<python><docker><vscode-devcontainer><uv>
2025-07-13 08:33:13
1
913
Gino
79,699,626
16,305,340
the virtual environment is using the global pip not the local one
<p>I am using Ubuntu 24.04 with Python 3.12 installed on it. I am just trying to use the command <code>pip install &lt;package_name&gt;</code> inside the virtual environment, but for some reason it uses the global one.</p> <p>I first created a virtual environment using the command:</p> <pre><code>python3 -m venv .venv </code></pre> <p>I then activated the virtual environment using the command:</p> <pre><code>source ./.venv/bin/activate </code></pre> <p>I then tried to check which python and pip it uses, so I ran the following and got these results:</p> <pre><code>(.venv) $ which pip /usr/bin/pip (.venv) $ which python /mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/test/.venv/bin/python </code></pre> <p>It uses the python in the virtual environment correctly, but not pip. I tried to force-run pip by using the following command:</p> <pre><code>(.venv) $ ./.venv/bin/pip install numpy bash: ./.venv/bin/pip: Permission denied </code></pre> <p>I ran it using sudo, but that was also in vain. I checked the execute permission on the file by running the command:</p> <pre><code>(.venv) $ ls -l ./.venv/bin/pip -rwxrwxr-x 1 ams ams 336 Jul 13 01:06 ./.venv/bin/pip (.venv) $ whoami ams </code></pre> <p>It seems it has the correct permissions to run, so why can't it run?</p> <p>I tried installing the packages using the command <code>python -m pip install numpy</code> and it was installed successfully, but the problem is that when I import that module in the code, I get the following error:</p> <pre><code>(.venv) ams@ams-Alienware-m17-R3:/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts$ python run_prediction.py Traceback (most recent call last): File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/__init__.py&quot;, line 23, in &lt;module&gt; from . 
import multiarray File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/multiarray.py&quot;, line 10, in &lt;module&gt; from . import overrides File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/overrides.py&quot;, line 7, in &lt;module&gt; from numpy._core._multiarray_umath import ( ImportError: /mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/_multiarray_umath.cpython-312-x86_64-linux-gnu.so: failed to map segment from shared object During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/__init__.py&quot;, line 114, in &lt;module&gt; from numpy.__config__ import show_config File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/__config__.py&quot;, line 4, in &lt;module&gt; from numpy._core._multiarray_umath import ( File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/__init__.py&quot;, line 49, in &lt;module&gt; raise ImportError(msg) ImportError: IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE! Importing the numpy C-extensions failed. This error can happen for many reasons, often due to issues with your setup or how NumPy was installed. 
We have compiled some common reasons and troubleshooting tips at: https://numpy.org/devdocs/user/troubleshooting-importerror.html Please note and check the following: * The Python version is: Python3.12 from &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/bin/python&quot; * The NumPy version is: &quot;2.2.6&quot; and make sure that they are the versions you expect. Please carefully study the documentation linked above for further help. Original error was: /mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/_core/_multiarray_umath.cpython-312-x86_64-linux-gnu.so: failed to map segment from shared object The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/run_prediction.py&quot;, line 2, in &lt;module&gt; import numpy as np File &quot;/mnt/DATA/AUC/Research Assistant/Pedestrian-Estimation-Dataset-Annotation/Scripts/.venv/lib/python3.12/site-packages/numpy/__init__.py&quot;, line 119, in &lt;module&gt; raise ImportError(msg) from e ImportError: Error importing numpy: you should not try to import numpy from its source directory; please exit the numpy source tree, and relaunch your python interpreter from there. </code></pre> <p><strong>UPDATE:</strong> As pointed out by @phd, the SSD from which I was running the script was mounted with the noexec flag. 
Running this command:</p> <pre><code>mount | grep -F /mnt </code></pre> <p>resulted in this:</p> <pre><code>(rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,user,x-gvfs-show) </code></pre> <p>So I edited the &quot;/etc/fstab&quot; file and changed this line:</p> <pre><code>UUID=349264BD926484E8 /mnt/DATA auto rw,user,uid=1000,gid=1000,dmask=0002,fmask=0002,nosuid,nodev,nofail,x-gvfs-show 0 0 </code></pre> <p>to this line:</p> <pre><code>UUID=349264BD926484E8 /mnt/DATA auto rw,user,exec,uid=1000,gid=1000,dmask=0002,fmask=0002,nosuid,nodev,nofail,x-gvfs-show 0 0 </code></pre> <p>The underlying cause is that, since I switched from Windows to a Linux machine, some of the SSDs are still formatted as NTFS, which needs some special treatment. However, my problem is solved and now it works peacefully.</p>
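The python/pip mismatch shown at the top of the question can also be detected from inside Python (stdlib only), which sidesteps shell `PATH` quirks entirely:

```python
import shutil
import sys

pip_path = shutil.which("pip")
print("python:", sys.executable)
print("pip:   ", pip_path)
# In a healthy activated venv both live under sys.prefix; a system pip paired
# with a venv python reproduces the mismatch reported above.
if pip_path is not None:
    print("pip belongs to this environment:", pip_path.startswith(sys.prefix))
```

Regardless of the mount issue, `python -m pip install ...` avoids the mismatch by construction, since it always runs the pip belonging to the invoking interpreter.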
<python><python-3.x><pip><virtualenv>
2025-07-12 22:25:52
1
1,893
abdo Salm
79,699,531
16,037,994
pybind11: Python callback executed in C++ with parameter modification
<p>I'm working on Python bindings of my C++ library (a mathematical optimization solver) and I'm stuck at a point where I create a Python callback <code>evaluate_constraints()</code> that takes two arguments, pass it to the C++ library and evaluate it with C++ arguments. The callback modifies its second parameter <code>constraints</code> based on its first parameter <code>x</code>.</p> <pre class="lang-cpp prettyprint-override"><code>// C++ code #include &quot;Vector.hpp&quot; #include &lt;pybind11/pybind11.h&gt; namespace py = pybind11; void solve(const std::function&lt;void(const Vector&amp;, Vector&amp;)&gt;&amp; evaluate_constraints) { const Vector x = ...; Vector constraints = ...; evaluate_constraints(x, constraints); } PYBIND11_MODULE(myCppModule, module) { py::class_&lt;Vector&gt;(module, &quot;Vector&quot;) .def(py::init&lt;size_t&gt;(), &quot;Constructor&quot;) .def(&quot;__getitem__&quot;, [](const Vector&amp; vector, size_t index) { return vector[index]; }) .def(&quot;__setitem__&quot;, [](Vector&amp; vector, size_t index, double value) { vector[index] = value; }); module.def(&quot;solve&quot;, &amp;solve); } </code></pre> <pre class="lang-py prettyprint-override"><code># Python code import myCppModule def evaluate_constraints(x, constraints): constraints[0] = function of x constraints[1] = function of x ... myCppModule.solve(evaluate_constraints) </code></pre> <p>Unfortunately, some copy must happen somewhere, because the C++ object <code>constraints</code> is not modified. I'm not sure whether I missed something totally obvious (I've stumbled upon suggestions to use <code>py::return_value_policy::reference_internal</code>, but to no avail) or whether it is indeed a bit tricky to address. Hope you can crack it!</p> <p>Note: the second parameter is a <code>Vector</code> here, but for other callbacks, it could be a C++ matrix type.</p>
<python><c++><reference><pybind11>
2025-07-12 19:39:09
1
401
Charlie Vanaret - the Uno guy
79,699,115
28,004
Can't find PDF text
<p>I'm trying to come up with a nice feature for my work colleagues, that every day in the morning, we would get in Slack the menu of the day... inspired by <a href="https://github.com/lemedege/LunchBot/blob/main/run.py" rel="nofollow noreferrer">https://github.com/lemedege/LunchBot/blob/main/run.py</a></p> <p>But I'm struggling <strong>to read the text</strong> from this PDF</p> <p><a href="https://torvekoekken.dk/Files/Files/Branding%20RAPIDO/01%20KBH%20menu/29_S%C3%B8borg_Favorit_ENG%202025%20ny.pdf" rel="nofollow noreferrer">https://torvekoekken.dk/Files/Files/Branding%20RAPIDO/01%20KBH%20menu/29_S%C3%B8borg_Favorit_ENG%202025%20ny.pdf</a></p> <p>I saved it as <code>favorite.pdf</code></p> <p>Here is a simple test:</p> <pre class="lang-py prettyprint-override"><code>from PyPDF2 import PdfReader name = &quot;favorite&quot; doc = PdfReader(name+'.pdf') print(f&quot;PDF {name} opened, with {len(doc.pages)} pages&quot;) page = doc.pages[0] print(f&quot; text -&gt;{page.extract_text()}&lt;-&quot;) &quot;&quot;&quot; prints: PDF favorite opened, with 5 pages text -&gt;&lt;- &quot;&quot;&quot; </code></pre> <p>and it's not because of the first page, all pages output empty content</p> <pre class="lang-py prettyprint-override"><code>for page in doc.pages: print(f&quot; PDF contains this text -&gt;{page.extract_text()}&lt;-&quot;) ### prints: PDF favorite opened, with 5 pages PDF contains this text -&gt;&lt;- PDF contains this text -&gt;&lt;- PDF contains this text -&gt;&lt;- PDF contains this text -&gt;&lt;- ### </code></pre> <p>I even tried with <a href="https://pymupdf.readthedocs.io/en/latest/recipes-text.html" rel="nofollow noreferrer"><code>pymupdf</code></a> as well, to no avail πŸ˜•</p> <p>What am I missing? Is it because of the text font they are using?</p>
<python><pypdf>
2025-07-12 08:04:55
0
75,406
balexandre
79,699,094
11,071,831
Testing equality of lists which contain NaN
<p>I have a small class for which I am writing tests. The result for my function contains <code>nan</code> which causes my test to fail because <code>nan</code> is not equal to any other <code>nan</code>. How do I write a proper test for this?</p> <pre><code>from math import nan import unittest import pandas as pd class holder: def __init__(self, source: list) -&gt; None: self.source = source self.data = [] def calculate_rolling_average(self): self.data = pd.Series(self.source).rolling(3).mean().to_list() class test_value(unittest.TestCase): def setUp(self): self.hold = holder(source=[10, 20, 30, 40]) def test_rolling_simple_average_for_list(self): expected_result = [nan, nan, 20.0, 30.0] self.hold.calculate_rolling_average() self.assertSequenceEqual(self.hold.data, expected_result) unittest.main() </code></pre> <p>This results in:</p> <pre><code>&gt;&gt;&gt; AssertionError: Sequences differ: [nan, nan, 20.0, 30.0] != [nan, nan, 20.0, 30.0] First differing element 0: nan nan [nan, nan, 20.0, 30.0] </code></pre> <p>I read <a href="https://stackoverflow.com/questions/51728427/unittest-how-to-assert-if-the-two-possibly-nan-values-are-equal">How to assert if the two possibly NaN values are equal</a> which is testing a singular value for <code>numpy.nan</code> so the same does not apply to this question.</p>
<python><unit-testing>
2025-07-12 07:26:10
1
440
Charizard_knows_to_code
79,699,081
4,058,178
Not able to execute DAX on PowerBI in .Net through ADOMD Client
<p>I have the below Python code, which works perfectly on any workspace by executing the DAX on a Power BI dataset, whereas I converted this to .NET, but it's failing and giving a Not Found error. Can someone please help here?</p> <p>Python</p> <pre><code>import clr clr.AddReference(r&quot;C:\Program Files\Microsoft.NET\ADOMD.NET\160\Microsoft.AnalysisServices.AdomdClient.dll&quot;) import pandas as pd from pyadomd import Pyadomd # Power BI connection details conn_str = ( &quot;Provider=MSOLAP;&quot; &quot;Data Source=powerbi://api.powerbi.com/v1.0/myorg/WorkSpace1;&quot; &quot;Initial Catalog=TEST&quot; ) # Run DAX query def run_dax_query(query: str) -&gt; pd.DataFrame: with Pyadomd(conn_str) as conn: with conn.cursor().execute(query) as cur: df = pd.DataFrame(cur.fetchall(), columns=[col.name for col in cur.description]) return df </code></pre> <p>.Net</p> <pre><code>public string ExecuteDaxQuery(string daxQuery) { string workspaceConnection = _config[&quot;PowerBI:XmlaConnectionString&quot;]; using var conn = new AdomdConnection(workspaceConnection); conn.Open(); using var cmd = new AdomdCommand(daxQuery, conn); using var reader = cmd.ExecuteReader(); var results = new List&lt;Dictionary&lt;string, object&gt;&gt;(); while (reader.Read()) { var row = new Dictionary&lt;string, object&gt;(); for (int i = 0; i &lt; reader.FieldCount; i++) { row[reader.GetName(i)] = reader.GetValue(i); } results.Add(row); } return System.Text.Json.JsonSerializer.Serialize(results); } </code></pre> <p>Error SocketException: An established connection was aborted by the software in your host machine.</p>
<python><.net><powerbi><dax>
2025-07-12 07:08:15
0
404
Sam K
79,698,987
2,955,095
How can I execute a Python script in the REPL interpreter mode and get the exactly same output as if it was manually typed in? (Ubuntu, Python 3.12)
<p>The Python interpreter can be run either in script or interactive/REPL mode. I do have a Python script as a text file but want to run it <em>as if</em> it was manually typed in in the interactive/REPL mode. I want to get the output (stdout) exactly as if this was done. To give an example, assume that I have the following text stored in a file called <code>script.py</code></p> <pre class="lang-py prettyprint-override"><code>a = 5 a + 8 if a &lt; 4: print(&quot;123&quot;) else: print(&quot;xyz&quot;) exit() </code></pre> <p>I want to execute this script and get the following stdout:</p> <pre><code>Python 3.12.3 (main, Jun 18 2025, 17:59:45) [GCC 13.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; a = 5 &gt;&gt;&gt; a + 8 13 &gt;&gt;&gt; if a &lt; 4: ... print(&quot;123&quot;) ... else: ... print(&quot;xyz&quot;) ... xyz &gt;&gt;&gt; exit() </code></pre> <p>As far as I can see, this is not possible using the <code>python3</code> command with any of the standard options. So I am looking either for some Bash script or maybe a Python program that would take the path to my script as input and then produce the stdout as required.</p> <p>If need be, I could also change my input script to look like this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; a = 5 &gt;&gt;&gt; a + 8 &gt;&gt;&gt; if a &lt; 4: ... print(&quot;123&quot;) ... else: ... print(&quot;xyz&quot;) &gt;&gt;&gt; exit() </code></pre> <p>I feel that what I want should be possible, because I think that doctest does something not very far from this.</p> <p>Many thanks for any suggestions or pointers.</p> <p>Cheers, Thomas.</p>
<python><linux><stdout><interactive><read-eval-print-loop>
2025-07-12 03:42:41
3
441
Thomas Weise
79,698,870
9,669,142
Convert FlightRadar altitude to WGS84 ellipsoid + elevation
<p>I have a CSV export from a flight from FlightRadar, where the altitude is included. I want to use this CSV with Cesium. Right now, the ground altitude in the file is always 0, which won't work properly with Cesium since then the altitude will be under the ground. Hence I need to take two things into account: the WGS84 ellipsoid and the elevation.</p> <p>I use the following code to convert from feet to meters ellipsoid:</p> <pre><code>from pyproj import Transformer from pygeodesy import ellipsoidalVincenty as ev def feet_msl_to_meters_ellipsoid(lat, lon, height_ft_msl): height_m_msl = height_ft_msl * 0.3048 point = ev.LatLon(lat, lon) geoid_height = point.height # Default EGM96 height_ellipsoid = height_m_msl + geoid_height return height_ellipsoid </code></pre> <p>However, some of the points still end up below the ground, hence I think it has something to do with the elevation. Does someone know how to take the ground elevation into account using the coordinates? Or is there something else going on?</p> <p>UPDATE An image to visualize the issue: <a href="https://i.sstatic.net/3GSQdAml.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3GSQdAml.png" alt="enter image description here" /></a></p>
<python><wgs84>
2025-07-11 21:57:05
1
567
Fish1996
79,698,757
4,996,797
Pytest fixture saved into a pickle
<p>I am working on a project where I am building on top of a solution of a very time-consuming problem. Instead of solving the very time consuming part many times, I only run the solver once, I save the solution with pickle, and then I reuse the pickle to test my features build on top of the solution.</p> <p>Here is a model of what I mean</p> <pre class="lang-py prettyprint-override"><code>import time import pickle class Solution: &quot;&quot;&quot; A class that represents a solution of a very time-consuming problem. &quot;&quot;&quot; def solve_difficult_problem() -&gt; Solution: time.sleep(1) return Solution() def prepare_solution() -&gt; None: &quot;&quot;&quot; I solve the problem only once. I use the solution for all the development of my new feature as it never changes. &quot;&quot;&quot; solution = solve_difficult_problem() with open('solved.pickle', 'wb') as file: pickle.dump(solution, file) def my_feature(solution: Solution) -&gt; None: &quot;&quot;&quot; I am using the solution. &quot;&quot;&quot; print(solution) def test_my_feature() -&gt; None: # prepare_solution() # I run it only once: at the first run. with open('solved.pickle', 'rb') as file: solution = pickle.load(file) assert my_feature(solution) is None </code></pre> <p>My problem is that with pytest, I need to manually uncomment the <code>prepare_solution()</code> at the first run of my tests. Is there a way for pytest to automate this?</p> <p>I have tried writing a separate file which prepares the needed solution. But I don't know how to tell pytest that it should run it manually only the first time.</p> <p>I would like the solution to be saved as a local file that persists longer than one session. Basically, I would like the solution to be available until I remove the whole project from my computer.</p>
<python><automated-tests><pytest>
2025-07-11 19:22:14
1
408
PaweΕ‚ WΓ³jcik
79,698,620
21,370,869
correct way to use the β€˜dragCallback’ parameter of the channelBox command?
<p>I have been at this for an hour, with various variations, but have not had any success with it.</p> <p>Here is a very basic sample of one approach I tried:</p> <pre><code>def foo(dragControl, x, y, modifiers, *args): print(&quot;--------------------------------- Hello World ------------------------------------&quot;) cmds.channelBox('mainChannelBox', dragCallback=foo) </code></pre> <p>I am expecting that, when I engage the MMB on either the viewport (with an attribute selected in the channel box) or on top of the channel box, the message in <code>foo</code> will be printed. But in this case nothing is printed in either case. The few AI models I usually consult with just take me around in circles.</p> <p>I would appreciate any help on this matter, thank you.</p>
<python><maya>
2025-07-11 16:33:46
0
1,757
Ralf_Reddings
79,698,441
6,838,716
Running Python functions from other files
<p>I have created a number of .py files containing functions. The files would be organized in a Project_Repertory as follows:</p> <p>Project_Repertory</p> <ul> <li>Load_Repertory <ul> <li>load_functions.py</li> <li>main.py</li> </ul> </li> <li>Automate_Repertory <ul> <li>main.py</li> </ul> </li> </ul> <p>In the Automate_Repertory I have a <code>main.py</code> file, which is to eventually be run using Apache Airflow (to which I am fully new), and I want to call the <code>main()</code> function of each of the other <code>main.py</code> files in the other folders.</p> <p>However, if I write <code>from Load_Repertory.main import main</code>, it fails with a <code>ModuleNotFound</code> error. I tried working from the &quot;Project_Repertory&quot; instead, and then when I do <code>from Load_Repertory.main import main</code> it fails with the same error but pointing to a line in the code saying <code>from load_functions import function1</code>.</p> <p>My <code>Load_Repertory.main.py</code> file looks like this:</p> <pre><code>import re from load_functions import function1 def main(): print(&quot;This is main&quot;) function1() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I also tried <code>importlib</code> and <code>runpy</code> unsuccessfully. How do I import the <code>main()</code> from the <code>Load_Repertory.main.py</code> file and get it to run successfully if the terminal is open on the <code>Automate_Repertory</code>?</p> <p>Details: Python 3.13.3 on Windows 11.</p>
<python><import>
2025-07-11 13:57:38
1
1,486
YamiOmar88
79,698,380
3,336,423
PyQt vs PySide uic loader difference
<p>I'm migrating some code from PyQt5 to PySide6. I'm experiencing a significant behavioural difference when loading .ui files.</p> <p>The original PyQt5 code:</p> <pre><code>import sys import os from PyQt5.QtWidgets import QApplication from PyQt5.QtWidgets import QWidget, QMainWindow from PyQt5 import uic ui_content = '&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;\ &lt;ui version=&quot;4.0&quot;&gt;\ &lt;class&gt;Form&lt;/class&gt;\ &lt;widget class=&quot;QWidget&quot; name=&quot;Form&quot;&gt;\ &lt;layout class=&quot;QVBoxLayout&quot; name=&quot;verticalLayout&quot;&gt;\ &lt;item&gt;\ &lt;widget class=&quot;QPushButton&quot; name=&quot;myButton&quot;&gt;\ &lt;property name=&quot;text&quot;&gt;\ &lt;string&gt;PushButton&lt;/string&gt;\ &lt;/property&gt;\ &lt;/widget&gt;\ &lt;/item&gt;\ &lt;/layout&gt;\ &lt;/widget&gt;\ &lt;resources/&gt;\ &lt;connections/&gt;\ &lt;/ui&gt;\ ' if __name__ == '__main__': app = QApplication(sys.argv) mainWindow = QMainWindow() class MyCentralWidget(QWidget): def __init__(self): super().__init__() import tempfile tmp = tempfile.NamedTemporaryFile(delete=False) tmp.write(ui_content.encode()) tmp.close() self.ui = uic.loadUi(tmp.name,self) self.ui.myButton.setText(self.ui.myButton.text() + &quot; Foo&quot;) self.myButton.setText(self.myButton.text() + &quot; Bar&quot;) os.remove(tmp.name) centralWidget = MyCentralWidget() mainWindow.setCentralWidget(centralWidget) mainWindow.show() app.exec() </code></pre> <p>As you can see, objects declared in the .ui (like <code>myButton</code>) end up being both attributes of the returned <code>ui</code> object <em>and</em> the <code>MyCentralWidget</code> object. Meaning that both <code>self.ui.myButton</code> and <code>self.myButton</code> work.
This is because I pass <code>self</code> as second parameter to <code>loadUi</code>, but if I don't do this the widget remains blank, so I suppose that's the right way to do.</p> <p>When migrating to PySide6, I had to adapt a bit the code:</p> <pre><code>import sys import os from PySide6.QtWidgets import QApplication from PySide6.QtWidgets import QWidget, QMainWindow, QVBoxLayout from PySide6.QtUiTools import QUiLoader from PySide6.QtCore import QFile ui_content = '&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt;\ &lt;ui version=&quot;4.0&quot;&gt;\ &lt;class&gt;Form&lt;/class&gt;\ &lt;widget class=&quot;QWidget&quot; name=&quot;Form&quot;&gt;\ &lt;layout class=&quot;QVBoxLayout&quot; name=&quot;verticalLayout&quot;&gt;\ &lt;item&gt;\ &lt;widget class=&quot;QPushButton&quot; name=&quot;myButton&quot;&gt;\ &lt;property name=&quot;text&quot;&gt;\ &lt;string&gt;PushButton&lt;/string&gt;\ &lt;/property&gt;\ &lt;/widget&gt;\ &lt;/item&gt;\ &lt;/layout&gt;\ &lt;/widget&gt;\ &lt;resources/&gt;\ &lt;connections/&gt;\ &lt;/ui&gt;\ ' if __name__ == '__main__': ui_loader = QUiLoader() app = QApplication(sys.argv) mainWindow = QMainWindow() class MyCentralWidget(QWidget): def __init__(self): super().__init__() import tempfile tmp = tempfile.NamedTemporaryFile(delete=False) tmp.write(ui_content.encode()) tmp.close() uifile = QFile(tmp.name) uifile.open(QFile.ReadOnly) self.ui = ui_loader.load(uifile,self) uifile.close() # Needed to have self.ui be centered self.mainLayout = QVBoxLayout(self) self.mainLayout.addWidget(self.ui) self.ui.myButton.setText(self.ui.myButton.text() + &quot; Foo&quot;) #self.myButton does not exist #self.myButton.setText(self.myButton.text() + &quot; Bar&quot;) os.remove(tmp.name) centralWidget = MyCentralWidget() mainWindow.setCentralWidget(centralWidget) mainWindow.show() app.exec() </code></pre> <p>But now, the objects are not cloned to <code>self</code> and so <code>self.myButton</code> does not exist.</p> <p>Is that expected? 
Or am I doing something wrong?</p> <p>My code was written using PyQt5, so there's many places where we use <code>self.XXX</code> instead of <code>self.ui.XXX</code> just because it used to work just fine, and it's hard to identify all the places. To fix all the runtime issues, we duplicate the objects:</p> <pre><code>self.__dict__.update(self.ui.__dict__) </code></pre> <p>Is this a bad idea/workaround?</p>
<python><pyqt5><qt-designer><pyside6>
2025-07-11 13:03:53
0
21,904
jpo38
79,698,070
2,311,202
Adding vertical lines to histogram plot
<p>I have the following code to plot a histogram in Python:</p> <p><code>fig.add_trace(go.Histogram(x = df[&quot;error&quot;], showlegend = False))</code></p> <p>This results in a histogram similar to:</p> <p><a href="https://i.sstatic.net/6bw54LBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6bw54LBM.png" alt="enter image description here" /></a></p> <p>Now I want to add 2 vertical lines to the plot at x = target and x = -target, to create something similar to:</p> <p><a href="https://i.sstatic.net/JHMmnJ2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JHMmnJ2C.png" alt="enter image description here" /></a></p> <p>I currently have the following code:</p> <pre><code>fig.add_trace(go.Scatter(x=[target, target], y=[0, ymax], mode='lines', name='target', line=dict(color='grey', dash='dash'), legendgroup='target', showlegend=True)) fig.add_trace(go.Scatter(x=[-target, -target], y=[0, ymax], mode='lines', name='target', line=dict(color='grey', dash='dash'), legendgroup='target', showlegend=False)) </code></pre> <p>However, I am not sure how I can retrieve the required ymax value from the histogram plot.</p>
<python><plotly><histogram>
2025-07-11 08:54:59
2
506
Pietair
79,697,826
3,577,105
Is there a way to tell if a function's return value is used?
<p>Is there a way to determine if the return value of a function is needed, from inside that function?</p> <p>Example:</p> <pre><code>drawPoint(1,2) </code></pre> <p>vs.</p> <pre><code>newPointId=drawPoint(1,2) </code></pre> <p>drawPoint makes an http request, so, the response from that request might be received from the server 'right away' or 'soon' or 'after a while' or never.</p> <p>If the return value is not needed (the first case), the command could be 'fire-and-forget': just an immediate return with irrelevant value after the request is sent with zero timeout.</p> <p>If the return value is needed (the second case), then the command should be 'blocking', at least for some timeout period. Various retry schemes and timeout values could be employed. (If no response is received in the desired timeframe, it could return False, and the rest of the code would need to be able to deal with that case; but there's an expectation that the variable 'newPointId' should be usable on the next line of code.)</p> <pre><code>def drawPoint(x,y): if [returnValueNeeded]: # do a blocking request, with retry scheme etc. # return its response or whatever part of its response is needed else: # do a non-blocking 'fire-and-forget' request with zero timeout # return with any value or no value </code></pre>
<python>
2025-07-11 04:28:59
2
904
Tom Grundy
79,697,347
13,682,559
How to define a immutable ClassVar in a python protocol?
<p>I have an Enum and several classes using that Enum in an immutable class variable.</p> <pre class="lang-py prettyprint-override"><code>from typing import ClassVar, Protocol, Final, Literal from enum import Enum class MyEnum(Enum): A = 0 B = 1 class MyClass1: type: Final = MyEnum.A class MyClass2: type: Final = MyEnum.B </code></pre> <p>This works fine but I fail to define a protocol encompassing the two classes. Both:</p> <pre class="lang-py prettyprint-override"><code>class MyProtocol1(Protocol): @property def type(self) -&gt; MyEnum: ... class MyProtocol2(Protocol): type: Final[MyEnum] </code></pre> <p>do not work for obvious reasons.</p> <pre class="lang-py prettyprint-override"><code>class MyProtocol3(Protocol): @property def type(self) -&gt; ClassVar[MyEnum]: ... class MyProtocol4(Protocol): @property @classmethod def type(cls) -&gt; MyEnum: ... </code></pre> <p>might achieve what I want but are not allowed. I thought about using a Generic with Literal, which also led nowhere. I am running out of ideas.</p>
<python><protocols><python-typing><class-variables>
2025-07-10 16:29:34
1
1,108
Durtal
79,697,269
6,141,238
When reading a database table with polars, how do I avoid a SchemaError?
<p>I have a large <code>table_to_load</code> in a database file <code>my_database.db</code> that I am trying to read into a Python program as a <code>polars</code> DataFrame. Here is the code that does the reading:</p> <pre><code>import sqlite3 import polars as pl conn = sqlite3.connect('my_database.db') df = pl.read_database(connection=conn, query='SELECT * FROM table_to_load', infer_schema_length=None) conn.close() </code></pre> <p>When I run this code, <code>pl.read_database</code> throws the error, &quot;polars.exceptions.SchemaError: failed to determine supertype of i64 and binary.&quot; The traceback is at the end of the post. I have several closely related questions about this:</p> <ol> <li>Each column of <code>table_to_load</code> verifiably already has one of the three <a href="https://docs.pola.rs/api/python/dev/reference/api/polars.read_database.html" rel="nofollow noreferrer">sqlite3 datatypes</a> <code>integer</code>, <code>real</code>, and <code>text</code>. Can I instruct <code>polars</code> to just use a reasonable analog of each datatype? For example, <code>i64</code> seems like a reasonable analog of <code>integer</code>.</li> <li>Why does <code>polars</code> struggle to determine the datatype if it is reading the whole column as instructed by the parameter setting <code>infer_schema_length=None</code>, and is there any way to resolve this? (It <a href="https://docs.pola.rs/api/python/dev/reference/api/polars.read_database.html" rel="nofollow noreferrer">looks</a> like there is also a <code>schema_overrides</code> parameter, but this would be challenging to use in my application because my table columns are not static.)</li> <li>In the traceback, why does <code>polars</code> not reveal the column or columns that are generating the SchemaError, and is there a way to request that it do so?
In my case, this would be helpful because <code>table_to_load</code> has hundreds of columns.</li> </ol> <hr /> <p>The traceback of the SchemaError:</p> <pre><code>Traceback (most recent call last): File &quot;c:\Users\user0\AppData\Local\Programs\Python\Python312\test_module.py&quot;, line 115, in &lt;module&gt; main_function() File &quot;c:\Users\user0\AppData\Local\Programs\Python\Python312\test_module.py&quot;, line 54, in main_function df = pl.read_database(connection=conn, query='SELECT * FROM table_to_load', infer_schema_length=None) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\io\database\functions.py&quot;, line 251, in read_database ).to_polars( ^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\io\database\_executor.py&quot;, line 543, in to_polars frame = frame_init( ^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\io\database\_executor.py&quot;, line 300, in _from_rows return frames if iter_batches else next(frames) # type: ignore[arg-type] ^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\io\database\_executor.py&quot;, line 283, in &lt;genexpr&gt; DataFrame( File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\dataframe\frame.py&quot;, line 377, in __init__ self._df = sequence_to_pydf( ^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\_utils\construction\dataframe.py&quot;, line 461, in sequence_to_pydf return _sequence_to_pydf_dispatcher( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\functools.py&quot;, line 909, in wrapper return dispatch(args[0].__class__)(*args, **kw) 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\_utils\construction\dataframe.py&quot;, line 674, in _sequence_of_tuple_to_pydf return _sequence_of_sequence_to_pydf( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\user0\AppData\Local\Programs\Python\Python312\Lib\site-packages\polars\_utils\construction\dataframe.py&quot;, line 590, in _sequence_of_sequence_to_pydf pydf = PyDataFrame.from_rows( ^^^^^^^^^^^^^^^^^^^^^^ polars.exceptions.SchemaError: failed to determine supertype of i64 and binary [1]: https://www.sqlite.org/datatype3.html </code></pre>
<python><dataframe><sqlite><python-polars><polars>
2025-07-10 15:10:29
2
427
SapereAude
79,697,190
5,058,384
ComfyUI error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0
<p>I am trying to train a LoRA in ComfyUI using the modified version of the example workflow for ComfyUI-FluxTrainer (<a href="https://github.com/kijai/ComfyUI-FluxTrainer/issues" rel="nofollow noreferrer">https://github.com/kijai/ComfyUI-FluxTrainer/issues</a>) I found on reddit here (<a href="https://www.reddit.com/r/StableDiffusion/comments/1f5onyx/tutorial_setup_train_flux1_dev_loras_using/" rel="nofollow noreferrer">https://www.reddit.com/r/StableDiffusion/comments/1f5onyx/tutorial_setup_train_flux1_dev_loras_using/</a>). This is the first version where CUDA has not run out of memory (VRAM 8gb) but I am getting this error:</p> <pre><code>Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! </code></pre> <p>There are lots of posts around the web with this error but none that seem to trace back to training a LoRA and all seem to have different solution. Anyone know what the issue might be in this instance?</p> <p>Full error log here: <a href="https://pastebin.com/rFkQaX5A" rel="nofollow noreferrer">https://pastebin.com/rFkQaX5A</a></p> <p>EDIT: Have traced the issue to the &quot;blocks_to_swap&quot; argument in the Init Flux LoRA Training node. If this is set to 0 then the issue stops but I'm then back to a &quot;torch.OutOfMemoryError: Allocation on device&quot; error - so not a solution really.</p>
<python><pytorch>
2025-07-10 14:05:59
0
966
garrettlynchirl
79,696,741
2,311,202
Plot confusion matrix in black and white
<p>I currently have the following code:</p> <pre class="lang-py prettyprint-override"><code>disp = ConfusionMatrixDisplay(confusion_matrix=cm) disp.plot() plt.show() </code></pre> <p>This results in something like:</p> <p><a href="https://i.sstatic.net/JfXLXyc2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfXLXyc2.png" alt="colored plot" /></a></p> <p>However, I want the diagonal to be depicted with a black background, and the off-diagonal with a white background, that is something like:</p> <p><a href="https://i.sstatic.net/lIwwCK9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lIwwCK9F.png" alt="bwplot" /></a></p> <p>How can I change the cell colors accordingly?</p>
<python><scikit-learn><confusion-matrix>
2025-07-10 08:32:50
1
506
Pietair
79,696,290
1,713,450
A way to defer yielding a Request in scrapy?
<p>My scrapy logic is as follows:</p> <ol> <li>get all rows from <code>child_page_table</code> where <code>parent_page_id</code> is null</li> <li>for each row, if <code>parent_page_id</code> is (still) null, yield a <code>Request</code> with callback <code>scrape_page</code></li> <li><code>[scrape_page]</code> if this a &quot;parent page,&quot; add data to <code>parent_page_table</code> and get a list of all &quot;child pages&quot; linked from the parent page. Insert those into <code>child_page_table</code> with <code>parent_page_id</code> pointing to what we just inserted to <code>parent_page_table</code>.</li> <li>if this is not a parent page, yield <code>Request</code> of the parent page linked within the child page with callback <code>scrape_page</code> (which upon recursing will run step #3 rather than step #4)</li> </ol> <p>The logic here is that I've got tens of thousands of child pages and parent pages point to many of those child pages. The naive way of filling in the <code>parent_page_id</code> is just for every child page, request it and fetch the parent page and request that, etc.</p> <p>But a more efficient way is get the first child page, fetch the parent page, and the parent page will have dozens or hundreds of child pages linked. For each of those, making no more requests, update the data in the child table so that we <em>never have to make a Request for those children</em>.</p> <p>However, it appears that what scrapy's <code>start</code> does is yield all 15,000 rows-as-Requests immediately rather than what I'd assumed, which was it would yield 16-ish at a time (the number of concurrent requests I'm allowing).</p> <p>So essentially step #2 <em>always</em> happens as it happens immediately after step #1 and no requests have been called to update any child rows' <code>parent_page_id</code>.</p> <p><strong>Is there a way to create a Request that will, right before making the request, re-check if the request is still necessary? 
Essentially delay the &quot;is it necessary?&quot; logic until immediately before that request is made?</strong></p> <p>Possibly some way to thunk the request until whatever manages the requests is actually going to fire it off to HTTP land?</p> <pre class="lang-py prettyprint-override"><code> children = list(enumerate(self.cur.execute(&quot;select tag from children where parent_page_id is null;&quot;).fetchall())) rel_len = len(children) for (idx, row) in children: logging.debug(&quot;[Children] Fetching &quot; + str(idx) + &quot;/&quot; + str(rel_len)) child_item = Page(tag=row[0], type=TagType.CHILD) if self._needs_call(child_item):  # checks if parent_page_id is null yield Request( base_url + child_item.tag, callback=self.parse_page, cb_kwargs=dict(tag=row[0], tag_type=TagType.CHILD), errback=self.handle_error, meta=MyScraper.meta ) </code></pre>
<python><scrapy>
2025-07-09 21:43:11
0
1,513
user1713450
79,696,174
1,056,563
How to use mpld3.display() within ipython?
<p>I am running an example for <em>mpld3</em>: how to show it?</p> <pre class="lang-py prettyprint-override"><code>import mpld3 from mpld3 import plugins from mpld3.utils import get_id import numpy as np import collections import matplotlib.pyplot as plt mpld3.enable_notebook() N_paths = 5 N_steps = 100 x = np.linspace(0, 10, 100) y = 0.1 * (np.random.random((N_paths, N_steps)) - 0.5) y = y.cumsum(1) fig, ax = plt.subplots() labels = [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;, &quot;d&quot;, &quot;e&quot;] line_collections = ax.plot(x, y.T, lw=4, alpha=0.2) interactive_legend = plugins.InteractiveLegendPlugin(line_collections, labels) plugins.connect(fig, interactive_legend) mpld3.display() </code></pre> <p>Result is an object - but no graph:</p> <blockquote> <p>Out[7]: &lt;IPython.core.display.HTML object&gt;</p> </blockquote>
<python><mpld3>
2025-07-09 19:44:11
1
63,891
WestCoastProjects
79,696,142
2,250,791
How to use the type of another method's parameter as the type of my method's parameter?
<p>If I have something like this:</p> <pre><code>def _process(img: str | List[str] | ndarray[_AnyShape, dtype[Any]] | List[ndarray[_AnyShape, dtype[Any]]]) -&gt; None: … type TextInputs = ??? def process(img: TextInputs) -&gt; None: _process(img) </code></pre> <p>Where <code>_process</code> is defined in a 3rd party module and imported, how would I define <code>TextInputs</code> to be the type of the <code>img</code> argument to <code>_process</code>?</p>
<python><python-typing>
2025-07-09 19:09:22
0
2,075
Camden Narzt
79,696,122
14,305,251
UnicodeDecodeError when connecting to PostgreSQL using psycopg2, despite UTF-8 encoding everywhere
<p>I'm trying to connect to a local PostgreSQL database using psycopg2 in Python. Here's the code I'm using:</p> <pre><code>import psycopg2 params = { 'dbname': 'database_name', 'user': 'user_name', 'password': 'mypassword', 'host': 'localhost', } for k, v in params.items(): print(f&quot;{k}: {v} (type={type(v)}, encoded={v.encode('utf-8')})&quot;) try: conn = psycopg2.connect(**params) print(&quot;Connexion OK&quot;) conn.close() except Exception as e: print(&quot;Connexion Erro&quot;) print(type(e), e) </code></pre> <p>The printed output confirms that all parameters are strings and UTF-8 encoded. However, I still get the following error:</p> <pre><code> Connexion Erro &lt;class 'UnicodeDecodeError'&gt; 'utf-8' codec can't decode byte 0xe9 in position 103: invalid continuation byte </code></pre> <p>I also checked the server and client encoding on PostgreSQL using:</p> <pre><code> SHOW server_encoding; SHOW client_encoding; </code></pre> <p>Both return UTF8.</p> <p>Given that all inputs are UTF-8 and the database is configured for UTF-8, I don't understand why this error occurs.</p> <p>Has anyone encountered this before or has an idea of where this byte 0xe9 might come from? What else should I check?</p> <p>EDIT : I’m running the code on Windows 11 (64-bit) with Python 3.11.9.</p> <p>The default language of my Windows system is French.</p> <p>I do not run the Django application in Chrome. I only use Chrome to view the pages generated by Django views.</p> <p>I have other browsers available for testing the application. I mentioned Chrome because after a browser update, it triggered an automatic reboot of my PC, which caused an abrupt shutdown of all servers (Django, database, etc.). The issues started right after this event, whereas everything was working fine before.</p> <p>The code snippet is part of the Django application. However, even when isolating and running this code on its own, the problem persists. 
Based on my observations, the issue seems to be specifically related to the database connection.</p>
<python><django><database><postgresql>
2025-07-09 18:50:19
0
379
the star
79,696,110
1,747,834
Why is watchdog event-handler trying to open every new file?
<p>The simple directory-watching script is below:</p> <pre class="lang-py prettyprint-override"><code>import logging import sys import time from watchdog.events import FileSystemEventHandler from watchdog.observers import Observer class MyEventHandler(FileSystemEventHandler): def __init__(self, observer, log): self.log = log self.observer = observer def on_created(self, event): self.log.info(&quot;%s&quot;, event) def main(argv): logging.basicConfig(format = '%(asctime)s %(levelname)-8s %(message)s', datefmt = '%Y-%m-%d %H:%M:%S', level = logging.DEBUG) log = logging.getLogger(__name__) path = argv[1] observer = Observer() log.debug('Created observer %s', observer) event_handler = MyEventHandler(observer, log) observer.schedule(event_handler, path, recursive = False) observer.start() observer.join() return 0 if __name__ == &quot;__main__&quot;: sys.exit(main(sys.argv)) </code></pre> <p>Started as something like <code>./fpounce.py /tmp</code> it works, but the observer-thread crashes as soon as a file is created in the watched directory (<code>/tmp</code>) by some other user (such as the <code>www</code>):</p> <pre class="lang-none prettyprint-override"><code>2025-07-09 14:30:41 DEBUG Created observer &lt;KqueueObserver(Thread-1, initial daemon)&gt; 2025-07-09 14:30:42 INFO FileCreatedEvent(src_path='/tmp/sess_e645f55f69770d25a08b8210245510f6', dest_path='', event_type='created', is_directory=False, is_synthetic=False) Exception in thread Thread-2: Traceback (most recent call last): File &quot;/opt/lib/python3.11/threading.py&quot;, line 1045, in _bootstrap_inner self.run() File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/api.py&quot;, line 158, in run self.queue_events(self.timeout) File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/kqueue.py&quot;, line 630, in queue_events self.queue_event(FileCreatedEvent(file_created)) File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/kqueue.py&quot;, line 481, in queue_event 
self._register_kevent(event.src_path, is_directory=event.is_directory) File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/kqueue.py&quot;, line 435, in _register_kevent self._descriptors.add(path, is_directory=is_directory) File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/kqueue.py&quot;, line 215, in add self._add_descriptor(KeventDescriptor(path, is_directory=is_directory)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/opt/lib/python3.11/site-packages/watchdog/observers/kqueue.py&quot;, line 294, in __init__ self._fd = os.open(path, WATCHDOG_OS_OPEN_FLAGS) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ PermissionError: [Errno 13] Permission denied: '/tmp/sess_e645f55f69770d25a08b8210245510f6' </code></pre> <p>Why is it trying to open the newly-created files? Can I disable this somehow -- or make failures non-fatal?</p> <p><em>Update</em>: Just tested the same code on Linux -- no problem there with inotify. Must be something about kqueue watchdog-implementation in particular -- will have to file a bug-report, I suppose.</p>
<python><python-watchdog><kqueue>
2025-07-09 18:39:23
0
4,246
Mikhail T.
79,696,095
14,944,414
Ordering points that roughly follow the contour of a line to form a polygon
<p>I have a list of points (x,y). A line is drawn somewhere 'inside' these points (within the area they form). I have been trying different algorithms for ordering these points to create a polygon, with these constraints:</p> <ol> <li>The polygon is not self-intersecting.</li> <li>The resulting polygon is not necessarily convex; a convex hull will not suffice</li> <li>The polygon needs to go through all of the points. I have tried the <code>alphashape</code> concave hull algorithm, but I was not able to get it to work (it ignored some points, and I wasn't sure what to tune the <code>alpha</code> value to)</li> <li>The resulting polygon will not intersect with the drawn line. I think this is relevant to remove a lot of the ambiguity for concave polygons</li> </ol> <p>Some clarification for how the points are ordered: they roughly follow the profile of a contoured line, which is unknown and different from the drawn line. The drawn line will be intrinsically quite similar to this unspecified line however.</p> <p>I have also tried the very simplistic angle sort algorithm, which unsurprisingly does not work. 
However, running the algorithm at different points along the line gave more promising results, but I am not sure how to use the data.</p> <p><a href="https://i.sstatic.net/TR0t3OJj.png" rel="noreferrer"><img src="https://i.sstatic.net/TR0t3OJj.png" alt="Example of problem" /></a></p> <p>Above is a screenshot of the following test program I cobbled together:</p> <pre class="lang-py prettyprint-override"><code>import pygame import numpy as np from scipy.interpolate import splprep, splev import sys import random # Pygame setup pygame.init() screen = pygame.display.set_mode((1000, 1000)) pygame.display.set_caption(&quot;Smoothed Curve with Draggable Points&quot;) clock = pygame.time.Clock() # Colors WHITE = (255, 255, 255) BLUE = (0, 0, 255) RED = (255, 0, 0) GREEN = (0, 200, 0) BLACK = (0, 0, 0) drawing = False points = [] smoothed = [] normals = [] # Dot settings DOT_RADIUS = 6 DRAG_RADIUS = 10 # Dot class for draggable points class Dot: def __init__(self, pos): self.x, self.y = pos self.dragging = False def pos(self): return (int(self.x), int(self.y)) def is_mouse_over(self, mx, my): return (self.x - mx)**2 + (self.y - my)**2 &lt;= DRAG_RADIUS**2 def start_drag(self): self.dragging = True def stop_drag(self): self.dragging = False def update_pos(self, mx, my): if self.dragging: self.x, self.y = mx, my # crude test arrangement of dots.
random_dots = [Dot(tup) for tup in [(159, 459),(133, 193),(286, 481),(241, 345),(411, 404),(280, 349),(352, 471),(395, 361),(85, 390),(203, 321),(41, 281),(58, 348),(175, 275),(75, 185),(385, 443),(44, 219),(148, 229),(215, 477),(338, 339),(122, 430)]] def downsample(points, step=2): return points[::step] def smooth_curve(points, smoothness=150): if len(points) &lt; 4: return [], [] points = downsample(points, step=2) x, y = zip(*points) try: tck, u = splprep([x, y], s=100.0, k=3) u_new = np.linspace(0, 1, smoothness) # Evaluate curve points x_vals, y_vals = splev(u_new, tck) # Derivatives: dx/du, dy/du dx_du, dy_du = splev(u_new, tck, der=1) gradients = np.array(dy_du) / np.array(dx_du) # dy/dx return list(zip(x_vals, y_vals)), list(zip(dx_du, dy_du)) except: return [], [] def draw_normals(screen, smoothed, derivatives, spacing=10, length=30): for i in range(0, len(smoothed), spacing): x, y = smoothed[i] dx, dy = derivatives[i] # Normal vector is (-dy, dx) nx, ny = -dy, dx norm = np.hypot(nx, ny) if norm == 0: continue nx /= norm ny /= norm x1 = x - nx * length / 2 y1 = y - ny * length / 2 x2 = x + nx * length / 2 y2 = y + ny * length / 2 pygame.draw.line(screen, GREEN, (x1, y1), (x2, y2), 2) def generate_random_dots(n=20, w=800, h=600): return [Dot((random.randint(0, w), random.randint(0, h))) for _ in range(n)] # Main loop dragged_dot = None running = True while running: screen.fill(WHITE) mx, my = pygame.mouse.get_pos() for event in pygame.event.get(): if event.type == pygame.QUIT: running = False elif event.type == pygame.MOUSEBUTTONDOWN: drawing = True points = [] smoothed = [] normals = [] # Check for draggable dot for dot in random_dots: if dot.is_mouse_over(mx, my): dragged_dot = dot dot.start_drag() drawing = False break elif event.type == pygame.MOUSEBUTTONUP: drawing = False if dragged_dot: dragged_dot.stop_drag() dragged_dot = None else: smoothed, normals = smooth_curve(points) elif event.type == pygame.MOUSEMOTION: if dragged_dot: 
dragged_dot.update_pos(mx, my) elif drawing: points.append((mx, my)) elif event.type == pygame.KEYDOWN: if event.key == pygame.K_p: random_dots = generate_random_dots() if event.key == pygame.K_l: for i in random_dots: print(f&quot;({i.x}, {i.y})&quot;) # Draw random draggable dots for dot in random_dots: pygame.draw.circle(screen, BLACK, dot.pos(), DOT_RADIUS) # Draw raw stroke if len(points) &gt; 1: pygame.draw.lines(screen, BLUE, False, points, 1) # Draw smoothed curve if len(smoothed) &gt; 1: pygame.draw.lines(screen, RED, False, smoothed, 3) # Draw normals if smoothed and normals: draw_normals(screen, smoothed, normals) pygame.display.flip() clock.tick(60) pygame.quit() sys.exit() </code></pre> <p>What algorithm could I use to accomplish this?</p> <h2>EDIT:</h2> <p>Some details about the diagram. The black dots are the input points to form the polygon. The red is the drawn line, which is a smoothed version of the blue one. The green lines show some equidistant test points on the drawn line. The required output order is the order in which the points have to be joined up to form the polygon, i.e. A,B,C,D for a rectangle. The input order of the points is not ordered in any meaningful way.</p>
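Since the drawn line roughly follows the point cloud's spine, one workable ordering (an assumption-laden sketch, not the only possible algorithm): project every point onto the polyline, record its arc-length parameter <code>t</code> and which side of the line it falls on, then walk one side in increasing <code>t</code> and the other side back in decreasing <code>t</code>. Because each side is sorted along the line and the two sides never interleave, the result is a polygon that hugs the line:

```python
import math


def order_points_by_line(line, pts):
    """Order scattered points into a polygon hugging a polyline.

    `line` is the drawn polyline [(x, y), ...]; `pts` are the dots as
    tuples (assumed distinct).
    """

    def project(p):
        best = (float("inf"), 0.0, 0.0)  # (squared distance, t along line, side)
        t0 = 0.0
        for (ax, ay), (bx, by) in zip(line, line[1:]):
            dx, dy = bx - ax, by - ay
            seg = math.hypot(dx, dy)
            seg2 = seg * seg
            u = 0.0 if seg2 == 0 else max(0.0, min(1.0, ((p[0] - ax) * dx + (p[1] - ay) * dy) / seg2))
            cx, cy = ax + u * dx, ay + u * dy
            d2 = (p[0] - cx) ** 2 + (p[1] - cy) ** 2
            if d2 < best[0]:
                side = dx * (p[1] - ay) - dy * (p[0] - ax)  # sign of the cross product
                best = (d2, t0 + u * seg, side)
            t0 += seg
        return best

    proj = {p: project(p) for p in pts}
    above = sorted((p for p in pts if proj[p][2] >= 0), key=lambda p: proj[p][1])
    below = sorted((p for p in pts if proj[p][2] < 0), key=lambda p: -proj[p][1])
    return above + below
```

Feed it the smoothed curve and the dot positions; very concave clusters or ambiguous projections may still need local fixes, but keeping each side of the line together in arc-length order is what supplies the &quot;polygon does not cross the line&quot; behavior.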
<python><algorithm><polygon><concave>
2025-07-09 18:28:39
3
307
Leo
79,696,021
13,682,559
Pyright false positive when implementing a protocol
<p>This MRE illustrates my problem:</p> <pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass from typing import Protocol class Child(Protocol): val: float class Parent(Protocol): sub: Child @dataclass class Child1(Child): val: float @dataclass class Parent1(Parent): sub: Child1 # &lt;- Pyright complains here </code></pre> <p>To my eyes, the <code>Parent1</code>-class appears as a valid implementation of the <code>Parent</code>-protocol. I plan to use this a lot.</p> <p><a href="https://mypy-play.net/?mypy=latest&amp;python=3.12&amp;gist=793a7496c7789e6211430a25bab446a3" rel="nofollow noreferrer">mypy approves this</a> while <a href="https://pyright-play.net/?code=GYJw9gtgBAJghgFzgYwDZwM4YKYagSwgAcwQFZEV0sAoUSKBATyPwDsBzA408gBXAIwyMKhri0mPAGEAFvlQwAFALBCRqAJQAuGlH1QAbnFTaowVGEQTqePnBDY2CFYOGidegxgCuAIzM5BRhxGgABeCRJWmiZeUUARiUgxU8DIxMzCysEcQjKWJpYqHtHZyTSpwQ0739A%2BJgEmiA" rel="nofollow noreferrer">pyright does not</a>. The latter complains:</p> <pre class="lang-none prettyprint-override"><code>&quot;sub&quot; overrides symbol of same name in class &quot;Parent&quot; Variable is mutable so its type is invariant Override type &quot;Child1&quot; is not the same as base type &quot;Child&quot; (reportIncompatibleVariableOverride) </code></pre> <p>I do not understand. What has mutability to do with it? Playing around with <code>frozen=True</code> does not change anything. And why does mypy comply?</p> <p>Can I do something about it? Adjust the code or pacify pyright somehow?</p>
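Pyright's complaint is about variance: a plain protocol attribute is mutable, so an implementation must match its type invariantly (code holding a <code>Parent</code> could otherwise assign some other <code>Child</code> into <code>sub</code>, which <code>Parent1</code> cannot accept). The usual fix, if <code>sub</code> only needs to be readable through the protocol, is to declare it as a read-only property on the protocol; narrowing to <code>Child1</code> is then allowed. A sketch (typing is structural, so the explicit protocol base classes can be dropped too):

```python
from dataclasses import dataclass
from typing import Protocol


class Child(Protocol):
    val: float


class Parent(Protocol):
    # Read-only on the protocol side: implementations may narrow the type.
    @property
    def sub(self) -> Child: ...


@dataclass
class Child1:
    val: float


@dataclass
class Parent1:
    sub: Child1  # satisfies the read-only `sub` property structurally


p: Parent = Parent1(Child1(1.0))  # accepted by pyright and mypy
```

If code really must assign to <code>sub</code> through the <code>Parent</code> interface, the invariance complaint is legitimate and <code>Parent1.sub</code> would have to stay typed as <code>Child</code>.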
<python><python-typing><mypy><pyright>
2025-07-09 17:27:51
1
1,108
Durtal
79,695,981
5,348,895
How to assign subgraph IDs based on weakly connected user pairs, but split when no shared connection exists
<p>I'm working with a dataset where I want to assign a sub_graph ID to user interactions. Each row in the data represents a directed edge between an actor_user_id and a related_user_id.</p> <p>I want to compute a sub_graph ID such that:</p> <p>Rows belong to the same sub_graph if they are connected (even indirectly) through shared users.</p> <p>However, if two edges only share the same actor_user_id, they should not be grouped unless their related_user_ids are also connected.</p> <p>Here’s a simplified example:</p> <pre><code>import pandas as pd import networkx as nx import matplotlib.pyplot as plt # Dummy data edges = pd.DataFrame({ 'properties_id': ['A', 'A', 'A', 'A'], 'global_journey_id': ['A1', 'A1', 'A1', 'A1'], 'actor_user_id': ['abc', 'abc', 'pat', 'abc'], 'related_user_id': ['def', 'efg', 'def', 'lal'], }) </code></pre> <pre class="lang-none prettyprint-override"><code>actor_user_id related_user_id sub_graph abc def 1 pat def 1 abc efg 2 abc lal 3 </code></pre> <p>I've tried using NetworkX like this:</p> <pre><code>G = nx.DiGraph() G.add_edges_from(zip(edges['actor_user_id'], edges['related_user_id'])) components = list(nx.weakly_connected_components(G)) </code></pre> <p>But this gives me only one connected component because abc is a common node across many edges.</p> <p>Question:</p> <p>How can I build logic that ensures subgraphs are only formed when there is a true &quot;shared connection&quot;, not just a shared actor_user_id?</p> <p>Is there a way to force the graph to disconnect branches that don’t have overlapping related_user_ids?</p> <p>Any idea or trick to break down these graphs properly (even custom component grouping logic) would be super appreciated.</p> <p>Edit: Thanks to Daniel Raphael's comment below, I was able to get the proper code working, as it looks at related_user_ids &gt; actor_user_id:</p> <pre><code>import pandas as pd from collections import defaultdict, deque # Sample input data = { 'properties_id': ['A', 'A', 'A', 'A'],
'global_journey_id': ['A1', 'A1', 'A1', 'A1'], 'actor_user_id': ['abc', 'abc', 'pat', 'abc'], 'related_user_id': ['def', 'efg', 'def', 'lal'], } df = pd.DataFrame(data) # Container for results results = [] # Group by properties_id and global_journey_id grouped = df.groupby(['properties_id', 'global_journey_id']) for (prop_id, journey_id), group in grouped: # Step 1: Build directional graph: related_user_id β†’ list of actor_user_id graph = defaultdict(list) for _, row in group.iterrows(): graph[row['related_user_id']].append(row['actor_user_id']) # Step 2: Build edge list (frozenset for undirected identity) edges = [frozenset([row['actor_user_id'], row['related_user_id']]) for _, row in group.iterrows()] edge_set = set(edges) # Step 3: Traverse the directed graph to find connected edge groups visited_edges = set() edge_to_subgraph = {} subgraph_id = 1 for edge in edge_set: if edge in visited_edges: continue node1, node2 = list(edge) start_node = node2 if node2 in graph else node1 if start_node not in graph: continue queue = deque([start_node]) connected_edges = set() while queue: current = queue.popleft() for target in graph.get(current, []): candidate_edge = frozenset([current, target]) if candidate_edge in edge_set and candidate_edge not in visited_edges: visited_edges.add(candidate_edge) connected_edges.add(candidate_edge) if target in graph: queue.append(target) for e in connected_edges: edge_to_subgraph[e] = subgraph_id subgraph_id += 1 # Step 4: Map subgraph ID to each row in the group for _, row in group.iterrows(): edge_key = frozenset([row['actor_user_id'], row['related_user_id']]) subgraph = edge_to_subgraph.get(edge_key, None) results.append({ 'properties_id': row['properties_id'], 'global_journey_id': row['global_journey_id'], 'actor_user_id': row['actor_user_id'], 'related_user_id': row['related_user_id'], 'sub_graph': subgraph }) # Final result final_df = pd.DataFrame(results) final_df = final_df.sort_values(by=['properties_id', 'global_journey_id', 
'sub_graph']) print(final_df) </code></pre>
<python><pandas><networkx>
2025-07-09 16:41:24
1
376
patrick
79,695,854
113,158
How do I "sign" a JWT using HMAC-SHA256 with a public key (RSA or EC) in order to trigger algorithm confusion?
<p>I am trying to understand exactly how exactly it is possible to trigger JWT algorithm confusion, as described in <a href="https://redfoxsec.com/blog/jwt-deep-dive-into-algorithm-confusion/" rel="nofollow noreferrer">https://redfoxsec.com/blog/jwt-deep-dive-into-algorithm-confusion/</a> - in section &quot;How does Algorithm Confusion Occur&quot;:</p> <blockquote> <h3>How does Algorithm Confusion Occur</h3> <p>Think of 2 types of encryption algorithms:</p> <ul> <li><strong>HMAC(Symmetric):</strong><br /> There’s only 1 key involved, AKA the secret<br /> The JWT will be signed and validated using the same secret key.</li> <li><strong>RSA(Asymmetric):</strong><br /> There’s 2 keys involved. A private key, and a public key.<br /> The JWT will be signed using a private key, and the signature will be validated using a public key.</li> </ul> <p>A scenario where algorithm confusion is exploited can be as follows:</p> <ol> <li><p>An attacker authenticates to a web server with valid credentials.</p> </li> <li><p>The web server generates a JWT, and signs it with an RSA private key, only known by the server.</p> </li> <li><p>This JWT is then passed on to the attacker.</p> </li> <li><p>The public key is exposed on an endpoint such as /.well-known/jwks.json.</p> </li> <li><p>The attacker now:</p> <ul> <li>Downloads the public key.</li> <li>Tampers with the JWT.</li> <li>Signs the JWT with the exposed public key.</li> <li>Changes the encryption algorithm to HMAC.</li> <li>Lastly, Sends the request across.</li> </ul> </li> <li><p>Now, in a normal scenario, where the encryption algorithm is RSA, the server reads the signature, tries to verify it using the public key, but fails, because the public key can only verify signatures that have been generated using the private key (asymmetric encryption).</p> </li> <li><p>However, since in this scenario the attacker had modified the encryption algorithm to HMAC (a symmetrical encryption), the server attempted and succeeded in verifying 
signature using its public key.</p> </li> <li><p>This happens, because HMAC is a symmetrical encryption, and the key that is used to sign the JWT, is the same key, that will be used to validate it.</p> </li> <li><p>And because the new tampered JWT has been signed with the exposed public key, and the algorithm in use is HMAC, the JWT gets validated.</p> </li> </ol> </blockquote> <p>I am trying to understand how exactly the attacker can &quot;sign the JWT with the exposed public key&quot; (I want to write a unit test to check that my implementation logic is not vulnerable).</p> <p>I am trying to implement this logic in Python, using the <code>jwcrypto</code> library.</p> <p>However, when I try to write how it would work:</p> <pre class="lang-py prettyprint-override"><code>from jwcrypto.jwt import JWT ec_key = load_jwk(&quot;sign-ec-p256&quot;) # This is a function that will load a jwcrypto.jwk.JWK object of type `EC` with only the public part jwt_object = JWT(header={&quot;alg&quot;:&quot;HS256&quot;, &quot;kid&quot;: &quot;sign-ec-p256&quot;}, claims={&quot;iss&quot;: &quot;https://my-domain.com&quot;}) jwt_object.make_signed_token(ec_key) </code></pre> <p>I get a stack trace in the library, because it tries to load the private part, which is not present and thus triggers an error before I can ever get a badly signed JWT.</p> <p>From what I understand, it seems to me that:</p> <ul> <li>the described type of algorithm confusion can only occur if the key is transmitted to the API in a serialized form, and if the deserialization of the key object is only done by trusting the <code>alg</code> field in JWT;</li> <li>the <code>jwcrypto</code> library avoids that issue completely because its API requires that key objects be already deserialized before passing them for usage in signature/verification functions;</li> </ul> <p>Is that understanding correct?
That &quot;JWT algorithm confusion&quot; is related to letting the JWT <code>alg</code> value direct the deserialization of a key represented in the program as a string?</p>
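That reading matches how the attack works in practice, and the forging step needs no cooperating JWT library at all: HS256 is just HMAC-SHA256 over <code>base64url(header) + "." + base64url(claims)</code>, with the public key's raw bytes (e.g. the PEM file contents, exactly as the vulnerable server would read them) used as the HMAC secret. A stdlib sketch suitable for a unit test (the PEM below is a placeholder, not a real key; a real test would feed in the server's actual public key bytes):

```python
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")


# Placeholder for the exposed public key -- the attack uses the exact
# bytes the server would load (PEM file, serialized JWK, ...).
public_pem = b"-----BEGIN PUBLIC KEY-----\nMFkwEwYHKoZI...\n-----END PUBLIC KEY-----\n"

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"iss": "https://my-domain.com"}).encode())
signing_input = header + b"." + claims
signature = b64url(hmac.new(public_pem, signing_input, hashlib.sha256).digest())
forged_token = (signing_input + b"." + signature).decode()
print(forged_token)
```

So the confusion is purely a server-side verification bug: it only bites if the verifier holds the key as raw bytes and lets the token's <code>alg</code> choose the algorithm. An API like jwcrypto's, where the key is deserialized into a typed JWK object before any signing or verification, refuses exactly as observed — and that refusal is what the unit test can assert.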
<python><jwt><jwcrypto>
2025-07-09 15:00:56
1
16,863
Jean Hominal
79,695,783
20,591,261
Ranking categories by count within groups in Polars
<p>I have a Polars DataFrame with months, categories, and IDs. I want to rank categories by their frequency within each month, then pivot the results to show which category held each rank position in each month.</p> <p>My dataframe:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl df = pl.DataFrame( { &quot;ID&quot;: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], &quot;MONTH&quot;: [1, 2, 1, 2, 1, 2, 2, 2, 1, 1, 1, 2, 2, 2, 1, 1, 2], &quot;CATEGORY&quot;: [ &quot;C&quot;, &quot;B&quot;, &quot;A&quot;, &quot;C&quot;, &quot;B&quot;, &quot;A&quot;, &quot;C&quot;, &quot;C&quot;, &quot;A&quot;, &quot;B&quot;, &quot;A&quot;, &quot;A&quot;, &quot;C&quot;, &quot;C&quot;, &quot;A&quot;, &quot;B&quot;, &quot;B&quot;, ], } ) </code></pre> <p>Desired Output:</p> <pre><code>shape: (3, 3) β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ PLACE ┆ 1 ┆ 2 β”‚ β”‚ --- ┆ --- ┆ --- β”‚ β”‚ i64 ┆ str ┆ str β”‚ β•žβ•β•β•β•β•β•β•β•ͺ═════β•ͺ═════║ β”‚ 1 ┆ A ┆ C β”‚ β”‚ 2 ┆ B ┆ A β”‚ β”‚ 3 ┆ C ┆ B β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Current Working Solution:</p> <pre class="lang-py prettyprint-override"><code>( df.group_by(pl.col(&quot;MONTH&quot;), pl.col(&quot;CATEGORY&quot;)) .agg(pl.col(&quot;ID&quot;).len().alias(&quot;COUNT&quot;)) .sort(by=[&quot;MONTH&quot;, &quot;COUNT&quot;], descending=True) .with_columns(pl.lit(1).alias(&quot;aux&quot;)) .with_columns(pl.col(&quot;aux&quot;).cum_sum().over([&quot;MONTH&quot;]).alias(&quot;PLACE&quot;)) .pivot(index=&quot;PLACE&quot;, values=&quot;CATEGORY&quot;, on=&quot;MONTH&quot;, sort_columns=True) ) </code></pre> <p>This works correctly, but I feel like there might be a more elegant or direct way to achieve this ranking and pivoting operation in Polars. Is there a simpler approach?</p>
<python><dataframe><pivot><python-polars>
2025-07-09 14:17:01
1
1,195
Simon
79,695,735
13,801,302
Dynamic and scalable MCP Server infrastructure in docker
<p>I have the following structure of my MCP Server project</p> <p><em>Folder structure</em></p> <pre><code>project-root/ β”‚ β”œβ”€β”€ prompts/ β”‚ β”œβ”€β”€ resources/ β”‚ β”œβ”€β”€ tools/ β”‚ └── add.py β”‚ β”œβ”€β”€ mcp_server.py β”‚ └── main.py </code></pre> <p><em>mcp_server.py</em></p> <pre class="lang-py prettyprint-override"><code>from fastmcp import FastMCP mcp = FastMCP( name=&quot;MCP Server&quot;, version=&quot;1.0.0&quot; ) </code></pre> <p><em>add.py</em></p> <pre class="lang-py prettyprint-override"><code>from mcp_server import mcp @mcp.tool() async def add(a: int, b: int) -&gt; int: &quot;&quot;&quot;Add two numbers&quot;&quot;&quot; return a + b </code></pre> <p>I want to create a dynamic and scalable MCP infrastructure. The <strong>goal</strong> is to have an MCP server and add MCP tools dynamically. I propose to run the MCP server and each MCP tool in an individual Docker container, to have the possibility to deploy it in a Kubernetes cluster for scaling purposes. For DevOps purposes, the developer should create their tools and commit/push the code to a special branch like <code>mcp_tool_branch</code>, and a Git action creates/updates the related Docker container.</p> <p><a href="https://i.sstatic.net/nEdlboPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nEdlboPN.png" alt="enter image description here" /></a></p> <p>But step back, <strong>the challenge is</strong>: how to deploy an MCP server in Docker and add the tools to the MCP server? It's easy to deploy each individually, but I don't know how to register the tools on the server if each of the components is in its own Docker container.</p> <p>Today, I use the <code>@mcp.tool()</code> wrapper in my tools, but the <code>mcp</code> is the server instance. In the described case above, how does the tool know the MCP server for registration? How can it work?</p>
<python><docker><agent><model-context-protocol>
2025-07-09 13:45:34
1
621
Christian01
79,695,675
12,439,683
How to forcefully terminate a running Python test in VSCode
<p>Closely related to my question is <a href="https://stackoverflow.com/q/71803409/12439683">VSCode: how to interrupt a running Python test?</a>, however in my case the standard method of pressing the square in the <em>Test Results</em> tab does not work.</p> <p>What is different in my tests?</p> <p>I have code that via multiprocessing (and <code>os.fork</code>) executes multiple sub processes. There are two issues that cause the tests to be stuck.<br /> One, because of the forking (involving JAX) a deadlock happens when several tests are executed in one set; this seems to be a limitation of unittest cleanup.<br /> Secondly, I might have a <code>breakpoint</code> in code that is executed in these other processes. Additionally, a custom debugger is in place of the standard <code>pdb</code> that would allow debugging and hooking into these processes when executed normally - but not when using tests. To be precise, I am using <a href="https://docs.ray.io/en/latest/ray-observability/ray-distributed-debugger.html" rel="nofollow noreferrer">ray's distributed debugger</a> and the last message I get is that the debugger is now active. This prevents the tests from being stopped in the normal way via the square button.</p> <p>I tried to kill some running <code>python</code> processes, but there is a long list of processes that are running and I don't know which is the correct one to select.</p> <p>Is there another integrated method that allows me to immediately terminate a running test that will work in my case? For example, shutting down and restarting the test console (process).</p>
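When the stop button is ignored, the remaining option is killing the right processes from outside. Instead of guessing among all <code>python</code> processes, match on the command line — stuck test runs contain distinctive substrings (<code>pytest</code>, <code>unittest</code>, the test file's name, or VSCode's <code>debugpy</code> adapter). A hedged stdlib sketch of such a terminal helper (not a VSCode feature):

```python
import os
import signal
import subprocess


def kill_matching(pattern: str) -> int:
    """SIGKILL every process whose full command line matches `pattern`.

    Returns how many processes were signalled.  Pick a pattern specific
    enough (e.g. "debugpy" or your test module's name) not to hit
    unrelated processes.
    """
    out = subprocess.run(["pgrep", "-f", pattern], capture_output=True, text=True)
    killed = 0
    for pid in out.stdout.split():
        try:
            os.kill(int(pid), signal.SIGKILL)
            killed += 1
        except (ProcessLookupError, PermissionError):
            pass  # already gone, or not ours to kill
    return killed
```

Forked children inherit the parent's command line, so the same pattern usually reaches the <code>os.fork</code> subprocesses as well.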
<python><visual-studio-code><debugging><python-unittest><ray>
2025-07-09 13:08:26
1
5,101
Daraan
79,695,585
4,505,998
Matplotlib Engformatter base 2
<p>I am plotting some running time vs. data size. The x axis is logarithmic in base 2, but when I try to use <a href="https://matplotlib.org/stable/api/ticker_api.html#matplotlib.ticker.EngFormatter" rel="nofollow noreferrer">EngFormatter</a>, I get the values in base 10.</p> <p>Is it possible to get the values in base 2? Like 32k instead of 33k and 64k instead of 66k.</p> <p>Here I provide some sample code and figure:</p> <pre class="lang-py prettyprint-override"><code>sizes = np.array([2**i for i in range(1, 18)]) times = np.array([2**i*np.random.uniform(0.4, 1.6) for i in range(1, 18)]) # Example times in milliseconds fig, ax = plt.subplots(figsize=(10, 6)) plt.plot(sizes, times, marker='o') ax.set_xscale('log', base=2) ax.set_xlabel('size') ax.set_ylabel('time') ax.xaxis.set_major_formatter(mticker.EngFormatter(places=0)) # TODO: base 2 </code></pre> <p>Resulting figure: <a href="https://i.sstatic.net/cWFQidag.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWFQidag.png" alt="enter image description here" /></a></p>
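<code>EngFormatter</code> only knows 1000-based SI prefixes, but a small <code>FuncFormatter</code> can do the 1024-based version so that 2<sup>15</sup> renders as <code>32k</code>. A sketch — the prefix letters are a choice; swap in <code>Ki</code>/<code>Mi</code> for strict IEC style:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this when plotting interactively
import matplotlib.pyplot as plt
import matplotlib.ticker as mticker
import numpy as np


def binary_eng(x, pos=None):
    """1024-based prefix formatting: 32768 -> '32k', 2**20 -> '1M'."""
    for prefix in ["", "k", "M", "G", "T"]:
        if abs(x) < 1024:
            return f"{x:g}{prefix}"
        x /= 1024
    return f"{x:g}P"


sizes = np.array([2**i for i in range(1, 18)])
times = sizes * np.random.uniform(0.4, 1.6, size=sizes.shape)

fig, ax = plt.subplots(figsize=(10, 6))
ax.plot(sizes, times, marker="o")
ax.set_xscale("log", base=2)
ax.xaxis.set_major_formatter(mticker.FuncFormatter(binary_eng))
```

With the base-2 log scale, the major ticks land on powers of two, so every label comes out as a round 1k/2k/…/32k/64k value instead of EngFormatter's 33k/66k.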
<python><matplotlib>
2025-07-09 12:06:15
1
813
David DavΓ³
79,695,557
6,930,340
How to resize/fit Altair chart in Quarto dashboard container?
<p>The Quarto <a href="https://quarto.org/docs/dashboards/data-display.html#plots" rel="nofollow noreferrer">docs</a> state that Altair charts resize themselves to fit their container within dashboards.</p> <p>My experience is that this is true for <code>alt.Chart()</code>, but not if I concatenate multiple charts via <code>alt.hconcat()</code>.</p> <pre class="lang-markdown prettyprint-override"><code>--- title: &quot;Test&quot; format: dashboard --- ```{python} #| title: alt.chart() import altair as alt from vega_datasets import data source = data.iowa_electricity() alt.Chart(source).mark_area(opacity=0.3).encode( x=&quot;year:T&quot;, y=alt.Y(&quot;net_generation:Q&quot;).stack(None), color=&quot;source:N&quot; ) ``` ```{python} #| title: Chart concatenation import altair as alt from vega_datasets import data source = data.iowa_electricity() chart1 = alt.Chart(source).mark_area(opacity=0.3).encode( x=&quot;year:T&quot;, y=alt.Y(&quot;net_generation:Q&quot;).stack(None), color=&quot;source:N&quot; ) chart2 = alt.Chart(source).mark_area(opacity=0.3).encode( y=&quot;year:T&quot;, x=alt.Y(&quot;net_generation:Q&quot;).stack(None), color=&quot;source:N&quot; ) alt.hconcat(chart1, chart2) ``` </code></pre> <p><a href="https://i.sstatic.net/Tx7ORNJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tx7ORNJj.png" alt="enter image description here" /></a></p> <p>Is there a way to fit a concatenated chart to its container?</p>
<python><altair><quarto>
2025-07-09 11:40:12
0
5,167
Andi
79,695,544
922,712
Installing an older version of selenium with Python3 on Ubuntu with an externally-managed-environment
<p>I am on the following Ubuntu version on WSL</p> <pre><code>Distributor ID: Ubuntu Description: Ubuntu 24.04.2 LTS Release: 24.04 Codename: noble </code></pre> <p>I am running Python 3.12.3</p> <p>I need to install selenium 4.2.0 or 4.2.1</p> <p>I have admin rights on the machine</p> <p>When I try to install the particular version</p> <p>$ sudo pip install selenium=4.2.0</p> <pre><code>error: externally-managed-environment Γ— This environment is externally managed ╰─&gt; To install Python packages system-wide, try apt install python3-xyz, where xyz is the package you are trying to install. </code></pre> <p>(I am pasting only the relevant part of the error message)</p> <p>So next I tried the same using apt-get</p> <p>$ sudo apt-get install python3-selenium=4.2.0</p> <pre><code>Package python3-selenium is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Version '4.2.0' for 'python3-selenium' was not found </code></pre> <p>How do I get this version for the python3-selenium package?</p> <p>The Selenium website has the 4.2.0 package, but then I would have to install it with <code>pip</code>, which again gives the managed-environment error.</p> <p>So how do I do this?</p>
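A sketch of the usual route on an externally-managed (PEP 668) system: pin the version inside a virtual environment instead of the system interpreter. Note the pip version specifier is <code>==</code>, not <code>=</code>; paths here are just examples:

```shell
# (if venv is missing: sudo apt install python3.12-venv)
python3 -m venv "$HOME/selenium-env"
. "$HOME/selenium-env/bin/activate"
pip install selenium==4.2.0
python -c 'import selenium; print(selenium.__version__)'
```

If it truly must go into the system interpreter, pip's escape hatch is <code>sudo pip install --break-system-packages selenium==4.2.0</code>, at the risk of conflicting with apt-managed packages; apt itself only ships the single version Debian/Ubuntu packaged, so pinning arbitrary upstream versions through apt will not work.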
<python><python-3.x><ubuntu><selenium-webdriver><installation>
2025-07-09 11:33:31
1
14,081
user93353
79,695,261
2,083,756
Running MCP server from a pywebview app on a different thread after packaging
<p>I have a pywebview app and I need to run an MCP server in the background. <br/> I am using multiprocessing. <br/> In a Python environment, everything works fine. <br/> I want to package it as an exe, and I am using PyInstaller for that. <br/> The packaging process completes without error. <br/> My issue is that when I run the exe, the process is respawned infinitely, meaning that the main window keeps being opened in new instances of the app. How can I stop it from respawning infinitely?</p> <p>My MCP file:</p> <pre><code>mcp = FastMCP(&quot;resume_fetcher&quot;)

@mcp.tool()
async def get_files():
    &quot;&quot;&quot; &quot;&quot;&quot;
    return 'using tool'

# Run the server with streamable-http transport
class MCPRunner:
    def run_mcp(self):
        # Set server configuration through the settings property
        mcp.settings.mount_path = &quot;/mcp&quot;
        mcp.settings.port = 8765
        mcp.settings.host = &quot;127.0.0.1&quot;

        # Run the server with streamable-http transport
        logging.debug(f&quot;Starting MCP server at http://{mcp.settings.host}:{mcp.settings.port}{mcp.settings.mount_path}&quot;)
        mcp.run(transport=&quot;streamable-http&quot;)
</code></pre> <p>My pywebview app:</p> <pre><code>class CommandsAutomatorApi:
    def init_mcp(self, mcp_runner):
        self.mcp_process = multiprocessing.Process(target=mcp_runner.run_mcp)
        self.mcp_process.start()

    def __del__(self):
        &quot;&quot;&quot;Cleanup method to terminate the MCP process when the agent is destroyed&quot;&quot;&quot;
        if hasattr(self, 'mcp_process') and self.mcp_process.is_alive():
            self.mcp_process.terminate()
            self.mcp_process.join()


def main():
    api = CommandsAutomatorApi()
    mcp_runner = MCPRunner()
    api.init_mcp(mcp_runner)
    window = webview.create_window(
        'Commands Automator',
        'ui/commands_automator.html',
        js_api=api,
        width=1000,
        height=800
    )
    webview.start(icon='ui/resources/Commands_Automator.ico')


if __name__ == '__main__':
    main()
</code></pre>
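One thing I've read since (not yet verified in my setup) is that a frozen executable re-executes the entry point in every spawned child unless <code>multiprocessing.freeze_support()</code> is the first call under the main guard. A minimal sketch of the pattern, with a placeholder in place of my MCP runner:

```python
import multiprocessing


def run_server():
    # Stand-in for mcp_runner.run_mcp in the real app
    print("server process started")


def main():
    process = multiprocessing.Process(target=run_server)
    process.start()
    process.join()


if __name__ == '__main__':
    # No-op in a normal Python run, but in a PyInstaller exe this stops
    # spawned children from re-running main() and opening new windows
    multiprocessing.freeze_support()
    main()
```

Whether pywebview needs any extra handling on top of this, I'm not sure.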
<python><pyinstaller><model-context-protocol>
2025-07-09 07:44:36
0
306
Moutabreath
79,695,252
6,936,582
Position an axis at a point location
<p>I have a plot with a line graph. I need to place a new axis at a point/coordinate and plot a pie chart on the new axis so the pie is centered on the point.</p> <pre><code>import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 3))
ax.set_xlim(0,16)
ax.set_ylim(8,12)

#Plot a sine wave
x = np.arange(0, 5*np.pi, 0.1)
y = np.sin(x)+10
ax.plot(x, y, color=&quot;blue&quot;)

#Plot a red point at x=12, y=10
ax.plot(12,10,marker=&quot;o&quot;, color=&quot;red&quot;)

#Add a new axes to the plot
#Normalize the points coordinates to range between 0 and 1
x_norm = (12-0)/(16-0) #0.75
y_norm = (10-8)/(12-8) #0.5

#Add an ax at the normalized coordinates
left=0.75
bottom=0.5
width=0.1
height=0.1
sub_ax = fig.add_axes(rect=(left, bottom, width, height))
</code></pre> <p><a href="https://i.sstatic.net/41WBkWLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/41WBkWLj.png" alt="enter image description here" /></a></p> <pre><code>sub_ax.pie((0.2,0.3,0.5))
</code></pre> <p>The pie is centered on the new axis's center. I can't figure out how to get it centered on the point.</p> <p><a href="https://i.sstatic.net/fz23Tmm6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fz23Tmm6.png" alt="enter image description here" /></a></p>
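Following up on my own attempt: going through the transform pipeline (data coordinates to display pixels to figure fraction) and then offsetting the rect by half its width/height seems to line the pie up, since <code>rect</code> takes the lower-left corner rather than the centre. I'm not sure this is the idiomatic way, but as a sketch:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(6, 3))
ax.set_xlim(0, 16)
ax.set_ylim(8, 12)
x = np.arange(0, 5 * np.pi, 0.1)
ax.plot(x, np.sin(x) + 10, color="blue")
ax.plot(12, 10, marker="o", color="red")

# Data point -> display pixels -> figure fraction (0..1)
fig_x, fig_y = fig.transFigure.inverted().transform(
    ax.transData.transform((12, 10)))

width = height = 0.1
# rect is (left, bottom, width, height): shift by half the size so the
# new axes, and hence the pie, is centred on the point
sub_ax = fig.add_axes((fig_x - width / 2, fig_y - height / 2, width, height))
sub_ax.pie((0.2, 0.3, 0.5))
```

Unlike my manual normalization against the data limits, the transforms account for the axes not filling the whole figure.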
<python><matplotlib>
2025-07-09 07:36:47
1
2,220
Bera
79,695,194
2,081,568
Find field throwing error in Python Dataclass conversion
<p>I'm trying to convert a JSON array to a Python list of typed objects. It's data from Teltonika FOTA.</p> <p>The call <code>result_list = fromlist(FotaDevice, intermediate_list)</code> is failing with the error message <strong>TypeError: strptime() argument 1 must be str, not None</strong>.</p> <p>The imports are:</p> <pre><code>from dataclasses import dataclass
from dataclass_wizard import fromlist, asdict, DateTimePattern, JSONWizard
from typing import Optional
</code></pre> <p>The dataclass is as follows:</p> <pre><code>@dataclass
class FotaDevice(JSONWizard, debug=logging.DEBUG):
    imei: int
    serial: int
    model: str
    spec_id: int
    platform_id: int
    current_configuration: Optional[str]
    current_firmware: str
    description: Optional[str]
    company_id: int
    group_id: Optional[int]
    track: int
    first_login: Optional[DateTimePattern['%Y-%m-%d %H:%M:%S']]
    seen_at: Optional[DateTimePattern['%Y-%m-%d %H:%M:%S']]
    created_at: DateTimePattern['%Y-%m-%d %H:%M:%S']
    updated_at: Optional[DateTimePattern['%Y-%m-%d %H:%M:%S']]
    state_timestamp: Optional[DateTimePattern['%Y-%m-%d %H:%M:%S']]
    created_by: int
    updated_by: Optional[int]
    chip_id: Optional[int]
    gsm_number: Optional[str]
    gnss_version: Optional[str]
    login_count: int
    parameter_collect: Optional[str]
    last_sync: DateTimePattern['%Y-%m-%d %H:%M:%S']
    iccid: str
    imsi: str
    ble_firmware: str
    sold_at: DateTimePattern['%Y-%m-%d %H:%M:%S']
    shipment: str
    last_analytics_received_at: DateTimePattern['%Y-%m-%d %H:%M:%S']
    product_code: str
    company_name: Optional[str]
    group_name: Optional[str]
    next_task: Optional[str]
    can_adapter: Optional[str]
    obd: Optional[str]
    accessories: Optional[str]
    carrier: Optional[str]
    company: Optional[FotaCompany]
    group: Optional[FotaGroup]
    activity_status: str
    task_queue: str
    modem_version: Optional[str]
    model_platform: str

    def __str__(self):
        return f&quot;{self.imei}:{self.model}&quot;
</code></pre> <p>Here are two records lightly sanitised from the collection so you can see the source data:</p> <pre><code>[ { &quot;imei&quot;: 1111111, &quot;serial&quot;: 111,
&quot;model&quot;: &quot;FMM001&quot;, &quot;spec_id&quot;: 1, &quot;platform_id&quot;: 1, &quot;current_configuration&quot;: null, &quot;current_firmware&quot;: &quot;03.27.13.Rev.54&quot;, &quot;description&quot;: &quot;xxxxxx&quot;, &quot;company_id&quot;: 13991, &quot;group_id&quot;: null, &quot;track&quot;: 6, &quot;first_login&quot;: &quot;2022-11-09 02:08:20&quot;, &quot;seen_at&quot;: &quot;2024-02-07 22:29:28&quot;, &quot;created_at&quot;: &quot;2021-10-18 09:14:06&quot;, &quot;updated_at&quot;: &quot;2024-02-07 22:29:28&quot;, &quot;state_timestamp&quot;: null, &quot;created_by&quot;: 10, &quot;updated_by&quot;: 1, &quot;chip_id&quot;: null, &quot;gsm_number&quot;: &quot;6141111&quot;, &quot;gnss_version&quot;: null, &quot;login_count&quot;: 1, &quot;parameter_collect&quot;: null, &quot;last_sync&quot;: &quot;2024-02-01 09:55:25&quot;, &quot;iccid&quot;: &quot;8961111111111111&quot;, &quot;imsi&quot;: &quot;111111111q111&quot;, &quot;ble_firmware&quot;: null, &quot;sold_at&quot;: &quot;2021-11-10 00:00:00&quot;, &quot;shipment&quot;: &quot;LTG172&quot;, &quot;last_analytics_received_at&quot;: null, &quot;product_code&quot;: &quot;FMM001FPSP01&quot;, &quot;company_name&quot;: &quot;xxxxxxxxxxx&quot;, &quot;group_name&quot;: null, &quot;next_task&quot;: null, &quot;can_adapter&quot;: null, &quot;obd&quot;: null, &quot;accessories&quot;: null, &quot;carrier&quot;: null, &quot;company&quot;: { &quot;code&quot;: &quot;SG3414&quot;, &quot;id&quot;: 13991, &quot;name&quot;: &quot;xxxxxxxxxx&quot;, &quot;company_id&quot;: 37, &quot;deleted_at&quot;: null, &quot;created_at&quot;: &quot;2021-02-16 03:57:26&quot;, &quot;updated_at&quot;: &quot;2023-06-12 09:21:35&quot;, &quot;created_by&quot;: 1, &quot;updated_by&quot;: 11239 }, &quot;group&quot;: null, &quot;activity_status&quot;: &quot;Offline&quot;, &quot;task_queue&quot;: &quot;Empty&quot;, &quot;modem_version&quot;: null, &quot;model_platform&quot;: &quot;Fmx&quot; }, { &quot;imei&quot;: 2222222222, 
&quot;serial&quot;: 2222, &quot;model&quot;: &quot;FMM001&quot;, &quot;spec_id&quot;: 1, &quot;platform_id&quot;: 1, &quot;current_configuration&quot;: null, &quot;current_firmware&quot;: &quot;03.27.07.Rev.00&quot;, &quot;description&quot;: &quot;xxxxxxx&quot;, &quot;company_id&quot;: 13991, &quot;group_id&quot;: null, &quot;track&quot;: 6, &quot;first_login&quot;: &quot;2022-11-09 01:56:02&quot;, &quot;seen_at&quot;: &quot;2023-02-22 01:00:37&quot;, &quot;created_at&quot;: &quot;2021-10-18 09:14:06&quot;, &quot;updated_at&quot;: &quot;2023-09-18 02:18:18&quot;, &quot;state_timestamp&quot;: null, &quot;created_by&quot;: 10, &quot;updated_by&quot;: 8225, &quot;chip_id&quot;: null, &quot;gsm_number&quot;: &quot;61473914722&quot;, &quot;gnss_version&quot;: null, &quot;login_count&quot;: 1, &quot;parameter_collect&quot;: null, &quot;last_sync&quot;: &quot;2023-02-02 04:22:21&quot;, &quot;iccid&quot;: &quot;89622222222222&quot;, &quot;imsi&quot;: &quot;2222222222&quot;, &quot;ble_firmware&quot;: null, &quot;sold_at&quot;: &quot;2021-11-10 00:00:00&quot;, &quot;shipment&quot;: &quot;LTG172&quot;, &quot;last_analytics_received_at&quot;: null, &quot;product_code&quot;: &quot;FMM001FPSP01&quot;, &quot;company_name&quot;: &quot;xxxxxxx&quot;, &quot;group_name&quot;: null, &quot;next_task&quot;: null, &quot;can_adapter&quot;: null, &quot;obd&quot;: null, &quot;accessories&quot;: null, &quot;carrier&quot;: null, &quot;company&quot;: { &quot;code&quot;: &quot;SG3414&quot;, &quot;id&quot;: 13991, &quot;name&quot;: &quot;xxxxxxx&quot;, &quot;company_id&quot;: 37, &quot;deleted_at&quot;: null, &quot;created_at&quot;: &quot;2021-02-16 03:57:26&quot;, &quot;updated_at&quot;: &quot;2023-06-12 09:21:35&quot;, &quot;created_by&quot;: 1, &quot;updated_by&quot;: 11239 }, &quot;group&quot;: null, &quot;activity_status&quot;: &quot;Offline&quot;, &quot;task_queue&quot;: &quot;Empty&quot;, &quot;modem_version&quot;: null, &quot;model_platform&quot;: &quot;Fmx&quot; }, ] </code></pre> 
<p>Logically the error is on a datetime field, which is why I've wrapped them as <code>Optional</code></p> <p>I've added debugging to the class header, and I don't understand the output. It seems to imply that every field has a missing type? Newlines on <code>Field(</code> added by me:</p> <pre><code>DEBUG:dataclass_wizard:Globals before function compilation: {'cls_fields': ( Field(name='imei',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='serial',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='model',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='spec_id',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='platform_id',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='current_configuration',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 
0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='current_firmware',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='description',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='company_id',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='group_id',type=typing.Optional[int],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='track',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='first_login',type=typing.Optional[PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S')],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 
0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='seen_at',type=typing.Optional[PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S')],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='created_at',type=PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S'),default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='updated_at',type=typing.Optional[PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S')],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='state_timestamp',type=typing.Optional[PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S')],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='created_by',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 
Field(name='updated_by',type=typing.Optional[int],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='chip_id',type=typing.Optional[int],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='gsm_number',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='gnss_version',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='login_count',type=&lt;class 'int'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='parameter_collect',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='last_sync',type=PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S'),default=&lt;dataclasses._MISSING_TYPE object at 
0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='iccid',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='imsi',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='ble_firmware',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='sold_at',type=PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S'),default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='shipment',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='last_analytics_received_at',type=PatternedDT(cls=&lt;class 'datetime.datetime'&gt;, pattern='%Y-%m-%d %H:%M:%S'),default=&lt;dataclasses._MISSING_TYPE object at 
0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='product_code',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='company_name',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='group_name',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='next_task',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x00,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='accessories',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='carrier',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), 
Field(name='company',type=typing.Optional[fleetlogix.fota.FotaCompany],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='group',type=typing.Optional[fleetlogix.fota.FotaGroup],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='activity_status',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='task_queue',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='modem_version',type=typing.Optional[str],default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD), Field(name='model_platform',type=&lt;class 'str'&gt;,default=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,default_factory=&lt;dataclasses._MISSING_TYPE object at 0x000001507F56E180&gt;,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({}),kw_only=False,_field_type=_FIELD)), 'LOG': &lt;Logger dataclass_wizard (DEBUG)&gt;, 'MissingData': &lt;class 'dataclass_wizard.errors.MissingData'&gt;, 
'MissingFields': &lt;class 'dataclass_wizard.errors.MissingFields'&gt;, 'ParseError': &lt;class 'dataclass_wizard.errors.ParseError'&gt;} DEBUG:dataclass_wizard:Namespace after function compilation: {'cls_fromdict': &lt;function __create_cls_fromdict_fn__.&lt;locals&gt;.cls_fromdict at 0x000001500106D300&gt;} </code></pre> <p>What I'm trying to find is the specific field causing the error.</p>
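In the meantime I've been narrowing it down with a small probe that runs <code>strptime</code> over each datetime-patterned field of each raw record, since the traceback doesn't name the field (the field list below is just the datetime ones from my class):

```python
from datetime import datetime

DT_FORMAT = "%Y-%m-%d %H:%M:%S"
DT_FIELDS = [
    "first_login", "seen_at", "created_at", "updated_at", "state_timestamp",
    "last_sync", "sold_at", "last_analytics_received_at",
]


def find_unparsable_fields(record, fields=DT_FIELDS, fmt=DT_FORMAT):
    """Return (field, value) pairs that strptime cannot parse."""
    bad = []
    for name in fields:
        value = record.get(name)
        try:
            datetime.strptime(value, fmt)
        except (TypeError, ValueError):
            bad.append((name, value))
    return bad


record = {"created_at": "2021-10-18 09:14:06", "last_analytics_received_at": None}
print(find_unparsable_fields(record, fields=["created_at", "last_analytics_received_at"]))
# -> [('last_analytics_received_at', None)]
```

Running this over <code>intermediate_list</code> should point at the null datetimes; notably, in both sample records above <code>last_analytics_received_at</code> is <code>null</code> even though it isn't declared <code>Optional</code> in the class, which matches the <em>must be str, not None</em> message.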
<python><python-dataclasses><dataclass-wizard>
2025-07-09 06:50:40
1
1,111
Hecatonchires
79,695,154
13,825,658
Why does variance inference for type parameters include `__init__`?
<p>From the official <a href="https://typing.python.org/en/latest/spec/generics.html#variance-inference" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>The introduction of explicit syntax for generic classes in Python 3.12 eliminates the need for variance to be specified for type parameters. Instead, type checkers will infer the variance of type parameters based on their usage within a class. Type parameters are inferred to be invariant, covariant, or contravariant depending on how they are used.</p> </blockquote> <p>The problem is, type checkers include <code>__init__</code> when inferring variance: they check whether <code>__init__</code> accepts the typevar as an init parameter.</p> <pre class="lang-py prettyprint-override"><code>class MyContainer[T]:
    # This is covariant for all intended usecases
    # Inferred as invariant because of `val: T`, but I need to pass the value somehow...
    def __init__(self, val: T) -&gt; None:
        self.val = val

    def get_val(self) -&gt; T:
        return self.val


def print_float_val(container: MyContainer[float]) -&gt; None:
    print(container)


int_container = MyContainer(1)  # reveal_type: MyContainer[int]
print_float_val(int_container)
# main.py:15: error: Argument 1 to &quot;print_float_val&quot; has incompatible type &quot;MyContainer[int]&quot;; expected &quot;MyContainer[float]&quot;  [arg-type]
</code></pre> <p>I cannot imagine this behavior being very useful because:</p> <ul> <li>My understanding of covariant generics is that they are mostly (perhaps all) read-only containers. Pretty much all the covariant container types I can think of receive the values they contain during initialization: <code>tuple</code>, <code>namedtuple</code>, <code>frozenset</code>.</li> <li>We only call <code>__init__</code> when instantiating objects, and it is irrelevant afterward. For instance, <code>print_float_val</code> accepts an instance of <code>MyContainer[float]</code>, so the object would have been instantiated before being passed to this function.</li> </ul>
<python><generics><python-typing><covariance>
2025-07-09 06:22:01
1
1,368
Leonardus Chen
79,695,141
4,423,300
CSV file wide to long: float values get truncated and the index column becomes integers
<p>I have a csv file with approx 40K rows and 1200 columns in wide format, which looks something like this:</p> <pre><code>| Millisec_diff | Value1 | Value2 | Value3 |
| ------------- | ------ | ------ | ------ |
| 0             | 100    | 200    | 1.3    |
| 0.005         | 101    | 20.1   | 1.3    |
| 0.01          | 103    | 20.1   | 2.5    |
| 0.015         | 104    | 24.1   | 4.5    |
| 0.02          | 103    | 40.1   | 5.4    |
</code></pre> <p>I added a date time column based on <code>Millisec_diff</code> and converted to long format using <code>pandas.melt</code>:</p> <pre><code>start = datetime.today()
df['time'] = (
    datetime(
        start.year, start.month, start.day,
        start.hour, start.minute, start.second, start.microsecond)
    + pandas.to_timedelta(df['Millisec_diff '], unit='ms')
)

df_long = pandas.melt(
    df,
    id_vars=[&quot;time&quot;, 'Millisec_diff '],
    value_vars=col_names,
    var_name='param',
    value_name='value')
</code></pre> <p>When I run the code on a sample dataset, I can see proper values (e.g. 0.005, 0.01) for <code>Millisec_diff</code>, but when I run it on the whole file, I see only integers (like 1, 2) for <code>Millisec_diff</code> instead of the actual decimal values. I am not sure what is happening.</p>
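For reference, a minimal inline version of the pipeline (file contents made up). Pinning the dtype on read is one thing I plan to try, in case something in the big file makes pandas parse the column differently:

```python
import io

import pandas as pd

csv_data = """Millisec_diff,Value1,Value2
0,100,200
0.005,101,20.1
0.01,103,20.1
"""

# Force the column to float64 on read so nothing downstream can coerce it
df = pd.read_csv(io.StringIO(csv_data), dtype={"Millisec_diff": "float64"})

df_long = pd.melt(
    df,
    id_vars=["Millisec_diff"],
    value_vars=["Value1", "Value2"],
    var_name="param",
    value_name="value",
)
print(df_long["Millisec_diff"].tolist())
# -> [0.0, 0.005, 0.01, 0.0, 0.005, 0.01]
```

On this small input, melt itself preserves the decimal values, which is why I suspect the reading/parsing of the big file rather than the reshape.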
<python><pandas><melt><pandas-melt>
2025-07-09 06:14:02
1
637
SheCodes
79,695,102
4,423,300
Pandas create date time column based on other column with relative milliseconds time
<p>My original data csv file's columns look like this:</p> <pre><code>| Millisec_diff | Value1 | Value2 |
| ------------- | ------ | ------ |
| 0             | 100    | 200    |
| 0.005         | 101    | 20.1   |
| 0.01          | 103    | 20.1   |
| 0.015         | 104    | 24.1   |
| 0.02          | 103    | 40.1   |
</code></pre> <p>I would like to create a column with the date time in ISO 8601 format, based on the current time plus a timedelta derived from <code>Millisec_diff</code>.</p> <p>I tried:</p> <pre><code>start = datetime.today()
df['time'] = (
    datetime(start.year, start.month, start.day,
             start.hour, start.minute, start.second, start.microsecond)
    + pandas.to_timedelta(df['Millisec_diff '], unit='ms')
)
</code></pre> <p>This generated a <code>time</code> column, but the values are all the same. I wanted to generate times that differ as per <code>Millisec_diff</code>. Thanks.</p>
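A stripped-down version of what I tried, with a fixed start time so it's reproducible. The timedeltas do get added, but at <code>unit='ms'</code> consecutive rows are only 5 microseconds apart, so the timestamps look identical unless printed at full precision; if the column is actually in seconds, <code>unit='s'</code> would be the fix (that's an assumption on my part):

```python
import pandas as pd

df = pd.DataFrame({"Millisec_diff": [0, 0.005, 0.01, 0.015, 0.02]})
start = pd.Timestamp("2025-07-09 05:00:00")

# 0.005 ms == 5 microseconds, which second-resolution display hides
df["time"] = start + pd.to_timedelta(df["Millisec_diff"], unit="ms")
print(df["time"].diff().iloc[1])
# -> 0 days 00:00:00.000005
```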
<python><pandas><datetime><time-difference>
2025-07-09 05:11:53
1
637
SheCodes
79,694,783
7,453,703
Streamlit data_editor won't update data
<p>I am trying to have a data_editor in streamlit that can handle automatic changes from users. The idea is that a user will update a column value and the change will automatically be displayed, triggering a calculation within the table. In my example, it should compute days to expiry for a financial option. Pretty much like Excel. But I am stuck with the fact that it will always revert to the previous value, and it needs &quot;two clicks&quot;.</p> <p>Here is the code I have (with a lot of ChatGPT too). At the moment I am creating a second data frame, because it is failing to update the same data_editor. Ideally, the changes you see on the second data frame should appear in the input one.</p> <p>Can anyone help out?</p> <p>The code below is where the problem is; the user's input is not registered:</p> <pre><code>import streamlit as st
import pandas as pd

contracts = ['Jan25', 'Feb25', 'Mar25', 'Apr25', 'May25', 'Jun25',
             'Jul25', 'Aug25', 'Sep25', 'Oct25', 'Nov25', 'Dec25']
options_type = ['Call', 'Put']

# Only initialize once
if &quot;editor_df&quot; not in st.session_state:
    st.session_state[&quot;editor_df&quot;] = pd.DataFrame({
        &quot;Is Long&quot;: [True],
        &quot;Call/Put&quot;: [options_type[0]],
        &quot;Contract&quot;: [contracts[0]],
        &quot;Strike&quot;: [35.0]
    })

# Show editor and get edited DataFrame
edited_df = st.data_editor(
    st.session_state[&quot;editor_df&quot;],
    key=&quot;editor&quot;,
    column_config={
        &quot;Is Long&quot;: st.column_config.CheckboxColumn(default=True),
        &quot;Call/Put&quot;: st.column_config.SelectboxColumn(options=options_type),
        &quot;Contract&quot;: st.column_config.SelectboxColumn(options=contracts),
        &quot;Strike&quot;: st.column_config.NumberColumn(default=35.0),
    },
    num_rows=&quot;dynamic&quot;
)

# Compare before assigning
if not edited_df.equals(st.session_state[&quot;editor_df&quot;]):
    st.session_state[&quot;editor_df&quot;] = edited_df.copy()
</code></pre> <p>The code below is the workaround I have. It populates two tables: the first is the input and the second calculates the extra field.</p> <pre><code>import streamlit as st
import pandas as pd
import datetime

# Define contracts and expiries
contracts = ['Jan25', 'Feb25', 'Mar25', 'Apr25', 'May25', 'Jun25',
             'Jul25', 'Aug25', 'Sep25', 'Oct25', 'Nov25', 'Dec25']
options_type = ['Call', 'Put']
expiries = {
    'Jan25': datetime.datetime(2024, 12, 27),
    'Feb25': datetime.datetime(2025, 1, 29),
    'Mar25': datetime.datetime(2025, 2, 26),
    'Apr25': datetime.datetime(2025, 3, 27),
    'May25': datetime.datetime(2025, 4, 28),
    'Jun25': datetime.datetime(2025, 5, 28),
    'Jul25': datetime.datetime(2025, 6, 26),
    'Aug25': datetime.datetime(2025, 7, 29),
    'Sep25': datetime.datetime(2025, 8, 27),
    'Oct25': datetime.datetime(2025, 9, 26),
    'Nov25': datetime.datetime(2025, 10, 29),
    'Dec25': datetime.datetime(2025, 11, 27)
}

# Initialize session state
if &quot;editor_df&quot; not in st.session_state:
    st.session_state.editor_df = pd.DataFrame({
        &quot;Is Long&quot;: [True],
        &quot;Call/Put&quot;: [&quot;Call&quot;],
        &quot;Contract&quot;: [&quot;Jan25&quot;],
        &quot;Strike&quot;: [35.0]
    })

# Show editor first
edited_df = st.data_editor(
    st.session_state.editor_df,
    key=&quot;editor&quot;,
    column_config={
        &quot;Is Long&quot;: st.column_config.CheckboxColumn(),
        &quot;Call/Put&quot;: st.column_config.SelectboxColumn(options=options_type),
        &quot;Contract&quot;: st.column_config.SelectboxColumn(options=contracts),
        &quot;Strike&quot;: st.column_config.NumberColumn(),
    },
    num_rows=&quot;dynamic&quot;
)

# Compute days on what the user just changed
def compute_days(df: pd.DataFrame) -&gt; pd.DataFrame:
    now = datetime.datetime.now()

    def get_days(contract):
        expiry = expiries.get(str(contract))
        if not isinstance(expiry, datetime.datetime):
            return 0.0
        delta = expiry - now
        return max(delta.total_seconds() / 86400, 0.0)

    df = df.copy()
    df[&quot;Days to expiry&quot;] = df[&quot;Contract&quot;].apply(get_days)
    return df

# Compute on latest edited data
final_df = compute_days(edited_df)

# Show full live-updated table
st.dataframe(final_df)
</code></pre>
<python><streamlit>
2025-07-08 20:11:27
1
405
pbou
79,694,410
8,297,745
How to use Django Q objects with ~Q() inside annotate(filter=...) to exclude a value?
<p>I'm refactoring a legacy Django job to use <code>annotate</code> with filtered <code>Count</code> aggregations instead of querying each record individually (avoiding the N+1 problem).</p> <p>I want to count the number of related <code>EventReport</code> objects per <code>Store</code>, excluding those where <code>status=&quot;C&quot;</code>.</p> <p>So I wrote something like:</p> <pre class="lang-py prettyprint-override"><code>stores_with_monitoring_enabled.annotate( total_cards=Count( 'eventreport', filter=Q( eventreport__event_at__gte=day_30_days_ago_start, eventreport__event_at__lte=yesterday_end ) &amp; ~Q(eventreport__status='C') ), # ... etc </code></pre> <p>But Django raised <code>SyntaxError: positional argument follows keyword argument</code>.</p> <p><strong>I also tried:</strong></p> <pre class="lang-py prettyprint-override"><code># ... etc filter=Q( eventreport__event_at__gte=start_date, eventreport__event_at__lte=end_date ) &amp; ~Q(eventreport__status=&quot;C&quot;) # ... etc </code></pre> <p>But I'm unsure if this is the correct pattern inside <code>annotate()</code>'s filter.</p> <p>I expected to get only related objects where <code>status != &quot;C&quot;</code> without any errors.</p> <p><strong>PS:</strong> I looked into other solutions on StackOverflow and the suggestions on this one: <a href="https://stackoverflow.com/a/1154977/8297745">How do I do a not equal in Django queryset filtering?</a>, but I couldn't get it working when using <code>Q()</code> alongside <code>~Q()</code> with other kwargs.</p> <p>What’s the best approach to express <code>status != 'C'</code> inside a <code>Count(..., filter=...)</code> clause?</p>
<python><django><django-models><django-queryset><django-orm>
2025-07-08 14:30:37
1
849
Raul Chiarella
79,694,363
1,833,563
What are the benefits of using an annotated class vs. a dict[str, Any] in the declaration of an MCP tool?
<p>FastMCP's <a href="https://gofastmcp.com/servers/tools#return-values" rel="nofollow noreferrer">documentation</a> states that:</p> <blockquote> <p>When you add return type annotations, FastMCP automatically generates output schemas to validate the structured data and enables clients to deserialize results back to Python objects.</p> </blockquote> <p>I've modified my code to return an annotated class instead of a simple dict[str, Any], but could not observe any differences in the response returned by the MCP client: not in the description returned by the MCP server, and not even in the response to a tool request by the MCP client (I used Wireshark to view the result returned by the server when the client called the tool).</p> <p>Here are the original and modified versions of the code:</p> <pre class="lang-py prettyprint-override"><code># FastMCP version: 2.10.1 # Original code: @mcp.tool(name=&quot;some_tool&quot;) def some_tool(arg1: str, arg2: int) -&gt; dict[str, Any]: ... # Code using an annotated class: class ResponseClass(BaseModel): mem1: Annotated[int, Field(description=&quot;some description&quot;)] mem2: Annotated[int, Field(description=&quot;some description&quot;)] @mcp.tool(name=&quot;some_tool&quot;) def some_tool(arg1: str, arg2: int) -&gt; ResponseClass: ... </code></pre> <p>Assuming I don't need output schema validation - what benefits does the annotated class have over the simple <code>dict[str, Any]</code> option?</p>
<python><pydantic><model-context-protocol>
2025-07-08 13:59:14
1
1,476
omer
79,694,308
7,408,848
extended mapfield results in error when updating a doc - mongoengine
<p>I am trying to write an enhanced field for mongoengine's MapField that takes a defined enum and tracks the selection. Ideally, it locates and presents the enum when in Python but saves the defined code when in MongoDB. The code works well when saving the document initially, but for some reason when I try to make a modification, it results in the error below. I have attached the EnumMapField and an example.</p> <p>an enum field</p> <pre><code>class NarrativeEnum(Enum): LINEAR = (1, &quot;Linear&quot;) BRANCHING = (2, &quot;Branching&quot;) # property _index: int _code: str _description: str @property def index(self) -&gt; int: return self._index @property def code(self) -&gt; str: return self._code def __init__(self, index: int, code: str): self._index = index self._code = code </code></pre> <p>here is the class where I am trying to extend the MapField to take in the enum field</p> <pre><code>class EnumMapField(MapField): def __init__(self, enum_class: Type[Enum], field: Any, *args, **kwargs): self.enum_class = enum_class super().__init__(field=field,*args, **kwargs) def validate(self, value): if not isinstance(value, dict): raise ValidationError for key, val in value.items(): if not isinstance(key, self.enum_class): raise ValidationError(f&quot;Key '{key}' is not a valid member of enum '{self.enum_class.__name__}'&quot;) string_keyed_dict: Dict[str, Any] = {key.code: val for key, val in value.items()} super(EnumMapField, self).validate(string_keyed_dict) def to_mongo(self, value, use_db_field=True, fields=None): if value is None: return None value = {key.code: val for key, val in value.items()} value = super(EnumMapField, self).to_mongo(value, use_db_field, fields) return value def to_python(self, value): value = super(EnumMapField, self).to_python(value) return {self.enum_class[key]: val for key, val in value.items()} </code></pre> <p>a document example and code</p> <pre><code>class NarrativeDoc(Document): meta = {&quot;abstract&quot;: False,
&quot;allow_inheritance&quot;: False, &quot;collection&quot;: &quot;narrative&quot;, &quot;autoIndexId&quot;: False} id: str = StringField(db_field=&quot;_id&quot;, primary_key=True) narratives: Dict[NarrativeEnum, Dict[str, str]] = EnumMapField(NarrativeEnum, DictField(), db_field=&quot;naratives&quot;, default={}) doc = NarrativeDoc(id = &quot;mytext&quot;) doc.narratives[NarrativeEnum.LINEAR] = {&quot;atest&quot;:&quot;ofdata&quot;} doc.save() # this works perfectly fine, it is when I try to modify and save doc = NarrativeDoc.objects[0] doc.narratives[NarrativeEnum.BRANCHING] = {&quot;tomany&quot;:&quot;branches&quot;} doc.save() # this second save results in an error </code></pre> <p>error:</p> <blockquote> <p>AttributeError: 'NoneType' object has no attribute '_reverse_db_field_map'</p> </blockquote> <p>When I change to a Dictfield, I am able to make changes and save the document as many times as I like. But for the EnumMapField, it always crashes after second save. I have tried to go through documents to see what could be causing the issue but haven't figured it out.</p>
<python><mongoengine>
2025-07-08 13:14:19
0
1,111
Hojo.Timberwolf
79,694,270
18,814,386
How can I exclude a column from a heatmap?
<p>I have a pivoted dataframe and want to plot a heatmap in Plotly. I have a total column, making it hard to use a color scale.</p> <pre><code>import numpy as np import pandas as pd import plotly.express as px city_origin = ['London', 'Tokio', 'Seoul', 'Paris', 'Tashkent', 'Washington', 'Moscow'] city_current = ['London', 'Madrid', 'Tashkent', 'Seoul', 'Paris', 'Toronto', 'Washington', 'Istanbul', 'Hanoi', 'Manilla', 'Delhi', 'Busan', 'Moscow'] migrant_origin = np.random.choice(city_origin,size = 1000) migrant_current = np.random.choice(city_current, size = 1000) migrant_salary = np.random.randint(1300, 6900, size = 1000) df = pd.DataFrame({'migrant_origin':migrant_origin, 'migrant_current':migrant_current, 'migrant_salary':migrant_salary}) new_df = pd.pivot_table(df, index = 'migrant_origin', columns = 'migrant_current', values ='migrant_salary', aggfunc = 'sum', fill_value = 0) new_df['total'] = new_df.sum(axis = 1) cols = ['total'] + [col for col in new_df.columns if col != 'total'] new_df = new_df[cols] px.imshow(new_df,text_auto=True) </code></pre> <p><a href="https://i.sstatic.net/gkJ2gkIz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gkJ2gkIz.png" alt="enter image description here" /></a></p> <p>I want to show the totals but not affected colors on the heatmap. Is there any way to exclude columns from coloring?</p>
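One possible workaround (offered as a sketch, not taken from the question) is to compute the color range from the data excluding the <code>total</code> column and pass it as <code>zmin</code>/<code>zmax</code>; the totals then saturate at the top color instead of stretching the scale, while <code>text_auto</code> still prints their real values. The toy frame below stands in for the pivot above; the <code>px.imshow</code> call is left commented since it assumes Plotly is available:

```python
import numpy as np
import pandas as pd

# toy pivot with a 'total' column, mirroring the question's setup
rng = np.random.default_rng(0)
new_df = pd.DataFrame(rng.integers(1300, 6900, size=(4, 3)),
                      index=list('ABCD'), columns=list('XYZ'))
new_df.insert(0, 'total', new_df.sum(axis=1))

# derive the color range from everything *except* the total column
body = new_df.drop(columns='total')
zmin, zmax = body.to_numpy().min(), body.to_numpy().max()

# values above zmax (the totals) are clipped to the top color instead of
# dominating the scale; text_auto still shows the real numbers
# px.imshow(new_df, text_auto=True, zmin=zmin, zmax=zmax)  # requires plotly
```

This keeps the totals visible as text but stops them from washing out the per-city colors.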
<python><pandas><plotly><heatmap>
2025-07-08 12:50:38
2
394
Ranger
79,694,234
10,277,250
Does FastAPI still need Gunicorn?
<p>For a long time Gunicorn+Uvicorn was the default setup for running FastAPI in production. However, I recently came across a <a href="https://blueshoe.io/blog/fastapi-in-production/" rel="nofollow noreferrer">blog post</a> saying:</p> <blockquote> <p>In the meantime, this combination of Gunicorn and Uvicorn is no longer needed, as Uvicorn now also handles worker management itself</p> </blockquote> <p>I haven't found any other sources to verify this statement. Besides this, the <a href="https://fastapi.tiangolo.com/deployment/docker/" rel="nofollow noreferrer">official documentation</a> only mentions that the <code>tiangolo/uvicorn-gunicorn-fastapi</code> base Docker image is deprecated, but says nothing about the Gunicorn+Uvicorn setup itself.</p> <p>So:</p> <ol> <li>Does modern FastAPI need Gunicorn?</li> <li>Is there any difference between <code>fastapi run --workers 4 main.py</code> and <code>gunicorn main:app -w 4 -k uvicorn.workers.UvicornWorker</code> now?</li> </ol>
<python><fastapi><gunicorn><uvicorn><asgi>
2025-07-08 12:28:57
1
363
Abionics
79,694,189
4,404,699
matplotlib widget on jupyter notebook, dropdown menu working but not showing
<p>I wanted to test the code from <a href="https://stackoverflow.com/questions/61468175/dropdown-widget-python">this previous post</a> because I have trouble visualizing my dropdown menu.</p> <p>I use PyCharm, and added the code found in the answer of the post above to a Jupyter notebook. As you can see in the screenshot, I have the dropdown with the first value displayed. I also have my plot that updates with the option I choose. The problem is that when I click on an option, I only know it is selected thanks to the blue rectangle that appears; then I can select the value I want with the arrow keys on my keyboard.</p> <p>Question: I want the options to be displayed like a regular dropdown list. How can I do that?</p> <p>Thank you</p> <p><strong>Edit</strong>: here is the code I used from the previous post</p> <pre><code> %matplotlib widget # needs ipympl import ipywidgets as widgets from IPython.display import display import matplotlib.pyplot as plt dropdown = widgets.Dropdown( #options=['2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019'], #value='2009', options=list(range(2009, 2020)), # integers instead of strings value=2009, # integer instead of string description='Jahr:', ) fig, ax = plt.subplots() fig.dpi = 150 x = [2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019] y = [130, 137, 104, 147, 401, 274, 234, 770, 857, 746, 704] line, = ax.plot(x, y, label=&quot;Flugstunden pro Jahr&quot;, marker=&quot;.&quot;) ax.legend() ax.set_title(&quot;Flugstunden&quot;) ax.set_xlabel(&quot;Jahr&quot;) ax.set_ylabel(&quot;Flugstunden&quot;) ax.set_facecolor((0.9,0.9,0.9)) #plt.show() def on_change1(value=2009): &quot;&quot;&quot;remove old line(s) and plot new line(s)&quot;&quot;&quot; #print(type(value)) #value = int(value) # I don't have to convert string to integer # get all value for `year &gt;= value` #pairs = [(a,b) for a,b in zip(x, y) if a &gt;= value] # use `==` to get only one value #selected_x, selected_y = 
zip(*pairs) # select data pos = x.index(value) selected_x = x[pos] # create `selected_x` to keep original values in `x` selected_y = y[pos] # create `selected_y` to keep original values in `y` print('x:', selected_x) print('y:', selected_y) # remove old line(s) for l in ax.lines: l.remove() # plot new line(s) ax.plot(selected_x, selected_y, label=&quot;Flugstunden pro Jahr&quot;, marker=&quot;.&quot;) def on_change2(value=2009): &quot;&quot;&quot;keep line, remove all data from line and use new data with the same line&quot;&quot;&quot; #print(type(value)) #value = int(value) # I don't have to convert string to integer # get all value for `year &gt;= value` #pairs = [(a,b) for a,b in zip(x, y) if a &gt;= value] # use `==` to get only one value #selected_x, selected_y = zip(*pairs) # select data pos = x.index(value) selected_x = x[pos] # create `selected_x` to keep original values in `x` selected_y = y[pos] # create `selected_y` to keep original values in `y` print('x:', selected_x) print('y:', selected_y) line.set_xdata(selected_x) line.set_ydata(selected_y) #fig.canvas.draw() widgets.interact(on_change1, value=dropdown) #widgets.interact(on_change2, value=dropdown) </code></pre> <p><strong>Edit 2</strong>: The last code with iypmpl works on the 'launcher binder'. On my computer, when I open the notebook in jupyter lab, the code works well. Then, when I open the same notebook in PyCharm, the code generates the figure, but the dropdown menu does not open. So it must be the IDE.</p> <p><strong>Update</strong>: I just decided to update the IDE (last time I did, it was 6 months ago). Now it works like a charm...My new Pycharm version is 2025.1.3</p> <p><a href="https://i.sstatic.net/oTXdhrhA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTXdhrhA.png" alt="enter image description here" /></a></p>
<python><matplotlib><widget><jupyter><ipywidgets>
2025-07-08 11:54:13
0
1,457
tuttifolies
79,694,182
2,090,453
Memory Not Released After Each Request Despite Cleanup Attempts
<p>We're running a FastAPI service that fetches data from Trino, processes it using PyArrow and Polars, and uploads the result to AWS S3 in Parquet format. However, we're facing a persistent issue where memory is not released after each request, even after explicitly attempting cleanup.</p> <p>Architecture overview:</p> <ul> <li>Framework: FastAPI</li> <li>Data Source: Trino</li> <li>Processing: PyArrow, Polars</li> <li>Storage: AWS S3 (Parquet format)</li> </ul> <p>Flow:</p> <ol> <li>API receives a request</li> <li>Fetch data from Trino</li> <li>Convert to PyArrow Table</li> <li>Upload to S3 using <code>write_to_dataset</code></li> <li>Attempt memory cleanup</li> </ol> <p>We've tried:</p> <ol> <li><p>Using del to delete large objects</p> <p>Deleted PyArrow tables and Polars DataFrames after upload.</p> <pre class="lang-py prettyprint-override"><code>import pyarrow as pa import pyarrow.parquet as pq import pyarrow.fs as pafs # Fetch and process data data = fetch_data_from_trino() arrow_table = pa.table(list(zip(*data))) # Initialize S3 filesystem with optimal configuration for high-performance writes s3_filesystem = pafs.S3FileSystem() # Upload to S3 pq.write_to_dataset( arrow_table, root_path=f&quot;{RESULT_STORAGE_BUCKET}/{s3_storage_path}&quot;, partition_cols=['organisation'], filesystem=s3_filesystem ) # Attempt to free memory del data del arrow_table </code></pre> <p>Observation: Memory usage remained high even after deletion.</p> <pre class="lang-json prettyprint-override"><code>&quot;resource_stats&quot;: { &quot;memory_mb&quot;: { &quot;start&quot;: 140.3004568, &quot;peak&quot;: 589.02921865, &quot;end&quot;: 587.01258 } } </code></pre> </li> <li><p>Forcing garbage collection with <code>gc.collect()</code></p> <p>Called after <code>del</code>.</p> <pre class="lang-py prettyprint-override"><code>import os import psutil def force_garbage_collection_and_cleanup() -&gt; Dict[str, int]: try: # Get memory usage before cleanup memory_before = 
psutil.Process(os.getpid()).memory_info().rss / (1024 ** 2) # Force garbage collection - run multiple times for thorough cleanup objects_collected = 0 for _ in range(3): # Run GC multiple times for better cleanup collected = gc.collect() objects_collected += collected # Get memory usage after cleanup memory_after = psutil.Process(os.getpid()).memory_info().rss / (1024 ** 2) except Exception as cleanup_error: logger.warning(f&quot;Failed to perform garbage collection: {str(cleanup_error)}&quot;) </code></pre> <p>Observation: no significant drop in memory usage.</p> <pre class="lang-json prettyprint-override"><code>&quot;resource_stats&quot;: { &quot;memory_mb&quot;: { &quot;start&quot;: 190.56, &quot;peak&quot;: 579.6, &quot;end&quot;: 579.7 } } </code></pre> </li> <li><p>Manual memory release with <code>malloc_trim</code> and deep GC</p> <p>Custom <code>force_memory_release()</code> function:</p> <ul> <li>Runs <code>gc.collect()</code> across all generations</li> <li>Calls <code>malloc_trim(0)</code> (Linux)</li> </ul> <pre class="lang-py prettyprint-override"><code>import gc import sys import psutil import platform import ctypes def force_memory_release(): &quot;&quot;&quot; Force Python to release memory back to the operating system. This method: 1. Runs garbage collection multiple times to ensure all unreferenced objects are cleaned 2. On Linux (ECS), calls malloc_trim() to force glibc to release memory to OS 3. 
Tracks and reports memory freed Returns: dict: Memory statistics before and after cleanup &quot;&quot;&quot; # Get memory usage before cleanup process = psutil.Process() memory_before_mb = process.memory_info().rss / 1024 / 1024 # Force garbage collection multiple times # First pass: collect unreferenced objects collected_gen0 = gc.collect(0) # Young generation collected_gen1 = gc.collect(1) # Middle generation collected_gen2 = gc.collect(2) # Old generation # Second pass: ensure all cyclic references are broken collected_final = gc.collect() total_objects_collected = collected_gen0 + collected_gen1 + collected_gen2 + collected_final # Force memory release to OS on Linux systems (ECS containers) malloc_trim_success = False if platform.system() == 'Linux': try: # Load glibc and call malloc_trim to release memory to OS libc = ctypes.CDLL(&quot;libc.so.6&quot;) malloc_trim = libc.malloc_trim malloc_trim.argtypes = [ctypes.c_size_t] malloc_trim.restype = ctypes.c_int # malloc_trim(0) releases all possible memory to OS result = malloc_trim(0) malloc_trim_success = bool(result) if malloc_trim_success: print(&quot;Successfully called malloc_trim() to release memory to OS&quot;) else: print(&quot;malloc_trim() was called but returned 0 (no memory released)&quot;) except Exception as e: print(f&quot;Could not call malloc_trim(): {e}&quot;) malloc_trim_success = False # Get memory usage after cleanup memory_after_mb = process.memory_info().rss / 1024 / 1024 memory_freed_mb = max(0, memory_before_mb - memory_after_mb) cleanup_stats = { 'objects_collected': total_objects_collected, 'memory_before_mb': round(memory_before_mb, 2), 'memory_after_mb': round(memory_after_mb, 2), 'memory_freed_mb': round(memory_freed_mb, 2), 'malloc_trim_success': malloc_trim_success, 'platform': platform.system() } print( f&quot;Memory cleanup completed: &quot; f&quot;Objects collected: {total_objects_collected}, &quot; f&quot;Memory freed: {memory_freed_mb:.2f}MB &quot; 
f&quot;({memory_before_mb:.2f}MB -&gt; {memory_after_mb:.2f}MB), &quot; f&quot;malloc_trim: {'success' if malloc_trim_success else 'not available/failed'}&quot; ) return cleanup_stats force_memory_release() </code></pre> <p>Observation: this method showed partial success in reducing memory usage. However, memory was not fully returned to baseline, indicating potential native memory fragmentation or memory held by external libraries.</p> <pre class="lang-json prettyprint-override"><code>&quot;resource_stats&quot;: { &quot;memory_mb&quot;: { &quot;start&quot;: 216.47, &quot;peak&quot;: 460, &quot;end&quot;: 460 } } </code></pre> </li> <li><p>Periodic memory cleanup script</p> <p>Background script runs every 10 seconds:</p> <ul> <li>Triggers GC</li> <li>Attempts to release PyArrow memory</li> <li>Calls <code>malloc_trim</code></li> </ul> <pre class="lang-bash prettyprint-override"><code>#!/bin/bash # Run a Python script that performs memory cleanup /usr/local/bin/python3 &lt;&lt;EOF import gc import pyarrow as pa import psutil import os import ctypes import time def log(msg): print(f&quot;[{time.strftime('%Y-%m-%d %H:%M:%S')}] {msg}&quot;) process = psutil.Process(os.getpid()) rss_before = process.memory_info().rss / (1024 ** 2) arrow_before = pa.total_allocated_bytes() / (1024 ** 2) log(f&quot;Memory before cleanup: {rss_before:.2f} MB (RSS), {arrow_before:.2f} MB (PyArrow)&quot;) # Perform garbage collection gc.collect() # Release unused memory from PyArrow pa.default_memory_pool().release_unused() # Try to return memory to OS (glibc only) try: ctypes.CDLL(&quot;libc.so.6&quot;).malloc_trim(0) except Exception as e: log(f&quot;malloc_trim failed: {e}&quot;) rss_after = process.memory_info().rss / (1024 ** 2) arrow_after = pa.total_allocated_bytes() / (1024 ** 2) log(f&quot;Memory after cleanup: {rss_after:.2f} MB (RSS), {arrow_after:.2f} MB (PyArrow)&quot;) EOF </code></pre> <p>Observation: while PyArrow memory usage dropped significantly, RSS memory did not fully 
return to baseline. This suggests native memory fragmentation or memory still held by other libraries.</p> <pre class="lang-json prettyprint-override"><code>&quot;resource_stats&quot;: { &quot;memory_mb&quot;: { &quot;start&quot;: 141.30078125, &quot;peak&quot;: 634.94921875, &quot;end&quot;: 635.0 } } </code></pre> </li> </ol> <p>Questions:</p> <ul> <li>What could be causing memory to remain high after each request, even after aggressive cleanup attempts?</li> <li>Are there known memory retention issues with PyArrow, Polars, or Trino clients in long-running FastAPI services?</li> <li>Any best practices for managing native memory in such a pipeline?</li> </ul>
<python><fastapi><python-polars><pyarrow>
2025-07-08 11:50:34
1
4,058
DonOfDen
79,694,305
7,282,437
Efficiently Computing Mean Pairwise Distances in an Array
<p>Suppose we have observations <span class="math-container">$\{y_i\}_{i=1}^{n}$</span>. I would like to compute the average pairwise distance defined by: <span class="math-container">$$ D = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} |y_i - y_j| $$</span> This computation is done repeatedly in a loop, so optimizing it for speed is important. In my application, the observations are stacked into an array <code>Y</code> of shape <code>(n, p, k)</code> (each <span class="math-container">$y_i$</span> is a <code>(p, k)</code> slice), and the final result <span class="math-container">$D$</span> should be averaged across the first dimension - resulting in an array of shape <code>(p, k)</code>.</p> <hr /> <p>The simplest way to do this is a naive for-loop:</p> <pre><code>Y # Array of shape (n, p, k) D = 0 for i in range(n): for j in range(n): D += np.abs(Y[i, :, :] - Y[j, :, :]) D = D / n**2 </code></pre> <p>However this is of course very slow. My current solution uses shape broadcasting/vectorization and is extremely fast:</p> <pre><code>D = np.mean( np.abs( np.expand_dims(Y, axis=0) - np.expand_dims(Y, axis=1) ), axis=(0, 1) ) </code></pre> <p>However, a problem I have with this is that I am quickly hitting out-of-memory errors, since the broadcasted subtraction materializes an intermediate result of shape <code>(n, n, p, k)</code>. For example, if <code>n=100, p=100, k=300</code>, that is already 300 million elements (about 2.4 GB in float64).</p> <hr /> <p>Any suggestions on how to compute <span class="math-container">$D$</span> in a way that is more memory-efficient but also fast? Could we rewrite <span class="math-container">$D$</span> somehow in terms of sample statistics?</p>
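There is a known identity for the one-dimensional case that sidesteps the pairwise grid entirely: after sorting, Ξ£α΅’β±Ό|yα΅’ βˆ’ yβ±Ό| = 2 Ξ£β‚– (2k βˆ’ n + 1) y₍ₖ₎, where y₍ₖ₎ is the k-th smallest value (0-indexed). Applied independently along the first axis, it keeps memory at O(nΒ·pΒ·k) and runs in O(n log n Β· pΒ·k). A sketch (not from the question, verified against the naive broadcast on a small case):

```python
import numpy as np

def mean_pairwise_distance(Y):
    """Mean |y_i - y_j| over all ordered pairs (i, j), reduced over axis 0.

    Y has shape (n, p, k); the result has shape (p, k).
    Uses the sorted-array identity, so memory stays O(n*p*k).
    """
    n = Y.shape[0]
    s = np.sort(Y, axis=0)  # sort each (p, k) position independently
    # y_(k) appears k times as the larger element and (n-1-k) times as the
    # smaller one, giving weight 2k - n + 1 in the one-sided sum
    w = (2 * np.arange(n) - n + 1).reshape(-1, 1, 1)
    return 2.0 * (w * s).sum(axis=0) / n**2

# check against the naive O(n^2) broadcast on a small case
rng = np.random.default_rng(0)
Y = rng.normal(size=(7, 3, 4))
naive = np.mean(np.abs(Y[None] - Y[:, None]), axis=(0, 1))
assert np.allclose(mean_pairwise_distance(Y), naive)
```

The weight derivation answers the "sample statistics" question: D is a weighted sum of the order statistics, so no (n, n, p, k) intermediate is ever needed.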
<python><distance>
2025-07-08 11:38:07
1
389
Adam
79,693,791
9,257,294
Why is Dash ignoring the HOST environment variable?
<p>I have this minimal Dash app:</p> <pre><code>import os import dash from dash import html app = dash.Dash(__name__) app.layout = html.Div(&quot;Hello Dash!&quot;) print(f'{os.environ[&quot;HOST&quot;]=}') app.run() </code></pre> <p>The environment variable <code>HOST</code> is set to <code>0.0.0.0</code>. According to <a href="https://dash.plotly.com/reference#app.run" rel="nofollow noreferrer">the Dash documentation</a>, when the <code>host</code> parameter to the <code>app.run</code> method is not specified, the <code>HOST</code> variable should be used:</p> <blockquote> <p>host</p> <p>Host IP used to serve the application, default to &quot;127.0.0.1&quot; env: HOST</p> </blockquote> <p>However, when I run the above code, Dash seems to ignore the <code>HOST</code> variable and runs on <code>127.0.0.1</code>:</p> <pre><code>$ python dash_app_minimal.py os.environ[&quot;HOST&quot;]='0.0.0.0' Dash is running on http://127.0.0.1:8080/ </code></pre> <p>If I pass the <code>host</code> argument (<code>app.run(host=&quot;0.0.0.0&quot;)</code>), it works fine:</p> <pre><code>$ python dash_app_minimal.py os.environ[&quot;HOST&quot;]='0.0.0.0' Dash is running on http://0.0.0.0:8080/ </code></pre> <p>Note that the <code>PORT</code> variable is considered, as expected:</p> <pre><code>$ PORT=8888 python dash_app_minimal.py os.environ[&quot;HOST&quot;]='0.0.0.0' Dash is running on http://127.0.0.1:8888/ </code></pre> <p>Environment: Dash 3.1.1 with Python 3.12.11, running on JupyterLab 4.4.3, in a Docker container based on this image: quay.io/jupyter/r-notebook:hub-5.3.0. Kubernetes 1.28.</p>
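Until the cause is pinned down, reading the variables yourself and passing them explicitly sidesteps the discrepancy entirely. A minimal sketch (the commented <code>app.run</code> call assumes the Dash app object from the question):

```python
import os

# read the same variables Dash is documented to honor, with explicit fallbacks
host = os.environ.get("HOST", "127.0.0.1")
port = int(os.environ.get("PORT", "8050"))

# app.run(host=host, port=port)  # pass them explicitly instead of relying on env handling
```

This makes the binding address deterministic regardless of how Dash resolves its defaults.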
<python><docker><environment-variables><plotly-dash><jupyter-lab>
2025-07-08 07:01:31
1
1,129
mckbrd
79,693,681
14,271,017
In Python, How to run two statistical tests on all numeric columns
<p>I have a dataframe <code>df</code> and I want to do the following:</p> <ol> <li>run two stats tests on all the numeric columns (<code>column_1</code> to <code>column_84</code>) to compare if there is a statistical difference between Types <code>X</code>, <code>Y</code> and <code>Z</code></li> </ol> <ul> <li><p>The stats tests are the <code>Kruskal-Wallis</code> and <code>Dunn's</code> tests</p> </li> <li><p>The comparison groups: <code>X vs Y</code>, <code>Y vs Z</code> and <code>X vs Z</code></p> </li> </ul> <ol start="2"> <li>Export the results to an Excel spreadsheet (see screenshots below)</li> </ol> <pre><code># copy &amp; paste ## generate dataframe &quot;df&quot; import pandas as pd import numpy as np df = pd.DataFrame( data=np.random.uniform(low=5.5, high=30.75, size=(60, 84)), columns=[f'column_{i}' for i in range(1, 85)],) df.insert(loc=0, column='Type',value=np.repeat(['X','Y','Z'], 20, axis=0),) df </code></pre> <p>I want to run the <code>Kruskal-Wallis</code> and <code>Dunn</code> tests for each column <code>column_1</code> to <code>column_84</code>:</p> <pre><code># copy and paste the libraries below from scipy.stats import kruskal # pip install scikit-posthocs import scikit_posthocs as sp # filtering for each Type X,Y and Z # for column_1 # Extract values for each group group_x_values = df[df['Type'] == 'X']['column_1'].values group_y_values = df[df['Type'] == 'Y']['column_1'].values group_z_values = df[df['Type'] == 'Z']['column_1'].values # 1st stats test: Kruskal-Wallis h_statistic, p_value = kruskal(group_x_values, group_y_values, group_z_values) # Print the results print(f&quot;H-statistic: {h_statistic}&quot;) print(f&quot;P-value: {p_value}&quot;) # 2nd stats test: Dunn test data = [df[df['Type'] == 'X']['column_1'].values, df[df['Type'] == 'Y']['column_1'].values, df[df['Type'] == 'Z']['column_1'].values] p_values = sp.posthoc_dunn(data, p_adjust='bonferroni') print(p_values) # for column_2 group_x_values = df[df['Type'] == 'X']['column_2'].values group_y_values = df[df['Type'] == 'Y']['column_2'].values group_z_values = df[df['Type'] == 'Z']['column_2'].values # 1st stats test: Kruskal-Wallis h_statistic, p_value = kruskal(group_x_values, group_y_values, group_z_values) # Print the results print(f&quot;H-statistic: {h_statistic}&quot;) print(f&quot;P-value: {p_value}&quot;) # 2nd stats test: Dunn test data = [df[df['Type'] == 'X']['column_2'].values, df[df['Type'] == 'Y']['column_2'].values, df[df['Type'] == 'Z']['column_2'].values] p_values = sp.posthoc_dunn(data, p_adjust='bonferroni') print(p_values) . . . #for column_84 group_x_values = df[df['Type'] == 'X']['column_84'].values group_y_values = df[df['Type'] == 'Y']['column_84'].values group_z_values = df[df['Type'] == 'Z']['column_84'].values # 1st stats test: Kruskal-Wallis h_statistic, p_value = kruskal(group_x_values, group_y_values, group_z_values) # Print the results print(f&quot;H-statistic: {h_statistic}&quot;) print(f&quot;P-value: {p_value}&quot;) # 2nd stats test: Dunn test data = [df[df['Type'] == 'X']['column_84'].values, df[df['Type'] == 'Y']['column_84'].values, df[df['Type'] == 'Z']['column_84'].values] p_values = sp.posthoc_dunn(data, p_adjust='bonferroni') print(p_values) </code></pre> <p>I want to export both results to an Excel workbook, something like this:</p> <p>Kruskal Worksheet</p> <p><a href="https://i.sstatic.net/eRc8kIvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eRc8kIvI.png" alt="enter image description here" /></a></p> <p>Dunn Worksheet</p> <p><a href="https://i.sstatic.net/wpg68AY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wpg68AY8.png" alt="enter image description here" /></a></p>
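The per-column repetition above collapses into a loop that collects one result row per column. A sketch of the Kruskal-Wallis half (a Dunn sheet can be built the same way with <code>sp.posthoc_dunn</code>, assumed installed; the Excel export is left commented since it needs an engine such as openpyxl):

```python
import numpy as np
import pandas as pd
from scipy.stats import kruskal

# regenerate the question's dataframe with a fixed seed
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.uniform(5.5, 30.75, size=(60, 84)),
                  columns=[f'column_{i}' for i in range(1, 85)])
df.insert(0, 'Type', np.repeat(['X', 'Y', 'Z'], 20))

# one Kruskal-Wallis test per numeric column
rows = []
for col in df.columns.drop('Type'):
    groups = [g[col].to_numpy() for _, g in df.groupby('Type')]  # X, Y, Z
    h, p = kruskal(*groups)
    rows.append({'column': col, 'H-statistic': h, 'p-value': p})
kruskal_df = pd.DataFrame(rows)

# one workbook, one sheet per test (add a Dunn sheet the same way)
# with pd.ExcelWriter('results.xlsx') as xl:        # needs openpyxl
#     kruskal_df.to_excel(xl, sheet_name='Kruskal', index=False)
```

The same loop body can append each column's Dunn p-value matrix to a second list and write it to a second sheet.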
<python><pandas><numpy><scipy><scikits>
2025-07-08 05:02:35
1
319
RayX500
79,693,617
10,794,031
How to read [project.urls] metadata inside a project's entry point?
<p>Using <code>pyproject.toml</code> I wanted to read its <a href="https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#urls" rel="nofollow noreferrer"><code>[project.urls]</code></a> from inside a script launched by one of the package's entry points. So using <a href="https://packaging.python.org/en/latest/guides/writing-pyproject-toml/#a-full-example" rel="nofollow noreferrer">the full example</a> <code>pyproject.toml</code> (altering the <code>[build-system]</code> to <code>setuptools</code> and <code>wheel</code>) how could (or should) I get <code>https://example.com</code> from the <code>[project.urls]</code>'s <code>Homepage</code> name programmatically inside a function of the <code>spam:main_cli</code> entry point?</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; [project] name = &quot;spam-eggs&quot; version = &quot;2020.0.0&quot; dependencies = [ &quot;httpx&quot;, &quot;gidgethub[httpx]&gt;4.0.0&quot;, &quot;django&gt;2.1; os_name != 'nt'&quot;, &quot;django&gt;2.0; os_name == 'nt'&quot;, ] requires-python = &quot;&gt;=3.8&quot; authors = [ {name = &quot;Pradyun Gedam&quot;, email = &quot;pradyun@example.com&quot;}, {name = &quot;Tzu-Ping Chung&quot;, email = &quot;tzu-ping@example.com&quot;}, {name = &quot;Another person&quot;}, {email = &quot;different.person@example.com&quot;}, ] maintainers = [ {name = &quot;Brett Cannon&quot;, email = &quot;brett@example.com&quot;} ] description = &quot;Lovely Spam! 
Wonderful Spam!&quot; readme = &quot;README.rst&quot; license = &quot;MIT&quot; license-files = [&quot;LICEN[CS]E.*&quot;] keywords = [&quot;egg&quot;, &quot;bacon&quot;, &quot;sausage&quot;, &quot;tomatoes&quot;, &quot;Lobster Thermidor&quot;] classifiers = [ &quot;Development Status :: 4 - Beta&quot;, &quot;Programming Language :: Python&quot; ] [project.optional-dependencies] gui = [&quot;PyQt5&quot;] cli = [ &quot;rich&quot;, &quot;click&quot;, ] [project.urls] Homepage = &quot;https://example.com&quot; Documentation = &quot;https://readthedocs.org&quot; Repository = &quot;https://github.com/me/spam.git&quot; &quot;Bug Tracker&quot; = &quot;https://github.com/me/spam/issues&quot; Changelog = &quot;https://github.com/me/spam/blob/master/CHANGELOG.md&quot; [project.scripts] spam-cli = &quot;spam:main_cli&quot; [project.gui-scripts] spam-gui = &quot;spam:main_gui&quot; [project.entry-points.&quot;spam.magical&quot;] tomatoes = &quot;spam:main_tomatoes&quot; </code></pre>
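Once the package is installed, each <code>[project.urls]</code> entry becomes a <code>Project-URL: Name, url</code> field in the distribution metadata, readable with the standard-library <code>importlib.metadata</code>. A sketch (the helper name is mine; the distribution name <code>spam-eggs</code> comes from the <code>pyproject.toml</code> above, and the lookup returns nothing when the package is only run from source, not installed):

```python
from importlib.metadata import metadata, PackageNotFoundError

def project_urls(dist_name):
    """Parse a distribution's Project-URL metadata entries into {name: url}."""
    try:
        meta = metadata(dist_name)
    except PackageNotFoundError:
        return {}
    urls = {}
    for entry in meta.get_all("Project-URL") or []:
        # entries look like "Homepage, https://example.com"
        name, _, url = entry.partition(",")
        urls[name.strip()] = url.strip()
    return urls

# inside spam:main_cli, assuming the distribution is installed:
homepage = project_urls("spam-eggs").get("Homepage")
```

Note that this reads the installed metadata, not <code>pyproject.toml</code> itself, so it works from an entry point without knowing where the source tree lives.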
<python><pyproject.toml>
2025-07-08 03:16:41
1
13,254
bad_coder
79,693,602
10,737,147
polar plot -- sin transformation
<p>I want to plot a very simple function that would look like a simple sinusoid f(x) = r + n * sin(n*x) in a Cartesian coordinate plane.</p> <p>Given this, I now want to plot it in a polar plot - ideally as a sine wave along a circular path.</p> <p>This is my code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt n_theta = 500 theta = np.linspace(0, 2 * np.pi, n_theta) def inner(theta, n_petals, amplitude, base_radius): return base_radius + amplitude * np.cos(n_petals * theta) inner_bc = inner(theta, n_petals=5, amplitude=2, base_radius=5) # Create polar plot plt.figure(figsize=(6, 6)) ax = plt.subplot(111, projection='polar') ax.grid(True) ax.plot(theta, inner_bc, color='red', linewidth=0.8) plt.show() </code></pre> <p>But this has a problem: the wave is compressed where r is smaller and stretched where r is larger, so the petals come out more rounded at larger r and sharper at smaller r, producing a plot that does not preserve the angles of a sine wave.</p> <p><a href="https://i.sstatic.net/9nSLn7sK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nSLn7sK.png" alt="enter image description here" /></a></p> <p>Can someone please suggest a way to correct this function so that it plots the graph correctly? My guess is that the x-y to r-theta transformation has to be properly accounted for, only in the radial direction, but I couldn't wrap my head around how to do it properly. Could someone shed some light on this?</p>
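If the goal is literally an angle-preserving wrap of the Cartesian graph onto the annulus, one candidate fix (offered as a sketch, not the only option) is the conformal exponential map z β†’ e^z: it sends the horizontal graph y = c + aΒ·cos(nx) to r = e^c Β· exp(aΒ·cos(nΞΈ)) and, being conformal, keeps local angles intact, so every petal has the same shape regardless of radius. Note the amplitude becomes relative (the 0.3 below is an illustrative value, not the question's 2):

```python
import numpy as np
# import matplotlib.pyplot as plt

n_theta = 500
theta = np.linspace(0, 2 * np.pi, n_theta)

def conformal_inner(theta, n_petals, amplitude, base_radius):
    # exp() is conformal, so the wave keeps its local shape instead of
    # flattening at large r and sharpening at small r
    return base_radius * np.exp(amplitude * np.cos(n_petals * theta))

r = conformal_inner(theta, n_petals=5, amplitude=0.3, base_radius=5)

# plotting is unchanged from the question:
# ax = plt.subplot(111, projection='polar')
# ax.plot(theta, r, color='red', linewidth=0.8)
```

The trade-off is that peaks bulge slightly more than troughs (exp is asymmetric), which is exactly the price of preserving angles.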
<python><matplotlib><trigonometry>
2025-07-08 02:42:16
1
437
XYZ
79,693,413
430,766
Why is the fastest way to print a list of ints so unintuitive?
<p>I have a <code>list</code> of <code>int</code>s (<code>ints</code>) and I want to print each one in their own line as fast as possible.</p> <p>My first shot was this:</p> <pre><code>list(map(print,ints)) </code></pre> <p>and running it on 10<sup>7</sup> ints together with a function to read them from standard in this takes an average of 1708 ms from 30 repetitions.</p> <p>As <a href="https://stackoverflow.com/a/18433519/430766">this fairly well-received answer</a> points out, using <code>list()</code> to force-expand a <code>map</code>-result is inefficient because it has to create a long list of irrelevant data, specifically in our case a list of <code>None</code> values, returned by <code>print</code>. That makes intuitive sense, right?</p> <p>So I tried a number of other approaches:</p> <pre><code># n print calls list(map(print,ints)) # map for n in ints: print(n) # loop for i in range(0,len(ints)): print(ints[i]) # range [print(n) for n in ints] # compr deque(map(print,ints),0) # deque # one print call print('\n'.join(map(str,ints))) # cat print(*ints, sep='\n') # sep </code></pre> <p>Measured in-situ together with <code>ints = list(map(int,sys.stdin.readlines()))</code> the following are the runtimes.</p> <pre><code>map 1707 ms loop 1926 ms range 2027 ms compr 1828 ms deque 1699 ms -------------- cat 1084 ms sep 1462 ms </code></pre> <p>This is not a fluke. I don't want to overload this question with too much statistics, but these times are not outliers.</p> <p>This is entirely counter intuitive. What I would have guessed to be the fastest (from the n-print set) is significantly slower than my first version ('map') despite that one creating a <code>[None,...,None]</code> list with ten million entries. Does python optimise that list construction out? List comprehension is in between and not hugely surprising, but the most astonishing is that in 'cat' creating a huge <code>str</code> and calling <code>print</code> on that is by far the <em>fastest</em>. 
I would have assumed that the memory allocations on that one str would have destroyed the runtime entirely, even though having only one I/O operation looks better on paper (note that <code>flush=False</code> is the default).</p> <p>Main question: <strong>Why is the 'map' variant so much faster than other variants with n print calls?</strong><br /> <em>And why is 'cat' so much faster still?</em></p> <hr /> <p>Example map.py:</p> <pre><code>#!/usr/bin/python3 import sys if __name__ == '__main__': ints = list(map(int,sys.stdin.readlines())) list(map(print,ints)) </code></pre> <p>Run as</p> <pre><code>/usr/bin/time -f '%e' -o map.run-time -a ./map.py &lt; input.log | wc -l &gt; /dev/null </code></pre>
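The variants above can be put on an equal footing with a small harness; a hedged sketch (the helper names are mine, and capturing stdout into a StringIO removes terminal I/O, so absolute times will differ from the piped-through-wc setup in the question):

```python
import io
import sys
import time

def bench(fn, ints, repeats=3):
    """Run fn(ints) with stdout captured; return (best seconds, output)."""
    best, out = float("inf"), ""
    for _ in range(repeats):
        buf = io.StringIO()
        old, sys.stdout = sys.stdout, buf
        try:
            t0 = time.perf_counter()
            fn(ints)
            elapsed = time.perf_counter() - t0
        finally:
            sys.stdout = old  # always restore stdout, even if fn raises
        best, out = min(best, elapsed), buf.getvalue()
    return best, out

variants = {
    "map": lambda xs: list(map(print, xs)),
    "loop": lambda xs: [print(n) for n in xs],
    "cat": lambda xs: print("\n".join(map(str, xs))),
    "sep": lambda xs: print(*xs, sep="\n"),
}

ints = list(range(10_000))  # small sample; scale up for real measurements
results = {name: bench(fn, ints) for name, fn in variants.items()}
# Every variant must emit byte-identical output before times are comparable.
outputs = {name: out for name, (_, out) in results.items()}
```

Checking the outputs for byte-identity first is the point of the sketch: a timing comparison is only meaningful once all variants provably do the same work.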
<python><performance><optimization><language-implementation>
2025-07-07 21:02:47
2
35,164
bitmask
79,693,348
3,605,534
How to add an image to a page_navbar?
<p>I have a Shiny Python App in which I want to add an image as a logo on the menu's left hand side. I created my www folder and I saved the logo.png file there. I don't know why I cannot see the image when loading my Shiny App. I share the code as follows:</p> <pre><code>from shiny import App, ui app_ui = ui.page_navbar( ui.nav_panel(&quot;Bar&quot;, ui.h2(&quot;Bar plot&quot;)), ui.nav_panel(&quot;Map&quot;, ui.h2(&quot;Cloropleth&quot;)), ui.nav_spacer(), ui.nav_control(ui.input_dark_mode(id=&quot;dark_mode_switch&quot;)), title=ui.tags.a(ui.tags.img(src=&quot;logo.png&quot;, height=&quot;30px&quot;), &quot;&quot;, href=&quot;#&quot;) ) app = App(app_ui, server=None) if __name__ == '__main__': app.run() </code></pre> <p>My folder structure is main_folder which contains the www folder and the app.py file. I am using Python 3.12 and shiny 1.4.0.</p>
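One commonly suggested fix, sketched here with hedging (to my knowledge `static_assets` is a real `App` parameter, but check your shiny version's docs; launching via `shiny run app.py` may also serve `./www` automatically in recent releases):

```python
from pathlib import Path

# Resolve the www folder explicitly so the static route works no matter
# what the current working directory is.  Inside a real app.py you would
# typically use Path(__file__).parent / "www" instead.
www_dir = Path("www").resolve()

# Hypothetical wiring (requires shiny installed); with the directory
# mounted, src="logo.png" in ui.tags.img resolves as expected:
# app = App(app_ui, server=None, static_assets=www_dir)
```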
<python><py-shiny>
2025-07-07 19:49:12
1
945
GSandro_Strongs
79,693,288
19,270,168
YouTube Videos.list API has drastic delays when called by requests.get
<pre class="lang-py prettyprint-override"><code>def validate_youtube_video(link:str, sessionid:str) -&gt; Tuple[bool, str]: # Returns (status, sanitized_link|error_message) import urllib.parse res = urllib.parse.urlparse(link) vidid = None if res.netloc == 'www.youtube.com' and res.path == '/watch': queries = res.query.split('&amp;') for q in queries: if q.startswith('v='): vidid = q[2:] if res.netloc == 'youtu.be': vidid = res.path[1:] resp = requests.get(f'https://www.googleapis.com/youtube/v3/videos?id={vidid}&amp;part=snippet,contentDetails,status&amp;key=[REDACTED]').json() if resp['items'] == []: return (False, 'Invalid ID') if resp['items'][0]['snippet']['title'] != f'[REDACTED] {sessionid}' or resp['items'][0]['snippet']['description'] != '': return (False, 'Invalid title or description') if resp['items'][0]['status']['privacyStatus'] != 'unlisted': return (False, 'Privacy status is not unlisted') return (True, f'https://youtu.be/{vidid}') </code></pre> <p>This code aims to call the YouTube Videos.list API V3 to check if the video has required title, description and privacy status for anonymity. It takes less than 1 second in my browser (with the API key) but takes more than 4 minutes when called using <code>requests.get</code>. Does anyone know why?</p>
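A hedged sketch of the link parsing using stdlib helpers, plus an explicit timeout (the timeout value and the usual explanation for such hangs — a stalled IPv6 or proxy connection attempt — are assumptions; a timeout at least turns a silent multi-minute stall into an immediate, debuggable error):

```python
from urllib.parse import urlparse, parse_qs

def extract_video_id(link: str):
    """Return the video id from watch?v=... or youtu.be/... links, else None."""
    res = urlparse(link)
    if res.netloc in ("www.youtube.com", "youtube.com") and res.path == "/watch":
        # parse_qs handles '&' splitting and percent-decoding for us
        return parse_qs(res.query).get("v", [None])[0]
    if res.netloc == "youtu.be":
        return res.path.lstrip("/") or None
    return None

# When calling the API, fail fast instead of hanging indefinitely:
# resp = requests.get(api_url, timeout=10).json()
```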
<python><youtube><youtube-api><youtube-data-api>
2025-07-07 18:34:29
1
1,196
openwld
79,693,271
1,520,228
How to point Conda at a specific, non-standard, pre-existing python installation
<p>I feel like I am asking for a hack, but I want to be sure before I look into other options.</p> <p>I am trying to use conda to build repeatable environments on a given user's machine. For most uses everything works fine; however, game engines and DCCs (digital content creation tools: Maya, Blender, Unreal) often come with their own installation of Python that is compiled with their own packages/APIs/etc. These custom versions are compiled against a standard version of Python, but they can only be installed as a part of the main application. They do have a standardized location in the install folder, so finding it is not a problem, and they can be run independently of the associated application.</p> <p>Ideally I could provide a standard env file and feed in the local python exe and version via a bat file or something. That said, I am open to all ideas.</p> <p>Edit:</p> <p>For clarity regarding the installation of Python I am working with, I do mean a Python installation that currently exists on the user's machine.</p> <p>In the example below, an Autodesk Maya 2025 Installation, there is a version of Python (renamed to mayapy.exe) that is Python 3.11.4 compiled such that it has direct access to the app and its API.</p> <p><a href="https://i.sstatic.net/4a1bs4tL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4a1bs4tL.png" alt="Autodesk Maya 2025 Installation" /></a></p> <p><a href="https://i.sstatic.net/bbtoD3Ur.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bbtoD3Ur.png" alt="enter image description here" /></a></p>
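One hedged workaround (conda installs its own interpreter per environment and, to my knowledge, cannot adopt a pre-existing one): create a venv *from* the DCC's bundled interpreter, so the environment inherits its compiled-in APIs. The path below is illustrative, not a real requirement:

```shell
# Hypothetical location of the DCC interpreter; pass the real one in
# from a bat file or an environment variable, as the question suggests.
APP_PY="${APP_PY:-/c/Program Files/Autodesk/Maya 2025/bin/mayapy.exe}"

# The stdlib venv module works with any CPython build, unlike conda:
#   "$APP_PY" -m venv --system-site-packages .venv-maya
#   .venv-maya/Scripts/pip install -r requirements.txt
echo "would build a venv from: $APP_PY"
```

The `--system-site-packages` flag keeps the DCC's own modules (e.g. the Maya API) visible inside the venv, which is usually the point of starting from `mayapy.exe` in the first place.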
<python><anaconda><conda>
2025-07-07 18:10:46
1
1,786
TheBeardedBerry
79,693,239
10,242,281
How can I call Python script on remote server with xp_cmdshell?
<p>Trying to call a python script on <code>ServerB</code> from the local server and getting the error below; searched all about this error, figured out that it's about env variables, but it's not clear where in <code>xp_cmdshell</code> to change this setting. This is an existing legacy process which uses an <code>EXEC sp_executesql @sqlcmd</code> call from sql server; it works OK for other processes, and all security considerations are taken care of. In the error output I see a mixup of local and remote paths: my local path to python has <code>Program Files</code> in it, while on the remote ServerB Python is installed right under <code>c\Python312</code>.<br /> You can see that it determined <code>sys.executable</code> correctly, but the rest are set to the local install. I understand that I need to silence the local PYTHONHOME/PYTHONPATH, but it's not clear how; can anybody help?</p> <p>Thanks, this is my simple test1.py:</p> <pre><code>import socket #if 'PYTHONHOME' in os.environ: # del os.environ['PYTHONHOME'] print (socket.gethostname()) ### end of test1.py EXEC sp_executesql @sqlcmd; EXEC xp_cmdshell '\\ServerB\C$\Python312\python.exe \\ServerB\C$\python_Scripts\test1.py' </code></pre> <p>Python path configuration:</p> <pre class="lang-none prettyprint-override"><code>PYTHONHOME = 'C:\\Program Files\\Python37' ###&lt;&lt; ???? PYTHONPATH = (not set) program name = '\\ServerB\C$\Python312\python.exe ' isolated = 0 environment = 1 user site = 1 safe_path = 0 import site = 1 is in build tree = 0 stdlib dir = 'C:\Program Files\Python37\Lib' sys._base_executable = '\\\\ServerB\\C$\\Python312\\python.exe ' sys.base_prefix = 'C:\\\\Program Files\\\\Python37' ###&lt;&lt; ???? sys.base_exec_prefix = 'C:\\\\Program Files\\\\Python37' ###&lt;&lt; ???? sys.platlibdir = 'DLLs' sys.executable = '\\\\ServerB\\C$\\Python312\\python.exe ' sys.prefix = 'C:\\\\Program Files\\\\Python37' ###&lt;&lt; ???? sys.exec_prefix = 'C:\\\\Program Files\\\\Python37' ###&lt;&lt; ????
sys.path = [ '\\\\ServerB\\C$\\Python312\\python312.zip', 'C:\\Program Files\\Python37\\DLLs', ###&lt;&lt; ???? 'C:\\Program Files\\Python37\\Lib', ###&lt;&lt; ???? '\\\\ServerB\\C$\\Python312', ] </code></pre> <p>and output:</p> <pre class="lang-none prettyprint-override"><code>Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding Python runtime state: core initialized ModuleNotFoundError: No module named 'encodings' NULL Current thread 0x000059fc (most recent call first): &lt;no Python frame&gt; NULL </code></pre>
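A hedged sketch of the likely cause and fix: the stray `Python37` entries strongly suggest a machine-wide `PYTHONHOME` on the executing server pointing at the local 3.7 install, and clearing it on the same command line is a commonly used remedy. The Windows cmd form below is hypothetical; the portable part reproduces the exact `init_fs_encoding` failure from the question:

```shell
# Windows cmd form (hypothetical), mirroring the question's invocation:
#   EXEC xp_cmdshell 'cmd /c "set PYTHONHOME=&& set PYTHONPATH=&& \\ServerB\C$\Python312\python.exe \\ServerB\C$\python_Scripts\test1.py"'

# Portable demonstration: a bogus PYTHONHOME reproduces the fatal
# "init_fs_encoding ... No module named 'encodings'" startup error,
# and clearing the variable restores normal startup.
PYTHONHOME=/nonexistent python3 -c 'print("ok")' 2>/dev/null \
  || echo "startup failed with bad PYTHONHOME"
env -u PYTHONHOME python3 -c 'print("ok")'
```

Note the commented-out `del os.environ['PYTHONHOME']` in test1.py can never work here: the interpreter fails during startup, before any script line runs, so the variable must be cleared by the launching shell instead.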
<python><xp-cmdshell>
2025-07-07 17:44:17
0
504
Mich28
79,693,052
11,505,680
Windows CreateMutexW using Python not working
<p>I'm writing a Python application that connects to an external device. I want to enable the user to operate multiple devices simultaneously by running multiple instances of the application, but each device may only connect to one application instance. This calls for a mutex keyed to the device's serial number. With the help of ChatGPT, I came up with the following minimal example (without the serial number):</p> <pre class="lang-py prettyprint-override"><code>import ctypes import sys ERROR_ALREADY_EXISTS = 183 mutex = ctypes.windll.kernel32.CreateMutexW(None, 1, &quot;my-app\\1234&quot;) err = ctypes.GetLastError() print(f'Error code received: {err}') if err == ERROR_ALREADY_EXISTS: print(&quot;Another instance is already using the device.&quot;) sys.exit(0) input(&quot;Program running. Press Enter to exit...&quot;) </code></pre> <p>When I open two command windows and run this script in each one, neither one errors out and both give an <code>err</code> value of 3. I checked the <a href="https://learn.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-createmutexw" rel="nofollow noreferrer">Microsoft documentation for <code>CreateMutexW</code></a> and for the <a href="https://learn.microsoft.com/en-us/windows/win32/debug/system-error-codes--0-499-" rel="nofollow noreferrer">error codes</a>, and I can't figure out what I'm doing wrong.</p>
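A hedged sketch of two likely fixes (both are standard ctypes/Win32 practice, though untested against the questioner's exact setup): mutex names may not contain a backslash except in the `Global\`/`Local\` namespace prefix — the observed error 3 is ERROR_PATH_NOT_FOUND, consistent with the backslash in `"my-app\\1234"` — and the last-error value is only reliably captured when the DLL is loaded with `use_last_error=True` and read back via `ctypes.get_last_error()`:

```python
import ctypes
import sys

ERROR_ALREADY_EXISTS = 183

def acquire_named_mutex(name: str):
    """True if this process now owns the mutex, False if another instance
    already does, None when not running on Windows (sketch)."""
    if sys.platform != "win32":
        return None
    # use_last_error=True snapshots GetLastError right after each foreign
    # call, before other Python/ctypes activity can overwrite it.
    kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
    kernel32.CreateMutexW.restype = ctypes.c_void_p  # HANDLE, 64-bit safe
    handle = kernel32.CreateMutexW(None, True, name)
    if ctypes.get_last_error() == ERROR_ALREADY_EXISTS:
        return False
    return handle is not None

# Only the Local\/Global\ prefix may contain a backslash; the rest of the
# name must not, so embed the serial number with a different separator.
result = acquire_named_mutex("Local\\my-app-1234")
```

Per device, the serial number would go into the name after the prefix (e.g. `f"Local\\my-app-{serial}"`), giving one mutex per device as the question intends.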
<python><windows><mutex>
2025-07-07 15:02:55
1
645
Ilya