Columns (type, min to max):
QuestionId (int64): 74.8M to 79.8M
UserId (int64): 56 to 29.4M
QuestionTitle (string, length): 15 to 150
QuestionBody (string, length): 40 to 40.3k
Tags (string, length): 8 to 101
CreationDate (string, date): 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount (int64): 0 to 44
UserExpertiseLevel (int64): 301 to 888k
UserDisplayName (string, length): 3 to 30
79,684,031
1,779,973
How can I detect all missing packages at once when installing from a private PyPI index?
<p>I'm working on a Python project that uses a private Artifactory PyPI index. When I run:</p> <pre class="lang-bash prettyprint-override"><code>pip install -e &quot;.[dev]&quot; \ --index-url https://&lt;my-artifactory&gt;/artifactory/api/pypi/pypi-local/simple \ --extra-index-url https://&lt;my-artifactory&gt;/artifactory/api/pypi/pypi-local-external/simple </code></pre> <p>the installation fails if a required package is missing from the index. The problem is that it only reports the first missing package. Once I upload that one and try again, it fails on the next one, and so on.</p> <p>The real issue is that these are <strong>indirect dependencies</strong> — not explicitly listed in my pyproject.toml, but required by the build system or other packages. For example, editables was required by hatchling, but I only found that out after the first install attempt failed.</p> <p>I’ve tried using:</p> <pre class="lang-bash prettyprint-override"><code>pip install --dry-run --report report.json ... </code></pre> <p>but the report doesn’t always include missing build-time dependencies, and sometimes it’s empty or unhelpful.</p> <p><strong>Is there a reliable way to determine all missing packages, including indirect and build-system dependencies, in one go when installing from a private index?</strong></p>
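One approach (a sketch, not from the original post): resolve the full dependency set against an index where nothing is missing (e.g. public PyPI) using `pip install --dry-run --report`, then compare every resolved name against the private index in one pass. The helper below only shows the report-parsing half; the report's `install` list shape is taken from pip's documented `--report` format, and as the questioner notes, build-backend dependencies may still not appear in it.

```python
import json

def resolved_packages(report: dict) -> list[tuple[str, str]]:
    """Extract (name, version) pairs from a `pip install --report` JSON document."""
    out = []
    for item in report.get("install", []):
        meta = item.get("metadata", {})
        out.append((meta.get("name"), meta.get("version")))
    return out

# Minimal example with made-up values in pip's report shape:
sample = {"install": [
    {"metadata": {"name": "hatchling", "version": "1.25.0"}},
    {"metadata": {"name": "editables", "version": "0.5"}},
]}
print(resolved_packages(sample))  # [('hatchling', '1.25.0'), ('editables', '0.5')]
```

Each returned name can then be probed against the private index's `/simple/<name>/` page to build the complete missing list before uploading anything.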
<python><pip><pypi>
2025-06-29 20:28:17
1
536
Ido
79,683,722
5,312,606
How to rebuild c++ extension when using pyproject.toml
<p>I have a few C++ extensions for a Python library.</p> <p>Previously, this library was shipped using <code>setup.py</code> and one could recompile via <code>python setup.py build_ext --inplace</code> if only the C++ files changed.</p> <p>Since I had to change the build system to manage more complex dependencies, I switched over to <code>pyproject.toml + scikit-build-core + CMake</code>.</p> <p>Unfortunately I cannot find any way to recompile the changed files in this new setup. At the moment I have to always call <code>pip install .</code> which goes through the whole Python dependency management and compiles everything.</p> <p>TLDR: What is the equivalent of <code>make</code> in this setup?</p> <hr /> <p><code>pip install --no-deps .</code> gets rid of the Python dependency part, but it still recompiles all C++ files, even if they were not changed.</p>
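A sketch of one route (based on scikit-build-core's documented config settings; assumes a reasonably recent scikit-build-core): keep a persistent CMake build tree via `build-dir`, and enable automatic rebuild-on-import for editable installs, so only changed translation units recompile.

```shell
# Persistent build directory -> incremental CMake builds, plus
# rebuild-on-import for the editable install (scikit-build-core options):
pip install -e . --no-build-isolation \
  --config-settings=build-dir=build/{wheel_tag} \
  --config-settings=editable.rebuild=true
# Afterwards, importing the package re-runs the (incremental) CMake
# build when C++ sources changed -- roughly the `make` of this setup.
# Alternatively, with build-dir set you can invoke the build directly:
#   cmake --build build/<wheel_tag>
```

The `editable.rebuild` option requires `build-dir` to be set; treat the exact flags as something to verify against the scikit-build-core docs for the installed version.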
<python><c++><pybind11><pyproject.toml>
2025-06-29 12:03:42
1
1,897
mcocdawc
79,683,580
22,780,476
ModuleNotFoundError: No module named 'autogen_core' when importing FunctionTool and Tool
<pre><code>Traceback (most recent call last): File &quot;C:\Work\code_agent.py&quot;, line 2, in &lt;module&gt; from autogen_core.tools import FunctionTool, Tool ModuleNotFoundError: No module named 'autogen_core' </code></pre> <p>What is the correct way to import FunctionTool and Tool?</p> <p>I have pyautogen==0.9.0 installed.</p>
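A note not in the original post: `pyautogen` 0.9 ships the legacy `autogen` package, while the `autogen_core.tools` import path belongs to the separate `autogen-core` distribution (the rewritten AutoGen stack), so `pip install autogen-core` is likely what's missing. A small diagnostic sketch for telling the two installations apart:

```python
import importlib.util

def which_autogen() -> str:
    """Report which AutoGen distribution is importable (diagnostic sketch)."""
    if importlib.util.find_spec("autogen_core") is not None:
        return "autogen-core (new autogen-* packages)"
    if importlib.util.find_spec("autogen") is not None:
        return "autogen (legacy API shipped by pyautogen)"
    return "neither package found"

print(which_autogen())
```

`find_spec` returns `None` for an absent top-level package instead of raising, which makes it convenient for this kind of environment check.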
<python>
2025-06-29 08:04:27
1
402
amrita yadav
79,683,496
17,275,378
polars.read_database() Fails with Enum Type when Using schema_overrides
<p>I have a database table with an Enum column. My application does transforms with <code>polars</code>, so I'm reading with the <code>pl.read_database()</code> method. However, reading the table fails when <code>schema_overrides</code> is supplied.</p> <h4>Minimum Reproducible Example</h4> <h5>Imports</h5> <pre><code># Standard Imports import enum import polars as pl # Non-Standard Imports from sqlalchemy import Enum, Column from sqlmodel import SQLModel, create_engine from sqlmodel import insert, select, Field </code></pre> <h5>Define a Python Native Enum</h5> <pre><code>class CompassEnum(enum.Enum): NORTH = 'N' EAST = 'E' SOUTH = 'S' WEST = 'W' </code></pre> <h5>Define a SQLModel Table + Deploy to DB</h5> <pre><code>class Compass(SQLModel, table = True): id: int | None = Field(default = None, primary_key=True) compass_direction: CompassEnum = Field( sa_column = Column( Enum(CompassEnum), default = None, nullable = False, index = False ) ) ENGINE = create_engine('my_connection', echo = True) SQLModel.metadata.create_all(ENGINE) </code></pre> <h5>Insert a Record into the Database</h5> <pre><code>statement = insert(Compass)\ .values(compass_direction = 'NORTH') conn = ENGINE.connect() conn.execute(statement) conn.commit() conn.close() </code></pre> <h5>Read Back from the Database</h5> <pre><code>query = select(Compass.id, Compass.compass_direction) schema = {'id': pl.String, 'compass_direction': pl.Enum(CompassEnum)} # This works df_one = pl.read_database(query, ENGINE, infer_schema_length=1000) print(df_one) # This doesn't work df_two = pl.read_database(query, ENGINE, schema_overrides = schema) print(df_two) </code></pre> <h4>The Error</h4> <p>&quot;&quot;&quot;ComputeError: could not append value: CompassEnum.NORTH of type: object to the builder; make sure that all rows have the same schema or consider increasing <code>infer_schema_length</code>&quot;&quot;&quot;</p> <p>Note how when we allow schema inference with df_one, the method succeeds. However, I need to supply the schema because the table can be empty in production. The database is <code>CockroachDB</code>, which is an implementation of <code>Postgresql</code>.</p> <h4>My Essentials</h4> <ul> <li>polars==1.25.2</li> <li>Python 3.11.4</li> </ul>
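The error message suggests the driver hands back raw `CompassEnum` objects, which polars' Enum builder cannot append. One workaround (a sketch, not tested against CockroachDB): normalize each cell to the enum's *name* string (the label the database stores) before building the Enum-typed column. The normalization itself needs only the standard library:

```python
import enum

class CompassEnum(enum.Enum):
    NORTH = 'N'
    EAST = 'E'
    SOUTH = 'S'
    WEST = 'W'

def to_label(value):
    """Normalize a cell to the string label an Enum dtype expects."""
    return value.name if isinstance(value, enum.Enum) else value

rows = [CompassEnum.NORTH, 'EAST', None]
print([to_label(v) for v in rows])  # ['NORTH', 'EAST', None]
```

With the rows normalized this way, a frame can be built with plain string dtype and then cast, e.g. `pl.col('compass_direction').cast(pl.Enum([m.name for m in CompassEnum]))`, which sidesteps `schema_overrides` entirely for the Enum column.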
<python><enums><python-polars><cockroachdb><sqlmodel>
2025-06-29 04:31:21
0
326
eldrly
79,683,433
9,015,675
Python Bluetooth advertising fails when I include any manufacturing data
<p>I'm using a Raspberry Pi 3 with a fresh install of Raspberry Pi OS Lite 32-bit. I have a Bluetooth Dongle that supports Bluetooth Low Energy. I'm running Python 3.11.2.</p> <p>I am attempting to broadcast a Bluetooth advertisement. When I run my Python script I can see the advertisement using the LightBlue or nRF Connect phone apps. However, as soon as I add manufacturing data, I can no longer see the advertisement.</p> <p>Here is my Python code:</p> <pre class="lang-py prettyprint-override"><code>import dbus import dbus.mainloop.glib import dbus.service from gi.repository import GLib MANUFACTURER_ID = 0x0397 # LEGO PAYLOAD_BYTES = b'\x10\x06\x18\x1a\xb6' class Advertisement(dbus.service.Object): PATH_BASE = '/org/bluez/example/advertisement' def __init__(self, bus, index): self.path = self.PATH_BASE + str(index) self.bus = bus super().__init__(bus, self.path) def get_path(self): return dbus.ObjectPath(self.path) def get_properties(self): return { 'org.bluez.LEAdvertisement1': { 'Type': 'peripheral', 'LocalName': 'MyBrickPi', # 'ManufacturerData': { # dbus.Array([dbus.Byte(0x00)], signature='y') # dbus.UInt16(0x0397): dbus.Array([dbus.Byte(0x01)], signature='y') # dbus.UInt16(0x0397): dbus.Array([dbus.Byte(b) for b in b'\x10\x06\x18\x1a\xb6'], signature='y') # dbus.UInt16(0xFFFF): dbus.Array([dbus.Byte(0x01)], signature='y') # dbus.UInt16(0x0000): dbus.Array([dbus.Byte(0x01)], signature='y') # {dbus.UInt16(0x0000): [16, 5, 7, 24, 186, 175, 161]} # [{16: [[b'4a04389']]}] # }, # 'IncludeTxPower': True, # 'Appearance': dbus.UInt16(0), } } @dbus.service.method('org.freedesktop.DBus.Properties', in_signature='s', out_signature='a{sv}') def GetAll(self, interface): return self.get_properties()['org.bluez.LEAdvertisement1'] @dbus.service.method('org.bluez.LEAdvertisement1', in_signature='', out_signature='') def Release(self): print('Advertisement released') def main(): dbus.mainloop.glib.DBusGMainLoop(set_as_default=True) bus = dbus.SystemBus() adapter_path = 
'/org/bluez/hci0' adapter = bus.get_object('org.bluez', adapter_path) adapter_props = dbus.Interface(adapter, 'org.freedesktop.DBus.Properties') # Power on adapter if off try: powered = adapter_props.Get('org.bluez.Adapter1', 'Powered') if not powered: adapter_props.Set('org.bluez.Adapter1', 'Powered', dbus.Boolean(1)) except Exception as e: print(f&quot;Failed to get/set adapter power state: {e}&quot;) return ad_manager = dbus.Interface(adapter, 'org.bluez.LEAdvertisingManager1') advertisement = Advertisement(bus, 0) def on_success(): print(&quot;Advertising started.&quot;) def on_error(error): print(f&quot;Failed to advertise: {error}&quot;) ad_manager.RegisterAdvertisement(advertisement.get_path(), {}, reply_handler=on_success, error_handler=on_error) try: GLib.MainLoop().run() except KeyboardInterrupt: print(&quot;\nStopping advertisement...&quot;) try: ad_manager.UnregisterAdvertisement(advertisement) except Exception: print(&quot;Advertisement was not registered or already unregistered.&quot;) advertisement.remove_from_connection() if __name__ == '__main__': main() </code></pre> <p>The <code>get_properties</code> function returns the advertising data. Currently the <code>ManufacturerData</code> is commented out. As soon as I put that back in, there are no errors, but I can not detect the advertisement using LightBlue or nRF Connect. I left in a number of different formats for the manufacturing data that I have tried.</p> <p>I commented out the <code>IncludeTxPower</code> and <code>Appearance</code>, but they don't seem to have an effect either way.</p> <p>Eventually I want the manufacture ID to be 919 (LEGO). And at this point I don't care what the second data is, just any ASCII string. I just want to get something working.</p>
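On the D-Bus side, the form commonly reported to work (an assumption worth verifying against the BlueZ LEAdvertisement1 docs, not confirmed by this post) is an explicitly typed dictionary: `dbus.Dictionary({dbus.UInt16(0x0397): dbus.Array([dbus.Byte(b) for b in PAYLOAD_BYTES], signature='y')}, signature='qv')`. Independent of the plumbing, it helps to know what BlueZ ultimately puts on the air: a Manufacturer Specific Data AD structure (type 0xFF) whose payload starts with the 16-bit company ID in little-endian. A standard-library sketch of that layout:

```python
import struct

def manufacturer_ad(company_id: int, payload: bytes) -> bytes:
    """Build the raw BLE AD structure for manufacturer data:
    [length][0xFF][company id, little-endian][payload]."""
    body = struct.pack('<H', company_id) + payload   # company ID is little-endian
    return bytes([len(body) + 1, 0xFF]) + body       # length counts the type byte

ad = manufacturer_ad(0x0397, b'\x10\x06\x18\x1a\xb6')  # 0x0397 = 919 = LEGO
print(ad.hex())  # 08ff97031006181ab6
```

Note the whole advertisement is capped at 31 bytes, so a long `LocalName` plus manufacturer data can silently exceed the budget; shortening the name is a cheap thing to try when the advertisement disappears.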
<python><bluetooth><bluetooth-lowenergy>
2025-06-29 01:47:56
1
503
Adam
79,683,425
27,596,369
df.head() is returning ValueError: Per-column arrays must each be 1-dimensional
<p>Here is my code:</p> <pre><code>set_theme_count = sets_df[&quot;theme_id&quot;].value_counts()[:-5] </code></pre> <p>Result: <a href="https://i.sstatic.net/rUIm7vvk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUIm7vvk.png" alt="DataFrame Head" /></a></p> <p>Now, I want to rename the columns so I can use pd.merge. So this is what I did:</p> <pre><code>set_theme_count = pd.DataFrame({'id' :set_theme_count.index, 'set_count': set_theme_count.values}) </code></pre> <p>This code runs successfully, but when I try to do <code>set_theme_count.head()</code>, I am getting this error:</p> <pre><code>ValueError: Per-column arrays must each be 1-dimensional </code></pre> <p>I am very confused since the DataFrame gets created successfully, but the <code>.head()</code> throws an error. Also, if there is another way to rename the columns, please let me know.</p> <p>I am using Google Colab.</p>
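A sketch of the usual alternative (with made-up sample data): let `reset_index` turn the `value_counts()` Series into a two-column frame instead of hand-building one from `.index`/`.values`, which is exactly where a stray 2-D array triggers this `ValueError`.

```python
import pandas as pd

sets_df = pd.DataFrame({"theme_id": [1, 1, 1, 2, 2, 3]})  # sample data

set_theme_count = (
    sets_df["theme_id"]
    .value_counts()                  # Series: theme_id -> count
    .rename_axis("id")               # name the index so reset_index keeps it
    .reset_index(name="set_count")   # -> DataFrame with columns: id, set_count
)
print(set_theme_count.head())
```

The same `[:-5]` slice from the question can be applied to the Series before `reset_index` if trimming is still wanted.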
<python><pandas><dataframe>
2025-06-29 01:22:38
1
1,512
Aadvik
79,683,236
6,043,170
python can't connect, but openssl can
<p>I am trying to connect to a legacy (windows server 2008R2) server using python / winrm. I am seeing an issue where openssl can establish the ssl socket cleanly, but python and winrm can't. I am using python3.12 on ubuntu 24.04 (noble)</p> <p>here is the python code:</p> <pre class="lang-py prettyprint-override"><code>import ssl, socket ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2) ctx.verify_mode = ssl.CERT_NONE with socket.create_connection(('server.example.com', 5986)) as sock: ssock = ctx.wrap_socket(sock) print(ssock.cipher(), ssock.version()) </code></pre> <p>this code fails with the following error:</p> <p><code>SSLEOFError: [SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1010)</code></p> <p>and here is the output from <code>openssl s_client -connect</code>:</p> <pre><code>CONNECTED(00000003) depth=0 CN = server.example.com verify error:num=20:unable to get local issuer certificate verify return:1 depth=0 CN = server.example.com verify error:num=21:unable to verify the first certificate verify return:1 --- Certificate chain 0 s:CN = server.example.com i:CN = server.example.com --- Server certificate -----BEGIN CERTIFICATE----- MIIFADCC.... -----END CERTIFICATE----- subject=CN = server.example.com issuer=CN = server.example.com --- No client certificate CA names sent Peer signing digest: SHA1 Peer signature type: RSA Server Temp Key: ECDH, P-256, 256 bits --- SSL handshake has read 2084 bytes and written 505 bytes Verification error: unable to verify the first certificate --- New, TLSv1.2, Cipher is ECDHE-RSA-AES256-SHA384 Server public key is 4096 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE No ALPN negotiated SSL-Session: Protocol : TLSv1.2 Cipher : ECDHE-RSA-AES256-SHA384 Session-ID: E223..... Session-ID-ctx: Master-Key: 83FBC..... 
PSK identity: None PSK identity hint: None SRP username: None Start Time: 1751134634 Timeout : 7200 (sec) Verify return code: 21 (unable to verify the first certificate) Extended master secret: yes --- </code></pre> <p>they both see the unexpected EOF, but it looks like python can't handle the unexpected EOF gracefully while openssl can. Is there anything I can do to establish the ssl connection in python like I can with openssl or curl?</p>
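The openssl output shows the server signing the handshake with SHA-1 (`Peer signing digest: SHA1`) and only offering legacy TLS 1.2 ciphers. Ubuntu 24.04 builds Python against OpenSSL 3, whose default security level rejects SHA-1 handshake signatures, so the server likely aborts mid-handshake and Python reports the EOF. A hedged sketch (deliberately weakening TLS, so only for hosts you trust): lower the security level on the context before wrapping the socket.

```python
import ssl

# Sketch: relax OpenSSL's security level so a legacy server (SHA-1
# handshake signature, old cipher suites) can complete the handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE                 # the cert is self-signed anyway
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
try:
    ctx.set_ciphers("DEFAULT:@SECLEVEL=0")      # SECLEVEL=0 permits SHA-1 signatures
except ssl.SSLError:
    pass  # some OpenSSL/LibreSSL builds do not understand @SECLEVEL

# then: ctx.wrap_socket(sock) as in the original snippet
print(ctx.minimum_version)
```

If `SECLEVEL=0` is too permissive, `@SECLEVEL=1` is worth trying first; the right value depends on exactly which legacy feature the server needs.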
<python><ssl><openssl>
2025-06-28 18:20:52
1
16,057
2ps
79,683,201
2,534,938
Using Tkinter in Python, How Can I Autosize a Canvas to its Parent Frame
<p>I'm trying to build a Tkinter form in Python that has three &quot;frames&quot;: a narrow frame across the top (for a title), a thin frame down the left (for buttons), and a main frame (to show content). The left frame will have buttons, and the top and main frames will have canvases that can be drawn on. I'd like to use &quot;grid&quot; placement of the Tkinter widgets to simplify layout.</p> <p>The code below works perfectly if I don't include the canvas in the top panel: the frames, the buttons in the left frame, and the canvas in the main frame are all laid out exactly as I want (and note particularly that the canvas in the main frame sized itself to fit the allotted space). (See Figure 1.)</p> <p>However, when I add a canvas to the top panel (in exactly the same manner as the canvas in the main panel), it doesn't size itself properly. (The width is correct, but the height isn't.) (Figure 2.) How do I make the canvas in the top panel size itself properly (and/or what am I doing wrong)?</p> <p>Figure 1: Without Canvas in the Top Panel <a href="https://i.sstatic.net/cEuvyHgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cEuvyHgY.png" alt="Figure 1 - Without Canvas in Top Panel" /></a></p> <p>Figure 2: With Canvas Added To the Top Panel <a href="https://i.sstatic.net/pBDNYgwf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBDNYgwf.png" alt="Figure 2 - With Canvas Added to Top Panel" /></a></p> <pre><code>from tkinter import Tk tk = Tk() from tkinter import Frame, Canvas, Button screenWidth = tk.winfo_screenwidth() screenHeight = tk.winfo_screenheight() formWidth = int(screenWidth * 0.8) formHeight = int(screenHeight * 0.8) formLeft = (screenWidth - formWidth) // 2 formTop = (screenHeight - formHeight) // 2 tk.geometry(f'{formWidth}x{formHeight}+{formLeft}+{formTop}') tk.resizable(False, False) tk.columnconfigure(0, weight=10) tk.columnconfigure(1, weight=90) tk.rowconfigure(0, weight=10) tk.rowconfigure(1, weight=90) topFrame = 
Frame(tk, bg=&quot;blue&quot;, highlightthickness=0) topFrame.grid(column=0, row=0, columnspan=2, sticky=&quot;nwes&quot;) topFrame.columnconfigure(0, weight=1) topFrame.rowconfigure(0, weight=1) leftFrame = Frame(tk, bg=&quot;green&quot;, highlightthickness=0) leftFrame.grid(column=0, row=1, sticky=&quot;nwes&quot;) leftFrame.columnconfigure(0, weight=1) #leftFrame.rowconfigure(0, weight=1) mainFrame = Frame(tk, bg=&quot;purple&quot;, highlightthickness=0) mainFrame.grid(column=1, row=1, sticky=&quot;nwes&quot;) mainFrame.columnconfigure(0, weight=1) mainFrame.rowconfigure(0, weight=1) startButton = Button(leftFrame, text=&quot;Start&quot;) startButton.grid(column=0, row=0, sticky=&quot;ew&quot;) stopButton = Button(leftFrame, text=&quot;Stop&quot;) stopButton.grid(column=0, row=1, sticky=&quot;ew&quot;) titleCanvas = Canvas(topFrame, bg=&quot;indigo&quot;, highlightthickness=0) titleCanvas.grid(column=0, row=0, sticky=&quot;nwes&quot;) mainCanvas = Canvas(mainFrame, bg=&quot;gray&quot;, highlightthickness=0) mainCanvas.grid(column=0, row=0, sticky=&quot;nwes&quot;) tk.update() # Show that the main canvas has sized itself correctly. mainCanvas.create_line(0, 0, mainCanvas.winfo_width(), mainCanvas.winfo_height(), width=2, fill=&quot;black&quot;) tk.mainloop() </code></pre>
<python><tkinter><canvas><grid>
2025-06-28 17:47:07
2
733
Boulder Keith
79,683,049
5,568,409
How to convert annotation in bold text?
<p>This is a reproducible program displaying an annotated line:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt plt.rcParams['text.usetex'] = True fig, ax = plt.subplots(figsize=(6, 4)) point1 = [0, 1] point2 = [np.sqrt(2), 0] x_values = [point1[0], point2[0]] y_values = [point1[1], point2[1]] ax.plot(x_values, y_values, 'bo', linestyle=&quot;-&quot;) ax.annotate( r&quot;$y = -\,\frac{1}{\sqrt(2)}x + 1$&quot;, xy = (0.2, 0.7), rotation = -35, fontsize = 12) plt.show() </code></pre> <p>Now I am looking for the best way to display the annotation in <strong>bold</strong>.</p> <p>I tried to use <code>\textbf</code>:</p> <pre><code>ax.annotate( r&quot;\textbf{$y = -\,\frac{1}{\sqrt(2)}x + 1$}&quot;, xy = (0.2, 0.7), rotation = -35, fontsize = 12) </code></pre> <p>but nothing changed. I guess a misuse of <code>\textbf</code>... What would be the proper use?</p>
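Two notes not in the original post: with `text.usetex=True`, `\textbf{}` does not bold math-mode content; the LaTeX-side tool is `\boldsymbol{...}` from amsmath (loaded via `plt.rcParams['text.latex.preamble'] = r'\usepackage{amsmath}'`). A sketch that avoids needing a TeX installation at all uses matplotlib's built-in mathtext with `\mathbf`, and also fixes `\sqrt(2)` to the correct `\sqrt{2}`:

```python
import io
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot([0, 2 ** 0.5], [1, 0], 'bo', linestyle='-')
# mathtext (no TeX install): \mathbf bolds the symbols, \sqrt{2} uses braces
ax.annotate(r"$\mathbf{y = -\frac{1}{\sqrt{2}}\,x + 1}$",
            xy=(0.2, 0.7), rotation=-35, fontsize=12)

buf = io.BytesIO()
fig.savefig(buf, format="png")   # force a full render of the math text
print(buf.getbuffer().nbytes > 0)
```

Whether mathtext or usetex bolding looks better depends on the fonts in use; the usetex route keeps full LaTeX typography.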
<python><matplotlib>
2025-06-28 13:45:24
1
1,216
Andrew
79,683,006
7,517,392
Transform Dataframe with Duplicate Date Entries
<p>I have a dataframe that looks like this (so there is no unique Index):</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Data</th> <th>Product</th> </tr> </thead> <tbody> <tr> <td>1/1/25</td> <td>1</td> <td>A</td> </tr> <tr> <td>2/1/25</td> <td>2</td> <td>A</td> </tr> <tr> <td>3/1/25</td> <td>3</td> <td>A</td> </tr> <tr> <td>1/1/25</td> <td>1</td> <td>B</td> </tr> <tr> <td>2/1/25</td> <td>4</td> <td>B</td> </tr> <tr> <td>3/1/25</td> <td>5</td> <td>B</td> </tr> <tr> <td>1/1/25</td> <td>1</td> <td>C</td> </tr> <tr> <td>2/1/25</td> <td>6</td> <td>C</td> </tr> <tr> <td>3/1/25</td> <td>7</td> <td>C</td> </tr> </tbody> </table></div> <p>There are many products in this dataframe (not just 3). I want to transform it so each product has its own column for the 'Data' column like this (with the name based on the Product column).</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>Data_A</th> <th>Data_B</th> <th>Data_C</th> </tr> </thead> <tbody> <tr> <td>1/1/25</td> <td>1</td> <td>1</td> <td>1</td> </tr> <tr> <td>2/1/25</td> <td>2</td> <td>4</td> <td>6</td> </tr> <tr> <td>3/1/25</td> <td>3</td> <td>5</td> <td>7</td> </tr> </tbody> </table></div> <p>I thought maybe I could select only the rows for each Product and then merge each Product to a new dataframe? Would love to know the easiest way to do this. Thank you!</p>
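The per-product merge loop the questioner describes is not needed; this is the standard `pivot` reshape. A sketch using the sample table from the question:

```python
import pandas as pd

df = pd.DataFrame({
    "Date":    ["1/1/25", "2/1/25", "3/1/25"] * 3,
    "Data":    [1, 2, 3, 1, 4, 5, 1, 6, 7],
    "Product": ["A"] * 3 + ["B"] * 3 + ["C"] * 3,
})

wide = (
    df.pivot(index="Date", columns="Product", values="Data")
      .add_prefix("Data_")          # A -> Data_A, B -> Data_B, ...
      .reset_index()
)
wide.columns.name = None            # drop the leftover "Product" axis label
print(wide)
```

`pivot` requires each (Date, Product) pair to be unique; if true duplicates exist, `pivot_table` with an aggregation function is the drop-in replacement.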
<python><pandas><dataframe>
2025-06-28 12:40:36
2
361
Stanford Wong
79,682,984
5,118,421
rsync makes folder read-only for venv despite 777 and chown
<p><strong>Steps</strong>:</p> <ol> <li><p>Run <code>rsync</code> against a remote ssh server on a folder containing a Python venv, let's assume <code>.venv</code>.</p> </li> <li><p>Run <code>source .venv/bin/python3</code> in either the source or destination folder, let's assume the <em>destination folder</em>.</p> </li> <li><p>Create an empty Python file, let's assume <em>some_file.py</em>.</p> </li> <li><p>Run <code>chmod 777</code> on the <em>destination folder</em>:</p> <p>chmod -R 777 .</p> </li> <li><p>Try to run with Python any file in the <em>destination folder</em> from the <em>destination folder</em>:</p> <p>python some_file.py</p> </li> </ol> <p><strong>Expected:</strong></p> <p>Python runs successfully.</p> <p><strong>Actual:</strong></p> <pre><code>INFO: Will watch for changes in these directories: ['/home/taagcgaat/ollama_travel_assistant'] ERROR: [Errno 13] Permission denied </code></pre> <p><strong>System:</strong></p> <p><em>OS:</em> Ubuntu 22 LTS</p> <p><em>Rsync version:</em> 3.2.7</p> <p>SELinux is disabled.</p>
<python><python-3.x><linux><ubuntu><rsync>
2025-06-28 12:10:01
1
1,407
Irina
79,682,876
5,568,409
How to create same ticks and labels on both axes?
<p>I have to create a plot on which <code>ticks</code> and <code>labels</code> are specifically defined; an example of a reproducible plot is given below:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns plt.style.use('seaborn-v0_8') fig, ax = plt.subplots(figsize=(4, 4)) ticks = [0.00, 0.25, 0.50, 0.75, 1.00] ax.set_xticks(ticks) ax.set_xticklabels(ticks, weight='bold', size=8) ax.set_yticks(ticks) ax.set_yticklabels(ticks, weight='bold', size=8) plt.show() </code></pre> <p>As <code>ticks</code> and <code>labels</code> are exactly the same on both axes, is there a way to set them with a single command? Something mixing both <code>xticks</code> and <code>yticks</code>?</p>
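There is no single combined xy-ticks call in matplotlib, but one sketch (not from the original post) is to loop over the two `Axis` objects, which keeps the configuration in one place:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(4, 4))
ticks = [0.00, 0.25, 0.50, 0.75, 1.00]

# One loop over both axis objects instead of duplicated x/y calls:
for axis in (ax.xaxis, ax.yaxis):
    axis.set_ticks(ticks)
    axis.set_ticklabels(ticks, weight='bold', size=8)

print([t.get_text() for t in ax.get_xticklabels()])
```

For the positions alone, `ax.set(xticks=ticks, yticks=ticks)` also works in one call; the loop is only needed because the bold/size label styling has no combined setter.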
<python><python-3.x><xticks><yticks>
2025-06-28 09:28:34
2
1,216
Andrew
79,682,838
595,305
Is there any way to obtain a listing of enum names and values in a typedef for QFlags?
<p>As such I (questioner) am no longer looking for an answer but I am not sure on what grounds to recommend closing. It could possibly be helpful to someone.</p> <p>Here's an example of what I mean (based on <a href="https://doc.qt.io/archives/qt-5.15/qt.html#Orientation-enum" rel="nofollow noreferrer">Qt5 documentation</a>):</p> <blockquote> <p>This type is used to signify an object's orientation.</p> </blockquote> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th><strong>Constant</strong></th> <th><strong>Value</strong></th> </tr> </thead> <tbody> <tr> <td><code>Qt::Horizontal</code></td> <td><code>0x1</code></td> </tr> <tr> <td><code>Qt::Vertical</code></td> <td><code>0x2</code></td> </tr> </tbody> </table></div> <blockquote> <p>Orientation is used with QScrollBar for example.<br /> The Orientations type is a typedef for QFlags&lt;Orientation&gt;. It stores an OR combination of Orientation values.</p> </blockquote> <p>What I'm trying to obtain (automatically, programmatically) is a <code>dict</code> like this:</p> <pre class="lang-py prettyprint-override"><code>{ &quot;Horizontal&quot;: 1, &quot;Vertical&quot;: 2 } </code></pre> <p>...with a view to giving a &quot;readable&quot; representation of the result which delivers a value of this <code>Orientation</code> <code>enum</code>.</p> <p>Obviously these values are a subset of &quot;generic&quot; values from an enormous list of <code>QtCore.Qt</code> constants, with values which are often <code>int</code>, often equal to 1 or to 2, and often <code>OR</code>ed together to produce binary values as results. This <code>Orientation</code> enum appears to be used in many contexts. In my particular case, I want to find the result of <code>QSizePolicy.expandingDirections()</code>, which returns a value in this <code>Orientation</code> enum.</p> <p>I have tried exploring both the <code>Qt.Orientation</code> enum class and the <code>Qt.Orientations</code> &quot;flags&quot; class, i.e. 
I've looked at <code>dir(Qt.Orientation)</code>, etc.</p> <p>Usually <code>dir</code> on an <code>enum</code> class will show all the possible constant values. Not in this case. Is there any way of obtaining them?</p> <p>The problem outlined above only applies to PyQt5, but doesn't in PyQt6.</p>
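For a regular Python enum the mapping the questioner wants is available via `__members__`; a sketch of the generic pattern with a stand-in `IntFlag` (PyQt5's sip-wrapped enum/QFlags types are precisely the classes that may not expose this, which is the questioner's point, while PyQt6's native Python enums do):

```python
import enum

class Orientation(enum.IntFlag):   # stand-in for Qt.Orientation
    Horizontal = 0x1
    Vertical = 0x2

def enum_dict(enum_cls) -> dict:
    """Name -> value mapping for an enum class."""
    return {name: member.value for name, member in enum_cls.__members__.items()}

def readable_flags(enum_cls, value: int) -> str:
    """Render an OR-combined flags value as 'Name1|Name2'."""
    names = [n for n, v in enum_dict(enum_cls).items() if value & v]
    return "|".join(names) or str(value)

print(enum_dict(Orientation))            # {'Horizontal': 1, 'Vertical': 2}
print(readable_flags(Orientation, 0x3))  # Horizontal|Vertical
```

For PyQt5 specifically, a fallback sometimes used is scanning `vars(ParentClass)` for instances of the enum type; that is an assumption to verify per PyQt version, not something this snippet demonstrates.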
<python><enums><pyqt><pyqt5><pyqt6>
2025-06-28 08:24:09
1
16,076
mike rodent
79,682,339
5,568,409
How to introduce seed in np.random.random?
<p>I need to create a number of points in a square [0,1]x[0,1].</p> <p>To obtain an array of coordinates, I used:</p> <pre><code>N_fixed_points = 10 selection = np.random.random((N_fixed_points,2)) </code></pre> <p>which gives me a proper result.</p> <p>But I would like to reproduce the same selection output and I know that I should use a random number generator such as:</p> <pre><code>rng = np.random.default_rng(seed=1949) </code></pre> <p>What is the simplest way to update the <code>selection</code> line of code with <code>rng</code>?</p>
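The update is a one-line swap: call `.random()` on the seeded `Generator` instead of on the `np.random` module. A sketch, including a check that the draw is reproducible:

```python
import numpy as np

N_fixed_points = 10
rng = np.random.default_rng(seed=1949)
selection = rng.random((N_fixed_points, 2))   # same call, now on the Generator

# Re-creating the generator with the same seed reproduces the draw exactly:
rng2 = np.random.default_rng(seed=1949)
print(np.array_equal(selection, rng2.random((N_fixed_points, 2))))  # True
```

Note the reproducibility is per-generator state: two consecutive `rng.random(...)` calls on the *same* generator give different values, so re-seed (or re-create the generator) wherever the identical sample is needed.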
<python><python-3.x><random><seed>
2025-06-27 18:06:34
1
1,216
Andrew
79,682,250
2,939,369
is there a way to assign a numpy array into pandas without a copy?
<p>I'm in a performance-critical field, where we store our results in pandas dataframes. The issue is we are doing most of our computations in numpy and then assigning to pandas later - but this forces a copy on assignment: <code>df['col'] = arr # this will create a copy</code></p> <p>My question: is there a pandas-friendly way to assign in a way that will not break in the future? Is this feature in the pipeline? I currently found <code>df._set_item_mgr('col', arr)</code> but am wary of potential changes in the future.</p> <p>I was thinking of making an issue on GH, but wanted to see what everyone thinks before submitting :)</p>
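A note beyond the original post: `_set_item_mgr` is private and can change without warning, and there is no public zero-copy column *assignment*. What is public is building the frame around the arrays in the first place: constructing from a 2-D ndarray has historically created a view, though whether it stays one depends on the pandas version and copy-on-write settings, so check rather than assume. A sketch:

```python
import numpy as np
import pandas as pd

arr = np.zeros((4, 2))
df = pd.DataFrame(arr, columns=["a", "b"])   # 2-D, single dtype: often a view

# Whether the frame really shares memory with `arr` depends on the
# pandas version and copy-on-write mode -- verify instead of assuming:
print(np.shares_memory(arr, df.to_numpy()))
print(df.shape)
```

If zero-copy column-wise assembly is a hard requirement, arranging the numpy computation to write into `df.to_numpy()`-style preallocated blocks (or keeping the data in numpy until the last step) is usually more robust than relying on pandas internals.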
<python><pandas><numpy>
2025-06-27 16:48:55
1
831
Raven
79,682,246
15,520,615
Unable to Read from Azure Event Hub with Spark
<p>I am successfully sending dummy data to Azure Event Hub with the following Python Script:</p> <pre><code>import dbldatagen as dg from pyspark.sql.types import IntegerType, StringType, FloatType import json from pyspark.sql.types import StructType, StructField, IntegerType, DecimalType, StringType, TimestampType, Row, BooleanType from pyspark.sql.functions import * import pyspark.sql.functions as F num_rows = 1 * 10 # number of rows to generate num_partitions = 1 # number of Spark dataframe partitions delay_reasons = [&quot;Air Carrier&quot;, &quot;Extreme Weather&quot;, &quot;National Aviation System&quot;, &quot;Security&quot;, &quot;Late Aircraft&quot;] # will have implied column `id` for ordinal of row flightdata_defn = (dg.DataGenerator(spark, name=&quot;flight_delay_data&quot;, rows=num_rows, partitions=num_partitions) .withColumn(&quot;body&quot;,StringType()) .withColumn(&quot;flightNumber&quot;, &quot;int&quot;, minValue=1000, uniqueValues=10000, random=True) .withColumn(&quot;airline&quot;, &quot;string&quot;, minValue=1, maxValue=500, prefix=&quot;airline&quot;, random=True, distribution=&quot;normal&quot;) .withColumn(&quot;original_departure&quot;, &quot;timestamp&quot;, begin=&quot;2020-01-01 01:00:00&quot;, end=&quot;2020-12-31 23:59:00&quot;, interval=&quot;1 minute&quot;, random=True) .withColumn(&quot;delay_minutes&quot;, &quot;int&quot;, minValue=20, maxValue=600, distribution=dg.distributions.Gamma(1.0, 2.0)) .withColumn(&quot;delayed_departure&quot;, &quot;timestamp&quot;, expr=&quot;cast(original_departure as bigint) + (delay_minutes * 60) &quot;, baseColumn=[&quot;original_departure&quot;, &quot;delay_minutes&quot;]) .withColumn(&quot;reason&quot;, &quot;string&quot;, values=delay_reasons, random=True) ) df_flight_data = flightdata_defn.build(withStreaming=True, options={'rowsPerSecond': 10}) streamingDelays = ( df_flight_data .groupBy( # df_flight_data.body, df_flight_data.flightNumber, df_flight_data.airline, 
df_flight_data.original_departure, df_flight_data.delay_minutes, df_flight_data.delayed_departure, df_flight_data.reason, window(df_flight_data.original_departure, &quot;1 hour&quot;) ) .count() ) writeConnectionString = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString) checkpointLocation = &quot;///checkpoint&quot; ehWriteConf = { 'eventhubs.connectionString' : writeConnectionString } connectionString = connectionString ehConf = {} ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString) # Write body data from a DataFrame to EventHubs. Events are distributed across partitions using round-robin model. ds = streamingDelays \ .select(F.to_json(F.struct(&quot;*&quot;)).alias(&quot;body&quot;)) \ .writeStream.format(&quot;eventhubs&quot;) \ .options(**ehWriteConf) \ .outputMode(&quot;complete&quot;) \ .option(&quot;checkpointLocation&quot;, &quot;...&quot;) \ .start() display(streamingDelays) # display(ds) </code></pre> <p>The data being sent to the Event Hub looks like the following:</p> <p><a href="https://i.sstatic.net/cWmHsf8g.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWmHsf8g.png" alt="enter image description here" /></a></p> <p>I now attempt to read the data with the following code:</p> <pre><code>connectionString = connectionString ehConf = {} ehConf['eventhubs.connectionString'] = sc._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(connectionString) streaming_df = spark \ .readStream \ .format(&quot;eventhubs&quot;) \ .options(**ehConf) \ .load() display(streaming_df) </code></pre> <p>However, no data is retrieved when I attempt to read the Event Hub.</p> <p>Can someone let me know where I might be going wrong?</p>
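One likely cause, not stated in the post: the azure-event-hubs-spark connector's default starting position is the *end* of the stream, so a fresh reader only sees events produced after the query starts. The connector accepts an `eventhubs.startingPosition` option as a JSON document; per its docs, offset `"-1"` means the start of the stream. A sketch of building that config (the `encrypt` call stays as in the question):

```python
import json

# Offset "-1" = start of stream in the EventHubs Spark connector.
starting_position = {
    "offset": "-1",
    "seqNo": -1,
    "enqueuedTime": None,
    "isInclusive": True,
}

ehConf = {
    "eventhubs.startingPosition": json.dumps(starting_position),
    # "eventhubs.connectionString": sc._jvm.org.apache.spark.eventhubs
    #     .EventHubsUtils.encrypt(connectionString),   # as in the question
}
print(ehConf["eventhubs.startingPosition"])
```

With this in place, `spark.readStream.format("eventhubs").options(**ehConf).load()` should replay from the beginning; the exact field names are from the connector's documentation and worth re-checking against the installed connector version.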
<python><azure><pyspark><azure-eventhub>
2025-06-27 16:45:53
0
3,011
Patterson
79,682,238
126,833
Add header and footer to a second DOCX only that would eventually be appended to a cover page DOCX
<p>I have two DOCX files - ID-1.docx and ID-2.docx - ID-1 being a cover page and ID-2 being the content. ID-2 doesn't have a header and footer, and I need the header and footer only in ID-2. After that I merge the two DOCXs.</p> <pre class="lang-php prettyprint-override"><code>$doc1 = escapeshellarg(DOCX_PATH.&quot;$ID-1.docx&quot;); $doc2 = escapeshellarg(DOCX_PATH.&quot;$ID-2.docx&quot;); $output = escapeshellarg(DOCX_PATH.&quot;$ID.docx&quot;); $cmd = PYTHON.&quot; &quot;.ABS_PATH.&quot;merge_docs.py $doc1 $doc2 $output 2&gt;&amp;1&quot;; exec($cmd, $outputLines, $returnCode); </code></pre> <p>How do I add a header and footer to ID-2 only? I think Python's docx capabilities are better than PhpWord's.</p> <p>merge_docs.py:</p> <pre class="lang-py prettyprint-override"><code>import warnings warnings.filterwarnings(&quot;ignore&quot;, category=UserWarning, module=&quot;docxcompose&quot;) import sys from docx import Document from docxcompose.composer import Composer import os doc1_path = sys.argv[1] doc2_path = sys.argv[2] output_path = sys.argv[3] doc1 = Document(doc1_path) doc2 = Document(doc2_path) composer = Composer(doc1) composer.append(doc2) composer.save(output_path) </code></pre>
<python><php>
2025-06-27 16:34:19
0
4,291
anjanesh
79,682,223
9,989,779
Celery is not running despite correct configuration - using Django, Celery 5+, and Python 3.12+
<p>Despite what appears to be a correct initial configuration, Celery tasks are not running. The logging system does not show any errors. This is a new setup of Celery version 5+ running on Python 3.12+. Full server-side configuration has been added. All required packages and dependencies have been installed, and the application itself does not raise any errors.</p> <p>What could be the reason the test Celery task is not executing every minute as scheduled, according to the interval specified in settings.py?</p> <p>tasks.py</p> <pre><code>from celery import shared_task from datetime import datetime from .models import CeleLog @shared_task def get_all_weather_data(): now = datetime.now().strftime('%Y-%m-%d %H:%M:%S') CeleLog.objects.create(content=f&quot;Current time: {now}&quot;) print(f&quot;Logged time: {now}&quot;) </code></pre> <p>settings.py</p> <pre><code>CELERY_BROKER_URL = 'amqp://localhost' from celery.schedules import crontab CELERY_BEAT_SCHEDULE = { 'log-every-minute': { 'task': 'company_dashboard.tasks.get_all_weather_data', # &lt;- path to the task function 'schedule': crontab(minute='*/1'), }, } </code></pre> <p>celery.py - project</p> <pre><code>import os from celery import Celery from django.conf import settings os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app_rama.settings') app = Celery('app_rama') app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) </code></pre> <p><code>__init__.py</code> - project</p> <pre><code>from .celery import app as celery_app </code></pre> <p>app-celery-worker.conf</p> <pre><code>[program:app-celery-worker] command=/home/app/bin/celery -A app_rama worker -l INFO directory=/home/app/app user=app numprocs=1 stdout_logfile=/home/app/logs/app-rama-worker.log stderr_logfile=/home/app/logs/app-rama-worker.log autostart=true autorestart=true startsecs=10 stopwaitsecs=600 killasgroup=true priority=998 </code></pre> <p>app-celery-beat.conf</p> <pre><code>[program:app-celery-beat] 
command=/home/app/bin/celery -A app_rama beat -l INFO directory=/home/app/app user=app numprocs=1 stdout_logfile=/home/app/logs/app-rama-worker.log stderr_logfile=/home/app/logs/app-rama-worker.log autostart=true autorestart=true startsecs=10 stopwaitsecs=600 killasgroup=true priority=998 </code></pre> <p>And I run:</p> <pre class="lang-bash prettyprint-override"><code>sudo supervisorctl reread sudo supervisorctl update sudo service nginx restart sudo supervisorctl restart app sudo supervisorctl start app-celery-worker sudo supervisorctl start app-celery-beat </code></pre> <p>The logs don’t show any errors, and the configuration seems to have been correct. I suspect the issue might be related to how Celery is being launched. I don’t see any test tasks or models running in the logs every minute, so maybe I’m not launching the new Celery version 5 correctly in settings.py?</p>
<python><django><celery>
2025-06-27 16:15:08
0
2,197
Maddie Graham
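A frequent cause of exactly these symptoms with Celery 5 on Django (an assumption worth checking, since the logs are clean): calling `app.config_from_object('django.conf:settings')` without a `namespace` makes Celery look for lowercase new-style setting names, so uppercase settings such as `CELERY_BEAT_SCHEDULE` and `CELERY_BROKER_URL` are silently ignored and beat has nothing to schedule. A configuration sketch of the commonly documented layout (project names taken from the question):

```python
# celery.py - hypothetical corrected sketch, not the asker's verified fix
import os
from celery import Celery

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'app_rama.settings')

app = Celery('app_rama')
# namespace='CELERY' tells Celery to read every Django setting that starts
# with CELERY_, so CELERY_BROKER_URL and CELERY_BEAT_SCHEDULE are picked up
app.config_from_object('django.conf:settings', namespace='CELERY')
# with no arguments, autodiscover_tasks walks INSTALLED_APPS for tasks.py
app.autodiscover_tasks()
```

If the schedule still never fires, running `celery -A app_rama beat -l DEBUG` in the foreground shows whether the schedule was loaded at all.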
79,681,743
7,791,963
VSCode debug always sets cwd to test file's folder in Robot Framework project — ignores launch.json setting
<h2>Context</h2> <p>I'm working on a Robot Framework project using VSCode and the robotcode extension (robotcode-dbiel). In production, we have one git repo for the framework, and one git repo for any test related assets maintained separately.</p> <p>Therefore when developing / debugging my folder structure looks like this:</p> <pre><code>git/ ├── .vscode/ │ └── launch.json ├── FW/ │ └── Python libraries, resource files ├── FW-TEST-RESOURCES/ │ └── test_cases │ └── input_variables.yaml &lt;--- Paths of configuration files relative to working dir folder which will be loaded in the various python scripts </code></pre> <p>When I CD to git, and run robot via CLI it works.</p> <pre><code>robot FW-TEST-RESOURCES/test_cases/test.robot </code></pre> <p>But when I set up my <code>launch.json</code> for debugging,</p> <pre><code> { &quot;name&quot;: &quot;Robotcode: Run Test with Root CWD&quot;, &quot;type&quot;: &quot;robotcode&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;target&quot;: &quot;${file}&quot;, // Runs the currently active file &quot;cwd&quot;: &quot;${workspaceFolder}&quot;, // Forces CWD to the root (your git folder) &quot;args&quot;: [], } </code></pre> <h2>Problem</h2> <p>Even though I’ve explicitly set <code>&quot;cwd&quot;: &quot;${workspaceFolder}&quot;</code>, the debugger still sets the current working directory to the base folder of the git repo of the test file (e.g., FW-TEST-RESOURCES/), not the root git/ folder.</p> <p>This breaks my logic because:</p> <ul> <li><p>I load <code>input_variables.yaml</code> (stored in FW-TEST-RESOURCES/), which contains relative paths from the git folder pointing at paths inside FW folder.</p> </li> <li><p>My Python code in FW/ expects to resolve these paths relative to the root project folder (git/), since cwd is wrong, when I prepend <code>Path.cwd()</code> to the relative path, I get errors like:</p> <p><code>FileNotFoundError: Configuration file 'git/FW-TEST-RESOURCES/FW/configs/config.yaml' not 
found!</code></p> <p>When the expected path would be <code>git/FW/configs/config.yaml</code></p> </li> </ul> <p>I have also tried hardcoding the cwd in launch.json to the absolute path of the git folder, with same result.</p> <p><strong>How can I make VSCode and robotcode use the actual workspace root (git/) as the working directory during debug runs? So it works as when I run it from CLI with robot</strong></p>
<python><visual-studio-code><robotframework>
2025-06-27 09:53:06
0
697
Kspr
79,681,710
1,419,127
VSCode (on a remote Ubuntu) not offering auto-complete suggestions for any C++ or Python file
<p>I am running VSCode on my Windows 11 laptop as the client of a VSCode server on a Ubuntu 22.4 (Jammy) machine.</p> <p>This is VScode 1.101.1 (user setup) and my remote Ubuntu has that version too.</p> <p>I have a folder which is a mixed Python/C++ project. When I open this folder and then files from that folder I get no auto-complete suggestions for neither Python nor C++.</p> <p><a href="https://i.sstatic.net/MB58ADKp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MB58ADKp.png" alt="enter image description here" /></a></p> <p>Both file types have been recognized for what they are (bottom-right corner correctly says either Python or C++). Both relevant extensions (C/C++, Python) are installed on the Ubuntu machine and running. I have a local (to my project) compile_commands.json which I was elsewhere suggested to generate by running cmake with some -D option (not sure if it helps or makes things worse). It has a lot of content, namely 106 entries of the form :</p> <pre><code>{ &quot;directory&quot;: &quot;/home/&lt;user&gt;/gpudrive/build/tests&quot;, &quot;command&quot;: &quot;/home/&lt;user&gt;/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/toolchain/bin/clang++ -DMADRONA_LINUX=\&quot;(1)\&quot; -I/home/&lt;user&gt;/gpudrive/src -I/home/&lt;user&gt;/gpudrive/external/madrona -I/home/&lt;user&gt;/gpudrive/external/madrona/src/common/../../include -I/home/&lt;user&gt;/gpudrive/external/json/include -isystem /home/&lt;user&gt;/gpudrive/external/madrona/external/googletest/googletest/include -isystem /home/&lt;user&gt;/gpudrive/external/madrona/external/googletest/googletest -isystem /home/&lt;user&gt;/gpudrive/external/madrona/external/madrona-toolchain/bundled-toolchain/libcxx-noexcept/include/c++/v1 -isystem /usr/local/cuda-12.4/include -g -ggdb3 -fvisibility=hidden -nostdinc++ -nostdlib++ -fno-exceptions -fno-rtti -fcolor-diagnostics -Wshadow -pedantic -Wall -Wextra -march=x86-64-v3 -std=c++20 -o 
CMakeFiles/my_tests.dir/observationTest.cpp.o -c /home/&lt;user&gt;/gpudrive/tests/observationTest.cpp&quot;, &quot;file&quot;: &quot;/home/&lt;user&gt;/gpudrive/tests/observationTest.cpp&quot; }, </code></pre> <p>I also have a ~/.vscode/settings.json with this litttle content :</p> <pre><code>{ &quot;editor.semanticHighlighting.enabled&quot;: true, &quot;C_Cpp.enhancedColorization&quot;: &quot;Enabled&quot; } </code></pre> <p>and a project-specific .vscode/settings.json with this :</p> <pre><code>{ &quot;files.associations&quot;: { &quot;*.hpp&quot;: &quot;cpp&quot;, &quot;*.inc&quot;: &quot;cpp&quot;, &quot;__tree&quot;: &quot;cpp&quot;, &quot;map&quot;: &quot;cpp&quot;, &quot;chrono&quot;: &quot;cpp&quot;, &quot;__bit_reference&quot;: &quot;cpp&quot;, &quot;__node_handle&quot;: &quot;cpp&quot;, &quot;bitset&quot;: &quot;cpp&quot;, &quot;deque&quot;: &quot;cpp&quot;, &quot;__memory&quot;: &quot;cpp&quot;, &quot;limits&quot;: &quot;cpp&quot;, &quot;optional&quot;: &quot;cpp&quot;, &quot;ratio&quot;: &quot;cpp&quot;, &quot;regex&quot;: &quot;cpp&quot;, &quot;tuple&quot;: &quot;cpp&quot;, &quot;vector&quot;: &quot;cpp&quot;, &quot;system_error&quot;: &quot;cpp&quot;, &quot;array&quot;: &quot;cpp&quot;, &quot;functional&quot;: &quot;cpp&quot;, &quot;type_traits&quot;: &quot;cpp&quot;, &quot;utility&quot;: &quot;cpp&quot;, &quot;variant&quot;: &quot;cpp&quot;, &quot;__functional_base&quot;: &quot;cpp&quot;, &quot;algorithm&quot;: &quot;cpp&quot;, &quot;filesystem&quot;: &quot;cpp&quot;, &quot;memory&quot;: &quot;cpp&quot;, &quot;random&quot;: &quot;cpp&quot;, &quot;istream&quot;: &quot;cpp&quot;, &quot;locale&quot;: &quot;cpp&quot; }, &quot;C_Cpp.errorSquiggles&quot;: &quot;disabled&quot;, &quot;C_Cpp.intelliSenseEngine&quot;: &quot;Default&quot;, &quot;C_Cpp.intelliSenseEngineFallback&quot;: &quot;Enabled&quot;, &quot;C_Cpp.loggingLevel&quot;: &quot;Debug&quot; } </code></pre> <p>and a local .vscode/c_cpp_properties.json with</p> <pre><code>{ &quot;version&quot;: 4, 
&quot;configurations&quot;: [ { &quot;name&quot;: &quot;&lt;Ubuntu machine&gt;&quot;, &quot;compilerPath&quot;: &quot;/usr/bin/g++&quot;, &quot;compileCommands&quot;: &quot;${workspaceFolder}/build/compile_commands.json&quot;, &quot;cStandard&quot;: &quot;c17&quot;, &quot;cppStandard&quot;: &quot;c++20&quot;, &quot;includePath&quot;: [ &quot;${workspaceFolder}/**&quot;, &quot;/usr/include&quot;, &quot;/usr/local/include&quot;, &quot;/usr/local/cuda-12.4/include&quot; ], &quot;intelliSenseMode&quot;: &quot;linux-gcc-x64&quot; } ] } </code></pre> <p>Not sure what else I might post here that might help.</p> <p>What might be missing?</p>
<python><c++><visual-studio-code><autocomplete>
2025-06-27 09:29:18
0
1,329
Charles
79,681,605
4,382,305
ImportError: Module "django_comments_xtd" does not define a "XtdComment" attribute/class
<p>When I installed django-comments-xtd according to the <a href="https://django-comments-xtd.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">documentation</a> I got this error:</p> <pre><code>ImportError: Module &quot;django_comments_xtd&quot; does not define a &quot;XtdComment&quot; attribute/class </code></pre> <p>Configs in settings.py are:</p> <pre><code>INSTALLED_APPS += [ 'django_comments_xtd', 'django_comments', 'django.contrib.sites', ] SITE_ID = 1 COMMENTS_APP = &quot;django_comments_xtd&quot; COMMENTS_XTD_MODEL = 'django_comments_xtd.XtdComment' COMMENTS_XTD_MAX_THREAD_LEVEL = 3 COMMENTS_XTD_CONFIRM_EMAIL = False COMMENTS_XTD_ALLOW_ANONYMOUS = True COMMENTS_XTD_ALLOW_PUBLICATION_MODERATION = True </code></pre>
<python><python-3.x><django>
2025-06-27 08:04:18
1
2,091
Darwin
79,681,478
9,363,181
Installing external python packages on EMR on EC2
<p>I want to install external <code>Python packages</code> on <code>EMR with an EC2</code> setup, but currently, apart from bootstrap actions, nothing else seems to be working. The problem with this setup is that if I want to include any new package, I have to create a new cluster with a new set of libraries in the bootstrap script. Lots of things will change because the cluster ID has to be hardcoded.</p> <p>For this, I was doing something like below:</p> <pre><code>aws emr add-steps \ --cluster-id &lt;cluster-id&gt;\ --steps Type=Spark,Name=&quot;Run PySpark Framework&quot;,ActionOnFailure=CONTINUE,\ Args=[--deploy-mode,cluster,--master,yarn,\ --archives,s3://msk-nimbuspost-connectors/pyspark_env.tar.gz#pyspark_env,\ --py-files,s3://msk-nimbuspost-connectors/src.zip,\ s3://msk-connectors/main.py,\ --env, prod] </code></pre> <p>But this doesn't seem to work. The way I built my pyspark_env.tar.gz is like this.</p> <p><strong>Approach 1</strong>:</p> <pre><code>pip3 install venv-pack venv-pack -f -o pyspark_env.tar.gz # This will zip all the packages installed under the virtual environment into the .tar.gz file </code></pre> <p><strong>Approach 2</strong>:</p> <p>I again tried creating a new folder, unzipping the contents from the above tar in that folder, and creating a new tar from that folder so that the path looks as follows:</p> <pre><code>(venv) @root-Pro % tar -tzf pyspark_env.tar.gz | head pyspark_env/ pyspark_env/bin/ pyspark_env/include/ pyspark_env/pyspark_env.tar.gz pyspark_env/pyvenv.cfg pyspark_env/lib/ pyspark_env/share/ pyspark_env/share/py4j/ pyspark_env/share/py4j/py4j0.10.9.5.jar pyspark_env/lib/python3.11/ </code></pre> <p>Even this doesn't seem to be working. So, how can I achieve this?</p>
<python><amazon-web-services><pip><amazon-emr>
2025-06-27 06:23:04
0
645
RushHour
79,681,477
219,153
Why does keyword argument 'weights' not work when calling NumPy histogram in Numba?
<p>This Python 3.13.5 script with numpy 2.2.6 and numba 0.61.2:</p> <pre><code>import numpy as np, numba as nb @nb.njit(fastmath=True) def f(a, b): return np.histogram(a, 10, weights=b) a = np.random.randint(0, 256, (100,)).astype(np.uint8) b = np.random.randint(0, 256, (100,)).astype(np.uint8) print(np.histogram(a, 10, weights=b)) # no problems print(f(a, b)) # fails here </code></pre> <p>fails with this error:</p> <pre class="lang-none prettyprint-override"><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(&lt;function histogram at 0x7244459565c0&gt;) found for signature: &gt;&gt;&gt; histogram(array(uint8, 1d, C), Literal[int](10), weights=array(uint8, 1d, C)) There are 2 candidate implementations: - Of which 2 did not match due to: Overload in function 'np_histogram': File: numba/np/old_arraymath.py: Line 3942. With argument(s): '(array(uint8, 1d, C), int64, weights=array(uint8, 1d, C))': Rejected as the implementation raised a specific error: TypingError: got an unexpected keyword argument 'weights' raised from /home/paul/st-python/test-3.13/.venv/lib/python3.13/site-packages/numba/core/typing/templates.py:791 During: resolving callee type: Function(&lt;function histogram at 0x7244459565c0&gt;) During: typing of call at /home/paul/st-python/test-3.13/so-numba-hist.py (5) File &quot;so-numba-hist.py&quot;, line 5: def f(a, b): return np.histogram(a, 10, weights=b) ^ During: Pass nopython_type_inference </code></pre> <p>Why doesn't Numba accept a legal <code>weights=</code> argument?</p>
<python><numpy><histogram><numba>
2025-06-27 06:22:27
1
8,585
Paul Jurczak
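Regarding the Numba question above: Numba's overload of `np.histogram` simply does not implement the `weights` keyword (only the `(a, bins)` and `(a, bins, range)` forms), so typing fails even though the call is legal NumPy. One workaround is to write the weighted histogram as an explicit loop — shown here in plain Python for clarity; the same function should compile with `@nb.njit` if the list accumulator is replaced by a NumPy array (e.g. `np.zeros(n_bins)`):

```python
def weighted_hist(values, weights, n_bins, lo, hi):
    """Weighted histogram over n_bins equal-width bins spanning [lo, hi]."""
    counts = [0.0] * n_bins
    scale = n_bins / (hi - lo)
    for v, w in zip(values, weights):
        if v < lo or v > hi:
            continue                 # np.histogram drops out-of-range values
        idx = int((v - lo) * scale)
        if idx == n_bins:            # right edge of the last bin is inclusive
            idx -= 1
        counts[idx] += w
    return counts

# bins [0, 5) and [5, 10]: weight 1 lands left, weights 2+3+4 land right
print(weighted_hist([0, 5, 9, 10], [1, 2, 3, 4], 2, 0, 10))  # [1.0, 9.0]
```

This mirrors `np.histogram(a, 2, range=(0, 10), weights=b)` for the same inputs.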
79,681,111
2,613,670
is CommentedMap.keys() in ruamel.yaml keeping the insertion order?
<p>I am trying to check some internal yaml files, and &quot;fix&quot; them. For that I need to preserve the comments, so I decided to use the <a href="https://pypi.org/project/ruamel.yaml/" rel="nofollow noreferrer">ruamel.yaml</a> library.</p> <p>The thing is that one of the checks is that some &quot;keys&quot;, if present, are in a certain order. So for example:</p> <pre class="lang-yaml prettyprint-override"><code>settings: # foo key2: prop: value # bar key1: prop: value ... </code></pre> <p>I need to &quot;fix&quot; it, like this:</p> <pre class="lang-yaml prettyprint-override"><code>settings: # bar key1: prop: value # foo key2: prop: value ... </code></pre> <p>but in order to first check the correct &quot;order&quot;, I am wondering whether, if I do the following:</p> <pre class="lang-py prettyprint-override"><code>yaml = YAML() yaml.preserve_quotes = True file: CommentedMap = yaml.load(path) settings: Any = file.get(&quot;settings&quot;, []) if not isinstance(settings, CommentedMap): return keys: list[str] = list(settings.keys()) # are keys keeping the insertion order? ... </code></pre> <p><code>keys</code> will keep the insertion order.</p> <p>Thanks.</p>
<python><yaml><ruamel.yaml>
2025-06-26 20:28:09
1
1,904
Manuelarte
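For reference on the question above: `CommentedMap` is an ordered mapping (ruamel.yaml builds it on an ordereddict subclass), so `.keys()` yields keys in document order — the same guarantee plain `dict` has had since Python 3.7. A stand-in check with a plain dict (the loaded `CommentedMap` behaves the same way for this purpose):

```python
# stand-in for a loaded mapping: insertion (document) order is preserved
settings = {"key2": {"prop": "value"}, "key1": {"prop": "value"}}
assert list(settings.keys()) == ["key2", "key1"]   # document order, not sorted

# detecting "keys out of order" is then just a comparison with the sorted list
keys = list(settings.keys())
needs_fix = keys != sorted(keys)
print(needs_fix)  # True for this sample, since key2 precedes key1
```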
79,680,981
1,998,377
Pandas Dataframe align/merge/join in similar way to text based file comparison
<p>I want to create a dataframe that aligns the rows of two input dataframes in a similar manner to how a text comparison tool would align text when comparing two files.</p> <p>For instance, consider the image below, where df_A and df_B are the two input dataframes. They both have triple index <em>year</em>, <em>pos</em>, and <em>score</em>. In this case there is an intersection between the two indexes, but that won't necessarily be the case.</p> <p><a href="https://i.sstatic.net/v8BRB66o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8BRB66o.png" alt="Image of sample input and required output" /></a></p> <p>I want to create a dataframe (or dataframes) that aligns the rows of df_A and df_B giving the output shown. The empty cells can contain NaN, or be empty. Note that this is not quite the same as the result that is generated by any of the standard Pandas merge, join, concat or align methods (as far as I can tell.)</p> <p>For instance <em>join</em> (see image below) doesn't give the correct NaN/empty values for <em>pos</em> values 1 or 8. And it duplicates the values for <em>pos</em> 4 too many times. (Note that I understand what <em>join</em> is doing, and why it does it - it just isn't what I am needing to do here.)</p> <p><a href="https://i.sstatic.net/6HYzlX3B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HYzlX3B.png" alt="Image of output from Pandas.DataFrame.join" /></a></p> <p>The required output is more akin to what would be generated by a text comparison tool if each row of the dataframes was considered a line of text. 
That is, rows of each dataframe are &quot;shuffled&quot; down until the next matching line is found.</p> <p>I can write a custom function to do this, but am wondering if I'm missing something that may be available in Pandas already.</p> <pre><code>import pandas as pd data_A = {&quot;year&quot;:[2023]*7,&quot;pos&quot;:[1,2,4,4,4,8,8],&quot;score&quot;:[15,20,30,30,30,60,60],&quot;value&quot;:[&quot;a&quot;,&quot;b&quot;,&quot;c&quot;,&quot;c&quot;,&quot;c&quot;,&quot;d&quot;,&quot;d&quot;]} df_A = pd.DataFrame(data_A) df_A = df_A.set_index([&quot;year&quot;,&quot;pos&quot;,&quot;score&quot;]) data_B = {&quot;year&quot;:[2023]*9,&quot;pos&quot;:[1,1,1,3,3,4,4,8,10],&quot;score&quot;:[15,15,15,25,25,30,30,60,80],&quot;value&quot;:[&quot;v&quot;,&quot;v&quot;,&quot;v&quot;,&quot;w&quot;,&quot;w&quot;,&quot;x&quot;,&quot;x&quot;,&quot;y&quot;,&quot;z&quot;]} df_B = pd.DataFrame(data_B) df_B = df_B.set_index([&quot;year&quot;,&quot;pos&quot;,&quot;score&quot;]) df = pd.merge(df_A,df_B,on=[&quot;year&quot;,&quot;pos&quot;,&quot;score&quot;],how=&quot;outer&quot;) print(df) df = df_A.join(df_B,how=&quot;outer&quot;,lsuffix='_left', rsuffix='_right') print(df) llll, rrrr = df_A.align(df_B) print(llll) print(rrrr) </code></pre>
<python><pandas><dataframe>
2025-06-26 18:02:33
1
10,834
Phil Goddard
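On the alignment question above: pandas has no built-in diff-style alignment, but the "shuffle rows down until the next match" behaviour is exactly what `difflib.SequenceMatcher` computes. A sketch that aligns the two index-tuple sequences from the question; the resulting position pairs (with `None` marking a padded/NaN row) can then be fed to `iloc`-based reindexing to build the padded frames:

```python
from difflib import SequenceMatcher

# index tuples (year, pos, score) of df_A and df_B from the question
idx_a = [(2023, 1, 15), (2023, 2, 20), (2023, 4, 30), (2023, 4, 30),
         (2023, 4, 30), (2023, 8, 60), (2023, 8, 60)]
idx_b = [(2023, 1, 15), (2023, 1, 15), (2023, 1, 15), (2023, 3, 25),
         (2023, 3, 25), (2023, 4, 30), (2023, 4, 30), (2023, 8, 60),
         (2023, 10, 80)]

sm = SequenceMatcher(a=idx_a, b=idx_b, autojunk=False)
aligned = []  # pairs: (row position in A or None, row position in B or None)
for tag, i1, i2, j1, j2 in sm.get_opcodes():
    if tag == "equal":
        aligned.extend(zip(range(i1, i2), range(j1, j2)))
    else:  # rows present on only one side get padded with None (-> NaN row)
        aligned.extend((i, None) for i in range(i1, i2))
        aligned.extend((None, j) for j in range(j1, j2))

for i, j in aligned:
    print(idx_a[i] if i is not None else "-", "|",
          idx_b[j] if j is not None else "-")
```

`get_opcodes()` guarantees every row of each side appears exactly once, and `equal` blocks pair only identical tuples, which gives the text-comparison-style output.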
79,680,915
984,975
How does NiceGUI object binding (listener) work behind the scenes?
<p>Here is a code example from their great documentation:</p> <p><a href="https://nicegui.io/documentation/label" rel="nofollow noreferrer">NiceGUI label</a></p> <pre><code>from nicegui import ui class status_label(ui.label): def _handle_text_change(self, text: str) -&gt; None: super()._handle_text_change(text) if text == 'ok': self.classes(replace='text-positive') else: self.classes(replace='text-negative') model = {'status': 'error'} status_label().bind_text_from(model, 'status') ui.switch(on_change=lambda e: model.update(status='ok' if e.value else 'error')) ui.run() </code></pre> <p>Why does this work? How do they make <code>model</code> reactive without assigning a wrapped object to that name? I read their code, but it is too advanced for me to comprehend.</p>
<python><nicegui>
2025-06-26 17:09:25
0
8,024
AturSams
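For intuition on the question above (a simplified sketch, not NiceGUI's actual code): the plain `model` dict is not made reactive at all. NiceGUI keeps a registry of "active links" and a background refresh loop that re-reads each bound source several times per second and pushes changed values to the target element — which is what ends up calling `_handle_text_change`. Stripped to the bone, with the loop replaced by a manual `tick()`:

```python
# registry of (source, source_name, target, target_name) "active links"
active_links = []

def bind_from(target, target_name, source, source_name):
    active_links.append((source, source_name, target, target_name))

def tick():
    # NiceGUI runs an equivalent propagation loop on a short timer;
    # nothing about the source dict itself is observable
    for source, s_name, target, t_name in active_links:
        value = source[s_name] if isinstance(source, dict) else getattr(source, s_name)
        if getattr(target, t_name) != value:
            setattr(target, t_name, value)   # would trigger _handle_text_change

class Label:
    text = ""

model = {"status": "error"}
label = Label()
bind_from(label, "text", model, "status")

tick()
print(label.text)        # error
model["status"] = "ok"   # a plain dict update, no magic involved
tick()
print(label.text)        # ok - picked up on the next poll
```

The trade-off of this polling design is simplicity (any dict or attribute works as a source) at the cost of a small periodic overhead per link.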
79,680,822
16,462,878
How to clip to max an integer overflow using numpy or opencv
<p>I have an array of the form <code>a = np.array([1], dtype='uint8')</code>.<br /> Now if I add <code>255</code> to <code>a</code> it will overflow and be <code>np.array([0])</code>.</p> <p>Is there a <em>built-in</em> way to &quot;clip&quot; the value to <code>255</code>?</p> <p>NumPy has the function <code>np.clip</code>, but it doesn't work under overflow conditions, since the wraparound happens before the clipping.</p>
<python><numpy><opencv><integer-overflow><saturation-arithmetic>
2025-06-26 15:58:36
4
5,264
cards
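For background on the question above: the operation being asked for is saturating addition. With NumPy, one common route (an approach, not the only one) is to widen before adding and clip back, e.g. `np.clip(a.astype(np.uint16) + 255, 0, 255).astype(np.uint8)`; OpenCV's `cv2.add` saturates by design. The element-wise rule itself is tiny:

```python
U8_MAX = 255

def sat_add_u8(x: int, y: int) -> int:
    """Unsigned 8-bit addition that clips to 255 instead of wrapping."""
    return min(x + y, U8_MAX)

print(sat_add_u8(1, 255))   # 255 rather than the wrapped-around 0
print(sat_add_u8(100, 50))  # 150, unaffected below the ceiling
```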
79,680,520
6,100,177
Jax persistent cache causes miscompilation?
<p>When I comment the below line (&quot;THIS LINE&quot;), I get correct results (loss decreases), and when I don't, I get broken results (constant loss every iteration):</p> <pre class="lang-py prettyprint-override"><code>import jax import numpy as np from jax import numpy as jnp import optax from tqdm import tqdm from pathlib import Path import shutil cache_path = &quot;/tmp/jax_cache&quot; if Path(cache_path).exists(): shutil.rmtree(cache_path) jax.clear_caches() jax.config.update(&quot;jax_compilation_cache_dir&quot;, cache_path) jax.config.update(&quot;jax_persistent_cache_min_compile_time_secs&quot;, 0) # THIS LINE print('initializing...') params = jnp.array(np.random.normal(size=[10, 11])) def loss(params): return jnp.sum(params ** 4) / 1000.0 print('training...') opt = optax.lbfgs(0.03) @jax.jit def do_update(params, opt_state): loss_value, params_grad = jax.value_and_grad(loss)(params) updates, opt_state = opt.update( params_grad, opt_state, params=params, value=loss_value, grad=params_grad, value_fn=loss, ) params = optax.apply_updates(params, updates) return params, opt_state opt_state = opt.init(params) def maybe_log(i): if i % 100 == 0: loss_float = loss(params).item() tqdm.write(f'{i} {loss_float}') for i in tqdm(range(1000)): maybe_log(i) params, opt_state = do_update(params, opt_state) </code></pre> <p>Why is this happening? Is it a bug in jax or a problem in my code?</p> <p>My understanding is that this line causes jax to put <code>do_update</code> in the persistent cache, so it's surprising to me that this changes the code's behavior. I believe I've even cleared all caches at the start of this code.</p> <ul> <li>jax: 0.5.3</li> <li>hardware: aarch64 CPU</li> <li>python: 3.13.0</li> </ul>
<python><jax>
2025-06-26 12:45:44
0
1,058
mwlon
79,680,467
2,537,394
How to handle loss function with sparse output
<p>I'm trying to create a ML model in TensorFlow that takes in a tensor with shape (128,128,12) and outputs a tensor with shape (128,128,3), where the output dimensions mean (x, y, sensor_number).</p> <p>With my training data I have the problem that my output is very sparse, meaning I could only take sensor measurements at very few x-y-coordinates. But if I take a measurement, I always will have all 3 sensor readings.</p> <p>I have created a simple model that takes the input data and a mask as input:</p> <pre class="lang-py prettyprint-override"><code>from keras import Input, layers, models input_data = Input(shape=(128,128,12), name=&quot;input_data&quot;) input_mask = Input(shape=(128,128,3), name=&quot;input_mask&quot;) output_layer = layers.Conv2D(filters=3, kernel_size=(3,3), padding=&quot;same&quot;, activation=&quot;sigmoid&quot;, name=&quot;output&quot;)(input_data) output_masked = layers.Multiply(name=&quot;masked_output&quot;)([output_layer, input_mask]) print(output_masked.shape) # return input_data, input_mask, output_layer, output_masked model = models.Model(inputs=input_data, outputs=output_layer) model_masked = models.Model(inputs=[input_data, input_mask], outputs=output_masked) model_masked.compile(optimizer=&quot;adam&quot;, loss=&quot;mse&quot;) </code></pre> <p>The mask simply contains ones at the coordinates where I have taken measurements, otherwise zeros. If helpful, it would be no problem to use zeros in y_true to obtain the mask, the actual sensor readings are always greater than 0.</p> <p>Now the problem is during training the loss gets tiny, and the model will usually predict zeros. To solve this issue I figured I'd need a custom loss function that only calculates the loss for the coordinates where data is available. 
I've tried this:</p> <pre class="lang-py prettyprint-override"><code>from keras import ops def masked_mse(y_true, y_pred): mask_value = 0 mask = ops.repeat( ops.cast( ops.any( ops.not_equal(y_true, mask_value), axis=-1, keepdims=True, ), &quot;float32&quot; ), repeats=y_true.shape[-1], axis=-1 ) masked_squared_error = ops.square(mask * (y_pred - y_true)) masked_mse = ops.sum(masked_squared_error, axis=-1) / ops.sum(mask, axis=-1) # results in lots of NaNs # masked_mse = ops.sum(masked_squared_error, axis=-1) / ops.maximum(ops.sum(mask, axis=-1), 1) # results in lots of zeros return masked_mse </code></pre> <p>but I'm not really understanding how this loss function gets applied during training. If you notice I've included two variants on how to calculate <code>masked_mse</code>. The first one results in nans where no measurements are. The output is per x-y-coordinate. During training however TensorFlow logs the loss as a single value, which will always be nan. I don't understand how this aggregation is calculated.</p> <p>With the second variant to calculate <code>masked_mse</code> most values will be 0, and during training TensorFlow logs <em>some</em> value, but then I'm back again at tiny losses and the model learning to predict mostly zeros.</p> <p>How do I define a proper loss function for training with sparse output data?</p> <p>As a side note I saw that there are two different MSE calculations in Keras: <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/MeanSquaredError" rel="nofollow noreferrer">tf.keras.losses.MeanSquaredError</a> and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/losses/MSE" rel="nofollow noreferrer">tf.keras.losses.MSE</a>, where the former has the parameter <code>reduction</code>. With <code>reduction=None</code> this calculates the coordinate-wise MSE. With <code>reduction=&quot;sum_over_batch_size&quot;</code> (default) this calculates a single value. Is this how I should define my masked loss function?</p>
<python><tensorflow><keras><loss-function>
2025-06-26 12:11:27
1
731
YPOC
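On the masked-loss question above, a common resolution (stated here as the usual pattern, not verified against this exact model): reduce the masked error to a scalar by dividing the summed squared error by the total number of observed entries, i.e. `ops.sum(masked_squared_error) / ops.maximum(ops.sum(mask), 1.0)` without `axis=-1`. That avoids both failure modes in the question: no per-coordinate division by zero (no NaNs), and no averaging over a sea of zero-error masked coordinates. The arithmetic on one flattened sample, in plain Python:

```python
def masked_mse(y_true, y_pred, mask_value=0.0):
    """MSE over observed entries only; unobserved entries carry mask_value."""
    se, n = 0.0, 0
    for t, p in zip(y_true, y_pred):
        if t != mask_value:        # this entry was actually measured
            se += (p - t) ** 2
            n += 1
    return se / max(n, 1)          # max(..., 1) guards the all-masked case

# two observed entries with errors 0.5**2 and 1.0**2 -> (0.25 + 1.0) / 2
print(masked_mse([0.0, 2.0, 0.0, 4.0], [9.0, 2.5, 9.0, 3.0]))  # 0.625
```

Note the predictions at masked positions (the 9.0s) never enter the loss, so the model is no longer rewarded for predicting zeros everywhere.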
79,680,007
219,153
Why does the last bin in a NumPy histogram have an unusually high count?
<p>This Python 3.12.7 script with numpy 2.2.4:</p> <pre><code>import numpy as np a = np.random.randint(0, 256, (500, 500)).astype(np.uint8) counts, bins = np.histogram(a, range(0, 255, 25)) print(np.column_stack((counts, bins[:-1], bins[1:]))) counts, bins = np.histogram(a, range(0, 257, 16)) print(np.column_stack((counts, bins[:-1], bins[1:]))) </code></pre> <p>produces this kind of output:</p> <pre><code>[[24721 0 25] [24287 25 50] [24413 50 75] [24441 75 100] [24664 100 125] [24390 125 150] [24488 150 175] [24355 175 200] [24167 200 225] [25282 225 250]] [[15800 0 16] [15691 16 32] [15640 32 48] [15514 48 64] [15732 64 80] [15506 80 96] [15823 96 112] [15724 112 128] [15629 128 144] [15681 144 160] [15661 160 176] [15558 176 192] [15526 192 208] [15469 208 224] [15772 224 240] [15274 240 256]] </code></pre> <p>where the first histogram always has the highest count in bin <code>[225, 250)</code>. The second histogram indicates a uniform distribution, as expected. I tried a dozen of times and the anomaly was always there. Can someone explain this behavior?</p>
<python><numpy><histogram>
2025-06-26 06:32:22
1
8,585
Paul Jurczak
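The cause of the anomaly in the question above: `range(0, 255, 25)` produces edges `0, 25, ..., 250`, and `np.histogram` treats every bin as half-open `[a, b)` except the last, which is closed `[a, b]`. The last bin therefore covers the 26 integers 225-250 inclusive while the others cover 25, and values 251-255 are dropped entirely. With the 16-wide bins up to 256 every bin covers exactly 16 values, hence the uniform second result. A pure-Python replica of the binning rule makes the 26-vs-25 asymmetry explicit:

```python
def histogram_bins(values, edges):
    # replicate np.histogram's rule: bins are half-open [a, b),
    # except the last bin, which is closed [a, b]
    counts = [0] * (len(edges) - 1)
    lo, hi = edges[0], edges[-1]
    for v in values:
        if v < lo or v > hi:
            continue              # out-of-range values are dropped
        if v == hi:
            counts[-1] += 1       # right edge belongs to the last bin
            continue
        for i in range(len(edges) - 1):
            if edges[i] <= v < edges[i + 1]:
                counts[i] += 1
                break
    return counts

edges = list(range(0, 255, 25))           # 0, 25, ..., 250
counts = histogram_bins(range(256), edges)
print(counts)  # every bin holds 25 of the values 0..255, except the last: 26
```

Passing `bins=range(0, 257, 16)` (or letting NumPy compute edges from a count plus `range=`) avoids the uneven last bin.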
79,679,773
6,074,306
How did Pandas data frames end up with multiple indexing methods?
<p>This seems like a simple question but I haven't been able to find any historical perspective as to when <code>[]</code>, <code>loc</code> and <code>iloc</code> (and formerly <code>ix</code>) were added, and why.</p> <p>The docs merely say &quot;Object selection has had a number of user-requested additions&quot;.</p>
<python><pandas>
2025-06-25 23:53:38
3
1,421
n.caillou
79,679,768
6,141,238
How should CPU time be computed for calculations parallelized with the multiprocessing module?
<p>I am trying to measure the processing time, or CPU time, of a CPU-intensive computation that has been parallelized with <code>multiprocessing</code>. However, simply bookending the parallelization of the computation with <code>process_time()</code> calls and taking the difference is insufficient to do this. For example, running the MRE</p> <pre><code>from time import perf_counter as pc, process_time as pt from multiprocessing import Pool n_workers = 2 def worker(k): total = 0 for i in range(2*10**7): total += i return total if __name__ == '__main__': print('Serial computation') pc_start = pc() pt_start = pt() results = n_workers * [0] for k in range(n_workers): results[k] = worker(k) print(f' Total run time: {pc() - pc_start} seconds') print(f' Total CPU time: {pt() - pt_start} seconds') print('Parallel computation') pc_start = pc() pt_start = pt() with Pool() as pool: results = pool.map(worker, range(n_workers)) print(f' Total run time: {pc() - pc_start} seconds') print(f' Total CPU time: {pt() - pt_start} seconds') </code></pre> <p>produces</p> <pre><code>Serial computation Total run time: 1.8759662999982538 seconds Total CPU time: 1.859375 seconds Parallel computation Total run time: 1.2482177000019874 seconds Total CPU time: 0.046875 seconds </code></pre> <p>In the above output, the run time of each computation captures something like the number of seconds that have ticked by on a stopwatch since the relevant <code>pc_start = pc()</code> call was made. 
The CPU time appears to represent the CPU time of the current process, excluding the CPU time of child processes of the current process.</p> <p>Thus, the measured CPU time of the serial computation measures the total CPU time of that computation, but the measured CPU time of the parallel computation measures only the CPU time of the parent process.</p> <p>In pursuit of the goal of measuring the CPU time of the full parallelized process, I could expand on the above MRE by also measuring the CPU time of each child process and then summing these CPU times of the parent and child processes.</p> <p>But is this a general solution to the problem? To me, the answer is unclear: Does <code>multiprocessing</code> create additional background processes to manage (e.g., serialize/deserialize) variables and data sent or shared between the parent and child processes? If so, the naive approach of equating the CPU time with the sum of the CPU times of the parent and child processes may fail to include the CPU times of these background processes (which are a part of the parallel computation), making the resulting total CPU time incorrect. But how then should I measure the total CPU time for the parallel computation?</p> <hr /> <p><em>What I have tried:</em></p> <p>I have tried many Google searches. In these searches, Google AI struggles with the question. I also was unable to find blog posts, discussion threads, or SO questions on the topic.</p>
<python><multiprocessing><python-multiprocessing><cpu-usage><cpu-time>
2025-06-25 23:41:29
1
427
SapereAude
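One answer to the question above on POSIX systems (an approach, with the caveat that it only counts children the parent has actually waited on): `os.times()` exposes `children_user` and `children_system`, which the OS accumulates for every reaped child — including any helper processes `multiprocessing` itself spawns and joins — so parent CPU plus the delta of those fields covers the whole parallel computation without instrumenting each worker. A sketch:

```python
import os
import time
from multiprocessing import Pool


def worker(_):
    # burn roughly 0.2 s of CPU in the child process
    deadline = time.process_time() + 0.2
    total = 0
    while time.process_time() < deadline:
        total += 1
    return total


def parallel_cpu_seconds(n_workers=2):
    before = os.times()
    pool = Pool(n_workers)
    pool.map(worker, range(n_workers))
    pool.close()
    pool.join()                      # children must be reaped to be counted
    after = os.times()
    self_cpu = (after.user - before.user) + (after.system - before.system)
    child_cpu = ((after.children_user - before.children_user)
                 + (after.children_system - before.children_system))
    return self_cpu, child_cpu


if __name__ == "__main__":
    self_cpu, child_cpu = parallel_cpu_seconds()
    print(f"parent: {self_cpu:.3f}s  children: {child_cpu:.3f}s")
```

Because the kernel does the accounting at `wait()` time, this also captures CPU burned in serialization helpers, unlike summing `process_time()` calls placed inside each worker function. On Windows the children fields are reported as zero, so this technique is POSIX-only.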
79,679,557
189,230
Why are we seeing incorrect timings in Datadog APM tracing for simple actions like a Redis GET?
<p>We have a Python Django application deployed to AWS EKS on EC2 instances. We use Gunicorn as our web server, although we recently ran Apache + wsgi and saw the same issues. The EC2 instances are m6a.xlarges, the containers themselves have 1500m CPU and 2048MB memory.</p> <p>We use Datadog APM tracing and we're tracking Redis calls. Occasionally we'll see a slow request, and when we look deeper we'll see in the trace a simple Redis GET query take 2.8 seconds!</p> <p>I've used Redis for years and I've never had a problem with it. It's lightning fast. And our platform produces very little load (CPU is around 2% on a small instance). I enabled slow logging in ElastiCache and we're seeing no results, so I know it isn't the Redis instance itself.</p> <p>What could be causing this?</p> <p>I know the timings in APM are based on ddtrace's own in-Python timings, so maybe our Python processes are getting bogged down somehow? CPU usage on the EC2 instances in general is very low, 10% tops. Memory usage is also low. I'm at a loss. We don't have over-allocation, so I don't think it's that.</p>
<python><django><redis><datadog><apm>
2025-06-25 19:13:29
0
4,820
Stephen Melrose
79,679,497
1,115,716
pip error: failed to map segment from shared object
<p>I'm trying to install <code>Wan 2.1</code> on a remote box and I'm seeing the following error when it gets to <code>flash_attn</code>:</p> <pre><code>Collecting flash_attn (from -r requirements.txt (line 14)) Using cached flash_attn-2.8.0.post2.tar.gz (7.9 MB) Preparing metadata (setup.py) ... error error: subprocess-exited-with-error × python setup.py egg_info did not run successfully. │ exit code: 1 ╰─&gt; [14 lines of output] Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 2, in &lt;module&gt; File &quot;&lt;pip-setuptools-caller&gt;&quot;, line 35, in &lt;module&gt; File &quot;/home/jagar005/dev/tmp/pip-install-ch1uomgk/flash-attn_f8d54031d33645928635b8bf49f36a0c/setup.py&quot;, line 22, in &lt;module&gt; import torch File &quot;/home/jagar005/.local/lib/python3.9/site-packages/torch/__init__.py&quot;, line 408, in &lt;module&gt; _load_global_deps() File &quot;/home/jagar005/.local/lib/python3.9/site-packages/torch/__init__.py&quot;, line 364, in _load_global_deps raise err File &quot;/home/jagar005/.local/lib/python3.9/site-packages/torch/__init__.py&quot;, line 312, in _load_global_deps ctypes.CDLL(global_deps_lib_path, mode=ctypes.RTLD_GLOBAL) File &quot;/usr/lib64/python3.9/ctypes/__init__.py&quot;, line 374, in __init__ self._handle = _dlopen(self._name, mode) OSError: /home/jagar005/.local/lib/python3.9/site-packages/torch/lib/libtorch_global_deps.so: failed to map segment from shared object [end of output] </code></pre> <p>from other threads, it sounded like an issue with the <code>TMPDIR</code> location, but using those solutions hasn't helped me. What can I do here?</p>
<python><linux><pip>
2025-06-25 18:12:17
0
1,842
easythrees
79,679,286
3,225,904
How to use a decorator to mark a keyword-only mandatory argument as optional in the call signature when statically checked by the IDE?
<p>I have a bunch of functions that all have a certain keyword argument, let's call it <code>kw</code>, which is mandatory and has a specific type <code>T</code>. However, I want to be able to call the functions without necessarily providing that keyword argument, and if it's not present I want it to be generated, probably through a decorator. Furthermore, I want my IDE to show its signature as if <code>kw</code> were optional.</p> <p>In other words, I have functions that have a signature kind of like this:</p> <pre class="lang-py prettyprint-override"><code>def f(*args, kw: T, **kwargs): ... </code></pre> <p>I want their tooltip signature as if it were defined like this:</p> <pre class="lang-py prettyprint-override"><code>def f(*args, kw: T | None=None, **kwargs): ... </code></pre> <p><img src="https://i.imgur.com/V7W3XyP.png" alt="tooltip in my IDE" /></p> <p>Note, that each function might have other positional and/or keyword arguments.</p> <hr /> <p>It seems to me like I should be able to have some decorator sort of like this:</p> <pre class="lang-py prettyprint-override"><code>def decorator(func): sig = inspect.signature(func) hints = get_type_hints(func) orig_ann = hints.get(&quot;kw&quot;) new_ann = Optional[orig_ann] new_params = [] for param in sig.parameters.values(): if param.name == &quot;kw&quot;: new_params.append( param.replace( annotation=new_ann, default=None, ) ) else: new_params.append(param) new_sig = sig.replace(parameters=new_params) @wraps(func) def wrapper(*args, **kwargs): if kwargs.get(&quot;kw&quot;) is None: kwargs[&quot;kw&quot;] = some_function_or_something() return func(*args, **kwargs) &quot;do something to update the signature&quot; return wrapper </code></pre> <p>But I cannot figure out <em>what</em> I would need to do to update the static signature so that the tooltip changes. Is it possible? How would I do it if so?</p>
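On the runtime side, one hedged sketch of the final step is to attach the rebuilt signature to the wrapper via `__signature__`. This only changes what `inspect.signature` (and tooling that calls it) reports; purely static checkers such as Pyright analyze source code and will not see it. The names below (`demo`, the `0` default standing in for `some_function_or_something()`) are illustrative stand-ins.

```python
import inspect
from functools import wraps
from typing import Optional

def demo(*args, kw: int, **kwargs):
    return kw

@wraps(demo)
def wrapper(*args, **kwargs):
    kwargs.setdefault("kw", 0)  # stand-in for some_function_or_something()
    return demo(*args, **kwargs)

# Rebuild the signature with kw made optional, then attach it to the wrapper.
# inspect.signature() honors __signature__ on the wrapper before unwrapping.
sig = inspect.signature(demo)
params = [
    p.replace(annotation=Optional[int], default=None) if p.name == "kw" else p
    for p in sig.parameters.values()
]
wrapper.__signature__ = sig.replace(parameters=params)

print(inspect.signature(wrapper))
```

Whether an IDE picks this up depends on whether its tooltips come from runtime introspection or from static analysis.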
<python><python-typing><pyright>
2025-06-25 15:18:21
1
427
Red
79,679,253
6,141,238
What are the basic rules for propagating lists and dictionaries across processes using the Manager of the multiprocessing module?
<p>I am trying to use the <code>multiprocessing</code> module to parallelize a CPU-intensive piece code over multiple cores. This module looks terrific in several respects, but when I try to pass lists and dictionaries between processes, changes are not always propagated across the processes as I would expect. What are the rules for this? For example, how do I propagate deep changes in nested lists and dictionaries between processes?</p> <p>Below is a MRE to show a simple instance of an apparent failure of propagation. If I change <code>shared_list</code> in <code>parent_func</code> with <code>.append</code> or <code>.extend</code>, then the data is propagated to <code>child_func</code>, but if I try to change the list by setting it equal to a list outright, then propagation does not occur.</p> <pre><code>from time import sleep from multiprocessing import Manager, Process def parent_func(shared_list): sleep(1) shared_list.extend(['a','b','c']) # propagated # shared_list = ['a','b','c'] # not propagated def child_func(shared_list): k = 0 while k &lt; 8: k += 1 print(f'{k}: {shared_list}') sleep(0.2) def main_func(): with Manager() as manager: shared_list = manager.list() process = Process(target=parent_func, args=(shared_list,)) processes = [process] process.start() process = Process(target=child_func, args=(shared_list,)) processes.append(process) process.start() for process in processes: process.join() print('---') print(list(shared_list)) if __name__ == '__main__': main_func() </code></pre> <p>For dictionaries, an example somewhat similar to the above is shown <a href="https://stackoverflow.com/questions/75799480">here</a>.</p> <hr /> <p><em>What I have tried:</em></p> <p>I have checked the <code>multiprocessing</code> <a href="https://docs.python.org/3/library/multiprocessing.html#managers" rel="nofollow noreferrer">documentation</a>, but could not find much on this question there. 
As a separate issue, Google AI is currently displaying inline code phrases as empty gray boxes, so I am unable to obtain a Google AI summary on the topic.</p>
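The rule at work here appears to be ordinary Python name binding rather than anything multiprocessing-specific: `shared_list = ['a','b','c']` rebinds the local name, so the Manager proxy never receives any call, while mutating methods (`append`, `extend`, and presumably slice assignment as well, since it is also a method call on the proxy) are forwarded to the manager process. A single-process sketch of the same distinction:

```python
def rebind(lst):
    # Rebinds the local name only; the caller's object is untouched,
    # and a Manager proxy would likewise never hear about it.
    lst = ['a', 'b', 'c']

def mutate(lst):
    # In-place mutation calls methods on the object itself, which is
    # exactly what a Manager proxy forwards to the server process.
    lst[:] = ['a', 'b', 'c']

outer = []
rebind(outer)
print(outer)   # []
mutate(outer)
print(outer)   # ['a', 'b', 'c']
```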
<python><process><reference><python-multiprocessing><multiprocessing-manager>
2025-06-25 14:56:49
1
427
SapereAude
79,679,101
1,862,861
PyTorch jagged nested tensors massively slow down a DataLoader
<p>The <a href="https://docs.pytorch.org/docs/stable/nested.html#construction" rel="nofollow noreferrer">current recommendation</a> for using nested tensors in PyTorch is to create them with a <code>torch.jagged</code> layout as opposed to the <code>torch.strided</code> layout. However, I have noticed a considerable drawback to this - if I have a nested tensor in a <a href="https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.Dataset" rel="nofollow noreferrer"><code>Dataset</code></a> for use in a <a href="https://docs.pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader" rel="nofollow noreferrer"><code>DataLoader</code></a>, the dataset using jagged layouts is massively slower. For example:</p> <pre class="lang-py prettyprint-override"><code>import torch # dataset with nested tensor class TestDatasetNested(torch.utils.data.Dataset): def __init__(self, N=100, dl=100, layout=torch.jagged): self.N = N self.dl = dl # set std deviations nnoise = torch.randint(1, high=5, size=(self.N,)) sigmas = [torch.rand(n) for n in nnoise] self.sigmas = torch.nested.nested_tensor(sigmas, layout=layout) def __getitem__(self, i): sigmas = self.sigmas[i % self.N] return torch.cat([sigma * torch.randn((self.dl, 1)) for sigma in sigmas], dim=-1).sum(dim=-1) def __len__(self): return self.N # create dataset with jagged and strided nested tensor layouts dataset_jagged = TestDatasetNested(layout=torch.jagged) dataset_strided = TestDatasetNested(layout=torch.strided) # create dataloader for both cases dl_jagged = torch.utils.data.DataLoader(dataset=dataset_jagged, batch_size=10) dl_strided = torch.utils.data.DataLoader(dataset=dataset_strided, batch_size=10) def nepochs(dl, n=100): for i in range(n): for _ in dl: pass %timeit nepochs(dl_jagged) 4.57 s ± 17.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) %timeit nepochs(dl_strided) 322 ms ± 3.41 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) </code></pre> <p>So, in this case using the jagged layout is ~14 times slower. Does anyone know why this is the case and if it can be improved?</p>
<python><pytorch>
2025-06-25 13:15:50
0
7,300
Matt Pitkin
79,679,016
5,828,291
How can I force pandas to read the last Excel column when it is empty for the first few rows?
<p>I have to parse data from Excel files (and I can't control the incoming format). Four columns have column headers; the fifth column does not, and is often blank -- sometimes there is no data at all in this column, and sometimes one or two rows may have data, so there can be an arbitrary number of blanks in that column near the top. If I do this:</p> <p><code>pd.read_excel(excel_file, usecols=&quot;A:E&quot;)</code></p> <p>I get a warning: <code> FutureWarning: Defining usecols with out of bounds indices is deprecated and will raise a ParserError in a future version.</code> and it doesn't give me column 5.</p> <p>I have a workaround using openpyxl, but I would like to use pandas if possible. Is there a way to do this?</p>
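One hedged workaround sketch is to read whatever columns pandas finds and pad the frame to five columns afterwards, rather than forcing `usecols` to reference a column that may not exist. The `read_excel` call is replaced by a literal four-column frame here, and the padded column names are invented:

```python
import pandas as pd

def pad_to_n_columns(df: pd.DataFrame, n: int = 5) -> pd.DataFrame:
    """Append all-NA columns on the right until the frame has n columns."""
    for i in range(df.shape[1], n):
        df[f"unnamed_{i}"] = pd.NA
    return df

# Stand-in for: df = pd.read_excel(excel_file)
# (the fifth, header-less column came back empty this quarter)
df = pd.DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8]})
df = pad_to_n_columns(df)
print(df.shape)  # (2, 5)
```

Downstream code can then rely on a fixed five-column shape regardless of what the file contains.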
<python><excel><pandas>
2025-06-25 12:07:32
1
403
Mike Christie
79,678,882
2,232,265
How to avoid interleaved logs in parallel processing?
<p>Naively parallelizing some tasks that involve logging results in interleaved logs:</p> <pre><code>import logging from concurrent.futures import ThreadPoolExecutor from time import sleep logger = logging.getLogger(__name__) def task(n): logger.info(f&quot;Task {n} started&quot;) sleep(0.001) logger.info(f&quot;Task {n} finished&quot;) return n * n def main(): with ThreadPoolExecutor(max_workers=2) as executor: futures = [executor.submit(task, i) for i in range(5)] _ = [f.result() for f in futures] if __name__ == &quot;__main__&quot;: logging.basicConfig(level=logging.INFO) main() --- INFO:__main__:Task 0 started INFO:__main__:Task 1 started INFO:__main__:Task 0 finished INFO:__main__:Task 2 started INFO:__main__:Task 1 finished INFO:__main__:Task 3 started INFO:__main__:Task 2 finished INFO:__main__:Task 4 started INFO:__main__:Task 3 finished INFO:__main__:Task 4 finished </code></pre> <p>But I want logs that are not interleaved, and am willing to sacrifice real-time delivery of logs within each process to make this happen:</p> <pre><code>INFO:__main__:Task 1 started INFO:__main__:Task 1 finished INFO:__main__:Task 0 started INFO:__main__:Task 0 finished INFO:__main__:Task 4 started INFO:__main__:Task 4 finished INFO:__main__:Task 2 started INFO:__main__:Task 2 finished INFO:__main__:Task 3 started INFO:__main__:Task 3 finished </code></pre> <p>Is there a straightforward way to get this type of behavior?</p>
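One hedged approach that matches the stated trade-off: buffer each task's messages and emit them in a single `logger.info` call at the end of the task. A single `emit` is serialized by the handler's lock, so the lines from one task stay adjacent, at the cost of real-time delivery:

```python
import logging
from concurrent.futures import ThreadPoolExecutor

logger = logging.getLogger(__name__)

def task(n):
    buf = []                        # collect instead of logging immediately
    buf.append(f"Task {n} started")
    buf.append(f"Task {n} finished")
    logger.info("\n".join(buf))     # one record per task -> lines stay together
    return n * n

def main():
    with ThreadPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(task, i) for i in range(5)]
        _ = [f.result() for f in futures]

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    main()
```

A more elaborate variant would collect per-task records in a `QueueHandler`-style buffer and flush them atomically, but the joined-string version above is the simplest form of the idea.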
<python><logging><parallel-processing>
2025-06-25 10:28:52
1
3,274
zkurtz
79,678,881
5,568,409
How to repeatedly sample so that the sampling result doesn't change?
<p>I have an array from which I want to select a number of rows, so that the rows selected don't vary if I repeat the sampling process.</p> <p>In my following code, this is not the case; each time I run the selecting process, I get a different array:</p> <pre><code>points array([[ 4.926, 46.153], [ 5.428, 46.009], [ 5.373, 45.961], ..., [ 2.404, 49.008], [ 2.387, 49.074], [ 1.825, 49.096]], shape=(34806, 2)) sel = np.random.choice(points.shape[0], size=6, replace=False) points_sampled = points[sel] points_sampled array([[ 1.862, 48.238], [ 5.984, 46.951], [ 4.838, 47.172], [ 7.082, 49.022], [ 3.39 , 50.299], [ 3.099, 50.318]]) </code></pre> <p>What would be the simplest way to achieve this?</p>
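If "doesn't change" means reproducible across runs, the usual tool is a seeded generator: `np.random.default_rng(seed)` produces the same draw every time it is constructed with the same seed. A sketch with a small stand-in array:

```python
import numpy as np

points = np.arange(40, dtype=float).reshape(20, 2)  # stand-in for the real data

def sample(points, seed=42, size=6):
    rng = np.random.default_rng(seed)   # same seed -> same draw every time
    sel = rng.choice(points.shape[0], size=size, replace=False)
    return points[sel]

first = sample(points)
second = sample(points)
print(np.array_equal(first, second))  # True
```

The legacy equivalent is `np.random.seed(...)` before `np.random.choice`, but the `Generator` API keeps the seeded state local instead of global.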
<python><python-3.x><numpy><random>
2025-06-25 10:28:45
2
1,216
Andrew
79,678,871
13,806,869
Why does this code fail to remove NaN values when run on a specific laptop?
<p>I have a Pandas dataframe that looks like this:</p> <pre><code>1 | 2 -------|------- 123 | 123 456 | 456 NaN | 789 </code></pre> <p>The data comes from a spreadsheet which is sent to me quarterly in the same format. I need to convert the dataframe into this format:</p> <pre><code>list_number | item_code ------------|---------- 1 | 123 1 | 456 2 | 123 2 | 456 2 | 789 </code></pre> <p>I have some code that does this, which has worked fine for a couple of years. However, this quarter I get the following error:</p> <blockquote> <p>ValueError: Unable to parse string &quot;nan&quot;</p> </blockquote> <p>Here is the bit of code which causes the problem:</p> <pre><code>df = #my data df.dropna(axis=1, how='all', inplace=True) df = pd.melt(df, var_name='LIST_NUMBER', value_name='ITEM_CODE') df[&quot;LIST_NUMBER&quot;] = pd.to_numeric( df[&quot;LIST_NUMBER&quot;].astype(str).str.replace(r&quot;[^\d]&quot;, &quot;&quot;), errors='raise') df[&quot;ITEM_CODE&quot;] = pd.to_numeric( df[&quot;ITEM_CODE&quot;].astype(str).str.replace(r&quot;[^\d]&quot;, &quot;&quot;), errors='raise') </code></pre> <p>The dropna function is failing to replace the NaN value; the dataframe looks the same after as it did before. This then causes the to_numeric function to fail, resulting in the error message.</p> <p>Strangely, the code still works when my colleague runs it on her laptop, which suggests to me that the problem may be my modules rather than the code or data. I recently upgraded Pandas to 2.1.4 (I don't remember the previous version, but it was earlier than 2.0). She's using Pandas 1.4.4.</p> <p>Does anyone know why this code is failing, please?</p> <p>EDIT: For clarity, this is what the dataframe looks like after pd.melt. Melt is not the problem; dropna is.</p> <pre><code>list_number | item_code ------------|---------- 1 | 123 1 | 456 1 | NaN 2 | 123 2 | 456 2 | 789 </code></pre>
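One detail worth checking: `dropna(axis=1, how='all')` only drops columns that are *entirely* NaN, so a lone NaN cell always survives it, on every pandas version. A version-independent way to discard such cells is to drop NaN rows after melting (this sidesteps, rather than explains, the laptop difference, which may come down to `Series.str.replace` defaulting to `regex=False` since pandas 2.0):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({1: [123.0, 456.0, np.nan], 2: [123.0, 456.0, 789.0]})

# how='all' on axis=1 drops only fully-NaN columns,
# so the single NaN in column 1 survives this call
df = df.dropna(axis=1, how="all")

melted = pd.melt(df, var_name="LIST_NUMBER", value_name="ITEM_CODE")
melted = melted.dropna(subset=["ITEM_CODE"])  # drop NaN cells after melting

print(len(melted))  # 5
```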
<python><pandas>
2025-06-25 10:21:34
1
521
SRJCoding
79,678,601
3,999,951
Module showing as missing in PyCharm despite installation
<p>An environment is set up using Poetry v2.1.3. The environment is created from a <code>pyproject.toml</code> which has pandas as one of the dependencies:</p> <pre class="lang-toml prettyprint-override"><code>[project] name = &quot;pintail-proj&quot; version = &quot;0.1.0&quot; description = &quot;&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.12, &lt;4.0&quot; dependencies = [ &quot;poetry-core (&gt;=2.0.0,&lt;3.0.0)&quot;, &quot;pandas (&gt;= 2.3.0)&quot; ] [tool.poetry] packages = [{include = &quot;poetry_demo&quot;, from = &quot;src&quot;}] [build-system] requires = [&quot;poetry-core&gt;=2.0.0,&lt;3.0.0&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>The environment is successfully created using <code>poetry update</code> from the directory containing the file. Opening PyCharm (PyCharm Community v2025.1.2), the interpreter is set to the Poetry environment. All packages are installed as expected, including pandas, which will be called:</p> <p><a href="https://i.sstatic.net/T7oQWpJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T7oQWpJj.png" alt="installed packages" /></a></p> <p>However, when attempting to call the pandas package from a script, an error is returned:</p> <pre><code>No module named 'pandas' </code></pre> <p>What is the error in the setup of this environment?</p>
<python><pycharm><python-poetry>
2025-06-25 07:11:49
1
467
acolls_badger
79,678,438
13,946,204
Is it possible to generate unique IDs based on the monotonic clock in Python?
<p>I need to implement a fast generator of unique IDs in Python. My idea is:</p> <pre class="lang-py prettyprint-override"><code>import time def generate(prefix: str) -&gt; str: return prefix + str(time.monotonic_ns()) </code></pre> <p>Where <code>prefix</code> is a random string generated once on process start.</p> <p>Since the monotonic value always increases, I can be sure that IDs inside a single process will be unique. And since <code>prefix</code> is supposed to be a unique value for a specific period of time, I can be sure that IDs between processes will also be unique.</p> <p>But I don't understand what the &quot;arbitrary starting point&quot; of the monotonic clock in Python is.</p> <p>Let's say that my service runs for half a year without a restart. Is it possible that the monotonic time will be reset or repeated over that period, so that I'd get the same value for an ID inside a single process?</p>
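Within a single process run, `time.monotonic_ns()` never decreases (its reference point is unspecified and it can repeat across restarts), but two fast successive calls can land on the same tick where the clock is coarse. A hedged sketch that adds a process-local counter to rule out same-tick collisions:

```python
import itertools
import time

_counter = itertools.count()          # process-local tie-breaker

def generate(prefix: str) -> str:
    # monotonic_ns never decreases within this process; the counter
    # disambiguates calls that land on the same clock tick
    return f"{prefix}-{time.monotonic_ns()}-{next(_counter)}"

ids = [generate("p") for _ in range(10_000)]
print(len(set(ids)))  # 10000
```

Cross-process uniqueness still rests entirely on the prefix being unique, as in the original design.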
<python><uniqueidentifier>
2025-06-25 03:26:59
2
9,834
rzlvmp
79,678,365
14,386,187
Polars is less memory efficient than pandas
<p>In my application, I want to generate a preview of spreadsheet files that users upload in a memory-efficient manner.</p> <p>I'm testing the following script using <code>pytest-memray</code>:</p> <pre class="lang-py prettyprint-override"><code>import pytest import pandas as pd import polars as pl @pytest.fixture def path() -&gt; str: return &quot;files/rows.csv&quot; @pytest.fixture def path_xlsx() -&gt; str: return &quot;files/bankdataset.xlsx&quot; def test_load_pandas(path: str): pd.read_csv(path) def test_load_polars(path: str): pl.scan_csv(path, low_memory=True).collect() def test_load_pandas_xlsx(path_xlsx: str): pd.read_excel(path_xlsx, sheet_name=None) def test_load_polars_xlsx(path_xlsx: str): pl.read_excel(path_xlsx, sheet_name=None) def test_load_pandas_partial(path: str): df = pd.read_csv(path, nrows=20) assert len(df) == 20 def test_load_polars_partial(path: str): df = pl.scan_csv(path).head(n=20).collect() assert len(df) == 20 def test_load_pandas_partial_xlsx(path_xlsx: str): df = pd.read_excel(path_xlsx, nrows=20) assert len(df) == 20 def test_load_polars_partial_xlsx(path_xlsx: str): df = pl.read_excel(path_xlsx).head(n=20) assert len(df) == 20 def test_load_polars_partial_buffer(path: str): with io.BytesIO() as buffer: pl.scan_csv(path).limit(20).sink_csv(buffer) df = pl.read_csv(buffer.getvalue()) assert len(df) == 20 </code></pre> <p><a href="https://www.cdc.gov/covid/php/covid-net/index.html" rel="nofollow noreferrer">files/rows.csv</a> is around 30k rows, and <a href="https://www.kaggle.com/datasets/ksabishek/massive-bank-dataset-1-million-rows" rel="nofollow noreferrer">files/bankdataset.xlsx</a> is 1 million rows.</p> <p>This outputs:</p> <pre class="lang-bash prettyprint-override"><code>================================================================================================================ MEMRAY REPORT ================================================================================================================ Allocation 
results for tests/test_load.py::test_load_polars_xlsx at the high watermark 📦 Total memory allocated: 473.5MiB 📏 Total allocations: 21 📊 Histogram of allocation sizes: |▁█▆ | 🥇 Biggest allocating functions: - load_sheet_eager:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/fastexcel/__init__.py:424 -&gt; 473.4MiB - _call_with_frames_removed:&lt;frozen importlib._bootstrap&gt;:488 -&gt; 20.4KiB - read_excel:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/fastexcel/__init__.py:514 -&gt; 13.9KiB - _compile_bytecode:&lt;frozen importlib._bootstrap_external&gt;:784 -&gt; 9.6KiB - inner:/home/linuxbrew/.linuxbrew/opt/python@3.13/lib/python3.13/typing.py:429 -&gt; 9.0KiB Allocation results for tests/test_load.py::test_load_polars_partial_xlsx at the high watermark 📦 Total memory allocated: 473.4MiB 📏 Total allocations: 7 📊 Histogram of allocation sizes: |█ | 🥇 Biggest allocating functions: - load_sheet_eager:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/fastexcel/__init__.py:424 -&gt; 473.4MiB - read_excel:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/fastexcel/__init__.py:514 -&gt; 13.9KiB - _read_spreadsheet:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/polars/io/spreadsheet/functions.py:684 -&gt; 536.0B Allocation results for tests/test_load.py::test_load_pandas_xlsx at the high watermark 📦 Total memory allocated: 426.2MiB 📏 Total allocations: 871 📊 Histogram of allocation sizes: | █▂ | 🥇 Biggest allocating functions: - feed:/home/linuxbrew/.linuxbrew/opt/python@3.13/lib/python3.13/xml/etree/ElementTree.py:1291 -&gt; 245.1MiB - parse_cell:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/openpyxl/worksheet/_reader.py:244 -&gt; 63.0MiB - maybe_infer_to_datetimelike:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/core/dtypes/cast.py:1198 -&gt; 39.3MiB - 
_rows_to_cols:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/python_parser.py:1066 -&gt; 38.3MiB - _infer_types:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/base_parser.py:720 -&gt; 15.3MiB Allocation results for tests/test_load.py::test_load_polars at the high watermark 📦 Total memory allocated: 172.5MiB 📏 Total allocations: 92 📊 Histogram of allocation sizes: |█ | 🥇 Biggest allocating functions: - collect:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/polars/lazyframe/frame.py:2332 -&gt; 96.0B Allocation results for tests/test_load.py::test_load_pandas at the high watermark 📦 Total memory allocated: 24.6MiB 📏 Total allocations: 25 📊 Histogram of allocation sizes: |█ ▁| 🥇 Biggest allocating functions: - read:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/c_parser_wrapper.py:234 -&gt; 18.6MiB - __init__:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/c_parser_wrapper.py:93 -&gt; 6.0MiB - get_handle:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/common.py:873 -&gt; 4.0KiB - _clean_options:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/readers.py:1688 -&gt; 1.5KiB - read_csv:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/pandas/io/parsers/readers.py:1009 -&gt; 1.5KiB Allocation results for tests/test_load.py::test_load_polars_partial_buffer at the high watermark 📦 Total memory allocated: 31.5MiB 📏 Total allocations: 16 📊 Histogram of allocation sizes: |▁█ | 🥇 Biggest allocating functions: - _check_empty:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/polars/io/_utils.py:282 -&gt; 1.3KiB - read_csv:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/polars/io/csv/functions.py:572 -&gt; 768.0B - 
read_csv:/home/monopoly/workspace/toys/sheetz/.venv/lib/python3.13/site-packages/polars/io/csv/functions.py:549 -&gt; 728.0B </code></pre> <p>It appears that pandas beats polars in every aspect as far as memory efficiency. Is there something that I'm doing wrong? I'm trying to take advantage of LazyFrames with <code>pl.scan_csv</code>, but even that doesn't seem to help.</p>
<python><python-polars>
2025-06-25 00:57:06
2
676
monopoly
79,678,325
5,413,581
Building a sklearn compatible estimator: 'dict' object has no attribute 'requires_fit'
<p>I am trying to build a scikit-learn compatible estimator. I have built a custom class that inherits from <code>BaseEstimator</code> and <code>RegressorMixin</code>. However, when I try to use this, I run into an <code>AttributeError: 'dict' object has no attribute 'requires_fit'</code> that I do not know how to solve. Here I am posting a minimal code example. The logic to obtain the parameters in this class is irrelevant; I am just interested in getting it to work with the appropriate validations and checks.</p> <pre><code>from sklearn.utils.validation import check_is_fitted, check_X_y import numpy as np from sklearn.base import BaseEstimator, RegressorMixin class BaseModel(BaseEstimator, RegressorMixin): &quot;&quot;&quot; Base class for penalized regression models using cp. &quot;&quot;&quot; def __init__(self, param: float = 0.5): self.param = param def _obtain_beta(self, X, y): self.intercept_ = 0 self.coef_ = self.param * np.ones(X.shape[0]) def fit(self, X: np.ndarray, y: np.ndarray): self.feature_names_in_ = None if hasattr(X, &quot;columns&quot;) and callable(getattr(X, &quot;columns&quot;, None)): self.feature_names_in_ = np.asarray(X.columns, dtype=object) X, y = check_X_y(X, y, accept_sparse=False, y_numeric=True, ensure_min_samples=2) self.n_features_in_ = X.shape[1] # Solve the problem self._obtain_beta(X, y) self.is_fitted_ = True return self def predict(self, X: np.ndarray) -&gt; np.ndarray: check_is_fitted(self, [&quot;coef_&quot;, &quot;intercept_&quot;, &quot;is_fitted_&quot;]) predictions = np.dot(X, self.coef_) + self.intercept return predictions def __sklearn_tags__(self): tags = { &quot;allow_nan&quot;: False, &quot;requires_y&quot;: True, &quot;requires_fit&quot;: True, } return tags # USAGE EXAMPLE from sklearn.datasets import make_regression X, y, beta = make_regression(n_samples=200, n_features=200, n_informative=25, bias=10, noise=5, random_state=42, coef=True) model = BaseModel() model.fit(X, y) </code></pre> <p>Executing this produces
this error:</p> <pre><code>AttributeError: 'dict' object has no attribute 'requires_fit' File ~\anaconda3\envs\py311env\Lib\site-packages\IPython\core\formatters.py:1036, in MimeBundleFormatter.__call__(self, obj, include, exclude) 1033 method = get_real_method(obj, self.print_method) 1035 if method is not None: -&gt; 1036 return method(include=include, exclude=exclude) 1037 return None 1038 else: Show Traceback </code></pre> <p>It says it has no attribute <code>requires_fit</code> but I am including that attribute in the tags. I am working on a Windows 11 machine with the following requirements:</p> <pre><code>Python 3.11.12 numpy 2.0.2 scikit-learn 1.6.1 </code></pre> <p>Changing the python or packages version is not an option for me in this case.</p>
<python><scikit-learn><regression>
2025-06-24 23:27:38
1
769
Álvaro Méndez Civieta
79,678,079
12,544,460
Blocking call to socket.socket.accept - when using langchain_postgres.v2.vectorstores
<p>blockbuster.blockbuster.BlockingError: Blocking call to socket.socket.accept - when using langchain_postgres.v2.vectorstores</p> <p>I'm customizing the <a href="https://github.com/langchain-ai/chat-langchain" rel="nofollow noreferrer">chat-langchain repo</a> using PGVectorStore v2.</p> <p>I've tried <code>PGVectorStore.create_sync()</code>, <code>PGVectorStore.create()</code>, and even <code>AsyncPGVectorStore.create()</code>, but I still get the blocking error. How can I fix it?</p> <pre><code>@contextmanager def make_pg_retriever( configuration: BaseConfiguration, embedding_model: Embeddings ) -&gt; Iterator[BaseRetriever]: connection_string = os.environ.get(&quot;PGVECTOR_CONNECTION_STRING&quot;,) engine = PGEngine.from_connection_string(url=connection_string) try: vectorstore = PGVectorStore.create_sync( engine=engine, embedding_service=embedding_model, table_name=WEAVIATE_DOCS_INDEX_NAME, # The table must already exist! metadata_columns=[&quot;source&quot;, &quot;title&quot;, &quot;description&quot;, &quot;language&quot;] # ... 
other optional parameters ) search_kwargs = {**configuration.search_kwargs, &quot;return_uuids&quot;: True} retriever = vectorstore.as_retriever(search_kwargs=search_kwargs) yield retriever finally: engine.close() </code></pre> <p>or this with corresponding async call</p> <pre><code>@asynccontextmanager # Use asynccontextmanager for async operations async def make_pg_retriever( configuration: BaseConfiguration, embedding_model: Embeddings ) -&gt; AsyncIterator[BaseRetriever]: # Establish an asynchronous connection pool to PostgreSQL # You'll need to configure your PostgreSQL connection string # For example, using environment variables connection_string = os.environ.get( &quot;PGVECTOR_CONNECTION_STRING&quot;, ) # engine = await asyncio.to_thread(PGEngine.from_connection_string,url= connection_string) engine = await asyncio.to_thread(lambda: PGEngine.from_connection_string(url=connection_string)) # engine = PGEngine.from_connection_string( # url=connection_string # ) vectorstore = await AsyncPGVectorStore.create( engine=engine, embedding_service=embedding_model, #why dont use like ingest.py? table_name=WEAVIATE_DOCS_INDEX_NAME, # The table must already exist! metadata_columns=[&quot;source&quot;, &quot;title&quot;, &quot;description&quot;, &quot;language&quot;] # ... other optional parameters ) search_kwargs = {**configuration.search_kwargs} # AsyncPGVectorStore's as_retriever might not directly support 'return_uuids' # The returned documents will likely have the primary key or a unique identifier # as part of their metadata, which you can access. 
yield vectorstore.as_retriever(search_kwargs=search_kwargs) </code></pre> <p>the error for .create_sync() method</p> <pre><code>Traceback (most recent call last): File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph_api\worker.py&quot;, line 170, in worker await asyncio.wait_for( File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\asyncio\tasks.py&quot;, line 489, in wait_for return fut.result() ^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph_api\stream.py&quot;, line 307, in consume raise e from g File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph_api\stream.py&quot;, line 290, in consume async for mode, payload in stream: File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph_api\stream.py&quot;, line 225, in astream_state event = await wait_if_not_done(anext(stream, sentinel), done) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph_api\asyncio.py&quot;, line 89, in wait_if_not_done raise e.exceptions[0] from None File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\__init__.py&quot;, line 2830, in astream async for _ in runner.atick( File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\runner.py&quot;, line 400, in atick _panic_or_proceed( File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\runner.py&quot;, line 507, in _panic_or_proceed raise exc File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\retry.py&quot;, line 136, in arun_with_retry return await task.proc.ainvoke(task.input, config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\utils\runnable.py&quot;, line 672, in ainvoke input = await asyncio.create_task( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\utils\runnable.py&quot;, line 440, in ainvoke ret = await self.afunc(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\./backend/retrieval_graph/graph.py&quot;, line 181, in conduct_research result = await researcher_graph.ainvoke({&quot;question&quot;: state.steps[0]}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\__init__.py&quot;, line 2963, in ainvoke async for chunk in self.astream( File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\__init__.py&quot;, line 2830, in astream async for _ in runner.atick( File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\runner.py&quot;, line 400, in atick _panic_or_proceed( File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\runner.py&quot;, line 507, in _panic_or_proceed raise exc File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\pregel\retry.py&quot;, line 136, in arun_with_retry return await task.proc.ainvoke(task.input, config) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\utils\runnable.py&quot;, line 672, in ainvoke input = await asyncio.create_task( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langgraph\utils\runnable.py&quot;, line 440, in ainvoke ret = await self.afunc(*args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;mylocation\Downloads\Project\chat-langchain\backend\retrieval_graph\researcher_graph\graph.py&quot;, line 70, in retrieve_documents with retrieval.make_retriever(config) as retriever: File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\backend\retrieval.py&quot;, line 60, in make_retriever with make_pg_retriever(configuration, embedding_model) as retriever: File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\contextlib.py&quot;, line 137, in __enter__ return next(self.gen) ^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\backend\retrieval.py&quot;, line 33, in make_pg_retriever engine = PGEngine.from_connection_string(url=connection_string) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\langchain_postgres\v2\engine.py&quot;, line 104, in from_connection_string cls._default_loop = asyncio.new_event_loop() ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py&quot;, line 810, in new_event_loop return get_event_loop_policy().new_event_loop() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\asyncio\events.py&quot;, line 699, in new_event_loop return self._loop_factory() ^^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\asyncio\selector_events.py&quot;, line 56, in __init__ self._make_self_pipe() File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\asyncio\selector_events.py&quot;, line 107, in _make_self_pipe self._ssock, self._csock = socket.socketpair() ^^^^^^^^^^^^^^^^^^^ File &quot;mylocation\AppData\Local\Programs\Python\Python311\Lib\socket.py&quot;, line 645, in socketpair ssock, _ = lsock.accept() ^^^^^^^^^^^^^^ File 
&quot;mylocation\Downloads\Project\chat-langchain\.venv\Lib\site-packages\blockbuster\blockbuster.py&quot;, line 109, in wrapper raise BlockingError(func_name) blockbuster.blockbuster.BlockingError: Blocking call to socket.socket.accept </code></pre> <p>The error comes from</p> <pre><code>engine = PGEngine.from_connection_string(url=connection_string) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ </code></pre> <p>and the recommendation is this, but I have tried many approaches and none work (except option 3):</p> <pre><code>Here are your options to fix this: 1. Best approach: Convert any blocking code to use async/await patterns For example, use 'await aiohttp.get()' instead of 'requests.get()' 2. Quick fix: Move blocking operations to a separate thread Example: 'await asyncio.to_thread(your_blocking_function)' 3. Override (if you can't change the code): - For development: Run 'langgraph dev --allow-blocking' - For deployment: Set 'BG_JOB_ISOLATED_LOOPS=true' environment variable </code></pre>
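Option 2 from that list can be sketched with the standard library alone. The snippet below is an illustration, not the langchain-postgres API: `blocking_setup` is a stand-in for `PGEngine.from_connection_string`, which blocks because it builds its own event loop and opens sockets synchronously.

```python
import asyncio
import time

def blocking_setup(url: str) -> str:
    # Stand-in for PGEngine.from_connection_string(), which blocks the
    # event loop (it creates its own loop and sockets synchronously).
    time.sleep(0.01)
    return f"engine-for-{url}"

async def make_engine(url: str) -> str:
    # Option 2: run the blocking constructor in a worker thread, so the
    # running loop (and blockbuster's detector) never sees a blocking call.
    return await asyncio.to_thread(blocking_setup, url)

if __name__ == "__main__":
    engine = asyncio.run(make_engine("postgresql://localhost/db"))
    print(engine)  # engine-for-postgresql://localhost/db
```

The same wrapping works anywhere inside the graph node: keep the node `async` and push only the blocking constructor into the thread.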
<python><asynchronous><langchain><langgraph>
2025-06-24 18:47:12
0
362
Tom Tom
79,678,040
496,289
Find number of selected tests programmatically
<p>I have a project with 600+ unit tests. These tests use a very time consuming function to log dataframes, which help with debugging. Which is great when I'm debugging 1-2 tests. But when I run all 600 tests this logging takes 30% of total execution time.</p> <p>What I want to do is disable this expensive logging whenever the current test run is running say more than 100 tests.</p> <p>Here is what I did in my <code>conftest.py</code>:</p> <pre><code>def pytest_collection_modifyitems(config, items): num_tests = len(items) if num_tests &gt; 100: print(f'Running {num_tests} tests (more than 100 tests), disabling dataframe logging.') os.environ['RUNNING_MORE_THAN_100_TESTS'] = '1' else: print(f'Running {num_tests} tests.') </code></pre> <p>and then in my <code>log_dataframe()</code> method I do nothing if <code>RUNNING_MORE_THAN_100_TESTS</code> is set.</p> <pre><code>def log_dataframe(df: DataFrame, logger_obj): if os.environ.get('RUNNING_MORE_THAN_100_TESTS'): logger_obj.info(f'RUNNING_MORE_THAN_100_TESTS is set. log_dataframe() skipped.') return else: # log df as usual </code></pre> <p>But when I run it with some keyword to select specific tests:</p> <pre class="lang-none prettyprint-override"><code>$ pytest -k emailer ============================= test session starts ============================== platform linux -- Python 3.10.12, pytest-8.3.2, pluggy-1.6.0 rootdir: /local_disk0/.ephemeral_nfs/envs/pythonEnv-1581bc5c-5c05-4f64-8c4d-2cf4c1be04fb/lib/python3.10/site-packages/my_pkg plugins: mock-3.14.0, profiling-1.8.1, cov-5.0.0, anyio-3.5.0 Running 651 tests (more than 100 tests), disabling dataframe logging. collected 651 items / 610 deselected / 41 selected </code></pre> <p>So it seems like the hook is called before filters are applied. Is there some other hook that I can use (which is called after items are selected)?</p>
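One approach worth sketching, on the assumption that pytest's `pytest_collection_finish(session)` hook runs after `-k`/`-m` deselection (its `session.items` holds only the selected tests): move the counting there. The `__main__` block simulates a filtered session; in practice the hook lives in `conftest.py`.

```python
import os
from types import SimpleNamespace

def pytest_collection_finish(session):
    # session.items holds only the tests that survived -k / -m filtering,
    # unlike the `items` argument seen by pytest_collection_modifyitems.
    num_tests = len(session.items)
    if num_tests > 100:
        print(f"Running {num_tests} tests (more than 100), disabling dataframe logging.")
        os.environ["RUNNING_MORE_THAN_100_TESTS"] = "1"
    else:
        print(f"Running {num_tests} tests.")

if __name__ == "__main__":
    # Simulate the `-k emailer` run: 41 tests selected out of 651.
    pytest_collection_finish(SimpleNamespace(items=list(range(41))))
    print("RUNNING_MORE_THAN_100_TESTS" in os.environ)  # False
```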
<python><unit-testing><pytest>
2025-06-24 18:14:51
1
17,945
Kashyap
79,677,768
4,451,315
How to get mypy to recognise file `-stubs` package
<p>Here's my folder structure:</p> <pre><code>. ├── foo │   ├── __init__.py │   └── classic.py ├── foo-stubs │   ├── __init__.pyi │   └── classic.pyi ├── pyproject.toml └── t.py </code></pre> <p>Contents are:</p> <pre class="lang-console prettyprint-override"><code>$ cat foo/__init__.py from foo.classic import my_func $ cat foo/classic.py def my_func(a): return 3 $ cat foo-stubs/__init__.pyi from typing import Any from foo.classic import my_func __all__ = ['my_func'] $ cat foo-stubs/classic.pyi from typing import Any def my_func(a: Any) -&gt; int: ... $ cat pyproject.toml [build-system] requires = [&quot;hatchling&quot;] build-backend = &quot;hatchling.build&quot; [project] name = &quot;foo&quot; version = &quot;0.1.0&quot; requires-python = &quot;&gt;=3.9&quot; </code></pre> <p>Then, in <code>t.py</code>, I write</p> <pre class="lang-py prettyprint-override"><code>from foo import my_func reveal_type(my_func) </code></pre> <p>If I then run <code>mypy t.py</code>, I get:</p> <pre><code>t.py:3: note: Revealed type is &quot;def (a: Any) -&gt; Any&quot; Success: no issues found in 1 source file </code></pre> <p>But...why? Shouldn't it be revealed as <code>def (a: Any) -&gt; builtins.int</code>?</p> <hr /> <p><strong>Note</strong></p> <p>I'm aware that I can also put the <code>.pyi</code> files directly in the <code>foo</code> directory, along with <code>py.typed</code>. My question is specifically about how to solve this problem with a <code>-stubs</code> directory, which is mentioned as a strategy on <a href="https://typing.python.org/en/latest/spec/distributing.html#stub-only-packages" rel="nofollow noreferrer">https://typing.python.org/en/latest/spec/distributing.html#stub-only-packages</a></p> <blockquote> <p>However, the stubs can also be put in a separate package and distributed separately. Third parties can also find this method useful if they wish to distribute stub files. 
The name of the stub package MUST follow the scheme foopkg-stubs for type stubs for the package named foopkg.</p> </blockquote>
<python><python-typing><mypy>
2025-06-24 14:36:07
2
11,062
ignoring_gravity
79,677,547
16,563,251
Infer generic type from metaclass argument
<p>I have a metaclass which creates a <a href="https://typing.python.org/en/latest/reference/generics.html" rel="nofollow noreferrer">generic class</a> parametrized by a type var <code>T</code>. It also needs to know the same type at runtime, which I currently achieve by providing it as an <a href="https://stackoverflow.com/questions/13762231/how-to-pass-arguments-to-the-metaclass-from-the-class-definition">additional argument</a>. Both types must be the same always.</p> <p>Currently, when creating an instance class, I have to specify the type twice, once for the type checker and once for runtime use. I would like to avoid this. I think as generic arguments should not be used at runtime, the solution would be to infer the generic type from the metaclass argument. Is that even possible? Or is there another solution, that allows to specify the type just once?</p> <p>MWE:</p> <pre class="lang-py prettyprint-override"><code>class Meta(type): def __new__( cls, name: str, bases: tuple[type, ...], cls_dict: dict[str, object], wrapped_class: type, ): for member in wrapped_class.__dict__: if member.startswith(&quot;__&quot;): # exclude python internals continue if member not in cls_dict: raise TypeError(f&quot;Need to implement {member}&quot;) return super().__new__(cls, name, bases, cls_dict) class Base: def foo(self): return &quot;bar&quot; class Wrapper[T](metaclass=Meta, wrapped_class=object): def do_wrapper_stuff(self, arg: T) -&gt; T: return arg class BaseWrapper(Wrapper[Base], wrapped_class=Base): ... # TypeError: &quot;Need to implement foo&quot; – as is intended </code></pre> <p>More context what I am trying to achieve: I have multiple classes for which I am writing wrappers. The wrappers need to always have parity with the wrapped classes, because either could be given as parameters to other functions. The metaclass is supposed to ensure this. 
The generic argument is needed for the type checker to know which class is wrapped.</p> <p>I am aware that subclassing the wrapped class would achieve roughly that. However, when the wrapped class gains new attributes, they must not be accessible by default (the wrapper is part of a permission system); instead, the wrapper should raise an exception when they are accessed (or implement the intended behaviour). This is very hard to detect when the wrapper is a subclass, so my current approach is to implement it separately (the code needs to be written in either case) and perform a type cast to the wrapped class for further use.</p>
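One way to avoid spelling the type twice is to let the metaclass read `__orig_bases__`, which the generic machinery places in the class namespace whenever a subclass inherits from `Wrapper[SomeType]`. The sketch below (using `Generic[T]` for portability, and omitting the member-parity check) shows only the inference step:

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class Meta(type):
    def __new__(mcs, name, bases, cls_dict, wrapped_class=None):
        # When a class inherits from Wrapper[SomeType], the generic
        # machinery stores (Wrapper[SomeType],) under __orig_bases__ in
        # the namespace, so the runtime type can be recovered from it.
        if wrapped_class is None:
            for base in cls_dict.get("__orig_bases__", ()):
                args = get_args(base)
                if args and isinstance(args[0], type):
                    wrapped_class = args[0]
                    break
        new_cls = super().__new__(mcs, name, bases, cls_dict)
        new_cls._wrapped_class = wrapped_class  # available for member checks
        return new_cls

class Wrapper(Generic[T], metaclass=Meta):
    def do_wrapper_stuff(self, arg: T) -> T:
        return arg

class Base:
    def foo(self):
        return "bar"

class BaseWrapper(Wrapper[Base]):  # type specified only once
    ...

print(BaseWrapper._wrapped_class)  # the Base class
```

The explicit `wrapped_class=` keyword is kept as an override for cases where the generic argument is absent.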
<python><python-typing><metaclass>
2025-06-24 12:08:57
2
573
502E532E
79,677,539
1,786,137
Tkinter image label performance issues when updating large image
<p>This is a follow up question from <a href="https://stackoverflow.com/questions/79633834/calling-imagetk-photoimage-in-a-thread-causing-deadlock/79634376#79634376">Calling ImageTk.PhotoImage() in a thread causing deadlock</a></p> <p>I have so far fixed the deadlock issue following the design in the accepted answer. Now I am working on a responsive UI that allows the user to resize the window. As the window size changes, the image shown in the label should also resize to fill the window. However, I found large images can significantly impact the frame rate. Displaying and updating images on a 4K monitor causes the UI to become unresponsive.</p> <p>I timed the set_image method, which is called recursively with root.after, and found that root.after lags behind significantly when showing large images:</p> <pre><code>Average delay time: 0.0014089000100102568, image size: 100 x 100 Average delay time: 0.001673793329991895, image size: 200 x 200 Average delay time: 0.0033536416499555344, image size: 400 x 400 Average delay time: 0.02414946045006218, image size: 800 x 800 Average delay time: 0.07494734207000875, image size: 1600 x 1600 Average delay time: 0.1494195945899719, image size: 3200 x 3200 </code></pre> <p>I am using a MacBook M3 Pro so it is not a terrible machine; similar results are obtained from an i7 machine also.</p> <p>Is there a better way to display images in a Tkinter UI? 
Am I reaching the limitation of python and should consider C++ at this point instead?</p> <p>Below is a minimum viable code for you test.</p> <pre class="lang-py prettyprint-override"><code>import tkinter as tk from PIL import Image, ImageTk import threading import time ITERATION = 20 class Timer: def __init__(self): self.reset() def reset(self): self.time_accumulated = 0 self.start_time = None self.count = 0 def begin(self): self.start_time = time.perf_counter() def stop(self): if self.start_time is None: return self.time_accumulated += time.perf_counter() - self.start_time self.count += 1 def get_average_time(self): return self.time_accumulated / self.count class App: def __init__(self, root): self.root = root self.label = tk.Label(root) self.label.pack() self.running = True self.frame = None self.image_size = 100 # run in other thread self.display_thread = threading.Thread(target=self.update_image_loop, daemon=True) self.display_thread.start() self.root.protocol(&quot;WM_DELETE_WINDOW&quot;, self.on_close) self.loop_time_previous = time.perf_counter() self.timer = Timer() self.root.after(1, self.set_image) def update_image_loop(self): &quot;&quot;&quot;Run in other thread.&quot;&quot;&quot; while self.running: # self.frame = Image.open(self.image_path) # make a small black image self.frame = Image.new('RGB', (self.image_size, self.image_size), (0, 0, 0)) time.sleep(0.001) def set_image(self): &quot;&quot;&quot;Run in main thread.&quot;&quot;&quot; self.timer.stop() if self.frame: self.image = ImageTk.PhotoImage(image=self.frame) self.label.config(image=self.image) if self.timer.count &gt;= ITERATION: self.next() if self.running: self.timer.begin() self.root.after(1, self.set_image) def next(self): print(f'Average delay time: {self.timer.get_average_time()}, image size: {self.image_size} x {self.image_size}') self.image_size *= 2 if self.image_size &gt;= 5000: self.root.destroy() return self.timer.reset() def on_close(self): self.running = False 
self.display_thread.join() self.root.destroy() if __name__ == '__main__': root = tk.Tk() app = App(root) root.mainloop() </code></pre>
<python><tkinter>
2025-06-24 12:00:44
0
3,863
Anthony
79,677,376
3,333,319
Code block partially highlighted in Sphinx
<p>In my Sphinx documentation I have the following code-block:</p> <pre class="lang-none prettyprint-override"><code>.. code-block:: python import logging logging.getLogger('mymodule').setLevel(logging.INFO) </code></pre> <p>but when the documentation is rendered the code is only partially highlighted:</p> <p><a href="https://i.sstatic.net/wiJNENUY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiJNENUY.png" alt="rendered_snippet" /></a></p> <p>Is it possible to highlight also the other parts? The only (related) settings I am using in <code>conf.py</code> is <code>pygments_style = 'default'</code>.</p> <p><strong>Update:</strong> I would expect something similar to what I get in VS Code</p> <p><a href="https://i.sstatic.net/lO43Ag9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lO43Ag9F.png" alt="vscode_highlighting" /></a></p>
<python><documentation><python-sphinx>
2025-06-24 10:02:44
1
973
Sirion
79,677,198
1,771,155
Install local project in editable mode with uv
<p>I have my <code>project</code> and some <code>other-local-project</code> where I want to:</p> <ul> <li>Import <code>other-local-project</code> in <code>project</code></li> <li>Have changes in <code>other-local-project</code> immediately reflected in <code>project</code></li> </ul> <pre class="lang-bash prettyprint-override"><code>project/ pyproject.toml uv.lock other-local-project/ pyproject.toml uv.lock </code></pre> <p>How do I install another local project in editable mode with uv?</p>
<python><uv>
2025-06-24 07:56:48
1
4,886
Vincent Claes
79,677,095
354,051
When using nanobind, How to manage GIL and objects in between different threads
<p>I have created a small framework based on <a href="https://github.com/taskflow/taskflow" rel="nofollow noreferrer">taskflow</a> and wrapped it using nanobind. When the framework finishes the graph evaluation, it can fire a callback that you can register. I am trying to register a Python function, but the application crashes.</p> <pre class="lang-cpp prettyprint-override"><code>.def(&quot;set_graph_evaluation_finish_callback&quot;, &amp;PyScene::set_graph_evaluation_finish_callback) </code></pre> <pre class="lang-cpp prettyprint-override"><code>void PyScene::set_graph_evaluation_finish_callback( nb::callable cb) { nb::gil_scoped_acquire gil; callback = std::move(cb); scene.setGraphEvaluationFinishCallback([this]() { nb::gil_scoped_acquire gil; if (this-&gt;callback) this-&gt;callback(); std::cout &lt;&lt; &quot;C++ lambda reached (no GIL)\n&quot;; }); } </code></pre> <p>When the lambda function is called from the framework, the line inside the lambda</p> <pre><code>nb::gil_scoped_acquire gil; </code></pre> <p>is responsible for the crash.</p> <pre><code>Critical nanobind error: nanobind::detail::incref_check(): attempted to change the reference count of a Python object while the GIL was not held. </code></pre> <p>I believe this is because there are two threads, the Python one and the one that calls the lambda, but I could be wrong. How do I solve this problem?</p>
<python><gil><nanobind>
2025-06-24 06:27:26
0
947
Prashant
79,676,851
7,215,853
Microsoft Graph SDK for Python: Using the Patch operation with the SDK's own models creates false updates
<p>I am using the Microsoft Graph SDK for CRUD operations on resources like mails, contacts, calendar events etc.</p> <p>The Graph API uses patch for updating datasets. At the same time, the SDK provides models for all resources available via the Graph API and you are strongly encouraged to use them.</p> <p>The problem is that those models are typed dataclasses. Meaning right from instantiation, the class has all defined attributes; those you set are filled, all others are None by default. Initially I assumed that the SDK or the Graph API ignores values set to None by default, but it does not. So whenever I use the provided models for a patch operation and only fill them with the data that is supposed to be updated, it will update this data and everything else gets updated to &quot;None&quot;.</p> <p>For instance when I have a contact like:</p> <pre><code>Surname: Homer Name: Simpson Occupation: Controller at Springfield Nuclear Plant </code></pre> <p>And I now update Homer's Occupation because he switches jobs again, then the SDK's Contact object will look like this:</p> <pre><code>Surname: None Name: None Occupation: Astronaut </code></pre> <p>And since it's a patch operation, it will also update my dataset on MS Graph with those None values.</p> <p>What I have tried so far:</p> <ol> <li>A lot of research in the SDK's Git repository and many LLMs.</li> <li>Converting the object into a dictionary to delete None values. This is possible, but a lot of those objects have other nested objects, and when you later try to serialise this dictionary, Python is unable to do this. The SDK comes with its own serialiser, but when you make any changes to the provided models it often is unable to serialise them.</li> </ol> <p>This feels like a catch-22 because the models provided can not be changed without breaking the serialiser, but they also can not be left with all those None values because they break the patch operation.</p> <p>EDIT: I was asked to provide python code. 
But the question is not about my code; it is about the Python implementation of the Graph SDK and the API's behaviour. So please find attached for reference: <a href="https://github.com/microsoftgraph/msgraph-sdk-python/tree/main" rel="nofollow noreferrer">https://github.com/microsoftgraph/msgraph-sdk-python/tree/main</a></p>
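Independent of the SDK's serializer, the None-pruning step from attempt 2 can be sketched for plain dictionaries. The field names below are illustrative (not the SDK's Contact model), and wiring the pruned dict into a raw PATCH request — bypassing the typed model — is left out:

```python
def prune_none(value):
    """Recursively drop None entries so a PATCH body only carries real updates."""
    if isinstance(value, dict):
        return {k: prune_none(v) for k, v in value.items() if v is not None}
    if isinstance(value, list):
        return [prune_none(v) for v in value if v is not None]
    return value

# Illustrative field names -- not the SDK's Contact model.
contact_update = {
    "surname": None,
    "givenName": None,
    "jobTitle": "Astronaut",
    "emailAddresses": [{"address": "homer@nasa.example", "name": None}],
}

print(prune_none(contact_update))
# {'jobTitle': 'Astronaut', 'emailAddresses': [{'address': 'homer@nasa.example'}]}
```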
<python><microsoft-graph-api><microsoft-graph-sdks>
2025-06-23 23:13:04
0
320
MrTony
79,676,799
20,895,654
Python: How to type hint the class Any itself
<h2>The Problem And Approach #1</h2> <p>I have a function that has a parameter which can take some specific types and the type <code>Any</code> (the class itself), which in my case is also the default. Here is a simple example:</p> <pre><code>def foo(type: type[int] | type[str] | type[Any] = Any) -&gt; Any: ... </code></pre> <p>The problem is that this represents something different than what I want. Using <code>Any</code> as the single type hint argument of <code>type</code> makes it act like the usual use case for <code>Any</code>, which is that the parameter can be a type of anything. This causes multiple issues for type checking purposes:</p> <ol> <li>Now any kind of class is allowed, not just the class <code>Any</code> itself, which defeats the purpose of type checking.</li> <li>My type checker (Pylance) even sees this as an error, claiming <code>Type &quot;type[typing.Any]&quot; is not assignable to declared type &quot;builtins.type[Any]</code></li> </ol> <h2>Approach #2</h2> <p>As a second approach I looked into the code and found this code that defines <code>Any</code> (Python 3.13):</p> <pre><code>class _AnyMeta(type): def __instancecheck__(self, obj): if self is Any: raise TypeError(&quot;typing.Any cannot be used with isinstance()&quot;) return super().__instancecheck__(obj) def __repr__(self): if self is Any: return &quot;typing.Any&quot; return super().__repr__() # respect to subclasses class Any(metaclass=_AnyMeta): &quot;&quot;&quot;Special type indicating an unconstrained type. - Any is compatible with every type. - Any assumed to have all methods. - All values assumed to be instances of Any. Note that all the above statements are true from the point of view of static type checkers. At runtime, Any should not be used with instance checks. 
&quot;&quot;&quot; def __new__(cls, *args, **kwargs): if cls is Any: raise TypeError(&quot;Any cannot be instantiated&quot;) return super().__new__(cls) </code></pre> <p>So I saw that <code>Any</code> has the metaclass <code>_AnyMeta</code> and therefore tried importing it and simply type hinting my function like this:</p> <pre><code>def foo(type: type[int] | type[str] | _AnyMeta = Any) -&gt; Any: ... </code></pre> <p>The problem here is that I cannot import or use <code>_AnyMeta</code> from the typing module. I actually don't know why, as <code>_AnyMeta</code> doesn't get deleted with <code>del</code> anywhere in the source code of the <code>typing</code> module.</p> <h2>Summary</h2> <p>So I could not figure out how to type hint the <code>Any</code> class itself. I also couldn't find any similar questions or answers online that could answer this. I am left wondering if this is even possible in the current version of Python (3.13), but I hope that somebody might know how to. Technically I could use some kind of sentinel object instead of the <code>Any</code> type, but I'm still curious if this is possible. It would also be a more elegant solution for my problem if it was possible.</p>
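The sentinel route mentioned at the end can be sketched like this. `AnyType` is an illustrative name, not part of `typing`; because it is an ordinary class, type checkers accept it in the union without the `type[Any]` complaints:

```python
from __future__ import annotations

import enum

class AnyType(enum.Enum):
    """Sentinel meaning 'no restriction' -- an ordinary class, so it can
    appear in the union where type[Any] causes checker complaints."""
    ANY = enum.auto()

def foo(type_: type[int] | type[str] | AnyType = AnyType.ANY) -> str:
    if type_ is AnyType.ANY:
        return "accepts anything"
    return f"restricted to {type_.__name__}"

print(foo())     # accepts anything
print(foo(int))  # restricted to int
```

An enum sentinel also narrows cleanly: after the `is AnyType.ANY` check, checkers know `type_` is `type[int] | type[str]` in the else branch.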
<python><python-typing>
2025-06-23 21:55:18
1
346
JoniKauf
79,676,658
1,857,229
Efficiently compute all multi-dimensional traces for all offsets and store in matrix
<p>I have a $N\times N \times N$ array $a$ and would like to implement the formula $$ b_{ij} = \sum_k a_{i+k,j+k,k} $$ efficiently.</p> <p>Right now, I'm doing this via</p> <pre class="lang-py prettyprint-override"><code>b = np.zeros((N, N)) for i in range(N): for j in range(N): m = np.maximum(i, j) b[i,j] = np.einsum('iii', a[i:N-m+i,j:N-m+j,:N-m]) </code></pre> <p>which seems quite inefficient.</p> <p>Is there a way to do this without cython which works just with numpy (or any other, numpy-compatible interface such as <code>jax.numpy</code>)?</p> <p><strong>Edit 1:</strong> added missing explicit bounds.</p>
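Assuming NumPy only, the double loop can be reduced to a single loop over $k$: for fixed $k$, the contribution $a_{i+k,j+k,k}$ is exactly the $(i,j)$ entry of the slice `a[k:, k:, k]`. A sketch, cross-checked against the original implementation:

```python
import numpy as np

def shifted_traces(a: np.ndarray) -> np.ndarray:
    """b[i, j] = sum_k a[i+k, j+k, k], with a single loop over k."""
    N = a.shape[0]
    b = np.zeros((N, N), dtype=a.dtype)
    for k in range(N):
        # For fixed k, a[i+k, j+k, k] is the (i, j) entry of a[k:, k:, k],
        # valid exactly where i < N-k and j < N-k.
        b[: N - k, : N - k] += a[k:, k:, k]
    return b

# Cross-check against the original double-loop implementation.
rng = np.random.default_rng(0)
N = 6
a = rng.standard_normal((N, N, N))

ref = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        m = max(i, j)
        ref[i, j] = np.einsum("iii", a[i : N - m + i, j : N - m + j, : N - m])

assert np.allclose(shifted_traces(a), ref)
```

This does N slice additions of shrinking 2-D blocks instead of N² einsum calls, so the Python-level overhead drops from O(N²) calls to O(N).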
<python><numpy><numpy-einsum>
2025-06-23 19:12:46
2
1,471
Uroc327
79,676,654
11,064,604
Overthreading in Multiprocessing for ML models
<h3>The problem</h3> <p>I am running machine learning models in parallel using multiprocessing. When using models with parameters stating the number of threads used- <code>num_threads</code>, <code>num_jobs</code>, etc. - the code works well. Recently, I have had to use models that do not have this parameter. Below is a minimal example of my code:</p> <pre><code>import multiprocessing import numpy as np from sklearn.linear_model import PoissonRegressor class MPModelWrapper: &quot;&quot;&quot; Wraps model for use in multiprocessing &quot;&quot;&quot; def __init__(self, model): self.model = model def fit_predict(self, train_target_val_pair): X = train_target_val_pair[0] y = train_target_val_pair[1] val = train_target_val_pair[2] self.model.fit(X,y) return self.model.predict(val) #Fake data to train models on num_models_to_run = 100 train_target_val_pairs = [ (np.random.random((1000, 10)), np.random.randint(0, 2, size=(1000,) ), np.random.random((10, 10)) ) for i in range(num_models_to_run) ] #Multiprocessor num_concurrent_models = 1 model = PoissonRegressor() with multiprocessing.Pool( num_concurrent_models ) as p: results = p.map(MPModelWrapper(model).fit_predict, train_target_val_pairs) </code></pre> <p><strong>How can I ensure that each model called only uses 1 thread?</strong></p> <h3>Attempts to Fix the Problem</h3> <ol> <li><em>Manually setting libraries to 1 core</em> I tried this but it still indicated full core usage by each model:</li> </ol> <pre><code>#At the top of the pgm above all the previous code import os os.environ['OMP_NUM_THREADS'] = '1' os.environ['MKL_NUM_THREADS'] = '1' os.environ['OPENBLAS_NUM_THREADS'] = '1' os.environ['BLAS_NUM_THREADS'] = '1' os.environ['NUMEXPR_NUM_THREADS'] = '1' </code></pre> <ol start="2"> <li><em>joblib</em>. 
I used joblib but encountered the same problems:</li> </ol> <pre><code>from joblib import Parallel, delayed import os # Set global threading limits os.environ.update({ 'OMP_NUM_THREADS': '1', 'MKL_NUM_THREADS': '1', 'OPENBLAS_NUM_THREADS': '1', 'BLAS_NUM_THREADS': '1' }) preds = Parallel(n_jobs=os.cpu_count(), backend='threading')( delayed(fit_predict_wrapper)() for _ in range(1) ) </code></pre>
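One pattern worth trying, sketched with the standard library only: set the environment variables inside each worker via the pool's `initializer`, so they are in place before the worker imports NumPy/scikit-learn and their BLAS back ends. `report` is a stand-in for `fit_predict`; note this cannot help when the parent's already-initialized BLAS state is inherited via fork, in which case the `spawn` start method or the `threadpoolctl` package is the more robust route.

```python
import multiprocessing
import os

_THREAD_VARS = (
    "OMP_NUM_THREADS",
    "MKL_NUM_THREADS",
    "OPENBLAS_NUM_THREADS",
    "BLAS_NUM_THREADS",
    "NUMEXPR_NUM_THREADS",
)

def limit_worker_threads():
    # Runs in each worker process before any task executes, so the
    # variables exist before the worker loads numpy/sklearn and BLAS.
    for var in _THREAD_VARS:
        os.environ[var] = "1"

def report(_):
    # Stand-in for MPModelWrapper.fit_predict: prove the limit is visible.
    return os.environ.get("OMP_NUM_THREADS")

if __name__ == "__main__":
    with multiprocessing.Pool(2, initializer=limit_worker_threads) as pool:
        print(pool.map(report, range(4)))
```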
<python><machine-learning><multiprocessing>
2025-06-23 19:09:55
0
353
Ottpocket
79,676,321
9,549,068
How to prevent VS Code from importing global Python env site-packages when using pixi venv?
<h3>situation</h3> <ul> <li><p>I created a new Python project with <strong>pixi venv</strong>. <br /> I have this isolated venv, including Python. <br /> I installed <code>pandas</code> etc. <br /> I didn't install <code>plotly</code> etc.</p> </li> <li><p>I have Python installed globally. <br /> The global Python environment has <code>plotly</code> installed.</p> </li> </ul> <h3>problem</h3> <ul> <li><p>Now I open the project in VS Code.</p> </li> <li><p>If I try to import <code>plotly</code>. <br /> It won't show any error. <br /> If I click into it, it goes to the global Python env site-packages.</p> </li> <li><p>Even If I try to import <code>pandas</code>. <br /> Same thing happens.</p> </li> <li><p><a href="https://i.sstatic.net/IYLFP5wW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYLFP5wW.png" alt="enter image description here" /></a></p> </li> </ul> <h3>desire</h3> <ul> <li><p>I want VS Code to truly isolate the venv. <br /> Stop it from importing from the global Python site-packages. 
<br /> Just show an error if that happens in my code.</p> </li> <li><p>Note: I need an visible error in the IDE, not just when running in terminal.</p> </li> </ul> <h3>attempts</h3> <ul> <li>I tried with <code>PYTHONNOUSERSITE=1</code>, <ul> <li>won't work for VS Code IDE</li> <li>only works in terminal (this is not enough)</li> </ul> </li> <li><code>include-system-site-packages = false</code> doesn't exist for pixi.</li> </ul> <h3>compare</h3> <ul> <li>if I create a project with <code>.venv</code>, <strong>no such problem</strong>, it's truly isolated, and VS Code <strong>will show an error</strong> when importing a package not in venv (regardless of its in global Python env site-packages or not).</li> <li>if I create a project with <code>.conda</code>, some project has no such problem, (the base env), but others seems have the same problem</li> <li>if I create a project with <code>.pixi</code>, have the problem</li> </ul> <h3>diagnostics</h3> <p>To show where they are imported:</p> <pre class="lang-py prettyprint-override"><code>import sys import os # Print the executable python thinks it's running print(&quot;--- Python Executable ---&quot;) print(sys.executable) print(&quot;-&quot; * 25) # Print all the paths python will search for modules print(&quot;\n--- Module Search Paths (sys.path) ---&quot;) for path in sys.path: print(path) print(&quot;-&quot; * 25) # Check if the PYTHONPATH variable is set print(&quot;\n--- PYTHONPATH Environment Variable ---&quot;) py_path = os.environ.get('PYTHONPATH') if py_path: print(py_path.replace(':', '\n')) # Use ';' for Windows if needed else: print(&quot;PYTHONPATH is not set.&quot;) print(&quot;-&quot; * 25) # Try the import and see where it comes from try: import pandas print(&quot;\n--- 'pandas' was found here: ---&quot;) print(pandas.__file__) except ImportError: print(&quot;\n--- 'pandas' could not be imported. 
---&quot;) try: import plotly print(&quot;\n--- 'plotly' was found here: ---&quot;) print(plotly.__file__) except ImportError: print(&quot;\n--- 'plotly' could not be imported. ---&quot;) try: import torch print(&quot;\n--- 'torch' was found here: ---&quot;) print(torch.__file__) except ImportError: print(&quot;\n--- 'torch' could not be imported. ---&quot;) </code></pre> <p>Output:</p> <pre><code>D:\usp\uhpsj\study\build_llm_wsp\build_llm&gt;D:/usp/uhpsj/study/build_llm_wsp/build_llm/.pixi/envs/default/python.exe d:/usp/uhpsj/study/build_llm_wsp/build_llm/src/build_llm_demo/_demo/d2.py --- Python Executable --- D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\python.exe ------------------------- --- Module Search Paths (sys.path) --- d:\usp\uhpsj\study\build_llm_wsp\build_llm\src\build_llm_demo\_demo D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\python311.zip D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\DLLs D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\Lib D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default C:\Users\nshln\AppData\Roaming\Python\Python311\site-packages D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\Lib\site-packages D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\Lib\site-packages\win32 D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\Lib\site-packages\win32\lib D:\usp\uhpsj\study\build_llm_wsp\build_llm\.pixi\envs\default\Lib\site-packages\Pythonwin ------------------------- --- PYTHONPATH Environment Variable --- PYTHONPATH is not set. 
------------------------- --- 'pandas' was found here: --- C:\Users\nshln\AppData\Roaming\Python\Python311\site-packages\pandas\__init__.py --- 'plotly' was found here: --- C:\Users\nshln\AppData\Roaming\Python\Python311\site-packages\plotly\__init__.py --- 'torch' was found here: --- C:\Users\nshln\AppData\Roaming\Python\Python311\site-packages\torch\__init__.py </code></pre> <p><code>pixi.toml</code> packages</p> <pre class="lang-ini prettyprint-override"><code>[workspace] authors = [&quot;norlz &lt;norlz@gitkra.com&gt;&quot;] channels = [&quot;conda-forge&quot;] name = &quot;build-llm-demo&quot; platforms = [&quot;win-64&quot;] version = &quot;0.1.0&quot; [tasks] [system-requirements] cuda = &quot;12.0&quot; [dependencies] python = &quot;3.11.*&quot; pandas = &quot;&lt;=2.1.4&quot; pydantic-settings = &quot;&gt;=2.9.1,&lt;3&quot; ruff = &quot;&gt;=0.12.0,&lt;0.13&quot; pyright = &quot;&gt;=1.1.402,&lt;2&quot; pytest = &quot;&gt;=8.4.1,&lt;9&quot; pytorch-gpu = &quot;*&quot; cuda-version = &quot;12.6.*&quot; [pypi-dependencies] fastapi = { version = &quot;&gt;=0.115.13, &lt;0.116&quot;, extras = [&quot;standard&quot;] } uvicorn = { version = &quot;&gt;=0.34.3, &lt;0.35&quot;, extras = [&quot;standard&quot;] } gunicorn = &quot;&gt;=23.0.0, &lt;24&quot; </code></pre>
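A diagnostic sketch of what the output above shows: the stray entries come from the per-user site directory (`site.getusersitepackages()`), which Python adds unless user site is disabled. Stripping it at startup — e.g. from a `sitecustomize.py` inside the env, which is an assumption about where to hook, not a pixi feature — fixes the runtime leak, though the IDE's analysis paths are configured separately:

```python
import site
import sys

# The leaked packages come from the per-user site directory,
# e.g. ...\AppData\Roaming\Python\Python311\site-packages.
user_site = site.getusersitepackages()
print("user site enabled:", site.ENABLE_USER_SITE)
print("user site on sys.path:", user_site in sys.path)

# Drop it so imports can only resolve inside the environment.
sys.path[:] = [p for p in sys.path if p != user_site]
print("user site on sys.path:", user_site in sys.path)
```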
<python><visual-studio-code><python-venv><pixi-package-manager>
2025-06-23 14:22:30
2
1,595
Nor.Z
79,676,236
13,756,780
What size should an `IterableDataset` report when used in a multi-worker `DataLoader`?
<p>Here's a simple dataset and a data loader that uses it:</p> <pre class="lang-py prettyprint-override"><code>import torch
from torch.utils.data import DataLoader, IterableDataset


class Dataset(IterableDataset):
    def __init__(self, size: int):
        super().__init__()
        self.size = size

    def __iter__(self):
        for i in range(self.size):
            yield torch.tensor([i], dtype=torch.float32)

    def __len__(self):
        return self.size


dataloader = DataLoader(Dataset(5), batch_size=1, num_workers=4)
result = list(dataloader)
print(len(result))
</code></pre> <p>This prints the result:</p> <pre><code>/usr/local/share/venv/lib/python3.9/site-packages/torch/utils/data/dataloader.py:718: UserWarning: Length of IterableDataset &lt;__main__.Dataset object at 0x77699d662fa0&gt; was reported to be 5(when accessing len(dataloader)), but 6 samples have been fetched. For multiprocessing data-loading, this could be caused by not properly configuring the IterableDataset replica at each worker. Please see https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset for examples.
  warnings.warn(warn_msg)
[... the same warning repeats for 7 through 19 fetched samples ...]
/usr/local/share/venv/lib/python3.9/site-packages/torch/utils/data/dataloader.py:718: UserWarning: Length of IterableDataset &lt;__main__.Dataset object at 0x77699d662fa0&gt; was reported to be 5(when accessing len(dataloader)), but 20 samples have been fetched. For multiprocessing data-loading, this could be caused by not properly configuring the IterableDataset replica at each worker. Please see https://pytorch.org/docs/stable/data.html#torch.utils.data.IterableDataset for examples.
  warnings.warn(warn_msg)
20
</code></pre> <p>I don't get it. The dataset reports its correct size. I get that each worker will iterate until its end, but then how should I &quot;configure&quot; it in each worker? Should it report a length of <code>self.size * 4</code> so that the <code>DataLoader</code> sees the length that all workers together will see? Should I modify it after the fact in each worker so that each worker gets a dataset that's 4 times smaller?</p> <p>Why is the <code>DataLoader</code> expecting <code>len(dataset)</code> to be equal to <code>number_of_samples * worker_size</code> when it's not the dataset that handles the workers?</p>
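For context, what the warning's "configuring the IterableDataset replica at each worker" usually means is that each worker's copy of the dataset yields only its own shard. A torch-free sketch of the arithmetic (in a real `__iter__` you would read `worker_id` and `num_workers` from `torch.utils.data.get_worker_info()`; the `shard` helper name is made up here):

```python
from itertools import islice

def shard(iterable, worker_id, num_workers):
    # Each replica yields every num_workers-th item, offset by its own id --
    # the per-worker configuration the warning is talking about.
    return islice(iterable, worker_id, None, num_workers)

# 4 workers splitting range(5): together they yield each sample exactly once
# instead of 4 full copies of all 5 samples.
all_samples = [x for w in range(4) for x in shard(range(5), w, 4)]
print(sorted(all_samples))  # [0, 1, 2, 3, 4]
```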
<python><pytorch><pytorch-dataloader>
2025-06-23 13:11:26
1
344
acl
79,676,114
497,649
How to add a config to a Plotly object?
<p>In Shiny for Python, how can I add a config to a Plotly figure object, as is usually done by:</p> <pre><code>fig.show(config={
    'modeBarButtonsToRemove': [
        'autoScale2d', 'select2d', 'lasso2d', 'pan2d',
        'zoom2d', 'zoomOut2d', 'hoverClosestCartesian',
        'hoverCompareCartesian', 'toggleSpikelines',
        'sendDataToCloud', 'editInChartStudio'
    ],
    'displaylogo': False
})
</code></pre> <p>As there is no <code>show()</code> command to be called, how can this be achieved?</p>
<python><plotly><py-shiny>
2025-06-23 11:51:15
1
640
lambruscoAcido
79,675,883
2,918,248
Python, Use mask to handle MSB is impossible due to infinite precision?
<p>Can anyone explain the last part of the following code, which checks for overflow?</p> <p>The code is the solution for LeetCode question 137, Single Number II.</p> <p>I know it might be overflowing because the MSB is set.</p> <p>Based on the problem statement, the number must be in -2^31 ~ 2^31-1. We can use deduction (subtracting 2^32) to correct the overflow.</p> <p>But can we use a mask, i.e. 0x7FFF FFFF FFFF FFFF, to solve that?</p> <p>I think it is No.</p> <p>Questions:</p> <ol> <li><p>Since the precision is infinite in Python, the mask method would not work. Is that correct?</p> </li> <li><p>But in another problem, <a href="https://leetcode.com/problems/sum-of-two-integers/description/" rel="nofollow noreferrer">371. Sum of Two Integers</a>, I see another way to handle overflow by using a mask like below.</p> </li> </ol> <pre><code>MASK = 0xFFFFFFFF
loner = ~(loner ^ MASK)
</code></pre> <p>I am wondering whether the mask method is more formal.</p> <p>I am not sure why deduction works.</p> <p>When overflow appears, we use 2's complement to restore the number. This is what we were taught, and that explains the mask method.</p> <pre><code>class Solution:
    def singleNumber(self, nums: List[int]) -&gt; int:
        # Loner.
        loner = 0
        # Iterate over all bits
        for shift in range(32):
            bit_sum = 0
            # For this bit, iterate over all integers
            for num in nums:
                # Compute the bit of num, and add it to bit_sum
                bit_sum += (num &gt;&gt; shift) &amp; 1
            # Compute the bit of loner and place it
            loner_bit = bit_sum % 3
            loner = loner | (loner_bit &lt;&lt; shift)
        # Do not mistake the sign bit for the MSB.
        if loner &gt;= (1 &lt;&lt; 31):
            loner = loner - (1 &lt;&lt; 32)
        return loner
</code></pre>
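The deduction and the mask are two views of the same operation. Because Python integers have unbounded precision, 32-bit wraparound never happens on its own and has to be re-imposed explicitly, which can be sketched as:

```python
def to_signed32(x):
    x &= 0xFFFFFFFF           # keep only the low 32 bits (the mask step)
    if x >= 1 << 31:          # MSB set -> negative in 32-bit two's complement
        x -= 1 << 32          # the "deduction" step from the solution
    return x

print(to_signed32(0xFFFFFFFF))   # -1
print(to_signed32(0x7FFFFFFF))   # 2147483647
# For MSB-set values, the ~(x ^ MASK) trick from problem 371 agrees:
assert ~(0xFFFFFFFF ^ 0xFFFFFFFF) == to_signed32(0xFFFFFFFF) == -1
```

Note the `~(x ^ MASK)` form only applies when the MSB is set; for values below 2^31 the number is already correct and must be left alone, which is why both solutions branch on the MSB first.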
<python>
2025-06-23 08:46:56
0
334
Craig Yang
79,675,857
3,502,079
How to change environment within the graphical interface of spyder?
<p>I will provide a self answer to this question, it has been bugging me for years and I only recently found out the solution so I suspect that other people have encountered the same issue.</p> <p>Q: Is it possible to change environments in spyder in the graphical interface?</p> <p>How I currently do it:</p> <ol> <li>Start up anaconda prompt</li> <li>Type <code>activate my_env</code></li> <li>Type <code>spyder</code></li> </ol> <p>Then it would launch spyder in that environment, but this is a relatively large amount of work for something that I do often. Running spyder would also freeze the terminal, so if I want to change environments, I would first have to close spyder and then do steps (2) and (3) again. Again, it's not <em>a lot</em> of work, but it's something I have to do each time I launch spyder or change environments, which I do almost daily, and it takes a couple seconds for spyder to launch. Is there a better way? I would prefer to launch spyder instead of the anaconda prompt and then change environments within spyder.</p>
<python><user-interface><spyder><virtual-environment>
2025-06-23 08:25:26
1
392
AccidentalTaylorExpansion
79,675,688
9,758,790
Pausing or adding new breakpoints halfway during running in VSCode makes the Python Exception uncaught, unhandled, and lost
<p>Pausing or adding new breakpoints halfway through debugging in VSCode seems to make a Python exception uncaught.</p> <p>The following is a minimal example to reproduce my problem.</p> <pre class="lang-py prettyprint-override"><code>import contextlib, signal


@contextlib.contextmanager
def time_limit(seconds):
    def signal_handler(signum, frame):
        raise RuntimeError(&quot;Timed out!&quot;)
    signal.setitimer(signal.ITIMER_REAL, seconds)
    signal.signal(signal.SIGALRM, signal_handler)
    try:
        yield
    finally:
        signal.setitimer(signal.ITIMER_REAL, 0)


if __name__ == &quot;__main__&quot;:
    print(&quot;start&quot;)
    try:
        with time_limit(10):
            a = 1
            while True:
                a = -a
    except RuntimeError:
        print(&quot;timed out&quot;)
    print(&quot;finished&quot;)
</code></pre> <p>Different ways of debugging will give different results:</p> <ol> <li><p><strong>Use Pause. The exception is not caught.</strong> I click <code>pause (F6)</code> halfway through running, and the program pauses on the line <code>a=-a</code> as expected. After several seconds, the time limit is reached and the RuntimeError is thrown, but the program fails to catch the error. The traceback is printed in the console, but the program is not interrupted (still paused). As a result, the program will run forever after clicking <code>continue (F5)</code>.</p> </li> <li><p><strong>Use a preset breakpoint. The exception is caught.</strong> In contrast, if you set a breakpoint at <code>a=-a</code> and run the program, it will stop at the breakpoint. Wait a few seconds, and the program will catch the runtime error and finish successfully. The program will continue automatically once it receives the exception, without the user's permission, which is also strange to me.</p> </li> <li><p><strong>Add a breakpoint halfway. The exception is not caught.</strong> If you add a new breakpoint at <code>a=-a</code> after the program runs for a while, you can find the same behavior as &quot;using Pause&quot;.
In my opinion, &quot;Pause&quot; may be implemented by &quot;adding a breakpoint halfway&quot;? If this is correct, then this problem may be about &quot;adding a breakpoint while running&quot;.</p> </li> </ol> <p>More experiment results:</p> <ol> <li>When using <code>Pause</code>. After pausing, if I switch to the <code>Debug Console</code>, the following message will be thrown when the time limit is reached:</li> </ol> <pre><code>&lt;class 'RuntimeError'&gt; raised from within the callback set in sys.settrace. Debugging will be disabled for this thread (&lt;_MainThread(MainThread, started 140324903388352)&gt;). </code></pre> <ol start="2"> <li>I found similar results for PyCharm. But their behavior is a little different. For VSCode, you can still pause again after clicking <code>continue(F5)</code>. For PyCharm, the pause will not work after clicking <code>continue(F5)</code>.</li> </ol>
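A debugger-friendly workaround sketch (an alternative approach under the assumption that the loop can afford a clock check per iteration; it does not fix the `sys.settrace` behavior itself): replace SIGALRM with a cooperative deadline, so no exception has to be injected from inside a trace callback:

```python
import time

def run_with_deadline(seconds):
    """Cooperative timeout: the loop polls a monotonic deadline instead of
    being interrupted by a signal handler raising inside a trace function."""
    deadline = time.monotonic() + seconds
    a = 1
    while time.monotonic() < deadline:
        a = -a
    return "timed out"

print(run_with_deadline(0.05))  # prints "timed out" after ~50 ms
```

Pausing or adding breakpoints is then safe, because the timeout is ordinary control flow rather than an exception raised from within the interpreter's tracing machinery.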
<python><visual-studio-code><vscode-debugger><vscode-python>
2025-06-23 05:56:20
0
3,084
hellohawaii
79,675,666
5,896,591
How to efficiently log a byte string message in Python 3
<p>I am trying to port a Python2 library to Python3. (My reason is a bit silly: just that Python3 ships with Ubuntu and so if my library works in Python3 then there are fewer installation steps.)</p> <p>It is frustrating because all of my byte strings (i.e. <code>bytes</code> objects) are getting escaped, padded with <code>b''</code>, and converted to Unicodes (i.e. <code>str</code> objects) and back via decoding and re-encoding. For example, I have some simple log statements with byte string parameters:</p> <pre><code># some application stuff
tar_inc_page = 1024
elem = lambda: None
elem.name = b'_demo_vec'
elem.base = 0x2000003c
elem.size = 4

# logging
import logging
logging.basicConfig(level=logging.INFO)
LOG = logging.getLogger(__name__)
LOG.info(b&quot;using tar inc page size of %d&quot; % tar_inc_page)
LOG.info(b&quot;%8s = 0x%08x + 0x%04x&quot; % (elem.name, elem.base, elem.size))
</code></pre> <p>In Python2 I get:</p> <pre><code>INFO:__main__:using tar inc page size of 1024
INFO:__main__:_demo_vec = 0x2000003c + 0x0004
</code></pre> <p>But in Python3 I get:</p> <pre><code>INFO:__main__:b'using tar inc page size of 1024'
INFO:__main__:b'_demo_vec = 0x2000003c + 0x0004'
</code></pre> <p>It looks like Python3 is adding the <code>b''</code> to my byte string, escaping all non-ascii characters (as well as control characters and 0x7f), converting the result to a Unicode, then UTF-8 encoding the result for my terminal. How do I stop the lavish munging and extra conversions? I just want to send the original byte strings to the file or terminal as it worked in Python2.</p> <p>My byte strings are UTF-8 encoded, but 99.99% of software shouldn't care what encoding my file, socket, or terminal is using because I already know that my files, sockets, and terminals all use the same encoding. For example, I use the <code>cat</code> utility to send files to terminals.
Furthermore, any encoding/decoding of strings other than for prettyprinting or text editing is inefficient, bug-prone, and general evidence of poor design. 99.99% of string operations can be done perfectly well on byte strings.</p> <p>Does Python3's built-in logging module provide an option to send the byte strings directly to the output? Otherwise is there a Python2-compatible logging module I can use instead of the one that ships with Python3?</p>
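One stdlib-only direction (a sketch, not a built-in option of the `logging` module; the `BytesHandler` name and demo values are made up here): subclass `StreamHandler` and write the bytes yourself, bypassing the `str()` conversion that produces the `b'...'` escaping:

```python
import io
import logging

class BytesHandler(logging.StreamHandler):
    """Write byte-string log records straight to a binary stream."""
    def emit(self, record):
        # Format with bytes %-interpolation (PEP 461) instead of str().
        msg = record.msg % record.args if record.args else record.msg
        if not isinstance(msg, bytes):
            msg = str(msg).encode("utf-8")  # fall back for ordinary str records
        self.stream.write(msg + b"\n")
        self.flush()

# Demonstrated against an in-memory binary stream; point it at
# sys.stdout.buffer (or a file opened in "wb" mode) in real use.
buf = io.BytesIO()
log = logging.getLogger("raw_bytes_demo")
log.addHandler(BytesHandler(buf))
log.setLevel(logging.INFO)
log.propagate = False

log.info(b"%8s = 0x%08x + 0x%04x" % (b"_demo_vec", 0x2000003C, 4))
print(buf.getvalue())  # b'_demo_vec = 0x2000003c + 0x0004\n'
```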
<python><python-3.x><string><python-2.x><porting>
2025-06-23 05:26:35
1
4,630
personal_cloud
79,675,628
11,580,993
PlotlyExpress not generating bar chart properly
<p>I am trying to create a basic plotly chart:</p> <pre><code>import pandas as pd
import numpy as np
import plotly.express as px

df = pd.DataFrame({&quot;college_name&quot;: list(&quot;ABCD&quot;),
                   &quot;in-state_tuition&quot;: np.random.randint(5000, 7000, 4),
                   &quot;out-of-state_tuition&quot;: np.random.randint(8000, 15000, 4)})

px.bar(
    data_frame=df,
    x=&quot;college_name&quot;,
    y=[&quot;in-state_tuition&quot;, &quot;out-of-state_tuition&quot;],
    opacity=0.9,
    orientation=&quot;v&quot;,
    barmode='group',
    title='Annual In-State Tuition vs Out-of-state Tuition',
).show()
</code></pre> <p>but the result is incorrect: every bar is drawn with the same height: <a href="https://i.sstatic.net/Z4LGFu1m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4LGFu1m.png" alt="enter image description here" /></a></p> <p>I got the example from: <a href="https://plotly.com/python/bar-charts/" rel="nofollow noreferrer">https://plotly.com/python/bar-charts/</a></p> <p>Printing the fig:</p> <pre><code>print(fig)
print(fig.data[0].y)
print(fig.data[1].y)

Figure({
 'data': [{'alignmentgroup': 'True', 'hovertemplate': 'variable=in-state_tuition&lt;br&gt;college_name=%{x}&lt;br&gt;value=%{y}&lt;extra&gt;&lt;/extra&gt;', 'legendgroup': 'in-state_tuition', 'marker': {'color': '#636efa', 'opacity': 0.9, 'pattern': {'shape': ''}}, 'name': 'in-state_tuition', 'offsetgroup': 'in-state_tuition', 'orientation': 'v', 'showlegend': True, 'textposition': 'auto', 'type': 'bar', 'x': array(['A', 'B', 'C', 'D'], dtype=object), 'xaxis': 'x', 'y': {'bdata': 'TBvhF68atRc=', 'dtype': 'i2'}, 'yaxis': 'y'}, {'alignmentgroup': 'True', 'hovertemplate': 'variable=out-of-state_tuition&lt;br&gt;college_name=%{x}&lt;br&gt;value=%{y}&lt;extra&gt;&lt;/extra&gt;', 'legendgroup': 'out-of-state_tuition', 'marker': {'color': '#EF553B', 'opacity': 0.9, 'pattern': {'shape': ''}}, 'name': 'out-of-state_tuition', 'offsetgroup': 'out-of-state_tuition', 'orientation': 'v', 'showlegend': True, 'textposition':
'auto', 'type': 'bar', 'x': array(['A', 'B', 'C', 'D'], dtype=object), 'xaxis': 'x', 'y': {'bdata': 'XSmbMYYk/B8=', 'dtype': 'i2'}, 'yaxis': 'y'}], 'layout': {'barmode': 'group', 'legend': {'title': {'text': 'variable'}, 'tracegroupgap': 0}, 'template': '...', 'title': {'text': 'Annual In-State Tuition vs Out-of-State Tuition'}, 'xaxis': {'anchor': 'y', 'domain': [0.0, 1.0], 'title': {'text': 'college_name'}}, 'yaxis': {'anchor': 'x', 'domain': [0.0, 1.0], 'title': {'text': 'value'}}} }) [6988 6113 6831 6069] [10589 12699 9350 8188] </code></pre> <p>I am using the following versions: plotly: 6.1.2 numpy: 1.23.5</p>
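A way to sanity-check where the problem lives (a debugging sketch; treating `'i2'` as little-endian 16-bit integers is an assumption about plotly's `bdata` encoding that these values appear to confirm): decoding the base64 `bdata` strings from the printed figure with the stdlib shows the traces really hold four distinct values each, which points at the rendering side rather than the data:

```python
import base64
import struct

# 'bdata' strings copied from the printed figure above
in_state = base64.b64decode('TBvhF68atRc=')
out_state = base64.b64decode('XSmbMYYk/B8=')

def decode_i2(raw):
    # dtype 'i2' -> 16-bit signed ints; '<' forces little-endian byte order
    return list(struct.unpack('<%dh' % (len(raw) // 2), raw))

print(decode_i2(in_state))   # [6988, 6113, 6831, 6069]
print(decode_i2(out_state))  # [10589, 12699, 9350, 8188]
```

These match the values from `print(fig.data[0].y)` / `print(fig.data[1].y)`, so the figure object itself is correct and the same-height bars come from whatever renders it.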
<python><plotly><plotly-express>
2025-06-23 04:50:56
0
1,003
Rene Chan
79,675,537
4,018,331
Digital Ocean Function returning generic error code after adding log forwarding
<p>I'm testing a simple function (python) in Digital Ocean App Platform. The function imports psycopg2, connects to an attached Postgres DB, and returns the DB's version. This function was working until I added log forwarding to my DO App and function, to forward logs to Betterstack (Logtail). I did this because within DO App Platform it is not possible to get function logs (I verified this with Digital Ocean support: function logs are only available via their command line tool <code>doctl</code> if the function is a stand-alone function, NOT if the function is part of an App. If the function is part of an App, it is necessary to forward logs to a logging service). Now, instead of returning a success message:</p> <pre class="lang-json prettyprint-override"><code>{
  &quot;message&quot;: &quot;Successfully imported dependencies and found DB version RealDictRow([('version', 'PostgreSQL 16.9 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 14.2.1 20240912 (Red Hat 14.2.1-3), 64-bit')]).&quot;,
  &quot;psycopg2_version&quot;: &quot;2.9.10 (dt dec pq3 ext lo64)&quot;
}
</code></pre> <p>It returns this error message, which is coming from Digital Ocean, not my function:</p> <pre class="lang-json prettyprint-override"><code>{
  &quot;code&quot;: &quot;abc123581b123ea5e3d73848&quot;,
  &quot;error&quot;: &quot;There was an error processing your request.&quot;
}
</code></pre> <p>This is my project.yml file.
Note that DO Support recommended I specify LOG_DESTINATIONS in this way:</p> <pre class="lang-yaml prettyprint-override"><code>environment:
  DB_HOST: &quot;${DB_HOST}&quot;
  DB_PORT: &quot;${DB_PORT}&quot;
  DB_USER: &quot;${DB_USER}&quot;
  DB_PASSWORD: &quot;${DB_PASSWORD}&quot;
  DB_NAME: &quot;${DB_NAME}&quot;
  DB_SSL_MODE: &quot;${DB_SSL_MODE}&quot;
  LOG_DESTINATIONS: &quot;[{'logtail':{'token':'${LOGTAIL_TOKEN}'}}]&quot;
packages:
  - name: db-test
    actions:
      - name: db-tester
        runtime: 'python:default'
        web: true
</code></pre> <p>This is my function:</p> <pre class="lang-py prettyprint-override"><code>import os
import psycopg2
from psycopg2.extras import RealDictCursor


def get_db_connection():
    &quot;&quot;&quot;Create a connection to the PostgreSQL database&quot;&quot;&quot;
    print(&quot;Connecting to the database...&quot;)
    connection = psycopg2.connect(
        host=os.environ.get(&quot;DB_HOST&quot;),
        port=os.environ.get(&quot;DB_PORT&quot;),
        dbname=os.environ.get(&quot;DB_NAME&quot;),
        user=os.environ.get(&quot;DB_USER&quot;),
        password=os.environ.get(&quot;DB_PASSWORD&quot;),
        sslmode=os.environ.get(&quot;DB_SSL_MODE&quot;),
        cursor_factory=RealDictCursor
    )
    return connection


def main(args):
    try:
        print(&quot;Starting the receipt-submitter function...&quot;)
        version = psycopg2.__version__
        conn = get_db_connection()
        cursor = conn.cursor()
        cursor.execute(&quot;SELECT version();&quot;)
        db_version = cursor.fetchone()
        cursor.close()
        conn.close()
        # Test imports were successful
        return {
            &quot;body&quot;: {
                &quot;message&quot;: f&quot;Successfully imported dependencies and found DB version {db_version}.&quot;,
                &quot;psycopg2_version&quot;: version
            }
        }
    except Exception as e:
        print(&quot;Exception thrown.&quot;)
        return {
            &quot;body&quot;: {&quot;error&quot;: f&quot;{str(e)}&quot;},
            &quot;statusCode&quot;: 500
        }
</code></pre>
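One detail that may be worth verifying with DigitalOcean (an observation, not a confirmed cause of the error): the LOG_DESTINATIONS value uses single quotes, so it is not valid JSON as written. If the platform parses that variable as JSON, the parse would fail:

```python
import json

# The value passed via project.yml (token elided with a placeholder)
value = "[{'logtail':{'token':'TOKEN'}}]"

def is_valid_json(s):
    try:
        json.loads(s)
        return True
    except json.JSONDecodeError:
        return False

print(is_valid_json(value))                    # False: single quotes
print(is_valid_json(value.replace("'", '"')))  # True with double quotes
```

Whether DO's log-forwarding config is parsed as strict JSON is an assumption; but swapping to double quotes (escaped appropriately inside the YAML string) is a cheap thing to try.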
<python><logging><digital-ocean>
2025-06-23 00:46:52
1
1,297
j3py
79,675,356
1,000,026
Why is this pandas rolling function exhibiting nonlinear slowdown when run on the cloud?
<p>I moved a library for stock market analysis to the cloud and found strange performance with a key pandas function. It runs reasonably quickly on my low-performance Dell XPS laptop and exhibits the expected linear time-to-finish.</p> <p>When I ran the same code on CodeAnywhere and Google Colab, it tanked.</p> <p>The example rolling function calculates the 20th percentile of all &quot;high&quot; prices using intraday data, limited to the last 20 business days of data strictly from prior dates.</p> <p>Here is the code I'm using to test:</p> <pre><code>from data_access import backfill_historical_intraday_data, form_full_data
from time import time
import pandas as pd
from pandas.api.indexers import VariableOffsetWindowIndexer

my_ticker = 'DBL'
my_intraday_data = form_full_data(handle=my_ticker)
my_data = my_intraday_data[my_intraday_data['date'] &gt; '2020-01-01']
my_data.set_index('date', inplace=True)

for k in range(8):
    my_length = 10 * (3 ** k)
    start = time()
    my_trial_data = my_data.copy().iloc[-1 * my_length:]
    offset = pd.offsets.BDay(20)
    indexer = VariableOffsetWindowIndexer(index=my_trial_data.index, offset=offset)
    my_low_highs = my_trial_data['high'].rolling(indexer, closed='left').quantile(0.2)
    end = time()
    print(f'Process with df length of {my_length} took {end - start} seconds.')
</code></pre> <p>The whole dataframe uses 4.4 M of memory.</p> <p>Here are the results using my cheap XPS:</p> <pre><code>Process with df length of 10 took 0.004008054733276367 seconds.
Process with df length of 30 took 0.003987312316894531 seconds.
Process with df length of 90 took 0.006006479263305664 seconds.
Process with df length of 270 took 0.00899958610534668 seconds.
Process with df length of 810 took 0.02326822280883789 seconds.
Process with df length of 2430 took 0.07396459579467773 seconds.
Process with df length of 7290 took 0.23105525970458984 seconds.
Process with df length of 21870 took 0.7741758823394775 seconds.
</code></pre> <p>Same code on Colab (immediately after buying 100 compute credits):</p> <pre><code>Process with df length of 10 took 0.003057718276977539 seconds.
Process with df length of 30 took 0.0040819644927978516 seconds.
Process with df length of 90 took 0.004738807678222656 seconds.
Process with df length of 270 took 0.01237034797668457 seconds.
Process with df length of 810 took 0.02781844139099121 seconds.
Process with df length of 2430 took 0.22655129432678223 seconds.
Process with df length of 7290 took 6.735988140106201 seconds.
Process with df length of 21870 took 51.67249917984009 seconds.
</code></pre> <p>Up until a length of 810, the two timings matched very closely. But then a 3x increase in length led to a 10x increase in time, as though the process became O(N^2) at that point but not prior.</p> <p>There is no RAM issue. This is a small df and the Colab resource monitor never goes above 10% utilization.</p> <p>Any ideas why two different cloud computing platforms exhibit this nonlinear performance on such a tiny dataset?</p> <p>UPDATE: I put together the following reproducible example, and I'm not seeing the behavior when I run it in Colab. I need to go back and see how this dataframe differs from the one that my code is making.</p> <pre><code>import pandas as pd
from numpy.random import default_rng
from pandas.api.indexers import VariableOffsetWindowIndexer
from time import time

my_rng = default_rng()
print(pd.__version__)

num_minutes = my_rng.random(100000) * 5 * 365.24 * 24 * 60
tick_value = my_rng.random(100000)

raw_df = pd.DataFrame(
    {
        'num_minutes': num_minutes,
        'value': tick_value
    }
)
raw_df['num_minutes'] = raw_df['num_minutes'].astype(int)
raw_df['date_time'] = pd.to_datetime('2020-01-01') + pd.to_timedelta(raw_df['num_minutes'], unit='m')
raw_df['date'] = pd.to_datetime(raw_df['date_time'].dt.date)
raw_df.sort_values('date_time', inplace=True)
raw_df.set_index('date', inplace=True)

for k in range(9):
    my_length = 10 * (3 ** k)
    start = time()
    my_trial_data = raw_df.copy().iloc[-1 * my_length:]
    offset = pd.offsets.BDay(20)
    indexer = VariableOffsetWindowIndexer(index=my_trial_data.index, offset=offset)
    my_rolling_value = my_trial_data['value'].rolling(indexer, closed='left').quantile(0.2)
    end = time()
    print(f'Process with df length of {my_length} took {end - start} seconds.')
</code></pre> <p>Colab and my computer are using the same version of pandas. Colab is probably running on a Linux box; mine is Windows.</p>
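A quick way to quantify the difference from the timings already posted (the numbers below are taken from the two runs in the question): estimate the scaling exponent k in time ≈ n^k between the last two measurements on each machine:

```python
import math

def scaling_exponent(p1, p2):
    """Estimate k in time ~ n**k from two (length, seconds) measurements."""
    (n1, t1), (n2, t2) = p1, p2
    return math.log(t2 / t1) / math.log(n2 / n1)

# (df length, seconds) pairs from the question's output
laptop = scaling_exponent((7290, 0.231), (21870, 0.774))
colab = scaling_exponent((7290, 6.736), (21870, 51.672))
print(round(laptop, 2), round(colab, 2))  # ~1.1 (linear) vs ~1.85 (near-quadratic)
```

So the laptop run is essentially linear while the Colab run is close to quadratic over the same range, which is consistent with the O(N^2) guess in the question.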
<python><pandas><google-cloud-platform><cloud><rolling-computation>
2025-06-22 18:28:10
1
1,044
David R
79,675,294
794,329
Pydantic v2 + Python 3.13: PEP 695 type-alias of a generic model returns TypeAliasType (no model_rebuild()) — how can I keep a real BaseModel?
<p>I’m experimenting with Python 3.13 and Pydantic v2.11.7. My goal is to avoid writing</p> <pre class="lang-python prettyprint-override"><code>GenericTraining[Discriminate[L1LossConfig]]
</code></pre> <p>over and over—I’d like a terse alias such that I can do</p> <pre class="lang-python prettyprint-override"><code>TrainingLike[L1LossConfig]
</code></pre> <p>and still have a fully-fledged BaseModel subclass (with Pydantic’s model_rebuild, validators, etc.).</p> <p><em>Minimal reproducible example</em></p> <pre class="lang-python prettyprint-override"><code>from typing import Annotated, Literal

from pydantic import BaseModel, Field


class L1LossConfig(BaseModel):
    loss_type: Literal[&quot;l1&quot;] = &quot;l1&quot;


class GenericTraining[LossT](BaseModel):
    loss: LossT


type Discriminate[T] = Annotated[T, Field(discriminator=&quot;loss_type&quot;)]

# -----------------------------------------------------------------------------
# 1️⃣ VERBOSE version – works
Training1 = GenericTraining[Discriminate[L1LossConfig]]
Training1.model_rebuild()  # ✅ OK

# 2️⃣ TERSE version – breaks
type TrainingLike[T] = GenericTraining[Discriminate[T]]
Training2 = TrainingLike[L1LossConfig]
Training2.model_rebuild()  # ❌ Attribute &quot;model_rebuild&quot; is unknown
</code></pre> <p>How can I have <code>TrainingLike[L1LossConfig]</code> behave exactly like <code>GenericTraining[Discriminate[L1LossConfig]]</code>, i.e. to be a real subclass of BaseModel?</p>
<python><python-3.x><pydantic><pydantic-v2>
2025-06-22 16:59:53
0
441
Danilo Horta
79,675,122
5,318,986
For Pyfixest: how to access adjusted R2 from the summary of feols?
<p>I am using pyfixest for python (not R), and it looks close to impossible to obtain simple statistics, such as the number of observations and the R-squared. I get the slopes, and I can get them even in a nice data frame using <code>tidy()</code>, but simple parameters that the output reports cannot be obtained for further processing:</p> <pre><code>res = pf.feols(regmodel, subset, vcov={&quot;CRV1&quot;: &quot;gvkey + dyear&quot;})
print(res.summary())
tidy_df = res.tidy()

###

Estimation: OLS  Dep. var.: ln_abs_mv15, Fixed effects: 0
Inference: CRV1
Observations: 18825

| Coefficient   |   Estimate |   Std. Error |   t value |   Pr(&gt;|t|) |   2.5% |   97.5% |
|:--------------|-----------:|-------------:|----------:|-----------:|-------:|--------:|
| Intercept     |      2.985 |        0.105 |    28.334 |      0.000 |  2.772 |   3.197 |
---
RMSE: 0.816  R2: 0.713
</code></pre>
<python><pyfixest>
2025-06-22 11:54:31
1
2,767
Martien Lubberink
79,674,919
25,261,014
Pip exits when installing build dependencies for pygame-ce
<p>I had pygame (not ce) on my Debian system and then updated it to pygame-ce using <code>pip install --upgrade pygame-ce</code> and it worked fine, installing pygame-ce==2.5.2. There were some more recent updates since, and I decided to update pygame-ce again using the same command. I am using python 3.9.2 and pip 25.1.1. I got a similar error trying to install it on Windows, but never this issue on my Linux box. It looks like there is no library <code>portmidi</code>, which it apparently needs, but I tried running <code>sudo apt install portmidi portmidi-dev</code> and apparently neither of those packages exists. I don’t know where pip and/or meson is building pygame.</p> <p><strong>Full pip output</strong>:</p> <pre><code>$ pip install --upgrade pygame-ce
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.org/simple, https://www.piwheels.org/simple
Requirement already satisfied: pygame-ce in /usr/local/lib/python3.9/dist-packages (2.5.2)
Collecting pygame-ce
  Using cached pygame_ce-2.5.5.tar.gz (5.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─&gt; [34 lines of output]
      + meson setup /tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1 /tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1/.mesonpy-qmbs8ea0 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --native-file=/tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1/.mesonpy-qmbs8ea0/meson-python-native-file.ini
      The Meson build system
      Version: 1.7.0
      Source dir: /tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1
      Build dir: /tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1/.mesonpy-qmbs8ea0
      Build type: native build
      Program python found: YES
      ../meson_options.txt:27: WARNING: Project does not target a minimum version but uses feature deprecated since '1.1.0': &quot;boolean option&quot; keyword argument &quot;value&quot; of type str. use a boolean, not a string
      ../meson_options.txt:31: WARNING: Project does not target a minimum version but uses feature deprecated since '1.1.0': &quot;boolean option&quot; keyword argument &quot;value&quot; of type str. use a boolean, not a string
      ../meson_options.txt:35: WARNING: Project does not target a minimum version but uses feature deprecated since '1.1.0': &quot;boolean option&quot; keyword argument &quot;value&quot; of type str. use a boolean, not a string
      Project name: pygame_ce
      Project version: 2.5.5
      C compiler for the host machine: cc (gcc 10.2.1 &quot;cc (Raspbian 10.2.1-6+rpi1) 10.2.1 20210110&quot;)
      C linker for the host machine: cc ld.bfd 2.35.2
      Cython compiler for the host machine: cython (cython 3.0.11)
      Host machine cpu family: arm
      Host machine cpu: arm
      Program python found: YES
      Program python found: YES (/usr/bin/python3)
      Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
      Run-time dependency python found: YES 3.9
      Has header &quot;Python.h&quot; with dependency python-3.9: YES
      Run-time dependency sdl2 found: YES 2.0.14
      Run-time dependency sdl2_image found: YES 2.0.5
      Run-time dependency sdl2_mixer found: YES 2.0.4
      Run-time dependency sdl2_ttf found: YES 2.0.15
      Run-time dependency freetype2 found: YES 23.4.17
      Found CMake: /usr/bin/cmake (3.18.4)
      WARNING: CMake Toolchain: Failed to determine CMake compilers state
      Run-time dependency portmidi found: NO (tried pkgconfig and cmake)
      ../meson.build:302:25: ERROR: C shared or static library 'portmidi' not found

      A full log can be found at /tmp/pip-install-7cxt5bz1/pygame-ce_1fb067b21b694b7e8cf6cc8e4fdaa8b1/.mesonpy-qmbs8ea0/meson-logs/meson-log.txt
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─&gt; See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre> <p>There appears not to be any file at /tmp/pip-install-7cxt5bz1/.../meson-logs/meson-log.txt .</p>
<python><pip><meson-build><pygame-ce>
2025-06-22 05:21:48
1
362
DeepThought42
79,674,833
5,369,275
Scipy OptimizeWarning: Covariance of the parameters could not be estimated when trying to fit function to data
<p>I'm trying to plot some data with a non-linear fit using the function: <a href="https://i.sstatic.net/ItIBPAWk.png" rel="noreferrer"><img src="https://i.sstatic.net/ItIBPAWk.png" alt="The function to fit" /></a></p> <p>kB and Bv being a constant while J'' is the independent variable and T is a parameter.</p> <p>I tried to do this in Python:</p> <pre><code>#Constants kb = 1.380649e-23 B = 0.3808 #Fit function def func(J, T): return (B*(2*J+1) / kb*T * np.exp(-B*J*(J+1)/kb*T)) popt, pcov = optimize.curve_fit(func, x, y) print(popt) plt.plot(x, y, &quot;o&quot;) plt.plot(x, func(j, popt[0])) </code></pre> <p>But this results in the warning &quot;OptimizeWarning: Covariance of the parameters could not be estimated warnings.warn('Covariance of the parameters could not be estimated'&quot; and the parameter turns out to be 1.</p> <p>This is the data:</p> <pre><code>x = [2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48] y = [0.185,0.375,0.548,0.695,0.849,0.931,0.996,0.992,1.0,0.977,0.927,0.834,0.691,0.68,0.575,0.479,0.421,0.351,0.259,0.208,0.162,0.135,0.093,0.066] </code></pre> <p><a href="https://i.sstatic.net/VCCsnZwt.png" rel="noreferrer"><img src="https://i.sstatic.net/VCCsnZwt.png" alt="The Graph" /></a></p>
<python><scipy><physics><data-fitting>
2025-06-22 01:33:49
1
601
Tom291
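For what it's worth, the covariance warning in the question above may stem from operator precedence rather than from `curve_fit` itself: `/ kb*T` parses as `(… / kb) * T`. A minimal sketch of the difference (`T = 300` is just an illustrative value, not from the post):

```python
# The posted fit function writes "B*(2*J+1) / kb*T", which Python parses as
# ((B*(2*J+1)) / kb) * T: "/" and "*" share precedence and associate left to
# right, so T ends up multiplying instead of dividing.  With kb ~ 1e-23 in the
# denominator, the mangled model dwarfs the data and the optimizer never moves
# T off its default start of 1 -- hence the covariance warning.
B, kb, T, J = 0.3808, 1.380649e-23, 300.0, 10   # T=300 is illustrative

wrong = B * (2 * J + 1) / kb * T      # what the posted code computes
right = B * (2 * J + 1) / (kb * T)    # the intended division by kb*T

print(wrong / right)                  # differs by a factor of T**2
```

The same parentheses fix applies inside the exponential: `np.exp(-B*J*(J+1)/(kb*T))`. Even then, with kb in joules per kelvin the prefactor's scale is enormous relative to the normalized y-data, so the corrected model may still need a units check — a judgement call, since the post does not specify units.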
79,674,727
9,877,065
opencv cv2.drawcontours contour argument: contour vs [contour]
<p>my input :</p> <p><a href="https://i.sstatic.net/19nHf6z3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19nHf6z3.jpg" alt="enter image description here" /></a></p> <p>my code :</p> <pre><code># import the necessary packages import numpy as np import argparse import imutils import cv2 print(&quot;Your OpenCV version: {}&quot;.format(cv2.__version__)) print(&quot;Are you using OpenCV 2.X? {}&quot;.format(imutils.is_cv2())) print(&quot;Are you using OpenCV 3.X? {}&quot;.format(imutils.is_cv3())) print(&quot;Are you using OpenCV 4.X? {}&quot;.format(imutils.is_cv4())) # load the image image_orig = cv2.imread('box1.jpg') height_orig, width_orig = image_orig.shape[:2] # cv2.imshow('image_orig' , image_orig) # cv2.waitKey(0) cv2.imwrite('image_orig.png', image_orig) # copy of original image image_to_process = image_orig.copy() image_gray = cv2.cvtColor(image_to_process, cv2.COLOR_BGR2GRAY) image_gray = cv2.GaussianBlur(image_gray, (5, 5), 0) # cv2.imshow('image_gray' , image_gray) # cv2.waitKey(0) cv2.imwrite('image_gray.png', image_gray) # perform edge detection, then perform a dilation + erosion to close gaps in between object edges image_edged = cv2.Canny(image_gray, 50, 100) image_edged = cv2.dilate(image_edged, None, iterations=1) image_edged = cv2.erode(image_edged, None, iterations=1) # cv2.imshow('image_edged' , image_edged) # cv2.waitKey(0) cv2.imwrite('image_edged.png', image_edged) # find contours in the edge map cnts = cv2.findContours(image_edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE) cnts = cnts[1] if imutils.is_cv2() else cnts[0] #### #### using CV4 print(len(cnts)) # loop over the contours individually # prints contours in red color thickness = 1 image_contours = image_to_process.copy() for c in cnts: cv2.drawContours(image_contours, c , -1 ,(0,0,255),1) cv2.imshow('1', image_contours) cv2.waitKey(0) # prints contours in red color thickness = 10 image_contours = image_to_process.copy() for c in cnts: cv2.drawContours(image_contours, 
c , -1 ,(0,0,255),10) cv2.imshow('10', image_contours) cv2.waitKey(0) # prints contours in red color thickness = cv2.FILLED image_contours = image_to_process.copy() for c in cnts: cv2.drawContours(image_contours, c , -1 ,(0,0,255), cv2.FILLED) cv2.imshow('filled', image_contours) cv2.waitKey(0) # prints contours in red color thickness = cv2.FILLED image_contours = image_to_process.copy() for c in cnts: cv2.drawContours(image_contours, [c] , -1 ,(0,0,255), cv2.FILLED) cv2.imshow('filled_[]', image_contours) cv2.waitKey(0) </code></pre> <p>output:</p> <p><a href="https://i.sstatic.net/lDRcdj9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lDRcdj9F.png" alt="enter image description here" /></a></p> <p>Any particular reason:</p> <p><code>cv2.drawContours(image_contours, [c] , -1 ,(0,0,255), cv2.FILLED)</code></p> <p>works while</p> <p><code>cv2.drawContours(image_contours, c , -1 ,(0,0,255), cv2.FILLED)</code></p> <p>doesn't , and why</p> <p><code>cv2.drawContours(image_contours, c , -1 ,(0,0,255), 10)</code> do work ?</p> <p>Which one is the right way to code it ? Am I missing anything ?</p>
<python><opencv><image-processing>
2025-06-21 20:26:00
1
3,346
pippo1980
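So `cv2.drawContours(image, [c], -1, …)` is the intended usage for the question above; the bare-`c` calls only "work" by accident. A numpy-only sketch of why the two shapes behave differently (no OpenCV needed to see it):

```python
import numpy as np

# findContours returns each contour as an (N, 1, 2) array of points.
contour = np.array([[[10, 10]], [[50, 10]], [[50, 50]], [[10, 50]]],
                   dtype=np.int32)

# drawContours wants a *sequence of contours*.  [contour] is a sequence
# holding one 4-point contour:
wrapped = [contour]
print(len(wrapped), wrapped[0].shape)      # one contour of shape (4, 1, 2)

# A bare array is iterated along its first axis instead, yielding four
# one-point "contours".  A single point has no interior, so cv2.FILLED fills
# nothing, thickness=1 leaves near-invisible dots, and thickness=10 paints
# fat dots -- matching the screenshots in the question.
print(len(contour), contour[0].shape)      # four pieces, each shape (1, 2)
```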
79,674,355
5,704,198
How to test a form with unique fields using Hypothesis?
<p>I have a model with a unique constraint linking three fields:</p> <pre><code>class Bookshelf(models.Model): house = models.CharField(max_length=2, choices=HOUSE_CHOICES, default='Ar') room = models.CharField(max_length=20, choices=ROOM_CHOICES) name = models.CharField(max_length=200) compartments = models.IntegerField(choices=[(x, str(x)) for x in range(1, 9)]) shelves = models.IntegerField(choices=[(x, str(x)) for x in range(1, 9)]) class Meta: constraints = [ models.UniqueConstraint(fields=['house', 'room', 'name'], name='unique_library') ] class Book(models.Model): title = models.CharField(max_length=200) bookshelf = models.ForeignKey(Bookshelf, on_delete=models.PROTECT, blank=True, null=True) ... </code></pre> <p>and a modelform to test using hypothesis:</p> <pre><code>class SearchForm(forms.ModelForm): class Meta: model = Book fields = ['title', 'author_name', 'main_topic', 'subtopic', 'bookshelf', 'vote', 'support'] def __init__(self, *args, **kwargs): super(SearchForm, self).__init__(*args, **kwargs) ... </code></pre> <p>My tests often fail with flaky behavior (when I use <code>assume()</code> or <code>exist()</code>) or an IntegrityError. It seems that Hypothesis creates a Bookshelf instance and leaves it in the database between test runs or within the same run. So, when it tries to create a Bookshelf with the same <code>(house, room, name)</code> combination again, it violates the unique constraint. 
Here same things I try (a little mixed up), I can be more specific about what gives what error:</p> <pre><code>from hypothesis.strategies import data as hypothesis_data @composite def search_form_data(draw): house = draw(st.text(min_size=1, max_size=10)) room = draw(st.text(min_size=1, max_size=10)) name = draw(st.text(min_size=1, max_size=10)) bookshelf = Bookshelf.objects.create( house=house, room=room, name=name, compartments=2, shelves=5 ) # assume(not Bookshelf.objects.filter(house=house, room=room, name=name).exists()) # if not Bookshelf.objects.filter(house=house, room=room, name=name).exists(): return { 'title': draw(st.text(min_size=0, max_size=30)), 'author_name': draw(st.text(min_size=0, max_size=30)), 'bookshelf': bookshelf.id, } @given(data=search_form_data()) def test_search_form_accepts_random_valid_input(data): form = SearchForm(data) try: form.is_valid() except Exception as e: pytest.fail(f&quot;Unexpected exception: {e}&quot;) @given( unique_houses=st.sets(st.text(min_size=1, max_size=2), min_size=5, max_size=5), data=hypothesis_data() ) def test_search_form_accepts_random_valid_input(unique_houses, data): for house in unique_houses: form_data = data.draw(search_form_data(house), label=f&quot;form_data_for_{house}&quot;) form = SearchForm(form_data) try: form.is_valid() except Exception as e: pytest.fail(f&quot;Unexpected exception: {e}&quot;) @composite def unique_bookshelf_triplet(draw): triplet_strategy = st.tuples( st.text(min_size=1, max_size=3), st.text(min_size=1, max_size=10), st.text(min_size=1, max_size=10), ) seen = draw(st.shared(st.builds(set), key=&quot;triplets&quot;)) return draw( triplet_strategy .filter(lambda x: x not in seen) .map(lambda x: seen.add(x) or x) ) @given(data=hypothesis_data()) def test_create_unique_bookshelves(data): for _ in range(5): house, room, name = data.draw(unique_bookshelf_triplet()) bookshelf = Bookshelf.objects.create( house=house, room=room, name=name, compartments=data.draw(st.integers(min_value=1, 
max_value=10)), shelves=data.draw(st.integers(min_value=1, max_value=10)), ) assert bookshelf.pk is not None </code></pre> <p>So if I don't check, Hypothesis generates many duplicates (IntegrityError). If I check, the tests become flaky (too many examples excluded, or inconsistent results). I also tried to generate unique strategies, but the results are flaky. I'm probably doing something very wrong, because this is a common situation, so it should be easy.</p> <p>Thank you for your help</p>
<python><testing><pytest><python-hypothesis>
2025-06-21 10:44:02
1
1,385
fabio
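Within one example, uniqueness can be guaranteed by drawing `st.sets(st.tuples(...), min_size=n, max_size=n)` and unpacking it; flakiness *between* examples usually points at the database not being rolled back per `@given` example (function-scoped fixtures run once per test, not once per example). The per-run registry idea the question sketches can be isolated like this — plain `random` stands in for Hypothesis strategies so the sketch runs standalone, and the helper name is made up:

```python
import random

# Keep one `seen` set per test run and redraw whenever a candidate
# (house, room, name) triplet repeats, so a UniqueConstraint on those
# three fields is never violated by duplicate draws.
def draw_unique_triplet(rng, seen, alphabet="ab"):
    while True:
        candidate = tuple(
            "".join(rng.choices(alphabet, k=3)) for _ in range(3)
        )
        if candidate not in seen:
            seen.add(candidate)
            return candidate

rng = random.Random(0)
seen = set()
triplets = [draw_unique_triplet(rng, seen) for _ in range(5)]
print(len(set(triplets)))   # 5: no duplicates, even with a tiny alphabet
```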
79,674,000
16,563,251
Exception thrown in python magic method is lost on the way
<p>For a project, I want to prevent the use of certain methods in an overriding class. As this happens rather frequently, I am using a metaclass to block many methods at once (how and why is not within the scope of this question).</p> <p>This is done by replacing the methods I want to block by a data descriptor, which throws an exception. Using a data descriptor is helpful, as it comes first in the <a href="https://blog.peterlamut.com/2018/11/04/python-attribute-lookup-explained-in-detail/" rel="nofollow noreferrer">attribute lookup</a>, but is not of special relevance other than that.</p> <p>This approach is also used for <a href="https://docs.python.org/3/reference/datamodel.html#special-method-names" rel="nofollow noreferrer">magic methods</a> which override operators (e.g. the <code>==</code> operator). When calling the magic method <code>__eq__</code> directly, the exception is raised as expected. When indirectly calling it using <code>==</code>, it is also raised, but apparently caught somewhere on the way.</p> <ul> <li>Why is this the case? It seems there is a catch-all somewhere along the way (I tested different exception types), but I do not understand why this is beneficial for the language behaviour. Is there a reason?</li> <li>Is there a way to change my approach, such that both uses throw an exception?</li> </ul> <p>MWE:</p> <pre class="lang-py prettyprint-override"><code>from typing import override class ErrorDescriptor: def __get__(self, obj: object, objtype: object = None): print(&quot;Exception incoming&quot;) raise RuntimeError() def __set__(self, obj: object, objtype: object = None): ... 
class Meta(type): def __new__(cls, name: str, bases: tuple[type, ...], cls_dict: dict[str, object]): cls_dict[&quot;__eq__&quot;] = ErrorDescriptor() return super().__new__(cls, name, bases, cls_dict) class Foo: @override def __eq__(self, other: object): return True class Bar(Foo, metaclass=Meta): pass print(Foo() == 0) # True print(Bar() == 0) # Exception incoming\n False # ^^^^^ ?! print(Bar().__eq__(0)) # Exception incoming\n raises RuntimeError </code></pre>
<python><metaclass><python-descriptors>
2025-06-20 22:12:50
1
573
502E532E
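The MWE's output suggests the exception from `__get__` is raised during the *implicit lookup* that `==` performs, and the operator machinery appears to treat the failed lookup as "no `__eq__` available", falling back to identity comparison — hence the printed message followed by `False`. A workaround sketch that raises at call time instead of lookup time, so both spellings fail (names here are illustrative, not the original project's):

```python
# __get__ succeeds and returns a function that raises.  The == machinery
# looks __eq__ up on the type and then *calls* it, so the error propagates
# instead of being swallowed during the lookup.
class BlockedMethod:
    def __init__(self, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        def blocked(*args, **kwargs):
            raise RuntimeError(f"{self.name} is blocked on this class")
        return blocked

    def __set__(self, obj, value):
        raise RuntimeError(f"{self.name} may not be assigned")

class Bar:
    __eq__ = BlockedMethod("__eq__")

for attempt in (lambda: Bar() == 0, lambda: Bar().__eq__(0)):
    try:
        attempt()
        print("no exception")
    except RuntimeError as exc:
        print("raised:", exc)
```

In the metaclass, `cls_dict["__eq__"] = BlockedMethod("__eq__")` would slot in where the original `ErrorDescriptor` went.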
79,673,866
16,399,497
How to generate a tar.gz stream to be returned as a StreamingResponse in FastAPI/Starlette?
<p>I'm making an FastAPI/Starlette server which requests to another server (well, S3) large files. These chunks feed a <code>tarfile.TarFile</code> object to produce a <code>.tar.gz</code> stream. This stream should be sent on the fly to a <code>StreamingResponse</code>.</p> <pre><code>S3 server --files chunks--&gt; My server --tar.gz chunks--&gt; User </code></pre> <p>There is a big issue with the tarfile.TarFile implementation as:</p> <h4>it writes chunks in file-like objects (with a <code>write</code> method),</h4> <p>This is not easy to be interfaced with the <code>stralette.StreamingResponse</code> which expects a chunk generator.</p> <p>My idea was to rewrite the <code>StreamingResponse.stream_response</code> method (cf <a href="https://github.com/encode/starlette/blob/739ea4928b11d4b4cb2b366ccad11405ef3034c4/starlette/responses.py#L246" rel="nofollow noreferrer">here</a>). Something like this:</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;A script to test the tar.gz streaming.&quot;&quot;&quot; import os import tarfile from pathlib import Path from typing import Mapping from fastapi import FastAPI from fastapi.responses import StreamingResponse from fastapi.testclient import TestClient from starlette.background import BackgroundTask from starlette.types import Send TAR_FILE_PATH = Path(&quot;archive.tar.gz&quot;) CHUNK_SIZE = 1024 app = FastAPI() class FileStreamingResponse(StreamingResponse): def __init__( self, files_to_tar: list[Path], status_code: int = 200, headers: Mapping[str, str] | None = None, media_type: str | None = None, background: BackgroundTask | None = None, ) -&gt; None: self.files_to_tar = files_to_tar self.status_code = status_code self.media_type = self.media_type if media_type is None else media_type self.background = background self.init_headers(headers) async def stream_response(self, send: Send) -&gt; None: await send( { &quot;type&quot;: &quot;http.response.start&quot;, &quot;status&quot;: self.status_code, 
&quot;headers&quot;: self.raw_headers, } ) class DumpWriter: async def write(buffer): print(f&quot;Really sending {len(buffer)} bytes&quot;) await send( {&quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: buffer, &quot;more_body&quot;: True} ) async with tarfile.open( mode=&quot;w|gz&quot;, fileobj=DumpWriter(), bufsize=CHUNK_SIZE ) as file: for input_file in self.files_to_tar: await file.add(input_file.open(&quot;rb&quot;)) await send({&quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: b&quot;&quot;, &quot;more_body&quot;: False}) FILES_TO_TAR = Path(&quot;src&quot;).iterdir() @app.get(&quot;/&quot;) def send_tar() -&gt; StreamingResponse: &quot;&quot;&quot;Send a tar file.&quot;&quot;&quot; return FileStreamingResponse( files_to_tar=FILES_TO_TAR, media_type=&quot;application/tar+gzip&quot;, headers={&quot;Content-Disposition&quot;: f'attachment; filename=&quot;{TAR_FILE_PATH.name}&quot;'}, ) # # TESTS # TEST_TAR_FILE_PATH = Path(&quot;archive.tar.gz&quot;) client = TestClient(app) def test_main(): if TEST_TAR_FILE_PATH.exists(): os.remove(TEST_TAR_FILE_PATH) response = client.get(&quot;/&quot;) response.raise_for_status() with TEST_TAR_FILE_PATH.open(&quot;wb&quot;) as file: for chunk in response.iter_bytes(CHUNK_SIZE): file.write(chunk) with tarfile.open(TEST_TAR_FILE_PATH, &quot;r:gz&quot;) as tar_file: files = [tarinfo.name for tarinfo in tar_file.getmembers()] src_files = [file.name for file in FILES_TO_TAR] assert set(files) == set(src_files) </code></pre> <p>but the issue is that...</p> <h4>it writes them <em>synchronuosly</em></h4> <p>Ok, then to write it correctly, I need to run the <code>DumpWriter.write</code> method synchronously? 
I tried to do</p> <pre class="lang-py prettyprint-override"><code> class DumpWriter: def write(buffer): print(f&quot;Really sending {len(buffer)} bytes&quot;) asyncio.run_coroutine_threadsafe( send( { &quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: buffer, &quot;more_body&quot;: True, } ), asyncio.get_running_loop(), ) </code></pre> <p>but the data chunks were not transmitted.</p> <p>I also thought to rewrite the TarFile class to be async, but this is a nightmare!</p> <p>How can I achieve that?</p> <hr /> <p><strong>EDIT</strong></p> <p>Here is a MWE (run <code>pytest main.py</code>)</p> <pre class="lang-py prettyprint-override"><code>&quot;&quot;&quot;A script to test the tar.gz streaming.&quot;&quot;&quot; import asyncio import os import tarfile from pathlib import Path from typing import Mapping from fastapi import FastAPI from fastapi.responses import StreamingResponse from fastapi.testclient import TestClient from starlette.background import BackgroundTask from starlette.types import Send TAR_FILE_PATH = Path(&quot;archive.tar.gz&quot;) CHUNK_SIZE = 1024 app = FastAPI() FILES_TO_TAR = [(&quot;tarfile.py&quot;, Path(tarfile.__file__)), (&quot;os.py&quot;, Path(os.__file__))] class FileStreamingResponse(StreamingResponse): def __init__( self, files_to_tar: list[tuple[str, Path]], status_code: int = 200, headers: Mapping[str, str] | None = None, media_type: str | None = None, background: BackgroundTask | None = None, ) -&gt; None: self.files_to_tar = files_to_tar self.status_code = status_code self.media_type = self.media_type if media_type is None else media_type self.background = background self.init_headers(headers) async def stream_response(self, send: Send) -&gt; None: await send( { &quot;type&quot;: &quot;http.response.start&quot;, &quot;status&quot;: self.status_code, &quot;headers&quot;: self.raw_headers, } ) class DumpWriter: def write(self, buffer): print(f&quot;Really sending {len(buffer)} bytes&quot;) 
asyncio.run_coroutine_threadsafe( send( { &quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: buffer, &quot;more_body&quot;: True, } ), asyncio.get_running_loop(), ) async with tarfile.open( mode=&quot;w|gz&quot;, fileobj=DumpWriter(), bufsize=CHUNK_SIZE ) as file: for name, input_path in self.files_to_tar: await file.addfile(tarfile.TarInfo(name), input_path.open(&quot;rb&quot;)) await send({&quot;type&quot;: &quot;http.response.body&quot;, &quot;body&quot;: b&quot;&quot;, &quot;more_body&quot;: False}) @app.get(&quot;/&quot;) def send_tar() -&gt; StreamingResponse: &quot;&quot;&quot;Send a tar file.&quot;&quot;&quot; return FileStreamingResponse( files_to_tar=FILES_TO_TAR, media_type=&quot;application/tar+gzip&quot;, headers={&quot;Content-Disposition&quot;: f'attachment; filename=&quot;{TAR_FILE_PATH.name}&quot;'}, ) # # TESTS # TEST_TAR_FILE_PATH = Path(&quot;archive.tar.gz&quot;) client = TestClient(app) def test_main(): if TEST_TAR_FILE_PATH.exists(): os.remove(TEST_TAR_FILE_PATH) response = client.get(&quot;/&quot;) response.raise_for_status() with TEST_TAR_FILE_PATH.open(&quot;wb&quot;) as file: for chunk in response.iter_bytes(CHUNK_SIZE): file.write(chunk) with tarfile.open(TEST_TAR_FILE_PATH, &quot;r:gz&quot;) as tar_file: files = [tarinfo.name for tarinfo in tar_file.getmembers()] src_files_names = [file[0] for file in FILES_TO_TAR] assert set(files) == set(src_files_names) </code></pre> <p>As explained earlier, I get this kind of error:</p> <pre><code>&gt; async with tarfile.open( mode=&quot;w|gz&quot;, fileobj=DumpWriter(), bufsize=CHUNK_SIZE ) as file: E TypeError: 'TarFile' object does not support the asynchronous context manager protocol </code></pre> <p>My issue here is that I would need</p> <ul> <li>or to make tarfile write asynchronously in its fileobjet (but in stream mode, it writes via a sub <code>_Stream</code> object which is not asynchronous either),</li> <li>or to run the <code>DumpWriter.write</code> synchronously but 
running the inner <code>send</code> function asynchronously (my previous tests gave strange results, as if it was not transmitting data. Yet, I'm not really good with loops, threads etc)</li> <li>or to make the <code>tarfile.addfile</code> method yield written chunks as a generator to be provided to the original <code>StreamResponse</code> class.</li> </ul> <p>I can't figure out which solution to use and how to make that.</p> <p>Help!</p>
<python><python-3.x><python-asyncio><fastapi><starlette>
2025-06-20 19:13:19
1
723
emonier
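One stdlib-only way through the sync/async mismatch described above: keep `tarfile` fully synchronous in a worker thread and bridge to a chunk generator with a bounded queue. A sketch (class and function names are made up for illustration); in the FastAPI app the generator would simply be handed to the stock `StreamingResponse` instead of subclassing it:

```python
import io
import queue
import tarfile
import threading

# Bridge tarfile's blocking write() API to a chunk generator: the tar writer
# runs in a worker thread, its "file object" pushes each compressed chunk into
# a bounded queue, and the generator drains the queue until a sentinel arrives.
class QueueWriter:
    def __init__(self, q):
        self.q = q

    def write(self, chunk):
        self.q.put(bytes(chunk))
        return len(chunk)

def iter_tar_gz(named_payloads, chunk_size=1024):
    q = queue.Queue(maxsize=8)      # bounded: the tar thread blocks when ahead
    done = object()                 # sentinel marking end of stream

    def produce():
        try:
            with tarfile.open(mode="w|gz", fileobj=QueueWriter(q),
                              bufsize=chunk_size) as tar:
                for name, payload in named_payloads:
                    info = tarfile.TarInfo(name)
                    info.size = len(payload)
                    tar.addfile(info, io.BytesIO(payload))
        finally:
            q.put(done)

    threading.Thread(target=produce, daemon=True).start()
    while (chunk := q.get()) is not done:
        yield chunk

# Round trip: consume the stream, then read the reassembled archive back.
blob = b"".join(iter_tar_gz([("a.txt", b"hello"), ("b.txt", b"world")]))
with tarfile.open(fileobj=io.BytesIO(blob), mode="r:gz") as tar:
    print(sorted(m.name for m in tar.getmembers()))   # ['a.txt', 'b.txt']
```

In the endpoint this would look like `StreamingResponse(iter_tar_gz(files), media_type="application/gzip")`; fetching the payloads from S3 inside `produce` is an assumption, since that code is not shown in the question.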
79,673,773
27,596,369
How to use @wraps with decorator factories
<p>I have a decorator factory which just multiplies some given numbers.</p> <pre><code>from functools import wraps def multiply(numbers): def decorator(f): def wrapper(*args, **kwargs): num1 = numbers[0] for num2 in numbers[1:]: num1 = num1 * num2 print(num1) return f(*args) return wrapper return decorator @multiply(numbers=[1, 2, 3]) def input(**kwargs): print(&quot;That was the multiplied number.&quot;) input() </code></pre> <p>(I know this decorator is useless but this is just for learning)</p> <p>Now, if I wanted to use <code>@wraps</code>, where would I put it?</p>
<python><function><python-decorators>
2025-06-20 17:31:23
2
1,512
Aadvik
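A sketch of the answer: `@wraps(f)` sits directly above the innermost `wrapper`, inside `decorator`, because `wrapper` is the function object that replaces `f`. (The decorated function is renamed `report` here to avoid shadowing the built-in `input`.)

```python
from functools import wraps

def multiply(numbers):
    def decorator(f):
        @wraps(f)                      # <- this is the spot
        def wrapper(*args, **kwargs):
            product = 1
            for n in numbers:
                product *= n
            print(product)
            return f(*args, **kwargs)
        return wrapper
    return decorator

@multiply(numbers=[1, 2, 3])
def report(**kwargs):
    """Announce the multiplied number."""
    print("That was the multiplied number.")

report()                     # prints 6, then the message
print(report.__name__)       # "report" rather than "wrapper"
print(report.__doc__)        # docstring preserved
```

Without `@wraps`, `report.__name__` would be `"wrapper"` and the docstring would be lost, since the factory's outer layers never replace a function — only `wrapper` does.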
79,673,440
6,197,439
Redraw a QPushButton after .setDown is called?
<p>Consider the following example:</p> <pre class="lang-py prettyprint-override"><code>import sys import time from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QMessageBox, QVBoxLayout from PyQt5.QtCore import QTimer app = QApplication(sys.argv) window = QWidget() window.setWindowTitle(&quot;Minimal PyQt5 Example&quot;) button = QPushButton(&quot;Click Me&quot;) button.setCheckable(True) def unpress_button(): button.setDown(False) print(&quot;{}: unpress_button: {}&quot;.format(time.time(), button.isDown())) def show_message_box(): msg_box = QMessageBox() msg_box.setText(&quot;Button clicked&quot;) msg_box.setStandardButtons(QMessageBox.Ok) ok_button = msg_box.button(QMessageBox.Ok) ok_button.setText(&quot;OK&quot;) msg_box.exec_() unpress_button() QTimer.singleShot(1000, unpress_button) button.clicked.connect(show_message_box) layout = QVBoxLayout() layout.addWidget(button) window.setLayout(layout) window.show() sys.exit(app.exec_()) </code></pre> <p>At start of the application, we get the <code>button</code> in its unchecked/unpressed state:</p> <p><a href="https://i.sstatic.net/v8MhYYHo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8MhYYHo.png" alt="app start" /></a></p> <p>When the button is clicked, since it is <code>.setCheckable</code>, it is rendered in its checked/pressed state, as expected - and a &quot;Button clicked&quot; sub dialog appears:</p> <p><a href="https://i.sstatic.net/Wx68nYew.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wx68nYew.png" alt="app first press" /></a></p> <p>What I want now, is that after the OK button is pressed on the sub dialog and the sub dialog disappears, the original <code>button</code> should be automatically restored to its unpressed/unchecked state. 
I was hoping to do that via the <code>unpress_button</code> function, which runs <code>button.setDown(False)</code> - and in fact the printouts show that the &quot;down&quot; property is indeed set to False:</p> <pre class="lang-none prettyprint-override"><code>1750425124.4820263: unpress_button: False 1750425125.5181835: unpress_button: False </code></pre> <p>... however, after all this, the button remains drawn/rendered in its &quot;pressed&quot;/&quot;checked&quot; state:</p> <p><a href="https://i.sstatic.net/v8y8WBPo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8y8WBPo.png" alt="app after dialog click" /></a></p> <p>Note that I have even used <code>QTimer.singleShot</code> so I could escape the context of the <code>show_message_box</code> handler, but that does not help either.</p> <p>So, how can I force the <code>button</code> to be drawn in its &quot;unpressed&quot;/&quot;unchecked&quot; state, as it should be as per what its <code>.isDown()</code> method indicates?</p>
<python><pyqt5>
2025-06-20 13:16:56
0
5,938
sdbbs
79,673,223
21,294,350
clickable elements throw error "could not be scrolled into view" in selenium
<p>I'm trying to scrape <a href="https://www.anytimemailbox.com/s/new-york-42-broadway" rel="nofollow noreferrer">https://www.anytimemailbox.com/s/new-york-42-broadway</a>. I checked <a href="https://stackoverflow.com/a/61343018/21294350">https://stackoverflow.com/a/61343018/21294350</a> and used <code>driver.execute_script(&quot;window.scrollTo(0, document.body.scrollHeight);&quot;)</code> for my special case.</p> <p>minimal demo:</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC selector1_str_main='div[class=&quot;t-disc&quot;]' selector1_str=selector1_str_main+'&gt;a' selector1=(By.CSS_SELECTOR, selector1_str) driver=webdriver.Firefox() driver.get(&quot;https://www.anytimemailbox.com/s/new-york-42-broadway&quot;) driver.execute_script(&quot;window.scrollTo(0, document.body.scrollHeight);&quot;) # script1 waiter=WebDriverWait(driver, 20) # assert '&lt;a href=&quot;#&quot; onclick=&quot;thShowFullServicePlan11(4967);return false;&quot;&gt;' in driver.page_source # This doesn't throw error. With this addition, the delay will make the scroll work to make click available. waiter.until(EC.visibility_of_element_located((By.CSS_SELECTOR,&quot;.policy-wrapper&quot;))) waiter.until(EC.visibility_of_all_elements_located(selector1)) waiter.until(EC.element_to_be_clickable(selector1)).click() # throws &quot;ERROR:Message: Element &lt;a href=&quot;#&quot;&gt; could not be scrolled into view&quot; </code></pre> <p>This is probably because <code>script1</code> has not finished before the latter <code>click</code> runs. 
I have tried <a href="https://stackoverflow.com/a/65844911/21294350">https://stackoverflow.com/a/65844911/21294350</a> by <code>waiter.until(EC.visibility_of_element_located((By.CSS_SELECTOR,&quot;.policy-wrapper&quot;)))</code> but it doesn't work.</p> <p>Currently the workaround is to just use JS, which doesn't need the clickable item inside the view: <a href="https://stackoverflow.com/a/55431861/21294350">https://stackoverflow.com/a/55431861/21294350</a>.</p> <p>But I still wonder why the above <code>until</code>s seem to fail to work. What's the problem there?</p>
<python><selenium-webdriver><web-scraping>
2025-06-20 10:23:31
1
782
An5Drama
79,672,988
10,134,422
Why can't (langchain) AzureOpenAI find a model that AzureChatOpenAI can?
<p>Invoking a request using AzureChatOpenAI returns response as expected:</p> <pre><code>import os from dotenv import load_dotenv from langchain_openai import AzureChatOpenAI load_dotenv() llm = AzureChatOpenAI( azure_endpoint=os.getenv(&quot;AZURE_ENDPOINT&quot;), api_key=os.getenv(&quot;TOOL_KEY&quot;), api_version=os.getenv(&quot;API_VERSION&quot;), model=&quot;gpt-4o-2024-08-06&quot; ) response = llm.invoke(&quot;what is 2+3?&quot;) print(response.content) </code></pre> <p>Returns:</p> <pre><code>2 + 3 equals 5. </code></pre> <p>But using the same config doesn't work with AzureOpenAI:</p> <pre><code>import os from dotenv import load_dotenv from langchain_openai import AzureOpenAI load_dotenv() llm = AzureOpenAI( azure_endpoint=os.getenv(&quot;AZURE_ENDPOINT&quot;), api_key=os.getenv(&quot;TOOL_KEY&quot;), api_version=os.getenv(&quot;API_VERSION&quot;), model=&quot;gpt-4o-2024-08-06&quot; ) response = llm.invoke(&quot;what is 2+3?&quot;) print(response.content) </code></pre> <p>Returns error:</p> <pre><code>openai.NotFoundError: Error code: 404 - {'detail': 'Not Found'} </code></pre> <p>What am I missing here?</p>
<python><langchain><large-language-model><azure-openai>
2025-06-20 07:17:40
1
460
Sanchez333
79,672,973
8,913,983
Pandas str in col failing to find value
<p>I have a <code>df</code> like so:</p> <pre><code>RangeIndex: 115506 entries, 0 to 115505 Series name: Email Non-Null Count Dtype -------------- ----- 115506 non-null object dtypes: object(1) memory usage: 902.5+ KB </code></pre> <p>I am searching some string values in the df</p> <pre><code>'abc' in df.Email returns False </code></pre> <p>whereas</p> <pre><code>df.Email.str.contains('abc') returns True </code></pre> <p>I checked with LLMs and they return that:</p> <pre><code>1. 'abc' in df.Email This checks whether the exact string 'abc' is a label in the index of the Series df.Email. It does not check the contents of the Series. That's why it returns False, even if 'abc' appears somewhere in the data. 2. df.Email.str.contains('abc') This checks each element of the Series to see if 'abc' is a substring of that element. It returns a Series of booleans, and if 'abc' exists in at least one of the values, some elements will be True. </code></pre> <p>Even in simple <code>df</code></p> <pre><code>import pandas as pd df = pd.DataFrame({'Email': ['abc', 'def']}) 'abc' in df.Email # False </code></pre> <p>It's been a while since I used pandas, but I could swear thats how I used to check for values in columns</p>
<python><pandas>
2025-06-20 07:03:47
0
4,870
Jonas Palačionis
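A short demonstration of the dict-like semantics described above — `in` on a Series consults the index (like `in` on a dict tests its keys), so value checks have to be explicit:

```python
import pandas as pd

df = pd.DataFrame({"Email": ["abc", "def"]})

# Membership is tested against the index, never the data:
print("abc" in df.Email)       # False: the index holds 0 and 1
print(0 in df.Email)           # True: 0 is an index label

# Explicit value checks:
print("abc" in df.Email.values)             # exact match against the data
print(df.Email.eq("abc").any())             # same, staying in pandas
print(df.Email.str.contains("abc").any())   # substring match, as one bool
```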
79,672,719
19,459,262
How do I set custom axis titles for a go.Figure()?
<p>I have a graph objects Figure and I want to set custom axis titles. I've found how to do it with <code>update_layout()</code> for plotly express figures, but I haven't found a way to do it with a <code>go.Figure()</code>. (Using that instead of a plotly express graph because I'm not sure how to graph an exponential function using plotly express.)</p> <p>Code:</p> <pre><code>import numpy as np import plotly.graph_objects as go import plotly.io as pio pio.templates.default = &quot;none&quot; x = np.linspace(0, 10,100) graph = go.Scatter(x=x, y=100*pow(1/2, x), mode='lines', line_width=2, line_color='red',) layout = go.Layout(title='My Graph') fig = go.Figure(data=[graph], layout=layout) # want to set axis titles! fig.show() </code></pre> <p>How can I set custom axis titles?</p>
<python><plotly><plotly.graph-objects>
2025-06-19 23:00:51
2
784
Redz
79,672,674
10,083,382
Log Metrics in Azure using MLflow
<p>I have a component registered within MLStudio that contains code to run a Promptflow pipeline. I am executing the flow using an AzureML pipeline, following the <a href="https://microsoft.github.io/promptflow/cloud/azureai/use-flow-in-azure-ml-pipeline.html" rel="nofollow noreferrer">documentation</a> and <a href="https://github.com/microsoft/promptflow/blob/main/examples/tutorials/run-flow-with-pipeline/pipeline.ipynb" rel="nofollow noreferrer">example notebook</a>. The flow seems to execute without any errors.</p> <p>However, I want to run an aggregation function on the consolidated results after compiling the results from all the child runs, and then create a metric. I want to log that metric so that it is recorded for each experiment run in Azure MLStudio.</p> <p>Below is the code I’ve used, but it creates a nested experiment (an experiment within an experiment), and I am unable to log the metric in the associated experiment run. For now, I have just placed a placeholder to log the metric without compiling results, for simplicity.</p> <p>How can I fix the code below to log the metric using MLflow or any other approach, so that the metric is logged within the associated run?</p> <p><strong>pipeline.py</strong></p> <pre><code>import os import uuid import mlflow import azureml.mlflow # Azure ML ↔ MLflow integration from azure.identity import DefaultAzureCredential from azure.ai.ml import MLClient, load_component from azure.ai.ml.constants import AssetTypes from azure.ai.ml.dsl import pipeline from azure.ai.ml import MLClient, load_component, dsl, Input, Output from promptflow.connections import ConnectionProvider # ------------------------------------------------------------------------------ # 1. 
Workspace &amp; MLflow Tracking Configuration # ------------------------------------------------------------------------------ subscription_id = &quot;foo&quot; resource_group = &quot;bar&quot; workspace_name = &quot;baz&quot; os.environ['subscription_id'] = subscription_id os.environ['resource_group'] = resource_group os.environ['workspace_name'] = workspace_name cred = DefaultAzureCredential() ml_client = MLClient( credential = cred, subscription_id = subscription_id, resource_group_name = resource_group, workspace_name = workspace_name, ) # get the MLflow tracking URI from your workspace experiment_name = &quot;test_custom_connection_promptflow_pipeline&quot; tracking_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri mlflow.set_tracking_uri(tracking_uri) mlflow.set_experiment(experiment_name) parent_run = mlflow.start_run() cluster_name = &quot;LLM-Prompt-Flow&quot; print(ml_client.compute.get(cluster_name)) # ------------------------------------------------------------------------------ # 2. Turn your Prompt Flow into a reusable Component # ------------------------------------------------------------------------------ flow_component = load_component( source=&quot;flow.dag.yaml&quot; ) # List all versions of the component existing_versions = ml_client.components.list(name=&quot;test_custom_connection&quot;) existing_versions = [int(c.version) for c in existing_versions if c.version.isdigit()] # Determine next version next_version = str(max(existing_versions) + 1) if existing_versions else &quot;1&quot; ml_client.components.create_or_update(flow_component, version=str(next_version)) # ------------------------------------------------------------------------------ # 3. 
Build the DSL Pipeline that invokes your flow component # ------------------------------------------------------------------------------ local_csv_path = &quot;sample.csv&quot; # Path to your CSV eval_data = Input( type=AssetTypes.URI_FILE, path=local_csv_path, mode=&quot;ro_mount&quot;, # or &quot;download&quot; if you prefer ) @pipeline() def eval_pipeline( ): # Declare pipeline step 'flow_node' by using flow component flow_node = flow_component( data=eval_data, topic=&quot;${data.topic}&quot;, ) # Provide run settings of your flow component # Only 'compute' is required and other setting will keep default value if not provided. flow_node.environment_variables = { } flow_node.compute = cluster_name flow_node.resources = {&quot;instance_count&quot;: 1} flow_node.mini_batch_size = 5 flow_node.max_concurrency_per_instance = 1 # flow_node.retry_settings = { # &quot;max_retries&quot;: 1, # &quot;timeout&quot;: 1200, # } flow_node.error_threshold = -1 flow_node.mini_batch_error_threshold = -1 flow_node.logging_level = &quot;DEBUG&quot; # create pipeline instance pipeline_job = eval_pipeline() pipeline_job.settings.default_compute = cluster_name pipeline_job.name = f&quot;eval-{uuid.uuid4().hex[:8]}&quot; submitted = ml_client.jobs.create_or_update( pipeline_job, experiment_name=experiment_name, tags={&quot;mlflow.parentRunId&quot;: parent_run.info.run_id}, ) print(f&quot;▶️ Submitted pipeline job: {submitted.name}&quot;) ml_client.jobs.stream(submitted.name) mlflow.log_metric(&quot;Avg&quot;, 2) mlflow.end_run() </code></pre>
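A sketch of how the aggregation and logging could be separated. Hedged: `aggregate_scores` and the `"score"` field are hypothetical stand-ins for the compiled child-run results, and resuming the run via the job name relies on Azure ML jobs using the job name as their MLflow run id — verify that against your workspace before relying on it:

```python
def aggregate_scores(child_results):
    """Average a hypothetical 'score' field over all child-run result rows."""
    scores = [row["score"] for row in child_results]
    return sum(scores) / len(scores)


def log_to_job_run(job_name, metric_name, value):
    """Resume the submitted pipeline job's own MLflow run and log into it.

    Assumption: in Azure ML the job name doubles as the MLflow run id, so
    resuming it avoids opening a second (nested) run via mlflow.start_run().
    """
    import mlflow  # imported lazily so the aggregation above stays standalone

    with mlflow.start_run(run_id=job_name):
        mlflow.log_metric(metric_name, value)


# after ml_client.jobs.stream(submitted.name), something like:
#   log_to_job_run(submitted.name, "Avg", aggregate_scores(compiled_results))
avg = aggregate_scores([{"score": 1.0}, {"score": 3.0}])
```

With this shape, the initial `mlflow.start_run()` and the manual parent-run tagging at the top of pipeline.py would no longer be needed, which is what appears to create the experiment-within-an-experiment.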
<python><azure><azure-machine-learning-service><mlflow><azure-ml-pipelines>
2025-06-19 21:48:24
0
394
Lopez
79,672,349
15,842
Handle shadowed attribute in multiple inheritance
<p>What is the best way to handle attribute shadowing from multiple inheritance?</p> <p>Intended behavior is to end up with a <code>Field</code>.</p> <pre><code>from pydantic import BaseModel


class Parent(object):
    value: int = 1


class Shadow(BaseModel, Parent):
    value: int = 2


Shadow()
</code></pre> <p>Options I have considered:</p> <ol> <li>Should I silence the warning? If so, how?</li> <li>Is there a model or field configuration option?</li> <li>Don't do multiple inheritance that shadows.</li> </ol>
<python><pydantic>
2025-06-19 16:08:07
0
21,402
Gregg Lind
79,672,338
1,422,096
Matplotlib button "Save data as CSV" in the toolbar next to the "Save the figure as PNG" button
<p>This works to add a &quot;Save as CSV&quot; button in a Matplotlib plot:</p> <pre><code>import matplotlib.pyplot as plt
from matplotlib.widgets import Button
import csv

X = [1, 2, 3, 4, 5]
Y = [2, 3, 5, 7, 11]

def save_as_csv(event):
    file_path = 'data.csv'
    with open(file_path, 'w', newline='') as file:
        writer = csv.writer(file)
        writer.writerow(['X', 'Y'])
        writer.writerows(zip(X, Y))
    print(f'Data saved to {file_path}')

fig, ax = plt.subplots()
plt.plot(X, Y, linestyle='None', marker='.', markersize=5)
plt.subplots_adjust(bottom=0.2)
ax_button = plt.axes([0.81, 0.05, 0.1, 0.075])
button = Button(ax_button, 'Save CSV')
button.on_clicked(save_as_csv)
plt.show()
</code></pre> <p><strong>How to have the button in the toolbar, next to the &quot;Save the figure&quot; (diskette icon), with <code>TkAgg</code> backend?</strong></p>
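One candidate approach (a sketch, not verified against every Matplotlib version): switch `rcParams['toolbar']` to the experimental `'toolmanager'` and register a `ToolBase` subclass in the `'io'` group, which is the group the save-figure button belongs to. The GUI wiring is kept inside `main()` so the CSV writer can be exercised headlessly:

```python
import csv
import os
import tempfile

import matplotlib
matplotlib.rcParams["toolbar"] = "toolmanager"  # required for custom tools
import matplotlib.pyplot as plt
from matplotlib.backend_tools import ToolBase

X = [1, 2, 3, 4, 5]
Y = [2, 3, 5, 7, 11]


def save_as_csv(path, xs, ys):
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["X", "Y"])
        writer.writerows(zip(xs, ys))


class SaveCsvTool(ToolBase):
    """Toolbar button that dumps the plotted data to a CSV file."""
    description = "Save data as CSV"

    def trigger(self, sender, event, data=None):
        save_as_csv("data.csv", X, Y)


def main():  # run this under the TkAgg backend
    fig, ax = plt.subplots()
    ax.plot(X, Y, linestyle="None", marker=".", markersize=5)
    fig.canvas.manager.toolmanager.add_tool("SaveCSV", SaveCsvTool)
    # place the new button in the 'io' group, next to the diskette icon
    fig.canvas.manager.toolbar.add_tool("SaveCSV", "io")
    plt.show()


# headless demo of the CSV part only
csv_path = os.path.join(tempfile.gettempdir(), "data.csv")
save_as_csv(csv_path, X, Y)
```

The `rcParams` line must run before the figure is created; with the default toolbar setting, `toolmanager` is `None` and `add_tool` is unavailable.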
<python><matplotlib>
2025-06-19 16:00:46
1
47,388
Basj
79,672,310
21,294,350
Is splash:select different from document.querySelector?
<p>Example link: <a href="https://www.anytimemailbox.com/s/new-york-42-broadway" rel="nofollow noreferrer">https://www.anytimemailbox.com/s/new-york-42-broadway</a></p> <pre class="lang-lua prettyprint-override"><code>function main(splash)
    local button = splash:select('div[class=&quot;t-disc&quot;]&gt;a')
    assert(button) -- fail
end
</code></pre> <p>But in the F12 developer tools console:</p> <pre class="lang-js prettyprint-override"><code>&gt;&gt; document.querySelector('div[class=&quot;t-disc&quot;]&gt;a')
&lt;a href=&quot;#&quot; onclick=&quot;thShowFullServicePlan11(4967);return false;&quot;&gt;Full Details&lt;/a&gt;
</code></pre> <p>The doc <a href="https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-select" rel="nofollow noreferrer">https://splash.readthedocs.io/en/stable/scripting-ref.html#splash-select</a> says</p> <blockquote> <p>Using splash:select you can get the element that matches your specified CSS selector <em>like</em> using document.querySelector in the browser.</p> </blockquote> <p>Is there some corner case where <code>splash:select</code> is different from <code>document.querySelector</code>? If so, is there a workaround?</p>
<javascript><python><lua><scrapy-splash><splash-js-render>
2025-06-19 15:41:58
1
782
An5Drama
79,672,309
27,596,369
Flask-login is making Flask return 404 error
<p>I have a script which is basically a blog post website, it has multiple routes and uses many libraries. Here is a MRE of my code.</p> <pre><code>from flask import Flask, render_template, request from flask_login import UserMixin, LoginManager, current_user from flask_sqlalchemy import SQLAlchemy from sqlalchemy.orm import relationship, DeclarativeBase, Mapped, mapped_column from sqlalchemy import Integer, String, Text, ForeignKey from typing import List app = Flask(__name__) app.config['SECRET_KEY'] = '8BYkEfBA6O6donzWlSihBXox7C0sKR6b' login_manager = LoginManager() login_manager.init_app(app) @login_manager.user_loader def load_user(user_id): return db.get_or_404(User, user_id) class Base(DeclarativeBase): pass app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///posts.db' db = SQLAlchemy(model_class=Base) db.init_app(app) class BlogPost(db.Model): __tablename__ = &quot;blog_posts&quot; id: Mapped[int] = mapped_column(Integer, primary_key=True) title: Mapped[str] = mapped_column(String(250), unique=True, nullable=False) subtitle: Mapped[str] = mapped_column(String(250), nullable=False) date: Mapped[str] = mapped_column(String(250), nullable=False) body: Mapped[str] = mapped_column(Text, nullable=False) author = relationship(&quot;User&quot;, back_populates='posts') img_url: Mapped[str] = mapped_column(String(250), nullable=False) author_id = db.Column(Integer, ForeignKey('users.id')) class User(UserMixin, db.Model): __tablename__ = &quot;users&quot; id: Mapped[int] = mapped_column(Integer, primary_key=True) email: Mapped[str] = mapped_column(String, unique=True, nullable=False) password: Mapped[str] = mapped_column(String, nullable=False) name: Mapped[str] = mapped_column(String, nullable=False) posts: Mapped[List[&quot;BlogPost&quot;]] = relationship(back_populates=&quot;author&quot;) with app.app_context(): db.create_all() @app.route(&quot;/&quot;) def home(): return render_template('test.html') </code></pre> <p>While I was playing around to see what was 
causing the problem, I realized that as soon as I added flask_login (which requires SQLAlchemy), Flask returned a 404 Not Found error. In my real code, I use flask_login very often to control many things. This has bamboozled me, since it stopped working all of a sudden.</p>
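One thing worth ruling out (an assumption — the failing request isn't shown): `load_user` uses `db.get_or_404`, which aborts the *whole request* with 404 whenever the session cookie carries a user id that no longer exists in the database, and Flask-Login invokes the loader on every request once someone has logged in. A loader is expected to return `None` for unknown ids instead. A stdlib stand-in contrasting the two behaviours:

```python
USERS = {"1": "Aadvik"}  # stand-in for the users table


class NotFound(Exception):
    """Plays the role of the 404 that Flask's abort() raises."""


def load_user_get_or_404(user_id):
    # mirrors db.get_or_404(User, user_id): a stale session id
    # turns every request into a 404 before the view even runs
    if user_id not in USERS:
        raise NotFound(user_id)
    return USERS[user_id]


def load_user_safe(user_id):
    # mirrors db.session.get(User, user_id): unknown ids become None,
    # which Flask-Login treats as "not logged in" rather than an error
    return USERS.get(user_id)


assert load_user_safe("99") is None  # no exception, request proceeds
```

If this is the cause, clearing the browser's session cookie (or switching the loader to `db.session.get`) should make `/` respond again.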
<python><sqlite><flask><sqlalchemy><flask-login>
2025-06-19 15:40:38
1
1,512
Aadvik
79,672,205
5,452,365
python-gql async execute_batch error with httpx transport
<p>I am following this example to implement a client with the httpx transport. However, I'm getting this error: <code>AttributeError: 'AsyncClientSession' object has no attribute 'execute_batch'</code></p> <p>This is the code that I'm trying to execute:</p> <pre><code>from gql import Client, GraphQLRequest
from gql.transport.httpx import HTTPXAsyncTransport
from itertools import batched
import asyncio


async def main():
    async with Client(
        transport=HTTPXAsyncTransport(url=GQL_URL, headers=GQL_HEADERS, timeout=3000),
        batch_interval=5
    ) as session:
        res = await session.execute_batch([
            GraphQLRequest(document=QUERY, variable_values=VARIABLES)
            for batch in batched(DATA, 1000)
        ])
        print(res)

asyncio.run(main())
</code></pre>
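The traceback suggests that, in the installed gql version, `execute_batch` exists only on the synchronous session. Until the async session offers an equivalent, one workaround is to fan each chunk out with `asyncio.gather` over plain `session.execute` calls. A stdlib-only sketch of that pattern — `fake_execute` stands in for the real `session.execute`, so no gql objects are needed to see the shape:

```python
import asyncio


async def fake_execute(request):
    """Stand-in for `await session.execute(...)` on the async session."""
    await asyncio.sleep(0)
    return {"echo": request}


async def run_in_batches(requests, batch_size):
    """Send `requests` in chunks, each chunk executed concurrently."""
    results = []
    for start in range(0, len(requests), batch_size):
        chunk = requests[start:start + batch_size]
        # fire the whole chunk at once instead of execute_batch
        results.extend(await asyncio.gather(*(fake_execute(r) for r in chunk)))
    return results


out = asyncio.run(run_in_batches(list(range(7)), batch_size=3))
```

In the real client, `fake_execute(r)` would become an `await session.execute(...)` call with that request's document and variables, issued inside the `async with Client(...)` block.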
<python><https><gql>
2025-06-19 14:16:29
1
11,652
Rahul
79,672,095
9,877,065
Detecting inflection points/local minima in openCv Python contours objects to differentiate shapes
<p>I am trying to answer <a href="https://stackoverflow.com/questions/79640660/detecting-circular-elliptical-shape-with-python-opencv-or-any-other-lib/79653025#79653025">Detecting circular/elliptical shape with Python openCv (or any other lib)</a>. In trying to get the best results I would like to find a way to differentiate between the following two kinds of shapes (red contours):</p> <p><a href="https://i.sstatic.net/4hOWtoQL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hOWtoQL.png" alt="enter image description here" /></a>.</p> <blockquote> <p>My first attempt uses as input:</p> </blockquote> <p><code>bud_1.jpg</code> :</p> <p><a href="https://i.sstatic.net/Jpj62C9G.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jpj62C9G.jpg" alt="enter image description here" /></a></p> <p>and this code :</p> <pre><code>import cv2 import numpy as np def getCurvature(vec_contour_points, step): vec_curvature = [0.0] * len(vec_contour_points) if len(vec_contour_points) &lt; step: return vec_curvature front_to_back = vec_contour_points[0] - vec_contour_points[-1] # print(front_to_back) # Assuming CONTENT_OF is a print statement is_closed = max(abs(front_to_back[0]), abs(front_to_back[1])) &lt;= 1 for i in range(len(vec_contour_points)): pos = vec_contour_points[i] max_step = step if not is_closed: max_step = min(min(step, i), len(vec_contour_points) - 1 - i) if max_step == 0: vec_curvature[i] = float('inf') continue iminus = i - max_step iplus = i + max_step pminus = vec_contour_points[iminus if iminus &gt;= 0 else iminus + len(vec_contour_points)] pplus = vec_contour_points[iplus if iplus &lt; len(vec_contour_points) else iplus - len(vec_contour_points)] f1st_derivative = [(pplus[0] - pminus[0]) / (iplus - iminus), (pplus[1] - pminus[1]) / (iplus - iminus)] f2nd_derivative = [ (pplus[0] - 2 * pos[0] + pminus[0]) / ((iplus - iminus) / 2 * (iplus - iminus) / 2), (pplus[1] - 2 * pos[1] + pminus[1]) / ((iplus - iminus) / 2 * (iplus - iminus) / 2) ] 
divisor = f1st_derivative[0] ** 2 + f1st_derivative[1] ** 2 if abs(divisor) &gt; 1e-8: curvature_2d = abs(f2nd_derivative[1] * f1st_derivative[0] - f2nd_derivative[0] * f1st_derivative[1]) / (divisor ** (3.0 / 2.0)) else: curvature_2d = float('inf') vec_curvature[i] = curvature_2d return vec_curvature # Load image and find contours img = cv2.imread('bud_1.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY) cv2.imshow(&quot;Image threshold&quot;, gray) cv2.waitKey(0) black_img = np.zeros((img.shape[0], img.shape[1])) print('img size : ', img.shape, img.shape[1]) print('black_img size : ', black_img.shape, img.shape[1]) contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE) # Select the largest contour if contours: print('contours lenght : ', len(contours)) # largest_contour = max(contours, key=cv2.contourArea) cnt = sorted(contours, key=cv2.contourArea, reverse=True) print('cnt lenght : ', len(cnt)) largest_contour = cnt[1] print('largest_contour area : ', cv2.contourArea(largest_contour)) cv2.drawContours(img, [largest_contour] , 0, (0,255,0) , 1) cv2.imshow(&quot;Image contour&quot;, img) cv2.imwrite('contour.jpg' , img) cv2.waitKey(0) filled_contours = cv2.drawContours(black_img, [largest_contour], 0 , 1 , cv2.FILLED) cv2.imshow(&quot;Black Image&quot;, black_img) cv2.waitKey(0) cv2.imwrite('black.tiff', black_img) thinned_img = np.zeros((img.shape[0], img.shape[1])) thinned_contours = cv2.drawContours(thinned_img, [largest_contour], 0 , 1 , 1) cv2.imshow(&quot;Thinned Image&quot;, thinned_img) cv2.waitKey(0) cv2.imwrite('thinned_contours.tiff', thinned_img) # Extract contour points # contour_points = np.squeeze(largest_contour) # this is tocheck that contours is bad as it is even if looks like a nice one pixel line contour_points = np.where(thinned_contours == 1) contour_points = np.squeeze(largest_contour) # for i in contour_points : # print(i) print('largest_contour lenght : ', 
len(largest_contour)) print('contour points lenght : ', len(contour_points)) # # Calculate slopes and curvatures (using finite differences) ###### Using SO 32629806 answer for curvatures # slopes = np.zeros(len(contour_points) - 1) # curvatures = np.zeros(len(contour_points) - 2) q_value = np.zeros(len(contour_points) - 1) k = 2 ###### Using SO 32629806 answer for curvatures # for i in range(len(contour_points) -k -1): # # slopes[i] = (contour_points[i+k][1] - contour_points[i][1]) / (contour_points[i+k][0] - contour_points[i][0]) # # slopes[i] = (contour_points[i][1]) - contour_points[i+k][1] / (contour_points[i+k][0] - contour_points[i][0]) # if (contour_points[i][0] - contour_points[i+k][0]) != 0: # slopes[i] = (contour_points[i][1] - contour_points[i+k][1]) / (contour_points[i][0] - contour_points[i+k][0]) # else : # slopes[i] = 0 ###### Using SO 32629806 answer for curvatures # for i in range(len(contour_points) -k -1): # if (contour_points[i][0] - contour_points[i+k][0]) != 0: # curvatures[i] = -(slopes[i] - slopes[i+k]) / (contour_points[i][0] - contour_points[i+k][0]) # else : # curvatures[i] = 0 curvatures = getCurvature(contour_points , 1) # print('curvatures : ', curvatures) print('len(contour_points) :' ,len(contour_points)) for i in range(len(contour_points)-k-1): q_value[i] = contour_points[i][1] - curvatures[i]*(contour_points[i][0]) # Identify inflection points (curvature change sign) inflection_points = [] for i in range(len(curvatures) - 1): if np.sign(curvatures[i]) != np.sign(curvatures[i + 1 ]) : mid_point = (contour_points[i]) if True: img_copy = img.copy() cv2.circle(img_copy, (mid_point[0], mid_point[1]), 5, (0, 0, 255), -1) if curvatures[i] != float('inf'): # print('------------', np.isnan(curvatures[i]) , curvatures[i]) if np.isnan(curvatures[i]) : print(np.isnan(curvatures[i])) cv2.line(img_copy , ( 0 , int(curvatures[i]*0 + q_value[i])) , (img.shape[1]-1 , int(curvatures[i]*img.shape[1]-1 + q_value[i])), (0, 255, 0), 1) 
cv2.imshow(&quot;Image Copy&quot;, img_copy) # cv2.imwrite('finale.jpg__'+str(i), img_copy) cv2.waitKey(0) inflection_points.append([mid_point]) # # print('\ninflection points[0] : ' , inflection_points) # # Draw the inflection points on the image for point in inflection_points: cv2.circle(img, (point[0][0], point[0][1]), 5, (0, 0, 255), -1) cv2.imshow(&quot;Image with Inflection Points&quot;, img) cv2.imwrite('finale.jpg', img) cv2.waitKey(0) cv2.destroyAllWindows() </code></pre> <p>What I am trying to achieve is to get all the tangents to the contour shape and figure out a way to detect the two local minima in the shape:</p> <p><a href="https://i.sstatic.net/bZBJtc3U.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZBJtc3U.jpg" alt="enter image description here" /></a></p> <p>Apart from having difficulty retrieving only real tangents to the curve (I guess it has to do with the finite and bumpy contour profile that is not a continuous numerical function) I need to devise a strategy to isolate these kinds of tangents:</p> <p><a href="https://i.sstatic.net/eAHQyqzv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAHQyqzv.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/WiAfVonw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiAfVonw.png" alt="enter image description here" /></a></p> <p>to have a way to assess that the found contour is a bilobated shape versus a circular or elliptical one.</p> <p>My strategy would be to check if the <strong>contour</strong> area is similar to the calculated <code>cv2.minEnclosingCircle(contour)</code> or <code>cv2.fitEllipse(contour)</code> and for the shape showing too big difference evaluate the presence of at least 2 local minimal (I guess they could be more because of the results of my algorithm that retrieve wrong tangents i.e. 
:</p> <p><a href="https://i.sstatic.net/xFk7v6Wi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFk7v6Wi.png" alt="enter image description here" /></a></p> <p>My idea would be to check if any of the points on the tangent to the right of the contour contact point (inflection point) belong to the blue shape, and test for the same condition on the left. Any idea about how to achieve this? Any idea on how to get only proper, real tangents to the contour?</p>
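Before fighting tangents on a noisy pixel contour, it can help to verify the discriminating signal on clean synthetic curves: a convex blob has signed curvature of one sign everywhere, while a bilobed outline goes concave at its two pinches, so simply counting curvature sign changes already separates the two shapes. A numpy-only sketch — the parametric curves stand in for extracted contours; real contour points would need smoothing first:

```python
import numpy as np


def signed_curvature(x, y):
    """Signed curvature of a sampled planar curve via finite differences."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5


def curvature_sign_changes(k):
    s = np.sign(k)
    return int(np.sum(s[:-1] * s[1:] < 0))


t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = (np.cos(t), np.sin(t))             # convex: curvature never flips sign
r = 1 + 0.45 * np.cos(2 * t)                # radius pinched twice -> "peanut"
peanut = (r * np.cos(t), r * np.sin(t))

n_circle = curvature_sign_changes(signed_curvature(*circle))
n_peanut = curvature_sign_changes(signed_curvature(*peanut))
```

On real data, the same count (after smoothing the contour, e.g. with a moving average) could back up the area-vs-`minEnclosingCircle`/`fitEllipse` comparison proposed above.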
<python><opencv><image-processing><shapes>
2025-06-19 13:05:12
1
3,346
pippo1980
79,672,094
12,468,438
Tensorflow speed of tf.nn.conv2D used instead of opencv GaussianBlur
<p>I'm trying to move some computer vision tasks to tensorflow. The most intensive ops are convolutions, like GaussianBlur. The timings I get using timeit suggest that the GPU equivalent is &gt;10 x slower.</p> <ul> <li>The stdout reports &quot;WARNING:tensorflow:AutoGraph could not transform ... and will run it as-is...&quot; for both functions. This means that no graph is created, and so performance is less good?</li> <li>Can I use timeit to check performance of a tf script?</li> <li>I assumed that by creating the tf variables outside the function, that the function starts from variables already on the GPU, is this true?</li> <li>Is it possible to do the 2D convolution layer by layer without splitting the image in layers like I do in tf_gauss_blur_stacked()?</li> </ul> <p>The code below is my current version of Gaussian blur in opencv and tensorflow.</p> <pre><code>import tensorflow as tf # v2.2.0, requires tf 2 for example import numpy as np # v1.21.6 import cv2 # v4.6.0 from timeit import timeit def opencv_gauss_blur(im): return cv2.GaussianBlur(im.copy(), (5, 5), 1.1) @tf.function def tf_gauss_blur_stacked(im, im_out, kernel2D): for ii in range(im.shape[-1]): im_slice = im[0, :, :, ii][tf.newaxis, :, :, tf.newaxis] im_out[0, :, :, ii].assign(tf.nn.conv2d(im_slice, kernel2D, strides=[1, 1, 1, 1], padding=&quot;SAME&quot;)[0,:,:,0]) return im_out @tf.function def tf_gauss_blur(im, kernel3D): return tf.nn.conv2d(im, kernel3D, strides=[1, 1, 1, 1], padding=&quot;SAME&quot;) # input image A with dimensions (x, y, channel) A = np.random.randint(0, 4095, (40, 50, 3)).astype(dtype=np.float32) B = opencv_gauss_blur(A) blur_kernel = cv2.getGaussianKernel(5, 1.1) * cv2.getGaussianKernel(5, 1.1).T kernel2D = tf.constant(blur_kernel, dtype=tf.float32)[:, :, tf.newaxis, tf.newaxis] # shape dims: X, Y, num_input_channels, num_output_channels kernel3D = np.zeros(shape=kernel2D.shape[:2] + (A.shape[-1], A.shape[-1]), dtype=np.float32) for ii in range(A.shape[-1]): kernel3D[:, 
:, ii, ii] = kernel2D[:, :, 0, 0] kernel3D = tf.constant(kernel3D, dtype=tf.float32) tfA = tf.constant(A, dtype=tf.float32)[tf.newaxis, ] # shape dims batch, X, Y, cha im_out = tf.Variable(tfA) B_tf_stack = tf_gauss_blur_stacked(tfA, im_out, kernel2D) B_tf = tf_gauss_blur(tfA, kernel3D) print(np.abs((B_tf[0, 2:-2, 2:-2, ] - B[2:-2, 2:-2,])).max()) print(np.abs((B_tf_stack[0, 2:-2, 2:-2, ] - B[2:-2, 2:-2,])).max()) </code></pre> <p>The max difference between openCV and tensorflow is &lt; 0.001 (on scale of 0-4095), which is sufficient agreement.</p> <p>Timeit from the console:</p> <pre><code>%timeit B = opencv_gauss_blur(A) %timeit B_tf = tf_gauss_blur_stacked(tfA, im_out, kernel2D) %timeit B_tf = tf_gauss_blur(tfA, kernel3D) </code></pre> <p>Gives 11, 386 and 257 us per loop (mean ± std. dev. of 7 runs, 1000 loops each). OpenCV convolutes a 1D gaussian in X and then a 1D gaussian in Y direction, which should yield the same as the 2D/3D tensorflow functions, but has 2.5x/7.5x fewer computations.</p>
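Two notes on the bullet list. On the last question: `tf.nn.depthwise_conv2d` applies one 2-D kernel per input channel in a single op, which would replace the Python loop in `tf_gauss_blur_stacked` (an API suggestion, not benchmarked here). And part of OpenCV's speed comes from exploiting exactly the separability mentioned at the end: convolving with the outer product k·kᵀ equals two cheap 1-D passes. That identity can be checked with numpy alone:

```python
import numpy as np


def gauss1d(n, sigma):
    x = np.arange(n) - (n - 1) / 2
    k = np.exp(-x * x / (2 * sigma * sigma))
    return k / k.sum()


def blur_separable(img, k):
    # two 1-D passes (rows, then columns), zero padding at the borders
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)


def blur_2d(img, k):
    # direct 2-D pass with the outer-product kernel, zero padded
    # (written as correlation; identical to convolution for a symmetric kernel)
    k2 = np.outer(k, k)
    n = len(k)
    pad = n // 2
    padded = np.pad(img, pad)
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + n, j:j + n] * k2).sum()
    return out


rng = np.random.default_rng(0)
img = rng.random((20, 25))
k = gauss1d(5, 1.1)
max_diff = np.abs(blur_separable(img, k) - blur_2d(img, k)).max()
```

For a 5×5 kernel the separable form does 10 multiply-adds per pixel instead of 25, which accounts for a good part of the 2.5x headroom estimated above.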
<python><tensorflow><opencv><neural-network><convolution>
2025-06-19 13:04:44
0
314
Frank_Coumans
79,672,067
12,550,791
Issue while typing a recursive function with list unpacking
<p>I'm trying to correctly type the following function in Python 3.10</p> <pre><code>from typing import overload @overload def joinpath(*path_pieces: str) -&gt; str: ... @overload def joinpath(*path_pieces: list[str] | str) -&gt; list[str]: ... def joinpath(*path_pieces: list[str] | str) -&gt; list[str] | str: &quot;&quot;&quot;Join paths. os separators in the path pieces will be remove, except for the trailing separator of the last piece which is used to differenciate folders from files. Parameters ---------- *path_pieces : str | list[str] Path pieces to join. If one of the path piece is a list, the function will join each element of the list with the other path pieces and return a list of path. If all path pieces are strings, the function will return a string. Returns ------- str | list[str] the joined path: str if all the components are strings, list of strings otherwise. Examples -------- &gt;&gt;&gt; joinpath(&quot;a&quot;, &quot;b&quot;, &quot;c&quot;) &quot;a/b/c&quot; &gt;&gt;&gt; joinpath(&quot;a&quot;, &quot;&quot;, &quot;c&quot;) &quot;a/c&quot; &gt;&gt;&gt; joinpath([&quot;a&quot;, &quot;b&quot;], &quot;c/&quot;) [&quot;a/c/&quot;, &quot;b/c/&quot;] &gt;&gt;&gt; joinpath([&quot;a&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;], &quot;e&quot;) [&quot;a/c/e&quot;, &quot;a/d/e&quot;, &quot;b/c/e&quot;, &quot;b/d/e&quot;] &gt;&gt;&gt; joinpath([&quot;a&quot;, &quot;&quot;, &quot;b&quot;], [&quot;c&quot;, &quot;d&quot;], &quot;e&quot;) [&quot;a/c/e&quot;, &quot;a/d/e&quot;, &quot;b/c/e&quot;, &quot;b/d/e&quot;] &quot;&quot;&quot; if all(isinstance(piece, str) for piece in path_pieces): return join_path_pieces(*path_pieces) if len(path_pieces) &lt; 2: raise ValueError(&quot;At least two path pieces are required.&quot;) if len(path_pieces) &gt; 2: # Recursive call to join the first two pieces and the rest of the pieces. # This will slowly reduce the number of pieces until there are only two. 
return joinpath(joinpath(path_pieces[0], path_pieces[1]), *path_pieces[2:]) first_paths = ensure_in_list(path_pieces[0]) second_paths = ensure_in_list(path_pieces[1]) joined_paths = _cartesian_product_join_paths(first_paths, second_paths) return joined_paths def join_path_pieces(*path_pieces: str) -&gt; str: &quot;&quot;&quot;Join path pieces, while stripping leading slashes for all except the first, and trailing slashes for all except the last. &quot;&quot;&quot; striped_pieces = [path for path in path_pieces if path != &quot;&quot;] for i in range(len(striped_pieces) - 1): striped_pieces[i] = striped_pieces[i].rstrip(&quot;/&quot;) for i in range(1, len(striped_pieces)): striped_pieces[i] = striped_pieces[i].lstrip(&quot;/&quot;) return os.sep.join(striped_pieces) def ensure_in_list(value: T | list[T]) -&gt; list[T]: &quot;&quot;&quot;Put in a list any scalar value that is not None. If it is already a list, do nothing. &quot;&quot;&quot; if value is not None and not isinstance(value, list): value = [value] return value def _cartesian_product_join_paths( first_path_piece: list[str], second_path_piece: list[str] ) -&gt; list[str]: &quot;&quot;&quot;Compute the cartesian product of two lists of paths. If one of the path piece is an empty string, it will be ignored. The output list will contain the concatenation of each element of the first list with each element of the second list. Returns ------- list[str] List of length len(first_path_piece) * len(second_path_piece) that contains the concatenated path. &quot;&quot;&quot; ... 
</code></pre> <p>Unfortunately, there are two mypy errors:</p> <ol> <li>L4 error: Overloaded function signatures 1 and 2 overlap with incompatible return types</li> <li>L40 error: Argument 1 to &quot;join_path_pieces&quot; has incompatible type &quot;*tuple[list[str] | str, ...]&quot;; expected &quot;str&quot;</li> </ol> <p>For the first error, the two signatures are meant to reflect that if there is at least one list of str as an input, the output will be a list.</p> <p>For the second error, I feel like mypy is missing the condition on the type of all components of path_pieces</p>
<python><python-typing><mypy>
2025-06-19 12:44:45
0
391
Marco Bresson
79,672,036
14,463,396
Simpy model with closure, entities grabbing resource while closing code running
<p>I have a model where:</p> <ul> <li><p>resources are available between opening hours (8am-8pm).</p> </li> <li><p>Any entities using resources are kicked out at 1 minute to 8pm, then a closing function is run to claim all the resources with a -1 priority for 12 hours until 8am</p> </li> <li><p>The resources stop accepting new requests at 6pm:</p> <ul> <li><p>entities who arrive between 6-8pm have a timeout until 8:01pm before requesting a resource, by which time the shop <em>should</em> be closed and they can't claim one until the shop is open again at 8am.</p> </li> <li><p>If an entity doesn't get a resource during the day before 6pm, they timeout for ~2 hours until close where the shop <em>should</em> be closed and they wait for a resource to become available again when the shop opens at 8am.</p> </li> </ul> </li> </ul> <p>My MRE is below (its quite long):</p> <pre><code>import simpy import random import math import numpy as np class default_params(): random.seed(10) run_time = 10080 iterations = 1 no_resources = 4 inter_arr = 120 resource_time = 240 #shop Opening Hours shop_open_time = 8 shop_stop_accept = 18 shop_close_time = 20 class spawn_entity: def __init__(self, p_id): self.id = p_id self.arrival_time = np.nan self.resource_gained_time = np.nan self.leave_time = np.nan class shop_model: def __init__(self, run_number, input_params): #start environment, set entity counter to 0 and set run number self.env = simpy.Environment() self.input_params = input_params self.entity_counter = 0 self.run_number = run_number #establish resources self.resources = simpy.PriorityResource(self.env, capacity=input_params.no_resources) ##############################MODEL TIME############################### def model_time(self, time): #Function to work out day and hour in the model day = math.floor(time / (24*60)) day_of_week = day % 7 #If day 0, hour is time / 60, otherwise it is the remainder time once #divided by number of days hour = math.floor((time % (day*(24*60)) if day 
!= 0 else time) / 60) return day, day_of_week, hour ###########################ARRIVALS################################## def arrivals(self): yield self.env.timeout(1) while True: #up entity counter and spawn a new entity self.entity_counter += 1 p = spawn_entity(self.entity_counter) #begin entity to shop process self.env.process(self.entity_journey(p)) #randomly sample the time until the next arrival sampled_interarrival = round(random.expovariate(1.0 / self.input_params.inter_arr)) yield self.env.timeout(sampled_interarrival) ######################### CLOSE SHOP ################################ def close_shop(self): while True: #if first day, close shop until open time, then wait until close if self.env.now == 0: time_closed = self.input_params.shop_open_time time_out = self.input_params.shop_close_time else: #close for 12 hour overnight and time out process until next day time_closed = 12 time_out = 24 #Take away all the resources for the close time to simulate the shop # being closed. print(f'--!!!!!CLOSING SHOP AT {self.env.now} FOR {time_closed * 60} MINS!!!!!--') i=0 for _ in range(self.input_params.no_resources): i += 1 print(f'--claiming resource {i}') self.env.process(self.fill_shop(time_closed * 60)) print(f'--!!!!!SHOP CLOSED!!!!!--') #Timout for timeout period until next closure. 
yield self.env.timeout(time_out * 60) def fill_shop(self, time_closed): #If close time, take away all the resources with self.resources.request(priority=-1) as req: yield req print(f'--resource claimed for close at {self.env.now} for {time_closed} mins') yield self.env.timeout(time_closed) ######################## ENTITY JOURNEY ############################# def entity_journey(self, entity): #Arrival entity.arrival_time = self.env.now day, day_of_week, hour = self.model_time(entity.arrival_time) print(f'entity {entity.id} starting process at {entity.arrival_time}') #If arrival hour between stop accept and close, add an extra wait #to ensure they don't sneak in between these times if ((hour &gt;= self.input_params.shop_stop_accept) and (hour &lt; self.input_params.shop_close_time)): #Time out until shop close, where resources will all be claimed #then entity can wait until next open. next_close = ((day * 60 * 24) + (self.input_params.shop_close_time * 60)) time_out = (next_close - entity.arrival_time) + 1 print(f'entity {entity.id} arrived in queue after stop accepting. 
Time out {time_out} mins until close') yield self.env.timeout(time_out) print(f'entity {entity.id} has waited until close at {self.env.now}') #request resource i=1 with self.resources.request(priority=1) as req: #Find out current model time time = self.env.now print(f'entity {entity.id} requesting resource at time {time} - attempt {i}') day, day_of_week, hour = self.model_time(time) #Work out the minutes until the next shop stop accept time (next day #if hour after stop accept hour) next_stop_accept_day = (day + 1 if hour &gt;= self.input_params.shop_stop_accept else day) next_stop_accept = ((next_stop_accept_day * 60 * 24) + (self.input_params.shop_stop_accept * 60)) time_to_stop_accept = (next_stop_accept - time) + 1 print(f'entity {entity.id} has {time_to_stop_accept} mins until shop stops accepting') #entity either gets resource or timesout if past stop accept time yield req | self.env.timeout(time_to_stop_accept) #If entity doesn't get a bed the first attempt, keep trying #until they do while not req.triggered: i += 1 print(f'entity {entity.id} did not get resource as shop stopped accepting at {self.env.now}, entity waiting 2 hours then rejoining queue for attempt {i}') #Time out for the time between stop accepting and close, to #ensure no entities get a resource during this time. yield self.env.timeout(((self.input_params.shop_close_time - self.input_params.shop_stop_accept) * 60) + 1) print(f'entity {entity.id} requesting resource at time {self.env.now} - attempt {i}') #entity tries again to get a resource until the next stop accept #time. 
time = self.env.now day, day_of_week, hour = self.model_time(time) next_stop_accept_day = (day + 1 if hour &gt;= self.input_params.shop_stop_accept else day) next_stop_accept = ((next_stop_accept_day * 60 * 24) + (self.input_params.shop_stop_accept * 60)) time_to_stop_accept = (next_stop_accept - time) + 1 print(f'entity {entity.id} has {time_to_stop_accept} until shop stops accepting') #entity either gets resource or timesout if past stop accept time yield req | self.env.timeout(time_to_stop_accept) #Entity got resource, record the time and randomly sample #process time print(f'entity {entity.id} got resource at {self.env.now} on attempt {i}') entity.resource_gained_time = self.env.now #Time entity spends is randomly sampled, with them being kicked out #1 minute before 8pm. day, day_of_week, hour = self.model_time(self.env.now) next_close_day = (day + 1 if hour &gt;= self.input_params.shop_close_time else day) next_close = ((next_close_day * 60 * 24) + (self.input_params.shop_close_time * 60)) - 1 time_to_next_close = next_close - self.env.now sampled_shop_time = round(random.expovariate(1.0 / self.input_params.resource_time)) yield self.env.timeout(min(sampled_shop_time, time_to_next_close)) #record entity leave time entity.leave_time = self.env.now ########################RUN####################### def run(self): self.env.process(self.arrivals()) self.env.process(self.close_shop()) self.env.run(until = self.input_params.run_time) def run_the_model(input_params): #run the model for the number of iterations specified for run in range(input_params.iterations): print(f&quot;Run {run+1} of {input_params.iterations}&quot;) model = shop_model(run, input_params) model.run() run_the_model(default_params) </code></pre> <p>Sometimes the close code runs perfectly, and all the resources are claimed at 8pm, which looks like:</p> <pre><code>--!!!!!CLOSING SHOP AT 4080 FOR 720 MINS!!!!!-- --claiming resource 1 --claiming resource 2 --claiming resource 3 --claiming resource 4 
--!!!!!SHOP CLOSED!!!!!-- --resource claimed for close at 4080 for 720 mins --resource claimed for close at 4080 for 720 mins --resource claimed for close at 4080 for 720 mins --resource claimed for close at 4080 for 720 mins </code></pre> <p>The issue I have is that sometimes when I'm closing the shop, not all resources get claimed at exactly 8pm and some entity activity will cut in before all resources have been claimed, sometimes even claiming a resource for a short time! An example from the above code output is:</p> <pre><code>--!!!!!CLOSING SHOP AT 2640 FOR 720 MINS!!!!!-- --claiming resource 1 --claiming resource 2 --claiming resource 3 --claiming resource 4 --!!!!!SHOP CLOSED!!!!!-- --resource claimed for close at 2640 for 720 mins --resource claimed for close at 2640 for 720 mins entity 21 has waited until close at 2641 entity 21 requesting resource at time 2641 - attempt 1 entity 21 has 1320 mins until shop stops accepting entity 22 has waited until close at 2641 entity 22 requesting resource at time 2641 - attempt 1 entity 22 has 1320 mins until shop stops accepting entity 19 requesting resource at time 2642 - attempt 2 entity 19 has 1319 until shop stops accepting entity 20 requesting resource at time 2642 - attempt 2 entity 20 has 1319 until shop stops accepting entity 19 got resource at 2642 on attempt 2 entity 20 got resource at 2642 on attempt 2 --resource claimed for close at 2683 for 720 mins --resource claimed for close at 2703 for 720 mins </code></pre> <p>Here, 2 of the 4 resources gets closed exactly at 2640mins as expected, but the other 2 don't get claimed until 2683 and 2703. Entities 19 and 20 both manage to claim a resource briefly before they get closed, and leave the model. I don't understanding how they are managing to jump ahead of the close in the queue, as the close is priority -1 compared to their priority of 1. Nor do I understand why they sometimes don't get claimed to so many minutes after the close. 
Any help greatly appreciated.</p>
<python><simpy>
2025-06-19 12:16:58
2
3,395
Emi OB
79,671,980
2,123,706
MemoryError when using to_sql and sqlalchemy
<p>I am copying data from postgres to SSMS, and using sqlalchemy as the go-between.</p> <p>I read from postgres successfully, and push it into a pandas dataframe.</p> <p>I write most of the tables to SSMS successfully, but for a few receive the following error: <code>MemoryError</code></p> <p>I get around this by inserting 10 rows at a time.</p> <p>I am confused, as using <code>sys.getsizeof()</code>, the df in question is not larger than any of the other successfully written tables.</p> <p>Here is some data on the written tables. The <code>full</code> column is the size of the entire data; the <code>partial</code> column is the size of the chunk written to SQL, if needed.</p> <p>87 successful:</p> <p><a href="https://i.sstatic.net/KytaMYGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KytaMYGy.png" alt="enter image description here" /></a></p> <p>10 unsuccessful tables that had to be written in chunks:</p> <p><a href="https://i.sstatic.net/fHvqSx6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fHvqSx6t.png" alt="enter image description here" /></a></p> <p>We see the maximum of the 10 unsuccessful is 141 KB, which is 10x smaller than the maximum of the successful batch, 1.6 MB.</p> <p>What could be causing this <code>MemoryError</code>, on such a small value?</p> <p>Redacted code:</p> <pre><code>query = f'select * from {postgres_table}' cur.execute(query) df = pd.DataFrame(cur.fetchall()) # do stuff# df.to_sql(name=ssms_table, con=ssms_conxn, if_exists='append', index=False) ssms_conxn.commit() </code></pre> <p>Full traceback:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 49, in &lt;module&gt; File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\core\generic.py&quot;, line 2878, in to_sql return sql.to_sql( ^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\io\sql.py&quot;, line 769, in to_sql return pandas_sql.to_sql( 
^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\io\sql.py&quot;, line 1920, in to_sql total_inserted = sql_engine.insert_records( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\io\sql.py&quot;, line 1461, in insert_records return table.insert(chunksize=chunksize, method=method) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\io\sql.py&quot;, line 1023, in insert num_inserted = exec_insert(conn, keys, chunk_iter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\pandas\io\sql.py&quot;, line 929, in _execute_insert result = conn.execute(self.table.insert(), data) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 1412, in execute return meth( ^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\sql\elements.py&quot;, line 483, in _execute_on_connection return connection._execute_clauseelement( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 1635, in _execute_clauseelement ret = self._execute_context( ^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 1844, in _execute_context return self._exec_single_context( ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 1984, in _exec_single_context self._handle_dbapi_exception( File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 2342, in _handle_dbapi_exception raise exc_info[1].with_traceback(exc_info[2]) File 
&quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\base.py&quot;, line 1934, in _exec_single_context self.dialect.do_executemany( File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\dialects\mssql\pyodbc.py&quot;, line 716, in do_executemany super().do_executemany(cursor, statement, parameters, context=context) File &quot;C:\Users\aaa\AppData\Roaming\Python\Python311\site-packages\sqlalchemy\engine\default.py&quot;, line 918, in do_executemany cursor.executemany(statement, parameters) MemoryError </code></pre>
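For reference, the hand-rolled 10-rows-at-a-time workaround can be expressed with `to_sql`'s built-in `chunksize` parameter, which splits one giant `executemany()` into many small ones. A minimal sketch; it uses an in-memory SQLite database as a stand-in for the SQL Server target, and the sample frame is invented for illustration:

```python
import sqlite3

import pandas as pd

# Stand-in data; the real frame comes from the postgres query.
df = pd.DataFrame({"a": range(1000), "b": [str(i) for i in range(1000)]})

conn = sqlite3.connect(":memory:")  # stand-in for the SSMS connection

# chunksize makes pandas issue many small executemany() calls instead of
# one huge one; the same idea as manual 10-row batches, but built in.
df.to_sql("target", conn, if_exists="replace", index=False, chunksize=100)

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 1000
```

With pyodbc's `fast_executemany`, parameter buffers are reportedly sized from the widest column times the row count, so a small frame with wide text columns can still exhaust memory; a modest `chunksize` is the usual first thing to try.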
<python><sqlalchemy>
2025-06-19 11:17:30
1
3,810
frank
79,671,957
9,251,158
Equivalent of Pandas `to_feather` for Series
<p>I want to save a Pandas series to disk in feather format:</p> <pre><code>data.to_feather(filepath) </code></pre> <p>but I get this error:</p> <pre><code>AttributeError: 'Series' object has no attribute 'to_feather' </code></pre> <p>I use <code>.to_feather()</code> on dataframes; what is the equivalent for series? If it does not exist, what is the best practice to save a Pandas series to disk?</p>
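A common pattern, sketched below, is to round-trip through a one-column DataFrame, since only DataFrame implements `to_feather`. The actual feather write needs pyarrow and a *named* Series (feather requires string column names), so the I/O calls are left as comments:

```python
import pandas as pd

# Give the Series a name; feather requires string column names.
s = pd.Series([1.0, 2.0, 3.0], name="values")

# Only DataFrame implements to_feather, so wrap the Series first:
df = s.to_frame()

# The actual I/O (needs pyarrow installed):
# df.to_feather("values.feather")
# restored = pd.read_feather("values.feather")["values"]

# The wrap/unwrap round-trip itself is lossless:
assert df["values"].equals(s)
```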
<python><pandas><feather>
2025-06-19 11:00:37
1
4,642
ginjaemocoes
79,671,934
1,841,143
ValueError: No valid stream found in input file. Is -1 of the desired media type?
<p>While using <code>torchcodec</code> with the following code</p> <pre class="lang-py prettyprint-override"><code>import torchcodec file_path = &quot;/path/to/episode_000000.mp4&quot; x = torchcodec.decoders.VideoDecoder(file_path, seek_mode=&quot;approximate&quot;) </code></pre> <p>I got the following error:</p> <pre><code>[rank0]: File &quot;/home/hzxie/Development/vla/utils/datasets.py&quot;, line 202, in _get_video_bytes [rank0]: x = torchcodec.decoders.VideoDecoder(vbytes) [rank0]: File &quot;/home/hzxie/Applications/miniconda3/envs/lerobot/lib/python3.10/site-packages/torchcodec/decoders/_video_decoder.py&quot;, line 98, in __init__ [rank0]: core.add_video_stream( [rank0]: File &quot;/home/hzxie/Applications/miniconda3/envs/lerobot/lib/python3.10/site-packages/torch/_ops.py&quot;, line 723, in __call__ [rank0]: return self._op(*args, **kwargs) [rank0]: ValueError: No valid stream found in input file. Is -1 of the desired media type? </code></pre> <p>How to solve this problem?</p>
<python><ffmpeg><pytorch><torchcodec><lerobot>
2025-06-19 10:42:05
1
3,754
Haozhe Xie
79,671,740
6,553,631
Algorithm to extract the common part of all strings in a list
<p>My goal is to classify and show the types of errors in log files. I've solved the part of clustering them into buckets, but now I want to show the user the error string that is common across all the errors in each bucket. Right now I'm showing the user the first string of the bucket, but it would be nicer to show them a glob expression or a regex that matches all the strings in the bucket but that keeps the common parts of the strings and substitutes the different parts with something that matches those different parts (maximizing the amount of characters that match).</p> <p>For example, if I have these error strings in a bucket:</p> <pre><code>&quot;Error: [MODULE_FOO] File foo has 5 unsolved dependencies and 4 errors.&quot; &quot;Error: [MODULE_BLA] Files bar and yaz have 123 unsolved dependencies.&quot; &quot;Error: [MODULE_123] File baz has 45 unsolved dependencies and 3 warnings.&quot; </code></pre> <p>The common glob string that matches all of them would be: <code>&quot;Error: [MODULE_*] File* * ha* * unsolved dependencies*.&quot;</code></p> <p><em>This example is just to illustrate how the different parts of the strings are converted to asterisks and the common parts are kept as they are. Don't assume this is the actual structure of the errors.</em></p> <p>I'm working on Python, but if there's no library that can help me, any non-language specific pseudo-code is also welcome.</p> <p>To cluster the strings I'm using difflib. 
Here's the code; I'm not sure it's relevant to the question, but I wonder if I could use more of difflib's functionality to help create this algorithm for extracting the common parts between strings.</p> <pre class="lang-py prettyprint-override"><code>def cluster_strings(strings: list[str], threshold: float = 0.9) -&gt; list[list[str]]: clusters = [] cluster_num = 0 while len(strings) &gt; 0: string = strings.pop() clusters.append([string]) indexes_to_remove = [] for idx, string_to_match in enumerate(strings): if difflib.SequenceMatcher(None, string, string_to_match).quick_ratio() &gt; threshold: clusters[cluster_num].append(string_to_match) indexes_to_remove.append(idx) strings = [string for idx, string in enumerate(strings) if idx not in indexes_to_remove] cluster_num += 1 return clusters </code></pre>
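One stdlib-only way to attempt the extraction step is to fold `SequenceMatcher.get_matching_blocks()` across the bucket, keeping matched runs and collapsing every divergent gap into `*`. This is a sketch, not the asker's code, and the exact block boundaries depend on SequenceMatcher's heuristics, so the resulting pattern is approximate:

```python
import difflib
import re


def common_glob(strings):
    """Fold a bucket of strings into a glob-like pattern: runs shared by
    every string are kept, divergent gaps become '*'."""

    def pair(a, b):
        sm = difflib.SequenceMatcher(None, a, b, autojunk=False)
        out, prev_a, prev_b = [], 0, 0
        for i, j, size in sm.get_matching_blocks():
            if i > prev_a or j > prev_b:  # unmatched text on either side
                out.append("*")
            out.append(a[i:i + size])
            prev_a, prev_b = i + size, j + size
        return re.sub(r"\*+", "*", "".join(out))  # collapse runs of '*'

    pattern = strings[0]
    for s in strings[1:]:
        pattern = pair(pattern, s)
    return pattern


errors = [
    "Error: [MODULE_FOO] File foo has 5 unsolved dependencies and 4 errors.",
    "Error: [MODULE_BLA] Files bar and yaz have 123 unsolved dependencies.",
    "Error: [MODULE_123] File baz has 45 unsolved dependencies and 3 warnings.",
]
print(common_glob(errors))
```

Note that the output may contain `[`, which `fnmatch` would treat as a character class, so escape it before using the pattern for actual glob matching.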
<python><string><algorithm>
2025-06-19 08:16:50
1
765
tgonzalez89
79,671,560
22,434,294
How can I insert an image with a clickable hyperlink on the first page of a PDF using Python?
<p>I'm trying to programmatically add a clickable image icon to the top-left corner of the first page of a PDF file using Python. I want the icon to link to a URL.</p> <p>What I tried using reportlab and PyPDF2:</p> <pre><code>from PyPDF2 import PdfReader, PdfWriter from reportlab.pdfgen import canvas from reportlab.lib.pagesizes import letter from reportlab.lib.units import inch import io # Input paths pdf_path = &quot;original.pdf&quot; icon_path = &quot;icon.jpeg&quot; output_pdf_path = &quot;output.pdf&quot; # Set up canvas for overlay packet = io.BytesIO() can = canvas.Canvas(packet, pagesize=letter) # Coordinates for top-left corner with padding x, y = 40, 792 - 40 - inch can.drawImage(icon_path, x, y, width=inch, height=inch) can.linkURL(&quot;https://t.me/newsmalayalampdf&quot;, (x, y, x + inch, y + inch), relative=0) can.save() packet.seek(0) # Merge overlay with original PDF overlay = PdfReader(packet) reader = PdfReader(pdf_path) writer = PdfWriter() first_page = reader.pages[0] first_page.merge_page(overlay.pages[0]) writer.add_page(first_page) for page in reader.pages[1:]: writer.add_page(page) with open(output_pdf_path, &quot;wb&quot;) as f: writer.write(f) </code></pre> <p>Despite setting the coordinates for the top-left, the image still appears somewhere in the middle of the page in the output. I've tried recalculating the Y position using the letter page height, but no luck.</p> <p>What is the correct way to align an image with a hyperlink in the top-left of a PDF page using ReportLab and PyPDF2?</p>
<python><pypdf><reportlab>
2025-06-19 05:25:27
2
577
nicholaspooran
79,671,464
219,153
What is the reason for this performance discrepancy between NumPy and Numba?
<p>This Python 3.12.7 script with NumPy 2.2.4 and Numba 0.61.2:</p> <pre><code>import numpy as np, timeit as ti, numba as nb def f0(a): p0 = a[:-2] p1 = a[1:-1] p2 = a[2:] return (p0 &lt; p1) &amp; (p1 &gt; p2) def f1(a): p0 = a[:-4] p1 = a[1:-3] p2 = a[2:-2] p3 = a[3:-1] p4 = a[4:] return ((p0 &lt; p1) &amp; (p1 == p2) | (p1 &lt; p2)) &amp; ((p2 &gt; p3) | (p2 == p3) &amp; (p3 &gt; p4)) @nb.njit(fastmath=True) def g0(a): r = np.zeros_like(a, dtype=np.bool) for i in range(1, a.size-1): r[i] = (a[i-1] &lt; a[i]) &amp; (a[i+1] &lt; a[i]) return r[1:-1] @nb.njit(fastmath=True) def g1(a): r = np.zeros_like(a, dtype=np.bool) for i in range(2, a.size-2): r[i] = ((a[i-1] == a[i]) &amp; (a[i-2] &lt; a[i-1]) | (a[i-1] &lt; a[i])) &amp; \ ((a[i+1] == a[i]) &amp; (a[i+2] &lt; a[i+1]) | (a[i+1] &lt; a[i])) return r[2:-2] a = np.random.randint(0, 256, (500, 500)).astype(np.uint8) b = a.ravel() print(f'Minimum, median and maximum execution time in us:') for fun in ('f0(b)', 'f1(b)', 'g0(b)', 'g1(b)'): t = 10**6 * np.array(ti.repeat(stmt=fun, setup=fun, globals=globals(), number=1, repeat=999)) print(f'{fun:20} {np.amin(t):8,.3f} {np.median(t):8,.3f} {np.amax(t):8,.3f}') </code></pre> <p>produces these timings on an AMD Ryzen 7 3800X PC running Ubuntu 22.04.5:</p> <pre><code>Minimum, median and maximum execution time in us: f0(b) 32.261 33.483 95.640 f1(b) 118.974 120.737 129.424 g0(b) 11.081 11.281 19.327 g1(b) 723.319 744.419 794.042 </code></pre> <p>For a simple array expression, i.e. <code>f0</code> vs <code>g0</code>, Numba is about 3x faster than NumPy, but for a more complex expression, i.e. <code>f1</code> vs <code>g1</code>, Numba becomes 5x slower. This is very surprising to me. What is the reason? Can these run times be improved?</p>
<python><numpy><benchmarking><numba>
2025-06-19 02:19:48
1
8,585
Paul Jurczak
79,671,212
2,153,235
Force "os" or "posixpath" to use forward slashes?
<p>To get paths that use a forward slash as the path separator, Google AI says to use the <code>posixpath</code> module instead of the <code>os</code> module. However, I find that it still uses backslashes:</p> <pre><code>&gt;&gt;&gt; import os &gt;&gt;&gt; os.path.abspath('LoadJSON.py') 'C:\\cygwin64\\home\\User.Name\\prj\\SomeProject\\Some-Subfolder\\LoadJSON.py' &gt;&gt;&gt; import posixpath as pp &gt;&gt;&gt; pp.abspath('LoadJSON.py') 'C:\\cygwin64\\home\\User.Name\\prj\\SomeProject\\Some-Subfolder/LoadJSON.py' </code></pre> <p>If possible, I would like to avoid littering my code with string replacements. Is it possible to get paths that use forward slashes by default?</p> <p>I am using:</p> <ul> <li>Spyder version: 6.0.5 (conda)</li> <li>Python version: 3.9.18 64-bit</li> <li>Qt version: 5.15.2</li> <li>PyQt5 version: 5.15.10</li> <li>Operating System: Windows-10-10.0.26100-SP0</li> </ul>
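A sketch of the usual alternative: `pathlib` stores the path natively but can render it with forward slashes via `.as_posix()`, which avoids manual string replacement (the sample Windows path is taken from the question):

```python
from pathlib import Path, PureWindowsPath

# On the current machine: resolve, then render with forward slashes.
p = Path("LoadJSON.py").resolve()
print(p.as_posix())

# The conversion is purely lexical, as a PureWindowsPath shows even on
# non-Windows systems:
w = PureWindowsPath(r"C:\cygwin64\home\User.Name\prj\LoadJSON.py")
print(w.as_posix())  # C:/cygwin64/home/User.Name/prj/LoadJSON.py
```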
<python><python-3.x><path><path-separator>
2025-06-18 19:28:35
0
1,265
user2153235
79,671,202
9,971,619
Why is pydantic_settings not parsing the correct dotenv file?
<p>I have a FastAPI app written in Python. I want to have 2 environments for the app: <em>dev</em> and <em>local</em>. For managing the environments, I have kept 2 files, namely <code>.local.env</code> and <code>.dev.env</code>. I also have a standalone <code>.env</code> file, which I later plan to use for production purposes.</p> <p>I am unable to parse a particular dotenv file even after explicitly specifying it.</p> <p>My approach is to write a class as follows:</p> <pre class="lang-py prettyprint-override"><code># config.py file from pydantic_settings import BaseSettings, SettingsConfigDict class EnvironmentSettings(BaseSettings): APP_ENV: str APP_DB_URL: str # .... and so on... # Experiment 1: no luck! model_config = SettingsConfigDict(env_file=&quot;.local.env&quot;) # Experiment 2: Settings priorities - still no luck! model_config = SettingsConfigDict(env_file=(&quot;.local.env&quot;, &quot;.env&quot;)) # Experiment 3: using relative path of the file. model_config = SettingsConfigDict(env_file=&quot;..//.local.env&quot;) # Experiment 4: using absolute path of the file. model_config = SettingsConfigDict(env_file=&quot;C:\\myproject\\.local.env&quot;) # Experiment 5: Using the old format, but no luck. class Config: env_file=&quot;.local.env&quot; </code></pre> <p>In all cases it is parsing the contents of the standalone .env file every time. Why?</p> <p>The folder structure is as follows:</p> <pre class="lang-none prettyprint-override"><code>myproject |___ app |____|___ config.py |___.env |___.local.env |___.dev.env </code></pre> <p>Info: Python 3.12.8 and pydantic_settings &gt;2.9.1</p>
<python><fastapi><dotenv><pydantic-v2><pydantic-settings>
2025-06-18 19:11:38
1
329
Kalpadiptya Roy
79,670,998
11,063,709
jax.random.uniform causing segmentation fault when called on GPU but not on CPU, nor is jax.random.normal crashing
<p>I ran the following 4 commands at the command line (bash):</p> <pre><code>JAX_PLATFORM_NAME=cpu python -c &quot;import jax; import jax.numpy as jnp; key = jax.random.PRNGKey(1); print(jax.random.uniform(key, (2, 2),))&quot; JAX_PLATFORM_NAME=cpu python -c &quot;import jax; import jax.numpy as jnp; key = jax.random.PRNGKey(1); print(jax.random.normal(key, (2, 2),))&quot; JAX_PLATFORM_NAME=gpu python -c &quot;import jax; import jax.numpy as jnp; key = jax.random.PRNGKey(1); print(jax.random.uniform(key, (2, 2),))&quot; JAX_PLATFORM_NAME=gpu python -c &quot;import jax; import jax.numpy as jnp; key = jax.random.PRNGKey(1); print(jax.random.normal(key, (2, 2),))&quot; </code></pre> <p>All of them ran fine, except for the <code>uniform</code> sampling on GPU, which resulted in a <code>Segmentation fault (core dumped)</code> with exit code 139 (and exit code 245 when similar code was run as part of a longer program).</p> <p>Partial output of <code>nvidia-smi</code>:</p> <pre><code> NVIDIA-SMI 565.57.01 Driver Version: 565.57.01 CUDA Version: 12.7 NVIDIA GeForce RTX 4090 </code></pre> <p>jax and jaxlib versions:</p> <pre><code>Name: jax Version: 0.4.37 Name: jaxlib Version: 0.4.36 </code></pre> <p>ChatGPT thinks it's a CUDA and jax compatibility issue but the <a href="https://docs.jax.dev/en/latest/installation.html#nvidia-gpu" rel="nofollow noreferrer">page here</a> seems to suggest CUDA &gt;=12.1 should be fine.</p> <p>Any ideas?</p> <p>Update: I upgraded <code>jax</code> and <code>jaxlib</code> to 0.6.2 and now get:</p> <pre><code>~/anaconda3/envs/unc/lib/python3.10/site-packages/jaxlib/plugin_support.py:71: RuntimeWarning: JAX plugin jax_cuda12_plugin version 0.4.33 is installed, but it is not compatible with the installed jaxlib version 0.6.2, so it will not be used. warnings.warn( Segmentation fault (core dumped) </code></pre>
<python><jax>
2025-06-18 16:21:11
0
1,442
Warm_Duscher