Dataset columns (dtype, min, max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, length 15 to 150
QuestionBody: string, length 40 to 40.3k
Tags: string, length 8 to 101
CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, length 3 to 30 (nullable)
79,533,019
2,038,383
Get name of executed .pex file?
<p>Consider this script <code>/tmp/test.py</code>:</p> <pre class="lang-py prettyprint-override"><code>import os import sys print(__file__) print(os.path.dirname(os.path.realpath(__file__))) print(sys.argv) </code></pre> <p>bundled like this with <code>pex</code>:</p> <pre><code>python -m pex --exe test.py -o test.pex </code></pre> <p>Executing <code>./test.pex</code> gives:</p> <pre><code>/home/user/.cache/pex/unzipped_pexes/1/05ee97ddfe7a3cfe9392492e49c46d07135e26d9/__pex_executable__.py /home/user/.cache/pex/user_code/0/242a6d4429f13194d3dedebc8dbd8d72bf0c79bd ['/home/user/.cache/pex/unzipped_pexes/1/05ee97ddfe7a3cfe9392492e49c46d07135e26d9/__pex_executable__.py'] </code></pre> <p>How can I get the name and path of the executed .pex file?</p>
<python><pex>
2025-03-25 08:08:12
1
8,760
spinkus
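The asker's own output shows that `__file__` and `sys.argv[0]` point into pex's unzip cache, so the original .pex path has to come from somewhere else. The pex runtime reportedly exports a `PEX` environment variable holding that path; this is an assumption about pex's runtime behavior, not verified against every pex version, so treat the sketch below accordingly:

```python
import os

def executed_pex_path():
    """Return the path of the executing .pex file, or None outside a pex.

    Assumption: the pex runtime sets the PEX environment variable to the
    path of the .pex file being run; plain `python script.py` will not
    have it, so this degrades to None.
    """
    return os.environ.get("PEX")
```

Running `./test.pex` with this inside would then print the .pex path rather than the cache path that `__file__` reports.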
79,532,844
9,359,102
Create a Class object using the username from incoming data
<p>Python newbie here:</p> <p>I have a</p> <pre><code>class ExampleState: ... </code></pre> <p>My purpose is to create a class object unique to every user. I get the username from my client flutter app in django.</p> <p>So, instead of</p> <pre><code>state = ExampleState() </code></pre> <p>it should be</p> <pre><code>state_Derek = ExampleState() state_Brian = ExampleState() ... </code></pre> <p>In my django views,</p> <p>I have</p> <pre><code>username = data.get('username') </code></pre> <p>I now need to create a class Object based on the username above.</p> <p>How do I create a class object like</p> <pre><code>state_(username fetched from above) = ExampleState() </code></pre>
<python>
2025-03-25 06:38:47
1
489
Earthling
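Dynamically named variables like `state_Derek` are an anti-pattern in Python; the idiomatic answer to the question above is a dict keyed by username. A minimal sketch (the `state_for` helper name and the `ExampleState` body are illustrative, not from the question):

```python
class ExampleState:
    """Per-user state; the fields are placeholders for whatever the app tracks."""
    def __init__(self):
        self.data = {}

# One shared registry instead of dynamically named variables.
states = {}

def state_for(username):
    # Create the state on first access, reuse it afterwards.
    if username not in states:
        states[username] = ExampleState()
    return states[username]
```

In the Django view this becomes `state = state_for(data.get('username'))`, and the same object comes back on every later request with that username (for a single process; a multi-process deployment would need shared storage).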
79,532,659
5,483,765
FastAPI openapi_examples not working with Query() although it works with Body(). How can I make it work with Query()?
<p>I am using <code>fastapi==0.115.12</code> and I have the following code:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Body, Query from pydantic import BaseModel app = FastAPI() class GetFrameworksRequest(BaseModel): devtype: str languages: Optional[list[str]] | None = None @app.get(&quot;/v1/swe/lookup/frameworks&quot;) async def get_frameworks(query: Annotated[GetFrameworksRequest, Body( openapi_examples={ &quot;languages are provided&quot;: { &quot;value&quot;: { &quot;devtype&quot;: &quot;backend&quot;, &quot;languages&quot;: [&quot;python&quot;, &quot;javascript&quot;] } }, &quot;languages are not provided&quot;: { &quot;value&quot;: { &quot;devtype&quot;: &quot;backend&quot;, &quot;languages&quot;: None } } } )]) </code></pre> <p>Which as expected shows the dropdown of examples (and replaces the code in the example as well):</p> <p><a href="https://i.sstatic.net/WxK07D6w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxK07D6w.png" alt="docs GET request with dropdown" /></a></p> <p>However, if I just change the <code>Body</code> to <code>Query</code> it breaks.</p> <p><strong>My question is</strong>, how to properly document examples <strong>for <code>Query</code></strong> parameters such that I can have the nice documentation showing a dropdown and that has a code replacement when I select an option (just like the <code>Body</code>)? Or am I getting something wrong here?</p>
<python><fastapi><openapi><swagger-ui>
2025-03-25 03:59:13
0
1,035
Hassan
79,532,585
9,986,939
SQLAlchemy Count returns NoneType
<p>I'm working on a FastAPI project and I'm using SQLAlchemy to map the data to the API. I swear this problem started randomly as I've reverted back to my MVP code (bookmarked so I can always roll back to working code), and it still doesn't work.</p> <p>My situation is this, I'm trying to query the database to get the record count and then offset/limit and paginate.</p> <p>Here is my broken code:</p> <pre><code>query = db.query(query_class) total_items = query.count().scalar() </code></pre> <p>This returns an error <code>int() argument must be a string, a bytes-like object or a real number, not 'NoneType'</code> which tells me that count is likely None.</p> <p>Now obviously I can query the dataset and see there are records. In fact I can even do this:</p> <pre><code>total_items = len(db.query(query_class).all()) </code></pre> <p>Now can someone enlighten me on what I'm doing wrong here?</p> <p>Here are my versions:</p> <ul> <li>SQLAlchemy 1.4.54</li> <li>sqlalchemy-databricks 0.2.0</li> </ul>
<python><sqlalchemy><databricks><databricks-sql>
2025-03-25 02:37:31
1
407
Robert Riley
79,532,447
889,053
Does the oracle driver for python really require libraries from the native client?
<p>I am writing some Python code and pulling in the</p> <pre class="lang-yaml prettyprint-override"><code>cx-oracle = &quot;^8.3.0&quot; </code></pre> <p>library with poetry. However, in order to make this work I actually have to initialize it with a directory where it can find the native libraries. Specifically, <a href="https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html" rel="nofollow noreferrer">the Oracle documentation</a> calls out that this needs the libraries for the quick client.</p> <p>But, this is entirely non-portable. I want a project/bundle where when I type:</p> <pre><code>poetry install </code></pre> <p>Everything I need to run is installed into the virtual environment, an no extra configuration is required.</p> <p>What I can do to get around this (and I don't want to) is actually put the libraries into a dedicated directory in source control and initialize it from there. But checking in binary dependencies into source has a very bad smell.</p> <p>So, am I missing something, or do I really need to do this? I could see Oracle forcing this for licensing reasons.</p>
<python><oracle-database><python-poetry>
2025-03-25 00:35:40
1
5,751
Christian Bongiorno
79,532,425
11,231,350
Applying a 2D deformation field to an image
<p>I am trying to deform an image using the following vector field.</p> <p>I have tried to use the response in <a href="https://stackoverflow.com/a/58727841/11231350">this post</a>. However, all of my attempts have been unsuccessful till now.</p> <p>The deformation field is generated using the following code:</p> <pre><code>import numpy as np import cv2 size = 100 x = np.linspace(-1, 1, size) y = np.linspace(-1, 1, size) X, Y = np.meshgrid(x, y) phase_profile = 800.0*((X-0.0)**2 + (Y-0.0)**2) </code></pre> <p><a href="https://i.sstatic.net/V08odsLt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V08odsLt.png" alt="enter image description here" /></a></p> <p><strong>How to deform an image given the deformation field represented by the light blue arrows ?</strong> Here is an example of an image I am trying to deform.</p> <p><a href="https://i.sstatic.net/0kx99wUC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kx99wUC.png" alt="enter image description here" /></a></p> <p>My attempt is the following:</p> <pre><code>def stack_overflow_gradient_distortion_map(phase_profile, size=256, coeff=30.0): # Create coordinate grids x = np.linspace(-1, 1, size) y = np.linspace(-1, 1, size) X, Y = np.meshgrid(x, y) shape = X.shape dx = 2 / (size - 1) grad_y, grad_x = np.gradient(phase_profile, dx, dx) # Compute gradients (deformation field) gy, gx = np.gradient(phase_profile) mapx_base, mapy_base = np.meshgrid(np.arange(shape[0]), np.arange(shape[1])) mapx = mapx_base + gx*coeff mapy = mapy_base + gy*coeff return mapx, mapy </code></pre> <p>P.S: The plot function used is the following:</p> <pre><code>def plot_phase_heatmap_with_gradients(phase_profile, mesh_grid_x, mesh_grid_y): dx = 2 / (size - 1) # Physical spacing in the [-1, 1] range grad_y, grad_x = np.gradient(phase_profile, dx, dx) plt.figure(figsize=(8, 6)) plt.imshow(phase_profile, cmap='hot', origin='lower', extent=[-1, 1, -1, 1]) plt.colorbar(label='Phase Profile') print(&quot;max grad_x: 
&quot;, np.max(grad_x)) # Overlay gradient arrows skip = 10 # Adjust to reduce arrow density plt.quiver(mesh_grid_x[::skip, ::skip], mesh_grid_y[::skip, ::skip], grad_x[::skip, ::skip], grad_y[::skip, ::skip], color='cyan') plt.xlabel('X-axis') plt.ylabel('Y-axis') plt.title('Phase Profile with Gradient Vectors') plt.show() </code></pre>
<python><opencv><image-processing><graphics><computer-vision>
2025-03-25 00:16:47
0
369
alpha027
79,532,418
1,445,660
sqlalchemy - clear list and add item - null violation for foreign key
<p>I try to clear a list and add an item to it. I get <code>IntegrityError('(psycopg2.errors.NotNullViolation) null value in column &quot;game_id&quot; of relation &quot;player&quot; violates not-null co...2, null).\n')</code></p> <pre><code>class Game(Base): __tablename__ = &quot;game&quot; game_id: int = Column(INTEGER, primary_key=True, server_default=Identity(always=True, start=1, increment=1, minvalue=1, maxvalue=2147483647, cycle=False, cache=1), autoincrement=True) players = relationship('Player', back_populates='game') class Player(Base): __tablename__ = &quot;player&quot; player_id: int = Column(INTEGER, primary_key=True, server_default=Identity(always=True, start=1, increment=1, minvalue=1, maxvalue=2147483647, cycle=False, cache=1), autoincrement=True) game_id: int = Column(INTEGER, ForeignKey('game.game_id'), nullable=False) game = relationship('Game', back_populates='players') game = session.query(Game).first() game.players.clear() player = Player(name='john') game.players.append(player) session.commit() </code></pre>
<python><postgresql><sqlalchemy>
2025-03-25 00:10:12
0
1,396
Rony Tesler
79,532,286
14,471,263
Array time complexity when modifying elements in Python
<p>I was reading a bit on DS/A and found this cheat sheet on Leet Code..</p> <p>Add or remove element from arbitrary index: O(n) Access or modify element at arbitrary index: O(1)</p> <p>Intuitively I would think they would both be O(n). Why is one O(1) and the other O(n)?</p> <p>Why is adding or removing an element linear, while accessing or modifying an element constant? Does it have to do with re-indexing the array?</p> <p>Would adding or removing the last element of an array be O(1) since you wouldn't be adjusting the index of that array.</p> <p>Assume we're talking about Python here in case that matters for this specific question.</p>
<python><time-complexity>
2025-03-24 22:21:40
1
301
rarara
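The asymmetry in the question above comes from contiguous storage: reading or writing index `i` is a direct address computation, while inserting or deleting at index `i` forces every later element to shift. Python's `list` behaves this way, which a few operations make concrete:

```python
data = [10, 20, 30, 40, 50]

# Access/modify at an arbitrary index: address computed directly, O(1).
data[2] = 99

# Insert at an arbitrary index: everything after it shifts right, O(n).
data.insert(1, 15)

# Remove from an arbitrary index: everything after it shifts left, O(n).
data.pop(1)

# At the end there is nothing to shift, so these are O(1)
# (append is amortized O(1) because of occasional capacity resizes).
data.append(60)
data.pop()
```

This also confirms the asker's guess: adding or removing the last element is O(1) precisely because no other indices need adjusting.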
79,532,275
2,112,406
Why does torch import complain about numpy version
<p>I created a virtual environment on MacOS and installed pytorch with pip:</p> <pre><code>python -m env torch-env source torch-env/bin/activate pip install torch torchvision torchaudio </code></pre> <p>Launching python and importing torch fails, however:</p> <pre><code>Python 3.11.2 (main, Mar 5 2023, 23:08:47) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import torch A module that was compiled using NumPy 1.x cannot be run in NumPy 2.2.4 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11&gt;=2.12'. If you are a user of the module, the easiest solution will be to downgrade to 'numpy&lt;2' or try to upgrade the affected module. We expect that some modules will need time to support NumPy 2. Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/ia/torch-env/lib/python3.11/site-packages/torch/__init__.py&quot;, line 1477, in &lt;module&gt; from .functional import * # noqa: F403 File &quot;/Users/ia/torch-env/lib/python3.11/site-packages/torch/functional.py&quot;, line 9, in &lt;module&gt; import torch.nn.functional as F File &quot;/Users/ia/torch-env/lib/python3.11/site-packages/torch/nn/__init__.py&quot;, line 1, in &lt;module&gt; from .modules import * # noqa: F403 File &quot;/Users/ia/torch-env/lib/python3.11/site-packages/torch/nn/modules/__init__.py&quot;, line 35, in &lt;module&gt; from .transformer import TransformerEncoder, TransformerDecoder, \ File &quot;/Users/ia/torch-env/lib/python3.11/site-packages/torch/nn/modules/transformer.py&quot;, line 20, in &lt;module&gt; device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'), /Users/ia/torch-env/lib/python3.11/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to 
initialize NumPy: _ARRAY_API not found (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:84.) device: torch.device = torch.device(torch._C._get_default_device()), # torch.device('cpu'), </code></pre> <p>I don't understand why it compiled it with <code>1.x</code>. Upon checking the pytorch repo, the <code>requirements.txt</code> does not specify a numpy version (neither does <code>setup.py</code>). I can verify that I'm using the right <code>pip</code> and <code>python</code>:</p> <pre><code>(torch-env) โžœ ~ which pip /Users/ia/torch-env/bin/pip (torch-env) โžœ ~ which python /Users/ia/torch-env/bin/python </code></pre> <p>I understand I can circumvent this issue by downgrading my <code>numpy</code> but I want to understand what is happening here. Am I supposed to upgrade something that I'm not explicitly upgrading (<code>pip</code>? <code>pybind11</code>?)? Or is this a bug?</p> <p>Also note that:</p> <pre><code>(torch-env) โžœ ~ pip list Package Version ----------------- -------- filelock 3.18.0 fsspec 2025.3.0 Jinja2 3.1.6 MarkupSafe 3.0.2 mpmath 1.3.0 networkx 3.4.2 numpy 2.2.4 pillow 11.1.0 pip 22.3.1 setuptools 65.5.0 sympy 1.13.3 torch 2.2.2 torchaudio 2.2.2 torchvision 0.17.2 typing_extensions 4.12.2 </code></pre>
<python><python-3.x><numpy><pytorch>
2025-03-24 22:13:30
0
3,203
sodiumnitrate
79,532,148
3,625,865
Slack bolt python use metadata when using blocks in say() function
<p>I have this block of code which is responsible for selecting the namespace:</p> <pre class="lang-py prettyprint-override"><code>@app.command(&quot;/enable_worker&quot;) def enable_worker(ack, body, say): ack() blocks = [ { &quot;type&quot;: &quot;section&quot;, &quot;block_id&quot;: &quot;section678&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;:one: Select the environment you want to setup:&quot; }, &quot;accessory&quot;: { &quot;action_id&quot;: &quot;environment_selector&quot;, &quot;type&quot;: &quot;static_select&quot;, &quot;placeholder&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: &quot;Select an environment&quot; }, &quot;options&quot;: [] } } ] services = get_services() for namespace, service in services.items(): ns_data = { &quot;text&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: f&quot;{namespace}&quot; }, &quot;value&quot;: f&quot;{namespace}&quot; } blocks[0][&quot;accessory&quot;][&quot;options&quot;].append(ns_data) say(blocks=blocks) </code></pre> <p>And this handler for <code>environment_selector</code>:</p> <pre class="lang-py prettyprint-override"><code>@app.action(&quot;environment_selector&quot;) def environment_selector(ack, body, say): ack() workers_in_ns = get_services(body[&quot;actions&quot;][0][&quot;selected_option&quot;][&quot;value&quot;]) workers = workers_in_ns[body[&quot;actions&quot;][0][&quot;selected_option&quot;][&quot;value&quot;]] blocks = [ { &quot;type&quot;: &quot;section&quot;, &quot;block_id&quot;: &quot;section678&quot;, &quot;text&quot;: { &quot;type&quot;: &quot;mrkdwn&quot;, &quot;text&quot;: &quot;:two: Select the worker in this environment you want to update:&quot; }, &quot;accessory&quot;: { &quot;action_id&quot;: &quot;worker_selector&quot;, &quot;type&quot;: &quot;static_select&quot;, &quot;placeholder&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: &quot;Select a worker&quot; }, &quot;options&quot;: 
[] } } ] for worker in workers: w_data = { &quot;text&quot;: { &quot;type&quot;: &quot;plain_text&quot;, &quot;text&quot;: f&quot;{worker[0]}&quot; }, &quot;value&quot;: f&quot;{worker[0]}&quot; } blocks[0][&quot;accessory&quot;][&quot;options&quot;].append(w_data) say(blocks=blocks, metadata={&quot;namespace&quot;: body[&quot;actions&quot;][0][&quot;selected_option&quot;][&quot;value&quot;]}) </code></pre> <p>And I want to pass the <code>namespace</code> to the handler of <code>worker_selector</code>:</p> <pre class="lang-py prettyprint-override"><code>@app.action(&quot;worker_selector&quot;) def worker_selector(ack, body, say): ack() print(body) </code></pre> <p>But the output of <code>body</code> here has no element <code>metadata</code>, how can I pass this variable to the handler?</p>
<python><slack><slack-api><slack-bolt>
2025-03-24 21:04:01
0
2,743
Yashar
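As far as I can tell, `say(metadata=...)` attaches message metadata (which Slack expects as `event_type`/`event_payload`) to the posted message, but that metadata does not come back in the `block_actions` payload the `worker_selector` handler receives. A common workaround is to pack the extra state into each option's `value` string and split it back out in the handler. A minimal sketch (the helper names and separator are illustrative; Slack caps option value length, so keep the packed string short):

```python
SEP = "::"

def pack_option_value(namespace, worker):
    # Encode both fields in the option's "value" string.
    # Assumes neither field contains the separator.
    return f"{namespace}{SEP}{worker}"

def unpack_option_value(value):
    # Split back into (namespace, worker) inside the action handler.
    namespace, worker = value.split(SEP, 1)
    return namespace, worker
```

In `environment_selector` each option would use `pack_option_value(namespace, worker[0])` as its `value`, and `worker_selector` would call `unpack_option_value(body["actions"][0]["selected_option"]["value"])`.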
79,531,906
3,732,793
SELECT on a column named 'value' fails in Cosmos DB
<p>new to cosmoDB tried this and it worked fine</p> <pre><code>cosmos_db = cosmos_client.create_database_if_not_exists(&quot;Test_DB&quot;) container = cosmos_db.create_container_if_not_exists(&quot;test_data&quot;, PartitionKey(path='/id', kind='Hash')) container.create_item({&quot;id&quot;: &quot;1&quot;, &quot;value&quot;: &quot;foo&quot;}) container.upsert_item({&quot;id&quot;: &quot;2&quot;, &quot;value&quot;: &quot;bar&quot;}) container.upsert_item({&quot;id&quot;: &quot;3&quot;, &quot;value&quot;: &quot;HelloWorld&quot;}) item = container.read_item(&quot;1&quot;, partition_key=&quot;1&quot;) assert item[&quot;value&quot;] == &quot;foo&quot; queryText = &quot;SELECT * FROM test_data d where d.id = '1'&quot; results = container.query_items(query=queryText, enable_cross_partition_query=False, ) items = [item for item in results] </code></pre> <p>works for both the query and the read_item. When I use</p> <pre><code>queryText = &quot;SELECT * FROM test_data d where d.value = 'foo'&quot; </code></pre> <p>it fails wild and with</p> <pre><code>Code: BadRequest Message: Failed to parse token 'value' at position 36 </code></pre> <p>is there a way to avoid the big stack trace and even better to query for foo ?</p>
<python><azure-cosmosdb>
2025-03-24 18:47:24
1
1,990
user3732793
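The parse error above is consistent with `value` being a reserved word in the Cosmos DB SQL grammar (it appears in the `SELECT VALUE` syntax), so `d.value` fails while `d.id` works. Escaping the property with bracket notation, and binding the literal as a parameter, should avoid both the error and string interpolation. A sketch of the query shape, not run against a live account:

```python
# Escape the reserved word with bracket syntax and bind the literal
# as a query parameter instead of inlining it.
query_text = 'SELECT * FROM d WHERE d["value"] = @val'
parameters = [{"name": "@val", "value": "foo"}]

# Hypothetical call shape, mirroring the question's container object:
# results = container.query_items(query=query_text, parameters=parameters,
#                                 enable_cross_partition_query=True)
```

Note that this query filters on a non-partition-key property, so unlike the `d.id = '1'` query it may need cross-partition execution enabled.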
79,531,838
6,936,582
Create a graph using the edge attribute as node
<p>I have a directed graph where the edges have the attribute <code>edge_id</code>. I want to create a new graph using the <code>edge_id</code> as nodes.</p> <p>I think there should be some more straightforward method than this?</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt edges = [(&quot;A&quot;,&quot;D&quot;, {&quot;edge_id&quot;:1}), (&quot;B&quot;,&quot;D&quot;, {&quot;edge_id&quot;:2}), (&quot;D&quot;, &quot;G&quot;, {&quot;edge_id&quot;:3}), (&quot;C&quot;, &quot;F&quot;, {&quot;edge_id&quot;:4}), (&quot;E&quot;, &quot;F&quot;, {&quot;edge_id&quot;:5}), (&quot;F&quot;, &quot;G&quot;, {&quot;edge_id&quot;:6}), (&quot;G&quot;, &quot;I&quot;, {&quot;edge_id&quot;:7}), (&quot;H&quot;, &quot;I&quot;, {&quot;edge_id&quot;:8}), (&quot;I&quot;, &quot;J&quot;, {&quot;edge_id&quot;:9}), ] G = nx.DiGraph() G.add_edges_from(edges) fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10,5)) pos = nx.spring_layout(G) nx.draw(G, with_labels=True, pos=pos, ax=ax[0]) end_node = [x for x in G.nodes() if G.out_degree(x)==0 and G.in_degree(x)==1][0] start_nodes = [n for n, d in G.in_degree() if d == 0] H = nx.DiGraph() paths = [] #Iterate over each start node and find the path from it to the end node for start_node in start_nodes: my_list = [] path = nx.shortest_path(G, source=start_node, target=end_node) for n1, n2 in zip(path, path[1:]): my_list.append(G.edges[(n1, n2)][&quot;edge_id&quot;]) paths.append(my_list) #paths #[[1, 3, 7, 9], [2, 3, 7, 9], [4, 6, 7, 9], [5, 6, 7, 9], [8, 9]] for sublist in paths: for n1, n2 in zip(sublist, sublist[1:]): H.add_edge(n1, n2) nx.draw(H, with_labels=True, pos=nx.spring_layout(H), ax=ax[1]) </code></pre> <p><a href="https://i.sstatic.net/XGolh7cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XGolh7cg.png" alt="enter image description here" /></a></p>
<python><networkx>
2025-03-24 18:12:22
1
2,220
Bera
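What the question above builds is the directed line graph of G: edge `(u, v)` connects to edge `(v, w)`. networkx appears to provide this directly as `nx.line_graph(G)` (whose nodes are edge tuples that can then be relabeled by `edge_id`); the core construction can also be sketched in plain Python without shortest-path detours:

```python
def edge_id_line_graph(edges):
    """Build the "edges as nodes" graph: edge (u, v) connects to (v, w).

    edges is a list of (u, v, attrs) triples where attrs holds "edge_id";
    the result is a set of (edge_id, edge_id) pairs for the new graph.
    """
    outgoing = {}  # node -> edge_ids of edges leaving that node
    for u, v, attrs in edges:
        outgoing.setdefault(u, []).append(attrs["edge_id"])

    line_edges = set()
    for u, v, attrs in edges:
        # The edge ending at v chains to every edge starting at v.
        for next_id in outgoing.get(v, []):
            line_edges.add((attrs["edge_id"], next_id))
    return line_edges
```

On the question's edge list this yields exactly the pairs the path-based code produced, and unlike `nx.shortest_path` it also handles graphs where some `edge_id` lies on no start-to-end path.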
79,531,810
3,575,623
'numpy.float64' object has no attribute 'numerator' for statistics.stdev()
<p>I have this function to calculate Cohen's D for some csv files</p> <pre><code>import statistics import numpy as np def cohen_d(a,b): float_a = [] float_b = [] for x in a: try: float_a.append(float(x)) except ValueError: return &quot;NAN&quot; for y in b: try: float_b.append(float(y)) except ValueError: return &quot;NAN&quot; try: res=(statistics.fmean(float_a) - statistics.fmean(float_b)) / statistics.stdev(float_a+float_b) except ZeroDivisionError: res=0 return res def cohen_d_np(a,b): float_a = [] float_b = [] for x in a: try: float_a.append(float(x)) except ValueError: return &quot;NAN&quot; for y in b: try: float_b.append(float(y)) except ValueError: return &quot;NAN&quot; float_a = np.array(float_a) float_b = np.array(float_b) try: res=(statistics.fmean(float_a) - statistics.fmean(float_b)) / statistics.stdev(np.concatenate([float_a,float_b])) except ZeroDivisionError: res=0 return res </code></pre> <p>When I try to run it on my data, I systematically get this error for the <code>np</code> version, or just <code>'float' object has no attribute 'numerator'</code> for the list based version:</p> <pre><code>RuleException: AttributeError in file /work/user/cburnard/PIPELINES/ChIPseq/Snakefile, line 111: 'numpy.float64' object has no attribute 'numerator' File &quot;/work/user/cburnard/PIPELINES/ChIPseq/Snakefile&quot;, line 903, in __rule_calculate_multimapbw_indiv_scores File &quot;/work/user/cburnard/PIPELINES/ChIPseq/Snakefile&quot;, line 162, in effect_size_mmb File &quot;/work/user/cburnard/PIPELINES/ChIPseq/Snakefile&quot;, line 111, in cohen_d_np File &quot;/tools/devel/python/Python-3.11.1/lib/python3.11/statistics.py&quot;, line 922, in stdev File &quot;/tools/devel/python/Python-3.11.1/lib/python3.11/concurrent/futures/thread.py&quot;, line 58, in run </code></pre> <p>Where is this error coming from? 
I don't know exactly on what line of data it is occurring, but in theory even if it's non-numerical due to some bug, I should still get the <code>return &quot;NAN&quot;</code> instead of this error.</p>
<python><numpy>
2025-03-24 17:58:33
1
507
Whitehot
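A likely culprit for the error above: `float("nan")` and `float("inf")` convert without raising, so the `except ValueError` guard never fires, and a NaN inside `statistics.stdev` derails its exact-ratio summation so the intermediate sum ends up a plain float (or `numpy.float64`) that lacks the `.numerator` attribute the code then accesses. That is my reading of CPython 3.11's `statistics` internals, so treat it as a diagnosis to verify against the actual data. Filtering non-finite values first sidesteps it either way:

```python
import math
import statistics

def cohen_d(a, b):
    """Cohen's d with the question's "NAN" sentinel for bad input."""
    def to_finite_floats(values):
        out = []
        for x in values:
            try:
                f = float(x)
            except (TypeError, ValueError):
                return None
            if not math.isfinite(f):  # catches nan/inf, which float() accepts
                return None
            out.append(f)
        return out

    fa, fb = to_finite_floats(a), to_finite_floats(b)
    if fa is None or fb is None:
        return "NAN"
    try:
        return (statistics.fmean(fa) - statistics.fmean(fb)) / statistics.stdev(fa + fb)
    except ZeroDivisionError:
        return 0
```

Passing NumPy arrays through `to_finite_floats` also converts each element to a plain `float`, which removes `numpy.float64` from the picture entirely.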
79,531,796
2,266,881
Polars Dataframe from nested dictionaries as columns
<p>I have a dictionary of nested columns with the index as key in each one. When i try to convert it to a polars dataframe, it fetches the column names and the values right, but each column has just one element that's the dictionary of the column elements, without &quot;expanding&quot; it into a series.</p> <p>An example, let's say i have:</p> <pre><code>d = {'col1': {'0':'A','1':'B','2':'C'}, 'col2': {'0':1,'1':2,'2':3}} </code></pre> <p>Then, when i do a <code>pl.DataFrame(d)</code> or <code>pl.from_dict(d)</code>, i'm getting:</p> <pre><code>col1 col2 --- --- struct[3] struct[3] {&quot;A&quot;,&quot;B&quot;,&quot;C&quot;} {1,2,3} </code></pre> <p>Instead of the regular dataframe.</p> <p>Any idea how to fix this?</p> <p>Thanks in advance!</p>
<python><dataframe><python-polars><polars>
2025-03-24 17:51:43
1
1,594
Ghost
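Polars sees each inner dict as a single struct value, hence the one-row struct columns. Reshaping the inner `{index: value}` dicts into plain lists before constructing the DataFrame fixes it; the reshape itself is plain Python, shown here without polars (the variable names are illustrative):

```python
d = {"col1": {"0": "A", "1": "B", "2": "C"},
     "col2": {"0": 1, "1": 2, "2": 3}}

# Flatten each inner {index: value} mapping into a list of cell values,
# ordering by the numeric index so "10" does not sort before "2".
columns = {name: [cells[k] for k in sorted(cells, key=int)]
           for name, cells in d.items()}

# pl.DataFrame(columns) / pl.from_dict(columns) would now build
# one row per index instead of a single struct row.
```

This dict-of-inner-dicts layout is pandas' `to_dict()` default ("dict" orient), so if the data originally came from pandas, exporting with `df.to_dict("list")` instead would avoid the reshape altogether.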
79,531,667
2,526,586
Best practice for sharing resource in a Python module?
<p>Consider a Python3.1X project like this:</p> <pre><code>โ”œโ”€ my_module_1 โ”‚ โ”œโ”€ a.py โ”‚ โ””โ”€ b.py โ”œโ”€ my_module_2 โ”‚ โ”œโ”€ c.py โ”‚ โ””โ”€ d.py โ””โ”€ main.py </code></pre> <p><code>main.py</code> would import <code>my_module_1</code> and <code>my_module_2</code>. <code>main.py</code> would provision/initialise resources such as database connection, logger, etc. These resources will also be shared and used by <code>my_module_1</code> and <code>my_module_2</code>. However <code>my_module_1</code> and <code>my_module_2</code> are reusable modules that may be imported by other scripts/modules too, so by design, <code>my_module_1</code> and <code>my_module_2</code> do not have knowledge of <code>main.py</code> as they don't know their importer of themselves.</p> <hr /> <p>For example, <code>main.py</code> may have something like this:</p> <pre><code>from my_module_1 import a import logging logger = logging.getLogger(__name__) logging.basicConfig(filename='example.log', encoding='utf-8', level=logging.DEBUG) logger.info('Hello world') </code></pre> <p>and in <code>my_module_1</code>/<code>a.py</code>,</p> <pre><code>import logging logger.info('Logging from ') </code></pre> <p>Note that the logger here in <code>a.py</code> is different from the one in <code>main.py</code>.</p> <hr /> <p>To resolve this, I suppose I can do something like this in <code>a.py</code>:</p> <pre><code>import logging _logger = None def set_logger(l): _logger = l if _logger: _logger.info('Logging from ') </code></pre> <p>Then <code>main.py</code> will have to initialise the logger for <code>a.py</code>:</p> <pre><code>from my_module_1 import a import logging logger = logging.getLogger(__name__) ... a.set_logger(logger) </code></pre> <p>I admit this is very clumsy. 
I am wondering if there are any more Pythonic ways to deal with this kind of resource-sharing across imported modules, not just for logging-specific, but also other things like reference data caching, database connections, config management, etc. Is there a best practice too?</p> <p>P.S: Should I use things like <code>dependency_injector</code>? I don't want to over-complicate things for the importers of the modules and I don't want to enforce any external dependencies for the importers if not necessary.</p>
<python><python-3.x><logging><dependency-injection>
2025-03-24 16:53:29
0
1,342
user2526586
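For the logging case at least, the stdlib already solves this through the logger hierarchy: each module calls `logging.getLogger(__name__)` and never configures anything, and records propagate up to handlers that only `main.py` installs on the root logger, so no `set_logger` plumbing is needed. A compressed sketch of both files in one (the `StringIO` stream stands in for the question's `example.log` file handler):

```python
import io
import logging

# --- my_module_1/a.py: module-level logger, no configuration here ---
logger = logging.getLogger("my_module_1.a")  # normally logging.getLogger(__name__)

def do_work():
    logger.info("Logging from my_module_1.a")

# --- main.py: configure the ROOT logger once; child loggers propagate to it ---
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.DEBUG, force=True,
                    format="%(name)s %(levelname)s %(message)s")

do_work()
```

The same pattern generalizes to the other resources mentioned: expose a module-level accessor (e.g. a cached `get_connection()`), or pass the resource explicitly as a function/constructor argument, which is the lightweight form of dependency injection and needs no extra library.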
79,531,664
16,383,578
What is the fastest way to generate alternating boolean sequences in NumPy?
<p>I want to create a 1D array of length n, every element in the array can be either 0 or 1. Now I want the array to contain alternating runs of 0s and 1s, every full run has the same length as every other run. Every run of 1s is followed by a run of 0s of the same length and vice versa, the gaps are periodic.</p> <p>For example, if we start with five 0s, the next run is guaranteed to be five 1s, and then five 0s, and so on.</p> <p>To better illustrate what I mean:</p> <pre><code>In [65]: (np.arange(100) // 5) &amp; 1 Out[65]: array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]) </code></pre> <p>The runs can also be shifted, meaning we don't have a full run at the start:</p> <pre><code>In [66]: ((np.arange(100) - 3) // 7) &amp; 1 Out[66]: array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]) </code></pre> <p>Now as you can see I have found a way to do it, in fact I have found three ways, but all are flawed. The above works with shifted runs but is slow, one other is faster but it doesn't allow shifts.</p> <pre><code>In [82]: %timeit (np.arange(524288) // 16) &amp; 1 6.45 ms ยฑ 2.67 ms per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) In [83]: range1 = np.arange(524288) In [84]: %timeit (range1 // 16) &amp; 1 3.14 ms ยฑ 201 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) In [85]: %timeit np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384) 81.6 ฮผs ยฑ 843 ns per loop (mean ยฑ std. dev. 
of 7 runs, 10,000 loops each) In [86]: %timeit np.repeat([0, 1], 262144).reshape(32, 16384).T.flatten() 5.42 ms ยฑ 74.2 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 100 loops each) In [87]: np.array_equal((range1 // 16) &amp; 1, np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384)) Out[87]: True In [88]: np.array_equal(np.repeat([0, 1], 262144).reshape(32, 16384).T.flatten(), np.tile(np.concatenate([np.zeros(16, dtype=np.uint8), np.ones(16, dtype=np.uint8)]), 16384)) Out[88]: True </code></pre> <p>Is there a way faster than <code>np.tile</code> based solution that also allows shifts?</p> <hr /> <p>I have made the code blocks output the same result for fair comparison, and for completeness I have added another inefficient method.</p> <hr /> <p>Another method:</p> <pre><code>In [89]: arr = np.zeros(524288, dtype=np.uint8) In [90]: %timeit arr = np.zeros(524288, dtype=np.uint8) 19.9 ฮผs ยฑ 156 ns per loop (mean ยฑ std. dev. of 7 runs, 10,000 loops each) In [91]: arr[262144:] = 1 In [92]: %timeit arr[262144:] = 1 9.91 ฮผs ยฑ 52 ns per loop (mean ยฑ std. dev. of 7 runs, 100,000 loops each) In [93]: %timeit arr.reshape(32, 16384).T.flatten() 932 ฮผs ยฑ 11.7 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 1,000 loops each) In [94]: %timeit arr.reshape(32, 16384).T 406 ns ยฑ 1.81 ns per loop (mean ยฑ std. dev. of 7 runs, 1,000,000 loops each) In [95]: %timeit list(arr.reshape(32, 16384).T.flat) 24.7 ms ยฑ 242 ฮผs per loop (mean ยฑ std. dev. of 7 runs, 10 loops each) </code></pre> <p>As you can see, <code>np.repeat</code> is extremely slow, creating the array and assigning 1 to half the values is very fast, and <code>arr.reshape.T</code> is extremely fast, but <code>arr.flatten()</code> is very slow.</p>
<python><arrays><numpy>
2025-03-24 16:52:22
3
3,930
ฮžฮญฮฝฮท ฮ“ฮฎฮนฮฝฮฟฯ‚
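One way to keep the speed of the `np.tile` approach while supporting shifts: build one full period, repeat it, then slice starting at the offset the shift induces. The sketch below is the pure-Python form of that idea (so it can be checked against the floor-division reference); a NumPy translation would replace the list arithmetic with `np.tile(period, reps)[offset:offset + n]`:

```python
def alternating(n, run, shift=0):
    """Length-n 0/1 sequence equal to ((arange(n) - shift) // run) & 1.

    Builds one full period of the pattern, repeats it enough times,
    then slices at the offset where index 0 falls inside the period.
    """
    period = [0] * run + [1] * run
    offset = (-shift) % (2 * run)          # position of index 0 in the period
    reps = (n + offset) // (2 * run) + 1   # enough copies to cover the slice
    return (period * reps)[offset:offset + n]
```

Since element `i` equals `period[(i - shift) mod 2*run]`, this reproduces the shifted variant exactly while doing only one small allocation plus a repeat and a slice.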
79,531,519
6,362,595
Installing `stringcase` under python `3.13.2` fails while it works on `python 3.12.9`
<p>I am trying to install the (seemingly abandoned) <code>stringcase</code> <a href="https://github.com/okunishinishi/python-stringcase/tree/master" rel="nofollow noreferrer">python package</a> with a <code>python3.13.2</code> build on ubuntu in a <code>venv</code> like so::</p> <pre><code> python3.13 -m venv venv source venv/bin/activate pip install -U pip pip install stringcase </code></pre> <p>However, I get the following error::</p> <pre><code>Collecting stringcase Using cached stringcase-1.2.0.tar.gz (3.0 kB) Installing build dependencies ... done Getting requirements to build wheel ... error error: subprocess-exited-with-error ร— Getting requirements to build wheel did not run successfully. โ”‚ exit code: 1 โ•ฐโ”€&gt; [35 lines of output] Traceback (most recent call last): File &quot;/home/fgoudreault/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 389, in &lt;module&gt; main() ~~~~^^ File &quot;/home/fgoudreault/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 373, in main json_out[&quot;return_val&quot;] = hook(**hook_input[&quot;kwargs&quot;]) ~~~~^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/home/fgoudreault/venv/lib/python3.13/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 143, in get_requires_for_build_wheel return hook(config_settings) File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/build_meta.py&quot;, line 334, in get_requires_for_build_wheel return self._get_build_requires(config_settings, requirements=[]) ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/build_meta.py&quot;, line 304, in _get_build_requires self.run_setup() ~~~~~~~~~~~~~~^^ File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/build_meta.py&quot;, line 522, in run_setup 
super().run_setup(setup_script=setup_script) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/build_meta.py&quot;, line 320, in run_setup exec(code, locals()) ~~~~^^^^^^^^^^^^^^^^ File &quot;&lt;string&gt;&quot;, line 3, in &lt;module&gt; File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/_distutils/core.py&quot;, line 160, in setup dist.parse_config_files() ~~~~~~~~~~~~~~~~~~~~~~~^^ File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/dist.py&quot;, line 730, in parse_config_files self._parse_config_files(filenames=inifiles) ~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^ File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/dist.py&quot;, line 599, in _parse_config_files opt = self._enforce_underscore(opt, section) File &quot;/tmp/pip-build-env-9bqsrgl1/overlay/lib/python3.13/site-packages/setuptools/dist.py&quot;, line 629, in _enforce_underscore raise InvalidConfigError( ...&lt;3 lines&gt;... ) setuptools.errors.InvalidConfigError: Invalid dash-separated key 'description-file' in 'metadata' (setup.cfg), please use the underscore name 'description_file' instead. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error ร— Getting requirements to build wheel did not run successfully. โ”‚ exit code: 1 โ•ฐโ”€&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. </code></pre> <p>Meanwhile, if I use a <code>python 3.12.3</code> build, the install proceeds successfully. Therefore, I am tempted to conclude that the error comes from an installing library. I tried downgrading <code>setuptools</code> but to no avail. 
The error message seems strange since the <a href="https://github.com/okunishinishi/python-stringcase/blob/master/setup.cfg" rel="nofollow noreferrer"><code>setup.cfg</code> file on the package GitHub repo</a> does not contain the problematic line mentioned in the error.</p> <p>My question is: what is the source of the problem, and why is this happening?</p>
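<p>For reference, this is the kind of <code>[metadata]</code> entry the error complains about. The content below is an assumption reconstructed from the error message (not taken from the repo); it parses fine with plain <code>configparser</code>, so the rejection evidently comes from setuptools itself:</p>

```python
import configparser

# Assumed setup.cfg content, reconstructed from the error message: the
# dash-separated key that recent setuptools versions refuse to accept.
cfg_text = """\
[metadata]
description-file = README.md
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)
print(list(parser["metadata"]))
```

<p>So the question is really why the sdist that pip builds would contain such a key when the repo's <code>setup.cfg</code> seemingly doesn't.</p>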
<python><pip>
2025-03-24 15:43:54
1
921
fgoudra
79,531,474
842,693
In Altair, can I assign fixed colour indices to data values?
<p>I have data in pandas dataframes, where I make plots of several subsets of the data, using Altair. In all the plots, colours are taken from some categorical column. To make the charts easy to read, I want each category to have the same colour in all the charts, even if the category selection differs between charts. I also want to use the same colour scheme as I use otherwise, let's say the default one.</p> <p>Simple MWE:</p> <pre class="lang-py prettyprint-override"><code>import streamlit as st import altair as alt from vega_datasets import data iris = data.iris() ch1 = alt.Chart(iris).mark_point().encode( x='petalWidth', y='petalLength', color='species' ) </code></pre> <p><a href="https://i.sstatic.net/LR1pxcFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LR1pxcFd.png" alt="figure for ch1" /></a>.</p> <p>Now, let's say that I for some reason filter the data so that 'versicolor' disappears:</p> <pre class="lang-py prettyprint-override"><code>ch2 = alt.Chart(iris[iris.species != 'versicolor']).mark_point().encode( x='petalWidth', y='petalLength', color='species' ) </code></pre> <p><a href="https://i.sstatic.net/026cbfCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/026cbfCY.png" alt="figure for ch2" /></a>.</p> <p>There, 'virginica' changed colour, which is what I want to avoid.</p> <p>My current solution is:</p> <pre class="lang-py prettyprint-override"><code>scheme_col = ['#4e79a7', '#f28e2b', '#e15759'] sel_idx = [0, 2] sel_col = [scheme_col[i] for i in sel_idx] ch3 = alt.Chart(iris[iris.species != 'versicolor']).mark_point().encode( x='petalWidth', y='petalLength', color=alt.Color('species').scale(range=sel_col) ) </code></pre> <p><a href="https://i.sstatic.net/jYOEPcFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jYOEPcFd.png" alt="figure of ch3" /></a></p> <p>I get the result I want, but this approach involves two manual steps:</p> <ol> <li>I need <code>scheme_col</code>, i.e. 
the colours of the current scheme. I did not find a way to get this programmatically.</li> <li>I need to know which categories are present in each plot - though I guess this should not be that difficult to get, by comparing the dataframes.</li> </ol> <p>Is there a more elegant way of achieving this?</p>
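<p>For step 2, this is the kind of helper I have in mind (plain-Python sketch; the hard-coded category list and colours are my own assumptions, not values obtained from Altair):</p>

```python
# Fixed global ordering of categories and the corresponding scheme colours
# (hard-coded here; getting them from Altair programmatically is the open part).
all_species = ["setosa", "versicolor", "virginica"]
scheme_col = ["#4e79a7", "#f28e2b", "#e15759"]

# Categories actually present in the filtered data
subset = [s for s in all_species if s != "versicolor"]

# Pick each remaining category's colour from its fixed index
sel_col = [scheme_col[all_species.index(s)] for s in subset]
print(sel_col)
```

<p>It works, but the colour list itself is still copied by hand.</p>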
<python><altair>
2025-03-24 15:28:49
1
1,623
Michal Kaut
79,531,444
8,746,228
Find the value that is different when a row is marked "left_only" or "right_only" when comparing Pandas DataFrames
<p>There are two dataframes: <code>current</code> and <code>base</code>. <code>current</code> is an incremental update to <code>base</code>.</p> <p>Whenever there is an update made to <code>current</code>, we want to run checks to see if it adheres to rules. If it does, <code>current</code> becomes <code>base</code>.</p> <p>Updates to column <code>Issr</code> is not expected except empty value. So, there are two checks:</p> <ul> <li>New entry must not add any value to <code>Issr</code> column (<code>''</code> is accepted)</li> <li>Existing row's value for column <code>Issr</code> shouldn't be updated (if updated to empty, it is fine). Update to any other column is ok.</li> </ul> <p><code>base</code></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>SNo</th> <th>Rank</th> <th>Ctry</th> <th>Cat</th> <th>Issr</th> <th>Ref</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> <td>A</td> <td>Book</td> <td>Y</td> <td>100</td> </tr> <tr> <td>2</td> <td>14</td> <td>B</td> <td>Laptop</td> <td>C</td> <td>101</td> </tr> <tr> <td>3</td> <td>15</td> <td>C</td> <td>Pen</td> <td>J</td> <td>102</td> </tr> <tr> <td>4</td> <td>50</td> <td>D</td> <td>Pen</td> <td></td> <td>103</td> </tr> </tbody> </table></div> <p><code>current</code></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>SNo</th> <th>Rank</th> <th>Ctry</th> <th>Cat</th> <th>Issr</th> <th>Ref</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> <td>A</td> <td>Book1</td> <td>Y</td> <td>100</td> </tr> <tr> <td>2 (updated)</td> <td>14</td> <td>B</td> <td>Laptop</td> <td></td> <td>101</td> </tr> <tr> <td>3</td> <td>15</td> <td>C</td> <td>Pen</td> <td>J</td> <td>102</td> </tr> <tr> <td>4 (updated)</td> <td>50</td> <td>D</td> <td>Pen</td> <td>U</td> <td>103</td> </tr> <tr> <td>5 (new entry)</td> <td>24</td> <td>K</td> <td>Pencil</td> <td>W</td> <td>101</td> </tr> <tr> <td>6 (new entry)</td> <td>24</td> <td>RT</td> <td>Pencil</td> <td></td> <td>201</td> </tr> </tbody> 
</table></div> <p>Above <code>current</code> fails checks because of multiple issues:</p> <ul> <li>Row# 4 in <code>base</code>: Existing row's column <code>Issr</code> got updated to a non-empty values (while existing Row# 2 update is fine as it made the value empty)</li> <li>Row# 5 is a new entry in <code>current</code> with a non-empty value. This is violation. Row#6 is also new entry but ok since the value for <code>Issr</code> is empty.</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd base = { 'Rank': [10,14,15,50], 'Ctry': ['A', 'B', 'C', 'D'], 'Cat': ['Book', 'Laptop', 'Pen', 'Pen'], 'Issr': ['Y', 'C', 'J', ''], 'Ref': ['100', '101', '102', '103'] } current = { 'Rank': [10,14,15,50, 24, 24], 'Ctry': ['A', 'B', 'C', 'D', 'K', 'RT'], 'Cat': ['Book', 'Laptop', 'Pen', 'Pen', 'Pencil', 'Pencil'], 'Issr': ['Y', '', 'J', 'U', 'W', ''], 'Ref': ['100', '101', '102', '103', '101', '201'] } base_df = pd.DataFrame(base) current_df = pd.DataFrame(current) merged_df = pd.merge(current_df, base_df, how=&quot;outer&quot;, indicator=True) print(merged_df) Rank Ctry Cat Issr Ref _merge 10 A Book Y 100 both 14 B Laptop 101 left_only 15 C Pen J 102 both 50 D Pen U 103 left_only &lt;--- how to know this got marked `left_only` as value of `issr` col is different 24 K Pencil W 101 left_only &lt;--- Invalid - new entry has no-empty value for `Issr` col 24 RT Pencil 201 left_only 14 B Laptop C 101 right_only 50 D Pen 103 right_only </code></pre> <p>I can get <code>left_only</code> (indicating update/new rows in <code>current</code> df) but how to know because of which column/columns the row got marked as <code>left_only</code>?</p> <p>If I get to know that pandas marked <code>left_only</code> as it saw a diff in <code>Issr</code> column, I can just check its value (empty or not), and pass/fail the job.</p> <p>How to get this column info?</p>
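<p>To clarify what I'm after: conceptually, for each <code>left_only</code> row I want to know the column(s) whose values differ from the matching <code>base</code> row. If <code>Ref</code> were a stable key (an assumption that may not hold in general), it would look like:</p>

```python
import pandas as pd

# Tiny slice of the data, assuming Ref identifies a row
base_df = pd.DataFrame({"Ref": ["100", "103"], "Issr": ["Y", ""]})
current_df = pd.DataFrame({"Ref": ["100", "103"], "Issr": ["Y", "U"]})

# Align the two frames on the key and compare the Issr column directly
m = current_df.merge(base_df, on="Ref", suffixes=("_cur", "_base"))
m["issr_changed"] = m["Issr_cur"] != m["Issr_base"]
print(m)
```

<p>Is there a way to get this per-column diff information out of the <code>indicator</code>-based merge itself?</p>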
<python><pandas><dataframe>
2025-03-24 15:15:42
1
1,513
adarsh
79,531,412
305,883
AWS - put data on S3 results in TimeoutError
<p>I am creating a dataset on AWS S3, for their Open Data program.</p> <p>I am fetching audio files, which are already stored on S3. I then segment them into smaller audio chunks and put those on S3 again. The problem occurs when I need to PUT (if I comment that call out, there is no error).</p> <p>The S3 bucket is in a US region.</p> <p>To test whether it was a connectivity error on my end I tried:</p> <ul> <li>SageMaker free lab, using a US region: it hung without error, but after 4 hours there was no progress</li> <li>Google Colab, in a US region: same error, but they temporarily restricted the resources due to the data volume, and I cannot try again</li> <li>local environment, in an EU region: it returns TimeoutError, no progress</li> </ul> <p>Can you please help in avoiding the error and possibly speeding up the operations? I must only use basic S3 services, not Lambda or other AWS services, to stay within the allowed budget.</p> <p>Below is what I tried:</p> <pre><code># boto3 client for uploading (signed requests)
s3_client = boto3.client(
    's3',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY,
    region_name=REGION_NAME
)
</code></pre> <pre><code>%%time
import functools

cache_audio = {}

for o, (lb, up) in enumerate(batches[6:]):
    for ix, row in annotated_segments.loc[lb:up].iterrows():
        # clear cache to keep memory safe
        if len(cache_audio) &gt; 5:
            cache_audio.clear()

        # audio props
        file_name = row['File name']
        # path props
        file_folder = row['File folder']
        # segment props
        segment_name = row['segment_name']
        start = row['voice_start']
        end = row['voice_end']

        # read from cache
        if file_name not in cache_audio:
            audio, rate = fetch_audio(row)
            cache_audio[file_name] = audio
        else:
            audio = cache_audio[file_name]

        # store segment on S3
        audio_segment = audio[start:end]

        try:
            s3_path = f&quot;data/annotated_segments/{file_folder}/{file_name}/{segment_name}&quot;
            # initialise the binary file
            file_obj = io.BytesIO()
            # write the audio segment
            # https://python-soundfile.readthedocs.io/en/latest/index.html#soundfile.write
            soundfile.write(file_obj, audio_segment, samplerate=rate, format='WAV')  # norm=False for raw data
            # Reset the file pointer to the beginning
            file_obj.seek(0)
            # put annotated segments in S3
            put_audio_to_s3(file_obj, s3_path)
        except Exception as e:
            print(f&quot;Error uploading file: {e}. File name: {file_name}. Batch: {lb} - {up}&quot;)

    print(f'Success! Completed {o}-th batch: {lb} - {up}')
</code></pre> <p>Error raised after a while:</p> <pre><code>---------------------------------------------------------------------------TimeoutError Traceback (most recent call last)File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:754, in HTTPResponse._error_catcher(self) 753 try:--&gt; 754 yield 756 except SocketTimeout as e: 757 # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but 758 # there is yet no clean way to get at it from this context.File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:879, in HTTPResponse._raw_read(self, amt, read1) 878 with self._error_catcher():--&gt; 879 data = self._fp_read(amt, read1=read1) if not fp_closed else b&quot;&quot; 880 if amt is not None and amt != 0 and not data: 881 # Platform-specific: Buggy versions of Python. 882 # Close the connection when no data is returned (...) 887 # not properly close the connection in all cases.
There is 888 # no harm in redundantly calling close.File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:862, in HTTPResponse._fp_read(self, amt, read1) 860 else: 861 # StringIO doesn't like amt=None--&gt; 862 return self._fp.read(amt) if amt is not None else self._fp.read()File ~/miniconda3/envs/fruitbats/lib/python3.10/http/client.py:482, in HTTPResponse.read(self, amt) 481 try:--&gt; 482 s = self._safe_read(self.length) 483 except IncompleteRead:File ~/miniconda3/envs/fruitbats/lib/python3.10/http/client.py:631, in HTTPResponse._safe_read(self, amt) 625 &quot;&quot;&quot;Read the number of bytes requested. 626 627 This function should be used when &lt;amt&gt; bytes &quot;should&quot; be present for 628 reading. If the bytes are truly not available (due to EOF), then the 629 IncompleteRead exception can be used to detect the problem. 630 &quot;&quot;&quot;--&gt; 631 data = self.fp.read(amt) 632 if len(data) &lt; amt:File ~/miniconda3/envs/fruitbats/lib/python3.10/socket.py:717, in SocketIO.readinto(self, b) 716 try:--&gt; 717 return self._sock.recv_into(b) 718 except timeout:File ~/miniconda3/envs/fruitbats/lib/python3.10/ssl.py:1307, in SSLSocket.recv_into(self, buffer, nbytes, flags) 1304 raise ValueError( 1305 &quot;non-zero flags not allowed in calls to recv_into() on %s&quot; % 1306 self.__class__)-&gt; 1307 return self.read(nbytes, buffer) 1308 else:File ~/miniconda3/envs/fruitbats/lib/python3.10/ssl.py:1163, in SSLSocket.read(self, len, buffer) 1162 if buffer is not None:-&gt; 1163 return self._sslobj.read(len, buffer) 1164 else:TimeoutError: The read operation timed outThe above exception was the direct cause of the following exception:ReadTimeoutError Traceback (most recent call last)File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/botocore/response.py:99, in StreamingBody.read(self, amt) 98 try:---&gt; 99 chunk = self._raw_stream.read(amt) 100 except URLLib3ReadTimeoutError as e: 101 # TODO: the url will be None 
as urllib3 isn't setting it yetFile ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:955, in HTTPResponse.read(self, amt, decode_content, cache_content) 953 return self._decoded_buffer.get(amt)--&gt; 955 data = self._raw_read(amt) 957 flush_decoder = amt is None or (amt != 0 and not data)File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:878, in HTTPResponse._raw_read(self, amt, read1) 876 fp_closed = getattr(self._fp, &quot;closed&quot;, False)--&gt; 878 with self._error_catcher(): 879 data = self._fp_read(amt, read1=read1) if not fp_closed else b&quot;&quot;File ~/miniconda3/envs/fruitbats/lib/python3.10/contextlib.py:153, in _GeneratorContextManager.__exit__(self, typ, value, traceback) 152 try:--&gt; 153 self.gen.throw(typ, value, traceback) 154 except StopIteration as exc: 155 # Suppress StopIteration *unless* it's the same exception that 156 # was passed to throw(). This prevents a StopIteration 157 # raised inside the &quot;with&quot; statement from being suppressed.File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/urllib3/response.py:759, in HTTPResponse._error_catcher(self) 756 except SocketTimeout as e: 757 # FIXME: Ideally we'd like to include the url in the ReadTimeoutError but 758 # there is yet no clean way to get at it from this context.--&gt; 759 raise ReadTimeoutError(self._pool, None, &quot;Read timed out.&quot;) from e # type: ignore[arg-type] 761 except BaseSSLError as e: 762 # FIXME: Is there a better way to differentiate between SSLErrors?ReadTimeoutError: AWSHTTPSConnectionPool(host='fruitbat-vocalizations.s3.us-west-2.amazonaws.com', port=443): Read timed out.During handling of the above exception, another exception occurred:ReadTimeoutError Traceback (most recent call last)File &lt;timed exec&gt;:25Cell In[25], line 15, in fetch_audio(row, sr) 12 s3_object_key = str(s3_path.relative_to(DSLOC)) 14 response = s3_client.get_object(Bucket=BUCKET_NAME, 
Key=s3_object_key)---&gt; 15 file_content = response['Body'].read() 17 # https://stackoverflow.com/questions/73350508/read-audio-file-from-s3-directly-in-python 18 # this will read in float64 by default and multichannel if any 19 data, rate = soundfile.read(io.BufferedReader(io.BytesIO(file_content)), always_2d=True)File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/botocore/httpchecksum.py:240, in StreamingChecksumBody.read(self, amt) 239 def read(self, amt=None):--&gt; 240 chunk = super().read(amt=amt) 241 self._checksum.update(chunk) 242 if amt is None or (not chunk and amt &gt; 0):File ~/miniconda3/envs/fruitbats/lib/python3.10/site-packages/botocore/response.py:102, in StreamingBody.read(self, amt) 99 chunk = self._raw_stream.read(amt) 100 except URLLib3ReadTimeoutError as e: 101 # TODO: the url will be None as urllib3 isn't setting it yet--&gt; 102 raise ReadTimeoutError(endpoint_url=e.url, error=e) 103 except URLLib3ProtocolError as e: 104 raise ResponseStreamingError(error=e)ReadTimeoutError: Read timeout on endpoint URL: &quot;None&quot; </code></pre>
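<p>One mitigation I'm considering (the parameter names below are from botocore's <code>Config</code>; I haven't verified that this fixes the stalls) is raising the client's read timeout and retry budget:</p>

```python
import boto3
from botocore.config import Config

# Assumption: a longer read timeout plus automatic retries may ride out the
# stalls instead of surfacing ReadTimeoutError.
cfg = Config(
    connect_timeout=30,
    read_timeout=120,
    retries={"max_attempts": 5, "mode": "adaptive"},
)

# AWS_ACCESS_KEY / AWS_SECRET_KEY / REGION_NAME as defined earlier in the notebook
s3_client = boto3.client(
    's3',
    aws_access_key_id=AWS_ACCESS_KEY,
    aws_secret_access_key=AWS_SECRET_KEY,
    region_name=REGION_NAME,
    config=cfg,
)
```

<p>Would this be the right direction, or is the bottleneck elsewhere?</p>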
<python><amazon-web-services><amazon-s3><timeout><put>
2025-03-24 15:01:36
1
1,739
user305883
79,531,319
16,383,578
How to use Numba Cuda without Conda?
<p>I don't use Conda. I have downloaded and installed cuda_12.8.1_572.61_windows.exe from the <a href="https://developer.nvidia.com/cuda-downloads?target_os=Windows&amp;target_arch=x86_64&amp;target_version=11&amp;target_type=exe_local" rel="nofollow noreferrer">official link</a>. I have installed numba 0.61.0, numba-cuda 0.8.0, llvmlite 0.44.0, numpy 2.1.3, cuda-python 12.8.0, cuda-bindings 12.8.0 and pwin32 308.</p> <p>I am trying to generate images programmatically, one bottleneck I have identified is that arctan2 and other trigonometric functions are extremely slow. Fortunately I have a GPU, so I am trying to use the GPU to generate the images by using Numba CUDA. The GPU in question is NVIDIA Geforce GTX 1050 Ti with 4GiB RAM (I am planning to upgrade it).</p> <p>I am following <a href="https://nvidia.github.io/numba-cuda/user/kernels.html" rel="nofollow noreferrer">this guide</a>, but I can't make the code run, I get this error:</p> <pre><code>NvvmSupportError: libNVVM cannot be found. Do `conda install cudatoolkit`: Could not find module 'nvvm.dll' (or one of its dependencies). Try using the full path with constructor syntax. </code></pre> <p>I don't use Anaconda or Miniconda. How can I fix this?</p> <hr /> <p>I have found the .dll file here:</p> <pre><code>PS C:\Users\xenig&gt; get-childitem -path &quot;C:\Program Files\NVIDIA GPU Computing Toolkit&quot; -recurse -filter 'nvvm*.dll' Directory: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\nvvm\bin Mode LastWriteTime Length Name ---- ------------- ------ ---- -a--- 2025-02-22 13:18 52873216 nvvm64_40_0.dll </code></pre> <p>Now what?</p> <hr /> <p>I have tried this:</p> <pre><code>import os os.environ['NUMBAPRO_NVVM'] = &quot;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8/nvvm/bin/nvvm64_40_0.dll&quot; os.environ['NUMBAPRO_LIBDEVICE'] = &quot;C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8/nvvm/libdevice&quot; </code></pre> <p>It doesn't work.</p>
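<p>I also tried pointing the environment at the toolkit before importing numba. Note that the variable name <code>CUDA_HOME</code> is my assumption about what current numba-cuda reads; <code>NUMBAPRO_*</code> appears to be a legacy spelling:</p>

```python
import os

# Assumed layout of the CUDA 12.8 toolkit install (matches the dll search above)
cuda_root = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8"

os.environ["CUDA_HOME"] = cuda_root
# Also make nvvm64_40_0.dll discoverable for this process
os.environ["PATH"] = (
    os.path.join(cuda_root, "nvvm", "bin") + os.pathsep + os.environ.get("PATH", "")
)

# from numba import cuda  # import numba only after the environment is set
```

<p>I'm not sure whether this is even the right variable for current numba versions.</p>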
<python><cuda><numba>
2025-03-24 14:22:14
1
3,930
ฮžฮญฮฝฮท ฮ“ฮฎฮนฮฝฮฟฯ‚
79,531,119
2,412,837
I2C slave: respond to a register read on Raspberry Pi with Python using the pigpio package
<p>I need to respond to a specific registry read over I2C on an RPi.</p> <p>Currently, following the usual instructions for the pigpio package here: <a href="https://abyz.me.uk/rpi/pigpio/python.html#bsc_xfer" rel="nofollow noreferrer">https://abyz.me.uk/rpi/pigpio/python.html#bsc_xfer</a> i can see the reads to the device address. I cannot however in the documentation see how to respond to a specific registry read on the device</p> <p>The master device (outside of my control) will read the Slave (RPi), on address 0x20. It will then look to access the register 0x1c5h (for example) to read any data stored there. I cannot see how to:</p> <ol> <li>Detect which register is being read</li> <li>See how to respond appropriately (i.e. with the data that should be &quot;stored&quot; at that address)</li> </ol> <p>Code used so far (ignore the prints for debug!).</p> <pre><code>from decimal import Decimal import bitstring import time import pigpio I2C_ADDR = 0x50 #device address def i2c(id, tick): global pi #s, b, d = pi.bsc_i2c(I2C_ADDR) #print(bitstring.BitArray(d).bin) #Print anything written s, b, d = pi.bsc_i2c(I2C_ADDR, '1240') #respond to read #convert decimal to bytes def decimal_to_bytes(num): x = Decimal(str(num)) a = x ** Decimal(1) / Decimal(7) s = str(a) b = s.encode('ascii') return b; pi = pigpio.pi() if not pi.connected: exit() e = pi.event_callback(pigpio.EVENT_BSC, i2c) pi.bsc_i2c(I2C_ADDR) # Configure BSC as I2C slave time.sleep(600) e.cancel() pi.bsc_i2c(0) # Disable BSC peripheral pi.stop() </code></pre>
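<p>To make the question concrete, the behaviour I'm trying to implement is roughly this pure-Python sketch (the register number and payload are made up); what I can't see is where to hook it into pigpio's BSC API:</p>

```python
# Hypothetical register map: register number -> bytes to return on a read
REGISTERS = {0x1C: b"\x12\x40"}

def reply_for(written):
    """Pick the data to preload for the next read, given the bytes the
    master just wrote (the first written byte is taken as the register)."""
    if not written:
        return b"\x00"
    return REGISTERS.get(written[0], b"\x00")

print(reply_for(b"\x1c"))
```

<p>Presumably the written bytes come from the FIFO data returned by <code>bsc_i2c</code>, but I don't know how to get the reply staged for the subsequent read.</p>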
<python><raspberry-pi><i2c>
2025-03-24 12:48:07
0
346
tobynew
79,531,055
607,407
Can I get different boto3 instances to share their connection pool to the same host?
<p>I have code that creates a few S3 objects, but for many access keys. It does so in parallel, but within the same Python process.</p> <p>Each of the boto3 instances has its own connection pool, which can linger even when it is not used. What happens, on occasion, is that the whole thing crashes with some variant of this error:</p> <pre><code>[Errno 99] Cannot assign requested address </code></pre> <p>This is because there are simply too many TCP connections on the system. This limit can be altered, but I am hitting up to 27000 connections.</p> <p>This is how I instantiate boto3:</p> <pre><code>my_boto_resource = boto3.resource(
    service_name='s3',
    use_ssl=False,
    verify=False,
    region_name='us-east-1',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    endpoint_url=&quot;http://testserver.local&quot;
)
my_boto_client = my_boto_resource.meta.client
</code></pre> <p>These two then get passed to code that populates some test data for all S3 access keys. Note that the endpoint URL is always the same; it's a custom S3 server that supports unlimited access keys.</p> <p>I had faced the exact same problem in JavaScript with the <code>@aws-sdk/client-s3</code> library. But that library allows you to construct <code>S3</code> instances with a pre-created <code>http(s)</code> agent object, which can then be shared across many instances.</p> <p>Is there some way to share an HTTP(S) connection pool for the same host across boto3 instances, like in <code>@aws-sdk/client-s3</code>?</p> <p>Note: reducing how many S3 access keys are processed in parallel has been tried. However, due to the very small number of operations per S3 key (sometimes just one), this reduced the speed of the whole script significantly.</p>
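<p>For reference, the closest I've found in boto3/botocore so far (neither of which actually shares a pool across clients) is capping each client's pool and closing clients explicitly once a key is done:</p>

```python
import boto3
from botocore.config import Config

cfg = Config(max_pool_connections=1)  # documented botocore option

client = boto3.client(
    's3',
    endpoint_url="http://testserver.local",
    aws_access_key_id="some-key",        # placeholder credentials
    aws_secret_access_key="some-secret",
    region_name='us-east-1',
    config=cfg,
)
# ... create the test objects for this key ...
client.close()  # recent boto3 versions expose close() to release the pool
```

<p>This bounds the socket count somewhat, but it is not the shared-agent behaviour I get in the JavaScript SDK.</p>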
<python><boto3>
2025-03-24 12:23:56
0
53,877
Tomรกลก Zato
79,531,015
2,266,881
Bad request version/syntax/type when executing POST request to flask
<p>Good Morning,</p> <p>I'm trying to set up a local flask app running for testing some python code, i used to do it the same way a lot (with the same python script) without any issues, but now i'm getting random 'bad request version/type/syntax&quot; errors and a lot of random bytes text errors.</p> <p>To set the app running, i'm doing the standard:</p> <pre><code>export FLASK_APP=&lt;name_here&gt; export FLASK_ENV=development </code></pre> <p>And then i do:</p> <pre><code>flask run </code></pre> <p>Then, after sending any POST request to it, i'm getting a random error like this:</p> <pre><code>127.0.0.1 - - [24/Mar/2025 08:54:18] code 400, message Bad request version ('\x02ยณรกWรผรงรฐ\x18รฅร™รญรครฏ\x81ยกP') 127.0.0.1 - - [24/Mar/2025 08:54:18] &quot;\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03]|ร›\x9cยฆร›\x1bNnยกB\x08ยฅรฆ&lt;รบรคยดq\x87\x08\x09ร•รฝQรขn\x8bร…รฃยผt \x02ยณรกWรผรงรฐ\x18รฅร™รญรครฏ\x81ยกP&quot; 400 - 127.0.0.1 - - [24/Mar/2025 08:54:25] code 400, message Bad HTTP/0.9 request type ('\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03ยฒ2/รฝP\x84#') 127.0.0.1 - - [24/Mar/2025 08:54:25] &quot;\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03ยฒ2/รฝP\x84#\x0c\x0d2ยปยฎ\x86\x8a\x91รˆยข&quot;ร™\x1a~\x13%d\x17#D&quot; 400 - 127.0.0.1 - - [24/Mar/2025 08:54:26] code 400, message Bad request version ('ร€\x13ร€') 127.0.0.1 - - [24/Mar/2025 08:54:26] &quot;\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03bzqJ%\x0bรž\x90ยฏ_ยจร’\x81N\x1cS?ยฆOร—\x84qlX&quot;`JยฆsBBK \x16ยบรง$รฒ:\x87\x9dยน\x93\x0ery_~'\x12\x0b+\x1fรซ_n83&amp;aยฉรL\x97ร–\x00$\x13\x01\x13\x02\x13\x03ร€/ร€+ร€0ร€,ร€'รŒยฉรŒยจร€\x09ร€\x13ร€&quot; 400 - 127.0.0.1 - - [24/Mar/2025 08:54:27] code 400, message Bad request syntax ('\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03Aร‚:รฟ\x91+*รฒ&gt;~ir{รฟรด5}@รชรซยธoยผ') 127.0.0.1 - - [24/Mar/2025 08:54:27] &quot;\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03Aร‚:รฟ\x91+*รฒ&gt;~ir{รฟรด5}@รชรซยธoยผ&quot; 400 - 127.0.0.1 - - [24/Mar/2025 08:54:41] code 400, message Bad request version ('ร€\x13ร€') 127.0.0.1 - - 
[24/Mar/2025 08:54:41] &quot;\x16\x03\x01\x00รก\x01\x00\x00ร\x03\x03^ยจlรกยฅยปรช!\x1ab\x92*\x85\x99ร„aรฐร\x82\x88รชร’ยฃeC\x90\x85ร“\x92oรรพ ร&lt;|3KMรรน~:รฌ\x0f-ร–รข~E&gt;o\x87O1รŸร›ยฑรจยตรซ}ร’\x17r\x00$\x13\x01\x13\x02\x13\x03ร€/ร€+ร€0ร€,ร€'รŒยฉรŒยจร€\x09ร€\x13ร€&quot; 400 - </code></pre> <p>My guess is something must have changed in the flask library, because, as i said, i used to test the same python app like 2 years ago using that syntax and everything worked flawless.</p> <p>Any idea what could it be?</p> <p>Thanks in advance!</p>
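<p>In case it helps diagnose: every failing request starts with the same bytes, and checking them by hand they look like a TLS record header (0x16 is the handshake record type, 0x03 0x01 the record-layer version), i.e. something seems to be sending <em>https</em> to the plain-http dev server:</p>

```python
# First bytes of one of the rejected "requests" from the log above
raw = b"\x16\x03\x01"

# 0x16 = TLS handshake record, 0x03 0x01 = record-layer protocol version
looks_like_tls = raw[0] == 0x16 and raw[1:3] == b"\x03\x01"
print(looks_like_tls)
```

<p>Could a browser or client insisting on https explain these errors?</p>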
<python><flask>
2025-03-24 12:04:59
0
1,594
Ghost
79,530,884
6,601,575
Unable to start a pyFlink job from savepoint
<p>I'm using Flink 1.20.0 and am trying to submit a PyFlink job that starts from an existing savepoint. I execute on the command line:</p> <pre><code>flink run --fromSavepoint s3a://.../1a4e1e73910e5d953183b8eb1cd6eb84/chk-1 -py flink_job.py -pyFiles requirements.txt --table my_table --bucket my_bucket --secret my_secret </code></pre> <p>But Flink raises an exception:</p> <pre><code>Python command line option detected but the flink-python module seems to be missing or not working as expected. </code></pre> <p>If I drop <code>--fromSavepoint</code>, the job is submitted successfully, but with <code>--fromSavepoint</code> it fails with the exception above.</p> <p>What's the proper way to start a PyFlink job from a savepoint?</p>
<python><apache-flink><flink-streaming><flink-sql><pyflink>
2025-03-24 11:02:14
1
834
Rinze
79,530,877
8,040,369
Importing NeuralProphet throws ModuleNotFoundError: No module named 'importlib_resources'
<p>I have installed the latest version of &quot;NeuralProphet&quot;, and when I try to import it, getting the below error with <strong>importlib_resources</strong></p> <p>Tried installing the package separately using <strong>pip install importlib-resources</strong>, but still the same issue persists. Below is the error log</p> <pre><code> Traceback (most recent call last): File &quot;/home/forecast_models_prod/./Forecast_Main_Script.py&quot;, line 25, in &lt;module&gt; from neuralprophet import NeuralProphet File &quot;/usr/local/lib/python3.9/dist-packages/neuralprophet/__init__.py&quot;, line 4, in &lt;module&gt; import pytorch_lightning as pl File &quot;/usr/local/lib/python3.9/dist-packages/pytorch_lightning/__init__.py&quot;, line 27, in &lt;module&gt; from pytorch_lightning.callbacks import Callback # noqa: E402 File &quot;/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/__init__.py&quot;, line 14, in &lt;module&gt; from pytorch_lightning.callbacks.batch_size_finder import BatchSizeFinder File &quot;/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/batch_size_finder.py&quot;, line 26, in &lt;module&gt; from pytorch_lightning.callbacks.callback import Callback File &quot;/usr/local/lib/python3.9/dist-packages/pytorch_lightning/callbacks/callback.py&quot;, line 22, in &lt;module&gt; from pytorch_lightning.utilities.types import STEP_OUTPUT File &quot;/usr/local/lib/python3.9/dist-packages/pytorch_lightning/utilities/types.py&quot;, line 36, in &lt;module&gt; from torchmetrics import Metric File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/__init__.py&quot;, line 37, in &lt;module&gt; from torchmetrics import functional # noqa: E402 File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/functional/__init__.py&quot;, line 14, in &lt;module&gt; from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training File 
&quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/functional/audio/__init__.py&quot;, line 14, in &lt;module&gt; from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/functional/audio/pit.py&quot;, line 22, in &lt;module&gt; from torchmetrics.utilities import rank_zero_warn File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/utilities/__init__.py&quot;, line 14, in &lt;module&gt; from torchmetrics.utilities.checks import check_forward_full_state_property File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/utilities/checks.py&quot;, line 26, in &lt;module&gt; from torchmetrics.metric import Metric File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/metric.py&quot;, line 43, in &lt;module&gt; from torchmetrics.utilities.plot import _AX_TYPE, _PLOT_OUT_TYPE, plot_single_or_multi_val File &quot;/usr/local/lib/python3.9/dist-packages/torchmetrics/utilities/plot.py&quot;, line 28, in &lt;module&gt; import matplotlib.pyplot as plt File &quot;/usr/local/lib/python3.9/dist-packages/matplotlib/pyplot.py&quot;, line 58, in &lt;module&gt; from matplotlib import ( # noqa: F401 Re-exported for typing. File &quot;/usr/local/lib/python3.9/dist-packages/matplotlib/style/__init__.py&quot;, line 1, in &lt;module&gt; from .core import available, context, library, reload_library, use File &quot;/usr/local/lib/python3.9/dist-packages/matplotlib/style/core.py&quot;, line 26, in &lt;module&gt; import importlib_resources ModuleNotFoundError: No module named 'importlib_resources' </code></pre> <p>Kindly suggest how to fix this issue.</p> <p>Thanks,</p>
<python><prophet>
2025-03-24 10:58:30
0
787
SM079
79,530,829
1,306,892
Issues with Recursive Digraph Construction and Missing Loops in Python Code
<p>I'm trying to write code to recursively generate directed graphs as described in the following process:</p> <ul> <li>For <code>G_1</code>, we start with a single vertex with a self-loop.</li> <li>For <code>G_2</code>, we take two copies of <code>G_1</code>, add a new vertex, and connect them with new edges.</li> <li>For <code>G_3</code> and higher, we repeat the process, taking two copies of <code>G_{n-1}</code>, adding a new vertex, and connecting them with three new edges.</li> </ul> <p>The process should be recursive, and I'm using Python with the <code>networkx</code> library to represent and visualize the graphs. Here's the code I wrote so far:</p> <pre class="lang-py prettyprint-override"><code>import networkx as nx import matplotlib.pyplot as plt def construct_G(n): if n == 1: G = nx.DiGraph() G.add_node(1) G.add_edge(1, 1) # Self-loop return G G_n_minus_1 = construct_G(n - 1) G = nx.DiGraph() G.add_nodes_from(G_n_minus_1.nodes) G.add_edges_from(G_n_minus_1.edges) offset = 2**(n-1) G_renamed = nx.relabel_nodes(G_n_minus_1, lambda x: x + offset) G.add_nodes_from(G_renamed.nodes) G.add_edges_from(G_renamed.edges) new_vertex = offset G.add_node(new_vertex) G.add_edge(1, 2**n - 1) G.add_edge(offset + 1, new_vertex) G.add_edge(new_vertex, offset - 1) return G def draw_G(n): G = construct_G(n) pos = nx.spring_layout(G, seed=42) plt.figure(figsize=(8, 6)) nx.draw(G, pos, with_labels=True, node_color='lightblue', edge_color='gray', node_size=500, arrowsize=10) plt.title(f&quot;Digraph G_{n}&quot;) plt.show() # Example: Draw the digraph G_4 draw_G(2) </code></pre> <p><strong>Problem 1:</strong><br /> The code is missing self-loops for some vertices, which should be present according to the definition. I can't figure out how to add these loops for these vertices in the graph. (The presence of vertices with loops ultimately depends on the copying mechanism underlying the recursive definition of the graph. 
In fact, the loop that starts at step n=1 is copied and propagates from step to step.)</p> <p><strong>Problem 2:</strong><br /> When the value of <code>n</code> is greater than 2, the graph's vertices and edges overlap. The graph looks correct for <code>n = 2</code>, but for higher values of <code>n</code>, the nodes and edges are superimposed, making it difficult to visualize.</p> <p>Any suggestions on how to fix these issues?</p> <p><strong>Updates.</strong></p> <p>I added some figures:</p> <p><a href="https://i.sstatic.net/ELAH0bZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ELAH0bZP.png" alt="" /></a></p> <p>for <code>G_1</code>,</p> <p><a href="https://i.sstatic.net/xV4Wp3oi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xV4Wp3oi.png" alt="enter image description here" /></a></p> <p>for <code>G_2</code>,</p> <p><a href="https://i.sstatic.net/TM6x7peJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TM6x7peJ.png" alt="enter image description here" /></a></p> <p>for <code>G_3</code>.</p>
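<p>Regarding Problem 1, I checked in isolation whether relabeling could be dropping the loops; it doesn't seem to be, so the loss must come from elsewhere in my construction:</p>

```python
import networkx as nx

# A self-loop survives relabel_nodes: it stays attached to the renamed node.
G = nx.DiGraph()
G.add_edge(1, 1)  # single vertex with a self-loop, as in G_1
H = nx.relabel_nodes(G, lambda x: x + 2)
print(list(nx.selfloop_edges(H)))
```

<p>So presumably the problem is in how I merge the two copies and pick the new vertex label, not in the relabeling itself.</p>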
<python><networkx><graph-theory>
2025-03-24 10:33:42
1
1,801
Mark
79,530,791
4,473,615
Move files from one S3 to another using Boto3 in Python from multiple sub bucket levels
<p>I have the below code to move files from one S3 bucket to another, which works. The catch is that while copying, it carries over the source prefix as well; instead I want to copy only the files themselves.</p> <p><code>Source bucket - bucket-source/Source-First/</code> --&gt; This prefix contains multiple .xlsx files <br> <code>Target bucket - bucket-target</code></p> <p>I want only the files from bucket-source/Source-First/ copied to bucket-target. Currently it copies them along with the prefix Source-First/:<br></p> <p>bucket-target/Source-First/File1.xlsx<br> bucket-target/Source-First/File2.xlsx</p> <p>But I want only the files to be copied:</p> <pre><code>bucket-target/File1.xlsx bucket-target/File2.xlsx </code></pre> <pre><code>import boto3 bucket_from = &quot;bucket-source&quot; bucket_to = &quot;bucket-target&quot; s3 = boto3.resource('s3') src = s3.Bucket(bucket_from) tgt = s3.Bucket(bucket_to) for archive in src.objects.filter(Prefix='Source-First/'): if archive.key.endswith('.xlsx'): CopySource = {'Bucket': archive.bucket_name, 'Key': archive.key} tgt.copy(CopySource, archive.key) archive.delete() </code></pre>
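One approach I'm considering (a sketch, not yet run against my buckets) is to strip the source prefix from the object key before using it as the destination key:

```python
def target_key(source_key, prefix="Source-First/"):
    # Drop the source prefix so only the bare file name lands in the target bucket.
    if source_key.startswith(prefix):
        return source_key[len(prefix):]
    return source_key
```

The copy line would then become <code>tgt.copy(CopySource, target_key(archive.key))</code>.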
<python><amazon-s3><boto3>
2025-03-24 10:18:52
0
5,241
Jim Macaulay
79,530,763
12,870,651
Python in VS Code - Change the default directory of python shell
<p>I am working on an ETL code template using VS Code which has the building blocks for most of the ETL workflows my team uses.</p> <p>The idea is that folks will <code>git clone</code> the repo and use this as a base for building ETLs rather than writing all the code from scratch.</p> <pre><code>root/ └── src/ ├── python/ │ ├── __init__.py │ ├── sample_etl1.py │ ├── sample_etl2.py │ │ │ └── dependencies/ │ ├── __init__.py │ ├── file_manager.py # Contains helper functions to manage directories and files │ ├── functions.py # Placeholder module for any additional custom functions required │ ├── logfusion.py # Helps instantiate a python logger │ ├── outlook_manager.py # Creates an outlook instance to access mailbox folders &amp; send emails │ ├── pygmentation.py # Syntax highlights log messages before sending via email │ ├── sqalchemist.py # SQL toolkit to create database connections and improve data load functionality │ └── utils.py # Placeholder for any utility items like SQLAlchemy dtype mappings │ ├── sql/ # Store SQL queries here │ ├── query1.sql │ └── query2.sql │ └── main.py </code></pre> <p>All the python code runs within the <code>src</code> directory using <code>main.py</code>.</p> <p>During development, when I select some python code and press <code>shift+enter</code> to run the code in the shell, the shell launches in the root directory.</p> <p>I then have to manually exit the shell, cd to the <code>src</code> directory, and start the shell again before I can run the selected python code, which is a bit annoying.</p> <p>Is there a way for me to launch the python shell directly in the source directory when I select any python code and hit <code>shift+enter</code>? 
This would allow developers to test pieces of code as they go along.</p> <p>I have tried a bunch of settings in <code>.vscode\settings.json</code> but nothing seems to have worked so far.</p> <p>Hope my query is clear. Happy to provide more information.</p> <p>Any suggestions would be very helpful.</p>
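One setting I'm about to try in <code>.vscode/settings.json</code> (untested on my setup; my assumption is that it controls the working directory of newly spawned integrated terminals, including the one <code>shift+enter</code> creates):

```json
{
  "terminal.integrated.cwd": "${workspaceFolder}/src"
}
```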
<python><visual-studio-code>
2025-03-24 10:05:43
0
439
excelman
79,530,541
11,159,734
Cursor IDE overwrite TAB complete without disabling it in python
<p>I've been using Cursor for a while now and I generally like it, but it's incredibly annoying when programming in Python. The image below shows a simple example. My cursor is in line 22 and I just want to insert a TAB space, since I do not want to accept the suggestion in line 21. However, as far as I know it's impossible to suppress this TAB autocomplete without turning it off (entirely or for Python). I wonder if there is really no good solution (e.g. being able to use <kbd>Shift</kbd> + <kbd>Tab</kbd> to insert a TAB space without accepting the AI suggestion)?</p> <p>This is so annoying that 95% of the time I leave the Cursor TAB completion off in Python, which is a shame because the quality of the AI suggestions is really good most of the time.</p> <p>I'd rather not fiddle with the settings for hours to see if there is an easy fix for this, so I'm just asking here. I also saw similar questions but they were a few months old, so I'm wondering if anything has changed since then.</p> <p><a href="https://i.sstatic.net/eAqJssdv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAqJssdv.png" alt="enter image description here" /></a></p>
<python><cursor-ide>
2025-03-24 08:29:17
0
1,025
Daniel
79,530,318
1,145,011
python script to check if the downloaded file from URL is corrupted or not
<p>I have a requirement to download files from a URL and store them on the local system. The files are of different formats like pdf, ppt, doc, docx, zip, jpg, iso etc. After downloading the files, is there a way to check whether they are corrupted or not? Most of the suggested options use a checksum approach, but I do not have checksum details for the file at the URL to compare the downloaded file against.</p> <p>I used the below piece of code to download a file:</p> <pre><code>response = requests.get(url_to_download_file, timeout=5) with open(local_save_path, 'wb') as file: file.write(response.content) </code></pre>
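Without a published checksum, one partial check I can think of (a sketch; it catches truncated downloads, not bit-flips) is comparing the saved byte count against the server's <code>Content-Length</code> header, combined with <code>response.raise_for_status()</code> so HTTP error pages are never written to disk:

```python
def size_matches(content, content_length_header):
    # True when the downloaded byte count equals the declared Content-Length.
    # If the server sent no Content-Length, there is nothing to compare against.
    if content_length_header is None:
        return True
    return len(content) == int(content_length_header)
```

Usage would be <code>size_matches(response.content, response.headers.get("Content-Length"))</code> before writing the file.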
<python><python-requests>
2025-03-24 06:35:21
2
1,551
user166013
79,530,149
1,481,689
Problem if two inherited __init_subclass__s have the same argument name
<p>I'm trying to use two classes, <code>A</code> and <code>B</code>, as mixin classes of <code>AB</code>. They both have <code>__init_subclass__</code> methods. The problem is that both <code>__init_subclass__</code> methods have the same argument <code>msg</code>. Therefore I've used an adaptor class <code>B_</code> to rename <code>B</code>'s argument to <code>msg_b</code>. But I'm having trouble!</p> <p>The nearest I have got is:</p> <pre class="lang-py prettyprint-override"><code>class A: def __init_subclass__(cls, msg, **kwargs): print(f'{cls=} A: {msg}') super().__init_subclass__(**kwargs) class B: def __init_subclass__(cls, msg, **kwargs): print(f'{cls=} B: {msg}') super().__init_subclass__(**kwargs) # Adaptor class `B_` needed because both `A` and `B` have an argument `msg`. class B_: # Rename `msg` to `msg_b`. def __init_subclass__(cls, msg_b, **kwargs): # `B.__init_subclass__(msg_b, **kwargs)` sets the subclass as `B` not `cls`, but otherwise works. B.__init_subclass__.__func__(cls, msg=msg_b, **kwargs) # Still need a `B`. def __init__(self, *args, **kwargs): self.b = B(*args, **kwargs) # Forward all the attributes to `self.b`. def __getattr__(self, item): return getattr(self.b, item) class AB(A, B_, msg='Hello.', msg_b='Also, hello.' ): ... print(f'{AB()=}, {isinstance(AB(), A)=}, {isinstance(AB(), B_)=}, {isinstance(AB(), B)=}') </code></pre> <p>This does call both <code>__init_subclass__</code>s with the correct class argument; it prints:</p> <pre><code>cls=&lt;class '__main__.AB'&gt; A: Hello. cls=&lt;class '__main__.AB'&gt; B: Also, hello. </code></pre> <p>But then you get the error:</p> <pre><code>Traceback (most recent call last): File &lt;definition of AB&gt;, in &lt;module&gt; class AB(A, B_, msg='Hello.', msg_b='Also, hello.'
): File &lt;A's&gt;, in __init_subclass__ super().__init_subclass__(**kwargs) File &lt;B_'s&gt;, in __init_subclass__ B.__init_subclass__.__func__(cls, msg=msg_b, **kwargs) File &lt;B's&gt;, in __init_subclass__ super().__init_subclass__(**kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: super(type, obj): obj must be an instance or subtype of type </code></pre> <p>Presumably because <code>B</code> is not a super type of <code>AB</code> (<code>B_</code> is). I'm not clear why super wants subtypes matching though!</p> <p>Any ideas on how to fix this?</p>
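For reference, the closest I've since got to a clean version is below (a sketch: making <code>B_</code> a real subclass of <code>B</code> keeps <code>super()</code> on the MRO, and giving the renamed parameter a default stops the definition of <code>B_</code> itself from failing; the print calls are replaced by a list so the call order is visible):

```python
calls = []  # records (which __init_subclass__ ran, subclass name, msg)

class A:
    def __init_subclass__(cls, msg=None, **kwargs):
        calls.append(('A', cls.__name__, msg))
        super().__init_subclass__(**kwargs)

class B:
    def __init_subclass__(cls, msg=None, **kwargs):
        calls.append(('B', cls.__name__, msg))
        super().__init_subclass__(**kwargs)

class B_(B):
    # Adaptor: rename `msg_b` back to `msg` and stay on the cooperative chain.
    def __init_subclass__(cls, msg_b=None, **kwargs):
        super().__init_subclass__(msg=msg_b, **kwargs)

class AB(A, B_, msg='Hello.', msg_b='Also, hello.'):
    ...
```

With <code>B_</code> an actual subclass of <code>B</code>, <code>isinstance(AB(), B)</code> is also true, so the <code>__getattr__</code> forwarding trick would no longer be needed.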
<python>
2025-03-24 04:52:49
1
1,040
Howard Lovatt
79,530,146
4,126,111
Autogen: How to get a simple csv from the search instead of using
<p>I am trying to get data from the web. Unfortunately, I get this error. I suppose that it identifies a big file using a search and then tries to have a conversation about that entire file. Is there any way to navigate him to download the file, check the headers, and then process it? <a href="https://i.sstatic.net/oJ8DDuDA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJ8DDuDA.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>import os import requests from typing import Dict from autogen import AssistantAgent, UserProxyAgent, register_function from dotenv import load_dotenv # Load environment variables from a .env file load_dotenv() # Define the web search function def web_search(query: str) -&gt; Dict: &quot;&quot;&quot;Perform a web search and return the top result.&quot;&quot;&quot; subscription_key = os.getenv('BING_SEARCH_API_KEY') if not subscription_key: raise ValueError(&quot;Bing Search API key not found. Please set the 'BING_SEARCH_API_KEY' environment variable.&quot;) search_url = &quot;https://api.bing.microsoft.com/v7.0/search&quot; headers = {&quot;Ocp-Apim-Subscription-Key&quot;: subscription_key} params = {&quot;q&quot;: query, &quot;textDecorations&quot;: True, &quot;textFormat&quot;: &quot;HTML&quot;} response = requests.get(search_url, headers=headers, params=params) response.raise_for_status() search_results = response.json() return search_results # Define the function to determine if a message is a termination message def is_termination_message(message): return message.get(&quot;content&quot;, &quot;&quot;).strip().lower() == &quot;goodbye!&quot; # Define the LLM configuration llm_config = { &quot;model&quot;: &quot;gpt-3.5-turbo&quot;, &quot;api_key&quot;: os.getenv(&quot;OPENAI_API_KEY&quot;), } # Initialize the assistant agent assistant = AssistantAgent( name='assistant', llm_config=llm_config, is_termination_msg=is_termination_message ) # Initialize the user proxy agent 
user_proxy = UserProxyAgent( name=&quot;user_proxy&quot;, llm_config=llm_config, code_execution_config={ &quot;work_dir&quot;: &quot;code_execution&quot;, &quot;use_docker&quot;: False }, human_input_mode=&quot;ALWAYS&quot; ) # Register the web_search function with both agents register_function( f=web_search, name=&quot;web_search&quot;, description=&quot;Perform a web search and return the top result based on the query.&quot;, caller=assistant, # The assistant agent can suggest calls to the web_search function. executor=user_proxy # The user proxy agent can execute the web_search function. ) # Initiate the chat between the user proxy and the assistant user_proxy.initiate_chat( assistant, message=&quot;Plot the GDP growth vs. unemployment in Czechia for the years 1993-2024.&quot; ) </code></pre>
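One idea I'm experimenting with (names hypothetical, and the download step is omitted) is registering an extra tool that returns only the CSV header plus a few sample rows, so the agent never sees the whole file:

```python
import csv
import io

def csv_preview(text, n_rows=5):
    # Parse CSV text and keep only the header and the first n_rows data rows.
    reader = csv.reader(io.StringIO(text))
    rows = [row for _, row in zip(range(n_rows + 1), reader)]
    return {"header": rows[0] if rows else [], "rows": rows[1:]}
```

A helper like this could be registered alongside <code>web_search</code> via <code>register_function</code>, with the small preview handed to the model instead of the raw response body.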
<python><csv><web-scraping><large-language-model><ms-autogen>
2025-03-24 04:50:19
0
1,189
Karel Macek
79,530,047
10,321,138
Incorrect argument error when trying to enqueue a task within a firebase function
<p>I have a firebase function that attempts to enqueue a task with some data. Every time I call it I get the following two errors from the cloud run logs:</p> <blockquote> <p>requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: <a href="https://cloudtasks.googleapis.com/v2/projects/PROJECT_ID/locations/us-central1/queues/TASK_FUNCTION_NAME/tasks" rel="nofollow noreferrer">https://cloudtasks.googleapis.com/v2/projects/PROJECT_ID/locations/us-central1/queues/TASK_FUNCTION_NAME/tasks</a></p> </blockquote> <blockquote> <p>firebase_admin.exceptions.InvalidArgumentError: Request contains an invalid argument.</p> </blockquote> <p>Here is my code:</p> <pre><code>@scheduler_fn.on_schedule(schedule=&quot;every day 00:00&quot;, timeout_sec=540, secrets=[APIFY_API_KEY]) def update_brooklyn_listings(event: scheduler_fn.ScheduledEvent) -&gt; None: asyncio.run(update_brooklyn_listings_async()) async def update_brooklyn_listings_async(): result = await SOME_ASYNC_TASK data = json.loads(result.get_bytes(format=&quot;json&quot;)) body = {&quot;data&quot;: data} task_queue = functions.task_queue(&quot;functionname&quot;) task_queue.enqueue(body) </code></pre> <p>Based on the <code>enqueue</code> documentation no options parameter is needed, and the <code>data</code> passed in can be of type <code>Any</code>. Any idea what I could be missing here?</p>
<python><firebase><google-cloud-functions>
2025-03-24 02:54:01
1
1,896
yambo
79,530,017
1,445,660
return header without quotes in aws lambda + remove header if empty
<p>I'm trying to return a header from a lambda, it's returned as <code>&quot;&lt;https://example.com/ab?offset=0&amp;limit=0&gt;; rel=\&quot;next\&quot;&quot;</code> instead of <code>&lt;https://example.com/ab?offset=0&amp;limit=0&gt;; rel=&quot;next&quot;</code>.</p> <p>And also when I don't return the header, I still see it as <code>&quot;&quot;</code>. This is my cdk:</p> <pre><code>_api_gw.MethodResponse( status_code=&quot;200&quot;, response_parameters={&quot;method.response.header.Link&quot;: True} ) integration_response = _api_gw.IntegrationResponse( status_code=&quot;200&quot;, response_parameters=response_integration_parameters, response_templates={ &quot;application/json&quot;: &quot;&quot;&quot; #set($inputRoot = $input.path('$')) ## Map headers only if exist ## #set($link = $input.json('$.Link')) #if($link != &quot;&quot;) #set($context.responseOverride.header.Link = $link) #end ## Clean JSON body (exclude headers) ## { &quot;data&quot;: $input.json('$.data'), } &quot;&quot;&quot; }, ) default_request_template = { &quot;application/json&quot;: '{ &quot;method&quot;: &quot;$context.httpMethod&quot;, &quot;body&quot; : &quot;$util.escapeJavaScript(&quot;$input.json(\'$\')&quot;)&quot;, &quot;headers&quot;: { #foreach($param in $input.params().header.keySet()) &quot;$param&quot;: &quot;$util.escapeJavaScript($input.params().header.get($param))&quot; #if($foreach.hasNext),#end #end }, &quot;queryParams&quot;: { #foreach($param in $input.params().querystring.keySet()) &quot;$param&quot;: &quot;$util.escapeJavaScript($input.params().querystring.get($param))&quot; #if($foreach.hasNext),#end #end }, &quot;pathParams&quot;: { #foreach($param in $input.params().path.keySet()) &quot;$param&quot;: &quot;$util.escapeJavaScript($input.params().path.get($param))&quot; #if($foreach.hasNext),#end #end }}' } resource.add_method( &quot;GET&quot;, _api_gw.LambdaIntegration( my_lambda, proxy=False, request_templates=default_request_template, 
integration_responses=[integration_response], ), method_responses=[ _api_gw.MethodResponse( status_code=&quot;200&quot;, response_parameters={&quot;method.response.header.Link&quot;: True} ) ], ) </code></pre>
<python><aws-lambda><aws-cdk>
2025-03-24 02:18:02
0
1,396
Rony Tesler
79,529,986
234,118
No local python in Conda venv
<p>I am using an M2 MacBook Pro, conda 24.9.2, Python 3.12. Why does the command &quot;python&quot; not resolve to the environment's local Python?</p> <pre><code>conda create -n f5-tts python=3.10 conda activate f5-tts which pip =&gt; /opt/anaconda3/envs/f5-tts/bin/pip which python =&gt; python: aliased to /usr/local/bin/python3 which python3 =&gt;/opt/anaconda3/envs/f5-tts/bin/python3 </code></pre>
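My understanding (which I'd like confirmed) is that the zsh alias wins over the PATH lookup, so the env's <code>python</code> never gets a chance; a check and session-local fix would be:

```shell
# Show how the current shell resolves 'python' (an alias beats PATH lookup):
type python || true
# Drop the alias for this session; delete it from ~/.zshrc to make that permanent:
unalias python 2>/dev/null || true
hash -r
type python || true
```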
<python><conda><python-venv>
2025-03-24 01:28:23
0
5,540
Dustin Sun
79,529,951
1,788,496
Python: Get the Source State in Twain
<p>I have a python script where I am using Twain to interact with my scanner. First I initialize my source:</p> <pre><code>scanner = sm.OpenSource() </code></pre> <p>set a few capabilities then request to acquire an image:</p> <pre><code>scanner.RequestAcquire(0, 1) </code></pre> <p>I then try to transfer an image:</p> <pre><code>scanner.XferImageNatively() </code></pre> <p>I get a Twain Sequence error, the docs tell me this happens when it is in the incorrect state. I can not for the life of me find in the docs how to check said state though. Can anyone offer some assistance??</p> <p>I would like to check the state and wait until my scanner is in a valid state.</p>
<python><twain>
2025-03-24 00:52:31
0
1,447
mgrenier
79,529,708
2,023,370
How to avoid a Torch error with Open WebUI/Ollama
<p>I'd like to get Open WebUI working with Ollama on Ubuntu 24.10, but installing it using pip and venv leads me to a <code>torch</code> error.</p> <p>Firstly, Ollama (0.6.2) is working: I can type <code>/path/to/ollama list</code> and see the 3 models I've been working with.</p> <p>Next, I follow the guidance in the error message after <code>pip install open-webui</code>, relating to the use of the APT package manager, and so I use venv:</p> <pre><code>sudo apt-get install python3-full python3 -m venv /path/to/venv source /path/to/venv/bin/activate pip install open-webui </code></pre> <p>I then try <code>python open-webui serve</code>, but this complains that there is no file or directory called <code>open-webui</code> in my <code>$HOME</code> directory. I see there is an executable file called <code>open-webui</code> in my <code>/path/to/venv/bin</code> so I try:</p> <pre><code>python /path/to/venv/bin/open-webui serve </code></pre> <p>...and I see the large <code>OPEN WEBUI</code> text, and an error. Opening <code>http://localhost:8080</code> in my browser seems to work, but how can I avoid the following error message?:</p> <pre><code>ERROR [open_webui.main] Error updating models: cannot import name 'Tensor' from 'torch' (unknown location) </code></pre>
<python><pip><torch><python-venv><ollama>
2025-03-23 21:15:29
1
11,288
user2023370
79,529,322
1,236,840
Do subset-sum instances inherently require large integers to force exponential difficulty?
<p>I'm developing custom subset-sum algorithms and have encountered a puzzling issue: <strong>it seems difficult to generate truly &quot;hard&quot; subset-sum instances (i.e., forcing exponential computational effort) without using very large integers (e.g., greater than about 2^22).</strong></p> <p>I'd specifically like to know:</p> <ul> <li><strong>Are there known constructions or instance generators for subset-sum that reliably force exponential complexity, particularly against common subset-sum algorithms or custom heuristics, using only moderately sized integers (≤ 2^22)?</strong></li> <li>Is the hardness of subset-sum instances inherently tied to the size of the integers involved, or is it possible to create computationally difficult instances purely through numerical structure and relationships, even with smaller numbers?</li> </ul> <p>For context, here are some attempts I've made at generating potentially hard instances (feedback or improvements welcome):</p> <pre class="lang-py prettyprint-override"><code>import random def generate_exponential_instance(n): max_element = 2 ** 22 A = [random.randint(1, max_element) for _ in range(n)] while True: mask = [random.choice([0, 1]) for _ in range(n)] if sum(mask) != 0: break target = sum(A[i] * mask[i] for i in range(n)) return A, target def generate_dense_high_values_instance(n): base = 2 ** 22 - random.randint(0, 100) A = [base + random.randint(0, 20) for _ in range(n)] target = sum(random.sample(A, k=n // 2)) return A, target def generate_merkle_hellman_instance(n, max_step=20): total = 0 private_key = [] for _ in range(n): next_val = total + random.randint(1, max_step) private_key.append(next_val) total += next_val q = random.randint(total + 1, 2 * total) r = random.randint(2, q - 1) public_key = [(r * w) % q for w in private_key] message = [random.randint(0, 1) for _ in range(n)] ciphertext = sum(b * k for b, k in zip(message, public_key)) return public_key, ciphertext </code></pre>
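For calibration I check candidate instances against a deliberately naive exhaustive solver (exponential in <code>n</code>; a real hardness test would of course pit the instance against stronger algorithms as well):

```python
from itertools import combinations

def solve_subset_sum(A, target):
    # Exhaustive search over all non-empty subsets, smallest first.
    # Returns a tuple of solving indices, or None when no subset sums to target.
    for r in range(1, len(A) + 1):
        for idx in combinations(range(len(A)), r):
            if sum(A[i] for i in idx) == target:
                return idx
    return None
```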
<python><algorithm><optimization><subset-sum>
2025-03-23 17:05:03
2
1,010
Naseiva Juman
79,529,320
10,918,680
PIL.UnidentifiedImageError for some image URL, but not others
<p>I'm trying to open an image at a URL using Pillow, but it only works for some URLs. In my code below, URL1 works but not URL2.</p> <pre><code>import requests from PIL import Image url1 = &quot;https://picsum.photos/200&quot; url2 = &quot;https://www.electrical-forensics.com/MajorAppliances/GE/Small/GE_2018_Stackable_Washer_Dryer_Tag.jpg&quot; url = url1 # works; swap in url2 to reproduce the error image = Image.open(requests.get(url, stream=True, verify=False).raw) </code></pre> <p>If I run the code with URL2, it gives me an error:</p> <pre><code>PIL.UnidentifiedImageError: cannot identify image file &lt;_io.BytesIO object at 0x...&gt; </code></pre> <p>I am able to open both URLs with my browser so I'm really curious as to why. Are there any methods that will work on both?</p>
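My suspicion (unverified) is that the second server rejects non-browser clients and returns an HTML error page, which Pillow then cannot identify. A small magic-byte sniff on the downloaded bytes would show what was actually received; if it turns out to be HTML, sending a browser-like <code>User-Agent</code> header in the request may help:

```python
def sniff_image(data):
    # Minimal magic-byte check; an HTML error page matches none of these.
    if data[:3] == b"\xff\xd8\xff":
        return "jpeg"
    if data[:8] == b"\x89PNG\r\n\x1a\n":
        return "png"
    if data[:6] in (b"GIF87a", b"GIF89a"):
        return "gif"
    return None
```

For instance, <code>sniff_image(requests.get(url2).content)</code> returning <code>None</code> on bytes that start with <code>&lt;html&gt;</code> would confirm the theory.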
<python><image><python-requests><python-imaging-library>
2025-03-23 17:02:52
0
425
user173729
79,529,216
3,783,002
Cannot load an EF Core db context wrapper in Python using Python.NET
<p>Crossposting from <a href="https://github.com/pythonnet/pythonnet/discussions/2567" rel="nofollow noreferrer">here</a></p> <p>I have a class called <code>ModelSetWrapper</code> which wraps model sets from an EfCore DbContext (Jet, specifically) and returns a dictionary containing the model set information. Here's what the class looks like:</p> <pre class="lang-cs prettyprint-override"><code>namespace LibFacade.Utils; public class ModelSetWrapper&lt;T&gt; where T : class { private Dictionary&lt;string, List&lt;object&gt;&gt; _data; public ModelSetWrapper() { _data = new Dictionary&lt;string, List&lt;object&gt;&gt;(); PropertyInfo[] properties = typeof(T).GetProperties(); using (ModelContext context = new ModelContext()) { DbSet&lt;T&gt; dbSet = context.Set&lt;T&gt;(); foreach (PropertyInfo pInfo in properties) { List&lt;object&gt; list = dbSet.Select(c =&gt; pInfo.GetValue(c)).ToList(); _data[pInfo.Name] = list; } } } public Dictionary&lt;string, List&lt;object&gt;&gt; Data =&gt; _data; } </code></pre> <p>This seems to work fine from the .NET side. From a console app, the following runs without any errors:</p> <pre class="lang-cs prettyprint-override"><code>ModelSetWrapper&lt;Jaugeage&gt; wrapper = new ModelSetWrapper&lt;Jaugeage&gt;(); foreach (KeyValuePair&lt;string, List&lt;object&gt;&gt; kvp in wrapper.Data) { Console.WriteLine($&quot;{kvp.Key}, {kvp.Value.Count}&quot;); } </code></pre> <p>I'm able to load this assembly and types from it using pythonnet without any issues. However when I attempt the following in my python code:</p> <p><code>wrapper = ModelSetWrapper[Jaugeage]()</code></p> <p>I get an error:</p> <pre><code>--------------------------------------------------------------------------- InvalidOperationException Traceback (most recent call last) Cell In[2], line 1 ----&gt; 1 wrapper = ModelSetWrapper[Jaugeage]() InvalidOperationException: The property can only be set once. 
at EntityFrameworkCore.Jet.Data.JetConnection.set_DataAccessProviderFactory(DbProviderFactory value) at EntityFrameworkCore.Jet.Data.JetConnection.Open() at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenDbConnection(Boolean errorsExpected) at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.OpenInternal(Boolean errorsExpected) at Microsoft.EntityFrameworkCore.Storage.RelationalConnection.Open(Boolean errorsExpected) at Microsoft.EntityFrameworkCore.Storage.RelationalCommand.ExecuteReader(RelationalCommandParameterObject parameterObject) at Microsoft.EntityFrameworkCore.Query.Internal.SingleQueryingEnumerable`1.Enumerator.InitializeReader(Enumerator enumerator) at Microsoft.EntityFrameworkCore.Query.Internal.SingleQueryingEnumerable`1.Enumerator.&lt;&gt;c.&lt;MoveNext&gt;b__21_0(DbContext _, Enumerator enumerator) at EntityFrameworkCore.Jet.Storage.Internal.JetExecutionStrategy.Execute[TState,TResult](TState state, Func`3 operation, Func`3 verifySucceeded) at Microsoft.EntityFrameworkCore.Query.Internal.SingleQueryingEnumerable`1.Enumerator.MoveNext() at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection) at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source) at LibFacade.Utils.ModelSetWrapper`1..ctor() in [C:\Users\admin\Desktop\TEMP\projects\317\notebooks\Lib\LibFacade\ModelSetWrapper.cs](file:///C:/Users/admin/Desktop/TEMP/projects/317/notebooks/Lib/LibFacade/ModelSetWrapper.cs):line 12 at System.RuntimeMethodHandle.InvokeMethod(Object target, Void** arguments, Signature sig, Boolean isConstructor) at System.Reflection.MethodBaseInvoker.InvokeConstructorWithoutAlloc(Object obj, Boolean wrapInTargetInvocationException) </code></pre> <p>I have searched far an wide for a solution but don't really know how to start tackling this. Any pointers would be very welcome.</p>
<python><c#><entity-framework-core><python.net><jet-ef-provider>
2025-03-23 15:47:52
0
6,067
user32882
79,529,060
3,815,773
ssh not accepting access with keygen keys
<p>I have a Desktop plus NAS setup, and saved backups via ssh. It has worked for years, but then the NAS crashed and was replaced. Both desktop and NAS are now regular computers running Linux Mint LMDE6. But now something is wrong in the SSH workings.</p> <p>My backup program (<a href="https://sourceforge.net/projects/backuso/" rel="nofollow noreferrer">https://sourceforge.net/projects/backuso/</a>) needs to be started with sudo, as I need to copy root-owned files. SSH is configured with keys to avoid having to enter passwords for the crontab-controlled backups. Yet my Python program is still asking for passwords, as shown here:</p> <pre><code># Python code: command = &quot;ssh root@10.0.0.51 'ls \&quot;/home/BackupGen10/BackupRemote/BackusoTEST/\&quot;'&quot; proc = subprocess.run(command, text=True, encoding=&quot;UTF-8&quot;, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) # shell response: root@10.0.0.51's password: </code></pre> <p>But when I take this exact same command out of the big Backuso Python code and paste it into a tiny test program, it works perfectly well, NOT asking for passwords, and the output is as it should be:</p> <pre><code>Program code: #! /usr/bin/python3 # -*- coding: utf-8 -*- import subprocess command = &quot;ssh root@10.0.0.51 'ls \&quot;/home/BackupGen10/BackupRemote/BackusoTEST/\&quot;'&quot; proc = subprocess.run(command, text=True, encoding=&quot;UTF-8&quot;, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) print(&quot;Items in stdout:&quot;) for prs in proc.stdout.split(&quot;\n&quot;): print(prs) Output: $ ./test_ssh.py Items in stdout: 2024-06-18_13-53-45-824424 2024-06-18_13-54-53-855948 </code></pre> <p>What am I missing?</p>
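My current theory (unconfirmed) is that under <code>sudo</code> the ssh client runs as root and looks for keys in <code>/root/.ssh</code> rather than my user's <code>~/.ssh</code>, so pinning the identity file explicitly should make both runs behave the same. A sketch of how I'd build the command (the key path is hypothetical):

```python
import shlex

def ssh_cmd(user, host, remote_cmd, identity_file=None):
    # Pin the key file so the command works identically when run via sudo,
    # where ssh would otherwise search root's ~/.ssh instead of my user's.
    parts = ["ssh", "-o", "BatchMode=yes"]  # fail instead of prompting (good for cron)
    if identity_file:
        parts += ["-i", identity_file]
    parts += [f"{user}@{host}", remote_cmd]
    return " ".join(shlex.quote(p) for p in parts)
```

With <code>BatchMode=yes</code>, a broken key setup fails loudly instead of hanging on a password prompt inside crontab.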
<python><ssh>
2025-03-23 14:03:22
2
505
ullix
79,528,990
14,351,788
Segmentation Fault during open() of JSON file in Python 3.8
<p>I am encountering a segmentation fault in my Python application, which utilizes MQTT to send requests to a service. The service then writes the response into a JSON file, which the application subsequently reads to retrieve the results.</p> <p>The problem manifests as a segmentation fault when the application attempts to open this JSON file using open(). While not consistently reproducible on every function call, this issue occurs reliably after the function executes dozens or hundreds of times consecutively. Specifically, the following code snippet, particularly the line <code>with open(mqtt_response_path, 'r') as file:</code>, appears to be the source of the problem:</p> <pre><code># send request logging.info(f'the request info:') logging.info(f'{request}') send_mqtt(request) while True: try: with open(mqtt_response_path, 'r') as file: fcntl.flock(file, fcntl.LOCK_SH) json_data = json.load(file) if isinstance(json_data, str): json_data = json.loads(json_data) fcntl.flock(file, fcntl.LOCK_UN) # logging.info(f&quot;the return info:{json_data}, the info type{type(json_data)}&quot;) if json_data['message_id'] == message_id: response = json_data break except (json.JSONDecodeError, FileNotFoundError): logging.info(&quot;read failed&quot;) pass # time.sleep(0.01) # wait 0.01s except Exception as e: pass </code></pre> <p>The issue is particularly perplexing as <code>open()</code> is a standard Python function. To gain more insight, I used gdb to debug the issue and obtained the following backtrace:</p> <pre class="lang-none prettyprint-override"><code>Thread 86 &quot;python3.8&quot; received signal SIGSEGV, Segmentation fault. [Switching to Thread 0x7fff2a000640 (LWP 8129)] PyType_IsSubtype (b=0x5555558b1680 &lt;PyBufferedReader_Type&gt;, a=0x5555558b1616 &lt;PyBufferedWriter_Type+310&gt;) at Objects/typeobject.c:1369 1369 Objects/typeobject.c: No such file or directory. 
(gdb) bt #0 PyType_IsSubtype (b=0x5555558b1680 &lt;PyBufferedReader_Type&gt;, a=0x5555558b1616 &lt;PyBufferedWriter_Type+310&gt;) at Objects/typeobject.c:1369 #1 type_call (type=0x5555558b1680 &lt;PyBufferedReader_Type&gt;, args=0x7fffa8d43900, kwds=0x0) at Objects/typeobject.c:989 #2 0x00005555555c9dfd in _PyObject_MakeTpCall (callable=0x5555558b1680 &lt;PyBufferedReader_Type&gt;, args=&lt;optimized out&gt;, nargs=&lt;optimized out&gt;, keywords=0x0) at Objects/call.c:159 #3 0x00005555555ca31a in _PyObject_Vectorcall (kwnames=0x0, nargsf=&lt;optimized out&gt;, args=0x7fff29ffd060, callable=0x5555558b1680 &lt;PyBufferedReader_Type&gt;) at ./Include/cpython/abstract.h:125 #4 _PyObject_Vectorcall (kwnames=0x0, nargsf=&lt;optimized out&gt;, args=0x7fff29ffd060, callable=0x5555558b1680 &lt;PyBufferedReader_Type&gt;) at ./Include/cpython/abstract.h:115 #5 _PyObject_FastCall (nargs=&lt;optimized out&gt;, args=0x7fff29ffd060, func=0x5555558b1680 &lt;PyBufferedReader_Type&gt;) at ./Include/cpython/abstract.h:147 #6 _PyObject_CallFunctionVa (callable=0x5555558b1680 &lt;PyBufferedReader_Type&gt;, format=format@entry=0x5555557df707 &quot;Oi&quot;, va=va@entry=0x7fff29ffd0d0, is_size_t=is_size_t@entry=1) at Objects/call.c:941 #7 0x00005555555cb156 in _PyObject_CallFunctionVa (is_size_t=1, va=0x7fff29ffd0d0, format=0x5555557df707 &quot;Oi&quot;, callable=&lt;optimized out&gt;) at Objects/call.c:914 #8 _PyObject_CallFunction_SizeT (callable=&lt;optimized out&gt;, format=format@entry=0x5555557df707 &quot;Oi&quot;) at Objects/call.c:992 #9 0x0000555555725a76 in _io_open_impl (module=&lt;optimized out&gt;, opener=0x555555898a80 &lt;_Py_NoneStruct&gt;, closefd=1, newline=0x0, errors=0x0, encoding=0x0, buffering=4096, mode=0x7ffff7b862e0 &quot;r&quot;, file=&lt;optimized out&gt;) at ./Modules/_io/_iomodule.c:463 #10 _io_open (module=&lt;optimized out&gt;, args=&lt;optimized out&gt;, nargs=&lt;optimized out&gt;, kwnames=&lt;optimized out&gt;) at ./Modules/_io/clinic/_iomodule.c.h:279 
#11 0x000055555560156f in cfunction_vectorcall_FASTCALL_KEYWORDS (func=0x7ffff7bcf680, args=0x7ffec002e228, nargsf=&lt;optimized out&gt;, kwnames=&lt;optimized out&gt;) at Objects/methodobject.c:441 #12 0x00005555555b31fe in _PyObject_Vectorcall (kwnames=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, args=&lt;optimized out&gt;, callable=&lt;optimized out&gt;) at ./Include/cpython/abstract.h:127 #13 call_function (tstate=tstate@entry=0x7ffec40022a0, pp_stack=pp_stack@entry=0x7fff29ffd3c8, oparg=&lt;optimized out&gt;, kwnames=kwnames@entry=0x0) at Python/ceval.c:4963 #14 0x00005555555b5ea0 in _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3500 #15 0x00005555556785b3 in PyEval_EvalFrameEx (throwflag=0, f=0x7ffec002dff0) at Python/ceval.c:741 #16 _PyEval_EvalCodeWithName (_co=&lt;optimized out&gt;, globals=globals@entry=0x7ffff5d64280, locals=locals@entry=0x0, args=args@entry=0x7fffab6da990, argcount=1, kwnames=0x7fffa8de5e58, kwargs=0x7fffab6da998, kwcount=8, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7fffdf535db0, qualname=0x7fffe4906300) at Python/ceval.c:4298 #17 0x00005555555ca7d3 in _PyFunction_Vectorcall (func=func@entry=0x7fffdff9b670, stack=stack@entry=0x7fffab6da990, nargsf=nargsf@entry=1, kwnames=kwnames@entry=0x7fffa8de5e40) at Objects/call.c:436 #18 0x00005555555cc27d in PyVectorcall_Call (callable=0x7fffdff9b670, tuple=&lt;optimized out&gt;, kwargs=&lt;optimized out&gt;) at Objects/call.c:200 #19 0x00005555555b6d62 in do_call_core (kwdict=0x7fffa8d52b80, callargs=0x7fffa8d6f700, func=0x7fffdff9b670, tstate=0x7ffec40022a0) at Python/ceval.c:5010 #20 _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3559 #21 0x00005555556785b3 in PyEval_EvalFrameEx (throwflag=0, f=0x7ffec000fbb0) at Python/ceval.c:741 #22 _PyEval_EvalCodeWithName (_co=&lt;optimized out&gt;, globals=globals@entry=0x7ffff5d64280, locals=locals@entry=0x0, 
args=&lt;optimized out&gt;, argcount=1, kwnames=0x7fffe48ffc28, kwargs=0x7ffec002df88, kwcount=8, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x7fffe4900700, name=0x7ffff7a9f330, qualname=0x7fffe49619e0) at Python/ceval.c:4298 #23 0x00005555555ca7d3 in _PyFunction_Vectorcall (func=&lt;optimized out&gt;, stack=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, kwnames=&lt;optimized out&gt;) at Objects/call.c:436 #24 0x000055555575b9ca in _PyObject_Vectorcall (kwnames=0x7fffe48ffc10, nargsf=1, args=0x7ffec002df80, callable=0x7fffdff9b700) at ./Include/cpython/abstract.h:127 #25 method_vectorcall (method=&lt;optimized out&gt;, args=0x7ffec002df88, nargsf=&lt;optimized out&gt;, kwnames=0x7fffe48ffc10) at Objects/classobject.c:60 #26 0x00005555555b31fe in _PyObject_Vectorcall (kwnames=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, args=&lt;optimized out&gt;, callable=&lt;optimized out&gt;) at ./Include/cpython/abstract.h:127 #27 call_function (tstate=tstate@entry=0x7ffec40022a0, pp_stack=pp_stack@entry=0x7fff29ffda10, oparg=&lt;optimized out&gt;, kwnames=kwnames@entry=0x7fffe48ffc10) at Python/ceval.c:4963 #28 0x00005555555b6e22 in _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3515 #29 0x00005555556785b3 in PyEval_EvalFrameEx (throwflag=0, f=0x7ffec002dd90) at Python/ceval.c:741 #30 _PyEval_EvalCodeWithName (_co=&lt;optimized out&gt;, globals=globals@entry=0x7ffff5d64280, locals=locals@entry=0x0, args=args@entry=0x7fffef1f6cf0, argcount=1, kwnames=0x7fffa8de6538, kwargs=0x7fffef1f6cf8, kwcount=6, kwstep=1, defs=0x7fffe491b218, defcount=2, kwdefs=0x0, closure=0x0, name=0x7fffdf5378b0, qualname=0x7fffe4906350) at Python/ceval.c:4298 #31 0x00005555555ca7d3 in _PyFunction_Vectorcall (func=func@entry=0x7fffdff9b790, stack=stack@entry=0x7fffef1f6cf0, nargsf=nargsf@entry=1, kwnames=kwnames@entry=0x7fffa8de6520) at Objects/call.c:436 #32 0x00005555555cc27d in PyVectorcall_Call (callable=0x7fffdff9b790, 
tuple=&lt;optimized out&gt;, kwargs=&lt;optimized out&gt;) at Objects/call.c:200 #33 0x00005555555b6d62 in do_call_core (kwdict=0x7fffa8d59c00, callargs=0x7fffa8d8f460, func=0x7fffdff9b790, tstate=0x7ffec40022a0) at Python/ceval.c:5010 #34 _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3559 #35 0x00005555556785b3 in PyEval_EvalFrameEx (throwflag=0, f=0x7fffa8dbb9f0) at Python/ceval.c:741 #36 _PyEval_EvalCodeWithName (_co=&lt;optimized out&gt;, globals=globals@entry=0x7ffff5d64280, locals=locals@entry=0x0, args=&lt;optimized out&gt;, argcount=1, kwnames=0x7fffe490de38, kwargs=0x7ffec0011b88, kwcount=6, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x7fffe49007f0, name=0x7ffff7a9f330, qualname=0x7fffe49604b0) at Python/ceval.c:4298 #37 0x00005555555ca7d3 in _PyFunction_Vectorcall (func=&lt;optimized out&gt;, stack=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, kwnames=&lt;optimized out&gt;) at Objects/call.c:436 #38 0x000055555575b9ca in _PyObject_Vectorcall (kwnames=0x7fffe490de20, nargsf=1, args=0x7ffec0011b80, callable=0x7fffdff9b820) at ./Include/cpython/abstract.h:127 #39 method_vectorcall (method=&lt;optimized out&gt;, args=0x7ffec0011b88, nargsf=&lt;optimized out&gt;, kwnames=0x7fffe490de20) at Objects/classobject.c:60 #40 0x00005555555b31fe in _PyObject_Vectorcall (kwnames=&lt;optimized out&gt;, nargsf=&lt;optimized out&gt;, args=&lt;optimized out&gt;, callable=&lt;optimized out&gt;) at ./Include/cpython/abstract.h:127 #41 call_function (tstate=tstate@entry=0x7ffec40022a0, pp_stack=pp_stack@entry=0x7fff29ffe050, oparg=&lt;optimized out&gt;, kwnames=kwnames@entry=0x7fffe490de20) at Python/ceval.c:4963 #42 0x00005555555b6e22 in _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3515 #43 0x00005555556785b3 in PyEval_EvalFrameEx (throwflag=0, f=0x7ffec0011930) at Python/ceval.c:741 #44 _PyEval_EvalCodeWithName (_co=&lt;optimized out&gt;, 
globals=globals@entry=0x7ffff5d64280, locals=locals@entry=0x0, args=args@entry=0x7fffa8d85b10, argcount=1, kwnames=0x7fffa8dd4828, kwargs=0x7fffa8d85b18, kwcount=5, kwstep=1, defs=0x0, defcount=0, kwdefs=0x0, closure=0x0, name=0x7fffdf53d1c0, qualname=0x7fffe490de70) at Python/ceval.c:4298 #45 0x00005555555ca7d3 in _PyFunction_Vectorcall (func=func@entry=0x7fffdff9be50, stack=stack@entry=0x7fffa8d85b10, nargsf=nargsf@entry=1, kwnames=kwnames@entry=0x7fffa8dd4810) at Objects/call.c:436 #46 0x00005555555cc27d in PyVectorcall_Call (callable=0x7fffdff9be50, tuple=&lt;optimized out&gt;, kwargs=&lt;optimized out&gt;) at Objects/call.c:200 #47 0x00005555555b6d62 in do_call_core (kwdict=0x7fffa8d74540, callargs=0x7fffa8d85430, func=0x7fffdff9be50, tstate=0x7ffec40022a0) at Python/ceval.c:5010 #48 _PyEval_EvalFrameDefault (f=&lt;optimized out&gt;, throwflag=&lt;optimized out&gt;) at Python/ceval.c:3559 </code></pre> <p>The backtrace suggests a type mismatch between PyBufferedReader_Type and PyBufferedWriter_Type, potentially indicating an issue at the CPython level.</p> <p>My analysis leads me to believe this might be a low-level CPython issue, but I still don't know how to fix it.</p> <ul> <li>Is there a known effective solution to this problem?</li> <li>Should I consider upgrading or downgrading my Python environment (currently Python 3.8)?</li> <li>Are there any workarounds or alternative approaches to reading the JSON file that could mitigate this issue?</li> </ul>
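One workaround worth testing (a hypothetical sketch, not taken from the crashing program): since the fault occurs inside `io.open()` (frame #9 of the backtrace) while multiple threads are reading, serialize only the open/read step behind a lock and run the CPU-bound `json.loads` outside it. This is a mitigation to try, not a verified fix; moving off Python 3.8 (now end-of-life) is the other obvious experiment, since the crash is inside the interpreter itself.

```python
import json
import threading

# Assumption: the segfault is triggered by concurrent io.open() calls
# from worker threads. The lock serializes only the file I/O; the
# parsing step still runs in parallel across threads.
_open_lock = threading.Lock()

def read_json_safely(path):
    with _open_lock:                  # one thread at a time opens/reads
        with open(path, "rb") as f:
            raw = f.read()
    return json.loads(raw)            # parsing needs no lock
```

If the crash persists even with serialized opens, the threading angle is probably not the trigger and an interpreter upgrade is the next thing to test.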
<python><python-3.x><segmentation-fault>
2025-03-23 13:06:50
1
437
Carlos
79,528,948
11,062,613
How to efficiently use NumPy's StringDType for string operations (e.g., joining strings)?
<p>I'm trying to perform string operations with NumPy's StringDType. As an example, I've attempted to join strings with a separator row-wise. In the past, NumPy's string operations were somewhat slower compared to Python list comprehensions, and I was hoping that with the introduction of NumPy's StringDType (which supports variable string sizes), these operations would have improved. However, I haven't been able to achieve a significant performance boost so far.</p> <p>Are there better options for efficiently performing operations like string joining using NumPy's StringDType?</p> <p>Here's a sample code where I test several methods that try to leverage vectorized operations:</p> <pre><code>import timeit from functools import reduce import numpy as np import polars as pl from numpy.dtypes import StringDType def interleave_separator(arr, sep=', '): &quot;&quot;&quot;Interleave a separator into a 2D array column-wise (costly helper).&quot;&quot;&quot; nrows, ncols = arr.shape interleaved = np.empty((nrows, 2 * ncols - 1), dtype=StringDType) interleaved[:, ::2] = arr interleaved[:, 1::2] = sep return interleaved def strings_join_py(arr, sep=', '): &quot;&quot;&quot;Python list comprehension.&quot;&quot;&quot; return [sep.join(a) for a in arr] def strings_join_pl(arr, sep=', '): &quot;&quot;&quot;Polars join series of lists.&quot;&quot;&quot; return arr.list.join(separator=sep) def strings_join_np1(arr, sep=', '): &quot;&quot;&quot;Numpy interleave separator and apply sum.&quot;&quot;&quot; return np.sum(interleave_separator(arr, sep), axis=1) def strings_join_np2(arr, sep=', '): &quot;&quot;&quot;Numpy interleave separator and apply add.reduce.&quot;&quot;&quot; return np.strings.add.reduce(interleave_separator(arr, sep), axis=1) def strings_join_np3(arr, sep=', '): &quot;&quot;&quot;Numpy/Python accumulate over columns row-wise.&quot;&quot;&quot; return reduce(lambda x, y: x + sep + y, arr.T) </code></pre> <p>Check results:</p> <pre><code>np.random.seed(999) choices 
= [&quot;apple&quot;, &quot;banana&quot;, &quot;cherry&quot;, &quot;salad&quot;] arr = np.random.choice(choices, size=(3, 3)).astype(StringDType) sep = &quot;, &quot; print('2D array:') print(arr) # [['apple' 'apple' 'banana'] # ['banana' 'apple' 'banana'] # ['salad' 'salad' 'banana']] print('1D array joined by separator:') print(strings_join_py(arr.tolist(), sep)) print(strings_join_pl(pl.Series(arr.tolist()), sep)) print(strings_join_np1(arr, sep)) print(strings_join_np2(arr, sep)) print(strings_join_np3(arr, sep)) # ['apple, apple, banana' # 'banana, apple, banana' # 'salad, salad, banana'] </code></pre> <p>Run benchmarks:</p> <pre><code>np.random.seed(999) choices = [&quot;apple&quot;, &quot;banana&quot;, &quot;cherry&quot;, &quot;salad&quot;] arr = np.random.choice(choices, size=(100_000, 10)).astype(StringDType) lst = arr.tolist() ser = pl.Series(lst) sep = &quot;, &quot; baseline = timeit.timeit(lambda: strings_join_py(lst, sep), number=5) time_pl = timeit.timeit(lambda: strings_join_pl(ser, sep), number=5) time_np1 = timeit.timeit(lambda: strings_join_np1(arr, sep), number=5) time_np2 = timeit.timeit(lambda: strings_join_np2(arr, sep), number=5) time_np3 = timeit.timeit(lambda: strings_join_np3(arr, sep), number=5) print(&quot;Ratio compared to Python list comprehension (&gt;1 is faster)&quot;) print(f&quot;pl: {baseline/time_pl:.2f}&quot;) print(f&quot;np1: {baseline/time_np1:.2f}&quot;) print(f&quot;np2: {baseline/time_np2:.2f}&quot;) print(f&quot;np3: {baseline/time_np3:.2f}&quot;) # pl: 1.11 # Polars Series.list.join # np1: 0.14 # interleaved np.sum # np2: 0.14 # interleaved np.add.reduce # np3: 0.19 # reduce np.add </code></pre> <p>Edit - Hereโ€™s a benchmark with an array shape of (500,000, 2):</p> <pre><code># Ratio compared to Python list comprehension (&gt;1 is faster) # pl: 0.61 # np1: 0.31 # np2: 0.31 # np3: 1.57 </code></pre> <p>The data type seems to perform well (see np3) but there seem to be not enough specialized functions at the moment to 
increase the usability.</p> <p>Edit: Observation</p> <p>I've observed that NumPy's string ufunc (np.strings.add) is quite efficient if there arenโ€™t many intermediate results to compute. As the number of accumulated columns increases, its efficiency declines compared to a Python list comprehension.</p> <p>Here's a small benchmark showing the impact of intermediate results as the number of accumulated columns rises:</p> <pre><code># Ratio compared to Python list comprehension (&gt;1 is faster) # Py_list_comp / np.strings.add: 0.77 - (shape (500000, 2)) # Py_list_comp / np.strings.add: 0.04 - (shape (1000, 1000)) </code></pre>
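A follow-up on the observation above (a sketch, not a claim about NumPy internals): the left-fold in `strings_join_np3` re-copies the growing prefix k-1 times for k columns, so a tree-shaped pairwise reduction, which only needs O(log k) passes, should suffer less from intermediate results as the column count grows. Object dtype is used below so the snippet runs on older NumPy; the same function accepts StringDType arrays on NumPy >= 2.0.

```python
import numpy as np

def join_cols_pairwise(arr, sep=", "):
    """Row-wise join via tree reduction: columns are combined in pairs
    at each level, so long prefixes are re-copied O(log k) times
    instead of O(k) as in a left fold."""
    cols = list(arr.T)  # one 1-D array per column
    while len(cols) > 1:
        nxt = [cols[i] + sep + cols[i + 1] for i in range(0, len(cols) - 1, 2)]
        if len(cols) % 2:        # odd column left over: carry to next level
            nxt.append(cols[-1])
        cols = nxt
    return cols[0]

# object dtype for portability; swap in StringDType on NumPy >= 2.0
arr = np.array([["apple", "banana", "cherry"],
                ["salad", "apple", "banana"]], dtype=object)
```

Whether this actually beats the list comprehension on the (1000, 1000) case above still needs benchmarking; the point is only that the number of ever-growing intermediates is the likely bottleneck.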
<python><string><numpy>
2025-03-23 12:29:48
0
423
Olibarer
79,528,403
4,609,089
unsloth save_pretrained_merged function issue
<p>I tried to save the model with the below code but failed. Planning to use <code>unsloth/Llama-3.2-11B-Vision-Instruct</code> as a base model to fine-tune a new model.</p> <pre><code>!pip install unsloth model, tokenizer = FastVisionModel.from_pretrained( &quot;unsloth/Llama-3.2-11B-Vision-Instruct&quot;, load_in_4bit = False, use_gradient_checkpointing = &quot;unsloth&quot;, ) ... if True: model.save_pretrained_merged( &quot;unsloth_finetune&quot;, tokenizer, save_method = &quot;merged_16bit&quot;,) </code></pre> <p>with error</p> <pre><code>File ~/.local/lib/python3.10/site-packages/unsloth/save.py:2357, in unsloth_generic_save_pretrained_merged(self, save_directory, tokenizer, save_method, push_to_hub, token, is_main_process, state_dict, save_function, max_shard_size, safe_serialization, variant, save_peft_format, tags, temporary_location, maximum_memory_usage) [2355](~/.local/lib/python3.10/site-packages/unsloth/save.py:2355) arguments[&quot;model&quot;] = self [2356](~/.local/lib/python3.10/site-packages/unsloth/save.py:2356) del arguments[&quot;self&quot;] -&gt; [2357](~/.local/lib/python3.10/site-packages/unsloth/save.py:2357) unsloth_generic_save(**arguments) [2358](~/.local/lib/python3.10/site-packages/unsloth/save.py:2358) for _ in range(3): [2359](~/.local/lib/python3.10/site-packages/unsloth/save.py:2359) gc.collect() File ~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:116, in context_decorator.&lt;locals&gt;.decorate_context(*args, **kwargs) [113](~/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py:113) @functools.wraps(func) ... 
--&gt; [472](~/.local/lib/python3.10/site-packages/unsloth_zoo/saving_utils.py:472) try_save_directory = temp_file.name [474](~/.local/lib/python3.10/site-packages/unsloth_zoo/saving_utils.py:474) total, used, free = shutil.disk_usage(save_directory) [475](~/.local/lib/python3.10/site-packages/unsloth_zoo/saving_utils.py:475) free = int(free*0.95) AttributeError: 'NoneType' object has no attribute 'name' </code></pre> <p>All the help is appreciated.</p>
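Not a confirmed fix, but the failure happens in unsloth_zoo's temp-file setup right before the `shutil.disk_usage(save_directory)` check, so two cheap things to try before re-running are creating the output and temp directories up front and confirming free space. (The `temporary_location` keyword name below is taken from the signature visible in the traceback; its effect here is unverified.)

```python
import os
import shutil

def preflight(save_directory, tmp_dir="/tmp/unsloth_tmp", min_free_gb=20):
    """Create the output and temp dirs up front and check free space,
    since the traceback dies around temp-file / disk-usage handling."""
    os.makedirs(save_directory, exist_ok=True)
    os.makedirs(tmp_dir, exist_ok=True)
    free_gb = shutil.disk_usage(save_directory).free / 1e9
    return free_gb >= min_free_gb

# Untested sketch of how it would slot into the failing call:
# if preflight("unsloth_finetune"):
#     model.save_pretrained_merged(
#         "unsloth_finetune", tokenizer,
#         save_method="merged_16bit",
#         temporary_location="/tmp/unsloth_tmp",  # kwarg name from traceback
#     )
```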
<python><model><fine-tuning>
2025-03-23 03:23:36
1
833
John
79,528,380
1,394,353
Is it possible to turn off printing the id (hex address) globally for Python objects?
<p>When you don't provide a <code>__repr__</code> or <code>__str__</code> method on a custom class, you just get a class name and the Python address of the object (or, to be more specific, what <code>id(self)</code> would return).</p> <p>This is fine most of the time. And it is very helpful when you are debugging some code and you want to see if instances are/are not the same, visually. But to be honest I <em>almost never care about that id value</em>.</p> <p>However it also means that running a program with debugging print functions never looks the same. Ditto if you are comparing log files. Unless you write a lot of <code>__repr__</code> only to avoid this issue. Or if you pre-format the log files to zero out the hex values on the default object prints.</p> <h5>A sample program to illustrate what I would like to do: not have that id printed.</h5> <pre><code>class ILookDifferentEveryRun: &quot;baseline behavior&quot; def __init__(self,a): &quot;I don't actually care about `a`, that's why I don't need a `repr`&quot; self.a = a class ILookTheSameEveryRun(ILookDifferentEveryRun): &quot;&quot;&quot;this is my workaround, a cut and paste of a default __repr__&quot;&quot;&quot; def __repr__(self) : return type(self).__name__ class ILookAlmostLikeBuiltinRepr(ILookDifferentEveryRun): &quot;can I do this with a global switch?&quot; def __repr__(self) : &quot;&quot;&quot;this is more or less what I want&quot;&quot;&quot; res = f&quot;&lt;{type(self).__module__}.{type(self).__name__} object&gt; at &lt;dontcare&gt;&quot; return res inst1 = ILookDifferentEveryRun(a=1) inst2 = ILookTheSameEveryRun(a=1) inst3 = ILookAlmostLikeBuiltinRepr(a=1) print(inst1) print(inst2) print(inst3) </code></pre> <p>run twice:</p> <pre><code>&lt;__main__.ILookDifferentEveryRun object at 0x100573260&gt; ILookTheSameEveryRun &lt;__main__.ILookAlmostLikeBuiltinRepr object&gt; at &lt;dontcare&gt; </code></pre> <pre><code>&lt;__main__.ILookDifferentEveryRun object at 0x104ca7320&gt; ILookTheSameEveryRun
&lt;__main__.ILookAlmostLikeBuiltinRepr object&gt; at &lt;dontcare&gt; </code></pre> <p>I took a look at the startup flags for the python interpreter, but nothing seems to allow for this. Any workarounds? I know I could also put the repr on a Mixin and reuse that everywhere, but that's ugly too.</p> <p>If I can't, that's fine and that's what I am expecting to hear. Just wondering if someone else had the same problem and found a way.</p> <p>p.s. this is less about dedicated printing of instances and more about things like <code>print(mylist)</code> where <code>mylist=[item1,item2,item3]</code>, generally any complex data structures with nested items in them.</p>
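Riffing on the "pre-format the log files" workaround the question mentions: a small post-processing helper can rewrite the `at 0x...` part of whatever `repr()` produces, without touching any class, which also covers nested structures like `print(mylist)`. A sketch:

```python
import re

_HEX_ID = re.compile(r" at 0x[0-9a-fA-F]+")

def stable_repr(obj):
    """repr() with ' at 0x...' addresses blanked so output diffs cleanly
    across runs; works for nested structures (lists, dicts, tuples)
    because the substitution runs over the whole repr string."""
    return _HEX_ID.sub(" at 0x0", repr(obj))

class Demo:
    pass
```

The same regex can be applied line-by-line to already-written log files; it does not, of course, change what the interpreter itself prints, which as far as I know has no global switch.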
<python><debugging><logging><repr>
2025-03-23 02:42:03
4
12,224
JL Peyret
79,528,264
9,560,245
How to sanitize indentation in the lists in a generated Markdown document?
<p>In my Python application I need to render to HTML a Markdown document generated by third-party software. The document contains long lists that contain non-standard and incompatible formatting (especially, blank lines between the elements of the same bulleted list). Additionally, the software does not comply with the indentation standard, so the lists may look like:</p> <pre class="lang-markdown prettyprint-override"><code>## Enumeration 1. Element 1 2. Element 2 ## List (all elements rely on the same topic) * Element 1 level 1 * **Element 2** Level 2 * Element 3 level 1 * Element 4 level 2 * Element 5 level 3 - Element 6 level 1 - Element 7 level 2 - Element 8 level 3 </code></pre> <p>How would I sanitize this Markdown document in Python and set it up properly for the rendering? It is stored in a single Python string, including all the newlines, etc., and I need to expose the result in the same way.</p>
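One pragmatic sketch (assumptions flagged in comments): a line-based pass that re-indents list markers to a fixed step and removes single blank lines between sibling items before handing the string to a renderer. The `src_step=3` guess matches the sample above, but the right value depends on the actual generator.

```python
import re

_ITEM = re.compile(r"^(\s*)([-*+]|\d+\.)(\s+)")

def normalize_lists(md, src_step=3, dst_step=4):
    """Re-indent list items to dst_step spaces per level (assuming the
    generator used roughly src_step) and drop blank lines that split
    sibling items of the same list."""
    out, in_list = [], False
    for line in md.splitlines():
        m = _ITEM.match(line)
        if m:
            depth = round(len(m.group(1)) / src_step)
            line = " " * (dst_step * depth) + line[m.end(1):]
            if in_list and out and not out[-1].strip():
                out.pop()  # blank line between items: remove it
            in_list = True
        elif line.strip():
            in_list = False
        out.append(line)
    return "\n".join(out)
```

This deliberately stays regex-simple; it will misjudge indented code blocks that start with `-` or digits, so a Markdown-aware parser is the safer route if the input gets more exotic.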
<python><markdown>
2025-03-22 23:32:45
1
596
Andrei Vukolov
79,528,196
2,196,069
How do I call a recursion method for this minimal coin change problem?
<blockquote> <p>given coins <code>[10, 5, 1]</code> cents, find the minimum number of each coin to total <code>18</code> cents. A solution is supposed to return the coin collection <code>[10, 5, 1, 1, 1]</code>.</p> </blockquote> <p>I can't get it to move past the first coin, <code>10</code>, in the collection. It's supposed to do the rest with recursion.</p> <pre class="lang-python prettyprint-override"><code>def r_change_money(total, denominations): def collect_coins(total, denominations): if total == 0: return False # fd (floor division) fd = total // denominations[0] addl_coins = [denominations[0]] * fd total -= fd * denominations[0] denominations.pop(0) return addl_coins # sort highest denom first denominations.sort(reverse=True) # holds coin collection coins = [] coins += collect_coins(total, denominations) return coins # [10, 5, 1, 1, 1] print(r_change_money(18, [1, 10, 5])) # [6, 1, 1] (although safe answer is [4, 4]) print(r_change_money(8, [6, 1, 4])) </code></pre> <p>That code has no recursion right now, and I am getting:</p> <pre><code>[10] [6] </code></pre> <p>How do I make this work with recursion?</p>
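For reference, one way to restructure the helper so the recursion actually happens: instead of mutating `denominations` and returning after the first coin, pass the remainder and the remaining coins back into the same function. This keeps the greedy strategy, so it still returns `[6, 1, 1]` rather than `[4, 4]` for the second case, as the question notes.

```python
def r_change_money(total, denominations):
    denominations = sorted(denominations, reverse=True)

    def collect(total, denoms):
        # base cases: nothing left to make up, or no coins left to try
        if total == 0 or not denoms:
            return []
        fd, rest = divmod(total, denoms[0])
        # take as many of the largest coin as fit, then recurse on the rest
        return [denoms[0]] * fd + collect(rest, denoms[1:])

    return collect(total, denominations)
```

A guaranteed-minimal answer for arbitrary coin sets needs dynamic programming (or recursion that tries *every* denomination at each step), not greedy selection.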
<python><algorithm><recursion><coin-change>
2025-03-22 22:06:07
4
3,174
haltersweb
79,528,081
1,267,833
Installing cmdstanpy fast on Google Colab
<p>After finding out that I would need to reinstall certain Python packages in Google Colab every time I refresh a runtime, I quickly lost interest in trying to use Google Colab to run <code>stan</code> code. In particular, the last step in installing <code>cmdstanpy</code></p> <pre><code>!pip install cmdstanpy import cmdstanpy cmdstanpy.install_cmdstan() </code></pre> <p>takes about 10 or so minutes!</p> <p>However, I have noticed <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/CmdStanPy_Example_Notebook.ipynb" rel="nofollow noreferrer">this page</a> provided a clever solution that would only require me to install cmdstanpy once. This solution saves all of the stan and c++ files, compresses them into a <code>.tar.gz</code>, and then reads that back in at the beginning of each session.</p> <p>Unfortunately, some of the source code appears to be outdated. The code in that notebook throws an error after complaining a binary executable isn't getting the right command line parameters.</p> <p>My question: how can I regenerate this <code>.tar.gz</code> every so often? How can I save all the files created by calling <code>cmdstanpy.install_cmdstan()</code> to be read back in later?</p> <p>This is my current attempt, but I don't think it's grabbing everything. When I re-upload everything, and decompress it, it complains it can't find everything.</p> <pre><code>!pip install cmdstanpy import cmdstanpy cmdstanpy.install_cmdstan() # this takes a while # write everything out to disk import os import shutil cmdstan_dir = cmdstanpy.__path__ tar_filename = '/content/cmdstan_files.tar.gz' shutil.make_archive(tar_filename.replace('.tar.gz', ''), 'gztar', cmdstan_dir) </code></pre>
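A likely culprit in the attempt above: `cmdstanpy.__path__` points at the Python package itself, while `install_cmdstan()` puts the toolchain under the directory reported by `cmdstanpy.cmdstan_path()` (typically `~/.cmdstan/cmdstan-x.y.z`). A sketch of the archive/restore round trip, with the cmdstanpy-specific calls left as untested comments since they need a Colab runtime:

```python
import shutil

def archive_dir(src_dir, archive_base):
    """Pack src_dir into <archive_base>.tar.gz; returns the archive path."""
    return shutil.make_archive(archive_base, "gztar", root_dir=src_dir)

def restore_dir(archive_path, dest_dir):
    """Unpack an archive produced by archive_dir into dest_dir."""
    shutil.unpack_archive(archive_path, extract_dir=dest_dir)

# In Colab (untested sketch):
#   import cmdstanpy
#   archive_dir(cmdstanpy.cmdstan_path(), "/content/cmdstan_files")
# and in a fresh runtime, after restoring into some target dir:
#   restore_dir("/content/cmdstan_files.tar.gz", target)
#   cmdstanpy.set_cmdstan_path(target)
```

You would still `pip install cmdstanpy` each session (that part is fast); the archive only replaces the slow `install_cmdstan()` compile step.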
<python><google-colaboratory><stan>
2025-03-22 20:32:32
1
2,157
Taylor
79,528,064
10,321,138
How to await within a Firebase Cloud Function in Python
<p>Can Firebase functions be marked as <code>async</code>?</p> <p>For example:</p> <pre><code>async def async_function() -&gt; str: print(&quot;async function starting&quot;) await asyncio.sleep(1) print (&quot;async function finished&quot;) return &quot;finished&quot; @https_fn.on_request() async def sample_cloud_function(req: https_fn.Request) -&gt; https_fn.Response: result = await async_function() return https_fn.Response(result) </code></pre> <p>The above throws a type error when emulating:</p> <blockquote> <p>TypeError: The view function did not return a valid response. The return type must be a string, dict, list, tuple with headers or status, Response instance, or WSGI callable, but it was a coroutine.</p> </blockquote> <p>But I am able to deploy it. When deploying though nothing happens, I get a warning saying the function is never awaited.</p> <p>I also can't find documentation on this. What are best practices here?</p>
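The error message itself says the handler must return a Response, not a coroutine, which suggests the Python SDK expects synchronous handlers. One common bridging pattern (a sketch, not verified against firebase-functions) is to keep the decorated function synchronous and drive the coroutine with `asyncio.run`:

```python
import asyncio

def run_async(coro):
    """Bridge an async helper into a synchronous handler:
    asyncio.run creates an event loop, runs the coroutine to
    completion, and closes the loop."""
    return asyncio.run(coro)

async def async_function():
    await asyncio.sleep(0)
    return "finished"

# Sketch of how it would slot into the handler (unverified):
# @https_fn.on_request()
# def sample_cloud_function(req):
#     return https_fn.Response(run_async(async_function()))
```

One caveat: `asyncio.run` raises if an event loop is already running in the serving framework; in that case `asyncio.new_event_loop()` plus `run_until_complete` is the usual fallback.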
<python><firebase><google-cloud-functions>
2025-03-22 20:15:31
1
1,896
yambo
79,528,000
511,081
Resolving Cyclic Import in Custom Type System for Expression Evaluator
<p>I'm building a custom expression evaluator that handles operations like max(date_1, date_2) + 90 using type-specific classes. The initial implementation worked when all classes were in a single file, but splitting them into separate modules caused cyclic imports.</p> <p><strong>Working Single-File Version</strong></p> <pre><code># Custom Data Type Definitions # evaluator.py (original) from datetime import date, timedelta class DateValue: def __init__(self, value: date): self.value = value def __add__(self, other): if isinstance(other, IntValue): # References IntValue return DateValue(self.value + timedelta(days=other.value)) # ... other cases ... class IntValue: def __init__(self, value: int): self.value = value def __add__(self, other): if isinstance(other, DateValue): # References DateValue return DateValue(other.value + timedelta(days=self.value)) # ... other cases ... </code></pre> <p><strong>Problem When Modularizing</strong></p> <pre><code># int_value.py from date_value import DateValue # &lt;-- Cyclic import class IntValue: def __add__(self, other): if isinstance(other, DateValue): # Requires DateValue # ... implementation ... # date_value.py from int_value import IntValue # &lt;-- Cyclic import class DateValue: def __add__(self, other): if isinstance(other, IntValue): # Requires IntValue # ... implementation ... 
</code></pre> <p><strong>Constraints</strong></p> <ol> <li><p>Need to maintain strong type checking (isinstance)</p> </li> <li><p>Want to avoid local imports inside <code>__add__</code> methods like:</p> </li> </ol> <pre><code>def __add__(self, other): from date_value import DateValue # Not desired </code></pre> <p><strong>Attempted Solutions</strong></p> <ol> <li><p>Using interface/base classes didn't resolve the circular dependency</p> </li> <li><p>Tried <code>from __future__ import annotations</code> with string type hints, but runtime isinstance checks still require concrete classes</p> </li> </ol> <p><strong>Questions</strong></p> <p>How can I structure these interdependent classes across modules while avoiding:</p> <ul> <li><p>Cyclic imports</p> </li> <li><p>Local imports inside operator methods</p> </li> <li><p>Sacrificing type safety checks?</p> </li> </ul> <p>Are there established patterns for cross-dependent type systems in Python that handle this scenario cleanly?</p>
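One established pattern that satisfies all three constraints (a sketch with hypothetical module names): move the dispatch table into a small `registry` module that imports none of the value classes, let each value module import only the registry, and register the cross-type handlers in a third `ops` module, the only place that imports both concrete classes. The import graph is then acyclic. Module boundaries are shown as comments so the snippet runs as one file:

```python
from datetime import date, timedelta

# --- registry.py: imported by every value module, imports none of them ---
_ADD = {}

def register_add(left_t, right_t):
    def deco(fn):
        _ADD[(left_t, right_t)] = fn
        return fn
    return deco

def dispatch_add(left, right):
    fn = _ADD.get((type(left), type(right)))
    return NotImplemented if fn is None else fn(left, right)

# --- date_value.py: imports only registry ---
class DateValue:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return dispatch_add(self, other)

# --- int_value.py: imports only registry ---
class IntValue:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return dispatch_add(self, other)

# --- ops.py: the ONLY module that imports both concrete classes ---
@register_add(DateValue, IntValue)
def _date_plus_int(d, i):
    return DateValue(d.value + timedelta(days=i.value))

@register_add(IntValue, DateValue)
def _int_plus_date(i, d):
    return DateValue(d.value + timedelta(days=i.value))
```

Note the trade-off: this dispatches on exact `type()` rather than `isinstance`, which is usually fine for value types; if subclassing matters, the handlers themselves can still perform `isinstance` checks, or `functools.singledispatch` can be registered in `ops.py` the same way.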
<python>
2025-03-22 19:28:11
0
361
amu61
79,527,893
7,741,377
Django models migration gets generated after managed set to False
<p>In my Django project, I have 2 databases, and I'm trying to achieve a cross-database foreign key relationship. The models Form, FormSubmission, CmsEvent are in one Postgres database and the models CmsEventOrder, CmsEventPayment are in another Postgres database.</p> <p>When I run makemigrations, migrations are created for all the models, while they should only be created for CmsEventOrder and CmsEventPayment.</p> <p>The models CmsEventOrder and CmsEventPayment have foreign keys to the other 3 models. But the other 3 models have <code>managed = False</code>, so how do I reference them? Are the migrations getting generated because of the foreign keys?</p> <p>This is my view</p> <pre><code>class CreateOrderView(APIView): def post(self, request): try: # Extract data from request amount = int(request.data.get('amount', 0)) * \ 100 # Convert to paise currency = request.data.get('currency', 'INR') receipt_id = &quot;receipt_id_&quot; + ''.join(random.choices( string.ascii_letters + string.digits, k=12)) notes = request.data.get('notes', {}) event_id = request.data.get('event_id') form_submission_id = request.data.get('form_submission_id') if amount &lt;= 0: return Response({&quot;error&quot;: &quot;Amount must be greater than zero.&quot;}, status=status.HTTP_400_BAD_REQUEST) # Validate event and form submission event = get_object_or_404(CmsEvent, id=event_id) form_submission = get_object_or_404( FormSubmission, id=form_submission_id) # Initialize Razorpay client client = get_razorpay_client() # Create an order order_data = { 'amount': amount, 'currency': currency, 'receipt': receipt_id, 'notes': notes } order = client.order.create(order_data) # Save order to database CmsEventOrder.objects.create( order_id=order['id'], receipt_id=receipt_id, # Convert back to original currency amount=order['amount'] / 100, amount_due=order['amount_due'] / 100, currency=order['currency'], status=order['status'], notes=notes, event=event, form_submission=form_submission ) return Response(order,
status=status.HTTP_201_CREATED) except Exception as e: logger.error(&quot;Failed to create order&quot;, error=str(e)) return Response({&quot;error&quot;: str(e)}, status=status.HTTP_500_INTERNAL_SERVER_ERROR) </code></pre> <p>And this below is my models</p> <pre><code>class Form(models.Model): id = models.AutoField(primary_key=True) title = models.CharField(max_length=255, help_text=&quot;Enter the form title&quot;) submit_button_label = models.CharField( max_length=255, help_text=&quot;Enter the submit button label (e.g., Submit, Register)&quot; ) confirmation_type = models.CharField( max_length=10, choices=ConfirmationType.choices, help_text=&quot;Select the type of confirmation after form submission&quot; ) confirmation_message = models.JSONField( blank=True, null=True, help_text=&quot;Provide a confirmation message in JSON format if confirmation type is 'Message'&quot; ) redirect_url = models.URLField( max_length=2048, blank=True, null=True, help_text=&quot;Provide a redirect URL if confirmation type is 'Redirect'&quot; ) updated_at = models.DateTimeField( auto_now=True, help_text=&quot;Timestamp when the form was last updated&quot; ) created_at = models.DateTimeField( auto_now_add=True, help_text=&quot;Timestamp when the form was created&quot; ) class Meta: app_label = &quot;cms&quot; db_table = &quot;forms&quot; managed = False verbose_name = _(&quot;Form&quot;) verbose_name_plural = _(&quot;Forms&quot;) def __str__(self): return self.title class FormSubmission(models.Model): id = models.AutoField(primary_key=True) form = models.ForeignKey( Form, on_delete=models.CASCADE, related_name='submissions', help_text=&quot;Reference to the associated form&quot; ) updated_at = models.DateTimeField( auto_now=True, help_text=&quot;Timestamp when the submission was last updated&quot; ) created_at = models.DateTimeField( auto_now_add=True, help_text=&quot;Timestamp when the submission was created&quot; ) class Meta: app_label = &quot;cms&quot; db_table = 
&quot;form_submissions&quot; managed = False verbose_name = &quot;Form Submission&quot; verbose_name_plural = &quot;Form Submissions&quot; def __str__(self): return f&quot;Submission {self.id} for Form {self.form_id}&quot; class CmsEvent(models.Model): id = models.AutoField(primary_key=True) event_uuid = models.CharField( max_length=36, unique=True, editable=False, help_text=&quot;Read-only UUID for the event&quot; ) name = models.CharField( max_length=128, help_text=&quot;Enter the name of the event&quot;) description = models.JSONField( default=dict, help_text=&quot;Provide a detailed description of the event&quot;) image = models.ForeignKey( &quot;Media&quot;, on_delete=models.CASCADE, related_name=&quot;events&quot;, help_text=&quot;Select the event image&quot; ) venue = models.CharField(max_length=255, blank=True, null=True, help_text=&quot;Enter the venue name (optional)&quot;) city = models.CharField(max_length=255, blank=True, null=True, help_text=&quot;Enter the city (optional)&quot;) locality = models.CharField( max_length=255, blank=True, null=True, help_text=&quot;Enter the locality (optional)&quot;) google_maps_link = models.URLField( validators=[URLValidator()], help_text=&quot;Enter the Google Maps link for the event&quot;) google_form_link = models.URLField( validators=[URLValidator()], help_text=&quot;Enter the Google Maps link for the event&quot;) payment_link = models.URLField( validators=[URLValidator()], help_text=&quot;Enter the Google Maps link for the event&quot;) is_online = models.BooleanField( default=False, help_text=&quot;Check this if the event is only an online event.&quot;) is_online_and_offline = models.BooleanField( default=False, help_text=&quot;Check this if an offline event is also streamed online.&quot;) start_date = models.DateTimeField( help_text=&quot;Event start date with time and timezone&quot;) end_date = models.DateTimeField( blank=True, null=True, help_text=&quot;Event end date with time and timezone&quot;) start_time = 
models.CharField( max_length=10, blank=True, null=True, help_text=&quot;Enter the start time (e.g., HH:MM)&quot;) end_time = models.CharField( max_length=10, blank=True, null=True, help_text=&quot;Enter the end time (e.g., HH:MM)&quot;) deleted = models.BooleanField( default=False, help_text=&quot;Mark this if the event is deleted.&quot;) is_online = models.BooleanField( default=False, help_text=&quot;Check this if the event is an online event.&quot;) location = models.JSONField( default=dict, help_text=&quot;Provide structured location data for the event&quot;) form = models.ForeignKey( Form, on_delete=models.SET_NULL, blank=True, null=True, related_name=&quot;events&quot;, help_text=&quot;Select the associated form for the event&quot; ) price = models.IntegerField(blank=False, null=False, validators=[MinValueValidator(0)], help_text=&quot;Enter the price of the event&quot; ) class Meta: app_label = &quot;cms&quot; db_table = &quot;events&quot; managed = False verbose_name = _(&quot;CMS Event&quot;) verbose_name_plural = _(&quot;CMS Events&quot;) def __str__(self): return self.name class CmsEventOrder(models.Model): order_id = models.CharField( max_length=255, unique=True) # Razorpay order ID receipt_id = models.CharField(max_length=255, null=True, blank=True) amount = models.DecimalField(max_digits=10, decimal_places=2) amount_paid = models.DecimalField( max_digits=10, decimal_places=2, default=0) amount_due = models.DecimalField( max_digits=10, decimal_places=2, default=0) currency = models.CharField(max_length=10, default='INR') attempts = models.PositiveIntegerField(default=0) status = models.CharField(max_length=20, choices=[ ('created', 'Created'), ('paid', 'Paid'), ('failed', 'Failed') ], default='created') # Store event data or other notes notes = models.JSONField(null=True, blank=True) form = models.ForeignKey( Form, on_delete=models.SET_NULL, null=True, blank=True, related_name=&quot;orders&quot;, help_text=&quot;Associated form for the event order&quot; 
) event = models.ForeignKey( CmsEvent, on_delete=models.CASCADE, related_name=&quot;orders&quot;, help_text=&quot;Associated event for the order&quot; ) form_submission = models.ForeignKey( FormSubmission, on_delete=models.SET_NULL, null=True, blank=True, related_name=&quot;cms_event_orders&quot;, help_text=&quot;Reference to the associated form submission&quot; ) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) class Meta: managed = True verbose_name = &quot;CMS Event Order&quot; verbose_name_plural = &quot;CMS Event Orders&quot; class CmsEventPayment(models.Model): order = models.ForeignKey( CmsEventOrder, on_delete=models.CASCADE, related_name='cmseventpayment') payment_id = models.CharField( max_length=255, unique=True) status = models.CharField(max_length=20, choices=[ ('created', 'Created'), ('captured', 'Captured'), ('failed', 'Failed') ], default='created') payment_verified = models.BooleanField(default=False) paid_at = models.DateTimeField(null=True, blank=True) form = models.ForeignKey( Form, on_delete=models.SET_NULL, null=True, blank=True, related_name=&quot;payments&quot;, help_text=&quot;Associated form for the event payment&quot; ) event = models.ForeignKey( CmsEvent, on_delete=models.CASCADE, related_name=&quot;payments&quot;, help_text=&quot;Associated event for the payment&quot; ) form_submission = models.ForeignKey( FormSubmission, on_delete=models.SET_NULL, null=True, blank=True, related_name=&quot;cms_event_payments&quot;, help_text=&quot;Reference to the associated form submission&quot; ) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) def is_successful(self): return self.status == 'captured' class Meta: managed = True verbose_name = &quot;CMS Event Payment&quot; verbose_name_plural = &quot;CMS Event Payments&quot; </code></pre> <p>And this is my DB router</p> <pre><code>import structlog logger = 
structlog.get_logger(&quot;db_router&quot;) class TerrumCMSRouter: &quot;&quot;&quot; A database router to direct read operations to the 'terrum_cms' database and prevent any write operations for models in the 'cms' app. &quot;&quot;&quot; def db_for_read(self, model, **hints): if model._meta.app_label == &quot;cms&quot;: logger.debug(f&quot;Read operation routed to 'terrum_cms' for model '{model.__name__}'&quot;) return &quot;terrum_cms&quot; return None def db_for_write(self, model, **hints): if model._meta.app_label == &quot;cms&quot;: logger.warning(f&quot;Write operation blocked for model '{model.__name__}' in app 'cms'&quot;) return None return None def allow_relation(self, obj1, obj2, **hints): db_list = {&quot;default&quot;, &quot;terrum_cms&quot;} if obj1._state.db in db_list and obj2._state.db in db_list: return True return False def allow_migrate(self, db, app_label, model_name=None, **hints): if app_label == &quot;cms&quot; and db == &quot;terrum_cms&quot;: logger.warning(f&quot;Migrations disallowed for app 'cms' in database 'terrum_cms'&quot;) return False return True </code></pre> <p>Below are the migration files that get generated with the command <code>python3 manage.py makemigrations payments</code></p> <p>Migrations file under app/payments</p> <pre><code># Generated by Django 5.1.7 on 2025-03-22 18:19 import django.db.models.deletion from django.db import migrations, models class Migration(migrations.Migration): initial = True dependencies = [ (&quot;cms&quot;, &quot;0001_initial&quot;), ] operations = [ migrations.CreateModel( name=&quot;CmsEventOrder&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), (&quot;order_id&quot;, models.CharField(max_length=255, unique=True)), (&quot;receipt_id&quot;, models.CharField(blank=True, max_length=255, null=True)), (&quot;amount&quot;, models.DecimalField(decimal_places=2, max_digits=10)), ( &quot;amount_paid&quot;, 
models.DecimalField(decimal_places=2, default=0, max_digits=10), ), ( &quot;amount_due&quot;, models.DecimalField(decimal_places=2, default=0, max_digits=10), ), (&quot;currency&quot;, models.CharField(default=&quot;INR&quot;, max_length=10)), (&quot;attempts&quot;, models.PositiveIntegerField(default=0)), ( &quot;status&quot;, models.CharField( choices=[ (&quot;created&quot;, &quot;Created&quot;), (&quot;paid&quot;, &quot;Paid&quot;), (&quot;failed&quot;, &quot;Failed&quot;), ], default=&quot;created&quot;, max_length=20, ), ), (&quot;notes&quot;, models.JSONField(blank=True, null=True)), (&quot;created_at&quot;, models.DateTimeField(auto_now_add=True)), (&quot;updated_at&quot;, models.DateTimeField(auto_now=True)), ( &quot;event&quot;, models.ForeignKey( help_text=&quot;Associated event for the order&quot;, on_delete=django.db.models.deletion.CASCADE, related_name=&quot;orders&quot;, to=&quot;cms.cmsevent&quot;, ), ), ( &quot;form&quot;, models.ForeignKey( blank=True, help_text=&quot;Associated form for the event order&quot;, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name=&quot;orders&quot;, to=&quot;cms.form&quot;, ), ), ( &quot;form_submission&quot;, models.ForeignKey( blank=True, help_text=&quot;Reference to the associated form submission&quot;, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name=&quot;cms_event_orders&quot;, to=&quot;cms.formsubmission&quot;, ), ), ], options={ &quot;verbose_name&quot;: &quot;CMS Event Order&quot;, &quot;verbose_name_plural&quot;: &quot;CMS Event Orders&quot;, &quot;managed&quot;: True, }, ), migrations.CreateModel( name=&quot;CmsEventPayment&quot;, fields=[ ( &quot;id&quot;, models.BigAutoField( auto_created=True, primary_key=True, serialize=False, verbose_name=&quot;ID&quot;, ), ), (&quot;payment_id&quot;, models.CharField(max_length=255, unique=True)), ( &quot;status&quot;, models.CharField( choices=[ (&quot;created&quot;, &quot;Created&quot;), (&quot;captured&quot;, 
&quot;Captured&quot;), (&quot;failed&quot;, &quot;Failed&quot;), ], default=&quot;created&quot;, max_length=20, ), ), (&quot;payment_verified&quot;, models.BooleanField(default=False)), (&quot;paid_at&quot;, models.DateTimeField(blank=True, null=True)), (&quot;created_at&quot;, models.DateTimeField(auto_now_add=True)), (&quot;updated_at&quot;, models.DateTimeField(auto_now=True)), ( &quot;event&quot;, models.ForeignKey( help_text=&quot;Associated event for the payment&quot;, on_delete=django.db.models.deletion.CASCADE, related_name=&quot;payments&quot;, to=&quot;cms.cmsevent&quot;, ), ), ( &quot;form&quot;, models.ForeignKey( blank=True, help_text=&quot;Associated form for the event payment&quot;, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name=&quot;payments&quot;, to=&quot;cms.form&quot;, ), ), ( &quot;form_submission&quot;, models.ForeignKey( blank=True, help_text=&quot;Reference to the associated form submission&quot;, null=True, on_delete=django.db.models.deletion.SET_NULL, related_name=&quot;cms_event_payments&quot;, to=&quot;cms.formsubmission&quot;, ), ), ( &quot;order&quot;, models.ForeignKey( on_delete=django.db.models.deletion.CASCADE, related_name=&quot;cmseventpayment&quot;, to=&quot;payments.cmseventorder&quot;, ), ), ], options={ &quot;verbose_name&quot;: &quot;CMS Event Payment&quot;, &quot;verbose_name_plural&quot;: &quot;CMS Event Payments&quot;, &quot;managed&quot;: True, }, ), ] </code></pre> <p>I've crossed the character limit, so here is the migration file under <code>app/cms</code> on pastebin</p> <p><a href="https://pastebin.com/sdQ0wZCG" rel="nofollow noreferrer">https://pastebin.com/sdQ0wZCG</a></p> <p>I could get it if it created migrations for the models CmsEvent, Form, FormSubmission but it even creates for Media, ShoppingCategory, Brand and BrandRelationship</p>
<python><django><postgresql><multi-database>
2025-03-22 18:09:41
1
838
Abhishek AN
79,527,704
6,683,176
Why is RUN pip install from Github not working in my Dockerfile?
<p>RUN pip install git+https://${GITHUB_TOKEN}@github.com//packages.git#egg=mypackage&amp;subdirectory=mypackage is not working in my Dockerfile</p> <p>This is my Dockerfile</p> <pre><code>FROM python:3.9-slim COPY . /app WORKDIR /app RUN apt-get update &amp;&amp; apt-get install -y git &amp;&amp; rm -rf /var/lib/apt/lists/* RUN pip install --no-cache-dir -r requirements.txt # Accept GITHUB_TOKEN as a build argument ARG GITHUB_TOKEN RUN echo $GITHUB_TOKEN # Fail if GITHUB_TOKEN is not set RUN test -n &quot;$GITHUB_TOKEN&quot; || (echo &quot;ERROR: GITHUB_TOKEN is not set&quot; &amp;&amp; exit 1) RUN pip install git+https://${GITHUB_TOKEN}@github.com/&lt;username&gt;/packages.git#egg=mypackage&amp;subdirectory=mypackage </code></pre> <p>When the image is built, all packages in requirements.txt are successfully installed; however, the GitHub mypackage cannot be found when I try to import it in my code.</p> <p>Note: the GITHUB_TOKEN is being successfully passed to Docker build in the cloud build step:</p> <pre><code>'--build-arg', 'GITHUB_TOKEN=${_GITHUB_TOKEN}' </code></pre> <p>And the pip installation works if I add the GitHub package line to requirements.txt (with a hard-coded token).</p> <p>I'm wondering if the package is being pip installed somewhere unexpected. Any thoughts?</p>
<python><pip><dockerfile><google-cloud-build>
2025-03-22 16:02:48
0
339
pablowilks2
79,527,670
1,261,153
How to specify actual class in Pydantic member
<p>I'd like to serialize and deserialize an object holding a polymorphic member.</p> <p>If the class was polymorphic, I would use a correct concrete type with <code>TypeAdapter</code>. The question is, how to do the same when a member is polymorphic. Here's a minimal example of what I mean:</p> <pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod from pydantic import BaseModel, TypeAdapter class I_Widget(ABC, BaseModel): @abstractmethod def do_stuff(self): ... class WidgetHolder(BaseModel): widget: I_Widget class ActualWidget(I_Widget): value: int def do_stuff(self): print('Doing stuff') if __name__ == '__main__': w = ActualWidget(value=42) wh = WidgetHolder(widget=w) # Serializes: json = wh.json() # Deserializes: wh2 = TypeAdapter(WidgetHolder).validate_json(json) # Throws an error </code></pre> <p>When I run it, I get:</p> <pre><code> File &quot;/home/adam/.cache/pypoetry/virtualenvs/poc-py3.12/lib/python3.12/site-packages/pydantic/type_adapter.py&quot;, line 446, in validate_json return self.validator.validate_json( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: Can't instantiate abstract class I_Widget without an implementation for abstract method 'do_stuff' </code></pre>
<python><pydantic>
2025-03-22 15:36:24
2
8,144
Adam Ryczkowski
79,527,532
16,383,578
Why is the continued fraction expansion of arctangent combined with half-angle formula not working with Machin-like series?
<p>Sorry for the long title. I don't know if this is more of a math problem or programming problem, but I think my math is extremely rusty and I am better at programming.</p> <p>So I have this <a href="https://en.wikipedia.org/wiki/Continued_fraction" rel="nofollow noreferrer">continued fraction</a> expansion of arctangent:</p> <p><a href="https://i.sstatic.net/ZSAFcqmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZSAFcqmS.png" alt="enter image description here" /></a></p> <p>I got it from <a href="https://en.wikipedia.org/wiki/Continued_fraction#Transcendental_functions_and_numbers" rel="nofollow noreferrer">Wikipedia</a></p> <p>I tried to find a simple algorithm to calculate it:</p> <p><a href="https://i.sstatic.net/9SNigOKN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9SNigOKN.jpg" alt="enter image description here" /></a></p> <p>And I did it, I have written an infinite precision implementation of the continued fraction expansion without using any libraries, using only basic integer arithmetic:</p> <pre><code>import json import math import random from decimal import Decimal, getcontext from typing import Callable, List, Tuple Fraction = Tuple[int, int] def arctan_cf(y: int, x: int, lim: int) -&gt; Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq odd = 5 for i in range(2, 2 + lim): t1, t2 = odd * x, i**2 * y_sq a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 odd += 2 return a2, b2 </code></pre> <p>And it converges faster than <a href="https://en.wikipedia.org/wiki/Arctangent_series#Accelerated_series" rel="nofollow noreferrer">Newton's arctangent series</a> which I <a href="https://codereview.stackexchange.com/questions/295668/how-to-make-this-arbitrary-precision-%cf%80-calculator-using-machin-like-formula-run">previously</a> used.</p> <p>Now I think if I combine it with the <a href="https://en.wikipedia.org/wiki/Atan2#Definition_and_computation" rel="nofollow noreferrer">half-angle 
formula</a> of arctangent it should converge faster.</p> <pre><code>def half_arctan_cf(y: int, x: int, lim: int) -&gt; Fraction: c = (x**2 + y**2) ** 0.5 a, b = c.as_integer_ratio() a, b = arctan_cf(a - b * x, b * y, lim) return 2 * a, b </code></pre> <p>And indeed, it does converge even faster:</p> <pre><code>def test_accuracy(lim: int) -&gt; dict: result = {} for _ in range(lim): x, y = random.sample(range(1024), 2) while not x or not y: x, y = random.sample(range(1024), 2) atan2 = math.atan2(y, x) entry = {&quot;atan&quot;: atan2} for fname, func in zip( (&quot;arctan_cf&quot;, &quot;half_arctan_cf&quot;), (arctan_cf, half_arctan_cf) ): i = 1 while True: a, b = func(y, x, i) if math.isclose(deci := a / b, atan2): break i += 1 entry[fname] = (i, deci) result[f&quot;{y} / {x}&quot;] = entry return result print(json.dumps(test_accuracy(8), indent=4)) for v in test_accuracy(128).values(): assert v[&quot;half_arctan_cf&quot;][0] &lt;= v[&quot;arctan_cf&quot;][0] </code></pre> <pre><code>{ &quot;206 / 136&quot;: { &quot;atan&quot;: 0.9872880750087898, &quot;arctan_cf&quot;: [ 16, 0.9872880746658675 ], &quot;half_arctan_cf&quot;: [ 6, 0.9872880746018052 ] }, &quot;537 / 308&quot;: { &quot;atan&quot;: 1.0500473287277563, &quot;arctan_cf&quot;: [ 18, 1.0500473281360896 ], &quot;half_arctan_cf&quot;: [ 7, 1.0500473288158192 ] }, &quot;331 / 356&quot;: { &quot;atan&quot;: 0.7490241118247137, &quot;arctan_cf&quot;: [ 10, 0.7490241115996227 ], &quot;half_arctan_cf&quot;: [ 5, 0.749024111913438 ] }, &quot;744 / 613&quot;: { &quot;atan&quot;: 0.8816364228048325, &quot;arctan_cf&quot;: [ 13, 0.8816364230439662 ], &quot;half_arctan_cf&quot;: [ 6, 0.8816364227495634 ] }, &quot;960 / 419&quot;: { &quot;atan&quot;: 1.1592605364805093, &quot;arctan_cf&quot;: [ 24, 1.1592605359263286 ], &quot;half_arctan_cf&quot;: [ 7, 1.1592605371181872 ] }, &quot;597 / 884&quot;: { &quot;atan&quot;: 0.5939827714677137, &quot;arctan_cf&quot;: [ 7, 0.5939827719895824 ], &quot;half_arctan_cf&quot;: [ 
4, 0.59398277135389 ] }, &quot;212 / 498&quot;: { &quot;atan&quot;: 0.40246578425167584, &quot;arctan_cf&quot;: [ 5, 0.4024657843859885 ], &quot;half_arctan_cf&quot;: [ 3, 0.40246578431841773 ] }, &quot;837 / 212&quot;: { &quot;atan&quot;: 1.322727785860997, &quot;arctan_cf&quot;: [ 41, 1.322727786922624 ], &quot;half_arctan_cf&quot;: [ 8, 1.3227277847674388 ] } } </code></pre> <p>That assert block runs quite a bit long for large number of samples, but it never raises exceptions.</p> <p>So I think I can use the continued fraction expansion of arctangent with <a href="https://en.wikipedia.org/wiki/Approximations_of_%CF%80#Machin-like_formula" rel="nofollow noreferrer">Machin-like series</a> to calculate ฯ€. (I used the last series in the linked section because it converges the fastest)</p> <pre><code>def sum_fractions(fractions: List[Fraction]) -&gt; Fraction: while (length := len(fractions)) &gt; 1: stack = [] for i in range(0, length - (odd := length &amp; 1), 2): num1, den1 = fractions[i] num2, den2 = fractions[i + 1] stack.append((num1 * den2 + num2 * den1, den1 * den2)) if odd: stack.append(fractions[-1]) fractions = stack return fractions[0] MACHIN_SERIES = ((44, 57), (7, 239), (-12, 682), (24, 12943)) def approximate_loop(lim: int, func: Callable) -&gt; List[Fraction]: fractions = [] for coef, denom in MACHIN_SERIES: dividend, divisor = func(1, denom, lim) fractions.append((coef * dividend, divisor)) return fractions def approximate_1(lim: int) -&gt; List[Fraction]: return approximate_loop(lim, arctan_cf) def approximate_2(lim: int) -&gt; List[Fraction]: return approximate_loop(lim, half_arctan_cf) approx_funcs = (approximate_1, approximate_2) def calculate_pi(lim: int, approx: bool = 0) -&gt; Fraction: dividend, divisor = sum_fractions(approx_funcs[approx](lim)) dividend *= 4 return dividend // (common := math.gcd(dividend, divisor)), divisor // common getcontext().rounding = 'ROUND_DOWN' def to_decimal(dividend: int, divisor: int, places: int) -&gt; str: 
getcontext().prec = places + len(str(dividend // divisor)) return str(Decimal(dividend) / Decimal(divisor)) def get_accuracy(lim: int, approx: bool = 0) -&gt; Tuple[int, str]: length = 12 fraction = calculate_pi(lim, approx) while True: decimal = to_decimal(*fraction, length) for i, e in enumerate(decimal): if Pillion[i] != e: return (max(0, i - 2), decimal[:i]) length += 10 with open(&quot;D:/Pillion.txt&quot;, &quot;r&quot;) as f: Pillion = f.read() </code></pre> <p><a href="https://drive.google.com/file/d/1YTWbg91hu-scG69bfTkM4DUGXE6yIAdR/view?usp=sharing" rel="nofollow noreferrer">Pillion.txt</a> contains the first 1000001 digits of ฯ€, Pi + Million = Pillion.</p> <p>And it works, but only partially. The basic continued fraction expansion works very well with Machin-like formula, but combined with half-angle formula, I can only get 9 correct decimal places no matter what, and in fact, I get 9 correct digits on the very first iteration, and then this whole thing doesn't improve ever:</p> <pre><code>In [2]: get_accuracy(16) Out[2]: (73, '3.1415926535897932384626433832795028841971693993751058209749445923078164062') In [3]: get_accuracy(32) Out[3]: (138, '3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647093844609550582231') In [4]: get_accuracy(16, 1) Out[4]: (9, '3.141592653') In [5]: get_accuracy(32, 1) Out[5]: (9, '3.141592653') In [6]: get_accuracy(1, 1) Out[6]: (9, '3.141592653') </code></pre> <p>But the digits do in fact change:</p> <pre><code>In [7]: to_decimal(*calculate_pi(1, 1), 32) Out[7]: '3.14159265360948500093515231500093' In [8]: to_decimal(*calculate_pi(2, 1), 32) Out[8]: '3.14159265360945286794831052938917' In [9]: to_decimal(*calculate_pi(3, 1), 32) Out[9]: '3.14159265360945286857612896472974' In [10]: to_decimal(*calculate_pi(4, 1), 32) Out[10]: '3.14159265360945286857611676794770' In [11]: to_decimal(*calculate_pi(5, 1), 32) Out[11]: '3.14159265360945286857611676818392' 
</code></pre> <p>Why is the continued fraction with half-angle formula not working with Machin-like formula? And is it possible to make it work, and if it can work, then how? I want either a proof that it is impossible, or a working example that proves it is possible.</p> <hr /> <p>Just a sanity check, using ฯ€/4 = arctan(1) I was able to make <code>half_arctan_cf</code> spit out digits of ฯ€ but it converges much slower:</p> <pre><code>def approximate_3(lim: int) -&gt; List[Fraction]: return [half_arctan_cf(1, 1, lim)] approx_funcs = (approximate_1, approximate_2, approximate_3) </code></pre> <pre><code>In [28]: get_accuracy(16, 2) Out[28]: (15, '3.141592653589793') In [29]: get_accuracy(16, 0) Out[29]: (73, '3.1415926535897932384626433832795028841971693993751058209749445923078164062') </code></pre> <p>And the same problem recurs, it reaches maximum precision of 15 digits at the 10th iteration:</p> <pre><code>In [37]: get_accuracy(9, 2) Out[37]: (14, '3.14159265358979') In [38]: get_accuracy(10, 2) Out[38]: (15, '3.141592653589793') In [39]: get_accuracy(11, 2) Out[39]: (15, '3.141592653589793') In [40]: get_accuracy(32, 2) Out[40]: (15, '3.141592653589793') </code></pre> <hr /> <p>I just rewrote my arctangent continued fraction implementation and made it avoid doing redundant computations.</p> <p>In my code in each iteration t1 increases by 2 * y_sq, so there is no need to repeatedly multiply y_sq by the odd number, instead just use a cumulative variable and a step of 2 * y_sq.</p> <p>And the difference between each pair of consecutive square numbers is just the odd numbers, so I can use a cumulative variable of a cumulative variable.</p> <pre><code>def arctan_cf_0(y: int, x: int, lim: int) -&gt; Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq odd = 5 for i in range(2, 2 + lim): t1, t2 = odd * x, i**2 * y_sq a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 odd += 2 return a2, b2 def arctan_cf(y: int, x: int, lim: int) -&gt; 
Fraction: y_sq = y**2 a1, a2 = y, 3 * x * y b1, b2 = x, 3 * x**2 + y_sq t1_step, t3_step = 2 * x, 2 * y_sq t1, t2 = 5 * x, 4 * y_sq t3 = t2 + y_sq for _ in range(lim): a1, a2 = a2, t1 * a2 + t2 * a1 b1, b2 = b2, t1 * b2 + t2 * b1 t1 += t1_step t2 += t3 t3 += t3_step return a2, b2 </code></pre> <pre><code>In [301]: arctan_cf_0(4, 3, 100) == arctan_cf(4, 3, 100) Out[301]: True In [302]: %timeit arctan_cf_0(4, 3, 100) 58.6 μs ± 503 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) In [303]: %timeit arctan_cf(4, 3, 100) 54.3 μs ± 816 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each) </code></pre> <p>While this doesn't improve the speed by much, this is definitely an improvement.</p>
<python><algorithm><math><pi>
2025-03-22 13:38:07
2
3,930
Ξένη Γήινος
79,527,471
777,384
matplotlib candlestick chart and multiple subplots
<p>I need a candlestick chart in one subplot and another subplot that shares the Y axis with the candlestick chart.<br> I have the following code drawing subplots:</p> <pre><code>df = get_current_data(limit=150) df_asks, df_bids = get_order_book_data(bookStep=&quot;step0&quot;,limit=150) fig, ((ax1, ax2),(ax3,ax4)) = plt.subplots(2,2, sharex='col', sharey='row', width_ratios=[5,1], height_ratios=[5,1]) ax1.plot(df[&quot;timestamp&quot;],df[&quot;close&quot;]) ax2.plot(df_asks[&quot;amount&quot;],df_asks[&quot;price&quot;],&quot;r&quot;) ax2.plot(df_bids[&quot;amount&quot;],df_bids[&quot;price&quot;],&quot;g&quot;) ax3.bar(df[&quot;timestamp&quot;],df[&quot;volume&quot;], width=0.0005) plt.subplots_adjust(wspace=0.0,hspace=0.0) ax1.set_title(&quot;BTC USD&quot;) ax2.set_title(&quot;Orders Book&quot;) ax1.set_ylabel(&quot;Price&quot;) ax3.set_ylabel(&quot;Volume&quot;) ax1.grid() ax2.grid() plt.show() </code></pre> <p>Result: <a href="https://i.sstatic.net/Kr4mJbGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kr4mJbGy.png" alt="enter image description here" /></a></p> <p>Instead of the BTC line, I need to draw candlesticks.<br> I know I can use the mplfinance library to draw a candlestick chart, but it seems mplfinance allows stacking additional subplots ONLY vertically, sharing X (the index of the DataFrame, which is a datetime).</p>
<python><matplotlib><candlestick-chart>
2025-03-22 12:53:53
2
360
asat
79,527,456
16,798,185
IPython installation: "error: externally-managed-environment"
<p>I need to install <a href="https://en.wikipedia.org/wiki/IPython" rel="nofollow noreferrer">IPython</a> on Mac. My current Python interpreter version is:</p> <pre class="lang-none prettyprint-override"><code>python3 --version </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>Python 3.12.6 </code></pre> <p>And:</p> <pre class="lang-none prettyprint-override"><code>which python3 </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>/opt/homebrew/bin/python3 </code></pre> <p>Similar to this post (<em><a href="https://stackoverflow.com/questions/78309015/ipython-installation-on-mac">IPython installation on Mac</a></em>), I don't have <a href="https://en.wikipedia.org/wiki/Anaconda_(Python_distribution)" rel="nofollow noreferrer">Anaconda</a> on Mac. I am now trying to install IPython using pip3. When I try to install IPython, initially I was getting an error related to a private / custom repository hosted on GCP. Hence I ran the below command.</p> <pre class="lang-none prettyprint-override"><code>pip3 install --isolated --index-url https://pypi.org/simple ipython </code></pre> <p>Now I get the below error, i.e., &quot;error: externally-managed-environment&quot;</p> <p>As root:</p> <pre class="lang-none prettyprint-override"><code>pip3 install --isolated --index-url https://pypi.org/simple ipython </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>[notice] A new release of pip is available: 24.2 -&gt; 25.0.1 [notice] To update, run: python3.12 -m pip install --upgrade pip error: externally-managed-environment × This environment is externally managed ╰─&gt; To install Python packages system-wide, try brew install xyz, where xyz is the package you are trying to install. 
If you wish to install a Python library that isn't in Homebrew, use a virtual environment: python3 -m venv path/to/venv source path/to/venv/bin/activate python3 -m pip install xyz If you wish to install a Python application that isn't in Homebrew, it may be easiest to use 'pipx install xyz', which will manage a virtual environment for you. You can install pipx with brew install pipx You may restore the old behavior of pip by passing the '--break-system-packages' flag to pip, or by adding 'break-system-packages = true' to your pip.conf file. The latter will permanently disable this error. If you disable this error, we STRONGLY recommend that you additionally pass the '--user' flag to pip, or set 'user = true' in your pip.conf file. Failure to do this can result in a broken Homebrew installation. Read more about this behavior here: &lt;https://peps.python.org/pep-0668/&gt; note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages. hint: See PEP 668 for the detailed specification. </code></pre> <p>I'm not looking to create a virtual environment. I'm trying to understand why this error is occurring and how to fix it.</p>
<python><pip>
2025-03-22 12:43:12
1
377
user16798185
79,527,384
2,375,154
Wagtail one-to-one object hierarchy with enum type field
<p>I have a wagtail site and I want to create an object hierarchy in a one-to-one manner, but with multiple options. Basically, I want the database setup to look like this:</p> <pre><code>CREATE TABLE products ( id PRIMARY KEY, product_type VARCHAR(10) CHECK (product_type IN ('BOOK', 'SHIRT', ...)), product_name VARCHAR(255), product_description TEXT, ... ); CREATE TABLE product_shirts ( id PRIMARY KEY, product_id integer REFERENCES products (id), size varchar(255), ... ); CREATE TABLE product_books ( id PRIMARY KEY, product_id integer REFERENCES products (id), author varchar(255), ... ); </code></pre> <p>It is pretty straightforward to create a regular one-to-one relationship by setting a ParentalKey in the derived model. However, I want to also have an enum-type field in the parent model to check which product type we have, so that I can do something like this in my ProductsView:</p> <pre><code>if object.product_type == 'SHIRT': # display additional shirt attributes elif object.product_type == 'BOOK': # display book attributes else: # unknown type, should not happen </code></pre> <p>I know that with a one-to-one relationship in Wagtail I could simply call <code>product.shirt</code>, which would raise an exception if the product is not a shirt. But it seems very cumbersome to have nested try-catch blocks if I have many different product types...</p> <p>Any better idea to solve this in a Django/Wagtail style?</p>
<python><django><django-models><wagtail>
2025-03-22 11:44:51
1
976
Alexander Köb
79,527,375
5,674,660
Flask Integration with Mpesa Express STK results in Wrong credentials
<p>I am still testing my code in the sandbox; therefore, I believe giving the consumer key and consumer secret for my application is not a security issue, because I can always delete and change them. The main problem is that the STK Push results in the error: <code>'errorCode': '500.001.1001', 'errorMessage': 'Wrong credentials'</code>. My full code is shown below and any assistance will be appreciated:</p> <pre class="lang-none prettyprint-override"><code>import base64 import datetime import json from requests.auth import HTTPBasicAuth # Correct authentication # Credentials consumer_key = &quot;VehT7XXDuS8UVeLQkqOe3lTkFc14lFKtyssZBcAzBexwgcJG&quot; consumer_secret = &quot;OE9NWfYcW8JKeQ6v7FzObIEG8zjAJpQTDfKFARLAM0WX1Wy3dT9JM12Gxt92oKAz&quot; # Business Details (Ensure correctness) shortcode = &quot;174379&quot; # 174379 for Safaricom sandbox passkey = &quot;bfb279f9aa9bdbcf158e97dd71a467cd2f2c3e6c74a35d8b22a82e1a8ed2c919&quot; # Generate timestamp timestamp = datetime.datetime.now().strftime(&quot;%Y%m%d%H%M%S&quot;) print(f&quot;🕒 Timestamp: {timestamp}&quot;) # Generate Base64-encoded password password = base64.b64encode(f&quot;{shortcode}{passkey}{timestamp}&quot;.encode()).decode() # Step 1: Get Access Token auth_url = &quot;https://sandbox.safaricom.co.ke/oauth/v1/generate?grant_type=client_credentials&quot; try: auth_response = requests.get(auth_url, auth=HTTPBasicAuth(consumer_key, consumer_secret)) auth_response.raise_for_status() # Raise exception for HTTP errors auth_data = auth_response.json() access_token = auth_data.get(&quot;access_token&quot;) if not access_token: print(&quot;❌ ERROR: Access token missing from response.&quot;) exit() print(f&quot;✅ Access Token: {access_token}&quot;) except requests.exceptions.RequestException as e: print(f&quot;❌ ERROR: Failed to get access token: {e}&quot;) exit() # Step 2: Make STK Push Request stk_url = &quot;https://sandbox.safaricom.co.ke/mpesa/stkpush/v1/processrequest&quot; headers = { 
&quot;Authorization&quot;: f&quot;Bearer {access_token.strip()}&quot;, &quot;Content-Type&quot;: &quot;application/json&quot; } payload = { &quot;BusinessShortCode&quot;: shortcode, &quot;Password&quot;: password, &quot;Timestamp&quot;: timestamp, &quot;TransactionType&quot;: &quot;CustomerPayBillOnline&quot;, &quot;Amount&quot;: &quot;1&quot;, # Minimum for testing &quot;PartyA&quot;: &quot;254724290860&quot;, # Must be in international format &quot;PartyB&quot;: shortcode, &quot;PhoneNumber&quot;: &quot;254724290860&quot;, &quot;CallBackURL&quot;: &quot;https://tobago-repository-hospitals-commented.trycloudflare.com/callback&quot;, &quot;AccountReference&quot;: &quot;Test&quot;, &quot;TransactionDesc&quot;: &quot;Payment&quot; } print(&quot;📦 Sending STK Push Request...&quot;) try: response = requests.post(stk_url, json=payload, headers=headers) response.raise_for_status() # Raise an exception for HTTP errors response_json = response.json() print(&quot;📩 STK Push Response:&quot;, json.dumps(response_json, indent=4)) except requests.exceptions.HTTPError as e: print(f&quot;❌ HTTP ERROR: {e.response.status_code} - {e.response.text}&quot;) except requests.exceptions.RequestException as e: print(f&quot;❌ ERROR: Failed to send STK Push request: {e}&quot;) except json.JSONDecodeError: print(f&quot;❌ ERROR: Failed to parse JSON response:\n{response.text}&quot;) </code></pre>
<python><flask><integration><mpesa>
2025-03-22 11:35:21
0
1,012
chibole
79,527,321
891,440
uv run fails with "failed to spawn"
<p>I have a module with a pretty regular layout: a <code>main</code> function in an <code>__init__.py</code> file inside a <code>sacrypt</code> directory. Then, pyproject.toml looks like this:</p> <pre class="lang-ini prettyprint-override"><code>[project] name = &quot;sacrypt&quot; version = &quot;0.1.0&quot; description = &quot;Cryptanalysis examples for SAC&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.13&quot; [project.scripts] sacrypt = &quot;sacrypt:main&quot; </code></pre> <p><code>uv run sacrypt</code>, however, fails with:</p> <pre><code>error: Failed to spawn: `sacrypt` Caused by: No such file or directory (os error 2) </code></pre> <p>I have tried changing it to a different name, as well as moving <code>sacrypt</code> to <code>src/sacrypt</code> to no avail. This is by the book, following tutorials as well as <a href="https://docs.astral.sh/uv/concepts/projects/config/#entry-points" rel="nofollow noreferrer">the documentation</a>. Any idea what is going on here or where I should poke to find the error? Running <code>uv</code> with any amount of <code>-v</code> is not giving any more information. Also, this is <code>uv</code> version 0.6.9.</p> <p><strong>Update</strong>: As instructed in the comment, I have added a build system this way</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;setuptools&gt;=42&quot;, &quot;wheel&quot;] build-backend = &quot;setuptools.build_meta&quot; </code></pre> <p>And then issued <code>uv build</code>. Still an error, this time different:</p> <pre><code>Traceback (most recent call last): File &quot;/home/jmerelo/Tutoriales/cemed-green-software/code/Python/.venv/bin/sacrypt&quot;, line 4, in &lt;module&gt; from sacrypt import main ModuleNotFoundError: No module named 'sacrypt' </code></pre> <p>Running it from the command line, or running tests, works fine.</p>
<python><pyproject.toml><uv>
2025-03-22 10:55:44
2
23,619
jjmerelo
79,527,090
22,213,065
Convert multiple-page PDF files to PNG quickly
<p>I have a folder containing <strong>600 PDF files</strong>, and each PDF has <strong>20 pages</strong>. I need to convert each page into a <strong>high-quality PNG</strong> as quickly as possible.</p> <p>I wrote the following script for this task:</p> <pre><code>import os import multiprocessing import fitz # PyMuPDF from PIL import Image def process_pdf(pdf_path, output_folder): try: pdf_name = os.path.splitext(os.path.basename(pdf_path))[0] pdf_output_folder = os.path.join(output_folder, pdf_name) os.makedirs(pdf_output_folder, exist_ok=True) doc = fitz.open(pdf_path) for i, page in enumerate(doc): pix = page.get_pixmap(dpi=850) # Render page at high DPI img = Image.frombytes(&quot;RGB&quot;, [pix.width, pix.height], pix.samples) img_path = os.path.join(pdf_output_folder, f&quot;page_{i+1}.png&quot;) img.save(img_path, &quot;PNG&quot;) print(f&quot;Processed: {pdf_path}&quot;) except Exception as e: print(f&quot;Error processing {pdf_path}: {e}&quot;) def main(): input_folder = r&quot;E:\Desktop\New folder (5)\New folder (4)&quot; output_folder = r&quot;E:\Desktop\New folder (5)\New folder (5)&quot; pdf_files = [os.path.join(input_folder, f) for f in os.listdir(input_folder) if f.lower().endswith(&quot;.pdf&quot;)] with multiprocessing.Pool(processes=multiprocessing.cpu_count()) as pool: pool.starmap(process_pdf, [(pdf, output_folder) for pdf in pdf_files]) print(&quot;All PDFs processed successfully!&quot;) if __name__ == &quot;__main__&quot;: main() </code></pre> <p><strong>Issue:</strong></p> <p>This script is <strong>too slow</strong>, especially when processing a large number of PDFs. I tried the following optimizations, but they <strong>did not improve speed significantly</strong>:</p> <ul> <li><strong>Reduced DPI slightly</strong> – Lowered from <strong>1200 DPI</strong> to <strong>850 DPI</strong>. 
(I also tested 600-800 DPI.)</li> <li><strong>Enabled</strong> <code>alpha=False</code> <strong>in</strong> <code>get_pixmap()</code> โ€“ Reduced memory usage.</li> <li><strong>Used</strong> <code>ThreadPoolExecutor</code> <strong>instead of</strong> <code>multiprocessing.Pool</code> โ€“ No major improvement.</li> <li><strong>Reduced PNG compression</strong> โ€“ Set <code>optimize=False</code> when saving images.</li> <li><strong>Converted images to grayscale</strong> โ€“ Helped slightly, but I need <strong>color images</strong> for my task.</li> </ul> <p><strong>Possible Solutions I Considered:</strong></p> <ul> <li><strong>Parallel Processing of Pages Instead of Files</strong> โ€“ Instead of processing one file at a time, <strong>process each page in parallel</strong> to fully utilize CPU cores.</li> <li><strong>Use</strong> <code>ProcessPoolExecutor</code> <strong>instead of</strong> <code>ThreadPoolExecutor</code> โ€“ Since rendering is <strong>CPU-intensive</strong>, multiprocessing should be better.</li> <li><strong>Use JPEG Instead of PNG</strong> โ€“ JPEG is much <strong>faster to save</strong> and takes less storage, but I need <strong>high-quality images</strong>.</li> <li><strong>Lower DPI to 500-600</strong> โ€“ Provides a balance between <strong>speed and quality</strong>.</li> <li><strong>Batch Write Files Instead of Saving One by One</strong> โ€“ Reduces I/O overhead.</li> </ul> <p><strong>What I Need Help With:</strong></p> <ul> <li>How can I <strong>significantly speed up</strong> this PDF-to-PNG conversion while <strong>maintaining high image quality?</strong></li> <li>Are there <strong>better libraries</strong> or <strong>techniques</strong> I should use?</li> <li>Is there a way to <strong>fully utilize CPU cores efficiently?</strong></li> </ul> <p>Any suggestions would be greatly appreciated!</p>
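One way to act on the "parallel pages" idea above is to make each `(file, page)` pair its own task, so 600 × 20 = 12,000 tasks keep every core busy even while one large file is still rendering. A sketch under those assumptions — the helper names and paths are ours, `fitz` is imported inside the worker so the module stays importable without PyMuPDF, and `Pixmap.save` writes the PNG directly, skipping the PIL round-trip:

```python
import os
from multiprocessing import Pool

# Hypothetical paths -- replace with your own folders.
INPUT = r"E:\Desktop\New folder (5)\New folder (4)"
OUTPUT = r"E:\Desktop\New folder (5)\New folder (5)"

def list_page_tasks(input_folder, output_folder, pages_per_pdf=20):
    """Expand every (pdf, page_number) pair into an independent task."""
    tasks = []
    for name in os.listdir(input_folder):
        if not name.lower().endswith(".pdf"):
            continue
        pdf_path = os.path.join(input_folder, name)
        out_dir = os.path.join(output_folder, os.path.splitext(name)[0])
        for page_no in range(pages_per_pdf):
            tasks.append((pdf_path, page_no, out_dir))
    return tasks

def render_page(task):
    """Worker: open the PDF and render exactly one page."""
    pdf_path, page_no, out_dir = task
    import fitz  # imported here so the module imports even without PyMuPDF
    os.makedirs(out_dir, exist_ok=True)
    with fitz.open(pdf_path) as doc:
        pix = doc[page_no].get_pixmap(dpi=600, alpha=False)
        # Pixmap.save writes the PNG directly, no PIL round-trip needed
        pix.save(os.path.join(out_dir, f"page_{page_no + 1}.png"))

if __name__ == "__main__" and os.path.isdir(INPUT):
    with Pool() as pool:
        pool.map(render_page, list_page_tasks(INPUT, OUTPUT), chunksize=4)
```

The rendering itself stays CPU-bound, so `Pool` (processes) is the right tool; the finer task granularity is what removes the "one slow file blocks a core" problem.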
<python><pdf><parallel-processing><multiprocessing><pymupdf>
2025-03-22 07:34:01
3
781
Pubg Mobile
79,527,000
1,145,011
Download large files from a URL using Python
<p>I have a task to download around 16K+ ( max size is of 1GB) files from given URL to location. Files are of different format like pdf, ppt, doc, docx, zip, jpg, iso etc. So had written below piece of code which</p> <ol> <li>downloads file some times and some times only 26KB file only will be downloaded.</li> <li>Also sometimes get error message &quot; [Errno 10054] An existing connection was forcibly closed by the remote host&quot;</li> </ol> <pre><code>def download_file(s): for row in sheet.iter_rows(min_row=2): try: url = row[6].value #reading from excel # Send GET request to the URL response = s.get(url) if response.status_code == 200: with open(save_path, 'wb') as file: file.write(response.content) except Exception as e: print(f&quot;Error: {e}&quot;) if __name__ == &quot;__main__&quot;: with requests.session() as s: res = s.post(login_url, data=login_data) download_file(s) </code></pre> <p>Tried alternative approach using shutil and downloading in chunks . still the issue is observed. <a href="https://stackoverflow.com/questions/67833450/how-to-download-a-file-using-requests">reffered solutions from here </a> and <a href="https://stackoverflow.com/questions/16694907/download-large-file-in-python-with-requests">here</a></p> <pre><code>import shutil with requests.get(url, stream=True) as r: with open(local_filename, 'wb') as f: shutil.copyfileobj(r.raw, f) </code></pre> <pre><code>response = requests.get(url, stream=True) with open(book_name, 'wb') as f: for chunk in response.iter_content(1024 * 1024 * 2): f.write(chunk) </code></pre>
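A hedged sketch of a more robust downloader for the situation above: stream in chunks, retry on connection resets (the Errno 10054 case), and check `Content-Length` so a tiny 26 KB error page is not silently accepted as the real file. The helper names are ours, not part of `requests`:

```python
import os

def save_stream(resp, path, chunk_size=1 << 20):
    """Write a streaming response to disk one chunk at a time."""
    with open(path, "wb") as f:
        for chunk in resp.iter_content(chunk_size):
            if chunk:  # skip keep-alive chunks
                f.write(chunk)

def download(session, url, path, attempts=3):
    """Stream a file with retries; returns True on success."""
    import requests  # imported lazily so save_stream stays usable without it
    for attempt in range(1, attempts + 1):
        try:
            with session.get(url, stream=True, timeout=(10, 300)) as r:
                # A tiny "download" is often a login/error page: fail loudly
                # on a non-200 status instead of writing the page to disk.
                r.raise_for_status()
                save_stream(r, path)
                expected = r.headers.get("Content-Length")
                if expected and os.path.getsize(path) != int(expected):
                    raise OSError("truncated download")
            return True
        except (requests.exceptions.RequestException, OSError) as exc:
            print(f"attempt {attempt} failed for {url}: {exc}")
    return False
```

Streaming keeps memory flat for 1 GB files, and the size check plus retry loop converts a forcibly-closed connection into another attempt instead of a corrupt file.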
<python><python-requests><shutil>
2025-03-22 05:41:03
3
1,551
user166013
79,526,957
2,541,276
Error finding module debugpy when debugging Python
<p>This is my neovim config for dap. This is specifically python config. When I tried to debug a python file I get below error. JS/Java/scala and go are all working fine. Only python dap is giving error.</p> <p>This is the <a href="https://jumpshare.com/s/jKy2WY6kqAfqDgQawL7q" rel="nofollow noreferrer">screen recording</a> of the error</p> <p>Dap Error log is -</p> <pre><code>/opt/homebrew/opt/python@3.13/bin/python3.13: Error while finding module specification for 'debugpy.adapter' (ModuleNotFoundError: No module named 'debugpy') </code></pre> <p>I have venv environment as well but still getting same error. Any idea how can I fix this error?</p>
<python><neovim>
2025-03-22 04:50:54
0
10,555
user51
79,526,834
2,315,319
For a custom Mapping class that returns self as iterator, list() returns empty. How do I fix it?
<p>The following is a simplified version of what I am trying to do (the actual implementation has a number of nuances):</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from collections.abc import MutableMapping class SideDict(MutableMapping, dict): &quot;&quot;&quot; The purpose of this special dict is to side-attach another dict. A key and its value from main dict are preferred over same key in the side-dict. If only a key is not present in main dict, then it is used from the side-dict. &quot;&quot;&quot; # The starting SideDict instance will have side_dict=None, a subsequent # SideDict instance can use the first instance as its side_dict. def __init__(self, data, side_dict: SideDict | None): self._store = dict(data) self._side_dict = side_dict self._iter_keys_seen = [] self._iter_in_side_dict = False self._iter = None # Also other stuff # Also implements __bool__, __contains__, __delitem__, __eq__, __getitem__, # __missing__, __or__, __setitem__ and others. def __iter__(self): self._iter_keys_seen = [] self._iter_in_side_dict = False self._iter = None return self def __next__(self): while True: # Start with an iterator that is on self._store if self._iter is None: self._iter = self._store.__iter__() try: next_ = self._iter.__next__() if next_ in self._iter_keys_seen: continue # Some other stuff I do with next_ self._iter_keys_seen.append(next_) return next_ except StopIteration as e: if self._side_dict is None or self._iter_in_side_dict: raise e else: # Switching to side-dict iterator self._iter_in_side_dict = True self._iter = self._side_dict.__iter__() def __len__(self): return len([k for k in self]) # Its not the most efficient, but # I don't know any other way. 
sd_0 = SideDict(data={&quot;a&quot;: &quot;A&quot;}, side_dict=None) sd_1 = SideDict(data={&quot;b&quot;: &quot;B&quot;}, side_dict=sd_0) sd_2 = SideDict(data={&quot;c&quot;: &quot;C&quot;}, side_dict=sd_1) print(len(sd_0), len(sd_1), len(sd_2)) # all work fine print(list(sd_0)) # ! Here is the problem, shows empty list `[]` ! </code></pre> <p>On putting some <code>print()</code>s, here is what I observed being called:</p> <ol> <li><code>list()</code> triggers <code>obj.__iter__()</code> first.</li> <li>Followed by <code>obj.__len__()</code>. I vaguely understand that this is done so as to allocate optimal length of list.</li> <li>Because <code>obj.__len__()</code> has list-comprehension (<code>[k for k in self]</code>), it again triggers <code>obj.__iter__()</code>.</li> <li>Followed by <code>obj.__next__()</code> multiple times as it iterates through <code>obj._store</code> and <code>obj._side_dict</code>.</li> <li>When <code>obj.__next__()</code> hits the final un-silenced <code>StopIteration</code>, list-comprehension in <code>obj.__len__()</code> ends.</li> <li>Here the problem starts. <code>list()</code> seems to be calling <code>obj.__next__()</code> again immediately after ending <code>obj.__len__()</code>, and it hits <code>StopIteration</code> again. There is no <code>obj.__iter__()</code>. And so the final result is an empty list!</li> </ol> <p>What I think might be happening is that <code>list()</code> starts an iterator on its argument, but before doing anything else, it wants to find out the length. My <code>__len__()</code> uses an iterator itself, so it seems the both are using the same iterator. And then this iterator is consumed in <code>obj.__len__()</code>, and nothing left for outer <code>list()</code> to consume. Please correct me if I am wrong.</p> <p>So how can I change my <code>obj.__len__()</code> to use a non-clashing iterator?</p>
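A common fix for the problem above is to make `__iter__` return a *fresh generator* on every call instead of `self`, so `list()` and the comprehension inside `__len__` each get an independent iterator that cannot consume the other's state. A trimmed sketch keeping only the iteration-relevant parts of the class:

```python
from collections.abc import MutableMapping

class SideDict(MutableMapping):
    """Iteration-focused sketch: __iter__ yields, so each call is a new generator."""

    def __init__(self, data, side_dict=None):
        self._store = dict(data)
        self._side_dict = side_dict

    def __getitem__(self, key):
        try:
            return self._store[key]
        except KeyError:
            if self._side_dict is not None:
                return self._side_dict[key]
            raise

    def __setitem__(self, key, value):
        self._store[key] = value

    def __delitem__(self, key):
        del self._store[key]

    def __iter__(self):
        # Because this method contains `yield`, calling it builds a brand-new
        # generator each time; no iterator state lives on the instance.
        seen = set()
        for key in self._store:
            seen.add(key)
            yield key
        if self._side_dict is not None:
            for key in self._side_dict:
                if key not in seen:
                    seen.add(key)
                    yield key

    def __len__(self):
        return sum(1 for _ in self)  # gets its own generator, clash-free
```

With this shape, `__next__` on the class disappears entirely, and `list()`'s internal call sequence (`__iter__`, then `__len__` via the length hint, then iteration) works because every consumer owns a separate generator.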
<python><iterator><iteration>
2025-03-22 01:39:30
2
313
fishfin
79,526,636
15,046,415
Why doesn't len(obj) go through __getattribute__, and how can I override this behavior?
<p>I've noticed that calling <code>obj.__len__()</code> explicitly goes through <code>obj.__getattribute__(&quot;__len__&quot;)</code>, but calling <code>len(obj)</code> does not. A quick search suggests that built - in functions like <code>len()</code>, <code>dir()</code>, and <code>repr()</code> optimize their calls by accessing attributes directly via the object's internal structure rather than going through <code>__getattribute__</code>, except when dealing with private attributes.</p> <p>Even if I clear <code>obj.__dict__</code>, <code>len(obj)</code> still bypasses <code>__getattribute__</code>. I also tried overriding <code>obj.__dir__()</code> and removing all dunder methods it returns, but <code>len(obj)</code> still avoids <code>__getattribute__</code>.</p> <p>Is there any way to override this behavior and force <code>len(obj)</code> (and the others) to go through <code>__getattribute__(&quot;__len__&quot;)</code>?</p> <p><strong>Exaple:</strong></p> <pre><code>class Custom: def __getattribute__(self, item): print(f&quot;Accessing: {item}&quot;) return super().__getattribute__(item) def __len__(self): return 50 obj = Custom() print(obj.__len__()) # Triggers __getattribute__ print(len(obj)) # Bypasses __getattribute__ </code></pre> <p><strong>Output:</strong></p> <pre><code>Accessing: __len__ 50 50 # No `Accessing: __len__` message, meaning __getattribute__ was bypassed </code></pre>
<python>
2025-03-21 21:58:24
0
454
Ziv Sion
79,526,624
4,696,802
Is Python supposed to be this slow when importing?
<p>My understanding is that imports can be slow when you import the first time, and then they should be cached for subsequent imports. However that's not what I'm experiencing. And I really don't think I'm doing something wrong like having the wrong settings because I remember this same thing happening when I tried using Python a year ago. I'm doing the following:</p> <pre><code>from langchain_google_genai import ChatGoogleGenerativeAI from browser_use import Agent </code></pre> <p>Each of those import statements take about 6 seconds each. This means each time I start my program I have to wait at least 10 seconds. Is that normal? I've heard the benefit of using Python in machine learning and AI contexts was the faster iteration time because you didn't have to recompile as you do in C++. Also, I've used NodeJS, and I don't remember the imports ever taking long, though to be fair I didn't have large projects. Is this normal?</p> <p>Also, I stepped through the code to check whether it was doing io like reading in a file, as this is slow no matter what language you use, including C++, but no, it just seems slow overall.</p> <p><a href="https://github.com/Please-just-dont/PythonImportExample" rel="nofollow noreferrer">Here</a> is a minimal example of what's happening.</p>
<python><import>
2025-03-21 21:44:24
1
16,228
Zebrafish
79,526,621
889,053
I am unable to load an ed25519 private key in PEM format in Python
<p>I ripped code straight off of the jwt documentation website as I try to implement JWT. Their example works fine. However, when I try it with an ssh-keygen file, in PKCS8 format, it doesn't work:</p> <pre class="lang-py prettyprint-override"><code>import jwt private_key = &quot;-----BEGIN PRIVATE KEY-----\nMC4CAQAwBQYDK2VwBCIEIPtUxyxlhjOWetjIYmc98dmB2GxpeaMPP64qBhZmG13r\n-----END PRIVATE KEY-----\n&quot; public_key = &quot;-----BEGIN PUBLIC KEY-----\nMCowBQYDK2VwAyEA7p4c1IU6aA65FWn6YZ+Bya5dRbfd4P6d4a6H0u9+gCg=\n-----END PUBLIC KEY-----\n&quot; encoded = jwt.encode({&quot;some&quot;: &quot;payload&quot;}, private_key, algorithm=&quot;EdDSA&quot;) jwt.decode(encoded, public_key, algorithms=[&quot;EdDSA&quot;]) print(&quot;pass&quot;) with open(&quot;id_ed25519&quot;, &quot;r&quot;) as f: private_key = f.read() print(private_key) with open(&quot;id_ed25519.pub&quot;, &quot;r&quot;) as f: public_key = f.read() print(public_key) encoded = jwt.encode({&quot;some&quot;: &quot;payload&quot;}, private_key, algorithm=&quot;EdDSA&quot;) jwt.decode(encoded, public_key, algorithms=[&quot;EdDSA&quot;]) print(&quot;it works!&quot;) </code></pre> <p>produces:</p> <pre><code>pass -----BEGIN OPENSSH PRIVATE KEY----- b3BlbnNzaC1rZXktdjEAAAAABG5vbmUAAAAEbm9uZQAAAAAAAAABAAAAMwAAAAtzc2gtZW QyNTUxOQAAACCx/iwn0j++zhjIzFYUYzQEIUUS9LJuAOPUPIsjAvi6HQAAAJgS8hOAEvIT gAAAAAtzc2gtZWQyNTUxOQAAACCx/iwn0j++zhjIzFYUYzQEIUUS9LJuAOPUPIsjAvi6HQ AAAEAT87A79bj9AFXc0iAgBKPnDoxGE6wcxZMVRgnfnGaoJbH+LCfSP77OGMjMVhRjNAQh RRL0sm4A49Q8iyMC+LodAAAAFWNib25naW9yQGNib25naW9yLW1hYw== -----END OPENSSH PRIVATE KEY----- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILH+LCfSP77OGMjMVhRjNAQhRRL0sm4A49Q8iyMC+Lod raise InvalidKeyError( jwt.exceptions.InvalidKeyError: Expecting a EllipticCurvePrivateKey/EllipticCurvePublicKey. 
Wrong key provided for EdDSA algorithms </code></pre> <p>Here are some details about the keys in question:</p> <pre><code>echo -n &quot;-----BEGIN PRIVATE KEY-----\nMC4CAQAwBQYDK2VwBCIEIPtUxyxlhjOWetjIYmc98dmB2GxpeaMPP64qBhZmG13r\n-----END PRIVATE KEY-----\n&quot; &gt; testme -&gt; % ssh-keygen -l -f testme 256 SHA256:ZSZKe1nlMIpu8Jjivb/0nmN6xZrXVreNs2P4uX4jvlk no comment (ED25519) -&gt; % ssh-keygen -t ed25519 -p -m &quot;PKCS8&quot; -&gt; % ssh-keygen -l -f id_ed25519 256 SHA256:M15wqcX0NGIYuRIeziO3WOEDxhfmyhqsD1O32I02VFc cbongior@cbongior-mac (ED25519) -&gt; % wc -c testme 119 testme (.venv) cbongior@cbongior-mac [14:20:48] [~/dev/oracle/fleetman] [main *] -&gt; % wc -c id_ed25519 411 id_ed25519 </code></pre> <p>Both files are valid, but the biggest difference is the size and (I assume) the encoding. The first one looks to be PKCS8 encoded (so, that's what I told ssh to generate the key as).</p> <p>I am not sure what the difference is, but clearly jwt doesn't like my ssh key. Can someone explain what the issue is? Obviously I am expecting it to print <code>it works!</code></p>
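The printed key above is in OpenSSH's own container format (`-----BEGIN OPENSSH PRIVATE KEY-----`), not PKCS8, which is presumably why PyJWT rejects it. The `cryptography` package (which PyJWT uses for EdDSA) can read that format and re-serialize it; a sketch, assuming `cryptography` is installed and the key is unencrypted (the function name is ours):

```python
from cryptography.hazmat.primitives.serialization import (
    Encoding,
    NoEncryption,
    PrivateFormat,
    load_ssh_private_key,
)

def openssh_to_pkcs8(path, password=None):
    """Read an OpenSSH-format private key file and return PKCS8 PEM bytes."""
    with open(path, "rb") as f:
        key = load_ssh_private_key(f.read(), password=password)
    # Re-serialize as PKCS8 PEM, the form jwt.encode(..., algorithm="EdDSA")
    # accepts directly.
    return key.private_bytes(Encoding.PEM, PrivateFormat.PKCS8, NoEncryption())
```

The resulting bytes can then be passed straight to `jwt.encode(payload, pem, algorithm="EdDSA")` in place of the file contents.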
<python><jwt>
2025-03-21 21:40:55
0
5,751
Christian Bongiorno
79,526,487
835,073
How to create global variables dynamically from base class defined in a separate file?
<p>My real scenario needs to create global variables dynamically from within a function defined in a library so it can be reused for many projects. The function can be either a method in a base class or a global function.</p> <p>For the sake of simplicity, consider the following a trivial example. I know the global variables are in file or module scope.</p> <pre class="lang-py prettyprint-override"><code># mylib.py class Parent: def __init__(self, var_name): self.__var_name__ = var_name def create_global_variable(self, var_value): globals()[self.__var_name__] = var_value def clear_global_variables(self): globals().clear() </code></pre> <pre class="lang-py prettyprint-override"><code># test.py from mylib import Parent class Child(Parent): pass child = Child('temperature') child.create_global_variable(100) print(temperature) # NameError: name 'temperature' is not defined child.clear_global_variables() </code></pre> <p>Is there any trick to bypass the restriction?</p> <h3>Edit</h3> <p><code>job()</code> will be reused in many projects.</p> <pre class="lang-py prettyprint-override"><code>def job(input, predicate, buffer_name): globals()[buffer_name] = predicate(input) return input exp = fr&quot;&quot;&quot; Squaring {job(2,lambda x: x*x, 'square')} equals to {square} &quot;&quot;&quot; print(exp) </code></pre> <p>Constraint: all buffering variables must be defined in f-string.</p>
<python>
2025-03-21 19:55:32
1
880
D G
79,526,468
2,396,539
Why do we say deque in Python is thread-safe when the GIL already restricts running one thread at a time?
<p>From what I have been reading, Python has a GIL that ensures only one thread can run Python code at a time. So I am slightly confused when we say <code>collections.deque</code> is thread-safe. If only one thread runs at a time, wouldn't objects be in a consistent state already?</p> <p>It would be great if someone could give an example of how a list in Python is not thread-safe, and how using a <code>deque</code> counters that.</p>
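A concrete way to see why the GIL alone is not enough: it serialises individual *bytecode instructions*, not whole statements, and a statement like `counter += 1` compiles to several instructions with possible thread switches between any two of them. `dis` makes this visible:

```python
import dis

code = compile("counter += 1", "<demo>", "exec")
ops = [ins.opname for ins in dis.get_instructions(code)]
print(ops)
# The load, the addition and the store are separate instructions, and a
# thread switch between the load and the store loses an update -- the
# classic race on a plain list/int counter.  deque.append() and
# deque.popleft(), by contrast, each run as a single C-level operation
# that never gives up the GIL partway through, which is what
# "thread-safe" means in the deque docs.
```

The exact opcode names vary by Python version (`INPLACE_ADD` vs `BINARY_OP`), but there is always a separate load and store around the addition.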
<python><multithreading><gil>
2025-03-21 19:42:34
2
69,441
Aniket Thakur
79,526,465
3,763,493
Is there a way to remove stack traces in python log handlers using filters or some other mechanism when calling Logger.exception()?
<p>I know the subject sounds counter-intuitive but bear with me.</p> <p>My goal is to have two rotating file log handlers, mostly for logger.exception() calls, one that includes stack traces and one that doesn't. I find that error conditions can sometimes cause the logs to fill with stack traces, making them difficult to read/parse without creating a separate parsing script, which I'd prefer not to do. I would like to use the built-in RotatingFileHandler class if possible.</p> <p>Here's a simplified code snippet of what I'm trying to accomplish:</p> <pre><code># getLogger just returns an instance of a logger derived from the main Python logger # where I can override the logger.exception() method from myutils import getLogger logger = getLogger(__name__) try: x = 1 / 0 except ZeroDivisionError: # This one call should log to one file with the trace, one without the trace logger.exception(&quot;Oops&quot;) </code></pre> <p>I have the infrastructure in place to have this write to separate log files already using separate handlers, but they both include the stack traces. Is there a mechanism (logging filter or otherwise) where a log handler can strip the stack trace from logger.exception() calls?</p> <p>I am assuming (or hoping) that logging filters attached to a handler can accomplish this, but I'm not sure how it can be done.</p> <p>And just as an FYI, here is the source code of the Python logger for Logging.exception() and Logging.error() calls:</p> <pre><code> def error(self, msg, *args, **kwargs): &quot;&quot;&quot; Delegate an error call to the underlying logger. &quot;&quot;&quot; self.log(ERROR, msg, *args, **kwargs) def exception(self, msg, *args, exc_info=True, **kwargs): &quot;&quot;&quot; Delegate an exception call to the underlying logger. &quot;&quot;&quot; self.log(ERROR, msg, *args, exc_info=exc_info, **kwargs) </code></pre>
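Yes — a per-handler `Formatter` (rather than a `Filter`) can do this cleanly. The detail to get right is that every handler receives the *same* `LogRecord` instance and logging caches the rendered traceback in `record.exc_text`, so the formatter should work on a copy instead of mutating the shared record. A sketch:

```python
import copy
import logging

class NoTracebackFormatter(logging.Formatter):
    """Formatter that drops the stack trace from exception records."""

    def format(self, record):
        # Copy first: the same LogRecord is handed to every handler, and
        # logging caches the rendered traceback in record.exc_text, so
        # mutating the original would also strip the other handler's trace.
        record = copy.copy(record)
        record.exc_info = None
        record.exc_text = None
        record.stack_info = None
        return super().format(record)
```

Attach it only to the handler that should stay clean, e.g. `short_handler.setFormatter(NoTracebackFormatter("%(asctime)s %(levelname)s %(message)s"))`; the other `RotatingFileHandler` keeps an ordinary `Formatter` and still records the full traceback from the same `logger.exception("Oops")` call.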
<python><python-logging>
2025-03-21 19:40:46
2
544
DaveB
79,526,348
5,053,483
matplotlib - Unable to update plot with button widget
<p>The code below draws a button and an axes object in which it's meant to print out the number of times the button has been pressed. However, it never updates the axes with the number of presses.</p> <pre><code>from matplotlib import pyplot as plt from matplotlib.widgets import Button fig, ax = plt.subplots(figsize=(7,7)) ax.set_visible(False) class MyClass: def __init__(self, fig): self.N_presses = 0 def button_fx(self, event): ax_str.clear() self.N_presses += 1 text_out = &quot;Pushed {} times&quot;.format(self.N_presses) ax_str.text(0.5, 0.5, text_out) print(text_out) MC = MyClass(fig) ax_str = fig.add_axes((0.25, 0.5, 0.5, 0.1)) ax_button = fig.add_axes((0.25, 0.3, 0.5, 0.1)) my_button = Button(ax_button, &quot;Push this button&quot;, color=&quot;0.75&quot;, hovercolor=&quot;0.875&quot;) my_button.on_clicked(func=MC.button_fx) plt.show() </code></pre> <p>As a check, I also have it print out the number of presses to the console, which happens as it should. It's only the axes that seem to be out of reach. Why can't I update the axes <code>ax_str</code> with new text using <code>button_fx</code>? I can't even plot to it. Is there a workaround?</p> <p>(Note: although it doesn't add anything in this MWE, the class is essential for my actual use case, so I need to know how to solve this problem while keeping the class structure.)</p>
<python><matplotlib><matplotlib-widget>
2025-03-21 18:34:26
1
482
BGreen
79,526,325
10,658,339
How to identify duplicate datetime entries from a .csv file where pandas does not consider time down to the second?
<p>I am working with a pandas DataFrame where one of the columns contains datetime values, and I need to identify duplicate entries in the &quot;Data&quot; column. The datetime values include both the date and the exact time (hours, minutes, and seconds). However, I noticed an issue when I read the data from a .csv file โ€” pandas does not seem to consider the time down to the second when identifying duplicates.</p> <p>Interestingly, when I create synthetic data directly in pandas (like in the example below), the expected output works correctly, and it identifies the duplicates as I would expect. But when I read the same data from a .csv file, it marks even datetime values that are different by the hour as duplicates, which is not what I want.</p> <p>Here is an example of my synthetic DataFrame:</p> <pre><code>import pandas as pd # Creating synthetic data with random IDs and names data = { 'ID': ['ID-1001', 'ID-1002', 'ID-1003', 'ID-1004', 'ID-1005', 'ID-1006', 'ID-1007', 'ID-1008', 'ID-1009', 'ID-1010'], 'Name': ['Sensor-A', 'Sensor-B', 'Sensor-C', 'Sensor-D', 'Sensor-E', 'Sensor-F', 'Sensor-G', 'Sensor-H', 'Sensor-I', 'Sensor-J'], 'Code': [330735, 330736, 330737, 330738, 330739, 330740, 330741, 330742, 330743, 330744], 'Date': [ '2022-01-01 12:00:00', '2022-01-01 12:00:00', '2022-01-01 13:00:00', '2022-01-01 14:00:00', '2022-01-02 12:00:00', '2022-01-02 13:00:00', '2022-01-02 14:00:00', '2022-01-02 15:00:00', '2022-01-03 12:00:00', '2022-01-03 13:00:00' ] } # Convert to DataFrame dd_csv = pd.DataFrame(data) # Ensure 'Date' is in datetime format dd_csv['Date'] = pd.to_datetime(dd_csv['Date']) </code></pre> <p>In this dataset, the following rows have exact duplicate datetime values (same date and time):</p> <p>2022-01-01 12:00:00 for Sensor-A and Sensor-B (these are duplicates). Now, I want to check for duplicates in the &quot;Data&quot; column based on the exact datetime value, including both date and time. 
It works ok for the synthetic data above.</p> <pre><code>duplicates_all = dd_csv['Date'].duplicated(keep=False) print(dd_csv[duplicates_all]) </code></pre> <pre><code> ID Name Code Date 0 ID-1001 Sensor-A 330735 2022-01-01 12:00:00 1 ID-1002 Sensor-B 330736 2022-01-01 12:00:00 </code></pre> <p>However, when the data is read from a .csv file (real data), the time is not correctly recognized down to the second. This results in pandas marking entries with the same date but different times (down to the hour) as duplicates, even if I set the format before:</p> <pre><code>import pandas as pd # URL of the CSV file in the GitHub repository url = 'https://raw.githubusercontent.com/jc-barreto/Data/main/test_data.csv' # Read the CSV file directly from the URL real_data = pd.read_csv(url) # Convert the 'Date' column to datetime format real_data['Date'] = pd.to_datetime(real_data['Date'], format=&quot;%Y-%m-%d %H:%M:%S&quot;, errors='coerce') # Identify rows with duplicate dates duplicates_all = real_data['Date'].duplicated(keep=False) # Print the rows with duplicate dates print(real_data[duplicates_all]) </code></pre> <p>and the output is:</p> <pre><code> Unnamed: 0 ID Date T 11774 11774 A 2017-05-25 12:00:00 20.55000 11775 11775 A 2017-05-25 13:00:00 20.56000 11776 11776 A 2017-05-25 14:00:00 20.56000 11777 11777 A 2017-05-25 15:00:00 20.57000 11778 11778 A 2017-05-25 16:00:00 20.57000 </code></pre> <p>where clear the dates are not repeated since it have different times.</p> <p>I have tried the suggestion from the answer below, but didn't work neither:</p> <pre><code>real_data['date_only'] = [x.date() for x in real_data['Date']] real_data['time_only'] = [x.time() for x in real_data['Date']] duplicates_all2 = real_data[['date_only', 'time_only']].duplicated(keep=False) print(real_data[duplicates_all2]) </code></pre> <p><strong>How do I fix that?</strong> I need to fix because I'm going to use the ID + Data as a key for a database update, to make sure I only update data that is 
not in the database.</p>
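One hedged thing worth checking before blaming datetime precision: `errors='coerce'` silently turns every row that does *not* match the given format into `NaT`, and `duplicated()` treats all `NaT` values as equal to each other — so format mismatches in the real CSV can masquerade as "duplicate" timestamps. A small audit helper (the names are ours) that separates unparseable rows from genuine duplicates:

```python
import pandas as pd

def audit_datetime_duplicates(df, col="Date"):
    """Strictly parse a datetime column; split NaT rows from true duplicates."""
    parsed = pd.to_datetime(df[col], format="%Y-%m-%d %H:%M:%S", errors="coerce")
    nat_rows = df[parsed.isna()]          # rows the format did not match
    valid = parsed[parsed.notna()]
    # Only timestamps that are genuinely equal down to the second remain.
    true_dupes = df.loc[valid[valid.duplicated(keep=False)].index]
    return nat_rows, true_dupes
```

If `nat_rows` is non-empty, the "duplicates" are likely all-NaT rows; fixing the format string (or dropping the NaT rows before building the ID + Date key) should then make the duplicate check behave like it does on the synthetic data.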
<python><pandas><date><datetime>
2025-03-21 18:21:56
1
527
JCV
79,526,231
10,034,073
How to omit exc_info from logs when using coloredlogs in Python?
<p>See <a href="https://stackoverflow.com/questions/6177520/python-logging-exc-info-only-for-file-handler">this question</a> about sending <code>exc_info</code> to a log file but not to the console using Python's <code>logging</code> module.</p> <p>I want the exact same behavior, except that I'm using <a href="https://pypi.org/project/coloredlogs/" rel="nofollow noreferrer"><code>coloredlogs</code></a>. How can I do this?</p> <hr /> <p>Basically, if I call <code>my_logger.error(&quot;something went wrong&quot;, exc_info=True)</code>, I want to see the stack trace in my log file but not in the console. The solution to the aforementioned question involves subclassing <code>logging.Formatter</code>, but when using <code>coloredlogs</code>, you don't access a formatter class directly. Instead, you call <code>coloredlogs.install()</code>, and I don't see a parameter for hiding stack traces.</p>
<python><logging><python-logging>
2025-03-21 17:38:40
0
444
kviLL
79,526,166
3,245,254
In Folium, can I specify multiple timestamps for a given feature?
<p>I have a set of geometric features that I want to animate (change colors) using Folium.</p> <p>I know how to do it (see MWE below), but it seems I need to copy the feature data for each time, only changing the properties (time and color). While this works, it has lots of redundant data, resulting in a large HTML file when I save it (I do have hundreds of thousands of features).</p> <p>MWE with a single geometric feature animated at 24 different times:</p> <pre><code>import folium from folium.plugins import TimestampedGeoJson import pandas as pd colors = [&quot;red&quot;, &quot;green&quot;, &quot;blue&quot;] data = [ { &quot;coordinates&quot;: ( (-0.9759493939147111, 51.985462016129176), (-0.9697969480030385, 51.97695747265381), ), &quot;times&quot;: [f&quot;2025-03-21T{i:02d}:00:00Z&quot;, f&quot;2025-03-21T{i+1:02d}:00:00Z&quot;], } for i in range(0, 24) ] df = pd.DataFrame(data) features = [] for i, row in df.iterrows(): feature = { &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;LineString&quot;, &quot;coordinates&quot;: row[&quot;coordinates&quot;], }, &quot;properties&quot;: { &quot;times&quot;: row[&quot;times&quot;], &quot;style&quot;: {&quot;color&quot;: colors[i % 3], &quot;weight&quot;: 10, &quot;opacity&quot;: 1}, }, } features.append(feature) geojson = { &quot;type&quot;: &quot;FeatureCollection&quot;, &quot;features&quot;: features, } m = folium.Map( location=[51.986179995762726, -0.97640998763152], zoom_start=14, tiles=&quot;cartodbpositron&quot; ) TimestampedGeoJson( geojson, period=&quot;PT1H&quot;, duration=&quot;PT1H&quot;, add_last_point=False, ).add_to(m) m.save(&quot;lines.html&quot;) </code></pre> <p>I'd like to specify the geometry once, and then indicate what properties should apply at each different time. E.g. 
something like:</p> <pre><code>features = [ { &quot;type&quot;: &quot;Feature&quot;, &quot;geometry&quot;: { &quot;type&quot;: &quot;LineString&quot;, &quot;coordinates&quot;: ( (-0.9759493939147111, 51.985462016129176), (-0.9697969480030385, 51.97695747265381), ), }, &quot;properties&quot;: [ { &quot;times&quot;: [&quot;2025-03-21T00:00:00Z&quot;, &quot;2025-03-21T01:00:00Z&quot;], &quot;style&quot;: {&quot;color&quot;: &quot;red&quot;, &quot;weight&quot;: 10, &quot;opacity&quot;: 1}, }, { &quot;times&quot;: [&quot;2025-03-21T01:00:00Z&quot;, &quot;2025-03-21T02:00:00Z&quot;], &quot;style&quot;: {&quot;color&quot;: &quot;green&quot;, &quot;weight&quot;: 10, &quot;opacity&quot;: 1}, }, { &quot;times&quot;: [&quot;2025-03-21T01:00:00Z&quot;, &quot;2025-03-21T03:00:00Z&quot;], &quot;style&quot;: {&quot;color&quot;: &quot;blue&quot;, &quot;weight&quot;: 10, &quot;opacity&quot;: 1}, } ] } ] </code></pre> <p>Is it possible? How else I could approach the problem so the resulting file does not repeat over and over the geometry of the features?</p>
<python><animation><timestamp><folium>
2025-03-21 17:12:20
0
645
Didac Busquets
79,526,103
3,621,143
Converting RSA generated "modulus" (n) to base64 string
<p>I have a &quot;modulus&quot; as a long string of numbers that is obtained from doing the following:</p> <pre><code>private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048) modulus = private_key.private_numbers().public_numbers.n </code></pre> <p>which gives me this (modulus = )</p> <pre><code>26430269838726291280672963883929276522234428127706081469034773908296247736139996682259102127358592459713530791841365862493123186868249887704862202193368911366855128282431762151411775448913702006864890463842779084995140786092249248736282702798861993161873918065709700856741944572285079076367907667914080902844624750622976126824522682693806275617591268441477045328753440100516039389493242021813789624216965389245973390154276959750292100226026141811533048330927545995241735560114821851311606450209870516259015344299837790769762906871134121821490748608899823911354842159754168574881499683924223044838326144226160998129721 </code></pre> <p>I want to get this into the usual base64 interpretation of:</p> <pre><code>0V45nHfQFYZwdC7aES-0zkkhct3PM-fpxp9Lo6QZWmeaXSwS8gQVfJeJhmLp1097qlO3d-n0kblVouvH42LdlWgkzYq-lqP2Ny2M4z3a0VXCdIk1TAxM0Qse-QP6otsIoLKcT2p0JdIEOVeCC9BOLIEcGnWenqHsrm29i-21-zngbREUEQwM7UT55_vgywmJn9fB_NJFz-g7lLyhxwP8gMKSWwhMnQ4oAsRAfefEDr2a_0IPRqQE0r4L2WzgknW6aHex-KZ7LWCgLdLzH5iFUEdfvqJ6MhzlcJtpZQkFhwBVnfKVelCZcDex3TI374dbcvvO-3tVH7Ik4WEukrSgOQ </code></pre> <p>From a routine I found on the internet, this can be done executing openssl to read a private key PEM encoded file, extract the modulus in hex, and then convert it.</p> <pre><code>proc = subprocess.Popen([&quot;openssl&quot;, &quot;rsa&quot;, &quot;-in&quot;, &quot;./account.key&quot;, &quot;-noout&quot;, &quot;-text&quot;], stdin=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE) out, err = proc.communicate(None) pub_pattern = r&quot;modulus:[\s]+?00:([a-f0-9\:\s]+?)\npublicExponent: ([0-9]+)&quot; pub_hex, _ = re.search(pub_pattern, out.decode('utf8'), re.MULTILINE|re.DOTALL).groups() modulus = 
base64.urlsafe_b64encode(binascii.unhexlify(re.sub(r&quot;(\s|:)&quot;, &quot;&quot;, pub_hex).encode(&quot;utf-8&quot;))).decode('utf8').replace(&quot;=&quot;, &quot;&quot;) print(pub_hex) print(modulus) </code></pre> <p>which gives you:</p> <pre><code>d1:5e:39:9c:77:d0:15:86:70:74:2e:da:11:2f: b4:ce:49:21:72:dd:cf:33:e7:e9:c6:9f:4b:a3:a4: 19:5a:67:9a:5d:2c:12:f2:04:15:7c:97:89:86:62: e9:d7:4f:7b:aa:53:b7:77:e9:f4:91:b9:55:a2:eb: c7:e3:62:dd:95:68:24:cd:8a:be:96:a3:f6:37:2d: 8c:e3:3d:da:d1:55:c2:74:89:35:4c:0c:4c:d1:0b: 1e:f9:03:fa:a2:db:08:a0:b2:9c:4f:6a:74:25:d2: 04:39:57:82:0b:d0:4e:2c:81:1c:1a:75:9e:9e:a1: ec:ae:6d:bd:8b:ed:b5:fb:39:e0:6d:11:14:11:0c: 0c:ed:44:f9:e7:fb:e0:cb:09:89:9f:d7:c1:fc:d2: 45:cf:e8:3b:94:bc:a1:c7:03:fc:80:c2:92:5b:08: 4c:9d:0e:28:02:c4:40:7d:e7:c4:0e:bd:9a:ff:42: 0f:46:a4:04:d2:be:0b:d9:6c:e0:92:75:ba:68:77: b1:f8:a6:7b:2d:60:a0:2d:d2:f3:1f:98:85:50:47: 5f:be:a2:7a:32:1c:e5:70:9b:69:65:09:05:87:00: 55:9d:f2:95:7a:50:99:70:37:b1:dd:32:37:ef:87: 5b:72:fb:ce:fb:7b:55:1f:b2:24:e1:61:2e:92:b4: a0:39 0V45nHfQFYZwdC7aES-0zkkhct3PM-fpxp9Lo6QZWmeaXSwS8gQVfJeJhmLp1097qlO3d-n0kblVouvH42LdlWgkzYq-lqP2Ny2M4z3a0VXCdIk1TAxM0Qse-QP6otsIoLKcT2p0JdIEOVeCC9BOLIEcGnWenqHsrm29i-21-zngbREUEQwM7UT55_vgywmJn9fB_NJFz-g7lLyhxwP8gMKSWwhMnQ4oAsRAfefEDr2a_0IPRqQE0r4L2WzgknW6aHex-KZ7LWCgLdLzH5iFUEdfvqJ6MhzlcJtpZQkFhwBVnfKVelCZcDex3TI374dbcvvO-3tVH7Ik4WEukrSgOQ </code></pre> <p>I would like to do this using all python modules, not relying on executing openssl.</p>
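The subprocess/regex round-trip above is unnecessary: the base64url form is just the modulus's big-endian bytes, which Python can produce directly from the integer. A minimal pure-Python version:

```python
import base64

def int_to_base64url(n: int) -> str:
    """Big-endian bytes of n, base64url-encoded without padding (JWK style)."""
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")
```

`int_to_base64url(private_key.private_numbers().public_numbers.n)` should reproduce the openssl-derived string; as a sanity check, the common RSA public exponent 65537 encodes to the familiar `AQAB` seen in JWKs.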
<python><openssl><cryptography>
2025-03-21 16:42:00
2
1,175
jewettg
79,526,074
2,487,835
Keep Ollama model in memory and prevent unloading between requests (keep_alive?)
<p>No matter what I do from the terminal or in code, the agent requests to Ollama models take 15โ€“25 seconds each time on my local M2 MacBook Pro.</p> <p>I am pretty sure this is not a hardware issue because the model is lightning fast when used from the terminal.</p> <p>I am using Python's module to load <code>gemma3.4b</code> via Ollama. Tried different models โ€“ similar result.</p> <p>Tried calling via http โ€“ not much improvement.</p> <p>So, my conclusion is that the model keeps unloading from memory each time and loads on new requests.</p> <p>I've looked for <code>keep_alive =</code> option and found that some say it defaults to <code>-1</code>. And it can be changed when running from CLI, but I could not find docs on how to use it in Python, even though there is, apparently, a fixed bug on adding this feature.</p> <p><a href="https://github.com/ollama/ollama-python/pull/31" rel="nofollow noreferrer">https://github.com/ollama/ollama-python/pull/31</a></p> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code> import json import asyncio from library import Model from pydantic import BaseModel import ollama class ModelOllama(Model): def __init__(self, name: str): super().__init__(name) def _get_client(self): return ollama def _format(self, schema: BaseModel): return schema.model_json_schema() async def __call__(self, prompt : str, response_schema : BaseModel, role : str = 'user', temperature : float = 0.0 ): params = { 'model' : self.name, 'messages' : [{ 'role' : role, 'content' : prompt }], 'format' : self._format(response_schema), 'options' : { 'temperature': temperature, # 'num_gpu': 1, # Use GPU acceleration if available # 'num_thread': 6, # Use multiple threads 'keep_alive': 60 # Keep model loaded for 1 minute (60 seconds) } } try: response = await asyncio.to_thread(self.client.chat, **params) output = json.loads(response['message']['content']) except json.JSONDecodeError as e: raise Exception(f'OllamaModel json parsing error: {e}') 
except Exception as e: raise Exception(f'OllamaModel LLM communication error: {e}') return output </code></pre>
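A note on `keep_alive` placement: in the Ollama REST API (and recent `ollama-python` releases) `keep_alive` is a top-level request parameter alongside `model` and `messages`, not an entry inside `options`. A minimal sketch of how the request kwargs might be assembled; the helper name `build_chat_params` is illustrative, not part of any library, and accepted values such as `"30m"`, a number of seconds, or `-1` (never unload) are per the Ollama API docs:

```python
# Sketch: keep_alive sits at the top level of the chat() kwargs,
# next to model/messages -- not inside the `options` sampling dict.
def build_chat_params(model: str, prompt: str, schema: dict,
                      temperature: float = 0.0, keep_alive="30m") -> dict:
    """Build kwargs for ollama.chat with keep_alive at the top level."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "format": schema,
        "options": {"temperature": temperature},  # sampling knobs only
        "keep_alive": keep_alive,                 # top level, not in options
    }

params = build_chat_params("gemma3", "hello", {"type": "object"})
# response = ollama.chat(**params)  # actual call needs a running Ollama server
```

Whether this actually prevents unloading should be verified against `ollama ps` between requests.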
<python><large-language-model><ollama>
2025-03-21 16:29:37
1
3,020
Lex Podgorny
79,525,834
9,962,007
How to compile FlashAttention wheels faster?
<p>Currently the compilation of the Python wheel for the FlashAttention 2 (<code>Dao-AILab/flash-attention</code>) Python package takes several <em>hours</em>, as reported by multiple users on GitHub (see e.g. <a href="https://github.com/Dao-AILab/flash-attention/issues/1038" rel="nofollow noreferrer">this issue</a>). What are the possible ways of speeding it up?</p>
<python><build><python-wheel><flash-attn>
2025-03-21 15:00:54
1
7,211
mirekphd
79,525,541
11,863,823
Nullable ints and pandera unit testing
<p>First of all, I'm using <code>pandera==0.23.1</code> on Python 3.12.9.</p> <p>I have been following the examples of the <code>pandera</code> doc, in particular the ones from the <a href="https://pandera.readthedocs.io/en/stable/data_synthesis_strategies.html" rel="nofollow noreferrer">Data Synthesis Strategies</a> section.</p> <p><strong>The sample code adapted from the pandera doc</strong></p> <pre class="lang-py prettyprint-override"><code>import hypothesis import pandera as pa schema = pa.DataFrameSchema( { &quot;column1&quot;: pa.Column(int, pa.Check.eq(10)), &quot;column2&quot;: pa.Column(float, pa.Check.eq(0.25)), &quot;column3&quot;: pa.Column(str, pa.Check.eq(&quot;foo&quot;)), } ) out_schema = schema.add_columns({&quot;column4&quot;: pa.Column(float)}) def processing_fn(df): &quot;&quot;&quot; This function is undecorated as we are supposed to import it from another place &quot;&quot;&quot; return df.assign(column4=df.column1 * df.column2) @hypothesis.given(schema.strategy(size=5)) def test_processing_fn(dataframe): ## This is exactly equivalent to: # @pa.check_output(out_schema) # def dec_processing_fn(_): # return processing_fn(_) # processing_fn(df) pa.check_output(out_schema)(processing_fn)(dataframe) </code></pre> <p>This works, so I tried applying the same behaviour to my use case, which contains nullable ints. 
Here is my MWE:</p> <p><strong>test_test.py</strong></p> <pre class="lang-py prettyprint-override"><code>import pandera as pa import pandas as pd import hypothesis in_schema = pa.DataFrameSchema( { &quot;a&quot;: pa.Column( pd.Int64Dtype, checks=[ pa.Check.isin([1,2,pd.NA]) ], coerce=True ) } ) out_schema = in_schema.add_columns( { &quot;b&quot;: pa.Column( pd.Int64Dtype, checks=[ pa.Check.isin([10,20,pd.NA]) ], coerce=True ) } ) def transform(df): &quot;&quot;&quot; This function is undecorated as we are supposed to import it from another place &quot;&quot;&quot; return df.assign(b=df[&quot;a&quot;] * 10) def test_transform1(): &quot;&quot;&quot; We test transform on a sample dataframe &quot;&quot;&quot; df_in = pd.DataFrame({&quot;a&quot;: [1,2,pd.NA]}) df_out = pd.DataFrame({ &quot;a&quot;: [1, 2, pd.NA], &quot;b&quot;: [10, 20, pd.NA] }) pd.testing.assert_frame_equal(df_out, transform(df_in)) </code></pre> <p>This test passes, but in practice my dataframes and checks are more complex, and I have many more functions to test, so instead of crafting a dataframe for each test case I want to use schemas. 
I therefore write a second test:</p> <pre class="lang-py prettyprint-override"><code>@hypothesis.given(in_schema.strategy(size=5)) def test_transform2(df): &quot;&quot;&quot; This test should pass and doesn't &quot;&quot;&quot; pa.check_output(out_schema)(transform)(df) </code></pre> <p>This fails with the following error trace:</p> <pre><code>tests/test_test.py:43 (test_transform2) @hypothesis.given(in_schema.strategy(size=5)) &gt; def test_transform2(df): test_test.py:45: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/engine.py:789: in run self._run() ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/engine.py:1344: in _run self.generate_new_examples() ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/engine.py:1100: in generate_new_examples self.test_function(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/engine.py:451: in test_function self.__stoppable_test_function(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/engine.py:344: in __stoppable_test_function self._test_function(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/core.py:1091: in _execute_once_for_engine result = self.execute_once(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/core.py:1028: in execute_once result = self.test_runner(data, run) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/core.py:729: in default_executor return function(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/core.py:939: in run kw, argslices = context.prep_args_kwargs_from_strategies( ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/control.py:170: in prep_args_kwargs_from_strategies obj = check(self.data.draw(s, observe_as=f&quot;generate:{k}&quot;)) 
../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1114: in draw v = strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/lazy.py:178: in do_draw return data.draw(self.wrapped_strategy) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/core.py:1821: in do_draw return self.definition(data.draw, *self.args, **self.kwargs) ../../../venv/3.12/lib/python3.12/site-packages/pandera/strategies/pandas_strategies.py:1179: in _dataframe_strategy return draw(strategy) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/lazy.py:178: in do_draw return data.draw(self.wrapped_strategy) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/lazy.py:178: in do_draw return data.draw(self.wrapped_strategy) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/strategies.py:915: in do_draw x = data.draw(self.mapped_strategy) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/core.py:1821: in do_draw return self.definition(data.draw, *self.args, **self.kwargs) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/extra/pandas/impl.py:639: in just_draw_columns value = draw(c.elements) 
../../../venv/3.12/lib/python3.12/site-packages/hypothesis/internal/conjecture/data.py:1108: in draw return strategy.do_draw(self) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/strategies.py:607: in do_draw result = self.do_filtered_draw(data) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/strategies.py:634: in do_filtered_draw element = self.get_element(i) ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/strategies.py:622: in get_element return self._transform(self.elements[i]) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sampled_from([1, 2, &lt;NA&gt;]).map(int64).map(convert_element) element = &lt;NA&gt; def _transform( self, # https://github.com/python/mypy/issues/7049, we're not writing `element` # anywhere in the class so this is still type-safe. mypy is being more # conservative than necessary element: Ex, # type: ignore ) -&gt; Union[Ex, UniqueIdentifier]: # Used in UniqueSampledListStrategy for name, f in self._transformations: if name == &quot;map&quot;: &gt; result = f(element) E TypeError: int() argument must be a string, a bytes-like object or a real number, not 'NAType' E while generating 'df' from _dataframe_strategy() ../../../venv/3.12/lib/python3.12/site-packages/hypothesis/strategies/_internal/strategies.py:596: TypeError </code></pre> <p>So the issue is that <code>in_schema.strategy</code> tries to cast my values to <code>int</code> instead of the required <code>pandas.Int64DType</code>, <a href="https://pandas.pydata.org/docs/user_guide/integer_na.html" rel="nofollow noreferrer">which is nullable</a>. I tried with the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/basics.html#dtypes" rel="nofollow noreferrer">string alias <code>&quot;Int64&quot;</code></a> instead of the explicit type, it gave the same result. 
I tried removing <code>nullable=True</code>, <code>coerce=True</code>, to no avail.</p> <hr /> <p>The things I tried include <a href="https://stackoverflow.com/questions/71395580/ingesting-an-null-int-column-pandas-and-pandera">SO:71395580</a>, the issue in <a href="https://stackoverflow.com/questions/78407951/null-conversion-error-polars-exceptions-computeerror-pandera0-19-0b3-with">SO:78407951</a> is fixed in my version, I checked multiple open and closed GitHub issues and the closest one I could find is this one, <a href="https://github.com/unionai-oss/pandera/issues/1903" rel="nofollow noreferrer">#1903</a>, but after investigating I'm unsure that my issue is caused by this bug.</p>
<python><pandas><unit-testing><pandera>
2025-03-21 13:03:19
0
628
globglogabgalab
79,525,493
9,381,985
VSCode Python debugger hangs forever
<p>Recently my VSCode has stopped working. When I choose <code>Python Debugger: Debug Python File</code> from the upper-right drop-down menu, the TERMINAL panel shows the command line that starts the Python debugger, like:</p> <p><code>cd /Users/me/Projects/kiwi ; /usr/bin/env /Users/me/miniconda3/bin/python /Users/me/.vscode/extensions/ms-python.debugpy-2025.4.1-darwin-arm64/bundled/libs/debugpy/adapter/../../debugpy/launcher 49633 -- /Users/me/Projects/kiwi/test.py </code></p> <p>then it hangs there forever. If I choose 'Run Python File', it works fine.</p> <p>The environment:</p> <pre><code>MacOS Sequoia 15.3.2 Python v3.12.9 VSCode Version: 1.98.2 VSCode plugins: Pylance 2025.3.2, Python 2025.2.0, Python Debugger 2025.4.1, Vim 1.29.0 </code></pre> <p>test.py:</p> <pre><code>import os print(os.getcwd()) </code></pre> <p>I am not debugging with <code>launch.json</code>, so it is irrelevant. Any advice on how to get VSCode back to normal is very much appreciated.</p>
<python><visual-studio-code><debugpy>
2025-03-21 12:48:58
1
575
Cuteufo
79,525,445
4,473,615
Sub Folders of S3 bucket - Bucket name must match the regex "^[a-zA-Z0-9.\-_]
<p>I have the code below to get the list of objects in a bucket, but it does not work for sub-folders of the S3 bucket.</p> <pre><code>import boto3 bucket_from = &quot;BucketSource&quot; s3 = boto3.resource('s3') src = s3.Bucket(bucket_from) for archive in src.objects.all(): print(archive.key) </code></pre> <p>When I pass a sub-folder path, <code>FirstLevelS3/SecondLevelS3/BucketSource</code>, as the bucket name, this is the error:</p> <pre class="lang-none prettyprint-override"><code>Bucket name must match the regex &quot;^[a-zA-Z0-9.\-_] </code></pre>
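For context: S3 has no real sub-folders, so `FirstLevelS3/SecondLevelS3/...` is a bucket name followed by a key prefix, and `Bucket()` only accepts the bucket part. A sketch of splitting the path and filtering by prefix; `split_bucket_prefix` is a hypothetical helper, not part of boto3:

```python
# Sketch: split "bucket/some/prefix" into the bucket name and a key prefix,
# then list with boto3's Prefix filter instead of objects.all().
def split_bucket_prefix(path: str):
    """Split 'bucket/some/prefix' into ('bucket', 'some/prefix/')."""
    bucket, _, prefix = path.partition("/")
    if prefix and not prefix.endswith("/"):
        prefix += "/"   # trailing slash keeps the listing inside the "folder"
    return bucket, prefix

bucket, prefix = split_bucket_prefix("FirstLevelS3/SecondLevelS3/BucketSource")
# src = boto3.resource("s3").Bucket(bucket)        # needs AWS credentials
# for obj in src.objects.filter(Prefix=prefix):    # prefix-limited listing
#     print(obj.key)
```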
<python><amazon-web-services><amazon-s3><boto3>
2025-03-21 12:33:01
2
5,241
Jim Macaulay
79,525,439
5,457,202
Changes to table with python-docx not persistent in Python 3.11
<p>I'm using python-docx (v1.1.2) and Python 3.11.3 to work on a tool to fix a bunch of Word documents automatically. I've been able to update fonts, titles, texts, headers and footers and tables (their borders).</p> <p>However, some tables, and I haven't found the pattern of why this happens, are not updated. This is the snippet I've developed.</p> <pre><code>def update_borders(table): #namespaces used when updating the elements (the changes are not persistent if not used) NSMAP = {&quot;w&quot;: &quot;http://schemas.openxmlformats.org/wordprocessingml/2006/main&quot;} WORD_NS = NSMAP[&quot;w&quot;] # sample color map BORDER_COLOR_MAP = { &quot;auto&quot;: &quot;F07E26&quot; } small_table = len(table.rows) == 1 and len(table.columns) == 2 tbl_xml = table._element tbl_pr = tbl_xml.find(f'{{{WORD_NS}}}tblPr') if tbl_pr is None: tbl_pr = etree.SubElement(tbl_xml, f&quot;{{{WORD_NS}}}tblPr&quot;) #Ensure tblBorders exists tbl_borders = tbl_pr.find(f'{{{WORD_NS}}}tblBorders') # If the table doesn't have a tblPr object (everything is set to &quot;auto&quot;), we create it if tbl_borders is None: tbl_borders = etree.SubElement(tbl_pr, f&quot;{{{WORD_NS}}}tblBorders&quot;) if small_table: border_sides = [&quot;top&quot;, &quot;left&quot;, &quot;bottom&quot;, &quot;right&quot;] else: border_sides = [&quot;top&quot;, &quot;left&quot;, &quot;bottom&quot;, &quot;right&quot;, &quot;insideH&quot;, &quot;insideV&quot;] for side in border_sides: border = etree.SubElement(tbl_borders, f&quot;{{{WORD_NS}}}{side}&quot;) border.set(f&quot;{{{WORD_NS}}}val&quot;, &quot;single&quot;) border.set(f&quot;{{{WORD_NS}}}sz&quot;, &quot;4&quot;) border.set(f&quot;{{{WORD_NS}}}space&quot;, &quot;0&quot;) border.set(f&quot;{{{WORD_NS}}}color&quot;, &quot;F07E26&quot;) else: # If there is a tblPr object, we update the color based in a map of colors for border in tbl_borders: border_color = border.attrib[f&quot;{{{WORD_NS}}}color&quot;] if (small_table and border_color != &quot;auto&quot;) or not 
small_table: border.set(f&quot;{{{WORD_NS}}}color&quot;, BORDER_COLOR_MAP[border.attrib[f&quot;{{{WORD_NS}}}color&quot;]]) </code></pre> <p>While this script works for most of the tables, I've found some tables in which it doesn't. This table for instance.</p> <pre><code> &lt;w:tbl xmlns:w=&quot;http://schemas.openxmlformats.org/wordprocessingml/2006/main&quot; xmlns:wpc=&quot;http://schemas.microsoft.com/office/word/2010/wordprocessingCanvas&quot; xmlns:cx=&quot;http://schemas.microsoft.com/office/drawing/2014/chartex&quot; xmlns:cx1=&quot;http://schemas.microsoft.com/office/drawing/2015/9/8/chartex&quot; xmlns:cx2=&quot;http://schemas.microsoft.com/office/drawing/2015/10/21/chartex&quot; xmlns:cx3=&quot;http://schemas.microsoft.com/office/drawing/2016/5/9/chartex&quot; xmlns:cx4=&quot;http://schemas.microsoft.com/office/drawing/2016/5/10/chartex&quot; xmlns:cx5=&quot;http://schemas.microsoft.com/office/drawing/2016/5/11/chartex&quot; xmlns:cx6=&quot;http://schemas.microsoft.com/office/drawing/2016/5/12/chartex&quot; xmlns:cx7=&quot;http://schemas.microsoft.com/office/drawing/2016/5/13/chartex&quot; xmlns:cx8=&quot;http://schemas.microsoft.com/office/drawing/2016/5/14/chartex&quot; xmlns:mc=&quot;http://schemas.openxmlformats.org/markup-compatibility/2006&quot; xmlns:aink=&quot;http://schemas.microsoft.com/office/drawing/2016/ink&quot; xmlns:am3d=&quot;http://schemas.microsoft.com/office/drawing/2017/model3d&quot; xmlns:o=&quot;urn:schemas-microsoft-com:office:office&quot; xmlns:oel=&quot;http://schemas.microsoft.com/office/2019/extlst&quot; xmlns:r=&quot;http://schemas.openxmlformats.org/officeDocument/2006/relationships&quot; xmlns:m=&quot;http://schemas.openxmlformats.org/officeDocument/2006/math&quot; xmlns:v=&quot;urn:schemas-microsoft-com:vml&quot; xmlns:wp14=&quot;http://schemas.microsoft.com/office/word/2010/wordprocessingDrawing&quot; xmlns:wp=&quot;http://schemas.openxmlformats.org/drawingml/2006/wordprocessingDrawing&quot; 
xmlns:w10=&quot;urn:schemas-microsoft-com:office:word&quot; xmlns:w14=&quot;http://schemas.microsoft.com/office/word/2010/wordml&quot; xmlns:w15=&quot;http://schemas.microsoft.com/office/word/2012/wordml&quot; xmlns:w16cex=&quot;http://schemas.microsoft.com/office/word/2018/wordml/cex&quot; xmlns:w16cid=&quot;http://schemas.microsoft.com/office/word/2016/wordml/cid&quot; xmlns:w16=&quot;http://schemas.microsoft.com/office/word/2018/wordml&quot; xmlns:w16du=&quot;http://schemas.microsoft.com/office/word/2023/wordml/word16du&quot; xmlns:w16sdtdh=&quot;http://schemas.microsoft.com/office/word/2020/wordml/sdtdatahash&quot; xmlns:w16sdtfl=&quot;http://schemas.microsoft.com/office/word/2024/wordml/sdtformatlock&quot; xmlns:w16se=&quot;http://schemas.microsoft.com/office/word/2015/wordml/symex&quot; xmlns:wpg=&quot;http://schemas.microsoft.com/office/word/2010/wordprocessingGroup&quot; xmlns:wpi=&quot;http://schemas.microsoft.com/office/word/2010/wordprocessingInk&quot; xmlns:wne=&quot;http://schemas.microsoft.com/office/word/2006/wordml&quot; xmlns:wps=&quot;http://schemas.microsoft.com/office/word/2010/wordprocessingShape&quot;&gt;\n &lt;w:tblPr&gt;\n &lt;w:tblStyle w:val=&quot;Tablaconcuadrcula&quot;/&gt;\n &lt;w:tblW w:w=&quot;9684&quot; w:type=&quot;dxa&quot;/&gt;\n &lt;w:jc w:val=&quot;center&quot;/&gt;\n &lt;w:tblLook w:val=&quot;04A0&quot; w:firstRow=&quot;1&quot; w:lastRow=&quot;0&quot; w:firstColumn=&quot;1&quot; w:lastColumn=&quot;0&quot; w:noHBand=&quot;0&quot; w:noVBand=&quot;1&quot;/&gt;\n &lt;/w:tblPr&gt;\n .... 
&lt;w:tc&gt;\n &lt;w:tcPr&gt;\n &lt;w:tcW w:w=&quot;8062&quot; w:type=&quot;dxa&quot;/&gt;\n &lt;w:tcBorders&gt;\n &lt;w:top w:val=&quot;single&quot; w:sz=&quot;4&quot; w:space=&quot;0&quot; w:color=&quot;auto&quot;/&gt;\n &lt;w:left w:val=&quot;single&quot; w:sz=&quot;4&quot; w:space=&quot;0&quot; w:color=&quot;auto&quot;/&gt;\n &lt;w:bottom w:val=&quot;single&quot; w:sz=&quot;4&quot; w:space=&quot;0&quot; w:color=&quot;auto&quot;/&gt;\n &lt;w:right w:val=&quot;single&quot; w:sz=&quot;4&quot; w:space=&quot;0&quot; w:color=&quot;auto&quot;/&gt;\n &lt;/w:tcBorders&gt;\n &lt;w:shd w:val=&quot;clear&quot; w:color=&quot;auto&quot; w:fill=&quot;FDE7D4&quot;/&gt;\n &lt;w:hideMark/&gt;\n &lt;/w:tcPr&gt;\n </code></pre> <p>I noticed that the table doesn't have a <code>tblBorders</code> inside of <code>tblPr</code>, but that every cell has the same style set individually. However I don't understand that even when I set it manually (with <code>etree.SubElement(tbl_pr, f&quot;{{{WORD_NS}}}tblBorders&quot;)</code>), if I save that document, and load it again with python-docx, the properties are not there. I assume one alternative could be to edit all the cell individually with a loop, but I don't see why this doesn't work.</p>
<python><xml><ms-word><python-docx>
2025-03-21 12:30:52
1
436
J. Maria
79,525,409
1,194,864
Create a zip file that contains only files and not a folder/directory
<p>I would like to create a function that lists the items of a directory and adds just the files to a <code>zip</code> file (without having any folder inside the <code>zip</code>). My directory looks as follows:</p> <pre><code>student_path student_1: file1.txt file2.txt file.html student_2: file1.txt file2.txt file.html ... student_n: file1.txt file2.txt </code></pre> <p>I want to create a loop through these folders that reads the files, deletes the HTML files, and zips only the text files of each student into a zip file (just the text files, without any folder).</p> <p>My code for this is as follows:</p> <pre><code>import os import shutil from zipfile import ZipFile, ZIP_DEFLATED import pathlib import pdb path = &quot;assignment1/student_files/&quot; folders = os.listdir(path) # Create a zip file from a directory def zipCreate(_path_, zip_path): folder = pathlib.Path(_path_) with ZipFile(zip_path, 'w', ZIP_DEFLATED) as zipf: for file in folder.iterdir(): zipf.write(file, arcname=file.name) # Remove all python files from the student_files directory def remove_html(path): for item in folders: if not item == &quot;.DS_Store&quot;: for file in os.listdir(path + item): if os.path.isdir(path + item + &quot;/&quot; + file): shutil.rmtree(path + item + &quot;/__pycache__/&quot;) if file.endswith(&quot;.html&quot;): print(&quot;Removing &quot; + path + item + &quot;/&quot; + file) os.remove(path + item + &quot;/&quot; + file) # Create a zip file from the student_files directory zip_path = path + &quot;/&quot; + item + &quot;.zip&quot; zipCreate(path+item+&quot;/&quot;, zip_path) if __name__ == &quot;__main__&quot;: remove_html(path) print(&quot;Done removing python files and zipping submissions...&quot;) </code></pre> <p>However, each time I create a <code>zip</code> file, it contains a directory named <code>student_n</code>, while I just want to include the text files.</p>
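For reference, `arcname=file.name` is the right tool for a flat archive: it stores only the basename, so no directory entries appear. A self-contained sketch that also skips HTML files and writes the archive outside the folder being zipped (creating the `.zip` inside a directory that is later iterated again is one common source of stray entries); `zip_flat` is an illustrative helper:

```python
import pathlib
import tempfile
import zipfile

def zip_flat(src_dir, zip_path, skip_suffixes=(".html",)):
    """Zip only the regular files in src_dir, flattened (no folders inside)."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(pathlib.Path(src_dir).iterdir()):
            if f.is_file() and f.suffix not in skip_suffixes:
                zf.write(f, arcname=f.name)  # basename only -> flat archive

# demo on a throwaway student folder
with tempfile.TemporaryDirectory() as d:
    student = pathlib.Path(d) / "student_1"
    student.mkdir()
    for name in ("file1.txt", "file2.txt", "file.html"):
        (student / name).write_text("x")
    out = pathlib.Path(d) / "student_1.zip"  # created OUTSIDE student_1
    zip_flat(student, out)
    with zipfile.ZipFile(out) as zf:
        names = zf.namelist()

print(names)  # ['file1.txt', 'file2.txt'] -- no 'student_1/' prefix
```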
<python><path><directory><zip>
2025-03-21 12:17:42
1
5,452
Jose Ramon
79,525,357
1,548,830
Deadlock in Multiprocessing Queue
<p>I am developing a program to simulate a P2P network using the <code>multiprocessing</code> package. Specifically, I have created a <code>Node</code> class that inherits from <code>multiprocessing.Process</code> and contains all the basic functionalities, while the <code>SpecializedNode</code> class implements specific features.</p> <p><code>SpecializedNode</code> performs an operation for <code>max_iter</code> iterations and, at each iteration, sends the result to its peers (<code>broadcast_message(self, msg)</code>) and collects all received messages before executing the task again (<code>collect_messages(self)</code>).</p> <p>To handle inter-process communication, each <code>Node</code> has a <code>multiprocessing.Queue</code> where it receives messages from other peers.</p> <p>Unfortunately, it seems that all processes get stuck when calling <code>collect_messages(self)</code>, without raising any errors or exceptions. Since this is my first time using the <code>multiprocessing</code> package, I wonder if I am using it correctly.</p> <p>Moreover, does this implementation allow the node to correctly receive messages from peers while it is performing the task in <code>run()</code>?</p> <pre class="lang-py prettyprint-override"><code> from multiprocessing import Process, Queue class Node(Process): def __init__(self, node_id): super(Node, self).__init__() self.node_id = node_id # Incoming messages (e.g., models) queue self.message_queue = Queue() def set_peers(self, peers): self.peers = peers def collect_messages(self): messages = [] while not self.message_queue.empty(): msg = self.message_queue.get() messages.append(msg) return messages def broadcast_message(self, msg): for peer in self.peers: peer.message_queue.put(msg) </code></pre> <p>Then I have my specialized Node class:</p> <pre class="lang-py prettyprint-override"><code> class SpecializedNode(Node): def __init__(self, node_id, max_iter): super().__init__(node_id) self.max_iter = max_iter def run(self): for
current_iter in range(self.max_iter): #do stuff msg = #build my message self.broadcast_message(msg) incoming_messages = self.collect_messages() # do stuff based on the received messages </code></pre> <pre class="lang-py prettyprint-override"><code> if __name__ == &quot;__main__&quot;: n_nodes = 30 # Create the nodes nodes = [SpecializedNode(i, 100) for i in range(n_nodes)] # Simulate a fully-connected network for node in nodes: node.set_peers(peers=[n for n in nodes if n.node_id != node.node_id]) # Start the nodes (i.e., processes) for node in nodes: node.start() # Wait for completion for node in nodes: node.join() </code></pre>
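Two documented pitfalls are relevant here: `Queue.empty()` is not reliable across processes, so draining "until empty" can miss messages still in flight, and a child process will not exit while data it has `put()` remains unflushed in a queue, which can look like a deadlock at `join()`. A hedged, single-process sketch of collecting a *known* message count instead of draining until empty; `collect_exactly` is an illustrative helper, and the peers are stand-in tuples:

```python
from multiprocessing import Queue

def collect_exactly(q, n, timeout=5.0):
    """Block until exactly n messages have arrived (queue.Empty on timeout)."""
    return [q.get(timeout=timeout) for _ in range(n)]

q = Queue()
for peer_id in range(3):          # stand-ins for 3 broadcasting peers
    q.put(("result", peer_id))
msgs = collect_exactly(q, 3)      # waits for all 3 instead of trusting empty()
print(len(msgs))  # 3
```

In the fully-connected setup above, each node knows it should receive `len(self.peers)` messages per iteration, so that count can drive the loop.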
<python><multiprocessing><queue>
2025-03-21 11:53:32
1
529
Mattia Campana
79,525,356
11,850,171
ValueError: could not convert string to float: '76.4984018.904527.000':geopandas
<p>I am working with geospatial data and attempting to apply machine learning for prediction. Hereโ€™s how I read my data:</p> <pre><code>import geopandas as gpd import fiona from fiona.drvsupport import supported_drivers supported_drivers['CSV'] = 'csv' all_data = gpd.read_file('book3.csv')[['Name','description', 'geometry']] </code></pre> <p>results sample:</p> <pre><code>Name description geometry 0 0 mall POLYGON Z((76.4805522843099,76.4905522843099)) 1 1 Restaurants POLYGON Z((76.0253377024336,76.0255777024336 2 2 High Fashion POLYGON Z((76.4805522843097,76.1253377024336)) 3 3 Supermarket POLYGON Z((76.4825679653146,76.4825681653146)) 4 4 Cheap Fashion POLYGON Z((76.5136851191604,76.6136851191604) 5 5 Cosmetics POLYGON Z((76.5486903603254, 76.7126903603254)) 6 6 Electronic POLYGON Z((76.5442602768404,76.7112602768404)) 7 7 West Wing POLYGON Z((76.4984018904527,76.4987348904527)) 8 8 Brought Yes LINESTRINGZ(76.4984018.904527.00000,76.4987348... 9 9 Brought Yes LINESTRINGZ(76.4984018.904527.00000,76.4987348... 10 10 Brought Yes LINESTRINGZ(76.4984018.904527.00000,76.4987348... 11 11 Brought Yes LINESTRINGZ(76.5486903.603254.00000,76.7126903... 12 12 Brought Yes LINESTRINGZ(76.5442602.768404.00000,76.7112602... 13 13 Brought Yes LINESTRINGZ(76.5442602.768404.00000,76.7112602... 14 14 Brought Yes LINESTRINGZ(76.4984018.904527.00000,76.4987348... 15 15 Brought Yes LINESTRINGZ(76.4984018.904527.00000,76.4987348... 16 16 Brought Yes LINESTRINGZ(76.5486903.603254.00000,76.7126903... 17 17 Brought Yes LINESTRINGZ(76.5486903.603258.00000,76.7126903... 18 18 Brought No LINESTRINGZ(76.4984018.904545.00000,76.4987348... 19 19 Brought No LINESTRINGZ(76.5486903.603265.00000,76.7126903... 20 20 Brought No LINESTRINGZ(76.5486903.603254.00000,76.7126903... 21 21 Brought No LINESTRINGZ(76.5486903.603255.00000,76.7126903... 22 22 Brought No LINESTRINGZ(76.5486903.603254.00000,76.7126903... 
</code></pre> <p>I am trying to convert geometry data to float in the following way:</p> <pre><code>import pandas as pd from shapely.geometry.point import Point results = [] # split the lines into points for i, row in itineraries.iterrows(): # print(row) #removing redundant information list_of_points_extracted = str(row['geometry']).strip('LINESTRINGZ(').strip(')').strip().split(',') list_of_points_extracted = [point[:-2] for point in list_of_points_extracted] # print(list_of_points_extracted) # convert lat and long into floats for x in list_of_points_extracted: for y in x.rstrip(' ').rstrip().split(','): print((float(y).type)) </code></pre> <p>But I am getting the following error message:</p> <pre><code>value error: could not convert string to float: '76.4984018.904527.000' </code></pre> <p>I have tried hard but failed; I would appreciate your help with this. Thank you.</p>
<python><pandas><dataframe><geopandas>
2025-03-21 11:51:48
0
321
TinaTz
79,525,238
4,537,160
Have different Black settings for submodules in main repo
<p>I'm cloning a repo which contains 2 submodules cloned in the <code>subs/</code> folder, so the structure is like:</p> <pre class="lang-none prettyprint-override"><code>main_repo/ -- subs/ ---- sub_01 ---- sub_02 </code></pre> <p>I want each of them to have different settings, for example the line length for the Black formatter and flake8, so I created 3 <code>.vscode/settings.json</code> files (in the main repo and in each submodule's folder), each containing:</p> <pre class="lang-json prettyprint-override"><code> &quot;[python]&quot;: { &quot;editor.formatOnType&quot;: false, &quot;editor.formatOnSave&quot;: true, &quot;editor.defaultFormatter&quot;: &quot;ms-python.black-formatter&quot;, }, &quot;black-formatter.args&quot;: [ &quot;--line-length=xxx&quot; ] </code></pre> <p>I set the line length to 150 in the main repo and sub_01, and 100 in sub_02, but it seems the 150 setting is being used everywhere when I save.</p>
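For what it's worth, Black itself reads configuration from the nearest `pyproject.toml` above each file it formats, so a per-submodule config file may be a more reliable route than per-folder editor settings (`.vscode/settings.json` applies per workspace folder, not per subdirectory of a single-root workspace). A hedged sketch, with illustrative paths:

```toml
# subs/sub_02/pyproject.toml -- Black discovers the closest pyproject.toml
# above the file being formatted, so this applies only inside sub_02.
[tool.black]
line-length = 100
```

With this in place, the `--line-length` entry in `black-formatter.args` should be removed, since an explicit CLI argument overrides the project file.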
<python><visual-studio-code>
2025-03-21 10:51:31
1
1,630
Carlo
79,525,092
6,378,557
Using Python's match/case statement to check for a list of a given type
<p>I'm trying to use a match/case instruction to process things differently based on type:</p> <p>Code looks like:</p> <pre><code>def checkArg(arg: str | list[str]) -&gt; str: match arg: case str(): return 'str' case list(): return 'list(str)' case _: return &quot;unmatched&quot; def test(arg: str | list[str]): print(f'{arg=!r}: {checkArg(arg)}') test(0) test('This should match') test(['This', 'should', 'match']) test(['This', 'should', 0, 'not', 'match']) </code></pre> <ul> <li>As expected type hinting makes PyCharm flag <code>test(0)</code></li> <li>But I don't get any warning for <code>test(['This', 'should', 0, 'not', 'match'])</code> so I could be missing something already</li> <li>I can't find what I could use instead of <code>case list()</code> to reject the 4th test and restrict the match to proper list of strings (I tried things like <code>case list[str]()</code> but either the syntax is not valid or that doesn't work).</li> </ul>
<python><structural-pattern-matching>
2025-03-21 09:52:40
1
9,122
xenoid
79,525,083
10,727,331
How to remove padding coming from tick labels from other subplots
<p>I have created a set of heatmaps in a 6x3 subplot grid.</p> <p><a href="https://i.sstatic.net/AGrElD8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AGrElD8J.png" alt="Heatmaps status quo" /></a></p> <p>As you can see, I added a shared color bar in the last row that spans over all columns (I used a grid spec to realize this).<br /> While iterating through the rows and columns, I made sure that the Y tick labels are only rendered for the first column and the X tick labels only for the last row.</p> <p>I am already pretty happy with the results, except for one thing:<br /> Due to the Y tick labels from subplots of the first column, the color bar also starts where the Y tick labels start.<br /> While this is great for the subplots that contain the actual heatmaps, it is unnecessary for the color bar.<br /> I would like to remove this padding on the left side and if possible center the color bar in the middle of the last row.</p> <p>I tried various approaches like disabling the ticks for the color bar axis using <code>cax.set_yticks(ticks=[])</code> or removing the margin with <code>cax.set_xmargin(0)</code> but nothing worked so far.</p> <pre><code>fig = plt.figure(figsize=(12, 13), layout=&quot;constrained&quot;) def render_heatmaps(): # Create a figure with multiple subplots nrows = ( len(domain_knowledge[&quot;DS1: Automotive&quot;]) + 2 ) # usually 4 + 2 for infobox and color bar ncols = len(domain_knowledge.keys()) # usually 3 gs = fig.add_gridspec(nrows, ncols) # Adjust the width ratios gs.set_height_ratios([0.7, 1, 1, 1, 1, 0.1]) gs.hspace = 0.07 gs.wspace = 0.02 # Render infobox ax_text: Axes = fig.add_subplot(gs[0, :]) render_infobox(ax_text) # Create a colorbar based on the min/max values that are in all datasets (contingency matrices for each study object) norm = mcolors.Normalize(vmin=np.min(datasets), vmax=np.max(datasets)) colors = ScalarMappable(norm, cmap=&quot;GnBu&quot;) for row in range(nrows - 2): for col in range(ncols): axis = 
fig.add_subplot(gs[row + 1, col]) # Render the combined heatmap sns.heatmap( datasets[col * 4 + row], annot=True, cmap=colors.cmap, # only show the y labels on the first column yticklabels=(categories_necessity[:nec_range] if col == 0 else False), # only show the x labels on the last row xticklabels=( categories_temporality[:temp_range] if row + 1 == nrows - 2 else False ), ax=axis, cbar=False, # uses a combined colorbar ) axis.tick_params(axis=&quot;y&quot;, labelrotation=0) axis.set_title(label=f&quot;S{col*4+row}&quot;) # create a combined colorbar cax = fig.add_subplot(gs[nrows - 1, :]) # Use all columns for the colorbar cax.set_xmargin(0) fig.colorbar( colors, cax=cax, orientation=&quot;horizontal&quot;, ) cax.set_title(&quot;Number of Study Responses&quot;) </code></pre>
<python><matplotlib><seaborn>
2025-03-21 09:49:15
3
392
Mayor Mayer
79,525,077
11,803,687
FastAPI/Starlette's Request object is empty when used in a normal function
<p>When I use <code>request</code> inside a FastAPI route, the <code>request</code> is filled with the proper headers, such as:</p> <pre><code>request.headers.get('X-Forwarded-For') </code></pre> <p>However, when I use the <code>request</code> object inside a function, which isn't a route, all the values are empty. I guess the whole object is not evaluated?</p> <p>Working:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import Request @app.get(&quot;/logip&quot;) async def log_request_ip(request: Request): print(request.headers.get(&quot;X-Forwarded-For&quot;)) </code></pre> <p>Not working:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import Request @app.get(&quot;/logip&quot;) async def log_request_ip(): log_ip() def log_ip(request: Request = Request): print(request.headers.get(&quot;X-Forwarded-For&quot;)) </code></pre> <p>Is there a reason for this? Does FastAPI only evaluate the request object when it is injected into the route, but not in a function? Is there any way to make this possible? I want to avoid injecting the request into every route, and just keep this in the function that actually uses this value.</p> <p>Is there maybe a way to tell FastAPI that this function is only used within a request context?</p>
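Worth noting: in the "not working" snippet, `request: Request = Request` makes the default value the `Request` *class* itself, not a populated instance; FastAPI only injects a real request into route (or dependency) parameters. A common workaround is to stash the request in a `contextvars.ContextVar` from an HTTP middleware so plain functions can read it. A framework-free sketch with a stand-in dict instead of a real `Request` (all names are illustrative):

```python
from contextvars import ContextVar

_current_request: ContextVar = ContextVar("current_request")

def set_request(request):
    # in FastAPI this would run inside an @app.middleware("http") handler,
    # once per incoming request
    _current_request.set(request)

def log_ip():
    # plain function: no Request parameter, reads the ambient context instead
    request = _current_request.get()
    return request["headers"].get("X-Forwarded-For")

# stand-in for a real Request object, just for the demo
set_request({"headers": {"X-Forwarded-For": "203.0.113.7"}})
print(log_ip())  # 203.0.113.7
```

ContextVars are async-safe per task, which is why this pattern works under concurrent requests.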
<python><fastapi><starlette>
2025-03-21 09:48:34
1
1,649
c8999c 3f964f64
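A note on the FastAPI question above: `Request` is only injected into the parameters of route handlers and dependencies, so a plain helper function never receives a populated object (and `request: Request = Request` merely defaults the parameter to the class itself). A common workaround is to stash the request in a `contextvars.ContextVar`, typically from a middleware, so helpers can read it without explicit passing. Below is a minimal stdlib sketch of that pattern; `FakeRequest`, `handle`, and the header value are stand-ins for illustration, not FastAPI APIs:

```python
import contextvars

class FakeRequest:
    """Stand-in for starlette.requests.Request; holds only headers."""
    def __init__(self, headers):
        self.headers = headers

_current_request = contextvars.ContextVar("current_request")

def handle(request):
    """Plays the role of a route handler (or middleware): store the
    request for the duration of the call, then clean up."""
    token = _current_request.set(request)
    try:
        return log_ip()
    finally:
        _current_request.reset(token)

def log_ip():
    """Helper with no request parameter; reads it from the context var."""
    request = _current_request.get()
    return request.headers.get("X-Forwarded-For")

ip = handle(FakeRequest({"X-Forwarded-For": "203.0.113.7"}))
```

In a real app the `handle` role would be played by an ASGI middleware that sets the variable before calling the route; `contextvars` is async-safe, which is why this pattern works under FastAPI's event loop.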
79,524,765
2,148,718
Set initial value of a Pydantic 2 field using validator
<p>Say I have the following Pydantic 2.10.6 model, where <code>x</code> is dynamically calculated from another field, <code>y</code>:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field, field_validator, ValidationInfo class Foo(BaseModel): x: int = Field(init=False) y: int @field_validator(&quot;x&quot;, mode=&quot;before&quot;) @staticmethod def set_x(val: None, info: ValidationInfo): return info.data[&quot;y&quot;] + 1 Foo(y=1) </code></pre> <p>Running this as is fails validation:</p> <pre><code>pydantic_core._pydantic_core.ValidationError: 1 validation error for Foo x Field required [type=missing, input_value={'y': 1}, input_type=dict] </code></pre> <hr /> <p>I can &quot;fix&quot; the runtime error by giving <code>x</code> a default:</p> <pre class="lang-py prettyprint-override"><code> x: int = Field(init=False, default=None) </code></pre> <p>But then this fails type checking in pyright with:</p> <pre><code>Type &quot;None&quot; is not assignable to declared type &quot;int&quot; </code></pre> <hr /> <p>I can also fix it using a <code>model_validator</code>, but this is a tad less neat:</p> <pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field, model_validator class Foo(BaseModel): x: int = Field(init=False) y: int @model_validator(mode=&quot;before&quot;) @staticmethod def set_x(data: dict): data[&quot;x&quot;] = data[&quot;y&quot;] + 1 return data Foo(y=1) </code></pre> <hr /> <p>How can I cleanly represent this model using only a field validator?</p>
<python><pydantic><pydantic-v2>
2025-03-21 07:17:13
1
20,337
Migwell
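For comparison with the Pydantic question above: the "field derived from another field at construction time" shape is what the standard library's dataclasses express with `field(init=False)` plus `__post_init__`; in Pydantic 2 the closest clean equivalents are the `model_validator` shown in the question or a `@computed_field`, since a field validator only runs once the field has some value or default. A stdlib sketch of the same model:

```python
from dataclasses import dataclass, field

@dataclass
class Foo:
    y: int
    # x is excluded from __init__ and derived from y after construction
    x: int = field(init=False)

    def __post_init__(self) -> None:
        self.x = self.y + 1

foo = Foo(y=1)
```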
79,524,654
271,789
SQLAlchemy Mapped field not correctly type-checked
<p>With this code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Protocol from sqlalchemy import Integer from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class SpamProt(Protocol): id: int class Base(DeclarativeBase): pass class SpamModel(Base): id: Mapped[int] = mapped_column(Integer()) def do_with_spam(d: SpamProt) -&gt; None: raise NotImplementedError() if __name__ == &quot;__main__&quot;: spam = SpamModel(id=10) do_with_spam(spam) </code></pre> <p><code>mypy</code> is returning an error:</p> <pre><code>spam.py:24: error: Argument 1 to &quot;do_with_spam&quot; has incompatible type &quot;SpamModel&quot;; expected &quot;SpamProt&quot; [arg-type] spam.py:24: note: Following member(s) of &quot;SpamModel&quot; have conflicts: spam.py:24: note: id: expected &quot;int&quot;, got &quot;Mapped[int]&quot; Found 1 error in 1 file (checked 1 source file) </code></pre> <p>As SQLAlchemy 2.0 claims to be fully type-compatible without any plugins, I don't understand why this simple example doesn't work. It is not a result of invariance, and it is not interference from the old mypy plugin (here's the whole environment):</p> <pre><code>╭───────────────────┬─────────┬──────────╮ │ name │ version │ location │ ├───────────────────┼─────────┼──────────┤ │ greenlet │ 3.1.1 │ │ │ mypy │ 1.15.0 │ │ │ mypy-extensions │ 1.0.0 │ │ │ SQLAlchemy │ 2.0.39 │ │ │ typing_extensions │ 4.12.2 │ │ ╰───────────────────┴─────────┴──────────╯ </code></pre> <p>Do I fundamentally misunderstand how the <code>Mapped</code> annotations should work?</p>
<python><sqlalchemy><python-typing><mypy>
2025-03-21 06:06:36
1
2,066
zefciu
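On the mypy error above: at class level `SpamModel.id` is a descriptor (`Mapped[int]`), while the protocol declares a plain `int` attribute, and mypy checks protocol variables against the class-level type too. Declaring the protocol member as a read-only property is the usual way to let descriptor-backed attributes match. Here is a dependency-free sketch with a toy descriptor standing in for `Mapped`/`mapped_column` (the runtime behavior below works as shown; run mypy on both protocol spellings to compare the static results):

```python
from typing import Protocol

class SpamProt(Protocol):
    # A property member (rather than `id: int`) matches classes whose
    # class-level attribute is a descriptor, as with SQLAlchemy's Mapped.
    @property
    def id(self) -> int: ...

class Column:
    """Toy descriptor playing the role of Mapped[int]/mapped_column."""
    def __set_name__(self, owner, name):
        self._name = "_" + name
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self._name)
    def __set__(self, obj, value):
        setattr(obj, self._name, value)

class SpamModel:
    id = Column()  # class-level descriptor, instance-level int
    def __init__(self, id: int) -> None:
        self.id = id

def do_with_spam(d: SpamProt) -> int:
    return d.id

result = do_with_spam(SpamModel(id=10))
```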
79,524,602
2,700,041
How to efficiently apply batched edge/node modifications across multiple graph copies using precomputed data?
<p>I'm working on a task where I need to apply custom modifications to many independent copies of the same base graph. For each instance, I want to:</p> <ul> <li>Start from the same base graph</li> <li>Remove one or more edges</li> <li>Insert intermediate nodes</li> <li>Add new edges with custom weights</li> </ul> <p>I <strong>already have all the necessary modification data</strong> (bulk of edges to remove, nodes to add, and edges to insert) prepared up front as NumPy arrays or lists - one set per graph copy.</p> <p><strong>What I'm doing now (with networkx):</strong></p> <pre><code>import networkx as nx # Base graph G = nx.Graph() G.add_edge(0, 1, DISTANCE=10) G.add_edge(1, 2, DISTANCE=10) # Example modification for one instance G_copy = G.copy() G_copy.remove_edge(0, 1) G_copy.add_node(100) G_copy.add_edge(0, 100, DISTANCE=4) G_copy.add_edge(100, 1, DISTANCE=6) </code></pre> <p>I need to apply different sets of modifications (edge removals, node insertions, edge additions with weights) to thousands of independent copies of a base graph. <strong>All modification data is precomputed in bulk. The bottleneck is the sequential processing: each copy is handled one at a time in Python using loops</strong>. Even with all data available upfront, <code>networkx</code> lacks support for efficient batched or parallel updates.</p> <p>I'm looking for a graph library that supports fast, batched modifications across many graph instances, ideally with:</p> <ul> <li>NumPy-like vectorized graph operations</li> <li>Efficient edge/node insertions and deletions</li> </ul> <hr /> <p><strong>To be concise:</strong></p> <p>I'm looking for a way to apply the following batched operations on multiple independent graph copies, using precomputed data:</p> <pre><code>G.remove_edges(first_nodes, last_nodes) G.add_nodes(middle_nodes) G.add_edges(first_nodes, middle_nodes, DISTANCE=first_middle_distance) G.add_edges(middle_nodes, last_nodes, DISTANCE=middle_last_distance) </code></pre> <p>Here, <code>first_nodes, last_nodes, middle_nodes, first_middle_distance</code>, and <code>middle_last_distance</code> are all precomputed. The goal is to perform these modifications efficiently across many G copies in batch - ideally <strong>returning one modified graph per index in the input arrays.</strong></p>
<python><numpy><graph><networkx>
2025-03-21 05:22:11
1
1,427
hanugm
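One library-free way to attack the batching problem above, given that every modification is precomputed: keep the base graph as an edge-to-weight dict and build each variant with cheap dict operations, one variant per index of the input arrays. A minimal sketch, assuming one removed edge and one inserted node per copy (the function name and argument layout are illustrative, not from any library):

```python
def make_variants(base_edges, first_nodes, last_nodes, middle_nodes,
                  first_middle_dist, middle_last_dist):
    """base_edges: {(u, v): weight}. Returns one modified edge dict
    per index of the (equal-length) input sequences."""
    variants = []
    for u, v, m, d1, d2 in zip(first_nodes, last_nodes, middle_nodes,
                               first_middle_dist, middle_last_dist):
        edges = dict(base_edges)   # shallow copy of the base graph
        edges.pop((u, v), None)    # remove the original edge
        edges[(u, m)] = d1         # first -> middle
        edges[(m, v)] = d2         # middle -> last
        variants.append(edges)
    return variants

base = {(0, 1): 10, (1, 2): 10}
variants = make_variants(base, [0, 1], [1, 2], [100, 101], [4, 3], [6, 7])
```

Each `variants[i]` is an independent edge dict; converting one back to a `networkx.Graph` (or any other library's format) only needs to happen for the copies actually consumed downstream.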
79,524,581
10,191,971
What is good practice for adding many similar methods to a class in Python?
<p>Let's say I have a class like this.</p> <pre><code>from scipy import interpolate import numpy as np class Bar: def __init__(self,data): self.data = data def mean(self): return self.data.mean(axis=0) def sd(self): return self.data.std(axis=0) bar = Bar(np.random.rand(10,5)) print(bar.mean()) print(bar.sd()) </code></pre> <p>The class <code>Bar</code> may have many such methods such as <code>mean()</code>, <code>sd()</code> etc. I want to add sampled versions of those methods, so that I can simply get results equivalent to this:</p> <pre><code>new_ids = np.linspace(bar.ids[0],bar.ids[-1],100) sampled_mean = interpolate.interp1d(bar.ids,bar.mean(),axis=0)(new_ids) </code></pre> <p>Current workaround: manually add new methods with the help of decorators.</p> <pre><code>from scipy import interpolate import numpy as np def sample(func): def wrapper(self,n_sample=100,*args,**kwargs): new_ids = np.linspace(self.ids[0],self.ids[-1],n_sample) vals = func(self,*args,**kwargs) return interpolate.interp1d(self.ids,vals,axis=0)(new_ids) return wrapper class Bar: def __init__(self,ids,data): self.ids = ids self.data = data def mean(self): return self.data.mean(axis=0) def sd(self): return self.data.std(axis=0) @sample def spl_mean(self): return self.mean() @sample def spl_sd(self): return self.sd() bar = Bar(ids=np.arange(5),data=np.random.rand(10,5)) new_ids = np.linspace(bar.ids[0],bar.ids[-1],100) sampled_mean = interpolate.interp1d(bar.ids,bar.mean(),axis=0)(new_ids) assert np.all(sampled_mean == bar.spl_mean()) </code></pre> <p>However, I have many such methods. Of course I can write them with the help of an LLM, but what I want to know is:</p> <ul> <li>whether this is already a good way to achieve this, and</li> <li>whether there's a more elegant approach.</li> </ul>
<python><class><oop><decorator>
2025-03-21 05:05:17
2
1,622
C.K.
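A small follow-up to the question above: rather than hand-writing one `spl_*` wrapper per method, the decorator can be applied programmatically by listing the method names once and attaching wrapped versions in a loop (the same idea also fits in `__init_subclass__` or a class decorator). A sketch with the interpolation replaced by a trivial tag so it stays dependency-free; the real `sample` decorator from the question would slot in unchanged:

```python
def sample(func):
    # Stand-in for the real interpolation-based wrapper.
    def wrapper(self, *args, **kwargs):
        return ("sampled", func(self, *args, **kwargs))
    return wrapper

SAMPLED = ("mean", "sd")  # names to wrap, listed exactly once

class Bar:
    def __init__(self, data):
        self.data = data
    def mean(self):
        return sum(self.data) / len(self.data)
    def sd(self):
        m = self.mean()
        return (sum((x - m) ** 2 for x in self.data) / len(self.data)) ** 0.5

# Attach spl_mean, spl_sd, ... without writing each by hand.
for name in SAMPLED:
    setattr(Bar, "spl_" + name, sample(getattr(Bar, name)))

bar = Bar([1.0, 2.0, 3.0])
```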
79,524,515
1,491,229
Huge time difference when reading a file inside a class
<p>Does anyone have suggestions as to where the huge time difference below comes from when reading the same large text file with the same loop, once from inside a class and once at module level in Python?</p> <pre><code>import timeit fn = &quot;some-large-file.txt&quot; pos = 2434735976 class TestCase: def __init__(self, filename): self.filename = filename self.content = &quot;&quot; def read_file(self, start): self.readstart = start with open(self.filename, &quot;r&quot;) as f: f.seek(self.readstart) line = f.readline() while line: self.content += line.strip() line = f.readline() if line.strip().startswith('&gt;'): return timeit_start = timeit.default_timer() a = TestCase(fn) a.read_file(pos) print(len(a.content)) timeit_stop = timeit.default_timer() print('Elapsed time: ', timeit_stop - timeit_start) </code></pre> <p>90338456</p> <p>Elapsed time: 31628.955818721</p> <pre><code>timeit_start = timeit.default_timer() s = '' with open(fn, &quot;r&quot;) as f: f.seek(pos) line = f.readline() while line: s += line.strip() line = f.readline() if line.strip().startswith('&gt;'): break print(len(s)) timeit_stop = timeit.default_timer() print('Elapsed time: ', timeit_stop - timeit_start) </code></pre> <p>90338456</p> <p>Elapsed time: 1.233782830000564</p> <p>I use Jupyter with Python 3.8.10 and IPython 8.12.2.</p>
<python><text-files><readline><file-read><timeit>
2025-03-21 03:48:07
3
709
user1491229
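The timings above are consistent with repeated string concatenation: `self.content += line` rebuilds the whole string on each iteration (CPython's in-place `str +=` fast path generally applies only to a simple local variable with no other references, not to an attribute), so the class version degrades to roughly quadratic time. Accumulating lines in a list and joining once sidesteps this regardless of where the loop lives. A sketch of the fix, demonstrated on an in-memory file so it is self-contained:

```python
import io

class TestCase:
    def __init__(self, fileobj):
        self.fileobj = fileobj
        self.content = ""

    def read_file(self, start):
        self.fileobj.seek(start)
        parts = []                       # accumulate in a list: O(n) total
        for line in self.fileobj:
            stripped = line.strip()
            if stripped.startswith(">") and parts:
                break                    # stop at the next record header
            parts.append(stripped)
        self.content = "".join(parts)    # single join at the end

data = io.StringIO("abc\ndef\n>next record\nxyz\n")
tc = TestCase(data)
tc.read_file(0)
```

Note the loop structure is slightly rearranged (header check before append) relative to the original; the essential change is the `list`/`join` accumulation.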
79,524,210
13,282,329
How to correctly read and process Kinesis Video Streams fragments?
<p>I am working on processing real-time audio from Amazon Connect by retrieving fragments from Kinesis Video Streams and saving them to S3. However, the MKV file I export to S3 is not playable and seems to be corrupted.</p> <pre><code>import boto3 import os s3_bucket = 'BUCKET_NAME' def lambda_handler(event, context): MediaStreams = event['Details']['ContactData']['MediaStreams'] CustomerAudio = MediaStreams['Customer']['Audio'] StartFragmentNumber = CustomerAudio['StartFragmentNumber'] StreamARN = CustomerAudio['StreamARN'] kvs_client = boto3.client(&quot;kinesisvideo&quot;, region_name=&quot;eu-central-1&quot;) endpoint = kvs_client.get_data_endpoint( StreamARN=StreamARN, APIName=&quot;GET_MEDIA&quot; )[&quot;DataEndpoint&quot;] media_client = boto3.client(&quot;kinesis-video-media&quot;, endpoint_url=endpoint, region_name=&quot;eu-central-1&quot;) response = media_client.get_media( StreamARN=StreamARN, StartSelector={ &quot;StartSelectorType&quot;: &quot;FRAGMENT_NUMBER&quot;, &quot;AfterFragmentNumber&quot;: StartFragmentNumber, } ) process_audio(response[&quot;Payload&quot;]) def process_audio(payload): s3_client = boto3.client('s3') s3_key = 'stream.mkv' temp_audio_path = '/tmp/stream.mkv' with open(temp_audio_path, 'wb') as f: f.write(payload.read()) s3_client.upload_file(temp_audio_path, s3_bucket, s3_key) os.remove(temp_audio_path) return { 'statusCode': 200, 'body': f'Audio file saved to s3://{s3_bucket}/{s3_key}' } </code></pre> <p>The stream.mkv file saved in S3 is not playable. It seems corrupt, and standard media players cannot open it. I suspect that the data stream is not being correctly read or written.</p> <p><strong>How can I correctly parse and save these real-time media fragments?</strong></p>
<python><amazon-web-services><amazon-kinesis><amazon-connect><amazon-kinesis-video-streams>
2025-03-20 22:34:57
0
703
Marcellin Khoury
79,524,167
5,956,725
How to Align Row and Column Labels of Plotly Subplot Grid in Python Dash Application
<p>I'm trying to create a Dash application that displays a grid of subplots to visualize the pairwise comparison of the columns of a dataframe. To the top and left of each grid row and column will be the corresponding variables. The variable names can be quite long though, so it's easy to misalign them. I've tried staggering the variable names, but eventually settled on line-wrapping them. See the picture below. I've also included my code at the end of this post</p> <pre><code>df = pd.DataFrame({ &quot;AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;, &quot;4&quot;], &quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&quot;: [&quot;2024-01-01&quot;, &quot;2024-01-02&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;], &quot;CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC&quot;: [&quot;cat&quot;, &quot;dog&quot;, &quot;cat&quot;, &quot;mouse&quot;], &quot;DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD&quot;: [&quot;10.5&quot;, &quot;20.3&quot;, &quot;30.1&quot;, &quot;40.2&quot;], 'EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE': ['apple', 'apple', 'apple', 'banana'] }) </code></pre> <p>For this dataframe, I'd like to get something like</p> <p><a href="https://i.sstatic.net/73rxO4eK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/73rxO4eK.png" alt="enter image description here" /></a></p> <p>As you can see, I'm having trouble aligning the row and column labels of the grid. Here is my code</p> <pre><code>import dash from dash import dcc, html import pandas as pd import plotly.express as px import plotly.subplots as sp import numpy as np import plotly.graph_objects as go # Sample DataFrame df = pd.DataFrame({ &quot;AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA&quot;: [&quot;1&quot;, &quot;2&quot;, &quot;3&quot;, &quot;4&quot;], &quot;BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB&quot;: [&quot;2024-01-01&quot;, &quot;2024-01-02&quot;, &quot;2024-01-03&quot;, &quot;2024-01-04&quot;], &quot;CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC&quot;: [&quot;cat&quot;, &quot;dog&quot;, &quot;cat&quot;, &quot;mouse&quot;], &quot;DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD&quot;: [&quot;10.5&quot;, &quot;20.3&quot;, &quot;30.1&quot;, &quot;40.2&quot;], 'EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE': ['apple', 'apple', 'apple', 'banana'] }) # Convert data types def convert_dtypes(df): for col in df.columns: try: df[col] = pd.to_numeric(df[col]) # Convert to int/float except ValueError: try: df[col] = pd.to_datetime(df[col]) # Convert to datetime except ValueError: df[col] = df[col].astype(&quot;string&quot;) # Keep as string return df df = convert_dtypes(df) columns = df.columns num_cols = len(columns) # Dash App app = dash.Dash(__name__) app.layout = html.Div([ html.H1(&quot;Pairwise Column Plots&quot;), dcc.Graph(id='grid-plots') ]) @app.callback( dash.Output('grid-plots', 'figure'), dash.Input('grid-plots', 'id') # Dummy input to trigger callback ) def create_plot_grid(_): fig = sp.make_subplots(rows = num_cols, cols = num_cols, #subplot_titles = [f&quot;{x} vs {y}&quot; for x in columns for y in columns], shared_xaxes = False, shared_yaxes = False) annotations = [] # Store subplot titles dynamically # Add column labels (Top Labels) for j, col_label in enumerate(columns): annotations.append( dict( #text=f&quot;&lt;b&gt;{col_label}&lt;/b&gt;&quot;, # Bold for emphasis text=f&quot;&lt;b&gt;{'&lt;br&gt;'.join(col_label[x:x+10] for x in range(0, len(col_label), 10))}&lt;/b&gt;&quot;, xref = &quot;paper&quot;, yref = &quot;paper&quot;, x = (j) / num_cols, # Center over the column y = 1.02, # Slightly above the top row showarrow = False, font = dict(size = 14, color = &quot;black&quot;) ) ) # Add row labels (Side Labels) for i, row_label in enumerate(columns): annotations.append( dict( #text = f&quot;&lt;b&gt;{row_label}&lt;/b&gt;&quot;, # Bold for emphasis text=f&quot;&lt;b&gt;{'&lt;br&gt;'.join(row_label[x:x+10] for x in range(0, len(row_label), 10))}&lt;/b&gt;&quot;, xref = &quot;paper&quot;, yref = &quot;paper&quot;, x = -0.02, # Slightly to the left of the row y = (1 - (i + 0.5) / num_cols), # Center next to the row showarrow = False, font = dict(size = 14, color = &quot;black&quot;), textangle = -90 # Rotate text for vertical orientation ) ) print(annotations) for i, x_col in enumerate(columns): for j, y_col in enumerate(columns): dtype_x, dtype_y = df[x_col].dtype, df[y_col].dtype row, col = i + 1, j + 1 # Adjust for 1-based indexing # I only want to print the upper triangle of the grid if j &lt;= i: trace = None # Numeric vs Numeric: Scatter Plot elif pd.api.types.is_numeric_dtype(dtype_x) and pd.api.types.is_numeric_dtype(dtype_y): trace = px.scatter(df, x = x_col, y = y_col).data[0] # Numeric vs Categorical: Box Plot elif pd.api.types.is_numeric_dtype(dtype_x) and pd.api.types.is_string_dtype(dtype_y): trace = px.box(df, x = y_col, y = x_col).data[0] elif pd.api.types.is_string_dtype(dtype_x) and pd.api.types.is_numeric_dtype(dtype_y): trace = px.box(df, x = x_col, y = y_col).data[0] # Categorical vs Categorical: Count Heatmap elif pd.api.types.is_string_dtype(dtype_x) and pd.api.types.is_string_dtype(dtype_y): #trace = px.histogram(df, x = x_col, color = y_col, barmode = &quot;group&quot;).data[0] counts_df = ( df .groupby([x_col, y_col]) .size() .reset_index(name = 'count') .pivot_table(index = x_col, columns = y_col, values = &quot;count&quot;, aggfunc=&quot;sum&quot;) ) trace = go.Heatmap(z = counts_df.values, x = counts_df.columns, y = counts_df.index, showscale = False) # Datetime vs Numeric: Line Plot elif pd.api.types.is_datetime64_any_dtype(dtype_x) and pd.api.types.is_numeric_dtype(dtype_y): trace = px.line(df, x = x_col, y = y_col).data[0] elif pd.api.types.is_numeric_dtype(dtype_x) and pd.api.types.is_datetime64_any_dtype(dtype_y): trace = px.line(df, x = y_col, y = x_col).data[0] else: trace = None # Unsupported combination if trace: fig.add_trace(trace, row = row, col = col) fig.update_layout(height = 300 * num_cols, width = 300 * num_cols, showlegend = False, annotations = annotations) print(fig['layout']) return fig if __name__ == '__main__': app.run_server(debug = True) </code></pre>
<python><plotly><visualization><plotly-dash>
2025-03-20 22:02:10
0
332
The_Questioner
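One likely source of the drift in the Dash question above: the annotation anchors are not centered on each subplot's paper-coordinate domain. For column `j` of `n` equal-width columns the center is `(j + 0.5) / n`, not `j / n` (the code in the question uses the latter for the column labels), and row centers measured from the top are `1 - (i + 0.5) / n`. A small helper that isolates just that coordinate math, hedged as a sketch (pair it with `xanchor="center"` / `yanchor="middle"` on the plotly annotations):

```python
def label_positions(n):
    """Paper-coordinate anchors for an n x n subplot grid:
    column labels centered above each column, row labels centered
    beside each row (rows counted from the top)."""
    cols = [(j + 0.5) / n for j in range(n)]      # x of each column label
    rows = [1 - (i + 0.5) / n for i in range(n)]  # y of each row label
    return cols, rows

cols, rows = label_positions(4)
```

For exact alignment when subplots have unequal sizes or spacing, plotly also exposes each subplot's true domain via `fig.get_subplot(row, col)`, whose `x`/`y` domain midpoints can replace the uniform formula above.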
79,524,154
5,561,649
How to achieve a dynamic, non-square drawing area in matplotlib 3D plots?
<p>I've noticed, from what I can tell, that matplotlib 3D plots are restricted to a square drawing area. For example, when plotting a surface and zooming in, instead of the surface filling the entire window, it remains confined to a square region fitted inside the window, with blank borders: <a href="https://i.sstatic.net/V3Lh6Oth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V3Lh6Oth.png" alt="original plot view" /></a> <a href="https://i.sstatic.net/oTgUubTA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTgUubTA.png" alt="zoomed in view" /></a></p> <p>I couldn't find any way to change that behavior nor any mention of this limitation in the documentation or online discussions. Is there any way to change this so that the drawing area can have a non-square aspect ratio and dynamically resize with the window? Any pointers or workarounds would be appreciated.</p> <p>My simplified code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import matplotlib.pyplot as plt # Generate some sample data x = np.arange(10) y = np.arange(15) X, Y = np.meshgrid(x, y) Z = np.random.randn(15, 10) # Create figure and 3D axes fig = plt.figure(figsize=(8, 6), dpi=100) ax = fig.add_subplot(111, projection=&quot;3d&quot;) # Plot surface ax.plot_surface(X, Y, Z, cmap=&quot;inferno&quot;, linewidth=0, antialiased=True) # Set labels ax.set_xlabel(&quot;Delta&quot;) ax.set_ylabel(&quot;Maturity&quot;) ax.set_zlabel(&quot;Volatility&quot;) ax.set_title(&quot;Volatility Surface&quot;) # Show plot plt.show() </code></pre>
<python><matplotlib>
2025-03-20 21:54:28
1
550
LoneCodeRanger
79,524,079
3,482,266
What type of dependency management does `pip` do?
<p>In <a href="https://python-programs.com/differences-conda-vs-pip/" rel="nofollow noreferrer">this link</a>, one can read:</p> <blockquote> <p>Pip installs dependencies using a recursive serial loop. Pip does not check to see if all of the dependencies of all packages are met at the same time.</p> <p>If the packages installed earlier in the order have incompatible dependencies with versions that are different from the ones installed later in the order, the environment is broken, and most importantly, this problem goes undiscovered until you notice some unusual errors.</p> </blockquote> <p>I was wondering whether this is true, and if not, how exactly pip does its package management.</p> <p>In the <a href="https://pip.pypa.io/en/stable/topics/dependency-resolution/" rel="nofollow noreferrer">pip documentation</a>, the closest topic to this I could find was <code>dependency resolution</code>, with the following description:</p> <blockquote> <p>At the start of a pip install run, pip does not have all the dependency information of the requested packages. It needs to work out the dependencies of the requested packages, the dependencies of those dependencies, and so on. Over the course of the dependency resolution process, pip will need to download distribution files of the packages which are used to get the dependencies of a package.</p> </blockquote> <p>This quote does seem to point in the direction of the initial quote (it only checks the dependency structure starting from the package we want to install, it does not check the whole environment), but I'm not sure.</p>
<python><pip>
2025-03-20 21:07:35
0
1,608
An old man in the sea.
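Worth noting against the quoted claim above: since pip 20.3 the default resolver backtracks across candidate versions until the requested set is mutually consistent, rather than installing in a first-come serial loop (though it still does not re-validate packages already installed in the environment unless asked to). A toy backtracking resolver over a hard-coded index, purely to illustrate the idea; this is not pip's actual code:

```python
# Toy index: package -> {version: {dependency: set of allowed versions}}
INDEX = {
    "app":  {1: {"lib": {1, 2}}},
    "tool": {1: {"lib": {2}}},
    "lib":  {1: {}, 2: {}},
}

def resolve(requirements, pinned=None):
    """Pick one version per package so every constraint holds.
    Backtracks (like pip's 2020 resolver) when a choice conflicts."""
    pinned = dict(pinned or {})
    reqs = list(requirements)
    if not reqs:
        return pinned
    name, allowed = reqs[0]
    rest = reqs[1:]
    if name in pinned:
        # Already chosen: the existing pin must satisfy this constraint too.
        return resolve(rest, pinned) if pinned[name] in allowed else None
    for version in sorted(INDEX[name], reverse=True):  # prefer newest
        if version not in allowed:
            continue
        deps = list(INDEX[name][version].items())
        result = resolve(deps + rest, {**pinned, name: version})
        if result is not None:
            return result
    return None  # triggers backtracking in the caller

solution = resolve([("app", {1}), ("tool", {1})])
```

Here `lib` is pinned to version 2 because it is the only choice satisfying both `app` and `tool` simultaneously, which is exactly the cross-package consistency the quoted article says pip lacks.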
79,524,069
9,464,295
Urwid ListBox size to space available instead of constant
<p>My program is split into 4 sections. One section is a constant-size section of various widgets, and the other 3 are list boxes.</p> <p><a href="https://i.sstatic.net/v8McH03o.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v8McH03o.png" alt="Minimal example" /></a></p> <p>If I set <code>rows</code> to 15, on this screen size, the program will crash. Also, if it is set to 5, then space is wasted that could be used to show more elements in list box 3.</p> <p>What can I do to let 3 shrink and prevent crashes if the screen is not big enough so that 4 takes up &gt; 50% of the vertical space? Also, if possible, how can I make 3 take up extra vertical space if it is available on a taller screen? Thank you.</p> <p>What I've tried: explicitly setting weights, box/flow wrapper</p> <pre><code>import urwid rows = 5 listbox1 = urwid.ListBox(urwid.SimpleFocusListWalker([])) listbox2 = urwid.ListBox(urwid.SimpleFocusListWalker([])) listbox3 = urwid.ListBox(urwid.SimpleFocusListWalker([])) fixedSize = urwid.Pile([urwid.Text(f'Hello {x}') for x in range(rows)]) linebox1 = urwid.LineBox(listbox1, title=&quot;1&quot;) linebox2 = urwid.LineBox(listbox2, title=&quot;2&quot;) linebox3 = urwid.LineBox(listbox3, title=&quot;3&quot;) linebox4 = urwid.LineBox(fixedSize, title=&quot;4&quot;) print(linebox3) layout = urwid.Columns([ urwid.Pile([ linebox1, linebox2 ]), urwid.Pile([ linebox3, linebox4 ]) ]) master = urwid.Frame(body=layout) loop = urwid.MainLoop(master) loop.run() </code></pre>
<python><layout><urwid>
2025-03-20 20:59:04
0
775
Doot
79,524,057
4,913,660
Replace values in series with first regex group if found, if not leave nan
<p>Say I have a dataframe such as</p> <pre><code>df = pd.DataFrame(data = {'col1': [1,np.nan, '&gt;3', 'NA'], 'col2':[&quot;3.&quot;,&quot;&lt;5.0&quot;,np.nan, 'NA']}) </code></pre> <pre><code>out: col1 col2 0 1 3. 1 NaN &lt;5.0 2 &gt;3 NaN 3 NA NA </code></pre> <p>What I would like is to strip stuff like &quot;&lt;&quot; or &quot;&gt;&quot;, and get to floats only</p> <pre><code>out: col1 col2 0 1 3. 1 NaN 5.0 2 3 NaN 3 NaN NaN </code></pre> <p>I thought about something like</p> <pre><code>df['col2'].replace({'.*' : r&quot;[-+]?(?:\d*\.*\d+)&quot;}, regex=True, inplace=True) </code></pre> <p>the idea being to replace anything with the regex for a float (I think), but this fails</p> <pre><code>error: bad escape \d at position 8 </code></pre> <p>I tried along the lines of</p> <pre><code>df['col2'].replace({r&quot;[-+]?(?:\d*\.*\d+)&quot;: r&quot;\1&quot;}, regex=True, inplace=True) </code></pre> <p>assuming &quot;&gt;&quot; or &quot;&lt;&quot; come before the float number, but then this fails (does not catch anything) if the field is a string or <code>np.nan</code>.</p> <p>Any suggestions please?</p>
<python><pandas><regex>
2025-03-20 20:53:06
4
414
user37292
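For the question above, the per-cell logic can be expressed with a plain `re.search` that returns the first float-like substring or nothing; in pandas the idiomatic equivalent is along the lines of `df[col].str.extract(r'([-+]?\d*\.?\d+)')[0].astype(float)`, which leaves `NaN` wherever the pattern is absent. A stdlib sketch of the per-value behavior:

```python
import re

_FLOAT = re.compile(r"[-+]?\d*\.?\d+")

def to_float(value):
    """Return the first float-like substring as a float, else None."""
    match = _FLOAT.search(str(value))
    return float(match.group(0)) if match else None

cleaned = [to_float(v) for v in ["1", ">3", "<5.0", "NA", "3."]]
```

Note `"3."` yields `3.0` because the pattern requires a trailing digit after the dot, so only the `3` is captured; that matches the desired output table.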
79,523,961
1,892,584
How does one get pypi package metadata for an uninstalled package - preferably in JSON?
<p><code>pip show</code> doesn't work for packages that aren't installed and does not output in JSON. Is there a way to make it do so?</p> <pre><code>pip show django WARNING: Package(s) not found: django </code></pre> <p>Is there any way of getting package metadata from the command line? Besides writing a scraper for the pypi website of course.</p>
<python><pip><pypi>
2025-03-20 19:51:51
1
1,947
Att Righ
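A note for the last question: PyPI itself exposes machine-readable metadata for any package, installed or not, at `https://pypi.org/pypi/<name>/json` (e.g. `curl https://pypi.org/pypi/django/json`), so no scraping is needed. A sketch that fetches and trims that payload; the parsing step is demonstrated on a canned response in PyPI's schema so the example runs offline:

```python
import json
import urllib.request

PYPI_URL = "https://pypi.org/pypi/{name}/json"

def fetch_metadata(name):
    """Fetch raw metadata JSON for a package from PyPI (needs network)."""
    with urllib.request.urlopen(PYPI_URL.format(name=name)) as resp:
        return json.load(resp)

def summarize(metadata):
    """Trim PyPI's JSON payload down to a few common fields."""
    info = metadata["info"]
    return {
        "name": info["name"],
        "version": info["version"],
        "summary": info["summary"],
        "requires_dist": info.get("requires_dist") or [],
    }

# Canned payload mirroring PyPI's schema (values are illustrative).
sample = {"info": {"name": "Django", "version": "5.0",
                   "summary": "Web framework",
                   "requires_dist": ["asgiref>=3.7", "sqlparse>=0.3.1"]}}
result = summarize(sample)
```

In real use, `summarize(fetch_metadata("django"))` returns the same shape for the live index.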