| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
79,133,259
| 4,093,019
|
Corrupted display from QPixmap.fromImage
|
<p>When running the following code, I find that the displayed image is corrupt - consistently on Windows, intermittently on macOS.</p>
<pre class="lang-py prettyprint-override"><code>from PyQt6.QtCore import Qt
from PyQt6.QtGui import QImage, QPixmap
from PyQt6.QtWidgets import QWidget, QApplication, QLabel, QVBoxLayout
app = QApplication([])
window = QWidget()
window.resize(200, 200)
picture_view = QLabel('Image View')
def wrap():
qimage = QImage(b"\x00\x00\xff\xff"*200*200, 200, 200, QImage.Format.Format_RGB32)
return QPixmap.fromImage(qimage)
pixmap = wrap()
picture_view.setPixmap(pixmap)
layout = QVBoxLayout(window)
layout.addWidget(picture_view)
window.show()
app.exec()
</code></pre>
<p><a href="https://i.sstatic.net/O9XcY9P1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O9XcY9P1.png" alt="Corrupt version" /></a></p>
<p>If I remove the <code>wrap()</code> function, so that the middle portion is just</p>
<pre class="lang-py prettyprint-override"><code>qimage = QImage(b"\x00\x00\xff\xff"*200*200, 200, 200, QImage.Format.Format_RGB32)
pixmap = QPixmap.fromImage(qimage)
picture_view.setPixmap(pixmap)
</code></pre>
<p>the problem goes away.</p>
<p><a href="https://i.sstatic.net/AtnPfS8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AtnPfS8J.png" alt="Correct version" /></a></p>
<p>Does anyone know if this is considered to be a bug in PyQt6, or is my code violating some established principle?</p>
<p>Is there any way to keep the structure where <code>qimage</code> exists only in <code>wrap()</code> without this corruption? Some way of instructing the QPixmap instance to load the data into itself?
Or is the official advice that the user needs to keep a copy of the QImage around, even after passing it into a QPixmap?</p>
<p>Thanks for your time.</p>
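<p>For context on what may be going on (this is an assumption, not something stated in the question): Qt's <code>QImage</code> constructor that takes a buffer documents that the buffer must remain valid for the lifetime of the image, so once <code>wrap()</code> returns and the <code>bytes</code> object is garbage-collected, the conversion may read freed memory. The lifetime issue itself can be illustrated with the stdlib alone:</p>

```python
import weakref

class Buffer:
    """Stand-in for the bytes object handed to QImage."""
    pass

def wrap():
    buf = Buffer()
    ref = weakref.ref(buf)  # stand-in for a non-owning pointer into buf
    return ref              # buf's last strong reference dies here

ref = wrap()
print(ref() is None)  # True on CPython: the buffer is already gone
```

<p>If that is indeed the cause, keeping a longer-lived reference to the buffer, or forcing a deep copy with <code>qimage.copy()</code> before the image goes out of scope, would be the directions to try.</p>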
|
<python><pyqt><pyqt6>
|
2024-10-28 11:51:45
| 0
| 1,039
|
radarhere
|
79,133,141
| 4,878,478
|
Encoding/Decoding in Spring Boot
|
<p>I have a Spring Boot application (A1), and I need to invoke another application (A2) via a REST API. A1 is written in Java + Spring Boot, while A2 is written in Python and Tornado. A1 is trying to invoke a /Get endpoint that is exposed by A2. The /Get URL is something like:
<code>https:123.456:123/something?industry=Test Data O&G Upstream - NA & Europe</code>. The A1 application encodes the above request param as <code>industry=Test+Data+O%26G+Upstream+-+NA+%26+Europe</code>. Based on my analysis so far, it is the <code>&</code> that is causing the issue.</p>
<p>I am using <code>URLEncoder</code> to encode the value.</p>
<p>The same request works on the postman but not with Spring Boot.</p>
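<p>Since A2 is Python/Tornado, the two encoding styles can be compared with Python's <code>urllib.parse</code> (an illustration only, reproducing the question's value). Form-style encoding turns spaces into <code>+</code>, which the receiving server may or may not decode back to spaces, while plain percent-encoding uses <code>%20</code>; the <code>&</code> must become <code>%26</code> in both:</p>

```python
from urllib.parse import quote, quote_plus

value = "Test Data O&G Upstream - NA & Europe"

# Form-style ("application/x-www-form-urlencoded"): space -> '+'
print(quote_plus(value))  # Test+Data+O%26G+Upstream+-+NA+%26+Europe

# Plain percent-encoding: space -> '%20'
print(quote(value))       # Test%20Data%20O%26G%20Upstream%20-%20NA%20%26%20Europe
```

<p>If Tornado's handler does not decode <code>+</code> as a space for this parameter, sending <code>%20</code> instead of <code>URLEncoder</code>'s form-style output would be one thing to test.</p>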
|
<python><java><spring-boot><microservices><tornado>
|
2024-10-28 11:19:08
| 1
| 328
|
Rahul
|
79,132,931
| 22,963,183
|
ConnectionError when initializing BM25Assembler in DB-GPT
|
<p>I'm trying to set up <a href="https://github.com/eosphoros-ai/DB-GPT/blob/main/examples/rag/bm25_retriever_example.py" rel="nofollow noreferrer">a simple example</a> in <a href="https://github.com/eosphoros-ai/DB-GPT?tab=readme-ov-file" rel="nofollow noreferrer">DB-GPT</a> which uses Elasticsearch as the vector store backend. This is part of the knowledge base initialization process where BM25Assembler is used for document retrieval and ranking.</p>
<p>I have run <a href="http://docs.dbgpt.cn/docs/installation/advanced_usage/ollama" rel="nofollow noreferrer">DB-GPT with Ollama</a>, and <a href="https://www.elastic.co/blog/getting-started-with-the-elastic-stack-and-docker-compose" rel="nofollow noreferrer">Elasticsearch</a> is deployed using Docker for both. Everything looks fine, as shown below:
<a href="https://i.sstatic.net/IYQ2CnWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYQ2CnWk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/1gaAwO3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1gaAwO3L.png" alt="enter image description here" /></a>
I'm encountering a ConnectionError while setting up a knowledge base in DB-GPT using BM25Assembler. The error occurs during initialization of the assembler, after running <code>python examples/rag/bm25_retriever_example.py</code>:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 789, in urlopen
response = self._make_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 536, in _make_request
response = conn.getresponse()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connection.py", line 507, in getresponse
httplib_response = super().getresponse()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elastic_transport/_node/_http_urllib3.py", line 167, in perform_request
response = self.pool.urlopen(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 843, in urlopen
retries = retries.increment(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/util/retry.py", line 449, in increment
raise reraise(type(error), error, _stacktrace)
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 789, in urlopen
response = self._make_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connectionpool.py", line 536, in _make_request
response = conn.getresponse()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/urllib3/connection.py", line 507, in getresponse
httplib_response = super().getresponse()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/media/manhdt4/sda1/db-gpt/DB-GPT/examples/rag/bm25_retriever_example.py", line 50, in <module>
asyncio.run(main())
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/media/manhdt4/sda1/db-gpt/DB-GPT/examples/rag/bm25_retriever_example.py", line 37, in main
assembler = BM25Assembler.load_from_knowledge(
File "/media/manhdt4/sda1/db-gpt/DB-GPT/dbgpt/rag/assembler/bm25.py", line 144, in load_from_knowledge
return cls(
File "/media/manhdt4/sda1/db-gpt/DB-GPT/dbgpt/rag/assembler/bm25.py", line 110, in __init__
if not self._es_client.indices.exists(index=self._index_name):
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elasticsearch/_sync/client/utils.py", line 446, in wrapped
return api(*args, **kwargs)
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elasticsearch/_sync/client/indices.py", line 1227, in exists
return self.perform_request( # type: ignore[return-value]
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py", line 423, in perform_request
return self._client.perform_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py", line 271, in perform_request
response = self._perform_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elasticsearch/_sync/client/_base.py", line 316, in _perform_request
meta, resp_body = self.transport.perform_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elastic_transport/_transport.py", line 342, in perform_request
resp = node.perform_request(
File "/media/manhdt4/sda1/miniconda3/envs/dbgpt/lib/python3.10/site-packages/elastic_transport/_node/_http_urllib3.py", line 202, in perform_request
raise err from e
elastic_transport.ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))))
</code></pre>
<p>Here is code in simple example related to error:</p>
<pre class="lang-py prettyprint-override"><code> # create bm25 assembler
assembler = BM25Assembler.load_from_knowledge(
knowledge=knowledge,
es_config=es_config,
chunk_parameters=chunk_parameters,
)
</code></pre>
<p>Config:</p>
<pre class="lang-py prettyprint-override"><code>def _create_es_config():
"""Create vector connector."""
return ElasticsearchVectorConfig(
name="bm25_es_dbgpt",
uri="localhost",
port="9200",
user="elastic",
password="changeme",
)
</code></pre>
<p><strong>What I've Tried</strong></p>
<ol>
<li>Verified Elasticsearch container is running properly</li>
<li>Checked Elasticsearch Docker logs</li>
<li>Verified Ollama is running correctly</li>
<li>Checked the connection to Elasticsearch directly. It's OK, no error:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>from elasticsearch import Elasticsearch
es = Elasticsearch(['http://localhost:9200'], basic_auth=('elastic', 'changeme'))
</code></pre>
<p>What's causing this connection error in the DB-GPT context? I only want to run a simple example.</p>
<p>(I'm sorry, I can't tag this question with db-gpt because that tag doesn't exist.)</p>
|
<python><docker><elasticsearch><large-language-model><ollama>
|
2024-10-28 10:16:23
| 1
| 515
|
happy
|
79,132,812
| 6,930,340
|
Find intersection of dates in grouped polars dataframe
|
<p>Consider the following <code>pl.DataFrame</code>:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
┌────────┬────────────┐
│ symbol ┆ date │
│ --- ┆ --- │
│ str ┆ str │
╞════════╪════════════╡
│ AAPL ┆ 2023-01-01 │
│ AAPL ┆ 2023-01-02 │
│ AAPL ┆ 2023-01-03 │
│ AAPL ┆ 2023-01-04 │
│ AAPL ┆ 2023-01-05 │ # AAPL has 5 dates
│ GOOGL ┆ 2023-01-01 │
│ GOOGL ┆ 2023-01-02 │
│ GOOGL ┆ 2023-01-03 │ # GOOGL has 3 dates
│ MSFT ┆ 2023-01-01 │
│ MSFT ┆ 2023-01-02 │
│ MSFT ┆ 2023-01-03 │
│ MSFT ┆ 2023-01-04 │ # MSFT has 4 dates
└────────┴────────────┘
""")
with pl.Config(tbl_rows=-1):
print(df)
</code></pre>
<p>I need to make each group's dates (grouped by <code>symbol</code>) consistent across all groups.
Therefore, I need to identify the dates common to all groups (probably using <code>join</code>) and then filter the dataframe accordingly.</p>
<p>It might be related to:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/79125764/find-intersection-of-columns-from-different-polars-dataframes">Find intersection of columns from different polars dataframes</a></li>
</ul>
<p>I am looking for a generalized solution. In the above example the resulting <code>pl.DataFrame</code> should look as follows:</p>
<pre><code>shape: (9, 2)
┌────────┬────────────┐
│ symbol ┆ date │
│ --- ┆ --- │
│ str ┆ str │
╞════════╪════════════╡
│ AAPL ┆ 2023-01-01 │
│ AAPL ┆ 2023-01-02 │
│ AAPL ┆ 2023-01-03 │
│ GOOGL ┆ 2023-01-01 │
│ GOOGL ┆ 2023-01-02 │
│ GOOGL ┆ 2023-01-03 │
│ MSFT ┆ 2023-01-01 │
│ MSFT ┆ 2023-01-02 │
│ MSFT ┆ 2023-01-03 │
└────────┴────────────┘
</code></pre>
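<p>For reference, the intended logic can be sketched in plain Python (this is only an illustration of the goal, not a polars solution): collect each symbol's dates, intersect the sets, and keep rows whose date is in the intersection.</p>

```python
from collections import defaultdict

rows = [
    ("AAPL", "2023-01-01"), ("AAPL", "2023-01-02"), ("AAPL", "2023-01-03"),
    ("AAPL", "2023-01-04"), ("AAPL", "2023-01-05"),
    ("GOOGL", "2023-01-01"), ("GOOGL", "2023-01-02"), ("GOOGL", "2023-01-03"),
    ("MSFT", "2023-01-01"), ("MSFT", "2023-01-02"), ("MSFT", "2023-01-03"),
    ("MSFT", "2023-01-04"),
]

# Dates per symbol, then the intersection across all symbols.
dates_by_symbol = defaultdict(set)
for symbol, date in rows:
    dates_by_symbol[symbol].add(date)
common = set.intersection(*dates_by_symbol.values())

filtered = [(s, d) for s, d in rows if d in common]
print(len(filtered))  # 9 rows: three common dates per symbol
```

<p>In polars terms the same idea amounts to a semi-join of the frame against the dates whose symbol count equals the number of distinct symbols.</p>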
|
<python><dataframe><python-polars>
|
2024-10-28 09:38:11
| 1
| 5,167
|
Andi
|
79,132,760
| 11,155,419
|
Can't decode event data in Cloud Run Function with a Firestore trigger
|
<p>I have deployed a 2nd Gen Cloud Run Function with a Firestore trigger, on the <code>google.cloud.datastore.entity.v1.written</code> event. I have used the sample code shown in <a href="https://cloud.google.com/firestore/docs/extend-with-functions-2nd-gen#functions_cloudevent_firebase_firestore-python" rel="nofollow noreferrer">Google Cloud documentation</a>:</p>
<pre class="lang-py prettyprint-override"><code>from cloudevents.http import CloudEvent
import functions_framework
from google.events.cloud import firestore
@functions_framework.cloud_event
def hello_firestore(cloud_event: CloudEvent) -> None:
"""Triggers by a change to a Firestore document.
Args:
cloud_event: cloud event with information on the firestore event trigger
"""
firestore_payload = firestore.DocumentEventData()
firestore_payload._pb.ParseFromString(cloud_event.data)
print(f"Function triggered by change to: {cloud_event['source']}")
print("\nOld value:")
print(firestore_payload.old_value)
print("\nNew value:")
print(firestore_payload.value)
</code></pre>
<p>And here's the list of the dependencies residing in the <code>requirements.txt</code> file:</p>
<pre><code>functions-framework==3.3.0
google-events==0.13.0
google-api-core==2.6.0
protobuf==5.28.3
cloudevents==1.9.0
</code></pre>
<p>However, every time a Firestore event comes in, the Cloud Run Function fails with the following error:</p>
<pre><code>Traceback (most recent call last):
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 174, in view_func
function(event)
File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 70, in wrapper
return func(*args, **kwargs)
File "/workspace/main.py", line 13, in hello_firestore
firestore_payload._pb.ParseFromString(cloud_event.data)
google.protobuf.message.DecodeError: Error parsing message with type 'google.events.cloud.firestore.v1.DocumentEventData'
</code></pre>
<p><a href="https://cloud.google.com/firestore/docs/extend-with-functions-2nd-gen#include_the_proto_dependencies_in_your_source" rel="nofollow noreferrer">Documentation also suggests including the proto dependencies in your source</a>, which I have done, but this doesn't seem to solve the issue either. I'm sharing a screenshot of the structure, in case I have placed the proto dependencies incorrectly:</p>
<p><a href="https://i.sstatic.net/zOSbcob5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOSbcob5.png" alt="enter image description here" /></a></p>
<p>I'm running out of ideas so would appreciate any help in case anyone has experienced similar issues and managed to resolve them.</p>
|
<python><google-cloud-firestore><google-cloud-functions><google-cloud-run>
|
2024-10-28 09:26:14
| 0
| 843
|
Tokyo
|
79,132,717
| 12,284,585
|
Why do I get a `'PosixPath' object has no attribute 'walk'` exception?
|
<p>Why do I get a <code>'PosixPath' object has no attribute 'walk'</code> exception?</p>
<pre class="lang-py prettyprint-override"><code>from pathlib import Path
p = Path()
for dirpath, dirnames, filenames in p.walk():
print(dirpath, dirnames, filenames)
</code></pre>
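<p>For what it's worth, <code>pathlib.Path.walk()</code> was only added in Python 3.12, so on earlier interpreters the attribute does not exist. On older versions, <code>os.walk()</code> accepts a <code>Path</code> and yields the same triples (though <code>dirpath</code> comes back as a <code>str</code>):</p>

```python
import os
import sys
from pathlib import Path

p = Path()
if sys.version_info >= (3, 12):
    walker = p.walk()    # dirpath is a Path
else:
    walker = os.walk(p)  # dirpath is a str; wrap in Path() if needed

for dirpath, dirnames, filenames in walker:
    print(dirpath, dirnames, filenames)
    break  # just show the first directory
```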
|
<python><path><os.walk>
|
2024-10-28 09:09:42
| 1
| 1,333
|
tturbo
|
79,132,658
| 18,054,760
|
SIGBUS error when trying to use the Python JayDeBeApi library
|
<p>I have written a small function that creates a JDBC connection in Python using the <a href="https://pypi.org/project/JayDeBeApi/#install" rel="nofollow noreferrer">JayDeBeApi</a> library. I tested it on my personal laptop, which runs Windows, and it created the connection object successfully. When I moved the code to my other laptop, which runs macOS, and tried the same, I ran into this error. Here's all the information that I think matters; feel free to request more if it would give additional insight.</p>
<p>Code:</p>
<pre><code>conn = jaydebeapi.connect(
properties['driverName'],
properties['url'],
[properties['username'], properties['password']],
driver
)
logger.info("Connection created successfully")
</code></pre>
<p>Error:</p>
<pre><code>#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGBUS (0xa) at pc=0x0000000106f02b18, pid=35834, tid=0x0000000000000103
#
# JRE version: (8.0_431) (build )
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.431-b10 mixed mode bsd-aarch64 compressed oops)
# Problematic frame:
# V [libjvm.dylib+0x23ab18] CodeHeap::allocate(unsigned long, bool)+0xe8
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /Users/abhinav/Workspace/hs_err_pid35834.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
</code></pre>
|
<python><java><postgresql><jdbc>
|
2024-10-28 08:53:11
| 1
| 315
|
Abhinav
|
79,132,609
| 465,159
|
How to add types to pandas *DataFrames*?
|
<p>Pandas dataframes are useful, but they are fundamentally untyped. By this I don't mean the raw types of the columns, but the structure of the dataframe itself.</p>
<p>For example, when you do <code>df: pd.DataFrame = pd.read_csv(...)</code>, you don't know which columns the dataframe contains, nor what their types are. You know them at runtime, of course (<code>df.columns</code>, etc.), but that doesn't help the type checker.</p>
<p>When writing code, you cannot enforce the 'type' of the dataframe, defined as the number and names of columns, the types of those columns, etc. You only have a black-box <code>pd.DataFrame</code> structure which is too generic to be of any actual typing use.</p>
<p>So what is the solution people use? Is there a library which allows enforcing better structure? Or do you use custom classes to wrap over operations? (The latter seems like heavy work.)</p>
<p>Any ideas / alternative solutions?</p>
<hr />
<p>To clarify, I want the type checker to enforce this at 'compile' time, not via runtime checks. I want to be able to look at a dataframe and have Pylance or something else tell me which columns it contains, and emit a compile-time error if I try to access a column which was not explicitly specified in the type. For example:</p>
<pre class="lang-py prettyprint-override"><code>df: pd.Dataframe[{'col1': pd.int64, 'col2': pd.string}] = pd.read_csv(...)
df.col3 ## Raise *type* error at *compile* time, not runtime.
df.columns['X'] ## Editor autocompletes 'X' to 'col1' or 'col2'
df.col1.tolower() ## *Type* error at compile time
</code></pre>
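<p>Libraries such as pandera (schema models with runtime validation) and the pandas-stubs project cover parts of this, though per-column static attribute access remains out of reach. As a point of comparison, here is a minimal stdlib sketch of the 'wrap in typed records' approach mentioned above (the names are invented for illustration):</p>

```python
from dataclasses import dataclass

@dataclass
class Row:
    col1: int
    col2: str

def to_rows(frame: dict) -> list:
    """Convert a column-oriented mapping into typed rows; a missing
    declared column fails fast with a KeyError."""
    return [Row(c1, c2) for c1, c2 in zip(frame["col1"], frame["col2"])]

rows = to_rows({"col1": [1, 2], "col2": ["a", "b"]})
print(rows[0].col1 + 1)  # the type checker knows col1 is an int
# rows[0].col3 would be flagged by the type checker before running
```

<p>This trades dataframe operations for static safety; the library-based approaches keep the dataframe but cannot express the wished-for syntax above.</p>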
|
<python><pandas><dataframe>
|
2024-10-28 08:37:58
| 0
| 5,474
|
Ant
|
79,132,604
| 7,510,661
|
aiohttp adds ~100 seconds latency?
|
<p>I am sending a basic post request from a client to a server. The logs on my server show that the response took 9.41 seconds, but the logs on the client show that it took ~100 seconds.</p>
<p>Here is my code</p>
<pre class="lang-py prettyprint-override"><code>logger.info("Uploading %s messages to backend.", len(messages))
st = time.time()
async with aiohttp.ClientSession() as session:
async with session.post(
base_url + f"messages/",
json=payload,
headers=_auth_headers,
) as response:
results = await response.json()
et = time.time()
logger.info(
f"{len(results)} messages created in backend in {et-st:.2f} seconds.",
)
</code></pre>
<p>Client logs</p>
<pre><code>2024-10-28 07:05:36,319 [INFO] Uploading 104 messages to backend.
2024-10-28 07:07:12,952 [INFO] 104 messages created in backend in 96.63 seconds.
</code></pre>
<p>Server logs</p>
<pre><code>api/messages/ POST 201 9.41301 28 Oct 2024 07:05:45
</code></pre>
<p>Any idea why aiohttp would add this overhead? Is it somehow getting stuck in the session? It doesn't seem to happen always, but it usually happens if my requests take more than a few seconds.</p>
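<p>One possibility worth ruling out (an assumption, not a diagnosis): the elapsed time measured around an <code>await</code> includes any time the client's event loop spends on other tasks, so a loop blocked by synchronous work inflates the apparent request latency even when the server responds quickly. A self-contained illustration:</p>

```python
import asyncio
import time

async def fast_io():
    """Stands in for the aiohttp request: the awaited work takes 0.1 s."""
    st = time.perf_counter()
    await asyncio.sleep(0.1)
    return time.perf_counter() - st

async def blocker():
    time.sleep(0.5)  # synchronous work that starves the event loop

async def main():
    elapsed, _ = await asyncio.gather(fast_io(), blocker())
    return elapsed

elapsed = asyncio.run(main())
print(f"{elapsed:.2f}s")  # roughly 0.5 s measured for a 0.1 s operation
```

<p>If that matches, the fix is on the client side (move blocking work off the loop, e.g. with <code>run_in_executor</code>) rather than in aiohttp itself. aiohttp's <code>TraceConfig</code> hooks can also help time DNS, connection, and response phases separately.</p>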
|
<python><aiohttp>
|
2024-10-28 08:37:13
| 0
| 1,429
|
Vahe Tshitoyan
|
79,132,558
| 6,667,035
|
Video Frame-by-Frame Deraining with MFDNet in Python
|
<p>As <a href="https://codereview.stackexchange.com/q/294178/231235">this CodeReview question</a> mentions, I am trying to modify the code to perform frame-by-frame rain streak removal in a video. The FFmpeg package is used in this code.</p>
<pre class="lang-py prettyprint-override"><code>import argparse
import os
import time
import cv2
import ffmpeg
import numpy as np
import torch
from skimage import img_as_ubyte
from torch.utils.data import DataLoader
from tqdm import tqdm
import utils
from data_RGB import get_test_data
from MFDNet import HPCNet as mfdnet
def process_video_frame_by_frame(input_file, output_file, model_restoration):
"""
Decodes a video frame by frame, processes each frame,
and re-encodes to a new video.
Args:
input_file: Path to the input video file.
output_file: Path to the output video file.
"""
try:
# Probe for video information
probe = ffmpeg.probe(input_file)
video_stream = next((stream for stream in probe['streams'] if stream['codec_type'] == 'video'), None)
width = int(video_stream['width'])
height = int(video_stream['height'])
# Input
process1 = (
ffmpeg
.input(input_file)
.output('pipe:', format='rawvideo', pix_fmt='rgb24')
.run_async(pipe_stdout=True)
)
# Output
process2 = (
ffmpeg
.input('pipe:', format='rawvideo', pix_fmt='rgb24', s='{}x{}'.format(width, height))
.output(output_file, vcodec='libx264', pix_fmt='yuv420p')
.overwrite_output()
.run_async(pipe_stdin=True)
)
# Process frame (deraining processing)
while in_bytes := process1.stdout.read(width * height * 3):
in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
restored = torch.clamp(restored[0], 0, 1)
restored = restored.cpu().detach().numpy()
restored *= 255
out_frame = restored
np.reshape(out_frame, (3, width, height))
# Encode and write the frame
process2.stdin.write(
out_frame
.astype(np.uint8)
.tobytes()
)
# Close streams
process1.stdout.close()
process2.stdin.close()
process1.wait()
process2.wait()
except ffmpeg.Error as e:
print('stdout:', e.stdout.decode('utf8'))
print('stderr:', e.stderr.decode('utf8'))
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='Image Deraining using MPRNet')
parser.add_argument('--weights', default='./checkpoints/checkpoints_mfd.pth', type=str,
help='Path to weights')
parser.add_argument('--gpus', default='0', type=str, help='CUDA_VISIBLE_DEVICES')
args = parser.parse_args()
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = args.gpus
model_restoration = mfdnet()
utils.load_checkpoint(model_restoration, args.weights)
print("===>Testing using weights: ", args.weights)
model_restoration.eval().cuda()
input_video = "Input_video.mp4"
output_video = 'output_video.mp4'
process_video_frame_by_frame(input_video, output_video, model_restoration)
</code></pre>
<p>Let's focus on the <code>while</code> loop part:</p>
<p>The version of the code snippet above can be executed without error. As the next step, I tried to follow <a href="https://codereview.stackexchange.com/a/294222/231235">301_Moved_Permanently's answer</a> and make use of <code>torch.save</code>. The contents of the <code>while</code> loop then became the following:</p>
<pre><code> # Process frame (deraining processing)
while in_bytes := process1.stdout.read(width * height * 3):
in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
restored = torch.clamp(restored[0], 0, 1)
out_frame = torch.mul(restored.cpu().detach(), 255).reshape(3, width, height).byte()
torch.save(out_frame, process2.stdin)
</code></pre>
<p>An out-of-memory error occurred, with the following message:</p>
<blockquote>
<p>torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 676.00 MiB. GPU 0 has a total capacity of 23.99 GiB of which 0 bytes is free. Of the allocated memory 84.09 GiB is allocated by PyTorch, and 1.21 GiB is reserved by PyTorch but unallocated.</p>
</blockquote>
<p>To diagnose the error, I removed the last two lines of code:</p>
<pre><code> # Process frame (deraining processing)
while in_bytes := process1.stdout.read(width * height * 3):
in_frame = torch.frombuffer(in_bytes, dtype=torch.uint8).float().reshape((1, 3, width, height))
restored = model_restoration(torch.div(in_frame, 255).to(device='cuda'))
restored = torch.clamp(restored[0], 0, 1)
</code></pre>
<p>The out-of-memory error still happened, which is weird to me. My understanding of the working version is that the line <code>restored = restored.cpu().detach().numpy()</code> transfers the <code>restored</code> data from GPU memory to main memory and then converts it to numpy format. Why does the out-of-memory error still happen after I remove this line?</p>
<p>The hardware and software specifications I used are as follows:</p>
<ul>
<li><p>CPU: 12th Gen Intel(R) Core(TM) i9-12900K 3.20 GHz</p>
</li>
<li><p>RAM: 128 GB (128 GB usable)</p>
</li>
<li><p>Graphic card: NVIDIA GeForce RTX 4090</p>
</li>
<li><p>OS: Windows 11 Pro 22H2, OS build: 22621.4317</p>
</li>
<li><p>Pytorch version:</p>
<pre class="lang-none prettyprint-override"><code>> python -c "import torch; print(torch.__version__)"
2.5.0+cu124
</code></pre>
</li>
</ul>
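<p>One thing worth checking (an assumption about this case, not a confirmed diagnosis): none of the loop variants above run the model under <code>torch.no_grad()</code>, so every forward pass records an autograd graph and keeps intermediate activations alive on the GPU. A minimal CPU illustration of the difference:</p>

```python
import torch

x = torch.randn(4, requires_grad=True)

y = (x * 2).sum()
print(y.requires_grad)  # True: a graph (and its activations) is retained

with torch.no_grad():   # inference mode: no graph is recorded
    z = (x * 2).sum()
print(z.requires_grad)  # False
```

<p>Wrapping the frame loop in <code>with torch.no_grad():</code> (or decorating the processing function with <code>@torch.no_grad()</code>) is the standard pattern for inference loops and is worth trying here.</p>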
|
<python><video><ffmpeg><pytorch><out-of-memory>
|
2024-10-28 08:22:51
| 1
| 559
|
JimmyHu
|
79,132,363
| 16,171,413
|
Why does my Blog API CustomUser have an error when sending request for a CustomUser detail?
|
<p>I have two Blog APIs in Django Rest Framework. One uses Django's default User model, while the other uses a CustomUser. Authentication works well in both cases. I have similar serializers, viewsets and routers all configured. The endpoints that return a collection of posts and users work well, as does the endpoint for a single post. The endpoint for a single user also works, but only with Django's default user model. When I try to access a single CustomUser, I get this error:</p>
<pre><code>AttributeError at /api/v1/users/3/
'CustomUser' object has no attribute 'author'
Request Method: GET
Request URL: http://127.0.0.1:3412/api/v1/users/3/
Django Version: 5.1.2
Exception Type: AttributeError
Exception Value: 'CustomUser' object has no attribute 'author'
Exception Location: C:\Users\Names\OneDrive\my_apps\django_apps\blog_API\api\permissions.py, line 16, in has_object_permission
Raised during: api.views.CustomUserViewSet
</code></pre>
<p>The exception occurs on the last line:</p>
<pre><code>from rest_framework import permissions
class IsAuthorOrReadOnly(permissions.BasePermission):
"""
Object-level permission to only allow authors of an object to edit it.
Assumes the model instance has an `author` attribute.
"""
def has_object_permission(self, request, view, obj):
if request.method in permissions.SAFE_METHODS:
return True
return obj.author == request.user
</code></pre>
<p>Here's my CustomUserViewset:</p>
<pre><code>from rest_framework import viewsets
from users.models import CustomUser
from .serializers import CustomUserSerializer
from .permissions import IsAuthorOrReadOnly
class CustomUserViewSet(viewsets.ModelViewSet):
"""Viewset for Custom User Object."""
queryset = CustomUser.objects.all()
serializer_class = CustomUserSerializer
permission_classes = (IsAuthorOrReadOnly,)
</code></pre>
<p>I have looked at the location of the error and tried print debugging to print the user object; in both cases (default User model and CustomUser model), nothing gets printed. Yet the default User model works well and returns the details of a single user. I also tried to print my queryset, and it correctly returns my CustomUser.</p>
<p>I want to access a single CustomUser from the endpoint just like I can with Django's default User. My <code>Post</code> and <code>CustomUser</code> models are correctly serialized and both collections can be seen from their endpoints. The PostDetail also works well. I'd be happy to provide more clarification if needed. Thanks...</p>
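<p>One reading of the traceback (an assumption, not a confirmed fix): <code>IsAuthorOrReadOnly</code> is written for <code>Post</code> objects, but on the user endpoint the object being checked is the <code>CustomUser</code> itself, which has no <code>author</code> attribute. A user endpoint would compare the object to the requesting user instead. A framework-free sketch of the two rules (names invented for illustration):</p>

```python
SAFE_METHODS = ("GET", "HEAD", "OPTIONS")

def author_permission(method, obj, user):
    """Rule for Post-like objects that carry an `author` attribute."""
    return method in SAFE_METHODS or obj.author == user

def self_permission(method, obj, user):
    """Rule for User objects: the object itself is compared to the user."""
    return method in SAFE_METHODS or obj == user

print(self_permission("GET", "alice", "bob"))    # True: reads are open
print(self_permission("PUT", "alice", "alice"))  # True: editing yourself
print(self_permission("PUT", "alice", "bob"))    # False: editing others
```

<p>In DRF terms that would mean giving <code>CustomUserViewSet</code> its own permission class whose <code>has_object_permission</code> returns <code>obj == request.user</code> for unsafe methods, rather than reusing the post-oriented one.</p>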
|
<python><django><django-rest-framework><django-rest-viewsets><django-custom-user>
|
2024-10-28 07:12:34
| 1
| 5,413
|
Uchenna Adubasim
|
79,132,337
| 1,942,868
|
Copying database and authentication fails?
|
<p>I have a Django project whose database is on AWS <code>RDS</code>.</p>
<p>I can log in to a Django account correctly with this database.</p>
<p>Now I have copied the database with this command:</p>
<pre><code>PGPASSWORD=XXXXXX createdb -U dbuser -h kkk.cgrxw5ome4nb.ap-northeast-1.rds.amazonaws.com -p 5432 -T old_db new_db
</code></pre>
<p>It copies the database correctly, but I cannot log in to any account with the new database.</p>
<p>It says the password is incorrect: <code>[auth] Error! Password wrong</code></p>
<p>However, I just copied the database, and the username and password are exactly the same.</p>
<p>Old database:</p>
<pre><code>select username,password from defapp_customuser where id=1;
username | password
----------+------------------------------------------------------------------------------------------
admin | pbkdf2_sha256$600000$bZ2UvAJc7cwJMFyk63hlLO$h8/Y2lFZTkwbghyvM/rliyBxwQxF2dMMSqk5tfFqqyo=
</code></pre>
<p>New database:</p>
<pre><code>select username,password from defapp_customuser where id=1;
username | password
----------+------------------------------------------------------------------------------------------
admin | pbkdf2_sha256$600000$bZ2UvAJc7cwJMFyk63hlLO$h8/Y2lFZTkwbghyvM/rliyBxwQxF2dMMSqk5tfFqqyo=
</code></pre>
<p>Is there any problem? Here is my authentication backend:</p>
<pre><code>class MyAuthBackend(ModelBackend):
def authenticate(self, request, username=None, password=None, **kwargs):
logger.info("[auth] 2.Try Django auth backend is Auth")
try:
user = m.CustomUser.objects.get(username=username)
except m.CustomUser.DoesNotExist:
logger.info("[auth] Error! Django user can not be found")
return None
else:
if user.check_password(password) and self.user_can_authenticate(user):
return user
else:
logger.info("[auth] Error! Password wrong")
</code></pre>
<hr />
<p>I checked the parameter groups and found this data:</p>
<h2><a href="https://i.sstatic.net/TpnREs2J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpnREs2J.png" alt="enter image description here" /></a></h2>
<p>I have the same SECRET_KEY in the settings.py</p>
<pre><code>SECRET_KEY = 'django-insecure-^pktcneebsjtv(mnt2-wps07t$$1syoykvXXXXXXXX'
</code></pre>
<hr />
<p>More checking…</p>
<p>My current user is like this (created in the old database):</p>
<pre><code>>>> print(m.CustomUser.objects.get(username="admin").username,m.CustomUser.objects.get(username="admin").password)
admin pbkdf2_sha256$600000$bZ2UvAJc7cwJMFyk63hlLO$h8/Y2lFZTkwbghyvM/rliyBxwQxF2dMMSqk5tfFqqyo=
</code></pre>
<p>Then I made new users <code>admin2</code> and <code>admin3</code> with <code>createsuperuser</code>, using the same password.</p>
<p>It shows,</p>
<pre><code>>>> print(m.CustomUser.objects.get(username="admin2").username,m.CustomUser.objects.get(username="admin2").password)
admin2 pbkdf2_sha256$600000$4U8Ng588QgJPF2XDgkoTMj$Y7/YmN/ObZqorUlr4V/I7pwgiC+59BGOcmBapVjI1aY=
>>> print(m.CustomUser.objects.get(username="admin3").username,m.CustomUser.objects.get(username="admin3").password)
admin3 pbkdf2_sha256$600000$8h5ZeiwPxuA5YKD78Jwuad$R7VcCxvj5x+wRCT3jbeAtOGIjg2MO1uFhyERJPZVYvk=
</code></pre>
<p>Every user has a different password hash; is that the correct behavior?</p>
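<p>For context, here is a minimal standard-library sketch (not Django's actual hasher, but the same PBKDF2 primitive it is built on) showing why identical passwords produce different stored hashes: a fresh random salt is generated per user, and verification re-uses the stored salt.</p>

```python
import base64
import hashlib
import os

def pbkdf2_hash(password: bytes, salt: bytes, iterations: int = 600_000) -> str:
    # Same primitive as Django's pbkdf2_sha256 hasher.
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    return base64.b64encode(digest).decode()

password = b"same-password"
salt_a, salt_b = os.urandom(12), os.urandom(12)

# Different salts -> different hashes for the identical password.
assert pbkdf2_hash(password, salt_a) != pbkdf2_hash(password, salt_b)

# Verification re-uses the stored salt, so the check still succeeds.
stored = pbkdf2_hash(password, salt_a)
assert pbkdf2_hash(password, salt_a) == stored
```

<p>So differing hashes between users are expected. Since the hashes in both databases above are byte-for-byte identical, the stored credentials themselves look fine, which points the investigation toward the settings or connection parameters of the new instance instead.</p>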
|
<python><django><postgresql>
|
2024-10-28 07:01:19
| 1
| 12,599
|
whitebear
|
79,132,052
| 2,289,030
|
Is there a way to find undecorated classes/functions in a Python module?
|
<p>I'm trying to write a tool which will help me find implementation bugs in a large existing python code base that makes heavy use of decorators.</p>
<p>Let's say I have a file, <code>rules.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>manager = RuleManager()
@manager.register
class A:
pass
class B:
pass
</code></pre>
<p>Let's say I already know ahead of time which manager classes use the decorator.</p>
<p>How would I find that class B is undecorated?</p>
<p>Ideally the output would include something like <code>/path/to/rules.py:`B` is undecorated by any manager</code></p>
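<p>One possible starting point (a rough sketch that flags anything with an empty decorator list, without yet checking <em>which</em> manager a decorator belongs to) is the standard-library <code>ast</code> module, which exposes each definition's <code>decorator_list</code> and <code>lineno</code>:</p>

```python
import ast

source = """
manager = RuleManager()

@manager.register
class A:
    pass

class B:
    pass
"""

def find_undecorated(source: str, path: str = "rules.py"):
    # Report every class/function that has no decorators at all.
    # A real tool would also inspect node.decorator_list entries
    # (ast.Attribute / ast.Name) to match them against known managers.
    tree = ast.parse(source)
    return [
        f"{path}:`{node.name}` is undecorated by any manager"
        for node in ast.walk(tree)
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef))
        and not node.decorator_list
    ]

print(find_undecorated(source))  # -> ["rules.py:`B` is undecorated by any manager"]
```

<p>Because <code>ast.parse</code> never executes the module, this works even on files whose imports are unavailable to the tool.</p>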
|
<python><abstract-syntax-tree><python-decorators>
|
2024-10-28 04:24:28
| 2
| 968
|
ijustlovemath
|
79,132,015
| 3,169,868
|
browser_cookie3.BrowserCookieError: Unable to get key for cookie decryption
|
<p>While using <code>browser_cookie3</code> <a href="https://pypi.org/project/browser-cookie3/" rel="nofollow noreferrer">python package</a>, the following:</p>
<pre><code>my_domain = 'www.xxx.com'
browser_cookie3.chrome(domain_name=my_domain)
</code></pre>
<p>generates the following error message:</p>
<pre><code>Traceback (most recent call last):
File "/Users/create_cookies.py", line xx, in main
x = browser_cookie3.chrome(domain_name=my_domain)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/python311/lib/python3.11/site-packages/browser_cookie3/__init__.py", line 1160, in chrome
return Chrome(cookie_file, domain_name, key_file).load()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/python311/lib/python3.11/site-packages/browser_cookie3/__init__.py", line 515, in load
value = self._decrypt(value, enc_value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/python311/lib/python3.11/site-packages/browser_cookie3/__init__.py", line 585, in _decrypt
raise BrowserCookieError('Unable to get key for cookie decryption')
browser_cookie3.BrowserCookieError: Unable to get key for cookie decryption
</code></pre>
<p>Chrome is <code>Version 130.0.6723.70</code>, python is <code>3.11.9</code> and browser_cookie3 is <code>0.19.1</code>.</p>
|
<python><google-chrome><cookies>
|
2024-10-28 03:56:02
| 2
| 1,432
|
S_S
|
79,131,911
| 284,932
|
How to replace the deprecated 'conversational' pipeline in Hugging Face Transformers?
|
<p>I'm working on a chatbot using Hugging Face's Transformers library. I was previously using the conversational pipeline, but it seems that it has been deprecated. Here is the code I was using:</p>
<pre><code># Initialize the conversational pipeline
conversation = pipeline("conversational", model=model, tokenizer=tokenizer)
# Define the chat history
chat_history = [
{"role": "system", "content": "You are a smart chatbot"},
{"role": "user", "content": "What is the capital of France?"},
]
# Generate a response
response = conversation(chat_history)
print(response)
</code></pre>
<p>raises this error:</p>
<pre><code>KeyError: "Unknown task conversational, available tasks are ['audio-classification', 'automatic-speech-recognition', 'depth-estimation', 'document-question-answering', 'feature-extraction', 'fill-mask', 'image-classification', 'image-feature-extraction', 'image-segmentation', 'image-to-image', 'image-to-text', 'mask-generation', 'ner', 'object-detection', 'question-answering', 'sentiment-analysis', 'summarization', 'table-question-answering', 'text-classification', 'text-generation', 'text-to-audio', 'text-to-speech', 'text2text-generation', 'token-classification', 'translation', 'video-classification', 'visual-question-answering', 'vqa', 'zero-shot-audio-classification', 'zero-shot-classification', 'zero-shot-image-classification', 'zero-shot-object-detection', 'translation_XX_to_YY']"
</code></pre>
<p>Now that the conversational pipeline is deprecated, I'm unsure how to achieve the same functionality. I've read that I might need to use the text-generation pipeline or the model's generate method directly, but I'm not sure how to implement this with my existing chat history structure.</p>
<p>So, what is the recommended way to create a conversational chatbot using the latest version of Transformers?</p>
<p>Any guidance or code examples would be greatly appreciated!</p>
|
<python><huggingface-transformers><large-language-model>
|
2024-10-28 02:28:48
| 2
| 474
|
celsowm
|
79,131,807
| 605,794
|
Python static property autocomplete list not working in PyCharm
|
<p>I'm using PyCharm Community Edition 2024.2.4. Autocomplete function lists appear correctly except in the case where I have a static property returning an object instance.</p>
<p>The static property syntax comes from this post: <a href="https://stackoverflow.com/questions/1697501/staticmethod-with-property">@staticmethod with @property</a></p>
<pre><code>class staticproperty(property):
def __get__(self, owner_self, owner_cls):
return self.fget()
class _ScorecardClass(object):
_average = 0
@property
def average_score(self) -> float:
return self._average
@average_score.setter
def average_score(self, value):
self._average = value
class MainClass(object):
@staticmethod
def _get_scorecard():
return _ExternalDataProvider._get_scorecard()
@staticproperty
def Scorecard() -> _ScorecardClass:
r"""Returns the scorecard."""
return MainClass._get_scorecard()
</code></pre>
<p>In calling code, I can access the Scorecard object via its static property as follows:</p>
<pre><code>MainClass.Scorecard
</code></pre>
<p>However, the <code>average_score</code> property is not shown in the popup completion list for <code>Scorecard</code>. If I enter the property name manually, the code runs correctly.</p>
<pre><code>average = MainClass.Scorecard.average_score
</code></pre>
<p>I have the various code-completion options enabled in PyCharm. It seems to work fine for instance properties, but not statics. The auto-complete list displays correctly in VS Code. I have tried Python 3.9 and 3.12 and it's the same in both cases.</p>
<p>If I do:</p>
<pre><code>sc: _ScorecardClass
</code></pre>
<p>The sc variable shows the various properties of _ScorecardClass in its autocomplete list, complete with docstring when the property is highlighted.</p>
|
<python><pycharm><code-completion>
|
2024-10-28 00:55:21
| 1
| 1,635
|
greenback
|
79,131,544
| 3,990,451
|
How to reach inner button inside a inner open shadow root with selenium
|
<p>A webpage has this HTML:</p>
<pre><code><custom-tag1 class data-component-key="..." ...>
#shadow-root (open)
<div class="...">
<custom-tag2 ...>
#shadow-root (open)
<button class="button-download" style="background: transparent; border: 0;">
</button>
</custom-tag2>
</div>
</custom-tag1>
</code></pre>
<p>I can reach the first shadow root by doing:</p>
<pre><code>outer_host = driver.find_element_by_css_selector("custom-tag1")
outer_shadow_root = driver.execute_script("return arguments[0].shadowRoot", outer_host)
</code></pre>
<p>but then I can't repeat the process. <code>outer_shadow_root</code> is a <strong>ShadowRoot</strong> instance.
It has a <strong>find_element</strong> method, but I fail to locate <em>custom-tag2</em>.</p>
<p>No invocation of <code>find_element</code> with any locator works.</p>
<p>Any ideas?</p>
|
<python><selenium-webdriver>
|
2024-10-27 21:38:31
| 1
| 982
|
MMM
|
79,131,531
| 5,378,816
|
how to avoid deadlock in process.wait
|
<p>I'm looking for a way how to avoid or break the deadlock the <a href="https://docs.python.org/3/library/asyncio-subprocess.html#asyncio.subprocess.Process" rel="nofollow noreferrer">docs</a> warn about. Here is the warning:</p>
<blockquote>
<p>This method [process.wait] can deadlock when using stdout=PIPE or
stderr=PIPE and the child process generates so much output that it
blocks waiting for the OS pipe buffer to accept more data. Use the
communicate() method when using pipes to avoid this condition</p>
</blockquote>
<p>The code looks basically like this:</p>
<pre><code>proc = await asyncio.create_subprocess_exec(
program, stdout=asyncio.subprocess.PIPE)
# when the program generates a huge line,
# that is an error I want to handle
# with a program restart.
line = await proc.stdout.readline()
# readline() fails with: Separator is not found, and chunk exceed the limit
# the exception is caught and the handler kills the process:
proc.terminate()
# I checked that the PID does not exists any more
# but now this line (it's in a different task) blocks !!
await proc.wait()
</code></pre>
<p>The suggested <code>communicate</code> method is of no use in my case; the communication protocol is different.</p>
<p>I guess I should somehow discard the data from the pipe to unblock the mentioned buffer, but don't know how.</p>
<hr />
<p>UPDATE:</p>
<p>I made some progress:</p>
<ul>
<li><p><code>proc.returncode</code> value gets filled in. That is just a detail, but it shows the process termination was registered.</p>
</li>
<li><p>a simple <code>await proc.stdout.read()</code> drains the buffer and unblocks the <code>proc.wait()</code></p>
</li>
</ul>
<p>Basically I have now a working solution, but new ideas would be still appreciated.</p>
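<p>The drain-then-wait approach from the update can be sketched like this (a simplified, self-contained example; the child command and limits stand in for the real protocol):</p>

```python
import asyncio
import sys

async def stop_process(proc: asyncio.subprocess.Process) -> int:
    """Terminate the child, drain leftover stdout so the OS pipe buffer
    empties, then wait for the exit status without deadlocking."""
    try:
        proc.terminate()
    except ProcessLookupError:
        pass  # child already exited
    await proc.stdout.read()  # discard any buffered output
    return await proc.wait()

async def main() -> int:
    # Child that writes one huge line, mimicking the failure scenario.
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print('x' * 1_000_000)",
        stdout=asyncio.subprocess.PIPE,
    )
    try:
        await proc.stdout.readline()
    except ValueError:
        # "Separator is not found, and chunk exceed the limit"
        pass
    return await stop_process(proc)

rc = asyncio.run(main())
print("exit status:", rc)
```

<p>The key line is the unconditional <code>proc.stdout.read()</code> before <code>proc.wait()</code>: it runs until EOF, so the pipe buffer can never keep <code>wait()</code> blocked.</p>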
|
<python><python-asyncio>
|
2024-10-27 21:26:52
| 0
| 17,998
|
VPfB
|
79,131,319
| 529,103
|
How do you remove a python venv from vscode?
|
<p>I've added several custom virtual environments to <code>vscode</code>, so that when I use the command palette I can select from a variety of builtin ones, as well as my own.</p>
<p>I'd like to remove some of these, but I can't figure out where vscode stores these. I don't have a settings.json in my workspace, but it still remembers the ones I've added via a custom interpreter path. I'd also like to remove some of the builtin ones like <code>~/anaconda3/python.exe</code>, and <code>~/anaconda3/envs/sd_playground/python.exe</code>, which I never added and am never going to use.</p>
<p>What json or other settings file can I edit to make <code>vscode</code> not know about these?</p>
|
<python><vscode-python>
|
2024-10-27 19:19:24
| 1
| 792
|
Zachary Turner
|
79,131,144
| 1,574,054
|
Matplotlib pgf export: supylabel misaligned
|
<p>I am trying to add a <code>supylabel</code> to two subplots in matplotlib and export the result as a <code>.pgf</code>. However, the <code>supylabel</code> is clearly not centered.</p>
<p>This is the python code:</p>
<pre><code>fig, axes = pyplot.subplots(
2,
1,
figsize=(3, 3),
squeeze=True,
sharex=True,
sharey=True,
layout="constrained"
)
xy = numpy.linspace(-80, 70)
z = numpy.zeros((len(xy), len(xy)))
supylabel_text = fig.supylabel("Super-y axis label")
for i in range(2):
ax: pyplot.Axes = axes[i]
im = ax.pcolormesh(
xy, xy, z, shading="gouraud", rasterized=True
)
ax.set_aspect("equal")
# Remove all ticks and labels, see comment.
# ax.get_xaxis().set_visible(False)
# ax.get_yaxis().set_visible(False)
fig.savefig("test_supy.pgf")
fig.savefig("test_supy.pgf.pdf")
</code></pre>
<p>This is the figure as rendered inside a latex document:</p>
<p><a href="https://i.sstatic.net/Olj9Xrw1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Olj9Xrw1.png" alt="Two empty colormesh plots. Left a label for the y-axis, which hangs too low." /></a></p>
<p>Clearly, the <code>supylabel</code> hangs too low. Interestingly, in the pdf which is also exported from the code above, it is placed correctly:</p>
<p><a href="https://i.sstatic.net/JpjJm8G2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpjJm8G2.png" alt="Two empty colormesh plots. Left a label for the y-axis, which is centered vertically." /></a></p>
<p>This is the generated PGF:</p>
<pre><code>\begingroup%
\makeatletter%
\begin{pgfpicture}%
\pgfpathrectangle{\pgfpointorigin}{\pgfqpoint{3.000000in}{3.000000in}}%
\pgfusepath{use as bounding box, clip}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetmiterjoin%
\definecolor{currentfill}{rgb}{1.000000,1.000000,1.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.000000pt}%
\definecolor{currentstroke}{rgb}{1.000000,1.000000,1.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{3.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{3.000000in}{3.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{3.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathclose%
\pgfusepath{fill}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetmiterjoin%
\definecolor{currentfill}{rgb}{1.000000,1.000000,1.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.000000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetstrokeopacity{0.000000}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{1.681793in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{1.681793in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{2.958330in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{2.958330in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{1.681793in}}%
\pgfpathclose%
\pgfusepath{fill}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsys@transformshift{0.980000in}{1.680000in}%
\pgftext[left,bottom]{\includegraphics[interpolate=true,width=1.280000in,height=1.280000in]{test_supy-img0.png}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{1.239357in}{1.681793in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{1.664869in}{1.681793in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{2.090381in}{1.681793in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{1.937100in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.602071in, y=1.884339in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}\ensuremath{-}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{2.362613in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.798462in, y=2.309851in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}0}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{2.788125in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.710096in, y=2.735364in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{1.681793in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{2.958330in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{2.260586in}{1.681793in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{2.958330in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{1.681793in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{1.681793in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{2.958330in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{2.958330in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetmiterjoin%
\definecolor{currentfill}{rgb}{1.000000,1.000000,1.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.000000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetstrokeopacity{0.000000}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{0.273305in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{0.273305in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{1.549842in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{1.549842in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{0.273305in}}%
\pgfpathclose%
\pgfusepath{fill}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsys@transformshift{0.980000in}{0.270000in}%
\pgftext[left,bottom]{\includegraphics[interpolate=true,width=1.280000in,height=1.280000in]{test_supy-img1.png}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{1.239357in}{0.273305in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=1.239357in,y=0.176083in,,top]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}\ensuremath{-}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{1.664869in}{0.273305in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=1.664869in,y=0.176083in,,top]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}0}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{0.000000in}{-0.048611in}}{\pgfqpoint{0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{0.000000in}{-0.048611in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{2.090381in}{0.273305in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=2.090381in,y=0.176083in,,top]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{0.528612in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.602071in, y=0.475851in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}\ensuremath{-}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{0.954125in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.798462in, y=0.901363in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}0}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetbuttcap%
\pgfsetroundjoin%
\definecolor{currentfill}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetfillcolor{currentfill}%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfsys@defobject{currentmarker}{\pgfqpoint{-0.048611in}{0.000000in}}{\pgfqpoint{-0.000000in}{0.000000in}}{%
\pgfpathmoveto{\pgfqpoint{-0.000000in}{0.000000in}}%
\pgfpathlineto{\pgfqpoint{-0.048611in}{0.000000in}}%
\pgfusepath{stroke,fill}%
}%
\begin{pgfscope}%
\pgfsys@transformshift{0.984049in}{1.379637in}%
\pgfsys@useobject{currentmarker}{}%
\end{pgfscope}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.710096in, y=1.326875in, left, base]{\color{textcolor}{\sffamily\fontsize{10.000000}{12.000000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}50}}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{0.273305in}}%
\pgfpathlineto{\pgfqpoint{0.984049in}{1.549842in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{2.260586in}{0.273305in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{1.549842in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{0.273305in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{0.273305in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\pgfsetrectcap%
\pgfsetmiterjoin%
\pgfsetlinewidth{0.803000pt}%
\definecolor{currentstroke}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{currentstroke}%
\pgfsetdash{}{0pt}%
\pgfpathmoveto{\pgfqpoint{0.984049in}{1.549842in}}%
\pgfpathlineto{\pgfqpoint{2.260586in}{1.549842in}}%
\pgfusepath{stroke}%
\end{pgfscope}%
\begin{pgfscope}%
\definecolor{textcolor}{rgb}{0.000000,0.000000,0.000000}%
\pgfsetstrokecolor{textcolor}%
\pgfsetfillcolor{textcolor}%
\pgftext[x=0.168298in, y=0.761800in, left, base,rotate=90.000000]{\color{textcolor}{\sffamily\fontsize{12.000000}{14.400000}\selectfont\catcode`\^=\active\def^{\ifmmode\sp\else\^{}\fi}\catcode`\%=\active\def%{\%}Super-y axis label}}%
\end{pgfscope}%
\end{pgfpicture}%
\makeatother%
\endgroup%
</code></pre>
<p>After removing all ticks and labels, the <code>supylabel</code> is still misaligned:</p>
<p><a href="https://i.sstatic.net/lpLzru9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lpLzru9F.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><pgf>
|
2024-10-27 17:31:34
| 0
| 4,589
|
HerpDerpington
|
79,130,996
| 5,786,649
|
Programmatically change components of pytorch Model?
|
<p>I am training a model in pytorch and would like to be able to programmatically change some components of the model architecture to check which works best without any if-blocks in the <code>forward()</code>. Consider a toy example:</p>
<pre class="lang-py prettyprint-override"><code>import torch
class Model(torch.nn.Module):
    def __init__(self, layers: str, d_in: int, d_out: int):
        super().__init__()
        self.layers = layers
        self.linears = torch.nn.ModuleList([
            torch.nn.Linear(d_in, d_out),
            torch.nn.Linear(d_in, d_out),
        ])

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        if self.layers == "parallel":
            x1 = self.linears[0](x1)
            x2 = self.linears[1](x2)
            x = x1 + x2
        elif self.layers == "sequential":
            x = x1 + x2
            x = self.linears[0](x)
            x = self.linears[1](x)
        return x
</code></pre>
<p>My first intuition was to provide external functions, e.g.</p>
<pre><code>def parallel(x1, x2):
x1 = self.linears[0](x1)
x2 = self.linears[1](x2)
return x1 + x2
</code></pre>
<p>and provide them to the model, like</p>
<pre><code>class Model(torch.nn.Module):
    def __init__(self, layers: str, d_in: int, d_out: int, fn: Callable):
        super().__init__()
        self.layers = layers
        self.linears = torch.nn.ModuleList([
            torch.nn.Linear(d_in, d_out),
            torch.nn.Linear(d_in, d_out),
        ])
        self.fn = fn

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        return self.fn(x1, x2)
</code></pre>
<p>but of course the function's scope does not know <code>self.linears</code> and I would also like to avoid having to pass each and every architectural element to the function.</p>
<p>Do I wish for too much? Do I have to "bite the sour apple", as the German saying goes, and either accept larger function signatures, use if-conditions, or something else? Or is there a solution to my problem?</p>
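<p>One common pattern (a sketch, assuming the choice is fixed per instance) is to resolve the combining step to a bound method once in <code>__init__</code>, so <code>forward()</code> stays branch-free while every variant still sees <code>self.linears</code>:</p>

```python
import torch

class Model(torch.nn.Module):
    def __init__(self, layers: str, d_in: int, d_out: int):
        super().__init__()
        self.linears = torch.nn.ModuleList([
            torch.nn.Linear(d_in, d_out),
            torch.nn.Linear(d_in, d_out),
        ])
        # Resolve once at construction time; bound methods close over self,
        # so they can reach self.linears without extra arguments.
        self._combine = {
            "parallel": self._parallel,
            "sequential": self._sequential,
        }[layers]

    def _parallel(self, x1, x2):
        return self.linears[0](x1) + self.linears[1](x2)

    def _sequential(self, x1, x2):
        # Note: chaining both layers requires d_in == d_out.
        return self.linears[1](self.linears[0](x1 + x2))

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        return self._combine(x1, x2)
```

<p>Truly external functions can be attached the same way by binding them with <code>types.MethodType(fn, model_instance)</code>, which gives them access to <code>self</code> without enlarging their signatures.</p>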
|
<python><pytorch>
|
2024-10-27 16:25:12
| 1
| 543
|
Lukas
|
79,130,980
| 7,123,797
|
How do the copy and deepcopy functions work for a bytearray argument?
|
<p>The <a href="https://docs.python.org/3/library/copy.html" rel="nofollow noreferrer">docs</a> say:</p>
<blockquote>
<p>The difference between shallow and deep copying is only relevant for compound objects (objects that contain other objects, like lists or class instances)</p>
</blockquote>
<p>It seems that the <code>bytearray</code> object is not a compound object (I think so because, unlike the <code>list</code> object, <code>bytearray</code> object doesn't contain references to other objects). Therefore, <code>copy.copy</code> and <code>copy.deepcopy</code> will produce the same result for such an object. But the docs don't describe the details for this particular data type. I know that for immutable numeric and <code>str</code> objects (they are obviously not compound too) <code>copy.copy</code> and <code>copy.deepcopy</code> simply return their argument (they don't create a new object here).</p>
<p>I did some tests with a <code>bytearray</code> object <code>obj</code> and came to the conclusion that for such an object functions <code>copy.copy(obj)</code> and <code>copy.deepcopy(obj)</code> will create a new object with the same value as the original object <code>obj</code> have:</p>
<pre><code>>>> import copy
>>> obj = bytearray(b'hello')
>>> obj2 = copy.copy(obj)
>>> obj3 = copy.deepcopy(obj)
>>> obj2 is obj; obj3 is obj
False
False
>>> obj2 == obj; obj3 == obj
True
True
</code></pre>
<p>So, can you say that my conclusion is correct (in particular, there is no need to make a deep copy of a <code>bytearray</code> object – it is enough to make a shallow copy)? Maybe there are some other important details in this process (making a shallow or deep copy of a <code>bytearray</code> object) that I should be aware of?</p>
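<p>A further check along the same lines, mutating the original after copying, also suggests both copies are fully independent buffers:</p>
<pre><code>import copy

original = bytearray(b'hello')
shallow = copy.copy(original)
deep = copy.deepcopy(original)

# bytearray is mutable, so this changes only the original buffer
original[0] = ord('H')

print(shallow)  # bytearray(b'hello') -- unaffected
print(deep)     # bytearray(b'hello') -- behaves identically to the shallow copy
</code></pre>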
|
<python><copy><deep-copy>
|
2024-10-27 16:16:09
| 1
| 355
|
Rodvi
|
79,130,588
| 18,385,480
|
Geometric visualization of Cosine Similarity
|
<p>I have calculated the cosine similarity between two documents in a very basic way by using TF-IDF vectorization in Python.
But I want to visualize the documents as a vectorized graph in a 3D space. Like this:</p>
<p><a href="https://i.sstatic.net/pBFMUIOf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBFMUIOf.png" alt="example of graph" /></a></p>
<p>Here's the code I used for calculating cosine similarity:</p>
<pre class="lang-py prettyprint-override"><code>from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import matplotlib.pyplot as plt
import numpy as np
# Sample documents
documents = [
"This is a sample document.",
"This document is another example."
]
# Vectorizing the documents
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(documents)
# Calculating cosine similarity
cosine_sim = cosine_similarity(tfidf_matrix[0:1], tfidf_matrix[1:2])
print("Cosine Similarity:", cosine_sim)
</code></pre>
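<p>For reference, a sketch of the kind of plot I'm after (assumptions: a third document is added so the projection has full rank, and <code>TruncatedSVD</code> squeezes the high-dimensional TF-IDF vectors into 3-D, which only approximately preserves the angles between them):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
matplotlib.use("Agg")  # off-screen rendering; drop this line for an interactive window
import matplotlib.pyplot as plt
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "This is a sample document.",
    "This document is another example.",
    "A third document gives the projection full rank.",
]

tfidf = TfidfVectorizer().fit_transform(documents)

# TF-IDF vectors have one axis per vocabulary term, so they cannot be drawn
# directly; project them down to 3 dimensions first.
coords = TruncatedSVD(n_components=3).fit_transform(tfidf)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for vec, label in zip(coords, ["doc 1", "doc 2", "doc 3"]):
    ax.quiver(0, 0, 0, *vec)  # arrow from the origin to the document vector
    ax.text(*vec, label)
fig.savefig("vectors.svg")
</code></pre>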
|
<python><nlp><tf-idf><cosine-similarity><tfidfvectorizer>
|
2024-10-27 12:32:32
| 1
| 723
|
bsraskr
|
79,130,497
| 6,197,439
|
Creating a custom diagonal line QBrush tiled pattern in PyQt5?
|
<p>So, at first I wanted to use a gradient background to emphasize certain cells / items in my QTableView, but as it can be seen in <a href="https://stackoverflow.com/questions/79129869/stretchable-qlineargradient-as-backgroundrole-for-resizeable-qtableview-cells-in">Stretchable QLinearGradient as BackgroundRole for resizeable QTableView cells in PyQt5?</a>, I could not quite get that to work.</p>
<p>Then I thought maybe I could somehow use CSS to define the gradient background and Qt properties to control which items it is shown on, however the problem is that in a QTableView, the cells / items are QStyledItemDelegate, and as noted in <a href="https://forum.qt.io/topic/84304/qtreewidget-how-to-implement-custom-properties-for-stylesheet" rel="nofollow noreferrer">https://forum.qt.io/topic/84304/qtreewidget-how-to-implement-custom-properties-for-stylesheet</a> :</p>
<blockquote>
<p>Dynamic properties can only be applied to QWidgets!</p>
<blockquote>
<p>Is it possible to apply the property on the selected item only?</p>
</blockquote>
<p>no, not directly. Unless you subclass a custom QStyledItemDelegate and initialize the StyleOption for the painted index according to your widget's property.</p>
</blockquote>
<p>So then looking at <a href="https://doc.qt.io/qt-5/qbrush.html" rel="nofollow noreferrer">https://doc.qt.io/qt-5/qbrush.html</a> I saw there are different patterns that can be chosen for QBrush, and in fact I liked Qt.BDiagPattern diagonal line pattern / hatch - but the lines were too thin for my taste.</p>
<p>So, I wanted to customize the line thickness for the BDiagPattern, and I found <a href="https://stackoverflow.com/questions/62799632/can-i-customize-own-brushstyle">Can I customize own brushstyle?</a> which has an example of how to draw a QPixmap and use it as a texture pattern that repeats / tiles, however, it does not demonstrate how to do diagonal lines.</p>
<p>So, I came up with this example:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QPointF, QEvent)
from PyQt5.QtGui import (QColor, QGradient, QLinearGradient, QBrush, QTransform)
from PyQt5.QtWidgets import QApplication


# starting point from https://www.pythonguis.com/tutorials/qtableview-modelviews-numpy-pandas/
class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data, parent):
        super(TableModel, self).__init__()
        self._data = data
        self.parent = parent
        self.bg_col1 = QColor("#A3A3FF")
        self.bg_col2 = QColor("#FFFFA3")

    def create_texture(self):  # https://stackoverflow.com/q/62799632
        pixmap = QtGui.QPixmap(QtCore.QSize(16, 16))
        pixmap.fill(QColor(0, 0, 0, 0))  # without .fill, bg is black; can use transparent though
        painter = QtGui.QPainter()
        painter.begin(pixmap)
        painter.setBrush(QtGui.QBrush(QtGui.QColor("blue")))
        painter.setPen(QtGui.QPen(self.bg_col1, 5, Qt.SolidLine))
        painter.drawLine(pixmap.rect().bottomLeft(), pixmap.rect().topRight())
        painter.end()
        return pixmap

    def data(self, index, role):
        if role == Qt.DisplayRole:
            # See below for the nested-list data structure.
            # .row() indexes into the outer list,
            # .column() indexes into the sub-list
            return self._data[index.row()][index.column()]
        if role == Qt.BackgroundRole:
            if index.column() == 2:
                print(f"{self.parent.table.itemDelegate(index)=}")
                brush = QBrush(self.create_texture())
                brush.setTransform(QTransform(QTransform.fromScale(4, 4)))  # zoom / scale - https://stackoverflow.com/q/41538932; scales pixels themselves
                return brush

    def rowCount(self, index):
        # The length of the outer list.
        return len(self._data)

    def columnCount(self, index):
        # The following takes the first sub-list, and returns
        # the length (only works if all rows are an equal length)
        return len(self._data[0])


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.table = QtWidgets.QTableView()
        data = [
            [4, 9, 2, 2],
            [1, 0, 0, 0],
            [3, 5, 0, 0],
            [3, 3, 2, 2],
            [7, 8, 9, 9],
        ]
        self.model = TableModel(data, self)
        self.table.setModel(self.model)
        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p>... however, as you can see from the output screenshot:</p>
<p><a href="https://i.sstatic.net/i1zcugj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i1zcugj8.png" alt="table with custom brush background" /></a></p>
<p>... the diagonal lines don't quite "flow" into each other - in other words, the pattern does not tile.</p>
<p>So, how can I draw a diagonal line pattern that tiles nicely, with custom line thickness, using QBrush and QPixmap?</p>
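<p>For reference, my current best guess at a fix (a sketch, not verified in the full table example): with a thick pen the single anti-diagonal gets clipped at two corners, so the tile also needs the two short corner stubs that belong to the neighbouring repeats. The <code>offscreen</code> platform line is only there so the snippet can run headless:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys

os.environ.setdefault("QT_QPA_PLATFORM", "offscreen")  # allow running without a display

from PyQt5 import QtGui, QtWidgets
from PyQt5.QtCore import Qt

app = QtWidgets.QApplication(sys.argv)  # QPixmap painting needs an application object


def create_tiling_texture(size=16, thickness=5, color=QtGui.QColor("#A3A3FF")):
    pixmap = QtGui.QPixmap(size, size)
    pixmap.fill(Qt.transparent)
    painter = QtGui.QPainter(pixmap)
    painter.setPen(QtGui.QPen(color, thickness, Qt.SolidLine))
    # Main anti-diagonal, plus the two corner stubs the thick pen clips off,
    # so the pattern wraps seamlessly at the tile edges.
    painter.drawLine(0, size, size, 0)
    painter.drawLine(-thickness, thickness, thickness, -thickness)
    painter.drawLine(size - thickness, size + thickness, size + thickness, size - thickness)
    painter.end()
    return pixmap


brush = QtGui.QBrush(create_tiling_texture())
</code></pre>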
|
<python><pyqt5><qt5>
|
2024-10-27 11:47:17
| 1
| 5,938
|
sdbbs
|
79,130,437
| 7,945,506
|
How can I implement parallel threads with a GUI?
|
<p>With python, I want to run a slideshow (tkinter) and - in parallel - open a video feed. If there is a "thumbs up" gesture detected, the slideshow and the video feed should close.</p>
<p>My current implementation:</p>
<p><strong>main.py</strong></p>
<pre><code>import threading
import time
from slideshow import start_slideshow
from detect_gesture import start_gesture_detection
running_flag = {'running': True}
slideshow = start_slideshow(image_files=matching_images, interval=interval)
print("Slideshow initialized and started")
gesture_thread = threading.Thread(target=start_gesture_detection, args=(slideshow, running_flag))
gesture_thread.start()
print("Started gesture detection thread")
slideshow.mainloop()
running_flag['running'] = False
gesture_thread.join()
</code></pre>
<p><strong>slideshow.py</strong></p>
<pre><code>import tkinter as tk
from PIL import Image, ImageTk
import itertools
import os
from helpers import to_absolute_path


class Slideshow(tk.Tk):
    def __init__(self, image_folder, image_files, interval=1):
        super().__init__()
        self.image_files = itertools.cycle(image_files)
        self.interval = interval * 1000  # Convert to milliseconds
        self.running = True  # Track if slideshow is running
        self.current_after_id = None
        self.title("Slideshow")
        self.geometry("800x600")
        self.configure(background='black')
        self.load_next_image(image_folder)

    def load_next_image(self, image_folder):
        if not self.running:
            return
        # Get next image file
        image_file = next(self.image_files)
        image_path = os.path.join(image_folder, image_file)
        image = Image.open(image_path)
        image.thumbnail((800, 600))
        self.photo = ImageTk.PhotoImage(image)
        if hasattr(self, "image_label"):
            self.image_label.config(image=self.photo)
        else:
            self.image_label = tk.Label(self, image=self.photo)
            self.image_label.pack(expand=True, fill=tk.BOTH)
        self.current_after_id = self.after(self.interval, lambda: self.load_next_image(image_folder))

    def stop(self):
        self.running = False
        if self.current_after_id:
            self.after_cancel(self.current_after_id)
        self.destroy()


def start_slideshow(image_files, interval=3):
    image_folder = to_absolute_path("../data/images")
    slideshow = Slideshow(image_folder, image_files, interval)
    return slideshow
</code></pre>
<p><strong>detect_gesture.py</strong></p>
<pre><code>import cv2
import mediapipe as mp

# MediaPipe hands setup
mp_hands = mp.solutions.hands
mp_drawing = mp.solutions.drawing_utils
hands = mp_hands.Hands(static_image_mode=False, max_num_hands=1, min_detection_confidence=0.7, min_tracking_confidence=0.5)


def is_thumbs_up(landmarks):
    thumb_tip = landmarks[4]
    index_tip = landmarks[8]
    middle_tip = landmarks[12]
    ring_tip = landmarks[16]
    pinky_tip = landmarks[20]
    return (thumb_tip[1] < index_tip[1] and
            thumb_tip[1] < middle_tip[1] and
            thumb_tip[1] < ring_tip[1] and
            thumb_tip[1] < pinky_tip[1])


def start_gesture_detection(slideshow, running_flag):
    cap = cv2.VideoCapture(0)  # Open the camera
    while running_flag['running']:
        success, frame = cap.read()
        if not success:
            break
        frame = cv2.flip(frame, 1)  # Flip the frame horizontally
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # Convert to RGB
        result = hands.process(rgb_frame)
        if result.multi_hand_landmarks:
            for hand_landmarks in result.multi_hand_landmarks:
                mp_drawing.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
                landmarks = [(lm.x, lm.y) for lm in hand_landmarks.landmark]
                if is_thumbs_up(landmarks):
                    print("Thumbs-Up detected!")
                    slideshow.stop()  # Stop the slideshow
                    running_flag['running'] = False  # Stop the detection loop
                    break
        cv2.imshow('Gesture Detection', frame)
        # Break the loop if the user manually closes the OpenCV window
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    # Cleanup
    cap.release()
    cv2.destroyAllWindows()  # Close all OpenCV windows
</code></pre>
<p>When I run <code>main.py</code>, at first it works as expected: the slideshow opens, as does the video feed. When I do the thumbs-up, the slideshow closes but the video feed freezes (instead of closing).</p>
<p>I get this output:</p>
<pre><code>Slideshow initialized and started
Started gesture detection thread
...\.venv\Lib\site-packages\google\protobuf\symbol_database.py:55: UserWarning: SymbolDatabase.GetPrototype() is deprecated. Please use message_factory.GetMessageClass() instead. SymbolDatabase.GetPrototype() will be removed soon.
warnings.warn('SymbolDatabase.GetPrototype() is deprecated. Please '
Thumbs-Up detected!
</code></pre>
<p>What could be the problem and how could I fix it?</p>
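<p>My current suspicion, reduced to a GUI-free sketch (OpenCV and Tk replaced by stand-ins): the worker thread should only <em>signal</em> via a flag or <code>threading.Event</code>, and the main thread, which owns the GUI, should poll that flag (e.g. with <code>root.after()</code> in Tk) and do all the closing itself:</p>
<pre><code>import threading
import time

stop_event = threading.Event()


def gesture_worker():
    # stand-in for the OpenCV loop: pretend a thumbs-up arrives shortly
    time.sleep(0.05)
    stop_event.set()  # only signal -- never call GUI methods from this thread


threading.Thread(target=gesture_worker, daemon=True).start()

# stand-in for the Tk main thread: poll the flag instead of being called
# from the worker (in Tk this would be a root.after(100, poll) callback)
while not stop_event.is_set():
    time.sleep(0.01)

# reaching here means the main thread can tear down both windows itself
print("main thread shutting everything down")
</code></pre>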
|
<python><opencv><user-interface><tkinter><python-multithreading>
|
2024-10-27 11:09:06
| 0
| 613
|
Julian
|
79,130,355
| 1,306,892
|
Issues with LaTeX Formatting in Matplotlib
|
<p>I'm trying to display LaTeX formulas using Matplotlib in Python. I have two pieces of code, one of which works correctly, while the other raises a <code>ValueError</code>.</p>
<p>Here’s the code that works:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
# Set the LaTeX formula
formula = r"$L_0(x)=x \quad ; \quad L_n(x)=L_{n-1}^2-2 \quad \forall n \geq 1$"
# Create a figure and an axis to display the formula
fig, ax = plt.subplots(figsize=(6, 1))
ax.text(0.5, 0.5, formula, fontsize=20, ha='center', va='center')
ax.axis('off') # Remove axes to show only the formula
# Save as SVG
plt.savefig("formula.svg", format="svg")
plt.show()
</code></pre>
<p>This code produces the desired output. However, when I run this other code:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
# Set the LaTeX formula
formula = r"$L_n= \begin{cases}4 & \text { if } n=0 \\ L_{n-1}^2-2 & \text { otherwise }\end{cases}$"
# Create a figure and an axis to display the formula
fig, ax = plt.subplots(figsize=(6, 1))
ax.text(0.5, 0.5, formula, fontsize=20, ha='center', va='center')
ax.axis('off') # Remove axes to show only the formula
# Save as SVG
plt.savefig("formula.svg", format="svg")
plt.show()
</code></pre>
<p>I receive the following error:</p>
<pre><code>ValueError:
L_n= \begin{cases}4 & \text { if } n=0 \\ L_{n-1}^2-2 & \text { otherwise }\end{cases}
^
ParseFatalException: Unknown symbol: \begin, found '\' (at char 5), (line:1, col:6)
</code></pre>
<p>It seems like Matplotlib has trouble parsing the <code>\begin{cases}</code> structure in the second formula. How can I fix this error? Is there a specific format I need to follow for cases in LaTeX with Matplotlib?</p>
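<p>For context, the workaround I'm currently considering (untested beyond setting the options): as far as I can tell, matplotlib's built-in mathtext parser doesn't know LaTeX environments like <code>\begin{cases}</code>, so rendering has to be delegated to a real LaTeX installation via <code>text.usetex</code>, with <code>amsmath</code> loaded for <code>cases</code>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Hand text rendering to an external LaTeX install (it must be on PATH);
# mathtext alone cannot parse \begin{...} environments.
plt.rcParams["text.usetex"] = True
plt.rcParams["text.latex.preamble"] = r"\usepackage{amsmath}"  # provides {cases}

formula = r"$L_n= \begin{cases}4 &amp; \text{if } n=0 \\ L_{n-1}^2-2 &amp; \text{otherwise}\end{cases}$"
# From here, ax.text(..., formula) and savefig go through the latex binary.
</code></pre>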
<p>Thank you for your help!</p>
|
<python><matplotlib><latex>
|
2024-10-27 10:25:03
| 0
| 1,801
|
Mark
|
79,130,350
| 3,686,187
|
PIP installing lowest possible version of dependency without explanation
|
<p>AFAIK, pip's default behavior is to install latest released versions of subdependencies, which satisfy all requirements. However, I'm observing quite the opposite behavior when trying to install my project afresh.</p>
<p>Here's my requirements.txt (it does beg for some stricter version specification, I know):</p>
<pre><code>datasets>=2.14.2
rouge-score>=0.0.4
nlpaug>=1.1.10
scikit-learn>=1.5.1
tqdm>=4.64.1
matplotlib>=3.6
pandas>=1.3.5
torch>=1.13.0
bs4
transformers>=4.40
nltk>=3.6.5
sacrebleu>=1.5.0
sentencepiece>=0.1.97
hf-lfs>=0.0.3
pytest>=4.4.1
pytreebank>=0.2.7
setuptools>=60.2.0
numpy>=1.23.5
dill>=0.3.5.1
scipy>=1.9.3
flask>=2.3.2
protobuf>=4.23
fschat>=0.2.3
hydra-core>=1.3.2
einops
accelerate>=0.32.1
bitsandbytes
openai>=1.52.0
wget
sentence-transformers
bert-score>=0.3.13
unbabel-comet==2.2.1
nltk>=3.7,<4
evaluate>=0.4.2
spacy>=3.4.0,<4
fastchat
diskcache>=5.6.3
</code></pre>
<p>This is an excerpt from what I see when running <code>pip install -e .</code> in the project directory with a fresh conda env:</p>
<pre><code>Collecting contourpy>=1.0.1 (from matplotlib>=3.6->lm_polygraph==0.0.0)
Using cached contourpy-1.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.8 kB)
Using cached contourpy-1.2.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.8 kB)
Using cached contourpy-1.1.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.9 kB)
Using cached contourpy-1.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.7 kB)
Using cached contourpy-1.0.7-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)
Using cached contourpy-1.0.6-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.5 kB)
Using cached contourpy-1.0.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.3 kB)
Using cached contourpy-1.0.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.2 kB)
Using cached contourpy-1.0.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.0 kB)
Using cached contourpy-1.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.2 kB)
Using cached contourpy-1.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (2.2 kB)
</code></pre>
<p>This happens to several sub-dependencies and might take a while, if no cache is present on the machine, and eventually installs the ancient versions of the packages. Why would it do that, if the first option it tries seems to satisfy the requirements? Running pip with <code>-vvv</code> doesn't specify why it tries lower versions either.</p>
<p>Some context:</p>
<pre><code>python --version
Python 3.10.0
pip --version
pip 24.2 from /apps/local/anaconda3/envs/polygraph_test/lib/python3.10/site-packages/pip (python 3.10)
</code></pre>
<p><code>pip debug</code> output:
<a href="https://pastebin.com/NkJ9qbPG" rel="nofollow noreferrer">https://pastebin.com/NkJ9qbPG</a></p>
<p>Full install log:
<a href="https://pastebin.com/u8RY3pEp" rel="nofollow noreferrer">https://pastebin.com/u8RY3pEp</a></p>
<p>UPD: weirdly enough, it installs latest versions eventually:</p>
<pre><code>pip list | grep contour
contourpy 1.3.0
</code></pre>
<p>But tries all the versions between lowest satisfying and latest available, which takes a lot of time for something like <code>transformers</code>, for which it tries to download all versions between 4.40 and 4.46.</p>
|
<python><pip><pypi>
|
2024-10-27 10:22:40
| 0
| 389
|
Roman
|
79,130,216
| 1,487,336
|
How to plot Pandas Series having zero values and DateTimeIndex properly?
|
<p>I have a Pandas Series having many zero values and a DateTimeIndex. I want to plot it with only some of the zero values, and handle the datetime spacing properly.</p>
<p>For example, the series is as follows. The simple plot shows too many zeros. I only want to show a few zeros before and after the non-zero values. And at the same time, hiding the dates between properly.</p>
<pre><code>ser_tmp = pd.Series(0, index=pd.date_range('2020-01-01', '2020-01-30'))
ser_tmp.loc[[pd.Timestamp('2020-01-03'), pd.Timestamp('2020-01-04'), pd.Timestamp('2020-01-23'), pd.Timestamp('2020-01-24')]] = 1
ser_tmp.plot()
</code></pre>
<p><a href="https://i.sstatic.net/M62AKAXp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M62AKAXp.png" alt="enter image description here" /></a></p>
<p>Plot after replacing all zeros is not what I want.</p>
<pre><code>ser_tmp.replace(0, np.nan).plot()
</code></pre>
<p><a href="https://i.sstatic.net/ED3fRB1Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ED3fRB1Z.png" alt="enter image description here" /></a></p>
<p>What I want is something like <code>ser_tmp2</code>. But plotting <code>ser_tmp2</code> didn't hide the date gap between <code>2020-01-06</code> and <code>2020-01-21</code>. Moreover, I would like to detect the dates automatically, rather than setting zeros manually as in <code>ser_tmp2</code>.</p>
<pre><code>ser_tmp2 = ser_tmp.replace(0, np.nan).copy()
ser_tmp2.loc[[pd.Timestamp('2020-01-01'), pd.Timestamp('2020-01-02'), pd.Timestamp('2020-01-05'), pd.Timestamp('2020-01-06'), pd.Timestamp('2020-01-21'), pd.Timestamp('2020-01-22'), pd.Timestamp('2020-01-25'), pd.Timestamp('2020-01-26')]] = 0
ser_tmp2 = ser_tmp2.dropna()
ser_tmp2
ser_tmp2.plot()
</code></pre>
<p><a href="https://i.sstatic.net/C0anYErk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/C0anYErk.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/lGHsZhu9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGHsZhu9.png" alt="enter image description here" /></a></p>
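<p>My current attempt at automating both steps (detecting the padding dates with a centred rolling window, and hiding the date gap by plotting against positions instead of timestamps — the two-day padding is an assumption):</p>
<pre><code>import pandas as pd
import matplotlib
matplotlib.use("Agg")  # off-screen rendering
import matplotlib.pyplot as plt

ser_tmp = pd.Series(0, index=pd.date_range('2020-01-01', '2020-01-30'))
ser_tmp.loc[[pd.Timestamp('2020-01-03'), pd.Timestamp('2020-01-04'),
             pd.Timestamp('2020-01-23'), pd.Timestamp('2020-01-24')]] = 1

# Keep any date within +/- pad days of a non-zero value (window = 2*pad + 1)
pad = 2
keep = ser_tmp.ne(0).astype(int).rolling(2 * pad + 1, center=True, min_periods=1).max().astype(bool)
sel = ser_tmp[keep]

# Plot by position, not by date, so the long all-zero stretch collapses
fig, ax = plt.subplots()
ax.plot(range(len(sel)), sel.to_numpy())
ax.set_xticks(range(len(sel)))
ax.set_xticklabels(sel.index.strftime('%m-%d'), rotation=90)
fig.tight_layout()
</code></pre>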
|
<python><pandas><plot>
|
2024-10-27 08:58:58
| 2
| 809
|
Lei Hao
|
79,130,143
| 1,606,657
|
Python retrieve Google Keep data of personal account
|
<p>I'd like to build an app that interacts with my Google keep data (edit, add, delete).
I have created a new Google Cloud project and enabled the Google Keep API as well as created a new service account for accessing the API.</p>
<p>I've created credentials for the service account and downloaded them. I have created the below code to call the API with the service account but obviously it will not return anything as it is accessing it's own data.</p>
<pre><code>from google.oauth2 import service_account
from googleapiclient.discovery import build
SCOPES = ['https://www.googleapis.com/auth/keep']
SERVICE_ACCOUNT_FILE = 'credentials.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE,
    scopes=SCOPES
)
service = build("keep", "v1", credentials=credentials)
l = service.notes().list().execute()
print(l)
</code></pre>
<p>I have tried adding the service account email address as a collaborator on one of my notes in Google keep but the above code still returns only an empty dict, and after refreshing Keep the added service account also vanishes as a collaborator, so I suspect it's not possible to add them there.</p>
<p>Does anyone know if this is possible at all, or how to perform Google Keep operations on personal accounts via APIs?</p>
<p>Edit:
I have tried using OAuth credentials as well, but there doesn't seem to be any Keep API scopes available for it.</p>
|
<python><google-keep-api>
|
2024-10-27 08:11:14
| 1
| 6,352
|
wasp256
|
79,130,069
| 671,013
|
"json_schema_extra" and Pylance/Pyright issues
|
<p>I need to add metadata to fields of a <code>pydantic</code> model in a way that I could change the metadata. I ended up with the following solution:</p>
<pre class="lang-py prettyprint-override"><code>class Foo(BaseModel):
    a: str = Field(
        meta_field=("some extra data a"),  # pyright: ignore
    )
    b: str = Field(
        meta_field=("some extra data b"),  # pyright: ignore
    )
    c: str = Field(
        meta_field=("some extra data c"),  # pyright: ignore
    )

    @classmethod
    def summarize_meta_fields(cls, **kwargs) -> dict[str, str]:
        schema = cls.model_json_schema()
        return {
            k: schema["properties"][k]["meta_field"] for k in schema["properties"].keys()
        }


def configure_meta_data(**kwargs) -> None:
    for k in kwargs:
        if k not in Foo.model_fields:
            raise ValueError(f"Field {k} not found in Foo model")
        Foo.model_fields[k].json_schema_extra["meta_field"] = kwargs[k]
</code></pre>
<p>My problem is that in VScode, I get the following error:</p>
<p><a href="https://i.sstatic.net/8MshlpvT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MshlpvT.png" alt="screenshot of the error" /></a></p>
<p>with the following text:</p>
<pre><code>Object of type "(JsonDict) -> None" is not subscriptablePylancereportIndexIssue
Object of type "None" is not subscriptablePylancereportOptionalSubscript
</code></pre>
<p>How can I refactor the code and mitigate this warning? Or should I simply ignore it as the code behaves as I expect.</p>
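<p>For reference, the refactor I'm experimenting with (a sketch): declare the metadata through <code>json_schema_extra</code> explicitly so that <code>Field()</code> is typed cleanly, and narrow the <code>dict | callable | None</code> union with an <code>isinstance</code> check before subscripting, which should satisfy pyright without any <code># pyright: ignore</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field


class Foo(BaseModel):
    a: str = Field(json_schema_extra={"meta_field": "some extra data a"})
    b: str = Field(json_schema_extra={"meta_field": "some extra data b"})


def configure_meta_data(**kwargs) -> None:
    for k, v in kwargs.items():
        if k not in Foo.model_fields:
            raise ValueError(f"Field {k} not found in Foo model")
        extra = Foo.model_fields[k].json_schema_extra
        # json_schema_extra is typed dict | callable | None; isinstance
        # narrowing removes both pyright complaints
        if not isinstance(extra, dict):
            raise TypeError(f"Field {k} has no dict-style json_schema_extra")
        extra["meta_field"] = v


configure_meta_data(a="new value")
</code></pre>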
|
<python><python-typing><pydantic><pyright>
|
2024-10-27 07:09:38
| 1
| 13,161
|
Dror
|
79,129,869
| 6,197,439
|
Stretchable QLinearGradient as BackgroundRole for resizeable QTableView cells in PyQt5?
|
<p>Consider this example, where I want to apply a "vertical" background to all cells in the third column of the table:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5 import QtCore, QtGui, QtWidgets
from PyQt5.QtCore import (Qt, QPointF)
from PyQt5.QtGui import (QColor, QGradient, QLinearGradient, QBrush)


# starting point from https://www.pythonguis.com/tutorials/qtableview-modelviews-numpy-pandas/
class TableModel(QtCore.QAbstractTableModel):
    def __init__(self, data):
        super(TableModel, self).__init__()
        self._data = data
        self.bg_col1 = QColor("#A3A3FF")
        self.bg_col2 = QColor("#FFFFA3")
        self.bg_grad = QLinearGradient(QPointF(0.0, 0.0), QPointF(0.0, 1.0))  # setcolor 0 on top, 1 on bottom
        self.bg_grad.setCoordinateMode(QGradient.ObjectMode)  # StretchToDeviceMode) #ObjectBoundingMode)
        self.bg_grad.setSpread(QGradient.PadSpread)  # RepeatSpread) # PadSpread (default)
        self.bg_grad.setColorAt(0.0, self.bg_col1)
        self.bg_grad.setColorAt(1.0, self.bg_col2)
        self.bg_grad_brush = QBrush(self.bg_grad)

    def data(self, index, role):
        if role == Qt.DisplayRole:
            # See below for the nested-list data structure.
            # .row() indexes into the outer list,
            # .column() indexes into the sub-list
            return self._data[index.row()][index.column()]
        if role == Qt.BackgroundRole:
            if index.column() == 2:
                return self.bg_grad_brush

    def rowCount(self, index):
        # The length of the outer list.
        return len(self._data)

    def columnCount(self, index):
        # The following takes the first sub-list, and returns
        # the length (only works if all rows are an equal length)
        return len(self._data[0])


class MainWindow(QtWidgets.QMainWindow):
    def __init__(self):
        super().__init__()
        self.table = QtWidgets.QTableView()
        data = [
            [4, 9, 2],
            [1, 0, 0],
            [3, 5, 0],
            [3, 3, 2],
            [7, 8, 9],
        ]
        self.model = TableModel(data)
        self.table.setModel(self.model)
        self.setCentralWidget(self.table)


app = QtWidgets.QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec_()
</code></pre>
<p>When I run this code, first I get this drawing:</p>
<p><a href="https://i.sstatic.net/KnE9jt7G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnE9jt7G.png" alt="table first" /></a></p>
<p>Only the cell in the first row shows the gradient as I had imagined it; all the others below it appear with flat colors.</p>
<p>I can notice, that if I resize the second row height, there is a gradient in the related cell there, but it is somehow wrong - it does not stretch proportionally to the bounds of the cell:</p>
<p><a href="https://i.sstatic.net/kZlTYxAb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZlTYxAb.png" alt="table second" /></a></p>
<p>If I afterwards resize the height of row 1, then I can see that the cell with background in that row "stretches" the background gradient according to my expectations - while in the meantime, the cell below it lost the gradient it showed previously:</p>
<p><a href="https://i.sstatic.net/wf4SRoY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wf4SRoY8.png" alt="table third" /></a></p>
<p>What can I do, so that all cells with a gradient BackgroundRole behave the same as the cell in the first row (full gradient shown and stretched according to cell size), and do not "lose" their gradient rendering if another cell changes size?</p>
|
<python><pyqt5><qt5>
|
2024-10-27 03:50:50
| 1
| 5,938
|
sdbbs
|
79,129,809
| 9,632,470
|
Use Beautiful Soup to count Title/Links
|
<p>I am attempting to write a code that keeps track of the text for the links in the left handed gray box on <a href="https://www.mountainproject.com/area/109928429/aasgard-sentinel" rel="nofollow noreferrer">this</a> webpage. In this case the code should return</p>
<p>Valykrie, The<br />
Acid Baby</p>
<p>Here is the code I am trying to use:</p>
<pre><code>import requests
from bs4 import BeautifulSoup

url = 'https://www.mountainproject.com/area/109928429/aasgard-sentinel'
page = requests.get(url)
soup = BeautifulSoup(page.text, "html.parser")

for link in soup.findAll('a', class_='new-indicator'):
    print(link)
</code></pre>
<p>It is not working (otherwise I wouldn't be here!). I'm pretty new to BeautifulSoup, and to coding in general. No matter how much I inspect the page source, I can't seem to figure out the inputs to <code>findAll</code> to get it to return what I want!</p>
|
<python><web-scraping><beautifulsoup>
|
2024-10-27 02:29:40
| 4
| 441
|
Prince M
|
79,129,734
| 11,501,160
|
Manipulate rotation of image using im2pdf. Parameter doesn't work as expected?
|
<p>I am trying to take images and convert them to PDFs. However, when an image has a rotation value in its EXIF data, my PDF page also ends up with a rotation value. Instead of having the rotation value on the PDF page, I want the image itself to be rotated as it is meant to be viewed based on its EXIF data, with no rotation value on the PDF page.</p>
<p>How do I accomplish this? I'm hoping to be able to do this only using img2pdf.</p>
<p>I am using this command:</p>
<pre><code>pdf_bytes = img2pdf.convert(io.BytesIO(response.content), rotation=img2pdf.Rotation.ifvalid, first_frame_only=True)
</code></pre>
<p>And the input file is :
<a href="https://drive.google.com/file/d/1JyCBl5ulQmKIdQnVsvbNo7oNHX7oGIda/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1JyCBl5ulQmKIdQnVsvbNo7oNHX7oGIda/view?usp=sharing</a></p>
<p>The input file has a 90CW rotation as per exif data.</p>
<p>However, whether I generate the PDF using <code>rotation=img2pdf.Rotation.ifvalid</code> or not, the generated PDF is the same. Why does it not make any difference?</p>
<p>My goal is for the image to be rotated as per its EXIF data, and for the PDF page to have no rotation value.
I am checking the page rotation of the output using this code:</p>
<pre><code>from pypdf import PdfReader

with open('abc.pdf', 'rb') as input_pdf:
    reader = PdfReader(input_pdf)
    # Iterate through each page and get the rotation value
    for page_num, page in enumerate(reader.pages):
        # Get the rotation value of the page
        rotation_value = page.get('/Rotate') or 0  # Default to 0 if not rotated
        print(f"Page {page_num + 1} is rotated by {rotation_value} degrees.")
</code></pre>
<p>Output:</p>
<pre><code>Page 1 is rotated by 90 degrees.
</code></pre>
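<p>The fallback I may resort to if img2pdf alone can't do it (a sketch; the generated demo image here stands in for my real downloaded JPEG): use Pillow's <code>ImageOps.exif_transpose</code> to bake the EXIF orientation into the pixels, then convert the re-saved bytes, which carry no Orientation tag, so no <code>/Rotate</code> entry should be emitted:</p>
<pre><code>import io

import img2pdf
from PIL import Image, ImageOps

# demo input standing in for response.content
src = io.BytesIO()
Image.new("RGB", (80, 40), "white").save(src, format="JPEG")

with Image.open(src) as im:
    im = ImageOps.exif_transpose(im)  # physically rotate pixels per EXIF Orientation
    out = io.BytesIO()
    im.save(out, format="JPEG")       # re-saved image has no Orientation tag

pdf_bytes = img2pdf.convert(out.getvalue())
</code></pre>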
|
<python><pdf><img2pdf>
|
2024-10-27 00:59:33
| 1
| 305
|
Zain Khaishagi
|
79,129,491
| 4,701,426
|
Writing interdependent if else statements?
|
<p>Is there any advantage to one of these over the other? Also, is there any better code than these to achieve the goal? My intuition is that in number 2, since it has already checked for x or y, the check for y is more efficient?</p>
<ol>
<li>
<pre><code>if x or y:
    do some stuff
if y:
    do some OTHER stuff
</code></pre>
</li>
<li>
<pre><code>if x or y:
    do some stuff
    if y:
        do some OTHER stuff
</code></pre>
</li>
</ol>
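<p>A brute-force check over all four truth combinations (a small sketch I wrote to convince myself the two shapes really are equivalent, since <code>y</code> implies <code>x or y</code>):</p>
<pre><code>def flat(x, y):
    log = []
    if x or y:
        log.append("stuff")
    if y:
        log.append("other stuff")
    return log


def nested(x, y):
    log = []
    if x or y:
        log.append("stuff")
        if y:  # y implies (x or y), so nesting skips one redundant test
            log.append("other stuff")
    return log


for x in (False, True):
    for y in (False, True):
        assert flat(x, y) == nested(x, y)
</code></pre>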
|
<python>
|
2024-10-26 21:26:56
| 2
| 2,151
|
Saeed
|
79,129,467
| 203,204
|
How to create Python `datetime` in a specific timezone?
|
<p>Input:</p>
<pre><code>import pytz
from datetime import datetime as dt
targettz = pytz.timezone('America/New_York')
d1 = dt(2024, 1, 1, 7, 40, 0, tzinfo=targettz)
d1.isoformat()
</code></pre>
<p>Output:</p>
<pre><code>'2024-01-01T07:40:00-04:56'
</code></pre>
<p>Why <code>'-04:56'</code> but not <code>'-04:00'</code>?</p>
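<p>For comparison, the two variants I would expect to give a whole-hour offset (pytz's documented <code>localize()</code> pattern, and the stdlib <code>zoneinfo</code> module, whose tzinfo objects are safe to pass directly). As far as I can tell, the <code>-04:56</code> above is the zone's old local mean time, which pytz falls back to when <code>tzinfo=</code> is set directly in the constructor:</p>
<pre><code>from datetime import datetime as dt
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

import pytz

targettz = pytz.timezone('America/New_York')

# pytz way: never pass a pytz zone as tzinfo=; let localize() pick the offset
d_pytz = targettz.localize(dt(2024, 1, 1, 7, 40, 0))

# modern way: zoneinfo objects handle this correctly in the constructor
d_zi = dt(2024, 1, 1, 7, 40, 0, tzinfo=ZoneInfo('America/New_York'))

print(d_pytz.isoformat())  # 2024-01-01T07:40:00-05:00
print(d_zi.isoformat())    # 2024-01-01T07:40:00-05:00
</code></pre>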
|
<python><python-3.x><python-datetime>
|
2024-10-26 21:09:56
| 1
| 12,842
|
Anthony
|
79,129,317
| 4,752,874
|
Python How to Select to List All DataFrame Rows where Column has a NaN Entry
|
<p>I have a DataFrame (20k rows) with 2 columns I would like to update if the first column (latitude) row entry is NaN. I wanted to use the code below as it might be a fast way of doing it, but I'm not sure how to update this line <code>msk = [isinstance(row, float) for row in df['latitude'].tolist()]</code> to get the rows that are NaN only. The latitude column I am doing the check on is float, so this line of code returns all rows.</p>
<pre><code>def boolean_mask_loop(df):
    msk = [isinstance(row, float) for row in df['latitude'].tolist()]
    out = []
    for target in df.loc[msk, 'address'].tolist():
        dict_temp = geocoding(target)
        out.append([dict_temp['lat'], dict_temp['long']])
    df.loc[msk, ['latitude', 'longitude']] = out
    return df
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
<th>address</th>
<th>latitude</th>
<th>longitude</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>addr1</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>2</td>
<td>addr2</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>3</td>
<td>addr3</td>
<td>40.7526</td>
<td>-74.0016</td>
</tr>
</tbody>
</table></div>
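<p>For illustration, here's the shape of mask I'm trying to produce, with a stubbed-out <code>geocoding()</code> (the stub and its return values are placeholders). The catch I suspect is that NaN <em>is</em> a float, so <code>isinstance(row, float)</code> matches every row, whereas <code>Series.isna()</code> matches only the missing ones:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2, 3],
    "address": ["addr1", "addr2", "addr3"],
    "latitude": [np.nan, np.nan, 40.7526],
    "longitude": [np.nan, np.nan, -74.0016],
})

# NaN is itself a float, so isinstance(row, float) is True for every row;
# isna() flags only the genuinely missing entries
msk = df["latitude"].isna()


def geocoding(target):  # placeholder for the real geocoder
    return {"lat": 1.0, "long": 2.0}


out = [[d["lat"], d["long"]] for d in (geocoding(t) for t in df.loc[msk, "address"])]
df.loc[msk, ["latitude", "longitude"]] = out
</code></pre>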
|
<python><dataframe><for-loop>
|
2024-10-26 19:21:28
| 1
| 349
|
CGarden
|
79,129,205
| 16,869,946
|
Applying scipy.minimize to Pandas dataframe with parameters
|
<p>I have a function defined by f(x_0, x_1) = a(x_1 - x_0^2)^2 + (b - x_0)^2, where a and b are some parameters:</p>
<pre><code>def f(x):
    return a*(x[1]-x[0]**2)**2+(b-x[0])**2
</code></pre>
<p>where <code>x=np.array([x_0,x_1])</code> is a numpy array. Then the gradient and the Hessian of f are both easy to find and are given by</p>
<pre><code>def f_der(x):
    return np.array([-4*a*x[0]*(x[1]-x[0]**2)-2*(b-x[0]), 2*a*(x[1]-x[0]**2)])

def f_hess(x):
    a_11 = 12*a*x[0]**2 - 4*a*x[1] + 2
    a_12 = -4*a*x[0]  # d²f/(dx_0 dx_1) = -4*a*x_0
    a_21 = -4*a*x[0]
    a_22 = 2*a
    return np.array([[a_11, a_12], [a_21, a_22]])
</code></pre>
<p>Now I have a Pandas dataframe <code>df</code> that records different values of a, b:</p>
<pre><code>a b
1 2
3 7
4 12
</code></pre>
<p>and I would like to create two new columns called <code>x_0*</code> and <code>x_1*</code> which are given by the values of x_0 and x_1 respectively, which minimizes f, subject to the constraints:</p>
<pre><code>0 <= x_0 + ax_1 <= 1
0 <= ax_0 - x_1 <= b
0 <= x_0 <= 1
0 <= x_1 <= b
</code></pre>
<p>I know about the scipy optimize package and I can do it with one pair of a, b at a time:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import scipy
from scipy.optimize import minimize
from scipy.optimize import Bounds
from scipy.optimize import LinearConstraint
a = 1
b = 2
bounds = Bounds([0, 1], [a, b])
linear_constraint = LinearConstraint([[0, a], [a, -1]], [0, 0], [1, b])
res = minimize(f, np.array([0,0]), method='trust-constr',
               jac=f_der, hess=f_hess,
               constraints=linear_constraint,
               options={'verbose': 1}, bounds=bounds)
res.x
</code></pre>
<p>But my original dataframe is large (~100,000 rows), so I would like to ask if there is a way to apply the minimizer to the whole dataframe at once and do it quickly. The desired outcome looks like this:</p>
<pre><code>a b x_0* x_1*
1 2 1.00000001 0.99999999
3 7 1.12649598 0.4
4 12 1.19393107 0.29411765
</code></pre>
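<p>Each row is an independent optimization problem, so there is no single vectorized <code>minimize</code> call; one straightforward pattern is a per-row solve via <code>apply</code>. A sketch under assumptions: the constraint matrix <code>[[1, a], [a, -1]]</code> follows the inequalities as written above (the question's own code uses <code>[[0, a], ...]</code>), and the SLSQP method and feasible starting point are illustrative choices, so the numbers may differ slightly from the desired outcome shown:</p>

```python
import numpy as np
import pandas as pd
from scipy.optimize import Bounds, LinearConstraint, minimize

def solve_row(row):
    a, b = row["a"], row["b"]
    f = lambda x: a*(x[1] - x[0]**2)**2 + (b - x[0])**2
    bounds = Bounds([0, 0], [1, b])                        # 0<=x_0<=1, 0<=x_1<=b
    lin = LinearConstraint([[1, a], [a, -1]], [0, 0], [1, b])
    res = minimize(f, np.array([0.1, 0.1]), method="SLSQP",
                   bounds=bounds, constraints=[lin])
    return pd.Series({"x_0*": res.x[0], "x_1*": res.x[1]})

df = pd.DataFrame({"a": [1, 3, 4], "b": [2, 7, 12]})
df = df.join(df.apply(solve_row, axis=1))
print(df)
```

<p>At ~100,000 rows the cost is dominated by the solver itself, not the <code>apply</code> overhead, so the usual speed-ups are passing an analytic <code>jac=</code> and splitting the rows across processes with <code>multiprocessing.Pool</code>.</p>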
|
<python><pandas><numpy><scipy><scipy-optimize-minimize>
|
2024-10-26 18:22:53
| 1
| 592
|
Ishigami
|
79,129,144
| 12,415,855
|
Can't install shazamio on macOS?
|
<p>I'm trying to install the Python shazamio package using the following command:</p>
<pre><code>pip install shazamio
</code></pre>
<p>This works fine on my Windows computer, but on my Mac the installation fails with this error:</p>
<pre><code> Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
💥 maturin failed
Caused by: Can't find /private/var/folders/q1/288l_zvn2sl7_lx4cv34yz3m0000gn/T/pip-install-ncjk92fq/shazamio-core_d948f73e48d346ff9dec627058e2b80c/Cargo.toml (in /private/var/folders/q1/288l_zvn2sl7_lx4cv34yz3m0000gn/T/pip-install-ncjk92fq/shazamio-core_d948f73e48d346ff9dec627058e2b80c)
Error running maturin: Command '['maturin', 'pep517', 'write-dist-info', '--metadata-directory', '/private/var/folders/q1/288l_zvn2sl7_lx4cv34yz3m0000gn/T/pip-modern-metadata-lcpjwpc7', '--interpreter', '/Users/polzimac/Documents/DEV/venv/shazamio/bin/python3']' returned non-zero exit status 1.
Checking for Rust toolchain....
Running `maturin pep517 write-dist-info --metadata-directory /private/var/folders/q1/288l_zvn2sl7_lx4cv34yz3m0000gn/T/pip-modern-metadata-lcpjwpc7 --interpreter /Users/polzimac/Documents/DEV/venv/shazamio/bin/python3`
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>I have already installed Rust using the following statement:</p>
<pre><code>curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
</code></pre>
|
<python><rust><shazam>
|
2024-10-26 17:48:45
| 1
| 1,515
|
Rapid1898
|
79,128,856
| 1,880,405
|
Building a statically-linked pysqlite3 library
|
<p>I am trying to compile SQLite3 and import it into Python 3 by following <a href="https://github.com/coleifer/pysqlite3?tab=readme-ov-file#building-a-statically-linked-library" rel="nofollow noreferrer">Building a statically-linked library</a>, but I can't get it to work. I have compiled from source, copied the .c and .h files, and run <code>python setup.py build_static build</code>, yet the import still fails.</p>
<pre><code>❯ find . -name "_sqlite3*.so"
./build/lib.macosx-14.0-arm64-cpython-312/pysqlite3/_sqlite3.cpython-312-darwin.so
❯ python3 -c "import sys; print(sys.path)"
['', '/opt/homebrew/Cellar/python@3.12/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python312.zip', '/opt/homebrew/Cellar/python@3.12/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12', '/opt/homebrew/Cellar/python@3.12/3.12.6/Frameworks/Python.framework/Versions/3.12/lib/python3.12/lib-dynload', '/Users/myusername/Library/Python/3.12/lib/python/site-packages', '/opt/homebrew/lib/python3.12/site-packages', '/opt/homebrew/lib/python3.12/site-packages/pysqlite3-0.5.4-py3.12-macosx-14.0-arm64.egg', '/opt/homebrew/opt/python-tk@3.12/libexec']
</code></pre>
<p>However, then trying to import it does not work:</p>
<pre><code>from pysqlite3 import dbapi2 as sqlite3
Traceback (most recent call last):
File "/Users/username/Sandbox/sqlite3ttt/pysqlite3/a.py", line 1, in <module>
from pysqlite3 import dbapi2 as sqlite3
File "/Users/username/Sandbox/sqlite3ttt/pysqlite3/pysqlite3/__init__.py", line 23, in <module>
from pysqlite3.dbapi2 import *
File "/Users/username/Sandbox/sqlite3ttt/pysqlite3/pysqlite3/dbapi2.py", line 28, in <module>
from pysqlite3._sqlite3 import *
ModuleNotFoundError: No module named 'pysqlite3._sqlite3'
</code></pre>
|
<python><macos>
|
2024-10-26 15:25:00
| 1
| 432
|
estranged
|
79,128,840
| 53,468
|
How to define the method signature in a subclass
|
<p>I'm implementing a repository pattern as an exercise in typing.</p>
<p>I have a couple of unrelated SQLAlchemy models:</p>
<pre><code>class Base(MappedAsDataclass, DeclarativeBase):
    id: Mapped[primary_key] = mapped_column(init=False)

default_string = Annotated[str, mapped_column(String(100))]

class User(Base):
    __tablename__ = "user"
    name: Mapped[default_string]

class Sample(Base):
    __tablename__ = "sample"
    value: Mapped[default_string]
    location: Mapped[default_string]
</code></pre>
<p>I want to create a <code>Repository</code> for each to have a uniform way to interact and query, and I want to use typing to hint the usage.</p>
<p>Say:</p>
<pre><code>T = TypeVar("T", bound=Base)

class Repository(ABC, Generic[T]):
    def __init__(self, model: type[T]):
        self.db = SessionLocal()
        self.model = model

    def list(self, skip: int = 0, limit: int = 100):
        return self.db.query(self.model).offset(skip).limit(limit).all()
</code></pre>
<p>So I use them as:</p>
<pre><code>class SampleRepository(Repository[Sample]):
    def get_by_name(self, name: str) -> list[Sample]:
        return self.model.query.filter(name=name).all()

repository = SampleRepository(Sample)
</code></pre>
<p>Now, the thing is, <code>repository.create()</code> is quite a convoluted piece of logic, and I'd need different signatures for the different model arguments. I thought of using dict unpacking in this case:</p>
<pre><code>class Repository(ABC, Generic[T]):
    ...

    def create(self, **data) -> T:
        instance = self.model(**data)
        self.db.add(instance)
        self.db.commit()
        self.db.refresh(instance)
        return instance
</code></pre>
<p>But If I try to overload this method in the <code>SampleRepository</code> as such:</p>
<pre><code>class SampleRepository(Repository[Sample]):
    def create(self, name: str) -> Sample:
        return super().create(
            name=name,
        )
</code></pre>
<p>This breaks, as the signature of "create" is incompatible with supertype "Repository".</p>
<p>Is there a way to achieve this, or am I asking too much of the type system?</p>
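<p>The checker's complaint is the Liskov substitution principle: an override that accepts fewer keyword arguments than the base's <code>**data</code> is unsafe. One common workaround is to keep the generic <code>create</code> untouched and add a differently named, fully typed entry point per repository. A minimal sketch with plain classes (SQLAlchemy and the session are stripped out; the names are illustrative):</p>

```python
from typing import Any, Generic, TypeVar

class Base:  # stand-in for the SQLAlchemy declarative base
    pass

class Sample(Base):
    def __init__(self, value: str, location: str) -> None:
        self.value = value
        self.location = location

T = TypeVar("T", bound=Base)

class Repository(Generic[T]):
    def __init__(self, model: type[T]) -> None:
        self.model = model

    def create(self, **data: Any) -> T:
        # Generic path; its signature stays identical in every subclass.
        return self.model(**data)

class SampleRepository(Repository[Sample]):
    # A *new* name means no override, so no LSP violation to flag.
    def create_sample(self, value: str, location: str) -> Sample:
        return self.create(value=value, location=location)

repo = SampleRepository(Sample)
obj = repo.create_sample("v", "loc")
print(obj.value, obj.location)  # v loc
```

<p>If you really want the name <code>create</code> in each subclass, the alternatives are less clean: keep <code>**data: Any</code> everywhere, or silence the checker with <code># type: ignore[override]</code> on each narrowed override.</p>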
|
<python><python-typing><abc>
|
2024-10-26 15:19:28
| 1
| 3,680
|
tutuca
|
79,128,726
| 5,928,577
|
Segmentation Fault when Running pip install with Pyenv on macOS M1
|
<p>I’m encountering a segmentation fault while trying to install Python packages with pip inside a pyenv-managed environment on my macOS M1 system.</p>
<p>Steps I Followed:</p>
<p><code>brew install pyenv</code></p>
<p><code>pyenv install 3.10.0</code></p>
<p><code>pyenv global 3.10.0</code></p>
<p><code>pip install transformers</code></p>
<p>The error I am getting:
<code>/opt/homebrew/Cellar/pyenv/2.4.16/pyenv.d/exec/pip-rehash/pip: line 20: 20306 Segmentation fault: 11 "$PYENV_COMMAND_PATH" "$@"</code></p>
<p>I tried reinstalling pyenv and python, but it didn't work.</p>
|
<python><macos><installation><pip>
|
2024-10-26 14:19:01
| 3
| 2,740
|
Divyesh Savaliya
|
79,128,612
| 8,519,830
|
Pandas precision not working for last dataframe column
|
<p>I have an issue with pandas DataFrame printing: the last column is not printed with the requested precision. How can I fix this?</p>
<p>Here is the short code snippet of the printout:</p>
<pre><code>print(dfy)
print(dfy.dtypes)
with pd.option_context('display.precision', 0):
    print(dfy)
</code></pre>
<p>The printout from the code snippet:</p>
<pre><code> gas prev delta%
2023-10-01 818.851398 NaN NaN
2023-11-01 2009.784005 1755.768035 14.467513
2023-12-01 2304.134123 2479.160200 -7.059894
2024-01-01 2647.367761 2524.686911 4.859250
2024-02-01 1685.694664 2070.363903 -18.579789
2024-03-01 1588.714377 1840.684792 -13.688950
2024-04-01 1376.973210 1385.102980 -0.586943
2024-05-01 605.798978 706.870375 -14.298434
2024-06-01 117.488287 155.409729 -24.400945
2024-07-01 163.644399 133.852727 22.257053
2024-08-01 129.027315 121.264696 6.401384
2024-09-01 625.730027 198.051683 215.942797
gas float64
prev float64
delta% float64
dtype: object
gas prev delta%
2023-10-01 819 NaN NaN
2023-11-01 2010 1756 1e+01
2023-12-01 2304 2479 -7e+00
2024-01-01 2647 2525 5e+00
2024-02-01 1686 2070 -2e+01
2024-03-01 1589 1841 -1e+01
2024-04-01 1377 1385 -6e-01
2024-05-01 606 707 -1e+01
2024-06-01 117 155 -2e+01
2024-07-01 164 134 2e+01
2024-08-01 129 121 6e+00
2024-09-01 626 198 2e+02
</code></pre>
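<p>A likely explanation, hedged: <code>display.precision</code> controls significant digits, and pandas still switches a float column to scientific notation when its values span several orders of magnitude (the <code>delta%</code> column runs from ~0.6 to ~216). Setting <code>display.float_format</code> instead forces fixed-point output for every float column. A sketch on a reduced version of the data:</p>

```python
import numpy as np
import pandas as pd

dfy = pd.DataFrame({
    "gas": [818.851398, 2009.784005],
    "prev": [np.nan, 1755.768035],
    "delta%": [np.nan, 14.467513],
})

# Fixed-point with 0 decimals -- no scientific notation in any column.
with pd.option_context("display.float_format", "{:.0f}".format):
    print(dfy)
```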
|
<python><pandas><dataframe>
|
2024-10-26 13:08:30
| 1
| 585
|
monok
|
79,128,153
| 9,686,427
|
Visual Studio Code displaying "..." instead of the output in .ipynb files
|
<p>When running code in cells of an <code>.ipynb</code> file in Visual Studio Code, I often encounter the issue of VS Code not showing any output, but rather just three gray dots:</p>
<p><a href="https://i.sstatic.net/wjhIsWIY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wjhIsWIY.png" alt="enter image description here" /></a></p>
<p>Pressing on said dots gives two options:</p>
<p><a href="https://i.sstatic.net/jCI9dVFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jCI9dVFd.png" alt="enter image description here" /></a></p>
<p>If I copy the output and paste it somewhere, I do indeed get 'Anything' (or whatever the output was supposed to be) i.e. the output <em>is</em> there, it's just not visible. Reloading VSC can solve the issue, but it's rather tiring and I was wondering if there is a way to make it stop.</p>
|
<python><visual-studio-code><jupyter-notebook><output>
|
2024-10-26 08:55:08
| 0
| 484
|
Sam
|
79,128,076
| 1,802,693
|
Calling async code from a synchronous function in Python
|
<p>This problem is probably not complicated, but I'm new to the asyncio library and I just can't figure this out.</p>
<p>Let's say I have a synchronous function from which I want to call an async function without transforming the synchronous function into an asynchronous one.
The synchronous function needs to wait for the async function to complete and return the value it provides. It's fine if it blocks execution in the meantime.
All of this is necessary because the async function comes from a library.</p>
<p>How can I retrieve the result computed by the async function?</p>
<ul>
<li>There is already a running event loop, and I have a reference to it.</li>
<li>They are running on the same thread, so I don't need another thread.</li>
<li>I can't use run_until_complete() because it somehow conflicts with the already running event loop.</li>
</ul>
<pre><code>import asyncio

class Sample:
    def __init__(self):
        self.state = self.compute_sync()  # I want to call async from this constructor

    async def io_function(self):
        print("started")
        await asyncio.sleep(1)  # stand-in for the async library call
        print("ended")
        return 'io_result'

    def compute_sync(self):
        io_result = self.io_function()  # I want the result of this without using await
        return 'local_data' + io_result

async def main():
    await asyncio.sleep(1)  # other IO stuff
    s = Sample()
    print(s)

asyncio.run(main())
</code></pre>
<p>(The code above is extremely simplified and may not reflect the real use case, but the comments should help show what I need to do here.)</p>
<ul>
<li>I'm looking for answers with the newest version of Python (which currently is 3.13).</li>
<li>I don't want to use an external async library if possible.</li>
</ul>
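<p>With the event loop already running on the same thread, truly blocking on a coroutine from sync code would deadlock that loop. One workaround, sketched below under the question's "blocking is fine" assumption: run the coroutine to completion on a fresh event loop in a worker thread and block on its future, using only the stdlib:</p>

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

async def io_function():
    await asyncio.sleep(0.01)        # stand-in for the library's async call
    return "io_result"

def compute_sync():
    # Run the coroutine on a *fresh* loop in a worker thread and block
    # the calling (sync) code until it finishes.
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(asyncio.run, io_function())
        return "local_data:" + future.result()

async def main():
    print(compute_sync())            # works even while main()'s loop is running

asyncio.run(main())
```

<p>Caveat: the outer loop is frozen while waiting, and if the library's objects are bound to the already running loop, a second loop won't work; restructuring around <code>await</code> is then the only clean option.</p>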
|
<python><asynchronous><python-asyncio>
|
2024-10-26 08:06:34
| 2
| 1,729
|
elaspog
|
79,127,868
| 1,880,405
|
How do I instruct Python to use my own compiled SQLite3?
|
<p>I have compiled my own version of SQLite3 from the <a href="https://www.sqlite.org/download.html" rel="nofollow noreferrer">amalgamation tarball on the download</a> page and would like to use it from my Python 3 script on macOS. Is there a way to do that via <code>import sqlite3</code>? That is, is it possible to instruct Python to use my own version?</p>
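<p>Background, hedged: <code>import sqlite3</code> uses whatever libsqlite3 the <code>_sqlite3</code> C extension was linked against when that Python was built, so the first step is usually checking which library version and extension file are actually in use; swapping it out typically means rebuilding the extension (or a package like pysqlite3) against your amalgamation. A diagnostic sketch:</p>

```python
import sqlite3
import _sqlite3  # the C extension behind the sqlite3 module

# Version of the SQLite C library Python is actually linked against:
print(sqlite3.sqlite_version)

# Location of the extension module; on macOS, running `otool -L` on this
# file shows which libsqlite3 dylib it loads (if it isn't statically linked).
print(getattr(_sqlite3, "__file__", "statically linked into the interpreter"))
```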
|
<python><macos><sqlite3-python>
|
2024-10-26 05:15:42
| 1
| 432
|
estranged
|
79,127,802
| 3,657,298
|
Extracting a PDF Page With pypdf Consistently Creates PDF Without Any Pages
|
<p>I am trying to programmatically split a PDF containing multiple articles into one PDF per article. The read and page extraction appear to work and the file is created, but it is only 311 bytes and, according to Adobe Reader, contains PDF header information without any pages.</p>
<p>I created a new one-page PDF that is about 132KB and a simple test program. The length of text looks correct but the output PDF is again only 311 bytes.</p>
<pre><code>from pypdf import PdfReader, PdfWriter
input_pdf = PdfReader('testpdf.pdf')
page = input_pdf.pages[0]
print(len(page.extract_text()))
output = PdfWriter()
output.add_page = page
with open('testpdf_1.pdf', 'wb') as output_stream:
    output.write(output_stream)
</code></pre>
<p>If I run the code in a python interactive session, I see:</p>
<pre><code>(False, <_io.BufferedWriter name='testpdf_1.pdf'>)
</code></pre>
<p>I am not sure whether this is an error; at least I have not been able to find what the message means.</p>
<p>I am running <code>pypdf 5.0.1</code> and <code>python 3.8.0</code> in a venv.</p>
|
<python><python-3.x><pypdf>
|
2024-10-26 04:02:14
| 1
| 498
|
user3657298
|
79,127,665
| 354,051
|
Multithreaded list comprehension in Python is not faster compared to the non-threaded version
|
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor

words = [...]  # List of ~100 words

dictionary = {  # ~20K items
    'word1': 0.12,
    'word2': 0.32,
    'word3': 0.24,
    # more words...
}

def get_word_frequency(word):
    return (word, dictionary[word]) if word in dictionary else None

def search_words(words):
    with ThreadPoolExecutor() as executor:
        results = list(executor.map(get_word_frequency, words))
    return {pair[0]: pair[1] for pair in results if pair is not None}

result = search_words(words)
</code></pre>
<p>The above code is taking almost the same time as normal list comprehension.</p>
<pre class="lang-py prettyprint-override"><code>result = [w for w in words if w in dictionary.keys()]
</code></pre>
<p>Here I'm just showing a single dictionary as an example; in the real case I have about 200 dictionaries loaded from a single JSON file. Sets perform much better than lists, but I'm not sure how to implement a set-based comprehension in the above case.</p>
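<p>For context: in CPython the GIL keeps pure-Python, CPU-bound work serialized, so a thread pool only adds overhead here. Dict membership is already an O(1) hash lookup (no set conversion needed), so a plain comprehension is usually the fastest pure-Python option. A sketch with stand-in data:</p>

```python
words = ["alpha", "beta", "gamma"]          # stand-ins for the ~100 real words
dictionary = {"alpha": 0.12, "gamma": 0.24}

# `w in dictionary` is a hash lookup; no threads, no `.keys()` call needed.
result = {w: dictionary[w] for w in words if w in dictionary}
print(result)  # {'alpha': 0.12, 'gamma': 0.24}
```

<p>For ~200 dictionaries, run the same comprehension once per dictionary; if they are static and their keys don't clash, merging them once into a single dict keeps every later lookup a single O(1) probe.</p>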
|
<python><list-comprehension><python-multithreading>
|
2024-10-26 01:45:52
| 2
| 947
|
Prashant
|
79,127,647
| 894,067
|
Tensorflow Docker Not Using GPU
|
<p>I'm trying to get Tensorflow working on my Ubuntu 24.04.1 with a GPU.</p>
<p>According to <a href="https://www.tensorflow.org/install/docker" rel="noreferrer">this page</a>:</p>
<blockquote>
<p>Docker is the easiest way to run TensorFlow on a GPU since the host machine only requires the NVIDIA® driver</p>
</blockquote>
<p>So I'm trying to use Docker.</p>
<p>I'm checking to ensure my GPU is working with Docker by running <code>docker run --gpus all --rm nvidia/cuda:12.6.2-cudnn-runtime-ubuntu24.04 nvidia-smi</code>. The output of that is:</p>
<pre><code>==========
== CUDA ==
==========
CUDA Version 12.6.2
Container image Copyright (c) 2016-2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license
A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.
Sat Oct 26 01:16:50 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 560.35.03 Driver Version: 560.35.03 CUDA Version: 12.6 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA TITAN RTX Off | 00000000:01:00.0 Off | N/A |
| 41% 40C P8 24W / 280W | 1MiB / 24576MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| No running processes found |
+-----------------------------------------------------------------------------------------+
</code></pre>
<p>(Side note, I'm not using the command they suggest because <code>docker run --gpus all --rm nvidia/cuda nvidia-smi</code> doesn't work due to <a href="https://hub.docker.com/r/nvidia/cuda/" rel="noreferrer">nvidia/cuda not having a <code>latest</code> tag anymore</a>)</p>
<p>So it looks to be working. However when I run:</p>
<pre><code>docker run --gpus all -it --rm tensorflow/tensorflow:latest-gpu \
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</code></pre>
<p>The output is:</p>
<pre><code>2024-10-26 01:20:51.021242: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1729905651.033544 1 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1729905651.037491 1 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-26 01:20:51.050486: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
W0000 00:00:1729905652.350499 1 gpu_device.cc:2344] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
[]
</code></pre>
<p>Which indicates that there is no GPU detected by Tensorflow.</p>
<p>What am I doing wrong here?</p>
|
<python><docker><tensorflow><gpu>
|
2024-10-26 01:23:52
| 1
| 20,944
|
Charlie Fish
|
79,127,603
| 3,334,721
|
Dealing with confidential modules in a Python code: are conditional imports possible?
|
<p>Let's say I have a Python codebase with a <code>src</code> source folder. Inside this folder is a subfolder <code>confidential</code> (typically a git submodule) that is confidential and that only selected users of the code can access.</p>
<p>I am trying to find a robust way to deal with this confidential part of the code. I want to make sure all users can run the code, even if they don't have the confidential source files.</p>
<p>My main file would look like this :</p>
<pre><code>try:
    from src.confidential.function import function
except ImportError:
    def function(*args):
        return 0, 0

# main code
user_can_access_confidential_code = False

a = 1
b = 2

if user_can_access_confidential_code:
    c, d = function(a, b)
</code></pre>
<p>I am not very satisfied with this way of working as it implies a potentially large try/except section should there be many confidential functions to import conditionally. In addition, I could not find a way to simplify the <code>return</code> part of the function.</p>
<p>Is there an efficient way to deal with such a conditional import? I would like to add that, though it does not appear in the MWE, the code runs with Numba.</p>
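<p>One way to keep the fallback logic out of the main file is a small helper that resolves each confidential attribute by name and returns a stub on <code>ImportError</code> (the helper name and stub values below are illustrative, not part of the question):</p>

```python
import importlib

def optional_import(module_name, attr, fallback):
    """Return module.attr if the module is importable, else the fallback."""
    try:
        return getattr(importlib.import_module(module_name), attr)
    except (ImportError, AttributeError):
        return fallback

def _function_stub(*args):
    return 0, 0

# One line per confidential symbol instead of one try/except block each:
function = optional_import("src.confidential.function", "function", _function_stub)

print(function(1, 2))  # (0, 0) when the confidential module is absent
```

<p>One caveat for the Numba case: each stub should return the same types and structure as the real function (here a 2-tuple of numbers), so that jitted call sites type-infer identically with or without the confidential submodule.</p>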
|
<python><import><conditional-statements>
|
2024-10-26 00:26:30
| 0
| 403
|
Alain
|
79,127,575
| 3,182,021
|
AttributeError in Python when running on a web server
|
<p>I have code that works well from the command line, but when it is launched by the Apache web server (via PHP and a Racket subprocess), I get this error in the server logs:</p>
<pre><code>(Traceback (most recent call last): File "/opt/homebrew/var/www/drive/cut_1D.py", line 59, in <module> from_VTK = fb.from_VTK(data_path) ^^^^^^^^^^^ AttributeError: module 'fibo' has no attribute 'from_VTK')
</code></pre>
<p>The original Python <code>cut_1D.py</code> code looks like this:</p>
<pre><code># before launch set : export PYTHONPATH=.:fibo
# python3.11 cut_1D.py data ./BepiColombo-Mio_MSO-orbit_1min_short.txt output
# added Damien Mattei
import numpy as np
import matplotlib.pyplot as plt
import fibo as fb
import sys
#from scipy.ndimage.filters import gaussian_filter as gf
# above is deprecated and will be removed
from scipy.ndimage import gaussian_filter as gf
from scipy.interpolate import RegularGridInterpolator
######################################################################
# Here you put hard-coded input parameters:
# -) tars = list of fields/scalars to cut along the trajectory
# -) cycle_min, cycle_max = limits of simulation cycle to cut
# -) word = char string to name output file
######################################################################
tars = ['B','rhoe0']
cycle_min = 5000
cycle_max = 8000
word = 'Mio'
#######################################################################
# Here you the paths to 3 files using sys
#######################################################################
data_path = sys.argv[1] # here you give path to run
traj_file = sys.argv[2] # here you give a trajectory in MSO coord
save_path = sys.argv[3] # here you give path to save extracted traj
#######################################################################
# Here you define two routines to change coordinates
# MSO means Mercury-centered Solar Orbital, ie the frame to define a
# spacecraft trajectory
#######################################################################
def coordinate_box_to_MSO(x,y,z):
    x_mso = -(x - xc)/R
    y_mso = -(y - yc)/R
    z_mso = (z - zc)/R
    return x_mso, y_mso, z_mso

def coordinate_MSO_to_box(x,y,z):
    x_box = -(x*R - xc)
    y_box = -(y*R - yc)
    z_box = (z*R + zc)
    return x_box, y_box, z_box
#######################################################################
# Here you load simulation parameters from file SimulationData.txt
# using routine in fibo/mod_from.py
#######################################################################
print()
print('Loading simulation parameters (SimulationData.txt)')
print()
from_VTK = fb.from_VTK(data_path)
from_VTK.get_meta(silent=False)
x = np.linspace(0,from_VTK.meta['xl'],from_VTK.meta['nx'])
y = np.linspace(0,from_VTK.meta['yl'],from_VTK.meta['ny'])
z = np.linspace(0,from_VTK.meta['zl'],from_VTK.meta['nz'])
nx,ny,nz = from_VTK.meta['nnn']
xc = from_VTK.meta['xc']
yc = from_VTK.meta['yc']
zc = from_VTK.meta['zc']+from_VTK.meta['Doff']
R = from_VTK.meta['R']
#######################################################################
# Here you open .txt with spacecraft trajectory and compute some stuff
#######################################################################
print()
print('Open .txt with spacecraft trajectory and compute some stuff')
print()
tt, xx, yy, zz = np.loadtxt(traj_file,unpack=True)
d = np.sqrt(xx**2+yy**2+zz**2)
i_CA = np.where(d==min(d))
tt = (tt-tt[i_CA])
Vx_array = []
Vy_array = []
Vz_array = []
xx1_box, yy1_box, zz1_box = coordinate_MSO_to_box(xx,yy,zz) # doit etre la trajectoire du spacecraft
#######################################################################
# Here you loop on the times of the simulations and on the tars to load
# the simulation data (the 3D cubes on data written in vtk), this is
# done using the routines get_vect and get_scal in fibo/mod_from.py
#######################################################################
print()
print('loop on the times of the simulations and on the tars')
print()
print('from_VTK.meta[\'segcycles\']=',from_VTK.meta['segcycles'])
print()
for seg in from_VTK.meta['segcycles']:
    print()
    print('seg=',seg)
    print()
    if cycle_min<=seg<=cycle_max :
        print('LOAD->',seg)
        for tar in tars:
            print()
            print('tar=',tar)
            print()
            # open txt output file
            file_out = open(save_path+'/trajectory-near_'+word+'_'+tar+'_'+str(seg)+'.txt','w')
            file_out.write('#index\t X_MSO\t Y_MSO\t Z_MSO\t Vx\t Vy\t Vz\n')
            # check if tar is a scalar (a bit hard-coded, can be changed)
            if (('rho' in tar) or ('T' in tar)):
                scalar=True
            else:
                scalar=False
            # load data using fibo/mod_from.py
            # and interpolate onto the box grid
            if (scalar==False):
                Vect = from_VTK.get_vect(from_VTK.meta['name']+'_'+tar+'_'+str(seg),seg,fibo_obj=None,tar_var=tar,silent=True)
                fx = RegularGridInterpolator((x,y,z), Vect[0])
                fy = RegularGridInterpolator((x,y,z), Vect[1])
                fz = RegularGridInterpolator((x,y,z), Vect[2])
            elif (scalar==True):
                Vect = from_VTK.get_scal(from_VTK.meta['name']+'_'+tar+'_'+str(seg),seg,fibo_obj=None,tar_var=tar,silent=True)
                fx = RegularGridInterpolator((x,y,z), Vect)
            # loop on trajectory points
            for ii in range(0,len(xx)):
                # print()
                # print('ii=',ii)
                # print()
                # if the traj. point is inside the box
                if((xx1_box[ii]>0)*(yy1_box[ii]>0)*(zz1_box[ii]>0)*(xx1_box[ii]<max(x))*(yy1_box[ii]<max(y))*(zz1_box[ii]<max(z))):
                    # interp field to traj. point
                    point = np.array([[xx1_box[ii],yy1_box[ii],zz1_box[ii]],[0,0,0]])
                    Vx_array.append(fx(point)[0])
                    if (scalar==False):
                        Vy_array.append(fy(point)[0])
                        Vz_array.append(fz(point)[0])
                    elif (scalar==True):
                        Vy_array.append(np.nan)
                        Vz_array.append(np.nan)
                # else point out of the box
                else:
                    Vx_array.append(np.nan)
                    Vy_array.append(np.nan)
                    Vz_array.append(np.nan)
                # write on output file
                file_out.write('%f\t%f\t%f\t%f\t%f\t%f\t%f\n'%(tt[ii],xx[ii],yy[ii],zz[ii],-Vx_array[ii],-Vy_array[ii],Vz_array[ii]))
            # close txt output file
            file_out.close()
</code></pre>
<p>The problem is at line 59, but from the command line the code used to run perfectly; I suppose some environment variable is missing...</p>
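<p>A plausible cause, hedged: the comment at the top of <code>cut_1D.py</code> requires <code>export PYTHONPATH=.:fibo</code>, which the interactive shell had but the Apache/PHP/Racket chain almost certainly does not, so <code>import fibo</code> resolves to something without <code>from_VTK</code>. The spawning side (here sketched in Python, since the real spawn happens in Racket/PHP) can set the variable explicitly:</p>

```python
import os
import subprocess
import sys

env = dict(os.environ)
# Reproduce the shell setup: current dir plus the fibo package directory.
env["PYTHONPATH"] = os.pathsep.join([".", "fibo"])

proc = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['PYTHONPATH'])"],
    env=env, capture_output=True, text=True,
)
print(proc.stdout.strip())  # ".:fibo" on POSIX
```

<p>The equivalent in Racket would be setting the environment of the subprocess it launches (or the PHP process's environment) before invoking <code>python3 cut_1D.py ...</code>.</p>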
<p>More precisely, the Python source code in the <code>fibo</code> subdirectory starts with <code>__init__.py</code>:</p>
<pre><code>
#! /bin/env python
# coding: utf8
#---------------------------------------------------------------------------------------
#-(1)-----21.12.18----FDP:S,F-----------------------------------------------------------
#-(2)-----30.12.18----FDP:S,J,R---------------------------------------------------------
#-(3)-----02.04.19----FDP:S,P,L---------------------------------------------------------
#-(4)-----04.04.19----FDP:S,J,L---------------------------------------------------------
#---------------------------------------------------------------------------------------
#-(alpha)-19.07.19----FDP:S,J,L---------------------------------------------------------
#-(beta0)-12.11.19----FDP:S,F-----------------------------------------------------------
#-(beta1)-19.11.19----FDP:S-------------------------------------------------------------
#-(beta2)-25.11.19----FDP:S,L-----------------------------------------------------------
#-(beta3)-21.03.20----FDP:S-------------------------------------------------------------
#-(gamma)-03.06.21----FDP:S,J-----------------------------------------------------------
#---------------------------------------------------------------------------------------
from mod_from import from_VTK
from mod_phybo import phybo
import mod_get
import mod_axis
import mod_extract
import mod_calc
import mod_find
import mod_comp
import mod_draw
import mod_print
import mod_extra
class fibo (mod_get.fibo_get,
            mod_axis.fibo_axis,
            mod_extract.fibo_extract,
            mod_calc.fibo_calc,
            mod_find.fibo_find,
            mod_comp.fibo_comp,
            mod_draw.fibo_draw,
            mod_print.fibo_print,
            mod_extra.fibo_extra):
    """
    fibo is a python object designed to contain your simulation data and perform automatically simple operations on it
    all functions are designed for data arrays of the form [nx,ny,nz] (no other indices, please - this is done to keep routines light and free from for cycles)
    [fibo.data] means [str] in data or [np.ndarray(nx,ny,nz)]
    """

    def __init__(self,
                 fibo_name):  #name of the set of data on which you work (part of simulation)
        self.fibo_name = str(fibo_name)
        self.data = {}  #dict of np.ndarray(nx,ny,nz)
        self.pnts = {}  #dict of np.ndarray(3,points)
        self.meta = {}  #dict of meta_data - must be copied from the loaders
        self.stat = {}  #dict of statistical values

    #------------------------------------------------------------
    def help(self):
        print("Qu'est-ce que c'est que ce bordel ?")
        print('pippo = fb.fibo("pippo")')
        print('dir(fb)')
        print('dir(fb.fibo)')
        print('pippo.data.keys() --> list of data available')
        print('pippo.data["newname"] = pippo.data.pop("oldname")')
        print('np.unravel_index(np.argmax(...),np.shape(...))')
</code></pre>
<p><code>mod_from.py</code> contains <code>from_VTK</code>:</p>
<pre><code>
#! /bin/env python
import collections
import codecs
import numpy as np
import os
import pickle
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.ndimage as ndm
import itertools as itt
import struct
import re
import time
import vtk
from vtk.util import numpy_support as VN
from vtk.numpy_interface import dataset_adapter as dsa
from vtk.numpy_interface import algorithms as algs
import xml.etree.ElementTree as ET
#---------------------------------------------------------------------------------------
#------fill-fibo-objects-from-various-sources-------------------------------------------
#-------or-simply-get-your-data---------------------------------------------------------
#---------------------------------------------------------------------------------------
#---------------------------------------------------------------------------------------
class from_VTK (object):
def __init__(self,
address):
"""
Creates the object to retrieve data from VTK files
Parameters :
- address [address] where your data are (folder with segs inside)
"""
self.address = address
self.meta = {}
#------------------------------------------------------------
def get_meta(self, #counts lines in file and calls the appropriate function to get the meta data
extra_address = '',
silent = True):
"""
------------------------------------------------------------------------------------
fills the metadata list
------------------------------------------------------------------------------------
extra_address = '' [address] to reach any subfolder where your meta-data are
silent = True [bool] don't you want to see all infos printed on shell?
------------------------------------------------------------------------------------
"""
with open(os.path.join(self.address,extra_address,'SimulationData.txt'),'r') as foo:
line_number = len(foo.readlines())
        if line_number == 35 : old_vers = True
        elif line_number > 35 : old_vers = False
        else : old_vers = True  # assume the old file format when fewer lines are present
self.get_meta_A(old_vers,extra_address)
#---------get-dimensions-of-your-simulation----------
self.meta['dx'] = self.meta['xl']/self.meta['nx']
self.meta['dy'] = self.meta['yl']/self.meta['ny']
self.meta['dz'] = self.meta['zl']/self.meta['nz']
self.meta['nnn'] = (self.meta['nx'], self.meta['ny'], self.meta['nz'])
self.meta['lll'] = (self.meta['xl'], self.meta['yl'], self.meta['zl'])
self.meta['ddd'] = (self.meta['dx'], self.meta['dy'], self.meta['dz'])
self.meta['ppp'] = (False, False, False) # HARD-CODED !!!
self.meta['x'] = np.arange(0.,self.meta['xl'],self.meta['dx'])
try:
self.meta['y'] = np.arange(0.,self.meta['yl'],self.meta['dy'])
except:
self.meta['y'] = np.array([0.])
try:
self.meta['z'] = np.arange(0.,self.meta['zl'],self.meta['dz'])
except:
self.meta['z'] = np.array([0.])
#----------get-time-infos-from all-vtk-files-----------------
segments = [f for f in os.listdir(self.address) if f.split('.')[-1]=='vtk']
for i in range(len(segments)):
if i == 0:
self.meta['name'] = segments[i].split('_')[0]
segments[i] = segments[i].split('_')[-1].split('.')[0]
segments = set(segments)
segments = map(str, sorted(map(int, segments)))
self.meta['segcycles']=[]
self.meta['segtimes']=[]
for seg in segments:
self.meta['segcycles'].append(int(seg))
self.meta['segtimes'].append(float(seg)*self.meta['dt'])
#----------add-informations-on-species-----------------
species = []
for isp in range(0,self.meta['nss']):
if self.meta['sQOM'][isp]<0 : species.append('e'+str(isp))
elif self.meta['sQOM'][isp]>0 : species.append('i'+str(isp))
self.meta['species'] = species
if self.meta['ny'] == 1:
self.meta['space_dim'] = '1D'
elif self.meta['nz'] == 1:
self.meta['space_dim'] = '2D'
else:
self.meta['space_dim'] = '3D'
#----------print-summary-----------------
if not silent :
print('iPIC3D> cell number : ', self.meta['nnn'])
print('iPIC3D> domain size : ', self.meta['lll'])
print('iPIC3D> mesh spacing : ', self.meta['ddd'])
print('iPIC3D> periodicity : ', self.meta['ppp'])
print('iPIC3D> time step : ', self.meta['dt'])
print('iPIC3D> species : ', self.meta['species'])
for i in range(self.meta['nss']):
print(' '+species[i]+' charge-over-mass : ', self.meta['sQOM'][i])
#------------------------------------------------------------
def get_meta_A(self,
old_vers=False,
extra_address = ''):
"""
------------------------------------------------------------------------------------
extra routine, reads meta data from SimulationData.txt
------------------------------------------------------------------------------------
        old_vers = False [bool] is your simulation older than 11/2021? (the SimulationData.txt format in iPIC3D changed then)
extra_address = '' [address] to reach any subfolder where your meta-data are
------------------------------------------------------------------------------------
"""
#get mesh infos from SimulationData.txt
infos = open(os.path.join(self.address,extra_address,'SimulationData.txt'),'r')
infos.readline() #---------------------------
infos.readline() #- Simulation Parameters -
infos.readline() #---------------------------
self.meta['nss'] = int(infos.readline().split('=')[-1]) #number of species
stag=[]
sQOM=[]
for i in range(self.meta['nss']):
sQOM.append(float(infos.readline().split('=')[-1]))
self.meta['sQOM'] = sQOM
infos.readline() #---------------------------
self.meta['xl'] = float(infos.readline().split('=')[-1]) #box physical dimensions
self.meta['yl'] = float(infos.readline().split('=')[-1])
self.meta['zl'] = float(infos.readline().split('=')[-1])
self.meta['nx'] = int(infos.readline().split('=')[-1]) #box grid dimensions
self.meta['ny'] = int(infos.readline().split('=')[-1])
self.meta['nz'] = int(infos.readline().split('=')[-1])
if not old_vers :
infos.readline() #---------------------------
self.meta['XLEN'] = int(infos.readline().split('=')[-1]) # MPI procs grid
self.meta['YLEN'] = int(infos.readline().split('=')[-1])
self.meta['ZLEN'] = int(infos.readline().split('=')[-1])
infos.readline() #---------------------------
self.meta['xc'] = float(infos.readline().split('=')[-1]) #planet position
self.meta['yc'] = float(infos.readline().split('=')[-1])
self.meta['zc'] = float(infos.readline().split('=')[-1])
self.meta['R'] = float(infos.readline().split('=')[-1]) #planet radius
self.meta['Doff'] = float(infos.readline().split('=')[-1])
infos.readline() #---------------------------
self.meta['SAL'] = int(infos.readline().split('=')[-1])
self.meta['Nsal'] = int(infos.readline().split('=')[-1])
infos.readline() #---------------------------
self.meta['dt'] = float(infos.readline().split('=')[-1]) #timestep
self.meta['nsteps'] = int(infos.readline().split('=')[-1]) #number of steps
infos.readline() #---------------------------
for i in range(self.meta['nss']):
infos.readline() #rho init species
infos.readline() #rho inject species
infos.readline() #current sheet thickness
self.meta['Bx0'] = float(infos.readline().split('=')[-1]) #Bx0
self.meta['By0'] = float(infos.readline().split('=')[-1]) #By0
self.meta['Bz0'] = float(infos.readline().split('=')[-1]) #Bz0
if not old_vers :
infos.readline() #---------------------------
self.meta['Vx0'] = float(infos.readline().split('=')[-1]) #Vx0
self.meta['Vy0'] = float(infos.readline().split('=')[-1]) #Vy0
self.meta['Vz0'] = float(infos.readline().split('=')[-1]) #Vz0
self.meta['vths'] = []
for i in range(self.meta['nss']):
self.meta['vths'].append(float(infos.readline().split('=')[-1])) #vth species
infos.readline() #---------------------------
infos.readline() #Smooth
infos.readline() #2D smoothing
infos.readline() #nvolte ?
infos.readline() #GMRES error tolerance
        infos.readline() #CG error tolerance
infos.readline() # Mover error tolerance
infos.readline() #---------------------------
infos.readline() #Results saved in:
infos.readline() #Restart saved in:
infos.readline() #---------------------------
infos.close()
    # warning: there are several versions of this function!!!
#------------------------------------------------------------
def get_scal(self,
tar_file,
seg,
fibo_obj = None,
tar_var = None,
double_y = False,
silent = True):
"""
Reads scalar from .vtk file
Parameters :
- tar_file [str] target file to read (don't include '.vtk')
- seg [str] cycle of the simulation
- fibo_obj = None [None or fibo] fibo object you want to fill, else returns values
        - tar_var = None  [None or str] name the variable will be given
- double_y = False [bool] was your file printed twice in y?
- silent = True [bool] print status at the end?
Returns :
- scal [fibo_var]
"""
#create data vector, fill it!
data_file = open(os.path.join(self.address,tar_file+'.vtk'),'r', errors='replace')
if tar_var == None :
tar_var = data_file.readline().split()[0]+'%.8i'%int(seg)
else :
tar_var = tar_var+'%.8i'%int(seg)
data_file.readline()
data_file.readline()
data_format = data_file.readline()
data_structure = data_file.readline().split()[1]
self.meta['nx'], self.meta['ny'], self.meta['nz'] = map(int, data_file.readline().split()[1:4])
data_file.readline()
self.meta['dx'], self.meta['dy'], self.meta['dz'] = map(float, data_file.readline().split()[1:4])
data_file.readline()
        data_file.readline() #NB here you have the nx*ny*nz product
data_file.readline()
data_file.readline()
data_file.close()
        if double_y : self.meta['ny'] = self.meta['ny']//2 #NB integer division: here you halve the box in y!
if data_structure == 'STRUCTURED_POINTS': reader = vtk.vtkStructuredPointsReader() #here you can add other readers in case
t0 = time.time()
reader.SetFileName(os.path.join(self.address,tar_file+'.vtk'))
t1 = time.time()
print('DEBUG: SetFileName',t1-t0)
reader.ReadAllScalarsOn()
t2 = time.time()
        print('DEBUG: ReadAllScalarsOn',t2-t1)
reader.Update()
t3 = time.time()
print('DEBUG: Update',t3-t2)
vtk_output = reader.GetOutput()
t4 = time.time()
print('DEBUG: GetOutput',t4-t3)
if vtk_output.GetDimensions()[0] != self.meta['nx'] : print('ERROR: wrong number of cells along x (Nx)')
if vtk_output.GetDimensions()[2] != self.meta['nz'] : print('ERROR: wrong number of cells along z (Nz)')
if not double_y and vtk_output.GetDimensions()[1] != self.meta['ny'] : print('ERROR: wrong number of cells along y (Ny) ; double_y=False')
if double_y and vtk_output.GetDimensions()[1] != self.meta['ny']*2 : print('ERROR: wrong number of cells along y (Ny) ; double_y=True')
scal = VN.vtk_to_numpy(vtk_output.GetPointData().GetScalars())
t5 = time.time()
print('DEBUG: vtk_to_numpy',t5-t4)
print('reader=',reader)
print('vtk_output=',vtk_output)
        print('scal=',scal)
if double_y : scal = scal.reshape(self.meta['nz'],2*self.meta['ny'],self.meta['nx']).transpose(2,1,0) #recast flatten array to 3D array
else : scal = scal.reshape(self.meta['nz'],self.meta['ny'],self.meta['nx']) .transpose(2,1,0)
if double_y : scal = scal[:,:self.meta['ny'],:]
if (fibo_obj != None) :
fibo_obj.data[tar_var] = scal
else:
return scal
if not silent:
print('get_scal_from_VTK> data format : ', data_format)
print('get_scal_from_VTK> data structure : ', data_structure)
print('get_scal_from_VTK> grid dimensions : ', self.meta['nnn'])
print('get_scal_from_VTK> grid size : ', self.meta['lll'])
print('get_scal_from_VTK> grid spacing : ', self.meta['ddd'])
if (fibo_obj != None) :
print('get_scal_from_VTK> created fibo_obj.data['+tar_var+']')
#------------------------------------------------------------
def get_vect(self,
tar_file,
seg,
fibo_obj = None,
tar_var = None,
double_y = False,
silent=True):
"""
Reads vector from .vtk file
Parameters :
- tar_file [str] target file to read (don't include '.vtk')
- fibo_obj = None [None or fibo] fibo object you want to fill, else returns values
        - tar_var = None  [None or str] name the variable will be given
- double_y = False [bool] was your file printed twice in y?
Returns :
        - vect             [fibo_var]
"""
#create data vector, fill it!
data_file = open(os.path.join(self.address,tar_file+'.vtk'),'r',errors='replace') # b , binary but bug after
print("DEBUG: mod_probe : tar_file =",tar_file); # added Damien Mattei
if tar_var == None :
tar_var_x,tar_var_y,tar_var_z = data_file.readline().split()[0][1:-1].split(',')
tar_var_x = tar_var_x+'%.8i'%int(seg)
tar_var_y = tar_var_y+'%.8i'%int(seg)
tar_var_z = tar_var_z+'%.8i'%int(seg)
else :
tar_var_x = tar_var+'_x'+'%.8i'%int(seg)
tar_var_y = tar_var+'_y'+'%.8i'%int(seg)
tar_var_z = tar_var+'_z'+'%.8i'%int(seg)
data_file.readline()
data_file.readline()
data_format = data_file.readline()
data_structure = data_file.readline().split()[1]
self.meta['nx'], self.meta['ny'], self.meta['nz'] = map(int, data_file.readline().split()[1:4])
        data_file.readline() # ORIGIN line here
self.meta['dx'], self.meta['dy'], self.meta['dz'] = map(float, data_file.readline().split()[1:4])
data_file.readline()
        data_file.readline() #NB here you have the nx*ny*nz product
data_file.readline()
data_file.close()
        if double_y : self.meta['ny'] = self.meta['ny']//2 #NB integer division: here you halve the box in y!
if data_structure == 'STRUCTURED_POINTS': reader = vtk.vtkStructuredPointsReader() #here you can add other readers in case
t0 = time.time()
reader.SetFileName(os.path.join(self.address,tar_file+'.vtk'))
t1 = time.time()
print('DEBUG: SetFileName',t1-t0)
reader.ReadAllVectorsOn()
t2 = time.time()
print('DEBUG: ReadAllVectors',t2-t1)
reader.Update()
t3 = time.time()
print('DEBUG: Update',t3-t2)
vtk_output = reader.GetOutput()
t4 = time.time()
print('DEBUG: GetOutput',t4-t3)
if vtk_output.GetDimensions()[0] != self.meta['nx'] : print('ERROR: wrong number of cells along x (Nx)')
if vtk_output.GetDimensions()[2] != self.meta['nz'] : print('ERROR: wrong number of cells along z (Nz)')
if not double_y and vtk_output.GetDimensions()[1] != self.meta['ny'] : print('ERROR: wrong number of cells along y (Ny) ; double_y=False')
if double_y and vtk_output.GetDimensions()[1] != self.meta['ny']*2 : print('ERROR: wrong number of cells along y (Ny) ; double_y=True')
vect = VN.vtk_to_numpy(vtk_output.GetPointData().GetArray(tar_var))
t5 = time.time()
print('DEBUG: vtk_to_numpy',t5-t4)
print('reader=',reader)
print('vtk_output=',vtk_output)
print('vect=',vect)
if double_y :
vect_x = vect[:,0].reshape(self.meta['nz'],self.meta['ny']*2,self.meta['nx']).transpose(2,1,0)
vect_y = vect[:,1].reshape(self.meta['nz'],self.meta['ny']*2,self.meta['nx']).transpose(2,1,0)
vect_z = vect[:,2].reshape(self.meta['nz'],self.meta['ny']*2,self.meta['nx']).transpose(2,1,0)
else :
vect_x = vect[:,0].reshape(self.meta['nz'],self.meta['ny'],self.meta['nx']).transpose(2,1,0)
vect_y = vect[:,1].reshape(self.meta['nz'],self.meta['ny'],self.meta['nx']).transpose(2,1,0)
vect_z = vect[:,2].reshape(self.meta['nz'],self.meta['ny'],self.meta['nx']).transpose(2,1,0)
if double_y :
vect_x = vect_x[:,:self.meta['ny'],:]
vect_y = vect_y[:,:self.meta['ny'],:]
vect_z = vect_z[:,:self.meta['ny'],:]
if (fibo_obj != None) :
fibo_obj.data[tar_var_x] = vect_x
fibo_obj.data[tar_var_y] = vect_y
fibo_obj.data[tar_var_z] = vect_z
else: return np.array([vect_x, vect_y, vect_z])
if not silent:
print('get_vect_from_VTK> data format : ', data_format)
print('get_vect_from_VTK> data structure : ', data_structure)
print('get_vect_from_VTK> grid dimensions : ', self.meta['nnn'])
print('get_vect_from_VTK> grid size : ', self.meta['lll'])
print('get_vect_from_VTK> grid spacing : ', self.meta['ddd'])
if (fibo_obj != None) :
print('get_vect_from_VTK> created fibo_obj.data['+tar_var_x+']')
print('get_vect_from_VTK> created fibo_obj.data['+tar_var_y+']')
print('get_vect_from_VTK> created fibo_obj.data['+tar_var_z+']')
#---------------------------------------------------------------------------------------
class from_HDF5 (object):
def __init__(self,
address):
"""
Creates the object to retrieve data from HDF5 files
Parameters :
- address [address] where your data are (folder with segs inside)
"""
self.address = address
self.meta = {}
#------------------------------------------------------------
def get_scal(self,
tar_file,
seg,
path,
fibo_obj = None,
tar_var = None,
double_y = False,
silent = True):
"""
Reads scalar from .h5 file
Parameters :
- tar_file [str] target file to read (don't include '.h5')
- seg [str] cycle of the simulation
- path [str] path to field inside hdf5 dictionary
- fibo_obj = None [None or fibo] fibo object you want to fill, else returns values
- tar_var = None [None or str] name the.variable will be given
- double_y = False [bool] was your file printed twice in y?
- silent = True [bool] print status at the end?
Returns :
- scal [fibo_var]
"""
...
</code></pre>
<p>Again, there is no error on the command line.</p>
<p>Note that if I unset PYTHONPATH on the command line I get a related error, but not exactly the same one.</p>
|
<python>
|
2024-10-25 23:55:29
| 1
| 419
|
Damien Mattei
|
79,127,523
| 7,323,888
|
Singleton different behavior when using class and dict to store the instance
|
<p>Why do these two base classes result in the child objects having different behavior?</p>
<pre><code>class Base:
_instance: "Base" = None
def __new__(cls) -> "Base":
if cls._instance is None:
cls._instance = super().__new__(cls)
return cls._instance
class A(Base):
def foo(self):
return "foo"
class B(Base):
def quz(self):
return "quz"
a = A()
b = B()
print(id(a))
print(id(b))
</code></pre>
<pre><code>140035075937792
140035075948400
</code></pre>
<p>On the other hand</p>
<pre><code>from typing import Dict
class Base:
_instances: Dict[int, "Base"] = {}
def __new__(cls) -> "Base":
if 0 not in cls._instances:
cls._instances[0] = super().__new__(cls)
return cls._instances[0]
class A(Base):
def foo(self):
return "foo"
class B(Base):
def quz(self):
return "quz"
a = A()
b = B()
print(id(a))
print(id(b))
</code></pre>
<pre><code>140035075947296
140035075947296
</code></pre>
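<p>The difference comes down to where the assignment lands. <code>cls._instance = ...</code> inside <code>__new__</code> creates a <em>new</em> class attribute on the subclass doing the lookup, while the dict version mutates the single dict object shared through inheritance. A minimal check of that mechanism (a sketch reusing the first snippet's names):</p>

```python
class Base:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            # attribute assignment through cls lands on the subclass,
            # shadowing Base._instance rather than overwriting it
            cls._instance = super().__new__(cls)
        return cls._instance

class A(Base):
    pass

class B(Base):
    pass

a, b = A(), B()
assert '_instance' in A.__dict__          # A got its own attribute
assert '_instance' in B.__dict__          # so did B
assert Base.__dict__['_instance'] is None  # Base's attribute is untouched
assert a is not b                          # hence two distinct "singletons"
```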
|
<python><singleton>
|
2024-10-25 23:03:18
| 1
| 3,841
|
Victor Wong
|
79,127,484
| 11,959,501
|
What does QuantizeWrapperV2 actually do?
|
<p>So I am training a small CNN model which has a few <code>Conv2D</code> layers and some <code>MaxPool2D</code>, activation, and <code>Dense</code> layers, basically the basic layers that TensorFlow provides.</p>
<p>I want it to run on an embedded system which doesn't have much space and cannot make floating-point calculations.</p>
<p>Hence I was trying to do a QAT training (Quantize Aware Training) with the model so the weights will be (eventually) quantized to 8-bit and I am using the <code>tfmot.quantization.keras.QuantizeWrapperV2</code>.</p>
<p><strong>I couldn't understand the parameter counts and the operations (mathematically) it performs for each type of layer,</strong> and I would love to get some help understanding the undocumented math operations behind this API.</p>
<p>Below there is the summary of the same model, once with QAT and once without. there is a difference on the <code>Param #</code> column that I couldn't understand.</p>
<p>Thanks for the help.</p>
<h1>Here is the model summary <strong>WITHOUT</strong> QAT applied:</h1>
<pre><code> Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 56, 56, 1)] 0
conv2d_3 (Conv2D) (None, 54, 54, 30) 300
activation_3 (Activation) (None, 54, 54, 30) 0
max_pooling2d_2 (MaxPooling (None, 27, 27, 30) 0
2D)
conv2d_4 (Conv2D) (None, 25, 25, 16) 4336
activation_4 (Activation) (None, 25, 25, 16) 0
max_pooling2d_3 (MaxPooling (None, 12, 12, 16) 0
2D)
conv2d_5 (Conv2D) (None, 10, 10, 16) 2320
activation_5 (Activation) (None, 10, 10, 16) 0
global_average_pooling2d_1 (None, 16) 0
(GlobalAveragePooling2D)
dense (Dense) (None, 8) 136
activation_6 (Activation) (None, 8) 0
dense_1 (Dense) (None, 1) 9
activation_7 (Activation) (None, 1) 0
=================================================================
Total params: 7,101
Trainable params: 7,101
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<h1>Here is the model summary <strong>WITH</strong> QAT Applied:</h1>
<pre><code> Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 56, 56, 1)] 0
quantize_layer_1 (QuantizeL (None, 56, 56, 1) 3
ayer)
quant_conv2d_3 (QuantizeWra (None, 54, 54, 30) 301
pperV2)
quant_activation_3 (Quantiz (None, 54, 54, 30) 3
eWrapperV2)
quant_max_pooling2d_2 (Quan (None, 27, 27, 30) 1
tizeWrapperV2)
quant_conv2d_4 (QuantizeWra (None, 25, 25, 16) 4337
pperV2)
quant_activation_4 (Quantiz (None, 25, 25, 16) 3
eWrapperV2)
quant_max_pooling2d_3 (Quan (None, 12, 12, 16) 1
tizeWrapperV2)
quant_conv2d_5 (QuantizeWra (None, 10, 10, 16) 2321
pperV2)
quant_activation_5 (Quantiz (None, 10, 10, 16) 3
eWrapperV2)
quant_global_average_poolin (None, 16) 3
g2d_1 (QuantizeWrapperV2)
quant_dense (QuantizeWrappe (None, 8) 137
rV2)
quant_activation_6 (Quantiz (None, 8) 3
eWrapperV2)
quant_dense_1 (QuantizeWrap (None, 1) 14
perV2)
quant_activation_7 (Quantiz (None, 1) 1
eWrapperV2)
=================================================================
Total params: 7,131
Trainable params: 7,101
Non-trainable params: 30
_________________________________________________________________
</code></pre>
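<p>For what it's worth, the 30 extra non-trainable parameters tally exactly with the per-layer deltas between the two summaries (the interpretation that each extra variable is quantizer state, such as min/max trackers added by the wrapper, is my assumption, not something the summaries themselves state):</p>

```python
# per-layer parameter deltas: QAT summary minus plain summary
deltas = {
    "quantize_layer_1":               3 - 0,
    "quant_conv2d_3":               301 - 300,
    "quant_activation_3":             3 - 0,
    "quant_max_pooling2d_2":          1 - 0,
    "quant_conv2d_4":              4337 - 4336,
    "quant_activation_4":             3 - 0,
    "quant_max_pooling2d_3":          1 - 0,
    "quant_conv2d_5":              2321 - 2320,
    "quant_activation_5":             3 - 0,
    "quant_global_average_pooling":   3 - 0,
    "quant_dense":                  137 - 136,
    "quant_activation_6":             3 - 0,
    "quant_dense_1":                 14 - 9,
    "quant_activation_7":             1 - 0,
}

# the deltas sum to the 30 non-trainable params reported by the QAT summary
assert sum(deltas.values()) == 7131 - 7101 == 30
```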
|
<python><tensorflow><keras><quantization-aware-training><tfmot>
|
2024-10-25 22:36:50
| 1
| 577
|
Jhon Margalit
|
79,127,377
| 2,275,171
|
Using the name of a column as a parameter in Pyspark SQL query
|
<p>On a particular DataFrame I have a SQL query that I want to use twice, once to generate daily results and once to get monthly results. I can't just roll up the daily information because I have non-additive metrics like averages, distinct counts, etc.</p>
<pre><code>SOURCE_BASIC_METRICS = """
select category,
{date_field} as dt,
count(distinct id) as unique_ids
from mytable
where 1=1
group by category,
{date_field}
""";
</code></pre>
<p>mytable has a field called event_date and event_month. I want to execute the query twice, like so.</p>
<pre><code>dfBasicMetrics = spark.sql(SOURCE_BASIC_METRICS, date_field = "event_date");
dfBasicMetrics\
.write\
.parquet(DESTINATION_BASIC_METRICS + "/daily", mode = 'overwrite');
dfBasicMetrics = spark.sql(SOURCE_BASIC_METRICS, date_field = "event_month");
dfBasicMetrics\
.write\
.parquet(DESTINATION_BASIC_METRICS + "/monthly", mode = 'overwrite');
</code></pre>
<p>I'm new to parameterized queries, and I've been able to pass values, but haven't figured out how to pass an actual <strong>column name</strong>. I couldn't get the parameter marker method to work, like this...</p>
<p><code>spark.sql("select :date_field, etc etc", args={"date_field": "event_date"})</code></p>
<p>The named parameters in curly braces is as close as I've gotten. Would love any help using either method to be able to use an actual <strong>column name</strong> as the variable.</p>
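<p>Since parameter markers bind <em>values</em> rather than identifiers, one common workaround for the curly-brace style is plain Python string formatting on the template before it ever reaches <code>spark.sql(...)</code>. Sketched here without a Spark session, just to show the substitution (newer Spark versions also offer an <code>IDENTIFIER(:param)</code> SQL clause for this, which is worth checking in the docs):</p>

```python
SOURCE_BASIC_METRICS = """
select category,
       {date_field} as dt,
       count(distinct id) as unique_ids
from mytable
where 1=1
group by category,
         {date_field}
"""

# substitute the column name before handing the string to spark.sql(...)
daily_sql = SOURCE_BASIC_METRICS.format(date_field="event_date")
monthly_sql = SOURCE_BASIC_METRICS.format(date_field="event_month")

assert "event_date as dt" in daily_sql
assert "event_month as dt" in monthly_sql
```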
<p>Thanks!</p>
|
<python><pyspark><apache-spark-sql>
|
2024-10-25 21:37:09
| 1
| 680
|
mateoc15
|
79,127,307
| 4,048,657
|
PyTorch how to use the gradient of an intermediate variable in the computation graph of a later variable
|
<pre class="lang-py prettyprint-override"><code>point_2 = torch.tensor([0.2, 0.8], device=device, requires_grad=True)
p = torch.cat((point_2, torch.tensor([0], device=device)), 0)
x_verts = torch.tensor([0.0, 1.0, 0.0], device=device, requires_grad=True)
y_verts = torch.tensor([0.0, 0.0, 1.0], device=device, requires_grad=True)
z_verts = torch.tensor([0.1, -0.1, 0.2], device=device, requires_grad=True)
v1_2d = torch.cat((torch.index_select(x_verts, 0, torch.tensor([0])), torch.index_select(y_verts, 0, torch.tensor([0])), torch.tensor([0])))
v2_2d = torch.cat((torch.index_select(x_verts, 0, torch.tensor([1])), torch.index_select(y_verts, 0, torch.tensor([1])), torch.tensor([0])))
v3_2d = torch.cat((torch.index_select(x_verts, 0, torch.tensor([2])), torch.index_select(y_verts, 0, torch.tensor([2])), torch.tensor([0])))
area_3 = torch.cross(v2_2d - v1_2d, v3_2d - v1_2d)
area = torch.index_select(area_3, 0, torch.tensor([2]))
alpha_3 = 0.5 * torch.cross(v2_2d - p, v3_2d - p) / area
beta_3 = 0.5 * torch.cross(v3_2d - p, v1_2d - p) / area
gamma_3 = 0.5 * torch.cross(v1_2d - p, v2_2d - p) / area
alpha = torch.index_select(alpha_3, 0, torch.tensor([2]))
beta = torch.index_select(beta_3, 0, torch.tensor([2]))
gamma = torch.index_select(gamma_3, 0, torch.tensor([2]))
z = alpha * torch.index_select(z_verts, 0, torch.tensor([0])) + beta * torch.index_select(z_verts, 0, torch.tensor([1])) + gamma * torch.index_select(z_verts, 0, torch.tensor([2]))
z.backward()
grad_norm = torch.norm(point_2.grad ) # <= disconnection
f = torch.tanh(10.0 * (grad_norm - 2.0))
f.backward() # <= error
print(x_verts.grad)
print(y_verts.grad)
print(z_verts.grad)
</code></pre>
<p>I have this code which fails because I use a variable's <code>.grad</code> value as input to another variable. How can I fix this?</p>
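<p>The <code>.grad</code> populated by <code>backward()</code> is detached from the graph, which is why the second <code>backward()</code> fails. A sketch of the usual remedy, <code>torch.autograd.grad</code> with <code>create_graph=True</code>, shown on a toy function rather than the mesh code above:</p>

```python
import torch

x = torch.tensor([0.2, 0.8], requires_grad=True)
z = (x ** 2).sum()

# ask autograd for dz/dx while keeping it inside the graph,
# so the gradient itself remains differentiable
(g,) = torch.autograd.grad(z, x, create_graph=True)

f = torch.tanh(10.0 * (g.norm() - 2.0))
f.backward()  # second-order pass: populates x.grad with df/dx

assert x.grad is not None
```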
|
<python><pytorch>
|
2024-10-25 21:02:55
| 1
| 1,239
|
Cedric Martens
|
79,127,256
| 1,380,269
|
Why my async function seems not to run asynchronously?
|
<p>I want to (quickly) scan my LAN to find a host listening to a certain TCP port, e.g. 2442 (JS8Call network API, if anyone's interested).</p>
<p>I copied the program structure from a tutorial explaining asynchronous execution in Python with the help of the <code>asyncio</code> package. The program kind of works, but the individual tasks do not run in parallel; instead they run in sequence, although the task creation and result collection seem to be OK. When it runs, I can see the print messages from the async function <code>ping</code> appearing sequentially, the next one always appearing exactly after the previous one times out.</p>
<p>What is wrong with my <code>ping</code> function? It does not seem to create a coroutine object, I guess.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python
import socket
import asyncio
socket.setdefaulttimeout(0.5)
async def ping(host, port):
r = f'{host}:{port} OK'
try:
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.connect( (host, port) )
print( r )
s.close()
return r
except Exception as X:
print(f'{host} FAILED')
return X
async def main():
tasks = []
async with asyncio.TaskGroup() as tg:
# start all tasks
for i in range(5,240):
host = f'192.168.33.{i}'
t = tg.create_task( ping(host, 22) )
tasks.append(t)
results = [ task.result() for task in tasks ]
for p in results:
if p != None:
print(p)
asyncio.run( main() )
</code></pre>
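<p>To make the failure mode concrete (independently of sockets): any blocking call inside an <code>async def</code> holds the single-threaded event loop, so all tasks serialize. A minimal demonstration contrasting a blocking wait with an awaitable one:</p>

```python
import asyncio
import time

async def blocking_task(i):
    time.sleep(0.2)            # blocks the whole event loop, like socket.connect
    return i

async def nonblocking_task(i):
    await asyncio.sleep(0.2)   # yields to the loop while waiting
    return i

async def run_all(factory):
    t0 = time.perf_counter()
    await asyncio.gather(*(factory(i) for i in range(5)))
    return time.perf_counter() - t0

blocking = asyncio.run(run_all(blocking_task))       # roughly 5 * 0.2 s: sequential
concurrent = asyncio.run(run_all(nonblocking_task))  # roughly 0.2 s: overlapped

assert concurrent < blocking
```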
|
<python><python-asyncio>
|
2024-10-25 20:32:53
| 0
| 854
|
Jindrich Vavruska
|
79,126,930
| 11,277,108
|
Generic TypeVar solution for class that produces another class that contains the original class
|
<p>I'm trying to come up with a generic <code>TypeVar</code> solution to avoid having to create an ever expanding <code>Union</code>type annotation. Here's a somewhat contrived MRE of what I have currently:</p>
<pre><code>from __future__ import annotations
from typing import Union
class CreatedDetails:
def __init__(self, creator: Union[Creator1, Creator2], new: bool) -> None:
self.creator = creator
self.new = new
class Creator1:
def __init__(self) -> None:
self.name = "creator1"
def create(self) -> CreatedDetails:
return CreatedDetails(creator=self, new=True)
class Creator2:
def __init__(self) -> None:
self.name = "creator2"
def create(self) -> CreatedDetails:
return CreatedDetails(creator=self, new=False)
creator1 = Creator1()
created = creator1.create()
created.creator.name
</code></pre>
<p>This plays nice and my type checker recognises everything:</p>
<p><a href="https://i.sstatic.net/9QDp3P0K.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QDp3P0K.png" alt="successful type checker" /></a></p>
<p>However, this relies on me adding new <code>Creator</code> classes to the <code>Union</code> type annotation.</p>
<p>I've tried using a generic <code>TypeVar</code>:</p>
<pre><code>from __future__ import annotations
from typing import TypeVar
CreatorRecord = TypeVar("CreatorRecord")
class CreatedDetails:
def __init__(self, creator: CreatorRecord, new: bool) -> None:
self.creator = creator
self.new = new
class Creator:
pass
class Creator1(Creator):
def __init__(self) -> None:
self.foo = "foo"
def create(self) -> CreatedDetails:
return CreatedDetails(creator=self, new=True)
class Creator2(Creator):
def __init__(self) -> None:
self.bar = "bar"
def create(self) -> CreatedDetails:
return CreatedDetails(creator=self, new=False)
creator1 = Creator1()
created = creator1.create()
created.creator.foo
created.new
</code></pre>
<p>However, the type checker doesn't fully recognise this:</p>
<p><a href="https://i.sstatic.net/FyYqMEmV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyYqMEmV.png" alt="unsuccessful type checker" /></a></p>
<p>Where am I going wrong?</p>
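<p>One way the usual pattern goes (a sketch, changing nothing from the question except making <code>CreatedDetails</code> generic in its creator type, so a checker can track the concrete creator per subclass):</p>

```python
from __future__ import annotations
from typing import Generic, TypeVar

C = TypeVar("C")

class CreatedDetails(Generic[C]):
    def __init__(self, creator: C, new: bool) -> None:
        self.creator = creator
        self.new = new

class Creator1:
    def __init__(self) -> None:
        self.foo = "foo"

    def create(self) -> CreatedDetails[Creator1]:
        # the return type pins C to Creator1 for the checker
        return CreatedDetails(creator=self, new=True)

created = Creator1().create()
assert created.creator.foo == "foo"
assert created.new is True
```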
|
<python>
|
2024-10-25 18:27:44
| 1
| 1,121
|
Jossy
|
79,126,854
| 11,609,834
|
How to hint argument to a function as dictionary with parent class in Python
|
<p>I would like to hint a function as a mapping between instances of a class or its children and a value. <code>takes_mapping</code> below is an example. However, I am getting a static typing error when I use the following:</p>
<pre class="lang-py prettyprint-override"><code>from collections.abc import Mapping
class Parent:
pass
class Child(Parent):
pass
assert issubclass(Child, Parent)
def takes_mapping(mapping: Mapping[Parent, int]):
return
child = Child()
my_dict: dict[Child, int] = {child: 1}
my_mapping: Mapping[Child, int] = {child: 1}
takes_mapping(my_dict) # typing error...
takes_mapping(my_mapping) # same basic error, involving invariance (see below)
</code></pre>
<p>Pyright generates the following error:</p>
<pre class="lang-none prettyprint-override"><code>Argument of type "dict[Child, int]" cannot be assigned to parameter "mapping" of type "Mapping[Parent, int]" in function "takes_mapping"
"dict[Child, int]" is not assignable to "Mapping[Parent, int]"
Type parameter "_KT@Mapping" is invariant, but "Child" is not the same as "Parent" reportArgumentType
</code></pre>
<p>How can I hint the argument to take a mapping in such a way that the keys may be an instance of <code>Parent</code> or any of its children (without typing errors)? In my use case, we may introduce additional children of <code>Parent</code>, and it would be nice if we didn't have to couple the hint to the hierarchy, i.e., <code>Union</code> will not really express what's desired since it depends on the specific unioned types.</p>
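<p>One commonly suggested workaround is to make the function itself generic over a key type bounded by <code>Parent</code>, so each call site binds the concrete key type instead of fighting the invariance of <code>Mapping</code>'s key parameter. A minimal sketch:</p>

```python
from collections.abc import Mapping
from typing import TypeVar

class Parent:
    pass

class Child(Parent):
    pass

# a key type that may be Parent or any subclass of it
P = TypeVar("P", bound=Parent)

def takes_mapping(mapping: Mapping[P, int]) -> int:
    # read-only access, so binding P per call is safe
    return sum(mapping.values())

child = Child()
assert takes_mapping({child: 1}) == 1
```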
|
<python><python-typing>
|
2024-10-25 18:03:09
| 1
| 1,013
|
philosofool
|
79,126,749
| 11,092,636
|
How to change "Recommended environment" in Visual Studio Code
|
<p>In Visual Studio Code, whenever I'm prompted to choose an environment, there is a "recommended" one or "favourite" one. But it's not the one I would like to see there (I would like another virtual environment* to get there).</p>
<p>E.g.:
<a href="https://i.sstatic.net/82mWjPWT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82mWjPWT.png" alt="enter image description here" /></a></p>
<p>I've looked everywhere. How can I change this and select which one should be "recommended"?</p>
<p>* added virtual environment to this sentence because apparently someone said it's bad to recommend a non-virtual-environment</p>
|
<python><visual-studio-code><virtualenv>
|
2024-10-25 17:28:03
| 1
| 720
|
FluidMechanics Potential Flows
|
79,126,618
| 1,306,892
|
Formatting Nested Square Roots in SymPy
|
<p>I'm working with <code>sympy</code> to obtain symbolic solutions of equation. This is my code:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
# Define the symbolic variable
x = sp.symbols('x')
# Define f(x)
f = ((x**2 - 2)**2 - 2)**2 - 2
# Solve the equation f(x) = 0
solutions = sp.solve(f, x)
# Filter only the positive solutions
positive_solutions = [sol for sol in solutions if sol.is_real and sol > 0]
# Print the positive solutions
print("The positive solutions of the equation f(x) = 0 are:")
for sol in positive_solutions:
print(sol)
</code></pre>
<p>I'm using SymPy to solve the equation</p>
<p><a href="https://i.sstatic.net/TXxBzoJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TXxBzoJj.png" alt="[
L_3(x) = \left(\left(x^2 - 2\right)^2 - 2\right)^2 - 2 = 0
]" /></a></p>
<p>and I am able to obtain the positive solutions. However, the solutions are returned with nested square roots in a way that makes the inner roots appear on the left side, like this:</p>
<pre><code>sqrt(2 - sqrt(2 - sqrt(2)))
sqrt(2 - sqrt(sqrt(2) + 2))
sqrt(sqrt(2 - sqrt(2)) + 2)
sqrt(sqrt(sqrt(2) + 2) + 2)
</code></pre>
<p>I would like the nested square roots to be formatted in a way that they all appear to the right, like this:</p>
<pre><code>sqrt(2 - sqrt(2 - sqrt(2)))
sqrt(2 - sqrt(sqrt(2) + 2))
sqrt(2 + sqrt(2 - sqrt(2)))
sqrt(2 + sqrt(sqrt(2) + 2))
</code></pre>
<p>Is there a way to adjust the output formatting of the solutions so that the nested square roots appear as desired? Any help would be greatly appreciated! Thank you!</p>
|
<python><sympy><symbolic-math>
|
2024-10-25 16:46:02
| 2
| 1,801
|
Mark
|
79,126,521
| 7,426,792
|
polars: `json_path_match` on `pl.element()` in `list.eval()` context
|
<p>I'm trying to perform a JSON path match for each element (string) in a list. I'm observing the following behavior:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = [
'{"text":"asdf","entityList":[{"mybool":true,"id":1},{"mybool":true,"id":2},{"mybool":false,"id":3}]}',
'{"text":"asdf","entityList":[{"mybool":false,"id":1},{"mybool":true,"id":2},{"mybool":false,"id":3}]}',
]
df = pl.DataFrame({"data": [data]})
print(df)
# shape: (1, 1)
# ┌─────────────────────────────────┐
# │ data │
# │ --- │
# │ list[str] │
# ╞═════════════════════════════════╡
# │ ["{"text":"asdf","entityList":… │
# └─────────────────────────────────┘
expr1 = pl.col("data").list.eval(pl.element().str.json_path_match("$.entityList[*].id"))
print(df.select(expr1))
# shape: (1, 1)
# ┌────────────┐
# │ data │
# │ --- │
# │ list[str] │
# ╞════════════╡
# │ ["1", "1"] │
# └────────────┘
expr2 = pl.col("data").list.eval(pl.element().str.json_path_match("$.entityList[*].id").flatten())
print(df.select(expr2))
# shape: (1, 1)
# ┌────────────┐
# │ data │
# │ --- │
# │ list[str] │
# ╞════════════╡
# │ ["1", "1"] │
# └────────────┘
</code></pre>
<p>My understanding of JSON path is that <code>$.entityList[*].id</code> should extract the <code>id</code> of every element in <code>entityList</code>, therefore I'd expect the following result:</p>
<pre><code>shape: (1, 1)
┌────────────────────────┐
│ data │
│ --- │
│ list[list[i64]] │
╞════════════════════════╡
│ [[1, 2, 3], [1, 2, 3]] │
└────────────────────────┘
</code></pre>
<p>Am I misunderstanding how <code>json_path_match</code> operates on list elements or could this be a bug in how the nested lists are created?</p>
|
<python><json><dataframe><python-polars><jsonpath>
|
2024-10-25 16:18:18
| 1
| 427
|
Moritz Wilksch
|
79,126,460
| 10,083,382
|
Using fine tuned flux-dev model from Replicate platform locally?
|
<p>I've fine-tuned the flux-dev model using the Replicate platform. I've downloaded the weights after the training process, but I don't want to generate images via the web interface. Instead, I want to use my local resources to generate images. Currently, I generate images locally using the baseline flux-dev model.</p>
<p>After downloading the three files <code>captions</code>, <code>config.yaml</code> and <code>lora.safetensors</code>, where should I place them in the current baseline flux-dev model directory?</p>
|
<python><large-language-model><replicate>
|
2024-10-25 16:02:01
| 1
| 394
|
Lopez
|
79,126,368
| 10,658,339
|
How to create a plotly graph in Power BI
|
<p>I'm using the Python visual tool in Power BI and I would like to create a plotly graph, as in <a href="https://www.youtube.com/watch?v=uLikWluqg54" rel="nofollow noreferrer">this video</a>.
I'm able to create plots with seaborn and matplotlib, but when I try plotly it gives this error:
<a href="https://i.sstatic.net/AJpDcd68.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJpDcd68.png" alt="enter image description here" /></a>
The figure itself works, though: it opens in an HTML window outside Power BI.
What am I doing wrong?
This is the code used in the Power BI Python visual:</p>
<pre><code>import pandas as pd
import plotly.graph_objects as go  # Import go for graph_objects

# Create a scatter plot figure
fig = go.Figure()

# Add scatter trace
fig.add_trace(
    go.Scatter(
        x=dataset['I_B'],
        y=dataset['I_C'],
        mode='markers',
        marker=dict(size=8, color='blue', opacity=0.7)
    )
)

# Customize the layout
fig.update_layout(
    title="Scatter Plot of I_B vs. I_C",
    xaxis_title="I_B",
    yaxis_title="I_C",
    width=800,
    height=500,
    template="plotly_white"
)

# Show the figure inline
fig.show()
</code></pre>
|
<python><powerbi><plotly>
|
2024-10-25 15:35:15
| 2
| 527
|
JCV
|
79,126,319
| 8,458,083
|
How to constrain a generic type with an alias derived from a union type in Python
|
<p>Quick summary:
I want the function <code>combine</code> to take two arguments of the same type, the generic type T.
T must be one of the members of a certain union type, <code>possibleTypeForCombine = int | str</code>. I want the constraint expressed via possibleTypeForCombine rather than the types it contains, because the definition of possibleTypeForCombine can change. That way mypy will detect a problem if I don't adapt the pattern matching in the function combine.</p>
<hr />
<p>I'm trying to improve a function that combines two values of the same type, currently supporting int and str. Here's my current implementation:</p>
<pre><code>def combine[T: (int, str)](a: T, b: T) -> T:
    match a:
        case int():
            return a + b
        case str():
            return a + " " + b
</code></pre>
<p>This passes mypy type checking. However, I want to make it more extensible for future additions of other types. I'd like to use a type alias possibleTypeForCombine to represent the allowed types. I've tried defining it as:</p>
<pre><code>possibleTypeForCombine = int | str
# or
from typing import Union
possibleTypeForCombine = Union[int, str]
</code></pre>
<p>My goal is to be able to easily add new types to possibleTypeForCombine in the future, and mypy informs me where I need to add new cases in functions using this type.
I attempted to use this type alias in a new version of the function:</p>
<pre><code>def combine2[T: possibleTypeForCombine](a: T, b: T) -> T:
    match a:
        case int():
            c = a + b
            return c
        case str():
            c = a + " " + b
            return c
</code></pre>
<p>However, this results in the following mypy errors:</p>
<blockquote>
<pre><code>error: Missing return statement  [return]
error: Unsupported operand types for + ("str" and "T")  [operator]
error: Incompatible return value type (got "str", expected "T")  [return-value]
</code></pre>
</blockquote>
<p>Of course I expected that. The syntax to constrain a generic type is <code>T: (possible type 1, possible type 2)</code>.</p>
<p>I would like to know if something similar is nevertheless possible.</p>
|
<python><generics><python-typing><mypy>
|
2024-10-25 15:18:24
| 4
| 2,017
|
Pierre-olivier Gendraud
|
79,126,257
| 1,012,952
|
Accessing query results from DBT in python script
|
<p>I am doing a bunch of programmatic running of dbt models from a python script. Part of this is the need to access the results of a simple query to determine what to do next. The <code>dbtRunnerResult</code> object never seems to have the query results, no matter whether I run a model or an operation. So so far I've gotten a workaround to execute a macro that then logs the rows of the query result and I parse that from the <code>stdout</code> of a subprocess in python like so:</p>
<p>The dbt macro:</p>
<pre class="lang-none prettyprint-override"><code>{% macro execute_sql(query) %}
{% set results = run_query(query) %}
{% for row in results %}
{{ log('counter=='~row[0], info=True) }}
{% endfor %}
{% endmacro %}
</code></pre>
<p>and here's the python script:</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
import json
import re

# Define a function to run the dbt command and capture the output
def run_dbt_command(command):
    result = subprocess.run(command, capture_output=True, text=True)
    return result.stdout, result.stderr

# Define the SQL query
sql_query = "SELECT COUNT(*) FROM dbt.model_x"

# Run the SQL query using the execute_sql macro
command = [
    'dbt',
    'run-operation',
    'execute_sql',
    '--args',
    json.dumps({"query": sql_query})
]
stdout, stderr = run_dbt_command(command)

# Access the query results from the log output
if "counter==" in stdout:
    print("Query executed successfully")
    # Extract the rows from the log output
    rows = re.findall(r'counter==(\d+)', stdout)
    for row in rows:
        print(row)
else:
    print("Query execution failed")
    print(stderr)
</code></pre>
<p>This does let me access the log output from the macro but this feels really dumb and I'd much rather do something like</p>
<pre class="lang-py prettyprint-override"><code>from dbt.cli.main import dbtRunner, dbtRunnerResult
from dbt.contracts.graph.manifest import Manifest

# Initialize the dbt runner and parse the manifest
res: dbtRunnerResult = dbtRunner().invoke(["parse"])
manifest: Manifest = res.result
dbt = dbtRunner(manifest=manifest)

sql_query = "SELECT COUNT(*) FROM dbt.model_x"
run: dbtRunnerResult = dbt.invoke([
    'run-operation',
    'execute_sql',
    '--args',
    json.dumps({"query": sql_query})
])
if run.success:
    print("Query executed successfully")
    results = run.result.results.something
</code></pre>
<p>I'm running dbt version 1.8 and various old pieces around the internet around <code>dbt-rpc</code> or <code>dbt.get_results(query)</code> just aren't a thing anymore.</p>
<p>EDIT:
The reason I want to use DBT for this query running is so that I don't have to have a DBT client and corresponding DB client (e.g. <code>psycopg2</code>, <code>clickhouse-connect</code> / <code>clickhouse-driver</code>) to run queries separately and handle auth & connections and whatnot.</p>
|
<python><dbt>
|
2024-10-25 14:59:14
| 2
| 4,080
|
Killerpixler
|
79,126,205
| 13,147,413
|
How to split a pyspark dataframe taking a portion of data for each different id
|
<p>I'm working with a pyspark dataframe (in Python) containing time series data. Data got a structure like this:</p>
<pre><code>event_time variable value step ID
1456942945 var_a 123.4 1 id_1
1456931076 var_b 857.01 1 id_1
1456932268 var_b 871.74 1 id_1
1456940055 var_b 992.3 2 id_1
1456932781 var_c 861.3 2 id_1
1456937186 var_c 959.6 3 id_1
1456934746 var_d 0.12 4 id_1
1456942945 var_a 123.4 1 id_2
1456931076 var_b 847.01 1 id_2
1456932268 var_b 871.74 1 id_2
1456940055 var_b 932.3 2 id_2
1456932781 var_c 821.3 3 id_2
1456937186 var_c 969.6 4 id_2
1456934746 var_d 0.12 4 id_2
</code></pre>
<p>For each id i got each variable's value at a specific "step".</p>
<p><strong>I need to subset this dataframe like this</strong>: for each id, take all the rows corresponding to steps 1, 2 and 3, plus a portion of the step 4 data, say the first 25%, measured from the earliest event_time of step 4. This portioning is to be done with respect to event_time.</p>
<p>I'm able to do it for a single id, after having subset the DF based on that id:</p>
<pre><code># single step partitioning
threshold_value = DF.selectExpr("percentile_approx(event_time, 0.25) as threshold").collect()[0]["threshold"]
partitioned_df = DF.filter(col("event_time") <= threshold_value)

# First 3 steps
first_3_steps_df = DF.filter(col("step").isin([1, 2, 3]))
</code></pre>
<p>And then I would concatenate partitioned_df and first_3_steps_df to obtain the desired output for one specific id. I'm stuck at iterating this kind of partitioning for each id in DF without <em>actually</em> iterating that process for each id separately.</p>
<p>I'm also able to do it in pandas, but the DF is huge and I really need to stick to PySpark, so no pandas answers, please.</p>
|
<python><pyspark>
|
2024-10-25 14:48:35
| 1
| 881
|
Alessandro Togni
|
79,126,171
| 274,460
|
ContextVar set and reset in the same function fails - created in a different context
|
<p>I have this function:</p>
<pre><code>async_session = contextvars.ContextVar("async_session")

async def get_async_session() -> AsyncGenerator[AsyncSession, None]:
    async with async_session_maker() as session:
        try:
            _token = async_session.set(session)
            yield session
        finally:
            async_session.reset(_token)
</code></pre>
<p>This fails with:</p>
<pre><code>ValueError: <Token var=<ContextVar name='async_session' at 0x7e1470e00e00> at 0x7e14706d4680> was created in a different Context
</code></pre>
<p>How can this happen? AFAICT the only way for the <code>Context</code> to be changed is for a whole function call. So how can the context change during a <code>yield</code>?</p>
<p>This function is being used as a FastAPI <code>Depends</code> in case that makes a difference - but I can't see how it does. It's running under Python 3.8 and the version of FastAPI is equally ancient - 0.54.</p>
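The error can be reproduced without FastAPI: the generator's `set()` runs while one copied `Context` is active, and its `reset()` runs after the framework has moved on to a different context. A minimal stand-alone sketch of the same failure:

```python
import contextvars

var = contextvars.ContextVar("async_session")

def dep():
    token = var.set("session")
    try:
        yield "session"
    finally:
        var.reset(token)  # fails if we resumed in a different Context

g = dep()
ctx = contextvars.copy_context()
ctx.run(next, g)  # set() happens inside the copied context

try:
    g.close()     # reset() now runs in the outer context
except ValueError as exc:
    print(exc)    # "... was created in a different Context"
```

So a `yield` by itself never switches contexts, but whoever resumes or closes the generator can do so from a different `Context` than the one that started it.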
|
<python><python-3.x><fastapi><python-contextvars>
|
2024-10-25 14:39:50
| 2
| 8,161
|
Tom
|
79,126,092
| 999,162
|
Django ModelForm doesn't update instance but fails with IntegrityError
|
<p>I'm having some really weird issue with a <code>ModelForm</code>. Instead of saving the instance, it attempts to create the instance with the same primary key.</p>
<pre><code>class Upload(models.Model):
    file = models.FileField(upload_to=get_foundry_upload_name, null=False, blank=False)
    filename = models.CharField(max_length=256, null=True, blank=True)
    foundry = models.ForeignKey(Foundry, on_delete=models.CASCADE)
    family = models.CharField(max_length=256, null=True, blank=True)
    family_select = models.ForeignKey(Family, null=True, blank=True, on_delete=models.CASCADE)
    style = models.CharField(max_length=256, null=True, blank=True)
    style_select = models.ForeignKey(Style, null=True, blank=True, on_delete=models.CASCADE)
    processed = models.BooleanField(default=False)
    created = models.DateTimeField("Created", auto_now_add=True)

    # edit: this was omitted in the original question, because I thought it irrelevant
    def save(self, commit=True):
        self.filename = self.file.name
        super().save(commit)


class UploadProcessForm(ModelForm):
    class Meta:
        model = Upload
        fields = (
            "filename",
            "family",
            "family_select",
            "style",
            "style_select",
        )


def upload_process_row(request, foundry, id):
    i = get_object_or_404(Upload, id=id, foundry=foundry)
    upload_form = UploadProcessForm(instance=i)
    if request.method == "POST":
        upload_form = UploadProcessForm(request.POST, instance=i)
        if upload_form.is_valid():
            upload_form.save()
    return render(request, "foundry/upload_process.row.html", {
        "f": upload_form
    })
</code></pre>
<blockquote>
<p>django.db.utils.IntegrityError: duplicate key value violates unique constraint "foundry_upload_pkey"
DETAIL: Key (id)=(1) already exists.</p>
</blockquote>
<p>I'm certain this is some super trivial mistake, but I just cannot spot where I'm going wrong; imo this looks exactly like the <a href="https://docs.djangoproject.com/en/5.1/topics/forms/modelforms/#the-save-method" rel="nofollow noreferrer">textbook example</a>. The <code>upload_form.save()</code> always attempts to <strong>create</strong> a database entry, and with the instance's primary key. I'd just want to update the existing instance (that's the whole point of a <code>ModelForm</code>, no?).</p>
<p>I've wiped the database table and migrations and recreated them fresh, just to be sure.</p>
<p>Edit: I've added the <code>save</code> overwrite in <code>Upload</code> that I originally omitted as irrelevant. A closer look at the stacktrace (:facepalm:) pointed out that it is <em>that</em> <code>save()</code> call where the pk error originates from. However, now that I know where the error is triggered, I still don't understand why? Shouldn't that simply save with the same pk again?</p>
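The likely culprit is the override's signature: `Model.save`'s first positional parameter is `force_insert`, so `super().save(commit)` passes `commit=True` into `force_insert` and forces an INSERT with the existing primary key. A plain-Python sketch of the mix-up (the `Model` class here is a stand-in for Django's, not the real one):

```python
class Model:
    # stand-in for django.db.models.Model's save() signature
    def save(self, force_insert=False, force_update=False, using=None, update_fields=None):
        return f"force_insert={force_insert}"

class Upload(Model):
    def save(self, commit=True):
        # `commit` lands positionally in `force_insert`, forcing an INSERT
        return super().save(commit)

print(Upload().save())  # force_insert=True
```

The usual fix is to declare the override as `def save(self, *args, **kwargs)` and forward `*args, **kwargs` to `super().save()`.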
|
<python><django><django-forms><modelform>
|
2024-10-25 14:19:16
| 1
| 5,274
|
kontur
|
79,126,042
| 14,643,631
|
How to efficiently remove overlapping circles from the dataset
|
<p>I have a dataset of about 20,000 records that represent global cities with a population > 20,000. I have an estimated radius which more or less describes the size of each city. It's not exactly accurate, but for my purposes it's enough.</p>
<p>I'm loading it into a pandas DataFrame. Below is a sample:</p>
<pre><code>name_city,country_code,latitude,longitude,geohash,estimated_radius,population
Vitry-sur-Seine,FR,48.78716,2.40332,u09tw9qjc3v3,1000,81001
Vincennes,FR,48.8486,2.43769,u09tzkx5dr13,500,45923
Villeneuve-Saint-Georges,FR,48.73219,2.44925,u09tpxrmxdth,500,30881
Villejuif,FR,48.7939,2.35992,u09ttdwmn45z,500,48048
Vigneux-sur-Seine,FR,48.70291,2.41357,u09tnfje022n,500,26692
Versailles,FR,48.80359,2.13424,u09t8s6je2sd,1000,85416
Vélizy-Villacoublay,FR,48.78198,2.19395,u09t9bmxdspt,500,21741
Vanves,FR,48.82345,2.29025,u09tu059nwwp,500,26068
Thiais,FR,48.76496,2.3961,u09tqt2u3pmt,500,29724
Sèvres,FR,48.82292,2.21757,u09tdryy15un,500,23724
Sceaux,FR,48.77644,2.29026,u09tkp7xqgmw,500,21511
Saint-Mandé,FR,48.83864,2.41579,u09tyfz1eyre,500,21261
Saint-Cloud,FR,48.84598,2.20289,u09tfhhh7n9u,500,28839
Paris,FR,48.85341,2.3488,u09tvmqrep8n,12000,2138551
Orly,FR,48.74792,2.39253,u09tq6q1jyzt,500,20528
Montrouge,FR,48.8162,2.31393,u09tswsyyrpr,500,38708
Montreuil,FR,48.86415,2.44322,u09tzx7n71ub,2000,111240
Montgeron,FR,48.70543,2.45039,u09tpf83dnpn,500,22843
Meudon,FR,48.81381,2.235,u09tdy73p38y,500,44652
Massy,FR,48.72692,2.28301,u09t5yqqvupx,500,38768
Malakoff,FR,48.81999,2.29998,u09tsr6v13tr,500,29420
Maisons-Alfort,FR,48.81171,2.43945,u09txtbkg61z,1000,53964
Longjumeau,FR,48.69307,2.29431,u09th0q9tq1s,500,20771
Le Plessis-Robinson,FR,48.78889,2.27078,u09te9txch23,500,22510
Le Kremlin-Bicêtre,FR,48.81471,2.36073,u09ttwrn2crz,500,27867
Le Chesnay,FR,48.8222,2.12213,u09t8rc3cjwz,500,29154
La Celle-Saint-Cloud,FR,48.85029,2.14523,u09tbufje6p6,500,21539
Ivry-sur-Seine,FR,48.81568,2.38487,u09twq8egqrc,1000,57897
Issy-les-Moulineaux,FR,48.82104,2.27718,u09tezd5njkr,1000,61447
Fresnes,FR,48.75568,2.32241,u09tkgenkj6r,500,24803
Fontenay-aux-Roses,FR,48.79325,2.29275,u09ts4t92cn3,500,24680
Clamart,FR,48.80299,2.26692,u09tes6dp0dn,1000,51400
Choisy-le-Roi,FR,48.76846,2.41874,u09trn12bez7,500,35590
Chevilly-Larue,FR,48.76476,2.3503,u09tmmr7mfns,500,20125
Châtillon,FR,48.8024,2.29346,u09tshnn96xx,500,32383
Châtenay-Malabry,FR,48.76507,2.26655,u09t7t6mn7yj,500,32715
Charenton-le-Pont,FR,48.82209,2.41217,u09twzu3r9hq,500,30910
Cachan,FR,48.79632,2.33661,u09tt5j7nvqd,500,26540
Bagnolet,FR,48.86667,2.41667,u09tyzzubrxb,500,33504
Bagneux,FR,48.79565,2.30796,u09tsdbx727w,500,38900
Athis-Mons,FR,48.70522,2.39147,u09tn6t2mr16,500,31225
Alfortville,FR,48.80575,2.4204,u09txhf6p7jp,500,37290
Quinze-Vingts,FR,48.84656,2.37439,u09tyh0zz6c8,500,26265
Croulebarbe,FR,48.81003,2.35403,u09tttd5hc5f,500,20062
Gare,FR,48.83337,2.37513,u09ty1cdbxcq,1000,75580
Maison Blanche,FR,48.82586,2.3508,u09tv2rz1xgx,1000,64302
</code></pre>
<p>Below is the visual representation of the data sample:
<a href="https://i.sstatic.net/1r5J6D3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1r5J6D3L.png" alt="enter image description here" /></a></p>
<p>My goal is to find an efficient algorithm that would remove the intersecting circles and only keep the one with largest <code>population</code>.</p>
<p>My initial approach was to determine which circles intersect using the haversine formula. The problem was that checking each record for intersections against all others requires traversing the entire dataset, so the time complexity was too high.</p>
<p>My second approach was to segregate the dataset by country code, and run the comparisons by chunks:</p>
<pre class="lang-py prettyprint-override"><code>def _remove_intersecting_circles_for_country(df_country):
    """Helper function to remove intersections within a single country."""
    indices_to_remove = set()
    for i in range(len(df_country)):
        for j in range(i + 1, len(df_country)):
            distance = haversine(df_country['latitude'].iloc[i], df_country['longitude'].iloc[i],
                                 df_country['latitude'].iloc[j], df_country['longitude'].iloc[j])
            if distance < df_country['estimated_radius'].iloc[i] + df_country['estimated_radius'].iloc[j]:
                if df_country['population'].iloc[i] < df_country['population'].iloc[j]:
                    indices_to_remove.add(df_country.index[i])
                else:
                    indices_to_remove.add(df_country.index[j])
    return indices_to_remove


def remove_intersecting_circles(df):
    all_indices_to_remove = set()
    for country_code in df['country_code'].unique():
        df_country = df[df['country_code'] == country_code]
        all_indices_to_remove.update(_remove_intersecting_circles_for_country(df_country))
    return df.drop(index=all_indices_to_remove)
</code></pre>
<p>This has significantly improved the performance, because each record only needs to be checked against the records with the same <code>country_code</code>. But that still makes a lot of unnecessary comparisons.</p>
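One way to prune the comparisons further (a sketch independent of the dataframe itself) is a spatial index; for city-scale radii, straight-line chord distance on the sphere is a very close stand-in for haversine:

```python
import numpy as np
from scipy.spatial import cKDTree

EARTH_R = 6_371_000.0  # metres

def to_xyz(lat, lon):
    """Project lat/lon (degrees) onto a sphere of Earth radius, in metres."""
    lat = np.radians(np.asarray(lat, dtype=float))
    lon = np.radians(np.asarray(lon, dtype=float))
    return np.column_stack([
        np.cos(lat) * np.cos(lon),
        np.cos(lat) * np.sin(lon),
        np.sin(lat),
    ]) * EARTH_R

def drop_overlaps(lats, lons, radii, pops):
    """Keep the highest-population circle in every overlapping group."""
    xyz = to_xyz(lats, lons)
    radii = np.asarray(radii, dtype=float)
    tree = cKDTree(xyz)
    keep = np.ones(len(radii), dtype=bool)
    max_r = 2 * radii.max()  # no pair further apart than this can overlap
    for i in np.argsort(-np.asarray(pops)):  # biggest population first
        if not keep[i]:
            continue
        for j in tree.query_ball_point(xyz[i], max_r):
            if j != i and keep[j] and np.linalg.norm(xyz[i] - xyz[j]) < radii[i] + radii[j]:
                keep[j] = False
    return keep
```

Processing circles in descending population order implements "keep the one with the largest population" directly, and `query_ball_point` restricts each circle's candidate set to its spatial neighbourhood instead of the whole country.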
|
<python><pandas><dataframe><algorithm><gis>
|
2024-10-25 14:03:01
| 3
| 308
|
Sebastian Meckovski
|
79,125,968
| 18,769,241
|
How to check whether audio bytes contain empty noise or actual voice/signal?
|
<p>I use my microphone to get sound as follows:</p>
<pre><code>import pyaudio

FRAMES_PER_BUFFER = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 48000
RECORD_SECONDS = 2

audio = pyaudio.PyAudio()
stream = audio.open(format=FORMAT,
                    channels=CHANNELS,
                    rate=RATE,
                    input=True,
                    frames_per_buffer=FRAMES_PER_BUFFER,
                    input_device_index=2)
data = stream.read(FRAMES_PER_BUFFER)
</code></pre>
<p>I want to know whether <code>data</code> contains a voice signal or just empty noise. Note that the variable always contains bytes (empty or not) when I print it.</p>
<p>Is there a scipy/signal solution for this that doesn't require recording the sound to a file (which could then be processed as follows)?</p>
<pre><code>from scipy.io import wavfile

samplerate, data = wavfile.read('audio.wav')  # data is int16 for a 16-bit WAV
thres = 0.2 * 32767  # scale the threshold to the int16 range
isNoise = False
for i in data:
    if abs(i) > thres:
        isNoise = True
        break
if isNoise:
    print("Not empty")
</code></pre>
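Without writing a file, one common approach (a sketch; the 500 threshold is an assumption to tune against your microphone's noise floor) is to compute the RMS energy of each paInt16 buffer directly:

```python
import numpy as np

def is_speech(frame_bytes, threshold=500.0):
    """Rough energy gate: True if a paInt16 buffer is louder than `threshold` in RMS."""
    samples = np.frombuffer(frame_bytes, dtype=np.int16).astype(np.float64)
    if samples.size == 0:
        return False
    rms = float(np.sqrt(np.mean(samples ** 2)))
    return rms > threshold

# quick sanity check with synthetic buffers
silence = np.zeros(1024, dtype=np.int16).tobytes()
tone = (10000 * np.sin(np.linspace(0, 200 * np.pi, 1024))).astype(np.int16).tobytes()
print(is_speech(silence), is_speech(tone))  # False True
```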
|
<python><audio><pyaudio>
|
2024-10-25 13:43:47
| 0
| 571
|
Sam
|
79,125,851
| 14,120,387
|
Python Kubernetes MutatingWebhook Not Adding Labels
|
<p>I have created a Kubernetes MutatingWebhook using a Docker image I built from Python. The purpose of the webhook is to add labels to any Pod that is launched. All Kubernetes resources are running fine, however, when new Pods are launched they do not get the expected labels from the MutatingWebhook.</p>
<p>I get this error when I launch a new Pod:</p>
<pre><code>Error from server (InternalError): Internal error occurred: failed calling webhook "add-labels.k8s.io": failed to call webhook: Post "https://add-labels-webhook-svc.mutatingwh.svc:443/mutate?timeout=10s": dial tcp 10.3.33.250:443: connect: connection refused
</code></pre>
<p>Here is the Python webhook code:</p>
<pre><code>import json
from flask import Flask, request, jsonify
import base64

app = Flask(__name__)

@app.route('/mutate', methods=['POST'])
def mutate():
    # Read the admission review request
    admission_review = request.get_json()
    print("AdmissionReview Request:", json.dumps(admission_review))

    # Define the patch to add labels
    patch = [
        {
            "op": "add",
            "path": "/metadata/labels/live",
            "value": "true"
        },
        {
            "op": "add",
            "path": "/metadata/labels/environment",
            "value": "production"
        },
        {
            "op": "add",
            "path": "/metadata/labels/service",
            "value": "my-service"
        },
        {
            "op": "add",
            "path": "/metadata/labels/version",
            "value": "7.55.2"
        }
    ]

    # Encode the patch to base64
    patch_base64 = base64.b64encode(json.dumps(patch).encode()).decode()

    # Prepare the admission review response
    admission_response = {
        "uid": admission_review['request']['uid'],
        "allowed": True,
        "patchType": "JSONPatch",
        "patch": patch_base64
    }

    # Return the admission review response
    response = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": admission_response
    }
    return jsonify(response)

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=443, ssl_context=('/tls/tls.crt', '/tls/tls.key'), debug=True)
</code></pre>
<p>Here are the various K8s manifest files that I used for the deployment, services, etc.</p>
<p>deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: add-labels-webhook
  namespace: mutatingwh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: add-labels-webhook
  template:
    metadata:
      labels:
        app: add-labels-webhook
    spec:
      containers:
        - name: webhook
          image: 805960120419.dkr.ecr.us-east-1.amazonaws.com/noc/add-labels-webhook:latest
          ports:
            - containerPort: 443
          volumeMounts:
            - name: tls-certs
              mountPath: /tls
              readOnly: true
      imagePullSecrets:
        - name: ecr-secret
      volumes:
        - name: tls-certs
          secret:
            secretName: webhook-tls
</code></pre>
<p>service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: add-labels-webhook-svc
  namespace: mutatingwh
spec:
  ports:
    - port: 443
      targetPort: 443
  selector:
    app: add-labels-webhook
</code></pre>
<p>mutatingwebhookconfiguration.yaml</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: add-labels-webhook
webhooks:
  - name: add-labels.k8s.io
    clientConfig:
      service:
        name: add-labels-webhook-svc
        namespace: mutatingwh
        path: "/mutate"
      caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURNekNDQWh1Z0F3SUJBZ0lVTEpJVlFkcXBiR05yb3N3Rk9SRDA0SzREbnpjd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0tURW5NQ1VHQTFVRUF3d2VZV1JrTFd4aFltVnNjeTEzWldKb2IyOXJMbVJsWm1GMWJIUXVjM1pqTUI0WApEVEkwTVRBeU5UQTRORGd3T0ZvWERUSTFNVEF5TlRBNE5EZ3dPRm93S1RFbk1DVUdBMVVFQXd3ZVlXUmtMV3hoClltVnNjeTEzWldKb2IyOXJMbVJsWm1GMWJIUXVjM1pqTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEEKTUlJQkNnS0NBUUVBc05CWG0yWE1TZVNTUUxsZ2ZvR01wRk0xeHRtdmxxN1JrK2RoTUw0b211VVN1NDJaMk9VcAp1S3RJdkdtRnorYUpBU3FONmFTY0xJbFFTTElSdmNqNnJ3aXVwcGZVQldSNkdDeEhGYW52R0VxT0YxSkR6OEptCko4Zkp3b09aMFJqQmtYdWFLSFdtNC9NSHV6T240SUdnV3dRRnN2Tk1ZUitFMkRsRjZnOWE4c0cwbjVQTVVydHkKNGdiVDduY2pFZnc2NFc2SmZqRzFNTDNLTXI3aUZ1NXBxM0hJMXh2cSt5T2lDNGQ4WEVJL3FibU9vbDdOYUZDRwpyNVZRd0hLMWx4aHBZY2VWVTNlQWIxS0tHL3FDSVdURVptbTlFbGZuMWJRbzZtOEVxenVES2QyTHE1VmRqUWVnCmFJRkNMYmd2YWx0S1FWaGIwUEhNRjlqUE5yN0xLenRsclFJREFRQUJvMU13VVRBZEJnTlZIUTRFRmdRVUpYcDEKdnVmcXVFbW9VTDhmL3BUZXEzQ25vME13SHdZRFZSMGpCQmd3Rm9BVUpYcDF2dWZxdUVtb1VMOGYvcFRlcTNDbgpvME13RHdZRFZSMFRBUUgvQkFVd0F3RUIvekFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBSTNUczh0UE1zR29kCjZvU2ZxanQ0Rzk0SWR4SjVRZkZMaG5hTnBmS0tBRldDZVI2cVEzYWtWWFNnVzVFSlBud0dtdjBYRkh2YWtXa1EKc3A5QjhvaVI1OWlWRHpFME5QdytHQmkzMWkrTW9aaDAzYXYyWFg0K1FKOTFkRENldkFZeEQ4RkxackZ3UTJEMwpjUVdmQk1kSS9JSys5V1dWNXRtZmdtN2d3bi9XNy9nTC9ScVNkaVQwWllmRjZSVjhNeHFaano4bGFWUXhCU2FYCi9kNVROZi9XcldIUjlIek1xRU91UXBIL3NqRVlUM2xkbTA2aTdKMFVsOFJDQTkyL3c2L3dLRk82SHFYVWR1RVoKS25KTHdZV1A4YnJkMHhTcFhYUFRrbHl0T3ZLYkt3N2ljeDRaYzkxbXJOMjRGZ2k5OWN3L2tIZng4cnQ2RVJqawp0S3BLSCt2OVpRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    admissionReviewVersions: ["v1"]
    sideEffects: None
    rules:
      - operations: ["CREATE"]
        apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
    failurePolicy: Fail
</code></pre>
<p>My question is why is everything running fine, but when new Pods are created they do not get the labels from the Python webhook?</p>
|
<python><kubernetes>
|
2024-10-25 13:11:01
| 1
| 418
|
Brian G
|
79,125,836
| 13,981,672
|
Using linux's perf tool with Python virtual environnent
|
<p>Assume the following setup:</p>
<pre class="lang-bash prettyprint-override"><code>source -- "<some_python_venv_directory>/bin/activate"
perf record -- python3 test.py
</code></pre>
<p>The <code>python3</code> binary is expected to be the one in <code>${VIRTUAL_ENV}/bin</code>. But <code>perf</code> sneakily prepends to the <code>PATH</code> environment variable: it effectively does <code>export PATH=/usr/libexec/perf-core:/usr/bin:${PATH}</code>.</p>
<p>The <code>python3</code> I end up using is <code>/usr/bin/python3</code>. The issue is that the venv is built for a Python 3.11 which is different from the Python 3.6 I have in <code>/usr/bin</code>.</p>
<p>I was wondering:</p>
<ul>
<li>where this <code>perf</code> behavior is documented;</li>
<li>why it does it;</li>
<li>and apart from using <code>"${VIRTUAL_ENV}/bin/python3"</code>, what can be done to avoid this issue?</li>
</ul>
|
<python><virtualenv><perf>
|
2024-10-25 13:07:50
| 0
| 675
|
Etienne M
|
79,125,784
| 8,458,083
|
How to efficiently use generics in Python to type-hint a function with two arguments of the same type (either int or str)?
|
<p>I have a function that takes two arguments of the same type, which could be either int or str. My initial implementation without generics looks like this:
</p>
<pre><code>def combine1(a: int | str, b: int | str) -> int | str:
    match a:
        case int():
            match b:
                case int():
                    return a + b
                case str():
                    return "error"
        case str():
            match b:
                case int():
                    return "error"
                case str():
                    return a + b
</code></pre>
<p>This implementation is inefficient because it checks the type of b even though we know it must have the same type as a. It also allows for the possibility of a and b having different types, which isn't the intended behavior.
To improve this, I want to use generics to make mypy understand that a and b have the same type. Here's my attempt:
</p>
<pre><code>def combine2[T: int | str](a: T, b: T) -> T:
    match a:
        case int():
            return a + b
        case str():
            return a + b
</code></pre>
<p>However, mypy reports the following errors:</p>
<blockquote>
<pre><code>error: Incompatible return value type (got "int", expected "T")  [return-value]
error: Unsupported operand types for + ("int" and "T")  [operator]
error: Incompatible return value type (got "str", expected "T")  [return-value]
error: Unsupported operand types for + ("T" and "T")  [operator]
</code></pre>
</blockquote>
<p>How can I modify this function to correctly use generics and satisfy mypy's type checking, while ensuring that the function only accepts two arguments of the same type (either both int or both str) and avoiding the inefficiency of the first implementation?</p>
<p>This question is not the same as this one: <a href="https://stackoverflow.com/questions/69502152/how-to-use-type-hint-to-make-sure-two-variables-always-have-the-same-type">How to use type hint to make sure two variables always have the same type?</a></p>
<p>because I am trying to solve this problem using generics, which that question doesn't. Furthermore, that question doesn't use pattern matching inside the function. My point is to avoid checking the type of the variables twice, because I already know they have the same type; I would like to just check the type T.</p>
<p>A good solution, which unfortunately doesn't work, would be to use pattern matching on the type T instead of on a variable.</p>
<p>I tried this too:</p>
<pre><code>def combine3[T: int | str](a: T, b: T) -> T:
    match a, b:
        case int(), int():
            return a + b
        case str(), str():
            return a + b
</code></pre>
<blockquote>
<pre><code>error: Missing return statement  [return]
error: Incompatible return value type (got "int", expected "T")  [return-value]
error: Incompatible return value type (got "str", expected "T")  [return-value]
</code></pre>
</blockquote>
<p>mypy isn't able to understand that the two cases I proposed are the only possible ones. To remove the first error I have to add a default case:</p>
<pre><code>def combine4[T: int | str](a: T, b: T) -> T:
    match a, b:
        case int(), int():
            return a + b
        case str(), str():
            return a + b
        case _:
            return a
</code></pre>
<blockquote>
<pre><code>error: Incompatible return value type (got "int", expected "T") [return-value]
error: Incompatible return value type (got "str", expected "T") [return-value]
</code></pre>
</blockquote>
<p>But I really cannot understand why it says the function returns int or str instead of T.</p>
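One way to get mypy to accept a single, non-redundant implementation (a sketch using `typing.overload` rather than a generic) is to encode the same-type contract in the signatures and keep the implementation body untyped:

```python
from typing import overload

@overload
def combine(a: int, b: int) -> int: ...
@overload
def combine(a: str, b: str) -> str: ...

def combine(a, b):
    # the overloads guarantee "same type in, same type out";
    # both int + int and str + str support +, so no type check is needed here
    return a + b

print(combine(1, 2))
print(combine("a", "b"))
```

Calls like `combine(1, "x")` are then rejected by mypy at the call site, because no overload matches.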
|
<python><generics><python-typing><mypy>
|
2024-10-25 12:55:03
| 2
| 2,017
|
Pierre-olivier Gendraud
|
79,125,764
| 6,930,340
|
Find intersection of columns from different polars dataframes
|
<p>I have a variable number of <code>pl.DataFrame</code>s which share some columns (e.g. <code>symbol</code> and <code>date</code>). Each <code>pl.DataFrame</code> has a number of additional columns, which are not important for the actual task.</p>
<p>The <code>symbol</code> columns do have exactly the same content (the different <code>str</code> values exist in every dataframe). The <code>date</code> columns are somewhat different in the way that they don't have the exact same dates in every <code>pl.DataFrame</code>.</p>
<p>The actual task is to find the common dates per grouping (i.e. <code>symbol</code>) and filter each <code>pl.DataFrame</code> accordingly.</p>
<p>Here are three example <code>pl.DataFrame</code>s:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df1 = pl.from_repr("""
┌────────┬────────────┬────────────────┐
│ symbol ┆ date ┆ some_other_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪════════════════╡
│ AAPL ┆ 2023-01-01 ┆ 0 │
│ AAPL ┆ 2023-01-02 ┆ 1 │
│ AAPL ┆ 2023-01-03 ┆ 2 │
│ AAPL ┆ 2023-01-04 ┆ 3 │
│ GOOGL ┆ 2023-01-02 ┆ 4 │
│ GOOGL ┆ 2023-01-03 ┆ 5 │
│ GOOGL ┆ 2023-01-04 ┆ 6 │
└────────┴────────────┴────────────────┘
""")
</code></pre>
<pre class="lang-py prettyprint-override"><code>df2 = pl.from_repr("""
┌────────┬────────────┬─────────────┐
│ symbol ┆ date ┆ another_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪═════════════╡
│ AAPL ┆ 2023-01-02 ┆ 0 │
│ AAPL ┆ 2023-01-03 ┆ 1 │
│ AAPL ┆ 2023-01-04 ┆ 2 │
│ GOOGL ┆ 2023-01-01 ┆ 3 │
│ GOOGL ┆ 2023-01-02 ┆ 4 │
│ GOOGL ┆ 2023-01-03 ┆ 5 │
│ GOOGL ┆ 2023-01-04 ┆ 6 │
│ GOOGL ┆ 2023-01-05 ┆ 7 │
└────────┴────────────┴─────────────┘
""")
</code></pre>
<pre class="lang-py prettyprint-override"><code>df3 = pl.from_repr("""
┌────────┬────────────┬──────────┐
│ symbol ┆ date ┆ some_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪══════════╡
│ AAPL ┆ 2023-01-02 ┆ 0 │
│ AAPL ┆ 2023-01-03 ┆ 1 │
│ AAPL ┆ 2023-01-04 ┆ 2 │
│ AAPL ┆ 2023-01-05 ┆ 3 │
│ GOOGL ┆ 2023-01-03 ┆ 4 │
│ GOOGL ┆ 2023-01-04 ┆ 5 │
└────────┴────────────┴──────────┘
""")
</code></pre>
<p>Now, the first step would be to find the common dates for every <code>symbol</code>.<br />
AAPL: <code>["2023-01-02", "2023-01-03", "2023-01-04"]</code><br />
GOOGL: <code>["2023-01-03", "2023-01-04"]</code></p>
<p>That means, each <code>pl.DataFrame</code> needs to be filtered accordingly. The expected result looks like this:</p>
<p>DataFrame 1 filtered:</p>
<pre class="lang-py prettyprint-override"><code>shape: (5, 3)
┌────────┬────────────┬────────────────┐
│ symbol ┆ date ┆ some_other_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪════════════════╡
│ AAPL ┆ 2023-01-02 ┆ 1 │
│ AAPL ┆ 2023-01-03 ┆ 2 │
│ AAPL ┆ 2023-01-04 ┆ 3 │
│ GOOGL ┆ 2023-01-03 ┆ 5 │
│ GOOGL ┆ 2023-01-04 ┆ 6 │
└────────┴────────────┴────────────────┘
</code></pre>
<p>DataFrame 2 filtered:</p>
<pre class="lang-py prettyprint-override"><code>shape: (5, 3)
┌────────┬────────────┬─────────────┐
│ symbol ┆ date ┆ another_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪═════════════╡
│ AAPL ┆ 2023-01-02 ┆ 0 │
│ AAPL ┆ 2023-01-03 ┆ 1 │
│ AAPL ┆ 2023-01-04 ┆ 2 │
│ GOOGL ┆ 2023-01-03 ┆ 5 │
│ GOOGL ┆ 2023-01-04 ┆ 6 │
└────────┴────────────┴─────────────┘
</code></pre>
<p>DataFrame 3 filtered:</p>
<pre class="lang-py prettyprint-override"><code>shape: (5, 3)
┌────────┬────────────┬──────────┐
│ symbol ┆ date ┆ some_col │
│ --- ┆ --- ┆ --- │
│ str ┆ str ┆ i64 │
╞════════╪════════════╪══════════╡
│ AAPL ┆ 2023-01-02 ┆ 0 │
│ AAPL ┆ 2023-01-03 ┆ 1 │
│ AAPL ┆ 2023-01-04 ┆ 2 │
│ GOOGL ┆ 2023-01-03 ┆ 4 │
│ GOOGL ┆ 2023-01-04 ┆ 5 │
└────────┴────────────┴──────────┘
</code></pre>
|
<python><dataframe><python-polars>
|
2024-10-25 12:51:39
| 2
| 5,167
|
Andi
|
79,125,755
| 8,044,858
|
numpy's kronecker product outputs non-zero angles outside valid complex-valued blocks
|
<p>It seems <code>numpy.kron</code> outputs unexpected non-zero angles when applied to a complex-valued matrix, that is, non-zero angles outside of the blocks that reproduce the input complex-valued matrix. Here is the code generating toy inputs, real and complex:</p>
<pre><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
Id = np.identity(3)
ones_real = np.ones((2,2))
rand_complex = np.random.uniform(-1, 1, (2,2)) + 1.j * np.random.uniform(-1, 1, (2,2))
print(Id)
print(ones_real)
print(rand_complex)
### PLOT DATA
fig1, ax = plt.subplots(4, 1, figsize=(10, 20))
sns.heatmap(Id, ax=ax[0])
sns.heatmap(ones_real, ax=ax[1])
sns.heatmap(np.abs(rand_complex), ax=ax[2])
sns.heatmap(np.angle(rand_complex), ax=ax[3])
</code></pre>
<p>Previous inputs visualized as heatmaps :</p>
<p><a href="https://i.sstatic.net/9nGfEa1Km.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nGfEa1Km.png" alt="inputs as generated by first subplots" /></a></p>
<p>Now, when the kronecker product is applied using numpy's function, non-zero angles may appear off the valid diagonal of blocks:</p>
<pre><code>### COMPUTE KRON PRODUCTS
kron_real = np.kron(Id, ones_real)
kron_complex = np.kron(Id, rand_complex)
### PLOT KRON PRODUCTS
fig2, ax = plt.subplots(3, 1, figsize=(10, 20))
sns.heatmap(kron_real, ax=ax[0])
sns.heatmap(np.abs(kron_complex), ax=ax[1])
sns.heatmap(np.angle(kron_complex), ax=ax[2])
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/w0ccOlY8m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w0ccOlY8m.png" alt="outputs from second subplots" /></a></p>
<p>I suspect this is due to <code>numpy.angle</code>'s reliance on an arctangent function, i.e. numerical instability. Still, is this known? Did I miss something about using <code>numpy.kron</code> with complex-valued inputs?</p>
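<p>One check worth running (an illustrative sketch, not from the setup above): whether the off-block entries are exact zeros with a <em>negative</em>-zero real part, since <code>np.angle</code> of such a value is pi rather than 0, which could produce non-zero angles without any numerical instability:</p>

```python
import numpy as np

# Illustrative value standing in for one entry of rand_complex.
z = -1.0 + 2.0j
block = np.kron(np.identity(2), np.array([[z]]))
off = block[0, 1]  # an entry outside the diagonal blocks

# The off-block entry has zero magnitude, but 0 * z can carry a
# negative-zero real part, and arctan2(+0.0, -0.0) is pi, not 0.
print(abs(off), np.copysign(1.0, off.real), np.angle(off))
```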
|
<python><numpy><complex-numbers><kronecker-product>
|
2024-10-25 12:48:53
| 0
| 1,079
|
Blupon
|
79,125,629
| 5,678,653
|
Keeping SciPy Voronoi Iteration Bounded
|
<p>I'm looking at various means of balancing points in a given space, and Voronoi iteration/relaxation (aka Lloyd's algorithm) has been beckoning me.</p>
<p>However, when using SciPy Voronoi, the points seem to be leaking out of the boundary, and spreading into the known universe, which doesn't work for me at all!</p>
<p>The following generates 100 points in the region [-0.5, 0.5], but after 60 iterations they have spread to a region of roughly -800k to 600k (depending on the start conditions).
What I am hoping for is an evenly spaced set of points within [-0.5, 0.5].</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.spatial import Voronoi, voronoi_plot_2d
from random import random
if __name__ == '__main__':
points = [[random()-0.5, random()-0.5] for pt in range(100)]
voronoi = None
for i in range(60):
voronoi = Voronoi(np.array(points))
points = []
for reg in voronoi.point_region:
if len(voronoi.regions[reg]) > 0:
points.append(np.mean([voronoi.vertices[i] for i in voronoi.regions[reg] if i >= 0], axis=0))
plt = voronoi_plot_2d(voronoi, show_vertices=False, line_colors='orange', line_width=2, line_alpha=0.6, point_size=2)
plt.show()
</code></pre>
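<p>For reference, one crude way to keep the iteration bounded is to clamp each centroid back into the box after every step. This is a sketch under the assumption that clamping (rather than properly clipping each cell against the boundary) is acceptable:</p>

```python
import numpy as np
from scipy.spatial import Voronoi

def clamped_lloyd_step(points, lo=-0.5, hi=0.5):
    """One relaxation step: move each point to the centroid of the
    finite vertices of its Voronoi cell, then clamp into [lo, hi]^2.
    This is a crude stand-in for clipping cells against the boundary."""
    vor = Voronoi(np.asarray(points))
    new_points = []
    for reg in vor.point_region:
        region = [i for i in vor.regions[reg] if i >= 0]
        if region:
            centroid = vor.vertices[region].mean(axis=0)
            new_points.append(np.clip(centroid, lo, hi))
    return np.array(new_points)
```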
|
<python><scipy><voronoi><relaxer>
|
2024-10-25 12:15:12
| 1
| 2,248
|
Konchog
|
79,125,570
| 10,061,888
|
Debounced Completer autoselects highlighted option when I stop arrowing through results?
|
<p>I'm trying to create a debounced Typeahead widget that populates with data from my database. When I try to down arrow through the query results, the highlighted result autoselects when I stop pressing the down arrow key.</p>
<p>I looked at the docs for <a href="https://doc.qt.io/qtforpython-5/PySide2/QtWidgets/QCompleter.html" rel="nofollow noreferrer">QCompleter</a> and the return type of its popup, <a href="https://doc.qt.io/qtforpython-5/PySide2/QtWidgets/QAbstractItemView.html#PySide2.QtWidgets.PySide2.QtWidgets.QAbstractItemView" rel="nofollow noreferrer">QAbstractItemView</a>. The QAbstractItemView docs say that Ctrl+Arrow keys "Changes the current item but does not select it". However, when I try navigating with the Ctrl key held, the behavior is no different.</p>
<p>The code for my widget is below:</p>
<pre><code># typeahead.py
from PyQt5.QtCore import QTimer, QStringListModel
from PyQt5.QtWidgets import QLineEdit, QCompleter, QAbstractItemView
from utk_apis.sql_engine import SQLEngine
class Typeahead(QLineEdit):
def __init__(self, parent=None):
super(Typeahead, self).__init__(parent)
self.results = []
self.selected_result = None
self.engine = None
self.current_index = -1
self.ignore_text_change = False
self.completer = QCompleter(self)
self.completer.setWidget(self)
self.completer.setCompletionMode(QCompleter.PopupCompletion)
self.completer.popup().setSelectionMode(
QAbstractItemView.SingleSelection
) # Single item selection in popup
self.setCompleter(self.completer)
self.completer.popup().setAlternatingRowColors(True)
# Create a timer for debouncing
self.debounce_timer = QTimer(self)
self.debounce_timer.setSingleShot(True) # Only trigger once after interval
self.debounce_timer.setInterval(300) # 300 ms debounce interval
# Connect the timeout of the timer to the emit_text_changed method
self.debounce_timer.timeout.connect(self.emit_text_changed)
# Connect the textChanged signal to start the debounce timer
self.textChanged.connect(self.start_debounce)
# Connect the completer's activated signal to set the selected item only when Enter is pressed
self.completer.activated.connect(self.on_completer_activated)
def start_debounce(self):
"""Start the debounce timer when the text changes."""
self.debounce_timer.start() # This starts or restarts the debounce timer
def emit_text_changed(self):
"""Fetch results and update completer when text changes."""
print(f'Emitting text changed event: {self.text()} Ignore text change: {self.ignore_text_change}')
if self.ignore_text_change is True:
self.ignore_text_change = False
return
if self.engine is None:
self.engine = SQLEngine(env=self.property("env"))
# Run the query to fetch data from the database
data = self.engine.run_query(self._build_query(), results_as_dict=True)
# Convert data to the list of results. Store for sharing with other Typeahead instances
self.results = [
{
"text": f"{row['primary_key_1']} ({row['primary_key_2']}, {row['primary_key_3']})",
"primary_key_1": row["primary_key_1"],
"primary_key_2": row["primary_key_2"],
"primary_key_3": row["primary_key_3"],
}
for row in data
]
# Update the completer with the new results
self.update_completer()
def update_completer(self):
"""Update the QCompleter with the new results."""
completer_model = QStringListModel([result["text"] for result in self.results])
#self.completer.model().setStringList(completer_model)
self.completer.setModel(completer_model)
# Set the width of the popup based on the longest string to avoid truncation
longest_string = max(
[len(result["text"]) for result in self.results], default=0
)
self.completer.popup().setMinimumWidth(
longest_string * 15
        )  # Adjust the multiplier (15) to fit the font size
# Manually open the completer dropdown
if self.results:
self.completer.complete() # Force the dropdown to show up again
def on_completer_activated(self, text):
"""Handle what happens when an item in the dropdown is selected."""
# Only set the text when the user selects an item by pressing Enter
selected_item = next(
(result for result in self.results if result["text"] == text), None
)
if selected_item:
self.setText(selected_item["text"])
def _build_query(self):
"""Build the SQL query for fetching suggestions."""
query = f"SELECT {','.join(self.property('primary_keys'))} FROM {self.property('targetTable')} WHERE {self.property('targetField')} LIKE '%{self.text()}%'"
return query
</code></pre>
|
<python><pyqt5>
|
2024-10-25 11:59:07
| 1
| 359
|
J.spenc
|
79,125,477
| 8,914,303
|
Nested Pydantic Models and Generics
|
<p>I am trying to reduce some code duplication by using generic types in Python. With this minimal example</p>
<pre><code>from typing import Generic, TypeVar
from pydantic import BaseModel
T = TypeVar("T")
class Foo(BaseModel, Generic[T]):
a: T
Bar = list[Foo[T]]
class Baz(BaseModel, Generic[T]):
b: Bar[T]
</code></pre>
<p>I get the following type error:</p>
<blockquote>
<p>TypeError: There are no type variables left in <code>list[__main__.Foo]</code></p>
</blockquote>
<p>It all works fine if I replace the alias <code>Bar</code> in the <code>Baz</code> class with <code>list[Foo[T]]</code>, so is this a general issue with aliases? Thanks.</p>
|
<python><generics><python-typing><pydantic>
|
2024-10-25 11:31:59
| 1
| 1,699
|
CodeZero
|
79,125,266
| 626,804
|
Pandas HTML generation, reproducible output
|
<p>I am writing a Pandas dataframe as HTML using this code</p>
<pre><code>import pandas as pd
df = pd.DataFrame({ "a": [1] })
print(df.style.to_html())
</code></pre>
<p>I ran it once and it produced this output</p>
<pre><code><style type="text/css">
</style>
<table id="T_f9297">
<thead>
<tr>
<th class="blank level0" >&nbsp;</th>
<th id="T_f9297_level0_col0" class="col_heading level0 col0" >a</th>
</tr>
</thead>
<tbody>
<tr>
<th id="T_f9297_level0_row0" class="row_heading level0 row0" >0</th>
<td id="T_f9297_row0_col0" class="data row0 col0" >1</td>
</tr>
</tbody>
</table>
</code></pre>
<p>But when I run the same program again a moment later it gives</p>
<pre><code><style type="text/css">
</style>
<table id="T_d628d">
<thead>
<tr>
<th class="blank level0" >&nbsp;</th>
<th id="T_d628d_level0_col0" class="col_heading level0 col0" >a</th>
</tr>
</thead>
<tbody>
<tr>
<th id="T_d628d_level0_row0" class="row_heading level0 row0" >0</th>
<td id="T_d628d_row0_col0" class="data row0 col0" >1</td>
</tr>
</tbody>
</table>
</code></pre>
<p>I would like to get the same output each time. That is, the <code>T_f9297</code> and <code>T_d628d</code> identifiers shouldn't change from one run to the next. How can I get that?</p>
<p>I believe that I could generate HTML without any CSS styling and without the identifiers, but I do want CSS (I just omitted it from my example) and I'm happy to have the identifiers, as long as I get the same output given the same input data.</p>
<p>I am using Python 3.11.7 and Pandas 2.1.4.</p>
|
<python><html><pandas><deterministic>
|
2024-10-25 10:22:23
| 1
| 1,602
|
Ed Avis
|
79,125,214
| 4,119,911
|
Sending an array from jQuery to a Django view
|
<p>I am making a very small application to learn Django. I am sending a nested array from jQuery and trying to loop over it in my Django view.</p>
<p>The jQuery code is as follows:</p>
<pre><code>$(document).on('click','#exModel',function () {
const sending = [];
$("table tr").each(function () {
var p1 = $(this).find("th label").html();
var p2 = $(this).find("td input").attr('id');
var p3 = $(this).find("td input").val();
const build = [];
build.push(p1, p2, p3);
sending.push(build);
console.log(sending);
});
$.ajax({
url: '../coreqc/exModel/',
data: {'sending': sending},
type: 'post',
headers: {'X-CSRFToken': '{{ csrf_token }}'},
async: 'true',
success: function (data) {
console.log("I made it back")
//dom: 'Bfrtip',
}
});
});
</code></pre>
<p>The above works and takes the following form in the console. (Note that the third value is intentionally empty, as I submitted the form with no values in the fields to get the console readout.)</p>
<pre><code>[Log] [["Product A", "1", ""], ["Product B", "2", ""], ["Product C", "3", ""], ["Product D", "4", ""], ["Product E:", "5", ""], ["Product F", "6", ""], ["Product G", "7", ""], ["Product H", "8", ""], ["Product I", "9", ""], ["Product K", "10", ""], …] (36) (coreqc, line 491)
[Log] I made it back # This is the success text in the above jQuery code
</code></pre>
<p>It is making it to my view and I am able to print() the output to the shell:</p>
<p>exModel view:</p>
<pre><code>def exModel(request):
sentData = request.POST
print(sentData)
template = loader.get_template('coreqc/tester.html')
context = {
'sentData':sentData
}
return HttpResponse(template.render(context, request))
</code></pre>
<p>Now, <code>sentData</code> does print to the shell, but it does not look right to me, or at least the <code>sending[1][]</code> part does not: I do not understand the empty square brackets. I cannot access it like <code>sending[1][2]</code>; that gives a dictionary key error.</p>
<pre><code><QueryDict: {'sending[0][]': ['Product A:', '1', ''], 'sending[1][]': ['Product B', '2', ''], 'sending[2][]': ['Product C', '3', ''], 'sending[3][]': ['Product D', '4', ''], 'sending[4][]': ['Product E', '5', ''], 'sending[5][]': ['Product F', '6', ''], 'sending[6][]': ['Product G', '7', ''], 'sending[7][]': ['Product I', '8', '']}>
</code></pre>
<p>What I would like to be able to do is loop through each of the values in the QueryDict in my view, not just print them. However, I am unsure how to access them, or whether what is being sent is accessible at all.</p>
<p>Calling <code>.values()</code> works and I can print it to the console; it looks the same as above.</p>
<p>I can loop and print like so:</p>
<pre><code>for x, obj in sentData.items():
print(x)
for y in obj:
print(y + ':', obj[y])
</code></pre>
<p>However, I just get this output:</p>
<pre><code>sending[0][]
sending[1][]
sending[2][]
sending[3][]
</code></pre>
<p>What I need is to access to the inner values i.e. "Product A", and I am not quite sure on how I do this.</p>
<p>So in summary:</p>
<ol>
<li>Am I sending the data from jQuery in the right way, i.e. in a form that Django can handle?</li>
<li>How do I loop over the data to access each field?</li>
</ol>
<p>Many thanks for any help.</p>
<p>Update: Following on from the comments, I have looked closely at the content being sent by jQuery. I found that despite changing headers etc., it was coming across as URL-encoded. I fixed the headers and tested the content type, and it now comes across as application/json. However, the issue I have now is that <code>request.body</code> is full of '%' symbols, like so:</p>
<pre><code>application/json
[26/Oct/2024 09:10:09] "POST /coreqc/exModel/ HTTP/1.1" 200 4631
b'sending%5B0%5D%5B%5D - this continues. I believe all the data is there, but maybe it's a Unicode issue.
</code></pre>
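<p>For context, the percent signs appear to be ordinary URL encoding (<code>%5B</code> is <code>[</code> and <code>%5D</code> is <code>]</code>), so a body of that shape can at least be decoded with the standard library. This sketch uses hypothetical values, not my real payload:</p>

```python
from urllib.parse import parse_qs

# Hypothetical body resembling the logged request.body above.
body = b"sending%5B0%5D%5B%5D=Product+A&sending%5B0%5D%5B%5D=1&sending%5B0%5D%5B%5D="
# keep_blank_values preserves the empty third field.
decoded = parse_qs(body.decode(), keep_blank_values=True)
print(decoded)  # {'sending[0][]': ['Product A', '1', '']}
```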
|
<python><jquery><django><django-views>
|
2024-10-25 10:09:47
| 2
| 360
|
fcreav
|
79,124,974
| 2,265,812
|
How to type hint a method returning a list of generic dataclass instances?
|
<p>I was wondering if it was possible to get a Union return type for <code>get_bars</code> in this scenario using <code>pyright</code>:</p>
<pre><code>from dataclasses import dataclass
@dataclass
class Bar[T]:
a: T
@dataclass
class Foo:
bars: list[Bar]
def get_bars(self):
return self.bars
</code></pre>
<p>I'm not sure how to type <code>bars</code> in <code>Foo</code> (or which TypeVar to pass to <code>Foo</code>). The idea is that <code>get_bars</code> should return a union of <code>Bar[T]</code>, for instance:</p>
<pre><code>foo = Foo(bars=[Bar[int](a=1), Bar[str](a='bar')])
foo.get_bars() # Should be inferred as a list[Bar[int] | Bar[str]]
</code></pre>
|
<python><generics><python-typing><pyright>
|
2024-10-25 08:56:39
| 0
| 1,043
|
Orelus
|
79,124,776
| 1,406,168
|
CBS Authentication Expired - Azure Service bus
|
<p>I am trying to send a simple message to a Service Bus queue locally. I have a C# project locally too, in Visual Studio, that works against the same resource with no problems.</p>
<p>But in Visual Studio Code with Python I cannot get it working.</p>
<p>First I log in to Azure with <code>azd login</code> successfully.</p>
<p>I have the following script:</p>
<pre><code>from azure.servicebus import ServiceBusClient, ServiceBusMessage
from azure.identity import DefaultAzureCredential as cho
rith_servicebus_name = "xxx.servicebus.windows.net"
q_name = "xxx"
credential = cho()
servicebus_client = ServiceBusClient(rith_servicebus_name , credential)
rith = servicebus_client.get_queue_sender(queue_name=q_name)
with rith:
val = ServiceBusMessage("Hello Rithwik")
rith.send_messages(val)
</code></pre>
<p>I get following error:</p>
<blockquote>
<p>Exception has occurred: ServiceBusAuthenticationError CBS
Authentication Expired. Error condition: amqp:internal-error.
azure.servicebus._pyamqp.error.TokenExpired: Error condition:
ErrorCondition.InternalError Error Description: CBS Authentication
Expired.</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>File "C:\Development\PortHierarchyData\service-bus-tester.py", line
9, in
with rith: azure.servicebus.exceptions.ServiceBusAuthenticationError: CBS
Authentication Expired. Error condition: amqp:internal-error.</p>
</blockquote>
<p>When running the script, it seems to be trying: some threads start and close, but then it throws the error after around 10 seconds.</p>
<p>Any suggestions on what can cause this or how to troubleshoot? It is kind of strange, as my Visual Studio project with .NET works fine, so it is not network related.</p>
|
<python><azure><azureservicebus><azure-managed-identity>
|
2024-10-25 07:59:07
| 1
| 5,363
|
Thomas Segato
|
79,124,373
| 511,302
|
How to print errors like python does by default?
|
<p>Consider the following erroneous function:</p>
<pre><code>def inner():
raise Exception("message")
</code></pre>
<p>If I run the function I get an error like:</p>
<pre><code>Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
^^^^^^
File "<input>", line 1, in <module>
File "<input>", line 2, in inner
Exception: message
</code></pre>
<p>Now, for logging purposes, I wish to reproduce this kind of error message in code. What I tried was <code>traceback.format_exc()</code> as well as unpacking <code>sys.exc_info()</code>:</p>
<pre><code>def outer():
try:
inner()
except Exception as e:
exc_type, exc_obj, exc_tb = sys.exc_info()
print(exc_type)
print(exc_obj)
print(exc_tb)
print(traceback.format_tb(exc_tb))
print(traceback.format_exc())
</code></pre>
<p>which gives:</p>
<pre><code>>>> outer()
<class 'Exception'>
message
<traceback object at 0x10affca00>
[' File "<input>", line 3, in outer\n', ' File "<input>", line 2, in inner\n']
Traceback (most recent call last):
File "<input>", line 3, in outer
File "<input>", line 2, in inner
Exception: message
</code></pre>
<p>While <code>format_exc()</code> correctly formats the latter part, it does not include the outermost frame's information:</p>
<pre><code>/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevconsole.py", line 364, in runcode
coro = func()
</code></pre>
<p>How would I get that information and log it?</p>
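<p>For comparison, the closest I have come is prepending <code>traceback.format_stack()</code> frames, which capture the caller frames that <code>format_exc()</code> omits. This is only a sketch; the exact frame lines will differ per environment:</p>

```python
import sys
import traceback

def inner():
    raise Exception("message")

def outer():
    try:
        inner()
    except Exception:
        etype, evalue, tb = sys.exc_info()
        # Frames of the *callers* of this function (format_exc() omits these);
        # drop the last entry, which is this very format_stack() call.
        caller_frames = traceback.format_stack()[:-1]
        exc_lines = traceback.format_exception(etype, evalue, tb)
        # exc_lines[0] is "Traceback (most recent call last):\n".
        return exc_lines[0] + "".join(caller_frames) + "".join(exc_lines[1:])

print(outer())
```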
|
<python><exception><python-logging>
|
2024-10-25 05:34:16
| 1
| 9,627
|
paul23
|
79,124,311
| 8,621,823
|
How does pytest install pytest and _pytest into site-packages
|
<p>Within site-packages, there are <code>pytest</code> and <code>_pytest</code> folders.
Most of the core code is in <code>_pytest</code>.</p>
<p>Looking at pyproject.toml, I could not find how setuptools knows to install <code>pytest</code> and <code>_pytest</code>.</p>
<p>I thought it might be due to the existence of <code>__init__.py</code> files, but other folders like <code>testing/example_scripts</code> have them too, so that's unlikely.</p>
<p>There isn't a setup.py in <a href="https://github.dev/pytest-dev/pytest" rel="nofollow noreferrer">https://github.dev/pytest-dev/pytest</a>, either.
I was looking for something like:</p>
<pre><code>setup(
name='your_project_name',
version='0.1',
packages=find_packages() # or something like ['pytest', '_pytest']
)
</code></pre>
<p>Also, why does <code>pip list</code> and <code>pipdeptree</code> not show <code>_pytest</code>?</p>
<p>I assumed all packages in site-packages would show. I thought packages starting with <code>_</code> were private, like Python's private variables, but there are no options in <code>pip list</code> for showing "private packages."</p>
|
<python><pytest><setuptools><pyproject.toml>
|
2024-10-25 04:58:50
| 1
| 517
|
Han Qi
|
79,124,260
| 2,962,555
|
ModuleNotFoundError: No module named 'x' where x is a module suppose to be a shared lib
|
<p>I have a project with three modules: <code>one</code>, <code>two</code> and <code>three</code>. <code>one</code> and <code>two</code> are stand-alone services, and <code>three</code> is the common lib used by both. When I run <code>docker-compose up --build</code> from the root folder, both the <code>one</code> and <code>two</code> containers fail to start with the error log</p>
<pre><code>ModuleNotFoundError: No module named 'three'
</code></pre>
<p>Below is the structure</p>
<pre><code>/root-folder
│
├── one
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── __init__.py
│ └── main.py
│
├── two
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── __init__.py
│ └── main.py
│
├── three
│ ├── Dockerfile
│ ├── requirements.txt
│ ├── __init__.py
│ └── setup.py
│
└── docker-compose.yaml
</code></pre>
<p>The docker-compose.yaml file is as below</p>
<pre><code>version: '3.8'
services:
one:
container_name: container-one
build:
context: .
dockerfile: ./one/Dockerfile
image: one:latest
ports:
- "8881:8881"
environment:
- ENVIRONMENT=development
two:
container_name: container-two
build:
context: .
dockerfile: ./two/Dockerfile
image: two:latest
ports:
- "8882:8882"
environment:
- ENVIRONMENT=development
</code></pre>
<p>And Dockerfile for one and two are</p>
<pre><code>FROM python:3.12-slim AS builder
WORKDIR /build
COPY /one/requirements.txt /build/ # ("COPY /two/requirements.txt /build/" for Dockerfile of module two)
RUN python -m venv /opt/venv \
&& /opt/venv/bin/pip install --upgrade pip setuptools wheel \
&& /opt/venv/bin/pip install -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY /one /app # ("COPY /two /app" for Dockerfile of module two)
COPY /three /three
RUN pip install /three && pip install -r requirements.txt
CMD ["python", "main.py"]
</code></pre>
<p>For three is</p>
<pre><code>FROM python:3.12-slim AS builder
WORKDIR /build
COPY requirements.txt /build/
RUN python -m venv /opt/venv \
&& /opt/venv/bin/pip install --upgrade pip setuptools wheel \
&& /opt/venv/bin/pip install -r requirements.txt
FROM python:3.12-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
COPY . .
RUN pip install -e .
</code></pre>
<p>and the setup.py for module three is</p>
<pre><code>from setuptools import setup, find_packages
setup(
name='three',
version='0.1',
packages=find_packages(include=['*']),
install_requires=[
'avro-python3==1.10.2',
],
zip_safe=False,
)
</code></pre>
<p>Please help.</p>
|
<python><docker><docker-compose><dockerfile>
|
2024-10-25 04:20:32
| 0
| 1,729
|
Laodao
|
79,124,203
| 9,547,278
|
Algorithm to indent and de-indent lines
|
<p>I have a long document with text like this:</p>
<pre><code>paragraph = '''
• Mobilising independently with a 4-wheel walker
• Gait: Good quality with good cadence, good step length, and adequate foot clearance
• Lower limb examination:
- Tone: Normal bilaterally
- No clonus in either leg
- Passive dorsiflexion: Possible to -10 to -5 degrees bilaterally
- Power: 5/5 in all major muscle groups of lower limbs bilaterally
- Sensation: Intact to gross touch bilaterally
- ROM:
o Right hip flexion: 0-120 degrees
o Left hip flexion: 10-120 degrees (fixed flexion deformity present)
• Feet: Not broad-based
2. Post-stroke mobility and spasticity
• Improvement noted in right leg stiffness
• Continuing with current exercise regimen:
- Neuro Group twice weekly
- Gym group (including bike riding)
- Home exercises
• Plan:
- Continue current exercise programme
- Maintain current baclofen dose
'''
</code></pre>
<p>I wish to format the lines with proper indentation. Basically, I want to convert the above to this:</p>
<pre><code>paragraph = '''
• Mobilising independently with a 4-wheel walker
• Gait: Good quality with good cadence, good step length, and adequate foot clearance
• Lower limb examination:
- Tone: Normal bilaterally
- No clonus in either leg
- Passive dorsiflexion: Possible to -10 to -5 degrees bilaterally
- Power: 5/5 in all major muscle groups of lower limbs bilaterally
- Sensation: Intact to gross touch bilaterally
- ROM:
o Right hip flexion: 0-120 degrees
o Left hip flexion: 10-120 degrees (fixed flexion deformity present)
• Feet: Not broad-based
2. Post-stroke mobility and spasticity
• Improvement noted in right leg stiffness
• Continuing with current exercise regimen:
- Neuro Group twice weekly
- Gym group (including bike riding)
- Home exercises
• Plan:
- Continue current exercise programme
- Maintain current baclofen dose
'''
</code></pre>
<p>I wrote the following code, but it's not properly formatting the strings:</p>
<pre><code>add_indent = ""; corpus = []; bullet_point = ""
for line in paragraph.split("\n"):
if line.strip().endswith(":") and len(line.split(" ")[0])==1: add_indent += " "; bullet_point = line.split(" ")[0]
elif not line.strip().endswith(":") and bullet_point == line.split(" ")[0]: add_indent = add_indent[:-2]
elif not line: add_indent = ""
corpus.append(add_indent+line)
for line in corpus: print(line)
</code></pre>
<p>Where am I going wrong?</p>
|
<python><string>
|
2024-10-25 03:41:46
| 1
| 474
|
Legion
|
79,123,919
| 825,227
|
Pandas bar plot with multiple alpha values
|
<p>I am trying to create a bar plot with <code>alpha</code> transparency values for different time series.</p>
<p>I thought the code below would give me that, as I've seen elsewhere that lists can be passed via the <code>alpha</code> kwarg, but I get an error (included below). What am I missing?</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>plt.figure()
df.plot(kind="bar", alpha=[.5,1])
</code></pre>
<p>Error message:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: alpha must be numeric or None, not <class 'list'>
</code></pre>
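<p>For reference, the closest workaround I have found is setting the alpha per bar container after plotting; this is a sketch, not necessarily the idiomatic way:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import pandas as pd

df = pd.DataFrame({"x": [1, 2], "y": [3, 4]})
ax = df.plot(kind="bar")
# One alpha per column: each column's bars live in one BarContainer.
for container, a in zip(ax.containers, [0.5, 1.0]):
    for patch in container.patches:
        patch.set_alpha(a)
```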
|
<python><pandas><matplotlib>
|
2024-10-25 00:07:37
| 2
| 1,702
|
Chris
|
79,123,731
| 856,804
|
Why does asizeof() claim that Python instance of protobuf message takes so much memory?
|
<p>I'm investigating a memory issue related to protobuf message in Python.</p>
<p>Here is the simple protobuf message:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax="proto3";
message LiveAgentMessage {
string text = 1;
}
</code></pre>
<p>It gets compiled with protoc to a Python class; then I ran the following script:</p>
<pre class="lang-py prettyprint-override"><code>from pympler import asizeof
from foo_pb2 import LiveAgentMessage
print(f"{asizeof.asizeof(LiveAgentMessage(text=''))=:}")
print(f"{asizeof.asizeof(LiveAgentMessage(text='a'))=:}")
</code></pre>
<p>The output is</p>
<pre><code>asizeof.asizeof(LiveAgentMessage(text='')=816
asizeof.asizeof(LiveAgentMessage(text='a'))=895904
</code></pre>
<p>I wonder why <code>LiveAgentMessage(text='a')</code> (the second line) takes up so much memory with just a one-letter string; it's roughly <strong>1,000x</strong> the first line.</p>
<p>I'm using <a href="https://pympler.readthedocs.io/en/latest/" rel="nofollow noreferrer">pympler</a> for the size calculation, but I'm not sure if that's the right tool for the Python proto class.</p>
|
<python><memory><reflection><protocol-buffers>
|
2024-10-24 22:02:22
| 1
| 9,110
|
zyxue
|
79,123,539
| 54,873
|
How can I eliminate the pandas complaint about using aggfunc=np.sum in pivot_tables?
|
<p>I recently upgraded my <code>pandas</code> from v1.x to v2.x. However, code that does reasonable things with pivot tables generates an obnoxious set of future warnings.</p>
<pre><code>(Pdb) df = pd.DataFrame([["a", 1], ["a", 2], ["b", 3]], columns=["col1", "col2"])
(Pdb) df
col1 col2
0 a 1
1 a 2
2 b 3
(Pdb) df.pivot_table(index="col1", values="col2", aggfunc=np.sum)
<stdin>:1: FutureWarning: The provided callable <function sum at 0x102bcb2e0> is currently using DataFrameGroupBy.sum. In a future version of pandas, the provided callable will be used directly. To keep current behavior pass the string "sum" instead.
col2
col1
a 3
b 3
</code></pre>
<p>It's that weird FutureWarning that confuses/annoys me. I am not in fact passing <code>DataFrameGroupBy.sum</code>; I am passing <code>np.sum</code>. (Maybe behind the scenes the first is an alias for the second). Moreover, I <em>want</em> the callable to be used directly. That's why I passed it!</p>
<p>Two questions for stackoverflow:</p>
<p>[A] I don't really want to pass the string "sum" here; I think it ought be fine that I pass my own callable. Is there a right way to do that that I'm not doing?</p>
<p>[B] How can I suppress this specific warning code-wide?</p>
<p>I know how to suppress the warnings for an individual code block using:</p>
<pre><code>with warnings.catch_warnings():
warnings.simplefilter(action='ignore', category=FutureWarning)
</code></pre>
<p>But I use <code>pivot_table</code> everywhere and it would be crazy to use the <code>with</code> context each time. I could also block <code>FutureWarning</code>s across my codebase, but other <code>FutureWarning</code>s are in fact useful.</p>
<p>How can I suppress this specific warning?</p>
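<p>The best I have found so far is a process-wide filter keyed on the message text (the <code>message</code> argument is a regex matched against the start of the warning message), though I am not sure this is the sanctioned approach:</p>

```python
import warnings

# Suppress only this one FutureWarning, everywhere in the process.
warnings.filterwarnings(
    "ignore",
    message=r"The provided callable .* is currently using",
    category=FutureWarning,
)
```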
|
<python><pandas>
|
2024-10-24 20:33:53
| 2
| 10,076
|
YGA
|
79,123,534
| 11,374,051
|
Playwright manual download with default file names
|
<p>I use Playwright to partially automate my tasks but sometimes I need to navigate manually to get to the info I require. As such, I need to download files manually as well, but there doesn't seem to be a way to save those files as their default names.</p>
<p>I know if the download is started from the script, I might be able to use <code>suggested_filename</code>, but in this case, I am triggering the download manually.</p>
<p>How can I add a global override to name files by their <code>suggested_filename</code> and not a GUID? Preferably starting from <a href="https://playwright.dev/python/docs/api/class-playwright" rel="nofollow noreferrer">this</a> example code.</p>
<pre><code>import asyncio
from playwright.async_api import async_playwright, Playwright

async def run(playwright: Playwright):
    chromium = playwright.chromium  # or "firefox" or "webkit".
    browser = await chromium.launch(headless=False)
    page = await browser.new_page()
    await asyncio.sleep(9999)

async def main():
    async with async_playwright() as playwright:
        await run(playwright)

asyncio.run(main())
</code></pre>
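<p>One approach is to register a global <code>download</code> handler on the page, so every manually triggered download is re-saved under its <code>suggested_filename</code>. A sketch (the target directory is an assumption; <code>suggested_filename</code> and <code>save_as</code> are Playwright's documented Download APIs):</p>

```python
import asyncio
import os

DOWNLOAD_DIR = "downloads"  # assumption: target directory for saved files

async def save_with_original_name(download, directory=DOWNLOAD_DIR):
    # Re-save the download under the name the server suggested,
    # instead of the GUID temp name Playwright uses by default.
    os.makedirs(directory, exist_ok=True)
    target = os.path.join(directory, download.suggested_filename)
    await download.save_as(target)
    return target

# In the script above, attach it right after creating the page:
#   page.on("download", lambda d: asyncio.ensure_future(save_with_original_name(d)))
```

<p>Because the handler is registered on the page, it also fires for downloads you trigger by hand while browsing.</p>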
|
<python><playwright><playwright-python>
|
2024-10-24 20:32:35
| 2
| 574
|
weasel
|
79,123,529
| 986,612
|
Revert back the behavior of loading dlls to before python 3.8 and the new add_dll_directory()
|
<p>The python <a href="https://docs.python.org/3.8/library/os.html#os.add_dll_directory" rel="nofollow noreferrer">doc</a> explains the breaking change:</p>
<blockquote>
<p>New in version 3.8: Previous versions of CPython would resolve DLLs
using the default behavior for the current process. This led to
inconsistencies, such as only sometimes searching PATH or the current
working directory, and OS functions such as AddDllDirectory having no
effect.</p>
</blockquote>
<p>I've never had such issues, and I'd be happy to see a specific example.
I'd never heard of <code>AddDllDirectory</code> before, and I was happy. I used to have a reliable process: DLL (shared library) directories coincide with the environment <code>PATH</code>, like on any other OS. This allowed me to easily test whether the necessary DLLs are in the path, e.g., via <a href="https://github.com/adamrehn/dll-diagnostics" rel="nofollow noreferrer">dlldiag</a>. I could easily read and modify the env path. This applies to any executable on Windows, and I had robust tools to handle any DLL issues.</p>
<p>Now, I have only this <code>os.add_dll_directory()</code>. I don't see any way to read it, and there's no convenient var to modify. I can't compare what's there in order to verify locations of my dlls. Rumor has it, that this function uses in the end <code>AddDllDirectory</code>, and there are no additional <em>convenient</em> functions in its <a href="https://learn.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-adddlldirectory" rel="nofollow noreferrer">family</a>.</p>
<p>I tried the following simple solution:</p>
<pre><code>import os

for p in os.environ['PATH'].split(os.pathsep):
    if os.path.isdir(p):
        os.add_dll_directory(p)
</code></pre>
<p>This resolved the useless error message</p>
<blockquote>
<p>ImportError: DLL load failed while importing foo: The specified module could not be found</p>
</blockquote>
<p>which meant it couldn't find a dependent DLL (God forbid it tell me clearly that a DLL is missing, and which one).</p>
<p>Now, however, I have a completely new useless message:</p>
<blockquote>
<p>ImportError: DLL load failed while importing foo: The operating system cannot run %1</p>
</blockquote>
<p>(ah, of course, it can't run %1, how silly of me) which apparently means that there are two candidate dlls in the path, and it doesn't know what to do (or the wrong one was loaded or something).</p>
<p>This is a new can of worms. I usually have in the path both release and debug versions of the dlls, and there was never such an issue. Now, I have to be careful to separate them and specify each dir with the proper release or debug that I'm running. If in my pipeline I have a script that executes another one as a different process, well, bad luck, these new dll paths aren't inherited anymore, and you need somehow to pass them down the line.</p>
<p>How am I supposed to handle this conveniently, without burdening the user with these directory specifications?
How do I keep this cross-platform? Now I need special code for Windows.</p>
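<p>Since <code>os.add_dll_directory()</code> offers no way to read back what was added, one workaround is a small wrapper that keeps its own record, which can then be inspected or exported (say, into an environment variable) for child processes. A sketch; the helper names are my own:</p>

```python
import os

_dll_dirs = []  # our own record; os.add_dll_directory() cannot be queried

def add_dll_dir(path):
    """Register a DLL search directory (no-op off Windows) and remember it."""
    if os.path.isdir(path):
        if hasattr(os, "add_dll_directory"):  # Windows, Python >= 3.8
            os.add_dll_directory(path)
        _dll_dirs.append(path)

def dll_dirs():
    """Return the directories added so far, e.g. to pass to a subprocess."""
    return list(_dll_dirs)
```

<p>A child process could then rebuild the search path from an environment variable populated with <code>os.pathsep.join(dll_dirs())</code>.</p>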
<p>Python suddenly decides to break from its nice cross-platform behavior (<code>os.add_dll_directory()</code> is available only on Windows) and starts meddling with obscure Windows functions.
Moreover, the function of <code>win32api.LoadLibrary()</code> is now inconsistent with how <code>ctypes</code> loads a dll.</p>
<p>How can I revert back the behavior of loading dlls to the way it was before Python 3.8?</p>
|
<python>
|
2024-10-24 20:30:07
| 1
| 779
|
Zohar Levi
|
79,123,446
| 8,512,262
|
Is there a better way to combine two DataFrames from two SQL queries into a single DataFrame?
|
<p>Currently I'm querying two separate tables in a SQL database - one from which I get "data" from <code>my_table</code> and the other from which I get "units" from a <code>config</code> table. The <strong>units</strong> correspond 1:1 with the values in the <strong>data</strong>, i.e., each column of data values has a corresponding unit.</p>
<p>Because the <code>query_units</code> returns a single <em>column</em> of info, I end up transposing the resulting <code>DataFrame</code> with <code>T</code>. Then I <code>concat</code> that transposed <code>DataFrame</code> with my main data to end up with my desired result, i.e. a row of "units" followed by the rest of the data rows (see the example at the bottom).</p>
<p>Ultimately I'm trying to determine if there's a more efficient way to handle this setup.</p>
<p>Running two separate queries and concatenating the results (albeit with some transposition) works, but <strong>I'm wondering if there's a way to, say, handle this with a single SQL query which I can use to get a single <code>DataFrame</code></strong>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import pyodbc
# connect to database
conn = pyodbc.connect(('<connection info here>'))
# query to get all data from 'my_table' between these two timestamps
query_data = "SELECT * FROM my_table WHERE timestamp BETWEEN '2024/10/24 11:00:00' AND '2024/10/24 12:00:00'"
# query to get units from the 'config' table for the items in 'my_table'
query_units = "SELECT units FROM config WHERE t_name='my_table'"
# get a DataFrame containing the requested data
df = pd.read_sql_query(query_data, conn, index_col='timestamp')
# and a DataFrame of the units for each data item, transposed because this returns a
# single column and I want it as a row (using the data column names as its index)
eu = pd.read_sql_query(query_units, conn)
eu.index = df.columns
eu = eu.T
# combine (concat) the units row and the rest of the data, joined along 'df.columns'
# (the name of each data value)
df = pd.concat(objs=(eu, df))
</code></pre>
<p>This is an example result of the above (NB: this is correct)</p>
<pre class="lang-none prettyprint-override"><code> data1 data2 data3 ... dataN
units volts amps temp ... volts (this is the row that gets inserted)
11:00 10 5 69 ... 9
11:30 11 5 70 ... 10
12:00 12 6 72 ... 9
</code></pre>
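<p>Whether or not the two queries can be merged in SQL, the transpose can be avoided on the pandas side by building the units row directly against the data's column index. A sketch with made-up stand-in values:</p>

```python
import pandas as pd

# Stand-ins for the two query results (values are illustrative).
df = pd.DataFrame(
    {"data1": [10, 11, 12], "data2": [5, 5, 6]},
    index=["11:00", "11:30", "12:00"],
)
units = ["volts", "amps"]  # one unit per data column, in column order

# Build a one-row frame labelled 'units' and stack it on top of the data.
units_row = pd.DataFrame([units], columns=df.columns, index=["units"])
combined = pd.concat([units_row, df])
```

<p>Note that mixing the string units row with numeric data forces the combined columns to <code>object</code> dtype, which is a cost of this layout either way.</p>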
|
<python><sql><pandas>
|
2024-10-24 19:54:20
| 1
| 7,190
|
JRiggles
|
79,123,367
| 1,886,904
|
LangChain ReAct agent not performing multiple cycles
|
<p>I am struggling with getting my ReAct agent in LangChain to do any more than one cycle. It will select tools, and run them, but it won't ever decide that it needs to run any again. Very occasionally some combination of prompts will result in multiple cycles, but it is never repeatable.</p>
<p>It is important to note that I am using <code>langgraph.prebuilt.create_react_agent</code> to create the agent. I know there are a bunch of other ways to create the agent, but after looking through the <code>langgraph.prebuilt</code> code, I don't see any reason why it won't work.</p>
<p>At the end I also have a list of things I have tried which I can provide details for if necessary.</p>
<p>Library Versions:</p>
<pre><code>langchain-core==0.3.10
langchain-ollama==0.2.0
langgraph==0.2.39
langgraph-checkpoint==2.0.2
</code></pre>
<p>Agent Code (color print method omitted):</p>
<pre class="lang-py prettyprint-override"><code>import os
import asyncio
import json
import uuid
from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_ollama import ChatOllama
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
load_dotenv("../.env")
thing_list = [
{"name": "Thing0", "location": "paris"},
{"name": "Thing1", "location": "london"},
{"name": "Thing2", "location": "berlin"},
{"name": "Thing3", "location": "paris"},
]
@tool
def get_thing_location(symbol: str) -> str:
"""
Retrieves the location of the thing.
Args:
symbol (str): The symbol of the thing to find.
Returns:
str: the location value.
"""
thing = next((item for item in thing_list if item["name"] == symbol), None)
return thing["location"] if thing else None
@tool
def find_things_by_location(location: str) -> list:
"""
Retrieves a list of things based on their location.
Args:
location (str): The location to look for things.
Returns:
list: list of things at the location.
"""
return [thing for thing in thing_list if thing["location"] == location]
modelName=os.getenv("LLM_MODEL", "llama3.2:3b")
ollamaUrl=os.getenv("OLLAMA_URL")
model = ChatOllama(model=modelName, base_url=ollamaUrl)
memory = MemorySaver()
threadId = "thing-agent"
config = {"configurable": {"thread_id": threadId}}
agent = create_react_agent(
model=model,
tools=[find_things_by_location, get_thing_location],
checkpointer=memory,
)
thingLoc = get_thing_location('Thing0');
print(f"Location of Thing0: {thingLoc}")
otherThings = find_things_by_location(thingLoc)
print(f"Things in same location: {otherThings}")
msgs = [
HumanMessage(content="""
What is the location of Thing0?
What other things are in the same location?
"""),
]
from color_outputs import print_event
async def run():
async for event in (agent.astream_events(
{"messages": msgs},
version="v2", config=config)):
print_event(event)
asyncio.run(run())
</code></pre>
<p>Output:</p>
<pre><code>Location of Thing0: paris
Things in same location: [{'name': 'Thing0', 'location': 'paris'}, {'name': 'Thing3', 'location': 'paris'}]
agent on_chain_start (['graph:step:1'])
Starting agent: agent with input messages:
Human:
What is the location of Thing0?
What other things are in the same location?
-------------
--------------------
get_thing_location on_tool_start (['seq:step:1'])
Starting tool: get_thing_location with inputs: {'symbol': 'Thing0'}
--------------------
find_things_by_location on_tool_start (['seq:step:1'])
Starting tool: find_things_by_location with inputs: {'location': 'get_thing_location', 'symbol': 'Thing0'}
--------------------
find_things_by_location on_tool_end (['seq:step:1'])
Done tool: find_things_by_location
Tool output was: content=[] name='find_things_by_location' tool_call_id='8f012dd4-6082-4bd1-bbaa-e82eba61eca0'
--------------------
get_thing_location on_tool_end (['seq:step:1'])
Done tool: get_thing_location
Tool output was: content='paris' name='get_thing_location' tool_call_id='7fc27e00-29be-48b1-ab7a-d640cee96212'
--------------------
agent on_chain_start (['graph:step:3'])
Starting agent: agent with input messages:
Human:
What is the location of Thing0?
What other things are in the same location?
-------------
AI:
-------------
Tool: get_thing_location: paris
-------------
Tool: find_things_by_location: []
-------------
--------------------
Process finished with exit code 0
</code></pre>
<p>As you can see, it doesn't try to do a <code>find_things_by_location</code> call after receiving the location information from the previous call. Sometimes it does use that tool, but not with meaningful input (either nothing or something it makes up).</p>
<p>Things I have tried:</p>
<ul>
<li>Different models:
<ul>
<li>llama3.1:70b</li>
<li>mistral-large:123b</li>
<li>mistral-nemo:12b</li>
<li>llama3.2:3b</li>
</ul>
</li>
<li>A single message with both instructions, instead of 2.</li>
<li>Adding a <code>state_modifier</code> with a prompt describing a Think-Act-Observe-Repeat cycle.</li>
<li>Making that prompt more or less detailed, or explicitly including restrictions about when to use which tool.</li>
</ul>
|
<python><ollama><langgraph><langchain-agents>
|
2024-10-24 19:12:24
| 0
| 682
|
egeorge
|
79,123,306
| 8,595,535
|
Filter pandas dataframe between dates located in another dataframe
|
<p>Suppose I have two pandas dataframes: a first dataframe with some data (columns Col1 and Col2 in the example below) keyed by Code and Timestamp, and a second dataframe detailing a set of date bounds. The idea is to exclude any rows (in <code>df_table</code>) whose date falls between any of the bounds (in <code>df_dates</code>). Each Code has one or more (and different) bounds, or may have no bounds at all.
The code below works and does the task correctly, but it takes far too much time (the first dataframe <code>df_table</code> has several million rows).</p>
<pre><code>import pandas as pd
import numpy as np

def filter_date(main_table, table_dates):
    idx = table_dates.Code == main_table.Code
    table_dates = table_dates.loc[idx]
    filtering = (table_dates['Start Date'] <= main_table.Timestamp) & (table_dates['End Date'] >= main_table.Timestamp)
    return filtering.any()

# Example below
df_table = pd.DataFrame({'Code':['A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'C', 'C'], 'Timestamp':[1,2,3,4,5,6,1,2,3,4,5,6,1,2], 'Col1':np.arange(14), 'Col2':np.arange(14)})
df_dates = pd.DataFrame({'Code':['A', 'A', 'B'], 'Start Date':[1, 5, 1], 'End Date':[2, 6, 3]})

df_table['Exclude Date'] = df_table.apply(lambda x: filter_date(x, df_dates), axis=1)
df_table = df_table[df_table['Exclude Date'] == False]
print(df_table)
</code></pre>
<p>Is there a smarter way to do this and avoid this <code>.apply()</code>?</p>
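<p>For reference, one apply-free alternative is to merge the bounds onto the data by <code>Code</code>, flag the rows that fall inside any interval, and drop them. A sketch against the example frames above:</p>

```python
import pandas as pd
import numpy as np

df_table = pd.DataFrame({'Code': ['A'] * 6 + ['B'] * 6 + ['C'] * 2,
                         'Timestamp': [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6, 1, 2],
                         'Col1': np.arange(14), 'Col2': np.arange(14)})
df_dates = pd.DataFrame({'Code': ['A', 'A', 'B'],
                         'Start Date': [1, 5, 1], 'End Date': [2, 6, 3]})

# One merged row per (data row, bound) pair; Codes without bounds get NaNs.
m = df_table.reset_index().merge(df_dates, on='Code', how='left')
inside = m['Timestamp'].between(m['Start Date'], m['End Date'])  # NaN -> False
excluded = m.loc[inside, 'index'].unique()
result = df_table.drop(index=excluded)
```

<p>Note the intermediate merge holds one row per (data row, matching bound) pair, so this trades memory for speed compared to the row-wise version.</p>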
|
<python><pandas>
|
2024-10-24 18:50:49
| 2
| 309
|
CTXR
|
79,123,305
| 12,115,498
|
Numpy array does not correctly update in Gaussian elimination program
|
<p>I am trying to write a function <code>gaussian_elim</code> which takes in an n x n numpy array A and an n x 1 numpy array b and performs Gaussian elimination on the augmented matrix [A|b]. It should return an n x (n+1) matrix of the form M = [U|c], where U is an n x n upper triangular matrix. However, when I test my code on a simple 2x2 matrix, it seems that the elimination step is not being performed properly. I have inserted print statements to illustrate how the matrix is not being updated properly.</p>
<pre><code>import numpy as np

def gaussian_elim(A, b):
    """
    A: n x n numpy array
    b: n x 1 numpy array
    Applies Gaussian Elimination to the system Ax = b.
    Returns a matrix of the form M = [U|c], where U is upper triangular.
    """
    n = len(b)
    b = b.reshape(-1, 1)  # Ensure b is a column vector of shape (n, 1)
    M = np.hstack((A, b))  # Form the n x (n+1) augmented matrix M := [A|b]

    # For each pivot:
    for j in range(n-1):  # j = 0, 1, ..., n-2
        # For each row under the pivot:
        for i in range(j+1, n):  # i = j + 1, j + 2, ..., n-1
            if M[j, j] == 0:
                print("Error! Zero pivot encountered!")
                return
            # The multiplier for the i-th row
            m = M[i, j] / M[j, j]
            print("M[i,:] = ", M[i, :])
            print("M[j,:] = ", M[j, :])
            print("m = ", m)
            print("M[i,:] - m*M[j,:] = ", M[i, :] - m*M[j, :])
            # Eliminate entry M[i,j] (the first nonzero entry of the i-th row)
            M[i, :] = M[i, :] - m*M[j, :]
            print("M[i,:] = ", M[i, :])  # Make sure the i-th row of M is correct (it's not!)
    return M
</code></pre>
<p>Testing with a 2x2 matrix</p>
<pre><code>A = np.array([[3,-2],[1,5]])
b = np.array([1,1])
gaussian_elim(A,b)
</code></pre>
<p>yields the following output:</p>
<pre><code>M[i,:] = [1 5 1]
M[j,:] = [ 3 -2 1]
m = 0.3333333333333333
M[i,:] - m*M[j,:] = [0. 5.66666667 0.66666667] <-- this is correct!
M[i,:] = [0 5 0] <--why is this not equal to the above line???
array([[ 3, -2, 1],
[ 0, 5, 0]])
</code></pre>
<p>The output I expected is <code>array([[3, -2, 1], [0, 5.66666667, 0.66666667]])</code>. Why did the second row not update properly?</p>
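<p>The behaviour can be reproduced without the elimination loop at all; assigning float results into a row of an integer array truncates them in place:</p>

```python
import numpy as np

M = np.array([[3, -2, 1], [1, 5, 1]])   # no dtype given -> integer array
M[1, :] = M[1, :] - (1 / 3) * M[0, :]   # RHS is float, but M stays integer
print(M[1, :])                          # [0 5 0], the fractions are truncated
```
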
|
<python><arrays><numpy><matrix><linear-algebra>
|
2024-10-24 18:50:43
| 1
| 783
|
Leonidas
|
79,122,914
| 2,205,380
|
Zip input and output without wildcards, using absolute paths
|
<p>Very similar to <a href="https://stackoverflow.com/questions/69137857/snakemake-coupling-inputs-and-outputs-without-shared-wildcards">Coupling inputs and outputs without shared wildcards</a>, I have assembled a user-specified list of inputs and outputs that are paired:</p>
<pre><code>inputs = ["/path/to/input1", "/path/to/input2"]
outputs = ["/path/to/output1", "/path/to/output2"]
</code></pre>
<p>In other words, I have a rule which should process input1 and generate output1, and in parallel the same rule may also process input2 and generate output2. I can try the recommended answer from the linked question, which looks like this, except that I have used absolute paths and removed the ".txt" suffix from the rules:</p>
<pre><code>numbers = ['/tmp/1.txt', '/tmp/2.txt', '/tmp/3.txt', '/tmp/4.txt']
letters = ['/tmp/A.txt', '/tmp/B.txt', '/tmp/C.txt', '/tmp/D.txt']

ln = dict(zip(numbers, letters))

rule all:
    input:
        expand('{number}', number=numbers),

rule out:
    input:
        letter=lambda wc: ln[wc.number],
    output:
        '{number}'
    shell:
        """
        echo {input.letter} > {output}
        """
</code></pre>
<p>I have tried touching the inputs <code>/tmp/A.txt</code>, etc., however no matter how I try to arrange it, I get either key exceptions or missing input exceptions. However, if I use relative instead of absolute paths, I can get it to work. Is there any way to get it to work using all absolute paths?</p>
|
<python><snakemake>
|
2024-10-24 16:42:07
| 1
| 2,388
|
Rboreal_Frippery
|
79,122,892
| 6,216,530
|
Check when vectors are uploaded to Pinecone namespace
|
<p>I have a Pinecone index set up, and I am using LangChain to upload my prepared Documents to a certain namespace.
My problem is that I return the PineconeVectorStore and pass it as a parameter to another class; however, Pinecone takes a few seconds to finish uploading all the vectors.
Is there any way to check whether it is completely done, besides using time.sleep()? I don't find that an elegant solution, and there is no guarantee that all the vectors have been uploaded at that point.
I am using the code below to insert my data into the Pinecone index.</p>
<pre><code>import time
from typing import List

from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_pinecone import PineconeVectorStore

embed_model = OpenAIEmbeddings(
    model='text-embedding-3-small'
)

def upload_data(docs: List[Document], embed_model, index_name: str, namespace: str) -> PineconeVectorStore:
    vector_store = PineconeVectorStore.from_documents(
        documents=docs,
        embedding=embed_model,
        index_name=index_name,
        namespace=namespace
    )
    time.sleep(4)  # change this part
    # I have created my index outside of this snippet of code.
    # When I call my_index.describe_index_stats() before and after the
    # time.sleep, I get different values of total_vector_count.
    return vector_store
</code></pre>
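<p>One sleep-free option is to poll the index stats until the namespace reports the expected number of vectors. A sketch; it assumes the stats response exposes per-namespace <code>vector_count</code> values the way <code>describe_index_stats()</code> reports them, and the helper name is my own:</p>

```python
import time

def wait_for_vectors(index, namespace, expected_count, timeout=60.0, poll=0.5):
    """Block until `namespace` holds at least `expected_count` vectors."""
    deadline = time.monotonic() + timeout
    count = 0
    while time.monotonic() < deadline:
        stats = index.describe_index_stats()
        count = stats.get("namespaces", {}).get(namespace, {}).get("vector_count", 0)
        if count >= expected_count:
            return count
        time.sleep(poll)
    raise TimeoutError(f"only {count} of {expected_count} vectors visible")
```

<p>Called as <code>wait_for_vectors(my_index, namespace, len(docs))</code> after <code>from_documents(...)</code>, this returns once the upsert is visible in the stats; with an eventually consistent store that is still only "visible", not a hard guarantee, but it is strictly better than a fixed sleep.</p>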
|
<python><openai-api><langchain><pinecone>
|
2024-10-24 16:33:30
| 0
| 867
|
H.Sdq
|
79,122,632
| 850,271
|
Recursive nested string formatting
|
<p>I want to take something like the following:</p>
<pre><code>print('{test_{test2}}'.format(test2='test2', test_test2='test3'))
</code></pre>
<p>and produce:</p>
<pre><code>test3
</code></pre>
<p>But I get the error:</p>
<pre><code>ValueError: unexpected '{' in field name
</code></pre>
<p>How can I do this?</p>
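<p>One workaround is two formatting passes: escape the outer braces by doubling them so the first pass only resolves the inner placeholder, then format the intermediate result again:</p>

```python
values = {"test2": "test2", "test_test2": "test3"}

# Pass 1: "{{" and "}}" are literal braces, so only {test2} is substituted.
step1 = "{{test_{test2}}}".format(**values)   # -> "{test_test2}"
# Pass 2: resolve the placeholder produced by pass 1.
result = step1.format(**values)               # -> "test3"
print(result)
```

<p>For arbitrary nesting depth the same idea can be looped, formatting until no placeholders remain, provided the substituted values themselves contain no braces.</p>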
|
<python><string>
|
2024-10-24 15:21:34
| 1
| 6,004
|
John Roberts
|
79,122,605
| 20,920,790
|
Why pandas.merge_asof working with error in my case?
|
<p>I'm trying to merge 2 tables with pandas.merge_asof.</p>
<p>First table administrators_system_with_schemes_sort:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">salon_id</th>
<th style="text-align: right;">staff_id</th>
<th style="text-align: left;">date</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-02 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-03 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-06 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-07 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-10 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-11 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-14 00:00:00</td>
</tr>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-15 00:00:00</td>
</tr>
</tbody>
</table></div>
<p>Second table, bonus_and_penalty_for_staff_id_administrators_sort:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th style="text-align: right;">salon_id</th>
<th style="text-align: right;">staff_id</th>
<th style="text-align: left;">date</th>
<th style="text-align: right;">bonus</th>
<th style="text-align: right;">penalty</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">872646</td>
<td style="text-align: right;">2715596</td>
<td style="text-align: left;">2024-10-12 00:00:00</td>
<td style="text-align: right;">4070</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table></div>
<p>My code:</p>
<pre><code>astype_dict = {
    'salon_id': 'int64',
    'staff_id': 'int64',
    'date': 'datetime64[ns]',
}

administrators_system_with_schemes['date'] = [pd.to_datetime(date).date() for date in administrators_system_with_schemes['date']]
bonus_and_penalty_for_staff_id_administrators['date'] = [pd.to_datetime(date).date() for date in bonus_and_penalty_for_staff_id_administrators['date']]

administrators_system_with_schemes_sort = (
    administrators_system_with_schemes.copy()
    .astype(astype_dict)
    .sort_values(by='date')
)

bonus_and_penalty_for_staff_id_administrators_sort = (
    bonus_and_penalty_for_staff_id_administrators.copy()
    .astype(astype_dict)
    .sort_values(by='date')
)

administrators_system_with_schemes_with_additional_bonus_penalty = (
    pd.merge_asof(
        left=administrators_system_with_schemes_sort,
        right=bonus_and_penalty_for_staff_id_administrators_sort,
        on=['date'],
        by=['salon_id', 'staff_id'],
        suffixes=['', '_y'],
        direction='nearest',
    ))
</code></pre>
<p>Result:</p>
<pre><code>| salon_id | staff_id | date | bonus | penalty |
|-----------:|-----------:|:--------------------|--------:|----------:|
| 872646 | 2715596 | 2024-10-02 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-03 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-06 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-07 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-10 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-11 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-14 00:00:00 | 0 | 0 |
| 872646 | 2715596 | 2024-10-15 00:00:00 | 0 | 0 |
</code></pre>
<p>The result is wrong, because there is a suitable matching value in the right table.
I've already tried many ways of changing the data types, but I still get this result.</p>
<p>Any ideas, how to fix this problem?</p>
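<p>For comparison, a minimal <code>merge_asof</code> on just a few of the sample rows above does attach the bonus, which suggests the problem lies in how the real frames' key columns are typed rather than in the call itself (note <code>on</code> is passed as a plain column name here):</p>

```python
import pandas as pd

left = pd.DataFrame({
    "salon_id": [872646] * 3,
    "staff_id": [2715596] * 3,
    "date": pd.to_datetime(["2024-10-10", "2024-10-11", "2024-10-14"]),
})
right = pd.DataFrame({
    "salon_id": [872646],
    "staff_id": [2715596],
    "date": pd.to_datetime(["2024-10-12"]),
    "bonus": [4070],
    "penalty": [0],
})

# Every left row is matched to the single right row for this (salon, staff).
out = pd.merge_asof(
    left.sort_values("date"), right.sort_values("date"),
    on="date", by=["salon_id", "staff_id"], direction="nearest",
)
```
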
<p>Thanks.</p>
<p>Pandas ver. 2.1.4 (same error on ver.2.2.3).
Python ver. 3.11.7</p>
|
<python><pandas><dataframe>
|
2024-10-24 15:13:48
| 1
| 402
|
John Doe
|
79,122,259
| 2,401,856
|
Python - flask app doesn't serve images used in react app
|
<p>I have a React project that is hosted in a Flask app.</p>
<p>This is the project folders:</p>
<pre><code>/simzol/
└── backend/
├── app.py
└── build/
├── index.html
├── images/
├── static/
│ ├── css/
│ └── js/
└── other_files...
</code></pre>
<blockquote>
<p>PS. the build folder is generated by react using the command "yarn
build"</p>
</blockquote>
<p>Here's the Flask app initializer:</p>
<pre><code>app = Flask(__name__, static_url_path="/static", static_folder="build/static", template_folder="build")
</code></pre>
<p>that's how I serve the main route and any other path that isn't an API path (which should be handled by the react app. If the page doesn't exist, react should return the appropriate 404 not found page that I wrote):</p>
<pre><code>@app.route("/")
@app.route('/<path:path>')
def home(path=None):
try:
return render_template("index.html"), 200
except Exception as e:
return handle_exception(e)
</code></pre>
<p>When I run the app and go to the main route, I can't see my images load up.<br><br>
<strong>What I tried:</strong></p>
<ol>
<li>move the images folder into the static folder. didn't change anything</li>
<li>change the <code>static_url_path</code> to empty string ("") and the <code>static_folder</code> to "build". That solves the problem but I encounter another problem when I surf any page that is not the root page (like /contactus) directly through the browser's url input field, I get 404 error from flask (not react)</li>
<li>I use relative image paths in React <code>src</code> attributes; changing that might fix the problem, but even if it works I don't like that solution, because it makes development in React more complicated</li>
</ol>
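<p>A third option, which keeps the catch-all intact, is an explicit route for the images folder, so URLs like <code>/images/logo.png</code> are served from <code>build/images</code>. A sketch using Flask's <code>send_from_directory</code> (it assumes the React app references images as <code>/images/...</code>):</p>

```python
from flask import Flask, send_from_directory

app = Flask(__name__, static_url_path="/static",
            static_folder="build/static", template_folder="build")

@app.route("/images/<path:filename>")
def images(filename):
    # A relative directory is resolved against the app's root path.
    return send_from_directory("build/images", filename)
```

<p>Werkzeug ranks the rule with the longer literal prefix higher, so this route wins over the <code>/<path:path></code> catch-all for image URLs.</p>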
|
<python><reactjs><flask>
|
2024-10-24 13:54:26
| 1
| 620
|
user2401856
|
79,121,793
| 4,819,195
|
Requests per second(RPS) not going above certain number with Hana database and Locust
|
<p>I wanted to perform some load testing on Hana database. For this I decided to use Locust and I referenced the implementation given in this link for another database:
<a href="https://community.cratedb.com/t/load-testing-cratedb-using-locust/1686" rel="nofollow noreferrer">https://community.cratedb.com/t/load-testing-cratedb-using-locust/1686</a>.</p>
<p>Below is the code I rewrote for Hana. I am using a connection pool of size 50 here:</p>
<pre><code>import time
import random
import os
import json
import uuid
from datetime import datetime
from queue import Queue

from locust import task, User, between
from hdbcli import dbapi

# Credentials are always used from here to not have them leak into the UI as
# part of the connection URL.
POOL_SIZE = 50
HOST = ""
PORT = 443  # Default port for HANA DB
USER = ""
PASSWORD = ""

insert_query = "INSERT INTO DEMO VALUES (?, ?, ?)"

def connect_to_hana():
    """Connect to SAP HANA database."""
    try:
        conn = dbapi.connect(HOST, PORT, USER, PASSWORD)
        return conn
    except dbapi.Error as e:
        print(f"Error connecting to SAP HANA: {e}")
        return None

connection_pool = Queue(maxsize=POOL_SIZE)
for _ in range(POOL_SIZE):
    conn = connect_to_hana()
    if conn:
        connection_pool.put(conn)

class HANADBClient:
    def __init__(self, host, request_event):
        self._request_event = request_event

    def get_connection(self):
        try:
            return connection_pool.get(block=True)
        except Exception as e:
            print(f"Error getting connection: {e}")
            return None

    def create_conn(self):
        print("Connect and Query HanaSQL")
        return dbapi.connect(
            address="",  # HANA Host
            port=443,    # HANA Port
            user="",     # HANA User
            password=""  # HANA Password
        )

    def send_query(self):
        request_meta = {
            "request_type": "HANADB",
            "name": "hana insert object",
            "response_length": 0,
            "response": None,
            "context": {},
            "exception": None,
        }
        start_time = time.perf_counter()
        conn = self.get_connection()
        cursor = conn.cursor()
        col1 = random.randint(18, 65)
        col2 = f"Name_{random.randint(1, 1000)}"
        col3 = 5.5
        try:
            start_time = time.perf_counter()
            # Insert a single record
            cursor.execute(insert_query, (col1, col2, col3))
            conn.commit()
        except dbapi.Error as e:
            print(f"Error inserting object: {e}")
            request_meta["exception"] = e
            conn.rollback()
        finally:
            cursor.close()
            connection_pool.put(conn)
        request_meta["response_time"] = (time.perf_counter() - start_time) * 1000
        request_meta["response_length"] = 1
        # Log a successful insert operation
        self._request_event.fire(**request_meta)

class HANADBUser(User):
    abstract = True

    def __init__(self, environment):
        super().__init__(environment)
        self.client = HANADBClient(self.host, request_event=environment.events.request)

class QuickstartUser(HANADBUser):
    wait_time = between(0, 0)

    @task(1)
    def query01(self):
        self.client.send_query()
</code></pre>
<p>When I run the above locust file using this command <code>locust -f benchmark.py --host http://localhost:7997</code>, I see that the <code>Request per second(RPS)</code> never goes above 11 even when I increase the number of users. I was expecting RPS to go till 1000 requests per second.</p>
<p>What could be the issue here?</p>
|
<python><load-testing><hana><locust>
|
2024-10-24 12:02:30
| 0
| 399
|
A Beginner
|