| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,069,823
| 2,072,312
|
How do I insert timestamp data into an AWS Glue managed Iceberg table using AWS Firehose?
|
<p>Using AWS Firehose to ingest data into an Iceberg table managed by AWS Glue, I'm unable to insert timestamp data.</p>
<p><strong>Firehose</strong></p>
<p>I'm trying to insert data using the following script:</p>
<pre class="lang-py prettyprint-override"><code>json_data = json.dumps(
    {
        "ADF_Record": {
            "foo": "bar",
            "baz": "2024-09-04T18:56:15.114"
        },
        "ADF_Metadata": {
            "OTF_Metadata": {
                "DestinationDatabaseName": "my_db",
                "DestinationTableName": "my_table",
                "Operation": "INSERT"
            }
        }
    }
)

response = boto3.client("firehose").put_record(
    DeliveryStreamName="my_stream",
    Record={"Data": json_data.encode()}
)
</code></pre>
<p>Note that the <code>baz</code> value corresponds to a timestamp of type <a href="https://docs.aws.amazon.com/firehose/latest/dev/apache-iceberg-destination-supp.html" rel="nofollow noreferrer">TimestampType.withoutZone</a> as referenced in the Firehose documentation.</p>
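<p>For reference, the <code>baz</code> string above can be reproduced with the standard library (a sketch of how I build the value; whether this exact format is what Firehose expects for a zone-less timestamp is precisely the open question):</p>

```python
from datetime import datetime

# ISO-8601 with millisecond precision and no zone offset -- the exact
# string used for "baz" in the payload above.
ts = datetime(2024, 9, 4, 18, 56, 15, 114000)
print(ts.isoformat(timespec="milliseconds"))  # 2024-09-04T18:56:15.114
```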
<p><strong>Glue</strong></p>
<ul>
<li>My table is of the Iceberg type.</li>
<li>I did not define any additional SerDe library or SerDe parameters.</li>
<li>The table schema is:
<ul>
<li><code>foo</code> : <code>string</code></li>
<li><code>baz</code> : <code>timestamp</code></li>
</ul>
</li>
</ul>
<p><strong>Error</strong></p>
<p>Whenever I try to insert data using this method, no data is delivered and I get this error on the Firehose side:</p>
<pre><code>Firehose is unable to convert column data in your record to the column type specified within the schema. Table: my_db.my_table
</code></pre>
<p><strong>Things I tried</strong></p>
<ul>
<li>Data is written when <code>baz</code> is removed from the payload (the pipeline seems functional without timestamp).</li>
<li>Switching to epoch format (<code>1725476175114000</code>) doesn't help. Glue creates a new version of the table with <code>baz</code> as <code>date</code> and the written data is not legible.</li>
<li>Switching to <code>TimestampType.withZone</code> results in the same error.</li>
<li>Trying a SerDe library like <code>org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe</code> and <code>timestamp.formats</code> parameter doesn't help. Glue creates a new version of the table and removes the SerDe parameters altogether.</li>
</ul>
<p>I'm about to give up and just write timestamps as string. Any insight is appreciated!</p>
|
<python><amazon-web-services><aws-glue><amazon-kinesis-firehose><apache-iceberg>
|
2024-10-09 10:43:22
| 2
| 832
|
Crolle
|
79,069,771
| 7,483,211
|
Mypy throws syntax error for multi-line f-strings, despite code running without error
|
<p>I'm working with Python 3.12 and recently added <code>mypy</code> type-checking to my project. I've encountered an odd issue where <code>mypy</code> throws a syntax error for certain f-strings in my code, specifically those with newline characters in the middle of the f-string. The curious thing is that the Python interpreter doesn’t complain at all and runs the code just fine.</p>
<p>Here’s a simplified example of the kind of f-string that <code>mypy</code> flags as a syntax error:</p>
<pre class="lang-py prettyprint-override"><code>name = "Alice"
message = f"Hello, {name
}, welcome!"
</code></pre>
<p>Mypy error:</p>
<pre class="lang-bash prettyprint-override"><code>mypy minimal_reproducible_example.py
src/loculus_preprocessing/alice.py:2: error: unterminated string literal (detected at line 2) [syntax]
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<p>Even adding <code>--python-version 3.12</code> doesn't fix it.</p>
<p>I understand that using triple quotes (<code>"""</code>) for multi-line strings is recommended, but in this case, the code works in the interpreter without issue, while <code>mypy</code> consistently fails with a syntax error.</p>
<p>My questions:</p>
<ol>
<li>Why does <code>mypy</code> consider this a syntax error, even though Python 3.12 accepts it?</li>
<li>Is this a limitation of <code>mypy</code>, or am I overlooking something in Python's syntax that could lead to issues?</li>
</ol>
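<p>For what it's worth, the workarounds I've found all keep each replacement field on a single source line (a sketch; both forms produce the same string as the failing example):</p>

```python
name = "Alice"

# One-line f-string: accepted by both CPython and mypy.
message_one_line = f"Hello, {name}, welcome!"

# Implicit string concatenation inside parentheses: spans multiple source
# lines without putting a newline inside the replacement field.
message_concat = (
    f"Hello, {name}, "
    "welcome!"
)

assert message_one_line == message_concat
```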
|
<python><syntax-error><mypy><f-string><python-3.12>
|
2024-10-09 10:28:36
| 2
| 10,272
|
Cornelius Roemer
|
79,069,684
| 4,993,513
|
Twilio playing only white noise while trying to stream audio
|
<p>I am trying to stream audio generated by ElevenLabs (via their websocket) back to Twilio, but I can only hear white noise.</p>
<p>However, when I fetched the audio via their HTTP API instead, opened the file, and streamed it to Twilio, it played on the phone properly.</p>
<p>Below is my code, where I generate audio from ElevenLabs and then send it to my Twilio socket:</p>
<pre><code>async def text_to_speech_stream(text: str):
    voice_id = "pNInz6obpgDQGcFmaJgB"
    model_id = "eleven_multilingual_v2"

    # Construct the WebSocket URL
    url = f"wss://api.elevenlabs.io/v1/text-to-speech/{voice_id}/stream-input?model_id={model_id}"

    # Set up headers
    headers = {
        'xi-api-key': ""
    }

    # Payload to send
    payload = {
        "text": " ",
        "voice_settings": {
            "stability": 0.0,
            "similarity_boost": 1.0,
            "style": 0.0,
            "use_speaker_boost": True
        },
        "model_id": model_id,
        "voice_id": voice_id,
        "xi-api-key": "",
        "output_format": "ulaw_8000",
        "flush": True
    }

    async with websockets.connect(url) as ws:
        print("websocket connected: 11labs")

        # Send the payload as JSON
        await ws.send(json.dumps(payload))
        await ws.send(json.dumps({"text": text}))
        await ws.send(json.dumps({"text": ""}))

        # Receive audio chunks
        start_time = time.time()
        print("waiting for revecivng")
        print()
        audio_buffer = bytearray()
        while True:
            try:
                message = await ws.recv()
                data = json.loads(message)
                if data.get("audio"):
                    # Audio chunk received
                    print("yielded in: ", time.time() - start_time, flush=True)
                    yield data["audio"]  # base64.b64decode(data["audio"])
                elif data.get('isFinal'):
                    break
                elif isinstance(message, str):
                    # Text message received (e.g., errors or status updates)
                    data = json.loads(message)
                    if data.get("warning"):
                        print(f"Warning: {data['warning']}")
                    elif data.get("error"):
                        print(f"Error: {data['error']}")
                        break
                    else:
                        print(f"Message from ElevenLabs: ")
                else:
                    print("Unknown message type received")
            except websockets.exceptions.ConnectionClosedOK:
                # Connection closed gracefully
                break
            except Exception as e:
                print(f"Error in ElevenLabs WebSocket: {e}")
                break


async def send_to_twilio_eleven(websocket, message, stream_sid, interruption_event):
    print("Starting TTS streaming from ElevenLabs")
    try:
        start_time = time.time()
        print("message is: ", message)
        audio_payload = ""
        count = 0
        # audio_payload = text_to_speech_stream(message)  # for API
        async for audio_chunk in text_to_speech_stream(message):
            print("Chunk yielded in : ", time.time() - start_time)
            # Trying to buffer. Facing same issue even when streamed directly without buffering
            audio_payload += audio_chunk

            # Construct the message to send to Twilio
            audio_delta = {
                "event": "media",
                "streamSid": stream_sid,
                "media": {
                    "payload": audio_payload
                }
            }

            # ideal for sending chunks
            if interruption_event.is_set():
                print("Interruption detected")
                return

            # Send the message to Twilio
            await websocket.send_json(audio_delta)
            print("received audio sent to twilio")

            # Yield control to the event loop
            await asyncio.sleep(0)
    except Exception as e:
        print(f"Error in send_to_twilio_eleven: {e}")
</code></pre>
<p>Is there anything I am doing wrong while receiving the audio from ElevenLabs or before sending it to Twilio?</p>
|
<python><audio><websocket><twilio><elevenlabs>
|
2024-10-09 10:08:36
| 1
| 11,141
|
Dawny33
|
79,069,603
| 2,613,150
|
Unable to configure Celery logging using a YAML configuration file with FastAPI server
|
<p>I am using Celery as the background task processor with a FastAPI web server. Logging for FastAPI is configured using a <code>logging.yaml</code> file via Uvicorn like this:</p>
<pre><code>uvicorn syncapp.main:app --workers 4 --host 0.0.0.0 --port 8084 --log-config logging.yaml
</code></pre>
<p>I have configured multiple loggers which work fine for the API server. However, no matter what I do, I cannot get Celery to use a logger defined in the <code>logging.yaml</code> file. I have tried other solutions, but none seem to work for my case.</p>
<p>Celery main application: (<code>Path: syncapp/worker.py</code>)</p>
<pre><code>from celery import Celery

from syncapp.settings import get_settings

settings = get_settings()

app = Celery(
    main=settings.app_name,
    broker=settings.celery_broker_url,
    backend=settings.celery_result_backend,
    include="syncapp.tasks.copy_data",
)

app.conf.update(
    broker_connection_retry_on_startup=True,
    worker_hijack_root_logger=False,
)
</code></pre>
<p>Example task method (<code>Path: syncapp/tasks/copy_data.py</code>)</p>
<pre><code>import celery
from celery.utils.log import get_task_logger

from syncapp.commons.enums import WorkerQueue
from syncapp.worker import app

logger = get_task_logger(__name__)


@app.task(queue=WorkerQueue.COPY_DATA.value)
def copy_index_data(index_name: str) -> bool:
    """Copies index data up until the current date."""
    task_id = celery.current_task.request.id
    logger.info("Job %s: Copying data from source index %s", task_id, index_name)
    return True
</code></pre>
<p>Contents of <code>logging.yaml</code> file (some keys are removed to keep it short):</p>
<pre><code>version: 1
...
handlers:
  hl1_console:
    class: logging.StreamHandler
    formatter: simple
    stream: ext://sys.stdout
  hl1_syncapp_file_handler:
    class: logging.FileHandler
    filename: logs/syncapp.log
    formatter: extended
  hl1_synctask_file_handler:
    class: logging.FileHandler
    filename: logs/synctask.log
    formatter: extended
  hl1_celery_file_handler:
    class: logging.FileHandler
    filename: logs/celery.log
    formatter: extended
  hl2_synctask_queue_handler:
    class: logging_.handlers.QueueListenerHandler
    handlers:
      - cfg://handlers.hl1_console
      - cfg://handlers.hl1_synctask_file_handler
    queue: cfg://objects.synctask_queue
  hl2_celery_queue_handler:
    class: logging_.handlers.QueueListenerHandler
    handlers:
      - cfg://handlers.hl1_console
      - cfg://handlers.hl1_celery_file_handler
    queue: cfg://objects.celery_queue
  hl2_syncapp_queue_handler:
    class: logging_.handlers.QueueListenerHandler
    handlers:
      - cfg://handlers.hl1_console
      - cfg://handlers.hl1_syncapp_file_handler
    queue: cfg://objects.syncapp_queue
loggers:
  celery.task:
    level: DEBUG
    handlers:
      - hl2_synctask_queue_handler
    propagate: no
  celery:
    level: INFO
    handlers:
      - hl2_celery_queue_handler
    propagate: no
  syncapp:
    level: DEBUG
    handlers:
      - hl2_syncapp_queue_handler
    propagate: no
root:
  level: INFO
  handlers:
    - hl1_console
</code></pre>
<p>I run the Celery worker with the following command (the logfile parameter is not passed here):</p>
<pre><code>celery -A syncapp.worker:app worker --concurrency=4 --hostname=copy@%h --queues=syncapp.tasks.copy_data --loglevel=INFO
</code></pre>
<p>I want the Celery backend to log its events using the <code>celery</code> logger and the tasks logs to <code>syncapp.tasks</code> logger. The main application logs should go through <code>syncapp</code> logger (which it does).</p>
<p><strong>Update #4</strong></p>
<p>Instead of preventing Celery from configuring logging by connecting to <code>signals.setup_logging</code>, I can manually reinitialize the loggers by connecting to <code>signals.after_setup_logger</code> and <code>signals.after_setup_task_logger</code>. The code looks like this (yes, the repeated code could be tidied up):</p>
<pre><code>app = Celery(
    main=settings.app_name,
    broker=settings.celery_broker_url,
    backend=settings.celery_result_backend,
    include="syncapp.tasks.copy_data",
)

app.conf.update(
    broker_connection_retry_on_startup=True,
    worker_hijack_root_logger=False,
)


@signals.after_setup_logger.connect()
def after_setup_logger(logger, *args, **kwargs):
    logger.info(f"Logger before configuration: {logger.name}, Handlers: {logger.handlers}")
    with open(settings.log_config_path, "r") as config_file:
        YAMLConfig(config_file.read())
    logger_celery = logging.getLogger("celery")
    for handler in logger_celery.handlers:
        logger.addHandler(handler)
    logger.info(f"Logging configured. Name: {logger.name}, Handlers: {logger.handlers}")


@signals.after_setup_task_logger.connect()
def after_setup_task_logger(logger, *args, **kwargs):
    logger.info(f"Logger before configuration: {logger.name}, Handlers: {logger.handlers}")
    with open(settings.log_config_path, "r") as config_file:
        YAMLConfig(config_file.read())
    logger_celery_task = get_task_logger("syncapp.tasks.copy_data")
    for handler in logger_celery_task.handlers:
        logger.addHandler(handler)
    logger.info(f"Task Logging configured. Name: {logger.name}, Handlers: {logger.handlers}")
</code></pre>
<p>This does not quite solve the problem. Some observations:</p>
<p>From the logs, I can see that the handlers got attached with the logger:</p>
<pre><code>syncdata-1 | [2024-10-14 13:19:15.384] [pid 1] [INFO] - [root]: Logging configured. Name: root, Handlers: [<StreamHandler <stdout> (NOTSET)>, <QueueListenerHandler (NOTSET)>]
syncdata-1 | [2024-10-14 13:19:15.390] [pid 1] [INFO] - [celery.task]: Task Logging configured. Name: celery.task, Handlers: [<StreamHandler <stdout> (NOTSET)>, <FileHandler /es-sync/logs/synctask.log (NOTSET)>]
</code></pre>
<p>The main Celery process can send data to the corresponding log file. In the task, I can see the logger has handlers:</p>
<pre><code>syncdata-1 | [2024-10-14 13:19:25.712] [pid 16] [WARNING] - [celery.redirected]: Logger Name: syncapp.tasks.copy_data, Handlers: [<StreamHandler <stdout> (NOTSET)>, <FileHandler /es-sync/logs/synctask.log (NOTSET)>]
</code></pre>
<p>Interesting point here: even though the logger <code>syncapp.tasks.copy_data</code> has a parent logger <code>celery.task</code>, configuring the logger for <code>syncapp.tasks</code> or <code>celery.task</code> did not work. It had to be named exactly <code>syncapp.tasks.copy_data</code>.</p>
<p>Note that my loggers were using a queue handler. The queue object was being accessed from the main Celery thread and the task thread, and this is somehow stopping the queue handler from emitting logs.</p>
<p>If I enable propagation to the root logger, I can see that the logs were, indeed, generated:</p>
<pre><code>syncdata-1 | [2024-10-14 13:19:25.712] [pid 16] [INFO] - [syncapp.tasks.copy_data]: Job ID: 18b9953a-e5cf-4939-a535-2bfe47678016, Task ID: b5aa83e1-cb81-4145-a142-13dd4c59124a. Copying data from source index xyz
</code></pre>
<p>What would be the thread-safe way to utilize a queue handler? For now, my workaround is to skip the queue handler and use the file and console handlers directly.</p>
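<p>To illustrate the per-process pattern I'm converging on (a plain-stdlib sketch, not Celery-specific: the queue, its <code>QueueListener</code>, and the <code>QueueHandler</code> are all created inside the same process, so nothing about the queue crosses a process boundary):</p>

```python
import logging
import logging.handlers
import queue

class ListHandler(logging.Handler):
    """Collects formatted messages so the example is self-checking."""
    def __init__(self):
        super().__init__()
        self.messages = []

    def emit(self, record):
        self.messages.append(record.getMessage())

log_queue = queue.Queue(-1)  # created in this process, never shared
sink = ListHandler()

# The listener drains the queue on its own thread and fans out to the sinks.
listener = logging.handlers.QueueListener(log_queue, sink)
listener.start()

logger = logging.getLogger("syncapp.tasks.copy_data")
logger.setLevel(logging.INFO)
logger.propagate = False
logger.addHandler(logging.handlers.QueueHandler(log_queue))

logger.info("copying data from source index xyz")
listener.stop()  # joins the listener thread, flushing remaining records
```

<p>In Celery terms, each worker process would build its own queue/listener pair (e.g. from a signal handler) instead of reusing one configured in the parent.</p>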
|
<python><logging><celery><fastapi>
|
2024-10-09 09:48:17
| 2
| 2,357
|
Zobayer Hasan
|
79,069,570
| 3,169,872
|
Raising a Python exception from a native thread
|
<p>I have a C++ class <code>Runner</code> that encapsulates running a native thread. This class is exposed to Python via <code>pybind11</code>, so <code>start_thread</code> is called from Python.</p>
<pre class="lang-cpp prettyprint-override"><code>class Runner {
public:
    void start_thread() {
        thread_ = std::jthread([]{
            try {
                while (true) {
                    // do something
                }
            } catch (...) {
                pybind11::gil_scoped_acquire gil;
                pybind11::set_error(PyExc_RuntimeError, "Something horrible happened");
            }
        });
    }

private:
    std::jthread thread_;
};
</code></pre>
<p>An exception might be thrown in the native thread. Is there a way to re-raise an exception from a native thread in Python?</p>
<p>I was thinking about using <a href="https://pybind11.readthedocs.io/en/stable/reference.html#_CPPv4NK17builtin_exception9set_errorEv" rel="nofollow noreferrer">pybind11::set_error</a>, but is it the right way? And if so, what are the guarantees on when Python realises that there is an exception? If exceptions are checked as soon as the GIL is released by a native thread, then this solution should be fine.</p>
|
<python><c++><exception><pybind11>
|
2024-10-09 09:40:58
| 1
| 1,653
|
DoctorMoisha
|
79,069,554
| 10,557,442
|
PySpark: New column with uppercase name is dropped unexpectedly
|
<p>I am trying to add a new column <code>CHANNEL_ID</code> in my PySpark DataFrame based on conditional logic using <code>pyspark.sql.functions.when</code> and after that, removing the old column <code>channel_id</code>, which is no longer needed. However, the new column doesn't appear in the resulting DataFrame when I use the uppercase name <code>CHANNEL_ID</code>. It seems to drop the new <code>CHANNEL_ID</code> column instead.</p>
<p>Here is my code:</p>
<pre class="lang-py prettyprint-override"><code>import pyspark.sql.functions as f
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [
        (1, 101),
        (2, 102),
        (3, 103)
    ], schema="id int, channel_id int"
)

df.show()
</code></pre>
<p>This shows the expected DataFrame:</p>
<pre><code>+---+----------+
| id|channel_id|
+---+----------+
| 1| 101|
| 2| 102|
| 3| 103|
+---+----------+
</code></pre>
<p>I then try to add a new column <code>CHANNEL_ID</code> and drop the original <code>channel_id</code> column:</p>
<pre class="lang-py prettyprint-override"><code>df.withColumns(
    {
        "CHANNEL_ID": f.when(f.col("channel_id") == 101, "First channel")
        .when(f.col("channel_id") == 102, "Second channel")
        .otherwise("-")
    }
).drop("channel_id").show()
</code></pre>
<p>I expected the output to show the <code>id</code> column and the newly created <code>CHANNEL_ID</code> column, but instead, this is the result:</p>
<pre><code>+---+
| id|
+---+
| 1|
| 2|
| 3|
+---+
</code></pre>
<h3>Question:</h3>
<p>Why is the <code>CHANNEL_ID</code> column being dropped, even though I am trying to drop <code>channel_id</code>? It seems like the uppercase name <code>CHANNEL_ID</code> is somehow interfering with the <code>drop</code> method. If I rename the column to something else (like <code>channel_alias</code>), it works fine.</p>
<p>Is this behavior related to case sensitivity or naming conflicts in PySpark? Any clarification or suggestions to resolve this would be appreciated!</p>
|
<python><dataframe><apache-spark><pyspark>
|
2024-10-09 09:38:31
| 1
| 544
|
Dani
|
79,069,392
| 11,277,108
|
How to tell if CloseSpider was raised at the level of CrawlerProcess
|
<p>I need to run my scrapers in a loop, but if certain errors occur in a spider I'd like to be able to raise <code>CloseSpider</code> and for this to filter up to the looping function and stop the loop.</p>
<p>Here's my code so far, which works fine with a fully functioning spider, but I've created a little MRE to test the <code>CloseSpider</code> use case.</p>
<pre><code>from __future__ import print_function

import multiprocessing as mp
import traceback
from time import sleep
from typing import Type

from scrapy import Spider
from scrapy.crawler import CrawlerProcess
from scrapy.exceptions import CloseSpider


class MyTestSpider(Spider):
    name = "my_test_spider"

    def __init__(self) -> None:
        raise CloseSpider


class Process(mp.Process):
    def __init__(self, target: callable, *args, **kwargs):
        mp.Process.__init__(self, target=target, *args, **kwargs)
        self._pconn, self._cconn = mp.Pipe()
        self._exception = None

    def run(self):
        try:
            mp.Process.run(self)
            self._cconn.send(None)
        except Exception as e:
            tb = traceback.format_exc()
            self._cconn.send((e, tb))  # send the exception with its traceback

    @property
    def exception(self):
        if self._pconn.poll():
            self._exception = self._pconn.recv()
        return self._exception


def run_crawler_loop(
    spider: Type[Spider],
    loop_wait_secs: int,
    **kwargs,
) -> None:
    while True:
        run_crawler_reactor_safe(spider=spider, **kwargs)
        sleep(loop_wait_secs)


def run_crawler_reactor_safe(spider: Type[Spider], **kwargs) -> None:
    process = Process(target=run_crawler, kwargs={"spider": spider} | kwargs)
    process.start()
    process.join()
    if process.exception:
        error, traceback = process.exception
        # send an email here
        raise error  # close the loop


def run_crawler(spider: Type[Spider], **kwargs) -> None:
    process = CrawlerProcess()
    crawler = process.create_crawler(spider)
    process.crawl(crawler_or_spidercls=crawler, **kwargs)
    process.start()
    # how would I tell here if the spider was closed due to raising a CloseSpider exception?
    # I'd like to raise that exception here so I can stop the loop by raising an error in run_crawler_reactor_safe


if __name__ == "__main__":
    run_crawler_loop(spider=MyTestSpider, loop_wait_secs=0)
</code></pre>
<p>Running this generates two log entries in <code>twisted</code>:</p>
<pre><code>Unhandled error in Deferred:
2024-10-09 09:39:01 [twisted] CRITICAL: Unhandled error in Deferred:
Traceback (most recent call last):
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 265, in crawl
return self._crawl(crawler, *args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 269, in _crawl
d = crawler.crawl(*args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/twisted/internet/defer.py", line 2260, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/twisted/internet/defer.py", line 2172, in _cancellableInlineCallbacks
_inlineCallbacks(None, gen, status, _copy_context())
--- <exception caught here> ---
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/twisted/internet/defer.py", line 2003, in _inlineCallbacks
result = context.run(gen.send, result)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 155, in crawl
self.spider = self._create_spider(*args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 169, in _create_spider
return self.spidercls.from_crawler(self, *args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/spiders/__init__.py", line 62, in from_crawler
spider = cls(*args, **kwargs)
File "/Users/myusername/GitHub/polgara_v2/so__capture_SpiderClose.py", line 17, in __init__
raise CloseSpider
scrapy.exceptions.CloseSpider:
2024-10-09 09:39:01 [twisted] CRITICAL:
Traceback (most recent call last):
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/twisted/internet/defer.py", line 2003, in _inlineCallbacks
result = context.run(gen.send, result)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 155, in crawl
self.spider = self._create_spider(*args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/crawler.py", line 169, in _create_spider
return self.spidercls.from_crawler(self, *args, **kwargs)
File "/Users/myusername/opt/miniconda3/envs/myenv/lib/python3.11/site-packages/scrapy/spiders/__init__.py", line 62, in from_crawler
spider = cls(*args, **kwargs)
File "/Users/myusername/GitHub/polgara_v2/so__capture_SpiderClose.py", line 17, in __init__
raise CloseSpider
scrapy.exceptions.CloseSpider
</code></pre>
<p>However, these errors seem to be handled here and are not actually raised again in any way that can be captured further up. I've looked through both the <code>crawler</code> and <code>process</code> instances when <code>process.start()</code> finishes in <code>run_crawler()</code> but I can't find any sign of the spider let alone an error message.</p>
<p>I've also tried looking through the <code>twisted</code> package and a post on SO (<a href="https://stackoverflow.com/questions/9295359/stopping-twisted-from-swallowing-exceptions">Stopping Twisted from swallowing exceptions</a>) but got lost quickly...</p>
<p>Any ideas on how I might accomplish what I'm looking to do?</p>
|
<python><scrapy><python-multiprocessing>
|
2024-10-09 08:56:17
| 1
| 1,121
|
Jossy
|
79,069,112
| 17,277,677
|
chromaDB collection.query WHERE
|
<p>This is how I pass values to my <code>where</code> parameter:</p>
<pre><code>results = collection.query(
    query_texts=user_info,
    n_results=10,
    where={"date": "20-04-2023"}
)

print(results['metadatas'], results['distances'])
</code></pre>
<p>What if I want my start date to be "20-04-2023" up to today? I would like to pass a range to this parameter.</p>
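<p>To be explicit about what I'm after, something like the following (a sketch: Chroma's metadata filters support <code>$gte</code>/<code>$lte</code> on numeric values, so here I assume a hypothetical numeric <code>date_ts</code> field that stores the date as a Unix timestamp instead of a string):</p>

```python
from datetime import date, datetime

# Hypothetical numeric metadata field: the date stored as a Unix timestamp.
start_ts = int(datetime(2023, 4, 20).timestamp())
today_ts = int(datetime.combine(date.today(), datetime.min.time()).timestamp())

# Range filter combining two conditions with $and.
where_filter = {
    "$and": [
        {"date_ts": {"$gte": start_ts}},
        {"date_ts": {"$lte": today_ts}},
    ]
}
```

<p>which would then be passed as <code>where=where_filter</code> to <code>collection.query</code>.</p>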
|
<python><chromadb>
|
2024-10-09 07:43:09
| 1
| 313
|
Kas
|
79,069,042
| 16,545,894
|
Is it possible to use Azure Blob Storage as a PostgreSQL database server in a Django project?
|
<p>I want to use a <strong>PostgreSQL</strong> database backed by <strong>Azure Blob Storage</strong> in my <strong>Django</strong> project, because <strong>Azure Blob Storage</strong> is cheaper than an <strong>SQL</strong> database on <strong>Azure</strong>.</p>
<p>Is this technically possible?</p>
|
<python><django><postgresql><azure><azure-blob-storage>
|
2024-10-09 07:21:07
| 1
| 1,118
|
Nayem Jaman Tusher
|
79,068,751
| 4,935,114
|
Convert a pure Python list of lists to a numpy array of pure python lists (of 1 dimension)
|
<p>I have a list of instances of a Python class (call it <code>MyClass</code>) that NumPy can interpret as a multi-dimensional array. However, I'd like to convert that list to a NumPy array <em>of</em> <code>MyClass</code> objects, and not have NumPy convert each <code>MyClass</code> into an inner array. For the sake of the question, you can use a simple list instead of <code>MyClass</code>:</p>
<pre class="lang-py prettyprint-override"><code>a = [1, 2, 3]
b = [4, 5, 6]
data = [a, b]
</code></pre>
<p>You can achieve what I want with:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

arr = np.empty(len(data), dtype=object)
for i, v in enumerate(data):
    arr[i] = v

assert arr.shape == (len(data),)  # Works
</code></pre>
<p>But I'm surprised as to why this doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>arr = np.array(data, dtype=object)
print(arr.shape)  # prints (2, 3), and I want (2,)
</code></pre>
<p>Is there a way I can stop NumPy from delving into the inner lists and instantiating them as arrays? Why doesn't the <code>dtype=object</code> argument imply that? I'd really like to avoid that <code>i, v</code> enumeration loop.</p>
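<p>The closest loop-free alternative I'm aware of is <code>np.fromiter</code>, which (on NumPy &gt;= 1.23, where it accepts <code>dtype=object</code>) fills a 1-D object array element by element without inspecting the inner lists:</p>

```python
import numpy as np

a = [1, 2, 3]
b = [4, 5, 6]
data = [a, b]

# fromiter builds a 1-D array and stores each list object as-is, so the
# inner lists are never promoted to arrays.
arr = np.fromiter(data, dtype=object, count=len(data))
print(arr.shape)  # (2,)
```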
|
<python><arrays><numpy>
|
2024-10-09 06:02:28
| 2
| 2,916
|
Doron Behar
|
79,068,558
| 748,175
|
Stripping prefix in python module name
|
<p>I have the following tree structure:</p>
<pre><code>volleybot/
- vbt/
+- BUILD
+- api.py
- tests/
+- BUILD
+- api_test.py
</code></pre>
<p>From <code>tests/api_test.py</code>, I would like to be able to write: <code>from vbt import api</code>, instead of <code>from volleybot.vbt import api</code>. How can I do that?</p>
|
<python><bazel>
|
2024-10-09 04:49:50
| 1
| 13,030
|
qdii
|
79,068,519
| 13,086,128
|
ERROR: Failed building wheel for pyarrow (Failed to build pyarrow)
|
<p>I am installing <code>pyarrow</code> on Python 3.13:</p>
<pre><code>pip install pyarrow
</code></pre>
<p>this is what I am getting:</p>
<pre><code>C:\Users\dev\AppData\Local\Programs\Python\Python313>py -3.13 -m pip install pyarrow
Collecting pyarrow
Downloading pyarrow-17.0.0.tar.gz (1.1 MB)
---------------------------------------- 1.1/1.1 MB 6.2 MB/s eta 0:00:00
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting numpy>=1.16.6 (from pyarrow)
Using cached numpy-2.1.2-cp313-cp313-win_amd64.whl.metadata (59 kB)
Using cached numpy-2.1.2-cp313-cp313-win_amd64.whl (12.6 MB)
Building wheels for collected packages: pyarrow
Building wheel for pyarrow (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for pyarrow (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [346 lines of output]
<string>:34: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
running bdist_wheel
running build
running build_py
creating build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\array_ops.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\common.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\convert_builtins.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\convert_pandas.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\io.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\microbenchmarks.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\parquet.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\streaming.py -> build\lib.win-amd64-cpython-313\benchmarks
copying benchmarks\__init__.py -> build\lib.win-amd64-cpython-313\benchmarks
creating build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\acero.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\benchmark.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\cffi.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\compute.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\conftest.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\csv.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\cuda.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\dataset.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\feather.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\pandas_compat.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\substrait.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\types.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\util.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_compute_docstrings.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_generated_version.py -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow
creating build\lib.win-amd64-cpython-313\scripts
copying scripts\run_emscripten_tests.py -> build\lib.win-amd64-cpython-313\scripts
copying scripts\test_imports.py -> build\lib.win-amd64-cpython-313\scripts
copying scripts\test_leak.py -> build\lib.win-amd64-cpython-313\scripts
creating build\lib.win-amd64-cpython-313\examples\dataset
copying examples\dataset\write_dataset_encrypted.py -> build\lib.win-amd64-cpython-313\examples\dataset
creating build\lib.win-amd64-cpython-313\examples\flight
copying examples\flight\client.py -> build\lib.win-amd64-cpython-313\examples\flight
copying examples\flight\middleware.py -> build\lib.win-amd64-cpython-313\examples\flight
copying examples\flight\server.py -> build\lib.win-amd64-cpython-313\examples\flight
creating build\lib.win-amd64-cpython-313\examples\parquet_encryption
copying examples\parquet_encryption\sample_vault_kms_client.py -> build\lib.win-amd64-cpython-313\examples\parquet_encryption
creating build\lib.win-amd64-cpython-313\pyarrow\interchange
copying pyarrow\interchange\buffer.py -> build\lib.win-amd64-cpython-313\pyarrow\interchange
copying pyarrow\interchange\column.py -> build\lib.win-amd64-cpython-313\pyarrow\interchange
copying pyarrow\interchange\dataframe.py -> build\lib.win-amd64-cpython-313\pyarrow\interchange
copying pyarrow\interchange\from_dataframe.py -> build\lib.win-amd64-cpython-313\pyarrow\interchange
copying pyarrow\interchange\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow\interchange
creating build\lib.win-amd64-cpython-313\pyarrow\parquet
copying pyarrow\parquet\core.py -> build\lib.win-amd64-cpython-
copying pyarrow\tests\arrow_39313.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\arrow_7980.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\conftest.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\pandas_examples.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\pandas_threaded_import.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\read_record_batch.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\strategies.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_acero.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_adhoc_memory_leak.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_array.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_builder.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_cffi.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_compute.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_convert_builtin.py -> build\lib.win-amd64-cpython-
copying pyarrow\tests\test_feather.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_flight.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_flight_async.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_fs.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_gandiva.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_gdb.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_io.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_json.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_jvm.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_pandas.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_scalars.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_schema.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_sparse_tensor.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_strategies.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_substrait.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_table.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_udf.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\test_util.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\util.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow\tests
creating build\lib.win-amd64-cpython-313\pyarrow\vendored
copying pyarrow\vendored\docscrape.py -> build\lib.win-amd64-cpython-313\pyarrow\vendored
copying pyarrow\vendored\version.py -> build\lib.win-amd64-cpython-313\pyarrow\vendored
copying pyarrow\vendored\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow\vendored
creating build\lib.win-amd64-cpython-313\pyarrow\tests\interchange
copying pyarrow\tests\interchange\test_conversion.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\interchange
copying pyarrow\tests\interchange\test_interchange_spec.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\interchange
copying pyarrow\tests\interchange\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\interchange
creating build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\common.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\conftest.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\encryption.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_basic.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_compliant_nested_type.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_dataset.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_data_types.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_datetime.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_file.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\test_parquet_writer.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
copying pyarrow\tests\parquet\__init__.py -> build\lib.win-amd64-cpython-313\pyarrow\tests\parquet
running egg_info
writing pyarrow.egg-info\PKG-INFO
writing dependency_links to pyarrow.egg-info\dependency_links.txt
writing requirements to pyarrow.egg-info\requires.txt
writing top-level names to pyarrow.egg-info\top_level.txt
reading manifest file 'pyarrow.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '..\LICENSE.txt'
warning: no files found matching '..\NOTICE.txt'
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*~' found anywhere in distribution
warning: no previously-included files matching '#*' found anywhere in distribution
warning: no previously-included files matching '.git*' found anywhere in distribution
warning: no previously-included files matching '.DS_Store' found anywhere in distribution
no previously-included directories found matching '.asv'
writing manifest file 'pyarrow.egg-info\SOURCES.txt'
creating build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\AWSSDKVariables.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\BuildUtils.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\DefineOptions.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindClangTools.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindGTestAlt.cmake -> build\lib.win-amd64-cpython-
copying cmake_modules\FindProtobufAlt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindPython3Alt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindRapidJSONAlt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindSQLite3Alt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindgRPCAlt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\Findlibrados.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\Findlz4Alt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\FindorcAlt.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\GandivaAddBitcode.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\SetupCxxFlags.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\ThirdpartyToolchain.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\UseCython.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\Usevcpkg.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\aws_sdk_cpp_generate_variables.sh -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\san-config.cmake -> build\lib.win-amd64-cpython-313\cmake_modules
copying cmake_modules\snappy.diff -> build\lib.win-amd64-cpython-313\cmake_modules
copying pyarrow\__init__.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_acero.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_acero.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_azurefs.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_compute.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_compute.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_csv.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_csv.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_cuda.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_cuda.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset_orc.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset_parquet.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset_parquet.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dataset_parquet_encryption.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_dlpack.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_feather.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_flight.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_fs.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_fs.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_gcsfs.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_hdfs.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_json.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_json.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_orc.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_orc.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_parquet.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_parquet.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_parquet_encryption.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_parquet_encryption.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_pyarrow_cpp_tests.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_pyarrow_cpp_tests.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_s3fs.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\_substrait.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\array.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\benchmark.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\builder.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\compat.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\config.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\device.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\error.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\gandiva.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\io.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\ipc.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\lib.pxd -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\lib.pyx -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\memory.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\pandas-shim.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\public-api.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\scalar.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\table.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\tensor.pxi -> build\lib.win-amd64-cpython-313\pyarrow
copying pyarrow\types.pxi -> build\lib.win-amd64-cpython-313\pyarrow
creating build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\common.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_acero.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_cuda.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_dataset.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_dataset_parquet.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_feather.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_flight.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_fs.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_python.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libarrow_substrait.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libgandiva.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\libparquet_encryption.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\includes\__init__.pxd -> build\lib.win-amd64-cpython-313\pyarrow\includes
copying pyarrow\tests\bound_function_visit_strings.pyx -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\extensions.pyx -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\tests\pyarrow_cython_example.pyx -> build\lib.win-amd64-cpython-313\pyarrow\tests
copying pyarrow\src\arrow\python\arrow_to_pandas.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\arrow_to_python_internal.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\async.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\benchmark.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\benchmark.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\common.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\common.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\csv.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\csv.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\datetime.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\datetime.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\decimal.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\decimal.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\deserialize.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\deserialize.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\extension_type.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\extension_type.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\filesystem.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\filesystem.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\flight.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\flight.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\gdb.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\gdb.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\helpers.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\helpers.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\inference.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\inference.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\init.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\init.h -> build\lib.win-amd64-cpython-
copying pyarrow\src\arrow\python\ipc.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\ipc.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\iterators.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_convert.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_convert.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_internal.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_interop.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_to_arrow.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\numpy_to_arrow.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\parquet_encryption.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\parquet_encryption.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pch.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\platform.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow_api.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\pyarrow_lib.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_test.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_test.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_to_arrow.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\python_to_arrow.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\serialize.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\serialize.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\type_traits.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.cc -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\udf.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
copying pyarrow\src\arrow\python\visibility.h -> build\lib.win-amd64-cpython-313\pyarrow\src\arrow\python
creating build\lib.win-amd64-cpython-313\pyarrow\tests\data\feather
copying pyarrow\tests\data\feather\v0.17.0.version.2-compression.lz4.feather -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\feather
creating build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\README.md -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.jsn.gz -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.emptyFile.orc -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\TestOrcFile.test1.jsn.gz -> build\lib.win-amd64-
copying pyarrow\tests\data\orc\decimal.jsn.gz -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
copying pyarrow\tests\data\orc\decimal.orc -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\orc
creating build\lib.win-amd64-cpython-313\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.all-named-index.parquet -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.column-metadata-handling.parquet -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.parquet -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\parquet
copying pyarrow\tests\data\parquet\v0.7.1.some-named-index.parquet -> build\lib.win-amd64-cpython-313\pyarrow\tests\data\parquet
running build_ext
creating C:\Users\navigatordev\AppData\Local\Temp\pip-install-zllpdq6m\pyarrow_7674497907ab40dbaa14d8f45104e542\build\temp.win-amd64-cpython-313
-- Running cmake for PyArrow
cmake -DCMAKE_INSTALL_PREFIX=C:\Users\navigatordev\AppData\Local\Temp\pip-install-zllpdq6m\pyarrow_7674497907ab40dbaa14d8f45104e542\build\lib.win-amd64-cpython-313\pyarrow -DPYTHON_EXECUTABLE=C:\Users\dev\AppData\Local\Programs\Python\Python313\python.exe -DPython3_EXECUTABLE=C:\Users\dev\AppData\Local\Programs\Python\Python313\python.exe -DPYARROW_CXXFLAGS= -G "Visual Studio 15 2017 Win64" -DPYARROW_BUNDLE_ARROW_CPP=off -DPYARROW_BUNDLE_CYTHON_CPP=off -DPYARROW_GENERATE_COVERAGE=off -DCMAKE_BUILD_TYPE=release C:\Users\navigatordev\AppData\Local\Temp\pip-install-zllpdq6m\pyarrow_7674497907ab40dbaa14d8f45104e542
error: command 'cmake' failed: None
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyarrow
Failed to build pyarrow
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (pyarrow)
</code></pre>
|
<python><pip><python-3.13>
|
2024-10-09 04:24:59
| 2
| 30,560
|
Talha Tayyab
|
79,068,298
| 6,471,140
|
ValueError: Supplied state dict for layers does not contain `bitsandbytes__*` and possibly other `quantized_stats` (when loading a saved quantized model)
|
<p>We are trying to deploy a quantized Llama 3.1 70B model (from Hugging Face, using bitsandbytes). The quantization part works fine: we check the model's memory usage, which is correct, and we also test getting predictions from the model, which are also correct. The problem is that after saving the quantized model and then loading it, we get</p>
<blockquote>
<p>ValueError: Supplied state dict for layers.0.mlp.down_proj.weight does
not contain <code>bitsandbytes__*</code> and possibly other <code>quantized_stats</code>
components</p>
</blockquote>
<p>What we do is:</p>
<ul>
<li>Save the quantized model using the usual save_pretrained(save_dir)</li>
<li>Try to load the model using AutoModel.from_pretrained, passing the save_dir and the same quantization_config used when creating the model.</li>
</ul>
<p>Here is the code:</p>
<pre><code>import torch
from transformers import (
    AutoModel,
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)

model_id = "meta-llama/Meta-Llama-3.1-70B-Instruct"
cache_dir = "/home/ec2-user/SageMaker/huggingface_cache"

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model_4bit = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    quantization_config=quantization_config,
    low_cpu_mem_usage=True,
    offload_folder="offload",
    offload_state_dict=True,
    cache_dir=cache_dir,
)

tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir=cache_dir)

pt_save_directory = "test_directory"
tokenizer.save_pretrained(pt_save_directory)
model_4bit.save_pretrained(pt_save_directory)

## test load it
loaded_model = AutoModel.from_pretrained(
    pt_save_directory,
    quantization_config=quantization_config,
)
</code></pre>
|
<python><artificial-intelligence><huggingface-transformers><large-language-model><quantization>
|
2024-10-09 02:25:28
| 1
| 3,554
|
Luis Leal
|
79,068,176
| 2,955,827
|
Why do we need sync_to_async in Django?
|
<p>The <a href="https://docs.djangoproject.com/en/5.1/topics/async/#asgiref.sync.sync_to_async" rel="nofollow noreferrer">document</a> said:</p>
<blockquote>
<p>The reason this is needed in Django is that many libraries, specifically database adapters, require that they are accessed in the same thread that they were created in. Also a lot of existing Django code assumes it all runs in the same thread, e.g. middleware adding things to a request for later use in views.</p>
</blockquote>
<p>But another question <a href="https://stackoverflow.com/questions/48966277/is-it-safe-that-when-two-asyncio-tasks-access-the-same-awaitable-object">Is it safe that when Two asyncio tasks access the same awaitable object?</a> said python's asyncio is thread safe.</p>
<p>And as far as I know, since the GIL still exists, accessing one object from multiple threads should be thread safe.</p>
<p>Can anyone give a minimal example of why we have to use <code>await sync_to_async(foo)()</code> instead of directly calling <code>foo()</code> in Django or other async apps?</p>
<p>This question (<a href="https://stackoverflow.com/questions/75462449/how-does-sync-to-async-convert-sync-functions-to-async">How does sync_to_async convert sync functions to async</a>) does not answer my question: in its example, using <code>sync_to_async</code> does not show any significant difference, the order of function calls inside <code>second_job</code> is the same, and it has nothing to do with avoiding serious conflicts.</p>
<p>I know how <code>sync_to_async</code> works, but why do we need to run a sync function in a separate thread in an async context?</p>
|
<python><django><multithreading><python-asyncio>
|
2024-10-09 00:59:27
| 1
| 3,295
|
PaleNeutron
|
79,068,166
| 10,150,736
|
Jupyter notebook module not found python3
|
<p>I am receiving this module error while using <code>jupyter notebook</code>:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import sisl
%matplotlib inline
plt.rcParams['font.size'] = 16
Emin, Emax = -15, 15
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33660/856295374.py in <module>
1 import numpy as np
2 import matplotlib.pyplot as plt
----> 3 import sisl
4 get_ipython().run_line_magic('matplotlib', 'inline')
5 plt.rcParams['font.size'] = 16
ModuleNotFoundError: No module named 'sisl'
</code></pre>
<p>However: <code>sisl</code> it is successfully installed as version 0.11 as you can see from here:</p>
<pre><code>(base) my@my-laptop:~$ sisl --version
/home/path/miniconda3/lib/python3.9/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.2
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
sisl: 0.11.0
git-hash: 0
</code></pre>
<p>I googled and looked on various examples such as: <a href="https://stackoverflow.com/questions/41910139/python-error-modulenotfounderror">this one</a>, <a href="https://stackoverflow.com/questions/54598292/modulenotfounderror-when-trying-to-import-module-from-imported-package">this one</a> and also <a href="https://stackoverflow.com/questions/73072257/resolve-warning-a-numpy-version-1-16-5-and-1-23-0-is-required-for-this-versi">this one</a></p>
<p>I also made sure the path is added in my <code>~/.bashrc</code> (edited with <code>gedit</code>).
Here is a quick overview:</p>
<pre><code>`export PYTHONPATH="${PYTHONPATH}:/home/path/Downloads/Siesta/Slides_Tutorials/All/Tutorials-2021/DFT-TB/"`
</code></pre>
<p>For completeness I am also attaching a screenshot for you to see:</p>
<p><a href="https://i.sstatic.net/TaazY3Jj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TaazY3Jj.png" alt="problem" /></a></p>
<p>Can anybody see what the problem might be and try to point me in the right direction?</p>
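One thing I already checked (my own diagnostic, included for completeness): the interpreter the Jupyter kernel actually runs and the search paths it uses, since those can differ from the shell where <code>sisl</code> was installed:

```python
import sys

# The interpreter the current kernel/script is actually running under.
print(sys.executable)
# The first few module search paths it will consult for imports.
print(sys.path[:3])
```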
|
<python><python-3.x><jupyter-notebook><modulenotfounderror>
|
2024-10-09 00:50:29
| 0
| 2,474
|
Emanuele
|
79,067,938
| 4,963,334
|
Peewee upgrade causing weird autocommit behaviour
|
<p>We currently use Peewee 2.x, and this is the code we use for a custom MySQL connection:</p>
<pre><code>from peewee import MySQLDatabase

class SQLDb(MySQLDatabase):
    def connect(self, *args, **kwargs):
        attempts = 5
        while attempts > 0:
            # Logic to retry db connection
            ...

    def begin(self):
        self.execute_sql('set autocommit=0')
        self.execute_sql('begin')
</code></pre>
<p>I run transactions by calling <code>DB.transaction()</code>.</p>
<p>Once we upgraded Peewee to 3.17.x, we are seeing weird behaviour.</p>
<p>With <code>begin</code> left as it is:</p>
<pre><code>def begin(self):
    self.execute_sql('set autocommit=0')
    self.execute_sql('begin')
</code></pre>
<p>When the transaction completes, and when a MySQL command runs outside a transaction, the data is not committed.
When I remove the line <code>self.execute_sql('set autocommit=0')</code>, it starts working as expected.</p>
<p>But if I remove <code>self.execute_sql('set autocommit=0')</code>, I believe I will be changing the transaction behaviour to commit after every MySQL command. How do I ensure that it works as expected with Peewee 3.17.x?</p>
<p>It works fine up to Peewee 3.15.3. Peewee 3.16 has some breaking changes related to Peewee's autocommit behaviour.</p>
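To make the autocommit distinction I mean concrete, here is a stdlib-only analogy using <code>sqlite3</code> instead of MySQL/Peewee (so an illustration of the general behaviour, not of our actual stack):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode:
# every statement takes effect immediately, no commit() needed.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")

# The default mode opens an implicit transaction before DML statements;
# uncommitted work can be rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()  # undoes the INSERT because it was never committed

print(auto.execute("SELECT COUNT(*) FROM t").fetchone()[0])
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])
```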
|
<python><mysql><peewee><flask-peewee>
|
2024-10-08 22:15:12
| 2
| 1,525
|
immrsteel
|
79,067,913
| 6,780,025
|
Is it possible to load configs from a file for a "dynamic config group" in Hydra?
|
<p>Imagine a complex config that specifies many ML models, each with some number of layers. Something like</p>
<pre><code>from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Layer:
    width: int
    activation: str
    ...

@dataclass
class Model:
    layers: List[Layer]

@dataclass
class Forest:
    models: Dict[str, Model]
</code></pre>
<p>Then, I have a yaml file as my main config where I define a forest of models. Then, on command line, I can override individual fields of some layer with something like <code>models.cnn.layers.1.width=10</code>. I would like to have a directory of various layers and be able to do something like <code>models.cnn.layers.1=wide_cnn</code> on command line, where <code>wide_cnn.yaml</code> is some yaml file in that directory.</p>
<p>I tried various things like using <code>wide_cnn.yaml</code>, using absolute path, using <code>models/cnn/layers.1</code>, etc. It seems like the core issue is that to have hydra load a file, the text before the <code>=</code> on command line (call it a "key") must be a config group. I don't quite understand what makes something a config group, but it seems like it corresponds to a directory structure. In my case, the key can be basically anything because users can name their models arbitrarily.</p>
<p>Any suggestions how to do something like this? Thanks!</p>
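As a plain-Python sketch of what a field override like <code>models.cnn.layers.1.width=10</code> amounts to on the structures above (the model name and layer values are just examples):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Layer:
    width: int
    activation: str

@dataclass
class Model:
    layers: List[Layer]

@dataclass
class Forest:
    models: Dict[str, Model]

# A tiny forest with one model and two layers.
forest = Forest(models={
    "cnn": Model(layers=[Layer(8, "relu"), Layer(16, "relu")]),
})

# The command-line override models.cnn.layers.1.width=10 is
# equivalent to this attribute assignment:
forest.models["cnn"].layers[1].width = 10
print(forest.models["cnn"].layers[1])
```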
|
<python><fb-hydra>
|
2024-10-08 22:05:39
| 1
| 3,643
|
iga
|
79,067,793
| 2,016,632
|
When I register blueprints within __init__.py they fail to register, why is that?
|
<p>There are dozens of questions about disappearing blueprints on StackExchange, but this may be the tersest example yet. Assume <code>app.py</code> is in the upper folder and <code>__init__.py</code>, etc. are in a subfolder (<code>test_bad</code> in the failing case, <code>test_good</code> in the working one). The following fails:</p>
<p><strong>app.py</strong></p>
<pre><code>from test_bad import create_app

# Start the app
if __name__ == '__main__':
    app = create_app()
    for rule in app.url_map.iter_rules():
        print(rule)
    app.run(debug=True, use_reloader=False)
</code></pre>
<p><strong>__init__.py</strong></p>
<pre><code>from flask import Flask, render_template

from .routes import dash

def create_app():
    app = Flask(__name__)
    app.register_blueprint(dash)
    return app
</code></pre>
<p><strong>routes.py</strong></p>
<pre><code>from flask import Blueprint
# Define the blueprint
dash = Blueprint('dash', __name__)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from .routes import dash

@dash.route('/', methods=['GET', 'POST'])
def index():
    return '<html><body><h1>Hello World!</h1></body></html>'

@dash.route('/verify', methods=['GET', 'POST'])
def check_verification():
    return '<html><body><h1>And Farewell!</h1></body></html>'
</code></pre>
<p>whereas the following works just fine:</p>
<p><strong>app.py</strong></p>
<pre><code>from test_good.views import create_app

# Start the app
if __name__ == '__main__':
    app = create_app()
    for rule in app.url_map.iter_rules():
        print(rule)
    app.run(debug=True, use_reloader=False)
</code></pre>
<p><strong>__init__.py</strong></p>
<pre><code></code></pre>
<p><strong>routes.py</strong></p>
<pre><code>from flask import Blueprint
# Define the blueprint
dash = Blueprint('dash', __name__)
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>from flask import Flask, render_template
from . routes import dash
@dash.route('/', methods=['GET', 'POST'])
def index():
return '<html><body><h1>Hello World!</h1></body></html>'
@dash.route('/verify', methods=['GET','POST'])
def check_verification():
return '<html><body><h1>And Farewell!</h1></body></html>'
def create_app():
app = Flask(__name__)
app.register_blueprint(dash)
return app
</code></pre>
<p>Looking at the printed endpoint rules, you can see that in the "bad" case those endpoints did not get added to the app (or the app got "refreshed" somehow), whereas in the good case the endpoints are just fine.</p>
<p>From a code structure perspective, I would have thought that __init__ was the correct place, but apparently not.</p>
<p>What is a better way to write this? [Or am I missing something?] Thanks.</p>
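<p>For what it's worth, the symptom in the "bad" layout is consistent with <code>views.py</code> never being imported: nothing in that package references it, so the <code>@dash.route</code> decorators never execute and the blueprint is registered with zero routes (a <code>from . import views</code> after the blueprint is defined is the usual workaround). As a rough illustration, using a hypothetical stand-in class rather than Flask itself, route registration is just a side effect of the decorator running at import time:</p>

```python
# Minimal stand-in for Flask's decorator-based route registration.
# MiniBlueprint is a hypothetical class, not part of Flask.
class MiniBlueprint:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def decorator(fn):
            # Registration happens as a side effect of this code running,
            # i.e. only when the module containing it is actually imported.
            self.routes[path] = fn
            return fn
        return decorator

dash = MiniBlueprint()

# If this decoration lived in an un-imported views.py, it would never
# run, and dash.routes would stay empty at register_blueprint() time:
@dash.route('/')
def index():
    return 'Hello World!'

print(sorted(dash.routes))  # prints: ['/']
```

<p>In the failing layout nothing ever runs the equivalent of the decorated block, which is why the url map comes out empty.</p>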
|
<python><flask>
|
2024-10-08 21:11:06
| 1
| 619
|
Tunneller
|
79,067,766
| 2,437,514
|
Render LATEX math in Excel using Python in Excel
|
<p>I'd like to use <strong>Python in Excel</strong> to display some LATEX math formulas in cells.</p>
<p>Creating these images is easy when just using the <code>sympy</code> library, but one has to have <code>latex</code> installed. <strong>Python in Excel</strong> runs on the Anaconda distribution, and it does not include a standalone LATEX rendering library (that I am aware of).</p>
<p>However <code>matplotlib</code> is able to render LATEX. I wonder if there's a way to "trick" <code>sympy</code> into using the same LATEX rendering library that is being used by <code>matplotlib</code>? Or maybe there's another way.</p>
<p><a href="https://i.sstatic.net/82bKXKqT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82bKXKqT.png" alt="enter image description here" /></a></p>
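<p>One possible route (a sketch, not specific to Python in Excel): matplotlib's built-in <em>mathtext</em> engine renders a useful subset of LaTeX with no external LaTeX installation, so the string produced by <code>sympy.latex(expr)</code> can often be drawn directly:</p>

```python
import matplotlib
matplotlib.use("Agg")          # draw off-screen, no GUI needed
import matplotlib.pyplot as plt

def render_latex(expr, path, fontsize=24):
    """Render a mathtext-compatible LaTeX string to a PNG file.

    Uses matplotlib's built-in mathtext engine, so no external LaTeX
    installation is required (only a subset of LaTeX is supported).
    """
    fig = plt.figure(figsize=(0.01, 0.01))
    fig.text(0, 0, f"${expr}$", fontsize=fontsize)
    fig.savefig(path, bbox_inches="tight", pad_inches=0.1, dpi=200,
                transparent=True)
    plt.close(fig)

# sympy.latex(expr) output can usually be passed straight through:
render_latex(r"\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}",
             "formula.png")
```

<p>Commands that mathtext does not understand will raise a <code>ValueError</code>, so this only works for the supported subset.</p>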
|
<python><excel><latex><sympy>
|
2024-10-08 20:59:07
| 2
| 45,611
|
Rick
|
79,067,648
| 14,179,793
|
Docker exec with python subprocess.Popen fails
|
<p><strong>Objective</strong>
Use <code>docker exec</code> on an EC2 instance to run a command in another container that is on the instance.</p>
<p><strong>Code</strong>:</p>
<pre><code>from base64 import decode
import json
import subprocess
with open('temp.json', 'r') as file:
event = json.load(file)
encoded = json.dumps(event)
a = ['/usr/bin/docker', 'exec', '-it', '4015980c98fb', 'python', '-m', 'code.handler', encoded]
if __name__ == '__main__':
with subprocess.Popen(a, shell=False, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as test:
print(f'ARGS: {test.args}')
for line in test.stdout:
line = line.decode().strip()
print(line)
</code></pre>
<p><strong>Observed Behavior</strong><br />
On Local Machine: The subprocess is correctly initiated and run<br />
AWS ECS: <code>Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</code></p>
<ul>
<li>Is docker running? From inside the running ECS task:</li>
</ul>
<pre><code>root@532795fd1aad:/# docker -h
Flag shorthand -h has been deprecated, use --help
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Common Commands:
run Create and run a new container from an image
...
</code></pre>
<p><strong>shell=True</strong><br />
With <code>shell=True</code>, the usage text is printed both locally and in AWS ECS, even though the command appears correct:</p>
<pre><code>ARGS: ['/usr/bin/docker', 'exec', '-it', '4015980c98fb', 'python', '-m', 'code.handler' '{...}',
Usage: docker [OPTIONS] COMMAND
A self-sufficient runtime for containers
Options:
--config string Location of client config files (default "/home/michael/.docker")
-c, --context string Name of the context to use to connect to the daemon (overrides DOCKER_HOST env var and default context set with "docker context
use")
...
</code></pre>
<p>What am I missing/not understanding here?</p>
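<p>Two hedged observations. The ECS error means the Docker CLI inside the task cannot reach a daemon socket: having the <code>docker</code> binary installed is not enough, <code>/var/run/docker.sock</code> (or a reachable <code>DOCKER_HOST</code>) must also exist, which on ECS typically requires mounting the host socket into the task (and is not possible on Fargate at all). Separately, <code>-t</code> requests a TTY, which a non-interactive task does not have. A small sketch that fails fast on the missing socket and drops <code>-t</code> (the binary path and container id are taken from the question):</p>

```python
import os
import subprocess

DOCKER_SOCK = "/var/run/docker.sock"  # default local daemon socket

def build_exec_cmd(container_id, payload):
    # '-i' keeps stdin open; '-t' is dropped because a TTY is not
    # available in a non-interactive environment like an ECS task.
    return ["/usr/bin/docker", "exec", "-i", container_id,
            "python", "-m", "code.handler", payload]

def run_in_container(container_id, payload):
    if not os.path.exists(DOCKER_SOCK):
        raise RuntimeError(
            f"{DOCKER_SOCK} not found: the Docker socket is not mounted "
            "into this container, so 'docker exec' cannot reach a daemon."
        )
    return subprocess.run(build_exec_cmd(container_id, payload),
                          capture_output=True, text=True)

print(build_exec_cmd("4015980c98fb", "{}"))
```

<p>If both containers are in the same ECS task, an alternative that avoids the daemon entirely is to expose the handler over a local HTTP endpoint instead of <code>docker exec</code>.</p>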
|
<python><docker><amazon-ecs>
|
2024-10-08 20:09:06
| 0
| 898
|
Cogito Ergo Sum
|
79,067,505
| 2,056,452
|
Mapping between GeoPandas coordinates and MatPlotLib coordinates
|
<p>I'd like to draw a map with <code>geopandas</code> which I then want to overlay with standard <code>matplotlib</code> plots as subplots.</p>
<pre><code>map = gpd.read_file("my_shapefile.shp")
map["values"] = np.random.rand(len(map))
fig, ax = plt.subplots(1, 1, figsize=(15, 15))
map.plot(column="values", ax=ax, legend=True, cmap="Reds")
map["county_centers"] = map.geometry.centroid # With this, I get coordinates
# of a point in the center of the polygons
# in "Map"-coordinates
subax = fig.add_axes([0.45, # left
0.45, # bottom
0.1, # width
0.1]) # height
subax.pie([0.2,0.8], labels=["A", "B"])
fig.show()
</code></pre>
<p>It would be great, if I could just use the centroids of the polygons to position my subplots, but they are on a different coordinate system.</p>
<p>I tried to normalize them:</p>
<pre><code>minx, miny, maxx, maxy = map.total_bounds
p = map["county_centers"][0]
x = (p.x - minx) / (maxx - minx)
y = (p.y - miny) / (maxy - miny)
</code></pre>
<p>and a variety of different calculations, omitting the subtraction in one place or the other. All of them end up outside the map.</p>
<p><strong>Is there any way, to easily map the <em>centroids</em> to matplotlib coordinates?</strong></p>
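<p>For reference, <code>fig.add_axes</code> expects figure-fraction coordinates, so a centroid in data coordinates has to pass through the axes' position box as well. In matplotlib the direct route is <code>fig.transFigure.inverted().transform(ax.transData.transform((x, y)))</code>; the underlying arithmetic, sketched as a plain function (the bounds tuples below are illustrative placeholders):</p>

```python
def data_to_figure_fraction(px, py, data_bounds, axes_bounds):
    """Map a point in data coordinates to figure-fraction coordinates.

    data_bounds: (minx, miny, maxx, maxy), e.g. from map.total_bounds
    axes_bounds: (left, bottom, width, height) of the axes in figure
                 fractions, e.g. ax.get_position().bounds in matplotlib
    """
    minx, miny, maxx, maxy = data_bounds
    left, bottom, width, height = axes_bounds
    # Normalise into [0, 1] within the data extent, then place that
    # fraction inside the axes' box on the figure canvas:
    fx = left + width * (px - minx) / (maxx - minx)
    fy = bottom + height * (py - miny) / (maxy - miny)
    return fx, fy

# A centroid at the middle of the data extent lands at the middle
# of the axes' box, not at (0.5, 0.5) of the figure:
print(data_to_figure_fraction(50, 50, (0, 0, 100, 100),
                              (0.125, 0.11, 0.775, 0.77)))
```

<p>The likely reason the plain normalisation landed outside the map is that it skipped the <code>left</code>/<code>bottom</code>/<code>width</code>/<code>height</code> of the axes box. Note the transform approach only gives correct answers after the layout is final (e.g. after <code>fig.canvas.draw()</code>).</p>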
|
<python><matplotlib><geopandas>
|
2024-10-08 19:23:09
| 1
| 13,801
|
derM
|
79,067,419
| 8,521,346
|
Django Prefetch Related Still Queries
|
<p>I have the function</p>
<pre><code>def selected_location_equipment(self):
qs = (Equipment.objects
.prefetch_related('service_logs')
.all())
return qs
</code></pre>
<p>That returns a queryset with a few related fields grabbed.</p>
<p>The problem is that when i access the prefetched data later in my code, it executes a query again.</p>
<p>I've stepped through the Django code and can see where it is checking the cache for the <code>.all()</code> in one spot and doesn't query, but then when it's called here, it's almost like the cache is cleared.</p>
<p>Debug Toolbar shows a query for each iteration of the loop as well.</p>
<pre><code>for e in equipments:
last_service = list(e.service_logs.all())[-1]
for log in e.service_logs.all():
# do other stuff
...
</code></pre>
<p>Here's the basic model definition for Equipment</p>
<pre><code>class ServiceLog(models.Model):
equipment = models.ForeignKey(Equipment,
on_delete=models.CASCADE,
related_name='service_logs')
</code></pre>
|
<python><django>
|
2024-10-08 18:53:09
| 2
| 2,198
|
Bigbob556677
|
79,067,351
| 2,056,452
|
Move labels close to "axis" on pyplot "radar_chart"
|
<p>I use matplotlib and geopandas to plot a map.
Now I'd like to plot additional information on top of this map, e.g. using radar charts or pie charts.</p>
<p>I got this sweet function, which creates radar charts for me</p>
<pre><code>def radar_chart(ax, data, categories, color, label_fontsize=8):
N = len(categories)
angles = np.linspace(0, 2 * np.pi, N, endpoint=False).tolist()
data = np.concatenate((data, [data[0]]))
angles += [angles[0]]
ax.fill(angles, data, color=color, alpha=0.25)
ax.plot(angles, data, color=color, linewidth=2)
ax.set_xticks(angles[:-1])
    ax.set_xticklabels(categories, fontsize=label_fontsize)  # adjust font size
ax.yaxis.set_visible(False)
</code></pre>
<p>which looks pretty good, as long as the radar charts are fairly large.<br />
<a href="https://i.sstatic.net/4aDSGedL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4aDSGedL.png" alt="large radar chart" /></a></p>
<p>Yet, this size is unwieldy, as the charts would cover most of my map. However, if I plot them smaller, they look ridiculous, since the padding between the axis and the labels is way out of proportion, especially if I use an adjusted font size.</p>
<p><a href="https://i.sstatic.net/oTcrv4wA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTcrv4wA.png" alt="enter image description here" /></a></p>
<p><strong>Is there a way to change this padding?</strong></p>
<p>If I cannot specify this padding, I figured I might just plot the subplot at a larger size, so the proportions within the subplot look nice, and then scale it down so it fits in the main plot.</p>
<p>Yet, I failed to find a way to do that. A difficulty might be that <em>scale</em> has a different meaning in the context of matplotlib than resizing an image...</p>
<p><strong>Can I <em>scale</em> subplots as in downscaling an image?</strong></p>
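<p>One knob that may help (a sketch, not a guaranteed fix for every proportion problem) is the tick label padding: <code>ax.tick_params(axis='x', pad=...)</code> controls the distance between the polar axis and its category labels, and a negative value pulls them inward:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

fig = plt.figure(figsize=(8, 8))
# Small inset axes, as in the map overlay scenario:
ax = fig.add_axes([0.45, 0.45, 0.1, 0.1], polar=True)

categories = ["A", "B", "C", "D"]
angles = np.linspace(0, 2 * np.pi, len(categories), endpoint=False)
ax.set_xticks(angles)
ax.set_xticklabels(categories, fontsize=6)
ax.tick_params(axis="x", pad=-4)  # negative pad pulls labels toward the axis
ax.yaxis.set_visible(False)
fig.savefig("radar_small.png")
```

<p>Since the pad is specified in points, it does not shrink with the axes, which is exactly why small radar charts look out of proportion with the default value.</p>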
|
<python><matplotlib>
|
2024-10-08 18:29:25
| 1
| 13,801
|
derM
|
79,067,242
| 8,963,682
|
Bot Framework: 'access_token' Error When Sending Messages in Teams Bot Using Python
|
<p>I'm building a Teams bot using the Bot Framework SDK for Python, integrated with FastAPI. The bot is supposed to send periodic updates to users every 60 seconds after they send an initial message. However, I'm facing issues with sending messages due to an <code>access_token</code> error.</p>
<p>When I use <code>turn_context</code> within <code>on_message_activity</code>, the bot responds correctly to the user's initial message. But when trying to send periodic updates outside of <code>on_message_activity</code>, I'm encountering the <code>access_token</code> error.</p>
<p>I'm aware that a new <code>TurnContext</code> needs to be created for each conversation, but I'm unsure how to properly implement this. I've followed the necessary steps as far as I know, but the bot still isn't sending the periodic messages.</p>
<p><strong>Code:</strong></p>
<pre><code>import asyncio
from botbuilder.core import ActivityHandler, TurnContext, MessageFactory, BotFrameworkAdapterSettings, BotFrameworkAdapter
from botbuilder.schema import Activity, ConversationReference
from botbuilder.core.teams import TeamsActivityHandler
from typing import Dict
from botframework.connector.auth import ClaimsIdentity, MicrosoftAppCredentials
from fastapi import APIRouter
import json
# Logging setup
# (Assuming a logger is properly configured)
import logging
logger = logging.getLogger(__name__)
# Bot settings
APP_ID = 'your-app-id-here'
APP_PASSWORD = 'your-app-password-here'
# Adapter configuration
CONFIGS = BotFrameworkAdapterSettings(APP_ID, APP_PASSWORD)
ADAPTER = BotFrameworkAdapter(CONFIGS)
# Router setup for FastAPI
router = APIRouter()
class NotifyBot(TeamsActivityHandler):
def __init__(self, adapter):
super().__init__()
self.adapter = adapter
self.conversation_references: Dict[str, ConversationReference] = {}
self.scheduled_tasks: Dict[str, asyncio.Task] = {}
logger.info("NotifyBot initialized")
async def on_message_activity(self, turn_context: TurnContext):
# Store conversation reference when user sends a message
self.add_conversation_reference(turn_context.activity)
# Extract the user ID from the incoming activity
user_id = turn_context.activity.from_property.id
# Send a response to acknowledge the user
await turn_context.send_activity("Thanks for messaging me. I'll start sending you updates now.")
# Start sending periodic messages if not already scheduled
if user_id not in self.scheduled_tasks:
logger.info(f"Scheduling periodic messages for user {user_id}")
self.scheduled_tasks[user_id] = asyncio.create_task(self.send_periodic_messages(user_id))
def add_conversation_reference(self, activity: Activity):
# Add or update the conversation reference for a user
conversation_reference = TurnContext.get_conversation_reference(activity)
self.conversation_references[conversation_reference.user.id] = conversation_reference
logger.info(f"Added conversation reference for user {conversation_reference.user.id}")
async def send_periodic_messages(self, user_id: str):
while True:
try:
                # Send a message every 60 seconds
await asyncio.sleep(60)
await self.send_personal_message(user_id)
except asyncio.CancelledError:
logger.info(f"Cancelled periodic messages for user {user_id}")
break
except Exception as e:
logger.error(f"Error sending periodic message to user {user_id}: {str(e)}")
async def send_personal_message(self, user_id: str):
conversation_reference = self.conversation_references.get(user_id)
if conversation_reference:
try:
logger.info(f"Sending message to user {user_id}")
# Use continue_conversation to send the message
await self.adapter.continue_conversation(
bot_app_id=APP_ID,
reference=conversation_reference,
logic=lambda turn_context: asyncio.create_task(self.send_message_callback(turn_context, "Periodic update: hi!"))
)
except Exception as e:
logger.error(f"Error sending message to user {user_id}: {str(e)}")
else:
logger.error(f"No conversation reference found for user {user_id}")
async def send_message_callback(self, turn_context: TurnContext, message: str):
try:
await turn_context.send_activity(MessageFactory.text(message))
logger.info(f"Message sent to {turn_context.activity.from_property.id}")
except Exception as e:
logger.error(f"Error sending message: {str(e)}")
</code></pre>
<p><strong>Issues I'm Facing:</strong></p>
<ul>
<li>The bot successfully responds to the initial message in
<code>on_message_activity</code>, but it doesn't send the periodic messages every
60 seconds as intended.</li>
<li>When trying to send messages using <code>continue_conversation</code>, I receive
an 'access_token' error.</li>
<li>I'm concerned that the token used in the initial <code>turn_context</code> will
expire, and I need to create a new <code>turn_context</code> or properly handle
tokens, but I'm unsure how to proceed.</li>
</ul>
<p><strong>What I've Tried:</strong></p>
<ul>
<li>Ensured that <code>APP_ID</code> and <code>APP_PASSWORD</code> are correctly set.</li>
<li>Attempted to acquire a new access token using
<code>MicrosoftAppCredentials</code>, but I'm not sure if I'm implementing it
correctly.</li>
<li>Reviewed documentation and examples on how to use
continue_conversation and <code>ClaimsIdentity</code>, but still facing issues.</li>
</ul>
<p><strong>Questions:</strong></p>
<ol>
<li>How can I correctly send periodic messages to the user every 60
seconds?</li>
<li>How do I properly handle the <code>access_token</code> and <code>ClaimsIdentity</code> when
using <code>continue_conversation</code>?</li>
<li>Do I need to create a new <code>TurnContext</code> for each message, and if so,
how can I do that?</li>
</ol>
<p>Any help or guidance on how to resolve these issues would be greatly appreciated!</p>
<p><strong>Additional Information:</strong></p>
<ul>
<li>I'm using the Bot Framework SDK for Python.</li>
<li>The bot is hosted using FastAPI.</li>
<li>The goal is to have the bot send updates to users periodically after
they initiate a conversation.</li>
</ul>
|
<python><botframework><fastapi><microsoft-teams>
|
2024-10-08 17:58:57
| 0
| 617
|
NoNam4
|
79,067,153
| 8,232,165
|
KITTI IMU data gaps interpolation
|
<p>I am using some IMU data from the KITTI raw dataset. There are gaps in the OxTS IMU data. I want to interpolate the gaps. The data looks periodic. Is there any way to interpolate them realistically? Below is the visualization of the accelerations on the X, Y and Z axes.</p>
<p><a href="https://i.sstatic.net/K2QkBJGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/K2QkBJGy.png" alt="acceleration at x axis" /></a></p>
<p><a href="https://i.sstatic.net/HCZ0eROy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HCZ0eROy.png" alt="acceleration at y axis" /></a></p>
<p><a href="https://i.sstatic.net/M6dAH98p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6dAH98p.png" alt="acceleration at z axis" /></a></p>
<p>I tried to fit the data with a trigonometric function, however it was not good enough.</p>
<pre class="lang-py prettyprint-override"><code>def trigonometric_model2(t, a1, b1, a2, b2, a3, b3, a4, b4, a5, b5, a6, b6, a7, b7, a8, b8, c):
return (a1 * np.sin(b1 * t) + a2 * np.cos(b2 * t) +
a3 * np.sin(b3 * t) + a4 * np.cos(b4 * t) +
a5 * np.sin(b5 * t) + a6 * np.cos(b6 * t) +
a7 * np.sin(b7 * t) + a8 * np.cos(b8 * t) +
c)
# data is a n * 4 array.
# The first col is the timestamps,
# The second col is the acceleration at x direction
# The third col is the accelration at the y direction
# The last col is the acceleration at the z direction
ts = data[:, 0] # there is an ~2 second gap as shown above
axs = data[:, 1]
# try to fit the model on the data
params, _ = curve_fit(trigonometric_model2, ts, axs, maxfev=60000)
# predict the data for the gap
xs = np.linspace(ts[0], ts[-1], 500)
ys = trigonometric_model2(xs, *params)
</code></pre>
<p><a href="https://i.sstatic.net/rrWWCjkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rrWWCjkZ.png" alt="enter image description here" /></a></p>
<p>Here is the sample data <a href="https://docs.google.com/spreadsheets/d/1NSjnnPQR-p2iB-EYGVJw6XLkzfMQ5XNz6dmj8WaJElM/edit?usp=sharing" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1NSjnnPQR-p2iB-EYGVJw6XLkzfMQ5XNz6dmj8WaJElM/edit?usp=sharing</a></p>
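<p>One alternative worth sketching: instead of letting <code>curve_fit</code> search for eight frequencies at once (a hard, non-convex problem), fix a few candidate frequencies first, e.g. from <code>np.fft.rfft</code> of a gap-free segment, and solve only for the amplitudes. With the frequencies held fixed the fit is linear, so one least-squares solve is enough and it is far more stable. A minimal illustration on synthetic data:</p>

```python
import numpy as np

def fit_sinusoids(t, y, freqs):
    """Least-squares fit of y ~ c + sum_k (a_k sin + b_k cos) at fixed freqs."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def model(tt):
        cols_tt = [np.ones_like(tt)]
        for f in freqs:
            cols_tt.append(np.sin(2 * np.pi * f * tt))
            cols_tt.append(np.cos(2 * np.pi * f * tt))
        return np.column_stack(cols_tt) @ coef

    return model

# Synthetic example: a 2 Hz oscillation with a gap between t=4 and t=6.
t = np.concatenate([np.linspace(0, 4, 400), np.linspace(6, 10, 400)])
y = 1.5 + 0.8 * np.sin(2 * np.pi * 2.0 * t)
model = fit_sinusoids(t, y, freqs=[2.0])

gap = np.linspace(4, 6, 50)
# Maximum reconstruction error in the gap (tiny for this noiseless example):
print(np.max(np.abs(model(gap) - (1.5 + 0.8 * np.sin(2 * np.pi * 2.0 * gap)))))
```

<p>For the real IMU data the frequencies would come from the spectrum of the longest gap-free stretch, and whether the result is "realistic" still depends on the motion actually being quasi-periodic across the gap.</p>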
|
<python><math><time-series><interpolation><kitti>
|
2024-10-08 17:29:52
| 1
| 453
|
Bai Yang
|
79,067,125
| 20,591,261
|
Efficiently Handling Large Combinations in Polars Without Overloading RAM
|
<p>I have a list of n values (in my case, n=19), and I want to generate all possible combinations of these values. My goal is to use each combination as a filter for a Polars DataFrame, iterate over the combinations, apply some functions, and save the results into a new DataFrame.</p>
<p>However, since n=19, this results in 2<sup>19</sup> − 1 = 524,287 combinations across all subset sizes, which overwhelms my RAM. Iterating over such a large number of combinations is impractical due to memory constraints.</p>
<p>How can I handle this computation efficiently without consuming too much RAM? Is there a way to either reduce memory usage or process this iteratively without holding everything in memory at once? Any suggestions for optimizing this workflow with Polars?</p>
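<p>Since <code>itertools.combinations</code> is already lazy, the memory pressure in the approach below comes only from <code>all_combinations.extend(...)</code> materialising every tuple up front. A generator keeps a single combination in memory at a time (a sketch; the result rows can likewise be flushed to disk in batches instead of accumulating):</p>

```python
import itertools

def all_combinations(values):
    """Yield every non-empty combination lazily, one at a time.

    Nothing is materialised up front, so memory stays flat no matter
    how many combinations there are in total (2**n - 1 for n values).
    """
    for r in range(1, len(values) + 1):
        yield from itertools.combinations(values, r)

states = ["a", "b", "c", "d"]
print(sum(1 for _ in all_combinations(states)))  # prints: 15 (= 2**4 - 1)
```

<p>The filtering loop then becomes <code>for i, combo in enumerate(all_combinations(states)):</code> with no list ever built.</p>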
<p>My current approach:</p>
<pre><code>import polars as pl
import itertools
states = ["a", "b", "c", "d"]
df = pl.DataFrame({
"ID": [1, 2, 3, 4, 5, 6,7,8,9,10],
"state": ["b", "b", "a", "d","a", "b", "c", "d","c", "d"],
"Value" : [3,6,9,12,15,18,21,24,27,30],
})
all_combinations = []
for r in range(1,len(states)+1):
all_combinations.extend(itertools.combinations(states, r))
def foo(df):
return(
df
)
new_rows = []
for i in range(len(all_combinations)):
df_filtered = df.filter(pl.col("state").is_in(all_combinations[i]))
df_func = foo(df_filtered)
x = df_func.shape[0]
new_rows.append({"loop_index": i, "shape": x})
df_final = pl.DataFrame(new_rows)
df_final
</code></pre>
<p>EDIT: Thanks for the feedback! I've realized my current approach isn't optimal. I'll post a new question with full context soon.</p>
<p>EDIT2: Here is the <a href="https://stackoverflow.com/questions/79067125/efficiently-handling-large-combinations-in-polars-without-overloading-ram?">link</a> with the full context.</p>
|
<python><python-itertools><python-polars>
|
2024-10-08 17:21:34
| 0
| 1,195
|
Simon
|
79,067,000
| 7,326,981
|
Unable to install azureml-dataset-runtime
|
<p>I am using an M1 Mac running macOS Sonoma 14.7. I have project dependencies that I need to install to execute the project. I am using Python version 3.8. When I install the same dependencies on a Windows machine it works fine.</p>
<p>Below are the dependencies.</p>
<pre><code>azureml
azureml-core
azureml-dataset-runtime
azureml-interpret
azureml-monitoring
azureml-responsibleai
azureml-telemetry
azure-identity
azure-mgmt-resource
azureml-dataset-runtime
numpy
pandas>=1.5.0
pandera
torch
neuralforecast
datasetsforecast
hyperopt
matplotlib
darts
mlflow
</code></pre>
<p>when I try to install I get the following error</p>
<pre><code>Collecting pyarrow<4.0.0,>=0.17.0 (from azureml-dataset-runtime)
Using cached pyarrow-3.0.0.tar.gz (682 kB)
Installing build dependencies ... error
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> [77 lines of output]
Ignoring numpy: markers 'python_version >= "3.9"' don't match your environment
Collecting cython>=0.29
Using cached Cython-3.0.11-py2.py3-none-any.whl.metadata (3.2 kB)
Collecting numpy==1.16.6
Using cached numpy-1.16.6.zip (5.1 MB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting setuptools
Using cached setuptools-75.1.0-py3-none-any.whl.metadata (6.9 kB)
Collecting setuptools_scm
Using cached setuptools_scm-8.1.0-py3-none-any.whl.metadata (6.6 kB)
Collecting wheel
Using cached wheel-0.44.0-py3-none-any.whl.metadata (2.3 kB)
Collecting packaging>=20 (from setuptools_scm)
Using cached packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
Collecting typing-extensions (from setuptools_scm)
Using cached typing_extensions-4.12.2-py3-none-any.whl.metadata (3.0 kB)
Collecting tomli>=1 (from setuptools_scm)
Using cached tomli-2.0.2-py3-none-any.whl.metadata (10.0 kB)
Using cached Cython-3.0.11-py2.py3-none-any.whl (1.2 MB)
Using cached setuptools-75.1.0-py3-none-any.whl (1.2 MB)
Using cached setuptools_scm-8.1.0-py3-none-any.whl (43 kB)
Using cached wheel-0.44.0-py3-none-any.whl (67 kB)
Using cached packaging-24.1-py3-none-any.whl (53 kB)
Using cached tomli-2.0.2-py3-none-any.whl (13 kB)
Using cached typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py): started
Building wheel for numpy (setup.py): finished with status 'error'
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Running from numpy source directory.
/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/numpy/distutils/misc_util.py:476: SyntaxWarning: "is" with a literal. Did you mean "=="?
return is_string(s) and ('*' in s or '?' is s)
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/setup.py", line 419, in <module>
setup_package()
File "/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/setup.py", line 398, in setup_package
from numpy.distutils.core import setup
File "/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/numpy/distutils/core.py", line 26, in <module>
from numpy.distutils.command import config, config_compiler, \
File "/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/numpy/distutils/command/config.py", line 19, in <module>
from numpy.distutils.mingw32ccompiler import generate_manifest
File "/private/var/folders/dq/t1v4d77x37gdhkz5rcd1wq680000gp/T/pip-install-822946xd/numpy_36feab41475244f49084f939ee233e61/numpy/distutils/mingw32ccompiler.py", line 34, in <module>
from distutils.msvccompiler import get_build_version as get_build_msvc_version
ModuleNotFoundError: No module named 'distutils.msvccompiler'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Running setup.py clean for numpy
error: subprocess-exited-with-error
× python setup.py clean did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
Running from numpy source directory.
`setup.py clean` is not supported, use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files, doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed cleaning build dir for numpy
Failed to build numpy
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (numpy)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>This error occurs when an attempt is made to install <code>pyarrow</code>. How can I fix this issue?</p>
|
<python><pip><apple-m1><azure-machine-learning-service><pyarrow>
|
2024-10-08 16:42:47
| 0
| 1,298
|
Furqan Hashim
|
79,066,931
| 2,840,125
|
Query a pandas DataFrame for a Windows path
|
<p>I need to inventory a network server. I want to save a list of file names, paths, file sizes, and created and accessed dates into a CSV file. Here is my code:</p>
<pre><code>df = pd.read_csv(r"C:\Users\ME\Documents\INVENTORY.csv")
for root, dirs, files in os.walk(r"S:\XXXXXXXXXXX"):
for file in files:
sfullpath = os.path.join(root, file)
print(sfullpath)
dftemp = df.query(rf"FullPath == '{sfullpath}'") # Error here
if len(dftemp.index) == 0:
try:
ssize = str(os.stat(sfullpath).st_size)
ctime = os.stat(sfullpath).st_ctime
cdate = datetime.fromtimestamp(ctime).strftime('%m/%d/%Y')
atime = os.stat(sfullpath).st_atime
adate = datetime.fromtimestamp(atime).strftime('%m/%d/%Y')
with open(r"C:\Users\ME\Documents\INVENTORY.csv", 'a') as f:
f.write("%s,%s,%s,%s,%s,%s\n"%(sfullpath,file,root,ssize,cdate,adate))
except FileNotFoundError:
continue
</code></pre>
<p>My script works until it gets to a file path with a <code>\N</code> in it. I thought that having <code>rf</code> or <code>fr</code> in front of the query string would tell Python to format variables and ignore escape characters, but I still get this error (pardon the redaction):</p>
<pre><code> File "<unknown>", line 1
FullPath =='S:\XXXXXXXXXXX\XXXXXX\Nxxxxxxxxx\desktop.ini'
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 21-22: malformed \N character escape
</code></pre>
<p>Is there a better way to check that a specific file hasn't already been written to the CSV? This server is huge and the script probably won't be able to scan the whole thing in one go. Thanks in advance.</p>
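<p>Two notes that may help. First, the <code>rf</code> prefix only affects string <em>literals</em> in your source; <code>df.query</code> re-parses the final string itself and chokes on the <code>\N</code>, so passing the variable by reference, <code>df.query("FullPath == @sfullpath")</code>, sidesteps the escaping problem entirely. Second, for the "already written?" check, loading the existing paths into a <code>set</code> once is far cheaper than querying the DataFrame for every walked file. A stdlib-only sketch (it assumes <code>FullPath</code> is the first CSV column; the demo file name is hypothetical):</p>

```python
import csv
import os

def load_seen_paths(csv_path):
    """Read already-inventoried full paths into a set, once.

    Membership tests are O(1), so checking each walked file is far
    cheaper than a df.query() per path - and since nothing re-parses
    the string, a '\\N' inside a Windows path can't break anything.
    """
    seen = set()
    if os.path.exists(csv_path):
        with open(csv_path, newline="") as f:
            for row in csv.reader(f):
                if row:
                    seen.add(row[0])  # assumes FullPath is column 0
    return seen

# Hypothetical demo file name; in the question this would be INVENTORY.csv.
seen = load_seen_paths("inventory_demo.csv")
print(r"S:\X\Nxxxx\desktop.ini" in seen)
```

<p>Inside the <code>os.walk</code> loop the check then becomes <code>if sfullpath not in seen:</code>, with <code>seen.add(sfullpath)</code> after a successful write, so an interrupted scan can resume where it stopped.</p>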
|
<python><python-3.x><pandas>
|
2024-10-08 16:26:06
| 1
| 477
|
Kes Perron
|
79,066,683
| 11,561,121
|
Unable to replace python function definition with mock and unitest
|
<p>I am trying to write integration test for the following python code:</p>
<pre><code>import xx.settings.config as stg
from xx.infrastructure.utils import csvReader, dataframeWriter
from pyspark.sql import SparkSession
from typing import List
from awsglue.utils import getResolvedOptions
import sys
def main(argv: List[str]) -> None:
args = getResolvedOptions(
argv,
['JOB_NAME', 'S3_BRONZE_BUCKET_NAME', 'S3_PRE_SILVER_BUCKET_NAME', 'S3_BRONZE_PATH', 'S3_PRE_SILVER_PATH'],
)
s3_bronze_bucket_name = args['S3_BRONZE_BUCKET_NAME']
s3_pre_silver_bucket_name = args['S3_PRE_SILVER_BUCKET_NAME']
s3_bronze_path = args['S3_BRONZE_PATH']
s3_pre_silver_path = args['S3_PRE_SILVER_PATH']
spark = SparkSession.builder.getOrCreate()
spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
for table in list(stg.data_schema.keys()):
raw_data = stg.data_schema[table].columns.to_dict()
df = csvReader(spark, s3_bronze_bucket_name, s3_bronze_path, table, schema, '\t')
dataframeWriter(df, s3_pre_silver_bucket_name, s3_pre_silver_path, table, stg.data_schema[table].partitionKey)
if __name__ == '__main__':
main(sys.argv)
</code></pre>
<p>I basically loop on a list of tables then read their content (csv format) from S3 and write them in parquet format in S3 also.</p>
<p>These are definitions of csvReader and dataframeWriter:</p>
<pre><code>def csvReader(spark: SparkSession, bucket: str, path: str, table: str, schema: StructType, sep: str) -> DataFrame:
return (
spark.read.format('csv')
.option('header', 'true')
.option('sep', sep)
.schema(schema)
.load(f's3a://{bucket}/{path}/{table}.csv')
)
def dataframeWriter(df: DataFrame, bucket: str, path: str, table: str, partition_key: str) -> None:
df.write.partitionBy(partition_key).mode('overwrite').parquet(f's3a://{bucket}/{path}/{table}/')
</code></pre>
<p>For my integration tests I would like to replace the S3 interactions with local file interactions (read CSV from local and write Parquet locally). This is what I've done:</p>
<pre><code>import os
from unittest import TestCase
from unittest.mock import patch, Mock
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType
import xx.application.perfmarket_pre_silver as perfmarket_pre_silver
from dvr_config_utils.config import initialize_settings
def local_csvReader(spark: SparkSession, table: str, schema: StructType, sep: str):
"""Mocked function that replaces real csvReader. this one reads from local rather than S3."""
return (
spark.read.format('csv')
.option('header', 'true')
.option('sep', sep)
.schema(schema)
.load(f'../input_mock/{table}.csv')
)
def local_dataframeWriter(df, table: str, partition_key: str):
"""Mocked function that replaces real dataframeWriter. this one writes in local rather than S3."""
output_dir = f'../output_mock/{table}/'
if not os.path.exists(output_dir):
os.makedirs(output_dir)
df.write.partitionBy(partition_key).mode('overwrite').parquet(output_dir)
class TestPerfmarketSilver(TestCase):
@classmethod
def setUpClass(cls):
cls.spark = SparkSession.builder.master('local').appName('TestPerfmarketSilver').getOrCreate()
cls.spark.conf.set('spark.sql.sources.partitionOverwriteMode', 'dynamic')
@classmethod
def tearDownClass(cls):
"""Clean up the Spark session and test data."""
cls.spark.stop()
os.system('rm -rf ../output_mock')
@patch('xx.application.cc.getResolvedOptions')
@patch('src.xx.infrastructure.utils.csvReader', side_effect=local_csvReader)
@patch('xx.infrastructure.utils.dataframeWriter', side_effect=local_dataframeWriter)
def test_main(self, mock_csvreader, mock_datawriter, mocked_get_resolved_options: Mock):
expected_results = {'chemins': {'nbRows': 8}}
mocked_get_resolved_options.return_value = {
'JOB_NAME': 'perfmarket_pre_silver_test',
'S3_BRONZE_BUCKET_NAME': 'test_bronze',
'S3_PRE_SILVER_BUCKET_NAME': 'test_pre_silver',
'S3_BRONZE_PATH': '../input_mock',
'S3_PRE_SILVER_PATH': '../output_mock'
}
perfmarket_pre_silver.main([])
for table in stg.data_schema.keys():
# Verify that the output Parquet file is created
output_path = f'../output_mock/{table}/'
self.assertTrue(os.path.exists(output_path))
# Read the written Parquet file and check the data
written_df = self.spark.read.parquet(output_path)
self.assertEqual(written_df.count(), expected_results[table]['nbRows']) # Check row count
self.assertTrue(
[
column_data['bronze_name']
for table in stg.data_schema.values()
for column_data in table['columns'].values()
]
== written_df.columns
)
</code></pre>
<p>What I wanted to do with these two lines:</p>
<pre><code>@patch('src.xx.infrastructure.utils.csvReader', side_effect=local_csvReader)
@patch('xx.infrastructure.utils.dataframeWriter', side_effect=local_dataframeWriter)
</code></pre>
<p>Is to replace definitions of <code>csvReader</code> by <code>local_csvReader</code> and <code>dataframeWriter</code> by <code>local_dataframeWriter</code>.</p>
<p>Unfortunately, the code is returning</p>
<pre><code>py4j.protocol.Py4JJavaError: An error occurred while calling o39.load.
: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
</code></pre>
<p>This is my project structure:</p>
<pre><code>project/
│
├── src/
│ └── xx/
│ ├── application/
│ │ └── perfmarket_pre_silver.py
│ ├── __init__.py
│ ├── infrastructure/
│ │ ├── __init__.py
│ │ └── utils.py
│ └── other_modules/
└── tests/
└── integration_tests/
└── application/
└── test_perfmarket_pre_silver.py
</code></pre>
<p>Both <code>csvReader</code> and <code>dataframeWriter</code> are defined in utils.py.</p>
<p>Error is pointing to <code>csvReader</code> call in main code (first snippet).</p>
<p>So my replacing technique is clearly not working.</p>
<p>What am I doing wrong, please?</p>
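<p>A likely culprit is the patch target: <code>unittest.mock</code> must patch the name <em>where it is looked up</em>, not where it is defined. Because the module under test does <code>from xx.infrastructure.utils import csvReader</code>, its own namespace holds a copy of the name, so the target should presumably be <code>'xx.application.perfmarket_pre_silver.csvReader'</code> (and likewise for <code>dataframeWriter</code>). Note also that one decorator uses a <code>src.xx</code> prefix and the other <code>xx</code>, and that decorators apply bottom-up, so the first mock argument corresponds to the <em>last</em> <code>@patch</code>. A self-contained illustration with throwaway in-memory modules (the names are invented):</p>

```python
import types
from unittest.mock import patch

# Throwaway in-memory modules that mimic the layout (invented names).
utils = types.ModuleType("utils")
utils.reader = lambda: "real"

app = types.ModuleType("app")
app.reader = utils.reader      # this is what 'from utils import reader' does
def _main():
    return app.reader()
app.main = _main

# Patching the *defining* module leaves app's copied name untouched:
with patch.object(utils, "reader", return_value="mocked"):
    print(app.main())          # prints: real

# Patching the name in the namespace where it is *used* works:
with patch.object(app, "reader", return_value="mocked"):
    print(app.main())          # prints: mocked
```

<p>With real packages the equivalent is <code>@patch('xx.application.perfmarket_pre_silver.csvReader')</code> rather than <code>@patch('xx.infrastructure.utils.csvReader')</code>.</p>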
|
<python><mocking>
|
2024-10-08 15:20:46
| 2
| 1,019
|
Haha
|
79,066,674
| 22,371,917
|
Flask and CSRF tokens
|
<p>I'm trying to use <code>csrf</code> tokens with my Flask app, but I noticed that after the site had been open for a little while (1 hour), it wouldn't work unless I reloaded. A little testing showed it's because the <code>csrf</code> tokens expire, so I tried generating a new one every request (which is what I want, so people can't use the same one over and over), but it's still not working.</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, redirect, url_for, render_template_string
from flask_wtf.csrf import CSRFProtect, generate_csrf
from datetime import timedelta
app = Flask(__name__)
app.secret_key = 'key'
csrf = CSRFProtect(app)
app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(seconds=10)
html= """
<form method="POST" action="{{ url_for('redirect_route') }}">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<button type="submit">Redirect</button>
</form>
"""
@app.route('/')
def home():
generate_csrf()
return render_template_string(html)
@app.route('/redirect', methods=['POST'])
def redirect_route():
generate_csrf()
return redirect(url_for('home'))
app.run(debug=True)
</code></pre>
<p>I have the line</p>
<p><code>app.config['PERMANENT_SESSION_LIFETIME'] = timedelta(seconds=10)</code></p>
<p>for testing. As you can see, I call <code>generate_csrf()</code> in both routes, yet after 10 seconds I still get the error and need to reload the site. I also tried</p>
<p><code>app.config['SESSION_REFRESH_EACH_REQUEST'] = True</code></p>
<p>which didn't change anything, and</p>
<p><code>session.permanent = True</code></p>
<p>which didn't work either; passing the new <code>csrf</code> token in the return statement didn't work as well.</p>
<p>So I just really don't know what to do. I want every request to have its own <code>csrf</code> token so that it can't be used again, and I don't want leaving the site open to break it; when I press the button it should just make a new one instead of giving an error.</p>
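<p>One hedged note on where the expiry comes from: Flask-WTF (the provider here, since <code>CSRFProtect</code> is imported from <code>flask_wtf</code>) controls token lifetime with the <code>WTF_CSRF_TIME_LIMIT</code> setting (in seconds, default 3600), independently of the session lifetime. A sketch; this ties the token to the session rather than a fixed clock, and does not by itself make tokens single-use:</p>

```python
# Flask-WTF setting: None means the token stays valid for the life of
# the session instead of expiring after a fixed number of seconds.
app.config['WTF_CSRF_TIME_LIMIT'] = None
```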
|
<python><flask><csrf><csrf-token>
|
2024-10-08 15:17:41
| 1
| 347
|
Caiden
|
79,066,654
| 2,897,115
|
snowflake: python connector: executemany: not able to insert
|
<h1>I have created a table</h1>
<pre><code>CREATE TABLE "VALIDATION_RULES_RESULTS" (
id INT AUTOINCREMENT,
created_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
mongo_data_ingestion_id STRING,
table_name STRING,
col_name STRING,
rule_name STRING,
success STRING,
element_count INT,
missing_count INT,
failed_records_indexes VARIANT,
failed_records_values VARIANT
);
</code></pre>
<h1>the below is my python script</h1>
<pre><code> insert_statement = f"""
INSERT INTO "VALIDATION_RULES_RESULTS" (
mongo_data_ingestion_id,
table_name,
col_name,
rule_name,
success,
element_count,
missing_count,
failed_records_indexes,
failed_records_values
) select
mongo_data_ingestion_id,
table_name,
col_name,
rule_name,
success,
element_count,
missing_count,
parse_json(failed_records_indexes),
parse_json(failed_records_values)
from VALUES (
%(mongo_data_ingestion_id)s,
%(table_name)s,
%(col_name)s,
%(rule_name)s,
%(success)s,
%(element_count)s,
%(missing_count)s,
%(failed_records_indexes)s,
%(failed_records_values)s
)
"""
# Prepare the data for executemany
rows_to_insert = []
for rcd in snowflake_validation_rule_row:
row = {
'mongo_data_ingestion_id': rcd['mongo_data_ingestion_id'],
'table_name': rcd['table_name'],
'col_name': rcd['col_name'],
'rule_name': rcd['rule_name'],
'success': str(rcd['success']).lower(),
'element_count': rcd['element_count'],
'missing_count': rcd['missing_count'],
'failed_records_indexes': json.dumps(rcd['failed_records_indexes'], default=str),
'failed_records_values': json.dumps(rcd['failed_records_values'], default=str)
}
rows_to_insert.append(row)
snoflake_conn.cursor().executemany(
insert_statement,
rows_to_insert
)
</code></pre>
<p>getting error</p>
<pre><code>20:44:13.63 !!! snowflake.connector.errors.ProgrammingError: 000904 (42000): SQL compilation error: error line 12 at position 8
20:44:13.63 !!! invalid identifier 'MONGO_DATA_INGESTION_ID'
20:44:13.63 !!! When calling: snoflake_conn.cursor().executemany(
20:44:13.63 insert_statement,
20:44:13.63 rows_to_insert
20:44:13.63 )
20:44:13.63 !!! Call ended by exception
</code></pre>
<p>sample insert values</p>
<pre><code>{'mongo_data_ingestion_id': '1728370271', 'table_name': 'yest', 'col_name': 'test', 'rule_name': 'basic_rule_expect_column_value_lengths_to_be_between', 'success': 'false', 'element_count': 10, 'missing_count': 0, 'failed_records_indexes': '[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]', 'failed_records_values': '["1982-04-20 00:00:00", "1984-12-15 00:00:00", "1981-07-10 00:00:00", "1977-12-24 00:00:00", "1965-05-13 00:00:00", "1957-05-17 00:00:00", "1991-12-29 00:00:00", "1913-03-15 00:00:00", "1983-08-27 00:00:00", "1957-04-07 00:00:00"]'}
</code></pre>
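<p>A likely cause, for context: without an alias, the columns of a <code>VALUES</code> clause are named <code>column1</code>, <code>column2</code>, ..., so <code>select mongo_data_ingestion_id ... from VALUES</code> has no such identifier to resolve, which matches the compilation error at line 12. A hedged sketch of the statement with an alias (the alias name <code>v</code> is my own choice):</p>

```sql
-- sketch: alias the VALUES columns; without an alias they are named
-- column1, column2, ..., which is what produces "invalid identifier"
insert into "VALIDATION_RULES_RESULTS" (
    mongo_data_ingestion_id, table_name, col_name, rule_name, success,
    element_count, missing_count, failed_records_indexes, failed_records_values
)
select
    v.mongo_data_ingestion_id, v.table_name, v.col_name, v.rule_name,
    v.success, v.element_count, v.missing_count,
    parse_json(v.failed_records_indexes), parse_json(v.failed_records_values)
from (values (
    %(mongo_data_ingestion_id)s, %(table_name)s, %(col_name)s, %(rule_name)s,
    %(success)s, %(element_count)s, %(missing_count)s,
    %(failed_records_indexes)s, %(failed_records_values)s
)) as v (
    mongo_data_ingestion_id, table_name, col_name, rule_name, success,
    element_count, missing_count, failed_records_indexes, failed_records_values
)
```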
|
<python><snowflake-cloud-data-platform>
|
2024-10-08 15:15:07
| 1
| 12,066
|
Santhosh
|
79,066,590
| 13,634,560
|
plotly dash, css file not connecting
|
<p>I am attempting to move from plotly to dash, and would like to connect a css file to style my dashboard. The app renders fine with my file structure; app.py and index.py in the main folder, and an additional style.css file in the /assets folder.</p>
<p>However, when I add some styling to the css, the main dash app does not "automatically render it" as the documentation states it should.</p>
<ol>
<li>my dash version is later than 2.13; specific version is 2.18.1</li>
<li>I have tried both including and excluding <code>__name__</code> in the Dash() call</li>
<li>I have tried specifically stating assets_external_path="/assets" in the Dash() call</li>
<li>I have tried setting external_stylesheets=["/assets/style.css"] in the Dash() call</li>
</ol>
<p>and it still does not connect to the style.css file. does anyone have any tips or tricks to get dash to recognize and connect with the css file?</p>
|
<python><css><plotly-dash>
|
2024-10-08 14:59:30
| 1
| 341
|
plotmaster473
|
79,066,491
| 8,188,120
|
'Flask' object is not iterable | AWS lambda using zappa
|
<p>I am trying to deploy a Flask app on AWS lambda using zappa. (I am very new to this so if I need to provide more relevant information, please ask!)</p>
<p>The Flask app has this directory structure:</p>
<pre><code>backend
├── __init__.py
├── __pycache__
├── app.py
├── databases
├── home.py
├── static
├── templates
└── utils.py
</code></pre>
<p>Where, the app is being instantiated by app.py like so:</p>
<pre><code>import os
from flask import Flask as FlaskApp
from flask_cors import CORS
import boto3
import logging
from backend import home
def initialize_app(test_config=None, *args, **kwargs):
print("MADE IT HERE TOO")
# create and configure the app
app = FlaskApp(__name__, instance_relative_config=True)
# Set up logging
logger = logging.getLogger()
logger.setLevel(logging.INFO)
logger.info("Boto3 Version: %s", boto3.__version__)
print("APP HAS REACHED THIS POINT")
s3_client = boto3.client('s3')
print("Here too...")
# Example of logging
try:
response = s3_client.list_buckets()
logger.info("S3 Buckets: %s", response['Buckets'])
except Exception as e:
logger.error("Error accessing S3: %s", str(e))
if test_config is None:
# load the instance config, if it exists, when not testing
app.config.from_pyfile('config.py', silent=True)
else:
# load the test config if passed in
app.config.from_mapping(test_config)
print("made it here before os makedirs")
# ensure the instance folder exists
try:
os.makedirs(app.instance_path)
except OSError:
pass
print("made it here before home")
app.register_blueprint(home.bp)
print("made it here after home")
return app
</code></pre>
<p>... the home blueprint is just a python file with lots of functions in it. Some are routed, some are not. But importantly, something is always routed to root '/' like this:</p>
<pre><code>bp = Blueprint('home', __name__, url_prefix='/')
@bp.route('/')
def index():
return "Welcome to the Home Page!"
</code></pre>
<p>I am using these zapper settings to deploy the app in AWS lambda:</p>
<pre><code>{
"dev": {
"app_function": "backend.app.initialize_app",
"aws_region": "af-south-1",
"exclude": [
"boto3",
"dateutil",
"botocore",
"s3transfer",
"concurrent",
"node_modules",
"frontend",
"awsTests"
],
"profile_name": "default",
"project_name": "backend",
"runtime": "python3.10",
"s3_bucket": "zappa-htprl75eu",
"slim_handler": true
    }
}
</code></pre>
<p>Nothing crazy there.</p>
<p>In the CloudWatch logs I see all the print statements correctly displayed. However, the lambda instance then returns an error:</p>
<blockquote>
<p>'Flask' object is not iterable</p>
</blockquote>
<p><a href="https://i.sstatic.net/GsPZG56Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GsPZG56Q.png" alt="AWS CloudWatch report logs showing the error" /></a></p>
<p>Can anyone help me identify why this error may be occurring?</p>
<p>I have checked my home.py file for anything which may be calling app but it does not. So I can only think that zappa must be using the returned app from initialize_app() in a way which triggers this error.</p>
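<p>A possible explanation, hedged: Zappa's <code>app_function</code> is typically expected to point at the Flask application object itself, and pointing it at a factory can lead to the factory's return value being handled in a way that triggers exactly this kind of error. One sketch of a workaround, assuming the module layout above:</p>

```python
# backend/app.py (sketch): build the app at import time and expose it
# as a module-level name, so zappa_settings can reference the instance
def initialize_app(test_config=None, *args, **kwargs):
    ...  # unchanged from the question

app = initialize_app()
```

<p>and then in <code>zappa_settings.json</code> set <code>"app_function": "backend.app.app"</code> instead of <code>"backend.app.initialize_app"</code>.</p>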
<hr />
<p>If it helps, I am using python version 3.10.6 and these are the packages in my requirements.txt file, one level above backend:</p>
<pre><code>argcomplete==3.5.1
blinker==1.7.0
boto3==1.35.32
botocore==1.35.32
certifi==2024.8.30
cffi==1.17.1
cfn-flip==1.3.0
charset-normalizer==3.3.2
click==8.1.7
cryptography==43.0.1
durationpy==0.9
Flask==3.0.2
Flask-Cors==4.0.0
hjson==3.1.0
idna==3.10
itsdangerous==2.1.2
Jinja2==3.1.3
jmespath==1.0.1
kappa==0.6.0
MarkupSafe==2.1.5
pdf2image==1.17.0
pillow==10.3.0
placebo==0.9.0
pycparser==2.22
PyJWT==2.9.0
PyMuPDF==1.24.2
PyMuPDFb==1.24.1
python-dateutil==2.9.0.post0
python-slugify==8.0.4
PyYAML==6.0.2
requests==2.32.3
s3transfer==0.10.2
six==1.16.0
text-unidecode==1.3
toml==0.10.2
tqdm==4.66.5
troposphere==4.8.3
urllib3==2.2.3
Werkzeug==3.0.1
zappa==0.59.0
</code></pre>
|
<python><amazon-web-services><flask><aws-lambda><zappa>
|
2024-10-08 14:33:05
| 1
| 925
|
user8188120
|
79,066,483
| 1,922,959
|
Seaborn plot labeling
|
<p>Very new to Seaborn... I'm using it in a class and using VSCode to edit notebooks for assignments.</p>
<p>Without uploading a bunch of data here, I have the following code:</p>
<pre><code>myPlot = sns.barplot(dfFrequencies, x="Topic", y="Relative Frequency")
myPlot.set_xticklabels(myPlot.get_xticklabels(), rotation=45)
myPlot.set_title("Pew Poll Responses: Which is Most Meaningful?")
</code></pre>
<p>And in VSCode this is the resulting output</p>
<p><a href="https://i.sstatic.net/zatwJ65n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zatwJ65n.png" alt="enter image description here" /></a></p>
<p>So my first question is how can I suppress the circled text?</p>
<p>Later in the same notebook I have the following code:</p>
<pre><code># Create the requested scatterplot with Mid-Career Year on the x-axis and
# standardized OPS on the y-axis.
# Remember titles and axis labels.
sns.set(rc={"figure.figsize":(10, 6)})
set_style(theme)
myplot = sns.scatterplot(data=dfHofData, x="Midpoint", y="OPS_z")
myPlot.set_title("Standardized OPS by Mid-Career Year")
myPlot.set_xlabel("Mid-Career Year")
myPlot.set_ylabel("Standardized On Base Percentage Plus Slugging (OPS)")
</code></pre>
<p>And it produces this plot. Again there is the text from the last <code>set</code>, but in this case none of the <code>set</code> calls have actually done anything. What am I missing?</p>
<p><a href="https://i.sstatic.net/xF7WpTiI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xF7WpTiI.png" alt="enter image description here" /></a></p>
<p>The call to <code>set_style(theme)</code> is some code the instructor provided to set the theme to dark mode. For completeness, here it is:</p>
<pre><code>theme = 'darkmode'
def set_style(theme='darkmode'):
if theme == 'darkmode':
sns.set_theme(style='ticks', context='notebook', rc={'axes.facecolor':'black', 'figure.facecolor':'black', 'text.color':'white',
'xtick.color':'white', 'ytick.color':'white', 'axes.labelcolor':'white', 'axes.grid':False, 'axes.edgecolor':'white'})
else:
sns.set_theme(style='ticks', context='notebook')
</code></pre>
|
<python><seaborn>
|
2024-10-08 14:31:14
| 1
| 1,299
|
jerH
|
79,066,478
| 999,162
|
Django reusable Many-to-one definition in reverse
|
<p>I'm struggling to make a Many-to-one relationship reusable.</p>
<p>Simplified, let's say I have:</p>
<pre><code>class Car(models.Model):
...
class Wheel(models.Model):
car = models.ForeignKey(Car)
...
</code></pre>
<p>Pretty straightforward. What, however, if I'd like to use my <code>Wheel</code> model also on another model, <code>Bike</code>? Can I define the relationship in reverse, on the "One" side of the relationship?</p>
<p>Defining this as a Many-to-many on the <em>Vehicles</em> would mean the same <code>Wheel</code> could belong to multiple <em>Vehicles</em>, which is not what I want.</p>
<p>Would I have to subclass my <code>Wheel</code> to <code>CarWheel</code> and <code>BikeWheel</code> only to be able to differentiate the Foreignkey for each relationship? Seems like there should be a cleaner solution.</p>
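<p>For comparison, one hedged option (not necessarily the cleanest in every case) is Django's contenttypes framework, which keeps a single <code>Wheel</code> model while each wheel still belongs to exactly one vehicle:</p>

```python
from django.db import models
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType

class Wheel(models.Model):
    # Generic relation: each Wheel row points at exactly one object of
    # any model (Car, Bike, ...), so the Wheel class stays reusable
    # without subclassing it per vehicle type.
    content_type = models.ForeignKey(ContentType, on_delete=models.CASCADE)
    object_id = models.PositiveIntegerField()
    vehicle = GenericForeignKey("content_type", "object_id")
```

<p>Adding a <code>GenericRelation</code> field on <code>Car</code>/<code>Bike</code> would then give a reverse accessor analogous to <code>car.wheel_set</code>.</p>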
|
<python><django><orm><many-to-one>
|
2024-10-08 14:29:02
| 2
| 5,274
|
kontur
|
79,066,324
| 2,915,050
|
Get every key in JSON whose value is an array and its path
|
<p>I have a JSON file, and for some keys their value is an array. The JSON can go up to an unspecified depth. I would like to know how to extract all the keys whose value is an array, and also the JSON path to that array.</p>
<p>Example schema:</p>
<pre><code>[
{
"field1": "x",
"field2": ["y", "z"],
"field3": [
{
"field4": "a",
"field5": ["b", "c"]
},
{
"field4": "d",
"field5": ["e", "f"]
}
],
"field6": "g"
}
]
</code></pre>
<p>From this, I want the keys <code>field2, field3, field5</code> and their paths, e.g. <code>field2, field3, field3[0][field5], field3[1][field5]</code></p>
<p>I have got the following code which can identify all keys in a JSON file, but not the type of value it holds nor its path:</p>
<pre><code>def get_keys(d):
if isinstance(d, dict):
for k, v in d.items():
yield k
yield from list(get_keys(v))
elif isinstance(d, list):
for o in d:
yield from list(get_keys(o))
</code></pre>
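<p>As a sketch, the generator above can be extended to carry a path alongside each key; the path syntax here (<code>field3[0][field5]</code>, with the root list index omitted) is an assumption based on the example paths in the question:</p>

```python
import json

def array_paths(node, path=""):
    """Yield (key, path) for every dict value that is a list."""
    if isinstance(node, dict):
        for k, v in node.items():
            p = f"{path}[{k}]" if path else k
            if isinstance(v, list):
                yield k, p
            yield from array_paths(v, p)
    elif isinstance(node, list):
        for i, item in enumerate(node):
            # skip the index at the very root to match the example paths
            yield from array_paths(item, f"{path}[{i}]" if path else path)

data = json.loads("""[{"field1": "x", "field2": ["y", "z"],
  "field3": [{"field4": "a", "field5": ["b", "c"]},
             {"field4": "d", "field5": ["e", "f"]}],
  "field6": "g"}]""")
print(list(array_paths(data)))
```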
|
<python><json>
|
2024-10-08 13:49:45
| 1
| 1,583
|
RoyalSwish
|
79,066,076
| 133,374
|
Distinguish native type from user type
|
<p>I want to be able to distinguish some native class like <code>_functools.partial</code> (<a href="https://github.com/python/cpython/blob/43ad3b51707f51ae4b434e2b5950d2c8bf7cca6e/Modules/_functoolsmodule.c#L758" rel="nofollow noreferrer">code</a>, <a href="https://github.com/python/cpython/blob/43ad3b51707f51ae4b434e2b5950d2c8bf7cca6e/Modules/_functoolsmodule.c#L135" rel="nofollow noreferrer">code</a>) from some user-class like:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass:
pass
</code></pre>
<p>I want to test this from within Python (so not using the CPython API or so). But it's ok if this only works with CPython. It should work for CPython >=3.6.</p>
<p>How?</p>
<p>More specifically, I want to distinguish objects where its <code>__dict__</code>/<code>__slots__</code> will not give me all the object attributes. This is usually the case for native types such as <code>_functools.partial</code> where the <code>func</code>, <code>args</code>, <code>keywords</code> are not in <code>__dict__</code> (and <code>__slots__</code> does not exist).</p>
<p>I think basically I want to check if the type defines some custom <code>Py_tp_members</code> (but not derived from <code>__slots__</code> (<a href="https://stackoverflow.com/a/32336272/133374">see</a>)).</p>
<hr />
<p>Some background / ideas which might be helpful to get to an answer:</p>
<ul>
<li><code>functools.partial.__dict__</code> is usually empty or <code>None</code>.</li>
<li><code>functools.partial.__slots__</code> only exists if Python uses the pure Python implementation, which is usually not the case.</li>
<li><code>functools.partial.__getstate__()</code> exists since Python >=3.11 and will return <code>__dict__</code> (and maybe <code>__slots__</code>), so in case of the native <code>partial</code>, it will be empty or <code>None</code>. (Not sure if this is really correct. See <a href="https://github.com/python/cpython/issues/125094" rel="nofollow noreferrer">CPython issue #125094</a>.)</li>
<li><code>functools.partial.__dir__(functools.partial)</code> also does not cover <code>func</code> and co? However, <code>dir(functools.partial)</code> does? Side question: What is the builtin <code>dir</code> doing differently? Where is the exact code of the builtin <code>dir</code> which covers this case? I assume it iterates through the <code>Py_tp_members</code>, maybe also <code>Py_tp_methods</code>?</li>
<li>I initially thought that maybe <code>__dictoffset__</code> gives me some hint, but that turned out to be wrong.</li>
<li>The builtin <code>dir</code> could probably be used in any case. The <code>inspect.getmembers</code> (or also <code>inspect.getmembers_static</code>) might tell me what I need to know.</li>
</ul>
<hr />
<p>Why do I need that?</p>
<p>We need to define our own custom hash (not <code>__hash__</code>) for any type of object, in a stable way (that the hash does not change across Python versions), and in a meaningful way (that the hash contains all relevant state). So far, we iterate over <code>__getstate__()</code>, or <code>__dict__</code>/<code>__slots__</code>, which works fine for most cases, except now for <code>functools.partial</code>, and likely for other native types. So the idea was, add a check if we have a native type, and if so, fallback to using <code>dir</code>, as that seems to work fine even in those cases.</p>
<p>See <a href="https://github.com/rwth-i6/sisyphus/issues/207" rel="nofollow noreferrer">Sisyphus issue #207</a> and <a href="https://github.com/rwth-i6/sisyphus/blob/c7de85e2e5c202194813fa556e53371d9cee1f30/sisyphus/hash.py#L39" rel="nofollow noreferrer">Sisyphus <code>get_object_state</code></a> and <a href="https://github.com/rwth-i6/sisyphus/blob/c7de85e2e5c202194813fa556e53371d9cee1f30/sisyphus/hash.py#L76" rel="nofollow noreferrer">Sisyphus <code>sis_hash_helper</code></a>.</p>
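<p>One hedged heuristic along these lines uses the <code>Py_TPFLAGS_HEAPTYPE</code> type flag, which is clear for static C types and set for classes created at runtime, including every Python <code>class</code> statement. Caveat: recent CPython versions convert many extension types to heap types, so this can under-detect native types and should be treated as a sketch, not a complete answer:</p>

```python
# Py_TPFLAGS_HEAPTYPE is bit 9 of tp_flags (CPython-specific detail).
Py_TPFLAGS_HEAPTYPE = 1 << 9

def is_static_c_type(tp: type) -> bool:
    # Static C types (e.g. int) have the flag clear; Python-defined
    # classes always have it set.
    return not (tp.__flags__ & Py_TPFLAGS_HEAPTYPE)

class MyClass:
    pass

print(is_static_c_type(int))      # True
print(is_static_c_type(MyClass))  # False
```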
|
<python>
|
2024-10-08 12:55:44
| 1
| 68,916
|
Albert
|
79,066,074
| 6,533,161
|
Python CSV Reader & Writer to respect the double quotes
|
<p>I'm new to Python and working on the below use case:</p>
<p>I want my Python script to enhance a csv file.</p>
<p>My <code>input.csv</code> looks like below:</p>
<pre><code>test_case,test_desc,"test data 1","test data 2"
,0,"",
</code></pre>
<p>I want to enhance this to add two more columns. So my <code>output.csv</code> should look like below:</p>
<pre><code>test_case,test_desc,"test data 1","test data 2",col_1,col_2
,0,"",,,
</code></pre>
<p>I'm using below script:</p>
<pre><code>import csv
from pathlib import Path
source_headers = []
source_data = []
# Read the 'input.csv'
with Path('input.csv').open() as file:
headers = csv.reader(file)
for line in headers:
source_headers = line
break
with Path('input.csv').open() as file:
lines = csv.DictReader(file)
for line in lines:
source_data.append(line)
print("source_headers: ", source_headers)
print("source_data: ", source_data)
# Specify the target_enhanced_headers
target_enhanced_headers = {'col_1': 'target col_1', 'col_2': 'target col_2'}
enhanced_headers = source_headers
enhanced_headers.extend(target_enhanced_headers)
enhanced_data = source_data
# Enhance the source data
for new_header in target_enhanced_headers:
enhanced_data[0][new_header] = ''
print("enhanced_headers: ", enhanced_headers)
print("enhanced_data: ", enhanced_data)
# Write new file
with Path('output.csv').open("w", encoding ="utf-8", newline='') as file:
writer = csv.DictWriter(file, fieldnames=enhanced_headers)
writer.writeheader()
writer.writerows(enhanced_data)
</code></pre>
<p>The output of this script comes out to be like this:</p>
<pre><code>source_headers: ['test_case', 'test_desc', 'test data 1', 'test data 2']
source_data: [{'test_case': '', 'test_desc': '0', 'test data 1': '', 'test data 2': ''}]
enhanced_headers: ['test_case', 'test_desc', 'test data 1', 'test data 2', 'col_1', 'col_2']
enhanced_data: [{'test_case': '', 'test_desc': '0', 'test data 1': '', 'test data 2': '', 'col_1': '', 'col_2': ''}]
</code></pre>
<p>The output.csv looks like:</p>
<pre><code>test_case,test_desc,test data 1,test data 2,col_1,col_2
,0,,,,
</code></pre>
<p>It's not respecting the double quotes that are there in the headers and in the data.</p>
<p>How can I make it work in Python by using csv module?</p>
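<p>For context, the <code>csv</code> module does not record which input fields were quoted, so the exact source quoting cannot be replayed on output; the closest option is to pick a quoting policy on the writer, e.g. <code>quoting=csv.QUOTE_ALL</code>. A minimal sketch:</p>

```python
import csv
import io

headers = ["test_case", "test_desc", "test data 1", "test data 2",
           "col_1", "col_2"]
row = {h: "" for h in headers}
row["test_desc"] = "0"

buf = io.StringIO()
# QUOTE_ALL quotes every field on output; per-field quoting from the
# input is not remembered by the csv module, so it cannot be reproduced.
writer = csv.DictWriter(buf, fieldnames=headers, quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerows([row])
print(buf.getvalue())
```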
|
<python><csv>
|
2024-10-08 12:55:22
| 1
| 808
|
Roshan007
|
79,065,764
| 448,357
|
Django max_length validation for BinaryField causes KeyError in translation __init__.py
|
<p>I have a simple model, something like</p>
<pre class="lang-py prettyprint-override"><code>class Notenbild(models.Model):
bild_data = models.BinaryField(max_length=500000, editable=True)
</code></pre>
<p>In <code>admin.py</code></p>
<pre class="lang-py prettyprint-override"><code>class BinaryFieldWithUpload(forms.FileField):
def __init__(self, *, max_length=None, allow_empty_file=False, **kwargs):
super().__init__(max_length=max_length, allow_empty_file=allow_empty_file, **kwargs)
def to_python(self, data):
data = super().to_python(data)
if data:
image = Image.open(data)
# some more processing with the image which I omitted here
byte_array = io.BytesIO()
image.save(byte_array, format='PNG')
return byte_array.getvalue()
return None
def widget_attrs(self, widget):
attrs = super().widget_attrs(widget)
if isinstance(widget, FileInput) and "accept" not in widget.attrs:
attrs.setdefault("accept", "image/*")
return attrs
@admin.register(Notenbild)
class NotenbildAdmin(admin.ModelAdmin):
fields = [
'bild_data',
'vorschau',
]
readonly_fields = ['vorschau']
formfield_overrides = {
models.BinaryField: {'form_class': BinaryFieldWithUpload},
}
@admin.display(description='Bild (Vorschau)')
def vorschau(self, notenbild: Notenbild):
encoded_image = base64.b64encode(notenbild.bild_data).decode('utf-8')
return format_html(
f'<p>{len(notenbild.bild_data)} bytes</p>'
f'<img src="data:image/png;base64,{encoded_image}" style="max-width:40rem; max-height:16rem" />'
)
</code></pre>
<p>which works fine for saving images through the admin interface, as long as they fit the size limit. However, when trying to save a file which exceeds the size limit, I get a very strange error:</p>
<pre><code>KeyError at /admin/library/notenbild/368/change/
"Your dictionary lacks key 'max'. Please provide it, because it is required to determine whether string is singular or plural."
Django Version: 5.1.1
Exception Location: /Users/alex/Repositories/ekd-cms/venv/lib/python3.12/site-packages/django/utils/translation/__init__.py, line 130, in _get_number_value
</code></pre>
<p>which the full stacktrace being</p>
<pre><code>
Traceback (most recent call last):
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/utils/translation/__init__.py", line 128, in _get_number_value
return values[number]
^^^^^^^^^^^^^^
During handling of the above exception ('max'), another exception occurred:
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/debug_toolbar/middleware.py", line 92, in __call__
panel.generate_stats(request, response)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/debug_toolbar/panels/templates/panel.py", line 201, in generate_stats
template_data["context_list"] = self.process_context_list(
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/debug_toolbar/panels/templates/panel.py", line 134, in process_context_list
if key_values == context_layer:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/forms/utils.py", line 192, in __eq__
return list(self) == other
^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/forms/utils.py", line 192, in __eq__
return list(self) == other
^^^^^^^^^^
File "<frozen _collections_abc>", line 1026, in __iter__
<source code not available>
^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/forms/utils.py", line 197, in __getitem__
return next(iter(error))
^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/core/exceptions.py", line 210, in __iter__
message %= error.params
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/utils/functional.py", line 167, in __mod__
return self.__cast() % other
^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/utils/translation/__init__.py", line 148, in __mod__
number_value = self._get_number_value(rhs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/Repositories/my-project/venv/lib/python3.12/site-packages/django/utils/translation/__init__.py", line 130, in _get_number_value
raise KeyError(
^
</code></pre>
<p>which doesn't make any sense to me. The validation correctly caught the invalid data, but seems like this might be a bug on Django? Or am I using this thing incorrectly?</p>
<p>I've tried adding my own custom validator to the model:</p>
<pre class="lang-py prettyprint-override"><code>from django.core.validators import BaseValidator
from django.utils.deconstruct import deconstructible
from django.utils.translation import gettext_lazy as _
from django.core.exceptions import ValidationError
@deconstructible
class MaxFileSizeValidator(BaseValidator):
message = _("Ensure the file is less than or equal to %(limit_value)s.")
code = "limit_value"
def __init__(self, limit_value, message=None):
super().__init__(limit_value, message)
self.limit_value = limit_value
if message:
self.message = message
def __call__(self, value):
cleaned = self.clean(value)
limit_value = (
self.limit_value() if callable(self.limit_value) else self.limit_value
)
params = {"limit_value": limit_value, "show_value": cleaned, "value": value}
if self.compare(cleaned, limit_value):
raise ValidationError(self.message, code=self.code, params=params)
def compare(self, a, b):
return len(a) > b
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>class Notenbild(models.Model):
bild_data = models.BinaryField(max_length=500000, editable=True, validators=[MaxFileSizeValidator(500000)])
</code></pre>
<p>but whatever I try, the translation <code>__init__.py</code> method keeps throwing this error, which doesn't make a lot of sense to me. Any ideas?</p>
|
<python><django><validation><django-admin><binary-data>
|
2024-10-08 11:35:34
| 0
| 9,860
|
Alexander Pacha
|
79,065,461
| 335,247
|
Typing polars dataframe with pandera and mypy validation
|
<p>I am considering <code>pandera</code> to implement strong typing in my project, which uses <code>polars</code> dataframes.</p>
<p>I am puzzled on how I can type my functions correctly.</p>
<p>As an example let's have:</p>
<pre class="lang-py prettyprint-override"><code>
import polars as pl
import pandera.polars as pa
from pandera.typing.polars import LazyFrame as PALazyFrame
class MyModel(pa.DataFrameModel):
a: int
class Config:
strict = True
def foo(
f: pl.LazyFrame
) -> PALazyFrame[MyModel]:
# Our input is unclean, probably coming from pl.scan_parquet on some files
# The validation is dummy here
return MyModel.validate(f.select('a'))
</code></pre>
<p>If I'm calling <code>mypy</code> it will return the following error</p>
<pre><code>error: Incompatible return value type (got "DataFrameBase[MyModel]", expected "LazyFrame[MyModel]")
</code></pre>
<p>Sure, I can modify my signature to specify the return type <code>DataFrameBase[MyModel]</code>, but I'll lose the precision that I'm returning a LazyFrame.</p>
<p>Furthermore, <code>LazyFrame</code> is defined as a subclass of <code>DataFrameBase</code> in the <code>pandera</code> code.</p>
<p>How can I fix my code so that the return type LazyFrame[MyModel] works?</p>
|
<python><python-typing><mypy><python-polars><pandera>
|
2024-10-08 10:12:35
| 1
| 3,713
|
ohe
|
79,065,271
| 10,451,021
|
Trying to pass prompt value into the code in Azure DevOps (python)
|
<p>I am trying to get a UserStory value from the user while running a build pipeline in Azure DevOps. I want to use that value as a variable in the Python script that I am building. I tried the below YAML code; the error is also attached.</p>
<pre><code>trigger:
- main
pool:
vmImage: 'ubuntu-latest'
parameters:
- name: UserStory
displayName: Enter User Story ID.
type: string
default: ''
steps:
- task: UsePythonVersion@0
inputs:
versionSpec: '3.x'
addToPath: true
- script: |
echo "Running p1.py with User Story ID ${UserStory}"
export USER_STORY_ID=${UserStory}
python p1.py
displayName: 'Run p1.py'
</code></pre>
<p>p1.py</p>
<pre><code>import os
print('Hello, world!')
user_story2 = os.getenv('USER_STORY_ID')
print(user_story2)
</code></pre>
<p>Error:</p>
<pre><code> ========================== Starting Command Output ===========================
/usr/bin/bash --noprofile --norc /home/vsts/work/_temp/e91b2e42-ada0-4108-aa1f-8bb3b4b6d4a3.sh
Running p1.py with User Story ID
Hello, world!
</code></pre>
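<p>For context: <code>${UserStory}</code> is expanded by bash at run time, where no such shell variable exists, which is why the echoed ID is empty. Pipeline parameters are substituted at compile time with template-expression syntax. A hedged sketch of the script step:</p>

```yaml
- script: |
    echo "Running p1.py with User Story ID ${{ parameters.UserStory }}"
    export USER_STORY_ID="${{ parameters.UserStory }}"
    python p1.py
  displayName: 'Run p1.py'
```

<p>Alternatively, the parameter can be mapped into the step's environment via an <code>env:</code> block so the Python side still reads it with <code>os.getenv</code>.</p>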
|
<python><azure-devops><azure-pipelines><azure-pipelines-yaml>
|
2024-10-08 09:30:42
| 2
| 1,999
|
Salman
|
79,065,268
| 6,105,404
|
Horizontal bar chart with matplotlib and an x-offset
|
<p>I'd like to make a horizontal bar chart with the length of the space mission (in years) on the x-axis plotted against cost of mission given on the y-axis.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_csv('mission_lifetime.dat')
length_of_mission = data['Mission_End_Year']-data['Mission_Start_Year']
cost = data['Cost_in_millions']
plt.barh(length_of_mission, cost)
</code></pre>
<p>has the correct 'length' of the x-bar but not the correct offset for the start of the x-axis (in this case "data['Mission_Start_Year']")</p>
<p>Here is a sample of the data file:</p>
<pre><code>Mission,Mission_Start_Year,Mission_End_Year,Cost_in_millions
ISO,1995,1998,615.
Herschel,2009,2013,1100.
Planck,2009,2013,700.
</code></pre>
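<p>A minimal sketch of the intended plot, assuming the goal is horizontal bars that start at each mission's start year: the <code>left</code> argument of <code>barh</code> supplies the x-offset. Data values are taken from the sample above:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt

start = [1995, 2009, 2009]
end = [1998, 2013, 2013]
cost = [615., 1100., 700.]
length = [e - s for s, e in zip(start, end)]

fig, ax = plt.subplots()
# `left` shifts each bar so it begins at the mission's start year;
# bar width is the mission duration, y position is the cost.
bars = ax.barh(cost, length, left=start)
ax.set_xlabel("Year")
ax.set_ylabel("Cost in millions")
```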
|
<python><matplotlib><plot><histogram>
|
2024-10-08 09:30:20
| 1
| 1,918
|
npross
|
79,065,238
| 11,403,193
|
Python 3.13 distutils.msvccompiler Not Found Error
|
<p>During the installation of <code>gymnasium-2048</code> I came across this rather persistent error message:
<code>ModuleNotFoundError: No module named 'distutils.msvccompiler'</code></p>
<p>A quick Google search suggests installing the MSVC compiler via the Visual Studio Installer, etc. But this alone did not solve the problem. The error message is still the same. Even after restarting and making sure the compiler is installed in its default location (C drive for Windows), the error still occurs, suggesting it somehow cannot find an MSVC compiler.</p>
<p>OS: Windows 10
Python: 3.13.0</p>
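<p>For context: <code>distutils</code> was removed from the standard library in Python 3.12, so on 3.13 the import fails regardless of whether MSVC is installed. Two hedged options (the second assumes the Windows <code>py</code> launcher and an installed 3.11):</p>

```shell
# setuptools ships a distutils shim, though recent versions may no
# longer include the msvccompiler submodule
pip install setuptools

# or build under Python 3.11, which still ships distutils
py -3.11 -m pip install gymnasium-2048
```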
|
<python><visual-c++><pip>
|
2024-10-08 09:22:30
| 1
| 401
|
SiminSimin
|
79,065,191
| 801,924
|
Why does hex decode need two digits
|
<p>The <code>hex</code> Python function produces <code>0x0</code> for the number <code>0</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> hex(0)
'0x0'
</code></pre>
<p>But <code>bytes.fromhex</code> only accepts hex strings with a minimum of two digits:</p>
<pre class="lang-py prettyprint-override"><code>>>> bytes.fromhex(hex(0)[2:])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: non-hexadecimal number found in fromhex() arg at position 1
>>> bytes.fromhex('00')
b'\x00'
</code></pre>
<p>Why does <code>bytes.fromhex</code> accept hex strings with two digits but not with one digit?</p>
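<p>For reference: <code>bytes.fromhex</code> maps every <em>pair</em> of hex digits to one byte, so the input length must be even, while <code>hex()</code> prints the shortest representation. A small sketch of padding to an even length:</p>

```python
def to_even_hex(n: int) -> str:
    # bytes.fromhex consumes exactly two hex digits per byte, so pad
    # odd-length strings with a leading zero
    s = format(n, "x")
    return s if len(s) % 2 == 0 else "0" + s

print(bytes.fromhex(to_even_hex(0)))     # b'\x00'
print(bytes.fromhex(to_even_hex(255)))   # b'\xff'
print(bytes.fromhex(to_even_hex(4095)))  # b'\x0f\xff'
```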
|
<python>
|
2024-10-08 09:14:04
| 2
| 7,711
|
bux
|
79,065,189
| 1,481,986
|
Matplotlib venn3 empty subgroup layout
|
<p>I'm creating a venn3 diagram using <code>matplotlib_venn</code> where one of the subsets is empty. Minimal example -</p>
<pre><code>from matplotlib_venn import venn3, venn3_circles
import matplotlib.pyplot as plt
group1 = set([1,2, 3])
group2 = set([1, 4, 5])
group3 = set([4, 5])
venn3(subsets = [group1, group2, group3], set_labels = ['a','b' ,'c'])
plt.show()
</code></pre>
<p>And I get the following image -
<a href="https://i.sstatic.net/lG2tOyB9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lG2tOyB9.png" alt="enter image description here" /></a></p>
<p>The empty subset appears in the image, although I wouldn't like it to appear (i.e., the layout should be improved; I can remove the labels later). What can I do to improve the layout?</p>
<p>(I tried a few cost algorithms, but nothing helped)</p>
|
<python><matplotlib><matplotlib-venn>
|
2024-10-08 09:13:19
| 1
| 6,241
|
Tom Ron
|
79,065,062
| 9,324,997
|
Copy a ML Model from one Azure Databricks workspace to another Databricks Workspace
|
<p>I ran the code below to export an ML model from MLflow on <strong>Azure Databricks</strong>, but I seem to be getting this error:</p>
<blockquote>
<p>MLflow host or token is not configured correctly</p>
</blockquote>
<p>I'm unable to figure out what the issue is. The URL for the workspace is correct along with the PAT Token.</p>
<p>The export_import tooling is very buggy. It expects the <code>mlflow</code> library, but what comes with the Databricks ML Runtime is <code>mlflow-skinny</code>.</p>
<pre><code>import mlflow
import os
from mlflow_export_import.model.export_model import ModelExporter
from mlflow.tracking import MlflowClient
# Set the Databricks MLflow tracking URI with the workspace URL
mlflow.set_tracking_uri("https://adb-xxxyyymmmnnnyyy.1.azuredatabricks.net/")
# Set both tokens for compatibility
os.environ["DATABRICKS_TOKEN"] = "mnop6672ec8e20c7d219eb2A-3"
os.environ["MLFLOW_TRACKING_TOKEN"] = "mnop6672ec8e20c7d219eb2A-3"
# Initialize the MLflow client (no need to pass tracking URI as it's set globally)
mlflow_client = MlflowClient()
# Initialize the ModelExporter with the MLflow client
exporter = ModelExporter(mlflow_client)
# Export the model
exporter.export_model(
model_name="Signature_Test",
output_dir="/tmp/mlflow_export/model",
stages=None, # Use "None" to export all stages, or specify "Staging" or "Production"
export_metadata_tags=True
)
</code></pre>
|
<python><machine-learning><azure-databricks><mlflow>
|
2024-10-08 08:39:33
| 1
| 460
|
Gopinath Rajee
|
79,064,907
| 3,494,271
|
How to ensure QWebEngineScript is executed before any other JS script in HTML document?
|
<p>I'm trying to add a QWebEngineView widget to a PyQt5 application. This widget must load the following very simple SVG (which, by spec, I cannot modify):</p>
<pre><code><!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN"
"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg version="1.1" xmlns="http://www.w3.org/2000/svg">
<circle cx="50" cy="100" r="50" fill="red" />
<script type="text/javascript"><![CDATA[
alert("tag")
mynamespace.myfunction();
]]></script>
</svg>
</code></pre>
<p>But obviously, I need to define <code>mynamespace.myfunction</code> beforehand, so I'm trying the following in my widget constructor:</p>
<pre><code>import time
from PyQt5.QtCore import QUrl
from PyQt5.QtWebEngineWidgets import QWebEngineView, QWebEngineScript
class MyWebPageView(QWebEngineView):
    def __init__(self, *args, **kwargs):
super().__init__(*args,**kwargs)
# Create script object
js_script = QWebEngineScript()
js_script.setSourceCode(
"""
alert("qwebenginescript");
const mynamespace = { myfunction: function() { alert("success");}};
"""
)
js_script.setInjectionPoint(QWebEngineScript.DocumentCreation)
# Add the script to the page
self.page().scripts().insert(js_script)
time.sleep(3) #to be sure this is not a timing issue
# Load the SVG into the page
self.load(QUrl("file:///path/to/svg.svg"))
</code></pre>
<p>I expect the <code>QWebEngineScript</code> to be executed before the SVG's <code><script></code> code, but what I see is first the "tag" alert, and only then the "qwebenginescript" alert. The red circle is properly drawn, but the "success" alert never shows, and I get the following error message:</p>
<pre><code>js: Uncaught ReferenceError: mynamespace is not defined
</code></pre>
<p>How can I ensure the QWebEngineScript is really executed first?</p>
<p>(Note: this is a follow-up of this question: <a href="https://stackoverflow.com/posts/79059688">https://stackoverflow.com/posts/79059688</a>)</p>
|
<javascript><python><pyqt5><qwebengine>
|
2024-10-08 08:04:31
| 0
| 945
|
Silverspur
|
79,064,048
| 829,979
|
Issues with Using `--extra-index-url` in `uv` with Google Cloud Artifact Registry
|
<p>I'm trying to create a <code>uv</code> project that uses an <code>--extra-index-url</code> with Google Cloud Artifact Registry. According to the <a href="https://docs.astral.sh/uv/guides/integration/alternative-indexes/" rel="nofollow noreferrer">uv documentation</a>, this should be possible. I am using <code>uv 0.4.18</code>. Here's what I've tried so far:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud auth application-default login --project ${PROJECT_ID}
uv venv
source .venv/bin/activate
uv pip install keyring keyrings.google-artifactregistry-auth
uv pip install --keyring-provider subprocess ${MY_PACKAGE} --extra-index-url https://${REGION}-python.pkg.dev/${PROJECT_ID}/${REPOSITORY_ID}/simple
</code></pre>
<p>However, it returns an error indicating that my package can't be found. Interestingly, when I use standard Python, I can install my private package without any issues. Here's the code that works:</p>
<pre class="lang-bash prettyprint-override"><code>gcloud auth application-default login --project ${PROJECT_ID}
python -m venv .venv
source .venv/bin/activate
pip install keyring keyrings.google-artifactregistry-auth
pip install ${MY_PACKAGE} --extra-index-url https://${REGION}-python.pkg.dev/${PROJECT_ID}/${REPOSITORY_ID}/simple
</code></pre>
<p>It seems like others have faced this issue before, as mentioned in this <a href="https://github.com/astral-sh/uv/issues/2822" rel="nofollow noreferrer">closed GitHub issue</a>. Has anyone else encountered this problem or found a workaround? Any help would be appreciated!</p>
|
<python><google-cloud-platform><uv>
|
2024-10-08 01:22:17
| 2
| 685
|
Jonatas Eduardo
|
79,063,940
| 688,563
|
Cannot figure out why previously working FastAI learner inference now fails with CUDA 12 and Jetpack 6.0
|
<p>I have a Jetson Orin and the internal drive failed. I had to flash from scratch.</p>
<p>I had some working Fastai 2.0 based code with a PyTorch version ~1 year ago and also a previous version of CUDA, I believe some version of 11.x. The Jetpack version was 5.x.</p>
<p>I now flashed the latest version of Jetpack on the Orin with Ubuntu 22.04. It is now Jetpack 6.0 with CUDA 12.</p>
<p>PyTorch has CUDA support installed:</p>
<pre><code>Python 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>>
>>> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>>> print(f"Using device: {device}")
Using device: cuda
</code></pre>
<p>Here are the versions of PyTorch and FastAI:</p>
<pre><code>>>> import fastai
>>> print(fastai.__version__)
2.7.17
>>> import torch
>>> print(torch.__version__)
2.4.0a0+07cecf4168.nv24.05
</code></pre>
<p>My previously working image classification code now fails at the inference call on this line:</p>
<pre class="lang-py prettyprint-override"><code># Perform batch inference
torch.cuda.synchronize()
start_inference = time.time()
with torch.no_grad():
# Put the model into inference mode
accumulated_output = []
logger.info("Setting inference mode now ...")
with torch.inference_mode():
for data in infer_loader:
data = data.cuda()
logger.info(f"Calling learner now: {data} ...")
logger.info(f"Learner is: {learner}")
output = learner(data)
logger.info("Done calling learner ...")
for tensor_output in output.data.cpu():
accumulated_output.append(tensor_output)
# Time the inference work
torch.cuda.synchronize()
end_inference = time.time()
logger.info(f"Inference time: {end_inference - start_inference}")
</code></pre>
<p>The error is caught in my exception handle due to this line:</p>
<pre><code>output = learner(data)
</code></pre>
<p>The error message is:</p>
<pre><code>Failed to perform inference: hasattr(): attribute name must be string
</code></pre>
<p>A more complete output is shown here:</p>
<pre><code>[2024-10-07 18:44:06,229] __main__ {process_frame_utils.py:258} INFO - Creating data loader now ...
[2024-10-07 18:44:06,230] __main__ {process_frame_utils.py:274} INFO - Setting inference mode now ...
[2024-10-07 18:44:06,757] __main__ {process_frame_utils.py:278} INFO - Calling learner now: tensor([[[[-2.1179, -2.1179, -2.1179, ..., -2.1179, -2.1179, -2.1179],
...
[-1.8044, -1.8044, -1.8044, ..., -1.8044, -1.8044, -1.8044]]]], device='cuda:0') ...
[2024-10-07 18:44:06,759] __main__ {process_frame_utils.py:279} INFO - Learner is: <fastai.learner.Learner object at 0xfffe851c4490>
[2024-10-07 18:44:06,798] __main__ {process_frame_utils.py:370} ERROR - Error: Failed to perform inference: hasattr(): attribute name must be string
</code></pre>
<p>I've tried retraining my model several times. Something has changed in the libraries between my previously working configuration and now; however, I have been unable to figure out what the error means.</p>
<p>How can I fix this?</p>
|
<python><pytorch><fast-ai>
|
2024-10-07 23:59:44
| 0
| 368
|
PhilBot
|
79,063,876
| 5,153,500
|
VSCode shows duplicate tests in Python
|
<p>I don't know if this is due to a recent update, but I am seeing my Python tests appear twice, and when I am in test debug mode they run twice as well. How can I fix this? I just created an empty project and I am running into the same issue.</p>
<p>Steps to reproduce:</p>
<ol>
<li><p><code>python -m venv .venv</code> (create virtualenv)</p>
</li>
<li><p><code>source ./.venv/bin/activate</code> (activate virtualenv)</p>
</li>
<li><p><code>pip install pytest==8.3.3</code></p>
</li>
<li><p>On VSCode select Python interpreter to be the virtualenv one.</p>
</li>
<li><p><code>mkdir server server/tests</code></p>
</li>
<li><p>Basic tests file</p>
<pre class="lang-py prettyprint-override"><code># server/tests/test_storage.py
import pytest
class TestStorage:
def test(self):
pass
</code></pre>
</li>
<li><p>Settings.json (I tried different variations with the directory)</p>
<pre class="lang-json prettyprint-override"><code>{
"python.testing.pytestArgs": [
"tests"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true
}
</code></pre>
</li>
</ol>
<p><a href="https://i.sstatic.net/63ZU1NBMm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/63ZU1NBMm.png" alt="Screenshot of test module" /></a>
<a href="https://i.sstatic.net/TMSQauQJm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMSQauQJm.png" alt="Screenshot 2" /></a>
<a href="https://i.sstatic.net/1KFAxxC3m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1KFAxxC3m.png" alt="enter image description here" /></a></p>
<p>VSCode Version: 1.94.0 (Universal)</p>
|
<python><visual-studio-code><pytest><vscode-debugger>
|
2024-10-07 23:13:14
| 1
| 560
|
Semih Sezer
|
79,063,772
| 2,778,405
|
AWS Lambda timing out on return statement
|
<p>I'm trying to write a simple lambda function to use as a view for an application that has parquet data living in an S3 bucket. I'm just trying to test out a simple function here:</p>
<pre><code>import json
import time
t = time.time()
print('This ')
import os
import urllib.parse
import boto3
import pandas as pd
import awswrangler as wr
print('Imports complete: ', time.time() - t)
s3 = boto3.client('s3')
def lambda_handler(event, context):
bucket = os.environ['S3_BUCKET']
try:
print('wrangling df')
df = wr.s3.read_parquet(bucket, dataset=True)
print('df loaded: ',time.time() - t)
response = df.to_json()
print('response type: ', type(response))
print('df to json: ',time.time() - t)
return response
except Exception as e:
print('exception caught: ',time.time() - t)
print(e)
print('Error getting object from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(bucket))
raise e
</code></pre>
<p>I'm using the managed python layer on us-west-2:</p>
<pre><code>arn:aws:lambda:eu-west-2:336392948345:layer:AWSSDKPandas-Python312:13
</code></pre>
<p>This is the output with a 60s timeout configured:</p>
<pre><code>Test Event Name
(unsaved) test event
Response
{
"errorType": "Sandbox.Timedout",
"errorMessage": "RequestId: 24b0f94f-a78e-40e4-9573-0e49ac3eeef3 Error: Task timed out after 60.00 seconds"
}
Function Logs
This
Imports complete: 4.597054481506348
START RequestId: 24b0f94f-a78e-40e4-9573-0e49ac3eeef3 Version: $LATEST
wrangling df
df loaded: 10.094886779785156
response type: <class 'str'>
df to json: 10.67463231086731
END RequestId: 24b0f94f-a78e-40e4-9573-0e49ac3eeef3
REPORT RequestId: 24b0f94f-a78e-40e4-9573-0e49ac3eeef3 Duration: 60000.00 ms Billed Duration: 60000 ms Memory Size: 128 MB Max Memory Used: 126 MB Init Duration: 4874.63 ms Status: timeout
Request ID
24b0f94f-a78e-40e4-9573-0e49ac3
</code></pre>
<p>So it looks like the function is just hanging at the return statement when I test it. Oddly, if I replace <code>return response</code> with <code>return 'some text'</code> it returns right after finishing. Why would a string several KB long change the return time by 50+ seconds?</p>
|
<python><amazon-web-services><aws-lambda>
|
2024-10-07 22:10:27
| 0
| 2,386
|
Jamie Marshall
|
79,063,686
| 898,042
|
How to catch throw and other exceptions in a coroutine with one yield?
|
<p>I have a variable DICTIONARY, a dictionary where the keys are English letters and the values are words that start with the corresponding letter. The initial filling of DICTIONARY looks like this:</p>
<pre><code>DICTIONARY = {
'a': 'apple',
'b': 'banana',
'c': 'cat',
'd': 'dog',
}
</code></pre>
<p>My code has two while loops, since a single higher-level try-except would end the entire generator loop:</p>
<pre><code>def alphabet():
while True:
try:
letter = yield #waiting for first input from send
while True:
try:
letter = yield DICTIONARY[letter]
except KeyError:
letter = yield 'default' # return 'default', if key is not found
except Exception:
letter = yield 'default'
except KeyError:
letter = yield 'default'
except Exception:
letter = yield 'default'
</code></pre>
<p>However for the input:</p>
<pre><code>coro = alphabet()
next(coro)
print(coro.send('apple'))
print(coro.send('banana'))
print(coro.throw(KeyError))
print(coro.send('dog'))
print(coro.send('d'))
</code></pre>
<p>expected output:</p>
<pre class="lang-none prettyprint-override"><code>default
default
default
default
dog
</code></pre>
<p>But I don't get the last 'default'; instead it's None:</p>
<pre class="lang-none prettyprint-override"><code>default
default
default
None
dog
</code></pre>
<p>What is wrong?</p>
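<p>Tracing the suspension points explains the <code>None</code> (a runnable reproduction of the generator above, with the never-fired <code>except Exception</code> clauses omitted): <code>throw(KeyError)</code> escapes the inner handler and is caught by the <em>outer</em> <code>except</code>, so <code>send('dog')</code> resumes that outer <code>yield 'default'</code>, the outer <code>while</code> restarts, and the bare <code>letter = yield</code> hands <code>None</code> back to the caller:</p>

```python
DICTIONARY = {'a': 'apple', 'b': 'banana', 'c': 'cat', 'd': 'dog'}

def alphabet():
    while True:
        try:
            # bare yield: after the outer handler runs, the loop
            # restarts here and this yield produces None
            letter = yield
            while True:
                try:
                    letter = yield DICTIONARY[letter]
                except KeyError:
                    letter = yield 'default'
        except KeyError:
            # throw(KeyError) raised inside the inner handler lands here
            letter = yield 'default'

coro = alphabet()
next(coro)
results = [
    coro.send('apple'),    # 'default' (inner KeyError handler)
    coro.send('banana'),   # 'default'
    coro.throw(KeyError),  # 'default' (outer handler)
    coro.send('dog'),      # None: outer loop restarts at the bare yield
    coro.send('d'),        # 'dog'
]
print(results)
```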
|
<python><coroutine><yield>
|
2024-10-07 21:26:36
| 3
| 24,573
|
ERJAN
|
79,063,645
| 3,849,662
|
Gaps and inconsistent ordering in plotly express bar chart
|
<p>I have a dataframe that consists of 3 columns, Date, Name, Number. With 5 dates (may change depending on time data extract is run) and 10 names per date. The same name can appear in multiple dates, or may only appear in one date. Numbers can be positive or negative. The data is ordered by Date (<code>ascending=True</code>) and then by Number (<code>ascending=False</code>).</p>
<p>I am trying to plot a chart using Plotly Express that has Number on the Y axis and Date on the X axis, with bars coloured by Name. Bars should be ordered from largest to smallest Number per date.</p>
<p>When using this code the ordering is correct for the first date, but after that there are gaps between some bars and the ordering is wrong, for example positive bars being plotted after negative ones.</p>
<pre><code>fig = px.bar(df, x="Date", y="Number", color="Name", barmode="group")
</code></pre>
<p>I have tried using <code>fig.update_layout(yaxis={'categoryorder': 'total ascending'})</code> but this doesn't seem to do anything.</p>
<p>Please can someone help me format this chart so that there are no gaps and the ordering is correct for all dates.</p>
<p>On further investigation it appears the ordering is set on the first value of the x axis (eg. Day1) and is then kept the same. So if a name is in Day1, but not in Day2, then there will be an empty space in Day2. If a name doesn't appear in Day1, but is in Day2, then that bar will appear and the end, even if it represents a larger Number than the previous bar.</p>
<p>Essentially I need to force Plotly Express to order the bars for each X value independently of each other.</p>
<p>The below code recreates my issue, albeit with only 2 dates rather than 5, it still demonstrates the problem.</p>
<pre><code>import pandas as pd
import plotly.express as px
df = pd.DataFrame({
"Name": ["Joe", "Tom", "Tim", "Alex", "Ben", "Steve", "Nick", "Alan", "Jack", "George", "Joe", "Tom", "Tim", "Leo", "Alex", "Ben", "Nick", "Alan", "Jack", "George"],
"Date": (["01-01-2024"] * 10) + (["01-02-2024"] * 10),
"Number": [0.5, 0.4, 0.3, 0.2, 0.1, -0.1, -0.2, -0.3, -0.4, -0.5, 0.5, 0.4, 0.3, 0.2, 0.1, -0.1, -0.2, -0.3, -0.4, -0.5]
})
df["Date"] = pd.to_datetime(df["Date"])
df.sort_values(by=["Date", "Number"], ascending=[True, False], inplace=True)
print(df)
fig=px.bar(df, x="Date", y="Number", color="Name", barmode="group")
fig.show()
</code></pre>
<p><strong>UPDATE</strong></p>
<p>After implementing the code suggested by r-beginners, the ordering and gaps issue is now solved, but it has introduced other formatting errors, such as overlapping bars and large gaps.</p>
<p>My input data is below:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Name</th>
<th>Number</th>
</tr>
</thead>
<tbody>
<tr>
<td>2024-02-19</td>
<td>B</td>
<td>80.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>C</td>
<td>70.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>A</td>
<td>40.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>D</td>
<td>30.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>E</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>G</td>
<td>-20.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>F</td>
<td>-40.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>J</td>
<td>-50.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>I</td>
<td>-60.0</td>
</tr>
<tr>
<td>2024-02-19</td>
<td>H</td>
<td>-90.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>A</td>
<td>140.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>C</td>
<td>90.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>B</td>
<td>80.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>E</td>
<td>40.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>K</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>F</td>
<td>-10.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>G</td>
<td>-30.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>I</td>
<td>-40.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>H</td>
<td>-90.0</td>
</tr>
<tr>
<td>2024-02-20</td>
<td>J</td>
<td>-140.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>C</td>
<td>100.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>B</td>
<td>90.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>A</td>
<td>80.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>D</td>
<td>30.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>E</td>
<td>20.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>F</td>
<td>-20.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>G</td>
<td>-40.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>H</td>
<td>-100.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>I</td>
<td>-130.0</td>
</tr>
<tr>
<td>2024-02-21</td>
<td>J</td>
<td>-150.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>A</td>
<td>30.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>E</td>
<td>30.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>B</td>
<td>20.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>C</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>D</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>F</td>
<td>-20.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>G</td>
<td>-50.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>H</td>
<td>-70.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>I</td>
<td>-70.0</td>
</tr>
<tr>
<td>2024-02-22</td>
<td>J</td>
<td>-110.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>B</td>
<td>170.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>C</td>
<td>90.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>E</td>
<td>50.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>A</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>D</td>
<td>10.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>F</td>
<td>50.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>G</td>
<td>-10.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>H</td>
<td>-80.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>I</td>
<td>-80.0</td>
</tr>
<tr>
<td>2024-02-23</td>
<td>J</td>
<td>-150.0</td>
</tr>
</tbody>
</table></div>
<p>The code used is:</p>
<pre><code>fig = go.Figure()
for d in df['Date'].unique():
dff = df.query('Date == @d')
for n in dff['Name'].unique():
dfn = dff.query('Name == @n')
fig.add_trace(go.Bar(
x=dfn['Date'],
y=dfn['Number'],
marker=dict(color=color_dict[n]),
name=n,
width=60*60*1000
)
)
names = set()
fig.for_each_trace(
lambda trace:
trace.update(showlegend=False)
if (trace.name in names) else names.add(trace.name))
unique_dates = df["Date"].unique()
print(unique_dates)
min_x = unique_dates[0]
print(type(min_x))
max_x = unique_dates[:-1]
print(type(max_x))
fig.update_layout(xaxis_range=[min_x, max_x])
fig.update_layout(height=500, width=800, barmode='group')
fig.show()
</code></pre>
<p>and the output looks like this:</p>
<p><a href="https://i.sstatic.net/9nhG4sKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nhG4sKN.png" alt="output chart" /></a></p>
|
<python><pandas><plotly>
|
2024-10-07 21:09:29
| 1
| 773
|
Joe Smart
|
79,063,592
| 8,869,570
|
Quickest way to compute the sum of squared elements of the result of a matrix multiplication?
|
<p>I want to do something like this in Python</p>
<pre><code>sum(square(matmul(A, B)))
</code></pre>
<p>There are various ways you can achieve this behavior, e.g.,</p>
<pre><code>sum(np.square(np.matmul(A, B)))
</code></pre>
<p>or</p>
<pre><code>np.linalg.norm(np.matmul(A, B)) ** 2
</code></pre>
<p>I haven't profiled it yet, but I suspect the second option will be faster (fewer ops, fused operations, etc.).</p>
<p>Are there other approaches that could potentially beat this? I'd like to profile various approaches to solving this. I'm not restricted to numpy.</p>
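<p>A few equivalent formulations worth adding to the benchmark (a sketch; correctness checked below, relative speed not): <code>einsum</code> avoids the temporary array from squaring, and the trace identity <code>||AB||_F^2 = trace(B^T (A^T A) B)</code> never forms the squared matrix at all, which can pay off depending on the shapes:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
B = rng.standard_normal((30, 40))

C = A @ B
v1 = np.sum(np.square(C))            # the original square-and-sum
v2 = np.linalg.norm(C) ** 2          # Frobenius norm squared
v3 = np.einsum("ij,ij->", C, C)      # fused reduction, no squared temporary
v4 = np.trace(B.T @ (A.T @ A) @ B)   # trace identity, never squares C
```

<p>All four agree to floating-point tolerance; timing them with <code>timeit</code> on the actual shapes involved is the way to pick a winner.</p>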
|
<python><numpy><matmul>
|
2024-10-07 20:47:01
| 1
| 2,328
|
24n8
|
79,063,569
| 11,515,528
|
Error encountered scraping data using Selenium due to NoneType
|
<p>I am extracting data from the website <code>https://octopus.energy/dashboard/new/accounts</code> that requires a login. I have been successfully accessing it using Selenium with this code.</p>
<pre><code>driver = webdriver.Chrome()
wait = WebDriverWait(driver, 30)
page = driver.get(electric_meter)
user_name ="xxxx"
pw = "xxxx"
wait.until(EC.element_to_be_clickable((By.NAME, "username"))).send_keys(user_name)
sleep(0.11)
wait.until(EC.element_to_be_clickable((By.NAME, "password"))).send_keys(pw)
sleep((0.21))
wait.until(EC.element_to_be_clickable((By.XPATH, "//*[@id='loginForm']/div[4]/button"))).click()
</code></pre>
<p>The issue arises when I attempt to retrieve the information. The below code functions properly; however, it outputs a string containing all the data that could be parsed using regex. Still, I am looking for a more efficient or built-in solution.</p>
<pre><code>print(driver.find_element(By.CLASS_NAME, "MeterReadingHistory__content").get_attribute("textContent"))
</code></pre>
<p><strong>returns</strong></p>
<pre><code>Nice to be aware of: if you have recently sent in a reading, we have received it - it may just take some time to be displayed. Maybe give it another shot tomorrow. DateReading7th Oct 2024Your reading252046th Oct 2024Your reading251984th Oct 2024Your reading251863rd Oct 2024Your...
</code></pre>
<p>I want to put each data point e.g. <code>6th Oct 2024Your reading25198</code> into a dataframe.</p>
<p><a href="https://i.sstatic.net/TzUWA7Jj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TzUWA7Jj.png" alt="DOM" /></a></p>
<p><code>MeterReadingHistory__content</code> functions properly but I encounter an <code>AttributeError</code> when attempting to retrieve text from <code>MeterReadingHistorystyled__StyledReadingsContainer-sc-1mqak03-0.cEwaaW</code> which states 'NoneType' object does not have 'get_attribute'.</p>
<p>What am I overlooking?</p>
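<p>For the regex fallback mentioned above, a hedged sketch (it assumes the "Nth Mon YYYY" date format shown and five-digit meter readings, both assumptions about this particular page):</p>

```python
import re

# stand-in for the textContent string returned by Selenium
text = ("7th Oct 2024Your reading25204"
        "6th Oct 2024Your reading25198"
        "4th Oct 2024Your reading25186")

# assumes 'Nth Mon YYYY' dates and 5-digit readings
pattern = r"(\d{1,2}(?:st|nd|rd|th) \w{3} \d{4})Your reading(\d{5})"
rows = re.findall(pattern, text)
print(rows)
```

<p>With the rows extracted, <code>pd.DataFrame(rows, columns=['Date', 'Reading'])</code> gives the dataframe; iterating the individual row elements in the DOM rather than one <code>textContent</code> blob remains the cleaner route.</p>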
|
<python><selenium-webdriver><web-scraping>
|
2024-10-07 20:34:00
| 1
| 1,865
|
Cam
|
79,063,494
| 7,347,925
|
How to accelerate getting points within distance using two DataFrames?
|
<p>I have two DataFrames (df and locations_df), and both have longitude and latitude values. I'm trying to find the df's points within 2 km of each row of locations_df.</p>
<p>I tried to vectorize the function, but the speed is still slow when <code>locations_df</code> is a big DataFrame (nrows>1000). Any idea how to accelerate?</p>
<pre><code>import pandas as pd
import numpy as np
def select_points_for_multiple_locations_vectorized(df, locations_df, radius_km):
R = 6371 # Earth's radius in kilometers
# Convert degrees to radians
df_lat_rad = np.radians(df['latitude'].values)[:, np.newaxis]
df_lon_rad = np.radians(df['longitude'].values)[:, np.newaxis]
loc_lat_rad = np.radians(locations_df['lat'].values)
loc_lon_rad = np.radians(locations_df['lon'].values)
# Haversine formula (vectorized)
dlat = df_lat_rad - loc_lat_rad
dlon = df_lon_rad - loc_lon_rad
a = np.sin(dlat/2)**2 + np.cos(df_lat_rad) * np.cos(loc_lat_rad) * np.sin(dlon/2)**2
c = 2 * np.arctan2(np.sqrt(a), np.sqrt(1-a))
distances = R * c
# Create a mask for points within the radius
mask = distances <= radius_km
# Get indices of True values in the mask
indices = np.where(mask)
result = pd.concat([df.iloc[indices[0]].reset_index(drop=True), locations_df.iloc[indices[1]].reset_index(drop=True)], axis=1)
return result
def random_lat_lon(n=1, lat_min=-10., lat_max=10., lon_min=-5., lon_max=5.):
"""
this code produces an array with pairs lat, lon
"""
lat = np.random.uniform(lat_min, lat_max, n)
lon = np.random.uniform(lon_min, lon_max, n)
return np.array(tuple(zip(lat, lon)))
df = pd.DataFrame(random_lat_lon(n=10000000), columns=['latitude', 'longitude'])
locations_df = pd.DataFrame(random_lat_lon(n=20), columns=['lat', 'lon'])
result = select_points_for_multiple_locations_vectorized(df, locations_df, radius_km=2)
</code></pre>
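<p>One way to cut the work dramatically (a sketch, not the question's code; the helper names are illustrative): prefilter with a cheap bounding box in degree space before the exact haversine, since at a 2 km radius almost all of the 10M points can be rejected with two comparisons per axis:</p>

```python
import numpy as np

R = 6371.0  # Earth's radius in km

def haversine_km(lats, lons, lat0, lon0):
    lats, lons, lat0, lon0 = map(np.radians, (lats, lons, lat0, lon0))
    a = (np.sin((lats - lat0) / 2) ** 2
         + np.cos(lats) * np.cos(lat0) * np.sin((lons - lon0) / 2) ** 2)
    return 2 * R * np.arcsin(np.sqrt(a))

def points_near(lats, lons, lat0, lon0, radius_km):
    # ~111 km per degree of latitude; widen longitude by 1/cos(lat)
    dlat = radius_km / 111.0
    dlon = radius_km / (111.0 * max(np.cos(np.radians(lat0)), 1e-6))
    box = (np.abs(lats - lat0) <= dlat) & (np.abs(lons - lon0) <= dlon)
    idx = np.where(box)[0]
    # exact haversine test only on the few survivors
    d = haversine_km(lats[idx], lons[idx], lat0, lon0)
    return idx[d <= radius_km]
```

<p>For many query points, a spatial index such as <code>scipy.spatial.cKDTree</code> or <code>sklearn.neighbors.BallTree</code> with the haversine metric is the usual next step.</p>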
|
<python><pandas><dataframe><dask><geopandas>
|
2024-10-07 20:02:28
| 1
| 1,039
|
zxdawn
|
79,063,476
| 2,512,377
|
Adding a policy to my custom chain in nftables via python libnftables is failing, can you tell me why?
|
<p>I'm trying to create a table, chain and ruleset for nft in python to act as a whitelist.</p>
<p>So I need a chain with policy of "drop" to drop all outbound packets that do not match approved destinations set out in the rules of that chain.</p>
<p>I want to use a custom table to keep maintenance easier and so that fail2ban and other utilities can operate alongside this whitelist (which will be for packets incoming from one specific interface only ).</p>
<p>I'm using python and the nft json library, I can see my table and chain and rule being created, but no policy is listed in the ruleset output after running my program, so I assume it will default to "accept".</p>
<p>I am using a json string to set up a new table and a chain within, currently it just has a very simple rule for test purposes. It passes validation by libnftables, but when I list the ruleset after running my python code I see the table, chain and rules, but no policy applied at all.</p>
<p>here's the setup string:</p>
<pre><code> {'nftables':
[{'add': {'table': {'family': 'ip', 'name': 'O365'}}},
{'add': {'chain': {'family': 'ip', 'table': 'O365', 'name': 'O365WhiteList', 'policy': 'drop'}}},
{'add': {'rule': {'family': 'ip', 'table': 'O365', 'chain': 'O365WhiteList',
'expr': [{'match': {'op': '==', 'left': {'payload': {'protocol': 'tcp', 'field': 'dport'}}, 'right': 22}}, {'accept': None}
]
}}}]}
</code></pre>
<p>And here's the ruleset output:</p>
<pre><code># Warning: table ip filter is managed by iptables-nft, do not touch!
table ip filter {
chain f2b-sshd {
counter packets 92105 bytes 13096562 return
}
chain INPUT {
type filter hook input priority filter; policy accept;
meta l4proto tcp tcp dport 22 counter packets 132201 bytes 15881220 jump f2b-sshd
}
}
# Warning: table ip nat is managed by iptables-nft, do not touch!
table ip nat {
chain POSTROUTING {
type nat hook postrouting priority srcnat; policy accept;
oifname "ens19" counter packets 983 bytes 70843 masquerade
}
}
table ip O365 {
chain O365WhiteList {
tcp dport 22 accept
}
}
</code></pre>
<p>As you can see, I'm not actually hooking into my table yet, I just want to see it created properly first. The table, chain, and rule are there, but no policy is given.</p>
<p>Any ideas? The man page for libnftables-json isn't helpful as far as I can see, listing "policy" only as STRING.</p>
<p>Below is the full code for the above:</p>
<pre><code>import json
import nftables
#some objects to be turned into json for nftables to apply as commands
delTable = {"nftables": [{ "delete": { "table": { "family": "ip", "name": "O365" }}}]}
setupTablesCmds= """
{ "nftables": [
{ "add": { "table": { "family": "ip", "name": "O365" }}},
{ "add": { "chain": {
"family": "ip",
"table": "O365",
"name": "O365WhiteList",
"policy": "drop"
}}},
{ "add": { "rule": {
"family": "ip",
"table": "O365",
"chain": "O365WhiteList",
"expr": [
{ "match": {
"op": "==",
"left": { "payload": {
"protocol": "tcp",
"field": "dport"
}},
"right": 22
}},
{ "accept": null }
]
}}}
]}
"""
nft = nftables.Nftables()
try:
nft.json_validate(json.loads(setupTablesCmds))
except Exception as e:
print(f"ERROR: failed validating initial setup json schema: {e}")
exit(1)
print(" base config data passed validation, yay!" )
print()
print("Removing old O365 table")
rc, output, error = nft.json_cmd(delTable)
if rc != 0:
# error here is probably because table doesn't exist, which is fine, so no exit on error for now
print(f"ERROR: running json cmd: {error}")
print(" setting up new O365 table " )
print( "json commands = :", json.loads(setupTablesCmds))
rc, output, error = nft.json_cmd(json.loads(setupTablesCmds))
if rc != 0:
print(f"ERROR: running json cmd: {error}")
exit(1)
if len(output) != 0:
print(f"WARNING: output: {output}")
print(" Base config applied ok, O365 table created" )
</code></pre>
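<p>One thing worth checking against the libnftables-json docs: in nftables a <code>policy</code> only applies to <em>base</em> chains, i.e. chains that declare <code>type</code>, <code>hook</code>, and <code>prio</code>. The chain above has no hook, so it is a regular chain and the policy is silently dropped. A hedged sketch of a base-chain declaration (hook and priority chosen for illustration only):</p>

```json
{ "add": { "chain": {
    "family": "ip",
    "table": "O365",
    "name": "O365WhiteList",
    "type": "filter",
    "hook": "input",
    "prio": 0,
    "policy": "drop"
}}}
```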
|
<python><json><netfilter><nftables>
|
2024-10-07 19:53:31
| 1
| 549
|
user2512377
|
79,063,329
| 5,536,016
|
Django custom function taking a long time to run for the view
|
<p>I have a function below that generates a navigation structure so it can be used in the view. It pulls in the articles, article categories, and article subcategories. The main issue is that with about 15 categories, 46 subcategories, and 33 articles it takes 7-10 seconds to finish running this function. What is the best approach for handling this so that it loads faster?</p>
<pre><code>def generate_navigation_list(request):
articles = Article.objects.all()
article_category = ArticleCategory.objects.all()
articles_sub_category = ArticleSubCategory.objects.all()
# Create a navigation list structure
navigation_list = []
# Iterate over categories
for category in article_category:
category_dict = {
'id': category.id,
'title': category.name,
'subcategory': []
}
# Filter subcategories for the current category
filtered_subcategories = articles_sub_category.filter(category=category)
# Iterate over filtered subcategories
for subcategory in filtered_subcategories:
subcategory_dict = {
'id': subcategory.id,
'title': subcategory.name,
'articles': []
}
# Filter articles for the current subcategory
filtered_articles = articles.filter(sub_category=subcategory).order_by('order')
# Add articles to the subcategory dictionary if there are articles
if filtered_articles.exists(): # Check if there are any articles
for article in filtered_articles:
article_dict = {
'title': article.title,
'slug': article.slug
}
subcategory_dict['articles'].append(article_dict)
# Append subcategory dictionary to category dictionary
category_dict['subcategory'].append(subcategory_dict)
# Append category dictionary to navigation list if it has subcategories with articles
if category_dict['subcategory']:
navigation_list.append(category_dict)
request.session['navigation_list'] = navigation_list
request.session.save()
</code></pre>
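The nested <code>.filter()</code> calls above issue a separate database query per category and per subcategory, which is the usual cause of multi-second load times. A minimal sketch of the fix: fetch everything once, then group in memory. Plain dicts stand in for prefetched model rows here; the field names mirror the question but the in-memory shape is an assumption.

```python
from collections import defaultdict

def build_navigation(categories, subcategories, articles):
    """Group flat rows into the nested navigation structure, one pass per list."""
    # Index articles by subcategory id, pre-sorted by 'order'
    articles_by_sub = defaultdict(list)
    for art in sorted(articles, key=lambda a: a["order"]):
        articles_by_sub[art["sub_category_id"]].append(
            {"title": art["title"], "slug": art["slug"]}
        )
    # Index subcategories by category id, attaching their articles
    subs_by_cat = defaultdict(list)
    for sub in subcategories:
        subs_by_cat[sub["category_id"]].append(
            {"id": sub["id"], "title": sub["name"],
             "articles": articles_by_sub[sub["id"]]}
        )
    # Keep only categories that ended up with subcategories
    return [
        {"id": cat["id"], "title": cat["name"], "subcategory": subs_by_cat[cat["id"]]}
        for cat in categories
        if subs_by_cat[cat["id"]]
    ]
```

In Django the three inputs would come from three plain queries (e.g. <code>list(Article.objects.all())</code>) or a single <code>prefetch_related</code> chain, instead of roughly one query per subcategory.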
|
<python><django>
|
2024-10-07 18:59:30
| 1
| 1,765
|
Nello
|
79,063,204
| 6,068,462
|
Matplotlib's plt.show() throwing "ValueError: object __array__ method not producing an array"
|
<p>I'm trying to run the following simple lines of Python code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
plt.plot(x, y)
plt.show()
</code></pre>
<p>But I'm getting the following error for the plt.show() line:</p>
<pre><code>Traceback (most recent call last):
File "/home/user/Pycharm/someproject/sam.py", line 36, in <module>
plt.show()
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/pyplot.py", line 612, in show
return _get_backend_mod().show(*args, **kwargs)
File "/home/yz8zdw/Programs/pycharm-2024.1.3/plugins/python/helpers/pycharm_matplotlib_backend/backend_interagg.py", line 41, in __call__
manager.show(**kwargs)
File "/home/yz8zdw/Programs/pycharm-2024.1.3/plugins/python/helpers/pycharm_matplotlib_backend/backend_interagg.py", line 144, in show
self.canvas.show()
File "/home/yz8zdw/Programs/pycharm-2024.1.3/plugins/python/helpers/pycharm_matplotlib_backend/backend_interagg.py", line 80, in show
FigureCanvasAgg.draw(self)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/backends/backend_agg.py", line 387, in draw
self.figure.draw(self.renderer)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/artist.py", line 95, in draw_wrapper
result = draw(artist, renderer, *args, **kwargs)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/artist.py", line 72, in draw_wrapper
return draw(artist, renderer)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/figure.py", line 3161, in draw
self.patch.draw(renderer)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/artist.py", line 72, in draw_wrapper
return draw(artist, renderer)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/patches.py", line 632, in draw
self._draw_paths_with_artist_properties(
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/patches.py", line 617, in _draw_paths_with_artist_properties
renderer.draw_path(gc, *draw_path_args)
File "/home/user/Programs/miniconda/envs/plot/lib/python3.10/site-packages/matplotlib/backends/backend_agg.py", line 131, in draw_path
self._renderer.draw_path(gc, path, transform, rgbFace)
ValueError: object __array__ method not producing an array
</code></pre>
<p>Environment information:</p>
<pre><code>python 3.8.20
numpy 1.16.0
matplotlib 3.7.5
</code></pre>
<p>What's causing this issue?</p>
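The traceback ends inside the Agg renderer's <code>draw_path</code>, which is a typical symptom of a NumPy build far older than what this Matplotlib release expects (numpy 1.16 with matplotlib 3.7.5). A hedged helper that flags such a pairing before plotting; the minimum-version floor used here is an assumption for illustration, not a value from Matplotlib's official compatibility table.

```python
def version_tuple(v):
    """Parse a dotted version string like '1.16.0' into a comparable tuple."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def numpy_too_old_for_mpl(numpy_version, matplotlib_version,
                          required={(3, 7): (1, 20)}):
    """Return True when numpy is below the assumed floor for the given
    matplotlib minor series (floor values are illustrative assumptions)."""
    mpl = version_tuple(matplotlib_version)[:2]
    floor = required.get(mpl)
    return floor is not None and version_tuple(numpy_version)[:2] < floor
```

Upgrading NumPy into the range the installed Matplotlib was built against (or downgrading Matplotlib to match) is the usual resolution.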
|
<python><numpy><matplotlib><valueerror>
|
2024-10-07 18:29:37
| 1
| 541
|
TheTomer
|
79,063,140
| 6,068,462
|
ModuleNotFoundError: No module named 'distutils.msvccompiler' when trying to install numpy 1.16
|
<p>I'm working inside a conda environment and I'm trying to downgrade numpy to version 1.16, but when running <code>pip install numpy==1.16</code> I keep getting the following error:</p>
<pre><code>$ pip install numpy==1.16
Collecting numpy==1.16
Downloading numpy-1.16.0.zip (5.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.1/5.1 MB 10.8 MB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Building wheels for collected packages: numpy
Building wheel for numpy (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Running from numpy source directory.
/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/misc_util.py:476: SyntaxWarning: "is" with a literal. Did you mean "=="?
return is_string(s) and ('*' in s or '?' is s)
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/setup.py", line 415, in <module>
setup_package()
File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/setup.py", line 394, in setup_package
from numpy.distutils.core import setup
File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/core.py", line 26, in <module>
from numpy.distutils.command import config, config_compiler, \
File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/command/config.py", line 19, in <module>
from numpy.distutils.mingw32ccompiler import generate_manifest
File "/tmp/pip-install-jdof0z8r/numpy_4597057bbb504aa18b7bda112f0aa37f/numpy/distutils/mingw32ccompiler.py", line 34, in <module>
from distutils.msvccompiler import get_build_version as get_build_msvc_version
ModuleNotFoundError: No module named 'distutils.msvccompiler'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Running setup.py clean for numpy
error: subprocess-exited-with-error
× python setup.py clean did not run successfully.
│ exit code: 1
╰─> [10 lines of output]
Running from numpy source directory.
`setup.py clean` is not supported, use one of the following instead:
- `git clean -xdf` (cleans all files)
- `git clean -Xdf` (cleans all versioned files, doesn't touch
files that aren't checked into the git repo)
Add `--force` to your command to use it anyway if you must (unsupported).
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed cleaning build dir for numpy
Failed to build numpy
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (numpy)
</code></pre>
<p>How can I resolve this?</p>
|
<python><numpy><anaconda><conda><distutils>
|
2024-10-07 18:13:22
| 1
| 541
|
TheTomer
|
79,063,135
| 1,178,841
|
Python Filtering
|
<p>In my application I have a list of films that I want to filter by year or other criteria. My filters work fine except the year (the variable annees).</p>
<p>In the following code the variable annees becomes {'9', '1'} for video['année'] = 1999;
all the other variables have no such problem.</p>
<pre><code>def populate_comboboxes_with_filtered_videos(self, filtered_videos):
genres = set()
acteurs = set()
realisateurs = set()
annees = set()
langues = set()
sous_titres = set()
for video in filtered_videos:
genres.update(video['genres'])
acteurs.update(video['acteurs'])
realisateurs.update(video['réalisateurs'])
annees.update(video['année'])
langues.add(video['langue'])
sous_titres.add(video['sous_titres'])
...
</code></pre>
<p>Any help for a Python newbie would be welcome.</p>
|
<python>
|
2024-10-07 18:11:14
| 1
| 571
|
G. Trennert
|
79,063,091
| 13,142,245
|
FastAPI stateful dependencies
|
<p>I've been reviewing the <a href="https://fastapi.tiangolo.com/tutorial/dependencies/#first-steps" rel="nofollow noreferrer">Depends docs</a>, official example</p>
<pre class="lang-py prettyprint-override"><code>from typing import Annotated
from fastapi import Depends, FastAPI
app = FastAPI()
async def common_parameters(q: str | None = None, skip: int = 0, limit: int = 100):
return {"q": q, "skip": skip, "limit": limit}
@app.get("/items/")
async def read_items(commons: Annotated[dict, Depends(common_parameters)]):
return commons
</code></pre>
<p>However, in my use case, I need to serve an ML model, which will be updated at a recurring cadence (hourly, daily, etc.) The solution from docs (above) depends on a callable function; I believe that <a href="https://github.com/fastapi/fastapi/issues/424#issuecomment-584169213" rel="nofollow noreferrer">it is cached not generated each time</a>. Nonetheless, my use case is not some scaffolding that needs to go up/down with each invocation. But rather, I need a custom class with state. The idea is that the ML model (a class attribute) can be updated scheduled and/or async and the <code>./invocations/</code> method will serve said model, reflecting updates as they occur.</p>
<p>In current state, I use global variables. This works well when my entire application fits on a single script. However, as my application grows, I will be interested in using the router yet I'm concerned that <code>global state</code> will cause failures.</p>
<p>Is there an appropriate way to pass a stateful instance of a class object across methods?</p>
<p>See example class and method</p>
<pre class="lang-py prettyprint-override"><code>class StateManager:
def __init__(self):
self.bucket = os.environ.get("BUCKET_NAME", "artifacts_bucket")
self.s3_model_path = "./model.joblib"
self.local_model_path = './model.joblib'
def get_clients(self):
self.s3 = boto3.client('s3')
def download_model(self):
self.s3.download_file(self.bucket, self.s3_model_path, self.local_model_path)
self.model = joblib.load(self.local_model_path)
...
state = StateManager()
state.download_model()
...
@app.post("/invocations")
def invocations(request: InferenceRequest):
input_data = pd.DataFrame(dict(request), index=[0])
try:
predictions = state.model.predict(input_data)
return JSONResponse({"predictions": predictions.tolist()},
status_code=status.HTTP_200_OK)
except Exception as e:
return JSONResponse({"error": str(e)},
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR)
</code></pre>
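One common pattern, shown here decoupled from FastAPI so it stands on its own: keep a single module-level instance and expose it through a provider function that routers in any file can pass to <code>Depends(...)</code>. FastAPI calls the provider per request, but it always hands back the same mutable object, so background refreshes are visible everywhere. The names below are illustrative, not part of the question's code.

```python
# state.py -- shared singleton plus its provider (illustrative sketch)
class ModelState:
    def __init__(self):
        self.model = None          # loaded/refreshed out of band

    def refresh(self, model):
        self.model = model         # called by a scheduler or startup hook

_state = ModelState()

def get_state() -> ModelState:
    """Provider for Depends(get_state): called per request, but it
    always returns the same mutable instance."""
    return _state
```

A route in any router module would then take <code>state: Annotated[ModelState, Depends(get_state)]</code> and call <code>state.model.predict(...)</code>; no global statement is needed, and the router split does not break state sharing.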
|
<python><fastapi>
|
2024-10-07 17:58:50
| 1
| 1,238
|
jbuddy_13
|
79,062,904
| 2,071,807
|
How to create a subclass with a different type parameter to its parent
|
<p><a href="https://peps.python.org/pep-0695/" rel="nofollow noreferrer">PEP 695</a> introduced a nice way to specify type parameters in a generic class.</p>
<p>But I find the rules around inheritance confusing. I would like to inherit from a class which specifies a type parameter, but replace the parent's type parameter with a different class.</p>
<p>Here is an example:</p>
<pre class="lang-py prettyprint-override"><code>class Foo[T]:
prop: T
def get_prop(self) -> T:
return self.prop
class StringFoo(Foo[str]):
prop: str = "Hello world"
def do_thing(self):
reveal_type(self.get_prop()) # str
class IntFoo(Foo[int]):
prop: int = 42
@property
def is_a_number(self) -> bool:
return True
def do_thing(self):
reveal_type(self.get_prop()) # int
</code></pre>
<p>But if I want another type of <code>Foo</code> which inherits from <code>IntFoo</code> (because I want <code>is_a_number</code> on my new class), I try doing this:</p>
<pre class="lang-py prettyprint-override"><code>class FloatFoo(IntFoo[float]): # this line has the message
def do_thing(self) -> float:
reveal_type(self.get_prop()) # int :(
return 42.0
</code></pre>
<p>But my IDE tells me:</p>
<blockquote>
<p>Expected no type arguments for class "IntFoo"</p>
</blockquote>
<p>I even tried this:</p>
<pre class="lang-py prettyprint-override"><code>class FloatFoo(Foo[float], IntFoo): # there's another message here
prop: float = 42.0
def do_thing(self):
reveal_type(self.get_prop()) # float :)
</code></pre>
<p>But now I get:</p>
<blockquote>
<p>Base classes of FloatFoo are mutually incompatible</p>
</blockquote>
<p>So how can I declare a subclass of <code>IntFoo</code> which has all <code>IntFoo</code>'s methods, but which has a different type parameter?</p>
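A common way out is to keep the shared behaviour generic: <code>IntFoo</code> is already closed over <code>int</code> and cannot be re-parameterised, but a still-generic intermediate class carrying <code>is_a_number</code> can be specialised twice. A sketch using the pre-3.12 <code>TypeVar</code> spelling so it runs on older interpreters; the class names mirror the question, and the restructuring itself is a suggested approach rather than the only one.

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Foo(Generic[T]):
    prop: T
    def get_prop(self) -> T:
        return self.prop

class NumberFoo(Foo[T]):           # still generic: carries the shared methods
    @property
    def is_a_number(self) -> bool:
        return True

class IntFoo(NumberFoo[int]):      # closed over int
    prop: int = 42

class FloatFoo(NumberFoo[float]):  # gets is_a_number AND a float-typed prop
    prop: float = 42.0
```

With this layout <code>FloatFoo().get_prop()</code> is typed as <code>float</code> and <code>is_a_number</code> is inherited, without the mutually incompatible bases error.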
|
<python>
|
2024-10-07 16:57:54
| 0
| 79,775
|
LondonRob
|
79,062,895
| 9,983,652
|
Error of Could not reach host. Are you offline when converting html file into PDF
|
<p>I am attempting to convert an HTML file into a PDF document. I tried multiple methods but none of them proved effective. The HTML document was stored on the U drive of my company's network.</p>
<p>For instance, when I use pyhtml2pdf, I consistently encounter the error displayed below.</p>
<p><a href="https://pypi.org/project/pyhtml2pdf/" rel="nofollow noreferrer">https://pypi.org/project/pyhtml2pdf/</a></p>
<pre><code>from pyhtml2pdf import converter
filepath_html='U:/Data/test.html'
path = os.path.abspath(filepath_html)
converter.convert(f'{path}', 'sample.pdf')
714 # Make the request on the httplib connection object.
--> 715 httplib_response = self._make_request(
716 conn,
717 method,
718 url,
719 timeout=timeout_obj,
720 body=body,
721 headers=headers,
722 chunked=chunked,
723 )
725 # If we're going to release the connection in ``finally:``, then
726 # the response doesn't need to know about the connection. Otherwise
727 # it will also try to release it and we'll have a double-release
728 # mess.
File c:\ProgramData\anaconda_envs\dash3\Lib\site-packages\urllib3\connectionpool.py:404, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
403 try:
--> 404 self._validate_conn(conn)
405 except (SocketTimeout, BaseSSLError) as e:
406 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
File c:\ProgramData\anaconda_envs\dash3\Lib\site-packages\urllib3\connectionpool.py:1060, in HTTPSConnectionPool._validate_conn(self, conn)
...
---> 35 raise exceptions.ConnectionError(f"Could not reach host. Are you offline?")
36 self.validate_response(resp)
37 return resp
ConnectionError: Could not reach host. Are you offline?
</code></pre>
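Two things worth checking: the pyhtml2pdf README's own example passes a <code>file:///</code> URI rather than a bare path, and the library drives headless Chrome, whose driver download needs network access, so a locked-down corporate network can also produce "Could not reach host". A small sketch of building the URI form (the helper name is made up for illustration):

```python
from pathlib import Path

def local_html_uri(filepath):
    """Turn a local HTML path into the file:// URI form that the
    pyhtml2pdf example passes to converter.convert()."""
    return Path(filepath).resolve().as_uri()
```

Usage would then be <code>converter.convert(local_html_uri('U:/Data/test.html'), 'sample.pdf')</code>; if the error persists, it likely comes from the blocked driver download rather than the path.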
|
<python>
|
2024-10-07 16:56:08
| 1
| 4,338
|
roudan
|
79,062,864
| 1,839,674
|
VS Code Pytest won't run individual tests
|
<p>My VS Code instance (1.94.0) can discover the tests (after modifying the settings.json) and it does run all the tests if I press the "Rerun Tests" button on top of the VSCode Testing Pane (beaker icon).</p>
<p>However, using the VS Code Testing Pane (beaker icon), when I drill down to the individual tests, it won't run an individual test, nor run one in debug mode. This used to work and was highly helpful.</p>
<p>here is my settings.json file:</p>
<pre><code> {
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestArgs": [
"${workspaceFolder}",
"--rootdir=${workspaceFolder}",
],
}
</code></pre>
<p>Anyone know how to fix this?</p>
|
<python><visual-studio-code><pytest>
|
2024-10-07 16:48:09
| 0
| 620
|
lr100
|
79,062,809
| 1,966,790
|
Find out whether filesystem supports extended attributes
|
<p>I have python program that calls another (external) program:</p>
<pre><code>cmd = [ "unsquashfs", "-n", "-d", dirname, filename ]
subprocess.check_call( cmd )
</code></pre>
<p>It worked so far. But now I started running this program in /tmp with tmpfs filesystem, which doesn't support xattrs and program subprocess fails with:</p>
<pre><code>write_xattr: failed to write xattr user.random-seed-creditable for file some/file/somewhere because extended attributes are not supported by the destination filesystem
Ignoring xattrs in filesystem
To avoid this error message, specify -no-xattrs
</code></pre>
<p>It very kindly informed me I need to add another param, great. But the program belongs to a different team and the whole company uses it, so I would like to minimize my change and add the param only when needed (when xattrs are not supported) - for all I know, somebody somewhere depends on xattrs being set in the normal case.
So, how do I implement the following?</p>
<pre><code>if not filesystemSupportsXattrs(dirname):
cmd += ['-no-xattrs']
</code></pre>
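One way to implement the check is to probe: write a throwaway user xattr to a temporary file inside the target directory and see whether the filesystem rejects it. A sketch under the assumption of a Linux host (<code>os.setxattr</code> is Linux-only); the probe attribute name is arbitrary.

```python
import errno
import os
import tempfile

def filesystem_supports_xattrs(dirname):
    """Probe dirname by writing a throwaway user xattr to a temp file there.
    Returns False when the filesystem rejects extended attributes."""
    if not hasattr(os, "setxattr"):
        return False  # non-Linux: assume unsupported for this sketch
    with tempfile.NamedTemporaryFile(dir=dirname) as tmp:
        try:
            os.setxattr(tmp.name, b"user.xattr_probe", b"1")
        except OSError as e:
            # ENOTSUP/EOPNOTSUPP: filesystem does not support xattrs
            if e.errno == getattr(errno, "ENOTSUP", None) or \
               e.errno == getattr(errno, "EOPNOTSUPP", None):
                return False
            raise
    return True
```

This matches the shape in the question: <code>if not filesystem_supports_xattrs(dirname): cmd += ['-no-xattrs']</code>, leaving behaviour unchanged on filesystems that do support xattrs.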
|
<python><linux><filesystems><xattr>
|
2024-10-07 16:31:46
| 1
| 3,033
|
MateuszL
|
79,062,615
| 536,262
|
python logger init to `level=20` (INFO) do not show debug if I later change to `level=10` (DEBUG)
|
<p>I changed my logging default level to <code>logging.INFO</code> (20) from <code>logging.DEBUG</code> (10), but then it won't show debug messages if I change the log level to 'DEBUG' later on in the code:</p>
<p>Old config works fine:</p>
<pre class="lang-py prettyprint-override"><code>>>> import mylog
>>> log1 = mylog.mylogger('log1',level=10) # default is DEBUG
added logger log1 of level:10
>>> log1
<Logger log1 (DEBUG)>
>>> log1.debug("works")
20241007163742.800|DEBUG|<stdin>:1|works
>>> log1.setLevel(20)
>>> log1
<Logger log1 (INFO)>
>>> log1.info("works")
20241007163825.042|INFO|<stdin>:1|works
>>> log1.debug("should not show")
>>>
</code></pre>
<p>I can also go back to debug and it works:</p>
<pre><code>>>> log1.setLevel(10)
>>> log1
<Logger log1 (DEBUG)>
>>> log1.debug("works")
20241007164010.113|DEBUG|<stdin>:1|works
</code></pre>
<p>All fine, except default logging is DEBUG which logs a bit too much if I do not change the loglevel.</p>
<p>But if I default to <code>level=20</code> I can't see DEBUG after a <code>log2.setLevel(10)</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> log2 = mylog.mylogger('log2',level=20) # default is INFO
added logger log2 of level:20
>>> log2
<Logger log2 (INFO)>
>>> log2.info("works")
20241007164505.921|INFO|<stdin>:1|works
>>> log2.debug("should not show")
>>> log2.setLevel(10)
>>> log2
<Logger log2 (DEBUG)>
>>> log2.info("works as before")
20241007164628.199|INFO|<stdin>:1|works as before
>>> log2.debug("why is this not showing")
>>>
</code></pre>
<p>my simplified log module <code>mylog.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import logging
def mylogger(name:str, level:int=logging.INFO) -> logging.Logger:
""" default logger """
logger = logging.getLogger(name)
if logger.hasHandlers():
print(f"logger {name} already has handler")
else:
logger.setLevel(level)
console_fmt = "%(asctime)s.%(msecs)03d|%(levelname)s|%(pathname)s:%(lineno)d|%(message)s"
datefmt = "%Y%m%d%H%M%S"
ch = logging.StreamHandler()
ch.setLevel(level)
        ch.setFormatter(logging.Formatter(console_fmt, datefmt))
logger.addHandler(ch)
logger.propagate = False
print(f"added logger {name} of level:{logger.level}")
return logger
</code></pre>
<p>Could it be the level of the <code>StreamHandler()</code>?</p>
<p>How can I check this after it has been set up?</p>
<pre class="lang-py prettyprint-override"><code>>>> loggers = [logging.getLogger(name) for name in logging.root.manager.loggerDict]
>>> loggers
[<Logger log1 (DEBUG)>, <Logger log2 (DEBUG)>]
</code></pre>
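Yes, it is the level of the <code>StreamHandler()</code>: a record must pass both the logger's level and each handler's level, and <code>log2.setLevel(10)</code> only lowers the first gate while the handler stays at 20. A small sketch for inspecting and aligning both levels (helper names are made up for illustration):

```python
import logging

def describe(logger):
    """Report the logger's level plus each attached handler's level."""
    return (logger.level, [h.level for h in logger.handlers])

def set_effective_level(logger, level):
    """Lower both gates: the logger's own level and every handler's level."""
    logger.setLevel(level)
    for handler in logger.handlers:
        handler.setLevel(level)
```

An alternative design is to leave handlers at <code>logging.NOTSET</code> (0) so they emit everything the logger lets through, and control verbosity from the logger level alone.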
|
<python><logging>
|
2024-10-07 15:36:25
| 2
| 3,731
|
MortenB
|
79,062,605
| 898,042
|
how to generate correct output using 2 yield statements in coroutine?
|
<p>I have a variable DICTIONARY, which is a dictionary where the keys are English letters and the values are words that start with the corresponding letter. The initial filling of DICTIONARY looks like this:</p>
<pre><code>DICTIONARY = {
'a': 'apple',
'b': 'banana',
'c': 'cat',
'd': 'dog',
...
}
</code></pre>
<p>I'm trying to write a coroutine called alphabet, which takes letters as input and returns the words associated with the given letter from the DICTIONARY.</p>
<pre><code>Sample Input 1:
coro = alphabet()
next(coro)
print(coro.send('a'))
print(coro.send('b'))
print(coro.send('c'))
Sample Output 1:
apple
banana
cat
Sample Input 2:
coro = alphabet()
next(coro)
for letter in 'qwerty':
print(coro.send(letter))
Sample Output 2:
quail
walrus
elephant
rabbit
tiger
yak
</code></pre>
<p>my code uses 2 yields , 1 is assigned as variable:</p>
<pre><code>def alphabet():
while True:
ch = yield
yield DICTIONARY[ch]
</code></pre>
<p>However, using 2 yield statements in the coroutine always skips one value:</p>
<pre><code>Test input:
coro = alphabet()
next(coro)
print(coro.send('a'))
print(coro.send('b'))
print(coro.send('c'))
Correct output:
apple
banana
cat
Your code output:
apple
None #this is the problem
cat
</code></pre>
<p>I don't know how to deal with the None, and it skips 'b' for banana.</p>
<p>updated:</p>
<pre><code>#still dont really get it how it works
def alphabet(letter='a'):
while True:
letter = yield DICTIONARY[letter]
</code></pre>
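The single-yield version works because one <code>yield</code> expression does both jobs at once: it hands out the previous answer and receives the next letter, so nothing is skipped. A sketch with a small stand-in dictionary (illustrative subset of the question's DICTIONARY), written so the first primed value is explicit:

```python
DICTIONARY = {"a": "apple", "b": "banana", "c": "cat"}  # illustrative subset

def alphabet():
    word = None              # the priming next() yields this None and pauses
    while True:
        letter = yield word  # hand out last word, wait for the next letter
        word = DICTIONARY[letter]
```

With two separate <code>yield</code> statements, every other <code>send()</code> resumes at the bare <code>ch = yield</code>, which yields nothing - hence the alternating <code>None</code> values.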
|
<python><coroutine><yield>
|
2024-10-07 15:34:13
| 1
| 24,573
|
ERJAN
|
79,062,545
| 1,295,422
|
Compute FFT after reading accelerometer data
|
<p>I'd like to detect abnormal vibrations on a motor I have.
I'm using the following devices:</p>
<ul>
<li>Raspberry PI 5</li>
<li>Adafruit accelerometer ISM330DHCX</li>
</ul>
<p>I've successfully managed to read the sensor at a specific rate (2000 Hz) and I'm using its absolute value (although it's not changing that much):</p>
<pre class="lang-py prettyprint-override"><code>def read_sensor():
global data
target_duration = 1 / SAMPLE_RATE
# Connect to sensor
...
while True:
loop_start_time = time.perf_counter()
x, y, z = accelerometer.acceleration
acceleration = np.sqrt(x**2 + y**2 + z**2)
data = np.roll(data, -1)
data[-1] = acceleration
# Try to get exactly SAMPLE_RATE per second
loop_duration = time.perf_counter() - loop_start_time
sleep_time = max(0, target_duration - loop_duration)
end_time = time.perf_counter() + sleep_time
while time.perf_counter() < end_time: pass
</code></pre>
<p>Then, I use <code>numpy.fft</code> to compute the FFT on the above data and save it to a CSV file:</p>
<pre class="lang-py prettyprint-override"><code>n = len(data)
fft = abs(np.fft.fft(data * np.blackman(n)))[:n//2]
amplitudes = np.abs(fft) / n
# Identify max frequencies
max_amplitude = np.max(amplitudes)
max_index = np.argmax(amplitudes)
frequencies = np.fft.fftfreq(n, d=1/n)[:n//2]
# Generate summary
result = {
'mean': round(data.mean(), 4),
'std': round(data.std(), 4),
'max': round(data.max(), 4),
'min': round(data.min(), 4),
'range': round(data.max() - data.min(), 4),
'fft_mean': round(np.mean(fft), 4),
'fft_std': round(np.std(fft), 4),
'max_freq': round(frequencies[max_index], 4),
'max_amp': round(max_amplitude, 4),
'speed': duty_cycle
}
</code></pre>
<p>It produces lines like these ones:</p>
<pre class="lang-none prettyprint-override"><code>mean,std,max,min,range,fft_mean,fft_std,max_freq,max_amp,speed
9.9587,0.1261,10.599,9.5467,1.0523,15.6502,308.3268,0.0,4.1798,47
9.9609,0.1287,10.599,9.5467,1.0523,16.2121,308.5345,0.0,4.1826,47
</code></pre>
<p>I've attached a propeller to the motor and used a piece of soft plastic to hit the propeller.
I know the Nyquist theorem says that I can only detect frequencies below <code>SAMPLE_RATE/2</code> Hz.</p>
<p>My questions are the following:</p>
<ul>
<li>Why isn't the acceleration changing that much when I add the piece of plastic ?</li>
<li>Why is my FFT always showing 0.0 frequency ?</li>
<li>Where did I make a mistake in my code ?</li>
</ul>
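Two details in the posted code are consistent with these symptoms: <code>np.fft.fftfreq(n, d=1/n)</code> passes the wrong sample spacing (it should be <code>d=1/SAMPLE_RATE</code>), and the dominant 0 Hz bin is the DC component, i.e. the roughly 9.8 m/s² gravity offset in the magnitude, which dwarfs any vibration. A hedged sketch of the corrected analysis, demonstrated on a synthetic signal rather than real sensor data:

```python
import numpy as np

def dominant_frequency(data, sample_rate):
    """Return the strongest non-DC frequency: subtract the mean (gravity
    offset) so the 0 Hz bin no longer wins, window, and use the real
    sample spacing in the frequency axis."""
    n = len(data)
    detrended = (np.asarray(data) - np.mean(data)) * np.blackman(n)
    spectrum = np.abs(np.fft.rfft(detrended))
    freqs = np.fft.rfftfreq(n, d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]
```

On real data, subtracting the mean (or analysing each axis separately rather than the magnitude) should make vibration peaks visible; the magnitude barely changes because the vibration is small relative to the constant gravity vector.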
|
<python><linux><numpy><fft><raspberry-pi5>
|
2024-10-07 15:23:28
| 1
| 8,732
|
Manitoba
|
79,062,434
| 11,062,613
|
Conda downgrading NumPy during package update
|
<p>I have a virtual Conda environment named dev that was created using the following YAML file:</p>
<pre><code># *** dev.yml ***
name: dev
channels:
- defaults # Check this channel first
- conda-forge # Fallback to conda-forge if packages are not available in defaults
dependencies:
- python==3.12
- numpy>=2.0.1
# ...more libraries without fixed versions
</code></pre>
<p>The environment was created without any dependency issues, and I successfully have NumPy at version 2.0.1. However, when I try to update the packages using the command:</p>
<pre><code>conda update --all
</code></pre>
<p>I get the following output suggesting that NumPy will be downgraded:</p>
<pre><code>The following packages will be DOWNGRADED:
numpy 2.0.1-py312h2809609_1 --> 1.26.4-py312h2809609_0
numpy-base 2.0.1-py312he1a6c75_1 --> 1.26.4-py312he1a6c75_0
</code></pre>
<p>Why is Conda trying to downgrade NumPy during the update?
I understand that dependencies might conflict during an update, but I thought this would be caught during the initial environment creation which runs without dependency warnings.</p>
<p>Is there a way to conditionally update packages where dependencies are solvable while keeping the current NumPy version?</p>
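One mechanism worth knowing here: conda honours a <code>pinned</code> file inside the environment's <code>conda-meta</code> directory, and specs listed there act as hard constraints during <code>conda update --all</code>. A sketch of the file contents (the environment prefix path is a placeholder):

```
# <env_prefix>/conda-meta/pinned
numpy>=2.0.1
python=3.12
```

With that pin in place, the solver either keeps NumPy at 2.x while updating what it can, or reports an explicit conflict instead of silently downgrading.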
|
<python><numpy><conda>
|
2024-10-07 14:52:56
| 1
| 423
|
Olibarer
|
79,062,292
| 7,500,268
|
TypeError: expected np.ndarray (got numpy.ndarray)
|
<p>I use the code below and it reports an error. This is a very simple example and should not raise an error.</p>
<pre><code>import numpy as np
import torch as th
# Assuming tt is some data (example as list)
tt = [1, 2, 3, 4, 5] # Example data
# Check if tt is a NumPy array, and convert if necessary
if not isinstance(tt, np.ndarray):
tt = np.array(tt)
# Now, convert tt to a PyTorch tensor
tensor_tt = th.from_numpy(tt)
print(tensor_tt)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[26], line 12
9 tt = np.array(tt)
11 # Now, convert tt to a PyTorch tensor
---> 12 tensor_tt = th.from_numpy(tt)
13 print(tensor_tt)
TypeError: expected np.ndarray (got numpy.ndarray)
</code></pre>
<p>I am using the following conda environment:</p>
<pre><code>conda list
# packages in environment at /opt/miniconda3/envs/ethos:
#
# Name Version Build Channel
appnope 0.1.4 pyhd8ed1ab_0 conda-forge
asttokens 2.4.1 pyhd8ed1ab_0 conda-forge
blas 1.0 openblas
bottleneck 1.3.7 py312ha86b861_0
brotli 1.0.9 h80987f9_8
brotli-bin 1.0.9 h80987f9_8
bzip2 1.0.8 h80987f9_6
ca-certificates 2024.9.24 hca03da5_0
click 8.1.7 pypi_0 pypi
colorlog 6.8.2 pypi_0 pypi
comm 0.2.2 pyhd8ed1ab_0 conda-forge
contourpy 1.2.0 py312h48ca7d4_0
cycler 0.11.0 pyhd3eb1b0_0
debugpy 1.6.7 py312h313beb8_0
decorator 5.1.1 pyhd8ed1ab_0 conda-forge
ethos 0.1.0 pypi_0 pypi
exceptiongroup 1.2.2 pyhd8ed1ab_0 conda-forge
executing 2.1.0 pyhd8ed1ab_0 conda-forge
expat 2.6.3 h313beb8_0
filelock 3.16.1 pypi_0 pypi
fonttools 4.51.0 py312h80987f9_0
freetype 2.12.1 h1192e45_0
fsspec 2024.9.0 pypi_0 pypi
h5py 3.12.1 pypi_0 pypi
importlib-metadata 8.5.0 pyha770c72_0 conda-forge
ipykernel 6.29.5 pyh57ce528_0 conda-forge
ipython 8.28.0 pyh707e725_0 conda-forge
jedi 0.19.1 pyhd8ed1ab_0 conda-forge
jinja2 3.1.4 pypi_0 pypi
joblib 1.4.2 pypi_0 pypi
jpeg 9e h80987f9_3
jupyter_client 8.6.3 pyhd8ed1ab_0 conda-forge
jupyter_core 5.7.2 pyh31011fe_1 conda-forge
kiwisolver 1.4.4 py312h313beb8_0
lcms2 2.12 hba8e193_0
lerc 3.0 hc377ac9_0
libbrotlicommon 1.0.9 h80987f9_8
libbrotlidec 1.0.9 h80987f9_8
libbrotlienc 1.0.9 h80987f9_8
libcxx 14.0.6 h848a8c0_0
libdeflate 1.17 h80987f9_1
libffi 3.4.4 hca03da5_1
libgfortran 5.0.0 11_3_0_hca03da5_28
libgfortran5 11.3.0 h009349e_28
libopenblas 0.3.21 h269037a_0
libpng 1.6.39 h80987f9_0
libsodium 1.0.18 h27ca646_1 conda-forge
libtiff 4.5.1 h313beb8_0
libwebp-base 1.3.2 h80987f9_0
llvm-openmp 14.0.6 hc6e5704_0
lz4-c 1.9.4 h313beb8_1
markupsafe 2.1.5 pypi_0 pypi
matplotlib-base 3.9.2 py312h2df2da3_0
matplotlib-inline 0.1.7 pyhd8ed1ab_0 conda-forge
mpmath 1.3.0 pypi_0 pypi
ncurses 6.4 h313beb8_0
nest-asyncio 1.6.0 pyhd8ed1ab_0 conda-forge
networkx 3.3 pypi_0 pypi
numexpr 2.8.7 py312h0f3ea24_0
numpy 2.1.2 pypi_0 pypi
numpy-base 1.26.4 py312he047099_0
openjpeg 2.5.2 h54b8e55_0
openssl 3.3.2 h8359307_0 conda-forge
packaging 24.1 pyhd8ed1ab_0 conda-forge
pandas 2.2.3 pypi_0 pypi
parso 0.8.4 pyhd8ed1ab_0 conda-forge
pexpect 4.9.0 pyhd8ed1ab_0 conda-forge
pickleshare 0.7.5 py_1003 conda-forge
pillow 10.4.0 py312h80987f9_0
pip 24.2 py312hca03da5_0
platformdirs 4.3.6 pyhd8ed1ab_0 conda-forge
prompt-toolkit 3.0.48 pyha770c72_0 conda-forge
psutil 5.9.0 py312h80987f9_0
ptyprocess 0.7.0 pyhd3deb0d_0 conda-forge
pure_eval 0.2.3 pyhd8ed1ab_0 conda-forge
pyarrow 17.0.0 pypi_0 pypi
pygments 2.18.0 pyhd8ed1ab_0 conda-forge
pyparsing 3.1.2 py312hca03da5_0
python 3.12.7 h99e199e_0
python-dateutil 2.9.0.post0 pypi_0 pypi
python-tzdata 2023.3 pyhd3eb1b0_0
pytz 2024.2 pypi_0 pypi
pyzmq 25.1.2 py312h313beb8_0
readline 8.2 h1a28f6b_0
seaborn 0.13.2 pypi_0 pypi
setuptools 75.1.0 py312hca03da5_0
six 1.16.0 pyh6c4a22f_0 conda-forge
sqlite 3.45.3 h80987f9_0
stack_data 0.6.2 pyhd8ed1ab_0 conda-forge
sympy 1.13.3 pypi_0 pypi
tk 8.6.14 h6ba3021_0
torch 2.4.1 pypi_0 pypi
tornado 6.4.1 py312h80987f9_0
tqdm 4.66.5 pypi_0 pypi
traitlets 5.14.3 pyhd8ed1ab_0 conda-forge
typing_extensions 4.12.2 pyha770c72_0 conda-forge
tzdata 2024.2 pypi_0 pypi
unicodedata2 15.1.0 py312h80987f9_0
wcwidth 0.2.13 pyhd8ed1ab_0 conda-forge
wheel 0.44.0 py312hca03da5_0
xz 5.4.6 h80987f9_1
zeromq 4.3.5 h313beb8_0
zipp 3.20.2 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 h18a0788_1
zstd 1.5.5 hd90d995_2
</code></pre>
<p>I am trying to convert np.array data to a tensor in torch; the reported error is confusing, and I do not know whether there is a conflict in the package versions.</p>
<p>When I downgrade numpy, it still reports the same error:</p>
<pre><code>import numpy as np
import torch
print(torch.__version__)
print(np.__version__)
x = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=np.float32)
y = torch.from_numpy(x)
2.4.1
1.26.0
{
"name": "TypeError",
"message": "expected np.ndarray (got numpy.ndarray)",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 6
4 print(np.__version__)
5 x = np.array([[1, 2, 3, 4], [5, 6, 7, 8]], dtype=np.float32)
----> 6 y = torch.from_numpy(x)
TypeError: expected np.ndarray (got numpy.ndarray)"
}
</code></pre>
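The <code>conda list</code> output shows two NumPy installations side by side (<code>numpy 2.1.2</code> from pip and <code>numpy-base 1.26.4</code> from conda), and "expected np.ndarray (got numpy.ndarray)" is the classic symptom of torch and the interpreter resolving different NumPy copies. A quick diagnostic sketch (the helper name is made up):

```python
import numpy as np

def numpy_install_report():
    """Show which numpy the interpreter actually imports; a pip copy
    shadowing a conda copy (or vice versa) shows up in the path."""
    return {"version": np.__version__, "location": np.__file__}
```

If the reported location points at a pip <code>site-packages</code> copy while conda also lists <code>numpy-base</code>, removing the duplicate (e.g. <code>pip uninstall numpy</code>, then installing one version through a single package manager) usually clears the error.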
|
<python><python-3.x><numpy><pytorch>
|
2024-10-07 14:09:24
| 2
| 797
|
Z. Zhang
|
79,062,223
| 19,155,645
|
RAG with Haystack: compiles but returns empty responses
|
<p>My RAG pipeline (using Haystack) compiles and runs, but is returning empty responses.<br>
From my checks, I thought it might be due to the embedding and LLM models not being compatible, so I switched to an embedding model based on the same family as my LLM (both based on Mistral).</p>
<pre><code>mymodel = "occiglot/occiglot-7b-eu5-instruct" # llm model
# embedding_model = "Alibaba-NLP/gte-Qwen2-7B-instruct" # old embedding model
embedding_model = "intfloat/e5-mistral-7b-instruct"
</code></pre>
<p>My relevant imports are as follows:</p>
<pre><code>from haystack import Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.embedders import SentenceTransformersTextEmbedder
from haystack.components.embedders import SentenceTransformersDocumentEmbedder
from haystack.components.generators import HuggingFaceLocalGenerator
</code></pre>
<p>Embedders & generator:</p>
<pre><code>embedder = SentenceTransformersDocumentEmbedder(model=embedding_model)
text_embedder = SentenceTransformersTextEmbedder(model=embedding_model)
generator = HuggingFaceLocalGenerator(model=mymodel)
</code></pre>
<p>indexing pipeline (<code>question</code> is relevant to the uploaded document):</p>
<pre><code>indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component("splitter", DocumentSplitter(split_by="sentence", split_length=2))
indexing_pipeline.add_component("embedder", embedder)
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter.documents", "splitter.documents")
indexing_pipeline.connect("splitter.documents", "embedder.documents")
indexing_pipeline.connect("embedder", "writer")
</code></pre>
<p>query_pipeline:</p>
<pre><code>query_pipeline = Pipeline()
query_pipeline.add_component("text_embedder", text_embedder)
query_pipeline.add_component("retriever", MilvusEmbeddingRetriever(document_store=document_store, top_k=3))
query_pipeline.add_component("prompt_builder", PromptBuilder(template=prompt_template))
query_pipeline.add_component("generator", generator)
query_pipeline.connect("text_embedder.embedding", "retriever.query_embedding")
query_pipeline.connect("retriever.documents", "prompt_builder.documents")
query_pipeline.connect("prompt_builder", "generator")
</code></pre>
<p><code>.run()</code> calls:</p>
<pre><code>indexing_pipeline.run({
    "converter": {"sources": [file_path]},
})

results = query_pipeline.run({
    "text_embedder": {"text": question},
})

print("RAG answer:", results["generator"]["replies"][0])
</code></pre>
<p>The output is simply: <code>RAG answer: </code>
Additionally, when loading the embedder the following line is shown: <code>Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.</code></p>
<p>I'm not sure if the issue is in the models I picked, or in my pipeline (maybe using the document and text embedders together causes an issue?)<br>
I'm happy to get any advice or help with this.</p>
<hr />
<p>EDIT 1: Following <a href="https://stackoverflow.com/users/10883094/stefano-fiorucci-anakin87">@Stefano-Fiorucci</a>'s suggestions, I made the following changes:</p>
<ol>
<li>separated the pipeline into an indexing pipeline and a query pipeline (see the code above).</li>
<li>here is the code for the prompt; maybe the issue is here:</li>
</ol>
<pre><code>prompt_template = """Answer the following query based on the provided context. If the context does
not include an answer, reply with 'I don't know'.\n
Query: {{query}}
Documents:
{% for doc in documents %}
{{ doc.content }}
{% endfor %}
Answer:
"""
</code></pre>
<ol start="3">
<li>tested also with <code>Alibaba-NLP/gte-large-en-v1.5</code></li>
</ol>
<p>But unfortunately, I still get an empty string from the system.</p>
<p>the complete output for <code>results</code> (i.e. <code>query_pipeline.run({"text_embedder": {"text": question},})</code>) is:<br>
<code>{'generator': {'replies': ['\n']}}</code></p>
|
<python><huggingface-transformers><embedding><rag><haystack>
|
2024-10-07 13:49:25
| 0
| 512
|
ArieAI
|
79,061,819
| 4,461,051
|
Check if any value in a Polars DataFrame is True
|
<p>This is quite a simple ask, but I can't seem to find any clear, simple solution to it; it feels like I'm missing something.</p>
<p>Let's say I have a DataFrame of type</p>
<pre><code>df = pl.from_repr("""
┌───────┬───────┬───────┐
│ a ┆ b ┆ c │
│ --- ┆ --- ┆ --- │
│ bool ┆ bool ┆ bool │
╞═══════╪═══════╪═══════╡
│ false ┆ true ┆ false │
│ false ┆ false ┆ false │
│ false ┆ false ┆ false │
└───────┴───────┴───────┘
""")
</code></pre>
<p>How do I do a simple check if any of the values in the DataFrame is True?
Some solutions I have found are</p>
<pre><code>selection = df.select(pl.all().any(ignore_nulls=True))
</code></pre>
<p>or</p>
<pre><code>selection = df.filter(pl.any_horizontal())
</code></pre>
<p>and then check in that row</p>
<pre><code>any(selection.row(0))
</code></pre>
<p>It just seems like so many steps for a single check.</p>
|
<python><dataframe><data-science><python-polars>
|
2024-10-07 11:49:38
| 3
| 746
|
Jerry
|
79,061,810
| 11,703,652
|
Sync pl.StringCache / Categorical encoding across machines in polars
|
<p>I need polars <code>Categorical</code> data type to have the same physical representation across different machines. On one machine, I can use <code>pl.StringCache</code> to get the same physical representation:</p>
<pre class="lang-py prettyprint-override"><code>with pl.StringCache():
    s1 = pl.Series("color", ["red", "green", "red"], dtype=pl.Categorical)
    s2 = pl.Series("color", ["blue", "red", "green"], dtype=pl.Categorical)
</code></pre>
<p>But I need to replicate the same physical encoding on a totally different machine:</p>
<pre><code>with pl.StringCache():
    s1 = pl.Series("color", ["red", "green", "red"], dtype=pl.Categorical)
    # Somehow save the encoding here

# On a different machine
with pl.StringCache():  # Somehow load the encoding here
    s2 = pl.Series("color", ["blue", "red", "green"], dtype=pl.Categorical)
</code></pre>
<p>It seems to be not possible at the time, but are there any workarounds?</p>
<p>My use case is machine learning. I'm training a model on categorical data on one machine, and I need the encoding for inference on different machines. While I could technically use a Scikit learn label encoder, I would like to stay in polars.</p>
|
<python><python-polars><categorical>
|
2024-10-07 11:46:21
| 1
| 361
|
McToel
|
79,061,645
| 9,773,920
|
Box-api get_items() folders fetch limits to 15 folders
|
<p>I am trying to fetch a specific folder, named in the <code>target_folder_name</code> variable, from my Box account using get_items in the Box SDK. However, my code below fetches only the first 15 folder names and stops right after. Pagination is not working as expected.</p>
<pre><code>def lambda_handler(event, context):
    box_client = get_box_jwt_client()
    target_folder_name = "my_test_folder"
    box_folder_id = get_box_folder_id(box_client, target_folder_name)
    if box_folder_id:
        print(f"Found folder ID: {box_folder_id}")
    else:
        print(f"Folder '{target_folder_name}' not found")

def get_box_folder_id(box_client, target_folder_name, root_folder_id="0", limit=200, offset=0):
    """
    Recursively fetches items from a Box folder in batches and returns the folder ID of the target folder.
    """
    items = box_client.folder(root_folder_id).get_items(limit=limit, offset=offset)
    for item in items:
        if item.name == target_folder_name and item.type == 'folder':
            print(f"Folder '{target_folder_name}' found with ID: {item.id}")
            return item.id  # Return the folder ID as soon as it is found
    if len(list(items)) < limit:
        return None
    # Fetch the next batch by updating the offset
    return get_box_folder_id(box_client, target_folder_name, root_folder_id, limit, offset + limit)
</code></pre>
<p>How to fix the file fetch limit issue?</p>
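<p>As a side note, the pagination logic can be isolated from the Box SDK (the <code>folder</code>/<code>get_items</code> names in the comment below are taken from the question and untested here). One likely culprit in the code above is that <code>items</code> is consumed twice, once by the <code>for</code> loop and once by <code>len(list(items))</code>, which exhausts the iterator. A generic offset-pagination sketch:</p>

```python
def paginate(fetch_page, limit=200):
    """Yield items from an offset-paginated endpoint until a short page appears.

    fetch_page(limit, offset) must return a list-like page of items.
    """
    offset = 0
    while True:
        page = list(fetch_page(limit, offset))
        yield from page
        if len(page) < limit:  # last (possibly empty) page
            return
        offset += limit

# Hypothetical wiring to the Box SDK names used in the question:
# items = paginate(lambda limit, offset:
#                  box_client.folder("0").get_items(limit=limit, offset=offset))
# folder_id = next((i.id for i in items
#                   if i.name == target_folder_name and i.type == "folder"), None)
```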
|
<python><sdk><box-api><box>
|
2024-10-07 10:49:36
| 0
| 1,619
|
Rick
|
79,061,618
| 10,566,155
|
Fuzzy Join on Venue Names Based on City
|
<p>I am working with PySpark and need to join two datasets based on the city and a fuzzy matching condition on the venue names. The first dataset contains information about stadiums including a unique venue_id, while the second dataset, which I receive periodically, only includes venue names and cities without the venue_id.</p>
<p>I want to join these datasets to match the venue_name from the incoming data to the existing dataset using fuzzy logic (since the names are not always written identically), and then pull the corresponding venue_id.</p>
<p>Existing Dataset (df_stadium_information):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>venue_name</th>
<th>city</th>
<th>venue_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sree Kanteerava Stadium</td>
<td>Bengaluru</td>
<td>1</td>
</tr>
<tr>
<td>Sree Kanteerava Stadium</td>
<td>Kochi</td>
<td>2</td>
</tr>
<tr>
<td>Eden Gardens</td>
<td>Kolkata</td>
<td>3</td>
</tr>
<tr>
<td>Narendra Modi Stadium</td>
<td>Ahmedabad</td>
<td>4</td>
</tr>
</tbody>
</table></div>
<p>Incoming Data (df_new_stadium_data):</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>venue_name</th>
<th>city</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sri Kanteerava Indoor Stadium</td>
<td>Bengaluru</td>
</tr>
<tr>
<td>Eden Gardens</td>
<td>Kolkata</td>
</tr>
</tbody>
</table></div>
<p>Desired Output:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>venue_name</th>
<th>city</th>
<th>venue_id</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sri Kanteerava Indoor Stadium</td>
<td>Bengaluru</td>
<td>1</td>
</tr>
<tr>
<td>Eden Gardens</td>
<td>Kolkata</td>
<td>null</td>
</tr>
</tbody>
</table></div>
<p>I want the output to show the venue_id from df_stadium_information if there is a fuzzy match on venue_name and an exact match on city. If there's no fuzzy match, the venue_id should be null.</p>
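<p>Before wiring this into Spark (for example via <code>F.levenshtein</code> or a Python UDF), the matching rule itself can be prototyped in plain Python with the standard library. The 0.75 similarity threshold below is an assumption to tune against real data:</p>

```python
from difflib import SequenceMatcher

def fuzzy_venue_id(venue_name, city, known, threshold=0.75):
    """Return the venue_id of the best fuzzy name match in the same city, else None."""
    best_id, best_score = None, threshold
    for row in known:
        if row["city"] != city:  # city must match exactly
            continue
        score = SequenceMatcher(None, venue_name.lower(), row["venue_name"].lower()).ratio()
        if score >= best_score:
            best_id, best_score = row["venue_id"], score
    return best_id

stadiums = [
    {"venue_name": "Sree Kanteerava Stadium", "city": "Bengaluru", "venue_id": 1},
    {"venue_name": "Eden Gardens", "city": "Kolkata", "venue_id": 3},
]
print(fuzzy_venue_id("Sri Kanteerava Indoor Stadium", "Bengaluru", stadiums))
```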
|
<python><sql><pyspark><apache-spark-sql>
|
2024-10-07 10:39:48
| 1
| 329
|
Harshit Mahajan
|
79,061,520
| 12,350,600
|
How get PDF/A compliance with fpdf2
|
<p>I am able to generate a PDF with fpdf2, but I want it to show up with the PDF/A compliance tag in Adobe Acrobat Reader. Basically I want the below text to appear when the PDF is opened.</p>
<p><a href="https://i.sstatic.net/65PEgleB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65PEgleB.png" alt="PDF/A Text" /></a></p>
<p>Do I need to sign the document to make this PDF/A compliant?
Something like:</p>
<p><code>pdf.sign_pkcs12("certs.p12", password=b"1234")</code></p>
|
<python><python-3.x><pdf><fpdf2>
|
2024-10-07 10:09:18
| 1
| 394
|
Kruti Deepan Panda
|
79,061,506
| 525,229
|
How can you verify two tar.gz files are identical
|
<p>I am making a sharing protocol, and when you share a folder it gets tar.gz-ipped and inserted in a folder.</p>
<p>It's created like this:</p>
<pre><code>with tarfile.open(full_data_name, "w:gz", format=GNU_FORMAT) as tar_handle:
    ...
    tar_handle.add(file_path)
</code></pre>
<p>When you do that again, I'd like to check whether the new tar.gz is identical to the old one (so I do not need to re-publish it).</p>
<p>I know about pkgdiff and that works fine, but I'd like to do it in python.</p>
<p>I also know I can do it manually: un-gzip and un-tar the files, load up the contents and compare byte-wise. But isn't there some simpler and less resource-hungry method?</p>
<p>I have tried to just compare the contents of the tar.gz files (removing the timestamp at bytes 4-7), but that only works sometimes, so I guess there is some random reshuffling in the tar part or some randomness in the gz: pkgdiff says they are the same, but a hex editor shows lots of differences.</p>
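<p>One timestamp-insensitive approach (a sketch, not a full archive diff): stream the members of each archive and hash names plus file contents, deliberately ignoring the gzip header mtime and per-member tar metadata, without extracting anything to disk:</p>

```python
import hashlib
import tarfile

def tar_content_digest(path):
    """Hash member names and file contents of a .tar.gz, ignoring timestamps."""
    digest = hashlib.sha256()
    with tarfile.open(path, "r:gz") as tar:
        # Sort by name so member ordering inside the archive does not matter
        for member in sorted(tar.getmembers(), key=lambda m: m.name):
            digest.update(member.name.encode())
            if member.isfile():
                f = tar.extractfile(member)
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    digest.update(chunk)
    return digest.hexdigest()

# identical = tar_content_digest("old.tar.gz") == tar_content_digest("new.tar.gz")
```

<p>This reads the decompressed streams member by member, so it sidesteps both the gzip mtime field and any member reordering, at the cost of decompressing (but not extracting) both archives.</p>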
|
<python><python-3.x><gzip><tar>
|
2024-10-07 10:04:38
| 1
| 2,969
|
Valmond
|
79,061,437
| 151,829
|
Verifying Amazon SNS signatures in Python
|
<p>I am struggling to get a successful signature validation when I send a test message through <code>complaint@simulator.amazonses.com</code>. Below is my current relevant Python code after some trial and error. But I always get an InvalidSignature error on <code>signing_cert.public_key().verify</code>. Any ideas on what I am doing wrong?</p>
<pre><code>from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def create_string_to_sign(payload):
    string_to_sign = (
        f"""Message:\n{payload['Message']}\nMessageId:\n{payload['MessageId']}\n"""
    )
    # 'Subject' is optional, add it only if present in the payload
    if "Subject" in payload:
        string_to_sign += f"Subject:\n{payload['Subject']}\n"
    string_to_sign += f"""Timestamp:\n{payload['Timestamp']}\nTopicArn:\n{payload['TopicArn']}\nType:\n{payload['Type']}\n"""
    # Add 'UnsubscribeURL' only if present
    if "UnsubscribeURL" in payload:
        string_to_sign += f"UnsubscribeURL:\n{payload['UnsubscribeURL']}\n"
    return string_to_sign

class AmazonSNSSESWebhookView(WebhookView):
    """
    Validate and process webhook events from Amazon SNS for Amazon SES spam complaints.
    """

    def validate(self):
        """
        Sample payload from Amazon SNS
        {
            "Type" : "Notification",
            "MessageId" : "1c2a7465-1f6b-43a2-b92f-0f24b9c7f3c5",
            "TopicArn" : "arn:aws:sns:us-east-1:123456789012:SES_SpamComplaints",
            "Message" : "{\"notificationType\":\"Complaint\",\"complaint\":{\"complainedRecipients\":[{\"emailAddress\":\"example@example.com\"}],\"complaintFeedbackType\":\"abuse\",\"arrivalDate\":\"2024-09-25T14:00:00.000Z\"},\"mail\":{\"timestamp\":\"2024-09-25T13:59:48.000Z\",\"source\":\"sender@example.com\",\"messageId\":\"1234567890\"}}",
            "Timestamp" : "2024-09-25T14:00:00.000Z",
            "SignatureVersion" : "1",
            "Signature" : "...",
            "SigningCertURL" : "https://sns.us-east-1.amazonaws.com/SimpleNotificationService.pem",
            "UnsubscribeURL" : "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe"
        }
        """
        payload = json.loads(self.request.body)

        # Ensure that the sender is actually Amazon SNS
        signing_cert_url = payload["SigningCertURL"]
        if not signing_cert_url.startswith(
            f"https://sns.{settings.AWS_SES_REGION_NAME}.amazonaws.com/"
        ):
            return False

        # We need to handle the subscription confirmation
        if payload["Type"] == "SubscriptionConfirmation":
            response = requests.get(payload["SubscribeURL"])
            if response.status_code != 200:
                logger.error(
                    f"Failed to confirm Amazon SNS subscription. Payload: {payload}"
                )
                return False
            else:
                return True

        # Message Type: Ensure that you only process SNS messages of type Notification.
        # There are other message types (e.g., SubscriptionConfirmation and UnsubscribeConfirmation),
        # which you may want to handle separately. For SubscriptionConfirmation,
        # you should respond to confirm the subscription.
        if payload["Type"] != "Notification":
            return False

        # Check that the message is recent, protect against replay attacks
        # Ignore if it is old
        sns_datetime = parser.parse(payload["Timestamp"])
        current_datetime = datetime.now(timezone.utc)
        time_window = timedelta(minutes=5)
        if sns_datetime < current_datetime - time_window:
            return False

        # Retrieve the certificate.
        signing_cert = x509.load_pem_x509_certificate(
            requests.get(signing_cert_url).content
        )
        decoded_signature = base64.b64decode(payload["Signature"])
        signature_hash = (
            hashes.SHA1() if payload["SignatureVersion"] == "1" else hashes.SHA256()
        )

        # Sign the string.
        string_to_sign = create_string_to_sign(payload)
        return signing_cert.public_key().verify(
            decoded_signature,
            string_to_sign.encode("UTF-8"),
            padding=padding.PKCS1v15(),
            algorithm=signature_hash,
        )
</code></pre>
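<p>Two things worth checking against the AWS documentation (this is my reading of the docs, so verify independently): the canonical string for a <code>Notification</code> is built from <code>Message</code>, <code>MessageId</code>, <code>Subject</code> (only if present), <code>Timestamp</code>, <code>TopicArn</code> and <code>Type</code>, each as the bare key followed by a newline, the value, and a newline, with no colon after the key; and <code>UnsubscribeURL</code> is not part of the signed string at all. A sketch of the builder:</p>

```python
def build_string_to_sign(payload):
    """Canonical string for a SignatureVersion 1 SNS Notification.

    Field order per the SNS docs: Message, MessageId, Subject (only if
    present), Timestamp, TopicArn, Type. UnsubscribeURL is NOT signed.
    """
    keys = ["Message", "MessageId"]
    if "Subject" in payload:
        keys.append("Subject")
    keys += ["Timestamp", "TopicArn", "Type"]
    return "".join(f"{key}\n{payload[key]}\n" for key in keys)
```

<p>Also note that <code>public_key().verify(...)</code> in the <code>cryptography</code> library returns <code>None</code> on success and raises <code>InvalidSignature</code> on failure, so returning its result will never yield <code>True</code>; wrap it in a try/except instead.</p>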
|
<python><amazon-web-services><cryptography><amazon-sns>
|
2024-10-07 09:44:24
| 0
| 16,763
|
Botond Béres
|
79,061,128
| 6,111,772
|
python: class with global variables
|
<p>Of course there should be no global variables - vars should be passed with function calls. My actual implementation needs lots of variables which are used in different "xxx.py" modules. So I tried to make my own class containing these variables and put it in a separate module named e.g. "vars.py":</p>
<pre><code>class GlobalVariables():
    def __init__(self):
        self.a = 42
        self.e = 2.71828
        self.c = ["good", "music"]

# ---> should I place:
gv = GlobalVariables()  # (1) here
</code></pre>
<p>The main program would then look like this in case (1):</p>
<pre><code>from vars import gv

def main():
    gv.a = 73
    ...
</code></pre>
<p>and like this in case (2):</p>
<pre><code>from vars import GlobalVariables

gv = GlobalVariables()  # (2) in the main program

def main():
    gv.b = 2.71828
    ...
</code></pre>
<p>I prefer the first method, since gv is used in many separate modules. If gv were instead defined in each module separately, each module would certainly end up with its own distinct gv instance.</p>
<p>Reading lots of instructions and posts about classes and global variables I got confused which way is applicable - or is there a completely different approach to this problem?</p>
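<p>For what it's worth, option (1) is the usual singleton-module pattern: Python caches imported modules in <code>sys.modules</code>, so every <code>from vars import gv</code> hands back the very same instance. A single-file sketch of the idea (the second import is simulated by an alias):</p>

```python
# vars.py (module layout from the question)
class GlobalVariables:
    def __init__(self):
        self.a = 42
        self.e = 2.71828
        self.c = ["good", "music"]

gv = GlobalVariables()  # created exactly once, at first import

# Any other module would do `from vars import gv`. Because Python caches
# modules in sys.modules, every importer receives this same instance;
# the alias below stands in for such a second import:
gv2 = gv
gv2.a = 73
print(gv.a)  # 73 - the change is visible to every module sharing gv
```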
|
<python><module><global-variables>
|
2024-10-07 08:08:52
| 1
| 441
|
peets
|
79,061,095
| 5,378,816
|
Using Annotated[...] in a type statement
|
<p>The <code>type</code> statement is relatively new (3.12+) and its support in <code>mypy</code> is experimental:</p>
<blockquote>
<p>PEP 695 type aliases are not yet supported. Use
--enable-incomplete-feature=NewGenericSyntax for experimental support</p>
</blockquote>
<p>Here is a short program where mypy (with the feature enabled) does not like the first two <code>type DataAs.... = ...</code> lines:</p>
<pre><code>from typing import Annotated
from pydantic import AfterValidator
def validate_anything(data):
    return data

type DataAsDict = Annotated[dict[str, str], AfterValidator[validate_anything]]
type DataAsList = Annotated[list[str], AfterValidator[validate_anything]]
type DataType = DataAsDict | DataAsList

def foo(data: DataType) -> None:
    pass
</code></pre>
<p>The error is (two occurences):</p>
<blockquote>
<p>error: The type "type[AfterValidator]" is not generic and not indexable [misc]</p>
</blockquote>
<p>Removing the <code>type</code> keyword solves the problem:</p>
<pre><code>DataAsDict = Annotated[dict[str, str], AfterValidator[validate_anything]]
DataAsList = Annotated[list[str], AfterValidator[validate_anything]]
type DataType = DataAsDict | DataAsList
</code></pre>
<p>My relationship with typing is experimental too, and I'm confused. Does the original code have incorrect typing, i.e. was mypy right when it detected those errors? Or is the experimental feature not usable for this case yet (that's OK, I was warned)?</p>
|
<python><python-typing>
|
2024-10-07 08:00:50
| 0
| 17,998
|
VPfB
|
79,060,768
| 13,560,598
|
stacking rugplots in seaborn
|
<p>Is there a way to stack rugplots in Seaborn as in <a href="https://d2mvzyuse3lwjc.cloudfront.net/images/WikiWeb/Rug_Plot/Distribution_Curve_with_Rug.png?v=10549" rel="nofollow noreferrer">here</a>, but below the x-axis? I have not found anything in the rugplot <a href="https://seaborn.pydata.org/generated/seaborn.rugplot.html" rel="nofollow noreferrer">documentation</a>.</p>
|
<python><matplotlib><seaborn>
|
2024-10-07 06:06:28
| 1
| 593
|
NNN
|
79,060,606
| 15,002,748
|
DeprecationWarning: The `ipykernel.comm.Comm` class has been deprecated. Please use the `comm` module instead
|
<p>I'm trying to build recommendation engine using Azure Databricks notebook by using the algorithm from mlxtend library. I'm able to train the model successfully. However, there is a warning during training process as shown in the screenshot below:</p>
<p><a href="https://i.sstatic.net/IxLkGpVW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IxLkGpVW.png" alt="enter image description here" /></a></p>
<p>I have tried to upgrade <code>ipykernel ipywidgets</code> through <code>%pip install --upgrade ipykernel ipywidgets</code>, but the warning is still there. May I know what I should do to avoid this warning? Thank you.</p>
|
<python><jupyter-notebook><azure-databricks><ipywidgets><deprecation-warning>
|
2024-10-07 04:41:44
| 1
| 1,127
|
weizer
|
79,060,390
| 1,641,112
|
Where is the documentation for how pytest implements conftest.py files?
|
<p>I'm new to Python. Been playing with pytest and I'm ready to tear my hair out. As best I can tell, an older version of pytest, 7.2.2, searches up the directory tree and merges in conftest.py files. In 8.3.3, it only seems to find conftest.py if it's in the same directory I run the tests in.</p>
<p>I can't find any official documentation that spells out how pytest implements these files. This is the best I could find and it wasn't very helpful: <a href="https://docs.pytest.org/en/8.3.x/how-to/plugins.html#requiring-loading-plugins-in-a-test-module-or-conftest-file" rel="nofollow noreferrer">https://docs.pytest.org/en/8.3.x/how-to/plugins.html#requiring-loading-plugins-in-a-test-module-or-conftest-file</a></p>
<p>Here's output from 7.2.2:</p>
<pre><code>> $ pytest test_simple_module.py -s [±main ●]
CONFTEST ../../ LOADED
CONFTEST .. LOADED
CONFTEST . LOADED
======================================================== test session starts =========================================================
platform darwin -- Python 3.11.10, pytest-7.2.2, pluggy-1.0.0
rootdir: /Users/steve/python/tests/TestUtils/PytestWrapper/simple_module
plugins: anyio-3.7.1
collecting ...
Collected test items:
</code></pre>
<p>Here's output from 8.3.3:</p>
<pre><code>(test_env) > $ pytest -s test_simple_module.py [±main ●]
CONFTEST . LOADED
========================================================== test session starts ===========================================================
platform darwin -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0
rootdir: /Users/steve/python/tests/TestUtils/PytestWrapper/simple_module
collected 7 items
</code></pre>
<p>Notice the key difference: conftest.py files from directories higher up are loaded in 7.2.2, but only the conftest.py from the current directory in 8.3.3.</p>
|
<python><pytest>
|
2024-10-07 01:43:18
| 0
| 7,553
|
StevieD
|
79,060,340
| 19,090,490
|
Elegant way to add a list view test case in drf
|
<p>I have a question about writing test code for a list view.</p>
<p>A DRF list view responds with multiple records.</p>
<p>What should I do if the response.data objects come from a model with many fields?</p>
<pre><code>response.data --> [{"id": 1, "field1": "xxx", "field2": "yyy"....}, {"id": 2, ...}, ...]
</code></pre>
<p>Too much input is required to compare against static data when a response returns a lot of field data.</p>
<pre><code>def test_xx(self):
    self.assertEqual(response.data, [{"id": 1, "field1": "xxx", ..., "field12": "ddd", ...}, ...])
</code></pre>
<p>Currently, the way I came up with is as follows.</p>
<p>I created an example model called Dog.</p>
<pre><code>class Dog(models.Model):
    ...
    hungry = models.BooleanField(default=False)
    hurt = models.BooleanField(default=False)
</code></pre>
<p>As an example, let's say there is a class called Dog and there is logic in a view to get a list of hungry dogs.
I'm going to write a test for that logic.</p>
<pre><code>class ...
    def test_dog_hungry_list(self):
        response = client.get("/api/dog/hungry/")
        expected_response = Dog.objects.filter(hungry=True)
        self.assertEqual(response.data, expected_response)
</code></pre>
<p>The two values that I want to compare with assertEqual, response and expected_response, are of different types.</p>
<ul>
<li>response --> Response</li>
<li>expected_response --> Queryset</li>
</ul>
<p>In the case of response.data, a serializer has been applied, so it's in dict form, and I want to compare it with the QuerySet-typed expected_response.</p>
<p>I thought of two ways as follows.</p>
<ol>
<li>Apply serializer to Queryset for comparison. --> (Serializer(expected_response, many=True))</li>
<li>Compare the primary key values.</li>
</ol>
<p><strong>number 2 example.</strong></p>
<pre><code>expected_response = Dog.objects.filter(hungry=True).values("id")
self.assertEqual(
    [data["id"] for data in response.data],
    [data["id"] for data in expected_response]
)
</code></pre>
<p>But I'm not sure whether either of these methods is a good way to compare.</p>
<p>Is there a smart way to compare the data when the response returns two or three records in the form of large dicts?</p>
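<p>One way to keep the primary-key comparison (option 2) readable is a small order-insensitive helper; the Django calls in the comment below use the question's names and are only a sketch:</p>

```python
def extract_ids(rows, key="id"):
    """Pull primary keys out of serialized response data (a list of dicts)."""
    return sorted(row[key] for row in rows)

# In a Django test this might be used as (names from the question, untested here):
#   expected_ids = sorted(Dog.objects.filter(hungry=True)
#                         .values_list("id", flat=True))
#   self.assertEqual(extract_ids(response.data), expected_ids)
```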
|
<python><django><django-rest-framework><django-tests>
|
2024-10-07 00:55:36
| 1
| 571
|
Antoliny Lee
|
79,060,140
| 13,279,557
|
Does Python allow defining a C function that a shared library can call directly?
|
<p>I figured out how to let Python call a function from my shared library:</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
def bar():
    return 1337
dll = ctypes.PyDLL("./mod.so")
assert dll.foo(ctypes.c_float(42)) == 42 + 1337
</code></pre>
<p>where <code>mod.so</code> can call that <code>bar</code> Python function like so in C:</p>
<pre class="lang-c prettyprint-override"><code>int foo(float f) {
    PyObject *module = PyImport_ImportModule("main");
    PyObject *bar = PyObject_GetAttrString(module, "bar");
    PyObject *args = PyTuple_Pack(0);
    PyObject *result = PyObject_CallObject(bar, args);
    return f + PyLong_AsInt(result);
}
</code></pre>
<p>I'm essentially looking for a way to let the C <code>foo()</code> function call a function in the Python code directly, without needing anything Python specific like <code>PyObject_CallObject()</code>. This is what I'm trying to turn the C file into:</p>
<pre class="lang-c prettyprint-override"><code>int foo(float f) {
    return f + bar();
}
</code></pre>
<h3>Context</h3>
<p>My programming language (see <a href="https://mynameistrez.github.io/2024/02/29/creating-the-perfect-modding-language.html" rel="nofollow noreferrer">my blog post</a>) its compiler+linker outputs a generic <code>mod.so</code> that is supposed to be usable by <em>any</em> programming language, so not just Python.</p>
<p>This <code>mod.so</code> works out-of-the-box for C and C++, because the shared library simply uses a <code>call</code> instruction to call the main executable's <code>bar</code> function. Python (and most other high-level languages) on the other hand don't allow you to call a Python function like this directly. This makes sense, as Python wraps primitive C types in custom structs. This is why we needed the <code>PyObject_CallObject()</code> call in the above code.</p>
<p>I <em>could</em> write a program that replaces all of the <code>call</code> instructions with the chain of <code>PyImport_ImportModule()</code> + <code>PyObject_GetAttrString()</code> + etc. calls, but I'd prefer leaving the <code>mod.so</code> alone for technical reasons.</p>
<p>What I got working is putting an <code>adapter.c</code> as a middleman between the Python code and the <code>mod.so</code>. <code>adapter.so</code> just exists to expose a <code>bar</code> wrapper function that <code>mod.so</code> can call directly, containing the <code>PyObject_CallObject</code> and co. calls. See <a href="https://stackoverflow.com/q/79043309/13279557">my previous Stack Overflow question</a> for the rough code.</p>
<p>Because my programming language already requires the game developer to document the mod API in <code>mod_api.json</code>, it is possible to regenerate this <code>adapter.c</code> file automatically, but ideally this step wouldn't be required at all.</p>
<h3>Looping back to the post's title</h3>
<p>Does Python allow defining a C function that a shared library can call directly? If so, then I wouldn't have to worry about modifying <code>mod.so</code> nor having an <code>adapter.c</code>. Thanks.</p>
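<p>One mechanism that may help: <code>ctypes.CFUNCTYPE</code> turns a Python function into a genuine C function pointer, which a shared library can invoke with a plain indirect call, no <code>PyObject_CallObject</code> needed on the C side. The library would then have to call through a pointer (e.g. an exported <code>int (*bar_ptr)(void)</code> variable it lets the host fill in) rather than a fixed <code>bar</code> symbol, so this is a sketch of the idea, not a drop-in fix:</p>

```python
import ctypes

def bar():
    return 1337

# A real, C-callable function pointer wrapping the Python function.
# Keep a reference alive for as long as C code might call it.
BARFUNC = ctypes.CFUNCTYPE(ctypes.c_int)
c_bar = BARFUNC(bar)

# Raw address that could be handed to the shared library and stored
# into an exported function-pointer variable:
addr = ctypes.cast(c_bar, ctypes.c_void_p).value
print(c_bar())  # 1337 - the call goes through a C-level thunk
```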
|
<python><c><shared-libraries>
|
2024-10-06 21:48:36
| 1
| 672
|
MyNameIsTrez
|
79,060,070
| 12,011,020
|
What to return in a preprocessing function for optimal performance
|
<p>My starting point is a large polars LazyFrame.</p>
<p>I have written a function (<code>def preprocessing(df: pl.LazyFrame)</code>) that has to take in a <code>pl.LazyFrame</code> (it can unfortunately not be a <code>pl.Expr</code>).<br />
The function does a long chain of operations on different columns of the LazyFrame. The final output/return of the preprocessing function is only one final column that is to be appended to the starting/incoming LazyFrame.</p>
<p>What would now be the best/optimized return type for the query planner to be integrated into the starting data frame? Should I just do a <code>.collect().to_series()</code> and append it via <code>.with_columns()</code>? But that would require a collect and thus sacrifice potential for optimization. Is there a way around returning a series and collecting?</p>
<p>Unfortunately I can not share the code. However here is pseudo/mock up code:</p>
<pre class="lang-py prettyprint-override"><code>def preprocessing(
    df: pl.LazyFrame | pl.DataFrame,
) -> pl.Series:  # ? tbd return type
    return (
        df.lazy()
        .with_columns(
            [
                pl.col("some_col")
                .cast(pl.Int64)
                .cast(pl.Utf8)
                .str.pad_start(5, "0")
                .alias("some_col_preprocessed"),
                pl.col("other_col")
                .cast(pl.Int64)
                .cast(pl.Utf8)
                .str.pad_start(7, "0")
                .alias("other_col_preprocessed"),
            ],
        )
        # ... more steps
        .select(
            pl.concat_str(
                [
                    pl.col("some_col_preprocessed"),
                    pl.col("other_col_preprocessed"),
                    pl.lit("42133723"),
                ],
            )
        )
        .collect()
        .to_series()
    )

starting_df = starting_df.with_columns(final_col=preprocessing(starting_df))
</code></pre>
<h2>Downsides of returning series</h2>
<p>I just realized a major downside of the current approach (<code>.collect().to_series()</code>): it could put me in danger if one of the steps changes the row order, or if I forget a <code>maintain_order</code> argument.</p>
<h2>Other Alternatives</h2>
<ul>
<li>Finding an ID column and using a join</li>
<li>Using the <code>with_columns</code> expression context and just passing in, processing and returning the complete lazyframe, without even calling <code>.collect()</code></li>
</ul>
|
<python><performance><python-polars>
|
2024-10-06 20:52:40
| 0
| 491
|
SysRIP
|
79,060,062
| 1,332,263
|
Tkinter Notebook: Activate Button on Tab1 from Tab2
|
<p>On Tab1 I have a Test button that can be "turned on" by pressing the Test button directly or by pressing the "activate button". From Tab2 I want to press "activate button on Tab 1" and have the Test button on Tab 1 "turn on" and show the green ON state. If I remove "Notebook" and have everything on one page it works, but with two tabs I cannot show the Test button in the ON state by activating it from Tab 2.</p>
<pre><code> #!/usr/bin/python3.9
import tkinter as tk
from tkinter import ttk

class MainFrame(ttk.Frame):
    def __init__(self, container):
        super().__init__(container)
        self.labelA = ttk.Label(self, text = "This is MainFrame")
        self.labelA.grid(column=1, row=1, padx = 30, pady = 30)
        self.buttonA = tk.Button(self, text="test", command=self.button)
        self.buttonA.grid(column=1, row=2, pady=20)
        self.buttonA.config(relief="raised", bg="gray", font='Helvetica 14', width='4', height='1', bd='8')
        self.buttonB = tk.Button(self, text="activate TEST button", command=self.button) #button)
        self.buttonB.grid(column=1, row=3, pady=20)
        self.buttonC = tk.Button(self, text="get time from Tab 2", command=self.get_time)
        self.buttonC.grid(column=1, row=4, padx=40, pady=20)

    def button(self):
        if self.buttonA.config('relief')[-1] == 'raised':
            self.buttonA.config(relief='sunken', text="ON", bg='green', font='Helvetica 14', width='4', height='1', bd='8')
        else:
            if self.buttonA.config('relief')[-1] != 'raised':
                self.buttonA.config(relief="raised", text="test", bg="gray", font='Helvetica 14', width='4', height='1')

    def get_time(self):
        self.Frame2.get_time()

class SubFrame(ttk.Frame):
    def __init__(self, container):
        super().__init__(container)
        self.labelB = ttk.Label(self, text = "This is SubFrame")
        self.labelB.grid(column=1, row=2, padx = 30, pady = 30)
        self.buttonB = tk.Button(self, text="activate button in MainFrame", command=self.test)
        self.buttonB.grid(column=1, row=3, padx=40, pady=20)
        self.buttonD = tk.Button(self, text="get time", command=self.get_time)
        self.buttonD.grid(column=1, row=5, padx=40, pady=20)
        ## ENTRY STYLE
        style = {'fg': 'black', 'bg': 'white', 'font': 'Helvetica 14 bold', 'width':'6', 'bd':'2',
                 'highlightbackground':"black", 'justify':'center', 'relief':'sunken',
                 'insertontime': '0', 'takefocus': '0' } # highlightcolor':"red"
        self.entry1_var=tk.StringVar()
        #self.entry1_var.set("00:00")
        self.entry1=tk.Entry(self, textvariable=self.entry1_var, width=5, justify="center",)
        self.entry1.grid(column=1, row=4, padx=(0, 0), pady=(0,10))
        self.entry1.configure(style)

    def get_time(self):
        print (self.entry1.get())

    def test(self):
        self.Frame1.button()

class App(tk.Tk):
    def __init__(self):
        super().__init__()
        self.columnconfigure(0, weight=1)
        self.rowconfigure(0, weight=1)
        self.geometry('800x500')
        #self.resizable(False, False)
        self.title("TABS")
        self.notebook = ttk.Notebook(self)
        self.Frame1 = MainFrame(self.notebook)
        self.Frame2 = SubFrame(self.notebook)
        self.Frame1.Frame2 = self.Frame2
        self.Frame2.Frame1 = self.Frame1
        self.notebook.add(self.Frame1, text='TAB 1')
        self.notebook.add(self.Frame2, text='TAB 2')
        self.notebook.grid(row=0, column=0, sticky=tk.N+tk.S+tk.E+tk.W) #pack(expand = 1, fill ="both")

if __name__ == '__main__':
    app = App()
    app.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-10-06 20:48:41
| 1
| 417
|
bob_the_bob
|
79,060,045
| 219,153
|
How to declare multiple similar member variables of a dataclass in a single line?
|
<p>Can this <code>dataclass</code> declaration:</p>
<pre><code>@dataclass
class Point:
    x: float
    y: float
    z: float
</code></pre>
<p>be rewritten in order to reduce boilerplate and resemble something like this:</p>
<pre><code>@dataclass
class Point:
    x, y, z: float
</code></pre>
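<p>For reference, the literal <code>x, y, z: float</code> form is not a valid annotation, but the standard library can build the class programmatically via <code>dataclasses.make_dataclass</code>, which removes the per-field boilerplate:</p>

```python
from dataclasses import make_dataclass

# Build the three float fields programmatically instead of typing them out
Point = make_dataclass("Point", [(name, float) for name in ("x", "y", "z")])

p = Point(1.0, 2.0, 3.0)
print(p)  # Point(x=1.0, y=2.0, z=3.0)
```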
|
<python><python-dataclasses>
|
2024-10-06 20:34:13
| 1
| 8,585
|
Paul Jurczak
|
79,060,002
| 1,936,046
|
Model Loader errors in Text Gen Web UI
|
<p>I have tried loading multiple models from HuggingFace using Text Gen Web UI, but no matter the model or the loader, I get the same "ModuleNotFoundError" for the loaders.</p>
<p>Importantly, I am using an Intel i7 and not a GPU, but I have been able to run smaller models using different UI tools in the past.</p>
<p>Steps I followed:</p>
<ol>
<li>Cloned Text Gen from Git and created a new environment in Anaconda Navigator.</li>
<li>Using Visual Studio Code opened Text Generation Web UI.</li>
<li>I selected the environment's Python as the Python interpreter.</li>
<li>Selected to activate the relevant conda environment and installed the requirements.txt file for Text Generation Web UI.</li>
<li>Started the application using the one_click.py file that comes with Text Generation Web UI.</li>
</ol>
<p>Upon failing to get a model to load, I tried many things:</p>
<ol>
<li>Re-installing pip packages.</li>
<li>Installing a different version of torch: pip install torch torchvision torchaudio --index-url <a href="https://download.pytorch.org/whl/cpu" rel="nofollow noreferrer">https://download.pytorch.org/whl/cpu</a></li>
<li>Deleting the environment and starting over.</li>
<li>Asking ChatGPT and CoPilot a million questions.</li>
</ol>
<p>The same errors appear over and over. It is driving me crazy!</p>
<p>Examples (each of these has been installed, in the correct environment):</p>
<blockquote>
<p>ModuleNotFoundError: Failed to import 'autogptq'. Please install it
manually following the instructions in the AutoGPTQ GitHub repository.</p>
<p>ModuleNotFoundError: No module named 'exllamav2'</p>
<p>OSError: Error no file named pytorch_model.bin, model.safetensors,
tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory
models\meta-llama_Llama-3.2-1B.</p>
</blockquote>
|
<python><large-language-model><huggingface>
|
2024-10-06 20:11:37
| 1
| 764
|
Duanne
|
79,059,906
| 558,639
|
GoogleAuth getting "invalid grant: Bad Request" in call to LocalWebserverAuth()
|
<p>Here's a Python script in its entirety:</p>
<pre><code>from pydrive.auth import GoogleAuth
gauth = GoogleAuth()
gauth.LoadClientConfigFile("client_secrets.json")
print(f'==== called LoadClientConfigFile', flush=True)
gauth.LocalWebserverAuth() # Prompt for authentication via web browser
</code></pre>
<p>When I run it, I get the following:</p>
<pre><code>==== called LoadClientConfigFile
Traceback (most recent call last):
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\pydrive\auth.py", line 475, in Refresh
self.credentials.refresh(self.http)
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\oauth2client\client.py", line 545, in refresh
self._refresh(http)
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\oauth2client\client.py", line 761, in _refresh
self._do_refresh_request(http)
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\oauth2client\client.py", line 819, in _do_refresh_request
raise HttpAccessTokenRefreshError(error_msg, status=resp.status)
oauth2client.client.HttpAccessTokenRefreshError: invalid_grant: Bad Request
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\r\Projects\pygmu\t1.py", line 6, in <module>
gauth.LocalWebserverAuth() # Prompt for authentication via web browser
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\pydrive\auth.py", line 120, in _decorated
self.Refresh()
File "C:\Users\r\.virtualenvs\pygmu-oJfXpe_C\Lib\site-packages\pydrive\auth.py", line 477, in Refresh
raise RefreshError('Access token refresh failed: %s' % error)
pydrive.auth.RefreshError: Access token refresh failed: invalid_grant: Bad Request
</code></pre>
<p>Given that the error happens after the call to <code>LoadClientConfigFile()</code>, I suspect the issue might be with the call to <code>LocalWebserverAuth()</code>.</p>
<p>Any ideas on how to diagnose and/or fix this?</p>
<h3>Update</h3>
<p>Note that no web server window appears, so I'm not sure where the problem lies.</p>
<h3>Environment</h3>
<ul>
<li>Python 3.12</li>
<li>Windows 11 Pro, v 23H2</li>
<li>Available Browser(s): Firefox, Chrome, Edge</li>
</ul>
<h3>Update 2</h3>
<p>Based on a hint from @LindaLawton-DaImTo, my code may be out of date. I'm revisiting <a href="https://developers.google.com/drive/api/guides/about-sdk" rel="nofollow noreferrer">https://developers.google.com/drive/api/guides/about-sdk</a> and <a href="https://developers.google.com/drive/api/quickstart/js" rel="nofollow noreferrer">https://developers.google.com/drive/api/quickstart/js</a> to make sure I'm using the current API. Standby...</p>
|
<python><windows><google-oauth>
|
2024-10-06 19:12:33
| 1
| 35,607
|
fearless_fool
|
79,059,839
| 6,618,225
|
Delete content from PDF using Python
|
<p>I need to strip a large number of PDFs of most of their content and leave only an image inside (the structure of the PDFs is always the same).</p>
<p>Here is a screenshot of the PDF content:</p>
<p><a href="https://i.sstatic.net/rUI78Wxk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rUI78Wxk.png" alt="enter image description here" /></a></p>
<p>The image marked in yellow is the one I want to keep; all those paths, texts, and the other, smaller image are to be deleted. I have checked out some Python PDF libraries such as PyPDF, but it seems they only let me access comments, annotations, and similar metadata, not that page content.</p>
<p>Does anyone have a solution?</p>
|
<python><pdf>
|
2024-10-06 18:42:46
| 1
| 357
|
Kai
|
79,059,699
| 4,505,998
|
How to make `__getitems__` return a dict?
|
<p>In torch's <code>Dataset</code>, on top of the obligatory <code>__getitem__</code> method, you can implement the <code>__getitems__</code> method.</p>
<p>In my case <code>__getitem__</code> returns a dict, but I can't figure out how to do the same with <code>__getitems__</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import torch
from torch.utils.data import DataLoader

class StackOverflowDataset(torch.utils.data.Dataset):
def __init__(self, data):
self._data = data
def __getitem__(self, idx):
return {'item': self._data[idx], 'whatever': idx*self._data[idx]+3}
def __getitems__(self, idxs):
return {'item': self._data[idxs], 'whatever': idxs*self._data[idxs]+3}
def __len__(self):
return len(self._data)
dataset = StackOverflowDataset(np.random.random(5))
for X in DataLoader(dataset, 2):
print(X)
break
</code></pre>
<p>If I comment out <code>__getitems__</code> it works, but leaving it there raises a <code>KeyError: 0</code>.</p>
<pre><code>KeyError Traceback (most recent call last)
Cell In[182], line 15
12 return len(self._data)
14 dataset = StackOverflowDataset(np.random.random(5))
---> 15 for X in DataLoader(dataset, 2):
16 print(X)
17 break
File ~/recommenders/venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py:630, in _BaseDataLoaderIter.__next__(self)
627 if self._sampler_iter is None:
628 # TODO(https://github.com/pytorch/pytorch/issues/76750)
629 self._reset() # type: ignore[call-arg]
--> 630 data = self._next_data()
631 self._num_yielded += 1
632 if self._dataset_kind == _DatasetKind.Iterable and \
633 self._IterableDataset_len_called is not None and \
634 self._num_yielded > self._IterableDataset_len_called:
File ~/recommenders/venv/lib/python3.12/site-packages/torch/utils/data/dataloader.py:673, in _SingleProcessDataLoaderIter._next_data(self)
671 def _next_data(self):
672 index = self._next_index() # may raise StopIteration
--> 673 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
674 if self._pin_memory:
675 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)
File ~/recommenders/venv/lib/python3.12/site-packages/torch/utils/data/_utils/fetch.py:55, in _MapDatasetFetcher.fetch(self, possibly_batched_index)
53 else:
54 data = self.dataset[possibly_batched_index]
---> 55 return self.collate_fn(data)
File ~/recommenders/venv/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py:317, in default_collate(batch)
256 def default_collate(batch):
257 r"""
258 Take in a batch of data and put the elements within the batch into a tensor with an additional outer dimension - batch size.
259
(...)
315 >>> default_collate(batch) # Handle `CustomType` automatically
316 """
--> 317 return collate(batch, collate_fn_map=default_collate_fn_map)
File ~/recommenders/venv/lib/python3.12/site-packages/torch/utils/data/_utils/collate.py:137, in collate(batch, collate_fn_map)
109 def collate(batch, *, collate_fn_map: Optional[Dict[Union[Type, Tuple[Type, ...]], Callable]] = None):
110 r"""
111 General collate function that handles collection type of element within each batch.
112
(...)
135 for the dictionary of collate functions as `collate_fn_map`.
136 """
--> 137 elem = batch[0]
138 elem_type = type(elem)
140 if collate_fn_map is not None:
KeyError: 0
</code></pre>
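<p>For completeness, one variant I tried (stripped down to plain Python here, no torch, same names as in my snippet) has <code>__getitems__</code> return a <em>list</em> of per-sample dicts. It doesn't raise, but then I lose the single batched-dict shape I was hoping for:</p>

```python
class StackOverflowDataset:
    """Torch-free sketch of the dataset, keeping the same method names."""

    def __init__(self, data):
        self._data = data

    def __getitem__(self, idx):
        return {'item': self._data[idx], 'whatever': idx * self._data[idx] + 3}

    def __getitems__(self, idxs):
        # One dict per sample, i.e. the same shape __getitem__ produces,
        # instead of a single dict of batched values.
        return [self.__getitem__(i) for i in idxs]

batch = StackOverflowDataset([2.0, 3.0, 4.0]).__getitems__([0, 2])
```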
|
<python><pytorch><pytorch-dataloader>
|
2024-10-06 17:25:24
| 1
| 813
|
David Davó
|
79,059,576
| 18,769,241
|
top lip to nose ratio aspect to detect chewing activity not working as expected
|
<p>I want to know whether a person is chewing in front of a camera or not, using dlib's facial landmarks.
To this end I compute the distance from the nose to the top lip and derive the mean; if that value is bigger than a threshold, then we could say that the person is chewing.</p>
<p>To compute the nose to top lip distance I do the following:</p>
<pre><code>def lip_nose_distance(shape):
top_lip = shape[50:54]
nose = shape[32:36]
nose = np.mean(nose, axis=0)
top_lip = np.mean(top_lip, axis=0)
distance = abs(nose[0] - top_lip[0])
return distance
</code></pre>
<p>Then I iterate over the buffered frames from the video like so:</p>
<pre><code>old_lip_mar1 = 0.0
for (i, landmark) in enumerate(landmark_buffer):
lip_dist = lip_nose_distance(shape)
if old_lip_mar1>0.0:
print("old_lip_mar1 ", old_lip_mar1)
print("lip_dist ", lip_dist)
if abs(lip_dist-old_lip_mar1) > 0.015:
print("lip is moving around too much! so the user is chewing")
else:
print("lip is not moving around too much!")
old_lip_mar1 = lip_dist
</code></pre>
<p>The buffers are obtained using:</p>
<pre><code>video_list = os.listdir(VIDEO_PATH) # Read video list
# for vid_name in video_list: # Iterate on video files
vid_path = VIDEO_PATH + "chewing_2.avi"
vid = cv2.VideoCapture(vid_path) # Read video
# Parse into frames
frame_buffer = [] # A list to hold frame images
frame_buffer_color = [] # A list to hold original frame images
while (True):
success, frame = vid.read() # Read frame
if not success:
break # Break if no frame to read left
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) # Convert image into grayscale
frame_buffer.append(gray) # Add image to the frame buffer
vid.release()
landmark_buffer = [] # A list to hold face landmark information
for (i, image) in enumerate(frame_buffer): # Iterate on frame buffer
result = get_rotated_mouth_loc_with_height(image)
landmark = predictor(image, result['rect']) # Detect face landmarks
landmark = shape_to_list(landmark)
landmark_buffer.append(landmark)
shape = result['shape']
</code></pre>
<p>For the code to work, the following constants and imports are required:</p>
<pre><code>import dlib
import cv2
import os
from collections import OrderedDict
import numpy as np
from scipy.spatial import distance as dist
FACIAL_LANDMARKS_IDXS = OrderedDict([
("mouth", (48, 68)),
("inner_mouth", (60, 68)),
("right_eyebrow", (17, 22)),
("left_eyebrow", (22, 27)),
("right_eye", (36, 42)),
("left_eye", (42, 48)),
("nose", (27, 36)),
("jaw", (0, 17))
])
</code></pre>
<p>Finally, an example video the code can run on is found at:</p>
<pre><code>https://streamable.com/lrlchs
</code></pre>
<p>For now, the problem is that the results are not the expected ones (e.g., I get
<code>lip is not moving around too much!</code> while the lips are moving!), plus the distance values don't change across frames (e.g., <code>('old_lip_mar1 ', 0.75), ('lip_dist ', 0.75)</code>).</p>
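<p>In case it's relevant, the Euclidean variant of the distance I was planning to test next (same landmark slices, but using both coordinates instead of only x) looks like this:</p>

```python
import numpy as np

def euclidean_lip_nose_distance(shape):
    # Mean top-lip point and mean nose point, then the full 2-D
    # Euclidean distance between them (not just the x difference).
    top_lip = np.mean(np.asarray(shape[50:54], dtype=float), axis=0)
    nose = np.mean(np.asarray(shape[32:36], dtype=float), axis=0)
    return float(np.linalg.norm(nose - top_lip))
```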
|
<python><face-recognition><dlib><face-landmark>
|
2024-10-06 16:30:51
| 0
| 571
|
Sam
|
79,059,555
| 233,928
|
python running in cursor can't see an installed package. What makes packages visible in python?
|
<p>I wrote code using pygame, and so installed it:</p>
<pre><code>pip install pygame
</code></pre>
<p>From the command line, the code works.</p>
<p>In cursor (an AI-enabled editor that appears to be based on vscode) though, it shows up as missing, and when I try to run the program,</p>
<pre><code>ModuleNotFoundError: no module named pygame
</code></pre>
<p>I killed the editor and started it again from the same command line that works. Same error.</p>
<p>I tested it in VSCode, and it's the same, so it has nothing to do with cursor specifically.</p>
<p>I understand the Linux PATH, but not the mechanism Python uses to search for packages.</p>
<p>How do I get the environment in the IDE to match what I can do on the command line?</p>
<p>And why doesn't it match what I've installed in <code>pip</code> on my account?</p>
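<p>To show what I mean by the environments not matching, this is the snippet I've been running both from the working terminal and inside the editor to compare interpreters:</p>

```python
import sys

# Where this interpreter lives, and which environment it belongs to;
# if these differ between the terminal and the IDE, `pip install`
# went into a different interpreter's site-packages.
print(sys.executable)
print(sys.prefix)
```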
|
<python><vscode-debugger>
|
2024-10-06 16:21:05
| 1
| 8,644
|
Dov
|
79,059,505
| 15,178,267
|
Handling Errors in Django Test Cases
|
<p>I am currently writing test cases using Django and would like to improve the way we handle errors, particularly distinguishing between errors that arise from the test cases themselves and those caused by the user's code.</p>
<h3>Context:</h3>
<p>Let’s say a user writes a function to fetch and increment the likes count of a post:</p>
<pre class="lang-py prettyprint-override"><code>def fetch_post(request, post_id):
try:
post = Post.objects.get(id=post_id)
except Post.DoesNotExist:
raise Http404("Post not found")
post.likes += 1
post.save()
return HttpResponse("Post liked")
</code></pre>
<p>Here’s the test case for that function:</p>
<pre class="lang-py prettyprint-override"><code>from django.test import TestCase
from project_app.models import Post
from django.urls import reverse
from django.http import Http404
class FetchPostViewTests(TestCase):
def setUp(self):
self.post = Post.objects.create(title="Sample Post")
def assertLikesIncrementedByOne(self, initial_likes, updated_post):
if updated_post.likes != initial_likes + 1:
raise AssertionError(f'Error: "Likes cannot be incremented by {updated_post.likes - initial_likes}"')
def test_fetch_post_increments_likes(self):
initial_likes = self.post.likes
response = self.client.get(reverse('fetch_post', args=[self.post.id]))
updated_post = Post.objects.get(id=self.post.id)
self.assertLikesIncrementedByOne(initial_likes, updated_post)
self.assertEqual(response.content.decode(), "Post liked")
def test_fetch_post_not_found(self):
response = self.client.get(reverse('fetch_post', args=[9999]))
self.assertEqual(response.status_code, 404)
</code></pre>
<h3>Scenario:</h3>
<p>Now, suppose a user accidentally modifies the code to use <code>filter()</code> instead of <code>get()</code> and to increment the likes by 2 instead of 1.</p>
<pre class="lang-py prettyprint-override"><code># Wrong code that will fail the test case
def fetch_post(request, post_id):
try:
# used filter() method instead of get() method
post = Post.objects.filter(id=post_id)
except Post.DoesNotExist:
raise Http404("Post not found")
post.likes += 2 # incremented by two instead of 1
post.save()
return HttpResponse("Post liked")
</code></pre>
<p>This leads to the following test failure:</p>
<pre class="lang-bash prettyprint-override"><code>test_cases/test_case.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.9/site-packages/django/test/client.py:742: in get
response = super().get(path, data=data, secure=secure, **extra)
/usr/local/lib/python3.9/site-packages/django/test/client.py:396: in get
return self.generic('GET', path, secure=secure, **{
/usr/local/lib/python3.9/site-packages/django/test/client.py:473: in generic
return self.request(**r)
/usr/local/lib/python3.9/site-packages/django/test/client.py:719: in request
self.check_exception(response)
/usr/local/lib/python3.9/site-packages/django/test/client.py:580: in check_exception
raise exc_value
/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py:47: in inner
response = get_response(request)
/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py:181: in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
request = <WSGIRequest: GET '/fetch_post/9999/'>, post_id = 9999
def fetch_post(request, post_id):
try:
post = Post.objects.filter(id=post_id)
except Post.DoesNotExist:
raise Http404("Post not found") # Raise 404 if post does not exist
> post.likes += 2
E AttributeError: 'QuerySet' object has no attribute 'likes'
project_app/views.py:11: AttributeError
------------------------------ Captured log call -------------------------------
ERROR django.request:log.py:224 Internal Server Error: /fetch_post/9999/
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/usr/local/lib/python3.9/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/app/project_app/views.py", line 11, in fetch_post
post.likes += 2
AttributeError: 'QuerySet' object has no attribute 'likes'
=========================== short test summary info ============================
FAILED test_cases/test_case.py::FetchPostViewTests::test_fetch_post_increments_likes
FAILED test_cases/test_case.py::FetchPostViewTests::test_fetch_post_not_found
============================== 2 failed in 0.74s ===============================
</code></pre>
<h3>Request for Assistance:</h3>
<p>Instead of displaying the above traceback, which clearly indicates a test case error, I would prefer to return a more user-friendly error message like:</p>
<pre><code>Error 1: "Likes cannot be incremented by 2"
Error 2: "You used filter method instead of the get method in your function"
</code></pre>
<p>Is there a way to catch such errors within the test case and return a more human-readable error message? Any guidance on how to implement this would be greatly appreciated.</p>
<p>Thank you!</p>
<hr />
<h2>Edit 1</h2>
<p>Note: I am using a Celery worker to run the Docker container that will in turn run the tests. Here is the relevant part of the <code>tasks.py</code> responsible for printing the error:</p>
<pre><code>print("Running tests in Docker container...")
test_result = subprocess.run(
["docker", "run", "--rm", "-v", f"{volume_name}:/app", "test-runner"],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
if test_result.returncode != 0:
# Check for AssertionError messages in stdout
error_message = test_result.stdout
if "AssertionError" in error_message:
# Extract just the assertion error message
lines = error_message.splitlines()
for line in lines:
if "Error:" in line:
submission.error_log = line.strip() # Save specific error message to the database
raise Exception(line.strip())
</code></pre>
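<p>As a sketch of what I mean by "more human-readable" (the helper name here is mine), the stdout parsing above could be factored into a small function that keeps only the <code>Error: "..."</code> fragments and drops the traceback noise:</p>

```python
def extract_friendly_errors(output):
    # Keep only the human-readable 'Error: "..."' fragments from the
    # test runner's stdout; 'Error: "' (with the quote) is searched so
    # the 'Error:' inside 'AssertionError:' is not matched by mistake.
    marker = 'Error: "'
    messages = []
    for line in output.splitlines():
        if marker in line:
            messages.append(line[line.index(marker):].strip())
    return messages
```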
|
<python><django>
|
2024-10-06 15:53:45
| 1
| 851
|
Destiny Franks
|
79,059,331
| 13,634,560
|
best way to fill nans, getting the mean from multiple columns
|
<p>I have a dataframe with reviews of restaurants. A column "description_sentiment" has been assigned a score ranging from -1.00 to +1.00. Some of the rows are 0 (due to some other factors in the dataframe), so I'd like to interpolate some of the rows based on other info, like the Award score.</p>
<pre><code>interpolate_cols = ["description_sentiment", "Award_ordinal"]
</code></pre>
<p>As I understand it, best practice for interpolation is to set a mask for each possible value and use it, as in the block below.</p>
<pre><code>nan_mask = michelin["description_sentiment"].isna()  # == np.nan is always False; use isna()
award1_mask = (michelin["Award_ordinal"]==1)
award2_mask = (michelin["Award_ordinal"]==2)
award3_mask = (michelin["Award_ordinal"]==3)
award4_mask = (michelin["Award_ordinal"]==4)
award5_mask = (michelin["Award_ordinal"]==5)
michelin.loc[nan_mask & award1_mask, "description_sentiment"] = michelin.loc[award1_mask, "description_sentiment"].mean()
...
</code></pre>
<p>and then list all possible values</p>
<p>My question is: what happens when the complexity of the data increases, i.e., with more features? Is the best way really to list them all individually, or is there a simpler, programmatic way, like:</p>
<pre><code>interpolate() np.nan for col.unique(), col.unique(), col.unique()
</code></pre>
<p>many thanks.</p>
<hr />
<p><strong>Edit: MRE</strong></p>
<p>The first three columns should be considered ordinal, the next three nominal, the next few one-hot encoded [boolean from categorical nominal], and the final one continuous.</p>
<pre><code>df = pd.DataFrame({
"A" : np.random.choice([1, 12], 1000),
"B" : np.random.choice([1, 20], 1000),
"C" : np.random.choice([1, 25], 1000),
"D" : np.random.choice(["Japan", "China", "Indonesia", "Thailand", "Laos", "Cambodia", "Philippines"], 1000),
"E" : np.random.choice(["Japan", "China", "Indonesia", "Thailand", "Laos", "Cambodia", "Philippines"], 1000),
"F" : np.random.choice(["Japan", "China", "Indonesia", "Thailand", "Laos", "Cambodia", "Philippines"], 1000),
"G" : np.random.choice([0, 1], 1000),
"H" : np.random.choice([0, 1], 1000),
"I" : np.random.choice([0, 1], 1000),
"J" : np.random.choice([0, 1], 1000),
"K" : np.random.choice([0, 1], 1000),
"L" : np.random.choice([0, 1], 1000),
"M" : np.random.choice([0, 1], 1000),
"missing_values" : np.random.choice(np.arange(0,1000), 1000) /100
})
df.loc[df["missing_values"].sample(frac=0.1).index, "missing_values"] = np.nan
</code></pre>
<p>As I understand it, interpolation cannot be used because it implies a linear relationship between the missing values, and this is not a time series. A random forest or decision tree could work, but I was hoping to find a solution using a mask.</p>
<p>Is there a programmatic way to achieve this?</p>
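<p>The closest I've got to something programmatic is a groupby-transform fill (a sketch on a toy frame; I'm not sure it counts as interpolation, and the function name is mine):</p>

```python
import numpy as np
import pandas as pd

def fill_nans_with_group_mean(df, target, group_cols):
    # Replace NaNs in `target` with the mean of the rows sharing the
    # same values in `group_cols`: one line instead of a mask per value.
    group_mean = df.groupby(group_cols)[target].transform('mean')
    return df[target].fillna(group_mean)

toy = pd.DataFrame({
    'Award_ordinal': [1, 1, 1, 2, 2],
    'description_sentiment': [0.2, 0.4, np.nan, 0.8, np.nan],
})
toy['description_sentiment'] = fill_nans_with_group_mean(
    toy, 'description_sentiment', 'Award_ordinal')
```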
|
<python><pandas><nan>
|
2024-10-06 14:30:16
| 0
| 341
|
plotmaster473
|
79,059,252
| 561,243
|
Indexing overlapping areas of a numpy array
|
<p>I have a 2D array of zeros (called labels) and a list of its coordinates (called centers). For each of the centers I would like to put a progressive number in a 5x5 cluster around it inside the label array.</p>
<p>Should two centers be so close that the corresponding clusters are overlapping, then I would like the labels arrays to give precedence to the lower label value. In other words, if cluster 2 is overlapping cluster 1, then I want cluster 1 to be 5x5 and cluster 2 to be smaller.</p>
<p>I have managed to code this procedure as follow:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
labels = np.zeros((10,20))
n_rows, n_cols = labels.shape
centers = [(4,4), (7,7), (5,10), (5,18)]
for i, center in enumerate(centers, start=1):
# find the coordinates for the 5x5 cluster centered in center.
cluster = np.ix_(np.arange(max(center[0]-2,0), min(center[0]+3,n_rows),1),
np.arange(max(center[1]-2, 0), min(center[1]+3,n_cols),1))
# create a temporary label array with all zeros
temp_label = np.zeros_like(labels)
# set the label value in the temporary array in the position corresponding to the cluster
temp_label[cluster] = i
# apply some boolean algebra
# (labels == 0) is a bool array corresponding to all positions that are not yet belonging to a label
# (labels == 0) * temp_label is like the temp_label, but only where the labels array is still free
# adding the labels back is ensuring that all previous labels are also counted.
labels = (labels == 0) * temp_label + labels
print(labels)
</code></pre>
<p>Output:</p>
<pre><code>[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 1. 1. 1. 1. 0. 3. 3. 3. 3. 3. 0. 0. 0. 4. 4. 4. 4.]
[0. 0. 1. 1. 1. 1. 1. 0. 3. 3. 3. 3. 3. 0. 0. 0. 4. 4. 4. 4.]
[0. 0. 1. 1. 1. 1. 1. 2. 2. 2. 3. 3. 3. 0. 0. 0. 4. 4. 4. 4.]
[0. 0. 1. 1. 1. 1. 1. 2. 2. 2. 3. 3. 3. 0. 0. 0. 4. 4. 4. 4.]
[0. 0. 0. 0. 0. 2. 2. 2. 2. 2. 3. 3. 3. 0. 0. 0. 4. 4. 4. 4.]
[0. 0. 0. 0. 0. 2. 2. 2. 2. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 2. 2. 2. 2. 2. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>This snippet is actually doing what I want. Cluster 1 and 4 are full, while cluster 2 and 3 that are touching each other are only partial. To achieve this, I have to pass through a temporary array each time.</p>
<p>What would be a better solution?</p>
<p>Unfortunately, reversing the center list is <em>not</em> an option, because in the real analysis case the center list is generated one element at a time.</p>
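<p>For what it's worth, a variant I sketched that avoids the full-size temporary by slicing the (border-clipped) 5x5 block directly should give the same labels, since basic slicing returns a view and the in-place assignment writes straight into <code>labels</code>:</p>

```python
import numpy as np

labels = np.zeros((10, 20))
n_rows, n_cols = labels.shape
centers = [(4, 4), (7, 7), (5, 10), (5, 18)]

for i, (r, c) in enumerate(centers, start=1):
    # Basic slicing returns a view, so writing into `block` writes
    # into `labels`; the 5x5 window is clipped at the array borders.
    block = labels[max(r - 2, 0):min(r + 3, n_rows),
                   max(c - 2, 0):min(c + 3, n_cols)]
    # Only fill positions that are still 0, so earlier (lower) labels
    # keep precedence where clusters overlap.
    block[block == 0] = i
```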
|
<python><arrays><numpy><indexing><numpy-ndarray>
|
2024-10-06 13:54:24
| 3
| 367
|
toto
|
79,059,219
| 18,949,720
|
CLI game flickering
|
<p>I am trying to develop a small/lightweight implementation of Tetris running in the CLI, using Python. Currently, I am using a <code>print()</code> for each frame update, just after <code>os.system('cls')</code> to clear the terminal.</p>
<p>The problem with this approach is heavy flickering which makes the game very difficult to look at.</p>
<p>Would you have any suggestions of things I should try to improve this?</p>
<p>Here is the Github repo of the project (the main script is not very long): <a href="https://github.com/TheoFABIEN/CLI-Tetris" rel="nofollow noreferrer">https://github.com/TheoFABIEN/CLI-Tetris</a></p>
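<p>For reference, the redraw variant I'm experimenting with replaces the <code>os.system('cls')</code> + <code>print()</code> pair with a single write that just moves the cursor back to the top-left (ANSI escape), so the old frame is overwritten in place instead of the screen being cleared:</p>

```python
import sys

def draw_frame(frame: str) -> None:
    # "\x1b[H" homes the cursor instead of clearing the screen, and a
    # single write() avoids the blank intermediate state that flickers.
    sys.stdout.write("\x1b[H" + frame)
    sys.stdout.flush()
```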
|
<python><command-line-interface><game-development><tetris>
|
2024-10-06 13:38:38
| 1
| 358
|
Droidux
|
79,059,078
| 5,852,692
|
Applying reinforced-learning on combination of continuous and discrete action space environment
|
<p>I have a custom Gym environment with three continuous action components and one discrete one. I would like to apply a reinforcement-learning algorithm, but I am not sure which one to use.</p>
<p>Below you can find the environment code, action space is basically setting some parameters within the gas network which is called via <code>self.func</code> and observation space is the pressure results of nodes, and velocity results of elements:</p>
<pre><code>import numpy as np
import gymnasium as gym
import simtools as st
class GasNetworkEnv(gym.Env):
def __init__(self, map_, qcorr_bounds, pset_bounds, cs_ctrl_bounds,
obs_size, func, func_args):
super(GasNetworkEnv, self).__init__()
self.action_space = gym.spaces.Dict({
'qcorr': gym.spaces.Box(
low=np.array([qcorr_bounds[0]]),
high=np.array([qcorr_bounds[1]]),
dtype=np.float64),
'pset': gym.spaces.Box(
low=np.array([pset_bounds[0]]),
high=np.array([pset_bounds[1]]),
dtype=np.float64),
'cs_ctrl': gym.spaces.Box(
low=np.repeat(cs_ctrl_bounds[0], len(map_)),
high=np.repeat(cs_ctrl_bounds[1], len(map_)),
dtype=np.float64),
'cs_state': gym.spaces.MultiBinary(
sum([len(map_[k].no) for k in map_]))})
self.observation_space = gym.spaces.Box(
low=-1e5, high=1e5, shape=(obs_size,), dtype=np.float64)
self.func = func
self.func_args = func_args
def step(self, action):
# call objective function (we are trying to minimize score which is bad when higher)
node_results, element_results, score = self.func(action, self.func_args)
reward = -score
# observation
observation = np.concatenate((node_results, element_results))
# termination conditions (currently: no termination)
done = False
return observation, reward, done, {}
def reset(self, **kwargs):
super().reset(seed=kwargs.get('seed', None))
initial_observation = np.random.uniform(
low=self.observation_space.low,
high=self.observation_space.high,
size=self.observation_space.shape)
return initial_observation
</code></pre>
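<p>A workaround I'm considering is flattening everything into a single <code>Box</code> (relaxing the binary part to [0, 1]) so that a continuous-action algorithm such as PPO or SAC would apply, and splitting/thresholding inside <code>step</code>. A torch-free sketch of the split (the sizes are illustrative):</p>

```python
import numpy as np

def split_flat_action(flat, n_cont, n_binary):
    # The first n_cont entries are the continuous controls as-is; the
    # next n_binary entries are relaxed binaries in [0, 1] that are
    # thresholded back to {0, 1} for the environment.
    cont = np.asarray(flat[:n_cont], dtype=float)
    binary = (np.asarray(flat[n_cont:n_cont + n_binary]) > 0.5).astype(int)
    return cont, binary
```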
|
<python><reinforcement-learning><openai-gym><dqn><gymnasium>
|
2024-10-06 12:30:55
| 0
| 1,588
|
oakca
|