| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,097,616
| 1,008,588
|
Django Summernote plugin upload image by userid
|
<p>I am using the Summernote plugin for Django, and my goal is to allow users to upload media to the server. At the moment files are organized (by default) in a folder named after the upload date. Something like:</p>
<ul>
<li>"ProjectName/media/django-summernote/<strong>2024-10-11</strong>/989d2f98-ad3c-47d6-9c07-e5f6d0c731e6.png"</li>
<li>"ProjectName/media/django-summernote/<strong>2024-10-17</strong>/13d646b8-d7cd-4e04-a76a-804a1ee0d090.jpg".</li>
</ul>
<p>Is it possible to change the path so that it includes the user_id?
Something like:</p>
<ul>
<li>"ProjectName/media/django-summernote/<strong>User100</strong>/989d2f98-ad3c-47d6-9c07-e5f6d0c731e6.png"</li>
<li>"ProjectName/media/django-summernote/<strong>User200</strong>/13d646b8-d7cd-4e04-a76a-804a1ee0d090.jpg".</li>
</ul>
<p><strong>What I have done</strong></p>
<p>I made these edits in the <code>settings.py</code> file</p>
<pre><code># Summernote plugin
import os

def summernote_upload_to(request, filename):
    user = request.user
    # Create the dynamic path
    upload_path = f'user_upload/{user}'
    return os.path.join(upload_path)

SUMMERNOTE_CONFIG = {
    'attachment_upload_to': summernote_upload_to,
    'summernote': {
        'attachment_filesize_limit': 200 * 1000 * 1000,  # file size limit in bytes
        'width': '100%',
        'height': '480',
    }
}
</code></pre>
<p>but when I upload an image, I get this error:</p>
<blockquote>
<p>AttributeError: 'Attachment' object has no attribute 'user'</p>
</blockquote>
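<p>For context (a sketch, not the documented Summernote behaviour): Django hands any <code>upload_to</code> callable the model instance and the original filename, so the request/user is not directly available inside it. The folder name below is a hypothetical placeholder:</p>

```python
import os
import uuid

def summernote_upload_to(instance, filename):
    # 'instance' here is the Attachment model instance, not the HTTP
    # request -- which is why request.user raises AttributeError.
    ext = os.path.splitext(filename)[1]
    return os.path.join('django-summernote', 'uploads', uuid.uuid4().hex + ext)
```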
|
<python><django><django-forms><django-settings><summernote>
|
2024-10-17 10:18:49
| 1
| 2,764
|
Nicolaesse
|
79,097,475
| 4,489,082
|
MemoryError in creating large numpy array
|
<p>My objective is to plot a histogram given values and counts. <code>hist</code> only takes an array of data as input. I have tried to recreate the data using <code>np.repeat</code>, but this gives <code>MemoryError: Unable to allocate 15.9 GiB for an array with shape (2138500000,) and data type float64</code>.</p>
<p>Wanted to know if there is a smarter way of doing this.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
values = [ 1, 2, 2.5, 4, 5, 5.75, 6.5]
counts = [10**8, 10**9, 1.5*10**7, 1.25*10**7, 10**6, 10**7,10**9]
data_recreated = np.repeat(values, counts)
f1, ax = plt.subplots(1,1)
ax.hist(data_recreated, bins=5)
</code></pre>
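<p>For reference, both <code>np.histogram</code> and <code>Axes.hist</code> accept a <code>weights</code> argument, so each value can be passed once with its count as the weight instead of materializing the repeated array; a minimal sketch:</p>

```python
import numpy as np

values = [1, 2, 2.5, 4, 5, 5.75, 6.5]
counts = [1e8, 1e9, 1.5e7, 1.25e7, 1e6, 1e7, 1e9]

# Each value contributes its count to the bin it falls in;
# ax.hist(values, bins=5, weights=counts) draws the same histogram.
heights, edges = np.histogram(values, bins=5, weights=counts)
```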
|
<python><numpy><matplotlib>
|
2024-10-17 09:42:30
| 2
| 793
|
pkj
|
79,097,421
| 4,451,315
|
rolling sum with right-closed interval in duckdb
|
<p>In Polars / pandas I can do a rolling sum where for each row the window is <code>(row - 10 minutes, row]</code>. For example:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

data = {
    "timestamp": [
        "2023-08-04 10:00:00",
        "2023-08-04 10:05:00",
        "2023-08-04 10:10:00",
        "2023-08-04 10:10:00",
        "2023-08-04 10:20:00",
        "2023-08-04 10:20:00",
    ],
    "value": [1, 2, 3, 4, 5, 6],
}
df = pl.DataFrame(data).with_columns(pl.col("timestamp").str.strptime(pl.Datetime))
print(
    df.with_columns(pl.col("value").rolling_sum_by("timestamp", "10m", closed="right"))
)
</code></pre>
<p>This outputs</p>
<pre><code>shape: (6, 2)
┌─────────────────────┬───────┐
│ timestamp           │ value │
│ ---                 │ ---   │
│ datetime[μs]        │ i64   │
╞═════════════════════╪═══════╡
│ 2023-08-04 10:00:00 │ 1     │
│ 2023-08-04 10:05:00 │ 3     │
│ 2023-08-04 10:10:00 │ 9     │
│ 2023-08-04 10:10:00 │ 9     │
│ 2023-08-04 10:20:00 │ 11    │
│ 2023-08-04 10:20:00 │ 11    │
└─────────────────────┴───────┘
</code></pre>
<p>How can I do this in DuckDB? Closest I could come up with is:</p>
<pre><code>rel = duckdb.sql("""
    SELECT
        timestamp,
        value,
        SUM(value) OVER roll AS rolling_sum
    FROM df
    WINDOW roll AS (
        ORDER BY timestamp
        RANGE BETWEEN INTERVAL 10 minutes PRECEDING AND CURRENT ROW
    )
    ORDER BY timestamp;
""")
print(rel)
</code></pre>
<p>but that makes the window <code>[row - 10 minutes, row]</code>, not <code>(row - 10 minutes, row]</code></p>
<p>Alternatively, I could do</p>
<pre class="lang-py prettyprint-override"><code>rel = duckdb.sql("""
    SELECT
        timestamp,
        value,
        SUM(value) OVER roll AS rolling_sum
    FROM df
    WINDOW roll AS (
        ORDER BY timestamp
        RANGE BETWEEN INTERVAL '10 minutes' - INTERVAL '1 microsecond' PRECEDING AND CURRENT ROW
    )
    ORDER BY timestamp;
""")
</code></pre>
<p>but I'm not sure how robust that would be.</p>
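<p>To make the intended right-closed semantics concrete, here is a plain-Python reference implementation (a sketch for checking results, not an efficient approach):</p>

```python
from datetime import datetime, timedelta

rows = [
    (datetime(2023, 8, 4, 10, 0), 1),
    (datetime(2023, 8, 4, 10, 5), 2),
    (datetime(2023, 8, 4, 10, 10), 3),
    (datetime(2023, 8, 4, 10, 10), 4),
    (datetime(2023, 8, 4, 10, 20), 5),
    (datetime(2023, 8, 4, 10, 20), 6),
]

def rolling_sum_right_closed(rows, window=timedelta(minutes=10)):
    # For each row at time t, sum values with timestamps in (t - window, t].
    return [sum(v for s, v in rows if t - window < s <= t) for t, _ in rows]
```

This reproduces the Polars output <code>[1, 3, 9, 9, 11, 11]</code> above.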
|
<python><postgresql><python-polars><duckdb>
|
2024-10-17 09:29:59
| 2
| 11,062
|
ignoring_gravity
|
79,097,403
| 21,446,483
|
Azure python webapp deploy shows as running even though it is successful
|
<p>When I deploy my webapp using the Azure CLI, the command keeps running until it times out, even though the app itself deploys successfully and quickly. Sometimes, seemingly at random, the CLI command correctly detects this and reports a successful deploy.</p>
<p>Command:</p>
<pre><code>az webapp up --subscription <subscription> --name <webapp-name> --resource-group <rg> --plan <app-service-plan> --runtime PYTHON:3.11 --debug --verbose --launch-browser --location northeurope
</code></pre>
<p>Web app code (simplified, but issue persists):</p>
<pre class="lang-py prettyprint-override"><code>from dotenv import load_dotenv

load_dotenv()

from fastapi import FastAPI

app = FastAPI()

@app.get("/version")
async def version():
    print('Hit version endpoint')
    return {"version": "1.0.0"}
</code></pre>
<p>The app is configured with the app setting <code>SCM_DO_BUILD_DURING_DEPLOYMENT</code> and the startup command <code>python -m uvicorn main:app --host 0.0.0.0 --workers 4</code>.</p>
<p>As shown above, I've tried simplifying the app as much as possible with no success or clear understanding of why it randomly doesn't pick up that the deploy was successful.</p>
<p>In case it is useful, I know the deployment is successful because the updated <code>/version</code> endpoint responds, and because in the Azure web portal, under Deployment > Deployment Center > Logs, I can see a successful run:
<a href="https://i.sstatic.net/0Q6GpsCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0Q6GpsCY.png" alt="Successful run in Deployment Center logs" /></a></p>
<p>The debug and verbose logs don't provide any useful feedback; it is simply an infinite loop of:</p>
<pre><code>cli.azure.cli.core.util: Request URL: 'https://management.azure.com/subscriptions/...'
<request details>
cli.azure.cli.core.util: Response status: 202
<answer details indicating instancesInProgress: 1 and successfulInstances: 0>
cli.azure.cli.command_modules.appservice.custom: Status: Starting the site... Time: 623(s)
cli.azure.cli.command_modules.appservice.custom: InprogressInstances: 1, SuccessfulInstances: 0
</code></pre>
<p>Has anyone found this issue and found a way to make the deployments consistently show success?</p>
|
<python><azure><azure-web-app-service>
|
2024-10-17 09:25:33
| 1
| 332
|
Jesus Diaz Rivero
|
79,097,077
| 281,201
|
Converting .py file to Databricks with Markdown
|
<p>I wish to convert a Python <code>.py</code> file into one that can be run on Databricks with multiple cells. I don't wish to do it in the GUI, as I'll eventually want an automated process for this. What is the minimal code to create a title cell and then have the rest of the code be Python code?
I've tried this, but it doesn't work:</p>
<pre><code>%md
# Title
%python
# My code goes here
</code></pre>
<p>I could copy some output from a Databricks notebook I created myself, I suppose, but it's really not that minimal.</p>
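<p>For reference (based on the source format Databricks itself exports, not on anything in the question): a Python-source notebook starts with a <code># Databricks notebook source</code> header, separates cells with <code># COMMAND ----------</code>, and encodes magics such as <code>%md</code> as <code># MAGIC</code> comment lines. A minimal generator might look like:</p>

```python
def to_databricks_notebook(title, code):
    # Emit the header, a markdown title cell, then one Python code cell.
    lines = [
        "# Databricks notebook source",
        "# MAGIC %md",
        f"# MAGIC # {title}",
        "",
        "# COMMAND ----------",
        "",
        code,
    ]
    return "\n".join(lines)
```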
|
<python><markdown><databricks>
|
2024-10-17 07:49:56
| 2
| 3,519
|
Warpspace
|
79,096,977
| 3,580,213
|
Dynamic type instantiation of Local class in Python 3.7
|
<p>I'm trying to do some custom serialization/deserialization and must be able to handle custom data-classes, even local ones within functions. I found a <a href="https://stackoverflow.com/questions/4821104/dynamic-instantiation-from-string-name-of-a-class-in-dynamically-imported-module">question about this with a proper solution</a>, but this works only for non-locals/nested classes.</p>
<p>Example code:</p>
<pre><code>from dataclasses import dataclass

class Foo:
    def bar(self) -> None:
        @dataclass
        class TestClass:
            val1: str
            val2: int

        instance = TestClass("", 1)
        print(type(instance))

test = Foo()
test.bar()
print(type(test))
</code></pre>
<p>This snippet gives me this output:</p>
<pre><code><class '__main__.Foo.bar.<locals>.TestClass'>
<class '__main__.Foo'>
</code></pre>
<p>When trying to retrieve the type from the given string, this works for the class <code>Foo</code>, but not for <code>Foo.bar.<locals>.TestClass</code>:</p>
<pre><code>module = __import__("python_typing_example")
foo_type = getattr(module, "Foo") # <-- Works fine
test_class_type = getattr(module, "Foo.bar.<locals>.TestClass") # <-- Gives AttributeError
</code></pre>
<p>Is there a solution to find local types within other classes/methods which have a local scope?</p>
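<p>For context (a sketch of the underlying issue, not a solution): a class defined inside a function body is a local variable of that call, so it is never an attribute of the module or of the method object; only its <code>__qualname__</code> records the scope:</p>

```python
class Foo:
    def bar(self):
        class TestClass:
            pass
        return TestClass

cls = Foo().bar()

# The qualified name records the local scope ...
assert "<locals>" in cls.__qualname__
# ... but the class is not reachable as an attribute of Foo.bar.
assert not hasattr(Foo.bar, "TestClass")
```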
|
<python><python-3.7>
|
2024-10-17 07:21:06
| 1
| 867
|
freakinpenguin
|
79,096,965
| 4,094,231
|
How to append a middleware in a specific spider along with the ones set in settings.py?
|
<p>There are certain middlewares enabled for all spiders in <code>settings.py</code>.</p>
<p>How to append another middleware for one specific spider, along with all the ones in <code>settings.py</code>?</p>
<p>Let say <code>settings.py</code> is:</p>
<pre class="lang-py prettyprint-override"><code>DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware1': 543,
    'myproject.middlewares.CustomDownloaderMiddleware2': 544,
}
</code></pre>
</code></pre>
<p>If I set that middleware via <code>custom_settings</code> in my spider, then all others set in <code>settings.py</code> are ignored.</p>
<p>I tried</p>
<pre class="lang-py prettyprint-override"><code>class MySpider(Spider):
    name = 'my_spider'

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        settings = get_project_settings()
        # Get existing middlewares
        middlewares = settings.get('DOWNLOADER_MIDDLEWARES', {})
        # Append or update your additional middleware
        middlewares['myproject.middlewares.MyAdditionalMiddleware'] = 550
        # Apply it to the spider's settings
        self.custom_settings = {
            'DOWNLOADER_MIDDLEWARES': middlewares
        }

    def start_requests(self):
        # Spider logic here
        pass
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>from scrapy.spiders import Spider
from scrapy.utils.project import get_project_settings

class MySpider(Spider):
    name = 'my_spider'

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # Get the global settings
        settings = crawler.settings
        # Get the existing middlewares
        middlewares = settings.get('DOWNLOADER_MIDDLEWARES', {}).copy()
        # Append or update your additional middleware
        middlewares['myproject.middlewares.MyAdditionalMiddleware'] = 550
        # Update the spider's settings for this instance
        spider.custom_settings = {
            'DOWNLOADER_MIDDLEWARES': middlewares
        }
        return spider

    def start_requests(self):
        # Spider logic here
        pass
</code></pre>
<p>But the <code>MyAdditionalMiddleware</code> is not activated in my spider.</p>
<h3>Note</h3>
<p>The code above was generated by ChatGPT: <a href="https://chatgpt.com/share/6710b8b3-eda8-800a-a63d-5a244501475b" rel="nofollow noreferrer">https://chatgpt.com/share/6710b8b3-eda8-800a-a63d-5a244501475b</a></p>
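<p>Worth noting: Scrapy reads <code>custom_settings</code> as a <em>class</em> attribute (through the spider's <code>update_settings</code> classmethod) before <code>__init__</code> or <code>from_crawler</code> ever run, so both snippets assign it too late. A plain-Python sketch of building the merged dict at class-definition time instead (module paths hypothetical):</p>

```python
# Stand-in for the dict in settings.py; in a real project this could be
# imported from a shared module rather than duplicated here.
PROJECT_DOWNLOADER_MIDDLEWARES = {
    'myproject.middlewares.CustomDownloaderMiddleware1': 543,
    'myproject.middlewares.CustomDownloaderMiddleware2': 544,
}

class MySpider:  # stands in for scrapy.Spider
    name = 'my_spider'
    # Evaluated when the class is defined, i.e. early enough for Scrapy.
    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {
            **PROJECT_DOWNLOADER_MIDDLEWARES,
            'myproject.middlewares.MyAdditionalMiddleware': 550,
        }
    }
```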
|
<python><scrapy>
|
2024-10-17 07:18:13
| 2
| 21,655
|
Umair Ayub
|
79,096,544
| 21,305,238
|
What is the `πthon` executable?
|
<p>On Ubuntu and other Linux-based systems, Python 3.14's <code>venv</code> creates an extra executable named <code>πthon</code>:</p>
<pre class="lang-bash prettyprint-override"><code>$ python --version
Python 3.13.0
$ python -m venv .venv
$ cd .venv/bin && ls
Activate.ps1 activate activate.csh activate.fish pip pip3 pip3.13 python python3 python3.13
</code></pre>
<pre class="lang-bash prettyprint-override"><code>$ python --version
Python 3.14.0a1+
$ python -m venv .venv
$ cd .venv/bin && ls
πthon Activate.ps1 activate activate.csh activate.fish pip pip3 pip3.14 python python3 python3.14
</code></pre>
<p>What does it do and why is it there?</p>
|
<python><python-venv>
|
2024-10-17 05:09:25
| 1
| 12,143
|
InSync
|
79,096,421
| 1,492,229
|
How to split my dataset into Test and Train without repetition?
|
<p>I am developing a Python script to test an algorithm. I have a dataset that I need to split into 80% for training and 20% for testing. However, I want to save the test set for further analysis, ensuring no overlap with previous test sets.</p>
<p>Although my code works well overall, I encountered one issue: the test dataset sometimes contains records that were already selected in previous test runs due to the random selection process.</p>
<p>By the end of the process, all records should have been tested in exactly one of the runs.</p>
<p>To clarify with an example:</p>
<ul>
<li>On the first run, my dataset <code>{0,1,2,3,4,5,6,7,8,9}</code> is split into a training set <code>{0,1,2,4,5,7,8,9}</code> and a test set <code>{3,6}</code>.</li>
<li>On the second run, the training set is <code>{0,1,2,3,4,5,7,9}</code> and the test set is <code>{6,8}</code>.</li>
</ul>
<p>As you can see, the record <code>{6}</code> was selected twice for testing, which I want to avoid.</p>
<p>How can I modify the code to ensure that the 20% test set is chosen randomly each time but excludes any records that were previously selected?</p>
<p>Here is the current code:</p>
<pre><code>import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("CustomersInfo.csv")
y = df['CustomerRank']
X = df.drop('CustomerRank', axis=1, errors='ignore')

for RandStat in [11, 22, 33, 44, 55]:
    #---------------------------------------------------------------
    # This is the part that needs to be fixed
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=RandStat)
    #---------------------------------------------------------------
    clf = XGBClassifier(random_state=RandStat)
    clf.fit(X_train, y_train)
    fnStoreAnalyse(y_train)
</code></pre>
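<p>One way to guarantee disjoint test sets that jointly cover every record is to shuffle the row indices once and carve them into five non-overlapping 20% folds (this is essentially what <code>sklearn.model_selection.KFold</code> with <code>shuffle=True</code> does); a plain-NumPy sketch, with 10 rows standing in for the real dataframe:</p>

```python
import numpy as np

rng = np.random.default_rng(42)       # fixed seed: reproducible splits
indices = rng.permutation(10)         # stand-in for df.index
folds = np.array_split(indices, 5)    # five disjoint 20% test sets

# Each pair: train on X.iloc[train_idx], evaluate on X.iloc[test_idx].
splits = [(np.setdiff1d(indices, test_idx), test_idx) for test_idx in folds]
```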
|
<python><machine-learning><train-test-split>
|
2024-10-17 03:50:34
| 1
| 8,150
|
asmgx
|
79,096,144
| 5,350,089
|
Socket Timeout will kill the Thread in Python
|
<p>I am going to run the following Python program on my dedicated online server. For each new connection to the server, it creates a new thread, and the thread number keeps increasing. I have already set a timeout for the child connection, and that works fine, but I suspect the threads are not being cleaned up properly. How can I make sure each thread is cleared or terminated completely?</p>
<pre><code>import socket
from threading import Thread

def on_new_client(client_socket, addr):
    while True:
        client_socket.settimeout(10)
        data = client_socket.recv(1024).decode('utf-8')
        client_socket.settimeout(None)
        send = "GOK"
        client_socket.send(send.encode())
        if not data:
            print('Disconnected')
            break
        print("Adr and data", addr, data)
    client_socket.close()

def main():
    host = '127.0.0.1'  # allow any incoming connections
    port = 4001
    s = socket.socket()
    s.bind((host, port))  # bind the socket to the port and ip address
    s.listen(5)  # wait for new connections
    while True:
        c, addr = s.accept()  # Establish connection with client.
        print("New connection from:", addr)
        thread = Thread(target=on_new_client, args=(c, addr))  # create the thread
        thread.start()  # start the thread
        c.close()
        thread.join()

if __name__ == '__main__':
    main()
</code></pre>
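<p>For what it's worth, a sketch of thread lifetime in CPython: a thread ends as soon as its target function returns, and <code>threading.enumerate()</code> / <code>active_count()</code> only report threads that are still alive, so a steadily climbing thread count usually means the handler function never returns:</p>

```python
import threading
import time

def worker():
    time.sleep(0.1)  # stands in for handling one client connection

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for each worker to finish

# After join() returns, none of the workers are alive any more.
alive = [t for t in threads if t.is_alive()]
```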
|
<python><multithreading><sockets><tcp><timeout>
|
2024-10-17 00:35:10
| 1
| 445
|
Sathish
|
79,096,122
| 4,582,026
|
Call function within a function but keep default values if not specified
|
<p>I have two sub-functions that feed into one main function, as defined below:</p>
<p>Sub function 1:</p>
<pre><code>def func(x=1, y=2):
    z = x + y
    return z
</code></pre>
<p>Sub function 2:</p>
<pre><code>def func2(a=3, b=4):
    c = a - b
    return c
</code></pre>
<p>Main function:</p>
<pre><code>def finalFunc(lemons, input1, input2, input3, input4):
    result = func(input1, input2) + func2(input3, input4) + lemons
    return result
</code></pre>
<p>How do I call my main function so that any sub-function arguments I don't specify fall back to their defaults, while any I do specify are used instead? E.g.</p>
<pre><code>>>> finalFunc(lemons=1)
3
</code></pre>
<p>or</p>
<pre><code>>>> finalFunc(lemons=1, input1=4, input4=6)
4
</code></pre>
<p>I don't want to specify the default values in my main function, as the sub functions are always changing. I want to keep the default values set at whatever the sub functions contain.</p>
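<p>One common pattern (a sketch, not the only option) is a sentinel default: forward only the arguments the caller actually supplied, so the sub-functions keep authority over their own defaults:</p>

```python
def func(x=1, y=2):
    return x + y

def func2(a=3, b=4):
    return a - b

_UNSET = object()  # sentinel meaning "caller did not supply this argument"

def finalFunc(lemons, input1=_UNSET, input2=_UNSET, input3=_UNSET, input4=_UNSET):
    # Forward only supplied arguments; omitted ones use the sub-function defaults.
    kw1 = {n: v for n, v in (('x', input1), ('y', input2)) if v is not _UNSET}
    kw2 = {n: v for n, v in (('a', input3), ('b', input4)) if v is not _UNSET}
    return func(**kw1) + func2(**kw2) + lemons
```

With the defaults above, <code>finalFunc(lemons=1)</code> returns 3 and <code>finalFunc(lemons=1, input1=4, input4=6)</code> returns 4, matching the examples.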
|
<python>
|
2024-10-17 00:15:47
| 4
| 549
|
Vik
|
79,096,118
| 1,809,784
|
Crypto.com Exchange API Create Order unauthorised
|
<p>I am trying to use the Crypto.com API, and when I call private/create-order to buy an instrument I get a 401 HTTP status code with error code 40101, which means
"Unauthorised - Not authenticated, or key/signature incorrect".</p>
<p>I tried calling private/user-balance and I can successfully get my balance, so it can't be an IP issue.</p>
<p>The difference between the two calls is that the balance call is a GET with no params, whereas the create-order call does have params. The params are part of the signature that is required for all authenticated calls, and I believe I am doing something wrong when building it.</p>
<p>I am following this documentation -> <a href="https://exchange-docs.crypto.com/exchange/v1/rest-ws/index.html#introduction" rel="nofollow noreferrer">https://exchange-docs.crypto.com/exchange/v1/rest-ws/index.html#introduction</a></p>
<p>This is my code for building the signature</p>
<h1>Sign the request using HMAC SHA256</h1>
<pre><code>import hashlib
import hmac
import time

def sign_request(method, params, request_id):
    """Generate the signature following the HMAC-SHA256 process."""
    # Step 1: Sort the request parameters alphabetically
    if params:
        param_str = ''.join([f'{key}{params[key]}' for key in sorted(params)])
    else:
        param_str = ''
    # Step 2: Concatenate method, request id, api_key, param_str, and nonce
    nonce = str(int(time.time() * 1000))
    signature_payload = method + request_id + API_KEY + param_str + nonce
    # Step 3: Create the HMAC-SHA256 signature using the API_SECRET
    signature = hmac.new(
        bytes(API_SECRET, 'utf-8'),
        bytes(signature_payload, 'utf-8'),
        hashlib.sha256
    ).hexdigest()  # Step 4: Output the signature as a hex string
    return signature, nonce
</code></pre>
<p>and this is what I am sending in as params</p>
<h1>Params</h1>
<pre><code>params = {
    'instrument_name': symbol,
    'side': side,             # 'buy' or 'sell'
    'type': 'MARKET',         # or 'LIMIT'
    'price': price,
    'quantity': quantity,
    'client_oid': client_oid  # Add client order ID
}
</code></pre>
</code></pre>
|
<python><crypto.com-exchange-api>
|
2024-10-17 00:10:29
| 1
| 538
|
Robert Dinaro
|
79,095,934
| 9,983,652
|
how to extract a cell value from a dataframe?
|
<p>I am trying to extract a single cell value from a dataframe, but I always get a Series instead of a scalar value.</p>
<p>For example:</p>
<pre><code>df_test = pd.DataFrame({'Well': ['test1', 'test2', 'test3'], 'Region': ['east', 'west', 'east']})
df_test
#     Well Region
# 0  test1   east
# 1  test2   west
# 2  test3   east

well = 'test2'
region_thiswell = df_test.loc[df_test['Well'] == well, 'Region']
region_thiswell
# 1    west
# Name: Region, dtype: object
</code></pre>
<p>I am expecting variable of region_thiswell is equal to 'west' string only. Why I am getting a series?</p>
<p>Thanks</p>
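<p>For context, boolean-mask <code>.loc</code> selection always returns a Series, because the mask could in principle match any number of rows; a sketch of taking the scalar out explicitly when exactly one row matches:</p>

```python
import pandas as pd

df_test = pd.DataFrame({'Well': ['test1', 'test2', 'test3'],
                        'Region': ['east', 'west', 'east']})

sel = df_test.loc[df_test['Well'] == 'test2', 'Region']
region = sel.iloc[0]  # first matching value; sel.item() also works
                      # when the selection has exactly one element
```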
|
<python><pandas><indexing>
|
2024-10-16 21:57:46
| 3
| 4,338
|
roudan
|
79,095,927
| 128,967
|
Providing an IPython Interpreter During Development in a Hatch Project
|
<p>I have a project using Hatch as a build system for <code>pyproject</code>. My <code>pyproject.toml</code> looks like this:</p>
<pre class="lang-ini prettyprint-override"><code>[project]
name = "demo"
description = "Demo Project"
version = "0.0.1"
readme = "README.md"
requires-python = ">=3.12"
dependencies = []

[project.optional-dependencies]
test = [
    "pytest",
    "ipython",
]

[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"

[tool]

[tool.hatch.envs.default]
type = "virtual"
path = "venv"
</code></pre>
<p>I prefer using IPython as a REPL instead of just regular Python, since it adds a lot of features and nice-to-haves during development.</p>
<p>I've listed the dependency in the <code>test</code> group of my optional dependencies, but I can't seem to find the installed package on disk or a way to execute it.</p>
<p>Is there a way for me to use IPython in a Hatch project like this without making it a project dependency? It's only relevant during local development.</p>
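<p>For reference (a sketch based on Hatch's environment configuration; double-check the key names against the Hatch docs): an environment can pull in optional-dependency groups through the <code>features</code> key, which keeps IPython out of the project's runtime dependencies:</p>

```toml
[tool.hatch.envs.default]
type = "virtual"
path = "venv"
features = ["test"]  # installs the [project.optional-dependencies] "test" group
```

<code>hatch run ipython</code> would then launch the REPL inside that environment.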
|
<python><ipython><pyproject.toml><hatch>
|
2024-10-16 21:51:19
| 1
| 92,570
|
Naftuli Kay
|
79,095,884
| 642,793
|
pymqi - 2009 - FAILED: MQRC_CONNECTION_BROKEN
|
<p>I'm running the simple Python MQ put/get program below on my local machine; it connects to a list of queue managers running on a remote machine 'qmgr.test.com'.</p>
<p>I'm getting MQRC 2009 error.</p>
<p><code>MQ Error for QMGR1: 2009 - FAILED: MQRC_CONNECTION_BROKEN</code></p>
<p><code>MQ Error for QMGR2: 2009 - FAILED: MQRC_CONNECTION_BROKEN</code></p>
<ul>
<li>Connectivity between my local machine and the MQ host is good (telnet to the MQ host and port succeeds).</li>
<li>QMGR1 and QMGR2 are running fine.</li>
<li>I was able to put/get a message to the same queue managers and queue from my local machine using the 'amqsputc' command.</li>
<li>The MQ connection details are intact; I verified them using print statements in the code.</li>
</ul>
<p>what could cause MQRC 2009 error?</p>
<pre><code>import pymqi
import time
import logging

# Configuration details for IBM MQ
queue_managers = [
    {
        'queue_manager': 'QMGR1',
        'channel': 'QMGR1.SVRCONN',
        'queue_name': 'QMGR1.QUEUE',
        'host': 'qmgr.test.com',
        'port': '1441',
        'user': 'mquser',
        'password': '******'
    },
    {
        'queue_manager': 'QMGR2',
        'channel': 'QMGR2.SVRCONN',
        'queue_name': 'QMGR2.QUEUE',
        'host': 'qmgr.test.com',
        'port': '1441',
        'user': 'mquser',
        'password': '******'
    },
]

logging.basicConfig(filename='mq_health_check.log', level=logging.INFO)

def check_queue_manager_health(queue_manager_info, message):
    qmgr = None
    try:
        # Connection info
        conn_info = f"{queue_manager_info['host']}({queue_manager_info['port']})"
        qmgr = pymqi.connect(queue_manager_info['queue_manager'], queue_manager_info['channel'], conn_info, queue_manager_info['user'], queue_manager_info['password'])
        # Open the queue
        queue = pymqi.Queue(qmgr, queue_manager_info['queue_name'])
        # Send the message
        queue.put(message)
        logging.info(f"Message sent to {queue_manager_info['queue_manager']} on {queue_manager_info['queue_name']}.")
        # Retrieve the message
        received_message = queue.get()
        # Check queue depth
        queue_depth = queue.inquire(pymqi.CMQC.MQIA_CURRENT_Q_DEPTH)
        logging.info(f"Queue Depth for {queue_manager_info['queue_manager']} on {queue_manager_info['queue_name']}: {queue_depth}")
        queue.close()
    except pymqi.MQMIError as e:
        print(f"MQ Error for {queue_manager_info['queue_manager']}: {e.reason} - {e.errorAsString()}")
    except Exception as e:
        # Generic exceptions have no errorAsString(); print the exception itself
        print(f"General Error for {queue_manager_info['queue_manager']}: {e}")
    finally:
        if qmgr:
            qmgr.disconnect()

if __name__ == "__main__":
    # Message
    message = "Test Message for monitoring"
    # Send the message to each queue manager
    for manager in queue_managers:
        check_queue_manager_health(manager, message)
</code></pre>
<p>Update: error logs from the MQ client:</p>
<pre><code>----- amqxufnx.c : 1446 ------------------------------------------------------- 10/16/24 15:58:21 - Process(15507.1) Program(Python)
Host(AB89548) Installation(MQNI93L22121400D)
VRMF(9.3.1.0)
Time(2024-10-16T20:58:21.725Z)
ArithInsert1(24948) ArithInsert2(2)
CommentInsert1(/opt/homebrew/lib64/libmqe_r.dylib)
CommentInsert2(No such file or directory)
CommentInsert3(64)
AMQ6174I: Failed to open catalog for error message id 0x6174, inserts: 24948, 2, /opt/homebrew/lib64/libmqe_r.dylib, No such file or directory and 64. Issue "mqrc AMQ6174" on a different system for a description of the message.
----- amqxufnx.c : 1446 -------------------------------------------------------
</code></pre>
|
<python><ibm-mq><pymqi>
|
2024-10-16 21:28:36
| 1
| 1,077
|
Vignesh
|
79,095,809
| 3,641,435
|
Using pyparsing for parsing filter expressions
|
<p>I'm currently trying to write a parser (using pyparsing) that can parse strings which are then applied to a (pandas) dataframe to filter data. After much trial and error, I've got it working for all kinds of example strings; however, I'm having trouble extending it further from this point.</p>
<p>First, here is my current code (which should work if you just copy-paste it, at least on my Python 3.11.9 with pyparsing 3.1.2):</p>
<pre class="lang-py prettyprint-override"><code>import pyparsing as pp

# Define the components of the grammar
field_name = pp.Word(pp.alphas + "_", pp.alphanums + "_")
action = pp.one_of("include exclude")
sub_action = pp.one_of("equals contains starts_with ends_with greater_than not_equals not_contains not_starts_with not_ends_with empty not_empty less_than less_than_or_equal_to greater_than_or_equal_to between regex in_list not_in_list")

# Custom regex pattern parser that handles regex ending at the first space
def regex_pattern():
    def parse_regex(t):
        # Join tokens to form the regex pattern
        return ''.join(t[0])
    return pp.Regex(r'[^ ]+')("regex").setParseAction(parse_regex)

# Define value as either a quoted string, a regex pattern, or a simple word with allowed characters
quoted_string = pp.QuotedString('"')
unquoted_value = pp.Word(pp.alphanums + "_-;, ") | pp.Regex(r'[^/]+')
value = pp.Optional(quoted_string | regex_pattern() | unquoted_value)("value")

slash = pp.Suppress("/")
filter_expr = pp.Group(field_name("field") + slash + action("action") + slash + sub_action("sub_action") + pp.Optional(slash + value, default=""))

# Define logical operators
and_op = pp.one_of("AND and")
or_op = pp.one_of("OR or")
not_op = pp.one_of("NOT not")

# Define the overall expression using infix notation
expression = pp.infixNotation(filter_expr,
    [
        (not_op, 1, pp.opAssoc.RIGHT),
        (and_op, 2, pp.opAssoc.LEFT),
        (or_op, 2, pp.opAssoc.LEFT)
    ])

# List of test filters
test_filters = [
    "order_type/exclude/contains/STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY",
    "order_status/include/regex/^New$ AND order_id/include/equals/123;124;125",
    "order_id/include/equals/123;124;125",
    "order_id/include/equals/125 OR currency/include/equals/EUR",
    "trade_amount/include/greater_than/1500 AND currency/include/equals/USD",
    "trade_amount/include/between/1200-2000 AND currency/include/in_list/USD,EUR",
    "order_status/include/starts_with/New;Filled OR order_status/include/ends_with/ed",
    "order_status/exclude/empty AND filter_code/include/not_empty",
    "order_status/include/regex/^New$",
    "order_status/include/regex/^New$ OR order_status/include/regex/^Changed$",
    "order_status/include/contains/New;Changed"
]

# Loop over test filters, parse each, and display the results
for test_string in test_filters:
    print(f"Testing filter: {test_string}")
    try:
        parse_result = expression.parse_string(test_string, parseAll=True).asList()[0]
        print(f"Parsed result: {parse_result}")
    except Exception as e:
        print(f"Error with filter: {test_string}")
        print(e)
    print("\n")
</code></pre>
<p>Now, if you run the code, you'll notice that all the test strings parse just fine, except the first element of the list, <code>"order_type/exclude/contains/STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY"</code>.</p>
<p>The problem (as far as I can tell) is that the empty space between "STOP" and "ORDER" is being recognized as the end of the "value" part of that part of the group, and then it breaks.</p>
<p>What I've tried is using <code>SkipTo</code> to skip to the next logical operator once the sub_action part is done, but that didn't work. I also wasn't sure how extendable that would be, because in theory it should be possible to chain many expressions (e.g. part1 AND part2 OR part3), where each part consists of the 3-4 elements (field_name, action, sub_action and the optional value).</p>
<p>I've also tried extending the unquoted_value to also include empty spaces, but that changed nothing, either.</p>
<p>I've also looked at some of the examples over at <a href="https://github.com/pyparsing/pyparsing/tree/master/examples" rel="nofollow noreferrer">https://github.com/pyparsing/pyparsing/tree/master/examples</a>, but I couldn't really see anything that was similar to my use case. (Maybe once my code is working properly, it could be added as an example there, not sure how useful my case is to others).</p>
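<p>One direction that might help (a sketch using plain <code>re</code> to illustrate the idea, since the same pattern can be dropped into <code>pp.Regex</code>): make the value expression consume characters only while the text ahead is <em>not</em> a top-level logical operator, using a tempered negative lookahead:</p>

```python
import re

# A value keeps consuming non-slash characters unless the upcoming text
# looks like " AND ", " OR " or " NOT " (case-insensitive).
VALUE = re.compile(r'(?:(?!\s+(?:AND|OR|NOT)\s)[^/])+', re.IGNORECASE)

m = VALUE.match("STOP ORDER AND order_validity/exclude/contains/GOOD FOR DAY")
```

Here <code>m.group(0)</code> is <code>"STOP ORDER"</code>: the match stops right before the <code>AND</code>, while a value such as <code>"GOOD FOR DAY"</code> (whose <code>FOR</code> merely contains <code>OR</code>) is kept whole.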
|
<python><pyparsing>
|
2024-10-16 20:55:06
| 1
| 491
|
Olorun
|
79,095,773
| 1,000,204
|
Github Actions cannot find my poetry-installed dev dependencies and therefore cannot find the aws_cdk module when trying to call cdk deploy
|
<p>I have a Mangum Python service with a CDK stack that I can deploy from my local box just fine. Here are the relevant files:</p>
<p>pyproject.toml</p>
<pre><code>[tool.poetry]
name = "dics-core-data-service"
version = "0.1.0"
description = ""
authors = ["Clayton <clayton.stetz@gmail.com>"]
readme = "README.md"
package-mode = false
[tool.poetry.dependencies]
python = "^3.12"
fastapi = "^0.115.0"
uvicorn = "^0.31.0"
python-dotenv = "^1.0.1"
pydantic = {extras = ["email"], version = "^2.9.2"}
phonenumbers = "^8.13.46"
pydantic-extra-types = "^2.9.0"
mangum = "^0.19.0"
constructs = "^10.3.0"
psycopg2-binary = "^2.9.9"
[tool.poetry.dev-dependencies]
pytest = "^8.3.3"
aws-cdk-lib = "^2.161.1"
aws-cdk-aws-lambda-python-alpha = "^2.161.1a0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>relevant sections of poetry.lock</p>
<pre><code>[[package]]
name = "aws-cdk-lib"
version = "2.161.1"
description = "Version 2 of the AWS Cloud Development Kit library"
optional = false
python-versions = "~=3.8"
files = [
    {file = "aws_cdk_lib-2.161.1-py3-none-any.whl", hash = "sha256:c7de930396b1b9f0f512a728a1b926c77c3cab28fbc11fd4f81819dd9563bfb3"},
    {file = "aws_cdk_lib-2.161.1.tar.gz", hash = "sha256:e27a427bc6d95dd2eb0500b0de628a88ee587b212f999fa6efb7e9ab17980201"},
]

[package.dependencies]
"aws-cdk.asset-awscli-v1" = ">=2.2.202,<3.0.0"
"aws-cdk.asset-kubectl-v20" = ">=2.1.2,<3.0.0"
"aws-cdk.asset-node-proxy-agent-v6" = ">=2.1.0,<3.0.0"
"aws-cdk.cloud-assembly-schema" = ">=38.0.0,<39.0.0"
constructs = ">=10.0.0,<11.0.0"
jsii = ">=1.103.1,<2.0.0"
publication = ">=0.0.3"
typeguard = ">=2.13.3,<5.0.0"

[[package]]
name = "aws-cdk-aws-lambda-python-alpha"
version = "2.161.1a0"
description = "The CDK Construct Library for AWS Lambda in Python"
optional = false
python-versions = "~=3.8"
files = [
    {file = "aws_cdk.aws_lambda_python_alpha-2.161.1a0-py3-none-any.whl", hash = "sha256:bc50b108080d06c68d0d8468467b59751082e1a7b553452a92175bf03e38d0aa"},
    {file = "aws_cdk_aws_lambda_python_alpha-2.161.1a0.tar.gz", hash = "sha256:3543cbaeabb6fb2c8e694cf5c2525ab5f1965130cc7db79a929c8c38be40db84"},
]

[package.dependencies]
aws-cdk-lib = ">=2.161.1,<3.0.0"
constructs = ">=10.0.0,<11.0.0"
jsii = ">=1.103.1,<2.0.0"
publication = ">=0.0.3"
typeguard = ">=2.13.3,<5.0.0"
</code></pre>
<p>If I run <code>poetry install</code> on my local box then run <code>cdk deploy --context stage="dev"</code>, it succeeds. It sees my dev dependencies just fine and can run the cdk code I have that requires aws-cdk-lib.</p>
<p>However, I need to run this in github actions as well so that I can deploy changesets when the "dev" and "stage" and "prod" branches get pushed to. This is where I hit issues.</p>
<p>Here's my github actions yaml:</p>
<pre><code>name: AWS Service CI/CD

on:
  push:
    branches: [dev]

jobs:
  build:
    environment: dev
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Python 3.12
        uses: actions/setup-python@v4
        with:
          python-version: "3.12"

      - name: Set up Node
        uses: actions/setup-node@v3
        with:
          node-version: "22"

      - name: Service - Install Python and CDK
        run: |
          cd serverlessservice
          python -m pip install --upgrade pip
          npm install -g aws-cdk

      - name: Service - Install Poetry
        uses: Gr1N/setup-poetry@v8

      - name: Service - Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@master
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: "us-west-2"

      - name: Service - Install Dependencies
        run: |
          cd serverlessservice
          poetry install
          cdk deploy --context stage="dev"
</code></pre>
<p>The very last line fails and the output is as follows:</p>
<pre><code>Traceback (most recent call last):
File "/home/runner/work/core-data-service/core-data-service/serverlessservice/./infra.py", line 2, in <module>
import aws_cdk as cdk
ModuleNotFoundError: No module named 'aws_cdk'
Subprocess exited with error 1
</code></pre>
<p>If I manually install aws-cdk-lib, the error changes to complain about the other dev dependency I have listed, which is my main evidence that it's the dev dependencies that are the issue.</p>
<p>If I manually install all my dev dependencies needed by my cdk-deploy call using pip install, it DOES work but then the deploy package is way too big and gets rejected, because aws-cdk-lib is huge and should not be in production deploys.</p>
<p>This is the final blocker in a long line of issues with Poetry and GitHub Actions; someone save me :D</p>
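<p>A hypothesis worth testing (assuming the usual setup-poetry behavior): <code>poetry install</code> puts the dependencies into a virtualenv that Poetry manages, but the bare <code>cdk deploy</code> call shells out to whatever <code>python</code> is on the runner's PATH, which is outside that virtualenv. Prefixing the CDK call with <code>poetry run</code> executes it with the virtualenv active:</p>

```yaml
- name: Service - Install Dependencies
  run: |
    cd serverlessservice
    poetry install
    # run cdk inside the Poetry-managed virtualenv so infra.py sees aws-cdk-lib
    poetry run cdk deploy --context stage="dev"
```

<p>Locally this works without <code>poetry run</code> only when the shell already has the virtualenv activated. Since <code>aws-cdk-lib</code> lives in the dev group, it also stays out of the Lambda bundle.</p>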
|
<python><github-actions><aws-cdk><python-poetry>
|
2024-10-16 20:43:24
| 1
| 329
|
kamii
|
79,095,766
| 5,201,265
|
401 Unauthorized Error when fetching Apple's public keys
|
<p>I keep getting a 401 Unauthorized error when fetching Apple's public keys:</p>
<pre><code>In [14]: print(f"Error fetching public keys: {response.status_code} {response.text}")
Error fetching public keys: 401 Unauthenticated
</code></pre>
<p>I've verified that the Key ID, Issuer ID, and private key file are all correct, with the private key having admin access. The server time is correctly set to UTC. Given this, I can't identify what might be causing the issue. Any insights?</p>
<pre><code>def generate_apple_developer_token():
# Load the private key in PEM format
with open(PRIVATE_KEY_FILE, 'rb') as key_file:
private_key = serialization.load_pem_private_key(
key_file.read(),
password=None,
backend=default_backend()
)
# JWT header
headers = {
"alg": "ES256", # Algorithm: Elliptic Curve
"kid": KEY_ID, # Key ID from Apple Developer
"typ": "JWT" # Type: JWT
}
# JWT payload
payload = {
"iss": ISSUER_ID, # Issuer ID from Apple Developer
"iat": int(datetime.utcnow().timestamp()), # Issued at time
"exp": int((datetime.utcnow() + timedelta(minutes=10)).timestamp()), # Expiration (max 10 minutes)
"aud": "appstoreconnect-v1", # Audience
}
# Encode the header and payload as base64
header_base64 = base64.urlsafe_b64encode(json.dumps(headers).encode()).decode().rstrip("=")
payload_base64 = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
# Concatenate header and payload
message = f"{header_base64}.{payload_base64}".encode()
# Sign the message using ECDSA with SHA256
signature = private_key.sign(
message,
ec.ECDSA(hashes.SHA256())
)
# Convert the DER-encoded signature to raw format (r and s concatenated)
der_to_raw_ecdsa_format = lambda der: der[4:36] + der[-32:]
# Convert the signature to raw format (64 bytes)
signature_64 = der_to_raw_ecdsa_format(signature)
# Base64 URL-encode the signature
signature_base64 = base64.urlsafe_b64encode(signature_64).decode().rstrip("=")
# Concatenate header, payload, and signature to form the JWT
jwt_token = f"{header_base64}.{payload_base64}.{signature_base64}"
return jwt_token
def get_apple_public_keys():
try:
# Generate a fresh JWT
developer_token = generate_apple_developer_token()
# Set up headers with the authorization token
headers = {
"Authorization": f"Bearer {developer_token}"
}
# Fetch the public keys from Apple
response = requests.get('https://api.storekit.itunes.apple.com/in-app-purchase/publicKeys', headers=headers)
# Log the response if it's not successful
if response.status_code != 200:
print(f"Error fetching public keys: {response.status_code} {response.text}")
response.raise_for_status() # Raises an exception for 4xx/5xx errors
# Parse and return the public keys
response_data = response.json()
keys = response_data.get('keys')
if not keys:
print("No 'keys' found in the response from Apple.")
return []
return keys
except requests.exceptions.RequestException as e:
print(f"Error fetching Apple's public keys: {e}")
return []
</code></pre>
<p>I also tried using jwt to implement the jwt token</p>
<pre><code>def generate_apple_developer_token():
with open(PRIVATE_KEY_FILE, 'r') as f:
private_key = f.read().strip()
print('this is the key', private_key)
# JWT header
headers = {
"alg": "ES256", # Apple uses ES256 (Elliptic Curve)
"kid": KEY_ID,
"typ": "JWT"
}
# JWT payload
payload = {
"iss": ISSUER_ID,
"iat": int(datetime.utcnow().timestamp()), # Issued at time
"exp": int((datetime.utcnow() + timedelta(minutes=10)).timestamp()), # Expiration (max 10 minutes)
"aud": "appstoreconnect-v1",
}
# Generate and return the JWT
return jwt.encode(payload, private_key, algorithm="ES256", headers=headers)
</code></pre>
<p>In either case, I checked the tokens on jwt.io and they decode correctly, but for some reason they fail authentication.</p>
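<p>A sketch, not verified against your account (<code>PRIVATE_KEY_FILE</code> contents, <code>KEY_ID</code>, <code>ISSUER_ID</code>, and <code>BUNDLE_ID</code> are placeholders you would supply). Two things worth checking against Apple's docs: the App Store Server API (<code>api.storekit.itunes.apple.com</code>) expects a <code>bid</code> (app bundle ID) claim that plain App Store Connect tokens do not carry, and hand-rolled DER-to-raw slicing like <code>der[4:36] + der[-32:]</code> corrupts some signatures, because DER-encoded <code>r</code> and <code>s</code> vary in length. PyJWT's ES256 path produces the raw 64-byte signature correctly:</p>

```python
import time

import jwt  # PyJWT; uses the cryptography package for ES256

def generate_token(private_key_pem, key_id, issuer_id, bundle_id):
    """Build an ES256 JWT for the App Store Server API (sketch)."""
    now = int(time.time())
    payload = {
        "iss": issuer_id,
        "iat": now,
        "exp": now + 600,            # at most 10 minutes ahead
        "aud": "appstoreconnect-v1",
        "bid": bundle_id,            # bundle ID claim required by this API
    }
    return jwt.encode(payload, private_key_pem,
                      algorithm="ES256", headers={"kid": key_id})
```

<p>If the 401 persists with a correct <code>bid</code>, also confirm the key was generated under "In-App Purchase" rather than as a generic App Store Connect API key.</p>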
|
<python><ios><jwt><apple-push-notifications>
|
2024-10-16 20:41:24
| 1
| 647
|
Lucia
|
79,095,560
| 1,457,380
|
Technique for traveling through a dictionary
|
<p>Given a dictionary where the keys hold coordinates and the values hold a list of possible values, how to output a list of <strong>all</strong> the dictionaries that can be created with a single value taken from each list?</p>
<pre><code>input = {(0, 0): [1, 2], (0, 1): [3, 4]}
output = [{(0, 0): 1, (0, 1): 3}, {(0, 0): 1, (0, 1): 4}, {(0, 0): 2, (0, 1): 3}, {(0, 0): 2, (0, 1): 4}]
</code></pre>
<p>Note: The method must be able to handle a dictionary with lists of unequal size and <code>None</code> values (the case where values are <code>None</code> can be skipped), e.g.</p>
<pre><code>{(0, 0): [1, 2], (0, 1): [3, 4], (1, 0): [5, 6, 7], (1, 1): None}
</code></pre>
<p>Sketch of loop | maybe not the right approach?</p>
<pre><code>def loop(input):
"""return a lits of dictionaries that store every coordinate/value pair among the dictionary of candidate values"""
# initialize a list to hold all the dictionaries
lst = []
# initialize dictionary to hold coordinate/value pairs
dct = {}
for key, values in input.items():
for v in values:
# what can I do here?
if key not in dct:
# check dct to enforce uniqueness?
kv = {key: v}
dct.update(kv)
if dct not in lst:
lst.append(dct)
return lst
loop(input)
# [{(0, 0): 1, (0, 1): 3}, {(0, 0): 1, (0, 1): 4}, {(0, 0): 2, (0, 1): 3}, {(0, 0): 2, (0, 1): 4}]
</code></pre>
<p>Also, as I travel through the dictionary, how do I make sure that I pick all combinations? How can I enforce an order?</p>
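<p>For what it's worth, this enumeration maps directly onto <code>itertools.product</code>, which also settles the ordering question (it varies the last key fastest) and copes with unequal list lengths. A sketch; <code>candidates</code> stands in for the variable named <code>input</code> above, renamed to avoid shadowing the builtin:</p>

```python
from itertools import product

def combos(candidates):
    # Drop keys whose candidate list is None; dict insertion order fixes the key order.
    keys = [k for k, v in candidates.items() if v is not None]
    # product() picks one value per list, covering every combination exactly once.
    return [dict(zip(keys, picks))
            for picks in product(*(candidates[k] for k in keys))]

print(combos({(0, 0): [1, 2], (0, 1): [3, 4]}))
# [{(0, 0): 1, (0, 1): 3}, {(0, 0): 1, (0, 1): 4}, {(0, 0): 2, (0, 1): 3}, {(0, 0): 2, (0, 1): 4}]
```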
|
<python><dictionary>
|
2024-10-16 19:27:14
| 1
| 10,646
|
PatrickT
|
79,095,529
| 4,400,686
|
Manim/python overwrites update function, animating only last updater
|
<p>I am attempting to make a Manim animation where dots follow a set of radial basis functions as a function of x. I've also made a vertical line that the dots should follow exactly. Strangely, the dots appear to overlap each other, making it look like only one dot is in the animation. Using <code>pdb.set_trace</code>, I saw that the updater function in each Dot gets changed in each iteration of the for loop. <code>deepcopy</code> of either the function or the <code>Dot</code> object does not seem to prevent this from happening. What do I do to have a different updater function for each of these Dots?</p>
<pre><code>from manim import *
import numpy as np
from scipy.stats import norm
import pdb
class RDFBasis(Scene):
def construct(self):
axes = Axes(
x_range=[0, 5, 1],
y_range=[0, 1, 0.2],
x_length=10,
tips=False,
)
axes_labels = axes.get_axis_labels()
vt = ValueTracker(0)
line = Line(axes.c2p(0, 1), axes.c2p(0, 0))
line.add_updater(lambda x: x.move_to(axes.c2p(vt.get_value(), 0.5)))
graphs = []
dots = []
anims = []
for i in np.linspace(0, 5, 5):
func = lambda x: norm.pdf(x, loc=i, scale=0.5)
graph = axes.plot(func, color=RED)
graphs.append(graph)
dot = Dot()
dot.move_to(graph.get_start())
dot_mover = lambda x=dot, graph=graph: x.move_to(graph.get_point_from_function(vt.get_value()))
dot.add_updater(dot_mover)
dots.append(dot)
# pdb.set_trace()
plot = VGroup(axes, *graphs)
labels = VGroup(axes_labels)
# pdb.set_trace()
self.add(plot, labels, line, *dots)
self.play(vt.animate.set_value(5.0))
</code></pre>
|
<python><manim>
|
2024-10-16 19:16:51
| 1
| 321
|
Eric Taw
|
79,095,508
| 16,869,946
|
Pandas groupby transform mean with date before current row for huge dataframe
|
<p>I have a Pandas dataframe that looks like</p>
<pre><code>df = pd.DataFrame([['John', 'A', '1/1/2017', '10'],
['John', 'A', '2/2/2017', '15'],
['John', 'A', '2/2/2017', '20'],
['John', 'A', '3/3/2017', '30'],
['Sue', 'B', '1/1/2017', '10'],
['Sue', 'B', '2/2/2017', '15'],
['Sue', 'B', '3/2/2017', '20'],
['Sue', 'B', '3/3/2017', '7'],
['Sue', 'B', '4/4/2017', '20']],
columns=['Customer', 'Group', 'Deposit_Date', 'DPD'])
</code></pre>
<p>And I want to create a new column called <code>PreviousMean</code>. This column is the year-to-date average of DPD for that customer, i.e. it includes all DPDs up to but not including rows that match the current deposit date. If no previous records exist, it's null or 0.</p>
<p>So the desired outcome looks like</p>
<pre><code> Customer Group Deposit_Date DPD PreviousMean
0 John A 2017-01-01 10 NaN
1 John A 2017-02-02 15 10.0
2 John A 2017-02-02 20 10.0
3 John A 2017-03-03 30 15.0
4 Sue B 2017-01-01 10 NaN
5 Sue B 2017-02-02 15 10.0
6 Sue B 2017-03-02 20 12.5
7 Sue B 2017-03-03 7 15.0
8 Sue B 2017-04-04 20 13.0
</code></pre>
<p>And after some researching on the site and internet here is one solution:</p>
<pre><code>df['PreviousMean'] = df.apply(
lambda x: df[(df.Customer == x.Customer) &
(df.Group == x.Group) &
(df.Deposit_Date < x.Deposit_Date)].DPD.mean(),
axis=1)
</code></pre>
<p>And it works fine. However, my actual dataframe is much larger (~1 million rows) and the above code is very slow.</p>
<p>I have asked a similar question before: <a href="https://stackoverflow.com/questions/79027616/pandas-groupby-transform-mean-with-date-before-current-row-for-huge-huge-datafra">Pandas groupby transform mean with date before current row for huge huge dataframe</a></p>
<p>except that this time the groupby is done on two columns, hence those solutions do not apply directly and I failed to generalize them.
Is there any better way to do it? Thanks</p>
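<p>The quadratic <code>apply</code> can be replaced with a single grouped pass: collapse to one row per (Customer, Group, date), take a running sum/count, shift by one date so only strictly earlier dates contribute, then map back onto the original rows. A sketch against the sample frame above (it reproduces the desired output here, but verify it on the real data):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([['John', 'A', '1/1/2017', '10'],
                   ['John', 'A', '2/2/2017', '15'],
                   ['John', 'A', '2/2/2017', '20'],
                   ['John', 'A', '3/3/2017', '30'],
                   ['Sue', 'B', '1/1/2017', '10'],
                   ['Sue', 'B', '2/2/2017', '15'],
                   ['Sue', 'B', '3/2/2017', '20'],
                   ['Sue', 'B', '3/3/2017', '7'],
                   ['Sue', 'B', '4/4/2017', '20']],
                  columns=['Customer', 'Group', 'Deposit_Date', 'DPD'])
df['Deposit_Date'] = pd.to_datetime(df['Deposit_Date'])
df['DPD'] = df['DPD'].astype(int)

# One row per (Customer, Group, date); running sum/count shifted by one date
# so only strictly earlier dates contribute -- O(n log n) instead of O(n^2).
agg = df.groupby(['Customer', 'Group', 'Deposit_Date'])['DPD'].agg(['sum', 'count'])
prev = (agg.groupby(level=['Customer', 'Group']).cumsum()
           .groupby(level=['Customer', 'Group']).shift(1))
means = prev['sum'] / prev['count']

# Map each row's (Customer, Group, date) key back to its previous-dates mean.
df['PreviousMean'] = means.reindex(
    pd.MultiIndex.from_frame(df[['Customer', 'Group', 'Deposit_Date']])).to_numpy()
print(df)
```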
|
<python><pandas><dataframe><group-by>
|
2024-10-16 19:10:34
| 2
| 592
|
Ishigami
|
79,095,468
| 425,895
|
How to pass parameters to this sklearn Cox model in a Pipeline?
|
<p>If I run the following Python code it works well:</p>
<pre><code>target = 'churn'
tranOH = ColumnTransformer([ ('one', OneHotEncoder(drop='first', dtype='int'),
make_column_selector(dtype_include='category', pattern=f"^(?!{target}).*")
) ], remainder='passthrough')
dftrain2 = tranOH.fit_transform(dftrain)
cph = CoxPHFitter(penalizer=0.1)
cph.fit(dftrain2, 'months', 'churn')
</code></pre>
<p>But if I try to do it with a Pipeline I get an error:</p>
<pre><code>mcox = Pipeline(steps=[
("onehot", tranOH),
('modelo', CoxPHFitter(penalizer=0.1))
])
mcox.fit(dftrain, modelo__duration_col="months", modelo__event_col='churn')
</code></pre>
<p>It says:</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[88], line 6
1 mcox = Pipeline(steps=[
2 ("onehot", tranOH),
3 ('modelo', CoxPHFitter(penalizer=0.1))
4 ])
----> 6 mcox.fit(dftrain, modelo__duration_col="months", modelo__event_col=target)
File ~\AppData\Roaming\Python\Python310\site-packages\sklearn\base.py:1473, in _fit_context.<locals>.decorator.<locals>.wrapper(estimator, *args, **kwargs)
1466 estimator._validate_params()
1468 with config_context(
1469 skip_parameter_validation=(
1470 prefer_skip_nested_validation or global_skip_validation
1471 )
1472 ):
-> 1473 return fit_method(estimator, *args, **kwargs)
File ~\AppData\Roaming\Python\Python310\site-packages\sklearn\pipeline.py:473, in Pipeline.fit(self, X, y, **params)
471 if self._final_estimator != "passthrough":
472 last_step_params = routed_params[self.steps[-1][0]]
--> 473 self._final_estimator.fit(Xt, y, **last_step_params["fit"])
475 return self
File ~\AppData\Roaming\Python\Python310\site-packages\lifelines\utils\__init__.py:56, in CensoringType.right_censoring.<locals>.f(model, *args, **kwargs)
53 @wraps(function)
54 def f(model, *args, **kwargs):
55 cls.set_censoring_type(model, cls.RIGHT)
---> 56 return function(model, *args, **kwargs)
TypeError: CoxPHFitter.fit() got multiple values for argument 'duration_col'
</code></pre>
<p>tranOH is a Columntransformer that onehot encodes all categorical columns except 'churn'.</p>
<p>I have also tried using <code>col="months"</code> and <code>event_col=target</code> directly inside <code>CoxPHFitter()</code> but I get the same error.</p>
<p>Later I want to use it to perform a GridSearchCV to finetune the penalizer parameter, optimizing the accuracy score to predict churn at a given time="months".</p>
<p>I don't have the same problem with other models, for example if I replace CoxPHFitter with LogisticRegression it works well.</p>
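<p>The error itself comes from how <code>Pipeline.fit</code> calls the final step: it forwards <code>y</code> as the second positional argument, which lands in lifelines' <code>duration_col</code> slot and collides with your keyword, hence "got multiple values for argument 'duration_col'". Since lifelines fitters aren't sklearn estimators, one workaround is a small adapter that keeps the column names as constructor parameters (which also makes the estimator cloneable for the later <code>GridSearchCV</code>). A hypothetical sketch; <code>SurvivalFitterWrapper</code> is not part of either library:</p>

```python
import pandas as pd
from sklearn.base import BaseEstimator

class SurvivalFitterWrapper(BaseEstimator):
    """Adapt a lifelines-style fitter (fit(df, duration_col=..., event_col=...))
    to the estimator interface that Pipeline / GridSearchCV expect."""

    def __init__(self, fitter_factory=None, duration_col='months', event_col='churn'):
        self.fitter_factory = fitter_factory  # e.g. lambda: CoxPHFitter(penalizer=0.1)
        self.duration_col = duration_col
        self.event_col = event_col

    def fit(self, X, y=None):
        # y is ignored: the survival columns travel inside X, as lifelines expects.
        df = X if isinstance(X, pd.DataFrame) else pd.DataFrame(X)
        self.model_ = self.fitter_factory()
        self.model_.fit(df, duration_col=self.duration_col, event_col=self.event_col)
        return self
```

<p>In the pipeline this becomes <code>('modelo', SurvivalFitterWrapper(lambda: CoxPHFitter(penalizer=0.1)))</code> with a plain <code>mcox.fit(dftrain)</code>. Note the ColumnTransformer must emit DataFrames (e.g. <code>tranOH.set_output(transform='pandas')</code>) so column names survive, and passthrough columns may come back prefixed (e.g. <code>remainder__months</code>), in which case adjust <code>duration_col</code> accordingly.</p>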
|
<python><scikit-learn><cox-regression><scikit-learn-pipeline>
|
2024-10-16 18:58:18
| 1
| 7,790
|
skan
|
79,095,149
| 11,357,695
|
Pandas dataframe is mangled when writing to csv
|
<p>I have written a pipeline to send queries to <a href="https://www.uniprot.org/help/api" rel="nofollow noreferrer">uniprot</a>, but am having a strange issue with one of the queries. I've put this into a small test case below.</p>
<p>I am getting the expected dataframe (<code>df</code>) structure (one row and 15 columns, one per field), but when I export this to CSV and open in excel it looks mangled. Specifically, instead of one row I get two, with the second starting partway through the <code>'Sequence'</code> dataframe column (I've given more details in the bottom comment). This is for one of 99 queries, and the rest were all fine. I suspect this is an issue in my <code>pd.to_csv</code> call, but if anyone could give more details it would be much appreciated.</p>
<p>Thanks!
Tim</p>
<pre><code>import requests
import pandas as pd
import io
def queries_to_table(base, query, organism_id):
rest_url = base + f'query=(({query})AND(organism_id:{organism_id}))'
response = requests.get(rest_url)
if response.status_code == 200:
return pd.read_csv(io.StringIO(response.text),
sep = '\t')
else:
raise ValueError(f'The uniprot API returned a status code of {response.status_code}. '\
'This was not 200 as expected, which may reflect an issue '\
f'with your query: {query}.\n\nSee here for more '\
'information: https://www.uniprot.org/help/rest-api-headers. '\
f'Full url: {rest_url}')
size = 500
fields = 'accession,id,protein_name,gene_names,organism_name,'\
'length,sequence,go_p,go_c,go,go_f,ft_topo_dom,'\
'ft_transmem,cc_subcellular_location,ft_intramem'
url_base = f'https://rest.uniprot.org/uniprotkb/search?size={size}&'\
f'fields={fields}&format=tsv&'
query = '(id:TITIN_HUMAN)'
organism_id = 9606
df = queries_to_table(url_base, query, organism_id)
#-> df looks fine - one row and 15 columns
pd.concat([df]).to_csv('test2_error.csv')
#-> opening in excel this is broken - it splits df['Sequence'] into two rows at
#the junction between 'RLLANAECQEGQSVCFEIRVSGIPPPTLKWEKDG' and
#'PLSLGPNIEIIHEGLDYYALHIRDTLPEDTGYY'. In df['Sequence'], this sequence is joined
#by a 'q' (the below string covers the junction, and has the previously quoted substrings in capitals):
#tdstlrpmfkRLLANAECQEGQSVCFEIRVSGIPPPTLKWEKDGqPLSLGPNIEIIHEGLDYYALHIRDTLPEDTGYYrvtatntags
</code></pre>
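<p>One hypothesis that fits the symptom: the CSV itself may be intact and the split an Excel display artifact. Excel caps a single cell at 32,767 characters, and the human titin sequence is roughly 34,350 residues, so the <code>Sequence</code> cell overflows into a second row only when the file is opened in Excel. A quick way to test this is to round-trip an over-long cell through pandas and confirm it survives:</p>

```python
import os
import tempfile

import pandas as pd

long_cell = "A" * 40000  # longer than Excel's 32,767-character cell limit
df = pd.DataFrame({"Sequence": [long_cell]})

path = os.path.join(tempfile.mkdtemp(), "test.csv")
df.to_csv(path)
back = pd.read_csv(path, index_col=0)

# pandas round-trips the cell losslessly -- if the same holds for your file
# (e.g. check it with pd.read_csv or a text editor), the mangling happens
# inside Excel, not in to_csv.
assert back["Sequence"].iloc[0] == long_cell
```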
|
<python><pandas><dataframe><csv><bioinformatics>
|
2024-10-16 17:18:59
| 1
| 756
|
Tim Kirkwood
|
79,095,041
| 2,979,749
|
detectron2 installation - No module named 'torch'
|
<p>I am trying to install detectron2 on ubuntu and face a weird python dependency problem. In short - pytorch is installed (with pip), torchvision is installed (with pip), but when I run</p>
<pre><code>pip install 'git+https://github.com/facebookresearch/detectron2.git'
</code></pre>
<p>I get error ModuleNotFoundError: No module named 'torch'</p>
<p>as for dependencies</p>
<pre><code>(detectron2_test) ubuntu@LAPTOP:~$ pip install torchvision
Requirement already satisfied: torchvision in ./detectron2_test/lib/python3.12/site-packages (0.19.1+cu118)
Requirement already satisfied: numpy in ./detectron2_test/lib/python3.12/site-packages (from torchvision) (1.26.3)
Requirement already satisfied: torch==2.4.1 in ./detectron2_test/lib/python3.12/site-packages (from torchvision) (2.4.1+cu118)
(...)
(detectron2_test) ubuntu@LAPTOP:~$ which pip
/home/ubuntu/detectron2_test/bin/pip
(detectron2_test) ubuntu@LAPTOP:~$ which python
/home/ubuntu/detectron2_test/bin/python
(detectron2_test) ubuntu@LAPTOP:~$ which python3
/home/ubuntu/detectron2_test/bin/python3
</code></pre>
<p>Any suggestions are appreciated!</p>
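<p>A likely explanation worth ruling out (a well-known pattern, though not confirmed for this exact setup): modern pip builds source packages in an isolated build environment that cannot see the torch already installed in the venv, so detectron2's <code>setup.py</code> raises <code>ModuleNotFoundError: No module named 'torch'</code> during the build even though imports work fine in the shell. The usual workaround is to disable build isolation:</p>

```shell
# build against the already-installed torch instead of an empty isolated env
pip install --no-build-isolation 'git+https://github.com/facebookresearch/detectron2.git'
```

<p>If that still fails, invoking <code>python -m pip install ...</code> guarantees the install runs under the same interpreter that <code>which python</code> reports.</p>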
|
<python><pytorch>
|
2024-10-16 16:46:40
| 1
| 3,974
|
Lech Migdal
|
79,094,768
| 6,843,153
|
how to reset streamlit data_editor "edited_rows" dict
|
<p>I have the following code in streamlit:</p>
<pre><code> print(st.session_state[self._key]["edited_rows"])
with grid_container:
save_button_click = st.button(
label="**Save**",
key=f"{self._key}_save_button",
) if not is_read_only else False
edited_df = st.data_editor(
styled_data,
hide_index=True,
key=self._key,
column_config=column_config
)
if save_button_click:
self._save_editions(
message_container,
st.session_state[f"{self._key}_grid_state"]["pages"][
st.session_state[f"{self._key}_grid_state"]["current_page"] - 1
]
)
st.rerun()
def _save_editions(self, message_container, page=None):
if self._key not in st.session_state:
return
edited_rows = list(st.session_state[self._key]["edited_rows"].keys())
edited_rows_indexes = (
page.iloc[edited_rows].index
if page is not None
else edited_rows
)
if len(edited_rows_indexes) > 0:
edited_rows = (
self._controller.controller_data()["data"].iloc[edited_rows_indexes]
)
if len(edited_rows) > 0:
edited_rows = self._update_dataframe(
edited_rows,
st.session_state[self._key]["edited_rows"]
)
response = self._controller.update(edited_rows)
if response["type"] == "success":
message = "Changes saved automatically!!!"
st.session_state[self._key]["edited_rows"] = None
else:
message = f"Not possible to save changes: `{response['text']}`"
post_message(message_container, response["type"], message, timeout=5)
if self._key == "pt_new_records_grid":
st.session_state["pt_new_records_data"] = pd.DataFrame()
</code></pre>
<p>I expect that after the <code>st.rerun()</code>, <code>st.session_state[self._key]["edited_rows"].keys()</code> would print <code>None</code>, but it prints the edits, which means that <code>st.session_state[self._key]["edited_rows"] = None</code> had no effect (I evaluated it and the value is actually changed, but on the next run the change is lost and the value reverts to the original).</p>
<p>What am I doing wrong?</p>
|
<python><streamlit>
|
2024-10-16 15:28:07
| 0
| 5,505
|
HuLu ViCa
|
79,094,679
| 3,225,420
|
Reading Excel file returns 'Invalid Request' error
|
<p>My goal is to use Python to read and write to an Excel file in SharePoint that other users will accessing. I developed a solution to a problem on my local machine using <code>Pandas</code> to read and <code>xlWings</code> to write. Now I'm trying to move it to SharePoint for others to use.</p>
<p>The solution is a non-computational application of Excel: it's essentially to track positions of items, so there will be many instances of empty cells when the file is read. I will need to read the range A1:V20; here's what the sheet looks like:
<a href="https://i.sstatic.net/oeNELeA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oeNELeA4.png" alt="enter image description here" /></a></p>
<p>I tried various url changes based off Microsoft's guide <a href="https://learn.microsoft.com/en-us/graph/api/resources/excel?view=graph-rest-1.0" rel="nofollow noreferrer">here</a>, but still can't get results.</p>
<p>Here are the permissions my application has:
<a href="https://i.sstatic.net/JUSf9N2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JUSf9N2C.png" alt="enter image description here" /></a></p>
<p>I've read some tutorials/answers to questions (i.e. <a href="https://www.youtube.com/watch?v=wAIZn6RDSJg&ab_channel=PythonBites" rel="nofollow noreferrer">here</a> and <a href="https://learn.microsoft.com/en-us/answers/questions/1662521/how-to-read-excel-in-sharepoint-in-azure-functions" rel="nofollow noreferrer">here</a>) and can authenticate. I've updated my <code>url</code> based off <a href="https://stackoverflow.com/a/78786696/3225420">this</a> answer, but no luck.</p>
<p>Here's the error message I am getting:</p>
<pre><code>{
"error": {
"code": "invalidRequest",
"message": "Invalid request"
}
}
'value'
</code></pre>
<p>Previously I had the url ending in <code>/workbook/tables('Plant_3')</code> based off this <a href="https://learn.microsoft.com/en-us/answers/questions/1662521/how-to-read-excel-in-sharepoint-in-azure-functions" rel="nofollow noreferrer">answer</a> but I was getting the following error:</p>
<pre><code>"error": {
"code": "BadRequest",
"message": "Open navigation properties are not supported on OpenTypes. Property name: 'tables'.",
</code></pre>
<p>I believe that error was because my file has no tables and the url was trying to link to that object/collection.</p>
<p>Here's my code (updated per comments):</p>
<pre><code>from msal import ConfidentialClientApplication
import requests
import json
import pandas as pd
import configparser
config = configparser.ConfigParser()
config.read('config.ini')
client_id = config['entra_auth']['client_id']
client_secret = config['entra_auth']['client_secret']
tenant_id = config['entra_auth']['tenant_id']
msal_scope = ['https://graph.microsoft.com/.default']
msal_app = ConfidentialClientApplication(client_id=client_id,
authority=f"https://login.microsoftonline.com/{tenant_id}",
client_credential=client_secret, )
result = msal_app.acquire_token_silent(scopes=msal_scope,
account=None)
if not result:
result = msal_app.acquire_token_for_client(scopes=msal_scope)
if 'access_token' in result:
access_token = result['access_token']
else:
raise Exception("Failed to acquire token")
headers = {'Authorization': f'Bearer {access_token}'}
site_id = config['site']['site_id'] # https://www.powertechtips.com/check-site-id-sharepoint/
# https://answers.microsoft.com/en-us/msoffice/forum/all/how-can-i-find-the-library-id-on-our-sharepoint/701e68f3-954f-490c-b3cb-ceb8bd5601d1
document_library_id = config['site']['document_library_id']
doc_id = config['site']['doc_id'] # from document details
# create url
url = f"https://graph.microsoft.com/v1.0/sites/{site_id}/drives/{document_library_id}/items/{doc_id}:/workbook/worksheets('Plant_3')/range(address='A1:V20')"
# Make a GET request to the Microsoft Graph API to read the Excel file as a pandas dataframe
response = requests.get(url, headers=headers)
try: # how I want it to go
data = response.json().get("value", [])
# if data found, convert to dataframe.
if data:
df = pd.DataFrame(data)
print(df)
else:
print("No data")
except Exception as e:
print(f"Error: {e}")
</code></pre>
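<p>One detail worth checking (an assumption based on Graph's two addressing styles, not verified against your tenant): the trailing colon after <code>{doc_id}</code> belongs to path-based addressing (<code>/root:/folder/file.xlsx:/workbook/...</code>); when addressing by item ID the segment is plain <code>/items/{id}/workbook/...</code>, and the stray <code>:</code> can produce exactly this <code>invalidRequest</code>. Also note that a workbook range response carries its cells under <code>"values"</code> (plural), so <code>response.json().get("value", [])</code> would stay empty even on success. A sketch of the URL construction:</p>

```python
def workbook_range_url(site_id, drive_id, item_id, sheet, address):
    # Item-ID addressing: no ':' before /workbook. The colon-suffix form is
    # only for path-based addressing like /root:/folder/file.xlsx:/workbook/...
    return (
        f"https://graph.microsoft.com/v1.0/sites/{site_id}"
        f"/drives/{drive_id}/items/{item_id}"
        f"/workbook/worksheets('{sheet}')/range(address='{address}')"
    )

print(workbook_range_url("SITE", "DRIVE", "ITEM", "Plant_3", "A1:V20"))
```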
|
<python><azure><microsoft-graph-api>
|
2024-10-16 15:04:35
| 1
| 1,689
|
Python_Learner
|
79,094,616
| 891,919
|
python error with faiss on GPU with cuda despite successful installation
|
<p>I'm working on a Google Cloud VM with CUDA 12. I tried to install either <code>faiss-gpu-cu12</code> or <code>faiss-gpu-cu12[fix_cuda]</code> using either a venv or pyenv virtual environment, under python 3.12.4. For the application we also need langchain packages as below:</p>
<pre><code>python --version
Python 3.12.4
</code></pre>
<pre><code>python -m pip install langchain langchain-huggingface langchain-community faiss-gpu-cu12[fix_cuda]
</code></pre>
<p>pip gives no error and everything looks fine, but when I run the minimal example below:</p>
<pre><code>from langchain_community.vectorstores import FAISS
from langchain_huggingface import HuggingFaceEmbeddings
from sentence_transformers import SentenceTransformer
docs = [ 'sefsl;fk lskdf;lk s', 'ewrl kwelklekfl ls ;' ]
embedder = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')
f = FAISS.from_texts(docs, embedding=embedder)
embeddings = embedder.encode(docs)
</code></pre>
<p>I get this error:</p>
<pre><code>Traceback (most recent call last):
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 55, in dependable_faiss_import
import faiss
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/faiss/__init__.py", line 16, in <module>
from .loader import *
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/faiss/loader.py", line 111, in <module>
from .swigfaiss import *
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/faiss/swigfaiss.py", line 10, in <module>
from . import _swigfaiss
ImportError: /home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/faiss/_swigfaiss.cpython-312-x86_64-linux-gnu.so: ELF load command address/offset not properly aligned
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jupyterlab_user/erwan/RAG/minimal-faiss-expl2.py", line 9, in <module>
f = FAISS.from_texts(docs, embedding=embedder)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 1042, in from_texts
return cls.__from(
^^^^^^^^^^^
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 994, in __from
faiss = dependable_faiss_import()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jupyterlab_user/erwan/RAG/.venv/lib/python3.12/site-packages/langchain_community/vectorstores/faiss.py", line 57, in dependable_faiss_import
raise ImportError(
ImportError: Could not import faiss python package. Please install it with `pip install faiss-gpu` (for CUDA supported GPU) or `pip install faiss-cpu` (depending on Python version).
</code></pre>
<p>It looks like the Faiss library is not installed properly, even though pip gave no error when installing package <code>faiss-gpu-cu12</code>. Is there anything I missed? For instance, is there any known incompatibility between the packages above?</p>
<p>For now I'm restricted to using <code>faiss-cpu</code>, so any suggestions are welcome!</p>
<hr />
<p>Additional info:</p>
<pre><code>$ pip freeze | grep faiss
faiss-gpu-cu12==1.8.0.2
</code></pre>
<pre><code>$ pip freeze | grep langchain
langchain==0.3.3
langchain-community==0.3.2
langchain-core==0.3.12
langchain-huggingface==0.1.0
langchain-text-splitters==0.3.0
</code></pre>
<pre><code>$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
</code></pre>
|
<python><gpu><py-langchain><faiss>
|
2024-10-16 14:49:42
| 3
| 1,185
|
Erwan
|
79,094,586
| 2,215,904
|
Nonblocking child process output read
|
<p>I can't figure out why I can't communicate properly with the child process.
I have programs in C and Python. Python needs to start the C program and then capture its output in a non-blocking way. In the example, the C program will output 10 lines per second and will terminate after 5 seconds.</p>
<blockquote>
<p>The ccode.c compiled with: gcc -D arch_X86 ccode.c -o ccode.bin</p>
</blockquote>
<pre><code>#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
uint64_t nanos(void){
struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);
return ts.tv_sec*1000000000ull + ts.tv_nsec;
}
int main(){
printf("\nC Code Started\n");
uint64_t nowns, ns;
int16_t n=0;
nowns=nanos();
ns=nanos();
while ((nanos()-nowns)<5000000000UL){
if ((nanos()-ns) > 100000000UL){
ns=nanos();
printf("From C I got %i\n", n);
n++;
}
}
printf("C code exit\n");
}
</code></pre>
<p>and python script:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf8 -*-
import subprocess
import time
pr=subprocess.Popen( ['./ccode.bin'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
while True:
print (pr.stdout.read(1), end='' )
</code></pre>
<p>When I start the Python script, the program blocks for 5 seconds and then all 50 lines are printed at once. The expected behavior is to print 10 lines per second until 5 seconds elapse. In the real application, Python is a tkinter GUI with a timer that runs 10 times per second to capture output from the C code.</p>
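<p>Two separate issues are likely in play here. First, C stdio block-buffers <code>stdout</code> when it goes to a pipe, so add <code>fflush(stdout)</code> after the <code>printf</code> (or <code>setvbuf(stdout, NULL, _IOLBF, 0)</code> once at startup); otherwise everything arrives in one burst at exit no matter how Python reads it. Second, <code>pr.stdout.read(1)</code> blocks; putting the pipe in non-blocking mode lets a periodic timer (e.g. tkinter's <code>after()</code>) poll it without freezing the GUI. A runnable sketch, using a Python child as a stand-in for <code>./ccode.bin</code>:</p>

```python
import os
import subprocess
import sys
import time

# Stand-in for ./ccode.bin: prints and *flushes* a line every 0.05 s.
child = ("import time\n"
         "for n in range(5):\n"
         "    print('From C I got', n, flush=True)\n"
         "    time.sleep(0.05)\n")
pr = subprocess.Popen([sys.executable, '-c', child], stdout=subprocess.PIPE)
os.set_blocking(pr.stdout.fileno(), False)  # reads now return instead of waiting

collected = b""
while True:
    try:
        chunk = os.read(pr.stdout.fileno(), 4096)  # b"" signals EOF
    except BlockingIOError:
        chunk = None                               # no data available right now
    if chunk:
        collected += chunk
    elif chunk == b"" and pr.poll() is not None:
        break                                      # child exited, pipe drained
    else:
        time.sleep(0.01)  # in the real app this poll lives in the timer callback
print(collected.decode().splitlines())
```

<p>In the tkinter version, the body of the loop (one non-blocking <code>os.read</code>) would run once per timer tick instead of sleeping. Note that <code>os.set_blocking</code> on pipes is POSIX-only.</p>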
|
<python><popen><nonblocking>
|
2024-10-16 14:43:17
| 0
| 460
|
eSlavko
|
79,094,532
| 14,386,187
|
Python skips recursion if yield is present
|
<p>I have the following XML file:</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<xliff version="1.2" xmlns="urn:oasis:names:tc:xliff:document:1.2" xmlns:okp="okapi-framework:xliff-extensions" xmlns:its="http://www.w3.org/2005/11/its" xmlns:itsxlf="http://www.w3.org/ns/its-xliff/" its:version="2.0">
<file original="temp/file_conversion/tmp4a9kn6bn/69502fea-751c-4c3c-a38a-4fce9e13ebde.txt" source-language="en" target-language="ar" datatype="x-text/plain" okp:inputEncoding="UTF-8">
<body>
<trans-unit id="1idhasofh" xml:space="preserve">
<source xml:lang="en">foo<bpt id="0">&lt;bar&gt;</bpt>&lt;Instruction><ept id="0">&lt;crow&gt;</ept>&lt;grande&gt;</source>
<target xml:lang="ar">foo<bpt id="0">&lt;bar&gt;</bpt>&lt;Instruction><ept id="0">&lt;crow&gt;</ept>&lt;grande&gt;</target>
</trans-unit>
</body>
</file>
</xliff>
</code></pre>
<p>I'm trying to create a function that parses an XML file that I've read into an ElementTree.Element:</p>
<pre class="lang-py prettyprint-override"><code>from xml.etree import ElementTree as ET
def parse_xml(ele: ET.Element):
tag = ele.tag
if not isinstance(tag, str) and tag is not None:
return
t = ele.text
if t:
yield t
for e in ele:
parse_xml(e)
t = e.tail
if t:
yield t
def main():
fp = "path/to/xml"
tree = ET.parse(fp)
root = tree.getroot()
t_units = root.findall(".//{*}trans-unit")
for source, target in t_units:
for ele in parse_xml(source):
print(ele)
</code></pre>
<p>I get:</p>
<pre class="lang-py prettyprint-override"><code>foo
<Instruction>
<grande>
</code></pre>
<p>In my debugger, I see that <code>parse_xml(e)</code> gets skipped. When I replace the yields with print statements:</p>
<pre class="lang-py prettyprint-override"><code>def parse_xml(ele: ET.Element):
tag = ele.tag
if not isinstance(tag, str) and tag is not None:
return
t = ele.text
if t:
print(t)
for e in ele:
parse_xml(e)
t = e.tail
if t:
print(t)
</code></pre>
<p>I get the expected result (reaches all the tagged text):</p>
<pre><code>foo
<bar>
<Instruction>
<crow>
<grande>
</code></pre>
<p>Why does this happen with yield?</p>
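<p>The recursive call builds a generator object and immediately discards it: a generator's body only runs when something iterates it, which is why the <code>print</code> version works while the <code>yield</code> version skips the nested tags. Delegating with <code>yield from</code> is the fix; a trimmed, self-contained sketch (namespaces omitted):</p>

```python
from xml.etree import ElementTree as ET

def parse_xml(ele):
    if not isinstance(ele.tag, str) and ele.tag is not None:
        return
    if ele.text:
        yield ele.text
    for e in ele:
        yield from parse_xml(e)  # delegate to the sub-generator instead of dropping it
        if e.tail:
            yield e.tail

root = ET.fromstring(
    '<source>foo<bpt>&lt;bar&gt;</bpt>&lt;Instruction&gt;'
    '<ept>&lt;crow&gt;</ept>&lt;grande&gt;</source>'
)
print(list(parse_xml(root)))
# ['foo', '<bar>', '<Instruction>', '<crow>', '<grande>']
```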
|
<python><xml><yield>
|
2024-10-16 14:28:52
| 1
| 676
|
monopoly
|
79,094,259
| 5,013,752
|
how to add spark config to DatabricksSession
|
<p>I used to work with a custom spark object define as follow :</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
spark_builder = SparkSession.builder.appName(settings.project_name)
config = {**self.DEFAULT_CONFIG, **spark_config}
for key, value in config.items():
spark_builder.config(key, value)
self._spark = spark_builder.getOrCreate()
</code></pre>
<p>this is part of a bigger object. Here, <code>self.DEFAULT_CONFIG</code> and <code>spark_config</code> are both python dict and they contain spark configs e.g. <code>{"spark.driver.extraJavaOptions": "-Xss32M"}</code></p>
<p>I'm trying to switch to using <code>DatabricksSession</code> instead.<br />
<a href="https://learn.microsoft.com/en-us/azure/databricks/dev-tools/databricks-connect/python/install" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/databricks/dev-tools/databricks-connect/python/install</a></p>
<p>They do something in the doc like this :</p>
<pre class="lang-py prettyprint-override"><code>from databricks.connect import DatabricksSession
from databricks.sdk.core import Config
config = Config(
host = f"https://{retrieve_workspace_instance_name()}",
token = retrieve_token(),
cluster_id = retrieve_cluster_id()
)
spark = DatabricksSession.builder.sdkConfig(config).getOrCreate()
</code></pre>
<p>This works fine for me. But I want to reproduce the "config" mechanism that I used previously.</p>
<p>I tried to add my custom config to the <code>Config</code> object, which does not raise any error (the <code>Config</code> signature accepts kwargs), but when I check my config using:</p>
<pre class="lang-py prettyprint-override"><code>spark.conf.get("spark.driver.extraJavaOptions")
</code></pre>
<p>I do not see my "custom" config.</p>
<p>I also tried to use a <code>config</code> method, but the <code>DatabricksSession</code> builder does not have one:</p>
<blockquote>
<p>AttributeError: 'Builder' object has no attribute 'config'</p>
</blockquote>
<p>Any idea how I could do that?</p>
|
<python><apache-spark><pyspark><azure-databricks>
|
2024-10-16 13:23:41
| 1
| 15,420
|
Steven
|
79,094,198
| 2,449,857
|
Attributes on Python flag enums
|
<p>I was very happy to learn <code>Enum</code> types can effectively carry named attributes:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
class ActivityType(Enum):
NEXT = 0, "N"
JOIN = 3, "J"
DIVIDE_REAR = 1, "DR"
DIVIDE_FRONT = 2, "DF"
def __init__(self, xml_code, label):
self.xml_code = xml_code
self.label = label
assert ActivityType.JOIN.label == "J"
assert ActivityType.DIVIDE_FRONT.xml_code == 2
</code></pre>
<p>How can I do something similar for <code>Flag</code> enums? Here, something can have a combination of <code>PowerType</code>s, and the string representation has a code for each one which are combined in a specific order:</p>
<pre class="lang-py prettyprint-override"><code>class PowerType(Flag):
NONE = 0
AC_OVERHEAD = auto()
DC_3RAIL = auto()
DIESEL = auto()
def xml_code(self):
powertype_subcode = {
PowerType.AC_OVERHEAD: "O",
PowerType.DC_3RAIL: "3",
PowerType.DIESEL: "D",
}
return "".join(sc for pt, sc in powertype_subcode.items() if pt & self)
assert (PowerType.DIESEL | PowerType.DC_3RAIL).xml_code() == "3D"
assert (PowerType.AC_OVERHEAD | PowerType.DC_3RAIL).xml_code() == "O3"
</code></pre>
<p>I'd like to do something like this and eliminate that <code>powertype_subcode</code> dict:</p>
<pre class="lang-py prettyprint-override"><code>class PowerType(Flag):
NONE = 0, ""
AC_OVERHEAD = auto(), "O"
DC_3RAIL = auto(), "3"
DIESEL = auto(), "D"
def __init__(self, flag, xml_code):
self.flag = flag
self._xml_subcode = xml_code
def xml_code(self):
return "".join(pt._xml_subcode for pt in PowerType if pt & self)
</code></pre>
<p>However, this causes problems with <code>auto()</code> and <code>_generate_next_value_</code>, and if I work around those, it then fails because it can't do bitwise operations on tuple values.</p>
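<p>One workaround sketch (an assumption on my part, not a confirmed idiom for this exact case): define <code>__new__</code> so the extra tuple element becomes an attribute, and use explicit powers of two instead of <code>auto()</code>, which does not combine cleanly with a custom <code>__new__</code>:</p>

```python
from enum import Flag

class PowerType(Flag):
    # __new__ receives the unpacked tuple value for each member and stores
    # the flag bit in _value_ and the subcode as a plain attribute.
    def __new__(cls, value, xml_subcode=""):
        obj = object.__new__(cls)
        obj._value_ = value
        obj.xml_subcode = xml_subcode
        return obj

    NONE = 0, ""
    AC_OVERHEAD = 1, "O"
    DC_3RAIL = 2, "3"
    DIESEL = 4, "D"

    def xml_code(self):
        # NONE & self is always falsy, so it never contributes a subcode
        return "".join(pt.xml_subcode for pt in type(self) if pt & self)

print((PowerType.DIESEL | PowerType.DC_3RAIL).xml_code())   # 3D
print((PowerType.AC_OVERHEAD | PowerType.DC_3RAIL).xml_code())  # O3
```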
|
<python><python-3.x><enums><flags>
|
2024-10-16 13:12:29
| 2
| 3,489
|
Jack Deeth
|
79,093,929
| 1,814,420
|
AttributeError: get_motor_collection when setting fetch_links=True
|
<p>I'm using Beanie with FastAPI. For example, my Beanie models look like this:</p>
<pre class="lang-py prettyprint-override"><code>class Project(Document):
id: PydanticObjectId = Field(default_factory=PydanticObjectId)
items: list[Link[ItemDB]] | None = None
class ItemDB(Document):
creator: Indexed(str)
</code></pre>
<p>When getting a <code>Project</code> by ID, I also want to prefetch its related <code>items</code>. So, my endpoint looks like this:</p>
<pre class="lang-py prettyprint-override"><code>@router.get('/{project_id}', status_code=status.HTTP_200_OK, summary='Get a project',
response_model_exclude_none=True)
async def get_project(project_id: str) -> Project:
project = await Project.get(project_id, fetch_links=True)
if not project:
raise HTTPException(status_code=status.HTTP_404_NOT_FOUND)
return project
</code></pre>
<p>But I got the error:</p>
<pre class="lang-py prettyprint-override"><code>ERROR: Exception in ASGI application
Traceback (most recent call last):
File "my-project/.venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "my-project/.venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
raise exc
File "my-project/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
await app(scope, receive, sender)
File "my-project/.venv/lib/python3.12/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "my-project/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 62, in wrapped_app
raise exc
File "my-project/.venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 51, in wrapped_app
await app(scope, receive, sender)
File "my-project/.venv/lib/python3.12/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/app/routers/project.py", line 46, in get_project
project = await Project.get(project_id, fetch_links=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/documents.py", line 276, in get
return await cls.find_one(
^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/find.py", line 1024, in __await__
document = yield from self._find_one().__await__() # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/find.py", line 982, in _find_one
return await self.document_model.find_many(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/find.py", line 689, in first_or_none
res = await self.limit(1).to_list()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/cursor.py", line 67, in to_list
cursor = self.motor_cursor
^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/find.py", line 661, in motor_cursor
self.build_aggregation_pipeline()
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/queries/find.py", line 610, in build_aggregation_pipeline
construct_lookup_queries(
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/utils/find.py", line 29, in construct_lookup_queries
construct_query(
File "my-project/.venv/lib/python3.12/site-packages/beanie/odm/utils/find.py", line 271, in construct_query
"from": link_info.document_class.get_motor_collection().name, # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "my-project/.venv/lib/python3.12/site-packages/pydantic/_internal/_model_construction.py", line 242, in __getattr__
raise AttributeError(item)
AttributeError: get_motor_collection
</code></pre>
<p>If I remove <code>fetch_links=True</code> and later on call</p>
<pre class="lang-py prettyprint-override"><code>await project.fetch_all_links()
</code></pre>
<p>then it works fine.</p>
<p>I'm sure that the database was initialized properly because everything else works normally; only this <code>fetch_links</code> has a problem. Just to be sure, here is how the database was initialized:</p>
<pre class="lang-py prettyprint-override"><code>async def init_db(db_url: str):
client = AsyncIOMotorClient(host=db_url, tz_aware=True)
await init_beanie(
database=client.get_default_database(default='myDB'),
document_models=[Project, ItemDB],
allow_index_dropping=True
)
</code></pre>
<p>This function is then called in the startup part of FastAPI:</p>
<pre class="lang-py prettyprint-override"><code>@asynccontextmanager
async def lifespan(app: FastAPI):
await init_db(settings.db_url)
yield
app = FastAPI(
lifespan=lifespan
)
</code></pre>
<p>So, did I miss something? Or is it a bug in Beanie? I'm using FastAPI 0.115.2 and Beanie 1.27.0.</p>
|
<python><mongodb><fastapi><beanie>
|
2024-10-16 12:00:38
| 1
| 12,163
|
Triet Doan
|
79,093,909
| 41,284
|
Error in docker container in qdrant client - ValueError: Unsupported embedding model but the same value is present as a valid model in the description
|
<p>We are using the qdrant python client and setting the embedding model in the client object.</p>
<p>The error received is</p>
<pre><code> File "/app/.venv/lib/python3.12/site-packages/qdrant_client/qdrant_fastembed.py", line 122, in set_model
self._get_or_init_model(
File "/app/.venv/lib/python3.12/site-packages/qdrant_client/qdrant_fastembed.py", line 205, in _get_or_init_model
raise ValueError(
ValueError: Unsupported embedding model: "BAAI/bge-small-en-v1.5". Supported models: {'BAAI/bge-base-en': (768, <Distance.COSINE: 'Cosine'>), 'BAAI/bge-base-en-v1.5': (768, <Distance.COSINE: 'Cosine'>), 'BAAI/bge-large-en-v1.5': (1024, <Distance.COSINE: 'Cosine'>), 'BAAI/bge-small-en': (384, <Distance.COSINE: 'Cosine'>), 'BAAI/bge-small-en-v1.5': (384, <Distance.COSINE: 'Cosine'>), 'BAAI/bge-small-zh-v1.5': (512, <Distance.COSINE: 'Cosine'>), 'sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2': (384, <Distance.COSINE: 'Cosine'>), 'thenlper/gte-large': (1024, <Distance.COSINE: 'Cosine'>), 'mixedbread-ai/mxbai-embed-large-v1': (1024, <Distance.COSINE: 'Cosine'>), 'snowflake/snowflake-arctic-embed-xs': (384, <Distance.COSINE: 'Cosine'>), 'snowflake/snowflake-arctic-embed-s': (384, <Distance.COSINE: 'Cosine'>), 'snowflake/snowflake-arctic-embed-m': (768, <Distance.COSINE: 'Cosine'>), 'snowflake/snowflake-arctic-embed-m-long': (768, <Distance.COSINE: 'Cosine'>), 'snowflake/snowflake-arctic-embed-l': (1024, <Distance.COSINE: 'Cosine'>), 'intfloat/multilingual-e5-large': (1024, <Distance.COSINE: 'Cosine'>), 'sentence-transformers/paraphrase-multilingual-mpnet-base-v2': (768, <Distance.COSINE: 'Cosine'>), 'Qdrant/clip-ViT-B-32-text': (512, <Distance.COSINE: 'Cosine'>), 'sentence-transformers/all-MiniLM-L6-v2': (384, <Distance.COSINE: 'Cosine'>), 'jinaai/jina-embeddings-v2-base-en': (768, <Distance.COSINE: 'Cosine'>), 'jinaai/jina-embeddings-v2-small-en': (512, <Distance.COSINE: 'Cosine'>), 'jinaai/jina-embeddings-v2-base-de': (768, <Distance.COSINE: 'Cosine'>), 'jinaai/jina-embeddings-v2-base-code': (768, <Distance.COSINE: 'Cosine'>), 'nomic-ai/nomic-embed-text-v1.5': (768, <Distance.COSINE: 'Cosine'>), 'nomic-ai/nomic-embed-text-v1.5-Q': (768, <Distance.COSINE: 'Cosine'>), 'nomic-ai/nomic-embed-text-v1': (768, <Distance.COSINE: 'Cosine'>)}
</code></pre>
<p>Two observations</p>
<ol>
<li>The "invalid" model <code>BAAI/bge-small-en-v1.5</code> is also listed as a valid model in the error message</li>
<li>This error occurs only when running via docker container. On the host machine (linux or windows) we don't get this error.</li>
</ol>
<p>Any suggestions on how to begin to diagnose this issue?</p>
|
<python><docker><qdrantclient>
|
2024-10-16 11:53:58
| 1
| 1,354
|
alok
|
79,093,782
| 12,890,458
|
Continue process while waiting in simpy
|
<p>In simpy I simulate a car driving day and night. As soon as my tank falls below 70% and it is daytime, I go to fill up. If my tank gets below 70% at night, I just drive on and fill up as soon as it gets light. I know in advance when it gets light, so I can compute beforehand how many hours (<code>dt_hours</code>) I must wait until daylight. My gasoline consumption depends on my speed and route, which are determined in the simulation. My desired output is the place where I refuel and the amount I need to refuel. If I put</p>
<pre><code>yield env.timeout(dt_hours)
</code></pre>
<p>in my simpy driving process, the driving halts and my car does not drive on, but the position remains constant. My question is how can I just drive on in simpy, determining my speed and route, while waiting for it to get light?</p>
|
<python><simpy>
|
2024-10-16 11:14:28
| 0
| 460
|
Frank Tap
|
79,093,508
| 4,725,707
|
Problem selecting a "check-box" with python Selenium: click() launches the link in the HTML code
|
<p>I am automating some searches on a website (<a href="https://hemerotecadigital.bne.es/hd/es/advanced" rel="nofollow noreferrer">https://hemerotecadigital.bne.es/hd/es/advanced</a>) with Selenium and Python. Most of it is already done, but one specific step I am still doing manually, and for this I'd appreciate your support.
The difficulty is not in identifying the element, but that <code>element.click()</code> follows a link inside the element instead of ticking the associated box.</p>
<p>Let me first show a field on which <code>click()</code> works. This is the HTML for one of the years (1683) in the "AÑO" (year) section:</p>
<pre><code><div class="parameter-row-unchecked-ok-empty"> <img alt="1683" class="parameter-check" src="/fs/static/img/check-off.png"><p class="parameter-paragraph">1683</p></div>
</code></pre>
<p><a href="https://i.sstatic.net/4PuUZkLj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4PuUZkLj.png" alt="Selection of Year" /></a></p>
<p>Although the HTML code does not have any "input" or "checkbox" tags, the code <code>driver.find_element(By.XPATH, xpath).click()</code> works well and selects the element/ticks the box. I note that <code>element.is_activated()</code> does not work, but <code>element.get_attribute("class")</code> lets me identify the status.</p>
<p>The part with which I am having problems is the following (TÍTULO = title). The HTML code is:</p>
<pre><code><div class="parameter-row-unchecked-ok-empty"><img alt="25352831" class="parameter-check" src="/fs/static/img/check-off.png" title="Esta obra es de acceso restringido. Puede acceder a ella en ordenadores especΓficos de las instalaciones de la BNE."><p class="parameter-paragraph"><a href="card?sid=25352831">25 DivisiΓ³n</a></p></div>
</code></pre>
<p><a href="https://i.sstatic.net/YQqbPHx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YQqbPHx7.png" alt="Selection of Title" /></a></p>
<p>Note that inside the <code><p></code> tag there is a <code>href</code> attribute whose URL is called whenever I run <code>driver.find_element(By.XPATH, xpath).click()</code>. I have tried with the inner and outer xpaths, but it does not matter: either an error is raised because the xpath is not clickable, or the new page is opened instead of the box being ticked.</p>
<p>So far I have automated all the data input (except for this) and the output gathering. Any suggestion is welcome!</p>
<p>[edit: examples of xpaths I've used: <code>xpath = '//*[@id="parameter-div-sid"]/div[18]'</code> or, longer version, <code>xpath = '/html/body/section[2]/div/form[1]/fieldset[3]/div[1]/div[2]/div[2]/div/div[2]/div[18]'</code>]</p>
|
<python><selenium-webdriver><checkbox>
|
2024-10-16 09:57:22
| 1
| 538
|
RiGonz
|
79,093,281
| 984,077
|
Python in Excel: How to install other Anaconda packages
|
<p>Python in Excel includes some packages in the Anaconda distribution. <a href="https://support.microsoft.com/en-us/office/open-source-libraries-and-python-in-excel-c817c897-41db-40a1-b9f3-d5ffe6d1bf3e" rel="nofollow noreferrer">+info</a>, <a href="https://docs.anaconda.com/anaconda/allpkglists/2024.06-1/" rel="nofollow noreferrer">anaconda distribution packages</a>.</p>
<p>However, is it possible to install additional packages that are still in the Anaconda repository?</p>
<p>For example, let's say we want to install <a href="https://anaconda.org/anaconda/shap" rel="nofollow noreferrer">SHAP</a> and <a href="https://anaconda.org/anaconda/lightgbm" rel="nofollow noreferrer">LightGBM</a>.</p>
|
<python><excel><lightgbm><shap>
|
2024-10-16 09:01:18
| 1
| 2,823
|
FZNB
|
79,093,236
| 12,466,687
|
How to create multiple columns in output on when condition in Polars?
|
<p>I am trying to <strong>create 2 new columns</strong> in the output based on a condition, but I'm not sure how to do that.</p>
<p><strong>sample df:</strong></p>
<pre><code>so_df = pl.DataFrame({"low_limit": [1, 3, 0], "high_limit": [3, 4, 2], "value": [0, 5, 1]})
</code></pre>
<pre><code>low_limit high_limit value
i64 i64 i64
1 3 0
3 4 5
0 2 1
</code></pre>
<p>Code for single column creation that works:</p>
<pre><code>so_df.with_columns(pl.when(pl.col('value') > pl.col('high_limit'))
.then(pl.lit("High"))
.when((pl.col('value') < pl.col('low_limit')))
.then(pl.lit("Low"))
.otherwise(pl.lit("Within Range")).alias('Flag')
)
</code></pre>
<p><strong>output</strong></p>
<pre><code>low_limit high_limit value Flag
i64 i64 i64 str
1 3 0 "Low"
3 4 5 "High"
0 2 1 "Within Range"
</code></pre>
<p><strong>Issue/Doubt:</strong> Creating 2 columns, which doesn't work:</p>
<pre><code>so_df.with_columns(pl.when(pl.col('value') > pl.col('high_limit'))
.then(Flag = pl.lit("High"), Normality = pl.lit("Abnormal"))
.when((pl.col('value') < pl.col('low_limit')))
.then(Flag = pl.lit("Low"), Normality = pl.lit("Abnormal"))
.otherwise(Flag = pl.lit("Within Range"), Normality = pl.lit("Normal"))
)
</code></pre>
<p><strong>Desired output:</strong></p>
<pre><code>low_limit high_limit value Flag Normality
i64 i64 i64 str str
1 3 0 "Low" "Abnormal"
3 4 5 "High" "Abnormal"
0 2 1 "Within Range" "Normal"
</code></pre>
<p>I know I could do another <code>with_columns</code> using when-then again, but that would double the computation. So how can I create 2 new columns in one go?</p>
<p>something like:</p>
<pre><code>if (condition):
Flag = '',
Normality = ''
</code></pre>
|
<python><python-polars>
|
2024-10-16 08:49:28
| 2
| 2,357
|
ViSa
|
79,093,231
| 11,233,365
|
Unable to access public GitHub repo via its API endpoint when submitting request through FastAPI
|
<p>I am trying to write a FastAPI mirror to return publicly available GitHub repos for download on a local machine that cannot see the wider internet. However, when I poke the endpoint I've written, I receive a "Bad credentials" response from GitHub.</p>
<p>Interestingly, when I make the same request via Python in the terminal on the same device I'm running the FastAPI endpoint on, the request is approved and I get the response containing the package assets and download URLs that I need.</p>
<p>Why is my request via Python in the terminal approved and not that via FastAPI, even though both originate from the same device? In both cases, I have not supplied any credentials, as GitHub's API endpoints for public releases are supposed to be accessible without the need for authentication (<a href="https://docs.github.com/en/rest/releases/releases?apiVersion=2022-11-28#list-releases" rel="nofollow noreferrer">relevant GitHub REST API docs section here</a>).</p>
<p>The code I'm using for reference:</p>
<pre class="lang-py prettyprint-override"><code>"""
API endpoint in question
"""
import requests
from fastapi import APIRouter, Response
windows_terminal = APIRouter(prefix="/microsoft/terminal")
@windows_terminal.get("/releases/latest", response_class=Response)
def get_latest_windows_terminal_release():
url = "https://api.github.com/repos/microsoft/terminal/releases/latest"
response = requests.get(url)
return Response(
content=response.content,
status_code=response.status_code,
headers=response.headers,
)
</code></pre>
<p>The above endpoint returns a "Bad credentials" response when run like that. However, when I provide a GitHub personal access token in the header, it returns the package releases data just fine.</p>
<pre class="lang-none prettyprint-override"><code>"""
From Python in the terminal on the same device
"""
$ python
>>> import requests
>>> url = "https://api.github.com/repos/microsoft/terminal/releases/latest"
>>> response = requests.get(url)
>>> print(response.text)
</code></pre>
<p>The command line option succeeds at returning the JSON blob with info on the latest release of the package I need, even when I don't provide it with a GitHub token.</p>
<p>And here are the headers for the response and request for the API endpoint I wrote when I get the "Bad credentials" response:</p>
<pre class="lang-markdown prettyprint-override"><code># Response Headers
access-control-allow-origin *
access-control-expose-headers ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
content-length 95
content-security-policy default-src 'none'
content-type application/json; charset=utf-8
date Wed, 16 Oct 2024 09:26:30 GMT, Wed, 16 Oct 2024 09:26:30 GMT
referrer-policy origin-when-cross-origin, strict-origin-when-cross-origin
server uvicorn, github.com
strict-transport-security max-age=31536000; includeSubdomains; preload
vary Accept-Encoding, Accept, X-Requested-With
x-content-type-options nosniff
x-frame-options deny
x-github-media-type github.v3; format=json
x-github-request-id C62C:3C7F8A:20BE68E:230F16B:670F86C6
x-ratelimit-limit 60
x-ratelimit-remaining 59
x-ratelimit-reset 1729074390
x-ratelimit-resource core
x-ratelimit-used 1
x-xss-protection 0
# Request Headers
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate
Accept-Language en-GB,en;q=0.5
Connection keep-alive
DNT 1
Host ######## (omitted)
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (X11; Linux x86_64; rv:109.0) Gecko/20100101 Firefox/115.0
</code></pre>
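<p>One hypothesis worth ruling out (an assumption on my part, not confirmed by the headers above): <code>requests</code> silently picks up ambient credentials when none are supplied, e.g. a <code>~/.netrc</code> entry for <code>api.github.com</code> in the environment the FastAPI process runs under, via its <code>trust_env</code> mechanism. A sketch of the endpoint's request with that mechanism disabled:</p>

```python
import requests

# trust_env=False stops requests from reading .netrc and proxy-related
# environment variables, so the request carries no implicit credentials.
session = requests.Session()
session.trust_env = False

url = "https://api.github.com/repos/microsoft/terminal/releases/latest"
# response = session.get(url)  # compare this against the failing endpoint
```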
|
<python><python-requests><fastapi><github-api>
|
2024-10-16 08:48:48
| 0
| 301
|
TheEponymousProgrammer
|
79,093,115
| 2,826,018
|
Format "04x" produces more than 4 hex digits
|
<p>I thought <code>f"0x{number:04x}"</code> produces a hex number with four digits. But I recently noticed that this is not the case:</p>
<p><code>f"0x{13544123:04x}" = "0xceaabb"</code> which has 6 digits. What am I doing wrong here?</p>
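<p>For reference, the number in the format spec is a <em>minimum</em> field width, not a digit count: short values are zero-padded up to it, and longer values are printed in full. A quick check:</p>

```python
n = 13544123          # needs six hex digits
print(f"0x{n:04x}")   # width 4 is only a minimum -> 0xceaabb
print(f"0x{n:08x}")   # pad up to eight digits    -> 0x00ceaabb
print(f"0x{255:04x}") # short value is padded     -> 0x00ff
```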
|
<python><number-formatting>
|
2024-10-16 08:25:14
| 1
| 1,724
|
binaryBigInt
|
79,093,102
| 3,956,017
|
Why do I need an empty window.attributes() to create a transparent window using tkinter (in Python)?
|
<p>When I create a transparent window with <code>tkinter</code> in Python I need to start with an empty <code>window.attributes()</code> call.<br>Otherwise it doesn't become transparent. Is this a bug or intended behavior?</p>
<p>Example code:</p>
<pre><code>#!/usr/bin/env python3
import tkinter as tk
window = tk.Tk()
window.attributes() # Unless I use this line the window will not be transparent
window.attributes('-alpha', 0.5)
window.update_idletasks()
window.mainloop()
</code></pre>
<p>Other things I noticed:</p>
<ul>
<li>Making it wait a bit between the creation of the window and making it transparent does <strong>not</strong> help.<br><em>By using <code>time.sleep(1)</code> instead of <code>window.attributes()</code></em></li>
<li>The <code>window.attributes()</code> has to be empty. Setting another attribute does <strong>not</strong> help.<br><em>e.g.: <code>window.attributes("-fullscreen", True)</code> instead of <code>window.attributes()</code>.</em></li>
<li>Surprisingly, the empty <code>window.attributes()</code> is <strong>not necessary</strong> when I want to change <strong>another</strong> attribute.<br><em>e.g.: <code>window.attributes("-fullscreen", True)</code> <strong>will</strong> create a full screen window with or without starting with <code>window.attributes()</code></em></li>
</ul>
<p>My system:</p>
<ul>
<li>Mint 22 (A Linux distro based on Ubuntu 24.04)</li>
<li>Python from Ubuntu package <code>python3</code> version <code>3.12.3-0ubuntu2</code></li>
<li>Tkinter from Ubuntu package <code>python3-tk</code> version <code>3.12.3-0ubuntu1</code></li>
<li>X server from Ubuntu package <code>xserver-xorg-core</code> version <code>2:21.1.12-1ubuntu1</code></li>
<li>Cinnamon as desktop environment from Mint package <code>cinnamon</code> version <code>6.2.9+wilma</code></li>
<li>Tk from Ubuntu package <code>tk</code> version <code>8.6.14build1</code></li>
</ul>
<p>I also wonder if this behavior happens on other systems...</p>
|
<python><tkinter><tk-toolkit>
|
2024-10-16 08:23:31
| 1
| 1,530
|
Garo
|
79,093,078
| 662,642
|
python problem with chained stdin to stdout
|
<p>I want to have several python scripts to modify text files. For example, 'do1' accepts input from either stdin or a file, and sends output to either stdout or a file:</p>
<pre><code>#!/usr/bin/env python3
import sys
import select
fi = sys.stdin if select.select([ sys.stdin, ], [], [], 0.0 )[ 0 ] else None
ofname = None
for arg in sys.argv[ 1 : ]:
if fi is None: fi = open( arg )
elif ofname is None: ofname = arg
if fi is None:
print( '1 no stdin or input file' ) # 2 for do2
exit( 1 )
if ofname is None:
fo = sys.stdout
else:
fo = open( ofname, 'w' )
for l in fi:
fo.write( '1' ) # 2 for do2
fo.write( l )
</code></pre>
<p>A second script, <code>do2</code>, is identical except for the substitutions marked in the comments (<code># 2 for do2</code>). When I run this (on Unix) I get varying results:</p>
<pre><code>$ echo hi | ./do1 | ./do2
2 no stdin or input file
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>or</p>
<pre><code>$ echo hi | ./do1 | ./do2
2 no stdin or input file
</code></pre>
<p>or</p>
<pre><code>$ echo hi | ./do1 | ./do2
21hi
</code></pre>
<p>What is going on here? I assume this is some kind of timing issue, with the previous pipe disappearing before the next program can read it, but I don't know how to progress this.</p>
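<p>For what it's worth, a sketch of an alternative input-selection strategy: decide based on whether stdin is a terminal instead of polling it. <code>select()</code> with a zero-second timeout only reports bytes that have <em>already</em> arrived in the pipe, so whether the downstream script sees its stdin depends on how fast the upstream one writes (my inference from the varying results); <code>isatty()</code> does not have that race:</p>

```python
import sys

def pick_input(argv):
    # Treat stdin as the input whenever it is not an interactive terminal,
    # mirroring the do1/do2 argument convention (first arg = input file if
    # stdin is a tty, next arg = output file name).
    fi = None if sys.stdin.isatty() else sys.stdin
    ofname = None
    for arg in argv:
        if fi is None:
            fi = open(arg)
        elif ofname is None:
            ofname = arg
    return fi, ofname
```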
|
<python><pipe>
|
2024-10-16 08:18:36
| 3
| 516
|
PatB
|
79,093,014
| 1,581,090
|
Moviepy is unable to load video
|
<p>Using Python 3.11.10 and moviepy 1.0.3 on Ubuntu 24.04.1 (in VirtualBox 7.1.3 on Windows 10) I have problems loading a video clip. The test code is just</p>
<pre><code>from moviepy.editor import VideoFileClip
clip = VideoFileClip("testvideo.ts")
</code></pre>
<p>but the error is</p>
<pre><code>Traceback (most recent call last):
File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 285, in ffmpeg_parse_infos
line = [l for l in lines if keyword in l][index]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^
IndexError: list index out of range
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/alex/Repos/pypdzug/tester.py", line 5, in <module>
clip = VideoFileClip("testvideo.ts")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/VideoFileClip.py", line 88, in __init__
self.reader = FFMPEG_VideoReader(filename, pix_fmt=pix_fmt,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 35, in __init__
infos = ffmpeg_parse_infos(filename, print_infos, check_duration,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/alex/.cache/pypoetry/virtualenvs/pypdzug-WqasAXAr-py3.11/lib/python3.11/site-packages/moviepy/video/io/ffmpeg_reader.py", line 289, in ffmpeg_parse_infos
raise IOError(("MoviePy error: failed to read the duration of file %s.\n"
OSError: MoviePy error: failed to read the duration of file testvideo.ts.
Here are the file infos returned by ffmpeg:
ffmpeg version 4.2.2-static https://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 8 (Debian 8.3.0-6)
configuration: --enable-gpl --enable-version3 --enable-static --disable-debug --disable-ffplay --disable-indev=sndio --disable-outdev=sndio --cc=gcc --enable-fontconfig --enable-frei0r --enable-gnutls --enable-gmp --enable-libgme --enable-gray --enable-libaom --enable-libfribidi --enable-libass --enable-libvmaf --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-librubberband --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libvorbis --enable-libopus --enable-libtheora --enable-libvidstab --enable-libvo-amrwbenc --enable-libvpx --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libdav1d --enable-libxvid --enable-libzvbi --enable-libzimg
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
</code></pre>
<p>It says it failed to read the duration of the file, but the file plays properly (with <code>mplayer</code>) and <code>ffmpeg -i testvideo.ts</code> returns</p>
<pre><code>ffmpeg version 6.1.1-3ubuntu5 Copyright (c) 2000-2023 the FFmpeg developers
built with gcc 13 (Ubuntu 13.2.0-23ubuntu3)
configuration: --prefix=/usr --extra-version=3ubuntu5 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --disable-omx --enable-gnutls --enable-libaom --enable-libass --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libglslang --enable-libgme --enable-libgsm --enable-libharfbuzz --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzimg --enable-openal --enable-opencl --enable-opengl --disable-sndio --enable-libvpl --disable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-ladspa --enable-libbluray --enable-libjack --enable-libpulse --enable-librabbitmq --enable-librist --enable-libsrt --enable-libssh --enable-libsvtav1 --enable-libx264 --enable-libzmq --enable-libzvbi --enable-lv2 --enable-sdl2 --enable-libplacebo --enable-librav1e --enable-pocketsphinx --enable-librsvg --enable-libjxl --enable-shared
libavutil 58. 29.100 / 58. 29.100
libavcodec 60. 31.102 / 60. 31.102
libavformat 60. 16.100 / 60. 16.100
libavdevice 60. 3.100 / 60. 3.100
libavfilter 9. 12.100 / 9. 12.100
libswscale 7. 5.100 / 7. 5.100
libswresample 4. 12.100 / 4. 12.100
libpostproc 57. 3.100 / 57. 3.100
Input #0, mpegts, from 'testvideo.ts':
Duration: 00:10:10.13, start: 0.133333, bitrate: 3256 kb/s
Program 1
Metadata:
service_name : 2024-10-04 11:49:49.917
service_provider: gvos-6.0
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 1920x1080, 15 fps, 15 tbr, 90k tbn
</code></pre>
<p>Here the duration is clearly given to be 10 minutes and 10.13 seconds. So what could be the cause of this error/issue?</p>
|
<python><ffmpeg><virtualbox><moviepy><ubuntu-24.04>
|
2024-10-16 07:59:28
| 2
| 45,023
|
Alex
|
79,092,651
| 7,580,944
|
3D cylindrical polar plot in python (beamforming per different frequencies)
|
<p>Is it possible to make this figure with Matplotlib or any other Python library?</p>
<p>The top figure can be easily achieved with <code>plt.polar</code> in Matplotlib, but what about the bottom one? Would TikZ be more appropriate?</p>
<p><a href="https://i.sstatic.net/65pzYPVB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65pzYPVB.png" alt="enter image description here" /></a></p>
<p>Image taken from the paper <a href="https://israelcohen.com/wp-content/uploads/2020/11/09261932.pdf" rel="nofollow noreferrer">"Steering Study of Linear Differential Microphone Arrays"</a> by Jin, et al.</p>
|
<python><matplotlib><plot><polar-coordinates>
|
2024-10-16 06:03:28
| 0
| 359
|
Chutlhu
|
79,092,547
| 4,451,521
|
How to call one poetry environment from a different poetry environment
|
<p>I have two different folders or locations. Let's call these "app" and "source". In the source folder there is a script.</p>
<h1>The Previous situation</h1>
<p>Before, each folder had its own virtual environment (created with venv).</p>
<p>The app folder had a RestAPI script (written with FastAPI) that called a script from the source folder.</p>
<p>Obviously since the source virtual environment was different, it did something like</p>
<pre><code>command = f'bash -c \'source {venv_path} && python3 {script_path} \' '
try:
result = subprocess.run(command, shell=True, capture_output=True, text=True)
# Return the output and the completion status
return {
"stdout": result.stdout,
"stderr": result.stderr,
"returncode": result.returncode,
"status": "Completed" if result.returncode == 0 else "Error",
}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Error running script: {str(e)}")
</code></pre>
<p>which correctly ran the script in the source folder with its own virtual environment's Python.</p>
<h1>Now using Poetry</h1>
<p>I have replaced the venvs with poetry. So now in source I have a <code>pyproject.toml</code> and poetry's virtual environment is working</p>
<p>In the app folder I also have a <code>pyproject.toml</code> and poetry's virtual environment is also working.</p>
<p>However, I can see that the app script calls the scripts in source with its own poetry environment and not source's environment.</p>
<p>I have tried first changing the directory in the command</p>
<pre><code>command = f'cd {script_path.parent} && poetry run python {script_path.name} '
</code></pre>
<p>and then even</p>
<pre><code>command = f'poetry run python {script_path.name} '
try:
result = subprocess.run(command, cwd=source_dir, shell=True, capture_output=True, text=True)
# Return the output and the completion status
return {
"stdout": result.stdout,
"stderr": result.stderr,
"returncode": result.returncode,
"status": "Completed" if result.returncode == 0 else "Error",
}
except Exception as e:
raise HTTPException(status_code=500, detail=f"Error running script: {str(e)}")
</code></pre>
<p>and still the app in the app folder tries to call the script in the source folder with its own poetry virtual environment.</p>
<h1>Question</h1>
<p>How can I make the app in the app folder call the script located in source with source's poetry environment?</p>
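<p>To make the goal concrete, here is an untested sketch (the helper name is mine) that resolves the source project's own interpreter from the directory printed by <code>poetry env info --path</code>, so the script can be invoked with that interpreter directly instead of via <code>poetry run</code>:</p>

```python
import os
from pathlib import Path

def interpreter_from_env_path(env_path: str) -> Path:
    """Given the directory printed by `poetry env info --path`,
    return the Python interpreter inside that virtualenv
    (bin/python on POSIX, Scripts/python.exe on Windows)."""
    subdir = "Scripts" if os.name == "nt" else "bin"
    exe = "python.exe" if os.name == "nt" else "python"
    return Path(env_path) / subdir / exe

# Intended use (untested):
#   env = subprocess.run(["poetry", "env", "info", "--path"], cwd=source_dir,
#                        capture_output=True, text=True, check=True).stdout.strip()
#   subprocess.run([str(interpreter_from_env_path(env)), script_path.name],
#                  cwd=source_dir, capture_output=True, text=True)
```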
|
<python><subprocess><python-poetry>
|
2024-10-16 05:13:51
| 1
| 10,576
|
KansaiRobot
|
79,092,542
| 5,057,078
|
Django change on_delete at runtime
|
<h1>Story</h1>
<p>I am making a <a href="https://github.com/Brambor/Jurnal" rel="nofollow noreferrer">Jurnal app</a>. Currently working on synchronization across multiple machines. I want to synchronize model by model, not the whole database at once. I even allow synchronization of just a few entries, image:
<a href="https://i.sstatic.net/yrRqqS80.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrRqqS80.png" alt="enter image description here" /></a></p>
<p>The current problem I am facing is replacing the data of a model with the new synchronized data. I have the new database for the model (not all models) in JSON.</p>
<p>Other models have a <code>ForeignKey</code> to the data being changed (with <code>on_delete=CASCADE</code>). I have a mapping as a <code>dict</code> <code>{old_pk : new_pk...}</code>. I will use that to update other objects' <code>ForeignKey</code>s to the objects being synchronized (whose <code>pk</code>s will change).</p>
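<p>The mapping step described above, expressed as a plain-Python sketch (the function name is mine), is just a lookup that leaves unmapped values untouched:</p>

```python
def remap_fks(fk_values, pk_map):
    """Apply the {old_pk: new_pk} mapping to a sequence of FK values,
    leaving values with no entry in the mapping unchanged."""
    return [pk_map.get(pk, pk) for pk in fk_values]
```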
<h1>Problem</h1>
<p>I want to delete the old database without <code>CASCADE</code>ing the deletion. I would like to set <code>on_delete</code> to <code>DO_NOTHING</code> temporarily until I finish the migration.</p>
<p>Note: <code>ReadAt</code> code contains:</p>
<pre><code>read_by = models.ForeignKey(
Person,
on_delete=models.CASCADE, # change temporarily to DO_NOTHING at runtime
)
</code></pre>
<p>By changing <code>on_delete</code> at runtime I could do:</p>
<pre><code># DELETE OLD DATA
ReadAt.on_delete=DO_NOTHING # pseudo code I want to achieve
Person.objects.all().delete() # without code above, this deletes all ReadAt
# LOAD NEW DATA
with open("person_file.json", mode="w+", encoding="utf-8") as myfile:
myfile.write(json.dumps(data_new_server))
call_command("loaddata", "person_file.json")
# Then fix the PKs of ReadAt with a mapping I have made
ReadAt.on_delete=CASCADE # pseudo code, change back
# ideally check the integrity of ReadAt data now
</code></pre>
<h1>Alternative</h1>
<p>I thought of this, but got stuck:</p>
<ol>
<li>Find the max of imported pk MAX_PK.</li>
<li>Change all PK entries in Person (and FKs in ReadAt) by adding MAX_PK to them.
<ul>
<li>I don't know how to do this; the pk seems uneditable. This doesn't work: <a href="https://stackoverflow.com/a/77775898/5057078">https://stackoverflow.com/a/77775898/5057078</a></li>
</ul>
</li>
<li>Import new data (there is no overlap between the old and new data).</li>
<li>Change the PK of ReadAt from old_PK to new_PK.</li>
<li>Delete all Person whose PK >= MAX_PK; there are no ForeignKeys to them after the previous step.</li>
</ol>
|
<python><django><database><sqlite>
|
2024-10-16 05:10:15
| 1
| 696
|
Brambor
|
79,092,103
| 2,778,405
|
Shift by month shifts to wrong date
|
<p>Can someone explain this result to me?</p>
<pre class="lang-py prettyprint-override"><code>d = [
{'Date Enrolled': pd.Timestamp('2013-11-30'), 'metric': 2},
{'Date Enrolled': pd.Timestamp('2013-12-01'), 'metric': 0},
{'Date Enrolled': pd.Timestamp('2013-12-02'), 'metric': 0},
{'Date Enrolled': pd.Timestamp('2013-12-03'), 'metric': 0},
{'Date Enrolled': pd.Timestamp('2013-12-04'), 'metric': 0},
{'Date Enrolled': pd.Timestamp('2013-12-05'), 'metric': 0}
]
tdf = pd.DataFrame(d)
tdf = tdf.set_index(['Date Enrolled']).asfreq('D')
tdf['metric'].shift(periods=1, freq='ME')
</code></pre>
<p>result:</p>
<pre><code>Date Enrolled
2013-12-31 2
2013-12-31 0
2013-12-31 0
2013-12-31 0
2013-12-31 0
2013-12-31 0
Name: metric, dtype: int64
</code></pre>
<p>expected:</p>
<pre><code>Date Enrolled
2013-12-31 2
2014-01-31 0
2014-01-31 0
2014-01-31 0
2014-01-31 0
2014-01-31 0
Name: metric, dtype: int64
</code></pre>
|
<python><pandas>
|
2024-10-16 00:21:53
| 1
| 2,386
|
Jamie Marshall
|
79,092,005
| 4,003,134
|
Gradio 5.0.2 Image: how to dispatch the start_recording event
|
<p>On my <strong>WSL</strong> with <strong>Ubuntu 22.04</strong> and <strong>Python 3.10</strong> and <strong>Node 20.10</strong>, I would like to receive the <strong>"start_recording"</strong> event from <em>gr.Image</em>, running in <em>"webcam"</em> mode.
Since the event is dispatched in the <em>WebCam.svelte</em> frontend, how can I get it in my Python backend?
I have tried building a custom component, templated from gr.Image:</p>
<pre class="lang-py prettyprint-override"><code>class MyImage(StreamingInput, Component):
EVENTS = [
Events.clear,
Events.change,
Events.stream,
Events.select,
Events.upload,
Events.input,
Events.start_recording, # Added
]
</code></pre>
<p>and then</p>
<pre class="lang-py prettyprint-override"><code>import gradio as gr
from gradio_myimage import MyImage
def recording_started():
print('Yipee')
with gr.Blocks() as demo:
cam = MyImage(sources=['webcam'], streaming=True, interactive=True)
cam.start_recording(recording_started)
demo.launch()
</code></pre>
<p>But I get an <code>AttributeError: 'tuple' object has no attribute 'start_recording'</code></p>
<p>Can anyone please give me a hint as to what's missing? Thank you in advance.</p>
|
<python><gradio>
|
2024-10-15 23:13:14
| 1
| 1,029
|
x y
|
79,091,886
| 1,783,688
|
How to update JSON columns in FastAPI
|
<p>I am trying to set the value for a key in a JSON stored in a column in the database (MySQL).</p>
<p>Sample Code:</p>
<pre><code>import json
from sqlmodel import Field, JSON
from fastapi.middleware.cors import CORSMiddleware
from settings import settings
from typing import Annotated
from sqlmodel import Session, create_engine, SQLModel
from fastapi import Depends, FastAPI
class Organization(SQLModel, table=True):
__tablename__ = "organizations"
id: int | None = Field(default=None, primary_key=True)
name: str = Field()
meta: dict = Field(sa_type=JSON)
class MyService:
def __init__(self, session: Session):
self.session = session
def update(self, value: str, organization_id: str):
organization = self.session.get(Organization, organization_id)
print(organization)
organization.meta['custom_key'] = value
self.session.add(organization)
self.session.commit()
def get_session():
with Session(engine) as session:
yield session
engine = create_engine(settings.db_connection_string, echo=True)
SessionDep = Annotated[Session, Depends(get_session)]
app = FastAPI()
app.add_middleware(
CORSMiddleware,
allow_origins=settings.cors_origins,
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
@app.on_event("startup")
def on_startup():
SQLModel.metadata.create_all(engine)
@app.post("/update/{organization_id}")
def update(organization_id: str, value: str, session: SessionDep):
service = MyService(session)
service.update(value, organization_id)
return {"message": "success"}
</code></pre>
<p>The <code>organization</code> is read correctly from the database and the value of <code>meta</code> is correct. But I don't see any update statements run when I hit the <code>update</code> function.</p>
<p>Logs:</p>
<pre><code>INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO: Started reloader process [28024] using WatchFiles
INFO: Started server process [39612]
INFO: Waiting for application startup.
2024-10-15 15:03:16,139 INFO sqlalchemy.engine.Engine SELECT DATABASE()
2024-10-15 15:03:16,140 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:03:16,141 INFO sqlalchemy.engine.Engine SELECT @@sql_mode
2024-10-15 15:03:16,141 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:03:16,142 INFO sqlalchemy.engine.Engine SELECT @@lower_case_table_names
2024-10-15 15:03:16,142 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:03:16,143 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2024-10-15 15:03:16,143 INFO sqlalchemy.engine.Engine DESCRIBE `kraya_local`.`organizations`
2024-10-15 15:03:16,144 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:03:16,146 INFO sqlalchemy.engine.Engine COMMIT
INFO: Application startup complete.
2024-10-15 15:03:18,441 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2024-10-15 15:03:18,444 INFO sqlalchemy.engine.Engine SELECT organizations.id AS organizations_id, organizations.name AS organizations_name, organizations.meta AS organizations_meta
FROM organizations
WHERE organizations.id = %(pk_1)s
2024-10-15 15:03:18,444 INFO sqlalchemy.engine.Engine [generated in 0.00030s] {'pk_1': '9befc383-4db8-413e-934a-2e2f6dce51ed'}
name='Tripper Trails' meta={'custom_key': 'test'} id='9befc383-4db8-413e-934a-2e2f6dce51ed'
2024-10-15 15:03:18,457 INFO sqlalchemy.engine.Engine COMMIT
INFO: 127.0.0.1:52204 - "POST /update/9befc383-4db8-413e-934a-2e2f6dce51ed?value=test HTTP/1.1" 200 OK
</code></pre>
<p>Alternatively, if I set the model as</p>
<pre><code>class Organization(SQLModel, table=True):
__tablename__ = "organizations"
id: int | None = Field(default=None, primary_key=True)
name: str = Field()
meta: str = Field()
</code></pre>
<p>and do this in the function:</p>
<pre><code>
def update(self, value: str, organization_id: str):
organization = self.session.get(Organization, organization_id)
print(organization)
# organization.meta['custom_key'] = value
organization_metadata = json.loads(organization.meta)
organization_metadata['custom_key'] = value
organization.meta = json.dumps(organization_metadata)
self.session.add(organization)
self.session.commit()
</code></pre>
<p>For this, I can see the update statement run in the logs</p>
<pre><code>INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO: Started reloader process [20320] using WatchFiles
INFO: Started server process [23168]
INFO: Waiting for application startup.
2024-10-15 15:05:50,857 INFO sqlalchemy.engine.Engine SELECT DATABASE()
2024-10-15 15:05:50,857 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:05:50,858 INFO sqlalchemy.engine.Engine SELECT @@sql_mode
2024-10-15 15:05:50,858 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:05:50,859 INFO sqlalchemy.engine.Engine SELECT @@lower_case_table_names
2024-10-15 15:05:50,859 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:05:50,860 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2024-10-15 15:05:50,861 INFO sqlalchemy.engine.Engine DESCRIBE `kraya_local`.`organizations`
2024-10-15 15:05:50,861 INFO sqlalchemy.engine.Engine [raw sql] {}
2024-10-15 15:05:50,877 INFO sqlalchemy.engine.Engine COMMIT
INFO: Application startup complete.
2024-10-15 15:05:54,774 INFO sqlalchemy.engine.Engine BEGIN (implicit)
2024-10-15 15:05:54,778 INFO sqlalchemy.engine.Engine SELECT organizations.id AS organizations_id, organizations.name AS organizations_name, organizations.meta AS organizations_meta
FROM organizations
WHERE organizations.id = %(pk_1)s
2024-10-15 15:05:54,779 INFO sqlalchemy.engine.Engine [generated in 0.00082s] {'pk_1': '9befc383-4db8-413e-934a-2e2f6dce51ed'}
name='Tripper Trails' meta='{}' id='9befc383-4db8-413e-934a-2e2f6dce51ed'
2024-10-15 15:05:54,784 INFO sqlalchemy.engine.Engine UPDATE organizations SET meta=%(meta)s WHERE organizations.id = %(organizations_id)s
2024-10-15 15:05:54,785 INFO sqlalchemy.engine.Engine [generated in 0.00078s] {'meta': '{"custom_key": "test"}', 'organizations_id': '9befc383-4db8-413e-934a-2e2f6dce51ed'}
2024-10-15 15:05:54,790 INFO sqlalchemy.engine.Engine COMMIT
INFO: 127.0.0.1:52267 - "POST /update/9befc383-4db8-413e-934a-2e2f6dce51ed?value=test HTTP/1.1" 200 OK
</code></pre>
|
<python><fastapi><sqlmodel>
|
2024-10-15 22:06:56
| 1
| 1,039
|
otaku
|
79,091,722
| 10,012,446
|
Unable to install `evals` python cli for OpenAI
|
<p>Failing to install: <code>pip install evals</code>. For complete logs please see <a href="https://gist.github.com/sahilrajput03/e86baa88d35e5f5c0f63946671edbb77" rel="nofollow noreferrer">this gist file here</a>.</p>
<p>Tools Version:</p>
<ol>
<li><code>python --version</code>: <code>Python 3.9.6</code></li>
<li><code>pip --version</code>: <code>pip 24.2 from /Users/apple/Library/Python/3.9/lib/python/site-packages/pip (python 3.9)</code></li>
</ol>
<pre class="lang-bash prettyprint-override"><code>Collecting keras<2.8,>=2.7.0rc0 (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals)
Using cached keras-2.7.0-py2.py3-none-any.whl.metadata (1.3 kB)
Collecting tensorboard~=2.6 (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals)
Using cached tensorboard-2.17.1-py3-none-any.whl.metadata (1.6 kB)
Using cached tensorboard-2.17.0-py3-none-any.whl.metadata (1.6 kB)
Using cached tensorboard-2.13.0-py3-none-any.whl.metadata (1.8 kB)
Using cached tensorboard-2.11.2-py3-none-any.whl.metadata (1.9 kB)
Collecting protobuf (from google-generativeai->evals)
Using cached protobuf-3.20.3-cp39-cp39-macosx_10_9_x86_64.whl.metadata (679 bytes)
Collecting tensorboard~=2.6 (from tensorflow<3.0.0,>=2.4.0->spacy-universal-sentence-encoder->evals)
Using cached tensorboard-2.11.0-py3-none-any.whl.metadata (1.9 kB)
Using cached tensorboard-2.10.1-py3-none-any.whl.metadata (1.9 kB)
Using cached tensorboard-2.10.0-py3-none-any.whl.metadata (1.9 kB)
Using cached tensorboard-2.7.0-py3-none-any.whl.metadata (1.9 kB)
Using cached tensorboard-2.6.0-py3-none-any.whl.metadata (1.9 kB)
Collecting tensorflow<3.0.0,>=2.4.0 (from spacy-universal-sentence-encoder->evals)
Using cached tensorflow-2.7.1-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.9 kB)
Using cached tensorflow-2.7.0-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.9 kB)
INFO: pip is still looking at multiple versions of tf-keras to determine which version is compatible with other requirements. This could take a while.
Using cached tensorflow-2.6.5-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.6.4-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.6.3-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.6.2-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.6.1-cp39-cp39-macosx_10_14_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.6.0-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.5.3-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.5.2-cp39-cp39-macosx_10_14_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.5.1-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Using cached tensorflow-2.5.0-cp39-cp39-macosx_10_11_x86_64.whl.metadata (2.8 kB)
Collecting SQLAlchemy<3,>=1.4 (from langchain->evals)
Using cached SQLAlchemy-2.0.35-cp39-cp39-macosx_10_9_x86_64.whl.metadata (9.6 kB)
INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.
Using cached SQLAlchemy-2.0.34-cp39-cp39-macosx_10_9_x86_64.whl.metadata (9.6 kB)
Using cached SQLAlchemy-2.0.33-cp39-cp39-macosx_10_9_x86_64.whl.metadata (9.6 kB)
ERROR: Exception:
Traceback (most recent call last):
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_internal/cli/base_command.py", line 105, in _run_wrapper
status = _inner_run()
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_internal/cli/base_command.py", line 96, in _inner_run
return self.run(options, args)
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_internal/cli/req_command.py", line 67, in wrapper
return func(self, options, args)
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_internal/commands/install.py", line 379, in run
requirement_set = resolver.resolve(
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 95, in resolve
result = self._result = resolver.resolve(
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_vendor/resolvelib/resolvers.py", line 546, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/Users/apple/Library/Python/3.9/lib/python/site-packages/pip/_vendor/resolvelib/resolvers.py", line 457, in resolve
raise ResolutionTooDeep(max_rounds)
pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000
</code></pre>
<p>As seen at the end of the log above, the install fails with an error. Please help me debug this; I'm new to Python and couldn't figure out the answer, even after waiting about an hour for the install to fail, twice. The error is <code>pip._vendor.resolvelib.resolvers.ResolutionTooDeep: 200000</code>.</p>
<p>Please help. Thanks in advance.</p>
<p>Github Respository issue by me on this python library: <a href="https://github.com/openai/evals/issues/1563" rel="nofollow noreferrer">https://github.com/openai/evals/issues/1563</a></p>
<p>Community Post on OpenAI: <a href="https://community.openai.com/t/im-not-able-to-install-evals-python-library/981186" rel="nofollow noreferrer">https://community.openai.com/t/im-not-able-to-install-evals-python-library/981186</a></p>
|
<python><python-3.x><pip><openai-api>
|
2024-10-15 20:54:40
| 2
| 414
|
Sahil Rajput
|
79,091,689
| 7,473,954
|
HTTP certificate file authentication works fine with curl, with openssl, but not with Python module "requests"
|
<p>I'm trying to make a POST via "requests" in Python, to a URL where client certificate authentication is mandatory. The certificates are production ones (not self-signed).</p>
<p>The script is simple:</p>
<pre><code>import requests
print(requests.post('https://my_url.com', cert=('client.pem', 'key.pem'),data='<a>foo</a>', verify='ca.pem'))
</code></pre>
<p>I get the error :</p>
<pre><code>ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)
</code></pre>
<p>Of course, it works fine when I set :</p>
<pre><code>verify=False
</code></pre>
<p>With the same certificates (and key), it works fine with curl :</p>
<pre><code>[ec2-user@ec2-instance ~]$ curl -w "%{http_code}\n" -s -o /dev/null -X POST https://my_url.com --cert client.pem --key key.pem --cacert ca.pem --data-binary "@some_file"
200
</code></pre>
<p>And I got no errors using openssl :</p>
<pre><code>[ec2-user@ec2-instance ~]$ openssl s_client -connect my_url.com:443 -cert client.pem -key key.pem -CAfile ca.pem
CONNECTED(00000003)
depth=2 C = FR, O = ******, CN = ******
verify return:1
depth=1 C = FR, O = ******, OU = ******, organizationIdentifier = ******, CN = ******
verify return:1
depth=0 C = FR, L = ******, O = ******, CN = ******
verify return:1
---
Certificate chain
0 s:C = FR, L = ******, O = ******, CN = ******
i:C = FR, O = ******, OU = ******, organizationIdentifier = ******, CN = ******
a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
v:NotBefore: Jun 12 22:00:00 2024 GMT; NotAfter: Jun 12 21:59:59 2025 GMT
---
Server certificate
-----BEGIN CERTIFICATE-----
...blah...
-----END CERTIFICATE-----
subject=C = FR, L = ******, O = ******, CN = ******
issuer=C = FR, O = ******, OU = ******, organizationIdentifier = ******, CN = ******
---
Acceptable client certificate CA names
...blah...
---
SSL handshake has read 3450 bytes and written 5346 bytes
Verification: OK
---
New, TLSv1.3, Cipher is TLS_AES_256_GCM_SHA384
Server public key is 2048 bit
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
</code></pre>
<p>What am I missing here?</p>
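<p>One thing worth checking (stdlib-only sketch, function name is mine): whether <code>ca.pem</code> bundles the intermediate CA as well as the root, since <code>requests</code> verifies only against the file passed to <code>verify=</code>, and "unable to get local issuer certificate" often points at a missing intermediate:</p>

```python
def count_pem_certs(pem_text: str) -> int:
    """Count certificates in a PEM bundle.  If the file handed to
    requests' verify= holds only one certificate, the intermediate
    CA may be missing from the chain it can build."""
    return pem_text.count("-----BEGIN CERTIFICATE-----")

# Example: count_pem_certs(open("ca.pem").read())
```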
|
<python><curl><python-requests><openssl>
|
2024-10-15 20:44:00
| 1
| 528
|
Julien
|
79,091,636
| 8,849,755
|
How to install this package with `pip -e`?
|
<p>I want to install <a href="https://github.com/thliebig/CSXCAD" rel="nofollow noreferrer">this package</a>, which is part of <a href="https://github.com/thliebig/openEMS-Project" rel="nofollow noreferrer">the openEMS project</a>, in "development mode" so that I can change the code and see what happens. I have always done this by doing <code>pip install -e path/to/package</code>. For this one, however, this does not work:</p>
<pre><code>$ pip install --user -e .
DEPRECATION: Loading egg at /home/myself/.local/lib/python3.12/site-packages/openEMS-0.0.36-py3.12-linux-x86_64.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330
Obtaining file:///home/myself/opt/openEMS-Project/CSXCAD/python
Installing build dependencies ... done
Checking if build backend supports build_editable ... done
Getting requirements to build editable ... error
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
╰─> [23 lines of output]
Traceback (most recent call last):
File "/usr/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/usr/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 132, in get_requires_for_build_editable
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-zcbdct2_/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 464, in get_requires_for_build_editable
return self.get_requires_for_build_wheel(config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-zcbdct2_/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 332, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=[])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-zcbdct2_/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 302, in _get_build_requires
self.run_setup()
File "/tmp/pip-build-env-zcbdct2_/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 503, in run_setup
super().run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-zcbdct2_/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 318, in run_setup
exec(code, locals())
File "<string>", line 2, in <module>
ModuleNotFoundError: No module named 'Cython'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>It looks like it does not find Cython. However, it is installed:</p>
<pre><code>$ pip list | grep Cython
DEPRECATION: Loading egg at /home/myself/.local/lib/python3.12/site-packages/openEMS-0.0.36-py3.12-linux-x86_64.egg is deprecated. pip 24.3 will enforce this behaviour change. A possible replacement is to use pip for package installation.. Discussion can be found at https://github.com/pypa/pip/issues/12330
Cython 3.0.11
</code></pre>
<p>and if I do <code>python</code> and then <code>import cython</code>, it works fine.</p>
<p>If I do <code>python setup.py install --user</code> then it is installed and I can use it, but to test changes I have to uninstall it and install it again every time.</p>
<p>How can I either install it with <code>pip -e</code> or achieve the same goal using <code>python setup.py</code>?</p>
|
<python><pip>
|
2024-10-15 20:24:34
| 1
| 3,245
|
user171780
|
79,091,627
| 1,732,969
|
How to detect 2d datamatrix in image with Python
|
<p>I have a set of medical forms that may or may not contain a 2D datamatrix in a corner of the page. I need to detect whether the 2D datamatrix is present or not. For now, it's not necessary to read the content of the barcode.
I've been looking at different libraries, but I can't find one with OCR or anything else that can detect the presence of the 2D datamatrix.
I need to do this with Python.</p>
<p>Attached is a medical form example where a 2d datamatrix is located at the right bottom of the page. In this case, the algorithm should say "True" as the datamatrix exists in the page.</p>
<p>PS: I've tested AWS Textract and it does not detect the datamatrix.</p>
<p><a href="https://i.sstatic.net/jtUfZfRF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtUfZfRF.png" alt="Medical form example" /></a></p>
|
<python><amazon-web-services><ocr><barcode><datamatrix>
|
2024-10-15 20:19:27
| 2
| 1,593
|
eduardosufan
|
79,091,469
| 1,747,834
|
How to convert Java serialization data into JSON?
|
<p>A vendor-provided application we're maintaining stores (some of) its configuration in the form of "<em>Java serialization data, version 5</em>". A closer examination shows, that the actual contents is a <code>java.util.ArrayList</code> with several dozens of elements of the same vendor-specific type (<code>vendor.apps.datalayer.client.navs.shared.api.Function</code>).</p>
<p>As we seek to deploy and configure instances of this application with Ansible, we'd like all configuration-files to be human-readable -- and subject to <em>textual</em> revision-control.</p>
<p>To that end, we need to be able to decode the Java serialization binary data into a human-readable form of some kind -- preferably, JSON. That JSON also needs to be convertible back into the same Java serialization format for the application to read it.</p>
<p>The accepted answer to an <a href="/questions/30967364/">earlier question</a> on this topic is Java-based:</p>
<ol>
<li>Read the Java serialization data using <code>ObjectInputStream</code>, casting it to the known type -- thus instantiating each object.</li>
<li>Write it back out using GSON.</li>
</ol>
<p>Though usable, that approach is less than ideal for us because:</p>
<ul>
<li>it requires full knowledge of the vendor's type serialized in the data, even though we don't need to instantiate the objects;</li>
<li>we'd rather have a Python script that we could integrate into Ansible.</li>
</ul>
<p>There is a <a href="https://pypi.org/project/javaobj-py3/" rel="nofollow noreferrer">Python module</a> for this, but custom classes seem to require providing custom Python code -- a lot of custom code -- even when all the fields of the class are themselves of standard Java-types.</p>
<p>It is my understanding that the serialized data itself already provides all the information necessary -- one does not need to access the class-definition(s), unless one wants to invoke the <em>methods</em> of the class, which we don't...</p>
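<p>For illustration, the "version 5" label comes from the fixed four-byte stream header defined by <code>java.io.ObjectStreamConstants</code>; checking it requires no class knowledge at all (stdlib sketch, function name is mine):</p>

```python
import struct

STREAM_MAGIC = 0xACED   # java.io.ObjectStreamConstants.STREAM_MAGIC
STREAM_VERSION = 5      # java.io.ObjectStreamConstants.STREAM_VERSION

def is_java_serialization(data: bytes) -> bool:
    """True if `data` starts with the header that `file` reports
    as "Java serialization data, version 5"."""
    if len(data) < 4:
        return False
    magic, version = struct.unpack(">HH", data[:4])
    return magic == STREAM_MAGIC and version == STREAM_VERSION
```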
|
<python><java><serialization><deserialization>
|
2024-10-15 19:25:55
| 1
| 4,246
|
Mikhail T.
|
79,091,467
| 19,048,408
|
In Polars >=v1, what is the correct way to check if a datatype meets a selector criteria?
|
<p>Prior to v1, this was the preferred way to check whether a Polars datatype (e.g., looked up from a schema dict) is numeric:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
assert pl.UInt8 in pl.selectors.NUMERIC_DTYPES
assert pl.UInt8 not in pl.selectors.TEMPORAL_DTYPES
</code></pre>
<p>However, as of >=v1, it seems that these type lists aren't exported. Pyright gives the following error, for example:</p>
<pre><code>"TEMPORAL_DTYPES" is not exported from module "polars.selectors" (Pylance reportPrivateImportUsage)
</code></pre>
<p>I'm aware that <a href="https://docs.pola.rs/api/python/stable/reference/selectors.html" rel="nofollow noreferrer">the selector API exists</a>, but I'm still unable to figure out a basic type comparison like that.</p>
<p>I'm expecting <em>something</em> like this:</p>
<pre class="lang-py prettyprint-override"><code>assert pl.selectors.numeric().contains(pl.UInt8)
</code></pre>
|
<python><python-polars>
|
2024-10-15 19:25:32
| 2
| 468
|
HumpbackWhale194
|
79,091,455
| 2,778,405
|
shift by frequency groups by date?
|
<p>I'm trying to create a simple data cube of one metric by many different time frequencies. Example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
r = [{'Date Enrolled': pd.Timestamp('2022-04-13 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 1, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-14 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-15 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-16 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-17 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-18 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-19 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': np.nan, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-20 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 
'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-21 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-22 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-23 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-24 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-25 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-26 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': np.nan, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-27 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': 
np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-28 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-29 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-04-30 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 1, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 1, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-01 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-02 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 1.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-03 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-04 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-05 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 
'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-06 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-07 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-08 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-09 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-10 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': np.nan}, {'Date Enrolled': pd.Timestamp('2022-05-11 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-12 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-13 00:00:00'), 
'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-14 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-15 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-16 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-05-17 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-18 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-19 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-20 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 
'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-21 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-22 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-23 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-24 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-25 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-26 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-27 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-28 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 
'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-29 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-30 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-05-31 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 0, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-01 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-02 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-03 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-04 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-05 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 
'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-06 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-07 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-08 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-09 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 0, 'yearly_cumsum': 1, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-10 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 2, 'daily_sum': 1, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-11 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 2, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-12 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 2, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date 
Enrolled': pd.Timestamp('2022-06-13 00:00:00'), 'metric': 0, 'weekly_cumsum': 1, 'monthly_cumsum': 1, 'yearly_cumsum': 2, 'daily_sum': 0, 'weekly_sum': 1, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-14 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 'monthly_cumsum': 2, 'yearly_cumsum': 3, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-15 00:00:00'), 'metric': 1, 'weekly_cumsum': 2, 'monthly_cumsum': 3, 'yearly_cumsum': 4, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-16 00:00:00'), 'metric': 1, 'weekly_cumsum': 3, 'monthly_cumsum': 4, 'yearly_cumsum': 5, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-17 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 4, 'yearly_cumsum': 5, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-18 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 4, 'yearly_cumsum': 5, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-19 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 4, 'yearly_cumsum': 5, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-20 00:00:00'), 'metric': 1, 'weekly_cumsum': 4, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 6, 'yearly_sum': 
24, 'prev_week_1': 1.0, 'prev_week_2': 0.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-21 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-22 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-23 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-24 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-25 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-26 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-27 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 0, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 1.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-28 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 
'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-29 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 5, 'yearly_cumsum': 6, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-06-30 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 'monthly_cumsum': 6, 'yearly_cumsum': 7, 'daily_sum': 1, 'weekly_sum': 3, 'monthly_sum': 6, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-01 00:00:00'), 'metric': 2, 'weekly_cumsum': 3, 'monthly_cumsum': 2, 'yearly_cumsum': 9, 'daily_sum': 2, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-02 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 2, 'yearly_cumsum': 9, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-03 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 2, 'yearly_cumsum': 9, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-04 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 2, 'yearly_cumsum': 9, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 0.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-05 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 2, 'yearly_cumsum': 9, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': 
pd.Timestamp('2022-07-06 00:00:00'), 'metric': 2, 'weekly_cumsum': 2, 'monthly_cumsum': 4, 'yearly_cumsum': 11, 'daily_sum': 2, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-07 00:00:00'), 'metric': 0, 'weekly_cumsum': 2, 'monthly_cumsum': 4, 'yearly_cumsum': 11, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-08 00:00:00'), 'metric': 1, 'weekly_cumsum': 3, 'monthly_cumsum': 5, 'yearly_cumsum': 12, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-09 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 5, 'yearly_cumsum': 12, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-10 00:00:00'), 'metric': 0, 'weekly_cumsum': 3, 'monthly_cumsum': 5, 'yearly_cumsum': 12, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-11 00:00:00'), 'metric': 1, 'weekly_cumsum': 4, 'monthly_cumsum': 6, 'yearly_cumsum': 13, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 3.0, 'prev_week_2': 0.0, 'prev_week_3': 1.0}, {'Date Enrolled': pd.Timestamp('2022-07-12 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 6, 'yearly_cumsum': 13, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-13 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 6, 'yearly_cumsum': 13, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 
'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-14 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 'monthly_cumsum': 7, 'yearly_cumsum': 14, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-15 00:00:00'), 'metric': 1, 'weekly_cumsum': 2, 'monthly_cumsum': 8, 'yearly_cumsum': 15, 'daily_sum': 1, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-16 00:00:00'), 'metric': 0, 'weekly_cumsum': 2, 'monthly_cumsum': 8, 'yearly_cumsum': 15, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-17 00:00:00'), 'metric': 0, 'weekly_cumsum': 2, 'monthly_cumsum': 8, 'yearly_cumsum': 15, 'daily_sum': 0, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-18 00:00:00'), 'metric': 2, 'weekly_cumsum': 4, 'monthly_cumsum': 10, 'yearly_cumsum': 17, 'daily_sum': 2, 'weekly_sum': 4, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 3.0, 'prev_week_3': 4.0}, {'Date Enrolled': pd.Timestamp('2022-07-19 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 10, 'yearly_cumsum': 17, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-20 00:00:00'), 'metric': 0, 'weekly_cumsum': 0, 'monthly_cumsum': 10, 'yearly_cumsum': 17, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-21 00:00:00'), 'metric': 1, 'weekly_cumsum': 1, 
'monthly_cumsum': 11, 'yearly_cumsum': 18, 'daily_sum': 1, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-22 00:00:00'), 'metric': 1, 'weekly_cumsum': 2, 'monthly_cumsum': 12, 'yearly_cumsum': 19, 'daily_sum': 1, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-23 00:00:00'), 'metric': 0, 'weekly_cumsum': 2, 'monthly_cumsum': 12, 'yearly_cumsum': 19, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-24 00:00:00'), 'metric': 0, 'weekly_cumsum': 2, 'monthly_cumsum': 12, 'yearly_cumsum': 19, 'daily_sum': 0, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}, {'Date Enrolled': pd.Timestamp('2022-07-25 00:00:00'), 'metric': 1, 'weekly_cumsum': 3, 'monthly_cumsum': 13, 'yearly_cumsum': 20, 'daily_sum': 1, 'weekly_sum': 3, 'monthly_sum': 17, 'yearly_sum': 24, 'prev_week_1': 4.0, 'prev_week_2': 4.0, 'prev_week_3': 0.0}]
tdf = pd.DataFrame(r)
tdf = tdf.set_index('Date Enrolled')
</code></pre>
<p>I want to add time frequencies for previous_month and previous_year. I would think shift would work for this:</p>
<pre class="lang-py prettyprint-override"><code>tdf['previous_month'] = tdf['monthly_sum'].shift(periods=1, freq='ME')
</code></pre>
<p>However, it throws <code>ValueError: cannot reindex on an axis with duplicate labels</code></p>
<p>Upon inspecting what is returned from shift, you can see that it actually shifted everything to the month-end date rather than by an increment of one month.</p>
<p><a href="https://i.sstatic.net/eG6utGvI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eG6utGvI.png" alt="enter image description here" /></a></p>
<p>There has to be an easy way to do this? Anyone have a simple solution as to how to get the previous month totals next to the last months totals?</p>
<p>** EDIT 1 **</p>
<p>I've also found odd behavior where it takes this column data:</p>
<pre><code>Date Enrolled
2013-11-30 2
2013-12-01 0
2013-12-02 0
2013-12-03 0
</code></pre>
<p>and shift it like this:</p>
<pre><code>tdf['monthly_sum'].shift(periods=1, freq='ME')
</code></pre>
<p>returns this:</p>
<pre><code>2013-12-31 2
2013-12-31 0
2013-12-31 0
</code></pre>
<p>See how it shifted them all to the same month end? WTF...</p>
|
<python><pandas>
|
2024-10-15 19:23:04
| 1
| 2,386
|
Jamie Marshall
|
79,091,430
| 13,058,538
|
Request with proxy can't reach FastAPI webserver
|
<p>I want to test my HTTP downloaders in GitLab CI/CD, so I chose FastAPI as the web server against which the downloaders will be tested.</p>
<p>Currently, I am trying to make it work locally, but the request always fails with a 503 status code when I am using proxies (which I must use for testing). Without proxies I get a 200 status code.</p>
<p>Here is my <strong>docker-compose.yml</strong> to set up FastAPI web server:</p>
<pre><code>services:
fastapi:
build: .
ports:
- "8000:8000"
networks:
- test-network
networks:
test-network:
driver: bridge
</code></pre>
<p>Here is my Dockerfile:</p>
<pre><code>FROM python:3.12-slim
WORKDIR /app
COPY main.py /app
RUN pip install fastapi uvicorn
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>Here is the FastAPI <strong>main.py</strong>:</p>
<pre><code>from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Allow all origins, methods, and headers
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

@app.get("/test")
async def read_test():
    return {"message": "Hello, World!"}
</code></pre>
<p>And lastly, the test sample that fails while using proxies and succeeds while NOT using proxies <strong>test_simple.py</strong>:</p>
<pre><code>import aiohttp
import asyncio
from aiohttp import BasicAuth

proxy_host: str = "HOST"
proxy_pass: str = "PASS"
proxy_port: int = 60000
proxy_user: str = "USER"

async def simple_test():
    proxy_url = f"http://{proxy_host}:{proxy_port}"
    proxy_auth = BasicAuth(proxy_user, proxy_pass)
    async with aiohttp.ClientSession() as session:
        async with session.get(
            'http://localhost:8000/test', proxy=proxy_url, proxy_auth=proxy_auth
        ) as response:
            print(f"Status: {response.status}")
            print(f"Content: {await response.text()}")

asyncio.run(simple_test())
</code></pre>
<p>I tried making a request with curl to google.com through the proxy and it was successful, so the proxies themselves are not the issue.</p>
<p>I guess the issue is that I need some extra configuration within FastAPI that I don't know about.</p>
<p>Note that I don't want to put my testing code in docker-compose, since many more downloaders will be tested and I want each one as a separate CI/CD job; ideally, docker-compose would spin up the FastAPI web server and my downloaders would be tested against it, locally or in GitLab CI/CD, as separate pytest commands.</p>
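<p>One suspicion (unverified): since the request is tunneled through a remote forward proxy, the proxy resolves <code>localhost</code> on its own machine, so it can never reach port 8000 here, which would explain the proxy's 503. A hedged sketch of advertising an address the proxy can actually reach (<code>lan_ip</code> is my own helper, not a library function):</p>

```python
import socket

def lan_ip() -> str:
    """Best-effort guess of this machine's outward-facing IPv4 address."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # connect() on a UDP socket sends no packets; it only selects a route.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # fallback when no route is available
    finally:
        s.close()

# The proxy would then be asked to fetch a host it can resolve and reach:
base_url = f"http://{lan_ip()}:8000"
```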
|
<python><docker-compose><fastapi><aiohttp>
|
2024-10-15 19:14:01
| 0
| 523
|
Dave
|
79,091,213
| 2,772,805
|
Detach tabs with PyQt5 - keeping all widgets into the detached tabs
|
<p>In a PyQt application, I want to deliver detachable tabs: the user can detach a tab with a double click and reattach it to the main window by closing the detached window.</p>
<p>I have minimized the problem to the following script, which almost works, except that when I have multiple QWidgets in the layout, only every other one is added to the detached layout. This is very puzzling.</p>
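<p>My current suspicion, illustrated below in plain Python (a hedged sketch, no Qt required; <code>move_all_widgets</code> is just an illustrative helper): <code>addWidget()</code> reparents a widget out of its source layout, so iterating the source by a growing index skips every other item, while draining with <code>takeAt(0)</code> would not:</p>

```python
# Draining the source layout with takeAt(0) avoids the index shifting
# that happens when widgets are reparented out of it one by one.
def move_all_widgets(src_layout, dst_layout):
    while src_layout.count():
        item = src_layout.takeAt(0)
        widget = item.widget()
        if widget is not None:
            dst_layout.addWidget(widget)
```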
<p>Here is the code:</p>
<pre><code>import sys

from PyQt5.QtWidgets import (
    QApplication, QMainWindow, QTabWidget, QWidget, QVBoxLayout,
    QLabel, QStatusBar
)

# Variable to store detached windows
detached_windows = {}


def create_combined_tab():
    widget = QWidget()
    layout = QVBoxLayout()

    # Create five QLabels for demonstration
    label1 = QLabel("This is QLabel 1.")
    label1.setObjectName('label1')
    label2 = QLabel("This is QLabel 2.")
    label2.setObjectName('label2')
    label3 = QLabel("This is QLabel 3.")
    label3.setObjectName('label3')
    label4 = QLabel("This is QLabel 4.")
    label4.setObjectName('label4')
    label5 = QLabel("This is QLabel 5.")
    label5.setObjectName('label5')

    layout.addWidget(label1)
    layout.addWidget(label2)
    layout.addWidget(label3)
    layout.addWidget(label4)
    layout.addWidget(label5)

    widget.setLayout(layout)
    return widget


def detach_tab(tab_widget, index):
    """Detaches the tab and displays it in a new window."""
    tab_content = tab_widget.widget(index)
    tab_name = tab_widget.tabText(index)

    # Remove the tab from the QTabWidget
    tab_widget.removeTab(index)

    # Create a new window for the detached tab
    detached_window = QMainWindow()
    detached_window.setWindowTitle(tab_name)
    detached_window.setGeometry(200, 200, 800, 600)

    detached_content = QWidget()
    detached_layout = QVBoxLayout()
    detached_content.setLayout(detached_layout)

    # Move the content of the tab to the new window
    print(f"Before detaching: {tab_content.layout().count()} widgets")  # Debugging output
    for i in range(tab_content.layout().count()):
        item = tab_content.layout().itemAt(i)
        print(i, item)
        if item is not None and item.widget() is not None:
            widget = item.widget()
            detached_layout.addWidget(widget)  # Add the widget to the new layout
    print(f"After detaching: {detached_layout.count()} widgets in new layout")  # Debugging output

    detached_window.setCentralWidget(detached_content)
    detached_window.setStatusBar(QStatusBar())
    detached_window.statusBar().showMessage(f"{tab_name} - Ready")

    # Handle the close event of the detached window
    def on_close_event(event):
        tab_widget.addTab(detached_content, tab_name)  # Reattach the content
        tab_widget.setCurrentWidget(detached_content)  # Select the reattached tab
        event.accept()
        del detached_windows[tab_name]

    detached_window.closeEvent = on_close_event
    detached_window.show()  # Show the new window
detached_windows[tab_name] = detached_window # Keep track of the detached window
# Set up the application
app = QApplication(sys.argv)
main_window = QMainWindow()
main_window.setGeometry(200, 200, 800, 600)
tabs = QTabWidget(main_window)
tabs.setMovable(True)
# Create and add the combined tab
combined_tab = create_combined_tab()
tabs.addTab(combined_tab, "Tab - QLabels")
# Connect the double-click event to detach the tab
tabs.tabBarDoubleClicked.connect(lambda index: detach_tab(tabs, index))
main_window.setCentralWidget(tabs)
main_window.setStatusBar(QStatusBar())
main_window.statusBar().showMessage("Application ready")
main_window.show() # Show the main window
sys.exit(app.exec_()) # Start the application event loop
</code></pre>
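For what it's worth, the "one in two" symptom matches a classic iterate-while-mutating pattern: <code>addWidget()</code> reparents each widget, which also removes it from the original layout, so the indices passed to <code>itemAt(i)</code> skip every other item. A framework-free sketch of the same effect (the list names are made up for illustration):

```python
src = ["label1", "label2", "label3", "label4", "label5"]
dst = []
for i in range(len(src)):        # like range(layout.count()): the count is captured once
    item = src[i] if i < len(src) else None  # like layout.itemAt(i) on a shrinking layout
    if item is not None:
        src.remove(item)         # addWidget() reparents, i.e. removes from the old layout
        dst.append(item)
print(dst)  # ['label1', 'label3', 'label5'] -- every other widget is skipped
```

The usual remedies are iterating backwards, or repeatedly taking the item at index 0 until the layout is empty.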
|
<python><pyqt5>
|
2024-10-15 18:05:19
| 1
| 429
|
PBrockmann
|
79,091,168
| 3,252,285
|
How to apply a threshold filter to a layer?
|
<p>Having an array like this :</p>
<pre><code>input = np.array([[0.04, -0.8, -1.2, 1.3, 0.85, 0.09, -0.08, 0.2]])
</code></pre>
<p>I want to change all the values (of the last dimension) between -0.1 and 0.1 to zero, and the rest to 1:</p>
<pre><code>filtred = [[0, 1, 1, 1, 1, 0, 0, 1]]
</code></pre>
<p>Using the <code>Lambda</code> layer is not my preferred choice (I would prefer to find a solution with a native layer which could be easily converted to TFLite without activating the <code>SELECT_TF_OPS</code> or the <code>TFLITE_BUILTINS</code> options), but I tried it anyway:</p>
<pre><code>layer = tf.keras.layers.Lambda(lambda x: 0 if x <0.1 and x>-0.1 else 1)
layer(input)
</code></pre>
<p>I am getting :</p>
<pre><code>ValueError: Exception encountered when calling Lambda.call().
The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
Arguments received by Lambda.call():
β’ inputs=tf.Tensor(shape=(6,), dtype=float32)
β’ mask=None
β’ training=None
</code></pre>
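For reference, the thresholding itself can be expressed without any Python-level branching, which is what trips up the <code>Lambda</code> above; a NumPy sketch of the intended filter:

```python
import numpy as np

x = np.array([[0.04, -0.8, -1.2, 1.3, 0.85, 0.09, -0.08, 0.2]])
# zero wherever |x| < 0.1, one everywhere else -- fully vectorized,
# no `if` on a whole array (that is what raises the ambiguity error)
filtered = np.where(np.abs(x) < 0.1, 0, 1)
print(filtered)  # [[0 1 1 1 1 0 0 1]]
```

The same elementwise expression written with <code>tf.where(tf.abs(x) < 0.1, ...)</code> would run inside a <code>Lambda</code> layer; whether that converts cleanly to TFLite without the extra op sets is a separate question.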
|
<python><numpy><keras>
|
2024-10-15 17:49:49
| 1
| 4,864
|
Nassim MOUALEK
|
79,090,997
| 3,070,181
|
How can I run a tkinter application so that pyautogui's locateOnScreen function can work?
|
<p>The frame appears at the top of the screen, but although I am using <em>update_idletasks()</em>, the buttons do not appear before the <em>locateOnScreen</em> function is called.</p>
<p>If I include <em>update()</em>, then the buttons do appear. However, the <em>_locate_button</em> call in <em>main</em> does not find the button.</p>
<p>The button is found if I comment out the <em>_locate_button</em> call in <em>main</em> and click on <em>Search</em>.</p>
<pre><code>import tkinter as tk
from tkinter import ttk
import pyautogui
def main() -> None:
root = tk.Tk()
root.title('Locate button')
root.geometry('300x200+0+0')
button = ttk.Button(root, text='Search', command=_locate_button)
button.grid(row=0, column=0, padx=5, pady=5)
button = ttk.Button(root, text='Exit', underline=1,)
button.grid(row=0, column=1, padx=5, pady=5)
root.update_idletasks()
root.update()
_locate_button()
root.mainloop()
def _locate_button():
# pos = button.winfo_geometry()
pos = pyautogui.locateOnScreen('exit_button.png')
print(pos)
if __name__ == '__main__':
main()
</code></pre>
<p>I know that I can find the location using <em>winfo_geometry</em>, but I am trying to establish the principle using <em>pyautogui</em></p>
<p>System details:</p>
<pre><code>Operating System: Manjaro Linux
KDE Plasma Version: 6.0.5
KDE Frameworks Version: 6.4.0
Qt Version: 6.7.2
Kernel Version: 6.6.41-1-MANJARO (64-bit)
Graphics Platform: X11
scrot: 1.11.1
pyautogui: 0.9.54
</code></pre>
|
<python><tkinter><pyautogui>
|
2024-10-15 16:57:54
| 0
| 3,841
|
Psionman
|
79,090,953
| 8,284,452
|
How to debug segfaults with mod_wsgi and conda/mamba environments?
|
<p>I have tried in vain to get core dumping and gdb to work, but cannot seem to get a core to dump or gdb to find the symbols it needs. So here's the gist of everything happening.</p>
<p>I have installed miniforge3. I have done <code>mamba install mod_wsgi</code> in the base environment to install mod_wsgi 5.0.0. I have two virtual hosts in Apache; I am creating a virtual environment for each virtual host with <code>mamba create -n <name of env></code>. The <code>sys.prefix</code> is this:</p>
<pre><code>/opt/miniforge3
</code></pre>
<p>Printing <code>sys.path</code> yields this:</p>
<pre><code>['', '/opt/miniforge3/lib/python312.zip', '/opt/miniforge3/lib/python3.12', '/opt/miniforge3/lib/python3.12/lib-dynload', '/opt/miniforge3/lib/python3.12/site-packages']
</code></pre>
<p>Running <code>echo $PATH</code> as root user yields this. Upon installation of miniforge3 as root, it appeared to add <code>/opt/miniforge3/bin</code> to the path automatically, but I still had to export it later by adding a file in <code>/etc/profile.d</code> so a non-privileged user would have it in their path, so now it appears twice when you're logged in as root. I don't know if that will cause issues or not:</p>
<pre><code>/opt/miniforge3/bin:/opt/miniforge3/condabin:/root/.nvm/versions/node/v20.5.0/bin:/root/.local/bin:/root/bin:/opt/miniforge3/bin:/sbin:/bin:/usr/sbin:/usr/bin:/var/cfengine/bin
</code></pre>
<p>In <code>/etc/httpd/conf.modules.d</code> I have a .conf file that loads the WSGI module like so:</p>
<pre><code><IfModule !wsgi_module>
LoadModule wsgi_module /opt/miniforge3/lib/python3.12/site-packages/mod_wsgi/server/mod_wsgi-py312.cpython-312-x86_64-linux-gnu.so
</IfModule>
</code></pre>
<p>Both virtual environments I have created have python 3.12.7 installed to match mod_wsgi.</p>
<p>My virtual hosts .conf file looks like this (edited to relevant parts):</p>
<pre><code><VirtualHost *:443>
ServerName domainNameOne.com
## Vhost docroot
DocumentRoot "/var/www/vhosts/domainNameOne"
## Logging
ErrorLog "/var/log/httpd/domainNameOne-error.log"
LogLevel info
ServerSignature Off
CustomLog "/var/log/httpd/domainNameOne-access.log" combined
## WSGI configuration
WSGIApplicationGroup %{GLOBAL}
WSGIDaemonProcess domainNameOne display-name=%{GROUP} home=/var/www/vhosts/domainNameOne python-home=/opt/miniforge3/envs/app1 threads=1 user=hydro
WSGIProcessGroup domainNameOne
WSGIScriptAlias / "/var/www/vhosts/domainNameOne/wsgi.py"
## Anything start with dot
<DirectoryMatch "^\.|\/\.">
Require all denied
</DirectoryMatch>
<LocationMatch "\/\.">
Require all denied
</LocationMatch>
<Location />
Require all granted
</Location>
</VirtualHost>
<VirtualHost *:443>
ServerName domainNameTwo.com
## Vhost docroot
DocumentRoot "/var/www/vhosts/domainNameTwo"
## Logging
ErrorLog "/var/log/httpd/domainNameTwo-error.log"
LogLevel info
ServerSignature Off
CustomLog "/var/log/httpd/domainNameTwo-access.log" combined
## WSGI configuration
WSGIApplicationGroup %{GLOBAL}
WSGIDaemonProcess domainNameTwo display-name=%{GROUP} home=/var/www/vhosts/domainNameTwo python-home=/opt/miniforge3/envs/app2 threads=1 user=hydro
WSGIProcessGroup domainNameTwo
WSGIScriptAlias / "/var/www/vhosts/domainNameTwo/wsgi.py"
## Anything start with dot
<DirectoryMatch "^\.|\/\.">
Require all denied
</DirectoryMatch>
<LocationMatch "\/\.">
Require all denied
</LocationMatch>
<Location />
Require all granted
</Location>
</VirtualHost>
</code></pre>
<p>Apache's .conf:</p>
<pre><code>ErrorLog "/var/log/httpd/error_log"
LogLevel info
</code></pre>
<p>For the rest of this, I'll focus on just one of the applications since the errors I receive are the same between them. So, <code>domainNameTwo</code> WSGI script looks like this (I am running a Flask application using blueprints):</p>
<pre><code>import sys
from api import create_app
from os import getcwd
sys.path.insert(0, getcwd() + "/api")
sys.path.append(getcwd() + "/api/models")
application = create_app()
</code></pre>
<p>Now, when I <code>systemctl restart httpd</code>, I go to <code>domainNameTwo.com</code> and receive a 500 error. In the apache's <code>error_log</code> I receive messages about segfaults on the WSGI app:</p>
<pre><code>[Tue Oct 15 11:24:04.098111 2024] [wsgi:info] [pid 280203:tid 280203] mod_wsgi (pid=280203): Starting process 'domainNameOne' with uid=36143, gid=48 and threads=1.
[Tue Oct 15 11:24:04.099092 2024] [wsgi:info] [pid 280204:tid 280204] mod_wsgi (pid=280204): Starting process 'domainNameTwo' with uid=36143, gid=48 and threads=1.
[Tue Oct 15 11:24:04.101579 2024] [wsgi:info] [pid 280203:tid 280203] mod_wsgi (pid=280203): Python home /opt/miniforge3/envs/app1.
[Tue Oct 15 11:24:04.101656 2024] [wsgi:info] [pid 280203:tid 280203] mod_wsgi (pid=280203): Initializing Python.
[Tue Oct 15 11:24:04.101982 2024] [wsgi:info] [pid 280204:tid 280204] mod_wsgi (pid=280204): Python home /opt/miniforge3/envs/app2.
[Tue Oct 15 11:24:04.102046 2024] [wsgi:info] [pid 280204:tid 280204] mod_wsgi (pid=280204): Initializing Python.
[Tue Oct 15 11:24:04.102127 2024] [mpm_event:notice] [pid 280200:tid 280200] AH00489: Apache/2.4.57 (Red Hat Enterprise Linux) OpenSSL/3.0.7 mod_wsgi/5.0.0 Python/3.12 configured -- resuming normal operations
[Tue Oct 15 11:24:04.102145 2024] [mpm_event:info] [pid 280200:tid 280200] AH00490: Server built: Aug 5 2024 00:00:00
[Tue Oct 15 11:24:04.102163 2024] [core:notice] [pid 280200:tid 280200] AH00094: Command line: '/usr/sbin/httpd -D FOREGROUND'
[Tue Oct 15 11:24:04.106853 2024] [http2:info] [pid 280205:tid 280205] h2_workers: created with min=25 max=37 idle_ms=600000
[Tue Oct 15 11:24:04.107965 2024] [wsgi:info] [pid 280205:tid 280205] mod_wsgi (pid=280205): Initializing Python.
[Tue Oct 15 11:24:04.109002 2024] [http2:info] [pid 280207:tid 280207] h2_workers: created with min=25 max=37 idle_ms=600000
[Tue Oct 15 11:24:04.110156 2024] [wsgi:info] [pid 280207:tid 280207] mod_wsgi (pid=280207): Initializing Python.
[Tue Oct 15 11:24:04.120330 2024] [http2:info] [pid 280206:tid 280206] h2_workers: created with min=25 max=37 idle_ms=600000
[Tue Oct 15 11:24:04.121614 2024] [wsgi:info] [pid 280206:tid 280206] mod_wsgi (pid=280206): Initializing Python.
[Tue Oct 15 11:24:04.123846 2024] [wsgi:info] [pid 280207:tid 280207] mod_wsgi (pid=280207): Attach interpreter ''.
[Tue Oct 15 11:24:04.138679 2024] [wsgi:info] [pid 280206:tid 280206] mod_wsgi (pid=280206): Attach interpreter ''.
[Tue Oct 15 11:24:04.142589 2024] [wsgi:info] [pid 280205:tid 280205] mod_wsgi (pid=280205): Attach interpreter ''.
[Tue Oct 15 11:24:04.143744 2024] [wsgi:info] [pid 280207:tid 280207] mod_wsgi (pid=280207): Imported 'mod_wsgi'.
[Tue Oct 15 11:24:04.161980 2024] [wsgi:info] [pid 280205:tid 280205] mod_wsgi (pid=280205): Imported 'mod_wsgi'.
[Tue Oct 15 11:24:04.163719 2024] [wsgi:info] [pid 280206:tid 280206] mod_wsgi (pid=280206): Imported 'mod_wsgi'.
[Tue Oct 15 11:50:45.699499 2024] [core:notice] [pid 280200:tid 280200] AH00051: child pid 280204 exit signal Segmentation fault (11), possible coredump in /etc/httpd
[Tue Oct 15 11:50:45.699649 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=280204): Process 'domainNameTwo' has died, deregister and restart it.
[Tue Oct 15 11:50:45.699668 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=280204): Process 'domainNameTwo' terminated by signal 11
[Tue Oct 15 11:50:45.699690 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=280204): Process 'domainNameTwo' has been deregistered and will no longer be monitored.
[Tue Oct 15 11:50:45.701459 2024] [wsgi:info] [pid 286088:tid 286088] mod_wsgi (pid=286088): Starting process 'domainNameTwo' with uid=36143, gid=48 and threads=1.
[Tue Oct 15 11:50:45.704146 2024] [wsgi:info] [pid 286088:tid 286088] mod_wsgi (pid=286088): Python home /opt/miniforge3/envs/app2.
[Tue Oct 15 11:50:45.704212 2024] [wsgi:info] [pid 286088:tid 286088] mod_wsgi (pid=286088): Initializing Python.
[Tue Oct 15 11:50:46.702656 2024] [core:notice] [pid 280200:tid 280200] AH00051: child pid 286088 exit signal Segmentation fault (11), possible coredump in /etc/httpd
[Tue Oct 15 11:50:46.702719 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=286088): Process 'domainNameTwo' has died, deregister and restart it.
[Tue Oct 15 11:50:46.702728 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=286088): Process 'domainNameTwo' terminated by signal 11
[Tue Oct 15 11:50:46.702735 2024] [wsgi:info] [pid 280200:tid 280200] mod_wsgi (pid=286088): Process 'domainNameTwo' has been deregistered and will no longer be monitored.
[Tue Oct 15 11:50:46.704057 2024] [wsgi:info] [pid 286103:tid 286103] mod_wsgi (pid=286103): Starting process 'domainNameTwo' with uid=36143, gid=48 and threads=1.
[Tue Oct 15 11:50:46.706740 2024] [wsgi:info] [pid 286103:tid 286103] mod_wsgi (pid=286103): Python home /opt/miniforge3/envs/app2.
[Tue Oct 15 11:50:46.706800 2024] [wsgi:info] [pid 286103:tid 286103] mod_wsgi (pid=286103): Initializing Python.
</code></pre>
<p>In the virtual host specific error log, I get this:</p>
<pre><code>[Tue Oct 15 11:24:04.129726 2024] [wsgi:info] [pid 280204:tid 280204] mod_wsgi (pid=280204): Attach interpreter ''.
[Tue Oct 15 11:50:45.275309 2024] [wsgi:info] [pid 280204:tid 280304] [remote 10.159.64.61:53297] mod_wsgi (pid=280204, process='domainNameTwo', application=''): Loading Python script file '/var/www/vhosts/domainNameTwo/wsgi.py'.
[Tue Oct 15 11:50:45.667099 2024] [wsgi:error] [pid 280205:tid 280336] [client 10.159.64.61:53297] Truncated or oversized response headers received from daemon process 'domainNameTwo': /var/www/vhosts/domainNameTwo/wsgi.py
[Tue Oct 15 11:50:45.715999 2024] [wsgi:info] [pid 286088:tid 286088] mod_wsgi (pid=286088): Attach interpreter ''.
[Tue Oct 15 11:50:46.145008 2024] [wsgi:info] [pid 286088:tid 286091] [remote 10.159.64.61:53298] mod_wsgi (pid=286088, process='domainNameTwo', application=''): Loading Python script file '/var/www/vhosts/domainNameTwo/wsgi.py'.
[Tue Oct 15 11:50:46.509611 2024] [wsgi:error] [pid 280206:tid 280359] [client 10.159.64.61:53298] Truncated or oversized response headers received from daemon process 'domainNameTwo': /var/www/vhosts/domainNameTwo/wsgi.py, referer: https://domainNameTwo.com/
[Tue Oct 15 11:50:46.718194 2024] [wsgi:info] [pid 286103:tid 286103] mod_wsgi (pid=286103): Attach interpreter ''.
</code></pre>
<p>If I uninstall python packages that are related to SQL in some way (flask-sqlalchemy, psycopg2, geoalchemy2, etc), then I get messages about those modules not being found obviously, but I get no segfaults. I have tried setting <code>header-buffer-size=131072</code> (quadruple the default value) in the <code>WSGIDaemonProcess</code> of each virtual host, but it doesn't fix the problem.</p>
<p>I am on RedHat and core dumping seems impossible for some reason. As root, I have set <code>ulimit -c unlimited</code> and have tried altering the <code>proc/sys/kernel/core_pattern</code> to many different things to get it to dump with no luck (I have it currently set back to the default which is <code>|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h</code>). When I use <code>coredumpctl list</code>, everything shows "none" even though <code>coredump.conf</code> is set to use the defaults:</p>
<pre><code>[Coredump]
#Storage=external
#Compress=yes
#ProcessSizeMax=2G
#ExternalSizeMax=2G
#JournalSizeMax=767M
#MaxUse=
#KeepFree=
</code></pre>
<p>I have tried uncommenting <code>Storage=external</code> to see if that would force it, but no luck.</p>
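As a side note on the core-dump part: a detail that often bites here (an assumption about this setup, not something verified above) is that <code>ulimit -c unlimited</code> in a root shell does not apply to services started by systemd; httpd's core limit comes from its unit. A sketch of the kind of configuration that routes plain core files to disk instead:

```shell
# assumption: bypass systemd-coredump and write plain core files to /tmp
echo '/tmp/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
# allow processes that changed uid (like mod_wsgi daemon workers) to dump
echo 1 | sudo tee /proc/sys/fs/suid_dumpable
# raise httpd's own core limit via a systemd drop-in, then restart
sudo mkdir -p /etc/systemd/system/httpd.service.d
printf '[Service]\nLimitCORE=infinity\n' | sudo tee /etc/httpd-drop-in.conf >/dev/null
sudo cp /etc/httpd-drop-in.conf /etc/systemd/system/httpd.service.d/core.conf
sudo systemctl daemon-reload && sudo systemctl restart httpd
```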
<p><code>coredumpctl info <pid></code> generates this:</p>
<pre><code> PID: 292401 (httpd)
UID: 36143 (hydro)
GID: 48 (apache)
Signal: 11 (SEGV)
Timestamp: Tue 2024-10-15 12:34:47 EDT (3min 56s ago)
Command Line: $'(wsgi:domainNameT' -DFOREGROUND
Executable: /usr/sbin/httpd
Control Group: /system.slice/httpd.service
Unit: httpd.service
Slice: system.slice
Boot ID: 98404b4ba09d4a7bbde55d9149e95df7
Machine ID: 1788bfff64a44078829e9e7872ced29f
Hostname: myHostName
Storage: none
Message: Process 292401 (httpd) of user 36143 dumped core.
</code></pre>
<p>Switching gears; when I try to employ <code>gdb</code> I mostly don't know what I'm looking at. I have tried to follow <a href="https://modwsgi.readthedocs.io/en/develop/user-guides/debugging-techniques.html#debugging-crashes-with-gdb" rel="nofollow noreferrer">the directions</a> for daemon processes, like this (as root):</p>
<pre><code>gdb /usr/sbin/httpd <pid of domainNameTwo mod_wsgi app>
</code></pre>
<p>When it attached, I refresh domainNameTwo.com to get the 500 error and instead of the page refreshing and producing the error, it just hangs. I then put <code>thread apply all bt</code> as the documentation instructs and this is what I get:</p>
<pre><code>GNU gdb (GDB) Red Hat Enterprise Linux 10.2-13.el9
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<https://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/sbin/httpd...
Reading symbols from .gnu_debugdata for /usr/sbin/httpd...
(No debugging symbols found in .gnu_debugdata for /usr/sbin/httpd)
Attaching to program: /usr/sbin/httpd, process 292401
[New LWP 292583]
[New LWP 292584]
[New LWP 292585]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
0x00007f2de1f019ff in poll () from target:/lib64/libc.so.6
Missing separate debuginfos, use: dnf debuginfo-install httpd-core-2.4.57-11.el9_4.1.x86_64
(gdb) thread apply all bt
Thread 4 (Thread 0x7f2ddf466640 (LWP 292585) "httpd"):
#0 0x00007f2de1f0e21e in epoll_wait () from target:/lib64/libc.so.6
#1 0x00007f2de21dd310 in impl_pollset_poll.lto_priv () from target:/lib64/libapr-1.so.0
#2 0x00007f2de10b4322 in wsgi_daemon_worker (thread=<optimized out>, p=<optimized out>) at src/server/mod_wsgi.c:9019
#3 wsgi_daemon_thread (thd=<optimized out>, data=<optimized out>) at src/server/mod_wsgi.c:9188
#4 0x00007f2de1e89c02 in start_thread () from target:/lib64/libc.so.6
#5 0x00007f2de1f0ec40 in clone3 () from target:/lib64/libc.so.6
Thread 3 (Thread 0x7f2ddfc67640 (LWP 292584) "httpd"):
#0 0x00007f2de1f0422d in select () from target:/lib64/libc.so.6
#1 0x00007f2de21df849 in apr_sleep () from target:/lib64/libapr-1.so.0
#2 0x00007f2de10a49d3 in wsgi_deadlock_thread (thd=<optimized out>, data=<optimized out>) at src/server/mod_wsgi.c:9228
#3 0x00007f2de1e89c02 in start_thread () from target:/lib64/libc.so.6
#4 0x00007f2de1f0ec40 in clone3 () from target:/lib64/libc.so.6
Thread 2 (Thread 0x7f2de0468640 (LWP 292583) "httpd"):
#0 0x00007f2de1f0422d in select () from target:/lib64/libc.so.6
#1 0x00007f2de21df849 in apr_sleep () from target:/lib64/libapr-1.so.0
#2 0x00007f2de10a25cd in wsgi_monitor_thread (thd=<optimized out>, data=0x5601865bfca8) at src/server/mod_wsgi.c:9510
#3 0x00007f2de1e89c02 in start_thread () from target:/lib64/libc.so.6
#4 0x00007f2de1f0ec40 in clone3 () from target:/lib64/libc.so.6
Thread 1 (Thread 0x7f2de202e540 (LWP 292401) "httpd"):
#0 0x00007f2de1f019ff in poll () from target:/lib64/libc.so.6
#1 0x00007f2de21d7de1 in apr_poll () from target:/lib64/libapr-1.so.0
#2 0x00007f2de10b7821 in wsgi_daemon_main (daemon=0x5601865bfca8, p=0x5601863bec28) at src/server/mod_wsgi.c:9811
#3 wsgi_start_process (p=p@entry=0x5601863bec28, daemon=0x5601865bfca8) at src/server/mod_wsgi.c:10542
#4 0x00007f2de10b8d95 in wsgi_start_daemons (p=0x5601863bec28) at src/server/mod_wsgi.c:10780
#5 0x0000560185e8bced in ap_run_pre_mpm ()
#6 0x00007f2de1ad4695 in event_run () from target:/etc/httpd/modules/mod_mpm_event.so
#7 0x0000560185e816d8 in ap_run_mpm ()
#8 0x0000560185e6f64f in main ()
</code></pre>
<p>I have tried to take care of the <code>Missing separate debuginfos</code> message by running the command it suggests, but that fails too: no debuginfo packages are found for the <code>httpd-core...</code> package.</p>
<p>Can anyone help me either determine what is wrong in my mod_wsgi config that is causing the segfaults or help me figure out how to get debugging httpd processes or core dumps to work? I am at a total loss after investing many hours into this.</p>
|
<python><apache><flask><mod-wsgi>
|
2024-10-15 16:42:40
| 1
| 686
|
MKF
|
79,090,875
| 6,510,273
|
I need to access data in an XML file by looping over the elements
|
<p>I have an XML file and need to access a specific part.</p>
<p>I get close to it with:</p>
<pre><code>from lxml import objectify
path = xml_path
xml = objectify.parse(open(path))
root = xml.getroot()
# Access the list of 'Evt' elements in 'History'
events = root.getchildren()[1].History.getchildren()
</code></pre>
<p>However the structure look like the following</p>
<pre><code> -History
|-Env
| |-EnvD
| |-Data
| |-Data
|-Env
|-EnvD
|-Data
|-Data
</code></pre>
<p>So I need to loop over each Env, EnvD, ... to reach the data.</p>
<p>Using something like</p>
<pre><code>events = root.getchildren()[1].History.getchildren()
for event in events:
Data_elements = event.findall('.//Data')
</code></pre>
<p>does not work, as it finds the elements, but without their internal elements. I would need to use</p>
<pre><code>events = root.getchildren()[1].History.Env.EnvD.Data.getchildren()
</code></pre>
<p>to get to the elements while looping over Env and EnvD.</p>
<p>Any ideas? Thanks a lot</p>
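Based only on the structure sketched above (element names are taken from the question, the sample values are made up), the nesting can be walked with three explicit loops, or collapsed into a single relative path; a stdlib <code>xml.etree.ElementTree</code> sketch, whose API lxml mirrors:

```python
import xml.etree.ElementTree as ET

xml_doc = """<History>
  <Env><EnvD><Data>1</Data><Data>2</Data></EnvD></Env>
  <Env><EnvD><Data>3</Data><Data>4</Data></EnvD></Env>
</History>"""

history = ET.fromstring(xml_doc)

# nested loops mirror the Env -> EnvD -> Data hierarchy explicitly
values = []
for env in history.findall("Env"):
    for envd in env.findall("EnvD"):
        for data in envd.findall("Data"):
            values.append(data.text)
print(values)  # ['1', '2', '3', '4']

# the same traversal as one relative path
flat = [d.text for d in history.findall("Env/EnvD/Data")]
print(flat)    # ['1', '2', '3', '4']
```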
|
<python><xml><dataframe><schema>
|
2024-10-15 16:23:33
| 2
| 2,177
|
Florida Man
|
79,090,640
| 14,839,602
|
How can I segment the handwritten lines in this type of document?
|
<p>This is the document page. I want to segment the 10 handwritten lines cleanly and then crop them and save them to train my model.</p>
<p>What methods can I use?</p>
<p>I don't want to build my own model to segment those lines. I am looking for another, more straightforward method.</p>
<p><a href="https://i.sstatic.net/lQkrHG19.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lQkrHG19.png" alt="enter image description here" /></a></p>
<p>Here are the code and more image samples:
<a href="https://github.com/MuhammadSabah/SO-Attempt" rel="nofollow noreferrer">https://github.com/MuhammadSabah/SO-Attempt</a></p>
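One model-free approach worth sketching is the classic horizontal projection profile: binarize the page, sum the ink per row, and cut at the blank bands between lines. A NumPy-only demo on a synthetic "page" (the array stands in for an already-thresholded image; a real scan would first need binarization, e.g. Otsu, and a minimum-gap heuristic to merge close bands):

```python
import numpy as np

# synthetic binarized page: three "text lines" separated by blank rows
# (assumes the page starts and ends with blank rows)
page = np.zeros((20, 10), dtype=np.uint8)
page[2:5, :] = 255
page[8:12, :] = 255
page[15:18, :] = 255

profile = page.sum(axis=1)   # ink per row
ink = profile > 0
# blank->ink and ink->blank transitions delimit one band per text line
edges = np.flatnonzero(np.diff(ink.astype(np.int8)))
tops, bottoms = edges[::2] + 1, edges[1::2] + 1
bands = [(int(t), int(b)) for t, b in zip(tops, bottoms)]
print(bands)  # [(2, 5), (8, 12), (15, 18)] -- crop page[top:bottom] per line
```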
|
<python><opencv><computer-vision><image-segmentation>
|
2024-10-15 15:20:52
| 2
| 434
|
Hama Sabah
|
79,089,938
| 9,707,473
|
Can you set permission classes per function in class-based views?
|
<p>Let's say we have the following <em>class-based</em> view (CBV) implementing DRF's <code>APIView</code>:</p>
<pre><code>class ExampleListAPIView(APIView):
permission_classes = [IsAuthenticatedOrReadOnly]
def get(self, request):
'''
List all examples
'''
examples = Example.objects.all()
serializer = ExampleListSerializer(examples, many=True)
return Response(serializer.data, status.HTTP_200_OK)
def post(self, request):
'''
Create new example
'''
serializer = ExampleListSerializer(data=request.data)
if serializer.is_valid():
serializer.save()
return Response(serializer.data, status=status.HTTP_201_CREATED)
return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>Using <code>permission_classes</code> to set the authentication policy in a CBV, the policy is applied to all methods of the class. Is it possible to set permissions <em>per function</em> in a CBV, e.g.</p>
<pre><code>class ExampleListAPIView(APIView):
def get(self, request): # Apply [AllowAny]
# logic
def post(self, request): # Apply [IsAuthenticated]
# logic
</code></pre>
<p>Or do you have to use <em>function-based views</em> to achieve this behavior?</p>
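DRF's <code>APIView</code> does expose a per-request hook, <code>get_permissions()</code>, which the dispatch machinery calls before the handler runs, so a common pattern is to map HTTP methods to permission lists there. Below is a framework-free sketch of that dispatch idea; the class names mimic DRF's but this is not DRF code, and a real view would simply override <code>get_permissions()</code> on the <code>APIView</code> subclass:

```python
class AllowAny:
    def has_permission(self, request):
        return True

class IsAuthenticated:
    def has_permission(self, request):
        return request.get("user_authenticated", False)

class PerMethodPermissionView:
    permission_classes = [IsAuthenticated]  # fallback, as in DRF
    permission_classes_by_method = {
        "GET": [AllowAny],          # list all examples: open to everyone
        "POST": [IsAuthenticated],  # create: authenticated users only
    }

    def get_permissions(self, method):
        classes = self.permission_classes_by_method.get(method, self.permission_classes)
        return [cls() for cls in classes]

    def check(self, method, request):
        return all(p.has_permission(request) for p in self.get_permissions(method))

view = PerMethodPermissionView()
anonymous = {"user_authenticated": False}
print(view.check("GET", anonymous))   # True  -- reads allowed for anyone
print(view.check("POST", anonymous))  # False -- writes require authentication
```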
|
<python><django><django-rest-framework><permissions>
|
2024-10-15 12:29:25
| 2
| 512
|
lezaf
|
79,089,298
| 9,081,267
|
Azure Communication Services python, how do I send a Whatsapp message with parameters
|
<p>In the Meta WhatsApp Business portal you can define message templates which include parameters, for example a user's name or multiple parameters.</p>
<p>Example: <code>"You made a purchase for {{1}} using a credit card ending in {{2}}."</code></p>
<p>I want to send a WhatsApp message template which includes parameters using <a href="https://learn.microsoft.com/en-us/azure/communication-services/overview" rel="nofollow noreferrer">Azure Communication Services</a>. The Python quickstart shows examples of how to send a plain message template, but there's no documentation on how to include parameters. How can I do that?</p>
<p>Link to documentation: <a href="https://learn.microsoft.com/en-us/azure/communication-services/quickstarts/advanced-messaging/whatsapp/get-started?tabs=visual-studio%2Cconnection-string&pivots=programming-language-python" rel="nofollow noreferrer">click</a></p>
|
<python><azure><whatsapp><azure-communication-services>
|
2024-10-15 09:21:32
| 1
| 43,326
|
Erfan
|
79,089,047
| 534,238
|
Writing protobufs to BigQuery using Apache Beam (Dataflow) using Python
|
<p>As the title says, I am trying to write a protocol buffer message to a BigQuery table, in Python.</p>
<p>Is there an equivalent to <a href="https://beam.apache.org/releases/javadoc/2.50.0/org/apache/beam/sdk/io/gcp/bigquery/BigQueryIO.html#writeProtos-java.lang.Class-" rel="nofollow noreferrer">Java's <code>writeProtos</code></a>?</p>
<p>If not, what is the best solution (other than using Java)? This is at massive scale and streaming, hence why I am using Dataflow in the first place. I could convert the <code>Message</code> to a <code>dict</code> before writing, but then I am writing a <code>dict</code> instead of keeping the <code>Message</code> small as a protobuf provides. I could use Java, but I am also managing our team and we want to stay with Python since that is our team's core skill set (myself included).</p>
|
<python><google-bigquery><apache-beam><apache-beam-io>
|
2024-10-15 08:23:39
| 0
| 3,558
|
Mike Williamson
|
79,088,973
| 12,466,687
|
Unable to install pdftotext on Windows/Ubuntu
|
<p>For weeks I have been trying to <strong>install</strong> <code>pdftotext</code> for <code>python</code>, but have faced challenges and failed, earlier due to Poppler.</p>
<p>So recently I have:</p>
<ol>
<li><strong>Upgraded</strong> <code>Windows 10</code> to <code>Windows 11</code> to enable <code>Sudo</code> & use <code>apt</code> commands</li>
<li>installed WSL and Ubuntu in Windows 11 for apt- commands and</li>
<li>Ran following commands:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt install python3-pip
sudo apt-get install python-poppler
sudo apt install build-essential libpoppler-cpp-dev pkg-config python3-dev
</code></pre>
<p>All of these ran fine up to this point:
<a href="https://i.sstatic.net/2OJjrlM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2OJjrlM6.png" alt="enter image description here" /></a></p>
<p><strong>Issue:</strong> Now when I go to <code>cmd</code> and run</p>
<pre><code>pip install pdftotext
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>Collecting pdftotext
Using cached pdftotext-2.2.2.tar.gz (113 kB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: pdftotext
Building wheel for pdftotext (setup.py) ... error
error: subprocess-exited-with-error
Γ python setup.py bdist_wheel did not run successfully.
β exit code: 1
β°β> [11 lines of output]
running bdist_wheel
running build
running build_ext
building 'pdftotext' extension
creating build
creating build\temp.win-amd64-cpython-39
creating build\temp.win-amd64-cpython-39\Release
"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.41.34120\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DPOPPLER_CPP_AT_LEAST_0_30_0=0 -DPOPPLER_CPP_AT_LEAST_0_58_0=0 -DPOPPLER_CPP_AT_LEAST_0_88_0=0 -IC:\Users\vinee\anaconda3\include -IC:\Users\vinee\anaconda3\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.41.34120\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\VS\include" /EHsc /Tppdftotext.cpp /Fobuild\temp.win-amd64-cpython-39\Release\pdftotext.obj -Wall
pdftotext.cpp
C:\Users\vinee\anaconda3\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.41.34120\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pdftotext
Running setup.py clean for pdftotext
Failed to build pdftotext
ERROR: Could not build wheels for pdftotext, which is required to install pyproject.toml-based projects
</code></pre>
<p>For this issue I referred to this <a href="https://stackoverflow.com/questions/71470989/python-setup-py-bdist-wheel-did-not-run-successfully">SO Post</a>, which mentions <strong>installing</strong> <code>CMake</code>, which I already have, and ran again:</p>
<pre><code>C:\Windows\System32>pip install Cmake
WARNING: Ignoring invalid distribution -cipy (c:\users\vinee\anaconda3\lib\site-packages)
Requirement already satisfied: Cmake in c:\users\vinee\anaconda3\lib\site-packages (3.30.4)
WARNING: Ignoring invalid distribution -cipy (c:\users\vinee\anaconda3\lib\site-packages)
</code></pre>
<p>But I am still stuck on the build-wheel error. What should I do next? I really need help on this.</p>
<p><strong>Update:</strong>
I came across this <a href="https://stackoverflow.com/questions/40018405/cannot-open-include-file-io-h-no-such-file-or-directory">SO post</a> about the missing io.h file or directory, and I have tried adding the below command:</p>
<pre><code>set LIB=C:\Program Files (x86)\Windows Kits\10\Redist\ucrt\DLLs\x64
</code></pre>
<p>But I am still getting the same error.</p>
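Since the environment above is Anaconda, one sidestep worth trying (untested here, and assuming the <code>conda-forge</code> channel carries a <code>pdftotext</code> build for your platform) is installing the prebuilt package instead of compiling against Poppler with MSVC:

```shell
# prebuilt binary package, avoids the local MSVC/poppler build entirely
conda install -c conda-forge pdftotext
```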
|
<python><windows><ubuntu><pdftotext><poppler>
|
2024-10-15 08:09:56
| 1
| 2,357
|
ViSa
|
79,088,876
| 881,712
|
How to send an OSC bundle using python-osc?
|
<p>On the repository page of <a href="https://github.com/attwad/python-osc" rel="nofollow noreferrer">python-osc</a> I read:</p>
<p><strong>Building bundles</strong></p>
<pre><code>from pythonosc import osc_bundle_builder
from pythonosc import osc_message_builder
bundle = osc_bundle_builder.OscBundleBuilder(
osc_bundle_builder.IMMEDIATELY)
msg = osc_message_builder.OscMessageBuilder(address="/SYNC")
msg.add_arg(4.0)
# Add 4 messages in the bundle, each with more arguments.
bundle.add_content(msg.build())
msg.add_arg(2)
bundle.add_content(msg.build())
msg.add_arg("value")
bundle.add_content(msg.build())
msg.add_arg(b"\x01\x02\x03")
bundle.add_content(msg.build())
sub_bundle = bundle.build()
# Now add the same bundle inside itself.
bundle.add_content(sub_bundle)
# The bundle has 5 elements in total now.
bundle = bundle.build()
# You can now send it via a client as described in other examples.
</code></pre>
<p>But it's not clear to me how to "send it via a client as described in other examples".
I tried:</p>
<pre><code>client.send_message(bundle)
</code></pre>
<p>but it returns:</p>
<blockquote>
<p>SimpleUDPClient.send_message() missing 1 required positional argument: 'value'</p>
</blockquote>
<p>and</p>
<pre><code>client.send_message("/test", bundle)
</code></pre>
<p>returns:</p>
<blockquote>
<p>Infered arg_value type is not supported</p>
</blockquote>
<p>What is the correct syntax to send OSC bundles?</p>
<p>Here how the client is constructed, from the example in the same page:</p>
<pre><code>import argparse
import random
import time
from pythonosc import udp_client
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--ip", default="127.0.0.1",
help="The ip of the OSC server")
parser.add_argument("--port", type=int, default=5005,
help="The port the OSC server is listening on")
args = parser.parse_args()
client = udp_client.SimpleUDPClient(args.ip, args.port)
for x in range(10):
client.send_message("/filter", random.random())
time.sleep(1)
</code></pre>
|
<python><osc>
|
2024-10-15 07:43:47
| 1
| 5,355
|
Mark
|
79,088,863
| 9,363,181
|
Superset Invalid Login Issue while Implementing SSO
|
<p>I have Superset running on an AWS EC2 instance using Docker Compose. When I run <code>docker compose up</code>, the server starts and I can see it running. Additionally, I have configured Microsoft SSO for logging in to Superset.</p>
<p>So the flow should be: a user lands on the Superset login page, enters their Microsoft credentials, is authenticated, and is redirected into Superset.</p>
<p>But in my case it authenticates the user, then redirects back to the login page and says <strong>Invalid login. Please try again</strong>.</p>
<p>Meanwhile, my superset_app container logs say:
<strong>werkzeug.exceptions.NotFound: 404 Not Found: The requested URL was not found on the server.</strong></p>
<p>Linked is my superset <a href="https://i.sstatic.net/DxIdWy4E.jpg" rel="nofollow noreferrer">logs</a> image.</p>
<p>Here is the <code>superset_config</code>.py file</p>
<pre><code>import logging
import os
from celery.schedules import crontab
from flask_caching.backends.filesystemcache import FileSystemCache
logger = logging.getLogger()
DATABASE_DIALECT = os.getenv("DATABASE_DIALECT")
DATABASE_USER = os.getenv("DATABASE_USER")
DATABASE_PASSWORD = os.getenv("DATABASE_PASSWORD")
DATABASE_HOST = os.getenv("DATABASE_HOST")
DATABASE_PORT = os.getenv("DATABASE_PORT")
DATABASE_DB = os.getenv("DATABASE_DB")
EXAMPLES_USER = os.getenv("EXAMPLES_USER")
EXAMPLES_PASSWORD = os.getenv("EXAMPLES_PASSWORD")
EXAMPLES_HOST = os.getenv("EXAMPLES_HOST")
EXAMPLES_PORT = os.getenv("EXAMPLES_PORT")
EXAMPLES_DB = os.getenv("EXAMPLES_DB")
# The SQLAlchemy connection string.
SQLALCHEMY_DATABASE_URI = (
f"{DATABASE_DIALECT}://"
f"{DATABASE_USER}:{DATABASE_PASSWORD}@"
f"{DATABASE_HOST}:{DATABASE_PORT}/{DATABASE_DB}"
)
SQLALCHEMY_EXAMPLES_URI = (
f"{DATABASE_DIALECT}://"
f"{EXAMPLES_USER}:{EXAMPLES_PASSWORD}@"
f"{EXAMPLES_HOST}:{EXAMPLES_PORT}/{EXAMPLES_DB}"
)
REDIS_HOST = os.getenv("REDIS_HOST", "redis")
REDIS_PORT = os.getenv("REDIS_PORT", "6379")
REDIS_CELERY_DB = os.getenv("REDIS_CELERY_DB", "0")
REDIS_RESULTS_DB = os.getenv("REDIS_RESULTS_DB", "1")
RESULTS_BACKEND = FileSystemCache("/app/superset_home/sqllab")
CACHE_CONFIG = {
"CACHE_TYPE": "RedisCache",
"CACHE_DEFAULT_TIMEOUT": 300,
"CACHE_KEY_PREFIX": "superset_",
"CACHE_REDIS_HOST": REDIS_HOST,
"CACHE_REDIS_PORT": REDIS_PORT,
"CACHE_REDIS_DB": REDIS_RESULTS_DB,
}
DATA_CACHE_CONFIG = CACHE_CONFIG
class CeleryConfig:
broker_url = f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_CELERY_DB}"
imports = (
"superset.sql_lab",
"superset.tasks.scheduler",
"superset.tasks.thumbnails",
"superset.tasks.cache",
)
result_backend = f"redis://{REDIS_HOST}:{REDIS_PORT}/{REDIS_RESULTS_DB}"
worker_prefetch_multiplier = 1
task_acks_late = False
beat_schedule = {
"reports.scheduler": {
"task": "reports.scheduler",
"schedule": crontab(minute="*", hour="*"),
},
"reports.prune_log": {
"task": "reports.prune_log",
"schedule": crontab(minute=10, hour=0),
},
}
CELERY_CONFIG = CeleryConfig
FEATURE_FLAGS = {"ALERT_REPORTS": True}
ALERT_REPORTS_NOTIFICATION_DRY_RUN = True
# When using docker compose baseurl should be http://superset_app:8088/
# The base URL for the email report hyperlinks.
SQLLAB_CTAS_NO_LIMIT = True
#
# Optionally import superset_config_docker.py (which will have been included on
# the PYTHONPATH) in order to allow for local settings to be overridden
#
try:
import superset_config_docker
from superset_config_docker import * # noqa
logger.info(
f"Loaded your Docker configuration at " f"[{superset_config_docker.__file__}]"
)
except ImportError:
logger.info("Using default Docker config...")
# Ensure you are using HTTPS
ENABLE_PROXY_FIX = True
PREFERRED_URL_SCHEME = 'https'
SESSION_COOKIE_HTTPONLY = "Lax"
#SSO Login
from flask_appbuilder.security.manager import AUTH_OAUTH
from custom_sso_security_manager import CustomSsoSecurityManager
# Set the authentication type to OAuth
AUTH_TYPE = AUTH_OAUTH
# Will allow user self registration, allowing to create Flask users from Authorized User
AUTH_USER_REGISTRATION = True
# The default user self registration role
AUTH_USER_REGISTRATION_ROLE = "Public"
CUSTOM_SECURITY_MANAGER = CustomSsoSecurityManager
OAUTH_PROVIDERS = [{
'name': 'SSO',
'token_key': 'access_token',
'icon': 'fa-windows',
'remote_app':{
'api_base_url': 'https://login.microsoft.com/tenant_id/oauth2',
'request_token_url': None,
'request_token_params': {
'scope': 'openid profile email'
},
'access_token_url': 'https://login.microsoftonline.com/tenant_id/oauth2/v2.0/token',
'acess_token_params':{
'scope': 'openid profile email'
},
'authorize_url': 'https://login.microsoftonline.com/tenant_id/oauth2/v2.0/authorize',
'authorize_params':{
'scope': 'openid profile email'
},
'client_id': 'client-id(application-id)',
'client_secret': 'secret-key',
'jwks_uri': 'https://login.microsoftonline.com/common/discovery/v2.0/keys',
'redirect_uri': 'https://superset.domain.com/oauth-authorize/callback'
}
}]
OAUTH_USER_INFO_URL = 'https://graph.microsoft.com/v1.0/me'
from flask_appbuilder.security.manager import AUTH_OAUTH
def get_oauth_user_info(response):
user_info = response.json()
# Assign role based on domain
if user_info['mail'].endswith('@domain.com'):
return {
'role': 'Admin',
'user_info': user_info
}
else:
return {
'role': 'Public',
'user_info': user_info
}
</code></pre>
<p>I have already tried various steps, such as:</p>
<ol>
<li>Checking my application(client) id</li>
<li>Checking my secret keys</li>
<li>Checking Config file.</li>
</ol>
<p>Finally, I found that this is an open issue on the <a href="https://github.com/apache/superset/issues/20319" rel="nofollow noreferrer">Superset</a> GitHub repo, and I also tried the solutions present in the comments.</p>
<p>I followed all the steps as-is, along with the <code>docker-compose.yaml</code> code from this <a href="https://superset.apache.org/docs/installation/docker-compose" rel="nofollow noreferrer">official documentation</a>, but had no luck.</p>
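For reference, Flask-AppBuilder (which handles Superset's OAuth flow) registers its callback route at <code>/oauth-authorized/&lt;provider&gt;</code>, where the provider segment matches the <code>name</code> key of the provider entry; a 404 in the logs is consistent with a redirect URI registered under a different path. A hedged config sketch — the domain is the same placeholder used above, and the exact route is an assumption worth verifying against the Flask-AppBuilder version in use:

```python
# Assumption: Flask-AppBuilder's OAuth callback lives at
# /oauth-authorized/<provider>, where <provider> matches the 'name' key.
# With name='SSO', the redirect URI registered in the Microsoft app
# registration (and used below) would then be:
#   https://superset.domain.com/oauth-authorized/SSO
OAUTH_PROVIDERS = [{
    'name': 'SSO',
    'token_key': 'access_token',
    'icon': 'fa-windows',
    'remote_app': {
        # ... same Microsoft endpoints, client_id, client_secret as above ...
        'redirect_uri': 'https://superset.domain.com/oauth-authorized/SSO',
    },
}]
```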
|
<python><docker><single-sign-on><apache-superset>
|
2024-10-15 07:41:26
| 1
| 645
|
RushHour
|
79,088,765
| 6,721,603
|
How to find dropdown name in HTML code for Selenium's Select class?
|
<p>I want to download the CSV data shown at the end of the page using Python:</p>
<p><a href="https://www.cboe.com/delayed_quotes/spx/quote_table/" rel="nofollow noreferrer">https://www.cboe.com/delayed_quotes/spx/quote_table/</a></p>
<p>Specifically, before downloading the data, I need Python to set the <kbd>Option Range</kbd> and <kbd>Expirations</kbd> dropdowns to 'All'.</p>
<p>I have the following code to access the website via Python, which works fine.</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.support.ui import Select
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get('https://www.cboe.com/delayed_quotes/spx/quote_table')
html = driver.page_source
</code></pre>
<p>I want to use the Selenium select function, but I am unable to find the id of the two dropdowns in the HTML code.</p>
<pre><code>element = driver.find_element(By.ID, "????")
</code></pre>
|
<python><csv><selenium-webdriver>
|
2024-10-15 07:04:48
| 1
| 1,010
|
freddy888
|
79,088,388
| 14,154,784
|
Configuring Django Testing in PyCharm
|
<p>I have a simple django project that I'm making in pycharm. The directory structure is the following:</p>
<pre><code>zelda_botw_cooking_simulator
|-- cooking_simulator_project
|---- manage.py
|---- botw_cooking_simulator # django app
|------ init.py
|------ logic.py
|------ tests.py
|------ all_ingredients.py
|------ other standard django app files
|---- cooking_simulator_project # django project
|------ manage.py
|------ other standard django project files
</code></pre>
<p>When I run <code>python manage.py test</code> in the PyCharm terminal, everything works great.</p>
<p>When I click the little triangle icon in PyCharm next to a test to run it, however, I get one of two errors, depending on how I've set up the test configuration in PyCharm:</p>
<p>Error 1:</p>
<pre><code>File ".../zelda_botw_cooking_simulator/cooking_simulator_proj/botw_cooking_simulator/tests.py", line 5, in <module>
from .all_ingredients import all_ingredients
ImportError: attempted relative import with no known parent package
</code></pre>
<p>Error 2:</p>
<pre><code>/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/bin/python /Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py test botw_cooking_simulator.tests.TestAllIngredients.test_hearty_durian /Users/brendenmillstein/Dropbox (Personal)/BSM_Personal/Coding/BSM_Projects/zelda_botw_cooking_simulator/cooking_simulator_proj
Testing started at 10:05β―PM ...
Traceback (most recent call last):
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 168, in <module>
utility.execute()
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 142, in execute
_create_command().run_from_argv(self.argv)
File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/commands/test.py", line 24, in run_from_argv
super().run_from_argv(argv)
File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/base.py", line 413, in run_from_argv
self.execute(*args, **cmd_options)
File "/opt/homebrew/anaconda3/envs/zelda_botw_cooking_simulator/lib/python3.10/site-packages/django/core/management/base.py", line 459, in execute
output = self.handle(*args, **options)
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_manage.py", line 104, in handle
failures = TestRunner(test_labels, **options)
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_runner.py", line 254, in run_tests
return DjangoTeamcityTestRunner(**options).run_tests(test_labels,
File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pycharm/django_test_runner.py", line 156, in run_tests
return super(DjangoTeamcityTestRunner, self).run_tests(test_labels, extra_tests, **kwargs)
TypeError: DiscoverRunner.run_tests() takes 2 positional arguments but 3 were given
Process finished with exit code 1
</code></pre>
<p><strong>How can I fix this?</strong></p>
<p>I have tried configuring run environments and test environments in PyCharm for 2 hours now and I'm not getting it. The questions/answers <a href="https://stackoverflow.com/questions/40751256/pycharm-django-tests-settings">here</a> and <a href="https://stackoverflow.com/questions/20399046/running-django-tests-in-pycharm">here</a> are close, but there's not quite enough detail for me to fix it. Exactly what do I put in each field in each window? What's the 'target', the 'working directory', do I need an environment variable? What goes in the settings part and what goes in the configuration?</p>
<p>ChatGPT recommended a bunch of stuff that didn't work, and I can't seem to find a YouTube video showing the right way to do this. Thank you!</p>
<p><strong>Update: expanding the question in response to comments</strong></p>
<p>Here is my tests.py code:</p>
<pre><code>from datetime import timedelta
from django.test import TestCase
from .all_ingredients import all_ingredients
from .data_structures import MealType, MealResult, MealName, SpecialEffect, EffectLevel
from .logic import (
determine_meal_type,
simulate_cooking,
check_if_more_than_one_effect_type,
calculate_sell_price,
)
# Create your tests here.
class TestAllIngredients(TestCase):
def test_hearty_durian(self):
ingredient = all_ingredients["Hearty Durian"]
self.assertEqual(ingredient.effect_type.value, "Hearty")
self.assertEqual(ingredient.category.value, "Fruit")
self.assertEqual(ingredient.price, 15)
self.assertEqual(ingredient.base_hp, 12)
self.assertEqual(ingredient.bonus_hp, 0)
self.assertEqual(ingredient.base_time, timedelta(seconds=00))
self.assertEqual(ingredient.bonus_time, timedelta(seconds=00))
self.assertEqual(ingredient.potency, 4)
etc.
</code></pre>
<p>Here is a screenshot of the configuration:
<a href="https://i.sstatic.net/A2QRZ0I8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2QRZ0I8.png" alt="Configuration for first test method (not class)" /></a></p>
<p>Thank you!!</p>
|
<python><django><pycharm><django-testing><django-tests>
|
2024-10-15 04:21:39
| 2
| 2,725
|
BLimitless
|
79,088,085
| 807,797
|
Running Python 3.12 asyncio tasks in parallel
|
<p>What specifically needs to change in the Python 3.12 code below in order for each and every one of the calls to the <code>write_to_file(linesBuffer)</code> function to run in parallel instead of running sequentially?</p>
<p>In other words,</p>
<ol>
<li>We want for the program execution to continue without waiting for <code>write_to_file(linesBuffer)</code> to return,</li>
<li>But we also want to make sure that each call to <code>write_to_file(linesBuffer)</code> does eventually return.</li>
</ol>
<p>Each call to <code>write_to_file(linesBuffer)</code> should start at a different time, and return after whatever different duration might be required in order for each call to successfully complete its work. And there should never be delays waiting for one call to <code>write_to_file(linesBuffer)</code> to complete before the next call to <code>write_to_file(linesBuffer)</code> is initiated.</p>
<p>When we remove <code>await</code> from the <code>write_to_file(linesBuffer)</code> line, the result is that none of the print commands inside the <code>write_to_file(linesBuffer)</code> function ever get executed. So we cannot simply change <code>await write_to_file(linesBuffer)</code> to <code>write_to_file(linesBuffer)</code>.</p>
<p>The problem in the code is that the many sequential calls to the <code>await write_to_file(linesBuffer)</code> function cause the program to become very slow.</p>
<p>Here is the code:</p>
<pre><code>import os
import platform
import asyncio
numLines = 10
def get_source_file_path():
if platform.system() == 'Windows':
return 'C:\\path\\to\\sourceFile.txt'
else:
return '/path/to/sourceFile.txt'
async def write_to_file(linesBuffer):
print("inside Writing to file...")
with open('newFile.txt', 'a') as new_destination_file:
for line in linesBuffer:
new_destination_file.write(line)
#get the name of the directory in which newFile.txt is located. Then print the name of the directory.
directory_name = os.path.dirname(os.path.abspath('newFile.txt'))
print("directory_name: ", directory_name)
linesBuffer.clear()
#print every 1 second for 2 seconds.
for i in range(2):
print("HI HO, HI HO. IT'S OFF TO WORK WE GO...")
await asyncio.sleep(1)
print("inside done Writing to file...")
async def read_source_file():
source_file_path = get_source_file_path()
linesBuffer = []
counter = 0
print("Reading source file...")
print("source_file_path: ", source_file_path)
#Detect the size of the file located at source_file_path and store it in the variable file_size.
file_size = os.path.getsize(source_file_path)
print("file_size: ", file_size)
with open(source_file_path, 'r') as source_file:
source_file.seek(0, os.SEEK_END)
while True:
line = source_file.readline()
new_file_size = os.path.getsize(source_file_path)
if new_file_size < file_size:
print("The file has been truncated.")
source_file.seek(0, os.SEEK_SET)
file_size = new_file_size
linesBuffer.clear()
counter = 0
print("new_file_size: ", new_file_size)
if len(line) > 0:
new_line = str(counter) + " line: " + line
print(new_line)
linesBuffer.append(new_line)
print("len(linesBuffer): ", len(linesBuffer))
if len(linesBuffer) >= numLines:
print("Writing to file...")
await write_to_file(linesBuffer) #When we remove await from this line, the function never runs.
print("awaiting Writing to file...")
linesBuffer.clear()
counter += 1
print("counter: ", counter)
if not line:
await asyncio.sleep(0.1)
continue
#detect whether or not the present line is the last line in the file. If it is the last line in the file, then write the line to the file.
if source_file.tell() == file_size:
print("LAST LINE IN FILE FOUND. Writing to file...")
await write_to_file(linesBuffer)
print("awaiting Writing to file...")
linesBuffer.clear()
counter = 0
async def main():
await read_source_file()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
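For context, the two requirements above (continue without waiting, but guarantee every call eventually completes) map directly onto <code>asyncio.create_task</code> plus a final <code>asyncio.gather</code>. A minimal sketch with a stand-in coroutine (the real <code>write_to_file</code> is replaced by a sleep, since its blocking <code>open()</code>/<code>write()</code> calls would additionally need to be moved off the event loop, e.g. via <code>asyncio.to_thread</code>, for the tasks to truly overlap):

```python
import asyncio

# Stand-in for write_to_file(): sleeps instead of doing real I/O.
async def fake_write(n: int) -> int:
    await asyncio.sleep(0.01)
    return n

async def main() -> list[int]:
    # create_task() schedules each coroutine immediately and returns
    # without waiting for it (requirement 1)...
    tasks = [asyncio.create_task(fake_write(i)) for i in range(3)]
    # ...and gather() ensures every scheduled call eventually
    # completes before the program moves on (requirement 2).
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
```

In the original loop, each <code>await write_to_file(linesBuffer)</code> would become a <code>create_task(...)</code> appended to a task list, with one <code>gather</code> at shutdown.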
|
<python><python-3.x><asynchronous><python-asyncio>
|
2024-10-15 00:53:56
| 1
| 9,239
|
CodeMed
|
79,088,004
| 6,618,225
|
Count number of rows in grouped column in Pandas
|
<p>I have a simple dataset and I am trying to group a column by its values and create a new column that contains how many rows are returned for that group.</p>
<p>I have tried several options but nothing seems to work.</p>
<p>A simple (not working) example could be like this:</p>
<pre><code>import pandas as pd
data = {'Team' : ['1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2', '2', '3','3', '3', '3', '3', '3', '3', '3', '3', '3', '3']}
df = pd.DataFrame(data)
df['Count'] = df.groupby('Team')['Team'].count()
print(df)
</code></pre>
<p>I also came across options like <code>.size()</code> and <code>.value_counts()</code>, but nothing worked.</p>
<p>My desired output would be like this:</p>
<pre><code>import pandas as pd
data = {'Team' : ['1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '2', '2', '2', '2', '3','3', '3', '3', '3', '3', '3', '3', '3', '3', '3'],
'Count' : [5, 5, 5, 5, 5, 9, 9, 9, 9, 9, 9, 9, 9, 9, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11]}
df = pd.DataFrame(data)
print(df)
</code></pre>
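For context, the reason the <code>groupby(...).count()</code> assignment above fails is that it returns one row per group, which does not align with the original index. <code>transform</code> broadcasts the per-group aggregate back onto every row of the group; a sketch using the same data shape:

```python
import pandas as pd

data = {'Team': ['1'] * 5 + ['2'] * 9 + ['3'] * 11}
df = pd.DataFrame(data)

# transform('size') returns a Series aligned with df's index,
# repeating each group's row count on every row of that group.
df['Count'] = df.groupby('Team')['Team'].transform('size')
```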
|
<python><pandas><group-by>
|
2024-10-14 23:45:46
| 0
| 357
|
Kai
|
79,087,880
| 1,235,269
|
What's the idiomatic way to waste the output of a generator in Python?
|
<p>Sometimes I need to consume everything that a generator outputs but don't actually need the output. In one case, some callers need progress updates from a coroutine, but some callers don't; in another case, some callers need to iterate through the results but some callers only need the side effects.</p>
<p>What's the idiomatic way to do this?</p>
<p>I currently use <code>list()</code>:</p>
<pre class="lang-py prettyprint-override"><code>def do_work_and_yield():
for x in ...:
y = do_work_on(x)
yield y
def only_cares_about_side_effects():
list(do_work_and_yield())
</code></pre>
<p>but of course this is an abuse of <code>list()</code> and a waste of memory. I could write a loop:</p>
<pre class="lang-py prettyprint-override"><code>def only_cares_about_side_effects():
for _ in do_work_and_yield():
pass
</code></pre>
<p>but what I Really Mean is just "run <code>do_work_and_yield</code> to completion", which the loop obfuscates.</p>
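For reference, the itertools recipes in the standard library documentation include a <code>consume()</code> helper built on <code>collections.deque</code> with <code>maxlen=0</code>, which exhausts an iterator at C speed while retaining nothing; a minimal version exercised against a generator with a visible side effect:

```python
from collections import deque

def consume(iterator):
    # A zero-length deque discards every item as it arrives,
    # so this runs the iterator to completion without buffering.
    deque(iterator, maxlen=0)

side_effects = []

def do_work_and_yield():
    for x in range(3):
        side_effects.append(x)  # the side effect we actually care about
        yield x

consume(do_work_and_yield())
```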
|
<python><iterator><coroutine>
|
2024-10-14 22:27:58
| 1
| 2,604
|
zmccord
|
79,087,767
| 258,418
|
Use typealias defined inside generic class in method signature
|
<p>Given a generic class, one might have multiple methods which use a deduced type in their signature. To avoid repetition I would like to define a typealias. What is the best way to access it in a method signature?</p>
<pre><code>class Y[X]:
type Z = list[X]
# this one works
def works_with_pyright_but_ugly(self) -> "Y[X].Z":
return []
# these do not work
def what_is_Y(self) -> Y[X].Z:
return []
def self_has_no_member_z(self) -> Self.Z:
return []
def what_is_z(self) -> Z:
return []
def listify(self, value: X) -> Z:
"""Just to show a slighlty more meaningful funciton"""
return [value]
</code></pre>
<p>Is there a better way to reference Z in the method signatures?</p>
<p>In my view, <code>what_is_z</code> and <code>listify</code> do not work, because my editor does not show the value of the variable in the following example as <code>list[int]</code>, as I would expect. Half the reason for type annotations, for me, is convenience in the editor (the other half being error checking).</p>
<p><a href="https://i.sstatic.net/yk8gn2U0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yk8gn2U0.png" alt="enter image description here" /></a></p>
|
<python><python-typing><pyright>
|
2024-10-14 21:30:52
| 1
| 5,003
|
ted
|
79,087,732
| 10,859,585
|
Google Drive PDF File - FileNotDownloadable and FileNotExportable
|
<p>I'm having issues using the Google Drive API to either export or download a PDF. I've tried both <code>export_media()</code> and <code>get_media()</code>; each errors out with a message suggesting I should have used the other one. The error occurs in each function right after <code>.next_chunk()</code>.</p>
<p>I've used the same credentials to obtain the <code>file_id</code>, so I can't imagine that being the problem. It's not good to have spaces in my file name, but that shouldn't be the issue either.</p>
<p>Is there something else I am missing?</p>
<pre><code>def export_file(creds, file_id, mime_type, destination):
try:
# create drive api client
service = build("drive", "v3", credentials=creds)
# Request to export the file to the specified MIME type
request = service.files().export_media(fileId=file_id, mimeType=mime_type)
file = io.BytesIO()
exporter = MediaIoBaseDownload(file, request)
done = False
while not done:
status, done = exporter.next_chunk()
# Write the file content to the specified destination
with open(destination, 'wb') as f:
f.write(file.getbuffer())
except HttpError as error:
print(error)
</code></pre>
<pre><code>def download_file(creds, file_id, destination):
try:
# create drive api client
service = build("drive", "v3", credentials=creds)
request = service.files().get_media(fileId=file_id)
file = io.BytesIO()
downloader = MediaIoBaseDownload(file, request)
done = False
while done is False:
status, done = downloader.next_chunk()
with open(destination, 'wb') as f:
f.write(file.getvalue())
except HttpError as error:
print(error)
</code></pre>
<pre><code>file_mime_type = "application/pdf"
dst = dst_path + file_extension # Colab Notebooks/My File 04-24-24.pdf
file_id = ...
creds = ...
export_file(creds=creds, file_id=item[1], mime_type=file_mime_type, destination=dst)
>>>
<HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/.../export?mimeType=application%2Fpdf&alt=media returned "Export only supports Docs Editors files.". Details: "[{'message': 'Export only supports Docs Editors files.', 'domain': 'global', 'reason': 'fileNotExportable'}]">
download_file(creds=creds, file_id=item[1], destination=item[0])
>>>
<HttpError 403 when requesting https://www.googleapis.com/drive/v3/files/...?alt=media returned "Only files with binary content can be downloaded. Use Export with Docs Editors files.". Details: "[{'message': 'Only files with binary content can be downloaded. Use Export with Docs Editors files.', 'domain': 'global', 'reason': 'fileNotDownloadable', 'location': 'alt', 'locationType': 'parameter'}]">
</code></pre>
<p><a href="https://i.sstatic.net/nSzD312P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSzD312P.png" alt="enter image description here" /></a></p>
|
<python><google-drive-api>
|
2024-10-14 21:09:03
| 0
| 414
|
Binx
|
79,087,707
| 6,843,153
|
Get row index of a pandas row knowing the row number
|
<p>I have a Pandas dataframe that is the result of filtering over another dataframe, so the index of rows are not sequential since only some of the rows of the base dataframe are kept in the resulting dataframe.</p>
<p>I want to know the index of a given row position of the resulting dataframe. Let's say we want the index value of row 1, I know I can get row 1 using <code>row = resulting_dataframe.iloc[0]</code>, but I don't know how to get the index for that row.</p>
<p>I tried <code>resulting_dataframe.index[resulting_dataframe.iloc[0]]</code>, but I get an error, and I also tried <code>resulting_dataframe.iloc[0].index</code>, but I only get the row without index.</p>
<p>How can I get the index of the row?</p>
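For reference, a filtered frame keeps the original index labels, and a positional row can be mapped to its label either through the <code>Index</code> object or through the row's <code>.name</code> attribute; a small sketch with hypothetical data:

```python
import pandas as pd

base = pd.DataFrame({'a': [10, 20, 30, 40]})
filtered = base[base['a'] > 15]  # keeps original labels 1, 2, 3

# Positional row -> index label, two equivalent ways:
label_via_index = filtered.index[0]      # index positionally, on the Index itself
label_via_name = filtered.iloc[0].name   # a row Series stores its label as .name
```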
|
<python><pandas>
|
2024-10-14 20:54:45
| 1
| 5,505
|
HuLu ViCa
|
79,087,652
| 1,492,337
|
Adding accuracy, recall and f1 metrics to SFTTrainer
|
<p>I'm working on fine tuning an LLM using <code>SFTTrainer</code>.</p>
<p>For some reason, during the validation phase it only yields <code>eval_loss</code>.
While this is nice, I am actually also interested in other metrics (accuracy, for example), but I haven't yet been able to figure out how to get them.</p>
<p>I've seen many examples for the <code>Trainer</code> class, and I know <code>SFTTrainer</code> supports a <code>compute_metrics</code> parameter in its <code>__init__</code> method, but I've had no luck actually connecting all the pieces together.</p>
<p>A quick search turned up the issues below on GitHub: <a href="https://github.com/huggingface/trl/issues/862" rel="nofollow noreferrer">https://github.com/huggingface/trl/issues/862</a> and <a href="https://github.com/huggingface/trl/issues/862" rel="nofollow noreferrer">https://github.com/huggingface/trl/issues/862</a></p>
<p>For what its worth, my dataset is formatted as follows:</p>
<ol>
<li>Train ==> prompt + data + response</li>
<li>Evaluation ==> prompt + data + response (I'm aware the response should be removed)</li>
<li>Test ==> prompt + data</li>
</ol>
<p>Here's where the trainer is being created:</p>
<pre><code>
def create_trainer(self):
train_args = TrainingArguments(
per_device_train_batch_size=2,
gradient_accumulation_steps=2,
warmup_steps=200,
gradient_checkpointing=True,
per_device_eval_batch_size=1,
# num_train_epochs=self.num_of_epochs,
max_steps=10,
learning_rate=2e-4,
fp16=not is_bfloat16_supported(),
bf16=is_bfloat16_supported(),
logging_steps=1,
optim="adamw_8bit",
weight_decay=0.01,
lr_scheduler_type="cosine",
report_to="none",
seed=3407,
output_dir=self.output_dir,
eval_strategy="steps",
eval_steps=0.1,
)
trainer = SFTTrainer(
model=self.model,
compute_metrics=self.compute_metrics,
preprocess_logits_for_metrics=self.preprocess_logits_for_metrics,
tokenizer=self.tokenizer,
train_dataset=self.train_data,
eval_dataset=self.eval_data,
dataset_text_field="text",
max_seq_length=self.max_seq_length,
dataset_num_proc=2,
packing=False, # Can make training 5x faster for short sequences.
args=train_args,
dataset_kwargs={
"add_special_tokens": False,
"append_concat_token": False,
}
)
return trainer
</code></pre>
<p>and also</p>
<pre><code> @staticmethod
def preprocess_logits_for_metrics(logits, labels):
"""
Original Trainer may have a memory leak.
This is a workaround to avoid storing too many tensors that are not needed.
"""
pred_ids = torch.argmax(logits[0], dim=-1)
return pred_ids, labels
@staticmethod
def compute_metrics(eval_pred):
# Unpack predictions and labels
predictions, _ = eval_pred.predictions
labels = eval_pred.label_ids
# Flatten labels if necessary
if labels.ndim > 1:
labels = labels.flatten()
# Ensure predictions are in the same format
if predictions.ndim > 1:
predictions = predictions.flatten()
# Remove invalid labels if necessary (e.g., -100)
valid_indices = labels != -100
# Handle size mismatch if needed
if len(predictions) != len(labels):
# You might need to slice or pad arrays here
min_len = min(len(predictions), len(labels))
labels = labels[:min_len]
predictions = predictions[:min_len]
valid_indices = valid_indices[:min_len]
# Filter out invalid indices
labels = labels[valid_indices]
predictions = predictions[valid_indices]
# Calculate accuracy, precision, recall, and F1 score
accuracy = accuracy_score(labels, predictions)
precision, recall, f1, _ = precision_recall_fscore_support(labels, predictions, average='weighted')
return {
'accuracy': accuracy,
'precision': precision,
'recall': recall,
'f1': f1,
}
</code></pre>
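As a sanity check on the <code>-100</code> masking step in the <code>compute_metrics</code> above, the padding filter can be exercised in isolation on tiny hand-made arrays (values here are hypothetical, not from a real run):

```python
import numpy as np

# Hand-made labels/predictions; -100 marks positions the loss ignores.
labels = np.array([1, -100, 0, 1])
preds = np.array([1, 0, 0, 0])

mask = labels != -100  # drop the ignored positions before scoring
accuracy = float((preds[mask] == labels[mask]).mean())
```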
|
<python><huggingface-transformers><large-language-model><fine-tuning>
|
2024-10-14 20:37:50
| 0
| 433
|
Ben
|
79,087,599
| 10,714,273
|
Azure OpenAI Embedding Skill - Cannot iterate over non-array '/document/contentVector'
|
<p>The following code runs successfully but the indexer execution history shows the warning:</p>
<pre><code>Cannot iterate over non-array '/document/contentVector'.
Could not map output field 'contentVector' to search index. Check the 'outputFieldMappings' property of your indexer.
</code></pre>
<p>Based on the split skill and openai embedding skill docs (<a href="https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-azure-openai-embedding" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-azure-openai-embedding</a> , <a href="https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-textsplit" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/search/cognitive-search-skill-textsplit</a>), I am quite sure i've configured input outputs and field mappings correctly. When retrieving a doc, the <code>chunks</code> field is correctly chunked however <code>contentVector</code> is an empty list <code>[]</code>.</p>
<pre class="lang-py prettyprint-override"><code>import os
from pprint import pprint
from tqdm import tqdm
import time
import json
from dotenv import load_dotenv
from lxml import etree
from bs4 import BeautifulSoup
from typing import List, Dict, Collection
import uuid
from langchain_openai import AzureOpenAIEmbeddings, AzureChatOpenAI
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient, SearchIndexerClient
from azure.search.documents.indexes.models import (
SearchIndex,
SimpleField,
SearchableField,
SearchFieldDataType,
SearchField,
VectorSearch,
VectorSearchProfile,
HnswAlgorithmConfiguration,
_edm,
AzureOpenAIEmbeddingSkill,
InputFieldMappingEntry,
OutputFieldMappingEntry,
SearchIndexerSkillset,
SearchIndexerSkill,
SearchIndexer,
SplitSkill,
SearchIndexerDataSourceConnection,
SearchIndexerDataContainer,
FieldMapping,
IndexingParameters,
IndexingParametersConfiguration,
AzureOpenAIVectorizer,
AzureOpenAIVectorizerParameters
)
from langchain.text_splitter import RecursiveCharacterTextSplitter
from azure.core.exceptions import ResourceNotFoundError, ResourceExistsError
from langchain.vectorstores import AzureSearch
from langchain.retrievers import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
from azure.search.documents import IndexDocumentsBatch
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
load_dotenv()
AZURE_SEARCH_ENDPOINT = os.getenv("AZURE_SEARCH_ENDPOINT")
AZURE_SEARCH_KEY = os.getenv("AZURE_SEARCH_KEY")
AZURE_OPENAI_ENDPOINT = os.getenv("AZURE_OPENAI_ENDPOINT")
AZURE_OPENAI_API_KEY = os.getenv("AZURE_OPENAI_API_KEY")
AZURE_BLOB_URL = os.getenv("AZURE_BLOB_URL")
AZURE_BLOB_CONN_STRING = os.getenv("AZURE_BLOB_CONN_STRING")
AZURE_BLOB_ACC_KEY = os.getenv("AZURE_BLOB_ACC_KEY")
"""
Create index
Create chunking and embedding skills
Index data
"""
def ta_create_skillset():
split_skill = SplitSkill(
inputs=[InputFieldMappingEntry(name="text", source="/document/content")],
outputs=[OutputFieldMappingEntry(name="textItems", target_name="chunks")],
name="cf-textsplit-skill-1k",
text_split_mode="pages",
maximum_page_length=1000,
page_overlap_length=100
)
embedding_skill = AzureOpenAIEmbeddingSkill(
inputs=[InputFieldMappingEntry(name="text", source="/document/chunks/*")],
outputs=[OutputFieldMappingEntry(name="embedding", target_name="contentVector")],
context="/document/chunks/*",
name="cf-embedding-skill-large",
resource_url=AZURE_OPENAI_ENDPOINT,
api_key=AZURE_OPENAI_API_KEY,
deployment_name="text-embedding-3-large",
model_name="text-embedding-3-large",
dimensions=3072
)
skillset = SearchIndexerSkillset(
name="cf-chunk-embed-skillset",
description="Skillset for chunking and Azure OpenAI embeddings",
skills=[split_skill, embedding_skill]
)
indexer = SearchIndexerClient(
endpoint=AZURE_SEARCH_ENDPOINT,
credential=AzureKeyCredential(AZURE_SEARCH_KEY)
)
indexer.create_or_update_skillset(skillset)
print(f"Skillset {skillset.name} with skills: {', '.join([x.name for x in skillset.skills])} created.")
def ta_create_index(index_name):
fields = [
SimpleField(name="id", type=SearchFieldDataType.String, key=True),
SearchableField(name="title", type=SearchFieldDataType.String),
SearchableField(name="content", type=SearchFieldDataType.String),
SearchableField(name="chunks", collection=True, type=SearchFieldDataType.String),
SimpleField(name="location", type=SearchFieldDataType.String, filterable=True),
SimpleField(name="document_type", type=SearchFieldDataType.String, filterable=True),
SearchableField(name="jurisdiction", collection=True, type=SearchFieldDataType.String, filterable=True),
SearchableField(name="category", collection=True, type=SearchFieldDataType.String, filterable=True),
SearchableField(name="summary", type=SearchFieldDataType.String),
SearchableField(name="abstract", type=SearchFieldDataType.String),
SearchableField(name="contentVector", collection=True, type=SearchFieldDataType.Single,
vector_search_dimensions=3072, vector_search_profile_name="cf-vector")
# SearchField(name="contentVector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
# searchable=True, hidden=False, vector_search_dimensions=3072, vector_search_profile_name="cf-vector"),
]
vectorizer = AzureOpenAIVectorizer(
vectorizer_name="cf-vectorizer",
parameters=AzureOpenAIVectorizerParameters(
resource_url=AZURE_OPENAI_ENDPOINT,
deployment_name="text-embedding-3-large",
api_key=AZURE_OPENAI_API_KEY,
model_name="text-embedding-3-large"
)
)
vector_search = VectorSearch(
profiles=[
VectorSearchProfile(name="cf-vector",
algorithm_configuration_name="vector-config",
vectorizer_name="cf-vectorizer"
)
],
algorithms=[
HnswAlgorithmConfiguration(
name="vector-config",
parameters={
"m": 4,
"efConstruction": 400,
"efSearch": 500,
"metric": "cosine"
}
)
],
vectorizers=[vectorizer]
)
index_client = SearchIndexClient(endpoint=AZURE_SEARCH_ENDPOINT, credential=AzureKeyCredential(AZURE_SEARCH_KEY))
index = SearchIndex(name=index_name, fields=fields, vector_search=vector_search)
index_client.create_or_update_index(index)
print(f"Index '{index_name}' created or updated with vector search capability.")
ta_create_skillset()
ta_create_index("cf-rag-index")
indexer_client = SearchIndexerClient(endpoint=AZURE_SEARCH_ENDPOINT, credential=AzureKeyCredential(AZURE_SEARCH_KEY))
data_source_conn = SearchIndexerDataSourceConnection(
name="cf-ta-blob-conn",
connection_string=AZURE_BLOB_CONN_STRING,
type="azureblob",
container=SearchIndexerDataContainer(name="cf-ta-container")
)
indexer_client.create_or_update_data_source_connection(data_source_conn)
indexer = SearchIndexer(
name="cf-ta-indexer",
data_source_name="cf-ta-blob-conn",
target_index_name="cf-rag-index",
skillset_name="cf-chunk-embed-skillset",
output_field_mappings=[
FieldMapping(source_field_name="/document/chunks", target_field_name="chunks"),
FieldMapping(source_field_name="/document/contentVector/*", target_field_name="contentVector")
],
parameters={"configuration": {"parsing_mode":"json"}}
)
indexer_client.create_or_update_indexer(indexer)
indexer_client.run_indexer(name="cf-ta-indexer")
indexer_status = indexer_client.get_indexer_status("cf-ta-indexer")
print(indexer_status.status)
def print_execution_history(indexer_client, indexer_name):
indexer_status = indexer_client.get_indexer_status(indexer_name)
for execution in indexer_status.execution_history:
if len(execution.errors) > 0:
e = execution.errors[0]
print(e.details)
print(e.error_message)
print("-"*10)
if len(execution.warnings) > 0:
w = execution.warnings[0]
print(w.details)
print(w.message)
print("-"*10)
print_execution_history(indexer_client, "cf-ta-indexer")
search_client = SearchClient(
endpoint=AZURE_SEARCH_ENDPOINT,
index_name="cf-rag-index",
credential=AzureKeyCredential(AZURE_SEARCH_KEY)
)
results = search_client.search("*", top=1)
res = list(results)[0]
print(res["chunks"][0][-100:])
print(res["chunks"][1][:100])
pprint(res)
</code></pre>
<p>The following vector search returns the error:</p>
<pre><code>Message: The field 'contentVector' in the vector field list is not a vector field.
Parameter name: vector.fields
Exception Details: (FieldNotSearchable) The field 'contentVector' in the vector field list is not a vector field.
Code: FieldNotSearchable
Message: The field 'contentVector' in the vector field list is not a vector field.
</code></pre>
<pre><code>from azure.search.documents.models import VectorizableTextQuery  # import needed for this snippet

results = search_client.search(
select="title,chunks",
vector_queries=[VectorizableTextQuery(
text=myquery,
k_nearest_neighbors=3,
fields="contentVector"
)]
)
for r in results:
pprint(r)
</code></pre>
<p>Changing the vector field to</p>
<pre><code>SearchField(name="contentVector", type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True, vector_search_dimensions=3072, vector_search_profile_name="cf-vector")
</code></pre>
<p>actually gets rid of the above error; however, the search result is empty and another error is returned:</p>
<pre><code>There's a mismatch in vector dimensions. The vector field 'contentVector', with dimension of '3072', expects a length of '3072'. However, the provided vector has a length of '0'. Please ensure that the vector length matches the expected length of the vector field. Read the following documentation for more details: https://learn.microsoft.com/en-us/azure/search/vector-search-how-to-configure-compression-storage.
Could not index document because some of the data in the document was not valid.
----------
Cannot iterate over non-array '/document/contentVector'.
Could not map output field 'contentVector' to search index. Check the 'outputFieldMappings' property of your indexer.
</code></pre>
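<p>For intuition about the last two errors, here is a plain-Python sketch (dicts only, not SDK objects, and only my reading of the enrichment tree) of why there is nothing to iterate at <code>/document/contentVector</code>: with the embedding skill's context set to <code>/document/chunks/*</code>, each embedding is attached under its own chunk node, so the vectors live at <code>/document/chunks/*/contentVector</code> instead.</p>

```python
# Illustrative enrichment tree after the split + embedding skills run.
# (Plain dicts standing in for the indexer's internal document.)
document = {
    "content": "full text ...",
    "chunks": [
        {"$value": "chunk one", "contentVector": [0.1, 0.2]},
        {"$value": "chunk two", "contentVector": [0.3, 0.4]},
    ],
}

# "/document/chunks/*/contentVector" resolves to one vector per chunk:
per_chunk = [c["contentVector"] for c in document["chunks"]]

# "/document/contentVector" does not exist at the top level at all,
# which matches the "Cannot iterate over non-array" indexer error:
top_level = document.get("contentVector")

print(per_chunk)   # [[0.1, 0.2], [0.3, 0.4]]
print(top_level)   # None
```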
|
<python><azure><azure-openai>
|
2024-10-14 20:12:04
| 1
| 359
|
cap
|
79,087,531
| 26,843,912
|
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard' on Mac
|
<p>I was developing my FastAPI project on my Windows PC and everything was working fine there, but I recently got a new MacBook, installed the latest Python 3.13, and now everything just gives errors.</p>
<p>When I try to run my FastAPI server, it gives this error:</p>
<pre><code>(venv) venvlisa@Lisas-MacBook-Air src % uvicorn main:app --host 0.0.0.0 --port 8000 --reload
INFO: Will watch for changes in these directories: ['/Users/lisa/Documents/Projects/phia-backend-python/src']
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process [67473] using WatchFiles
Process SpawnProcess-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/multiprocessing/process.py", line 313, in _bootstrap
self.run()
~~~~~~~~^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
target(sockets=sockets)
~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/uvicorn/server.py", line 61, in run
return asyncio.run(self.serve(sockets=sockets))
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 194, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 721, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/uvicorn/server.py", line 68, in serve
config.load()
~~~~~~~~~~~^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/uvicorn/config.py", line 473, in load
self.loaded_app = import_from_string(self.app)
~~~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/uvicorn/importer.py", line 21, in import_from_string
module = importlib.import_module(module_str)
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/importlib/__init__.py", line 88, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1387, in _gcd_import
File "<frozen importlib._bootstrap>", line 1360, in _find_and_load
File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 935, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 1022, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/Users/lisa/Documents/Projects/phia-backend-python/src/main.py", line 16, in <module>
from fastapi import FastAPI
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/__init__.py", line 7, in <module>
from .applications import FastAPI as FastAPI
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/applications.py", line 16, in <module>
from fastapi import routing
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/routing.py", line 24, in <module>
from fastapi.dependencies.models import Dependant
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/dependencies/models.py", line 3, in <module>
from fastapi.security.base import SecurityBase
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/security/__init__.py", line 1, in <module>
from .api_key import APIKeyCookie as APIKeyCookie
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/security/api_key.py", line 3, in <module>
from fastapi.openapi.models import APIKey, APIKeyIn
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/fastapi/openapi/models.py", line 107, in <module>
class Schema(BaseModel):
...<86 lines>...
extra: str = "allow"
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/pydantic/main.py", line 286, in __new__
cls.__try_update_forward_refs__()
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/pydantic/main.py", line 808, in __try_update_forward_refs__
update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,))
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/pydantic/typing.py", line 554, in update_model_forward_refs
update_field_forward_refs(f, globalns=globalns, localns=localns)
~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/pydantic/typing.py", line 520, in update_field_forward_refs
field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/lisa/Documents/Projects/phia-backend-python/venv/lib/python3.13/site-packages/pydantic/typing.py", line 66, in evaluate_forwardref
return cast(Any, type_)._evaluate(globalns, localns, set())
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
</code></pre>
<p>And here is my requirements.txt:</p>
<pre><code>anyio
asyncio
click==8.1.3
fastapi==0.95.1
fastapi-jwt-auth==0.5.0
h11==0.14.0
httptools
idna==3.4
numpy
opencv-python==4.7.0.72
opencv-stubs==0.0.7
pika==1.3.1
Pillow
pydantic==1.10.7
PyJWT==1.7.1
python-dotenv==1.0.0
PyYAML
sniffio==1.3.0
starlette==0.26.1
typing_extensions
uvicorn==0.22.0
watchfiles==0.19.0
websockets==11.0.3
httpx
asyncer
aiofiles
watchdog
boto3
</code></pre>
|
<python><python-3.x><django><fastapi>
|
2024-10-14 19:44:44
| 2
| 323
|
Zaid
|
79,087,362
| 6,141,238
|
Why does Scalene produce no results or partial results on my Windows 10 PC?
|
<p>I just installed Scalene 1.5.45 and have Python 3.12.0 and VS Code 1.94.2 already installed. I am running Windows 10 on a Dell laptop.</p>
<p>I am using the test script below introduced at about 24:30 in <a href="https://www.youtube.com/watch?v=Uq60vknROcM" rel="nofollow noreferrer">this</a> video and have saved it as <strong>problem1.py</strong>.</p>
<pre><code>import numpy as np
def main():
for i in range(10):
x = np.array(range(10**7))
y = np.array(np.random.uniform(0, 100, size=(10**8)))
main()
print('Done.')
</code></pre>
<p>(1) When I open a PowerShell window, <code>cd</code> to the directory containing problem1.py, and run <code>python problem1.py</code>, the code runs fine and outputs "Done.". When I run <code>scalene problem1.py</code> instead, the code again seems to run fine and returns "Done.", but is followed by the message:</p>
<blockquote>
<p>Scalene: The specified code did not run for long enough to profile.
By default, Scalene only profiles code in the file executed and its subdirectories.
To track the time spent in all files, use the <code>--profile-all</code> option.
NOTE: The GPU is currently running in a mode that can reduce Scalene's accuracy when reporting GPU utilization.
If you have sudo privileges, you can run this command (Linux only) to enable per-process GPU accounting:
python3 -m scalene.set_nvidia_gpu_modes</p>
</blockquote>
<p>No window with profiling results automatically appears. The code takes maybe 20 seconds or so to complete, so I think it may in fact be running for long enough to profile.</p>
<p>(2) I also encounter a similar issue in VS Code. The code runs as expected when not profiling. However, when I follow the (excellent) instructions given in <a href="https://github.com/plasma-umass/scalene/discussions/703" rel="nofollow noreferrer">this</a> GitHub issue β press Ctrl+Shift+P, type "scalene", and select the only option in the drop-down menu β a pop-up window appears in the lower right corner of VS Code titled "Scalene: now profiling". This box sticks around for perhaps 20 seconds and then disappears. Nothing prints to the terminal and no window with results automatically appears.</p>
<p>What could be the possible causes of issues (1) and (2), or how should I troubleshoot them?</p>
<p>(I am posting this question on Stack Overflow with the thought that answers might be useful to other programmers working with Scalene, but if the question should go elsewhere, please let me know with a comment. A copy is currently <a href="https://github.com/plasma-umass/scalene/issues/868" rel="nofollow noreferrer">posted</a> on GitHub.)</p>
<hr />
<p><strong>Update.</strong> While I still do not understand the error-like message quoted above, I went ahead and tried <code>scalene problem1.py --profile-all</code>, and this did bring up a webpage with what appears to be partial profiling results. It is hard to describe the content of this webpage without a screenshot, so I will tentatively include one, and if anyone has objections to this, let me know with a comment.</p>
<p><a href="https://i.sstatic.net/64NG13BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/64NG13BM.png" alt="enter image description here" /></a></p>
<p>Notably, lines 1, 2, 5, 7, and 8 of problem1.py do not appear in the above profiling. And the first block that appears to profile "threading" operations is not shown in the <a href="https://www.youtube.com/watch?v=Uq60vknROcM" rel="nofollow noreferrer">video</a> mentioned above. A run on a separate PC with a different history and software yielded identical results. Additionally, similar profiling of other test functions showed the same clipped/fragmented profiling results.</p>
<p>The caveat below still appears in the terminal after running <code>scalene problem1.py --profile-all</code>.</p>
<blockquote>
<p>NOTE: The GPU is currently running in a mode that can reduce Scalene's
accuracy when reporting GPU utilization. If you have sudo privileges,
you can run this command (Linux only) to enable per-process GPU
accounting: python3 -m scalene.set_nvidia_gpu_modes</p>
</blockquote>
<p>I could not find an equivalent Windows command to address this.</p>
<p>So, with all that said, is there anything that I can do on my Windows machines to make Scalene profile all lines of problem1.py above?</p>
|
<python><windows><profiling><line-by-line><scalene>
|
2024-10-14 18:47:52
| 0
| 427
|
SapereAude
|
79,087,296
| 999,881
|
Importing Python modules from different directories
|
<p>I have the following folder setup</p>
<pre><code>user_test.py
factories/
user_factory.py
pb/
user_pb2.py
user_pb2_grpc.py
</code></pre>
<p>This is how the imports are in each of the files</p>
<pre><code># user_test.py
import factories.user_factory

# factories/user_factory.py
import pb.user_pb2_grpc

# pb/user_pb2_grpc.py
import user_pb2
</code></pre>
<p>However, on the last line I get an error saying</p>
<p><code>ModuleNotFoundError: No module named 'user_pb2'</code></p>
<p><code>pb/user_pb2_grpc.py</code> is unable to import <code>user_pb2.py</code> even though they are in the same directory.</p>
<p>This can be fixed by changing <code>import user_pb2</code> to <code>from . import user_pb2</code>, but this is auto-generated gRPC code that I cannot change.</p>
<p>How can I resolve this error or better structure my code?</p>
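<p>For reference, one workaround that leaves the generated file untouched is to put the generated package's own directory on <code>sys.path</code> before importing it, so the absolute <code>import user_pb2</code> resolves. A minimal, self-contained sketch (it rebuilds the layout from the question in a temp directory; the module contents are stand-ins):</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the layout from the question in a throwaway directory.
root = tempfile.mkdtemp()
pb_dir = os.path.join(root, "pb")
os.makedirs(pb_dir)
with open(os.path.join(pb_dir, "user_pb2.py"), "w") as f:
    f.write("VALUE = 42\n")  # stand-in for the generated message module
with open(os.path.join(pb_dir, "user_pb2_grpc.py"), "w") as f:
    f.write("import user_pb2\n")  # the absolute import gRPC emits

# Adding pb/ itself to sys.path makes 'import user_pb2' resolvable
# without editing the auto-generated file.
sys.path.insert(0, pb_dir)
grpc_mod = importlib.import_module("user_pb2_grpc")
print(grpc_mod.user_pb2.VALUE)  # 42
```

<p>(The other common route is to point the <code>protoc</code>/<code>grpc_tools</code> invocation at the package root so the generated import is package-qualified, but that changes the generation step rather than the code.)</p>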
|
<python><grpc>
|
2024-10-14 18:20:17
| 1
| 441
|
dg428
|
79,087,092
| 3,949,008
|
All-source traveling salesperson problem
|
<p>I have a "small-scale" TSP problem whose exact solution can be computed trivially, as follows. I am trying to get this to spit out "all-source" TSP paths: I don't have to start at a fixed node, I can start anywhere, and I want to know the optimal path from every possible starting vertex. How can this be done other than by generating multiple distance matrices and re-labeling the vertices each time, which would be quite annoying?</p>
<pre><code>import numpy as np
from python_tsp.exact import solve_tsp_dynamic_programming
distance_matrix = np.array([
[0,437,23,21,41,300,142,187,81,171,98],
[545,0,10,19,54,339,142,201,78,170,143],
[95,147,0,186,273,93,29,55,731,76,164],
[45,60,930,0,518,28,10,13,298,31,71],
[74,111,495,490,0,62,25,21,407,87,143],
[122,126,2,1,13,0,2057,241,17,45,26],
[328,313,5,20,34,527,0,560,70,107,76],
[208,200,5,9,24,442,159,0,41,61,41],
[122,174,53,44,110,98,25,40,0,878,798],
[214,340,12,37,66,162,54,96,199,0,789],
[265,423,12,27,90,173,73,86,312,701,0]])
distance_matrix = distance_matrix * -1
solve_tsp_dynamic_programming(distance_matrix)
# output: ([0, 5, 6, 7, 4, 3, 2, 8, 9, 10, 1], np.int64(-7727))
</code></pre>
<p>NOTE: I only want exact solutions, and not approximations.</p>
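<p>For what it's worth, the re-labeling itself doesn't require rebuilding matrices by hand: a single fancy-indexing step can root any vertex at position 0 (which is where the solver starts). A small numpy sketch, independent of the solver (the 3×3 matrix is made up):</p>

```python
import numpy as np

D = np.array([[0, 5, 9],
              [4, 0, 3],
              [7, 8, 0]])

def rooted_matrix(D, start):
    """Permute D so that `start` becomes vertex 0, returning the
    permutation so solver indices can be mapped back afterwards."""
    n = len(D)
    perm = [start] + [j for j in range(n) if j != start]
    return D[np.ix_(perm, perm)], perm

M, perm = rooted_matrix(D, 2)
# Row 0 of M is row 2 of D, so a tour the solver returns starting at 0
# starts at original vertex 2; map index i back via perm[i].
print(M[0].tolist())  # [0, 7, 8]
```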
|
<python><traveling-salesman>
|
2024-10-14 17:13:15
| 1
| 10,535
|
Gopala
|
79,086,960
| 10,014,361
|
Undefined exception name not raising NameError and program runs fine
|
<p>Why does my code here not raise a <code>NameError</code> or <code>SyntaxError</code> when I am clearly using a non-existent exception name (<code>ButtError</code>)?</p>
<pre class="lang-py prettyprint-override"><code>def fun(x):
assert x >= 0
return x ** 0.5
def mid_level(x):
try:
fun(x)
except ButtError:
print(333)
raise
print(444)
try:
x = mid_level(-1)
except RuntimeError:
x = -1
except Exception:
x = 9
except:
x = -2
print(x)
</code></pre>
|
<python><python-3.x><exception>
|
2024-10-14 16:31:07
| 2
| 2,479
|
Aven Desta
|
79,086,936
| 162,758
|
How do I log the queries used by Feast for point in time joins
|
<p>I have a Feast feature store set up with a Snowflake offline store, a Snowflake registry, and a DynamoDB online store. I am playing around with some feature store retrieval scenarios and would like to understand the queries Feast uses to retrieve historical data. How can I log or print these queries?</p>
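<p>A hedged sketch of one way to surface the SQL: the Snowflake offline store ultimately runs its queries through the Snowflake Python connector, and raising that connector's logger (and Feast's own) to <code>DEBUG</code> may print the point-in-time join statements. The logger names and log contents here are assumptions about the connector, not documented Feast behavior.</p>

```python
import logging

# Assumption: the Snowflake Python connector (used by Feast's Snowflake
# offline store) emits the SQL it executes at DEBUG level, and Feast's
# own logger may add context around retrieval calls.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)
logging.getLogger("snowflake.connector").setLevel(logging.DEBUG)
logging.getLogger("feast").setLevel(logging.DEBUG)
```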
|
<python><snowflake-cloud-data-platform><feast>
|
2024-10-14 16:18:22
| 1
| 2,344
|
VDev
|
79,086,873
| 12,011,020
|
Pandas / Polars: Write list of JSONs to Database fails with `ndarray is not json serializable`
|
<p>I have multiple JSON columns which I concatenate into a single array-of-JSON column.
The DataFrame looks like this:</p>
<pre><code>┌─────────────────────────────────┐
│ json_concat                     │
│ ---                             │
│ list[str]                       │
╞═════════════════════════════════╡
│ ["{"integer_col":52,"string_co… │
│ ["{"integer_col":93,"string_co… │
│ ["{"integer_col":15,"string_co… │
│ ["{"integer_col":72,"string_co… │
│ ["{"integer_col":61,"string_co… │
│ ["{"integer_col":21,"string_co… │
│ ["{"integer_col":83,"string_co… │
│ ["{"integer_col":87,"string_co… │
│ ["{"integer_col":75,"string_co… │
│ ["{"integer_col":75,"string_co… │
└─────────────────────────────────┘
</code></pre>
<p>Here is the output of polars <code>glimpse</code></p>
<pre><code>Rows: 10
Columns: 1
$ json_concat <list[str]> ['{"integer_col":52,"string_col":"v"}', '{"float_col":86.61761457749351,"bool_col":true}', '{"datetime_col":"2021-01-01 00:00:00","categorical_col":"Category3"}'], ['{"integer_col":93,"string_col":"l"}', '{"float_col":60.11150117432088,"bool_col":false}', '{"datetime_col":"2021-01-02 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":15,"string_col":"y"}', '{"float_col":70.80725777960456,"bool_col":false}', '{"datetime_col":"2021-01-03 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":72,"string_col":"q"}', '{"float_col":2.0584494295802447,"bool_col":true}', '{"datetime_col":"2021-01-04 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":61,"string_col":"j"}', '{"float_col":96.99098521619943,"bool_col":true}', '{"datetime_col":"2021-01-05 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":21,"string_col":"p"}', '{"float_col":83.24426408004217,"bool_col":true}', '{"datetime_col":"2021-01-06 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":83,"string_col":"o"}', '{"float_col":21.233911067827616,"bool_col":true}', '{"datetime_col":"2021-01-07 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":87,"string_col":"o"}', '{"float_col":18.182496720710063,"bool_col":true}', '{"datetime_col":"2021-01-08 00:00:00","categorical_col":"Category2"}'], ['{"integer_col":75,"string_col":"s"}', '{"float_col":18.34045098534338,"bool_col":true}', '{"datetime_col":"2021-01-09 00:00:00","categorical_col":"Category1"}'], ['{"integer_col":75,"string_col":"l"}', '{"float_col":30.42422429595377,"bool_col":true}', '{"datetime_col":"2021-01-10 00:00:00","categorical_col":"Category2"}']
</code></pre>
<p>I want to write the json column to a table called <code>testing</code>. I tried both <code>pd.DataFrame.to_sql()</code> as well as <code>pl.DataFrame.write_database()</code> both failing with a similary error</p>
<h2>Error</h2>
<p>The essential part is this <strong>sqlalchemy.exc.StatementError: (builtins.TypeError) Object of type ndarray is not JSON serializable</strong></p>
<pre><code>File "/usr/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
sqlalchemy.exc.StatementError: (builtins.TypeError) Object of type ndarray is not JSON serializable
[SQL: INSERT INTO some_schema.testing (json_concat) VALUES (%(json_concat)s)]
[parameters: [{'json_concat': array(['{"integer_col":52,"string_col":"v"}',
'{"float_col":86.61761457749351,"bool_col":true}',
'{"datetime_col":"2021-01-01 00:00:00","categorical_col":"Category3"}'],
dtype=object)},
# ... abbreviated
dtype=object)}, {'json_concat': array(['{"integer_col":75,"string_col":"l"}',
'{"float_col":30.42422429595377,"bool_col":true}',
'{"datetime_col":"2021-01-10 00:00:00","categorical_col":"Category2"}'],
dtype=object)}]]
</code></pre>
<h3>Code that produces the Error</h3>
<p>(exemplary for pandas)</p>
<pre class="lang-py prettyprint-override"><code>df_pandas.to_sql(
"testing",
con=engines.engine,
schema=schema,
index=False,
if_exists="append",
dtype=DTYPE,
)
</code></pre>
<h2>Question</h2>
<p>How do I need to prepare the concatenated JSON column for it to be JSON serializable?</p>
<h3>MRE (Create Example Data)</h3>
<pre class="lang-py prettyprint-override"><code>from typing import Any
import numpy as np
import pandas as pd
import polars as pl
from myengines import engines
from sqlalchemy import dialects, text
schema = "some_schema"
# Seed for reproducibility
np.random.seed(42)
n = 10
# Generate random data
integer_col = np.random.randint(1, 100, n)
float_col = np.random.random(n) * 100
string_col = np.random.choice(list("abcdefghijklmnopqrstuvwxyz"), n)
bool_col = np.random.choice([True, False], n)
datetime_col = pd.date_range(start="2021-01-01", periods=n, freq="D")
categorical_col = np.random.choice(["Category1", "Category2", "Category3"], n)
# Creating the DataFrame
df = pl.DataFrame(
{
"integer_col": integer_col,
"float_col": float_col,
"string_col": string_col,
"bool_col": bool_col,
"datetime_col": datetime_col,
"categorical_col": categorical_col,
}
)
df = df.select(
pl.struct(pl.col("integer_col", "string_col")).struct.json_encode().alias("json1"),
pl.struct(pl.col("float_col", "bool_col")).struct.json_encode().alias("json2"),
pl.struct(pl.col("datetime_col", "categorical_col"))
.struct.json_encode()
.alias("json3"),
).select(pl.concat_list(pl.col(["json1", "json2", "json3"])).alias("json_concat"))
DTYPE: dict[str, Any] = {"json_concat": dialects.postgresql.JSONB}
</code></pre>
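<p>For what it's worth, the failure is reproducible without any database: after the polars-to-pandas round trip the list cells are numpy object arrays, which the stdlib JSON encoder rejects, and converting each cell to a plain <code>list</code> restores serializability. A minimal sketch (whether the driver then accepts plain lists for <code>JSONB</code> is an assumption here):</p>

```python
import json

import numpy as np
import pandas as pd

# One cell holding a numpy object array of JSON strings, mimicking
# the polars -> pandas round trip in the question.
df = pd.DataFrame({"json_concat": [np.array(['{"a":1}', '{"b":2}'], dtype=object)]})

try:
    json.dumps(df.loc[0, "json_concat"])
except TypeError as e:
    print(e)  # Object of type ndarray is not JSON serializable

# Converting each cell to a plain Python list makes it serializable again.
df["json_concat"] = df["json_concat"].map(list)
print(json.dumps(df.loc[0, "json_concat"]))
```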
|
<python><json><pandas><sqlalchemy><python-polars>
|
2024-10-14 15:57:22
| 1
| 491
|
SysRIP
|
79,086,849
| 2,532,408
|
python collation sort "shift-trimmed"
|
<p>How would I make this test pass?</p>
<pre class="lang-py prettyprint-override"><code>names = [
    "cote",
    "coté",
    "côte",
    "côté",
    "ReasonE",
    "Reason1",
    "Reasonē",
    "Reason Super",
    "ReasonÅ",
    "ReasonA",
    "Reasona",
    "Reasone",
    "death",
    "deluge",
    "de luge",
    "disílva John",
    "diSilva John",
    "di Silva Fred",
    "diSilva Fred",
    "disílva Fred",
    "di Silva John",
]
loc = icu.Locale("und-u-ka-shifted-kb-true")
c = icu.Collator.createInstance(loc)
assert sorted(names, key=c.getSortKey) == [
    "cote",
    "côte",
    "coté",
    "côté",
    "death",
    "deluge",
    "de luge",
    "di Silva Fred",
    "diSilva Fred",
    "disílva Fred",
    "di Silva John",
    "diSilva John",
    "disílva John",
    "Reason1",
    "Reasona",
    "ReasonA",
    "ReasonÅ",
    "Reasone",
    "ReasonE",
    "Reasonē",
    "Reason Super",
]
</code></pre>
<p><strong>Background:</strong></p>
<p>I'm trying to replicate the sorting behavior from a postgres database. Best I can figure is it's got custom rules for space/punctuation based on the '<a href="https://unicode-org.github.io/icu/userguide/collation/customization/ignorepunct.html#shift-trimmed" rel="nofollow noreferrer">shift-trimmed</a>' option, along with '<a href="https://unicode.org/reports/tr10/#Backward" rel="nofollow noreferrer">backwards accent</a>' (<code>kb-true</code> or <code>[backwards 2]</code>)</p>
<p>ICU doesn't appear to support <code>shift-trimmed</code> and I'm not sure how else I could get these to sort "properly".</p>
<p><em>See <a href="https://www.unicode.org/reports/tr35/tr35-collation.html#Collation_Settings" rel="nofollow noreferrer">collation settings</a> and <a href="https://unicode.org/reports/tr10/#Contextual_Sensitivity" rel="nofollow noreferrer">contextual sensitivity</a> for more explanation</em>.</p>
|
<python><sorting><collation><icu>
|
2024-10-14 15:50:13
| 0
| 4,628
|
Marcel Wilson
|
79,086,695
| 7,504,750
|
List containers/folders/subfolders in Azure DataLake in Python
|
<p>I have a Python code snippet to connect to Azure. Its functions authenticate, list the containers, and list the blobs in a container.</p>
<p><code>get_containers()</code> already lists down to the first folder level; the problem is that I cannot get the subfolders.</p>
<p>The directory structure in Azure looks something like <strong>(this is also my desired output)</strong>:</p>
<pre><code>container1
container1/folder1/subfolder2
container1/folder1/subfolder2/subfolder3
container1/folder1/subfolder2/subfolder3/subfolder4
container1/folder2
container1/folder3/subfolder5
container2
container2/folder4
container2/folder5/subfolder6
container2/folder6
container3
container3/folder7/subfolder7
container4
container4/folder8
container4/folder9
</code></pre>
<p>and it could possibly change. I need to list or print the containers/folders/subfolders dynamically.</p>
<p>Here is the snippet of the functions:</p>
<pre><code> def _authenticate(self):
"""Authenticates using ClientSecretCredential and returns BlobServiceClient."""
try:
credential = ClientSecretCredential(
tenant_id=self.tenant_id,
client_id=self.client_id,
client_secret=self.client_secret
)
blob_service_client = BlobServiceClient(
account_url=self.account_url,
credential=credential
)
print("Successfully connected to Azure Blob Storage!")
return blob_service_client
except Exception as e:
print(f'Failed to authenticate: {str(e)}')
return None
def get_containers(self):
"""Lists containers and their subdirectories in Azure Blob Storage."""
if self.blob_service_client:
try:
container_list = []
containers = self.blob_service_client.list_containers()
for container in containers:
container_name = container['name']
container_list.append(container_name)
# List blobs and subdirectories in each container
container_client = self.blob_service_client.get_container_client(container_name)
blobs = container_client.walk_blobs()
for blob in blobs:
container_list.append(f'{container_name}/{blob.name}')
return container_list
except Exception as e:
print(f'Failed to list containers: {str(e)}')
return []
else:
print("Service Client not initialized")
return []
def get_blobs(self, container_name: str, directory: str = "/") -> list:
"""Lists all blobs in the specified container and directory."""
if self.blob_service_client:
try:
container_client = self.blob_service_client.get_container_client(container_name)
blobs = container_client.walk_blobs(name_starts_with=directory)
blob_list = []
for blob in blobs:
blob_list.append(f'{container_name}/{blob.name}')
return blob_list
except Exception as e:
print(f'Failed to list blobs in {container_name}: {str(e)}')
return []
else:
print("Service Client not initialized")
return []
</code></pre>
<p>The output I got is:</p>
<pre><code>container1
container1/folder1
container1/folder2
container1/folder3
container2
container2/folder4
container2/folder5
container2/folder6
container3
container3/folder7
container4
container4/folder8
container4/folder9
</code></pre>
<p>Is there something missing from my <code>get_containers()</code> such that I only get the first level of folders and not the subfolders?</p>
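For what it's worth, <code>walk_blobs</code> is hierarchical: each call returns only one level (blob names plus <code>BlobPrefix</code> items), so deeper levels require recursing into each prefix, whereas <code>list_blobs</code> returns full flat names. The folder tree can then be rebuilt from those flat names; a minimal pure-Python sketch of that rebuilding step (the blob names below are made up for illustration):

```python
def folder_paths(blob_names):
    """Derive every folder/subfolder path from flat blob names."""
    paths = set()
    for name in blob_names:
        parts = name.split("/")[:-1]  # drop the file name itself
        for i in range(1, len(parts) + 1):
            paths.add("/".join(parts[:i]))
    return sorted(paths)

# Hypothetical flat names, as returned by container_client.list_blobs()
names = [
    "folder1/subfolder2/subfolder3/file.txt",
    "folder2/file.txt",
]
print(folder_paths(names))
# ['folder1', 'folder1/subfolder2', 'folder1/subfolder2/subfolder3', 'folder2']
```

The same helper could be fed the names from each container to print the full tree.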
|
<python><azure-blob-storage>
|
2024-10-14 15:05:16
| 1
| 451
|
Ricky Aguilar
|
79,086,623
| 1,554,020
|
Can asyncio.shield be correctly called with a coroutine argument?
|
<p><a href="https://docs.python.org/3/library/asyncio-task.html#shielding-from-cancellation" rel="nofollow noreferrer"><code>asyncio.shield</code> docs</a> say:</p>
<blockquote>
<p>If <code>aw</code> is a coroutine it is automatically scheduled as a <code>Task</code>.</p>
</blockquote>
<p>and</p>
<blockquote>
<p>Important: Save a reference to tasks passed to this function, to avoid a task disappearing mid-execution. The event loop only keeps weak references to tasks. A task that isnβt referenced elsewhere may get garbage collected at any time, even before itβs done.</p>
</blockquote>
<p>Does the second clause also refer to tasks created internally when automatically wrapping coroutines? Does that further imply passing a coroutine to <code>shield</code> is never correct? Why does it support such automatic wrapping then?</p>
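For context, the pattern usually recommended to stay on the safe side of both quoted passages is to create the task explicitly, keep a strong reference to it, and pass that task to <code>shield</code>; a minimal sketch:

```python
import asyncio

async def worker():
    await asyncio.sleep(0)
    return 42

async def main():
    # Wrap explicitly and keep a strong reference, so the task cannot be
    # garbage-collected mid-execution, then shield the task itself.
    task = asyncio.create_task(worker())
    return await asyncio.shield(task)

print(asyncio.run(main()))  # 42
```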
|
<python><garbage-collection><python-asyncio>
|
2024-10-14 14:42:04
| 0
| 14,259
|
yuri kilochek
|
79,086,262
| 18,618,577
|
Pandas read_csv, ignore lines that contain a specific string
|
<p>I have a dataframe that lists datalogger names and their passwords. The password is generated inside my script if the datalogger has a blank in the password field. And if there is no generic password, then I put the specific password in this field for that datalogger.
Third case: for some dataloggers, I set a specific string ('NON') in that field to tell my script not to consider this line: the datalogger must be ignored. That gives something like this:</p>
<pre><code>datalog, pwd
A001,
A002, 123
A003,
A004,
A005, NON
A006, 456
A007,
A008, NON
A009, 789
A010,
</code></pre>
<p>So :</p>
<p>Dataloggers 1, 3, 4, 7, 10 have a generic password.</p>
<p>Dataloggers 2, 6, 9 have a specific password.</p>
<p>Dataloggers 5, 8 must be ignored.</p>
<p>How can I make a <code>pd.read_csv</code> call that ignores lines containing 'NON' in the second column?</p>
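One hedged approach (a sketch on an inline sample, not tested against the real file): <code>read_csv</code> itself has no content-based row filter, so read everything and then drop the sentinel rows:

```python
import io
import pandas as pd

# Hypothetical inline sample standing in for the real CSV file
csv_text = """datalog, pwd
A001,
A002, 123
A005, NON
A006, 456
"""

df = pd.read_csv(io.StringIO(csv_text), skipinitialspace=True)
# read_csv has no content-based row filter, so drop the 'NON' rows afterwards
df = df[df["pwd"] != "NON"]
print(df["datalog"].tolist())  # ['A001', 'A002', 'A006']
```

Blank passwords come through as NaN, which compares unequal to 'NON' and is therefore kept.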
|
<python><pandas><read-csv>
|
2024-10-14 13:01:03
| 1
| 305
|
BenjiBoy
|
79,086,247
| 7,086,023
|
Segmentation fault after printing function output, works line-by-line
|
<p>I'm working on a chemical clustering script in Python using the following packages:</p>
<ul>
<li><p><code>pubchempy</code> for fetching SMILES of chemical compounds</p>
</li>
<li><p><code>rdkit</code> for generating fingerprints and computing Tanimoto similarity</p>
</li>
</ul>
<pre><code>import numpy as np
import pandas as pd
import pubchempy as pcp
from rdkit import Chem
from rdkit.Chem import rdFingerprintGenerator
from rdkit.DataStructs import BulkTanimotoSimilarity
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
def get_similarity_matrix(chemical_names):
# Function to get SMILES from a chemical name
def get_smiles(chemical_name):
try:
compounds = pcp.get_compounds(chemical_name, 'name')
if compounds:
return compounds[0].canonical_smiles
else:
return None
except IndexError:
return None # If compound is not found
# Convert chemical names to SMILES
smiles_dict = {name: get_smiles(name) for name in chemical_names}
# Remove compounds that could not be converted to SMILES
smiles_dict = {name: smiles for name, smiles in smiles_dict.items() if smiles}
# Generate Morgan fingerprints
morgan_gen = rdFingerprintGenerator.GetMorganGenerator(radius=2)
fingerprints = {name: morgan_gen.GetFingerprint(Chem.MolFromSmiles(smiles)) for name, smiles in smiles_dict.items()}
# Create a similarity matrix
similarity_matrix = pd.DataFrame(index=smiles_dict.keys(), columns=smiles_dict.keys())
# Calculate pairwise Tanimoto similarities
for i, name1 in enumerate(smiles_dict.keys()):
for j, name2 in enumerate(smiles_dict.keys()):
if i <= j: # To avoid redundant calculation
if name1 == name2:
similarity = 1.0 # Similarity with itself
else:
similarity = BulkTanimotoSimilarity(fingerprints[name1], [fingerprints[name2]])[0]
similarity_matrix.at[name1, name2] = similarity
similarity_matrix.at[name2, name1] = similarity # Symmetric matrix
return similarity_matrix
def cluster_chemicals(similarity_matrix: pd.DataFrame, threshold: float) -> pd.DataFrame:
# Convert the similarity matrix to a distance matrix (1 - similarity)
distance_matrix = 1.0 - similarity_matrix
# Convert the distance matrix to a condensed distance matrix (required by linkage)
condensed_distance_matrix = squareform(distance_matrix)
# Perform hierarchical clustering
Z = linkage(condensed_distance_matrix, method='average')
# Assign cluster labels based on the similarity threshold
clusters = fcluster(Z, threshold, criterion='distance')
# All compounds initially belong to one cluster (cluster 0)
initial_cluster = np.zeros(len(similarity_matrix), dtype=np.int32)
# Ensure compounds that do not cluster with others are placed in separate clusters
unique_clusters = np.unique(clusters)
new_cluster_id = 1
for cluster in unique_clusters:
cluster_indices = np.where(clusters == cluster)[0]
if len(cluster_indices) == 1:
initial_cluster[cluster_indices] = 0 # Keep them in the initial cluster 0
else:
initial_cluster[cluster_indices] = new_cluster_id # Assign new cluster ID
new_cluster_id += 1
# Create a DataFrame to display the results
clustered_chemicals = pd.DataFrame({
'Compound': similarity_matrix.index,
'Cluster': initial_cluster
})
return clustered_chemicals
chemical_names = np.array(['Pioglitazone', 'Ticlopidine', 'Troglitazone', 'Zolpidem',
'Clopidogrel', 'Clomiphene', 'Alpidem', 'Sitaxentan',
'Rosiglitazone', 'Ambrisentan', 'Cyclofenil', 'Minocycline',
'Entacapone', 'Tolcapone', 'Ibufenac', 'Ibuprofen', 'Olanzapine',
'Clozapine', 'Trovafloxacin', 'Bosentan'], dtype=object)
compounds_similarity = get_similarity_matrix(chemical_names)
print(chemical_names)
print(compounds_similarity)
compounds_clustered = cluster_chemicals(similarity_matrix=compounds_similarity, threshold=0.5)
print(compounds_clustered)
</code></pre>
<p>Python crashes with a segmentation fault (zsh: segmentation fault) when executing the last line (<code>print(compounds_clustered)</code>). Oddly, when I run the lines in the cluster_chemicals function manually (line by line) and print the results, it works fine and there is no segmentation fault.</p>
<p>This problem was reproduced in macOS and Linux environments with the following packages:</p>
<pre><code>conda list
# packages in environment at /Users/xxxxx/miniconda3/envs/PythonML:
#
# Name Version Build Channel
_py-xgboost-mutex 2.0 cpu_0
_tflow_select 2.2.0 eigen
abseil-cpp 20211102.0 hc377ac9_0
absl-py 2.1.0 py310hca03da5_0
aiohappyeyeballs 2.4.0 py310hca03da5_0
aiohttp 3.10.5 py310h80987f9_0
aiosignal 1.2.0 pyhd3eb1b0_0
appnope 0.1.2 py310hca03da5_1001
asttokens 2.0.5 pyhd3eb1b0_0
astunparse 1.6.3 py_0
async-timeout 4.0.3 py310hca03da5_0
attrs 23.1.0 py310hca03da5_0
blas 1.0 openblas
blinker 1.6.2 py310hca03da5_0
boost 1.74.0 py310hd0bb7a8_5 conda-forge
boost-cpp 1.74.0 h32e41df_4 conda-forge
bottleneck 1.3.7 py310hbda83bc_0
brotli 1.0.9 h80987f9_8
brotli-bin 1.0.9 h80987f9_8
brotli-python 1.0.9 py310h313beb8_8
bzip2 1.0.8 h80987f9_6
c-ares 1.19.1 h80987f9_0
ca-certificates 2024.8.30 hf0a4a13_0 conda-forge
cachetools 5.3.3 py310hca03da5_0
cairo 1.16.0 h302bd0f_5
certifi 2024.8.30 pyhd8ed1ab_0 conda-forge
cffi 1.17.1 py310h3eb5a62_0
chardet 4.0.0 py310hca03da5_1003
charset-normalizer 3.3.2 pyhd3eb1b0_0
click 8.1.7 py310hca03da5_0
comm 0.2.1 py310hca03da5_0
contourpy 1.2.0 py310h48ca7d4_0
cryptography 41.0.3 py310h3c57c4d_0
cycler 0.11.0 pyhd3eb1b0_0
debugpy 1.6.7 py310h313beb8_0
decorator 5.1.1 pyhd3eb1b0_0
exceptiongroup 1.2.0 py310hca03da5_0
executing 0.8.3 pyhd3eb1b0_0
expat 2.6.3 h313beb8_0
flatbuffers 24.3.25 h313beb8_0
fontconfig 2.14.1 hee714a5_2
fonttools 4.51.0 py310h80987f9_0
freetype 2.12.1 h1192e45_0
freetype-py 2.3.0 pyhd8ed1ab_0 conda-forge
frozenlist 1.4.0 py310h80987f9_0
gast 0.4.0 pyhd3eb1b0_0
gettext 0.21.0 h13f89a0_1
giflib 5.2.1 h80987f9_3
glib 2.78.4 h313beb8_0
glib-tools 2.78.4 h313beb8_0
google-auth 2.29.0 py310hca03da5_0
google-auth-oauthlib 0.5.2 py310hca03da5_0
google-pasta 0.2.0 pyhd3eb1b0_0
greenlet 3.0.1 py310h313beb8_0
grpc-cpp 1.48.2 h877324c_0
grpcio 1.48.2 py310h877324c_0
h5py 3.11.0 py310haafd478_0
hdf5 1.12.1 h160e8cb_2
icu 68.1 hc377ac9_0
idna 3.7 py310hca03da5_0
ipdb 0.13.13 pypi_0 pypi
ipykernel 6.29.5 pyh57ce528_0 conda-forge
ipython 8.27.0 py310hca03da5_0
jedi 0.19.1 py310hca03da5_0
joblib 1.4.2 py310hca03da5_0
jpeg 9e h80987f9_3
jupyter_client 8.6.0 py310hca03da5_0
jupyter_core 5.7.2 py310hca03da5_0
keras 2.12.0 py310hca03da5_0
keras-preprocessing 1.1.2 pyhd3eb1b0_0
kiwisolver 1.4.4 py310h313beb8_0
krb5 1.20.1 h8380606_1
lcms2 2.12 hba8e193_0
lerc 3.0 hc377ac9_0
libbrotlicommon 1.0.9 h80987f9_8
libbrotlidec 1.0.9 h80987f9_8
libbrotlienc 1.0.9 h80987f9_8
libcurl 8.2.1 h0f1d93c_0
libcxx 14.0.6 h848a8c0_0
libdeflate 1.17 h80987f9_1
libedit 3.1.20230828 h80987f9_0
libev 4.33 h1a28f6b_1
libffi 3.4.4 hca03da5_1
libgfortran 5.0.0 11_3_0_hca03da5_28
libgfortran5 11.3.0 h009349e_28
libglib 2.78.4 h0a96307_0
libiconv 1.16 h80987f9_3
libnghttp2 1.52.0 h10c0552_1
libopenblas 0.3.21 h269037a_0
libpng 1.6.39 h80987f9_0
libprotobuf 3.20.3 h514c7bf_0
libsodium 1.0.18 h1a28f6b_0
libsqlite 3.46.0 hfb93653_0 conda-forge
libssh2 1.10.0 h449679c_2
libtiff 4.5.1 h313beb8_0
libwebp-base 1.3.2 h80987f9_0
libxgboost 2.1.1 h313beb8_0
libxml2 2.10.4 h372ba2a_0
libzlib 1.2.13 hfb2fe0b_6 conda-forge
llvm-openmp 14.0.6 hc6e5704_0
lz4-c 1.9.4 h313beb8_1
markdown 3.4.1 py310hca03da5_0
markupsafe 2.1.3 py310h80987f9_0
matplotlib 3.9.2 py310hca03da5_0
matplotlib-base 3.9.2 py310h7ef442a_0
matplotlib-inline 0.1.6 py310hca03da5_0
multidict 6.0.4 py310h80987f9_0
ncurses 6.4 h313beb8_0
nest-asyncio 1.6.0 py310hca03da5_0
numexpr 2.8.7 py310hecc3335_0
numpy 1.23.5 py310hb93e574_0
numpy-base 1.23.5 py310haf87e8b_0
oauthlib 3.2.2 py310hca03da5_0
openjpeg 2.5.2 h54b8e55_0
openssl 1.1.1w h53f4e23_0 conda-forge
opt_einsum 3.3.0 pyhd3eb1b0_1
packaging 24.1 py310hca03da5_0
pandas 2.2.2 py310h313beb8_0
parso 0.8.3 pyhd3eb1b0_0
pcre2 10.42 hb066dcc_1
pexpect 4.8.0 pyhd3eb1b0_3
pillow 10.4.0 py310h80987f9_0
pip 24.2 py310hca03da5_0
pixman 0.40.0 h1a28f6b_0
platformdirs 3.10.0 py310hca03da5_0
prompt-toolkit 3.0.43 py310hca03da5_0
prompt_toolkit 3.0.43 hd3eb1b0_0
protobuf 3.20.3 py310h313beb8_0
psutil 5.9.0 py310h1a28f6b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pubchempy 1.0.4 pyh864c0ab_2 bioconda
pure_eval 0.2.2 pyhd3eb1b0_0
py-xgboost 2.1.1 py310hca03da5_0
pyasn1 0.4.8 pyhd3eb1b0_0
pyasn1-modules 0.2.8 py_0
pybind11-abi 4 hd3eb1b0_1
pycairo 1.23.0 py310hc7d53f0_0
pycparser 2.21 pyhd3eb1b0_0
pygments 2.15.1 py310hca03da5_1
pyjwt 2.8.0 py310hca03da5_0
pyopenssl 23.2.0 py310hca03da5_0
pyparsing 3.1.2 py310hca03da5_0
pysocks 1.7.1 py310hca03da5_0
python 3.10.8 hf452327_0_cpython conda-forge
python-dateutil 2.9.0post0 py310hca03da5_2
python-flatbuffers 24.3.25 py310hca03da5_0
python-tzdata 2023.3 pyhd3eb1b0_0
python_abi 3.10 5_cp310 conda-forge
pytz 2024.1 py310hca03da5_0
pyzmq 25.1.2 py310h313beb8_0
rdkit 2022.09.1 py310hb777eea_0 conda-forge
re2 2022.04.01 hc377ac9_0
readline 8.2 h1a28f6b_0
reportlab 4.2.5 py310h493c2e1_0 conda-forge
requests 2.32.3 py310hca03da5_0
requests-oauthlib 2.0.0 py310hca03da5_0
rlpycairo 0.2.0 pyhd8ed1ab_0 conda-forge
rsa 4.7.2 pyhd3eb1b0_1
scikit-learn 1.5.1 py310h46d7db6_0
scipy 1.13.1 py310hd336fd7_0
seaborn 0.13.2 py310hca03da5_0
setuptools 75.1.0 py310hca03da5_0
six 1.16.0 pyhd3eb1b0_1
snappy 1.2.1 h313beb8_0
sqlalchemy 2.0.34 py310hbe2cdee_0
sqlite 3.45.3 h80987f9_0
stack_data 0.2.0 pyhd3eb1b0_0
tensorboard 2.12.1 py310hca03da5_0
tensorboard-data-server 0.7.0 py310ha6e5c4f_1
tensorboard-plugin-wit 1.8.1 py310hca03da5_0
tensorflow 2.12.0 eigen_py310h205ab9b_0
tensorflow-base 2.12.0 eigen_py310h0a52ebb_0
tensorflow-estimator 2.12.0 py310hca03da5_0
termcolor 2.1.0 py310hca03da5_0
threadpoolctl 3.5.0 py310h33ce5c2_0
tk 8.6.14 h6ba3021_0
tomli 2.0.2 pypi_0 pypi
tornado 6.4.1 py310h80987f9_0
traitlets 5.14.3 py310hca03da5_0
typing-extensions 4.11.0 py310hca03da5_0
typing_extensions 4.11.0 py310hca03da5_0
tzdata 2024b h04d1e81_0
unicodedata2 15.1.0 py310h80987f9_0
urllib3 2.2.3 py310hca03da5_0
wcwidth 0.2.5 pyhd3eb1b0_0
werkzeug 3.0.3 py310hca03da5_0
wheel 0.35.1 pyhd3eb1b0_0
wrapt 1.14.1 py310h1a28f6b_0
xgboost 2.1.1 py310hca03da5_0
xz 5.4.6 h80987f9_1
yarl 1.11.0 py310h80987f9_0
zeromq 4.3.5 h313beb8_0
zlib 1.2.13 hfb2fe0b_6 conda-forge
zstd 1.5.6 hfb09047_0
</code></pre>
<p>Does anyone know what could be causing the segmentation fault?</p>
|
<python><pandas><numpy>
|
2024-10-14 12:57:10
| 0
| 373
|
lizaveta
|
79,086,014
| 12,011,020
|
concat_list with NULL values, or how to fill NULL in pl.List[str]
|
<p>I want to concat three list columns in a <code>pl.LazyFrame</code>. However, the lists often contain NULL values, which results in NULL for <code>pl.concat_list</code>.</p>
<h2>MRE</h2>
<pre class="lang-py prettyprint-override"><code>import polars as pl
# Create the data with some NULLs
data = {
"a": [["apple", "banana"], None, ["cherry"]],
"b": [None, ["dog", "elephant"], ["fish"]],
"c": [["grape"], ["honeydew"], None],
}
# Create a LazyFrame
lazy_df = pl.LazyFrame(data)
list_cols = ["a", "b", "c"]
print(lazy_df.with_columns(pl.concat_list(pl.col(list_cols)).alias("merge")).collect())
</code></pre>
<pre><code>βββββββββββββββββββββββ¬ββββββββββββββββββββββ¬βββββββββββββββ¬ββββββββββββ
β a β b β c β merge β
β --- β --- β --- β --- β
β list[str] β list[str] β list[str] β list[str] β
βββββββββββββββββββββββͺββββββββββββββββββββββͺβββββββββββββββͺββββββββββββ‘
β ["apple", "banana"] β null β ["grape"] β null β
β null β ["dog", "elephant"] β ["honeydew"] β null β
β ["cherry"] β ["fish"] β null β null β
βββββββββββββββββββββββ΄ββββββββββββββββββββββ΄βββββββββββββββ΄ββββββββββββ
</code></pre>
<h2>Question</h2>
<p>How can I concat the lists even when some values are NULL?</p>
<h2>Tried solutions</h2>
<p>I've tried to fill the null values via <code>expr.fill_null("")</code>, <code>expr.fill_null(pl.List(""))</code>, or <code>expr.fill_null(pl.List([]))</code>, but could not get any of them to run through. How do I fill with an empty list instead of NULL in columns of type <code>pl.List[str]</code>? And is there a better way to concat the three list columns?</p>
|
<python><dataframe><list><python-polars>
|
2024-10-14 11:38:03
| 1
| 491
|
SysRIP
|
79,085,836
| 1,254,515
|
how to parallelize data extraction from netcdf to files in python
|
<p>I have data from a netCDF source (<a href="https://cds.climate.copernicus.eu/datasets/reanalysis-era5-pressure-levels?tab=overview" rel="nofollow noreferrer">ECMWF ERA5</a>) which I read using xarray. It has four dimensions (say x, y, z and t) and three variables (say r, h and g).
I have to write it to plain text files in directories defined by three of the dimensions, extracting all data corresponding to these dimension values for all values of the last dimension.<br />
A serial version of this code works just fine. A simplified representation is:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr

def extract_data(data, x, y, z, t):
    out_str = ''
    for t_val in t:
        for z_val in z:
            out_str += str(data.sel(x=x, y=y, t=t_val, z=z_val).data)
    return out_str

def data_get_write(dir, r, h, g, x, y, z, t):
    r_str = extract_data(r, x, y, z, t)
    with open(dir + 'r_file', 'w') as f_r:
        f_r.write(r_str)
    h_str = extract_data(h, x, y, z, t)
    with open(dir + 'h_file', 'w') as f_h:
        f_h.write(h_str)
    g_str = extract_data(g, x, y, z, t)
    with open(dir + 'g_file', 'w') as f_g:
        f_g.write(g_str)
    return(<operation>)

if __name__ == '__main__':
    ds = xr.open_dataset('data.nc')
    r = ds['r']
    h = ds['h']
    g = ds['g']
    x = ds['x']
    y = ds['y']
    z = ds['z']
    t = ds['t']
    dir_list = []
    x_list = []
    y_list = []
    for val_x in x:
        for val_y in y:
            dir = (<operation>)
            dir_list.append(dir)
            x_list.append(val_x)
            y_list.append(val_y)
    list_len = len(dir_list)
    r_list = [r] * list_len
    h_list = [h] * list_len
    g_list = [g] * list_len
    z_list = [z] * list_len
    t_list = [t] * list_len
    results = map(data_get_write, dir_list, r_list, h_list, g_list, x_list, y_list, z_list, t_list)
</code></pre>
<p>I have written it this way with the map function call hoping to parallelize the execution using <code>concurrent.futures.ThreadPoolExecutor</code>, replacing the map call thus:</p>
<pre class="lang-py prettyprint-override"><code> max_workers = min(32, int(arguments["-n"]) + 4)
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
        results = executor.map(data_get_write, dir_list, r_list, h_list, g_list, x_list, y_list, z_list, t_list)
</code></pre>
<p>However, this code actually runs slower than the serialised version.<br />
On my test dataset and machine (a small 600MB sample file on a small server), the serialised version takes roughly 2'40", the parallelised version 6'30". Changing the number of cores (from 1 to 20) changes nothing significant for either version.<br />
I have specifically chosen ThreadPoolExecutor as the dataset is likely to be very large in real-world cases (several GB), and I fear passing it by value to that many subprocesses might cause an OOM error in some cases. My test dataset only has around 20 values for the x and y dimensions and 6 for t. Real-world cases run up to a thousand t values and around a hundred x and y, so around four to five orders of magnitude greater.</p>
<p>Either I am doing something very wrong (entirely possible, this is my first code reading netCDF and my first attempt at parallelization) which is slowing down the parallel execution or concurrent.futures.ThreadPoolExecutor is not the way to go.</p>
<p>I have seen a similar question <a href="https://stackoverflow.com/questions/63719044/approaches-to-parallelize-data-extraction-and-processing-in-python">here</a>, but there is no suggestion other than to use parallel, which is not an option in my case: my code runs on clusters where I am not at liberty to install parallel, and the parallelisation must happen within my script and its libraries, which must be as standard as possible (hence using xarray rather than netCDF4).<br />
I have started reading about asyncio, but it is quite different from concurrent.futures and it looks like I would need to rewrite quite a bit; I'd like to avoid going down another dead end.</p>
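As a side note on the pattern itself: the builtin <code>map</code> is lazy, so in the serial version nothing runs until <code>results</code> is consumed, whereas <code>executor.map</code> submits everything up front and yields results lazily, in input order. A minimal stdlib sketch of the multi-iterable form, with a placeholder work function standing in for the real <code>data_get_write</code>:

```python
from concurrent.futures import ThreadPoolExecutor

def work(a, b):
    return a + b  # placeholder for the real per-directory job

with ThreadPoolExecutor(max_workers=4) as executor:
    # One call of work() per aligned pair of arguments; list() forces
    # consumption so all results are collected before the pool shuts down.
    results = list(executor.map(work, [1, 2, 3], [10, 20, 30]))

print(results)  # [11, 22, 33]
```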
|
<python><parallel-processing><concurrent.futures>
|
2024-10-14 10:45:58
| 1
| 323
|
Oliver Henriot
|
79,085,795
| 15,456,681
|
Batched matrix multiplication with JAX on GPU faster with larger matrices
|
<p>I'm trying to perform batched matrix multiplication with JAX on GPU, and noticed that it is ~3x faster to multiply shapes (1000, 1000, 3, 35) @ (1000, 1000, 35, 1) than it is to multiply (1000, 1000, 3, 25) @ (1000, 1000, 25, 1) with f64 and ~5x with f32.</p>
<p>What explains this difference, considering that on cpu neither JAX or NumPy show this behaviour, and on GPU CuPy doesn't show this behaviour?</p>
<p>I'm running this with JAX: 0.4.32 on an NVIDIA RTX A5000 (and get similar results on a Tesla T4), code to reproduce:</p>
<pre><code>import numpy as np
import cupy as cp
from cupyx.profiler import benchmark
from jax import config
config.update("jax_enable_x64", True)
import jax
import jax.numpy as jnp
import matplotlib.pyplot as plt
rng = np.random.default_rng()
x = np.arange(5, 55, 5)
</code></pre>
<p>GPU timings:</p>
<pre><code>dtype = cp.float64
timings_cp = []
for i in range(5, 55, 5):
a = cp.array(rng.random((1000, 1000, 3, i)), dtype=dtype)
b = cp.array(rng.random((1000, 1000, i, 1)), dtype=dtype)
timings_cp.append(benchmark(lambda a, b: a@b, (a, b), n_repeat=10, n_warmup=10))
dtype = jnp.float64
timings_jax_gpu = []
with jax.default_device(jax.devices('gpu')[0]):
for i in range(5, 55, 5):
a = jnp.array(rng.random((1000, 1000, 3, i)), dtype=dtype)
b = jnp.array(rng.random((1000, 1000, i, 1)), dtype=dtype)
func = jax.jit(lambda a, b: a@b)
timings_jax_gpu.append(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=10, n_warmup=10))
plt.figure()
plt.plot(x, [i.gpu_times.mean() for i in timings_cp], label="CuPy")
plt.plot(x, [i.gpu_times.mean() for i in timings_jax_gpu], label="JAX GPU")
plt.legend()
</code></pre>
<p><a href="https://i.sstatic.net/YF88454x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YF88454x.png" alt="enter image description here" /></a></p>
<p>Timings with those specific shapes:</p>
<pre><code>dtype = jnp.float64
with jax.default_device(jax.devices('gpu')[0]):
a = jnp.array(rng.random((1000, 1000, 3, 25)), dtype=dtype)
b = jnp.array(rng.random((1000, 1000, 25, 1)), dtype=dtype)
func = jax.jit(lambda a, b: a@b)
print(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=1000, n_warmup=10).gpu_times.mean())
a = jnp.array(rng.random((1000, 1000, 3, 35)), dtype=dtype)
b = jnp.array(rng.random((1000, 1000, 35, 1)), dtype=dtype)
print(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=1000, n_warmup=10).gpu_times.mean())
</code></pre>
<p>Gives</p>
<pre><code>f64:
0.01453789699935913
0.004859122595310211
f32:
0.005860503035545349
0.001209742688536644
</code></pre>
<p>CPU timings:</p>
<pre><code>timings_np = []
for i in range(5, 55, 5):
a = rng.random((1000, 1000, 3, i))
b = rng.random((1000, 1000, i, 1))
timings_np.append(benchmark(lambda a, b: a@b, (a, b), n_repeat=10, n_warmup=10))
timings_jax_cpu = []
with jax.default_device(jax.devices('cpu')[0]):
for i in range(5, 55, 5):
a = jnp.array(rng.random((1000, 1000, 3, i)))
b = jnp.array(rng.random((1000, 1000, i, 1)))
func = jax.jit(lambda a, b: a@b)
timings_jax_cpu.append(benchmark(lambda a, b: func(a, b).block_until_ready(), (a, b), n_repeat=10, n_warmup=10))
plt.figure()
plt.plot(x, [i.cpu_times.mean() for i in timings_np], label="NumPy")
plt.plot(x, [i.cpu_times.mean() for i in timings_jax_cpu], label="JAX CPU")
plt.legend()
</code></pre>
<p><a href="https://i.sstatic.net/kE2R0kpb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kE2R0kpb.png" alt="enter image description here" /></a></p>
|
<python><numpy><jax><cupy>
|
2024-10-14 10:34:46
| 1
| 3,592
|
Nin17
|
79,085,768
| 3,960,991
|
How to Access the Attributes in an iFrame in Python
|
<p>I'm learning web scraping for data analysis.</p>
<p>I have successfully retrieved several elements of interest on this page, such as Title, Date, Upvotes, etc.: <a href="https://old.reddit.com/r/JoeRogan/comments/cmxmtc/jre_1330_bernie_sanders/" rel="nofollow noreferrer">https://old.reddit.com/r/JoeRogan/comments/cmxmtc/jre_1330_bernie_sanders/</a>.</p>
<p>I'd like to retrieve the original YouTube title of the video on this page; however, I've been unable to access it. In this case it is <code>title="Joe Rogan Experience #1330 - Bernie Sanders"</code>.</p>
<p><a href="https://i.sstatic.net/7oyyG2Ie.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7oyyG2Ie.png" alt="enter image description here" /></a></p>
<p>I have read and understood that elements within iframes are not directly obtainable; however, the <a href="https://www.selenium.dev/documentation/webdriver/interactions/frames/" rel="nofollow noreferrer">Selenium documentation</a> does not cover this case, at least not sufficiently in Python. In this case the iframe has no <code>name</code> attribute.</p>
<p>When I run <code>driver.switch_to.frame(1)</code> an error is returned, which confuses me because there are certainly iframe tags in the structure of the page. <code>driver.switch_to.frame(0)</code> works, but I assume that's just referring to the primary, default page anyway. I'm not sure how to identify the different iframe windows.</p>
<p>I have right-clicked on the area and extracted the XPath <code>/html/body/iframe</code> although this looks different than the examples I've seen online and it ultimately did not work when I tried <code>driver.find_element(By.XPATH, "/html/body/iframe")</code>. I have also tried searching by TAG_NAME instead of XPATH.</p>
<p>I guess that the problem may be related to the structure of the iframes that I am missing. Any help on how to extract the title attribute from this iframe would be greatly appreciated.</p>
|
<python><html><selenium-webdriver><web-scraping><iframe>
|
2024-10-14 10:28:35
| 1
| 368
|
AndrΓ© Foote
|
79,085,760
| 10,425,150
|
Second .connect with paramiko (ssh)
|
<p>From the main Linux OS (connected, working) I would like to run another <code>.connect</code>.</p>
<p>Here is the code:</p>
<pre><code>import paramiko
hostname = "hostaname"
ip = "1111.111.11.11"
username = "user"
password = "pass"
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(hostname=hostname,username=username, password=password)
</code></pre>
<p>Now from the main host I want to access another host:</p>
<pre><code>hostname = "localhost"
ip = "222.222.2.2"
username = "root"
password = "root"
ssh.connect(hostname=hostname,username=username, password=password)
</code></pre>
<p>I get the following error:</p>
<pre><code>NoValidConnectionsError: [Errno None] Unable to connect to port 22 on 222.222.2.2 or ::1
</code></pre>
<p>UPD: I found my answer in the following question:
<a href="https://stackoverflow.com/questions/35304525/nested-ssh-using-python-paramiko">Nested SSH using Python Paramiko</a></p>
|
<python><ssh><paramiko>
|
2024-10-14 10:26:36
| 1
| 1,051
|
GΠΎΠΎd_MΠ°n
|
79,085,604
| 12,011,020
|
Using is_in with a Polars LazyFrame causing TypeError
|
<p>I am getting the following TypeError</p>
<pre><code>Traceback (most recent call last):
File "/my/path/my_project/src/my_project/exploration/mre_lazyframe_error.py", line 39, in <module>
current.with_columns(pl.col("foo_bar").is_in(reference["foo_bar"]))
File "/my/path/.cache/pypoetry/virtualenvs/my_project-p95GORRi-py3.10/lib/python3.10/site-packages/polars/lazyframe/frame.py", line 619, in __getitem__
raise TypeError(msg)
TypeError: 'LazyFrame' object is not subscriptable (aside from slicing)
</code></pre>
<h2>MRE</h2>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import polars as pl
num_rows = 10000
ids = np.arange(num_rows)
foo_bar = np.random.randint(1, 101, num_rows)
current = pl.LazyFrame(
{
"id": ids,
"foo_bar": foo_bar,
}
)
reference = pl.LazyFrame(
{
"id": ids,
"foo_bar": np.random.randint(
1, 101, num_rows
), # different random numbers for 'reference'
}
)
current.with_columns(
pl.col("foo_bar").is_in(reference["foo_bar"]).name.suffix("_avail"),
)
current.with_columns(pl.col("foo_bar").is_in(reference["foo_bar"]))
</code></pre>
<p>As far as I understand it, this should be possible (<a href="https://docs.pola.rs/api/python/stable/reference/expressions/api/polars.Expr.is_in.html#polars.Expr.is_in" rel="nofollow noreferrer">is_in docs</a>).
When I do it eagerly it runs through without any error.
However, I would prefer to compute everything lazily.
Is there any way to make this work?</p>
|
<python><dataframe><python-polars>
|
2024-10-14 09:41:01
| 1
| 491
|
SysRIP
|
79,085,207
| 3,303,266
|
oci: How to use REST sdk call to create instance with GPU shape setting gpus
|
<p>I can run the below command in the CLI:</p>
<pre><code>oci compute instance launch -c ocid1.compartment.AK00684129.broom15.xri7sc0a2upguqn5cn8ajnwi9664jtbc9cd7vib6x974ghk1dfg757hxpme5 --availability-domain AD-1 --shape VM.C3.GPU.L40S.Flex --shape-config '{"gpus": 1,"ocpus":4,"memoryInGBs":48}' --source-details '{"bootVolumeSizeInGBs": 100, "image_id": "ocid1.image.AK00684129.broom15.5sprgqr09pc5dtjb4p959h12clg5ebeq1tlh37jrpfshzvtdy054us8c0623", "source_type": "image"}' --subnet-id ocid1.subnet.AK00684129.broom15.c67zxe3fwglsldmv3xfc9s1jqc4zkbmrv5bfmd8nwqvdezyrfrq25ev5fbg4 --assign-public-ip True --ssh-authorized-keys-file /nfs/shared_storage/ossetest/usr/src/tests/pca_3x/CE/oci_keys/id_ecdsa.pub --display-name gpu-inst-test
</code></pre>
<p>But the SDK does not contain the <code>gpus</code> attribute in <code>LaunchInstanceShapeConfigDetails</code>:</p>
<p><a href="https://docs.oracle.com/en-us/iaas/tools/python/2.135.2/api/core/models/oci.core.models.LaunchInstanceShapeConfigDetails.html" rel="nofollow noreferrer">https://docs.oracle.com/en-us/iaas/tools/python/2.135.2/api/core/models/oci.core.models.LaunchInstanceShapeConfigDetails.html</a></p>
<p>So I am unsure how to set it via REST. Can you advise?</p>
|
<python><oracle-cloud-infrastructure>
|
2024-10-14 07:52:26
| 1
| 371
|
user3303266
|
79,085,124
| 5,134,817
|
Extract a class from a static method
|
<p>Given a function which is a <code>staticmethod</code> of a class, is there a way to extract the parent class object from this?</p>
<pre class="lang-py prettyprint-override"><code>class A:
@staticmethod
def b(): ...
...
f = A.b
...
assert get_parent_object_from(f) is A
</code></pre>
<p>I can see this buried in the <code>__qualname__</code>, but can't figure out how to extract this.</p>
<p>The function <code>get_parent_object_from</code> should have no knowledge of the parent class <code>A</code>.</p>
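A hedged sketch of one approach (an illustration, not a general solution — it breaks for classes defined inside functions, where <code>__qualname__</code> contains a <code>&lt;locals&gt;</code> segment): walk <code>__qualname__</code>, minus the function's own name, starting from the globals the function was defined in.

```python
class A:
    @staticmethod
    def b(): ...

def get_parent_object_from(func):
    """Resolve the defining class by walking __qualname__ (without the
    function's own name) through the globals the function was defined in."""
    parts = func.__qualname__.split(".")[:-1]
    obj = func.__globals__[parts[0]]  # outermost name, e.g. 'A'
    for name in parts[1:]:            # descend through nested classes
        obj = getattr(obj, name)
    return obj

f = A.b
assert get_parent_object_from(f) is A
```

Using <code>func.__globals__</code> rather than <code>sys.modules[func.__module__]</code> avoids depending on how the module is registered, but the <code>&lt;locals&gt;</code> caveat still applies.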
|
<python><static-methods>
|
2024-10-14 07:30:03
| 1
| 1,987
|
oliversm
|
79,085,070
| 7,227,146
|
Interrupting the kernel 'Python 3.12.7' timed out. Do you want to restart the kernel instead? All variables will be lost
|
<p>This might be a nooby question so bear with me.</p>
<p>Sometimes when I execute Python Jupyter Notebook cells on VSCode, they take too long and I want to terminate the execution so I click on the top-left corner of the cell, where there's a square and you hover over it and it says "stop cell execution".</p>
<p>However instead of interrupting the kernel I get a pop-up: <code>Interrupting the kernel 'Python 3.12.7' timed out. Do you want to restart the kernel instead? All variables will be lost.</code> This bugs me endlessly because if I click on <code>Cancel</code> the execution doesn't stop and if I click on <code>Restart</code>, well, I lose all variables and have to start again.</p>
<p>I can reproduce this behaviour just with this cell:</p>
<pre><code>import time
time.sleep(20)
</code></pre>
<p>I'm using Python 3.12.7 (both the system installation and a virtual environment, the same thing happens) and Windows 11.</p>
<p>How can I stop the cell execution without losing all variables? I want to <strong>interrupt</strong> and NOT <strong>restart</strong> the kernel.</p>
|
<python><jupyter-notebook>
|
2024-10-14 07:15:29
| 0
| 679
|
zest16
|
79,084,987
| 1,496,362
|
Blockchain API only gives me data for every 4 days
|
<p>Trying to download daily bitcoin miner fees using the blockchain.com API. However, even though the graph on the website (<a href="https://www.blockchain.com/explorer/charts/transaction-fees" rel="nofollow noreferrer">https://www.blockchain.com/explorer/charts/transaction-fees</a>) shows daily data, the API only gives data for every 4 days.</p>
<pre><code>import requests
import pandas as pd
url = "https://api.blockchain.info/charts/transaction-fees?timespan=all&format=json"
response = requests.get(url)
data = response.json()
df = pd.DataFrame(data['values'])
# Convert the timestamp to a readable format
df['x'] = pd.to_datetime(df['x'], unit='s') # 'x' is the Unix timestamp
df.set_index('x', inplace=True) # Set the datetime as the index
df.rename(columns={'y': 'fees'}, inplace=True)  # Rename 'y' to 'fees'
print(df.head())
</code></pre>
<p>I tried adapting the code to a rolling 4-day window and interpolating, but this introduces quite a large error: I don't get the correct data for each day, but estimate it based on the surrounding days:</p>
<pre><code>import requests
import pandas as pd
url = "https://api.blockchain.info/charts/transaction-fees?timespan=all&rollingAverage=4days&format=json"
response = requests.get(url)
data = response.json()
df = pd.DataFrame(data['values'])
# Convert the timestamp to a readable format
df['x'] = pd.to_datetime(df['x'], unit='s')
df.set_index('x', inplace=True)
df.rename(columns={'y': 'fees'}, inplace=True)  # Rename 'y' to 'fees'
df_daily = df.resample('D').interpolate(method='linear')
print(df_daily.head())
</code></pre>
<p>-- Update</p>
<p>I assume this is an API limitation, as the raw data also skips 3 days each time: <a href="https://api.blockchain.info/charts/transaction-fees" rel="nofollow noreferrer">https://api.blockchain.info/charts/transaction-fees</a></p>
<p>-- Update 2
I have added <code>sampled=false</code> to the API request, and now I get data every 15 minutes, which is too much. I am just looking for daily data, but the API documentation is not very good:</p>
<p><a href="https://www.blockchain.com/explorer/api/charts_api" rel="nofollow noreferrer">https://www.blockchain.com/explorer/api/charts_api</a></p>
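With <code>sampled=false</code> returning roughly 15-minute points, one hedged option is to aggregate to daily resolution yourself (a sketch on synthetic data; whether <code>sum</code> or <code>mean</code> is right depends on whether the series is a per-interval total or a rate):

```python
import pandas as pd

# Synthetic 15-minute series standing in for the unsampled API response
idx = pd.date_range("2024-01-01", periods=192, freq="15min")  # exactly 2 days
df = pd.DataFrame({"fees": [1.0] * len(idx)}, index=idx)

daily = df.resample("D").sum()  # or .mean(), depending on the metric
print(daily["fees"].tolist())  # [96.0, 96.0]
```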
|
<python><blockchain>
|
2024-10-14 06:45:02
| 1
| 5,417
|
dorien
|