| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,215,638
| 591,064
|
Environment variable PATH_INFO not defined by lighttpd using uwsgi module
|
<p>I have lighttpd version 1.4.73 running on Alpine Linux. I enabled <code>mod_scgi</code> and added this to my configuration file:</p>
<pre><code>scgi.protocol = "uwsgi"
scgi.server = (
"/tp2/" => ( "tp2-app" => (
"host" => "127.0.0.1",
"port" => 8080,
"check-local" => "disable",
"fix-root-scriptname" => "enable",
"fix_root_path_name" => "enable"
)
),
)
</code></pre>
<p>My <a href="https://github.com/bottlepy/bottle/blob/master/bottle.py#L966" rel="nofollow noreferrer">Bottle.py crashes because it can't find <code>PATH_INFO</code></a>. I reduced the problem to this minuscule pure-Python WSGI application that displays the request's environment variables:</p>
<pre><code>def application(environ, start_response):
start_response('200 OK', [('Content-Type', 'text/plain;charset=utf-8')])
page = [f'@Hello World!\n'.encode("utf-8")]
for key, value in environ.items():
page.append(f"{key} = {value}\n".encode("utf-8"))
return sorted(page)
</code></pre>
<p>I run this app with:</p>
<pre><code>uwsgi --uwsgi-socket 127.0.0.1:8080 --plugin python3 -w hello_world_app
</code></pre>
<p>And indeed, <code>PATH_INFO</code> is not set by lighttpd:</p>
<pre><code>HTTP/1.1 200 OK
Content-Type: text/plain;charset=utf-8
Accept-Ranges: bytes
Content-Length: 943
Date: Sun, 24 Mar 2024 18:22:09 GMT
Server: lighttpd/1.4.73
@Hello World!
CONTENT_LENGTH = 0
DOCUMENT_ROOT = /var/www/localhost/htdocs
GATEWAY_INTERFACE = CGI/1.1
HTTP_ACCEPT = */*
HTTP_HOST = tp2.test
HTTP_USER_AGENT = curl/8.5.0
QUERY_STRING = date=2024-03-24T14:22:10-04:00
REDIRECT_STATUS = 200
REMOTE_ADDR = 192.168.3.10
REMOTE_PORT = 35690
REQUEST_METHOD = GET
REQUEST_SCHEME = http
REQUEST_URI = /tp2/hello?date=2024-03-24T14:22:10-04:00
SCRIPT_FILENAME = /var/www/localhost/htdocs/tp2/hello
SCRIPT_NAME = /tp2/hello
SERVER_ADDR = 192.168.3.30
SERVER_NAME = tp2.test
SERVER_PORT = 80
SERVER_PROTOCOL = HTTP/1.1
SERVER_SOFTWARE = lighttpd/1.4.73
uwsgi.node = b'tp2.test'
uwsgi.version = b'2.0.23'
</code></pre>
<p>How can I forward everything under the path <code>/tp2/</code> to my application and have <code>PATH_INFO</code> available for Bottle?</p>
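<p>As a workaround sketch (not a lighttpd configuration fix): a small WSGI middleware can split <code>SCRIPT_NAME</code> into <code>SCRIPT_NAME</code> + <code>PATH_INFO</code> when the server has passed the whole request path as <code>SCRIPT_NAME</code>. The <code>/tp2</code> mount prefix below is taken from the question's config; adapt as needed:</p>

```python
def fix_path_info(app, mount='/tp2'):
    """Derive PATH_INFO from SCRIPT_NAME when the server did not split them."""
    def wrapper(environ, start_response):
        script = environ.get('SCRIPT_NAME', '')
        if not environ.get('PATH_INFO') and script.startswith(mount):
            environ['SCRIPT_NAME'] = mount
            environ['PATH_INFO'] = script[len(mount):] or '/'
        return app(environ, start_response)
    return wrapper

# wrap the existing app:
# application = fix_path_info(application)
```

<p>With this in place, Bottle sees <code>PATH_INFO = /hello</code> for a request to <code>/tp2/hello</code>, regardless of how lighttpd populated the SCGI variables.</p>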
|
<python><uwsgi><bottle><lighttpd>
|
2024-03-24 18:27:12
| 1
| 10,249
|
ixe013
|
78,215,448
| 2,221,360
|
Assigning values to numpy array based on multiple conditions of multiple array
|
<p>I have an ocean and atmospheric dataset in a netCDF file. The ocean data will contain <code>nan</code> or some other sentinel value such as <code>-999</code> over land areas. For this example, say it is <code>nan</code>. The sample data looks like this:</p>
<pre><code>import numpy as np
ocean = np.array([[2, 4, 5], [6, np.nan, 2], [9, 3, np.nan]])
atmos = np.array([[4, 2, 5], [6, 7, 3], [8, 3, 2]])
</code></pre>
<p>Now I want to apply multiple conditions on the ocean and atmos data to make a new array which will have only values from <code>1</code> to <code>8</code>. For example, in the ocean data, values between <code>2</code> and <code>4</code> will be assigned <code>1</code> and values between <code>4</code> and <code>6</code> will be assigned <code>2</code>. The same kind of binning applies to the atmos dataset as well.</p>
<p>To simplify the comparison and assignment operation, I made a list of bin values and used <code>np.digitize</code> to make categories.</p>
<pre><code>bin1 = [2, 4, 6]
bin2 = [4, 6, 8]
ocean_cat = np.digitize(ocean, bin1)
atmos_cat = np.digitize(atmos, bin2)
</code></pre>
<p>which produces the following result:-</p>
<pre><code>[[1 2 2]
[3 3 1]
[3 1 3]]
[[1 0 1]
[2 2 0]
[3 0 0]]
</code></pre>
<p>Now I wanted element-wise maximum between the above two array results. Therefore, I used <code>np.fmax</code> to get the element-wise maximum.</p>
<pre><code>final_cat = np.fmax(ocean_cat, atmos_cat)
print(final_cat)
</code></pre>
<p>which produces the below result:-</p>
<pre><code>[[1 2 2]
[3 3 1]
[3 1 3]]
</code></pre>
<p>The above result is almost what I need. The only issue I find here is the missing <code>nan</code> value. What I wanted in the final result is:-</p>
<pre><code>[[1 2 2]
[3 nan 1]
[3 1 nan]]
</code></pre>
<p>Can someone help me to replace the values with nan from the same index of original ocean array?</p>
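<p>A minimal sketch of one way to do this: cast the combined categories to float (integers cannot hold NaN) and mask them wherever the original <code>ocean</code> array was NaN, using the bins from the question:</p>

```python
import numpy as np

ocean = np.array([[2, 4, 5], [6, np.nan, 2], [9, 3, np.nan]])
atmos = np.array([[4, 2, 5], [6, 7, 3], [8, 3, 2]])

ocean_cat = np.digitize(ocean, [2, 4, 6])
atmos_cat = np.digitize(atmos, [4, 6, 8])

# element-wise maximum; cast to float so the array can hold NaN
final_cat = np.fmax(ocean_cat, atmos_cat).astype(float)
final_cat[np.isnan(ocean)] = np.nan
print(final_cat)
```

<p>The boolean mask <code>np.isnan(ocean)</code> restores the NaNs at exactly the indices where the original ocean array had them.</p>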
|
<python><numpy><netcdf>
|
2024-03-24 17:20:27
| 2
| 3,910
|
sundar_ima
|
78,215,423
| 2,016,632
|
Why does gcloud build insist on making new PIP packages each time?
|
<p>I have <code>Google Cloud SDK 469.0.0, alpha 2024.03.15, beta 2024.03.15, bq 2.1.1, core 2024.03.15, gsutil 5.27</code></p>
<p>I read on multiple Stackoverflow posts that if I structure my Dockerfile as</p>
<pre><code>FROM python:3.12
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY requirements.txt ./
RUN pip install -r ./requirements.txt
COPY . ./
[CMD]
</code></pre>
<p>then Docker will cache the pip installation layer and not repeat it each time I change the source code.</p>
<p>This doesn't seem to work with <code>gcloud builds submit --tag gcr.io/...</code></p>
<p>Is there some setting that I should be applying?</p>
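<p>Worth noting: Cloud Build runs every build on a fresh worker, so there is no local Docker layer cache to reuse, regardless of how the Dockerfile is structured; <code>gcloud builds submit --tag ...</code> therefore rebuilds every layer. A common workaround (sketch only; image names are placeholders) is a <code>cloudbuild.yaml</code> that pulls the previously pushed image and builds with <code>--cache-from</code>, or alternatively enabling Kaniko caching with <code>gcloud config set builds/use_kaniko True</code>:</p>

```yaml
# cloudbuild.yaml -- sketch; 'myapp' is a placeholder image name
steps:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker pull gcr.io/$PROJECT_ID/myapp:latest || exit 0']
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/myapp:latest',
         '--cache-from', 'gcr.io/$PROJECT_ID/myapp:latest', '.']
images: ['gcr.io/$PROJECT_ID/myapp:latest']
```

<p>Run it with <code>gcloud builds submit --config cloudbuild.yaml</code>; unchanged layers up to <code>COPY . ./</code> should then come from the pulled image's cache.</p>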
|
<python><docker><google-cloud-platform>
|
2024-03-24 17:15:25
| 1
| 619
|
Tunneller
|
78,215,364
| 344,286
|
How can I convert a Python multithreaded socket client/server to asyncio?
|
<p>I'm writing a tool to test an asyncio based server end-to-end. Initially I was going to spin up the server in one terminal window and run the test in a separate window, but then I realized it should be possible to do it all in one script. After all, I can do it with a <code>concurrent.futures.ThreadPoolExecutor</code>, but I'm struggling to convert the logic to <code>await</code>/<code>async def</code>.</p>
<p>Here's a working example using the TPE:</p>
<pre><code>import argparse
import socket
import concurrent.futures
import threading
import socketserver
class TCPHandler(socketserver.BaseRequestHandler):
def handle(self):
print(f'Got data: {self.request.recv(1024).strip().decode()}')
def started_server(*, server):
print('starting server')
server.serve_forever()
print('server thread closing')
def run_client(*, host, port, server):
print('client started, attempting connection')
with socket.create_connection((host, port), timeout=0.5) as conn:
print('connected')
conn.send(b'hello werld')
print('closing server')
server.shutdown()
print('cancelled')
def test_the_server(*, host, port):
ex = concurrent.futures.ThreadPoolExecutor(max_workers=3)
print('server a')
quitter = threading.Event()
server = socketserver.TCPServer((host, port), TCPHandler)
a = ex.submit(started_server, server=server)
b = ex.submit(run_client, host=host, port=port, server=server)
print(a.result(), b.result())
print('server b')
def do_it(): # Shia LeBeouf!
parser = argparse.ArgumentParser(usage=__doc__)
parser.add_argument("--host", default="127.0.0.1")
parser.add_argument("-p", "--port", type=int, default=60025)
args = parser.parse_args()
exit(test_the_server(host=args.host, port=args.port))
if __name__ == "__main__":
do_it()
</code></pre>
<p>How would I convert this to use an asyncio loop? I'm pretty sure that I need to spawn the server asyncio loop in a thread, but so far it's just turned out blocking, and other questions on SO have failed to provide a solution (or have been outdated).</p>
<p>Here's an example of something that fails for me:</p>
<pre><code>import asyncio
import argparse
import socket
import concurrent.futures
import threading
import socketserver
class EchoHandler(asyncio.Protocol):
def data_received(self, data):
print(f"Got this data: {data.decode()}")
async def run_server(*, server):
print('starting server')
server = await server
async with server:
print('start serving')
await server.start_serving()
print('waiting on close')
await server.wait_closed()
print('server coro closing')
def started_server(*, server):
print('server thread started')
asyncio.run(run_server(server=server))
print('server thread finished')
def run_client(*, host, port, server):
print('client started, attempting connection')
with socket.create_connection((host, port), timeout=0.5) as conn:
print('connected')
conn.send(b'hello werld')
print('closing server')
server.close()
print('cancelled')
async def fnord(reader, writer):
data = await reader.read(100)
message = data.decode()
print('got', message)
def test_the_server(*, host, port):
ex = concurrent.futures.ThreadPoolExecutor(max_workers=3)
print('server a')
quitter = threading.Event()
#server = socketserver.TCPServer((host, port), TCPHandler)
server = asyncio.start_server(fnord, host, port)
a = ex.submit(started_server, server=server)
b = ex.submit(run_client, host=host, port=port, server=server)
print(a.result(), b.result())
print('server b')
def do_it(): # Shia LeBeouf!
parser = argparse.ArgumentParser(usage=__doc__)
parser.add_argument("--host", default="127.0.0.1")
parser.add_argument("-p", "--port", type=int, default=60025)
args = parser.parse_args()
exit(test_the_server(host=args.host, port=args.port))
if __name__ == "__main__":
do_it()
</code></pre>
<p>Based on https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.Server.wait_closed, I was hoping that calling <code>server.close()</code> from the other thread would shut down the server, but that doesn't appear to be the case. <code>serve_forever</code> behaves the same as the <code>start_serving</code> approach.</p>
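<p>One way to sidestep the thread-plus-loop mismatch entirely is to run both sides in a single event loop: <code>asyncio.start_server</code> gives a server you can manage with <code>async with</code>, and the client can be an <code>asyncio.open_connection</code> in the same coroutine. A minimal sketch (not the asker's exact handler; port 0 asks the OS for a free port):</p>

```python
import asyncio

received = []

async def handle(reader, writer):
    data = await reader.read(100)
    received.append(data.decode())
    writer.close()
    await writer.wait_closed()

async def main(host='127.0.0.1', port=0):
    server = await asyncio.start_server(handle, host, port)
    port = server.sockets[0].getsockname()[1]  # the actual port chosen
    async with server:
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(b'hello werld')
        await writer.drain()
        writer.close()
        await writer.wait_closed()
        await asyncio.sleep(0.1)  # give the handler a moment to run
    # leaving the `async with` block closes the server cleanly

asyncio.run(main())
print(received)
```

<p>Because client and server share one loop, there is no cross-thread <code>close()</code> to coordinate; the server shuts down when the <code>async with</code> block exits.</p>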
|
<python><multithreading><async-await><python-asyncio>
|
2024-03-24 17:00:33
| 4
| 52,263
|
Wayne Werner
|
78,215,347
| 7,644,562
|
Load EMNIST dataset from within the Pytorch
|
<p>I'm working on <strong>EMNIST</strong> dataset and want to load it from PyTorch, but it returns a strange error as:</p>
<blockquote>
<p>RuntimeError: File not found or corrupted.</p>
</blockquote>
<p>Here's how I tried to load the dataset:</p>
<pre><code>trainset = torchvision.datasets.EMNIST(root="emnist",
split="letters",
train=True,
download=True,
transform=transforms.ToTensor())
</code></pre>
<p>What might be wrong?</p>
|
<python><machine-learning><deep-learning><pytorch><mnist>
|
2024-03-24 16:53:53
| 2
| 5,704
|
Abdul Rehman
|
78,215,243
| 12,193,952
|
Python app keeps OOM crashing on Pandas merge
|
<p>I have a light Python app which should perform a very simple task, but it keeps crashing due to OOM.</p>
<h2>What the app should do</h2>
<ol>
<li>Load data from <code>.parquet</code> into a dataframe</li>
<li>Calculate an indicator using the <code>stockstats</code> package</li>
<li>Merge the freshly calculated data into the original dataframe to have both OHLC + SUPERTREND inside one dataframe -> <strong>here it crashes</strong></li>
<li>Store the dataframe as <code>.parquet</code></li>
</ol>
<h3>Where it crashes</h3>
<pre class="lang-py prettyprint-override"><code>df = pd.merge(df, st, on=['datetime'])
</code></pre>
<h3>Using</h3>
<ul>
<li>Python <code>3.10</code></li>
<li><code>pandas~=2.1.4</code></li>
<li><code>stockstats~=0.4.1</code></li>
<li>Kubernetes <code>1.28.2-do.0</code> (running in Digital Ocean)</li>
</ul>
<p>Here is the strange thing: the dataframe is very small (<code>df.size</code> is <code>208446</code>, file size is <code>1.00337 MB</code>, mem usage is <code>1.85537 MB</code>).</p>
<p>Measured</p>
<pre class="lang-py prettyprint-override"><code>import os
file_stats = os.stat(filename)
file_size = file_stats.st_size / (1024 * 1024) # 1.00337 MB
df_mem_usage = dataframe.memory_usage(deep=True)
df_mem_usage_print = round(df_mem_usage.sum() / (1024 * 1024), 6)  # 1.85537 MB
df_size = dataframe.size # 208446
</code></pre>
<h3>Deployment info</h3>
<p>App is deployed into Kubernetes using Helm with following resources set</p>
<pre class="lang-yaml prettyprint-override"><code>resources:
limits:
cpu: 1000m
memory: 6000Mi
requests:
cpu: 1000m
memory: 1000Mi
</code></pre>
<p><s>I am using nodes with 4vCPU + 8 GB memory and the node is not under performance pressure.</s> I have created a dedicated node pool with <strong>8 vCPU + 16 GB</strong> nodes, but the same issue occurs.</p>
<pre><code>kubectl top node test-pool
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
test-pool-j8t3y 38m 0% 2377Mi 17%
</code></pre>
<p>Pod info</p>
<pre><code>kubectl describe pod xxx
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Sun, 24 Mar 2024 16:08:56 +0000
Finished: Sun, 24 Mar 2024 16:09:06 +0000
...
</code></pre>
<p>Here is the CPU and memory consumption from Grafana. I am aware that very short memory or CPU spikes will be hard to see, but from a long-term perspective the app does not consume a lot of RAM. On the other hand, in my experience we use the same <code>pandas</code> operations on containers with less RAM and much bigger dataframes with no problems.</p>
<p><a href="https://i.sstatic.net/QerkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QerkZ.png" alt="Grafana stats" /></a></p>
<p>How should I fix this?
What else should I debug in order to prevent OOM?</p>
<h3>Data and code example</h3>
<p>Original dataframe (named <code>df</code>)</p>
<pre class="lang-py prettyprint-override"><code> datetime open high low close volume
0 2023-11-14 11:15:00 2.185 2.187 2.171 2.187 19897.847314
1 2023-11-14 11:20:00 2.186 2.191 2.183 2.184 8884.634728
2 2023-11-14 11:25:00 2.184 2.185 2.171 2.176 12106.153954
3 2023-11-14 11:30:00 2.176 2.176 2.158 2.171 22904.354082
4 2023-11-14 11:35:00 2.171 2.173 2.167 2.171 1691.211455
</code></pre>
<p>New dataframe (named <code>st</code>). <br>
Note: If <code>trend_orientation = 1</code> => <code>st_lower = NaN</code>, if <code>-1 => st_upper = NaN</code></p>
<pre class="lang-py prettyprint-override"><code> datetime supertrend_ub supertrend_lb trend_orientation st_trend_segment
0 2023-11-14 11:15:00 0.21495 NaN -1 1
1 2023-11-14 11:20:00 0.21495 NaN -10 1
2 2023-11-14 11:25:00 0.21495 NaN -11 1
3 2023-11-14 11:30:00 0.21495 NaN -12 1
4 2023-11-14 11:35:00 0.21495 NaN -13 1
</code></pre>
<p>Code example</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import multiprocessing
import numpy as np
import stockstats
def add_supertrend(market):
try:
# Read data from file
df = pd.read_parquet(market, engine="fastparquet")
# Extract date columns
date_column = df['datetime']
# Convert to stockstats object
st_a = stockstats.wrap(df.copy())
# Generate supertrend
st_a = st_a[['supertrend', 'supertrend_ub', 'supertrend_lb']]
# Add back datetime columns
st_a.insert(0, "datetime", date_column)
# Add trend orientation using conditional columns
conditions = [
st_a['supertrend_ub'] == st_a['supertrend'],
st_a['supertrend_lb'] == st_a['supertrend']
]
values = [-1, 1]
st_a['trend_orientation'] = np.select(conditions, values)
# Remove not required supertrend values
st_a.loc[st_a['trend_orientation'] < 0, 'st_lower'] = np.NaN
st_a.loc[st_a['trend_orientation'] > 0, 'st_upper'] = np.NaN
# Unwrap back to dataframe
st = stockstats.unwrap(st_a)
# Ensure correct date types are used
st = st.astype({
'supertrend': 'float32',
'supertrend_ub': 'float32',
'supertrend_lb': 'float32',
'trend_orientation': 'int8'
})
# Add trend segments
st_to = st[['trend_orientation']]
st['st_trend_segment'] = st_to.ne(st_to.shift()).cumsum()
# Remove trend value
st.drop(columns=['supertrend'], inplace=True)
# Merge ST with DF
df = pd.merge(df, st, on=['datetime'])
# Write back to parquet
df.to_parquet(market, compression=None)
except Exception as e:
# Using proper logger in real code
print(e)
pass
def main():
# Using fixed market as example, in real code market is fetched
market = "BTCUSDT"
# Using multiprocessing to free up memory after each iteration
p = multiprocessing.Process(target=add_supertrend, args=(market,))
p.start()
p.join()
if __name__ == "__main__":
main()
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.10
ENV PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1 \
PYTHONPATH=.
# Adding vim
RUN ["apt-get", "update"]
# Get dependencies
COPY requirements.txt .
RUN pip3 install -r requirements.txt
# Copy main app
ADD . .
CMD main.py
</code></pre>
<hr />
<p>Possible solutions / tried approaches</p>
<ul>
<li>❌: tried; did not work</li>
<li>💡: an idea I am going to test</li>
<li>😐: did not completely solve the problem, but helped towards the solution</li>
<li>✅: working solution</li>
</ul>
<h3>Lukasz Tracewski's suggestion</h3>
<blockquote>
<p>Use <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">Node-pressure Eviction</a> in order to test whether pod even can allocate enough memory on nodes</p>
</blockquote>
<p>I have done:</p>
<ul>
<li>created new node pool: <code>8vCPU + 16 GB RAM</code></li>
<li>ensured that only my pod (and some system ones) will be deployed on this node (using tolerations and affinity)</li>
<li>run a stress test with no OOM or other errors</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>...
image: "polinux/stress"
command: ["stress"]
args: ["--vm", "1", "--vm-bytes", "5G", "--vm-hang", "1"]
...
</code></pre>
<pre class="lang-yaml prettyprint-override"><code>kubectl top node test-pool-j8t3y
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
test-pool-j8t3y 694m 8% 7557Mi 54%
</code></pre>
<p>Node description</p>
<pre><code> Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system cilium-24qxl 300m (3%) 0 (0%) 300Mi (2%) 0 (0%) 43m
kube-system cpc-bridge-proxy-csvvg 100m (1%) 0 (0%) 75Mi (0%) 0 (0%) 43m
kube-system csi-do-node-tzbbh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system disable-systemd-upgrade-timer-mqjsk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system do-node-agent-dv2z2 102m (1%) 0 (0%) 80Mi (0%) 300Mi (2%) 43m
kube-system konnectivity-agent-wq5p2 0 (0%) 0 (0%) 0 (0%) 0 (0%) 43m
kube-system kube-proxy-gvfrv 0 (0%) 0 (0%) 125Mi (0%) 0 (0%) 43m
scanners data-gw-enrich-d5cff4c95-bkjkc 100m (1%) 1 (12%) 1000Mi (7%) 6000Mi (43%) 2m33s
</code></pre>
<p>The pod did not crash due to OOM. So it is very likely that the issue is somewhere inside the code.</p>
<h3>Detailed memory monitoring</h3>
<p>I have inserted memory measurement into multiple points. I am measuring both dataframe size and memory usage using <code>psutil</code>.</p>
<pre class="lang-py prettyprint-override"><code>import psutil
total = round(psutil.virtual_memory().total / 1000 / 1000, 4)
used = round(psutil.virtual_memory().used / 1000 / 1000, 4)
pct = round(used / total * 100, 1)
logger.info(f"[Current memory usage is: {used} / {total} MB ({pct} %)]")
</code></pre>
<p>Memory usage</p>
<ul>
<li>prior read data from file
<ul>
<li>RAM: <code>938.1929 MB</code></li>
</ul>
</li>
<li>after df loaded
<ul>
<li>df_mem_usage: <code>1.947708 MB</code></li>
<li>RAM: <code>954.1181 MB</code></li>
</ul>
</li>
<li>after ST generated
<ul>
<li>df_mem_usage of ST df: <code>1.147757 MB</code></li>
<li>RAM: <code>944.9226 MB</code></li>
</ul>
</li>
<li>line before df merge
<ul>
<li>df_mem_usage: <code>945.4223 MB</code></li>
</ul>
</li>
</ul>
<h3>❌ Not using <code>multiprocessing</code></h3>
<p>In order to "reset" memory every iteration, I am using <code>multiprocessing</code>. However I wanted to be sure that this does not cause troubles. I have removed it and called the <code>add_supertrend</code> directly. But it ended up in OOM, so I do not think this is the problem.</p>
<h3>Real data</h3>
<p>As suggested by Lukasz Tracewski, I am sharing real data which are causing the OOM crash. Since they are in <code>parquet</code> format, I cannot use services like pastebin and I am using GDrive instead. I will use this folder to share any other stuff related to this question/issue.</p>
<ul>
<li><a href="https://drive.google.com/drive/folders/1nCbG4SUoniZGhwULEtnvs4OtUmXP6BXG?usp=sharing" rel="nofollow noreferrer">GDrive folder</a></li>
</ul>
<h3>❌ Upgrade pandas to <code>2.2.1</code></h3>
<p>Sometimes a plain package upgrade might help, so I decided to try upgrading pandas to <code>2.2.1</code> and also <code>fastparquet</code> to <code>2024.2.0</code> (<em>newer pandas requires newer fastparquet</em>). <code>pyarrow</code> was also updated to <code>15.0.0</code>.</p>
<p>It seemed to work during the first few iterations, but then crashed with OOM again.</p>
<h3>❌ Using Dask</h3>
<p>I remembered that I used Dask in the past when I had to solve complex dataframe operations, so I tried it in this case as well. Without success: OOM again. Using <code>dask</code> <code>2024.3.1</code>.</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
# mem usage 986.452 MB
ddf1 = dd.from_pandas(df)
# mem usage 1015.37 MB
ddf2 = dd.from_pandas(st)
# mem usage 1019.50 MB
df_dask = dd.merge(ddf1, ddf2, on='datetime')
# mem usage 1021.56 MB
df = df_dask.compute() <- here it crashes ¯\_(ツ)_/¯
</code></pre>
<h3>💡 Duplicated datetimes</h3>
<p>While investigating the data with Dask, I noticed that there are duplicate records in the <code>datetime</code> column. This is definitely wrong; datetime has to be unique. I think this might be causing the issue and will investigate further.</p>
<pre class="lang-py prettyprint-override"><code>df.tail(10)
datetime open high low close volume
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
0 2024-02-26 02:55:00 0.234 0.238 0.2312 0.2347 103225.029408
</code></pre>
<p>I have implemented a fix which removes duplicate records in the other component that prepares data. Fix looks like this and I will monitor whether this will help or not.</p>
<pre class="lang-py prettyprint-override"><code> # Append gathered data to df and write to file
df = pd.concat([df, fresh_data])
# Drop duplicates
df = df.drop_duplicates(subset=["datetime"])
</code></pre>
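<p>The duplicate-key hypothesis is easy to confirm: <code>pd.merge</code> produces the cross product of matching keys, so <em>n</em> duplicate rows on each side become <em>n²</em> output rows, and a 1 MB frame can explode into gigabytes. A small sketch with synthetic data, also showing how <code>merge</code>'s <code>validate</code> argument turns the silent blow-up into an immediate error:</p>

```python
import pandas as pd

# Two frames that share 1,000 rows with the *same* merge key:
left = pd.DataFrame({'datetime': ['2024-02-26 02:55:00'] * 1000, 'open': 0.234})
right = pd.DataFrame({'datetime': ['2024-02-26 02:55:00'] * 1000, 'st': 0.215})

# Every left row matches every right row: 1000 x 1000 = 1,000,000 rows.
merged = pd.merge(left, right, on='datetime')
print(len(merged))

# validate= raises instead of silently exploding:
try:
    pd.merge(left, right, on='datetime', validate='one_to_one')
except pd.errors.MergeError as e:
    print('caught:', e)
```

<p>Adding <code>validate='one_to_one'</code> to the real merge would have surfaced the duplicate datetimes long before the OOM kill.</p>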
|
<python><python-3.x><pandas><docker><out-of-memory>
|
2024-03-24 16:22:02
| 2
| 873
|
FN_
|
78,215,217
| 2,835,670
|
Getting the plot points for a kernel density estimate in seaborn
|
<p>I am using this code</p>
<pre><code>kde = sns.kdeplot(x = data, fill = True, color = "black", alpha = 0.1)
</code></pre>
<p>to get the kde for my data, and it works well. I am now trying to get all the x,y - point used to draw the plot and I am doing:</p>
<pre><code>poly = kde.collections[0]
x_values = poly.get_paths()[0].vertices[:, 0]
y_values = poly.get_paths()[0].vertices[:, 1]
</code></pre>
<p>However, the x_values increase and then decrease. Why? I understand that the y_values should increase and decrease, but I expected the x_values to be increasing, since the curve is drawn from left to right. By the way, the values of the points are reasonable and match the plot, apart from this behaviour.</p>
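<p>A filled curve is stored as a closed polygon: the path runs left to right along the curve, then back right to left along the baseline to close the shape, which is why the x-values rise and then fall. A sketch with plain matplotlib's <code>fill_between</code> (the same mechanism underlies seaborn's <code>fill=True</code>); taking only the first half of the vertices, or plotting with <code>fill=False</code> and reading the line data, recovers a monotone x:</p>

```python
import matplotlib
matplotlib.use('Agg')  # no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 1, 50)
y = np.exp(-((x - 0.5) ** 2) / 0.02)  # a single bump, like a kde curve

poly = plt.fill_between(x, y)
verts = poly.get_paths()[0].vertices
xs = verts[:, 0]

# The path goes forward along the curve, then backward along the baseline,
# so x is not monotone over the whole polygon:
print(np.all(np.diff(xs) >= 0))
```

<p>So the observed behaviour is expected for a filled artist; it is the polygon outline, not the curve alone.</p>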
|
<python><seaborn><kernel-density>
|
2024-03-24 16:11:37
| 1
| 2,111
|
user
|
78,215,203
| 18,756,733
|
Image channel error while training CNN model
|
<p>I get the following error when trying to train a CNN model:</p>
<pre class="lang-none prettyprint-override"><code>InvalidArgumentError: Graph execution error:
Detected at node decode_image/DecodeImage defined at (most recent call last):
<stack traces unavailable>
Number of channels inherent in the image must be 1, 3 or 4, was 2
[[{{node decode_image/DecodeImage}}]]
[[IteratorGetNext]] [Op:__inference_train_function_1598]
</code></pre>
<p>The dataset I am working on is Cats and Dogs classification dataset from Kaggle. I defined the data like this:</p>
<pre><code>path=r'C:\Users\berid\python\cats and dogs\PetImages'
data=tf.keras.utils.image_dataset_from_directory(path)
</code></pre>
<p>Any suggestion will be appreciated.</p>
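<p>This error usually means a few files in the dataset decode to an unexpected channel count (for instance grayscale-plus-alpha, PIL mode <code>LA</code>, which is 2 channels) or are corrupt; the Kaggle cats-and-dogs dump is known to contain a few broken images. A hedged sketch that scans a folder with Pillow and lists the offending files so they can be removed or converted (the path in the comment is a placeholder):</p>

```python
import os
from PIL import Image

# PIL modes that decode to 1, 3 or 4 channels, which decode_image accepts
OK_MODES = {'1', 'L', 'P', 'RGB', 'RGBA'}

def find_bad_images(root):
    bad = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with Image.open(path) as im:
                    if im.mode not in OK_MODES:
                        bad.append((path, im.mode))
            except OSError:  # truncated file, or not an image at all
                bad.append((path, 'unreadable'))
    return bad

# e.g.: for path, mode in find_bad_images(r'C:\...\PetImages'): os.remove(path)
```

<p>After deleting (or re-saving as RGB) the flagged files, <code>image_dataset_from_directory</code> should train without the <code>DecodeImage</code> error.</p>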
|
<python><tensorflow><keras><deep-learning><conv-neural-network>
|
2024-03-24 16:07:28
| 1
| 426
|
beridzeg45
|
78,215,201
| 1,694,745
|
Decrypt ruby DES-EDE3-CBC encrypted data in Python
|
<p>I have a bunch of data which are encrypted in Ruby by following code</p>
<pre><code>text = '12345678'
key = '6b4f0a29d4bba86add88be9d'
cipher = OpenSSL::Cipher.new('DES-EDE3-CBC').encrypt
cipher.key = key
s = cipher.update(text) + cipher.final
encrypted_text = s.unpack('H*')[0].upcase
# => 3B223AA60F1921F34CBBBAC209ACDCE4
</code></pre>
<p>It can be decrypted in Ruby</p>
<pre><code>cipher = OpenSSL::Cipher.new('DES-EDE3-CBC').decrypt
cipher.key = key
s = [encrypted_text].pack("H*").unpack("C*").pack("c*")
cipher.update(s) + cipher.final
# => 12345678
</code></pre>
<p>Now I have to decrypt the data in Python.
I wrote the encryption code as below:</p>
<pre><code>from Crypto.Cipher import DES3
from Crypto import Random
from base64 import b64encode, b64decode
from Crypto.Util.Padding import pad, unpad
key = '6b4f0a29d4bba86add88be9d'
iv = Random.new().read(DES3.block_size)
cipher = DES3.new(key, DES3.MODE_CBC, iv)
text = '12345678'.encode()
encrypted = cipher.encrypt(text)
encrypted_text = encrypted.hex()
print(encrypted_text)
# => f84f555b02e3ee24
</code></pre>
<p>As you can see, the encrypted data is quite different, not least in length, so the decryption part does not work.
How can I modify the Python code to be compatible with Ruby?
Unfortunately, the data were produced by the Ruby application, so I can only modify the Python side.</p>
|
<python><ruby><encryption><openssl>
|
2024-03-24 16:07:24
| 1
| 1,068
|
Tiktac
|
78,215,187
| 4,387,837
|
sklearn ComplementNB: only class 0 predictions for perfectly separable data
|
<p>As shown below, the balanced, one-dimensional data can be perfectly separated by <code>sklearn GaussianNB</code>. Why does <code>sklearn ComplementNB</code> give classifications that are all zeros for the same data?</p>
<pre><code>from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import ComplementNB
import numpy as np
N = 20
np.random.seed(9)
pos = np.random.uniform(size = N, low = 0.7, high = 0.8).reshape(-1, 1)
neg = np.random.uniform(size = N, low = 0.4, high = 0.5).reshape(-1, 1)
X = np.r_[pos, neg]
Y = np.array([1] * N + [0] * N)
gnb = GaussianNB()
cnb = ComplementNB()
gnb.fit(X,Y)
cnb.fit(X,Y)
#predict training data
print(gnb.predict(X))
print(cnb.predict(X))
</code></pre>
<p>The Gaussian Naive Bayes model is 100% correct. The Complement Naive Bayes model only predicts zeros. Why?</p>
<pre><code>[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
</code></pre>
|
<python><scikit-learn><classification><naivebayes>
|
2024-03-24 16:04:28
| 1
| 839
|
HOSS_JFL
|
78,215,078
| 7,846,884
|
broadcasting tensor matmul over batches
|
<p>How can I find the dot product of each batch's response and X data?</p>
<pre><code>y_yhat_allBatches_matmulX_allBatches = torch.matmul(yTrue_yHat_allBatches_tensorSub, interceptXY_data_allBatches[:, :, :-1])
</code></pre>
<p>The expected shape of <code>y_yhat_allBatches_matmulX_allBatches</code> is 2 by 5, where each row corresponds to one batch.</p>
<p><code>yTrue_yHat_allBatches_tensorSub.shape</code> = <code>[2, 15]</code>, where the rows are the batches (1 and 2) and the columns are the size of the response (15).</p>
<p><code>interceptXY_data_allBatches[:, :, :-1].shape = torch.Size([2, 15, 5])</code> for 15 observations by 5 features for 2 batches.</p>
<p>Please see the full reproducible code:</p>
<pre><code>#define dataset
nFeatures_withIntercept = 5
NObservations = 15
miniBatches = 2
interceptXY_data_allBatches = torch.randn(miniBatches, NObservations, nFeatures_withIntercept+1) #+1 Y(response variable)
#random assign beta to work with
beta_holder = torch.rand(nFeatures_withIntercept)
#y_predicted for each mini-batch
y_predBatchAllBatches = torch.matmul(interceptXY_data_allBatches[:, :, :-1], beta_holder)
#y_true - y_predicted for each mini-batch
yTrue_yHat_allBatches_tensorSub = torch.sub(interceptXY_data_allBatches[..., -1], y_predBatchAllBatches)
y_yhat_allBatches_matmulX_allBatches = torch.matmul(yTrue_yHat_allBatches_tensorSub, interceptXY_data_allBatches[:, :, :-1])
</code></pre>
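<p>The shapes just need a batch-compatible matmul: treating the residual vector as a <code>[2, 1, 15]</code> row matrix gives <code>[2, 1, 15] @ [2, 15, 5] -> [2, 1, 5]</code>, which squeezes to <code>[2, 5]</code>. A sketch of the idea with NumPy; <code>torch.matmul</code> and <code>torch.einsum</code> behave the same way on these shapes:</p>

```python
import numpy as np

batches, n_obs, n_feat = 2, 15, 5
resid = np.random.randn(batches, n_obs)      # y_true - y_hat, per batch
X = np.random.randn(batches, n_obs, n_feat)  # features, per batch

# Option 1: insert a length-1 dim so matmul broadcasts over the batch axis
out = (resid[:, None, :] @ X)[:, 0, :]       # shape (2, 5)

# Option 2: einsum spells out the contraction explicitly
out2 = np.einsum('bn,bnf->bf', resid, X)

print(out.shape, np.allclose(out, out2))
```

<p>In PyTorch the equivalent one-liner would be <code>torch.matmul(resid.unsqueeze(1), X).squeeze(1)</code> or <code>torch.einsum('bn,bnf-&gt;bf', resid, X)</code>.</p>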
|
<python><tensorflow><pytorch><array-broadcasting>
|
2024-03-24 15:26:11
| 1
| 473
|
sahuno
|
78,215,046
| 5,795,116
|
Slow Scraping Using Selenium
|
<p><a href="https://i.sstatic.net/fndvb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fndvb.png" alt="enter image description here" /></a>I am trying to scrape a website using Selenium, but it is very slow: it takes about a minute per record.</p>
<p>The webpage is <a href="https://jamabandi.nic.in/land%20records/NakalRecord" rel="nofollow noreferrer">https://jamabandi.nic.in/land%20records/NakalRecord</a>. I'm trying to scrape every record.</p>
<p>Is there an alternative to this? Can I use an API endpoint or plain HTTPS requests?</p>
<p>My code is:</p>
<pre><code>di=11
district_xpath_new = (By.XPATH, district_xpath)
dropdown_district=Select(handle_stale_element_reference(driver, district_xpath_new))
dropdown_district.select_by_index(di)
total_districts=len(Select(handle_stale_element_reference(driver, district_xpath_new)).options)
while(di<(total_districts)):
time.sleep(5)
driver,district_name=district_func(driver,di,district_xpath)
print("District Started "+str(di))
te=1
driver,dropdown_tehsil,total_tehsils,tehsil_name=tehsil_func(driver,te,tehsil_xpath)
# dropdown_tehsil.select_by_index(te)
while(te<total_tehsils):
time.sleep(5)
print("Tehsil Started is"+str(te))
driver,dropdown_tehsil,total_tehsils,tehsil_name=tehsil_func(driver,te,tehsil_xpath)
vi=8
driver,dropdown_village,total_vill,village_name=village_func(driver,vi,vill_xpath)
while(vi<total_vill):
time.sleep(5)
print("Village Started is"+str(vi))
driver,dropdown_village,total_vill,village_name=village_func(driver,vi,vill_xpath)
ye=3
driver,dropdown_year,total_year,year=year_func(driver,ye,year_xpath)
while(ye<total_year):
time.sleep(5)
print("Year Started is"+str(ye))
driver,dropdown_year,total_year,year=year_func(driver,ye,year_xpath)
ow=2
time.sleep(10)
print("Selected Personal Ownerlist"+str(ow))
driver,dropdown_owner=owner_drop(driver,ow,owner_dropdown_xpath)
name=280
driver,owner_name_drop,total_names,name_of_owner=owner_names_func(driver,name,owner_name_xpath)
while(name<total_names):
print("Names Started is"+str(name))
time.sleep(2)
driver,owner_name_drop,total_names,name_of_owner=owner_names_func(driver,name,owner_name_xpath)
try:
if '?' not in name_of_owner:
print(name_of_owner)
df_owner,driver=dataframe_check(driver,district_name,tehsil_name,village_name,year,name)
driver=select_all(driver,di,ye,te,vi,ow,name)
else:
pass
except:
print("Name is"+str(name))
print("Not Found")
name+=1
</code></pre>
|
<python><selenium-webdriver><web-scraping>
|
2024-03-24 15:17:12
| 1
| 327
|
jatin rajani
|
78,215,019
| 11,748,924
|
np.save and np.load with memmap mode returned OSError
|
<p>I tried this simple code:</p>
<pre><code>import numpy as np
np.save('tmp.npy', np.empty(128))
tmp = np.load('tmp.npy', mmap_mode='r+')
np.save('tmp.npy', tmp[:64])
</code></pre>
<p>It returned <code>OSError</code>:</p>
<pre><code>---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
Cell In[1], line 4
      2 np.save('tmp.npy', np.empty(128))
      3 tmp = np.load('tmp.npy', mmap_mode='r+')
----> 4 np.save('tmp.npy', tmp[:64])

File <__array_function__ internals>:180, in save(*args, **kwargs)

File c:\Users\User\Documents\KULIAH\LAB_SOFTWARE\python_proj\anagrambotid\.venv\lib\site-packages\numpy\lib\npyio.py:518, in save(file, arr, allow_pickle, fix_imports)
    516 if not file.endswith('.npy'):
    517     file = file + '.npy'
--> 518 file_ctx = open(file, "wb")

    520 with file_ctx as fid:
    521     arr = np.asanyarray(arr)

OSError: [Errno 22] Invalid argument: 'tmp.npy'
</code></pre>
<p>My OS is Windows 11; I think on Linux it would work fine. What happened here?</p>
<p>Even using <code>tmp.flush()</code> changes nothing.</p>
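A sketch of one workaround (my assumption about the cause: <code>np.save</code> reopens <code>tmp.npy</code> for writing while the memmap still holds an open handle on it, which Windows refuses and Linux tolerates): copy the slice into an ordinary array and release the memmap before rewriting the file.

```python
import numpy as np

np.save('tmp.npy', np.empty(128))
tmp = np.load('tmp.npy', mmap_mode='r+')

# Copy the slice into normal memory, then drop the memmap so its file
# handle is released before np.save reopens the file for writing.
head = np.array(tmp[:64])
del tmp
np.save('tmp.npy', head)

print(np.load('tmp.npy').shape)
```

Saving under a different filename and renaming afterwards would be an alternative if the memmap must stay alive.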
|
<python><numpy><numpy-memmap>
|
2024-03-24 15:09:31
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
78,214,916
| 2,303,071
|
How to parse subdomains from URLs in FastAPI?
|
<p>Our customers are going to hit our new API at <code>foo123.bar456.domain.com/v1.5/</code></p>
<p>The <code>foo123</code> and <code>bar456</code> subdomains are account-specific (and let us load balance). They signify relationships and trigger processing we need to do.</p>
<p>We don't want to (repetitively) pass query parameters in the URL, e.g., <code>...domain.com/v1.5/?acc=foo123&parent=bar456</code>, as that is just non-pythonic, frankly.</p>
<p>So, I'd like to parse, in FastAPI, the fully-qualified domain name that was called.</p>
<p>I can't find tips on how to do this (URL parsing) that doesn't involve folders to the right of the fqdn. Tips / pointers? Thanks!</p>
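Starlette exposes the requested host on every request (<code>request.url.hostname</code>, or the raw <code>Host</code> header), so the subdomain labels can be split off with plain string handling; no folder parsing is involved. A sketch (the helper name and the <code>domain.com</code> base are placeholders):

```python
def split_subdomains(host: str, base: str = "domain.com"):
    """Return the subdomain labels left of the base domain:
    'foo123.bar456.domain.com' -> ['foo123', 'bar456']."""
    hostname = host.split(":")[0]                 # drop any :port suffix
    if hostname == base or not hostname.endswith("." + base):
        return []
    return hostname[: -len(base) - 1].split(".")

# Inside a FastAPI route this would be fed from request.url.hostname or
# request.headers["host"]; a literal is used here for illustration.
print(split_subdomains("foo123.bar456.domain.com:443"))
```

In a real app this would typically live in a dependency (`Depends`) so every route receives the parsed `acc`/`parent` pair without repeating the string handling.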
|
<python><fastapi><subdomain><starlette><parse-url>
|
2024-03-24 14:38:44
| 1
| 1,093
|
Todd Curry
|
78,214,680
| 8,849,755
|
Skip with statement in Python if condition is met
|
<p>Consider the following code:</p>
<pre class="lang-py prettyprint-override"><code>if not condition:
    with my_object.whatever(...) as context_manager:
        do_something()
</code></pre>
<p>Is it possible to put the checking of <code>condition</code> inside <code>my_object.whatever</code> such that I don't need to wrap the <code>with</code> with an <code>if</code> every time? I am looking for an interface like this one:</p>
<pre class="lang-py prettyprint-override"><code>with my_object.whatever(..., skip_with=condition) as context_manager:
    do_something()
</code></pre>
<p>but I would not know how to implement this. Maybe raising my own exception inside <code>__enter__</code> and then catching it in <code>__exit__</code>? I don't know whether this would be good practice.</p>
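As background for the <code>__enter__</code>/<code>__exit__</code> idea: a context manager cannot make the interpreter skip the <code>with</code> body (PEP 377, which proposed exactly this, was rejected, and <code>__exit__</code> is never called when <code>__enter__</code> raises). One workaround sketch (names are mine) replaces <code>with</code> by iterating a generator that yields zero or one time:

```python
class MyObject:
    def whatever(self, skip_with=False):
        """Generator standing in for the real context manager: yields the
        'context' once, or not at all when skip_with is set."""
        if skip_with:
            return                     # the loop body never runs
        resource = object()            # placeholder for real setup
        try:
            yield resource
        finally:
            pass                       # placeholder for real teardown

obj = MyObject()
ran = []
for context_manager in obj.whatever(skip_with=False):
    ran.append(context_manager)       # runs once, like a with body
for context_manager in obj.whatever(skip_with=True):
    ran.append("never reached")
print(len(ran))
```

The trade-off is that readers see a `for` rather than a `with`, so the intent should be documented; the alternative is simply keeping the `if` at the call site.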
|
<python><with-statement>
|
2024-03-24 13:25:35
| 0
| 3,245
|
user171780
|
78,214,609
| 726,730
|
Restarting process raises runtimeerror (can't create new thread at interpreter shutdown)
|
<p>I have two processes: P1 and P2. P1 sends data to P2 (with shared memory).</p>
<p>When something goes wrong in P2, a PyQt5 window with the error message is opened.
One option in the window is to restart process P2 by terminating and re-instantiating it.</p>
<p>After this operation the P1 process resends data to P2.</p>
<p>When this happens, this error occurs:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
  File "C:\Users\chris\Documents\My Projects\papinhio-player\src\python+\main-window\../..\python+\main-window\final-slice.py", line 635, in final_slice_ready
    self.main_self.final_slice_plot_instance.final_slice_plot_queue.put({"type":"slice","slice":slice})
  File "C:\Python\Lib\multiprocessing\queues.py", line 94, in put
    self._start_thread()
  File "C:\Python\Lib\multiprocessing\queues.py", line 192, in _start_thread
    self._thread.start()
  File "C:\Python\Lib\threading.py", line 992, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't create new thread at interpreter shutdown
</code></pre>
<p>Why does this happen and how can I resolve it?</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Process, Queue, Pipe
import time

class Proc_1(Process):
    def __init__(self, q_1):
        super().__init__()
        self.daemon = False
        self.q_1 = q_1

    def run(self):
        while(True):
            time.sleep(0.125)
            self.q_1.put("data...")

class Proc_2(Process):
    def __init__(self, q_1):
        super().__init__()
        self.daemon = False
        self.q_1 = q_1

    def run(self):
        while(True):
            data = self.q_1.get()
            print(data)

if __name__ == "__main__":
    q_1 = Queue()
    proc_1 = Proc_1(q_1)
    proc_1.start()

    proc_2 = Proc_2(q_1)
    proc_2.start()

    time.sleep(1)

    proc_2.terminate()

    q_1 = Queue()
    proc_2 = Proc_2(q_1)
    proc_2.start()
</code></pre>
<p>The above simple example works.</p>
<p>Here is the error snippet:</p>
<pre class="lang-py prettyprint-override"><code>import time
from PyQt5.QtCore import pyqtSignal, QThread
from multiprocessing import Process, Queue, Pipe
from datetime import datetime, timedelta
import traceback
import matplotlib.pyplot as plt
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.dates import num2date
from matplotlib.ticker import FuncFormatter
import numpy as np

class Final_Slice_Plot:
    def __init__(self, main_self):
        try:
            self.main_self = main_self

            # chart
            self.chart = Canvas(self)
            self.chart.ax.set_facecolor((1, 1, 1))
            self.chart.ax.tick_params(labelcolor='white')

            # create process
            self.process_number = 94
            self.final_slice_plot_mother_pipe, self.final_slice_plot_child_pipe = Pipe()
            self.final_slice_plot_queue = Queue()
            self.final_slice_plot_queue.put({"type":"test"})
            print("test")
            self.final_slice_plot_emitter = Final_Slice_Plot_Emitter(self.final_slice_plot_mother_pipe)
            self.final_slice_plot_emitter.error_signal.connect(lambda error_message: self.main_self.open_final_slice_plot_error_window(error_message))
            self.final_slice_plot_emitter.plot_data_signal.connect(lambda x, y: self.plot(x, y))
            self.final_slice_plot_emitter.start()
            self.final_slice_plot_child_process = Final_Slice_Plot_Child_Proc(self.final_slice_plot_child_pipe, self.final_slice_plot_queue)
            self.final_slice_plot_child_process.start()

            counter = 0
            for process in self.main_self.manage_processes_instance.processes:
                if "process_number" in process:
                    if process["process_number"] == self.process_number:
                        self.main_self.manage_processes_instance.processes[counter]["pid"] = self.final_slice_plot_child_process.pid
                        self.main_self.manage_processes_instance.processes[counter]["start_datetime"] = datetime.now()
                        self.main_self.manage_processes_instance.processes[counter]["status"] = "in_progress"
                counter += 1

            if self.main_self.manage_proccesses_window_is_open:
                self.main_self.manage_proccesses_window_support_code.manage_proccesses_queue.put(
                    {"type": "table-update", "processes": self.main_self.manage_processes_instance.processes})

            self.main_self.final_slice_instance.put_to_plot = True
        except:
            error_message = traceback.format_exc()
            print("ERROR")
            self.main_self.open_final_slice_plot_error_window(error_message)
</code></pre>
<p>On the first run the "test" message is printed to the console, but after the restart this error appears in the error window:</p>
<pre><code>Traceback (most recent call last):
  File "C:\Users\chris\Documents\My Projects\papinhio-player\src\python+\main-window\../..\python+\main-window\final-slice-plot.py", line 28, in __init__
    self.final_slice_plot_queue.put({"type":"test"})
  File "C:\Python\Lib\multiprocessing\queues.py", line 94, in put
    self._start_thread()
  File "C:\Python\Lib\multiprocessing\queues.py", line 192, in _start_thread
    self._thread.start()
  File "C:\Python\Lib\threading.py", line 992, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't create new thread at interpreter shutdown
</code></pre>
|
<python><multiprocessing><python-multiprocessing>
|
2024-03-24 13:06:11
| 0
| 2,427
|
Chris P
|
78,214,561
| 9,720,161
|
Stable-Baselines3 Type Error in _predict with custom environment &amp; policy
|
<p>I'm in the process of integrating a custom environment and policy into Stable-Baselines3 (SB3). While setting up the <code>_predict</code> functionality, I encountered an issue. I can manually call <code>_predict</code> with a standard Python <code>dict</code>, but I need to define the environment using <code>gym.spaces</code> when working with SB3; I suspect SB3 uses the <code>gym.spaces</code> to do something internally. Consequently, I represent the observations from the environment as a standard Python dictionary and define the observation and action space using <code>gym.spaces</code>. However, when employing the SB3 Proximal Policy Optimization (PPO) algorithm, an error arises. It appears that PPO is passing the <code>gymnasium.spaces.dict.Dict</code> type instead of the actual <code>dict</code> observation. Perhaps I need to embed the observation into the observation space, but I don't think this is possible. Should I utilize the SB3 feature extractor in some fashion? Or is there something I have forgotten to define in order to ensure that the actual observation is used and not just the <code>gym.spaces</code> object?</p>
<pre><code>import numpy as np
import gymnasium as gym
import torch
import torch.nn as nn
import torch.nn.functional as F
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.policies import BasePolicy

class CustomEnv(gym.Env):
    def __init__(self, nRows, nCols):
        super().__init__()
        # init
        self.nRows = nRows
        self.nCols = nCols
        self.iter = 0
        self.done = False
        self.truncated = False
        # action space
        self.action_space = gym.spaces.Discrete(self.nRows * self.nCols)
        # observation space
        self.observation_space = gym.spaces.Dict({
            'layout': gym.spaces.Box(low=0, high=255, shape=(self.nRows, self.nCols), dtype=np.uint8),
            'mask': gym.spaces.Box(low=0, high=255, shape=(self.nRows * self.nCols,), dtype=np.uint8)
        })
        # actual observation
        self.observation = {'layout': np.zeros((self.nRows, self.nCols), dtype=np.uint8),
                            'mask': np.zeros(self.nRows * self.nCols, dtype=np.uint8)}

    def step(self, action):
        self.iter += 1
        reward = 0
        print("action: ", action)
        layout = self.observation["layout"].flatten()
        mask = self.observation["mask"]
        if layout[action] == 0:
            layout[action] = 1
            mask[action] = 1
            reward = 1
        else:
            reward = -1
        self.observation = {'layout': np.reshape(layout, (self.nRows, self.nCols)),
                            'mask': mask}
        if self.iter > self.nRows * self.nCols:
            self.done = True
            self.truncated = True
        info = {}
        return self.observation, reward, self.done, self.truncated, info

    def reset(self, seed=None, options=None):
        # reset
        self.iter = 0
        self.done = False
        self.truncated = False
        self.observation = {'layout': np.zeros((self.nRows, self.nCols), dtype=np.uint8),
                            'mask': np.zeros(self.nRows * self.nCols, dtype=np.uint8)}
        info = {}
        return self.observation, info

    def render(self):
        pass

    def close(self):
        pass

class CustomPolicy(BasePolicy):
    def __init__(self, observation_space, action_space):
        super(CustomPolicy, self).__init__(observation_space, action_space)
        input_size = np.shape(observation_space["layout"])[0] * np.shape(observation_space["layout"])[1]
        output_size = action_space.n
        self.l1 = nn.Linear(input_size, 5)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(5, output_size)

    def forward(self, obs, r=None, deterministic: bool = False, **kwargs):
        print(type(obs))
        x = torch.Tensor(obs["layout"].flatten())
        output = self.l1(x)
        output = self.relu(output)
        output = self.l2(output)
        return output

    def _predict(self, obs, deterministic: bool = False):
        # Forward pass through the network
        action_logits = self.forward(obs, deterministic=deterministic)
        if deterministic:
            # For deterministic action selection, take the action with maximum probability
            action = torch.argmax(action_logits)
        else:
            # For stochastic action selection, sample from the action distribution
            action_probs = F.softmax(action_logits, dim=0)
            action_dist = torch.distributions.Categorical(probs=action_probs)
            action = action_dist.sample()
        return action, None  # Don't need log-prob.

# Create an instance of your custom environment
env = CustomEnv(3, 3)
custom_policy = CustomPolicy(env.observation_space, env.action_space)

# Running a small test
action = env.action_space.sample()
observation, reward, done, truncated, _ = env.step(action)
action, _ = custom_policy._predict(observation, False)
print(action)
action, _ = custom_policy._predict(observation, True)
print(action)

# train PPO with your custom environment and custom policy
model = PPO(policy=custom_policy, env=env, verbose=1).model.learn(total_timesteps=1000)

# eval
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"Mean reward: {mean_reward} +/- {std_reward}")

-------------THE ERROR-------------------
x = torch.Tensor(obs["layout"].flatten())
AttributeError: 'Box' object has no attribute flatten
</code></pre>
|
<python><reinforcement-learning><openai-gym><stable-baselines>
|
2024-03-24 12:50:51
| 0
| 303
|
AliG
|
78,214,529
| 984,621
|
Scrapy - how do I load data from the database in ItemLoader before sending it to the pipeline?
|
<p>I have a PSQL database table <code>brands</code> where are columns like <code>id</code>, <code>name</code>, and other columns.</p>
<p>My (simplified) code - <code>MySpider.py</code>:</p>
<pre><code>import DB

class MySpider(scrapy.Spider):
    db = DB.connect()

    def start_requests(self):
        urls = ['https://www.website.com']
        for url in urls:
            yield Request(url=url, callback=self.parse, meta=meta)

    def parse(self, response):
        cars = response.css('...')
        for car in cars:
            item = CarLoader(item=Car(), selector=car)
            item.add_value('brand_id', car.css('...').get())
            ...
</code></pre>
<p><code>items.py</code>:</p>
<pre><code>import scrapy

class Car(scrapy.Item):
    name = scrapy.Field()
    brand_id = scrapy.Field()
    established = scrapy.Field()
    ...
</code></pre>
<p><code>itemsloaders.py</code>:</p>
<pre><code>from itemloaders.processors import TakeFirst, MapCompose
from scrapy.loader import ItemLoader

class CarLoader(ItemLoader):
    default_output_processor = TakeFirst()
</code></pre>
<p>When saving a new item to the database (that's done in <code>pipeline.py</code>), I don't want to store the car's brand name (BMW, Audi, etc.) in the column <code>cars.brand_id</code>, but its ID (this ID is stored in <code>brands.id</code>).</p>
<p>What's the proper way of doing that? I need to look up the brand's name in the <code>brands</code> table and save the found ID to <code>cars.brand_id</code>, but where should I place this operation so that it's logical and Scrapy-correct?</p>
<p>I have tried doing it in <code>MySpider.py</code>, as well as in <code>pipeline.py</code>, but I find it a bit dirty and it does not feel like it belongs there.</p>
<p>It seems that this functionality should be placed in <code>itemsloaders.py</code>, but the purpose of this file is a bit mystical to me. How do I resolve this?</p>
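For what it's worth, a common placement for this kind of lookup is an item pipeline, since pipelines exist precisely for post-processing items before storage. A minimal sketch (the `spider.db.query` API, the `brands` table layout, and the class name are assumptions based on the question; the cache avoids one query per item):

```python
class BrandResolverPipeline:
    """Replace the brand *name* held in item['brand_id'] with the
    numeric id looked up in the brands table (hypothetical db API)."""

    def open_spider(self, spider):
        self.cache = {}

    def process_item(self, item, spider):
        brand_name = item.get('brand_id')   # still holds the name here
        if brand_name not in self.cache:
            self.cache[brand_name] = spider.db.query(
                "SELECT id FROM brands WHERE name = %s", brand_name)
        item['brand_id'] = self.cache[brand_name]
        return item
```

It would be registered in `ITEM_PIPELINES` with a lower number than the pipeline that writes to PSQL, so the id is resolved before insertion. Item loaders, by contrast, are meant for extraction-time cleaning (stripping whitespace, picking the first value), not for I/O.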
|
<python><scrapy>
|
2024-03-24 12:44:34
| 0
| 48,763
|
user984621
|
78,214,361
| 386,861
|
How to handle inf and nans in Great Table
|
<p>I've got a dataframe that I want to format which includes inf and nan.</p>
<p>The dict for it is:</p>
<pre><code>import pandas as pd
from numpy import nan, inf

df = pd.DataFrame({'Foodbank': {0: 'study',
                                1: 'generation',
                                2: 'near',
                                3: 'sell',
                                4: 'former',
                                5: 'line',
                                6: 'ok',
                                7: 'field',
                                8: 'last',
                                9: 'really',
                                10: 'particularly',
                                11: 'must',
                                12: 'drive',
                                13: 'herself',
                                14: 'learn'},
                   '%(LY)': {0: -20.93,
                             1: -19.23,
                             2: -26.09,
                             3: 150.0,
                             4: 90.24,
                             5: -23.85,
                             6: nan,
                             7: inf,
                             8: inf,
                             9: inf,
                             10: inf,
                             11: -35.48,
                             12: nan,
                             13: nan,
                             14: -1.3}})

from great_tables import GT

GT(df)
</code></pre>
<p>It looks like this:</p>
<p><a href="https://i.sstatic.net/5iYLi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5iYLi.png" alt="enter image description here" /></a></p>
<p>What I want is to have a dash or n/a to highlight it rather than inf which won't mean anything to an audience.</p>
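One approach (a sketch with a shortened frame; `n/a` is a placeholder for whatever text you prefer): `inf` is a real float, not a "missing" value, so map it to `NaN` first, and then every no-data cell can be rendered the same way.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Foodbank": ["study", "sell", "ok", "field"],
                   "%(LY)": [-20.93, 150.0, np.nan, np.inf]})

# Convert inf/-inf to NaN, then substitute the display text.
clean = df.replace([np.inf, -np.inf], np.nan)
display_df = clean.fillna("n/a")      # or "-" for a dash
print(display_df)
```

If I read the great_tables API correctly, `GT(clean).sub_missing(missing_text="-")` should do the substitution at the display layer instead, keeping the column numeric; the `inf`-to-`NaN` conversion is still needed first, since `sub_missing` only targets missing values.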
|
<python><pandas><great-tables>
|
2024-03-24 11:36:17
| 1
| 7,882
|
elksie5000
|
78,214,089
| 1,001,889
|
MyPy and sqlalchemy - "Session" has no attribute "__enter__"; maybe "__iter__"? Mypyattr-defined
|
<p>I recently updated the mypy configuration for my Python 3.11 project, adding an .ini file to check my already-working SQLAlchemy code. <strong>I'm using sqlalchemy == 2.0.29</strong></p>
<p>It oddly started complaining about this line of code:</p>
<p><a href="https://i.sstatic.net/u80Z2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/u80Z2.png" alt="" /></a></p>
<pre class="lang-py prettyprint-override"><code>with Session(db) as session:
...
</code></pre>
<p>the 'db' variable is being used as per <a href="https://docs.sqlalchemy.org/en/20/orm/session_basics.html" rel="nofollow noreferrer">SQlalchemy 2.0 docs</a>:</p>
<pre class="lang-py prettyprint-override"><code>db = create_engine(db_string)
</code></pre>
<p>Here is the MyPy error in detail:</p>
<p><a href="https://i.sstatic.net/ybSXI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ybSXI.png" alt="" /></a></p>
<pre class="lang-none prettyprint-override"><code>"Session" has no attribute "__enter__"; maybe "__iter__"? [attr-defined]
"Session" has no attribute "__exit__" [attr-defined]
</code></pre>
<p>While this is my current mypy.ini file:</p>
<pre class="lang-ini prettyprint-override"><code>[mypy]
python_version = 3.11
ignore_missing_imports = True
plugins = sqlmypy
</code></pre>
<p>Why is mypy complaining about that perfectly working line of code?</p>
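One thing worth checking (an assumption, not verified against this exact setup): the `sqlmypy` plugin comes from `sqlalchemy-stubs`, which targets SQLAlchemy 1.x. SQLAlchemy 2.0 ships its own inline type hints, and loading the old plugin and stubs can shadow them, including `Session`'s context-manager protocol. A config without the plugin would look like:

```ini
[mypy]
python_version = 3.11
ignore_missing_imports = True
; 'plugins = sqlmypy' removed: sqlalchemy-stubs targets SQLAlchemy 1.x
; and can shadow the inline type hints that ship with SQLAlchemy 2.0
```

Uninstalling `sqlalchemy-stubs` at the same time would keep mypy from picking up the 1.x stub files at all.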
|
<python><sqlalchemy><mypy>
|
2024-03-24 10:06:15
| 0
| 3,818
|
Shine
|
78,213,951
| 1,082,349
|
Why does scipy.sparse.linalg.spsolve crash when numpy.linalg.solve does not?
|
<p>I have a sparse Matrix <code>B</code> and an array <code>b</code> (uploaded <a href="https://file.io/QOrK3j5inVW8" rel="nofollow noreferrer">here</a>), and am trying to do</p>
<pre><code>import numpy as np
import config
from scipy.sparse.linalg import spsolve
from scipy.sparse import load_npz
from numpy.linalg import solve
B = load_npz('B.npz')
b = np.load('b.npy')
spsolve(B, b)
</code></pre>
<p>which leads to</p>
<pre><code>corrupted size vs. prev_size
Process finished with exit code 134 (interrupted by signal 6:SIGABRT)
</code></pre>
<p>the same problem does not happen when I use numpy.linalg.solve:</p>
<pre><code>B_dense = np.array(B.todense())
solve(B_dense, b)
Out[5]:
array([-299.77073372, -299.68350884, -299.59972996, ..., -286.52926438,
-286.50847706, -286.48927409])
</code></pre>
<p>So since numpy's version works, I suppose nothing is &quot;wrong&quot; with my matrices? Why does this occur? And since this crashes the program without a traceback: is there perhaps a way to &quot;wrap&quot; the error in an exception, so I can run <code>solve</code> whenever <code>spsolve</code> would lead to an issue?</p>
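On the wrapping part: a SIGABRT from native code (the `corrupted size vs. prev_size` message comes from glibc's heap checks) kills the interpreter before Python can raise anything, so it cannot be caught in-process. A sketch of the usual isolation trick (function names are mine): run `spsolve` in a child process and fall back to the dense solver when the child dies or errors.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def _spsolve_worker(B, b):
    # imported inside the child so a crash there cannot kill the parent
    from scipy.sparse.linalg import spsolve
    return spsolve(B, b)

def solve_with_fallback(B, b):
    """Try spsolve in a child process; if the child aborts (the pool then
    raises BrokenProcessPool) or errors, fall back to the dense solver."""
    try:
        with ProcessPoolExecutor(max_workers=1) as ex:
            return ex.submit(_spsolve_worker, B, b).result()
    except Exception:
        return np.linalg.solve(B.toarray(), b)
```

The cost is pickling `B` and `b` across the process boundary, which for matrices of this size is usually far cheaper than densifying unconditionally.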
|
<python><numpy><scipy>
|
2024-03-24 09:10:59
| 0
| 16,698
|
FooBar
|
78,213,865
| 774,133
|
Group dataframe by some columns, do nothing, display result
|
<p>I have a Pandas dataframe and need to group its values by some columns, then do nothing, finally show the resulting dataframe. This might appear to be a strange operation, but it is only for visualisation purposes, as I need to format the dataframe in a LaTeX tabular environment.</p>
<p>Consider this example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [
dict(a=1, b=2, c=3),
dict(a=1, b=2, c=4),
dict(a=1, b=2, c=3),
dict(a=2, b=1, c=5),
dict(a=2, b=1, c=9)
]
df = pd.DataFrame.from_records(data)
display(df)
print("reindexed")
df2 = df.set_index(["a", "b"]).sort_index()
display(df2)
</code></pre>
<p>with outputs:</p>
<pre><code>   a  b  c
0  1  2  3
1  1  2  4
2  1  2  3
3  2  1  5
4  2  1  9
reindexed
     c
a b
1 2  3
  2  4
  2  3
2 1  5
  1  9
</code></pre>
<p>I do not want repeated values in the column <code>b</code>. The desired output would be:</p>
<pre><code>     c
a b
1 2  3
     4
     3
2 1  5
     9
</code></pre>
<p>I am not able to achieve this, even though it should be quite simple. I tried to use groupby:</p>
<pre><code>print("grouped")
df3 = df.groupby(["a", "b"]).apply(lambda x: x)
print(df3)
</code></pre>
<p>but with this result:</p>
<pre><code>grouped
         a  b  c
a b
1 2 0    1  2  3
    1    1  2  4
    2    1  2  3
2 1 3    2  1  5
    4    2  1  9
</code></pre>
<p>What am I doing wrong?</p>
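Since the grouping is purely cosmetic, one option is to blank out the repeated key values before printing rather than changing the index at all. A sketch (the blanking rule, "hide a row's keys when they equal the previous row's", is my reading of the desired output):

```python
import pandas as pd

data = [dict(a=1, b=2, c=3), dict(a=1, b=2, c=4), dict(a=1, b=2, c=3),
        dict(a=2, b=1, c=5), dict(a=2, b=1, c=9)]
df = pd.DataFrame.from_records(data)

# Rows whose (a, b) pair equals the previous row's get their keys blanked.
shown = df.astype(object)
dup = df[["a", "b"]].eq(df[["a", "b"]].shift()).all(axis=1)
shown.loc[dup, ["a", "b"]] = ""
print(shown.to_string(index=False))
```

For the LaTeX end goal it may be simpler still: `to_latex` on the MultiIndexed `df2` sparsifies repeated index labels by default, which should produce roughly the desired tabular directly.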
|
<python><pandas>
|
2024-03-24 08:33:55
| 1
| 3,234
|
Antonio Sesto
|
78,213,800
| 826,112
|
Why is my sieve of sundaram so inefficient
|
<p>I have been playing around with prime sieves and their performance. I've turned to the Sundaram sieve. My implementation is:</p>
<pre><code>import time
from pympler import asizeof

start = time.perf_counter()

limit = 10000
upper = int(limit/2)
numbers = [i for i in range(upper)]
for j in range(1, int(upper/2)):
    i = 1
    while i &lt;= j:
        n = i + j + (2*i*j)
        if n in numbers and n &lt; upper:
            numbers[numbers.index(n)] = 0
        i += 1

primes = [2, 3]
for number in numbers:
    if number != 0:
        primes.append(number*2 + 1)

stop = time.perf_counter()
print(primes)
print(f'Time = {stop - start} sec')
print(f'List of numbers: {asizeof.asizeof(numbers)}')
print(f'List of primes: {asizeof.asizeof(primes)}')
</code></pre>
<p>The performance results are:</p>
<ul>
<li>primes below 100: 0.0002 sec</li>
<li>primes below 1000: 0.0772 sec</li>
<li>primes below 10000: 97.71 sec</li>
<li>primes below 100000: > 30 mins (I terminated the execution)</li>
</ul>
<p>I don't think the algorithm itself is inefficient, and my implementations of other algorithms are much quicker than this one, so it must be my code. What part of my code is causing the run time to blow out this badly?</p>
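For comparison, the blow-up comes from `n in numbers` and `numbers.index(n)`: each is a linear scan of the whole list, performed inside the double loop, so the total work grows roughly cubically with the limit. A sketch of the same sieve using direct O(1) indexing (my rewrite, not the original structure):

```python
import time

def sundaram(limit):
    """Sieve of Sundaram with a boolean exclusion list indexed directly,
    replacing the O(n) `in` / list.index scans of the original."""
    upper = limit // 2
    is_excluded = [False] * upper
    for j in range(1, upper):
        if 3 * j + 1 >= upper:          # smallest n for this j (i = 1)
            break
        for i in range(1, j + 1):
            n = i + j + 2 * i * j
            if n >= upper:
                break
            is_excluded[n] = True
    # k = 0 maps to 1, which is not prime, so start at k = 1
    return [2] + [2 * k + 1 for k in range(1, upper) if not is_excluded[k]]

t0 = time.perf_counter()
primes = sundaram(100000)
print(len(primes), "primes found in", time.perf_counter() - t0, "seconds")
```

On typical hardware this handles the 100000 case well under a second, so the list scans, not the algorithm, account for essentially all of the measured time.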
|
<python>
|
2024-03-24 08:04:26
| 1
| 536
|
Andrew H
|
78,213,588
| 19,146,511
|
usage of vllm for extracting embeddings
|
<p>Following is a little piece of code to extract embeddings from a certain layer of LLM:</p>
<pre class="lang-py prettyprint-override"><code>def process_row(prompt: str, model, tokenizer, layers_to_use: list, remove_period: bool):
    """
    Processes a row of data and returns the embeddings.
    """
    if remove_period:
        prompt = prompt.rstrip(". ")
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(inputs.input_ids, output_hidden_states=True, return_dict_in_generate=True, max_new_tokens=1, min_new_tokens=1)
    embeddings = {}
    for layer in layers_to_use:
        last_hidden_state = outputs.hidden_states[0][layer][0][-1]
        embeddings[layer] = [last_hidden_state.numpy().tolist()]
    return embeddings
</code></pre>
<p>It's a pretty standard approach, but it's quite slow. Is there any way to use vllm to make it faster without needing to call the generate function every time? I've tried batching, but it's slow too. Any help is appreciated!</p>
<p>One way to get last hidden state values using vllm is as follows:</p>
<pre class="lang-py prettyprint-override"><code>from vllm import LLM, SamplingParams
from vllm.sequence import (SamplerOutput, Sequence, SequenceGroup, SequenceData,
                           SequenceGroupMetadata, SequenceStatus)
from transformers import LlamaModel, LlamaTokenizer
from vllm import EngineArgs, LLMEngine, SamplingParams, RequestOutput
from vllm.sequence import SamplerOutput, SequenceData, SequenceGroupMetadata

llm = LLM(model=path_to_llama2)

# Enable top-k sampling to reflect the accurate memory usage.
vocab_size = llm.llm_engine.workers[0].model.config.vocab_size
sampling_params = SamplingParams(top_p=0.99, top_k=vocab_size - 1)
max_num_batched_tokens = llm.llm_engine.workers[0].scheduler_config.max_num_batched_tokens
max_num_seqs = llm.llm_engine.workers[0].scheduler_config.max_num_seqs
</code></pre>
<pre class="lang-py prettyprint-override"><code>prompt = train[0]
prompt_token_ids = llm.llm_engine.tokenizer.encode(prompt)  # [2, 100, 524, 10]

seqs = []
group_id = 1
seq_data = SequenceData(prompt_token_ids)
seq = SequenceGroupMetadata(
    request_id=str(group_id),
    is_prompt=True,
    seq_data={group_id: seq_data},
    sampling_params=sampling_params,
    block_tables=None,
)
seqs.append(seq)

input_tokens, input_positions, input_metadata = llm.llm_engine.workers[0]._prepare_inputs(
    seqs)
prompt_len = len(seq_data.prompt_token_ids)
input_tokens = input_tokens[:prompt_len]
input_positions = input_positions[:prompt_len]

# Execute the model.
num_layers = llm.llm_engine.workers[0].model_config.get_num_layers(llm.llm_engine.workers[0].parallel_config)
tempOut = llm.llm_engine.workers[0].model.model(
    input_ids=input_tokens,
    positions=input_positions,
    kv_caches=[(None, None)] * num_layers,
    input_metadata=input_metadata,
    cache_events=None,
)
print(tempOut.size())
</code></pre>
<p>but this doesn't get me all the hidden-state embeddings (of all layers). Is there any other way to get such values in a faster manner?</p>
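If vLLM's internals stay awkward, the transformers side alone can be sped up by replacing `generate` with a plain batched forward pass: `output_hidden_states=True` on `model(...)` returns every layer's hidden states in one call, with none of the sampling machinery. A sketch (the helper name is mine; it assumes `model` is a Hugging Face model whose output exposes `.hidden_states` and `tokenizer` returns an attention mask):

```python
import torch

def embed_last_token(prompts, model, tokenizer, layers, device="cpu"):
    """One batched forward pass: hidden states for all layers at once,
    taken at each sequence's final non-padding token."""
    enc = tokenizer(prompts, return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    # index of each sequence's last real token (ignore right padding)
    last = enc["attention_mask"].sum(dim=1) - 1
    rows = torch.arange(len(prompts))
    return {layer: out.hidden_states[layer][rows, last].cpu() for layer in layers}
```

Compared with `generate(max_new_tokens=1)`, this skips the decoding loop and the KV-cache setup entirely, which is usually the bulk of the per-prompt overhead when only embeddings are needed.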
|
<python><nlp><huggingface-transformers><large-language-model>
|
2024-03-24 06:30:06
| 0
| 307
|
lazytux
|
78,213,511
| 4,170,439
|
Remove (not clear) attachments from email
|
<p>Python 3.6</p>
<p>I'm trying to archive some old mails, and I want to remove attachments from some of them.</p>
<p>However, if I use the <code>clear()</code> method, the MIME part remains in the mail, just empty (so it's assumed to be of type <code>text/plain</code>). I came up with a really hacky solution of converting the <code>EmailMessage</code> object to text then removing any boundary lines that aren't followed by headers, but surely there's a better way.</p>
<p>Example mail with two .png inline attachments and two .txt attachments.</p>
<p>Here's a sample:</p>
<pre><code>from email import policy
from email.parser import BytesParser
from email.iterators import _structure

with open(eml_path, 'rb') as fp:
    msg = BytesParser(policy=policy.SMTP).parse(fp)

print(_structure(msg))

for part in msg.walk():
    cd = part.get_content_disposition()
    if cd is not None:
        part.clear()

print(_structure(msg))
</code></pre>
<p>Structure of original mail:</p>
<pre><code>multipart/mixed
    multipart/alternative
        text/plain
        multipart/related
            text/html
            image/png
            image/png
    text/plain
    text/plain
</code></pre>
<p>Structure after removing attachments:</p>
<pre><code>multipart/mixed
    multipart/alternative
        text/plain
        multipart/related
            text/html
            text/plain
            text/plain
    text/plain
    text/plain
</code></pre>
<p>The last 4 parts are left empty, but I want to remove them.</p>
<p>This causes some graphical issues in Thunderbird and Gmail, from what I've tried. Once I remove the lingering boundary lines, they display correctly.</p>
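Rather than clearing parts in place, the parts can be dropped from their parent container's payload, so no empty `text/plain` stubs (or stray boundary lines) are left behind. A sketch (`strip_attachments` is my name; it mirrors the question's `cd is not None` test, so inline images are removed as well):

```python
from email.message import EmailMessage

def strip_attachments(part):
    """Recursively remove any subpart carrying a Content-Disposition
    header from its parent's payload, instead of clearing it in place."""
    if part.is_multipart():
        kept = [p for p in part.get_payload()
                if p.get_content_disposition() is None]
        part.set_payload(kept)
        for p in kept:
            strip_attachments(p)

# demo: one body part plus one attachment
msg = EmailMessage()
msg.set_content("body text")
msg.add_attachment(b"data", maintype="application", subtype="octet-stream",
                   filename="a.txt")
strip_attachments(msg)
print([p.get_content_type() for p in msg.walk()])
```

One remaining wrinkle this sketch does not handle: a multipart container left with a single child (or none) arguably ought to be collapsed or dropped itself, which would take one more pass.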
|
<python><email><mime>
|
2024-03-24 05:35:17
| 1
| 329
|
Bangaio
|
78,213,463
| 3,358,488
|
Running ChatGPT programmatically - How to continue conversation without re-submitting all past messages?
|
<p>One can obtain a ChatGPT response to a prompt using the following example:</p>
<pre class="lang-py prettyprint-override"><code>from openai import OpenAI

client = OpenAI()  # requires key in OPEN_AI_KEY environment variable

completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."}
    ]
)

print(completion.choices[0].message.content)
</code></pre>
<p>How can one continue the conversation? I've seen examples saying you just add a new message to the list of messages and re-submit:</p>
<pre class="lang-py prettyprint-override"><code># Continue the conversation by including the initial messages and adding a new one
continued_completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a poetic assistant, skilled in explaining complex programming concepts with creative flair."},
        {"role": "user", "content": "Compose a poem that explains the concept of recursion in programming."},
        {"role": "assistant", "content": initial_completion.choices[0].message.content},  # Include the initial response
        {"role": "user", "content": "Can you elaborate more on how recursion can lead to infinite loops if not properly handled?"}  # New follow-up prompt
    ]
)
</code></pre>
<p>But I would imagine this means processing the previous messages all over again at every new prompt, which seems quite wasteful. Is that really the only way? Isn't there a way to keep a "session" of some sort that keeps ChatGPT's internal state and just processes a newly given prompt?</p>
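For context: the Chat Completions API is stateless, so there is no server-side session handle, and the full message history really is resent (and re-tokenized) on every request; stateful alternatives exist only in higher-level offerings such as the Assistants API's threads. A small helper (class and method names are mine) at least keeps the bookkeeping in one place:

```python
class Conversation:
    """Accumulates the message list so each turn only appends; the whole
    history is still transmitted per request, as the API requires."""

    def __init__(self, client, model, system):
        self.client = client
        self.model = model
        self.messages = [{"role": "system", "content": system}]

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = self.client.chat.completions.create(
            model=self.model, messages=self.messages
        ).choices[0].message.content
        self.messages.append({"role": "assistant", "content": reply})
        return reply
```

Usage would be `conv = Conversation(OpenAI(), "gpt-3.5-turbo", "You are a poetic assistant."); conv.ask("...")`, with `conv.messages` available for truncation or summarization once the history grows too long for the context window.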
|
<python><openai-api><langchain><chatgpt-api>
|
2024-03-24 05:11:31
| 1
| 5,872
|
user118967
|
78,213,351
| 2,328,154
|
ALLOW_ADMIN_USER_PASSWORD_AUTH not getting set in AWS CDK
|
<p>I am trying to set the Authentication Flows in my Cognito - User Pool - App Client to the flows below in AWS CDK.</p>
<ul>
<li>ALLOW_ADMIN_USER_PASSWORD_AUTH</li>
<li>ALLOW_CUSTOM_AUTH</li>
<li>ALLOW_REFRESH_TOKEN_AUTH</li>
<li>ALLOW_USER_SRP_AUTH</li>
</ul>
<p>I can only get it to add these flows.</p>
<ul>
<li>ALLOW_REFRESH_TOKEN_AUTH</li>
<li>ALLOW_CUSTOM_AUTH</li>
<li>ALLOW_USER_SRP_AUTH</li>
</ul>
<p>I am missing ALLOW_ADMIN_USER_PASSWORD_AUTH.</p>
<p>My code to create the app client is as follows.</p>
<pre><code> cognito.CfnUserPoolClientProps(
user_pool_id=self.user_pool.user_pool_id,
explicit_auth_flows=["ALLOW_ADMIN_USER_PASSWORD_AUTH, ALLOW_CUSTOM_AUTH, ALLOW_REFRESH_TOKEN_AUTH, ALLOW_USER_SRP_AUTH"]
)
self.user_pool.add_client('cognito-app-client',
user_pool_client_name='cognito-app-client',
access_token_validity=Duration.minutes(60),
id_token_validity=Duration.minutes(60),
refresh_token_validity=Duration.days(1),
# auth_flows=cognito.AuthFlow(user_password=True),
o_auth=cognito.OAuthSettings(
flows=cognito.OAuthFlows(
implicit_code_grant=True,
)
),
prevent_user_existence_errors=True,
generate_secret=True,
enable_token_revocation=True)
</code></pre>
<p>Can anyone point me in the right direction?</p>
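Two observations worth checking (assumptions based on the snippet, untested here): the `CfnUserPoolClientProps` object is constructed but never passed to anything, so its `explicit_auth_flows` (which is also a single comma-joined string rather than four separate strings) never takes effect; and the high-level `add_client` path reads `auth_flows` instead. A sketch of that route, where `admin_user_password=True` is what maps to `ALLOW_ADMIN_USER_PASSWORD_AUTH` and the refresh-token flow is added automatically:

```python
# Sketch (assumes aws_cdk.aws_cognito, CDK v2); not a verified deployment.
self.user_pool.add_client(
    'cognito-app-client',
    auth_flows=cognito.AuthFlow(
        admin_user_password=True,   # -> ALLOW_ADMIN_USER_PASSWORD_AUTH
        custom=True,                # -> ALLOW_CUSTOM_AUTH
        user_srp=True,              # -> ALLOW_USER_SRP_AUTH
    ),
    # ... remaining options unchanged
)
```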
|
<python><amazon-cognito><aws-cdk>
|
2024-03-24 04:04:18
| 1
| 421
|
MountainBiker
|
78,213,135
| 16,614,515
|
How do I update the matplotlib elements of a sympy plot?
|
<p>The following code consists of a matplotlib graph of the inequality y > 5/x, with the ability to fill in the graph as the user pans/zooms outward.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()

x_range = np.linspace(-10, 10, 400)
y_range = 5 / x_range
line, = ax.plot(x_range, y_range, 'r', linewidth=2, linestyle='--')
ax.fill_between(x_range, y_range, y_range.max(), alpha=0.3, color='gray')

ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_title('Inequality: y > 5 / x')
ax.axhline(0, color='black', linewidth=3)
ax.axvline(0, color='black', linewidth=3)
ax.grid(color='gray', linestyle='--', linewidth=0.5)

def update_limits(event):
    xlim = ax.get_xlim()
    x_range = np.linspace(xlim[0], xlim[1], max(200, int(200 * (xlim[1] - xlim[0]))))
    y_range = 5 / x_range
    line.set_data(x_range, y_range)
    for collection in ax.collections:
        collection.remove()
    ax.fill_between(x_range, y_range, max(y_range.max(), ax.get_ylim()[1]), alpha=0.3, color='gray')
    plt.draw()

fig.canvas.mpl_connect('button_release_event', update_limits)
plt.show()
</code></pre>
<p>I've been trying to convert this concept into code that uses the sympy module (by accessing the Matplotlib backends), which has better control over algebraic functions. However, the following code does not seem to fill in the graph as the user pans outward. Why is this the case, and how do I fix it?</p>
<pre><code>import sympy as sp
from sympy.plotting.plot import MatplotlibBackend

x, y = sp.symbols('x y')

# Define the implicit plot
p1 = sp.plot_implicit(sp.And(y > 5 / x), (x, -10, 10), (y, -10, 10), show=False)
mplPlot = MatplotlibBackend(p1)
mplPlot.process_series()
mplPlot.fig.tight_layout()
mplPlot.ax[0].set_xlabel("x-axis")
mplPlot.ax[0].set_ylabel("y-axis")

def update_limits(event):
    global mplPlot
    xmin, xmax = mplPlot.ax[0].get_xlim()
    ymin, ymax = mplPlot.ax[0].get_ylim()
    p1 = sp.plot_implicit(sp.And(y > 5 / x), (x, xmin, xmax), (y, ymin, ymax), show=False)
    mplPlot = MatplotlibBackend(p1)
    mplPlot.process_series()
    mplPlot.fig.tight_layout()
    mplPlot.ax[0].set_xlabel("x-axis")
    mplPlot.ax[0].set_ylabel("y-axis")
    mplPlot.plt.draw()

mplPlot.fig.canvas.mpl_connect('button_release_event', update_limits)
mplPlot.plt.show()
</code></pre>
<p><strong>UPDATE</strong>: After some debugging (with the sympy code), I have found that the <code>xmax</code> and <code>xmin</code> variables in the <code>update_limits</code> function are only changed once and stay that way for as long as the program runs. If possible, I would also like to know why this is.</p>
<p><strong>UPDATE 2</strong>: If instead of running <code>mplPlot.plt.draw()</code> you run <code>mplPlot.plt.show()</code>, a new window is created with the correct graph. This is not what I want, as I want the changes to be drawn in the same window. Doing this also reveals another buggy behavior: when I pan far enough into Quadrant IV of the graph, the graph seems to become unresponsive. This doesn't happen all the time and I can't find an explanation for it. If anyone knows why this is the case, feel free to add that to your answer!</p>
|
<python><python-3.x><matplotlib><charts><sympy>
|
2024-03-24 01:41:32
| 1
| 316
|
LuckElixir
|
78,213,102
| 543,572
|
How to add class variable values to a list, python 3x?
|
<p>I'm trying to add class variable values to a list inside a function, but I'm not seeing any errors or the expected output. The combobox doesn't display when I uncomment the list code.</p>
<p>Outside of the function, this code works standalone:</p>
<pre><code>value_list = []
selected_float = 0.5
value_list.append(selected_float)
print('The value_list value is: ')
print(value_list)
</code></pre>
<p>Output as expected:
The value_list value is:
[0.5]</p>
<p>However, here is the code with the function where it doesn't print anything. I had to comment out the code at 1. and 2. or it stopped working and didn't display the combobox.</p>
<pre><code>from tkinter import *
from tkinter import ttk
# Create an instance of tkinter frame or window
win = Tk()
# Set the size of the window
win.geometry("700x350")
# Create a function to clear the combobox
def clear_cb(self):
self.cb.set('')
def handle_selection(self):
selected_index = self.cb.current() # Get the index of the selected item
print(selected_index)
selected_tuple = self.data[selected_index] # Get the selected tuple
print(selected_tuple)
selected_float = float(selected_tuple[-1]) # Extract the float value from the tuple
print(selected_float) # Print the extracted float value
# 2. Commented out these lines:
#self.value_list.append(selected_float)
#print('The value_list value is: ')
#print(self.value_list)
class ComboWidget():
def __init__(self):
# Create a combobox widget
self.var = StringVar()
self.cb = ttk.Combobox(win, textvariable= self.var)
self.cb['values'] = self.data
self.cb['state'] = 'readonly'
self.cb.pack(fill = 'x',padx=5, pady=5)
# Define Tuple
self.data = [('First', 'F', '0.5'), ('Next', 'N', '1.0'), ('Middle', 'M', '0.6'), ('Last', 'L', '0.24')]
# 1. Commented out the declaration.
#self.value_list = []
self.cb.bind("<<ComboboxSelected>>", handle_selection)
# Create a button to clear the selected combobox text value
self.button = Button(win, text= "Clear", command= clear_cb)
self.button.pack()
win.mainloop()
</code></pre>
<p>I believe it should be possible to mix instance and class variables together, but something I'm doing is wrong? Any ideas?</p>
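<p>Stripped of tkinter entirely, here is a sketch of how I expect instance attributes to behave when they are defined before anything reads them and the handler is a bound method (this is my own reduction, not the widget code):</p>
<pre><code>class ComboSketch:
    def __init__(self):
        # define data and value_list BEFORE anything reads them
        self.data = [('First', 'F', '0.5'), ('Next', 'N', '1.0')]
        self.value_list = []

    def handle_selection(self, selected_index):
        # bound method: receives self automatically
        selected_float = float(self.data[selected_index][-1])
        self.value_list.append(selected_float)
        return selected_float

w = ComboSketch()
w.handle_selection(0)
print(w.value_list)  # [0.5]
</code></pre>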
|
<python><ttk><python-class><ttkbootstrap><ttkcombobox>
|
2024-03-24 01:20:35
| 1
| 15,801
|
James-Jesse Drinkard
|
78,212,699
| 6,546,694
|
How to use featuretools at the test time?
|
<p>Let me demonstrate the issue with an example:</p>
<p>Let us say we want to use the primitive 'PERCENTILE'</p>
<p>Imports:</p>
<pre><code>import pandas as pd
import featuretools as ft
</code></pre>
<p>For training (create a simple data with one column and let featuretools compute a percentile feature on top of it):</p>
<pre><code>df_train = pd.DataFrame({'index':[1,2,3,4,5], 'val':[1,2,3,4,5]})
es_train = ft.EntitySet("es_train")
es_train.add_dataframe(df_train,'df')
fm, fl = ft.dfs(entityset = es_train, trans_primitives=['percentile'], agg_primitives=[], target_dataframe_name='df')
</code></pre>
<p>output:</p>
<pre><code>print(fm)
val PERCENTILE(val)
index
1 1 0.2
2 2 0.4
3 3 0.6
4 4 0.8
5 5 1.0
</code></pre>
<p>So far everything is expected</p>
<p>Now, suppose I get an example with the value 3 at test time. I would want it translated to 0.6 as per the training data, but that is not what happens:</p>
<pre><code>df_test = pd.DataFrame({'index':[1], 'val':[3]})
es_test = ft.EntitySet("es_test")
es_test.add_dataframe(df_test,'df')
ft.calculate_feature_matrix(features = fl, entityset=es_test)
</code></pre>
<p>output:</p>
<pre><code> val PERCENTILE(val)
index
1 3 1.0
</code></pre>
<p>So the feature definitions in <code>fl</code> (the output of <code>ft.dfs</code>) do not store the train-time statistics needed to compute the features at test time. This would throw any machine-learning model into a tailspin.</p>
<p>What is the canonical way to apply featuretools at the test time?</p>
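<p>To make the target behaviour concrete, this is the train-time-percentile semantics I am after, written without featuretools (plain numpy; the helper name is mine):</p>
<pre><code>import numpy as np

train_vals = np.array([1, 2, 3, 4, 5])

def train_percentile(v):
    # fraction of TRAINING values <= v, matching PERCENTILE(val) on the train set
    return float((train_vals <= v).mean())

print(train_percentile(3))  # 0.6
</code></pre>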
|
<python><pandas><featuretools>
|
2024-03-23 22:17:32
| 0
| 5,871
|
figs_and_nuts
|
78,212,679
| 11,748,924
|
Numpy memmap still using RAM instead of disk while doing vector operation
|
<p>I initialize two operands and one result:</p>
<pre><code>a = np.memmap('a.mem', mode='w+', dtype=np.int64, shape=(2*1024*1024*1024))
b = np.memmap('b.mem', mode='w+', dtype=np.int64, shape=(2*1024*1024*1024))
result = np.memmap('result.mem', mode='w+', dtype=np.int64, shape=(2*1024*1024*1024))
</code></pre>
<p>At idle like that, the System RAM reported by Google Colab stays at <code>1.0/12.7 GB</code>, which is good: there is no RAM activity yet. But when doing a vector operation such as vector subtraction, the reported system RAM increases to almost the maximum, <code>11.2/12.7 GB</code>, and eventually the runtime kernel crashes:</p>
<pre><code>result[:] = a[:] - b[:] # This still consume memory
result.flush()
</code></pre>
<p>I have read the <code>np.memmap</code> docs many times; they state that the purpose of <code>memmap</code> is to reduce memory consumption, so why do I still get an <code>Out Of Memory</code> error?</p>
<p>I suspect the vector subtraction must be buffered in small chunks, e.g. a <code>512MB</code> buffer at a time, but I have no idea what the syntax is. Perhaps what I mean is something like this:</p>
<pre><code>BUFF_SIZE = 512 * 1024 * 1024
for i in range(0, result.size, BUFF_SIZE):
result[i:i+BUFF_SIZE] = a[i:i+BUFF_SIZE] - b[i:i+BUFF_SIZE]
result.flush()
</code></pre>
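<p>Scaled down so it actually runs, the chunked loop I have in mind looks like this (the file names, the 10,000-element size, and the 1024-element chunk are just for the demo):</p>
<pre><code>import os
import tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()
n = 10_000
a = np.memmap(os.path.join(tmpdir, 'a.mem'), mode='w+', dtype=np.int64, shape=(n,))
b = np.memmap(os.path.join(tmpdir, 'b.mem'), mode='w+', dtype=np.int64, shape=(n,))
result = np.memmap(os.path.join(tmpdir, 'result.mem'), mode='w+', dtype=np.int64, shape=(n,))
a[:] = np.arange(n)
b[:] = 1

CHUNK = 1024  # elements per chunk; keeps the in-RAM working set small
for i in range(0, n, CHUNK):
    result[i:i + CHUNK] = a[i:i + CHUNK] - b[i:i + CHUNK]
result.flush()

print(result[0], result[-1])  # -1 9998
</code></pre>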
|
<python><numpy><memory><memory-management><numpy-memmap>
|
2024-03-23 22:10:31
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
78,212,325
| 16,845
|
Type annotations for a dictionary that maps types to a list of instances of that type?
|
<p>In Python 3.11, I would like to create a class that holds a dictionary where the key is the type of a class and the value of that key is a list of objects of that type. The code:</p>
<pre><code>import typing
T = typing.TypeVar("T")
class DictHolder:
def __init__(self):
self._things: dict[type[T], list[T]] = {}
</code></pre>
<p>run through pyright, complains that <code>Type variable "T" has no meaning in this context</code>. That makes sense, since <code>DictHolder</code> is itself not a generic class. It is not parameterized on a single type, so it does not inherit from <code>typing.Generic[T]</code>. However, its private dict field is generic, as might be any properties or accessor functions that expose or interact with this generic dict.</p>
<p>What are the correct type annotations for the dictionary?</p>
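<p>For context, here is the kind of accessor pair I would like to type correctly. The <code>typing.cast</code> on the way out is my current workaround, not something I am happy with:</p>
<pre><code>import typing

T = typing.TypeVar("T")

class DictHolder:
    def __init__(self) -> None:
        # untyped internally; the public accessors carry the T <-> type[T] link
        self._things: dict[type, list] = {}

    def add(self, thing: T) -> None:
        self._things.setdefault(type(thing), []).append(thing)

    def get(self, cls: type[T]) -> "list[T]":
        return typing.cast("list[T]", self._things.get(cls, []))

holder = DictHolder()
holder.add(1)
holder.add(2)
holder.add("x")
print(holder.get(int))  # [1, 2]
</code></pre>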
|
<python><python-3.x><dictionary>
|
2024-03-23 20:14:34
| 2
| 1,216
|
Charles Nicholson
|
78,212,322
| 7,211,014
|
Flask how can I use after_request to print out all responses and not break swagger-ui?
|
<p>Setup:</p>
<ul>
<li>connexion[swagger-ui]==2.13.0</li>
<li>Flask==2.2.5</li>
</ul>
<p>I am using connexion (not Flask directly) to set up my app and host swagger. I want to print every request and response payload to the console before sending it.</p>
<p>I tried using these after building my connexion app</p>
<pre><code># Add pre payload printer to app
connex_app.app.before_request(print_request(config))
connex_app.app.after_request(print_response(config))
</code></pre>
<p>Here are those functions</p>
<pre><code>def print_request(config):
'''
Print out requests received, or response sent
'''
def request_logger():
        if request.method not in ('GET',):  # ('GET',) is a tuple; ('GET') is just the string 'GET'
if config.myproject.logging.myproject.pretty_print.request_payloads:
logger.debug(f"Request received, Body: {request.get_data(as_text=True)}")
else:
data = request.get_json() if request.is_json else {}
flat_json = json.dumps(data, separators=(',', ':'))
logger.debug(f"Request received, Body: {flat_json}")
else:
logger.debug("Request received")
return request_logger
def print_response(config):
'''
Print out requests received, or response sent
'''
def response_logger(response):
# Ensure you have access to the correct response object here, this might need to be passed explicitly
if config.myproject.logging.myproject.pretty_print.response_payloads:
logger.debug(f"Response sent, Body: {response.get_data(as_text=False)}")
else:
# This section might need adjustment based on how you're handling response data
data = response.get_json() if response.is_json else {}
flat_json = json.dumps(data, separators=(',', ':'))
logger.debug(f"Response sent, Body: {flat_json}")
return response
return response_logger
</code></pre>
<p>The problem is this line</p>
<pre><code>logger.debug(f"Response sent, Body: {response.get_data(as_text=False)}")
</code></pre>
<p>For some reason it redirects swagger-ui requests and responses to its interface, which then breaks the swagger-ui.
This is very odd to me, as I did not have this issue yesterday. I did have the connexion dependency set to <code>2.*</code>, and I have no idea what version was being used before.
I do not have time to change all my code to use connexion 3.0, so that is not an option right now. Also, I had to set <code>Flask==2.2.5</code> because of a JSON encoding problem with newer versions.</p>
<p>Has anyone else run into this before?
Am I printing responses properly? Is there a better way to do it?</p>
<p>This feels like a dependency issue, and I am not sure how to fix it...</p>
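<p>One guard I am considering is to simply skip logging for the swagger-ui routes. A dependency-free sketch of the path check; <code>/ui</code> is what I believe connexion serves swagger under, so the prefixes here are assumptions:</p>
<pre><code># paths (assumed) that the connexion swagger-ui serves; skip logging for these
SKIP_PREFIXES = ("/ui", "/openapi.json", "/swagger.json")

def should_log(path):
    return not any(path.startswith(prefix) for prefix in SKIP_PREFIXES)

print(should_log("/ui/index.html"))   # False
print(should_log("/api/v1/items"))    # True
</code></pre>
<p>The idea would be to call <code>should_log(request.path)</code> at the top of both hooks and return early for the UI routes.</p>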
|
<python><flask><swagger><response><connexion>
|
2024-03-23 20:13:56
| 1
| 1,338
|
Dave
|
78,212,315
| 471,478
|
Is there a sum(min_count=1) that is as fast as sum on DataFrameGroupBy.agg
|
<p>I have a DataFrame with a few million rows that I groupby and sum various columns via</p>
<pre><code>df.groupby(…).agg({
"foo": "sum",
"bar": "sum",
…})
</code></pre>
<p>That is rather fast and takes around 10 seconds on my machine.</p>
<p>Now I need a variant of sum that keeps NaN values, like</p>
<pre><code>sum(min_count=1)
</code></pre>
<p>The only way this seems to work with the groupby/agg from above is via</p>
<pre><code>... agg({
"foo": (lambda s: s.sum(min_count=1))
})
</code></pre>
<p>But that is up to 10x slower.</p>
<p>I tried passing kwargs but these seem not to be propagated when passing a dict to agg:</p>
<pre><code>... agg({
"foo": "sum",
"bar": "sum",
}, min_count=1)
</code></pre>
<p>Is there a way to group/agg with the semantics of min_count=1 and the speed of a bare sum?</p>
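<p>For illustration, the semantics I want can be emulated with two fast built-in aggregations and a mask, shown here in plain pandas (the same pattern should port to the dask groupby, though I have not benchmarked it there):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "foo": [1.0, 2.0, np.nan]})

# fast built-in sum and count, then restore NaN where no values were seen
agg = df.groupby("g").agg(foo=("foo", "sum"), n=("foo", "count"))
agg.loc[agg["n"] == 0, "foo"] = np.nan
result = agg["foo"]
print(result)
</code></pre>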
|
<python><python-3.x><pandas>
|
2024-03-23 20:10:51
| 3
| 12,364
|
scravy
|
78,212,299
| 1,169,091
|
How to use a specific column on the X-axis of a DataFrame plot
|
<p>I want my plot to use State names on the x-axis, not the index values.
This code produces the graph:</p>
<pre><code>print(df.info())
print(df)
df.plot(kind='bar')
plt.xlabel('State')
plt.ylabel('Total Sales')
plt.title('Total Sales by State')
plt.show()
</code></pre>
<p>This is the output:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 11 entries, 0 to 10
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 State 11 non-null object
1 SumOfTotalPrice 11 non-null float64
dtypes: float64(1), object(1)
memory usage: 304.0+ bytes
None
State SumOfTotalPrice
0 AK 1.063432e+07
1 CA 4.172891e+07
2 IL 2.103149e+07
3 IN 2.270681e+08
4 KY 4.144238e+07
5 ME 2.057557e+07
6 MI 4.216375e+07
7 OH 7.970354e+08
8 PA 2.158148e+07
9 SD 1.025623e+07
10 TX 2.061534e+07
</code></pre>
<p>The plot doesn't have the state names on the X-axis:
<a href="https://i.sstatic.net/LWZBD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LWZBD.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib><plot>
|
2024-03-23 20:06:16
| 1
| 4,741
|
nicomp
|
78,212,127
| 9,720,964
|
Python datetime.replace() has unexpected behaviour when uploading to Firebase Firestore
|
<p>For testing purposes, I have deployed the following Firebase Cloud Function. When the function gets called, it adds a document to my Firestore collection with two fields containing two datetimes.</p>
<pre class="lang-py prettyprint-override"><code>@https_fn.on_call(region='europe-west1', vpc_connector='connector1', vpc_connector_egress_settings=options.VpcEgressSetting('ALL_TRAFFIC'))
def testing(req: https_fn.CallableRequest):
firestore_client: google.cloud.firestore.Client = firestore.client()
visit_collection = firestore_client.collection('visits')
visit_collection.add(
{
'test1': datetime.now().astimezone(),
'test2': datetime.now().replace(hour=0, minute=0, second=0, microsecond=0).astimezone(),
},
'test'
)
</code></pre>
<p>When looking at the contents of the document in Cloud Firestore in the Firebase Console, what I would expect for the field values (if it were 19:50) is:</p>
<pre><code>test1: 23 March 2024 at 19:50:00 UTC+1
test2: 23 March 2024 at 00:00:00 UTC+1
</code></pre>
<p>Instead, what I see in the document is:</p>
<pre><code>test1: 23 March 2024 at 19:50:00 UTC+1
test2: 23 March 2024 at 01:00:00 UTC+1
</code></pre>
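<p>My current suspicion, shown standalone: <code>replace()</code> keeps the datetime naive, so <code>astimezone()</code> attaches the server's local zone (UTC on Cloud Functions, I believe) before converting, which shifts my midnight by one hour. The fixed offsets below are mine, for the demo:</p>
<pre><code>from datetime import datetime, timezone, timedelta

cet = timezone(timedelta(hours=1))  # stand-in for UTC+1

# a naive "midnight" that a UTC server would interpret as 00:00 UTC
midnight_on_utc_server = datetime(2024, 3, 23, 0, 0, tzinfo=timezone.utc)
print(midnight_on_utc_server.astimezone(cet))  # 2024-03-23 01:00:00+01:00
</code></pre>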
|
<python><firebase><google-cloud-firestore><google-cloud-functions><python-datetime>
|
2024-03-23 19:13:00
| 1
| 447
|
jaakdentrekhaak
|
78,211,958
| 3,305,998
|
How can I test rust functions wrapped by pyo3 in rust, before building and installing in python?
|
<p>I have functionality written in rust, which I am exposing to python via pyo3. I would like to test that the python functions are correctly exposed and handle python types correctly.</p>
<p>I already have tests in place to validate the actual functional implementation (in rust) and the end-to-end integration (in python).</p>
<p>How can I test the pyo3 python functions in rust?</p>
|
<python><unit-testing><rust><pyo3>
|
2024-03-23 18:14:32
| 1
| 318
|
MusicalNinja
|
78,211,910
| 4,089,351
|
Is the error 'name 'Scene' is not defined' in Manim a generic output, or does it have a concrete meaning, and how do I fix it?
|
<p>Running the following code from the <a href="https://3b1b.github.io/manim/getting_started/example_scenes.html" rel="nofollow noreferrer">Manim examples</a> in VS Code on Windows:</p>
<pre><code>from manim import *
class OpeningManimExample(Scene):
def construct(self):
intro_words = Text("""
The original motivation for manim was to
better illustrate mathematical functions
as transformations.
""")
intro_words.to_edge(UP)
self.play(Write(intro_words))
self.wait(2)
# Linear transform
grid = NumberPlane((-10, 10), (-5, 5))
matrix = [[1, 1], [0, 1]]
linear_transform_words = VGroup(
Text("This is what the matrix"),
IntegerMatrix(matrix, include_background_rectangle=True),
Text("looks like")
)
linear_transform_words.arrange(RIGHT)
linear_transform_words.to_edge(UP)
linear_transform_words.set_stroke(BLACK, 10, background=True)
self.play(
ShowCreation(grid),
FadeTransform(intro_words, linear_transform_words)
)
self.wait()
self.play(grid.animate.apply_matrix(matrix), run_time=3)
self.wait()
# Complex map
c_grid = ComplexPlane()
moving_c_grid = c_grid.copy()
moving_c_grid.prepare_for_nonlinear_transform()
c_grid.set_stroke(BLUE_E, 1)
c_grid.add_coordinate_labels(font_size=24)
complex_map_words = TexText("""
Or thinking of the plane as $\\mathds{C}$,\\\\
this is the map $z \\rightarrow z^2$
""")
complex_map_words.to_corner(UR)
complex_map_words.set_stroke(BLACK, 5, background=True)
self.play(
FadeOut(grid),
Write(c_grid, run_time=3),
FadeIn(moving_c_grid),
FadeTransform(linear_transform_words, complex_map_words),
)
self.wait()
self.play(
moving_c_grid.animate.apply_complex_function(lambda z: z**2),
run_time=6,
)
self.wait(2)
</code></pre>
<p>yields an error message:</p>
<pre><code> class OpeningManimExample(Scene):
^^^^^
NameError: name 'Scene' is not defined. Did you mean: 'scene'?
[Done] exited with code=1 in 0.901 seconds
</code></pre>
<p>However, I have other Manim code with exactly the same structure, i.e. <code>from manim import *</code> followed by <code>class NameOfTheClass(Scene):</code>, that runs perfectly well.</p>
<p>If I look up this error online I get posts that prompt me to install Manim, discuss downloading the package versus pip installing it, etc. However, is the error really about <code>Scene</code>? Because if it were, it would happen in all Manim files with this structure, wouldn't it?</p>
<p>Trying <code>from manimlib import *</code> in the terminal also seems to be a problem, even though manimlib is installed:</p>
<pre><code>PS C:\Users\j\Documents\MANIM> from manimlib import *
ParserError:
Line |
1 | from manimlib import *
| ~~~~
| The 'from' keyword is not supported in this version of the language.
</code></pre>
<hr />
<p>Directory:</p>
<p><a href="https://i.sstatic.net/zNmKv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zNmKv.png" alt="enter image description here" /></a></p>
<hr />
<p>This is the output of</p>
<pre><code>import manimlib; print(dir(manimlib))
[Running] python -u "c:\Users\anton\Documents\MANIM\test.py"
['__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', 'animation', 'camera', 'config', 'constants', 'container', 'extract_scene', 'main', 'manimlib', 'mobject', 'scene', 'utils']
[Done] exited with code=0 in 8.02 seconds
</code></pre>
|
<python><manim>
|
2024-03-23 18:01:16
| 0
| 4,851
|
Antoni Parellada
|
78,211,842
| 2,862,945
|
Getting interpolation out of nested for loops in python
|
<p>I have a 2D structure with a certain shape in the (xz) plane. For simplicity, I set it here to be of circular shape. I basically need to rotate that structure around the z axis, and my idea was to do that with an interpolation function. <code>RegularGridInterpolator</code> (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RegularGridInterpolator.html" rel="nofollow noreferrer">link to the docs</a>) sounds suitable for this, the idea being that I use the structure in the given (xz) plane as input to the interpolator; then, when I rotate around the z axis, I calculate sqrt(x^2 + y^2) at each position (looking from the top, i.e. along the z axis), which corresponds to the original x coordinate, while z simply stays z.</p>
<p>The code works, but it is very slow when the arrays become large (up to 1000 points in each direction), very likely due to the nested for loops in which the interpolation is evaluated. I thought about using list comprehensions for that, but can't get them to work with the interpolator. So my question is: how do I get rid of at least one of the for loops, maybe more?</p>
<p>Here is the code:</p>
<pre><code>import matplotlib.pyplot as plt
from mayavi import mlab
import numpy as np
import scipy.interpolate as interp
def make_simple_2Dplot( data2plot, xVals, yVals, N_contLevels=8 ):
fig, ax = plt.subplots()
    # contourf expects the data transposed relative to (x, z) indexing
    ax.contourf(xVals, yVals, data2plot.T)
ax.set_aspect('equal')
ax.set_xlabel('x')
ax.set_ylabel('z')
plt.show()
def make_simple_3Dplot( data2plot, xVals, yVals, zVals, N_contLevels=8 ):
contLevels = np.linspace( np.amin(data2plot),
np.amax(data2plot),
N_contLevels)[1:].tolist()
fig1 = mlab.figure( bgcolor=(1,1,1), fgcolor=(0,0,0),size=(800,600))
contPlot = mlab.contour3d( data2plot, contours=contLevels,
transparent=True, opacity=.4,
figure=fig1
)
mlab.xlabel('x')
mlab.ylabel('y')
mlab.zlabel('z')
mlab.show()
x_min, z_min = 0, 0
x_max, z_max = 10, 10
Nx = 100
Nz = 50
x_arr = np.linspace(x_min, x_max, Nx)
z_arr = np.linspace(z_min, z_max, Nz)
# center of circle in 2D
xc, zc = 5, 5
# radius of circle
rc = 2
# make 2D circle
data_2D = np.zeros( (Nx,Nz) )
for ii in range(Nx):
for kk in range(Nz):
if np.sqrt((x_arr[ii]-xc)**2 + (z_arr[kk]-zc)**2) < rc:
data_2D[ii,kk] = 1
# interpolation function to make 3D object
circle_xz = interp.RegularGridInterpolator( (x_arr,z_arr), data_2D,
bounds_error=False,
fill_value=0
)
# coordinate arrays for 3D data
y_min = -x_max
y_max = x_max
Ny = 100
x_arr_3D = np.linspace(-x_max, x_max, Nx)
y_arr_3D = np.linspace(y_min, y_max, Ny)
z_arr_3D = np.linspace(z_min, z_max, Nz)
# make 3D circle
data_3D = np.zeros( (Nx, Ny, Nz) )
for ii in range(Nx):
for jj in range(Ny):
# calculate R corresponding to x in (xz) plane
R = np.sqrt(x_arr_3D[ii]**2 + y_arr_3D[jj]**2)
for kk in range(Nz):
# hiding the interpolator deep in the nested for loop
# is probably not very clever
data_3D[ii,jj,kk] = circle_xz( (R, z_arr_3D[kk]) )
make_simple_2Dplot( data_2D, x_arr, z_arr, N_contLevels=8 )
make_simple_3Dplot( data_3D, x_arr_3D, y_arr_3D, z_arr_3D )
</code></pre>
<p>As can be seen by the 2D output and the 3D output, see below, it works, but it is very slow.</p>
<p><a href="https://i.sstatic.net/Uy1PT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uy1PT.png" alt="2D plot" /></a></p>
<p><a href="https://i.sstatic.net/GFOMF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GFOMF.png" alt="3D plot" /></a></p>
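<p>For reference, this is the loop-free version I have been trying to write: build all (R, z) query points at once and call the interpolator a single time. Grid sizes are shrunk here so it runs quickly; I have only checked it against the loop at this small size:</p>
<pre><code>import numpy as np
import scipy.interpolate as interp

Nx, Ny, Nz = 20, 20, 10
x_arr = np.linspace(0, 10, Nx)
z_arr = np.linspace(0, 10, Nz)
# same circular test structure as above, built without loops
data_2D = (np.sqrt((x_arr[:, None] - 5) ** 2 + (z_arr[None, :] - 5) ** 2) < 2).astype(float)
circle_xz = interp.RegularGridInterpolator((x_arr, z_arr), data_2D,
                                           bounds_error=False, fill_value=0)

x3 = np.linspace(-10, 10, Nx)
y3 = np.linspace(-10, 10, Ny)
z3 = z_arr

# one (Nx*Ny*Nz, 2) array of query points, one interpolator call
R = np.sqrt(x3[:, None, None] ** 2 + y3[None, :, None] ** 2)
R3 = np.broadcast_to(R, (Nx, Ny, Nz))
Z3 = np.broadcast_to(z3[None, None, :], (Nx, Ny, Nz))
pts = np.stack([R3, Z3], axis=-1)
data_3D = circle_xz(pts.reshape(-1, 2)).reshape(Nx, Ny, Nz)
print(data_3D.shape)  # (20, 20, 10)
</code></pre>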
|
<python><for-loop><optimization><scipy><interpolation>
|
2024-03-23 17:39:29
| 1
| 2,029
|
Alf
|
78,211,526
| 8,405,296
|
PyTorch: AttributeError: 'torch.dtype' object has no attribute 'itemsize'
|
<p>I am trying to follow this article on medium <a href="https://dassum.medium.com/fine-tune-large-language-model-llm-on-a-custom-dataset-with-qlora-fb60abdeba07" rel="nofollow noreferrer">Article</a>.</p>
<p>I had a few problems with it; the remaining change I made was to the <code>TrainingArguments</code> object, where I added <code>gradient_checkpointing_kwargs={'use_reentrant':False}</code>.</p>
<p>So now I have the following objects:</p>
<pre class="lang-py prettyprint-override"><code>peft_training_args = TrainingArguments(
output_dir = output_dir,
warmup_steps=1,
per_device_train_batch_size=1,
gradient_accumulation_steps=4,
max_steps=100, #1000
learning_rate=2e-4,
optim="paged_adamw_8bit",
logging_steps=25,
logging_dir="./logs",
save_strategy="steps",
save_steps=25,
evaluation_strategy="steps",
eval_steps=25,
do_eval=True,
gradient_checkpointing=True,
gradient_checkpointing_kwargs={'use_reentrant':False},
report_to="none",
overwrite_output_dir = 'True',
group_by_length=True,
)
peft_model.config.use_cache = False
peft_trainer = transformers.Trainer(
model=peft_model,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
args=peft_training_args,
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
</code></pre>
<p>And when I call <code>peft_trainer.train()</code> I get the following error:</p>
<pre><code>AttributeError: 'torch.dtype' object has no attribute 'itemsize'
</code></pre>
<p>I'm using Databricks, and my pytorch version is <code>2.0.1+cu118</code></p>
|
<python><pytorch><databricks><huggingface><peft>
|
2024-03-23 16:05:27
| 5
| 1,362
|
Lidor Eliyahu Shelef
|
78,211,442
| 10,531,186
|
When using a jupyter notebook in VSCode, the Python interpreter is ignored
|
<p>I have a virtual environment with some installed packages and a notebook with some imports of packages that only exist in this environment.</p>
<p>I have done Ctrl + Shift + P > Python: Select Interpreter, and chosen my environment.</p>
<p>But, if I run the cell, it tells me ModuleNotFoundError, as it tries to use my global Python environment for the imports (if I install the packages globally the cell succeeds).</p>
<p>Therefore, the selected Interpreter is ignored, and the global environment is always used.</p>
<p>I also tried using my environment in: Jupyter: Select Interpreter to Start Jupyter Server, but the result did not change.</p>
<p>This problem does not exist in a python file, only in the jupyter notebooks.</p>
<p>How to fix it? Am I forced to install packages globally when using Jupyter Notebooks?</p>
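<p>This is the check I have been pasting into a notebook cell to see which interpreter the kernel actually runs (it works in a plain script too):</p>
<pre><code>import sys

# the kernel's interpreter; for the venv to be active this should point
# inside the environment's directory, not the global installation
print(sys.executable)
print(sys.prefix == sys.base_prefix)  # False when running inside a venv
</code></pre>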
|
<python><visual-studio-code><jupyter-notebook><jupyter><interpreter>
|
2024-03-23 15:37:12
| 2
| 324
|
someguy
|
78,211,423
| 1,136,300
|
dask dataframe aggregation without groupby (ddf.agg(['min','max']))?
|
<p>Pandas defines dataframe.agg, but Dask only defines dask_dataframe.groupby.agg.</p>
<p>Is there a way to have multiple aggregations over a column in dask without groupby?</p>
<p>I know describe() has column statistics, which solves <strong>one</strong> specific problem, but I'm looking for a general solution.</p>
<p>My first try was to create a dummy column with a single value, group by it, and call agg(['min','max']).
That worked, but the resulting dask DataFrame was a single row with multi-index columns, which dask can't transpose or stack (unimplemented, unless I'm doing it wrong).
I would like to keep everything in dask even though the result table is small enough, and trivial enough, to process in pandas alone, because I'm thinking about the general situation where exporting to pandas and re-importing a local result is unfeasible.</p>
|
<python><pandas><dask><dask-dataframe>
|
2024-03-23 15:30:08
| 1
| 683
|
Carlos Troncoso
|
78,211,422
| 9,142,615
|
Why does my ffmpeg command fail from a python subprocess?
|
<p>I want to concat two movies with ffmpeg. In the shell I can execute this:
<code>\\programs\2d\ffmpeg\inst\ffmpeg.bat -y -i "concat:C:/daten/movieA.ts1|C:/daten/movieB.ts2" -c copy -bsf:a aac_adtstoasc C:/daten/movieConcat.mov</code> and it works fine. If I try to call it from a python subprocess:</p>
<pre><code>import subprocess
cmd = [r"\\programs\2d\ffmpeg\inst\ffmpeg.bat", "-i", '"concat:C:/daten/movieA.ts1|C:/daten/movieB.ts2"', "-c", "copy", "-bsf:a aac_adtstoasc", "C:/daten/movieConcat.mov"]
result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
if result.returncode > 0:
print("create concat failed")
print(result.stdout)
print(result.stderr)
</code></pre>
<p>I get this error:</p>
<pre><code>Trailing option(s) found in the command: may be ignored.
[in#0 @ 00000222c056a1c0] Error opening input: Invalid argument
Error opening input file "concat:C:/daten/movieA.ts1|C:/daten/movieB.ts2".
Error opening input files: Invalid argument
</code></pre>
<p>I have no idea what's wrong with my call and I'd appreciate any hints.</p>
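<p>For reference, here is the variant I am now experimenting with: drop <code>shell=True</code> (a list of args should not go through the shell), remove the manual quotes around the concat spec, and split <code>-bsf:a</code> and its value into separate items. The paths are from my machine, and the run is guarded so the sketch does nothing if the binary is not resolvable:</p>
<pre><code>import shutil
import subprocess

cmd = [
    r"\\programs\2d\ffmpeg\inst\ffmpeg.bat",
    "-y",
    "-i", "concat:C:/daten/movieA.ts1|C:/daten/movieB.ts2",  # no extra quotes
    "-c", "copy",
    "-bsf:a", "aac_adtstoasc",  # option and value as separate list items
    "C:/daten/movieConcat.mov",
]

# only run if the binary is actually resolvable on this machine
if shutil.which(cmd[0]):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode)
</code></pre>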
|
<python><ffmpeg>
|
2024-03-23 15:29:31
| 1
| 1,978
|
haggi krey
|
78,211,399
| 3,305,998
|
How can I separate a rust library and the pyo3 exported python extensions which wrap it
|
<p>I have a rust library which provides useful functionality for use in other rust programs. Additionally I would like to provide this functionality as a python extension (using pyo3 and setuptools-rust, although most of this is tool agnostic)</p>
<p>The documentation examples all show a single library. This means that anyone building the rust library and using it in the rust ecosystem needs the python headers and gets the exported python module as well.</p>
<p>How can I separate this as an independent rust library offering the functionality in rust and a python package offering the functionality in python?</p>
|
<python><rust><pyo3>
|
2024-03-23 15:21:21
| 1
| 318
|
MusicalNinja
|
78,211,377
| 22,213,065
|
Script does not list Excel search results for me
|
<p>I wrote a python script that must search for <code>Mar</code> <code>Jun</code> <code>Sep</code> <code>Dec</code> in my xlsx file and write the names of the matching columns to a text file.</p>
<pre><code>import pandas as pd
# Path to the Excel file
excel_file = r'E:\Desktop\Big_comp_cap\New Microsoft Excel Worksheet.xlsx'
# Read the Excel file
df = pd.read_excel(excel_file)
# Specify the month names to search for
months_to_search = ['Mar', 'Jun', 'Sep', 'Dec']
# Initialize a set to store matching column names
matching_columns = set()
# Iterate over each cell in the DataFrame
for column in df.columns:
for index, value in df[column].items():
# Check if the value is a string and contains any of the specified month names
if isinstance(value, str):
for month in months_to_search:
if month in value:
matching_columns.add(column)
break # Once a match is found, move to the next cell
# Write the column names to a text file
output_file = 'matching_columns.txt'
with open(output_file, 'w') as file:
for col_name in matching_columns:
file.write(col_name + '\n')
print("Matching column names have been saved to:", output_file)
</code></pre>
<p>But it does not find any results!<br />
<strong>Note that the script must look at values, not comments or formulas.</strong><br />
Where is the problem in my script?</p>
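<p>To check my suspicion that the cells are real dates rather than strings, here is a reduced version that also formats datetime cells before matching (the sample frame is made up):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({
    "A": pd.to_datetime(["2024-03-01", "2024-01-15"]),  # real datetimes
    "B": ["foo", "bar"],                                # plain strings
})
months_to_search = ("Mar", "Jun", "Sep", "Dec")

matching_columns = set()
for column in df.columns:
    for value in df[column]:
        # render datetimes as a month abbreviation; str() alone gives
        # "2024-03-01 00:00:00", which never contains "Mar"
        text = value.strftime("%b") if hasattr(value, "strftime") else str(value)
        if any(month in text for month in months_to_search):
            matching_columns.add(column)
            break

print(sorted(matching_columns))  # ['A']
</code></pre>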
|
<python><pandas><excel><xlsx>
|
2024-03-23 15:12:14
| 0
| 781
|
Pubg Mobile
|
78,211,255
| 11,110,455
|
How to low stdout verbosity for AppDynamics Agent and Proxy in Python Applications
|
<p>I'm integrating the AppDynamics Python Agent into a FastAPI project for monitoring purposes and have encountered a bit of a snag regarding log verbosity in my stdout. I'm launching my FastAPI app with the following command to include the AppDynamics agent:</p>
<pre class="lang-bash prettyprint-override"><code>pyagent run -c appdynamics.cfg uvicorn my_app:app --reload
</code></pre>
<p>My goal is to <strong>reduce the verbosity of the logs from both the AppDynamics agent and the proxy that are output to stdout</strong>, aiming to keep my console output clean and focused on more critical issues.</p>
<p>My module versions:</p>
<pre class="lang-py prettyprint-override"><code>$ pip freeze | grep appdy
appdynamics==23.10.0.6327
appdynamics-bindeps-linux-x64==23.10.0
appdynamics-proxysupport-linux-x64==11.64.3
</code></pre>
<p>Here's the content of my <code>appdynamics.cfg</code> configuration file:</p>
<pre class="lang-ini prettyprint-override"><code>[agent]
app = my-app
tier = my-tier
node = teste-local-01
[controller]
host = my-controller.saas.appdynamics.com
port = 443
ssl = true
account = my-account
accesskey = my-key
[log]
level = warning
debugging = off
</code></pre>
<p>I attempted to decrease the log verbosity further by modifying the <code>log4j.xml</code> file for the proxy to set the logging level to WARNING. However, this change didn't have the effect I was hoping for. The <code>log4j.xml</code> file I adjusted is located at:</p>
<pre class="lang-bash prettyprint-override"><code>/tmp/appd/lib/cp311-cp311-63ff661bc175896c1717899ca23edc8f5fa87629d9e3bcd02cf4303ea4836f9f/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j.xml
</code></pre>
<p>Here are the adjustments I made to the <code>log4j.xml</code>:</p>
<pre class="lang-xml prettyprint-override"><code> <appender class="com.singularity.util.org.apache.log4j.ConsoleAppender" name="ConsoleAppender">
<layout class="com.singularity.util.org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="%d{ABSOLUTE} %5p [%t] %c{1} - %m%n" />
</layout>
<filter class="com.singularity.util.org.apache.log4j.varia.LevelRangeFilter">
<param name="LevelMax" value="FATAL" />
<param name="LevelMin" value="WARNING" />
</filter>
</code></pre>
<p>Despite these efforts, I'm still seeing a high volume of logs from both the agent and proxy. Could anyone provide guidance or suggestions on how to effectively lower the log output to stdout for both the AppDynamics Python Agent and its proxy? Any tips on ensuring my changes to <code>log4j.xml</code> are correctly applied would also be greatly appreciated.</p>
<p>Thank you in advance for your help!</p>
<h3>Example of logging messages I would like to remove from my stdout:</h3>
<pre class="lang-py prettyprint-override"><code>2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
2024-03-23 11:15:28,409 [INFO] appdynamics.proxy.watchdog <22759>: Started watchdog with pid=22759
...
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:51 BRT 2024[DEBUG]: JavaAgent - Setting AgentClassLoader as Context ClassLoader
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - Low Entropy Mode: Attempting to swap to non-blocking PRNG algorithm
[AD Thread Pool-ProxyControlReq0] Sat Mar 23 11:15:52 BRT 2024[INFO]: JavaAgent - UUIDPool size is 10
Agent conf directory set to [/home/wsl/.pyenv/versions/3.11.6/lib/python3.11/site-packages/appdynamics_bindeps/proxy/conf]
...
11:15:52,167 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Starting BT Logs at Sat Mar 23 11:15:52 BRT 2024
11:15:52,168 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - ###########################################################
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] BusinessTransactions - Using Proxy Version [Python Agent v23.10.0.6327 (proxy v23.10.0.35234) compatible with 4.5.0.21130 Python Version 3.11.6]
11:15:52,169 INFO [AD Thread Pool-ProxyControlReq0] JavaAgent - Logging set up for log4j2
...
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] JDBCConfiguration - Setting normalizePreparedStatements to true
11:15:52,965 INFO [AD Thread Pool-ProxyControlReq0] CallGraphConfigHandler - Call Graph Config Changed callgraph-granularity-in-ms Value -null
</code></pre>
|
<python><logging><proxy><appdynamics>
|
2024-03-23 14:30:22
| 1
| 1,813
|
Felipe Windmoller
|
78,211,236
| 6,618,225
|
Creating a WHILE-Loop with a dynamic number of conditions in Python
|
<p>I am looking for a solution to create a dynamic WHILE Loop. For example, if I have a game with a flexible number of players (2-6) and each player has a score, the WHILE Loop should end when one of those players reached a score of 100. I had the idea to create a nested list where every list item is another list that contains the player information, i.e. player name and score, for example:</p>
<pre><code>players = [['Player 1', 0], ['Player 2', 0]]
</code></pre>
<p>In this example, there are two players, Player 1 and Player 2, each with a current score of 0. Here is some example code:</p>
<pre><code>import random
players = [['Player 1', 0], ['Player 2', 0]]
while players[0][1] < 100 and players[1][1] < 100:
chances = [0, 1]
for i in range(0, len(players)):
player = players[i]
score = random.choice(chances)
if score == 1:
player[1] += 1
if player[1] == 100:
winner = player[0]
print('Congrats {0}, you have won the game'.format(winner))
</code></pre>
<p><code>chances</code> represents whatever game they are playing: if a player gets a 0, nothing happens; if he gets a 1, his score increases by 1. Whoever reaches 100 first wins the game.</p>
<p>If there was now a third, fourth or fifth player, I would have to adjust the WHILE loop in the code accordingly:</p>
<pre><code>players = [['Player 1', 0], ['Player 2', 0], ['Player 3', 0]]
</code></pre>
<p>or</p>
<pre><code>players = [['Player 1', 0], ['Player 2', 0], ['Player 3', 0], ['Player 4', 0]]
</code></pre>
<p>and so forth.</p>
<p>Is there a dynamic way to do that where I do not need to have a fixed number of players or cause an IndexError exception?</p>
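<p>One way to make the condition independent of the player count (a sketch, not the asker's final code) is to replace the hard-coded comparisons with a generator expression over the whole list:</p>

```python
import random

players = [['Player 1', 0], ['Player 2', 0], ['Player 3', 0]]

# all(...) stays True while every score is below 100, for any player count,
# so adding or removing players needs no change to the loop condition.
while all(score < 100 for _, score in players):
    for player in players:
        player[1] += random.choice([0, 1])

winner = max(players, key=lambda p: p[1])
print('Congrats {0}, you have won the game'.format(winner[0]))
```

<p>Since the <code>while</code> condition is re-checked only after a full pass over the players, the winning score is exactly 100, and no index is ever hard-coded, so no <code>IndexError</code> can occur.</p>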
|
<python><python-3.x><while-loop>
|
2024-03-23 14:25:41
| 1
| 357
|
Kai
|
78,211,119
| 3,015,186
|
How to tackle "Statement is unreachable [unreachable]" with mypy when setting attribute value in a method?
|
<h3>Problem description</h3>
<p>Suppose a following test</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def __init__(self):
self.value: int | None = None
def set_value(self, value: int | None):
self.value = value
def test_foo():
foo = Foo()
assert foo.value is None
foo.set_value(1)
assert isinstance(foo.value, int)
assert foo.value == 1 # unreachable
</code></pre>
<p>The test:</p>
<ul>
<li>First, checks that <code>foo.value</code> is <code>None</code></li>
<li>Then, sets the value using a method.</li>
<li>Then it checks that the <code>foo.value</code> has changed.</li>
</ul>
<p>When running the test with mypy version 1.9.0 (latest at the time of writing), and having <a href="https://mypy.readthedocs.io/en/stable/config_file.html#confval-warn_unreachable" rel="nofollow noreferrer">warn_unreachable</a> set to True, one gets:</p>
<pre><code>(venv) niko@niko-ubuntu-home:~/code/myproj$ python -m mypy tests/test_foo.py
tests/test_foo.py:16: error: Statement is unreachable [unreachable]
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<h3>What I have found</h3>
<ul>
<li>There is an open issue in the mypy GitHub: <a href="https://github.com/python/mypy/issues/11969" rel="nofollow noreferrer">https://github.com/python/mypy/issues/11969</a> One comment said to use <a href="https://pypi.org/project/safe-assert/" rel="nofollow noreferrer">safe-assert</a>, but after rewriting the test as</li>
</ul>
<pre class="lang-py prettyprint-override"><code>from safe_assert import safe_assert
def test_foo():
foo = Foo()
safe_assert(foo.value is None)
foo.set_value(1)
safe_assert(isinstance(foo.value, int))
assert foo.value == 1
</code></pre>
<p>the problem persists (safe-assert 0.4.0)<sup>[1]</sup>. This time, both mypy and VS Code Pylance think that <code>foo.set_value(1)</code> two lines above is not reachable.</p>
<h2>Question</h2>
<p>How can I say to mypy that the <code>foo.value</code> has changed to <code>int</code> and that it should continue checking also everything under the <code>assert isinstance(foo.value, int)</code> line?</p>
<hr />
<p><strong><sup>[1]</sup></strong> UPDATE: The safe_assert v. 0.5.0 has <a href="https://github.com/wemake-services/safe-assert/blob/master/CHANGELOG.md" rel="nofollow noreferrer">fixed the issue</a></p>
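<p>For reference, one blunt workaround (my own suggestion, not from the question) is <code>typing.cast</code>, which overrides whatever the checker has narrowed the attribute to:</p>

```python
from __future__ import annotations

from typing import cast

class Foo:
    def __init__(self) -> None:
        self.value: int | None = None

    def set_value(self, value: int | None) -> None:
        self.value = value

def test_foo() -> None:
    foo = Foo()
    assert foo.value is None
    foo.set_value(1)
    # cast() asserts the runtime type outright, so the checker no longer
    # considers the following line unreachable.
    value = cast(int, foo.value)
    assert value == 1
```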
|
<python><mypy><python-typing>
|
2024-03-23 13:51:07
| 3
| 35,267
|
Niko Fohr
|
78,211,002
| 5,831,073
|
Comparison in python without functools.cmp_to_key(func)
|
<p>I would like to sort a list after doing some complex comparison with the values in it. For instance,</p>
<pre><code>sorted_result = sorted(unsorted_list, key=functools.cmp_to_key(complex_compare), reverse=True)
</code></pre>
<p>where <strong>complex_compare</strong> is the method that is currently doing said comparison.</p>
<p>However, I would like to do this <em>without</em> the use of <strong>functools.cmp_to_key</strong> because I was going through the python docs and I see that it says:</p>
<blockquote>
<p>This function is primarily used as a transition tool for programs
being converted from Python 2 which supported the use of comparison
functions.</p>
</blockquote>
<p>I looked around and couldn't find a simple way of doing something similar that is just <em>not transitional</em> (which I was a bit surprised by – not sure if I missed something). What is a good way to do what I'm looking for?</p>
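<p>For orderings that can be phrased per element ("sort by this, then by that"), a plain key function is the non-transitional form; a hypothetical example, not from the question:</p>

```python
# Hypothetical comparator logic: longer strings first, ties broken
# alphabetically -- expressed as a key instead of a pairwise comparison.
words = ['pear', 'fig', 'apple', 'kiwi']

# Map each element to a tuple whose natural ordering encodes the rules;
# sorted() then needs no comparator at all.
sorted_result = sorted(words, key=lambda w: (-len(w), w))
print(sorted_result)  # ['apple', 'kiwi', 'pear', 'fig']
```

<p>Truly pairwise comparisons that cannot be phrased per element are exactly what <code>functools.cmp_to_key</code> remains supported for, despite the transitional wording in the docs.</p>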
|
<python><python-3.x><list><sorting>
|
2024-03-23 13:17:13
| 1
| 407
|
spinyBabbler
|
78,210,979
| 1,353,930
|
Handle mixed charsets in the same json file
|
<p>Given I have the following file:</p>
<pre><code>{
"title": {
"ger": "Komödie" (utf8, encoded as c3 b6)
},
"files": [
{
"filename": "Kom�die" (latin1, encoded as f6)
}
]
}
</code></pre>
<p>(might look differently if you try to copy-paste it)</p>
<p>This happened due to an application bug, I cannot fix the source which generates these files.</p>
<p>I try now to fix the charset of the filename field(s), there can be multiple of them. I tried first with jq (single field):</p>
<pre><code>value="$(jq '.files[0].filename' <in.txt | iconv -f latin1 -t utf-8)"
jq --arg f "$value" '.files[0].filename = $f' <in.txt
</code></pre>
<p>But jq interprets the whole file as utf-8 and this damages the single f6 character.</p>
<p>I would like to find a solution in Python, but there too, the input is interpreted as utf-8 by default on Linux. I tried with 'ascii', but that doesn't allow characters >= 128.</p>
<p>Now, I think I found a way, but the json serializer escapes all characters. As I (intentionally) work with the wrong character set, the escaped sequence is also garbage.</p>
<pre><code>#!/usr/bin/python3
import sys
import io
import json
with open('in.txt', encoding='latin1') as fh:
j = json.load(fh)
for f in j['files']:
f['filename'] = f['filename'].encode('utf-8').decode('latin1') # might be wrong, couldn't test
with open('out.txt', 'w', encoding='latin1') as fh:
json.dump(j, fh)
</code></pre>
<p>What can I do to achieve the expected result, a clean non-escaped utf-8 json file?</p>
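<p>A sketch of one possible approach (assuming only the <code>filename</code> fields are latin1 and everything else is utf-8): decode the whole file as latin1, which accepts every byte, repair the mojibake utf-8 fields by round-tripping them, and dump with <code>ensure_ascii=False</code> so nothing gets escaped. The file contents are inlined here as a stand-in:</p>

```python
import json

# Stand-in for the file: "title" holds utf-8 bytes (c3 b6),
# "filename" holds a single latin1 byte (f6).
raw = b'{"title": {"ger": "Kom\xc3\xb6die"}, "files": [{"filename": "Kom\xf6die"}]}'

data = json.loads(raw.decode('latin1'))  # latin1 maps every byte to a char

def to_utf8_text(s: str) -> str:
    """Re-decode mojibake utf-8 text; leave genuine latin1 text alone."""
    try:
        return s.encode('latin1').decode('utf-8')
    except UnicodeDecodeError:
        return s

data['title'] = {k: to_utf8_text(v) for k, v in data['title'].items()}
clean = json.dumps(data, ensure_ascii=False)  # real utf-8, no \uXXXX escapes
```

<p>The <code>filename</code> fields come out correct already from the latin1 decode, so only the utf-8 fields need the round trip.</p>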
|
<python><json><character-encoding><jq>
|
2024-03-23 13:09:29
| 1
| 5,504
|
Daniel Alder
|
78,210,916
| 1,209,675
|
Cython not recognizing Numpy types
|
<p>I'm learning Cython and can't get a simple example to work. I have the following code to compute a sigmoid function:</p>
<pre><code>import numpy as np
cimport numpy as cnp
cdef inline cnp.float32_t _sigmoid(cnp.float32_t x):
return 1/(1 + np.exp(-x))
</code></pre>
<p>Compiling gives an error <code>'float32_t' is not a type identifier</code></p>
<p>What am I missing?</p>
|
<python><numpy><cython>
|
2024-03-23 12:49:48
| 0
| 335
|
user1209675
|
78,210,800
| 315,168
|
Type hinting Python class with dynamic "any" attribute
|
<p>I have a Python class that supports "any" attribute through dynamic attribute resolution. This is one of the flavours of "attribute dict" pattern:</p>
<pre><code>class ReadableAttributeDict(Mapping[TKey, TValue]):
"""
The read attributes for the AttributeDict types
"""
def __init__(
self, dictionary: Dict[TKey, TValue], *args: Any, **kwargs: Any
) -> None:
self.__dict__ = dict(dictionary) # type: ignore
</code></pre>
<p>How can I tell Python type hinting that this class supports dynamic looking of attributes?</p>
<p>If I do</p>
<pre><code>value = my_attribute_dict.my_var
</code></pre>
<p>Currently, PyCharm and Datalore are complaining:</p>
<pre><code>Unresolved attribute reference 'my_var for class 'MyAttributeDict'
</code></pre>
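<p>One convention type checkers understand (a simplified sketch, not the asker's generic class) is to declare a <code>__getattr__</code> returning <code>Any</code>; its mere presence tells PyCharm and mypy that arbitrary attribute access is allowed:</p>

```python
from typing import Any, Dict

class AttributeDict:
    def __init__(self, dictionary: Dict[str, Any]) -> None:
        self.__dict__ = dict(dictionary)

    # Declaring __getattr__ signals "any attribute may exist" to the type
    # checker; at runtime it only fires for names absent from __dict__.
    def __getattr__(self, name: str) -> Any:
        raise AttributeError(name)

d = AttributeDict({'my_var': 42})
print(d.my_var)  # 42
```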
|
<python><pycharm><python-typing>
|
2024-03-23 12:09:49
| 1
| 84,872
|
Mikko Ohtamaa
|
78,210,495
| 4,906,944
|
Getting FolderID and folder name inside a particular folder in Google Drive
|
<p>I am using Colab in Python to get the folder ID and folder name inside a particular folder called myProject from Google Drive.</p>
<p><strong>Folder structure is like this:</strong></p>
<pre><code>Google Drive (When I open drive.google.com I see the below folder structure)
myProject
ImageToDoc
images
</code></pre>
<p>I have mounted the drive and it said successful.</p>
<p>But I always get "No folder id and Foldername". I know there are folders and files, the name of the folder is correct too.</p>
<p><strong>this is the code I am trying:</strong></p>
<pre><code>from googleapiclient.discovery import build
from google.oauth2 import service_account
import os
json_key_path = '/content/drive/My Drive/myProject/credentials.json'
if not os.path.exists(json_key_path):
print("JSON key file not found.")
exit()
credentials = service_account.Credentials.from_service_account_file(json_key_path)
drive_service = build('drive', 'v3', credentials=credentials)
def get_folder_id_by_name(folder_name, parent_folder_id='root'):
response = drive_service.files().list(
q=f"name='{folder_name}' and mimeType='application/vnd.google-apps.folder' and '{parent_folder_id}' in parents",
fields='files(id)').execute()
items = response.get('files', [])
if items:
return items[0]['id']
else:
return None
def list_folders_in_my_project():
my_project_folder_id = get_folder_id_by_name("myProject", parent_folder_id='root')
print("myProjectfolder ID:", my_project_folder_id)
if not my_project_folder_id:
print("myProject folder not found.")
return
response = drive_service.files().list(
q=f"'{my_project_folder_id}' in parents and mimeType='application/vnd.google-apps.folder'",
fields='files(name)').execute()
print("API Response:", response)
folders = response.get('files', [])
if folders:
print("Folders inside myProject folder:")
for folder in folders:
print(f"Folder Name: {folder['name']}")
else:
print("No folders found inside myProject folder.")
list_folders_in_my_project()
</code></pre>
<p>I am not sure what could be wrong with the above code. Can someone help me fix this?</p>
<p>Thanks!</p>
|
<python><google-drive-api><google-colaboratory><google-api-python-client><service-accounts>
|
2024-03-23 10:32:06
| 1
| 2,795
|
Sanjana Nair
|
78,210,428
| 2,667,066
|
Regexp for positions of runs of alternating characters
|
<p>I would like to find multiple regions within a string (in my case, a DNA sequence) that have the characters C and G alternating, in any order:</p>
<pre><code>str = "ANNNTGCGCCCCGCGGTGCGNNT"
^^^^ ^^^^ ^^^
pos 01234567890123456789012
0 1 2
</code></pre>
<p>Above I've marked the G/C C/G repeating regions, so I'm looking for a regexp that reports a list of start & end coords like <code>(5:8, 11:14, 17:19)</code> or alternatively, using half-closed intervals that report the position immediately after the end of each region <code>(5:9, 11:15, 17:20)</code>.</p>
<p>Ideally this would also be case-insensitive. Any suggestions gratefully accepted.</p>
<p>Edit - I'm happy to use any programming language implementation (e.g. Python's <code>re</code> module) to extract the positions.</p>
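<p>A sketch using Python's <code>re</code> module (the pattern is mine, not the only possible formulation): a strictly alternating run is one character plus at least one alternated pair, with an optional trailing character, which enforces a minimum length of 3 and forbids <code>CC</code>/<code>GG</code> inside a match:</p>

```python
import re

# Two branches: runs starting with C and runs starting with G.
pattern = re.compile(r'C(?:GC)+G?|G(?:CG)+C?', re.IGNORECASE)

seq = "ANNNTGCGCCCCGCGGTGCGNNT"
spans = [m.span() for m in pattern.finditer(seq)]
print(spans)  # [(5, 9), (11, 15), (17, 20)] half-open intervals
```

<p><code>Match.span()</code> gives exactly the half-closed intervals described in the question.</p>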
|
<python><regex>
|
2024-03-23 10:09:18
| 2
| 2,169
|
user2667066
|
78,210,323
| 3,293,726
|
Why does socket.accept() break the unfinished established connection?
|
<p>I have searched a lot of posts but could not find an answer. I understand that <code>socket.accept()</code> actually fetches the socket from the fully-connected queue. Also, I know that because of blocking IO, we have to rely on concurrency for multiple connections.</p>
<p>However, my question is what else does <code>socket.accept()</code> do? <strong>Will it break established connections</strong>? In my experiment, I use one thread to accept a new connection and <strong>leave it alone</strong>. In other words, I intentionally do NOT <code>recv()</code> because I want to hold every request in an effort to keep <strong>multiple</strong> connections.</p>
<p>I use several clients to send messages. Surprisingly, only one connection can sustain. The following is my code.</p>
<p>On the server side (on a Linux server):</p>
<pre><code>import time
import socket
HOST = "0.0.0.0"
PORT = 65432
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen()
while True:
conn, addr = s.accept()
</code></pre>
<p>On the client side (on the Macbook):</p>
<pre><code>import socket
HOST = "192.168.0.104"
PORT = 65432
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((HOST, PORT))
while True:
print('connect')
s.send(b"Hello, world")
</code></pre>
<p>When I start the server, the first client gets blocked. When I start the second client, the first client exits with an error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/home/xxx/test-client.py", line 14, in <module>
s.send(b"Hello, world")
BrokenPipeError: [Errno 32] Broken pipe
</code></pre>
<p>while the second client got blocked.</p>
<p>My question is:</p>
<ol>
<li>I expect the server can hold multiple connections, but why only one can sustain?</li>
<li>If <code>socket.accept()</code> always breaks the established connections, why concurrent can work well?</li>
</ol>
<hr />
<p>As @Mark Tolonen answered, the cause is the garbage-collection. When I use the following snippet, I can see multiple connections sustained.</p>
<pre><code>#!/usr/bin/env python3
import time
import socket
HOST = "0.0.0.0"
PORT = 65432
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((HOST, PORT))
s.listen()
connections = []
while True:
conn, addr = s.accept()
connections.append(conn)
</code></pre>
<hr />
<p>I would like to thank you all for your help, but would like to leave a note here to complain about the culture of current stackoverflow.</p>
<p>Before I asked this question, I invested days diving into sockets and surveyed relevant questions. It may seem naive to some viewers, but I really can't understand why some think of it as a bad question. One's question can always be too naive and simple for others, but it is hard for the owner, and he can only grow by learning. It can also give opportunities to others who may encounter the same challenge. Let's keep Stack Overflow open.</p>
|
<python><sockets>
|
2024-03-23 09:29:45
| 1
| 2,004
|
Tengerye
|
78,210,272
| 12,415,855
|
Django / UploadFileForm only provides No valid form?
|
<p>I am trying to create simple file-upload functionality with Django using the following code / files.</p>
<p>It's more or less taken from this example:
<a href="https://docs.djangoproject.com/en/5.0/topics/http/file-uploads/" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.0/topics/http/file-uploads/</a></p>
<p>forms.py:</p>
<pre><code>from django import forms
class UploadFileForm(forms.Form):
title = forms.CharField(max_length=50)
file = forms.FileField()
</code></pre>
<p>views.py:</p>
<pre><code>from django.http import HttpResponseRedirect
from django.shortcuts import render
from .forms import UploadFileForm
import os
import sys
def handle_uploaded_file(f):
path = os.path.abspath(os.path.dirname(sys.argv[0]))
fnFile = os.path.join(path, "gpp", "FILES", "QUESTIONS", "someFile.xyz")
print(fnFile)
with open(fnFile, 'wb+') as destination:
for chunk in f.chunks():
destination.write(chunk)
def home(request):
if request.method == 'POST':
form = UploadFileForm(request.POST, request.FILES)
print(request.POST)
print(request.FILES)
if form.is_valid():
print(f"Valid form")
handle_uploaded_file(request.FILES['file'])
return HttpResponseRedirect('/success/url/')
else:
print("No valid form")
else:
form = UploadFileForm()
return render(request, 'home.html', {'form': form})
</code></pre>
<p>Generally the form looks fine and I can choose the file:</p>
<p><a href="https://i.sstatic.net/RiPnG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RiPnG.png" alt="enter image description here" /></a></p>
<p>But when I submit the form it doesn't work, and I see the following in the log:</p>
<pre><code>[23/Mar/2024 10:02:35] "GET / HTTP/1.1" 200 1777
<QueryDict: {'csrfmiddlewaretoken': ['HWaddf1sBsKJc1k4CY9LiijIv1VAlxjWkRoFRixEP12rNDYyIx6zRYFC2MvPKIFm'], 'title': ['Test'], 'file': ['gpt.zip'], 'save': ['']}>
<MultiValueDict: {}>
No valid form
[23/Mar/2024 10:04:33] "POST / HTTP/1.1" 200 1869
</code></pre>
<p>So it seems that <code>request.FILES</code> is empty for whatever reason. Why is this not working as intended?</p>
|
<python><django>
|
2024-03-23 09:06:57
| 1
| 1,515
|
Rapid1898
|
78,210,261
| 710,955
|
How to configure inference settings to generate images with the Stable Diffusion XL pipeline?
|
<p>I'm working with the Stable Diffusion XL (SDXL) model from Hugging Face's diffusers library and I want to set these inference parameters:</p>
<ul>
<li>width: Width of the image in pixels.</li>
<li>height: Height of the image in pixels.</li>
<li>steps: Amount of inference steps performed on image generation.</li>
<li>cfg_scale: How strictly the diffusion process adheres to the prompt text (higher values keep your image closer to your prompt).</li>
</ul>
<p>Here's a minimal example of my current implementation:</p>
<pre><code>import os
import datetime
from diffusers import DiffusionPipeline
import torch
if __name__ == "__main__":
output_dir = "output_images"
os.makedirs(output_dir, exist_ok=True)
pipe = DiffusionPipeline.from_pretrained(
# https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0
"stabilityai/stable-diffusion-xl-base-1.0",
torch_dtype=torch.float16,
use_safetensors=True,
variant="fp16",
)
pipe.to("cuda")
# enabling xformers for memory efficiency
pipe.enable_xformers_memory_efficient_attention()
prompt = "Extreme close up of a slice a lemon with splashing green cocktail, alcohol, healthy food photography"
images = pipe(prompt=prompt).images
timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
image_path = os.path.join(output_dir, f"output_{timestamp}.jpg")
images[0].save(image_path)
print(f"Image saved at: {image_path}")
</code></pre>
<p>How can I set these inference parameters?</p>
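<p>For reference, the pipeline's <code>__call__</code> accepts these settings directly under the diffusers parameter names <code>width</code>, <code>height</code>, <code>num_inference_steps</code>, and <code>guidance_scale</code>. A small helper (the helper function itself is hypothetical, not part of the diffusers API) makes the mapping from the question's names explicit:</p>

```python
def sdxl_kwargs(prompt, width=1024, height=1024, steps=30, cfg_scale=7.5):
    # Translate the question's parameter names to the names the
    # Stable Diffusion XL pipeline's __call__ expects.
    return {
        "prompt": prompt,
        "width": width,
        "height": height,
        "num_inference_steps": steps,   # "steps"
        "guidance_scale": cfg_scale,    # "cfg_scale"
    }
```

<p>With the pipeline from the question this would be used as <code>images = pipe(**sdxl_kwargs(prompt, steps=40, cfg_scale=8.0)).images</code>.</p>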
|
<python><pytorch><huggingface-transformers><stable-diffusion>
|
2024-03-23 09:03:52
| 1
| 5,809
|
LeMoussel
|
78,209,984
| 14,224,948
|
Problems with deploying my first Django app on Azure
|
<p>I have tried different approaches, but none of them works. I still get:
<code>ERROR: No matching distribution found for Python==3.9.18</code>
(or whichever other version I try).
First, I tried installing the version defined in <code>requirements.txt</code> with the <code>Set up Python</code> step:</p>
<pre><code>- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: '3.11.3'
</code></pre>
<p>(the base version for this app was 3.11.3), changing the <code>requirements.txt</code> to match the version that I found from the Action logs in GitHub:
<code>Python Version: /opt/python/3.9.18/bin/python3.9</code></p>
<p>And then checking which version I have installed locally and trying that one (it's 3.10.1). Each time I get the same error but with different version. Please help.</p>
<p>You can browse the full approach here:
<a href="https://github.com/JJDabrowski/Portfolio" rel="nofollow noreferrer">https://github.com/JJDabrowski/Portfolio</a></p>
|
<python><django><azure>
|
2024-03-23 06:53:07
| 1
| 1,086
|
Swantewit
|
78,209,821
| 9,874,309
|
Unable initialise pub/sub with SparkSession
|
<p>So I'm adding the Pub/Sub connector to my SparkSession in order to use Spark streaming, but every time it gives an error that the Pub/Sub data source was not found. How can I fix it?</p>
<pre><code>from pyspark.sql import SparkSession
# Initialize SparkSession
spark = SparkSession \
.builder \
.appName("PubSubSpark") \
.config("spark.jars.packages", "org.apache.bahir:spark-streaming-pubsub_2.11:2.4.0") \
.getOrCreate()
project_number = "********"
topic = "*****"
sub = "*****"
# Read from Pub/Sub into Spark DataFrame
df = spark.read \
.format("pubsub") \
.option(f"topic", "projects/{project_number}/topics/{topic}") \
.option(f"subscription", "projects/{project_number}/subscriptions/{sub}") \
.load()
# Show DataFrame
df.show()
# Write DataFrame to Pub/Sub
df.write \
.format("pubsub") \
.option("topic", "projects/{project_number}/topics/{topic}") \
.save()
# Stop SparkSession
spark.stop()
</code></pre>
<p>Upon running this code I get this error:</p>
<pre><code>Py4JJavaError Traceback (most recent call last)
<ipython-input-7-e4bd27732667> in <cell line: 15>()
17 .option(f"topic", "projects/{project_number}/topics/{topic}") \
18 .option(f"subscription", "projects/{project_number}/subscriptions/{sub}") \
---> 19 .load()
20
21 # Show DataFrame
3 frames
/usr/local/lib/python3.10/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
324 value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
325 if answer[1] == REFERENCE_TYPE:
--> 326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
328 format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o70.load.
: org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find the data source: pubsub. Please find packages at `https://spark.apache.org/third-party-projects.html`.
</code></pre>
|
<python><apache-spark><google-cloud-platform><pyspark><google-cloud-pubsub>
|
2024-03-23 05:12:40
| 0
| 1,201
|
Chidananda Nayak
|
78,209,759
| 6,423,456
|
How can I iterate over an AsyncIterator stream in Python with a timeout, without cancelling the stream?
|
<p>I'm dealing with an object that is an <code>AsyncIterator[str]</code>.
It gets messages from the network, and yields them as strings.
I want to create a wrapper for this stream that buffers these messages, and yields them at a regular interval.</p>
<p>My code looks like this:</p>
<pre class="lang-py prettyprint-override"><code>async def buffer_stream(stream: AsyncIterator[str], buffer_time: Optional[float]) -> AsyncIterator[str]:
"""
Buffer messages from the stream, and yields them at regular intervals.
"""
last_sent_at = time.perf_counter()
buffer = ''
stop = False
while not stop:
time_to_send = False
timeout = (
max(buffer_time - (time.perf_counter() - last_sent_at), 0)
if buffer_time else None
)
try:
buffer += await asyncio.wait_for(
stream.__anext__(),
timeout=timeout
)
except asyncio.TimeoutError:
time_to_send = True
except StopAsyncIteration:
time_to_send = True
stop = True
else:
if time.perf_counter() - last_sent_at >= buffer_time:
time_to_send = True
if not buffer_time or time_to_send:
if buffer:
yield buffer
buffer = ''
last_sent_at = time.perf_counter()
</code></pre>
<p>As far as I can tell, the logic makes sense, but as soon as it hits the first timeout, it interrupts the stream, and exits early, before the stream is done.</p>
<p>I think this might be because <code>asyncio.wait_for</code> specifically says:</p>
<blockquote>
<p>When a timeout occurs, it cancels the task and raises TimeoutError. To avoid the task cancellation, wrap it in <code>shield()</code>.</p>
</blockquote>
<p>I tried wrapping it in shield:</p>
<pre class="lang-py prettyprint-override"><code>buffer += await asyncio.wait_for(
shield(stream.__anext__()),
timeout=timeout
)
</code></pre>
<p>This errors out for a different reason: <code>RuntimeError: anext(): asynchronous generator is already running</code>. From what I understand, that means that it's still in the process of getting the previous <code>anext()</code> when it tries to get the next one, which causes an error.</p>
<p>Is there a proper way to do this?</p>
<p>Demo: <a href="https://www.sololearn.com/en/compiler-playground/cBCVnVAD4H7g" rel="nofollow noreferrer">https://www.sololearn.com/en/compiler-playground/cBCVnVAD4H7g</a></p>
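<p>A sketch of one workaround (my own, under the assumption that the goal is only to avoid cancelling or re-entering the generator): create the <code>__anext__</code> task once with <code>ensure_future</code>, wait on it with <code>asyncio.wait</code> (which, unlike <code>wait_for</code>, does not cancel on timeout), and keep re-awaiting the same task on later iterations:</p>

```python
import asyncio
import time
from typing import AsyncIterator, Optional

async def buffer_stream(stream: AsyncIterator[str],
                        buffer_time: Optional[float]) -> AsyncIterator[str]:
    buffer = ''
    last_sent_at = time.perf_counter()
    # One task for the next item; it survives across timeouts.
    pending = asyncio.ensure_future(stream.__anext__())
    while True:
        timeout = (max(buffer_time - (time.perf_counter() - last_sent_at), 0)
                   if buffer_time else None)
        # asyncio.wait does NOT cancel the task on timeout.
        done, _ = await asyncio.wait({pending}, timeout=timeout)
        if pending in done:
            try:
                buffer += pending.result()
            except StopAsyncIteration:
                if buffer:
                    yield buffer
                return
            # Only now is it safe to ask the generator for the next item.
            pending = asyncio.ensure_future(stream.__anext__())
        if not buffer_time or time.perf_counter() - last_sent_at >= buffer_time:
            if buffer:
                yield buffer
                buffer = ''
            last_sent_at = time.perf_counter()
```

<p>The important difference from <code>wait_for</code> is that a timed-out <code>__anext__</code> task stays alive and is awaited again on the next loop iteration, so the generator is never cancelled and never re-entered.</p>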
|
<python><asynchronous><python-asyncio>
|
2024-03-23 04:35:15
| 1
| 2,774
|
John
|
78,209,422
| 920,731
|
Convert Python Request call to PHP (cURL)
|
<p>I am pulling my hair out :)</p>
<p>The company I am using has an API I need to use, but they only provide an example in Python.</p>
<p>Can someone help me write it in PHP? I have tried several times but no luck. I think the problem is in the file sending.</p>
<p>Here is the Python code:</p>
<pre class="lang-py prettyprint-override"><code>import requests
import json
def send_excel_catalog_import(fname, auth_token, report_recipient_email):
api_url = f"https://www.etailpet.com/COMPANY/api/v1/catalog-update/"
payload = {"email": report_recipient_email}
files = [
("product_import", open(fname, "rb"))
]
headers = {"Authorization": f"Bearer {auth_token}"}
response = requests.request(
"POST", api_url, headers=headers, data=payload, files=files
)
results_str = response.content.decode("utf8")
return response.status_code, json.loads(results_str)
if __name__ == "__main__":
    fname = "product_import.xlsx"
    # the original example omitted the two other required arguments
    send_excel_catalog_import(fname, "YOUR_AUTH_TOKEN", "you@example.com")
</code></pre>
|
<python><php>
|
2024-03-23 01:04:03
| 1
| 341
|
rsirota
|
78,209,132
| 9,848,968
|
Scrapy Playwright Page Method: Prevent timeout error if selector cannot be located
|
<p>My question is related to Scrapy Playwright and how to prevent the Page of a Spider from crashing, if in the course of applying a PageMethod a specific selector cannot be located.</p>
<p>Below is a Scrapy Spider that uses Playwright to interact with the website.
The spider waits for the cookie button to appear and then clicks it.
The selector as well as the actions are defined in the meta attribute of the Request object and here in a dictionary in a list called page_methods.
If the GDPR button is not present, the Page crashes with a timeout error:
<code>playwright._impl._errors.TimeoutError: Timeout 30000ms exceeded.</code></p>
<pre class="lang-python prettyprint-override"><code>from typing import Iterable
import scrapy
from scrapy_playwright.page import PageMethod
GDPR_BUTTON_SELECTOR = "iframe[id^='sp_message_iframe'] >> internal:control=enter-frame >> .sp_choice_type_11"
class GuardianSpider(scrapy.Spider):
name = "guardian"
allowed_domains = ["www.theguardian.com"]
start_urls = ["https://www.theguardian.com"]
def start_requests(self) -> Iterable[scrapy.Request]:
url = "https://www.theguardian.com"
yield scrapy.Request(
url,
meta=dict(
playwright=True,
playwright_include_page=True,
playwright_page_methods=[
PageMethod("wait_for_selector", GDPR_BUTTON_SELECTOR),
PageMethod("dispatch_event", GDPR_BUTTON_SELECTOR, "click"),
],
),
)
def parse(self, response):
pass
</code></pre>
<p>If you run the spider, and the Cookie button is present, everything works fine.
However, if the Cookie button is not present, the spider crashes with a timeout error.</p>
<p>This is not how I would like to handle the GDPR button. I would like to have a function that checks if the button is present and then clicks it.
Below is a function in plain Python-playwright that does exactly that. The function accepts a Page object and checks if the GDPR button is present. If it is, it clicks it. If it is not, it does nothing.</p>
<pre class="lang-python prettyprint-override"><code>from playwright.sync_api import Page
def accept_gdpr(page: Page) -> None:
if page.locator(GDPR_BUTTON_SELECTOR).count():
page.locator(GDPR_BUTTON_SELECTOR).dispatch_event("click")
</code></pre>
<p>How can I achieve the same functionality inside the Scrapy Spider?</p>
|
<python><web-scraping><scrapy><playwright><scrapy-playwright>
|
2024-03-22 22:35:46
| 2
| 385
|
muw
|
78,209,123
| 2,612,259
|
ipywidget to replace Plotly legend item
|
<p>I am trying to replace the Plotly provided legend with a custom legend using ipywidgets.
A simple example is below where I have a checkbox for each trace that can be used to "highlight" the trace from the legend.</p>
<p>This seems to work fine, but I would like to reproduce something similar to what Plotly would normally show for the given trace. I can get all the info I need (line color, dash type, marker symbol, etc.), but I can't see an easy way to reproduce a legend entry graphic such that it looks like what Plotly would have produced.</p>
<p>Here is what I have so far, which should be a self contained example:</p>
<pre class="lang-py prettyprint-override"><code>import ipywidgets as widgets
from ipywidgets import interactive, fixed
import plotly.graph_objects as go
line = {'name': 'line','data': ((1,1), (2,2), (3,3)), 'color':'red', 'dash':'solid', 'symbol':'circle'}
square = {'name': 'square','data': ((1,1), (2,4), (3,9)), 'color':'blue', 'dash':'dash', 'symbol':'square'}
traces = (line, square)
def get_values(func, index):
return [e[index] for e in func['data']]
def toggle_highlight(highlight, fig, index):
new_width = 4 if fig.data[index].line.width == 2 else 2
fig.data[index].line.width = new_width
fig = go.FigureWidget()
legend_items = []
for i, t in enumerate(traces):
highlight_chk = interactive(toggle_highlight, highlight=False, fig=fixed(fig), index=fixed(i)).children[0]
item = widgets.HBox([widgets.Label(value=f"{t['name']}, c = {t['color']}, d = {t['dash']}, s = {t['symbol']}"), highlight_chk])
s = go.Scatter(name=t['name'], x=get_values(t, 0), y=get_values(t, 1), line=dict(width=2, color=t['color'], dash=t['dash']), marker=dict(symbol=t['symbol']), customdata=[{'index':i}])
fig.add_trace(s)
legend_items.append(item)
legend = widgets.VBox(legend_items)
display(legend)
display(fig)
</code></pre>
<p>This is what it looks like now:
<a href="https://i.sstatic.net/8aONK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8aONK.png" alt="enter image description here" /></a></p>
<p>this is what I would like to get it to look like:
<a href="https://i.sstatic.net/opDFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/opDFV.png" alt="enter image description here" /></a></p>
<p>Any thoughts on how to create such a "label"?</p>
|
<python><plotly><ipywidgets>
|
2024-03-22 22:32:54
| 1
| 16,822
|
nPn
|
78,209,118
| 12,049,252
|
Transpose within Transpose Notepad++?
|
<p>I have a text file that looks like this (but 132k lines)</p>
<pre><code>********
name : one
Place : city
Initial: none
********
name : two
Place : city2
Initial: none
********
name : three
Place : city3
Initial: none
Limits : some
</code></pre>
<p>I'm trying to move it into a friendlier format (Excel/database records). Each 'record' is separated by the ********; the fields for 90% of the records are all the same, but some have additional fields, like the limits in the 3rd record.</p>
<p>I would like a csv, or similar output like:</p>
<pre><code>name,place,initial,limit
one,city,none,n/a
two,city2,none,n/a
three,city3,none,some
</code></pre>
<p>Is Python better suited for parsing and manipulating this?</p>
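<p>Python is well suited to this; a minimal sketch (separator and field spellings taken from the sample, so the last column comes out as <code>limits</code>): split on the <code>********</code> lines, parse each <code>key : value</code> pair, and fill missing fields with <code>n/a</code>:</p>

```python
import csv
import io

raw = """\
********
name : one
Place : city
Initial: none
********
name : two
Place : city2
Initial: none
********
name : three
Place : city3
Initial: none
Limits : some
"""

records = []
for block in raw.split('********'):
    fields = {}
    for line in block.strip().splitlines():
        key, sep, value = line.partition(':')
        if sep:  # keep only "key : value" lines
            fields[key.strip().lower()] = value.strip()
    if fields:
        records.append(fields)

# Preserve first-seen field order across all records.
columns = []
for rec in records:
    for key in rec:
        if key not in columns:
            columns.append(key)

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(columns)
for rec in records:
    writer.writerow([rec.get(col, 'n/a') for col in columns])
print(out.getvalue())
```

<p>Collecting the column names from all records first is what handles the 10% of records with extra fields.</p>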
|
<python><notepad++>
|
2024-03-22 22:31:20
| 1
| 339
|
Gene Parmesan
|
78,209,115
| 1,492,229
|
How to pivot dataframe in python
|
<p>I have a dataset that looks like this</p>
<pre><code>StID SubClassID PhOrder PhAmt LbOrder LbAmt
4326 200572288 Anti1 23.3 Asprin 13.7
4326 200572288 Anti2 39.3 Morphin 2.2
4326 200572288 NULL NULL Medine 30.5
4326 200572288 Anti3 13.5 Kabomin 20.3
4326 200572288 NULL NULL Zorifin 0.2
0993 200348299 Anti1 8.4 Zorifin 9.7
0993 200348299 Anti2 10.9 Zorifin 95.6
0993 200348299 Anti4 48.9 NULL NULL
</code></pre>
<p>I want to flatten this table using one-hot encoding</p>
<p>so it looks like this</p>
<pre><code>StID SubClassID Anti1 Anti2 Anti3 Anti4 Asprin Morphin Medine Kabomin Zorifin
4326 200572288 23.3 39.3 13.5 NULL 13.7 2.2 30.5 20.3 0.2
0993 200348299 8.4 10.9 NULL 48.9 NULL NULL NULL NULL 52.65
</code></pre>
<p>Here is the code I did but I get many duplicated columns with wrong values</p>
<pre><code>data = {
'StId': [4326, 4326, 4326, 4326, 4326, 993, 993, 993],
'SubClassID': [200572288, 200572288, 200572288, 200572288, 200572288, 200348299, 200348299, 200348299],
'PhOrder': ['Anti1', 'Anti2', 'NULL', 'Anti3', 'NULL', 'Anti1', 'Anti2', 'Anti4'],
'PhAmt': [23.3, 39.3, None, 13.5, None, 8.4, 10.9, 48.9],
'LbOrder': ['Asprin', 'Morphin', 'Medine', 'Kabomin', 'Zorifin', 'Zorifin', 'Zorifin', None],
'LbAmt': [13.7, 2.2, 30.5, 20.3, 0.2, 9.7, 95.6, None]
}
df = pd.DataFrame(data)
df_pivot = df.pivot_table(index=['StId', 'SubClassID'], columns=['PhOrder','LbOrder'], values=['PhAmt','LbAmt'], aggfunc='first')
</code></pre>
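<p>One possible sketch using the sample data above: pivot each order/amount pair separately, then join on the keys. It assumes duplicate <code>LbOrder</code> entries should be averaged (which reproduces the 52.65 for Zorifin in the desired output); swap in another <code>aggfunc</code> if that assumption is wrong:</p>

```python
import pandas as pd

data = {
    'StId': [4326, 4326, 4326, 4326, 4326, 993, 993, 993],
    'SubClassID': [200572288] * 5 + [200348299] * 3,
    'PhOrder': ['Anti1', 'Anti2', 'NULL', 'Anti3', 'NULL', 'Anti1', 'Anti2', 'Anti4'],
    'PhAmt': [23.3, 39.3, None, 13.5, None, 8.4, 10.9, 48.9],
    'LbOrder': ['Asprin', 'Morphin', 'Medine', 'Kabomin', 'Zorifin', 'Zorifin', 'Zorifin', None],
    'LbAmt': [13.7, 2.2, 30.5, 20.3, 0.2, 9.7, 95.6, None],
}
df = pd.DataFrame(data)

# pivot each (order, amount) pair on its own, then join on the shared keys
ph = df.pivot_table(index=['StId', 'SubClassID'], columns='PhOrder',
                    values='PhAmt', aggfunc='first')
lb = df.pivot_table(index=['StId', 'SubClassID'], columns='LbOrder',
                    values='LbAmt', aggfunc='mean')  # duplicates averaged
flat = ph.join(lb).reset_index()
print(flat)
```

<p>Pivoting the two column pairs together (as in the question's single <code>pivot_table</code> call) crosses <code>PhOrder</code> with <code>LbOrder</code>, which is what produces the duplicated columns with wrong values.</p>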
|
<python><pandas>
|
2024-03-22 22:30:27
| 2
| 8,150
|
asmgx
|
78,209,071
| 3,917,215
|
How to identify the root from hierarchical data structure using Pandas
|
<p>I have a dataframe with the following columns: Parent, Parent_Rev, Child, Child_Rev. In this structure, Parent serves as a parent node, while Child functions as a child node. Both Parent_Rev and Child_Rev capture different revisions of their respective nodes. A Child may appear in the Parent column and have its own child values, indicated in columns Child and Child_Rev.</p>
<p>This hierarchical chain continues until a path is established where a Parent doesn't appear as a Child to any other values.</p>
<p>To generate the desired output, it's necessary to traverse through all the values, identifying all possible levels and combinations until the top node is reached. When checking for possible links, the combination of Node+Rev should be considered unique, rather than just the Node value alone.</p>
<p><strong>Sample Input Dataframe:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Parent</th>
<th>Parent_Rev</th>
<th>Child</th>
<th>Child_REV</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>1</td>
<td>B1</td>
<td>1</td>
</tr>
<tr>
<td>B1</td>
<td>1</td>
<td>C1</td>
<td>1</td>
</tr>
<tr>
<td>C1</td>
<td>1</td>
<td>D1</td>
<td>1</td>
</tr>
<tr>
<td>D1</td>
<td>1</td>
<td>E1</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>B2</td>
<td>1</td>
</tr>
<tr>
<td>B2</td>
<td>1</td>
<td>C2</td>
<td>1</td>
</tr>
<tr>
<td>C2</td>
<td>1</td>
<td>C2</td>
<td>1</td>
</tr>
<tr>
<td>A3</td>
<td>1</td>
<td>B3</td>
<td>3</td>
</tr>
<tr>
<td>A4</td>
<td>1</td>
<td>B3</td>
<td>3</td>
</tr>
</tbody>
</table></div>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'Node1': ['A1','B1','C1','D1','A2','B2','C2','A3','A4'],
'Node1_Rev': ['1','1','1','1','1','1','1','1','1'],
'Node2': ['B1','C1','D1','E1','B2','C2','C2','B3','B3'],
'Node2_Rev': ['1','1', '1','1','1','1','1','3','3']
}
)
</code></pre>
<p><strong>Sample Output Dataframe:</strong></p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Root</th>
<th>Root_Rev</th>
<th>Parent</th>
<th>Parent_Rev</th>
<th>Child</th>
<th>Child_REV</th>
</tr>
</thead>
<tbody>
<tr>
<td>A1</td>
<td>1</td>
<td>A1</td>
<td>1</td>
<td>B1</td>
<td>1</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>B1</td>
<td>1</td>
<td>C1</td>
<td>1</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>C1</td>
<td>1</td>
<td>D1</td>
<td>1</td>
</tr>
<tr>
<td>A1</td>
<td>1</td>
<td>D1</td>
<td>1</td>
<td>E1</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>A2</td>
<td>1</td>
<td>B2</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>B2</td>
<td>1</td>
<td>C2</td>
<td>1</td>
</tr>
<tr>
<td>A2</td>
<td>1</td>
<td>C2</td>
<td>1</td>
<td>C2</td>
<td>1</td>
</tr>
<tr>
<td>A3</td>
<td>1</td>
<td>A3</td>
<td>1</td>
<td>B3</td>
<td>3</td>
</tr>
<tr>
<td>A4</td>
<td>1</td>
<td>A4</td>
<td>1</td>
<td>B3</td>
<td>3</td>
</tr>
</tbody>
</table></div>
<p>What are the efficient ways to generate the output for a bigger dataset to update the root and rev for all the parent_child input combinations?</p>
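<p>One possible sketch, using plain-Python pointer chasing over the sample <code>df1</code> above. Assumptions: each (child, rev) pair has a single parent (if not, the last row wins), and self-referencing rows such as C2→C2 are skipped when building the lookup so the walk always terminates:</p>

```python
import pandas as pd

df1 = pd.DataFrame({'Node1': ['A1', 'B1', 'C1', 'D1', 'A2', 'B2', 'C2', 'A3', 'A4'],
                    'Node1_Rev': ['1'] * 9,
                    'Node2': ['B1', 'C1', 'D1', 'E1', 'B2', 'C2', 'C2', 'B3', 'B3'],
                    'Node2_Rev': ['1', '1', '1', '1', '1', '1', '1', '3', '3']})

# map each (child, rev) back to its (parent, rev); skip self-references
parent_of = {}
for p, pr, c, cr in zip(df1['Node1'], df1['Node1_Rev'], df1['Node2'], df1['Node2_Rev']):
    if (c, cr) != (p, pr):
        parent_of[(c, cr)] = (p, pr)

def find_root(node, rev):
    """Walk parent links upward until a node with no parent is reached."""
    seen = set()
    while (node, rev) in parent_of and (node, rev) not in seen:
        seen.add((node, rev))
        node, rev = parent_of[(node, rev)]
    return node, rev

roots = [find_root(p, r) for p, r in zip(df1['Node1'], df1['Node1_Rev'])]
df1['Root'] = [r[0] for r in roots]
df1['Root_Rev'] = [r[1] for r in roots]
print(df1)
```

<p>Building the dictionary and walking it is O(rows × depth), which scales to large inputs far better than repeated dataframe merges per level.</p>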
|
<python><pandas><dataframe>
|
2024-03-22 22:13:46
| 1
| 353
|
Osceria
|
78,209,037
| 3,737,135
|
How do I set up pytest parameters for multiple functions at once?
|
<p>I have the following example unit test set up. There are several classes with their own fixtures. I am showing the 5th class here:</p>
<pre><code>import pytest
class TestOperator1:
....
class TestOperator5:
@pytest.fixture(scope="class")
def runm0(self, request):
yield request.param+10
@pytest.fixture(scope="class")
def runm1(self, request):
yield request.param+20
@pytest.mark.parametrize("runm0, runm1", [(5, 10), (20, 30)])
def test_1(self, runm0, runm1):
assert runm0*runm1 == 10
@pytest.mark.parametrize("runm0, runm1", [(5, 10), (20, 30)])
def test_2(self, runm0, runm1):
assert runm0*runm1 == 20
@pytest.mark.parametrize("runm0, runm1", [(5, 10), (20, 30)])
def test_3(self, runm0, runm1):
assert runm0*runm1 == 30
</code></pre>
<p>How can I write the parametrize code once and have it apply to all test functions within the class? Decorating the class with parametrize appears to be the solution, but how would that work if the fixtures I want to parametrize on are defined within the class?</p>
|
<python><unit-testing><pytest><python-unittest>
|
2024-03-22 21:59:36
| 0
| 432
|
Fanylion
|
78,209,013
| 1,506,763
|
Is there a better way to use multiprocessing within a loop?
|
<p>I'm quite new to using <code>multiprocessing</code> and I'm trying to figure out if there is a better way to use <code>multiprocessing</code> when it has to be within a loop. Or at least I think it has to be within a loop. Let me try to describe the outline and then a better solution may be obvious.</p>
<p>I'm processing large spatial datasets with lots of data points. I'm trying to calculate values for smaller sub-regions of the large dataset by splitting it up with a regular grid. Each grid calculation can be calculated independently of another so this makes it perfect for parallel processing - I just need to know what data to send to each grid. Typically, there will be 1000's of the smaller grid-bins.</p>
<p>However, I'm trying to track these changes over time so I'm then repeating the same process for each timestep of data. Typically there will be 10-500 timesteps of data to process. Each timestep's data is stored in a several files on disk that I have to load when processing the timestep.</p>
<p>Currently my code looks a bit like the following pseudo-code and it takes about 30-60s to process a timestep. Reading and binning probably only take a few seconds of that time and the rest is the multi-processing of each bin.</p>
<pre class="lang-py prettyprint-override"><code>from multiprocessing import Pool
for t in range(num_timeteps_to_process):
# read data files for this timestep
data_1 = read_file1_data(path, t)
data_2 = read_file2_data(path, t)
data_3 = read_file3_data(path, t)
# find what data is in each grid_bin - returns data and bin_id
binned_data = spatial_binning_function(data_1, num_bins_x=100, num_bins_y=100)
# loop through all bins doing calculation
bin_calc_arg_list = []
for bin_indx, data_id in enumerate(binned_data):
bin_calc_arg_list.append((bin_indx, data_id, data_1, data_2, data_3, other_settings))
print(" Running Multiprocessing Pool")
# create the process pool - multiprocessing
with Pool(processes=num_processors) as pool:
# execute a task
results = pool.starmap(multiprocess_timesteps_bins, bin_calc_arg_list)
# close the process pool
pool.close()
# wait for issued tasks to complete
pool.join()
# collate results here and return
...
...
...
</code></pre>
<p><code>Data_1</code>, <code>Data_2</code> and <code>Data_3</code> are large <code>numpy</code> arrays of size n x 12, where n is typically larger than 200,000.</p>
<p>Each call to the <code>multiprocess_timesteps_bins</code> function returns a tuple of a unique bin_id and a list of 30 different calculated properties for the bin. <code>results</code> is then added to a dictionary with the timestep as the key.</p>
<p>That's the general flow of my code, and it's quite a bit quicker than any single-threaded implementation. However, I'm aware that opening and closing the processing pool within each timestep loop is probably a really bad idea. The reading and subdivision of data does need to be done per timestep, though, as I don't think running multiple processes where each one reads the same data a few thousand times is better.</p>
<p>From a few print statements it seems that there may be several seconds required starting up the pool each timestep and maybe some more time spent shutting it down.</p>
<p>As I said, I'm not overly familiar with multiprocessing yet so I'm not quite sure what is the best way to set up a problem like this - I'm assuming my implementation is not how it should be done, but it works and it's faster than single threaded.</p>
<p>Should I be using some other dynamic queue that I open at the start and then pop items into that queue during each timestep? And then close once at the end?</p>
<p>I think what I want to do is this - move the pool creating outside the loop, read the data and then use starmap to call multiple processors to processes the bins, and because starmap is a blocking call I can just process the results before restarting the loop, while keeping the pool open.</p>
<p>Only close and join when I'm finished this part of the code.</p>
<p>This will save a lot of the time spent starting and shutting down the pool each timestep with the context manager, and should be OK to use like that?</p>
<p>Any pointers or suggestions welcome.</p>
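<p>The restructuring described above could look roughly like this. It is a sketch with toy stand-ins for the real <code>read_file*_data</code>, binning, and <code>multiprocess_timesteps_bins</code> functions, which are assumed; the point is only the shape of the loop, with the pool opened once and reused:</p>

```python
from multiprocessing import Pool

def process_bin(bin_indx, data):
    # stand-in for the real per-bin calculation
    return bin_indx, sum(data)

def read_timestep(t):
    # stand-in for the per-timestep file reads and spatial binning
    return [[t, t + 1], [t + 2, t + 3]]

def run(num_timesteps=3, num_processors=2):
    all_results = {}
    # open the pool once and reuse it for every timestep
    with Pool(processes=num_processors) as pool:
        for t in range(num_timesteps):
            binned = read_timestep(t)  # cheap single-process work per timestep
            args = [(i, b) for i, b in enumerate(binned)]
            # starmap blocks until this timestep's bins are done, so results
            # can be collated here before the next iteration starts
            all_results[t] = pool.starmap(process_bin, args)
    return all_results

if __name__ == '__main__':
    print(run())
```

<p>Because <code>starmap</code> is a blocking call, each timestep's results are complete before the next read begins, while the worker processes stay warm across timesteps. Note that <code>pool.close()</code>/<code>pool.join()</code> are unnecessary inside a <code>with</code> block; the context manager handles shutdown on exit.</p>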
|
<python><python-multiprocessing>
|
2024-03-22 21:52:47
| 0
| 676
|
jpmorr
|
78,208,804
| 3,713,336
|
The variable is returning empty in my Python script
|
<p>I am working on a Python script and using GCP Cloud Build to run it. Instead of hardcoding some of the variables, I am trying to use a Cloud Build YAML file to define the variables there and then use them in my Python script. However, it keeps returning an empty string. Can you review the code and let me know why?</p>
<p>yaml file</p>
<pre><code> steps
args:
- 'python3'
- 'test.py'
- '--project=${_PROJECT_ID}'
- '--dataset=${_DATASET_ID}'
- '--table=${_TABLE_NAME}'
substitutions:
_PROJECT_ID: 'test1det'
_DATASET_ID: 'testdataset'
_TABLE_NAME: 'testtable'
</code></pre>
<p>python code test.py</p>
<pre><code>def tst(
project_id: str=os.getenv('_PROJECT_ID', 'default_project_id'),
dataset_id: str=os.getenv('_DATASET_ID', 'default_dataset_id'),
table_name: str=os.getenv('_TABLE_NAME', 'default_table_name')
):
</code></pre>
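<p>Note that the YAML above passes the substitutions as command-line arguments (<code>--project=...</code>), not as environment variables, so <code>os.getenv('_PROJECT_ID')</code> in <code>test.py</code> will always fall back to the default. A minimal sketch of reading them with <code>argparse</code> instead (flag names match the <code>args:</code> list above; the simulated argv stands in for what the build step would pass):</p>

```python
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--project', default='default_project_id')
    parser.add_argument('--dataset', default='default_dataset_id')
    parser.add_argument('--table', default='default_table_name')
    return parser.parse_args(argv)

# simulating the argv the Cloud Build step would produce;
# inside the build, call parse_args() with no arguments
args = parse_args(['--project=test1det', '--dataset=testdataset', '--table=testtable'])
print(args.project, args.dataset, args.table)  # test1det testdataset testtable
```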
|
<python><google-cloud-platform><google-cloud-build><cloudbuild.yaml><vertex-ai-pipeline>
|
2024-03-22 20:57:04
| 0
| 357
|
lisa_rao007
|
78,208,735
| 726,730
|
Parallel programming: Synchronizing processes
|
<p>I have a program which has a lot of music decks (deck 1, deck 2, music_clip_deck, speackers_deck, ip_call_1, ip_call_2, ip_call_3). Each deck works in a separate process. The chunk time I use to crop the mp3 files/retransmission streams/voice from microphone/voice from aiortc-pyav is 125 msec. After that I fill some queues (one for each separate process) and I send the final queue to the final thread for the final audio processing before it is heard and transmitted to clients.</p>
<p>How can I synchronize all the processes together, so that each loop iteration of every process takes exactly 125 msec?</p>
<p>Here is one figure for help:</p>
<p><a href="https://i.sstatic.net/gdsES.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gdsES.png" alt="enter image description here" /></a></p>
<p>This approach may not help at all:</p>
<pre class="lang-py prettyprint-override"><code>class Deck_1_Proc(Process):
...
...
...
def run(self):
while(True):
t1 = time.time()
...
...
...
t2 = time.time()
if t2 - t1 < 0.125:
time.sleep(0.125 - (t2 - t1))
</code></pre>
<p>Maybe a better approach would be to use something like JavaScript's setInterval with a time parameter of 125 msec:</p>
<pre class="lang-py prettyprint-override"><code>from threading import Event, Thread
def call_repeatedly(interval, func, *args):
stopped = Event()
def loop():
while not stopped.wait(interval): # the first call is in `interval` secs
func(*args)
Thread(target=loop).start()
return stopped.set
#call:
cancel_future_calls = call_repeatedly(0.125, run)
#stopping to app termination:
cancel_future_calls()
</code></pre>
<p>Here is a more complicated flow chart:</p>
<p><a href="https://i.sstatic.net/KPS8OpRG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KPS8OpRG.png" alt="enter image description here" /></a></p>
<p>With these processes running, and with Ahmed AEK's solution, I have to buffer the downsampled audio data before plotting it (this is for a sync time of 20 msec, not 125 msec). What can I do instead?</p>
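<p>For the fixed 125 msec cadence itself, one common pattern (a sketch; the deck's actual work is a placeholder comment) is to sleep until an absolute deadline rather than for a relative duration, so jitter in one iteration does not accumulate the way the <code>t2 - t1</code> approach above can:</p>

```python
import time

PERIOD = 0.125  # the 125 msec chunk time

def run_deck(num_ticks, period=PERIOD):
    """Tick at a fixed cadence by sleeping until an absolute deadline."""
    tick_times = []
    next_deadline = time.monotonic()
    for _ in range(num_ticks):
        tick_times.append(time.monotonic())
        # ... the deck's 125 msec worth of work would go here ...
        next_deadline += period
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
    return tick_times

ticks = run_deck(4)
print([round(t - ticks[0], 3) for t in ticks])  # roughly [0.0, 0.125, 0.25, 0.375]
```

<p>To align the phases of the separate deck processes, the same loop could be combined with a shared <code>multiprocessing.Barrier</code> that every deck waits on once per tick, though whether that fits the latency budget here is an assumption.</p>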
|
<python><parallel-processing><multiprocessing><synchronization>
|
2024-03-22 20:40:34
| 1
| 2,427
|
Chris P
|
78,208,703
| 489,088
|
In a 2d numpy array, how to select every first and second element of the inner arrays? Can this be done with indexing?
|
<p>For example the array:</p>
<pre><code>array = np.array([[1, 1, 4, 2, 1, 8],
[1, 1, 8, 2, 1, 16],
[1, 1, 40, 2, 1, 80],
[1, 2, 40, 2, 1, 80]])
</code></pre>
<p>I'd like to essentially remove every third <code>[:, ::2]</code> element of the inner arrays. So the result should be:</p>
<pre><code>[[1 1 2 1]
[1 1 2 1]
[1 1 2 1]
[1 2 2 1]]
</code></pre>
<p>I could do two selections and concatenate the resulting arrays, but that seems super slow. Is there a way to use indexing or some other method that doesn't involve a loop with np.concatenate?</p>
<p>My actual array always has inner arrays with sizes divisible by 3, and the first dimension is very, very large. So I'm interested in the fastest way to accomplish this.</p>
<p>Thank you!</p>
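<p>One loop-free possibility (a sketch using the sample array above): build a boolean column mask once and index with it, which works for any width divisible by 3:</p>

```python
import numpy as np

array = np.array([[1, 1, 4, 2, 1, 8],
                  [1, 1, 8, 2, 1, 16],
                  [1, 1, 40, 2, 1, 80],
                  [1, 2, 40, 2, 1, 80]])

# keep columns whose index is not 2 modulo 3 (i.e. drop every third column)
mask = np.arange(array.shape[1]) % 3 != 2
result = array[:, mask]
print(result)
```

<p>An equivalent reshape-based form, <code>array.reshape(len(array), -1, 3)[:, :, :2].reshape(len(array), -1)</code>, avoids building the mask; both produce a single copy with no Python-level loop.</p>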
|
<python><arrays><numpy>
|
2024-03-22 20:29:43
| 4
| 6,306
|
Edy Bourne
|
78,208,450
| 17,142,551
|
Rolling sum within 30 non-datetime days
|
<p>I've been racking my brain trying to figure out the best way to do this. I want to find the rolling sum of the previous 30 days but my 'day' column is not in datetime format.</p>
<p>Sample data</p>
<pre><code>df = pd.DataFrame({'client': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B', 'B'],
'day': [319, 323, 336, 352, 379, 424, 461, 486, 496, 499, 303, 334, 346, 373, 374, 395, 401, 408, 458, 492],
                   'foo': [5.0, 2.0, np.nan, np.nan, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan, 8.0, 7.0, 22.0, np.nan, 13.0, np.nan, np.nan, 5.0, 11.0, np.nan]})
>>> df
client day foo
0 A 319 5.0
1 A 323 2.0
2 A 336 NaN
3 A 352 NaN
4 A 379 NaN
5 A 424 NaN
6 A 461 NaN
7 A 486 7.0
8 A 496 NaN
9 A 499 NaN
10 B 303 8.0
11 B 334 7.0
12 B 346 22.0
13 B 373 NaN
14 B 374 13.0
15 B 395 NaN
16 B 401 NaN
17 B 408 5.0
18 B 458 11.0
19 B 492 NaN
</code></pre>
<p>I want a new column showing the rolling sum of 'foo' every 30 days.</p>
<p>So far I've tried:</p>
<pre><code>df['foo_30day'] = df.groupby('client').rolling(30, on='day', min_periods=1)['foo'].sum().values
</code></pre>
<p>But it looks like it's taking the rolling sum of the last 30 rows.</p>
<p>I was also thinking of maybe changing the 'day' column to a datetime format, then using <code>rolling('30D')</code> but I'm not sure how or even if that's the best approach. I've also tried to use a groupby reindex to stretch the 'day' column and doing a simple <code>rolling(30)</code> but it's not working for me.</p>
<p>Any advice would be greatly appreciated.</p>
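<p>One possible approach (a sketch; it assumes rows are already sorted by day within each client, as in the sample, and treats the integer day as days since an arbitrary epoch so that pandas' offset-based rolling window can be used):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'client': ['A'] * 10 + ['B'] * 10,
                   'day': [319, 323, 336, 352, 379, 424, 461, 486, 496, 499,
                           303, 334, 346, 373, 374, 395, 401, 408, 458, 492],
                   'foo': [5.0, 2.0, np.nan, np.nan, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan,
                           8.0, 7.0, 22.0, np.nan, 13.0, np.nan, np.nan, 5.0, 11.0, np.nan]})

# interpret the integer day as "days since an arbitrary epoch"
tmp = df.assign(date=pd.to_datetime(df['day'], unit='D')).set_index('date')

rolled = tmp.groupby('client')['foo'].rolling('30D', min_periods=1).sum()
# groups are contiguous and in original order here, so positions line up
df['foo_30day'] = rolled.to_numpy()
print(df)
```

<p>The offset window <code>'30D'</code> is what makes the sum day-based rather than row-based; note its default window is right-closed, i.e. it covers the 30 days up to and including each row's own day.</p>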
|
<python><pandas>
|
2024-03-22 19:22:49
| 1
| 1,842
|
amance
|
78,208,433
| 7,228,014
|
ModuleNotFoundError after upgrade
|
<p>I used to work with Python version 3.9 in JupyterLab.
When I tried to install skrub, an extension of scikit-learn, it appeared that I needed a version of at least 3.10. So, I installed the latest Python version, i.e. 3.12.</p>
<p>Once in Jupyter, I checked the version using <code>! python --version</code>. It confirmed I was in Python 3.12.2. All fine.</p>
<p>Then, I installed skrub also from Jupyter using <code>!pip install skrub</code>.
Again, all went fine, and I got confirmations that skrub, scikit-learn, numpy, scipy, pandas, ... were successfully installed in folder c:\users\JCF\appdata\local\programs\python\python312\lib\site-packages</p>
<p>All fine, no error or warning.</p>
<p>Now, in the same notebook, practically on the next cell, I enter the command
<code>from skrub import TableVectorizer</code>.
I then get a message:
ModuleNotFoundError: No module named 'skrub'</p>
<p>Based on other questions asked here, I understand that this is related to Windows paths. Now, it's more about the next step.
What would be the recommended approach from this point to get a fully working version in 3.12? Should I completely remove version 3.9?</p>
<p><strong>Note:</strong><br />
Using the magic command (as recommended by Wayne), I get the same message before upgrading from version 3.9. The messages I get are the following:</p>
<pre><code>ERROR: Ignored the following versions that require a different python version: 0.1.0 Requires-Python >=3.10
ERROR: Could not find a version that satisfies the requirement skrub (from versions: none)
ERROR: No matching distribution found for skrub
</code></pre>
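<p>One quick check worth doing in the notebook (a generic sketch, independent of skrub): confirm which interpreter the kernel actually runs, since a bare <code>!pip</code> can resolve to a different Python (e.g. the old 3.9) than the kernel, which would produce exactly this "installed fine but cannot import" symptom:</p>

```python
import sys

# the interpreter this kernel is actually running
print(sys.executable)
print("%d.%d" % sys.version_info[:2])

# installing with this exact interpreter targets the kernel's site-packages:
#   !{sys.executable} -m pip install skrub
# or, with the pip magic (which always targets the running kernel):
#   %pip install skrub
```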
|
<python><upgrade><python-3.12>
|
2024-03-22 19:19:44
| 1
| 309
|
JCF
|
78,208,414
| 17,800,932
|
Emitting a `Signal` to directly transition a `QStateMachine` to another `QState`
|
<p>I have the following state machine:</p>
<p><a href="https://i.sstatic.net/v036f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v036f.png" alt="enter image description here" /></a></p>
<p>I have implemented this in Python via the standard state pattern. This state machine is running in a separate thread from a PySide6 application thread, which is running on the program's main thread. Let's call this state machine class <code>Controller</code>.</p>
<p>On the GUI side, I am using the <a href="https://doc.qt.io/qtforpython-5/overviews/statemachine-api.html" rel="nofollow noreferrer">Qt for Python State Machine Framework</a>. (Note: the linked documentation is for PySide2/PySide5, but I am using Pyside6. For some reason, the same documentation doesn't exist for PySide6.) This makes it very nice to define what happens in the GUI when a state is entered.</p>
<p>What I am looking for is <em>not</em> for the GUI to initiate transitions but for my core state machine <code>Controller</code> running in another thread to control the transitions. Typically, one uses <code><QState>.addTransition(...)</code> to add a transition at the GUI level, but I want the GUI to simple send commands down to <code>Controller</code>, and then when <code>Controller</code> transitions its state, I want it to emit a <code>Signal</code> that triggers the PySide6 <code>QStateMachine</code> to enter a given state and thus set all the properties appropriate for that state. So in other words, I want the GUI to dispatch commands to the <code>Controller</code> state machine and for the GUI to "listen" for the <code>Controller</code>'s state transitions.</p>
<hr />
<p>So the question is, given a <code>QState</code>, how do I send a signal to it that forces the <code>QStateMachine</code> it is a part of to transition to that state? Or do I need to send a signal to the <code>QStateMachine</code> and provide it a <code>QState</code> to transition to?</p>
<p>Example program:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations
from abc import ABC, abstractmethod
from typing import override
from PySide6.QtStateMachine import QState
class Controller:
def __init__(self, initial_state: IState, states: list[QState]) -> None:
"""Initialize the controller to the given initial state and call the `
on_entry` method for the state.
"""
self.__state = initial_state
self.__state.controller = self
self.__state.on_entry()
self.__state.states = states
def _transition_to(self, new_state: IState) -> None:
"""Transition from the current state to the given new state. This calls
the `on_exit` method on the current state and the `on_entry` of the
new state. This method should not be called by any object other than
concrete implementations of `IState`.
"""
self.__state.on_exit()
self.__state = new_state
self.__state.controller = self
self.__state.on_entry()
@property
def state(self):
"""Get the current state"""
return self.__state
def start_camera_exposure(self) -> None:
self.__state.start_camera_exposure()
def stop_camera_exposure(self) -> None:
self.__state.stop_camera_exposure()
def abort_camera_exposure(self) -> None:
self.__state.abort_camera_exposure()
class IState(ABC):
"""Serve as an interface between the controller and the explicit, individual states."""
@property
def controller(self) -> Controller:
return self.__controller
@controller.setter
def controller(self, controller: Controller) -> None:
self.__controller = controller
@property
def states(self) -> list[QState]:
return self.__states
@states.setter
def states(self, states: list[QState]):
self.__states = states
def on_entry(self) -> None:
"""Can be overridden by a state to perform an action when the state is
being entered, i.e., transitions into. It is not required to be overridden.
"""
pass
def on_exit(self) -> None:
"""Can be overridden by a state to perform an action when the state is
being exited, i.e., transitioned from. It is not required to be overridden.
"""
pass
# If a concrete implementation does not handle the called method, i.e., it is an invalid action
# in the specific state, it is enough to simply call `pass`.
@abstractmethod
def start_camera_exposure(self) -> None: ...
@abstractmethod
def stop_camera_exposure(self) -> None: ...
@abstractmethod
def abort_camera_exposure(self) -> None: ...
class Idle(IState):
@override
def on_entry(self):
# I want to emit a signal here to force a `QStateMachine` to go to state: `self.__states[0]`
print("Idling ...")
def start_camera_exposure(self) -> None:
self.controller._transition_to(CameraExposing())
def stop_camera_exposure(self) -> None:
pass
def abort_camera_exposure(self) -> None:
pass
class CameraExposing(IState):
@override
def on_entry(self) -> None:
# I want to emit a signal here to force a `QStateMachine` to go to state: `self.__states[1]`
print("Starting camera exposure ...")
@override
def on_exit(self) -> None:
print("Stopping camera exposure ...")
def start_camera_exposure(self) -> None:
pass
def stop_camera_exposure(self) -> None:
self.controller._transition_to(SavingCameraImages())
def abort_camera_exposure(self) -> None:
self.controller._transition_to(AbortingCameraExposure())
class SavingCameraImages(IState):
@override
def on_entry(self) -> None:
# I want to emit a signal here to force a `QStateMachine` to go to state: `self.__states[2]`
print("Saving camera images ...")
self.controller._transition_to(Idle())
def start_camera_exposure(self) -> None:
pass
def stop_camera_exposure(self) -> None:
pass
def abort_camera_exposure(self) -> None:
pass
class AbortingCameraExposure(IState):
@override
def on_entry(self) -> None:
# I want to emit a signal here to force a `QStateMachine` to go to state: `self.__states[3]`
print("Aborting camera exposure ...")
self.controller._transition_to(Idle())
def start_camera_exposure(self) -> None:
pass
def stop_camera_exposure(self) -> None:
pass
def abort_camera_exposure(self) -> None:
pass
</code></pre>
<p>On the GUI side, I have something like:</p>
<pre class="lang-py prettyprint-override"><code>from PySide6.QtStateMachine import QState, QStateMachine
class MainWindow(QWidget):
def __init__(self) -> None:
super().__init__()
self.machine = QStateMachine(parent=self)
self.state_idle = QState(self.machine)
self.state_camera_exposing = QState(self.machine)
self.state_saving_camera_images = QState(self.machine)
self.state_aborting_camera_exposure = QState(self.machine)
self.machine.setInitialState(self.state_idle)
self.states = [
self.state_idle,
self.state_camera_exposing,
self.state_saving_camera_images,
self.state_aborting_camera_exposure,
]
self.initialize()
</code></pre>
|
<python><pyqt><pyside><pyside6><qstatemachine>
|
2024-03-22 19:14:20
| 1
| 908
|
bmitc
|
78,208,230
| 1,377,288
|
How can I convert a PredictResponse to JSON?
|
<p>I have a VertexAI project I want to access. I'm currently trying two approaches, via a React frontend and via a Python backend, which I would then connect to the FE. I posted a question about making requests to VertexAI from Node <a href="https://stackoverflow.com/questions/78208092/how-to-send-a-request-to-a-vertexai-model-from-nodejs">here</a>.</p>
<p>In the python approach, I'm able to make the request and receive the correct response. However, in order for it to be accessible by the FE, I would need to convert it to JSON. I'm struggling with how to do that.</p>
<p>Here's the code I'm using:</p>
<pre><code># The AI Platform services require regional API endpoints.
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
# This client only needs to be created once, and can be reused for multiple requests.
client = PredictionServiceClient(
client_options=client_options, credentials=credentials
)
instance = schema.predict.instance.TextClassificationPredictionInstance(
content=content,
).to_value()
instances = [instance]
parameters_dict = {}
parameters = json_format.ParseDict(parameters_dict, Value())
endpoint = client.endpoint_path(
project=project_id, location=compute_region, endpoint=endpoint_id
)
response = client.predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
response_dict = [dict(prediction) for prediction in response.predictions]
</code></pre>
<p><code>response_dict</code> is printable, but I can't convert <code>response</code> to json using <code>json.dumps</code> because:</p>
<pre><code>TypeError: Object of type PredictResponse is not JSON serializable
</code></pre>
<p>This is the error that has been plaguing me in every attempt. DuetAI simply tells me to use <code>json.dumps</code>.</p>
<p><strong>EDIT</strong></p>
<p>Here's the working code using the accepted response:</p>
<pre class="lang-py prettyprint-override"><code> ...
response = client.predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
predictions = MessageToDict(response._pb)
predictions = predictions["predictions"][0]
</code></pre>
|
<python><json><google-cloud-platform><google-cloud-vertex-ai>
|
2024-03-22 18:28:29
| 1
| 527
|
Tuma
|
78,208,171
| 6,573,259
|
How do you create a semi-transparent image in NumPy and save it as a PNG with Pillow
|
<p>Hello, I would like to create a semi-transparent image. What I thought would be straightforward turns out not to be. I created a 1000x1000 pixel array with a depth of 4 (B,G,R,A channels). I initialized them all to 0, thinking that they would produce a fully black but also fully transparent base image. I then draw a green shape at 50% transparency, in this case a square, to keep the code minimal.</p>
<p><a href="https://i.sstatic.net/SL8SAm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SL8SAm.png" alt="enter image description here" /></a></p>
<pre><code>from PIL import Image
import numpy as np
import cv2
blankImage = np.zeros((1000,1000,4))
BGRAcolor = (38,255,0,125) # 50% transaparent light green
blankImage[250:250, 750:750] = BGRAcolor
#Image.fromarray(blankImage).save('result.png') #throws an error
cv2.imshow("blankImage", blankImage)
cv2.waitKey()
</code></pre>
<p>Problem is i cannot save the image as a .png because PIL throws an error, hence i cannot confirm if i have correctly made the image or not. Error is:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\.....\Image.py", line 3098, in fromarray
mode, rawmode = _fromarray_typemap[typekey]
~~~~~~~~~~~~~~~~~~^^^^^^^^^
KeyError: ((1, 1, 4), '<f8')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\.....\transparentImage.py", line 25, in <module>
Image.fromarray(testImage).save('result.png')
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\.....\Image.py", line 3102, in fromarray
raise TypeError(msg) from e
TypeError: Cannot handle this data type: (1, 1, 4), <f8
</code></pre>
<p>Alternatively, I tried using OpenCV's imshow() to preview the image, but I really can't tell if OpenCV shows transparent pixels as black or if I have messed up making/assigning values to my alpha layer. Also, the color is white for some reason and not green.</p>
<p>Can anyone point out what I am doing wrong?</p>
<p>Pardon that the square became a rectangle strip.</p>
<p><a href="https://i.sstatic.net/aUGik.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aUGik.png" alt="enter image description here" /></a></p>
|
<python><numpy><python-imaging-library>
|
2024-03-22 18:17:02
| 1
| 752
|
Jake quin
|
78,208,112
| 9,703,039
|
Why is my function not behaving asynchronously (asyncio to_thread)?
|
<p>Learning from <a href="https://www.youtube.com/watch?v=p8tnmEdeOU0" rel="nofollow noreferrer">this video</a>, I'm trying to make my code run faster.<br />
I have a synchronous get function (here represented by <code>mySynchronousFunction</code>) that I want to evaluate over several values in a List.<br />
I am coding all this in a jupyter notebook.</p>
<p>Here is a MWE:</p>
<pre class="lang-py prettyprint-override"><code>
# Cell one
import asyncio
import pandas as pd
import requests
from time import sleep
# cell two
def mySynchronousFunction():
sleep(4)
return(True)
async def turningMyFunctionAsync(string):
print(f"starting job for {string}")
returnVal = await asyncio.to_thread(mySynchronousFunction)
print(f"end of job for {string}")
return(returnVal)
# Cell tree
myList = [{'val':'val1'},{'val':'val2'},{'val':'val3'},{'val':'val4'},{'val':'val5'}]
async def mainAsyncFunction():
for val in myList:
val['result'] = await asyncio.create_task(turningMyFunctionAsync(val['val']))
# Cell four
await mainAsyncFunction()
# Prints:
# starting job for val1
# end of job for val1
# starting job for val2
# end of job for val2
# starting job for val3
# end of job for val3
# starting job for val4
# end of job for val4
# starting job for val5
# end of job for val5
</code></pre>
<p>I was expecting cell four to print all the "starting job for ..." lines first (i.e. launching all the tasks at once), considering the huge sleep time. But everything just ran synchronously.
What did I do wrong?</p>
<p>Thanks.</p>
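<p>For comparison, a sketch that creates all the coroutines before awaiting any of them, e.g. with <code>asyncio.gather</code> (awaiting each <code>create_task</code> inside the loop, as in cell three, serializes them; the blocking function here is a stand-in for the synchronous GET):</p>

```python
import asyncio
import time

def blocking_call():
    time.sleep(0.5)  # stand-in for the slow synchronous GET
    return True

async def run_one(name):
    print(f"starting job for {name}")
    result = await asyncio.to_thread(blocking_call)
    print(f"end of job for {name}")
    return result

async def main():
    # build every coroutine first, then await them together
    return await asyncio.gather(*(run_one(f"val{i}") for i in range(1, 6)))

# in a script; inside Jupyter, use:  results = await main()
results = asyncio.run(main())
print(results)
```

<p>With this shape, all five "starting job" lines print before any "end of job" line, and the total wall time is roughly one sleep rather than five.</p>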
|
<python><python-3.x><python-asyncio>
|
2024-03-22 18:05:49
| 1
| 339
|
Odyseus_v4
|
78,208,034
| 19,024,379
|
How to get property return type annotation?
|
<p>I'm looking for a way to extract the return type annotation of a property from a class, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>class Test:
@property
def foo(self) -> int:
return 1
print(return_type_extract(Test, 'foo')) # <class 'int'>
</code></pre>
<p>For standard methods this can be done like so:</p>
<pre class="lang-py prettyprint-override"><code>import inspect
class Test:
def foo(self) -> int:
return 1
def return_type_extract(cls: type, method_name: str) -> type:
method = getattr(cls, method_name)
return inspect.signature(method).return_annotation
print(return_type_extract(Test, 'foo')) # <class 'int'>
</code></pre>
<p>This method however does not work for the <code>@property</code> decorator, as it raises an error inside <code>inspect.signature</code>. I've looked at an equivalent for properties in the <code>inspect</code> library, but so far no luck.</p>
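<p>For reference, one way the extraction could be made to work for properties (a sketch): a property object's <code>fget</code> attribute holds the underlying getter function, which <code>inspect.signature</code> handles fine, so branching on <code>isinstance(attr, property)</code> covers both cases:</p>

```python
import inspect

class Test:
    @property
    def foo(self) -> int:
        return 1

    def bar(self) -> str:
        return "x"

def return_type_extract(cls: type, method_name: str) -> type:
    # getattr_static avoids triggering any descriptor machinery
    attr = inspect.getattr_static(cls, method_name)
    if isinstance(attr, property):
        return inspect.signature(attr.fget).return_annotation
    return inspect.signature(attr).return_annotation

print(return_type_extract(Test, 'foo'))  # <class 'int'>
print(return_type_extract(Test, 'bar'))  # <class 'str'>
```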
|
<python><python-typing>
|
2024-03-22 17:48:34
| 1
| 1,172
|
Mark
|
78,207,990
| 17,884,397
|
Sum each column of a sparse matrix multiplied by a vector
|
<p>I have a large scipy <strong>sparse</strong> matrix <code>X</code>.<br />
I have a vector, <code>y</code> with the number of elements matches the number of rows of <code>X</code>.</p>
<p>I want to calculate the sum of each column after it is multiplied by <code>y</code>.<br />
If <code>X</code> was dense, it is equivalent of <code>np.sum(X * y, axis=0)</code>.</p>
<p>How can it be done efficiently for a sparse matrix?</p>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code>z = np.zeros(X.shape[1])
for i in range(X.shape[1]):
z[i] = np.sum(np.array(X[:, i]) * y)
</code></pre>
<p>Yet it was terribly slow.<br />
Is there a better way to achieve this?</p>
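<p>A sketch of what I read as the intended computation (interpreting "multiplied by <code>y</code>" as weighting row <code>i</code> by <code>y[i]</code>): a single sparse matrix–vector product gives every column sum at once, with no Python loop and no densification:</p>

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
X = sparse.random(1000, 50, density=0.01, format='csr', random_state=rng)
y = rng.normal(size=X.shape[0])

# z[j] = sum_i X[i, j] * y[i]  -- one sparse matvec instead of a column loop
z = X.T @ y

# matches the dense equivalent
z_dense = np.sum(X.toarray() * y[:, None], axis=0)
assert np.allclose(z, z_dense)
```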
|
<python><numpy><performance><scipy><sparse-matrix>
|
2024-03-22 17:37:26
| 1
| 736
|
Eric Johnson
|
78,207,986
| 6,458,245
|
How to debug ValueError: `FlatParameter` requires uniform dtype but got torch.float32 and torch.bfloat16?
|
<p>I'm trying to do Pytorch Lightning Fabric distributed FSDP training with Huggingface PEFT LORA fine tuning on LLAMA 2 but my code ends up failing with:</p>
<pre><code>`FlatParameter` requires uniform dtype but got torch.float32 and torch.bfloat16
File ".......", line 100, in <module>
model, optimizer = fabric.setup(model, optimizer)
ValueError: `FlatParameter` requires uniform dtype but got torch.float32 and torch.bfloat16
</code></pre>
<p>How do I find out which tensors in pytorch fabric are of float32 type?</p>
|
<python><distributed-computing><pytorch-lightning>
|
2024-03-22 17:36:39
| 1
| 2,356
|
JobHunter69
|
78,207,971
| 7,648,650
|
minimizing portfolio tracking error yield unlogical results
|
<p>I have a <code>pd.DataFrame</code> with daily stock returns of shape <code>(250,10)</code> and a <code>pd.Series</code> with my benchmark's daily returns of shape <code>(250,)</code>. The goal is to minimize the tracking error between a portfolio of stocks and the benchmark.
The tracking error is the standard deviation of (Portfolio - Benchmark).
But somehow <code>scipy.minimize</code> can't correctly minimize the tracking error function; the results just don't make any sense. For other objectives, like maximizing the return, it works flawlessly.</p>
<p>In the end the two lines should be very similar, but they aren't. Scipy doesn't complain but the results are just not what one would expect, do you know why this objective function troubles scipy?</p>
<p>MWE:</p>
<pre><code>import numpy as np
import pandas as pd
from scipy.optimize import minimize
def portfolio_te(weights, rets, bm_rets):
port_returns = np.dot(rets, weights)
te = np.sqrt(np.mean((port_returns - bm_rets)**2))
return te
stocks_returns = pd.DataFrame(np.random.normal(0, 0.02, (250, 10)))
benchmark_returns = pd.Series(np.random.normal(0, 0.02, 250))
result = minimize(portfolio_te, x0=[0.1 for i in range(10)],
bounds=[(0,0.3) for i in range(10)], method='SLSQP',
args=(stocks_returns, benchmark_returns))
port_returns = pd.Series(np.dot(stocks_returns, result.x))
ts = pd.concat([(1+port_returns).cumprod(), (1+benchmark_returns).cumprod()], axis=1)
ts.plot()
</code></pre>
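<p>Two things seem worth checking here (my reading, not a certain diagnosis): there is no full-investment constraint, so the minimizer is free to shrink all weights toward zero (a near-empty portfolio's tracking error is just the benchmark's own volatility, which is optimal when the stocks are independent of the benchmark), and with independently drawn random returns no weighting can actually track the benchmark, so the cumulative lines will diverge no matter what. A sketch with a sum-to-one constraint and a benchmark that is trackable by construction:</p>

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
stocks = rng.normal(0, 0.02, (250, 10))
bm = stocks @ np.full(10, 0.1) + rng.normal(0, 1e-4, 250)  # trackable by construction

def portfolio_te(w, rets, bm_rets):
    return np.sqrt(np.mean((rets @ w - bm_rets) ** 2))

result = minimize(
    portfolio_te, x0=np.full(10, 0.1),
    bounds=[(0, 0.3)] * 10, method='SLSQP',
    constraints=[{'type': 'eq', 'fun': lambda w: w.sum() - 1}],  # fully invested
    args=(stocks, bm),
)
print(result.fun)  # tracking error near the 1e-4 noise floor
```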
|
<python><optimization><scipy><scipy-optimize><portfolio>
|
2024-03-22 17:34:00
| 1
| 1,248
|
Quastiat
|
78,207,901
| 1,100,913
|
Poetry packaging several dependencies
|
<p>I have a project with several packages:</p>
<pre><code>Project
|---PackageA
|------dist
|----------packagea.whl
|------pyproject.toml
|---PackageB
|------dist
|----------packageb.whl
|------pyproject.toml
</code></pre>
<p>PackageB has PackageA as a dependency.</p>
<p>Right now I have PackageB pyproject.toml like this:</p>
<pre><code>[tool.poetry.dependencies]
packagea = {path = "../PackageA", develop = true}
</code></pre>
<p>I then send both wheel files to another "prod" machine and try to install them.<br />
I can't publish in PyPI the packages for IP reasons.</p>
<p>PackageA installation passes, however PackageB installation fails.
That happens because PackageB is looking for PackageA at the same location it was in my original "dev" machine.</p>
<ol>
<li><p>I don't understand something basic:<br />
When I install PackageA, I see it installed as "packagea" in my python environment (using pip list).<br />
When I install PackageB, if packagea is already installed, shouldn't it skip this dependency installation?</p>
</li>
<li><p>What is the correct way to build packages and shipping them without PyPI?</p>
</li>
<li><p>Is there a way to pip install packageb.whl excluding ONLY the "packagea" dependency? Obviously, there will be no problem running the project, because python will find "packagea" in its environment.</p>
</li>
</ol>
<p>UPD.:<br />
Currently, I see no legit way to build 2 local packages for deployment without an external repository (PyPI or Git). Pip's requirement to use absolute paths for dependencies makes "PackageB" non-portable, because you can't know the deployment path of a local dependency in advance.<br />
In simple words: There's no official way to build/publish a package with a local dependency (without workaround). It will be not installable.</p>
<p>UPD. 2:
You can't know the location of a dependent wheel file on the end-user machine, so it's impossible to use a regular deployment where the dependencies are installed automatically. I found 2 workarounds to install a dependency in advance and make pip believe the right version is installed:</p>
<ol>
<li><p>Don't include the dependency in the main dependency list in the toml file. Instead, add it to the dev group with -G dev. When the wheel file is built, it will omit dev dependencies. No dependency - no problem.</p>
</li>
<li><p>Modify the compiled wheel file. It is an archive, so you have to edit inner RECORD file. Update the sha256 of the required dependency using this code: <a href="https://stackoverflow.com/a/55906133/1100913">https://stackoverflow.com/a/55906133/1100913</a></p>
</li>
</ol>
|
<python><python-packaging><python-poetry>
|
2024-03-22 17:21:10
| 0
| 953
|
Andrey
|
78,207,696
| 5,211,659
|
`Unsupported object type int` when passing a `pandas.core.frame.DataFrame` to tensorflow.convert_to_tensor()
|
<p>I keep getting the error <code>Unsupported object type int</code> when passing a pandas <code>DataFrame</code> to <code>tensorflow.convert_to_tensor()</code>. I've already converted all columns to types supported by tensorflow. So I'm not sure why it's still not working. This is the dataframe:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 783 entries, 0 to 782
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gearbox 783 non-null string
1 fuel 783 non-null string
2 body_type 783 non-null string
3 sports_package 783 non-null int32
4 mileage 783 non-null int32
5 registration_month 783 non-null int32
6 registration_year 783 non-null int32
7 zip_code 783 non-null int32
8 horsepower 783 non-null int32
9 carplay 783 non-null int32
10 accident_free 783 non-null int32
dtypes: int32(8), string(3)
memory usage: 42.9 KB
None
</code></pre>
<p>Here's the full error:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[146], line 19
13 x['accident_free'] = x['accident_free'].astype('int32')
17 print(x.info())
---> 19 tf.convert_to_tensor(x)
21 model.fit(x, y)
File ~/miniforge3/envs/tf_projects/lib/python3.8/site-packages/tensorflow/python/util/traceback_utils.py:153, in filter_traceback.<locals>.error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.__traceback__)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb
File ~/miniforge3/envs/tf_projects/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py:98, in convert_to_eager_tensor(value, ctx, dtype)
96 dtype = dtypes.as_dtype(dtype).as_datatype_enum
97 ctx.ensure_initialized()
---> 98 return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type int).
</code></pre>
<p>Thank you for your help!</p>
<p>UPDATE: Here's the contents of the df:</p>
<pre><code> gearbox fuel body_type sports_package mileage registration_month \
0 manual petrol hatchback 0 26025 12
1 manual petrol hatchback 0 22987 11
2 manual petrol hatchback 0 13513 12
registration_year zip_code horsepower carplay accident_free
0 2021 86470 90 1 0
1 2021 78628 90 0 1
2 2021 14641 90 1 0
</code></pre>
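<p>One possible explanation worth verifying: the three <code>string</code> columns force <code>DataFrame.to_numpy()</code> to produce a single <em>object</em>-dtype array, and the integer cells inside that object array are what <code>convert_to_tensor</code> reports as "Unsupported object type int". A small sketch reproducing the shape of the problem without TensorFlow (the string columns would need to be encoded, e.g. one-hot, before feeding the model):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'gearbox': pd.array(['manual', 'manual'], dtype='string'),
    'mileage': np.array([26025, 22987], dtype='int32'),
    'horsepower': np.array([90, 90], dtype='int32'),
})

# mixed string/int columns collapse into one object-dtype array
print(df.to_numpy().dtype)  # object

# the numeric subset converts to a clean array a tensor can be built from
numeric = df.select_dtypes('number').astype('float32')
print(numeric.to_numpy().dtype)  # float32
```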
|
<python><pandas><numpy><tensorflow>
|
2024-03-22 16:43:33
| 0
| 821
|
Daniel Becker
|
78,207,668
| 6,379,348
|
unable to extract text from span tag using Beautifulsoup and Request
|
<p>I'm trying to scrape posts from this online forum: <a href="https://csn.cancer.org/categories/prostate" rel="nofollow noreferrer">https://csn.cancer.org/categories/prostate</a>.
All posts seem to be in span tags.</p>
<p>I used the code below to scrape the posts.</p>
<pre><code>import requests
from bs4 import BeautifulSoup as bs
url = f"https://csn.cancer.org/categories/prostate"
response = requests.get(url)
soup = bs(response.text, 'html.parser')
ps = soup.findAll('span',attrs ={ 'class':"css-yjh1h7-TruncatedText-styles-truncated"})
for p in ps:
print(p.text)
</code></pre>
<p>I got nothing. From the span with class "css-yjh1h7-TruncatedText-styles-truncated", I'm not able to extract anything.</p>
<p>However, what's puzzling me is that if I take part of the HTML code from the website and do the following:</p>
<pre><code>html =''' <div class="css-e9znss-ListItem-styles-item"><div class="css-1flthw6-ListItem-styles-iconContainer"><div class="css-n9xrp8-ListItem-styles-icon"><div class="css-7yur4m-DiscussionList-classes-userIcon"><a aria-current="false" href="https://csn.cancer.org/profile/336163/PhoenixM" tabindex="0" data-link-type="legacy"><div class="css-1eztffh-userPhotoStyles-medium css-11wwpgq-userPhotoStyles-root isOpen"><img title="PhoenixM" alt="User: &quot;PhoenixM&quot;" height="200" width="200" src="https://w6.vanillicon.com/v2/62fd0812499add3e38f8e90eee3af967.svg" class="css-10y567c-userPhotoStyles-photo" loading="lazy"></div></a></div></div></div><div class="css-2guvez-ListItem-styles-contentContainer"><div class="css-1kxjkhx-ListItem-styles-titleContainer"><h3 class="css-glebqx-ListItem-styles-title"><a aria-current="false" href="https://csn.cancer.org/discussion/327814/brachytherapy" class="css-ojxxy9-ListItem-styles-titleLink-DiscussionList-classes-title" tabindex="0" data-link-type="legacy"><span class="css-yjh1h7-TruncatedText-styles-truncated">Brachytherapy</span></a></h3></div><div class="css-1y6ygw7-ListItem-styles-metaWrapContainer"><div class="css-5swiwf-ListItem-styles-metaDescriptionContainer"><p class="css-1ggegep-ListItem-styles-description"><span class="css-yjh1h7-TruncatedText-styles-truncated">I have recently been diagnosed with locally advanced prostate cancer. Gleason 9 stage 4a. My cancer has spread outside my prostate to a very enlarged lymph node in my pelvic region. 
I’m currently taki…</span></p><div class="css-1uyxq88-Metas-styles-root css-h3lbxm-ListItem-styles-metasContainer"><div class="css-1a607mt-Metas-styles-meta">135 views</div><div class="css-1a607mt-Metas-styles-meta">6 comments</div><div class="css-1a607mt-Metas-styles-meta">0 point</div><div class="css-1a607mt-Metas-styles-meta">Started by <a aria-current="false" href="https://csn.cancer.org/profile/336163/PhoenixM" class="css-1unw87s-Metas-styles-metaLink" tabindex="0" data-link-type="legacy">PhoenixM</a></div><div class="css-1a607mt-Metas-styles-meta">Most recent by <a aria-current="false" href="https://csn.cancer.org/profile/285710/Steve1961" class="css-1unw87s-Metas-styles-metaLink" tabindex="0" data-link-type="legacy">Steve1961</a></div><div class="css-1a607mt-Metas-styles-meta"><time datetime="2024-03-22T01:59:58+00:00" title="Thursday, March 21, 2024 at 9:59 PM">Mar 21, 2024</time></div><div class="css-1a607mt-Metas-styles-meta"><a aria-current="false" href="https://csn.cancer.org/categories/prostate" class="css-1unw87s-Metas-styles-metaLink" tabindex="0" data-link-type="legacy"> Prostate Cancer </a></div></div></div></div></div><div class="css-1pv9k2p-ListItem-styles-actionsContainer"></div></div> '''
from bs4 import BeautifulSoup as bs
import requests
# Parse the HTML with BeautifulSoup
soup = bs(html, 'html.parser')
ps = soup.findAll('span',attrs ={ 'class':"css-yjh1h7-TruncatedText-styles-truncated"})
for p in ps:
print(p.text)
</code></pre>
<p>I was able to extract the post content. Can anyone help me understand why I could not extract anything from the site url? What did I do wrong? Thanks in advance.</p>
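<p>A sketch of one likely explanation (worth confirming against the actual response): the hashed class names suggest a client-side rendered app, so the HTML that <code>requests</code> receives may contain only a mount point plus script tags, while the spans are created later by JavaScript in the browser. The parsing code itself is fine on both strings below; the difference is whether the spans exist in the markup at all:</p>

```python
from bs4 import BeautifulSoup

def truncated_spans(html: str) -> list[str]:
    soup = BeautifulSoup(html, 'html.parser')
    return [s.get_text() for s in soup.find_all(
        'span', class_='css-yjh1h7-TruncatedText-styles-truncated')]

# server-rendered markup (like the snippet pasted above): spans are present
rendered = '<span class="css-yjh1h7-TruncatedText-styles-truncated">Brachytherapy</span>'
print(truncated_spans(rendered))  # ['Brachytherapy']

# what a client-side app typically serves over plain HTTP: no spans yet
skeleton = '<div id="app"></div><script src="/bundle.js"></script>'
print(truncated_spans(skeleton))  # []

# quick check against the live page: if this prints False, the data is
# rendered by JavaScript and a browser-driving tool (or the site's JSON
# API, if one exists) is needed instead of requests + BeautifulSoup
# print('css-yjh1h7' in requests.get(url).text)
```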
|
<python><web-scraping><beautifulsoup><python-requests>
|
2024-03-22 16:37:39
| 2
| 11,903
|
zesla
|
78,207,583
| 6,151,828
|
pandas.read_csv : load only certain columns and the unnamed index
|
<p>While working with a large dataset, I would like to load only a part of it (more precisely - I would like to load only the training data samples) as loading the full dataset takes some time and memory. I know the names of the columns of interest, but the index column is unnamed. The trouble is that (to my knowledge) the <code>usecols</code> parameter of <code>pandas.read_csv()</code> takes either column names or column numbers - I don't know if/how I could mix the two.</p>
<p>For now I bypass it in a somewhat cumbersome way:</p>
<pre><code>#load only the columns of the dataset
cols = pd.read_csv('dataset.tsv.gz', sep='\t', index_col=0, nrows=0).columns
#find the positions of the columns of interest (those in my_columns_list)
colnums = np.nonzero(cols.isin(my_columns_list))[0]
#shift column numbers by one and insert 0 for the index column
colnums = np.insert(colnums+1, 0, 0)
#read the dataset
X = pd.read_csv('dataset.tsv.gz', sep='\t', index_col=0, usecols=colnums)
</code></pre>
<p>Obviously, an alternative solution could be load the full dataset, name the index column (e.g., 'index_column'), resave the dataset and simply use:</p>
<pre><code>X = pd.read_csv('dataset.tsv.gz', sep='\t', index_col=0, usecols=['index_column'] + my_columns_list)
</code></pre>
<p>but then this operation would have to be repeated every time that I deal with a new dataset (names of the samples for <code>my_columns_list</code>, are known independently, from the response variable.)</p>
|
<python><pandas><dataframe>
|
2024-03-22 16:23:57
| 1
| 803
|
Roger V.
|
78,207,433
| 11,474,852
|
How can spark.version be different from installed pyspark version?
|
<p>I'm working in a Python virtual environment in which I have installed pyspark version 3.5.0:</p>
<pre class="lang-bash prettyprint-override"><code>$ pip show pyspark
Name: pyspark
Version: 3.5.0
Summary: Apache Spark Python API
Home-page: https://github.com/apache/spark/tree/master/python
Author: Spark Developers
Author-email: dev@spark.apache.org
License: http://www.apache.org/licenses/LICENSE-2.0
Location: C:\Users\...\venv2\Lib\site-packages
</code></pre>
<p>However, when I run</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('get-version').getOrCreate()
print(spark.version)
</code></pre>
<p>I get</p>
<pre class="lang-bash prettyprint-override"><code>Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
24/03/22 15:50:10 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
3.4.0
</code></pre>
<p>Now, which Spark version is it? How can I make sure I'm running Spark version 3.5.0?</p>
|
<python><apache-spark><pyspark>
|
2024-03-22 15:57:20
| 0
| 605
|
Kai Roesner
|
78,207,307
| 10,461,632
|
Converting YAML file to dataclass with nested dataclasses and optional keyword arguments
|
<p>I want to read in a YAML file and convert it into a python dataclass. The goal in this example is to be able to produce the same dataclass.</p>
<p>Without reading a YAML file:</p>
<pre><code>from dataclasses import dataclass, field
@dataclass
class OptionsSource:
a: str
b: str = None
kwargs: dict = field(default_factory=dict)
def __post_init__(self):
for k, v in self.kwargs.items():
setattr(self, k, v)
@dataclass
class OptionsInput:
file: str
source: list[OptionsSource] = field(default_factory=list[OptionsSource], kw_only=True)
@dataclass
class Options:
inputs: OptionsInput = field(default_factory=OptionsInput, kw_only=True)
options = Options(
inputs=OptionsInput(
file='file1',
source=[
OptionsSource(a=1, b=2, kwargs={'c': 3}),
OptionsSource(a=10, b=20)
]
))
</code></pre>
<pre><code>>>>print(options)
Options(inputs=OptionsInput(file='file1', source=[OptionsSource(a=1, b=2, kwargs={'c': 3}), OptionsSource(a=10, b=20, kwargs={})]))
</code></pre>
<pre><code>>>>print(options.inputs.source[0].c)
3
</code></pre>
<p>Now, when I read this YAML, my output is different (i.e., <code>OptionsSource</code> dataclass isn't used).</p>
<pre><code>yaml_input = yaml.load("""
inputs:
file: file1
source:
- a: 1
b: 2
c: 3
- a: 10
b: 20
""", Loader=yaml.FullLoader)
options_from_yaml = Options(inputs=OptionsInput(**yaml_input['inputs']))
</code></pre>
<pre><code>>>>print(options_from_yaml)
Options(inputs=OptionsInput(file='file1', source=[{'a': 1, 'b': 2, 'c': 3}, {'a': 10, 'b': 20}]))
</code></pre>
<p>My desired output is for <code>options_from_yaml</code> to match <code>options</code>.</p>
<p>My two problems:</p>
<ol>
<li><code>source</code> isn't a list of <code>OptionsSource</code></li>
<li>I can't figure out how the <code>kwargs</code> piece of <code>OptionsSource</code> to let me provide any keyword arguments and have them stored so it can be accessed with <code>options.inputs.source[0].c</code>.</li>
</ol>
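<p>One possible direction (a sketch using plain <code>dataclasses</code> only, rather than a third-party loader such as <code>dacite</code> or <code>pydantic</code>, and with the <code>Options</code> wrapper omitted for brevity): coerce the dicts coming out of <code>yaml.load</code> inside <code>OptionsInput.__post_init__</code>, routing keys that aren't declared fields into <code>kwargs</code>. The YAML result is written here as the dict it parses to, so the sketch runs without PyYAML:</p>

```python
from dataclasses import dataclass, field

@dataclass
class OptionsSource:
    a: str
    b: str = None
    kwargs: dict = field(default_factory=dict)

    def __post_init__(self):
        for k, v in self.kwargs.items():
            setattr(self, k, v)

@dataclass
class OptionsInput:
    file: str
    source: list = field(default_factory=list)

    def __post_init__(self):
        declared = {'a', 'b'}
        # turn plain dicts (what yaml.load produces) into OptionsSource,
        # sending undeclared keys into kwargs
        self.source = [
            s if isinstance(s, OptionsSource) else OptionsSource(
                **{k: v for k, v in s.items() if k in declared},
                kwargs={k: v for k, v in s.items() if k not in declared},
            )
            for s in self.source
        ]

# the dict that yaml.load returns for the YAML string in the question
yaml_input = {'inputs': {'file': 'file1',
                         'source': [{'a': 1, 'b': 2, 'c': 3},
                                    {'a': 10, 'b': 20}]}}
options_from_yaml = OptionsInput(**yaml_input['inputs'])
print(options_from_yaml.source[0].c)  # 3
```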
|
<python><yaml><python-dataclasses>
|
2024-03-22 15:34:50
| 5
| 788
|
Simon1
|
78,207,298
| 10,997,298
|
Trying to slice a large csv file (1,952,726 rows) into chunks using Python
|
<p>When I run the following command, I get an error:</p>
<pre><code>import pandas as pd
import os
import numpy as np
Endcustomers = "Resources/WESTCON_INTE_LTD_2024_01_GBP.csv"
Endcustomers_df = pd.read_csv(Endcustomers)
Endcustomers_df.head(3)
def split_csv_into_chunks(Endcustomers, 10000):
# Read the CSV file into a pandas DataFrame
df = pd.read_csv(Endcustomers)
# Determine the total number of chunks needed
total_chunks = len(df) // 500000 + 1
# Split the DataFrame into chunks
chunks = np.array_split(df, total_chunks)
# Save each chunk as a separate CSV file
for i, chunk in enumerate(chunks):
chunk.to_csv(f"{input_file}_chunk_{i+1}.csv", index=False)
print(f"Chunk {i+1} saved.")
</code></pre>
<p>Error:</p>
<pre><code>Cell In[16], line 1
    def split_csv_into_chunks(Endcustomers,10000):
                                           ^
SyntaxError: invalid syntax
</code></pre>
<p>Can someone help resolve this please? Thank you</p>
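<p>For comparison, a sketch of a version of the function that compiles: the second parameter in the <code>def</code> line has to be a name (with an optional default), not the bare literal <code>10000</code>, which is exactly what the <code>SyntaxError</code> caret points at. The 500,000 chunk size from the body is used as the default here:</p>

```python
import numpy as np
import pandas as pd

def split_csv_into_chunks(input_file: str, chunk_size: int = 500_000) -> None:
    # read the CSV, split into roughly chunk_size-row pieces, save each piece
    df = pd.read_csv(input_file)
    total_chunks = len(df) // chunk_size + 1
    for i, chunk in enumerate(np.array_split(df, total_chunks)):
        chunk.to_csv(f'{input_file}_chunk_{i + 1}.csv', index=False)
        print(f'Chunk {i + 1} saved.')
```

<p>For a file this size it may also be worth skipping the full read entirely and streaming with <code>pd.read_csv(input_file, chunksize=chunk_size)</code>, which yields the chunks one at a time.</p>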
|
<python><split><chunks>
|
2024-03-22 15:33:24
| 1
| 685
|
RedaB
|
78,207,291
| 11,474,852
|
How to use Spark Connect with pyspark on Python 3.12?
|
<p>I'm trying to use Spark Connect to create a Spark session on a remote Spark cluster with pyspark in Python 3.12:</p>
<pre class="lang-py prettyprint-override"><code>ingress_ep = "..."
access_token = "..."
conn_string = f"sc://{ingress_ep}/;token={access_token}"
spark = SparkSession.builder.remote(conn_string).getOrCreate()
</code></pre>
<p>When running this I get a <code>ModuleNotFoundError</code> message:</p>
<pre><code>---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[13], line 11
9 conn_string = f"sc://{ingress_ep}/;token={access_token}"
10 print(conn_string)
---> 11 spark = SparkSession.builder.remote(conn_string).getOrCreate()
File c:\Users\...\venv2\Lib\site-packages\pyspark\sql\session.py:464, in SparkSession.Builder.getOrCreate(self)
458 if (
459 "SPARK_CONNECT_MODE_ENABLED" in os.environ
460 or "SPARK_REMOTE" in os.environ
461 or "spark.remote" in opts
462 ):
463 with SparkContext._lock:
--> 464 from pyspark.sql.connect.session import SparkSession as RemoteSparkSession
466 if (
467 SparkContext._active_spark_context is None
468 and SparkSession._instantiatedSession is None
469 ):
470 url = opts.get("spark.remote", os.environ.get("SPARK_REMOTE"))
File c:\Users\...\venv2\Lib\site-packages\pyspark\sql\connect\session.py:19
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one or more
3 # contributor license agreements. See the NOTICE file distributed with
...
---> 24 from distutils.version import LooseVersion
26 try:
27 import pandas
ModuleNotFoundError: No module named 'distutils'
</code></pre>
<p>I'm aware that the <code>distutils</code> module has been removed from Python 3.12. So I have installed <code>setuptools</code> and set <code>SETUPTOOLS_USE_DISTUTILS='local'</code> as suggested in <a href="https://stackoverflow.com/questions/77233855">Why did I got an error ModuleNotFoundError: No module named 'distutils'?</a> and <a href="https://stackoverflow.com/questions/77876447">No module named 'distutils' despite setuptools installed</a> but I'm still getting the error.</p>
<p>Going back to an older version of Python is not an option for me. Am I missing something? How can I get this to work?</p>
|
<python><pyspark><setuptools><spark-connect>
|
2024-03-22 15:31:45
| 1
| 605
|
Kai Roesner
|
78,207,024
| 3,214,538
|
Uploading large amounts of data in bulk to a Cloud SQL instance via Python
|
<p>I have a Python application that uses a Cloud SQL instance to store data. The application usually runs on a Google Compute Engine but is run locally during development and testing. At some point I need to update two separate tables, each with on the order of a hundred thousand rows. In most cases a row will already exist and only require an <code>UPDATE</code> for a single column 'val', but in some cases an <code>INSERT</code> statement is needed. Using an <code>INSERT</code> statement with <code>ON DUPLICATE KEY UPDATE</code> through the SQLAlchemy connection works but took almost half an hour (for around half a million rows) when the actual calculation only took 5 minutes in my test case.</p>
<p>So now I'm wondering if there's a faster way to make those changes in bulk. The Cloud SQL database is running MySQL 5.7. I tried using the <code>LOAD DATA LOCAL INFILE</code> statement with a csv file but lack the required permissions and haven't been able to figure out how (if at all) this is possible with Cloud SQL. Uploading the csv to a bucket and then using <code>LOAD DATA INFILE</code> (i.e. without the <code>LOCAL</code>) also fails due to a lack of permissions.
How can I grant the required permissions to the Cloud SQL user logged in to the database or - alternatively - is there another way of updating/inserting those rows in bulk?</p>
<p>Edit:
The way I got it to work, albeit slowly, is by running the following SQL statement via SQLAlchemy session.execute for each single row:</p>
<pre><code>INSERT INTO my_database.my_table (position_id, type, date, val)
VALUES (<position_id>, "<type>", "<date>", <val>)
ON DUPLICATE KEY UPDATE val=<val>
</code></pre>
<p>This statement is run for each row through a for loop and after the loop terminates, a session.commit() persists the changes to the database.
In my test this took about 18 minutes to <code>UPDATE</code> or <code>INSERT</code> a total of 417374 rows.</p>
<p>As an alternative I suppose I could simply try to run a simple <code>UPDATE</code> statement and only if the rowcount is 0 <code>INSERT</code> it (or vice versa). This took about 20 minutes for 5 <code>INSERTS</code> and 417369 <code>UPDATES</code>.</p>
<p>Keeping in mind that most rows, identified by <code>position_id</code>, <code>type</code> and <code>date</code>, a) will already exist and b) may already have the desired value (i.e. don't even have to be updated), as a third option I could also try getting the value, comparing it to the val to be written and only writing it if it either doesn't exist or differs. This could improve performance if reading access is significantly faster than writing.</p>
<p>I'll try and benchmark all of the above and post my results here once I have them. What I can already say is that the time it takes to run the actual for loop without the SQL part is negligible and can be disregarded for the purpose of improving performance for the whole process.</p>
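<p>As a point of comparison for the benchmarks above, a sketch of batching the upsert: passing a <em>list</em> of parameter dicts to a single <code>execute()</code> lets SQLAlchemy use the driver's executemany path instead of one round trip per row, which is usually where most of the 18 minutes goes. The demo uses SQLite so it runs anywhere; on MySQL 5.7 the statement would be the <code>INSERT ... ON DUPLICATE KEY UPDATE val = VALUES(val)</code> form from the question:</p>

```python
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')
with engine.begin() as conn:
    conn.execute(sa.text(
        'CREATE TABLE my_table ('
        'position_id INTEGER, type TEXT, date TEXT, val REAL, '
        'PRIMARY KEY (position_id, type, date))'))

rows = [{'position_id': i, 'type': 'x', 'date': '2024-01-01', 'val': float(i)}
        for i in range(1000)]

# SQLite upsert syntax; swap in ON DUPLICATE KEY UPDATE for MySQL
stmt = sa.text(
    'INSERT INTO my_table (position_id, type, date, val) '
    'VALUES (:position_id, :type, :date, :val) '
    'ON CONFLICT (position_id, type, date) DO UPDATE SET val = excluded.val')

# one execute() with all rows -> executemany, not one statement per row
with engine.begin() as conn:
    conn.execute(stmt, rows)
```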
|
<python><mysql><sqlalchemy>
|
2024-03-22 14:48:52
| 0
| 443
|
Midnight
|
78,206,985
| 9,703,039
|
Fill Pandas Dataframe asynchronously with async
|
<p>I just saw <a href="https://www.youtube.com/watch?v=p8tnmEdeOU0" rel="nofollow noreferrer">this awesome video from Idently</a> and tried to use the trick to fill in some dataframe columns according to another.</p>
<p>Here is my MWE (more like non-working example infact) code, I code in a Jupyter notebook.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import pandas as pd
import requests
mydf = pd.DataFrame({'url':['https://google.com','https://apple.com']})
print(mydf)
print("-----")
async def fetch_status(url:str) -> int:
response = await asyncio.to_thread(requests.get,url)
return(response.status_code)
async def main_task() -> None:
myTask = asyncio.create_task(mydf['url'].apply(fetch_status))
mydf['status'] = await myTask
print(mydf)
</code></pre>
<p>In a separate cell:</p>
<pre class="lang-py prettyprint-override"><code>asyncio.run(main = main_task())
</code></pre>
<p>I get a <code>RuntimeError: asyncio.run() cannot be called from a running event loop</code> error.<br />
Any idea why? Any help is welcome.</p>
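<p>A sketch of the two issues as I understand them (the request is replaced by a sleep so the example runs anywhere): Jupyter already runs an event loop, so <code>asyncio.run()</code> refuses to start a second one — in a cell you <code>await</code> the coroutine directly; and <code>Series.apply(fetch_status)</code> builds a Series of un-awaited coroutine objects rather than running them, so gathering explicit coroutines is the concurrent route:</p>

```python
import asyncio

async def fetch_status(url: str) -> int:
    await asyncio.sleep(0.01)  # stand-in for asyncio.to_thread(requests.get, url)
    return 200

async def main_task(urls: list[str]) -> list[int]:
    # start all requests concurrently and collect results in input order
    return list(await asyncio.gather(*(fetch_status(u) for u in urls)))

urls = ['https://google.com', 'https://apple.com']

# in a notebook cell (loop already running):  statuses = await main_task(urls)
# in a plain script:
statuses = asyncio.run(main_task(urls))
print(statuses)  # [200, 200]
```

<p>The resulting list can then be assigned back in one step, e.g. <code>mydf['status'] = statuses</code>.</p>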
|
<python><python-3.x><pandas><python-asyncio>
|
2024-03-22 14:42:12
| 1
| 339
|
Odyseus_v4
|
78,206,851
| 6,936,682
|
Pydantic to load list of nested json objects thows missing value
|
<p>I have the following pydantic objects:</p>
<pre><code>class NestedObject(BaseModel):
id: int
class MainObject(BaseModel):
id: int
nested_objects: list[NestedObject]
</code></pre>
<p>Also I have a JSON string that I want pydantic to build the objects from:</p>
<pre><code>bla = """
{
"id": 1,
"nested_objects:": [
{
"id": 1
},
{
"id": 2
}
]
}
"""
</code></pre>
<p>How I load my json string:</p>
<pre><code>MainObject.model_validate_json(bla)
</code></pre>
<p>When I try to load the JSON string with the model, it gives me a <a href="https://docs.pydantic.dev/2.6/errors/validation_errors/#missing" rel="nofollow noreferrer">ValidationError</a> saying that <code>nested_objects</code> are missing.</p>
<p>I have been scratching my head over this, since I couldn't find anything in the documentation stating that it is not possible to validate from complex nested json structures.</p>
<p>Do I need a field_validator in order to make this work, or have I missed something?</p>
|
<python><pydantic>
|
2024-03-22 14:18:53
| 0
| 1,970
|
Jeppe Christensen
|
78,206,782
| 22,824,066
|
Python4Delphi GetIt Demos aren't working in Delphi 12
|
<p>I just downloaded Python4Delphi from GetIt and, after it finished installing, it opened the demos for me.</p>
<p>When I run the demos, I just keep getting errors:</p>
<blockquote>
<p>Error 87: Could not open Dll "python33.dll"</p>
</blockquote>
<blockquote>
<p>Python could not be properly initialized. We must quit.</p>
</blockquote>
<p>My GetIt installation of Python4Delphi was successful without any issues. So why do the demos not work?</p>
|
<python><delphi><delphi-12-athens><python4delphi>
|
2024-03-22 14:07:53
| 0
| 632
|
Martin Kakhuis
|
78,206,706
| 2,409,793
|
Python module not imported despite file being there
|
<p>I have a <code>python</code> project. I have activated <code>virtualenv</code> for it and installed <code>requirements.txt</code>.</p>
<p>Here is my files/dirs structure</p>
<pre><code>.
├── app
│ ├── app.py
│ ├── modules
│ │ ├── __pycache__
│ │ ├── foo.py
│ │ ├── settings.py
│ └── requirements.txt
</code></pre>
<p>I am doing the following import in <code>foo.py</code></p>
<pre><code>import settings
</code></pre>
<p>VSCode does not complain (while in other erroneous import attempts it did complain)</p>
<p><a href="https://i.sstatic.net/GMBYT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GMBYT.png" alt="enter image description here" /></a></p>
<p>When trying to run the program</p>
<pre><code>▶ python app/app.py
Traceback (most recent call last):
File "/path/to/project/app/app.py", line 1, in <module>
from modules import foo
File "/path/to/project/app/modules/foo.py", line 14, in <module>
import settings
ModuleNotFoundError: No module named 'settings'
(.venv)
</code></pre>
<p>What am I missing?</p>
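<p>A reproducible sketch of what I suspect is going on (assumption: because the script is started as <code>python app/app.py</code>, only <code>app/</code> ends up on <code>sys.path</code> — <code>app/modules/</code> is not there, which is why the bare <code>import settings</code> fails at runtime even though the editor resolves it). Qualifying the import through the package makes it work:</p>

```python
import os
import subprocess
import sys
import tempfile

# rebuild the layout from the question in a temp directory
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'app', 'modules'))

def write(relpath: str, text: str) -> None:
    with open(os.path.join(root, *relpath.split('/')), 'w') as f:
        f.write(text)

write('app/app.py', 'from modules import foo\nprint(foo.NAME)\n')
write('app/modules/__init__.py', '')
write('app/modules/settings.py', "NAME = 'ok'\n")
# package-qualified import instead of the bare `import settings`
write('app/modules/foo.py', 'from modules import settings\nNAME = settings.NAME\n')

out = subprocess.run([sys.executable, os.path.join(root, 'app', 'app.py')],
                     capture_output=True, text=True)
print(out.stdout.strip())  # ok
```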
|
<python><python-3.x><import><python-import>
|
2024-03-22 13:54:52
| 1
| 19,856
|
pkaramol
|
78,206,531
| 3,870,664
|
Debugging Python AWS CDK application in VSCode
|
<p>I have an AWS CDK application written in python. I want to debug this in VSCode. I do understand this would not be the normal convention, since in my case cdk is installed globally, the cdk.json directs the global cdk to run my <code>"app": "python3 app.py"</code>, and at runtime cdk uses jsii (probably to bridge the python cdk code to javascript and then convert it to AWS CloudFormation).</p>
<p>With all that complexity stated, I should either be able to run the python debugger while correctly providing <code>import aws_cdk as cdk... app = cdk.App()</code> with the context. I can start the python debugger and pick up my venv/ with aws_cdk installed, no problem, but it has no context. I cannot find any documentation on providing the context. Using <code>cdk.App(context={'key':'value'})</code> returns <code>app.node.get_all_context() = {}</code>. I was then thinking I could start cdk and use VSCode to attach to the running process. I cannot run <code><global path>/node cdk synth</code> since <code>cdk</code> is itself an executable and is supposed to be run as <code>cdk synth</code> directly (I am on a Mac). One obvious additional test would be to run <code>npm init</code> and add cdk locally. That would help me use existing documentation on Stack Overflow. Although that may work for me, that is not desirable since I would have an npm setup for debugging plus a python setup for the app.</p>
<p>Python was chosen as the cdk language because there is more python than typescript experience on this app's development team. Although, from what I am reading, it seems that python is a second-rate language for aws cdk development.</p>
|
<python><aws-cdk><vscode-debugger>
|
2024-03-22 13:20:32
| 0
| 1,508
|
vfrank66
|
78,206,446
| 1,745,291
|
Is there a simple way to subclass python's set without redefining all operators?
|
<p>Is there a way to subclass <code>set</code>, with the binary operators returning the subclassed type, without redefining them?</p>
<p>example :</p>
<pre><code>class A(set):
pass
a = A([1,2,3]) & A([1,2,4])
a.__class__ == A # it's False, and I would like it to be true without redefining all operators
</code></pre>
<p><strong>Note that this question : <a href="https://stackoverflow.com/questions/798442/what-is-the-correct-or-best-way-to-subclass-the-python-set-class-adding-a-new">What is the correct (or best) way to subclass the Python set class, adding a new instance variable?</a> is 10 years old and the provided answers are only relevant for python 2.x.
This is why I asked another question concerning python 3.x (especially, python ≥ 3.8).</strong></p>
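<p>As far as I know there is still no built-in hook for this in Python 3.8+ — <code>set</code>'s operators keep returning plain <code>set</code> — but the redefinitions can at least be generated in a loop instead of written out by hand. A sketch:</p>

```python
class A(set):
    pass

def _returning_subclass(name):
    def method(self, other):
        result = getattr(set, name)(self, other)
        # rewrap plain-set results; pass NotImplemented etc. through untouched
        return type(self)(result) if isinstance(result, set) else result
    method.__name__ = name
    return method

for _name in ('__and__', '__or__', '__sub__', '__xor__',
              '__rand__', '__ror__', '__rsub__', '__rxor__'):
    setattr(A, _name, _returning_subclass(_name))

a = A([1, 2, 3]) & A([1, 2, 4])
print(a.__class__ is A)  # True
print(sorted(a))         # [1, 2]
```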
|
<python><python-3.x><subclassing>
|
2024-03-22 13:05:51
| 2
| 3,937
|
hl037_
|
78,206,353
| 3,934,271
|
Unable to generate plantuml diagram when trying to use "iplantuml" in Jupyter Notebook within vscode
|
<p>I'm trying to use iplantuml in jupyter notebook within VS Code.</p>
<p><a href="https://i.sstatic.net/OqFaH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OqFaH.png" alt="enter image description here" /></a></p>
<p>Keep getting this error:</p>
<pre class="lang-none prettyprint-override"><code>---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[10], line 1
----> 1 get_ipython().run_cell_magic('plantuml', '', '@startuml\nAlice -> Bob: Authentication Request\nBob --> Alice: Authentication Response\n@enduml\n')
File ~\AppData\Roaming\Python\Python312\site-packages\IPython\core\interactiveshell.py:2541, in InteractiveShell.run_cell_magic(self, magic_name, line, cell)
2539 with self.builtin_trap:
2540 args = (magic_arg_s, cell)
-> 2541 result = fn(*args, **kwargs)
2543 # The code below prevents the output from being displayed
2544 # when using magics with decorator @output_can_be_silenced
2545 # when the last Python token in the expression is a ';'.
2546 if getattr(fn, magic.MAGIC_OUTPUT_CAN_BE_SILENCED, False):
File c:\Users\....\AppData\Local\Programs\Python\Python312\Lib\site-packages\iplantuml\__init__.py:101, in plantuml(line, cell)
99 output = None
100 if use_web:
--> 101 output = plantuml_web(uml_path)
102 else:
103 plantuml_path = os.path.abspath(args.plantuml_path or PLANTUMLPATH)
File c:\Users\.....\AppData\Local\Programs\Python\Python312\Lib\site-packages\iplantuml\__init__.py:68, in plantuml_web(*file_names, **kwargs)
57 """
58 Given a list of UML documents, generate corresponding SVG diagrams, using
59 PlantUML's web service via the plantweb module.
...
1553 self._close_pipe_fds(p2cread, p2cwrite,
1554 c2pread, c2pwrite,
1555 errread, errwrite)
FileNotFoundError: [WinError 2] The system cannot find the file specified
</code></pre>
<p>If I hover over the <code>import iplantuml</code> I see this error:</p>
<blockquote>
<p>"iplantuml" is not accessed Pylance</p>
</blockquote>
<p>I can generate a plot using matplotlib. It is only plantuml I'm unable to use.</p>
<p>I tried doing an extra step below, but then I ran into another issue of a module not being found, and pip install didn't resolve it either. Also, I didn't have to do this when I was using Jupyter Notebook within Anaconda:</p>
<pre class="lang-none prettyprint-override"><code> %load_ext plantuml_magics
</code></pre>
<p>What am I missing?</p>
|
<python><visual-studio-code><jupyter-notebook><plantuml>
|
2024-03-22 12:50:26
| 1
| 1,590
|
jas
|
78,206,317
| 9,112,151
|
itemgetter for nested dict key value
|
<p>I'd like to get a nested dict value with <code>itemgetter</code>:</p>
<pre><code>from operator import itemgetter
dct = {"name": {"en": "John"}}
getter = itemgetter("name", "en")
getter(dct)
</code></pre>
<p>The code gives me error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/alber.aleksandrov/PycharmProjects/Playground/sa/разное.py", line 5, in <module>
getter(dct)
KeyError: 'en'
</code></pre>
<p>How should I use <code>itemgetter</code> to make it work?</p>
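For reference, the error occurs because <code>itemgetter("name", "en")</code> performs two independent top-level lookups rather than one nested lookup; chaining two separate getters shows the difference (an illustrative sketch, not necessarily the idiomatic answer):

```python
from operator import itemgetter

dct = {"name": {"en": "John"}}

# itemgetter("name", "en")(dct) evaluates to (dct["name"], dct["en"]),
# hence the KeyError on "en". Nested access needs composed getters:
name_getter = itemgetter("name")
lang_getter = itemgetter("en")
assert lang_getter(name_getter(dct)) == "John"
```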
|
<python><dictionary>
|
2024-03-22 12:43:20
| 4
| 1,019
|
Альберт Александров
|
78,206,137
| 606,576
|
How to type hint an overloaded decorator that may be sync or async
|
<p>Python 3.10.</p>
<p>I have written two versions of a logging decorator, one for normal functions and one for async functions. Mypy is happy with these:</p>
<pre class="lang-py prettyprint-override"><code>from functools import wraps
from inspect import Signature, signature
from logging import getLogger
from typing import Any, Awaitable, Callable, ParamSpec, TypeVar, Union, overload
# You heard it here first: type hinting for decorators is a pain in the arse.
Param = ParamSpec("Param")
RetType = TypeVar("RetType")
def _log_with_bound_arguments(
func_name: str, func_sig: Signature, *args: Any, **kwargs: Any
) -> None:
# Cf. <https://docs.python.org/3/library/inspect.html#inspect.BoundArguments>
bound_func = func_sig.bind_partial(*args, **kwargs)
func_params = ", ".join([k + "=" + repr(v) for k, v in bound_func.arguments.items()])
getLogger().debug("-> %s(%s)", func_name, func_params)
def atrace(func: Callable[Param, Awaitable[RetType]]) -> Callable[Param, Awaitable[RetType]]:
"""Async decorator that safely logs the function call at the debug level."""
@wraps(func)
async def async_wrapper(*args: Param.args, **kwargs: Param.kwargs) -> RetType:
_log_with_bound_arguments(func.__qualname__, signature(func), *args, **kwargs)
return await func(*args, **kwargs)
return async_wrapper
def trace(func: Callable[Param, RetType]) -> Callable[Param, RetType]:
"""Decorator that safely logs the function call at the debug level."""
@wraps(func)
def wrapper(*args: Param.args, **kwargs: Param.kwargs) -> RetType:
_log_with_bound_arguments(func.__qualname__, signature(func), *args, **kwargs)
return func(*args, **kwargs)
return wrapper
</code></pre>
<p>I would like to consolidate these into a single decorator that I can slap on all functions, async or not. Continuing:</p>
<pre class="lang-py prettyprint-override"><code>from inspect import iscoroutinefunction
from typing import Union, overload
@overload
def ctrace(func: Callable[Param, RetType]) -> Callable[Param, RetType]: ...
@overload
def ctrace(func: Callable[Param, Awaitable[RetType]]) -> Callable[Param, Awaitable[RetType]]: ...
def ctrace(
func: Union[Callable[Param, RetType], Callable[Param, Awaitable[RetType]]],
) -> Union[Callable[Param, RetType], Callable[Param, Awaitable[RetType]]]:
if iscoroutinefunction(func):
@wraps(func)
async def async_wrapper(*args: Param.args, **kwargs: Param.kwargs) -> RetType:
_log_with_bound_arguments(func.__qualname__, signature(func), *args, **kwargs)
return await func(*args, **kwargs)
return async_wrapper
else:
@wraps(func)
def wrapper(*args: Param.args, **kwargs: Param.kwargs) -> RetType:
_log_with_bound_arguments(func.__qualname__, signature(func), *args, **kwargs)
return func(*args, **kwargs)
return wrapper
</code></pre>
<p>However, now Mypy is giving me four errors I can't figure out how to fix:</p>
<pre class="lang-none prettyprint-override"><code>src/my_package/log/__init__.py:109: error: Overloaded function implementation does not accept all possible arguments of signature 2 [misc]
src/my_package/log/__init__.py:109: error: Overloaded function implementation cannot produce return type of signature 2 [misc]
src/my_package/log/__init__.py:118: error: Returning Any from function declared to return "RetType" [no-any-return]
src/my_package/log/__init__.py:127: error: Incompatible return value type (got "RetType | Awaitable[RetType]", expected "RetType") [return-value]
</code></pre>
<p>The line numbers obviously don't help a lot, so I'll point out that line 109 is the <code>def ctrace(</code> line that starts the implementation of the overloaded function. 118 is <code>return await func(*args, **kwargs)</code> and 127 is <code>return func(*args, **kwargs)</code>.</p>
<p>Pylance only shows a single error, on line 127: <em>Expression of type "RetType@ctrace | Awaitable[RetType@ctrace]" cannot be assigned to return type "RetType@ctrace"
Type "RetType@ctrace | Awaitable[RetType@ctrace]" cannot be assigned to type "RetType@ctrace"</em>. This is the same as the fourth error from Mypy as far as I can tell.</p>
<p>Any help in hinting this properly would be greatly appreciated.</p>
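A self-contained demonstration of what seems to be the underlying issue: <code>iscoroutinefunction</code> narrows the callable's type only at runtime, so static checkers still see the full union inside each branch (illustrative sketch, not from the original code):

```python
from inspect import iscoroutinefunction
from typing import Any, Callable

def classify(func: Callable[..., Any]) -> str:
    # This branch works fine at runtime, but type checkers such as mypy
    # do not narrow `func` to "coroutine function" here, which is why the
    # overloaded implementation reports union-typed return values.
    if iscoroutinefunction(func):
        return "async"
    return "sync"

async def afunc() -> int:
    return 1

def func() -> int:
    return 1

assert classify(afunc) == "async"
assert classify(func) == "sync"
```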
|
<python><overloading><mypy><python-decorators><python-typing>
|
2024-03-22 12:08:22
| 0
| 915
|
kthy
|
78,205,950
| 8,176,763
|
FastAPI oauth2 + jwt extend exp time at every request
|
<p>According to the FastAPI example, we can use OAuth2 and JSON Web Tokens to create a login for users:</p>
<pre><code>from datetime import datetime, timedelta, timezone
from typing import Annotated
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer, OAuth2PasswordRequestForm
from jose import JWTError, jwt
from passlib.context import CryptContext
from pydantic import BaseModel
# to get a string like this run:
# openssl rand -hex 32
SECRET_KEY = "09d25e094faa6ca2556c818166b7a9563b93f7099f6f0f4caa6cf63b88e8d3e7"
ALGORITHM = "HS256"
ACCESS_TOKEN_EXPIRE_MINUTES = 30
fake_users_db = {
"johndoe": {
"username": "johndoe",
"full_name": "John Doe",
"email": "johndoe@example.com",
"hashed_password": "$2b$12$EixZaYVK1fsbw1ZfbX3OXePaWxn96p36WQoeG6Lruj3vjPGga31lW",
"disabled": False,
}
}
class Token(BaseModel):
access_token: str
token_type: str
class TokenData(BaseModel):
username: str | None = None
class User(BaseModel):
username: str
email: str | None = None
full_name: str | None = None
disabled: bool | None = None
class UserInDB(User):
hashed_password: str
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
app = FastAPI()
def verify_password(plain_password, hashed_password):
return pwd_context.verify(plain_password, hashed_password)
def get_password_hash(password):
return pwd_context.hash(password)
def get_user(db, username: str):
if username in db:
user_dict = db[username]
return UserInDB(**user_dict)
def authenticate_user(fake_db, username: str, password: str):
user = get_user(fake_db, username)
if not user:
return False
if not verify_password(password, user.hashed_password):
return False
return user
def create_access_token(data: dict, expires_delta: timedelta | None = None):
to_encode = data.copy()
if expires_delta:
expire = datetime.now(timezone.utc) + expires_delta
else:
expire = datetime.now(timezone.utc) + timedelta(minutes=15)
to_encode.update({"exp": expire})
encoded_jwt = jwt.encode(to_encode, SECRET_KEY, algorithm=ALGORITHM)
return encoded_jwt
async def get_current_user(token: Annotated[str, Depends(oauth2_scheme)]):
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
username: str = payload.get("sub")
if username is None:
raise credentials_exception
token_data = TokenData(username=username)
except JWTError:
raise credentials_exception
user = get_user(fake_users_db, username=token_data.username)
if user is None:
raise credentials_exception
return user
async def get_current_active_user(
current_user: Annotated[User, Depends(get_current_user)]
):
if current_user.disabled:
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
@app.post("/token")
async def login_for_access_token(
form_data: Annotated[OAuth2PasswordRequestForm, Depends()]
) -> Token:
user = authenticate_user(fake_users_db, form_data.username, form_data.password)
if not user:
raise HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Incorrect username or password",
headers={"WWW-Authenticate": "Bearer"},
)
access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
access_token = create_access_token(
data={"sub": user.username}, expires_delta=access_token_expires
)
return Token(access_token=access_token, token_type="bearer")
@app.get("/users/me/", response_model=User)
async def read_users_me(
current_user: Annotated[User, Depends(get_current_active_user)]
):
return current_user
@app.get("/users/me/items/")
async def read_own_items(
current_user: Annotated[User, Depends(get_current_active_user)]
):
return [{"item_id": "Foo", "owner": current_user.username}]
</code></pre>
<p>However, we can verify that the token will expire after 30 minutes (<code>ACCESS_TOKEN_EXPIRE_MINUTES = 30</code>). What I would like is a token whose expiration time is extended to the current time + 10 minutes every time the user makes a request to any endpoint in this application. That way I could always keep a period of user inactivity during which the user will not suddenly be logged out while actively using the application. Is there a way to do this without ever storing users in a database, with only JWTs stored in the client-side web browser? What are the best practices for solving a problem like this? Update the Authorization header on every request? Set cookies? Please advise.</p>
<p>EDIT: This is my attempt by using a middleware to modify the headers, but it's still not working.</p>
<pre><code>async def get_current_user(token: Annotated[str, Depends(oauth2_scheme)]):
credentials_exception = HTTPException(
status_code=status.HTTP_401_UNAUTHORIZED,
detail="Could not validate credentials",
headers={"WWW-Authenticate": "Bearer"},
)
try:
payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
username: str = payload.get("sub")
if username is None:
raise credentials_exception
token_data = TokenData(username=username)
except JWTError:
raise credentials_exception
user = get_user(fake_users_db, username=token_data.username)
if user is None:
raise credentials_exception
# Check if the token needs to be refreshed
expiration_time = datetime.fromtimestamp(payload["exp"], tz=timezone.utc)
if expiration_time - datetime.now(tz=timezone.utc) < timedelta(minutes=REFRESH_INTERVAL_MINUTES):
# Refresh the token with a new expiration time
access_token_expires = timedelta(minutes=ACCESS_TOKEN_EXPIRE_MINUTES)
new_access_token = create_access_token(
data={"sub": user.username}, expires_delta=access_token_expires
)
return user, new_access_token
return user , token
async def get_current_active_user(
current_user_and_token: Annotated[User, Depends(get_current_user)]
):
current_user,_ = current_user_and_token
if current_user.disabled:
raise HTTPException(status_code=400, detail="Inactive user")
return current_user
</code></pre>
|
<python><jwt><fastapi>
|
2024-03-22 11:36:32
| 1
| 2,459
|
moth
|