| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,614,505
| 6,328,506
|
How to decode a value stored in Firefox localStorage
|
<p>I'm trying to decode a value stored in Firefox localStorage on Linux Alpine. I know that Firefox stores localStorage data in <code>/config/.mozilla/firefox/*.default-release/storage/default/<website>/ls/data.sqlite</code>. After extracting the encrypted access_token value using the command <code>sqlite3 data.sqlite "SELECT value FROM data WHERE key = 'access_token';"</code>, I obtained a string like this:</p>
<pre><code>��eyJhbGcisdiJSUzI1NiIsInRsdIgOiAiSldUIiwia2lkIiA6ICJsdkdsSnR6RjRxXzdrdWhSc29VTUNrM2k1dGVNMU1EM\h0dHBzOi8vcHJvY29uLWsdnN1bWlkb3ItcHJvZC5henVyZXdlYnNpdGVzLm5ldCIsImh0dHBzOi8vcHJvY29uLWZvcm5lY2 ��yaG1sLmF6dXJld2Vic2l0ZXMubmV0L2F1dGhlbnRpY2F0aW9uL2xvZ2luLWNhbGxiYWNrIiwiaHR0cHM6Ly9icGNvbnN1bsdkb3JwcmQuY2JseC5kZXYiLCJodHq$.,
</code></pre>
<p>It seems that Firefox uses the Snappy algorithm to <s>encrypt</s> compress this value, as mentioned in <a href="https://stackoverflow.com/a/77335474/6328506">https://stackoverflow.com/a/77335474/6328506</a>. However, when I copied the extracted value to a file and used the <a href="https://pypi.org/project/python-snappy/" rel="nofollow noreferrer">python-snappy</a> library, I received the following error:</p>
<pre><code>>>> snappy.uncompress(file.read())
Traceback (most recent call last):
File "/Users/abc/.pyenv/versions/cde/lib/python3.11/site-packages/snappy/snappy.py", line 84, in uncompress
out = bytes(_uncompress(data))
^^^^^^^^^^^^^^^^^
cramjam.DecompressionError: snappy: input buffer (size = 446315786887151) is larger than allowed (size = 4294967295)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/cde/.pyenv/versions/abc/lib/python3.11/site-packages/snappy/snappy.py", line 86, in uncompress
raise UncompressError from err
snappy.snappy.UncompressError
</code></pre>
<p>What should I do to extract the access_token value saved in localStorage in the same format as displayed in the browser?</p>
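A frequent cause of that <code>input buffer (size = …) is larger than allowed</code> error (an assumption, but it matches the symptoms) is that the value was copied as <em>text</em>: the terminal replaces undecodable bytes with U+FFFD, which corrupts the varint length header at the start of the Snappy stream, producing an absurd claimed size. A minimal sketch that keeps the value as raw bytes, assuming the `data.sqlite` layout from the question and that `python-snappy` is installed:

```python
import sqlite3

def read_raw_value(db_path, key):
    """Return the stored value as raw bytes, bypassing any terminal decoding."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(
            "SELECT value FROM data WHERE key = ?", (key,)
        ).fetchone()
    return row[0] if row else None

# Then, with python-snappy available:
# raw = read_raw_value("data.sqlite", "access_token")
# token = snappy.uncompress(raw).decode("utf-8")
```

The key point is that `row[0]` comes back as `bytes`, never passing through the shell's text decoding.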
|
<python><firefox><encryption><browser><alpine-linux>
|
2024-06-12 18:50:06
| 0
| 416
|
Kafka4PresidentNow
|
78,614,458
| 2,328,154
|
Snowflake SQLAlchemy - Create table with Timestamp?
|
<p>I am creating table definitions with SQLAlchemy in Python. I am able to successfully create a table with a primary key which autoincrements, plus some other fields. However, I haven't been able to find any documentation on how to add a field with a timestamp which will be populated when a new record is added. For reference, this is how you would create this field directly in a Snowflake worksheet.</p>
<pre><code>CREATE OR REPLACE TABLE My_Table(
    TABLE_ID NUMBER NOT NULL PRIMARY KEY,
    ... Other fields
    TIME_ADDED TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
</code></pre>
<p>This is my Python code...</p>
<pre><code>from sqlalchemy import Column, DateTime, Integer, Text, String
from sqlalchemy.orm import declarative_base
from sqlalchemy import create_engine
Base = declarative_base()
class My_Table(Base):
    __tablename__ = 'my_table'
    TABLE_ID = Column(Integer, primary_key=True, autoincrement=True)
    # .. other columns
    # .. also need to create the TIME_ADDED field

engine = create_engine(
    'snowflake://{user}:{password}@{account_identifier}/{database}/{schema}?warehouse={warehouse}'.format(
        account_identifier=account,
        user=username,
        password=password,
        database=database,
        warehouse=warehouse,
        schema=schema
    )
)
Base.metadata.create_all(engine)
</code></pre>
<p>Has anyone done this and, if so, could you point me in the right direction?</p>
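This is not Snowflake-specific, but in SQLAlchemy generally a database-side default is declared with <code>server_default</code>. A sketch (column names taken from the question, everything else assumed):

```python
from sqlalchemy import Column, DateTime, Integer, text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class MyTable(Base):
    __tablename__ = "my_table"
    TABLE_ID = Column(Integer, primary_key=True, autoincrement=True)
    # server_default puts DEFAULT CURRENT_TIMESTAMP into the emitted DDL,
    # so the database itself fills the column when a row is inserted.
    TIME_ADDED = Column(DateTime, server_default=text("CURRENT_TIMESTAMP"))
```

`Base.metadata.create_all(engine)` would then emit `TIME_ADDED TIMESTAMP DEFAULT CURRENT_TIMESTAMP`, matching the worksheet DDL above.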
|
<python><sqlalchemy><snowflake-cloud-data-platform>
|
2024-06-12 18:42:06
| 1
| 421
|
MountainBiker
|
78,614,281
| 1,521,496
|
How does typing.Type work in Python? When can variables be used as types?
|
<p>I am using Python 3.11 and Pylance for this.</p>
<pre><code>class Dog:
    pass
</code></pre>
<p>Example A:</p>
<pre><code>DogType: type[Dog] = Dog # type[Dog]
AnotherDogType: type[DogType] = Dog # Error
# ^^^^^^^ variable not allowed in type expression
</code></pre>
<p>Example B:</p>
<pre><code>DogType = Dog # type[Dog]
AnotherDogType: type[DogType] = Dog # Ok
</code></pre>
<p>I am wondering why I get the error for <code>AnotherDogType</code> in Example A. <code>DogType</code> is <code>type[Dog]</code> when I hover over either variable in VS Code.</p>
|
<python><python-typing><pyright>
|
2024-06-12 17:54:46
| 1
| 666
|
Sidd Singal
|
78,614,047
| 3,533,721
|
Plotly - Renormalise data in response to zooming in on plotly figure
|
<p>I have some data like:</p>
<pre><code>import pandas as pd
import datetime as dt
import plotly.graph_objects as go
# Example data
returns = pd.DataFrame({'time': [dt.date(2020,1,1), dt.date(2020,1,2), dt.date(2020,1,3),
                                 dt.date(2020,1,4), dt.date(2020,1,5), dt.date(2020,1,6)],
                        'longs': [0, 1, 2, 3, 4, 3],
                        'shorts': [0, -1, -2, -3, -4, -4]})
</code></pre>
<p>and a chart like:</p>
<pre><code>fig = go.Figure()
fig.add_trace(
    go.Scatter(x=list(returns.time),
               y=list(returns.longs),
               name="Longs",
               line=dict(color="#0c56cc")))
fig.add_trace(
    go.Scatter(x=list(returns.time),
               y=list(returns.shorts),
               name="Shorts",
               line=dict(color="#850411", dash="dash")))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/2fhmrdSM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fhmrdSM.png" alt="Chart" /></a></p>
<p><strong>When I zoom in on a particular section, however, I would like the data to be renormalised using the first date in the x-range</strong>. So zooming in on the period after Jan 5 here would give me something like:</p>
<p><a href="https://i.sstatic.net/BHrPsFHz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHrPsFHz.png" alt="Desired Outcome" /></a></p>
<p>This seems to be possible using something like <code>fig.layout.on_change</code> as discussed <a href="https://stackoverflow.com/questions/71980580/update-plotly-y-axis-range-after-setting-x-axis-range">here</a>. My attempt at doing it is below but it does not appear to change the figure at all.</p>
<pre><code>def renormalise_returns(returns, renorm_date):
    long_data = returns.melt(id_vars = ['time'])
    long_data.sort_values(by = 'time', inplace=True)
    cum_at_start = long_data[long_data['time'] >= renorm_date].groupby(['variable']).first().reset_index().rename(columns = {'value': 'old_value'}).drop(columns = ['time'])
    long_data2 = pd.merge(long_data, cum_at_start, how = 'left', on = ['variable'])
    long_data2['value'] = long_data2['value'] - long_data2['old_value']
    long_data2.drop(columns = ['old_value'], inplace=True)
    finn = long_data2.pivot(index = ['time'], columns = ['variable'], values = 'value').reset_index()
    return finn

fig = go.FigureWidget([go.Scatter(x=returns['time'], y=returns['longs'], name='Longs', line=dict(color="#0c56cc")),
                       go.Scatter(x=returns['time'], y=returns['shorts'], name="Shorts", line=dict(color="#850411", dash="dash"))])

def zoom(xrange):
    xrange_zoom_min, xrange_zoom_max = fig.layout.xaxis.range[0], fig.layout.xaxis.range[1]
    df2 = renormalise_returns(returns, xrange_zoom_min)
    fig = go.FigureWidget([go.Scatter(x=df2['time'], y=df2['longs']), go.Scatter(x=df2['time'], y=df2['shorts'])])

fig.layout.on_change(zoom, 'xaxis.range')
fig.show()
</code></pre>
|
<python><plotly>
|
2024-06-12 16:52:33
| 1
| 1,502
|
Stuart
|
78,613,926
| 10,200,497
|
How can I merge two dataframes based on last date of each group?
|
<p>These are my DataFrames:</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame(
    {
        'close': [100, 150, 200, 55, 69, 221, 2210, 111, 120, 140, 150, 170],
        'date': [
            '2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04',
            '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08',
            '2024-01-09', '2024-01-10', '2024-01-11', '2024-01-12',
        ]
    }
)
df2 = pd.DataFrame(
    {
        'group': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
        'close': [100, 105, 112, 117, 55, 65, 221, 211],
        'date': [
            '2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04',
            '2024-01-05', '2024-01-06', '2024-01-07', '2024-01-08'
        ],
        'extend': [
            '2024-01-09', '2024-01-09', '2024-01-09', '2024-01-09',
            '2024-01-11', '2024-01-11', '2024-01-11', '2024-01-11'
        ],
    }
)
</code></pre>
<p>And this is the expected output. I want to extend <code>df2</code> for each group in <code>group</code> column:</p>
<pre><code> group close date extend
0 a 100 2024-01-01 2024-01-09
1 a 105 2024-01-02 2024-01-09
2 a 112 2024-01-03 2024-01-09
3 a 117 2024-01-04 2024-01-09
4 a 69 2024-01-05 2024-01-09
5 a 221 2024-01-06 2024-01-09
6 a 2210 2024-01-07 2024-01-09
7 a 111 2024-01-08 2024-01-09
8 a 120 2024-01-09 2024-01-09
7 b 55 2024-01-05 2024-01-11
8 b 65 2024-01-06 2024-01-11
9 b 221 2024-01-07 2024-01-11
10 b 211 2024-01-08 2024-01-11
11 b 120 2024-01-09 2024-01-11
12 b 140 2024-01-10 2024-01-11
13 b 150 2024-01-11 2024-01-11
</code></pre>
<p>The logic is:</p>
<p>Each group in <code>df2</code> has a fixed <code>extend</code> date. This is basically the date up to which each group should be extended using <code>df1</code>.</p>
<p>For example, for group <code>a</code>, the data should be extended from 2024-01-04 to 2024-01-09. The start point of the extension is basically <code>df2.date.iloc[-1]</code> for each group, and the end is the <code>extend</code> date.</p>
<p>This is my attempt that didn't work:</p>
<pre><code>import janitor

def func(df2, df1):
    df2['extend_start'] = df2.date.iloc[-1]
    df2['extend_start'] = pd.to_datetime(df2.extend_start)
    df3 = df2.conditional_join(
        df1,
        ('extend_start', 'date', '<'),
        ('extend', 'date', '>')
    )
    return df3

df1['date'] = pd.to_datetime(df1.date)
df2['extend'] = pd.to_datetime(df2.extend)
out = df2.groupby('group').apply(func, df1=df1)
</code></pre>
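A plain-pandas sketch of the stated logic (no janitor; column names assumed from the question): for each group, keep its rows, then append the `df1` rows falling after the group's last date and up to its `extend` date.

```python
import pandas as pd

def extend_groups(df1, df2):
    # Each group keeps its own rows, then borrows the df1 rows that fall
    # strictly after its last date and up to (including) its extend date.
    df1 = df1.assign(date=pd.to_datetime(df1["date"]))
    df2 = df2.assign(date=pd.to_datetime(df2["date"]),
                     extend=pd.to_datetime(df2["extend"]))
    parts = []
    for name, grp in df2.groupby("group", sort=False):
        end = grp["extend"].iloc[0]
        extra = df1[(df1["date"] > grp["date"].max()) & (df1["date"] <= end)]
        extra = extra.assign(group=name, extend=end)
        parts.append(pd.concat([grp, extra[grp.columns]], ignore_index=True))
    return pd.concat(parts, ignore_index=True)
```

Applied to the frames above, this reproduces the `close` sequences in the expected output for both groups.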
|
<python><pandas><dataframe>
|
2024-06-12 16:24:22
| 5
| 2,679
|
AmirX
|
78,613,817
| 15,461,255
|
ModuleNotFoundError: No module named '_ssl' running Python3.9
|
<p>I'm trying to configure a virtual environment with Python 3.9 (other versions work without problems) on Ubuntu 20.04 LTS, but I get the error:</p>
<p><code>ModuleNotFoundError: No module named '_ssl'</code></p>
<p>I tried with the following solutions found in different posts but none of them worked:</p>
<ol>
<li>Reinstalling <code>ssl</code> with <code>sudo apt reinstall libssl-dev</code>,</li>
<li>Installing Python3.9 from source including the flag <code>--with-openssl=/usr/lib/ssl</code> when running <code>./configure</code>,</li>
<li>Following the instructions of <a href="https://stackoverflow.com/questions/78119798/install-python3-11-7-get-no-module-named-ssl">this post</a> but for Python3.9</li>
</ol>
<p>Any idea?</p>
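For what it's worth, a recipe that often resolves this (paths are Ubuntu-specific assumptions): the headers must be present <em>before</em> compiling, and `--with-openssl` should point at the OpenSSL install prefix. `/usr/lib/ssl` is only OpenSSL's configuration directory, so the flag in attempt 2 gives `configure` nothing to compile against.

```
# From the Python-3.9.x source directory, on Ubuntu 20.04:
sudo apt install -y build-essential libssl-dev zlib1g-dev
./configure --with-openssl=/usr      # /usr is where libssl-dev installs headers/libs
make -j"$(nproc)" && sudo make altinstall
python3.9 -c "import ssl; print(ssl.OPENSSL_VERSION)"   # should now succeed
```

If `import ssl` still fails, the tail of `configure`'s output usually says whether OpenSSL was detected.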
|
<python><ssl><python-3.9>
|
2024-06-12 16:04:16
| 0
| 350
|
Palinuro
|
78,613,442
| 1,882,711
|
linearly interpolate 1D array against n dimensional arrays over axis in parallel
|
<p>Is there any way to interpolate a 1D array against two n-dimensional arrays along a certain axis?</p>
<p>In particular, I have x and y arrays with shapes (6, 2, 10). I also have an x_in array with shape (10,) that I would like to interpolate along the first axis. Is there an elegant way to achieve this with any interpolation function in numpy or scipy?</p>
<p>In the following code, T has shape (2,), B has shape (10,) and x_in has shape (10,) as well. I would like to obtain an array with shape (2, 10) as below, but avoid the <code>for</code> loops.</p>
<pre class="lang-py prettyprint-override"><code>def foo(T, B, x_in):
    len_T = len(T)
    len_x_in = len(x_in)

    x = np.ones((6, len_T, len_x_in))
    x[0, :, :] = 0
    x[1, :, :] = 0.3 - 0.1*T[:, np.newaxis]
    x[2, :, :] = 0.35 - 0.1*T[:, np.newaxis]
    x[3, :, :] = 0.8 - 0.2*T[:, np.newaxis]
    x[4, :, :] = 0.9 - 0.2*T[:, np.newaxis]
    x[5, :, :] = 1.0

    y = np.ones((6, len_T, len_x_in))
    y[0, :, :] = -0.25*T[:, np.newaxis]*(1 + B)
    y[1, :, :] = -1
    y[2, :, :] = 1
    y[3, :, :] = 1
    y[4, :, :] = -1
    y[5, :, :] = -1

    out = np.ones((len_T, len_x_in))
    for i in range(len_T):
        for j in range(len_x_in):
            out[i, j] = np.interp([x_in[j]], x[:, i, j], y[:, i, j])

    return out
</code></pre>
<p>Is there any function to vectorise/parallelise the above interpolation?</p>
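With a fixed set of breakpoints along axis 0 the interpolation can be vectorised by hand with index arithmetic; a sketch, hedged on two assumptions that hold for the construction above — `x` is strictly increasing along axis 0, and the query points lie within its range:

```python
import numpy as np

def interp_axis0(x_in, x, y):
    """Vectorised equivalent of looping np.interp over the trailing axes.

    x, y have shape (K, T, N); x_in broadcasts against the (T, N) grid.
    """
    q = np.broadcast_to(x_in, x.shape[1:])            # query points, (T, N)
    # Count of breakpoints strictly below each query point, clipped so
    # that idx-1 and idx are always valid segment endpoints.
    idx = np.clip((x < q).sum(axis=0), 1, x.shape[0] - 1)
    x0 = np.take_along_axis(x, (idx - 1)[None], axis=0)[0]
    x1 = np.take_along_axis(x, idx[None], axis=0)[0]
    y0 = np.take_along_axis(y, (idx - 1)[None], axis=0)[0]
    y1 = np.take_along_axis(y, idx[None], axis=0)[0]
    t = (q - x0) / (x1 - x0)                          # linear blend per cell
    return y0 + t * (y1 - y0)
```

This does the whole (T, N) grid in a handful of array operations instead of T·N scalar `np.interp` calls.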
|
<python><arrays><numpy><scipy><linear-interpolation>
|
2024-06-12 14:52:35
| 0
| 636
|
GioR
|
78,613,195
| 2,819,922
|
cron job to trigger something on a remote webserver
|
<p>I have a webserver with ssh access and the possibility to enter a cron job there.</p>
<p>I have another second webserver, where I just can create web pages to do some work (via php) when they are called from outside.</p>
<p>What's the best way to automate those tasks by using a cron job on the first one?
I found python3 and webbrowser generally available on that first (cron job) web server.</p>
<p>When testing interactively, a text-based browser pops up and requires a q(uit) command to be entered.
Before going any further down that path (how do I send input to a shell script running in the background?), I'd rather ask whether there's a better way to solve my problem.</p>
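Since the job only needs to request a URL, a non-interactive HTTP client sidesteps the text browser entirely; a crontab sketch for the first server (URL and schedule are placeholders):

```
# m h dom mon dow  command
# Every day at 03:15, call the PHP page on the second server. -f makes curl
# exit non-zero on HTTP errors; -sS stays quiet except for real failures,
# which cron will then mail to the account's mailbox.
15 3 * * * curl -fsS --max-time 120 "https://second-server.example/task.php" >/dev/null
# wget equivalent: wget -q -O /dev/null --timeout=120 "https://second-server.example/task.php"
```

No terminal interaction is needed, so nothing blocks waiting for a q(uit) keypress.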
|
<python><shell><cron><webserver>
|
2024-06-12 14:05:54
| 1
| 1,835
|
datafiddler
|
78,613,148
| 14,282,714
|
Remove rows in your dataframe with filling option in streamlit
|
<p>I'm currently creating a <code>streamlit</code> app that shows the results of users' inputs. It may happen that a user accidentally submits their results, so I want to allow the user to remove their values from the dataframe. The most similar option I found was <a href="https://discuss.streamlit.io/t/deleting-rows-in-st-data-editor-progmatically/46337/2" rel="nofollow noreferrer"><code>Deleting rows in st.data_editor progmatically</code></a>, which uses a callback function to remove the rows. Unfortunately, it only works on the first row of the dataframe, and there it removes the complete dataset; on the later rows it doesn't work at all. Here is a reproducible example:</p>
<pre><code>import pandas as pd
import streamlit as st

st.sidebar.header("Submit results")

# default values
st.session_state["option"] = ""
st.session_state["number"] = 1

if 'data' not in st.session_state:
    data = pd.DataFrame(columns=["Track", "Result"])
    st.session_state.data = data

data = st.session_state.data

def onAddRow():
    row = pd.DataFrame({'Track': [st.session_state["option"]], 'Result': [st.session_state["number"]]})
    st.session_state.data = pd.concat([st.session_state.data, row])

def callback():
    edited_rows = st.session_state["data_editor"]["edited_rows"]
    rows_to_delete = []
    for idx, value in edited_rows.items():
        if value["x"] is True:
            rows_to_delete.append(idx)
    st.session_state["data"] = (
        st.session_state["data"].drop(rows_to_delete, axis=0).reset_index(drop=True)
    )

with st.form('Form1'):
    st.session_state["option"] = st.selectbox(
        "Select the track you played:",
        ("ds Mario Kart", "Toad Harbour", "Koopa Cape"))
    st.session_state["number"] = st.slider("Pick a number", 1, 12)
    st.session_state["submitted1"] = st.form_submit_button('Submit 1', on_click=onAddRow)

columns = st.session_state["data"].columns
column_config = {column: st.column_config.Column(disabled=True) for column in columns}

modified_df = st.session_state["data"].copy()
modified_df["x"] = False
# Make Delete be the first column
modified_df = modified_df[["x"] + modified_df.columns[:-1].tolist()]

st.data_editor(
    modified_df,
    key="data_editor",
    on_change=callback,
    hide_index=True,
    column_config=column_config,
)
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/rSNTW6kZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rSNTW6kZ.png" alt="enter image description here" /></a></p>
<p>So it looks like this. Imagine we want to remove the second row, we can click on the button in the first column. But this returns an error:</p>
<pre><code>KeyError: '[1] not found in axis'
</code></pre>
<p>I don't understand why this happens. Does anyone know how to create an option to remove rows like the above, even when you have a filling option with a submit button?</p>
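One likely culprit (an assumption, but it matches the traceback): `onAddRow` concatenates single-row frames without `ignore_index`, so every appended row keeps index label 0 and `drop(1)` has nothing to drop. A small sketch of the effect:

```python
import pandas as pd

# pd.concat without ignore_index keeps each single-row frame's index
# label 0, so after three submits the frame is indexed 0, 0, 0 and the
# callback's drop(1) raises "KeyError: '[1] not found in axis'".
rows = [pd.DataFrame({"Track": ["Koopa Cape"], "Result": [i]}) for i in range(3)]
bad = pd.concat(rows)
assert list(bad.index) == [0, 0, 0]

# Re-numbering on every concat keeps the labels aligned with the
# positions the data editor reports back in edited_rows.
good = pd.concat(rows, ignore_index=True)
assert list(good.index) == [0, 1, 2]
good = good.drop(1)  # deleting the second row now works
```

So `pd.concat([st.session_state.data, row], ignore_index=True)` in `onAddRow` should make the delete callback behave for every row, not just the first.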
|
<python><streamlit>
|
2024-06-12 13:59:27
| 1
| 42,724
|
Quinten
|
78,613,142
| 2,304,916
|
Polars: Convert duration to integer number of hours/minutes/seconds
|
<p>I want to convert a duration to an integer number of hours (or minutes, or seconds). I thought the <code>.dt</code> namespace would work the same as for datetimes, but I get an error instead.</p>
<p>This example</p>
<pre><code>from datetime import datetime
import polars as pl
pl.__version__
dx = pl.DataFrame({'dt1': datetime(2024, 1, 12, 13, 45)})
dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.hour())
</code></pre>
<p>gives the error:</p>
<pre><code>---------------------------------------------------------------------------
InvalidOperationError Traceback (most recent call last)
Cell In[5], line 2
1 dx = pl.DataFrame({'dt1': datetime(2024, 1, 12, 13, 45)})
----> 2 dx.select((pl.col('dt1') - pl.col('dt1').dt.date()).dt.hour())
File ~/src/poste/sda-poste-logistics/venv/lib/python3.11/site-packages/polars/dataframe/frame.py:8461, in DataFrame.select(self, *exprs, **named_exprs)
8361 def select(
8362 self, *exprs: IntoExpr | Iterable[IntoExpr], **named_exprs: IntoExpr
8363 ) -> DataFrame:
8364 """
8365 Select columns from this DataFrame.
8366
(...)
8459         └───────────┘
8460 """
-> 8461 return self.lazy().select(*exprs, **named_exprs).collect(_eager=True)
File ~/src/poste/sda-poste-logistics/venv/lib/python3.11/site-packages/polars/lazyframe/frame.py:1967, in LazyFrame.collect(self, type_coercion, predicate_pushdown, projection_pushdown, simplify_expression, slice_pushdown, comm_subplan_elim, comm_subexpr_elim, cluster_with_columns, no_optimization, streaming, background, _eager, **_kwargs)
1964 # Only for testing purposes atm.
1965 callback = _kwargs.get("post_opt_callback")
-> 1967 return wrap_df(ldf.collect(callback))
InvalidOperationError: `hour` operation not supported for dtype `duration[μs]`
</code></pre>
<p>on both polars 0.20.31 and 1.0.0-a1.</p>
<p>Is this a bug or am I doing something wrong?</p>
|
<python><python-polars>
|
2024-06-12 13:57:52
| 2
| 8,154
|
user2304916
|
78,612,997
| 1,427,758
|
Web uploader using python "requests" to upload 7G file occupies all CPU resources
|
<p>We have this snippet to upload a file to a web api</p>
<pre><code>import requests
import json
api_url = "https://ourserver/ourserverfolder/api/auth/Login"
dto = { "username": "ourusename", "password": "ourpassword" }
headers = { 'Content-Type': 'application/json', 'Accept': 'application/json' }
response = requests.post(api_url, data=json.dumps(dto), headers=headers)
response_obj = response.json()
bearer = response_obj["token"]
upload_url = "https://anotherserver/anotherserverfolder/UploadFile"
filename = r"D:\bigfile.zip"
filenamewithoutfolder = filename.split('\\')[-1]
with open(filename, 'rb') as f:
    file = (filenamewithoutfolder, f, 'application/x-zip')
    upload_files = { "file": file }
    upload_headers = { 'Authorization': 'Bearer ' + bearer }
    upload_response = requests.post(upload_url, files=upload_files, headers=upload_headers,
                                    verify=False)
</code></pre>
<p>When we upload smaller files, it uploads them without a noticeable CPU impact.</p>
<p>However, when I try to upload a 7 GB file, it occupies all the CPU on my computer.
Everything just stops, as if we had gone back in time and were running Windows 95 or some other operating system with cooperative multitasking.</p>
<p>Why?</p>
<p>I can also notice that the program takes up memory in a linear proportion to the time it has executed.</p>
<p>I have some ideas</p>
<ul>
<li>The code might still in some way cause the file to be stored as a byte buffer.</li>
<li>The multipart/form-data creation might check the whole file to come up with a suitable delimiter? Instead of just guessing?</li>
</ul>
|
<python>
|
2024-06-12 13:33:39
| 0
| 7,434
|
Anders LindΓ©n
|
78,612,905
| 12,550,643
|
Detection small object in game with yolov8 model
|
<p>I've just started learning about AI image detection. I learned that I needed a dataset, so I took a few screenshots. I labeled the objects with Roboflow and trained the model with YOLOv8. But no matter what I did, I could not correctly detect the fish (a small, moving, shadowy object). I think the Preprocessing and Augmentation sections in Roboflow are very important. I applied tiling, but I didn't understand what I should do on the code side. I'm so confused :D I need your help.</p>
<p>roboflow dataset:</p>
<p><a href="https://universe.roboflow.com/test-uifst/test55/dataset/3" rel="nofollow noreferrer">https://universe.roboflow.com/test-uifst/test55/dataset/3</a></p>
<p>The model was trained on : <a href="https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov8-object-detection-on-custom-dataset.ipynb#" rel="nofollow noreferrer">https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov8-object-detection-on-custom-dataset.ipynb#</a></p>
<p>python :</p>
<pre><code>import cv2
from ultralytics import YOLO
import supervision as sv
import mss
import numpy as np
import pyautogui
import time
model = YOLO('best.pt')
sct = mss.mss()
monitor = sct.monitors[1]
while True:
    screenshot = sct.grab(monitor)
    img = np.array(screenshot)
    # RGB (3 CHANNEL)
    img = cv2.cvtColor(img, cv2.COLOR_BGRA2RGB)
    results = model(img)
    for result in results:
        for bbox in result.boxes:
            if bbox.cls == 0:  # "fish"
                x1, y1, x2, y2 = bbox.xyxy[0]
</code></pre>
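Tiling at inference has to mirror the tiling applied in Roboflow: slice the full frame into overlapping crops at the training tile size, run the model on each crop, and shift the resulting boxes by the crop origin. A sketch of the slicing arithmetic (tile size and overlap are assumptions, not values from the dataset):

```python
import numpy as np

def tile_origins(size, tile, step):
    """Left/top origins so tiles of `tile` pixels cover `size` pixels."""
    if size <= tile:
        return [0]
    origins = list(range(0, size - tile + 1, step))
    if origins[-1] != size - tile:
        origins.append(size - tile)   # flush the last tile to the edge
    return origins

def iter_tiles(img, tile=640, overlap=0.2):
    """Yield (x0, y0, crop) covering the frame with overlapping tiles."""
    step = max(1, int(tile * (1 - overlap)))
    h, w = img.shape[:2]
    for y0 in tile_origins(h, tile, step):
        for x0 in tile_origins(w, tile, step):
            yield x0, y0, img[y0:y0 + tile, x0:x0 + tile]

# Per crop: results = model(crop); each xyxy box then needs
# (x0, y0, x0, y0) added to map it back to full-frame coordinates,
# followed by NMS across tiles to merge duplicates in the overlaps.
```

Running the detector on 640-px crops keeps a small fish at roughly the pixel size it had in the tiled training images, instead of shrinking the whole monitor frame down to the network input size.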
|
<python><yolov8><roboflow>
|
2024-06-12 13:14:51
| 1
| 479
|
zafer
|
78,612,486
| 329,829
|
Rolling aggregation in polars and also get the original column back without join or using .over
|
<p>Using polars <code>.rolling</code> and <code>.agg</code>, how do I get the original column back, without having to join back with the original column, or without having to use <code>.over</code>?</p>
<p><strong>Example:</strong></p>
<pre class="lang-py prettyprint-override"><code>import polars as pl

dates = [
    "2020-01-01 13:45:48",
    "2020-01-01 16:42:13",
    "2020-01-01 16:45:09",
    "2020-01-02 18:12:48",
    "2020-01-03 19:45:32",
    "2020-01-08 23:16:43",
]
df = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1]}).with_columns(
    pl.col("dt").str.to_datetime().set_sorted()
)
</code></pre>
<p>Provides me with a small polars dataframe:</p>
<pre><code>βββββββββββββββββββββββ¬ββββββ
β dt β a β
β --- β --- β
β datetime[ΞΌs] β i64 β
βββββββββββββββββββββββͺββββββ‘
β 2020-01-01 13:45:48 β 3 β
β 2020-01-01 16:42:13 β 7 β
β 2020-01-01 16:45:09 β 5 β
β 2020-01-02 18:12:48 β 9 β
β 2020-01-03 19:45:32 β 2 β
β 2020-01-08 23:16:43 β 1 β
βββββββββββββββββββββββ΄ββββββ
</code></pre>
<p>When I apply a rolling aggregations, I get the new columns back, but not the original columns:</p>
<pre class="lang-py prettyprint-override"><code>out = df.rolling(index_column="dt", period="2d").agg(
    pl.sum("a").alias("sum_a"),
    pl.min("a").alias("min_a"),
    pl.max("a").alias("max_a"),
    pl.col("a")
)
</code></pre>
<p>which gives:</p>
<pre><code>βββββββββββββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬βββββββββββββββ
β dt β sum_a β min_a β max_a β a β
β --- β --- β --- β --- β --- β
β datetime[ΞΌs] β i64 β i64 β i64 β list[i64] β
βββββββββββββββββββββββͺββββββββͺββββββββͺββββββββͺβββββββββββββββ‘
β 2020-01-01 13:45:48 β 3 β 3 β 3 β [3] β
β 2020-01-01 16:42:13 β 10 β 3 β 7 β [3, 7] β
β 2020-01-01 16:45:09 β 15 β 3 β 7 β [3, 7, 5] β
β 2020-01-02 18:12:48 β 24 β 3 β 9 β [3, 7, 5, 9] β
β 2020-01-03 19:45:32 β 11 β 2 β 9 β [9, 2] β
β 2020-01-08 23:16:43 β 1 β 1 β 1 β [1] β
βββββββββββββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄βββββββββββββββ
</code></pre>
<p>How can I get the original <code>a</code> column? I don't want to join, and I don't want to use <code>.over</code>, as I need the <code>group_by</code> of the rolling later on and <code>.over</code> does not work with <code>.rolling</code>.</p>
<p><strong>Edit.</strong> I am also not keen on using the following.</p>
<pre class="lang-py prettyprint-override"><code>out = df.rolling(index_column="dt", period="2d").agg(
    pl.sum("a").alias("sum_a"),
    pl.min("a").alias("min_a"),
    pl.max("a").alias("max_a"),
    pl.col("a").last()
)
</code></pre>
<p><strong>Edit 2.</strong> Why Expr.rolling() is not feasible and why I need the group_by:</p>
<p>Given a more elaborate example:</p>
<pre class="lang-py prettyprint-override"><code>dates = [
    "2020-01-01 13:45:48",
    "2020-01-01 16:42:13",
    "2020-01-01 16:45:09",
    "2020-01-02 18:12:48",
    "2020-01-03 19:45:32",
    "2020-01-08 23:16:43",
]
df_a = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1], "cat": ["one"]*6}).with_columns(
    pl.col("dt").str.to_datetime()
)
df_b = pl.DataFrame({"dt": dates, "a": [3, 7, 5, 9, 2, 1], "cat": ["two"]*6}).with_columns(
    pl.col("dt").str.to_datetime()
)
df = pl.concat([df_a, df_b])
</code></pre>
<pre><code>βββββββββββββββββββββββ¬ββββββ¬ββββββ
β dt β a β cat β
β --- β --- β --- β
β datetime[ΞΌs] β i64 β str β
βββββββββββββββββββββββͺββββββͺββββββ‘
β 2020-01-01 13:45:48 β 3 β one β
β 2020-01-01 16:42:13 β 7 β one β
β 2020-01-01 16:45:09 β 5 β one β
β 2020-01-02 18:12:48 β 9 β one β
β 2020-01-03 19:45:32 β 2 β one β
β 2020-01-08 23:16:43 β 1 β one β
β 2020-01-01 13:45:48 β 3 β two β
β 2020-01-01 16:42:13 β 7 β two β
β 2020-01-01 16:45:09 β 5 β two β
β 2020-01-02 18:12:48 β 9 β two β
β 2020-01-03 19:45:32 β 2 β two β
β 2020-01-08 23:16:43 β 1 β two β
βββββββββββββββββββββββ΄ββββββ΄ββββββ
</code></pre>
<p>and the code:</p>
<pre class="lang-py prettyprint-override"><code>out = df.rolling(index_column="dt", period="2d", group_by="cat").agg(
    pl.sum("a").alias("sum_a"),
    pl.min("a").alias("min_a"),
    pl.max("a").alias("max_a"),
    pl.col("a")
)
</code></pre>
<pre><code>βββββββ¬ββββββββββββββββββββββ¬ββββββββ¬ββββββββ¬ββββββββ¬βββββββββββββββ
β cat β dt β sum_a β min_a β max_a β a β
β --- β --- β --- β --- β --- β --- β
β str β datetime[ΞΌs] β i64 β i64 β i64 β list[i64] β
βββββββͺββββββββββββββββββββββͺββββββββͺββββββββͺββββββββͺβββββββββββββββ‘
β one β 2020-01-01 13:45:48 β 3 β 3 β 3 β [3] β
β one β 2020-01-01 16:42:13 β 10 β 3 β 7 β [3, 7] β
β one β 2020-01-01 16:45:09 β 15 β 3 β 7 β [3, 7, 5] β
β one β 2020-01-02 18:12:48 β 24 β 3 β 9 β [3, 7, 5, 9] β
β one β 2020-01-03 19:45:32 β 11 β 2 β 9 β [9, 2] β
β one β 2020-01-08 23:16:43 β 1 β 1 β 1 β [1] β
β two β 2020-01-01 13:45:48 β 3 β 3 β 3 β [3] β
β two β 2020-01-01 16:42:13 β 10 β 3 β 7 β [3, 7] β
β two β 2020-01-01 16:45:09 β 15 β 3 β 7 β [3, 7, 5] β
β two β 2020-01-02 18:12:48 β 24 β 3 β 9 β [3, 7, 5, 9] β
β two β 2020-01-03 19:45:32 β 11 β 2 β 9 β [9, 2] β
β two β 2020-01-08 23:16:43 β 1 β 1 β 1 β [1] β
βββββββ΄ββββββββββββββββββββββ΄ββββββββ΄ββββββββ΄ββββββββ΄βββββββββββββββ
</code></pre>
<p>This does not work:</p>
<pre class="lang-py prettyprint-override"><code>df.sort("dt").with_columns(sum=pl.sum("a").rolling(index_column="dt", period="2d").over("cat"))
</code></pre>
<p>Gives:</p>
<pre><code># InvalidOperationError: rolling expression not allowed in aggregation
</code></pre>
|
<python><dataframe><window-functions><python-polars><rolling-computation>
|
2024-06-12 11:57:16
| 3
| 5,232
|
Olivier_s_j
|
78,612,429
| 13,349,539
|
Creating Pytest tests for async API calls with FastAPI and MongoDB
|
<p>I have an API using FastAPI as the backend and MongoDB as the database. I want to write tests for the API endpoints I have, I have been trying to get a single test up and running but with no luck. My minimum reproducible setup is the following:</p>
<p><code>database.py</code>:</p>
<pre><code>client = AsyncIOMotorClient(MONGO_DETAILS)
db = client[DATABASE_NAME]

try:
    # The ismaster command is cheap and does not require auth.
    client.admin.command("ismaster")  # type: ignore
    print("Connected to MongoDB")
except Exception as e:
    print("Server not available")

print("Database:", db)

def close_mongo_connection():
    client.close()
</code></pre>
<p><code>auth_route.py</code>:</p>
<pre><code>class UserCreate(UserBase):
    name: str
    password: str

async def create_user(user: UserCreate):
    user_dict = user.model_dump()
    user_dict["hashed_password"] = get_password_hash(user_dict.pop("password"))
    result = await users_collection.insert_one(user_dict)
    return result

@router.post("/register")
async def register_user(user: UserCreate):
    db_user = await create_user(user)
    return db_user
</code></pre>
<p>My tests are inside a folder called <em>tests</em> and it has 2 files:</p>
<p><code>conftest.py</code>:</p>
<pre><code>from httpx import AsyncClient
from pytest import fixture
from starlette.config import environ

import app.main as main

@fixture(scope="function", autouse=True)
async def test_client(monkeypatch):
    original_env = environ.get('ENVIRONMENT')
    monkeypatch.setenv("ENVIRONMENT", "test")
    application = main.app
    async with AsyncClient(app=application, base_url="http://test") as test_client:
        yield test_client
    monkeypatch.setenv("ENVIRONMENT", original_env)
</code></pre>
<p><code>test_auth.py</code>:</p>
<pre><code>import pytest

@pytest.mark.asyncio
async def test_register_user(test_client):
    user_data = {
        "name": "Test User",
        "password": "testpassword",
    }
    response = await test_client.post("/auth/register", json=user_data)
    print(response)
    assert response.status_code == 200
    assert response.json()["email"] == user_data["email"]
</code></pre>
<p>I have tried:</p>
<ul>
<li>replacing <code>@pytest.mark.asyncio</code> with <code>@pytest.mark.anyio</code></li>
<li>changing the fixture scope from <em>function</em> to <em>session</em></li>
<li>using <code>TestClient</code> instead of <code>AsyncClient</code></li>
<li>setting <code>asyncio_mode=auto</code> when executing the tests</li>
</ul>
<p>The errors I'm getting range from <code>RuntimeError: Task <Task pending name='Task-3' coro=<test_register_user()</code> to <code>Event Loop Closed</code>. Whenever I change anything, I get another error.</p>
<p>I want the tests to execute without errors, it should send a request to my API and receive back an answer from the live database. I want the tests to set up a new DB (e.g. called "test") and apply all the CRUD operations in that DB, then delete the DB after the tests are done.</p>
<p>I have checked the following links for a potential solution for my problem:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/72009629/how-to-implement-pytest-for-fastapi-with-mongodbmotor">How to implement pytest for FastAPI with MongoDB(Motor)</a></li>
<li><a href="https://stackoverflow.com/questions/76406843/what-causes-error-async-generator-object-has-no-attribute-add">What causes error 'async_generator' object has no attribute 'add'?</a></li>
<li><a href="https://stackoverflow.com/questions/72996818/attributeerror-in-pytest-with-asyncio-after-include-code-in-fixtures">AttributeError in pytest with asyncio after include code in fixtures</a></li>
<li><a href="https://stackoverflow.com/questions/61022713/pytest-asyncio-has-a-closed-event-loop-but-only-when-running-all-tests">pytest-asyncio has a closed event loop, but only when running all tests</a></li>
</ul>
<p>And none of them are working, unless I missed something in their solution.</p>
|
<python><mongodb><pytest><fastapi>
|
2024-06-12 11:43:21
| 2
| 349
|
Ahmet-Salman
|
78,612,423
| 893,254
|
Can nonlocal be used to repeatedly reference names in higher levels of scope?
|
<p>In Python, the <code>nonlocal</code> keyword allows code in an inner scope to access a name in the enclosing scope.</p>
<p>To explain this, we can observe the behaviour of this example code, taken from the <a href="https://docs.python.org/3/tutorial/classes.html#scopes-and-namespaces-example" rel="nofollow noreferrer">Python tutorial documentation</a>.</p>
<pre><code>def scope_test():
    def do_nonlocal():
        nonlocal spam
        spam = 'nonlocal spam'

    spam = 'test spam'
    do_nonlocal()
    print(f'after nonlocal assignment spam={spam}')

scope_test()
# prints: 'after nonlocal assignment spam=nonlocal spam'
</code></pre>
<p>This demonstrates that the <code>nonlocal</code> keyword can be used to access the name <code>spam</code> from the enclosing scope, rather than the scope defined by <code>do_nonlocal</code>.</p>
<p>Can <code>nonlocal</code> be used repeatedly to access progressively higher levels of scope?</p>
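<p>To make the question concrete, here is the situation I have in mind: <code>nonlocal</code> in a deeply nested function, where the nearest enclosing scope does not bind the name at all:</p>

```python
def level1():
    spam = 'level1 spam'

    def level2():
        # level2 itself never binds spam
        def level3():
            # nonlocal searches the enclosing function scopes from the
            # inside out and binds to the nearest one that defines spam,
            # which here is level1's
            nonlocal spam
            spam = 'set by level3'
        level3()

    level2()
    return spam

print(level1())  # set by level3
```

<p>A single <code>nonlocal</code> therefore already reaches past intermediate scopes that don't bind the name; declaring <code>nonlocal</code> again at each level binds the same variable.</p>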
|
<python><python-nonlocal>
|
2024-06-12 11:42:10
| 1
| 18,579
|
user2138149
|
78,612,411
| 9,097,114
|
Python Selenium - Google has stopped working popup
|
<p>Hi, I am doing web scraping and the code runs successfully, but after some time (~5-6 hours) I get a pop-up error saying "Google Chrome has stopped working".<br />
Below are the <code>chrome_options</code> that I am using:</p>
<pre><code>chrome_options = Options()
chrome_options.add_argument("enable-automation")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument("--dns-prefetch-disable")
chrome_options.add_argument("--disable-gpu")
#chrome_options.add_experimental_option("detach", True) #### NEW
#chrome_options.add_argument(f'--proxy-server={None}')
# INITIATE CHROME
chrome_options.add_argument("--no-proxy-server")
chrome_options.add_argument("--proxy-server='direct://'")
chrome_options.add_argument("--proxy-bypass-list=*")
chrome_options.add_experimental_option('extensionLoadTimeout', 60000)
driver = webdriver.Chrome(executable_path='C:/Users/.wdm/drivers/chromedriver/win64/123.0.6312.122/chromedriver-win32/chromedriver.exe',options=chrome_options)
</code></pre>
<p>The Popup that I am getting is as follows<br />
<a href="https://i.sstatic.net/9nDoXMpK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nDoXMpK.png" alt="enter image description here" /></a></p>
<p>Is there anything else I need to change in <code>chrome_options</code>?<br />
Thanks in advance.</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2024-06-12 11:39:07
| 0
| 523
|
san1
|
78,612,335
| 6,708,508
|
Celery shared_task not working with pytest in fastapi
|
<p>I am struggling to test a FastAPI endpoint that contains a Celery task. I would like to mock the Celery task. The task is as follows:</p>
<pre><code>send_email.delay(
user.email,
"Subject goes here",
html_content,
)
</code></pre>
<p>Here is the actual mock fixture:</p>
<pre><code>@pytest.fixture
def mocked_send_email(mocker):
    # return the mock so the test can assert on it
    return mocker.patch('app.views.user.send_email.delay', return_value=None)
</code></pre>
<pre><code>def test_create_user(client, mocked_send_email):
payload = {}
response = client.post(
f"{base_url}/user",
json=payload,
headers=admin_headers
)
mocked_send_email.assert_called_once_with(
"email@example.com",
"Subject goes here",
mock.ANY
)
</code></pre>
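<p>As a side note, the assertion pattern itself can be sanity-checked with plain <code>unittest.mock</code>, without Celery or pytest (<code>FakeTask</code> is a stand-in I made up); the important detail is keeping a reference to the object returned by the patch call:</p>

```python
from unittest import mock

class FakeTask:
    """Stand-in for a Celery task object exposing .delay()."""
    def delay(self, *args):
        raise RuntimeError("real broker call")

send_email = FakeTask()

# patch .delay and keep a handle on the mock so we can assert on it later
with mock.patch.object(send_email, "delay", return_value=None) as mocked:
    send_email.delay("email@example.com", "Subject goes here", "<html>...</html>")
    mocked.assert_called_once_with("email@example.com", "Subject goes here", mock.ANY)
```

<p>A fixture that patches without returning the mock leaves the test nothing to assert on.</p>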
<p>Here is my part of <code>conftest.py</code> that handles celery</p>
<pre><code>@pytest.fixture(scope="session")
def celery_config():
print('running celery config')
return {"broker_url": REDIS, "result_backend": REDIS}
@pytest.fixture(scope='session', autouse=True)
def configure_celery(celery_config):
from app.celery import celery
celery.conf.update(celery_config)
yield
celery.conf.update({
'broker_url': 'memory://',
'result_backend': 'rpc://'
})
</code></pre>
<p>Now when i run my test, I get the following error</p>
<pre><code>usr/local/lib/python3.12/site-packages/kombu/connection.py:476: OperationalError
test_1 | ------------------------------ Captured log call -------------------------------
test_1 | WARNING kombu.connection:connection.py:669 _info - _info - No hostname was supplied. Reverting to default 'localhost'
</code></pre>
<p>My setup is dockerized and runs with no errors when I am not running pytest. I am really stumped here; I am not sure what's causing this error, considering my setup looks good and works when I am not running tests. Any help will be very much appreciated.</p>
<p>Update: print statements from the <code>configure_celery</code> fixture:</p>
<pre><code>Before update:redis://redis-cache:6379/1 redis://redis-cache:6379/1
After update:redis://redis-cache:6379/5 redis://redis-cache:6379/5
After reset: memory:// rpc://
</code></pre>
|
<python><pytest><celery>
|
2024-06-12 11:21:38
| 1
| 1,408
|
Muteshi
|
78,612,251
| 610,569
|
How do we add/modify the normalizer in a pretrained Huggingface tokenizer?
|
<p>Given a Huggingface tokenizer that already has a normalizer, e.g. <code>"mistralai/Mistral-7B-v0.1"</code>, we can do this to modify the normalizer:</p>
<pre><code>import json
from transformers import AutoTokenizer
from tokenizers.normalizers import Sequence, Replace, Prepend
tokenizer_name = "mistralai/Mistral-7B-v0.1"
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)
assert old_tok.backend_tokenizer.normalizer != None
new_normalizer = Sequence(
    [Prepend('▁'), Replace('▁', ' '), Replace("foo", "bar"), Replace('<br>', '\n')]
)
old_tok.backend_tokenizer.normalizer = new_normalizer
new_tokenizer_name = f"new_tokenizer-{tokenizer_name}"
old_tok.save_pretrained(new_tokenizer_name)
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)
new_tok = AutoTokenizer.from_pretrained(new_tokenizer_name)
</code></pre>
<p>[out]:</p>
<pre><code>>>> print(' '.join(old_tok.batch_decode(old_tok("I foo you<br>hello world")['input_ids'])))
<s> I foo you < br > hello world
>>> print(' '.join(new_tok.batch_decode(new_tok("I foo you<br>hello world")['input_ids'])))
<s> I bar you
hello world
</code></pre>
<p>But this hot-plug normalizer modification doesn't always work: if we change the model to <code>"mistralai/Mistral-7B-v0.3"</code>, it fails to work:</p>
<pre><code>import json
from transformers import AutoTokenizer
from tokenizers.normalizers import Sequence, Replace, Prepend
tokenizer_name = "mistralai/Mistral-7B-v0.3"
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)
new_normalizer = Sequence(
    [Prepend('▁'), Replace('▁', ' '), Replace("foo", "bar"), Replace('<br>', '\n')]
)
old_tok.backend_tokenizer.normalizer = new_normalizer
new_tokenizer_name = f"new_tokenizer-{tokenizer_name}"
old_tok.save_pretrained(new_tokenizer_name)
old_tok = AutoTokenizer.from_pretrained(tokenizer_name)
new_tok = AutoTokenizer.from_pretrained(new_tokenizer_name)
print(' '.join(old_tok.batch_decode(old_tok("I foo you<br>hello world")['input_ids'])))
print(' '.join(new_tok.batch_decode(new_tok("I foo you<br>hello world")['input_ids'])))
</code></pre>
<p>[out]:</p>
<pre><code><s> I foo you < br > hello world
<s> I foo you < br > hello world
</code></pre>
<h3>How do we add/modify the normalizer in a pretrained Huggingface tokenizer?</h3>
<p>Can any normalizer from a pretrained tokenizer be modified or just specific ones?</p>
<p>If the latter, why and how do we know if a pretrained tokenizer's normalizer can be extended or modified?</p>
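<p>One way I can probe this is to look at what <code>save_pretrained</code> actually serialized (a sketch assuming the saved directory contains a <code>tokenizer.json</code> with a top-level <code>"normalizer"</code> key, which is the <code>tokenizers</code> serialization format as far as I know):</p>

```python
import json

def describe_normalizer(tokenizer_json_path):
    """Return the serialized normalizer config, or None if none was saved."""
    with open(tokenizer_json_path, encoding="utf-8") as f:
        config = json.load(f)
    return config.get("normalizer")
```

<p>Comparing the output for the v0.1 and v0.3 saves should show whether the new normalizer was persisted at all, or silently dropped.</p>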
|
<python><nlp><large-language-model><huggingface-tokenizers>
|
2024-06-12 11:03:59
| 1
| 123,325
|
alvas
|
78,612,091
| 18,107,780
|
Error implementing a python IDropTarget wrapper for Flet
|
<p>I'm trying to create a wrapper around the <code>IDropTarget</code> to be able to drag and drop files directly into a <a href="https://flet.dev/" rel="nofollow noreferrer">flet</a> windows app.</p>
<p>This is my current implementation, i'm using <code>pythoncom</code>, <code>win32 API</code>, <code>ctypes</code>:</p>
<pre><code>import pythoncom
import win32api
import win32con
import win32gui
from ctypes import windll, create_unicode_buffer
import flet as ft
import threading
import time
class IDropTarget:
_public_methods_ = ["DragEnter", "DragOver", "DragLeave", "Drop"]
_com_interfaces_ = [pythoncom.IID_IDropTarget]
def __init__(self, callback):
self.callback = callback
print("IDropTarget initialized")
def DragEnter(self, dataObject, keyState, point, effect):
effect[0] = pythoncom.DROPEFFECT_COPY
print("DragEnter")
return 0
def DragOver(self, keyState, point, effect):
effect[0] = pythoncom.DROPEFFECT_COPY
print("DragOver")
return 0
def DragLeave(self):
print("DragLeave")
return 0
def Drop(self, dataObject, keyState, point, effect):
format_etc = pythoncom.FormatEtc(
pythoncom.CF_HDROP,
None,
pythoncom.DVASPECT_CONTENT,
-1,
pythoncom.TYMED_HGLOBAL,
)
stgmedium = dataObject.GetData(format_etc)
data = stgmedium.data
num_files = windll.shell32.DragQueryFileW(data, -1, None, 0)
file_list = []
for i in range(num_files):
length = windll.shell32.DragQueryFileW(data, i, None, 0)
buffer = create_unicode_buffer(length + 1)
windll.shell32.DragQueryFileW(data, i, buffer, length + 1)
file_list.append(buffer.value)
self.callback(file_list)
print("Drop", file_list)
return 0
def register_drop_target(hwnd, callback):
drop_target = IDropTarget(callback)
drop_target_ptr = pythoncom.WrapObject(
drop_target, pythoncom.IID_IDropTarget, pythoncom.IID_IDropTarget
)
pythoncom.RegisterDragDrop(hwnd, drop_target_ptr)
print("Drop target registered")
def find_window_by_title(title):
def enum_windows_proc(hwnd, lParam):
if win32gui.IsWindowVisible(hwnd) and title in win32gui.GetWindowText(hwnd):
lParam.append(hwnd)
return True
hwnds = []
win32gui.EnumWindows(enum_windows_proc, hwnds)
return hwnds[0] if hwnds else None
def handle_drop(files):
for file in files:
print(f"File dropped: {file}")
def poll_for_window_handle(title, callback):
hwnd = None
while hwnd is None:
hwnd = find_window_by_title(title)
time.sleep(0.1) # Sleep for 100 ms before trying again
callback(hwnd)
def main(page: ft.Page):
page.title = "dnd"
page.add(ft.Text("Drag and Drop files here"))
def on_window_handle_found(hwnd):
def run_drop_target():
pythoncom.CoInitialize()
try:
register_drop_target(hwnd, handle_drop)
pythoncom.PumpMessages()
except pythoncom.com_error as e:
print(f"Error registering drop target: {e}")
finally:
pythoncom.CoUninitialize()
thread = threading.Thread(target=run_drop_target)
thread.daemon = True
thread.start()
poll_for_window_handle(page.title, on_window_handle_found)
page.update()
if __name__ == "__main__":
ft.app(target=main)
</code></pre>
<p>This doesn't work; I always get this error back:</p>
<pre><code>Error registering drop target: (-2147024882, "Insufficient memory resources available to complete the operation.", None, None)
</code></pre>
<p>Moreover, the drag DROPEFFECTs don't even seem to be applied.</p>
<p>I don't understand where and what I'm doing wrong; my machine really shouldn't run into "insufficient memory resources".</p>
<p>How can I fix it?</p>
|
<python><windows><winapi><memory><flet>
|
2024-06-12 10:30:50
| 0
| 457
|
Edoardo Balducci
|
78,612,045
| 1,259,330
|
GUI closing unexpectedly
|
<p>I'm using PyQT6, PyCharm and Python 3.11.</p>
<p>Very briefly, I'm trying to convert (mentally) from VBA to Python.</p>
<p>I made a GUI in pyQT6, let's call it <code>test.ui</code>. It has a number of buttons and a table. I convert this to a Python file using the pyuic6 utility. I now have a <code>Test.py</code> file in the same folder. All good so far.</p>
<p>I add</p>
<pre class="lang-py prettyprint-override"><code>self.mybutton.clicked.connect(self.buttonpushed)
</code></pre>
<p>to the <code>Test.py</code> script and tag on a</p>
<pre class="lang-py prettyprint-override"><code>def buttonpushed(self):
print("Button Pushed")
</code></pre>
<p>This works great and as expected. My intention is to have a form with many buttons, so rather than including the <code>def</code> in the <code>Test.py</code> file, which will change every time I run <code>pyuic6</code>, I move it across to another file called <code>ExtraBits.py</code> and include this in to <code>Test.py</code> with the line</p>
<pre class="lang-py prettyprint-override"><code>import ExtraBits
</code></pre>
<p>It still works as expected and prints the words correctly.</p>
<p>Now my problem:
I want this button to hide a table on my GUI. So I added</p>
<pre class="lang-py prettyprint-override"><code>self.mytbl.hide()
</code></pre>
<p>to the <code>buttonpushed</code> procedure in <code>Test.py</code> and it works. If I add this to the procedure when it's in the <code>ExtraBits.py</code>, I still get the printed words, but the table doesn't hide and the GUI closes.
I've convinced myself that it is due to me wrongly referencing the <code>mytable</code> object in the imported <code>ExtraBits.py</code> file but not sure how to fix it.</p>
<p>I've tried every combination I can think of including the name of the main window object etc. I don't get any errors, just the <code>hide()</code> doesn't hide and the GUI bombs out, but only if in the <code>ExtraBits.py</code> file.</p>
<p>Re made the file in to minimal.py and ExtraBits.py to try and highlight my issue</p>
<pre><code>from PyQt6 import QtCore, QtGui, QtWidgets
import ExtraBits
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(parent=MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.tableWidget = QtWidgets.QTableWidget(parent=self.centralwidget)
self.tableWidget.setGeometry(QtCore.QRect(30, 20, 256, 192))
self.tableWidget.setObjectName("tableWidget")
self.tableWidget.setColumnCount(0)
self.tableWidget.setRowCount(0)
self.pushButton = QtWidgets.QPushButton(parent=self.centralwidget)
self.pushButton.setGeometry(QtCore.QRect(40, 260, 75, 23))
self.pushButton.setObjectName("pushButton")
self.pushButton.clicked.connect(ExtraBits.PushButton)
MainWindow.setCentralWidget(self.centralwidget)
self.statusbar = QtWidgets.QStatusBar(parent=MainWindow)
self.statusbar.setObjectName("statusbar")
MainWindow.setStatusBar(self.statusbar)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.pushButton.setText(_translate("MainWindow", "PushButton"))
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec())
</code></pre>
<p>and ExtraBits.py is</p>
<pre><code>def PushButton(self):
print("Button pressed")
self.tableWidget.hide()
</code></pre>
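<p>To illustrate what I suspect is happening (a plain-Python sketch with no Qt involved; the assumption is that <code>clicked</code> invokes the slot with a single <code>checked</code> bool, so a module-level function connected directly receives that bool as <code>self</code>):</p>

```python
def emit_clicked(slot):
    """Stand-in for the Qt signal: calls the slot with the checked state."""
    slot(False)

class FakeTable:
    def __init__(self):
        self.hidden = False
    def hide(self):
        self.hidden = True

class FakeUi:
    def __init__(self):
        self.tableWidget = FakeTable()

def PushButton(self):
    print("Button pressed")
    self.tableWidget.hide()

ui = FakeUi()
try:
    emit_clicked(PushButton)                  # "self" is the bool False here
except AttributeError as exc:
    failure = exc                             # bool has no .tableWidget

emit_clicked(lambda checked: PushButton(ui))  # bind the ui instance explicitly
```

<p>In PyQt6 an exception escaping a slot can abort the application, which would match the GUI bombing out after the print.</p>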
|
<python><pyqt6>
|
2024-06-12 10:25:09
| 1
| 412
|
perfo
|
78,611,921
| 18,618,577
|
How to exclude file less than x kB from a group read by glob
|
<p>I think it's very simple but I can't figure it out.
In a Python program that plots scientific meteo data, I need to exclude some files from a list of text files.
I have hundreds of files downloaded from a datalogger in the field. Sometimes the LTE link is degraded, and instead of receiving a normal text file with a normal header, I get something that looks like HTML (because I'm downloading the files through a URL API, an unreadable file could result in a sort of 404 error page being downloaded, I guess).</p>
<p>Here is a sample of a normal file with its header:</p>
<pre><code>"TOA5","2693","CR300","2693","CR300.Std.11.00","CPU:20240411_modem_steynard.CR300","4956","meteo"
"TIMESTAMP","RECORD","T107_C_010_Avg","T107_C_150_Avg","T107_C_300_Avg","AirTC_Avg","RH","WS_ms_S_WVT","WindDir_D1_WVT","WindDir_SD1_WVT","SlrW_Avg","SlrkJ_Tot","Rain_mm_Tot"
"TS","RN","Deg C","Deg C","Deg C","Deg C","%","meters/second","Deg","Deg","W/m^2","kJ/m^2","mm"
"","","Avg","Avg","Avg","Avg","Smp","WVc","WVc","WVc","Avg","Tot","Tot"
"2024-04-18 00:00:00",788,6.492,"NAN",9.66,-1.336,86.2,0,0,0,0,0,0
"2024-04-18 00:10:00",789,6.446,"NAN",9.6,-1.314,88.1,0,0,0,0,0,0
"2024-04-18 00:20:00",790,6.411,"NAN",9.54,-1.379,91.1,0,0,0,0,0,0
"2024-04-18 00:30:00",791,6.373,"NAN",9.48,-1.433,90.1,0,0,0,0,0,0
"2024-04-18 00:40:00",792,6.343,"NAN",9.42,-1.553,90.2,0,0,0,0,0,0
</code></pre>
<p>And a sample of a 'false' file:</p>
<pre><code><html>
<head>
<script language='JavaScript'>var BrowserDetect = {
init: function () {
this.browser = this.searchString(this.dataBrowser) || "An unknown browser";
this.version = this.searchVersion(navigator.userAgent)
|| this.searchVersion(navigator.appVersion)
|| "an unknown version";
this.OS = this.searchString(this.dataOS) || "an unknown OS";
</code></pre>
<p>It's easy to see with a <code>du -sh</code> bash command, for example, because the false files are about 4 kB and a normal file is approx 12 kB.</p>
<p>Here are the lines I'm using in Python to retrieve all the files in a list:</p>
<pre><code>path = '/TEMPORARY/STMTO/'
all_files = glob.glob(os.path.join(path , "*.dat"))
</code></pre>
<p>How can I exclude files smaller than 5 kB from <code>all_files</code>?</p>
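<p>A sketch of the kind of filter I'm after (<code>os.path.getsize</code> is my guess at the right tool; the 5 kB cutoff is chosen between the ~4 kB error pages and the ~12 kB data files):</p>

```python
import glob
import os

def dat_files_min_size(path, min_bytes=5 * 1024):
    """Return the *.dat files under path whose size is at least min_bytes."""
    candidates = glob.glob(os.path.join(path, "*.dat"))
    return [f for f in candidates if os.path.getsize(f) >= min_bytes]

all_files = dat_files_min_size('/TEMPORARY/STMTO/')
```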
|
<python><dataframe><glob>
|
2024-06-12 09:58:39
| 3
| 305
|
BenjiBoy
|
78,611,893
| 2,131,200
|
Drawing hierarchical clustering in scikit-learn
|
<p>I have an embedding matrix of shape <code>(4312, 1024)</code> (corresponding to 1024-dimensional embedding vectors of 4312 English sentences). I want to perform a clustering of these vectors and to visualize the results (in order to see if the distance threshold that I chose was good enough).</p>
<p>The clustering is done using:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from sklearn.cluster import AgglomerativeClustering
model = AgglomerativeClustering(n_clusters=None, metric='cosine',
compute_full_tree='auto',
linkage='complete',
distance_threshold=0.2,
compute_distances=True)
clustering = model.fit(embeddings)
print(f'Number of clusters: {clustering.n_clusters_}')
print(f'Labels:\n{clustering.labels_}')
# count unique labels
unique_labels, counts = np.unique(clustering.labels_, return_counts=True)
print(f'Number of clusters by counting: {len(unique_labels)}')
# Sort in descending order of counts
sorted_indices = np.argsort(-counts)
unique_labels = unique_labels[sorted_indices]
counts = counts[sorted_indices]
print(f'Unique labels: {unique_labels}')
print(f'counts: {counts}')
</code></pre>
<p>The results of the print is:</p>
<pre class="lang-bash prettyprint-override"><code>Number of clusters: 1714
clustering.labels_:
[ 460 820 245 ... 1030 112 1367]
Number of clusters by counting: 1714
Unique labels: [ 410 352 229 ... 1039 1041 1713]
counts: [55 42 33 ... 1 1 1]
</code></pre>
<p>I obtained 1714 clusters, and the largest cluster contains 55 points. If I increase the distance threshold to 0.25, then the number of clusters decreases to 1395. I want to know which sentences get merged when <code>distance_threshold=0.25</code> (compared to <code>distance_threshold=0.2</code>), so I plotted the results for <code>0.2</code>, following <a href="https://scikit-learn.org/stable/auto_examples/cluster/plot_agglomerative_dendrogram.html#sphx-glr-auto-examples-cluster-plot-agglomerative-dendrogram-py" rel="nofollow noreferrer">scikit-learn's official example</a>:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram
def plot_dendrogram(model, **kwargs):
# Create linkage matrix and then plot the dendrogram
# create the counts of samples under each node
counts = np.zeros(model.children_.shape[0])
n_samples = len(model.labels_)
for i, merge in enumerate(model.children_):
current_count = 0
for child_idx in merge:
if child_idx < n_samples:
current_count += 1 # leaf node
else:
current_count += counts[child_idx - n_samples]
counts[i] = current_count
linkage_matrix = np.column_stack(
[model.children_, model.distances_, counts]
).astype(float)
# Plot the corresponding dendrogram
dendrogram(linkage_matrix, **kwargs)
plt.title("Hierarchical Clustering Dendrogram")
# plot the top three levels of the dendrogram
plot_dendrogram(clustering, truncate_mode="level", p=3, distance_sort='ascending', show_leaf_counts=True)
plt.xlabel("Number of points in node (or index of point if no parenthesis).")
plt.show()
</code></pre>
<p>Results:</p>
<p><a href="https://i.sstatic.net/vTI1fRRo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTI1fRRo.png" alt="enter image description here" /></a></p>
<p>I have two questions please:</p>
<ol>
<li><p>I chose the distance to be <code>cosine</code>, which is between <code>[0,1]</code>. How is it possible that the distances from the leaves to their parents are larger than 1? Is it due to numerical errors?</p>
</li>
<li><p>The function <code>dendrogram</code> seems to draw the tree top-down. Is it possible to draw bottom-up: the clusters are the leaves? This would make more sense to me because I would like to start from the clusters and see how they would merge depending on the distances between them.</p>
</li>
</ol>
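<p>While writing this I double-checked my own assumption that cosine distance is bounded by 1: computing <code>1 - cosine_similarity</code> by hand (which is how scikit-learn's <code>cosine</code> metric is defined, as far as I know), anti-parallel vectors already give 2, so the range appears to be <code>[0, 2]</code>:</p>

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

print(cosine_distance([1.0, 0.0], [1.0, 0.0]))   # 0.0 (identical direction)
print(cosine_distance([1.0, 0.0], [-1.0, 0.0]))  # 2.0 (anti-parallel)
```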
<p><strong>Update:</strong> I was not very clear about the second question. What I would like to have is not the orientation of the graph (i.e., not a geometric transformation). Let me be more specific. There are 1714 clusters with the following number of points, in decreasing order: <code>counts: [55 42 33 ... 1 1 1]</code>. I would like the drawn tree to have:</p>
<ul>
<li><p>Exactly 1714 leaves at the bottom (with labels <code>(55)</code>, <code>(42)</code>, <code>(33)</code>, etc. In the above plot, you can see that this is not the case: the labels are <code>(1231)</code>, <code>(692)</code>, etc.)</p>
</li>
<li><p>Then above the leaves, let's say level 1: the merged leaves</p>
</li>
<li><p>Then above level 1, let's say level 2: the merged level-1 nodes</p>
</li>
<li><p>etc. up until level <code>p</code>.</p>
</li>
</ul>
<p>This will not necessarily reach the root and thus will create disconnected components, but that's precisely what I would like to have.</p>
<p>Thank you very much in advance for your help!</p>
|
<python><scikit-learn><scipy>
|
2024-06-12 09:54:10
| 1
| 1,606
|
f10w
|
78,611,856
| 5,320,591
|
PyTest fixture : How to resolve FastAPI 'app' shortcut deprecated warning?
|
<p>I am trying to solve, once and for all, this warning raised while executing pytest:</p>
<pre><code>/usr/local/lib/python3.11/site-packages/httpx/_client.py:680:
DeprecationWarning: The 'app' shortcut is now deprecated.
Use the explicit style 'transport=WSGITransport(app=...)' instead.
warnings.warn(message, DeprecationWarning)
</code></pre>
<p>I know that a similar question has been asked here:</p>
<p><a href="https://stackoverflow.com/questions/78238988/how-to-write-pytest-tests-for-a-fastapi-route-involving-dependency-injection-wit">How to write pytest tests for a FastAPI route involving dependency injection with Pydantic models using Annotated and Depends?</a></p>
<p><strong>But still I cannot get how to avoid this warning.</strong></p>
<p>I have this Python test fixture:</p>
<pre><code>@pytest.fixture
def client(app: FastAPI) -> Generator:
with TestClient(app, base_url="http://localhost") as client:
yield client
</code></pre>
<p>But if I change this to the thing suggested in the link previously mentioned:</p>
<pre><code>@pytest.fixture
def client() -> Generator:
    """Fixture to create a FastAPI test client."""
    # instead of app=app, use this to avoid the DeprecationWarning:
    with TestClient(transport=ASGITransport(app=app), base_url="http://localhost") as client:
        yield client
</code></pre>
<p>My tests can't pass, and instead I have this new error:</p>
<pre><code>TypeError: TestClient.__init__() got an unexpected keyword argument 'transport'
</code></pre>
|
<python><pytest><fastapi><suppress-warnings>
|
2024-06-12 09:45:34
| 2
| 1,546
|
RobyB
|
78,611,844
| 12,932,447
|
Choose between extras of the same dependency with Poetry
|
<p>I have a Python project managed with Poetry.</p>
<p>I want to install either <code>psycopg[binary]</code> or <code>psycopg[c]</code> via</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry install -E pgbinary
</code></pre>
<p>or</p>
<pre class="lang-bash prettyprint-override"><code>$ poetry install -E pgc
</code></pre>
<p>These extras should be mutually exclusive, i.e.</p>
<pre><code>$ poetry install -E pgbinary -E pgc
</code></pre>
<p>should raise an error.</p>
<p>This is what I wrote in the <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
python = "^3.9"
...
psycopg = { version = "^3.1.0", extras = ["binary", "c"], optional = true}
[tool.poetry.extras]
pgbinary = ["psycopg[binary]"]
pgc = ["psycopg[c]"]
</code></pre>
<p>But when I do one of the following</p>
<pre><code>$ poetry install
$ poetry install -E pgbinary
$ poetry install -E pgc
</code></pre>
<p>I always get this</p>
<pre><code>The Poetry configuration is invalid:
- data.extras.pgbinary[0] must match pattern ^[a-zA-Z-_.0-9]+$
</code></pre>
<p>The problem seems related to the square brackets in <code>psycopg[binary]</code> (or <code>psycopg[c]</code>).</p>
<p>Is there a way to choose which extra to install for a specific library during the <code>poetry install</code>?</p>
<hr />
<h3>UPDATE</h3>
<p>I've solved with this for now</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
...
psycopg = {version = "^3.1.9", optional=true}
psycopg-binary = {version = "^3.1.9", optional=true}
psycopg-c = {version = "^3.1.9", optional=true}
[tool.poetry.extras]
pgbinary = ["psycopg", "psycopg-binary"]
pgc = ["psycopg", "psycopg-c"]
</code></pre>
<p>It works, but as stated <a href="https://pypi.org/project/psycopg-c/" rel="nofollow noreferrer">here</a>:</p>
<pre><code>You shouldn't install this package directly: use instead
pip install "psycopg[c]"
to install a version of the optimization package matching the psycopg version installed.
</code></pre>
<p>If there is a way to write <code>version = "^3.1.9"</code> just once, it would be great.</p>
<p>Moreover, the extras are still not mutually exclusive.</p>
|
<python><python-poetry><psycopg3>
|
2024-06-12 09:43:09
| 2
| 875
|
ychiucco
|
78,611,640
| 22,326,950
|
Does SAP GUI scripting with python3 require you to release the connection or is it handled by the garbage collection?
|
<p><strong>This Post is mainly about scripting in Python but also VBA input is welcome</strong></p>
<p>Both in the SAP script recording and in forum posts (e.g. by <a href="https://community.sap.com/t5/technology-blogs-by-members/how-to-use-sap-gui-scripting-inside-python-programming-language/ba-p/13348848" rel="nofollow noreferrer">Stefan Schnell</a>), the objects used for the connection are released again, both when a connection attempt fails and before the end of the script (see the Python snippet):</p>
<pre class="lang-py prettyprint-override"><code>def connect_and_release():
try:
[...]
session = connection.Children(1)
if not type(session) == win32com.client.CDispatch:
connection = None # "releasing"
application = None # "releasing"
SapGuiAuto = None # "releasing"
return
[...]
except:
print(sys.exc_info()[0])
finally:
session = None # "releasing"
connection = None # "releasing"
application = None # "releasing"
SapGuiAuto = None # "releasing"
</code></pre>
<p>I understand that if this function were integrated into a larger program, for example, and the objects were not released, the connection to the scripting would then exist permanently until the end of the program (visible, for example, in the rotating barber pole in SAP). My Questions are therefore:</p>
<ul>
<li>If the script ends directly after the function, is the <code>finally</code> block necessary, or is the connection disconnected by Python's garbage collection after the script ends anyway?</li>
<li>Which of the listed objects actually have to be released so that, for example, SAP shows the scripting as no longer active? If <code>session</code> is obtained via <code>connection</code>, do I necessarily have to release both, or is it sufficient to release <code>connection</code>?</li>
<li>What are the implications for Excel VBA scripts? When using a private variable, do I necessarily have to set it to <code>Nothing</code>?</li>
</ul>
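<p>My current mental model, sketched without COM (the assumption is that pywin32 wrappers call <code>Release()</code> from <code>__del__</code>, so under CPython's reference counting the interface is released as soon as the last reference disappears, whether by assigning <code>None</code> or by the names going out of scope):</p>

```python
released = []

class FakeComWrapper:
    """Stand-in for a win32com CDispatch: releases its interface when collected."""
    def __del__(self):
        released.append("Release() called")

def use_connection():
    session = FakeComWrapper()
    # no explicit `session = None` needed: the local name dies with the frame

use_connection()
print(released)  # CPython's refcounting fired __del__ at function exit
```

<p>If that holds, the <code>finally</code> block mainly guarantees the release happens at a predictable point inside a long-running program, rather than being strictly required before interpreter exit.</p>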
<p><em>Please note that this is not about best practices but rather about understanding what happens under the hood.</em></p>
|
<python><vba><garbage-collection><sap-gui>
|
2024-06-12 09:04:25
| 0
| 884
|
Jan_B
|
78,611,602
| 891,919
|
How to mount the local filesystem with stlite
|
<p>I'm trying to create an executable from a streamlit app using <a href="https://github.com/whitphx/stlite" rel="nofollow noreferrer">stlite</a>. I need the app to read and write files on the local filesystem, so I followed <a href="https://github.com/whitphx/stlite/blob/main/packages/desktop/README.md#local-file-access" rel="nofollow noreferrer">these explanations</a>. Even with the example provided <a href="https://github.com/whitphx/stlite/tree/main/packages/desktop/samples/file-persistence-nodefs" rel="nofollow noreferrer">here</a>, I always get an error like this:</p>
<pre><code>Error during boot up
Only URLs with a scheme in: file, data, node, and electron are supported by the default ESM loader. On Windows, absolute paths must be valid file:// URLs. Received protocol 'c:'
</code></pre>
<p>A popup also shows the trace (sorry for the screenshot, the popup doesn't let me select the text):</p>
<p><a href="https://i.sstatic.net/TMygNUTJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMygNUTJ.png" alt="screenshot full error message" /></a></p>
<p>I thought that this had to do with the way the path is specified in package.json:</p>
<pre><code> "nodefsMountpoints": {
"/mnt": "."
}
</code></pre>
<p>so I tried a few variants, with <code>file:///c:/...</code> but it always ended with the same error.</p>
<p>Any suggestion welcome, thanks!</p>
|
<python><streamlit>
|
2024-06-12 08:58:51
| 1
| 1,185
|
Erwan
|
78,611,591
| 13,942,929
|
How can I pass optional type from C++ to Cython?
|
<p>I'm trying to pass an optional return value from C++ to Cython.
I don't know how to write it in the .pxd and .pyx files:</p>
<p>[C++]</p>
<pre><code>std::optional<std::shared_ptr<CPP_MyObject>> cpp_find_value();
</code></pre>
<p>[Work.pxd]</p>
<pre><code>shared_ptr[CPP_MyObject] cpp_find_value() # fix me
</code></pre>
<p>[Work.pyx]</p>
<pre><code>def python_find_value(self): # fix me
cdef shared_ptr[CPP_MyObject] cpp_object = self.thisptr.get().cpp_find_value()
cdef Python_MyObject python_object = Python_MyObject("0")
python_object.thisptr = cpp_object
return python_object
</code></pre>
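<p>For context, my current guess (untested sketch): Cython 3 ships a <code>libcpp.optional</code> wrapper, so the declarations might look like this, with <code>work.h</code> as a placeholder header name:</p>

```cython
# Work.pxd -- sketch, assuming Cython >= 3.0 for libcpp.optional
from libcpp.optional cimport optional
from libcpp.memory cimport shared_ptr

cdef extern from "work.h":
    optional[shared_ptr[CPP_MyObject]] cpp_find_value()

# Work.pyx -- sketch
def python_find_value(self):
    cdef optional[shared_ptr[CPP_MyObject]] maybe = self.thisptr.get().cpp_find_value()
    if not maybe.has_value():
        return None
    python_object = Python_MyObject("0")
    python_object.thisptr = maybe.value()
    return python_object
```

<p>I haven't been able to confirm whether <code>has_value()</code> / <code>value()</code> are exposed exactly like this, so corrections welcome.</p>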
|
<python><c++><cython><cythonize>
|
2024-06-12 08:57:09
| 2
| 3,779
|
Punreach Rany
|
78,611,518
| 891,919
|
streamlit st.text_input: value not updating as expected
|
<p>New to Streamlit, this is my issue: I'm trying to provide the user with an interface to annotate some data. For each item the user should give a label and may optionally leave a comment, and they can navigate across the different items with 'prev' and 'next' buttons. A basic version of the code is here:</p>
<pre><code>import streamlit as st
N=10
choices_values = ['No label', 'YES','NO', 'NA/unclear']
choices_position = { choice:i for i,choice in enumerate(choices_values) }
map_choice_to_value = { 'No label': None, 'YES': 1, 'NO': 0, 'NA/unclear': -1}
map_value_to_choice = { value:key for key, value in map_choice_to_value.items() }
# go to next pair
def next_pair():
st.session_state.current_index += 1
# go to previous pair
def prev_pair():
st.session_state.current_index -= 1
# save
def change_label():
choice = st.session_state.select_label
st.session_state.labels[st.session_state.current_index] = map_choice_to_value[choice]
def edit_comment_area():
st.session_state.comments[st.session_state.current_index] = st.session_state.comment_area
top_text = st.markdown('**Loading data...**')
# initialize state variables
if 'labels' not in st.session_state:
st.session_state.labels = [None] * N
if 'comments' not in st.session_state:
st.session_state.comments = [None] * N
if 'current_index' not in st.session_state:
st.session_state.current_index = 0
n_labels = sum(x is not None for x in st.session_state.labels)
top_text.markdown(f"**Item {st.session_state.current_index+1}/{N}** ({n_labels}/{N} annotated)")
# User input: label choice
current_label = st.session_state.labels[st.session_state.current_index]
st.radio('Are these two reports similar?',
options= choices_values,
index = choices_position[map_value_to_choice[current_label]],
key = 'select_label',
on_change=change_label)
# User input: optional comments
st.text(st.session_state.comments[st.session_state.current_index])
st.text_area('Comment (optional)',
key='comment_area',
value = st.session_state.comments[st.session_state.current_index],
on_change=edit_comment_area)
# UI: Previous and next buttons to navigate across the pairs
#
prev_button = st.button('Previous',
on_click=prev_pair,
disabled=st.session_state.current_index == 0)
next_button = st.button('Next',
on_click=next_pair,
disabled=st.session_state.current_index == N-1)
</code></pre>
<p>The labels part works well: when the user changes a label for item X, navigates to other items, then comes back to item X, they will find the last label they selected for X.</p>
<p>However, the comments don't work: when the user enters a comment it is correctly stored in <code>st.session_state.comments</code> and it can be printed by <code>st.write</code>, but even though the value arg in <code>st.text_area</code> is set to the right text, it does not show up in the text area as expected.</p>
<p>I noticed a few things: if I enter a comment for item X, then another comment for the previous or next item, and immediately go back, the comment does appear in the <code>text_area</code> for X. However, navigating through other items again makes both disappear, even though <code>st.session_state.comments</code> still contains the correct values.</p>
<p>Am I missing something? How do I obtain the expected behaviour?</p>
|
<python><streamlit>
|
2024-06-12 08:41:29
| 0
| 1,185
|
Erwan
|
78,611,328
| 7,396,306
|
Draw vertical line between points on twinned axes
|
<p>This question answers how to draw a vertical line between two points on the same axis: <a href="https://stackoverflow.com/questions/59582503/matplotlib-how-to-draw-vertical-line-between-two-y-points">Matplotlib how to draw vertical line between two Y points</a>. But what if we have two separate sets of points each on a different twinned axis?</p>
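<p>For reference, the single-axis technique from that linked answer boils down to plotting a two-point line at a fixed x. A minimal sketch, with made-up values:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

# Vertical segment between two y values at the same x (values made up)
x, y_lo, y_hi = 3.0, 1.0, 4.0
fig, ax = plt.subplots()
line, = ax.plot([x, x], [y_lo, y_hi], color="k")
print(line.get_xdata(), line.get_ydata())
```

<p>The question is how to do the equivalent when the two endpoints live on different (twinned) axes.</p>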
<p>Given the plot</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
num_points = 100
# Generate x-values
x_values = np.arange(num_points)
# Create a decreasing linear function
slope = -0.1
intercept = 5
y_values = slope * x_values + intercept
# Add random noise
noise_scale = 0.5
y_values += np.random.normal(0, noise_scale, num_points)
# Create the numpy arrays
random_points = np.column_stack((x_values, y_values))
random_numbers = np.random.uniform(-0.5, 0.5, num_points)
# Create DF
df = pd.DataFrame(data={'x': x_values, 'y0': y_values, 'y1': random_numbers})
# Plot the points
ax = df.plot.scatter(x='x', y='y0', c='b', marker='o', label='y0')
ax.legend().set_visible(False)
ax1 = ax.twinx()
ax1.scatter(df['x'], df['y1'], color='darkorange', marker='o', label='y1')
ax1.legend().set_visible(False)
plt.show()
</code></pre>
<p>How would I draw a vertical line between the two y values at each x, given that the two sets of points live on different (twinned) axes?</p>
|
<python><matplotlib>
|
2024-06-12 07:59:49
| 1
| 859
|
DrakeMurdoch
|
78,611,287
| 3,502,079
|
What does 'nt' stand for when you type `os.name` in Python on Windows?
|
<p>The question is basically in the title. Where does 'nt' come from?</p>
|
<python><windows><operating-system>
|
2024-06-12 07:52:09
| 0
| 392
|
AccidentalTaylorExpansion
|
78,611,217
| 5,014,959
|
grpc Greeter Service sample fails randomly
|
<p>I came across a problem with the grpc sample code while I was testing the waters with it. I noticed that the client succeeds, let's say, 75% of the time, but fails at a rate of 25% with the following error:</p>
<pre><code>grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNIMPLEMENTED
details = "Method not found!"
debug_error_string = "UNKNOWN:Error received from peer {grpc_message:"Method not found!", grpc_status:12, created_time:"2024-06-12T10:16:43.243776013+03:00"}"
</code></pre>
<p>Please see the picture attached.
<a href="https://i.sstatic.net/nSLTq04P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSLTq04P.png" alt="Sample grpc client runs" /></a></p>
<p>I'd expect it to work 100% of the time. What am I missing here?</p>
<p><strong>Please note that, I haven't made any changes to the sample code.</strong> I am using them as is and I got them via:</p>
<pre><code>git clone -b v1.64.0 --depth 1 --shallow-submodules https://github.com/grpc/grpc
</code></pre>
<p><strong>EDIT</strong>
My environment for this example:</p>
<p>Host OS : Linux (Manjaro)</p>
<p>Network : Client and server are both running on the same machine, just different terminal windows.</p>
<p>Firewall Rules: <a href="https://i.sstatic.net/BHStx99z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHStx99z.png" alt="See the picture please" /></a></p>
<pre><code>python --version
Python 3.12.3
</code></pre>
<pre><code>pip freeze
grpcio==1.64.1
grpcio-tools==1.64.1
protobuf==5.27.1
setuptools==70.0.0
</code></pre>
|
<python><grpc><grpc-python>
|
2024-06-12 07:38:27
| 0
| 3,147
|
Alp
|
78,611,152
| 5,699,915
|
What is the alternative to whis='range'?
|
<p>Earlier I used <code>whis = 'range'</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>box = plt.boxplot(data, whis = 'range', widths = 0.15, patch_artist = True,
showfliers = False)
</code></pre>
<p>In the newer matplotlib versions, <code>whis</code> is a <code>float</code>. What should be the new <code>whis</code> value as an alternative to <code>'range'</code>?</p>
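<p>For what it's worth, the current docs say <code>whis</code> can also be a pair of percentiles, so I assume <code>whis=(0, 100)</code> reproduces the old <code>'range'</code> behaviour (whiskers pinned to the data min/max). A quick sanity check of that assumption:</p>

```python
from matplotlib.cbook import boxplot_stats

data = [1, 2, 3, 4, 100]
# With percentiles (0, 100) the whiskers should sit at the data extremes
stats = boxplot_stats(data, whis=(0, 100))[0]
print(stats["whislo"], stats["whishi"])
```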
|
<python><matplotlib><boxplot>
|
2024-06-12 07:24:35
| 2
| 567
|
Sri Sanketh Uppalapati
|
78,611,096
| 676,192
|
Trying to access Inkscape from d-bus fails (but works with Gio)
|
<p>I am trying to use dbus to drive Inkscape from scripts.</p>
<p>This code - which I got from here - works ok:</p>
<pre><code>
# Start after inkscape is running.
print ("DBus test")
import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio, GLib
try:
bus = Gio.bus_get_sync(Gio.BusType.SESSION, None)
except BaseException:
print("No DBus bus")
exit()
print ("Got DBus bus")
proxy = Gio.DBusProxy.new_sync(bus, Gio.DBusProxyFlags.NONE, None,
'org.freedesktop.DBus',
'/org/freedesktop/DBus',
'org.freedesktop.DBus', None)
names_list = proxy.call_sync('ListNames', None, Gio.DBusCallFlags.NO_AUTO_START, 500, None)
# names_list is a GVariant, must unpack
names = names_list.unpack()[0]
# Look for Inkscape; names is a tuple.
for name in names:
if ('org.inkscape.Inkscape' in name):
print ("Found: " + name)
break
print ("Name: " + name)
appGroupName = "/org/inkscape/Inkscape"
winGroupName = appGroupName + "/window/1"
docGroupName = appGroupName + "/document/1"
applicationGroup = Gio.DBusActionGroup.get( bus, name, appGroupName)
windowGroup = Gio.DBusActionGroup.get(bus, name, winGroupName)
documentGroup = Gio.DBusActionGroup.get(bus, name, docGroupName)
# Activate actions. Draw a few objects first.
applicationGroup.activate_action('select-all', GLib.Variant.new_string('all'))
applicationGroup.activate_action('object-rotate-90-cw', None)
windowGroup.activate_action('tool-switch', GLib.Variant.new_string('Arc'))
windowGroup.activate_action('canvas-zoom-page', None)
</code></pre>
<p>When I try to translate it to use <code>dbus</code> like this:</p>
<pre><code>#!/usr/bin/env python
# Start after Inkscape is running.
import dbus
from xml.etree import ElementTree as ET
print("DBus test")
try:
# Connect to the session bus
bus = dbus.SessionBus()
except dbus.exceptions.DBusException as e:
print(f"No DBus bus: {e}")
exit()
print("Got DBus bus")
# Get the list of names on the bus
proxy = bus.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
interface = dbus.Interface(proxy, 'org.freedesktop.DBus')
names = interface.ListNames()
# Look for Inkscape
name = None
for n in names:
if 'org.inkscape.Inkscape' in n:
name = n
print(f"Found: {name}")
break
if not name:
print("Inkscape not found")
exit()
print(f"Name: {name}")
appGroupName = "/org/inkscape/Inkscape"
winGroupName = appGroupName + "/window/1"
docGroupName = appGroupName + "/document/1"
def activate_action(group, action, parameter=None):
try:
group.Activate(action, parameter, dbus_interface='org.gtk.Actions')
print(f"Activated action: {action} with parameter: {parameter}")
except dbus.exceptions.DBusException as e:
print(f"Failed to activate action {action}: {e}")
applicationGroup = bus.get_object(name, appGroupName)
windowGroup = bus.get_object(name, winGroupName)
documentGroup = bus.get_object(name, docGroupName)
# Activate actions. Draw a few objects first.
activate_action(applicationGroup, 'select-all', 'all')
activate_action(applicationGroup, 'object-rotate-90-cw')
</code></pre>
<p>it fails with this message:</p>
<pre><code>ERROR:dbus.connection:Unable to set arguments ('select-all', 'all') according to signature 'sava{sv}': <class 'TypeError'>: More items found in D-Bus signature than in Python arguments
Traceback (most recent call last):
File "/home/simone/inkscape_experiments/dbus_05.py", line 54, in <module>
activate_action(applicationGroup, 'select-all', 'all')
File "/home/simone/inkscape_experiments/dbus_05.py", line 44, in activate_action
group.Activate(action, parameter, dbus_interface='org.gtk.Actions')
File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 72, in __call__
return self._proxy_method(*args, **keywords)
File "/usr/lib/python3/dist-packages/dbus/proxies.py", line 141, in __call__
return self._connection.call_blocking(self._named_service,
File "/usr/lib/python3/dist-packages/dbus/connection.py", line 643, in call_blocking
message.append(signature=signature, *args)
TypeError: More items found in D-Bus signature than in Python arguments
</code></pre>
<p>What's the issue here? Is there a way to inspect D-Bus signatures and use that to call things correctly?</p>
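<p>For the record, here is roughly how I would inspect the signature myself: call <code>Introspect()</code> on the object (interface <code>org.freedesktop.DBus.Introspectable</code>) and parse the returned XML. Sketched below with a canned XML string standing in for what I'd expect Inkscape to return, so it runs without a live bus:</p>

```python
from xml.etree import ElementTree as ET

# Canned stand-in for proxy.Introspect(
#     dbus_interface='org.freedesktop.DBus.Introspectable')
# (the types here match the 'sava{sv}' signature from the error message)
INTROSPECTION_XML = """<node>
  <interface name="org.gtk.Actions">
    <method name="Activate">
      <arg type="s" name="action_name" direction="in"/>
      <arg type="av" name="parameter" direction="in"/>
      <arg type="a{sv}" name="platform_data" direction="in"/>
    </method>
  </interface>
</node>"""

root = ET.fromstring(INTROSPECTION_XML)
signatures = {}
for method in root.iter("method"):
    in_args = [a.get("type") for a in method.iter("arg")
               if a.get("direction") == "in"]
    signatures[method.get("name")] = "".join(in_args)

print(signatures)  # {'Activate': 'sava{sv}'}
```

<p>Reading <code>sava{sv}</code> that way suggests <code>Activate</code> expects three separate arguments (the action name, an array of variant parameters, and a platform-data dict), so my two-argument call would trip exactly this <code>TypeError</code>. That is my interpretation, not something I've confirmed.</p>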
|
<python><dbus>
|
2024-06-12 07:08:33
| 1
| 5,252
|
simone
|
78,610,544
| 1,822,700
|
Optimizing variable applied to discrete data in order to minimize error function
|
<p>I'm trying to optimize a function based on discrete data sets I got in the lab.</p>
<p><code>Spectrum_1</code> and <code>Spectrum_2</code> are experimental data sets with length <code>N</code>. Those two arrays contain values taken for each lambda (wavelength) value in <code>lambdas</code> array.</p>
<p>So:</p>
<pre><code>len(Spectrum_1) == len(Spectrum_2) == len(lambdas)
</code></pre>
<p><code>Spectrum_1</code> and <code>Spectrum_2</code> each have the symbolic form</p>
<pre><code>Spectrum = G(T,lambda) * E(lambda)
</code></pre>
<p><strong>E(lambda) is unknown</strong>, but it is known to <em>not</em> vary with T. E(lambda) would be discrete data (a set of points defined on each value in the 'lambdas' array)</p>
<p><strong>G(T, lambda) is a known function of T and wavelength (lambda)</strong> (it is a continuous and defined function). It is the Planck blackbody radiation equation in regards to wavelength to be more specific.</p>
<p><strong><code>E_1</code> and <code>E_2</code> should be equal</strong>:</p>
<pre><code>E_1 = Spectrum_1/np.array([G(T_1, lamda) for lamda in lambdas])
E_2 = Spectrum_2/np.array([G(T_2, lamda) for lamda in lambdas])
</code></pre>
<p><strong><code>T_1</code> and <code>T_2</code> are unknown</strong> but I know they are within 400 to 1000 range (both). <code>T_1</code> and <code>T_2</code> are two scalar values.</p>
<p>Knowing that, I need to minimize:</p>
<pre><code>np.sum(np.array([(E_1[i] - E_2[i])**2 for i in lamdas]))
</code></pre>
<p>Or at least that's what I think I should minimize. Ideally <code>(E_1[i]-E_2[i])==0</code>, but that won't be the case given that the experimental data in <code>Spectrum_1</code> and <code>Spectrum_2</code> contain noise and distortions due to atmospheric transmission.</p>
<p>I'm not very familiar with optimizing over multiple unknown variables (<code>T_1</code> and <code>T_2</code>) in Python. I could brute-force millions of combinations of <code>T_1</code> and <code>T_2</code>, I suppose, but I wish to do it correctly. I wonder if anybody could help me.</p>
<p>I hear scipy.optimize could do it for me, but many methods ask for Jacobian and Hessian and I'm unsure how to proceed given I have experimental data (<code>Spectrum_1</code> and <code>Spectrum_2</code>) and I'm not dealing with continuous/smooth functions.</p>
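<p>To make the question concrete, this is the kind of call I had in mind: a sketch only, with a Planck-like stand-in for G and synthetic spectra generated from made-up temperatures and a made-up emissivity curve (every constant below is an assumption, not my real data). Note that <code>L-BFGS-B</code> estimates gradients by finite differences, so no Jacobian or Hessian is needed.</p>

```python
import numpy as np
from scipy.optimize import minimize

C2 = 1.4388e-2  # second radiation constant in m*K (stand-in value)

def G(T, lam):
    # Planck-like spectral shape; constant prefactors dropped,
    # since they cancel in the ratio E_1/E_2
    return 1.0 / (lam**5 * (np.exp(C2 / (lam * T)) - 1.0))

# Synthetic "lab" data with known temperatures, to exercise the objective
lambdas = np.linspace(2e-6, 14e-6, 200)      # wavelengths in metres
E_true = 1.0 + 0.3 * np.sin(lambdas * 1e6)   # made-up emissivity curve
Spectrum_1 = G(620.0, lambdas) * E_true
Spectrum_2 = G(880.0, lambdas) * E_true

def objective(p):
    T_1, T_2 = p
    E_1 = Spectrum_1 / G(T_1, lambdas)
    E_2 = Spectrum_2 / G(T_2, lambdas)
    return np.sum((E_1 - E_2) ** 2)

res = minimize(objective, x0=[500.0, 700.0],
               bounds=[(400.0, 1000.0), (400.0, 1000.0)],
               method="L-BFGS-B")
print(res.x, res.fun)
```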
|
<python><scipy><curve-fitting><scipy-optimize><multivariate-testing>
|
2024-06-12 03:48:12
| 1
| 892
|
Yannick
|
78,610,508
| 1,347,170
|
Troubleshooting onnxruntime inference - X num_dims does not match W num_dims
|
<p>Using the nnhash.py script found here: <a href="https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX" rel="nofollow noreferrer">https://github.com/AsuharietYgvar/AppleNeuralHash2ONNX</a></p>
<pre class="lang-py prettyprint-override"><code># Copyright 2021 Asuhariet Ygvar
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
# or implied. See the License for the specific language governing
# permissions and limitations under the License.
import sys
import onnxruntime
import numpy as np
from PIL import Image
# Load ONNX model
session = onnxruntime.InferenceSession(sys.argv[1])
# Load output hash matrix
seed1 = open(sys.argv[2], 'rb').read()[128:]
seed1 = np.frombuffer(seed1, dtype=np.float32)
seed1 = seed1.reshape([96, 128])
# Preprocess image
image = Image.open(sys.argv[3]).convert('RGB')
image = image.resize([360, 360])
arr = np.array(image).astype(np.float32) / 255.0
arr = arr * 2.0 - 1.0
arr = arr.transpose(2, 0, 1).reshape([1, 3, 360, 360])
# Run model
inputs = {session.get_inputs()[0].name: arr}
outs = session.run(None, inputs)
# Convert model output to hex hash
hash_output = seed1.dot(outs[0].flatten())
hash_bits = ''.join(['1' if it >= 0 else '0' for it in hash_output])
hash_hex = '{:0{}x}'.format(int(hash_bits, 2), len(hash_bits) // 4)
print(hash_hex)
</code></pre>
<p>When I try to execute using: <code>python nnhash.py ../model.onnx ../seed1.dat ../1.png</code></p>
<blockquote>
<p>onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] :
1 : FAIL : Non-zero status code returned while running FusedConv node.
Name:'' Status Message: X num_dims does not match W num_dims. X:
{1,1280,1,1} W: {500}</p>
</blockquote>
<p>I've attached what the Netron layer output looks like: <a href="https://i.imgur.com/EeVItQ2.jpeg" rel="nofollow noreferrer">https://i.imgur.com/EeVItQ2.jpeg</a> (sending it as a link because the actual image is extremely long)</p>
<p>From what I can tell, towards the very bottom, there's a W:{500} right before the leafy part. I'm sure this is what is causing the issue, I'm just not sure what I need to do in order to process the input image so that it flows through the model fine.</p>
<p>Edit: I found an older model.onnx that appears to work fine, the last few layers differ a bit, which is I think the problem. I'm not sure what to do in order to get the script to work with the newer model.onnx file.</p>
<p>Old:
<a href="https://i.sstatic.net/zqw0YD5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zqw0YD5n.png" alt="old" /></a></p>
<p>New:
<a href="https://i.sstatic.net/7AOUqIse.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AOUqIse.png" alt="new" /></a></p>
|
<python><reshape><coreml><onnx>
|
2024-06-12 03:29:55
| 1
| 2,017
|
Joshua Terrill
|
78,610,246
| 968,132
|
Langchain: limiting message history length
|
<p>My goal is to limit the last N messages in a message history so I don't overload the LLM. My plan is to use <code>RunnableWithMessageHistory</code> in conjunction with a filter function. Unfortunately I seem to have two problems: 1) getting the limiting function to work; and 2) passing the actual user message into the model.</p>
<p>I'm using these two docs as reference:
<a href="https://python.langchain.com/v0.2/docs/how_to/message_history/" rel="nofollow noreferrer">https://python.langchain.com/v0.2/docs/how_to/message_history/</a>
<a href="https://python.langchain.com/v0.2/docs/tutorials/chatbot/#message-history" rel="nofollow noreferrer">https://python.langchain.com/v0.2/docs/tutorials/chatbot/#message-history</a></p>
<p>My questions:
What am I doing wrong? And specifically:</p>
<ol>
<li>Why is the RunnablePassthrough not applying to the message history?</li>
<li>Am I using <code>HumanMessage("{user_message_key}")</code> correctly with <code>input_messages_key</code>? I understand there's HumanMessagePromptTemplate but I'm not sure why I must use it.</li>
<li>Can I apply <code>_filter_messages</code> within <code>get_session_history</code>? What are the pros/cons?</li>
</ol>
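<p>Independent of langchain, the trimming behaviour I'm after is just list slicing that keeps any leading system message. A dependency-free sketch of the helper I have in mind (the <code>Msg</code> class is my stand-in, not langchain's API):</p>

```python
from dataclasses import dataclass

@dataclass
class Msg:
    # Plain stand-in for langchain's message classes
    role: str
    content: str

def trim_history(messages, keep=5):
    """Keep any leading system message plus the last `keep` other messages."""
    system = [m for m in messages if m.role == "system"][:1]
    rest = [m for m in messages if m.role != "system"]
    return system + rest[-keep:]

history = [Msg("system", "You're a helpful, friendly AI")] + [
    Msg("human", f"msg {i}") for i in range(10)
]
trimmed = trim_history(history)
print(len(trimmed))  # 6: the system message plus the last 5
```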
<p>Here's an MRE (minimal reproducible example):</p>
<pre><code>from typing import List, Union
from langchain_openai import ChatOpenAI
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.chat_history import BaseChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage, SystemMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.runnables import RunnablePassthrough
store = {}
model_name = "gpt-3.5-turbo"
system_message = "You're a helpful, friendly AI"
model = ChatOpenAI(model=model_name)
# Function to get chat message history
def get_session_history(session_id: str) -> BaseChatMessageHistory:
if session_id not in store:
store[session_id] = ChatMessageHistory()
history: BaseChatMessageHistory = store[session_id]
# history.messages = self._filter_messages(history.messages)
return history
def filter_messages(messages: List[Union[HumanMessage, AIMessage]]) -> List[Union[HumanMessage, AIMessage]]:
return messages[-5:]
def process_text(text: str, session_history: BaseChatMessageHistory, session_id: str) -> AIMessage:
# Create the prompt template
prompt = ChatPromptTemplate.from_messages(
[
SystemMessage(system_message),
MessagesPlaceholder(variable_name="chat_hist"),
HumanMessage("{user_message_key}")
]
)
runnable = RunnablePassthrough.assign(chat_hist=lambda x: filter_messages(x["chat_hist"])) | prompt | model
with_message_history = RunnableWithMessageHistory(
runnable=runnable,
get_session_history=get_session_history,
history_messages_key="chat_hist",
input_messages_key="usr_msg",
)
resp: AIMessage = with_message_history.invoke(
{"usr_msg": [HumanMessage(content=text)]},
config={"configurable": {"session_id": session_id}},
)
print(resp.content)
return resp
session_id = "test_session"
previous_messages = [
HumanMessage(content="I love the color Pink. It's my favorite color."),
AIMessage(content="That's great to know!"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Here's a placeholder message for testing"),
HumanMessage(content="Hello, my name is John."),
AIMessage(content="Hi John, great to meet you."),
HumanMessage(content="I'm sick of this rain."),
]
session_history = get_session_history(session_id)
for message in previous_messages:
session_history.add_message(message)
user_input = "Repeat exactly: GOODBYE"
response:AIMessage = process_text(user_input, session_history, session_id)
# answer: I'm sorry to hear that you're feeling down because of the rain. Is there anything I can do to help cheer you up?
</code></pre>
|
<python><langchain><py-langchain>
|
2024-06-12 00:58:27
| 1
| 1,148
|
Peter
|
78,610,202
| 1,313,890
|
Configure PySpark to use Sunday as Start of Week
|
<p>According to the <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.weekofyear.html" rel="nofollow noreferrer">pyspark docs</a> for <code>weekofyear()</code>, Monday is considered the start of the week. However, <code>dayofweek()</code> uses Sunday as the first day of the week. I do a lot of reporting on previous periods where I'm calculating change week over week, but also for the same period last year. This becomes problematic because I rely on both <code>weekofyear()</code> and <code>dayofweek()</code> to correctly calculate these time periods, but to do so they both need to start the week on the same day (which in my case should be Sunday). Does anyone know of a way to change a config or something in pyspark so that it considers Sunday the start of the week for ALL datetime calculations (including <code>weekofyear()</code>)? I really don't want to have to write a custom function for this.</p>
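<p>For comparison, plain Python already exposes the Sunday-first numbering I'm after via <code>strftime('%U')</code> (days before the year's first Sunday count as week 0). This is just to illustrate the convention I want, not a Spark setting:</p>

```python
from datetime import date

# %U: week of year with Sunday as the first day of the week
print(date(2024, 1, 6).strftime("%U"))  # Saturday -> '00'
print(date(2024, 1, 7).strftime("%U"))  # Sunday   -> '01'
print(date(2024, 1, 8).strftime("%U"))  # Monday   -> '01' (same Sunday-started week)
```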
|
<python><apache-spark><date><pyspark><apache-spark-sql>
|
2024-06-12 00:32:30
| 0
| 547
|
Shane McGarry
|
78,609,812
| 202,862
|
OpenCV on Raspberry Pi Bookworm: cv2.moveWindow() is not working
|
<p>Here's my code:</p>
<pre><code>name = "Player"
cv2.namedWindow(name, cv2.WINDOW_NORMAL)
cv2.resizeWindow(name, 800, 600)
cv2.moveWindow(name, 100, 50)
</code></pre>
<p><code>cv2.moveWindow()</code> does not have any effect at all. It seems to have something to do with the Wayland window manager, which does not support <code>moveWindow</code>.</p>
<p>I tried switching to X11, but then I don't get any desktop screen at all. However, the console still works for switching back to Wayland.</p>
<p>Any idea for a workaround?</p>
|
<python><opencv><user-interface><raspberry-pi>
|
2024-06-11 21:27:02
| 1
| 7,595
|
Fuxi
|
78,609,706
| 13,336,872
|
ERROR: Could not build wheels for opencv-python, which is required to install pyproject.toml-based projects when installing gym in anaconda env
|
<p>I was following a chapter in the book <a href="https://github.com/PacktPublishing/Hands-On-Reinforcement-Learning-with-Python" rel="nofollow noreferrer">SUDHARSAN RAVICHANDIRAN - HANDS-ON REINFORCEMENT LEARNING WITH PYTHON - _ master reinforcement and deep reinforcement</a>. There's a hands-on exercise to work with the Gym toolkit in a separate anaconda environment, in which I had to follow a couple of steps to install the necessary tools on my Xubuntu machine.</p>
<ol>
<li>Open the Terminal and type the following command to download Anaconda: <code>wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh</code></li>
<li>After downloading, we can install Anaconda using the following command: <code>bash Anaconda3-5.0.1-Linux-x86_64.sh</code></li>
<li>Create a virtual environment using the following command and name our environment universe: <code>conda create --name universe python=3.6 anaconda</code></li>
<li>Activate it: <code>source activate universe</code></li>
<li>Install the following dependencies:</li>
</ol>
<pre><code>sudo apt-get update
sudo apt-get install golang libcupti-dev libjpeg-turbo8-dev make tmux
htop chromium-browser git cmake zlib1g-dev libjpeg-dev xvfb libav-tools
xorg-dev python-opengl libboost-all-dev libsdl2-dev swig
conda install pip six libgcc swig
conda install opencv
</code></pre>
<ol start="6">
<li>Install Gym: <code>pip install gym==0.15.4</code></li>
</ol>
<p>Then it took a considerable time and gave the following logs:</p>
<pre><code>Collecting gym==0.15.4
Using cached gym-0.15.4-py3-none-any.whl
Collecting opencv-python
Using cached opencv-python-4.10.0.82.tar.gz (95.1 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Collecting pyglet<=1.3.2,>=1.2.0
Using cached pyglet-1.3.2-py2.py3-none-any.whl (1.0 MB)
Requirement already satisfied: numpy>=1.10.4 in ./anaconda3/envs/universe/lib/python3.6/site-packages (from gym==0.15.4) (1.18.5)
Requirement already satisfied: six in ./anaconda3/envs/universe/lib/python3.6/site-packages (from gym==0.15.4) (1.15.0)
Requirement already satisfied: scipy in ./anaconda3/envs/universe/lib/python3.6/site-packages (from gym==0.15.4) (1.5.0)
Collecting cloudpickle~=1.2.0
Using cached cloudpickle-1.2.2-py2.py3-none-any.whl (25 kB)
Requirement already satisfied: future in ./anaconda3/envs/universe/lib/python3.6/site-packages (from pyglet<=1.3.2,>=1.2.0->gym==0.15.4) (0.18.2)
Building wheels for collected packages: opencv-python
Building wheel for opencv-python (pyproject.toml) ... -
set of Installing commands
...
Copying files from CMake output
creating directory _skbuild/linux-x86_64-3.6/cmake-install/cv2
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/python-3/cv2.abi3.so -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/cv2.abi3.so
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/__init__.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/__init__.py
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py2.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py2.py
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/load_config_py3.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/load_config_py3.py
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/config.py
copying _skbuild/linux-x86_64-3.6/cmake-install/python/cv2/config-3.py -> _skbuild/linux-x86_64-3.6/cmake-install/cv2/config-3.py
Traceback (most recent call last):
File "/home/damiboy123/anaconda3/envs/universe/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/home/damiboy123/anaconda3/envs/universe/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/home/damiboy123/anaconda3/envs/universe/lib/python3.6/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 262, in build_wheel
metadata_directory)
File "/tmp/pip-build-env-kazhr26h/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 231, in build_wheel
wheel_directory, config_settings)
File "/tmp/pip-build-env-kazhr26h/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 215, in _build_with_temp_dir
self.run_setup()
File "/tmp/pip-build-env-kazhr26h/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
File "/tmp/pip-build-env-kazhr26h/overlay/lib/python3.6/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 543, in <module>
main()
File "setup.py", line 316, in main
cmake_source_dir=cmake_source_dir,
File "/tmp/pip-build-env-kazhr26h/overlay/lib/python3.6/site-packages/skbuild/setuptools_wrap.py", line 683, in setup
cmake_install_dir,
File "setup.py", line 456, in _classify_installed_files_override
raise Exception("Not found: '%s'" % relpath_re)
Exception: Not found: 'python/cv2/py.typed'
----------------------------------------
ERROR: Failed building wheel for opencv-python
Failed to build opencv-python
ERROR: Could not build wheels for opencv-python, which is required to install pyproject.toml-based projects
</code></pre>
<p>Please consider that I had to make some modifications to the above-mentioned commands, since the last edition is 4 years old and some of the commands no longer work, i.e.:</p>
<ul>
<li><code>libav-tools</code> has been replaced by <code>ffmpeg</code> and <code>python-opengl</code> by <code>PyOpenGL</code></li>
</ul>
<pre><code>sudo apt-get install ffmpeg
sudo apt-get install python3-pip
pip3 install PyOpenGL
sudo apt-get install ffmpeg
</code></pre>
<p>and I followed <a href="https://stackoverflow.com/questions/63732353/error-could-not-build-wheels-for-opencv-python-which-use-pep-517-and-cannot-be">stackoverflow thread</a>, but there wasn't any remedy.</p>
|
<python><opencv><pip><anaconda><python-wheel>
|
2024-06-11 20:52:45
| 1
| 832
|
Damika
|
78,609,695
| 6,849,363
|
Does HackerRank evaluate code for speed?
|
<p>I am practicing on HackerRank on the following question:</p>
<blockquote>
<p>Jesse loves cookies and wants the sweetness of some cookies to be
greater than a value k. To do this, two cookies with the least sweetness
are repeatedly mixed. This creates a special combined cookie with:</p>
<p>sweetness = (1 × least sweet cookie + 2 × 2nd least sweet cookie).</p>
<p>This occurs until all the cookies have a sweetness of at least k.</p>
<p>Given the sweetness of a number of cookies, determine the minimum
number of operations required. If it is not possible, return -1.</p>
<p>Example</p>
<p>The smallest values are . Remove them then return to the array. Now .
Remove and return to the array. Now . Remove , return and .
Finally, remove and return to . Now . All values are so the process
stops after iterations. Return .</p>
<p>Function Description Complete the cookies function in the editor
below.</p>
<p>cookies has the following parameters:</p>
<p>int k: the threshold value int A[n]: an array of sweetness values
Returns</p>
<p>int: the number of iterations required or -1</p>
</blockquote>
<p>I wrote this very simple solution in Python:</p>
<pre><code>def cookies(k, A):
i = 0
while True:
A.sort()
print(A)
if A[0] >= k:
break
elif len(A)< 2:
i = -1
break
n1 = A.pop(0)
n2 = A.pop(0)
new_cookie = (n1 + 2*n2)
A.insert(0, new_cookie)
i += 1
return i
</code></pre>
<p>It fails several test cases. I don't see any errors on edge cases. Are things like speed evaluated by HackerRank? Is this not performant/optimal? Are there edge cases where my sort is causing an overflow?</p>
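<p>Since posting, I also sketched a heap-based rewrite of the same algorithm: a min-heap gives the two smallest values in O(log n) per operation instead of re-sorting the whole list every pass. This is my own rewrite, not HackerRank's reference solution:</p>

```python
import heapq

def cookies(k, A):
    # Note: heapify mutates A in place
    heapq.heapify(A)
    ops = 0
    while A[0] < k:
        if len(A) < 2:
            return -1
        least = heapq.heappop(A)
        second = heapq.heappop(A)
        heapq.heappush(A, least + 2 * second)
        ops += 1
    return ops

print(cookies(7, [1, 2, 3, 9, 10, 12]))  # 2
```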
|
<python><performance>
|
2024-06-11 20:50:37
| 1
| 470
|
Tanner Phillips
|
78,609,621
| 2,016,632
|
Since upgrading pandas I can no longer insert a row without a FutureWarning. How to fix?
|
<p>I have a dataframe with a well populated column of datetimes called "ds". My intention is to insert one row before the others:</p>
<pre><code>start = df_p['ds'].iloc[0] - pd.Timedelta(1,"d")
if start not in df_p.index:
df_p.loc[start] = np.NaN
</code></pre>
<p>What I get is:</p>
<blockquote>
<p>FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.</p>
</blockquote>
<p>Obviously I am missing something dumb, but I can't see it...</p>
|
<python><pandas>
|
2024-06-11 20:26:21
| 1
| 619
|
Tunneller
|
78,609,465
| 5,092,134
|
Animating Yearly Data from Pandas in GeoPandas with Matplotlib FuncAnimation
|
<p>Using this dataset of % change by state, I have merged it with a cartographic boundary map of US states from the Census department: <a href="https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip" rel="nofollow noreferrer">https://www2.census.gov/geo/tiger/GENZ2018/shp/cb_2018_us_state_500k.zip</a></p>
<pre class="lang-py prettyprint-override"><code>df.head()
Year 2017 2018 2019 2020 2021 2022 2023
State
Alabama 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Arizona 0.24 0.00 0.03 -0.15 0.56 -0.36 0.21
Arkansas 0.35 -0.06 -0.03 0.03 -0.00 -0.13 -0.02
California 0.13 0.07 -0.03 0.04 0.21 -0.10 0.03
Colorado 0.81 -0.18 -0.01 -0.05 0.10 -0.03 -0.51
</code></pre>
<p><a href="https://i.sstatic.net/M6SsFGgp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6SsFGgp.png" alt="figures from column (year) 2017 shown on map" /></a></p>
<p>I would like to cycle through the columns (years) in a <code>FuncAnimation</code> after the boundaries have been plotted, and I am not quite sure how to go about it. The "lifecycle of a plot" page in the official reference manual cites relevant examples, but they all deal with built-in figures, not shapefiles.</p>
<p>Here is a related answer that seems exactly like what I'm missing, but deals with only <code>(x, y)</code> line graph: <a href="https://stackoverflow.com/questions/61018924/how-to-keep-shifting-the-x-axis-and-show-the-more-recent-data-using-matplotlib-a">How to keep shifting the X axis and show the more recent data using matplotlib.animation in Python?</a></p>
<p>How do I supply a different <code>column</code> for each frame, outside of the initial <code>shape.plot()</code> call?</p>
<p>code:</p>
<pre><code>shape = gpd.read_file(shapefile)
years = dfc.columns # dfc = % change df
tspan = len(dfc.columns)
""" merge map with dataframe on state name column """
shape = pd.merge(
left=shape,
right=dfc,
left_on='NAME',
right_on='State',
how='right'
)
""" init pyplot 'OO method' """
fig, ax = plt.subplots(figsize=(10, 5))
""" draw shape boundary """
ax = shape.boundary.plot(
ax=ax,
edgecolor='black',
linewidth=0.3,
)
""" plot shape """
ax = shape.plot(
ax=ax,
column=year, # what I need access to
legend=True, cmap='RdBu_r',
legend_kwds={'shrink': 0.3, 'orientation': 'horizontal', 'format': '%.0f'})
""" cycle through columns -- not operable yet """
def animate(year):
ax.clear()
ax.shape.column(year)
animation = FuncAnimation(states, animate, frames=(dfc.columns[0], dfc.columns[tspan] + 1, 1), repeat=True, interval=1000)
</code></pre>
<p>I really haven't found anything online dealing with these cartographic boundary maps specifically.</p>
<p>I have tried the most obvious things I could think of:<br />
Putting the entire <code>shape.plot()</code> method into <code>animate()</code></p>
<p>I tried a <code>for</code> loop cycling the years, which resulted in 7 distinct maps. Each iteration lost the attributes I set in <code>shape.boundary.plot()</code></p>
<p>Edit:</p>
<p>Since I've converted the original procedural example into the OO format, I am starting to have new questions about what might be done.</p>
<p>If <code>ax = shape.plot(ax=ax)</code>, is there some kind of getter/setter, for previously defined attributes? e.g. <code>ax.set_attr = column=year</code> (will scour manual immediately after I finish this)</p>
<p>Is there a way to define the map's boundary lines, shown here with <code>shape.plot()</code> and <code>shape.boundary.plot()</code>, using the <code>fig</code>, instead of <code>ax</code> (<code>ax = shape.plot()</code>)?</p>
<p>Barring that, could we have <code>shape.plot()</code> and <code>shape.boundary.plot()</code> persist to the first subplot <code>axs[0]</code> and have columns of data shown using subsequent overlapping subplots <code>axs[n == year]</code>?</p>
<p>Any iterative process I've seen so far has lost the boundary attributes, so that's been a big sticking point for me.</p>
|
<python><pandas><matplotlib><geopandas><matplotlib-animation>
|
2024-06-11 19:36:46
| 2
| 1,233
|
Avery Freeman
|
78,609,293
| 3,390,466
|
How do I perform an atomic XREAD and XDEL on a Redis Stream?
|
<p>I am trying to design a system on Redis, where:</p>
<ul>
<li>There is a queue of messages</li>
<li>There is a single writer to the queue</li>
<li>There are multiple consumers of "messages" in the queue, each continuously reading and consuming messages as they arrive</li>
<li>Each message MUST be received by one and only one consumer.</li>
</ul>
<p>I was not able to find a way to <code>XREAD</code> and <code>XDEL</code> the message atomically. The best solution I see is:</p>
<pre><code>import redis
client = redis.StrictRedis(host='localhost', port=6379, db=0)
group_name = 'my_group'
stream_name = 'my_queue'
consumer_name = 'consumer_1' # Change this for each consumer
def read_and_process_messages():
while True:
# Read message from the stream
messages = client.xreadgroup(group_name, consumer_name, {stream_name: '>'}, count=1, block=5000)
for stream, messages in messages:
for message_id, message in messages:
print(f"Consumer {consumer_name} processing message ID: {message_id}, data: {message[b'message'].decode()}")
# Acknowledge the message
client.xack(stream_name, group_name, message_id)
# ANOTHER CONSUMER CAN RECEIVE THE MESSAGE HERE!
# Delete the message after acknowledging
client.xdel(stream_name, message_id)
if __name__ == "__main__":
read_and_process_messages()
</code></pre>
<p>But this has a problem: between the <code>XACK</code> and the <code>XDEL</code> (see the comment in the code above), another parallel consumer can receive the same message.</p>
<p>How can I read the first message in the stream and atomically delete it, ensuring that it is only received once? Is there an option that allows doing that, like:</p>
<p><code>XREAD COUNT 1 STREAMS my_queue 0 DEL</code></p>
|
<python><redis><stream><queue><publish-subscribe>
|
2024-06-11 18:47:36
| 1
| 4,209
|
Victor2748
|
78,609,106
| 4,669,905
|
Detect collision and run a function in pymunk
|
<p>I have a simple setup in pymunk where a circle collides with a segment. On every collision the radius of the circle should increase by an amount I specify (5% in this case). The circle should bounce correctly from the segment even after the radius increases. Here is my implementation but I am facing an issue where this code works erratically. Sometimes the radius increases on collision and sometimes it does not and I can't figure out why.</p>
<pre><code>import pygame as pg
import pymunk
import pymunk.pygame_util
import sys
import math
pymunk.pygame_util.positive_y_is_up = False
RES = WIDTH, HEIGHT = 1080 // 2, 1920 // 2
FPS = 120
pg.init()
surface = pg.display.set_mode(RES)
clock = pg.time.Clock()
draw_options = pymunk.pygame_util.DrawOptions(surface)
space = pymunk.Space()
space.gravity = 0, 530
elasticity = 0.999
friction = 0.05
# create the bounding box
def create_segments(start, end):
segment_shape = pymunk.Segment(space.static_body, start, end, 3)
segment_shape.elasticity = elasticity
segment_shape.friction = friction
segment_shape.collision_type = 1 # Assign a collision type to the walls
space.add(segment_shape)
create_segments((0, HEIGHT), (WIDTH, HEIGHT)) # floor
create_segments((0, 0), (0, HEIGHT)) # left wall
create_segments((WIDTH, 0), (WIDTH, HEIGHT)) # right wall
create_segments((0, 0), (WIDTH, 0)) # ceiling
# create the ball
ball_mass = 1
ball_radius = 20 # Separate variable to store the radius
ball_moment = pymunk.moment_for_circle(ball_mass, 0, ball_radius)
ball_body = pymunk.Body(ball_mass, ball_moment)
ball_body.position = WIDTH // 2, 50
ball_body.velocity = (400, 200)
ball_shape = pymunk.Circle(ball_body, ball_radius)
ball_shape.elasticity = elasticity
ball_shape.friction = friction
ball_shape.collision_type = 2 # Assign a different collision type to the ball
space.add(ball_body, ball_shape)
#Collision handler
collided = False # Flag to track if collision happened in the current frame
def collision_handler(arbiter, space, data):
global ball_radius, ball_shape, collided
if collided:
return False
# Calculate new radius
new_radius = ball_radius * 1.05 # 5% increase
# Adjust the ball's position based on the collision direction
normal = arbiter.contact_point_set.normal
if normal.y > 0: # Collision with floor
ball_body.position += (0, new_radius - ball_radius)
elif normal.y < 0: # Collision with ceiling
ball_body.position -= (0, new_radius - ball_radius)
elif normal.x > 0: # Collision with right wall
ball_body.position += (new_radius - ball_radius, 0)
elif normal.x < 0: # Collision with left wall
ball_body.position -= (new_radius - ball_radius, 0)
# Now check against the limits after adjusting the position
if new_radius > ball_body.position[0] or new_radius > ball_body.position[1] or \
new_radius > WIDTH - ball_body.position[0] or new_radius > HEIGHT - ball_body.position[1]:
return True # Reject radius increase if it exceeds space limits
# Increase the radius
ball_radius = new_radius
# Remove the old ball shape
space.remove(ball_shape)
# Create the new ball shape
ball_shape = pymunk.Circle(ball_body, ball_radius)
ball_shape.elasticity = elasticity
ball_shape.friction = friction
ball_shape.collision_type = 2 # Assign the same collision type as before
space.add(ball_shape)
collided = True
return True
# Add collision handler for the ball hitting the walls
handler = space.add_collision_handler(1, 2) # Only handle collisions between types 1 and 2
handler.pre_solve = collision_handler # Use pre_solve instead of post_solve
# Main loop
while True:
surface.fill(pg.Color('white'))
for i in pg.event.get():
if i.type == pg.QUIT:
pg.quit()
sys.exit()
collided = False
space.step(1 / FPS)
space.debug_draw(draw_options)
pg.display.flip()
clock.tick(FPS)
</code></pre>
<p>I have some other parts of code where I check and display the radius and that's how I was able to tell that the radius is not increasing on every bounce. For simplicity I've only included the relevant parts of the code here. I believe it is something to do with my collision handling logic but I am not able to troubleshoot.</p>
|
<python><pygame><simulation><physics><pymunk>
|
2024-06-11 17:56:17
| 1
| 931
|
Broly
|
78,608,972
| 15,587,184
|
How can I optimize performance for my Pandas Agent using OpenAI Azure and LangChain?
|
<p>I have implemented a Pandas Agent using OpenAI Azure and LangChain to handle queries on a dataset. However, I'm encountering performance issues where a simple query takes between 8-12 seconds to execute. Here's a simplified version of my current setup:</p>
<pre><code>import pandas as pd
from time import time
from openai_tools import create_pandas_dataframe_agent
# Data import
csv_dir_path = 'https://raw.githubusercontent.com/datasciencedojo/datasets/master/titanic.csv'
df = pd.read_csv(csv_dir_path)
# Creating Pandas Agent
agent_executor = create_pandas_dataframe_agent(
model,
df,
verbose=True,
agent_type="openai-tools",
max_iterations=5
)
def agent_query(question):
time_start = time()
res = agent_executor.invoke({"input": question})
time_finish = time()
print("Query execution time:", time_finish - time_start, "seconds")
return res['output']
# Example query
question = 'whats the percentage of male survivors'
agent_query(question)
</code></pre>
<p><strong>Issues:</strong>
Performance Delay: Queries take 8-12 seconds to complete, even though sometimes the response is ready earlier.</p>
<p><strong>Attempted Solutions:</strong> I've tried adjusting max_iterations without noticeable impact on response time. Profiling suggests the delay might be due to data loading or model invocation.</p>
<p><strong>Desired Outcome:</strong>
I'm seeking advice on optimizing the setup to improve query response time.
Any suggestions on improving efficiency or identifying bottlenecks would be greatly appreciated.
Thank you!</p>
<p>Here is an output that I get regularly:</p>
<pre><code>> Entering new AgentExecutor chain...
Invoking: `python_repl_ast` with `{'query': "male_survivors = df[(df['Sex'] == 'male') & (df['Survived'] == 1)].shape[0]\ntotal_males = df[df['Sex'] == 'male'].shape[0]\npercentage_male_survivors = (male_survivors / total_males) * 100\npercentage_male_survivors"}`
18.890814558058924The percentage of male survivors is approximately 18.89%.
> Finished chain.
9.06292200088501
</code></pre>
|
<python><langchain><agent><azure-openai>
|
2024-06-11 17:22:12
| 0
| 809
|
R_Student
|
78,608,874
| 1,818,935
|
How to write a FunctionTransformer that outputs a DataFrame with a different number of columns than of the input DataFrame?
|
<p>Is it possible to use FunctionTransformer to emulate OneHotEncoder? More generally, is it possible to write a FunctionTransformer such that the number of columns in the output DataFrame is different (either lower or higher) than the number of columns in the input DataFrame?</p>
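<p>To make the question concrete, here is a minimal sketch of the kind of thing I mean (hypothetical; it wraps <code>pd.get_dummies</code> in a <code>FunctionTransformer</code>, so one input column becomes several output columns):</p>

```python
import pandas as pd
from sklearn.preprocessing import FunctionTransformer

# Sketch: emulate one-hot encoding with a FunctionTransformer.
# pd.get_dummies expands a categorical column into one column per category,
# so the output frame has a different number of columns than the input.
one_hot = FunctionTransformer(pd.get_dummies)

df = pd.DataFrame({"color": ["red", "blue", "red"]})
out = one_hot.fit_transform(df)
print(list(out.columns))  # ['color_blue', 'color_red']
print(out.shape)          # (3, 2): one input column became two output columns
```

<p>Is this kind of shape-changing transformer legitimate, or does it break assumptions elsewhere in scikit-learn (e.g. in pipelines)?</p>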
|
<python><scikit-learn>
|
2024-06-11 16:56:52
| 0
| 6,053
|
Evan Aad
|
78,608,557
| 11,028,689
|
Color-Coded Time Series Plot Based on Value Sign
|
<p>I have a dataframe containing positive,negative and zero values.</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df1 = pd.DataFrame({'A': [-1, 3, 9, 5, 0, -1, -1],
'B': [4, -5, 5, 7, 9, 8, 6],
'C': [7, 5, -3,-1, 5, 9, 3],
'D': [-3, 6, 7, 4, 0, -2, -1],
'date':['01-02-2020', '01-06-2020', '01-03-2021', '01-05-2021', '01-10-2021', '01-03-2022', '01-08-2022']})
# make sure the time column is actually time format
df1['date']=pd.to_datetime(df1['date'])
# set time as the index
df1.set_index('date',inplace=True)
df1
</code></pre>
<p>I want to show the month and year on the X-axis, there should be 4 scatterplots and I want the markers (i.e. scatter circles) to be 'red' for the negative values in the corresponding month/year,'green' for positive and yellow for zero, which I have tried to do with a list comprehension statement below.</p>
<pre><code># to condition by colour
kcolors = ['green' if s>0 else 'red' if s<0 else 'yellow' for s in df1] # this gives a TypeError: '>' not supported between instances of 'str' and 'int'
# I have also tried
# kcolors = ['#00ff41' if s>0 else '#65000B' if s<0 else '#F5D300' for s in df1]
# plot
fig, ax = plt.subplots()
df1.plot(kind='scatter', marker='o', color=kcolors, ax=ax)
ax.figure.autofmt_xdate(rotation=45, ha='center')
plt.legend(loc='best')
plt.show()
</code></pre>
<p>How can I achieve this?</p>
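<p>For what it's worth, I think the <code>TypeError</code> comes from how iteration works: iterating over a <code>DataFrame</code> yields its column labels (strings), not the cell values, which is why <code>s > 0</code> compares a <code>str</code> with an <code>int</code>. A quick check:</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [-1, 3], 'B': [4, -5]})
# Iterating a DataFrame yields the column names, not the values.
print([s for s in df])  # ['A', 'B']
```
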
|
<python><pandas><dataframe><matplotlib>
|
2024-06-11 15:39:36
| 3
| 1,299
|
Bluetail
|
78,608,434
| 7,408,848
|
cartesian coordinates to label
|
<p>Is it possible to use coordinates for labelling? I was asked whether it was possible and, to see for myself, I am trying to program it and read up more on it. I do not know what this technique is called or where to look, but the general idea is as follows:</p>
<p>labels are converted into an N-dimensional space and trajectories are calculated along the N-dimensional space. Based on the direction, a label is assigned with a confidence interval.</p>
<p>The data</p>
<pre><code>basic_data = [
{"label":"First-Person RPG", "Tags":["Open-world", "Fantasy", "Adventure", "Single-player", "Exploration", "Dragons", "Crafting", "Magic", "Story-rich", "Moddable"]},
{"label":"Action RPG", "Tags":["Open-world", "Fantasy", "Story-rich", "Adventure", "Single-player", "Monsters", "Crafting", "Horse-riding", "Magic", "Narrative"]},
{"label":"Adventure", "Tags":["Difficult", "Dark Fantasy", "Action", "Single-player", "Exploration", "Lore-rich", "Combat", "Permadeath", "Monsters", "Atmospheric"]},
{"label":"Party Game", "Tags":["Multiplayer", "Social Deduction", "Indie", "Strategy", "Casual", "Space", "Deception", "Survival", "Teams", "Interactive"]}
]
</code></pre>
<p>code for the first part below</p>
<pre><code>from typing import List

import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
for idx, data in enumerate(basic_data):
basic_data[idx]["tag_str"] = ",".join(data["Tags"])
pd_basic_data: pd.DataFrame = pd.DataFrame(basic_data)
tags: List = [str(pd_basic_data.loc[i,'tag_str']).split(',') for i in range(len(pd_basic_data))]
mlb_result = mlb.fit_transform(tags)
df_final: pd.DataFrame = pd.concat([pd_basic_data['label'],pd.DataFrame(mlb_result,columns=list(mlb.classes_))],axis=1)
</code></pre>
<p>A simple one-word answer naming the relevant theory would work as an answer, too. I just need to know where to look.</p>
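<p>To illustrate what I mean by directions in an N-dimensional space: once each label is a binary tag vector (as produced by <code>MultiLabelBinarizer</code> above), their "directions" can be compared, e.g. with cosine similarity. A pure-Python toy sketch (the tag universe and vectors below are made up for illustration):</p>

```python
import math

# Hypothetical tag universe; each label becomes a 0/1 vector over it.
tags = ["Adventure", "Fantasy", "Multiplayer", "Open-world", "Single-player"]

vec_rpg   = [1, 1, 0, 1, 1]  # e.g. "First-Person RPG"
vec_party = [0, 0, 1, 0, 0]  # e.g. "Party Game"

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine(vec_rpg, vec_rpg))    # 1.0 -- identical direction
print(cosine(vec_rpg, vec_party))  # 0.0 -- orthogonal, no shared tags
```

<p>This is only a sketch of measuring similarity between label vectors, not necessarily the trajectory idea I was told about.</p>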
|
<python><pandas><labeling>
|
2024-06-11 15:14:59
| 1
| 1,111
|
Hojo.Timberwolf
|
78,608,378
| 2,153,235
|
Difference between "levenshtein" and "python levenshtein" packages?
|
<p>I installed the <em>levenshtein</em> module from conda-forge. I don't recall the exact command used, but it was likely something similar to <code>conda install -c conda-forge PackageName</code>. I queried the package versions. I see <em>two</em> packages with exactly the same version number:</p>
<pre><code>(py39) C:\>conda list levenshtein$
# packages in environment at C:\Users\User.Name\AppData\Local\anaconda3\envs\py39:
#
# Name Version Build Channel
levenshtein 0.25.1 py39h99910a6_0 conda-forge
python-levenshtein 0.25.1 pyhd8ed1ab_0 conda-forge
</code></pre>
<p>The version matches what I could find online <a href="https://pypi.org/project/Levenshtein" rel="nofollow noreferrer">here</a> and <a href="https://prefix.dev/channels/conda-forge/packages/levenshtein" rel="nofollow noreferrer">here</a>. I can also find both variations on GitHub, <a href="https://github.com/conda-forge/levenshtein-feedstock" rel="nofollow noreferrer">here</a>, <a href="https://github.com/conda-forge/python-levenshtein-feedstock" rel="nofollow noreferrer">here</a>, and <a href="https://github.com/ztane/python-Levenshtein" rel="nofollow noreferrer">here</a>.</p>
<p>To try and get an idea of whether they are different, I used Cygwin's Bash to navigate to Conda environment folder <code>/c/Users/User.Name/AppData/Local/anaconda3/envs/py39</code> and searched for files related to the packages:</p>
<pre><code>$ find * -name '*levenshtein*' -print | xargs ls -l
7270 Jun 7 17:13 conda-meta/levenshtein-0.25.1-py39h99910a6_0.json
4564 Jun 7 17:13 conda-meta/python-levenshtein-0.25.1-pyhd8ed1ab_0.json
3951 Jan 27 2023 Lib/site-packages/gensim/similarities/__pycache__/levenshtein.cpython-39.pyc
4505 Jan 27 2023 Lib/site-packages/gensim/similarities/levenshtein.py
219136 Apr 7 12:12 Lib/site-packages/Levenshtein/levenshtein_cpp.cp39-win_amd64.pyd
</code></pre>
<p>I'm not sure how to interpret these findings. Looking at the top 2 JSON files, I see possibly relevant selected lines</p>
<pre><code>levenshtein-0.25.1-py39h99910a6_0.json
--------------------------------------
"extracted_package_dir": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\levenshtein-0.25.1-py39h99910a6_0",
"fn": "levenshtein-0.25.1-py39h99910a6_0.conda",
"source": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\levenshtein-0.25.1-py39h99910a6_0",
"package_tarball_full_path": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\levenshtein-0.25.1-py39h99910a6_0.conda",
python-levenshtein-0.25.1-pyhd8ed1ab_0.json
-------------------------------------------
"extracted_package_dir": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\python-levenshtein-0.25.1-pyhd8ed1ab_0",
"fn": "python-levenshtein-0.25.1-pyhd8ed1ab_0.conda",
"source": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\python-levenshtein-0.25.1-pyhd8ed1ab_0",
"package_tarball_full_path": "C:\\Users\\User.Name\\AppData\\Local\\anaconda3\\pkgs\\python-levenshtein-0.25.1-pyhd8ed1ab_0.conda",
</code></pre>
<p>Are the two packages in fact the same? If so, why would they manifest as differently named packages? How can one check whether a common package is made available via two different names?</p>
|
<python><package><anaconda><conda><levenshtein-distance>
|
2024-06-11 15:05:12
| 1
| 1,265
|
user2153235
|
78,608,348
| 6,912,069
|
Airflow task_group with list from previous task
|
<p>Based on the following <a href="https://stackoverflow.com/a/75876007/6912069">answer</a> to a similar question, I'm trying to create an airflow <code>task_group</code> based on an output from a previous task.
More precisely, my first task uses the <code>SQLExecuteQueryOperator</code>, which returns a list. And I want to use this list to map it to a task group. However I always end up with a <code>TypeError: 'XComArg' object is not iterable</code>.</p>
<p>I don't know how to write a reproducible example with the sql operator, but I can at least write some pseudo code:</p>
<pre class="lang-py prettyprint-override"><code>from airflow.decorators import dag, task_group
from airflow.operators.dummy import DummyOperator
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator
@dag()
def my_dag():
get_list = SQLExecuteQueryOperator(
task_id='get_list',
conn_id=pg_conn_id,
database=db_name,
sql='select column from table',
)
@task_group()
def map_list_to_task(l: list):
return list(map(my_task, l))
def my_task(element):
# define and return a task
return DummyOperator()
map_list_to_task(get_list.output)
dag = my_dag()
</code></pre>
<p>By simply replacing <code>get_list.output</code> by a true <code>list</code> on the last line, it works. But I need to make it work with the result from the previous task <code>get_list</code>. Any help would be much appreciated!</p>
|
<python><airflow>
|
2024-06-11 14:59:11
| 1
| 686
|
N. Maks
|
78,608,190
| 3,098,783
|
FastAPI lifespan event download model across multple gunicorn workers
|
<p>I want to load a ML model to a local folder. However, I'm using gunicorn with multiple workers, so on startup each of them downloads the model:</p>
<pre class="lang-py prettyprint-override"><code>from contextlib import asynccontextmanager
from fastapi import FastAPI
@asynccontextmanager
async def lifespan(app: FastAPI):
model_path = os.path.join("./models", model)
if os.path.isdir(model_path):
print(f"Startup: model {model} exists, skip download")
else:
print(f"Startup: install model {model}")
download_model(model)
yield
print('release models')
</code></pre>
<p>This leads to every worker downloading the model. Once it's downloaded it's skipped as expected.</p>
<p>Can a worker somehow "wait" for a few seconds until a folder exists or something? I don't want to introduce redis or another db just for this use-case as it's otherwise a really simple (stateless) app.</p>
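<p>To make the "wait" idea concrete, here is a naive stdlib-only polling sketch of what I have in mind (hypothetical; it still leaves the real problem of electing exactly one worker to do the download):</p>

```python
import os
import time

def wait_for_model_dir(path: str, timeout: float = 60.0, poll: float = 0.5) -> bool:
    """Poll until `path` exists as a directory, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.isdir(path):
            return True
        time.sleep(poll)
    return False
```

<p>A worker could call this in <code>lifespan</code> before touching the model folder, but without some locking two workers can still both decide to start the download.</p>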
|
<python><fastapi><gunicorn>
|
2024-06-11 14:29:03
| 1
| 8,471
|
Jankapunkt
|
78,608,139
| 5,838,180
|
How to set independent parameters for subplots in Plotly?
|
<p>Iβm working with Plotly in Python to create a figure with two subplots, each containing a density contour plot. I want to set different bin sizes (<code>nbinsx</code> and <code>nbinsy</code>) for each subplot. However, it seems that setting these parameters for one subplot affects the other, and Iβm unable to have independent bin sizes for each.</p>
<p>Hereβs a minimal working example:</p>
<pre><code>import plotly.graph_objs as go
from plotly.subplots import make_subplots
import plotly.express as px
import pandas as pd
# Create some dummy data
df_plot = pd.DataFrame({
'g_w1': [i for i in range(30)],
'w1_w2': [i*0.5 for i in range(30)],
'bp_g': [i*2 for i in range(30)],
'g_rp': [i*0.3 for i in range(30)],
'type': ['typeA']*15 + ['typeB']*15
})
# Create a subplot with 1 row and 2 columns
fig = make_subplots(rows=1, cols=2)
# First density contour plot for 'crossmatches'
fig_crossmatches = px.density_contour(df_plot, x="g_w1", y="w1_w2", color='type', nbinsx=28, nbinsy=28)
# Add the 'crossmatches' plot to the first subplot
for trace in fig_crossmatches.data:
fig.add_trace(trace, row=1, col=1)
# Second density contour plot for 'no crossmatches'
fig_nonmatches = px.density_contour(df_plot, x="bp_g", y="g_rp", color='type')
# Add the 'no crossmatches' plot to the second subplot
for trace in fig_nonmatches.data:
fig.add_trace(trace, row=1, col=2)
# Attempt to update the bin sizes for the second subplot
fig.update_traces(selector=dict(row=1, col=2), nbinsx=128, nbinsy=128)
# Update the layout if needed
fig.update_layout(autosize=False, width=1500, height=600)
# Show the figure
fig.show()
</code></pre>
<p>When I run this code, the <code>nbinsx</code> and <code>nbinsy</code> for the second subplot do not seem to take effect. Iβve tried using <code>update_traces</code> but to no avail. How can I set independent bin sizes for each subplot in Plotly?</p>
|
<python><plotly><visualization><plotly-express>
|
2024-06-11 14:20:35
| 2
| 2,072
|
NeStack
|
78,608,134
| 4,994,781
|
Python namespace with unused attribute detection
|
<p>I want to refactor some constants so they share namespace in a Python project. Something like this:</p>
<pre><code>class MyNamespace():
""" Does NOT need to be a class, this is just an example. """
AVOID_MAGIC_NUMBER: int = 10
SOME_TEXT_OUTPUT_MESSAGE: str = 'something'
FREQUENTLY_USED_INIT_ARG: str = 'magic initialization argument'
THIS_IS_A_WELL_KNOWN_FILENAME: Path = Path('some_hardcoded_filename.txt')
    BASE_FILENAME: str = THIS_IS_A_WELL_KNOWN_FILENAME.stem
HERE_IS_AN_OBJECT = DefinedElsewhere(FREQUENTLY_USED_INIT_ARG)
</code></pre>
<p>The fact that they are not real constants is not an issue. My needs:</p>
<ol>
<li>A namespace, so the variable names do not clash with imported symbols, locals, etc.</li>
<li>Some of the attributes need to refer to previously defined attributes, sometimes.</li>
<li>If some attribute stops being used in the code, the IDE should detect it and mark the variable as unused. This is probably the most important point.</li>
<li>Optionally, it allows me to use type annotations, but I can live without this.</li>
</ol>
<p>So far I'm using <code>class</code> because it fulfills <code>1</code> and <code>2</code>, but I've also tried <code>@dataclass</code>, <code>NamedTuple</code>, etc. All of them work perfectly for <code>1</code> and <code>2</code>. My problem is with <code>3</code>.</p>
<p>No matter the linter, type checker, etc. I use, unused class attributes are not detected as such, for obvious reasons.</p>
<p>Is there any way I can have a namespace, with attributes which can reference other attributes within the namespace (so, no <code>SimpleNamespace</code>), and which IDEs will detect unused attributes?</p>
<p>My IDE of choice is Visual Studio Code, but I assume that any solution working for some IDE will work for mine, too, since most of them use <code>pylint</code>, <code>Pyright</code>, <code>Ruff</code>, etc.</p>
|
<python><ide><lint>
|
2024-06-11 14:19:37
| 0
| 580
|
RaΓΊl NΓΊΓ±ez de Arenas Coronado
|
78,607,976
| 9,506,773
|
How do I read the original pdf file in set indexer datasource in a custom WebApiSkill after enabling "Allow Skillset to read file data"
|
<p>I see the following under my indexer settings:</p>
<p><a href="https://i.sstatic.net/68WP1fBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/68WP1fBM.png" alt="enter image description here" /></a></p>
<p>When hovering over it I read the following:</p>
<blockquote>
<p>True means the original file data obtained from your blob data source
is preserved. This allows passing the original file to a custom skill,
or to the Document Extraction skill.</p>
</blockquote>
<p>How do I read the original pdf file in the associated blob data source in a custom WebApiSkill?</p>
<pre class="lang-py prettyprint-override"><code>file_data_base64 = value.get('data', {}).get('file_data', '')
...
</code></pre>
<h4>EDIT</h4>
<p>I enabled <code>Allow Skillset to read file data</code> in the indexer. My full setup:</p>
<ul>
<li>WebApiSkill inputs</li>
</ul>
<pre class="lang-py prettyprint-override"><code>inputs=[
InputFieldMappingEntry(name="file_data", source="/document/file_data")
],
</code></pre>
<ul>
<li>WebApiSkill input reading</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import azure.functions as func
import datetime
import json
import logging
import base64
import fitz
from io import BytesIO
app = func.FunctionApp()
logging.basicConfig(level=logging.INFO)
@app.route(route="CustomSplitSkill", auth_level=func.AuthLevel.FUNCTION)
def CustomSplitSkill(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Python HTTP trigger function processed a request.')
try:
req_body = req.get_json()
logging.info('Request body parsed successfully.')
    except ValueError as e:
logging.error(f"Invalid input: {e}")
return func.HttpResponse("Invalid input", status_code=400)
# 'values' expected top-level key in the request body
response_body = {"values": []}
for value in req_body.get('values', []):
recordId = value.get('recordId')
        file_data_base64 = value.get('data', {}).get('file_data', {}).get('data', '')
if not file_data_base64:
logging.error("No file_data found in the request.")
return func.HttpResponse("Invalid input: No file_data found", status_code=400)
try:
file_data = base64.b64decode(file_data_base64)
try:
pdf_document = fitz.open(stream=BytesIO(file_data), filetype='pdf')
except fitz.FileDataError as e:
logging.error(f"Failed to open PDF document: {e}")
return func.HttpResponse("Failed to open PDF document", status_code=400)
except Exception as e:
logging.error(f"An unexpected error occurred while opening the PDF document: {e}")
return func.HttpResponse("An unexpected error occurred", status_code=500)
if pdf_document.page_count == 0:
logging.error("No pages found in the PDF document.")
return func.HttpResponse("Invalid PDF: No pages found", status_code=400)
extracted_text = ""
for page_num in range(pdf_document.page_count):
page = pdf_document.load_page(page_num)
extracted_text += page.get_text()
combined_list = [{'textItems': ['text1', 'text2'], 'numberItems': [0, 1]}] # i deleted the chunking and associated page extraction for simplicity
response_record = {
"recordId": recordId,
"data": {
"subdata": combined_list
}
}
response_body['values'].append(response_record)
except Exception as e:
logging.error(f"Error processing file_data: {e}")
return func.HttpResponse("Error processing file_data", status_code=500)
logging.info('Function executed successfully.')
return func.HttpResponse(json.dumps(response_body), mimetype="application/json")
</code></pre>
<p>The error:</p>
<pre class="lang-py prettyprint-override"><code>Message:
Could not execute skill because the Web Api request failed.
Details:
Web Api response status: 'NotFound', Web Api response details: ''
</code></pre>
<p>Given that I have projections I cannot debug this properly as debugging is not supported with projections. The logging does not seem to log the specific error either despite the error handling and checks.</p>
|
<python><azure><azure-cognitive-services>
|
2024-06-11 13:49:16
| 2
| 3,629
|
Mike B
|
78,607,910
| 21,612,376
|
subprocess.run() with capture_output=True
|
<p>I am very new to Python. I am trying to run some shell commands via Python's <code>subprocess.run()</code>. I would also like to capture the output of the shell commands. When I run the command with <code>capture_output=False</code>, it seems to run fine. But when I switch it to <code>True</code>, I get an error.</p>
<pre class="lang-py prettyprint-override"><code>import subprocess
# runs just fine
subprocess.run("git --version", capture_output=False)
# throws error
subprocess.run("git --version", capture_output=True)
## OSError: [WinError 6] The handle is invalid
</code></pre>
<p>After some further investigation, I think this is happening because I'm using RStudio as my IDE. When I try to run this in a Python shell, it works just fine.</p>
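<p>For reference, <code>capture_output=True</code> is shorthand for passing <code>stdout=subprocess.PIPE</code> and <code>stderr=subprocess.PIPE</code>. A minimal demo that avoids the <code>git</code> dependency by using the Python interpreter itself as the child process:</p>

```python
import subprocess
import sys

# Run a trivial child process and capture its output.
# text=True decodes the captured bytes into str.
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    capture_output=True,
    text=True,
)
print(result.returncode)  # 0
print(result.stdout)      # 'hello\n'
```
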
|
<python><subprocess>
|
2024-06-11 13:39:44
| 2
| 833
|
joshbrows
|
78,607,866
| 4,769,503
|
Why is python not able to find required modules after "pip install" inside a Docker-image?
|
<p>I have an issue with Python that it can't find any dependencies when running in a container. In my case, I have a fastAPI-based application that runs perfectly on my local machine. When I start the docker image, it complains about every single module as long as I don't do a separate "pip install xyz" within the image.</p>
<p>I have the following Dockerfile:</p>
<pre><code># Use the official Python image from the Docker Hub
FROM python:3.12-slim as builder
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Install build dependencies
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc libpq-dev
# Create a working directory
WORKDIR /app
# Install pipenv and python dependencies in a virtual environment
COPY ./requirements.txt /app/
RUN python3 -m venv venv && bash -c "source venv/bin/activate"
RUN venv/bin/pip3 install -r requirements.txt
# Use the official Python image again for the final stage
FROM python:3.12-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Create a working directory
WORKDIR /app
# Copy installed dependencies from the builder stage
COPY --from=builder /usr/local/lib/python3.12/site-packages /usr/local/lib/python3.12/site-packages
COPY --from=builder /usr/local/bin/pip /usr/local/bin/pip
# Copy the application code
COPY . /app
# Install runtime dependencies (if any)
RUN pip install uvicorn gunicorn
# Expose the port the app runs on
EXPOSE 8000
# Command to run the application
CMD ["gunicorn", "-k", "uvicorn.workers.UvicornWorker", "main:app", "--bind", "0.0.0.0:8000", "--workers", "4"]
</code></pre>
<p>My file requirements.txt contains all necessary modules, like</p>
<pre><code>...
fastapi==0.111.0
fastapi-cli==0.0.4
filelock==3.14.0
...
pydub==0.25.1
Pygments==2.18.0
python-dotenv==1.0.1
python-multipart==0.0.9
PyYAML==6.0.1
requests==2.32.3
rich==13.7.1
...
</code></pre>
<p>I built the container with:</p>
<pre><code>docker build -t my-fastapi-app .
</code></pre>
<p>I run the container with:</p>
<pre><code>docker run -p 8000:8000 my-fastapi-app
</code></pre>
<p>It prints a long call stack ending with:</p>
<pre><code>ModuleNotFoundError: No module named 'requests'
</code></pre>
<p>Then I am going to add the modules with a separate "pip install" inside the Dockerfile:</p>
<pre><code>RUN pip install pydub requests
</code></pre>
<p>Now it complains about missing fastAPI:</p>
<pre><code>ModuleNotFoundError: No module named 'fastapi'
</code></pre>
<p>So I would have to add another <code>pip install</code> for fastAPI, and so on and so forth.</p>
<p>Then I have tried using <strong>pipenv</strong>:</p>
<pre><code>COPY ./Pipfile /app
COPY ./Pipfile.lock /app
RUN pip install --upgrade pip \
&& pip install pipenv \
&& pipenv install --deploy --ignore-pipfile
</code></pre>
<p>The necessary Pipfile contains all required modules, but then they are still missing after installation.</p>
<p>I thought that <code>pip install -r requirements.txt</code> would do all this automatically. But that's not the case here. Where is my mistake?</p>
<p>Do I have to blow up my Dockerfile with separate "pip install"-commands for all modules that are listed within my "requirements.txt"?</p>
<p>Or is the virtual env just messed up somehow that Python can't find any modules although they are installed?</p>
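<p>For debugging, one way to see why the final image cannot find the modules is to check, inside the running container, which interpreter is active and whether each package is importable - a minimal stdlib-only sketch (the module names below are just examples):</p>

```python
import importlib.util
import sys
import sysconfig

def diagnose(modules):
    """Report the active interpreter, its site-packages path,
    and whether each named module can be found on sys.path."""
    return {
        "executable": sys.executable,
        "site_packages": sysconfig.get_paths()["purelib"],
        "found": {m: importlib.util.find_spec(m) is not None for m in modules},
    }

if __name__ == "__main__":
    print(diagnose(["fastapi", "requests"]))
```

<p>If <code>site_packages</code> points somewhere other than where the dependencies were actually installed (for example a venv path from the builder stage that was not copied into the final image), that would explain the <code>ModuleNotFoundError</code>.</p>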
|
<python><docker><pip><fastapi><pipenv>
|
2024-06-11 13:33:15
| 1
| 19,350
|
delete
|
78,607,689
| 4,480,180
|
Unable to import module 'lambda_function': No module named 'tapo.tapo'
|
<p>I'm trying to import <a href="https://pypi.org/project/tapo/" rel="nofollow noreferrer">tapo</a> library in a Python Lambda with the following code (which works locally with an equivalent <code>main</code>):</p>
<pre><code>import asyncio
import os
from tapo import ApiClient
def lambda_handler(event, context):
tapo_username = "username"
tapo_password = "password"
ip_address = "ip"
client = ApiClient(tapo_username, tapo_password)
</code></pre>
<p>I've zipped this and manually uploaded on the lambda code. However, when I try to test it on my Lambda console I get:</p>
<pre><code> "errorMessage": "Unable to import module 'lambda_function': No module named 'tapo.tapo'",
</code></pre>
<p>Things I tried:</p>
<ul>
<li>Just to clarify, the <code>tapo</code> dependency folder (installed through <code>pip</code> in the project) is at the same level as <code>lambda_function.py</code></li>
<li>Using the <code>requests</code> dependency with the same process, it works fine</li>
<li>Using a virtual env as described <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-package.html" rel="nofollow noreferrer">here</a></li>
<li>Using a lambda layer with only that tapo package</li>
</ul>
<p>What am I doing wrong?</p>
|
<python><aws-lambda><pip>
|
2024-06-11 13:00:24
| 0
| 6,884
|
justHelloWorld
|
78,607,672
| 7,307,125
|
PyVisa lists my ethernet connected device, but error when trying to open resource [python3.11]
|
<p>I am trying to communicate with TTi power supply model QPX1200SP through ethernet connection.</p>
<p>Here is my script:</p>
<pre><code>import pyvisa as visa
rm = visa.ResourceManager()
print(rm.list_resources(query='TCP?*')) #specific query for anything over Ethernet
#above line gives me an answer ('TCPIP0::10.10.0.27::9221::SOCKET',), so line below:
ttipsu = 'TCPIP0::10.10.0.27::9221::SOCKET'
psu = rm.open_resource(ttipsu)
</code></pre>
<p>I receive a traceback:</p>
<pre><code>File C:\Python\Lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
  exec(code, globals, locals)
File c:\users\me\documents\python scripts\visa\pyvisa_v1.py:18
  psu = rm.open_resource(ttipsu)
File C:\Python\Lib\site-packages\pyvisa\highlevel.py:3291 in open_resource
  res.open(access_mode, open_timeout)
File C:\Python\Lib\site-packages\pyvisa\resources\resource.py:281 in open
  self.session, status = self._resource_manager.open_bare_resource(
File C:\Python\Lib\site-packages\pyvisa\highlevel.py:3216 in open_bare_resource
  return self.visalib.open(self.session, resource_name, access_mode, open_timeout)
File C:\Python\Lib\site-packages\pyvisa\ctwrapper\functions.py:1850 in open
  ret = library.viOpen(
File C:\Python\Lib\site-packages\pyvisa\ctwrapper\highlevel.py:226 in _return_handler
  return self.handle_return_value(session, ret_value)  # type: ignore
File C:\Python\Lib\site-packages\pyvisa\highlevel.py:251 in handle_return_value
  raise errors.VisaIOError(rv)

VisaIOError: VI_ERROR_RSRC_NFOUND (-1073807343): Insufficient location information or the requested device or resource is not present in the system.
</code></pre>
<p>I also tried it with the line:</p>
<pre><code>rm = visa.ResourceManager('C:\\Windows\\System32\\visa64.dll')
</code></pre>
<p>I can ping the 10.10.0.27 with success, 2ms, 0% loss.
<a href="https://i.sstatic.net/fzhtxps6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzhtxps6.png" alt="Ping" /></a></p>
<p>What seems to be a problem here?</p>
|
<python><windows><visa><pyvisa>
|
2024-06-11 12:57:26
| 1
| 351
|
smajli
|
78,607,642
| 2,736,559
|
How to remove motion blur from a given image in frequency domain (deconvolution)?
|
<p>I have read that if we have the <code>PSF</code> of a given image, using deconvolution we can reverse the distortion and get the original image. I tried to do the same thing, however, I'm getting an image that looks like complete noise:</p>
<pre class="lang-py prettyprint-override"><code>def add_motion_blur(img, size=31, angle=11):
kernel = np.zeros((size,size), dtype=np.float32)
# set the middle row of the kernel array to be all ones.
kernel[size//2,:] = np.ones((size,),dtype=np.float32)
# now rotate the kernel
m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0)
kernel = cv2.warpAffine(kernel, m, dsize=(size,size))
kernel /= np.sum(kernel)
return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel
img = cv2.imread('./img/peppers.jpg')
# adding motion blur is akin to creating the PSF because
# it's a distorting operator, and we are distorting the image using it!
img_blurred, kernel = add_motion_blur(img, size=31, angle=60)
# now we should be able to deconvolve the image with the PSF and get the deblurred version
# for this we can use the fourier transform and divide our blurred image from the psf
fft = np.fft.fftn(img_blurred[...,0])
# pad the kernel so its the same shape as our image so the division goes on without an issue
fft_psf = np.fft.fftn(kernel, s=fft.shape)
fft_result = fft/fft_psf
ifft_result = np.fft.ifft2(fft_result)
deblurred_image = np.abs(ifft_result).astype(np.uint8)
cv2.imshow('deblurred image',deblurred_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>This results in pure noise it seems:</p>
<p><a href="https://i.sstatic.net/H36EwhiO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H36EwhiO.png" alt="enter image description here" /></a></p>
<p>I also tried Wiener deconvolution to no avail:</p>
<pre class="lang-py prettyprint-override"><code># Wiener deconvolution
fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 10)
fft_result = fft / fft_psf_mag
</code></pre>
<p>It results in the same result it seems:</p>
<p><a href="https://i.sstatic.net/LhSinGod.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhSinGod.png" alt="enter image description here" /></a></p>
<p>What am I missing here? I also tried <code>fft/(fft_psf+1e-3)</code> as a way of regularizing the kernel's small values, but this does not result in any improvement either.</p>
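<p>For reference, the standard Wiener-style regularized inverse is applied as a multiplication of the image spectrum, not a division by the filter (my own restatement of the textbook formula):</p>

```latex
\hat{F}(u,v) = G(u,v)\,\frac{H^{*}(u,v)}{\lvert H(u,v)\rvert^{2} + \varepsilon}
```

<p>where <code>G</code> is the FFT of the blurred image, <code>H</code> the FFT of the padded PSF, and epsilon the regularizer. Note this multiplies <code>G</code> by the regularized conjugate of <code>H</code> rather than dividing by it.</p>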
<h2>Side note</h2>
<p>These are the previous outputs belonging to the original image, the blurred version and the kernel used:</p>
<p><a href="https://i.sstatic.net/pBCrLUaf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBCrLUaf.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/mL9Ps54D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mL9Ps54D.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/gw0YnzCI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gw0YnzCI.png" alt="enter image description here" /></a></p>
<h2>Update:</h2>
<p>Here is the updated version which:</p>
<ol>
<li>pads the kernel properly on all sides</li>
<li>the imaginary values are close to 0</li>
</ol>
<p>However, the problem still persists and I cannot get anything meaningful! The outputs are given at the end:</p>
<pre class="lang-py prettyprint-override"><code>def add_motion_blur(img, size=31, angle=11):
kernel = np.zeros((size,size), dtype=np.float32)
# set the middle row of the kernel array to be all ones.
kernel[size//2,:] = np.ones((size,),dtype=np.float32)
# now rotate the kernel
m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0)
kernel = cv2.warpAffine(kernel, m, dsize=(size,size))
kernel /= np.sum(kernel)
return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel
img = cv2.imread('./img/peppers.jpg')
# adding motion blur is akin to creating the PSF because
# it's a distorting operator, and we are distorting the image using it!
img_blurred, kernel = add_motion_blur(img, size=31, angle=60)
# now we should be able to deconvolve the image with the PSF and get the deblurred version
# for this we can use the fourier transform and divide our blurred image from the psf
fft = np.fft.fftn(img_blurred[...,0])
# pad the kernel equally on all sides, so that it sits at the center!
pad_height = img.shape[0] - kernel.shape[0]
pad_width = img.shape[1] - kernel.shape[1]
pad_top = pad_height // 2
pad_bottom = pad_height - pad_top
pad_left = pad_width // 2
pad_right = pad_width - pad_left
kernel_padded = np.pad(kernel, [(pad_top, pad_bottom), (pad_left, pad_right)], 'constant')
fft_psf = np.fft.fft2(kernel_padded)
# the normal way doesnt work!
# fft_result = fft/fft_psf
# wiener deconvolution (0.8 is the best I could find to get something that could be visualized somewhat properly)
fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 0.8)
fft_result = fft * fft_psf_mag
# get back the image
ifft_result = np.fft.ifft2(fft_result)
# Check if the imaginary part is close to zero
assert np.all(np.isclose(np.imag(ifft_result), 0, atol=1e-8)), 'imaginary values must be close to 0 otherwise something is wrong!'
# grab the final image
img_abs = np.abs(ifft_result).astype(np.uint8)
img_real = ifft_result.real.astype(np.uint8)
cv2.imshow('padded kernel',cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX))
cv2.imshow('deblurred(img.real)', img_real)
cv2.imshow('deblurred(np.abs(img))', img_abs)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Here are the outputs (note: using .real values for image visualization is inferior to the magnitude-based version):</p>
<p><a href="https://i.sstatic.net/BHFdxWWz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHFdxWWz.png" alt="enter image description here" /></a><br />
<a href="https://i.sstatic.net/eAkBkHBv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAkBkHBv.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/QsbBrsFn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QsbBrsFn.png" alt="enter image description here" /></a></p>
<h2>Update 2:</h2>
<p>Here is the updated version which:</p>
<ol>
<li>pads the kernel properly according to <a href="https://stackoverflow.com/a/54977551/2736559">this answer</a></li>
<li>images are clipped properly using the real values and not their magnitudes based on the explanations <a href="https://stackoverflow.com/a/54977551/2736559">here</a></li>
<li>used a much lower epsilon and got a much better output</li>
<li>instead of selecting a single channel from the BGR image, I used the grayscale version, which resulted in a much clearer output (different channels have different brightness, so they lead to different outcomes; using the grayscale version removes this discrepancy).</li>
</ol>
<p>However, playing with epsilon, while it results in a better output compared to the previous attempts, introduces some artifacts/patterns into the final deblurred result which are not intended or wanted.</p>
<p>Is this the best I can hope for, or is there still space for improvements?</p>
<p>The outputs are given at the end:</p>
<pre class="lang-py prettyprint-override"><code>
def add_motion_blur(img, size=31, angle=11):
kernel = np.zeros((size,size), dtype=np.float32)
# Set the middle row of the kernel array to be all ones.
kernel[size//2,:] = np.ones((size,),dtype=np.float32)
# Now rotate the kernel
m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0)
kernel = cv2.warpAffine(kernel, m, dsize=(size,size))
kernel /= np.sum(kernel)
return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel
img = cv2.imread('./img/peppers.jpg')
img_blurred, kernel = add_motion_blur(img, size=31, angle=60)
# Use the grayscale image instead
img_blurred_gray = cv2.cvtColor(img_blurred,cv2.COLOR_BGR2GRAY)
fft = np.fft.fftn(img_blurred_gray)
# Pad the kernel properly, and then shift it
pad_height = img.shape[0] - kernel.shape[0]
pad_width = img.shape[1] - kernel.shape[1]
pad_top = pad_height // 2
pad_bottom = pad_height - pad_top
pad_left = pad_width // 2
pad_right = pad_width - pad_left
kernel_padded = np.pad(kernel, [(pad_top, pad_bottom), (pad_left, pad_right)], 'constant')
# shift the kernel
kernel_padded = np.fft.ifftshift(kernel_padded)
fft_psf = np.fft.fft2(kernel_padded)
# The normal way doesnt work very well, but after correcting the kernel, it works much better than the previous attempt! 2 as the regularizer/epsilon seems to give somewhat good result compared to other values.
# fft_result = fft/(fft_psf+2)
# Wiener deconvolution works much better, 0.01 seems like a good eps, but it introduces weird artifacts in the image.
fft_psf_mag = np.conj(fft_psf) / (np.abs(fft_psf)**2 + 0.01)
fft_result = fft * fft_psf_mag
# Get back the image-no need to shift-back the result!
ifft_result = np.fft.ifft2(fft_result)
# Check if the imaginary part is close to zero
assert np.all(np.isclose(np.imag(ifft_result), 0, atol=1e-8)), 'imaginary values must be close to 0 otherwise something is wrong!'
# Grab the final image
# Clip the values for more accurate visualization
img_real = ifft_result.real.clip(0,255).astype(np.uint8)
cv2.imshow('padded kernel',cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX))
cv2.imshow('deblurred(img.real)', img_real)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p><a href="https://i.sstatic.net/YO6IMvx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YO6IMvx7.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/c36IKYgY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/c36IKYgY.png" alt="enter image description here" /></a></p>
<h2>Update 3:</h2>
<p>Here is the updated version which:</p>
<ol>
<li>fixes the artifacts caused by deblurring by expanding the input image followed by cropping the final result to the original image dimension according to the <a href="https://stackoverflow.com/questions/78607642/how-to-remove-motion-blur-from-a-given-image-in-frequency-domain-deconvolution?noredirect=1#comment138597453_78607642">Cris Luengo's explanations here</a></li>
<li>individual channels are operated on and later aggregated to allow for color image deblurring (<a href="https://stackoverflow.com/questions/78607642/how-to-remove-motion-blur-from-a-given-image-in-frequency-domain-deconvolution?noredirect=1#comment138597453_78607642">Cris Luengo's explanations here</a>).</li>
</ol>
<p>After these changes, the result look very good. The outputs are given at the end:</p>
<pre class="lang-py prettyprint-override"><code>
def add_motion_blur(img, size=31, angle=11):
kernel = np.zeros((size,size), dtype=np.float32)
# Set the middle row of the kernel array to be all ones.
kernel[size//2,:] = np.ones((size,),dtype=np.float32)
# Now rotate the kernel
m = cv2.getRotationMatrix2D((size//2, size//2), angle=angle, scale=1.0)
kernel = cv2.warpAffine(kernel, m, dsize=(size,size))
kernel /= np.sum(kernel)
return cv2.filter2D(img, ddepth=-1, kernel=kernel), kernel
img = cv2.imread('./img/peppers.jpg')
# so lets add 50px padding to each side of our image
pad=50
img = cv2.copyMakeBorder(img, pad,pad,pad,pad,borderType=cv2.BORDER_REFLECT)
img_blurred, kernel = add_motion_blur(img,size=31,angle=60)
pad_height = img.shape[0] - kernel.shape[0]
pad_width = img.shape[1] - kernel.shape[1]
pad_top = (pad_height+1)//2
pad_bottom = pad_height//2
pad_left = (pad_width+1)//2
pad_right = pad_width//2
kernel_padded = np.pad(kernel, [(pad_top, pad_bottom),(pad_left, pad_right)],mode='constant')
kernel_padded = np.fft.fftshift(kernel_padded)
fft_psf = np.fft.fft2(kernel_padded)
# now lets take the fft of our image
ffts = np.fft.fftn(img_blurred)
# weiner deconvolution
eps = 0.01
fft_psf_mag = np.conj(fft_psf)/(np.abs(fft_psf)**2 + eps)
# individually multiply each channel and then aggregate them
fft_result = np.array([ffts[...,i] * fft_psf_mag for i in range(3)]).transpose(1,2,0)
# now lets get back the image.
iffts = np.fft.ifftn(fft_result)
# before we continue, lets make sure the imaginary components are close to zero
assert np.all(np.isclose(iffts.imag, 0, atol=1e-8)), 'imaginary values must be close to zero! or something is wrong'
# take the image
img_deblurred = iffts.real.clip(0,255).astype(np.uint8)
# crop the final image
img_deblurred_cropped = img_deblurred[pad:img.shape[0]-pad, pad:img.shape[1]-pad]
cv2.imshow('img',img)
cv2.imshow('kenel_padded', cv2.normalize(kernel_padded,None,0,255,cv2.NORM_MINMAX))
cv2.imshow('img-blurred', img_blurred)
cv2.imshow('img-deblurred-uncropped', img_deblurred)
cv2.imshow('img-deblurred-cropped', img_deblurred_cropped)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>result:</p>
<p><a href="https://i.sstatic.net/LhMDgiyd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhMDgiyd.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/BD5lTNzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BD5lTNzu.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/Um2vVVQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Um2vVVQE.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/932Mf6KN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/932Mf6KN.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/4axhxOtL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4axhxOtL.png" alt="enter image description here" /></a></p>
|
<python><image-processing><signal-processing><fft><deconvolution>
|
2024-06-11 12:52:43
| 1
| 26,332
|
Hossein
|
78,607,634
| 1,182,299
|
Python parsing a JSON file outputs only one result
|
<p>I'm pretty new to JSON. I am trying to parse two elements (multiple occurrences), and I'm pretty sure I don't iterate correctly.</p>
<p>My json file:</p>
<pre><code>{
"data": {
"chapter": {
"bibleId": "9f3cb709f9bded60-01",
"bookId": "PSA",
"id": "PSA.117",
"content": [
{
"type": "paragraph",
"style": "nb",
"content": [
{
"type": "verse-number",
"style": "v",
"verseId": "PSA.117.1",
"verseOrgId": [
"PSA.117.1"
],
"content": "1"
},
{
"type": "verse-text",
"verseId": "PSA.117.1",
"verseOrgId": [
"PSA.117.1"
],
"verseText": "Sed ut perspiciatis unde omnis iste natus."
},
{
"type": "verse-number",
"style": "v",
"verseId": "PSA.117.2",
"verseOrgId": [
"PSA.117.2"
],
"content": "2"
},
{
"type": "verse-text",
"verseId": "PSA.117.2",
"verseOrgId": [
"PSA.117.2"
],
"verseText": "Lorem ipsum dolor sit amet."
}
]
}
],
"number": "117",
"next": {
"id": "PSA.118",
"bookId": "PSA",
"number": "118"
},
"previous": {
"id": "PSA.116",
"bookId": "PSA",
"number": "116"
},
"copyright": "1969/77 Deutsche Bibelgesellschaft, Stuttgart",
"verseCount": 2,
"title": "Psalmi 117"
},
"studyContent": null,
"chapterImage": null
}
}
</code></pre>
<p>Desired output:</p>
<pre><code>PSA.117.1 Sed ut perspiciatis unde omnis iste natus.
PSA.117.2 Lorem ipsum dolor sit amet.
</code></pre>
<p>My output:</p>
<pre><code>PSA.117.1 Sed ut perspiciatis unde omnis iste natus.
</code></pre>
<p>My code:</p>
<pre><code>import json
with open('data.json') as f:
data = json.load(f)
for i, verse in enumerate(data['data']['chapter']['content']):
if i == 0:
print("%s\t%s" % (verse['content'][0]['verseId'], verse['content'][1]['verseText']))
elif i == 1:
print("%s\t%s" % (verse['content'][0]['verseId'], verse['content'][1]['verseText']))
</code></pre>
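<p>For comparison, a sketch that walks the inner <code>content</code> list of each paragraph and pairs every <code>verse-text</code> entry with its <code>verseId</code> (assuming the structure shown above):</p>

```python
def extract_verses(chapter):
    """Return (verseId, verseText) for every verse-text item in every paragraph."""
    rows = []
    for paragraph in chapter["content"]:
        for item in paragraph.get("content", []):
            if item.get("type") == "verse-text":
                rows.append((item["verseId"], item["verseText"]))
    return rows
```

<p>Called as <code>extract_verses(data['data']['chapter'])</code> after <code>json.load</code>, this yields both verses instead of only the first, because the iteration happens over the inner list (4 items) rather than the outer <code>content</code> list, which holds only 1 paragraph.</p>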
|
<python><json><parsing>
|
2024-06-11 12:51:53
| 3
| 1,791
|
bsteo
|
78,607,624
| 9,261,745
|
databricks notebook pip install python wheel error
|
<p>I am trying to install a Python wheel from DBFS in Databricks.
I first run the following to confirm that the wheel is there:</p>
<pre><code>display(dbutils.fs.ls("dbfs:/FileStore/python-wheels/"))
</code></pre>
<p><a href="https://i.sstatic.net/8MN83CtT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MN83CtT.png" alt="enter image description here" /></a></p>
<p>Then I did</p>
<pre><code>%pip install /dbfs/FileStore/python-wheels/dp_utils-0.1.43-py3-none-any.whl
</code></pre>
<p>But I got this error: <a href="https://i.sstatic.net/7obLofoe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7obLofoe.png" alt="enter image description here" /></a></p>
<p>I really don't know why. Anyone has any idea?</p>
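<p>Since the error is only shown as a screenshot, one generic thing to verify is that the wheel's filename tags actually match the cluster's interpreter - a hedged sketch that just splits the standard naming pattern:</p>

```python
def wheel_tags(filename):
    """Split a wheel filename into (name, version, python_tag, abi_tag, platform_tag).

    Assumes the common 5-component form name-version-python-abi-platform.whl;
    optional build tags are not handled in this sketch."""
    stem = filename[:-len(".whl")]
    name, version, python_tag, abi_tag, platform_tag = stem.rsplit("-", 4)
    return name, version, python_tag, abi_tag, platform_tag
```

<p>A <code>py3-none-any</code> wheel should install on any cluster; a platform- or CPython-specific tag would have to match the Databricks runtime's Python.</p>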
|
<python><databricks><python-wheel>
|
2024-06-11 12:50:13
| 1
| 457
|
Youshikyou
|
78,607,542
| 13,491,504
|
Simulation of a spring loaded pendulum on a spinning disk
|
<p>I want to write a simulation in Python similar to the one described in <a href="https://stackoverflow.com/questions/78227144/simulation-of-a-pendulum-hanging-on-a-spinning-disk">Simulation of a Pendulum hanging on a spinning Disk</a>.</p>
<p>But I want the system to be spring loaded. So instead of the mass hanging from a thread, I want the mass to hang from a spring, that rotates. I have tried putting the ODE together but it takes forever to calculate:</p>
<pre><code>import sympy as sp
from IPython.display import display
R = sp.symbols('R')
omega = sp.symbols('omega')
t = sp.symbols('t')
phi = sp.Function('phi')(t)
theta = sp.Function('theta')(t)
s = sp.Function('s')(t)
L = sp.symbols('L')
m = sp.symbols('m')
k = sp.symbols('k')
g = sp.symbols('g')
x = R*sp.cos(omega*t)+(L+s)*(sp.sin(theta)*sp.cos(phi))
y = R*sp.sin(omega*t)+(L+s)*(sp.sin(theta)*sp.sin(phi))
z = -(L+s)*sp.cos(theta)
xs = sp.diff(x,t)
ys = sp.diff(y,t)
zs = sp.diff(z,t)
v = xs**2 + ys**2 + zs**2
Ekin = 0.5*m*v
Epot = g*(L+s)*sp.cos(theta)+0.5*k*s**2
L = Ekin + Epot
#display(L)
ELTheta = sp.diff(sp.diff(L,sp.Derivative(theta,t)), t) + sp.diff(L,theta)
ELPhi = sp.diff(sp.diff(L,sp.Derivative(phi,t)), t) + sp.diff(L,phi)
ELs = sp.diff(sp.diff(L,sp.Derivative(s,t)), t) + sp.diff(L,s)
Eq1 = sp.Eq(ELTheta,0)
Eq2 = sp.Eq(ELPhi,0)
Eq3 = sp.Eq(ELs,0)
LGS = sp.solve((Eq1,Eq2,Eq3),(sp.Derivative(theta,t,2),sp.Derivative(phi,t,2),sp.Derivative(s,t,2)))
thetadd = sp.simplify(LGS[sp.Derivative(theta,t,2)])
phidd = sp.simplify(LGS[sp.Derivative(phi,t,2)])
sdd = sp.simplify(LGS[sp.Derivative(s,t,2)])
</code></pre>
<p>I am not sure whether I have set the problem up correctly. Is there a simpler and faster way to compute this, or a different formulation of the problem that would simplify it?</p>
|
<python><sympy><simulation>
|
2024-06-11 12:38:57
| 1
| 637
|
Mo711
|
78,607,374
| 4,340,985
|
How to add (and use) a secondary y axis with subplot_mosaic?
|
<p>I'm setting up a somewhat elaborate plot (should end up as a pdf page at the end) and for that <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.figure.Figure.subplot_mosaic.html#matplotlib.figure.Figure.subplot_mosaic" rel="nofollow noreferrer"><code>subplot_mosaic</code></a> seems like a great solution and very straightforward with plotting into the various boxes.</p>
<p>However, I need two y axes in one of the plots and I can't figure out how.</p>
<p>MWE:</p>
<pre><code>fig = plt.figure(layout='constrained')
ax_dict = fig.subplot_mosaic("""A.B
CCC
CCC
DDD""")
ax_dict['C'].plot([1,2,3],[1,2,3])
ax_dict['C'].plot([1,2,3],[200,190,180])
</code></pre>
<p>(I've added a bunch of empty plots to show the general idea.)</p>
<p>Now, obviously, plotting two different datasets into C isn't going to work without a secondary y axis. On a short search <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.secondary_yaxis.html#matplotlib.axes.Axes.secondary_yaxis" rel="nofollow noreferrer"><code>secondary_yaxis</code></a> seems to be what I'm looking for, and adding the line <code>ax_dict['C'].secondary_yaxis('right')</code> to the MWE adds a secondary axis, but I can't figure out how to make it aware that it belongs to one of the plotted datasets. The <code>functions</code> part looks like I can't simply feed existing data into it, and apparently <code>This method is experimental as of 3.1, and the API may change.</code></p>
<p>Essentially, I'm looking for something like <code>secondary_y=True</code>, as it exists in pandas.</p>
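<p>For reference, what I'm effectively after seems to be what <code>Axes.twinx()</code> provides: it creates a second axes that shares the x axis, and anything plotted on it uses the right-hand y scale - a minimal sketch with the mosaic above:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig = plt.figure(layout="constrained")
ax_dict = fig.subplot_mosaic("""A.B
                                CCC
                                CCC
                                DDD""")
ax_c = ax_dict["C"]
ax_c.plot([1, 2, 3], [1, 2, 3], color="tab:blue")
ax_c2 = ax_c.twinx()  # secondary y axis on the right, shared x axis
ax_c2.plot([1, 2, 3], [200, 190, 180], color="tab:orange")
```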
|
<python><matplotlib>
|
2024-06-11 12:06:40
| 0
| 2,668
|
JC_CL
|
78,607,283
| 6,195,489
|
How do I use numpys interpolation with xarrays
|
<p>I have some code that does a linear interpolation on data.</p>
<p>In the case where the data is numpy arrays it works as I would expect.</p>
<pre><code>amplitude = np.interp(reflected_times, times, trace)
</code></pre>
<p>gives (zoomed in to show the interpolation clearly):</p>
<p><a href="https://i.sstatic.net/pBGCIgnf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBGCIgnf.png" alt="numpy arrays" /></a></p>
<p>However when I do a similar process on xarray.DataArrays:</p>
<pre><code>amplitude = np.interp(reflected_times, times, trace)
</code></pre>
<p>It doesn't work. Here are images of the plot: one without any zoom, and a couple zoomed in:</p>
<p><a href="https://i.sstatic.net/jK9BitFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jK9BitFd.png" alt="interpolation xarray 1" /></a></p>
<p><a href="https://i.sstatic.net/DTmkRJ4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DTmkRJ4E.png" alt="interpolation xarray 2" /></a></p>
<p><a href="https://i.sstatic.net/XWk24VEc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWk24VEc.png" alt="interpolation xarray 3" /></a></p>
<p>At the start it looks like the plot is simply shifted, but that isn't the case if you look further on - the interpolation is just not being done correctly.</p>
<p>I expect this is something obvious particular to xarrays that I am clearly misunderstanding - can anyone suggest what the issue might be?</p>
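<p>Independent of xarray, <code>np.interp</code> silently assumes that the x-coordinate sequence is increasing and returns nonsense-looking output when it is not, so a guard like this sketch can rule that out (and passing <code>.values</code> takes xarray out of the equation entirely):</p>

```python
import numpy as np

def safe_interp(x_new, xp, fp):
    """np.interp with an explicit check that xp is strictly increasing."""
    xp = np.asarray(xp)
    if not np.all(np.diff(xp) > 0):
        raise ValueError("xp must be strictly increasing for np.interp")
    return np.interp(np.asarray(x_new), xp, np.asarray(fp))
```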
<p>It is worth noting that I am using Dask distributed with a local cluster, and I have a list of tasks that do the interpolation for a data set, which I run using delayed and compute:</p>
<pre><code>tasks = [
delayed(self._method_that_does_the_interpolation)(
inputs, for, the, method
)
for cmp_index in range(min_cmp, max_cmp + 1)
]
with self.dask_context() as dc:
results = compute(*tasks, scheduler=dc)
</code></pre>
<p>and in the class method that does the interpolation I am converting the dask arrays to xarray before the interpolation:</p>
<pre><code>times = xr.DataArray(times)
trace = xr.DataArray(trace)
reflected_times = xr.DataArray(reflected_times,dims="reflected_times")
</code></pre>
<p>and I am starting to wonder if the interpolation is being done correctly, but for some reason the data I am plotting doesn't correspond - i.e. I am plotting the interpolation for a different data set.</p>
<p><strong>Edit</strong></p>
<p>The above is exactly what was happening. When I switch to a single threaded scheduler the data output is correct and the interpolation is being done properly.</p>
|
<python><numpy><interpolation><dask><python-xarray>
|
2024-06-11 11:46:06
| 0
| 849
|
abinitio
|
78,607,247
| 908,773
|
Python subprocess run hanging with psql result in kubernetes pod
|
<p>I'm trying to get a list of databases from a Kubernetes pod.
This works and databases are printed in the terminal:</p>
<pre><code> result = subprocess.run(['kubectl', '--context', 'my-context', '-n', 'mynamespace', 'exec', '-it', 'postgres-helper-58dc64d7f4-cmkxl', '--', '/bin/bash', '-c', 'psql postgresql://postgres:password@postgres-postgresql --list --quiet --tuples-only'], check=true)
mydb1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
mydb2 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | | libc | =c/postgres +
| | | | | | | postgres=CTc/postgres
</code></pre>
<p>However, I need to capture this output:</p>
<pre><code>result = subprocess.run(['kubectl', '--context', 'my-context', '-n', 'mynamespace', 'exec', '-it', 'postgres-helper-58dc64d7f4-cmkxl', '--', '/bin/bash', '-c', 'psql postgresql://postgres:password@postgres-postgresql --list --quiet --tuples-only'], check=true, capture_output=True)
</code></pre>
<p>Adding <code>capture_output=True</code> makes the command hang.</p>
<p>If I log in to the pod:</p>
<pre><code>root@postgres-helper-58dc64d7f4-cmkxl:/# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
postgres 1 0.0 0.1 220936 27904 ? Ss 11:10 0:00 postgres
postgres 62 0.0 0.0 221060 9452 ? Ss 11:10 0:00 postgres: checkpointer
postgres 63 0.0 0.0 221072 7020 ? Ss 11:10 0:00 postgres: background writer
postgres 65 0.0 0.0 220936 10220 ? Ss 11:10 0:00 postgres: walwriter
postgres 66 0.0 0.0 222516 8940 ? Ss 11:10 0:00 postgres: autovacuum launcher
postgres 67 0.0 0.0 222500 8172 ? Ss 11:10 0:00 postgres: logical replication launcher
root 96 0.0 0.0 23556 11136 pts/0 Ss+ 11:24 0:00 /usr/lib/postgresql/15/bin/psql postgresql://postgres:password@postgres-postgresql --list --quiet --tuples-only
root 102 0.0 0.0 2320 1280 pts/0 S+ 11:24 0:00 sh -c pager
root 103 0.0 0.0 5516 2048 pts/0 S+ 11:24 0:00 pager
root 104 0.0 0.0 7064 3584 pts/1 Ss 11:24 0:00 bash
root
320 0.0 0.0 10996 4096 pts/1 R+ 11:25 0:00 ps aux
</code></pre>
<p>The process is sitting there, perhaps waiting for some input that I can't figure out.</p>
<p>Note: I need to connect to a postgres service in another pod, so I need to execute <code>psql -h</code>.</p>
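<p>For context on what the process listing suggests: the <code>pager</code> processes indicate psql believes it is attached to a terminal (because of <code>-t</code>) while the captured stdout never delivers keystrokes, so the pager blocks. A hedged sketch of capturing output without requesting a TTY (dropping <code>-it</code>; disabling psql's pager, e.g. with <code>--pset=pager=off</code>, is an assumption to verify):</p>

```python
import subprocess
import sys

def run_capture(argv):
    """Run a command non-interactively (no TTY) and return its stdout as text."""
    result = subprocess.run(argv, check=True, capture_output=True, text=True)
    return result.stdout

# portable stand-in for the kubectl exec command (note: no '-it'):
out = run_capture([sys.executable, "-c", "print('db listing would appear here')"])
```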
|
<python><kubernetes><psql>
|
2024-06-11 11:37:16
| 0
| 4,767
|
martincho
|
78,607,175
| 13,123,667
|
Run Yolov8 for Python 3.6
|
<p>Yolov8 is built to run with Python 3.8 or above, and <code>pip install ultralytics</code> is not compatible with prior versions.</p>
<p>I have the constraint of using a prior version of Python to execute the code on a microcomputer like Jetson Nano or Jetson Xavier.</p>
<p>I've seen people asking how to solve this on many issues of the Yolov8 repository as well.</p>
<p>I'll answer below with the solution I have found, so that others can benefit from it or improve it.</p>
|
<python><python-3.6><yolov8><nvidia-jetson><ultralytics>
|
2024-06-11 11:23:27
| 1
| 896
|
Timothee W
|
78,607,131
| 12,297,666
|
Different values in matplotlib piechart from the pandas dataframe
|
<p>Check this code:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
data = {
'RegiΓ£o': ['NORTE', 'SUDESTE', 'NORDESTE', 'CENTRO OESTE', 'SUL'],
'PNT "Real" sobre mercado BT (%)': [52.75, 15.04, 12.82, 8.23, 8.04]
}
data = pd.DataFrame(data)
data.set_index('RegiΓ£o', inplace=True)
plt.rcParams["figure.autolayout"] = True
fig, ax = plt.subplots()
ax.patch.set_edgecolor('black')
ax.patch.set_linewidth(1)
ax.pie(data['PNT "Real" sobre mercado BT (%)'].values, textprops={'size': 'smaller'}, labels=data.index, autopct='%1.2f%%', wedgeprops={'edgecolor':'black'})
ax.axis('equal')
# fig.savefig('pnts_sobre_BT_Regioes.png', dpi=300)
plt.show()
</code></pre>
<p>I am getting this figure as result:</p>
<p><a href="https://i.sstatic.net/7zVxUSeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7zVxUSeK.png" alt="enter image description here" /></a></p>
<p>Why are the values in the figure different from the ones in <code>data</code>?</p>
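<p>A note on why, and a sketch of a workaround: <code>autopct</code> always reports each wedge as a percentage of the sum of the values passed in, and this column sums to 96.88, not 100, so every number gets renormalized. If the intent is to display the original values, an <code>autopct</code> callable can map the normalized percentage back (assuming that is what is wanted):</p>

```python
vals = [52.75, 15.04, 12.82, 8.23, 8.04]
total = sum(vals)  # 96.88, not 100 -> autopct renormalizes the labels

def original_value(pct):
    """Map the normalized percentage autopct passes in back to the raw value."""
    return f"{pct * total / 100:.2f}%"

# usage: ax.pie(vals, autopct=original_value, ...)
```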
|
<python><matplotlib>
|
2024-06-11 11:16:43
| 2
| 679
|
Murilo
|
78,607,097
| 5,953,534
|
Python KeyError in x.get("key")
|
<p>This is a weird error I am seeing in a log file in python.</p>
<pre class="lang-py prettyprint-override"><code>import traceback
mat = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
def fun(a, b, c):
if c is None:
return a + b
else:
return a + b + c
try:
for x in mat:
x["d"] = fun(x["a"], x["b"], x.get("c"))
except BaseException as e:
emsg = ''.join(traceback.format_exception(None, e, e.__traceback__))
print(emsg)
#py> KeyError: "c"
</code></pre>
<p>I can't reproduce the error, and as far as I understand it should not be possible to get a <code>KeyError</code> for <code>"c"</code> since I am using <code>x.get</code>. Therefore I am quite puzzled by this message. Does anyone have an idea? Is this a rare error in CPython, or am I overlooking something?</p>
|
<python><cpython>
|
2024-06-11 11:09:50
| 1
| 663
|
Florian
|
78,606,853
| 7,166,834
|
extract rows with change in consecutive values of a column from pandas dataframe
|
<p>I am trying to extract the rows where the value of a particular column changes between consecutive rows.</p>
<p>Ex:</p>
<pre><code>sno A B
1 a yes
2 b yes
3 c No
4 d No
5 e No
6 f yes
7 g yes
8 h yes
9 I No
</code></pre>
<p>Output:</p>
<pre><code>sno A B
2 b yes
3 c No
5 e No
6 f yes
8 h yes
9 I No
</code></pre>
<p>The column of interest is <code>B</code>; I am trying to extract the rows where its value changes between consecutive rows.</p>
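<p>A sketch of one possible approach (my own, not from the question): compare <code>B</code> with its shifted neighbours and keep the rows on either side of each change:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "sno": range(1, 10),
    "A": list("abcdefghI"),
    "B": ["yes", "yes", "No", "No", "No", "yes", "yes", "yes", "No"],
})

b = df["B"]
# A row is a boundary if it differs from the previous row or the next one;
# fill_value keeps the very first/last rows out unless they sit at a real change.
changed = b.ne(b.shift(fill_value=b.iloc[0])) | b.ne(b.shift(-1, fill_value=b.iloc[-1]))
result = df[changed]
print(result["sno"].tolist())  # [2, 3, 5, 6, 8, 9]
```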
|
<python><pandas>
|
2024-06-11 10:19:42
| 3
| 1,460
|
pylearner
|
78,606,822
| 6,156,353
|
Build Python wheel for ARM macOS on Linux
|
<p>I would like to build a Python wheel package for ARM macOS, but I only have a Linux server (GitLab CI). Is it possible to build a wheel for ARM macOS on a Linux server? If so, how? I did not find much relevant information using Google.</p>
|
<python><linux><macos><python-wheel>
|
2024-06-11 10:13:10
| 0
| 1,371
|
romanzdk
|
78,606,744
| 7,729,531
|
Find embedded document based on nested embedded document ID in MongoEngine
|
<p>I have the following schema:</p>
<pre class="lang-py prettyprint-override"><code>class Feature(EmbeddedDocument):
feature_name = StringField(required=True)
geometry = PolygonField(required=True)
id = ObjectIdField(required=True, default=ObjectId, unique=True, primary_key=True)
class Label(EmbeddedDocument):
label_name = StringField(required=True)
version = IntField(required=True)
features = EmbeddedDocumentListField(Feature)
id = ObjectIdField(required=True, default=ObjectId, unique=True, primary_key=True)
class Image(Document):
image_name = StringField(required=True)
date = DateTimeField(required=True)
labels = EmbeddedDocumentListField(Label)
</code></pre>
<p>Based on a given feature ID (nested embedded document), I would like to find its parent image and label IDs (top-level and embedded documents).</p>
<p>I can find the image ID (top-level document) with this query:</p>
<pre class="lang-py prettyprint-override"><code>parent_image = Image.objects.filter(labels__features__id=feature_id)[0]
</code></pre>
<p>But I can't seem to find the label ID (embedded document) with a similar query:</p>
<pre class="lang-py prettyprint-override"><code>parent_label = Image.objects.get(id=parent_image.id).labels.filter(features__id=feat_id)[0]
</code></pre>
<p>This raises an <code>AttributeError: 'Label' object has no attribute 'features__id'</code>.</p>
<p>I don't yet see what's wrong with my second query. Can't I use this syntax on embedded documents?</p>
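<p>For reference, a plain-Python fallback that sidesteps the nested-filter syntax entirely; this sketch uses simple stand-in objects rather than real MongoEngine documents:</p>

```python
from types import SimpleNamespace as NS

# Stand-ins for the Image/Label/Feature documents (illustration only).
feature = NS(id="feat-1")
label_a = NS(id="label-a", features=[NS(id="other")])
label_b = NS(id="label-b", features=[feature])
parent_image = NS(labels=[label_a, label_b])

feat_id = "feat-1"
# Walk the embedded lists in Python instead of filtering on a nested field.
parent_label = next(
    (label for label in parent_image.labels
     if any(f.id == feat_id for f in label.features)),
    None,
)
print(parent_label.id)  # label-b
```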
|
<python><mongodb><mongoengine>
|
2024-06-11 09:58:39
| 0
| 440
|
tvoirand
|
78,606,656
| 5,568,409
|
How to modify Arviz.Plot.Trace?
|
<p>With these small lines:</p>
<pre><code>with model:
az.plot_trace(data=idata, kind="trace", legend=True, figsize=(6, 3))
plt.suptitle("Model Trace", fontsize=20)
</code></pre>
<p>I can modify a bit (general title and size) the trace plot as shown below:</p>
<p><a href="https://i.sstatic.net/0pKR51CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0pKR51CY.png" alt="enter image description here" /></a></p>
<p>I am quite sure it's also possible to modify the variable name <code>mu</code> (for example, increase its size or change the font), but I don't see how.</p>
<p>I am also quite sure it's possible to change the layout by arranging the two plots vertically.</p>
<p>I suspect some <code>kwargs</code> are involved among the many parameters of the <code>arviz.plot_trace</code> function, but I am unsure how to properly use these additional parameters.</p>
<p>Do you have some examples of using <code>plot_kwargs</code>, <code>fill_kwargs</code>, <code>trace_kwargs</code>, or others?</p>
<p>Thanks for any hint on the subject.</p>
|
<python><arviz>
|
2024-06-11 09:40:59
| 0
| 1,216
|
Andrew
|
78,606,535
| 4,767,829
|
statsmodels.graphics.mosaicplot of a MultiIndex DataFrame
|
<p>I have collected the frequencies in which combinations of certain categorical parameters occur in a DataFrame. Applying a groupby operation produced another DataFrame with a MultiIndex. Now I would like to visualize the frequencies in a mosaic plot. This is what I have tried:</p>
<pre><code>rows=[
{"Mode":"ID", "SortBy":"Start", "SortDir":"ASC", "count":10},
{"Mode":"ID", "SortBy":"Start", "SortDir":"DESC", "count":100},
{"Mode":"FULL", "SortBy":"End", "SortDir":"DESC", "count":1000}
]
df=pd.DataFrame(rows)
hdf=df.groupby(["Mode", "SortBy", "SortDir"]).sum("count")
mosaic(hdf, index=hdf.index)
</code></pre>
<p>But it fails with the following error:</p>
<pre><code>KeyError: "None of [MultiIndex([('FULL', 'End', 'DESC'),\n ( 'ID', 'Start', 'ASC'),\n ( 'ID', 'Start', 'DESC')],\n names=['Mode', 'SortBy', 'SortDir'])] are in the [columns]"
</code></pre>
<p>I was able to produce a diagram using</p>
<pre><code>mosaic(hdf, ["count"])
</code></pre>
<p>But this is obviously not what I want: it shows three same-sized rectangles labelled 10, 100, 1000, whereas I was expecting the categories Mode, SortBy, SortDir arranged around the axes and the rectangles reflecting the proportions of counts.</p>
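<p>One workaround (a sketch; it assumes <code>mosaic</code> accepts a plain dict keyed by category tuples) is to collapse the grouped frame into a mapping before plotting:</p>

```python
import pandas as pd

rows = [
    {"Mode": "ID", "SortBy": "Start", "SortDir": "ASC", "count": 10},
    {"Mode": "ID", "SortBy": "Start", "SortDir": "DESC", "count": 100},
    {"Mode": "FULL", "SortBy": "End", "SortDir": "DESC", "count": 1000},
]
hdf = pd.DataFrame(rows).groupby(["Mode", "SortBy", "SortDir"]).sum()

# Collapse the MultiIndex frame into {(Mode, SortBy, SortDir): count}.
counts = hdf["count"].to_dict()
print(counts[("ID", "Start", "DESC")])  # 100
# mosaic(counts)  # untested here; statsmodels' mosaic takes tuple-keyed dicts
```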
|
<python><pandas><dataframe><mosaic-plot>
|
2024-06-11 09:21:17
| 2
| 705
|
Stefan Reisner
|
78,606,466
| 10,131,952
|
Regex Pattern to allow alphanumeric and square brackets with text insde it
|
<p>I am using a regex to allow alphanumeric characters, underscores, hyphens, and square brackets in a text box.
The regex I am using to validate the input is:</p>
<pre><code>r'^[a-zA-Z0-9_\-\[\] ]*$'
</code></pre>
<p>I need to modify the regex such that if empty brackets are given it should return false.</p>
<p>Sample cases</p>
<p>"Your message here" - Valid</p>
<p>"Your [text] message here" - Valid</p>
<p>"your_text_message [text]" - valid</p>
<p>"Your [] message here" - Invalid</p>
<p>"[] message" - Invalid</p>
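<p>One way (a sketch of my own, not from the question) is a negative lookahead that rejects empty brackets while keeping the original character class; note that <code>\s*</code> also treats whitespace-only brackets like <code>[ ]</code> as empty, which is an assumption:</p>

```python
import re

# Same character class as before, plus a lookahead that rejects "[]" (or "[  ]").
pattern = re.compile(r'^(?!.*\[\s*\])[a-zA-Z0-9_\-\[\] ]*$')

cases = [
    ("Your message here", True),
    ("Your [text] message here", True),
    ("your_text_message [text]", True),
    ("Your [] message here", False),
    ("[] message", False),
]
for text, expected in cases:
    print(text, bool(pattern.match(text)) == expected)  # all True
```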
|
<python><django><regex>
|
2024-06-11 09:09:29
| 5
| 413
|
padmaja cherukuri
|
78,606,254
| 2,753,629
|
Escaping {} surrounding a variable to be formatted
|
<p>I need to generate a string where the variable should end up between {}. For instance, the result should be</p>
<pre><code>{0.05} and {4}
</code></pre>
<p>I tried to get this using</p>
<pre><code>"{{}} and {{}}".format(0.05, 4)
</code></pre>
<p>However, this leads to</p>
<pre><code>{} and {}
</code></pre>
<p>How can I escape <code>{</code> and <code>}</code> so that the string is still formatted using the variables?</p>
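<p>For reference, doubled braces produce literal braces, so a literal brace around a placeholder takes three in a row (a minimal sketch):</p>

```python
# {{ -> literal {, }} -> literal }, and the inner {} is the placeholder.
s = "{{{}}} and {{{}}}".format(0.05, 4)
print(s)  # {0.05} and {4}
```

The f-string equivalent follows the same rule: <code>f"{{{x}}} and {{{y}}}"</code>.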
|
<python><format>
|
2024-06-11 08:28:07
| 2
| 4,349
|
Mike
|
78,606,238
| 12,304,000
|
escaping backslashes in MySQL for CSV interpretation
|
<p>In my S3, I have a CSV file that looks like this:</p>
<pre><code>"id","created_at","subject","description","priority","status","recipient","recipient_id",
</code></pre>
<p>Now, one of the <code>description</code> column's <strong>row</strong> looks like this:</p>
<pre class="lang-none prettyprint-override"><code>"
Internet Marketing Experts
Note: If you are not interested, then you can reply with a simple \"NO\",We will never contact you again.
"
</code></pre>
<p>This has normal text but also some special characters. Notice this part in the cell's value:</p>
<pre class="lang-none prettyprint-override"><code>\"NO\",We
</code></pre>
<p>The first comma (<code>,</code>) after <code>interested</code> does not cause an issue. But the second comma <code>\"NO\",We</code> causes a split and <code>We</code> goes into the next column (<code>priority</code>) when loaded into MySQL. Even though it should be a part of the same column (<code>description</code>).</p>
<p>Now I use this to load data from S3 into MySQL:</p>
<pre class="lang-py prettyprint-override"><code>f"""LOAD DATA FROM S3 's3://{s3_bucket_name}/{s3_key}'
REPLACE
INTO TABLE {schema_name}.stg_tickets
CHARACTER SET utf8mb4
FIELDS
TERMINATED BY ','
ENCLOSED BY '"'
IGNORE 1 LINES
(id, created_at, subject,
description, priority, status, recipient)
"""
</code></pre>
<p>This works for all other cases because there are not a lot of commas/special characters in other <code>description</code> rows. But this particular record messes up the columns because it cannot distinguish the content properly.</p>
<p>I want to handle it using the MySQL command itself. Pre-processing the S3 files wouldn't be an option at the moment.</p>
<p>Note: I am running the query using Python.</p>
<p>I have already tried <code>ESCAPED BY</code> but although it doesn't throw an error, it doesn't make any difference on the data split either:</p>
<pre class="lang-sql prettyprint-override"><code>FIELDS
TERMINATED BY ','
ENCLOSED BY '"'
ESCAPED BY '\\\\'
</code></pre>
<p>What else could I try?</p>
<p>Edit:</p>
<p>I managed to try it via SQL itself:</p>
<pre><code>ESCAPED BY '\\'
</code></pre>
<p>When I try with this, it still splits the data incorrectly. I know because I get this error:</p>
<pre><code>SQL Error [1366] [HY000]: Incorrect integer value: 'info@xxx' for column 'recipient' at row 781
</code></pre>
<p><code>info@xxx</code> was supposed to be the <strong>recipient</strong> value but now it's trying to load it into the next column. So basically, after the split, all values shift to the right.</p>
<p>If I use:</p>
<pre><code>ESCAPED BY '\\\\'
</code></pre>
<p>I get this:</p>
<pre><code>SQL Error [1083] [42000]: Field separator argument is not what is expected; check the manual
</code></pre>
<p>Example CSV file:</p>
<pre><code>id,subject,description,priority,status,recipient,requester_id,submitter_id,assignee_id,organization_id,group_id,collaborator_ids,follower_ids,forum_topic_id,problem_id
4232222,May I send you an invoice?,"Hi,
My packages are reasonable and more importantly they generate a number of quality new leads daily.
Thanks & Regards,
Rodigoria
Test Company Name
Note: If you are not interested then you can reply with a simple \""NO\"",We will never contact you again.",normal,new,info@xxxx.de,2345678,2345678,,,,[],[],,
</code></pre>
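<p>For reference, the failing cell uses backslash-escaped quotes rather than the CSV-standard doubled quotes; Python's <code>csv</code> module can parse that style once told about the escape character (a sketch with made-up data, not the real export):</p>

```python
import csv
import io

# Backslash-escaped quotes inside a quoted field, like the S3 export.
line = '"1","say \\"NO\\",We will never contact you","normal"\n'
row = next(csv.reader(io.StringIO(line), escapechar='\\', doublequote=False))
print(row)  # ['1', 'say "NO",We will never contact you', 'normal']
```

This only demonstrates the quoting convention involved; it does not by itself change the MySQL-side behaviour the question is about.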
|
<python><mysql><csv>
|
2024-06-11 08:24:39
| 0
| 3,522
|
x89
|
78,606,105
| 676,192
|
Non-blocking loop inside an inkscape extension
|
<p>Is it possible to have a non-blocking loop constantly running from within an Inkscape extension?</p>
<p>I would like - for example - to be able to pass information back and forth to another application every time I update the drawing.</p>
<p>I have tried threads and forking, but neither releases control to Inkscape for as long as the extension is running.</p>
|
<python><inkscape>
|
2024-06-11 07:54:41
| 0
| 5,252
|
simone
|
78,605,817
| 6,930,340
|
How to replace an individual level in a multi-level column index in pandas
|
<p>Consider the following multi-level column index dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
arrays = [
["A", "A", "B", "B"],
["one", "two", "one", "two"],
["1", "2", "1", "pd.NA"],
]
idx = pd.MultiIndex.from_arrays(arrays, names=["level_0", "level_1", "level_2"])
data = np.random.randn(3, 4)
df = pd.DataFrame(data, columns=idx)
print(df)
level_0 A B
level_1 one two one two
level_2 1 2 1 pd.NA
0 -1.249285 0.314225 0.011139 0.675274
1 -0.654808 -0.492350 0.596338 -0.087334
2 0.113570 0.566687 -0.361334 0.085368
</code></pre>
<p><code>level_2</code> holds values of type <code>object</code> (<code>str</code> really).</p>
<pre><code>df.columns.get_level_values(2)
Index(['1', '2', '1', 'pd.NA'], dtype='object', name='level_2')
</code></pre>
<p>I need to parse it to the correct data type and change this particular column level.</p>
<pre><code>new_level_2 = [
pd.NA if x == "pd.NA" else int(x) for x in df.columns.get_level_values(2)
]
</code></pre>
<p>I am looking for a pythonic way to replace the old <code>level_2</code> with <code>new_level_2</code>.</p>
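<p>One pythonic option (a sketch): rebuild the <code>MultiIndex</code> from its per-column arrays, swapping in the parsed values. This sidesteps <code>set_levels</code>, whose argument is the list of unique level values rather than one value per column:</p>

```python
import numpy as np
import pandas as pd

arrays = [
    ["A", "A", "B", "B"],
    ["one", "two", "one", "two"],
    ["1", "2", "1", "pd.NA"],
]
idx = pd.MultiIndex.from_arrays(arrays, names=["level_0", "level_1", "level_2"])
df = pd.DataFrame(np.random.randn(3, 4), columns=idx)

new_level_2 = [pd.NA if x == "pd.NA" else int(x)
               for x in df.columns.get_level_values(2)]

# from_arrays takes one value per column, so duplicates in the new level are fine.
df.columns = pd.MultiIndex.from_arrays(
    [df.columns.get_level_values(0),
     df.columns.get_level_values(1),
     new_level_2],
    names=df.columns.names,
)
print(list(df.columns.get_level_values(2)))
```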
|
<python><pandas><multi-index>
|
2024-06-11 06:48:45
| 4
| 5,167
|
Andi
|
78,605,727
| 5,130,253
|
Exception stack trace not clickable in PyCharm
|
<p>PyCharm suddenly changed the way it shows the stack trace on the run tab and does not let me click on the exception (or anywhere else) and go to the specific point of the error file anymore.</p>
<p>How can I fix this?</p>
<p>Running on OSX Sonoma, PyCharm 2022.2.3 (Community Edition)</p>
<p><a href="https://i.sstatic.net/AJ5HIau8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJ5HIau8.png" alt="Stack is plain colored text instead of clickable links" /></a></p>
|
<python><pycharm><stack-trace><torchmetrics>
|
2024-06-11 06:24:12
| 1
| 502
|
sofia
|
78,605,344
| 5,003,606
|
How to fix grpc._channel._MultiThreadedRendezvous fails; StatusCode.UNAVAILABLE; Query timed out
|
<p>I have a Python function that takes a cloud Firestore collection name as an arg and streams thru every document in that collection to check for errors.</p>
<p>In simplified form, it essentially looks something like this:</p>
<pre><code>from firebase_admin import firestore
def find_all_issues(collection: str) -> None:
fs_client = firestore.client()
coll_ref = fs_client.collection(collection)
doc_snapshot_stream = coll_ref.stream()
for doc_snap in doc_snapshot_stream:
d = doc_snap.to_dict()
# perform error checks on d
</code></pre>
<p>The key point here is that I create a stream for the collection and use that to read and process <strong>every</strong> document in the collection.</p>
<p><em>The code always works perfectly on all but one of my collections.</em></p>
<p>Unfortunately, on the <strong>biggest</strong> collection, I not infrequently get an error like this:</p>
<pre><code>grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "Query timed out. Please try either limiting the entities scanned, or run with an updated index configuration."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.15.202:443 {created_time:"2024-06-10T23:10:26.9339739+00:00", grpc_status:14, grpc_message:"Query timed out. Please try either limiting the entities scanned, or run with an updated index configuration."}"
>
</code></pre>
<p>This is super frustrating. <em>I can't believe that Firestore cannot create and maintain a rock solid database stream!</em></p>
<p>The error message offers this very useful sounding advice:
<code>Please try either limiting the entities scanned, or run with an updated index configuration.</code></p>
<p>The problem is that I have no idea how to execute on it:</p>
<ol>
<li><p><code>limiting the entities scanned</code>: how can I possibly do that? I need to process every document in the collection! Furthermore, I really hope that Google is not expecting Firestore users to somehow manually break up large reads.</p>
</li>
<li><p><code>run with an updated index</code>: <strong>I have no idea what index to use to solve this problem</strong>. In other Firestore contexts, I have seen Google's error message very helpfully give you the exact index command you should execute to solve the problem. Unfortunately, here Google tells me nothing.</p>
</li>
</ol>
<p>I web searched before posting here and was frustrated to find very little discussion of this issue. But I think that <a href="https://stackoverflow.com/questions/78565022/firestore-throwing-14-unavailable-query-timed-out-on-simple-query">this recent post</a> might be related. <a href="https://github.com/googleapis/nodejs-firestore/issues/2055" rel="nofollow noreferrer">Google support says</a> "backend engineers have investigated and found a potential query planner bug".</p>
|
<python><google-cloud-firestore><timeout>
|
2024-06-11 04:03:16
| 0
| 951
|
HaroldFinch
|
78,605,165
| 14,046,126
|
Airflow: Failing to push data back into the pipeline
|
<p>I have set up a workflow which consists of two tasks.</p>
<p>In the second task, I am successfully pulling data pushed into the pipeline by the first task. However, after processing the data, when I try to push data back into the pipeline, I get the error "INFO - Task exited with return code -9".</p>
<p>Here are the logs: <a href="https://i.sstatic.net/XWhCqKKc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWhCqKKc.jpg" alt="enter image description here" /></a></p>
<p>Why is XCom failing to push data into the pipeline? How can I communicate data to other tasks?</p>
|
<python><airflow><etl><airflow-xcom>
|
2024-06-11 02:30:25
| 2
| 2,726
|
Pranav Rustagi
|
78,604,978
| 4,561,700
|
keras.utils.get_file() throws TypeError: '<' not supported between instances of 'int' and 'NoneType'
|
<p>I am trying to follow along with the book <strong>Applied Deep Learning and Computer Vision for Self-Driving Cars</strong>. I am running into issues with keras while running some of the example code. When trying to grab a file using the get_file() function, I am getting a type error.</p>
<p>System: Windows 10 | Python 3.9.19 | Tensorflow 2.10.1</p>
<p>Code snippet: <code>dataset_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")</code></p>
<p>Desired Behavior: Gets file</p>
<p>Resulting error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 2
1 # Datapath to import auto-mpg data
----> 2 dataset_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
File ~\anaconda3\envs\AVTech\lib\site-packages\keras\utils\data_utils.py:296, in get_file(fname, origin, untar, md5_hash, file_hash, cache_subdir, hash_algorithm, extract, archive_format, cache_dir)
294 try:
295 try:
--> 296 urlretrieve(origin, fpath, DLProgbar())
297 except urllib.error.HTTPError as e:
298 raise Exception(error_msg.format(origin, e.code, e.msg))
File ~\anaconda3\envs\AVTech\lib\site-packages\keras\utils\data_utils.py:86, in urlretrieve(url, filename, reporthook, data)
84 response = urlopen(url, data)
85 with open(filename, "wb") as fd:
---> 86 for chunk in chunk_read(response, reporthook=reporthook):
87 fd.write(chunk)
File ~\anaconda3\envs\AVTech\lib\site-packages\keras\utils\data_utils.py:78, in urlretrieve.<locals>.chunk_read(response, chunk_size, reporthook)
76 count += 1
77 if reporthook is not None:
---> 78 reporthook(count, chunk_size, total_size)
79 if chunk:
80 yield chunk
File ~\anaconda3\envs\AVTech\lib\site-packages\keras\utils\data_utils.py:287, in get_file.<locals>.DLProgbar.__call__(self, block_num, block_size, total_size)
285 self.progbar = Progbar(total_size)
286 current = block_num * block_size
--> 287 if current < total_size:
288 self.progbar.update(current)
289 elif not self.finished:
TypeError: '<' not supported between instances of 'int' and 'NoneType'
</code></pre>
<p><a href="https://stackoverflow.com/questions/75752307/tf-keras-utils-get-file-error-typeerror-not-supported-between-instances-of">The only other post I could see related to this only had the answer "It works for me".</a></p>
|
<python><python-3.x><tensorflow><keras>
|
2024-06-11 00:46:00
| 1
| 310
|
RPBruiser
|
78,604,957
| 770,513
|
Undoing a migration for a duplicate field
|
<p>We have had a duplicate field added to one of our Django Wagtail models, which has confused the migration system.</p>
<p>The following was added to one of the models a while back, along with a migration that added a singular instance of the field:</p>
<pre><code> stream = fields.StreamField(
who_funds_you_blocks,
blank=True,
verbose_name="Additional content",
)
stream = fields.StreamField(
who_funds_you_blocks,
blank=True,
verbose_name="Additional content",
)
</code></pre>
<p>The migrations are a bit strange. This is what was committed to the repository</p>
<p><code>/modules/core/migrations/0014_whofundsyoupage_stream.py</code></p>
<pre><code>from django.db import migrations
import modules.core.blocks.core
import modules.core.blocks.override
import wagtail.core.blocks
import wagtail.core.fields
import wagtail.images.blocks
class Migration(migrations.Migration):
dependencies = [
('core', '0013_articlepage_apple_news_url'),
]
operations = [
migrations.AddField(
model_name='whofundsyoupage',
name='stream',
field=wagtail.core.fields.StreamField([('rich_text', wagtail.core.blocks.StructBlock([('text', modules.core.blocks.override.RichTextBlock(label='Body text', required=True))], group=' Content')), ('image', wagtail.core.blocks.StructBlock([('image', wagtail.images.blocks.ImageChooserBlock(required=True)), ('alt_text', wagtail.core.blocks.CharBlock(label='Override image alt-text', required=False)), ('caption', wagtail.core.blocks.RichTextBlock(features=['link', 'document-link'], label='Override caption', required=False)), ('credit', wagtail.core.blocks.RichTextBlock(features=['link', 'document-link'], label='Override credit', required=False)), ('image_display', wagtail.core.blocks.ChoiceBlock(choices=[('full', 'Full'), ('long', 'Long'), ('medium', 'Medium'), ('small-image', 'Small')]))], group=' Content')), ('html_advanced', wagtail.core.blocks.StructBlock([('segment', wagtail.core.blocks.ChoiceBlock(choices=modules.core.blocks.core.list_segment_choices, help_text='Only show this content block for users in this segment', label='Personalisation segment', required=False)), ('html', wagtail.core.blocks.RawHTMLBlock(label='HTML code', required=True)), ('styling', wagtail.core.blocks.ChoiceBlock(choices=[('default', 'Default'), ('remove-styles', 'Remove style')]))], group=' Content'))], blank=True, verbose_name='Additional content'),
),
]
</code></pre>
<p>And although that commit seems to have been integrated, the current migration list doesn't seem to have it, instead the list is like this:</p>
<pre><code>0012_articlepage_apple_news_id_and_more.py
0012_merge_20230531_1543.py
0013_articlepage_apple_news_url.py
0014_merge_20231024_1447.py
0015_alter_articlepage_share_header_and_more.py
0016_whofundsyoupage_stream_and_more.py
0017_delete_targetedlocalehomepage.py
0018_auto_20240226_1608.py
</code></pre>
<p>And none of those migrations actually refer to the <code>WhoFundsYouPage</code> model or field.</p>
<p>The only migration in that list whose name refers to that model doesn't appear to include any actual reference to the field:</p>
<p><code>/modules/core/migrations/0016_whofundsyoupage_stream_and_more.py</code></p>
<pre><code># Generated by Django 4.2.8 on 2023-12-13 10:36
from django.db import migrations, models
import wagtail.blocks
import wagtail.fields
class Migration(migrations.Migration):
dependencies = [
('core', '0015_alter_articlepage_share_header_and_more'),
]
operations = [
migrations.AlterField(
model_name='articlepage',
name='apple_news_id',
field=models.UUIDField(blank=True, null=True, verbose_name='Apple News ID (automatically generated)'),
),
migrations.AlterField(
model_name='articlepage',
name='apple_news_url',
field=models.URLField(blank=True, null=True, verbose_name='Apple News URL (automatically generated)'),
),
]
</code></pre>
<p>Nevertheless, this all seemed to be fine, and the project continued with new fields and migrations being added elsewhere without any problem.</p>
<p>If I comment out the duplicate <code>stream</code> field and try to create a page from that model, I get this error which shows that the database does contain the field:</p>
<pre><code>Exception Value:
null value in column "stream" of relation "core_whofundsyoupage" violates not-null constraint
DETAIL: Failing row contains (54264, null, null, null, null, null, null, f, null, no, no, 1, , , null, null, null).
</code></pre>
<p>Recently, the duplicate field was spotted and removed from the models page.</p>
<p>Now, if we try to add a field on some model and a migration for it, the migration is created but doesn't complete, although there are no errors that directly imply it has failed:</p>
<pre><code>(.venv) development β app π manpy migrate
/root/.cache/pypoetry/virtualenvs/.venv/lib/python3.10/site-packages/wagtail/utils/widgets.py:10: RemovedInWagtail70Warning: The usage of `WidgetWithScript` hook is deprecated. Use external scripts instead.
warn(
System check identified some issues:
WARNINGS:
?: (urls.W005) URL namespace 'freetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'pagetag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
?: (urls.W005) URL namespace 'sectiontag_chooser' isn't unique. You may not be able to reverse all URLs in this namespace
Operations to perform:
Apply all migrations: admin, auth, contenttypes, core, csp, django_cron, donations, importers, sessions, submissions, taggit, taxonomy, users, wagtail_localize, wagtailadmin, wagtailcore, wagtaildocs, wagtailembeds, wagtailforms, wagtailimages, wagtailredirects, wagtailsearch, wagtailsearchpromotions, wagtailusers
Running migrations:
Applying core.0019_utilitypage_show_navigation_bar_and_more...
</code></pre>
<p>The same thing happens if we add the duplicate <code>stream</code> field back and create a new migration i.e. the migration looks like it runs ok but there's no confirmation tick.</p>
<p>And when I start the server we get:</p>
<pre><code>You have 1 unapplied migration(s). Your project may not work properly until you apply the migration for app(s): core.
Run 'python manage.py migrate' to apply them.
</code></pre>
<p>The database appears to have the field already but the migrations system wants to try to add it again but can't and this is blocking all other migrations.</p>
<p>If I comment out both <code>stream</code> fields, migrations can be made, but the site breaks on the page where the database expects a <code>stream</code> field to be present. I suppose I could delete the <code>stream</code> fields and manually drop the <code>stream</code> field in the database, though we would lose some data by doing that, and I'd prefer to fix this with Django and migrations.</p>
<p>Any idea how to unpick this?</p>
|
<python><django><database><migration><wagtail>
|
2024-06-11 00:33:23
| 1
| 3,251
|
KindOfGuy
|
78,604,888
| 1,606,657
|
Auto detect tabs or spaces for Python projects
|
<p>I'm aware of the setting <code>set expandtab</code> and <code>set noexpandtab</code> which can trigger to use spaces or tabs accordingly.</p>
<p>The problem I'm facing is that I'm working on different Python projects that are sometimes using spaces and sometimes tabs. I don't want to reformat all the code when I start working on a project so I'd like to auto-detect what type of indentation the project is using and apply either spaces or tabs automatically when pressing the <code>tab</code> key.</p>
<p>Is there a way to configure this, either with .vimrc or potentially with a Neovim plugin?</p>
|
<python><tabs><neovim>
|
2024-06-10 23:48:50
| 1
| 6,352
|
wasp256
|
78,604,863
| 6,227,035
|
Sorting dictionary values in Python
|
<p>I have a data structure as follow:</p>
<pre><code>Clients= {
"data": [
{
"nClients": 3
},
{
"name": "cTest1",
"roll_no": 1,
"branch": "c"
},
{
"name": "fTest3",
"roll_no": 3,
"branch": "it3"
},
{
"name": "aTest2",
"roll_no": 2,
"branch": "it2"
}
]
}
</code></pre>
<p>I am trying to sort it out by the key 'name' alphabetically to have a result like this:</p>
<pre><code>Clients= {
"data": [
{
"nClients": 3
},
{
"name": "aTest2",
"roll_no": 2,
"branch": "it2"
},
{
"name": "cTest1",
"roll_no": 1,
"branch": "c"
},
{
"name": "fTest3",
"roll_no": 3,
"branch": "it3"
}
]
}
</code></pre>
<p>I have been looking around using functions like <code>sort()</code>, <code>dump()</code>, etc., but I cannot find the correct syntax for this operation. Any suggestions?
Thank you!</p>
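<p>A sketch using <code>list.sort</code> with a key that keeps the <code>nClients</code> entry first (an assumption based on the desired output shown above):</p>

```python
Clients = {
    "data": [
        {"nClients": 3},
        {"name": "cTest1", "roll_no": 1, "branch": "c"},
        {"name": "fTest3", "roll_no": 3, "branch": "it3"},
        {"name": "aTest2", "roll_no": 2, "branch": "it2"},
    ]
}

# Entries without "name" sort first; the rest sort alphabetically by name.
Clients["data"].sort(key=lambda d: (0, "") if "name" not in d else (1, d["name"]))
print([d.get("name") for d in Clients["data"]])
# [None, 'aTest2', 'cTest1', 'fTest3']
```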
|
<python><json><sorting><alphabetical>
|
2024-06-10 23:41:51
| 1
| 1,974
|
Sim81
|
78,604,689
| 19,048,408
|
With Python Polars, how to compare two frame (like with `==`), but while returning True/False on comparisons with null
|
<p>With python polars, how can I compare two dataframes (like with <code>==</code>), and get a comparison result per-cell, but while returning <code>True</code>/<code>False</code> on comparisons with null. By default, doing <code>df1 == df2</code> results in <code>null</code> being in any cells where either <code>df1</code> or <code>df2</code> contains a <code>null</code>.</p>
<p>For example:</p>
<pre class="lang-py prettyprint-override"><code>df1 = pl.DataFrame(
{
"a": [1, 2, 3, None, 5],
"b": [5, 4, 3, 2, None],
}
)
df2 = pl.DataFrame(
{
"a": [1, 2, 3, 1, 5],
"b": [5, 4, 30, 2, None],
}
)
print(f"df1: {df1}")
print(f"df2: {df2}")
print(f"df1 == df2: {df1 == df2}")
</code></pre>
<p>Results in:</p>
<pre><code>df1: shape: (5, 2)
ββββββββ¬βββββββ
β a β b β
β --- β --- β
β i64 β i64 β
ββββββββͺβββββββ‘
β 1 β 5 β
β 2 β 4 β
β 3 β 3 β
β null β 2 β
β 5 β null β
ββββββββ΄βββββββ
df2: shape: (5, 2)
βββββββ¬βββββββ
β a β b β
β --- β --- β
β i64 β i64 β
βββββββͺβββββββ‘
β 1 β 5 β
β 2 β 4 β
β 3 β 30 β
β 1 β 2 β
β 5 β null β
βββββββ΄βββββββ
df1 == df2: shape: (5, 2)
ββββββββ¬ββββββββ
β a β b β
β --- β --- β
β bool β bool β
ββββββββͺββββββββ‘
β true β true β
β true β true β
β true β false β
β null β true β
β true β null β
ββββββββ΄ββββββββ
</code></pre>
<p>However, I'm trying to determine how to get the following result:</p>
<pre><code>df1 compared to df2: shape: (5, 2)
ββββββββ¬ββββββββ
β a β b β
β --- β --- β
β bool β bool β
ββββββββͺββββββββ‘
β true β true β
β true β true β
β true β false β
βfalse β true β <- false b/c cell is null in one DF, and a value in the other
β true β true β <- bottom-right cell is true
ββββββββ΄ββββββββ because df1 and df2 have the same value (null)
</code></pre>
|
<python><dataframe><python-polars>
|
2024-06-10 22:13:24
| 3
| 468
|
HumpbackWhale194
|
78,604,335
| 4,936,825
|
Bland-Altman plots
|
<p>I need to make Bland-Altman plots, but I cannot find a good R or Python package for this.</p>
<p>So far, I've found a single function in Statsmodels and another one in a package called Pingouin. But neither of these seems to cover the case with proportional bias or heteroskedastic differences.</p>
<p>Is there an R or Python package (or any other software) out there that does this out of the box? That is, test for proportional bias and heteroskedasticity and make the appropriate plot based on these tests.</p>
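<p>For completeness, the classic plot itself is only a few lines; this sketch (with made-up data) covers the standard version only, without the proportional-bias or heteroskedasticity handling the question asks about:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripting
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a = rng.normal(10, 2, 100)           # method 1 (made-up measurements)
b = a + rng.normal(0.5, 1, 100)      # method 2, with a constant bias

mean = (a + b) / 2
diff = a - b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)        # 95% limits of agreement

fig, ax = plt.subplots()
ax.scatter(mean, diff, s=10)
for y in (bias, bias - loa, bias + loa):
    ax.axhline(y, linestyle="--")
ax.set_xlabel("Mean of methods")
ax.set_ylabel("Difference (a - b)")
```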
|
<python><r><statistics><statistical-test>
|
2024-06-10 20:05:20
| 1
| 723
|
Milad Shahidi
|
78,604,225
| 5,219,890
|
numpy : f-ordered arrays have slower assignment than their transpose
|
<p>If I have an array: <code>x = np.array(np.random.rand(2000, 400), order='F')</code>, and create a mask: <code>y = (x > .5)</code> (which also happens to become F-contiguous), then the following code:</p>
<p><code>x[y] = np.nan</code></p>
<p>Is significantly slower than: (with twice the time taken)</p>
<p><code>x.T[y.T] = np.nan</code></p>
<p>However, when benchmarking on my local machine with Python 3.10.12 and NumPy 1.21.5, there is still a performance gain, but a smaller one (~18%) (please see EDIT)</p>
<p>The platform of the server is linux x64, python 3.6.15, numpy 1.17.3.</p>
<p>It would help to know when the change was introduced*, but I'm having trouble pinpointing exactly what changed between the two NumPy versions. Could you please help me find what could be the cause, and where I can inspect it in the source?</p>
<p>*so we can optimize similar operations, in case upgrading introduces incompatibility</p>
<p>P.S. It's a bit silly to post a question after getting this far, but I could not easily find any other information regarding this fundamental change. I haven't gone through the changelogs yet.</p>
<p>EDIT: corrected versions. Also, it seems that there is a speed-up on my local machine as well, sorry for missing this earlier.
Timing code:</p>
<pre><code>import numpy as np

source = np.asfortranarray(np.random.rand(5000, 300))
source[3000][150] = np.nan
y = (source > .5)
y[0][0] = not y[0][0]

def trial1():
    global source, y
    x = np.array(source, order='F', copy=True)
    x[y] = np.nan

def trial2():
    global source, y
    x = np.array(source, order='F', copy=True)
    x.T[y.T] = np.nan

print(f"x flags : {source.flags}")
print(f"y flags : {y.flags}")

import timeit
for i in range(5):
    print(f"Without transpose: {timeit.timeit(trial1, number=100)}")
    print(f"With transpose: {timeit.timeit(trial2, number=100)}")
</code></pre>
<p>Locally: x, y are f-contiguous; without transpose ~0.66s, with transpose: ~0.53s</p>
<p>Server: x, y are f-contiguous; without transpose ~2.45s, with transpose: ~1.35s</p>
<p>EDIT #2: After @hpaulj's comment, here are some perf stats:</p>
<ol>
<li>With transpose (server)</li>
</ol>
<pre><code> 54,448,864 cache-references:u
11,400,924 cache-misses:u # 20.939 % of all cache refs
18,837,142,074 instructions:u
14,265 faults:u
</code></pre>
<ol start="2">
<li>Without transpose (server)</li>
</ol>
<pre><code> 168,729,841 cache-references:u
105,950,111 cache-misses:u # 62.793 % of all cache refs
17,314,722,307 instructions:u
17,824 faults:u
</code></pre>
<ol start="3">
<li>With transpose (local - i7-13700)</li>
</ol>
<pre><code> 231,283,068 cpu_core/cache-references/ (94.47%)
4,298,631 cpu_atom/cache-references/ (27.28%)
58,979,974 cpu_core/cache-misses/ # 25.50% of all cache refs (94.47%)
2,696,648 cpu_atom/cache-misses/ # 1.17% of all cache refs (27.28%)
36,328,986,080 cpu_core/instructions/ (94.47%)
30,357,732,295 cpu_atom/instructions/ (27.28%)
6,208 faults
</code></pre>
<ol start="4">
<li>Without transpose (local)</li>
</ol>
<pre><code> 229,462,046 cpu_core/cache-references/ (95.34%)
786,111 cpu_atom/cache-references/ (22.32%)
107,393,341 cpu_core/cache-misses/ # 46.80% of all cache refs (95.34%)
467,948 cpu_atom/cache-misses/ # 0.20% of all cache refs (22.32%)
34,146,964,025 cpu_core/instructions/ (95.34%)
37,761,817,540 cpu_atom/instructions/ (22.32%)
6,214 faults
</code></pre>
<p>I think, the cache misses indicate that there is a difference in accessing these arrays (it could be the indexing expression, or the array itself). However, the cache references differ greatly on the server - this could be due to the hardware and the cause of the larger performance difference.</p>
<p>NOTE: I have also tried the above tests on Py 3.12.3 with NumPy 1.26.4 (linux x64 with i7-1360P) and the results are close to the "local" machine above</p>
<p>EDIT#3: Seems entirely due to hardware - running with Py 3.6.15 and NumPy 1.17.3 gives the same performance ratio and perf stats as the new ones.</p>
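<p>One more data point for the optimization angle: the same masked assignment can be written with <code>np.copyto</code>, which avoids the boolean fancy-indexing path altogether. Whether that is actually faster for F-ordered arrays is an assumption I still need to benchmark, not a claim:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.asfortranarray(rng.random((2000, 400)))
y = x > 0.5
n_masked = int(y.sum())

# Equivalent to x[y] = np.nan, expressed as a where-masked copy.
np.copyto(x, np.nan, where=y)

assert int(np.isnan(x).sum()) == n_masked
```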
|
<python><numpy>
|
2024-06-10 19:36:38
| 0
| 549
|
Vedaant Arya
|
78,604,108
| 864,598
|
Python in VSCode Editor does not recognize modules in same folder
|
<p>As you can see in the image below, thread_processor and order_processor are in the same folder as main.py, yet they are not recognized as modules.
It is looking only at the root folder.</p>
<p>Even adding the code below in launch.json is not helping either. Thanks.</p>
<pre><code>{
    "configurations": [
        {
            "name": "Python",
            "type": "python",
            "stopOnEntry": false,
            "request": "launch",
            "pythonPath": "${config.python.pythonPath}",
            "program": "${file}",
            "cwd": "${workspaceRoot}",
            "debugOptions": [
                "WaitOnAbnormalExit",
                "WaitOnNormalExit",
                "RedirectOutput"
            ],
            "env": {
                "PYTHONPATH": "/src/order_processing_engine"
            }
        }
    ]
}
</code></pre>
<p><a href="https://i.sstatic.net/fzWmh756.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzWmh756.png" alt="enter image description here" /></a></p>
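<p>For reference, a sketch of the same config in the newer debugpy style, which is what I plan to try next. <code>${workspaceFolder}</code> and <code>${fileDirname}</code> are VS Code's predefined variables; the PYTHONPATH value assumes the sources live under <code>src/order_processing_engine</code> inside the workspace:</p>

```json
{
  "configurations": [
    {
      "name": "Python: Current File",
      "type": "debugpy",
      "request": "launch",
      "program": "${file}",
      "cwd": "${fileDirname}",
      "env": {
        "PYTHONPATH": "${workspaceFolder}/src/order_processing_engine"
      }
    }
  ]
}
```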
|
<python><visual-studio-code>
|
2024-06-10 19:01:44
| 2
| 1,014
|
M80
|
78,604,097
| 11,141,816
|
How to capture printed messages in parallel process?
|
<p>I have a function <code>parallel_run</code> which prints diagnostic messages during the run and then returns a result. I want to capture the printed messages as well as the returned value. However, I somehow get many error messages.</p>
<pre><code>import concurrent.futures
import matplotlib.pyplot as plt
import io
import contextlib

# Define a mock function to simulate parallel_run
def parallel_run(x_z_pair, Precision, Max_iterations):
    x_val, z_val = map(float, x_z_pair)
    # Mock diagnostic message and result
    diagnostic_message = f"Running with x={x_val}, z={z_val}, Precision={Precision}, Max_iterations={Max_iterations}"
    print(diagnostic_message)
    result = [(x_val, z_val), [x_val * 0.1, x_val * 0.2]]  # Mock result
    return result

# Function to capture printed diagnostic messages
def run_parallel_with_capture(x_z_pair, Precision, Max_iterations):
    try:
        f = io.StringIO()
        with contextlib.redirect_stdout(f):
            result = parallel_run(x_z_pair, Precision, Max_iterations)
        diagnostic_message = f.getvalue().strip()  # Capture and strip the diagnostic message
        return diagnostic_message, result
    except Exception as e:
        return f"Exception: {str(e)}", None

# Generate the list of x_z_pair values
x_values = [round(x * 0.25, 2) for x in range(4, 41)]  # From 1 to 10 in steps of 0.25
x_z_pairs = [(str(x), str(1.0)) for x in x_values]  # Assuming z is fixed at 1.0

# Fixed parameters
Precision = 20
Max_iterations = 6

# Parallel processing function
def run_parallel(x_z_pair):
    return run_parallel_with_capture(x_z_pair, Precision, Max_iterations)

if __name__ == '__main__':
    results = []
    with concurrent.futures.ProcessPoolExecutor(max_workers=5) as executor:
        futures = [executor.submit(run_parallel, x_z_pair) for x_z_pair in x_z_pairs]
        for future in concurrent.futures.as_completed(futures):
            try:
                results.append(future.result())
            except Exception as exc:
                print(f'Generated an exception: {exc}')

    # Collect diagnostic messages and results
    diagnostic_messages = [res[0] for res in results if res[1] is not None]
    results_data = [res[1] for res in results if res[1] is not None]

    # Extract data for plotting
    x_vals = [res[0][0] for res in results_data]
    y_vals = [res[1][0] for res in results_data]  # Using generic names instead of delta_b_up_vals

    # Plot y with respect to x_val
    plt.plot(x_vals, y_vals, marker='o')
    plt.xlabel('x_val')
    plt.ylabel('y_val')  # Using generic name instead of Delta_b_up
    plt.title('y_val vs x_val')  # Using generic name instead of Delta_b_up
    plt.grid(True)
    plt.show()

    # If you need to see the diagnostic messages
    for message in diagnostic_messages:
        print(message)
</code></pre>
<p>Error message:</p>
<pre><code>Generated an exception: A process in the process pool was terminated abruptly while the future was running or pending.
Generated an exception: A process in the process pool was terminated abruptly while the future was running or pending.
Generated an exception: A process in the process pool was terminated abruptly while the future was running or pending.
...
Generated an exception: A process in the process pool was terminated abruptly while the future was running or pending.
Generated an exception: A process in the process pool was terminated abruptly while the future was running or pending.
</code></pre>
<p>I want to know if there's a way to collect the messages and the returned values. This is a process pool, not a thread pool. I don't understand why the IO still seems to be causing trouble, since the workers were running independently.</p>
<p>How to capture printed messages in parallel process?</p>
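<p>To rule out the capture mechanism itself, here is a stripped-down version of the redirect-inside-the-worker pattern with a trivial mock work function (my own naming); as far as I can tell this pattern itself is sound, which makes me suspect the crash comes from something else in my setup:</p>

```python
import concurrent.futures
import contextlib
import io

def work(n):
    # Redirect stdout inside the worker process, around the print call.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print(f"processing {n}")
    return buf.getvalue().strip(), n * n

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=2) as ex:
        results = list(ex.map(work, range(3)))
    for message, value in results:
        print(message, value)
```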
|
<python><parallel-processing><io><concurrent.futures>
|
2024-06-10 18:57:48
| 0
| 593
|
ShoutOutAndCalculate
|
78,604,018
| 836,026
|
ImportError: cannot import name 'packaging' from 'pkg_resources' when trying to install causal_conv1d
|
<p>I was trying to install "causal_conv1d" using:</p>
<pre><code>pip install --no-cache-dir -t /scratch/ahmed/lib causal_conv1d==1.0.0
</code></pre>
<p>The error I got is:</p>
<pre><code>Collecting causal_conv1d==1.0.0
  Downloading causal_conv1d-1.0.0.tar.gz (6.4 kB)
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [9 lines of output]
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "/tmp/pip-install-9i0wsv2k/causal-conv1d_fc0a21267f664102adca1aa336c93106/setup.py", line 19, in <module>
          from torch.utils.cpp_extension import (
        File "/scratch/ahmed/lib/torch/utils/cpp_extension.py", line 28, in <module>
          from pkg_resources import packaging  # type: ignore[attr-defined]
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      ImportError: cannot import name 'packaging' from 'pkg_resources' (/scratch/ahmed/lib/pkg_resources/__init__.py)
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
|
<python><pytorch><pip><anaconda><mamba-ssm>
|
2024-06-10 18:39:14
| 2
| 11,430
|
user836026
|
78,603,913
| 10,140,821
|
Find the 1st business day and last business day of the previous month based on a date
|
<p>I have a use case like this in Python: Find the 1st business day and last business day of the previous month based on a date. For example if date is <code>2024-06-10</code></p>
<pre><code>first_business_day = '2024-05-01'
last_business_day = '2024-05-31'
</code></pre>
<p>I have tried like below</p>
<pre><code>run_date = '2024-06-10'
from datetime import datetime, timedelta
d = datetime.strptime(run_date, '%Y-%m-%d').date()
previous_month_first_business_day = (d - timedelta(days=d.day)).replace(day=1).strftime("%Y-%m-%d")
previous_month_last_business_day = (d - timedelta(days=d.day)).strftime("%Y-%m-%d")
</code></pre>
<p>Result:</p>
<pre><code>previous_month_first_business_day = '2024-05-01'
previous_month_last_business_day = '2024-05-31'
</code></pre>
<p>This is working fine for the month of <strong>May</strong>, but when I want the same result for <strong>June</strong> (i.e. a run date in July), then using the above I am getting:</p>
<pre><code>previous_month_first_business_day = '2024-06-01' # This should be '2024-06-03' (June 1, 2024 is a Saturday)
previous_month_last_business_day = '2024-06-30' # This should be '2024-06-28' (June 29 and 30, 2024 fall on a weekend)
</code></pre>
<p>What should I do to achieve the right result?</p>
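<p>One direction I'm exploring is NumPy's business-day helpers, which can roll a date forward or backward to the nearest weekday (note: weekends only, with no holiday calendar unless one is passed in). A sketch with my own function name:</p>

```python
import numpy as np
from datetime import date, timedelta

def prev_month_business_days(run_date: str) -> tuple:
    d = date.fromisoformat(run_date)
    last_of_prev = d.replace(day=1) - timedelta(days=1)
    first_of_prev = last_of_prev.replace(day=1)
    # roll='forward'/'backward' moves a non-business day to the nearest business day
    first_bd = np.busday_offset(first_of_prev.isoformat(), 0, roll='forward')
    last_bd = np.busday_offset(last_of_prev.isoformat(), 0, roll='backward')
    return str(first_bd), str(last_bd)

print(prev_month_business_days('2024-06-10'))  # ('2024-05-01', '2024-05-31')
print(prev_month_business_days('2024-07-10'))  # ('2024-06-03', '2024-06-28')
```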
|
<python><date>
|
2024-06-10 18:08:17
| 2
| 763
|
nmr
|
78,603,841
| 2,153,235
|
More detailed info about data type that is common throughout a Series?
|
<p>I have a dataframe column consisting entirely of a common type <code>dict</code>. Is there any way to query the Series type to reveal the common data type? It currently only tells me that it is an object, which I understand is an array of references. But if the things being referenced are of the same type for an entire Series, it would be useful to know this fact, as well as the specific common type.</p>
<pre><code>>>> df = pd.DataFrame([[{'c':[1,2]}],[{'d':[3,4]}]],columns=['A'])
A
0 {'c': [1, 2]}
1 {'d': [3, 4]}
>>> df['A'].dtype
dtype('O')
>>> type(df['A'])
pandas.core.series.Series
</code></pre>
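<p>For reference, the workaround I have so far is mapping <code>type</code> over the values, which at least confirms the elements are homogeneous; the downside is that it scans the whole Series rather than reading any stored metadata:</p>

```python
import pandas as pd

df = pd.DataFrame([[{'c': [1, 2]}], [{'d': [3, 4]}]], columns=['A'])

# The dtype only says 'O'; finding the common element type needs a scan.
kinds = df['A'].map(type).unique()
print(kinds)  # a single entry confirms the column is homogeneous
```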
|
<python><pandas><types><series><dtype>
|
2024-06-10 17:53:57
| 1
| 1,265
|
user2153235
|
78,603,728
| 4,577,467
|
How to embed Python 3.12 in a Windows C++ app?
|
<p>The operating system is Windows 10.</p>
<p>I downloaded the installer for Python 3.12.3 from <a href="https://www.python.org/downloads/" rel="nofollow noreferrer">https://www.python.org/downloads/</a>.
During custom installation, I chose an install location: <code>D:\Programs\Python\Python312</code></p>
<p>After installation, I confirmed that the <code>D:\Programs\Python\Python312</code> directory exists, and that <code>python.exe</code> is in it. No Windows environment variables were changed by the installation.</p>
<p>I did not establish any Python virtual environments.</p>
<p>In Visual Studio 2017, I created a new C++ Windows console application project, named <code>runpy</code>. I changed the project settings so that it can find headers and libraries for the Python version I installed. The following changes are sufficient for the app to compile and link.</p>
<ul>
<li>In C/C++ > General > Additional Include Directories, added: <code>D:\Programs\Python\Python312\include</code></li>
<li>In Linker > General > Additional Library Directories, added: <code>D:\Programs\Python\Python312\libs</code></li>
</ul>
<p>Also, I manually copied <code>python312_d.dll</code> to the output directory of the app. i.e., from <code>D:\Programs\Python\Python312</code> to <code>D:\work\runpy\x64\Debug</code>. I will make that a postbuild step in the project later. That is sufficient for the app to run.</p>
<p>I attempted to embed Python in the C++ app with the following code.</p>
<pre><code>// Precompiled header includes other stuff not relevant to embedding Python.
#include "stdafx.h"

// Standard stuff
#include <iostream> // std::cout

// Python
#include <Python.h>

void on_PyStatusException( PyStatus const & status )
{
    std::cout << "PyStatus.func: " << ( status.func ? status.func : "n/a" ) << std::endl;
    std::cout << "PyStatus.err_msg: " << ( status.err_msg ? status.err_msg : "n/a" ) << std::endl;
    Py_ExitStatusException( status );
}

int main( int argc, char * argv[] )
{
    printf( "Howdy this is a C++ app that embeds Python\n");

    PyStatus status;
    PyConfig config;

    //------------------------------
    // Sets config.isolated to 1, and other stuff.
    // But not sufficient by itself.
    PyConfig_InitIsolatedConfig( &config );

    // Unknown if important. Returns a successful status.
    status = PyConfig_Read( &config );

    // Provide an explicit path to the Python executable. Returns a successful status.
    wchar_t * pythonPath = L"D:\\Programs\\Python\\Python312\\python.exe";
    status = PyConfig_SetString( &config, &config.executable, pythonPath );

    //------------------------------
    // Does *not* work. Returns a failure status.
    status = Py_InitializeFromConfig( &config );
    if ( PyStatus_Exception( status ) )
    {
        on_PyStatusException( status );
    }

    //------------------------------
    // Cleanup.
    PyConfig_Clear( &config );

    return 0;
}
</code></pre>
<p>However, <code>Py_InitializeFromConfig()</code> fails with the following output.</p>
<pre><code>Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = 'python'
isolated = 1
environment = 0
user site = 0
safe_path = 1
import site = 1
is in build tree = 0
stdlib dir = 'D:\work\runpy\Lib'
sys._base_executable = 'D:\\Programs\\Python\\Python312\\python.exe'
sys.base_prefix = 'D:\\work\\runpy'
sys.base_exec_prefix = 'D:\\work\\runpy'
sys.platlibdir = 'DLLs'
sys.executable = 'D:\\Programs\\Python\\Python312\\python.exe'
sys.prefix = 'D:\\work\\runpy'
sys.exec_prefix = 'D:\\work\\runpy'
sys.path = [
'D:\\work\\runpy\\x64\\Debug\\python312_d.zip',
'D:\\work\\runpy',
'D:\\work\\runpy\\Lib',
'D:\\work\\runpy\\x64\\Debug',
]
PyStatus.func: init_fs_encoding
PyStatus.err_msg: failed to get the Python codec of the filesystem encoding
</code></pre>
<p>Several of the Python path configuration settings in the above output are <strong>clearly wrong</strong>. They point to the directory of the <code>runpy</code> app, instead of to Python stuff. What other <code>PyConfig</code> properties should I establish so that <code>Py_InitializeFromConfig()</code> will succeed?</p>
<p>I have been following examples in other StackOverflow posts and the <a href="https://docs.python.org" rel="nofollow noreferrer">https://docs.python.org</a> documentation, but I cannot find the correct combination.</p>
|
<python><c++><python-3.x><embed>
|
2024-06-10 17:24:56
| 2
| 927
|
Mike Finch
|
78,603,690
| 8,049,947
|
Information_schema INVALID IDENTIFIER from snowflake when trying to access query history through a python script
|
<p>I am currently working with Snowflake to view the query history. However, when I attempt to execute the same command in Python, following its syntax rules, I encounter an error stating <code>"INVALID IDENTIFIER" for "information_schema"</code>.</p>
<p><code>select * from table(information_schema.query_history()) </code></p>
|
<python><snowflake-cloud-data-platform>
|
2024-06-10 17:15:48
| 1
| 308
|
panther93
|
78,603,623
| 4,408,275
|
Redirect stdout/stderr in tkinter GUI app when launched with **pythonw**
|
<p>I want to launch a python GUI application on Windows.</p>
<p>To <strong>not</strong> block the console when started from one, I need to use <code>pythonw</code> to run this application.</p>
<p>Additionally, I need to provide a batch script that does some setup, and users need to run the application through that script.
When users launch (i.e., double-click) the batch script, I don't want the cmd.exe window to show.</p>
<p>The batch script is therefore something like this</p>
<pre><code>REM set BLA=123
REM and other environment stuff and then run the GUI
pythonw myapp.py
</code></pre>
<p>As stdout/stderr are not available, no error is shown when clicking the button and nobody knows what happened:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk

def command():
    1 / 0

root = tk.Tk()
button = tk.Button(root, text="button", command=command)
button.pack()
root.mainloop()
</code></pre>
<p>My question is therefore, how can I redirect stdout and stderr to a GUI window (frame, messagebox or similar).</p>
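<p>For the uncaught-exception part specifically, one pattern I'm experimenting with is overriding Tk's <code>report_callback_exception</code> hook, which Tkinter calls for exceptions raised inside callbacks; showing the traceback in a messagebox is just one possible sink (a sketch, not yet verified under <code>pythonw</code>):</p>

```python
import tkinter as tk
import traceback
from tkinter import messagebox

class App(tk.Tk):
    # Tkinter routes uncaught callback exceptions through this method,
    # so overriding it surfaces errors even when there is no console.
    def report_callback_exception(self, exc_type, exc_value, exc_tb):
        msg = "".join(traceback.format_exception(exc_type, exc_value, exc_tb))
        messagebox.showerror("Unhandled exception", msg, parent=self)

# Usage is unchanged apart from the subclass:
# root = App()
# tk.Button(root, text="button", command=lambda: 1 / 0).pack()
# root.mainloop()
```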
|
<python><tkinter>
|
2024-06-10 16:56:48
| 0
| 1,419
|
user69453
|
78,603,611
| 3,663,742
|
django flatpages will show only one page
|
<p>I'm trying to use flatpages in my Django application.
I installed it properly as far as I can see and created 2 pages.
Using the shell, when I try:</p>
<pre class="lang-py prettyprint-override"><code>>>> from django.contrib.flatpages.models import FlatPage
>>> FlatPage.objects.all()
<QuerySet [<FlatPage: /pages/en-US/about/privacy/ -- Privacy Policy>, <FlatPage: /pages/fr-FR/about/privacy/ -- Politique de ConfidentialitΓ©>]>
</code></pre>
<p>I can see my 2 pages.
But in a template, I'm using:</p>
<pre class="lang-html prettyprint-override"><code>{% get_flatpages as about_pages %}
{% for page in about_pages %}
<a class="navbar-item" href="{{ page.url }}">{{ page.title }}</a>
{% endfor %}
</code></pre>
<p>And there I can see only the last page.
Also when I access directly the first page using it's full URL (<a href="http://127.0.0.1:8000/pages/fr-FR/about/privacy/" rel="nofollow noreferrer">http://127.0.0.1:8000/pages/fr-FR/about/privacy/</a>) it displays nicely but when I try the same with the other one (<a href="http://127.0.0.1:8000/pages/en-US/about/privacy/" rel="nofollow noreferrer">http://127.0.0.1:8000/pages/en-US/about/privacy/</a>) I get a 404.</p>
<p>I tried to change the name of the first page, but nothing works.
Any idea?</p>
|
<python><django><django-flatpages>
|
2024-06-10 16:54:04
| 1
| 436
|
Olivier
|
78,603,549
| 13,142,245
|
FastAPI run config commands before serving application?
|
<p>Does FastAPI automatically configure environment, imports, ML model instantiation, etc. when you run <code>uvicorn Package.API:app --reload --port 8001</code> ?</p>
<p>The result I'm seeing is</p>
<pre><code>INFO: Will watch for changes in these directories: ['/Users/me/Documents/application']
INFO: Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO: Started reloader process [35866] using StatReload
</code></pre>
<p>However, when I navigate to <a href="http://127.0.0.1:8001/docs" rel="nofollow noreferrer">http://127.0.0.1:8001/docs</a>, the browser cannot connect. This seems inconsistent.</p>
<pre class="lang-py prettyprint-override"><code>import logging

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from .Inference import TextClassifier

logging.basicConfig(level=logging.INFO)

# Instantiate the model
try:
    model = TextClassifier("./Package/model_artifacts.joblib")
except Exception as e:
    logging.error(f"Failed to initialize TextClassifier: {str(e)}")
    raise

app = FastAPI(title="mlApp")

class MlRequest(BaseModel):
    text: str

@app.post("/ml_request")
async def ml_request(request: MlRequest):
    try:
        text = request.text
        predicted_class = model.predict_labels(text)
        return {"predicted_class": predicted_class}
    except Exception as e:
        logging.error(f"Error in ml_request: {str(e)}")
        raise HTTPException(status_code=400, detail=str(e))
</code></pre>
<p>So my question is really, do I need to tell my machine, "import these packages, instantiate this ML model, then serve the API?"</p>
<p>If yes, maybe this behavior is expected. If no, then maybe I need to troubleshoot my machine.</p>
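<p>To partly answer my own question: as far as I understand, <code>uvicorn Package.API:app</code> simply imports <code>Package.API</code>, and importing a module executes all of its top-level statements, including the model instantiation. A self-contained check of that import behavior with a mock module (my own naming, nothing FastAPI-specific):</p>

```python
import importlib.util
import os
import tempfile
import textwrap

# A mock "API module" whose top level mimics env setup / model loading.
src = textwrap.dedent("""
    print("top-level setup runs at import time")
    MODEL = "loaded"
    app = object()  # stand-in for the FastAPI instance
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(src)
    path = f.name

spec = importlib.util.spec_from_file_location("fake_api", path)
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)  # the import step itself runs the top-level code
os.remove(path)

assert mod.MODEL == "loaded"  # the "model" was built during import
```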
|
<python><fastapi>
|
2024-06-10 16:39:01
| 1
| 1,238
|
jbuddy_13
|
78,603,383
| 13,187,876
|
Write data directly to blob storage from an Azure Machine Learning Studio notebook
|
<p>I'm working on some interactive development in an Azure Machine Learning notebook and I'd like to save some data directly from a <code>pandas DataFrame</code> to a <code>csv</code> file in my default connected blob storage account. I'm currently loading some data the following way:</p>
<pre><code>import pandas as pd
uri = f"azureml://subscriptions/<sub_id>/resourcegroups/<res_grp>/workspaces/<workspace>/datastores/<datastore_name>/paths/<path_on_datastore>"
df = pd.read_csv(uri)
</code></pre>
<p>I have no problem loading this data, but after some basic transformations I'd like to save this data to my storage account. Most, if not all, solutions I have found suggest saving this file to a local directory and then uploading the saved file to my storage account. The best solution I have found for this is the following, which uses <code>tempfile</code> so I don't have to go and delete any 'local' files afterwards:</p>
<pre><code>from azureml.core import Workspace
import tempfile

ws = Workspace.from_config()
datastore = ws.datastores.get("exampleblobstore")

with tempfile.TemporaryDirectory() as tmpdir:
    tmpath = f"{tmpdir}/example_file.csv"
    df.to_csv(tmpath)
    datastore.upload_files([tmpath], target_path="path/to/target.csv", overwrite=True)
</code></pre>
<p>This is a reasonable solution, but I'm wondering if there is any way I can directly write to my storage account without the need to save the file first. Ideally I'd like to do something as simple as:</p>
<pre><code>target_uri = f"azureml://subscriptions/<sub_id>/resourcegroups/<res_grp>/workspaces/<workspace>/datastores/<datastore_name>/paths/<path_on_datastore>"
df.to_csv(target_uri)
</code></pre>
<p>After some reading I thought the class <code>AzureMachineLearningFileSystem</code> may allow me to read and write data to my datastore, in a similar way to how I might when developing on a local machine, however, it appears this class will not let me write data, only inspect the 'file system' and read data from it.</p>
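<p>One detail worth noting: <code>to_csv</code> accepts any writable file-like object, not just a path, so the temporary file is purely a workaround for not having a writable handle into the datastore. A pandas-only illustration of that point (not Azure-specific):</p>

```python
import io
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]})

# Serialize into an in-memory buffer instead of a file on disk.
buf = io.StringIO()
df.to_csv(buf, index=False)
csv_text = buf.getvalue()

# Round-trip to show the buffer holds a complete, valid CSV.
roundtrip = pd.read_csv(io.StringIO(csv_text))
assert roundtrip.equals(df)
```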
|
<python><pandas><azure><machine-learning><azure-machine-learning-service>
|
2024-06-10 16:05:39
| 1
| 773
|
Matt_Haythornthwaite
|
78,603,297
| 13,049,379
|
Split a PIL image into multiple regions with a patch included in one of the split region
|
<p>I need to write a python code that accepts a PIL image and split the image into M rows and N columns, such that one of the split region completely contains a patch of height H and width W starting at point (p,q).</p>
<p>I am mostly able to write the code and my attempt is shown below, but I am not able to generalise it for any M and N. For example, it sometimes breaks down for M=3, N=1.</p>
<p>Can someone help me out on this by either suggesting what change to make in the below code or any other way using PIL or Numpy.</p>
<p>--- EDIT (adding more details on the limitations of the code below)</p>
<p>The main issue with the code below is that the splitting is agnostic to the point (p,q) and the patch dimensions H,W; i.e., the split regions are decided only by M, N and the height and width of the original image. To be more precise: if the patch is NOT completely included inside one of the split regions produced by the code below, then the split regions should re-adjust to make sure the patch is completely contained within a single split region. It's fine if the split regions have non-uniform dimensions. In fact, it's better if the split region containing the full patch is of the smallest size.</p>
<pre><code>from PIL import Image

def split_image_with_path(image, M, N, H, W, p, q):
    """
    Splits a PIL image into M rows and N columns, ensuring one split region
    contains a patch of height H and width W starting at point (p, q).

    Args:
        image: A PIL Image object.
        M: Number of rows to split the image into.
        N: Number of columns to split the image into.
        H: Height of the patch.
        W: Width of the patch.
        p: Starting x-coordinate of the patch.
        q: Starting y-coordinate of the patch.

    Returns:
        A list of PIL Image objects representing the split image regions,
        such that one of the split regions contains the specified patch.
    """
    width, height = image.size

    # Calculate the size of each split region.
    split_width = width // N
    split_height = height // M

    # Ensure the patch is completely contained within one split region.
    if not (p < split_width * N and q < split_height * M and p + W <= split_width * N and q + H <= split_height * M):
        raise ValueError("Patch is not completely contained within one split region.")

    # Split the image into M rows and N columns.
    split_images = []
    for i in range(M):
        for j in range(N):
            left = j * split_width
            right = (j + 1) * split_width
            top = i * split_height
            bottom = (i + 1) * split_height
            split_images.append(image.crop((left, top, right, bottom)))
    return split_images

dummy_image = Image.open("/path/to/image.jpg")
display(dummy_image)
splits = split_image_with_path(dummy_image, M=3, N=1, H=100, W=100, p=10, q=10)
for split in splits:
    display(split)
</code></pre>
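<p>For what it's worth, the direction I'm currently trying is to first compute the even grid lines and then snap any line that falls strictly inside the patch to the nearer patch edge, so the patch always ends up whole inside one (possibly smaller) region. A sketch of that idea (my own helper names; duplicate cuts produced by snapping are simply dropped, so the number of regions can shrink when the patch is wider than a cell):</p>

```python
from PIL import Image

def snapped_cuts(total, parts, lo, hi):
    """Cut positions for `parts` even pieces of length `total`, adjusted so
    that no cut falls strictly inside the interval (lo, hi)."""
    cuts = [i * total // parts for i in range(parts + 1)]
    cuts = [lo if lo < c < hi and (c - lo) <= (hi - c) else
            hi if lo < c < hi else c
            for c in cuts]
    return sorted(set(cuts))  # snapping may create duplicates; drop them

def split_containing_patch(image, M, N, H, W, p, q):
    xs = snapped_cuts(image.width, N, p, p + W)
    ys = snapped_cuts(image.height, M, q, q + H)
    return [image.crop((xs[j], ys[i], xs[j + 1], ys[i + 1]))
            for i in range(len(ys) - 1) for j in range(len(xs) - 1)]

img = Image.new("RGB", (300, 300))
parts = split_containing_patch(img, M=3, N=1, H=100, W=100, p=10, q=10)
# The default row cut at y=100 would slice the patch (rows 10..110),
# so it gets snapped to 110 and the first region contains the whole patch.
```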
|
<python><numpy><image><python-imaging-library>
|
2024-06-10 15:45:39
| 1
| 1,433
|
Mohit Lamba
|