| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars) |
|---|---|---|---|---|---|---|---|---|
78,341,716
| 6,197,439
|
Can you catch a PyQt5 "fail fast exception" in Python?
|
<p>Usually, calling a non-existent method on a PyQt5 object, such as calling <code>.show()</code> on a boolean variable as in the click handler below, triggers an exception; this example will always trigger one:</p>
<pre class="lang-python prettyprint-override"><code>import sys, os
from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        print("Hello from MainWindow")
        self.setWindowTitle("My App")
        button = QPushButton("Press me!")
        button.clicked.connect(self.button_clicked)
        self.setCentralWidget(button)

    def button_clicked(self, s):
        print("click", s)
        s.show()

def main(argv=None):
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    try:
        app.exec()
    except:
        print("Exception")

if __name__ == '__main__':
    main()
</code></pre>
<p>So when I click the button in this example, I get:</p>
<p><a href="https://i.sstatic.net/hDxlo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hDxlo.png" alt="gui with exception" /></a></p>
<pre class="lang-none prettyprint-override"><code>---------------------------
My App: python3.exe - Fail Fast Exception
---------------------------
A fail fast exception occurred. Exception handlers will not be invoked and the process will be terminated immediately.
---------------------------
OK
---------------------------
</code></pre>
<p>OK - but if I wrap the <code>app.exec()</code> in a <code>try ... except</code> block, it obviously does not catch this exception (probably because it comes from a C++ PyQt5 object)?</p>
<p>So, my question is: can I somehow catch/handle this kind of exception in Python (but hopefully somewhere "down the chain", as in trying to catch any such application exception with a single statement, as I've tried to do with <code>try ... except</code> around <code>app.exec()</code>)?</p>
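Not from the question, but relevant context: as far as I know, since PyQt 5.5 an unhandled Python exception that escapes a slot is treated as fatal (hence the "fail fast" abort), so the usual workaround is to keep the exception on the Python side, either via a global `sys.excepthook` or by wrapping each slot. A minimal sketch of the wrapping approach; the decorator name `catch_slot_errors` is my own, and no Qt is needed to demonstrate the mechanism:

```python
import functools
import traceback

def catch_slot_errors(func):
    """Hypothetical decorator: keep exceptions on the Python side so they
    never escape the slot into Qt (which would abort the process)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            traceback.print_exc()   # or log / show a QMessageBox here
            return None
    return wrapper

# Plain-Python demonstration of the mechanism (no Qt required):
@catch_slot_errors
def button_clicked(s):
    s.show()                        # AttributeError: bool has no .show()

result = button_clicked(False)      # exception caught, process keeps running
print("still running, result =", result)
```

In the question's code, the decorator would go on `MainWindow.button_clicked`, which is the per-slot equivalent of the single-statement handler asked about.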
|
<python><exception><pyqt5>
|
2024-04-17 14:22:12
| 1
| 5,938
|
sdbbs
|
78,341,624
| 1,422,096
|
OrderedDict with fully custom key order
|
<p>I know that keys have no order in a <code>dict</code> (even if they keep the insertion order in recent implementations, as I saw in CPython core developer Hettinger's video, but this is off topic).</p>
<p>Is there a data structure in Python that behaves like a <code>dict</code>, but for which <strong>the key order can be customized, and the order is guaranteed to be preserved?</strong> (for example after a <code>pickle</code> dump + load, etc.)</p>
<p>Obviously, <a href="https://docs.python.org/3/library/collections.html#ordereddict-objects" rel="nofollow noreferrer"><code>collections.OrderedDict</code></a> seems a good candidate, but finally not, because we cannot fully customize the order of the keys, but only move to beginning or end: <a href="https://docs.python.org/3/library/collections.html#collections.OrderedDict.move_to_end" rel="nofollow noreferrer"><code>move_to_end</code></a>.</p>
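Not from the question, but worth noting: since Python 3.7, insertion order of a plain `dict` is guaranteed by the language and survives `pickle`, so a fully custom order can be had by simply rebuilding the dict in the desired key order:

```python
import pickle

d = {"b": 2, "a": 1, "c": 3}

# Desired fully custom key order
custom_order = ["c", "a", "b"]

# Rebuild the dict in that order; modern dicts preserve insertion order,
# and that order survives pickling
ordered = {k: d[k] for k in custom_order}
print(list(ordered))                      # ['c', 'a', 'b']

restored = pickle.loads(pickle.dumps(ordered))
print(list(restored))                     # ['c', 'a', 'b']
```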
|
<python><dictionary><ordereddictionary><ordereddict>
|
2024-04-17 14:10:02
| 1
| 47,388
|
Basj
|
78,341,400
| 6,197,439
|
Python log both ("tee") to terminal and file, and real-time flushing with PyQt5?
|
<p>First of all, my platform:</p>
<pre class="lang-none prettyprint-override"><code>$ for ix in "uname -s" "python3 --version"; do echo "$ix: " $($ix); done
uname -s: MINGW64_NT-10.0-19045
python3 --version: Python 3.11.9
</code></pre>
<p>I would like my PyQt5 application to have a logfile, where all terminal logging printouts to stdout or stderr would be saved; I came up with this example <code>test.py</code>:</p>
<pre class="lang-python prettyprint-override"><code>#!/usr/bin/env python
import sys, os

# based on https://stackoverflow.com/q/14906787
class LoggerFile(object):
    # "a" would append: here we want to restart log each time
    log = open("logfile.log", "w")

    def __init__(self):
        self.terminal = None
        self.id = None

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        # > this flush method is needed for python 3 compatibility.
        # > this handles the flush command by doing nothing.
        # > you might want to specify some extra behavior here.
        # MUST have this, to have logfile populated when running under PyQt5!
        # But even then - logfile is populated only after application exits!
        self.terminal.flush()
        self.log.flush()

class LoggerOut(LoggerFile):
    def __init__(self):
        self.terminal = sys.stdout
        self.id = "out"

class LoggerErr(LoggerFile):
    def __init__(self):
        self.terminal = sys.stderr
        self.id = "err"

# REMEMBER TO ASSIGN!:
sys.stdout = LoggerOut()
sys.stderr = LoggerErr()

from PyQt5.QtWidgets import QApplication, QMainWindow, QPushButton

class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        print("Hello from MainWindow")
        self.setWindowTitle("My App")
        button = QPushButton("Press me!")
        button.clicked.connect(self.button_clicked)
        self.setCentralWidget(button)

    def button_clicked(self, s):
        print("click", s)

def main(argv=None):
    print("Hello from main")
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    app.exec()

if __name__ == '__main__':
    main()
</code></pre>
<p>Once the "logfile.log" has been created, I can follow its changes in realtime by running <code>tail -f logfile.log</code> in another terminal.</p>
<p>So, this is what happens when I run <code>python3 test.py</code> from a terminal; I get these printouts immediately:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 test.py
Hello from main
Hello from MainWindow
</code></pre>
<p>... and the window starts up:</p>
<p><a href="https://i.sstatic.net/FQiYy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FQiYy.png" alt="GUI window" /></a></p>
<p>... and at this time, I can see from <code>tail -f logfile.log</code> the following:</p>
<pre class="lang-none prettyprint-override"><code>$ tail -f logfile.log
...
tail: logfile.log: file truncated
</code></pre>
<p>Good, so a new empty file got created, as desired - but notice no printouts yet.</p>
<p>Then I click on the button a few times - the <code>python3</code> terminal shows printouts immediately/in real-time:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 test.py
Hello from main
Hello from MainWindow
click False
click False
</code></pre>
<p>... however the <code>tail -f logfile.log</code> still prints nothing!</p>
<p>Finally, I close the window - the <code>python3</code> command exits in its terminal, and only <em>afterwards</em>, do we see the printout in the <code>tail -f logfile.log</code> terminal:</p>
<pre class="lang-none prettyprint-override"><code>tail: logfile.log: file truncated
Hello from main
Hello from MainWindow
click False
click False
</code></pre>
<p>It's as if PyQt5 buffers all stdout/stderr output until the application exits, but only for the log file writes, not for the terminal streams!? And that in spite of the explicit flush in the <code>LoggerFile</code> class?</p>
<p>(EDIT: moving the reassignments <code>sys.stdout = LoggerOut()</code> after <code>window = MainWindow()</code> has no effect on this file buffering behavior)</p>
<p>So, is it possible - and if so, how - to get such a "tee"-d log printout output to both terminal (stdout/stderr) and to file, which is unbuffered (i.e. writes are executed as soon as possible) for both terminal and file printouts?</p>
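A sketch of one possible fix (my own class, not the question's code): flush inside every `write()` and additionally `os.fsync()` the file descriptor, which pushes the data past the OS cache that may be what keeps `tail -f` from seeing it on MSYS2/MinGW. Demonstrated here with an in-memory stream standing in for the real terminal, so no GUI is needed:

```python
import io, os, tempfile

class Tee:
    """Write to two streams, flushing both after every write.
    For a real file, os.fsync() pushes the data past the OS cache
    so `tail -f` sees it immediately."""
    def __init__(self, terminal, logfile):
        self.terminal = terminal
        self.log = logfile

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)
        self.flush()                      # flush on every write

    def flush(self):
        self.terminal.flush()
        self.log.flush()
        try:
            os.fsync(self.log.fileno())   # force data to disk
        except (OSError, io.UnsupportedOperation):
            pass                          # not a real file (e.g. StringIO)

# demonstration with an in-memory "terminal" and a real temp file
fake_terminal = io.StringIO()
logfile = tempfile.NamedTemporaryFile("w+", delete=False)
tee = Tee(fake_terminal, logfile)
tee.write("hello\n")

# the bytes are already on disk, before the "application" exits
with open(logfile.name) as f:
    print(f.read())        # hello
```

In the question's code the same effect should follow from adding the `os.fsync` call to `LoggerFile.flush()` and calling `self.flush()` at the end of `write()`.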
|
<python><python-3.x><pyqt5><flush><output-buffering>
|
2024-04-17 13:37:18
| 0
| 5,938
|
sdbbs
|
78,341,248
| 13,014,864
|
Supply hex color values to heat map Plotnine
|
<p>I am working with some data and would like to make a heat map with a specified, pre-made color palette. The data frame is shown below (I am using <code>polars</code> for all the data frame ops in this analysis).</p>
<pre><code>┌─────────┬─────────┬──────┬────────────┬─────────┐
│ team1   ┆ team2   ┆ type ┆ value      ┆ color   │
│ ---     ┆ ---     ┆ ---  ┆ ---        ┆ ---     │
│ str     ┆ str     ┆ i64  ┆ f64        ┆ str     │
╞═════════╪═════════╪══════╪════════════╪═════════╡
│ team1_1 ┆ team2_1 ┆ 18   ┆ 115.850278 ┆ #443b84 │
│ team1_1 ┆ team2_2 ┆ 26   ┆ 24.241389  ┆ #470e61 │
│ team1_2 ┆ team2_2 ┆ 13   ┆ 3.278333   ┆ #440256 │
│ team1_1 ┆ team2_3 ┆ 30   ┆ 94.118333  ┆ #46327e │
│ team1_3 ┆ team2_3 ┆ 13   ┆ 35.186111  ┆ #481467 │
│ …       ┆ …       ┆ …    ┆ …          ┆ …       │
│ team1_2 ┆ team2_1 ┆ 18   ┆ 67.937778  ┆ #482576 │
│ team1_2 ┆ team2_4 ┆ 22   ┆ 0.0        ┆ #440154 │
│ team1_4 ┆ team2_2 ┆ 30   ┆ 1.199444   ┆ #440154 │
│ team1_1 ┆ team2_5 ┆ 15   ┆ 0.0        ┆ #440154 │
│ team1_1 ┆ team2_3 ┆ 11   ┆ 8.345278   ┆ #450559 │
└─────────┴─────────┴──────┴────────────┴─────────┘
</code></pre>
<p>In this dataset, I am trying to plot a single heat map figure for each of the <code>team2</code> entries, but I want to retain a single palette, so the heat maps are comparable between the different plots (i.e., dark blue in the plot for <code>team2_1</code> means the same thing as dark blue in the plot for <code>team2_5</code>). I produced the <code>color</code> column using the <code>mizani</code> library:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from mizani.palettes import cmap_pal

palette = cmap_pal("viridis")
palette_values = (df["value"] - df["value"].min()) / (
    df["value"].max() - df["value"].min()
)
df = df.with_columns(color=pl.Series(palette(palette_values)))
</code></pre>
<p>I then loop over all the unique values in the <code>team2</code> column and produce a heat map using <code>plotnine</code>.</p>
<pre class="lang-py prettyprint-override"><code>for t2 in df["team2"].unique():
    df1 = df.filter(pl.col("team2") == t2)
    color_dict = {
        key: value
        for key, value in zip(df1["value"], df1["color"])
    }
    plt = (
        pn.ggplot(
            data=df1.to_pandas(),
            mapping=pn.aes(
                x="team1",
                y="type",
                label="value",
                fill="value",
            ),
        )
        + pn.geom_tile(show_legend=False)
        + pn.scale_fill_manual(color_dict)
        + pn.geom_text(show_legend=False, size=9, format_string="{:.2f}")
    )
</code></pre>
<p>I am getting an error from <code>plotnine</code>: <code>TypeError: Continuous value supplied to discrete scale</code>. What is the best way to go about what I want here?</p>
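The `TypeError` itself arises because `scale_fill_manual` is a discrete scale while `fill="value"` is continuous. If I read the goal correctly, a continuous fill scale with fixed, global limits (plotnine has continuous fill scales such as `scale_fill_cmap`; check the exact name and its `limits` argument) keeps colours comparable across the per-team plots without any precomputed `color` column. A numpy-only sketch of the shared-limits idea (variable names are mine):

```python
import numpy as np

# Global limits computed once over the WHOLE data set, not per subset
values = np.array([115.85, 24.24, 3.28, 94.12, 35.19, 67.94, 0.0, 1.2, 0.0, 8.35])
team2 = np.array([1, 2, 2, 3, 3, 1, 4, 2, 5, 3])   # stand-in for the team2 column

lo, hi = values.min(), values.max()

def to_unit(v):
    """Map a value onto the shared 0..1 colour scale."""
    return (v - lo) / (hi - lo)

for g in np.unique(team2):
    subset = values[team2 == g]
    pos = to_unit(subset)      # same value -> same colour position in every plot
    assert 0.0 <= pos.min() and pos.max() <= 1.0

print(to_unit(values.max()))   # 1.0
```

The same `(lo, hi)` pair would then be passed as the scale's `limits` in each per-team plot, so dark blue means the same thing everywhere.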
|
<python><colors><python-polars><plotnine>
|
2024-04-17 13:13:14
| 1
| 931
|
CopyOfA
|
78,341,192
| 11,861,874
|
Python to PPTX using python-pptx library
|
<p>I am trying to update a PowerPoint presentation using Python. I have the following code and need help to solve the following issues:</p>
<ol>
<li>Values in the table should be integers with comma separators.</li>
<li>The columns should auto-adjust, as sometimes figures are too big and spill over to the next row, which defeats the purpose of automation.</li>
<li>The graph and table overlap; is there a way to dynamically adjust them so that irrespective of the number of rows in the table, the graph is always below the table and not overlapping?</li>
</ol>
<pre><code>from pptx import Presentation
from pptx.util import Inches
import pandas as pd
import matplotlib.pyplot as plt

path = 'C:/Users/admin/Desktop/Demo/Demo.pptx'

# Sample data
data = {
    'Column1': [1234.56789, 2345.67890, 3456.78901, 4567.89012, 5678.90123],
    'Column2': [9876.54321, 8765.43210, 7654.32109, 6543.21098, 5432.10987],
    'Column3': [3210.98765, 4321.09876, 5432.10987, 6543.21098, 7654.32109],
    'Column4': [7890.12345, 8901.23456, 9012.34567, 1234.56789, 2345.67890],
    'Column5': [4567.89012, 3456.78901, 2345.67890, 1234.56789, 9876.54321]
}

# Create a DataFrame
df = pd.DataFrame(data)

# Create a PowerPoint presentation
prs = Presentation()

# Add first slide with title
slide = prs.slides.add_slide(prs.slide_layouts[0])
title = slide.shapes.title
title.text = "Analysis"

# Add a slide for data and graph
slide = prs.slides.add_slide(prs.slide_layouts[5])  # Layout for title and content

# Add title for the slide
title_shape = slide.shapes.title
title_shape.text = "Basic Summary"

# Add a table for the data
left = Inches(1)
top = Inches(1.5)
width = Inches(8)
height = Inches(4)

# Create a table with rows and columns
table = slide.shapes.add_table(6, 6, left, top, width, height).table

# Set column widths
table.columns[0].width = Inches(2)
table.columns[1].width = Inches(1.5)
table.columns[2].width = Inches(1.5)
table.columns[3].width = Inches(1.5)
table.columns[4].width = Inches(1.5)
table.columns[5].width = Inches(1.5)

# Set column titles
table.cell(0, 0).text = ' '
for i, col in enumerate(df.columns):
    table.cell(0, i+1).text = col

# Populate the table with data
for i, row in enumerate(df.itertuples(), start=1):
    for j, value in enumerate(row[1:], start=1):
        table.cell(i, j).text = f'{value:.5f}'

# Plot the graph
fig, ax = plt.subplots()
df.plot(ax=ax)
plt.xlabel('X Label')
plt.ylabel('Y Label')
plt.title('Graph Title')

# Save the plot to a file
plt.savefig('graph.png')

# Add the graph to the slide
left = Inches(1)
top = Inches(5)
pic = slide.shapes.add_picture('graph.png', left, top, width=Inches(8), height=Inches(3))

# Save the presentation
prs.save(path)

# Close the plot
plt.close()
</code></pre>
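A few hedged sketches for points 1 and 3 (the helper names are mine, not python-pptx API): Python's format spec handles the comma-separated integers, and since python-pptx has, as far as I know, no content-based autofit, a common workaround for the overlap is to compute the picture's top position from the table's row count:

```python
# Hypothetical helpers illustrating two of the three asks:
# 1) integers with comma separators, 3) placing the picture below the table.

def fmt_int(value):
    """Round to an integer and add thousands separators: 1234.567 -> '1,235'."""
    return f"{value:,.0f}"

print(fmt_int(1234.56789))    # 1,235
print(fmt_int(9876.54321))    # 9,877

def picture_top_inches(table_top_in, n_rows, row_height_in=0.4, gap_in=0.3):
    """Top of the picture = top of the table + estimated table height + a gap.
    The row height is an assumption; python-pptx rows can also be sized
    explicitly so the estimate is exact."""
    return table_top_in + n_rows * row_height_in + gap_in

print(picture_top_inches(1.5, 6))   # 4.2
```

In the question's loop this would become `table.cell(i, j).text = fmt_int(value)` and `top = Inches(picture_top_inches(1.5, 6))`.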
|
<python><python-pptx><presentation>
|
2024-04-17 13:04:34
| 2
| 645
|
Add
|
78,341,075
| 7,838,169
|
I want to do a self join on a model using SQLAlchemy and then serialize it to JSON but get maximum recursion error
|
<p>I have a <code>User</code> model like the following</p>
<pre><code>class User(Base):
    __tablename__ = 'user'

    id: so.Mapped[int] = so.mapped_column(primary_key=True, autoincrement=True, index=True)
    hashed_password: so.Mapped[str] = so.mapped_column(sa.String(256), nullable=False)
    created_by_id: so.Mapped[int | None] = so.mapped_column(sa.ForeignKey('user.id', ondelete='cascade'))

    @so.declared_attr
    def created_by(cls):
        return so.relationship("User", primaryjoin='User.id == User.created_by_id', remote_side=[cls.id], join_depth=1, single_parent=True)

    # other attributes ignored
</code></pre>
<p>then I do a self join in my code like this:</p>
<pre><code>alias_user = aliased(User)
query = select(User) \
    .outerjoin(alias_user.created_by) \
    .options(defer(User.hashed_password))
results = await session.scalars(query)
results.unique()
</code></pre>
<p>then some where in my code I convert results into a json object using this:</p>
<pre><code>jd = jsonable_encoder(list(results), exclude_none=True)
</code></pre>
<p>but I get the following exception:</p>
<pre><code>jsonable_encoder recursionerror: maximum recursion depth exceeded in comparison
</code></pre>
<p>How can I change my code to prevent this exception?</p>
<p>Thanks in advance.</p>
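Not from the question: `jsonable_encoder` keeps following the self-referential `created_by` relationship until it exhausts the stack, so one hedged workaround is to serialize with an explicit depth cap (a response model that replaces the nested user with just its id achieves the same). A plain-Python sketch of the idea, without SQLAlchemy; the function name is mine:

```python
# Sketch (plain classes, not SQLAlchemy): break the created_by cycle by
# serializing the relationship only to a fixed depth.

class User:
    def __init__(self, id, created_by=None):
        self.id = id
        self.created_by = created_by

def user_to_dict(user, depth=1):
    d = {"id": user.id}
    if user.created_by is not None:
        if depth > 0:
            d["created_by"] = user_to_dict(user.created_by, depth - 1)
        else:
            d["created_by_id"] = user.created_by.id   # stop recursing, keep the FK
    return d

admin = User(1)
admin.created_by = admin          # self-reference: would recurse forever
print(user_to_dict(admin))        # {'id': 1, 'created_by': {'id': 1, 'created_by_id': 1}}
```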
|
<python><sqlalchemy><fastapi>
|
2024-04-17 12:44:46
| 0
| 3,316
|
badger
|
78,340,704
| 1,923,575
|
git permission denied error in Airflow DAG when accessing git in subprocess or BashOperator
|
<p>I'm using Airflow on my local env using Docker Compose. The Docker Compose file I'm using is <a href="https://airflow.apache.org/docs/apache-airflow/2.9.0/docker-compose.yaml" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/2.9.0/docker-compose.yaml</a></p>
<p>Here are the <code>docker compose ps</code></p>
<pre><code>airflow-scheduler-1
airflow-triggerer-1
airflow-webserver-1
airflow-worker-1
postgres-1
redis-1
</code></pre>
<p>As part of the task in the Airflow DAG I need to execute the <code>git clone &lt;repo&gt;</code> command, either using subprocess or BashOperator; however it fails with a git permission denied error. It's able to execute the <code>ls</code> and <code>echo</code> commands, but not <code>git</code>.</p>
<p>Note - Git is already installed on my system.</p>
<p>DAG code with subprocess</p>
<pre><code>from airflow.models import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime
import subprocess
import shlex
from airflow.operators.bash import BashOperator

default_args = {
    'start_date': datetime(2024, 1, 1),
    'catchup': False
}

def _sub_process():
    cmd = ["ls"]
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = p.communicate()
    print(stdout)
    if p.returncode != 0:
        print(stderr)

with DAG('sub_processing',
         schedule_interval='@hourly',
         default_args=default_args) as dag:
    # Define tasks/operators
    sub = PythonOperator(
        task_id='sub',
        python_callable=_sub_process
    )
</code></pre>
<p>DAG code with BashOperator</p>
<pre><code>from airflow.models import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime
import subprocess
import shlex
from airflow.operators.bash import BashOperator

default_args = {
    'start_date': datetime(2024, 1, 1),
    'catchup': False
}

with DAG('sub_processing',
         schedule_interval='@hourly',
         default_args=default_args) as dag:
    run_this1 = BashOperator(
        task_id="run_after_loop1",
        bash_command='echo "hello world"'
    )
    run_this2 = BashOperator(
        task_id="run_after_loop2",
        bash_command="ls",
    )
    run_this3 = BashOperator(
        task_id="run_after_loop3",
        bash_command="git --version",
    )
    run_this1 >> run_this2 >> run_this3
</code></pre>
<p>I'm not sure why the Airflow DAG task is not able to execute the git command. Need help here.</p>
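I can't reproduce the environment, but two common causes are worth separating. If `git --version` itself fails, git is simply not installed inside the container (the host install is irrelevant; the tasks run in the worker container). If `git --version` works but `git clone` reports something like "Permission denied (publickey)", the container is missing the SSH credentials that exist on the host. A hedged sketch of the second fix, mounting the host's key read-only into the worker service (home directory path assumed from the official image):

```yaml
# Hypothetical docker-compose override for the official Airflow compose file:
# make the host's SSH key available so `git clone git@...` can authenticate.
services:
  airflow-worker:
    volumes:
      - ~/.ssh:/home/airflow/.ssh:ro
```

For the first case, extending the image with `apt-get install -y git` in a small Dockerfile (as root, then switching back to the `airflow` user) and pointing the compose file at the rebuilt image is the usual route.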
|
<python><git><subprocess><airflow><airflow-2.x>
|
2024-04-17 11:45:59
| 1
| 1,771
|
Ashwin Hegde
|
78,340,613
| 2,249,312
|
xbbg blp.bdib function for certain commodity
|
<p>I set everything up so that it works with the ticker CO1 Comdty. In the terminal, DES for CO1 Comdty says the exchange on which this trades is "ICE-ICE Futures Europe - Commodities". In my two files it looks as follows:</p>
<p>asset.yml</p>
<pre><code>Comdty:
  - tickers: [CL]
    exch: NYME
    freq: M
    is_fut: True
  - tickers: [CO, XW]
    exch: FuturesEuropeICE
    freq: M
    is_fut: True
</code></pre>
<p>and in the exch.yml:</p>
<pre><code>FuturesEuropeICE:
  tz: Europe/London
  allday: [100, 2300]
</code></pre>
<p>As said, this all works fine with the following function call:</p>
<pre><code> blp.bdib(ticker='CO1 Comdty', dt='2024-04-17')
</code></pre>
<p>I now wanted to add the following ticker, MOZ24 Comdty. According to the terminal, under DES this trades on the exchange called "EDX-ICE Endex". I added the following to the asset file:</p>
<pre><code>  - tickers: [MO]
    exch: EndexICE
    freq: M
    is_fut: True
</code></pre>
<p>And the following to the exchange file:</p>
<pre><code>EndexICE:
  tz: Europe/Berlin
  allday: [800, 1800]
</code></pre>
<p>I have tried all other names for the exchange that I could think of but nothing works.</p>
<p>I also looked at this <a href="https://stackoverflow.com/questions/67336751/xbbg-works-for-historical-but-not-intraday-data-with-regards-to-government-bon/67389113#67389113">question</a></p>
<p>How can I figure out what the exchange is called for a certain instrument on the terminal?</p>
|
<python><bloomberg><xbbg>
|
2024-04-17 11:25:19
| 0
| 1,816
|
nik
|
78,340,585
| 354,051
|
Similarity score between 2D arrays of different size
|
<p><a href="https://i.sstatic.net/izfO4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/izfO4.png" alt="paths for similarity score" /></a></p>
<p>These are 2D (z=0) poly lines in 3D application Blender and they are used as paths. I would like to know the similarity score between them.</p>
<p>Itโs important to distinguish between the concepts of similarity and distance. Similarity measures how similar or close two series are to each other, while distance quantifies the dissimilarity between two series. The smaller the distance, the more similar the two patterns are considered, as they require fewer adjustments for optimal alignment.</p>
<p>In my case:</p>
<ul>
<li>Paths may have different lengths or number of points.</li>
<li>Paths may be similar but slightly scaled on either axis.</li>
<li>Points distribution may be different or not aligned.</li>
<li>Starting points will surely be aligned but ending points may or may not.</li>
</ul>
<p>Initially, I did some tests using distance-based functions from scipy, but because of the scaling issue they produce a higher score, which is correct for the scaled data, but which I can't make use of.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
# path data from blender
PATH1 = np.array([
[ 0. , -0.54760742],
[ 0.06086956, -0.01623535],
[ 0.08695652, 0.33010864],
[ 0.12173913, 0.55926514],
[ 0.14782609, 0.47113037],
[ 0.2 , -0.17211914],
[ 0.2173913 , -0.26098633],
[ 0.2521739 , -0.18804932],
[ 0.31304348, 0.33068848],
[ 0.33913043, 0.46157837],
[ 0.36521739, 0.36813354],
[ 0.39130434, 0.12088013],
[ 0.42608696, -0.06130981],
[ 0.5043478 , 0.02139282],
[ 0.56521738, 0.22866821],
[ 0.60000002, 0.16775513],
[ 0.65217394, -0.1048584 ],
[ 0.77391303, -0.36459351],
[ 0.81739128, -0.26113892],
[ 0.87826085, 0.21520996],
[ 0.9217391 , 0.19732666],
[ 1. , -0.51672363]])
PATH2 = np.array([
[ 0. , -0.57351685],
[ 0.08849557, 0.20959473],
[ 0.13274336, 0.45080566],
[ 0.15929204, 0.35235596],
[ 0.20353982, -0.20309448],
[ 0.2300885 , -0.27749634],
[ 0.2920354 , 0.08712769],
[ 0.3539823 , 0.27789307],
[ 0.38053098, 0.24447632],
[ 0.43362832, -0.03421021],
[ 0.51327431, 0.09094238],
[ 0.60176992, 0.09771729],
[ 0.66371679, -0.11819458],
[ 0.71681416, -0.17974854],
[ 0.79646015, -0.39971924],
[ 0.84070796, -0.18743896],
[ 0.88495576, 0.23342896],
[ 0.90265489, 0.25582886],
[ 0.92920351, 0.20352173],
[ 0.94690263, 0.08068848],
[ 1. , -0.50335693]])
</code></pre>
<p>I also did some tests using fastdtw, correlate2d, and cosine similarity, but I was not able to figure out the correct one for the requirement, and their results look unpredictable to me.</p>
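Not an authoritative metric, just a sketch addressing the listed constraints: arc-length resampling handles differing point counts and distributions, and per-axis min-max normalization removes the scaling issue before any distance is computed (the function names are mine). The resampled, normalized paths could equally be fed to DTW or a discrete Fréchet distance instead of the mean point-wise distance used here:

```python
import numpy as np

def resample(path, n=100):
    """Resample a polyline to n points, evenly spaced by arc length."""
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(u, t, path[:, k]) for k in range(path.shape[1])])

def similarity(p1, p2, n=100):
    a, b = resample(p1, n), resample(p2, n)
    # remove the per-axis scale difference before comparing
    for arr in (a, b):
        arr -= arr.min(axis=0)
        rng = arr.max(axis=0)
        rng[rng == 0] = 1.0               # guard against flat axes
        arr /= rng
    d = np.linalg.norm(a - b, axis=1).mean()   # mean point-wise distance
    return 1.0 / (1.0 + d)                     # map distance into (0, 1]

p = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
q = 3.0 * p                                    # same shape, scaled 3x
print(similarity(p, p))                        # 1.0 (identical)
print(similarity(p, q))                        # 1.0 (scale removed)
```

With PATH1 and PATH2 from the question, `similarity(PATH1, PATH2)` then gives a score that is unaffected by per-axis scaling.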
|
<python><cosine-similarity><cross-correlation>
|
2024-04-17 11:20:19
| 0
| 947
|
Prashant
|
78,340,572
| 774,575
|
Why does groupby with dropna=False prevent a subsequent MultiIndex.dropna() to work?
|
<p>My understanding is <code>MultiIndex.dropna()</code> removes index entries for which at least one level is <code>NaN</code>, there are no conditions. However it seems if a previous <code>groupby</code> was used with <code>dropna=False</code>, it's no longer possible to use <code>MultiIndex.dropna()</code>.</p>
<ul>
<li>What are the reasons for the different behavior?</li>
<li>How to drop the <code>NaN</code> entries after using <code>groupby</code>?</li>
</ul>
<p>(I'm aware the <code>NaN</code> groups would be dropped by <code>groupby</code> without the <code>dropna</code> parameter, but I'm looking for a solution working in the case the parameter has been used at some point earlier).</p>
<hr />
<pre><code>import pandas as pd
import numpy as np

d = {(8.0, 8.0): {'A': -1.10, 'B': -1.0},
     (7.0, 8.0): {'A': -0.10, 'B': 0.1},
     (5.0, 8.0): {'A': 1.15, 'B': -1.2},
     (7.0, 7.0): {'A': 1.10, 'B': 1.6},
     (7.0, np.NaN): {'A': 0.70, 'B': -0.7},
     (8.0, np.NaN): {'A': -1.00, 'B': 0.9},
     (np.NaN, 5.0): {'A': -2.20, 'B': 1.1}}

# This works as expected
index = pd.MultiIndex.from_tuples(d.keys(), names=['L1', 'L2'])
df = pd.DataFrame(d.values(), index=index)
print(df.index.dropna())

# This doesn't work as expected
df = df.groupby(['L1', 'L2'], dropna=False).mean()
print(df.index.dropna())
</code></pre>
<hr />
<pre><code>MultiIndex([(8.0, 8.0),
            (7.0, 8.0),
            (5.0, 8.0),
            (7.0, 7.0)],
           names=['L1', 'L2'])
MultiIndex([(5.0, 8.0),
            (7.0, 7.0),
            (7.0, 8.0),
            (7.0, nan),
            (8.0, 8.0),
            (8.0, nan),
            (nan, 5.0)],
           names=['L1', 'L2'])
</code></pre>
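My understanding (worth verifying against the pandas source) is that `from_tuples` encodes the NaNs as the sentinel code -1, which `Index.dropna()` looks for, while `groupby(dropna=False)` keeps NaN as a real level value, so the codes are never -1 and nothing is dropped. Either way, a mask built from the index levels drops the rows without depending on that encoding:

```python
import numpy as np
import pandas as pd

# Minimal reproduction of the post-groupby index, then a mask-based drop
# that works regardless of how the NaNs are encoded in the MultiIndex.
idx = pd.MultiIndex.from_tuples(
    [(8.0, 8.0), (7.0, 7.0), (7.0, np.nan), (np.nan, 5.0)], names=['L1', 'L2'])
df = pd.DataFrame({'A': [1.0, 2.0, 3.0, 4.0]}, index=idx)

mask = ~df.index.to_frame().isna().any(axis=1)   # True where no level is NaN
clean = df[mask]
print(list(clean.index))        # [(8.0, 8.0), (7.0, 7.0)]
```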
|
<python><pandas><multi-index>
|
2024-04-17 11:17:34
| 2
| 7,768
|
mins
|
78,340,399
| 13,827,112
|
How to save several figures from an animation created by FuncAnimation?
|
<p>Is it possible to save, for instance, 10 figures from a video that is created as follows?</p>
<pre><code>ani = FuncAnimation(fig, update, frames=num_frames, interval=40, blit=True) # Interval is in milliseconds
dpi = 300 # Adjust this value as needed
writer = FFMpegWriter(fps=60, bitrate=5000)
ani.save('animation_Hradec.mp4', writer=writer, dpi=dpi)
</code></pre>
<p>Is there an argument of FuncAnimation for this? Thank you very much</p>
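As far as I know, `FuncAnimation` has no built-in argument to also dump selected frames as still images; a common workaround is to call the same `update` function yourself for the frames you want and save the figure each time. A sketch with a made-up figure and `update` (the question's own `fig`/`update` are not shown):

```python
import matplotlib
matplotlib.use("Agg")                      # headless backend, no display needed
import matplotlib.pyplot as plt
import numpy as np
import os, tempfile

fig, ax = plt.subplots()
line, = ax.plot([], [])
ax.set_xlim(0, 2 * np.pi)
ax.set_ylim(-1, 1)
x = np.linspace(0, 2 * np.pi, 200)

def update(frame):
    # stand-in for the question's update(); draws one animation frame
    line.set_data(x, np.sin(x + frame / 10.0))
    return (line,)

# Instead of an extra FuncAnimation argument: call update() for the
# frames you want and save the figure, e.g. every 10th of 100 frames.
outdir = tempfile.mkdtemp()
for frame in range(0, 100, 10):
    update(frame)
    fig.savefig(os.path.join(outdir, f"frame_{frame:03d}.png"), dpi=150)

print(sorted(os.listdir(outdir))[:2])
```

This can run before or after `ani.save(...)`; the MP4 export is unaffected.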
|
<python><python-3.x><matplotlib><animation><figure>
|
2024-04-17 10:49:45
| 0
| 1,195
|
Elena Greg
|
78,340,390
| 2,583,346
|
Embedding plotly interactive figures in a confluence page
|
<p>Is there a way to embed my plotly figures in a confluence page? Here's what I've tried:</p>
<pre><code>import plotly.express as px
x = [1,1,2,3,3,3,4,5]
fig = px.histogram(x=x)
fig.write_html('my.html')
</code></pre>
<p>Then I tried copy-pasting the HTML from <code>my.html</code> into an HTML macro in a Confluence page (using <a href="https://ukkous.atlassian.net/wiki/marketplace/discover/app/net.bitwelt.confluence.htmlMacro?source=approvalEmail" rel="nofollow noreferrer">this app</a>), but it doesn't display.<br />
Any idea how this should be done?</p>
|
<python><plotly><confluence>
|
2024-04-17 10:48:28
| 0
| 1,278
|
soungalo
|
78,340,305
| 5,105,207
|
Gekko: parameter identification on a Spring-Mass system
|
<p>I want to do parameter estimation on a Spring-Mass system</p>
<p><a href="https://i.sstatic.net/GZcpj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GZcpj.png" alt="enter image description here" /></a></p>
<p>with direct collocation method. The parameter k should be determined from response.</p>
<p>I simulated this system by</p>
<pre class="lang-py prettyprint-override"><code>from scipy.integrate import odeint
import numpy as np

def dy(y, t):
    x, xdot = y
    return [xdot, -50*x]

t = np.linspace(0, 1, 40)
sol = odeint(dy, [2.0, 1.0], t)
sol_x = sol[:, 0]
sol_xdot = sol[:, 1]
</code></pre>
<p>Then I have these code to identify parameter k:</p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
m = GEKKO(remote=False)
m.time = t
x = m.CV(value=sol_x); x.FSTATUS = 1 # fit to measurement
xdot = m.CV(value=sol_xdot); xdot.FSTATUS = 1
k = m.FV(value = 40.0); k.STATUS = 1 # change initial value of k here
m.Equation(x.dt() == xdot) # differential equation
m.Equation(xdot.dt() == -k*x)
m.options.IMODE = 5 # dynamic estimation
m.options.NODES = 40 # collocation nodes
m.options.EV_TYPE = 2
m.solve(disp=False) # display solver output
</code></pre>
<p>By playing around with the initial value of k, I found that k will converge to the real value 50 if its initial value is 25 to 65. Otherwise the result will be -0.39, which is not good. I'm quite confused, because this system is linear and should be easy to solve. So my question: how can I fine-tune the above code so that k converges to 50 from an arbitrary initial value?</p>
|
<python><gekko>
|
2024-04-17 10:31:31
| 1
| 1,413
|
Page David
|
78,340,226
| 633,001
|
apply() with VLines vs HLines (holoviews)
|
<p>My code:</p>
<pre><code>import holoviews as hv
import numpy as np
hv.extension('bokeh')

def filter_lines(lines, x_range, y_range):
    if x_range is None or y_range is None:
        return lines
    if len(lines[x_range, y_range]) > 40:
        return lines[x_range, y_range].opts(alpha=0.0)
    return lines[x_range, y_range]

list_lines = np.arange(0, 300, 1)
lines = hv.VLines(list_lines)
range_stream = hv.streams.RangeXY(source=lines)
streams = [range_stream]
filtered = lines.apply(filter_lines, streams=streams)
display(filtered)
</code></pre>
<p>This displays the lines just fine: when I zoom in enough so that my x-axis spans only 40 units, the lines are displayed.</p>
<p>When I use <code>lines = hv.HLines(list_lines)</code> instead, the lines just never appear, no matter my view range. What is happening here? Do I need to change something specifically to make this work?</p>
<p><a href="https://i.sstatic.net/zxwet.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zxwet.png" alt="Horizonzal lines don't appear (vertical lines unfiltered)" /></a></p>
|
<python><holoviews>
|
2024-04-17 10:19:22
| 0
| 3,519
|
SinisterMJ
|
78,340,143
| 9,608,860
|
bot.fetch_channel raises error with pytest: AttributeError: '_MissingSentinel' object has no attribute 'is_set'
|
<p>I have a Discord bot which needs to fetch guilds and channels. If I get None from <code>bot.get_guild()</code>, I try <code>fetch_guild</code> through the API. The code looks something like this:</p>
<pre><code>async def get_or_fetch_guild(obj_id, bot, guild):
    guild = bot.get_guild(obj_id)
    status = 'SUCCESS'
    status_code = 200
    if guild is None:
        try:
            guild = await bot.fetch_guild(obj_id)
        except discord.errors.NotFound:
            guild = None
            status = 'INVALID_GUILD_ID'
            status_code = 400
        except discord.errors.Forbidden:
            guild = None
            status = 'FORBIDDEN'
            status_code = 403
        except discord.errors.HTTPException:
            guild = None
            status = 'HTTP_EXCEPTION'
            status_code = 400
    return guild, status, status_code
</code></pre>
<p>It works fine normally, but when I write a test case for the API using the above code as a utility function, I get this error:</p>
<blockquote>
<pre><code> if not self._global_over.is_set():
E AttributeError: '_MissingSentinel' object has no attribute 'is_set'
venv/lib/python3.11/site-packages/discord/http.py:605: AttributeError
</code></pre>
</blockquote>
<p>I am writing a test case for this API call:</p>
<pre><code>@router.get("/guild/{guild_id}/roles", response_model=role_schema.AllRoleData)
async def get_all_roles(
    guild_id: int,
):
    print("get all roles api called")
    guild, status, status_code = await get_or_fetch(guild_id, "guild", bot)
    # try to get the guild through api call if not found in cache
    if guild is None:
        print(f"Guild not found with id: {guild_id}")
        raise HTTPException(status_code=status_code, detail=status)
</code></pre>
<p>This is one of the test cases that is giving the error:</p>
<pre><code>@pytest.mark.asyncio
@patch("discord.Client.get_user")
@patch("discord.utils.get")
async def test_assign_role_invalid_guild(
    mock_get_user, mock_get, bot: Tuple[commands.Bot, str], test_client
):
    user = dpytest.get_config().members[0]
    role = dpytest.get_config().guilds[0].roles[0]

    # Create mock user, guild, member, and role objects
    mock_user = MagicMock(spec=discord.User, id=user.id, name="user")
    mock_member = MagicMock(spec=discord.Member, id=user.id, name="member")
    mock_role = MagicMock(spec=discord.Role, id=role.id, name="role")

    # Make member.add_roles an AsyncMock
    mock_member.add_roles = AsyncMock()

    # Set up the return values of get_user, get
    mock_get_user.return_value = mock_user
    mock_get.return_value = mock_role

    # Call the assign_role endpoint
    response = test_client.post(
        "/api/v1/dashboard/assign/roles",
        json={"discord_user_id": user.id, "discord_guild_id": "1234", "role_ids": [role.id]},
        headers={"X-Sizzle-External-Client-Key": settings.SIZZLE_BOT_API_SERVER_KEY},
    )
    assert response.status_code == 400
    assert "detail" in response.json()
    assert response.json()["detail"] == "INVALID_GUILD_ID"
</code></pre>
<p>Can anyone suggest a solution?</p>
|
<python><discord><discord.py><pytest>
|
2024-04-17 10:06:32
| 0
| 405
|
Aarti Joshi
|
78,340,115
| 1,841,839
|
write_videofile throws list index out of range - but works
|
<p>I have a bunch of videos in a directory 1.mp4, 2.mp4 .... I am merging them into output_1_2.mp4.</p>
<p>The following code works, but ...</p>
<p>It's throwing an error that I can't seem to figure out.</p>
<blockquote>
<p>Merging: videos/1.mp4 videos/2.mp4 videos/output_1_2.mp4<br>
Length of clips list: 2<br>
Moviepy - Building video videos/output_1_2.mp4.<br>
MoviePy - Writing audio in output_1_2TEMP_MPY_wvf_snd.mp3<br>
t: 0%| | 0/499 [00:00<?, ?it/s, now=None]MoviePy - Done.<br>
Moviepy - Writing video videos/output_1_2.mp4<br>
Error writing final clip: list index out of range<br></p>
</blockquote>
<p>Again, it works and the files are merged; I'm just trying to understand why</p>
<pre><code>final_clip.write_videofile(output_path, codec='libx264')
</code></pre>
<p>is throwing an <strong>list index out of range</strong> error.</p>
<p>As the code works i can probably just ignore it. This question is more out of curiosity.</p>
<h1>Code</h1>
<pre><code>import os
import re
from moviepy.editor import VideoFileClip, concatenate_videoclips

def merge_videos_in_directory(directory):
    # Get a list of all files in the directory
    files = os.listdir(directory)
    # Filter out files that start with "output"
    files = [file for file in files if not file.startswith("output")]
    # Sort the files based on numeric part in the filename
    files.sort(key=lambda x: int(re.search(r'\d+', x).group()))
    # Print the list of files for debugging
    print("Files in directory:", files)
    # Iterate through the files in pairs
    for i in range(0, len(files), 2):
        # Check if there are at least two files remaining
        if i + 1 < len(files):
            # Get the paths of the two videos to merge
            video1_path = os.path.join(directory, files[i])
            video2_path = os.path.join(directory, files[i + 1])
            # Create the output file name
            output_name = f"output_{files[i][:-4]}_{files[i + 1][:-4]}.mp4"
            output_path = os.path.join(directory, output_name)
            # Merge the two videos
            print("Merging:", video1_path, video2_path, output_path)
            try:
                merge_videos(video1_path, video2_path, output_path)
            except Exception as e:
                print("Error merging videos:", e)

def merge_videos(video1_path, video2_path, output_path):
    # Load the video clips
    video1 = VideoFileClip(video1_path)
    video2 = VideoFileClip(video2_path)
    # Print the lengths of the clips list for debugging
    print("Length of clips list:", len([video1, video2]))
    # Concatenate the video clips
    final_clip = concatenate_videoclips([video1, video2])
    try:
        # Write the final clip to a file
        final_clip.write_videofile(output_path, codec='libx264')  # exception thrown here.
    except Exception as e:
        # Handle any errors
        print("Error writing final clip:", e)
        # Close the clips if an error occurs
        final_clip.close()
        video1.close()
        video2.close()
    finally:
        # Close the clips
        final_clip.close()
        video1.close()
        video2.close()

# Example usage:
videos_dir = "videos/"
merge_videos_in_directory(videos_dir)
</code></pre>
<p>I am wondering if it may just be related to <a href="https://github.com/Zulko/moviepy/issues/646" rel="nofollow noreferrer">Duration floating point round-off causes "IndexError: list index out of range" in write_videofile</a></p>
|
<python><moviepy>
|
2024-04-17 10:02:57
| 0
| 118,263
|
Linda Lawton - DaImTo
|
78,340,111
| 12,560,241
|
Comparing Columns, Arrays and Lists using SQL, NumPy or Python Lists. Would itertools be a valid and faster alternative?
|
<p>I have two tables in SQL</p>
<p>tModel which has 9 columns (ID, Date N1, N2, N3, N4, N5, N6, Flag) and 300 million rows</p>
<p>tPairAP which has 3 columns (ID, N1, N2) and 750 rows</p>
<p>The task I need to perform is to find whether any row of tModel (across N1, N2, N3, N4, N5, N6) contains both N1 and N2 from tPairAP</p>
<p>Despite the size of the table, I first tried to run a query inside SQL using the following code</p>
<pre><code>WITH Subquery AS (
SELECT t1.ID
FROM tModel t1
RIGHT JOIN tPairAP t2 ON (
(t1.N1 = t2.N1 OR t1.N2 = t2.N1 OR t1.N3 = t2.N1 OR t1.N4 = t2.N1 OR t1.N5 = t2.N1 OR t1.N6 = t2.N1)
AND (t1.N1 = t2.N2 OR t1.N2 = t2.N2 OR t1.N3 = t2.N2 OR t1.N4 = t2.N2 OR t1.N5 = t2.N2 OR t1.N6 = t2.N2)
)
WHERE t1.N1 IS NOT NULL
)
UPDATE tModel
SET Flag= CASE WHEN Subquery.ID IS NOT NULL THEN 'Y' ELSE 'N' END
FROM tModel
LEFT JOIN Subquery ON tModel.ID = Subquery.ID;
</code></pre>
<p>Due to the size of the table the query took 20hr and 30min to run.</p>
<p>Because of how long that took, I thought running the check in Python would be faster, so to compare run times I considered two ways of doing it.</p>
<p>The first route I tried was NumPy. To do so, I created text files that could be loaded as NumPy arrays.
I named the two files after the tables; they contain only N1, N2, N3, N4, N5, N6 for tModel.txt and N1, N2 for tPairAP.txt.</p>
<p>I then coded two different ways for NumPy to handle this task.</p>
<p><strong>NUMPY OPTION 1</strong></p>
<pre><code>import numpy as np

# Load the text files as numpy arrays
tModel = np.loadtxt('C:\\py\\SQL ON\\py_files\\tModel.txt', delimiter=',')
tPairAP = np.loadtxt('C:\\py\\SQL ON\\py_files\\tPairAP.txt', delimiter=',')

# Check if any arrays of Array1 contain all elements from any arrays of Array2
contains_elements = np.array([np.all(np.isin(tModel, arr)) for arr in tPairAP])

# Create a list of flags based on the contains_elements array
flags = np.where(contains_elements, 'Y', 'N')

# Write the list of lists of List1 and flags to a new file
with open('C:\\py\\SQL ON\\tModelNoPair.txt', 'w') as file:
    for i in range(len(tModel)):
        file.write(str(tModel[i]) + ' ' + flags[i] + '\n')
</code></pre>
<p>To my huge surprise, I found that running the Python code takes ages, even longer than SQL. I had the above code running for 33 hours and it wasn't even close to completing.</p>
<p><strong>NUMPY OPTION 2</strong></p>
<pre><code>import numpy as np

# Load the text files as numpy arrays
tModel = np.loadtxt('C:\\py\\SQL ON\\py_files\\tModel.txt', delimiter=',')
tPairAP = np.loadtxt('C:\\py\\SQL ON\\py_files\\tPairAP.txt', delimiter=',')

# Create a list to store the flags
flags = []

# Iterate over each list in List1
for l1 in tModel:
    flag = 'N'
    # Iterate over each list in List2
    for l2 in tPairAP:
        if np.isin(l1, l2).sum() == 2:
            flag = 'Y'
            break
    flags.append(flag)

# Write the results to a file
with open('C:\\py\\SQL ON\\tModelNoPair.txt', 'w') as file:
    for i, flag in enumerate(flags):
        file.write(f'{tModel[i]} {flag}\n')
</code></pre>
<p>I did not even try to run this second option, as it is more of a generic version I could use if tPairAP had, for example, not only N1, N2 but also N3, N4...Nn. However, given how long <strong>NUMPY OPTION 1</strong> took, I do not think it will run faster than the SQL query.</p>
<p>The third option I thought about was a script using plain Python lists. I wrote the following code:</p>
<p><strong>LIST OPTION</strong></p>
<pre><code>import numpy as np


def check_elements(tModel, tPairAP):
    for sublist1 in tModel:
        for sublist2 in tPairAP:
            if all(elem in sublist1 for elem in sublist2):
                return 'Y'
    return 'N'


# lists
tModel = np.loadtxt('C:\\py\\SQL ON\\py_files\\tModel.txt', delimiter=',')
tPairAP = np.loadtxt('C:\\py\\SQL ON\\py_files\\tPairAP.txt', delimiter=',')

flags = []
for sublist1 in tModel:
    flag = check_elements(tModel, tPairAP)
    flags.append(flag)

with open('output.txt', 'w') as file:
    for sublist, flag in zip(tModel, flags):
        file.write(str(sublist) + ' ' + flag + '\n')
</code></pre>
<p>Now, needless to say, before even kicking off <strong>LIST OPTION</strong> I did some research to understand whether lists are faster than NumPy, and it seems NumPy is faster at checking through and within arrays than lists are. Because of this I have not run the code, as I would not expect it to be any faster than NumPy.</p>
<p>So my question would be: how is it possible that NumPy is slower than SQL? To my knowledge, and despite not being a Python expert (I use NumPy for other things, like very large variance-covariance matrices), NumPy is supposed to be faster, but it looks like it is not.</p>
<p>Can anyone tell me whether my Python code is badly written and, if so, how to correct it to make it faster than SQL, or whether there is a better library for this task, ideally with an example of how to use it?</p>
<p>Thanks</p>
<p><em><strong>+++UPDATE+++</strong></em>
As suggested by @Grismar, my Python code (NUMPY OPTION 1 and 2) was not optimal. In fact it was not even correct, as it was not producing the right output; I was too overconfident that my code was correct without testing it on a small sample.</p>
<p>I have therefore corrected NUMPY OPTION 1, using two small sample arrays to make sure the output is the correct one (the one I was looking for).</p>
<pre><code>import numpy as np

# Load the text file as NumPy arrays
tModel = np.array([[1, 3, 2, 4, 5, 6], [7, 8, 9, 10, 11, 12], [1, 3, 5, 7, 9, 11]])
tPairAP = np.array([[3, 4], [9, 10], [1, 10]])
#tModel = np.array(np.loadtxt('C:\\py\\SQL ON\\py_files\\tModel.txt', delimiter=','))
#tPairAP = np.array(np.loadtxt('C:\\py\\SQL ON\\py_files\\tPairAP.txt', delimiter=','))

# Perform the right join operation
result = []
for row_t1 in tModel:
    for row_t2 in tPairAP:
        if ((row_t1[0] == row_t2[0] or row_t1[1] == row_t2[0] or row_t1[2] == row_t2[0] or row_t1[3] == row_t2[0] or row_t1[4] == row_t2[0] or row_t1[5] == row_t2[0]) and
                (row_t1[0] == row_t2[1] or row_t1[1] == row_t2[1] or row_t1[2] == row_t2[1] or row_t1[3] == row_t2[1] or row_t1[4] == row_t2[1] or row_t1[5] == row_t2[1])):
            result.append(row_t1)
result = np.array(result)

# Flag each row of tModel depending on whether it matches a pair
result = []
for row_t1 in tModel:
    flag = 'N'
    for row_t2 in tPairAP:
        if all(elem in row_t1 for elem in row_t2):
            flag = 'Y'
            break
    result.append(np.append(row_t1, flag))
result = np.array(result)

for row in result:
    print(row)
</code></pre>
<p>The output is now the following</p>
<pre><code>['1' '3' '2' '4' '5' '6' 'Y']
['7' '8' '9' '10' '11' '12' 'Y']
['1' '3' '5' '7' '9' '11' 'N']
</code></pre>
<p>Now even this version takes longer than SQL, and according to @Grismar this is to be expected.</p>
<p>I have done some research and I understand there is a library called itertools that could prove to be much faster than SQL and NumPy.</p>
<p>Does anyone know how to achieve with itertools what I have achieved with SQL and NumPy?</p>
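<p>For reference, one NumPy pattern worth trying before itertools is to vectorise over the 300M rows and loop only over the ~750 pairs: a row matches a pair when it contains both of the pair's numbers. This sketch uses the small sample arrays from the UPDATE above (for the real tables the row axis would be processed in chunks):</p>

```python
import numpy as np

# Small sample data mirroring the UPDATE example above
tModel = np.array([[1, 3, 2, 4, 5, 6], [7, 8, 9, 10, 11, 12], [1, 3, 5, 7, 9, 11]])
tPairAP = np.array([[3, 4], [9, 10], [1, 10]])

# Loop over the few pairs, vectorise over the many rows:
# a row matches if it contains both numbers of at least one pair.
match = np.zeros(len(tModel), dtype=bool)
for a, b in tPairAP:
    match |= (tModel == a).any(axis=1) & (tModel == b).any(axis=1)

flags = np.where(match, 'Y', 'N')
print(flags)  # ['Y' 'Y' 'N']
```

<p>This keeps the Python-level loop at 750 iterations instead of 300 million, which is where the hand-rolled row loops above lose their time.</p>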
|
<python><sql><arrays><list><python-itertools>
|
2024-04-17 10:02:31
| 2
| 317
|
Marco_sbt
|
78,340,019
| 8,163,773
|
Python debugger in VS Code fails to import local files
|
<p>When I try to launch the VS Code debugger, it fails on the line where I import something from local files, saying that the module is not installed.</p>
<blockquote>
<p>No module named 'app'
File "/path/to/my_project/app/script.py", line 1, in
from app.other_cript import helper
ModuleNotFoundError: No module named 'app'</p>
</blockquote>
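<p>A common cause is that the debugger's working directory or <code>sys.path</code> does not include the project root, so the <code>app</code> package cannot be resolved. One frequently suggested fix (an illustrative <code>launch.json</code> sketch; the configuration name and paths are assumptions) is to run from the workspace folder and put it on <code>PYTHONPATH</code>:</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "cwd": "${workspaceFolder}",
            "env": { "PYTHONPATH": "${workspaceFolder}" }
        }
    ]
}
```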
|
<python><visual-studio-code>
|
2024-04-17 09:48:59
| 1
| 9,359
|
Arseniy-II
|
78,339,830
| 5,624,602
|
Python TKinter progressbar style issue
|
<p>I'm writing a Tkinter GUI app and having an issue with the progressbar style configuration.
I'm running the same code on different machines; on one it is fine and on the other it is not.</p>
<pre><code>import tkinter as tk
from tkinter import ttk
root = tk.Tk()
app = tk.Frame(root)
app.pack()
app.green_style = ttk.Style()
app.green_style.configure("myStyle.Horizontal.TProgressbar", background='yellow',foreground = 'blue')
app.progress = ttk.Progressbar(root, length=200, mode='determinate',style="myStyle.Horizontal.TProgressbar")
app.progress.pack()
app.progress['value'] = 75
app.mainloop()
</code></pre>
<p>In <strong>Linux</strong> with python <strong>3.7.3</strong> it creates a yellow progressbar:</p>
<p><a href="https://i.sstatic.net/bcLmc.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bcLmc.jpg" alt="linux 3.7.3 progressbar" /></a></p>
<p>But in <strong>Windows 10</strong> with Python <strong>3.10.11</strong> the style doesn't change:</p>
<p><a href="https://i.sstatic.net/elBeN.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/elBeN.jpg" alt="windows 3.10.11 progressbar" /></a></p>
<p>Can you please help me understand why the Windows progress bar style is not updating?</p>
<p>Thanks!</p>
|
<python><python-3.x><tkinter><windows-10>
|
2024-04-17 09:20:08
| 2
| 1,513
|
STF
|
78,339,718
| 2,902,280
|
mamba claims package is already installed when it is not
|
<p>When installing <code>seaborn</code>, mamba says it is already present. Yet trying to import it in my python interpreter (the mamba one) fails:</p>
<pre><code>(base) ➜ materials git:(main) ✗ mamba install seaborn
Looking for: ['seaborn']
conda-forge/linux-64 Using cache
conda-forge/noarch Using cache
Pinned packages:
- python 3.10.*
Transaction
Prefix: /home/mathurin/mambaforge
All requested packages already installed
(base) ➜ materials git:(main) ✗ python -m seaborn
/home/mathurin/mambaforge/bin/python: No module named seaborn
</code></pre>
<p>Yet it seems that everything happens in my <code>mambaforge</code> folder so it's the correct path and interpreter?</p>
|
<python><python-packaging><mamba>
|
2024-04-17 09:04:10
| 1
| 13,258
|
P. Camilleri
|
78,339,449
| 3,548,089
|
What is the best approach to compute Brun's constant with Python (which includes a primality test and a long-running sum)?
|
<p>Brun's constant :
<a href="https://en.wikipedia.org/wiki/Brun%27s_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Brun%27s_theorem</a>
<a href="http://numbers.computation.free.fr/Constants/Primes/twin.html" rel="nofollow noreferrer">http://numbers.computation.free.fr/Constants/Primes/twin.html</a></p>
<p>How can I compute Brun's constant up to 10^20 with Python,
knowing that primality checks have a heavy cost and that summing results up to 10^20 is a long-running task?</p>
<p>Here is my two-cents attempt:</p>
<p>IsPrime: the fastest way I know to check whether a number is prime.
digit_root: computes the digital root of the number.</p>
<p>If anyone knows what could be improved below to reach 10^20, you're welcome to share.</p>
<pre><code>import numpy as np
import math
import time

#Brun's constant
#p B_2(p)
#10^2 1.330990365719...
#10^4 1.616893557432...
#10^6 1.710776930804...
#10^8 1.758815621067...
#10^10 1.787478502719...
#10^12 1.806592419175...
#10^14 1.820244968130...
#10^15 1.825706013240...
#10^16 1.830484424658...
#B_2 should reach 1.9 at p ~ 10^530 which is far beyond any computational project
#B_2*(p)=B_2(p)+ 4C_2/log(p)
#p B2*(p)
#10^2 1.904399633290...
#10^4 1.903598191217...
#10^6 1.901913353327...
#10^8 1.902167937960...
#10^10 1.902160356233...
#10^12 1.902160630437...
#10^14 1.902160577783...
#10^15 1.902160582249...
#10^16 1.902160583104...


def digit_root(number):
    return (number - 1) % 9 + 1


first25Primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]


def IsPrime(n):
    # Corner cases
    if (n <= 1):
        return False
    if (n <= 3):
        return True
    # This is checked so that we can skip
    # middle five numbers in below loop
    if (n % 2 == 0 or n % 3 == 0):
        return False
    # exclude digital root 3 or 6 or 9
    if digit_root(n) in (3, 6, 9):
        return False
    if (n != 2 and n != 7 and n != 5 and str(n)[len(str(n))-1] not in ("1", "3", "7", "9")):  # if the number does not end in 1, 3, 7 or 9
        return False
    for i in first25Primes:
        if n % i == 0 and i < n:
            return False
    if (n > 2):
        if (not(((n-1) / 4) % 1 == 0 or ((n+1) / 4) % 1 == 0)):
            return False
    if (n > 3):
        if (not(((n-1) / 6) % 1 == 0 or ((n+1) / 6) % 1 == 0)):
            return False
    i = 5
    while (i * i <= n):
        if (n % i == 0 or n % (i + 2) == 0):
            return False
        i = i + 6
    return True


def ComputeB_2Aster(B_2val, p):
    return B_2val + (C_2mult4/np.log(p))


start = time.time()

# approx De Brun's
B_2 = np.float64(0)
B_2Aster = np.float64(0)
one = np.float64(1)

# Twin prime constant
C_2 = np.float64(0.6601618158468695739278121100145557784326233602847334133194484233354056423)
C_2mult4 = C_2 * np.float64(4)

lastPrime = 2
lastNonPrime = 1

for p in range(3, 100000000000000000000, 2):
    if IsPrime(p):
        lastNonPrime = p-1
        if lastPrime == p-2 and lastNonPrime == p-1:
            B_2 = B_2 + (one/np.float64(p)) + (one/np.float64(lastPrime))
        lastPrime = p
    else:
        lastNonPrime = p
    if p < 10000000000:
        if p % 1000001 == 0:
            print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n', end="")
    else:
        print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n', end="")

print(f'p:{p} \t\t[elapsed:{time.time()-start}]\nB_2:{B_2:.52f} B_2Aster:{ComputeB_2Aster(B_2,p-2):.52f}\n', end="")
</code></pre>
<p>Without the numpy package:</p>
<pre><code>#Brun's constant
from cmath import log
import time


def digit_root(number):
    return (number - 1) % 9 + 1


first25Primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]


def IsPrime(n):
    # Corner cases
    if (n <= 1):
        return False
    if (n <= 3):
        return True
    # This is checked so that we can skip
    # middle five numbers in below loop
    if (n % 2 == 0 or n % 3 == 0):
        return False
    # exclude digital root 3 or 6 or 9
    if digit_root(n) in (3, 6, 9):
        return False
    if (n != 2 and n != 7 and n != 5 and str(n)[len(str(n))-1] not in ("1", "3", "7", "9")):  # if the number does not end in 1, 3, 7 or 9
        return False
    for i in first25Primes:
        if n % i == 0 and i < n:
            return False
    if (n > 2):
        if (not(((n-1) / 4) % 1 == 0 or ((n+1) / 4) % 1 == 0)):
            return False
    if (n > 3):
        if (not(((n-1) / 6) % 1 == 0 or ((n+1) / 6) % 1 == 0)):
            return False
    i = 5
    while (i * i <= n):
        if (n % i == 0 or n % (i + 2) == 0):
            return False
        i = i + 6
    return True


C_2 = 0.6601618158468695739278121100145557784326233602847334133194484233354056423
C_2mult4 = C_2 * 4


def ComputeB_2Aster(B_2val, p):
    return B_2val + (C_2mult4/log(p))


moy = []
heavyInt = 10**10


def howFar10pow10(p, previousTime, step):
    end_time = time.time()
    time_lapsed = end_time - previousTime
    time_lapsed = int((heavyInt-p)/step * time_lapsed)
    moy.append(time_lapsed)
    if len(moy) > 20:
        moy.pop(0)
    return time_convert(sum(moy)//len(moy))


def time_convert(sec):
    mins = sec // 60
    sec = sec % 60
    hours = mins // 60
    days = hours // 24
    years = days // 365
    days = days - (years * 365)
    hours = hours % 24
    mins = mins % 60
    return "{0}y{1}d{2}h{3}m{4}s".format(int(years), int(days), int(hours), int(mins), sec)


def main():
    """approx De Brun's main"""
    print("de Brun's constant calculation :")
    print("¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯")
    print("p B₂(p) ")
    print("10^2 1.330990365719...")
    print("10^4 1.616893557432...")
    print("10^6 1.710776930804...")
    print("10^8 1.758815621067...")
    print("10^10 1.787478502719...")
    print("10^12 1.806592419175...")
    print("10^14 1.820244968130...")
    print("10^15 1.825706013240...")
    print("10^16 1.830484424658...")
    print("B₂ should reach 1.9 at p ~ 10^530 which is far beyond any computational project")
    print("B₂*(p)=B₂(p)+ 4C₂/log(p)")
    print("C₂=0.6601618158468695739278121100145557784326233602847334133194484233354056423... twin prime constant")
    print("C₂=product_{p=primes>=3} (1-(1/(p-1)²)) ")
    print("p B₂*(p) ")
    print("10^2 1.904399633290...")
    print("10^4 1.903598191217...")
    print("10^6 1.901913353327...")
    print("10^8 1.902167937960...")
    print("10^10 1.902160356233...")
    print("10^12 1.902160630437...")
    print("10^14 1.902160577783...")
    print("10^15 1.902160582249...")
    print("10^16 1.902160583104...")
    print(" ")
    print("Π₂(p) = number of twin primes up to p")
    print(" ")

    B2_p = 0
    lastPrime = 2
    pi2 = 0
    step = 100001
    t = time.time()
    for p in range(3, 99999999999999999999, 2):
        if IsPrime(p):
            if lastPrime == p-2:
                B2_p = B2_p + (1/p + 1/lastPrime)
                pi2 += 2
            lastPrime = p
        if p % step == 0:
            print(f'p={p} B₂(p)={B2_p} B₂*(p)={ComputeB_2Aster(B2_p,p)} Π₂(p)={pi2} t to p=10^10:{howFar10pow10(p,t,step)}', end="\r")
            t = time.time()


if __name__ == "__main__":
    main()
</code></pre>
<p><a href="https://i.sstatic.net/pEwemjfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pEwemjfg.png" alt="enter image description here" /></a></p>
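<p>For comparison, replacing the per-number <code>IsPrime</code> trial division with a sieve removes most of the cost; a minimal (non-segmented) sketch of the idea, which reproduces the published value B₂(10²) = 1.330990365719... (reaching 10^20 would still require a segmented sieve and a lot of hardware):</p>

```python
def brun_sum(limit):
    """Sum 1/p + 1/(p+2) over twin prime pairs with p + 2 <= limit."""
    # Sieve of Eratosthenes: one pass replaces all the trial divisions
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    total = 0.0
    for p in range(3, limit - 1, 2):
        if sieve[p] and sieve[p + 2]:
            total += 1 / p + 1 / (p + 2)
    return total

print(f"{brun_sum(100):.12f}")  # 1.330990365719
```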
|
<python><math><computation>
|
2024-04-17 08:22:03
| 1
| 310
|
Ch'nycos
|
78,339,444
| 13,987,643
|
Pandas filtering yields null values
|
<p>I have a df which I am filtering based on a particular column. This is my code to do that</p>
<pre><code>test_table[(test_table['Item']=='1')]
</code></pre>
<p>Ideally this code should return the rows where the value of column 'Item' is equal to '1'. Instead it returns the full dataframe, with just that particular row where Item = '1' intact and all other cells set to NaN. What could be the issue in my code? Why are all the other columns/cells becoming NaN?</p>
<p>The output that I got: (I've hidden the column names)</p>
<p><a href="https://i.sstatic.net/kPIMb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kPIMb.png" alt="enter image description here" /></a></p>
<p>Can someone help me resolve this?</p>
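<p>For reference, a plain boolean mask on a small frame drops non-matching rows entirely rather than filling them with NaN, so the described behaviour suggests something else is involved (for instance a dtype mismatch, or a later reindex/merge). A minimal sketch with hypothetical column names showing the expected result:</p>

```python
import pandas as pd

# Hypothetical stand-in for the question's test_table
test_table = pd.DataFrame({"Item": ["1", "2", "1"], "Qty": [10, 20, 30]})

# Boolean-mask filtering keeps only matching rows; no NaN is introduced
filtered = test_table[test_table["Item"] == "1"]
print(filtered)
#   Item  Qty
# 0    1   10
# 2    1   30
```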
|
<python><pandas><dataframe><filter>
|
2024-04-17 08:21:40
| 0
| 569
|
AnonymousMe
|
78,339,197
| 1,088,076
|
Is it possible to utilize a lambda function in an python-attrs converter?
|
<p>I am attempting to automatically create converter functions that require metadata to operate. I am seeing odd behavior when a lambda function is used for the converter.</p>
<pre><code>import attrs


def add(x, y=10):
    return x + y


def func(cls, fields):
    results = []
    for f in fields:
        y = f.metadata.get('add', None)
        converter = lambda x: add(x, y) if (y is not None) else add(x)
        print('Name:', f.name, y, converter(0))
        results.append(f.evolve(converter=converter))
    return results
</code></pre>
<p>Then I create a class with two fields, one with the metadata and the other without. However, the metadata does not get picked up. The print statement shows the behavior I would expect.</p>
<pre><code>Data1 = attrs.make_class('Data1',
                         {'a': attrs.field(default=1, metadata={'add': 5}),
                          'b': attrs.field(default=1)},
                         field_transformer=func)

d1 = Data1()
d1
# Name: a 5 5
# Name: b None 10
# Data1(a=11, b=11)
</code></pre>
<p>If you reverse the use of the metadata, it gets used for both.</p>
<pre><code>Data2 = attrs.make_class('Data2',
                         {'b': attrs.field(default=1),
                          'a': attrs.field(default=1, metadata={'add': 5})},
                         field_transformer=func)

d2 = Data2()
d2
# Name: b None 10
# Name: a 5 5
# Data2(b=6, a=6)
</code></pre>
<p>Can someone please help me understand what is happening? Thank you!</p>
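<p>This looks like Python's late-binding closure behaviour rather than anything attrs-specific: every lambda created in the loop reads <code>y</code> when it is called, not when it is defined, so all converters see the last value of <code>y</code>. A minimal illustration outside attrs, together with the usual default-argument workaround:</p>

```python
def add(x, y=10):
    return x + y

# Late binding: each lambda looks up y at call time, so every one
# sees the final loop value (None here)
converters = []
for y in [5, None]:
    converters.append(lambda x: add(x, y) if y is not None else add(x))
print([c(1) for c in converters])  # [11, 11] since both use y=None

# Workaround: bind the current value as a default argument
converters = []
for y in [5, None]:
    converters.append(lambda x, y=y: add(x, y) if y is not None else add(x))
print([c(1) for c in converters])  # [6, 11]
```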
|
<python><python-attrs>
|
2024-04-17 07:43:47
| 0
| 1,911
|
slaughter98
|
78,339,142
| 15,140,144
|
PyGame: asynchronous version of pygame.time.Clock.tick()
|
<p>There is no API for asynchronous <code>Clock.tick()</code> in pygame, how could I implement something like this? (This could be useful for things like <code>pygbag</code> that require async main loop.)</p>
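<p>A minimal sketch of one way this could be implemented, assuming only the standard library (<code>asyncio</code>, <code>time</code>); the frame-delay formula mirrors what <code>Clock.tick(framerate)</code> does conceptually, not pygame's actual C implementation:</p>

```python
import asyncio
import time


class AsyncClock:
    """Awaitable stand-in for pygame.time.Clock.tick() (stdlib-only sketch)."""

    def __init__(self):
        self._last = time.perf_counter()

    async def tick(self, framerate=60):
        # Sleep until the next frame boundary instead of busy-waiting,
        # yielding control to the event loop in the meantime.
        frame = 1.0 / framerate
        delay = self._last + frame - time.perf_counter()
        if delay > 0:
            await asyncio.sleep(delay)
        now = time.perf_counter()
        elapsed_ms = (now - self._last) * 1000.0
        self._last = now
        return elapsed_ms


async def main():
    clock = AsyncClock()
    for _ in range(3):
        dt = await clock.tick(60)  # roughly 16.7 ms per frame
        print(f"frame time: {dt:.1f} ms")


asyncio.run(main())
```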
|
<python><pygame-ce>
|
2024-04-17 07:33:19
| 0
| 316
|
oBrstisf8o
|
78,339,121
| 161,110
|
Get user assignments to Azure AVD application group with python sdk
|
<p>I'm trying to get the assignments of an AVD application group, but I do not see any way to do it in the python azure sdk:</p>
<p><a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-desktopvirtualization/azure.mgmt.desktopvirtualization.models.applicationgroup?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-mgmt-desktopvirtualization/azure.mgmt.desktopvirtualization.models.applicationgroup?view=azure-python</a></p>
<p>Is there any other way using python to get which users are assigned to that application group?</p>
|
<python><azure><virtual-desktop><azure-avd>
|
2024-04-17 07:29:51
| 1
| 2,196
|
Jorge
|
78,339,087
| 827,927
|
Is it possible to have a default value that depends on a previous parameter?
|
<p>Suppose I want to write a recursive binary search function in Python. The recursive function needs to get the start and end of the current search interval as parameters:</p>
<pre><code>def binary_search(myarray, start, end): ...
</code></pre>
<p>But, when I call the actual function from outside, I always start at 0 and finish at the end of the array. It is easy to make 0 a default value for start, but how can I write a default value for end? Is it possible to write something like:</p>
<pre><code>def binary_search(myarray, start=0, end=len(myarray)): ...
</code></pre>
<p>?</p>
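<p>For context, default expressions are evaluated once, at function definition time, in the enclosing scope, so <code>end=len(myarray)</code> raises a <code>NameError</code>. The usual workaround is a <code>None</code> sentinel resolved inside the function body; a sketch (a <code>target</code> parameter is added here just to make the example runnable):</p>

```python
def binary_search(myarray, target, start=0, end=None):
    # Default expressions cannot see the other parameters, so a sentinel
    # stands in for "depends on myarray" and is resolved at call time.
    if end is None:
        end = len(myarray)
    if start >= end:
        return -1
    mid = (start + end) // 2
    if myarray[mid] == target:
        return mid
    if myarray[mid] < target:
        return binary_search(myarray, target, mid + 1, end)
    return binary_search(myarray, target, start, mid)

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```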
|
<python><default-value>
|
2024-04-17 07:24:20
| 1
| 37,410
|
Erel Segal-Halevi
|
78,338,848
| 1,231,450
|
Pandas all() but with a threshold
|
<p>Suppose we have the following dataframe and program logic</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'B': [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]})


def more_than(series, threshold=5):
    try:
        trues = series.value_counts()[True]
        p = trues / len(series) * 100
    except KeyError:
        p = 0
    return True if p > threshold else False


df['compare'] = df['A'] > 5
print(more_than(df['compare']))
# True here
</code></pre>
<p>I'd like to have a function similar to <code>all(...)</code> but with the possibility of a threshold (like above). It works as it should, but I wondered if there's anything built in and probably faster here.</p>
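<p>Since a boolean Series is numeric, its <code>mean()</code> is exactly the fraction of <code>True</code> values, which gives a vectorised equivalent of the function above without the <code>value_counts</code>/<code>KeyError</code> dance (a sketch, not necessarily the fastest possible option):</p>

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]})

def more_than(series, threshold=5):
    # mean() of a boolean Series is the share of True values (0.0 to 1.0);
    # an all-False series simply yields 0.0, no KeyError handling needed
    return series.mean() * 100 > threshold

print(more_than(df['A'] > 5))  # True, since 50% of values exceed 5
```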
|
<python><pandas>
|
2024-04-17 06:41:57
| 1
| 43,253
|
Jan
|
78,338,808
| 1,814,498
|
How to interpret the coefficients of a nonlinear regression in scikit-learn?
|
<p>I am using the following code to do a nonlinear (polynomial) regression, and I am getting an intercept and 20 coefficients. How do I interpret those coefficients so I can construct a formula Y = intercept + coeff*feat_a + .....?</p>
<pre><code>data = pd.read_csv("data.csv")
# Step 2: Preprocess the data
X = data[['feat_a', 'feat_b', 'feat_c', 'feat_d', 'feat_e' ]] # Independent variables including blue (b)
y = data['Y']
#splitting Train and Test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=101)
# Dependent variable
# Step 3: Fit a polynomial regression model
degree = 2 # Adjust the degree of the polynomial as needed
poly_features = PolynomialFeatures(degree=degree, include_bias=False)
X_poly = poly_features.fit_transform(X_train)
model = LinearRegression()
model.fit(X_poly, y_train)
# Step 4: Evaluate the model (optional)
# Step 5: Extract the formula and parameters
intercept = model.intercept_
coefficients = model.coef_
print("coeff size{}".format(len(coefficients)))
print(coefficients)
</code></pre>
|
<python><scikit-learn><linear-regression><polynomial-approximations>
|
2024-04-17 06:34:36
| 1
| 736
|
Ziri
|
78,338,736
| 13,250,589
|
How to call a one Firebase function from another Firebase function using python SDK
|
<p>I am deploying functions on firebase in python.</p>
<p>I want to call one firebase function from another and pass some data to it.
I have looked into documentation and I cannot find a proper method.</p>
<p>I tried to follow the approach mentioned in
<a href="https://groups.google.com/g/firebase-talk/c/t_2xEmBrKEo/m/ZXd9RFRKAAAJ" rel="nofollow noreferrer">this</a>
forum post.</p>
<p>According to this post, cloud functions can call/trigger each other through events:
the first function can publish an event (also passing data in it) and the other can handle that event.</p>
<p>I was able to find the
<a href="https://firebase.google.com/docs/reference/functions/2nd-gen/python/firebase_functions.eventarc_fn#on_custom_event_published" rel="nofollow noreferrer">documentation</a>
for "event handling" but I cannot find any documentation on how to publish an event from a firebase function in python.</p>
<p>I looked at the
<a href="https://firebase.google.com/docs/reference/admin" rel="nofollow noreferrer">admin sdk</a>
for both Python and Node.js.
While the Node SDK has a
<a href="https://firebase.google.com/docs/reference/admin/node/firebase-admin.eventarc" rel="nofollow noreferrer">module</a>
for publishing events, the Python SDK lacks any such functionality.</p>
<p>Am I missing something?
Is there any other method that I can use to achieve this?</p>
|
<python><firebase><google-cloud-functions>
|
2024-04-17 06:16:58
| 1
| 885
|
Hammad Ahmed
|
78,338,658
| 4,847,250
|
Is it possible to use Curve_fit on several signal without a for loop?
|
<p>I'm wondering if it is possible to do a curve fit for a whole matrix directly, without a for loop.
I would like to run the fit directly on an array where each row is a signal to estimate, so that at the end popt would be a 2×N array.</p>
<pre><code># -*- coding: utf-8 -*-
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.use('Qt5Agg')
import numpy as np
from scipy.optimize import curve_fit


def affine(x, a, b):
    return a*x + b


x = np.arange(0, 100, 10)
a = 10
b = -2
y = np.array([affine(x, a+i, b) + np.random.normal(0, 20, x.shape) for i in range(10)]).T

plt.figure()
plt.plot(x, y)

for i in range(10):
    a_est = 9
    b_est = -1
    popt, pov = curve_fit(affine, x, y[:, i], p0=[a_est, b_est])
    print(i, popt)

# optimisation without the for loop
popt, pov = curve_fit(affine, x, y, p0=[a_est, b_est])
</code></pre>
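<p><code>curve_fit</code> itself expects 1-D ydata, but for a model that is linear in its parameters, like this affine one, <code>np.polyfit</code> accepts a 2-D <code>y</code> and fits every column in one call, returning the coefficients as a 2×N array. A sketch of that special case (not a general <code>curve_fit</code> replacement, since it only covers polynomial models):</p>

```python
import numpy as np

def affine(x, a, b):
    return a * x + b

x = np.arange(0, 100, 10)
# 10 noisy signals, one per column, with slopes 10..19 and intercept -2
y = np.array([affine(x, 10 + i, -2) + np.random.normal(0, 20, x.shape)
              for i in range(10)]).T

# Degree-1 fit of all columns at once: row 0 holds slopes, row 1 intercepts
coeffs = np.polyfit(x, y, 1)
print(coeffs.shape)  # (2, 10)
```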
|
<python><curve-fitting>
|
2024-04-17 05:57:40
| 0
| 5,207
|
ymmx
|
78,338,367
| 14,259,505
|
"ValueError: year -1 is out of range" -error while trying to import data
|
<p>There are invalid or unexpected values in the timestamp data being processed, such as negative years or null values. To address this issue, we need to ensure that the timestamp values are correctly formatted and within the valid range for conversion to Python datetime objects.</p>
<pre><code>import psycopg2
import pandas as pd

# Create a cursor object
cur = conn.cursor()

# Execute your SQL query to access a table
try:
    # Retrieve the minimum date
    cur.execute("""
        SELECT MIN(dt_date)
        FROM prod_external.tran_y_register
        WHERE entry_type = 1;
    """)
    min_date = cur.fetchone()[0]  # Get the minimum date from the result

    # Fetch all data from the table starting from the minimum date
    cur.execute("""
        SELECT *
        FROM prod_external.tran_y_register
        WHERE entry_type = 1
          AND dt_date >= %s
        ORDER BY dt_date;
    """, (min_date,))

    # Fetch rows as tuples
    rows = cur.fetchall()

    # Extract column names
    column_names = [desc[0] for desc in cur.description]

    # Convert rows to DataFrame
    df = pd.DataFrame(rows, columns=column_names)

    # Convert 'dt_date' column to Pandas Timestamp objects manually
    df['dt_date'] = pd.to_datetime(df['dt_date'], errors='coerce')  # Handle any invalid timestamps

    # Drop rows with NaT (invalid timestamps)
    df.dropna(subset=['dt_date'], inplace=True)

    # Print DataFrame
    print(df)
except psycopg2.Error as e:
    print("Error: Could not fetch data from the table")
    print(e)

# Close cursor and connection
cur.close()
conn.close()
</code></pre>
<p>I am getting this error: <code>ValueError: year -1 is out of range</code></p>
|
<python>
|
2024-04-17 04:32:19
| 0
| 391
|
Yogesh Govindan
|
78,338,208
| 1,088,076
|
In the Python attrs package, how do you add a field in the field_transformer function?
|
<p>The documentation for the python Attrs, the <a href="https://www.attrs.org/en/stable/extending.html#automatic-field-transformation-and-modification" rel="nofollow noreferrer">Automatic Field Transformation and Modification</a> section states</p>
<blockquote>
<p>...You can add converters, change types, and even remove attributes
completely or <strong>create new ones</strong>!" (emphasis added).</p>
</blockquote>
<p>It also states in the API documentation regarding <a href="https://www.attrs.org/en/stable/api.html#attrs.Attribute" rel="nofollow noreferrer">Attribute</a> that "You should never instantiate this class yourself."</p>
<p>However, if you attempt to add a field in the <code>field_transformer</code> function you cannot, at least with the <code>attrs.field()</code> function.</p>
<pre><code>import attrs


def transformer(cls, fields):
    fields.append(attrs.field(name='bar'))
    return fields


@attrs.define(field_transformer=transformer)
class A:
    pass


A(bar=1)
</code></pre>
<p>This fails with a <code>TypeError: field() got an unexpected keyword argument 'name'</code>. If you use "alias" instead you get <code>AttributeError: '_CountingAttr' object has no attribute 'name'</code>.</p>
<p>What is the proper way to add a field in <code>field_transformer</code> function?</p>
<p><strong>Edit:</strong>
I was able to get things to work with the following:</p>
<pre><code>import attrs


def transformer(cls, fields):
    ca = attrs.field()
    f = attrs.Attribute.from_counting_attr(name="foo", ca=ca, type=int)
    return [f, *fields]


@attrs.define(field_transformer=transformer)
class A:
    pass


A(foo=1)
</code></pre>
|
<python><python-attrs>
|
2024-04-17 03:31:59
| 1
| 1,911
|
slaughter98
|
78,337,846
| 8,076,158
|
Looping over a variable number of columns in a dataframe
|
<p>I want to speed up my row loop in a Pandas DataFrame, using the zip columns trick, like this:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [0, 1], "b": [2, 3]})
for a, b in zip(df["a"], df["b"]):
    pass
</code></pre>
<p>The dataframe has a variable number of columns. How can I unpack the row tuple into a, b, c, etc.?</p>
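<p>One way, assuming the column order in the frame is what matters, is to star-unpack all columns into <code>zip</code> and then unpack each row tuple with extended unpacking; <code>itertuples</code> gives the same rows with named access. A sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 1], "b": [2, 3], "c": [4, 5]})

# Star-unpack every column into zip, whatever their number
for row in zip(*(df[col] for col in df.columns)):
    a, b, *rest = row  # first two columns by position, remainder in a list
    print(a, b, rest)

# Equivalent with itertuples: access columns by name (row.a, row.b, ...)
for row in df.itertuples(index=False):
    print(row.a, row.b)
```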
|
<python><pandas>
|
2024-04-17 00:47:47
| 2
| 1,063
|
GlaceCelery
|
78,337,824
| 4,855,843
|
Segmentation Fault when calling FFMPEG on Lambda - but not locally
|
<p>I am trying to extract the next 30 seconds of audio from a live video stream on YouTube using AWS Lambda. However, I'm facing an issue where Lambda does not wait for an FFmpeg subprocess to complete, unlike when running the same script locally. Below is a simplified Python script illustrating the problem:</p>
<pre><code>import subprocess
from datetime import datetime

def lambda_handler(event, context, streaming_url):
    ffmpeg_command = [
        "ffmpeg",
        "-loglevel", "error",
        "-i", streaming_url,
        "-t", "30",
        "-acodec", "pcm_s16le",
        "-ar", "44100",
        "-ac", "2",
        "/tmp/output.wav"
    ]
    print("Starting subprocess...")
    print(f"Start time: {datetime.now()}")
    subprocess.run(ffmpeg_command, capture_output=True)
    print(f"End time: {datetime.now()}")
</code></pre>
<p>In this script, I am using FFmpeg to capture audio from the specified streaming_url, converting it to a .wav format, and saving it as output.wav. When executed locally, this script waits until the subprocess finishes, evidenced by the significant time difference between the start and end print statements. However, when run on AWS Lambda, it proceeds almost immediately without waiting for the subprocess to complete, resulting in incomplete audio files.</p>
<p>Question: How can I ensure that AWS Lambda waits for the FFmpeg subprocess to fully execute before the function exits? I assume I'm not understanding correctly how Lambda handles subprocesses. I even tried adding a time.sleep(30) after the subprocess.run, but that didn't help. Is there a specific configuration or method to handle subprocesses in Lambda correctly?</p>
<p><strong>EDIT</strong>: With the help of the comments, I understood that in fact it's returning quickly in Lambda because of a segmentation fault, since it gives me a returncode of -11, so I edited the question and its title accordingly. Locally, there is no such error. I found out this is a similar situation to <a href="https://stackoverflow.com/questions/65719161/using-ffmpeg-with-url-input-causes-sigsegv-in-aws-lambda-python-runtime">Using FFmpeg with URL input causes SIGSEGV in AWS Lambda (Python runtime)</a>, but I'm still unable to solve it.</p>
|
<python><amazon-web-services><aws-lambda><ffmpeg><subprocess>
|
2024-04-17 00:33:52
| 0
| 489
|
thiagogps
|
78,337,737
| 1,498,830
|
How do I copy-paste a worksheet from one excel workbook to another using python?
|
<p>In Excel, I can manually:</p>
<ul>
<li>open the source workbook</li>
<li>select the sheet I want to copy</li>
<li>click the Select All button in the upper left corner</li>
<li>ctrl-c</li>
<li>open the target workbook</li>
<li>insert a new sheet using the tabs at the bottom</li>
<li>select the new sheet's A1 cell</li>
<li>ctrl-v</li>
</ul>
<p>and I have an identical copy of the sheet from one book to the other, down to formatting, colour, borders, formulae, dimensions, the works.</p>
<p>I need to automate this process over thousands of target workbooks.</p>
<p>I've tried using openpyxl, but all the solutions I've found explicitly copy individual cells with just a few properties instead of the whole sheet. For example, column widths and row heights are missed by most implementations, as are Excel features like collapsed row groups.</p>
<p>Are there python libraries, (or libraries for other languages really) that make this possible?</p>
|
<python><excel><automation><copy-paste><xlwings>
|
2024-04-16 23:43:29
| 1
| 2,962
|
spierepf
|
78,337,715
| 1,524,372
|
Adding a method to an existing enum in Python
|
<p>I have an existing enumeration with a number of values defined in package <code>b</code>, and I want to add behavior in the current package <code>a</code>. I understand that you can't directly subclass an enum with existing members but I am trying to reuse the existing names/values without having to duplicate them.</p>
<p>I found two workarounds that work, but the IDE is not aware of the method or the enumeration members.</p>
<ol>
<li>Simply monkey-patch the enum and add the method, e.g.:</li>
</ol>
<pre><code>def foo(cls, identifier: str):
    return a.A(namespace=cls.value, identifier=identifier)

from b.enumerations import B
B.__call__ = foo
</code></pre>
<p>This works but the IDE indicates that B is not callable.</p>
<ol start="2">
<li>Create a new enum from the members of the old enum with a callable type via the functional API, e.g.</li>
</ol>
<pre><code>class AEnumItem(str):
    def __call__(self, identifier: str) -> a.A:
        return a.A(
            namespace=str(self), identifier=identifier)

class AEnumBase(AEnumItem, enum.Enum):
    def __str__(self) -> str:
        return self.value

import b
B = AEnumBase('B', [(item.name, AEnumItem(item.value)) for item in b.B])
</code></pre>
<p>But this also does not let the IDE autocomplete the members of the new enum the way it does with the original <code>B</code>.</p>
<p>Is there anything simpler that I am missing that allows for the IDE to recognize the members of the new enum?</p>
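<p>For contrast, if I could modify package <code>b</code> itself, the documented mixin pattern (methods in a member-less base class) keeps both the runtime and the IDE happy — shown here only as a sketch of what I can't do from package <code>a</code>:</p>
<pre class="lang-py prettyprint-override"><code>import enum

class CallableMixin:
    # A base class with no members may be mixed into an Enum,
    # and type checkers see its methods on every member.
    def describe(self) -> str:
        return f"{self.name}={self.value}"

class B(CallableMixin, enum.Enum):
    X = "x"
    Y = "y"
</code></pre>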
|
<python><enums><subclassing>
|
2024-04-16 23:32:08
| 0
| 311
|
Paul
|
78,337,671
| 1,564,852
|
Python ffill with calculation for timestamp as fill value
|
<p>I have a table as follows</p>
<pre><code>Timestamp | record_id
01-04-2024 00:00 | 1
01-04-2024 00:01 | 2
01-04-2024 00:02 | 3
01-04-2024 00:03 | 4
N/A | 5
N/A | 6
01-04-2024 00:06 | 7
</code></pre>
<p>I know that the timestamp increments by 1 minute. Since I have missing data, I need to forward-fill it and add 1 minute to the previous values. I have consecutive N/A values. I tried a few solutions, including the following, but they do not seem to work:</p>
<pre><code>missing_mask = df['Timestamp'].isna()
df.loc[missing_mask, 'Timestamp'] = df.loc[missing_mask, 'Timestamp'].fillna(method='ffill') + pd.Timedelta(minutes=1)
</code></pre>
<p>Is there something obvious that I'm missing here?</p>
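<p>One direction I considered (a sketch, assuming the gaps are always whole minutes) is to offset the forward-filled value by the row distance from the last valid timestamp, so consecutive N/As each get their own increment:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

df = pd.DataFrame({
    'Timestamp': pd.to_datetime(['2024-04-01 00:00', '2024-04-01 00:01',
                                 None, None, '2024-04-01 00:06']),
    'record_id': [1, 2, 3, 4, 5],
})

# rows since the last non-null timestamp: 0 for valid rows, 1, 2, ... inside a gap
offset = df.groupby(df['Timestamp'].notna().cumsum()).cumcount()
df['Timestamp'] = df['Timestamp'].ffill() + pd.to_timedelta(offset, unit='min')
</code></pre>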
|
<python><dataframe><forward-fill>
|
2024-04-16 23:09:22
| 1
| 1,192
|
Visahan
|
78,337,596
| 5,224,236
|
2024: chrome not reachable python selenium
|
<p>I want to open a Chrome window with my settings and connect to it via Selenium.</p>
<p>I start Chrome then try to connect but I get this error:</p>
<pre><code>selenium.common.exceptions.WebDriverException:
Message: unknown error: cannot connect to chrome
at localhost:9014 from chrome not reachable.
</code></pre>
<p>This is the code.</p>
<pre><code>import os

from selenium import webdriver

options = webdriver.ChromeOptions()
os.system('2024\\startchrome_dev.bat')
options.add_experimental_option("debuggerAddress", "localhost:9014")
options.add_argument('--no-sandbox')
driver = webdriver.Chrome(options=options)
</code></pre>
<p>Chrome starts but Selenium is unable to reach it.</p>
<p>The content of <code>startchrome_dev.bat</code> is as follows.</p>
<pre><code>"C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe" --remote-debugging-port=9014 --user-data-dir="Profile 1"
</code></pre>
<p>Any idea?</p>
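<p>To rule out a race (the <code>.bat</code> returning before Chrome's DevTools port is actually listening), a small polling helper like this could be called before <code>webdriver.Chrome(...)</code> — a sketch:</p>
<pre class="lang-py prettyprint-override"><code>import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 15.0) -> bool:
    # Poll until something is listening on host:port, or give up.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False
</code></pre>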
|
<python><google-chrome><selenium-webdriver>
|
2024-04-16 22:38:37
| 1
| 6,028
|
gaut
|
78,337,532
| 689,242
|
Python language server is installed but not found by Helix text editor
|
<h2>Problem description</h2>
<p>I am using Helix text editor and am trying to add a Python language server (<a href="https://github.com/python-lsp/python-lsp-server" rel="nofollow noreferrer">link</a>) to it.</p>
<p>I installed Python 3 using the <code>.exe</code> from their official website. Then I executed these commands in order to install the Python language server:</p>
<pre><code>C:\Users\ZIGLA> pip config set global.trusted-host "pypi.org files.pythonhosted.org pypi.python.org"
C:\Users\ZIGLA> pip install pip-system-certs
C:\Users\ZIGLA> python.exe -m pip install python-lsp-server
</code></pre>
<p>Setup succeeded but when I try running the <code>pylsp</code> command my system can't find it. Helix editor also can't detect it when I check its health:</p>
<pre><code>C:\Users\ZIGLA> hx --health
Config file: C:\Users\ZIGLA\AppData\Roaming\helix\config.toml
Language file: C:\Users\ZIGLA\AppData\Roaming\helix\languages.toml
Log file: C:\Users\ZIGLA\AppData\Local\helix\helix.log
Runtime directories: C:\Users\ZIGLA\AppData\Roaming\helix\runtime;C:\Users\ZIGLA\AppData\Roaming\helix\runtime;\\?\C:\Users\ZIGLA\AppData\Roaming\helix\runtime
Clipboard provider: clipboard-win
System clipboard provider: clipboard-win
Language LSP DAP Highlight Textobject Indent
...
python โ pylsp None โ โ โ
...
</code></pre>
<h2>My attempt to solve the problem</h2>
<p>First I checked where <code>pip</code> installed the Python language server:</p>
<pre><code>C:\Users\ZIGLA> pip list -v
Package Version Location Installer
--------------------- ------- ------------------------------------------------------------- ---------
docstring-to-markdown 0.15 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
jedi 0.19.1 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
parso 0.8.4 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
pip 24.0 C:\Program Files\Python311\Lib\site-packages pip
pip-system-certs 4.0 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
pluggy 1.4.0 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
python-lsp-jsonrpc 1.1.2 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
python-lsp-server 1.11.0 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
setuptools 65.5.0 C:\Program Files\Python311\Lib\site-packages pip
ujson 5.9.0 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
wrapt 1.16.0 C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages pip
</code></pre>
<p>I inspect this folder:</p>
<pre><code>C:\Users\ZIGLA> ls C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages
Directory: C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 4/16/2024 8:59 PM docstring_to_markdown
d---- 4/16/2024 8:59 PM docstring_to_markdown-0.15.dist-info
d---- 4/16/2024 8:59 PM jedi
d---- 4/16/2024 8:59 PM jedi-0.19.1.dist-info
d---- 4/16/2024 8:59 PM parso
d---- 4/16/2024 8:59 PM parso-0.8.4.dist-info
d---- 4/16/2024 8:58 PM pip_system_certs
d---- 4/16/2024 8:58 PM pip_system_certs-4.0.dist-info
d---- 4/16/2024 8:59 PM pluggy
d---- 4/16/2024 8:59 PM pluggy-1.4.0.dist-info
d---- 4/16/2024 8:59 PM pylsp
d---- 4/16/2024 8:59 PM pylsp_jsonrpc
d---- 4/16/2024 8:59 PM python_lsp_jsonrpc-1.1.2.dist-info
d---- 4/16/2024 8:59 PM python_lsp_server-1.11.0.dist-info
d---- 4/16/2024 8:59 PM ujson-5.9.0.dist-info
d---- 4/16/2024 8:58 PM wrapt
d---- 4/16/2024 8:58 PM wrapt-1.16.0.dist-info
-a--- 4/16/2024 8:58 PM 115 pip_system_certs.pth
-a--- 4/16/2024 8:59 PM 70656 ujson.cp311-win_amd64.pyd
</code></pre>
<p>I can see that the Python language server was installed in the <code>pylsp</code> subfolder. However, when I inspect this folder I cannot see any executables; it looks like a plain Python module:</p>
<pre><code>C:\Users\ZIGLA> ls C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages\pylsp\
Directory: C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages\pylsp
Mode LastWriteTime Length Name
---- ------------- ------ ----
d---- 4/16/2024 8:59 PM __pycache__
d---- 4/16/2024 8:59 PM config
d---- 4/16/2024 8:59 PM plugins
-a--- 4/16/2024 8:59 PM 762 __init__.py
-a--- 4/16/2024 8:59 PM 3807 __main__.py
-a--- 4/16/2024 8:59 PM 9866 _utils.py
-a--- 4/16/2024 8:59 PM 23 _version.py
-a--- 4/16/2024 8:59 PM 2353 hookspecs.py
-a--- 4/16/2024 8:59 PM 2050 lsp.py
-a--- 4/16/2024 8:59 PM 35177 python_lsp.py
-a--- 4/16/2024 8:59 PM 2768 text_edit.py
-a--- 4/16/2024 8:59 PM 3834 uris.py
-a--- 4/16/2024 8:59 PM 21846 workspace.py
</code></pre>
<p>This module is what the Helix editor needs, which is why I tried adding its path to the <code>PATH</code> environment variable, by opening my PowerShell profile like this:</p>
<pre><code>hx $PROFILE
</code></pre>
<p>And adding this line inside:</p>
<pre><code>$ENV:PATH += ";C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages\pylsp"
</code></pre>
<p>I rebooted the terminal and I could see that path was appended at the end of the <code>PATH</code>:</p>
<pre><code>C:\Users\ZIGLA> echo $ENV:PATH
C:\Program Files\PowerShell\7;C:\Program Files\Microsoft MPI\Bin\;C:\Program Files\Python311\Scripts\;C:\Program Files\Python311\;C:\Program Files\Microsoft SDKs\Azure\CLI2\wbin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\System32\OpenSSH\;C:\Program Files (x86)\Citrix\HDX\bin\;C:\Program Files\Citrix\HDX\bin\;C:\Program Files (x86)\Citrix\Workspace Environment Management Agent;C:\Program Files\PowerShell\7\;C:\Program Files\Microsoft VS Code\bin;C:\Program Files\Azure Data Studio\bin;C:\Program Files\Azure Dev CLI\;C:\Program Files\PuTTY\;C:\Program Files\nodejs\;C:\Program Files\Microsoft SQL Server\Client SDK\ODBC\170\Tools\Binn\;C:\Program Files\dotnet\;C:\Program Files (x86)\Microsoft SQL Server\160\Tools\Binn\;C:\Program Files\Microsoft SQL Server\160\Tools\Binn\;C:\Program Files\Microsoft SQL Server\160\DTS\Binn\;C:\Program Files (x86)\Microsoft SQL Server\160\DTS\Binn\;C:\Program Files\Microsoft SQL Server\150\Tools\Binn\;C:\Program Files\Meld\;C:\Users\ZIGLA\AppData\Local\Programs\Python\Python312\Scripts\;C:\Users\ZIGLA\AppData\Local\Programs\Python\Python312\;C:\Users\ZIGLA\AppData\Local\Programs\Python\Launcher\;C:\Users\ZIGLA\AppData\Local\Microsoft\WindowsApps;C:\Users\ZIGLA\AppData\Local\GitHubDesktop\bin;C:\Users\ZIGLA\.dotnet\tools;C:\Users\ZIGLA\AppData\Local\Programs\Git\cmd;C:\Users\ZIGLA\AppData\Local\Microsoft\WinGet\Links;;C:\Users\ZIGLA\AppData\Roaming\helix;C:\Users\ZIGLA\AppData\Roaming\omnisharp-win-x64;C:\Program Files\Meld;C:\Program Files\Windows Application Driver;C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\IDE;C:\Users\ZIGLA\AppData\Local\DeviceTestCenter\Tools;C:\Program Files (x86)\Microsoft SQL Server Management Studio 19\Common7\IDE;C:\Users\ZIGLA\AppData\Roaming\Python\Python311\site-packages\pylsp
</code></pre>
<p>However, the symptoms remain: Helix cannot detect <code>pylsp</code> when checking its health.</p>
|
<python><helix-editor><pylsp>
|
2024-04-16 22:16:39
| 1
| 1,505
|
71GA
|
78,337,315
| 4,061,339
|
Disable a button only during an expensive operation in streamlit
|
<ul>
<li>python 3.12.2</li>
<li>streamlit 1.33.0</li>
</ul>
<p>I have an expensive operation and would like to disable the button that triggers it, but only while the operation is running. In other words, there is an enabled button. When you press it, it is disabled and the operation starts. When the operation ends, the button is enabled again.</p>
<p>This is the code I tried. The button was not disabled during the operation.</p>
<pre><code>import time
import streamlit as st
import streamlit.components.v1 as stc

def expensive_process():
    time.sleep(3)
    return 123

def main():
    st.session_state.disabled = False
    text_wait = st.empty()
    but_ph = st.empty()
    but = but_ph.button('calculate', disabled=st.session_state.disabled)
    if but:
        st.session_state.disabled = True
        text_wait.markdown('running...')
        result = expensive_process()
        st.session_state.disable = False
        text_wait.markdown(f'The result was {result}')

if __name__ == '__main__':
    main()
</code></pre>
<p>I googled "streamlit disable button after click" and checked the first several pages of results. They weren't much help. I'd appreciate your thoughts on this matter.</p>
|
<python><button><streamlit><isenabled>
|
2024-04-16 21:04:29
| 1
| 3,094
|
dixhom
|
78,337,034
| 10,145,953
|
Deskew image using opencv in Python
|
<p>I am preprocessing images of PDFs which will eventually have text extracted from them. I am using <code>opencv</code> for the bulk of the preprocessing work and due to constraints of the client environment, I can really only stick to using <code>opencv</code> for the image processing. I have a function (below) to deskew my image so that it is in the correct orientation and the text can be extracted further down the line (using <code>pytesseract</code>). Before the function is run, I convert the PDF into an image and resize it into a predetermined size (approximately the same size as the original). Both of these functions work perfectly fine and do not appear to be the culprit in my issues below.</p>
<p>Of all these images, there's an approx. 50-50 split between images that are perfectly oriented with text nice and straight in the image and then others that are slightly cooked enough to be visibly off and also cause issues when attempting to extract the text. Thus, the need for a deskew function. I reviewed <a href="https://becominghuman.ai/how-to-automatically-deskew-straighten-a-text-image-using-opencv-a0c30aed83df" rel="nofollow noreferrer">this article</a> and leveraged the code in the article to create the below:</p>
<pre><code># Code courtesy of Leo Ertuna for Becoming Human: Artificial Intelligence Magazine
## https://becominghuman.ai/how-to-automatically-deskew-straighten-a-text-image-using-opencv-a0c30aed83df

# Calculate skew angle of an image
def getSkewAngle(cvImage) -> float:
    # Prep image, copy, convert to gray scale, blur, and threshold
    newImage = cvImage.copy()
    gray = cv2.cvtColor(newImage, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (9, 9), 0)
    thresh = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

    # Apply dilate to merge text into meaningful lines/paragraphs.
    # Use larger kernel on X axis to merge characters into single line, cancelling out any spaces.
    # But use smaller kernel on Y axis to separate between different blocks of text
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (30, 5))
    dilate = cv2.dilate(thresh, kernel, iterations=5)

    # Find all contours
    contours, hierarchy = cv2.findContours(dilate, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)

    # Find largest contour and surround in min area box
    largestContour = contours[0]
    minAreaRect = cv2.minAreaRect(largestContour)

    # Determine the angle. Convert it to the value that was originally used to obtain skewed image
    angle = minAreaRect[-1]
    if angle < -45:
        angle = 90 + angle
    return -1.0 * angle

def rotateImage(cvImage, angle: float):
    newImage = cvImage.copy()
    (h, w) = newImage.shape[:2]
    center = (w // 2, h // 2)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    newImage = cv2.warpAffine(newImage, M, (w, h), flags=cv2.INTER_CUBIC, borderMode=cv2.BORDER_REPLICATE)
    return newImage

# Deskew image
def deskew(cvImage):
    angle = getSkewAngle(cvImage)
    if abs(angle) < 0.90:
        return rotateImage(cvImage, -1.0 * angle)
    else:
        return cvImage
</code></pre>
<p>I plugged in two images, one which is mostly at the proper angle and the other which is very clearly skewed.</p>
<pre><code>print(getSkewAngle(working_pdf))
working_pdf_ds = deskew(working_pdf)
getSkewAngle(working_pdf_ds)
</code></pre>
<p>The output for the perfectly oriented image of the above is</p>
<pre><code>-0.061542168259620667
-90.0
</code></pre>
<p>and the output for the crooked image is</p>
<pre><code>-89.61195373535156
-90.0
</code></pre>
<p>Based on this new angle, I'd expect the image is now straightened and looks like the perfectly oriented image. However, when I look at the images themselves after being deskewed, the clearly skewed image is now rotated at a 270 degree angle (clockwise). I tried attempting to re-run the function on the skewed image to re-orient it, but when it was deskewed it cropped off the top and bottom of the image.</p>
<p>I'm not sure exactly where I've gone wrong with this or how to even begin to fix this, so would greatly appreciate any advice that can be shared.</p>
<p>The actual PDFs/images I am using contain PII, but I was able to replicate the issue with a sample image from the article from where I got the code and ran into the same issues.</p>
<p><strong>Original</strong></p>
<p><a href="https://i.sstatic.net/jnx8i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jnx8i.png" alt="enter image description here" /></a></p>
<p><strong>After running the deskew function</strong></p>
<p><a href="https://i.sstatic.net/ozo7t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ozo7t.png" alt="enter image description here" /></a></p>
<p>When printing the original angle of this sample image and then the angle after deskewing, I got the following output:</p>
<pre><code>-90.0
-0.0
</code></pre>
<p>ETA: I ended up creating a function that tries both the original code I've been using and the code on pyimagesearch suggested in the comments, then uses tesseract to read some text in the image to determine which produces the "best" skew.</p>
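<p>Part of the confusion may be the <code>minAreaRect</code> angle convention: as far as I can tell, OpenCV 4.5+ returns angles in <code>(0, 90]</code> instead of the old <code>[-90, 0)</code>, so the article's <code>angle &lt; -45</code> branch never fires and a nearly-straight page can report as ~-90. A convention-agnostic normalisation, as a sketch:</p>
<pre class="lang-py prettyprint-override"><code>def normalize_angle(angle: float) -> float:
    # Map minAreaRect's angle (either convention) to the smallest
    # rotation, in degrees, that levels the box: result in (-45, 45].
    angle = angle % 90          # fold both conventions into [0, 90)
    if angle > 45:
        angle -= 90
    return angle
</code></pre>
<p>With this, both -0.06 (old convention, basically straight) and 89.6 (new convention, slightly skewed) come out as small corrections instead of a spurious 90-degree turn.</p>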
|
<python><opencv><image-processing>
|
2024-04-16 19:49:45
| 1
| 883
|
carousallie
|
78,336,924
| 651,174
|
Partition by column, order by another
|
<p>I am simulating a SQL query with the following values:</p>
<pre><code>rows = [
    (1, '2021/04', 'Shop 2', 341227.53), (2, '2021/05', 'Shop 2', 315447.24),
    (3, '2021/06', 'Shop 1', 1845662.35), (4, '2021/04', 'Shop 2', 21487.63),
    (5, '2021/05', 'Shop 1', 1489774.16), (6, '2021/06', 'Shop 1', 52489.35),
    (7, '2021/04', 'Shop 1', 154552.82), (8, '2021/05', 'Shop 2', 6548.49),
    (9, '2021/06', 'Shop 2', 387779.49),
]
</code></pre>
<p>I want to build a dictionary version of a 'window' function. It should partition on the third column (e.g. 'Shop 1') and order by the second column (e.g. '2021/06').</p>
<p>So, it should look like this:</p>
<pre><code>{
'Shop 1': ['2021/04', '2021/05', ...],
'Shop 2': [...],
...
}
</code></pre>
<p>Is there a way to do that such that I could define a lambda function taking two arguments, for example:</p>
<pre><code>window_func = lambda partition_func, order_func: ...
</code></pre>
<p>The above <code>partition_func</code> would be <code>item[2]</code> and the <code>order_func</code> would be <code>item[1]</code>.</p>
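<p>To make the interface concrete, this is roughly what I mean, in plain Python (names are just placeholders, and I've trimmed the sample rows):</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict

rows = [(1, '2021/04', 'Shop 2', 341227.53),
        (3, '2021/06', 'Shop 1', 1845662.35),
        (7, '2021/04', 'Shop 1', 154552.82)]

def window_func(rows, partition_func, order_func):
    # group rows by the partition key, collecting the order keys in sorted order
    groups = defaultdict(list)
    for row in sorted(rows, key=order_func):
        groups[partition_func(row)].append(order_func(row))
    return dict(groups)

result = window_func(rows, lambda item: item[2], lambda item: item[1])
</code></pre>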
|
<python><python-3.x><lambda><functional-programming>
|
2024-04-16 19:24:09
| 1
| 112,064
|
David542
|
78,336,798
| 13,349,653
|
Mysterious "Killed" message
|
<p>I am trying to run a program that consumes a large, but not infeasible amount of memory (see below). I checked that my computer has <code>15.6GiB</code> according to <code>htop</code>. However, this exits with the message <code>Killed</code>. Why is this happening and how can I confirm it?</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import numpy as np
x = np.arange(1_000_000_000)
start = datetime.datetime.now()
(x + x).sum()
print(datetime.datetime.now() - start)
</code></pre>
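<p>My rough memory estimate, which is what made me think it should fit:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

n = 1_000_000_000
itemsize = np.dtype(np.int64).itemsize   # 8 bytes per element
x_gib = n * itemsize / 2**30             # ~7.45 GiB for x alone
peak_gib = 2 * x_gib                     # plus the temporary created by (x + x)
</code></pre>
<p>So with the temporary from <code>(x + x)</code> live at the same time as <code>x</code>, the peak is roughly 14.9 GiB — uncomfortably close to the 15.6 GiB total, which makes the OOM killer seem like a plausible culprit.</p>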
|
<python><numpy><out-of-memory>
|
2024-04-16 18:57:35
| 0
| 1,788
|
Test
|
78,336,767
| 23,260,297
|
Strip all column values, but check if any tuples exist
|
<p>I have multiple dataframes that use the same functions to strip all the values. However, some of the values are tuples and it is throwing an error whenever I come across stripping a tuple.</p>
<p>My function looks like this:</p>
<pre><code>for col in df.columns:
    if df[col].dtype == 'object':
        df[col] = df[col].apply(lambda x: str(x).strip())
</code></pre>
<p>I have another function for a specific dataframe that will always have tuples:</p>
<pre><code>for col in df.columns:
    if df[col].dtype == 'object':
        df[col] = df[col].apply(lambda x: str(x).strip())

df['Commodity'] = [(x.strip() for x in ls) for ls in df['Commodity'].values]
df['Commodity'] = df['Commodity'].apply(sorted)
</code></pre>
<p>The second function will always throw an error when it gets to the Commodity column because it will not handle the tuples. How can I handle that specific case in the first function?</p>
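<p>The behaviour I'm after, sketched as a single type-aware helper (this is not what I currently use, just to illustrate the goal):</p>
<pre class="lang-py prettyprint-override"><code>def strip_any(value):
    # Strip strings, recurse into tuples/lists, leave everything else alone.
    if isinstance(value, str):
        return value.strip()
    if isinstance(value, (tuple, list)):
        return type(value)(strip_any(v) for v in value)
    return value
</code></pre>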
|
<python><pandas>
|
2024-04-16 18:50:15
| 1
| 2,185
|
iBeMeltin
|
78,336,597
| 1,120,370
|
How do I use PymuPDF/fitz to denormalize xobjects?
|
<p>I have a PDF that contains some xobjects. I need to denormalize them, i.e. make a copy of each xobject in every place it appears in the document. How can I do this using <a href="https://pymupdf.readthedocs.io/en/latest/" rel="nofollow noreferrer">PyMuPDF/fitz</a>?</p>
<p>I've been reading the documentation and asking ChatGPT, but this has only got me part of the way there. I can find the xobjects like so:</p>
<pre><code>for page_number in range(len(pdf_document)):
    page = pdf_document.load_page(page_number)
    xobjects = page.get_xobjects()
    for xobject in xobjects:
        ...
</code></pre>
<p>ChatGPT tells me to extract the stream from the individual xobjects and then insert them into a copy of the original PDF, which sounds correct, but it doesn't know the correct syntax to do this, and I haven't been able to figure it out.</p>
<p>Can someone give me a code snippet that denormalizes PDF xobjects?</p>
|
<python><pdf><pymupdf>
|
2024-04-16 18:13:33
| 0
| 17,226
|
W.P. McNeill
|
78,336,492
| 2,751,433
|
Unable to cancel asyncio.task from another function
|
<p>I was expecting to cancel <code>foo_wrapper</code> using the <code>cancel</code> function; however, it still produces a result.</p>
<p>Changing <code>foo_wrapper</code> to a non-async function and <code>self.lock</code> to a <code>threading.Lock</code> solves the issue. Any idea?</p>
<pre><code>import asyncio
from typing import List, Optional

class Bar:
    def __init__(self):
        self.tasks = set()
        self.lock = asyncio.Lock()

    async def foo(self) -> str:
        # Simulate some asynchronous operation
        await asyncio.sleep(2)
        return "response"

    async def foo_wrapper(self) -> str:
        task = asyncio.create_task(self.foo())
        async with self.lock:
            self.tasks.add(task)
        return task

    def cancel(self):
        for task in self.tasks:
            print(f'cancel {task}')
            print(task.cancel())

async def main():
    bar = Bar()
    c = bar.foo_wrapper()
    bar.cancel()
    try:
        print(await (await c))
    except asyncio.CancelledError:
        print('Canceled 2')

if __name__ == "__main__":
    asyncio.run(main())
</code></pre>
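<p>For contrast, this minimal version (no wrapper or lock, and letting the task actually start before cancelling) behaves as I would expect:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio

async def foo() -> str:
    await asyncio.sleep(2)
    return "response"

async def main() -> str:
    task = asyncio.create_task(foo())
    await asyncio.sleep(0)   # yield once so the task gets scheduled
    task.cancel()
    try:
        return await task
    except asyncio.CancelledError:
        return "cancelled"

result = asyncio.run(main())
</code></pre>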
|
<python><python-asyncio>
|
2024-04-16 17:53:14
| 2
| 673
|
flexwang
|
78,336,105
| 929,122
|
Generate dynamic tasks with Airflow with .expand
|
<p>I have the a DAG with the following structure:</p>
<pre class="lang-py prettyprint-override"><code>with DAG(
    # configs go here
) as dag:

    def get_intervals():
        # logic to retrieve some data from CloudSQL
        return intervals  # intervals looks like: [['0'], ['10'], ['50'], ['100']]

    def extract_data(table_name, date, interval, **context):
        query = f"SELECT * FROM {table_name} WHERE my_date = {date} AND my_interval = {interval}"
        # logic to extract from CloudSQL into GCS / works fine with hardcoded parameters

    get_intervals_task = PythonOperator(
        task_id='task_1',
        python_callable=get_intervals
    )

    extract_data_task = PythonOperator.partial(
        task_id='task_2',
        python_callable=extract_data,
        op_kwargs={
            'table_name': 'my_table',
            'date': '2023-01-01'}
    ).expand(op_args=get_intervals_task.output)

    get_intervals_task >> extract_data_task
</code></pre>
<p>I'm getting two errors:</p>
<pre class="lang-py prettyprint-override"><code>TypeError: extract_data missing 1 required positional argument: 'interval'
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>ValueError: The key 'table_name' in args is a part of kwargs and therefore reserved.
</code></pre>
<p>I'm not sure what I'm doing wrong. My expectation is that the provided <code>op_kwargs</code> are kept the same for each generated task, while the list provided to <code>expand</code> is used to generate the dynamic tasks.</p>
<p>Any help is appreciated.</p>
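<p>For clarity, this is my mental model of what <code>.expand()</code> should produce, written as plain Python: one merged kwargs dict per interval, with the constants from <code>op_kwargs</code> repeated each time.</p>
<pre class="lang-py prettyprint-override"><code>def expand_calls(partial_kwargs, expand_key, values):
    # one call per mapped value, each merged with the constant kwargs
    return [{**partial_kwargs, expand_key: value} for value in values]

calls = expand_calls({'table_name': 'my_table', 'date': '2023-01-01'},
                     'interval', [['0'], ['10'], ['50'], ['100']])
</code></pre>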
|
<python><airflow><google-cloud-composer>
|
2024-04-16 16:39:30
| 0
| 437
|
drake10k
|
78,335,986
| 11,814,996
|
Show more minor/major tick labels when using log10 scale with matplotlib.axes.Axes.set_yscale
|
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from matplotlib import pyplot as plt

## making data
temp_df = pd.DataFrame({'year': [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012],
                        'numbers': [2684044.85, 4005117.02, 3046403.46, 6413495.07, 14885525.21, 5152235.33, 4040850.37,
                                    4616841.13, 4862062.25, 4702041.0, 5201620.42, 4405820.64, 1454650.88]})

## making plot
fig, ax = plt.subplots(figsize=(14, 7))
# plot x and y1
ax.plot('year', 'numbers', 'x-', data=temp_df)
ax.set_yscale('log', base=10)
</code></pre>
<p>This produces
<a href="https://i.sstatic.net/rhZp0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rhZp0.png" alt="enter image description here" /></a></p>
<p>There are no minor tick labels on the y-axis; with other data I get more major and minor tick labels. How can I force more tick labels on the y-axis in log-10 scale dynamically, without hardcoding the numbers?</p>
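<p>For reference, this is the kind of explicit locator/formatter setup I was hoping to avoid hardcoding (a sketch; I'd still prefer something that adapts to the data range):</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib
matplotlib.use('Agg')  # headless, just for this example
import matplotlib.pyplot as plt
from matplotlib import ticker

fig, ax = plt.subplots()
ax.plot([2000, 2001], [2.7e6, 1.5e7])
ax.set_yscale('log', base=10)

# label the 2..9 minor ticks inside each decade as well
ax.yaxis.set_minor_locator(ticker.LogLocator(base=10, subs=range(2, 10)))
ax.yaxis.set_minor_formatter(ticker.ScalarFormatter())
fig.canvas.draw()
</code></pre>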
<p>my package versions are below on Python 3.10.12</p>
<pre><code>matplotlib 3.5.2
matplotlib-inline 0.1.6
pandas 1.4.4
</code></pre>
|
<python><matplotlib>
|
2024-04-16 16:16:24
| 1
| 3,172
|
Naveen Reddy Marthala
|
78,335,850
| 8,382,452
|
How to multiprocess/multithread documents loading in chromadb?
|
<p>I'm creating an application with LangChain, ChromaDB and Ollama with the Mistral model, where I have dozens of PDF files, each with a lot of pages. The problem is that it takes a lot of time (34 minutes to load 30 PDF files into the vector database), and the Streamlit application also waits all this time before loading.</p>
<p>Is there any way to parallelize this database stuff to make all the process faster (regarding the gpu being a real limitation)? How can I separate the streamlit app from the vector database? What's the best way to do this?</p>
<p>That's the way I'm loading the documents:</p>
<pre><code># document_loader.py
...
def load_local_documents(self):
    loader = DirectoryLoader("test_files/", glob="**/*.pdf")
    self._documents = loader.load()

def get_text_splitted(self):
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=50
    )
    return text_splitter.split_documents(self._documents)

# vector_database.py
...
def create_vector_db(self, embeddings):
    self._vector_db = Chroma(persist_directory="chroma_db",
                             embedding_function=embeddings)
    self._vector_db.persist()

def create_vector_db_from_documents(self, texts, embeddings):
    self._vector_db = Chroma.from_documents(
        documents=texts,
        embedding=embeddings,
        persist_directory="chroma_db"
    )
    self._vector_db.persist()
</code></pre>
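<p>What I had in mind for the loading step, sketched with a thread pool (assuming the parser releases the GIL; otherwise a process pool would be the variant to try). Here <code>load_and_split</code> is a placeholder standing in for my real per-file loader + splitter:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor

def load_and_split(path):
    # placeholder for "parse one PDF and return its chunks";
    # the real version would call the loader + RecursiveCharacterTextSplitter
    return [f"{path}:chunk-0", f"{path}:chunk-1"]

def load_all(paths, workers=4):
    # files are independent, so parse them concurrently and hand the
    # combined chunks to Chroma.from_documents once at the end
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(load_and_split, paths)  # preserves input order
        return [chunk for chunks in results for chunk in chunks]
</code></pre>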
|
<python><langchain><large-language-model><chromadb>
|
2024-04-16 15:53:09
| 1
| 345
|
Yuri Costa
|
78,335,778
| 22,466,650
|
How to create a summary table from a dictionary of lists with different len?
|
<p>My input is this dict:</p>
<pre><code>response = {
'A': ['CATEGORY 2'],
'B': ['CATEGORY 1', 'CATEGORY 2'],
'C': [],
'D': ['CATEGORY 3'],
}
</code></pre>
<p>And I'm trying to make this kind of dataframe :</p>
<pre><code>| ITEM | CATEGORY 1 | CATEGORY 2 | CATEGORY 3 |
| A | | x | |
| B | x | x | |
| C | | | |
| D | | | x |
</code></pre>
<p>For that I wrote the code below, but the result was not at all what I expected:</p>
<pre><code>df = pd.DataFrame.from_dict(response, orient='index').fillna('x')
df = df.reset_index()
df = df.rename(columns={'index': 'ITEM'})
print(df)

  ITEM           0           1
0    A  CATEGORY 2           x
1    B  CATEGORY 1  CATEGORY 2
2    C           x           x
3    D  CATEGORY 3           x
</code></pre>
<p>Do you have a solution for that? I'm open to any suggestions.</p>
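<p>One direction I've been experimenting with, in case it helps frame the question (a sketch using <code>explode</code> + <code>crosstab</code>; I'm not sure it's idiomatic, and the empty item 'C' needs a reindex to survive):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

response = {
    'A': ['CATEGORY 2'],
    'B': ['CATEGORY 1', 'CATEGORY 2'],
    'C': [],
    'D': ['CATEGORY 3'],
}

s = pd.Series(response).explode()                    # one (item, category) pair per row
table = (pd.crosstab(s.index, s)
           .reindex(list(response), fill_value=0)    # keep empty items like 'C'
           .replace({1: 'x', 0: ''})
           .rename_axis('ITEM')
           .reset_index())
</code></pre>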
|
<python><pandas>
|
2024-04-16 15:40:50
| 1
| 1,085
|
VERBOSE
|
78,335,566
| 4,442,337
|
Django: how to disable the HTTP_HOST check on a specific endpoint?
|
<p>I defined a super simple healthcheck endpoint in my Django app (<code>urls.py</code>) to be used in a docker compose environment, like:</p>
<pre class="lang-yaml prettyprint-override"><code>services:
django:
build:
context: .
dockerfile: ./compose/production/django/Dockerfile
depends_on:
db:
condition: service_healthy
env_file:
- ./.envs/.production/.django
command: /start
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:5000/healthcheck/"]
start_period: 20s
interval: 15s
timeout: 10s
retries: 5
</code></pre>
<p><code>start.sh</code></p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python /app/manage.py collectstatic --noinput
exec /usr/local/bin/gunicorn config.asgi --bind 0.0.0.0:5000 --chdir=/app -k uvicorn.workers.UvicornWorker
</code></pre>
<pre class="lang-py prettyprint-override"><code># ... various imports

def healthcheck(request):
    return HttpResponse()

urlpatterns = [
    # Django Admin, use {% url 'admin:index' %}
    path(settings.ADMIN_URL, admin.site.urls),
    # User management
    path("accounts/", include("allauth.urls")),
    # Hijack
    path("hijack/", include("hijack.urls")),
    # Healthcheck
    path("healthcheck/", healthcheck, name="healthcheck"),
    # Media files
    *static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT),
]

if settings.DEBUG:
    # Static file serving when using Gunicorn + Uvicorn for local web socket development
    urlpatterns += staticfiles_urlpatterns()
</code></pre>
<p>In a local environment there are no issues since <code>localhost</code> is in the <code>ALLOWED_HOSTS</code> configuration variable. But in a production environment, adding "localhost" to the allowed hosts list doesn't seem like good practice.</p>
<p>Since the healthcheck endpoint is only used internally is there a way to resolve this? On the top of my head I'm thinking of something to skip this check just for that endpoint but I don't know how or if it is even possible. Maybe there are even other better solutions?</p>
|
<python><django><docker-compose>
|
2024-04-16 15:09:35
| 0
| 2,191
|
browser-bug
|
78,335,536
| 8,721,169
|
Can I profile my pip requirements installation to see which ones take time?
|
<p>I'm looking for a way to sort the requirements of a project according to the time they typically need to be downloaded and installed.</p>
<p>I'm aware this time may have a huge variance depending on a lot of external parameters, but I guess it should still be possible to tell which packages cost the most time.</p>
<p>It would help a lot to decrease my build time (on containers), since I could find workarounds to avoid some packages, if only I knew which ones are the most time-consuming at build time.</p>
<p>The size of the packages can be found easily, but the time of installation itself is not displayed by package. Is there simply a proportional relationship between both?</p>
<hr />
<p><strong>EDIT</strong></p>
<p>Due to the technical stack we are using here (GCP Cloud Functions, a high-level managed system), I do not have much control over the way containers are built, which is why I'm looking at optimising requirements.</p>
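<p>The brute-force sketch I've considered so far is to install each requirement separately with <code>--no-cache-dir</code> and time it (with the caveat that earlier installs pull in shared dependencies, which skews later timings):</p>

```python
import subprocess
import sys
import time

def read_requirements(path):
    """Return non-empty, non-comment requirement lines."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.lstrip().startswith("#")]

def time_installs(reqs):
    """Install each requirement on its own and record wall-clock time."""
    timings = []
    for req in reqs:
        start = time.perf_counter()
        subprocess.run(
            [sys.executable, "-m", "pip", "install", "--no-cache-dir", req],
            capture_output=True,
        )
        timings.append((time.perf_counter() - start, req))
    return sorted(timings, reverse=True)

# Usage (in a project directory):
#   for secs, req in time_installs(read_requirements("requirements.txt")):
#       print(f"{secs:8.1f}s  {req}")
```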
|
<python><pip><google-cloud-functions><requirements.txt>
|
2024-04-16 15:04:21
| 0
| 447
|
XGeffrier
|
78,335,498
| 10,927,457
|
Interpolation between two Dataframes (years with values) in Pyspark
|
<p>How can I implement linear interpolation between two PySpark DataFrames representing data for different years, say 2020 and 2030, to generate a new PySpark DataFrame for an intermediary year like 2025? Both DataFrames have identical structures with numeric values. The years have the same granularity.</p>
<p>My initial approach involved <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.interpolate.html" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/api/python/reference/pyspark.pandas/api/pyspark.pandas.DataFrame.interpolate.html</a></p>
<p>Is this the recommended way?</p>
<p>I wrote this Pandas method a while back, but I need to migrate it to PySpark, and I'm struggling to implement the same logic there.</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import DataFrame

def interpolate_between_years(first: DataFrame, second: DataFrame) -> DataFrame:
years = [first.index.year[0], second.index.year[0]]
interpolated_df = (
pd.concat(
[first.reset_index(drop=True), second.reset_index(drop=True)],
keys=years,
axis=1,
)
.T.reindex(np.arange(years[0], years[1] + 1))
.interpolate()
)
return interpolated_df
</code></pre>
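<p>For reference, here is a direct PySpark formulation I've sketched so far: join the two years on their key columns and linearly interpolate each numeric column. The function signature and column-list parameters are my own placeholders, not from the original code:</p>

```python
def interp_weight(year_first: int, year_second: int, target_year: int) -> float:
    """Fraction of the way from year_first to year_second (0.5 for 2025)."""
    return (target_year - year_first) / (year_second - year_first)

def interpolate_between_years_spark(
    df_first, df_second, year_first, year_second, target_year, key_cols, value_cols
):
    # pyspark is imported lazily so interp_weight stays usable without Spark.
    from pyspark.sql import functions as F

    w = interp_weight(year_first, year_second, target_year)
    joined = df_first.alias("a").join(df_second.alias("b"), on=key_cols)
    interpolated = [
        (F.col(f"a.{c}") + (F.col(f"b.{c}") - F.col(f"a.{c}")) * F.lit(w)).alias(c)
        for c in value_cols
    ]
    return joined.select(*key_cols, *interpolated)
```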
|
<python><dataframe><apache-spark><pyspark><interpolation>
|
2024-04-16 14:58:34
| 1
| 582
|
Pfinnn
|
78,335,474
| 8,831,742
|
python-constraint not solving the n-queens puzzle
|
<p>I'm using the <code>python-constraint</code> library to try to solve the <a href="https://en.wikipedia.org/wiki/Eight_queens_puzzle" rel="nofollow noreferrer">n-queens problem</a>
(n queens are placed on an n-by-n board and they must be arranged in such a way that they don't menace each other)</p>
<p>My formulation is as such:</p>
<pre class="lang-py prettyprint-override"><code>from constraint import *
n = 8
problem = Problem()
problem.addVariables(range(n),range(n))
for i in range(n):
for j in range(i):
problem.addConstraint(lambda a,b:a!=b,(i,j))
problem.addConstraint(lambda a,b:abs(a-b)!=abs(i-j),(i,j))
</code></pre>
<p>However, when I try to get the solution with <code>problem.getSolution()</code> I just get <code>None</code>. What am I doing wrong?</p>
|
<python><constraint-programming><python-constraint>
|
2024-04-16 14:55:29
| 1
| 353
|
none none
|
78,335,413
| 6,741,546
|
Google Cloud Dataflow : ModuleNotFoundError: No module named 'package'
|
<p>I'm building a Google Cloud Dataflow pipeline using Python 3.8 to parse data from a pub/sub topic. I followed every tutorial about it, it compiles and launches, but when executed I always get:</p>
<pre><code>LoadMainSessionException: Could not load main session. Inspect which external dependencies are used in the main module of your pipeline. Verify that corresponding packages are installed in the pipeline runtime environment and their installed versions match the versions used in pipeline submission environment. For more information, see: https://beam.apache.org/documentation/sdks/python-pipeline-dependencies/
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker_main.py", line 115, in create_harness
    _load_main_session(semi_persistent_directory)
  File "/usr/local/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker_main.py", line 354, in _load_main_session
    pickler.load_session(session_file)
  File "/usr/local/lib/python3.8/site-packages/apache_beam/internal/pickler.py", line 65, in load_session
    return desired_pickle_lib.load_session(file_path)
  File "/usr/local/lib/python3.8/site-packages/apache_beam/internal/dill_pickler.py", line 446, in load_session
    return dill.load_session(file_path)
  File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 368, in load_session
    module = unpickler.load()
  File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 472, in load
    obj = StockUnpickler.load(self)
  File "/usr/local/lib/python3.8/site-packages/dill/_dill.py", line 827, in _import_module
    return getattr(__import__(module, None, None, [obj]), obj)
ModuleNotFoundError: No module named 'package'
at ._request_process_bundle ( /usr/local/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py:331 )
at .run ( /usr/local/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker.py:281 )
at .main ( /usr/local/lib/python3.8/site-packages/apache_beam/runners/worker/sdk_worker_main.py:212 )
</code></pre>
<p>I tried adding and removing <code>save_main_session</code>, the error happens at a different time but still there. Here is my tree:</p>
<pre><code>├── Dockerfile
├── Dockerfile.dev
├── cloudbuild.yaml
├── compose.dev.yaml
├── compose.yaml
├── main.py
├── metadata.json
├── package
│   ├── __init__.py
│   ├── api.py
│   ├── launcher.py
│   ├── parser
│   │   ├── __init__.py
│   │   ├── constants.py
│   │   ├── lib.py
│   │   ├── parser_elementor.py
│   │   ├── parser_html.py
│   │   ├── parser_intent.py
│   │   ├── parser_text.py
│   │   ├── template_behavior.py
│   │   └── utilities.py
│   ├── pipeline.py
│   ├── transforms.py
│   └── utilities.py
├── pyproject.toml
├── requirements-test.txt
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    ├── parser_elementor_test.py
    ├── parser_elementor_test_file_1.json
    └── parser_html_test.py
</code></pre>
<p>Dockerfile for google cloud:</p>
<pre><code># syntax=docker/dockerfile:1
# Comments are provided throughout this file to help you get started.
# If you need more help, visit the Dockerfile reference guide at
# https://docs.docker.com/engine/reference/builder/
ARG PYTHON_VERSION=3.8
FROM python:${PYTHON_VERSION}-slim as base
# Copy SDK entrypoint binary from Apache Beam image, which makes it possible to
# use the image as SDK container image. If you explicitly depend on
# apache-beam in setup.py, use the same version of Beam in both files.
COPY --from=apache/beam_python3.11_sdk:2.55.1 /opt/apache/beam /opt/apache/beam
# Copy Flex Template launcher binary from the launcher image, which makes it
# possible to use the image as a Flex Template base image.
COPY --from=gcr.io/dataflow-templates-base/python311-template-launcher-base:20230622_RC00 /opt/google/dataflow/python_template_launcher /opt/google/dataflow/python_template_launche
ARG WORKDIR=/app
WORKDIR ${WORKDIR}
RUN apt update && apt install -y \
build-essential \
gcc \
python3-dev \
libevent-dev
COPY requirements.txt .
COPY setup.py .
COPY main.py .
COPY package package
# Installing exhaustive list of dependencies from a requirements.txt
# helps to ensure that every time Docker container image is built,
# the Python dependencies stay the same. Using `--no-cache-dir` reduces image size.
RUN pip install --no-cache-dir -r requirements.txt
# Installing the pipeline package makes all modules encompassing the pipeline
# available via import statements and installs necessary dependencies.
# Editable installation allows picking up later changes to the pipeline code
# for example during local experimentation within the container.
RUN pip install -e .
# For more information, see: https://cloud.google.com/dataflow/docs/guides/templates/configuring-flex-templates
ENV FLEX_TEMPLATE_PYTHON_PY_FILE="${WORKDIR}/main.py"
ENV FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE="${WORKDIR}/requirements.txt"
ENV FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py"
# Because this image will be used as custom sdk container image, and it already
# installs the dependencies from the requirements.txt, we can omit
# the FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE directive here
# to reduce pipeline submission time.
# Similarly, since we already installed the pipeline package,
# we don't have to specify the FLEX_TEMPLATE_PYTHON_SETUP_FILE="${WORKDIR}/setup.py" configuration option.
# Optionally, verify that dependencies are not conflicting.
# A conflict may or may not be significant for your pipeline.
RUN pip check
# Optionally, list all installed dependencies.
# The output can be used to seed requirements.txt for reproducible builds.
RUN pip freeze
# Set the entrypoint to Apache Beam SDK launcher, which allows this image
# to be used as an SDK container image.
ENTRYPOINT ["/opt/apache/beam/boot"]
</code></pre>
<p>cloudbuild.yaml:</p>
<pre><code>steps:
- name: "gcr.io/cloud-builders/docker"
args:
["build", "-t", "gcr.io/$PROJECT_ID/parsing-pipeline-image:latest", "."]
dir: "."
env:
- "DOCKER_BUILDKIT=1"
id: "Build parsing pipeline image"
- name: "gcr.io/cloud-builders/docker"
args: ["push", "gcr.io/$PROJECT_ID/parsing-pipeline-image:latest"]
id: "Push parsing image"
- name: "gcr.io/cloud-builders/gcloud"
args:
- dataflow
- flex-template
- build
- gs://reversia-pipelines-templates/reversia-parsing-pipeline.json
- --image-gcr-path=gcr.io/$PROJECT_ID/parsing-pipeline-image:latest
- --sdk-language=PYTHON
- --flex-template-base-image=PYTHON3
- --py-path=.
- --metadata-file=metadata.json
- --env=FLEX_TEMPLATE_PYTHON_PY_FILE=main.py
- --env=FLEX_TEMPLATE_PYTHON_REQUIREMENTS_FILE=requirements.txt
id: "Deploy Dataflow Flex Template"
</code></pre>
<p>main.py:</p>
<pre><code>import logging
from package import launcher
if __name__ == "__main__":
logging.getLogger().setLevel(logging.INFO)
launcher.run()
</code></pre>
<p>setup.py:</p>
<pre><code>"""Defines a Python package for an Apache Beam pipeline."""
import setuptools
setuptools.setup(
name="package",
version="0.2.0",
install_requires=[
"apache-beam[gcp]==2.55.1", # Must match the version in `Dockerfile``.
"requests",
"google-cloud-logging",
"beautifulsoup4",
"google-cloud-secret-manager",
"dataclasses_json",
"typing_extensions"
],
packages=setuptools.find_packages(),
)
</code></pre>
<p>package.launcher.py:</p>
<pre><code>"""Defines command line arguments for the pipeline defined in the package."""
import argparse
import os
from typing import List
from package import pipeline
def run(argv: "List[str] | None" = None):
"""Parses the parameters provided on the command line and runs the pipeline."""
parser = argparse.ArgumentParser()
parser.add_argument('--reversia_api_url', help='Reversia API URL', required=True)
parser.add_argument('--topic', help='Topic of the data', required=True)
pipeline_args, other_args = parser.parse_known_args(argv)
os.environ['REVERSIA_API_URL'] = pipeline_args.reversia_api_url
parsing_pipeline = pipeline.parsing_pipeline(pipeline_args.topic, other_args)
parsing_pipeline.run()
</code></pre>
<p>package.pipeline.py:</p>
<pre><code>import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from typing import List
from package import transforms
def parsing_pipeline(
topic: str,
pipeline_options_args: List[str]
):
pipeline_options = PipelineOptions(pipeline_options_args, streaming=True, save_main_session=True)
pipeline = beam.Pipeline(options=pipeline_options)
_ = (
pipeline
| 'Read from Pub/Sub' >> beam.io.ReadFromPubSub(topic)
| "Parse raw cntent" >> transforms.Parse()
.with_output_types(bool)
)
return pipeline
</code></pre>
<p>package.transforms.py:</p>
<pre><code>import apache_beam as beam
from dataclasses import dataclass
from dataclasses_json import dataclass_json, LetterCase
from typing import Dict
from package.api import filter_by_content_fields_hash_diff, update_by_content_hashes_diff
from package.parser.utilities import is_json, contains_html
from package.parser.parser_elementor import parse_elementor
from package.parser.parser_html import parse_html
from package.parser.parser_text import parse_text
from package.utilities import get_secret
@dataclass
@dataclass_json(letter_case=LetterCase.CAMEL)
class ReceivedElement:
project_id: str
language_code: str
resource: str
resource_id: str
hashes: Dict[str, str]
content: Dict[str, str]
@dataclass
class ParsedElement:
project_id: str
language_code: str
resource: str
resource_id: str
parsed_fields: Dict[str, str]
class PreFilterDoFn(beam.DoFn):
def __init__(self):
self.agent_api_key = None
def setup(self):
self.agent_api_key = get_secret('my_project', 'workflow-payload-api-agent')
def process(self, encoded_element: bytes):
element = ReceivedElement.from_json(encoded_element.decode('utf-8'))
changed_fields = filter_by_content_fields_hash_diff(
self.agent_api_key,
element.project_id,
element.language_code,
element.resource,
element.id,
element.hashes
)
if changed_fields:
element.content.update({field: element.content[field] for field in changed_fields})
yield element
class ParseDoFn(beam.DoFn):
def process(self, element: ReceivedElement):
parsed_fields = {}
for field, content in element.content.items():
if is_json(content):
parsed_content = parse_elementor(content)
elif contains_html(content):
parsed_content = parse_html(content)
else:
parsed_content = parse_text(content)
parsed_fields[field] = parsed_content
yield {
'project_id': element.project_id,
'language_code': element.language_code,
'resource': element.resource,
'resource_id': element.resource_id,
'parsed_fields': parsed_fields
}
class SendDoFn(beam.DoFn):
def __init__(self):
self.agent_api_key = None
def setup(self):
self.agent_api_key = get_secret('my_project', 'workflow-payload-api-agent')
def process(self, element: ParsedElement):
yield update_by_content_hashes_diff(
self.agent_api_key,
element.project_id,
element.language_code,
element.resource,
element.resource_id,
element.parsed_fields
)
class Parse(beam.PTransform):
"""A composite transform that takes a PCollection of encoded elements"""
def expand(self, pcoll):
return (
pcoll
| 'Pre filter Data' >> beam.ParDo(PreFilterDoFn())
| 'Parse Data' >> beam.ParDo(ParseDoFn())
| 'Send to API' >> beam.ParDo(SendDoFn())
)
</code></pre>
<p>I specified everything in <code>setup.py</code> as described in the Beam and Dataflow docs, but it still doesn't seem to be taken into account.</p>
|
<python><docker><google-cloud-platform><google-cloud-dataflow>
|
2024-04-16 14:44:20
| 0
| 340
|
Jean Walrave
|
78,335,190
| 3,232,771
|
_repr_html_ not showing when custom __getattr__ implemented
|
<p>I'm trying to implement <code>_repr_html_</code> on a python class (<a href="https://ipython.readthedocs.io/en/stable/config/integrating.html" rel="nofollow noreferrer">docs</a>).</p>
<p>The class is a read-only facade for navigating a JSON-like object using attribute notation (based on example 19-5 from Fluent Python, Ramalho (O'Reilly)). It has a custom <code>__getattr__</code> method to achieve this behavior:</p>
<pre class="lang-py prettyprint-override"><code>from collections import abc
class FrozenJSON:
def __init__(self, mapping):
self._data = dict(mapping)
def __repr__(self):
return "FrozenJSON({})".format(repr(self._data))
def _repr_html_(self):
return (
"<ul>"
+ "\n".join(
f"<li><strong>{k}:</strong> {v}</li>"
for k, v in self._data.items()
)
+ "</ul>"
)
def __getattr__(self, name):
if hasattr(self._data, name):
return getattr(self._data, name)
else:
return FrozenJSON.build(self._data[name])
@classmethod
def build(cls, obj):
if isinstance(obj, abc.Mapping):
return cls(obj)
elif isinstance(obj, abc.MutableSequence):
return [cls.build(item) for item in obj]
else:
return obj
def __dir__(self):
return list(self._data.keys())
</code></pre>
<p>The class behaves like this:</p>
<pre class="lang-py prettyprint-override"><code>>>> record = FrozenJSON({"name": "waldo", "age": 32, "occupation": "lost"})
>>> record.occupation
'lost'
</code></pre>
<p>However, the <code>_repr_html_</code> doesn't get displayed in an IPython environment (I've tried VS Code and JupyterLab).</p>
<p>Commenting out the <code>__getattr__</code> method causes the HTML representation to be displayed, so I'm fairly confident the issue is something to do with that.</p>
<p>(<code>_repr_html_</code> on other objects work fine in my environments (e.g. pandas DataFrames).)</p>
<p>The following doesn't help:</p>
<pre class="lang-py prettyprint-override"><code> def __getattr__(self, name):
if hasattr(self._data, name):
return getattr(self._data, name)
elif name == "_repr_html_":
return self._repr_html_
else:
return FrozenJSON.build(self._data[name])
</code></pre>
<p>I don't know enough about how VS Code / JupyterLab decides to call <code>_repr_html_</code> rather than <code>__repr__</code>, and how this <code>__getattr__</code> is breaking that.</p>
<p>Thanks in advance for any help!</p>
|
<python><visual-studio-code><ipython><getattr><repr>
|
2024-04-16 14:09:48
| 1
| 396
|
davipatti
|
78,335,141
| 4,847,250
|
is it possible to optimize a curve fitting where a variable depend on another?
|
<p>I'm wondering if it is possible to do a curve fit of a dual exponential where the parameter <code>b</code> is strictly greater than the parameter <code>d</code>.
I don't understand how I can add such a constraint.</p>
<p>Here's a minimal example</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
x = np.linspace(0,4,50) # Example data
def func(x, a, b, c, d):
return a * np.exp(b * x) + c * np.exp(d * x)
y = func(x, 2.5, 1.3, 0.5, 0.5) # Example exponential data
# Here you give the initial parameters for a,b,c which Python then iterates over
# to find the best fit
popt, pcov = curve_fit(func,x,y,p0=(1.0,1.0,1.0,1.0))
</code></pre>
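<p>One idea I've seen for this kind of constraint (a sketch, not the only option) is to reparameterize the model so the constraint holds by construction: fit <code>delta = b - d</code> with a <code>delta &gt;= 0</code> bound, so <code>b = d + delta</code> can never drop below <code>d</code>:</p>

```python
import numpy as np
from scipy.optimize import curve_fit

def func_reparam(x, a, delta, c, d):
    # delta >= 0 (enforced via bounds) guarantees b = d + delta >= d
    b = d + delta
    return a * np.exp(b * x) + c * np.exp(d * x)

x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(1.3 * x) + 0.5 * np.exp(0.5 * x)  # same example data

popt, pcov = curve_fit(
    func_reparam, x, y,
    p0=(1.0, 0.5, 1.0, 0.5),
    bounds=([-np.inf, 0.0, -np.inf, -np.inf], [np.inf] * 4),
)
a, delta, c, d = popt
b = d + delta
print(f"a={a:.3f} b={b:.3f} c={c:.3f} d={d:.3f}")
```

<p>If the inequality must be strict, the lower bound on <code>delta</code> could be a small positive epsilon instead of zero.</p>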
|
<python><optimization><scipy><curve-fitting>
|
2024-04-16 14:03:37
| 1
| 5,207
|
ymmx
|
78,335,103
| 3,623,537
|
overload typing for variable amount of arguments (`args` or `kwargs`)
|
<p>An example is below; I need to make sure an IDE type checker or <code>reveal_type</code> would identify the types of <code>k</code>, <code>j</code> and <code>i</code> correctly.</p>
<p>Perhaps there is some way to tell the type checker that when <code>args</code> is an empty <code>tuple</code> and <code>kwargs</code> an empty <code>dict</code>, the return value is <code>tuple[int]</code>?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Union, overload
def test(*args: int, **kwargs: str) -> Union[int, str, tuple[int]]:
if args:
return 5
if kwargs:
return "5"
return (5,)
# now all are Union[int, str, tuple[int]]
k = test(1)
j = test(i="1")
i = test()
reveal_type(k) # should be int
reveal_type(j) # should be str
reveal_type(i) # should be tuple[int]
</code></pre>
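<p>I've experimented with this overload arrangement (a sketch; order matters, since the bare-call overload has to come first, and some type checkers will warn that these signatures overlap with incompatible return types):</p>

```python
from typing import Union, overload

@overload
def test() -> tuple[int]: ...
@overload
def test(*args: int) -> int: ...
@overload
def test(**kwargs: str) -> str: ...

# The implementation signature stays as before; the overloads above are
# only seen by the type checker.
def test(*args: int, **kwargs: str) -> Union[int, str, tuple[int]]:
    if args:
        return 5
    if kwargs:
        return "5"
    return (5,)

k = test(1)      # resolves to int
j = test(i="1")  # resolves to str
i = test()       # resolves to tuple[int]
```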
|
<python><python-typing>
|
2024-04-16 13:58:49
| 1
| 469
|
FamousSnake
|
78,335,078
| 10,595,871
|
NoSuchElementException in Selenium with text cell
|
<p>The code:</p>
<pre><code>driver = webdriver.Chrome()
driver.get("https://commercialisti.it/iscritti")
driver.implicitly_wait(10)
casella_testo = driver.find_element("id", "Cap")
</code></pre>
<p>The error:</p>
<pre><code>NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":"[id="Cap"]"}
(Session info: chrome=122.0.6261.131); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
</code></pre>
<p>The html:</p>
<pre><code><input class="form-control" id="Cap" name="Cap" type="text" value="">
</code></pre>
<p>I don't understand why it does not find "Cap".</p>
|
<python><selenium-webdriver>
|
2024-04-16 13:54:35
| 1
| 691
|
Federicofkt
|
78,334,902
| 118,549
|
Efficient way to lookup prices based on date ranges
|
<p>I'm playing around with importing my energy usage data into Python, and I'm trying to figure out a good way to do price lookups.</p>
<p>The unit energy price changes a few times a year, so I have ranges of dates and the unit price, for example:</p>
<pre class="lang-none prettyprint-override"><code>start_date end_date unit_cost
---------- -------- ---------
2023-01-01 2023-03-31 0.2384
2023-04-01 2023-09-30 0.2761
2023-10-01 2024-01-31 0.2566
</code></pre>
<p>It should be safe to assume that the date ranges are contiguous, so potentially this could be simplified to just the start date for each unit price.</p>
<p>I then retrieve a list of daily unit consumption values and want to multiply that by the appropriate unit cost for the date.</p>
<p>There's clearly a number of dumb, brute-force ways to do this; iterating through a data structure and checking whether the current date falls between the start and end dates... but I'm thinking there must be a clever, Pythonic way to do this. I'm interested in efficiency in the sense of short, clear code rather than raw performance, but a fast algorithm could also be relevant!</p>
<p>Is there a data structure and/or module that applies to this use case?</p>
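<p>For scale, here is the short sketch I have so far: keep a sorted list of <code>(start_date, unit_cost)</code> pairs (relying on the contiguous-ranges assumption) and look up with <code>bisect</code>; the consumption numbers are made up:</p>

```python
from bisect import bisect_right
from datetime import date

# Tariffs sorted by start date; the end dates are implied by contiguity.
tariffs = [
    (date(2023, 1, 1), 0.2384),
    (date(2023, 4, 1), 0.2761),
    (date(2023, 10, 1), 0.2566),
]
starts = [start for start, _ in tariffs]

def unit_cost(d: date) -> float:
    """Return the unit cost whose range contains d (last start <= d)."""
    i = bisect_right(starts, d) - 1
    if i < 0:
        raise ValueError(f"no tariff covers {d}")
    return tariffs[i][1]

# Made-up daily consumption in kWh, priced per the tariff in force that day.
daily_usage = {date(2023, 3, 15): 12.4, date(2023, 5, 2): 9.1}
costs = {d: kwh * unit_cost(d) for d, kwh in daily_usage.items()}
```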
|
<python><data-structures><lookup>
|
2024-04-16 13:29:55
| 3
| 752
|
Gordon Mckeown
|
78,334,676
| 7,695,845
|
Monte Carlo simulation of PI with numba is the slowest for the lowest number of points?
|
<p>As part of a homework exercise, I am implementing a Monte-Carlo simulation of pi in Python. I am using Numba to accelerate and parallelize the computation. From a previous test I performed, I found that parallelized <code>numba</code> runs faster than a non parallelized <code>numba</code> version and from a pure <code>numpy</code> version, so this is what I chose to go with. The question asked to measure the performance of the algorithm for <code>n = 10**k</code> for <code>k = [1, 2, 3, 4, 5, 6, 7]</code>, and when I implemented this I got a very strange result: The first run with the lowest value of <code>n = 10</code> runs the <strong>slowest</strong> of them all. The run for <code>n = 10</code> is consistently 3 times slower than <code>n = 10,000,000</code>:</p>
<p><a href="https://i.sstatic.net/NUxI3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NUxI3.png" alt="" /></a></p>
<p>At first, I thought the first run was slow because <code>numba</code> needs to compile my function, but I am using cache so this excuse can only work for the first time I run the script. You can see in the above picture that the first time I ran the script, the run for <code>n = 10</code> was much slower than the other times because it needed to compile the function. However, the next times use the cached version and are still significantly slower than runs with bigger <code>n</code>.</p>
<p>Another weird result is that sometimes, the run of <code>n = 1,000</code> is faster than <code>n = 100</code>. I thought this algorithm was supposed to be linear in time. I don't understand why the first run with the lowest value of <code>n</code> turns out to be the slowest and why some runs are often faster than previous runs with a smaller value of <code>n</code>. Can somebody explain the results I am getting?</p>
<p>Here's the code I used:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import timedelta
from time import perf_counter
from numba import jit, prange
import numpy as np
import numpy.typing as npt
jit_opts = dict(
nopython=True, nogil=True, cache=True, error_model="numpy", fastmath=True
)
rng = np.random.default_rng()
@jit(**jit_opts, parallel=True)
def count_points_in_circle(points: npt.NDArray[float]) -> tuple[npt.NDArray[bool], int]:
in_circle = np.empty(points.shape[0], dtype=np.bool_)
in_circle_count = 0
for i in prange(points.shape[0]):
in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 < 1
in_circle_count += in_
return in_circle, in_circle_count
def monte_carlo_pi(n: int) -> tuple[npt.NDArray[float], npt.NDArray[bool], float]:
points = rng.random((n, 2))
in_circle, count = count_points_in_circle(points)
return points, in_circle, 4 * count / n
def main() -> None:
n_values = 10 ** np.arange(1, 8)
for n in n_values:
start = perf_counter()
points, in_circle, pi_approx = monte_carlo_pi(n)
end = perf_counter()
duration = end - start
delta = timedelta(seconds=duration)
elapsed_msg = (
f"[{delta} (Raw time: {duration} s)]"
if delta
else f"[Raw time: {duration} s]"
)
print(
f"n = {n:,}:".ljust(20),
f"\N{GREEK SMALL LETTER PI} \N{ALMOST EQUAL TO} {pi_approx}".ljust(20),
elapsed_msg,
)
if __name__ == "__main__":
main()
</code></pre>
<h1>Edit:</h1>
<p>Following @Jérôme Richard's answer, I added explicit signatures to avoid lazy compilations and tested the execution time both with parallelization and without:</p>
<p><a href="https://i.sstatic.net/VGyOx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VGyOx.png" alt="" /></a></p>
<p>The first 3 runs are with <code>parallel=True</code> and deleting the cache before the first run. The last 3 runs used <code>parallel=False</code> and <code>range</code> instead of <code>prange</code> (I believe setting <code>parallel=False</code> would have been enough but I wanted to make sure). Once again, I deleted the cache before running these 3 tests. The modification to the code:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
@nb.jit(
[
nb.types.Tuple((nb.bool_[:], nb.int64))(nb.float64[:, :]),
nb.types.Tuple((nb.bool_[:], nb.int32))(nb.float32[:, :]),
],
**jit_opts,
parallel=False, # or parallel=True
)
def count_points_in_circle(points: npt.NDArray[float]) -> tuple[npt.NDArray[bool], int]:
in_circle = np.empty(points.shape[0], dtype=np.bool_)
in_circle_count = 0
for i in range(points.shape[0]): # or nb.prange
in_ = in_circle[i] = points[i, 0] ** 2 + points[i, 1] ** 2 < 1
in_circle_count += in_
return in_circle, in_circle_count
</code></pre>
<p>Indeed, using parallelization for <code>n <= 10,000</code> turns out to be slower on my machine. However, the first run with <code>n = 10</code> is still consistently slower than every run with <code>n <= 10,000</code> which is weird. What other "startup overhead" there is here and how can it be eliminated?</p>
<p>Another follow-up question I have is how can I enable parallelization dynamically based on the input. From these results, it looks like the best approach is to use parallelization only for <code>n > 10,000</code> (or some other user-specified threshold). I don't want to write two versions of the scheme, one with <code>parallel=True</code> and <code>prange</code> while the second with <code>parallel=False</code> and <code>range</code>. How can I enable parallelization dynamically without duplicating the function?</p>
|
<python><numba><montecarlo>
|
2024-04-16 12:52:01
| 1
| 1,420
|
Shai Avr
|
78,334,661
| 2,768,539
|
Get XGBoost prediction based on individual trees
|
<p>This might be a duplicate of <a href="https://stackoverflow.com/questions/43702514/how-to-get-each-individual-trees-prediction-in-xgboost">How to get each individual tree's prediction in xgboost?</a> but the solution no longer works (possibly due to changes in the XGBoost library). My idea is to dump the model in a raw format with <code>model.get_booster().get_dump()</code> and implement it on a different platform (prediction only). However, I'm first trying to reproduce it in Python. Running the code below, which makes predictions with each individual booster and combines them, does not return the same result as the <code>model.predict()</code> function.
Is there any way I can match <code>model.predict()</code> with the combination of the boosters? What am I missing?</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import xgboost as xgb
from sklearn import datasets
from scipy.special import expit as sigmoid, logit as inverse_sigmoid
# Load data
iris = datasets.load_iris()
X, y = iris.data, (iris.target == 1).astype(int)
# Fit a model
model = xgb.XGBClassifier(
n_estimators=10,
max_depth=10,
use_label_encoder=False,
objective='binary:logistic'
)
model.fit(X, y)
booster_ = model.get_booster()
# Extract indivudual predictions
individual_preds = []
for tree_ in booster_:
individual_preds.append(
tree_.predict(xgb.DMatrix(X))
)
individual_preds = np.vstack(individual_preds)
# Aggregated individual predictions to final predictions
indivudual_logits = inverse_sigmoid(individual_preds)
final_logits = indivudual_logits.sum(axis=0)
final_preds = sigmoid(final_logits)
# Verify correctness
xgb_preds = booster_.predict(xgb.DMatrix(X))
np.testing.assert_almost_equal(final_preds, xgb_preds)
</code></pre>
<blockquote>
<p>AssertionError: Arrays are not almost equal to 7 decimals
Mismatched elements: 150 / 150 (100%)
Max absolute difference: 0.90511334
Max relative difference: 0.99744916
x: array([7.4847587e-05, 7.4847587e-05, 7.4847587e-05, 7.4847587e-05,
7.4847587e-05, 7.4847587e-05, 7.4847587e-05, 7.4847587e-05,
7.4847587e-05, 7.4847587e-05, 7.4847587e-05, 7.4847587e-05,...
y: array([0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127,
0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127,
0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127, 0.0293127,...</p>
</blockquote>
|
<python><machine-learning><xgboost>
|
2024-04-16 12:49:27
| 2
| 387
|
rriccilopes
|
78,334,486
| 13,394,817
|
How to add labels without markers and in place of them (left aligned merging label and marker) in legends
|
<p>I want to add some parameters with their values to a legend as marker-less entries, left-aligned so the label occupies the marker column (i.e. in the same column as the other markers). With a single-column legend this can be done by using them as the legend title. The problem arises when the legend has two or more columns: with the title method, the first item of the second column is not placed in the same row as the title, but in the same row as the first label of the first column. Is there any way to fix this automatically? How?</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
plt.plot([], [], linestyle='--', color='black', label='Dashed Line')
plt.plot([], [], linestyle=':', color='black', label='Dotted Line')
plt.plot([], [], linestyle='-', linewidth=2, color='black', label='Double Solid Line')
plt.plot([], [], linestyle='-.', color='black', label='Dash-Dot Line')
legend_ETP = Line2D([0], [0], marker='o', c='k', ls='None', markerfacecolor='k', markersize=8, label="Far")
legend_IOTP = Line2D([0], [0], marker='s', c='k', ls='None', markerfacecolor='k', markersize=8, label="Near")
legend_void = Line2D([0], [0], marker='', ls='None', label="")
ELS_handles, ELS_labels = plt.gca().get_legend_handles_labels()
Dw_leg = 'Exact = 0.5'
new_handles = [legend_ETP] + [legend_IOTP] + ELS_handles
new_labels = ["Far"] + ["Near"] + ELS_labels
legend = plt.legend(handles=new_handles, labels=new_labels, fontsize=12, loc='lower left', ncol=2, title=Dw_leg)
legend._legend_box.align = "left"
# new_handles = [legend_void] + [legend_ETP] + [legend_IOTP] + ELS_handles
# new_labels = [Dw_leg] + ["Far"] + ["Near"] + ELS_labels
# legend = plt.legend(handles=new_handles, labels=new_labels, fontsize=12, loc='lower left', ncol=2)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/vXVFZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vXVFZ.png" alt="enter image description here" /></a></p>
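A possible workaround (my own sketch, not from the question) relies on multi-column legends being filled column by column: pad the handle list with invisible entries so the parameter text lands in the top-left cell and a blank filler tops the second column, keeping the rows aligned:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

fig, ax = plt.subplots()
for ls, lbl in [('--', 'Dashed Line'), (':', 'Dotted Line'),
                ('-', 'Double Solid Line'), ('-.', 'Dash-Dot Line')]:
    ax.plot([], [], linestyle=ls, color='black', label=lbl)

far = Line2D([], [], marker='o', color='k', ls='None', markersize=8)
near = Line2D([], [], marker='s', color='k', ls='None', markersize=8)
void1 = Line2D([], [], ls='None')  # invisible fillers
void2 = Line2D([], [], ls='None')

handles, labels = ax.get_legend_handles_labels()
# With ncol=2 and 8 entries, the first 4 fill column 1 top-to-bottom.
# Row 1 therefore becomes: parameter text | blank filler.
new_handles = [void1, far, near, handles[0], void2] + handles[1:]
new_labels = ['Exact = 0.5', 'Far', 'Near', labels[0], ''] + labels[1:]
legend = ax.legend(new_handles, new_labels, ncol=2, loc='lower left')
```

The parameter entry still reserves the (empty) handle width, so its text sits flush with the other labels rather than with the markers themselves; shrinking that gap further would need per-handle handle-length tweaks.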
|
<python><matplotlib>
|
2024-04-16 12:19:32
| 0
| 2,836
|
Ali_Sh
|
78,334,416
| 14,358,734
|
Trying to switch between frames in tkinter
|
<p>I'm trying to design a (semi) text-based game in tkinter. I'd like to be able to switch between frames, with each frame acting as a menu or map. However, each button in each frame is present from the start, and transitioning from e.g. <code>Opening</code> to <code>NewGame</code> doesn't make the buttons in <code>Opening</code> go away.</p>
<pre><code>import tkinter as tk
from tkinter import *
from tkinter import ttk, simpledialog
LARGE_FONT = "Verdana"
root = tk.Tk()
root.geometry("640x480")
class intothesea(tk.Tk):
def __init__(self, *args, **kwargs):
tk.Tk.__init__(self, *args, **kwargs)
container = tk.Frame(root, height="640", width="480")
container.pack(side="top", fill="both", expand=True)
container.grid_rowconfigure(0, weight=1)
container.grid_columnconfigure(0, weight=1)
self.frames = {}
for F in (Opening, NewGame, LithHarbor):
frame = F(container, self)
self.frames[F] = frame
frame.grid(row=0, column=0, sticky="nsew")
frame.pack()
self.show_frame(Opening)
def show_frame(self, cont):
tk.Frame.lower(self)
frame = self.frames[cont]
frame.tkraise()
class Opening(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label = tk.Label(self, text="In. To. The. Sea.", font=LARGE_FONT)
label.pack(pady=10,padx=10)
button = tk.Button(self, text="NEW GAME",
command=lambda: controller.show_frame(NewGame))
button.pack()
button2 = tk.Button(self, text="CONTINUE",
command=lambda: controller.show_frame(LithHarbor))
button2.pack()
class NewGame(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label = tk.Label(root, text="In. To. The. Sea.", font=LARGE_FONT)
label.pack(pady=10,padx=10)
new_character = []
def accept_string():
# Prompt the user to input a string
user_input = simpledialog.askstring("Input", "Enter a string:")
# Print the string to the console
if user_input:
new_character.extend(user_input)
print(user_input)
name.pack_forget()
name = tk.Button(self, text="WHAT IS YOUR NAME?", command=accept_string)
name.pack()
class LithHarbor(tk.Frame):
def __init__(self, parent, controller):
tk.Frame.__init__(self,parent)
label = tk.Label(self, text="In. To. The. Sea.", font=LARGE_FONT)
label.pack(pady=10,padx=10)
app = intothesea()
app.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-04-16 12:05:07
| 2
| 781
|
m. lekk
|
78,334,332
| 16,389,095
|
SQL Queries: the most common value in a column grouped by another column
|
<p>Let's say I have a database employee.db, created as follows:</p>
<pre><code>import pandas as pd
from sqlalchemy import create_engine

# %% CREATE DATAFRAME
data = {'Name': ['John', 'Jane', 'Adam', 'Jane', 'Frank', 'Mary'],
'Age': [35, 28, 42, 32, 35, 39],
'Department': ['HR', 'IT', 'Finance', 'IT', 'Sales', 'IT']}
df = pd.DataFrame(data)
# %% CREATE A SQL DATABASE ENGINE
engine = create_engine('sqlite:///employee.db', echo=True)
# %% CONVERT THE DATA FRAME TO SQL
df.to_sql('employee', con=engine, if_exists='replace', index=False)
# %% CLOSE THE CONNECTION
engine.dispose()
</code></pre>
<p>I'm trying to read the database, with</p>
<pre><code>from sqlalchemy import create_engine, text
import pandas as pd
# %% CREATE A SQL DATABASE ENGINE
engine = create_engine('sqlite:///employee.db', echo=True)
# %% QUERY THE SQL TABLE
with engine.connect() as conn:
# INSERT SQL QUERIES HERE
# %% CLOSE THE CONNECTION
engine.dispose()
</code></pre>
<p>I would like to calculate the most common name per department. For the 'IT' department, the query should return 'Jane'.</p>
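One way to express this (a sketch using the stdlib `sqlite3` module and an in-memory copy of the data, so it is self-contained; against the real file you would run the same query through the SQLAlchemy connection) is to count names per department and keep the top-ranked name in each partition with a window function:

```python
import sqlite3

# Build the example table in memory (mirrors the employee.db contents above)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (Name TEXT, Age INTEGER, Department TEXT)")
rows = [('John', 35, 'HR'), ('Jane', 28, 'IT'), ('Adam', 42, 'Finance'),
        ('Jane', 32, 'IT'), ('Frank', 35, 'Sales'), ('Mary', 39, 'IT')]
conn.executemany("INSERT INTO employee VALUES (?, ?, ?)", rows)

# Count names per department, then keep the most frequent name per department
query = """
WITH counts AS (
    SELECT Department, Name, COUNT(*) AS cnt
    FROM employee
    GROUP BY Department, Name
)
SELECT Department, Name
FROM (
    SELECT Department, Name,
           ROW_NUMBER() OVER (PARTITION BY Department ORDER BY cnt DESC) AS rn
    FROM counts
)
WHERE rn = 1
ORDER BY Department
"""
result = conn.execute(query).fetchall()
```

Window functions require SQLite 3.25+ (bundled with any recent Python). Ties within a department are broken arbitrarily here; add a secondary `ORDER BY Name` inside the `OVER` clause for deterministic tie-breaking.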
|
<python><sql><pandas><sqlite>
|
2024-04-16 11:48:39
| 1
| 421
|
eljamba
|
78,334,276
| 4,999,991
|
How to Set gfortran Compiler Flags in a Complex Build System Involving Makefile.am, configure.ac, and setup.py?
|
<p>Following <a href="https://stackoverflow.com/q/78311453/4999991">this question</a>, I am working on building <a href="https://github.com/JModelica/JModelica" rel="nofollow noreferrer">a project</a> that involves compiling <a href="https://github.com/JModelica/JModelica/tree/master/external/Assimulo/thirdparty/hairer" rel="nofollow noreferrer">FORTRAN 77</a> code using gfortran, but I am encountering a compilation error due to missing compiler flags. The specific error message is:</p>
<pre class="lang-none prettyprint-override"><code>error: Command "/usr/bin/gfortran -Wall -g -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops -I/mnt/c/dev/JModelica/jmodelica_env/lib/python2.7/site-packages/numpy/core/include -Ibuild/src.linux-x86_64-2.7/build/src.linux-x86_64-2.7/assimulo/thirdparty/hairer -I/mnt/c/dev/JModelica/jmodelica_env/lib/python2.7/site-packages/numpy/core/include -I/usr/include/python2.7 -c -c assimulo/thirdparty/hairer/radau_decsol.f -o build/temp.linux-x86_64-2.7/assimulo/thirdparty/hairer/radau_decsol.o" failed with exit status 1
make[2]: *** [Makefile:1088: build-python-packages] Error 1
make[2]: Leaving directory '/mnt/c/dev/JModelica/build'
make[1]: *** [Makefile:514: all-recursive] Error 1
make[1]: Leaving directory '/mnt/c/dev/JModelica/build'
make: *** [Makefile:451: all] Error 2
</code></pre>
<p>I want to include the <code>-ffixed-line-length-120</code> flag in the gfortran command to resolve this issue, but I am unsure where the gfortran flags are configured in the build system. The build system involves multiple configuration files such as <a href="https://raw.githubusercontent.com/JModelica/JModelica/master/Makefile.am" rel="nofollow noreferrer"><code>Makefile.am</code></a>, <a href="https://raw.githubusercontent.com/JModelica/JModelica/master/configure.ac" rel="nofollow noreferrer"><code>configure.ac</code></a>, and <a href="https://raw.githubusercontent.com/JModelica/JModelica/master/external/Assimulo/setup.py" rel="nofollow noreferrer"><code>setup.py</code></a>.</p>
<p>Here's a snippet from the generated <code>build/Makefile</code> related to building Python packages:</p>
<pre><code>build-python-packages:
mkdir -p $(assimulo_build_dir); \
cd $(abs_top_srcdir)/external; \
find Assimulo -type f |grep -v /.svn | grep -v .pyc | grep -v ~ |tar c -T - -f - | tar x -C $(assimulo_build_dir); \
cd $(assimulo_build_dir)/Assimulo; \
case $(build) in \
*-cygwin*) \
python setup.py install --with_openmp=True --superlu-home=$(abs_builddir)/superlu_build/ --sundials-home=$(SUNDIALS_HOME) --sundials-with-superlu=True --blas-home=$(abs_builddir)/blas_install/ --lapack-home=$(abs_builddir)/lapack_install/ --force-32bit="true" --extra-c-flags="-mincoming-stack-boundary=2" --prefix=$(assimulo_install_dir) ;; \
*-mingw*) \
python setup.py install --with_openmp=True --superlu-home=$(abs_builddir)/superlu_build/ --sundials-home=$(SUNDIALS_HOME) --sundials-with-superlu=True --blas-home=$(abs_builddir)/blas_install/ --lapack-home=$(abs_builddir)/lapack_install/ --force-32bit="true" $(NUMPY_NO_MSVCR_ARG) --extra-c-flags="-mincoming-stack-boundary=2" --prefix=$(assimulo_install_dir) ;; \
*) \
python setup.py install --with_openmp=True --superlu-home=$(abs_builddir)/superlu_build/ --sundials-home=$(SUNDIALS_HOME) --sundials-with-superlu=True --blas-home=$(abs_builddir)/blas_install/ --lapack-home=$(abs_builddir)/lapack_install/ --prefix=$(assimulo_install_dir) ;; \
esac
cd $(abs_top_srcdir)/Python/src; \
python setup_pymodelica.py install --prefix=$(pymodelica_install_dir); \
rm -rf build
mkdir -p $(pyfmi_build_dir); \
cd $(abs_top_srcdir)/external; \
find PyFMI -type f |grep -v /.svn | grep -v .pyc | grep -v ~ |tar c -T - -f - | tar x -C $(pyfmi_build_dir); \
cd $(pyfmi_build_dir)/PyFMI; \
case $(build) in \
*-cygwin*) \
python setup.py install --fmil-home=$(abs_builddir)/FMIL_install/ --force-32bit="true" --extra-c-flags="-mincoming-stack-boundary=2" --prefix=$(pyfmi_install_dir) ;; \
*-mingw*) \
python setup.py install --fmil-home=$(abs_builddir)/FMIL_install/ --force-32bit="true" $(NUMPY_NO_MSVCR_ARG) --extra-c-flags="-mincoming-stack-boundary=2" --prefix=$(pyfmi_install_dir) ;; \
*) \
python setup.py install --fmil-home=$(abs_builddir)/FMIL_install/ --prefix=$(pyfmi_install_dir) ;; \
esac
rm -rf build
cd $(abs_top_srcdir)/Python/src; \
python setup_pyjmi.py install --prefix=$(pyjmi_install_dir); \
rm -rf build
</code></pre>
<p>I suspect the Fortran compiler settings might be specified in one of these files, but I'm unsure how to trace or modify them appropriately.</p>
<p>Could anyone guide me on where to look or how to modify the build configuration to include additional gfortran flags? I am not sure where to look or what to look for.</p>
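As a starting point (an assumption on my part — whether JModelica's `configure.ac` forwards these variables must be verified against the actual build files), autoconf-generated configure scripts conventionally accept Fortran flags through the standard `FFLAGS`/`FCFLAGS` variables, and numpy.distutils-based `setup.py` builds read Fortran flags from the environment:

```shell
# Sketch, not verified against this build system.

# 1) Autoconf route: pass the flag when configuring the top-level build
#    (FFLAGS is for F77, FCFLAGS for modern Fortran; passing both is safe)
../configure FFLAGS="-ffixed-line-length-120" FCFLAGS="-ffixed-line-length-120"

# 2) numpy.distutils route: Assimulo's setup.py compiles the .f files via
#    numpy.distutils, which picks up Fortran flags from the environment
FFLAGS="-ffixed-line-length-120" make
```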
|
<python><makefile><gfortran><configure><autotools>
|
2024-04-16 11:37:43
| 0
| 14,347
|
Foad S. Farimani
|
78,334,193
| 940,490
|
`pandas` rolling sum with a maximum number of valid observations in a window
|
<p>I am looking for help to speed up a rolling calculation in <code>pandas</code> which would compute a rolling average with a predefined maximum number of most recent observations. Here is code to generate an example frame and the frame itself:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
tmp = pd.DataFrame(
[
[11.1]*3 + [12.1]*3 + [13.1]*3 + [14.1]*3 + [15.1]*3 + [16.1]*3 + [17.1]*3 + [18.1]*3,
['A', 'B', 'C']*8,
[np.nan]*6 + [1, 1, 1] + [2, 2, 2] + [3, 3, 3] + [np.nan]*9
],
index=['Date', 'Name', 'Val']
)
tmp = tmp.T.pivot(index='Date', columns='Name', values='Val')
Name A B C
Date
11.1 NaN NaN NaN
12.1 NaN NaN NaN
13.1 1 1 1
14.1 2 2 2
15.1 3 3 3
16.1 NaN NaN NaN
17.1 NaN NaN NaN
18.1 NaN NaN NaN
</code></pre>
<p>I would like to obtain this result:</p>
<pre><code>Name A B C
Date
11.1 NaN NaN NaN
12.1 NaN NaN NaN
13.1 1.0 1.0 1.0
14.1 1.5 1.5 1.5
15.1 2.5 2.5 2.5
16.1 2.5 2.5 2.5
17.1 3.0 3.0 3.0
18.1 NaN NaN NaN
</code></pre>
<h1>Attempted Solution</h1>
<p>I tried the following code and it works, but its performance is very bad for data sets that I am stuck with in practice.</p>
<pre class="lang-py prettyprint-override"><code>tmp.rolling(window=3, min_periods=1).apply(lambda x: x[~np.isnan(x)][-2:].mean(), raw=True)
</code></pre>
<p>Calculation above applied to a 3k x 50k frame takes about 20 minutes... Maybe there is a more elegant and faster way to obtain the same result? Maybe using a combination of multiple rolling computation results or something with <code>groupby</code>?</p>
<h1>Versions</h1>
<p>Python - 3.9.13, <code>pandas</code> - 2.0.3 and <code>numpy</code> - 1.25.2</p>
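One way to avoid the per-window Python callback (my own helper, not from the question) is to materialize the trailing windows with `numpy.lib.stride_tricks.sliding_window_view` and mask everything except the last `k` valid observations in each window:

```python
import numpy as np
import pandas as pd
from numpy.lib.stride_tricks import sliding_window_view

def rolling_mean_last_k(df, window=3, k=2):
    """Mean of the (at most) k most recent non-NaN values in each trailing window."""
    a = df.to_numpy(dtype=float)
    pad = np.full((window - 1, a.shape[1]), np.nan)          # so early rows get a full window
    win = sliding_window_view(np.vstack([pad, a]), window, axis=0)  # (n, cols, window)
    valid = ~np.isnan(win)
    # number of valid entries at or after each in-window position
    from_right = np.cumsum(valid[..., ::-1], axis=-1)[..., ::-1]
    keep = valid & (from_right <= k)                          # only the last k valid values
    sums = np.where(keep, win, 0.0).sum(axis=-1)
    cnts = keep.sum(axis=-1)
    out = np.divide(sums, cnts, out=np.full(sums.shape, np.nan), where=cnts > 0)
    return pd.DataFrame(out, index=df.index, columns=df.columns)
```

On the example frame this reproduces the expected output. The masked intermediate arrays are `window` times the frame size, so for a 3k x 50k frame it may be worth processing column blocks in chunks.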
|
<python><pandas><dataframe><pandas-rolling>
|
2024-04-16 11:25:46
| 1
| 1,615
|
J.K.
|
78,334,117
| 6,197,439
|
pyqt5 layout.replaceWidget kills setTextInteractionFlags of unrelated QLabel?
|
<p>Consider the following example, based on the example from <a href="https://www.pythonguis.com/tutorials/pyqt-dialogs/" rel="nofollow noreferrer">https://www.pythonguis.com/tutorials/pyqt-dialogs/</a> , where I want to have a custom QDialog class, where text in the label is selectable, and where I can control the buttons at instantiation:</p>
<pre class="lang-python prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication, QDialog, QMainWindow, QPushButton, QDialogButtonBox, QVBoxLayout, QLabel
from PyQt5.QtCore import Qt
class CustomDialog(QDialog):
def __init__(self, parent=None):
super().__init__(parent)
self.setWindowTitle("HELLO!")
self.QBtn = QDialogButtonBox.Ok | QDialogButtonBox.Cancel
self.buttonBox = QDialogButtonBox(self.QBtn)
self.buttonBox.accepted.connect(self.accept)
self.buttonBox.rejected.connect(self.reject)
self.layout = QVBoxLayout()
self.message = QLabel("Something happened, is that OK?")
self.message.setTextInteractionFlags(Qt.LinksAccessibleByMouse | Qt.TextSelectableByMouse)
self.layout.addWidget(self.message)
self.layout.addWidget(self.buttonBox)
self.setLayout(self.layout)
@classmethod # SO:141545; call w. QtawAnimIconRszMsgBoxDialog.fromqmb(...)
def fromstr(cls, msg=None, buttons=None, parent=None):
instance = cls(parent)
if buttons:
old_button_box = instance.buttonBox
instance.QBtn = buttons
instance.buttonBox = QDialogButtonBox(instance.QBtn)
instance.layout.replaceWidget(old_button_box, instance.buttonBox) # MUST have this; but this also kills setTextInteractionFlags!
old_button_box.clear() # "Clears the button box, deleting all buttons within it"
if msg:
instance.message.setText(msg)
return instance
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("My App")
button = QPushButton("Press me for a dialog!")
button.clicked.connect(self.button_clicked)
self.setCentralWidget(button)
def button_clicked(self, s):
print("click", s)
#dlg = CustomDialog(self)
#dlg = CustomDialog.fromstr("Hello there!", parent=self) # setTextInteractionFlags OK
dlg = CustomDialog.fromstr("Hello there!", QDialogButtonBox.Cancel, self) # kills setTextInteractionFlags
if dlg.exec():
print("Success!")
else:
print("Cancel!")
app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
</code></pre>
<p>If you run the code as is, click on "Press me ...":</p>
<p><a href="https://i.sstatic.net/yMjFV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yMjFV.png" alt="the example GUI" /></a></p>
<p>... and then try to select the "Hello there" from the dialog with the mouse, you will <strong>not be able to</strong>, in spite of <code>self.message.setTextInteractionFlags(Qt.LinksAccessibleByMouse | Qt.TextSelectableByMouse)</code>.</p>
<p>But, if you avoid the re-instantiation of buttons by toggling the commenting of these lines:</p>
<pre class="lang-python prettyprint-override"><code># ...
dlg = CustomDialog.fromstr("Hello there!", parent=self) # setTextInteractionFlags OK
#dlg = CustomDialog.fromstr("Hello there!", QDialogButtonBox.Cancel, self) # kills setTextInteractionFlags
# ...
</code></pre>
<p>... then the "Hello there!" text WILL be selectable with the mouse.</p>
<p>I've (hopefully correctly) reduced this problem to the line <code>instance.layout.replaceWidget(old_button_box, instance.buttonBox)</code>. So, why does <code>replaceWidget</code> of an unrelated widget in the layout kill the textInteractionFlags on the QLabel?</p>
<p>And is there a way to allow for button replacement of the QDialog correctly, either by continuing to use <code>.replaceWidget</code> but not kill the textInteractionFlags of the QLabel - or by using some other approach?</p>
<hr />
<p>EDIT: found a workaround - considering that I want to change the buttonBox, which is "last" in the layout, then instead of <code>replaceWidget</code>, I can just use <code>removeWidget</code> and <code>addWidget</code>:</p>
<pre class="lang-python prettyprint-override"><code># ...
# retitem = instance.layout.replaceWidget(old_button_box, instance.buttonBox) # MUST have this; but this also kills setTextInteractionFlags!
instance.layout.removeWidget(old_button_box)
instance.layout.addWidget(instance.buttonBox)
old_button_box.clear() # "Clears the button box, deleting all buttons within it"
del old_button_box
if msg:
# ...
</code></pre>
<p>... and then <code>CustomDialog.fromstr</code> works fine with a button specification (in that the label text remains selectable, as it was supposed to).</p>
<p>But still, I'd like to know - why is <code>.replaceWidget</code> not working in this case, and what could one do to use that instead (in case one needs to do a similar replacement, but the widget is not the "last" in layout, so remove/add is not applicable)?</p>
|
<python><pyqt5>
|
2024-04-16 11:13:34
| 1
| 5,938
|
sdbbs
|
78,334,080
| 17,530,552
|
How can I speed up the processing of my nested for loops for a giant 3D numpy array?
|
<p>I created a very large 3D numpy array called <code>tr_mat</code>. The shape of <code>tr_mat</code> is:</p>
<pre><code>tr_mat.shape
(1024, 536, 21073)
</code></pre>
<p><strong>Info on the 3D numpy array:</strong> First and before going into the actual code, I would like to clarify what I am attempting to do. As can be seen from <code>tr_mat.shape</code>, the 3D numpy array contains numeric values in <code>1024 rows</code> and <code>536 columns</code>. That is, we have 536 * 1024 = <code>548864</code> values in each of the <code>21073</code> matrices.</p>
<p><strong>Conceptual background about my task:</strong>
Each of the <code>21073</code> 2D numpy arrays within the 3D numpy array contains grayscaled pixel values from an image. The 3D numpy array <code>tr_mat</code> is already <em>transposed</em>, because I would like to construct a time-series based on identical pixel positions across all <code>21073</code> matrices. Finally, I would like to individually save each of the resulting <code>548864</code> time-series in a <code>.1D</code> textfile. (Hence, I would end up with saving <code>548864</code> <code>.1D</code> textfiles.)</p>
<p><strong>The relevant part of the code:</strong></p>
<pre><code>tr_mat = frame_mat.transpose() # the tranposed 3D numpy array
# Save
rangeh = range(0, 1024)
for row, row_n_l in zip(tr_mat, rangeh): # row = pixel row of the 2D image
for ts_pixel, row_n in zip(row, rangeh): # ts_pixel = the pixel time-series across the 3D array (across the single 2D arrays)
# Save
with open(f"/volumes/.../TS_Row{row_n_l}_Pixel{row_n}.1D", "w") as file:
for i in ts_pixel: file.write(f"{i}\n") # Save each time-series value per row
</code></pre>
<p><strong>Question:</strong> Could you provide me some tips how to modify my code in order to speed it up? I wrapped <code>tqdm</code> around the first for loop to check how fast the nested loop is processed, and it took around 20 minutes to reach ~120 of 536 rows. Also, it seems to me that the loop gets slower and slower as the iterations go up.</p>
<p><strong>A reproducible example with randomly generated values follows:</strong> please only change the output directory.</p>
<pre><code>import numpy as np
tr_mat = np.random.random((1024, 536, 21073))
rangeh = range(0, 1024)
for row, row_n_l in zip(tr_mat, rangeh):
for ts_pixel, row_n in zip(row, rangeh):
# Save
with open(f"/volumes/../TS_Row{row_n_l}_Pixel{row_n}.1D", "w") as file: # Please adjust the output directory
for i in ts_pixel: file.write(f"{i}\n")
</code></pre>
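Most of the time here goes into hundreds of thousands of tiny per-value `file.write` calls; joining each series into one string and writing it with a single buffered call is much faster. A sketch (the helper name is mine; a small demo array and a temporary directory stand in for the real data):

```python
import os
import tempfile
import numpy as np

def save_pixel_series(tr_mat, out_dir):
    """Write one .1D text file per (row, pixel) time-series, one write per file."""
    os.makedirs(out_dir, exist_ok=True)
    n_rows, n_pixels, _ = tr_mat.shape
    for r in range(n_rows):
        for p in range(n_pixels):
            path = os.path.join(out_dir, f"TS_Row{r}_Pixel{p}.1D")
            # join the whole series in memory, then one buffered write
            with open(path, "w") as fh:
                fh.write("\n".join(map(str, tr_mat[r, p])) + "\n")

# small demo in a temporary directory
demo = np.arange(24, dtype=float).reshape(2, 3, 4)
out = tempfile.mkdtemp()
save_pixel_series(demo, out)
```

`np.savetxt(path, tr_mat[r, p])` is an equivalent one-liner per file; either way, the dominant cost becomes filesystem metadata for the ~549k files rather than Python-level write calls.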
|
<python><numpy><performance><multidimensional-array>
|
2024-04-16 11:07:13
| 1
| 415
|
Philipp
|
78,334,011
| 3,127,059
|
OpenCV gstreamer backend use GPU for H264 encoding
|
<p>I have a gstreamer pipeline to encode frames in H264, it's working but I have a noticeable latency or delay of 2 seconds or more.</p>
<p>I've seen that my program is not using the GPU, so maybe this is the main cause of that delay.
I have read that I can use vaapih264enc, but it seems to be unavailable in my gstreamer installation.</p>
<pre><code>C:\gstreamer\1.0\msvc_x86_64\bin>gst-inspect-1.0.exe vaapi
No such element or plugin 'vaapi'
</code></pre>
<p>How can I accelerate that pipeline or enable Hardware acceleration to reduce delay? Maybe using another encoding?</p>
<p><strong>Source code example</strong></p>
<pre><code>import os
os.add_dll_directory("C:\\gstreamer\\1.0\\msvc_x86_64\\bin")
import cv2 as cv
import numpy as np
sourceUrl = "http://pendelcam.kip.uni-heidelberg.de/mjpg/video.mjpg"
command = "appsrc ! videoconvert ! x264enc key-int-max=30 speed-preset=veryfast tune=zerolatency bitrate=800 insert-vui=1 ! h264parse ! rtph264pay ! udpsink port=5004 host=127.0.0.1"
print(cv.getBuildInformation())
cap = cv.VideoCapture(sourceUrl)
if not cap.isOpened():
print("CAPTURE NOT OPENED")
fps = cap.get(cv.CAP_PROP_FPS)
w = int(cap.get(cv.CAP_PROP_FRAME_WIDTH))
h = int(cap.get(cv.CAP_PROP_FRAME_HEIGHT))
size = (w, h)
print(f"Size: {size}. FPS: {fps}")
writer = cv.VideoWriter(command, cv.CAP_GSTREAMER, 0, fps, size, True)
if not writer.isOpened():
print("WRITER NOT OPENED!!!!!")
if cap.isOpened() and writer.isOpened():
while cap.isOpened():
ret, frame = cap.read()
if not ret:
print("No frame received")
break
writer.write(frame)
cv.imshow("frame", frame)
if cv.waitKey(1) > 0:
break
cap.release()
writer.release()
</code></pre>
<p><strong>Edit</strong>
I have seen that the latency is caused by <code>VLC</code> because of <code>network-caching</code>. I have set it to 500 ms and it seems better.
Also, I have discovered <code>qsvh264enc</code> to use Intel UHD graphics for encoding.
What would be the best configuration for a real-time or live pipeline with as little latency as possible?</p>
|
<python><opencv><gpu><gstreamer><h.264>
|
2024-04-16 10:55:37
| 0
| 808
|
JuanDYB
|
78,334,004
| 4,999,991
|
How to Automatically Split Overlong Lines in FORTRAN 77 Files Using Python?
|
<p>Following <a href="https://stackoverflow.com/q/78311453/4999991">this question</a>, I'm working with <a href="https://github.com/JModelica/JModelica/tree/master/external/Assimulo/thirdparty/hairer" rel="nofollow noreferrer">legacy FORTRAN 77 files</a> and need to address an issue where some lines exceed the 72-character limit. I've already developed a Python script that identifies these lines, but I need to edit and split them based on their content automatically. Here's the part of the script that detects long lines:</p>
<pre class="lang-py prettyprint-override"><code>import sys
def show_long_lines(filename):
"""Display lines from the file longer than 72 characters."""
try:
with open(filename, "r") as file:
for line_number, line in enumerate(file, start=1):
if len(line) > 72 and line.lstrip()[0] not in "cC*!":
print(f"{line_number}: {line}")
except FileNotFoundError:
print("File not found. Please check the file path and try again.")
except Exception as e:
print(f"An error occurred: {e}")
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python script.py filename")
else:
filename = sys.argv[1]
show_long_lines(filename)
</code></pre>
<p>Here are examples of the lines detected by this script:</p>
<pre><code> WRITE(6,*)' COEFFICIENTS HAVE 20 DIGITS, UROUND=',WORK(1)
WRITE(6,*)' CURIOUS INPUT FOR IWORK(5,6,7)=',NIND1,NIND2,NIND3
WRITE (6,*) 'BANDWITH OF "MAS" NOT SMALLER THAN BANDWITH OF
WRITE(6,*)' HESSENBERG OPTION ONLY FOR EXPLICIT EQUATIONS WITH
& JAC,IJAC,MLJAC,MUJAC,JACLAG,MAS,MLMAS,MUMAS,SOLOUT,IOUT,IDID,
XLAG(1,IL)=ARGLAG(IL,X1,ZL,RPAR,IPAR,PHI,PAST,IPAST,NRDS,
XLAG(2,IL)=ARGLAG(IL,X2,ZL(N+1),RPAR,IPAR,PHI,PAST,IPAST,NRDS,
XLAG(3,IL)=ARGLAG(IL,X3,ZL(N2+1),RPAR,IPAR,PHI,PAST,IPAST,NRDS,
IF (ICOUN(1,IL)+ICOUN(2,IL)+ICOUN(3,IL).GE.1) CALJACL=.TRUE.
& ' WARNING!: ADVANCED ARGUMENTS ARE USED AT X= ',XACT
& (13.D0-7.D0*Sqrt(6.D0)+5.D0*(-2.D0+3.D0*Sqrt(6.D0))*S2)
FJACL(I1+N,J1+2*N)=FJACL(I1+N,J1+2*N)+AI23H*FMAS(I1,J1)
FJACL(I1+2*N,J1+N)=FJACL(I1+2*N,J1+N)+AI32H*FMAS(I1,J1)
FJACL(I1+2*N,J1+2*N)=FJACL(I1+2*N,J1+2*N)+AI33H*FMAS(I1,J1)
IF (ERR.GE.1.D0.OR.((ERR/ERRACC.GE.TCKBP).AND.(.NOT.BPD))) THEN
WRITE(6,*) 'Found a BP at ', X, ', decrementing IGRID.'
Z2(N-NDIMN+I)=F2(IPAST(NRDS+I))+HE*F2(N-NDIMN+I)
Z3(N-NDIMN+I)=F3(IPAST(NRDS+I))+HE*F3(N-NDIMN+I)
XL =ARGLAG(ILBP,X+HE,Z2,RPAR,IPAR,PHI,PAST,IPAST,NRDS,
XLR=ARGLAG(ILBP,X+HE,Z3,RPAR,IPAR,PHI,PAST,IPAST,NRDS,
& ' WARNING!: SOLUTION DOES NOT EXIST AT X= ',X
& ' WARNING!: SOLUTION IS NOT UNIQUE AT X= ',X
IF (THETA.LE.THET.AND.QT.GE.QUOT1.AND.QT.LE.QUOT2) THEN
10 THETA=(XLAG-(PAST(IPOS)+PAST(IPOS+IDIF-1)))/PAST(IPOS+IDIF-1)
YLAGR5=PAST(I)+THETA*(PAST(NRDS+I)+(THETA-C2M1)*(PAST(2*NRDS+I)
ALN = ARGLAG(IL,XA,YADV,RPAR,IPAR,PHI,PAST,IPAST,NRDS,
</code></pre>
<p>I want to enhance this script to split lines according to the following rules automatically:</p>
<ol>
<li>If the extra characters after the 72nd column are part of a string (bounded by single/double quotation marks), then split the line precisely at the 72nd character.</li>
<li>If they are part of a comma-separated list of variable names, split the line at the last comma before the 72nd character.</li>
<li>If they are part of an arithmetic expression, split the line at the last arithmetic operator before the 72nd column.</li>
</ol>
<p>The new/child line should start with an <code>&</code> sign in the 6th column. I don't care for proper indentation, but that would be nice.</p>
<p>I'm seeking guidance on modifying my script to implement these rules. How can I identify these different scenarios (strings, variable lists, arithmetic expressions) and perform the appropriate splits?</p>
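As a starting point, here is a naive sketch (my own helper, not from the question) that covers rules 2-3 plus the column-72 fall-back for strings. It tracks quote state so commas and operators inside string literals are ignored; real Fortran subtleties such as doubled quotes or the <code>**</code> operator are not handled:

```python
def split_f77_line(line, limit=72):
    """Split an over-long fixed-form line, continuing with '&' in column 6."""
    pieces = []
    line = line.rstrip("\n")
    while len(line) > limit:
        head = line[:limit]
        in_quote = None
        best = -1
        for i, ch in enumerate(head):
            if in_quote:
                if ch == in_quote:          # closing quote
                    in_quote = None
            elif ch in "'\"":
                in_quote = ch               # opening quote
            elif ch in ",+-*/" and i > 6:
                best = i                    # last break point outside quotes
        # rules 2/3: cut after the last comma/operator; rule 1 fall-back: column 72
        cut = best + 1 if best > 0 else limit
        pieces.append(line[:cut])
        line = " " * 5 + "&" + line[cut:]   # continuation marker in column 6
    pieces.append(line)
    return pieces
```

The loop repeats until the remainder fits, so lines needing several continuations are handled too.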
|
<python><regex><automation><text-processing><fortran77>
|
2024-04-16 10:54:51
| 0
| 14,347
|
Foad S. Farimani
|
78,333,840
| 16,725,431
|
Scrapers blocked but not browser
|
<p>I am trying to scrape from <code>https://www.[cencored].com/</code> using python</p>
<p>At first, it worked with a simple <code>requests.get()</code>; however, subsequent attempts failed <strong>on the next day</strong>. I did allow Windows to update in between; not sure if that's the cause. I tried including headers:</p>
<pre class="lang-py prettyprint-override"><code>import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
}
print(requests.get(url, headers=headers).text)
</code></pre>
<p>But this is what i get:</p>
<pre class="lang-none prettyprint-override"><code>requests.exceptions.ConnectionError: HTTPSConnectionPool(host='[cencored].com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000025316C12430>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
</code></pre>
<p>Then I tried using selenium as my last resort, however, the results were the same, it can't access the website at all.</p>
<p>This is what I see on the loaded html page.</p>
<pre><code>502 Bad Gateway
ProtocolException('Server connection to (\'[cencored].com\', 443) failed: Error connecting to "xxxx.com": [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond')
</code></pre>
<p>I am almost certain that my IP address is blacklisted; however, when I use Google Chrome to visit <code>https://[cencored].com/</code>, it loads with no problem at all.</p>
<p><strong>My question is:</strong></p>
<ol>
<li>How does google chrome not get blocked</li>
<li>What can I do to bypass the scraping protection</li>
</ol>
|
<python><python-3.x><selenium-webdriver><web-scraping><python-requests>
|
2024-04-16 10:25:57
| 1
| 444
|
Electron X
|
78,333,729
| 5,790,653
|
openpyxl creates incorrect sheet names within for loop
|
<p>This is my code:</p>
<pre class="lang-py prettyprint-override"><code>import os
import sys
my_path = 'C:/Users/S.Fazlollahzadeh/Desktop/shell-scripts/test only/excel'
os.chdir(my_path)
sys.path.append(my_path)
from openpyxl import Workbook
names = ['name1', 'name2', 'name3', 'name4']
wb = Workbook()
ws = wb.active
for name in names:
ws.title = name
wb.create_sheet(name)
wb.save("sample.xlsx")
quit()
</code></pre>
<p>Sheet names are:</p>
<pre><code>name4
name11
name21
name31
name41
</code></pre>
<p>While it should be:</p>
<pre><code>name1
name2
name3
name4
</code></pre>
<p>What am I doing wrong?</p>
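The loop both retitles the active sheet and creates a new sheet with the same name on every pass, so openpyxl de-duplicates the name clash by appending "1" (hence <code>name11</code>, <code>name21</code>, ...). Renaming the default sheet once and creating only the remaining sheets avoids it (a sketch):

```python
from openpyxl import Workbook

names = ['name1', 'name2', 'name3', 'name4']
wb = Workbook()
wb.active.title = names[0]      # reuse the default sheet for the first name
for name in names[1:]:
    wb.create_sheet(name)       # add the rest
```

`wb.sheetnames` then reads `['name1', 'name2', 'name3', 'name4']`.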
|
<python>
|
2024-04-16 10:07:34
| 2
| 4,175
|
Saeed
|
78,333,689
| 5,790,653
|
openpyxl appends all data to one column instead of multiple columns
|
<p>This is my code and my first issue (I may ask a new question for the second, related issue):</p>
<pre class="lang-py prettyprint-override"><code>import random
from openpyxl import Workbook
from openpyxl.worksheet.table import Table, TableStyleInfo
wb = Workbook()
ws = wb.active
names = ['name1', 'name2', 'name3', 'name4']
fruits = ['Apple', 'Banana', 'Peach', 'Orange']
for name in names:
ws.title = name
wb.create_sheet(name)
data = []
for n in range(1, random.randint(2, 20)):
data.append({'madeup': f'p{random.randint(1, 4)}x23',
'fruit': random.choice(fruits),
'price2021': random.randint(100, 500),
'price2022': random.randint(100, 500),
'price2023': random.randint(100, 500),
'price2024': random.randint(100, 500),
})
# add column headings. NB. these must be strings
ws.append(["Madeup Name", "Fruit", "2021", "2022", "2023", "2024"])
for row in data:
for x, y in row.items():
ws.append([y])
tab = Table(displayName="Table1", ref="A1:F7")
# Add a default style with striped rows and banded columns
style = TableStyleInfo(name="TableStyleMedium9",
showFirstColumn=False,
showLastColumn=False,
showRowStripes=True,
showColumnStripes=True)
tab.tableStyleInfo = style
ws.add_table(tab)
wb.save("sample.xlsx")
</code></pre>
<p>This is my current output: <a href="https://postimg.cc/Vd6qNRLZ" rel="nofollow noreferrer">excel.png</a></p>
<p>Current output in text:</p>
<pre><code>Madeup Name Fruit 2021 2022 2023 2024
p2x23
Apple
154
383
205
466
p3x23
Peach
212
117
177
190
</code></pre>
<p>While expected output is:</p>
<pre><code>Madeup Name Fruit 2021 2022 2023 2024
p2x23 Apple 154 383 205 466
p3x23 Peach 212 117 177 190
</code></pre>
<p>How can I solve it?</p>
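<code>ws.append</code> treats its argument as one whole row, so appending <code>[y]</code> cell by cell creates one single-cell row per value. Appending the record's values as a single list fixes it (a minimal sketch; since dicts preserve insertion order, the values line up with the headings):

```python
from openpyxl import Workbook

ws = Workbook().active
ws.append(["Madeup Name", "Fruit", "2021", "2022", "2023", "2024"])
data = [{'madeup': 'p2x23', 'fruit': 'Apple',
         'price2021': 154, 'price2022': 383, 'price2023': 205, 'price2024': 466}]
for row in data:
    ws.append(list(row.values()))   # one append per row, not per cell
```

With this change the table `ref` range should also be computed from the actual row count rather than hard-coded to <code>"A1:F7"</code>.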
|
<python>
|
2024-04-16 10:01:42
| 0
| 4,175
|
Saeed
|
78,333,583
| 8,968,910
|
Python: map json format with names in another list
|
<p>I have a table.json file like this:</p>
<pre><code> {
"tables":
[
{
"name": "table_1",
"columns":
{
"column1": "name",
"column2": "address"
}
},
{
"name": "table_2",
"columns":
{
"column1": "name",
"column2": "address"
}
},
{
"name": "table_3",
"columns":
{
"column1": "name",
"column2": "address"
}
}
]
}
</code></pre>
<p>I can read it successfully in Python like this:</p>
<pre><code>try:
with open('table.json', 'r', encoding='utf-8') as f:
mapping_data = json.load(f)
table_mappings = {table['name']: {'columns': table['columns']} for table in mapping_data.get('tables', [])}
print(table_mappings)
except Exception as e:
print(f"error: {str(e)}\n")
</code></pre>
<p>Now I have three tables with random names, say 'abc', 'rfe' and 'try'. They all have the same column names, so I decided to rewrite my json file without table names and map them afterwards:</p>
<p>new_table.json:</p>
<pre><code> {
"tables":
[
{
"columns":
{
"column1": "name",
"column2": "address"
}
}
]
}
</code></pre>
<p>How can I map them with names from a list and columns from json?</p>
<pre><code>table_list=['abc','rfe','try']
table_mappings = {for i in table_list: {'columns': table['columns']} for table in mapping_data.get('tables', [])}
</code></pre>
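Since every table now shares the single column spec, the mapping is a plain dict comprehension over the name list (a sketch with the JSON inlined so it is self-contained):

```python
import json

raw = '{"tables": [{"columns": {"column1": "name", "column2": "address"}}]}'
mapping_data = json.loads(raw)

table_list = ['abc', 'rfe', 'try']
columns = mapping_data['tables'][0]['columns']
table_mappings = {name: {'columns': columns} for name in table_list}
```

Note all three entries share the same <code>columns</code> dict object; use <code>dict(columns)</code> inside the comprehension if each table needs an independent copy.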
|
<python><json>
|
2024-04-16 09:45:19
| 1
| 699
|
Lara19
|
78,333,191
| 475,766
|
What can I do about Overfitting in Tabular Data Model
|
<p>I've built a predictive model for predicting outcomes based on certain features in the supplied data.</p>
<p>The model is a tabular learner utilizing fastai.</p>
<p>The dataset consists of about 300 records split for training, validation and testing sets.</p>
<p>I've implemented techniques to address overfitting such as early stopping and weight decay, but the model still appears to be overfitting when evaluated on unseen data.</p>
<p>Furthermore, I've also tried to adjust hyperparameters such as learning rate and batch size without improvement. I suspect there might be some aspects of my model's architecture or preprocessing pipeline that could be contributing to the problem, but I'm not sure where to start investigating.</p>
<p>Given the sensitive nature of the project, I'm unable to provide specific details about the dataset or the prediction task, but I can share the preprocessing and structure of the model as it currently stands.</p>
<p>Here's output of the training:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>epoch</th>
<th>train_loss</th>
<th>valid_loss</th>
<th>accuracy</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>0.752707</td>
<td>0.579501</td>
<td>0.776119</td>
<td>00:00</td>
</tr>
<tr>
<td>1</td>
<td>0.699270</td>
<td>0.833771</td>
<td>0.776119</td>
<td>00:00</td>
</tr>
<tr>
<td>2</td>
<td>0.652438</td>
<td>0.598243</td>
<td>0.791045</td>
<td>00:00</td>
</tr>
<tr>
<td>3</td>
<td>0.621083</td>
<td>3.889398</td>
<td>0.776119</td>
<td>00:00</td>
</tr>
<tr>
<td>4</td>
<td>0.591348</td>
<td>0.632366</td>
<td>0.791045</td>
<td>00:00</td>
</tr>
<tr>
<td>5</td>
<td>0.580582</td>
<td>6.670314</td>
<td>0.791045</td>
<td>00:00</td>
</tr>
</tbody>
</table></div>
<blockquote>
<p>No improvement since epoch 2: early stopping</p>
</blockquote>
<p>Here's the code for preprocessing (after I've built features which I can't divulge).</p>
<p>The <code>features</code> list defines each feature including a valid value range and weights (<code>feature</code>, <code>range_</code>, and <code>weight</code> as used in the normalize function below).</p>
<pre><code>def custom_normalize(df, feature, range_, weight):
df[feature] = normalize(df[feature], range_)
df[feature] = df[feature] * weight
return df
splits = RandomSplitter(valid_pct=0.2)(range_of(df))
procs = [Categorify, FillMissing]
for feature, info in features.items():
# Determine a range within which to select values when training.
procs.append(partial(custom_normalize, feature=feature, range_=info['range'], weight=info['weight']))
</code></pre>
<p>Building the model and training is pretty standard as far as I know:</p>
<pre><code>to = TabularPandas(df, procs=procs,
cat_names = cat_vars,
cont_names = cont_vars,
y_names=dep_var,
splits=splits)
dls = to.dataloaders(bs=64)
early_stop = EarlyStoppingCallback(monitor='accuracy', min_delta=0.01, patience=3)
learn = tabular_learner(dls, metrics=accuracy, wd=0.1)
learn.lr_find()
# Plot learning rate.
learn.recorder.plot_lr_find()
# Choose a learning rate based on the plot.
lr = learn.recorder.lrs[np.argmin(learn.recorder.losses)]
learn.fit_one_cycle(15, lr, cbs=early_stop)
learn.show_results()
# Only save model if none exists
# TODO wrap save in conditional that prevents saving if a model exists.
if not os.path.exists(model_fname):
learn.save(model_fname)
</code></pre>
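One thing worth checking before blaming the architecture: with roughly 300+ rows and `valid_pct=0.2`, the validation set holds about 67 rows, and the reported accuracies are exactly the fractions a majority-class predictor would produce. A sketch of the arithmetic (the 67 is an assumption inferred from the reported numbers, not from the question):

```python
# With valid_pct=0.2 on ~335 rows the validation set has 67 examples.
# The table's accuracies 0.776119 and 0.791045 are then just 52/67 and
# 53/67 -- consistent with the model mostly predicting the majority class,
# which would also explain why accuracy barely moves while valid_loss spikes.
n_valid = 67
print(round(52 / n_valid, 6))  # 0.776119
print(round(53 / n_valid, 6))  # 0.791045
```

If that holds, class imbalance (not capacity) is the first thing to address, e.g. by monitoring a balanced metric instead of raw accuracy.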
|
<python><machine-learning><fast-ai>
|
2024-04-16 08:38:01
| 1
| 8,296
|
Ortund
|
78,333,176
| 1,835,727
|
Prophet forecast data doesn't match observed data
|
<h1>Context</h1>
<p>I'm trying to model office attendance using the <a href="https://facebook.github.io/prophet/" rel="nofollow noreferrer">Prophet</a> library in Python. My data is pretty simple: headcounts at 15-minute intervals. It's shown by the black dots in the plot below.</p>
<p><a href="https://i.sstatic.net/yjmTL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yjmTL.png" alt="A graph that is trying to predict office headcount" /></a></p>
<p>The daily and weekly trends look spot on:</p>
<p><a href="https://i.sstatic.net/vLY9a.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vLY9a.png" alt="the daily and weekly trends for the office" /></a></p>
<p>I've also included holidays.</p>
<p>The code looks like this:</p>
<pre><code>m = Prophet(holidays=holidays, weekly_seasonality=True, growth="flat") # "logistic")
p_df = sdf[["OCCUPANCY_AVG"]].reset_index()
p_df.columns = ["ds", "y"]
p_df["cap"] = 180
df["floor"] = 0
m.fit(p_df)
future = m.make_future_dataframe(periods=24 * 60, freq="H")
future["cap"] = 180
future["floor"] = 0
# future.tail()
forecast = m.predict(future)
# forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail()
fig1 = m.plot(forecast, xlabel="Date", ylabel="Headcount", include_legend=True)
# %%
fig2 = m.plot_components(forecast)
</code></pre>
<h1>Question</h1>
<p>There are several things that I'm not thrilled about here:</p>
<ul>
<li>The forecast data can't, in real life, go negative</li>
<li>The observed data peaks are much higher than even the <em>uncertainty interval</em></li>
</ul>
<p>How can I make the model respect that it's predicting a quantity that can't go negative, and also make it fit the input data?</p>
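Two hedged observations. First, the snippet sets <code>df["floor"] = 0</code> but fits on <code>p_df</code>, so the floor likely never reaches Prophet; with <code>growth="logistic"</code> (not <code>"flat"</code>) plus <code>cap</code> and <code>floor</code> on both the training and future frames, forecasts are bounded. Second, a common alternative is modeling <code>log1p(y)</code> and inverting with <code>expm1</code>, which strongly discourages negative values; a dependency-free sketch of that round trip:

```python
import math

# Fit Prophet on z = log(1 + headcount) instead of the raw counts, then
# invert the forecast: y_hat = exp(z_hat) - 1.  Clipping at zero covers
# the rare case where the transformed forecast itself dips below zero.
y = [0, 3, 25, 180]
z = [math.log1p(v) for v in y]
back = [max(math.expm1(v), 0.0) for v in z]
print(back)  # recovers the original counts up to float error
```

The log transform also tends to help with the under-predicted peaks, since it compresses the large daytime values relative to the overnight zeros.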
|
<python><facebook-prophet>
|
2024-04-16 08:35:57
| 3
| 13,530
|
Ben
|
78,333,012
| 1,711,271
|
Replace a Polars column with a 1D array or Series
|
<p>Sample df:</p>
<pre><code>import polars as pl
import numpy as np
df = pl.DataFrame(
{
"nrs": [1, 2, 3, None, 5],
"names": ["foo", "ham", "spam", "egg", None],
"random": np.random.rand(5),
"A": [True, True, False, False, False],
}
)
</code></pre>
<p>I want to replace column <code>random</code>. So far, I've been doing</p>
<pre><code>new = np.arange(5)
df.replace('random', pl.Series(new))
</code></pre>
<p><strong>note</strong> that <code>replace</code> is one of the few Polars methods which work in place!</p>
<p>But now I'm getting</p>
<pre><code>C:\Users\...\AppData\Local\Temp\ipykernel_18244\1406681700.py:2: DeprecationWarning: `replace` is deprecated. DataFrame.replace is deprecated and will be removed in a future version. Please use
df = df.with_columns(new_column.alias(column_name))
instead.
df = df.replace('random', pl.Series(new))
</code></pre>
<p>So, should I do</p>
<pre><code>df = df.with_columns(pl.Series(new).alias('random'))
</code></pre>
<p>This seems more verbose, and the in-place modification is gone. Am I doing things right?</p>
|
<python><dataframe><replace><python-polars>
|
2024-04-16 08:02:19
| 2
| 5,726
|
DeltaIV
|
78,332,995
| 1,841,839
|
After installing Python 3.12.3, it fails to create a venv
|
<p>I just installed the new version of <a href="https://www.python.org/downloads/" rel="nofollow noreferrer">Python</a>.</p>
<p>It installed correctly:</p>
<pre class="lang-none prettyprint-override"><code>C:\Development\pythontest>python --version
Python 3.12.3
</code></pre>
<p>When I try to create the new venv it fails:</p>
<pre class="lang-none prettyprint-override"><code>C:\Development\pythontest>python -m venv myvenv
Could not import runpy module
Traceback (most recent call last):
File "<frozen runpy>", line 15, in <module>
File "<frozen importlib.util>", line 2, in <module>
ModuleNotFoundError: No module named 'importlib._abc'
</code></pre>
<p>I have an older version (3.9) on my machine that works fine. Not sure what's up with this. As you can see from the first command, I changed the path, so I know it's calling version 3.12.3.</p>
<p>Works:</p>
<pre class="lang-none prettyprint-override"><code>C:\Users\linda\AppData\Local\Programs\Python\Python39\python.exe -m venv myenv
</code></pre>
<p>Doesn't work:</p>
<pre class="lang-none prettyprint-override"><code>C:\Users\linda\AppData\Local\Programs\Python\Python312\python.exe -m venv myenv
Could not import runpy module
Traceback (most recent call last):
File "<frozen runpy>", line 15, in <module>
File "<frozen importlib.util>", line 2, in <module>
ModuleNotFoundError: No module named 'importlib._abc'
</code></pre>
<p>The same thing happens via PyCharm:</p>
<p><a href="https://i.sstatic.net/Yqg37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yqg37.png" alt="enter image description here" /></a></p>
<p>I tried to do an upgrade on a venv with Python 3.9:</p>
<pre class="lang-none prettyprint-override"><code>C:\Development\pythonsucks>python -m myenv --upgrade m2
Could not import runpy module
Traceback (most recent call last):
File "<frozen runpy>", line 15, in <module>
File "<frozen importlib.util>", line 2, in <module>
ModuleNotFoundError: No module named 'importlib._abc'
</code></pre>
<p>Python launcher:</p>
<pre class="lang-none prettyprint-override"><code>C:\Development\pythonsucks>"C:\Program Files\Python312\python.exe" -m venv myenv
Could not import runpy module
Traceback (most recent call last):
File "<frozen runpy>", line 15, in <module>
File "<frozen importlib.util>", line 2, in <module>
ModuleNotFoundError: No module named 'importlib._abc'
C:\Development\pythonsucks>py --version
Python 3.12.3
C:\Development\pythonsucks>py -0p
-V:3.12 * C:\Program Files\Python312\python.exe
-V:3.9 C:\Users\linda\AppData\Local\Programs\Python\Python39\python.exe
C:\Development\pythonsucks>py -m venv test
Could not import runpy module
Traceback (most recent call last):
File "<frozen runpy>", line 15, in <module>
File "<frozen importlib.util>", line 2, in <module>
ModuleNotFoundError: No module named 'importlib._abc'
</code></pre>
<p>I'm starting to think something is broken in 3.12.</p>
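For context, this traceback is a classic symptom of 3.12 picking up an older Python's standard library, usually via a `PYTHONPATH` environment variable pointing at the 3.9 `Lib` directory: 3.9 has no `importlib._abc`, which 3.12's frozen `runpy` needs at startup. A hedged sketch of the check, run under the still-working 3.9 interpreter since 3.12 won't start:

```python
# Run this under the working interpreter to inspect the environment the
# broken 3.12 inherits.  Any PYTHONPATH entry mentioning an older
# Python's standard library is the likely culprit; clearing PYTHONPATH
# (or the stray sys.path entry) and retrying `python -m venv` is the
# usual fix.
import os

print(repr(os.environ.get("PYTHONPATH")))
```

If `PYTHONPATH` is unset, the next place to look is a `python._pth` file or `sitecustomize` left over from the older install.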
|
<python>
|
2024-04-16 07:59:08
| 1
| 118,263
|
Linda Lawton - DaImTo
|
78,332,952
| 7,123,933
|
Airflow sensor which checks if GCS object was updated
|
<p>I would like to have a DAG which checks whether an Avro file exists on GCS and, if so, moves on to another task. This Avro file arrives in GCS under different names, so I would like to use a prefix to match any Avro file in the folder. I used code like this:</p>
<pre class="lang-py prettyprint-override"><code> gcs_sensor = GCSObjectsWithPrefixExistenceSensor(
task_id='gcs_avro_file_sensor',
bucket='my_bucket',
prefix='mysql/abc/',
google_cloud_conn_id='google_cloud_default',
timeout=600,
poke_interval=60,
dag=dag
)
</code></pre>
<p>but it always succeeds, as it is an existence sensor. How can I modify it to check whether a new file was uploaded there? (The old file is always removed, so there is only ever one Avro file in this folder.)</p>
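Two hedged options: the Google provider also ships `GCSObjectUpdateSensor`, which pokes until a named object's `updated` time passes a threshold, though it needs an exact object name rather than a prefix; for a prefix, a small custom sensor can compare each blob's `updated` timestamp against the run's start. A sketch of that poke logic, with a hand-made `Blob` stand-in (in a real sensor the blobs would come from the GCS hook's listing):

```python
from datetime import datetime, timezone

def has_fresh_avro(blobs, since):
    """True if any .avro object was updated after `since`."""
    return any(b.name.endswith(".avro") and b.updated > since for b in blobs)

class Blob:
    """Minimal stand-in for google.cloud.storage.Blob."""
    def __init__(self, name, updated):
        self.name, self.updated = name, updated

since = datetime(2024, 4, 16, tzinfo=timezone.utc)
old = Blob("mysql/abc/a.avro", datetime(2024, 4, 15, tzinfo=timezone.utc))
new = Blob("mysql/abc/b.avro", datetime(2024, 4, 17, tzinfo=timezone.utc))

print(has_fresh_avro([old], since))       # sensor keeps poking
print(has_fresh_avro([old, new], since))  # downstream task may run
```

Wrapping this in a `BaseSensorOperator.poke()` and passing the DAG run's logical date as `since` gives a prefix-aware "updated" sensor.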
|
<python><google-cloud-platform><google-cloud-storage><airflow>
|
2024-04-16 07:52:54
| 1
| 359
|
Lukasz
|
78,332,877
| 2,149,718
|
TypeError: cannot unpack non-iterable MultiPoint object
|
<p>In my python app I am using Shapely. Invoking the function below:</p>
<pre><code>def get_t_start(t_line: geometry.LineString):
print('get_t_start', t_line.boundary)
p1, p2 = t_line.boundary
t_start = p1 if p1.y < p2.y else p2
return t_start
</code></pre>
<p>produces the following output:</p>
<blockquote>
<p>get_t_start MULTIPOINT (965 80, 1565 1074)
Traceback (most recent call last):<br />
...
File "/sites/mpapp.py", line 13, in
get_t_start
p1, p2 = t_line.boundary<br />
TypeError: cannot unpack non-iterable MultiPoint object</p>
</blockquote>
<p>From the <code>print</code> of <code>t_line.boundary</code> I guess the object is OK. I am sure I have used this object (MULTIPOINT) like this in other apps to get the boundary points. I really can't figure out why this is not working now.</p>
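This looks like the Shapely 2.0 change: multi-part geometries are no longer iterable (and therefore no longer unpackable); the parts now live behind the `.geoms` accessor, which is why the same code worked in older apps. A minimal sketch of the fix, assuming Shapely ≥ 1.8 where `.geoms` also exists:

```python
from shapely.geometry import LineString

t_line = LineString([(965, 80), (1565, 1074)])

# Unpack via .geoms instead of the MultiPoint itself.
p1, p2 = t_line.boundary.geoms
t_start = p1 if p1.y < p2.y else p2
print(t_start.y)
```

Since `.geoms` is available on both 1.8 and 2.x, this form works across the version boundary.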
|
<python><python-3.x><shapely>
|
2024-04-16 07:38:20
| 1
| 72,345
|
Giorgos Betsos
|
78,332,807
| 5,147,886
|
Convert HMAC signature from Go into R
|
<p>I am trying to reproduce the following Go code in R for creating the signature for <a href="https://github.com/consbio/mbtileserver" rel="nofollow noreferrer">consbio/mbtileserver</a>. The aim is to build apps in R and Python which can access mbtileservers using a secure connection. While I am able to generate a working version in Python, a working version in R is proving tricky. Thanks in advance.</p>
<p>The sample GO code from the link above :</p>
<pre><code>
serviceId := "test"
date := "2019-03-08T19:31:12.213831+00:00"
salt := "0EvkK316T-sBLA"
secretKey := "YMIVXikJWAiiR3q-JMz1v2Mfmx3gTXJVNqme5kyaqrY"
key := sha1.New()
key.Write([]byte(salt + secretKey))
hash := hmac.New(sha1.New, key.Sum(nil))
message := fmt.Sprintf("%s:%s", date, serviceId)
hash.Write([]byte(message))
b64hash := base64.RawURLEncoding.EncodeToString(hash.Sum(nil))
fmt.Println(b64hash) // Should output: 2y8vHb9xK6RSxN8EXMeAEUiYtZk
</code></pre>
<p>In Python code is (working):</p>
<pre><code>import hashlib
import hmac
import base64
service_id = "test"
date = "2019-03-08T19:31:12.213831+00:00"
salt = "0EvkK316T-sBLA"
secret_key = "YMIVXikJWAiiR3q-JMz1v2Mfmx3gTXJVNqme5kyaqrY"
key = hashlib.sha1()
key.update((salt + secret_key).encode())
hash_obj = hmac.new(key.digest(), (date + ":" + service_id).encode(), hashlib.sha1)
b64_hash = base64.urlsafe_b64encode(hash_obj.digest()).decode()
print(b64_hash) # Should output: 2y8vHb9xK6RSxN8EXMeAEUiYtZk
</code></pre>
<p>The R code so far (not-working):</p>
<pre><code>library(openssl)
library(digest)
service_id <- "test"
date <- "2019-03-08T19:31:12.213831+00:00"
salt <- "0EvkK316T-sBLA"
secret_key <- "YMIVXikJWAiiR3q-JMz1v2Mfmx3gTXJVNqme5kyaqrY"
key <- sha1(paste0(salt, secret_key))
hash_obj <- hmac(key, paste0(date, ":", service_id), "sha1")
b64_hash <- base64_encode(hash_obj)
print(b64_hash) # Should output: 2y8vHb9xK6RSxN8EXMeAEUiYtZk
</code></pre>
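The likely pitfall in the R version is that `digest::sha1()` returns a *hex string*, while the Go and Python versions key the HMAC with the raw 20-byte digest (and `digest::hmac` also needs `raw = TRUE` to avoid hex-encoding its output before base64). A Python sketch contrasting the two keyings makes the difference visible:

```python
import base64
import hashlib
import hmac

salt = "0EvkK316T-sBLA"
secret_key = "YMIVXikJWAiiR3q-JMz1v2Mfmx3gTXJVNqme5kyaqrY"
message = "2019-03-08T19:31:12.213831+00:00:test".encode()

raw_key = hashlib.sha1((salt + secret_key).encode()).digest()              # 20 bytes
hex_key = hashlib.sha1((salt + secret_key).encode()).hexdigest().encode()  # 40 hex chars

good = base64.urlsafe_b64encode(hmac.new(raw_key, message, hashlib.sha1).digest())
bad = base64.urlsafe_b64encode(hmac.new(hex_key, message, hashlib.sha1).digest())

print(good.rstrip(b"="))  # matches the Go output (Go uses unpadded base64)
print(bad.rstrip(b"="))   # hex-keyed HMAC: a different signature entirely
```

Translating to R means converting the hex digest to raw bytes (e.g. via a hex-to-raw conversion) before keying the HMAC.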
|
<python><r><go><hmacsha1><mbtiles>
|
2024-04-16 07:25:35
| 1
| 440
|
pdbentley
|
78,332,753
| 11,922,237
|
Decreasing the text granularity when working with image_to_boxes in `pytesseract`
|
<p>I have the following image:</p>
<p><a href="https://i.sstatic.net/snH0B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/snH0B.png" alt="enter image description here" /></a></p>
<p>I tried the <code>image_to_boxes</code> function to draw bounding boxes, but the boxes are drawn around each letter. I am guessing that this is due to the large font of the text or the spacing between letters.</p>
<p>Is it possible to decrease granularity or something so that I can draw boxes around each word?</p>
<p>I also tried <code>image_to_data</code>, which does return bounding boxes for individual words but some words are missed, i.e. it isn't very accurate.</p>
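`image_to_data` is indeed the word-level API; accuracy can often be improved by passing a page-segmentation mode (e.g. `config="--psm 6"`, assuming a uniform block of text) and by upscaling or thresholding the image first. A sketch of turning its output into word boxes, with a hand-made stand-in for the dict that `pytesseract.image_to_data(img, output_type=Output.DICT)` returns:

```python
# Hand-made stand-in for pytesseract's Output.DICT result.
data = {
    "text":  ["", "Hello", "world"],
    "conf":  ["-1", "96", "91"],
    "left":  [0, 10, 120], "top":    [0, 5, 5],
    "width": [0, 100, 90], "height": [0, 40, 40],
}

# Keep entries that are real words (non-empty text, positive confidence);
# conf == -1 marks structural rows such as blocks and lines.
boxes = [
    (data["left"][i], data["top"][i], data["width"][i], data["height"][i])
    for i in range(len(data["text"]))
    if data["text"][i].strip() and float(data["conf"][i]) > 0
]
print(boxes)
```

Filtering on a higher confidence threshold (say 60) trades missed words for fewer spurious boxes.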
|
<python><ocr><tesseract><python-tesseract>
|
2024-04-16 07:17:28
| 0
| 1,966
|
Bex T.
|
78,332,519
| 11,671,779
|
My Python 3.11 did not use new error message
|
<p>I'm using Python 3.10 as my daily driver because of some dependency. Now I've just downloaded Python 3.11, installed it, and can successfully use the environment.</p>
<pre><code>(env311) PS D:\user\package> py
Python 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
</code></pre>
<p>I installed it because I want to use the new error message pointer. But why is this what I get?</p>
<pre><code>>>> import itertools
>>>
>>> c = itertools.counter()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'itertools' has no attribute 'counter'. Did you mean: 'count'?
</code></pre>
<p>It didn't produce the better error message claimed in <a href="https://www.playfulpython.com/python-3-11-better-error-messages/" rel="nofollow noreferrer">this</a> and <a href="https://realpython.com/python311-error-messages/" rel="nofollow noreferrer">this</a> post, like:</p>
<pre><code>Traceback (most recent call last):
File "util.py", line 3, in <module>
c = itertools.counter()
^^^^^^^^^^^^^^^^^
AttributeError: module 'itertools' has no attribute 'counter'. Did you mean: 'count'?
</code></pre>
<hr />
<h1>EDIT 1</h1>
<p>Thanks for the answer. If by chance any of you knows how to enable it in <code>jupyter notebook</code> or another mode, that would be greatly appreciated. For now, I can port my notebooks to scripts.</p>
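For context, the caret underlines are produced by re-reading the source file named in the traceback; code typed at the REPL is reported as `<stdin>`, which has no file to re-read, so 3.11 cannot draw them there (whether notebook cells show them depends on the IPython version's own traceback formatter). A sketch showing the same failing line run from a real file instead:

```python
import os
import subprocess
import sys
import tempfile

# Write the failing line to a real file and run it; under 3.11+ the
# traceback gains ^^^ markers under `itertools.counter`.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("import itertools\nc = itertools.counter()\n")
    path = f.name

proc = subprocess.run([sys.executable, path], capture_output=True, text=True)
os.unlink(path)
print(proc.stderr)
```

The `AttributeError` and the "Did you mean" hint appear either way; only the caret range requires a real source file.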
|
<python><python-3.11>
|
2024-04-16 06:26:44
| 1
| 2,276
|
Muhammad Yasirroni
|
78,332,401
| 14,320,103
|
Sympy breaks regex in python?
|
<p>I have the code:</p>
<pre class="lang-py prettyprint-override"><code>import re
import math
#from sympy import * # breaks?
#x, y, z = symbols('x y z')
#init_printing(use_unicode=True)
def evaluate_expression(expr, variables):
# Evaluate the expression using eval(), and provide the variables dictionary
return eval(expr, variables)
def main():
tools = """
<<<[eval("5"),a_x]>>>
<<<[eval("5+a_x"),ans]>>>
"""
variables = {}
lines = tools.strip().split("\n")
for line in lines:
# Extract the inner array using regular expression
match = re.search(r'<<<\[(.*?),(\w+)\]>>>', line)
if match:
# Extract expression and variable name
expr = match.group(1)
var_name = match.group(2)
# Evaluate expression
value = evaluate_expression(expr, variables)
# Update variables dictionary
variables[var_name] = value
# Check if 'ans' is defined
if 'ans' in variables:
print("ans =", variables['ans'])
break
if __name__ == "__main__":
main()
</code></pre>
<p>If I don't import sympy, it runs fine, but if I do import sympy, it breaks: the <code>re</code> search no longer exists, no matter what I name the variables. I don't get it.</p>
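Nothing is broken in the regex engine: sympy's namespace exports a function called `re` (the real-part of a complex expression), so `from sympy import *` silently rebinds the name that previously pointed at the regex module. Using `import sympy` with qualified names, or re-running `import re` after the star import, avoids the clash. A dependency-free illustration of the shadowing:

```python
import re

# `re` starts out as the regex module.
assert callable(re.search)

# sympy exports a function named `re`, so `from sympy import *` rebinds
# the name -- exactly as this hypothetical stand-in does:
def re(x):
    return x.real

print(hasattr(re, "search"))  # the regex API is gone from the name `re`
```

This is one of the standard arguments against star imports: they can overwrite existing names without any warning.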
|
<python><regex><math><sympy>
|
2024-04-16 05:57:48
| 1
| 619
|
Vatsa Pandey
|
78,332,120
| 354,979
|
Flax neural network with nans in the outputs
|
<p>I am training a neural network using Flax. My training data has a significant number of nans in the outputs. I want to ignore these and only use the non-nan values for training. To achieve this, I have tried to use <code>jnp.nanmean</code> to compute the losses, i.e.:</p>
<pre><code>def nanloss(params, inputs, targets):
pred = model.apply(params, inputs)
return jnp.nanmean((pred - targets) ** 2)
def train_step(state, inputs, targets):
loss, grads = jax.value_and_grad(nanloss)(state.params, inputs, targets)
state = state.apply_gradients(grads=grads)
return state, loss
</code></pre>
<p>However, after one training step the loss is nan.</p>
<p>Is what I am trying to achieve possible? If so, how can I fix this?</p>
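It is fixable. `jnp.nanmean` masks only the forward value; reverse-mode AD still propagates gradients through the nan entries, and `nan * 0.0` is still nan, so one step poisons all the parameters. The standard fix is to remove the nans *before* they enter the computation with `where`, then average over the mask. A numpy sketch of the arithmetic (the same pattern applies verbatim with `jnp.where`/`jnp.isnan` inside `nanloss`):

```python
import numpy as np

pred = np.array([1.0, 2.0, 3.0])
targets = np.array([1.5, np.nan, 2.5])

mask = ~np.isnan(targets)
safe_targets = np.where(mask, targets, 0.0)   # nan never enters the graph
sq_err = np.where(mask, (pred - safe_targets) ** 2, 0.0)
loss = sq_err.sum() / mask.sum()
print(loss)
```

In JAX the crucial point is that `safe_targets` is built first, so no branch of the autodiff graph ever touches a nan, and the masked entries contribute an exact zero gradient.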
|
<python><tensorflow><deep-learning><jax><flax>
|
2024-04-16 04:28:05
| 1
| 7,942
|
rhombidodecahedron
|
78,332,079
| 7,497,127
|
LogisticRegression model producing 100 percent accuracy
|
<p>I have fetched Amazon reviews for a product and am now trying to train a logistic regression model on them to categorize customer reviews. It gives 100 percent accuracy, and I am unable to understand the issue. Here is a sample from my dataset:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Stars</th>
<th>Title</th>
<th>Date</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Dipam</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>A very good fragrance. Recommended Seller - Sun Fragrances</td>
</tr>
<tr>
<td>sanket shah</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>Yes</td>
</tr>
<tr>
<td>Manoranjidham</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>This perfume is ranked No 3 .. Good one :)</td>
</tr>
<tr>
<td>Moukthika</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>I was gifted Versace Bright on my 25th Birthday. Fragrance stays for at least for 24 hours. I love it. This is one of my best collections.</td>
</tr>
<tr>
<td>megh</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>I have this perfume but didn't get it online..the smell is just amazing.it stays atleast for 2 days even if you take bath or wash d cloth. I have got so many compliments..</td>
</tr>
<tr>
<td>riya</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>Bought it from somewhere else,awesome fragrance, pure rose kind of smell stays for long,my guy loves this purchase of mine n fragrance too.</td>
</tr>
<tr>
<td>manisha.chauhan0091</td>
<td>5</td>
<td>5.0 out of 5 stars</td>
<td>N/A</td>
<td>Its light n long lasting i like it</td>
</tr>
<tr>
<td>UPS</td>
<td>1</td>
<td>1.0 out of 5 stars</td>
<td>N/A</td>
<td>Absolutely fake. Fragrance barely lasts for 15 minutes. Extremely harsh on the skin as well.</td>
</tr>
<tr>
<td>sanaa</td>
<td>1</td>
<td>1.0 out of 5 stars</td>
<td>N/A</td>
<td>a con game. fake product. dont fall for it</td>
</tr>
<tr>
<td>Juliana Soares Ferreira</td>
<td>N/A</td>
<td>Ótimo produto</td>
<td>N/A</td>
<td>Produto verdadeiro, com cheio da riqueza, não fixa muito, mas é delicioso. Dura na minha pele umas 3 horas e depois fica um cheirinho leve...Super recomendo</td>
</tr>
</tbody>
</table></div>
<p>Here is my code</p>
<pre><code>import re
import nltk
import numpy as np
import pandas as pd
from bs4 import BeautifulSoup
from nltk.sentiment import SentimentIntensityAnalyzer
from nltk.tokenize import word_tokenize
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
# Ensure necessary NLTK datasets and models are downloaded
# nltk.download('punkt')
# nltk.download('vader_lexicon')
# Load the data
df = pd.read_csv("reviews.csv") # Make sure to replace 'reviews.csv' with your actual file path
# Preprocess data
df['Stars'] = df['Stars'].fillna(3.0) # Handle missing values
df['Title'] = df['Title'].str.lower() # Standardize text formats
df['Description'] = df['Description'].str.lower()
df = df.drop(['Name', 'Date'], axis=1) # Drop unnecessary columns
print(df)
# Categorize sentiment based on star ratings
def categorize_sentiment(stars):
if stars >= 4.0:
return 'Positive'
elif stars <= 2.0:
return 'Negative'
else:
return 'Neutral'
df['Sentiment'] = df['Stars'].apply(categorize_sentiment)
# Clean and tokenize text
def clean_text(text):
text = BeautifulSoup(text, "html.parser").get_text()
letters_only = re.sub("[^a-zA-Z]", " ", text)
return letters_only.lower()
def tokenize(text):
return word_tokenize(text)
df['Clean_Description'] = df['Description'].apply(clean_text)
df['Tokens'] = df['Clean_Description'].apply(tokenize)
# Apply NLTK's VADER for sentiment analysis
sia = SentimentIntensityAnalyzer()
def get_sentiment(text):
score = sia.polarity_scores(text)
if score['compound'] >= 0.05:
return 'Positive'
elif score['compound'] <= -0.05:
return 'Negative'
else:
return 'Neutral'
df['NLTK_Sentiment'] = df['Clean_Description'].apply(get_sentiment)
print("df['NLTK_Sentiment'].value_counts()")
print(df['NLTK_Sentiment'].value_counts())
# Prepare data for machine learning
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(tokenizer=tokenize)
X = vectorizer.fit_transform(df['Clean_Description'])
y = df['NLTK_Sentiment'].apply(lambda x: 1 if x == 'Positive' else 0)
# Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=80)
# Train a Logistic Regression model
# Compute class weights
class_weights = compute_class_weight('balanced', classes=np.unique(y_train), y=y_train)
class_weights_dict = dict(enumerate(class_weights))
print(f"class_weights_dict {class_weights_dict}")
# Apply to Logistic Regression
# model = LogisticRegression(class_weight=class_weights_dict)
model = LogisticRegression(C=0.001, penalty='l2', class_weight='balanced')
model.fit(X_train, y_train)
# Predict sentiments on the test set
predictions = model.predict(X_test)
# Evaluate the model
accuracy = accuracy_score(y_test, predictions)
precision = precision_score(y_test, predictions, average='weighted')
recall = recall_score(y_test, predictions, average='weighted')
f1 = f1_score(y_test, predictions, average='weighted')
print(f"Accuracy: {accuracy:.4f}")
print(f"Precision: {precision:.4f}")
print(f"Recall: {recall:.4f}")
print(f"F1 Score: {f1:.4f}")
</code></pre>
<p>Here are the results of the print statements:</p>
<blockquote>
<p>NLTK_Sentiment<br />
Positive 8000<br />
Negative 2000<br />
Name: count, dtype: int64</p>
</blockquote>
<blockquote>
<p>class_weights_dict {0: 2.3696682464454977, 1: 0.6337135614702155}<br />
Accuracy: 1.0000<br />
Precision: 1.0000<br />
Recall: 1.0000<br />
F1 Score: 1.0000</p>
</blockquote>
<p>I am unable to find the reason why my model is always giving 100 percent accuracy.</p>
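The 100% score is most likely label leakage rather than a modeling bug: `NLTK_Sentiment` is computed by VADER *from* `Clean_Description`, and the TF-IDF features are computed from the very same text, so the classifier only has to re-learn VADER's keyword rule. Training against the independent `Stars`-based `Sentiment` column breaks the circle. A small sketch reproducing the effect with hypothetical toy texts (requires scikit-learn):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

texts = ["good product", "bad smell", "good value", "bad seller"] * 20
# The label is a deterministic function of the text itself -> circular target.
y = [1 if "good" in t else 0 for t in texts]

X = TfidfVectorizer().fit_transform(texts)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
score = LogisticRegression().fit(Xtr, ytr).score(Xte, yte)
print(score)  # "perfect" -- the model merely recovered the labeling rule
```

Swapping `y = df['NLTK_Sentiment']...` for a `Stars`-derived target in the original script should bring the metrics back to believable levels.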
|
<python><pandas><dataframe><machine-learning><logistic-regression>
|
2024-04-16 04:09:28
| 1
| 1,375
|
JAMSHAID
|
78,332,025
| 7,994,685
|
How to add another pass through to langchain using expression languages
|
<p>I am using Langchain's expression language to create a retriever. The goal is to create a function where the AI checks the response to a previous interaction.</p>
<p>Therefore, I need to give this function two inputs: the original question and the answer. This means I want to pass two parameters to it: 'question' and 'answer.'</p>
<p>Currently, I can only pass the 'question', as shown below. How can I also pass the 'answer' into the prompt?</p>
<pre><code>from langchain_community.vectorstores import DocArrayInMemorySearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
region = "usa"
vector_databases_path = "db"
k = 10
model_name = "gpt-4"
retriever_chain_type= "stuff"
input = "Albuquerque"
answer = ' MetroArea State\n0 Albuquerque NM'
####################### Get the Retriever ##################
#******************** Set Embeddings ***********************
embeddings = OpenAIEmbeddings()
#******************** Load Database Embeddings ***********************
docsearch = FAISS.load_local(os.path.join(os.getcwd(), vector_databases_path, "MSA_State_faiss_index"), embeddings)
#******************** Set Up the Retriever ***********************
retriever = docsearch.as_retriever(search_type="similarity", search_kwargs={"k": k})
prompt_template = """
Use the following pieces of context to check the AI's Answer.
{context}
Job: Your job is to check the MetroArea and State code based on what the AI has returned.
Here is the original input: {question}
Here is the AI's answer: {answer}
Report your answer as a CSV. Where the column names are AI_Correct,
AI_Correct: Return TRUE if the AI was correct or FALSE if it is incorrect.
Each variable after the comma should be one row. If there is no comma between them then treat input as one row.
If you do not know the answer, then return: IDK
"""
####################### Initialize LLM ##################
llm = ChatOpenAI(
model_name=model_name,
temperature=0.0
)
#How do I change this code to allow for an additional parameter, answer, to pass through it to the prompt?
prompt = ChatPromptTemplate.from_template(prompt_template)
output_parser = StrOutputParser()
setup_and_retrieval = RunnableParallel(
{"context": retriever, "question": RunnablePassthrough()}
)
chain = setup_and_retrieval | prompt | llm | output_parser
final_answer = chain.invoke(input)
final_answer
</code></pre>
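The usual LCEL pattern is to invoke the chain with a dict and route each key with `operator.itemgetter`, replacing the bare `RunnablePassthrough()` with per-key getters (hedged: the exact wiring depends on your LangChain version). The LangChain part is shown in comments since it needs the real retriever; `itemgetter` itself is plain stdlib:

```python
# Sketch of the rewired chain (names as in the question):
#
#   setup_and_retrieval = RunnableParallel({
#       "context": itemgetter("question") | retriever,
#       "question": itemgetter("question"),
#       "answer": itemgetter("answer"),
#   })
#   final_answer = chain.invoke({"question": input, "answer": answer})
#
# itemgetter does the per-key routing:
from operator import itemgetter

payload = {"question": "Albuquerque", "answer": " MetroArea State\n0 Albuquerque NM"}
print(itemgetter("question")(payload))
print(itemgetter("answer")(payload))
```

LangChain coerces `itemgetter(...) | retriever` into a runnable sequence, so only the question text (not the whole dict) reaches the retriever.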
|
<python><langchain>
|
2024-04-16 03:49:41
| 1
| 1,328
|
Sharif Amlani
|
78,331,868
| 3,174,398
|
How to (idiomatically) read indexed arrays from a delimited text file?
|
<p>I have text files from an external source that are formatted like so:</p>
<pre><code> 0 0 -0.105961 0.00000
1 0 -1.06965 0.00000
1 1 -0.0187213 -0.240237
2 0 -0.124695 0.00000
2 1 -0.178982 0.0633255
2 2 0.760988 -0.213796
3 0 -1.96695 0.00000
3 1 0.0721285 0.0491248
3 2 -0.560517 0.267733
3 3 -0.188732 -0.112053
4 0 -0.0205364 0.00000
 ⋮  ⋮      ⋮        ⋮
40 30 0.226833 -0.733674
40 31 0.0444837 -0.249677
40 32 -0.171559 -0.970601
40 33 -0.141848 -0.137257
40 34 -0.247042 -0.902128
40 35 -0.495114 0.322912
40 36 0.132215 0.0543294
40 37 0.125682 0.817945
40 38 0.181098 0.223309
40 39 0.702915 0.103991
40 40 1.11882 -0.488252
</code></pre>
<p>where the first two columns are the indices of a 2d array (say <code>i</code> and <code>j</code>), and the 3rd and 4th columns are the values for two 2d arrays (say <code>p[:,:]</code> and <code>q[:,:]</code>). What would be the idiomatic way in Python/Julia to read this into two 2d arrays?</p>
<p>There are some assumptions we can make: the arrays are lower triangular (i.e., the values only exist (or are nonzero) for <code>j <= i</code>), and the indices are increasing, that is, the last line can be expected to have the largest <code>i</code> (and probably <code>j</code>).</p>
<p>The current implementation assumes that maximum <code>i</code> and <code>j</code> is 40, and proceeds like so: (Julia minimal working example):</p>
<pre class="lang-jl prettyprint-override"><code>using OffsetArrays
using DelimitedFiles: readdlm
n = 40
p = zeros(0:n, 0:n)
q = zeros(0:n, 0:n)
open(filename) do infile
for i = 0:n
for j = 0:i
line = readline(infile)
arr = readdlm(IOBuffer(line))
p[i,j] = arr[3]
q[i,j] = arr[4]
end
end
end
</code></pre>
<p>Note that this example also assumes that the index <code>j</code> changes the fastest, which is usually true, but in doing that it effectively ignores the first two columns of the data.</p>
<p>However, I'm looking for a solution that makes fewer assumptions about the data file (maybe only the lower-triangular one and the index increasing one). What would be a "natural" way of doing this in Julia or Python?</p>
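For the Python side, a hedged sketch that uses the index columns themselves, so the only remaining assumption is that the indices mean what they say; ordering and triangularity no longer matter:

```python
import io

import numpy as np

# Stand-in for the file; np.loadtxt(filename) works identically.
text = """0 0 -0.105961 0.00000
1 0 -1.06965 0.00000
1 1 -0.0187213 -0.240237"""

data = np.loadtxt(io.StringIO(text))
i = data[:, 0].astype(int)
j = data[:, 1].astype(int)
n = max(i.max(), j.max()) + 1   # array size read off the data, not assumed

p = np.zeros((n, n))
q = np.zeros((n, n))
p[i, j] = data[:, 2]   # vectorized "fancy" index assignment
q[i, j] = data[:, 3]
print(p[1, 0], q[1, 1])
```

Entries never mentioned in the file simply stay zero, which matches the lower-triangular structure for free.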
|
<python><arrays><matrix><file-io><julia>
|
2024-04-16 02:42:36
| 3
| 489
|
rigel
|
78,331,861
| 16,115,413
|
Counting Tokens with Message History in OpenAI: Correct Approach?
|
<p>Issues Identified on the Internet:</p>
<ol>
<li><p>Websites like <a href="https://tiktokenizer.vercel.app/" rel="nofollow noreferrer">https://tiktokenizer.vercel.app/</a> are useful for counting tokens, but they seem to have limitations when memory is involved in the API, especially when handling both new and old messages.</p>
</li>
<li><p>There's a discrepancy in pricing between the message you send to the API and the output you receive, as shown in the screenshot below.
<a href="https://i.sstatic.net/cw3PI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cw3PI.png" alt="OpenAI Pricing" /></a>
Source: <a href="https://openai.com/pricing" rel="nofollow noreferrer">https://openai.com/pricing</a>.</p>
</li>
</ol>
<p>Therefore, it's important to count them separately for a thorough cost analysis.</p>
<p>Here's my current approach (using GPT-3.5 Turbo):</p>
<pre class="lang-py prettyprint-override"><code>from dotenv import load_dotenv
from openai import OpenAI
import tiktoken
import time
import os
load_dotenv()
example_messages = [{"role": "system", "content": "You are a helpful assistant."}]
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
def gpt_response_func():
response = client.chat.completions.create(
model="gpt-3.5-turbo",
messages=example_messages
)
return response
token_count = {'input': 0, "output": 0, 'total': 0}
n = 1
while n <= 3:
gpt_response = gpt_response_func()
gpt_msg = gpt_response.choices[0].message.content
gpt_token_usage = dict(gpt_response.usage)
token_count['input'] += gpt_token_usage['prompt_tokens']
token_count['output'] += gpt_token_usage['completion_tokens']
token_count['total'] = token_count['input'] + token_count['output']
print(token_count)
print("Bot:", gpt_msg)
example_messages.append({"role": "assistant", "content": gpt_msg})
time.sleep(2)
user_response = input("User: ")
print("User:", user_response)
example_messages.append({"role": "user", "content": user_response})
n += 1
</code></pre>
<p>I want to ensure if my approach is correct and seek suggestions for improvement. Additionally, if there's a better method available for dealing with token counting involving memory, I'd appreciate guidance as I couldn't find relevant resources online.</p>
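The approach itself looks sound: `response.usage` is the billed count, and because the full message history is resent on every call, accumulating `prompt_tokens` per call (as the loop does) naturally captures the growing-history cost that client-side tokenizers miss. One hedged addition is turning the counts into a price; the per-1K rates below are assumptions copied from the pricing page at the time of writing, not values returned by the API:

```python
# Assumed gpt-3.5-turbo rates (USD per 1K tokens) -- check the live page.
PRICE_PER_1K = {"input": 0.0005, "output": 0.0015}

def estimated_cost(token_count):
    """Convert the accumulated token_count dict into a dollar figure."""
    return (token_count["input"] / 1000 * PRICE_PER_1K["input"]
            + token_count["output"] / 1000 * PRICE_PER_1K["output"])

print(estimated_cost({"input": 4000, "output": 1000, "total": 5000}))
```

Keeping the input/output totals separate, as the loop already does, is exactly what makes this per-direction pricing possible.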
|
<python><token><openai-api><large-language-model><chatgpt-api>
|
2024-04-16 02:40:22
| 0
| 549
|
Mubashir Ahmed Siddiqui
|
78,331,796
| 2,604,247
|
How Does .venv in Python Inherit Jupyter Notebook from System?
|
<h5>Context</h5>
<p>I was <em>not</em> a regular virtual environment user. I just used to install pip packages at the user level like</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m pip install --user tensorflow
</code></pre>
<p>and worked with it. This is how installed my jupyter lab as well to work with ipython notebooks.</p>
<p>But only recently, I got into the habit of installing pip packages on a project level by creating virtual environments like</p>
<pre class="lang-bash prettyprint-override"><code>python3 -m venv .venv
source .venv/bin/activate
(.venv)$ python3 -m pip install tensorflow # Within the virtual environment, example
</code></pre>
<h5>Question</h5>
<p>Even within the virtual environment, whenever I run</p>
<pre class="lang-bash prettyprint-override"><code>(.venv) $ jupyter notebook
</code></pre>
<p>it fires up, without having to explicitly install jupyter <em>within the environment</em>. So it almost looks like somehow <em>jupyter</em> is inheriting from my user level packages. But, within the environment, when I try these</p>
<pre class="lang-bash prettyprint-override"><code>(.venv) $ python3 -m pip freeze # Empty, no package
(.venv) $ python3
>>> import tensorflow # Module not found, as tf exists only in the base environment
</code></pre>
<p>So, what's the deal here? If <code>.venv</code> provides true isolation from the base (as tensorflow does not seem to be inherited from the base), how come it inherits jupyter?</p>
<h5>Platform (if Relevant)</h5>
<p>Python 3.8.10 on Ubuntu 22.04.</p>
|
<python><python-3.x><jupyter-notebook><pip><pipenv>
|
2024-04-16 02:15:23
| 1
| 1,720
|
Della
|
78,331,636
| 857,932
|
What is the cleanest way to replace a hostname in an URL with Python?
|
<p>In Python, there is a standard library module <code>urllib.parse</code> that deals with parsing URLs:</p>
<pre class="lang-py prettyprint-override"><code>>>> import urllib.parse
>>> urllib.parse.urlparse("https://127.0.0.1:6443")
ParseResult(scheme='https', netloc='127.0.0.1:6443', path='', params='', query='', fragment='')
</code></pre>
<p>There are also properties on <code>urllib.parse.ParseResult</code> that return the hostname and the port:</p>
<pre class="lang-py prettyprint-override"><code>>>> p.hostname
'127.0.0.1'
>>> p.port
6443
</code></pre>
<p>And, by virtue of ParseResult being a namedtuple, it has a <code>_replace()</code> method that returns a new ParseResult with the given field(s) replaced:</p>
<pre class="lang-py prettyprint-override"><code>>>> p._replace(netloc="foobar.tld")
ParseResult(scheme='https', netloc='foobar.tld', path='', params='', query='', fragment='')
</code></pre>
<p>However, it cannot replace <code>hostname</code> or <code>port</code> because they are dynamic properties rather than fields of the tuple:</p>
<pre class="lang-py prettyprint-override"><code>>>> p._replace(hostname="foobar.tld")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/collections/__init__.py", line 455, in _replace
raise ValueError(f'Got unexpected field names: {list(kwds)!r}')
ValueError: Got unexpected field names: ['hostname']
</code></pre>
<p>It might be tempting to simply concatenate the new hostname with the existing port and pass it as the new netloc:</p>
<pre class="lang-py prettyprint-override"><code>>>> p._replace(netloc='{}:{}'.format("foobar.tld", p.port))
ParseResult(scheme='https', netloc='foobar.tld:6443', path='', params='', query='', fragment='')
</code></pre>
<p>However this quickly turns into a mess if we consider</p>
<ul>
<li>the fact that the port is optional;</li>
<li>the fact that netloc may also contain the username and possibly the password (e.g. <code>https://user:pass@hostname.tld</code>);</li>
<li>the fact that IPv6 literals must be wrapped in brackets (i.e. <code>https://::1</code> isn't valid but <code>https://[::1]</code> is);</li>
<li>and maybe something else that I'm missing.</li>
</ul>
<hr />
<p><strong>What is the cleanest, correct way to replace the hostname in a URL in Python?</strong></p>
<p>The solution must handle IPv6 (both as a part of the original URL and as the replacement value), URLs containing username/password, and in general all well-formed URLs.</p>
<p><em>(There is a wide assortment of existing posts that try to ask the same question, but none of them ask for (or provide) a solution that fits all of the criteria above.)</em></p>
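For context, here is my current attempt (a sketch only: it handles the cases above as far as I can tell, but I'd like a confirmed-correct idiom):

```python
import urllib.parse

def replace_hostname(url: str, new_host: str) -> str:
    """Return `url` with only the hostname swapped, preserving
    scheme, userinfo, port, path, query and fragment."""
    p = urllib.parse.urlsplit(url)
    # Bracket IPv6 literals as required by RFC 3986.
    host = f"[{new_host}]" if ":" in new_host else new_host
    netloc = host
    if p.port is not None:
        netloc += f":{p.port}"
    # Re-attach userinfo (user[:password]@) if present.
    if "@" in p.netloc:
        netloc = p.netloc.rsplit("@", 1)[0] + "@" + netloc
    return urllib.parse.urlunsplit(p._replace(netloc=netloc))
```

Using `urlsplit` rather than `urlparse` avoids the rarely-wanted `params` field, and `p.port`/`p.netloc` take care of stripping brackets from an original IPv6 host.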
|
<python><urllib><url-parsing>
|
2024-04-16 01:02:21
| 2
| 2,955
|
intelfx
|
78,331,391
| 1,361,802
|
moto upgrade results in error The queue should either have ContentBasedDeduplication enabled or MessageDeduplicationId provided explicitly
|
<p>I upgraded moto from 4.1.x to 4.2.x and now I am hitting this error when I run my pytest unit tests.</p>
<blockquote>
<p>botocore.exceptions.ClientError: An error occurred (InvalidParameterValue) when calling the SendMessage operation: The queue should either have ContentBasedDeduplication enabled or MessageDeduplicationId provided explicitly</p>
</blockquote>
<p>What is going on and how do I fix this?</p>
|
<python><amazon-web-services><boto3><amazon-sqs><moto>
|
2024-04-15 23:11:18
| 1
| 8,643
|
wonton
|
78,331,165
| 1,913,367
|
Setting new values in subset of xarray dataset
|
<p>I have an xarray dataset covering longitudes from 9 to 30 and latitudes from 54 to 66. How do I set all the variables in that dataset within a certain coordinate range to -1?</p>
<p>As soon as I use df.isel, df.iloc, df.sel, or df.loc, I get a subset of the data that is much smaller than the original dataset. I would like to obtain a dataset with the original resolution.</p>
<p>Some small example:</p>
<pre><code>#!/usr/bin/env ipython
import xarray as xr
import numpy as np
# --------------------------
xvals = np.arange(9,30,1);nx = np.size(xvals);
yvals = np.arange(53,66,0.5);ny = np.size(yvals);
# -----------------------------------------------
data_a = np.random.random((ny,nx));
data_b = np.random.random((ny,nx));
somedata = xr.Dataset(data_vars={'data_a':(('latc','lonc'),data_a),'data_b':(('latc','lonc'),data_b)},coords={'lonc':('lonc',xvals),'latc':('latc',yvals)})
# -----------------------------------------------
# How to set all data variables in somedata at coordinate range lonc = 15-20 and latc=59-61 to -1e0 using xarray?
</code></pre>
<p>I can easily use NumPy:</p>
<pre><code># -----------------------------------------------
# NumPy:
xm, ym = np.meshgrid(xvals,yvals);
kk = np.where((xm>15) & (ym>59) & (xm<20) &(ym<61))
data_a[kk] = -1.e0;
data_b[kk] = -1.e0;
somedata['data_a'][:] = data_a;
somedata['data_b'][:] = data_b;
somedata.to_netcdf('test.nc')
# -----------------------------------------------
</code></pre>
<p>but perhaps there is also a nicer way of doing this with xarray for all variables?</p>
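The closest xarray-native approach I found is <code>Dataset.where</code> (assuming I understand its semantics correctly: it keeps values where the condition is True and substitutes elsewhere), but I'd like confirmation that this is the intended idiom:

```python
import numpy as np
import xarray as xr

xvals = np.arange(9, 30, 1)
yvals = np.arange(53, 66, 0.5)
somedata = xr.Dataset(
    data_vars={
        "data_a": (("latc", "lonc"), np.random.random((yvals.size, xvals.size))),
        "data_b": (("latc", "lonc"), np.random.random((yvals.size, xvals.size))),
    },
    coords={"lonc": xvals, "latc": yvals},
)
# The condition is True where values should be KEPT; xarray broadcasts
# the 1-D lonc/latc comparisons to the full 2-D grid automatically.
inside_box = (
    (somedata.lonc > 15) & (somedata.lonc < 20)
    & (somedata.latc > 59) & (somedata.latc < 61)
)
masked = somedata.where(~inside_box, -1.0)  # applies to every data variable
```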
|
<python><python-xarray><netcdf>
|
2024-04-15 21:44:02
| 1
| 2,080
|
msi_gerva
|
78,331,020
| 8,713,442
|
Unable to write data in file
|
<p>I have data stored in one file (tab-delimited):
<code>{'id': '123', 'name': 'pečnostní informační služba'}</code></p>
<p>When I try to read the data and write it to a second file using the Python code below, I get an error:</p>
<pre><code> with open(output_file, 'w') as f_output, open(input_file,encoding = 'utf-8-sig') as f_input:
reader = csv.DictReader(f_input,delimiter='\t')
fieldnames = reader.fieldnames
writer = csv.DictWriter(f_output, fieldnames=fieldnames)
writer.writeheader()
for row in reader:
print(row)
        writer.writerow(row)
</code></pre>
<p>The traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Python_Projects\Python_extra_code\csv_example.py", line 180, in <module>
writer.writerow(row)
File "C:\Dev\Python3.11\Lib\csv.py", line 154, in writerow
return self.writer.writerow(self._dict_to_list(rowdict))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Dev\Python3.11\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\u010d' in position 69: character maps to <undefined>
</code></pre>
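My suspicion is that the output file is opened with Windows' default cp1252 codec; a minimal sketch of the fix I'm considering (explicit <code>encoding='utf-8'</code> on the output handle, plus <code>newline=''</code> as the csv docs recommend):

```python
import csv
import os
import tempfile

row = {"id": "123", "name": "pečnostní informační služba"}

path = os.path.join(tempfile.mkdtemp(), "out.csv")
# An explicit utf-8 encoding avoids the locale-dependent cp1252 default
# on Windows; newline="" prevents the csv module's extra blank lines.
with open(path, "w", encoding="utf-8", newline="") as f_output:
    writer = csv.DictWriter(f_output, fieldnames=["id", "name"], delimiter="\t")
    writer.writeheader()
    writer.writerow(row)

with open(path, encoding="utf-8") as f:
    content = f.read()
```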
|
<python><python-3.x><python-unicode>
|
2024-04-15 21:05:05
| 1
| 464
|
pbh
|
78,330,930
| 1,194,864
|
Disabling fusing attention in ViT models in PyTorch
|
<p>I am inspecting different vision transformer models on <code>PyTorch</code> and I am trying to understand their differences. My models can be seen in the following code:</p>
<pre><code>pretrained_vit_weights = torchvision.models.ViT_B_16_Weights.DEFAULT # requires torchvision >= 0.13, "DEFAULT" means best available
pretrained_vit_1 = torchvision.models.vit_b_16(weights=pretrained_vit_weights).to(device)
pretrained_vit_2 = torch.hub.load('facebookresearch/deit:main', 'deit_tiny_patch16_224', pretrained=True).to(device)
for block in pretrained_vit_2.blocks:
block.attn.fused_attn = False
</code></pre>
<p>This code loads two different <code>ViT</code> models and disables the <code>fused_attn</code> flag for the second model. The models are a bit different, so I am trying to understand what exactly this line (<code>block.attn.fused_attn = False</code>) does, and how I can apply it (or whether it is necessary) in the first model.</p>
<p><strong>Edit</strong>: The idea is that I would like to run the model in an image and return the attention maps and the gradient up to a specific layer (<code>dropout</code> after the self-attention). In the same way as it is explained <a href="https://github.com/jacobgil/vit-explain/blob/main/vit_explain.py" rel="nofollow noreferrer">here</a>. Initially, I tried to use the <code>facebookresearch</code> <code>deit_tiny_patch16_224</code> model, and then I am commenting out the following code lines:</p>
<pre><code>for block in pretrained_vit_2.blocks:
block.attn.fused_attn = False
</code></pre>
<p>When I comment out this code, it results in an empty list of attention maps (for each <code>ViT</code> layer) and an empty gradient (the code uses a hook on the <code>attn_drop</code> module). I am experiencing the same behavior with <code>ViT_B_16_Weights</code>, for which I am not sure how to disable <code>fused_attn</code>; by default the gradient for its <code>dropout</code> layer returns an empty list (while the attention maps are not empty in this case).</p>
<p>Any idea why this happens? Why does commenting out this parameter result in empty gradients/maps?</p>
<p>Secondly, how can I do the same for the <code>ViT_B_16_Weights</code> (that is loaded from torchvision)?</p>
<p>The first model is indeed composed of blocks that contain the variable <code>attn</code> and the boolean flag <code>fused_attn</code>.</p>
<p>The second model is composed of stacked encoder layers (<code>pretrained_vit.encoder.layers</code>), each of which contains a boolean <code>add_zero_attention</code>, and I could not find <code>fused_attn</code> or anything similar. It seems that by default the second model uses <code>scaled_dot_product_attention</code>.</p>
<p><strong>Edit2:</strong></p>
<p>Digging a bit into the implementation for attention in both models, for the case of the <code>ViT_B_16_Weights</code> I have noticed that there is the following comment in the <code>self.attention</code> implementation:</p>
<pre><code>``nn.MultiHeadAttention`` will use the optimized implementations of
``scaled_dot_product_attention()`` when possible.
</code></pre>
<p>It seems that by default this model sets this variable to <code>True</code>. However, I am not sure how to disable and replace that in the code for the second model. Is there a way to disable <code>scaled_dot_product_attention</code> in the same way as in the other model?</p>
<p><strong>Edit 3:</strong> After inspecting <code>ViT_B_16_Weights</code> further, I found in <code>vision_transformer.py</code> a call into <code>activation.py</code>, which in turn calls <code>functional.py</code>, where an if-else statement relates to <code>scaled_dot_product_attention</code>. However, I am not sure how to trigger the other branch for this model in the same way as with <code>deit_tiny_patch16_224</code>. I am not even sure why the different calculations lead to different gradients/attention maps. Also, what does it mean to deactivate this variable when pretraining is set to <code>false</code>? Isn't that a bit weird?</p>
|
<python><pytorch><transformer-model>
|
2024-04-15 20:40:16
| 1
| 5,452
|
Jose Ramon
|
78,330,835
| 10,944,175
|
Extract text from dxf that is not TEXT or MTEXT entity using ezdxf
|
<p>I am trying to get numbers from a specific dxf file using ezdxf.</p>
<p>However, a query on the modelspace for <code>msp.query('TEXT')</code> or <code>msp.query('MTEXT')</code> yields no results; if I iterate over all entities, I can see they are all of type <code>INSERT</code>.</p>
<p>In the dxf file, I can see numbers next to a contour for every insert, and I can copy one insert into a new file, which copies both the contour and the number. However, I cannot figure out where the information about the number is stored.
This is what the dxf file looks like when viewed in LibreCAD:
<a href="https://i.sstatic.net/8KLNP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8KLNP.png" alt="dxf file viewed in LibreCAD" /></a></p>
<p><a href="https://drive.google.com/file/d/1IUMNTMMIUmu1yB9KC1eRVEUzrJ9KF5WR/view?usp=sharing" rel="nofollow noreferrer">Link to dxf file on google drive</a></p>
<p>Does anyone know where in the <code>INSERT</code> type entity the number might be stored? It is not in the <code>entity.dxf.name</code>, which yields a completely different string.</p>
|
<python><ezdxf>
|
2024-04-15 20:11:07
| 2
| 549
|
Freya W
|
78,330,829
| 2,726,900
|
pandera: how to create a DataFrameSchema or DataFrameModel from existing pandas.DataFrame?
|
<p>I have written a data transformation function that takes a <code>pandas.DataFrame</code> as input and returns another <code>pandas.DataFrame</code> as output. I have a reference <code>pandas.DataFrame</code> object on which I have been testing my function.</p>
<p>I want to check input data schema to make sure that the schema of input dataframe is always the same as it was in my reference dataframe.</p>
<p>I've found that there is a library, <code>pandera</code>, intended to automate such checks: just write a <code>DataFrameSchema</code> or <code>DataFrameModel</code> and have fun.</p>
<p>But I don't want to write my <code>DataFrameModel</code> manually. Can I somehow generate it from existing dataframe?</p>
|
<python><pandas><pydantic><pandera>
|
2024-04-15 20:09:32
| 1
| 3,669
|
Felix
|
78,330,672
| 2,475,195
|
Python global counter for multiprocessing pool
|
<p>I want to keep count of how many items have already been processed, but I get an error. Here is my minimal code and the error stack trace.</p>
<pre><code>import multiprocessing
from multiprocessing import Value
from ctypes import c_int
class DemoProcessor(object):
def __init__(self, data_to_process):
self.counter = Value(c_int, 0)
self.data_to_process = data_to_process
def _process_data(self, item):
print (f"Processing {item}")
self.counter.value = self.counter.value + 1
print (f"Processed {self.counter.value}")
def process(self):
pool = multiprocessing.Pool(processes=4)
results_multi = pool.starmap(self._process_data, self.data_to_process),
if __name__ == '__main__':
demo = DemoProcessor(data_to_process=list(range(10)))
demo.process()
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "demo.py", line 23, in <module>
demo.process()
File "demo.py", line 19, in process
results_multi = pool.starmap(self._process_data, self.data_to_process),
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 372, in starmap
return self._map_async(func, iterable, starmapstar, chunksize).get()
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 771, in get
raise self._value
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\pool.py", line 537, in _handle_tasks
put(task)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\connection.py", line 211, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\reduction.py", line 51, in dumps
cls(buf, protocol).dump(obj)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\sharedctypes.py", line 199, in __reduce__
assert_spawning(self)
File "C:\Users\user\AppData\Local\Programs\Python\Python39\lib\multiprocessing\context.py", line 359, in assert_spawning
raise RuntimeError(
RuntimeError: Synchronized objects should only be shared between processes through inheritance
</code></pre>
|
<python><multiprocessing><process-pool>
|
2024-04-15 19:35:03
| 1
| 4,355
|
Baron Yugovich
|
78,330,669
| 726,730
|
QTreeWidget internal move rows (only topLevelItems)
|
<pre class="lang-py prettyprint-override"><code>from PyQt5 import QtCore, QtGui, QtWidgets
class Ui_MainWindow(object):
def setupUi(self, MainWindow):
MainWindow.setObjectName("MainWindow")
MainWindow.resize(800, 600)
self.centralwidget = QtWidgets.QWidget(MainWindow)
self.centralwidget.setObjectName("centralwidget")
self.gridLayout = QtWidgets.QGridLayout(self.centralwidget)
self.gridLayout.setObjectName("gridLayout")
self.treeWidget = QtWidgets.QTreeWidget(self.centralwidget)
self.treeWidget.setAcceptDrops(True)
self.treeWidget.setDragEnabled(True)
self.treeWidget.setDragDropMode(QtWidgets.QAbstractItemView.InternalMove)
self.treeWidget.setObjectName("treeWidget")
item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget)
item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget)
item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget)
item_0 = QtWidgets.QTreeWidgetItem(self.treeWidget)
item_1 = QtWidgets.QTreeWidgetItem(item_0)
item_1 = QtWidgets.QTreeWidgetItem(item_0)
self.gridLayout.addWidget(self.treeWidget, 0, 0, 1, 1)
MainWindow.setCentralWidget(self.centralwidget)
self.retranslateUi(MainWindow)
QtCore.QMetaObject.connectSlotsByName(MainWindow)
def retranslateUi(self, MainWindow):
_translate = QtCore.QCoreApplication.translate
MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
self.treeWidget.headerItem().setText(0, _translate("MainWindow", "Column 1"))
self.treeWidget.headerItem().setText(1, _translate("MainWindow", "Column 2"))
__sortingEnabled = self.treeWidget.isSortingEnabled()
self.treeWidget.setSortingEnabled(False)
self.treeWidget.topLevelItem(0).setText(0, _translate("MainWindow", "Data 1.1"))
self.treeWidget.topLevelItem(0).setText(1, _translate("MainWindow", "Data 1.2"))
self.treeWidget.topLevelItem(1).setText(0, _translate("MainWindow", "Data 2.1"))
self.treeWidget.topLevelItem(1).setText(1, _translate("MainWindow", "Data 2.2"))
self.treeWidget.topLevelItem(2).setText(0, _translate("MainWindow", "Data 3.1"))
self.treeWidget.topLevelItem(2).setText(1, _translate("MainWindow", "Data 3.2"))
self.treeWidget.topLevelItem(3).setText(0, _translate("MainWindow", "Data 4.1"))
self.treeWidget.topLevelItem(3).setText(1, _translate("MainWindow", "Data 4.2"))
self.treeWidget.topLevelItem(3).child(0).setText(0, _translate("MainWindow", "Data 4.1.1"))
self.treeWidget.topLevelItem(3).child(0).setText(1, _translate("MainWindow", "Data 4.1.2"))
self.treeWidget.topLevelItem(3).child(1).setText(0, _translate("MainWindow", "Data 4.2.1"))
self.treeWidget.topLevelItem(3).child(1).setText(1, _translate("MainWindow", "Data 4.2.2"))
self.treeWidget.setSortingEnabled(__sortingEnabled)
if __name__ == "__main__":
import sys
app = QtWidgets.QApplication(sys.argv)
MainWindow = QtWidgets.QMainWindow()
ui = Ui_MainWindow()
ui.setupUi(MainWindow)
MainWindow.show()
sys.exit(app.exec_())
</code></pre>
<p>The above example works: I have a QTreeWidget in which I can drag and drop items internally.</p>
<p>But:</p>
<ol>
<li>I want to drag and drop only top-level items.</li>
<li>Following from 1., I don't want a dropped item to become a child of another QTreeWidgetItem.</li>
<li>I want to be notified somehow when a move is made, so that I can change an appendix index number in one column of the tree widget and save the changes to an sqlite3 db file.</li>
</ol>
<p>Edit: Maybe <a href="https://stackoverflow.com/questions/7666823/reordering-items-in-a-qtreewidget-with-drag-and-drop-in-pyqt/7694011#7694011">this</a> will be helpful</p>
<p>This code seems to work:</p>
<pre class="lang-py prettyprint-override"><code> self.treeWidget.topLevelItem(0).setFlags(self.treeWidget.topLevelItem(0).flags() & ~QtCore.Qt.ItemIsDropEnabled)
self.treeWidget.topLevelItem(1).setFlags(self.treeWidget.topLevelItem(1).flags() & ~QtCore.Qt.ItemIsDropEnabled)
self.treeWidget.topLevelItem(2).setFlags(self.treeWidget.topLevelItem(2).flags() & ~QtCore.Qt.ItemIsDropEnabled)
self.treeWidget.topLevelItem(3).child(0).setFlags(self.treeWidget.topLevelItem(3).child(0).flags() & ~QtCore.Qt.ItemIsDropEnabled & ~QtCore.Qt.ItemIsDragEnabled)
self.treeWidget.topLevelItem(3).child(1).setFlags(self.treeWidget.topLevelItem(3).child(1).flags() & ~QtCore.Qt.ItemIsDropEnabled & ~QtCore.Qt.ItemIsDragEnabled)
</code></pre>
<p>As for 3., maybe a reimplementation of dropEvent is required.</p>
|
<python><pyqt5><drag-and-drop><qtreewidget>
|
2024-04-15 19:33:21
| 0
| 2,427
|
Chris P
|