| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,625,884
| 19,850,415
|
Have quarto find the jupyter kernel within Visual Studio Code
|
<h3>Description</h3>
<p>I am trying to follow this <a href="https://quarto.org/docs/get-started/hello/vscode.html" rel="nofollow noreferrer">vscode tutorial</a> in the quarto.org <a href="https://quarto.org" rel="nofollow noreferrer">site</a>.</p>
<p>The tutorial uses a document <code>hello.qmd</code> whose content can be copied from the same tutorial. We need to edit <code>hello.qmd</code> to indicate the Jupyter kernel we will be using. Let us assume it is <code>my_kernel</code>; we change the header to be:</p>
<pre><code>---
title: "Quarto Basics"
format:
  html:
    code-fold: true
jupyter: my_kernel
---
<p>Now, the trick is to have quarto find the kernel <code>my_kernel</code>. If I just try to preview the document (with the quarto extension installed as indicated in the tutorial), I get the following type of error:</p>
<pre><code>ERROR: Jupyter kernel my_kernel not found. Known kernels: python3. Run 'quarto check jupyter' with your python environment activated to check python version used.
</code></pre>
<p>Then if I run <code>quarto check jupyter</code> I get:</p>
<pre><code>Jupyter is not available in this Python installation.
</code></pre>
<p>The issue seems to be that <code>quarto preview</code> is run within the <code>base</code> conda environment. In the mentioned tutorial, we are instructed to install Jupyter in this base environment. However, I don't want to install anything in that base environment (I never do). Does anyone know of an easy way to indicate the conda environment to be used, i.e. the one associated with the kernel that runs the code in the qmd file?</p>
<p>I know the question might be more related with VS Code settings, but any help is appreciated. Thanks!</p>
<p>[<strong>Note</strong>: In my case, <strong>I am running VS Code from within WSL</strong> (by using the remote VSCode connection <code>WSL:Ubuntu</code>)].</p>
|
<python><jupyter-notebook><quarto>
|
2024-06-15 07:16:47
| 1
| 507
|
Jau A
|
78,625,785
| 2,672,788
|
tracker.init throws Unknown C++ exception from OpenCV
|
<p>I wrote a program to test OpenCV <code>trackers</code> but I got an error. Here is my <code>python</code> code using <code>opencv 4.10.0</code>:</p>
<pre><code>import cv2

video_path = '.../v1.mp4'
video = cv2.VideoCapture(video_path)
tracker = cv2.TrackerDaSiamRPN()

if not video.isOpened():
    print("Could not open video")

# Read first frame.
ok, frame = video.read()
if not ok:
    print('Cannot read video file')

frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
roi = cv2.selectROI(frame, True)
tracker.init(frame, roi)
</code></pre>
<p>and this is the error message:</p>
<blockquote>
<p>File "...\opencv_trackers.py", line 19, in
tracker.init(frame, roi) cv2.error: Unknown C++ exception from OpenCV code</p>
</blockquote>
|
<python><opencv><video-tracking>
|
2024-06-15 06:21:54
| 1
| 2,958
|
Babak.Abad
|
78,625,485
| 9,571,463
|
How to handle Async Appends to CSV File Without Corruption
|
<p>I have a large number of asyncio tasks that are consuming data via a queue and writing to separate files. However, some of the files will be written to multiple times via mode <code>a+</code>. I have written some code to simulate some random processing, similar to my real-world example.</p>
<p>I am using <code>asyncio.Lock()</code> in the following fashion to protect the file from whatever task takes ownership of writing to it, but am still receiving CSV results that are misaligned and/or corrupted. Also, the header seems to be getting written multiple times even though the size of the file shouldn't be 0 after the header is first written.</p>
<p>What am I missing?</p>
<pre><code>import asyncio
import aiofiles
import aiofiles.os
import aiocsv
import uuid
import random
import json
from pathlib import Path
from datetime import datetime, timezone

async def write_csv(item: list, load_id: str, prefix: str) -> None:
    Path("./test_files").mkdir(parents=True, exist_ok=True)
    file_path = Path("./test_files").joinpath(f"{prefix}_{load_id}.csv")
    # Asynchronously write to our file
    async with aiofiles.open(file_path, mode="a+", newline="") as f:
        print(f"INFO: writing file: {Path(file_path).resolve()}")
        w: aiocsv.AsyncWriter = aiocsv.AsyncWriter(f)
        print(f"file size: {await aiofiles.os.path.getsize(file_path)}")
        # If the file is empty, write the header
        if await aiofiles.os.path.getsize(file_path) == 0:
            print("file was empty! writing header")
            # Write the header
            async with asyncio.Lock():
                await w.writerow([
                    "response",
                    "load_id",
                    "last_updated_timestamp_utc"
                ])
        # do something special for specific file name
        # I am just trying to simulate more random data processing
        if prefix == "file_one":
            # Shuffle the chunks again
            item = random.shuffle(item)
        # Write the data
        for chunk in item:
            async with asyncio.Lock():
                await w.writerow([
                    json.dumps(chunk),
                    load_id,
                    datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
                ])

async def main() -> None:
    # Create fake data
    items: list[str] = [["hello", "world"], ["asyncio", "test"]] * 500
    # Possible file prefixes
    prefixes: list[str] = ["file_one", "file_two"]
    tasks: list = []
    load_id = str(uuid.uuid4())
    for i in items:
        # Randomly assign which file we will write to
        task = asyncio.create_task(write_csv(i, load_id, random.choice(prefixes)))
        tasks.append(task)
    errors = await asyncio.gather(*tasks, return_exceptions=True)
    # print(errors)

if __name__ == "__main__":
    loop = asyncio.new_event_loop()
    loop.run_until_complete(main())
</code></pre>
|
<python><csv><locking><python-asyncio><python-aiofiles>
|
2024-06-15 02:25:48
| 1
| 1,767
|
Coldchain9
|
78,625,360
| 1,609,514
|
How to include a modulus operation in CasADi function
|
<p>I can't find a modulus function in CasADi. I want to build a function with something like this:</p>
<pre class="lang-python prettyprint-override"><code>from casadi import MX
angle = MX.sym('angle')
wrapped_angle = ((angle + 180) % 360) - 180
</code></pre>
<pre class="lang-none prettyprint-override"><code>TypeError: unsupported operand type(s) for %: 'MX' and 'int'
</code></pre>
<p>There doesn't appear to be a <code>mod</code> or <code>divmod</code> function as far as I can see.</p>
<p>Maybe it's not implemented for a reason, but if so I would like to know that.</p>
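For reference, the wrapping arithmetic itself can be written with a floating-point modulus; CasADi documents an `fmod` operation in its C++ API, so `casadi.fmod(angle + 180, 360) - 180` is the presumed symbolic equivalent (that exact Python spelling is an assumption here). A dependency-free sketch of the same formula with the stdlib, noting that `math.fmod` keeps the sign of the dividend:

```python
import math

def wrap_angle(angle: float) -> float:
    # Wrap any angle in degrees into [-180, 180).
    # The presumed CasADi analogue (assumed API, not verified) would be:
    #   wrapped = casadi.fmod(angle + 180, 360) - 180
    # math.fmod keeps the sign of its first argument, so fold negative
    # remainders back into [0, 360) before shifting.
    r = math.fmod(angle + 180.0, 360.0)
    if r < 0:
        r += 360.0
    return r - 180.0

print(wrap_angle(190.0))   # -170.0
print(wrap_angle(-190.0))  # 170.0
```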
|
<python><modulo><casadi>
|
2024-06-15 00:25:07
| 2
| 11,755
|
Bill
|
78,625,271
| 22,673,230
|
Array of strings sorted with less specific match first
|
<p>I have a list of files named like:</p>
<pre><code>file1.jpg
file2.jpg
.
.
.
</code></pre>
<p>where occasionally there is a duplicate such that the correct order should be:</p>
<pre><code>fileN.jpg
fileN 1.jpg
fileN 2.jpg
</code></pre>
<p><em>not</em>:</p>
<pre><code>fileN 1.jpg
fileN 2.jpg
fileN.jpg
</code></pre>
<p>Note that in my use case, the string <code>fileN</code> can contain special characters (including spaces). With the following python script, I get almost what I want, but with the incorrect ordering for the special duplicate case (i.e., the latter example above instead of the one before it).</p>
<pre><code>import os

files = os.listdir(os.path.curdir)
files.sort()
for file in files:
    print(file)
</code></pre>
<p>How do I get the correct ordering, as shown in my 2nd code snippet? (I'm sure there is a more elegant CS name for the sorting I am looking for - that terminology would also be appreciated. I imagine something like this question has been asked before, but I am having trouble formulating the right query.)</p>
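One way to express the desired order is a sort key that splits each name into a base and an optional trailing copy number, so a name with no number sorts before its " 1", " 2" duplicates. This is close to what is usually called a "natural sort". A sketch (the helper name `copy_key` and the regex are mine, assuming the pattern is always `base`, optionally followed by a space and digits, then an extension):

```python
import re

def copy_key(name: str):
    # Split "base N.ext" into (base, copy_number). A name with no trailing
    # number gets copy number 0, so "fileN.jpg" sorts before "fileN 1.jpg".
    m = re.match(r"^(.*?)(?: (\d+))?(\.[^.]+)?$", name)
    base, num, _ext = m.groups()
    return (base, int(num) if num else 0)

files = ["fileN 1.jpg", "fileN 2.jpg", "fileN.jpg", "file1.jpg"]
print(sorted(files, key=copy_key))
```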
|
<python><sorting>
|
2024-06-14 23:19:51
| 2
| 316
|
shea
|
78,625,174
| 2,977,256
|
Jax random matrices running out of memory
|
<p>I want to create an Nxk matrix in jax, where each row has k DISTINCT integer entries each between 0 and N-1 (so, in range(N)), and this was my attempt:</p>
<pre><code>def generate_row(N, k, key):
    return jax.random.choice(key, N, (k,), replace=False)

generate_rows = jax.vmap(generate_row, in_axes=(None, None, 0))

def generate_mat(N, k, key=jax.random.PRNGKey(0)):
    keys = jax.random.split(key, N)
    return generate_rows(N, k, keys)
</code></pre>
<p>Now, when I use this for small arrays, it works wonderfully, however, for N = 50000, k=20 it blows out of memory, which confuses me greatly. What is taking up all of the space? And how do I do this right?</p>
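A plausible cause (hedged, not profiled here): sampling without replacement typically works by ranking all N candidates and taking the top k, so each of the N vmapped `jax.random.choice(key, N, (k,), replace=False)` calls touches on the order of N values. Vmapping that over N rows materializes roughly N×N intermediates, about 2.5×10⁹ values for N = 50000. For contrast, a dependency-free O(N·k) sketch with the stdlib (row-at-a-time sampling, no per-row ranking of all N candidates):

```python
import random

def generate_mat(N: int, k: int, seed: int = 0) -> list[list[int]]:
    # O(N*k) memory: draw k distinct integers from range(N) per row,
    # instead of ranking all N candidates for every row at once.
    rng = random.Random(seed)
    return [rng.sample(range(N), k) for _ in range(N)]

mat = generate_mat(1000, 20)
```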
|
<python><random><jax>
|
2024-06-14 22:26:53
| 2
| 4,872
|
Igor Rivin
|
78,625,150
| 6,404,949
|
How to convert non-JSON data into a specific format in Python?
|
<p>I have an input file that contains data like this:</p>
<pre><code>{
0/2 = [
"test_server_101:99904"
,"test_server_103:99907"
,"test_server_106:99906"
];
1/2 = [
"test_server_203:99906"
,"test_server_303:99902"
,"test_server_403:99906"
];
}
</code></pre>
<p>And I am trying to convert this into --</p>
<pre><code>{"test_server_101:99904": '0' ,"test_server_103:99907": '0' ,"test_server_106:99906": '0' "test_server_203:99906": '1' ,"test_server_303:99902": '1' ,"test_server_403:99906": '1' }
</code></pre>
<p>As I can't control the input, and this is not proper JSON or a proper dictionary (the keys don't have single or double quotes), it is difficult to process the input data. So far I have tried the code below, but it is not working as per my requirement. Any suggestions?</p>
<pre><code>server_data = {}
with open('file.txt', 'r') as f:
    lines = f.readlines()
    print(lines)
for line in lines:
    parts = line.strip().split(" = ")
    print(parts)
    value_str = value_str.strip()[1:]
    for server in server_list:
        server_name, *port = server.strip().split(":")
        server_data[server_name] = key
print(server_data)
</code></pre>
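Since the input isn't JSON, one option is a small regex pass: capture the leading index of each `N/2 = [ ... ];` block, then every quoted `server:port` string inside it. A sketch, with the sample from the question pasted inline (assuming each block always looks like `digits/digits = [ ... ];`):

```python
import re

raw = """{
0/2 = [
"test_server_101:99904"
,"test_server_103:99907"
,"test_server_106:99906"
];
1/2 = [
"test_server_203:99906"
,"test_server_303:99902"
,"test_server_403:99906"
];
}"""

result = {}
# Each block looks like `N/2 = [ ... ];` -- capture N and the bracket body.
for key, body in re.findall(r'(\d+)/\d+\s*=\s*\[(.*?)\];', raw, flags=re.DOTALL):
    # Pull every quoted "server:port" string out of the body.
    for server in re.findall(r'"([^"]+)"', body):
        result[server] = key

print(result)
```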
|
<python><python-3.x>
|
2024-06-14 22:14:05
| 1
| 3,157
|
VIPIN KUMAR
|
78,625,002
| 1,479,670
|
Strange behaviour of the in-operator on NumPy arrays
|
<p>I don't understand this behavior of the <code>in</code> operator on NumPy arrays:</p>
<pre><code>>>> a=[0,1,2]
>>> x=np.array([a])
>>> b=[3,4,5]
>>> np.append(x,[b],axis=0)
array([[0, 1, 2],
[3, 4, 5]])
>>> a in x
True
>>> b in x
False
</code></pre>
<p>I want to build a list of distinct vectors, so I need the <code>in</code> operator and a way to extend my list of vectors such that an appended element is also seen as being <strong>in</strong> the list. I am using a NumPy array, but the <code>in</code> operator doesn't work the way I expect.</p>
<ul>
<li>In the above example, why is <code>b</code> not seen as an element of <code>x</code>?</li>
<li>How can I append a NumPy array to a NumPy array of NumPy arrays so that the <code>in</code> operator works as expected?</li>
<li>Is there a better way to store vectors (i.e. NumPy array of size 3) so I can add new vectors, and such that the <code>in</code> operator sees newly added vectors as elements of the structure?</li>
</ul>
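Two separate things seem to be going on in the transcript. First, `np.append` returns a new array, and its result is never assigned, so `x` still contains only `a`. Second, `in` on a 2-D NumPy array is not an exact row-membership test; the usual NumPy idiom for that is `(x == b).all(axis=1).any()`. For the "store distinct vectors" goal, a dependency-free alternative is a set of tuples, sketched here:

```python
# Sketch: keep each vector as a tuple in a set. Tuples are hashable, so `in`
# on the set is an exact, element-wise membership test with O(1) average
# lookup, and duplicates are rejected for free.
vectors: set[tuple[int, ...]] = set()

def add_vector(v) -> None:
    vectors.add(tuple(v))

def contains(v) -> bool:
    return tuple(v) in vectors

add_vector([0, 1, 2])
add_vector([3, 4, 5])
print(contains([3, 4, 5]))  # True
print(contains([3, 4, 6]))  # False
```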
|
<python><numpy>
|
2024-06-14 21:13:34
| 4
| 1,355
|
user1479670
|
78,624,883
| 6,606,057
|
Matplotlib annotations do not appear at the proper distance in graph
|
<p>I have a subset of countries where I am comparing GDP per person and geographic latitude. I've developed the code below:</p>
<pre><code>gdp_ppp_col = "gdp_ppp"
lat_col = "lat"

country_sub.plot(kind='scatter', figsize=(6, 4), grid=True, x=lat_col, y=gdp_ppp_col)

position_text = {
    "Israel": (31.04605, 57_758),
    "United States": (37.09024, 75_269),
    "Spain": (40.463667, 29_385),
    "Bulgaria": (42.733883, 13_129),
    "Germany": (51.165691, 48_845),
    "Ireland": (53.41291, 105_362),
    "Iceland": (64.963051, 74_663)
}

for country, pos_text in position_text.items():
    pos_data_x = country_sub[lat_col].loc[country]
    pos_data_y = country_sub[gdp_ppp_col].loc[country]
    country = "U.S." if country == "United States" else country
    plt.annotate(country,
                 xy=(pos_data_x, pos_data_y),
                 xytext=pos_text,
                 fontsize=12,
                 arrowprops=dict(facecolor='black', arrowstyle='->')
                 )
    plt.plot(pos_data_x, pos_data_y, "ro")

plt.axis([25, 70, -5000, 125000])
plt.show()
</code></pre>
<p>However this results in the following graph:</p>
<p><a href="https://i.sstatic.net/3sqYA1lD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3sqYA1lD.png" alt="enter image description here" /></a></p>
<p>Is there a way to move the country annotations to the tail ends of the arrows?</p>
|
<python><python-3.x><matplotlib><scatter-plot>
|
2024-06-14 20:30:29
| 1
| 485
|
Englishman Bob
|
78,624,817
| 9,669,142
|
Plot points on offline map in Python
|
<p>I'm using Spyder and have a couple of coordinates that I want to plot on a world map. Currently I use Folium, save data as an HTML-file and then open it in a webbrowser. My question is: is it possible to plot coordinates on a map without the use of the webbrowser? Or anything external for that matter? I was hoping there would be a package that enables me to plot everything and see it in Spyder itself.</p>
<p>Does anyone have an idea?</p>
|
<python><folium>
|
2024-06-14 20:07:30
| 1
| 567
|
Fish1996
|
78,624,582
| 1,021,259
|
How to use Django generated fields to get a single value from a many-to-many relationship?
|
<p>I have a Django model representing a Book, which can have many different types of Person, including AUTHOR, EDITOR, ILLUSTRATOR, etc.</p>
<pre><code>class PersonType(models.Model):
    name = models.CharField()

class Person(models.Model):
    name = models.TextField()

class Book(models.Model):
    title = models.TextField()
    people = models.ManyToManyField(Person, through="PersonBook")

class PersonBook(models.Model):
    person_type = models.ForeignKey(PersonType, on_delete=models.CASCADE)
    person = models.ForeignKey(Person, on_delete=models.CASCADE)
    book = models.ForeignKey(Book, on_delete=models.CASCADE)
</code></pre>
<p>I was hoping to use Django's new GeneratedFields to create a property on <code>Book</code> named <code>author</code> that could query the many-to-many relationship and set the value equal to the first <code>Person</code> with the "AUTHOR" <code>PersonType</code>. I'm certain there are other ways of doing this with Managers and whatnot, but I'm curious as to how I can do this with Django's new GeneratedFields, if that's even possible.</p>
<p>The documentation on Django's <a href="https://docs.djangoproject.com/en/5.0/ref/models/expressions/#query-expressions" rel="nofollow noreferrer">Query expressions</a> doesn't show any examples along relationships, so I'm not sure if it's possible to do anything there.</p>
|
<python><django><django-models>
|
2024-06-14 18:57:21
| 1
| 1,972
|
Rob Rose
|
78,624,534
| 16,569,183
|
3D plot labels exceed subplot limits - Matplotlib
|
<p>For 2D plots, using <code>constrained</code> or <code>tight</code> layout is generally enough to adjust the size of the axes to fit labels, but none of the default layouts manages to do it for small 3D plots like the one shown below. I've been trying to read about transforms to see if I could manually scale down the axes, but I have never used them before and am struggling to understand how to apply them.</p>
<p>Is there a way to reduce the size of a 3D plot to prevent its labels from exceeding the limits of the subplot?</p>
<p>MWE</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

if __name__ == "__main__":
    x = np.random.uniform(0, 1, 100)
    y = np.random.uniform(0, 1, 100)
    z = np.random.uniform(0, 1, 100)

    fig = plt.figure(facecolor="green", layout="constrained", figsize=(3, 3))
    ax = fig.add_subplot(111, projection="3d", proj_type="ortho")
    ax.scatter(x, y, z)
    plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/wiWVUuIY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wiWVUuIY.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-06-14 18:41:00
| 1
| 313
|
alfonsoSR
|
78,624,500
| 2,920,226
|
Optimizing pandera polars validation check function
|
<p>I'm testing out switching to polars from pandas, and running into performance issues I wasn't expecting. Hoping this is just an issue of not knowing the really optimized lazyframe way of validating data.</p>
<p>Here is one of the checks I'm noticing relatively significant differences between polars checks and pandas:</p>
<pre><code>def has_no_conditional_field_conflict(
    grouped_data: pa.PolarsData,
    condition_values: set[str] = {"977"},
    groupby_fields: str = "",
    separator: str = ";",
) -> pl.LazyFrame:
    start = datetime.now()
    lf = grouped_data.lazyframe
    check_col = pl.col(groupby_fields).str.split(separator)
    val_col = pl.col(grouped_data.key).str.strip_chars()
    check_results = (
        (check_col.map_elements(lambda arr: set(arr).isdisjoint(condition_values), return_dtype=pl.Boolean)) &
        (val_col == "")
    ) | (
        (check_col.map_elements(lambda arr: not set(arr).isdisjoint(condition_values), return_dtype=pl.Boolean)) &
        (val_col != "")
    )
    rf = lf.with_columns(check_results.alias("check_results")).select("check_results").collect()
    print(f"Processing of has_no_conditional_field_conflict took {(datetime.now() - start).total_seconds()} seconds")
    return rf.lazy()
</code></pre>
<p>In polars, this function is taking on average ~0.1 seconds, and is used for many fields (called 41 times during a validation run). The overall validation time for 10,000 entries is taking about 8.5 seconds. If I remove the .collect() and just pass back the lazyframe with expressions, the total processing of the function itself is about 0.0007 seconds, but then the overall validation run takes about 13 seconds.</p>
<p>When running in pandas using groupby and iterating over the groupby data (which with pandera pandas checks provides a dict[value, series]) over the same data set I see check function times of 0.008 seconds and an overall validation of 6 seconds.</p>
<p>Is there a more optimized way that I can use polars in this check? I know map_elements is generally not favored over large dataframes but I haven't been able to figure out a better way to achieve what the check needs. I've been able to make significant improvements in other places, but this thing seems to be my current bottleneck.</p>
<p><strong>Update</strong>: The purpose of this is to check if Field A's (check_col) value is within a condition_values set. If Field A is in the set, then Field B (val_col) must not be blank. If Field A is not in the set (hence the isdisjoint), then Field B must be blank. Field A can either be a single value or a semicolon separated string of values. So for example Field A might be 900;977. Condition_value defaults to {977} but has the potential to be a set of values.</p>
|
<python><python-polars><pandera>
|
2024-06-14 18:33:50
| 2
| 443
|
Cthulhujr
|
78,624,498
| 5,924,264
|
How to check that all entries of a timestamp column in dataframe has UTC timezone?
|
<p>I have a dataframe with a timestamp column. I want to assert that all entries for this column have UTC time stamp.</p>
<p>I tried</p>
<pre><code>import pandas as pd

# Example DataFrame with a deal_timestamp column
data = {
    'ts': [
        pd.Timestamp('2024-06-11 12:00:00', tz='UTC'),
        pd.Timestamp('2024-06-11 13:00:00', tz='UTC'),
        pd.Timestamp('2024-06-11 14:00:00', tz='US/Pacific')
    ],
}
df = pd.DataFrame(data)

all_utc = df['ts'].dt.tz == 'UTC'
assert all(all_utc), f"Some timestamps are not in UTC: {df[~all_utc]}"
</code></pre>
<p>but I get the error</p>
<blockquote>
<p>ValueError: Tz-aware datetime.datetime cannot be converted to datetime64 unless utc=True</p>
</blockquote>
<p>I know there are some non-vectorized ways to do this, but I feel there should be a vectorized way to do what I need.</p>
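For context, a column with mixed timezones is stored as plain objects rather than a single tz-aware datetime dtype, so a whole-column `.dt.tz` access cannot work; the check has to look at each timestamp's own offset. A dependency-free sketch of that per-element test with stdlib datetimes (with pandas, the analogue would presumably be mapping the same predicate over the column):

```python
from datetime import datetime, timedelta, timezone

def is_utc(ts: datetime) -> bool:
    # A timestamp "is UTC" if it is tz-aware and its UTC offset is zero.
    return ts.tzinfo is not None and ts.utcoffset() == timedelta(0)

stamps = [
    datetime(2024, 6, 11, 12, 0, tzinfo=timezone.utc),
    datetime(2024, 6, 11, 13, 0, tzinfo=timezone.utc),
    # Fixed -07:00 offset, standing in for US/Pacific in summer.
    datetime(2024, 6, 11, 14, 0, tzinfo=timezone(timedelta(hours=-7))),
]
flags = [is_utc(t) for t in stamps]
print(flags)  # [True, True, False]
```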
|
<python><pandas><dataframe>
|
2024-06-14 18:33:28
| 1
| 2,502
|
roulette01
|
78,624,467
| 903,188
|
How do I force a Mako function to require named arguments?
|
<p>With Python, you can require arguments to be named when a function is called, e.g.</p>
<pre><code>def myfunc (a, b, *, c, d):
    print(f"arguments c and d must be named arguments")

myfunc(1,2,3,4)
>>> TypeError: myfunc() takes 2 positional arguments but 4 were given
</code></pre>
<p>That same syntax doesn't seem to work for me when defining a Mako function:</p>
<pre><code>from mako.template import Template
s = """\
<%def name="myfunc (a, b, *, c, d)">
I want to require that arguments c=${c} and d=${d} are named
</%def>
${myfunc(1,2,3,4)}
"""
print(Template(s).render())
>>> I want to require that arguments c=3 and d=4 are named
</code></pre>
<p>The <code>*</code> doesn't cause a syntax error, so it appears that Mako is just ignoring it. Am I doing something wrong or is there another way to accomplish the forcing of named arguments?</p>
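Since Mako compiles the `<%def>` signature itself, the bare `*` may simply not survive that translation (an assumption; not verified against Mako's compiler here). A sketch of what can be enforced in plain Python regardless of signature handling: the baseline keyword-only behavior, plus a manual guard that accepts the values only via `**kwargs`, which would force callers to name them even if `*` is dropped:

```python
# Plain-Python baseline: everything after a bare * is keyword-only.
def myfunc(a, b, *, c, d):
    return f"c={c} d={d}"

try:
    myfunc(1, 2, 3, 4)           # positional c and d
    positional_ok = True
except TypeError:
    positional_ok = False        # plain Python rejects the call

# A guard that survives signature rewriting: take c and d only via **kwargs,
# so callers must name them. (Whether this helps inside a <%def> depends on
# how Mako translates the signature -- an assumption, not verified.)
def myfunc_named_only(a, b, **kwargs):
    try:
        c, d = kwargs.pop("c"), kwargs.pop("d")
    except KeyError as e:
        raise TypeError(f"argument {e} must be passed by name") from None
    if kwargs:
        raise TypeError(f"unexpected keyword arguments: {sorted(kwargs)}")
    return f"c={c} d={d}"
```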
|
<python><mako>
|
2024-06-14 18:27:05
| 2
| 940
|
Craig
|
78,624,364
| 1,956,082
|
Which PCA results are correct?
|
<p>I aim to find which directions in my data "vary greatly". To do that, I am using a method called PCA, which uses the eigenvectors of the covariance matrix to find them.</p>
<p>I used different implementations to calculate the principal component analysis.</p>
<pre class="lang-py prettyprint-override"><code>matrix = np.array([
[-0.5, 0.5],
[1.5, 1.5]
])
covariance_matrix = np.cov(matrix[:,0], matrix[:,1])
eigen_values, eigen_vectors = np.linalg.eig(covariance_matrix)
print("eigen_vectors: ", eigen_vectors)
print("Eigen values: ", eigen_values)
</code></pre>
<p>The output is</p>
<pre><code>eigen_vectors: [[ 0.89442719 -0.4472136 ]
[ 0.4472136 0.89442719]]
Eigen values: [2.5 0. ]
</code></pre>
<p>I used also a <a href="https://www.npmjs.com/package/pca-js" rel="nofollow noreferrer">javascript library</a> to calculate it:</p>
<pre class="lang-js prettyprint-override"><code>const output = PCA.getEigenVectors([
[-0.5, 0.5],
[1.5, 1.5]
])
console.log(output)
</code></pre>
<p>which output is</p>
<pre><code>[
{
eigenvalue: 1.2499999999999998,
vector: [ 0.8944271909999157, 0.4472135954999579 ]
},
{
eigenvalue: 0,
vector: [ 0.4472135954999579, -0.8944271909999159 ]
}
]
</code></pre>
<p>While I expect slightly different decimals due to internal language implementations, I also get different signs and different eigenvalues.</p>
<p>Why does this happen? Which results are correct (or more accurate) for my case and why?
Or are both results valid, and can I use them in my case?</p>
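Both outputs are in fact consistent up to two conventions. Eigenvectors are only defined up to sign (if v is an eigenvector, so is -v), which explains the sign flips; note the principal direction (0.894, 0.447) agrees between the two. And the eigenvalue gap (2.5 vs 1.25) is exactly the covariance normalization: `np.cov` divides by n-1 by default, while the JS library appears to divide by n, and with n = 2 samples that is a factor of 2. A dependency-free check of the 2×2 case:

```python
# The two sample points from the question.
pts = [(-0.5, 0.5), (1.5, 1.5)]
n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n

# Sums of squared/cross deviations from the mean.
sxx = sum((p[0] - mx) ** 2 for p in pts)
sxy = sum((p[0] - mx) * (p[1] - my) for p in pts)
syy = sum((p[1] - my) ** 2 for p in pts)

def eigenvalues_2x2(a, b, c):
    # Eigenvalues of the symmetric matrix [[a, b], [b, c]] via the
    # characteristic polynomial: lambda = tr/2 +/- sqrt(tr^2/4 - det).
    tr, det = a + c, a * c - b * b
    disc = (tr * tr / 4 - det) ** 0.5
    return tr / 2 + disc, tr / 2 - disc

# Sample covariance (divide by n-1, np.cov's default) ...
lam_sample = eigenvalues_2x2(sxx / (n - 1), sxy / (n - 1), syy / (n - 1))
# ... vs population covariance (divide by n, what the JS library appears to use).
lam_population = eigenvalues_2x2(sxx / n, sxy / n, syy / n)

print(lam_sample)      # (2.5, 0.0)  -- matches NumPy
print(lam_population)  # (1.25, 0.0) -- matches the JS library
```

For finding the directions of greatest variance, either convention works: the eigenvectors are identical and the eigenvalues differ only by the constant factor n/(n-1), so their ordering and ratios are unchanged.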
|
<javascript><python><matrix><pca><eigenvalue>
|
2024-06-14 17:57:58
| 1
| 1,112
|
allevo
|
78,624,269
| 6,610,176
|
Is there a better way to cleanup text files with Python, using regex?
|
<p>I'm trying to create a script to match regex patterns in a series of text files, and then remove those matches from the files. Right now, I have the following, which works for my purposes, but I don't think it is an effective way to do it:</p>
<pre><code>import os
import re

os.chdir("/home/user1/test_files")

patterns = ['(bannana)',
            '(peaches)',
            '(apples)'
            ]
subst = ""
cwd = os.getcwd()

for filename in os.listdir(cwd):
    with open(filename, 'r', encoding="utf8") as f:
        file = f.read()
        result = re.sub('|'.join(patterns), subst, file, re.MULTILINE)
    with open("/home/user1/output_files/" + "output_" + str(filename), 'w', encoding="utf-8") as newfile:
        newfile.write(result)
    for pattern in patterns:
        with open('/home/user1/output_files/output_'+str(filename), 'r', encoding="utf8") as f:
            file = f.read()
            result = re.sub(pattern, subst, file, re.MULTILINE)
        with open('/home/user1/output_files/output_'+str(filename), 'w', encoding="utf-8") as newfile:
            newfile.write(result)
</code></pre>
<p>So, let's say I have a file, grocery.txt, and I want to remove the words <code>apples</code>, <code>peaches</code>, and <code>bannana</code>. The above script will first run through and create an output file, output_grocery.txt. It will then iterate through the patterns list, removing each pattern from output_grocery.txt and rewriting it after each pass.</p>
<p>The way I'm doing this right now is not scalable. I'll eventually need to run this on hundreds of files, each one being rewritten again and again depending on how many regex patterns I have. I originally tried doing this in one go, using:</p>
<pre><code>result = re.sub('|'.join(patterns), subst, file, re.MULTILINE)
</code></pre>
<p>thinking that would remove all the patterns in one go from the file. However, this only removes the first pattern, in this case bannana.</p>
<p>Is there a better, more scalable way to do this?</p>
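The likely culprit in the one-pass attempt: `re.sub`'s fourth positional parameter is `count`, not `flags`. `re.MULTILINE` has the integer value 8, so `re.sub('|'.join(patterns), subst, file, re.MULTILINE)` replaces at most 8 matches, which looks a lot like "only the first pattern" being removed. Passing `flags=` by keyword removes every match in a single pass over the file:

```python
import re

patterns = ['(bannana)', '(peaches)', '(apples)']
text = "apples peaches bannana " * 10  # 30 matches in total

# re.sub(pattern, repl, string, count, flags): the 4th positional slot is
# `count`, and re.MULTILINE == 8, so this call stops after 8 replacements.
capped = re.sub('|'.join(patterns), '', text, re.MULTILINE)

# Passing flags by keyword removes every match in one pass.
cleaned = re.sub('|'.join(patterns), '', text, flags=re.MULTILINE)

print(capped.count('apples'), cleaned.count('apples'))  # 7 0
```

With that fixed, the per-pattern rewrite loop can be dropped entirely: one combined `re.sub` per file is both correct and scales linearly with the number of files.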
|
<python><regex>
|
2024-06-14 17:29:13
| 2
| 459
|
SVill
|
78,624,158
| 11,028,689
|
ARMA model function for future unseen data with start and end dates?
|
<p>I have a dataframe like this</p>
<pre><code>lstvals = [30.81,27.16,82.15,31.00,9.13,11.77,25.58,7.57,7.98,7.98]
lstdates = ['2021-01-01', '2021-01-05', '2021-01-09', '2021-01-13', '2021-01-17', '2021-01-21', '2021-01-25', '2021-01-29', '2021-02-02', '2021-02-06']

data = {
    "Dates": lstdates,
    "Market Value": lstvals
}
df = pd.DataFrame(data)
df.set_index('Dates', inplace = True)
df
</code></pre>
<p>I want to forecast values which are out of this sample, for example from '2021-02-10' to '2022-04-23' (in my dataset, I have data from '2021-01-01' to '2023-11-09', and want to forecast the next year, from '2024-01-01' to '2024-11-09').</p>
<p><a href="https://www.statsmodels.org/devel/examples/notebooks/generated/statespace_forecasting.html" rel="nofollow noreferrer">https://www.statsmodels.org/devel/examples/notebooks/generated/statespace_forecasting.html</a></p>
<p>I have defined and fitted my model as follows, which predicts the test data:</p>
<pre><code>
train = df['Market Value'].iloc[:1187]
test = df['Market Value'].iloc[-200:]
...
ARMAmodel = SARIMAX(y, order = (2,1,2))
ARMAResults = ARMAmodel.fit()
...
y_pred = ARMAResults.get_forecast(len(test.index))
y_pred_df = y_pred.conf_int(alpha = 0.05)
y_pred_df["Predictions"] = ARMAResults.predict(start = y_pred_df.index[0], end = y_pred_df.index[-1])
y_pred_df.index = test.index
y_pred_out = y_pred_df["Predictions"]
...
plt.plot(train, color = "black")
plt.plot(test, color = "red")
plt.ylabel('Market Value ($M)')
plt.xlabel('Date')
plt.xticks(rotation=45)
plt.title("Train/Test/Prediction for Market Data")
plt.plot(y_pred_out, color='green', label = 'Predictions')
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/lGNsckQ9.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lGNsckQ9.jpg" alt="enter image description here" /></a></p>
<p>How can I make predictions for future dates?</p>
<p>I have just tried to input future dates with the forecast method, and apparently, it is not working for me</p>
<pre><code>ARMAResults.forecast(start = '2024-01-01', end = '2024-11-09')
TypeError: statsmodels.tsa.statespace.mlemodel.MLEResults.predict() got multiple values for keyword argument 'start'
</code></pre>
|
<python><pandas><statsmodels>
|
2024-06-14 16:54:25
| 1
| 1,299
|
Bluetail
|
78,624,141
| 3,120,501
|
Understanding partial derivative error in Dymos
|
<p>I've built a dynamics model in Dymos, using Jax to calculate the partial derivatives using auto-differentiation. The code looks something like the following:</p>
<pre><code>import openmdao.api as om
import dymos as dm
import jax
import jax.numpy as jnp
import numpy as np
from functools import partial
</code></pre>
<pre><code># Define dynamics
class Dynamics(om.ExplicitComponent):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._compute_primal_vec = jax.vmap(self._compute_primal)
        self._compute_partials_vec = jax.jit(jax.vmap(jax.jacfwd(self._compute_primal, argnums=np.arange(5))))

    def initialize(self):
        self.options.declare('num_nodes', types=int)

    def setup(self):
        nn = self.options['num_nodes']

        # States
        self.add_input('theta', shape=(nn,), desc='orientation, anticlockwise from positive x', units='rad')
        self.add_input('omega', shape=(nn,), desc='angular velocity, positive anticlockwise', units='rad/s')

        # Controls
        self.add_input('rho', shape=(nn,), desc='rotational control input')

        # Parameters
        self.add_input('k', shape=(nn,), desc='rotational resistance coefficient')
        self.add_input('I', shape=(nn,), desc='rotational moment of inertia', units='kg*m**2')

        # Outputs
        self.add_output('omega_dot', val=np.zeros(nn), desc='rate of change of angular velocity', units='rad/s**2')

        # Partials declared analytically
        arange = np.arange(nn)
        self.declare_partials(of='*', wrt='*', method='exact', rows=arange, cols=arange)

    # Dynamics go here
    @partial(jax.jit, static_argnums=(0,))
    def _compute_primal(self, theta, omega, rho, k, I):
        # For some reason, need to assign these otherwise they have the value 0/0J when checking the partials.
        I = 1
        k = 1

        # Calculate moments
        tau = rho - k*omega**2  # Rotational torque

        # Calculate state rates of change (dynamics)
        # Rotational
        omega_dot = tau / I
        return omega_dot

    def compute(self, inputs, outputs):
        omega_dot = self._compute_primal_vec(*inputs.values())
        if np.isnan(np.sum(omega_dot)):
            raise Exception("NaN values found in rates")
        outputs['omega_dot'] = omega_dot

    def compute_partials(self, inputs, partials):
        output_names = ['omega_dot']
        input_names = ['theta', 'omega', 'rho', 'k', 'I']
        computed_partials = self._compute_partials_vec(*inputs.values())

        # Cycle through computed partials
        for out_ind, output_name in enumerate(output_names):
            for in_ind, input_name in enumerate(input_names):
                partials[output_name, input_name] = computed_partials[out_ind][in_ind]
</code></pre>
<p>This code may look a bit strange because it's been cut down from a larger dynamics model, but it effectively models the rotation of an object with moment of inertia I given a torque input tau, which is a function of a control input rho. Theta is the orientation, and omega the angular velocity.</p>
<p>What might be slightly strange about the way I've built this too is the use of Jax to try to calculate the partial derivatives automatically. I've seen this done in the OpenMDAO and Dymos examples, e.g. using wrap_ode (<a href="https://openmdao.org/newdocs/versions/latest/advanced_user_guide/jax_derivatives/partial_derivs_explicit.html" rel="nofollow noreferrer">OpenMDAO example</a>, <a href="https://openmdao.org/dymos/docs/latest/examples/balanced_field/balanced_field_funccomp.html" rel="nofollow noreferrer">Dymos wrap_ode example</a>), but my approach is slightly different, so I don't know if it plays any part in the strange behaviour I'm experiencing.</p>
<p>The problem is built and set up as follows (initial state, control and parameter values aren't set as I'm only interested in checking the partial derivatives):</p>
<pre><code>num_segments = 10
order = 3
# Build problem
prob = om.Problem()
traj = dm.Trajectory()
prob.model.add_subsystem('traj', traj)
phase = dm.Phase(ode_class=Dynamics, transcription=dm.GaussLobatto(num_segments=num_segments, order=order),
ode_init_kwargs={})
traj.add_phase('phase0', phase)
# Add states, controls, parameters and objective
# States
phase.add_state('theta', rate_source='omega', targets=['theta'], units='rad')
phase.add_state('omega', rate_source='omega_dot', targets=['omega'], units='rad/s')
# Controls
phase.add_control('rho', continuity=True, rate_continuity=True, targets=['rho'])
# Parameters
phase.add_parameter('k', targets=['k'])
phase.add_parameter('I', units='kg*m**2', targets=['I'])
# Configure and set up
prob.setup(force_alloc_complex=True)
</code></pre>
<p>When I run check_partials using <code>prob.check_partials(method='cs', compact_print=True)</code>, I get some significant, and systematic-looking, errors:</p>
<p><a href="https://i.sstatic.net/fmYEkE6t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fmYEkE6t.png" alt="Errors from check_partials method" /></a></p>
<p>Stranger still, I don't get an error for omega_dot w.r.t theta, which has an absolute error of 9.5238e-01 and a relative error of 1.0000e+00 in the full model.</p>
<p>What could be causing this? Is it an issue with the model itself, or perhaps the way I'm using Jax? There are a few peculiarities:</p>
<ul>
<li>The fact that the errors are for different partials in the original model and this minimum working example.</li>
<li>In _compute_primal(...), the values of k and I don't 'carry through' into the function - they end up being 0 or 0J when check_partials is run (which is why I have to set them explicitly in _compute_primal, to avoid division by zero errors) - why would this be?</li>
</ul>
<p>As an extra aside, I'm not sure what the difference is between the rhs_disc and rhs_col components of the phase?</p>
<p>Many thanks, I really appreciate any assistance.</p>
<hr />
<p>Here are more details for the non-zero/non-nan partials (these were screenshotted on a different run, and it seems to have changed in-between):</p>
<p><a href="https://i.sstatic.net/v81JjJto.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/v81JjJto.png" alt="omega_dot wrt omega rhs_disk" /></a>
<a href="https://i.sstatic.net/8MxWtwhT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8MxWtwhT.png" alt="omega_dot wrt rho rhs_disk" /></a>
<a href="https://i.sstatic.net/DD20YH4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DD20YH4E.png" alt="omega_dot wrt omega rhs_col" /></a>
<a href="https://i.sstatic.net/JfnQNdp2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfnQNdp2.png" alt="omega_dot wrt rho rhs_col" /></a></p>
<hr />
<h2><strong>Edit</strong></h2>
<p>Because the MWE has only one return value from <code>_compute_primal</code>, it doesn't work in the same way. Here is the full code:</p>
<pre><code># Define dynamics
class BoatDynamicsJax(om.ExplicitComponent):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self._compute_primal_vec = jax.vmap(self._compute_primal)
self._compute_partials_vec = jax.jit(jax.vmap(jax.jacfwd(self._compute_primal, argnums=np.arange(13))))
def initialize(self):
self.options.declare('num_nodes', types=int)
self.options.declare('flow_velocity')
def setup(self):
nn = self.options['num_nodes']
# States
self.add_input('x', shape=(nn,), desc='x displacement', units='m')
self.add_input('y', shape=(nn,), desc='y displacement', units='m')
self.add_input('u', shape=(nn,), desc='x velocity', units='m/s')
self.add_input('v', shape=(nn,), desc='y velocity', units='m/s')
self.add_input('theta', shape=(nn,), desc='heading angle, anticlockwise from positive x', units='rad')
self.add_input('omega', shape=(nn,), desc='angular velocity, positive anticlockwise', units='rad/s')
# Controls
self.add_input('rho', shape=(nn,), desc='rotational control input')
self.add_input('t', shape=(nn,), desc='propulsive throttle force, in ib direction', units='N')
# Parameters
self.add_input('m', shape=(nn,), desc='mass', units='kg')
self.add_input('Cdl', shape=(nn,), desc='longitudinal drag coefficient')
self.add_input('Cds', shape=(nn,), desc='lateral (sideways) drag coefficient')
self.add_input('k', shape=(nn,), desc='rotational drag coefficient')
self.add_input('I', shape=(nn,), desc='rotational moment of inertia', units='kg*m**2')
# x_dot, y_dot and theta_dot are u, v and omega respectively, so we don't need to calculate outputs for them.
self.add_output('u_dot', val=np.zeros(nn), desc='rate of change of u velocity', units='m/s**2')
self.add_output('v_dot', val=np.zeros(nn), desc='rate of change of y velocity', units='m/s**2')
self.add_output('omega_dot', val=np.zeros(nn), desc='rate of change of angular velocity', units='rad/s**2')
# Not used, but for logging.
self.add_output('Fdi', np.zeros(nn), desc='longitudinal drag force, aligned with ib', units='N')
self.add_output('Fdj', np.zeros(nn), desc='lateral drag force, aligned with jb', units='N')
# Partials declared analytically
arange = np.arange(nn)
self.declare_partials(of='*', wrt='*', method='exact', rows=arange, cols=arange)
# Dynamics go here
@partial(jax.jit, static_argnums=(0,))
def _compute_primal(self, x, y, u, v, theta, omega, rho, t, m, Cdl, Cds, k, I):
# Define body frame basis vectors
ib = jnp.array([jnp.cos(theta), jnp.sin(theta)])
jb = jnp.array([-jnp.sin(theta), jnp.cos(theta)])
# Calculate flow-relative velocity vector, v_r.
v_f = self.options['flow_velocity'](x, y) # Flow velocity vector
v_vec = jnp.array([u, v]) # Inertial velocity vector
v_r = v_vec - v_f # Relative velocity vector
# Get body-axis velocities
vi = jnp.dot(v_r, ib)
vj = jnp.dot(v_r, jb)
# Calculate forces and moments
Fdi = -Cdl*vi # Drag force in ib direction
Fdj = -Cds*vj # Drag force in jb direction
Fi = Fdi + t # Sum of forces in ib direction
Fj = Fdj # Sum of forces in jb direction
tau = rho*vi - k*omega # Rotational torque
# Calculate state rates of change (dynamics)
# Linear
v_vec_dot = (1/m)*(Fi*ib + Fj*jb)
u_dot, v_dot = v_vec_dot
# Rotational
omega_dot = tau / I
return u_dot, v_dot, omega_dot, Fdi, Fdj
def compute(self, inputs, outputs):
# print("Running compute")
rates = np.array(self._compute_primal_vec(*inputs.values()))
# Check for nan values: https://stackoverflow.com/questions/6736590/fast-check-for-nan-in-numpy
if np.isnan(np.sum(rates)):
raise Exception("NaN values found in rates")
u_dot, v_dot, omega_dot, Fdi, Fdj = rates
outputs['u_dot'] = u_dot
outputs['v_dot'] = v_dot
outputs['omega_dot'] = omega_dot
outputs['Fdi'] = Fdi
outputs['Fdj'] = Fdj
def compute_partials(self, inputs, partials):
output_names = ['u_dot', 'v_dot', 'omega_dot', 'Fdi', 'Fdj']
input_names = ['x', 'y', 'u', 'v', 'theta', 'omega', 'rho', 't', 'm', 'Cdl', 'Cds', 'k', 'I']
computed_partials = self._compute_partials_vec(*inputs.values())
# Cycle through computed partials
for out_ind, output_name in enumerate(output_names):
for in_ind, input_name in enumerate(input_names):
partials[output_name, input_name] = computed_partials[out_ind][in_ind]
# Setup Dymos/OpenMDAO problem
# =================================
# Options
num_segments = 10
order = 3
flow = lambda x, y: jnp.array([0., -x])
# =================================
# OpenMDAO Problem
prob = om.Problem()
traj = dm.Trajectory()
# Trajectory gets added to the problem
prob.model.add_subsystem('traj', traj)
phase = dm.Phase(ode_class=BoatDynamicsJax, transcription=dm.GaussLobatto(num_segments=num_segments, order=order),
ode_init_kwargs={'flow_velocity': flow})
traj.add_phase('phase0', phase)
# =================================
# Add states, controls, parameters and objective
# States
phase.add_state('x', rate_source='u', targets=['x'], units='m')
phase.add_state('y', rate_source='v', targets=['y'], units='m')
phase.add_state('u', rate_source='u_dot', targets=['u'], units='m/s')
phase.add_state('v', rate_source='v_dot', targets=['v'], units='m/s')
phase.add_state('theta', rate_source='omega', targets=['theta'], units='rad')
phase.add_state('omega', rate_source='omega_dot', targets=['omega'], units='rad/s')
# Controls
phase.add_control('rho', continuity=True, rate_continuity=True, targets=['rho'])
phase.add_control('t', continuity=True, rate_continuity=True, targets=['t'])
# Parameters
phase.add_parameter('m', units='kg', targets=['m'])
phase.add_parameter('Cdl', targets=['Cdl'])
phase.add_parameter('Cds', targets=['Cds'])
phase.add_parameter('k', targets=['k'])
phase.add_parameter('I', units='kg*m**2', targets=['I'])
# Log forces, for plotting.
phase.add_timeseries_output('Fdi')
phase.add_timeseries_output('Fdj')
phase.set_time_options(fix_initial=True, duration_bounds=(5, 200.0))
phase.add_objective('time', loc='final')
# Set up constraints
traj.link_phases(['phase0', 'phase0'], ['x', 'theta'])
phase.add_path_constraint('t', lower=0)
# =================================
# Configure
# Set IPOPT as optimiser
prob.driver = om.pyOptSparseDriver(optimizer='IPOPT')
prob.driver.opt_settings['print_level'] = 5
prob.driver.declare_coloring()
prob.setup(force_alloc_complex=True)
# =================================
# Set initial state, control and parameter values
# Time
prob.set_val('traj.phase0.t_initial', 0.0)
prob.set_val('traj.phase0.t_duration', 30.0)
# States
prob.set_val('traj.phase0.states:x', -2.)
prob.set_val('traj.phase0.states:y', 0.)
prob.set_val('traj.phase0.states:u', 5.)
prob.set_val('traj.phase0.states:v', 0.)
prob.set_val('traj.phase0.states:theta', 0.)
prob.set_val('traj.phase0.states:omega', 0.)
# Controls
prob.set_val('traj.phase0.controls:rho', -0.1)
prob.set_val('traj.phase0.controls:t', 0.)
# Parameters
prob.set_val('traj.phase0.parameters:m', 1.)
prob.set_val('traj.phase0.parameters:Cdl', 0.05)
prob.set_val('traj.phase0.parameters:Cds', 0.2)
prob.set_val('traj.phase0.parameters:k', 0.5)
prob.set_val('traj.phase0.parameters:I', 1.)
# =================================
# Run simulation
prob.run_model()
prob.check_partials(method='cs', compact_print=True)
</code></pre>
<p>I get the same thing - lots of errors which are greater than the tolerance:</p>
<p><a href="https://i.sstatic.net/pBoKSJ7f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBoKSJ7f.png" alt="Partial derivative errors with full model" /></a></p>
<p>Interestingly, the errors are much smaller using forward difference:</p>
<p><a href="https://i.sstatic.net/7AB2Z4ke.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7AB2Z4ke.png" alt="Forward difference errors" /></a></p>
<p>So maybe the computed partials for checking are wrong, because Jax isn't compatible with complex numbers?</p>
|
<python><optimization><openmdao>
|
2024-06-14 16:49:30
| 1
| 528
|
LordCat
|
78,624,113
| 2,617,896
|
Checking oAuth token expiration in FlaskAppBuilder App
|
<p>Does anyone know how to check token expiry on requests in a Flask AppBuilder app. I'm specifically talking Superset here, but I think the same would apply anywhere.</p>
<p>I've got an OAUTH configuration for Keycloak and the app is authenticating just fine using <code>auth_type = AUTH_OAUTH</code> and an <code>AuthOAuthView</code> descendant. So, the user gets logged in and by default the session["oauth"] is set to the token - this is all mostly default FAB behaviour.</p>
<p>It seems that Authlib (used for the OAuth) has checking and refreshing tokens built-in, but I can't see that this is being called for each request. In fact what happens is that the app thinks the user is logged in when the token has expired and allows the request to proceed, right down to getting data for a chart, at which point we make a 3rd party query using the token which fails. So we get an ugly <code>DB engine Error</code> and the user is logged out, re-authenticates, gets a new token and everything is happy again.</p>
<p>I feel like a piece of the puzzle is missing somewhere to check the OAuth token for validity before even serving the request, trying to refresh the token if possible, logging out and redirecting before showing UI to the chart level.</p>
<p>Has anyone had this problem and resolved it?
Thanks.</p>
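<p>For reference, the freshness check itself can be written framework-free and then wired into a request hook. A minimal stdlib-only sketch (a hypothetical helper; <code>expires_at</code> is the key Authlib stores on token dicts, and the 30-second leeway is an arbitrary choice):</p>

```python
import time

def token_expired(token, leeway=30):
    """Return True if the OAuth token dict is missing, or expires
    within `leeway` seconds (based on Authlib's 'expires_at' key)."""
    if not token:
        return True
    return token.get("expires_at", 0) <= time.time() + leeway
```

<p>In a Flask AppBuilder app this could be called from an <code>@app.before_request</code> handler against <code>session["oauth"]</code>, attempting a refresh or redirecting to re-authentication when it returns <code>True</code>, before any chart-level query is made.</p>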
|
<python><apache-superset><authlib><flask-appbuilder>
|
2024-06-14 16:39:13
| 0
| 368
|
Pat Buxton
|
78,624,066
| 991,077
|
Retrieving results from finished multiprocessing task as it has finished
|
<p>I need to run dozens of computationally intensive CPU-bound parallel tasks. Currently I do it using <code>joblib</code> <code>delayed</code> and <code>Parallel</code>:</p>
<pre><code>resultTuples = Parallel(n_jobs=-1, prefer="processes")(delayed(RunSingleTask)(*p) for p in run_params)
</code></pre>
<p>It works fine, but I have to wait until all tasks have finished before I get the list of results to process. Instead, I'd like to receive each task's result as soon as that task finishes, so I can process it immediately.</p>
<p>Tasks have no IO, so I don't see any purpose in using async stuff like:</p>
<pre><code>for first_completed in asyncio.as_completed(tasks):
...
</code></pre>
<p>So how can I do this?</p>
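<p>For context, the standard library's <code>concurrent.futures</code> exposes exactly this pattern: submit each task, then iterate <code>as_completed</code>, which yields futures in finish order rather than submission order. A minimal sketch with a trivial stand-in task (for real CPU-bound work, swap <code>ThreadPoolExecutor</code> for <code>ProcessPoolExecutor</code>; process pools additionally require the task function to be picklable/importable):</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_single_task(x):
    # stand-in for the real CPU-bound task
    return x * x

run_params = [1, 2, 3, 4]
results = []
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(run_single_task, p) for p in run_params]
    for fut in as_completed(futures):   # yields each future as it finishes
        results.append(fut.result())    # process immediately, not at the end
```

<p>(Recent joblib releases also offer a <code>return_as</code> argument for streaming results instead of collecting them all at the end, which may be worth checking before switching APIs.)</p>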
|
<python><asynchronous><multiprocessing><python-multiprocessing><joblib>
|
2024-06-14 16:25:38
| 1
| 734
|
shda
|
78,624,056
| 4,076,764
|
How to change cursor type in with clause
|
<p>I have the following statement</p>
<pre><code> statement = """
INSERT INTO rag.chunks (project_id, document_id, model_id, text, metadata)
VALUES {}
RETURNING *;
"""
async with pool.connection() as conn:
async with conn.cursor() as cur:
values = ', '.join(
cur.mogrify("(%s, %s, %s, %s, %s)",
(c.project_id, c.document_id, c.model_id, c.text, Jsonb(c.metadata)))
for c in chunks
)
statement = statement.format(values)
await cur.execute(statement)
chunks = await cur.fetchall()
return chunks
</code></pre>
<p>I am using <code>cur.mogrify</code> because <code>executemany</code> doesn't return all the rows, just the last row (<a href="https://stackoverflow.com/questions/35492684/object-has-no-attribute-mogrify">discussion</a>).</p>
<p>However, because this is an AsyncConnectionPool, the cursor being returned is AsyncCursor, which does not have <code>mogrify</code> method.</p>
<pre><code>AttributeError: 'AsyncCursor' object has no attribute 'mogrify'
</code></pre>
<p>However, the <a href="https://www.psycopg.org/psycopg3/docs/api/cursors.html#psycopg.ClientCursor" rel="nofollow noreferrer">AsyncClientCursor</a> class does have <code>mogrify</code>.</p>
<p>How can I specify I want to use an AsyncClientCursor? For example:</p>
<pre><code>async with pool.connection() as conn:
async with conn.cursor(cursorClass=AsyncClientCursor) as cur:
</code></pre>
<p>But this fails with</p>
<pre><code>TypeError: AsyncConnection.cursor() got an unexpected keyword argument 'cursorClass'
</code></pre>
|
<python><psycopg3>
|
2024-06-14 16:24:16
| 1
| 16,527
|
Adam Hughes
|
78,624,031
| 1,806,566
|
Is there a way in python's readline module to bind a key to a custom function?
|
<p>I'm using the python readline module (and using the input() function, which readline modifies). Everything is working, but I would like to add a feature to call a custom python function when a certain key is pressed during input (the function would record that the key was pressed, change the prompt on the current line, and call redisplay to display the new prompt).</p>
<p>Essentially, I want to call rl_bind_key in the underlying readline library, but give it a Python function as a callback instead of a C function.</p>
<p>Is there a way of doing this short of writing my own extension for the readline library? Does such a thing already exist?</p>
<p>Other than that I don't want to write a new extension if I can avoid it, I would also like this to work when the readline module is compiled with editline, and it seems that would be tricky (mostly to know how to build my extension based on how the readline module was compiled).</p>
|
<python><readline>
|
2024-06-14 16:17:54
| 0
| 1,241
|
user1806566
|
78,623,890
| 14,427,714
|
ChromeDriver Assuming Crash: The process started from chrome location /usr/bin/google-chrome is no longer running
|
<p>I am working on a web scraping bot using Selenium in Python. My setup involves using webdriver.Chrome and I am running into an issue where the ChromeDriver assumes that Chrome has crashed. Here are the details of my setup and the error message:</p>
<pre><code>The process started from chrome location /usr/bin/google-chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.
</code></pre>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
import os
import time
class VannilaBotV1(webdriver.Chrome):
def __init__(self, driver_path=webdriver.Chrome):
self.driver_path = driver_path
options = Options()
prefs = {
"translate_whitelists": {"it": "en"},
"translate": {"enabled": "true"}
}
# options.add_experimental_option("prefs", prefs)
options.add_argument('--headless')
options.add_argument("--no-sandbox")
extension_path = os.path.join("./extensions/hcaptcha_extension.crx")
options.add_extension(extension_path)
super(VannilaBotV1, self).__init__(options=options)
self.set_window_size(1040, 1440)
</code></pre>
<p>The class above runs without problems on my local machine, but it fails on my Ubuntu VPS.</p>
<pre><code>System Information:
Chrome Location: /usr/bin/google-chrome
Chrome Version: Google Chrome 126.0.6478.61
</code></pre>
<p>Troubleshooting steps taken:</p>
<ul>
<li>Ensured that Chrome is installed correctly and is accessible via the specified path.</li>
<li>Checked for any ChromeDriver and Chrome version compatibility issues.</li>
<li>Ensured that the <code>--no-sandbox</code> argument is included.</li>
<li>Attempted to run without headless mode (still encountered issues).</li>
</ul>
<p>Despite these steps, the issue persists. I am not sure what is causing ChromeDriver to assume that Chrome has crashed.</p>
<p>Any help or suggestions would be greatly appreciated!</p>
|
<python><python-3.x><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2024-06-14 15:39:06
| 0
| 549
|
Sakib ovi
|
78,623,778
| 15,045,363
|
Is Additional Bootstrapping Useful Before Training a Random Forest?
|
<p>I am reviewing a codebase that uses a Random Forest (RF) Regressor, and I've noticed that bootstrapping is applied before creating each RF model. However, RF inherently uses bootstrapping to train each Decision Tree (DT).</p>
<p><strong>Does it make sense, and is it useful, to apply additional bootstrapping before creating a Random Forest?</strong></p>
<p>Here's an illustration of the process:</p>
<ul>
<li>From 75 training observations, 100 bootstrap batches are created.</li>
<li>Each bootstrap batch is used to train an RF, which itself consists
of 100 DTs.</li>
<li>In total, we will train 1000 DTs.</li>
<li>At the end, the predictions from each DT are aggregated to form the
RF prediction.</li>
<li>Finally, the predictions from all RFs are aggregated to obtain the final result.</li>
</ul>
<p><a href="https://i.sstatic.net/82wDuNaT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82wDuNaT.png" alt="enter image description here" /></a></p>
|
<python><machine-learning><scikit-learn><random-forest>
|
2024-06-14 15:15:11
| 0
| 865
|
Maxime Charrière
|
78,623,552
| 6,606,057
|
Matplotlib arrows do not have proper length
|
<p>I have a subset of countries for which I am comparing GDP per person and geographic latitude. I've developed the code below:</p>
<pre><code>import matplotlib.pyplot as plt
plt.rc('font', size=12)
plt.rc('axes', labelsize=14, titlesize = 14)
plt.rc('legend', fontsize =12)
plt.rc('xtick', labelsize=10)
plt.rc('ytick', labelsize=10)
gdp_ppp_col = "gdp_ppp"
lat_col = "lat"
country_sub.plot(kind='scatter', figsize=(6, 4), grid=True, x=lat_col, y=gdp_ppp_col)
min_gdp_ppp = 25
max_gdp_ppp = 125_000
position_text = {
"Israel": (31.04605, 57_758),
"United States": (37.09024, 75_269),
"Spain": (40.463667, 29_385),
"Bulgaria": (42.733883, 13_129),
"Germany": (51.165691, 48_845),
"Ireland": (53.41291, 105_362),
"Iceland": (64.963051, 74_663)
}
for country, pos_text in position_text.items():
pos_data_x = country_sub[lat_col].loc[country]
pos_data_y = country_sub[gdp_ppp_col].loc[country]
country = "U.S." if country == "United States" else country
plt.annotate(country, xy=(pos_data_x, pos_data_y),
xytext=pos_text, fontsize=12,
arrowprops=dict(facecolor='black', width=3,
shrink=0.08, headwidth=5))
plt.plot(pos_data_x, pos_data_y, "ro")
plt.axis([min_lat, max_lat, min_gdp_ppp, max_gdp_ppp])
plt.show()
</code></pre>
<p>This results in the graph below:</p>
<p><a href="https://i.sstatic.net/Kk6UjFGy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Kk6UjFGy.png" alt="enter image description here" /></a></p>
<p>However, there are no stems on my arrows. I've tried altering the arrow plot parameters with no success.</p>
<p>I am trying to create something that looks more like this:</p>
<p><a href="https://i.sstatic.net/AcsSrO8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AcsSrO8J.png" alt="enter image description here" /></a></p>
<p>From: <a href="https://www.analyticsvidhya.com/blog/2024/02/scatter-plot-visualization-in-python-using-matplotlib" rel="nofollow noreferrer">https://www.analyticsvidhya.com/blog/2024/02/scatter-plot-visualization-in-python-using-matplotlib</a>/</p>
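<p>One thing worth trying: when <code>arrowprops</code> contains an <code>arrowstyle</code> key, <code>annotate</code> draws a <code>FancyArrowPatch</code>, whose shaft is always rendered, instead of the plain shrinking arrow. A hypothetical, scaled-down version of one annotation (coordinates taken from the U.S. entry above; the <code>Agg</code> backend is used only so the sketch runs headless, and the axis limits are assumed):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(37.09024, 75_269, "ro")
ann = ax.annotate(
    "U.S.", xy=(37.09024, 75_269), xytext=(40.0, 90_000), fontsize=12,
    arrowprops=dict(arrowstyle="-|>", color="black",
                    shrinkA=2, shrinkB=4),  # shrink is in points here, not a fraction
)
ax.set(xlim=(25, 70), ylim=(25, 125_000))
fig.savefig("annotated.png")
```

<p>Note that <code>width</code>/<code>headwidth</code>/<code>shrink</code> belong to the non-<code>arrowstyle</code> variant and are not accepted once <code>arrowstyle</code> is given.</p>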
|
<python><python-3.x><matplotlib><scatter-plot>
|
2024-06-14 14:29:44
| 1
| 485
|
Englishman Bob
|
78,623,536
| 896,451
|
Providing an argument to the lambda / function passed to filter
|
<p>In this code, the regular expression is hardcoded in <code>filterf</code>, which is then provided as the evaluating function to <code>filter</code>.</p>
<pre><code>import re
filterf = lambda s:re.match(r'^T', s)
print("".join(list(filter(filterf, ["T x", "M y"])))) # outputs: T x
</code></pre>
<p>How would I recode it such that a regex literal can instead be provided to <code>filterf</code> as an argument? I'd like to use it like this:</p>
<pre><code>import re
filterf = # ???
print("".join(list(filter(filterf(r'^T'), ["T x", "M y"])))) # outputs: T x
</code></pre>
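<p>For what it's worth, one way to fill in the blank is a closure: an outer function takes the pattern and returns the actual predicate. (<code>functools.partial(re.match, r'^T')</code> would work too, since <code>re.match</code> takes the pattern as its first argument.)</p>

```python
import re

def filterf(pattern):
    # bake the pattern into a new single-argument predicate
    return lambda s: re.match(pattern, s)

out = "".join(filter(filterf(r'^T'), ["T x", "M y"]))
# out == "T x"
```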
|
<python>
|
2024-06-14 14:26:29
| 1
| 2,312
|
ChrisJJ
|
78,623,491
| 9,591,276
|
Passing fields to a nested model with model_validate() in Pydantic V2
|
<p>Consider I have the following SQLAlchemy DB model:</p>
<pre class="lang-py prettyprint-override"><code>class User(Base):
__tablename__ = "users"
id: Mapped[int] = mapped_column(sa.Integer(), primary_key=True)
username: Mapped[str | None] = mapped_column(sa.String(64), nullable=True, index=True)
email: Mapped[str | None] = mapped_column(sa.String(128), nullable=True)
</code></pre>
<p>And I use the following Pydantic V2 models to serialize user objects:</p>
<pre class="lang-py prettyprint-override"><code>class PrivateDataSerializer(BaseModel):
model_config = ConfigDict(from_attributes=True)
username: str | None
email: str | None
class ProfileSerializer(BaseModel):
model_config = ConfigDict(from_attributes=True)
id: int
private_data: PrivateDataSerializer
</code></pre>
<p>As you can see, I put the <code>username</code> & <code>email</code> fields in a nested model.</p>
<p>My goal is to be able to load a user object into the Pydantic model using:</p>
<pre class="lang-py prettyprint-override"><code>ProfileSerializer.model_validate(user)
</code></pre>
<p>However, I couldn't find the way to properly configure a mapping for fields in <code>private_data</code>.</p>
<p>I have some experience with Django DRF and using it's <code>ModelSerializer</code> you can specify <code>source="*"</code> for a nested model for this purpose:</p>
<pre class="lang-py prettyprint-override"><code>class PrivateDataSerializer(serializers.ModelSerializer):
class Meta:
model = User
fields = ["username", "email"]
class ProfileSerializer(serializers.ModelSerializer):
private_data = PrivateDataSerializer(source="*")
class Meta:
model = User
fields = ["id", "private_data"]
</code></pre>
<p>So, how could I achieve the same result with Pydantic?</p>
|
<python><python-3.x><pydantic-v2>
|
2024-06-14 14:16:26
| 2
| 491
|
goedwig
|
78,623,230
| 20,770,190
|
How to extract Google's button elements via Playwright?
|
<p>I have a code snippet to extract the inputable and clickable node elements (i.e. interactive elements) from the DOM tree of the web pages via Playwright in python.</p>
<p>This code mostly works, but in some cases it misses elements such as Google's buttons: the button below is marked as unclickable by this code. Can someone identify the issue?</p>
<p>Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>from playwright.sync_api import sync_playwright
VOID_ELEMENTS = {
"area",
"base",
"br",
"col",
"embed",
"hr",
"img",
"input",
"link",
"meta",
"param",
"source",
"track",
"wbr",
}
READABLE_ATTRIBUTES = {
"title",
"alt",
"href",
"placeholder",
"label",
"value",
"caption",
"summary",
"aria-label",
"aria-describedby",
"datetime",
"download",
"selected",
"checked",
"type",
}
UNCLICKABLE_ELEMENTS = {"html", "head", "body"}
CLICKABLE_ELEMENTS = {"a", "button", "img", "details", "summary"}
INPUT_ELEMENTS = {"input", "textarea", "select", "option"}
class DOMNode:
def __init__(self, i, nodes, strings):
self._on_screen = None
self.parent = None
self.children = []
self.llm_id = None
### Only some nodes have these, default None to differentiate between None and False
self.bounds = None
self.center = None
self.inputValue = None
self.inputChecked = None
self.isClickable = None
self.optionSelected = None
self.parentId = (
nodes["parentIndex"][i] if nodes["parentIndex"][i] >= 0 else None
)
self.nodeType = strings[nodes["nodeType"][i]]
self.nodeName = strings[nodes["nodeName"][i]].lower()
self.nodeValue = (
strings[nodes["nodeValue"][i]].strip()
if nodes["nodeValue"][i] >= 0
else None
)
self.backendNodeId = nodes["backendNodeId"][i]
self.attributes = {}
attrs = nodes["attributes"][i]
for att1, att2 in zip(attrs[::2], attrs[1::2]):
self.attributes[strings[att1]] = strings[att2][:100] # cut off long URLs
self.readable_attributes = {
k: v for k, v in self.attributes.items() if k in READABLE_ATTRIBUTES
}
def __repr__(self, indent=0) -> str:
if self.nodeName == "#text":
return " " * indent + (self.nodeValue or "")
attr_str = " ".join([f'{k}="{v}"' for k, v in self.readable_attributes.items()])
attr_str = " " + attr_str if attr_str else ""
open_tag = f"<{self.nodeName}{attr_str}>"
close_tag = f"</{self.nodeName}>"
if len(self.children) == 0:
return (" " * indent + open_tag) + (
close_tag if self.nodeName not in VOID_ELEMENTS else ""
)
# special case for elements with only one text child -> one-line element
if len(self.children) == 1 and self.children[0].nodeName == "#text":
return (" " * indent + open_tag) + self.children[0].__repr__() + close_tag
children_repr = "\n".join(
[child.__repr__(indent + 2) for child in self.children]
)
return (
(" " * indent + open_tag)
+ "\n"
+ children_repr
+ "\n"
+ (" " * indent + close_tag)
)
def on_screen(self, screen_bounds):
if len(self.children) > 0:
return any([child.on_screen(screen_bounds) for child in self.children])
if (
self.bounds is None
or len(self.bounds) != 4
or self.bounds[2] * self.bounds[3] == 0
):
return False
x, y, w, h = self.bounds
win_upper_bound, win_left_bound, win_width, win_height = screen_bounds
win_right_bound = win_left_bound + win_width
win_lower_bound = win_upper_bound + win_height
return (
x < win_right_bound
and x + w > win_left_bound
and y < win_lower_bound
and y + h > win_upper_bound
)
class Globot:
def __init__(self, headless=False):
playwright = sync_playwright().start()
self.browser = playwright.chromium.launch(headless=headless)
self.context = self.browser.new_context()
self.page = self.context.new_page()
def go_to_page(self, url):
self.page.goto(url=url if "://" in url else "https://" + url)
self.client = self.page.context.new_cdp_session(self.page)
self.page.wait_for_load_state("domcontentloaded")
def crawl(self) -> tuple[dict[int, DOMNode], dict[int, DOMNode]]:
dom = self.client.send(
"DOMSnapshot.captureSnapshot",
{"computedStyles": [], "includeDOMRects": True, "includePaintOrder": True},
)
dom_strings = dom["strings"]
document = dom["documents"][0]
dom_layout = document["layout"]
dom_nodes = document["nodes"]
screen_bounds = dom_layout["bounds"][0]
        # For some reason `window.devicePixelRatio` gives the wrong answer sometimes
device_pixel_ratio = screen_bounds[2] / self.page.evaluate(
"window.screen.width"
)
nodes = []
root = None
# Takes much longer naively
nodeIndex_flipped = {v: k for k, v in enumerate(dom_layout["nodeIndex"])}
inputValue_flipped = {
v: k for k, v in enumerate(dom_nodes["inputValue"]["index"])
}
for i in range(len(dom_nodes["parentIndex"])):
node = DOMNode(i, dom_nodes, dom_strings)
if i == 0:
root = node
if i in nodeIndex_flipped:
bounds = dom_layout["bounds"][nodeIndex_flipped[i]]
bounds = [int(b / device_pixel_ratio) for b in bounds]
node.bounds = bounds
node.center = (
int(bounds[0] + bounds[2] / 2),
int(bounds[1] + bounds[3] / 2),
)
if i in dom_nodes["isClickable"]["index"]:
node.isClickable = True
if i in inputValue_flipped:
v = dom_nodes["inputValue"]["value"][inputValue_flipped[i]]
node.inputValue = dom_strings[v] if v >= 0 else ""
# node.string_attributes['value'] = node.inputValue
if i in dom_nodes["inputChecked"]["index"]:
node.inputChecked = True
if i in dom_nodes["optionSelected"]["index"]:
node.optionSelected = True
nodes.append(node)
# Switch node ids to node pointers
for node in nodes:
if node.parentId is not None:
node.parent = nodes[node.parentId]
node.parent.children.append(node)
count = 0
input_elements = {}
clickable_elements = {}
def find_interactive_elements(node):
nonlocal count
clickable = (
node.nodeName in CLICKABLE_ELEMENTS
and node.isClickable
and node.center is not None
)
inputable = node.nodeName in INPUT_ELEMENTS or node.inputValue is not None
# Special case for select and option elements
select_or_option = node.nodeName == "select" or node.nodeName == "option"
visible = node.on_screen(
root.bounds
) and "visibility: hidden" not in node.attributes.get("style", "")
if node.nodeName == "button":
print(f"Node: {node.nodeName}")
print(f" Attributes: {node.attributes}")
print(f" Bounds: {node.bounds}")
print(f" Clickable: {clickable}")
print(f" Inputable: {inputable}")
print(f" Visible: {visible}")
print(f" Center: {node.center}")
if visible and (clickable or inputable) or select_or_option:
if clickable:
clickable_elements[count] = node
if inputable or select_or_option:
input_elements[count] = node
node.llm_id = count
count += 1
for child in node.children:
find_interactive_elements(child)
find_interactive_elements(root)
return input_elements, clickable_elements
</code></pre>
<p>Code snippet for reproducing the issue (here the <kbd>Next</kbd> button is not known as clickable):</p>
<pre class="lang-py prettyprint-override"><code>from pprint import pprint
bot = Globot()
bot.go_to_page(
"https://accounts.google.com/v3/signin/identifier?authuser=0&continue=https%3A%2F%2Fwww.google.com%2F&ec=GAlAmgQ&hl=en&flowName=GlifWebSignIn&flowEntry=AddSession&dsh=S1040273122%3A1718390580872851&ddm=0"
)
inputs, clickables = bot.crawl()
s = ""
for i in inputs.keys() | clickables.keys():
inputable = False
clickable = False
if i in inputs:
node = inputs[i]
inputable = True
if i in clickables:
node = clickables[i]
clickable = True
s += f"<node id={i} clickable={clickable} inputable={inputable}>\n"
s += node.__repr__(indent=2)
s += "\n</node>\n"
html_description = s
pprint(html_description)
</code></pre>
<p>Here's the part of the log regarding the <kbd>Next</kbd> element - as you can see the <code>Clickable</code> is set to <code>None</code>:</p>
<pre><code>Node: button
Attributes: {'class': 'VfPpkd-LgbsSe VfPpkd-LgbsSe-OWXEXe-k8QpJ VfPpkd-LgbsSe-OWXEXe-dgl2Hf nCP5yc AjY5Oe DuMIQc LQeN7 BqKG', 'jscontroller': 'soHxf', 'jsaction': 'click:cOuCgd; mousedown:UX7yZ; mouseup:lbsD7e; mouseenter:tfO1Yc; mouseleave:JywGue; touchstart:p6p2', 'data-idom-class': 'nCP5yc AjY5Oe DuMIQc LQeN7 BqKGqe Jskylb TrZEUc lw1w4b', 'jsname': 'LgbsSe', 'type': 'button'}
Bounds: [965, 453, 78, 40]
Clickable: None
Inputable: False
Visible: True
Center: (1004, 473)
</code></pre>
<p>Here's the RAW HTML of the <kbd>Next</kbd> button:</p>
<pre><code><button class="VfPpkd-LgbsSe VfPpkd-LgbsSe-OWXEXe-k8QpJ VfPpkd-LgbsSe-OWXEXe-dgl2Hf nCP5yc AjY5Oe DuMIQc LQeN7 BqKGqe Jskylb TrZEUc lw1w4b" jscontroller="soHxf" jsaction="click:cOuCgd; mousedown:UX7yZ; mouseup:lbsD7e; mouseenter:tfO1Yc; mouseleave:JywGue; touchstart:p6p2H; touchmove:FwuNnf; touchend:yfqBxc; touchcancel:JMtRjd; focus:AHmuwe; blur:O22p3e; contextmenu:mg9Pef;mlnRJb:fLiPzd;" data-idom-class="nCP5yc AjY5Oe DuMIQc LQeN7 BqKGqe Jskylb TrZEUc lw1w4b" jsname="LgbsSe" type="button"><div class="VfPpkd-Jh9lGc"></div><div class="VfPpkd-J1Ukfc-LhBDec"></div><div class="VfPpkd-RLmnJb"></div><span jsname="V67aGc" class="VfPpkd-vQzf8d">Next</span></button>
</code></pre>
<p>Here's the screenshot of the respective page:</p>
<p><a href="https://i.sstatic.net/JffNHiP2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JffNHiP2.png" alt="enter image description here" /></a></p>
<p>I apologize if the code is too long and I appreciate any help in advance.</p>
|
<python><web-scraping><web-crawler><playwright><playwright-python>
|
2024-06-14 13:25:09
| 1
| 301
|
Benjamin Geoffrey
|
78,622,978
| 12,013,353
|
solve_ivp gets "stuck" at incredibly small "time" step
|
<p>I'm trying to solve a problem of filling and emptying of a retention behind a dam. What I've got are: 1) a function that relates the elevation of the water in the retention with the volume of the water, V(h); 2) a function that relates the discharge through the dam outlet with the water elevation, q(h); and 3) a function of water inflow given the duration of rain, Vr(tr).<br />
For the case when I ignore the third function, i.e. I assume that V0 appears instantly in the reservoir, I managed to get the solution as:</p>
<pre><code>def dtdh(h, y):
t = y[0]
dVdh = misc.derivative(V_h_fun, h, dx=0.1, args=(aCV, bCV, cCV))
return (- dVdh * 1e3 / Q_h_fun(h, aCQ, bCQ, cCQ)) # V_h_fun relates h[m] to V[dam^3], so 1e3 is to get [m^3]
# as Q_h_fun gives q[m^3/s] from h[m]
elevs = np.linspace(235,231,100) # initial elev. 235, to final elev. 231 m
sol = integrate.solve_ivp(dtdh, t_span=(235,231), y0=[0], t_eval=elevs) # y0 = t0
</code></pre>
<p><code>V_h_fun</code> and <code>Q_h_fun</code> are the first two functions I mentioned previously, and <code>aCV, bCV, cCV, aCQ, bCQ, cCQ</code> are the coefficients from fitting the curves to some data.<br />
This works great, it gives me the time required to empty the reservoir, given an initial elevation, which is indirectly given through the <code>elevs</code> and <code>t_span</code>.</p>
<p>When I try to expand this to also consider the increase of the volume in the retention with time (due to rain), then the new diff. eq. looks like:</p>
<pre><code>def dtdh2(h, y):
t = y[0]
dVdh = misc.derivative(V_h_fun, h, dx=0.1, args=(aCV, bCV, cCV))
dVrdt = misc.derivative(initial_volume, t, dx=0.1, args=(V0, t_rain))
dVrdt.clip(min=0, out=dVrdt) # the derivative goes to a large negative value before going back to 0
return (- dVdh * 1e3 / (Q_h_fun(h, aCQ, bCQ, cCQ) - dVrdt))
</code></pre>
<p>So, I only subtracted the inflow due to the rain, from the outflow. The inflow function is defined as:</p>
<pre><code>def initial_volume(t, V0, t_rain):
# V0 in [m3]
# t_rain in [s]
if type(t) != np.ndarray:
t = np.array([t])
f = np.zeros_like(t)
c1 = (t < 0)
c2 = (t >= 0) & (t <= t_rain)
c3 = (t > t_rain)
f[c1] = 0
f[c2] = V0 / t_rain * t[c2] # straight line within the duration of the rain, and 0 otherwise
f[c3] = 0
return f
</code></pre>
<p>What happens is that when I set the rain duration (<code>t_rain</code>) in <code>dtdh2</code> to some large number, >9500, the solver converges and gives me the time required to empty the retention. However, when the time is smaller than that, it starts having problems at the beginning (near the initial elevations), where the elevation ("time") step becomes incredibly small, and the elevation values start to look like this: 234.99999996789282. I found this out by including a <code>print(h)</code> in <code>dtdh2</code>. The solver doesn't terminate; instead it continues solving with the small "time" step, and thus never finishes (I interrupt it manually after some time). What can cause this problem, and what could I do to fix it?</p>
<p>UPDATE: I figured out what the problem is. By having a function that increases the volume, the water elevation should also increase during rain, and only afterwards start decreasing; but since elevation is the independent variable, it can only go in one direction. I'm working on a solution that solves two equations, one for the rising phase and the other for the lowering phase.</p>
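One common workaround for the one-directional independent variable described in the update is to integrate h(t) with time as the independent variable and terminate with an event at the target elevation; the same h(t) trajectory then covers both the rising and the lowering phase. The stand-ins below for dV/dh, q(h) and the rain inflow are made up for illustration and are not the question's fitted curves:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical stand-ins for the question's fitted relations:
dVdh = lambda h: 2.0e3 * h                     # dV/dh in m^3 per m
q = lambda h: 50.0 * np.sqrt(h)                # outlet discharge in m^3/s
inflow = lambda t: 200.0 if t < 3600 else 0.0  # rain inflow in m^3/s

def dhdt(t, y):
    # Elevation rises while inflow exceeds outflow, then falls
    h = y[0]
    return [(inflow(t) - q(h)) / dVdh(h)]

def emptied(t, y):
    return y[0] - 1.0                          # stop once h drops to 1 m

emptied.terminal = True
emptied.direction = -1

sol = solve_ivp(dhdt, (0, 1e6), [4.0], events=emptied, max_step=600)
```

`sol.t_events[0][0]` then gives the emptying time directly, without needing elevation to be monotonic.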
|
<python><scipy><differential-equations>
|
2024-06-14 12:31:43
| 0
| 364
|
Sjotroll
|
78,622,855
| 4,105,440
|
Jupyter notebook running in Visual Studio Code cannot import local module, but the same jupyter server can
|
<p>I started having this issue today, it was working fine last time I used it so I guess something was broken with one of the updates (not surprising).</p>
<p>I'm using VS Code remote-connected to a machine and running a Jupyter notebook. I cannot import a module located in a subfolder of my home directory.</p>
<p><a href="https://i.sstatic.net/rEAXYfyk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEAXYfyk.png" alt="enter image description here" /></a></p>
<p>However, if I try to connect to the SAME EXACT server Vscode spawned to run its kernel (I tunnelled 8888 to my localhost and opened in the browser) everything is working.</p>
<p><a href="https://i.sstatic.net/rypP0skZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rypP0skZ.png" alt="enter image description here" /></a></p>
<p>I checked everything printed in the two notebooks: <code>PATH</code>, <code>PYTHON_PATH</code>, <code>which python</code>. Everything seems to be exactly the same. Needless to say I have many scripts running in the exact same folder where the notebook is located and they're doing just fine (as they've been doing for the past years). Whenever I open a normal python file the language server is working just fine and it finds all the imports.</p>
<p>I'm at a loss because I don't understand what VS Code is doing with the <code>PATH</code> or other parameters that breaks the import of local modules.
If I try to import something else (like <code>import pandas as pd</code>) it works as expected.
There must be something wrong with how the kernel is started, because the server itself seems to be spawned with the correct environment.</p>
|
<python><visual-studio-code><jupyter-notebook><vscode-remote>
|
2024-06-14 12:06:32
| 1
| 673
|
Droid
|
78,622,703
| 11,681,306
|
Celery apps are remaining in background for weeks (memory footprint increases accordingly)
|
<p>I am fairly new to celery, and have some celery workers and some apps are started with a service with a line such as this:</p>
<pre><code>ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
</code></pre>
<p>and</p>
<pre><code>CELERYD_OPTS="--time-limit=600 --concurrency=5 -Q 'myqueue'"
</code></pre>
<p>I am not sure why, but I find several lines like that in my</p>
<p><code>ps -ef | grep ${CELERY_APP}</code></p>
<p>Some of them are weeks old, and they seem to increase their RAM footprint as time goes by.
Also, the RAM footprint seems to be related to the underlying ${CELERY_APP}: after, say, a week, apps with a larger footprint lead to "hung" processes with more memory than processes belonging to apps with a smaller footprint.</p>
<p>Would anyone have any idea why this is happening and what can I check to fix this?</p>
<p>(I don't think it matters, but in case it does, rabbitmq is under the hood)</p>
|
<python><python-3.x><rabbitmq><celery>
|
2024-06-14 11:33:17
| 0
| 309
|
Fabri Ba
|
78,622,682
| 3,442,166
|
Change fieldnames of a csv.DictReader in Python
|
<p>I have a CSV file as follows.</p>
<p><strong>oldFile.csv content:</strong></p>
<pre><code>oldHeader1,oldHeader2,oldHeader3
data01,data02,data03
data11,data12,data13
data21,data22,data23
</code></pre>
<p>and I want to rename the headers with new ones. Here's the code I use to do this:</p>
<pre><code>import csv
with open("oldFile.csv", "r") as input:
reader = csv.DictReader(input, delimiter=",")
reader.fieldnames = ["newHeader1", "newHeader2", "newHeader3"]
rows = list(reader)
with open("newFile.csv", "w", newline="") as output:
writer = csv.DictWriter(output, fieldnames=reader.fieldnames, delimiter=",")
writer.writeheader()
for row in rows:
writer.writerow(row)
</code></pre>
<p>The problem is that it doesn't change the headers, but adds new ones.</p>
<p><strong>newFile.csv content:</strong></p>
<pre><code>newHeader1,newHeader2,newHeader3
oldHeader1,oldHeader2,oldHeader3
data01,data02,data03
data11,data12,data13
data21,data22,data23
</code></pre>
<p>However, the first line of the oldFile.csv is recognized as header, the code:</p>
<pre><code>import csv
with open("oldFile.csv", "r") as input:
reader = csv.DictReader(input, delimiter=",")
print(reader.fieldnames)
</code></pre>
<p>shows:</p>
<pre><code>['oldHeader1', 'oldHeader2', 'oldHeader3']
</code></pre>
<p>I don't understand why these values are not overwritten but remain when writing.</p>
<p>I could delete the first line e.g. replacing <code>rows = list(reader)</code> with <code>rows = list(reader)[1:]</code>, but referring to other stackoverflow questions (e.g. <a href="https://stackoverflow.com/a/17039719/3442166">here</a> or <a href="https://stackoverflow.com/a/71951562/3442166">here</a>) I have the impression that it should work without and that I'm doing something wrong.</p>
<p>What's the proper way to achieve this?</p>
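For what it's worth, the behavior follows from how <code>DictReader</code> works: it only consumes the first row as a header when <code>fieldnames</code> is unset at the time the first row is read, so assigning <code>reader.fieldnames</code> up front turns the old header into an ordinary data row. One sketch (using in-memory files so it is self-contained) is to discard the old header yourself and pass the new names at construction:

```python
import csv
import io

# Stand-in for oldFile.csv
old_file = io.StringIO(
    "oldHeader1,oldHeader2,oldHeader3\n"
    "data01,data02,data03\n"
    "data11,data12,data13\n"
    "data21,data22,data23\n"
)

new_names = ["newHeader1", "newHeader2", "newHeader3"]
next(old_file)  # discard the old header line before DictReader sees it
reader = csv.DictReader(old_file, fieldnames=new_names, delimiter=",")
rows = list(reader)

# Stand-in for newFile.csv
new_file = io.StringIO()
writer = csv.DictWriter(new_file, fieldnames=new_names, delimiter=",")
writer.writeheader()
writer.writerows(rows)
```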
|
<python><csv><dictionary>
|
2024-06-14 11:27:56
| 1
| 1,819
|
deltonio2
|
78,622,674
| 5,269,892
|
Pandas performance of fillna vs. boolean masking + writing value
|
<p>In an example dataframe with NaNs in the third column, let us compare the performance of <code>fillna()</code> with boolean masking + setting the value:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(100)
nrows = 10000000
nnan = 25000
df = pd.DataFrame(np.random.uniform(0,250000,size=(nrows,3)))
ind_row = np.random.randint(0,nrows,nnan)
df.loc[ind_row, 2] = np.nan
df1 = df.copy()
%timeit df1[2] = df1[2].fillna(999)
df1 = df.copy()
%timeit df1[2].fillna(999)
df1 = df.copy()
%timeit df1.loc[df1[2].isna(),2] = 999
</code></pre>
<p>I got the following example timings:</p>
<pre><code>35.1 ms ± 369 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
36.4 ms ± 331 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
6.23 ms ± 65.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>and</p>
<pre><code>35.9 ms ± 1.11 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
37.3 ms ± 438 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
6.41 ms ± 38.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
<p>Note that these timings do not seem to depend much on the ratio of NaNs to non-NaN values.</p>
<p><strong>Why does manual boolean masking appear to be faster than <code>fillna()</code>?</strong></p>
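One caveat worth keeping in mind when reading the middle timing: <code>df1[2].fillna(999)</code> without assignment returns a new Series and leaves the frame unchanged, so that variant benchmarks work whose result is discarded. A minimal illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({2: [1.0, np.nan, 3.0]})

# Without assignment, fillna returns a new Series; df still contains the NaN
_ = df[2].fillna(999)
assert df[2].isna().any()

# The two variants that actually modify the frame give the same result
df1 = df.copy()
df1[2] = df1[2].fillna(999)

df2 = df.copy()
df2.loc[df2[2].isna(), 2] = 999
```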
<hr />
<p><strong>Update 1:</strong> The below plot shows the timings for different dataframe sizes generated with the code below and can be compared to @mozway's plot. However, for me there is a clear separation, with boolean masking consistently below <code>fillna(inplace=True)</code> below <code>fillna()</code> for N >= 10^5:</p>
<p><a href="https://i.sstatic.net/LhufRW6d.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhufRW6d.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import timeit
np.random.seed(100)
nrepeats = 5
nruns = 10
def time_fillna(nrows, output):
nnan = int(nrows/4)
df = pd.DataFrame(np.random.uniform(0,250000,size=(nrows,3)))
ind_row = np.random.randint(0,nrows,nnan)
df.loc[ind_row, 2] = np.nan
df1 = df.copy()
output['fillna_assign'].append(np.min(timeit.repeat('df1[2] = df1[2].fillna(999)', globals=locals(), repeat=nrepeats, number=nruns))/nruns)
df1 = df.copy()
output['fillna_only'].append(np.min(timeit.repeat('df1[2].fillna(999)', globals=locals(), repeat=nrepeats, number=nruns))/nruns)
df1 = df.copy()
output['fillna_inplace'].append(np.min(timeit.repeat('df1[2].fillna(999,inplace=True)', globals=locals(), repeat=nrepeats, number=nruns))/nruns)
df1 = df.copy()
output['bool_mask'].append(np.min(timeit.repeat('df1.loc[df1[2].isna(),2] = 999', globals=locals(), repeat=nrepeats, number=nruns))/nruns)
output = {method:[] for method in ['fillna_assign', 'fillna_only', 'fillna_inplace', 'bool_mask']}
nrows_all = np.logspace(3,8,11).astype(int)
for nrows in nrows_all:
print(f'Timing for nrows = {nrows}')
time_fillna(nrows, output)
plt.ion()
plt.plot(nrows_all, output['fillna_assign'], label='fillna_assign')
plt.plot(nrows_all, output['fillna_only'], label='fillna_only')
plt.plot(nrows_all, output['fillna_inplace'], label='fillna_inplace')
plt.plot(nrows_all, output['bool_mask'], label='bool_mask')
plt.loglog()
plt.xlabel('Number of rows')
plt.ylabel('Runtime [s]')
plt.legend()
plt.show()
plt.savefig('timing_fillna.png')
</code></pre>
<hr />
<p><strong>Update 2:</strong> Making sure that NaNs are present in repeated timing runs (see comments by @mozway), I get a different result: <code>fillna(inplace=True)</code> and boolean masking comparable, <code>fillna()</code> only without assignment as the fastest:</p>
<p><a href="https://i.sstatic.net/M6AJqEHp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6AJqEHp.png" alt="enter image description here" /></a></p>
<p>Code:</p>
<pre><code>def time_fillna_revised(nrows, output):
nnan = int(nrows/4)
df = pd.DataFrame(np.random.uniform(0,250000,size=(nrows,3)))
ind_row = np.random.randint(0,nrows,nnan)
df.loc[ind_row, 2] = np.nan
df_insert = df.copy()
t_fillna_assign = []
t_fillna_inplace = []
t_fillna_bool = []
for i in range(nrepeats):
time_temp_assign = 0
time_temp_inplace = 0
time_temp_bool = 0
for j in range(nruns):
df1 = df.copy()
time_temp_assign += timeit.repeat('df_insert[2] = df1[2].fillna(999)', globals=locals(), repeat=1, number=1)[0]
df1 = df.copy()
time_temp_inplace += timeit.repeat('df1[2].fillna(999,inplace=True)', globals=locals(), repeat=1, number=1)[0]
df1 = df.copy()
time_temp_bool += timeit.repeat('df1.loc[df1[2].isna(),2] = 999', globals=locals(), repeat=1, number=1)[0]
t_fillna_assign.append(time_temp_assign)
t_fillna_inplace.append(time_temp_inplace)
t_fillna_bool.append(time_temp_bool)
output['fillna_assign'].append(np.min(t_fillna_assign)/nruns)
output['fillna_inplace'].append(np.min(t_fillna_inplace)/nruns)
output['bool_mask'].append(np.min(t_fillna_bool)/nruns)
df1 = df.copy()
output['fillna_only'].append(np.min(timeit.repeat('df1[2].fillna(999)', globals=locals(), repeat=nrepeats, number=nruns))/nruns)
</code></pre>
<hr />
<p><strong>Update 3:</strong> For NaN-ratio 1/4, (see above plot), the results may be more complicated, see next plot:</p>
<p><a href="https://i.sstatic.net/0bf9vZZC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0bf9vZZC.png" alt="enter image description here" /></a></p>
<p>For NaN-ratio 1/400 (comment by @ken), the timings are:</p>
<p><a href="https://i.sstatic.net/cWsiNVGg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWsiNVGg.png" alt="enter image description here" /></a></p>
<hr />
<p><strong>Running @mozway's code:</strong></p>
<p><a href="https://i.sstatic.net/EDWLDT0Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDWLDT0Z.png" alt="enter image description here" /></a></p>
|
<python><pandas><fillna>
|
2024-06-14 11:25:07
| 1
| 1,314
|
silence_of_the_lambdas
|
78,622,508
| 3,117,006
|
Poetry doesn't see torch when installing other libraries
|
<p>I'm trying to install some package (in this particular example xformers, but this happens for other packages as well).</p>
<p><code>poetry run pip install xformers</code> results in <code>ModuleNotFoundError: No module named 'torch'</code>. (The same will happen if I try <code>poetry add</code>).</p>
<p>However I can see torch installed inside poetry environment via:</p>
<pre><code>poetry run python
>> import torch
>>
</code></pre>
<p>I can see that torch is imported in setup.py of xformers, but I don't understand why that would not use the one inside the poetry environment.</p>
<p>Torch is also installed in the environment that poetry is run in. So what kind of virtual environment is poetry using here that doesn't see torch?</p>
|
<python><python-poetry>
|
2024-06-14 10:47:42
| 2
| 1,010
|
zlenyk
|
78,622,473
| 19,369,310
|
Merging two Pandas dataframes without a primary keys but using latest dates instead
|
<p>I have two pandas dataframes that looks like:</p>
<p><code>df1</code> records the students and their mock exam score and the mock exam date:</p>
<pre><code>ID Mock_Date Student_ID Mock_score
1 14/3/2020 792 213
2 9/5/2020 792 437
3 17/8/2020 792 435
4 4/1/2022 14598 112312
5 29/12/2022 14350 4325
6 3/10/2019 621 523
7 12/8/2020 621 876
8 5/5/2022 621 4324
9 6/9/2022 621 5432
10 6/3/2022 455 34
</code></pre>
<p><code>df2</code> records the students and their actual exam score and the exam date:</p>
<pre><code>Student_ID Date Score
324 14/2/2019 543
792 14/2/2019 9785
792 3/11/2019 7690
621 3/11/2019 324
12 16/3/2020 34234
792 16/3/2020 4235
14598 16/3/2020 975
792 9/5/2020 427
792 17/8/2020 876
621 17/8/2020 986
</code></pre>
<p>And I want to merge <code>df1</code> with <code>df2</code> using the following logic: for a particular row in <code>df2</code> (the actual exam score of a particular student), use the row from <code>df1</code> with mock exam date just before the actual exam date (i.e. the closest date before the actual exam date), and if it doesn't exist, then put NaN. So the desired output looks like:</p>
<pre><code>Student_ID Date Score Mock_Date Mock_score
324 14/2/2019 543 NaN NaN
792 14/2/2019 9785 NaN NaN
792 3/11/2019 7690 NaN NaN
621 3/11/2019 324 3/10/2019 523 #last occurrence before 3/11 is 3/10
12 16/3/2020 34234 NaN NaN
792 16/3/2020 4235 14/3/2020 213 #last occurrence before 16/3 is 14/3
14598 16/3/2020 975 NaN NaN
792 9/5/2020 427 14/3/2020 213 #last occurrence before 9/5 is 14/3
792 17/8/2020 876 9/5/2020 437 #last occurrence before 17/8 is 9/5
621 17/8/2020 986 12/8/2020 876
</code></pre>
<p>I have no idea even how to start.</p>
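The described logic (for each exam row, take the latest mock strictly before the exam date, else NaN) matches what <code>pandas.merge_asof</code> with <code>direction='backward'</code> and <code>allow_exact_matches=False</code> does. A sketch on a subset of the data, assuming the dates are day-first strings:

```python
import pandas as pd

df1 = pd.DataFrame({
    "Mock_Date": ["14/3/2020", "9/5/2020", "17/8/2020", "3/10/2019", "12/8/2020"],
    "Student_ID": [792, 792, 792, 621, 621],
    "Mock_score": [213, 437, 435, 523, 876],
})
df2 = pd.DataFrame({
    "Student_ID": [324, 792, 621, 792, 792, 621],
    "Date": ["14/2/2019", "3/11/2019", "3/11/2019", "9/5/2020", "17/8/2020", "17/8/2020"],
    "Score": [543, 7690, 324, 427, 876, 986],
})

df1["Mock_Date"] = pd.to_datetime(df1["Mock_Date"], dayfirst=True)
df2["Date"] = pd.to_datetime(df2["Date"], dayfirst=True)

# Both frames must be sorted on the asof keys
out = pd.merge_asof(
    df2.sort_values("Date"),
    df1.sort_values("Mock_Date"),
    left_on="Date", right_on="Mock_Date",
    by="Student_ID",
    direction="backward",          # latest mock at or before the exam ...
    allow_exact_matches=False,     # ... but strictly before, per the example
)
```

Restoring the original row order afterwards is a matter of merging on the original index or re-sorting.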
|
<python><pandas><dataframe><merge>
|
2024-06-14 10:40:45
| 1
| 449
|
Apook
|
78,622,415
| 11,829,398
|
How to remove duplication of loguru logger patching in pytest
|
<p>I am testing a function which calls <code>loguru.logger.add("file.log")</code> at the start. This causes issues during pytest execution. The file is written to a temp dir and thus is being used by another process (good ol' Windows) when clean-up happens.</p>
<pre class="lang-py prettyprint-override"><code>PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'path/to/tmp_dir/file.log'
</code></pre>
<p>One solution is to patch <code>loguru.logger.add</code> on each test. But this results in much repeated (boilerplate?) code. For many tests, I don't need to refer to <code>logger.add</code>, just need patch it so the test runs.</p>
<pre class="lang-py prettyprint-override"><code>@patch("loguru.logger.add")
def test_one(mock_logger_add):
...
# Useful to check, but don't want to call this in EVERY test
mock_logger_add.assert_called_once()
@patch("loguru.logger.add")
def test_two(mock_logger_add):
...
# No need to check mock_logger_add, just want my code to run
</code></pre>
<p>How can I reduce this duplication?</p>
<p>Things I've tried:</p>
<ul>
<li>Adding an <code>autouse=True</code> fixture to <code>conftest.py</code> or in the test file/class</li>
</ul>
<p>e.g.</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def patch_logger_add(monkeypatch):
monkeypatch.setattr("loguru.logger.add", lambda *args, **kwargs: None)
# or
# monkeypatch.setattr("loguru.logger.add", MagicMock())
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>@pytest.fixture(autouse=True)
def no_logger_add(monkeypatch):
monkeypatch.delattr("loguru.logger.add")
</code></pre>
<p>These don't work. Perhaps because, in order for loguru to work with pytest, we have to <a href="https://loguru.readthedocs.io/en/0.4.1/resources/migration.html#making-things-work-with-pytest-and-caplog" rel="nofollow noreferrer">redefine <code>caplog</code></a> and that involves calling <code>logger.add</code>.</p>
<p>Note: I do not want to turn loguru off completely because I check the logs for errors in my tests.</p>
<h2>Minimal reproducible example</h2>
<p><code>compute_statistics.py</code></p>
<pre class="lang-py prettyprint-override"><code>from loguru import logger
class ExampleClass:
def compute(self):
logger.add("0_example.log")
logger.info("Starting to compute")
logger.success("Finished computing")
return "Done"
</code></pre>
<p><code>conftest.py</code></p>
<pre class="lang-py prettyprint-override"><code>import pytest
from loguru import logger
from _pytest.logging import LogCaptureFixture
@pytest.fixture
def caplog(caplog: LogCaptureFixture):
handler_id = logger.add(
caplog.handler,
format="{message}",
level=0,
filter=lambda record: record["level"].no >= caplog.handler.level,
enqueue=False, # Set to 'True' if your test is spawning child processes.
)
yield caplog
logger.remove(handler_id)
</code></pre>
<p><code>test_compute_statistics.py</code></p>
<pre class="lang-py prettyprint-override"><code>import logging
import pytest
from compute_statistics import ExampleClass
from unittest.mock import patch
@pytest.fixture(autouse=True)
def patch_logger_add():
with patch("compute_statistics.logger.add"):
yield
def assert_no_errors_logged(caplog):
error_messages = [record for record in caplog.records if record.levelno >= logging.ERROR]
num_errors = len(error_messages)
assert num_errors == 0
def test_example_class(caplog):
ex = ExampleClass()
result = ex.compute()
assert result == "Done"
assert_no_errors_logged(caplog)
</code></pre>
|
<python><logging><pytest><monkeypatching><loguru>
|
2024-06-14 10:28:20
| 1
| 1,438
|
codeananda
|
78,622,328
| 7,089,108
|
How can I calculate the blending factor from a linear combination of two pictures?
|
<p>I have a sequence of images that are very similar and can be approximately represented as a combination of two 'ground truth' images with added noise, in the form:</p>
<pre><code>Image_i ≈ x_i × Image(1) + (1−x_i) × Image(2) + Noise
</code></pre>
<p>where x_i is the mixing coefficient.</p>
<p>I'm looking for the best method in Python to determine the value of x_i for each image in the sequence. The images are noisy and I expect x_i to range between 40% and 70%.</p>
<p>The images are relatively small, but I have thousands of them, so the solution should be computationally efficient.</p>
<p>What would be the best approach to solve this problem?</p>
<p>Edit: I added a third image, which could be a potential Image_i.</p>
<p><a href="https://i.sstatic.net/VC8D8o5t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VC8D8o5t.png" alt="enter image description here" /></a></p>
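Under the stated model, each x_i has a closed-form least-squares solution: writing b = Image(1) - Image(2), we get x_i = <(Image_i - Image(2)), b> / <b, b>, which vectorizes over the whole stack with a single matrix-vector product. A sketch on synthetic data (the images here are random placeholders, not the question's data):

```python
import numpy as np

rng = np.random.default_rng(0)
img1 = rng.uniform(size=(32, 32))   # placeholder "ground truth" images
img2 = rng.uniform(size=(32, 32))

# Synthetic sequence with known mixing coefficients plus noise
true_x = rng.uniform(0.4, 0.7, size=1000)
stack = true_x[:, None, None] * img1 + (1.0 - true_x)[:, None, None] * img2
stack += rng.normal(scale=0.01, size=stack.shape)

# Closed-form per-image least squares: Image_i - img2 ≈ x_i * (img1 - img2)
b = (img1 - img2).ravel()
x_hat = (stack.reshape(len(stack), -1) - img2.ravel()) @ b / (b @ b)
```

With thousands of small images this is one reshape and one matvec, so it stays computationally cheap; if the noise level varies per pixel, a weighted least-squares variant is a straightforward extension.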
|
<python><image-processing><pattern-matching>
|
2024-06-14 10:07:22
| 1
| 433
|
cerv21
|
78,622,130
| 3,906,713
|
How to fix Gekko's `@error: Solution not found`
|
<p>I want to perform parameter inference for an ODE model (<code>m.options.IMODE = 5</code>). I have 2 models in Gekko. First, I generate model data from each model, and perform parameter inference directly on the corresponding model data. That works no problem for each model. Then, I attempt to perform parameter inference on real data, that has roughly the same number of datapoints as the model data. The first model finds a reasonable fit, the second one crashes with <code>@error: Solution not found</code>. I would like to learn the procedure of debugging problems like this.</p>
<p>Firstly, the last few lines of the Gekko error report for <code>m.solve(disp=True)</code> are</p>
<pre><code> Iter Objective Convergence
250 8.46435E+03 2.23511E-01
Maximum iterations
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 101.300599999988 sec
Objective : 8182.67185576400
Unsuccessful with error code 0
---------------------------------------------------
Creating file: infeasibilities.txt
Use command apm_get(server,app,'infeasibilities.txt') to retrieve file
@error: Solution Not Found
</code></pre>
<p><strong>Question 1</strong>: Is it correct to assume that the solver terminated because it reached the maximum number of iterations, and not because of infeasibilities?</p>
<p>I have tried all of the suggestions from <a href="https://stackoverflow.com/questions/56942615/how-to-fix-solution-not-found-error-in-python-gekko-optimal-control-code">this answer</a> regarding reaching maximum iteration number, but to no avail.</p>
<ol>
<li>I have tried random and fixed (somewhat reasonable) initialization for unknown parameters. In all cases, the objective function seems stuck at the exact same value on every iteration. I presume that increasing the number of iterations would not fix this problem.</li>
<li>I have tried APOPT, BPOPT and IPOPT. For APOPT and IPOPT, the solver behaves the same, crashing with <code>Solution Not Found</code> having made no progress on the objective function. With BPOPT, it crashes with <code>WARNING MESSAGE FROM SUBROUTINE MA27BD *** INFO(1) = 3 MATRIX IS SINGULAR</code>.</li>
<li>I have tried using cold start as described in the <a href="https://stackoverflow.com/questions/56942615/how-to-fix-solution-not-found-error-in-python-gekko-optimal-control-code">mentioned answer</a>. For all solvers, I get <code>Successful Pre-solve Solution</code> for the cold start, and still the <code>Solution Not Found</code> for the second part.</li>
</ol>
<p><strong>Question 2</strong>: How do I proceed to interpret what is going wrong and fix this problem?</p>
<p>I'm not allowed to share the original data, but I can share the ODE. For the first model (which fits fine):</p>
<pre><code>m.Equation(A * x.dt() == K * B * y * (x1 - x) + C * (x2 - x))
z_est = x + (x1 - x) * (1 - B)
objective = m.Intermediate((z - z_est)**2)
m.Minimize(objective)
</code></pre>
<p>Here <code>A, B, C</code> are the unknown parameters we seek to infer, <code>K</code> is a known constant, and <code>y, x1, x2, z</code> are known time-dependent continuous variables, which are measured for every timestep. The only difference in the second model (which does not fit) is</p>
<pre><code>B = m.Intermediate(m.exp(-y0 / y))
</code></pre>
<p>Now, the unknown parameters are <code>A, y0, C</code>. There are no additional inequality constraints. The only constraints are specifying very broad upper and lower bounds on the unknown parameters, such as</p>
<pre><code>bounds = dict(A = [0.0001, 10000000], C = [0.0001, 100000], y0 = [0.001, 1000])
</code></pre>
<p><strong>Question 3</strong>: Is there any use in looking at the infeasibilities file? I have looked at it briefly for APOPT, and I have found that <code>POSSIBLE INFEASBILE EQUATIONS</code> section is empty, but there are a lot of complicated lines in the <code>ACTIVE OBJECTIVE EQUATIONS</code>, <code>ACTIVE EQUATIONS</code> and <code>INACTIVE EQUATIONS</code>. Some of the lines in <code>ACTIVE EQUATIONS</code> say <code>not available</code> at the end.</p>
|
<python><optimization><ode><gekko>
|
2024-06-14 09:28:57
| 1
| 908
|
Aleksejs Fomins
|
78,622,087
| 5,800,969
|
How to scrape flutter/dart dynamic renderer website where inspecting web element is not allowed
|
<p>I am scraping a website where many list items and button elements are displayed through the dynamic rendering of a Flutter/Dart site.
I tried to approximate the button positions and used <code>pyautogui</code> to click on various elements, but this solution won't work when there are many buttons/elements to find.</p>
<p>Sharing the source code of the dynamic webpage I am scraping in the attachment below: <a href="https://i.sstatic.net/pzscw0lf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzscw0lf.png" alt="enter image description here" /></a></p>
<p>Can anyone help with this in the right direction? Thanks in advance.</p>
|
<python><flutter><dart><selenium-webdriver>
|
2024-06-14 09:22:25
| 1
| 2,071
|
iamabhaykmr
|
78,621,986
| 2,547,570
|
asyncio.Task is not garbage-collected after using del
|
<p>Why is a task created with <code>asyncio.create_task()</code> in the below not garbage collected?</p>
<pre><code>import asyncio
import gc
c = 0
async def run():
global c
try:
while True:
await asyncio.sleep(1)
c += 1
except Exception as e:
print(e)
async def main():
task = asyncio.create_task(run())
del task
while True:
await asyncio.sleep(0.2)
print(c)
gc.collect()
asyncio.run(main())
</code></pre>
<p>Expected:</p>
<pre class="lang-none prettyprint-override"><code>0
0
0
asyncio.CancelledError
</code></pre>
<p>Actual:</p>
<pre class="lang-none prettyprint-override"><code>0
0
0
0
0
1
1
1
1
1
..
101
101
101
101
101
102
...
</code></pre>
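For comparison, a minimal sketch of stopping the task deterministically: while a scheduled task is suspended in <code>asyncio.sleep</code>, the running event loop holds its own references to it (the timer handle keeps the callback that resumes the coroutine), so dropping your local name does not make it collectable. Explicit cancellation, on the other hand, reliably delivers <code>CancelledError</code>:

```python
import asyncio

cancelled = False

async def run():
    global cancelled
    try:
        while True:
            await asyncio.sleep(0.01)
    except asyncio.CancelledError:
        cancelled = True   # the task does observe explicit cancellation
        raise

async def main():
    task = asyncio.create_task(run())
    await asyncio.sleep(0.05)
    task.cancel()          # instead of del task / gc.collect()
    try:
        await task
    except asyncio.CancelledError:
        pass

asyncio.run(main())
```

This is also why the asyncio documentation recommends keeping your own reference to created tasks: the loop's internal references are an implementation detail and not a lifetime guarantee.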
|
<python><garbage-collection><python-asyncio>
|
2024-06-14 09:00:13
| 1
| 1,319
|
mq7
|
78,621,909
| 3,067,732
|
constant-memory vectorized sum of function values with jax
|
<p>Let's say I have a function <code>f</code> that takes an integer argument and returns a fixed-size array. I want to evaluate the sum of <code>f(i)</code> over <code>range(N)</code> for some large <code>N</code>, such that storing all values in memory becomes problematic. With a simple <code>for</code> loop I can fix this easily and evaluate the sum with constant memory use:</p>
<pre><code>import jax.numpy as jnp
f = lambda i : i*jnp.identity(1000)+i # A simple function that will quickly eat up memory
result = 0. # Generic initialization - works with any array-like function.
for i in range(N):
result += f(i)
</code></pre>
<p>but then the <code>for</code> loop is very slow. On the other hand, if I write this using jax <code>vmap</code>,</p>
<pre><code>result = jnp.sum(vmap(f)(jnp.arange(N)),axis=0)
</code></pre>
<p>I'm also in trouble because all values are evaluated before the sum is done, and I'm eating up all the memory.</p>
<p>What would be the right way to vectorize this sum using jax? I've looked for it for a while, and couldn't find an elegant solution.</p>
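One pattern that fits this description, assuming the body can be traced by JAX, is <code>jax.lax.fori_loop</code> (or <code>lax.scan</code> with no stacked output): only the running accumulator is live at any time, while the loop still compiles to a single fused computation instead of a slow Python <code>for</code>:

```python
import jax
import jax.numpy as jnp

f = lambda i: i * jnp.identity(100) + i   # smaller stand-in for the example

N = 1000
result = jax.lax.fori_loop(
    0, N,
    lambda i, acc: acc + f(i),            # only the accumulator stays in memory
    jnp.zeros((100, 100)),
)
```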
|
<python><jax>
|
2024-06-14 08:42:05
| 1
| 810
|
pierre
|
78,621,648
| 19,392,385
|
Text shifted in legend because of LaTeX formatting in matplotlib
|
<p>For some reason in my subplot, the legend of the second axis is cropped at the bottom <strong>inside</strong> the legend box. It seems the subscript causes the text to shift a bit.</p>
<p>The legend box itself is perfectly fine. I've checked whether the grid or the tight layout was responsible, but they weren't. What could cause such an issue?</p>
<p><a href="https://i.sstatic.net/QarhPmnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QarhPmnZ.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
SMALL_SIZE = 15
MEDIUM_SIZE = 18
BIGGER_SIZE = 29
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=MEDIUM_SIZE) # fontsize of the axes title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the tick labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
plt.rc('figure', titlesize=BIGGER_SIZE) # fontsize of the figure title
matplotlib.rcParams['mathtext.fontset'] = 'cm'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
# Data
V_plus = np.array([92, 191, 283, 490, 660, 935, 1430, 2210, 2660, 3000]) * 1e-3
V_minus = np.array([97, 191, 285, 480, 665, 940, 1410, 2290, 2680, 3000]) * 1e-3
input_power = np.array([100, 201, 300, 503, 700, 990, 1500, 2500, 2980, 3500]) * 1e-3
# Linear regression
model_plus = LinearRegression()
r_plus = model_plus.fit(input_power.reshape(-1, 1), V_plus)
V_plus_pred = model_plus.predict(input_power.reshape(-1, 1))
model_minus = LinearRegression()
r_minus = model_minus.fit(input_power.reshape(-1, 1), V_minus)
V_minus_pred = model_minus.predict(input_power.reshape(-1, 1))
# Create figure and subplots
fig, ax = plt.subplots(1, 2, figsize=(15, 7))
ax1 = ax[0]
ax2 = ax[1]
# Plot for V_plus
ax1.scatter(input_power, V_plus, color='blue', label='Data')
ax1.plot(input_power, V_plus_pred, color='red', label=fr"""Linear fit: {model_plus.coef_[0]:.3f} $\times V_+$""")
ax1.set_xlabel('Input Power (mW)')
ax1.set_ylabel('$V_+$ (V)')
ax1.set_title('$V_+$ vs Input Power')
ax1.grid(True)
ax1.legend(loc='lower right')
# Plot for V_minus
ax2.scatter(input_power, V_minus, color='green', label=r'Data')
ax2.plot(input_power, V_minus_pred, color='red', label=fr"""Linear fit: {model_minus.coef_[0]:.3f} $\times V_-$""")
ax2.set_xlabel('Input Power (mW)')
ax2.set_ylabel('$V_{-}$ (V)')
ax2.set_title('$V_{-}$ vs Input Power')
ax2.grid(True)
ax2.legend(loc='lower right')
plt.tight_layout()
plt.show()
</code></pre>
|
<python><matplotlib><plot><legend>
|
2024-06-14 07:35:39
| 0
| 359
|
Chris Ze Third
|
78,621,593
| 14,833,503
|
PCA in Python: Reproducing pca.fit_transform() results using pca.fit()?
|
<p>I have a data frame called <code>data_principal_components</code> with dimensions (306x21154), so 306 observations and 21154 features. Using PCA, I want to project the data into 10 dimensions.</p>
<p>As far as I understand, the following code does this. The resulting matrix (<code>projected</code>) has a dimension of (306x10).</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
# Sample data:
# Define the dimensions of the DataFrame
num_rows = 306
num_cols = 21154
# Generate random numbers from a normal distribution
data = np.random.randn(num_rows, num_cols)
# Create a DataFrame from the random data
data_principal_components = pd.DataFrame(data)
pca = PCA(10)
projected = pca.fit_transform(data_principal_components)
</code></pre>
<p>To better understand how the code works, I wanted to reproduce the result of <code>pca.fit_transform()</code> manually.</p>
<p>Based on my research, I found the following steps:</p>
<pre><code>pc_components = pca.components_ # This gives the eigenvectors
pc_components = pc_components.transpose() # Transposes the eigenvectors, so it has the dimensions (21154x10)
eigenvalues = pca.explained_variance_ # These are the eigenvalues with dimensions (1x10)
</code></pre>
<p>Now, as I understand, one can calculate the loadings using the following code based on the formula $\text{loadings} = \text{eigenvectors} \times \sqrt{\text{eigenvalues}}$ :</p>
<pre><code># Create an empty DataFrame
df = pd.DataFrame()
# Iterate over eigenvalues
for i in range(len(eigenvalues)):
result = np.dot(pc_components[:, i], np.sqrt(eigenvalues[i]))
df[f'Result_{i+1}'] = result # Assign result as a new column in the DataFrame
loadings = df
</code></pre>
<p>After obtaining the loadings with dimensions (21154x10), I wanted to use them to obtain the projected values with $ \text{Actual values} \times \text{loadings}$ resulting in dimensions (306x21154) $\times$ (21154x10) = (306x10):</p>
<pre><code>test = np.dot(data_principal_components, loadings)
</code></pre>
<p>However, when I compare <code>test</code> to <code>projected</code>, the values differ substantially. Where am I wrong?</p>
<p><strong>EDIT</strong></p>
<p>I found this way to extract the loadings. However, I still want to derive them semi-manually; can someone help?</p>
<pre><code>pca = PCA(10) # project from 64 to 2 dimensions
projected = pca.fit_transform(data_principal_components)
loadings = pd.DataFrame(pca.components_.T, columns=['PC1', 'PC2','PC3', 'PC4','PC5', 'PC6','PC7', 'PC8','PC9', 'PC10'], index=data_principal_components.columns)
loadings
</code></pre>
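For reference, scikit-learn's transform multiplies the *centered* data by the eigenvectors (`components_`), not by the loadings; the loadings mix in the eigenvalue scaling, which is why `test` differs from `projected`. A minimal sketch (the matrix sizes here are illustrative, smaller than the question's for speed):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.standard_normal((306, 50))   # smaller feature count, same idea

pca = PCA(10)
projected = pca.fit_transform(X)

# Manual reproduction: subtract the fitted mean, then project onto the
# eigenvectors (rows of components_), NOT onto the loadings.
manual = (X - pca.mean_) @ pca.components_.T
print(np.allclose(projected, manual))
```

Both the centering step and using `components_.T` instead of the eigenvalue-scaled loadings are needed for the two results to agree.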
|
<python><scikit-learn><pca>
|
2024-06-14 07:21:29
| 1
| 405
|
Joe94
|
78,621,470
| 4,731,848
|
Modulus `%%` giving unexpected result for `3.1 %% 0.1`. Is this expected?
|
<p>I'm using the base modulus (<code>%%</code>) operator, and just struggling to understand the following behaviour. My understanding is that <code>%%</code> returns the remainder after dividing 2 numbers. So eg:</p>
<pre class="lang-r prettyprint-override"><code>5 %% 2
# [1] 1
6 %% 2
# [1] 0
</code></pre>
<p>Since after dividing 5 by 2 the remainder is 1, and after dividing 6 by 2 the remainder is 0. It also works with decimals, e.g.:</p>
<pre class="lang-r prettyprint-override"><code>3.2 %% 0.1
# [1] 0
</code></pre>
<p>I also realise that floating point issues might come up in some cases, e.g.:</p>
<pre class="lang-r prettyprint-override"><code>12345.2 %% 0.1
# [1] 4.263256e-14
</code></pre>
<p>I assume that's because the floating point of 12345.2 is actually 12345.2000000000000042632... No issue there.</p>
<p>Where I'm getting confused is here:</p>
<pre class="lang-r prettyprint-override"><code>3.1 %% 0.1
# [1] 0.1
</code></pre>
<p>Why is there 0.1 remainder in the above? I was assuming the result would be 0 (or very close to zero if there's a floating point issue). Am I missing something?</p>
<p>I expected any value with 1 or fewer decimal points of precision would give 0, but there are lots of results that seem inconsistent with that (note I'm rounding the output just to avoid the "e-14" results like above):</p>
<pre class="lang-r prettyprint-override"><code>data.frame(num = seq.int(3, 5, 0.1)) |>
dplyr::mutate(result = round(num %% 0.1, 4))
# num result
# 1 3.0 0.1
# 2 3.1 0.1
# 3 3.2 0.0
# 4 3.3 0.1
# 5 3.4 0.1
# 6 3.5 0.1
# 7 3.6 0.1
# 8 3.7 0.1
# 9 3.8 0.1
# 10 3.9 0.1
# 11 4.0 0.1
# 12 4.1 0.1
# 13 4.2 0.1
# 14 4.3 0.1
# 15 4.4 0.0
# 16 4.5 0.1
# 17 4.6 0.1
# 18 4.7 0.1
# 19 4.8 0.1
# 20 4.9 0.0
# 21 5.0 0.1
</code></pre>
<p>If I multiply all values (both numbers and the modulo operator) by 10, I get the behaviour I expect:</p>
<pre class="lang-r prettyprint-override"><code>data.frame(num = seq.int(30, 50, 1)) |>
dplyr::mutate(result = round(num %% 1, 4))
# num result
# 1 30 0
# 2 31 0
# 3 32 0
# 4 33 0
# 5 34 0
# 6 35 0
# 7 36 0
# 8 37 0
# 9 38 0
# 10 39 0
# 11 40 0
# 12 41 0
# 13 42 0
# 14 43 0
# 15 44 0
# 16 45 0
# 17 46 0
# 18 47 0
# 19 48 0
# 20 49 0
# 21 50 0
</code></pre>
<p>Worth noting that what I'm seeing is consistent in Python too. This shows the same behaviour:</p>
<pre class="lang-py prettyprint-override"><code>import math
[math.fmod(x/10.0, 0.1) for x in range(30, 50, 1)]
</code></pre>
<p>Is there something I'm missing?</p>
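For context, the stored double for 3.1 falls just *short* of 31 whole steps of the stored double for 0.1, so the exact remainder is 30 steps in plus almost one full step, i.e. a hair under 0.1; R's default 7 significant digits then print it as 0.1. The same doubles can be inspected from Python:

```python
from decimal import Decimal

# The doubles actually stored are not exactly 3.1 and 0.1:
print(Decimal(3.1))   # 3.100000000000000088817841970012523...
print(Decimal(0.1))   # 0.100000000000000005551115123125782...

# Stored 3.1 is slightly LESS than 31 * stored 0.1, so the floored
# quotient is 30 and the remainder is nearly a whole step of 0.1:
print(3.1 // 0.1)     # 30.0
print(3.1 % 0.1)      # ~0.0999999999999999 (just under 0.1)

# Plain division, by contrast, rounds UP to exactly 31.0, which is why
# "divide then floor" and the modulo operator appear to disagree:
print(3.1 / 0.1)      # 31.0
```

So nothing is broken: the remainder really is ~0.0999999999999999, and only the default print precision makes it look like exactly 0.1.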
|
<python><r><modulo>
|
2024-06-14 06:55:56
| 2
| 5,640
|
rosscova
|
78,621,461
| 2,993,606
|
Password-protect an excel file on DataBricks environment
|
<p>I have a requirement to password-protect an Excel file (meaning no one should be able to read it unless they have the password). I want to use Databricks because that's where the file is generated. I have explored some options online, but they either don't provide the functionality (openpyxl, pandas) or need a Windows machine (pywin32); since Databricks uses Linux machines, I can't use that library. Can someone please recommend the way forward?</p>
|
<python><excel><azure-databricks>
|
2024-06-14 06:54:22
| 1
| 5,263
|
SouravA
|
78,621,378
| 1,818,059
|
How do I best scale png to 1 bit BMP (pyvips or PIL)
|
<p>For a project I need to convert a PDF to strips of (large) BMP images, in 1 bit.</p>
<p>Quality is not the major issue here, but conversion speed is.</p>
<p>The input is a PDF document, the output is a series of BMP images, all around 65.000 x 4800 px.</p>
<p>I can't post the actual PDF, so as example I include a strip from the IKEA BILLY assembly instruction.</p>
<p><a href="https://i.sstatic.net/7XvcsBeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7XvcsBeK.png" alt="Example of a strip from PDF" /></a></p>
<p>This strip is PNG format, but can also be JPG, PBM or others. Creation of strips is done using pyvips and is very fast.</p>
<p>Problem is how I get the final output. I know the command in ImageMagick, but it's painfully slow. Over 10 seconds per strip.</p>
<p>Imagemagick Command:</p>
<pre><code>magick Billy_04.jpg -resize 65000x -dither FloydSteinberg -remap pattern:gray50 -colors 2 -monochrome BMP3:Billy_04_converted.bmp
</code></pre>
<p>I do not need any subsampling or similar, it's fine if the output is slight blocky.</p>
<p>Is there a way to do this conversion fast in Python ? I am not tied to pyvips.</p>
<p>The script I use to make the strips:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
import sys
import pyvips
import math
destinationwidth = 60000
destinationheight = 4700
image = pyvips.Image.new_from_file(sys.argv[1])
for i in range(1): # for the example, limit to first page only.
# for i in range(image.get('n-pages')):
image = pyvips.Image.new_from_file(sys.argv[1], page=i, scale=10)
mywidth = image.width
myheight = image.height
scale_in_width = destinationwidth/mywidth
stripheight = math.floor(destinationheight/scale_in_width)
stripcount = myheight/stripheight
stripcountInt = math.ceil(stripcount)
print("Width = " , mywidth, " height = ", myheight)
print("Strip height = ", stripheight, " amount of strips = ", stripcount , " (round to) ", stripcountInt)
theheight = stripheight
verticalpos = 0
for s in range(stripcountInt):
print(f"Strip number {s} at {s*stripheight} : ")
thestrip = image.crop( 0, s*stripheight, mywidth, theheight)
verticalpos += theheight
# do not want to go outside area
if (verticalpos + stripheight) > myheight:
theheight = myheight - verticalpos-1
thestrip.write_to_file(f"Pstrip_{s}.png")
</code></pre>
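If exact ImageMagick parity is not required, Pillow alone can handle the 1-bit step: `convert('1')` applies Floyd–Steinberg dithering by default and BMP supports that bit depth directly. A minimal sketch (the output path and image size are illustrative, not from the question):

```python
import os
import tempfile
from PIL import Image

# Stand-in for one rendered strip; in the real pipeline this would be the
# PNG written by pyvips (possibly after .resize((65000, h), Image.LANCZOS)).
img = Image.new('RGB', (400, 200), 'white')

# convert('1') dithers with Floyd-Steinberg by default and yields a
# 1-bit image that BMP can store natively.
bw = img.convert('1')
out = os.path.join(tempfile.gettempdir(), 'strip.bmp')
bw.save(out)
print(Image.open(out).mode)
```

Whether this beats ImageMagick at the 65000-px width would need measuring, but it avoids spawning an external process per strip.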
|
<python><image-processing><bitmap><vips>
|
2024-06-14 06:31:13
| 1
| 1,176
|
MyICQ
|
78,621,282
| 5,844,307
|
How to execute Python modules on MacOS terminal
|
<p>The problem with this question is that I do not know the proper words to refer to my problem. Let me be clear about what I'm not asking. I'm not asking how to execute a Python file. I can already do that: simply cd to the directory the Python file is in, then run</p>
<pre><code>>>>python module.py
</code></pre>
<p>What I cannot do is the following: on this webpage:</p>
<p><a href="https://github.com/pytube/pytube" rel="nofollow noreferrer">https://github.com/pytube/pytube</a></p>
<p>You can download the software, then I think you have to put a new line in the <code>bash</code> file and then when you type in command line</p>
<p>$ pytube <a href="https://www.youtube.com/playlist?list=PLS1QulWo1RIaJECMeUT4LFwJ-ghgoSH6n" rel="nofollow noreferrer">https://www.youtube.com/playlist?list=PLS1QulWo1RIaJECMeUT4LFwJ-ghgoSH6n</a></p>
<p>It will execute the module. I don't know how to do that and am asking how. Doubtlessly, this question is a duplicate but I do not know the proper keywords needed to find the answer to this question. When I google 'how to execute python modules from command line', all I get is instructions on how to do just that which I already know how to do.</p>
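For background, two separate mechanisms are likely at play here (no bash-file editing required). First, packages installed with pip that declare a console-script entry point (pytube does) get an executable dropped into your environment's bin directory, which is already on PATH. Second, `python -m` runs any importable module from any directory. A sketch, with the install lines left as comments since they need network access:

```shell
# Mechanism 1: pip creates a "pytube" executable on your PATH
# (URL shortened here for illustration):
#   pip install pytube
#   pytube "https://www.youtube.com/watch?v=..."

# Mechanism 2: `python -m` runs an importable module from any directory;
# a stdlib example you can try immediately:
echo '{"a": 1}' | python3 -m json.tool
```

Searching for "pip console_scripts entry point" should surface how packages wire up the first mechanism.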
|
<python><macos>
|
2024-06-14 06:01:02
| 2
| 581
|
logic1976
|
78,621,128
| 5,938,276
|
VSCode and debugging in venv
|
<p>I have an issue in a Python project in VSCode where the debugger does not stop at breakpoints and I am unable to inspect variables. The environment is Windows 10 and the project uses a venv.</p>
<p>I was able to get the debugger to eventually respect a breakpoint by adding <code>"justMyCode": false</code> per this answer: <a href="https://stackoverflow.com/questions/56794940/vscode-why-isnt-debugger-stopping-at-breakpoints">VSCode: Why isn't debugger stopping at breakpoints?</a></p>
<p>But I am still unable to watch variable values.</p>
<p>I setup in my venv a simple test.py file to test this. The file contains:</p>
<pre><code>print("Hello")
a= 5
b = 10
c = 11
</code></pre>
<p>If I break on the last line I can not see the values of a or b:</p>
<p><a href="https://i.sstatic.net/yrnWsZB0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yrnWsZB0.png" alt="enter image description here" /></a></p>
<p>I suspect this is something to do with the venv as if I create the same file not in a venv and in a new project I can inspect the values.</p>
<p>Note I do think I have the correct venv selected in VSCode (polyphase):</p>
<p><a href="https://i.sstatic.net/DOUmN34E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DOUmN34E.png" alt="enter image description here" /></a></p>
<p>It appears the issue is related to the venv - but I can not see how to inspect the variable values.</p>
|
<python><visual-studio-code><vscode-debugger>
|
2024-06-14 05:11:47
| 1
| 2,456
|
Al Grant
|
78,620,930
| 6,676,101
|
What is a regular expression for the letter "i" all by itself?
|
<p>In the following English sentences, the letter <code>i</code> is not capitalized correctly.</p>
<pre class="lang-None prettyprint-override"><code>with this notice know that i inscribe messages to inform.
a boy named "Will" said into a microphone that time is like a high-five.
i give lines to computers to find cities and sites.
i am my eyes, and to view is to exist (cuanda tu ve, tu comprendes el significado del 'hay')
view that i will not be processing your application any further because i am eating an apple.
--------------------------------------------------------------------------------
wIth thIs notIce know that I InscrIbe messages to Inform.
a boy named "WIll" saId Into a mIcrophone that tIme Is lIke a hIgh-fIve.
I gIve lInes to computers to fInd cItIes and sItes.
I am my eyes, and to vIew Is to exIst (cuanda tu ve, tu comprendes el sIgnIfIcado del 'hay')
vIew that I wIll not be processIng your applIcatIon any further because I am eatIng an apple.
</code></pre>
<p>What Python script will find the letter <code>i</code> when it is not inside of another word?</p>
<p>The idea is to ignore the letter <code>i</code> when it occurs inside of a larger word. We wish to locate the letter only when it is isolated and/or used as a first-person singular pronoun.</p>
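One common approach is a word-boundary pattern: `\b` matches between a word character and a non-word character, so an `i` embedded inside a larger word never matches. A sketch using the question's first sample line (note the caveat that contractions like "i'm" would also match, since an apostrophe is a non-word character):

```python
import re

line = 'with this notice know that i inscribe messages to inform.'

# \b...\b restricts the match to an isolated "i"; the "i" in "this",
# "notice", "inscribe", etc. is surrounded by word characters and skipped.
fixed = re.sub(r'\bi\b', 'I', line)
print(fixed)
```

`re.findall(r'\bi\b', text)` would locate the occurrences instead of replacing them.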
|
<python><regex>
|
2024-06-14 03:37:07
| 1
| 4,700
|
Toothpick Anemone
|
78,620,797
| 1,651,270
|
defaultdict ignores its default_factory argument when assigned explicitly
|
<p>I ran into this problem when working with defaultdict. Here's a program that demonstrates it:</p>
<pre><code>from collections import defaultdict
d1 = defaultdict(default_factory=dict)
d2 = defaultdict(dict)
print("d1's default_factory:", d1.default_factory)
print("d2's default_factory:", d2.default_factory)
try:
d1['key'].update({'a': 'b'})
except KeyError:
print("d1 caused an exception")
try:
d2['key'].update({'a': 'b'})
except KeyError:
print("d2 caused an exception")
</code></pre>
<p>The above outputs:</p>
<pre><code>d1's default_factory: None
d2's default_factory: <class 'dict'>
d1 caused an exception
</code></pre>
<p>Should this happen?</p>
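The output follows from how `defaultdict` treats its arguments: only the first *positional* argument becomes the factory, while keyword arguments are forwarded to the underlying dict initializer as ordinary items. A small demonstration of where the keyword actually went:

```python
from collections import defaultdict

d = defaultdict(default_factory=dict)

# The keyword was not used as the factory; it was stored as a dict item,
# exactly as dict(default_factory=dict) would store it:
print(d.default_factory)       # None
print(d['default_factory'])    # <class 'dict'>
```

So `defaultdict(dict)`, with the factory passed positionally, is the only supported spelling.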
|
<python>
|
2024-06-14 02:33:40
| 1
| 308
|
Nick
|
78,620,595
| 1,806,566
|
How can I print to a pipe in python?
|
<p>I'm creating a pipe with:</p>
<pre><code>popen = subprocess.Popen(command, stdin=subprocess.PIPE)
</code></pre>
<p>and I would like to print to popen.stdin. The key is that I'd like to print(..., file=popen.stdin) to it, not call write() on it, but popen.stdin is a _io.BufferedWriter which only supports writing bytes, not printing strings.</p>
<p>The reason I cannot just use the write method of popen.stdin is that I need to hand it off to code that doesn't know it's writing to a pipe, so it's printing strings. I assume I need some sort of file-like thing that takes strings, encodes them, and then writes the bytes to the BufferedWriter.</p>
<p>I assume such a thing exists, but I'm rather new to python. Can someone point me in the right direction?</p>
|
<python><python-3.x>
|
2024-06-14 00:35:38
| 0
| 1,241
|
user1806566
|
78,620,466
| 11,098,908
|
Why does a Python class need to rely on a function defined outside of its scope?
|
<p>I came across the following code that <a href="https://stackoverflow.com/questions/64867475/how-to-put-a-health-bar-over-the-sprite-in-pygame">draws a health bar above a sprite</a></p>
<pre><code>import pygame
import math
def draw_health_bar(surf, pos, size, borderC, backC, healthC, progress):
pygame.draw.rect(surf, backC, (*pos, *size))
pygame.draw.rect(surf, borderC, (*pos, *size), 1)
innerPos = (pos[0]+1, pos[1]+1)
innerSize = ((size[0]-2) * progress, size[1]-2)
rect = (round(innerPos[0]), round(innerPos[1]), round(innerSize[0]), round(innerSize[1]))
pygame.draw.rect(surf, healthC, rect)
class Player(pygame.sprite.Sprite):
def __init__(self, x, y):
pygame.sprite.Sprite.__init__(self)
self.original_image = pygame.image.load('CarBlue64.png')
self.original_image = pygame.transform.rotate(self.original_image, 90)
self.image = self.original_image
self.rect = self.image.get_rect(center=(x, y))
self.direction = pygame.math.Vector2((0, -1))
self.velocity = 5
self.position = pygame.math.Vector2(x, y)
self.health = 10
...
def draw_health(self, surf):
health_rect = pygame.Rect(0, 0, self.original_image.get_width(), 7)
health_rect.midbottom = self.rect.centerx, self.rect.top
max_health = 10
draw_health_bar(surf, health_rect.topleft, health_rect.size,
(0, 0, 0), (255, 0, 0), (0, 255, 0), self.health/max_health)
</code></pre>
<p>Can someone please help me understand the following questions:</p>
<ol>
<li>Why was the function <code>draw_health_bar</code> declared outside of the class <code>Player</code>?</li>
<li>Can I move the function <code>draw_health_bar</code> inside the class <code>Player</code>, then define the function <code>draw_health</code> inside the <code>Player</code>'s child class? For example:</li>
</ol>
<pre><code>class Player(pygame.sprite.Sprite):
def __init__(self, x, y):
pygame.sprite.Sprite.__init__(self)
...
def draw_health_bar(surf, pos, size, borderC, backC, healthC, progress):
pygame.draw.rect(surf, backC, (*pos, *size))
pygame.draw.rect(surf, borderC, (*pos, *size), 1)
innerPos = (pos[0]+1, pos[1]+1)
innerSize = ((size[0]-2) * progress, size[1]-2)
rect = (round(innerPos[0]), round(innerPos[1]), round(innerSize[0]), round(innerSize[1]))
pygame.draw.rect(surf, healthC, rect)
...
class A(Player):
def __init__(self, x, y):
super().__init__(x, y)
...
def draw_health(self, surf):
health_rect = pygame.Rect(0, 0, self.original_image.get_width(), 7)
health_rect.midbottom = self.rect.centerx, self.rect.top
max_health = 10
super().draw_health_bar(surf, health_rect.topleft, health_rect.size,
(0, 0, 0), (255, 0, 0), (0, 255, 0), self.health/max_health)
</code></pre>
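As general background (a toy sketch without pygame, so names here are illustrative): a helper that uses no instance state works equally well as a module-level function or as a `@staticmethod` on the class, and a subclass can reach an inherited method through `self` or the class name; `super()` is only needed when the subclass overrides the method and still wants the parent version.

```python
class Player:
    # A helper with no instance state can live on the class as a
    # @staticmethod instead of a module-level function; both are valid.
    @staticmethod
    def health_fraction(health, max_health):
        return health / max_health

class Ally(Player):
    def bar_width(self, health, total_px=64):
        # Inherited attributes are reachable through self (or Player.);
        # a plain lookup like this does not require super().
        return int(total_px * self.health_fraction(health, 10))

print(Ally().bar_width(5))
```

Module-level helpers are often preferred simply because they can be reused by unrelated classes; the choice is stylistic, not required by Python.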
|
<python><pygame>
|
2024-06-13 23:17:14
| 1
| 1,306
|
Nemo
|
78,620,440
| 1,196,033
|
How can I tell Pyyaml to convert PosixPath to str in yaml.dump?
|
<p>I understand that YAML allows you to represent arbitrary data using tags. I would like to convert a PosixPath to a str instead of representing it as a <code>!!pathlib.PosixPath</code>. How can I tell pyyaml to convert PosixPath to str every time it encounters the type?</p>
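PyYAML's `add_representer` hook covers this: registering a representer for the concrete Path classes emits each one as a plain string scalar. A sketch (the sample path is illustrative):

```python
import pathlib
import yaml

def _path_representer(dumper, data):
    # Emit the path as an ordinary YAML string scalar.
    return dumper.represent_scalar('tag:yaml.org,2002:str', str(data))

# pathlib.Path instantiates to PosixPath or WindowsPath, so register both
# concrete classes (representer lookup matches the exact type).
for cls in (pathlib.PosixPath, pathlib.WindowsPath):
    yaml.add_representer(cls, _path_representer)

print(yaml.dump({'log_dir': pathlib.Path('/var/log/app')}))
```

For `yaml.safe_dump`, pass `Dumper=yaml.SafeDumper` to `add_representer` as well.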
|
<python><yaml><pyyaml>
|
2024-06-13 23:02:17
| 1
| 1,766
|
xaviersjs
|
78,620,432
| 1,313,890
|
PySpark NOT_COLUMN_OR_STR Exception on Disconnected List
|
<p>I am getting an odd pyspark exception when attempting to use <code>filter</code> and <code>lambda</code> functions on a list of ints I've collected from a pyspark dataframe, which makes no sense as the data exists in memory as a list and should be completely disconnected from pyspark. Here is the scenario.</p>
<p>I have a method that takes a dataframe and a column name and converts the values in the given column to a unique list of values. See below:</p>
<pre><code>def collectListItems(dfData:DataFrame, fieldToCollect:str) -> list:
dfData = dfData.groupby(fieldToCollect).count().select(fieldToCollect).collect()
valueList = [f[fieldToCollect] for f in dfData]
return valueList
</code></pre>
<p>I sometimes use this when I need to hold onto a list of Ids for operation in other tables after I've deleted data from a datasource or if I'm looking to determine the differences between two sets of data. This method has been working fine for months for a variety of different use cases.</p>
<p>Today, I was trying to take a different approach to getting the delta between two tables. Essentially, I have a customers table and a calls table (which has a FK reference to customers). I wanted to find any customers who have existed for more than x months in our system but haven't had any calls logged in more than y months. Not a particularly difficult problem and I have a few different approaches to solving this.</p>
<p>However, I wanted to try something new so I used the above method to collect all the customerIds created more than x months ago and a second list was collected that had all of the customerIds with calls in the last 6 months:</p>
<pre><code>_createDate = date.today() + relativedelta(months=-12)
_archiveDate = date.today() + relativedelta(months=-6)
dfCustomers = spark.read.table('customers') \
.where(col('createdOn') < _createDate) \
.alias('cust')
dfActiveCalls = spark.read.table('calls') \
.where(col('calledOn') > _archiveDate) \
.alias('c')
dfActiveCustomers = dfActiveCalls.join(dfCustomers, col('c.customerId') == col('cust.customerId'), 'inner') \
.select(col('cust.CustomerId'), col('c.callId')) \
.groupBy('customerId') \
.agg(count('callId').alias('totalCalls'))
currentCustomers = collectListItems(dfCustomers, 'customerId')
activeCustomers = collectListItems(dfActiveCustomers, 'customerId')
</code></pre>
<p>At this point, currentCustomers and activeCustomers are now python lists. I double checked their type (<code>print(type(currentCustomers))</code> returns <code><class 'list'></code>) and the data is definitely a list of Ids (<code>print(currentCustomers)</code> returns <code>[1, 5, 8, 16, 43...]</code>). So at this point, the assumption is that the data is now stored in memory as a python list and no longer reliant on or tied to pyspark in any way. However, when I try the following:</p>
<pre><code>inactiveCustomers = list(filter(lambda x: all(y not in x for y in activeCustomers), currentCustomers ))
</code></pre>
<p>I get the following exception:</p>
<pre><code> [NOT_COLUMN_OR_STR] Argument `col` should be a Column or str, got function.
----> 1 inactiveCustomers = list(filter(lambda x: all(y not in x for y in activeCustomers), currentCustomers))
File /databricks/spark/python/pyspark/sql/column.py:66, in _to_java_column(col)
64 jcol = _create_column_from_name(col)
65 else:
---> 66 raise PySparkTypeError(
67 error_class="NOT_COLUMN_OR_STR",
68 message_parameters={"arg_name": "col", "arg_type": type(col).__name__},
69 )
70 return jcol
</code></pre>
<p>I am able to work around the problem by simply taking a different approach, but I'm curious as to why a pyspark exception is being thrown when executing code that involves no pyspark component. If those lists are truly stored as in memory lists in python, the exception makes no sense. It's almost as if the lists aren't truly being loaded into memory and are still being lazily evaluated by pyspark. But I thought <code>collect()</code> forces evaluation and loads the data into memory. Is that not the case? Why is pyspark throwing this exception?</p>
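One plausible explanation, assuming the notebook does `from pyspark.sql.functions import *` somewhere: that module exports its own `filter(col, f)` (the array-column function, Spark 3.1+), which shadows the builtin `filter`, so the pure-Python lists never enter into it at all. A stdlib-only simulation of the shadowing (the stand-in function below is hypothetical, mimicking the PySpark signature):

```python
import builtins

# Stand-in with the same (col, function) signature as
# pyspark.sql.functions.filter, which a star import would bind over the
# builtin name:
def filter(col, f):
    raise TypeError('Argument `col` should be a Column or str, got function.')

try:
    list(filter(lambda x: x > 1, [1, 2, 3]))   # resolves to the shadow
except TypeError as e:
    print(e)

# The original builtin is still reachable explicitly:
print(list(builtins.filter(lambda x: x > 1, [1, 2, 3])))
```

If this is the cause, importing only the names you need (`from pyspark.sql.functions import col, count`) or calling `builtins.filter` would confirm it.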
|
<python><apache-spark><pyspark>
|
2024-06-13 23:00:18
| 0
| 547
|
Shane McGarry
|
78,620,395
| 7,053,357
|
How does Python determine encodings under different circumstances?
|
<p>I have a python script that runs on a windows machine and looks like this:</p>
<pre><code>print(f"Current encoding: {sys.stdout.encoding}")
print(some_randomized_chars)
</code></pre>
<p>and I run it in different ways using PyCharm.
My confusion is about the way it behaves when it comes to encodings.
When I run it "normally", i.e., via PyCharm's "Run" or by going to PyCharm's terminal (PowerShell) and running</p>
<pre><code>python producer.py
</code></pre>
<p>I get</p>
<pre><code>Current encoding: utf-8
</code></pre>
<p>And everything get printed to the terminal, including some "gibberish" chars on occasion, which is fine. However, when trying to pipe it</p>
<pre><code>python producer.py > test.txt
</code></pre>
<p>it creates a file whose 1st line is</p>
<pre><code>Current encoding: cp1252
</code></pre>
<p>but sometimes I get this error thrown in the terminal after some successful writes:</p>
<pre><code> File "AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeEncodeError: 'charmap' codec can't encode character '\x9e' in position 6: character maps to <undefined>
</code></pre>
<p>Lastly, I would expect the encoding of the created test.txt to be cp1252 (ANSI latin), but when I open it in a text editor, it shows it's utf-16 LE.</p>
<p>So my confusion here sums up to these questions:</p>
<ul>
<li>Why does Python print "utf-8" as the current encoding when running normally?</li>
<li>Why does it then switch to "cp1252" when running it with piping, and why does it sometimes crash?</li>
<li>If the encoding in the latter case is ANSI, why does the file itself show UTF-16 LE?</li>
</ul>
<p>Thank you</p>
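For context: on Windows, Python writes UTF-8 to an attached console via the Windows console APIs (PEP 528), but when stdout is a pipe or file it falls back to the ANSI code page (cp1252), which is what crashes on characters like '\x9e'. The UTF-16 LE file is PowerShell's doing, since Windows PowerShell's `>` re-encodes whatever it captures. One way to pin the child's encoding regardless of redirection is `PYTHONIOENCODING` (or `PYTHONUTF8=1` on 3.7+); a cross-platform sketch:

```python
import os
import subprocess
import sys

# Force the child interpreter's I/O encoding so piped and console output
# behave identically:
env = dict(os.environ, PYTHONIOENCODING='utf-8')
result = subprocess.run(
    [sys.executable, '-c', 'import sys; print(sys.stdout.encoding)'],
    capture_output=True, text=True, env=env)
print(result.stdout.strip())   # utf-8
```

With that set, `python producer.py > test.txt` run from cmd.exe would produce UTF-8 bytes; PowerShell's own re-encoding of captured text is a separate, shell-level step.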
|
<python><windows><encoding><utf-8>
|
2024-06-13 22:39:24
| 0
| 364
|
felisimo
|
78,620,245
| 1,877,600
|
Azure Services Connectivity Watchdog
|
<p>I'm developing a system with a microservices architecture that relies on several Azure services for continuous inter-service communication. For context, here's a simplified overview of my setup:</p>
<ul>
<li><strong>Container App</strong>: Hosts the Django application.</li>
<li><strong>Keda-Scaled Container Apps</strong>: Hosts a data processing engine as a workaround for Azure Functions' limitations.</li>
<li><strong>SQL Database</strong>: Supports the Django app.</li>
<li><strong>Blob Storage</strong> and <strong>Azure Storage Service Queue</strong>: Facilitate communication between Django and the data processing engine, enabling KEDA scaling.</li>
<li><strong>Azure Function</strong>: Handles minimal-resource requests.</li>
</ul>
<p>This setup excludes network-related components for simplicity.</p>
<p>I aim to create a mechanism to verify the proper connectivity among all components. Initially, I considered a watchdog script that runs periodically (every 6 or 12 hours) to perform connectivity checks. Here's a simplified version of the script:</p>
<pre class="lang-py prettyprint-override"><code>class Watchdog:
def __init__(self):
self.status_file_path = Path("/var/watchdog/status.txt")
def __call__(self):
if not self.__is_check_needed():
logger.info("No need to run watchdog")
return
logger.info("Running watchdog connectivity checks.")
self.__run_checks()
self.__update_status_file()
def __is_check_needed(self) -> bool:
if not self.status_file_path.exists():
logger.info("Watchdog status file does not exist. ")
return True
status_creation_time = (datetime.now() - modification_date(self.status_file_path)).total_seconds()
logger.info(f"Watchdog status files was created {status_creation_time} seconds ago")
if status_creation_time > self.check_frequency:
logger.info(f"Watchdog status is too old {status_creation_time}"
"We need to run it again.")
return True
return False
def __run_checks(self):
self.__check_connection_to_sql()
self.__check_connection_to_blob()
self.__check_connection_to_queue()
# remaining checks goes here
</code></pre>
<p>This approach works for long-lived Azure Container apps but isn't suitable for short-lived data processing components due to potential system latency impact. Additionally, I'm unsure how to execute dedicated scripts on Keda workers, making this solution inapplicable in some cases.</p>
<h2>Questions:</h2>
<ul>
<li>How can I incorporate the mentioned watchdog functionality to be able to monitor the KEDA scaler?</li>
<li>How can I integrate this watchdog functionality with GitHub Actions or Azure DevOps pipelines to enhance deployment processes, especially in terms of communicating execution status between Azure Container App and GitHub Action Worker?</li>
</ul>
<hr />
<p>Any insights or suggestions would be greatly appreciated, and I'm open to collaborating or sharing this project if there's interest.</p>
|
<python><azure><watchdog><keda>
|
2024-06-13 21:27:59
| 1
| 597
|
user1877600
|
78,620,210
| 8,897,442
|
Python and Selenium unable to find all tables on a page
|
<p>Being new to Selenium and Python, I have been given the task of extracting all the data from three tables on a Wikipedia page. In all tests I have been able to get the pertinent data from the first table, but the code is unable to find anything about the 2nd or 3rd table. I know it can't really be this hard, but I have been at it for 3 days straight with no progress. What exactly am I missing in my code? I am able to get the page to open, and then the script reports that there are only two (or sometimes one) tables on the page, but I know for a fact that there are three. The page in question is:
<a href="https://es.wikipedia.org/wiki/Anexo:Entidades_federativas_de_M%C3%A9xico_por_superficie,_poblaci%C3%B3n_y_densidad" rel="nofollow noreferrer">https://es.wikipedia.org/wiki/Anexo:Entidades_federativas_de_M%C3%A9xico_por_superficie,_poblaci%C3%B3n_y_densidad</a>
And the code I have is as follows:</p>
<pre class="lang-py prettyprint-override"><code># Libraries
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from webdriver_manager.chrome import ChromeDriverManager
import pandas as pd
import time
from io import StringIO
# Add debugging statements
print("Starting script...")
# Initialize Chrome options
options = webdriver.ChromeOptions()
options.add_argument('--start-maximized')
options.add_argument('--disable-extensions')
# Use webdriver-manager to get the appropriate ChromeDriver
service = Service(ChromeDriverManager().install())
# Initialize the WebDriver
driver = webdriver.Chrome(service=service, options=options)
try:
# Start on 2nd Monitor
driver.set_window_position(2000, 0)
driver.maximize_window()
time.sleep(5)
# Initiate Browser
driver.get('https://es.wikipedia.org/wiki/Anexo:Entidades_federativas_de_M%C3%A9xico_por_superficie,_poblaci%C3%B3n_y_densidad')
# Wait for the page to fully load by waiting for a specific element to appear
WebDriverWait(driver, 30).until(
EC.presence_of_element_located((By.XPATH, '//*[@id="firstHeading"]'))
)
print("Page loaded successfully")
# Extract data from the tables using specific XPath expressions
first_table = driver.find_element(By.XPATH, '//table[contains(.//caption, "Entidades federativas de México por superficie, población y densidad")]')
second_table = driver.find_element(By.XPATH, '(//table[contains(.//caption, "Población histórica de México")])[1]')
third_table = driver.find_element(By.XPATH, '(//table[contains(.//caption, "Población histórica de México")])[2]')
print("All tables found")
# First table extraction
first_table_html = first_table.get_attribute('outerHTML')
first_table_df = pd.read_html(first_table_html)[0]
first_table_df = first_table_df.iloc[2:34, :] # Remove header rows and ensure 32 rows of data
first_table_df.columns = first_table_df.iloc[0] # Set the first row as header
first_table_df = first_table_df[1:] # Remove the header row from the data
print("First table extracted successfully")
# Second table extraction
second_table_html = second_table.get_attribute('outerHTML')
second_table_df = pd.read_html(second_table_html)[0]
second_table_df.columns = ['Pos', 'Entidad', '2020', '2010', '2000', '1990', '1980', '1970', '1960', '1950', '1940', '1930', '1921', '1910']
print("Second table extracted successfully")
# Third table extraction
third_table_html = third_table.get_attribute('outerHTML')
third_table_df = pd.read_html(third_table_html)[0]
third_table_df.columns = ['Pos', 'Entidad', '2010', '2015', '2020', '2025', '2030']
print("Third table extracted successfully")
# Save to Excel with each table on a different sheet
with pd.ExcelWriter('mexico_population_data.xlsx') as writer:
first_table_df.to_excel(writer, sheet_name='Superficie_Poblacion_Densidad', index=False)
second_table_df.to_excel(writer, sheet_name='Poblacion_Historica', index=False)
third_table_df.to_excel(writer, sheet_name='Poblacion_Futura', index=False)
print("Data extraction and Excel file creation successful")
except Exception as e:
print(f"An error occurred: {e}")
finally:
# Close the browser after a delay to see the loaded page
time.sleep(10)
driver.quit()
</code></pre>
<p>Error example:</p>
<pre><code>Page loaded successfully
An error occurred: Message: no such element: Unable to locate element: {"method":"xpath","selector":"(//table[contains(.//caption, "Población histórica de México")])[1]"}
(Session info: chrome=125.0.6422.141); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
Stacktrace:
#0 0x5b29f734ce3a <unknown>
#1 0x5b29f703645c <unknown>
#2 0x5b29f70825b5 <unknown>
#3 0x5b29f7082671 <unknown>
#4 0x5b29f70c6f14 <unknown>
#5 0x5b29f70a54dd <unknown>
#6 0x5b29f70c42cc <unknown>
#7 0x5b29f70a5253 <unknown>
#8 0x5b29f70751c7 <unknown>
#9 0x5b29f7075b3e <unknown>
#10 0x5b29f731327b <unknown>
#11 0x5b29f7317327 <unknown>
#12 0x5b29f72ffdae <unknown>
#13 0x5b29f7317df2 <unknown>
#14 0x5b29f72e474f <unknown>
#15 0x5b29f733c128 <unknown>
#16 0x5b29f733c2fb <unknown>
#17 0x5b29f734bf6c <unknown>
#18 0x76a0fb094ac3 <unknown>
</code></pre>
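<p>A sketch of a possible fix (the helper name is mine): the stack trace suggests the selector itself is the problem. XPath 1.0 string literals have no escape mechanism, so nesting <code>"Población histórica de México"</code> inside a double-quoted XPath terminates the literal early. A small helper can pick the right quoting, falling back to <code>concat()</code> when both quote characters appear:</p>

```python
def xpath_literal(text: str) -> str:
    """Return `text` as a safe XPath 1.0 string literal (XPath has no escaping)."""
    if "'" not in text:
        return f"'{text}'"
    if '"' not in text:
        return f'"{text}"'
    # Both quote kinds present: stitch the pieces together with concat()
    parts = text.split("'")
    return "concat(" + ", \"'\", ".join(f"'{p}'" for p in parts) + ")"

caption = "Población histórica de México"
selector = f"(//table[contains(., {xpath_literal(caption)})])[1]"
print(selector)
```

<p>With this, the lookup could be written as <code>driver.find_element(By.XPATH, selector)</code>. Matching on the table's whole text with <code>contains(., ...)</code> also avoids passing a node-set such as <code>.//caption</code> to <code>contains()</code>, which only considers the first node's string value.</p>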
|
<python><selenium-webdriver>
|
2024-06-13 21:16:27
| 2
| 899
|
Erik James Robles
|
78,620,037
| 9,669,142
|
Create dataframe from specific XML data in Python
|
<p>I have an XML file (example shown below) that I want to have in a dataframe in Python. The issue is that the data in the XML has a specific structure and I'm having some trouble with getting the data I need.</p>
<p>I tried to use lxml and Pandas (read_xml), where both do what I would expect but not what I need exactly.</p>
<p>I have the following example XML:</p>
<pre><code><Errors>
<Amount>3</Amount>
<Error>
<Code>405</Code>
<Information>
<Count>1</Count>
<Time>16:18:13</Time>
<Parameters>
<Parameter Parameter="Par1" Value="0"/>
<Parameter Parameter="Par2" Value="1"/>
</Parameters>
</Information>
<Information>
<Count>2</Count>
<Time>11:04:54</Time>
<Parameters>
<Parameter Parameter="Par1" Value="2"/>
<Parameter Parameter="Par2" Value="3"/>
</Parameters>
</Information>
</Error>
<Error>
<Code>404</Code>
<Information>
<Count>1</Count>
<Time>20:42:48</Time>
<Parameters>
<Parameter Parameter="Par1" Value="4"/>
<Parameter Parameter="Par2" Value="5"/>
</Parameters>
</Information>
</Error>
</Errors>
</code></pre>
<p>I have the root Errors, where I can see that there are three errors in total: two times error 405 and one time error 404. Each error contains the time and some parameters. The parameter tags are always the same, but the values may differ.</p>
<p>Now I want a dataframe that looks like this:</p>
<p><a href="https://i.sstatic.net/LAqorvdr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LAqorvdr.png" alt="enter image description here" /></a></p>
<p>My question is: how to do this properly?
I first tried the use of indices (<code>root[0][2]</code> etc.), but that doesn't seem very efficient. Then I tried to use the tag names themselves (Code, Information, Count, Time, etc.), but I have no idea how to do that.</p>
<p>Hopefully someone knows how to do this.</p>
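<p>A sketch of one way to do this by tag name with the standard library, building one dict per <code><Information></code> occurrence (note the closing tags are corrected to <code></Parameters></code>; as posted the XML is not well-formed and the parser rejects it):</p>

```python
import xml.etree.ElementTree as ET

xml_text = """<Errors>
  <Amount>3</Amount>
  <Error>
    <Code>405</Code>
    <Information>
      <Count>1</Count>
      <Time>16:18:13</Time>
      <Parameters>
        <Parameter Parameter="Par1" Value="0"/>
        <Parameter Parameter="Par2" Value="1"/>
      </Parameters>
    </Information>
    <Information>
      <Count>2</Count>
      <Time>11:04:54</Time>
      <Parameters>
        <Parameter Parameter="Par1" Value="2"/>
        <Parameter Parameter="Par2" Value="3"/>
      </Parameters>
    </Information>
  </Error>
  <Error>
    <Code>404</Code>
    <Information>
      <Count>1</Count>
      <Time>20:42:48</Time>
      <Parameters>
        <Parameter Parameter="Par1" Value="4"/>
        <Parameter Parameter="Par2" Value="5"/>
      </Parameters>
    </Information>
  </Error>
</Errors>"""

root = ET.fromstring(xml_text)
rows = []
for error in root.iter("Error"):            # one pass per <Error> block
    code = error.findtext("Code")
    for info in error.iter("Information"):  # one output row per <Information>
        row = {"Code": code,
               "Count": info.findtext("Count"),
               "Time": info.findtext("Time")}
        for param in info.iter("Parameter"):
            row[param.get("Parameter")] = param.get("Value")
        rows.append(row)

print(rows[0])
```

<p><code>pd.DataFrame(rows)</code> then yields one row per error occurrence, with <code>Par1</code>/<code>Par2</code> as columns.</p>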
|
<python><pandas><xml>
|
2024-06-13 20:25:33
| 3
| 567
|
Fish1996
|
78,620,028
| 14,386,187
|
Unexpected result from zipping two generators. How does zip work?
|
<p>I have two generators, one which depends on the output of the other.</p>
<pre><code>def do_work(n):
for i in range(n):
yield i
def more_work(x):
for i in x:
yield i * 2
def main():
x = do_work(5)
y = more_work(x)
for i, j in zip(x, y):
print(i, j)
if __name__ == "__main__":
main()
</code></pre>
<p>When I try to zip the inputs, it seems like Python is skipping some of the values of the control variables:</p>
<pre><code>0 2
2 6
</code></pre>
<p>Does <code>zip</code> continue iterating before both generators can yield at the same time?</p>
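<p>What happens here is not special to <code>zip</code>: <code>y = more_work(x)</code> wraps the <em>same</em> generator object, so every <code>next(y)</code> advances <code>x</code> too. <code>zip</code> alternates between its arguments: it takes <code>0</code> from <code>x</code>, then <code>next(y)</code> consumes <code>1</code> from <code>x</code> and yields <code>2</code>; then it takes <code>2</code> from <code>x</code>, and so on. If the intent is to pair each value with its doubled value, the stream can be duplicated, for example with <code>itertools.tee</code> (a sketch of that fix):</p>

```python
from itertools import tee

def do_work(n):
    for i in range(n):
        yield i

def more_work(x):
    for i in x:
        yield i * 2

# tee() gives two independent iterators over the same underlying stream,
# so zip() and more_work() no longer compete for the same generator.
x1, x2 = tee(do_work(5))
pairs = list(zip(x1, more_work(x2)))
print(pairs)  # [(0, 0), (1, 2), (2, 4), (3, 6), (4, 8)]
```
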
|
<python><generator>
|
2024-06-13 20:22:34
| 4
| 676
|
monopoly
|
78,619,951
| 1,748,679
|
How can I efficiently calculate the score function gradient estimator (REINFORCE algorithm)?
|
<p>I wish to use the <a href="https://www.cl.uni-heidelberg.de/statnlpgroup/blog/sfge/" rel="nofollow noreferrer">score function gradient estimator</a> (also called the <a href="https://stillbreeze.github.io/REINFORCE-vs-Reparameterization-trick/" rel="nofollow noreferrer">REINFORCE algorithm</a>). Taking the notation and equations from the first link, this allows us to estimate gradients of the expected value of a function <em><strong>f</strong></em> without having to differentiate <em><strong>f</strong></em> directly, instead using a Monte Carlo approximation and gradients of the log-pdf of some distribution <em><strong>p</strong></em> with parameters <em><strong>θ</strong></em>.
<a href="https://i.sstatic.net/GCvqeiQE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GCvqeiQE.png" alt="desired gradient" /></a></p>
<p><a href="https://i.sstatic.net/gYiUYlhI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gYiUYlhI.png" alt="approximate solution" /></a></p>
<p>However, I am having difficulty calculating this estimator using tensorflow's automatic differentiation and distributions from tensorflow_probability.</p>
<p>Here is an example illustrating my problem using a small toy example. Here the probability distribution <em><strong>p</strong></em> is a standard normal, and the function <em><strong>f(x)=x</strong></em> is simply the identity.</p>
<pre><code>import scipy
import numpy as np
import tensorflow as tf
import keras
import tensorflow_probability as tfp
# Distribution Settings
scale = 1
mean = 0
normal_dist = scipy.stats.norm(loc=mean, scale=scale)
# Number of independent datapoints
T = 1000
seed=360
rng = np.random.RandomState(seed)
y_T = normal_dist.rvs(size=T, random_state=rng)
# create keras model
inputs = keras.Input(shape=0)
# a dense layer with an empty input will just be an intercept term
intercept_layer = keras.layers.Dense(1, activation = None)
# Create a normal distribution with a mean equal to the input
distribution_layer = tfp.layers.DistributionLambda(lambda t: tfp.distributions.Normal(loc=t, scale=1))
predicted_mean = intercept_layer(inputs)
distribution = distribution_layer(predicted_mean)
model = keras.Model(inputs=inputs, outputs=distribution)
# Create an empty input so we can do a forward pass
x_TF = np.zeros(shape=(T,0))
# number of score function trick samples
N=50
with tf.GradientTape() as tape:
model_distribution = model(x_TF)
sample_y_NT = model_distribution.sample(N)
sample_log_probs_NT = model_distribution.log_prob(sample_y_NT)
# use jacobian to get gradient of each sample
jacobian_params_NT = tape.jacobian(sample_log_probs_NT , model.trainable_weights)
</code></pre>
<p>The above code works for small <code>N</code> (it takes about 40 seconds for <code>N=25</code>), but <code>N=50</code> never terminates.</p>
<p>I've read the <a href="https://www.tensorflow.org/api_docs/python/tf/GradientTape#jacobian" rel="nofollow noreferrer">documentation for the jacobian</a>, and it recommends enclosing the function call in a <code>@tf.function()</code> decorator, but when I do this performance degrades even further. There's nothing terribly complex about this computation: you can analytically differentiate the logpdf of a standard normal, and it's trivial to compute these gradients for a much larger number of samples, so I know the problem must be with how I'm using the GradientTape.</p>
<p>I'm using Keras 2, TensorFlow 2.15, and tensorflow_probability 0.23. If there are solutions in JAX or PyTorch I'd be happy to entertain those.</p>
|
<python><tensorflow><tensorflow-probability>
|
2024-06-13 20:03:37
| 0
| 9,846
|
Kyle Heuton
|
78,619,949
| 22,437,734
|
How to identify if provided string is a bash or python script
|
<p>I have been given a strange request: "Create a python program that will identify if a provided string contains bash or python code".</p>
<p>How can I do this? Are there any python libraries that can help me parse the string and find out if it contains (valid) bash/python code?</p>
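<p>For the Python half there is a reliable library answer: <code>ast.parse</code> accepts exactly the strings that are syntactically valid Python. There is no bash parser in the standard library, so the bash side below is a crude heuristic (shebang line plus a few shell keywords), and note the check cannot be exact in principle: a string like <code>ls</code> is simultaneously a shell command and a valid Python expression. A sketch:</p>

```python
import ast
import shlex

def is_valid_python(source: str) -> bool:
    """True if the string parses as Python source."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

def looks_like_bash(source: str) -> bool:
    """Cheap heuristic: shebang or common shell constructs; NOT a real parser."""
    if source.lstrip().startswith("#!") and "sh" in source.splitlines()[0]:
        return True
    shell_markers = {"fi", "esac", "done", "then", "do"}
    try:
        tokens = shlex.split(source, comments=True)
    except ValueError:
        return False
    return any(tok in shell_markers for tok in tokens) or "$(" in source or "${" in source

print(is_valid_python("for i in range(3):\n    print(i)"))   # True
print(is_valid_python("for f in *.txt; do echo $f; done"))   # False
```
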
|
<python><bash>
|
2024-06-13 20:03:35
| 2
| 473
|
Gleb
|
78,619,891
| 10,634,126
|
Pythonic parallelized streaming of batches (list-of-dicts) of JSON into a single file write
|
<p>I have a multiprocessing thread pool where each job returns a requested batch of JSON (an array of objects). I want all results to write into a single file, but without storing the full result (as a single list) in-memory due to RAM constraints (the full result contains about 1.5 million records totaling about 1.5 GB).</p>
<p>I see solutions that suggest the use of <code>json-stream</code> but these do not seem to account for batching (they handle either a list or dicts, not a list of lists of dicts). I also see solutions suggesting string manipulations, e.g., opening the file and writing a <code>[</code> followed by a batch of objects, separated by writes of <code>, </code>, followed by a trimming of trailing <code>,</code> and <code>]</code> characters...</p>
<p>Is there a way to do this, either using an imported library or in a "safer" manner?</p>
<pre><code>from multiprocessing.pool import ThreadPool
def thread(worker, jobs, threads):
pool = ThreadPool(threads)
with open("tmp.json", "w") as f:
for result in tqdm(pool.imap_unordered(worker, jobs), total=len(jobs)):
# 1. stream each object from each result array)
for element in result:
### CODE GOES HERE ###
# 2. just stream the result array
### CODE GOES HERE ###
</code></pre>
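<p>One safe way to do the "string manipulation" approach without any trailing-comma trimming is to track whether anything has been written yet; the file then always ends up as a valid JSON array while only one element at a time is held for encoding. A sketch (the function name is mine) whose <code>batches</code> argument could be <code>pool.imap_unordered(worker, jobs)</code> itself; writing NDJSON (one <code>json.dumps</code> per line) is an even simpler alternative if the consumer can accept it:</p>

```python
import io
import json

def write_json_array(batches, fp):
    """Stream an iterable of lists-of-dicts into fp as one valid JSON array."""
    fp.write("[")
    first = True
    for batch in batches:           # e.g. pool.imap_unordered(worker, jobs)
        for element in batch:
            if not first:
                fp.write(",")
            json.dump(element, fp)  # only one record encoded at a time
            first = False
    fp.write("]")

buf = io.StringIO()
write_json_array([[{"a": 1}, {"a": 2}], [], [{"a": 3}]], buf)
print(buf.getvalue())  # [{"a": 1},{"a": 2},{"a": 3}]
```
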
|
<python><json><multithreading><file><stream>
|
2024-06-13 19:46:32
| 0
| 909
|
OJT
|
78,619,878
| 10,323,798
|
Google Cloud Platform App Engine flask app gives 404 but not locally
|
<p>I created a flask app that has been working until I deleted the default project to clean up. I had no versioning enabled, so the buckets were non-recoverable.</p>
<p>So, I created a new project and pointed my local app to that with authorization. The staging and default app buckets were created. When deploying, the process takes so long and eventually no service/instances are created.</p>
<p>My app.yaml looks like</p>
<pre><code>runtime: python311
handlers:
- url: /static
static_dir: static
- url: /.*
script: auto
entrypoint: gunicorn --bind 0.0.0.0:8080 --timeout 600 app:app
# instance_class: F4_1G -> better scaling and access to assets on cloud
manual_scaling:
instances: 1
</code></pre>
<p>I have tried everything and given all necessary permissions but nothing works. The app works just fine locally.</p>
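<p>One thing worth checking (a guess, not a confirmed diagnosis): in App Engine standard <code>app.yaml</code>, <code>entrypoint</code> is a top-level key, not part of the <code>handlers</code> list, and the server is expected to listen on <code>$PORT</code>. A possible layout using the values from the question:</p>

```yaml
runtime: python311

entrypoint: gunicorn --bind :$PORT --timeout 600 app:app

manual_scaling:
  instances: 1

handlers:
- url: /static
  static_dir: static
- url: /.*
  script: auto
```

<p>Deploying with <code>gcloud app deploy --verbosity=debug</code> should surface the actual failure if this is not the cause.</p>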
|
<python><flask><google-cloud-platform><google-app-engine><google-cloud-storage>
|
2024-06-13 19:41:18
| 0
| 13,317
|
NelsonGon
|
78,619,816
| 1,077,748
|
Can I create a global ThreadPoolExecutor in a Flask application?
|
<p>I am using a <code>concurrent.futures.ThreadPoolExecutor</code> in a WSGI (Flask) Python REST service to send queries in parallel. With the code below I instantiate one executor per request, each with as many threads as there are queries, so multiple parallel requests create that many thread pools. My question: is there a way to create a single global executor and share it among requests, and would that be more efficient, since I would not have to create and destroy one per request?
The second question: if a <code>ThreadPoolExecutor</code> has a limit of <code>max_workers=32</code>, how does this limit work when I submit more than 32 queries to the common thread pool?</p>
<p>Is my assumption correct that submitting more queries than there are cores should not be an issue, since a thread blocks while its query runs on the database and the pool then picks up the next query? So I should be able to handle more queries than there are cores available, right? I know that this is how it works in other languages and I expect the same in Python.</p>
<pre><code> with futures.ThreadPoolExecutor(max_workers=min(32, len(list_of_queries)), thread_name_prefix='query_pool') as tpe:
future_list = [tpe.submit(execute_query, connection, query) for query in
list_of_queries]
for future in futures.as_completed(future_list):
try:
# make sure none of the results has an exception
future.result()
except Exception as exc:
raise exc
</code></pre>
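<p>Yes: the executor can be created once at module level and shared, since <code>ThreadPoolExecutor.submit()</code> is thread-safe. Submissions beyond <code>max_workers</code> are not rejected; they wait in the pool's internal work queue until a worker thread frees up. And because threads blocked on database I/O release the GIL, running more queries than there are cores is exactly the intended use. A sketch (the lambda stands in for <code>execute_query</code>):</p>

```python
from concurrent.futures import ThreadPoolExecutor

# One pool for the whole process, created at import time and shared by all
# requests. submit() is thread-safe, so concurrent request threads can use
# it without extra locking; excess submissions simply queue inside the pool.
EXECUTOR = ThreadPoolExecutor(max_workers=32, thread_name_prefix="query_pool")

def run_queries(queries, execute_query):
    # Submit everything, then gather results in submission order;
    # .result() re-raises any exception from the worker thread.
    futures = [EXECUTOR.submit(execute_query, q) for q in queries]
    return [f.result() for f in futures]

# Toy stand-in for a blocking database call (assumption for the sketch).
print(run_queries(range(5), lambda q: q * 2))  # [0, 2, 4, 6, 8]
```
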
|
<python><multithreading><concurrent.futures>
|
2024-06-13 19:26:08
| 1
| 625
|
Fabio
|
78,619,703
| 3,369,545
|
Facing pyodbc.InterfaceError When Connecting to SQL Server Using Python 3.10
|
<p>I am currently encountering an issue with a Python script that connects to a SQL Server database using pyodbc. This script was functioning correctly under Python 3.7.3, but since upgrading to Python 3.10, I am facing an interface error. Despite setting environment variables and specifying driver details in my connection string, the problem persists. I would be very grateful for any insights or suggestions to help resolve this issue.</p>
<p>Environment:</p>
<p>Previous Python version: 3.7.3 (no issues)
Current Python version: 3.10
OS: Unix-based system</p>
<p>Code</p>
<pre><code>import pyodbc
import pandas as pd
with open('in.txt', "r") as file:
readline=file.read().split("\n")
pwd_in = readline[0]
def connect_to_sc():
seoud_connection = pyodbc.connect(
Driver = 'SQLServer-Driver',
Server = 'SKL2SQ6.org,2893',
Database = 'Seroud',
User = 'JKL_User',
Password = pwd_in,
Trusted_Connection = 'No'
)
return seoud_connection
</code></pre>
<p>Environment Configuration:</p>
<pre><code>source /sys_apps_01/python/python310/profile/env.sh
export ODBCINI=/sys_apps_01/python/python310/odbc/odbc.ini
export ODBCINSTINI=/sys_apps_01/python/python310/odbc/odbcinst.ini
export LD_LIBRARY_PATH=/sys_apps_01/python/python310/sqlserver/msodbcsql17/lib64:$LD_LIBRARY_PATH
python3.10 main.py
</code></pre>
<p>Error Message:</p>
<pre><code>File location using os.getcwd(): /cus_intel_01/suim/kls/emailmanual
Traceback (most recent call last):
File "/cus_intel_01/suim/kls/emailmanual/main.py", line 24, in <module>
exec(open(path).read())
File "<string>", line 68, in <module>
File "<string>", line 14, in connect_to_sc
pyodbc.InterfaceError: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnect)')
</code></pre>
<p>Any help in diagnosing and resolving this error would be deeply appreciated. Thank you for your time and assistance!</p>
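<p><code>IM002</code> generally means unixODBC could not resolve the driver or DSN name at all, so the first things to compare between the 3.7 and 3.10 environments are the output of <code>odbcinst -q -d</code> and the <code>ODBCINI</code>/<code>ODBCINSTINI</code> files actually being read. A DSN-less connection string sidesteps the DSN lookup entirely; a sketch (the driver name is a guess and must match a name registered in your <code>odbcinst.ini</code>, or be the full path to the driver's shared library):</p>

```python
# Hypothetical driver name -- substitute whatever `odbcinst -q -d` reports,
# or the full path to the driver .so under .../msodbcsql17/lib64/.
driver = "ODBC Driver 17 for SQL Server"
conn_str = (
    f"DRIVER={{{driver}}};"
    "SERVER=SKL2SQ6.org,2893;"
    "DATABASE=Seroud;"
    "UID=JKL_User;"
    "PWD=secret;"
)
print(conn_str)
# then: connection = pyodbc.connect(conn_str)
```
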
|
<python><python-3.x><sql-server><pyodbc><unixodbc>
|
2024-06-13 18:59:20
| 1
| 421
|
user3369545
|
78,619,482
| 4,858,605
|
Sending data and files as multipart request using aiohttp
|
<p>I am trying to send a "multipart/form-data" request through aiohttp. I have already tried using requests and the request works fine.
I have a class where <code>self.files</code> is an <code>io.BytesIO</code> object and <code>self.data</code> is a dictionary of string values.</p>
<p>Here's the relevant part of my code in this class:</p>
<pre><code>params = {
"url": self.url,
"headers": self.headers,
"timeout": self.timeout,
"proxy": self.proxy
}
# No data is allowed in the body for GET requests
if self.method != "get":
if self.header_content_type == "multipart/form-data":
with aiohttp.MultipartWriter("form-data") as mp:
if self.data:
for key, value in self.data.items():
part = mp.append(value, {'content-type': 'form-data'})
part.set_content_disposition('form-data', name=key)
if self.files:
for key, value in self.files.items():
part = mp.append(value.read(), {'content-type': 'form-data'})
part.set_content_disposition('form-data', name=key)
params["data"] = mp
async with aiohttp.ClientSession(trust_env=False) as session:
async with session.post(**params) as response:
print(params)
return response
</code></pre>
<p>However, this fails with a 422 error from the called API which means that the data is not properly formed.</p>
<p>The output of the params that I have printed above is follows:</p>
<pre><code>{'url': 'http://localhost:9999/v2/toing', 'headers': {'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkY200Y2hlZSIsImV4cCI6MTcxODI4MzAyMX0.6d5XcxucVLvml4O0Z-JBfOvrJLtz254371aQ0XKiCuI'}, 'timeout': 500, 'proxy': None, 'data': <aiohttp.multipart.MultipartWriter object at 0x1149aff40>}
</code></pre>
<p>It seems that the "data" section is not properly formatted and that the files attribute is missing.</p>
<p>Here are the parameters that work for me using requests.post:</p>
<pre><code>('http://localhost:9999/v2/toing', {'Authorization': 'Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiJkY200Y2hlZSIsImV4cCI6MTcxODMwMjQyMH0.lpAfQKISeIhUIcSf1OdM9ZN3eOUB0YHwcfVshY7y0VQ'}) {'data': {'request_data': '{"kk_shape": [2595, 2826], "algorithm_type": "trauma", "age": "33", "center": "toing", "is_spx": false, "detected_xx": "yyy", "kk": "hand"}'}, 'files': {'image_arr': <_io.BytesIO object at 0x116faed90>}, 'header_content_type': 'multipart/form-data'}
</code></pre>
<p>Is it possible to send <code>files</code> and <code>data</code> separately using aiohttp, given that <code>aiohttp.ClientSession.post</code> does not have a <code>files</code> parameter as <code>requests.post</code> does?
What am I doing wrong?</p>
|
<python><python-3.x><python-requests><aiohttp>
|
2024-06-13 17:57:15
| 1
| 2,462
|
toing_toing
|
78,619,257
| 1,315,125
|
Can send an SSL message via mosquitto but getting CERTIFICATE_VERIFY_FAILED from python paho script
|
<p>Using mosquitto:</p>
<pre><code>mosquitto_pub -h <public ip address> -t testssl/topic -m "hello world" --cafile ./ca_certificate.pem -p 8883 --tls-version tlsv1.2 -d --id client2
Client client2 sending CONNECT
Client client2 received CONNACK (0)
Client client2 sending PUBLISH (d0, q0, r0, m1, 'testssl/topic', ... (12 bytes))
Client client2 sending DISCONNECT
</code></pre>
<p>Using python paho script:</p>
<pre><code># python 3.11
import datetime
import json
import random
import time
import uuid
from paho.mqtt import client as mqtt_client
from config import broker, port, topic
# Generate a Client ID with the publish prefix.
client_id = f'publish-{random.randint(0, 100)}'
def connect_mqtt():
def on_connect(client, userdata, flags, rc):
if rc == 0:
print("Connected to MQTT Broker!")
else:
print(f"Failed to connect, return code {rc}")
client = mqtt_client.Client(client_id)
client.tls_set(
ca_certs="ca_certificate.pem",
tls_version=mqtt_client.ssl.PROTOCOL_TLSv1_2,
)
# client.username_pw_set("rw", "readwrite")
client.on_connect = on_connect
client.connect(broker, port)
return client
def fake_sensor_data():
"""Generate a fake sensor data event."""
list_of_sensor_ids = [ uuid.uuid4() for _ in range(10) ]
values = [ 0, 0, 0, 0, 0, 0, 1, 0, 0 ]
return {
"timestamp": time.time(),
"device_id": str(list_of_sensor_ids[random.randint(0, 9)]),
"event": {
"payload": values[random.randint(0, 8)],
}
}
def publish(client):
while True:
time.sleep(3)
msg = json.dumps(fake_sensor_data())
result = client.publish(topic, msg)
status = result[0]
if status == 0:
print()
print(datetime.datetime.now(), msg)
def run():
print("Connecting to MQTT Broker")
print(f"Broker: {broker}")
print(f"Port: {port}")
print(f"Topic: {topic}")
client = connect_mqtt()
print("Connected!")
print("Ready to publish messages!")
client.loop_start()
publish(client)
client.loop_stop()
if __name__ == '__main__':
run()
</code></pre>
<p>Error:</p>
<pre><code>ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: IP address mismatch, certificate is not valid for '<public ip address>'. (_ssl.c:1006)
</code></pre>
<p>I generated the ca_certificate using tls-gen on my server in a script. Elsewhere I read that if I am providing a public ip address instead of a DNS name, I should be using a SAN, however tls-gen does not allow for me to specify one easily - would be good to not have to modify the server initiation script too much.</p>
<p>I would like to understand why mosquitto would work and python/paho needs a SAN for the same certificate file.</p>
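<p>The difference is likely in how strictly each client checks the server identity: Python's <code>ssl</code> module matches the connection target against the certificate's subjectAltName entries, and a bare IP address only matches an IP-type SAN, while <code>mosquitto_pub</code>'s OpenSSL-based check can be more lenient. The robust fix is to reissue the server certificate with an IP SAN (e.g. <code>subjectAltName = IP:203.0.113.7</code>, address hypothetical); as an insecure stopgap for testing, hostname matching can be disabled while keeping chain verification (a sketch):</p>

```python
import ssl

# INSECURE stopgap for testing only: the CA chain is still verified,
# but the IP/hostname is no longer matched against the certificate's SAN.
ctx = ssl.create_default_context()           # in real use: cafile="ca_certificate.pem"
ctx.check_hostname = False
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # prints True: chain verification still on
# with paho: client.tls_set_context(ctx)
```
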
|
<python><ssl><mosquitto><paho>
|
2024-06-13 17:01:25
| 1
| 3,515
|
Igor L.
|
78,619,042
| 8,329,213
|
How to load data using ipywidgets and display it in modern Jupyter
|
<p>I am trying to load a sample <code>Excel</code> file to test a functionality, which ultimately will have to be rendered through <a href="https://voila-gallery.org/" rel="nofollow noreferrer"><code>Voila</code></a>. I am not even able to see the <code>Dataframe</code>, let alone perform an operation on it and make it available for download as an <code>Excel</code> file. I don't even know if I am loading the <code>Dataframe</code> or not.</p>
<pre><code>import pandas as pd
import ipywidgets as widgets
def load_data():
df = pd.read_excel('....xlsx',engine='openpyxl',dtype=str)
display(df)
def on_sample_data_click(event):
load_data()
text_caption = widgets.HTML(value="<h2>Showing data</h2>")
vbox_text = widgets.VBox([text_caption])
display(vbox_text)
load_sample_data = widgets.Button(description='Click to show', layout=dict(width='auto', height='auto'))
load_sample_data.on_click(on_sample_data_click)
display(load_sample_data)
</code></pre>
<p>The display is this:</p>
<p><a href="https://i.sstatic.net/0kLV8QdC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0kLV8QdC.png" alt="enter image description here" /></a></p>
<p>Now, when I click on <code>Click to show</code> I had expected <code>Dataframe</code> be displayed, but nothing shows up.</p>
<p>Any help would be highly appreciated.</p>
|
<python><ipywidgets><voila>
|
2024-06-13 16:12:53
| 1
| 7,707
|
cph_sto
|
78,618,674
| 16,521,194
|
Passing dynamically generated classes' instances as parameters for celery tasks
|
<h2>Goal</h2>
<p>The objective is to pass an instance of a dynamically generated class (inheriting from a statically defined one) as a parameter to a task to be executed on a Celery worker.</p>
<h2>Issue</h2>
<p>I am encountering an error at runtime, when trying to pass my instance as parameter: <code>kombu.exceptions.EncodeError: Can't pickle local object 'ChildClassGenerator.generate.<locals>.ChildClass'</code>.</p>
<h2>Minimal code to reproduce (see EDIT 1)</h2>
<p><code>classes.py</code></p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class ParentClass(ABC):
def __init__(self):
self.attribute1: str = "attribute1"
@abstractmethod
def some_method(self) -> str:
pass
class ChildClassGenerator:
@staticmethod
def generate(value: int) -> type[ParentClass]:
class ChildClass(ParentClass):
def __init__(self):
super().__init__()
self.value: int = value
def some_method(self) -> str:
return f"{self.value} - {vars(self)}"
return ChildClass
</code></pre>
<p><code>worker.py</code></p>
<pre class="lang-py prettyprint-override"><code>from celery import Celery
from classes import ParentClass
def attach_test_task(celery_worker: Celery) -> Celery:
@celery_worker.task(name="some_task")
def some_task(some_instance: ParentClass) -> str:
print(f"{some_instance.some_method()}")
return some_instance.some_method()
return celery_worker
def build_worker() -> Celery:
celery_worker: Celery = Celery(
"worker",
broker="amqp://guest:guest@localhost:5672",
backend="rpc://",
result_expires=300
)
celery_worker.conf.task_default_queue = "worker"
celery_worker.conf.event_serializer = "pickle"
celery_worker.conf.task_serializer = "pickle"
celery_worker.conf.result_serializer = "pickle"
celery_worker.conf.accept_content = [
"json", "application/json", "application/x-python-serialize"
]
celery_worker.conf.broker_connection_retry_on_startup = True
celery_worker = attach_test_task(celery_worker=celery_worker)
return celery_worker
worker: Celery = build_worker()
</code></pre>
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>from celery.canvas import Signature
from worker import worker
from classes import ChildClassGenerator, ParentClass
def main():
some_class: type[ParentClass] = ChildClassGenerator.generate(value=1)
some_instance: some_class = some_class()
task_signature: Signature = worker.signature("some_task")
task = task_signature.delay(
some_instance=some_instance
)
if __name__ == "__main__":
main()
</code></pre>
<p>To test, run <code>celery -A worker.worker worker -Q worker --loglevel DEBUG --concurrency=2</code> and <code>python main.py</code> against a local RabbitMQ.</p>
<h2>Modifications</h2>
<p>The raised error led me to <a href="https://github.com/celery/celery/issues/3404" rel="nofollow noreferrer">this GitHub issue from 2016-2017</a>. It seems to recommend using <code>dill</code> as a serializer instead of <code>pickle</code> when trying to serialize locally defined objects. This led to the following modifications in <code>worker.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import dill
from celery import Celery
# ADDED IMPORT
from kombu.serialization import registry
from classes import ParentClass
def attach_test_task(celery_worker: Celery) -> Celery:
@celery_worker.task(name="some_task")
def some_task(some_instance: ParentClass) -> str:
print(f"{some_instance.some_method()}")
return some_instance.some_method()
return celery_worker
# ADDED METHOD
def register_dill():
def encode(obj):
return dill.dumps(obj=obj)
def decode(s):
deserialized_object = dill.loads(str=s)
return deserialized_object
registry.register(
name="dill",
encoder=encode,
decoder=decode,
content_type="application/x-my-content",
content_encoding="binary"
)
def build_worker() -> Celery:
celery_worker: Celery = Celery(
"worker",
broker="amqp://guest:guest@localhost:5672",
backend="rpc://",
result_expires=300
)
# ADDED CALL
register_dill()
celery_worker.conf.task_default_queue = "worker"
celery_worker.conf.event_serializer = "dill" # CHANGED SERIALIZER
celery_worker.conf.task_serializer = "dill" # CHANGED SERIALIZER
celery_worker.conf.result_serializer = "dill" # CHANGED SERIALIZER
celery_worker.conf.accept_content = [
"dill", "application/x-my-content", # ADDED ACCEPTED CONTENT
"json", "application/json", "application/x-python-serialize",
]
celery_worker.conf.broker_connection_retry_on_startup = True
celery_worker = attach_test_task(celery_worker=celery_worker)
return celery_worker
worker: Celery = build_worker()
</code></pre>
<h2>Modifications results</h2>
<p>While this has been somewhat successful, seeing we do not have the error when running the <code>main.py</code>, it is now the worker which crashes following a <code>Unrecoverable error: AttributeError("Can't pickle local object 'ChildClassGenerator.generate.<locals>.ChildClass'")</code>. (See EDIT 2)</p>
<h2>My questions</h2>
<ul>
<li>Is there a way to fix this for the worker to work?</li>
<li>If not, is there a way to achieve what I want here?</li>
</ul>
<p>Thanks a lot!</p>
<h2>EDIT 1</h2>
<p>After some tests, it seems another way to trigger the error is to define a lambda and pass it as a parameter. The minimal code is then shorter:<br />
<code>worker.py</code></p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable

from celery import Celery
def attach_test_task(celery_worker: Celery) -> Celery:
@celery_worker.task(name="some_task")
def some_task(function: Callable = lambda x: print(x)):
function("Hello")
return celery_worker
def build_worker() -> Celery:
celery_worker: Celery = Celery(
"worker",
broker="amqp://guest:guest@localhost:5672",
backend="rpc://",
result_expires=300
)
celery_worker.conf.task_default_queue = "worker"
celery_worker.conf.event_serializer = "pickle"
celery_worker.conf.task_serializer = "pickle"
celery_worker.conf.result_serializer = "pickle"
celery_worker.conf.accept_content = [
"json", "application/json", "application/x-python-serialize"
]
celery_worker.conf.broker_connection_retry_on_startup = True
celery_worker = attach_test_task(celery_worker=celery_worker)
return celery_worker
worker: Celery = build_worker()
</code></pre>
<p><code>main.py</code></p>
<pre class="lang-py prettyprint-override"><code>from celery.canvas import Signature
from worker import worker
def main():
task_signature: Signature = worker.signature("some_task")
task = task_signature.delay(function=lambda x: print(f"{x}{x}"))
if __name__ == "__main__":
main()
</code></pre>
<p>This triggers the same errors, and the modification to include <code>dill</code> seems to have the same effects.</p>
<h2>EDIT 2</h2>
<p>I first verified that <code>dill</code> was indeed able to dump and load <code><locals></code>.<br />
<code>lambda_sender.py</code></p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable
import dill
def main():
function: Callable = lambda x: 2*x
with open("testing_dill", "wb") as file:
dill.dump(function, file)
if __name__ == "__main__":
main()
</code></pre>
<p><code>lambda_receiver.py</code></p>
<pre class="lang-py prettyprint-override"><code>import dill
def main():
with open("testing_dill", "rb") as file:
file_content = file.read()
deserialized_object = dill.loads(file_content)
print(deserialized_object(5))
if __name__ == "__main__":
main()
</code></pre>
<p>This results in <code>10</code> being printed out, as desired.<br />
Then I looked more into the dill decoder, and I found something surprising. When adding a breakpoint before the decoder's <code>return</code> statement to explore the <code>deserialized_object</code>, it seems the object is decoded correctly (<code>deserialized_object[1]["some_instance"].some_method()</code> results in <code>"1 - {'attribute1': 'attribute1', 'value': 1}"</code>, which is the expected/desired behavior). Thus it seems the object is correctly decoded, but the next steps are blocking the code execution.</p>
<h2>EDIT 3</h2>
<p>After exploring the stack trace, I've found the following. (Every method or class here is defined in <code>celery.concurrency.asynpool</code>, unless stated otherwise, ellipses have been added to shorten the definitions and focus on important parts)<br />
To trigger a task, Celery seems to be registering the async pool with the current event loop (through <code>AsynPool.register_with_event_loop(self, hub: ...)</code>).<br />
This in turn calls the <code>AsynPool._create_write_handlers(self, hub: ..., ..., dumps: ... = _pickle.dumps, ...)</code> method as such <code>self._create_write_handlers(hub)</code>. Notice that this forces the <code>dumps</code> parameter to its default value of <code>pickle.dumps</code>.<br />
We already know this serializer doesn't work for us. This also seems to prevent us from passing <code>dill.dumps</code> instead of <code>pickle.dumps</code>, as the <code>register_with_event_loop</code> doesn't offer the option to change the serializer.<br />
Even if this was supported in this method, there is an entire stack trace of calls where this parameter would need to be added, isn't it?</p>
<h2>EDIT 4</h2>
<p>For completeness, I also have opened <a href="https://github.com/celery/celery/discussions/9081" rel="nofollow noreferrer">a GitHub discussion</a> on the Celery GitHub repository.</p>
<h2>EDIT 5</h2>
<p>Changing the default serializer (and deserializer) has worked; I opened two PRs, on <a href="https://github.com/celery/celery/pull/9100" rel="nofollow noreferrer">Celery</a> and on <a href="https://github.com/celery/billiard/pull/406" rel="nofollow noreferrer">Billiard</a>. It seems it doesn't have any adverse effects, though I have not been able to run the test suite.</p>
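<p>Independent of which serializer Celery ends up using, the pickling error can also be avoided by teaching the generated class to serialize itself: a <code>__reduce__</code> that points at a module-level rebuild function means pickle never needs to reference the local <code>ChildClass</code> object. A sketch against the classes from the question (<code>_rebuild</code> is my addition; the worker still needs <code>classes.py</code> importable):</p>

```python
import pickle
from abc import ABC, abstractmethod

class ParentClass(ABC):
    def __init__(self):
        self.attribute1 = "attribute1"

    @abstractmethod
    def some_method(self) -> str: ...

def _rebuild(value):
    # Module-level, therefore importable by pickle on the worker side.
    return ChildClassGenerator.generate(value)()

class ChildClassGenerator:
    @staticmethod
    def generate(value):
        class ChildClass(ParentClass):
            def __init__(self):
                super().__init__()
                self.value = value

            def some_method(self) -> str:
                return f"{self.value} - {vars(self)}"

            def __reduce__(self):
                # Serialize as "call _rebuild(value)" instead of the local class.
                return (_rebuild, (self.value,))

        return ChildClass

instance = ChildClassGenerator.generate(1)()
clone = pickle.loads(pickle.dumps(instance))  # round-trips without EncodeError
print(clone.some_method())
```
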
|
<python><celery><pickle><dill><kombu>
|
2024-06-13 15:03:48
| 1
| 1,183
|
GregoirePelegrin
|
78,618,644
| 2,721,897
|
Pandas rate limit output based on minimum change in a column
|
<p>I have a dataframe with a column <code>t</code> holding monotonically increasing integers. I want to filter the dataframe so that I keep as many rows as possible, but no two consecutive rows of the OUTPUT are less than 10 apart in <code>t</code> (and the first row of the input dataframe is always included in the output). This is different from <code>df[df['t'].diff().fillna(20) >= 10]</code>, because a run of input rows that are each 5 apart would all be thrown out by taking the diff on the input, even though every second one could be kept.</p>
<p>In inefficient pseudocode it would be something like:</p>
<pre><code>last_chosen = None
inds = []
for i, t in enumerate(df['t']):
if last_chosen is None or t - last_chosen >= 10:
last_chosen = t
inds.append(df.index[i])
new_df = df.loc[inds]
</code></pre>
<p>Is there any efficient way to do this?</p>
<p>Edit: Minimal example:</p>
<pre><code>def filt(df, gap=10):
last_chosen = None
inds = []
for i, t in enumerate(df['t']):
if last_chosen is None or t - last_chosen >= gap:
last_chosen = t
inds.append(df.index[i])
return df.loc[inds]
d = pd.DataFrame(dict(
t = [5, 7, 9, 10, 15, 20, 30, 45, 50, 55, 60, 70]))
(d['t'][d['t'].diff().fillna(10) >= 10]) # gives [5, 30, 45, 70]
filt(d)['t'] # gives [5, 15, 30, 45, 55, 70] which is desired
</code></pre>
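<p>Stripped of pandas, the greedy rule being described is a single sequential scan; a plain-Python sketch of the same logic (assuming integer timestamps) looks like this:</p>

```python
def greedy_thin(ts, gap=10):
    """Keep the first value, then every value at least `gap` after the last kept one."""
    kept = []
    last = None
    for t in ts:
        if last is None or t - last >= gap:
            kept.append(t)
            last = t
    return kept

print(greedy_thin([5, 7, 9, 10, 15, 20, 30, 45, 50, 55, 60, 70]))
# → [5, 15, 30, 45, 55, 70]
```

<p>Because each decision depends on the previously <em>kept</em> value rather than on the previous input row, the scan is inherently sequential, which is exactly why a plain <code>diff</code> on the input cannot reproduce it.</p>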
|
<python><pandas>
|
2024-06-13 14:58:58
| 2
| 367
|
user2721897
|
78,618,559
| 6,195,489
|
How to speed up interpolation in dask
|
<p>I have a piece of data code that performs interpolation on a large number of arrays.</p>
<p>This is extremely quick with numpy, but:</p>
<ul>
<li>The data the code will work with in reality will often not fit in memory</li>
<li>The tasks are embarrassingly parallel</li>
</ul>
<p>So I would like to use dask/xarray to speed the process up and allow the tasks to be distributed.</p>
<p>However the speed of performing the same task with Dask is prohibitively slow.</p>
<p>Below is an example code that shows the difference in performing the interpolation on dask arrays vs numpy arrays:</p>
<pre><code>import timeit
import dask.array as da
import numpy as np
import xarray as xr
from dask.distributed import Client
client = Client(processes=False, threads_per_worker=1, n_workers=10, memory_limit="2GB")
n_points = 1500
x = np.linspace(0, 10, n_points)
y = np.sin(x)
xp = np.linspace(0, 10, n_points)
x_da = da.from_array(x, chunks="auto")
y_da = da.from_array(y, chunks="auto")
xp_da = da.from_array(xp, chunks="auto")
n_repeats = 50000

numpy_time = timeit.timeit(
    stmt="np.interp(xp, x, y)",
    setup="import numpy as np; x = np.linspace(0, 10, 1500); y = np.sin(x); xp = np.linspace(0, 10, 1500)",
    number=n_repeats,
)

# Timing np.interp with Dask arrays
def dask_interpolation():
    interpolated = da.map_blocks(np.interp, xp_da, x_da, y_da, dtype=float)
    interpolated.compute()

dask_time = timeit.timeit(
    stmt="dask_interpolation()",
    setup="from __main__ import dask_interpolation",
    number=n_repeats,
)

# Print the timings
print(
    f"np.interp with numpy arrays: {numpy_time:.6f} seconds for {n_repeats} repetitions"
)
print(
    f"np.interp with Dask arrays: {dask_time:.6f} seconds for {n_repeats} repetitions"
)
# Shutdown Dask client
client.shutdown()
</code></pre>
<p>which gives:</p>
<pre><code>np.interp with numpy arrays: 0.264898 seconds for 50000 repetitions
np.interp with Dask arrays: 242.829727 seconds for 50000 repetitions
</code></pre>
<p>Can anyone suggest if and how it is possible to speed the Dask interpolation up?</p>
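<p>A likely culprit (not verified against this exact setup) is that every tiny <code>map_blocks</code> call becomes a separate scheduler task, and the roughly millisecond-scale fixed cost per task dwarfs the microseconds of actual interpolation. One way to amortize that is to have each task process a whole batch of query arrays at once; the batching itself can be sketched with plain numpy:</p>

```python
import numpy as np

def interp_batch(xp_rows, x, y):
    # One call handles a 2-D batch of query arrays, so a single dask task
    # (e.g. map_blocks over the row axis) would perform many interpolations,
    # paying the per-task overhead once per batch instead of once per repeat.
    return np.stack([np.interp(row, x, y) for row in xp_rows])

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 2.0, 4.0])          # y = 2 * x, so linear interpolation is exact
batch = np.array([[0.5, 1.0], [1.5, 2.0]])
print(interp_batch(batch, x, y))
# [[1. 2.]
#  [3. 4.]]
```

<p>With chunks sized so that each task carries thousands of rows, the scheduler overhead is paid far fewer times; the interpolation work per task then dominates, which is the regime where dask's parallelism can actually help.</p>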
|
<python><numpy><interpolation><dask><dask-distributed>
|
2024-06-13 14:41:33
| 0
| 849
|
abinitio
|
78,618,405
| 621,594
|
I want the right click menu of pystray to also apear on left-click
|
<p>I have a very simple pystray icon with a menu
(basically the standard example code).</p>
<p>It's working so far: an icon appears, and if I right-click it the menu appears.</p>
<p>But so far I couldn't find a way to make the menu show on left-click too.</p>
<p>It seems like there is no way to do that. I can't find anything within the pystray documentation, and Google doesn't give me any hints either.</p>
<pre><code>from pystray import Icon as icon, Menu as menu, MenuItem as item
from PIL import Image, ImageDraw

def create_image(width, height, color1, color2):
    # Generate an image and draw a pattern
    image = Image.new('RGB', (width, height), color1)
    dc = ImageDraw.Draw(image)
    dc.rectangle(
        (width // 2, 0, width, height // 2),
        fill=color2)
    dc.rectangle(
        (0, height // 2, width // 2, height),
        fill=color2)
    return image

ico = icon('test', create_image(64, 64, 'black', 'white'), menu=menu(
    item(
        'With submenu',
        menu(
            item('Close App', lambda: ico.stop()),
            item('Submenu item 2', None)))))

ico.run()
</code></pre>
|
<python><pystray>
|
2024-06-13 14:11:40
| 0
| 837
|
Phillip
|
78,618,385
| 7,201,487
|
Making configuration name customizable
|
<p>I'm using Hydra to create configurations of layers in my neural network; these layers are then used in other configs to build network architectures. However, I find myself creating multiple near-identical configs, where the only difference between them, for example, is the width.</p>
<p>An example of a layer config is the following:</p>
<pre class="lang-yaml prettyprint-override"><code>diffusion128|gcn128|concurent:
  name: ProteinEncoder
  instanciate:
    _target_: atomsurf.networks.ProteinEncoderBlock
  kwargs:
    surface_encoder:
      name: DiffusionNetBlockBatch
      instanciate:
        _target_: atomsurf.network_utils.DiffusionNetBlockBatch # diffusion_net.DiffusionNet
      kwargs:
        C_width: 128
        mlp_hidden_dims: [128, 128]
        dropout: 0.0
        use_bn: true
        init_time: 2.0 # either null (for constant init) or a float
        init_std: 2.0
    graph_encoder:
      name: GCNx2Block
      instanciate:
        _target_: atomsurf.network_utils.GCNx2Block
      kwargs:
        dim_in: 128
        hidden_dims: 128
        dim_out: 128
        dropout: 0.0
        use_bn: true
        use_weighted_edge_distance: false
    communication_block:
      name: ConcurrentCommunication
      [....]
</code></pre>
<p>Is there a way to make the layer names customizable in the configuration? For example, changing <code>diffusion128|gcn128|concurrent</code> to <code>diffusion{$width1}|gcn{$width2}|concurrent</code> with <code>width1</code> and <code>width2</code> as parameters that can be defined when creating an architecture.</p>
|
<python><fb-hydra>
|
2024-06-13 14:08:27
| 1
| 559
|
schlodinger
|
78,618,381
| 17,795,398
|
Why numpy data type kind returns void when it was created as float64?
|
<p>I have this code:</p>
<pre><code>>>> d = np.dtype([("pos", np.float64, 3)])
>>> d[0].kind
'V'
</code></pre>
<p>Why does it return <code>'V'</code> instead of <code>'f'</code>? In the full code, I need to know if the field corresponds to an integer, float, string...</p>
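<p>For what it's worth, a sub-array field like <code>("pos", np.float64, 3)</code> is itself reported as a void (<code>'V'</code>) dtype; the scalar kind appears to be recoverable from the field dtype's <code>base</code> (or <code>subdtype</code>) attribute — a small sketch:</p>

```python
import numpy as np

d = np.dtype([("pos", np.float64, 3)])
field = d[0]              # sub-array dtype (float64, shape (3,)) -> kind 'V'
print(field.kind)         # -> 'V'
print(field.base.kind)    # -> 'f'  (base is the field itself when there is no sub-array)
print(field.subdtype)     # -> (dtype('float64'), (3,)) for sub-array fields, else None
```

<p>So checking <code>d[name].base.kind</code> works uniformly for both plain and sub-array fields.</p>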
|
<python><numpy>
|
2024-06-13 14:08:18
| 4
| 472
|
Abel Gutiérrez
|
78,618,340
| 890,537
|
Azure Function not triggered by Service Bus Queue
|
<p>Perhaps I'm missing something very basic here. Coming from GCP and AWS, Azure Functions are somewhat ... different in some aspects.</p>
<h1>What</h1>
<p>What I'm trying to do is have an Azure Function triggered by a Service Bus Queue. Or, one step back, I'm publishing to an Event Grid subject and need a function to reliably handle <em>all</em> messages published to that subject. So my thought was to put a queue as inbox in between because I assume that a message is gone if I trigger the function via AEG directly but the function fails to handle it (?).</p>
<p>I don't want to manage connection strings if I don't have to. So I tried <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-identity-based-connections-tutorial-2" rel="nofollow noreferrer">identity-based auth</a>, as well as Service Connector. But with neither of them does my function get triggered by messages in the queue.</p>
<h1>How</h1>
<p>I'll strip the probably unrelevant parts of the infra.</p>
<pre><code># topic
resource "azurerm_eventgrid_topic" "this" {
  name         = "mytopic"
  input_schema = "CloudEventSchemaV1_0"
  # ...
}

# service bus namespace ...

# queue
resource "azurerm_servicebus_queue" "this" {
  name                                 = "myqueue"
  namespace_id                         = azurerm_servicebus_namespace.xyz.id
  dead_lettering_on_message_expiration = true
  enable_partitioning                  = false
  # ...
}

# azurerm_service_plan Linux Y1

# function app
resource "azurerm_linux_function_app" "this" {
  name = "myfunctionapp"
  # ...
  storage_account_access_key    = azurerm_storage_account.xyz.primary_access_key
  service_plan_id               = azurerm_service_plan.this.id
  public_network_access_enabled = true
  builtin_logging_enabled       = true
  https_only                    = false

  identity {
    type = "SystemAssigned"
  }

  site_config {
    application_stack {
      python_version = "3.11"
    }
    always_on                 = false
    app_scale_limit           = 20
    pre_warmed_instance_count = 0
    cors {
      allowed_origins = ["https://portal.azure.com"]
    }
  }

  app_settings = {
    "AzureWebJobsFeatureFlags" = "EnableWorkerIndexing"
    # what I tried first
    "AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE" = "${azurerm_servicebus_namespace.this.name}.servicebus.windows.net"
    # identity-based
    "ServiceBusConnection__fullyQualifiedNamespace" = "${azurerm_servicebus_namespace.this.name}.servicebus.windows.net"
    "AZURE_STORAGEBLOB_RESOURCEENDPOINT"            = # ...
  }

  sticky_settings {
    # don't actually know what these are yet
    app_setting_names = [
      "AZURE_STORAGEBLOB_RESOURCEENDPOINT",
      "AZURE_SERVICEBUS_FULLYQUALIFIEDNAMESPACE",
    ]
  }

  lifecycle {
    ignore_changes = [
      app_settings["WEBSITE_RUN_FROM_PACKAGE"]
    ]
  }
}

# deploying azurerm_function_app_function via tf actually just does not work, I'm deploying via VSC

# subscription
resource "azurerm_eventgrid_event_subscription" "this" {
  name                                 = "mysub"
  scope                                = azurerm_eventgrid_topic.this.id
  event_delivery_schema                = "CloudEventSchemaV1_0"
  advanced_filtering_on_arrays_enabled = false
  service_bus_queue_endpoint_id        = azurerm_servicebus_queue.this.id
  retry_policy {
    event_time_to_live    = 1440
    max_delivery_attempts = 30
  }
}

# what I tried first but didn't work: service connector
resource "azurerm_app_service_connection" "this" {
  name               = "myserviceconn"
  app_service_id     = azurerm_linux_function_app.this.id
  target_resource_id = azurerm_servicebus_namespace.this.id
  client_type        = "python"
  authentication {
    type = "systemAssignedIdentity"
  }
}

resource "azurerm_role_assignment" "queue_service_bus_data_receiver" {
  scope                = azurerm_servicebus_queue.this.id
  role_definition_name = "Azure Service Bus Data Receiver"
  principal_id         = azurerm_linux_function_app.this.identity[0].principal_id
  principal_type       = "ServicePrincipal"
}

resource "azurerm_role_assignment" "queue_service_bus_data_owner" {
  scope                = azurerm_servicebus_queue.this.id
  role_definition_name = "Azure Service Bus Data Owner"
  principal_id         = azurerm_linux_function_app.this.identity[0].principal_id
  principal_type       = "ServicePrincipal"
}
# why both roles you wonder? Because I thought it may help: https://github.com/Azure/azure-functions-host/issues/8261
</code></pre>
<p>Function deyployed via VSC:</p>
<pre class="lang-py prettyprint-override"><code>@app.function_name(name="myfunc")
@app.service_bus_queue_trigger(
    arg_name="msg",
    queue_name="myqueue",
    connection="ServiceBusConnection__fullyQualifiedNamespace",
)
def test_function(msg: func.ServiceBusMessage):
    # whatever, just logging to see if it gets invoked
    pass
</code></pre>
<p><code>host.json</code> - I actually have no idea which options apply and what <em>exactly</em> they do yet. I also don't get what "Functions 2.x" and "Extension 5.x" <a href="https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus?tabs=in-process%2Cfunctionsv2%2Cextensionv3&pivots=programming-language-csharp#hostjson-settings" rel="nofollow noreferrer">are</a></p>
<pre class="lang-json prettyprint-override"><code>{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  },
  "extensions": {
    "serviceBus": {
      "prefetchCount": 10,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 4,
        "maxAutoRenewDuration": "00:05:00"
      },
      "sessionHandlerOptions": {
        "autoComplete": false,
        "messageWaitTimeout": "00:00:30",
        "maxAutoRenewDuration": "00:55:00",
        "maxConcurrentSessions": 4
      },
      "batchOptions": {
        "maxMessageCount": 10,
        "operationTimeout": "00:01:00",
        "autoComplete": true
      }
    }
  }
}
</code></pre>
<p>When I publish a message to the AEG subject it ends up in the queue but the function never gets invoked.</p>
<p>How do I make this work? I read about Consumption plan <em>maybe</em> not supporting Service Bus Queues (yet)? Does Azure maybe not abstract away scaling to zero and thus just never polls the queue? Do I really have to like add an HTTP trigger and a timer to have this thing actually running? Is my whole setup crap?</p>
<p>Side question - what's the difference between service connector and just identity-based access via role assignments (either user- or system-managed)?</p>
<p>/edit: I see that the topic subscription itself has dead-lettering and retries. So I will probably go with this instead.</p>
|
<python><azure><terraform><azure-functions><azureservicebus>
|
2024-06-13 14:00:15
| 0
| 3,161
|
m02ph3u5
|
78,618,260
| 5,166,312
|
Using Bleak with threading in a PyQt/PySide application
|
<p>I have a PyQt/PySide application where I use threads to process commands from a queue. In one thread, I need to use Bleak to work with BLE devices. Bleak is based on asyncio, and I am having trouble integrating these asynchronous functions into a synchronous thread. Here is a sample of my code:</p>
<pre><code>import os
from pathlib import Path
import sys

from PySide6.QtCore import QObject, Slot, Signal
from PySide6.QtQml import QQmlApplicationEngine
from PySide6.QtWidgets import QApplication

from workerCore import CoreWorker
from queue import Queue
from threading import Thread
import queueCmds as qc

class AppWindow(QObject):
    sigPairingPopUp = Signal()

    def __init__(self):
        super().__init__(None)
        self.queueCore = Queue()
        self.workerCore = CoreWorker(self)
        x = Thread(target=self.workerCore.run)
        x.start()

    @Slot()
    def scanBLE(self):
        sd = qc.QueueData()
        sd.cmd = qc.CoreQC.SCAN_BLE
        self.queueCore.put(sd)

if __name__ == "__main__":
    main = QApplication(sys.argv)
    engine = QQmlApplicationEngine()
    appWin = AppWindow()
    engine.rootContext().setContextProperty("backend", appWin)
    engine.load(os.fspath(Path(__file__).resolve().parent / "qml/main.qml"))
    if not engine.rootObjects():
        sys.exit(-1)
    sys.exit(main.exec())
</code></pre>
<p>workerCore.py</p>
<pre><code>import queueCmds as qc
from threading import Thread
import asyncio
from bleak import BleakScanner, BleakClient

CHARACTERISTIC_UUID = "...."

class CoreWorker(Thread):
    def __init__(self, app, parent=None):
        super(CoreWorker, self).__init__(parent)
        self.app = app

    def run(self):
        while True:
            rxd = self.app.queueCore.get()
            if rxd.cmd == qc.CoreQC.SCAN_BLE:
                asyncio.run(self.scan_devices())

    async def scan_devices(self):
        devices = await BleakScanner.discover()
        for i, device in enumerate(devices):
            print(f"{i}: {device.name} ({device.address})")
        return devices

    async def connect_and_read(self, device_address):
        async with BleakClient(device_address) as client:
            print(f"Connected to {device_address}")
            value = await client.read_gatt_char(CHARACTERISTIC_UUID)
            print(f"Read value: {value}")
</code></pre>
<p><strong>Problem</strong>:
How can I correctly integrate asynchronous Bleak functions into a synchronous thread?
At a minimum, it seems strange to combine asyncio with threading. Either way, my call to scan_devices is not working. What is the best way to solve this?</p>
<p>Thank you!</p>
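<p>One pattern that seems to fit this setup is to keep a single long-lived event loop in the worker thread and submit coroutines to it with <code>asyncio.run_coroutine_threadsafe</code>, rather than calling <code>asyncio.run</code> per command (which creates and tears down a loop every time). A stdlib-only sketch — the class and method names are mine, not from Bleak or Qt:</p>

```python
import asyncio
import threading

class AsyncWorker:
    """Owns one long-lived event loop in a background thread; submit coroutines to it."""

    def __init__(self):
        self.loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        asyncio.set_event_loop(self.loop)
        self.loop.run_forever()

    def submit(self, coro):
        # Thread-safe: callable from the Qt/main thread.
        # Returns a concurrent.futures.Future for the coroutine's result.
        return asyncio.run_coroutine_threadsafe(coro, self.loop)

    def stop(self):
        self.loop.call_soon_threadsafe(self.loop.stop)
        self._thread.join()
```

<p>With this shape, the Bleak calls (<code>scan_devices</code>, <code>connect_and_read</code>) would all be submitted to the same loop, so connections created in one coroutine remain usable in later ones — something per-command <code>asyncio.run</code> cannot give you.</p>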
|
<python><multithreading><python-asyncio><pyside><python-bleak>
|
2024-06-13 13:47:41
| 0
| 337
|
WITC
|
78,618,147
| 3,527,454
|
Select multiple columns from array, multiple times
|
<p>Hi I have the following setup:</p>
<p>from scipy</p>
<pre><code>def _bootstrap_resample(sample, n_resamples=None, random_state=None):
    """Bootstrap resample the sample."""
    n = sample.shape[-1]

    # bootstrap - each row is a random resample of original observations
    i = rng_integers(random_state, 0, n, (n_resamples, n))
    resamples = sample[..., i]
    return resamples
</code></pre>
<p>in my case:</p>
<p>sample:</p>
<pre><code>[[ 0 1 2 3 4 5 6 7 8 9]
[10 11 12 13 14 15 16 17 18 19]]
</code></pre>
<p>i:</p>
<pre><code>[[0 0 0 0 0 1 1 1 1 1]
[2 2 2 2 2 3 3 3 3 3]]
</code></pre>
<p>what i want:</p>
<pre><code>[[[ 0 0 0 0 0 1 1 1 1 1]
[10 10 10 10 10 11 11 11 11 11]]
[[ 2 2 2 2 2 3 3 3 3 3]
[12 12 12 12 12 13 13 13 13 13]]]
</code></pre>
<p>i.e. each row of <code>i</code> is supposed to say which columns to take, creating one new sample per row. The provided code:</p>
<pre><code>resamples = sample[..., i]
</code></pre>
<p>does not do that unfortunately and produces</p>
<pre><code>[[[ 0 0 0 0 0 1 1 1 1 1]
[ 2 2 2 2 2 3 3 3 3 3]]
[[10 10 10 10 10 11 11 11 11 11]
[12 12 12 12 12 13 13 13 13 13]]]
</code></pre>
<p>How can I obtain what I want here?</p>
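<p>For what it's worth, the layout described looks reachable from the same fancy indexing by swapping the first two axes afterwards: <code>sample[..., i]</code> produces axes (sample-row, i-row, column), and the desired output wants the i-row axis first. A sketch:</p>

```python
import numpy as np

sample = np.arange(20).reshape(2, 10)
i = np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
              [2, 2, 2, 2, 2, 3, 3, 3, 3, 3]])

# sample[..., i][a, b, c] == sample[a, i[b, c]]; swapping axes 0 and 1
# puts the resample index first: resamples[b] == sample[:, i[b]]
resamples = sample[..., i].swapaxes(0, 1)
print(resamples[0])
# [[ 0  0  0  0  0  1  1  1  1  1]
#  [10 10 10 10 10 11 11 11 11 11]]
```

<p>Equivalently <code>np.swapaxes(sample[..., i], 0, 1)</code> or <code>sample[..., i].transpose(1, 0, 2)</code>; the swap is a view, so no data is copied.</p>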
|
<python><python-3.x><numpy><scipy><numpy-ndarray>
|
2024-06-13 13:23:08
| 1
| 355
|
Revist
|
78,618,055
| 5,678,653
|
derive (approximate) rotation transform matrix (numpy) on a unit sphere given a mapping of vectors (n=12)
|
<p>While I am aware of vaguely similar questions, I seem to be stuck on this.</p>
<p>Buckminster Fuller introduced a spherical mapping of the world onto an icosahedron - it's known as the <a href="https://en.wikipedia.org/wiki/Dymaxion_map" rel="nofollow noreferrer">Dymaxion Map</a></p>
<p>A <a href="https://en.wikipedia.org/wiki/Regular_icosahedron#Construction" rel="nofollow noreferrer">common way of identifying the cartesian coordinates of an icosahedron</a> is by using the coordinates, where 𝜑 is the golden ratio: (1+√5)/2) or 2cos(π/5.0)</p>
<p><code>(0,±1,±1𝜑),(±1,±1𝜑,0),(±1𝜑,0,±1)</code></p>
<p>Expanding this out gives me the location of the 12 vertices of a regular icosahedron with side length 2:</p>
<pre class="lang-py prettyprint-override"><code> 𝜑 = 2.0 * math.cos(π / 5.0)
ico_vertices = [
(0, -1, -𝜑), (0, -1, +𝜑), (0, +1, -𝜑), (0, +1, +𝜑),
(-1, -𝜑, 0), (-1, +𝜑, 0), (+1, -𝜑, 0), (+1, +𝜑, 0),
(-𝜑, 0, -1), (-𝜑, 0, +1), (+𝜑, 0, -1), (+𝜑, 0, +1)
]
</code></pre>
<p>Needless to say these need to be normalised.</p>
<pre class="lang-py prettyprint-override"><code> iv = np.array(ico_vertices)
iv_n = ((iv[:, None] ** 2).sum(2) ** 0.5).reshape(-1, 1)
ico = iv / iv_n #This is the starting set of vertices.
</code></pre>
<p>Here is an image of the golden ratio 𝜑 coordinates projected onto Google Maps.
<a href="https://i.sstatic.net/oT1OOd2A.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oT1OOd2A.png" alt="Ico Vertices" /></a></p>
<p>Fuller's icosahedron, mapped onto a spherical projection of the globe, defined using these 12 vertices: (the xyz values are already normalised to a unit sphere)</p>
<pre class="lang-json prettyprint-override"><code>{
"vertices":[
{"name":"CHINA", "ll": [[39.10000000, "N"],[122.30000000,"E"]], "xyz":[-0.41468222, 0.65596241, 0.63067581]},
{"name":"NORWAY", "ll": [[64.70000000, "N"],[10.53619898 ,"E"]], "xyz":[ 0.42015243, 0.07814525, 0.90408255]},
{"name":"ARABIAN SEA", "ll": [[10.44734504, "N"],[58.15770555 ,"E"]], "xyz":[ 0.51883673, 0.83542038, 0.18133184]},
{"name":"LIBERIA", "ll": [[2.30088201 , "N"],[5.24539058 ,"W"]], "xyz":[ 0.99500944, -0.09134780, 0.04014717]},
{"name":"PUERTO RICO", "ll": [[23.71792533, "N"],[67.13232659 ,"W"]], "xyz":[ 0.35578140, -0.84358000, 0.40223423]},
{"name":"ALASKA", "ll": [[50.10320164, "N"],[143.47849033,"W"]], "xyz":[-0.51545596, -0.38171689, 0.76720099]},
{"name":"BUENOS AIRES", "ll": [[39.10000000, "S"],[57.70000000 ,"W"]], "xyz":[ 0.41468222, -0.65596241,-0.63067581]},
{"name":"ANTARCTICA", "ll": [[64.70000000, "S"],[169.46380102,"W"]], "xyz":[-0.42015243, -0.07814525,-0.90408255]},
{"name":"PITCAIRN ISLAND", "ll": [[10.44734504, "S"],[121.84229445,"W"]], "xyz":[-0.51883673, -0.83542038,-0.18133184]},
{"name":"GILBERT ISLAND", "ll": [[2.30088201 , "S"],[174.75460942,"E"]], "xyz":[-0.99500944, 0.09134780, -0.04014717]},
{"name":"AUSTRALIA", "ll": [[23.71792533, "S"],[112.86767341,"E"]], "xyz":[-0.35578140, 0.84358000, -0.40223423]},
{"name":"PRINCE EDWARD ISLAND", "ll": [[50.10320164, "S"],[36.52150967 ,"E"]], "xyz":[ 0.51545596, 0.38171689, -0.76720099]}
]
}
</code></pre>
<p>Here is an image of the dymaxion coordinates projected onto Google Maps.
<a href="https://i.sstatic.net/JpJlkir2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpJlkir2.png" alt="image of dymaxion coordinates projected onto Google Maps" /></a></p>
<p>The dymaxion coordinates are (via a json load) loaded into a numpy array 'dym_in'.
The order of the two definitions is not the same - so the mapping is (this may be wrong).</p>
<pre class="lang-py prettyprint-override"><code> i2d = [6, 4, 10, 0, 8, 9, 3, 2, 7, 5, 11, 1]
# ico[i] is equivalent to dym_in[i2d[i]]
dym = np.array([dym_in[m] for m in i2d ])
</code></pre>
<p>So now I have 12 normalised vertices in 'ico' and 12 dymaxion map vertices in 'dym', which are ordered such that ico[x] => dym[x].
I want to find the rotation (or approximate rotation) matrix that transforms ico to dym.</p>
<p>I say approximate, because the coordinates in given dym may not exactly mathematically define an icosahedron. I do not know because I do not know how to derive the transform!</p>
<p>What I know for sure is that the geoid is not relevant here - the Dymaxion starts from a spherical earth projection.</p>
<p>Likewise, I freely admit there may be bugs in my assumptions above.</p>
<p>What I want is to be able to derive the rotational matrix of any set of 12 icosahedral points from the initial golden-ratio starting set - bearing in mind that there are several 12(?) rotations to choose from, of course.</p>
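<p>The usual least-squares answer to "best rotation mapping one paired point set onto another" is the Kabsch / orthogonal Procrustes algorithm: an SVD of the 3x3 cross-covariance matrix. A sketch, assuming the <code>ico</code> and <code>dym</code> rows are already paired and unit-length:</p>

```python
import numpy as np

def best_rotation(src, dst):
    """Rotation R (det +1) minimizing sum ||R @ src[k] - dst[k]||^2 (Kabsch)."""
    H = src.T @ dst                                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

# self-check with a known rotation (90 degrees about z)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
pts = np.array([[1.0, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]])
R = best_rotation(pts, pts @ Rz.T)
print(np.allclose(R, Rz))   # True
```

<p>With your data, <code>best_rotation(ico, dym)</code> returns the least-squares rotation even when <code>dym</code> is only approximately icosahedral, which is exactly the "approximate" behavior asked for; <code>scipy.spatial.transform.Rotation.align_vectors</code> implements the same idea if SciPy is available.</p>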
|
<python><numpy><mapping><computational-geometry>
|
2024-06-13 13:08:01
| 2
| 2,248
|
Konchog
|
78,617,576
| 5,305,512
|
In Confluence, how to replicate manual search with API search?
|
<p>I am following the Confluence API search <a href="https://developer.atlassian.com/cloud/confluence/advanced-searching-using-cql/" rel="nofollow noreferrer">documentation</a> to implement text search with CQL (confluence query language) in the company Confluence pages. Here is my code for a search query:</p>
<pre><code>import requests
from requests.auth import HTTPBasicAuth
import urllib.parse
# Replace with your Confluence credentials and base URL
base_url = 'https://your-domain.atlassian.net/wiki'
username = 'your-email@example.com'
api_token = CONFLUENCE_API_TOKEN
search_term = 'What is the per diem costs allowed to be reimbursed for business trips to Paris?'
encoded_search_term = urllib.parse.quote(search_term)
# Construct the search URL with encoded search term
search_url = f'{base_url}/rest/api/content/search?cql=text~"{encoded_search_term}"&limit=10'
# Send the request
response = requests.get(search_url, auth=HTTPBasicAuth(username, api_token))
# Check for successful response
if response.status_code == 200:
    search_results = response.json()
    print(search_results)
else:
    print(f'Error: {response.status_code}, {response.text}')
</code></pre>
<p>Here is the result returned:</p>
<pre><code>{'results': [], 'start': 0, 'limit': 10, 'size': 0, '_links': {'base': 'https://your-domain.atlassian.net/wiki', 'context': '/wiki', 'self': 'https://your-domain.atlassian.net/wiki/rest/api/content/search?cql=text~%22What%20is%20the%20per%20diem%20costs%20allowed%20to%20be%20reimbursed%20for%20business%20trips%20to%20Paris%3F%22'}}
</code></pre>
<p>So zero documents returned during the search.</p>
<p>Whereas if I do the search manually in the Confluence page, here is the result:</p>
<p><a href="https://i.sstatic.net/6HywGllB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HywGllB.png" alt="enter image description here" /></a></p>
<p>There are loads of documents retrieved with manual search.</p>
<p>When I tried shorter search queries with the API, I do get results. Like, for "What is the per diem costs", I get 14 confluence documents retrieved; for "What is the per diem costs allowed to be reimbursed", I get 7 results; and 4 results for "What is the per diem costs allowed to be reimbursed for business trips". But zero results for "What is the per diem costs allowed to be reimbursed for business trips to Paris?"</p>
<p>But these are all nowhere close to what I get with manual search (thousands of documents retrieved).</p>
<p>So, how do I replicate this manual search with the API? What is the search algorithm used in the "simple" manual search?</p>
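<p>One thing worth trying (I have only seen this in the CQL field reference, so treat it as an assumption to verify): CQL has a <code>siteSearch</code> field that is documented as matching content the way the site's own search box does, unlike the stricter <code>text</code> field. Only the CQL string in the URL would change:</p>

```python
import urllib.parse

def site_search_url(base_url, query, limit=10):
    # 'siteSearch ~ "..."' instead of 'text ~ "..."' -- per the CQL field
    # reference this is the field backing the UI quick search.
    cql = f'siteSearch ~ "{query}"'
    return f"{base_url}/rest/api/content/search?cql={urllib.parse.quote(cql)}&limit={limit}"

print(site_search_url("https://your-domain.atlassian.net/wiki", "per diem Paris"))
```

<p>Another difference from the UI may be phrase handling: a long quoted natural-language string under <code>text ~</code> is matched much more strictly than the UI's relevance-ranked search, which would explain why shorter fragments return progressively more hits.</p>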
<hr />
<p>Here is the same search with the <a href="https://pypi.org/project/atlassian-python-api/" rel="nofollow noreferrer">Atlassian API</a>. The result is the same, zero documents returned:</p>
<p><a href="https://i.sstatic.net/XWjXrdyc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWjXrdyc.png" alt="enter image description here" /></a></p>
|
<python><confluence><confluence-rest-api>
|
2024-06-13 11:31:16
| 1
| 3,764
|
Kristada673
|
78,617,556
| 7,235,767
|
Unable to fix this playwright error: ValueError: The future belongs to a different loop than the one specified as the loop argument
|
<p>Getting this error:</p>
<p><strong>ValueError: The future belongs to a different loop than the one specified as the loop argument.</strong></p>
<p><strong>Future exception was never retrieved
future: &lt;Future finished exception=TargetClosedError('Target page, context or browser has been closed')&gt;
playwright._impl._errors.TargetClosedError: Target page, context or browser has been closed</strong></p>
<p><em>Content of "conftest.py" file</em></p>
<pre><code>import pytest
import pyotp
from playwright.async_api import async_playwright

browsers = ["chromium"]

@pytest.fixture(scope="session", params=browsers)
async def browser_context_creation_admin(request):
    browser_type = request.param
    async with async_playwright() as p:
        browser = await getattr(p, browser_type).launch(headless=False, slow_mo=300)
        context = await browser.new_context()
        page = await context.new_page()
        page.set_default_timeout(20000)
        await page.goto("")
        await page.get_by_label("Username").click()
        await page.get_by_label("Username").fill("")
        await page.get_by_role("button", name="Next").click()
        await page.get_by_role("link", name="Select").first.click()
        totp = pyotp.parse_uri()
        generated_otp = totp.now()
        await page.get_by_label("Enter code").fill(generated_otp)
        await page.get_by_role("button", name="Verify").click()
        await page.get_by_role("link", name="Select").nth(1).click()
        await page.get_by_label("Password").fill("")
        await page.get_by_role("button", name="Verify").click()
        await page.get_by_role("button", name="I Accept").click()
        yield context
        await context.close()
        await browser.close()

@pytest.fixture()
async def login_set_up_admin(browser_context_creation_admin):
    admin_context = browser_context_creation_admin
    admin_page = await admin_context.new_page()
    await admin_page.goto("")
    admin_context.set_default_timeout(20000)
    yield admin_page
    await admin_page.close()
</code></pre>
<p><em><strong>Content of test_login.py file</strong></em></p>
<pre><code>async def test_login_with_valid_creds_admin(login_set_up_admin):
    admin_page = login_set_up_admin
    await admin_page.locator('xpath=//button[@id="icon-btn-3"]//div[@class="notification"]//*[name()="svg"]//'
                             '*[name()="path" and contains(@d,"M16,2A14,1")]').click()
    assert await admin_page.get_by_role("button", name="Logout").is_visible()
</code></pre>
|
<python><pytest><playwright><playwright-python><pytest-asyncio>
|
2024-06-13 11:28:02
| 2
| 305
|
Basavaraj Lamani
|
78,617,536
| 8,271,180
|
A reliable way to check if a method has been wrapped with a decorator from within it
|
<p>In python 3.7 or higher.</p>
<p>I have the following class:</p>
<pre><code>import functools

def dec(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        print("wrapped")
        return method(*args, **kwargs)
    return wrapper

class A:
    @dec
    def f1(self):
        print(is_wrapped())

    def f2(self):
        print(is_wrapped())
</code></pre>
<p>I want <code>A().f1()</code> to print <code>True</code> and <code>A().f2()</code> to print <code>False</code>.</p>
<p>I created the following code for <code>is_wrapped</code>:</p>
<pre><code>import inspect

def is_wrapped():
    frame = inspect.currentframe().f_back
    for v in frame.f_back.f_locals.values():
        if hasattr(v, '__code__') and v.__code__ is frame.f_code:
            return True
    return False
</code></pre>
<p>While this seems to work, it can fail if the caller of <code>f2</code> happens to hold <code>f2</code> in a local variable without having decorated it.</p>
<p>For my specific case you can assume the following.</p>
<ol>
<li><p>The is_wrapped function is called directly from the function in a way that inspect.currentframe().f_back is the method that called it (i.e, f1 or f2)</p>
</li>
<li><p>The is_wrapped function cannot get any arguments.</p>
</li>
<li><p>The decorator can be implemented in any way, as long as it uses <code>functools.wraps</code> for the wrapped function.</p>
</li>
</ol>
<p>Is there any more reliable way to achieve this?</p>
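<p>One approach that seems to satisfy those constraints is to stop guessing from the caller's locals and instead register the wrapper's code object at decoration time. Since the wrapper function is defined once inside <code>dec</code>, every decorated method shares the same code object, so <code>is_wrapped</code> only has to look two frames up (the names here are my own sketch, not a standard recipe):</p>

```python
import functools
import inspect

_WRAPPER_CODES = set()

def dec(method):
    @functools.wraps(method)
    def wrapper(*args, **kwargs):
        return method(*args, **kwargs)
    _WRAPPER_CODES.add(wrapper.__code__)   # all wrappers share this one code object
    return wrapper

def is_wrapped():
    caller = inspect.currentframe().f_back   # the method that called us (f1/f2)
    outer = caller.f_back                    # its caller: the wrapper, if decorated
    return outer is not None and outer.f_code in _WRAPPER_CODES

class A:
    @dec
    def f1(self):
        return is_wrapped()

    def f2(self):
        return is_wrapped()

print(A().f1(), A().f2())   # True False
```

<p>This relies only on assumption (1): if a decorated <code>f1</code> called <code>f2</code>, then <code>f2</code>'s caller frame would be <code>f1</code>, not the wrapper, so <code>f2</code> would still correctly report <code>False</code> — and a caller merely holding the function in a local no longer causes a false positive.</p>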
|
<python><python-decorators><inspect>
|
2024-06-13 11:23:14
| 2
| 1,356
|
Tomer Wolberg
|
78,617,530
| 3,367,720
|
How to configure HuggingFaceEndpoint in Langchain
|
<p>I'm trying to use this model</p>
<pre><code>from langchain_huggingface import HuggingFaceEndpoint

repo_id = "google/flan-t5-large"

huggingface_llm = HuggingFaceEndpoint(
    huggingfacehub_api_token=HUGGINGFACEHUB_API_TOKEN,
    repo_id=repo_id,
    temperature=0,
    max_new_tokens=200)

from langchain.prompts import PromptTemplate

def flan_process(tema, pregunta):
    template = "Eres un experto asistente en {tema}. Responde a la siguiente pregunta: {pregunta}"
    prompt = PromptTemplate(template=template, input_variables=["tema", "pregunta"])
    flan_chain = prompt | huggingface_llm
    respuesta = flan_chain.invoke({"tema": tema, "pregunta": pregunta})
    return respuesta

tema = input("Ingrese el tema: ")
pregunta = input("Ingrese la pregunta: ")
flan_reply = flan_process(tema, pregunta)
print(f"Respuesta Flan: {flan_reply}")
</code></pre>
<p>But I always get this error: "The following <code>model_kwargs</code> are not used by the model: <code>['return_full_text', 'watermark', 'stop_sequences', 'stop']</code> (note: typos in the generate arguments will also show up in this list)".</p>
<p>Any idea please?</p>
<p>Thanks</p>
|
<python><langchain><huggingface>
|
2024-06-13 11:21:39
| 2
| 1,343
|
kintela
|
78,617,498
| 7,120,013
|
Problem with OPC UA Client (async handling) and GUI Interaction
|
<p>I'm working on developing an OPC UA client with a GUI interface. I've successfully connected to the server and interacted with it within the main loop. However, when I try to call methods via a class that manages the GUI, I encounter exceptions without any error messages.</p>
<p>Currently, I try to get running the List Namespace Button.</p>
<p>Here's a version of my code:</p>
<pre class="lang-py prettyprint-override"><code>
import asyncio
import tkinter as tk
from asyncua import Client
import threading
import logging

_logger = logging.getLogger(__name__)

# Address of the OPC UA server
SERVER_URL = "opc.tcp://localhost:4840/freeopcua/server/"
NAMESPACE_URI = "http//freeopcua/defaults/modeler"

class OpcuaClient:
    def __init__(self, server_url):
        self.client = Client(server_url)

    async def connect(self):
        try:
            await self.client.connect()
            print("Verbunden mit dem OPC UA Server.")
        except Exception as e:
            print(f"Fehler beim Verbinden mit dem OPC UA Server: {e}")

    async def disconnect(self):
        try:
            await self.client.disconnect()
            print("Verbindung zum OPC UA Server getrennt.")
        except Exception as e:
            print(f"Fehler beim Trennen von dem OPC UA Server: {e}")

    async def get_node(self, node_id):
        try:
            return self.client.get_node(node_id)
        except Exception as e:
            print(f"Fehler beim Abrufen des Knotens: {e}")
            return None

    async def set_valve(self, valve_id, state):
        try:
            node = await self.get_node(f"ns=2;s=Valve_{valve_id}.State")  # change the node identifier accordingly
            if node:
                await node.set_value(state)
                print(f"Ventil {valve_id} Zustand gesetzt auf {state}")
        except Exception as e:
            print(f"Fehler beim Setzen des Zustands von Ventil {valve_id}: {e}")

    async def list_namespaces(self):
        try:
            print("list_namespaces => start")
            print(self.client.description)
            await self.client.check_connection()
            namespaces = await self.client.get_namespace_array()
            print("Namespaces auf dem Server:")
            for idx, namespace in enumerate(namespaces):
                print(f"Namespace {idx}: {namespace}")
            return namespaces
        except Exception as e:
            print(f"Fehler beim Abrufen der Namespaces: {e}")
            return []

class Application(tk.Tk):
    def __init__(self, opcua_client: OpcuaClient, loop):
        super().__init__()
        self.opcua_client = opcua_client
        self.loop = loop
        self.title("OPC UA Client")
        self.geometry("300x300")
        self.buttons = []
        for i in range(1, 5):
            button = tk.Button(self, text=f"Toggle Valve {i}", command=lambda i=i: self.toggle_valve(i))
            button.pack(pady=10)
            self.buttons.append(button)
        self.namespace_button = tk.Button(self, text="List Namespaces", command=self.list_namespaces)
        self.namespace_button.pack(pady=10)

    def toggle_valve(self, valve_number):
        print(f"Button für Ventil {valve_number} geklickt.")
        asyncio.run_coroutine_threadsafe(self.opcua_client.set_valve(valve_number, not state), self.loop)

    def list_namespaces(self):
        print("list_namespaces")
        asyncio.run_coroutine_threadsafe(self.opcua_client.list_namespaces(), self.loop)

async def main(loop):
    opcua_client = OpcuaClient(SERVER_URL)
    await opcua_client.connect()
    idx = await opcua_client.client.get_namespace_array()
    _logger.info("index of our namespace is %s", idx)
    await opcua_client.list_namespaces()
    app = Application(opcua_client, loop)
    app.protocol("WM_DELETE_WINDOW", lambda: asyncio.run_coroutine_threadsafe(on_closing(app, opcua_client), loop))
    app.mainloop()

async def on_closing(app, opcua_client):
    await opcua_client.disconnect()
    app.destroy()

def start_loop(loop):
    asyncio.set_event_loop(loop)
    loop.run_forever()

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    loop = asyncio.new_event_loop()
    threading.Thread(target=start_loop, args=(loop,), daemon=True).start()
    asyncio.run(main(loop))
</code></pre>
<p>In the above code, connecting and interacting with the OPC UA server directly within the main function works fine. However, when I encapsulate the GUI logic into the Application class and try to interact with the OPC UA client through button callbacks (list_namespaces), I encounter exceptions without any specific error messages.</p>
<p>What could be causing this issue? How can I properly handle interactions between the GUI and OPC UA client methods?</p>
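<p>While debugging I noticed that <code>asyncio.run_coroutine_threadsafe</code> returns a <code>concurrent.futures.Future</code>, and if that future is never inspected, any exception raised inside the coroutine is stored silently instead of being printed. A small helper I experimented with to at least log those exceptions (a sketch; the helper name is my own):</p>

```python
import asyncio
import threading
import traceback


def run_bg(coro, loop):
    """Schedule a coroutine on the background event loop and log any
    exception it raises, instead of letting it vanish inside the Future."""
    fut = asyncio.run_coroutine_threadsafe(coro, loop)

    def _log_exception(f):
        exc = f.exception()
        if exc is not None:
            traceback.print_exception(type(exc), exc, exc.__traceback__)

    fut.add_done_callback(_log_exception)
    return fut
```

<p>Calling <code>run_bg(self.opcua_client.list_namespaces(), self.loop)</code> from the button callbacks instead of the bare <code>run_coroutine_threadsafe</code> call at least makes the underlying traceback visible.</p>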
|
<python><python-asyncio><opc-ua>
|
2024-06-13 11:13:08
| 0
| 393
|
swiftUser
|
78,617,484
| 9,506,773
|
How do I store vectors generated by AzureOpenAIEmbeddingSkill in indexer given my current setup
|
<p>This is a follow up question to: <a href="https://stackoverflow.com/questions/78602675/error-in-azure-cognitive-search-service-when-storing-document-page-associated-to/78615922?noredirect=1#comment138603745_78615922">Error in Azure Cognitive Search Service when storing document page associated to each chunk extracted from PDF in a custom WebApiSkill</a></p>
<p>How do I store the vectors generated by AzureOpenAIEmbeddingSkill in the indexer, given my current setup:</p>
<ul>
<li>Custom WebApiSkill:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>combined_list = [{'textItems': text, 'numberItems': number} for text, number in zip(chunks, page_numbers)]
# response object for specific pdf
response_record = {
"recordId": recordId,
"data": {
"subdata": combined_list
}
}
response_body['values'].append(response_record)
</code></pre>
<ul>
<li>Skillset definition:</li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>{
  ...
  "description": "Skillset to chunk documents and generating embeddings",
  "skills": [
    {
      "@odata.type": "#Microsoft.Skills.Custom.WebApiSkill",
      "name": "splitclean",
      "description": "Custom split skill to chunk documents with specific chunk size and overlap",
      "context": "/document",
      "httpMethod": "POST",
      "timeout": "PT30S",
      "batchSize": 1,
      "degreeOfParallelism": null,
      "authResourceId": null,
      "inputs": [
        {
          "name": "text",
          "source": "/document/content"
        }
      ],
      "outputs": [
        {
          "name": "subdata",
          "targetName": "subdata"
        }
      ],
      "authIdentity": null
    },
    {
      "name": "#2",
      "description": "Skill to generate embeddings via Azure OpenAI",
      "context": "/document/subdata/*",
      "apiKey": "<redacted>",
      "deploymentId": "embedding-ada-002",
      "dimensions": null,
      "modelName": "experimental",
      "inputs": [
        {
          "name": "text",
          "source": "/document/subdata/*/textItems"
        }
      ],
      "outputs": [
        {
          "name": "embedding",
          "targetName": "vector"
        }
      ],
      "authIdentity": null
    }
  ],
  "cognitiveServices": null,
  "knowledgeStore": null,
  "indexProjections": {
    "selectors": [
      {
        "parentKeyFieldName": "parent_id",
        "sourceContext": "/document/subdata/*",
        "mappings": [
          {
            "name": "chunk",
            "source": "/document/subdata/*/textItems",
            "sourceContext": null,
            "inputs": []
          },
          {
            "name": "vector",
            "source": "/document/subdata/*/vector",
            "sourceContext": null,
            "inputs": []
          },
          {
            "name": "title",
            "source": "/document/metadata_storage_name",
            "sourceContext": null,
            "inputs": []
          },
          {
            "name": "page_number",
            "source": "/document/subdata/*/numberItems",
            "sourceContext": null,
            "inputs": []
          }
        ]
      }
    ],
    "parameters": {
      "projectionMode": "skipIndexingParentDocuments"
    }
  },
  "encryptionKey": null
}
</code></pre>
<p>I get the following error in <code>AzureOpenAIEmbeddingSkill</code>:</p>
<pre><code>Web Api response status: 'Unauthorized', Web Api response details: '{"error":{"code":"401","message":"Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource."}}'
</code></pre>
|
<python><azure><azure-cognitive-search>
|
2024-06-13 11:09:46
| 1
| 3,629
|
Mike B
|
78,617,300
| 10,200,497
|
What is the most efficient way to fillna multiple columns with values from other columns in a way that they can be paired with a suffix?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
    {
        'x': [1, np.nan, 3, np.nan, 5],
        'y': [np.nan, 7, 8, 9, np.nan],
        'x_a': [1, 2, 3, 4, 5],
        'y_a': [6, 7, 8, 9, 10]
    }
)
</code></pre>
<p>Expected output is <code>fill_na</code> columns <code>x</code> and <code>y</code>:</p>
<pre><code> x y x_a y_a
0 1.0 6.0 1 6
1 2.0 7.0 2 7
2 3.0 8.0 3 8
3 4.0 9.0 4 9
4 5.0 10.0 5 10
</code></pre>
<p>Basically I want to fillna <code>x</code> with <code>x_a</code> and <code>y</code> with <code>y_a</code>. In other words, each column should be paired with the column whose name is the same plus the <code>_a</code> suffix.</p>
<p>I can get this output by using this code:</p>
<pre><code>for col in ['x', 'y']:
    df[col] = df[col].fillna(df[f'{col}_a'])
</code></pre>
<p>But I wonder whether this is the most efficient way. Suppose I had hundreds of columns like these.</p>
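<p>For comparison, a vectorized variant I have been experimenting with (a sketch; it relies on <code>DataFrame.fillna</code> aligning fill values by column label, so the <code>_a</code> columns are renamed first):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        'x': [1, np.nan, 3, np.nan, 5],
        'y': [np.nan, 7, 8, 9, np.nan],
        'x_a': [1, 2, 3, 4, 5],
        'y_a': [6, 7, 8, 9, 10],
    }
)

base_cols = ['x', 'y']
# Rename x_a -> x, y_a -> y so fillna can align the fill values by label.
fill_values = df[[f'{c}_a' for c in base_cols]].rename(columns=lambda c: c[:-2])
df[base_cols] = df[base_cols].fillna(fill_values)
```

<p>This fills all base columns in one pass instead of one <code>fillna</code> call per column.</p>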
|
<python><pandas><dataframe>
|
2024-06-13 10:28:30
| 6
| 2,679
|
AmirX
|
78,616,941
| 12,550,791
|
How is typing a function with list[TypeVar bound to MotherClass] different from typing directly with list[MotherClass]?
|
<p>I wrote a small utilitarian function to convert a list of class instances to a list of dictionaries. The implementation is the following:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import RootModel, BaseModel
def convert_pydantic_list_to_dict_list(pydantic_sequence):
    return RootModel(pydantic_sequence).model_dump()
# For the sake of the example
class MyModel(BaseModel): ...
</code></pre>
<p>However, I figured that this function accepts both list of pydantic models (that are children of BaseModel) or tuple of pydantic models. I tried to write overloads of this function like this:</p>
<pre class="lang-py prettyprint-override"><code>@overload
def convert_pydantic_list_to_dict_list(
    pydantic_list: list[BaseModel],
) -> list[dict]: ...


@overload
def convert_pydantic_list_to_dict_list(
    pydantic_list: tuple[BaseModel],
) -> tuple[dict]: ...
</code></pre>
<p>but I got a mypy error when I was passing a list of models</p>
<pre class="lang-py prettyprint-override"><code>var = list[MyModel(), MyModel()]
convert_pydantic_list_to_dict_list(var)
# Argument 1 to "convert_pydantic_list_to_dict_list" has incompatible type
# "list[MyModel]"; expected "list[BaseModel]".
</code></pre>
<p>After reading some <a href="https://mypy.readthedocs.io/en/stable/generics.html#variance-of-generics" rel="nofollow noreferrer">documentation</a> about invariance, covariance, etc., I figured that the solution was lying in <a href="https://docs.python.org/3/library/typing.html#typing.TypeVar" rel="nofollow noreferrer">TypeVars</a>. The following works:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T", bound=BaseModel)
@overload
def convert_pydantic_list_to_dict_list(pydantic_list: list[T]) -> list[dict]: ...
@overload
def convert_pydantic_list_to_dict_list(pydantic_list: tuple[T]) -> tuple[dict]: ...
</code></pre>
<p>I don't fully understand why bounding a TypeVar to BaseModel (saying that it can be BaseModel or any of its subclasses) and then typing the function with <code>list[T]</code> is different from directly typing the function with <code>list[BaseModel]</code>.
In both cases the list is invariant, and the types its elements can have are subtypes of <code>BaseModel</code>, aren't they? Is it because of contravariance in callable arguments?</p>
<p>Thank you for your time.</p>
|
<python><python-typing><pydantic><type-variables>
|
2024-06-13 09:18:14
| 0
| 391
|
Marco Bresson
|
78,616,913
| 1,557,306
|
How to serialize a related model at depth 2 without serializing intermediate model at depth 1?
|
<p>I have three models with foreign keys in this direction <code>Order -> Customer -> User</code>. I would like to write a view set for orders that serializes the corresponding user without the intermediate customer.</p>
<p><strong>Models</strong>:</p>
<pre class="lang-py prettyprint-override"><code>class Order(models.Model):
    customer = ForeignKey("customers.Customer")


class Customer(models.Model):
    user = ForeignKey("users.User")


class User(models.Model):
    name = CharField(max_length=64)
</code></pre>
<p><strong>Serializer</strong>:</p>
<pre class="lang-py prettyprint-override"><code>class UserSerializer(ModelSerializer):
    class Meta:
        model = User
        fields = ["name"]


class OrderSerializer(ModelSerializer):
    user = UserSerializer(source="customer__user", read_only=True)

    class Meta:
        model = Order
        fields = ["id", "user"]
</code></pre>
<p><strong>View set</strong>:</p>
<pre class="lang-py prettyprint-override"><code>class OrderViewSet(ModelViewSet):
    queryset = Order.objects.all()
    serializer_class = OrderSerializer
</code></pre>
<p><strong>Desired output</strong>:</p>
<pre class="lang-json prettyprint-override"><code>[
{"id": 1, "user": {"name": "John"}},
{"id": 2, "user": {"name": "Mary"}},
]
</code></pre>
<p><strong>Actual output</strong>:</p>
<pre class="lang-json prettyprint-override"><code>[
{"id": 1},
{"id": 2},
]
</code></pre>
<p>It works fine if I go via the intermediate customer serializer:</p>
<pre class="lang-py prettyprint-override"><code>class CustomerSerializer(ModelSerializer):
    user = UserSerializer(read_only=True)

    class Meta:
        model = User
        fields = ["user"]


class OrderSerializer(ModelSerializer):
    customer = CustomerSerializer(read_only=True)

    class Meta:
        model = Order
        fields = ["id", "customer"]
</code></pre>
<p>but then the output contains the intermediate customer object:</p>
<pre class="lang-json prettyprint-override"><code>[
{"id": 1, {"customer": {"user": {"name": "John"}}},
{"id": 2, {"customer": {"user": {"name": "Mary"}}},
]
</code></pre>
|
<python><django><django-rest-framework>
|
2024-06-13 09:11:22
| 1
| 1,869
|
Leevi L
|
78,616,907
| 6,025,629
|
Polars - Issue with read_csv_batched when separator is included in the field
|
<p>I am trying to read a relatively large CSV (30M+ rows) that cannot fit into memory, so I am using <code>read_csv_batched</code>. However, I noticed that <code>reader.next_batches(5)</code>, instead of returning that number of batch DataFrames (in our case 5), always returned 1 DataFrame with all the rows inside (bigger than the given batch size).</p>
<p>After some digging I created the following minimum example to reproduce the issue:</p>
<pre><code># error.csv
a,b
"test","test"
"test","test"
"test",",," # Notice here that we have 2 commas
# correct.csv
a,b
"test","test"
"test","test"
"test","," # Here we only have 1 comma
</code></pre>
<pre class="lang-py prettyprint-override"><code>reader_error = pl.read_csv_batched("./assets/error.csv", separator=",", batch_size=1, quote_char="\"")
batch = reader_error.next_batches(2)
print(len(batch)) # Prints 1, wrong
reader_correct = pl.read_csv_batched("./assets/correct.csv", separator=",", batch_size=1, quote_char="\"")
batch = reader_correct.next_batches(2)
print(len(batch)) # Prints 2, correct
</code></pre>
<pre><code>Python: 3.10
Polars: 0.20.31
</code></pre>
<p>Any idea why this might be happening? Have I missed something about creating the CSV correctly? I assumed that wrapping anything in <code>"</code> would be enough.</p>
<p>Many thanks!</p>
|
<python><csv><python-polars>
|
2024-06-13 09:10:26
| 0
| 489
|
Mike Xydas
|
78,616,608
| 5,346,843
|
Another issue with entering user name and password using Selenium
|
<p>I am trying to scrape the data from a URL on my home network that (I think) is populated with data by JavaScript. The URL is password protected, and I need to fill in the user name and password with this form to access it:</p>
<p><a href="https://i.sstatic.net/6iJAP1BM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6iJAP1BM.png" alt="enter image description here" /></a></p>
<p>I found <a href="https://stackoverflow.com/questions/21186327/fill-username-and-password-using-selenium-in-python">this answer</a> to a similar question on Stack Overflow and adapted the code slightly as follows:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
driver = webdriver.Chrome(options=options)
url = 'http://192.168.42.57'
driver.get(url)
wait = WebDriverWait(driver, 20)
wait.until(EC.element_to_be_clickable((By.ID, 'username'))).send_keys("xxx")
wait.until(EC.element_to_be_clickable((By.ID, 'password'))).send_keys("yyy")
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "button[type='submit']"))).click()
</code></pre>
<p><code>xxx</code> and <code>yyy</code> represent the correct user name and password. When I run this, the log-in window shown above appears and I have to manually fill in the user name and password, then I get timed out with the following error:</p>
<pre><code>File "R:\Private\Python\sandbox\web_scrape_v2.py", line 24, in <module>
    wait.until(EC.element_to_be_clickable((By.ID, 'username'))).send_keys("xxx")
  File "C:\WinPython\WPy64-3940\python-3.9.4.amd64\lib\site-packages\selenium\webdriver\support\wait.py", line 105, in until
    raise TimeoutException(message, screen, stacktrace)
</code></pre>
<p>I tried inspecting the source for the log-in form, to check that I was addressing the correct elements on the webpage, but I don't seem to be able to do this. I would appreciate any suggestions you might have.</p>
|
<python><selenium-webdriver>
|
2024-06-13 08:10:44
| 0
| 545
|
PetGriffin
|
78,616,189
| 815,859
|
Python mido not detecting keyboard input in Raspberry Pi
|
<p>I have a simple Python script that takes MIDI signals from a musical keyboard connected via USB to a PC, which then sends a command to an Arduino board to write to its digital out. This works perfectly fine and is without issues.
I tried migrating the same code to a Raspberry Pi with some modifications specific to the Pi. My code is as follows:</p>
<pre><code>import pygame
import mido
import rtmidi
import time
import RPi.GPIO as GPIO
pygame.init()
BLACK = [0,0,0]
WHITE = [255, 222, 111]
note_list = []
note_list_off = []
outport=mido.open_output()
inport=mido.open_input()
GPIO.setmode(GPIO.BCM)
done = False
GPIO.setup(4,GPIO.OUT)
print("START!")
while done == False:
    for msg in inport.iter_pending():
        try:
            n = msg.note
        except:
            continue
        if msg.velocity > 0 and msg.velocity != 64:
            print(msg.velocity)
            GPIO.output(4, True)
            time.sleep(1)
            GPIO.output(4, False)
        else:
            msg = mido.Message('note_off', note=n)
            outport.send(msg)

pygame.quit()
</code></pre>
<p>Adding RPi.GPIO is the only difference from the code running on Windows. If I move the code</p>
<pre><code>GPIO.output(4,True)
time.sleep(1)
GPIO.output(4,False)
</code></pre>
<p>just above the line</p>
<pre><code>print("START!")
</code></pre>
<p>Raspberry Pi writes to the correct port which I tested by connecting to an LED. The problem here is that the line after</p>
<pre><code>for msg in inport.iter_pending():
</code></pre>
<p>never gets executed. I have checked to see if the Pi detects the keyboard and these are the outputs:</p>
<p><strong>lsusb</strong></p>
<pre><code>Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 07cf:6803 Casio Computer Co., Ltd CTK-3500 (MIDI keyboard)
</code></pre>
<p><strong>amidi -l</strong></p>
<pre><code>Dir Device Name
IO hw:3,0,0 CASIO USB-MIDI MIDI 1
</code></pre>
<p><strong>aconnect -i</strong></p>
<pre><code>client 0: 'System' [type=kernel]
0 'Timer '
1 'Announce '
client 14: 'Midi Through' [type=kernel]
0 'Midi Through Port-0'
client 28: 'CASIO USB-MIDI' [type=kernel,card=3]
0 'CASIO USB-MIDI MIDI 1'
</code></pre>
<p>The Pi can read the MIDI from the keyboard just fine because this is the output for
<strong>aseqdump -p 28</strong> while playing keys from the keyboard</p>
<pre><code>Waiting for data. Press Ctrl+C to end.
Source Event Ch Data
28:0 Note on 0, note 64, velocity 9
28:0 Note on 0, note 60, velocity 27
28:0 Note on 0, note 57, velocity 1
28:0 Note off 0, note 57, velocity 64
28:0 Note on 0, note 60, velocity 30
28:0 Note on 0, note 57, velocity 23
28:0 Note on 0, note 55, velocity 31
28:0 Note off 0, note 55, velocity 64
28:0 Note off 0, note 60, velocity 64
28:0 Note off 0, note 57, velocity 64
28:0 Note on 0, note 55, velocity 29
28:0 Note off 0, note 55, velocity 64
28:0 Note on 0, note 57, velocity 35
</code></pre>
<p>My Python version is 3.11.
Any help is much appreciated.</p>
|
<python><raspberry-pi><midi><mido>
|
2024-06-13 06:40:24
| 1
| 795
|
Monty Swanson
|
78,615,752
| 20,283,624
|
Errno 110 Connection timed out in firebase admin python lambda
|
<p>I'm trying to create a Python Lambda API that can send a notification to a user's device, but I'm encountering a <code>Connection timed out</code> error.
I have already tried setting the Lambda timeout to 1 minute, but the result is the same.</p>
<p>What I did was install firebase-admin via <code>pip3 install firebase-admin</code> on my local machine and upload it as a layer.</p>
<p>This is my lambda-function.py:</p>
<pre><code>import json
import boto3
import asyncio
import time
import flogger
import firebase_admin
from firebase_admin import credentials, messaging
from firebase_admin.exceptions import FirebaseError, InvalidArgumentError, InternalError
from os import environ
from traceback import format_exc
from config import ConfigClass
print('Loading function')
dynamo = boto3.client('dynamodb')
stage = environ['STAGE']
initialized = False
def initialize_firebase_app():
    global initialized
    if not initialized:
        try:
            config = ConfigClass()
            result = asyncio.run(config.firebase_config())
            cred = credentials.Certificate(result['credential'])
            firebase_admin.initialize_app(cred)
            initialized = True
            flogger.log('debug', 'Firebase app initialized')
        except Exception as e:
            flogger.log('warning', f'Failed to initialize Firebase app: {e}')


def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))

    # Initialize Firebase app if not already initialized
    initialize_firebase_app()

    # List of registration tokens/fcm_token
    registration_tokens = [
        'fcmtoken1',
    ]

    # Define the notification payload
    notification = messaging.Notification(
        title='Test Notification',
        body='This is a test message notification.',
    )

    # Define the data payload
    data = {
        'id': '477',
        'firebase-uuid': json.dumps(['uuid1']),
        'type': 'type',
    }

    # Create a message with the notification and data payloads
    message = messaging.MulticastMessage(
        tokens=registration_tokens,
        notification=notification,
        data=data
    )

    # Function to send message with retries
    def send_message_with_retries(message, max_retries=5, initial_backoff=1):
        for attempt in range(max_retries):
            try:
                print('Sending message, attempt ', attempt + 1)
                response = messaging.send_multicast(message)
                print('Message sent successfully: ', response)
                return response
            except FirebaseError as e:
                print('Failed to send message: ', e)
                if attempt < max_retries - 1:
                    sleep_time = initial_backoff * (2 ** attempt)
                    print('Retrying seconds...', sleep_time)
                    time.sleep(sleep_time)
                else:
                    print('Max retries reached. Failed to send message.')
                    raise

    # Send the message
    print('sending message')
    send_message_with_retries(message)
</code></pre>
<p>and my config.py</p>
<pre><code>import boto3
class ConfigClass:
    async def firebase_config(self):
        result = {}
        result['credential'] = {
            "type": "service_account",
            "project_id": "my-mobile-app",
            "private_key_id": "my-private-key-id",
            "private_key": "-----BEGIN PRIVATE KEY-----\my-private-key\n-----END PRIVATE KEY-----\n",
            "client_email": "firebase-adminsdk-j3mmmm6k@my-mobile-app.iam.gserviceaccount.com",
            "client_id": "my-client-id",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/firebase-adminsdk-j3mmmm6k%40my-mobile-app.iam.gserviceaccount.com",
            "universe_domain": "googleapis.com"
        }
        return result
</code></pre>
<p>Is there any configuration that I need to add?</p>
|
<python><aws-lambda><firebase-admin>
|
2024-06-13 03:57:19
| 0
| 349
|
ramedju
|
78,615,700
| 2,210,825
|
Aggregate columns that fall within range
|
<p>I have two dataframes called <code>df</code> and <code>ranges</code>:</p>
<pre class="lang-py prettyprint-override"><code>data = {
    'group': ['A', 'B', 'A', 'C', 'B'],
    'start': [10, 20, 15, 30, 25],
    'end': [50, 40, 60, 70, 45],
    'val1': [5, 10, 11, 12, 6],
    'val2': [5, 2, 1, 1, 0],
}
df = pd.DataFrame(data)

data = {
    'group': ['A', 'B', 'C'],
    'start': [0, 5, 25],
    'end': [50, 7, 35],
}
ranges = pd.DataFrame(data)
</code></pre>
<p>My goal is to aggregate the rows in <code>df</code> together based on whether they fall within the same range defined in <code>ranges</code>. I would like to aggregate them together such that for each <code>val1, val2</code> column I get the <code>min, max, mean, sum</code> of that column within the context of the aggregation group.</p>
<p>The catch here is that I need to do this for something like 5,000 ranges in <code>ranges</code> and 500,000 rows in <code>df</code>. So I'd like a fast but relatively memory-efficient solution. I'm open to solutions using similar frameworks such as <code>vaex</code>.</p>
<p>Expected output where <code>range_id</code> is just a way to identify groups assuming they're not unique:</p>
<pre class="lang-py prettyprint-override"><code> range_id val1 val2
min max mean sum min max mean sum
0 0 5 5 5.0 5 5 5 5.0 5
</code></pre>
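<p>For what it's worth, one candidate I have been benchmarking (a sketch; <code>range_id</code> is just the positional index of each row in <code>ranges</code>, and "falls within" is interpreted as full containment):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'group': ['A', 'B', 'A', 'C', 'B'],
    'start': [10, 20, 15, 30, 25],
    'end': [50, 40, 60, 70, 45],
    'val1': [5, 10, 11, 12, 6],
    'val2': [5, 2, 1, 1, 0],
})
ranges = pd.DataFrame({
    'group': ['A', 'B', 'C'],
    'start': [0, 5, 25],
    'end': [50, 7, 35],
})

# Give every range a stable id, then do one merge per group and keep
# only the rows fully contained in their range.
r = ranges.rename_axis('range_id').reset_index()
m = df.merge(r, on='group', suffixes=('', '_r'))
inside = m[(m['start'] >= m['start_r']) & (m['end'] <= m['end_r'])]
out = inside.groupby('range_id')[['val1', 'val2']].agg(['min', 'max', 'mean', 'sum'])
```

<p>One caveat: the merge materializes one row per (row, range) pair within a group, so with very skewed groups it may still be worth chunking <code>ranges</code>.</p>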
|
<python><pandas><numpy><aggregation>
|
2024-06-13 03:27:28
| 4
| 1,458
|
donkey
|
78,615,577
| 8,390,889
|
langchain: how to prevent the language model response from being prefixed with AI: or Assistant:
|
<p>I have the following langchain prompt setup for a RAG based chat application:</p>
<pre><code>## Prompt Chain Setup ##
retrieval_qa_chat_prompt = hub.pull("langchain-ai/retrieval-qa-chat")
retriever = chroma_db.as_retriever()

contextualize_q_system_prompt = (
    "Given a chat history and the latest user question "
    "which might reference context in the chat history, "
    "formulate a standalone question which can be understood "
    "without the chat history. Do NOT answer the question, "
    "just reformulate it if needed and otherwise return it as is."
)
contextualize_q_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        # MessagesPlaceholder("chat_history"),  <- removed to stop question_generator
        ("human", "{input}"),
    ]
)
history_aware_retriever = create_history_aware_retriever(
    llm, retriever, contextualize_q_prompt
)

system_prompt = (
    "Use the following pieces of retrieved context to answer "
    "the question. If you don't know the answer, say that you "
    "don't know. Keep the answer concise and ensure that any "
    "configuration file samples or examples use JSON format."
    "\n\n"
    "{context}"
)
qa_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", system_prompt),
        MessagesPlaceholder("chat_history"),
        ("human", "{input}"),
    ]
)
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
</code></pre>
<p>and I'm talking to the microsoft/Phi-3-mini-4k-instruct-gguf model via LM Studio locally with default settings using the local server.</p>
<p>For some reason every response from the LLM is prefixed with 'Assistant:' or 'AI:' which is redundant for me, as the user interface will clearly show which text/content is from the AI versus the human messages (think iPhone SMS chat).</p>
<p>How can I remove this? I've tried coaching the prompt, but asking the model not to add this prefix makes no difference.</p>
<p>Example response:</p>
<p><em>AI: The "isRequired" field indicates whether or not a form field must be filled out for the form to be submitted. If set to true.....</em></p>
|
<python><langchain><large-language-model>
|
2024-06-13 02:18:47
| 0
| 2,972
|
JamesMatson
|
78,615,014
| 6,097,123
|
How to create Pydantic model based on string (s3 access logs) data
|
<p>The issue is I need to parse a string line delimited by spaces (though not strictly, since some fields contain spaces themselves) into a Pydantic model. The field names are known, and types are not important for this task, so keeping <code>str</code> is fine.</p>
<p>I'm unfamiliar with Pydantic, but I assume there is a way to leverage the <code>BaseModel.model_validate</code> method or similar to make the parsing more native.</p>
<p><strong>For the sake of example I trimmed the log and model!</strong></p>
<p>Example log file:</p>
<pre><code>79a59df900b949e55d DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000]
</code></pre>
<p>The model:</p>
<pre><code>class S3AccessLogEntry(BaseModel):
    owner: str
    bucket: str
    timestamp: str
</code></pre>
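<p>For context, this is the direction I was considering: a regex with named groups whose names match the model fields, fed into <code>model_validate</code> (a sketch; the pattern only covers the trimmed three-field example):</p>

```python
import re
from pydantic import BaseModel


class S3AccessLogEntry(BaseModel):
    owner: str
    bucket: str
    timestamp: str


# Named groups mirror the model's field names; the bracketed timestamp
# contains a space, which is why a plain str.split() is not enough.
LOG_LINE = re.compile(r'(?P<owner>\S+) (?P<bucket>\S+) \[(?P<timestamp>[^\]]+)\]')


def parse_line(line: str) -> S3AccessLogEntry:
    match = LOG_LINE.match(line)
    if match is None:
        raise ValueError(f'unparseable log line: {line!r}')
    # groupdict() gives {"owner": ..., "bucket": ..., "timestamp": ...},
    # which model_validate can consume directly.
    return S3AccessLogEntry.model_validate(match.groupdict())


entry = parse_line('79a59df900b949e55d DOC-EXAMPLE-BUCKET1 [06/Feb/2019:00:00:38 +0000]')
```

<p>Is this the idiomatic way, or does Pydantic offer something more direct for this?</p>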
|
<python><csv><pydantic>
|
2024-06-12 21:25:30
| 1
| 537
|
sann05
|
78,614,880
| 14,890,683
|
Pyspark <3.3 Extract all regex and replace with another DataFrame
|
<p>Using pyspark 3.3 (no <code>regexp_extract_all</code>), I'd like to take a column of values</p>
<pre><code>+-------------------------+
| value |
+-------------------------+
| MONOCYTES 1511|A5905.5 |
+-------------------------+
</code></pre>
<blockquote>
<p>the data format is not constant. ie. The value could also be <code>1511;MONO->A5905.5</code></p>
</blockquote>
<p>And extract all parts that match the regex <code>r'\w?\d+\.?\d*'</code>. Then, I'd like to replace any extracted value with the value from another dataframe:</p>
<pre><code>+-----------+--------------+
| code | value |
+-----------+--------------+
| 1511 | monocytes1 |
+-----------+--------------+
| A5905.5 | monocytes2 |
+-----------+--------------+
</code></pre>
<p>Such that I can somehow map <code>{"MONOCYTES 1511|A5905.5": ["monocytes1", "monocytes2"]}</code></p>
<p>What is the fastest way to do this given the version constraint?</p>
|
<python><apache-spark><pyspark>
|
2024-06-12 20:39:20
| 1
| 345
|
Oliver
|
78,614,809
| 570,339
|
Order failed: 200-{"msg":"Funds increment invalid.","code":"400100"}
|
<p>I'm using the kucoin api to check what is my USDT balance and purchase its equivalent in Bitcoin using Python 3.12.3</p>
<p>Now if I understand correctly there might be two problems:</p>
<ul>
<li>the trade can be rejected if it's not a multiple of the quote increment provided by the api</li>
<li>This is more of my own worry; I'm not sure if it's a real thing, but I'm also worried about price fluctuations. It could be that I fetch a certain price but it increases while the script runs. It's unlikely, because we are talking about milliseconds, but I think it's still possible</li>
</ul>
<p>To solve these two problems, I leave a buffer of 0.01 USDT and I make sure that the trade amount is a multiple of the quote increment. You can see this in the below code:</p>
<pre><code>from kucoin.client import Trade, User, Market
# Replace with your actual API credentials
api_key = 'my key'
api_secret = 'my secret'
api_passphrase = 'my passphrase'
# Initialize the KuCoin clients
trade_client = Trade(key=api_key, secret=api_secret, passphrase=api_passphrase)
user_client = User(key=api_key, secret=api_secret, passphrase=api_passphrase)
market_client = Market(key=api_key, secret=api_secret, passphrase=api_passphrase)
# Get account balances
account_balances = user_client.get_account_list()
# Find the USDT balance
usdt_balance = next((item for item in account_balances if item['currency'] == 'USDT' and item['type'] == 'trade'), None)
# Check if there is enough USDT balance
if usdt_balance and float(usdt_balance['balance']) > 0:
    total_usdt_amount = float(usdt_balance['balance'])
    buffer_amount = 0.01  # Leaving 0.01 USDT as buffer
    usdt_amount = total_usdt_amount - buffer_amount

    # Get the symbol info to find the quote increment and minimum size
    symbol_info = market_client.get_symbol_list()
    btc_usdt_info = next(item for item in symbol_info if item['symbol'] == 'BTC-USDT')
    quote_increment = float(btc_usdt_info['quoteIncrement'])
    quote_min_size = float(btc_usdt_info['quoteMinSize'])

    # Adjust USDT amount to be a multiple of quote_increment
    usdt_amount = (usdt_amount // quote_increment) * quote_increment

    if usdt_amount < quote_min_size:
        print(f"Order amount is too small, minimum required is {quote_min_size} USDT")
    else:
        # Create the market order to trade all USDT for BTC
        try:
            order = trade_client.create_market_order('BTC-USDT', 'buy', funds=str(usdt_amount))
            print(f"Order successful: Traded {usdt_amount} USDT for BTC")
        except Exception as e:
            print(f"Order failed: {e}")
else:
    print("Insufficient USDT balance or USDT balance not found")
</code></pre>
<p>As you can see, I update the amount this way:</p>
<pre><code>usdt_amount = (usdt_amount // quote_increment) * quote_increment
</code></pre>
<p>I thought this was correct, but I'm probably missing something, because it gives me this error:</p>
<pre><code>Order failed: 200-{"msg":"Funds increment invalid.","code":"400100"}
</code></pre>
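<p>One thing I have since started suspecting: doing <code>(usdt_amount // quote_increment) * quote_increment</code> in binary floating point can leave residue digits past the increment (e.g. <code>12.340000000000002</code>), which the exchange would reject as an invalid funds increment. A <code>decimal</code>-based rounding helper avoids that (a sketch; the helper name is my own):</p>

```python
from decimal import Decimal


def floor_to_increment(amount: str, increment: str) -> str:
    """Round `amount` down to an exact multiple of `increment`,
    using Decimal so no binary floating-point residue appears."""
    amt = Decimal(amount)
    inc = Decimal(increment)
    return str((amt // inc) * inc)
```

<p>The increment string returned by the API (e.g. <code>btc_usdt_info['quoteIncrement']</code>, before the <code>float()</code> conversion) can be passed in unmodified, and the returned string can go straight into <code>funds=</code>.</p>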
|
<python><cryptocurrency><kucoin>
|
2024-06-12 20:19:41
| 1
| 22,076
|
Ramy Al Zuhouri
|
78,614,773
| 19,048,408
|
With Polars, how to concatenate list-of-string expression/column to string
|
<p>Here's a naive solution of what I want to do using <code>map_elements</code>. How can I do this with only Polars functions?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
# Create a DataFrame with a column containing lists of strings
df = pl.DataFrame({
"list_of_strings": [["a", "b", "c"], ["d", "e", "f"], ["g", "h", "i"]]
})
# Define a function to concatenate lists of strings into a single string
def concatenate_list_of_strings(lst):
return "".join(lst)
# Apply the function to the DataFrame
df = df.with_column(
pl.col("list_of_strings").map_elements(concatenate_list_of_strings, return_dtype=pl.String).alias("concatenated_string")
)
print(df)
</code></pre>
|
<python><dataframe><python-polars>
|
2024-06-12 20:10:16
| 1
| 468
|
HumpbackWhale194
|
78,614,694
| 1,187,968
|
AWS/Python Lambda: keep a global variable within a lambda run that's isolated from other lambda runs
|
<p>The AWS Lambda docs say that global variables are shared across Lambda runs. For legacy reasons, I need to find a way to declare and keep a global variable within a single Lambda run, and it needs to be isolated from other Lambda runs.</p>
<p>I was thinking of using a class attribute. Does AWS Lambda still treat class attributes as global variables?</p>
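<p>My current understanding, illustrated with a sketch (plain Python, no AWS specifics): class attributes live at module level, so they survive warm starts exactly like globals, and only objects created inside the handler are guaranteed to be per-invocation:</p>

```python
# Module-level state (globals and class attributes alike) is created once
# per execution environment and reused across warm invocations.
SHARED = []


class Config:
    cache = []  # a class attribute is module-level state too


def lambda_handler(event, context=None):
    # Anything created here is fresh for every invocation.
    per_invocation = [event]
    SHARED.append(event)
    Config.cache.append(event)
    return per_invocation
```

<p>So if that understanding is right, a class attribute would not give me the isolation I need; is there a recommended alternative?</p>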
|
<python><amazon-web-services><aws-lambda>
|
2024-06-12 19:48:29
| 1
| 8,146
|
user1187968
|
78,614,542
| 1,248,544
|
scrapy tsv file download. how to convert file to parquet before upload to s3
|
<p>I have a working scrapy project that downloads tsv files and saves them to s3.</p>
<p>I use a custom pipeline to save the original file names with dates.</p>
<p>I am wondering if it is possible to convert the TSV files to Parquet before uploading them to S3. If so, how would I do this in Scrapy?</p>
<p>I should note that I am able to convert the files locally (last code block) but would like to do it inline before they are uploaded to s3.</p>
<p>This is what I have currently working....</p>
<pre><code>##items
class DownfilesItem(scrapy.Item):
    file_urls = scrapy.Field()
    files = scrapy.Field()
    original_file_name = scrapy.Field()
    date = scrapy.Field()
</code></pre>
<pre><code>##pipeline to save original file names with dates
class OriginalNameFilesPipeline(FilesPipeline):
    def file_path(self, request, response=None, info=None):
        test = request
        file_name_xml = request.url.split("=")[-1]
        file_name: str = file_name_xml.removesuffix('.tsv') + '_' + datetime.today().strftime("%Y%m%d") + '.' + file_name_xml.split(".")[-1]
        return file_name
</code></pre>
<pre><code>##in my scraper
def parse_all_items(self, response):
    all_urls = [...]  # a bunch of URLs
    for url in all_urls:
        item = DownfilesItem()
        item['file_urls'] = [url]
        item['original_file_name'] = url.split("=")[-1]
        yield item
</code></pre>
<pre><code>##converting tsv to parquet locally
parse_options = csv.ParseOptions(delimiter="\t")
for name in os.listdir(src_dir):
    localpath = os.path.join(src_dir, name)
    print(localpath)
    if ".tsv" in localpath:
        table = csv.read_csv(localpath, parse_options=parse_options)
        pq.write_table(table, localpath.replace('tsv', 'parquet'))
</code></pre>
|
<python><amazon-s3><web-scraping><scrapy>
|
2024-06-12 19:03:09
| 4
| 555
|
JonDog
|