| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,970,384
| 2,362,196
|
How to build Azure AI endpoint and deployments of models using pipelines
|
<p>Using the Azure Portal and AI Studio, I created a deployment of two models selected in AI Studio, an endpoint, and a workspace. I don't really know what I'm doing yet and am trying to learn by doing. Now I need to create all of this by script in pipelines, with YAML and Bicep, so I can tear everything down at night and rebuild it in the morning. I would like an endpoint that can serve two or more models.</p>
<p>I saw script samples with an inference file and a deployment file, but couldn't figure out how to fill those files based on the objects I have in the portal.</p>
<p>I saw another sample with a YAML containing:</p>
<pre><code>- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    addToPath: true
- script: |
    pip install azureml-core azureml-sdk
- script: |
    az ml online-endpoint create -n 'my-endpoint' -f ./create_or_update_endpoint.yml -g 'resources_group_name' -w 'workspace_name'
    az ml online-endpoint update -n 'my-endpoint' --traffic 'deployment_name=100' -g 'resources_group_name' -w 'workspace_name'
</code></pre>
<p>But using this script I got:</p>
<pre><code>ext/_ruamel_yaml.c:181:12: fatal error: longintrepr.h: No such file or directory
181 | #include "longintrepr.h"
| ^~~~~~~~~~~~~~~
compilation terminated.
error: command '/usr/bin/gcc' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for ruamel.yaml
</code></pre>
<p>How can I build my endpoint and deployments of two models using pipelines?</p>
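<p>A note on the build error itself: <code>longintrepr.h</code> was removed from CPython 3.11, so pip tries (and fails) to compile an old <code>ruamel.yaml</code> wheel when the pipeline picks up Python 3.11+. A hedged sketch of a workaround: pin the pipeline to Python 3.10, and prefer the <code>az ml</code> v2 CLI extension over pip-installing the legacy SDK (the version pin is the assumption to verify against your agent image):</p>

```yaml
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.10'   # Python 3.11+ removed longintrepr.h, breaking old ruamel.yaml builds
    addToPath: true
- script: |
    az extension add -n ml   # v2 CLI; avoids pip-installing the legacy azureml-sdk
- script: |
    az ml online-endpoint create -n 'my-endpoint' -f ./create_or_update_endpoint.yml -g 'resources_group_name' -w 'workspace_name'
```

<p>The v2 CLI route also matches the YAML endpoint/deployment files the samples refer to, so the <code>pip install azureml-core azureml-sdk</code> step may not be needed at all.</p>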
|
<python><azure><azure-pipelines><azure-ai>
|
2024-09-10 16:10:52
| 1
| 533
|
ClaudeVernier
|
78,970,312
| 22,466,650
|
Supabase python client returns an empty list when making a query
|
<p>My configuration is very basic: a simple Supabase database with one table.</p>
<p>I use <a href="https://github.com/supabase/supabase-py" rel="nofollow noreferrer">supabase-py</a> to interact with it. The problem is that I always get an empty list:</p>
<pre class="lang-py prettyprint-override"><code>from supabase import create_client
URL = "MY_URL_HERE"
API_KEY = "MY_API_KEY_HERE"
supabase = create_client(URL, API_KEY)
response = supabase.table("prod_vxf").select("*").execute()
print(response.data)
# []
</code></pre>
<p>After checking some similar topics like <a href="https://stackoverflow.com/questions/71294440/supabase-in-next-js-returning-an-empty-array-when-data-is-in-relevant-tables">this</a> one, it seems that the only solution is to turn off RLS. So I went to the dashboard and turned off RLS for the table <code>prod_vxf</code>, and it worked. Now the code above gives a non-empty list:</p>
<pre class="lang-py prettyprint-override"><code>print(response.data)
[
{"id": 1, "created_at": "2024-01-01T00:00:00+00:00"},
{"id": 2, "created_at": "2024-01-02T00:00:00+00:00"},
{"id": 3, "created_at": "2024-01-03T00:00:00+00:00"},
]
</code></pre>
<p>But what is very confusing is the warning below that appears when I try to turn off RLS for a given table in the Supabase dashboard. Does it mean that anyone on the internet (even without knowing the URL + API key) can access (read and write) my database and its tables? Honestly, I'm super confused by the term <strong>publicly</strong> used in the warning.</p>
<p><a href="https://i.sstatic.net/oTGpf3nA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTGpf3nA.png" alt="enter image description here" /></a></p>
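<p>For context, a middle ground between "RLS on, everything blocked" and "RLS off, everything open" is to keep RLS enabled and add an explicit read-only policy for the <code>anon</code> role (the role the client-side API key maps to). A hedged SQL sketch, with an illustrative policy name:</p>

```sql
-- Keep RLS enabled but allow read-only access for the anon role.
-- (The policy name "anon_read_only" is illustrative.)
alter table prod_vxf enable row level security;

create policy "anon_read_only"
  on prod_vxf
  for select
  to anon
  using (true);
```

<p>With this, the original <code>select("*")</code> query returns rows again, while inserts/updates through the anon key stay blocked.</p>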
|
<python><postgresql><supabase>
|
2024-09-10 15:52:18
| 1
| 1,085
|
VERBOSE
|
78,970,190
| 4,727,280
|
Python Protocols and Mutability
|
<p>I'm confused by a Pylance type-checking error I'm getting:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import StrEnum
from typing import Protocol

class A(Protocol):
    x: str
    # @property
    # def x(self) -> str: ...

class S(StrEnum):
    FOO = "foo"
    BAR = "bar"

@dataclass
class AA:
    x: S

def f(a: A): return

a = AA(S.BAR)
f(a)  # type-checking error
</code></pre>
<p>Via VS Code, Pylance tells me:</p>
<pre><code>Argument of type "AA" cannot be assigned to parameter "a" of type "A" in function "f"
"AA" is incompatible with protocol "A"
"x" is invariant because it is mutable
"x" is an incompatible type
"S" is incompatible with "str"
</code></pre>
<p>I don't understand Pylance's complaint. Isn't every version of <code>x</code> here mutable? The error can be fixed if we make <code>A.x</code> immutable by replacing the declaration <code>x: str</code> with the <code>@property</code>-decorated method commented out in the above code snippet. This actually increases my confusion -- in this case, instances of <code>AA</code> <em>shouldn't</em> conform to <code>A</code> since <code>A.x</code> is immutable but <code>AA.x</code> isn't, right?</p>
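<p>A sketch of the intuition (an illustration, not from the question): because <code>A.x</code> is a plain mutable attribute, <code>f</code> would be allowed to <em>assign</em> <code>a.x = "plain string"</code>, which would corrupt an <code>AA</code> whose <code>x</code> must be an <code>S</code>; that is why mutable protocol members are invariant. Declaring <code>x</code> as a read-only property makes it covariant, and a mutable attribute on the concrete class still conforms, because conformance only requires that <em>reading</em> <code>x</code> yields a <code>str</code>:</p>

```python
from dataclasses import dataclass
from enum import Enum
from typing import Protocol

class A(Protocol):
    # Read-only protocol member: only reads are promised, so any
    # attribute returning a str *subtype* conforms (covariance).
    @property
    def x(self) -> str: ...

class S(str, Enum):  # str-mixin enum (StrEnum itself needs Python 3.11+)
    FOO = "foo"
    BAR = "bar"

@dataclass
class AA:
    x: S  # mutable on AA, but that's fine: the protocol never writes it

def f(a: A) -> str:
    return a.x

result = f(AA(S.BAR))  # accepted: reading x yields an S, which is a str
assert result == "bar"
```

<p>So the property version is not saying <code>AA.x</code> must be immutable; it is saying <code>f</code> promises not to mutate it.</p>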
|
<python><python-typing><pyright><structural-typing>
|
2024-09-10 15:19:01
| 0
| 945
|
fmg
|
78,970,161
| 3,025,555
|
How to set a formatter for file handler that will replace unwanted characters?
|
<p>I have a Python logger set up with two handlers: one for stdout, one for a file.</p>
<p>Some of the logs contain ANSI escape sequences for colorizing output on stdout (the leading ESC character is shown here as <code>\x1b</code>), e.g.</p>
<pre><code>SomeExampleText.... \x1b[32mPASS\x1b[0m....
</code></pre>
<p>I would like to utilize the solution in the following thread:</p>
<p><a href="https://stackoverflow.com/questions/14693701/how-can-i-remove-the-ansi-escape-sequences-from-a-string-in-python">How can I remove the ANSI escape sequences from a string in python</a></p>
<p>in order to adjust my logger to strip out the "color"-related characters, but only when logging to the file handler:</p>
<p>keep stdout colored, but have the log file free of any color-related characters.</p>
<p>How can I achieve that?</p>
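<p>One way to do this (a sketch) is a <code>Formatter</code> subclass that strips the escape sequences from the rendered record, attached only to the file handler; the stdout handler keeps a plain formatter. Here <code>StringIO</code> objects stand in for stdout and the file:</p>

```python
import io
import logging
import re

# Matches SGR (color/style) sequences like \x1b[32m ... \x1b[0m
ANSI_ESCAPE = re.compile(r'\x1b\[[0-9;]*m')

class StripAnsiFormatter(logging.Formatter):
    """Formatter that removes ANSI escape sequences from the final message."""
    def format(self, record):
        return ANSI_ESCAPE.sub('', super().format(record))

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)

# "stdout" handler keeps the colors...
colored = io.StringIO()
h1 = logging.StreamHandler(colored)
h1.setFormatter(logging.Formatter('%(message)s'))

# ...while the "file" handler strips them.
plain = io.StringIO()
h2 = logging.StreamHandler(plain)
h2.setFormatter(StripAnsiFormatter('%(message)s'))

logger.addHandler(h1)
logger.addHandler(h2)

logger.info("SomeExampleText.... \x1b[32mPASS\x1b[0m....")
assert "\x1b[32m" in colored.getvalue()          # color survives on "stdout"
assert "\x1b" not in plain.getvalue()            # stripped in the "file"
```

<p>In the real setup, replace the second <code>StreamHandler</code> with your <code>FileHandler</code> and call <code>setFormatter(StripAnsiFormatter(...))</code> on it only.</p>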
|
<python><logging><ansi-escape>
|
2024-09-10 15:11:46
| 1
| 1,225
|
Adiel
|
78,970,132
| 1,250,463
|
Pandas rank to Excel rank
|
<p>I have some sample data from which I am calculating the rank:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"value": [500,500,200,101,100,72,63,55,50,30,30,20,1]})
print(df)
print(df["value"].rank())
print(df["value"].rank(pct=True))
</code></pre>
<p>The results are as follows</p>
<pre><code> value rank perc
0 500 12.5 0.961538
1 500 12.5 0.961538
2 200 11.0 0.846154
3 101 10.0 0.769231
4 100 9.0 0.692308
5 72 8.0 0.615385
6 63 7.0 0.538462
7 55 6.0 0.461538
8 50 5.0 0.384615
9 30 3.5 0.269231
10 30 3.5 0.269231
11 20 2.0 0.153846
12 1 1.0 0.076923
</code></pre>
<p>I want to calculate the rank and perc columns in Excel.</p>
<p>If I apply the formula <code>=RANK.AVG(A1,$A$1:$A$13,0)</code>, I get different ranking numbers:</p>
<pre><code>1.5
1.5
3
4
5
6
7
8
9
10.5
10.5
12
13
</code></pre>
<p>Can someone help me with the Excel formula? I want to reproduce the same results as in the dataframe.</p>
<p>What changes do I have to make in the Excel formula to achieve the values shown above in the dataframe?</p>
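<p>A sketch of the mismatch: pandas' <code>rank()</code> is ascending by default, while <code>RANK.AVG</code>'s third argument <code>0</code> means <em>descending</em>. So <code>=RANK.AVG(A1,$A$1:$A$13,1)</code> should reproduce the rank column, and dividing by <code>COUNT($A$1:$A$13)</code> reproduces <code>pct=True</code>. A pure-Python check of that equivalence:</p>

```python
def rank_avg(values, ascending=True):
    """Average-tie ranking, like pandas Series.rank() / Excel RANK.AVG."""
    ordered = sorted(values, reverse=not ascending)
    def rank_of(x):
        # 1-based positions of x in the sorted list, averaged over ties
        positions = [i + 1 for i, v in enumerate(ordered) if v == x]
        return sum(positions) / len(positions)
    return [rank_of(x) for x in values]

values = [500, 500, 200, 101, 100, 72, 63, 55, 50, 30, 30, 20, 1]
ranks = rank_avg(values)                 # what =RANK.AVG(A1,$A$1:$A$13,1) gives
pcts = [r / len(values) for r in ranks]  # what =RANK.AVG(...,1)/13 gives

assert ranks[0] == 12.5 and ranks[2] == 11.0     # matches pandas rank column
assert round(pcts[0], 6) == 0.961538             # matches pandas pct column
```

<p>So in Excel: <code>=RANK.AVG(A1,$A$1:$A$13,1)</code> for the rank column, and <code>=RANK.AVG(A1,$A$1:$A$13,1)/COUNT($A$1:$A$13)</code> for the perc column.</p>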
|
<python><excel><pandas>
|
2024-09-10 15:03:40
| 1
| 3,028
|
srinath
|
78,969,962
| 1,472,048
|
Human segmentation fails with Pytorch, not with Tensorflow Keras
|
<p>I probably missed something, but here is the same workflow with Pytorch and Tensorflow Keras.</p>
<p>The results are here:</p>
<p>The PyTorch version</p>
<p><a href="https://i.sstatic.net/V4n1bSth.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V4n1bSth.png" alt="With pytorch" /></a></p>
<p>The Keras version:</p>
<p><a href="https://i.sstatic.net/19lwOpX3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19lwOpX3.png" alt="with tensorflow keras" /></a></p>
<p>Hard to explain the whole process but this is what I do with PyTorch:</p>
<pre class="lang-py prettyprint-override"><code>device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").to(device)
# Put the model in evaluation mode
model.eval()
# Transformations similar to those used in Keras
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((img_size, img_size)),
])

def t_read_image(imagepath: Image.Image):
    image = transform(imagepath)
    # make the image to be from -1 to 1
    print(image.min(), image.max())
    image = image * 2 - 1
    image = image.to(device)
    return image

def t_infer(model, image):
    with torch.no_grad():
        output = model(image.unsqueeze(0))
        output = output['out']
        output = np.squeeze(output.cpu().numpy())
        output = output[1:]
        output = np.argmax(output, axis=0)
    return output

def t_decode_segmentation_masks(mask, colormap, n_classes):
    r = np.zeros_like(mask).astype(np.uint8)
    g = np.zeros_like(mask).astype(np.uint8)
    b = np.zeros_like(mask).astype(np.uint8)
    for l in range(0, n_classes):
        idx = mask == l
        r[idx] = colormap[l, 0]
        g[idx] = colormap[l, 1]
        b[idx] = colormap[l, 2]
    rgb = np.stack([r, g, b], axis=2)
    return rgb

def t_get_overlay(image, colored_mask):
    image = image.cpu().numpy()
    image = (image - image.min()) / (image.max() - image.min()) * 255
    image = image.astype(np.uint8)
    image = np.transpose(image, (1, 2, 0))
    overlay = cv2.addWeighted(image, 0.35, colored_mask, 0.65, 0)
    return overlay

def t_segmentation(input_image: Image.Image):
    image_tensor = t_read_image(input_image)
    prediction_mask = t_infer(image=image_tensor, model=model)
    prediction_colormap = t_decode_segmentation_masks(prediction_mask, colormap, 20)
    overlay = t_get_overlay(image_tensor, prediction_colormap)
    return (overlay, prediction_colormap)

img = Image.open('./image.jpg')
img = np.array(img)
overlay, segs = t_segmentation(img)
plt.imshow(overlay)
plt.show()
</code></pre>
<p>And the same thing with Keras</p>
<pre class="lang-py prettyprint-override"><code>model = from_pretrained_keras("keras-io/deeplabv3p-resnet50")

def read_image(image):
    image = tf.convert_to_tensor(image)
    image.set_shape([None, None, 3])
    image = tf.image.resize(images=image, size=[img_size, img_size])
    image = image / 127.5 - 1
    return image

def infer(model, image_tensor):
    predictions = model.predict(np.expand_dims((image_tensor), axis=0))
    predictions = np.squeeze(predictions)
    predictions = np.argmax(predictions, axis=2)
    return predictions

def decode_segmentation_masks(mask, colormap, n_classes):
    r = np.zeros_like(mask).astype(np.uint8)
    g = np.zeros_like(mask).astype(np.uint8)
    b = np.zeros_like(mask).astype(np.uint8)
    for l in range(0, n_classes):
        idx = mask == l
        r[idx] = colormap[l, 0]
        g[idx] = colormap[l, 1]
        b[idx] = colormap[l, 2]
    rgb = np.stack([r, g, b], axis=2)
    return rgb

def get_overlay(image, colored_mask):
    image = tf.keras.preprocessing.image.array_to_img(image)
    image = np.array(image).astype(np.uint8)
    overlay = cv2.addWeighted(image, 0.35, colored_mask, 0.65, 0)
    return overlay

def segmentation(input_image):
    image_tensor = read_image(input_image)
    prediction_mask = infer(image_tensor=image_tensor, model=model)
    prediction_colormap = decode_segmentation_masks(prediction_mask, colormap, 20)
    overlay = get_overlay(image_tensor, prediction_colormap)
    return (overlay, prediction_colormap)

img = Image.open('./image.jpg')
img = np.array(img)
overlay, segs = segmentation(img)
plt.imshow(overlay)
plt.show()
</code></pre>
<p>As you can see, this is the same model, and globally the same code. But the result is very different. What I would like is to get the same result than with Keras, using Pytorch.</p>
<p>I share the notebook here: <a href="https://colab.research.google.com/drive/1mgWSRs4Z7lqag8vxBnq5vi65BohXNdMd?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1mgWSRs4Z7lqag8vxBnq5vi65BohXNdMd?usp=sharing</a></p>
<p>Thanks a lot.</p>
<p><strong>EDIT:</strong> using normalization from ImageNet, the result is better but not the expected one:</p>
<p><a href="https://i.sstatic.net/xFcNgpri.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFcNgpri.png" alt="Pytorch With imagenet norm" /></a></p>
<p>It seems that the models are not exactly the same: Keras uses DeepLabV3Plus while PyTorch uses DeepLabV3.</p>
<p>So, is there a difference? Both implementations propose 20 classes.</p>
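<p>For reference, torchvision's pretrained segmentation weights expect per-channel ImageNet normalization, not the <code>[-1, 1]</code> scaling used above. A minimal sketch of the expected preprocessing (the constants are the standard ImageNet statistics):</p>

```python
# Standard ImageNet statistics used by torchvision's pretrained weights.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_channel(value, channel):
    """Normalize one pixel value already scaled to [0, 1] by ToTensor().

    In the transform pipeline this corresponds to appending
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD) after ToTensor(),
    instead of computing `image * 2 - 1`.
    """
    return (value - IMAGENET_MEAN[channel]) / IMAGENET_STD[channel]

assert round(normalize_channel(0.485, 0), 6) == 0.0
```

<p>Even with matching preprocessing, some gap is expected: DeepLabV3 and DeepLabV3Plus are genuinely different architectures (the Plus variant adds a decoder module), and the <code>output[1:]</code> slice in <code>t_infer</code> also shifts class indices relative to the Keras pipeline, which is worth double-checking.</p>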
|
<python><image><tensorflow><machine-learning><pytorch>
|
2024-09-10 14:18:33
| 1
| 2,951
|
Metal3d
|
78,969,937
| 1,360,979
|
plotnine geom_histogram wrong bin placement
|
<p>I'm trying to define very specifically the bins of my histogram so that their size is <em>exactly</em> 10.</p>
<p>Here is an example. I defined a list of numbers. The list contains 10 numbers with 1 digit, and then 50 numbers between 50 and 59, 60 numbers between 60 and 69, and so on.</p>
<pre class="lang-py prettyprint-override"><code>rand_numbers = ([0]*5 + [9]*5) + \
([50]*20 + [59]*30) + \
([60]*30 + [69]*30) + \
([70]*35 + [79]*35) + \
([80]*40 + [89]*40) + \
([90]*45 + [99]*45)
</code></pre>
<p>Then I create a data frame where I "classified" the numbers so that numbers up to 69 are in a color, numbers in the 70s are in another color, and all numbers above 80 are another color:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
'c1': rand_numbers,
'c2': ['foo'] * 120 + ['bar'] * 70 + ['baz']*170
})
</code></pre>
<p>To make the histogram, I'm doing:</p>
<pre class="lang-py prettyprint-override"><code>import plotnine as p9
p = p9.ggplot(df, p9.aes(x='c1', fill = 'c2')) + \
p9.scale_x_continuous(breaks=range(0, 120, 10)) +\
p9.geom_histogram(size=0.5, colour='black', breaks=range(0, 120, 10))
</code></pre>
<p><a href="https://i.sstatic.net/mdQ97l3D.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdQ97l3D.png" alt="enter image description here" /></a></p>
<p>As you can see, the bins are "spilling" onto one another. Here is more or less what I expected:</p>
<p><a href="https://i.sstatic.net/lZXypH9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lZXypH9F.png" alt="better_histogram With exactly 10 in the first bin, and exactly 50 in the next, then exactly 60, then exactly 70, and so on" /></a></p>
<p>That is, I expected a histogram with exactly 10 elements in the first bin, exactly 50 elements in the next bin (between 50 and 59), then exactly 60 elements in the next one. All of the aforementioned bins should be completely blue. Then, a red bin with exactly 70 elements, and then two green bins with exactly 80 and 90 elements.</p>
<p>As you can see, I'm using the solution suggested <a href="https://stackoverflow.com/questions/66177452/predefine-bins-in-geom-histogram">here</a> and <a href="https://stackoverflow.com/questions/58102968/why-does-fill-in-geom-histogram-spill-over-into-the-wrong-bins">here</a> on how to predefine the bins in <code>geom_histogram()</code>, but it didn't work the way I expected.</p>
<p>In attempting to solve this problem, I found:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/37876096/geom-histogram-wrong-bins">This question from 2016 reporting a bug -- but it seems to be in the R implementation of ggplot... I don't know whether this would have anything to do with plotnine</a></li>
</ul>
<p><strong>EDIT:</strong> I noticed that, if I do the following, it "works". Still, I'm not sure whether this is a trustworthy solution.</p>
<pre class="lang-py prettyprint-override"><code>geom_histogram(size=0.5, colour='black',
breaks=range(-1, 120, 10)) # <------ here, starting in -1
</code></pre>
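<p>A hedged reading of why the <code>-1</code> trick works: histogram bins are half-open intervals, so an integer value sitting exactly on a break can be assigned to the neighboring bin. Placing the breaks <em>between</em> possible values (at <code>.5</code> offsets) removes the ambiguity symmetrically:</p>

```python
# Bin edges at -0.5, 9.5, 19.5, ..., 119.5: integer data can never
# coincide with an edge, so each value lands in exactly one bin.
breaks = [x - 0.5 for x in range(0, 130, 10)]

assert breaks[0] == -0.5 and breaks[-1] == 119.5
assert not any(float(v) in breaks for v in range(0, 121))
```

<p>Passing <code>breaks=breaks</code> to <code>geom_histogram(...)</code> uses the same mechanism as the <code>range(-1, 120, 10)</code> workaround, just without shifting all edges by a full unit.</p>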
|
<python><plotnine><geom-histogram>
|
2024-09-10 14:13:00
| 1
| 505
|
vaulttech
|
78,969,843
| 638,366
|
How to unit-test / mock code with beanie queries
|
<p>So my concrete problem is that I am trying to create a unit test for something like this:</p>
<pre class="lang-py prettyprint-override"><code>from mongo_models import Record

async def do_something(record_id: str):
    record = await Record.find_one(Record.id == record_id)
    if record is None:
        record = Record(id='randomly_generated_string', content='')
    # Other operations with `record` but we can ignore them for this
    return record
</code></pre>
<p>Where the <code>mongo_models.py</code> contains:</p>
<pre class="lang-py prettyprint-override"><code>from beanie import Document

class Record(Document):
    id: str
    content: str
</code></pre>
<p>So I tried doing something like this:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from unittest.mock import AsyncMock, patch

from core_code import do_something
from mongo_models import Record

@pytest.mark.asyncio
async def test_do_something():
    """Test do_something method."""
    # Create a mock for the object that will be returned by find_one
    record_mock = AsyncMock(spec=Record)
    record_mock.id = "test-id"
    record_mock.content = "Test content"
    # Test with the find_one method patched
    with patch('mongo_models.Record.find_one', return_value=record_mock) as mock_find_one:
        result = await do_something(record_id="input_id")
        # Assert that find_one was called
        mock_find_one.assert_awaited_once()
        # Assert the right object is being used
        assert result == record_mock
</code></pre>
<p>But I am getting an <code>AttributeError: id</code> when the line<br />
<code>record = await Record.find_one(Record.id == record_id)</code> is executed.</p>
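<p>A hedged observation on where the error comes from: patching only <code>find_one</code> still evaluates <code>Record.id == record_id</code> on the real, uninitialized beanie document class, and that expression is what raises. One workaround is to patch the <code>Record</code> <em>name as looked up inside <code>core_code</code></em>, so the comparison also hits a mock. A runnable sketch with stand-in definitions (the real test would patch <code>'core_code.Record'</code>):</p>

```python
import asyncio
import sys
import types
from unittest.mock import AsyncMock, MagicMock, patch

# Build a stand-in `core_code` module so this sketch is self-contained.
core_code = types.ModuleType("core_code")
exec(
    """
class Record:                      # stands in for the beanie Document
    id = None

async def do_something(record_id):
    record = await Record.find_one(Record.id == record_id)
    return record
""",
    core_code.__dict__,
)
sys.modules["core_code"] = core_code

async def run_test():
    record_mock = MagicMock(id="test-id", content="Test content")
    # Patch the *name used inside core_code*: Record.id then resolves on the
    # mock, so no real beanie query expression is ever built.
    with patch("core_code.Record") as MockRecord:
        MockRecord.find_one = AsyncMock(return_value=record_mock)
        result = await core_code.do_something("input_id")
        MockRecord.find_one.assert_awaited_once()
        assert result is record_mock
    return True

assert asyncio.run(run_test())
```

<p>The alternative, if you want the real query expressions exercised, is to initialize beanie against an in-memory/mock Mongo client in a test fixture instead of patching at all.</p>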
|
<python><mocking><pytest><beanie>
|
2024-09-10 13:52:18
| 1
| 1,347
|
Nordico
|
78,969,815
| 3,405,291
|
AttributeError: module 'numpy' has no attribute 'int' ---> ValueError: Buffer dtype mismatch, expected 'float32_t' but got 'double'
|
<h1>Code</h1>
<p>I'm running a Python code which has the following statements:</p>
<pre class="lang-py prettyprint-override"><code> # do NMS
dets = np.hstack((boxes, scores[:, np.newaxis])).astype(np.float32, copy=False)
keep = nms(dets, 0.3) # -> *** Error is thrown here :(
dets = dets[keep, :]
</code></pre>
<h1>Data</h1>
<p>The data shown by the debugger is:</p>
<p><a href="https://i.sstatic.net/oTXC3oyA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTXC3oyA.png" alt="Debugger steppper" /></a></p>
<p><a href="https://i.sstatic.net/fzls5su6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzls5su6.png" alt="Debugger data" /></a></p>
<h1>Error</h1>
<p>Eventually, the code throws this error:</p>
<blockquote>
<p>AttributeError: module 'numpy' has no attribute 'int'.</p>
</blockquote>
<p>At this line of code:</p>
<pre class="lang-py prettyprint-override"><code>from .nms.cpu_nms import cpu_nms, cpu_soft_nms

def nms(dets, thresh):
    """Dispatch to either CPU or GPU NMS implementations."""
    if dets.shape[0] == 0:
        return []
    return cpu_nms(dets, thresh)  # -> *** Error is thrown here :(
</code></pre>
<h1>Tried 1</h1>
<p>I don't see any usage of <code>np.int</code> but I see that <code>np.float32</code> is used as can be seen above. As suggested <a href="https://stackoverflow.com/a/74946903/3405291">here</a>, I replaced <code>np.float32</code> with <code>np.float32_</code>. Then I get another error:</p>
<blockquote>
<p>AttributeError: module 'numpy' has no attribute 'float32_'</p>
</blockquote>
<h1>Tried 2</h1>
<p>I replaced <code>np.float32</code> with <code>np.float_</code>, then this error is received:</p>
<blockquote>
<p>File "nms/cpu_nms.pyx", line 17, in nms.cpu_nms.cpu_nms</p>
</blockquote>
<blockquote>
<p>ValueError: Buffer dtype mismatch, expected 'float32_t' but got 'double'</p>
</blockquote>
<h1>Question</h1>
<p>What else can I try to resolve the error?</p>
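<p>For context: <code>np.int</code> was deprecated in NumPy 1.20 and removed in 1.24, and legacy Cython code like this <code>nms</code> module often still references it (the reference may be inside the <code>.pyx</code>, not your own code, which is why you can't see it). <code>np.float32</code> is still valid, so "Tried 1/2" made things worse by changing the dtype the Cython buffer expects. Options are pinning <code>numpy&lt;1.24</code>, editing the <code>.pyx</code> (<code>np.int</code> → <code>int</code> or <code>np.int64</code>), or, as a stopgap sketch, restoring the alias before importing the legacy module:</p>

```python
import numpy as np

# Stopgap shim: restore the alias removed in NumPy 1.24 so unmodified legacy
# code that references np.int keeps working. Prefer fixing the .pyx source
# (np.int -> int) and rebuilding for a permanent solution.
if not hasattr(np, "int"):
    np.int = int  # noqa: intentionally re-adding a removed alias

assert np.int is int
```

<p>The shim must run before the first <code>from .nms.cpu_nms import ...</code>. Keep <code>.astype(np.float32, ...)</code> as it was, since <code>cpu_nms</code> expects a <code>float32</code> buffer.</p>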
|
<python><numpy><attributeerror>
|
2024-09-10 13:45:40
| 1
| 8,185
|
Megidd
|
78,969,782
| 13,023,224
|
Run all jupyter notebooks inside folder and subfolders in an ordered manner and displaying process
|
<p>I want to run all Jupyter notebooks inside a folder that contains many subfolders, in alphabetical order (not in parallel).</p>
<ul>
<li>Each subfolder may contain other folders.</li>
<li>Folders contain jupyter notebooks and the resulting outcome (csv, json, excel, jpg files).</li>
<li>It is important that files run in order, since the outcome of one notebook is used by others as input source.</li>
</ul>
<p>As the notebooks are executed, I would like to see a print stating the notebook name and its path.</p>
<p>Until now I would create a notebook inside each folder and run all the notebooks from that folder with <code>%run samplenotebook1.ipynb</code>. However, this becomes tedious when there are numerous folders and subfolders, so I need to speed things up.</p>
<p>I have tried the solution from this <a href="https://stackoverflow.com/questions/69297359/is-there-a-way-to-run-all-jupyter-notebooks-inside-a-directory">post</a>; the notebook seems to run, but if I open any of the folders where the notebooks are supposed to run, I cannot see any files generated by running them.</p>
<p>Code below is the one I used, but the result was not the desired one.</p>
<pre><code>import papermill as pm
from glob import glob

for nb in glob('*.ipynb'):
    pm.execute_notebook(
        input_path=nb,
        output_path=nb,
        engine_name='embedded',
    )
</code></pre>
<p>I have also tried the code below; I get a print as if it were done, but it only runs notebooks in the folder, not in subfolders.</p>
<pre><code>import papermill as pm
from pathlib import Path

for nb in Path('../').glob('*.ipynb'):
    pm.execute_notebook(
        input_path=nb,
        output_path=nb  # Path to save executed notebook
    )
print('done')
</code></pre>
<p>The code above runs the notebooks in the given path, but not in inner folders. I tried adding <code>/*</code> to the path, but it did not work.</p>
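<p>A sketch that addresses both symptoms: <code>rglob</code> (instead of <code>glob</code>) recurses into subfolders, <code>sorted</code> gives alphabetical path order, and passing papermill's <code>cwd</code> argument runs each notebook in its own folder, so its csv/json/excel outputs land next to it rather than relative to wherever the launcher was started (likely why no files seemed to be generated):</p>

```python
from pathlib import Path

def collect_notebooks(root):
    """All .ipynb files under root, recursively, in alphabetical path order."""
    return sorted(Path(root).rglob('*.ipynb'))

def run_all(root):
    import papermill as pm  # imported lazily; assumed installed
    for nb in collect_notebooks(root):
        print(f"Running {nb}")
        pm.execute_notebook(
            input_path=str(nb),
            output_path=str(nb),
            cwd=str(nb.parent),  # outputs land beside the notebook
        )
```

<p>Note that alphabetical order is over the full relative path, so all notebooks of folder <code>a/</code> run before those of <code>b/</code>, which matches the notebook-feeds-notebook dependency described above as long as the naming reflects the ordering.</p>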
|
<python><jupyter-notebook>
|
2024-09-10 13:39:58
| 1
| 571
|
josepmaria
|
78,969,691
| 1,942,868
|
How to filter the one-to-many relationship from both side
|
<p>I have two tables with a relationship.</p>
<pre><code>class Parent(SafeDeleteModel):
    name = models.CharField(max_length=1048, null=True, blank=True)

class Child(SafeDeleteModel):
    name = models.CharField(max_length=1048, null=True, blank=True)
    parent = models.ForeignKey(Parent, blank=True, null=True, on_delete=models.CASCADE, related_name="parent_project")
</code></pre>
<p>In this case I can filter the Child by Parent such as</p>
<pre><code>Child.objects.filter(parent__name__contains="test")
</code></pre>
<p>However, I want to do the reverse, like this:</p>
<pre><code>Parent.objects.filter(child__name__contains="test")
</code></pre>
<p>Is it possible?</p>
<p>I tried the following:</p>
<pre><code>Parent.objects.filter(parent_project__contains="test")
</code></pre>
<p>However, I get this error:</p>
<pre><code>django.core.exceptions.FieldError: Unsupported lookup 'contains' for ManyToOneRel or join on the field not permitted.
</code></pre>
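<p>For reference, a hedged sketch of the reverse lookup: because the FK declares <code>related_name="parent_project"</code>, that is the name to traverse from <code>Parent</code>, and the lookup must end on a concrete field of <code>Child</code> (here <code>name</code>), not on the relation itself:</p>

```python
# Reverse filter through the related_name, targeting Child.name:
Parent.objects.filter(parent_project__name__contains="test").distinct()

# .distinct() because a parent with several matching children would
# otherwise appear once per match. Without related_name set, the default
# reverse filter path would be child__name__contains.
```

<p>The original <code>parent_project__contains="test"</code> failed because <code>contains</code> was applied to the relation (a <code>ManyToOneRel</code>) instead of a field on it.</p>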
|
<python><django>
|
2024-09-10 13:17:36
| 3
| 12,599
|
whitebear
|
78,969,678
| 1,762,051
|
Timed Hash Verification based web API call in Linden Scripting Language
|
<p>I am trying to call a web API which is secured by a timed hash verification system.</p>
<p>I am able to call that API using Python:</p>
<pre class="lang-py prettyprint-override"><code>import hmac
import hashlib
import time
import requests
import json

def generate_timed_hash(secret_key, data):
    timestamp = str(int(time.time()))  # Current Unix timestamp
    message = data + timestamp
    hash_value = hmac.new(secret_key.encode(), message.encode(), hashlib.sha256).hexdigest()
    return hash_value, timestamp

def testApi():
    data = json.dumps({'userId': 'c088ab7f-dd04-4836-93cd-7ab2843db971'})
    secret_key = 'mysecret'
    hash_value, timestamp = generate_timed_hash(secret_key, data)
    headers = {
        'X-Timestamp': timestamp,
        'X-Hash': hash_value,
    }
    response = requests.post('https://some-host/api/secured-plan-detail/', data={'data': data}, headers=headers)
    print(response.json())

testApi()
</code></pre>
<p>I found <a href="https://wiki.secondlife.com/wiki/LlSHA256String" rel="nofollow noreferrer"><code>llSHA256String</code></a> for generating a SHA-256 string, but I still can't figure out how to translate that Python code.</p>
<p>Here is my attempt at making the HTTP call:</p>
<pre><code>default
{
    state_entry()
    {
        llHTTPRequest(
            "https://some-host/api/secured-plan-detail/",
            [
                HTTP_METHOD, "POST",
                HTTP_MIMETYPE, "application/json"
                //HTTP_CUSTOM_HEADER, "X-Timestamp:", timestamp
                //HTTP_CUSTOM_HEADER, "X-Hash:", hash_value
            ],
            llList2Json(JSON_OBJECT, ["userId", "c088ab7f-dd04-4836-93cd-7ab2843db971"])
        );
    }

    http_response(key request_id, integer status, list metadata, string body)
    {
        llOwnerSay((string)status);
        llOwnerSay("response: " + body);
    }
}
</code></pre>
<p>How can I make such a call in Linden Scripting Language?</p>
|
<python><http><http-headers><sha256><linden-scripting-language>
|
2024-09-10 13:16:09
| 1
| 10,924
|
Alok
|
78,969,301
| 7,228,014
|
Loading a pipeline with a dense-array conversion step
|
<p>I trained and saved the following model using joblib:</p>
<pre><code>def to_dense(x):
    return np.asarray(x.todense())

to_dense_array = FunctionTransformer(to_dense, accept_sparse=True)

model = make_pipeline(
    TfidfVectorizer(),
    to_dense_array,
    HistGradientBoostingClassifier()
)
est = model.fit(texts, y)

save_path = os.path.join(os.getcwd(), "VAT_estimator.pkl")
joblib.dump(est, save_path)
</code></pre>
<p>The model works fine, accuracy is good, and no message is issued while saving with joblib.</p>
<p>Now, I try to reload the model from joblib using the following code:</p>
<pre><code>import joblib
# Load the saved model
estimator_file = "VAT_estimator.pkl"
model = joblib.load(estimator_file)
</code></pre>
<p>I then get the following error message:</p>
<pre><code>AttributeError: Can't get attribute 'to_dense' on <module '__main__'>
</code></pre>
<p>I can't avoid the conversion step to a dense array in the pipeline.</p>
<p>I tried to insert the conversion step back into the model after the import, but, at prediction time, I get the message that FunctionTransformer is not callable.</p>
<p>I can't see any way out.</p>
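<p>The underlying mechanism, sketched with the stdlib: joblib/pickle stores a function <em>by reference</em> (module + name), not by its code. A pipeline saved from a script where <code>to_dense</code> lived in <code>__main__</code> can therefore only be reloaded by a process that exposes the same name. Moving <code>to_dense</code> into an importable module (the module name below is hypothetical) used by both the training and the loading script fixes it:</p>

```python
import pickle
import sys
import types

# Simulate a real module file "transformers_util.py" holding to_dense.
mod = types.ModuleType("transformers_util")

def to_dense(x):
    # placeholder body; the real one returns np.asarray(x.todense())
    return x

to_dense.__module__ = "transformers_util"
mod.to_dense = to_dense
sys.modules["transformers_util"] = mod

# Functions pickle as a reference (module + name), not as code...
blob = pickle.dumps(to_dense)

# ...so unpickling works in any process where that module is importable.
restored = pickle.loads(blob)
assert restored is to_dense
```

<p>A quicker unblock, without restructuring: re-define <code>def to_dense(x): return np.asarray(x.todense())</code> at the top level of the loading script <em>before</em> calling <code>joblib.load</code>, so <code>__main__.to_dense</code> exists again. The "not callable" error from re-inserting a step afterwards is a symptom of patching the already-broken pipeline rather than satisfying the pickle lookup.</p>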
|
<python><numpy><scikit-learn><numpy-ndarray><joblib>
|
2024-09-10 11:36:52
| 1
| 309
|
JCF
|
78,969,272
| 1,820,665
|
How can I correctly implement a Python daemon which could fail to start?
|
<p>I have a Python daemon using the <code>python-daemon</code> package to daemonize. It can also be run to stay in foreground (by not using the <code>-d</code> command line parameter). It recurrently runs a function and also starts a minimal HTTP server that can be used to communicate with it (the full code can be found at <a href="https://gitlab.com/l3u/go-e-pvsd/" rel="nofollow noreferrer">GitLab</a>).</p>
<p>Stripped down to a minimal example, it's:</p>
<pre><code>#!/usr/bin/env python3
import sys
import signal
import argparse
import daemon
import daemon.pidfile
from syslog import syslog
import threading
from http.server import HTTPServer, BaseHTTPRequestHandler
from time import strftime
parser = argparse.ArgumentParser()
parser.add_argument("-d", action = "store_true", help = "daemonize")
args = parser.parse_args()

class RequestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"I'm here")

class ProcessManager:
    def __init__(self):
        self.timer = None
        self.server = None
        self.signalCatched = False
        self.finished = threading.Event()

    def setup(self) -> bool:
        syslog("Setting up different stuff")
        # All kind of stuff that could fail, returning False then
        syslog("Setting up the HTTP server")
        try:
            self.server = HTTPServer(("127.0.0.1", 8000), RequestHandler)
        except Exception as error:
            syslog("Failed to setup the HTTP server")
            return False
        return True

    def start(self):
        thread = threading.Thread(target = self.server.serve_forever)
        thread.start()
        self.scheduleNextRun()

    def scheduleNextRun(self):
        if self.signalCatched:
            return
        syslog("Daemon running at {}".format(strftime("%Y-%m-%d %H:%M:%S")))
        self.timer = threading.Timer(3, self.scheduleNextRun)
        self.timer.start()

    def terminate(self, signum, frame):
        syslog("Catched signal, will now terminate")
        self.signalCatched = True
        if self.timer:
            self.timer.cancel()
        self.server.shutdown()
        self.finished.set()

def setupProcessManager():
    if not processManager.setup():
        sys.exit(1)
    signal.signal(signal.SIGTERM, processManager.terminate)
    signal.signal(signal.SIGINT, processManager.terminate)
    processManager.start()

processManager = ProcessManager()

if args.d:
    with daemon.DaemonContext(pidfile = daemon.pidfile.PIDLockFile("/run/test.pid")):
        syslog("Starting up in daemon mode")
        setupProcessManager()
        processManager.finished.wait()
else:
    syslog("Starting up in foreground mode")
    setupProcessManager()
</code></pre>
<p>I wrote a minimal OpenRC init script to run it as a daemon, which also works fine, I can start and stop the daemon like one would expect it.</p>
<p><strong>The problem is that I can't detect if the startup failed.</strong> When it runs in daemon mode, the <code>sys.exit(1)</code> has no effect, because as soon as the pidfile is created, OpenRC counts this as a successful startup. Also, the parent firing up the daemon apparently exits successfully.</p>
<p>I can't setup the daemon outside of the DaemonContext. If I move the signal connections out of it, signals aren't handled anymore. If I only call <code>processManager.start()</code> inside the DaemonContext, the recurring function call works, but the HTTP server is not reachable.</p>
<p>So: How do I implement this correctly, so that everything keeps working, but the RC system is able to detect if the startup failed?</p>
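<p>One common pattern for this (a hedged sketch, not a feature of <code>python-daemon</code> itself): create a pipe <em>before</em> daemonizing, keep the write end alive in the daemon (with <code>DaemonContext</code> that would mean <code>files_preserve=[write_fd]</code>), and have the foreground process block on the read end until <code>setup()</code> reports a verdict, exiting with that status so the RC system sees the failure. The mechanism in pure-stdlib POSIX terms:</p>

```python
import os

def spawn_with_readiness(setup_ok: bool) -> bool:
    """Fork a child that reports setup success over a pipe; the parent
    returns the verdict instead of exiting blindly.

    In the real daemon, the child would enter DaemonContext with
    files_preserve=[write_fd], run setup(), then write b"0" or b"1".
    """
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:
        # child ("daemon"): run setup, then report the verdict.
        os.close(read_fd)
        os.write(write_fd, b"0" if setup_ok else b"1")
        os.close(write_fd)
        os._exit(0)
    # parent (what the RC system waits on): block until the child reports.
    os.close(write_fd)
    verdict = os.read(read_fd, 1)
    os.close(read_fd)
    os.waitpid(pid, 0)
    return verdict == b"0"

assert spawn_with_readiness(True) is True
assert spawn_with_readiness(False) is False
```

<p>The key point is that the foreground process's exit code (0 on <code>b"0"</code>, nonzero otherwise) is the only thing OpenRC observes, so the pidfile alone must not be the success signal.</p>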
|
<python><python-daemon><openrc>
|
2024-09-10 11:28:43
| 1
| 1,774
|
Tobias Leupold
|
78,968,974
| 8,950,119
|
How to remove the accessibility item from the userbar
|
<p>Is there a way to remove the accessibility item from the Wagtail userbar?</p>
|
<python><django><wagtail>
|
2024-09-10 10:10:27
| 1
| 2,090
|
JJaun
|
78,968,894
| 893,254
|
How to plot `datetime.time` type on an axis?
|
<p>I am trying to plot a dataframe which has a <code>datetime.time</code> index type.</p>
<p>Matplotlib does not appear to support plotting an axis using a <code>datetime.time</code> type, and attempting to do so produces the following error message:</p>
<pre><code>TypeError: float() argument must be a string or a real number, not 'datetime.time'
</code></pre>
<p>The data I am working with looks at events which occur during a day. The data is a <code>datetime.time</code> type and not a <code>datetime.datetime</code> type, because conceptually events occur across multiple dates, and only the time of day is important.</p>
<p>The data is created by aggregating what was a <code>datetime.datetime</code> index:</p>
<pre><code># convert df.index from datetime.datetime to datetime.time type
# and aggregate values using groupby
new_df = df.groupby(df.index.time).sum()
</code></pre>
<p>There are a large number of unique time values (60 minutes * 24 hours = 1,440 unique values). This means that the usual suggestion of converting to <code>str</code> will not work.</p>
<p><a href="https://i.sstatic.net/t6NTvuyf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/t6NTvuyf.png" alt="convert datetime.time to str type" /></a></p>
<p>Converting to <code>str</code> is the wrong thing to do anyway, because <code>datetime.time</code> types are not strings, and if we convert them to strings just to plot a figure, we then lose the ability to perform further operations on the data, unless we convert the values back into time values by parsing them. It's just not the right solution.</p>
<p>What is the real solution to this problem?</p>
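<p>One workable approach (a sketch): map each <code>datetime.time</code> to a number, e.g. fractional hours since midnight, purely for the axis, while keeping the underlying index as <code>time</code> objects; matplotlib then plots a plain numeric axis whose ticks can be formatted back into <code>HH:MM</code>:</p>

```python
import datetime as dt

def time_to_hours(t: dt.time) -> float:
    """Fractional hours since midnight: a plottable numeric view of a time."""
    return t.hour + t.minute / 60 + t.second / 3600 + t.microsecond / 3.6e9

assert time_to_hours(dt.time(12, 30)) == 12.5
assert time_to_hours(dt.time(0)) == 0.0

# With the aggregated dataframe (names from the question):
#   ax.plot(new_df.index.map(time_to_hours), new_df["some_col"])
# then format ticks back into times with a FuncFormatter if desired.
```

<p>An alternative with the same spirit is attaching a dummy date to each time (building <code>datetime.datetime</code> values on an arbitrary day), which lets matplotlib's native date locators and formatters handle the axis; either way the data itself is never converted to strings.</p>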
|
<python><pandas><matplotlib>
|
2024-09-10 09:51:21
| 1
| 18,579
|
user2138149
|
78,968,834
| 857,662
|
tensorflow back compatibility: KerasTensor and tf-keras tensor
|
<p>I have some code written with TensorFlow 2.3 in which tensors are constructed using tf.keras.</p>
<p>Since moving to TensorFlow > 2.16, tf.keras is retired and tf uses Keras >= 3.0.</p>
<p>The type of the tensors in my code changes from tf.Tensor to KerasTensor. As a consequence, my code is largely affected; most of it raises the following error:</p>
<p>ValueError: A KerasTensor cannot be used as input to a TensorFlow function. A KerasTensor is a symbolic placeholder for a shape and dtype, used when constructing Keras Functional models or Keras Functions. You can only use it as input to a Keras layer or a Keras operation (from the namespaces <code>keras.layers</code> and <code>keras.operations</code>). You are likely doing something like:</p>
<pre><code>x = Input(...)
...
tf_fn(x) # Invalid.
</code></pre>
<p>What you should do instead is wrap <code>tf_fn</code> in a layer:</p>
<pre><code>class MyLayer(Layer):
def call(self, x):
return tf_fn(x)
x = MyLayer()(x)
</code></pre>
<p>So is there a simple solution to solve this or do I need to apply this wrapper everywhere?</p>
<p><em>Update 2024-09-11</em></p>
<p>Let me try to make the question more solid. Consider the following example. It is taken from function <code>get_decoder_mask</code> in <a href="https://github.com/google-research/google-research/blob/master/tft/libs/tft_model.py" rel="nofollow noreferrer">google tft module</a></p>
<p>In TF 2.3, using tf.keras, we can define the following tensor for masking. Define</p>
<pre><code>t = keras.layers.Input(shape=(250, 5), name="input")
</code></pre>
<p>which is a <code>tf.Tensor</code>.</p>
<pre><code>len_s = tf.shape(t)[-2]
bs = tf.shape(t)[:-2]
mask = tf.cumsum(tf.eye(len_s, batch_shape=bs), -2)
</code></pre>
<p>In TF 2.17, only native keras is available.</p>
<pre><code>t = keras.layers.Input(shape=(250, 5), name="input")
</code></pre>
<p>which is <code>KerasTensor</code>. Then the following code does not work.</p>
<pre><code>len_s = tf.shape(t)[-2]
bs = tf.shape(t)[:-2]
mask = tf.cumsum(tf.eye(len_s, batch_shape=bs), -2)
</code></pre>
<p>With <code>tf.shape(t)</code> replaced by <code>t.shape</code>, the code breaks further down, at <code>tf.eye(...)</code>.</p>
|
<python><tensorflow><keras>
|
2024-09-10 09:35:24
| 0
| 331
|
newbie
|
78,968,643
| 2,386,113
|
VS Code reopens closed Python script and matplotlib figures after restart
|
<p>I'm facing a strange issue with VS Code on my Windows 11 machine. A few days ago, I ran a simple Python program that generates matplotlib figures in a for loop. The code does not involve any multiprocessing or multithreading; it simply iterates 8 times, creates a figure using matplotlib, and shows it.</p>
<p>I closed the program and exited VS Code after it completed running. However, when I reopened VS Code later, it started to automatically display the matplotlib figures again, as if the program was still running. As I manually close the figures, VS Code proceeds through the loop, and once all figures are closed, the IDE itself also shuts down.</p>
<p>There is no output in the terminal window, and it doesn't seem like the script is explicitly running, but the figures keep appearing.</p>
<p>Here's what I've tried so far:</p>
<ul>
<li>Restarting VS Code.</li>
<li>Restarting my laptop.</li>
<li>Killing any lingering Python processes in Task Manager.</li>
<li>Clearing VS Code's <code>.vscode</code> folder and cache.</li>
<li>Checking my <code>workbench.startupEditor</code> setting in VS Code to ensure that no files are reopened automatically.</li>
</ul>
<p>I can run/debug another Python script in parallel while the unwanted figures are opened.</p>
<p>I ran the same program on a different machine, no such problem.</p>
<p>Has anyone experienced a similar issue or have any suggestions on what could be causing this?</p>
|
<python><matplotlib><visual-studio-code>
|
2024-09-10 08:45:25
| 1
| 5,777
|
skm
|
78,968,636
| 9,827,438
|
Running Blender python script outside of blender error message calculate_object_volumes.poll() failed, context is incorrect
|
<p>I am trying to run a Python script outside of Blender, headless, via the <strong>blender -b --python import_ifc_model.py</strong> command.</p>
<p>I have installed Blender 4.2.1 and the add-on Bonsai (the new name; it was previously BlenderBIM). The idea is to import an IFC file and calculate the volume of all the objects. (source code <a href="https://github.com/IfcOpenShell/IfcOpenShell/blob/f0502c123ea61c5574a7cb0d8e293afc94c4ec1e/src/blenderbim/blenderbim/bim/module/qto/operator.py#L73" rel="nofollow noreferrer">https://github.com/IfcOpenShell/IfcOpenShell/blob/f0502c123ea61c5574a7cb0d8e293afc94c4ec1e/src/blenderbim/blenderbim/bim/module/qto/operator.py#L73</a>)</p>
<pre><code>import time
import bpy
import ifcopenshell
import ifcopenshell.api
import bonsai.tool as tool
import bonsai.core.qto as core
from bonsai.bim.ifc import IfcStore
from bonsai.bim.module.qto import helper
from bonsai.bim.module.qto import operator
# blender -b --python import_ifc_model.py
bpy.ops.bim.load_project(filepath="E:\\model.ifc", use_relative_path=False, should_start_fresh_session=True)
time.sleep(30)
bpy.ops.bim.load_project_elements()
bpy.ops.object.select_all(action='DESELECT')
# go to 'edit' mode >>> bpy.ops.object.editmode_toggle()
#contexte_scene = bpy.ops.object.select_all(action='SELECT')
#result = helper.calculate_volumes([o for o in bpy.context.selected_objects if o.type == "MESH"], bpy.context)
#print(str(round(result, 3)))
#operator.CalculateObjectVolumes(bpy.types.Operator)
#bpy.ops.bim.calculate_object_volumes()
contexte_scene = bpy.ops.object.select_all(action='SELECT')
bpy.context.view_layer.objects.active
#bpy.context.active_object.data
for o in bpy.context.selected_objects:
    # for idx in range(len(bpy.data.objects)):
    o.select_set(True)
    # bpy.context.scene.objects.active = bpy.data.objects[idx]
bpy.ops.bim.calculate_object_volumes()
</code></pre>
<p>When running I have the following error:</p>
<pre><code> Traceback (most recent call last): File
"C:\Users\Downloads\blender-4.2.1-windows-x64\import_ifc_model.py",
line 33, in <module>
bpy.ops.bim.calculate_object_volumes() File "C:\Users\Downloads\blender-4.2.1-windows-x64\4.2\scripts\modules\bpy\ops.py",
line 109, in __call__
ret = _op_call(self.idname_py(), kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ RuntimeError: Operator bpy.ops.bim.calculate_object_volumes.poll() failed, context is
incorrect
</code></pre>
<p>Do you have an idea how to fix it?</p>
|
<python><blender><ifc><bim>
|
2024-09-10 08:44:12
| 0
| 371
|
ana maria
|
78,968,621
| 256,965
|
How to define a nested generic type in Python?
|
<p>I have a function that (should) flatten arbitrarily deeply nested lists</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
type Nested[T] = Sequence[T | Sequence[Nested]]
def flatten(seq: Nested[T]) -> list[T]:
flattened: list[T] = []
for elem in seq:
if isinstance(item, Sequence):
flattened.extend(flatten(cast(Nested[T], elem)))
else:
flattened.append(elem)
return flattened
</code></pre>
<p>Now if I pass in, for example, a list of lists, the result seems to still be a list of lists. Why is that?</p>
<pre class="lang-py prettyprint-override"><code>
test: list[list[str]]
# (parameter) flattened: list[list[str]]
flattened = flatten(test)
</code></pre>
<p>It looks like it is always returning the same type that is passed in. Why is that? It looks like whatever is inside the first list is taken to be the generic type. How can I define this nested (recursive) type and have the flatten function work and show the type hints correctly, like this?</p>
<pre class="lang-py prettyprint-override"><code>
test: list[list[str]]
test2: list[list[list[list[int]]]]
# (parameter) flattened: list[str]
flattened = flatten(test)
# (parameter) flattened2: list[int]
flattened2 = flatten(test2)
</code></pre>
<p>Python used is 3.12</p>
<hr />
<p>Edit:</p>
<p>Just when I posted this I found out there was a little mistake in my definitions. If the function is defined like this</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
type Nested[T] = Sequence[T | Nested[T]] # <--- Fix here!!
def flatten(seq: Nested[T]) -> list[T]:
flattened: list[T] = []
for elem in seq:
if isinstance(item, Sequence):
flattened.extend(flatten(cast(Nested[T], elem)))
else:
flattened.append(elem)
return flattened
</code></pre>
<p>Another problem was that in my actual code I was assigning the flattened list back to a variable that was already annotated as a list of lists (or deeper). Assigning the flattened list to a new variable shows the type correctly... EXCEPT in the case of <code>bytes</code>. I guess that is because under the hood the type <code>bytes</code> is actually some kind of <code>Iterable[int]</code></p>
<pre class="lang-py prettyprint-override"><code>test: list[list[str]]
bytes_test: list[list[bytes]]
# test is already defined as list[list[str]]
# (parameter) test: list[list[str]]
test = flatten(test)
# type for flattened is inferred so it's list[str]
# (parameter) flattened: list[str]
flattened = flatten(test)
# Apparently bytes type is equal to Iterable[int], so it is flattened as well
# (parameter) flattened_bytes: list[int]
flattened_bytes = flatten(bytes_test)
</code></pre>
<p>Now I wonder how I can preserve the nesting in the case of <code>bytes</code>, so that the result would be a correctly flattened <code>list[bytes]</code></p>
<hr />
<p>Edit2:</p>
<p>There seems to be a bug of infinite recursion when using <code>str</code> or <code>bytes</code> as the type, because both of them are iterables. It looks like the best way to do this kind of generic flattening is to invert the <code>isinstance</code> check, like this:</p>
<pre class="lang-py prettyprint-override"><code>T = TypeVar("T")
type Nested[T] = Sequence[T | Nested[T]]
def flatten(seq: Nested[T]) -> list[T]:
flattened: list[T] = []
for elem in seq:
if isinstance(item, T): # <-- This doesn't work. Need custom generic isinstance checker
flattened.append(elem)
else:
flattened.extend(flatten(elem))
return flattened
</code></pre>
<p>But then this would need some custom generic isinstance checker</p>
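<p>For what it's worth, a runtime-level sketch (type annotations omitted) that avoids the infinite recursion by treating <code>str</code>/<code>bytes</code> as atoms rather than recursing into them:</p>

```python
from collections.abc import Sequence

def flatten(seq):
    """Flatten arbitrarily nested sequences, keeping str/bytes intact."""
    flattened = []
    for elem in seq:
        # str, bytes and bytearray are themselves Sequences, so they must be
        # excluded explicitly to avoid recursing into characters/ints forever
        if isinstance(elem, Sequence) and not isinstance(elem, (str, bytes, bytearray)):
            flattened.extend(flatten(elem))
        else:
            flattened.append(elem)
    return flattened

print(flatten([["a", "b"], [["c"]]]))
print(flatten([[b"ab"], [b"cd"]]))
```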
|
<python><python-typing>
|
2024-09-10 08:41:01
| 1
| 1,869
|
zaplec
|
78,968,459
| 13,224,216
|
How to pass custom arguments into Config file (for generating client-side assets)?
|
<p>Projects often generate some client-side artifacts before running automations.
In our case, we'd like to generate some build files, then push them to remote servers with a Pyinfra automation.</p>
<p>In order to achieve that we use:</p>
<ul>
<li><a href="https://docs.pyinfra.com/en/2.x/examples/client_side_assets.html" rel="nofollow noreferrer">Config script</a> — this file is executed exclusively for a local machine and only once, and documentation states it's a good place to generate client-side artifacts;</li>
<li>Automation scripts — general Pyinfra scripts which are executed for every host in your inventory.</li>
</ul>
<p>The thing is that we'd like to generate a build artifact according to the argument provided in CLI:</p>
<pre><code>pyinfra --data build-version=v1.0 --config prepare.py inventory.py deploy.py
</code></pre>
<p>I didn't find any example of how to access <code>build-version</code> in the <code>prepare.py</code> script.
Is it possible to access <code>build-version</code> in the <code>prepare.py</code> config script somehow?</p>
<h3>What have you tried?</h3>
<p>I've tried to <code>from pyinfra import host</code>, but the host object lacks the <code>data</code> field (it seems the inventory is not initialized).</p>
|
<python><pyinfra>
|
2024-09-10 07:58:21
| 1
| 428
|
mdraevich
|
78,968,419
| 17,580,381
|
Querying a pandas dataframe efficiently
|
<p>Here's an MRE that produces the expected results:</p>
<pre><code>import pandas as pd
data = {
"A": [1, 2, 3],
"B": [100, 200, 300]
}
s = {100, 300, 500}
df = pd.DataFrame(data)
for _, row in df.query("B in @s").iterrows():
    print(row["A"])
</code></pre>
<p>Output:</p>
<pre><code>1
3
</code></pre>
<p>In other words, what I want is all the column "A" values where the corresponding column "B" values occur in a set.</p>
<p>Although this code works, I can't help thinking that it's a little cumbersome. Is there a more concise/efficient way to achieve this?</p>
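<p>For comparison, the same selection can be expressed as a boolean mask with <code>Series.isin</code>, which avoids both <code>query</code>'s string parsing and the <code>iterrows</code> loop:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [100, 200, 300]})
s = {100, 300, 500}

# keep column "A" where the corresponding "B" value is in the set
result = df.loc[df["B"].isin(s), "A"]
print(result.tolist())
```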
|
<python><pandas>
|
2024-09-10 07:47:15
| 0
| 28,997
|
Ramrab
|
78,968,135
| 1,349,673
|
Is it possible to raise an exception on FutureWarning (pandas)?
|
<p>I am seeing a <code>FutureWarning</code> in the output of my code, coming from the <code>pandas</code> library.</p>
<p>I would like to catch when this is occurring in the code execution. The natural way would be to request <code>pandas</code> to raise an exception rather than printing a warning message.</p>
<p>Is there a way to do this? If an exception cannot be raised, is there some other way of catching this condition at runtime?</p>
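<p>Independent of pandas, the standard-library <code>warnings</code> machinery can escalate a whole warning category into an exception; a minimal sketch using <code>simplefilter</code>:</p>

```python
import warnings

# turn every FutureWarning into a raised exception
warnings.simplefilter("error", category=FutureWarning)

caught = None
try:
    warnings.warn("this API will change", FutureWarning)
except FutureWarning as exc:
    caught = exc

print(caught)
```

<p>The filter applies process-wide, so warnings raised from inside library code (including pandas) are affected too.</p>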
|
<python><pandas>
|
2024-09-10 06:22:32
| 1
| 8,126
|
James Hirschorn
|
78,968,073
| 22,213,065
|
Detect only left-most boxes in image
|
<p>I have a JPG image that contains mobile brand names:<br />
<a href="https://i.sstatic.net/f2LDnB6t.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f2LDnB6t.jpg" alt="enter image description here" /></a></p>
<p><strong>Now I want to detect each word's first character with a Python script.</strong><br />
I wrote the following Python script for this:</p>
<pre><code>import cv2
import numpy as np
from tkinter import Tk, Canvas, Frame, Scrollbar, BOTH, VERTICAL, HORIZONTAL
from PIL import Image, ImageTk

# Function to draw rectangles around shapes and display using Tkinter
def draw_rectangles(image_path):
    # Create a Tkinter window to display the image
    root = Tk()
    root.title("Image with Left-Most Rectangles Only")

    # Load the image
    image = cv2.imread(image_path)

    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Apply adaptive thresholding to get better separation of text
    thresh = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2
    )

    # Find contours in the binary image
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # Dictionary to store contours grouped by Y-coordinate ranges
    contours_by_y = {}

    # Sort contours by X-coordinate to ensure we pick the left-most character first
    sorted_contours = sorted(contours, key=lambda c: cv2.boundingRect(c)[0])

    # Group contours by their Y coordinate to keep only the left-most rectangle per Y range
    for contour in sorted_contours:
        x, y, w, h = cv2.boundingRect(contour)
        if w > 15 and h > 15:  # Adjust the size filter to remove small artifacts
            aspect_ratio = w / float(h)
            # Ensure the aspect ratio is within the typical range of letters
            if 0.2 < aspect_ratio < 5:
                y_range = y // 20  # Group by a smaller Y coordinate range for better separation
                # Check if the current rectangle is more left-most in X within its Y range
                if y_range not in contours_by_y:
                    contours_by_y[y_range] = (x, y, w, h)  # Store the first contour found in this range
                else:
                    # Compare and keep the left-most (smallest X) rectangle
                    current_x, _, _, _ = contours_by_y[y_range]
                    # Check distance between new contour and the existing one to avoid close detection
                    if x < current_x and (x - current_x) > 20:  # Distance threshold to filter out close contours
                        contours_by_y[y_range] = (x, y, w, h)

    # Draw only the left-most rectangles
    for (x, y, w, h) in contours_by_y.values():
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)  # Red color in BGR

    # Convert the image to RGB (OpenCV uses BGR by default)
    image_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Convert the image to a format Tkinter can use
    image_pil = Image.fromarray(image_rgb)
    image_tk = ImageTk.PhotoImage(image_pil)

    # Create a frame for the Canvas and scrollbars
    frame = Frame(root)
    frame.pack(fill=BOTH, expand=True)

    # Create a Canvas widget to display the image
    canvas = Canvas(frame, width=image_tk.width(), height=image_tk.height())
    canvas.pack(side="left", fill="both", expand=True)

    # Add scrollbars to the Canvas
    v_scrollbar = Scrollbar(frame, orient=VERTICAL, command=canvas.yview)
    v_scrollbar.pack(side="right", fill="y")
    h_scrollbar = Scrollbar(frame, orient=HORIZONTAL, command=canvas.xview)
    h_scrollbar.pack(side="bottom", fill="x")
    canvas.configure(yscrollcommand=v_scrollbar.set, xscrollcommand=h_scrollbar.set)
    canvas.create_image(0, 0, anchor="nw", image=image_tk)
    canvas.config(scrollregion=canvas.bbox("all"))

    # Keep a reference to the image to prevent garbage collection
    canvas.image = image_tk

    root.mainloop()

# Path to your image
image_path = r"E:\Desktop\mobile_brands\ORG_027081-Recovered.jpg"

# Call the function
draw_rectangles(image_path)
</code></pre>
<p>But I don't know why it is not working well. The accuracy of this script is about 90%; for example, in the above image it detects the "a" character in "Samsung" instead of the first letter.<br />
<a href="https://i.sstatic.net/GPOsZNjQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPOsZNjQ.jpg" alt="enter image description here" /></a></p>
<p>Where is the problem in my script?<br />
How can I fix it?<br />
Maybe left-most boxes in an image cannot be detected by X and Y coordinates alone.<br />
<strong>Note that I don't want to use OCR.</strong></p>
|
<python><opencv><computer-vision><ocr>
|
2024-09-10 05:57:21
| 1
| 781
|
Pubg Mobile
|
78,968,035
| 11,431,038
|
SQLAlchemy: to_sql create empty table after inspect
|
<p>I am seeing strange behavior here. After I initialize <code>inspect(cn)</code>, <code>to_sql</code> writes an empty table to the database.</p>
<pre><code>def save_data_to_sql(df, table_name, cn):
    df.to_sql(con=cn, name='test_1', if_exists='replace', index=False)  # << Table with data

    # Check if the table exists
    inspector = inspect(cn)
    if inspector.has_table(table_name):
        logger.info(f"Table exists")
    else:
        logger.info(f"Table does not exist")

    df.to_sql(con=cn, name='test_2', if_exists='replace', index=False)  # << Empty table?!?!
</code></pre>
<p>Does anyone have an idea what's wrong?</p>
|
<python><python-3.x><pandas><sqlalchemy><mariadb>
|
2024-09-10 05:41:43
| 1
| 332
|
Mike
|
78,967,951
| 17,889,492
|
Changing variables in sympy
|
<p>According to the documentation one can make substitutions as:</p>
<pre><code>from sympy import symbols, sin, cos
x, y = symbols('x y')

expr = sin(2*x) + cos(2*x)
expr.subs(sin(2*x), 2*sin(x)*cos(x))
</code></pre>
<p>But say <code>x = y + 2</code>. How do I convert the expression from a function of <code>x</code> to a function of <code>y</code> without having to give the substitution explicitly?</p>
|
<python><sympy>
|
2024-09-10 04:57:50
| 2
| 526
|
R Walser
|
78,967,779
| 2,392,192
|
Subclassing random.Random produces different results with same seed
|
<p>I've run into something I don't understand. This code:</p>
<pre class="lang-py prettyprint-override"><code>import random
class MyRandom(random.Random):
    pass
r = random.Random()
r.seed(10)
print(r.randint(0, 20))
print(r.randint(0, 20))
print(r.randint(0, 20))
print("------------")
r2 = MyRandom()
r2.seed(10)
print(r2.randint(0, 20))
print(r2.randint(0, 20))
print(r2.randint(0, 20))
</code></pre>
<p>...prints:</p>
<pre><code>18
1
13
------------
18
1
13
</code></pre>
<p>This is as expected, since it's the same seed. But if I innocuously override the random() method like this:</p>
<pre class="lang-py prettyprint-override"><code>import random
class MyRandom(random.Random):
    def random(self):
        return super().random()
r = random.Random()
r.seed(10)
print(r.randint(0, 20))
print(r.randint(0, 20))
print(r.randint(0, 20))
print("------------")
r2 = MyRandom()
r2.seed(10)
print(r2.randint(0, 20))
print(r2.randint(0, 20))
print(r2.randint(0, 20))
</code></pre>
<p>Suddenly it prints:</p>
<pre><code>18
1
13
------------
20
9
6
</code></pre>
<p>Why on earth would that change anything? How is the call to super somehow upsetting the internal state?</p>
<p>Even weirder, there <em>isn't</em> a discrepancy if I'm calling <code>r.random()</code> instead of <code>r.randint()</code>. Perhaps that clue will be helpful.</p>
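<p>A sketch of one thing worth inspecting here (this relies on a CPython implementation detail): <code>Random.__init_subclass__</code> swaps the internal <code>_randbelow</code> helper, which backs <code>randint</code>, whenever a subclass overrides <code>random()</code> without also overriding <code>getrandbits()</code>. That would explain why <code>randint</code> diverges while <code>random()</code> itself does not:</p>

```python
import random

class Plain(random.Random):
    pass

class OverridesRandom(random.Random):
    def random(self):
        return super().random()

# Plain keeps the getrandbits-based helper; OverridesRandom is switched to
# the random()-based helper by Random.__init_subclass__ (CPython detail),
# so the two subclasses draw integers through different code paths.
print(Plain._randbelow is OverridesRandom._randbelow)
```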
|
<python><random><random-seed>
|
2024-09-10 03:23:09
| 0
| 563
|
MarcTheSpark
|
78,967,533
| 825,227
|
Is there a way to add lines across subplots in Python
|
<p>I have a simple plot, to which I've added lines as below:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns

mid = d[d.Position==0].Price.mean()
b_a = d[(d.Position==0) & (d.Side == 0)].Price.values
b_b = d[(d.Position==0) & (d.Side == 1)].Price.values
f, ax = plt.subplots()
sns.set_color_codes('muted')
sns.barplot(data = d[d.Side==0], x = 'Price', y = 'Size', color = 'b', native_scale=True)
sns.barplot(data = d[d.Side==1], x = 'Price', y = 'Size', color = 'r', native_scale=True)
ax.xaxis.set_major_locator(ticker.MultipleLocator(.0001))
plt.axvline(x=mid, color = 'b', lw = 1.5)
plt.axvline(x=b_a, color = 'k', lw = 1, ls='--')
plt.axvline(x=b_b, color = 'k', lw = 1, ls='--')
</code></pre>
<p><a href="https://i.sstatic.net/nSUBUqgP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSUBUqgP.png" alt="enter image description here" /></a></p>
<p>Data looks like this:
<strong>d</strong></p>
<pre><code> Position Operation Side Price Size
9 9 0 1 0.7289 -16
8 8 0 1 0.729 -427
7 7 0 1 0.7291 -267
6 6 0 1 0.7292 -15
5 5 0 1 0.7293 -16
4 4 0 1 0.7294 -16
3 3 0 1 0.7295 -426
2 2 0 1 0.7296 -8
1 1 0 1 0.7297 -14
0 0 0 1 0.7298 -37
10 0 0 0 0.7299 6
11 1 0 0 0.73 34
12 2 0 0 0.7301 7
13 3 0 0 0.7302 9
14 4 0 0 0.7303 16
15 5 0 0 0.7304 15
16 6 0 0 0.7305 429
17 7 0 0 0.7306 16
18 8 0 0 0.7307 265
19 9 0 0 0.7308 18
</code></pre>
<p>I'd like to plot a number of this sequentially using <code>matplotlib</code>'s <code>subplots</code> method like this (here, <code>x</code> and <code>y</code> are just a collection of values from prior <code>d</code> dataframes assembled for plotting):</p>
<pre><code>cnt = 5
f, ax = plt.subplots(cnt, 1, sharex=True)
sns.set_color_codes('muted')
for i in range(cnt):
    sns.barplot(x = x.iloc[i, 10:].values, y = y.iloc[i, 10:].values, color = 'b', native_scale=True, ax = ax[i])
    sns.barplot(x = x.iloc[i, :10].values, y = y.iloc[i, 10:].values, color = 'r', native_scale=True, ax = ax[i])
    ax[i].xaxis.set_major_locator(ticker.MultipleLocator(.0001))
</code></pre>
<p><a href="https://i.sstatic.net/XWUS8Xrc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWUS8Xrc.png" alt="enter image description here" /></a></p>
<p><strong>Is there a way to create connecting lines <em>across</em> subplots? Like this crude version below?</strong></p>
<p><a href="https://i.sstatic.net/26nq8uCM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26nq8uCM.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><seaborn>
|
2024-09-10 00:21:35
| 2
| 1,702
|
Chris
|
78,966,995
| 4,611,374
|
How to write to the Flask Session from a child process
|
<p>I have a flask web app that does image processing. The parent application calls a function that creates several temporary images, writes them to disk, and stores their paths in the flask Session. That workflow is leaking memory -- the RAM allocated to the function call is not released until the webserver is restarted.</p>
<p>As a workaround, I am trying to call the function from a separate process, but encounter the following problems:</p>
<ol>
<li>Keys/Values added to the Session within the function are not propagated to the parent namespace</li>
<li>Values written to multiprocessing.Value objects within the function are not propagated to the parent namespace</li>
</ol>
<p>To demonstrate the problems, I've created the 3 following minimal examples.</p>
<p><code>app.py</code></p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, session
from flask_session import Session
from multiprocessing import Process, Value
app = Flask(__name__.split('.')[0])
app.secret_key = "secret_key"
SESSION_TYPE = 'filesystem'
app.config.from_object(__name__)
Session(app)
def write_to_session(session):
session['inside_function'] = "inside"
print(session.items())
# This route demonstrates the desired behavior: the session is modified by the function.
@app.route('/baseline', methods=['GET'])
def baseline():
session.clear()
session['outside_function'] = "outside"
text = ""
text += "<p>Initial session data <br />" + str(session.items()) + "</p>"
write_to_session(session)
text += "<p>Session after function call <br />" + str(session.items()) + "</p>"
return text
# When calling the function from a process, the session is correctly
# modified within the function, but the changes are not propagated back
# to the parent scope.
@app.route('/multi', methods=['GET'])
def multi():
session.clear()
session['outside_function'] = "outside"
text = ""
text += "<p>Initial session data <br />" + str(session.items()) + "</p>"
process = Process(target=write_to_session, args=(session,))
process.start()
process.join() # waits for process to end
text += "<p>Session after function call <br />" + str(session.items()) + "</p>"
return text
# This route attempts to return a function value from the process using a
# multiprocessing.Value object. The value assigned within the function is
# replaced by '\x01' before being propagated back to the parent scope.
@app.route('/value', methods=['GET'])
def value():
from ctypes import c_wchar_p
def assign_string(cstring):
cstring.value = "inside"
cstring = Value(c_wchar_p, "Hello initial!")
text = "<p>Initial Value:<br />" + str(cstring.value) + "</p>"
process = Process(target=assign_string, args=(cstring,))
process.start()
process.join()
text += "<p>Value after function call:<br />" + str(cstring.value) + "</p>"
return text
if __name__ == '__main__':
app.run(host='localhost', port=5000, debug=True, use_reloader=True)
</code></pre>
<h1>Results</h1>
<p>The expected behavior. The Session is modified by the function.</p>
<p><code>http://localhost:5000/baseline</code></p>
<blockquote>
<p>Initial session data <br />dict_items([('outside_function', 'outside')])</p><p>Session after function call <br />dict_items([('outside_function', 'outside'), ('inside_function', 'inside')])</p>
</blockquote>
<hr />
<p>Writing to the Session from the child process: the key + value inserted by the function are not propagated to parent scope.</p>
<p><code>http://localhost:5000/multi</code></p>
<blockquote>
<p>Initial session data <br />dict_items([('outside_function', 'outside')])</p><p>Session after function call <br />dict_items([('outside_function', 'outside')])</p>
</blockquote>
<hr />
<p>Passing a string via a multiprocessing.Value object: the assigned string is replaced by a boolean true: '\x01'</p>
<p><code>http://localhost:5000/value</code></p>
<blockquote>
<p>Initial Value:<br />Hello initial!</p><p>Value after function call:<br /></p>
</blockquote>
<h1>Anaconda Environment</h1>
<pre><code>name: flask
channels:
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- bzip2=1.0.8=h4bc722e_7
- ca-certificates=2024.8.30=hbcca054_0
- cachelib=0.13.0=pyhd8ed1ab_0
- click=8.1.7=unix_pyh707e725_0
- flask=2.2.0=pyhd8ed1ab_0
- flask-session=0.5.0=pyhd8ed1ab_0
- importlib-metadata=8.4.0=pyha770c72_0
- itsdangerous=2.2.0=pyhd8ed1ab_0
- jinja2=3.1.4=pyhd8ed1ab_0
- ld_impl_linux-64=2.40=hf3520f5_7
- libexpat=2.6.3=h5888daf_0
- libffi=3.4.2=h7f98852_5
- libgcc=14.1.0=h77fa898_1
- libgcc-ng=14.1.0=h69a702a_1
- libgomp=14.1.0=h77fa898_1
- libnsl=2.0.1=hd590300_0
- libsqlite=3.46.1=hadc24fc_0
- libuuid=2.38.1=h0b41bf4_0
- libxcrypt=4.4.36=hd590300_1
- libzlib=1.3.1=h4ab18f5_1
- markupsafe=2.1.5=py311h9ecbd09_1
- ncurses=6.5=he02047a_1
- openssl=3.3.2=hb9d3cd8_0
- pip=24.2=pyh8b19718_1
- python=3.11.0=he550d4f_1_cpython
- python_abi=3.11=5_cp311
- readline=8.2=h8228510_1
- setuptools=73.0.1=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- tzdata=2024a=h8827d51_1
- werkzeug=2.2.2=pyhd8ed1ab_0
- wheel=0.44.0=pyhd8ed1ab_0
- xz=5.2.6=h166bdaf_0
- zipp=3.20.1=pyhd8ed1ab_0
prefix: /home/rh/.local/share/mambaforge-pypy3/envs/flask
</code></pre>
<p>What am I missing here? What is the proper way to</p>
<ol>
<li>Write to the Session from a separate process?</li>
<li>Return a string from a separate process?</li>
</ol>
|
<python><flask><session><multiprocess>
|
2024-09-09 19:46:42
| 1
| 309
|
RedHand
|
78,966,976
| 3,173,062
|
How to fix Stripe webhook integration signature failure error with aws api gateway, lambda using python?
|
<p>I'm integrating a Stripe webhook with my AWS Lambda function, written in Python, using API Gateway. I'm getting a signature verification error. I have tested the code locally using a Flask server, where it works perfectly fine. I also checked the CloudWatch logs to compare the types of the data for the different properties. Any suggestions on how I can fix the error?</p>
<pre class="lang-py prettyprint-override"><code>def lambda_handler(event, context):
print("!!!Raw event:", json.dumps(event))
print("!!! event.get('isBase64Encoded'): ", event.get('isBase64Encoded'))
payload = base64.b64decode(event['body']).decode('utf-8') if event.get('isBase64Encoded') else event['body']
sig_header = event['headers'].get('Stripe-Signature') or event['headers'].get('stripe-signature')
print("!!!STRIPE_ENDPOINT_SECRET:", STRIPE_ENDPOINT_SECRET)
print("!!!Payload Type:", type(payload))
print("!!!Payload:", payload) # Log the payload
print("!!!Signature Header:", sig_header) # Log the signature header
try:
stripe_event = stripe.Webhook.construct_event(payload, sig_header, STRIPE_ENDPOINT_SECRET)
print(f"Successfully processed Stripe webhook: {stripe_event['type']}")
except ValueError as e:
print("ValueError:", str(e)) # Log the error
return {
'statusCode': 400,
'body': json.dumps({'error': 'Invalid payload'})
}
except stripe.error.SignatureVerificationError as e:
print("SignatureVerificationError:", str(e)) # Log the error
return {
'statusCode': 400,
'body': json.dumps({'error': 'Invalid signature'})
}
</code></pre>
<p>related code in flask,</p>
<pre><code>@app.route('/webhook', methods=['POST'])
def webhook():
    payload = request.data.decode("utf-8")
    # print(payload)
    event = {
        "body": payload,
        # "headers": request.headers.decode("utf-8"),
        "headers": {key: value for key, value in request.headers.items()},
        "isBase64Encoded": False
    }
    # return {"status": 200}
    return lambda_handler(event, None)
</code></pre>
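<p>For reference, Stripe's documented <code>v1</code> scheme is an HMAC-SHA256 over <code>"{timestamp}.{raw_body}"</code>, so any byte-level change to the body between Stripe and <code>construct_event</code> (re-encoding, a wrong <code>isBase64Encoded</code> branch, an API Gateway transformation) produces a different digest. A stdlib sketch of the idea (secret and payload are hypothetical):</p>

```python
import hashlib
import hmac
import time

secret = "whsec_test"           # hypothetical endpoint secret
payload = '{"id": "evt_123"}'   # the raw request body, byte-for-byte
ts = int(time.time())

# Stripe's v1 signature: HMAC-SHA256 over "{timestamp}.{raw_body}"
signed_payload = f"{ts}.{payload}".encode()
expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()

# even a single extra byte in the body yields a completely different digest,
# which is what an "invalid signature" failure usually indicates
tampered = hmac.new(secret.encode(), f"{ts}.{payload} ".encode(), hashlib.sha256).hexdigest()
print(expected != tampered)
```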
|
<python><amazon-web-services><aws-lambda><stripe-payments><aws-api-gateway>
|
2024-09-09 19:38:58
| 0
| 599
|
forhadmethun
|
78,966,924
| 4,541,104
|
How do I check if a listed Gio schema is good? GLib-GIO-ERROR attempting to create schema ... without a path
|
<p>I would like to list the schemas for the purpose of making a setting search tool.</p>
<p>However, some schemas make <code>Gio.Settings.new</code> cause a core dump.</p>
<p>This script requires Linux and the "python3-gi" package (such as via <code>sudo apt install python3-gi</code>; recent pip versions won't let you install it via pip if it is a system-managed package, so the distro's package is recommended).</p>
<p>I am using Linux Mint 22 (based on Ubuntu 24.04 Noble Numbat, which is based on Debian trixie/sid), using the Cinnamon desktop environment. In Python, <code>import gi; gi.__version__</code> says '3.48.2'.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import sys
import gi
gi.require_version('Gio', '2.0')
from gi.repository import Gio
def main():
# Get the default schema source
schema_dir = Gio.SettingsSchemaSource.get_default()
print(f"Default schema source: {schema_dir}")
schemas = schema_dir.list_schemas(False)
value = None
for schema_branches in schemas:
for schema_branch in schema_branches:
print(f"\nSchema: {schema_branch}")
schema = schema_dir.lookup(schema_branch, True)
if not schema:
continue
keys = schema.list_keys()
if not keys:
continue
settings = Gio.Settings.new(schema_branch)
for key in keys:
value = settings.get_value(key)
print(f" Key: {schema_branch}.{key} = {value}")
# print(f" Key: {schema_branch}.{key}")
return 0
if __name__ == '__main__':
sys.exit(main())
</code></pre>
<p>Running the program causes:</p>
<pre><code>(process:1117725): GLib-GIO-ERROR **: 13:24:27.172: attempting to create schema 'org.gnome.settings-daemon.peripherals.wacom.stylus.deprecated' without a path
Trace/breakpoint trap (core dumped)
</code></pre>
<p>Related code in glib: <a href="https://github.com/bratsche/glib/blob/abfef39da9a11f59051dfa23a50bc374c0b8ad6e/gio/gsettings.c#L506" rel="nofollow noreferrer">https://github.com/bratsche/glib/blob/abfef39da9a11f59051dfa23a50bc374c0b8ad6e/gio/gsettings.c#L506</a></p>
<p>If I skip the <code>Gio.Settings.new</code> operation when the <code>schema_name</code> contains "deprecated", that isn't enough. Various other names also cause the error.</p>
<p>If I list the names and not the values, then I am able to comment out <code>settings = Gio.Settings.new(schema_branch)</code> and prevent the crash, but I want to be able to get the values and understand how to avoid the crash. Normally if I want values I wouldn't get them all at once, so this program is designed to reproduce the crash. However, this could happen even with a smaller program that only lists certain settings.</p>
<p>How do I detect which names are invalid and shouldn't be used for <code>Gio.Settings.new</code>?</p>
|
<python><gnome><gio>
|
2024-09-09 19:18:32
| 1
| 1,156
|
Poikilos
|
78,966,883
| 2,071,807
|
Assert that unittest Mock has these calls and no others
|
<p><code>Mock.assert_has_calls</code> asserts that the specified calls exist, but not that these were the only calls:</p>
<pre class="lang-py prettyprint-override"><code>from unittest.mock import Mock, call
mock = Mock()
mock("foo")
mock("bar")
mock("baz")
mock.assert_has_calls([call("foo"), call("bar")]) # this passes the assertion
</code></pre>
<p>How can I assert that <code>mock</code> has <em>only</em> been called with <code>foo</code> and <code>bar</code> and nothing else?</p>
<p>I naively thought I could use a set like this, but <code>call</code> is not hashable:</p>
<pre class="lang-py prettyprint-override"><code>assert set(mock.mock_calls) == {call("foo"), call("bar")}
</code></pre>
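<p>For reference, here is a minimal runnable sketch of the direction I'm leaning toward: comparing <code>mock.mock_calls</code> as a list instead of a set. It asserts exactly these calls and no others, but it is order-sensitive, which may or may not be acceptable:</p>

```python
from unittest.mock import Mock, call

mock = Mock()
mock("foo")
mock("bar")

# Compare the full call list: this asserts that these calls happened
# and no others, but unlike a set comparison it also checks the order.
assert mock.mock_calls == [call("foo"), call("bar")]
```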
|
<python><python-unittest><python-unittest.mock>
|
2024-09-09 19:02:10
| 2
| 79,775
|
LondonRob
|
78,966,835
| 219,153
|
How to clear contours and retain pan and zoom with Matplotlib?
|
<p>This snippet:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt

plt.get_current_fig_manager().full_screen_toggle()
for img in imgs:
plt.clf()
plt.imshow(img, origin='upper', interpolation='None', aspect='equal', cmap='gray')
plt.gca().contour(img, np.arange(0, 255, 8), colors='r')
while not plt.waitforbuttonpress():
pass
</code></pre>
<p>plots consecutive images from <code>imgs</code> list, each with their own contours, but it doesn't preserve pan and zoom. When I comment out <code>plt.clf()</code>, pan and zoom are preserved, but all previous contours will be displayed, instead of just the current one. How to preserve pan and zoom and have only contours of the current image displayed?</p>
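<p>For context, the direction I've been exploring (a sketch only, not verified across Matplotlib versions) is removing just the contour artists instead of calling <code>plt.clf()</code>, so the Axes and its view limits survive:</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this sketch runs anywhere
import matplotlib.pyplot as plt

def clear_contours(cs):
    """Remove a ContourSet from its Axes without clearing the Axes."""
    try:
        cs.remove()  # Matplotlib >= 3.8: the ContourSet is a single Artist
    except (AttributeError, NotImplementedError):
        for coll in cs.collections:  # older Matplotlib versions
            coll.remove()

img = np.random.rand(32, 32) * 255
ax = plt.gca()
ax.imshow(img, origin="upper", interpolation="None", cmap="gray")
cs = ax.contour(img, np.arange(0, 255, 8), colors="r")
clear_contours(cs)  # the Axes (and hence pan/zoom state) stays intact
```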
|
<python><matplotlib>
|
2024-09-09 18:44:22
| 1
| 8,585
|
Paul Jurczak
|
78,966,793
| 702,948
|
python: how to import a module that references another file in the same directory?
|
<p>Here's my folder structure</p>
<pre><code>/
├── package
│ ├── __init__.py
│ ├── mod1.py
│ └── mod2.py
</code></pre>
<p>the contents of <code>mod1.py</code>:</p>
<pre><code>class MyClass(object):
pass
</code></pre>
<p>and <code>mod2.py</code>:</p>
<pre><code>from mod1 import MyClass
</code></pre>
<p>This is a simple test of how modules work in python and I'm trying to figure out how to set up my environment so that <code>mod2</code> can properly reference <code>mod1</code>. When I'm inside <code>package/</code>, I can import either module without issue.</p>
<pre><code>[package/] $ python
>>> import mod1
>>> import mod2
>>>
</code></pre>
<p>However, when inside <code>/</code>, I cannot import <code>mod2</code>:</p>
<pre><code>[/] $ python
>>> from package import mod1
>>> from package import mod2
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/package/mod2.py", line 1, in <module>
from mod1 import MyClass
ModuleNotFoundError: No module named 'mod1'
>>>
</code></pre>
<p>I believe the issue is that Python does not treat a module's own directory as part of the import path, so it can't find <code>mod1</code>. I tried adding <code>.</code> to the <code>PYTHONPATH</code> environment variable, but that did not solve the issue.</p>
<p>Please note that this is a simplified example. The code I am trying to work with is 3rd party code and my goal is to not have to keep all python files in the root directory of my repo. Ideally I'm looking for a way to configure python such that any file can reference sibling files without the need to edit the code.</p>
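<p>For what it's worth, here is a runnable sketch of the one code-level fix I'm aware of, an explicit relative import in <code>mod2.py</code>, although my goal is to avoid editing the files. The temp directory below is just a stand-in for my real repo layout:</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the example package in a temporary directory (hypothetical location).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "package")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "mod1.py"), "w") as f:
    f.write("class MyClass(object):\n    pass\n")
with open(os.path.join(pkg, "mod2.py"), "w") as f:
    # Explicit relative import: "mod1 from the same package as me".
    f.write("from .mod1 import MyClass\n")

importlib.invalidate_caches()
sys.path.insert(0, root)  # stand-in for running python from /
from package import mod2  # works now, unlike the absolute 'from mod1 import ...'

print(mod2.MyClass)
```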
|
<python><import><module>
|
2024-09-09 18:28:19
| 0
| 21,657
|
ewok
|
78,966,709
| 21,692,833
|
How to Send Email via Python's smtplib with Gmail Signature Automatically Included?
|
<p>I am trying to send an email through Python's <code>smtplib</code> using my Gmail account, and I have already configured a signature in my Gmail settings. However, when the email is sent from my Python script, the signature that I registered in Gmail does not appear in the sent email.</p>
<p>Here’s the Python script I’m using:</p>
<pre class="lang-py prettyprint-override"><code>import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
# Gmail credentials
gmail_user = "myemail@gmail.com"
gmail_password = "myapppassword"
# Email subject and body
subject = "Awesome Email"
body = """Hi There,
I hope this message finds you well.
I'm sending you a bulk email with my Gmail signature
Best regards,
Dustin
"""
# Function to send emails
def send_emails(recipients):
try:
server = smtplib.SMTP("smtp.gmail.com", 587)
server.starttls()
server.login(gmail_user, gmail_password)
for recipient in recipients:
msg = MIMEMultipart()
msg["From"] = gmail_user
msg["To"] = recipient
msg["Subject"] = subject
msg.attach(MIMEText(body, "plain"))
server.sendmail(gmail_user, recipient, msg.as_string())
server.quit()
print("Emails sent successfully!")
except Exception as e:
print(f"Error: {e}")
# List of recipient emails
recipients = ["recipient1@example.com", "recipient2@example.com"]
# Trigger the email sending
send_emails(recipients)
</code></pre>
<h3>Problem:</h3>
<p>Even though I have a signature configured in my Gmail account (which works fine when I send emails through the Gmail web interface), the emails sent through my Python script using <code>smtplib</code> do not include my Gmail signature.</p>
<h3>Question:</h3>
<p>How can I send emails from Python (via <code>smtplib</code>) and ensure that my Gmail signature is automatically appended to the outgoing emails? Do I need to use a different approach, or is there a way to achieve this using <code>smtplib</code>?</p>
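<p>As far as I understand, SMTP itself never applies the signature configured in the Gmail web interface (that seems to be a client-side compose feature), so my current fallback is appending a signature to the body myself. A sketch, with a made-up signature string:</p>

```python
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Hypothetical signature text; the Gmail-configured one is NOT added by SMTP.
signature = "\n--\nBest regards,\nDustin"

msg = MIMEMultipart()
msg["From"] = "myemail@gmail.com"
msg["To"] = "recipient1@example.com"
msg["Subject"] = "Awesome Email"
body = "Hi There,\n\nI hope this message finds you well."
# Append the signature manually before attaching the body part.
msg.attach(MIMEText(body + signature, "plain"))

print(msg.as_string())
```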
|
<python><email><gmail>
|
2024-09-09 17:58:07
| 0
| 494
|
Dustin Lee
|
78,966,499
| 18,769,241
|
Determine the object distance from the ground given head pitch angle?
|
<p>I want to determine the distance from a detected object (as spotted by a robot/camera) to the ground through the head pitch angle and the distance between the object and the robot/camera (which is a known fixed distance).</p>
<p>I want to do that because I want the robot to grab the object at the requested specific height using both arms used straight and perpendicular to the robot's TORSO.
The head pitch angle range is as follows:</p>
<p><a href="https://i.sstatic.net/mLVtUbeD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLVtUbeD.png" alt="HeadPitch Angle" /></a></p>
<p>I consider the object "grabbable" only when the head pitch angle is positive.
Obviously, the object-ground distance equals the camera/robot-ground distance when the head pitch angle is 0°. The case I want to consider is a positive head pitch angle at which the robot is looking at the detected object, as shown in the following illustration:</p>
<p><a href="https://i.sstatic.net/ZJHDbTmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZJHDbTmS.png" alt="head pitch angle" /></a></p>
<p>My attempt to solve this in Python:</p>
<pre><code># DISTANCE_TORSO_OBJECT is the (horizontal) distance from the robot to the object
distance_object_ground = self.DISTANCE_TORSO_OBJECT * math.tan(
    head_pitch_angle)
</code></pre>
<p>is this an overall good approach to grabbing the object, if yes, is the distance from the object to the ground calculation accurate?</p>
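<p>To make my reasoning concrete, here is the arithmetic I'm using, assuming the pitch is in radians and the camera height above the ground is known (both are assumptions on my part):</p>

```python
import math

def object_height_above_ground(camera_height, horizontal_dist, pitch_rad):
    """Height of the sighted object above the ground.

    camera_height: camera-to-ground distance (same units as horizontal_dist)
    horizontal_dist: DISTANCE_TORSO_OBJECT, measured horizontally
    pitch_rad: positive head pitch (looking down), in radians
    """
    vertical_drop = horizontal_dist * math.tan(pitch_rad)  # drop below the camera
    return camera_height - vertical_drop

# Example: camera 1.2 units up, object 0.5 units away, pitch of 30 degrees.
h = object_height_above_ground(1.2, 0.5, math.radians(30.0))
```

<p>At a pitch of 0° the drop term vanishes and the function returns the camera height, which matches the degenerate case above.</p>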
|
<python><opencv><image-processing><computer-vision><robotics>
|
2024-09-09 16:54:07
| 1
| 571
|
Sam
|
78,966,321
| 1,572,469
|
Camera is not detecting chessboard corners using Python and OpenCV
|
<p>My code looks like this:</p>
<pre><code>import cv2

def find_corners(images, camera_name):
imgpoints = []
for idx, img in enumerate(images):
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.001)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
        ret, corners = cv2.findChessboardCorners(gray, grid_size, flags=cv2.CALIB_CB_ADAPTIVE_THRESH + cv2.CALIB_CB_FAST_CHECK + cv2.CALIB_CB_NORMALIZE_IMAGE)  # pass flags by keyword; the third positional parameter is 'corners'
if ret:
objpoints.append(objp)
# Refine the corner locations to subpixel accuracy
corners2 = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
imgpoints.append(corners2)
# Draw and display the corners
cv2.drawChessboardCorners(img, grid_size, corners2, ret)
cv2.imshow(f'{camera_name} Image {idx + 1}', img)
# Wait for a keypress to move to the next image
print(f"Displaying {camera_name} Image {idx + 1}. Press any key to continue...")
cv2.waitKey(0) # Wait until a key is pressed
cv2.destroyAllWindows()
else:
print(f"Chessboard not found in {camera_name} Image {idx + 1}")
return imgpoints
</code></pre>
<p>Grid size:</p>
<pre><code>grid_size = (14, 14) # Inner corners of the chessboard
</code></pre>
<p>I have two cameras. One (camera3) detects corners just fine and returns points; the camera2 corner detection does not find any points. Both images were generated using SolidWorks and are pretty much identical except for perspective.</p>
<p>This image calibrated fine:
<a href="https://i.sstatic.net/3RtrHRlD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3RtrHRlD.png" alt="Good image" /></a></p>
<p>This image does not:
<a href="https://i.sstatic.net/CbyhQXbr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbyhQXbr.png" alt="bad image" /></a></p>
<p>The order that I try them does not matter. I have re-rendered and resaved the "bad" image and it still does not calibrate.</p>
|
<python><opencv><computer-vision>
|
2024-09-09 16:00:47
| 0
| 1,952
|
Eric Snyder
|
78,966,219
| 16,815,358
|
Smoothing out the sharp corners and jumps of a piecewise regression load-displacement curve in python
|
<p>I am having a stubborn problem with smoothing out some sharp corners that the simulation software does not really like.</p>
<p>I have the following displacement/ load/ damage vs step/time:</p>
<p><a href="https://i.sstatic.net/pzXKjLMf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzXKjLMf.png" alt="Data over time" /></a></p>
<p>The source data can be found <a href="https://pastebin.com/AdAzV1KM" rel="nofollow noreferrer">here</a>.</p>
<p>Here's the code for importing the data and plotting the above plot:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("ExampleforStack.txt") # read data
x = df["Displacement"] # get displacement
y = df["Load"] # get load
d = df["Damage"] # get damage
# plot stuff
plt.figure()
plt.subplot(3,1,1)
plt.plot(x)
plt.grid()
plt.ylabel("Displacement")
plt.subplot(3,1,2)
plt.plot(y)
plt.grid()
plt.ylabel("Load")
plt.subplot(3,1,3)
plt.plot(d)
plt.grid()
plt.ylabel("Damage")
plt.xlabel("Step")
plt.gcf().align_ylabels()
plt.tight_layout()
</code></pre>
<p>When plotted against displacement, the load and damage look something like this:</p>
<p><a href="https://i.sstatic.net/oc0O43A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oc0O43A4.png" alt="Load, Damage vs Displacement" /></a></p>
<p>The breaking points in the above plots are:</p>
<pre><code>print(bps)
# [0.005806195310298627, 0.02801208361344569]
</code></pre>
<p>My aim would be to smooth the data around the vertical black lines for both the load and the damage.</p>
<p>So far, I tried lowess from <code>statsmodels.api.nonparametric</code>, with the results looking very suboptimal:</p>
<p><a href="https://i.sstatic.net/MUo1vxpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MUo1vxpB.png" alt="LOWESS at frac 0.03" /></a></p>
<p>The above picture is with a frac of 0.03, changing the frac of course changes a lot, but sadly not in a desirable way either.</p>
<p>Other things I have tried: Gaussian regression models, Singular Spectrum Analysis, Savitzky-Golay filters, cubic splines, etc.</p>
<p>The only thing that I have not checked so far is curve fitting, which I might check tomorrow.</p>
<p>Background information:</p>
<ul>
<li>Displacement is the result of DIC analysis</li>
<li>Load is measured by the testing machine</li>
<li>Damage is a calculated value from displacement, load and the stiffness of the material in the elastic region.</li>
</ul>
<p>Qualitatively, here's what I would like the end result to look like:</p>
<p><a href="https://i.sstatic.net/W3m6HcwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/W3m6HcwX.png" alt="Smoothed" /></a></p>
<p>An additional requirement is that the derivative of the smoothed data should also be smooth, not jumpy.</p>
<p>I would appreciate any hints to help me solve this task! :D</p>
<hr />
<p>As suggested by Martin Brown, I did the following to smooth out the curves:</p>
<pre><code>def boxCar(data, winSize):
kernel = np.ones(winSize) / winSize # generate the kernel
dataSmoothed = convolve(data, kernel, mode='same') # convolve
# the next two lines is to correct the smoothing on the start and end of the arrays
dataSmoothed[0:winSize] = data[0:winSize] # assign first elements to original data
dataSmoothed[-winSize:] = data[-winSize:] # assign last elements to original data
return dataSmoothed
</code></pre>
<p>The <code>convolve</code> is from <code>scipy.signal</code>.</p>
<hr />
<p>Another approach with the gaussian would look something like this:</p>
<pre><code>def gaussian(data, sigma):
dataSmoothed = gaussian_filter1d(data, sigma=sigma)
dataSmoothed[0:50] = data[0:50] # assign first elements to original data
dataSmoothed[-50:] = data[-50:] # assign last elements to original data
return dataSmoothed
</code></pre>
<p>The Gaussian seems to work a bit better than boxCar. <code>gaussian_filter1d</code> is from <code>scipy.ndimage</code></p>
|
<python><pandas><matplotlib><curve-fitting><smoothing>
|
2024-09-09 15:36:16
| 2
| 2,784
|
Tino D
|
78,966,184
| 14,720,380
|
How can I inverse a slice of an array?
|
<p>I want to do something along the lines of...</p>
<pre><code>import numpy as np
arr = np.linspace(0, 10, 100)
s = slice(1, 10)
print(arr[s])
print(arr[~s])
</code></pre>
<p>How could I apply the "not" operator to a slice, so that in this case <code>arr[~s]</code> would be the concatenation of <code>arr[0]</code> and <code>arr[10:]</code>?</p>
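<p>To show the behaviour I'm after, this boolean-mask workaround produces the result I want, though I'm hoping for something more direct on the slice itself:</p>

```python
import numpy as np

arr = np.linspace(0, 10, 100)
s = slice(1, 10)

# Invert the slice with a boolean mask: True everywhere except inside the slice.
mask = np.ones(arr.shape[0], dtype=bool)
mask[s] = False

inverse = arr[mask]  # concatenation of arr[0] and arr[10:]
```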
|
<python><numpy><slice>
|
2024-09-09 15:26:08
| 1
| 6,623
|
Tom McLean
|
78,966,115
| 3,156,085
|
How to (correctly) use `ctypes.get_errno()`?
|
<p>I'm trying to test some binary library with <code>ctypes</code> and some of my tests involve <code>errno</code>.</p>
<p>I'm therefore trying to retrieve it to check the error cases handling but when trying to use <a href="https://docs.python.org/3/library/ctypes.html#ctypes.get_errno" rel="nofollow noreferrer"><code>ctypes.get_errno()</code></a> I weirdly get <code>0</code> as errno ("Success") which isn't what I was expecting.</p>
<p>Why does this occur? Is <code>ctypes.get_errno()</code> actually reliable?</p>
<ul>
<li><code>test.py</code>:</li>
</ul>
<pre><code>#!/usr/bin/env python3
import os
import ctypes
import errno
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.write.restype = ctypes.c_ssize_t
libc.write.argtypes = ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t
TMP_FILE = "/tmp/foo"
def main():
fd: int
errno: int = 0
fd = os.open(TMP_FILE, os.O_RDONLY | os.O_CREAT)
if fd == -1:
errno = ctypes.get_errno()
        print(os.strerror(errno))
if (not errno and libc.write(fd, "foo", 3) == -1):
errno = ctypes.get_errno()
print(f"ERRNO: {errno}")
print(os.strerror(errno))
os.close(fd);
os.remove(TMP_FILE)
if errno:
raise OSError(errno, os.strerror(errno))
if __name__ == "__main__":
main()
</code></pre>
<ul>
<li>output:</li>
</ul>
<pre><code>$ ./test.py
ERRNO: 0
Success
</code></pre>
<hr />
<p><strong>NB:</strong> I already have a workaround from <a href="https://stackoverflow.com/a/661303/3156085">an answer under an other post</a> (see MRE below) but I'd like to understand what's going on with <code>ctypes.get_errno()</code>.</p>
<ul>
<li><code>test_with_workaround.py</code>:</li>
</ul>
<pre><code>#!/usr/bin/env python3
import os
import ctypes
libc = ctypes.cdll.LoadLibrary("libc.so.6")
libc.write.restype = ctypes.c_ssize_t
libc.write.argtypes = ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t
TMP_FILE = "/tmp/foo"
_get_errno_loc = libc.__errno_location
_get_errno_loc.restype = ctypes.POINTER(ctypes.c_int)
def get_errno() -> int:
return _get_errno_loc()[0]
def main():
fd: int
errno: int = 0
fd = os.open(TMP_FILE, os.O_RDONLY | os.O_CREAT)
if fd == -1:
errno = get_errno()
        print(os.strerror(errno))
if (not errno and libc.write(fd, "foo", 3) == -1):
errno = get_errno()
print(f"ERRNO: {errno}")
print(os.strerror(errno))
os.close(fd);
os.remove(TMP_FILE)
if errno:
raise OSError(errno, os.strerror(errno))
if __name__ == "__main__":
main()
</code></pre>
<ul>
<li>output:</li>
</ul>
<pre><code>$ ./test_with_workaround.py
ERRNO: 9
Bad file descriptor
Traceback (most recent call last):
File "/mnt/nfs/homes/vmonteco/Code/MREs/MRE_python_fdopen_cause_errno/simple_python_test/./test_with_workaround.py", line 41, in <module>
main()
File "/mnt/nfs/homes/vmonteco/Code/MREs/MRE_python_fdopen_cause_errno/simple_python_test/./test_with_workaround.py", line 37, in main
raise OSError(errno, os.strerror(errno))
OSError: [Errno 9] Bad file descriptor
</code></pre>
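<p>One thing I noticed while experimenting, and would like confirmed: loading libc with <code>use_errno=True</code> seems to change the behaviour of <code>ctypes.get_errno()</code>. A minimal sketch of what I mean:</p>

```python
import ctypes
import errno
import os

# Load libc with use_errno=True so ctypes swaps its private copy of errno
# in and out around every foreign function call.
libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.write.restype = ctypes.c_ssize_t
libc.write.argtypes = (ctypes.c_int, ctypes.c_void_p, ctypes.c_size_t)

ctypes.set_errno(0)
res = libc.write(-1, b"foo", 3)  # invalid fd on purpose
err = ctypes.get_errno()
print(res, err, os.strerror(err))  # expecting EBADF here
```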
|
<python><ctypes><errno>
|
2024-09-09 15:07:31
| 1
| 15,848
|
vmonteco
|
78,966,048
| 12,466,687
|
How to change background color of st.text_input() in streamlit?
|
<p>I am trying to <strong>change the background color</strong> of <code>st.text_input()</code> box but unable to do so.</p>
<p>I am not from a web/app development background and have no HTML/CSS skills, so please excuse my naive or poor understanding of this field.</p>
<p>So far I have tried:
<a href="https://discuss.streamlit.io/t/how-to-set-the-background-color-and-text-color-of-st-header-st-write-etc-and-let-the-text-be-showed-at-the-left-side-of-input-and-select-box/11826" rel="nofollow noreferrer">using this link</a></p>
<pre><code>test_color = st.write('test color')
def text_input_color(url):
st.markdown(
f'<p style="background-color:#0066cc;color:#33ff33;">{url}</p>', unsafe_allow_html=True
)
text_input_color("test_color")
</code></pre>
<p>Above code works on <code>st.write()</code> but <strong>not on <code>st.text_input()</code></strong></p>
<p>I have also come across <a href="https://discuss.streamlit.io/t/custom-the-form-background-color/52882" rel="nofollow noreferrer">this link</a>, so following that approach I modified the CSS to target <code>Textinput</code> instead of <code>stForm</code>, but this didn't work either, and I am not sure what <code>id</code> to use for the text input.</p>
<pre><code>css="""
<style>
[data-testid="stTextinput"] {
background: LightBlue;
}
</style>
"""
</code></pre>
<p>Below is the inspect element screenshot of the webapp:</p>
<p><a href="https://i.sstatic.net/CbNDC5lr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CbNDC5lr.png" alt="text_input box image" /></a></p>
|
<python><html><css><streamlit>
|
2024-09-09 14:49:34
| 1
| 2,357
|
ViSa
|
78,965,913
| 11,154,841
|
How do I sort "SELECT INTO" queries that are built on top of each other by their "INTO" / "FROM" table links?
|
<p>I need to migrate a dozen MS Access databases with 500+ queries to SSIS and therefore, I changed the code to TSQL, see <a href="https://stackoverflow.com/questions/78942252/how-can-i-get-tsql-from-easy-ms-access-sql-with-little-to-no-handiwork">How can I get TSQL from easy MS Access SQL with little to no handiwork?</a>.</p>
<p>The queries in these projects are built on top of each other with materialised tables: you <code>SELECT INTO</code> a table in MS Access so that you can fetch it in the next <code>FROM</code> block. There have been remarks that this should not be needed, since MS Access does allow queries across scattered servers, and that the easy queries run here should not need materialisation for performance. Yet even if that is true, I cannot change the projects afterwards; they are as they are, and perhaps I am not the only one running into such a setting. There may also be other good reasons for it: there are forms that let you filter some column and afterwards hand out the filtered tables as a download, including all the subquery steps that were needed. So the setting may still make sense even if it is not about performance or scattered servers.</p>
<p>How can I sort the TSQL queries one-dimensionally in an Excel file by their ancestry levels?</p>
<p>I need blocks that put together the query families inside a database, and on top of that, I would like to know the sort order for the queries inside these families.</p>
<p>The aim is to see at one sight how I should go on in SSIS to mirror the TSQL workflow. This is just a puzzle of dependencies, and it likely can be run on anything that has <code>INTO</code> and <code>FROM</code> in its SQL, with any tools and languages you can think of. I still flag this with MS Access, MS Excel, TSQL and Python, to narrow it down to my setting.</p>
<p>From the MS Excel input file that you can get from the link above, this can be done with just three columns as I know from self-answering the question, and getting the <code>INTO</code> block from the TSQL is such an easy Regex that you can quickly calculate it yourself without the link above:</p>
<ul>
<li>Datenbank = MS Access database</li>
<li>Into = The "INTO" table cut out with Regex (see the link above)</li>
<li>TSQL = TSQL query that was made from the MS Access SQL</li>
</ul>
<p>Example rows in the Excel file (first line for the column names):</p>
<blockquote>
<p>Datenbank, INTO, TSQL<br />
database1.accdb, tbl_INTO1, select 1 as test INTO tbl_INTO1<br />
database1.accdb, tbl_INTO2, select test INTO tbl_INTO2 FROM tbl_INTO1<br />
database1.accdb, tbl_INTO3, select test INTO tbl_INTO3 FROM tbl_INTO2<br />
database1.accdb, tbl_INTO4, select test INTO tbl_INTO4 FROM tbl_INTO1<br />
database1.accdb, tbl_INTO8, select 1 as test2 INTO tbl_INTO8<br />
database1.accdb, tbl_INTO9, select test2 INTO tbl_INTO9 FROM tbl_INTO8</p>
</blockquote>
<p>Clearly, tbl_INTO2 follows tbl_INTO1 and should therefore get a higher level in the tree of dependencies. But also tbl_INTO4 should be higher than tbl_INTO1. And tbl_INTO8 and tbl_INTO9 have nothing to do with the rest, they should be in their own family block.</p>
<p>This is about 500+ filled TSQL cells spread across a dozen MS access databases in that Excel file. How do I sort "SELECT INTO" queries that are built on top of each other by their "INTO" / "FROM" table links?</p>
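<p>To illustrate the dependency puzzle with the sample rows above, here is a rough sketch of the direction I'm considering in Python. The <code>FROM</code> regex is deliberately simplistic (joins and subqueries would need more work), and the rows are the made-up examples from above:</p>

```python
import re
from graphlib import TopologicalSorter  # Python 3.9+

rows = [  # (Datenbank, INTO, TSQL) -- the sample rows from above
    ("database1.accdb", "tbl_INTO1", "select 1 as test INTO tbl_INTO1"),
    ("database1.accdb", "tbl_INTO2", "select test INTO tbl_INTO2 FROM tbl_INTO1"),
    ("database1.accdb", "tbl_INTO3", "select test INTO tbl_INTO3 FROM tbl_INTO2"),
    ("database1.accdb", "tbl_INTO4", "select test INTO tbl_INTO4 FROM tbl_INTO1"),
    ("database1.accdb", "tbl_INTO8", "select 1 as test2 INTO tbl_INTO8"),
    ("database1.accdb", "tbl_INTO9", "select test2 INTO tbl_INTO9 FROM tbl_INTO8"),
]

into_tables = {into for _, into, _ in rows}

def from_tables(tsql):
    # Naive: grabs the identifier after each FROM keyword.
    return set(re.findall(r"\bFROM\s+([\w\.\[\]]+)", tsql, flags=re.IGNORECASE))

# Dependency graph: each INTO table depends on the INTO tables it reads FROM.
graph = {into: from_tables(tsql) & into_tables for _, into, tsql in rows}
order = list(TopologicalSorter(graph).static_order())
print(order)  # ancestors come before descendants
```

<p>The "family blocks" would then be the connected components of the same graph, so tbl_INTO8/tbl_INTO9 end up separated from the tbl_INTO1 family.</p>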
|
<python><sql><excel><t-sql><ms-access>
|
2024-09-09 14:17:05
| 1
| 9,916
|
questionto42
|
78,965,901
| 16,374,636
|
How to make ray task async
|
<p>I want to run a function (Ray Task) that may trigger another request afterward.</p>
<p>For example, if I have 10 tasks but only 1 CPU, the system will process one task at a time since each task requires 1 CPU.</p>
<p>However, each of these tasks makes an external API call.</p>
<pre><code>import ray
import requests

@ray.remote
def fetch():
    requests.get(...)
</code></pre>
<p>I would like to make this API request asynchronous and await its completion, so instead of processing one task at a time, I can run all tasks concurrently. If I call fetch 10 times, all 10 API requests should run in parallel, and once the requests are complete, each task should resume sequential processing.</p>
<p>How can I achieve this with Ray?</p>
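<p>Setting Ray aside for a moment, the concurrency pattern I'm after looks like this in plain asyncio, with <code>asyncio.sleep</code> standing in for the API call. What I don't know is how to get this behaviour inside a Ray task:</p>

```python
import asyncio
import time

async def fetch(i):
    await asyncio.sleep(0.1)  # stand-in for the external API request
    return i

async def main():
    start = time.perf_counter()
    # All ten "requests" run concurrently, so the total is ~0.1s, not ~1s.
    results = await asyncio.gather(*(fetch(i) for i in range(10)))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, elapsed)
```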
|
<python><ray>
|
2024-09-09 14:12:32
| 2
| 407
|
zacko
|
78,965,783
| 19,218,671
|
Problem when I try export '.svg' file in Transitions library in python
|
<p>I'm trying a simple example from <a href="https://github.com/pytransitions/transitions?tab=readme-ov-file" rel="nofollow noreferrer">this source</a> to learn the library:</p>
<pre><code>from transitions.extensions import GraphMachine
from functools import partial
class Model:
def clear_state(self, deep=False, force=False):
print("Clearing state ...")
return True
model = Model()
machine = GraphMachine(model=model, states=['A', 'B', 'C'],
transitions=[
{'trigger': 'clear', 'source': 'B', 'dest': 'A', 'conditions': model.clear_state},
{'trigger': 'clear', 'source': 'C', 'dest': 'A',
'conditions': partial(model.clear_state, False, force=True)},
],
initial='A', show_conditions=True)
model.get_graph().draw('my_state_diagram.svg', prog='dot')
</code></pre>
<p>but when I run it, without any error, <code>my_state_diagram.svg</code> contains:</p>
<pre><code>---
State Machine
---
stateDiagram-v2
direction LR
classDef s_default fill:white,color:black
classDef s_inactive fill:white,color:black
classDef s_parallel color:black,fill:white
classDef s_active color:red,fill:darksalmon
classDef s_previous color:blue,fill:azure
state "A" as A
Class A s_active
state "B" as B
Class B s_default
state "C" as C
Class C s_default
B --> A: clear [clear_state]
C --> A: clear [clear_state(False, force=True)]
[*] --> A
</code></pre>
<p>How can I fix this?
And my second question: how can I export a <code>.dot</code> file?</p>
|
<python><svg><graph><dot><pytransitions>
|
2024-09-09 13:46:02
| 1
| 453
|
irmoah80
|
78,965,763
| 23,260,297
|
logging/print statements not being captured when running script with Runas command
|
<p>I am running a Python script as a different user with the <code>runas</code> command, but I cannot see my output/errors anywhere. I am trying to debug my script, but I cannot see any logging info, so it is nearly impossible for me to tell what is happening.</p>
<p>Here is what my script looks like:</p>
<pre><code>import logging
import pyodbc

def main():
logging.basicConfig(filename="C:/temp/out.txt",
filemode='a',
format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
datefmt='%H:%M:%S',
level=logging.DEBUG)
# using pyodbc
# Connection string for SQL Server using Windows Authentication
conn_str = (
r'DRIVER={ODBC Driver 17 for SQL Server};'
r'SERVER=server;'
r'DATABASE=database;'
r'Trusted_Connection=yes;'
)
try:
# Attempt to establish a connection
conn = pyodbc.connect(conn_str)
logging.info("Connection successful!")
# Close connection
conn.close()
except pyodbc.Error as e:
# Handle connection errors
if '28000' in str(e): # SQL Server Login failed
logging.info("Authentication failed: User does not have access to the database.")
else:
logging.info(f"An error occurred: {e}")
if __name__ == "__main__":
main()
</code></pre>
<p>Here is the <code>runas</code> command I use:</p>
<pre><code>runas /user:domain\username "C:/path/to/exe" "C:/temp/file.py"
</code></pre>
<p>Every time I run the command, nothing gets logged to my output file and I am unsure what is happening. Am I missing something obvious here?</p>
<p>I know something is going on because when I run the script manually in my IDE the logging works, and I get an error (since I don't have access to the DB) that I can see in the server error logs. When I run with <code>runas</code> I see no error logs on the server, but also no logging info.</p>
|
<python><windows><pyodbc><python-logging><runas>
|
2024-09-09 13:40:40
| 1
| 2,185
|
iBeMeltin
|
78,965,183
| 6,930,340
|
How to transform polars expression into ColumnNameOrSelector | Sequence[ColumnNameOrSelector]
|
<p>According to the documentation for <a href="https://docs.pola.rs/api/python/stable/reference/lazyframe/api/polars.LazyFrame.unpivot.html" rel="nofollow noreferrer">polars.LazyFrame.unpivot</a>, the <code>on</code> or <code>index</code> argument should be of type</p>
<p><code>ColumnNameOrSelector | Sequence[ColumnNameOrSelector] | None</code>.</p>
<p>Apparently, it is possible to submit a <code>pl.Expr</code>. For instance, I can pass something like <code>pl.all().exclude("col_1", "col_2")</code> and it works as expected.</p>
<p>However, <code>mypy</code> will correctly complain:</p>
<pre><code>Argument "index" to "unpivot" of "LazyFrame" has incompatible type "Expr"; expected "str | _selector_proxy_ | Sequence[str | _selector_proxy_] | None" [arg-type]
</code></pre>
<p>What would be the correct/intended way of transforming a</p>
<p><code>pl.Expr</code></p>
<p>into a</p>
<p><code>ColumnNameOrSelector | Sequence[ColumnNameOrSelector]</code>?</p>
|
<python><python-polars>
|
2024-09-09 11:12:46
| 0
| 5,167
|
Andi
|
78,965,177
| 13,891,321
|
QGIS using selectedLayer.getFeatures() but require str for ComboBox
|
<p>I have written a QGIS plugin to pull features from a layer and then perform various calculations.
The code successfully extracts the full dataset, but I want to populate a ComboBox to be able to select a given row from the data set.</p>
<pre><code> def select_line(self):
"""Load lines from chosen layer and populate LineCB."""
global selectedLayer
selectedLayer = self.dlg.mMapLayerComboBox.currentLayer()
self.dlg.LineCB.clear()
L_names = []
# get the names and add to the list
for x in selectedLayer.getFeatures():
L_names.append(x['Seq'])
self.dlg.LineCB.addItems(L_names) # Populate the ComboBox
self.dlg.LineCB.currentIndexChanged.connect(self.get_coords)
</code></pre>
<p>The column of data labelled 'Seq' can contain integers or '-'.
If 'Seq' contains only integers, I get the following QGIS Python error.</p>
<pre><code> self.dlg.LineCB.addItems(L_names) # Populate the ComboBox
TypeError: index 0 has type 'int' but 'str' is expected
</code></pre>
<p>If I <code>print(L_names)</code> I can see that the values are all read as text if '-' is present in the data, but as integers if only numbers are present.
Is there a way to force <code>L_names</code> to default to <code>str</code>, convert the values, or get the ComboBox to accept <code>int</code>? I can use Name instead of Seq, which dodges the problem since names are always alphanumeric, but it's easier to work with Seq.</p>
<pre><code> Name ID Seq Heading
EQ24322-01001-04 1 - 1.287
EQ24322-01005-08 2 - 1.287
EQ24322-01009-12 3 - 1.287
EQ24322-01013-16 4 - 1.287
EQ24322-01017-20 5 29 181.287
EQ24322-01021-24 6 27 181.287
EQ24322-01025-28 7 25 181.287
</code></pre>
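<p>Setting QGIS aside, the coercion itself is trivial; what I'm unsure about is whether it is the right approach here:</p>

```python
# Mixed 'Seq' values as they come back from the layer: ints or the '-' placeholder.
seq_values = ["-", "-", 29, 27, 25]

# Coerce everything to str so that QComboBox.addItems() accepts the list.
names = [str(v) for v in seq_values]
print(names)
```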
|
<python><combobox><qgis><pyqgis>
|
2024-09-09 11:10:36
| 0
| 303
|
WillH
|
78,965,130
| 188,331
|
Use added tokens in BertTokenizer with a BartForConditionalGeneration model
|
<p>I have a <code>BertTokenizer</code> and I added some tokens to it.</p>
<pre><code>from transformers import BertTokenizer, BartForConditionalGeneration
tokenizer = BertTokenizer.from_pretrained("raptorkwok/wordseg-tokenizer")
print(len(tokenizer)) # 245289
print(tokenizer.vocab_size) # 51271
</code></pre>
<p>The 51,271 vocabs are in <code>vocab.txt</code>, while the added tokens are in <code>added_tokens.json</code>. If I want to make use of these new tokens in a model, e.g. <code>BartForConditionalGeneration</code>, how can I apply these new tokens in the model?</p>
<p>For example, I called the model with the codes:</p>
<pre><code>model = BartForConditionalGeneration.from_pretrained("fnlp/bart-base-chinese")
model.resize_token_embeddings(len(tokenizer))
</code></pre>
<p>The model has a new size of 245,289.</p>
<p>What is my next step in letting the model adopt the new tokens? Should I re-train the model?</p>
|
<python><huggingface-transformers><huggingface-tokenizers>
|
2024-09-09 10:59:05
| 0
| 54,395
|
Raptor
|
78,964,945
| 9,112,151
|
How to use AsyncSession.sync_session or how to have both sync and async session with the same scope of objects, single transaction?
|
<p>I'd like to use <code>AsyncSession.sync_session</code> but the code below fails with an error. The code is a simplified version of the real case.</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy as sa
import uvicorn
from fastapi import FastAPI, Depends
from sqlalchemy import MetaData, select
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine
from sqlalchemy.orm import declarative_base
app = FastAPI()
metadata = MetaData()
Base = declarative_base(metadata=metadata)
engine = create_async_engine(
"postgresql+asyncpg://postgres:postgres@localhost:5433/portal-podrjadchika-local", echo=True
)
SessionLocal = async_sessionmaker(bind=engine, expire_on_commit=False)
class User(Base):
__tablename__ = "users"
id = sa.Column(sa.Integer, autoincrement=True, primary_key=True, index=True)
first_name = sa.Column(sa.String)
last_name = sa.Column(sa.String)
async def get_session() -> AsyncSession:
session = SessionLocal()
try:
yield session
await session.commit()
except Exception as e:
await session.rollback()
raise e
finally:
await session.close()
@app.on_event("startup")
async def on_startup() -> None:
async with engine.begin() as conn:
await conn.run_sync(Base.metadata.drop_all)
await conn.run_sync(Base.metadata.create_all)
@app.post('/users')
async def get_users(session: AsyncSession = Depends(get_session)):
stmt = select(User)
print(session.sync_session.scalars(stmt)) # fails with error sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place? (Background on this error at: https://sqlalche.me/e/20/xd2s)
return "user"
if __name__ == "__main__":
uvicorn.run("so:app", reload=True)
</code></pre>
<p>Traceback:</p>
<pre><code>ERROR: Exception in ASGI application Traceback (most recent call
last): File
"/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 408, in run_asgi
result = await app( # type: ignore[func-returns-value] File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py",
line 84, in __call__
return await self.app(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/fastapi/applications.py",
line 1054, in __call__
await super().__call__(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/applications.py",
line 123, in __call__
await self.middleware_stack(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/middleware/errors.py",
line 186, in __call__
raise exc File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/middleware/errors.py",
line 164, in __call__
await self.app(scope, receive, _send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File
"/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/_exception_handler.py",
line 64, in wrapped_app
raise exc File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/_exception_handler.py",
line 53, in wrapped_app
await app(scope, receive, sender) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/routing.py",
line 756, in __call__
await self.middleware_stack(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/routing.py",
line 776, in app
await route.handle(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/routing.py",
line 297, in handle
await self.app(scope, receive, send) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/routing.py",
line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send) File
"/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/_exception_handler.py",
line 64, in wrapped_app
raise exc File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/_exception_handler.py",
line 53, in wrapped_app
await app(scope, receive, sender) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/starlette/routing.py",
line 72, in app
response = await func(request) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/fastapi/routing.py",
line 278, in app
raw_response = await run_endpoint_function( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/fastapi/routing.py",
line 191, in run_endpoint_function
return await dependant.call(**values) File "/Users/alber.aleksandrov/PycharmProjects/Playground/fstp/so.py", line
50, in get_users
print(session.sync_session.scalars(stmt)) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py",
line 2337, in scalars
return self._execute_internal( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/session.py",
line 2120, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement( File
"/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/orm/context.py",
line 293, in orm_execute_statement
result = conn.execute( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 1412, in execute
return meth( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/sql/elements.py",
line 483, in _execute_on_connection
return connection._execute_clauseelement( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 1635, in _execute_clauseelement
ret = self._execute_context( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 1844, in _execute_context
return self._exec_single_context( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 1984, in _exec_single_context
self._handle_dbapi_exception( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 2342, in _handle_dbapi_exception
raise exc_info[1].with_traceback(exc_info[2]) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/base.py",
line 1965, in _exec_single_context
self.dialect.do_execute( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/engine/default.py",
line 921, in do_execute
cursor.execute(statement, parameters) File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/dialects/postgresql/asyncpg.py",
line 561, in execute
self._adapt_connection.await_( File "/Users/alber.aleksandrov/PycharmProjects/Playground/venv3.10/lib/python3.10/site-packages/sqlalchemy/util/_concurrency_py3k.py",
line 116, in await_only
raise exc.MissingGreenlet( sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was
IO attempted in an unexpected place? (Background on this error at:
https://sqlalche.me/e/20/xd2s)
</code></pre>
<p>How can I fix the error? Or how can I have both a sync and an async session with the same scope of objects and a single transaction? I need it to make Factory Boy <a href="https://factoryboy.readthedocs.io/en/stable/orms.html#sqlalchemy" rel="nofollow noreferrer">https://factoryboy.readthedocs.io/en/stable/orms.html#sqlalchemy</a> use a sync session to instantiate objects.</p>
|
<python><sqlalchemy><fastapi>
|
2024-09-09 10:09:42
| 1
| 1,019
|
Альберт Александров
|
78,964,588
| 2,604,247
|
Should Polars Enum Datatype Result in More Efficient Storage and Memory Footprint of DataFrame?
|
<p>I have this dataframe in Python Polars having dimensions (8442x7), basically, 1206 rows for each day of the week. The day of week appears as a simple string.</p>
<p>Thought I would exploit the <code>pl.Enum</code> to encode the <code>ISO_WEEKDAY</code> column, saving space on disk. See the following code.</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
import calendar, polars as pl
df: pl.DataFrame = pl.read_parquet(source='some_data.parquet') # About 268K
# df has a column called ISO_WEEKDAY whose values are Monday, Tuesday...Sunday
weekdays: pl.Enum = pl.Enum(categories=calendar.day_name)  # Encode all weekdays in the enum
df = df.with_columns(pl.col("ISO_WEEKDAY").cast(dtype=weekdays))
df.write_parquet('new_file.parquet') # ~ Same 268K
</code></pre>
<p>But seems it is not happening, i.e. even after transforming the column to <code>pl.Enum</code></p>
<ul>
<li>the parquet file is of the same size</li>
<li>the dataframe in memory (if I use the <code>pl.DataFrame.estimated_size()</code> method) actually gets bigger.</li>
</ul>
<p>So is <code>pl.Enum</code> advisable at all, even though the column values are repeated 1206 times each?</p>
|
<python><string><memory-management><enums><python-polars>
|
2024-09-09 08:37:12
| 1
| 1,720
|
Della
|
78,964,577
| 11,555,352
|
Pandas concat on resampled data frames results in empty data frame
|
<p>I have the below code:</p>
<pre><code>raster = "1s"

# Resample data to a shared time raster (e.g. 1s) and combine into a single data frame
dfs_resampled = [df.resample(raster).ffill() for df in dfs]
print("dfs_resampled", dfs_resampled)

# Join resampled data frames into a single data frame
df_custom = pd.concat(dfs_resampled, axis=1, join="inner")
print("df_custom", df_custom)
</code></pre>
<p>It produces the below result:</p>
<pre><code>dfs_resampled [ CAN1_VD_S00_TotalVehicleDistance CAN1_VD_S00_TripDistance
t
2020-01-13 14:01:27 NaN NaN
2020-01-13 14:01:28 464632.5 464632.5
2020-01-13 14:01:29 464632.5 464632.5
2020-01-13 14:01:30 464632.5 464632.5
2020-01-13 14:01:31 464632.5 464632.5
... ... ...
2020-01-13 14:04:39 464632.5 464632.5
2020-01-13 14:04:40 464632.5 464632.5
2020-01-13 14:04:41 464632.5 464632.5
2020-01-13 14:04:42 464632.5 464632.5
2020-01-13 14:04:43 464632.5 464632.5
[197 rows x 2 columns], CAN1_EEC7_S01_EngnCrnksBrthrOlSprtrSpd ... CAN1_EEC7_S01_EngnIntkMnfldCmmnddPrssr
t ...
2020-01-13 14:01:28 NaN ... NaN
2020-01-13 14:01:29 6333.0 ... 82.625
2020-01-13 14:01:30 6329.0 ... 82.750
2020-01-13 14:01:31 6333.0 ... 82.750
2020-01-13 14:01:32 6329.0 ... 82.750
... ... ... ...
2020-01-13 14:04:39 6251.0 ... 106.875
2020-01-13 14:04:40 6255.0 ... 106.875
2020-01-13 14:04:41 6254.0 ... 106.875
2020-01-13 14:04:42 6250.0 ... 107.000
2020-01-13 14:04:43 6250.0 ... 107.000
[196 rows x 4 columns]]
df_custom Empty DataFrame
Columns: [CAN1_VD_S00_TotalVehicleDistance, CAN1_VD_S00_TripDistance, CAN1_EEC7_S01_EngnCrnksBrthrOlSprtrSpd, CAN1_EEC7_S01_EngnExhstGsRrltn1Vlv2Pstn, CAN1_EEC7_S01_EngnExhstGsRrltn1VlvPstn, CAN1_EEC7_S01_EngnIntkMnfldCmmnddPrssr]
Index: []
</code></pre>
<p>My expected result was to get a single data frame, <code>df_custom</code>, that would contain the inner join of the two separate resampled data frames on the same time raster. Yet, instead I am getting an empty data frame - as if there are no matches between the two separate resampled data frames.</p>
<p>I have tried various methods to determine if the resampled timestamps align and from what I can tell they should. Am I using the <code>pd.concat</code> function incorrectly?</p>
<p>Note: If I change my <code>raster</code> to <code>5s</code> instead of <code>1s</code>, then I get the expected result as below:</p>
<pre><code>dfs_resampled [ CAN1_VD_S00_TotalVehicleDistance CAN1_VD_S00_TripDistance
t
2020-01-13 14:01:25 NaN NaN
2020-01-13 14:01:30 464632.5 464632.5
2020-01-13 14:01:35 464632.5 464632.5
2020-01-13 14:01:40 464632.5 464632.5
2020-01-13 14:01:45 464632.5 464632.5
2020-01-13 14:01:50 464632.5 464632.5
2020-01-13 14:01:55 464632.5 464632.5
2020-01-13 14:02:00 464632.5 464632.5
2020-01-13 14:02:05 464632.5 464632.5
2020-01-13 14:02:10 464632.5 464632.5
2020-01-13 14:02:15 464632.5 464632.5
2020-01-13 14:02:20 464632.5 464632.5
2020-01-13 14:02:25 464632.5 464632.5
2020-01-13 14:02:30 464632.5 464632.5
2020-01-13 14:02:35 464632.5 464632.5
2020-01-13 14:02:40 464632.5 464632.5
2020-01-13 14:02:45 464632.5 464632.5
2020-01-13 14:02:50 464632.5 464632.5
2020-01-13 14:02:55 464632.5 464632.5
2020-01-13 14:03:00 464632.5 464632.5
2020-01-13 14:03:05 464632.5 464632.5
2020-01-13 14:03:10 464632.5 464632.5
2020-01-13 14:03:15 464632.5 464632.5
2020-01-13 14:03:20 464632.5 464632.5
2020-01-13 14:03:25 464632.5 464632.5
2020-01-13 14:03:30 464632.5 464632.5
2020-01-13 14:03:35 464632.5 464632.5
2020-01-13 14:03:40 464632.5 464632.5
2020-01-13 14:03:45 464632.5 464632.5
2020-01-13 14:03:50 464632.5 464632.5
2020-01-13 14:03:55 464632.5 464632.5
2020-01-13 14:04:00 464632.5 464632.5
2020-01-13 14:04:05 464632.5 464632.5
2020-01-13 14:04:10 464632.5 464632.5
2020-01-13 14:04:15 464632.5 464632.5
2020-01-13 14:04:20 464632.5 464632.5
2020-01-13 14:04:25 464632.5 464632.5
2020-01-13 14:04:30 464632.5 464632.5
2020-01-13 14:04:35 464632.5 464632.5
2020-01-13 14:04:40 464632.5 464632.5, CAN1_EEC7_S01_EngnCrnksBrthrOlSprtrSpd ... CAN1_EEC7_S01_EngnIntkMnfldCmmnddPrssr
t ...
2020-01-13 14:01:25 NaN ... NaN
2020-01-13 14:01:30 6329.0 ... 82.750
2020-01-13 14:01:35 6335.0 ... 82.750
2020-01-13 14:01:40 6335.0 ... 82.750
2020-01-13 14:01:45 6335.0 ... 82.625
2020-01-13 14:01:50 6333.0 ... 82.750
2020-01-13 14:01:55 6331.0 ... 82.625
2020-01-13 14:02:00 6332.0 ... 82.750
2020-01-13 14:02:05 6331.0 ... 82.750
2020-01-13 14:02:10 6323.0 ... 82.750
2020-01-13 14:02:15 6321.0 ... 82.750
2020-01-13 14:02:20 6315.0 ... 82.750
2020-01-13 14:02:25 6310.0 ... 82.750
2020-01-13 14:02:30 6306.0 ... 82.750
2020-01-13 14:02:35 6302.0 ... 82.750
2020-01-13 14:02:40 6313.0 ... 82.500
2020-01-13 14:02:45 6307.0 ... 82.500
2020-01-13 14:02:50 6304.0 ... 82.625
2020-01-13 14:02:55 6301.0 ... 59.250
2020-01-13 14:03:00 6305.0 ... 55.375
2020-01-13 14:03:05 6301.0 ... 55.000
2020-01-13 14:03:10 6306.0 ... 55.375
2020-01-13 14:03:15 6303.0 ... 55.000
2020-01-13 14:03:20 6300.0 ... 55.250
2020-01-13 14:03:25 6294.0 ... 55.125
2020-01-13 14:03:30 6296.0 ... 55.250
2020-01-13 14:03:35 6290.0 ... 55.000
2020-01-13 14:03:40 6288.0 ... 55.250
2020-01-13 14:03:45 6281.0 ... 55.375
2020-01-13 14:03:50 6282.0 ... 55.375
2020-01-13 14:03:55 6274.0 ... 105.625
2020-01-13 14:04:00 6270.0 ... 106.125
2020-01-13 14:04:05 6268.0 ... 106.500
2020-01-13 14:04:10 6269.0 ... 106.750
2020-01-13 14:04:15 6267.0 ... 106.750
2020-01-13 14:04:20 6265.0 ... 106.750
2020-01-13 14:04:25 6256.0 ... 106.875
2020-01-13 14:04:30 6257.0 ... 106.875
2020-01-13 14:04:35 6255.0 ... 106.875
2020-01-13 14:04:40 6255.0 ... 106.875
[40 rows x 4 columns]]
df_custom CAN1_VD_S00_TotalVehicleDistance ... CAN1_EEC7_S01_EngnIntkMnfldCmmnddPrssr
t ...
2020-01-13 14:01:25 NaN ... NaN
2020-01-13 14:01:30 464632.5 ... 82.750
2020-01-13 14:01:35 464632.5 ... 82.750
2020-01-13 14:01:40 464632.5 ... 82.750
2020-01-13 14:01:45 464632.5 ... 82.625
2020-01-13 14:01:50 464632.5 ... 82.750
2020-01-13 14:01:55 464632.5 ... 82.625
2020-01-13 14:02:00 464632.5 ... 82.750
2020-01-13 14:02:05 464632.5 ... 82.750
2020-01-13 14:02:10 464632.5 ... 82.750
2020-01-13 14:02:15 464632.5 ... 82.750
2020-01-13 14:02:20 464632.5 ... 82.750
2020-01-13 14:02:25 464632.5 ... 82.750
2020-01-13 14:02:30 464632.5 ... 82.750
2020-01-13 14:02:35 464632.5 ... 82.750
2020-01-13 14:02:40 464632.5 ... 82.500
2020-01-13 14:02:45 464632.5 ... 82.500
2020-01-13 14:02:50 464632.5 ... 82.625
2020-01-13 14:02:55 464632.5 ... 59.250
2020-01-13 14:03:00 464632.5 ... 55.375
2020-01-13 14:03:05 464632.5 ... 55.000
2020-01-13 14:03:10 464632.5 ... 55.375
2020-01-13 14:03:15 464632.5 ... 55.000
2020-01-13 14:03:20 464632.5 ... 55.250
2020-01-13 14:03:25 464632.5 ... 55.125
2020-01-13 14:03:30 464632.5 ... 55.250
2020-01-13 14:03:35 464632.5 ... 55.000
2020-01-13 14:03:40 464632.5 ... 55.250
2020-01-13 14:03:45 464632.5 ... 55.375
2020-01-13 14:03:50 464632.5 ... 55.375
2020-01-13 14:03:55 464632.5 ... 105.625
2020-01-13 14:04:00 464632.5 ... 106.125
2020-01-13 14:04:05 464632.5 ... 106.500
2020-01-13 14:04:10 464632.5 ... 106.750
2020-01-13 14:04:15 464632.5 ... 106.750
2020-01-13 14:04:20 464632.5 ... 106.750
2020-01-13 14:04:25 464632.5 ... 106.875
2020-01-13 14:04:30 464632.5 ... 106.875
2020-01-13 14:04:35 464632.5 ... 106.875
2020-01-13 14:04:40 464632.5 ... 106.875
[40 rows x 6 columns]
</code></pre>
|
<python><pandas>
|
2024-09-09 08:34:43
| 1
| 1,611
|
mfcss
|
78,964,422
| 8,024,622
|
How to subtract an amount of minutes from a Pandas DataFrame datetime with a condition?
|
<p>I have a DataFrame like this (date: datetime64[ns], v: float64)</p>
<pre><code>date v
2024-09-01 22:09:55 1.2
2024-09-01 22:12:08 1.11
2024-09-01 22:59:59 1.7
2024-09-01 23:00:02 1.1
2024-09-01 23:04:00 2.2
</code></pre>
<p>What I want: if the minute component of the datetime is <code><= 10</code>, then subtract 11 minutes (something like <code>df['date']-timedelta(minutes=11)</code>). Otherwise keep the original datetime.</p>
<p>The expected output should be</p>
<pre><code>date v
2024-09-01 21:58:55 1.2
2024-09-01 22:12:08 1.11
2024-09-01 22:59:59 1.7
2024-09-01 22:49:02 1.1
2024-09-01 22:53:00 2.2
</code></pre>
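<p>A sketch of one straightforward approach: build a boolean mask from <code>dt.minute</code> and subtract a <code>Timedelta</code> only on the masked rows.</p>

```python
import pandas as pd

df = pd.DataFrame(
    {
        "date": pd.to_datetime(
            ["2024-09-01 22:09:55", "2024-09-01 22:12:08", "2024-09-01 22:59:59",
             "2024-09-01 23:00:02", "2024-09-01 23:04:00"]
        ),
        "v": [1.2, 1.11, 1.7, 1.1, 2.2],
    }
)

# Subtract 11 minutes only where the minute component is <= 10:
mask = df["date"].dt.minute <= 10
df.loc[mask, "date"] = df.loc[mask, "date"] - pd.Timedelta(minutes=11)
print(df)
```

<p>This reproduces the expected output above: rows 1, 4 and 5 shift back 11 minutes, the others are untouched.</p>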
|
<python><pandas><datetime><timedelta>
|
2024-09-09 07:52:44
| 1
| 624
|
jigga
|
78,964,338
| 296,473
|
How do I do server-side default value when using SQLAlchemy 2.0 with dataclass?
|
<p>I have a column defined as <code>ts timestamp with time zone not null default current_timestamp</code>. I want SQLAlchemy to not insert this value on the client side when inserting data, while still passing type checking.</p>
<pre class="lang-py prettyprint-override"><code>import datetime
from sqlalchemy import DateTime
from sqlalchemy.orm import DeclarativeBase, Mapped, MappedAsDataclass, mapped_column

class Base(MappedAsDataclass, DeclarativeBase):
    pass

class MyTable(Base):
    ts: Mapped[datetime.datetime] = mapped_column(DateTime, init=False, nullable=False)
</code></pre>
<p>This doesn't work because SQLAlchemy is trying to insert a null value. I've tried various <code>*default</code> arguments but they don't work.</p>
|
<python><sqlalchemy><python-typing>
|
2024-09-09 07:29:46
| 1
| 1,723
|
lilydjwg
|
78,964,300
| 885,650
|
Conditional Multivariate Gaussian with scipy
|
<p>For a multivariate Gaussian, it is straightforward to compute the CDF or PDF at a given point: <code>rv = scipy.stats.multivariate_normal(mean, cov)</code>, then <code>rv.pdf(point)</code> or <code>rv.cdf(upper)</code>.</p>
<p>But I have exact values for some axes (along these I want the PDF), and upper limits for others (along these I need to integrate, i.e. the CDF).</p>
<p>I can split the problem:</p>
<ol>
<li>get the conditional multivariate Gaussian, conditioning on the axes with fixed values;</li>
<li>apply the CDF function with the upper limits.</li>
</ol>
<p>Is there a function to get a multivariate Gaussian, conditioning on some axes?</p>
<p>Related:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/30560176/multivariate-normal-cdf-in-python-using-scipy">Multivariate Normal CDF in Python using scipy</a></li>
<li><a href="https://stats.stackexchange.com/questions/30588/deriving-the-conditional-distributions-of-a-multivariate-normal-distribution">https://stats.stackexchange.com/questions/30588/deriving-the-conditional-distributions-of-a-multivariate-normal-distribution</a></li>
</ul>
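<p>SciPy has no built-in conditioning helper, but the closed form from the linked CrossValidated answer is short to write by hand. A sketch (the helper name and example numbers are made up for illustration):</p>

```python
import numpy as np
from scipy import stats


def condition_mvn(mean, cov, fixed_idx, fixed_vals):
    """Condition N(mean, cov) on x[fixed_idx] == fixed_vals.

    Returns the mean and covariance of the remaining (free) axes, using
    mu_1 + S12 S22^-1 (a - mu_2) and S11 - S12 S22^-1 S21.
    """
    mean, cov = np.asarray(mean, float), np.asarray(cov, float)
    free_idx = [i for i in range(len(mean)) if i not in fixed_idx]
    s11 = cov[np.ix_(free_idx, free_idx)]
    s12 = cov[np.ix_(free_idx, fixed_idx)]
    s22 = cov[np.ix_(fixed_idx, fixed_idx)]
    diff = np.asarray(fixed_vals, float) - mean[fixed_idx]
    cond_mean = mean[free_idx] + s12 @ np.linalg.solve(s22, diff)
    cond_cov = s11 - s12 @ np.linalg.solve(s22, s12.T)
    return cond_mean, cond_cov


mean = np.zeros(3)
cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
m, c = condition_mvn(mean, cov, fixed_idx=[2], fixed_vals=[0.4])
rv = stats.multivariate_normal(m, c)
print(rv.cdf([0.0, 0.0]))  # CDF over the free axes, with x2 fixed at 0.4
```

<p>Note that if you want a joint quantity (PDF along the fixed axes times CDF along the free ones), multiply this conditional CDF by the marginal PDF of the fixed axes.</p>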
|
<python><scipy><statistics><normal-distribution>
|
2024-09-09 07:16:03
| 1
| 2,721
|
j13r
|
78,964,171
| 16,611,809
|
How to use the input of a field only if it is visible?
|
<p>How do I ensure that an input field's value is treated as empty if the field is not shown? In the example the text caption field is empty and not shown. If I show it by ticking "Show text caption field" and enter any text, the text appears in the output field. If I then untick "Show text caption field", the output field should also be empty again, without having to clear it manually. Not in general of course, but for some use cases this is quite important.</p>
<pre><code>from shiny import App, Inputs, Outputs, Session, render, ui
app_ui = ui.page_fluid(
ui.input_checkbox("show", "Show text caption field", False),
ui.panel_conditional(
"input.show", ui.input_text("caption", "Caption:"),
),
ui.output_text_verbatim("value"),
)
def server(input: Inputs, output: Outputs, session: Session):
@render.text
def value():
return input.caption()
app = App(app_ui, server)
</code></pre>
|
<python><py-shiny>
|
2024-09-09 06:40:34
| 1
| 627
|
gernophil
|
78,964,104
| 16,611,809
|
How to change the value of an ui.input element based on another input?
|
<p>Is it possible to change the default value of a py-shiny <code>ui.input</code> element based on another input? In the example, if I tick "Use uppercase letters" the value should switch from "example text" to "EXAMPLE TEXT".</p>
<pre><code>from shiny import App, Inputs, Outputs, Session, render, ui
app_ui = ui.page_fluid(
ui.input_checkbox("Uppercase", "Use uppercase letters", False),
ui.input_text("caption", "Caption:", value="example text"),
ui.output_text_verbatim("value"),
)
def server(input: Inputs, output: Outputs, session: Session):
@render.text
def value():
return input.caption()
app = App(app_ui, server)
</code></pre>
|
<python><py-shiny>
|
2024-09-09 06:16:35
| 1
| 627
|
gernophil
|
78,964,057
| 1,394,353
|
Can I perform a bit-wise group by and aggregation with Polars `or_`?
|
<p>Let's say I have an <code>auth</code> field that uses bit flags to indicate permissions (for example, bit-0 means <code>add</code> and bit-1 means <code>delete</code>).</p>
<p>How do I <code>bitwise-OR</code> them together?</p>
<pre><code>import polars as pl
df_in = pl.DataFrame(
{
"k": ["a", "a", "b", "b", "c"],
"auth": [1, 3, 1, 0, 0],
}
)
</code></pre>
<p>The dataframe:</p>
<pre><code>df_in: shape: (5, 2)
┌─────┬──────┐
│ k ┆ auth │
│ --- ┆ --- │
│ str ┆ i64 │
╞═════╪══════╡
│ a ┆ 1 │
│ a ┆ 3 │
│ b ┆ 1 │
│ b ┆ 0 │
│ c ┆ 0 │
└─────┴──────┘
</code></pre>
<p>When I group by and sum, things look good, I sum the <code>auth</code> by <code>k</code></p>
<pre><code>dfsum = df_in.group_by("k").agg(pl.col("auth").sum())
</code></pre>
<pre><code>dfsum: shape: (3, 2)
┌─────┬──────┐
│ k ┆ auth │
│ --- ┆ --- │
│ str ┆ i64 │
╞═════╪══════╡
│ a ┆ 4 │
│ b ┆ 1 │
│ c ┆ 0 │
└─────┴──────┘
</code></pre>
<p>So, it looks as if I am using <code>group_by</code> and <code>agg</code> correctly, when using <code>sum</code>.</p>
<p>Not so good when using <code>or_</code>.</p>
<p><code>dfor = df_in.group_by("k").agg(pl.col("auth").or_())</code></p>
<p>gives</p>
<pre><code>dfor: shape: (3, 2)
┌─────┬───────────┐
│ k ┆ auth │
│ --- ┆ --- │
│ str ┆ list[i64] │
╞═════╪═══════════╡
│ a ┆ [1, 3] │
│ b ┆ [1, 0] │
│ c ┆ [0] │
└─────┴───────────┘
</code></pre>
<h4>Expectations:</h4>
<p>for the <code>or_</code> I was expecting this result instead:</p>
<pre><code>df_wanted_or: shape: (3, 2)
┌─────┬──────┐
│ k ┆ auth │
│ --- ┆ --- │
│ str ┆ i64 │
╞═════╪══════╡
│ a ┆ 3 │
│ b ┆ 1 │
│ c ┆ 0 │
└─────┴──────┘
</code></pre>
<p>Now, I did find a workaround by using <code>map_batches</code> to call a Python function. Very simply, something like</p>
<p><code>functools.reduce(lambda x,y: x|y)</code></p>
<p>but how do I do this without leaving Polars?</p>
|
<python><aggregate><bitwise-operators><python-polars>
|
2024-09-09 05:57:05
| 1
| 12,224
|
JL Peyret
|
78,963,850
| 585,650
|
How to patch a function with pytest-mock regardless of path/namespace used to call it?
|
<p>I am trying to mock a function regardless of how it is imported in the code.
I can't find info on how to do it.</p>
<p>Here is an example: the first assert works as expected, but the second fails because of the import path:</p>
<pre><code>import mymodule
from mymodule import test1
def test_mock(mocker):
mocker.patch('mymodule.test1', return_value="mocked")
assert mymodule.test1() == "mocked" # works
assert test1() == "mocked" # fails
</code></pre>
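<p>The second assert fails because <code>from mymodule import test1</code> creates a separate name binding in the test module, and <code>mocker.patch('mymodule.test1')</code> only replaces the attribute on <code>mymodule</code>. The rule is: patch the name in the namespace where it is looked up. A self-contained illustration with a stand-in module, using <code>unittest.mock</code> (which pytest-mock wraps):</p>

```python
import sys
import types
from unittest import mock

# Hypothetical stand-in for "mymodule"
mymodule = types.ModuleType("mymodule")
mymodule.test1 = lambda: "real"
sys.modules["mymodule"] = mymodule

from mymodule import test1  # a second, independent binding to the same function

with mock.patch("mymodule.test1", return_value="mocked"):
    print(mymodule.test1())  # "mocked": looked up through the module attribute
    print(test1())           # "real":   the local alias was never rebound

# To cover the alias too, patch it in the importing module's namespace as well,
# e.g. mocker.patch(f"{__name__}.test1", ...) inside the test file.
```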
|
<python><mocking><pytest-mock><pytest-mock-patch>
|
2024-09-09 04:06:33
| 1
| 597
|
Hugo Zaragoza
|
78,963,578
| 4,718,221
|
Dataclass inheriting using kw_only for all variables
|
<p>I am practicing on using the super function and dataclass inheritance in general. I have enabled the kw_only attribute for cases when the parent class has default values. I completely understand that super doesn't need to be used in a dataclass if you're just passing variables and I can avoid using super here. My goal is to understand the super feature better through this example. I can't understand the error message I'm getting though.</p>
<pre><code>from dataclasses import dataclass

@dataclass(kw_only=True)
class ZooAnimals():
food_daily_kg: int
price_food: float
area_required: float
name: str
c = ZooAnimals(food_daily_kg=565, price_food=40, area_required=10, name='Monkey'
)
print(c)
@dataclass(kw_only=True)
class Cats(ZooAnimals):
meowing: str
def __init__(self, food_daily_kg, price_food, area_required, meowing, name):
self.meowing = meowing
super().__init__(food_daily_kg, price_food, area_required, name)
z = Cats(food_daily_kg=465, price_food=30, area_required=10, meowing='Little Bit',
name='Leopard'
)
print(z)
</code></pre>
<p>Output:</p>
<pre><code>ZooAnimals(food_daily_kg=565, price_food=40, area_required=10, name='Monkey')
TypeError: ZooAnimals.__init__() takes 1 positional argument but 5 were given
</code></pre>
|
<python><inheritance><python-dataclasses>
|
2024-09-09 00:39:01
| 1
| 604
|
user4718221
|
78,963,496
| 2,111,390
|
Python Floor Division Seems incorrect?
|
<p>I was noticing that in some situations floor division (the // operator) seems to behave incorrectly. Here is one example:</p>
<pre><code>>>> 59 // 0.2
294.0
</code></pre>
<p>Two things - one, the answer is just wrong. The correct answer is 295 (note that 59 / 0.2 gives the correct result of 295.0). The second issue is that this result is a float, not an integer.</p>
<p>I cannot for the life of me figure out how this is happening. Is it a bug? Some weird behavior when a float is in the divisor? I am using 3.12.</p>
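<p>This is not a bug: <code>0.2</code> cannot be represented exactly in binary floating point, and the stored value is slightly larger than 1/5, so the exact quotient is a hair below 295 and its floor is 294. True division rounds that exact quotient to the nearest float (295.0), while <code>//</code> floors it first. And <code>//</code> returns a float whenever either operand is a float, which explains the <code>294.0</code>. This can be verified exactly with <code>fractions.Fraction</code>:</p>

```python
import math
from fractions import Fraction

stored = Fraction(0.2)            # the exact binary value Python stores for 0.2
print(stored > Fraction(1, 5))    # True: slightly above one fifth

quotient = Fraction(59) / stored  # exact quotient, just under 295
print(math.floor(quotient))       # 294, which is what 59 // 0.2 faithfully reports
print(59 // 0.2)                  # 294.0 (a float, because 0.2 is a float)
```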
|
<python><floor-division>
|
2024-09-08 23:22:05
| 2
| 3,671
|
Bryant
|
78,963,445
| 15,412,256
|
Polars Expression Chaining with Dynamic Number Operations
|
<p>When dealing with dynamic number of operations, especially with calculations depending on previous steps, the Polars Expression chaining becomes tricky:</p>
<p>here is a demo operator to generate new variables:</p>
<pre class="lang-py prettyprint-override"><code>def demo_operator_addition(var1: str, var2: str) -> IntoExpr:
return pl.col(var1).add(pl.col(var2))
</code></pre>
<p>The above <code>Callable</code> is then used in the helper function:</p>
<pre class="lang-py prettyprint-override"><code>def variable_calculation(
data: pl.DataFrame,
target_var: str,
operator: Callable,
reference_var: Optional[str] = None,
by: Optional[List[str]] = None,
col_name: Optional[str] = None,
) -> pl.DataFrame:
data = data.lazy()
by = by if by is not None else []
data = (
data
.with_columns(
operator(target_var, reference_var)
.over([True, *by])
.alias(col_name) if col_name is not None else target_var
)
)
return data.collect()
</code></pre>
<p>I want to be able to use <code>variable_calculation</code> function dynamically with different operations:</p>
<pre class="lang-py prettyprint-override"><code>operation1 = variable_calculation(
df,
target_var="var1",
reference_var="var2",
operator=demo_operator_addition,
col_name="var3",
)
operation2 = variable_calculation(
operation1,
target_var="var1",
reference_var="var3",
operator=demo_operator_addition,
col_name="var4",
)
</code></pre>
<p>In the above operations, <code>var3</code> is generated before <code>var4</code>, which requires <code>var3</code> for its calculation. However, Polars only allows chaining like <code>df.with_columns(calculate var3).with_columns(calculate var4)</code>.</p>
<p>Is there an efficient way to dynamically chain those expressions together? (For example, I want to avoid doing operation = xxx definition steps)</p>
|
<python><dataframe><oop><python-polars>
|
2024-09-08 22:49:23
| 1
| 649
|
Kevin Li
|
78,963,347
| 1,316,252
|
poisson blending / seamless cloning a head onto a body
|
<p>I'm trying to paste an image of a human head onto an image of a human body and blend the two together using OpenCV's seamlessClone (Poisson blending). However, the head ends up taking on the color of the surrounding background (green screen) instead of just the neck. I've also tried with a transparent background, but then it becomes very dark. I've also tried in both mixed and normal mode.</p>
<p>Source image (head): <a href="https://ibb.co/Lv9P5Qp" rel="nofollow noreferrer">https://ibb.co/Lv9P5Qp</a></p>
<p>Source image mask: <a href="https://ibb.co/b7bgwSK" rel="nofollow noreferrer">https://ibb.co/b7bgwSK</a></p>
<p>Destination image (body): <a href="https://ibb.co/vcC66sr" rel="nofollow noreferrer">https://ibb.co/vcC66sr</a></p>
<p>Destination image (body, transparent background): <a href="https://ibb.co/pRZMNf1" rel="nofollow noreferrer">https://ibb.co/pRZMNf1</a></p>
<p>example of issue (green background): <a href="https://ibb.co/PCyLkKG" rel="nofollow noreferrer">https://ibb.co/PCyLkKG</a></p>
<p>example of issue (transparent background): <a href="https://ibb.co/Lk4cYmd" rel="nofollow noreferrer">https://ibb.co/Lk4cYmd</a></p>
<pre><code>import cv2

source_image = cv2.imread(source_image_path)
destination = cv2.cvtColor(headless_destination_image, cv2.COLOR_RGB2BGR)
blended = cv2.seamlessClone(source_image, destination, source_image_mask, center, cv2.NORMAL_CLONE)
cv2.imwrite('result.png', blended)
</code></pre>
<p>Is there any way to tell openCV to only match the color in one direction (ie the neck region, instead of all around)?</p>
|
<python><opencv><image-processing><computer-vision>
|
2024-09-08 21:28:16
| 0
| 3,100
|
skunkwerk
|
78,963,163
| 3,103,767
|
annotate a NamedTuple for storing a function and its args and kwargs
|
<p>I have a system where I want to store a function along with its arguments for later invocation. I have tried:</p>
<pre class="lang-py prettyprint-override"><code>import typing
P = typing.ParamSpec("P")
class JobPayload(typing.NamedTuple):
fn: typing.Callable[P, None]
args: P.args
kwargs: P.kwargs
</code></pre>
<p>but the types of <code>args</code> and <code>kwargs</code> are not accepted. How do I type this correctly?</p>
|
<python><python-typing>
|
2024-09-08 19:27:02
| 1
| 983
|
Diederick C. Niehorster
|
78,963,154
| 11,564,487
|
Integral that Sympy cannot solve but Wolfram Alpha can
|
<p>I am using the following code, which returns the integral itself and not its result:</p>
<pre><code>import sympy as sp

x = sp.Symbol('x')
integrand = (sp.sqrt(1 - x**2) / (1 + x**2))
sp.integrate(integrand, (x, -1, 1), manual=True)
</code></pre>
<p><a href="https://www.wolframalpha.com/input?i2d=true&i=Integrate%5BDivide%5BSqrt%5B1-Square%5Bx%5D%5D%2C1%2BSquare%5Bx%5D%5D%2C%7Bx%2C-1%2C1%7D%5D" rel="nofollow noreferrer">Wolfram Alpha</a> can find a closed-form result for this integral.</p>
<p>So my question is: can <a href="/questions/tagged/sympy" class="s-tag post-tag" title="show questions tagged 'sympy'" aria-label="show questions tagged 'sympy'" rel="tag" aria-labelledby="tag-sympy-tooltip-container" data-tag-menu-origin="Unknown">sympy</a> get the same result by using a different approach than mine?</p>
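<p>One workaround that often helps in such cases (a sketch): perform the trigonometric substitution <code>x = sin(t)</code> by hand, which turns the integrand into a rational expression in trig functions that SymPy integrates directly. The closed form should equal <code>pi*(sqrt(2) - 1)</code>, matching Wolfram Alpha:</p>

```python
import sympy as sp

t = sp.Symbol("t")
# x = sin(t): sqrt(1 - x**2) -> cos(t), dx -> cos(t) dt, limits -> [-pi/2, pi/2]
res = sp.integrate(sp.cos(t)**2 / (1 + sp.sin(t)**2), (t, -sp.pi / 2, sp.pi / 2))
print(sp.simplify(res))
```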
|
<python><sympy>
|
2024-09-08 19:21:18
| 3
| 27,045
|
PaulS
|
78,963,092
| 3,607,022
|
How to Determine List Repeats and Item Percentage for Desired Proportion in a New List
|
<p>I'm working on a problem where I need to modify a list by adding another item, and I want this item to make up a specific percentage of the new total length of the list. I need help figuring out how to do this. Here’s the situation.</p>
<p>I have:</p>
<ul>
<li>An existing list with a specific length (<code>list_length</code>).</li>
<li>An additional item with a specific length (<code>item_length</code>) that I want to add.</li>
<li>A target: the added item should constitute exactly a certain percentage (<code>target_percentage</code>) of the new total length of the list.</li>
</ul>
<p>Could someone help me with the following?</p>
<ol>
<li>How do I calculate the total length of the new list so that the item makes up the exact percentage I want?</li>
<li>How many times does the existing list need to be repeated to achieve this total length?</li>
<li>How can I verify that the item actually makes up the desired percentage of the new list?</li>
<li>How do I calculate the percentage of the existing list in the new list?</li>
</ol>
<p>Here’s an example of the values I’m using:</p>
<p><code>list_length = 1800</code> (the length of the existing list)</p>
<p><code>item_length = 30</code> (the length of the item to be added)</p>
<p><code>target_percentage = 0.05</code> (the item should make up 5% of the new list)</p>
<p>I tried the following Python code, but it didn’t work as expected:</p>
<pre><code>import math
def calculate_list_and_item_details(list_length, item_length, target_percentage):
# Convert target percentage to a fraction
target_fraction = target_percentage / 100.0
# Calculate the total length required for the item to be the target percentage of the total list
required_list_length = (item_length * (1 - target_fraction)) / target_fraction
# Calculate how many times the list needs to be repeated to reach the required length
list_repeats = math.ceil(required_list_length / list_length)
# Calculate the total length of the list with the repeated lists
total_list_length = list_repeats * list_length
total_length_with_item = total_list_length + item_length
# Calculate the percentage of the item in the new total length
item_percentage = (item_length / total_length_with_item) * 100
# Calculate the percentage of the list in the new total length
list_percentage = (total_list_length / total_length_with_item) * 100
return {
'list_repeats': list_repeats,
'item_repeats': 1, # The item is added once
'item_percentage': item_percentage,
'list_percentage': list_percentage
}
# Example usage
list_length = 1800 # Length of the existing list
item_length = 30 # Length of the item to be added
target_percentage = 5 # Desired percentage of the item in the new total length
result = calculate_list_and_item_details(list_length, item_length, target_percentage)
print(f"List Repeats: {result['list_repeats']}")
print(f"Item Repeats: {result['item_repeats']}")
print(f"Item Percentage: {result['item_percentage']:.2f}%")
print(f"List Percentage: {result['list_percentage']:.2f}%")
</code></pre>
<p>Output:</p>
<p><em>List Repeats: 1</em></p>
<p><em>Item Repeats: 1</em></p>
<p><em>Item Percentage: 1.64%</em></p>
<p><em>List Percentage: 98.36%</em></p>
<p>The code calculates the number of times the existing list needs to be repeated but doesn’t properly ensure that the item actually constitutes the exact percentage of the total length. The issue arises because the total length needed for the item to meet the target percentage isn’t correctly matched with the number of repetitions of the existing list.</p>
<p>Can anyone provide guidance or ideas on how to fix this?</p>
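<p>One observation worth keeping in mind: with a single copy of the item, an exact 5% is generally impossible, because the list can only grow in whole multiples of 1800. If the item is also allowed to repeat, exact integer repeat counts can be found with <code>fractions.Fraction</code> (a sketch; the function name is illustrative):</p>

```python
from fractions import Fraction

def exact_repeats(list_length, item_length, target_percentage):
    """Smallest integer repeat counts so the item makes up exactly the target share.

    Pass target_percentage as an int or string (e.g. 5 or "5"), not a float,
    so the Fraction stays exact.
    """
    p = Fraction(target_percentage) / 100                 # e.g. 5 -> 1/20
    ratio = (p * list_length) / ((1 - p) * item_length)   # item_repeats / list_repeats
    return ratio.denominator, ratio.numerator             # (list_repeats, item_repeats)

list_repeats, item_repeats = exact_repeats(1800, 30, 5)
total = list_repeats * 1800 + item_repeats * 30
print(list_repeats, item_repeats, item_repeats * 30 / total)
```

For the example values this gives 19 list repeats and 60 item repeats, for a total of 36,000 in which the item is exactly 5%.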
|
<python><algorithm><discrete-mathematics>
|
2024-09-08 18:47:07
| 1
| 482
|
user3607022
|
78,962,922
| 19,218,671
|
"add_transition() argument after ** must be a mapping, not str" : transitions python library
|
<p>I am just trying to make a simple graph with the <code>transitions</code> library:</p>
<pre><code>from transitions.extensions.diagrams import HierarchicalGraphMachine
from IPython.display import display, Markdown
states = ['engoff' , 'poweron' , 'engon' , 'FCCActions' #'emgstatus' , "whevent" , 'new data receive'
,{'name' : 'cores',
'final': True,
'parallel' : [{ 'name' : 'mapeng', 'children': ['maploaded', {"name": "update", "final": True}],
'initial' : 'maploaded',
'transitions': [['delay', 'maploaded', "update"]]},
{
'name' : 'EAA' , 'children': ['newdata', {"name": "done!", "final": True}], #environment analayser
'initial' : 'newdata',
'transitions': [['Analaysing', 'newdata', 'done!']]
},{
'name' : 'FAI' , 'children': ['newdata', {"name": "done!", "final": True}],
'initial' : 'newdata',
'transitions': ['CalculateandLearning', 'newdata', 'done!']
}]
}
]
transitions = [['flightcommand', 'engon', 'FCCActions'],
['poweroff-command', 'engon', 'engoff'],
['init', 'engoff', 'poweron'],
['engstart-command', 'poweron', 'engon'],
['startservice', 'poweron', 'cores']]
m = HierarchicalGraphMachine(states=states, transitions=transitions, initial="engoff", show_conditions=True,
title="Mermaid", auto_transitions=False)
m.init()
</code></pre>
<p>I just made some changes to <a href="https://github.com/pytransitions/transitions?tab=readme-ov-file#diagrams" rel="nofollow noreferrer">this</a> example, but I got this error:</p>
<pre><code>Traceback (most recent call last):
File ".../3.transitions/test.py", line 29, in <module>
m = HierarchicalGraphMachine(states=states, transitions=transitions, initial="engoff", show_conditions=True,
File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 137, in __init__
super(GraphMachine, self).__init__(
File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 61, in __init__
super(MarkupMachine, self).__init__(
File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 407, in __init__
super(HierarchicalMachine, self).__init__(
File "...\Python\Python38\lib\site-packages\transitions\core.py", line 601, in __init__
self.add_states(states)
File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 230, in add_states
super(GraphMachine, self).add_states(
File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 126, in add_states
super(MarkupMachine, self).add_states(states, on_enter=on_enter, on_exit=on_exit,
File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 521, in add_states
self._add_dict_state(state, ignore, remap, **kwargs)
File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 978, in _add_dict_state
self.add_states(state_children, remap=remap, **kwargs)
File "...\Python\Python38\lib\site-packages\transitions\extensions\diagrams.py", line 230, in add_states
super(GraphMachine, self).add_states(
File "...\Python\Python38\lib\site-packages\transitions\extensions\markup.py", line 126, in add_states
super(MarkupMachine, self).add_states(states, on_enter=on_enter, on_exit=on_exit,
File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 521, in add_states
self._add_dict_state(state, ignore, remap, **kwargs)
File "...\Python\Python38\lib\site-packages\transitions\extensions\nesting.py", line 980, in _add_dict_state
self.add_transitions(transitions)
File "...\Python\Python38\lib\site-packages\transitions\core.py", line 1032, in add_transitions
self.add_transition(**trans)
TypeError: add_transition() argument after ** must be a mapping, not str
</code></pre>
<p>How can I fix this?</p>
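<p>Note that the <code>FAI</code> child state uses <code>'transitions': ['CalculateandLearning', 'newdata', 'done!']</code> without the outer brackets that <code>mapeng</code> and <code>EAA</code> have. The sketch below (a simplified stand-in, not the real transitions API) shows why that produces this exact <code>TypeError</code>: the library iterates the transitions list and unpacks non-list elements with <code>**</code>, so each bare string lands after <code>**</code>:</p>

```python
# Simplified stand-in for transitions' add_transitions logic (assumption
# for illustration; the real implementation differs).
def add_transition(trigger=None, source=None, dest=None, **kwargs):
    return trigger, source, dest

flat = ['CalculateandLearning', 'newdata', 'done!']      # FAI: missing outer brackets
nested = [['CalculateandLearning', 'newdata', 'done!']]  # what mapeng/EAA use

err = None
try:
    for trans in flat:
        if isinstance(trans, list):
            add_transition(*trans)
        else:
            add_transition(**trans)   # a str lands here -> same TypeError as the traceback
except TypeError as e:
    err = e
print(err)

for trans in nested:                  # iterating the nested form yields lists
    print(add_transition(*trans))
```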
|
<python><graph>
|
2024-09-08 17:25:45
| 1
| 453
|
irmoah80
|
78,962,730
| 6,843,153
|
Creating streamlit-altair chart from example renders no data
|
<p>I'm trying to render the <strong>streamlit-altair</strong> chart from <a href="https://vega.github.io/vega-lite/examples/interactive_bar_select_highlight.html" rel="nofollow noreferrer">this example</a>, so I implemented the following code:</p>
<pre><code>import altair as alt
import streamlit as st
from view.frontend.path import BasePath
chart_spec = {
"$schema": "https://vega.github.io/schema/vega-lite/v5.json",
"description": "A bar chart with highlighting on hover and selecting on click. (Inspired by Tableau's interaction style.)",
"data": {
"values": [
{"a": "A", "b": 28}, {"a": "B", "b": 55}, {"a": "C", "b": 43},
{"a": "D", "b": 91}, {"a": "E", "b": 81}, {"a": "F", "b": 53},
{"a": "G", "b": 19}, {"a": "H", "b": 87}, {"a": "I", "b": 52},
]
},
"params": [
{
"name": "highlight",
"select": {"type": "point", "on": "pointerover"}
},
{"name": "select", "select": "point"}
],
"mark": {
"type": "bar",
"fill": "#4C78A8",
"stroke": "black",
"cursor": "pointer",
},
"encoding": {
"x": {"field": "a", "type": "ordinal"},
"y": {"field": "b", "type": "quantitative"},
"fillOpacity": {
"condition": {"param": "select", "value": 1},
"value": 0.3,
},
"strokeWidth": {
"condition": [
{
"param": "select",
"empty": False,
"value": 2,
},
{
"param": "highlight",
"empty": False,
"value": 1,
}
],
"value": 0
}
},
"config": {
"scale": {"bandPaddingInner": 0.2}
}
}
class Path(BasePath):
def __init__(self, payload: dict):
self._path_spec = payload
def render(self, container):
with container:
# Altair Chart
chart = alt.Chart().from_dict(chart_spec)
# Display chart in Streamlit
click_event = st.altair_chart(
chart, use_container_width=True, on_select="rerun"
)
# Catch the click and display result
if click_event is not None:
st.write(f"Clicked on: {click_event}")
</code></pre>
<p>The problem is that I'm getting an empty chart in my app, unlike the example page, which is full of data.</p>
<p>What am I doing wrong?</p>
|
<python><streamlit><altair>
|
2024-09-08 15:41:36
| 0
| 5,505
|
HuLu ViCa
|
78,962,681
| 1,097,562
|
Adding Oauth 2.0 to Google Fact Check Tools API
|
<p>I have a simple Python script to search through Fact Check claims using the Google Fact Check Tools API. I have updated the code to include the OAuth flow; here is my code:</p>
<pre><code>import requests
import google.auth
from google.oauth2 import service_account
import os
import json
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
GOOGLE_FC_API = "https://factchecktools.googleapis.com/v1alpha1/claims:search"
PAGE_SIZE = 3000
N_BACK_DAYS = 100
QUERY = "conspiracy"
API_KEY = "my_api_key"
credentials_file = './service_account_credentials.json'
class FetchFactChecks():
def get_google_fcs(self):
credentials = service_account.Credentials.from_service_account_file(
credentials_file,
scopes=['https://www.googleapis.com/auth/factchecktools']
)
# Request an OAuth2 access token
credentials.refresh(Request())
access_token = credentials.token
headers = {
'Authorization': f'Bearer {access_token}'
}
params = {
'languageCode': 'en-US',
'pageSize': str(PAGE_SIZE),
'maxAgeDays': str(N_BACK_DAYS),
'query': QUERY,
'GFC_API_KEY': API_KEY
}
# Make the request to the Fact Check API
response = requests.get(GOOGLE_FC_API, headers=headers, params=params)
if response.status_code == 200:
print("Success:", response.json())
else:
print(f"Error: {response.status_code}, {response.text}")
if __name__ == "__main__":
fc_factcheck = FetchFactChecks()
fc_factcheck.get_google_fcs()
</code></pre>
<p>I am integrating this change as part of a larger workflow using GCP topics, Cloud Functions, etc. But when I run this program for testing, I get the following error:</p>
<blockquote>
<p><strong>{
"error": {
"code": 400,
"message": "Request contains an invalid argument.",
"status": "INVALID_ARGUMENT"
}
}</strong></p>
</blockquote>
<p>I have tried removing all the optional parameters except the mandatory parameter 'query' but still get the same issue. If I send the above request without any headers (oauth token), then it works.</p>
<p>Also I have tried using 'discovery' from googleapiclient using 'userinfo.email' as the scope for service account but still no luck.</p>
<pre><code>from googleapiclient import discovery
from google.oauth2.service_account import Credentials
credentials = Credentials.from_service_account_file(credentials_file, scopes=['https://www.googleapis.com/auth/userinfo.email'])
delegated_credentials = credentials.with_subject('service_account_user_email')
service = discovery.build('factchecktools', 'v1alpha1', credentials=delegated_credentials)
rqst = service.claims().search(
maxAgeDays=N_BACK_DAYS,
pageSize=PAGE_SIZE,
query=QUERY,
)
resp = rqst.execute()
</code></pre>
<p>Can someone please help on what's going wrong here?</p>
|
<python><google-cloud-platform><oauth-2.0><google-api><google-oauth>
|
2024-09-08 15:15:15
| 1
| 851
|
Basith
|
78,962,533
| 6,662,425
|
Memory Layout for Sparse matrix
|
<p>I have a very specific sparse matrix layout and I am looking for storage recommendations.</p>
<p>The matrices I consider</p>
<ul>
<li>are symmetric and positive definite</li>
<li>made up of block matrices (all blocks have the same size)</li>
<li>every block is a "KiteMatrix", i.e. it has a dense upper left corner (the kite) and a single element repeated over the lower right diagonal (the string).</li>
</ul>
<p>But the "kites" are of varying sizes. Specifically the layout is as follows (with only
the lower triangular part, since the upper part is given by symmetry):</p>
<p><a href="https://i.sstatic.net/fz4Xk0y6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fz4Xk0y6.jpg" alt="MatrixLayout" /></a></p>
<p>These are the operations I want to perform</p>
<ul>
<li>The matrix is expanded incrementally by one row of blocks.</li>
<li>In every step I need the cholesky decomposition of the entire matrix and I can retain the cholesky decomposition between steps such that I only need to expand it by a block row whenever the matrix is extended. In fact I intend to only save the cholesky decomposition.</li>
</ul>
<p>The overall storage complexity should thus be</p>
<pre><code>sum_{k=2}^{n+1} (k-1)*k * k = O(n^4)
</code></pre>
<p>and the computation complexity of the Cholesky decomposition should be <code>O(n^6)</code>. In particular, the complexity should be independent of <code>d</code>.</p>
<p>The dimension of the blocks <code>d</code> may be extremely large, it is therefore very desirable to
store the diagonal "string" parts as a single number.</p>
<p>My current plan is to store the blocks in a list of lists format (does scipy.sparse.lil_array support block matrices?) and the blocks themselves as a class consisting of a dense numpy block and a single number for the diagonal.</p>
<p>But I am unsure if this is the best approach, and I am open to better alternatives.</p>
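<p>To make the intended block storage concrete, here is a minimal sketch (pure-Python lists stand in for numpy arrays; the class and method names are illustrative): a dense upper-left "kite" plus a single scalar repeated on the remaining diagonal ("string"), so storage per block is <code>k²</code> numbers plus one scalar, independent of <code>d</code>:</p>

```python
# Sketch of the proposed per-block storage: dense k x k kite + one scalar
# repeated on the last d - k diagonal entries.
class KiteBlock:
    def __init__(self, kite, string, d):
        self.kite = kite        # k x k dense part (list of lists)
        self.string = string    # scalar on the trailing diagonal
        self.d = d              # full block dimension

    def dense(self):
        """Materialise the full d x d block (for testing / small d only)."""
        k = len(self.kite)
        m = [[0.0] * self.d for _ in range(self.d)]
        for i in range(k):
            for j in range(k):
                m[i][j] = self.kite[i][j]
        for i in range(k, self.d):
            m[i][i] = self.string
        return m

b = KiteBlock([[2.0, 1.0], [1.0, 3.0]], 5.0, 4)
print(b.dense())
```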
|
<python><scipy><sparse-matrix><matrix-multiplication>
|
2024-09-08 14:01:46
| 1
| 1,373
|
Felix Benning
|
78,962,239
| 3,806,340
|
Define API for multiple concrete implementations
|
<p>I am trying to implement multiple concrete classes which share the same API. The base functionality of these classes is the same, but they support different types of configuration (among shared ones). I would like to keep the implementations of these concrete classes as separate as possible.</p>
<p>Therefore I came up with an (abstract) base class which defines the API, but I am not sure how to define the function arguments.</p>
<ol>
<li><p>Common config containing all possibilities</p>
<ul>
<li><p>(+) Same API for all implementations.</p>
</li>
<li><p>(-) API suggests certain unsupported configurations. (It might be in the future though).</p>
<p>Perhaps I would need to address this in the docs and/or even raise an exception in case of improper use.</p>
</li>
<li><p>(-) In case of a new config attribute gets added, the other already existing concrete classes don't address this new attribute.</p>
</li>
</ul>
<pre><code>from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Optional

@dataclass
class CommonCfg:
    cfg_1: Optional[int] = None
    cfg_2: Optional[int] = None
    cfg_3: Optional[int] = None

class AbstractA(ABC):
    @abstractmethod
    def func_z(self, cfg: CommonCfg): ...

class B(AbstractA):
    def func_z(self, cfg: CommonCfg):
        ...  # Does not use `cfg_3`.

class C(AbstractA):
    def func_z(self, cfg: CommonCfg):
        ...  # Does not use `cfg_2`.
</code></pre>
</li>
<li><p>Generic signature of base class</p>
<pre><code>class AbstractA:
    def func_z(self, *args, **kwargs):
        # Or expect a `CommonCfg` which only contains common config parameters (e.g. `cfg_1`).
        ...
</code></pre>
<p>For <code>func_z</code>, <code>B</code> and <code>C</code> would expect their specific <code>CfgB</code> and <code>CfgC</code>, respectively.</p>
</li>
</ol>
<p>In a perfect scenario I could just exchange objects of different concrete implementations (at least as long as only their common configs are relevant).
E.g.</p>
<pre><code>...
b = B()
c = C()
cfg = CommonCfg()
b.func_z(cfg)
# I would like to have the least amount of hassle replacing it, e.g. with:
# c.func_z(cfg)
</code></pre>
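<p>One way to sketch option 2 while keeping static typing (names here are illustrative, not a prescribed design): make the base class generic over its config type, with <code>CommonCfg</code> holding only the shared attributes. Objects stay interchangeable as long as callers only rely on the common config:</p>

```python
from dataclasses import dataclass
from typing import Generic, Optional, TypeVar

@dataclass
class CommonCfg:
    cfg_1: Optional[int] = None

@dataclass
class CfgB(CommonCfg):
    cfg_2: Optional[int] = None

@dataclass
class CfgC(CommonCfg):
    cfg_3: Optional[int] = None

TCfg = TypeVar("TCfg", bound=CommonCfg)

class AbstractA(Generic[TCfg]):
    def func_z(self, cfg: TCfg) -> str:
        raise NotImplementedError

class B(AbstractA[CfgB]):
    def func_z(self, cfg: CfgB) -> str:
        return f"B: {cfg.cfg_1}, {cfg.cfg_2}"

class C(AbstractA[CfgC]):
    def func_z(self, cfg: CfgC) -> str:
        return f"C: {cfg.cfg_1}, {cfg.cfg_3}"

# Interchangeable as long as only common config matters:
for obj, cfg in ((B(), CfgB(cfg_1=1)), (C(), CfgC(cfg_1=1))):
    print(obj.func_z(cfg))
```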
|
<python><inheritance><design-patterns><interface>
|
2024-09-08 11:23:01
| 1
| 2,728
|
Guti_Haz
|
78,962,178
| 2,622,368
|
How to set the value of os.environ in python using the command line?
|
<p>OS:Win10</p>
<pre><code>reg add HKCU\Environment /F /V http_proxy /d "http://127.0.0.1:8080"
reg add HKCU\Environment /F /V https_proxy /d "http://127.0.0.1:8080"
</code></pre>
<p>I'm pretty sure this was the only place where the variables were set; <code>http_proxy</code> is no longer among the variables shown by <code>set</code>.</p>
<p>After deleting them in this way, Python still outputs the variable values:</p>
<pre><code>import os
print(os.environ['http_proxy'])
print(os.environ['https_proxy'])
</code></pre>
<pre class="lang-bash prettyprint-override"><code>python.exe F:\Fshare\del1\vmware\test\del\f5\a.py
http://127.0.0.1:8080
http://127.0.0.1:8080
</code></pre>
<p>How can I get the variables to update instantly and clear the cache?</p>
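<p>As I understand it, a process receives a snapshot of the environment when it starts, and registry edits do not propagate to already-running consoles or IDEs (Windows broadcasts a settings-change message that Explorer picks up, but existing <code>cmd</code>/Python processes keep their copy), so the usual fix is to restart the terminal that launches <code>python.exe</code>. A small cross-platform sketch of the snapshot behaviour, using an explicit child environment:</p>

```python
import os
import subprocess
import sys

# A child process sees only the environment it was launched with, not
# later changes to the system-wide settings.
child_env = dict(os.environ)
child_env["http_proxy"] = "http://127.0.0.1:8080"

out = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ.get('http_proxy', ''))"],
    env=child_env, capture_output=True, text=True,
).stdout.strip()
print(out)
```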
|
<python><windows><cmd><environment>
|
2024-09-08 10:49:53
| 1
| 829
|
wgf4242
|
78,961,894
| 13,467,891
|
Cannot install ale-py in docker
|
<p>I'm trying to build the Dockerfile below:</p>
<pre><code>FROM python:3.10
RUN pip install ale-py
</code></pre>
<p>If I try to build it, I get an error message like:</p>
<pre><code>1.436 ERROR: Could not find a version that satisfies the requirement ale-py (from versions: none)
1.436 ERROR: No matching distribution found for ale-py
</code></pre>
<p>In my local environment (and also in a venv with the same Python version) there is no problem installing ale-py.</p>
<p>Is there a conflict between Docker and ale-py?</p>
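<p>One possible cause — an assumption, since the host architecture isn't shown: building on an ARM host (e.g. Apple Silicon) targets <code>linux/arm64</code>, and if ale-py publishes no wheel for that platform, pip reports exactly "from versions: none". Forcing an x86_64 build is one way to test this hypothesis:</p>

```dockerfile
# Hypothetical check: build for amd64 even on an ARM host (runs under emulation)
FROM --platform=linux/amd64 python:3.10
RUN pip install ale-py
```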
|
<python><docker><pip>
|
2024-09-08 08:17:07
| 2
| 715
|
Geonsu Kim
|
78,961,813
| 2,118,290
|
Using py7zr is there a way to set compression level?
|
<p>Looking at the documentation there is not much info on this.</p>
<pre><code>import py7zr
with py7zr.SevenZipFile("test.7z", 'w') as archive:
archive.write("somefile.txt")
</code></pre>
<p>There's a snippet at the bottom of <a href="https://py7zr.readthedocs.io/en/latest/api.html" rel="nofollow noreferrer">https://py7zr.readthedocs.io/en/latest/api.html</a> that talks about "filters":</p>
<pre><code>[{'id': FILTER_DELTA}, {'id': FILTER_LZMA2, 'preset': PRESET_DEFAULT}]
</code></pre>
<p>where I figured 'preset' might be related, but looking at the <a href="https://py7zr.readthedocs.io/en/latest/_modules/py7zr/py7zr.html#SevenZipFile" rel="nofollow noreferrer">source code</a> it does not appear so.</p>
<p>Is it always at the maximum compression level, or can I force it to the maximum?</p>
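<p>The <code>preset</code> field in those filters appears to mirror the standard library's <code>lzma</code> presets (0 = fastest … 9 = smallest), so passing something like <code>py7zr.SevenZipFile('test.7z', 'w', filters=[{'id': py7zr.FILTER_LZMA2, 'preset': 9}])</code> should control the level — treat that exact keyword usage as an assumption and check it against your py7zr version. The effect of the preset can be seen with stdlib <code>lzma</code> alone:</p>

```python
import lzma

# Same idea as py7zr's LZMA2 'preset': higher presets trade speed for size.
data = b"compressible " * 10_000

fast = lzma.compress(data, preset=0)
small = lzma.compress(data, preset=9)
print(len(fast), len(small))
```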
|
<python><compression><py7zr>
|
2024-09-08 07:22:39
| 0
| 674
|
Steven Venham
|
78,961,544
| 8,584,998
|
Improving TimescaleDB Compression Ratios; Using Brotli Parquet to Compare
|
<p>I've got about 435 million rows of the form:</p>
<p><code>time, sensor_id, sensor_value</code></p>
<p>where <code>time</code> is a unix timestamp rounded to one decimal point, <code>sensor_id</code> is an integer, and the <code>sensor_value</code> is a float. The timestamps are one second apart, and for each timestamp, there are around 250 unique sensor_ids.</p>
<p>In Python, I found that I was getting the best compression ratios if I compressed using multiple parquet files, spaced out by one hour at a time. Using this for each file:</p>
<p><code>df.sort_values(by=['sensor_id', 'time']).reset_index(drop=True).to_parquet('out.parquet', compression='brotli')</code></p>
<p>the total file size of all the parquet files combined is 120 MB.</p>
<p>I didn't expect to get quite as good of compression ratios in TimescaleDB in PostgreSQL since it doesn't use brotli compression, but I'm still getting compressed sizes that are a fair bit larger than I would like. <strong>The best I could get was 964 MB, which is about 8x bigger.</strong></p>
<p>Here's how I set up the table:</p>
<pre><code>CREATE TABLE IF NOT EXISTS public.sensor_data
(
"time" timestamp(1) without time zone NOT NULL,
sensor_id integer NOT NULL,
sensor_value NUMERIC(20, 3) NOT NULL
);
SELECT create_hypertable('sensor_data', 'time');
-- (insert all data)
ALTER TABLE sensor_data
SET
(
timescaledb.compress,
timescaledb.compress_segmentby='sensor_id',
timescaledb.compress_orderby='time'
);
SELECT compress_chunk(c) from show_chunks('sensor_data') c;
</code></pre>
<p>I've tried the following variations:</p>
<ul>
<li>Using a <code>double precision</code> type instead of <code>numeric(20, 3)</code> for <code>sensor_value</code></li>
<li>Using chunk sizes of one hour (similar to what I'm doing in Python) rather than the default 7 days</li>
<li>Using <code>numeric(10, 3)</code> instead of <code>numeric(20, 3)</code></li>
</ul>
<p>However, these changes either did nothing or made the compression ratios worse:</p>
<p><a href="https://i.sstatic.net/KnuqdmPG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnuqdmPG.png" alt="Table demonstrating compressed sizes for each hypertable variation" /></a></p>
<p><strong>Do you have any suggestions for things I can try to improve the compression ratio, or is this as good as it's going to get?</strong></p>
|
<python><postgresql><compression><parquet><timescaledb>
|
2024-09-08 03:18:39
| 0
| 1,310
|
EllipticalInitial
|
78,961,326
| 14,895,716
|
Jupyter Lab throws "json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)" from workspaces handler
|
<p>I'm on Windows. Have been using Jupyter Lab for a while with no problems. Very recently, every time I launched Jupyter, it errored out as such:</p>
<pre><code>[W 2024-09-08 02:02:06.730 ServerApp] 500 GET /lab/api/workspaces?1725750126696 (::1): Expecting value: line 1 column 1 (char 0)
[W 2024-09-08 02:02:06.731 LabApp] wrote error: 'Expecting value: line 1 column 1 (char 0)'
</code></pre>
<p>With the following traceback:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\user_name\anaconda3\envs\env_name\lib\site-packages\jupyterlab_server\workspaces_handler.py", line 198, in get
workspaces = self.manager.list_workspaces()
File "C:\Users\user_name\anaconda3\envs\env_name\lib\site-packages\jupyterlab_server\workspaces_handler.py", line 125, in list_workspaces
return _list_workspaces(self.workspaces_dir, prefix)
File "C:\Users\user_name\anaconda3\envs\env_name\lib\site-packages\jupyterlab_server\workspaces_handler.py", line 46, in _list_workspaces
workspace = _load_with_file_times(workspace_path)
File "C:\Users\user_name\anaconda3\envs\env_name\lib\site-packages\jupyterlab_server\workspaces_handler.py", line 59, in _load_with_file_times
workspace = json.load(fid)
File "C:\Users\user_name\anaconda3\envs\env_name\lib\json\__init__.py", line 293, in load
return loads(fp.read(),
File "C:\Users\user_name\anaconda3\envs\env_name\lib\json\__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "C:\Users\user_name\anaconda3\envs\env_name\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\user_name\anaconda3\envs\env_name\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>After this point, I can load notebooks and run code blocks, but the changes don't save (and only the first run's output is displayed: further runs do not affect the output even if the code is modified and a different output is expected).</p>
<p>The workspaces list (under the "Running Terminals and Kernels" tab) appears to be empty.
If relevant, this all occurs within an Anaconda virtual environment.</p>
<p>How might this be fixed?</p>
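<p>The traceback points at a workspace file that <code>json.load</code> cannot parse — an empty file reproduces "Expecting value: line 1 column 1 (char 0)" exactly. The workspaces folder is typically under the Jupyter data directory (something like <code>.jupyter\lab\workspaces</code> in the user profile — the exact path is an assumption; <code>jupyter --paths</code> shows it). A small sketch that finds unparsable files so they can be moved aside, demonstrated on a throwaway directory:</p>

```python
import json
import pathlib
import tempfile

def find_bad_workspace_files(workspace_dir):
    """Return names of files in workspace_dir that json cannot parse."""
    bad = []
    for path in sorted(pathlib.Path(workspace_dir).glob("*")):
        if path.is_file():
            try:
                json.loads(path.read_text(encoding="utf-8"))
            except (json.JSONDecodeError, UnicodeDecodeError):
                bad.append(path.name)
    return bad

# Demo: an empty file triggers the same JSONDecodeError as in the traceback.
with tempfile.TemporaryDirectory() as d:
    (pathlib.Path(d) / "ok.jupyterlab-workspace").write_text('{"data": {}}')
    (pathlib.Path(d) / "broken.jupyterlab-workspace").write_text("")
    bad = find_bad_workspace_files(d)
print(bad)
```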
|
<python><jupyter-notebook><anaconda><jupyter><jupyter-lab>
|
2024-09-07 23:21:10
| 1
| 627
|
Shay
|
78,961,304
| 1,227,860
|
ParaView: How to create a new vector based on an existing vector using Programmable Filter
|
<p>My VTI file loaded in Paraview has a 3D data array named <code>MyVelocity</code>. I want to create a new velocity vector as follows:</p>
<pre><code>unew = -u
vnew = -v
wnew = w
</code></pre>
<p>I am trying to use a <code>Programmable Filter</code> to create this array.</p>
<pre><code>import numpy
u=inputs[0].PointData["MyVelocity"][:,0]
v=inputs[0].PointData["MyVelocity"][:,1]
w=inputs[0].PointData["MyVelocity"][:,2]
output.PointData.append([-u,-v,w], "vector")
</code></pre>
<p>However, I am running into the following error.</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 22, in <module>
File "<string>", line 7, in RequestData
File "C:\Program Files\ParaView 5.13.0\bin\Lib\site-packages\vtkmodules\numpy_interface\dataset_adapter.py", line 763, in append
arr = numpyTovtkDataArray(copy, name)
File "C:\Program Files\ParaView 5.13.0\bin\Lib\site-packages\vtkmodules\numpy_interface\dataset_adapter.py", line 146, in numpyTovtkDataArray
vtkarray = numpy_support.numpy_to_vtk(array, array_type=array_type)
File "C:\Program Files\ParaView 5.13.0\bin\Lib\site-packages\vtkmodules\util\numpy_support.py", line 146, in numpy_to_vtk
vtk_typecode = get_vtk_array_type(z.dtype)
File "C:\Program Files\ParaView 5.13.0\bin\Lib\site-packages\vtkmodules\util\numpy_support.py", line 69, in get_vtk_array_type
raise TypeError(
TypeError: Could not find a suitable VTK type for object
</code></pre>
<p>Is this the proper way to create a new 3D vector?</p>
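<p>A plausible reading of the error (an assumption — untested inside ParaView): <code>append</code> receives a plain Python list of three arrays, for which VTK cannot infer a type, whereas a single (N, 3) array — e.g. something like <code>numpy.stack([-u, -v, w], axis=1)</code> — is the shape expected for a vector field. The stacking step itself, shown with plain lists standing in for the component arrays:</p>

```python
# Stand-in for numpy.stack([-u, -v, w], axis=1): combine three component
# sequences into one row-per-point (N, 3) structure.
u = [1.0, 2.0]
v = [3.0, 4.0]
w = [5.0, 6.0]

vectors = [[-ui, -vi, wi] for ui, vi, wi in zip(u, v, w)]
print(vectors)
```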
|
<python><paraview>
|
2024-09-07 22:59:47
| 1
| 2,367
|
shashashamti2008
|
78,961,298
| 1,196,031
|
Django password reset with front end UI
|
<p>I'm working on a project that uses Django as the backend and next.js/react.js on the front end. The issue I'm running into is trying to reset the password using the UI of my front end. All of the solutions I've found involve creating the UI on the backend server with templates which I would prefer not to do.</p>
<p>I'm able to get the following email from Django(dj_rest_auth) that provides a link with the server domain.</p>
<pre><code>Hello from MyApp!
You're receiving this email because you or someone else has requested a password reset for your user account.
It can be safely ignored if you did not request a password reset. Click the link below to reset your password.
http://localhost:8000/password-reset-confirm/k/cd01t1-dfsdfdfwer2/
</code></pre>
<p>I've tried to override the <code>PasswordResetSerializer</code> like so but it doesn't work.</p>
<pre><code>from dj_rest_auth.serializers import PasswordResetSerializer
class MyPasswordResetSerializer(PasswordResetSerializer):
def get_email_options(self) :
return {
'email_template_name': 'http://localhost:3000/new-password'
}
</code></pre>
<p>urls.py</p>
<pre><code> path('password-reset/', PasswordResetView.as_view()),
path('password-reset-confirm/<uidb64>/<token>/',
PasswordResetConfirmView.as_view(), name='password_reset_confirm'),
</code></pre>
<p>And in my <strong>settings.py</strong> file I have</p>
<pre><code>...
REST_AUTH_SERIALIZERS = {
'PASSWORD_RESET_SERIALIZER': 'myapp.serializers.MyPasswordResetSerializer'
}
...
</code></pre>
<p>How can I change the link in the reset password email to one that sends the user to the client to create their new password?</p>
|
<python><django><django-rest-framework>
|
2024-09-07 22:54:51
| 2
| 3,363
|
DroidT
|
78,961,172
| 2,595,546
|
Gmail draft attachment reads "noname"
|
<p>I am using the Gmail API, and have strongly followed the example they show in their documentation, which can be found here: <a href="https://github.com/googleworkspace/python-samples/blob/main/gmail/snippet/send%20mail/create_draft_with_attachment.py" rel="nofollow noreferrer">https://github.com/googleworkspace/python-samples/blob/main/gmail/snippet/send%20mail/create_draft_with_attachment.py</a></p>
<p>Thus, I create my email and the attachment as follows:</p>
<pre><code> message = EmailMessage()
message.set_content(email['text'])
message['To'] = email['email']
message['cc'] = "--------@gmail.com"
message['Subject'] = email['subject']
# Attach CV
cv = 'myfile.pdf'
cvtype, _ = mimetypes.guess_type(cv)
cvtypemain, cvtypesub = cvtype.split("/")
cvf = open(cv, "rb")
cv_data = cvf.read()
message.add_attachment(cv_data, cvtypemain, cvtypesub)
# Stuff
encodedmsg = base64.urlsafe_b64encode(message.as_bytes()).decode()
create_draft_request_body = {"message": {"raw": encodedmsg}}
draft = service.users().drafts().create(userId="me", body=create_draft_request_body).execute()
</code></pre>
<p>However, this adds the attachment without a name. I strongly suspect this has to do with the build_file_part function in the github snippet above - however, that function is never called in the draft creation part, so I'm not sure what I should be doing with it.</p>
<p>How do I add the attachment properly, using the Gmail APIs and hopefully without a ton of file manipulations? There's bound to be convenience functions for this?</p>
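<p>For what it's worth, the standard library's <code>EmailMessage.add_attachment</code> accepts a <code>filename</code> keyword, which sets the attachment's Content-Disposition filename — its absence is a plausible reason Gmail shows "noname". A self-contained sketch (no Gmail API needed to verify the message structure):</p>

```python
import mimetypes
from email.message import EmailMessage

msg = EmailMessage()
msg.set_content("body text")

cv = "myfile.pdf"
cvtype, _ = mimetypes.guess_type(cv)
maintype, subtype = cvtype.split("/")

# filename= is the key difference from the code above
msg.add_attachment(b"%PDF-1.4 fake bytes", maintype=maintype,
                   subtype=subtype, filename=cv)

attachment = next(msg.iter_attachments())
print(attachment.get_filename())  # -> myfile.pdf
```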
|
<python><email><gmail-api>
|
2024-09-07 21:22:04
| 1
| 868
|
Fly
|
78,961,150
| 3,018,860
|
Add "+" sign in positive values using astropy and matplotlib
|
<p>I'm using astropy, matplotlib, skyfield and astroquery to create sky charts with Python. With skyfield I do the stereographic projection of the sky. With astropy I use WCS, something like this:</p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure(figsize=[600/96, 500/96])
plt.subplots_adjust(left=0.1, right=0.75, top=0.9, bottom=0.1)
wcs = WCS(naxis=2)
wcs.wcs.crpix = [1, 1]
wcs.wcs.cdelt = np.array([-360 / np.pi, 360 / np.pi])
wcs.wcs.crval = [COORD.ra.deg, COORD.dec.deg]
wcs.wcs.ctype = ["RA---STG", "DEC--STG"]
ax = fig.add_subplot(111, projection=wcs)
angle = np.pi - FOV / 360.0 * np.pi
limit = np.sin(angle) / (1.0 - np.cos(angle))
ax.set_xlim(-limit, limit)
ax.set_ylim(-limit, limit)
ax.set_aspect('equal')
ax.coords.grid(True, color='white', linestyle='dotted')
ax.coords[0].set_axislabel(' ')
ax.coords[1].set_axislabel(' ')
</code></pre>
<p>I wish to add the plus sign (“+”) to the positive values on the declination axis (Y). I have tried many solutions, for example the method <code>set_major_formatter</code>, but it seems very limited because it only accepts a narrow set of format codes. With this line of code I can configure the decimal places, for example, but nothing else:</p>
<pre class="lang-py prettyprint-override"><code>ax.coords[1].set_major_formatter('d')
</code></pre>
<p>It’s frustrating because this is a very simple operation in programming terms; however, I’m not able to achieve it through the built-in functions that astropy and matplotlib provide.</p>
<p>Of course unsigned values are positive <em>per se</em>, however I'm preparing sky charts for all public usage, so adding the "+" sign would be helpful for the people.</p>
<p>I also tried to iterate over the values; then I would create a simple loop with a conditional for the positive numbers. However, this iteration seems impossible.</p>
<p>I also tried <code>matplotlib.ticker</code>, and it is not working at all, [I think] because of astropy's WCS axes.</p>
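<p>Whether WCSAxes' <code>coords[1].set_major_formatter</code> accepts a callable depends on the astropy version (an assumption worth checking against the installed docs), but the "+" itself comes free from Python's format spec, and on a plain matplotlib axis the function below can be wrapped in <code>matplotlib.ticker.FuncFormatter(signed_degrees)</code>:</p>

```python
def signed_degrees(value, pos=None):
    """Tick-formatter-style callable: '+' in the format spec forces an
    explicit sign on positive values; the degree sign is appended."""
    return f"{value:+g}\u00b0"

print(signed_degrees(23.4))
print(signed_degrees(-11.0))
```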
|
<python><matplotlib><astropy>
|
2024-09-07 21:13:46
| 1
| 2,834
|
Unix
|
78,960,966
| 9,890,027
|
Is there a way to specify default dependencies, or dependencies installed only if an extra is not specified, in pyproject.toml?
|
<p>I have a library mylib that depends on OpenCV. There are 2 distributions of OpenCV, opencv-python and opencv-python-headless. Either one works for mylib but only one can be installed, so for downstream code that uses OpenCV it is important for consumers to be able to choose which they are getting.</p>
<p>This can be done using 2 optional-dependencies. But then users cannot just install mylib; they must install mylib[opencv-python] or mylib[opencv-python-headless]. The semantics are poor.</p>
<pre class="lang-ini prettyprint-override"><code>[project]
dependencies = []
[project.optional-dependencies]
opencv-python = ["opencv-python"]
opencv-python-headless = ["opencv-python-headless"]
</code></pre>
<p>Is there a way I can have a default and say, only install if the extra is not specified? This has good semantics and would better guide naive consumers to what they should use.</p>
<pre class="lang-ini prettyprint-override"><code>[project]
dependencies = [
"opencv-python-headless; 'opencv-gui' not in extras",
]
[project.optional-dependencies]
opencv-gui = ["opencv-python"]
</code></pre>
<p>Although the specification's CFG is very dense, after reviewing <a href="https://packaging.python.org/en/latest/specifications/dependency-specifiers/#environment-markers" rel="nofollow noreferrer">environment markers</a>, I have some hope:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>Marker</th>
<th>Python equivalent</th>
<th>Sample values</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>extra</code></td>
<td>An error except when defined by the context interpreting the specification.</td>
<td><code>test</code></td>
</tr>
</tbody>
</table></div>
<p>Does that mean that this idea is left to build backends? I'm using hatchling, but am open to using anything that supports this.</p>
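<p>As an aside, the <code>extra</code> marker described in the table can be exercised directly with the <code>packaging</code> library (the extra names below mirror the question's and are purely illustrative). It shows that installers only ever bind <code>extra</code> to each <em>requested</em> extra, which is why "only if no extra was specified" has nothing to match against:</p>

```python
from packaging.markers import Marker

# Installers evaluate dependency markers once per requested extra,
# binding the `extra` variable to that extra's name.
marker = Marker('extra == "opencv-gui"')

# `pip install mylib[opencv-gui]` -> the marker matches:
print(marker.evaluate({"extra": "opencv-gui"}))  # True

# plain `pip install mylib` -> no extra name ever matches, so a
# dependency guarded by this marker is simply skipped:
print(marker.evaluate({"extra": ""}))  # False
```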
|
<python><python-packaging><pyproject.toml>
|
2024-09-07 19:47:25
| 0
| 1,899
|
Ders
|
78,960,741
| 4,181,335
|
'offset points' in matplotlib.pyplot.annotate gives unexpected results
|
<p>I am using the following code to generate a plot with a sine curve marked with 24 'hours' over 360 degrees. Each 'hour' is annotated; however, the arrow lengths decrease (shrivel?) from one annotation to the next, and even their direction is incorrect.</p>
<p>The X axis spans 360 degrees whereas the Y axis spans 70 degrees. The <code>print</code> statement verifies that the arrows on 6 and 18 hours have the same length and are vertical, according to the offsets specified. This is not so as seen in the resulting plot:
<a href="https://i.sstatic.net/OlV5Jb41.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OlV5Jb41.png" alt="enter image description here" /></a></p>
<p>Matplotlib version = 3.9.2; Numpy version = 2.1.1</p>
<p>Here is the python code:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
arrow_style = {'arrowstyle': '-|>'}
def make_plot():
    fig, ax = plt.subplots()
    ax.axis([0,+360,-35,+35])
    ax.set(xlabel = 'X (degrees)', ylabel = 'Y (degrees)', title='Vanishing arrow length example')
    degree = np.pi / 180.0 # degrees to radians
    arrow_angle = 72.0 # degrees
    arrow_length = 27.0
    eps = 23.43927945
    eps_x = np.linspace(0, 360, 200)
    eps_y = -eps * np.sin(2 * np.pi * eps_x / 360)
    ax.plot(eps_x, eps_y, 'r')
    hr_x = np.linspace(0, 360, 25)
    hr_y = -eps * np.sin(2 * np.pi * hr_x / 360)
    i = 0
    for x in hr_x:
        ax.plot(hr_x[i], hr_y[i],'bo')
        if hr_y[i] > 0 and arrow_angle > 0: arrow_angle = -75 # degrees
        arrow_x = np.cos(arrow_angle*degree) * arrow_length
        arrow_y = np.sin(arrow_angle*degree) * arrow_length
        ax.annotate(str(i), xy=(hr_x[i], hr_y[i]), xytext=(arrow_x, arrow_y), \
                    xycoords='data', textcoords='offset points', arrowprops=arrow_style)
        print("arrow_x {:.2f} arrow_y {:.2f} arrow_angle {:.1f} at {} hours" \
              .format(arrow_x, arrow_y, arrow_angle, i))
        if hr_y[i] <= 0: arrow_angle += 3.0
        if hr_y[i] > 0: arrow_angle -= 3.0
        i += 1
    ax.grid()
    ax.axhline(0.0)
    ax.axvline(180.0)
    plt.show()
    return fig
make_plot().savefig('vanishing_arrow_length.png')
</code></pre>
<p>I have only a few days' experience with matplotlib, so I guess this is a really simple user error. However, the documentation is no help to me in this case.</p>
|
<python><matplotlib>
|
2024-09-07 17:38:37
| 1
| 343
|
Aendie
|
78,960,710
| 6,923,568
|
How to optimally stretch and uniformize 1-dimensional integer points?
|
<p><strong>The data:</strong></p>
<p>Let <code>X</code> be an array of labeled integers roughly in the domain <code>[0, 2000000]</code>. The size <code>|X|</code> is around 3000 elements. The labels are in the domain <code>[A, B, C]</code>.</p>
<p>For example: <code>[(13, A), (16, B), (32, A), (84, C), ...]</code></p>
<hr />
<p><strong>The constraints:</strong></p>
<p>Every data point can be moved by <code>±50</code> on the axis, but the final ordering <em>must</em> respect the following criteria:</p>
<ul>
<li>The integer values must remain in increasing order;</li>
<li>The labels must remain in the same order as the initial array.</li>
<li>The integer values must remain integers, no floats.</li>
</ul>
<hr />
<p><strong>The goal:</strong></p>
<p>There are 2 metrics to optimize with variable weight:</p>
<ol>
<li>The variance of the gaps between consecutive integers, to minimize. <em>(Weighted strongly, say 1.0)</em></li>
<li>The average gap between consecutive integers, to maximize. <em>(Weighted lightly, say 0.1)</em></li>
</ol>
<hr />
<p><strong>What I've tried:</strong></p>
<p>I initially went for <code>scipy.optimize.minimize()</code>, but it turns out that doesn't work with integer-only variables:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import minimize

def stretch_optimization(initial_array: np.ndarray, weight_variance=1.0, weight_average=0.1):
    ms_data = initial_array[:, 0]

    def objective_function(ms_data):
        gaps = np.diff(ms_data)
        # Get gaps variance, we want the gaps to be as uniform as possible.
        gaps_variance = np.var(gaps)
        # Get gaps average, we want to maximize the gaps size.
        gaps_average = np.mean(gaps)
        return weight_variance * gaps_variance - weight_average * gaps_average

    bounds = [(ms - 50, ms + 50) for ms, _ in initial_array]

    def order_constraint(ms):
        return np.diff(ms)

    constraints = [{'type': 'ineq', 'fun': order_constraint}]
    result = minimize(objective_function, ms_data, bounds=bounds, constraints=constraints)  # returns an OptimizeResult, not an ndarray
</code></pre>
<p>I'm not entirely sure where to go from there. I read that maybe I should use <code>scipy.optimize.milp</code> (?) but it's not clear to me what I should do with that.</p>
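<p>For reference, here is a minimal sketch of a <code>scipy.optimize.milp</code> call with a toy objective (not the gap-variance problem above — variance is quadratic, so it cannot go into a <em>linear</em> program directly and would have to be linearized first, e.g. by bounding pairwise gap differences):</p>

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy MILP: maximize x0 + 2*x1 (milp minimizes, so negate the costs)
# subject to x0 + x1 <= 3, 0 <= x <= 2, x integer.
c = np.array([-1.0, -2.0])
res = milp(
    c,
    constraints=LinearConstraint(np.array([[1.0, 1.0]]), ub=3.0),
    integrality=np.ones_like(c),   # 1 marks a variable as integer
    bounds=Bounds(lb=0, ub=2),
)
print(res.x)  # [1. 2.]
```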
|
<python><arrays><optimization><variance>
|
2024-09-07 17:27:40
| 1
| 1,509
|
Mat
|
78,960,708
| 2,950,593
|
pipreqs django UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 81: invalid start byte
|
<p>I am trying to make a requirements.txt file but get an error.
What do I do?</p>
<pre><code> (.venv) root@vm-89f2b1ea:~/code/aigenback# pipreqs ~/code/aigenback
INFO: Not scanning for jupyter notebooks.
<unknown>:165: SyntaxWarning: invalid escape sequence '\S'
<unknown>:166: SyntaxWarning: invalid escape sequence '\['
<unknown>:207: SyntaxWarning: invalid escape sequence '\['
<unknown>:456: SyntaxWarning: invalid escape sequence '\S'
Traceback (most recent call last):
File "/root/code/aigenback/.venv/bin/pipreqs", line 8, in <module>
sys.exit(main())
^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 609, in main
init(args)
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 533, in init
candidates = get_all_imports(
^^^^^^^^^^^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 136, in get_all_imports
contents = read_file_content(file_name, encoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 181, in read_file_content
contents = f.read()
^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 81: invalid start byte
</code></pre>
<p>I also tried:</p>
<pre><code>(.venv) root@vm-89f2b1ea:~/code/aigenback# pipreqs --encoding utf-8
INFO: Not scanning for jupyter notebooks.
<unknown>:165: SyntaxWarning: invalid escape sequence '\S'
<unknown>:166: SyntaxWarning: invalid escape sequence '\['
<unknown>:207: SyntaxWarning: invalid escape sequence '\['
<unknown>:456: SyntaxWarning: invalid escape sequence '\S'
Traceback (most recent call last):
File "/root/code/aigenback/.venv/bin/pipreqs", line 8, in <module>
sys.exit(main())
^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 609, in main
init(args)
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 533, in init
candidates = get_all_imports(
^^^^^^^^^^^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 136, in get_all_imports
contents = read_file_content(file_name, encoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/code/aigenback/.venv/lib/python3.12/site-packages/pipreqs/pipreqs.py", line 181, in read_file_content
contents = f.read()
^^^^^^^^
File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb1 in position 81: invalid start byte
</code></pre>
|
<python>
|
2024-09-07 17:26:29
| 1
| 9,627
|
user2950593
|
78,960,340
| 1,075,374
|
How can I follow an HTTP redirect?
|
<p>I have 2 different views that seem to work on their own, but when I try to use them together with an HTTP redirect, that fails.</p>
<p>The context is pretty straightforward, I have a view that creates an object and another view that updates this object, both with the same form.</p>
<p>The only thing that is a bit different is that we use multiple sites. So we check whether the site that wants to update the object is the site that created it. If yes, it does a normal update of the object. If not (that's the part that does not work here), I HTTP-redirect the update view to the create view and pass along the object so the new site can create a new object based on those initial values.</p>
<p>Here is the test to create a new resource (passes successfully) :</p>
<pre><code>@pytest.mark.resource_create
@pytest.mark.django_db
def test_create_new_resource_and_redirect(client):
data = {
"title": "a title",
"subtitle": "a sub title",
"status": 0,
"summary": "a summary",
"tags": "#tag",
"content": "this is some updated content",
}
with login(client, groups=["example_com_staff"]):
response = client.post(reverse("resources-resource-create"), data=data)
resource = models.Resource.on_site.all()[0]
assert resource.content == data["content"]
assert response.status_code == 302
</code></pre>
<p>Here is the test to create a new resource from an existing object (passes successfully) :</p>
<pre><code>@pytest.mark.resource_create
@pytest.mark.django_db
def test_create_new_resource_from_pushed_resource_and_redirect(request, client):
existing_resource = baker.make(models.Resource)
other_site = baker.make(Site)
existing_resource.site_origin = other_site
existing_resource.sites.add(other_site)
our_site = get_current_site(request)
existing_resource.sites.add(our_site)
original_content = "this is some original content"
existing_resource.content = original_content
existing_resource.save()
data = {
"title": "a title",
"subtitle": "a sub title",
"status": 0,
"summary": "a summary",
"tags": "#tag",
"content": "this is some updated content",
}
url = reverse("resources-resource-create-from-shared", args=[existing_resource.id])
with login(client, groups=["example_com_staff"]):
response = client.post(url, data=data)
assert response.status_code == 302
existing_resource.refresh_from_db()
assert existing_resource.content == original_content
assert our_site not in existing_resource.sites.all()
new_resource = models.Resource.on_site.get()
assert new_resource.content == data["content"]
</code></pre>
<p>Here is the create view :</p>
<pre><code>@login_required
def resource_create(request, pushed_resource_id=None):
"""
Create new resource
In case of a resource that is pushed from a different site
create a new resource based on the pushed one.
"""
has_perm_or_403(request.user, "sites.manage_resources", request.site)
try:
pushed_resource = models.Resource.objects.get(id=pushed_resource_id)
pushed_resource_as_dict = model_to_dict(pushed_resource)
initial_data = pushed_resource_as_dict
except ObjectDoesNotExist:
pushed_resource = None
initial_data = None
if request.method == "POST":
form = EditResourceForm(request.POST, initial=initial_data)
if form.is_valid():
resource = form.save(commit=False)
resource.created_by = request.user
with reversion.create_revision():
reversion.set_user(request.user)
resource.save()
resource.sites.add(request.site)
if pushed_resource:
pushed_resource.sites.remove(request.site)
pushed_resource.save()
resource.site_origin = request.site
resource.save()
form.save_m2m()
next_url = reverse("resources-resource-detail", args=[resource.id])
return redirect(next_url)
else:
form = EditResourceForm()
return render(request, "resources/resource/create.html", locals())
</code></pre>
<p>Here is the test to update the resource from the original site (passes successfully) :</p>
<pre><code>@pytest.mark.resource_update
@pytest.mark.django_db
def test_update_resource_from_origin_site_and_redirect(request, client):
resource = baker.make(models.Resource)
our_site = get_current_site(request)
resource.site_origin = our_site
resource.save()
previous_update = resource.updated_on
url = reverse("resources-resource-update", args=[resource.id])
data = {
"title": "a title",
"subtitle": "a sub title",
"status": 0,
"summary": "a summary",
"tags": "#tag",
"content": "this is some updated content",
}
with login(client, groups=["example_com_staff"]):
response = client.post(url, data=data)
assert response.status_code == 302
resource.refresh_from_db()
assert resource.content == data["content"]
assert resource.updated_on > previous_update
</code></pre>
<p>And finally the test to update from a different site that should create a new resource from the original one (that one fails):</p>
<pre><code>@pytest.mark.resource_update
@pytest.mark.django_db
def test_update_resource_from_non_origin_site_and_redirect(request, client):
original_resource = baker.make(models.Resource)
our_site = get_current_site(request)
other_site = baker.make(Site)
original_resource.sites.add(our_site, other_site)
original_resource.site_origin = other_site
previous_update = original_resource.updated_on
original_content = "this is some original content"
original_resource.content = original_content
original_resource.save()
assert models.Resource.on_site.all().count() == 1
url = reverse("resources-resource-update", args=[original_resource.id])
updated_data = {
"title": "a title",
"subtitle": "a sub title",
"status": 0,
"summary": "a summary",
"tags": "#tag",
"content": "this is some updated content",
}
with login(client, groups=["example_com_staff"]):
response = client.post(url, data=updated_data)
assert response.status_code == 302
original_resource.refresh_from_db()
assert original_resource.content == original_content
assert original_resource.updated_on == previous_update
assert other_site in original_resource.sites.all()
assert our_site not in original_resource.sites.all()
assert models.Resource.on_site.all().count() == 1
new_resource = models.Resource.on_site.get()
assert new_resource.content == updated_data["content"]
assert other_site not in new_resource.sites.all()
assert our_site in new_resource.sites.all()
</code></pre>
<p>What happens is that no new object gets created here and the original object is modified instead.</p>
<p>Here is the update view :</p>
<pre><code>@login_required
def resource_update(request, resource_id=None):
"""Update informations for resource"""
has_perm_or_403(request.user, "sites.manage_resources", request.site)
resource = get_object_or_404(models.Resource, pk=resource_id)
if resource.site_origin is not None and resource.site_origin != request.site:
pushed_resource_id = resource.id
next_url = reverse("resources-resource-create-from-shared",
args=[pushed_resource_id]
)
return redirect(next_url)
next_url = reverse("resources-resource-detail", args=[resource.id])
if request.method == "POST":
form = EditResourceForm(request.POST, instance=resource)
if form.is_valid():
resource = form.save(commit=False)
resource.updated_on = timezone.now()
with reversion.create_revision():
reversion.set_user(request.user)
resource.save()
form.save_m2m()
return redirect(next_url)
else:
form = EditResourceForm(instance=resource)
return render(request, "resources/resource/update.html", locals())
</code></pre>
<p>And the model form :</p>
<pre><code>class EditResourceForm(forms.ModelForm):
"""Create and update form for resources"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Queryset needs to be here since on_site is dynamic and form is read too soon
self.fields["category"] = forms.ModelChoiceField(
queryset=models.Category.on_site.all(),
empty_label="(Aucune)",
required=False,
)
self.fields["contacts"] = forms.ModelMultipleChoiceField(
queryset=addressbook_models.Contact.on_site.all(),
required=False,
)
# Try to load the Markdown template into 'content' field
try:
tmpl = get_template(
template_name="resources/resource/create_md_template.md"
)
self.fields["content"].initial = tmpl.render()
except TemplateDoesNotExist:
pass
content = MarkdownxFormField(label="Contenu")
title = forms.CharField(
label="Titre", widget=forms.TextInput(attrs={"class": "form-control"})
)
subtitle = forms.CharField(
label="Sous-Titre",
widget=forms.TextInput(attrs={"class": "form-control"}),
required=False,
)
summary = forms.CharField(
label="Résumé bref",
widget=forms.Textarea(
attrs={"class": "form-control", "rows": "3", "maxlength": 400}
),
required=False,
)
class Meta:
model = models.Resource
fields = [
"title",
"status",
"subtitle",
"summary",
"tags",
"category",
"departments",
"content",
"contacts",
"expires_on",
]
</code></pre>
<p>Any idea about what I did wrong is welcome. And if you think a better strategy should be employed, feel free to comment.</p>
|
<python><django><pytest>
|
2024-09-07 14:12:25
| 1
| 5,865
|
Bastian
|
78,960,293
| 7,387,749
|
How to Load a ‘Learner’ Model with a Custom Loss Function in a Flask Application
|
<p>I am currently working on loading a learner model from a <code>pickle</code> file. This model includes a custom loss function and needs to be integrated into a Flask application. The loss function is in the same file as the <code>Flask</code> app.</p>
<p>However, I keep encountering the following error:</p>
<pre><code>Custom classes or functions exported with your `Learner` not available in namespace.\Re-declare/import before loading:
Can't get attribute 'combined_loss' on <module '__main__' from 'C:\\Users\\Desktop\\Meteor\\flask_app\\venv\\Scripts\\flask.exe\\__main__.py'>
</code></pre>
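<p>The root cause is how <code>pickle</code> stores functions: by qualified name, resolved again at load time. So <code>combined_loss</code> must be resolvable in the module that pickled it (here <code>__main__</code>) before <code>load_learner</code> runs. A minimal stdlib sketch of that rule (the fastai specifics are the question's; this only demonstrates the pickling behaviour):</p>

```python
import pickle

def combined_loss(pred, target):   # stand-in for the custom loss
    return abs(pred - target)

# pickle stores a module-level function by reference (module name +
# qualified name); the payload contains the *name*, not the code.
payload = pickle.dumps(combined_loss)

# Unpickling looks that name up again -- if it is missing from the
# module, you get the "Can't get attribute" error in the traceback.
restored = pickle.loads(payload)
print(restored is combined_loss)  # True
```

<p>Because the app is launched through <code>flask.exe</code>, <code>__main__</code> is Flask's entry script rather than your file; a commonly suggested workaround (an assumption here, not verified against this setup) is <code>import __main__; __main__.combined_loss = combined_loss</code> before calling <code>load_learner</code>.</p>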
|
<python><flask><pickle><fast-ai>
|
2024-09-07 13:53:01
| 0
| 4,980
|
Simone
|
78,960,111
| 2,754,510
|
Singular matrix during B Spline interpolation
|
<p>According to the literature about B Splines, including <a href="https://mathworld.wolfram.com/B-Spline.html" rel="nofollow noreferrer">Wolfram Mathworld</a>, the condition for Cox de Boor's recursive function states that:</p>
<p><a href="https://i.sstatic.net/TMcrjZZJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TMcrjZZJ.png" alt="enter image description here" /></a></p>
<p>In Python, this would translate to:</p>
<pre><code>if (d_ == 0):
    if ( knots_[k_] &lt;= t_ &lt; knots_[k_+1]):
        return 1.0
    return 0.0
</code></pre>
<p>where:</p>
<ul>
<li><code>d_</code>: degree of the curve</li>
<li><code>knots_</code>: knot vector</li>
<li><code>k_</code>: index of the knot</li>
<li><code>t_</code>: parameter value {0.0,...,1.0} (reparametrized)</li>
</ul>
<p>However, this seems to generate a Singular matrix, when creating the linear system intended for <a href="https://pages.mtu.edu/%7Eshene/COURSES/cs3621/NOTES/INT-APP/CURVE-INT-global.html" rel="nofollow noreferrer">interpolation</a>, not <a href="https://www.cl.cam.ac.uk/teaching/1999/AGraphHCI/SMAG/node4.html" rel="nofollow noreferrer">approximation</a>. For example, with 4 points:</p>
<pre><code> A = [[1. 0. 0. 0. ]
[0.2962963 0.44444444 0.22222222 0.03703704]
[0.03703704 0.22222222 0.44444444 0.2962963 ]
[0. 0. 0. **0.** ]] //The last element (bottom-right) should have been 1.0
# Error: LinAlgError: file C:\Users\comevo\AppData\Roaming\Python\Python310\site-packages\numpy\linalg\_linalg.py line 104: Singular matrix
</code></pre>
<p>If I change the second part of the condition to:</p>
<pre><code>if (d_ == 0):
    if ( knots_[k_] &lt;= t_ &lt;= knots_[k_+1]):  # using &lt;= instead of &lt;
        return 1.0
    return 0.0
</code></pre>
<p>I get the correct matrix and the correct spline.</p>
<pre><code>A = [[1. 0. 0. 0. ]
[0.2962963 0.44444444 0.22222222 0.03703704]
[0.03703704 0.22222222 0.44444444 0.2962963 ]
[0. 0. 0. 1. ]] // OK
</code></pre>
<p>Why does the code need to deviate from the mathematical condition in order to get the correct results and the iterator reaching the last element?</p>
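<p>This is a well-known edge case: with a clamped knot vector, the strict half-open test means <em>no</em> span contains the right endpoint of the domain, so every basis function is 0 at <code>t = 1.0</code> and the last interpolation row is all zeros — hence the singular matrix. A tiny check (the knot vector here is a hand-written clamped cubic one, standing in for <code>knotvector.generate(3, 4)</code>):</p>

```python
# Clamped cubic knot vector for 4 control points.
knots = [0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0]

def spans_containing(t, knots):
    """Indices k with knots[k] <= t < knots[k+1] (half-open spans)."""
    return [k for k in range(len(knots) - 1) if knots[k] <= t < knots[k + 1]]

print(spans_containing(0.5, knots))  # [3]  interior points are fine
print(spans_containing(1.0, knots))  # []   t = 1.0 lies in NO half-open span
```

<p>Standard implementations handle this by special-casing the last knot span (e.g. algorithm A2.1 in <em>The NURBS Book</em> treats the maximum parameter value separately) or by clamping <code>t</code> just below 1; widening the degree-0 test to <code>&lt;=</code>, as you did, has the same effect at the endpoint.</p>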
<p>See below the complete example code:</p>
<pre><code>import numpy as np
import math
from geomdl import knotvector
def cox_de_boor( d_, t_, k_, knots_):
    if (d_ == 0):
        if ( knots_[k_] &lt;= t_ &lt;= knots_[k_+1]):
            return 1.0
        return 0.0

    denom_l = (knots_[k_+d_] - knots_[k_])
    left = 0.0
    if (denom_l != 0.0):
        left = ((t_ - knots_[k_]) / denom_l) * cox_de_boor(d_-1, t_, k_, knots_)

    denom_r = (knots_[k_+d_+1] - knots_[k_+1])
    right = 0.0
    if (denom_r != 0.0):
        right = ((knots_[k_+d_+1] - t_) / denom_r) * cox_de_boor(d_-1, t_, k_+1, knots_)

    return left + right

def interpolate( d_, P_, n_, ts_, knots_ ):
    A = np.zeros((n_, n_))
    for i in range(n_):
        for j in range(n_):
            A[i, j] = cox_de_boor(d_, ts_[i], j, knots_)
    control_points = np.linalg.solve(A, P_)
    return control_points

def create_B_spline( d_, P_, t_, knots_):
    sum = MVector()  # MVector is Maya's maya.api.OpenMaya.MVector; outside Maya, np.zeros(3) works here
    for i in range( len(P_) ):
        sum += P_[i] * cox_de_boor(d_, t_, i, knots_)
    return sum

def B_spline( points_ ):
    d = 3
    P = np.array( points_ )
    n = len( P )
    ts = np.linspace( 0.0, 1.0, n )
    knots = knotvector.generate( d, n ) # len = n + d + 1
    control_points = interpolate( d, P, n, ts, knots)

    crv_pnts = []
    for i in range(10):
        t = float(i) / 9
        crv_pnts.append( create_B_spline(d, control_points, t, knots) )

    return crv_pnts
control_points = [ [float(i), math.sin(i), 0.0] for i in range(8) ]
cps = B_spline( control_points )
</code></pre>
<p>Result:</p>
<p><a href="https://i.sstatic.net/65VFd8aB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65VFd8aB.png" alt="enter image description here" /></a></p>
|
<python><numpy><spline>
|
2024-09-07 12:23:41
| 1
| 3,276
|
Constantinos Glynos
|
78,960,008
| 4,718,221
|
Dataclass inheritance variables not working
|
<p>I have a dataclass subclass that is just inheriting variables. I know the keyword-only variables need to come last, but even with that, the order of variables in the subclass seems to have changed. I don't understand what the error message is telling me.</p>
<pre><code>@dataclass
class ZooAnimals():
food_daily_kg: int
price_food: float
area_required: float
name: str = field(default='Zebra', kw_only=True)
c = ZooAnimals(565, 40, 10, name='Monkey')
Out: ZooAnimals(food_daily_kg=565, price_food=40, area_required=10, name='Monkey')
</code></pre>
<p>Now the subclass</p>
<pre><code>@dataclass
class Cats(ZooAnimals):
def __init__(self, food_daily_kg, price_food, area_required, name, meowing):
meowing: str
super().__init__()
z = Cats(465, 30, 10, 'Little Bit', name='Leopard')
Out: TypeError: Cats.__init__() got multiple values for argument 'name'
</code></pre>
|
<python><inheritance><python-dataclasses>
|
2024-09-07 11:25:54
| 3
| 604
|
user4718221
|
78,959,909
| 7,347,925
|
Calculate the polygon angle based on point inside?
|
<p>I have one polygon and want to calculate the wind direction based on the long boundary and a point inside. The point is the wind's starting point, and the direction follows the long axis toward the short edge.</p>
<p>Here's an example of my idea: 1) find the long boundary of the polygon; 2) find the endpoint of that boundary closest to the wind's starting point; 3) calculate the wind direction.</p>
<p>Is there any better way to do so?</p>
<pre><code>from shapely import Point, Polygon
from shapely.geometry import LineString
import numpy as np
import matplotlib.pyplot as plt
coords = ((-1, 0.), (0., 1.), (0.5, 1.), (-0.5, 0.), (-1, 0.))
p = Polygon(coords)
p1 = (0, 0.9)
p2 = (-0.5, 0.2)
fig, axs = plt.subplots()
axs.plot(*p.exterior.xy)
axs.scatter(p1[0], p1[1], c='r')
axs.quiver(p1[0], p1[1], -0.5, -0.5, angles='xy', scale_units='xy', scale=1, color='r')
axs.quiver(p2[0], p2[1], 0.5, 0.5, angles='xy', scale_units='xy', scale=1, color='b')
axs.scatter(p2[0], p2[1], c='b')
axs.set_xlim(-2, 2)
axs.set_ylim(-2, 2)
def calculate_wind_direction(p1, p2):
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]

    # Calculate the angle in radians
    angle = np.arctan2(dy, dx)

    # Convert to degrees
    degrees = np.degrees(angle)

    # Adjust to meteorological convention (clockwise from north)
    meteorological_angle = (90 - degrees) % 360

    # Convert to "from" direction
    from_direction = (meteorological_angle + 180) % 360

    return from_direction
# https://stackoverflow.com/questions/20474549/extract-points-coordinates-from-a-polygon-in-shapely
l = p.boundary # Extract the rectangle boundary as a line
coords = [c for c in l.coords] # List the line coordinates
segments = [LineString([a, b]) for a, b in zip(coords, coords[1:])] # Create the four side lines.
longest_segment = max(segments, key=lambda x: x.length) # Find the longest line (one of the long sides of the rotated rectangle)
p_start, p_end = [c for c in longest_segment.coords] # List the start and end coordinates of it
# set the wind direction from the closest point
if Point(p_start).distance(Point(p1)) &lt; Point(p_end).distance(Point(p1)):
    wdir = calculate_wind_direction(p_start, p_end)
else:
    wdir = calculate_wind_direction(p_end, p_start)
</code></pre>
<p><a href="https://i.sstatic.net/ETPCH7ZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ETPCH7ZP.png" alt="wind" /></a></p>
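<p>The angle conventions in <code>calculate_wind_direction</code> can be sanity-checked in isolation (this re-implements the question's function verbatim): movement due east (math angle 0°) should read as a wind <em>from</em> 270°, and movement due north as a wind <em>from</em> 180°:</p>

```python
import numpy as np

def calculate_wind_direction(p1, p2):
    # math-convention angle -> meteorological "toward" -> "from"
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    degrees = np.degrees(np.arctan2(dy, dx))
    meteorological_angle = (90 - degrees) % 360
    return (meteorological_angle + 180) % 360

print(calculate_wind_direction((0, 0), (1, 0)))  # 270.0  moving east = wind from the west
print(calculate_wind_direction((0, 0), (0, 1)))  # 180.0  moving north = wind from the south
```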
|
<python><polygon><shapely><point-in-polygon>
|
2024-09-07 10:19:32
| 0
| 1,039
|
zxdawn
|
78,959,850
| 4,451,315
|
Change x-label after chart has already been created
|
<p>Say I have:</p>
<pre class="lang-py prettyprint-override"><code>import altair as alt
import polars as pl
chart = (
alt.Chart(pl.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]}))
.mark_point()
.encode(x="a", y="b")
)
</code></pre>
<p>Suppose I have a function which receives this <code>chart</code> object, and within that function, I'd like to change the x=axis to be <code>'a (£)'</code>. How can I do that?</p>
<p>I can change the title with <code>chart.properties(title='my title')</code> but can't find a way to change the x-axis label text</p>
<hr />
<p>One way I've found to do this is</p>
<pre><code>chart.encoding.x.title = 'a (£)'
</code></pre>
<p>but this is "in-place" - is there a way to do it such that it won't modify the original chart?</p>
|
<python><altair>
|
2024-09-07 09:46:10
| 2
| 11,062
|
ignoring_gravity
|
78,959,829
| 3,254,920
|
Python Script Works from Command Line but Fails When Called from C# in IIS
|
<p>I have an MVC application written in C# and hosted on IIS. In my C# code, I am trying to call a Python script to access the Scopus website and retrieve user information. The Python script works perfectly when run from the command line, but when I call it from my C# code, it throws an error. I faced some permission problems, so I manually created some folders (pip, python, ...) and granted the permissions to the IIS user.</p>
<pre><code>Error in python script: Traceback (most recent call last):
File "C:\inetpub\site\scopus.py", line 15, in <module>
verify_success(sb)
File "C:\inetpub\site\scopus.py", line 9, in verify_success
sb.assert_element('//span[contains(text(), "Author Search")]', timeout=30)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\base_case.py", line 9428, in assert_element
self.wait_for_element_visible(selector, by=by, timeout=timeout)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\base_case.py", line 8853, in wait_for_element_visible
return page_actions.wait_for_element_visible(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\page_actions.py", line 496, in wait_for_element_visible
timeout_exception(NoSuchElementException, message)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\page_actions.py", line 254, in timeout_exception
raise exc(msg)
seleniumbase.common.exceptions.NoSuchElementException: Message:
Element {//span[contains(text(), "Author Search")]} was not present after 30 seconds!
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\inetpub\site\scopus.py", line 24, in <module>
verify_success(sb)
File "C:\inetpub\site\scopus.py", line 9, in verify_success
sb.assert_element('//span[contains(text(), "Author Search")]', timeout=30)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\base_case.py", line 9428, in assert_element
self.wait_for_element_visible(selector, by=by, timeout=timeout)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\base_case.py", line 8853, in wait_for_element_visible
return page_actions.wait_for_element_visible(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\page_actions.py", line 496, in wait_for_element_visible
timeout_exception(NoSuchElementException, message)
File "C:\Windows\system32\config\systemprofile\AppData\Roaming\Python\Python312\site-packages\seleniumbase\fixtures\page_actions.py", line 254, in timeout_exception
raise exc(msg)
seleniumbase.common.exceptions.NoSuchElementException: Message:
Element {//span[contains(text(), "Author Search")]} was not present after 30 seconds!
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\inetpub\site\scopus.py", line 26, in <module>
raise Exception("Detected!")
Exception: Detected!
</code></pre>
<p><strong>Python code</strong></p>
<pre><code>from seleniumbase import SB
from bs4 import BeautifulSoup
import sys
import json
def verify_success(sb):
    sb.assert_element('//span[contains(text(), "Author Search")]', timeout=10)
    sb.sleep(3)

userId = sys.argv[1]

with SB(uc=True) as sb:
    sb.uc_open_with_reconnect(f"https://www.scopus.com/authid/detail.uri?authorId={userId}", 3)
    try:
        verify_success(sb)
    except Exception:
        if sb.is_element_visible('input[value*="Verify"]'):
            sb.uc_click('input[value*="Verify"]')
        else:
            sb.uc_gui_click_captcha()
        try:
            verify_success(sb)
        except Exception:
            raise Exception("Detected!")
    finally:
        page_source = sb.get_page_source()
        document = BeautifulSoup(page_source, 'html.parser')

        citations_node = document.select_one("div[data-testid='metrics-section-citations-count'] span[data-testid='unclickable-count']")
        documents_node = document.select_one("div[data-testid='metrics-section-document-count'] span[data-testid='unclickable-count']")
        hindex_node = document.select_one("div[data-testid='metrics-section-h-index'] span[data-testid='unclickable-count']")
        name_node = document.select_one("h1[data-testid='author-profile-name'] strong")
        institute_node = document.select_one("span[data-testid='authorInstitute']")

        scopus_information = {
            "CitationsNumber": int(citations_node.text.replace(",", "") if citations_node else "0"),
            "Documents": int(documents_node.text if documents_node else "0"),
            "HIndex": int(hindex_node.text if hindex_node else "0"),
            "Name": name_node.text.strip() if name_node else "",
            "Institute": institute_node.text.strip() if institute_node else ""
        }

        print(json.dumps(scopus_information, indent=4))
</code></pre>
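<p>Incidentally, the BeautifulSoup extraction above can be tested without Selenium or IIS by feeding it a static snippet — useful for ruling the parsing out as the failure point. The <code>data-testid</code> attribute names are the question's; the values below are made up:</p>

```python
from bs4 import BeautifulSoup

# Static HTML imitating the attributes the selectors look for.
html = """
<div data-testid="metrics-section-citations-count">
  <span data-testid="unclickable-count">1,234</span>
</div>
<h1 data-testid="author-profile-name"><strong> Jane Doe </strong></h1>
"""

doc = BeautifulSoup(html, "html.parser")
citations_node = doc.select_one(
    "div[data-testid='metrics-section-citations-count'] "
    "span[data-testid='unclickable-count']")
name_node = doc.select_one("h1[data-testid='author-profile-name'] strong")

print(int(citations_node.text.replace(",", "")))  # 1234
print(name_node.text.strip())                     # Jane Doe
```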
<p><strong>C# code:</strong></p>
<pre><code>public class ScopusInformation
{
    public string? Name { get; set; }
    public int? CitationsNumber { get; set; }
    public int? Documents { get; set; }
    public int? HIndex { get; set; }
    public string? Institute { get; set; }
    public string? ScopusId { get; set; }
}

public interface IScopusService
{
    Task<ScopusInformation?> GetArticlesForUser(string userId);
}

public class ScopusServiceUsingPython(ILogger<ScopusService> logger) : IScopusService
{
    public async Task<ScopusInformation?> GetArticlesForUser(string userId)
    {
        string cmd = "py";
        var result2 = await DoCmdAsync(cmd, $"scopus.py {userId}");
        ScopusInformation? r2 = JsonSerializer.Deserialize<ScopusInformation>(result2);
        return r2;
    }

    private async Task<string> DoCmdAsync(string cmd, string args)
    {
        logger.LogWarning("DoCmd in python script {cmd} {args}", cmd, args);
        try
        {
            // First, ensure required modules are installed
            await EnsurePythonModulesAsync(cmd, new[] { "seleniumbase", "beautifulsoup4", "pyautogui" });

            var start = new ProcessStartInfo
            {
                FileName = cmd,   // cmd is full path to python.exe
                Arguments = args, // args is path to .py file and any cmd line args
                UseShellExecute = false,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                CreateNoWindow = true
            };
            using var process = new Process { StartInfo = start };
            process.Start();

            var outputTask = process.StandardOutput.ReadToEndAsync();
            var errorTask = process.StandardError.ReadToEndAsync();
            await Task.WhenAll(outputTask, errorTask);

            string result = await outputTask;
            string error = await errorTask;

            if (!process.WaitForExit(30000)) // 30 seconds timeout
            {
                process.Kill();
                throw new TimeoutException("Python script execution timed out after 30 seconds.");
            }
            if (!string.IsNullOrEmpty(error))
            {
                logger.LogWarning("Error in python script: {error}", error);
            }
            logger.LogWarning("Result in python script: {result}", result);
            return result;
        }
        catch (Exception ex)
        {
            logger.LogWarning("Exception in python script: {ex}", ex.ToString());
            return "";
        }
    }

    private async Task EnsurePythonModulesAsync(string pythonPath, string[] modules)
    {
        foreach (var module in modules)
        {
            logger.LogWarning("Checking Python module: {module}", module);
            var checkStart = new ProcessStartInfo
            {
                FileName = pythonPath,
                Arguments = $"-c \"import {module}\"",
                UseShellExecute = false,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                CreateNoWindow = true
            };
            using var checkProcess = new Process { StartInfo = checkStart };
            checkProcess.Start();
            if (!checkProcess.WaitForExit(10000)) // 10 seconds timeout
            {
                checkProcess.Kill();
                throw new TimeoutException($"Checking for Python module {module} timed out.");
            }
            if (checkProcess.ExitCode != 0)
            {
                logger.LogWarning("Installing missing Python module: {module}", module);
                var installStart = new ProcessStartInfo
                {
                    FileName = pythonPath,
                    Arguments = $"-m pip install {module}",
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    CreateNoWindow = true
                };
                using var installProcess = new Process { StartInfo = installStart };
                installProcess.Start();
                var outputTask = installProcess.StandardOutput.ReadToEndAsync();
                var errorTask = installProcess.StandardError.ReadToEndAsync();
                // The timeout task must be created once and compared by reference;
                // calling Task.Delay twice would compare two different task instances.
                var timeout = Task.Delay(300000); // 5 minutes timeout
                if (await Task.WhenAny(Task.WhenAll(outputTask, errorTask), timeout) == timeout)
                {
                    installProcess.Kill();
                    throw new TimeoutException($"Installation of Python module {module} timed out after 5 minutes.");
                }
                string output = await outputTask;
                string error = await errorTask;
                if (installProcess.ExitCode != 0)
                {
                    throw new Exception($"Failed to install Python module {module}: {error}");
                }
                logger.LogWarning("Successfully installed Python module: {module}", module);
                logger.LogWarning("Installation output: {output}", output);
            }
            else
            {
                logger.LogWarning("Python module {module} is already installed.", module);
            }
        }
    }
}
</code></pre>
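<p>For reference, the module check that <code>EnsurePythonModulesAsync</code> performs from C# could also live on the Python side. The sketch below is an assumption about how one might do that (the helper name <code>ensure_modules</code> is mine, not part of the original code); note that pip package names such as <code>beautifulsoup4</code> can differ from their import names such as <code>bs4</code>, so the list passed here must contain import names.</p>

```python
import importlib.util
import subprocess
import sys

def ensure_modules(import_names):
    """Install any module that cannot be imported (hypothetical helper,
    mirroring the C# EnsurePythonModulesAsync logic on the Python side)."""
    for name in import_names:
        # find_spec returns None when the module is not importable
        if importlib.util.find_spec(name) is None:
            subprocess.run(
                [sys.executable, "-m", "pip", "install", name],
                check=True, timeout=300,
            )

# Stdlib modules are already importable, so pip is never invoked here
ensure_modules(["json", "csv"])
```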
|
<python><c#><asp.net-mvc><iis><seleniumbase>
|
2024-09-07 09:37:05
| 1
| 1,228
|
Abdul Hadi
|
78,959,746
| 1,516,331
|
Python singleton implementation not working
|
<p>I am trying to implement a singleton pattern for a config class. I have a <code>_Config</code> class which loads the <code>config.yaml</code> file. <code>Config</code> class is a subclass of <code>_Config</code> class, whenever creating an instance of <code>Config</code> class, I expect it always returns the single instance. <br />
However the <code>Config</code> class does not work as expected. The error is shown at the end.
<strong>Can you please explain what is going on? Where are the errors coming from?</strong></p>
<p>This is the internal <code>_Config</code> class:</p>
<pre><code>import yaml
from loguru import logger
import os

def _load_config() -> dict:
    logger.debug(f'Current work dir: {os.getcwd()}. Loading the config.yaml data...')
    filename = 'config.yaml'
    with open(filename, 'r') as f:
        try:
            return yaml.safe_load(f)
        except yaml.YAMLError as exc:
            logger.critical(exc)
            # raise Exception(exc)
            raise

class _Config(dict):
    """This class is an internal class not to be accessed directly. Use `Config` class instead."""

    # def __new__(cls, *args, **kwargs):
    def __init__(self, config: dict = None):
        """
        By default, the Config class is initialized with the `config.yaml` file at the root directory, unless a `config`
        parameter is provided.
        :param config: a config dictionary to be initialized
        """
        super().__init__()
        self._config: dict = config or _load_config()

    def __getattr__(self, name):
        if name == '_config':
            return self._config
        if name not in self._config:
            raise AttributeError(name)
        item = self._config.get(name)
        if item is None:
            raise NotImplementedError(f'"{name}" not found in config.yaml')
        if isinstance(item, dict):
            item = _Config(config=item)
        return item

    def __getitem__(self, name):
        return self.__getattr__(name)
</code></pre>
<p>The following is the <code>Config</code> class that should be used externally:</p>
<pre><code>class Config(_Config):
    """
    The `Config` class for the configs with a singleton instance. This class wraps the `_Config` class. \n
    The reason to use a wrapper class to implement singleton pattern is that the class `_Config` itself cannot be
    implemented as a singleton class. This is because in its `__getattr__` method, the `_Config` class may need to be
    constructed:\n
    `item = _Config(config=item)`
    """
    _instance = None

    def __new__(cls):
        logger.debug('__new__')
        if cls._instance is None:
            logger.debug('Creating a new _Config instance')
            cls._instance = super(Config, cls).__new__(cls)
        return cls._instance

    def __init__(self):
        if not hasattr(self, '_config'):
            logger.debug('no _config')
            super().__init__()
</code></pre>
<p>However the code does not work. When running <code>config1 = Config()</code>, I got the following output:</p>
<pre><code>2024-09-07 08:39:43.844 | DEBUG | config:__new__:66 - __new__
2024-09-07 08:39:43.844 | DEBUG | config:__new__:68 - Creating a new _Config instance
Ran 1 test in 0.158s
FAILED (errors=1)
Error
Traceback (most recent call last):
File "/opt/project/tests/test_config.py", line 22, in test_config
config1 = Config()
^^^^^^^^
File "/opt/project/config.py", line 74, in __init__
if not hasattr(self, '_config'):
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/project/config.py", line 34, in __getattr__
item = self._config.get(name)
^^^^^^^^^^^^
File "/opt/project/config.py", line 34, in __getattr__
item = self._config.get(name)
^^^^^^^^^^^^
File "/opt/project/config.py", line 34, in __getattr__
item = self._config.get(name)
^^^^^^^^^^^^
[Previous line repeated 2983 more times]
RecursionError: maximum recursion depth exceeded
</code></pre>
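<p>To illustrate what the traceback shows: <code>hasattr(self, '_config')</code> runs before <code>__init__</code> has set <code>_config</code>, so normal attribute lookup fails and Python falls back to <code>__getattr__</code> — whose body reads <code>self._config</code>, which is also missing, re-entering <code>__getattr__</code> forever. The following minimal sketch (my illustration, not the asker's code) reproduces the recursion and shows one possible guard using <code>object.__getattribute__</code>, which bypasses the custom fallback:</p>

```python
class Broken:
    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup fails; before _config
        # is set, reading self._config fails too and re-enters here.
        return self._config[name]

class Guarded:
    def __getattr__(self, name):
        # Fetch _config without re-entering __getattr__; if it is not set
        # yet, raise AttributeError so hasattr(self, '_config') is False.
        try:
            config = object.__getattribute__(self, "_config")
        except AttributeError:
            raise AttributeError(name) from None
        return config[name]

broken = Broken()
recursed = False
try:
    broken.anything
except RecursionError:
    recursed = True

guarded = Guarded()
print(recursed, hasattr(guarded, "_config"))  # True False
```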
|
<python><inheritance><runtime-error><singleton>
|
2024-09-07 08:56:09
| 1
| 3,190
|
CyberPlayerOne
|
78,959,536
| 2,641,825
|
Using nbstripout as a filter, the git diff of a jupyter notebook is empty. How to temporarily deactivate nbstripout's git filter?
|
<p>I am using <a href="https://github.com/kynan/nbstripout" rel="nofollow noreferrer">nbstripout</a> to remove the outpout of jupyter notebook before committing them into git.</p>
<p>I had an issue where git status gave the notebook <code>.ipynb</code> file as changed, but git diff didn't show any change. I solved the issue by uninstalling nbstripout from the repository with</p>
<pre><code>cd git_repos
nbstripout --uninstall
</code></pre>
<p>After that, the problematic notebook file no longer showed as "modified" under "Changes not staged for commit".
I then installed nbstripout in the repository again with</p>
<pre><code>nbstripout --install
</code></pre>
<p>And the problematic file still didn't show as "Changes not staged for commit:" anymore. The problem appears to be solved by uninstalling and reinstalling nbstripout in the repo.</p>
<ul>
<li>Is there a way to temporarily turn the <code>nbstripout</code>'s git filter off?</li>
<li>Maybe this is not the right question to ask. Is there a way to track down why <code>nbstripout</code> had a problem with that notebook file?</li>
</ul>
<p>Related links:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/78043195/how-to-bypass-a-git-clean-filter-when-adding-and-committing">How to bypass a git clean filter when adding and committing</a></li>
</ul>
|
<python><git><jupyter-notebook>
|
2024-09-07 06:45:54
| 0
| 11,539
|
Paul Rougieux
|
78,959,447
| 13,379,374
|
Efficient PyTorch band matrix to dense matrix multiplication
|
<p><strong>Problem:</strong> In one of my programs, I need to calculate a matrix product <code>A @ B</code> where both matrices are N by N for considerably large N. I conjecture that approximating this product by <code>band_matrix(A, width) @ B</code> could suffice for my needs, where <code>band_matrix(A, width)</code> denotes the band-matrix part of <code>A</code> with width <code>width</code>. For example, <code>width = 0</code> gives the diagonal matrix with diagonal elements taken from <code>A</code>, and <code>width = 1</code> gives the tridiagonal matrix taken in a similar manner.</p>
<p><strong>My try:</strong> I'm trying to extract the tridiagonal matrix, for instance, in the following way:</p>
<pre class="lang-py prettyprint-override"><code># Step 1: Extract the main diagonal
main_diag = torch.diagonal(A, dim1=-2, dim2=-1) # Shape: [d1, d2, N]
# Step 2: Extract the upper diagonal (offset=1)
upper_diag = torch.diagonal(A, offset=1, dim1=-2, dim2=-1) # Shape: [d1, d2, N-1]
# Step 3: Extract the lower diagonal (offset=-1)
lower_diag = torch.diagonal(A, offset=-1, dim1=-2, dim2=-1) # Shape: [d1, d2, N-1]
# Step 4: Reconstruct the tridiagonal matrix
# Main diagonal
tridiag = torch.diag_embed(main_diag) # Shape: [d1, d2, N, N]
# Upper diagonal (shift the values to create the first upper diagonal)
tridiag += torch.diag_embed(upper_diag, offset=1)
# Lower diagonal (shift the values to create the first lower diagonal)
tridiag += torch.diag_embed(lower_diag, offset=-1)
</code></pre>
<p>but I'm not sure whether <code>tridiag @ B</code> would be much more efficient than the original <code>A @ B</code> or have the same complexity, since Torch may not know about the specific structure of <code>tridiag</code>. In theory, computation with a tridiagonal matrix should be <code>N</code> times faster.</p>
<hr />
<p>Any help with understanding PyTorch's behaviour in this type of scenario or implementing some alternative GPU optimized approaches would be greatly appreciated.</p>
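<p>As a sanity check on the expected speedup: the tridiagonal product never needs the dense matrix at all — each output row is a weighted sum of at most three rows of <code>B</code>, so the product costs O(N²) instead of O(N³). A NumPy sketch of that identity follows (my illustration under the assumption that the same slicing carries over to <code>torch</code> tensors; it is not a claim about what Torch does internally with <code>tridiag @ B</code>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))

main = np.diagonal(A)       # main diagonal, shape (N,)
upper = np.diagonal(A, 1)   # first upper diagonal, shape (N-1,)
lower = np.diagonal(A, -1)  # first lower diagonal, shape (N-1,)

# Row i of tridiag @ B is main[i]*B[i] + upper[i]*B[i+1] + lower[i-1]*B[i-1],
# so the banded product never materializes the N-by-N tridiagonal matrix.
C = main[:, None] * B
C[:-1] += upper[:, None] * B[1:]
C[1:] += lower[:, None] * B[:-1]

# Verify against the explicit dense tridiagonal product
dense = (np.diag(main) + np.diag(upper, 1) + np.diag(lower, -1)) @ B
assert np.allclose(C, dense)
```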
|
<python><machine-learning><pytorch>
|
2024-09-07 05:46:17
| 1
| 585
|
VIVID
|
78,959,401
| 1,176,573
|
plotly - how to plot multiple categories by grouping on another criterion
|
<p>I wish to plot the <code>x-axis</code> as years <code>['10 Years', '5 Years', '3 Years', 'TTM']</code> and group them for each stock. But the for loop is not working as expected.</p>
<pre class="lang-py prettyprint-override"><code>mydict = {
    'Compounded Sales Growth': ['10 Years', '5 Years', '3 Years', 'TTM', '10 Years', '5 Years', '3 Years', 'TTM'],
    'Stockname': ['Stock-1', 'Stock-1', 'Stock-1', 'Stock-1', 'Stock-2', 'Stock-2', 'Stock-2', 'Stock-2'],
    'Compounded Sales Growth.1': [1, 4, 12, 5, 7, 5, 24, 13]
}
df = pd.DataFrame(mydict)

fig = go.Figure()

# List of quarters
x = df['Compounded Sales Growth'].unique()

# Add traces for each metric
for result in df['Compounded Sales Growth'].unique():
    for stock in df['Stockname'].unique():
        fig.add_trace(go.Bar(
            x=x,
            y=df[(df['Stockname'] == stock) & (df['Compounded Sales Growth'] == result)].iloc[0, :],
            name=f'{stock}'
        ))

fig.update_layout()
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/pquBVqfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pquBVqfg.png" alt="enter image description here" /></a></p>
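<p>A grouped bar chart generally needs one trace per stock, each carrying all four period values — whereas the loop above adds one trace per (period, stock) pair and takes a whole DataFrame row as <code>y</code>. The likely-intended reshape can be sketched with pandas alone (plotly is left out so the snippet stays self-contained; the trace loop is shown in a comment as an assumption):</p>

```python
import pandas as pd

mydict = {
    'Compounded Sales Growth': ['10 Years', '5 Years', '3 Years', 'TTM',
                                '10 Years', '5 Years', '3 Years', 'TTM'],
    'Stockname': ['Stock-1'] * 4 + ['Stock-2'] * 4,
    'Compounded Sales Growth.1': [1, 4, 12, 5, 7, 5, 24, 13],
}
df = pd.DataFrame(mydict)

# One row per period, one column per stock, in display order
wide = (df.pivot(index='Compounded Sales Growth',
                 columns='Stockname',
                 values='Compounded Sales Growth.1')
          .reindex(['10 Years', '5 Years', '3 Years', 'TTM']))

# Each column is then ready to become one go.Bar trace, e.g.:
# for stock in wide.columns:
#     fig.add_trace(go.Bar(x=wide.index, y=wide[stock], name=stock))
print(wide['Stock-1'].tolist())  # [1, 4, 12, 5]
```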
|
<python><plotly>
|
2024-09-07 05:24:22
| 0
| 1,536
|
RSW
|
78,959,131
| 1,601,580
|
VLLM objects cause memory errors when created in a function even when explicitly clearing the GPU cache; only sharing a reference avoids the crash
|
<p>I'm encountering an issue when using the VLLM library in Python. Specifically, when I create a VLLM model object inside a function, I run into memory problems and cannot clear the GPU memory effectively, even after deleting objects and using <code>torch.cuda.empty_cache()</code>.</p>
<p>The problem occurs when I try to instantiate an <code>LLM</code> object inside a function, but it does not happen if I instantiate the object in the parent process or global scope. This suggests that VLLM has issues with creating and managing objects in functions, which leads to memory retention and GPU exhaustion.</p>
<p>Here’s a simplified version of the code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import gc
from vllm import LLM

def run_vllm_eval(model_name, sampling_params, path_2_eval_dataset):
    # Instantiate LLM in a function
    llm = LLM(model=model_name, dtype=torch.float16, trust_remote_code=True)

    # Run some VLLM inference or evaluation here (simplified)
    result = llm.generate([path_2_eval_dataset], sampling_params)

    # Clean up after inference
    del llm
    gc.collect()
    torch.cuda.empty_cache()

# After this, GPU memory is not cleared properly and causes OOM errors
run_vllm_eval()
run_vllm_eval()
run_vllm_eval()
</code></pre>
<p>but</p>
<pre class="lang-py prettyprint-override"><code>llm = run_vllm_eval2()
llm = run_vllm_eval2(llm)
llm = run_vllm_eval2(llm)
</code></pre>
<p>Works.</p>
<p>Even after explicitly deleting the LLM object and clearing the cache, the GPU memory is not properly freed, leading to out-of-memory (OOM) errors when trying to load or run another model in the same script.</p>
<p>Things I've Tried:</p>
<ul>
<li>Deleting the LLM object with del.</li>
<li>Running gc.collect() to trigger Python's garbage collection.</li>
<li>Using torch.cuda.empty_cache() to clear CUDA memory.</li>
<li>Ensuring no VLLM objects are instantiated in the parent process.</li>
</ul>
<p>None of these seem to fix the issue when the LLM object is created within a function.</p>
<p>Questions:</p>
<ul>
<li>Has anyone encountered similar memory issues when creating VLLM objects inside functions?</li>
<li>Is there a recommended way to manage or clear VLLM objects in a function to prevent GPU memory retention?</li>
<li>Are there specific VLLM handling techniques that differ from standard Hugging Face or PyTorch models in this context?</li>
</ul>
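<p>One pattern sometimes suggested for this class of problem — an assumption on my part, not something verified against vLLM internals — is to run each evaluation in a short-lived child process, so the OS reclaims all GPU and host memory when the child exits regardless of what the library keeps cached. A minimal sketch with a stand-in worker; with CUDA you would use the <code>"spawn"</code> start method under an <code>if __name__ == "__main__"</code> guard rather than <code>"fork"</code>, since a forked child inherits the parent's CUDA state:</p>

```python
import multiprocessing as mp

def run_eval_isolated(model_name):
    # Stand-in for the real worker: in the real version this is where
    # LLM(model=model_name, ...) would be created and used, and every
    # allocation dies with the child process on return.
    return f"evaluated {model_name}"

def run_in_subprocess(fn, *args):
    # A fresh one-process pool per call guarantees the worker process
    # exits (and releases its memory) before we return the result.
    ctx = mp.get_context("fork")
    with ctx.Pool(1) as pool:
        return pool.apply(fn, args)

print(run_in_subprocess(run_eval_isolated, "model-a"))  # evaluated model-a
```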
|
<python><machine-learning><pytorch><gpu><vllm>
|
2024-09-07 00:58:59
| 1
| 6,126
|
Charlie Parker
|
78,959,081
| 10,054,520
|
Subset columns from pandas dataframe that do not start with "X"
|
<p>I'm trying to subset a dataframe by excluding columns that all start with the same prefix. In other words, I want the entire dataframe except for columns that start with "death".</p>
<p>Say I have a dataframe with these columns</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>player</th>
<th>birthDay</th>
<th>birthMonth</th>
<th>birthYear</th>
<th>deathDay</th>
<th>deathMonth</th>
<th>deathYear</th>
</tr>
</thead>
</table></div>
<p>I've created a filter list for columns I don't want to select</p>
<pre class="lang-py prettyprint-override"><code>cols = [col for col in df if col.startswith("death")]
df[~cols]
</code></pre>
<p>But when I run this I get the error: bad operand type for unary ~: 'list'</p>
<p>How come I am receiving this error? And is there a better way of doing this?</p>
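<p>The error arises because <code>~</code> is defined element-wise on boolean Series/arrays, not on a plain Python list. A sketch of the usual spellings, using the columns above (an empty frame is enough to show the column selection):</p>

```python
import pandas as pd

df = pd.DataFrame(columns=['player', 'birthDay', 'birthMonth', 'birthYear',
                           'deathDay', 'deathMonth', 'deathYear'])

# ~ works on the boolean array produced by Index.str.startswith
keep = df.loc[:, ~df.columns.str.startswith('death')]

# Equivalent alternatives: drop an explicit list, or filter by regex
same = df.drop(columns=[c for c in df.columns if c.startswith('death')])
regex = df.filter(regex=r'^(?!death)')  # negative lookahead: keep non-"death" labels

print(list(keep.columns))  # ['player', 'birthDay', 'birthMonth', 'birthYear']
```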
|
<python><pandas><dataframe>
|
2024-09-07 00:14:16
| 3
| 337
|
MyNameHere
|
78,959,003
| 1,471,828
|
How do I make variable aware of the output of gencache.EnsureModule?
|
<p><strong>workaround</strong> In the end I just copied the <code>make_py</code> output to my project and imported the entry-point class.</p>
<p>How do I make <code>win32com.client.Dispatch(APPID)</code> aware there is a <code>gencache</code>d version?</p>
<p>Consider the following code and note that <code>%TEMP%\gen_py\</code> is empty at this time(!). Also note the casing of version/Version.</p>
<pre><code>import win32com.client as win32
xl_dyn = win32.dynamic.Dispatch('Excel.Application') # returns <COMObject Excel.Application>
print("late binding :"+ xl_dyn.version)
xl_dis = win32.Dispatch('Excel.Application') # returns <COMObject Excel.Application>
print("it depends :"+ xl_dis.version)
xl_ens = win32.gencache.EnsureDispatch('Excel.Application') # returns <win32com.gen_py.Microsoft Excel 16.0 Object Library.etcetera>
print("early binding:"+ xl_ens.Version)
</code></pre>
<p>When I run the exact same code again, <code>win32.Dispatch()</code> will now automatically pick up the <code>gen_py</code> stuff (and fail because of the casing error in 'version' - that is a deliberate error on my part to tell the difference).</p>
<p>I sort of get that - it all relates to methods to do early and late binding.</p>
<p>My question: I want to consume another COM server, let's call it SomeLib (it is a third party product I have no control over it). I know for a fact that I can early bind to it because I do that in VB6 and C#. But I cannot use <code>EnsureDispatch('Some.Lib')</code>, that command errors complaining about running MakePy manually.</p>
<p>Using output from MakePy I wrote this:</p>
<pre><code>win32.gencache.EnsureModule(CLSID, 0, 1, 0) # using the CLSID and version - it generates a <clsid>.py file in gen_py folder
mylib = win32.Dispatch(APPID) # using the correct APPID
</code></pre>
<p>The issue: unlike <code>xl_dis</code> picking up the gen_py stuff automatically after it has been generated, my <code>mylib</code> doesn't. It is all there in <code>%temp%\gen_py\</code> but <code>mylib</code> always remains a <code>COMObject somelib</code>.</p>
<p>How do I make <code>mylib</code> aware of the gen_py stuff?</p>
|
<python><win32com>
|
2024-09-06 23:12:46
| 0
| 905
|
Rno
|
78,958,965
| 11,946,045
|
How do I read a struct's contents in a running process?
|
<p>I compiled a C binary on a linux machine and executed it, in that binary I have a struct called <code>Location</code> defined as follows</p>
<pre><code>typedef struct
{
    size_t x;
    size_t y;
} Location;
</code></pre>
<p>and here is my main function</p>
<pre><code>int main(void)
{
    srand(0);
    Location loc;
    while (1)
    {
        loc.x = rand() % 10;
        loc.y = rand() % 10;
        sleep(2);
    }
    return 0;
}
</code></pre>
<p>How do I monitor the values of x and y?</p>
<p>There are some limitations to consider</p>
<ul>
<li>I can't modify the binary code</li>
<li>monitoring should be done with python</li>
<li>ASLR always enabled</li>
</ul>
<p>Things I tried</p>
<ul>
<li>Reading <code>/proc/pid/maps</code> location stack then reading <code>/proc/pid/mem</code> didn't find anything</li>
<li>I used gdb to find the address of <code>loc</code> but it is outside the range of stack found in maps (most probably ASLR)</li>
</ul>
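<p>For the reading side, the <code>/proc/pid/maps</code> + <code>/proc/pid/mem</code> pair is the right mechanism — ASLR only changes <em>where</em> to look, not how to read. Below is a sketch of the read itself, demonstrated on the current process so it stays self-contained (reading another PID requires ptrace permission, e.g. <code>CAP_SYS_PTRACE</code> or <code>/proc/sys/kernel/yama/ptrace_scope</code> set to 0; and under ASLR the address of <code>loc</code> would still have to be resolved separately, e.g. with gdb or by scanning the stack region from maps — that part is not shown here):</p>

```python
import ctypes
import os
import struct

def read_process_memory(pid, address, size):
    # Direct read of a process's virtual memory at a known address.
    with open(f"/proc/{pid}/mem", "rb") as mem:
        mem.seek(address)
        return mem.read(size)

# Self-contained demo: recreate the C struct layout with ctypes,
# then read it back through /proc/self/mem, as a monitor would.
class Location(ctypes.Structure):
    _fields_ = [("x", ctypes.c_size_t), ("y", ctypes.c_size_t)]

loc = Location(3, 7)
raw = read_process_memory(os.getpid(), ctypes.addressof(loc), ctypes.sizeof(loc))
x, y = struct.unpack("@NN", raw)  # "N" == native size_t, matching the C struct
print(x, y)  # 3 7
```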
|
<python><c><memory><stack>
|
2024-09-06 22:44:38
| 2
| 814
|
Weed Cookie
|
78,958,947
| 1,520,291
|
TensorFlow custom weights manipulation
|
<p>So, I'm having difficulty with implementing custom modification of weights, which is not based on any of the standard TF-provided optimizers and doesn't care about gradient tape whatsoever.</p>
<p>Essentially, what I'm trying to achieve is to update weights of the neural network after each training iteration, independently of the built-in updates (I actually want to disable them altogether, in order to reduce the computational overhead).</p>
<p>For simplicity, let's say I just want to add some random <code>float</code> to each weight. The key requirement is to be able to do it <strong>in parallel</strong>, because in this example (and in my actual logic) the weight updates are independent from each other.</p>
<p>How do I find an "entry point" to TF where I could specify my mathematical operation that should be performed in parallel on all the weights after each feed-forward fitting run?</p>
<p>I was looking into <a href="https://www.tensorflow.org/guide/core/optimizers_core" rel="nofollow noreferrer">custom optimizers</a> and even the <a href="https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch" rel="nofollow noreferrer">writing a training loop from scratch</a> article, but all these approaches seem to be heavily dependent on gradients, TF's built-in tape, and some other things that I don't really need for my algorithm.</p>
<p>I've found a way of overwriting the weights of a given layer, which could help me, in this answer: <a href="https://stackoverflow.com/questions/62059060/how-to-set-custom-weights-in-layers">How to set custom weights in layers?</a> But I'm not sure where exactly should I place this code.</p>
|
<python><tensorflow><keras>
|
2024-09-06 22:36:58
| 0
| 3,301
|
Dmytro Titov
|
78,958,924
| 3,407,994
|
mypy: handle type errors once an Optional type is guaranteed not None
|
<p>I have a method that returns an optional float (a <code>float</code> or <code>None</code>). I use its output conditionally: if it is <code>None</code> I do one thing; if it is a <code>float</code> I pass it through later methods.</p>
<p>Currently mypy raises errors that those later methods don't support a <code>None</code>. How do I either change the type or create a new variable that is guaranteed to be a <code>float</code> and not optional?</p>
<p>Example:</p>
<pre><code>from typing import Any, Optional

def method(args) -> Optional[float]:
    if some_case:
        return numerical_value  # a float
    return None

def next_step(val: float) -> Any:
    return do_more_stuff

result = method(stuff)
if result is None:
    exit_early()

# otherwise we are guaranteed dealing with a float
next_step(result)
</code></pre>
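<p>For context on how mypy decides this: after <code>if result is None:</code>, <code>result</code> is narrowed to <code>float</code> in the remaining code only when that branch cannot fall through — i.e. it ends in <code>raise</code>, <code>return</code>, or a call whose return type is annotated <code>NoReturn</code>. A bare <code>exit_early()</code> without such an annotation does not narrow. A runnable sketch with stand-in values:</p>

```python
from typing import NoReturn, Optional

def exit_early() -> NoReturn:
    # The NoReturn annotation tells mypy this branch never falls through,
    # so `result` is narrowed to float after the if-block.
    raise SystemExit("no value available")

def method(flag: bool) -> Optional[float]:
    return 3.14 if flag else None

def next_step(val: float) -> float:
    return val * 2

result = method(True)
if result is None:
    exit_early()

doubled = next_step(result)  # mypy now sees result: float
print(doubled)
```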
|
<python><python-typing><mypy>
|
2024-09-06 22:24:25
| 2
| 1,758
|
kuanb
|