QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
β |
|---|---|---|---|---|---|---|---|---|
78,914,940
| 6,439,229
|
Is it possible to adjust QlineEdit icon spacing?
|
<p>I'm planning to use a <code>QLineEdit</code> with three actions added via <code>addAction()</code>.<br />
Easy enough and it looks like this: (squares as icons for the example)</p>
<p><a href="https://i.sstatic.net/6ldQzHBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6ldQzHBM.png" alt="enter image description here" /></a></p>
<p>But a minor annoyance is that I find the spacing between the icons a bit too large.<br />
Is it possible to adjust this spacing? QLineEdit doesn't seem to have an accessible layout where you could set the spacing.</p>
|
<python><pyqt6><qlineedit>
|
2024-08-26 14:24:41
| 1
| 1,016
|
mahkitah
|
78,914,882
| 6,930,340
|
Replace a cell in a column based on a cell in another column in a polars DataFrame
|
<p>Consider the following <code>pl.DataFrame</code>:</p>
<pre><code>import polars as pl
df = pl.DataFrame(
{
"symbol": ["s1", "s1", "s2", "s2"],
"signal": [0, 1, 2, 0],
"trade": [None, 1, None, -1],
}
)
shape: (4, 3)
ββββββββββ¬βββββββββ¬ββββββββ
β symbol β signal β trade β
β --- β --- β --- β
β str β i64 β i64 β
ββββββββββͺβββββββββͺββββββββ‘
β s1 β 0 β null β
β s1 β 1 β 1 β
β s2 β 2 β null β
β s2 β 0 β -1 β
ββββββββββ΄βββββββββ΄ββββββββ
</code></pre>
<p>Now, I need to group the dataframe by <code>symbol</code> and check if the first row in every group in column <code>signal</code> is not equal to 0 (zero). It this equals to <code>True</code>, I need to replace the corresponding cell in column <code>trade</code> with the value in the cell in <code>signal</code>.</p>
<p>Here's what I am actually looking for:</p>
<pre><code>shape: (4, 3)
ββββββββββ¬βββββββββ¬ββββββββ
β symbol β signal β trade β
β --- β --- β --- β
β str β i64 β i64 β
ββββββββββͺβββββββββͺββββββββ‘
β s1 β 0 β null β
β s1 β 1 β 1 β
β s2 β 2 β 2 β <- copy value from the ``signal`` column
β s2 β 0 β -1 β
ββββββββββ΄βββββββββ΄ββββββββ
</code></pre>
|
<python><dataframe><python-polars>
|
2024-08-26 14:10:19
| 3
| 5,167
|
Andi
|
78,914,875
| 17,174,267
|
python overlaying information on base class
|
<p>I have a base class <code>ASTNode</code> and a bunch of subclasses. I cannot/want not modify these classes.
The <code>ASTNode</code>s represent a tree.</p>
<p>Now I'd like to add type checking to my AST (meaning I need to add new data to every node.)</p>
<p>A) I create a new Typed-variant for every ASTNode subclass and create a new tree with these new classes. I'd like to avoid this since I would have to manually create a new class/change it every time I update the original AST.</p>
<p>B) I create a new class <code>TypedASTNode</code> that has a reference to the original ASTNode and also stores the additional typing information. Although now when I want to traverse the tree I have to do this via the stored ASTNodes and I cannot go from ASTNode back to TypedASTNode to retrieve the typing information. This means I would have to store another copy of the tree inside the TypedASTNodes. I really dislike this idea, since it opens up the possibility of inconsistencies between both trees later on.</p>
<pre><code>
class ASTNode:
@property
def children(self): return ...
...
class ASTLiteral(ASTNode):
...
class ASTExpr(ASTNode):
...
class TypedASTNode:
def __init__(self, node: ASTNode):
self._node = node
@property
def type(self): return ...
@property
def node(self): return self._node
</code></pre>
<p>Ideally I would like to be able to overlay the information on the original ASTNodes somehow. Is there a pythonic way to solve this problem neatly or do I have to go with the duplicated tree approach?</p>
<p>Note: Working typehints are a must.</p>
|
<python>
|
2024-08-26 14:08:20
| 0
| 431
|
pqzpkaot
|
78,914,836
| 4,965,381
|
Is there a way to include column index name with Pandas dataframe to CSV?
|
<p>Is there a way to include the column (not rows!) index name in the output when calling Pandas' <code>dataframe.to_csv()</code> method? For example:</p>
<pre><code>import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
pivot_iris = iris.pivot_table(index='species', columns='sepal_length', values='sepal_width')
print(pivot_iris.columns)
print(pivot_iris)
pivot_iris.to_csv('pivot_iris.csv', index=True, header=True)
</code></pre>
<p>After calling pivot, the column index name is set to <code>sepal_length</code> as you can see in the prints</p>
<pre><code>Index([4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 5.6,
5.7, 5.8, 5.9, 6.0, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7.0,
7.1, 7.2, 7.3, 7.4, 7.6, 7.7, 7.9],
dtype='float64', name='sepal_length')
</code></pre>
<p>and</p>
<pre><code>sepal_length 4.3 4.4 4.5 4.6 4.7 ... 7.3 7.4 7.6 7.7 7.9
species ...
setosa 3.0 3.033333 2.3 3.325 3.2 ... NaN NaN NaN NaN NaN
versicolor NaN NaN NaN NaN NaN ... NaN NaN NaN NaN NaN
virginica NaN NaN NaN NaN NaN ... 2.9 2.8 3.0 3.05 3.8
[3 rows x 35 columns]
</code></pre>
<p>Unfortunately the output file produced with <code>to_csv()</code> is missing the label in front of the column names:</p>
<pre><code>species,4.30,4.40,4.50,4.60,4.70,4.80,4.90,5.00,5.10,5.20,5.30,5.40,5.50,5.60,5.70,5.80,5.90,6.00,6.10,6.20,6.30,6.40,6.50,6.60,6.70,6.80,6.90,7.00,7.10,7.20,7.30,7.40,7.60,7.70,7.90
setosa,3.00,3.03,2.30,3.33,3.20,3.18,3.20,3.36,3.60,3.67,3.70,3.66,3.85,,4.10,4.00,,,,,,,,,,,,,,,,,,,
versicolor,,,,,,,2.40,2.15,2.50,2.70,,3.00,2.44,2.82,2.82,2.67,3.10,2.80,2.88,2.55,2.70,3.05,2.80,2.95,3.07,2.80,3.10,3.20,,,,,,,
virginica,,,,,,,2.50,,,,,,,2.80,2.50,2.73,3.00,2.60,2.80,3.10,2.93,2.92,3.05,,3.04,3.10,3.13,,3.00,3.27,2.90,2.80,3.00,3.05,3.80
</code></pre>
<p>is there a way to include it?</p>
|
<python><pandas><csv>
|
2024-08-26 13:59:45
| 1
| 1,067
|
voiDnyx
|
78,914,357
| 14,358,734
|
Merge a multi index of year, then month into one index year-month
|
<p>My code so far</p>
<pre><code>import random
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import os
from scipy.stats import shapiro
import scipy.stats as stats
ios = pd.read_csv('iOS.csv')
ios.replace(',','', regex=True, inplace=True)
ios = ios.astype({'Total Impressions': int, 'App Referrer Impressions': int, 'Browse Impressions': int,
'Search Impressions': int, 'Total Installs': int, 'App Referrer Installs': int,
'Browse Installs': int, 'Search Installs': int, 'CVR': float, 'App Referrer CVR': float,
'Browse CVR': float, 'Search CVR': float})
ios['Date'] = pd.to_datetime(ios['Date'])
ios = ios.groupby(
[
ios['Date'].dt.year,
ios['Date'].dt.month
]
).mean()
</code></pre>
<p>This results in a data frame with a multi-index. The first index is Year, and the second index is Month. I'd like to merge these indices into a single Year-Month date object, so I can make a line-lot where each x tick is a month in a specific year.</p>
|
<python><pandas><dataframe>
|
2024-08-26 12:16:24
| 1
| 781
|
m. lekk
|
78,914,204
| 3,103,399
|
webRTC video streaming - Chrome chooses wrong candidates with ubuntu 20
|
<p>I am streaming video over webRTC in AWS environments, all ports are open, no firewall.</p>
<p><strong>Server 1:</strong> Ubuntu 18.04.5 LTS</p>
<p><strong>Server 2:</strong> Ubuntu 20.04.6 LTS</p>
<p>With Server 1 after at most 2-3 retries, webRTC chooses the correct candidate pair that has my pc public IP and the remote public IP:</p>
<p><a href="https://i.sstatic.net/26cI5QM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26cI5QM6.png" alt="enter image description here" /></a></p>
<p>With Server 2 - no matter how many retries, webRTC always chooses the <em>host</em> candidate with my local IP with the remote public IP, thus the connection fails. my PC public IP does appear in the ice candidate list:</p>
<p><a href="https://i.sstatic.net/gwVXXYRI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwVXXYRI.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/LRG42OWd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRG42OWd.png" alt="enter image description here" /></a></p>
<p>All peers on all sides and servers use the same Google stun server <code>stun:stun.l.google.com:19302</code>. I only stream video data from server to client, so there is no need for TURN.</p>
<p>Also, if all peers over the same network and not AWS, i.e locally, it all works well.</p>
<p>I've debugged with wireshark it and also saw that there are responses to the STUN requests but once there is a binding request to the remote server there are no responses (74.125.250.129 is the google stun IP) marked packets have no response.</p>
<p><a href="https://i.sstatic.net/o9Jv2wA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o9Jv2wA4.png" alt="enter image description here" /></a></p>
<p><strong>Stack:</strong> client-side written with javascript and native browser API. backend uses python and <code>aiortc</code> lib. latest chrome.</p>
|
<javascript><python><ubuntu><webrtc><aiortc>
|
2024-08-26 11:42:50
| 2
| 5,386
|
jony89
|
78,914,111
| 22,479,232
|
How to use "--" as an argument in python argparser
|
<p>I have the following python code set up, in which the program would return <strong>"something"</strong> whenever the user pass <code>--</code> as an argument:</p>
<pre class="lang-py prettyprint-override"><code>#script.py
import argparse
parser = argparse.ArgumentParser()
parser.add_argument(
"--",
help="Prints something",
action="store_true",
dest="print_something",
)
args = parser.parse_args()
if args.print_something:
print("something")
</code></pre>
<p>The output is as follows:</p>
<pre class="lang-bash prettyprint-override"><code>$ python .\script.py --
usage: script.py [-h] [--]
playground.py: error: unrecognized arguments: --
</code></pre>
<p>Argparse is not able to recognise the <code>--</code> argument.</p>
<p>I tried using escape sequences, like putting <code>-\-\-</code> under <code>parser.add_argument(</code>, yet the program is not behaving the way it should.</p>
<p>There is, of course, a workaround using <code>sys.arg</code>, which goes something like:</p>
<pre class="lang-py prettyprint-override"><code>import sys
if "--" in sys.argv:
print("something")
</code></pre>
<p>But the above approach is impractical for projects with alot of arguments -- especially for those containing both functional and positional argument.</p>
<p>Therefore, is there anyway to parse the <code>--</code> argument using argparser?</p>
|
<python><argparse>
|
2024-08-26 11:20:37
| 1
| 351
|
Epimu Salon
|
78,914,064
| 9,002,568
|
Plotly table object on Colab
|
<p>In colab, I generate plotly table object as a part of subplot in all graph object is in Ipywidgets.output like:</p>
<p><a href="https://i.sstatic.net/f0gQF56t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f0gQF56t.png" alt="picture 1" /></a></p>
<p>But I couldn't see first column values.
When I click the plotly bar, reset button, it shows like in :</p>
<p><a href="https://i.sstatic.net/jtralURF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtralURF.png" alt="picture 2" /></a></p>
<p>Code is:</p>
<pre class="lang-py prettyprint-override"><code>from google.colab import output
output.enable_custom_widget_manager()
report_data = [[' Total Distance', '29,873,735,731'], [' Average Distance', '382.9']]
output = widgets.Output()
with output:
fig.add_trace(go.Table(
header=dict(values=['Summary', ' '], align='left'),
cells=dict(values=list(zip(*report_data)), align='left'),
), row=1, col=1)
fig.add_trace(go.Table(
header=dict(values=['Summary', ' '], align='left'),
cells=dict(values=list(zip(*report_data)), align='left'),
), row=1, col=2)
fig.show(renderer="colab")
display(output)
</code></pre>
<p>I tried different browsers, also I gave cell height parameter 35, didn't worked. If I generate table alone, there is not problem. But with output it is not seen normal, if I download as a png, it shows.</p>
|
<python><plotly><google-colaboratory>
|
2024-08-26 11:07:24
| 1
| 593
|
kur ag
|
78,913,870
| 6,509,922
|
Potentially biased sampling in Tree-Parzen optimization from Hyperopt
|
<p>I'm using the Tree-Parzen algo implementation from Hyperopt for blackbox optimization task that I have and I'm consistently observing an odd phenomenon.</p>
<p>The plot below displays the phenomenon. You can see that the sampler is focusing on a low-loss region (the dark ball near the top right). This is normal, indeed expected.</p>
<p>What I find odd is that the sampler doesn't seem to be exploring at all from the regions parallel to this point. You'll see that there are "channels" that are mostly un-sampled perpendicular to the identified low loss region.</p>
<p>The plot only shows two of 4 parameters that are being optimized for, but the phenomenon is
observable with all pairs of parameters.</p>
<p>This seems to me to be non-optimal behavior. While we want the search algo to focus on low loss regions to improve efficiency, we also want it to sufficiently explore nearby regions to ensure robustness of the results.</p>
<p>To improve exploration, I've set the first half of the samples to be suggested at random, and the last half to be suggested by the TPE algo.</p>
<p>The chart below is from this hybrid setup, i.e. increasing initial exploration didn't seem to improve exploration in these nearby zones.</p>
<p>I would like, first, to better understand why this happens, and second, to improve exploration.</p>
<p>Any thoughts and suggestions are much appreciated.</p>
<p>Thanks!</p>
<p><a href="https://i.sstatic.net/80O6XaTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/80O6XaTK.png" alt="Scatterplot of two parameters with loss as hue" /></a></p>
<p><a href="https://i.sstatic.net/0IYvM7CY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0IYvM7CY.png" alt="Scatterplot of two parameters with loss as hue" /></a></p>
|
<python><hyperopt>
|
2024-08-26 10:16:31
| 0
| 688
|
Jed
|
78,913,797
| 4,050,510
|
Iterating a Huggingface Dataset from disk using Generator seems broken. How to do it properly?
|
<p>I have a strange behavior in HuggingFace Datasets. My minimal reproduction is as below.</p>
<pre class="lang-py prettyprint-override"><code># main.py
import datasets
import numpy as np
generator = np.random.default_rng(0)
X = np.arange(1000)
ds = datasets.Dataset.from_dict(
mapping={"X":X},
)
ds.save_to_disk("tmp")
print("First loop")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1)), end=", ")
print("")
print("Second loop")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
print(next(ds.shuffle(generator=generator).iter(batch_size=1)), end=", ")
print("")
</code></pre>
<p>The first time I run the script it all looks good. I get a new random iterate every time.
The second time I run the script, the first loop does the same as before, but the second loop is stuck at sample <code>741</code>. See output below.</p>
<pre><code>$ python main.py
Saving the dataset (1/1 shards): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 598502.28 examples/s]
First loop
{'X': [459]}, {'X': [739]}, {'X': [72]}, {'X': [943]}, {'X': [241]}, {'X': [181]}, {'X': [845]}, {'X': [830]}, {'X': [896]}, {'X': [334]},
Second loop
{'X': [741]}, {'X': [847]}, {'X': [944]}, {'X': [795]}, {'X': [483]}, {'X': [842]}, {'X': [717]}, {'X': [865]}, {'X': [231]}, {'X': [840]},
$ python main.py
Saving the dataset (1/1 shards): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1000/1000 [00:00<00:00, 492231.43 examples/s]
First loop
{'X': [459]}, {'X': [739]}, {'X': [72]}, {'X': [943]}, {'X': [241]}, {'X': [181]}, {'X': [845]}, {'X': [830]}, {'X': [896]}, {'X': [334]},
Second loop
{'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]}, {'X': [741]},
</code></pre>
<p>If I delete the dataset folder <code>rm -rf tmp</code> then I can run the code once with expected behavior, and it fails again the second time. So it has something to do with persisting the dataset to disk.</p>
<p>What am I doing wrong?</p>
|
<python><huggingface><huggingface-datasets>
|
2024-08-26 09:55:43
| 0
| 4,934
|
LudvigH
|
78,913,782
| 14,729,820
|
How can I run python package using Google Collab?
|
<p>I want to run the <a href="https://github.com/FactoDeepLearning/DAN/tree/main" rel="nofollow noreferrer">DAN</a> repo. I am using Google Cloud Collab. I cloned the project on my Google Drive in the following directory <code>/content/drive/MyDrive/DAN/DAN</code>
Trying to run An example script file is available at <code>OCR/document_OCR/dan/predict_examples</code> to recognize images directly from paths using trained weights.</p>
<p>using collab notebook located inside <code>/content/drive/MyDrive/DAN/DAN</code></p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append('/content/drive/MyDrive/DAN/DAN')
!python3 /content/drive/MyDrive/DAN/DAN/OCR/document_OCR/dan/predict_example.py
</code></pre>
<p>But this is not working for me
And I am facing the following issue
<code>Traceback (most recent call last): File "/content/drive/MyDrive/DAN/DAN/OCR/document_OCR/dan/predict_example.py", line 9, in <module> from basic.models import FCN_Encoder ModuleNotFoundError: No module named 'basic'</code></p>
<p>Where the <code>predict_example.py</code> scripts starts with</p>
<pre><code>import os.path
import torch
from torch.optim import Adam
from PIL import Image
import numpy as np
from basic.models import FCN_Encoder
from OCR.document_OCR.dan.models_dan import GlobalHTADecoder
from OCR.document_OCR.dan.trainer_dan import Manager
from basic.utils import pad_images
from basic.metric_manager import keep_all_but_tokens
</code></pre>
|
<python><deep-learning><package><google-colaboratory>
|
2024-08-26 09:52:02
| 1
| 366
|
Mohammed
|
78,913,556
| 1,363,960
|
How to evaluate a string as tcl command
|
<p>I'm using Foundry Nuke and try to work with tcl command.</p>
<p>I have this code :</p>
<pre><code>[lindex [split "xx_cc" "_"] 0]
</code></pre>
<p>If I put this code into a text node. I will output the correct result which is 'xx'</p>
<p>But in this case I put the code in the project setting (root) as a string . I added a new knob (named as 'cxproject' and type is text input knob). Then put the code in this knob ( as string)</p>
<p>Then in the a text node , I put this code :</p>
<pre><code>[value root.cxproject]
</code></pre>
<p>Trying to load the code and evaluate that string to be executed as tcl command.</p>
<p>The result is the original string of the code itself -> <strong>[lindex [split "xx_cc" "_"] 0]</strong> . This is not what I want. I want to see the result as 'xx'.</p>
<p>I tried :</p>
<pre><code>[eval [value root.cxproject] ]
</code></pre>
<p>but not working.</p>
<p>How to evaluate a string as tcl command using tcl script ?</p>
<p><a href="https://i.sstatic.net/Qa9smPnZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qa9smPnZ.png" alt="enter image description here" /></a></p>
|
<python><tcl><nuke>
|
2024-08-26 08:55:48
| 1
| 1,828
|
andio
|
78,913,527
| 5,790,653
|
How to find a name with conditions ending with some strings and divide them
|
<p>I have these lists:</p>
<pre class="lang-py prettyprint-override"><code>list1 = [
{'itemid': '264', 'name': 'Interface Gi1/17(Port1:TP TPIA-CL03-017-G15-14): Bits sent', 'some_other_keys': 'some_more_values'},
{'itemid': '215', 'name': 'Interface Te1/50("Port1:CL-PO-G22-23"): Bits received', 'some_other_keys': 'some_more_values'},
{'itemid': '425', 'name': 'Interface Gi1/46(no description): Bits sent', 'some_other_keys': 'some_more_values'},
{'itemid': '521', 'name': 'Interface Te1/50("Port1:CL-PO-G22-23"): Bits sent', 'some_other_keys': 'some_more_values'},
{'itemid': '310', 'name': 'Interface Gi1/46(no description): Bits received', 'some_other_keys': 'some_more_values'},
{'itemid': '123', 'name': 'Interface Gi1/17(Port1:TP TPIA-CL03-017-G15-14): Bits received', 'some_other_keys': 'some_more_values'},
]
list2 = [
{'itemid': '264', 'clock': '1724146566', 'value': '6246880', 'ns': '120003316'},
{'itemid': '264', 'clock': '1724146746', 'value': '6134912', 'ns': '113448784'},
{'itemid': '215', 'clock': '1724144406', 'value': '5786832', 'ns': '157177073'},
{'itemid': '215', 'clock': '1724144766', 'value': '5968784', 'ns': '760851309'},
{'itemid': '425', 'clock': '1724148366', 'value': '6590424', 'ns': '403316048'},
{'itemid': '425', 'clock': '1724148726', 'value': '6549984', 'ns': '484278803'},
{'itemid': '521', 'clock': '1724148906', 'value': '6346488', 'ns': '306999249'},
{'itemid': '521', 'clock': '1724147106', 'value': '6139008', 'ns': '459391602'},
{'itemid': '310', 'clock': '1724147286', 'value': '6000208', 'ns': '826776455'},
{'itemid': '310', 'clock': '1724147466', 'value': '6784960', 'ns': '152620809'},
{'itemid': '123', 'clock': '1724147826', 'value': '6865272', 'ns': '70247389'},
{'itemid': '123', 'clock': '1724148186', 'value': '6544328', 'ns': '610791670'},
]
</code></pre>
<p>Now I'm going to do this:</p>
<p>First, store the sum and average of each <code>itemid</code> in <code>list2</code> based on the <code>value</code> (for example, for <code>itemid</code> 264, sum all its values then divide by the len() function):</p>
<p>Second, find each two <code>itemid</code>s in the <code>list1</code> and see if their names match:</p>
<p>Third, divide values which have <code>received</code> by <code>sent</code>.</p>
<p>I add all values of the same <code>itemid</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>from collections import defaultdict
new_list = defaultdict(list)
for entry in list2:
if int(entry['value']) > 0:
new_list[int(entry['itemid'])].append(int(entry['value']))
# >>> new_list
# defaultdict(<class 'list'>, {264: [6246880, 6134912], 215: [5786832, 5968784], 425: [6590424, 6549984], 521: [6346488, 6139008], 310: [6000208, 6784960], 123: [6865272, 6544328]})
</code></pre>
<p>Now I should find if id 264's <code>name</code> (<code>name.split(': Bits')[0]</code> in fact) in <code>list1</code> matches with which id, then divide <code>.endswith('Bits received') / .endswith('Bits sent')</code>.</p>
<p>Let's suppose this:</p>
<blockquote>
<p>id 264's name is Interface Gi1/17(Port1:TP TPIA-CL03-017-G15-14) and another id with this name is 123. Now we divide the sum all values having id 123 (because it's <code>received</code>) by the sum all values having id 264 (because it's <code>sent</code>).</p>
</blockquote>
<p>Should be like this:</p>
<pre class="lang-py prettyprint-override"><code> [
{'name': 'Interface Gi1/17(Port1:TP TPIA-CL03-017-G15-14)', 'received': 13409600, 'sent': 12381792, 'ratio': 1.0830096322083265} # ( (6865272 + 6544328) / (6246880 + 6134912) )
]
</code></pre>
<p>With this code, I can sum and average each <code>itemid</code> (but that's not what I'm going to do):</p>
<pre class="lang-py prettyprint-override"><code>import datetime
for x in new_list:
print(f"Item ID: {x}, Ratio: {float(str(sum(new_list[x]) / len(new_list[x]) / 1000 / 1000)[:3])}")
</code></pre>
<p>Current output:</p>
<pre><code>Item ID: 264, Ratio: 6.1
Item ID: 215, Ratio: 5.8
Item ID: 425, Ratio: 6.5
Item ID: 521, Ratio: 6.2
Item ID: 310, Ratio: 6.3
Item ID: 123, Ratio: 6.7
</code></pre>
<p>Fully expected output:</p>
<pre class="lang-py prettyprint-override"><code>output = [
{'name': 'Interface Gi1/17(Port1:TP TPIA-CL03-017-G15-14)', 'received': 13409600, 'sent': 12381792, 'ratio': 1.0830096322083265}, # ( (6865272 + 6544328) / (6246880 + 6134912) )
{'name': 'Interface Te1/50("Port1:CL-PO-G22-23")', 'received': 11755616, 'sent': 12485496, 'ratio': 0.941541769746272}, # ( (5786832 + 5968784) / (6346488 + 6139008) )
{'name': 'Interface Gi1/46(no description)', 'received': 12785168, 'sent': 13140408, 'ratio': 0.9729658318067446}, # ( (6000208 + 6784960) / (6590424 + 6549984) )
]
</code></pre>
<p>Would you please help me how to reach it?</p>
|
<python>
|
2024-08-26 08:47:17
| 1
| 4,175
|
Saeed
|
78,913,464
| 17,729,094
|
How to filter on uniqueness by condition
|
<p>Imagine I have a dataset like:</p>
<pre><code>data = {
"a": [1, 4, 2, 4, 7, 4],
"b": [4, 2, 3, 3, 0, 2],
"c": ["a", "b", "c", "d", "e", "f"],
}
</code></pre>
<p>and I want to keep only the rows for which <code>a + b</code> is uniquely described by a single combination of <code>a</code> and <code>b</code>. I managed to hack this:</p>
<pre><code>df = (
pl.DataFrame(data)
.with_columns(sum_ab=pl.col("a") + pl.col("b"))
.group_by("sum_ab")
.agg(pl.col("a"), pl.col("b"), pl.col("c"))
.filter(
(pl.col("a").list.unique().list.len() == 1)
& (pl.col("b").list.unique().list.len() == 1)
)
.explode(["a", "b", "c"])
.select("a", "b", "c")
)
"""
shape: (2, 3)
βββββββ¬ββββββ¬ββββββ
β a β b β c β
β --- β --- β --- β
β i64 β i64 β str β
βββββββͺββββββͺββββββ‘
β 4 β 2 β b β
β 4 β 2 β f β
βββββββ΄ββββββ΄ββββββ
"""
</code></pre>
<p>Can someone suggest a better way to achieve the same? I struggled a bit to figure this logic out, so I imagine there is a more direct/elegant way of getting the same result.</p>
|
<python><dataframe><python-polars>
|
2024-08-26 08:27:52
| 2
| 954
|
DJDuque
|
78,913,267
| 2,307,441
|
Extracting tables from a PDF with empty cells and no visible edges
|
<p>I am using <code>pdfplumber</code> to extract data from the following PDF page:</p>
<p><a href="https://i.sstatic.net/YjVmA3hx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjVmA3hx.png" alt="Data in the test pdf file" /></a></p>
<pre class="lang-py prettyprint-override"><code>import pdfplumber
pdf_file = 'D:/Input/Book1.pdf'
pdf = pdfplumber.open(pdf_file)
page = pdf.pages[0]
text = page.extract_text()
table = page.extract_tables()
for line in text.split("\n"):
print(line)
</code></pre>
<p>When I use <code>page.extract_tables()</code>, I only get the row headers, not the associated data in the table.</p>
<p>Since <code>extract_tables()</code> isn't working, I am using <code>page.extract_text()</code> to loop through it line by line. However, <code>extract_text()</code> seems to omit empty cells in the table data when reading a line.</p>
<p>Below data got extracted when using <code>extract_text()</code>:</p>
<pre class="lang-none prettyprint-override"><code>Weekly test report with multiple lines of hedder of the each page of report
col1 col2 col3 Start End Col Group
Name Name Name Date Date Col5 Col6 Col7 Currency
123 ABC 26/8/2024 26/8/2024 1000 20000 26/8/2024 USD
456 DEF New 26/8/2024 2000 15000 27/8/2024 INR
789 GES DDD 26/8/2024 26/8/2023 4000 20/4/2024 AUD
</code></pre>
<p>I want to create a data frame using the table data from the PDF.</p>
|
<python><pdf><pdfplumber>
|
2024-08-26 07:28:21
| 2
| 1,075
|
Roshan
|
78,913,241
| 13,987,643
|
Python Azure Search contains filter isn't working
|
<p>I'm trying to filter documents in my azure search index using the filter expression. I specifically want to filter the file names using a 'contains' filter and this is the search parameter I'm providing via langchain.</p>
<pre><code>search_kwargs={'k': 8, 'search_type': 'semantic_hybrid', 'filters': f"search.ismatch('Summary', 'file_name', 'full', 'any')"}
vector_search_retriever = vector_store.as_retriever(search_kwargs = search_kwargs)
</code></pre>
<p>But this filter doesn't seem to be working at all. The retriever returns all the files instead of the just the ones filtered by file name. What is wrong about this syntax?</p>
|
<python><azure><langchain><azure-cognitive-search>
|
2024-08-26 07:21:57
| 1
| 569
|
AnonymousMe
|
78,913,109
| 1,444,073
|
Process asyncio events in callbacks from within asyncio-unaware code?
|
<p>I have a Python function <code>foo</code>, that actually is a layer of many functions, and makes regular callbacks to a Python function <code>bar</code> that I supply, like <code>foo(bar)</code>. I also have an <em>asyncio event loop</em> running on the main thread, but the internals of <code>foo</code> are not aware of asyncio.</p>
<p>Now I have another thread running, that posts events to the event loop of the main thread. I want to use the callback <code>bar</code> to process those events. In other frameworks/languages, there are functions like <code>processEvents</code> in Qt, for example, that can be used to accomplish that, but I couldn't find anything similar for asyncio.</p>
<p>I've tried several things that I thought could work from my impression of the docs, like for example</p>
<pre class="lang-py prettyprint-override"><code>def bar():
asyncio.gather(*asyncio.all_tasks())
</code></pre>
<p>or</p>
<pre class="lang-py prettyprint-override"><code>def bar():
asyncio.sleep(0)
</code></pre>
<p>but the events posted from the other thread are still not being processed.</p>
<hr />
<p>Disclaimer: Better software design would probably be to make <code>foo</code> use asyncio, but I'm looking for a solution for the case when I cannot make modifications to the code of <code>foo</code>.</p>
|
<python><python-asyncio>
|
2024-08-26 06:38:15
| 1
| 4,334
|
kostrykin
|
78,912,904
| 17,519,895
|
Emit gets stuck while another thread is being executed
|
<p>This is a Flask Socket-io App, In this particular code, A stream of response is being carried out by an AI model and chunks are being emitted regularly.</p>
<p>The issue here is that the Emit gets called successfully for each chunk but is not received by the front end unless the previous thread is finished.</p>
<p>I have tried different methods , <strong>different threading, Thread pooling, concurrent.futures, Background threads, Celery and Radius Server</strong> but no progress is achieved. The problem is something else like configuration and stuff.</p>
<p>Help is really appreciated. Loosing my patience over here.</p>
<pre><code>app = Flask(__name__, template_folder=os.path.join(BASE_DIR, 'templates'),
static_folder=os.path.join(BASE_DIR, 'static'))
CORS(app, resources={r"/*": {"origins": "*"}})
app.config['CORS_HEADERS'] = 'Content-Type'
socketio = SocketIO(app, cors_allowed_origins="*", ping_timeout=120)
class CaseChat:
def __init__(self):
self.model = global_models.model
self.spec_generator = global_models.spec_generator
self.model_gtts = global_models.model_gtts
self.bytes_audio = []
def predict(self, **kwargs):
recipient_id = kwargs['recipient_id']
history = kwargs.get('history', [])
self.bytes_audio = []
try:
history.append({'role': 'user', 'content': kwargs.get('chunks') + str(kwargs.get("interview_time", ' '))})
inputs = global_models.tokenizer.apply_chat_template(history, add_generation_prompt=True, tokenize=False)
inputs = inputs.replace('''\n\nCutting Knowledge Date: December 2023\nToday Date: 26 Jul 2024\n\n''', "")
inputs = global_models.tokenizer(inputs, return_tensors='pt').to('cuda')
streamer = TextIteratorStreamer(global_models.tokenizer, skip_prompt=True, skip_special_tokens=True)
generation_kwargs = dict(inputs, streamer=streamer, max_new_tokens=512)
thread_generate = Thread(target=global_models.model.generate, kwargs=generation_kwargs)
thread_generate.start()
new_text = ""
for output in streamer:
if user_data[recipient_id]['interrupt']:
break
if not output.strip("\n"):
continue
if not len(new_text):
if new_text.startswith('assistant'):
new_text = new_text[9:]
elif new_text.startswith('user'):
new_text = new_text[4:]
new_text += output
if len(new_text.split(' ')) > 5:
history = self.response(history, new_text, kwargs.get('s_id'))
new_text = ""
if len(new_text) and not user_data[recipient_id]['interrupt']:
history = self.response(history, new_text, kwargs.get('s_id'))
audio_saver(path=f"{save_location}/{recipient_id}/{user_data[recipient_id]['time_duration']}-AI.wav",
audio=self.bytes_audio)
return history
except Exception as e:
logging.error(f"Prediction failed: {str(e)}")
history.pop(-1)
return history
def response(self, history, new_text, s_id):
new_text = re.sub(r'\([^)]*\)', '', new_text)
self.speak(new_text, s_id)
# print("Test receieved by response" + new_text)
if 'assistant' != history[-1]['role']:
history.append({'role': 'assistant', 'content': new_text})
else:
history[-1]['content'] += new_text
return history
def speak(self, text, s_id):
try:
ai_text = text.split("AI:")[-1].strip()
self.spec_generator.eval()
parsed = self.spec_generator.parse(ai_text)
spectrogram = self.spec_generator.generate_spectrogram(tokens=parsed)
audio = self.model_gtts.convert_spectrogram_to_audio(spec=spectrogram)
audio = audio.detach()
audio_np = audio.cpu().numpy().squeeze()
with io.BytesIO() as bytesio:
sf.write(bytesio, audio_np, 22050, format="wav", subtype="PCM_16")
bytesio.seek(0)
audio_data = bytesio.read()
socketio.emit('response', audio_data, to=s_id)
print("Text streamed outwards: " + text)
self.bytes_audio.append(audio_data)
return "Request received and task started", 202
except Exception as e:
logger.exception("TTS conversion failed", e)
return f"Error: {e}", 500
if __name__ == '__main__':
socketio.run(app, host='0.0.0.0', port=8080, debug=True, use_reloader=False, allow_unsafe_werkzeug=True,
log_output=True)
</code></pre>
|
<python><multithreading><flask><flask-socketio>
|
2024-08-26 04:54:52
| 0
| 421
|
Aleef
|
78,912,876
| 139,150
|
google search using playwright
|
<p>I am trying to perform google search using playwright.
But getting this error:</p>
<pre><code>playwright._impl._errors.TimeoutError: Page.fill: Timeout 60000ms exceeded.
Call log:
waiting for locator("input[name=\"q\"]")
</code></pre>
<p>Here is the code:</p>
<pre><code>from playwright.async_api import async_playwright
import asyncio
async def main():
async with async_playwright() as pw:
browser = await pw.chromium.launch(args=["--disable-gpu", "--single-process", "--headless=new"], headless=True)
page = await browser.new_page()
# Go to Google
await page.goto('https://www.google.com')
# Accept the cookies prompt (if it appears)
try:
accept_button = await page.wait_for_selector('button:has-text("I agree")', timeout=5000)
if accept_button:
await accept_button.click()
except:
pass
# Search for a query
query = "Playwright Python"
await page.fill('input[name="q"]', query, timeout=60000)
await page.press('input[name="q"]', 'Enter')
# Wait for the results to load
await page.wait_for_selector('h3')
# Extract the first result's link
first_result = await page.query_selector('h3')
first_link = await first_result.evaluate('(element) => element.closest("a").href')
print("First search result link:", first_link)
await browser.close()
if __name__ == '__main__':
asyncio.run(main())
</code></pre>
|
<python><playwright><playwright-python>
|
2024-08-26 04:41:28
| 1
| 32,554
|
shantanuo
|
78,912,816
| 68,846
|
Installing old version of scikit-learn: ModuleNotFoundError: No module named 'numpy'
|
<p>I have an old Python project that uses scikit-learn version <code>0.22.2.post1</code>. Unfortunately I am unable to update to a newer version of scikit-learn as the training data has long been lost, and I understand that the version of scikit-learn is tied to the model (stored as a .pkl file).</p>
<p>The project uses Python 3.8 and works fine with this version, but I am trying to upgrade it to use Python 3.9.19. I have managed to do this in my local dev environment, but when I try to do so in my Azure Devops pipeline, I get the following error after the command <code>pip install --target="./.python_packages/lib/site-packages" -r ./requirements.txt</code> is run:</p>
<pre><code>Building wheels for collected packages: scikit-learn
Building wheel for scikit-learn (pyproject.toml): started
Building wheel for scikit-learn (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
Γ Building wheel for scikit-learn (pyproject.toml) did not run successfully.
β exit code: 1
β°β> [32 lines of output]
<string>:12: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
Partial import of sklearn during the build process.
Traceback (most recent call last):
File "<string>", line 195, in check_package_status
File "/opt/hostedtoolcache/Python/3.9.19/x64/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 984, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'numpy'
</code></pre>
<p>The fact that I can do the upgrade locally (same OS, same version of Python, same version of PIP) gives me hope that this is a fixable problem. When I run the same command locally, PIP outputs:</p>
<pre><code>Installing collected packages: azure-functions, numpy, cython, pandas, nltk, flask, xgboost, scikit-learn, spacy
Successfully installed azure-functions-1.20.0 cython-0.29.36 flask-3.0.3 nltk-3.9.1 numpy-1.19.5 pandas-1.4.4 scikit-learn-0.22.2.post1 spacy-3.7.6 xgboost-1.1.1
</code></pre>
<p>So the big difference is that the pipeline attempts to build a wheel, while locally it does not. Perhaps I can work around this problem by getting the pipeline to not build a wheel? I have tried using <code>--no-cache-dir</code> or <code>--no-binary="scikit-learn"</code> when calling pip but unfortunately it still attempts to build a wheel and therefore still fails. I have also tried doing <code>pip install numpy==1.19.5</code> immediately before the existing call to pip, in the hope that scikit-learn would then find numpy, but I still get the same error. I've also tried to install numpy and scikit-learn together in a separate call to pip (<code>pip install --no-cache-dir --no-binary="scikit-learn" --target="./.python_packages/lib/site-packages" numpy==1.19.5 scikit-learn==0.22.2.post1</code>) but again, same error. In case it matters, my requirements.txt file looks like this:</p>
<pre><code>numpy<1.20.0
azure-functions
cython==0.29.36
scikit-learn==0.22.2.post1
pandas>=0.25.1
spacy==3.7.6
(a few other libraries that I don't think are relevant to the problem)
</code></pre>
<p>Is there any way to force pip to not build a wheel, or otherwise to fix this error where scikit-learn can't find numpy?</p>
|
<python><numpy><scikit-learn><pip><python-3.9>
|
2024-08-26 03:54:31
| 1
| 700
|
Justin
|
78,912,599
| 1,447,953
|
Bokeh: how to create customJS button to clear DataTable selection?
|
<p>I have a DataTable that I am using to select points in a plot, and I have created a "clear selection" button that is suppose to "unselect" all points, i.e. reset the selection. However, it appears to only operate on data that is in the current "view" of the DataTable, despite my customJS acting directly on the ColumnDataSource. Here is a MWE to show what I mean:</p>
<pre><code>import numpy as np
import pandas as pd
from bokeh.io import show, push_notebook
from bokeh.layouts import column
from bokeh.plotting import figure
from bokeh.models import ColumnDataSource, CDSView, CustomJS, CustomJSFilter, Slider, TableColumn, DataTable, SelectEditor, Button
x = np.arange(0, 10, 0.1)
dfs = []
tstep = 1
ts = range(0, 100, tstep)
for t in ts:
y = x**(t/50.)
dfs.append(pd.DataFrame({"x": x, "y": y, "t": t}))
df = pd.concat(dfs)
cds = ColumnDataSource(df)
t_slider = Slider(start=ts[0], end=ts[-1], step=tstep, value=0)
# Callback to notify downstream objects of data change
change_callback = CustomJS(args=dict(source=cds), code="""
source.change.emit();
""")
t_slider.js_on_change('value', change_callback)
# JS filter to select data rows matching t value on slider
js_filter = CustomJSFilter(args=dict(slider=t_slider), code="""
const indices = [];
// iterate through rows of data source and see if each satisfies some constraint
for (let i = 0; i < source.get_length(); i++){
if (source.data['t'][i] == slider.value){
indices.push(true);
} else {
indices.push(false);
}
}
return indices;
""")
# Use the filter in a view
view = CDSView(filter=js_filter)
# Add table to use for selecting data
columns = [
TableColumn(field="x", title="x", editor=SelectEditor()),
TableColumn(field="y", title="y", editor=SelectEditor()),
]
data_table = DataTable(source=cds, columns=columns, selectable="checkbox", width=800, view=view)
p = figure(x_range=(0,10), y_range=(0,100))
p.scatter(x='x', y='y', source=cds, view=view)
# Button to clear selection
clear_button = Button(label="Clear selection", button_type="success")
custom_js_button = CustomJS(args=dict(source=cds), code="""
source.selected.indices = [];
""")
clear_button.js_on_event("button_click", custom_js_button)
layout = column(p, t_slider, data_table, clear_button)
show(layout)
</code></pre>
<p>So, I have a <code>CDSView</code> that only shows one <code>t</code> slice at a time of the data in both the plot and the DataTable. I can select points in the datatable just fine, and they show up selected in the plot. For example:</p>
<p><a href="https://i.sstatic.net/19cDvA03.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19cDvA03.png" alt="Selected points" /></a></p>
<p>If I then press the "clear" button with this t-slice selected, it works just fine. However, if I move the slider to a different t-slice and press the button, it does not work; if I slide back to the previous slice then the previous selection is still there. Why? I am wracking my brain and tried reading through the docs on these Bokeh objects but cannot understand why this would be happening. On the face of it the "clear" button has no idea what the current "view" or slider value is, it should just reset the <code>selection.indices</code> entirely. But somehow it doesn't, or the selection gets somehow restored by some hidden mechanism in the DataTable object?</p>
|
<python><bokeh>
|
2024-08-26 01:28:54
| 0
| 2,974
|
Ben Farmer
|
78,912,403
| 5,692,012
|
Analyse application audio with Python
|
<p>I have some audio that's played from some applications (Firefox in my example), and I'd like to analyze it with Numpy.</p>
<p>I've came up with this code:</p>
<pre class="lang-py prettyprint-override"><code>import pyaudio
import numpy as np
p = pyaudio.PyAudio()
BUFFER = 1024
for i in range(p.get_device_count()):
info = p.get_device_info_by_index(i)
if info['name'] == 'Firefox':
break
stream = p.open(
format=pyaudio.paInt16,
channels=2,
rate=int(info['defaultSampleRate']),
input=True,
input_device_index=info['index'],
frames_per_buffer=BUFFER
)
while True:
data = stream.read(BUFFER)
audio_data = np.frombuffer(data, dtype=np.int16)
print(audio_data)
</code></pre>
<p>But I'm getting <code>OSError: [Errno -9997] Invalid sample rate</code> if I keep this default rate. If I switch to <code>rate=44100</code>, I'm getting a core dump:</p>
<pre><code>malloc(): invalid size (unsorted)
[1] 576600 IOT instruction (core dumped)
</code></pre>
<p>What aspect of with my setup is flawed? How can I simply evaluate audio output from Linux application?</p>
|
<python><linux><audio><pyaudio>
|
2024-08-25 22:47:16
| 0
| 5,694
|
RobinFrcd
|
78,912,354
| 3,456,812
|
Cannot resolve "missing" Djanjo when trying to run label-studio
|
<p>I want to use the popular package label-studio to build training data using images that I'll apply the AI labelling steps on.</p>
<p>The problem is that after cloning the repo from github and following all the instructions, I'm getting the following error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/markalsip/label-studio/label_studio/manage.py", line 11, in <module>
from django.conf import settings
ModuleNotFoundError: No module named 'django'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/markalsip/label-studio/label_studio/manage.py", line 18, in <module>
raise ImportError(
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
</code></pre>
<p>The first problem is that django IS installed and my PYTHONPATH environment variable, by all means at my disposal, is correct:</p>
<pre><code>django-admin --version
</code></pre>
<p>returns: <code>3.2.25</code></p>
<pre><code>echo $PYTHONPATH
</code></pre>
<p>returns: <code>/Users/markalsip/label-studio/env/lib/python3.12/site-packages:</code></p>
<p>I should point out the instructions for this program indicate I should run it in what I believe to be a virtual environment. I have done this (my current root folder is Marks-MBP:label-studio, thus the text shown in the prompt):</p>
<pre><code>Marks-MBP:label-studio markalsip$ source /Users/markalsip/label-studio/env/bin/activate
(env) Marks-MBP:label-studio markalsip$ python --version
</code></pre>
<p>returns: <code>Python 3.12.0</code></p>
<p>To run the python program, I am told to:</p>
<pre><code>python label_studio/manage.py runserver
</code></pre>
<p>I should back up and say that ALL of the things I'm doing here, such as checking paths, environment variables, etc., is done while I'm in this environment. I got there by the command:</p>
<pre><code>source /Users/markalsip/label-studio/env/bin/activate
</code></pre>
<p>What I've tried:</p>
<ul>
<li>Uninstalling and reinstalling django</li>
<li>Uninstalling and reinstalling the label_studio</li>
<li>Listed installed packages from both the command line, as well as directly accessing them (e.g. the django django-admin --version, wbich returns 3.2.25)</li>
</ul>
<p>Verifying my environment variable don't look wonky. My entire .bash_profile is pretty simplistic:</p>
<pre><code># Setting PATH for Python 3.12
# The original version is saved in .bash_profile.pysave
PATH="/Library/Frameworks/Python.framework/Versions/3.12/bin:${PATH}"
PYTHONPATH="/Users/markalsip/label-studio/env/lib/python3.12/site-packages:$PYTHONPATH"
export PATH
export PYTHONPATH
</code></pre>
<p>I also found a recommendation to ensure there are no .txt files in the label program source folder that list additional dependencies. I find no such file.</p>
<p>I saw a note during the install process that django itself had two dependencies. I checked and both of those are installed.</p>
<p>EDIT EDIT EDIT</p>
<p>Following the suggestions in comments, I first saw the django wasn't in the environment path I set up, so I backed up, created a new environment, installed django, and verified that it is indeed in the path suggested in the comments. I've installed all packages "clean" in this new environment.</p>
<p>The error I now receive is similar to the last one, but a bit different:</p>
<p>(venv) Marks-MBP:label-studio markalsip$ python label_studio/manage.py runserver
Traceback (most recent call last):
File "/Users/markalsip/label-studio/label_studio/manage.py", line 11, in
from django.conf import settings
File "/Users/markalsip/label-studio/venv/lib/python3.12/site-packages/django/conf/<strong>init</strong>.py", line 19, in
from django.utils.deprecation import RemovedInDjango60Warning
File "/Users/markalsip/label-studio/venv/lib/python3.12/site-packages/django/utils/deprecation.py", line 4, in
from asgiref.sync import iscoroutinefunction, markcoroutinefunction, sync_to_async
ModuleNotFoundError: No module named 'asgiref'</p>
<p>The above exception was the direct cause of the following exception:</p>
<p>Traceback (most recent call last):
File "/Users/markalsip/label-studio/label_studio/manage.py", line 18, in
raise ImportError(
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
(venv) Marks-MBP:label-studio markalsip$</p>
<p>I'm clearly in a virtual environment. My PYTHONPATH is as follows:</p>
<p>(venv) Marks-MBP:label-studio markalsip$ echo $PYTHONPATH
/Users/markalsip/label-studio/venv/lib/python3.12/site-packages:</p>
<p>A listing of the site-packages folder:</p>
<p>drwxr-xr-x 12 markalsip staff 384 Aug 26 16:57 Django-5.1.dist-info
drwxr-xr-x 21 markalsip staff 672 Aug 26 16:57 django
drwxr-xr-x 9 markalsip staff 288 Aug 26 16:57 pip
drwxr-xr-x 11 markalsip staff 352 Aug 26 16:57 pip-23.2.1.dist-info</p>
<p>So django IS THERE.</p>
<p>I did find a somewhat related post that suggested I try the following:</p>
<p>export DJANGO_SETTINGS_MODULE=core.settings.label_studio</p>
<p>So I've tried that from within the environment but it didn't make a difference.</p>
<p>The one thing that's worrisome is the very last line:</p>
<p>(venv) Marks-MBP:site-packages markalsip$ django-admin --version
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.12/bin/django-admin", line 5, in
from django.core.management import execute_from_command_line
File "/Users/markalsip/label-studio/venv/lib/python3.12/site-packages/django/core/management/<strong>init</strong>.py", line 17, in
from django.conf import settings
File "/Users/markalsip/label-studio/venv/lib/python3.12/site-packages/django/conf/<strong>init</strong>.py", line 19, in
from django.utils.deprecation import RemovedInDjango60Warning
File "/Users/markalsip/label-studio/venv/lib/python3.12/site-packages/django/utils/deprecation.py", line 4, in
from asgiref.sync import iscoroutinefunction, markcoroutinefunction, sync_to_async
<strong>ModuleNotFoundError: No module named 'asgiref'</strong></p>
<p><strong>I'm now trying to find out what the heck asgiref is and why I was never asked to install or otherwise reference it with django</strong></p>
|
<python><django><labeling>
|
2024-08-25 21:59:11
| 0
| 1,305
|
markaaronky
|
78,912,286
| 3,710,004
|
Why use await and wait_for() in the same line in Playwright?
|
<p>I am following an online tutorial for scraping with Playwright (<a href="https://jsoma.github.io/advanced-scraping-with-playwright/" rel="nofollow noreferrer">https://jsoma.github.io/advanced-scraping-with-playwright/</a>). There is one line that uses both await and wait_for(), and I don't understand why. Isn't that redundant? Why not just use one or the other?</p>
<pre><code>township_numbers = ['129', '130', '135']
for num in township_numbers:
print("Searching for page", num)
await page.locator("#ddmTownship").select_option(num)
await page.get_by_text("Submit", exact=True).click()
# Wait for the table to show up - this is the line that confuses me
await page.get_by_text('CTB No').wait_for()
html = await page.content()
tables = pd.read_html(html)
df = tables[2]
filename = f"{num}.csv"
print("Got it - saving as", filename)
df.to_csv(filename, index=False)
</code></pre>
|
<python><web-scraping><playwright><playwright-python>
|
2024-08-25 21:14:00
| 1
| 686
|
user3710004
|
78,912,249
| 338,044
|
Basic Advanced Stats Table in NBA API
|
<p>I am trying to do something that I thought would be simple but seems somewhat complicated, although maybe I'm missing something. I'm trying to use <a href="https://github.com/swar/nba_api/tree/master" rel="nofollow noreferrer">NBA API</a> to get a dataframe that just shows the standard advanced stats from NBA.com for active players. So literally this table: <a href="https://www.nba.com/stats/players/advanced" rel="nofollow noreferrer">https://www.nba.com/stats/players/advanced</a></p>
<p>Is there a simple way to do this?</p>
|
<python><nba-api>
|
2024-08-25 20:51:25
| 1
| 733
|
Jake
|
78,912,227
| 5,287,011
|
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'list'>
|
<p>I am experimenting with LLM development.</p>
<p>Here is my code:</p>
<pre><code>import langchain, pydantic, transformers
from langchain import HuggingFacePipeline
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables.base import RunnableSequence, RunnableMap, RunnableLambda
from langchain.callbacks import get_openai_callback
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser
from transformers import pipeline
class MedicalSpecialty(BaseModel):
medical_specialty: str = Field(description="medical specialty the patient should go to")
urgent: bool = Field(description="the patient should go to the hospital immediately")
parser = PydanticOutputParser(pydantic_object=MedicalSpecialty)
queries = ["i have ache in my chest and in my left arm. Which medical specialty should I go to?"]
template = """
Question: {question}
"""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm = HuggingFacePipeline.from_model_id(
model_id="bigscience/bloom-1b7",
task="text-generation",
model_kwargs={"max_length": 1024},
device=-1 # Ensure it runs on CPU for macOS M1
)
# Wrap the prompt in a RunnableLambda to make it a Runnable
prompt_runnable = RunnableLambda(lambda x: prompt.format(**x))
# Define the sequence that includes the prompt and LLM
sequence = RunnableSequence([
prompt_runnable,
llm
])
with get_openai_callback() as CB:
for query in queries:
result = sequence.invoke({"question": query})
print(query)
print(result)
print("====================================")
# Print the costs of the requests
print(cb)
</code></pre>
<p>Unfortunately, after several iterations, I keep getting this error:</p>
<pre><code>TypeError Traceback (most recent call last)
Cell In[6], line 19
16 prompt_runnable = RunnableLambda(lambda x: prompt.format(**x))
18 # Define the sequence that includes the prompt and LLM
---> 19 sequence = RunnableSequence([
20 prompt_runnable,
21 llm
22 ])
24 with get_openai_callback() as CB:
25 for query in queries:
File /opt/anaconda3/envs/LLM/lib/python3.11/site- packages/langchain_core/runnables/base.py:2632, in RunnableSequence.__init__(self, name, first, middle, last, *steps)
2630 steps_flat.extend(step.steps)
2631 else:
-> 2632 steps_flat.append(coerce_to_runnable(step))
2633 if len(steps_flat) < 2:
2634 raise ValueError(
2635 f"RunnableSequence must have at least 2 steps, got {len(steps_flat)}"
2636 )
File /opt/anaconda3/envs/LLM/lib/python3.11/site- packages/langchain_core/runnables/base.py:5554, in coerce_to_runnable(thing)
5552 return cast(Runnable[Input, Output], RunnableParallel(thing))
5553 else:
-> 5554 raise TypeError(
5555 f"Expected a Runnable, callable or dict."
5556 f"Instead got an unsupported type: {type(thing)}"
5557 )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'list'>
</code></pre>
<p>Please, someone help!</p>
|
<python><langchain><runnable><large-language-model><huggingface>
|
2024-08-25 20:40:18
| 1
| 3,209
|
Toly
|
78,912,212
| 7,884,305
|
How to make CPython report vectorcall as available only when it will actually help performance?
|
<p>The <a href="https://docs.python.org/3/c-api/call.html#the-vectorcall-protocol" rel="noreferrer">Vectorcall</a> protocol is a new calling convention for Python's C API defined in <a href="https://peps.python.org/pep-0590/" rel="noreferrer">PEP 590</a>. The idea is to speed up calls in Python by avoiding the need to build intermediate tuples and dicts, and instead pass all arguments in a C array.</p>
<p>Python supports checking if a callable supports vectorcall by checking if the result of <a href="https://docs.python.org/3/c-api/call.html#c.PyVectorcall_Function" rel="noreferrer"><code>PyVectorcall_Function()</code></a> is not NULL. However, it appears that functions support vectorcall even when using it will actually harm performance.</p>
<p>For example, take the following simple function:</p>
<pre class="lang-py prettyprint-override"><code>def foo(*args): pass
</code></pre>
<p>This function won't benefit from vectorcall - because it collects <code>args</code>, Python needs to collect the arguments into a tuple anyway. So if I will allocate a tuple instead of a C style array, it will be faster. I also benchmarked this:</p>
<pre class="lang-rust prettyprint-override"><code>use std::hint::black_box;
use criterion::{criterion_group, criterion_main, Criterion};
use pyo3::conversion::ToPyObject;
use pyo3::ffi;
use pyo3::prelude::*;
fn criterion_benchmark(c: &mut Criterion) {
Python::with_gil(|py| {
let module = PyModule::from_code(
py,
cr#"
def foo(*args): pass
"#,
c"args_module.py",
c"args_module",
)
.unwrap();
let foo = module.getattr("foo").unwrap();
let args_arr = black_box([
1.to_object(py).into_ptr(),
"a".to_object(py).into_ptr(),
true.to_object(py).into_ptr(),
]);
unsafe {
assert!(ffi::PyVectorcall_Function(foo.as_ptr()).is_some());
}
c.bench_function("vectorcall - vectorcall", |b| {
b.iter(|| unsafe {
let args = vec![args_arr[0], args_arr[1], args_arr[2]];
let result = black_box(ffi::PyObject_Vectorcall(
foo.as_ptr(),
args.as_ptr(),
3,
std::ptr::null_mut(),
));
ffi::Py_DECREF(result);
})
});
c.bench_function("vectorcall - regular call", |b| {
b.iter(|| unsafe {
let args = ffi::PyTuple_New(3);
ffi::Py_INCREF(args_arr[0]);
ffi::PyTuple_SET_ITEM(args, 0, args_arr[0]);
ffi::Py_INCREF(args_arr[1]);
ffi::PyTuple_SET_ITEM(args, 1, args_arr[1]);
ffi::Py_INCREF(args_arr[2]);
ffi::PyTuple_SET_ITEM(args, 2, args_arr[2]);
let result =
black_box(ffi::PyObject_Call(foo.as_ptr(), args, std::ptr::null_mut()));
ffi::Py_DECREF(result);
ffi::Py_DECREF(args);
})
});
});
}
criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
</code></pre>
<p>The benchmark is in Rust and uses the convenient functions of the <a href="https://docs.rs/pyo3" rel="noreferrer">PyO3</a> framework, but the core work is done using raw FFI calls to the C API, so this shouldn't affect the results.</p>
<p>Results:</p>
<pre class="lang-none prettyprint-override"><code>vectorcall - vectorcall time: [51.008 ns 51.263 ns 51.530 ns]
vectorcall - regular call
time: [35.638 ns 35.826 ns 36.022 ns]
</code></pre>
<p>The benchmark confirms my suspicion: Python has to do additional works when I use the vectorcall API.</p>
<p>On the other hand, the vectorcall API can be more performant than using tuples even when needing to allocate memory, for example when calling a bound method with the <a href="https://docs.python.org/3/c-api/call.html#c.PY_VECTORCALL_ARGUMENTS_OFFSET" rel="noreferrer"><code>PY_VECTORCALL_ARGUMENTS_OFFSET</code></a> flag. A benchmark confirms that too.</p>
<p>So here is my question: Is there a way to know when a vectorcall won't help and even do damage, or alternatively, when a vectorcall can help?</p>
<hr />
<p>Context, even though I don't think it's relevant:</p>
<p>I'm experimenting with a <code>pycall!()</code> macro for PyO3. The macro has the ability to call with normal parameters, but also unpack parameters, and should do so in the most efficient way possible.</p>
<p>Using vectorcall where available sounds like a good idea; but then I'm facing this obstacle where I cannot know if I should prefer converting directly to a tuple or to a C-style array for vectorcall.</p>
|
<python><c><rust><python-c-api>
|
2024-08-25 20:35:22
| 1
| 75,761
|
Chayim Friedman
|
78,912,175
| 17,729,094
|
Combine cross between 2 dataframe efficiently
|
<p>I am working with 2 datasets. One describes some time windows by their start and stop times. The second one contains a big list of events with their corresponding timestamps.</p>
<p>I want to combine this into a single dataframe that contains the start and stop time of each window, together with how many events happened during this time window.</p>
<p>I have managed to "solve" my problem with:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
actions = {
"id": ["a", "a", "a", "a", "b", "b", "a", "a"],
"action": ["start", "stop", "start", "stop", "start", "stop", "start", "stop"],
"time": [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0],
}
events = {
"name": ["x", "x", "x", "y", "y", "z", "w", "w", "w"],
"time": [0.0, 0.1, 0.5, 1.1, 2.5, 3.0, 4.5, 4.9, 5.5],
}
actions_df = (
pl.DataFrame(actions)
.group_by("id")
.agg(
start=pl.col("time").filter(pl.col("action") == "start"),
stop=pl.col("time").filter(pl.col("action") == "stop"),
)
.explode(["start", "stop"])
)
df = (
actions_df.join(pl.DataFrame(events), how="cross")
.filter((pl.col("time") >= pl.col("start")) & (pl.col("time") <= pl.col("stop")))
.group_by(["id", "start", "stop", "name"])
.agg(count=pl.count("name"))
.pivot("name", index=["id", "start", "stop"], values="count")
.fill_null(0)
)
result_df = (
actions_df.join(df, on=["id", "start", "stop"], how="left")
.fill_null(0)
.sort("start")
)
print(result_df)
"""
βββββββ¬ββββββββ¬βββββββ¬ββββββ¬ββββββ¬ββββββ¬ββββββ
β id β start β stop β w β y β x β z β
β --- β --- β --- β --- β --- β --- β --- β
β str β f64 β f64 β u32 β u32 β u32 β u32 β
βββββββͺββββββββͺβββββββͺββββββͺββββββͺββββββͺββββββ‘
β a β 0.0 β 1.0 β 0 β 0 β 3 β 0 β
β a β 2.0 β 3.0 β 0 β 1 β 0 β 1 β
β b β 4.0 β 5.0 β 2 β 0 β 0 β 0 β
β a β 6.0 β 7.0 β 0 β 0 β 0 β 0 β
βββββββ΄ββββββββ΄βββββββ΄ββββββ΄ββββββ΄ββββββ΄ββββββ
"""
</code></pre>
<p>My issue is that this approach "explodes" in RAM and my process gets killed. I guess that the <code>join(... how="cross")</code> makes my dataframe huge, just to then ignore most of it again.</p>
<p>Can I get some help/hints on a better way to solve this?</p>
<p>To give some orders of magnitude, my "actions" datasets have on the order of 100-500 time windows (~1 MB), and my "events" datasets have on the order of ~10 million (~200 MB). And I am getting my process killed with 16 GB of RAM.</p>
<p><strong>EDIT</strong>
In real data, my intervals can be overlapping. Thanks to @RomanPekar for bringing this up.</p>
|
<python><dataframe><python-polars>
|
2024-08-25 20:14:30
| 4
| 954
|
DJDuque
|
78,912,140
| 6,141,238
|
In Keras, how can I save and load a neural network model that includes a custom loss function?
|
<p>I am having difficulty saving and reloading a neural network model when I use a custom loss function. For example, in the code below (which integrates the suggestions of the related questions <a href="https://stackoverflow.com/questions/48373845/loading-model-with-custom-loss-keras">here</a> and <a href="https://stackoverflow.com/questions/60609722/how-to-load-a-keras-model-with-a-custom-loss-function">here</a>), "Save/Load Attempt 0" works without errors, while "Save/Load Attempt 1" does not, returning the cryptic error <code>TypeError: string indices must be integers, not 'str'</code> regardless of whether the model is loaded with the parameters <code>compile=False</code> or <code>custom_objects={'loss': custom_loss}</code>. How can I modify "Save/Load Attempt 1" to be successful?</p>
<pre><code>import os
os.environ['TF_ENABLE_ONEDNN_OPTS'] = '0'
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '1'
from keras.models import Sequential, load_model
from keras.layers import Input, Dense
from keras import ops
path = 'C:/Users/.../AppData/Local/Programs/Python/Python312/.../' # The ... represent single folders.
# ----------------------------------------------------------------------------------------------------
# Save/Load Attempt 0
dnn = Sequential()
dnn.add(Input(shape=(3,)))
dnn.add(Dense(units=5, activation='relu'))
dnn.add(Dense(units=1))
dnn.compile(loss='mean_absolute_error', optimizer='adam')
model_path_0 = path + 'dnn_0.h5'
dnn.save(model_path_0)
dnn = load_model(model_path_0)
# ----------------------------------------------------------------------------------------------------
print('---')
# Save/Load Attempt 1
def custom_loss(y_true, y_pred):
squared_difference = ops.square(y_true - y_pred)
return ops.mean(squared_difference, axis=-1) # flattens squared_difference
dnn = Sequential()
dnn.add(Input(shape=(3,)))
dnn.add(Dense(units=5, activation='relu'))
dnn.add(Dense(units=1))
dnn.compile(loss=custom_loss, optimizer='adam')
model_path_1 = path + 'dnn_1.h5'
dnn.save(model_path_1)
dnn = load_model(model_path_1, custom_objects={'loss': custom_loss})
# dnn = load_model(model_path_1, compile=False)
# ----------------------------------------------------------------------------------------------------
</code></pre>
<hr />
<p>For reference, the traceback of the error is as follows.</p>
<pre><code>Traceback (most recent call last):
File "c:\Users\...\AppData\Local\Programs\Python\Python312\...\test_load_model.py", line 43, in <module>
dnn = load_model(model_path_1, compile=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\saving\saving_api.py", line 183, in load_model
return legacy_h5_format.load_model_from_hdf5(filepath)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\legacy\saving\legacy_h5_format.py", line 155, in load_model_from_hdf5
**saving_utils.compile_args_from_training_config(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\legacy\saving\saving_utils.py", line 145, in compile_args_from_training_config
loss = _resolve_compile_arguments_compat(loss, loss_config, losses)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\...\AppData\Local\Programs\Python\Python312\Lib\site-packages\keras\src\legacy\saving\saving_utils.py", line 245, in _resolve_compile_arguments_compat
obj = module.get(obj_config["config"]["name"])
~~~~~~~~~~^^^^^^^^^^
TypeError: string indices must be integers, not 'str'
</code></pre>
<hr />
<p><strong>Solution.</strong> Following stateMachine's answer, I resolved the error with the following three changes:</p>
<ol>
<li>I uninstalled Keras 3.1.1 (<code>pip uninstall keras</code>) and installed 3.5.0 (<code>pip install keras</code>).</li>
<li>I changed the extensions I was using for saving and loading from <code>.h5</code> to the newer <code>.keras</code>.</li>
<li>I set <code>compile=False</code> in the <code>load_model</code> command and instead compiled the model with a <code>dnn.compile</code> command after loading it.</li>
</ol>
|
<python><tensorflow><keras><loss-function>
|
2024-08-25 19:57:58
| 1
| 427
|
SapereAude
|
78,912,109
| 8,954,109
|
Python shortcut for overloading functions over an input union type?
|
<p>aka: <em>How do I make a function return the same type as an Optional input?</em></p>
<p>I started with this:</p>
<pre class="lang-py prettyprint-override"><code>import random
import typing
def get_maybe_str():
return random.choice([" ", None])
def optional_upper(s: typing.Optional[str]):
if s is None:
return s
return s.upper()
def example(foo:str):
bar = get_maybe_str()
if bar is None:
return
bar = optional_upper(bar)
foobar = foo + bar
</code></pre>
<p>This results in the static error:</p>
<blockquote>
<p><code>Operator "+" not supported for types "str" and "str | None" Operator "+" not supported for types "str" and "None" PylancereportOperatorIssue</code></p>
</blockquote>
<p>I figured out that this error can be overcome via overloaded function annotations:</p>
<pre class="lang-py prettyprint-override"><code>@typing.overload
def optional_upper(s: str) -> str: ...
@typing.overload
def optional_upper(s: None) -> None: ...
def optional_upper(s: typing.Optional[str]):
if s is None:
return s
return s.upper()
</code></pre>
<p>This works but is very verbose. <strong>Is there a simpler way to express this?</strong></p>
<hr />
<p>This is probably a duplicate of something, but I couldn't find it while googling so I hope posting this can help someone else's searches.</p>
|
<python><python-typing>
|
2024-08-25 19:42:38
| 2
| 693
|
plswork04
|
78,911,935
| 12,466,687
|
How to delimit text stored in variable to create a dataframe in python?
|
<p>I have parsed a <code>pdf file</code> using ai models and got parsed <code>markdown</code> results which is saved in a <strong>variable</strong> <code>doc_parsed</code>. Printing below its sample contents by code <code>print(doc_parsed[2].text[:1000])</code></p>
<pre><code># Details
|Name|Mr. XYZ|
|---|---|
|Age/Sex|XX YRS/X|
|Id.|01x40xxxxx|
|Refered By|Self|
|Collection On|xx/Aug/20xx 0x:x0AM|
|Collected By|xxxxxxx|
|Sample Rec. On|xx/Aug/20xx xx:x0 AM|
|Collection Mode|HOME COLLECTION|
|Reporting On|xx/Aug/20xx 0x:xx PM|
|BarCode|xxxxxx|
# Test Results
|Test Name|Result|Biological Ref. Int.|Unit|
|---|---|---|---|
|Electrolyte Profile, Serum| | | |
|SODIUM (Na+)|136.2|136 - 145|mmol/L|
|POTASSIUM (K+)|4.23|3.5 - 5.5|mmol/L|
|CHLORIDE(Cl-)|106.24|98.0 - 107|mmol/L|
|TOTAL CALCIUM (Ca)|9.00|8.6-10.2|mg/dL|
|IONIZED CALCIUM|4.52|4.4 - 5.4|mg/dl|
|NON-IONIZED CALCIUM|4.49|4.4 - 5.4|mg/dl|
|pH.(Method : ISE Direct)|7.39|7.35 - 7.45| |
</code></pre>
<p><strong>ISSUE:</strong> I have tried several ways to <strong>split</strong> this into <strong>columns</strong> of dataframe with <strong>delimeter</strong> as <code>|</code> by using <code>pd.read_csv()</code> & <code>pd.read_table()</code> but none worked.</p>
<pre><code>import pandas as pd
import io
pd.read_table(doc_parsed[2].text[:1000], sep="|")
</code></pre>
<blockquote>
<p>ValueError: Invalid file path or buffer object type: <class 'llama_index.core.schema.Document'></p>
</blockquote>
<pre><code>import io
input_text = io.StringIO(print(doc_parsed[2].text[:1000]))
pd.read_csv(input_text,header=None, delimiter="|",
usecols = ["Parameter Name", "Result","Unit","Reference Range"])
</code></pre>
<blockquote>
<p>EmptyDataError: No columns to parse from file</p>
</blockquote>
<pre><code>pd.read_csv(input_text,header=None, delimiter="|")
</code></pre>
<blockquote>
<p>EmptyDataError: No columns to parse from file</p>
</blockquote>
<p>Appreciate any help here.</p>
|
<python><pandas><csv><markdown><delimiter>
|
2024-08-25 18:25:06
| 3
| 2,357
|
ViSa
|
78,911,781
| 823,633
|
numba doesn't work with numpy.polynomial.polynomial.Polynomial?
|
<p>This code</p>
<pre><code>import numba
import numpy
@numba.jit
def test(*coeffs):
poly = numpy.polynomial.polynomial.Polynomial(coeffs)
return poly(10)
c = (2,1)
test(*c)
</code></pre>
<p>Generates the error</p>
<pre><code>No implementation of function Function(<class 'numpy.polynomial.polynomial.Polynomial'>) found for signature:
>>> Polynomial(UniTuple(int64 x 2))
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload of function 'Polynomial': File: numba\core\extending.py: Line 40.
With argument(s): '(UniTuple(int64 x 2))':
No match.
During: resolving callee type: Function(<class 'numpy.polynomial.polynomial.Polynomial'>)
During: typing of call at <ipython-input-22-2355bd6d2aa0> (7)
File "<ipython-input-22-2355bd6d2aa0>", line 7:
def test(*coeffs):
poly = numpy.polynomial.polynomial.Polynomial(coeffs)
^
</code></pre>
<p>This is despite being on numba version 0.60 which should <a href="https://numba.readthedocs.io/en/stable/reference/numpysupported.html#polynomials" rel="nofollow noreferrer">support</a> the new numpy polynomial API <code>numpy.polynomial.polynomial.Polynomial</code></p>
|
<python><numba>
|
2024-08-25 17:00:42
| 1
| 1,410
|
goweon
|
78,911,751
| 25,874,132
|
How to find the minimum distance between two matrices in python? (special definitions)
|
<p>I have 4 permutation matrices A, B, C, D.
we define the distance d(C, D) to be the minimal number of matrix multiplications needed to satisfy the equation D = X.C.Y where the dot is matrix multiplication and X and Y are some sort of product of A, B, A^-1, B^-1 (for example X = A.A.B^-1, Y = B.B^-1.A^-1).</p>
<p>We were given a hint to use BFS and the following examples which result in d(C1, D1) = 3 and d(C2, D2) = 8.</p>
<pre class="lang-py prettyprint-override"><code>
# for these the distance d(C1, D1) is 3
A1 = np.array([[0, 1, 0, 0, 0], [1, 0, 0, 0, 0], [0, 0, 1, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]])
B1 = np.array([[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [0, 0, 0, 1, 0], [0, 0, 0, 0, 1], [0, 0, 1, 0, 0]])
C1 = np.array([[0, 1, 0, 0, 0], [0, 0, 0, 0, 1], [0, 0, 0, 1, 0], [0, 0, 1, 0, 0], [1, 0, 0, 0, 0]])
D1 = np.array([[0, 0, 0, 1, 0], [0, 1, 0, 0, 0], [0, 0, 0, 0, 1], [1, 0, 0, 0, 0], [0, 0, 1, 0, 0]])
#for these the distance d(C2, D2) is 8
A2 = np.array([[0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0]])
B2 = np.array([[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 1, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 0, 0]])
C2 = np.array([[1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 1, 0, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0]])
D2 = np.array([[1, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 1]])
</code></pre>
<p>and I'm tasked with finding the distance to the following matrices:</p>
<pre class="lang-py prettyprint-override"><code>
A = [[1, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 1], [0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0]]
B = [[0, 1, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0, 0]]
C = [[0, 0, 0, 0, 0, 0, 1], [0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 1, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]
D = [[1, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0], [0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1], [0, 0, 1, 0, 0, 0, 0]]
</code></pre>
<p>I've tried to do it with a double-sided queue and it seemed like it worked but when I submitted my answer it was wrong, the prof isn't responding to emails, and also when he answers you end up more confused than you began with.</p>
<p>so I'm looking for suggestions to solve this problem, we're not limited to any package/library</p>
<p>btw this is what I did at first:</p>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
from collections import deque
def bfs_min_multiplications(A, A_inv, B, B_inv, C, D):
# Convert matrices to numpy arrays for easier manipulation
A = np.array(A)
A_inv = np.array(A_inv)
B = np.array(B)
B_inv = np.array(B_inv)
C = np.array(C)
D = np.array(D)
# Initialize the queue with the initial state (C) and 0 multiplications
queue = deque([(C, 0, [])])
visited = set()
visited.add(tuple(C.flatten()))
while queue:
current_matrix, steps, path = queue.popleft()
# Check if we have reached the goal state
if np.array_equal(current_matrix, D):
return steps, path
# Generate all possible next states
for matrix, name in [(A, 'A'), (A_inv, 'A^-1'), (B, 'B'), (B_inv, 'B^-1')]:
for operation, op_name in [(np.dot(matrix, current_matrix), f"{name}.C"),
(np.dot(current_matrix, matrix), f"C.{name}")]:
operation_tuple = tuple(operation.flatten())
if operation_tuple not in visited:
visited.add(operation_tuple)
queue.append((operation, steps + 1, path + [op_name]))
return -1, [] # If no solution is found
</code></pre>
|
<python><numpy><matrix><breadth-first-search>
|
2024-08-25 16:47:53
| 0
| 314
|
Nate3384
|
78,911,506
| 209,387
|
using instaloader to only download the instagram post caption
|
<p>Is it possible to only download the caption (the description) of a post, given the post short code, using python 3.12 and the latest version of the <a href="https://github.com/instaloader/instaloader" rel="nofollow noreferrer">instaloader</a> library? I am insterested in using instaloder as a library module, not the cli utility.</p>
<p>If not, can you suggest another similar python library that would allow me do so?</p>
<p>Im I correct in thinking that instagram API does not allow doing so for a non-user posts?</p>
<p>Update: the solution is in the accepted answer. The full code is below</p>
<pre><code>import instaloader
L = instaloader.Instaloader()
post = instaloader.Post.from_shortcode(L.context, SHORTCODE)
print(post.caption)
</code></pre>
|
<python><instagram><caption><instaloader>
|
2024-08-25 15:01:40
| 1
| 1,453
|
Michael Kariv
|
78,911,390
| 595,305
|
Is the Python Mariadb connector module capable of using prepared statements?
|
<p>I'm confused about this. By "prepared statement", I mean an object which is delivered by going something like</p>
<pre><code>prep_stmt = cursor.prepared_statement("INSERT INTO my_table (field1, field2) VALUES (%1 %2)")
</code></pre>
<p>... and where you then use that object multiple times, inserting the values and executing by going something like</p>
<pre><code>prep_stmt.set("mike", "rodent")
prep_stmt.execute()
</code></pre>
<p>or maybe</p>
<pre><code>prep_stmt.set(1, "mike")
prep_stmt.set(2, "rodent")
cursor.execute(prep_stmt)
</code></pre>
<p>On <a href="https://mariadb.com/kb/en/prepare-statement/" rel="nofollow noreferrer">this page</a> of the MariaDB manual I see much evidence of this existing in MariaDB's SQL language.</p>
<p>However, wherever I see the MariaDB Python module being explained, this is said to be the kind of statement understood by "prepared statement":</p>
<pre><code>cursor.execute( "SELECT * FROM sales WHERE sale_date >= ? and price > ?", (sale_date_val, price_val))
</code></pre>
<p>An example from 2014 can be found <a href="https://stackoverflow.com/a/27649855/595305">here</a>, where the context is not MariaDB but mysqldb. In the comments, newtover says this is not a true prepared statement, but Alex Martelli then says yet it is, before conceding that it probably isn't. And in the last comment, Olaf Dietsche says he has examined the source code and finds that nothing here is involved beyond string formatting.</p>
<p>I imagine that any string formatting mechanism in MariaDB's Python module is probably a little more sophisticated than normal, because there are certain characters (e.g. empty space) would be difficult to just slot into a "?" in a pseuo-prepared-statement.</p>
<p>However, I'm just wondering whether, when coding in Python, there is in fact a way to use <strong>genuine</strong> prepared statements, which as I say clearly exist in MariaDB SQL...</p>
|
<python><mariadb><prepared-statement>
|
2024-08-25 14:03:40
| 1
| 16,076
|
mike rodent
|
78,911,258
| 2,697,895
|
How to include system-wide pakages in PyInstaller?
|
<p>I am running <code>PyInstaller</code> on a Raspberry Pi device to compile a standalone Python script. My script is intended to always run with <code>sudo</code>. It requires <code>smbus, psutil, pyudev, netifaces</code> libraries installed system wide with <code>sudo apt install python3-...</code>.</p>
<p>Now to use the <code>PyInstaller</code> I created a virtual environment, activated it, and used <code>pyinstaller --onefile nas_script.py</code> to compile. But when I try to execute it <code>/home/marus/NAS_script/build/nas_script</code> I get an exception saying that it cannot find <code>smbus</code> library.</p>
<p>The <code>smbus</code> is not found neither with <code>pip show smbus</code> nor <code>sudo pip show smbus</code>. It is only found by <code>dpkg -l | grep smbus</code> as <code>python3-smbus:arm64</code>.</p>
<p>When I run PyInstaller it says:</p>
<pre><code>314 INFO: Module search paths (PYTHONPATH):
['/usr/lib/python311.zip',
'/usr/lib/python3.11',
'/usr/lib/python3.11/lib-dynload',
'/home/marus/.env/lib/python3.11/site-packages',
'/home/marus/NAS_script']
</code></pre>
<p>So how can I tell PyInstaller where to find the <code>smbus</code> and how to include it in my standalone executable ?</p>
|
<python><python-3.x><raspberry-pi><pyinstaller>
|
2024-08-25 12:52:53
| 1
| 3,182
|
Marus Gradinaru
|
78,911,146
| 1,673,574
|
JAX 3d convolution kernel speedup
|
<p>I am trying to solve a diffusion kernel with JAX and this is my JAX port of existing GPU CUDA code. JAX gives the correct answer, but it is about 5x slower than CUDA. How can I speed this up further? Not sure if my implementation of the <code>diff</code> function is the best. I tried to use the same formulation as the equivalent C++ code.</p>
<pre><code>import jax
import jax.numpy as jnp
import numpy as np
from jax import jit
from functools import partial
from timeit import default_timer as timer
# Numpy-like operation.
@partial(jit, static_argnums=(6, 7, 8))
def diff(at, a, visc, dxidxi, dyidyi, dzidzi, itot, jtot, ktot):
i_c = jnp.s_[1:ktot-1, 1:jtot-1, 1:itot-1]
i_w = jnp.s_[1:ktot-1, 1:jtot-1, 0:itot-2]
i_e = jnp.s_[1:ktot-1, 1:jtot-1, 2:itot ]
i_s = jnp.s_[1:ktot-1, 0:jtot-2, 1:itot-1]
i_n = jnp.s_[1:ktot-1, 2:jtot , 1:itot-1]
i_b = jnp.s_[0:ktot-2, 1:jtot-1, 1:itot-1]
i_t = jnp.s_[2:ktot , 1:jtot-1, 1:itot-1]
at_new = at.at[i_c].add(
visc * (
+ ( (a[i_e] - a[i_c])
- (a[i_c] - a[i_w]) ) * dxidxi
+ ( (a[i_n] - a[i_c])
- (a[i_c] - a[i_s]) ) * dyidyi
+ ( (a[i_t] - a[i_c])
- (a[i_c] - a[i_b]) ) * dzidzi
)
)
return at_new
itot = 384;
jtot = 384;
ktot = 384;
float_type = jnp.float32
nloop = 30;
ncells = itot*jtot*ktot;
dxidxi = float_type(0.1)
dyidyi = float_type(0.1)
dzidzi = float_type(0.1)
visc = float_type(0.1)
@jit
def init_a(index):
return (index/(index+1))**2
## FIRST EXPERIMENT.
at = jnp.zeros((ktot, jtot, itot), dtype=float_type)
index = jnp.arange(ncells, dtype=float_type)
a = init_a(index)
del(index)
a = a.reshape(ktot, jtot, itot)
at = diff(at, a, visc, dxidxi, dyidyi, dzidzi, itot, jtot, ktot).block_until_ready()
print("(first check) at={0}".format(at.flatten()[itot*jtot+itot+itot//2]))
# Time the loop
start = timer()
for i in range(nloop):
at = diff(at, a, visc, dxidxi, dyidyi, dzidzi, itot, jtot, ktot).block_until_ready()
end = timer()
print("Time/iter: {0} s ({1} iters)".format((end-start)/nloop, nloop))
print("at={0}".format(at.flatten()[itot*jtot+itot+itot//4]))
</code></pre>
|
<python><convolution><jax>
|
2024-08-25 11:53:16
| 0
| 6,204
|
Chiel
|
78,911,032
| 1,042,646
|
Azure DevOps pipeline fails with [error]Cmd.exe exited with code '1'
|
<p>I am trying to run pytests using Azure DevOps pipeline.</p>
<pre><code> - task: PipAuthenticate@1
inputs:
artifactFeeds: '<Feed Name>'
- script: |
pip install --upgrade pip
pip list
displayName: 'Verify Pip Authentication'
- task: CmdLine@2
inputs:
script: |
pip list
pip install -r "$(BuildSolution)/${{project}}/PythonClient/requirements.txt"
pip list
</code></pre>
<p>The PipAuthenticate task succeeds (Successfully added auth for 1 internal feeds and 0 external endpoint). However, pip install task fails with the following error</p>
<pre><code>##[debug]Entering Invoke-VstsTool.
##[debug] Arguments: '/D /E:ON /V:OFF /S /C "CALL "C:\__w\_temp\48c8fe5b-fcd8-40b5-b055-2f08fddc1d7f.cmd""'
##[debug] FileName: 'C:\Windows\system32\cmd.exe'
##[debug] WorkingDirectory: 'C:\__w\1\s'
"C:\Windows\system32\cmd.exe" /D /E:ON /V:OFF /S /C "CALL "C:\__w\_temp\48c8fe5b-fcd8-40b5-b055-2f08fddc1d7f.cmd""
##[debug]Exit code: 1
##[debug]Leaving Invoke-VstsTool.
##[error]Cmd.exe exited with code '1'.
##[debug]Processed: ##vso[task.logissue correlationId=79cfa0e8-4c20-4dbc-823f-165b1da5a276;source=TaskInternal;type=error]Cmd.exe exited with code '1'.
##[debug]Processed: ##vso[task.complete result=Failed]Error detected
##[debug]Leaving C:\__w\_tasks\CmdLine_d9bafed4-0b18-4f58-968d-86655b4d2ce9\2.237.1\cmdline.ps1.
</code></pre>
<p>I have enabled the debug logs on the pipeline. There are no additional logs being displayed to debug this error. Is there a way to print additional logs that will help debug this issue further?</p>
|
<python><azure-devops><azure-pipelines><pytest>
|
2024-08-25 10:49:58
| 0
| 17,162
|
Punter Vicky
|
78,910,951
| 3,247,006
|
Does `unbind()` return the views of tensors in PyTorch?
|
<p><a href="https://pytorch.org/docs/stable/generated/torch.unbind.html" rel="nofollow noreferrer">The doc</a> of <code>unbind()</code> just says below:</p>
<blockquote>
<p>Returns a tuple of all slices along a given dimension, already without it.</p>
</blockquote>
<p>So, does it mean that <code>unbind()</code> returns (a tuple of) the views of tensors instead of (a tuple of) the copies of tensors?</p>
<pre class="lang-py prettyprint-override"><code>import torch
my_tensor = torch.tensor([[0, 1, 2, 3],
[4, 5, 6, 7],
[8, 9, 10, 11]])
torch.unbind(input=my_tensor)
# (tensor([0, 1, 2, 3]),
# tensor([4, 5, 6, 7]),
# tensor([8, 9, 10, 11]))
</code></pre>
<p>Actually, there are the similar functions <a href="https://pytorch.org/docs/stable/generated/torch.split.html" rel="nofollow noreferrer">split()</a>, <a href="https://pytorch.org/docs/stable/generated/torch.vsplit.html" rel="nofollow noreferrer">vsplit()</a>, <a href="https://pytorch.org/docs/stable/generated/torch.hsplit.html" rel="nofollow noreferrer">hsplit()</a>, <a href="https://pytorch.org/docs/stable/generated/torch.tensor_split.html" rel="nofollow noreferrer">tensor_split()</a> and <a href="https://pytorch.org/docs/stable/generated/torch.chunk.html" rel="nofollow noreferrer">chunk()</a>, then their docs say they return the views of tensors while the doc of <code>unbind()</code> just says the slices of tensors.</p>
<p>So again, does <code>unbind()</code> return the views of tensors?</p>
|
<python><pytorch><tuples><slice><tensor>
|
2024-08-25 10:08:15
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
78,910,896
| 4,852,094
|
How can I get a type from a list of Generics
|
<p>say I have a type like:</p>
<pre><code>T = TypeVar("T")
class MyType(Generic[T]):
...
</code></pre>
<p>and I have a class:</p>
<pre><code>class Foo:
def __init__(self, vals: list[MyType]):
self.vals = vals
</code></pre>
<p>But I want to say "The Union of all Generic Args in the list."</p>
<p>So for <code>Foo(vals=[MyType[int], MyType[str], MyType[bool])</code>, I'd like to infer a type hint that is <code>[int, str, bool]</code>.</p>
<p>If I do something like this, I still get a mypy error saying: <code>List item 0 has incompatible type "A[int]"; expected "A[int | str]"</code></p>
<pre><code>from typing import Generic, TypeVar
T = TypeVar("T")
class A(Generic[T]):
t: T
def __init__(self, t: T):
self.t = t
class B(Generic[T]):
def foo(self, x: list[A[T]]) -> A[T]:
return x[0]
a1 = A[int](3)
a2 = A[str]('3')
b = B[int | str]().foo([a1, a2])
</code></pre>
|
<python><python-typing>
|
2024-08-25 09:40:44
| 2
| 3,507
|
Rob
|
78,910,846
| 8,964,393
|
Export statsmodels summary() to .png
|
<p>I have trained a <code>glm</code> as follows:</p>
<pre><code> fitGlm = smf.glm( listOfInModelFeatures,
family=sm.families.Binomial(),data=train, freq_weights = train['sampleWeight']).fit()
</code></pre>
<p>The results looks good:</p>
<pre><code>print(fitGlm.summary())
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: Target No. Observations: 1065046
Model: GLM Df Residuals: 4361437.81
Model Family: Binomial Df Model: 7
Link Function: Logit Scale: 1.0000
Method: IRLS Log-Likelihood: -6.0368e+05
Date: Sun, 25 Aug 2024 Deviance: 1.2074e+06
Time: 09:03:54 Pearson chi2: 4.12e+06
No. Iterations: 8 Pseudo R-squ. (CS): 0.1716
Covariance Type: nonrobust
===========================================================================================
coef std err z P>|z| [0.025 0.975]
-------------------------------------------------------------------------------------------
Intercept 3.2530 0.003 1074.036 0.000 3.247 3.259
feat1 0.6477 0.004 176.500 0.000 0.641 0.655
feat2 0.3939 0.006 71.224 0.000 0.383 0.405
feat3 0.1990 0.007 28.294 0.000 0.185 0.213
feat4 0.4932 0.009 54.614 0.000 0.476 0.511
feat5 0.4477 0.005 90.323 0.000 0.438 0.457
feat6 0.3031 0.005 57.572 0.000 0.293 0.313
feat7 0.3711 0.004 87.419 0.000 0.363 0.379
===========================================================================================
</code></pre>
<p>I have then tried to export the <code>summary()</code> into <code>.png</code> as suggested here:</p>
<p><a href="https://stackoverflow.com/questions/46664082/python-how-to-save-statsmodels-results-as-image-file">Python: How to save statsmodels results as image file?</a></p>
<p>So, I have written this code:</p>
<pre><code> fig, ax = plt.subplots(figsize=(16, 8))
summary = []
fitGlm.summary(print_fn=lambda x: summary.append(x))
summary = '\n'.join(summary)
ax.text(0.01, 0.05, summary, fontfamily='monospace', fontsize=12)
ax.axis('off')
plt.tight_layout()
plt.savefig('output.png', dpi=300, bbox_inches='tight')
</code></pre>
<p>But I get this error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[57], line 57
55 fig, ax = plt.subplots(figsize=(16, 8))
56 summary = []
---> 57 fitGlm.summary(print_fn=lambda x: summary.append(x))
58 summary = '\n'.join(summary)
59 ax.text(0.01, 0.05, summary, fontfamily='monospace', fontsize=12)
TypeError: GLMResults.summary() got an unexpected keyword argument 'print_fn'
</code></pre>
<p>Looks like <code>print_fn</code> is not recognized by statsmodels?</p>
<p>Can someone help me, please?</p>
|
<python><export><png><statsmodels><summary>
|
2024-08-25 09:11:18
| 1
| 1,762
|
Giampaolo Levorato
|
78,910,833
| 4,451,315
|
Make another Series with the same index as current Series but with different values
|
<p>I have</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
import pandas as pd
s = dd.from_pandas(pd.Series([1,2,3]))
</code></pre>
<p>I'm trying to make another Series <code>s_other</code> which should be just like <code>s</code>, but:</p>
<ul>
<li>all values should be <code>999</code></li>
<li>its index should be <code>s.index</code></li>
</ul>
<p>I've tried doing the following but it errors:</p>
<pre class="lang-none prettyprint-override"><code>In [40]: s_other = dd.from_pandas(pd.Series([999]*len(s)))
In [41]: s_other.index = s.index
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[41], line 1
----> 1 s_other.index = s.index
File ~/scratch/.venv/lib/python3.11/site-packages/dask_expr/_collection.py:658, in FrameBase.index(self, value)
656 @index.setter
657 def index(self, value):
--> 658 assert expr.are_co_aligned(
659 self.expr, value.expr
660 ), "value needs to be aligned with the index"
661 _expr = expr.AssignIndex(self, value)
662 self._expr = _expr
AssertionError: value needs to be aligned with the index
</code></pre>
<p>I'd like to end up with <code>s_other</code> in which:</p>
<ul>
<li>all values are <code>999</code></li>
<li>its length equals the length of <code>s</code></li>
<li>doing <code>s_other.index = s.index</code> does not raise <code>"value needs to be aligned with the index"</code> (this last step is very important!)</li>
</ul>
|
<python><dask>
|
2024-08-25 09:05:56
| 2
| 11,062
|
ignoring_gravity
|
78,910,797
| 11,748,924
|
PACF built-in plot utils returning different result compared to manual plot
|
<p>Using packages:</p>
<pre><code>from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.stattools import acf, pacf
</code></pre>
<p>Performing manual and built-in plot utils, give different values. No, I don't mean picture scale, but the actual value seems different between manual plot and plotting PACF with built-in tools:</p>
<pre><code>len_of_pacf = len(entire_mid)//4
# Using built-in
plot_acf(entire_mid, lags=len(entire_mid)-1)
plot_pacf(entire_mid, lags=len_of_pacf-1)
plt.show()
# Manual plotting
acf_result = acf(entire_mid, nlags=len(entire_mid))
pacf_result = pacf(entire_mid, nlags=len_of_pacf)
plt.plot(acf_result)
plt.plot(pacf_result)
plt.show()
</code></pre>
<p>Returning:</p>
<p><a href="https://i.sstatic.net/jtST8DCF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtST8DCF.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/7omOKQ5e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7omOKQ5e.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/lG3KslJ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lG3KslJ9.png" alt="enter image description here" /></a></p>
<p>The ACF itself working fine, it returning same output, but not the PACF. I mean, it's uncommon the minimum value of PACF is under -1.</p>
|
<python><statistics><statsmodels><autocorrelation>
|
2024-08-25 08:35:55
| 1
| 1,252
|
Muhammad Ikhwan Perwira
|
78,910,478
| 219,153
|
Iterate over Polygon vertices as Points with Shapely?
|
<p>Shapely's <code>Polygon</code> and <code>LinearRing</code> are not iterable (any longer, if I remember the history correctly). I often need to iterate over their vertices as <code>Point</code> objects (not tuple), and my current solution seems cumbersome. Is there a simpler way than this, without NumPy magic?</p>
<pre><code>poly = Polygon([Point(0, 0), Point(1, 0), Point(1, 1), Point(0, 1)])
vertices = MultiPoint(np.column_stack(poly.exterior.coords.xy))
for p in vertices.geoms:
print(p)
</code></pre>
|
<python><polygon><shapely><vertices>
|
2024-08-25 05:13:44
| 3
| 8,585
|
Paul Jurczak
|
78,910,231
| 417,896
|
How to get the epoch time in eastern time?
|
<p>How do I get the time in seconds since epoch of a python datetime in eastern time?</p>
<pre><code>import pytz
from datetime import datetime, timedelta
et_tz = pytz.timezone('America/New_York')
initial_start_dt = et_tz.localize(datetime(2024, 8, 24, 4, 0, 0)).astimezone(et_tz)
initial_end_dt = et_tz.localize(datetime(2024, 8, 24, 5, 0, 0)).astimezone(et_tz)
</code></pre>
<p>This question <a href="https://stackoverflow.com/questions/11743019/convert-python-datetime-to-epoch-with-strftime">Convert python datetime to epoch with strftime</a> does not take into account the timezone.</p>
|
<python><time><epoch>
|
2024-08-25 00:34:42
| 0
| 17,480
|
BAR
|
78,910,117
| 3,906,483
|
yt_dlp FileNotFoundError when Renaming Subtitle File
|
<p>Iβm using <code>yt_dlp</code> to download subtitles for a video, but Iβm encountering a <code>FileNotFoundError</code> when attempting to rename the subtitle file. Hereβs the relevant portion of my code:</p>
<pre class="lang-py prettyprint-override"><code>import yt_dlp
import os
def download_video(self, url):
ydl_opts = {
'outtmpl': self.download_dir + '/%(id)s.%(ext)s',
'writesubtitles': True,
'subtitleslangs': ['en'],
'writeautomaticsub': True,
'skip_download': True, # Do not download the video
'quiet': True,
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
info_dict = ydl.extract_info(url)
try:
vtt_fp = info_dict['requested_subtitles']['en']['filepath']
if os.path.exists(vtt_fp):
print(f"Subtitle file found: {vtt_fp}")
else:
print(f"Subtitle file not found: {vtt_fp}")
except KeyError as e:
print(f"Key error: {e}")
except Exception as e:
print(f"An error occurred: {e}")
</code></pre>
<p><strong>Issue:</strong></p>
<p>Iβm getting the following traceback:</p>
<pre><code> File "backend/venv/lib/python3.10/site-packages/yt_dlp/downloader/common.py", line 245, in wrapper
return func(self, *args, **kwargs)
File "backend/venv/lib/python3.10/site-packages/yt_dlp/downloader/common.py", line 270, in try_rename
os.replace(old_filename, new_filename)
FileNotFoundError: [Errno 2] No such file or directory: './download/Auuk1y4DRgk.en.vtt.part' -> './download/Auuk1y4DRgk.en.vtt'
</code></pre>
<p>It seems like the <code>os.replace</code> function is trying to rename a <code>.part</code> file that doesnβt exist.</p>
<p><strong>Troubleshooting Steps Taken:</strong></p>
<ol>
<li><strong>Check File Existence</strong>: Iβve verified that the file <code>5MgBikgcWnY.en.vtt.part</code> exists in the <code>./download/</code> directory, but the function <code>ydl.extract_info()</code> errors out.</li>
<li><strong>Verify Paths and Permissions</strong>: Paths and permissions seem correct, and I am using the latest version of <code>yt_dlp</code>.</li>
<li>The corresponding commandline <code>yt-dlp</code> command works:</li>
</ol>
<pre><code>yt-dlp --outtmpl "./download/%(id)s.%(ext)s" \
--write-subs \
--sub-lang "en" \
--write-auto-subs \
--no-download \
--quiet \
https://www.youtube.com/watch?v=5MgBikgcWnY
</code></pre>
<p>with no error.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Why is <code>yt_dlp</code> trying to rename a file that appears not to exist?</li>
<li>Are there any additional steps or configurations I should check to ensure the subtitle file is correctly downloaded and available?</li>
<li>How can I better handle or debug this issue to prevent the <code>FileNotFoundError</code>?</li>
</ol>
<p><strong>Additional Information:</strong></p>
<ul>
<li><code>yt_dlp</code> version: 2024.8.6</li>
<li>Python version: Python 3.10.8</li>
<li>Operating System: macOS 14.0</li>
</ul>
<p>Any help or suggestions would be greatly appreciated!</p>
|
<python><yt-dlp>
|
2024-08-24 22:42:10
| 0
| 704
|
Zachzhao
|
78,910,091
| 9,620,095
|
Issue with Access Rules for Attachments / odoo 17
|
<p>In a custom module , I add new field ir_attchment_ids .
When connecting wih other user (not admin) , I got this error .</p>
<pre><code>Uh-oh! Looks like you have stumbled upon some top-secret records.
Sorry, user (id=8) doesn't have 'read' access to:
- Attachment (ir.attachment)
If you really, really need access, perhaps you can win over your friendly administrator with a batch of freshly baked cookies.
</code></pre>
<p>Here is my code</p>
<p>Python :</p>
<pre><code>class AddAttachment(models.Model):
_name = "add.attachment"
attachment_ids = fields.Many2many("ir.attachment", string="My Attachment")
</code></pre>
<p>XML :</p>
<pre><code><?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="add_attachment_form_views" model="ir.ui.view">
<field name="name">add.ttachment.form</field>
<field name="model">add.ttachment</field>
<field name="arch" type="xml">
<form string="Account">
<sheet>
<group>
<field name="attachment_ids" />
</group>
</sheet>
</form>
</field>
</record>
<record id="add_attachment_tree_views" model="ir.ui.view">
<field name="name">add.ttachment.tree</field>
<field name="model">add.ttachment</field>
<field name="arch" type="xml">
<tree string="Account">
<field name="attachment_ids" />
</tree>
</field>
</record>
<record id="open_add_attachment" model="ir.actions.act_window">
<field name="name">Add Attachment</field>
<field name="res_model">add.attachment</field>
<field name="view_mode">tree,form</field>
</record>
</code></pre>
<p>SECURITY</p>
<pre><code>id,name,model_id:id,group_id:id,perm_read,perm_write,perm_create,perm_unlink
access_add_attachment,add.attachment,model_add_attachment,,1,1,1,1
</code></pre>
<p>Any idea please ?
Thanks.</p>
|
<python><xml><security><odoo-17>
|
2024-08-24 22:22:49
| 3
| 631
|
Ing
|
78,910,042
| 10,037,470
|
Having some trouble with ForwardRef in python with fastapi and ormar
|
<p>I have an api that has been built in fastapi and uses ormar for ORM.</p>
<p>My database models are defined as follows:</p>
<pre><code>database = databases.Database(config.database_url)
metadata = sqlalchemy.MetaData()
base_ormar_config = ormar.OrmarConfig(
metadata=metadata,
database=database,
)
ProjectRef = ForwardRef("Project")
class BaseModel(ormar.Model):
ormar_config = base_ormar_config.copy(abstract=True)
id: str = ormar.UUID(primary_key=True, default=uuid.uuid4, uuid_format="string")
created_at: datetime.datetime = ormar.DateTime(server_default=sqlalchemy.func.now())
class User(BaseModel):
ormar_config = base_ormar_config.copy(tablename="users")
subscription: str = ormar.String(max_length=254, nullable=True)
email: str = ormar.String(max_length=244, nullable=False, unique=True)
name: str = ormar.String(max_length=555, nullable=True)
email: str = ormar.String(max_length=555, nullable=True)
phone: str = ormar.String(max_length=555, nullable=True)
company_name: str = ormar.String(max_length=555, nullable=True)
occupation: str = ormar.String(max_length=555, nullable=True)
projects = ormar.ManyToMany(ProjectRef)
class Project(BaseModel):
ormar_config = base_ormar_config.copy(tablename="projects")
owner: User = ormar.ForeignKey(User, name="owner_id")
name: str = ormar.String(max_length=244, nullable=True)
description: str = ormar.String(max_length=555, nullable=True)
files: list[str] = ormar.JSON(default=list)
User.update_forward_refs()
</code></pre>
<p>to be able to define my ManyToMany relationships I need to use Forward ref from the <code>typing</code> library as per the ormar docs here <a href="https://collerek.github.io/ormar/latest/relations/postponed-annotations/" rel="nofollow noreferrer">https://collerek.github.io/ormar/latest/relations/postponed-annotations/</a></p>
<p>An error is getting thrown when trying to run this:</p>
<pre><code>PydanticUndefinedAnnotation: name 'Project' is not defined
</code></pre>
|
<python><fastapi><pydantic><ormar>
|
2024-08-24 21:37:42
| 0
| 875
|
Devon Ray
|
78,910,012
| 1,473,517
|
How to read in and print out standard input until a string occurs and still capture a line?
|
<p>I have code (let's say a bash script called dots.sh) that I want to run from a python script. I would like the python script to print out the output of dots.sh as it occurs. This is my attempt at a wrapper script.</p>
<pre><code>import subprocess
import sys
def run_dot_script():
# Command to run the bash script
command = ['bash', './dots.sh']
# Start the process
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
try:
# Read output for the specified duraton
while True:
char = process.stdout.read(1)
if not char:
break
sys.stdout.write(char.decode('utf-8'))
sys.stdout.flush()
finally:
# Terminate the process
process.terminate()
process.wait()
print() # Print a newline at the end
# Run the script
run_dot_script()
</code></pre>
<p>This works but I want to also capture a line of the output that is just a number. The problem is that my wrapper code reads in one character at a time so that it can print the dots as they occur and not a line at a time.</p>
<p>How can I do that?</p>
<pre><code>Here is a toy example dots.sh file.
#!/bin/bash
for i in {1..50}; do
echo -n "."
sleep 1
done
echo -e "\n"
echo 12345
for i in {1..50}; do
echo -n "*"
sleep 1
done
echo
</code></pre>
|
<python>
|
2024-08-24 21:13:13
| 1
| 21,513
|
Simd
|
78,909,971
| 6,357,360
|
Exponential plot in python is a curve with multiple inflection points instead of exponential
|
<p>I am trying to draw a simple exponential in python. When using the code below, everything works fine and the exponential is shown</p>
<pre><code>import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
def graph(func, x_range):
x = np.arange(*x_range)
y = func(x)
plt.plot(x, y)
graph(lambda x: pow(3,x), (0,20))
plt.savefig("mygraph.png")
</code></pre>
<p>However if I change the range from 20 to 30, it draws a curve which is not exponential at all.</p>
<p><a href="https://i.sstatic.net/HFhbwVOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HFhbwVOy.png" alt="enter image description here" /></a></p>
<p>Why is that happening?</p>
|
<python><numpy><matplotlib>
|
2024-08-24 20:53:44
| 1
| 6,749
|
meJustAndrew
|
78,909,704
| 66,490
|
How to draw lines with a color gradient in Pygame?
|
<p>I wish to draw a sequence of lines with a color gradient, so that the color fades from bright white to either black or fully transparent:</p>
<p><a href="https://i.sstatic.net/BVXJbmzu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BVXJbmzu.png" alt="enter image description here" /></a></p>
<p>Ideally, <code>pygame.draw.lines()</code> would accept a list as the <code>color</code> argument with one color for each vertex, or two colors for the end points. Seeing as there isn't anything like that in the API, how do I implement my own gradient line drawing function?</p>
<p>The line can be one pixel wide and doesn't have to be anti-aliased, but I'd rather there'd be a better way than implementing Bresenham's algorithm using <code>pygame.gfxdraw.pixel()</code>.</p>
|
<python><pygame>
|
2024-08-24 18:15:58
| 1
| 7,505
|
TrayMan
|
78,909,690
| 2,835,640
|
functools partial with bound first argument and *args and **kwargs interprets calls as multiple values
|
<p>Using python 3.10:</p>
<p>I have a function with a first argument and then <code>*args</code> and <code>**kwargs</code>. I want to bind the first argument and leave the <code>*args</code> and <code>**kwargs</code> free. If I do this and the bound function gets called using a list of arguments, python interprets it as <code>multiple values</code> for the bound argument. An example:</p>
<pre><code>from functools import partial
def foo1(a : int, *args, **kwargs):
print(a)
print(args)
print(kwargs)
pfoo1 = partial(foo1, a=10)
pfoo1() # works
pfoo1(something="testme") # works
# pfoo1("testme") # !! interpreted as multiple values for argument a?
# pfoo1(*("testme")) # !! interpreted as multiple values for argument a?
# pfoo1(*["testme"]) # !! interpreted as multiple values for argument a?
</code></pre>
<p>I know I can easily solve this by replacing the <code>foo1</code> function with:</p>
<pre><code>def foo1(*args, **kwargs):
print(args)
print(kwargs)
</code></pre>
<p>And then running the same code, but I do not have the luxury of control over the incoming functions so I am stuck with a given signature. A solution could be to create one's own partial class through lambdas and bound members, but that seems excessive. Is there a way of doing this in the partial framework or at least a simple way?</p>
|
<python><arguments><partial><keyword-argument><functools>
|
2024-08-24 18:10:09
| 1
| 2,526
|
crogg01
|
78,909,656
| 8,554,833
|
Is my password safe in a pyinstaller .exe?
|
<p>So I built a program to run reports. I put credentials in the python script. I want to distribute this to user without python so I converted it to a .exe using pyinstaller. I wasn't able to find the credential by looking at the .exe in notepad, since it doesn't appear human readable. However, I wanted to ask, can the credentials be discovered?</p>
|
<python><security><pyinstaller>
|
2024-08-24 17:53:11
| 1
| 728
|
David 54321
|
78,909,221
| 364,595
|
How to underline text using `borb`
|
<p>I am using the <code>borb</code> library, which has classes to represent textual content, such as <code>ChunkOfText</code>, <code>LineOfText</code>, <code>HeterogeneousParagraph</code> and <code>Paragraph</code>.</p>
<p>Although these have attributes (in their constructor) that enable me to choose the font, font size, color, etc, I did not find an attribute/method that would enable me to underline the text.</p>
<pre class="lang-py prettyprint-override"><code>from borb.pdf import Document
from borb.pdf import Page
from borb.pdf import PageLayout
from borb.pdf import SingleColumnLayout
from borb.pdf import Paragraph
from borb.pdf import PDF
# empty document
doc: Document = Document()
# empty page
page: Page = Page()
doc.add_page(page)
# layout
layout: PageLayout = SingleColumnLayout(page)
# paragraph
layout.add(Paragraph("Lorem ipsum dolor sit amet"))
# store
with open("output.pdf", "wb") as pdf_file_handle:
PDF.dumps(pdf_file_handle, doc)
</code></pre>
<p><strong>Disclaimer:</strong> I am the author of <code>borb</code>.</p>
|
<python><pdf><borb>
|
2024-08-24 14:31:58
| 1
| 9,217
|
Joris Schellekens
|
78,909,215
| 8,025,936
|
Using Hatch, is there a way I can have a local dependency whose location is not hard-coded?
|
<p>I have two non-public Hatch projects A and B. B depends on A. I want developers to be able to clone A and B to locations of their choice and have B depend on A without having to rewrite <code>pyproject.toml</code>. Can I do this? Iβd be okay with a prompt asking for a location, or an auxiliary file that can be ignored from VCS, or require the projects to be in a specific folder (<code>X/A</code> and <code>X/B</code>, e.g.). I have looked at the <a href="https://packaging.python.org/en/latest/specifications/dependency-specifiers/" rel="nofollow noreferrer">dependency specifier documentation</a>, which seems to at best allow <code>file://</code> URIs, which, to my knowledge, are always absolute.</p>
|
<python><build><hatch>
|
2024-08-24 14:28:30
| 1
| 2,390
|
schuelermine
|
78,909,027
| 2,268,543
|
How to Convert Invalid JSON String with Single Quotes to Valid JSON
|
<p>I accidentally saved a JSON-like string in my database that uses single quotes instead of double quotes, and JSON formatters don't recognize it as valid JSON. Here is the string that I have:</p>
<pre><code>import json
json_string = """
{'author': 'Hebbars Kitchen', 'canonical_url': 'https://hebbarskitchen.com/til-chikki-recipe-sesame-chikki-gajak/', 'category': 'sweet', 'cook_time': 10, 'cuisine': 'Indian', 'description': 'easy til chikki recipe | sesame chikki recipe | til ki chikki or til gajak', 'host': 'hebbarskitchen.com', 'image': 'https://hebbarskitchen.com/wp-content/uploads/mainPhotos/til-chikki-recipe-sesame-chikki-recipe-til-ki-chikki-or-til-gajak-2.jpeg', 'ingredient_groups': [{'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'purpose': None}], 'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'instructions': 'firstly in a pan dry roast 1 cup sesame on low flame till it splutters.\nnow in another kadai heat 1 tsp ghee and add 1 cup jaggery.\nkeep stirring on medium flame till the jaggery melts completely. alternatively, use sugar, if you do not prefer jaggery.\nboil the jaggery syrup on low flame till the syrup turns glossy and thickens.\ncheck the consistency, by dropping syrup into a bowl of water, it should form hard ball and cut with a snap sound. else boil for another minute and check.\nsimmer the flame add add roasted sesame seeds.\nstir well making sure jaggery syrup coats well.\nimmediately pour the mixture over butter paper or onto steel plate greased with ghee. be quick else the mixture turns hard and will be difficult to set.\nget together forming a block, be careful as the mixture will be very hot.\nnow using a rolling pin roll the slightly thick block.\nallow to cool for a minute, and when its still warm cut into pieces.\nlastly, serve til chikki once cooled completely, or store in a airtight container and serve for a month.', 'instructions_list': ['firstly in a pan dry roast 1 cup sesame on low flame till it splutters.', 'now in another kadai heat 1 tsp ghee and add 1 cup jaggery.', 'keep stirring on medium flame till the jaggery melts completely. alternatively, use sugar, if you do not prefer jaggery.', 'boil the jaggery syrup on low flame till the syrup turns glossy and thickens.', 'check the consistency, by dropping syrup into a bowl of water, it should form hard ball and cut with a snap sound. else boil for another minute and check.', 'simmer the flame add add roasted sesame seeds.', 'stir well making sure jaggery syrup coats well.', 'immediately pour the mixture over butter paper or onto steel plate greased with ghee. be quick else the mixture turns hard and will be difficult to set.', 'get together forming a block, be careful as the mixture will be very hot.', 'now using a rolling pin roll the slightly thick block.', 'allow to cool for a minute, and when its still warm cut into pieces.', 'lastly, serve til chikki once cooled completely, or store in a airtight container and serve for a month.'], 'language': 'en-US', 'nutrients': {}, 'prep_time': 5, 'ratings': 5.0, 'ratings_count': 196, 'site_name': "Hebbar's Kitchen", 'title': 'til chikki recipe | sesame chikki recipe | til ki chikki or til gajak', 'total_time': 15, 'yields': '24 servings'}
"""
formatted_json = json.dumps(json_string)
print(formatted_json)
</code></pre>
<p>The string contains single quotes instead of double quotes, making it invalid JSON. I tried using json.dumps() to format it, but this just converts the string into another JSON string rather than fixing the issue.</p>
<p>I also tried using <code>ast.literal_eval</code> and <code>demjson3</code>, nothing seems to be working.</p>
<p>Using <code>ast.literal_eval()</code></p>
<pre><code>import json
# Assume your JSON string is stored in a variable called 'json_string'
json_string = """
{'author': 'Hebbars Kitchen', 'canonical_url': 'https://hebbarskitchen.com/til-chikki-recipe-sesame-chikki-gajak/', 'category': 'sweet', 'cook_time': 10, 'cuisine': 'Indian', 'description': 'easy til chikki recipe | sesame chikki recipe | til ki chikki or til gajak', 'host': 'hebbarskitchen.com', 'image': 'https://hebbarskitchen.com/wp-content/uploads/mainPhotos/til-chikki-recipe-sesame-chikki-recipe-til-ki-chikki-or-til-gajak-2.jpeg', 'ingredient_groups': [{'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'purpose': None}], 'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'instructions': 'firstly in a pan dry roast 1 cup sesame on low flame till it splutters.\nnow in another kadai heat 1 tsp ghee and add 1 cup jaggery.\nkeep stirring on medium flame till the jaggery melts completely. alternatively, use sugar, if you do not prefer jaggery.\nboil the jaggery syrup on low flame till the syrup turns glossy and thickens.\ncheck the consistency, by dropping syrup into a bowl of water, it should form hard ball and cut with a snap sound. else boil for another minute and check.\nsimmer the flame add add roasted sesame seeds.\nstir well making sure jaggery syrup coats well.\nimmediately pour the mixture over butter paper or onto steel plate greased with ghee. be quick else the mixture turns hard and will be difficult to set.\nget together forming a block, be careful as the mixture will be very hot.\nnow using a rolling pin roll the slightly thick block.\nallow to cool for a minute, and when its still warm cut into pieces.\nlastly, serve til chikki once cooled completely, or store in a airtight container and serve for a month.', 'instructions_list': ['firstly in a pan dry roast 1 cup sesame on low flame till it splutters.', 'now in another kadai heat 1 tsp ghee and add 1 cup jaggery.', 'keep stirring on medium flame till the jaggery melts completely. alternatively, use sugar, if you do not prefer jaggery.', 'boil the jaggery syrup on low flame till the syrup turns glossy and thickens.', 'check the consistency, by dropping syrup into a bowl of water, it should form hard ball and cut with a snap sound. else boil for another minute and check.', 'simmer the flame add add roasted sesame seeds.', 'stir well making sure jaggery syrup coats well.', 'immediately pour the mixture over butter paper or onto steel plate greased with ghee. be quick else the mixture turns hard and will be difficult to set.', 'get together forming a block, be careful as the mixture will be very hot.', 'now using a rolling pin roll the slightly thick block.', 'allow to cool for a minute, and when its still warm cut into pieces.', 'lastly, serve til chikki once cooled completely, or store in a airtight container and serve for a month.'], 'language': 'en-US', 'nutrients': {}, 'prep_time': 5, 'ratings': 5.0, 'ratings_count': 196, 'site_name': "Hebbar's Kitchen", 'title': 'til chikki recipe | sesame chikki recipe | til ki chikki or til gajak', 'total_time': 15, 'yields': '24 servings'}
"""
# Dump the JSON with proper formatting
formatted_json = ast.literal_eval(json_string)
print(formatted_json)
</code></pre>
<p>Error i am getting</p>
<pre><code> File <unknown>:2
{'author': 'Hebbars Kitchen', 'canonical_url': 'https://hebbarskitchen.com/til-chikki-recipe-sesame-chikki-gajak/', 'category': 'sweet', 'cook_time': 10, 'cuisine': 'Indian', 'description': 'easy til chikki recipe | sesame chikki recipe | til ki chikki or til gajak', 'host': 'hebbarskitchen.com', 'image': 'https://hebbarskitchen.com/wp-content/uploads/mainPhotos/til-chikki-recipe-sesame-chikki-recipe-til-ki-chikki-or-til-gajak-2.jpeg', 'ingredient_groups': [{'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'purpose': None}], 'ingredients': ['1 cup sesame / til (white)', '1 tsp ghee / clarified butter', '1 cup jaggery / gud'], 'instructions': 'firstly in a pan dry roast 1 cup sesame on low flame till it splutters.
^
SyntaxError: unterminated string literal (detected at line 2)
</code></pre>
|
<python>
|
2024-08-24 12:56:57
| 1
| 2,519
|
Rasik
|
78,908,629
| 4,451,315
|
How to zip together two PyArrow arrays?
|
<p>In Polars, I can use <code>zip_width</code> in order to take values from <code>s1</code> or <code>s2</code> according to a mask:</p>
<pre class="lang-py prettyprint-override"><code>In [1]: import polars as pl
In [2]: import pyarrow as pa
In [3]: import pyarrow as pc
In [4]: s1 = pl.Series([1,2,3])
In [5]: mask = pl.Series([True, False, False])
In [6]: s2 = pl.Series([4, 5, 6])
In [7]: s1.zip_with(mask, s2)
Out[7]:
shape: (3,)
Series: '' [i64]
[
1
5
6
]
</code></pre>
<p>How can I do this with PyArrow? I've tried <code>pyarrow.compute.replace_with_mask</code> but that works differently:</p>
<pre class="lang-py prettyprint-override"><code>In [10]: import pyarrow.compute as pc
In [11]: import pyarrow as pa
In [12]: a1 = pa.array([1,2,3])
In [13]: mask = pa.array([True, False, False])
In [14]: a2 = pa.array([4,5,6])
In [15]: pc.replace_with_mask(a1, pc.invert(mask), a2)
Out[15]:
<pyarrow.lib.Int64Array object at 0x7f69d411afe0>
[
1,
4,
5
]
</code></pre>
<p>How to replicate <code>zip_with</code> in PyArrow?</p>
|
<python><pyarrow>
|
2024-08-24 09:39:07
| 1
| 11,062
|
ignoring_gravity
|
78,908,337
| 7,429,461
|
How to use the parameter "allow_dash" in "typer.Option" or "typer.Argument" in the Python package "typer"?
|
<p>I don't understand how to use the parameter <code>allow_dash</code> in <code>typer.Option</code> or <code>typer.Argument</code> in the Python package <code>typer</code>.</p>
<p>I can only find this page <a href="https://typer.tiangolo.com/tutorial/parameter-types/path/#advanced-path-configurations" rel="nofollow noreferrer">Typer</a>, saying</p>
<p>"allow_dash: If this is set to True, a single dash to indicate standard streams is permitted."</p>
<p>Here is my code:</p>
<pre><code>import typer
app = typer.Typer()
@app.command()
def read_data(a: str = typer.Argument(..., allow_dash=True)):
print(f"Data received from standard input: \n{a}")
if __name__ == "__main__":
app()
</code></pre>
<p>Whether I set <code>allow_dash=True</code> or <code>allow_dash=False</code>, this code still cannot read from the standard streams.</p>
<p>Please teach me how to use this parameter <code>allow_dash</code>. Thank you.</p>
|
<python><python-3.x><typer>
|
2024-08-24 06:51:50
| 0
| 819
|
liaoming999
|
78,908,264
| 11,692,124
|
Pydantic doesn't allow mutable variables to get changed with Optional
|
<p>The code below is a modified version of pydantic func argument validation. and the pydantic version is 1.10.17 and I guess, not sure, this was not a problem in other versions. so please if the current code works fine with your pydantic version notify me in comments.</p>
<pre><code>import inspect
import typing
from typing import get_type_hints
from pydantic import validate_arguments
def isHintTypeOfAListOfSomeType(typ):
# ccc1
# detects these patterns: `typing.List[str]`, `typing.List[int]`,
# `typing.List[Union[str,tuple]]` or even `typing.List[customType]`
if isinstance(typ, typing._GenericAlias) and typ.__origin__ is list:
innerType = typ.__args__[0]
if hasattr(innerType, '__origin__') and innerType.__origin__ is typing.Union:
# checks compatible for List[Union[str, int]] like
return True, innerType.__args__
return True, [innerType]
return False, []
def typeHintChecker_AListOfSomeType(func):
"""
a decorator which raises error when the hint is List[someType] and the argument passed for
that argument doesn't follow the hinting
"""
def wrapper(*args, **kwargs):
args_ = args[:]
allArgs, starArgVar = getAllArgs(args_) # starArgVar is *args variable
hints = get_type_hints(func)
for argName, argVal in allArgs.items():
hintType = hints.get(argName, '')
if argName == starArgVar: # to check *args hints
if hintType:
if not doItemsOfListObeyHinting(allArgs[starArgVar],
[hints.get(starArgVar, '')]):
raise TypeError(f"values passed for *'{argName}' don't obey {hintType}")
isListOfSomeType, innerListTypes = isHintTypeOfAListOfSomeType(hintType)
if isListOfSomeType and not doItemsOfListObeyHinting(argVal, innerListTypes):
raise TypeError(f"values passed for '{argName}' don't obey {hintType}")
return func(*args, **kwargs)
def getAllArgs(args):
sig = inspect.signature(func)
params = sig.parameters
allArgs = {}
argsIndex = 0
starArgVar = None
for paramName, param in params.items():
if param.kind == param.VAR_POSITIONAL:
allArgs[paramName] = args[argsIndex:]
starArgVar = paramName
break
elif param.kind in {param.POSITIONAL_OR_KEYWORD, param.KEYWORD_ONLY}:
if argsIndex < len(args):
allArgs[paramName] = args[argsIndex]
argsIndex += 1
return allArgs, starArgVar
def doItemsOfListObeyHinting(argVals, innerListTypes):
for arg in argVals:
if not any([issubclass(type(arg), ilt) for ilt in innerListTypes]):
return False
return True
wrapper._originalFunc = func
wrapper._isArgValidatorWrapped = True
return wrapper
def argValidator(func):
# mustHave1
# the warnings for missing arguments are not clear
# Apply Pydantic validation first
func = validate_arguments(config={'arbitrary_types_allowed': True})(func)
# Then apply the custom type hint checker
return typeHintChecker_AListOfSomeType(func)
</code></pre>
<pre><code>from typing import Optional, Dict
from projectUtils.typeCheck import argValidator, typeHintChecker_AListOfSomeType
p = {}
@argValidator
def func1(a: Optional[Dict]):
a['sd'] = 3
func1(p)
print(p)
@typeHintChecker_AListOfSomeType
def func2(a: Optional[Dict]):
a['sd'] = 2
func2(p)
print(p)
@argValidator
def func3(a: dict):
a['sd'] = 3
func3(p)
print(p)
</code></pre>
<p>the result of <code>p</code> after <code>func1</code> is <code>{}</code> and after <code>func2</code> is <code>{'sd': 2}</code> and after <code>func3</code> is <code>{'sd': 3}</code></p>
<p>I want the dict to get manipulated when passed to func, but <code>argValidator</code> with Optional[Dict] doesn't do that but <code>argValidator</code> with <code>: dict</code> does that</p>
<p>note that <code>argValidator</code> is a combination of <code>typeHintChecker_AListOfSomeType</code> and pydantic's <code>validate_arguments</code> and <code>validate_arguments</code> causes this problem</p>
<p>note the solution should not only be specific for Optional[Dict] and should be general and work for all other possible mutable variable type to get changed when they are passed to a func</p>
<p>note most likely u need to suggest arguments for <code>validate_arguments</code> and the code should apply the <code>typeHintChecker_AListOfSomeType</code> functionality</p>
|
<python><pydantic>
|
2024-08-24 06:15:00
| 1
| 1,011
|
Farhang Amaji
|
78,908,148
| 3,718,065
|
PyQt D-BUS add multiple output arguments to the method
|
<p>I want to add several output arguments in a <strong>PyQt5 D-BUS</strong> method. I cloned the example from <a href="https://github.com/baoboa/pyqt5/tree/master/examples/dbus/remotecontrolledcar" rel="nofollow noreferrer">qt example</a>. In class <code>CarInterfaceAdaptor</code>, I added a new method <code>control</code> which has three output arguments: <code>speed</code>, <code>shift</code> and <code>value</code>:</p>
<pre><code>class CarInterfaceAdaptor(QDBusAbstractAdaptor):
Q_CLASSINFO("D-Bus Interface", 'org.example.Examples.CarInterface')
Q_CLASSINFO("D-Bus Introspection", ''
' <interface name="org.example.Examples.CarInterface">\n'
' <method name="accelerate"/>\n'
' <method name="decelerate"/>\n'
' <method name="turnLeft"/>\n'
' <method name="turnRight"/>\n'
' <method name="control">\n'
' <arg name="name" type="s" direction="in"/>\n'
' <arg name="delta" type="i" direction="in"/>\n'
' <arg name="speed" type="d" direction="out"/>\n'
' <arg name="shift" type="i" direction="out"/>\n'
' <arg name="value" type="s" direction="out"/>\n'
' </method>\n'
' </interface>\n'
'')
</code></pre>
<p>If I used <code>qdbusxml2cpp</code> to compile the above xml. I can get the declaration of <code>control</code>:</p>
<pre><code>double control(const QString &name, int delta, int &shift, QString &value);
</code></pre>
<p>The first output argument (<code>speed</code>) is returned by this function, and the other output arguments are defined as output reference arguments (<code>shift</code> and <code>value</code>).</p>
<p>And I wrote and run the an PyQt <code>QDBusAbstractInterface</code> derived class which has the following function:</p>
<pre><code>def control(self):
message = self.call('control', "car", 2)
print(message.arguments())
for arg in message.arguments():
print(arg)
</code></pre>
<p>And I got the following output:</p>
<pre><code>[100.0, 200, 'OK']
100.0
200
OK
</code></pre>
<p>In Python, there are not the equivalent output reference arguments for the functions.
However, I can return all these output arguments in a list, like this:</p>
<pre><code>@pyqtSlot(str, int, result=list)
def control(self, name, delta):
print("name: {}, delta: {}".format(name, delta))
speed = 100
shift = 200
value = "OK"
return [speed, shift, value]
</code></pre>
<p>Now I got the following output.</p>
<pre><code>[[100, 200, 'OK']]
[100, 200, 'OK']
</code></pre>
<p>It returns a list not three output arguments! It seems to be wrong. So</p>
<p>Q1: So how can I define <code>control</code> in <strong>PyQt D-BUS</strong> which can return multiple arguments (like C++) but not a list?</p>
<p>Q2: In <strong>PyQt5 D-BUS</strong>, I found it still works even if I didn't add the declaration of <code>control</code> in <code>Q_CLASSINFO</code>. So what's <code>Q_CLASSINFO("D-Bus Introspection"</code> ...) for? And, should I have to define <code>Q_CLASSINFO("D-Bus Introspection"</code> ...)?</p>
<p>Thanks!</p>
|
<python><c++><qt><pyqt5><dbus>
|
2024-08-24 04:50:57
| 1
| 791
|
sfzhang
|
78,907,944
| 16,703,774
|
Ansible can't find python lib -- python3-rpm -- which already installed
|
<p>I'm using Ansible and testing a module: ansible.builtin.package_facts</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Gather the package facts
ansible.builtin.package_facts:
manager: auto
- name: Print the package facts
ansible.builtin.debug:
var: ansible_facts.packages
- name: Check whether a package called foobar is installed
ansible.builtin.debug:
msg: "{{ ansible_facts.packages }}"
</code></pre>
<p>But it will fail at the first task and the error is</p>
<pre class="lang-none prettyprint-override"><code>TASK [test_role : Gather the package facts] ************************************
[WARNING]: Found "rpm" but Failed to import the required Python library (rpm)
on vsa12701896's Python /data/venv/bin/python3. Please read the module
documentation and install it in the appropriate location. If the required
library is installed, but Ansible is using the wrong Python interpreter, please
consult the documentation on ansible_python_interpreter
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Could not detect a supported package manager from the following list: ['rpm', 'apk', 'portage', 'pkg', 'pacman', 'apt', 'pkg_info'], or the required Python library is not installed. Check warnings for details."}
</code></pre>
<p>I'm sure I've installed that python3-rpm lib and confirm the python version is 3.9:</p>
<pre class="lang-none prettyprint-override"><code>(venv) vsa12701896:/data/src # zypper search python3-rpm
Loading repository data...
Reading installed packages...
S | Name | Summary | Type
--+-------------+-----------------------------------------------+--------
i | python3-rpm | Python Bindings for Manipulating RPM Packages | package
Note: For an extended search including not yet activated remote resources
please use 'zypper search-packages'.
(venv) vsa12701896:/data/src # pip3 list|grep rpm
rpm 0.2.0
(venv) vsa12701896:/data/src # python --version
Python 3.9.19
</code></pre>
<p>Where is the issue?</p>
<p>==============================================
Update:</p>
<ol>
<li>on this host, there's no other python interpreter ... only this python3.9 installed</li>
<li>that rpm package is installed --</li>
</ol>
<pre><code>(venv) vsa12701896:/data/venv/bin # python3 -m pip freeze | grep rpm
rpm==0.2.0
</code></pre>
<ol start="3">
<li><p>by more check -- I find it looks this "rpm" packages got some problem on itself --</p>
<p>(venv) vsa12701896:/data/venv/bin # python3 -c "import rpm; print(rpm.<strong>version</strong>)"
Traceback (most recent call last):
File "/data/venv/lib64/python3.9/site-packages/rpm/<strong>init</strong>.py", line 106, in
<em>shim_module_initializing</em>
NameError: name '<em>shim_module_initializing</em>' is not defined</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last):
File "", line 1, in
File "/data/venv/lib64/python3.9/site-packages/rpm/<strong>init</strong>.py", line 109, in
initialize()
File "/data/venv/lib64/python3.9/site-packages/rpm/<strong>init</strong>.py", line 98, in initialize
raise ImportError(
ImportError: Failed to import system RPM module. Make sure RPM Python bindings are installed on your system.</p>
</li>
</ol>
|
<python><ansible>
|
2024-08-24 01:18:53
| 1
| 393
|
EisenWang
|
78,907,748
| 3,398,536
|
How to upload a ssl certificate into google chrome webdriver in selenium python
|
<p>I need to use selenium python with an uploaded personal security certificate every time my service launch a new process, and for some reason is not possible to click at import in the Google Chrome settings.</p>
<p>I try to use XPATH, ID and other By options, and no way to work. If anyone has experience with this type of problem, would appreciate the help.</p>
<p>The error that mostly happen is</p>
<blockquote>
<p>File "", line 2, in
element = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="import"]')))
File "/home/pc/.local/lib/python3.10/site-packages/selenium/webdriver/support/wait.py", line 105, in until
raise TimeoutException(message, screen, stacktrace)
selenium.common.exceptions.TimeoutException: Message:</p>
</blockquote>
<p>Any help will be highly appreciated!</p>
<p>My code</p>
<pre><code>from selenium.webdriver.chrome.service import Service as ChromeService
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.action_chains import ActionChains
from selenium.webdriver.common.by import By
from selenium import webdriver
import os
def create_webdriver_chrome(download_dest):
options = webdriver.ChromeOptions()
options.add_experimental_option('excludeSwitches', ['enable-logging'])
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
prefs = {
"download.default_directory": download_dest,
"download.directory_upgrade": True,
"download.prompt_for_download": False,
"disable-popup-blocking": False,
"profile.default_content_settings.popups": 0
}
options.add_experimental_option("prefs", prefs)
options.add_argument('--disable-blink-features=AutomationControlled')
options.add_argument("--lang=pt-BR")
chrome_service_obj = ChromeService(executable_path=ChromeDriverManager().install())
driver = webdriver.Chrome(service=chrome_service_obj, options=options)
return driver
downdir_dest = os.path.join(os.getcwd(), 'downdir')
driver = create_webdriver_chrome(downdir_dest)
driver.get("chrome://settings/certificates")
wait = WebDriverWait(driver, 2)
element = wait.until(EC.element_to_be_clickable((By.XPATH, '//*[@id="import"]')))
driver.execute_script("arguments[0].scrollIntoView();", element)
ActionChains(driver).move_to_element(element).click().perform()
</code></pre>
<p>The pages it need to open are:</p>
<p><a href="https://i.sstatic.net/DMsq0F4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DMsq0F4E.png" alt="The Google Chrome Settings Page for certificates " /></a></p>
<p><a href="https://i.sstatic.net/DMsq0F4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DMsq0F4E.png" alt="and them upload the certificate by clicking on "Import", and doing all the following password send and so on." /></a></p>
|
<python><google-chrome><selenium-webdriver><ssl-certificate>
|
2024-08-23 22:45:58
| 0
| 341
|
Iron Banker Of Braavos
|
78,907,668
| 2,118,290
|
Problems with trying to download a webpage and click a button with selenium in docker using python
|
<p>I cannot get this to work right for the life of me. I'm trying to load a web-page and click a button on it and I cant get it to work. Either Selenium complains, does not load, complains it cant make a session, complains that it does not have proper options, loads forever or just straight up does not work.</p>
<p>Dockerfile</p>
<pre><code>FROM python:3.11-slim-buster
USER root
# Create a non-root user
RUN useradd -ms /bin/bash appuser
WORKDIR /app
RUN chown appuser:appuser /app
USER appuser
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy Β
COPY src .
# Expose the application port (e.g., 5000)
EXPOSE 5000
# Define the command to run the application
CMD ["python3", "app.py"]
</code></pre>
<p>Docker-compose.yml</p>
<pre><code>version: '3.8'
services:
chrome:
image: selenium/node-chrome:3.14.0-gallium
volumes:
- /dev/shm:/dev/shm
depends_on:
- hub
environment:
HUB_HOST: hub
hub:
image: selenium/hub:3.14.0-gallium
ports:
- "4444:4444"
web:
build: .
depends_on:
- hub
volumes:
- ./src:/app
ports:
- "5000:5000"
</code></pre>
<p>app.py</p>
<pre><code>from flask import Flask, render_template, request
import requests
import re
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
import time
import urllib.parse
from selenium.webdriver.chrome.options import Options
def download_page(url):
chrome_options = Options()
chrome_options.add_argument('--headless')
chrome_options.page_load_strategy = 'normal'
chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])
chrome_options.add_experimental_option('useAutomationExtension', False)
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--lang=en')
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--allow-running-insecure-content')
chrome_options.add_argument('--disable-notifications')
chrome_options.add_argument('--disable-dev-shm-usage')
chrome_options.add_argument('--disable-browser-side-navigation')
chrome_options.add_argument('--mute-audio')
chrome_options.add_argument('--force-device-scale-factor=1')
chrome_options.add_argument('window-size=1080x760')
driver = webdriver.Remote('http://hub:4444/wd/hub')
driver.get(url)
//Process page or click buttons
app = Flask(__name__)
@app.route('/')
def index():
return render_template('index.html')
@app.route('/process', methods=['POST'])
def process():
url = request.form['url']
download_page(url)
return "URL processing complete!"
if __name__ == '__main__':
app.run(host='0.0.0.0',debug=True)
</code></pre>
<p>index.html</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<title>URL Processor</title>
</head>
<body>
<h1>Enter a URL to process:</h1>
<form method="POST" action="/process">
<input type="text" name="url" placeholder="Enter URL here">
<button type="submit">Process URL</button>
</form>
</body>
</html>
</code></pre>
<p>I have tried using selenium/standalone-chrome as the docker base, but it does not allow pip to install flask because its "controlled externaly"</p>
<p>I have tried loading it external but it complains it cant make a session. SessionNotCreatedException</p>
<p>I tried loading it internally but it complains it cant find the chrome driver and when i tried installing it just hung. no error. nothing just sat there.</p>
<p>If i just run it as a standalone without flask it works PERFECTLY fine. Its just when I tried to wrap it into a docker file it stops me at every turn. It also does not help that the documentation for selenium is outdated.</p>
|
<python><docker><selenium-webdriver><flask>
|
2024-08-23 22:00:53
| 1
| 674
|
Steven Venham
|
78,907,519
| 2,153,235
|
Having trouble seeing the motive for decorators in *simple* examples
|
<p>I've surfed some tutorials online about decorators. I'm having trouble seeing their benefit for simple examples. Here is a common example of a function crying out to be decorated, taken from <a href="https://www.programiz.com/python-programming/decorator" rel="nofollow noreferrer">this page</a>:</p>
<pre><code># Unelegant decoration
#---------------------
def make_pretty(func):
def inner():
print("I got decorated")
func()
return inner
def ordinary():
print("I am ordinary")
decorated_func = make_pretty(ordinary)
decorated_func()
# Output
#-------
# I got decorated
# I am ordinary
</code></pre>
<p>The benefit of decorators is described as follows: "Instead of assigning the function call to a variable, Python provides a much more elegant way to achieve this functionality".</p>
<pre><code># Elegant decoration
#-------------------
@make_pretty
def ordinary():
print("I am ordinary")
ordinary()
# Output
#-------
# I got decorated
# I am ordinary
</code></pre>
<p>Other tutorials provide similar examples and similar motivations, e.g., <a href="https://www.datacamp.com/tutorial/decorators-python" rel="nofollow noreferrer">here</a>.</p>
<p>The difficulty I have with this explanation is that it doesn't quite fit the intent of adding functionality to the function to be decorated without modifying it (another frequent explanation of decorators). In the above example, undecorated "ordinary()" is no longer available, so decorating it does not in fact leave the original function available for use in situations where the decoration is not needed or desired.</p>
<p>The other more specific motive is greater elegance by "not assigning the function call to a variable". For the "Unelegant decoration" code, however, this is easily achieved without the "Elegant decoration" code above:</p>
<pre><code>make_pretty(ordinary)()
# Output
#-------
# I got decorated
# I am ordinary
</code></pre>
<p>The tutorials typically proceed by describing decorators in cases where functions take arguments. I can't follow the motive for them because I can't even understand the benefit in the simplest case above. SO Q&A's also talk about practical use cases (e.g., <a href="https://stackoverflow.com/questions/489720">here</a>), but it's hard to follow the reason for decorators, again when the motive in the simplest case above isn't clear.</p>
<p>Is it possible to state in plain language what the benefit is in the simplest case above, with no function arguments? Or is the benefit only going to be clear by somehow figuring out the more complicated cases?</p>
|
<python><python-decorators>
|
2024-08-23 20:46:13
| 1
| 1,265
|
user2153235
|
78,907,515
| 4,376,643
|
Error with a python recursively user-defined type
|
<p>I am using the recursively defined type "Json" type as suggested in <a href="https://stackoverflow.com/a/76701025/4376643">this answer</a> to <a href="https://stackoverflow.com/q/58400724/4376643">this SO question</a>.</p>
<p>However I am running into unexpected errors as shown in the code below. The first assignment passes but the other three fail. I would expect them to either all pass or all fail. (Preferably, pass).</p>
<p>Is it a bug? If it is a bug is there a workaround without adding unnecessary test to the definitions of <code>JsonType</code> and <code>JsonValue</code>?</p>
<pre class="lang-py prettyprint-override"><code>import typing as t
JsonType: t.TypeAlias = t.List['JsonValue'] | t.Mapping[str, 'JsonValue']
JsonValue: t.TypeAlias = str | int | float | None | JsonType
v0 = [0., 0.] # (variable) v0: list[float]
v1 = [0.] * 2 # (variable) v1: list[float]
test01: JsonType = {
"v0": [0., 0.] # ok
}
test02: JsonType = {
"v0": v0 # error
}
test11: JsonType = {
"v": [0.] * 2 # error
}
test12: JsonType = {
"v": v2 # error
}
</code></pre>
<pre><code>Expression of type "dict[str, list[float]]" is incompatible with declared type "RJsonableType"
Type "dict[str, list[float]]" is incompatible with type "RJsonableType"
"dict[str, list[float]]" is incompatible with "List[RJsonableValue]"
"dict[str, list[float]]" is incompatible with "Mapping[str, RJsonableValue]"
Type parameter "_VT_co@Mapping" is covariant, but "list[float]" is not a subtype of "RJsonableValue"
Type "list[float]" is incompatible with type "RJsonableValue"
"list[float]" is incompatible with "str"
"list[float]" is incompatible with "int"
"list[float]" is incompatible with "float"
...PylancereportAssignmentType
</code></pre>
|
<python><python-typing>
|
2024-08-23 20:45:47
| 0
| 2,658
|
Craig Hicks
|
78,907,444
| 3,078,502
|
Is it possible use VS Code to pass multiple command line arguments to Python script?
|
<p>According to the official documentation <a href="https://code.visualstudio.com/docs/python/debugging" rel="noreferrer">"Python debugging in VS Code"</a>, launch.json can be configured to run with specific command line arguments, or you can use <code>${command:pickArgs}</code> to input arguments at run time.</p>
<p>Examples of putting arguments <em>in</em> launch.json:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/46340968/specifying-arguments-in-launch-json-for-python">Specifying arguments in launch.json for Python</a></li>
<li><a href="https://stackoverflow.com/questions/51244223/visual-studio-code-how-debug-python-script-with-arguments">Visual Studio Code: How debug Python script with arguments</a></li>
</ul>
<p>However, I would rather use but I would rather use <code>${command:pickArgs}</code> because it makes it easier to test multiple times with different values.</p>
<p>The first time I tried this, I allowed VS Code to create launch.json. By default it contained the following:</p>
<pre><code> "args": [
"${command:pickArgs}"
]
</code></pre>
<p>When I run the file, I get a dialog for inputting arguments:</p>
<p><a href="https://i.sstatic.net/rElGEMAk.png" rel="noreferrer"><img src="https://i.sstatic.net/rElGEMAk.png" alt="Python Debugger: Current File with Arguments" /></a></p>
<p>However, if I put in multiple arguments, they get wrapped in quotes and treated as a single string argument. In a case where, e.g. the arguments are supposed to be numeric, an error is generated. For example, if I pass in <code>4 7</code>, which both need to be cast to <code>int</code>, <code>sys.argv[1]</code> gets the value <code>'4 7'</code> rather than <code>'4'</code>, yielding the error</p>
<pre><code>invalid literal for int() with base 10: '4 7'
</code></pre>
<p>I have tried comma-separating the arguments, and putting quotes around them, and what I get is <code>sys.argv[1]</code> with values like <code>'4, 7'</code> or <code>'"4", "7"'</code>. Needless to say, these don't work either.</p>
<p>I've seen examples online of a launch.json configuration as follows:</p>
<pre><code> "args": "${command:pickArgs}"
</code></pre>
<p>That is, there are no brackets around <code>${command:pickArgs}</code>. However, this generates a problem in that if there are spaces in the path to the Python interpreter, the path gets broken apart at the spaces. See, for example:</p>
<p><a href="https://github.com/microsoft/vscode-python-debugger/issues/233" rel="noreferrer">Spaces in python interpreter path or program result in incorrectly quoted debugging commands with arguments #233</a></p>
<p>The solution seems to be to put the brackets in, which is what I started with in the first place. Since that's what VS Code did automatically, I'm not sure where the varying examples are coming from (with or without brackets) and can't find documentation on this other than the very short mention of <code>${command:pickArgs}</code> in the official documentation I linked at the very beginning.</p>
<p>So, I have not been able to figure out a way to pass in multiple arguments using <code>${command:pickArgs}</code> (as opposed to hard-coding directly in launch.json), and the only promising solution (remove the brackets) is poorly documented, generates other errors, and the solution seems to be to put the brackets back in.</p>
<p>Is this possible to do at all?</p>
|
<python><visual-studio-code><vscode-debugger>
|
2024-08-23 20:13:41
| 2
| 609
|
Lee Hachadoorian
|
78,907,380
| 4,867,193
|
Removing Spikes from Spectra
|
<p>The question is; how to remove spikes from spectroscopy data, in an efficient way, <em>using Numpy or SciPy</em> only.</p>
<p>[Aside, before going further, it may be necessary to mention that the question with a similar title at, https://stackoverflow.com/questions/37556487, is not the same question. The data has characteristics and the answers posted there are not appropriate to this data.]</p>
<p>For context, there about 1000 of these per data set and there are perhaps 20 data sets that we need to process in various ways. So, the time to load and clean each spectrum is important.</p>
<p>Below there is a code that removes the spikes but takes too long. So, the question might be rephrased "how do I speed up this code?" I originally thought to express it one or two lines in numpy, but so far have not worked out how to do that.</p>
<p>There is a median filter in SciPy's. So far, that median filter is not working well for this task.</p>
<p>Following is an example of a spectrum that might be read from a spectrometer that has a linear CCD.</p>
<p>Notice there are spikes. (Where these spikes come from is interesting, but not directly relevant to the question.)</p>
<p><a href="https://i.sstatic.net/fzzev2o6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzzev2o6.jpg" alt="enter image description here" /></a></p>
<p>Now here is a code that I wrote, that removes the spikes.</p>
<pre><code>def spectrum_excursion_filter( data, span=13, threshold=2., passes=2 ):
datalen = len(data)
x = np.linspace(0,span,span,endpoint=False)
for n, d in enumerate(data):
if n < span:
n1 = 0
n2 = n1 + span;
elif datalen - n < span:
n1 = datalen - span
n2 = datalen
else:
n1 = n - int(span/2)
n2 = n + span
#segment = copy.deepcopy(data[n1:n2])
segment = data[n1:n2]
median_ = np.median(segment)
stdev_ = np.std(segment)
m = passes - 1
while m > 0:
idx = np.where(np.abs(segment-median_)<threshold*stdev_)
signal = np.median(segment[idx])
stdev_ = np.std(segment[idx])
m -= 1
if abs(d - median_) > threshold * stdev_:
data[n] = median_
return data
</code></pre>
<p>And here is the result. It does a good job. But it takes 3 seconds. So that is 3000 seconds per data set. And that is far too long.</p>
<p><a href="https://i.sstatic.net/3CdavvlD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3CdavvlD.jpg" alt="enter image description here" /></a></p>
<p>The raw data for the above images can be obtained here:</p>
<p><a href="https://drive.google.com/drive/folders/19AyVF1cTl2JXHGWswUY0fBF735l4lmYz?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/19AyVF1cTl2JXHGWswUY0fBF735l4lmYz?usp=sharing</a></p>
|
<python><numpy><scipy>
|
2024-08-23 19:43:28
| 4
| 2,587
|
DrM
|
78,907,265
| 20,591,261
|
Polars keep the biggest value using 2 categories
|
<p>I have a polars dataframe that contain some ID, actions, and values :</p>
<p>Example Dataframe:</p>
<pre><code>data = {
"ID" : [1, 1, 2,2,3,3],
"Action" : ["A", "A", "B", "B", "A", "A"],
"Where" : ["Office", "Home", "Home", "Office", "Home", "Home"],
"Value" : [1, 2, 3, 4, 5, 6]
}
df = pl.DataFrame(data)
</code></pre>
<p>I want to select for each ID and action the biggest value, so i know where he rather do the action.</p>
<p>I'm taking the following approach :</p>
<pre><code>(
df
.select(
pl.col("ID"),
pl.col("Action"),
pl.col("Where"),
TOP = pl.col("Value").max().over(["ID", "Action"]))
)
</code></pre>
<p>After that , i sorted the values and keep the unique values (The first one) to maintain the desired info, however the input its incorrect :</p>
<pre><code>(
df
.select(
pl.col("ID"),
pl.col("Action"),
pl.col("Where"),
TOP = pl.col("Value").max().over(["ID", "Action"]))
.sort(
pl.col("*"), descending =True
)
.unique(
subset = ["ID", "Action"],
maintain_order = True,
keep = "first"
)
)
</code></pre>
<p>Current Output :</p>
<pre><code>shape: (3, 4)
βββββββ¬βββββββββ¬βββββββββ¬ββββββ
β ID β Action β Where β TOP β
β --- β --- β --- β --- β
β i64 β str β str β i64 β
βββββββͺβββββββββͺβββββββββͺββββββ‘
β 3 β A β Home β 6 β
β 2 β B β Office β 4 β
β 1 β A β Office β 2 β
βββββββ΄βββββββββ΄βββββββββ΄ββββββ
</code></pre>
<p>Expected Output:</p>
<pre><code>shape: (3, 4)
βββββββ¬βββββββββ¬βββββββββ¬ββββββ
β ID β Action β Where β TOP β
β --- β --- β --- β --- β
β i64 β str β str β i64 β
βββββββͺβββββββββͺβββββββββͺββββββ‘
β 3 β A β Home β 6 β
β 2 β B β Office β 4 β
β 1 β A β Home β 2 β
βββββββ΄βββββββββ΄βββββββββ΄ββββββ
</code></pre>
<p>Also, i think this approach its not the optimal way</p>
|
<python><dataframe><python-polars>
|
2024-08-23 19:00:47
| 4
| 1,195
|
Simon
|
78,906,970
| 2,886,575
|
Retain image resolution in matplotlib sublots
|
<p>I have a set of images that I would like to show in a grid. I am using <code>matplotlib</code> for this, but am not super picky about the plotting library. However, I am using <code>matplotlib</code> because it allows me to put labels and such.</p>
<p>I would like for the resulting figure to retain the resolution of the original images. So, in the following example, the images are each 195x240. I would like for the resulting figure to contain a grid of 195x240 images. So, the resulting figure in the example should be 5*195 x 5*240, plus extra space in x and y for margins, labels, etc.</p>
<pre><code>import cv2
import urllib
import numpy as np
import matplotlib.pyplot as plt
req = urllib.request.urlopen('https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/OpenCV_Logo_with_text.png/195px-OpenCV_Logo_with_text.png')
arr = np.asarray(bytearray(req.read()), dtype=np.uint8)
img = cv2.imdecode(arr, -1)
n_gain = 5
n_bias = 5
fig, ax = plt.subplots(n_gain, n_bias)
for a in range(n_gain):
for b in range(n_bias):
axis = ax[a,b]
axis.imshow(np.clip((1+a*0.2)*img + b*20,0,255)/255.)
axis.set_xticks([])
axis.set_yticks([])
for a in range(n_gain):
ax[a,0].set_ylabel(str(1+a*0.2))
for b, beta in enumerate(betas):
ax[len(alphas)-1,b].set_xlabel(str(b*20))
fig.supylabel('gain (alpha)')
fig.supxlabel('bias (beta)')
fig.set_size_inches(10,10)
fig.subplots_adjust(wspace=None, hspace=None)
</code></pre>
|
<python><matplotlib>
|
2024-08-23 17:24:31
| 1
| 5,605
|
Him
|
78,906,960
| 825,227
|
Retrieve data from an existing barplot figure/axis in Python
|
<p>I'm creating a barplot that I'm subsequently updating over time with new values.</p>
<pre><code>import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
f, ax = plt.subplots()
sns.set_color_codes('muted')
sns.barplot(data = d[d.Side==0], x = 'Size', y = 'Price', color = 'b', orient = 'h', native_scale=True)
sns.barplot(data = d[d.Side==1], x = 'Size', y = 'Price', color = 'r', orient = 'h', native_scale=True)
sns.despine()
</code></pre>
<h3>follow-up processing</h3>
<pre><code>for i, row in mcdu_l2[20:].iterrows():
ax.containers[int(row.Side)][int(row.Position)]._width = row.Size
plt.draw()
plt.pause(1)
</code></pre>
<p><strong>d</strong></p>
<pre><code>Position Operation Side Price Size
0 9 0 1 0.7298 -37
1 8 0 1 0.7297 -14
2 7 0 1 0.7296 -8
3 6 0 1 0.7295 -426
4 5 0 1 0.7294 -16
5 4 0 1 0.7293 -16
6 3 0 1 0.7292 -15
7 2 0 1 0.7291 -267
8 1 0 1 0.729 -427
9 0 0 1 0.7289 -16
10 0 0 0 0.7299 6
11 1 0 0 0.73 34
12 2 0 0 0.7301 7
13 3 0 0 0.7302 9
14 4 0 0 0.7303 16
15 5 0 0 0.7304 15
16 6 0 0 0.7305 429
17 7 0 0 0.7306 16
18 8 0 0 0.7307 265
19 9 0 0 0.7308 18
</code></pre>
<p><strong>mcdu_l2</strong></p>
<pre><code> Position Operation Side Price Size
36 3 1 0 0.7302 18
37 9 1 1 0.7298 -8
38 9 1 1 0.7298 -9
39 9 1 1 0.7298 -8
40 9 1 1 0.7298 -9
41 9 1 1 0.7298 -14
42 9 1 1 0.7298 -9
43 9 2 1 0.0 0
44 0 0 1 0.7288 -17
45 9 1 1 0.7297 -29
46 8 1 1 0.7296 -23
47 0 2 1 0.0 0
48 9 0 1 0.7298 -3
49 8 1 1 0.7297 -31
50 9 1 1 0.7298 -10
</code></pre>
<p><a href="https://i.sstatic.net/EDoTb8XZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDoTb8XZ.png" alt="enter image description here" /></a></p>
<p>I'm updating this over time by reassigning the <code>ax.containers[]._width</code> attribute and using the <code>plt.draw</code> method within <code>matplotlib</code> to refresh the plot.</p>
<p>Intermittently, I'd like to extract values that are currently displayed, ideally in the form of a dataframe, that I'll use to create a secondary barplot. Is there a straightforward way to extract this data from an existing figure/axis?</p>
|
<python><pandas><matplotlib><seaborn>
|
2024-08-23 17:20:54
| 0
| 1,702
|
Chris
|
78,906,909
| 6,597,296
|
How to rate limit a deferred HTTP client in Twisted?
|
<p>I have an HTTP client written in Twisted that sends requests to an API of some site from a deferred. It goes something like this (somewhat simplified):</p>
<pre class="lang-py prettyprint-override"><code>from json import loads
from core import output
from twisted.python.log import msg
from twisted.internet import reactor
from twisted.web.client import Agent, HTTPConnectionPool, _HTTP11ClientFactory, readBody
from twisted.web.http_headers import Headers
from twisted.internet.ssl import ClientContextFactory
class WebClientContextFactory(ClientContextFactory):
def getContext(self, hostname, port):
return ClientContextFactory.getContext(self)
class QuietHTTP11ClientFactory(_HTTP11ClientFactory):
# To shut up the garbage in the log
noisy = False
class Output(output.Output):
def start(self):
myQuietPool = HTTPConnectionPool(reactor)
myQuietPool._factory = QuietHTTP11ClientFactory
self.agent = Agent(
reactor,
contextFactory=WebClientContextFactory(),
pool=myQuietPool
)
def stop(self):
pass
def write(self, event):
messg = 'Whatever'
self.send_message(messg)
def send_message(self, message):
headers = Headers({
b'User-Agent': [b'MyApp']
})
url = 'https://api.somesite.com/{}'.format(message)
d = self.agent.request(b'GET', url.encode('utf-8'), headers, None)
def cbBody(body):
return processResult(body)
def cbPartial(failure):
failure.printTraceback()
return processResult(failure.value)
def cbResponse(response):
if response.code in [200, 201]:
return
else:
msg('Site response: {} {}'.format(response.code, response.phrase))
d = readBody(response)
d.addCallback(cbBody)
d.addErrback(cbPartial)
return d
def cbError(failure):
failure.printTraceback()
def processResult(result):
j = loads(result)
msg('Site response: {}'.format(j))
d.addCallback(cbResponse)
d.addErrback(cbError)
return d
</code></pre>
<p>This works fine but the site is rate limiting the requests and starts dropping them if they are arriving too fast. So, I need to rate-limit the client too and make sure that it isn't sending the requests too fast - yet they aren't lost, so some kind of buffering/queuing is needed. I don't need a precise rate limiting, like "no more than X requests per second"; just some reasonable delay (lik 1 second) after each request is fine.</p>
<p>Unfortunately, I can't use <code>sleep()</code> from a deferred, so some other approach is necessesary.</p>
<p>From googoing around, it seems that the basic idea is to do something like</p>
<pre class="lang-py prettyprint-override"><code>self.transport.pauseProducing()
delay = 1 # seconds
self.reactor.callLater(delay, self.transport.resumeProducing)
</code></pre>
<p>at least according to <a href="https://stackoverflow.com/a/20816869/6597296">this answer</a>. But the code there doesn't work "as is" - <code>SlowDownloader</code> is expected to take a parameter (a reactor), so <code>SlowDownloader()</code> causes an error.</p>
<p>I also found <a href="https://stackoverflow.com/a/30203290/6597296">this answer</a>, which uses the interesting idea of using the factory as a storage, so you don't need to implement your own queues and stuff - but it deals with rate-limiting on the server side, while I need to rate-limit the client.</p>
<p>I feel that I'm pretty close to the solution but I still can't figure out how exactly to combine the information from these two answers, in order to produce working code, so any help would be appreciated.</p>
|
<python><twisted><rate-limiting><deferred>
|
2024-08-23 17:04:28
| 2
| 578
|
bontchev
|
78,906,798
| 11,462,274
|
Mapping pattern and correct codes to install a library/package in Python that only targets the functions within a code
|
<p>I'm trying to generate a local library/package that only has the functions that exist within the code, but when I try to use <code>from .my_functions import *</code> as the <code>__init__</code> default, when installing and trying to use it in a new code, not only the functions that are in the code appear, but also the code itself that also gives access to the same functions. Let's see!</p>
<p><em><strong>Initial note</strong></em>: the number of functions in the code will change over time and the name of the functions will also change, so I want a way to not have to list each function name for <code>__init__</code>, I want to use a way that collects all the functions without specifying them one by one, so I tried to use <code>*</code>.</p>
<p>File map:</p>
<pre><code>my_library/
β
βββ my_library/
β βββ __init__.py
β βββ my_functions.py
β
βββ setup.py
</code></pre>
<p><a href="https://i.sstatic.net/trzElf0y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trzElf0y.png" alt="enter image description here" /></a></p>
<p><code>__init__.py</code>:</p>
<pre><code>from .my_functions import *
</code></pre>
<p><code>my_functions.py</code>:</p>
<pre><code>def send_hi():
print("Hi!")
def send_bye():
print("Bye!")
def send_go():
print("Go!")
</code></pre>
<p><code>setup.py</code>:</p>
<pre><code>from setuptools import setup, find_packages
setup(
name="my_library",
version="0.1",
packages=find_packages(),
)
</code></pre>
<p>Installation using <code>pip</code> in the <code>my_library/</code> folder exactly where we have the <code>setup.py</code> file:</p>
<pre><code>pip install .
</code></pre>
<p>Current result:</p>
<p><a href="https://i.sstatic.net/E3ObdOZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E3ObdOZP.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Dc0gKp4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dc0gKp4E.png" alt="enter image description here" /></a></p>
<p>Expected result (note that there is no option to access the file containing the functions, only the functions):</p>
<p><a href="https://i.sstatic.net/oTz2BRPA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTz2BRPA.png" alt="enter image description here" /></a></p>
|
<python><python-import><python-packaging>
|
2024-08-23 16:30:15
| 1
| 2,222
|
Digital Farmer
|
78,906,780
| 3,042,398
|
PyHanko - Invalid signature with error "Unexpected byte range values defining scope of signed data" in interrupted mode
|
<p>I try to setup a pades signature flow in our Flask API.</p>
<p>As we use PKCS11 devices on the clients computers, we need to use the interrupted signing flow :</p>
<ul>
<li>User POST on <code>/pades/start</code> with his certificate as a PEM file and the PDF to sign.</li>
<li>The API return the digest to the client, who use the smartcard to sign it, and an unique task_id</li>
<li>User POST on <code>/pades/complete</code> with his task_id and the computed signature. The API use this signature to create the digitally signed PDF</li>
</ul>
<p>Currently, this flow works. But the generated PDF is considered having an invalid signature with this message "Unexpected byte range values defining scope of signed data.
Details: The signature byte range is invalid"</p>
<pre class="lang-py prettyprint-override"><code># Relevant part in the /pades/start route
with open(task_dir / "certificate.pem", "w") as f:
f.write(body["certificate"])
cert = load_cert_from_pemder(task_dir / "certificate.pem")
with open(task_dir / "document.pdf", "rb+") as f:
writer = IncrementalPdfFileWriter(f)
fields.append_signature_field(
writer,
sig_field_spec=fields.SigFieldSpec("Signature", box=(200, 600, 400, 660)),
)
meta = signers.PdfSignatureMetadata(
field_name="Signature",
subfilter=fields.SigSeedSubFilter.PADES,
md_algorithm="sha256",
)
ext_signer = signers.ExternalSigner(
signing_cert=cert,
cert_registry=registry.CertificateRegistry(),
signature_value=bytes(8192), # I tried to adjust this with many different values without success
)
pdf_signer = signers.PdfSigner(meta, signer=ext_signer)
prep_digest, tbs_document, _ = pdf_signer.digest_doc_for_signing(writer)
post_sign_instructions = tbs_document.post_sign_instructions
def async_to_sync(awaitable):
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
return loop.run_until_complete(awaitable)
signed_attrs: asn1crypto.cms.CMSAttributes = async_to_sync(
ext_signer.signed_attrs(
prep_digest.document_digest, "sha256", use_pades=True
)
)
task = {
**(body or {}),
"id": task_id,
"prep_digest": prep_digest,
"signed_attrs": signed_attrs,
"psi": post_sign_instructions,
}
redis.set(
f"task:{task_id}",
pickle.dumps(task),
)
writer.write_in_place()
return {"task": task_id, "digest": prep_digest.document_digest.hex()}
# Relevant part in the /pades/complete route
task_id = body["task"]
task_str = redis.get(f"task:{task_id}")
task = pickle.loads(task_str) if task_str else None
task_dir = Path(get_task_dir(settings.WORKDIR, task_id))
if not task:
return {"error": "Task not found"}, 404
ext_signer = signers.ExternalSigner(
signing_cert=load_cert_from_pemder(task_dir / "certificate.pem"),
signature_value=bytes.fromhex(body["signature"]),
cert_registry=registry.CertificateRegistry(),
)
sig_cms = ext_signer.sign_prescribed_attributes(
"sha256", signed_attrs=task["signed_attrs"]
)
with open(task_dir / "document.pdf", "rb+") as f:
PdfTBSDocument.finish_signing(
f,
prepared_digest=task["prep_digest"],
signature_cms=sig_cms,
post_sign_instr=task["psi"],
)
redis.delete(f"task:{task_id}")
return "ok"
</code></pre>
<p>What can I try to fix this error message ?</p>
|
<python><flask><pades><pyhanko>
|
2024-08-23 16:25:34
| 1
| 1,408
|
Varkal
|
78,906,647
| 2,393,452
|
pyvisa equivalent for serial.serial.rts=False
|
<p>I have a specific instrument that needs to deassert the RTS line before communication can work. On an old script I was doing a basic <code>serial.serial.rts=False</code>. What is the equivalent with <code>pyvisa</code>?</p>
<p>I tried with <code>pyvisa.constants.LineState</code> but could not manage to replicate the behavior.</p>
<p>Old code that works:</p>
<pre class="lang-py prettyprint-override"><code>instrument = serial.Serial('COM6',
baudrate=4800,
parity='N',
bytesize=8,
stopbits=1,
xonxoff=False,
rtscts=False,
dsrdtr=True)
instrument.rts = False
</code></pre>
<p>Now with <code>pyvisa</code> I have:</p>
<pre class="lang-py prettyprint-override"><code>rm = pyvisa.ResourceManager()
instrument = rm.open_ressource('ASRL6::INSTR')
instrument.baud_rate=4800
instrument.data_bits=8
instrument.flow_control=VI_ASRL_FLOW_DTR_DSR
instrument.parity=Parity.none
instrument.stop_bits=StopBits.one
</code></pre>
<p>And it is not working, which I assume comes from not deasserting the RTS line...</p>
|
<python><pyserial><pyvisa>
|
2024-08-23 15:43:04
| 0
| 540
|
bserra
|
78,906,590
| 8,960,078
|
Why does an rdkit function only work when I import something else?
|
<p>There is something seriously strange going on here with RDKit's imports that I don't understand.</p>
<p>I am using some basic functionality (as per their <a href="https://www.rdkit.org/docs/source/rdkit.Chem.Descriptors.html#rdkit.Chem.Descriptors.ExactMolWt" rel="nofollow noreferrer">documentation</a>) like so:</p>
<pre class="lang-py prettyprint-override"><code>from rdkit import Chem
# Create a molecule from a SMILES pattern (arbitrarily chosen for this example)
mol = Chem.MolFromSmiles('CCCC')
# Calculate its molecular weight
print(Chem.Descriptors.ExactMolWt(mol))
</code></pre>
<p>Except, <em>this doesn't work.</em></p>
<blockquote>
<p>AttributeError: module 'rdkit.Chem' has no attribute 'Descriptors'</p>
</blockquote>
<p>However, if I add an import line at the top: <code>from rdkit.Chem import Descriptors</code>. <em>Now it works.</em></p>
<blockquote>
<p>58.078250319999995</p>
</blockquote>
<p>Note, I HAVE NOT CHANGED ANY OF THE REST OF THE CODE. Only the import, which I do not use directly!</p>
<p>What on earth is going on here?</p>
|
<python><python-3.x><rdkit>
|
2024-08-23 15:26:31
| 3
| 1,103
|
QuantumChris
|
78,906,565
| 8,954,109
|
mypy operator "+" not supported for ctypes Structure._fields_
|
<p>I was handed code that basically describes inheritance via copy pasting all the base fields to every sub struct:</p>
<pre class="lang-py prettyprint-override"><code>class Base(ct.Structure):
_fields_ = [
("hello", ct.c_int32)
]
class Foo(ct.Structure):
_fields_ = Base._fields_ + [
("world", ct.c_int64)
]
class Bar(ct.Structure):
_fields_ = Base._fields_ + [
("joe", ct.c_float)
]
# And so on...
</code></pre>
<p>Both <code>Foo</code> and <code>Bar</code> have all the fields of <code>Base</code>. But mypy/pyright/pylance's type narrowing hates that the tuples aren't all the same type.</p>
<p>vscode output:</p>
<blockquote>
<p><code>Operator "+" not supported for types "Sequence[tuple[str, type[_CData]] | tuple[str, type[_CData], int]]" and "list[tuple[Literal['world'], type[c_int64]]]" PylancereportOperatorIssue</code></p>
</blockquote>
<p>With minimal changes, how can I make the tools happy?</p>
|
<python><ctypes><python-typing><mypy><pyright>
|
2024-08-23 15:18:36
| 2
| 693
|
plswork04
|
78,906,183
| 4,108,376
|
Getting buffer_info from py::memoryview
|
<p>Using Pybind11, is there a way to get the <code>py::buffer_info</code> object from a <code>py::memoryview</code>? Or some other way to get the data pointer, number of dimensions, shape, strides, item size from the python <code>memoryview</code>, without creating a copy of the array?</p>
<p>It is for a function that takes a <code>py::memoryview</code> as parameter, so that it can be called from Python with a <code>memoryview</code> object.</p>
<p>The underlying Python C API seems to have <code>PyMemoryView_GET_BUFFER()</code> for this, but it is not wrapped anywhere by pybind.</p>
|
<python><c++><pybind11><memoryview>
|
2024-08-23 13:35:35
| 0
| 9,230
|
tmlen
|
78,905,974
| 1,131,165
|
PermissionDenied: 403 You do not have permission to access tuned model
|
<p>I have tuned a gemini base model and wrote a python code to access it but unfortunatelly I keep getting the error "PermissionDenied: 403 You do not have permission to access tuned model".
I have read instructions to set a OAuth 2.0 Client ID and I think I did it right since the authentication worked, but the prompt does not.</p>
<p>Bellow is the code I am trying to run.</p>
<pre><code>import os
import gspread
import google.generativeai as genai
from load_creds import load_creds
creds = load_creds()
genai.configure(credentials=creds)
# Configure the API key for the Google Generative AI
genai.configure(api_key="My API key")
# Configuration for text generation
generation_config = {
"temperature": 0.9,
"top_p": 0.95,
"top_k": 64,
"max_output_tokens": 1024,
"response_mime_type": "text/plain",
}
# Create a GenerativeModel instance
model = genai.GenerativeModel(
model_name="tunedModels/mytunedmodel",
generation_config=generation_config,
)
# Start a chat session
chat_session = model.start_chat(enable_automatic_function_calling=True)
# Send a message to the chat session including the file URI
response = chat_session.send_message(f"Good morning")
# Print the response
print(response.text)"
</code></pre>
|
<python><google-gemini><google-generativeai>
|
2024-08-23 12:41:17
| 1
| 864
|
Jonathan Livingston Seagull
|
78,905,921
| 275,088
|
How to properly annotate the `call_next` parameter to a FastAPI middleware?
|
<p>I'm trying to adapt an example from the FastAPI <a href="https://fastapi.tiangolo.com/tutorial/middleware/#create-a-middleware" rel="nofollow noreferrer">docs</a> to create a middleware:</p>
<pre><code>@app.middleware("http")
async def add_process_time_header(request: Request, call_next):
start_time = time.time()
response = await call_next(request)
process_time = time.time() - start_time
response.headers["X-Process-Time"] = str(process_time)
return response
</code></pre>
<p>However, the <code>call_next</code> parameter here is not annotated. When I use just <code>typing.Callable</code> for that, I get an error from <code>mypy</code>:</p>
<pre><code>error: Missing type parameters for generic type "Callable" [type-arg]
</code></pre>
<p>What's the precise way to annotate the parameter?</p>
|
<python><fastapi><python-typing><mypy>
|
2024-08-23 12:30:03
| 1
| 16,548
|
planetp
|
78,905,811
| 15,416
|
Can random.sample handle the case k=0?
|
<p>In code, can I assume:</p>
<pre><code>assert random.sample((1,2,3), k=0) == []
</code></pre>
<p>I suspect the answer is yes, but I don't see explicit documentation confirming it.</p>
<p>I've got a case where <code>k = desired_length - current_length</code> is the number of extra random elements that I need to add to an existing list. I want to be certain that I don't need a special case to handle <code>desired_length == current_length</code> (the error case <code>desired_length < current_length</code> is already handled).</p>
|
<python><random>
|
2024-08-23 12:07:48
| 2
| 181,617
|
MSalters
|
78,905,650
| 16,759,116
|
Why/when/where was `mydict[*mylist]` introduced?
|
<p>This shorthand to use lists as dict keys works in Python 3.12:</p>
<pre><code>mydict = {(1, 2, 3): 'works!'}
mylist = [1, 2, 3]
print(mydict[*mylist])
</code></pre>
<p>In 3.10 it gives <code>SyntaxError</code> and in 3.11 it works (at least in the versions I tried). I searched <a href="https://docs.python.org/3/whatsnew/3.11.html" rel="nofollow noreferrer">Whatβs New In Python 3.11</a> but didn't see anything about it. Maybe it was a side effect of some other change. Why/where/when was it introduced? Looking for (information from) a GitHub issue / pull request or so.</p>
|
<python><syntax>
|
2024-08-23 11:21:25
| 0
| 10,901
|
no comment
|
78,905,620
| 3,599,283
|
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
|
<p>Pls help. I need help setting up spacy inside jupyter enviornment</p>
<p>I am trying to use spacy to summarize youtube transcripts but finding lots of problems with spacy and python 3.12.4/3.12.3.<br />
I started with 3.12.4, then installed python 3.12.3 onto conda environment <code>py3.12.3</code>.</p>
<p>I install spacy 3.7.6 into env with <code>pip install stacy</code></p>
<pre><code>Here are the `conda env list:
base /opt/anaconda3
py3.11 * /opt/anaconda3/envs/py3.11
py3.12.3 /opt/anaconda3/envs/py3.12.3
</code></pre>
<p>I added python kernels via cmd below</p>
<pre><code>jupyter kernelspec install py3.12.3
python -m ipykernel install --user --name=py3.12.3
</code></pre>
<p>On command line, import spacy is ok:</p>
<pre><code>(py3.12.3) UID ~ % python
Python 3.12.3 | packaged by Anaconda, Inc. | (main, May 6 2024, 14:43:12) [Clang 14.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import spacy
>>> quit()
(py3.12.3) UID ~ % pip freeze | grep spacy
spacy==3.7.6
spacy-legacy==3.0.12
spacy-loggers==1.0.5
</code></pre>
<p>Inside jupyter (run from base, restarted after installing py3.12.3), <code>import spacy</code> gives error below. Any help is welcomed???</p>
<pre><code>1. (base) UID ~ % python -m ipykernel install --user --name=py3.12.3
0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
Installed kernelspec py3.12.3 in /Users/UID/Library/Jupyter/kernels/py3.12.3
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 import spacy # module will be used to build NLP model
2 from spacy.lang.en.stop_words import STOP_WORDS # module will be used to build NLP model
3 from string import punctuation
File /opt/anaconda3/lib/python3.12/site-packages/spacy/__init__.py:13
10 # These are imported as part of the API
11 from thinc.api import Config, prefer_gpu, require_cpu, require_gpu # noqa: F401
---> 13 from . import pipeline # noqa: F401
14 from . import util
15 from .about import __version__ # noqa: F401
File /opt/anaconda3/lib/python3.12/site-packages/spacy/pipeline/__init__.py:1
----> 1 from .attributeruler import AttributeRuler
2 from .dep_parser import DependencyParser
3 from .edit_tree_lemmatizer import EditTreeLemmatizer
File /opt/anaconda3/lib/python3.12/site-packages/spacy/pipeline/attributeruler.py:8
6 from .. import util
7 from ..errors import Errors
----> 8 from ..language import Language
9 from ..matcher import Matcher
10 from ..scorer import Scorer
File /opt/anaconda3/lib/python3.12/site-packages/spacy/language.py:43
41 from .lang.tokenizer_exceptions import BASE_EXCEPTIONS, URL_MATCH
42 from .lookups import load_lookups
---> 43 from .pipe_analysis import analyze_pipes, print_pipe_analysis, validate_attrs
44 from .schemas import (
45 ConfigSchema,
46 ConfigSchemaInit,
(...)
49 validate_init_settings,
50 )
51 from .scorer import Scorer
File /opt/anaconda3/lib/python3.12/site-packages/spacy/pipe_analysis.py:6
3 from wasabi import msg
5 from .errors import Errors
----> 6 from .tokens import Doc, Span, Token
7 from .util import dot_to_dict
9 if TYPE_CHECKING:
10 # This lets us add type hints for mypy etc. without causing circular imports
File /opt/anaconda3/lib/python3.12/site-packages/spacy/tokens/__init__.py:1
----> 1 from ._serialize import DocBin
2 from .doc import Doc
3 from .morphanalysis import MorphAnalysis
File /opt/anaconda3/lib/python3.12/site-packages/spacy/tokens/_serialize.py:14
12 from ..errors import Errors
13 from ..util import SimpleFrozenList, ensure_path
---> 14 from ..vocab import Vocab
15 from ._dict_proxies import SpanGroups
16 from .doc import DOCBIN_ALL_ATTRS as ALL_ATTRS
File /opt/anaconda3/lib/python3.12/site-packages/spacy/vocab.pyx:1, in init spacy.vocab()
File /opt/anaconda3/lib/python3.12/site-packages/spacy/tokens/doc.pyx:49, in init spacy.tokens.doc()
File /opt/anaconda3/lib/python3.12/site-packages/spacy/schemas.py:195
191 obj = converted
192 return validate(TokenPatternSchema, {"pattern": obj})
--> 195 class TokenPatternString(BaseModel):
196 REGEX: Optional[Union[StrictStr, "TokenPatternString"]] = Field(None, alias="regex")
197 IN: Optional[List[StrictStr]] = Field(None, alias="in")
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/main.py:286, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
284 cls.__signature__ = ClassAttribute('__signature__', generate_model_signature(cls.__init__, fields, config))
285 if resolve_forward_refs:
--> 286 cls.__try_update_forward_refs__()
288 # preserve `__set_name__` protocol defined in https://peps.python.org/pep-0487
289 # for attributes not in `new_namespace` (e.g. private attributes)
290 for name, obj in namespace.items():
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/main.py:808, in BaseModel.__try_update_forward_refs__(cls, **localns)
802 @classmethod
803 def __try_update_forward_refs__(cls, **localns: Any) -> None:
804 """
805 Same as update_forward_refs but will not raise exception
806 when forward references are not defined.
807 """
--> 808 update_model_forward_refs(cls, cls.__fields__.values(), cls.__config__.json_encoders, localns, (NameError,))
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/typing.py:554, in update_model_forward_refs(model, fields, json_encoders, localns, exc_to_suppress)
552 for f in fields:
553 try:
--> 554 update_field_forward_refs(f, globalns=globalns, localns=localns)
555 except exc_to_suppress:
556 pass
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/typing.py:529, in update_field_forward_refs(field, globalns, localns)
527 if field.sub_fields:
528 for sub_f in field.sub_fields:
--> 529 update_field_forward_refs(sub_f, globalns=globalns, localns=localns)
531 if field.discriminator_key is not None:
532 field.prepare_discriminated_union_sub_fields()
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/typing.py:520, in update_field_forward_refs(field, globalns, localns)
518 if field.type_.__class__ == ForwardRef:
519 prepare = True
--> 520 field.type_ = evaluate_forwardref(field.type_, globalns, localns or None)
521 if field.outer_type_.__class__ == ForwardRef:
522 prepare = True
File /opt/anaconda3/lib/python3.12/site-packages/pydantic/v1/typing.py:66, in evaluate_forwardref(type_, globalns, localns)
63 def evaluate_forwardref(type_: ForwardRef, globalns: Any, localns: Any) -> Any:
64 # Even though it is the right signature for python 3.9, mypy complains with
65 # `error: Too many arguments for "_evaluate" of "ForwardRef"` hence the cast...
---> 66 return cast(Any, type_)._evaluate(globalns, localns, set())
TypeError: ForwardRef._evaluate() missing 1 required keyword-only argument: 'recursive_guard'
</code></pre>
|
<python><python-3.x><nlp><spacy><sentence>
|
2024-08-23 11:14:15
| 1
| 1,267
|
frankr6591
|
78,905,442
| 8,077,619
|
Qml set context property for child item
|
<p>I want to set a context property of a Qml Button from PySide6 but I get an error. Here are the relevant files:</p>
<p>main.py</p>
<pre><code>import sys
from PySide6.QtCore import QObject, Slot
from PySide6.QtGui import QGuiApplication
from PySide6.QtQml import (QQmlApplicationEngine, QmlElement,
QQmlComponent, QQmlContext)
QML_IMPORT_NAME = "EventHandlers"
QML_IMPORT_MAJOR_VERSION = 1
@QmlElement
class EventHandler(QObject):
def __init__(self, parent=None):
super().__init__(parent)
@Slot()
def on_button_click(self):
print('Button clicked')
if __name__ == '__main__':
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
engine.quit.connect(app.quit)
engine.load('view.qml')
event_handler = EventHandler()
root = engine.rootObjects()[0]
button3 = root.findChild(QObject, 'button3')
button3.clicked.connect(event_handler.on_button_click)
engine.rootContext().setContextProperty('python_handler', event_handler)
button5 = root.findChild(QObject, 'button5')
button5_context = QQmlContext(engine, button5)
button5_context.setContextProperty('evt_handler', event_handler)
result = app.exec()
del engine
sys.exit(result)
</code></pre>
<p>view.qml:</p>
<pre><code>import QtQuick
import QtQuick.Controls
import QtQuick.Layouts
import EventHandlers
ApplicationWindow {
visible: true
width: 200
height:400
title: "HelloApp"
EventHandler {
id: eventHandler
}
ColumnLayout {
id: layout
objectName: "layout"
anchors.fill: parent
Button {
id: button1
Layout.fillWidth: true
Layout.fillHeight: true
text: "Qml event handler"
onClicked: console.log("Button clicked")
}
Button {
id: button2
Layout.fillWidth: true
Layout.fillHeight: true
text: "PySide -> Qml event handler"
onClicked: eventHandler.on_button_click()
}
Button {
id: button3
objectName: "button3"
Layout.fillWidth: true
Layout.fillHeight: true
text: "Qml -> PySide event handler"
}
Button {
id: button4
objectName: "button4"
Layout.fillWidth: true
Layout.fillHeight: true
text: "Root context event"
onClicked: python_handler.on_button_click()
}
Button {
id: button5
objectName: "button5"
Layout.fillWidth: true
Layout.fillHeight: true
text: "Child context event"
onClicked: evt_handler.on_button_click()
}
}
}
</code></pre>
<p>The error I get:</p>
<pre><code>view.qml:63: ReferenceError: evt_handler is not defined
</code></pre>
<p>Why this no work? The root context property work fine but button5 (last button) cannot set context property</p>
<p>My question is mostly code and I posted all the code needed to reproduce the error I get.</p>
|
<python><qml><pyside><pyside6>
|
2024-08-23 10:30:07
| 0
| 303
|
Anonimista
|
78,905,436
| 17,721,722
|
How can I safely use multiprocessing in a Django app?
|
<p>I've read the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">docs</a> suggesting that multiprocessing may cause unintended side effects in Django apps or on Windows, especially those connected to multiple databases. Specifically, I'm using a function, <code>load_to_table</code>, to create multiple CSV files from a DataFrame and then load the data into a PostgreSQL table using multiprocessing. This function is deeply integrated within my Django app and is not a standalone script.</p>
<p>I am concerned about potential long-term implications if this code is used in production. Additionally, <code>if __name__ == '__main__':</code> does not seem to work within the deep files/functions of Django. This is because Django's management commands are executed in a different context where <code>__name__</code> is not set to <code>"__main__"</code>, which prevents this block from being executed as expected. Moreover, multiprocessing guidelines recommend using <code>if __name__ == '__main__':</code> to safely initialize multiprocessing tasks, as it ensures that code is not accidentally executed multiple times, especially on platforms like Windows where the module-level code is re-imported in child processes.</p>
<p>Here is the code I am using:</p>
<pre class="lang-py prettyprint-override"><code>import os
import glob
from multiprocessing import Pool, cpu_count
from functools import partial
from portal.db_postgresql.connection import Connection
def copy_to_table(file_name: str, table_name: str, columns: list):
# custom connection class
connection_obj = Connection(get_current_db_name(), 1, 1)
connection = connection_obj.connection()
cursor = connection.cursor()
with open(file_name, "r") as f:
cursor.copy_from(f, table_name, sep=",", columns=columns, null="")
connection.commit()
connection.close()
return file_name
# df_ops is a custom PySpark dataframe class
def load_to_table(df_ops: PySparkOperations, table_name: str) -> dict:
filepath = os.path.join("uploaded_files", table_name)
df_ops.df.repartition(10).write.mode("overwrite").format("csv").option("header", "false").save(filepath)
file_path_list = sorted(glob.glob(f"{filepath}/*.csv"))
with Pool(cpu_count()) as p:
p.map(partial(copy_to_table, table_name=table_name, columns=df_ops.df.columns), file_path_list)
return df_ops.count
</code></pre>
<p>The function above does not work with the VS Code debugger, most likely due to <code>debugpy</code>, which interferes with Django's multiprocessing. However, it works with <code>runserver</code>. When I run the Django app with the VS Code debugger, I encounter the following <a href="https://filebin.net/gytle9lgfy5b4lp8/error.log" rel="nofollow noreferrer">error</a> while executing the function. It seems to be running in loops.</p>
<pre><code>File "/usr/lib/python3.11/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
^^^^^^^^^^^^^^^^^
File "/home/rhythmflow/Desktop/Reconciliation/reconciliation-backend-v3/portal/operations/load_data/methods.py", line 225, in load_to_table
with Pool(cpu_count()) as p:
^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/multiprocessing/context.py", line 281, in _Popen
return Popen(process_obj)
^^^^^^^^^^^^^^^^^^
File "/home/rhythmflow/Desktop/Reconciliation/reconciliation-backend-v3/portal/operations/load_data/load_data.py", line 71, in start
load_to_table(df_ops, self.source_tmp_details)
File "/usr/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rhythmflow/.vscode/extensions/ms-python.debugpy-2024.10.0-linux-x64/bundled/libs/debugpy/_vendored/pydevd/pydevd.py", line 838, in wait_for_ready_to_run
self._py_db_command_thread_event.wait(0.1)
File "/usr/lib/python3.11/threading.py", line 629, in wait
signaled = self._cond.wait(timeout)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/threading.py", line 331, in wait
gotit = waiter.acquire(True, timeout)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rhythmflow/Desktop/Reconciliation/reconciliation-backend-v3/.venv/lib/python3.11/site-packages/django/utils/autoreload.py", line 664, in <lambda>
signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
SystemExit: 0
[22/Aug/2024 15:04:30] "POST /start-process/ HTTP/1.1" 500 59
[22/Aug/2024 15:04:35,063] - Broken pipe from ('127.0.0.1', 51102)
</code></pre>
<p>What could be causing this issue, and how can I address it while using the VS Code debugger?</p>
|
<python><django><debugging><multiprocessing>
|
2024-08-23 10:29:40
| 1
| 501
|
Purushottam Nawale
|
78,905,419
| 6,068,731
|
Python type hinting and being explicit is still throwing a warning
|
<p>PyCharm is throwing a yellow warning in the following code, it seems to think that the output of the function is <code>ndarray[Any, dtype]</code>. Should I ignore this error or is there a Pythonic way of dealing with this?</p>
<p>I have also tried <code>np.array(articles, dtype=str)</code> or <code>np.array(articles, dtype=np.str_)</code> but neither solves the problem.</p>
<pre><code>import numpy as np
def load_data() -> np.ndarray[str]:
articles = []
with open("data/bbc_raw/001.txt", "rt") as f:
articles.append(f.read().strip())
return np.array(articles)
if __name__ == "__main__":
array = load_data()
</code></pre>
<p>Warning:</p>
<pre><code>Expected type 'ndarray[str]', got 'ndarray[Any, dtype]' instead
</code></pre>
|
<python><numpy><pycharm><python-typing>
|
2024-08-23 10:24:11
| 1
| 728
|
Physics_Student
|
78,905,356
| 18,215,498
|
How to properly quantize model with quanto?
|
<p>I am trying to quantize Qwen model, but it seems to be not working.</p>
<p>This is my snippet in Google Colab (runtime-> free T4 GPU):</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForCausalLM
from optimum.quanto import QuantizedModelForCausalLM, qint8
model = AutoModelForCausalLM.from_pretrained('Qwen/Qwen2-0.5B-Instruct')
qmodel = QuantizedModelForCausalLM.quantize(model, weights=qint8)
foot = model.get_memory_footprint()
print(f'{foot / 1_000_000_000} GB')
qfoot = qmodel.get_memory_footprint()
print(f'{qfoot / 1_000_000_000} GB')
</code></pre>
<p>Output:</p>
<pre><code>2.923327304 GB
2.923327304 GB
</code></pre>
<p>Despite same memory footprint, when I called <code>model.named_parameters()</code>, it showed float32 types for both model and qmodel. I am not getting any errors.</p>
|
<python><pytorch><huggingface-transformers><large-language-model>
|
2024-08-23 10:09:05
| 1
| 533
|
mcdominik
|
78,905,311
| 5,758,423
|
How can I get the _true_ signature of a Python function?
|
<h3>tldr;</h3>
<p><code>inspect.signature</code> gives us a function's signature, but this signature can lie (as assigning to the <code>__signature__</code> attribute of a function will reveal.</p>
<p>Is there a python function (builtin if possible) that will give me the <em>true</em> signature?</p>
<h3>a bit more about my context</h3>
<p><code>inspect.Parameter.empty</code> is used as the value of the optional parameter attributes <code>kind</code> and <code>default</code>.
So that if you use it in your function definition, the function will sometimes behave like you omitted it, and sometimes will not.</p>
<p>Consider the two following functions:</p>
<pre class="lang-py prettyprint-override"><code>from inspect import Parameter, signature
empty = Parameter.empty
def chalk(x=empty, y=2):
return True
def cheese(x, y=2):
return True
</code></pre>
<p>The signatures (including signatures shown in help) are the same:</p>
<pre class="lang-py prettyprint-override"><code>assert signature(chalk) == signature(cheese)
assert str(signature(chalk)) == str(signature(cheese)) =='(x, y=2)'
</code></pre>
<p>But from a <em>parameter hints</em> point of view (when, in a fairly rich IDE, you get hints when entering arguments in a function) they are not the same. In VsCode,</p>
<ul>
<li><code>chalk</code> gives me <code>(x: empty = empty, y: int = 2) -> Literal[True]</code>, which gives the impression that <code>x</code> has a default.</li>
<li><code>cheese</code> gives me <code>(x: Any, y: int = 2) -> Literal[True]</code>, which gives the impression that <code>x</code> does not have a default.</li>
</ul>
<p><code>inspect.signature</code> basically gives us the value of the <code>__signature__</code> attribute of the function, but that value is completely independent of the actual behavior of the function.</p>
<p>Where does <code>VsCode</code> source its signature (string)?
Is there a python function (builtin if possible) that will give me that <em>true</em> signature?</p>
|
<python><python-inspect>
|
2024-08-23 09:56:34
| 1
| 2,432
|
thorwhalen
|
78,905,276
| 12,439,683
|
Generate pyi stubfile for not importable C module
|
<p>I have a package <code>A</code> that uses a special package <code>A.B</code>, both are 3rd and 4th party packages written in C using Boost. Because it there is some special setup <strong><code>A.B</code> is not included as a submodule but only as a some variable. Both <code>import A.B</code></strong> and also first <code>import A</code> then <code>import A.B</code> <strong>fail with a <code>ModuleNotFoundError</code></strong>.</p>
<p>Only the following two ways work to access B.</p>
<pre><code># OK
import A
A.B # -> <module: B>
# OK
from A import B
B # -> <module: B>
# ModuleNotFoundError
import A.B
</code></pre>
<p><code>B</code> again is a nested package, e.g. with <code>B.x.y.z, B.a.b, ...</code></p>
<hr />
<p>I would like to create stub files for <code>A.B</code>, however <code>pyright --createstub</code> or mypy's <code>stubgen</code> relies on <code>import A.B</code> and therefore fails.</p>
<p>There exists a useful idea how to write a parser in this <a href="https://stackoverflow.com/q/49409249/12439683">answer</a>, however as I have to deal with properties, overloads, cross-imports extending it feels like reinventing the wheel and many cases need custom handling.</p>
<p>I assume that mypy's <code>stubgen --inspect-mode</code> code could be modified or used with a debugger to provide the module directly instead of trying to import it and using it afterwards.</p>
<p>This is where I am stuck. How can modify <code>stubgen</code> or any other stub generator to get it to work with such a module?</p>
<hr />
<p>EDIT: Further notes:</p>
<p><em>after</em> importing <code>A</code>, <code>B</code> and all submodules are recursively added to <code>sys.modules</code> <em>without any dot-path</em>, i.e. <code>import A; import B</code> works if performed in that order.</p>
|
<python><python-typing><python-c-api><stub><pyi>
|
2024-08-23 09:49:18
| 1
| 5,101
|
Daraan
|
78,905,269
| 1,235,577
|
How to acheive true wildcard functions in python
|
<p>For python classes it is possible to define the <code>__getattr__()</code> method to be able to call any function using that class.</p>
<p>ex:</p>
<pre><code># testclass/__init__.py
def x(*args, **kwargs):
print("hello")
def __getattr__(name):
return x
</code></pre>
<p>can be use like this:</p>
<pre><code># main.py
import testclass
testclass.thismethodshouldntexist() # -> prints "hello"
</code></pre>
<p>Is it possible to modify the global scope of a python script to be able to achieve a similar result without having to go through a custom class? I have tried modifying <code>sys.modules[__name__]</code>, but can't seem to get it to do anything. I'm guessing that <code>sys.modules</code> only represents a reflection of what actually exists below the surface, and that it doesn't return the actual objects the python interpreter uses.</p>
<p>to be clear, what I want to achieve is:</p>
<pre><code># main.py
def x(*args, **kwargs):
pass
# do some magic stuff
y() # runs x instead as y is not specifically defined
</code></pre>
<p><strong>I understand that this probably is a very bad idea to use in any form of production code, I just want to learn more about how python works, and messing around with internal structures is usually a good way to learn</strong></p>
|
<python>
|
2024-08-23 09:47:39
| 1
| 503
|
Pownyan
|
78,904,852
| 961,631
|
pypandoc does not keep images nor formatting when converting
|
<p>I try to convert a .rtf file to .docx using pypandoc</p>
<pre><code>import pypandoc
# Specify the input RTF file and output DOCX file
input_file = 'test.rtf'
output_file = 'test.docx'
# Convert the RTF file to DOCX
pypandoc.convert_file(input_file, 'docx', outputfile=output_file)
print(f"Conversion complete. The DOCX file is saved as {output_file}")
</code></pre>
<p>However, if I have some colors in the original file, or pictures, they are not keept in the resulting docx, I am missing some settings?</p>
<pre><code>Package Version
----------- -------
windows 10
python 3.11
cobble 0.1.4
lxml 5.3.0
mammoth 1.6.0
pip 23.2.1
pypandoc 1.13
python-docx 0.8.11
pywin32 306
setuptools 65.5.0
</code></pre>
|
<python><pandoc><pypandoc>
|
2024-08-23 07:54:13
| 1
| 15,427
|
serge
|
78,904,801
| 2,315,319
|
Polars DataFrame - Decimal Precision doubles on mul with Integer
|
<p>I have a Polars (v1.5.0) dataframe with 4 columns as shown in example below. When I multiply decimal columns with an integer column, the scale of the resultant decimal column doubles.</p>
<pre class="lang-py prettyprint-override"><code>from decimal import Decimal
import polars as pl
df = pl.DataFrame({
"a": [1, 2],
"b": [Decimal('3.45'), Decimal('4.73')],
"c": [Decimal('2.113'), Decimal('4.213')],
"d": [Decimal('1.10'), Decimal('3.01')]
})
</code></pre>
<pre><code>shape: (2, 4)
βββββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β i64 β decimal[*,2] β decimal[*,3] β decimal[*,2] β
βββββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββ‘
β 1 β 3.45 β 2.113 β 1.10 β
β 2 β 4.73 β 4.213 β 3.01 β
βββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
</code></pre>
<pre class="lang-py prettyprint-override"><code>df.with_columns(pl.col("c", "d").mul(pl.col("a")))
</code></pre>
<pre><code>shape: (2, 4)
βββββββ¬βββββββββββββββ¬βββββββββββββββ¬βββββββββββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β i64 β decimal[*,2] β decimal[*,6] β decimal[*,4] β
βββββββͺβββββββββββββββͺβββββββββββββββͺβββββββββββββββ‘
β 1 β 3.45 β 2.113000 β 1.1000 β
β 2 β 4.73 β 8.426000 β 6.0200 β
βββββββ΄βββββββββββββββ΄βββββββββββββββ΄βββββββββββββββ
</code></pre>
<p>I don't know why the scale doubles, when I am just multiplying a decimal with an integer. What do I do so that the scale does not change?</p>
|
<python><dataframe><decimal><python-polars>
|
2024-08-23 07:38:57
| 3
| 313
|
fishfin
|
78,904,771
| 11,764,097
|
Pydantic model_validator that can leverage the type-hints before initialisation
|
<p>I have a use-case where I will use Pydantic classes to send information between different services using a messaging service (PubSub). Since the pydantic classes as serializable, Pydantic's BaseModel becomes very convinient to use to serialize and deserialize data. However, as part of my models I have a lot of Enums, which I want to support to send as well.</p>
<p>Below is a minimalistic example, but I have yet to find a solution that works. I'm hoping someone has more pydantic & typing knowledge than me :) Python 3.11 & Pydantic 2.8 used.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field, model_validator
from typing import Annotated, Any, Type, Optional
from enum import Enum
def transform_str_to_enum(value: str, enum_type: Type[Enum]) -> Enum:
"""Transform a string to an Enum. Raise a KeyError error if the value is not a valid Enum member."""
return enum_type[value.upper()]
class MyEnum(Enum):
A = "option1"
B = "option2"
class MyClass(BaseModel):
str_field: Annotated[str, Field(description="A normal string field")]
int_field: Annotated[int, Field(description="A normal int field")]
# Cannot change type-hinting to MyEnum |Β str, and use @model_validator(mode="after"),
# because then the type hinting will be wrong for later use that depend on this field being an Enum.
enum_field: Annotated[
MyEnum,
Field(description="MyEnum, can be either A or B, and be sent as both string or Enum"),
]
optional_enum_field: Annotated[
Optional[MyEnum],
Field(description="Optional MyEnum, can be either A or B or None, and be sent as both string or Enum")
]
# @model_validator(mode="before")
# def validate_enums(cls, values: dict[str, Any]) -> dict[str, Any]:
# """
# Validate all Enums here. If the value is type-hinted as a subclass of Enum, but the value is a string, we convert it to the Enum.
# """
# # Pesudo code, doesnt work. Implement this in a way that works.
# # for field, value in values.items():
# # if field is type-hinted as a class that is a subclass of Enum:
# # values[field] = transform_str_to_enum(value=value, enum_type=field_class)
# return values
# Test that it works
expected_instance = MyClass(
str_field="test",
int_field=1,
enum_field=MyEnum.A,
optional_enum_field=MyEnum.B
)
def test_that_it_works_with_enum():
created_instance = MyClass(
str_field="test",
int_field=1,
enum_field=MyEnum.A,
optional_enum_field=MyEnum.B
)
assert created_instance == expected_instance
def test_that_it_works_with_str():
created_instance = MyClass(
str_field="test",
int_field=1,
enum_field="A",
optional_enum_field="B"
)
assert created_instance == expected_instance
# Both should pass.
test_that_it_works_with_enum()
test_that_it_works_with_str()
</code></pre>
<p>Any clever ideas?</p>
|
<python><python-3.x><python-typing><pydantic><pydantic-settings>
|
2024-08-23 07:30:12
| 0
| 1,023
|
Marcus
|
78,904,263
| 596,922
|
Better way to get list of keys which has the same value from a output dict
|
<p>I'm trying to get all keys from a python dict which has the same value. as part of this, I have made the below attempt and it works. but checking if there is a neater way to do this.
I have gone through the thread <a href="https://stackoverflow.com/questions/42438808/finding-all-the-keys-with-the-same-value-in-a-python-dictionary">Finding all the keys with the same value in a Python dictionary</a></p>
<pre><code>b = {'a1': ['b1', 'b2', 'b3'],
'a2': ['b1', 'b2', 'b3'],
'a3': ['b4', 'b5', 'b6'],
'a4': ['b4', 'b5', 'b6']
}
c = []
for i in b.values():
if i not in c:
c.append(i)
f = list()
for i in c:
print(i)
e = [k for k, v in b.items() if v == i]
print(e)
f.append((i, e))
print(f)
</code></pre>
<p>This gives the output as:</p>
<p><code>[(['a1', 'a2'], ['b1', 'b2', 'b3']), (['a3', 'a4'], ['b4', 'b5', 'b6'])]</code></p>
|
<python><python-3.x><dictionary>
|
2024-08-23 04:05:34
| 3
| 1,865
|
Vijay
|
78,903,748
| 9,328,846
|
Youtube transcript API not working on server
|
<p>I have a Django web app.
This code works perfectly fine on localhost but stops working when I run it on cloud (DigitalOcean) App Platform.</p>
<pre><code>from youtube_transcript_api import YouTubeTranscriptApi, TranscriptsDisabled, NoTranscriptFound, VideoUnavailable
def transcribe(video_url):
video_id = video_url.split("v=")[-1]
logger.debug("Extracted video ID: %s", video_id)
try:
transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)
transcript = None
for transcript_info in transcript_list:
try:
transcript = transcript_info.fetch()
break
except Exception as e:
logger.warning("Error fetching transcript: %s", e, exc_info=True)
continue
if transcript is None:
logger.error("No transcripts available for this video.")
return "No transcripts available for this video."
except TranscriptsDisabled as e:
logger.error("Transcripts are disabled for this video. %s", e, exc_info=True)
return "Transcripts are disabled for this video."
except NoTranscriptFound:
logger.error("No transcript found for this video.")
return "No transcript found for this video."
except VideoUnavailable:
logger.error("Video is unavailable.")
return "Video is unavailable."
except Exception as e:
logger.error("Error in fetching transcript: %s", e, exc_info=True)
return "Error in fetching transcript."
# Concatenate all text from the transcript into a single string
transcription_text = ' '.join([item['text'] for item in transcript])
logger.debug("Transcription text (first 50 characters): %s", transcription_text[:50])
return transcription_text
</code></pre>
<p>The part that throws an exception is the line <code>transcript_list = YouTubeTranscriptApi.list_transcripts(video_id)</code>.</p>
<p>And it throws a <code>TranscriptsDisabled</code> exception, saying that</p>
<blockquote>
<p>Transcripts are disabled for this video.</p>
</blockquote>
<p>But I do know that the video has transcripts and the code works perfectly fine on localhost as mentioned.</p>
<p>After spending 2 days and trying literally anything I can think of, I still have no solution to this mysterious problem. Anyone who has experienced the same thing and managed to solve it in some way?</p>
|
<python><python-3.x><django><django-views><youtube-api>
|
2024-08-22 22:35:41
| 1
| 2,201
|
edn
|
78,903,730
| 2,893,712
|
Pandas List All Unique Values Based On Groupby
|
<p>I have a dataframe that has worksite info.</p>
<pre><code>District# Site# Address
1 1 123 Bayview Ln
1 2 456 Example St
2 36 789 Hello Dr
2 44 789 Hello Dr
</code></pre>
<p>I am trying to transform this dataframe to add a column with the highest Site# as well as the distinct addresses when I group by District#. Here is an example of what I want the output to look like:</p>
<pre><code>District# Site# Address MaxSite# All District Addresses
1 1 123 Bayview Ln 2 123 Bayview Ln,456 Example St
1 2 456 Example St 2 123 Bayview Ln,456 Example St
2 36 789 Hello Dr 44 789 Hello Dr
2 44 789 Hello Dr 44 789 Hello Dr
</code></pre>
<p>I am able to get the Max Site# by doing</p>
<pre><code>df['MaxSite#'] = df.groupby(by='District#')['Site#'].transform('max')
</code></pre>
<p>But I am trying to find a similar way to list all of the unique addresses when I groupby District#.</p>
<p>I have tried doing <code>.transform('unique')</code> but that is not a valid function name and doing <code>.agg(['unique'])</code> returns dimensions that do not match</p>
|
<python><pandas><dataframe><transform>
|
2024-08-22 22:26:52
| 3
| 8,806
|
Bijan
|
78,903,713
| 960,115
|
Python Replace XML Text with Escape Sequence
|
<p>I have a third-party application that is parsing magic strings within an XML file, even though it should be treating them as character literals. As an example, suppose my XML contained the following segment:</p>
<pre class="lang-xml prettyprint-override"><code><element>Sentence containing magicString</element>
</code></pre>
<p>To prevent the third party application from parsing <code>magicString</code> as a command, I want to convert this xml fragment to:</p>
<pre class="lang-xml prettyprint-override"><code><element>Sentence containing &#109;agicString</element>
</code></pre>
<p>How can I achieve this in Python, without doing a global find-replace (e.g., there may be elements named <code>magicString</code> that cannot be renamed or the XML is invalid)? The following illustrates what I have attempted:</p>
<pre class="lang-py prettyprint-override"><code>from xml.etree import ElementTree
xml = ElementTree.parse(xmlPath)
element = xml.find('.//grandparent/parent/element'):
element.text = '&#109;agicString'
xml.write(xmlPath)
</code></pre>
<p>The problem is that assigning to the <code>Element.text</code> property escapes the text, so the result in an XML file with the following contents:</p>
<pre class="lang-xml prettyprint-override"><code><element>&amp;#109;agicString</element>
</code></pre>
|
<python><python-3.x><xml><xml-parsing>
|
2024-08-22 22:20:20
| 2
| 4,735
|
Jeff G
|
78,903,671
| 893,254
|
Is there any method of uploading a file downloaded using requests to AWS using boto3 without writing to a temporary file?
|
<p>I'm currently working with some Python code which does the following:</p>
<ul>
<li>uses <code>requests</code> to download a file from the web</li>
<li>saves the contents to a temporary file on disk</li>
<li>uses the boto3 library to upload this file to an AWS S3 bucket</li>
<li>in more detail: <code>requests.get</code> collects the data in memory first, then <code>file.write(response.content)</code> writes this data out to disk</li>
</ul>
<p>This is seeminly quite inefficient, since there is no reason to write the data to disk. The temporary file is deleted once the data has been uploaded to S3.</p>
<p>Is there a way to use the <code>boto3</code> library to more efficiently, and cut out this writing to disk step?</p>
<p>The documentation for <code>boto3</code> is a bit limited. It doesn't look like the API supports this, but perhaps (I hope) I'm wrong. (It's a large file...)</p>
|
<python><amazon-web-services><boto3>
|
2024-08-22 22:06:36
| 1
| 18,579
|
user2138149
|
78,903,663
| 7,776,212
|
Can't execute a remote Python script with sleep(.) from SSH
|
<p>I am trying to execute a Python script on a remote server via SSH. The script is supposed to run a certain function every few seconds. A minimal working snippet is as follows:</p>
<pre><code># test.py (this is on the remote server in $HOME)
import sys
import time
def test(num_samples):
# First start with a monitoring period of 50 seconds.
monitoring_period = 5
a = 0
while a < num_samples:
print(a, num_samples)
time.sleep(monitoring_period)
a += 1000
if __name__ == "__main__":
print("Start")
test(int(sys.argv[1]))
</code></pre>
<p>When I run the above script directly on the remote server, it executes successfully:</p>
<pre><code>node0:~$ python test.py 500
Start
0 500
</code></pre>
<p>However, when I try to run the same script via SSH - it just stalls (and note that even the first print statement "Start" is also not executed:</p>
<pre><code>β― ssh -o StrictHostKeyChecking=no <user>@<remote-server> "pushd \$HOME; python test.py 500"
~ ~
// It stalls here
</code></pre>
<p>More interestingly, if I remove the <code>time.sleep(.)</code> statement from the above script, it executes just fine:</p>
<pre><code>β― ssh -o StrictHostKeyChecking=no <user>@<remote-server> "pushd \$HOME; python test.py 500"
~ ~
Start
0 500
</code></pre>
<ol>
<li>What exactly is going on? Does the <code>sleep(.)</code> statement cause this issue?</li>
<li>If so, is the Python interpreter somehow checking for it before even executing the first statement in the script (the <code>print("Start")</code> statement)?</li>
</ol>
|
<python><ssh><remote-server>
|
2024-08-22 22:02:59
| 0
| 779
|
diviquery
|
78,903,558
| 14,720,380
|
Faust consumer/agent doesnt run on the first initialization
|
<p>When I run my docker-compose for the first time and run my faust app for the first time after that, the producer sends messages ok but the consumer doesn't get the messages. If I restart the app, it works fine. My app looks like:</p>
<pre class="lang-py prettyprint-override"><code>import random
import faust
from datetime import timedelta, datetime
from time import time
class StockTransaction(faust.Record):
date: datetime
price: float
stock: str
class Candlestick(faust.Record):
start_aggregation_period_timestamp: datetime
end_aggregation_period_timestamp: datetime
start_price: float
high_price: float
low_price: float
end_price: float
aggregation_count: int
def aggregate_transaction(self, stock_transaction: StockTransaction):
unit_price = stock_transaction.price
if self.aggregation_count == 0:
self.start_aggregation_period_timestamp = stock_transaction.date
self.end_aggregation_period_timestamp = stock_transaction.date
self.start_price = unit_price
self.low_price = unit_price
self.end_price = unit_price
if self.start_aggregation_period_timestamp > stock_transaction.date:
self.start_aggregation_period_timestamp = stock_transaction.date
self.start_price = unit_price
if self.end_aggregation_period_timestamp < stock_transaction.date:
self.end_aggregation_period_timestamp = stock_transaction.date
self.end_price = unit_price
self.high_price = max(self.high_price or unit_price, unit_price)
self.low_price = min(self.low_price or unit_price, unit_price)
self.aggregation_count += 1
TOPIC = 'raw-event'
SINK = 'agg-event'
TABLE = 'tumbling_table'
KAFKA = 'kafka://localhost:9092'
CLEANUP_INTERVAL = 1.0
WINDOW = 10
WINDOW_EXPIRES = 20
PARTITIONS = 1
app = faust.App(TABLE, broker=KAFKA, topic_partitions=1, version=1)
app.conf.table_cleanup_interval = CLEANUP_INTERVAL
source = app.topic(TOPIC, value_type=StockTransaction, key_type=str)
sink = app.topic(SINK, value_type=Candlestick)
def window_processor(stock, candlestick):
print(candlestick)
sink.send_soon(value=candlestick)
candlesticks = app.Table(
TABLE,
default=lambda: Candlestick(
start_aggregation_period_timestamp=None,
end_aggregation_period_timestamp=None,
start_price=0.0,
high_price=0.0,
low_price=0.0,
end_price=0.0,
aggregation_count=0
),
partitions=1,
on_window_close=window_processor
).tumbling(
timedelta(seconds=WINDOW),
expires=timedelta(seconds=WINDOW_EXPIRES)
).relative_to_field(StockTransaction.date)
@app.timer(0.1)
async def produce():
price = random.uniform(100, 200)
await source.send(
key="AAPL",
value=StockTransaction(stock="AAPL", price=price, date=int(time()))
)
@app.agent(source)
async def consume(transactions):
transaction: StockTransaction
async for transaction in transactions:
candlestick_window = candlesticks[transaction.stock]
current_window = candlestick_window.current()
current_window.aggregate_transaction(transaction)
candlesticks[transaction.stock] = current_window
if __name__ == '__main__':
app.main()
</code></pre>
<p>And the docker-compose looks like:</p>
<pre><code>version: '3.6'
services:
zookeeper:
image: wurstmeister/zookeeper:latest
ports:
- '2181:2181'
environment:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
restart: unless-stopped
kafka:
image: wurstmeister/kafka:latest
container_name: kafka
depends_on:
- zookeeper
ports:
- "9092:9092"
- "9101:9101"
environment:
KAFKA_BROKER_ID: 1
KAFKA_ADVERTISED_HOST_NAME: kafka:9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,OUTSIDE:PLAINTEXT
KAFKA_LISTENERS: PLAINTEXT://kafka:29092,OUTSIDE://0.0.0.0:9092
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,OUTSIDE://localhost:9092
KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
KAFKA_CREATE_TOPICS: 'order-books:3:2'
restart: unless-stopped
kafdrop:
image: obsidiandynamics/kafdrop:3.27.0
depends_on:
- kafka
- zookeeper
ports:
- 19000:9000
environment:
KAFKA_BROKERCONNECT: kafka:29092
restart: unless-stopped
</code></pre>
<p>On the first initialization I get:</p>
<pre><code>C:\Users\mclea\anaconda3\envs\faust-candlesticks\python.exe C:\Users\mclea\src\faust-candlesticks\app.py worker -l debug
+ΖaΒ΅Sβ v0.11.2+--------------------------------------------------------------+
| id | tumbling_table |
| transport | [URL('kafka://localhost:9092')] |
| store | memory: |
| web | http://localhost:6066/ |
| log | -stderr- (debug) |
| pid | 11664 |
| hostname | tommclean |
| platform | CPython 3.11.9 (Windows AMD64) |
| + | Cython (MSC v.1916 64 bit (AMD64)) |
| drivers | |
| transport | aiokafka=0.11.0 |
| web | aiohttp=3.10.5 |
| datadir | C:\Users\mclea\src\faust-candlesticks\tumbling_table-data |
| appdir | C:\Users\mclea\src\faust-candlesticks\tumbling_table-data\v1 |
+-------------+--------------------------------------------------------------+
Group Coordinator Request failed: [Error 15] GroupCoordinatorNotAvailableError
Topic raw-event is not available during auto-create initialization
Group Coordinator Request failed: [Error 15] GroupCoordinatorNotAvailableError
Topic raw-event is not available during auto-create initialization
Group Coordinator Request failed: [Error 15] GroupCoordinatorNotAvailableError
Topic raw-event is not available during auto-create initialization
Group Coordinator Request failed: [Error 15] GroupCoordinatorNotAvailableError
Topic raw-event is not available during auto-create initialization
Group Coordinator Request failed: [Error 15] GroupCoordinatorNotAvailableError
Topic raw-event is not available during auto-create initialization
OK ^
</code></pre>
<p>And it sends messages fine to the <code>raw-event</code> topic, but the consumer doesn't read any messages. On the second time running the app, I get:</p>
<pre><code>C:\Users\mclea\anaconda3\envs\faust-candlesticks\python.exe C:\Users\mclea\src\faust-candlesticks\app.py worker -l debug
+ΖaΒ΅Sβ v0.11.2+--------------------------------------------------------------+
| id | tumbling_table |
| transport | [URL('kafka://localhost:9092')] |
| store | memory: |
| web | http://localhost:6066/ |
| log | -stderr- (debug) |
| pid | 9728 |
| hostname | tommclean |
| platform | CPython 3.11.9 (Windows AMD64) |
| + | Cython (MSC v.1916 64 bit (AMD64)) |
| drivers | |
| transport | aiokafka=0.11.0 |
| web | aiohttp=3.10.5 |
| datadir | C:\Users\mclea\src\faust-candlesticks\tumbling_table-data |
| appdir | C:\Users\mclea\src\faust-candlesticks\tumbling_table-data\v1 |
+-------------+--------------------------------------------------------------+
OK ^
<Candlestick: start_aggregation_period_timestamp=1724361217, end_aggregation_period_timestamp=1724361219, start_price=189.76106609658018, high_price=189.76106609658018, low_price=105.2808779884955, end_price=105.2808779884955, aggregation_count=24>
<Candlestick: start_aggregation_period_timestamp=1724361220, end_aggregation_period_timestamp=1724361229, start_price=187.37812934079548, high_price=199.99724165672342, low_price=100.35921078816915, end_price=193.91107473467878, aggregation_count=91>
...
Topic agg-event is not available during auto-create initialization
Topic agg-event is not available during auto-create initialization
</code></pre>
<p>So works fine. How can I fix this so that the app works successfully on the first launch?</p>
|
<python><docker><apache-kafka><faust>
|
2024-08-22 21:19:00
| 0
| 6,623
|
Tom McLean
|
78,903,407
| 4,112,085
|
Share Environment with uv that has private git dependencies
|
<p>I am trying out the <code>uv</code> package in python for the first time. One of the key issues I'm concerned with is being able to share my virtual environment to other colleagues. So, up to this point I can successfully do the following via the CLI:</p>
<pre><code>uv init <project name> --python 3.12
cd <project name>
uv add pandas numpy scikit-learn <etc. etc.>
</code></pre>
<p>Then, I can successfully create a new project (via <code>uv init <new project name></code>) and add in the same dependencies from the original project via <code>uv add ..\<original project name></code>. So far, so good.</p>
<p>The problem I'm running into is when I want packages that I add using uv that come directly from a private Github or Bitbucket repo. In this case, to add the package(s) to my virtual environment, I need to provide a username and token (e.g., <code>uv add git+https://<username>:<token>@bitbucket.org/<repo></code>). But, when I then try to share this virtual environment with a new project (or, in reality, a colleague who would need to use their own credentials) using <code>uv add ..\<original project name></code>, uv is unable to download and build the private repo package. I'm getting the following error:</p>
<pre><code>Updating https://bitbucket.org/<repo> (HEAD)
error: Failed to download and build: `<repo name> @ git+https://bitbucket.org/<repo>`
Caused by: Git operation failed
Caused by: failed to fetch into: C:\Users\<username>\AppData\Local\uv\<other path info>
Caused by: process didn't exit successfully: `git fetch --force --update-head-ok https://bitbucket.org/<repo> +HEAD:refs/remotes/origin/HEAD` (exit code: 128)
--- stderr
fatal: ArgumentException encountered.
Illegal characters in path.
bash: /dev/tty: No such device or address
error: failed to execute prompt script (exit code 1)
fatal: could not read Username for 'https://bitbucket.org': No such file or directory
</code></pre>
<p>So, is this just an issue of having the credentials stored and accessible somehow for when uv attempts to build such a repo, or does this sort of package sharing via uv just not work right now? Thanks in advance.</p>
|
<python><git><virtual-environment><uv>
|
2024-08-22 20:28:04
| 1
| 362
|
rhozzy
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.