| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,157,061
| 123,594
|
python3 venv - how to sync ansible_python_interpreter for playbooks that mix connection:local and target system
|
<p>I'm running Ansible playbooks in a Python venv.</p>
<p>My playbooks often involve a mix of cloud infrastructure (AWS) and system engineering. I have configured them to run cloud-infrastructure tasks with connection: local - this is to minimize the access rights required on the target system.</p>
<p>However, since using a venv I have a conflict regarding the ansible_python_interpreter location:</p>
<ul>
<li>on the target system they tend to be in a "default" location /usr/bin/python3 - I am not 100% sure if this is hard coded in ansible, or stored in a PATH variable</li>
<li>on my local system I assume they are defined by</li>
</ul>
<pre><code>home = /opt/homebrew/opt/python@3.12/bin
include-system-site-packages = false
version = 3.12.5
executable = /opt/homebrew/Cellar/python@3.12/3.12.5/Frameworks/Python.framework/Versions/3.12/bin/python3.12
command = /opt/homebrew/opt/python@3.12/bin/python3.12 -m venv /Users/jd/projects/mgr2/ansible
</code></pre>
<p>Because of this, I cannot run a mixed playbook: either I need to add</p>
<pre><code> vars:
ansible_python_interpreter: /Users/jd/projects/mgr2/ansible/bin/python3
</code></pre>
<p>to my playbook to run local tasks or remove this line to run target system tasks.</p>
<p>I'm looking for a way to have python3 in the PATH variable, depending on which venv I am sourcing.</p>
|
<python><python-3.x><ansible><python-venv>
|
2024-11-04 21:33:03
| 2
| 2,524
|
jdog
|
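One common workaround for the interpreter conflict above is to pin only the local plays to the venv's interpreter. The sketch below relies on `ansible_playbook_python`, Ansible's magic variable for the interpreter running Ansible itself (i.e. the venv one); the play layout and group name are illustrative, not from the question:

```yaml
# Sketch: local plays use the venv interpreter, remote plays keep discovery.
- hosts: localhost
  connection: local
  vars:
    ansible_python_interpreter: "{{ ansible_playbook_python }}"
  tasks:
    - ping:

- hosts: targets   # hypothetical inventory group
  vars:
    ansible_python_interpreter: auto   # let Ansible find /usr/bin/python3
  tasks:
    - ping:
```

This avoids hard-coding `/Users/jd/projects/.../bin/python3`, so the same playbook works from any sourced venv.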
79,156,971
| 1,319,998
|
SciPy sparse matrix csr_matrix and csc_matrix functions - how much intermediate memory do they use?
|
<p>For the SciPy functions csr_matrix and csc_matrix, specifically the forms that take the row and column indices:</p>
<pre class="lang-py prettyprint-override"><code>csr_matrix((data, (row_ind, col_ind)), [shape=(M, N)])
csc_matrix((data, (row_ind, col_ind)), [shape=(M, N)])
</code></pre>
<p>How much intermediate memory do they use? Presumably they have to use some in order to convert to the CSR/CSC representations.</p>
<p>The context is that calling these functions is using a lot of memory in a particular case, to the point it doesn't succeed because it runs out of memory, so am trying to reason about it. In total data + row_ind + col_ind takes 25GB in my particular example, and I have about 35GB memory remaining, but this doesn't seem enough to call either csr_matrix or csc_matrix.</p>
<p>The docs at <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csr_matrix.html</a> and <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csc_matrix.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.csc_matrix.html</a> don't seem to give info on this.</p>
<p>Here is what I hope is a roughly equivalent bit of code that runs out of memory on my 60GB (Linux) system.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.sparse import csc_matrix
num_values = 2500000000
output_matrix_size = 150000
matrix = csc_matrix(
(
np.zeros(num_values, dtype=np.float16),
(
np.zeros(num_values, dtype=np.int32),
np.zeros(num_values, dtype=np.int32),
),
),
shape=(output_matrix_size, output_matrix_size),
)
</code></pre>
|
<python><scipy><sparse-matrix>
|
2024-11-04 20:51:39
| 1
| 27,302
|
Michal Charemza
|
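One way to reason about why 25 GB of inputs plus 35 GB free isn't enough: the `(data, (row_ind, col_ind))` form is treated as COO and then converted, so input arrays and freshly allocated output arrays (`data`, `indices`, `indptr`) coexist during the conversion, and duplicate entries are summed as part of it. A tiny sketch of those conversion semantics (sizes here are illustrative, not the asker's):

```python
import numpy as np
from scipy.sparse import csc_matrix

# COO-style inputs; note the duplicate entry at (2, 1), which the
# conversion to CSC sums into one stored value.
data = np.ones(4, dtype=np.float32)
rows = np.array([0, 2, 2, 1], dtype=np.int32)
cols = np.array([0, 1, 1, 0], dtype=np.int32)

m = csc_matrix((data, (rows, cols)), shape=(3, 2))

# While the conversion runs, roughly input size + output size live in
# memory at once, which is why peak usage exceeds the inputs alone.
print(m.toarray())
```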
79,156,913
| 2,927,719
|
PySpark can't find existing file in Blob storage
|
<p>I want to open Excel files in Azure Databricks that reside in ADLS Gen2 with this code:</p>
<pre><code>#%pip install openpyxl pandas
import pandas as pd
display(dbutils.fs.ls("/mnt/myMnt"))
path = "/mnt/myMnt/20241007112914_Statistik_789760_0000_327086871111430.xlsx"
df_xl = pd.read_excel(path, engine="openpyxl")
</code></pre>
<p>The third line returns a list of my Excel files in the ADLS storage, as expected. So the files exist and are accessible.</p>
<p>However the last line results in this error:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '/mnt/MyMnt/20241007112914_Statistik_789760_0000_327086871111430.xlsx'
</code></pre>
<p>Could it be that pandas has no access to the Blob storage and I have to move the files into DBFS first? If so, how?</p>
|
<python><databricks><azure-databricks><pyspark-pandas>
|
2024-11-04 20:29:16
| 1
| 341
|
Prefect73
|
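The likely mismatch here: `dbutils.fs.ls` speaks the DBFS namespace, while pandas uses plain POSIX I/O and cannot see `/mnt/...` directly. On Databricks the usual trick is to read through the `/dbfs` FUSE view instead. A hedged sketch — the helper below is my own (not a Databricks API), and whether the FUSE mount is available depends on the workspace:

```python
def to_fuse_path(dbfs_path: str) -> str:
    """Rewrite a DBFS path like /mnt/... to the local FUSE view /dbfs/mnt/...
    Hypothetical helper, not a Databricks API."""
    return dbfs_path if dbfs_path.startswith("/dbfs/") else "/dbfs" + dbfs_path

# On a cluster you would then do (illustrative file name):
# df_xl = pd.read_excel(to_fuse_path("/mnt/myMnt/report.xlsx"), engine="openpyxl")
print(to_fuse_path("/mnt/myMnt/report.xlsx"))
```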
79,156,739
| 6,145,729
|
Python Pandas to read_excel but provide NULL where columns expected but not found
|
<p>I'm using Python 3 with pandas.</p>
<p>Is there a way to read_excel but provide NULL where columns expected are not found?</p>
<p>For example, I'm looping through many workbooks, but sadly, not all the sheets have the column 'SEQ ID'. So I'm getting the error below:</p>
<p><strong>ValueError: Usecols do not match columns, columns expected but not found: ['SEQ ID']</strong></p>
<pre><code>df = pd.read_excel(wb_data, index_col=None, na_values=['NA'], sheet_name="Premises Evaluation",usecols=['SEQ ID', 'NAME', 'AGE'])
</code></pre>
<p>So, is there a clever way to replace the column with NULLs if the column does not exist?</p>
|
<python><pandas>
|
2024-11-04 19:26:49
| 1
| 575
|
Lee Murray
|
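One way around that ValueError is to drop `usecols`, read whatever columns exist, and then `reindex` to the wanted set so missing columns come back as all-NaN. A sketch on an in-memory frame standing in for a sheet that lacks 'SEQ ID' (the same `reindex` call works on the result of `read_excel`):

```python
import pandas as pd

wanted = ["SEQ ID", "NAME", "AGE"]

# Stand-in for a sheet that is missing the 'SEQ ID' column
df = pd.DataFrame({"NAME": ["Alice", "Bob"], "AGE": [30, 40]})

# reindex adds any missing columns as all-NaN and enforces column order
df = df.reindex(columns=wanted)
print(df)
```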
79,156,737
| 407,528
|
Specifying a Conditional (Switch) field in Python Construct that depends on a look-ahead value
|
<p>I'm building a parser for a (slightly bizarre) format that defines the tag of a data field <em>after</em> the value record. It is guaranteed that the value field is always 4 bytes, interpreted depending on the tag.</p>
<p>This code works (but is cumbersome to use, and requires manually switching to the correct field of an Union):</p>
<pre><code>SC_Header = Int32ul
SC_Object = Int32ul
SC_String = BitStruct (
"latin1" / Flag,
"buffer" / Flag,
"length" / BitsInteger(30)
)
SC_Pair = Struct (
"value" / Union(0,
"header" / SC_Header,
"object" / SC_Object,
"string" / SC_String,
),
"tag" / Int32ul,
)
</code></pre>
<p>This declares fine, but fails when used:</p>
<pre><code>SC_Header = Int32ul
SC_Object = Int32ul
SC_String = BitStruct (
"latin1" / Flag,
"buffer" / Flag,
"length" / BitsInteger(30)
)
SC_Pair = Struct (
"value" / Switch(lambda ctx: ctx.tag, {
0xFFF10000: SC_Header,
0xFFFF0008: SC_Object,
0xFFFF0004: SC_String,
}),
"tag" / Int32ul,
)
</code></pre>
<p>It does work for a fictitious format in which the value <em>follows</em> the tag. In my case, I need to look ahead for the tag before it's fed into the context.</p>
<p>How can I specify the field to achieve that?</p>
|
<python><parsing><construct>
|
2024-11-04 19:26:17
| 0
| 6,545
|
qdot
|
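Construct aside, the look-ahead logic itself is straightforward with the stdlib `struct` module: read the 4-byte tag that sits after the 4-byte value, then reinterpret the value bytes. This is a plain-Python sketch of the idea, not Construct API; the tag constants come from the question, but my big-endian reading of the `BitStruct` string form is an assumption that may need adjusting:

```python
import struct

TAG_HEADER, TAG_OBJECT, TAG_STRING = 0xFFF10000, 0xFFFF0008, 0xFFFF0004

def parse_pair(buf, offset=0):
    value_bytes = buf[offset:offset + 4]
    # Look ahead: the tag lives 4 bytes past the value
    tag = struct.unpack_from("<I", buf, offset + 4)[0]
    if tag in (TAG_HEADER, TAG_OBJECT):
        value = struct.unpack("<I", value_bytes)[0]
    elif tag == TAG_STRING:
        bits = int.from_bytes(value_bytes, "big")  # assumed MSB-first, like BitStruct
        value = {"latin1": bool(bits >> 31),
                 "buffer": bool((bits >> 30) & 1),
                 "length": bits & 0x3FFF_FFFF}
    else:
        raise ValueError(f"unknown tag {tag:#x}")
    return value, tag

pair = parse_pair(struct.pack("<II", 42, TAG_HEADER))
print(pair)
```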
79,156,682
| 2,168,548
|
Association abort does not close TCP connection
|
<p>I'm using a pydicom store SCP. After sending the files, the PACS does not release the association, so the association is aborted due to a timeout:</p>
<pre><code>E: Network timeout reached
I: Aborting Association
</code></pre>
<p>I then ran the <strong>ss -ant</strong> command and saw that the TCP connection is in the <strong>ESTAB</strong> state. For every association a new TCP connection is created, and it is not closed even though <strong>on_assoc_aborted()</strong> is triggered.</p>
<p>How do I make sure the connection is closed after the association is aborted? I have 70+ connections in the ESTAB state even though no associations are open, and they have been in that state for a few hours.</p>
<p>This is fixed. Find more details here - <a href="https://github.com/pydicom/pynetdicom/issues/979" rel="nofollow noreferrer">https://github.com/pydicom/pynetdicom/issues/979</a></p>
|
<python><pydicom><pynetdicom>
|
2024-11-04 19:03:49
| 0
| 359
|
ShoibAhamed
|
79,156,664
| 4,046,947
|
Optimize memory use of Python function
|
<p><strong>Updated</strong></p>
<p>I have the following function, which is used in a spatial statistical model. I'm able to run the model on a subset of my data (1 US state) using a few GB of RAM, but it runs out of memory when I extend it to the full US.</p>
<p>I've made multiple adjustments from the original function. As much as possible, I use sparse representations and methods designed for sparse matrices. The input matrix <code>A</code> is 207972 x 207972 and has 1314488 non-zero elements, or about 0.003%. Testing the full dataset on a cluster and using 500 GB of RAM (tried up to 750 GB), the code runs to <code>Sigma = spsolve(Q_perturbed, b).astype(np.float32)</code> before running out of memory. I can save memory by using int16, etc. because most elements are ones, but my understanding is that will add overhead because Python will convert to float32 to ensure everything has the same data type when performing some operations.</p>
<p>I only need to run the function once to get a number, then I can use that number as a fixed input to my model. It is getting stuck at spsolve, despite the fact the function uses sparse matrices. My understanding is this function may generate dense intermediate matrices. I tried both csc and csr representations - csr was recommended as more efficient. I'm also now running the process for each column in a loop using 60 GB of RAM to avoid the RAM issue. However, it is very slow - i.e., hasn't finished 10,000 columns (about 5%) after about 20 minutes. I'd also tried running with a much larger RAM allocation of 1 TB with the non-loop method and ran out of memory. The cluster can handle up to 2 TB of RAM, but I think it would take a week to start up. I looked at a few other solutions that change the solver from spsolve().</p>
<pre><code>from scipy.sparse import diags, identity, csc_matrix
from scipy.linalg import solve
from scipy.sparse.linalg import spsolve
def blockwise_solve(Q_perturbed, b):
"""
Solve the system Q_perturbed * x = b using block-wise solving,
where each column of b represents a separate right-hand side.
"""
n = Q_perturbed.shape[0]
solutions = []
# Iterate over columns of b (each column is a separate right-hand side)
for i in range(b.shape[1]):
if i%10**4==0:
logging.info(i)
print(i)
rhs = b[:, i].toarray().flatten() # Convert column to 1D array
solution = spsolve(Q_perturbed, rhs, use_umfpack=True) # Solve for this right-hand side
solutions.append(solution)
# Combine solutions into a matrix
return np.vstack(solutions).T # Return the solutions as a matrix, one solution per column
def scaling_factor_sp(A):
"""Compute the scaling factor from an adjacency matrix.
This function uses sparse matrix computations and is most
efficient on sparse adjacency matrices. Used in the BYM2 model.
The scaling factor is a measure of the variance in the number of
edges across nodes in a connected graph.
Only works for fully connected graphs. The argument for scaling
factors is developed by Andrea Riebler, Sigrunn H. Sørbye,
Daniel Simpson, Håvard Rue in "An intuitive Bayesian spatial
model for disease mapping that accounts for scaling"
https://arxiv.org/abs/1601.01180"""
# Computes the precision matrix in sparse format.
num_neighbors = A.sum(axis=1).A.ravel().astype(np.float32)
D = diags(num_neighbors, format="csc", dtype=np.float32) # Degree matrix
del num_neighbors
Q = D - A # Precision matrix
del D
del A
# add a small jitter along the diagonal
jitter = max(Q.diagonal()) * np.sqrt(np.finfo(np.float32).eps)
Q_perturbed = Q + diags(np.ones(Q.shape[0]) * jitter, dtype=np.float32, format="csc")
del jitter
del Q
# Compute a version of the pseudo-inverse
n = Q_perturbed.shape[0]
# b = identity(n, dtype=np.int8, format="csc")
b = identity(n, format="csc").astype(np.float32)
gc.collect()
Q_perturbed = Q_perturbed.tocsr()
# Solve the system using block-wise solving
Sigma = blockwise_solve(Q_perturbed, b)
del Q_perturbed
del b
gc.collect()
W = Sigma.dot(np.ones(Sigma.shape[0], dtype=np.float32))
Q_inv = Sigma - (W[:, np.newaxis] * W[np.newaxis, :]) / W.sum()
del W
del Sigma
gc.collect()
# Compute the geometric mean of the diagonal on a
# precision matrix.
return np.exp(np.sum(np.log(Q_inv.diagonal())) / Q_inv.shape[0])
scaling_factor = scaling_factor_sp(adj_matrix)
</code></pre>
<p>Outputs using full dataset.</p>
<pre><code> 2024-11-08 10:47:23,524 - INFO - After running Q Memory Usage: 171.34 MB
2024-11-08 10:47:23,552 - INFO - After running Q_perturbed Memory Usage: 179.16 MB
2024-11-08 10:47:23,553 - INFO - After running b Memory Usage: 179.16 MB
2024-11-08 10:47:23,553 - INFO - Sparsity of Q_perturbed: 0.0035%
</code></pre>
|
<python><optimization><memory>
|
2024-11-04 18:57:14
| 0
| 655
|
Jason Hawkins
|
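One change that usually helps in this situation is factoring `Q_perturbed` once with `splu` and reusing the factorization for every right-hand side, instead of calling `spsolve` per column (which refactorizes each time). A small sketch on a toy graph — shapes are illustrative, the real matrix is 207972 x 207972, and the dense `Sigma` would still need its own memory budget:

```python
import numpy as np
from scipy.sparse import csc_matrix, diags, identity
from scipy.sparse.linalg import splu

# Toy path-graph adjacency standing in for the real adjacency matrix
A = csc_matrix(np.array([[0, 1, 0],
                         [1, 0, 1],
                         [0, 1, 0]], dtype=np.float64))
deg = np.asarray(A.sum(axis=1)).ravel()
Q = diags(deg, format="csc") - A
jitter = Q.diagonal().max() * np.sqrt(np.finfo(np.float64).eps)
Qp = (Q + jitter * identity(Q.shape[0], format="csc")).tocsc()

lu = splu(Qp)  # one sparse LU factorization, reused for every column
n = Qp.shape[0]
Sigma = np.column_stack([lu.solve(np.eye(n)[:, i]) for i in range(n)])
```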
79,156,347
| 1,080,014
|
Generate array of positive integers that sum to k
|
<p>My task is simple: I want to generate an (ideally numpy) array containing all combinations of m non-negative (>=0) but bounded (<= e) integers that sum exactly to k. Note that k and m might be relatively large, so generating all combinations and filtering will not work.</p>
<p>I have implemented it in plain, recursive Python, but this small function takes most of my time and I need to replace it with something faster. I have tried to come up with numpy/pytorch code to generate this array, but I haven't managed to do it so far.</p>
<p>I currently use numpy and pytorch in my project, but I am open to other libraries as long as I write python code and I get something I can convert to numpy arrays in the end.</p>
<p>Here's some code:</p>
<pre class="lang-py prettyprint-override"><code>import timeit
def get_summing_up_to(max_degree, sum, length, current=0):
assert sum >= 0
assert length >= 1
if length == 1:
residual = sum - current
if residual <= max_degree:
return [(residual,)]
else:
return []
max_element = min(max_degree, sum - current)
return [
(i,) + t
for i in range(max_element + 1)
for t in get_summing_up_to(
max_degree, sum, length - 1,
current=current + i
)
]
if __name__ == '__main__':
result = timeit.timeit('get_summing_up_to(60, 60, 6)', globals=globals(), number=1)
print(f"Execution time: {result} for max_degree=60, sum=60, length=6")
result = timeit.timeit('get_summing_up_to(30, 30, 8)', globals=globals(), number=1)
print(f"Execution time: {result} for max_degree=30, sum=30, length=8")
</code></pre>
|
<python><numpy><pytorch>
|
2024-11-04 16:54:55
| 4
| 1,396
|
Leander
|
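For reference, the recursion above can be restated to build one NumPy array of bounded compositions directly, rather than a list of tuples, which makes downstream array use cheaper. The function name and interface below are my own sketch, still recursive (a fully vectorized version would need more work):

```python
import numpy as np

def bounded_compositions(k, m, e):
    """All length-m tuples of integers in [0, e] summing to k, one per row."""
    if m == 1:
        return np.array([[k]]) if 0 <= k <= e else np.empty((0, 1), dtype=int)
    parts = []
    for first in range(min(e, k) + 1):
        rest = bounded_compositions(k - first, m - 1, e)
        if rest.size:
            # Prepend the chosen first element to every completion of the rest
            parts.append(np.hstack([np.full((rest.shape[0], 1), first), rest]))
    return np.vstack(parts) if parts else np.empty((0, m), dtype=int)

out = bounded_compositions(4, 3, 4)
```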
79,156,239
| 17,487,457
|
How do I turn off this horizontal line in the FFT plot
|
<p>I am trying to show the trend in observation data and the corresponding <code>fft</code>. A horizontal line keeps appearing in the <code>fft</code> part that I do not need to show in the chart.</p>
<p>I give a MWE (pseudo data) below.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.fft import fft, fftfreq
import matplotlib.pyplot as plt
observations = np.array([ 3.78998207e-01, 3.05199629e-01, 2.29614343e-01, 1.86568613e-01,
1.83449462e-01, 1.77892746e-01, 1.66237352e-01, 1.81950778e-01,
9.88351226e-02, 1.29674430e-01, 7.08703360e-02, 3.64963487e-02,
2.75641060e-02, 6.21573753e-02, 8.51043646e-02, 5.32184940e-02,
6.47005530e-02, -6.41628893e-02, -1.86618020e-01, -4.08624200e-02,
-2.71649960e-02, -8.22041576e-03, 9.13242105e-03, 1.67080717e-01,
-1.37465317e-01, 2.74977101e-04, 4.47602122e-02, 8.27649668e-02,
-5.60661808e-02, -2.26248880e-01, -1.54768403e-01, -4.46428484e-02,
4.57611677e-02, 9.83215698e-02, 9.22357256e-02, -1.23436114e-01,
-2.76981909e-01, -1.98824586e-01, -2.33452893e-01, -2.57550630e-01,
-9.13919527e-02, 2.64029442e-02, -5.44394568e-02, 4.02010984e-01,
3.27256645e-01, 2.14259077e-01, 5.08021357e-01, 5.55141121e-01,
6.11203693e-01, 5.34086779e-01, 2.19652659e-01, 1.71635054e-01,
1.30867565e-01, 1.25133212e-01, 1.02010973e-01, 1.16727950e-02,
2.84545455e-02, -1.73553706e-02, -1.33998184e-01, -1.36456573e-01,
-1.68706794e-01, -1.28378379e-01, -1.43710423e-01, -2.02454545e-01,
-4.30164457e-01, -5.19982175e-01, -3.74452537e-01, -3.64076796e-01,
-3.20950700e-01, -2.34052515e-01, -1.37158482e-01, 2.80797054e-02,
7.04379682e-02, 1.13920696e-01, 1.26391389e-01, 9.31688808e-02,
1.46000000e-01, 1.18380338e-01, 5.18909438e-02, 1.11584791e-01,
6.43582617e-02, -6.36856386e-02, -9.16134931e-02, -1.02616820e-01,
-4.43179890e-01, -1.28223431e+00, -1.86160058e+00, -1.43772912e+00,
-1.21047880e+00, -7.21282278e-01, -1.65349241e-01, 4.58791266e-02,
2.42897190e-01, 3.26587994e-01, 3.15827382e-01, -5.29090909e-02,
8.97887313e-03, 2.61194000e-02, -2.24566234e-01, -9.18572710e-02])
observed_fft = fft(observations)
fs = 100
n = observations.size
fft_fre = fftfreq(n, d=1/fs)
x_time = np.arange(len(observations))
fig, axs = plt.subplots(2)
axs[0].plot(x_time, observations)
axs[1].plot(fft_fre, np.abs(observed_fft))
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/Qsq4IBjn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qsq4IBjn.png" alt="enter image description here" /></a></p>
<p>I know this horizontal line appears because the <code>FFT</code> frequencies in <code>fft_fre</code> include both positive and negative frequencies, so the plot is symmetrical around zero frequency (and I need to show both the <code>+ve</code> and <code>-ve</code> frequencies).
But isn't there a workaround to turn off the line connecting the last negative frequency to the first positive frequency?</p>
|
<python><numpy><matplotlib><fft>
|
2024-11-04 16:24:05
| 1
| 305
|
Amina Umar
|
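The usual fix is to reorder both arrays with `fftshift` so frequencies run monotonically from negative to positive; the stray line then disappears because adjacent plotted points are adjacent frequencies. A sketch using NumPy's equivalents of the `scipy.fft` calls, with a synthetic signal standing in for the observations:

```python
import numpy as np

fs = 100
x = np.sin(2 * np.pi * 5 * np.arange(100) / fs)  # synthetic stand-in signal

X = np.fft.fft(x)
f = np.fft.fftfreq(x.size, d=1 / fs)

# Reorder so frequencies ascend from -fs/2 toward +fs/2; plotting
# (f_plot, A_plot) has no jump from the last -ve bin to the first +ve bin.
f_plot = np.fft.fftshift(f)
A_plot = np.fft.fftshift(np.abs(X))
```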
79,156,198
| 214,296
|
How to overcome Python3 installation Fatal Python error: init_fs_encoding
|
<p>While attempting to troubleshoot an issue on my Ubuntu for Windows installation I uninstalled all the python instances. The issue has been resolved, but now I'm unable to get python3 reinstalled.</p>
<pre class="lang-bash prettyprint-override"><code>$ sudo apt install python3
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
libpython3-stdlib python3-minimal python3.10
Suggested packages:
python3-doc python3-tk python3-venv python3.10-venv python3.10-doc
The following NEW packages will be installed:
libpython3-stdlib python3 python3-minimal python3.10
0 upgraded, 4 newly installed, 0 to remove and 35 not upgraded.
1 not fully installed or removed.
Need to get 0 B/562 kB of archives.
After this operation, 905 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Setting up python3.10-minimal (3.10.12-1~22.04.6) ...
# Empty sitecustomize.py to avoid a dangling symlink
Could not find platform independent libraries <prefix>
Could not find platform dependent libraries <exec_prefix>
Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
Python path configuration:
PYTHONHOME = (not set)
PYTHONPATH = (not set)
program name = '/usr/bin/python3.10'
isolated = 0
environment = 0
user site = 1
import site = 0
sys._base_executable = '/usr/bin/python3.10'
sys.base_prefix = '/usr'
sys.base_exec_prefix = '/usr'
sys.platlibdir = 'lib'
sys.executable = '/usr/bin/python3.10'
sys.prefix = '/usr'
sys.exec_prefix = '/usr'
sys.path = [
'/usr/lib/python310.zip',
'/usr/lib/python3.10',
'/usr/lib/lib-dynload',
]
Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
Current thread 0x00007fcf47778000 (most recent call first):
<no Python frame>
dpkg: error processing package python3.10-minimal (--configure):
installed python3.10-minimal package post-installation script subprocess returned error exit status 1
Errors were encountered while processing:
python3.10-minimal
E: Sub-process /usr/bin/dpkg returned an error code (1)
</code></pre>
<p>I'm not certain, but I think this may be the core error:</p>
<pre class="lang-bash prettyprint-override"><code>Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
Python runtime state: core initialized
ModuleNotFoundError: No module named 'encodings'
</code></pre>
<p>Any ideas?</p>
|
<python><python-3.x><ubuntu><installation><windows-subsystem-for-linux>
|
2024-11-04 16:10:19
| 0
| 14,392
|
Jim Fell
|
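This failure mode usually means the interpreter cannot locate its standard library, commonly because a stale `PYTHONHOME` or `PYTHONPATH` is exported in the environment that the package's post-install script runs under, left over from the uninstalled Pythons. The fix would typically be to `unset PYTHONHOME PYTHONPATH` and re-run `apt`; as a sketch, here is how a working interpreter's view of the same settings can be inspected (output values will differ per machine):

```python
import os
import sys

# 'encodings' is among the first stdlib packages Python imports at startup;
# when the search paths are wrong, init_fs_encoding fails exactly as logged.
import encodings

suspects = {k: os.environ.get(k) for k in ("PYTHONHOME", "PYTHONPATH")}
print("suspect env vars:", suspects)          # ideally both None
print("stdlib found at:", os.path.dirname(encodings.__file__))
print("sys.prefix:", sys.prefix)
```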
79,156,083
| 12,424,131
|
Is there an elegant way to subclass a Python list which keeps its subclass when adding and slicing, etc
|
<p>I want to extend the functionality of a standard list, subclassing seems like the obvious way. Actually, I'm using this to store a mathematical expression in Reverse Polish Notation (RPN) for a genetic programming project.</p>
<pre><code>class RPN(list):
def evaluate(self, vars):
# evaluate the RPN expression
pass
def to_infix(self):
# convert to infix form
pass
@staticmethod
def from_infix(expr):
        # create new RPN object from infix expression string
pass
</code></pre>
<p>This works fine, I can create an RPN object like <code>RPN([5, 6, operator.add, 7, 8, operator.add, operator.mult])</code>, and it works as intended.</p>
<p>My problem is that I want to do list-like operations on these RPN objects, such as append, add together, slice, etc., but when I do this they revert back to plain old lists (apart from append - that's ok).</p>
<p>So, for example:</p>
<pre><code>rpn1 = RPN([5, 6, operator.add])
rpn2 = RPN([7, 8, operator.add])
rpn3 = rpn1 + rpn2
# type(rpn3) == list
rpn4 = rpn1[0:1]
# type(rpn4) == list
</code></pre>
<p>I can understand why, because a new object gets created, and it creates it as a list. I suppose I can change this behaviour, but then I would have to provide new implementations for <code>__add__</code>, slicing methods, etc. And all of a sudden, I'm writing a lot more code than I intended. Alternatively, I can remake everything as an RPN object each time, e.g. <code>rpn3 = RPN(rpn1 + rpn2)</code>, but this also seems clunky.</p>
<p>Is there a better way?</p>
<p>Note, this question was marked as a duplicate of <a href="https://stackoverflow.com/questions/59523936/how-to-use-list-comprehension-in-list-derived-class">How to use list comprehension in list derived class</a>. The tagged post was asking about how to use list comprehensions inside a derived list class (and in an odd way, by reassigning self variable inside <code>__init__</code> function), I'm asking about something completely different. It just so happens, that both these are solved by using collections.UserList, but the questions themselves are not duplicates IMHO.</p>
|
<python><python-3.x><subclass>
|
2024-11-04 15:35:32
| 2
| 466
|
Steven Dickinson
|
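As the question itself notes, `collections.UserList` resolves this: its `__add__`, slicing, `copy`, etc. are implemented in terms of `self.__class__`, so subclasses survive those operations with no extra boilerplate. A minimal sketch (the toy `evaluate` supports only binary operators and is my own illustration, not the asker's implementation):

```python
import operator
from collections import UserList

class RPN(UserList):
    def evaluate(self):
        # Toy stack evaluator: assumes every callable token is a binary operator
        stack = []
        for tok in self.data:
            if callable(tok):
                b, a = stack.pop(), stack.pop()
                stack.append(tok(a, b))
            else:
                stack.append(tok)
        return stack.pop()

rpn1 = RPN([5, 6, operator.add])
rpn2 = RPN([7, 8, operator.add])
rpn3 = rpn1 + rpn2      # still an RPN, not a plain list
rpn4 = rpn1[0:2]        # slicing also preserves the subclass
```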
79,155,921
| 1,883,304
|
Wagtail CMS - Programmatically enabling user access to manage Pages within the Admin panel
|
<h2>Context</h2>
<p>Wagtail CMS has a <a href="https://docs.wagtail.org/en/stable/topics/permissions.html" rel="nofollow noreferrer">permission system</a> that builds on Django's. However, customizing it for users that are <em>neither</em> admins nor members of the pre-made groups <code>Moderator</code> or <code>Editor</code> is unclear. Presently, I have:</p>
<ul>
<li>A custom user class, <code>StudentUser</code></li>
<li>Pages arranged in the below hierarchy:</li>
</ul>
<pre><code> Program
|
Course
/ | \
Report Labs Events
</code></pre>
<p>I'd like to add a group, <code>Student</code>, which can add/submit pages of type <code>Report</code>. As stated in the title, I require a <strong>programmatic</strong> solution. Having an admin go through and personally assign permissions is not acceptable.</p>
<hr />
<h2>Problem</h2>
<p>Wagtail provides only one programmatic code example, which is for adding a <em>custom</em> permission <a href="https://docs.wagtail.org/en/stable/topics/permissions.html" rel="nofollow noreferrer">here in their documentation</a>:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from wagtail.admin.models import Admin
content_type = ContentType.objects.get_for_model(Admin)
permission = Permission.objects.create(
content_type=content_type,
codename="can_do_something",
name="Can do something",
)
</code></pre>
<p>Technically, I don't need a <em>custom</em> permission. I need to grant a <code>Group</code> a custom set of existing permissions. To accomplish this, I've attempted the following:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import Permission, Group
from example.website.models import StudentUser
group, created = Group.objects.get_or_create(name="Student")
add_report = Permission.objects.get(codename="add_reportpage")
change_report = Permission.objects.get(codename="change_reportpage")
group.permissions.add(add_report, change_report)
user = StudentUser.objects.get(email="john.doe@abc.com")
user.groups.add(group)
</code></pre>
<p>Unfortunately, this hasn't worked. When I log into the backend with a user of type <code>StudentUser</code> (the account to which I provided the permission), there's nothing but the <code>Documents</code> tab (that is default) in the navigation menu. Nowhere can I see a place to modify <code>Report</code>.</p>
<p>If I try to copy the exact <code>Report</code> URL path used in the admin login, it doesn't allow access when logged in the <code>StudentUser</code> despite the added permissions.</p>
<hr />
<h2>Further debugging</h2>
<p>To figure out if there are some other types of permissions I'm missing, I listed the permissions for all groups. Then, I copied them for my new group. You can see the list of permissions now below that I copied from the built-in <code>Moderators</code> group:</p>
<pre><code>Moderators
<Permission: Wagtail admin | admin | Can access Wagtail admin>,
<Permission: Wagtail documents | document | Can add document>,
<Permission: Wagtail documents | document | Can change document>,
<Permission: Wagtail documents | document | Can choose document>,
<Permission: Wagtail documents | document | Can delete document>,
<Permission: Wagtail images | image | Can add image>,
<Permission: Wagtail images | image | Can change image>,
<Permission: Wagtail images | image | Can choose image>,
<Permission: Wagtail images | image | Can delete image>
Student
<Permission: Wagtail admin | admin | Can access Wagtail admin>,
<Permission: Website | report index | Can view report index>,
<Permission: Website | report | Can add report>,
<Permission: Website | report | Can change report>,
<Permission: Website | report | Can delete report>,
<Permission: Website | report | Can view report>
</code></pre>
<p>As you can see, it would appear admin access is the only prerequisite for seeing content in the Admin interface. Despite this, logging in as <code>Student</code> still shows no changes in the Wagtail admin panel. And attempts to access any report in the admin interface (as you would as admin) continue to get a permission-denied return code.</p>
<p>What step am I missing here to ensure custom groups can access content in the admin interface?</p>
|
<python><wagtail><wagtail-admin>
|
2024-11-04 14:53:52
| 2
| 3,728
|
Micrified
|
79,155,737
| 16,611,809
|
Join differently nested lists in polars columns
|
<p>As you might have recognized from my other questions I am transitioning from pandas to polars right now. I have a polars df with differently nested lists like this:</p>
<pre><code>ββββββββββββββββββββββββββββββββββββββ¬βββββββββββββββββββββββββββββββββββββ¬ββββββββββββββββββ¬βββββββ
β col1 β col2 β col3 β col4 β
β --- β --- β --- β --- β
β list[list[str]] β list[list[str]] β list[str] β str β
ββββββββββββββββββββββββββββββββββββββͺβββββββββββββββββββββββββββββββββββββͺββββββββββββββββββͺβββββββ‘
β [["a", "a"], ["b", "b"], ["c", "c"]β [["a", "a"], ["b", "b"], ["c", "c"]β ["A", "B", "C"] β 1 β
β [["a", "a"]] β [["a", "a"]] β ["A"] β 2 β
β [["b", "b"], ["c", "c"]] β [["b", "b"], ["c", "c"]] β ["B", "C"] β 3 β
ββββββββββββββββββββββββββββββββββββββ΄βββββββββββββββββββββββββββββββββββββ΄ββββββββββββββββββ΄βββββββ
</code></pre>
<p>Now I want to join the lists inside out using different separators to reach this:</p>
<pre><code>βββββββββββββββ¬ββββββββββββββ¬ββββββββ¬βββββββ
β col1 β col2 β col3 β col4 β
β --- β --- β --- β --- β
β str β str β str β str β
βββββββββββββββͺββββββββββββββͺββββββββͺβββββββ‘
β a+a-b+b-c+c β a+a-b+b-c+c β A-B-C β 1 β
β a+a β a+a β A β 2 β
β b+b-c+c β b+b-c+c β B-C β 3 β
βββββββββββββββ΄ββββββββββββββ΄ββββββββ΄βββββββ
</code></pre>
<p>I do this by using <code>map_elements</code> and a for loop, but I guess that is highly inefficient. Is there a polars native way to manage this?</p>
<p>Here is my code:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"col1": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]],
"col2": [[["a", "a"], ["b", "b"], ["c", "c"]], [["a", "a"]], [["b", "b"], ["c", "c"]]],
"col3": [["A", "B", "C"], ["A"], ["B", "C"]],
"col4": ["1", "2", "3"]})
nested_list_cols = ["col1", "col2"]
list_cols = ["col3"]
for col in nested_list_cols:
df = df.with_columns(pl.lit(df[col].map_elements(lambda listed: ['+'.join(element) for element in listed], return_dtype=pl.List(pl.String))).alias(col)) # is the return_dtype always pl.List(pl.String)?
for col in list_cols + nested_list_cols:
df = df.with_columns(pl.lit(df[col].list.join(separator='-')).alias(col))
</code></pre>
|
<python><python-polars>
|
2024-11-04 14:01:34
| 2
| 627
|
gernophil
|
79,155,604
| 1,564,070
|
Can't set value of input element using Selenium
|
<p>I'm trying to update the value of an input element using python and selenium webdriver. This is on Windows 10 using Edge. The page loads normally, and I can interact with the element using keyboard & mouse, but when I try to set the value using code I get the "element not interactable" error. Code:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.common.by import By
edge_service = webdriver.EdgeService(executable_path=edge_driver_path)
edge_driver = webdriver.Edge(service=edge_service)
edge_driver.get(url)
time.sleep(15)
elements, macroponent_bs = edge_driver.find_elements(by=By.XPATH, value="//*"), None
for element in elements:
if element.tag_name.startswith("macroponent"):
macroponent_bs = element.tag_name
break
if macroponent_bs:
script = "return document.querySelector('" + macroponent_bs + "')" + \
".shadowRoot.querySelector('iframe')"
iframe = edge_driver.execute_script(script=script)
edge_driver.switch_to.frame(iframe)
element = edge_driver.find_element(by=By.ID, value=element_id)
print(f"Element value is >{element.get_attribute('value')}<")
print(f"Element is_enabled, is_displayed = {element.is_enabled()}, {element.is_displayed()}")
element.send_keys("Hello")
</code></pre>
<p>Output:</p>
<pre><code>DevTools listening on ws://127.0.0.1:65277/devtools/browser/5c5b2c22-eade-45c7-a8ee-2b234dae3f76
[38728:37308:1104/075027.679:ERROR:fallback_task_provider.cc(127)] Every renderer should have at least one task provided by a primary task provider. If a "Renderer" fallback task is shown, it is a bug. If you have repro steps, please file a new bug and tag it as a dependency of crbug.com/739782.
[38728:37308:1104/075032.464:ERROR:fallback_task_provider.cc(127)] Every renderer should have at least one task provided by a primary task provider. If a "Renderer" fallback task is shown, it is a bug. If you have repro steps, please file a new bug and tag it as a dependency of crbug.com/739782.
Element value is ><
Element is_enabled, is_displayed = True, False
</code></pre>
<p>Error from send_keys:</p>
<pre><code>Exception has occurred: ElementNotInteractableException (note: full exception trace is shown but execution is paused at: on_btn_debug)
Message: element not interactable (Session info: MicrosoftEdge=130.0.2849.68)
</code></pre>
|
<python><selenium-webdriver>
|
2024-11-04 13:19:53
| 1
| 401
|
WV_Mapper
|
79,155,290
| 1,221,812
|
Dutch sentiment analysis RobBERTje outputs just positive/negative labels, neutral label is missing
|
<p>When I run Dutch sentiment analysis RobBERTje, it outputs just positive/negative labels; the neutral label is missing from the output.</p>
<p><a href="https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment" rel="nofollow noreferrer">https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment</a></p>
<p>There are obvious neutral sentences/words e.g. 'Fhdf' (nonsense) and 'Als gisteren inclusief blauw' (neutral), but they both evaluate to positive or negative.</p>
<p><strong>Is there a way to get neutral labels for such examples in RobBERTje?</strong></p>
<pre><code>from transformers import RobertaTokenizer, RobertaForSequenceClassification
from transformers import pipeline
import torch
model_name = "DTAI-KULeuven/robbert-v2-dutch-sentiment"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)
classifier = pipeline('sentiment-analysis', model=model, tokenizer = tokenizer)
result1 = classifier('Fhdf')
result2 = classifier('Als gisteren inclusief blauw')
print(result1)
print(result2)
</code></pre>
<p>Output:</p>
<pre><code>[{'label': 'Positive', 'score': 0.7520257234573364}]
[{'label': 'Negative', 'score': 0.7538396120071411}]
</code></pre>
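<p>As far as I can tell, this model was fine-tuned with only two labels, so the pipeline can never emit Neutral by itself. A common workaround (my own sketch, with an arbitrary threshold to tune on labelled data) is to relabel low-confidence predictions as neutral:</p>

```python
def with_neutral(results, threshold=0.8):
    """Relabel low-confidence pipeline outputs as Neutral.

    `results` is the pipeline output, e.g. [{'label': 'Positive', 'score': 0.75}].
    The 0.8 threshold is an arbitrary choice; tune it on your own data.
    """
    return [
        {"label": "Neutral", "score": r["score"]} if r["score"] < threshold else dict(r)
        for r in results
    ]

# The 0.752 'Positive' score from the question would become Neutral:
print(with_neutral([{"label": "Positive", "score": 0.7520257234573364}]))
```

<p>This does not make the model truly three-class; for that the model would need re-training on data with a neutral class.</p>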
|
<python><nlp><bert-language-model><roberta-language-model>
|
2024-11-04 11:36:35
| 1
| 473
|
pjercic
|
79,155,283
| 1,609,514
|
Unusual Memory Error when plotting line on secondary y axis using Pandas/Matplotlib
|
<p>I'm getting the following memory error when adding a line to the secondary axis:</p>
<pre class="lang-none prettyprint-override"><code>MemoryError: Unable to allocate 48.2 GiB for an array with shape (1726364447,) and data type [('val', '<i8'), ('maj', '?'), ('min', '?'), ('fmt', 'S20')]
</code></pre>
<p>The error is generated when running the following in Jupyter notebook</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
np.random.seed(0)
t = pd.Timestamp('2024-09-15 07:02:04')
times = []
for i in range(101):
times.append(t)
t = t + pd.Timedelta(f'{np.random.randint(7, 10)}s')
data1 = pd.Series(
np.random.normal(size=101),
index=pd.DatetimeIndex(times),
name='Data 1'
)
data2 = pd.Series(
[5.67, 5.85, 5.78],
index=pd.DatetimeIndex(["2024-09-15 07:03:39", "2024-09-15 07:08:43", "2024-09-15 07:13:47"])
)
fig, ax = plt.subplots(figsize=(7, 2.5))
data1.plot(ax=ax, style='.-')
data2.plot(ax=ax, style='.-', secondary_y=True)
ax.grid()
plt.show()
</code></pre>
<p>However, the error does not occur with just a minor change to the data. For example if I change the time of the last point of the second data set from <code>"2024-09-15 07:13:47"</code> to <code>"2024-09-15 07:12:47"</code>, the plot renders no problem:</p>
<p><a href="https://i.sstatic.net/ALQu118J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ALQu118J.png" alt="enter image description here" /></a></p>
<p>Note that both datasets have an unevenly spaced datetime index.</p>
<p>Not sure if this is related to this error:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/36807469/memory-error-while-plotting-dataframe-matplotlib">Memory error while plotting dataframe (matplotlib)</a></li>
</ul>
<p>Can others reproduce this error?</p>
<p>Versions:</p>
<ul>
<li>Python 3.10.12</li>
<li>pandas 2.1.2</li>
<li>matplotlib 3.8.2</li>
<li>numpy 1.26.1</li>
</ul>
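<p>A possible workaround (my assumption, not a root-cause fix): bypass the pandas plotting wrapper and draw both series with matplotlib's own <code>twinx()</code>, so both axes use matplotlib's date units consistently instead of mixing pandas' time-unit conversion with <code>secondary_y</code>:</p>

```python
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

np.random.seed(0)
t = pd.Timestamp("2024-09-15 07:02:04")
times = []
for _ in range(101):
    times.append(t)
    t += pd.Timedelta(f"{np.random.randint(7, 10)}s")

data1 = pd.Series(np.random.normal(size=101), index=pd.DatetimeIndex(times))
data2 = pd.Series(
    [5.67, 5.85, 5.78],
    index=pd.DatetimeIndex(
        ["2024-09-15 07:03:39", "2024-09-15 07:08:43", "2024-09-15 07:13:47"]
    ),
)

# Plot directly via matplotlib instead of Series.plot(..., secondary_y=True)
fig, ax = plt.subplots(figsize=(7, 2.5))
ax.plot(data1.index, data1.values, ".-")
ax2 = ax.twinx()
ax2.plot(data2.index, data2.values, ".-", color="tab:orange")
ax.grid()
fig.savefig("out.png")  # renders without the huge allocation for me
```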
|
<python><pandas><matplotlib><plot>
|
2024-11-04 11:34:45
| 1
| 11,755
|
Bill
|
79,154,874
| 5,816,253
|
combine different shapefile feature classes based on their names and geometry
|
<p>I have a shapefile containing various feature classes, which I'm able to read.</p>
<p>I improved my code thanks to the suggestions of @Pieter</p>
<pre><code>import geopandas as gpd
from shapely import box
from shapely.geometry.polygon import Polygon
from shapely.geometry.multipolygon import MultiPolygon
shapefile = "LAFIS.shp"
vec_data = gpd.read_file(shapefile)
vec_data.head()
vec_data['DESCR_ENG'].unique()
geometry_arr = (vec_data['geometry'])
descr_name_arr = (vec_data['DESCR_ENG'])
d = {
"BW_name": descr_name_arr,
"geometry": geometry_arr,
}
gdf = gpd.GeoDataFrame(d, crs="EPSG:25832")
gdf
test_vector_data = gpd.GeoDataFrame(
data={"DESCR_ENG": [
"Cultivated Grassland,Alpe (without tare)",
"Cultivated Grassland,Biennial cut meadow (tare 20%)/S28-1 Dry meadows and low bog meadows",
"Cultivated Grassland,Biennial cut meadow (tare 20%)/S28-2 Species-rich mountain meadows",
"Cultivated Grassland,Biennial cut meadow/S28-2 Species-rich mountain meadows",
"Cultivated Grassland,Lawn special area (tare 20%)",
"Cultivated Grassland,Lawn special area (tare 50%)",
"Cultivated Grassland,Meadow (Half Sheared Tara 20%)",
"Cultivated Grassland,Meadow (half-sheared)",
"Cultivated Grassland,Meadow (half-sheared)/S28-1 poor meadows and fen meadows",
"Cultivated Grassland,Mixed Alternate Meadow",
"Cultivated Grassland,Pasture",
"Cultivated Grassland,Pasture (rock 20%)",
"Cultivated Grassland,Potential pasture (tare 20%)",
"Fallow,Arable land fallow - EFA",
"Forest Trees / SRF,Alpe (stocked 20%)",
"Forest Trees / SRF,Alpe (stocked 50%)",
"Forest Trees / SRF,Alpeggio (without tares)/S28-6 Wooded pastures",
"Forest Trees / SRF,Biennial cut meadow (tare 20%)/S28-4 Meadows rich in tree species",
"Forest Trees / SRF,Biennial cut meadow/S28-4 Species-rich meadows with trees",
"Forest Trees / SRF,Biennial cut meadow/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Meadow special area (tare 20%)/S28-4 Meadows rich in wooded species",
"Forest Trees / SRF,Meadow special area (tare 20%)/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Meadow special area (tare 50%)/S28-4 Meadows rich in wooded species",
"Forest Trees / SRF,Meadow special area (tare 50%)/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Meadow special area/S28-4 Species-rich meadows with trees",
"Forest Trees / SRF,Meadow special area/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Pasture (rock 20%)/S28-6 Wooded pastures",
"Forest Trees / SRF,Pasture (rock 50%)/S28-6 Wooded pastures",
"Forest Trees / SRF,Pasture (tare 20%)/S28-6 Wooded pastures",
"Forest Trees / SRF,Pasture (tare 50%)/S28-6 Wooded pastures",
"Forest Trees / SRF,Pasture (trees 20%)/S28-6 Wooded pastures",
"Forest Trees / SRF,Pasture/S28-6 Wooded pastures",
"Forest Trees / SRF,Stable meadow (tare 20%)/S28-4 Meadows rich in wooded species",
"Forest Trees / SRF,Stable meadow (tare 20%)/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Stable meadow/S28-4 Species-rich meadows with trees",
"Forest Trees / SRF,Stable meadow/S28-5 Lush meadows with trees",
"Forest Trees / SRF,Willow (Tara 50%)",
"Forest Trees / SRF,Willow (Tare 20%)",
"Legumes,Alfalfa",
"Legumes,Clover",
"Maize,Corn",
"Miscellaneous,Industrial Medicinal Plants",
"Miscellaneous,Plant cultivation",
"No Agriculture,Bosco/S28-8 Peat bogs and alders",
"No Agriculture,Forest",
"No Agriculture,Greenhouses",
"No Agriculture,Hedges",
"No Agriculture,Hedges/S28-9 Hedges",
"No Agriculture,Infrastructures",
"No Agriculture,Other Areas",
"No Agriculture,Other crops/S28-3 Reedbeds",
"No Agriculture,Other crops/S28-8 Peat and alder bogs",
"No Agriculture,Water",
"Orchards and Berries,Apple",
"Orchards and Berries,Apricot",
"Orchards and Berries,Astoni plants fruit",
"Orchards and Berries,Berry fruit (without strawberry)",
"Orchards and Berries,Biennial cut meadow (tare 20%)/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Castagneto/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Cherry",
"Orchards and Berries,Chestnut",
"Orchards and Berries,Currants",
"Orchards and Berries,Meadow special area (tare 20%)/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Meadow special area/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Olive",
"Orchards and Berries,Orchard being planted",
"Orchards and Berries,Other fruit",
"Orchards and Berries,Pear",
"Orchards and Berries,Plums",
"Orchards and Berries,Stable meadow (tare 20%)/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Stable meadow/S28-7 Chestnut groves and meadows with sparse fruit trees",
"Orchards and Berries,Strawberry",
"Orchards and Berries,Table grapes",
"Orchards and Berries,Vineyard under planting",
"Orchards and Berries,Viticulture",
"Other Cereals,Grain",
"Permanent Grassland,Alpe (tare 70%)",
"Permanent Grassland,Meadow (Permanent Meadow Tara 20%)",
"Permanent Grassland,Meadow (permanent meadow)",
"Permanent Grassland,Meadow (permanent meadow)/S28-1 poor meadows and fen meadows",
"Permanent Grassland,Meadow (permanent meadow)/S28-2 species-rich mountain meadows",
"Permanent Grassland,Meadow special area",
"Permanent Grassland,Meadow special area (tare 20%)/S28-1 Dry meadows and low bog meadows",
"Permanent Grassland,Meadow special area (tare 20%)/S28-2 Species-rich mountain meadows",
"Permanent Grassland,Meadow special area (tare 50%)/S28-1 Dry meadows and low bog meadows",
"Permanent Grassland,Meadow special area (tare 50%)/S28-2 Species-rich mountain meadows",
"Permanent Grassland,Meadow special area/S28-1 poor meadows and fen meadows",
"Permanent Grassland,Meadow special area/S28-2 species-rich mountain meadows",
"Permanent Grassland,Potential pasture (50% tare)",
"Permanent Grassland,Stable meadow (tare 20%)/S28-1 Dry meadows and meadows with low bog",
"Permanent Grassland,Stable meadow (tare 20%)/S28-2 Species-rich mountain meadows",
"Vegetables,Asparagus",
"Vegetables,Cabbage",
"Vegetables,Cauliflower",
"Vegetables,Field vegetable cultivation",
"Vegetables,Radish",
"Vegetables,Salads",
]},
geometry = [box(0, 0, 5, 5), box(5, 0, 10, 5), box(10, 0, 15, 5), box(15, 0, 20, 5)],
crs=25832,
)
# Add new "Classes" column based on DESCR_ENG column
test_vector_data["Classes"] = None
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Cultivated"), "Classes"] = "Cultivated Grassland"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Fallow"), "Classes"] = "Fallow"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Forest"), "Classes"] = "Forest Trees"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Legumes"), "Classes"] = "Legumes"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Maize"), "Classes"] = "Maize"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Miscellaneous"), "Classes"] = "Miscellaneous"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("No Agriculture"), "Classes"] = "No Agriculture"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Orchards and Berries"), "Classes"] = "Orchards and Berries"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Other Cereals"), "Classes"] = "Other Cereals"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Permanent Grassland"), "Classes"] = "Permanent Grassland"
test_vector_data.loc[test_vector_data["DESCR_ENG"].str.startswith("Vegetables"), "Classes"] = "Vegetables"
# Print result
print(test_vector_data.head())
test_vector_data["geometry"] = [MultiPolygon([feature]) if isinstance(feature, Polygon) \
else feature for feature in test_vector_data["geometry"]]
</code></pre>
<p>Each feature has its polygon in the geometry column.</p>
<p>I would like to collapse all the feature classes into one class per prefix, i.e. the part of the name before the comma: for example, everything starting with "Cultivated Grassland" into <em>Cultivated Grassland</em>, everything starting with "Forest Trees / SRF" into <em>Forest Trees / SRF</em>.
The geometries should follow the same grouping, so that each grouped class ends up with one unique polygon.</p>
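<p>A sketch of the grouping step in plain Python (my own helper, not from the original code): the class is simply the text before the first comma.</p>

```python
def class_of(descr):
    """Collapse a DESCR_ENG value to its class: the text before the first comma.

    e.g. 'Cultivated Grassland,Pasture' -> 'Cultivated Grassland'
    """
    return descr.split(",", 1)[0]

print(class_of("Forest Trees / SRF,Willow (Tara 50%)"))  # Forest Trees / SRF
```

<p>With that column in place, geopandas should be able to merge the geometries per class via <code>vec_data["Classes"] = vec_data["DESCR_ENG"].map(class_of)</code> followed by <code>merged = vec_data.dissolve(by="Classes")</code> — <code>dissolve</code> unions all polygons sharing a key into one (Multi)Polygon per group.</p>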
<p>Here you can find the file I'm using <a href="https://scientificnet-my.sharepoint.com/:f:/g/personal/bventura_eurac_edu/EviesX1bVblAhYvBurTwXIsBvBUevs4nsq_UAtlwXHMkLA?e=HcYDVf" rel="nofollow noreferrer">shapefile</a></p>
<p>Any suggestions?</p>
|
<python><geopandas><shapefile>
|
2024-11-04 09:28:43
| 1
| 375
|
sylar_80
|
79,154,737
| 11,696,358
|
insert new line into duckdb table with a map column from python dict
|
<p>I have a duckdb table with a MAP(INT, MAP(TEXT, TEXT)) column.</p>
<p>I have a python dict with the same structure.</p>
<p>I want to insert a new line into the duckdb table putting the python dict as value of the map column.</p>
<p>How can it be done?</p>
<p>I expect something like</p>
<pre><code>my_dict = {0: {'test_key': 'test_val'}}
with duckdb.connect(configs.db_path) as con:
query = "INSERT INTO table_name (str_col, map_col) VALUES (?, ?)"
con.execute(query, ['test_str', my_dict])
</code></pre>
<p>but what I've tried didn't work.</p>
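<p>One workaround I'd try (a sketch, not verified on every DuckDB version): build a <code>MAP {...}</code> literal string from the dict — <code>MAP {key: value}</code> is DuckDB's documented map literal syntax — and splice it into the statement while still binding the plain columns:</p>

```python
def to_map_literal(d):
    """Serialize a (possibly nested) dict into a DuckDB MAP literal string.

    Assumes keys/values are ints, strings or nested dicts; strings are
    single-quoted with embedded quotes doubled, as in SQL.
    """
    def lit(v):
        if isinstance(v, dict):
            return to_map_literal(v)
        if isinstance(v, str):
            return "'" + v.replace("'", "''") + "'"
        return str(v)

    entries = ", ".join(f"{lit(k)}: {lit(v)}" for k, v in d.items())
    return "MAP {" + entries + "}"

my_dict = {0: {'test_key': 'test_val'}}
query = f"INSERT INTO table_name (str_col, map_col) VALUES (?, {to_map_literal(my_dict)})"
print(query)
```

<p>so the execute call becomes <code>con.execute(query, ['test_str'])</code>. Only do this with values you control, since the string splicing bypasses parameter binding.</p>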
|
<python><duckdb>
|
2024-11-04 08:41:01
| 1
| 478
|
user11696358
|
79,154,674
| 5,304,366
|
How to migrate from a simple Python project : requirements.txt setup.py (setuptools), to a uv project
|
<p>Suppose I want to use <a href="https://docs.astral.sh/uv/" rel="noreferrer">uv</a> in an already existing project named <code>my_project</code> that has only a <code>requirements.txt</code> (or <code>requirements.in</code>) and a simple <code>setup.py</code> (setuptools), and that has been installed with <code>pip install -e .</code>.</p>
<p>How would I switch from setuptools to uv?</p>
<p>If my project has this file configuration:</p>
<pre><code>my_project
βββ my_project
β βββ hello.py
βββ tests
β βββ test_hello.py
βββ Makefile
βββ README.md
βββ requirements.txt
βββ setup.py
</code></pre>
<p>The <code>uv</code> documentation offers two types of project organisation: the app layout (<code>uv init --app example-app</code>), which corresponds to the one I have, and the library layout (<code>uv init --lib example-lib</code>), which is based on a <code>src</code> layout.</p>
<p><code>uv</code> allows publishing a library directly to PyPI; do I need to change my layout to a <code>src</code> one if I want to publish a library with <code>uv</code>?</p>
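<p>For context, my current understanding (a sketch with placeholder values, not verified against every uv version) is that the migration mostly amounts to replacing <code>setup.py</code> and <code>requirements.txt</code> with a <code>pyproject.toml</code>:</p>

```toml
[project]
name = "my_project"
version = "0.1.0"
requires-python = ">=3.9"
dependencies = [
    # contents of requirements.txt go here, e.g.:
    # "requests>=2.31",
]

[build-system]
requires = ["hatchling"]          # one common backend choice; uv init picks its own default
build-backend = "hatchling.build"
```

<p>after which <code>uv sync</code> should create the environment and install the project, and <code>uv add</code> replaces hand-editing <code>requirements.txt</code>.</p>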
|
<python><setuptools><packaging><requirements.txt><uv>
|
2024-11-04 08:18:08
| 2
| 2,179
|
Adrien Pacifico
|
79,154,580
| 2,307,441
|
Check if string does not contain strings from the list with wildcard when % symbol in the list value
|
<pre><code>newlist = ['test', '%ing', 'osh', '16fg']
tartext = 'Singing'
</code></pre>
<p>I want to check that my <code>tartext</code> value doesn't match any value in <code>newlist</code>. If a <code>newlist</code> string contains a <code>%</code> symbol, I need to treat it as a wildcard character.</p>
<p>I want to achieve a condition like the one below.</p>
<pre><code>if (tartext != 'test' and tartext not like '%ing' and tartext != 'osh' and tartext !=
'16fg') then return true else false
</code></pre>
<p>Since <code>%ing</code> from the list contains a '%' symbol, I need to change the comparison to a wildcard search, like in SQL.</p>
<p>In this example 'Singing' matches '%ing', so I expect the condition to return False.</p>
<p>Below is the code that I have tried, but it didn't work:</p>
<pre class="lang-py prettyprint-override"><code>import re
newlist = ['test', '%ing', 'osh', '16fg']
tartext = 'Singing'
def wildcard_compare(string, match):
match = match.replace('%','.*')#.replace('_','.')
match_expression = f'^{match}$'
return bool(re.fullmatch(match_expression,string))
def condition_match(lookupstring, mylist):
for value in mylist:
if '%' in value:
if not wildcard_compare(lookupstring,value):
return True
else:
if value != lookupstring:
return True
return False
print(condition_match(tartext,newlist))
print(condition_match('BO_IN',['AP_IN','BO_IN','CA_PS']))
</code></pre>
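<p>For reference, a corrected sketch (my own rewrite): return False as soon as any pattern matches, and True only after all patterns have failed — the loop above instead returns True on the first <em>non</em>-matching value:</p>

```python
import re

def sql_like(text, pattern):
    # Translate SQL-style '%' wildcards into a regex; everything else is literal.
    regex = ".*".join(re.escape(part) for part in pattern.split("%"))
    return re.fullmatch(regex, text) is not None

def condition_match(lookupstring, patterns):
    # True only when lookupstring matches none of the patterns.
    return not any(sql_like(lookupstring, p) for p in patterns)

print(condition_match("Singing", ["test", "%ing", "osh", "16fg"]))  # False
print(condition_match("BO_IN", ["AP_IN", "BO_IN", "CA_PS"]))        # False
```

<p>If SQL's <code>_</code> single-character wildcard is also needed, it would have to be translated to <code>.</code> in the same way.</p>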
|
<python><list>
|
2024-11-04 07:46:50
| 1
| 1,075
|
Roshan
|
79,154,452
| 22,146,392
|
How to import python functions from a module in a local Ansible collection?
|
<p>I have a local Ansible collection. This is what my folder structure looks like:</p>
<pre><code>/
|_ collections
| |_ ansible_collections
| |_ my # Namespace
| |_ tools # Collection
| |_ plugins
| |_ modules
| |_ my_module.py # Module
|_ my_playbook.yml
</code></pre>
<p>In <code>my_playbook.yml</code> I call the module with <code><namespace>.<collection>.<module></code>, e.g.: <code>my.tools.my_module</code>.</p>
<p>This is working fine, but I want to define reusable classes/functions to reference from <code>my_module.py</code>. Where would I put these other libraries?</p>
<p>I tried including them in (for example) <code>plugins/modules/my_lib.py</code>, I tried in <code>plugins/module_utils/my_lib.py</code>, etc. In <code>my_module.py</code>, I've tried importing a bunch of ways:</p>
<pre><code>from my.tools.module_utils.my_lib import my_func
from module_utils.my_lib import my_func
import sys, os
sys.path.append(os.path.abspath('../module_utils'))
from my_lib import my_func
</code></pre>
<p>What's the right way to accomplish this?</p>
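<p>For what it's worth, the Ansible developer docs place shared code in <code>plugins/module_utils/</code> and have modules import it via the collection's fully qualified name. A sketch, assuming <code>plugins/module_utils/my_lib.py</code> defines <code>my_func</code>:</p>

```python
# plugins/modules/my_module.py
# Resolved by Ansible's collection loader at runtime; this import will not
# work in a plain Python interpreter outside an Ansible run.
from ansible_collections.my.tools.plugins.module_utils.my_lib import my_func
```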
|
<python><ansible><ansible-collections>
|
2024-11-04 06:50:10
| 0
| 1,116
|
jeremywat
|
79,154,422
| 3,553,814
|
Python sockio SimpleClient fails to connect to python-socketio
|
<p>I've got a python backend with <code>python-socketio</code> as server. I have a react application which can connect to it just fine. However, my python client app always raises a 'connectionError'.</p>
<p>I need some help to get the settings correct.</p>
<p>Python client</p>
<pre><code>socket_client.connect(
'wss://SERVER:9001',
namespace='/cli',
socketio_path='socket',
transports=['websocket']
)
</code></pre>
<p>React client</p>
<pre><code>const manager = new Manager(
'wss://SERVER:9001',
{
path: '/socket',
transports: ['websocket'],
rejectUnauthorized: false,
autoConnect: true
}
);
this.manager.socket('/cli').connect()
</code></pre>
<p>The python backend is fastapi with socketio and currently has cors allowed origins to everything (just trying to connect). The mode is <code>asgi</code></p>
<p>The last error lines from the client when trying to connect</p>
<pre><code> File "D:\Projects\FOLDER\.venv\Lib\site-packages\socketio\client.py", line 159, in connect
raise exceptions.ConnectionError(exc.args[0]) from None
socketio.exceptions.ConnectionError: Connection error
</code></pre>
<p>Both client and server are running version 5.11.4.</p>
<p>Edit: changed protocol.
If I switch to https and <code>polling</code>, I get this message:</p>
<pre><code>socketio.exceptions.ConnectionError: HTTPSConnectionPool(host='SERVER', port=9001): Max retries exceeded with url: /socket/?transport=polling&EIO=4&t=1730702846.0515244 (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1020)')))
</code></pre>
<p>The React app runs over https, and the fastapi app also runs https. Both use the same certificate files (a wildcard certificate, not self-signed).</p>
|
<python><python-socketio>
|
2024-11-04 06:34:39
| 1
| 549
|
Edgar Koster
|
79,154,413
| 4,586,008
|
How to run python -m build behind proxy
|
<p>I am trying to build a Python package via <code>python -m build</code>.</p>
<p>However, I am behind a corporate proxy. When I run the command it tries to download the prerequisites and fails with <code>ConnectTimeoutError</code> after several retries:</p>
<p><code>< WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000001797563A510>, 'Connection to pypi.org timed out. (connect timeout=15)')'</code></p>
<p>Usually when using <code>pip</code>, the following command works for me:</p>
<p><code>pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --proxy http://<username>:<password>@<IP>:<port> <package></code></p>
<p>But I am not sure how to get things working with <code>python -m build</code>.</p>
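<p><code>python -m build</code> creates an isolated environment and runs pip inside it, so command-line pip flags don't reach it — but, to the best of my knowledge, pip's environment variables do. A sketch, keeping the same placeholders:</p>

```shell
# pip (including the one `python -m build` runs in its isolated env)
# honours these environment variables; <username> etc. are placeholders:
export HTTPS_PROXY="http://<username>:<password>@<IP>:<port>"
export HTTP_PROXY="$HTTPS_PROXY"
export PIP_TRUSTED_HOST="pypi.org pypi.python.org files.pythonhosted.org"
# then run:  python -m build
```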
|
<python><proxy>
|
2024-11-04 06:31:41
| 0
| 640
|
lpounng
|
79,154,263
| 18,562,467
|
Object tracking using MMtrack from detected masks and bounding boxes from MMdetection
|
<p>I am working with a custom image dataset and instance segmentations generated using Mask R-CNN from MMDetection. I followed the configuration and data conversion instructions but encountered an error.</p>
<p>Custom Dataset Directory Structure:</p>
<pre class="lang-css prettyprint-override"><code>Workspace\
----Dataset\
--------img1.png
--------img2.png
--------annotations\half-val_detections.pkl
</code></pre>
<p>I transformed the MMDetection output into a dictionary as follows:</p>
<pre class="lang-json prettyprint-override"><code>{
"image1_name": [MMDetection output as dict],
"image2_name": [MMDetection output as dict],
...
}
</code></pre>
<p>Configuration:</p>
<pre class="lang-py prettyprint-override"><code>_base_ = ['./deepsort_faster-rcnn_fpn_4e_mot17-private-half.py']
data_root = 'workspace/Dataset/'
img_norm_cfg = dict(
mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadDetections'),
dict(
type='MultiScaleFlipAug',
img_scale=(1088, 1088),
flip=False,
transforms=[
dict(type='Resize', keep_ratio=True),
dict(type='RandomFlip'),
dict(type='Normalize', **img_norm_cfg),
dict(type='Pad', size_divisor=32),
dict(type='ImageToTensor', keys=['img']),
dict(type='VideoCollect', keys=['img', 'public_bboxes'])
])
]
data = dict(
val=dict(
detection_file=data_root + 'annotations/half-val_detections.pkl',
pipeline=test_pipeline),
test=dict(
detection_file=data_root + 'annotations/half-val_detections.pkl',
pipeline=test_pipeline))
</code></pre>
<p>Error Traceback:</p>
<pre class="lang-none prettyprint-override"><code>During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "workspace\lib\mmtracking\tools\test.py", line 236, in <module>
main()
File "workspace\lib\mmtracking\tools\test.py", line 151, in main
dataset = build_dataset(cfg.data.test)
File "workspace\.venv_2\lib\site-packages\mmdet\datasets\builder.py", line 82, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "workspace\.venv_2\lib\site-packages\mmcv\utils\registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
FileNotFoundError: MOTChallengeDataset: [Errno 2] No such file or directory: 'data/MOT17/annotations/half-val_cocoformat.json'
</code></pre>
<p>I would appreciate your guidance on what might be going wrong here.</p>
<p>I have tried to update <code>_base_/datasets/mot_challenge.py</code>, but that didn't change anything.</p>
|
<python><pytorch><tracking>
|
2024-11-04 05:15:20
| 0
| 412
|
im_vutu
|
79,153,853
| 5,067,748
|
Adding 2D numpy arrays with differing axes arrays: how to properly replace the deprecated interp2d with RectBivariateSpline?
|
<p>I need to add two 2D numpy arrays of possibly different shapes and different corresponding axes arrays.</p>
<p>What I mean is this: Lets define two different sets of x- and y-axes, and calculate the z-values according to some function (here: a 2D Gaussian distribution):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
def gaussian_2D(X, Y, amplitude, mu_x, mu_y, sigma_x, sigma_y, theta, offset=0):
"""
X,Y: are expected to be numpy meshgrids
"""
a = (np.cos(theta)**2)/(2*sigma_x**2) + (np.sin(theta)**2)/(2*sigma_y**2)
b = -(np.sin(2*theta))/(4*sigma_x**2) + (np.sin(2*theta))/(4*sigma_y**2)
c = (np.sin(theta)**2)/(2*sigma_x**2) + (np.cos(theta)**2)/(2*sigma_y**2)
return offset + amplitude*np.exp( - (a*((X-mu_x)**2) + 2*b*(X-mu_x)*(Y-mu_y) + c*((Y-mu_y)**2)))
x1 = np.linspace(10, 100, num=100)
y1 = np.linspace(0, 200, num=100)
X1,Y1 = np.meshgrid(x1,y1)
Z1 = gaussian_2D(X1,Y1, 10, 35, 100, 10, 20, 0)
x2 = np.linspace(0, 150, num=120)
y2 = np.linspace(30, 220, num=120)
X2,Y2 = np.meshgrid(x2,y2)
Z2 = gaussian_2D(X2,Y2, 10, 75, 150, 5, 4, 12)
</code></pre>
<p>The above code generates 2 differing x-arrays of lengths (100,) and (120,), 2 differing y-arrays of lengths (100,) and (120,) and 2 z-arrays of shape (100,100) and (120,120).</p>
<p>The x- and y-arrays define some physical dimension, here space in units of millimeter, so that it makes sense to add the two together, even though the underlying arrays are different. I have chosen these axes to be overlapping, but really there is no need for that; the arrays could be completely separate. Plotting them shows what is going on:</p>
<pre class="lang-py prettyprint-override"><code>fig1, ax1 = plt.subplots()
ax1.pcolormesh(X1,Y1,Z1)
ax1.set_xlim([0,150])
ax1.set_ylim([0,220])
ax1.set_xlabel("x [mm]")
ax1.set_ylabel("y [mm]")
fig2, ax2 = plt.subplots()
ax2.pcolormesh(X2,Y2,Z2)
ax2.set_xlim([0,150])
ax2.set_ylim([0,220])
ax2.set_xlabel("x [mm]")
ax2.set_ylabel("y [mm]")
</code></pre>
<p>results in:</p>
<p><a href="https://i.sstatic.net/2K2AQuM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2K2AQuM6.png" alt="array1 plot" /></a></p>
<p><a href="https://i.sstatic.net/bmfbuJmU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmfbuJmU.png" alt="array2 plot" /></a></p>
<p>And we can get a sense of what the resulting plot should look like when the two arrays are added by simply plotting them on top of each other with some <code>alpha</code> value:</p>
<pre class="lang-py prettyprint-override"><code>fig3, ax3 = plt.subplots()
ax3.pcolormesh(X1, Y1, Z1, alpha=0.5)
ax3.pcolormesh(X2, Y2, Z2, alpha=0.3)
ax3.set_xlabel("x [mm]")
ax3.set_ylabel("y [mm]")
</code></pre>
<p><a href="https://i.sstatic.net/Gs459kFQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gs459kFQ.png" alt="plot of array1 and array2 overlaid" /></a></p>
<hr />
<p>Now, clearly because the underlying structure of the arrays is different, we must use some interpolation to "standardize" the two arrays onto some common grid. For the sake of convenience, let's make the resulting array always (100, 100) (i.e. roughly the same shape as the two input arrays. This is an assumption that will usually hold for me..... let's ignore edge cases...)</p>
<pre class="lang-py prettyprint-override"><code>def add_arrays_with_axes(array_1, array_1_x, array_1_y, array_2, array_2_x, array_2_y, method="interp2d"):
import scipy.interpolate as interp
# Define the new x and y axis ranges that cover both array_1 and array_2 axes
new_x = np.linspace(min(array_1_x[0], array_2_x[0]), max(array_1_x[-1], array_2_x[-1]), num=100)
new_y = np.linspace(min(array_1_y[0], array_2_y[0]), max(array_1_y[-1], array_2_y[-1]), num=100)
# Interpolate array_1 and array_2 onto the new grid
match method:
case "interp2d":
interp_array_1 = interp.interp2d(array_1_x, array_1_y, array_1, kind='cubic', bounds_error=False, fill_value=np.nan)
interp_array_2 = interp.interp2d(array_2_x, array_2_y, array_2, kind='cubic', bounds_error=False, fill_value=np.nan)
case "RectBivariateSpline":
interp_array_1 = interp.RectBivariateSpline(array_1_x, array_1_y, array_1)
interp_array_2 = interp.RectBivariateSpline(array_2_x, array_2_y, array_2)
case _:
raise Exception("Unsupported interp method.")
# Evaluate the interpolations on the new x and y grid
array_1_resampled = interp_array_1(new_x, new_y)
array_2_resampled = interp_array_2(new_x, new_y)
# Replace NaNs with 0 in each array where data was not originally defined
array_1_resampled = np.nan_to_num(array_1_resampled, nan=0.0)
array_2_resampled = np.nan_to_num(array_2_resampled, nan=0.0)
# Sum the resampled arrays
summed_array = array_1_resampled + array_2_resampled
return summed_array, new_x, new_y
</code></pre>
<p>In the above I allow for the use of two different interpolation methods: <code>interp2d</code> and <code>RectBivariateSpline</code>.</p>
<p>Let's see what the result is:</p>
<h2>interp2d</h2>
<pre class="lang-py prettyprint-override"><code>new_array, new_x, new_y = add_arrays_with_axes(Z1, x1, y1, Z2, x2, y2, method="interp2d")
new_mesh = np.meshgrid(new_x, new_y)
fig4, ax4 = plt.subplots()
ax4.pcolormesh(X1, Y1, Z1, alpha=0.5)
ax4.pcolormesh(X2, Y2, Z2, alpha=0.3)
ax4.set_xlabel("x [mm]")
ax4.set_ylabel("y [mm]")
ax4.pcolormesh(*new_mesh, new_array)
</code></pre>
<p><a href="https://i.sstatic.net/2TETVWM6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2TETVWM6.png" alt="result using interp2d" /></a></p>
<hr />
<h2>RectBivariateSpline</h2>
<pre class="lang-py prettyprint-override"><code>new_array, new_x, new_y = add_arrays_with_axes(Z1, x1, y1, Z2, x2, y2, method="RectBivariateSpline")
new_mesh = np.meshgrid(new_x, new_y)
fig5, ax5 = plt.subplots()
ax5.pcolormesh(X1, Y1, Z1, alpha=0.5)
ax5.pcolormesh(X2, Y2, Z2, alpha=0.3)
ax5.set_xlabel("x [mm]")
ax5.set_ylabel("y [mm]")
ax5.pcolormesh(*new_mesh, new_array)
</code></pre>
<p><a href="https://i.sstatic.net/tCI9pAdy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCI9pAdy.png" alt="result using RectBivariateSpline" /></a></p>
<h1>Clearly the two methods show very different results</h1>
<p><code>interp2d</code> shows very much what I expected to happen: you only get values where the original arrays were defined, and zeros where they were undefined, leading to the visible edge at <code>x=10</code>.
<code>RectBivariateSpline</code>, on the other hand, seems to not only stretch the data, but also move the data.... Compare for example the central point of the broader Gaussian. It was defined to be at (x=35, y=100), and <code>interp2d</code> reproduces this correctly, but in the <code>RectBivariateSpline</code> case the center moved somewhere more like (x=35, y=80). <strong>Why does it do that?</strong> It seems like <code>interp2d</code> is the way to go, but that function is actually <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.interp2d.html" rel="nofollow noreferrer">discontinued</a>. <code>RectBivariateSpline</code> does have one more possibly relevant argument: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.RectBivariateSpline.html" rel="nofollow noreferrer"><code>bbox</code></a>, but I am unsure what it does, and also what I would set it to other than the original bounding box the original array, which it defaults to anyway... Plus there seems to be <a href="https://stackoverflow.com/a/45909136">some issue</a> with this argument anyway.</p>
<p>Any ideas what is happening with <code>RectBivariateSpline</code>?</p>
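<p>A possible explanation worth checking (my hypothesis): the two functions use opposite axis conventions. <code>interp2d(x, y, z)</code> took <code>z</code> with shape <code>(len(y), len(x))</code> — the meshgrid convention used above — while <code>RectBivariateSpline(x, y, z)</code> expects <code>z[i, j] = f(x[i], y[j])</code>, i.e. shape <code>(len(x), len(y))</code>. Passing the meshgrid-style arrays untransposed would swap the axes, which looks exactly like data being moved. A small sketch:</p>

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

x = np.linspace(0, 1, 5)     # 0, 0.25, 0.5, 0.75, 1
y = np.linspace(0, 2, 7)     # includes 1.0 exactly
X, Y = np.meshgrid(x, y)     # shapes (7, 5): rows follow y, columns follow x
Z = X + 10 * Y               # meshgrid-style data, shape (len(y), len(x))

# RectBivariateSpline expects z[i, j] = f(x[i], y[j]) -> transpose Z
spl = RectBivariateSpline(x, y, Z.T)
val = float(spl(0.5, 1.0)[0, 0])   # f(0.5, 1.0) = 0.5 + 10*1.0 = 10.5
print(val)
```

<p>If that is the cause, passing <code>array.T</code> (and transposing the evaluated result back) should make <code>RectBivariateSpline</code> agree with <code>interp2d</code>; <code>scipy.interpolate.RegularGridInterpolator</code>, the suggested replacement, uses the same <code>(x, y)</code> ordering for its <code>values</code> array.</p>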
|
<python><multidimensional-array><scipy><interpolation><numpy-ndarray>
|
2024-11-03 23:32:33
| 2
| 376
|
Douglas
|
79,153,512
| 894,067
|
coremltools Error: ValueError: perm should have the same length as rank(x): 3 != 2
|
<p>I keep getting the error <code>ValueError: perm should have the same length as rank(x): 3 != 2</code> when trying to convert my model using coremltools.</p>
<p>From my understanding, the most common cause of this is that the input shape you pass into coremltools doesn't match your model's input shape. However, as far as I can tell, in my code it does match. I also added an input layer, and that didn't help either.</p>
<p>I have put a lot of effort into reducing my code as much as possible while still giving a minimal complete verifiable example. However, I'm aware that the code is still a lot. Starting at line 60 of my code is where I create my model, and train it.</p>
<p>I'm running this on Ubuntu, with NVIDIA setup with Docker.</p>
<p>Any ideas what I'm doing wrong?</p>
<p>PS. I'm really new to Python, TensorFlow, and machine learning as a whole. So while I put a lot of effort into resolving this myself and asking this question in an easy to understand & reproduce way, I might have missed something. So I apologize in advance for that.</p>
<hr />
<pre class="lang-py prettyprint-override"><code>from typing import TypedDict, Optional, List
import tensorflow as tf
import json
from tensorflow.keras.optimizers import Adam
import numpy as np
from sklearn.utils import resample
import keras
import coremltools as ct
# Simple tokenizer function
word_index = {}
index = 1
def tokenize(text: str) -> list:
global word_index
global index
words = text.lower().split()
sequences = []
for word in words:
if word not in word_index:
word_index[word] = index
index += 1
sequences.append(word_index[word])
return sequences
def detokenize(sequence: list) -> str:
global word_index
# Filter sequence to remove all 0s
sequence = [int(index) for index in sequence if index != 0.0]
words = [word for word, index in word_index.items() if index in sequence]
return ' '.join(words)
# Pad sequences to the same length
def pad_sequences(sequences: list, max_len: int) -> list:
padded_sequences = []
for seq in sequences:
if len(seq) > max_len:
padded_sequences.append(seq[:max_len])
else:
padded_sequences.append(seq + [0] * (max_len - len(seq)))
return padded_sequences
class PreprocessDataResult(TypedDict):
inputs: tf.Tensor
labels: tf.Tensor
max_len: int
def preprocess_data(texts: List[str], labels: List[int], max_len: Optional[int] = None) -> PreprocessDataResult:
tokenized_texts = [tokenize(text) for text in texts]
if max_len is None:
max_len = max(len(seq) for seq in tokenized_texts)
padded_texts = pad_sequences(tokenized_texts, max_len)
return PreprocessDataResult({
'inputs': tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32)),
'labels': tf.convert_to_tensor(np.array(labels, dtype=np.int32)),
'max_len': max_len
})
# Define your model architecture
def create_model(input_shape: int) -> keras.models.Sequential:
model = keras.models.Sequential()
model.add(keras.layers.Input(shape=(input_shape,), dtype='int32', name='embedding_input'))
model.add(keras.layers.Embedding(input_dim=10000, output_dim=128)) # `input_dim` represents the size of the vocabulary (i.e. the number of unique words in the dataset).
model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=64, return_sequences=True)))
model.add(keras.layers.Bidirectional(keras.layers.LSTM(units=32)))
model.add(keras.layers.Dense(units=64, activation='relu'))
model.add(keras.layers.Dropout(rate=0.5))
model.add(keras.layers.Dense(units=1, activation='sigmoid')) # Output layer, binary classification (meaning it outputs a 0 or 1, false or true). The sigmoid function outputs a value between 0 and 1, which can be interpreted as a probability.
model.compile(
optimizer=Adam(),
loss='binary_crossentropy',
metrics=['accuracy']
)
return model
# Train the model
def train_model(
model: tf.keras.models.Sequential,
train_data: tf.Tensor,
train_labels: tf.Tensor,
epochs: int,
batch_size: int
) -> tf.keras.callbacks.History:
return model.fit(
train_data,
train_labels,
epochs=epochs,
batch_size=batch_size,
callbacks=[
keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=5),
keras.callbacks.TensorBoard(log_dir='./logs', histogram_freq=1),
# When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
keras.callbacks.ModelCheckpoint(filepath='./best_model.tf', monitor='val_accuracy', save_best_only=True)
]
)
# Example usage
if __name__ == "__main__":
# Check available devices
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
with tf.device('/GPU:0'):
print("Loading data...")
data = (["I love this!", "I hate this!"], [0, 1])
rawTexts = data[0]
rawLabels = data[1]
# Preprocess data
processedData = preprocess_data(rawTexts, rawLabels)
inputs = processedData['inputs']
labels = processedData['labels']
max_len = processedData['max_len']
print("Data loaded. Max length: ", max_len)
# Save word_index to a file
with open('./word_index.json', 'w') as file:
json.dump(word_index, file)
model = create_model(max_len)
print('Training model...')
train_model(model, inputs, labels, epochs=1, batch_size=32)
print('Model trained.')
# When downgrading from TensorFlow 2.18.0 to 2.12.0 I had to change this from `./best_model.keras` to `./best_model.tf`
model.load_weights('./best_model.tf')
print('Best model weights loaded.')
# Save model
# I think that .h5 extension allows for converting to CoreML, whereas .keras file extension does not
model.save('./toxic_comment_analysis_model.h5')
print('Model saved.')
my_saved_model = tf.keras.models.load_model('./toxic_comment_analysis_model.h5')
print('Model loaded.')
print("Making prediction...")
test_string = "Thank you. I really appreciate it."
tokenized_string = tokenize(test_string)
padded_texts = pad_sequences([tokenized_string], max_len)
tensor = tf.convert_to_tensor(np.array(padded_texts, dtype=np.float32))
predictions = my_saved_model.predict(tensor)
print(predictions)
print("Prediction made.")
# Convert the Keras model to Core ML
coreml_model = ct.convert(
my_saved_model,
inputs=[ct.TensorType(shape=(max_len,), name="embedding_input", dtype=np.int32)],
source="tensorflow"
)
# Save the Core ML model
coreml_model.save('toxic_comment_analysis_model.mlmodel')
print("Model successfully converted to Core ML format.")
</code></pre>
<p>Code including Dockerfile & start script as GitHub Gist: <a href="https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089" rel="nofollow noreferrer">https://gist.github.com/fishcharlie/af74d767a3ba1ffbf18cbc6d6a131089</a></p>
|
<python><tensorflow><machine-learning><keras><coremltools>
|
2024-11-03 19:49:49
| 1
| 20,944
|
Charlie Fish
|
79,153,488
| 726,730
|
How to match the wmi camera names with cv2 camera index
|
<p>With wmi I can find the camera names (like HP True Vision HD). How can I match these camera names to cv2 indexes?</p>
<p>There is a mismatch trying to match.</p>
<p>Is there any alternative? (for example list camera true names with cv2 directly)?</p>
|
<python><opencv><wmi>
|
2024-11-03 19:36:00
| 0
| 2,427
|
Chris P
|
79,153,372
| 7,093,241
|
How/Where does PyTorch max documentation show that you can pass in 2 tensors for comparison?
|
<p>I am learning <code>pytorch</code> and deep learning. The documentation for <a href="https://pytorch.org/docs/stable/generated/torch.max.html" rel="nofollow noreferrer"><code>torch.max</code></a> doesn't make sense in that it looks like we can compare 2 tensors but I don't see where in the documentation I could have determined this.</p>
<p>I had this code at first where I wanted to check ReLU values against the maximum. I thought that <code>0</code> could be broadcast against <code>h1</code>, which has <code>h1.shape=torch.Size([10000, 128])</code>.</p>
<pre><code>h1 = torch.max(h1, 0)
y = h1 @ W2 + b2
</code></pre>
<p>However, I got this error:</p>
<p><code>TypeError: unsupported operand type(s) for @: 'torch.return_types.max' and 'Tensor'</code></p>
<p>I managed to fix this by changing the <code>max</code> call to use a tensor instead of <code>0</code>.</p>
<pre><code>h1 = torch.max(h1, torch.tensor(0))
y = h1 @ W2 + b2
</code></pre>
<p><strong>1. Why does this fix the error?</strong></p>
<p>This is when I checked the documentation again and realized that nothing mentions a collection like a tuple or list for multiple tensors, or even a <code>*input</code> for iterable unpacking.</p>
<p>Here are the 2 versions:</p>
<p>1st <code>torch.max</code> version:</p>
<blockquote>
<p>torch.max(input) β Tensor Returns the maximum value of all elements in
the input tensor.</p>
<p>Warning</p>
<p>This function produces deterministic (sub)gradients unlike max(dim=0)</p>
</blockquote>
<p>2nd version of <code>torch.max</code></p>
<blockquote>
<p>torch.max(input, dim, keepdim=False, *, out=None) Returns a namedtuple
(values, indices) where values is the maximum value of each row of the
input tensor in the given dimension dim. And indices is the index
location of each maximum value found (argmax).</p>
<p>If keepdim is True, the output tensors are of the same size as input
except in the dimension dim where they are of size 1. Otherwise, dim
is squeezed (see torch.squeeze()), resulting in the output tensors
having 1 fewer dimension than input.</p>
</blockquote>
<p><strong>2. What is <code>tensor(0)</code> according to this documentation?</strong></p>
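<p>To make the two call forms concrete, a small sketch (using a toy tensor rather than the 10000x128 one above) contrasting the reduction overload with the elementwise overload:</p>

```python
import torch

h1 = torch.tensor([[-1.0, 2.0], [3.0, -4.0]])

# torch.max(input, dim): reduces along dim and returns the namedtuple
# torch.return_types.max(values, indices), not a Tensor -- which is why
# `@` then fails with "unsupported operand type(s)".
reduced = torch.max(h1, 0)
print(reduced.values)   # tensor([3., 2.])
print(reduced.indices)  # tensor([1, 0])

# torch.max(input, other): elementwise maximum with broadcasting; the
# scalar tensor(0.) broadcasts to h1's shape and a plain Tensor comes
# back -- this is exactly ReLU.
clamped = torch.max(h1, torch.tensor(0.0))
print(clamped)          # tensor([[0., 2.], [3., 0.]])
```

<p>The elementwise form is essentially <code>torch.maximum</code>, and the <code>torch.max</code> page only points to it briefly, which makes it easy to miss.</p>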
|
<python><pytorch><max>
|
2024-11-03 18:25:02
| 1
| 1,794
|
heretoinfinity
|
79,153,340
| 6,574,178
|
Qt: how to calculate image coordinates on label?
|
<p>I am writing an app that displays an image and allows the user to interact with the image using the mouse. The image size is predefined; in the code below it is 2000x2000. Since the window size may be smaller, the image is scaled to the window size. I also plan to add a zoom feature, and therefore use a scroll area.</p>
<p>Here is the code I have (I am using PyQt5, but I am pretty sure C++ Qt will work the same)</p>
<pre><code>import sys
from PyQt5.QtCore import *
from PyQt5.QtGui import *
from PyQt5.QtWidgets import *
width, height = 2000, 2000
grid_step = 100
class MyWidget(QLabel):
def __init__(self):
super().__init__()
self.setStyleSheet("background-color: black")
def mousePressEvent(self, event):
if event.button() == Qt.LeftButton:
print(f"Position: {event.pos()}")
class ImageApp(QMainWindow):
def __init__(self, image_path=None):
super().__init__()
self.label = MyWidget()
self.label.setAlignment(Qt.AlignCenter)
self.scroll_area = QScrollArea()
self.scroll_area.setWidgetResizable(True)
self.scroll_area.setWidget(self.label)
self.setCentralWidget(self.scroll_area)
# Generate grid pixmap
self.pixmap = QPixmap(width, height)
self.pixmap.fill(Qt.black)
painter = QPainter(self.pixmap)
painter.setPen(QPen(Qt.white, 1, Qt.SolidLine))
for x in range(0, width, grid_step):
painter.drawLine(x, 0, x, height)
for y in range(0, height, grid_step):
painter.drawLine(0, y, width, y)
painter.end()
self.resize(1000, 800)
def resizeEvent(self, event):
super().resizeEvent(event)
scaled_pixmap = self.pixmap.scaled(self.size().shrunkBy(QMargins(1, 1, 1, 1)),
Qt.KeepAspectRatio, Qt.SmoothTransformation)
self.label.setPixmap(scaled_pixmap)
self.label.adjustSize()
if __name__ == '__main__':
app = QApplication(sys.argv)
window = ImageApp()
window.show()
sys.exit(app.exec_())
</code></pre>
<p>The problem I faced is that coordinates in the mouse click handler are label widget coordinates. This is good for some tasks, such as drawing zoom-on lasso. But at the end of the day I need to get coordinates relative to the original 2000x2000 image. Perhaps I could calculate scale factor when resizing the image to the window, but this is only a part of the problem. The second part is that the window can be wider or narrower, and therefore letter-boxed to the image by adding top/bottom or left/right blank areas. Unfortunately I did not find a nice and easy way to calculate these blank areas, and translate window coordinates to image ones. Finally, when I add zoom feature and scroll bars, recalculation may get even harder.</p>
<p>So the question: is there a quick and nice, possibly Qt built-in, way to recalculate between window and image coordinates?</p>
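<p>In case the arithmetic itself is useful: the letterbox mapping fits in one small helper. A pure-Python sketch (no Qt types, names are illustrative) that maps a click on an aspect-preserving, centered label back to original image coordinates:</p>

```python
def label_to_image(px, py, label_w, label_h, img_w, img_h):
    """Map a click at (px, py) on a letterboxed label back to the
    original img_w x img_h image; None for clicks in the blank bars."""
    scale = min(label_w / img_w, label_h / img_h)  # KeepAspectRatio
    off_x = (label_w - img_w * scale) / 2          # AlignCenter offsets
    off_y = (label_h - img_h * scale) / 2
    ix = (px - off_x) / scale
    iy = (py - off_y) / scale
    if 0 <= ix < img_w and 0 <= iy < img_h:
        return ix, iy
    return None

# 1000x800 label around a 2000x2000 image: scale 0.4, 100 px side bars.
print(label_to_image(500, 400, 1000, 800, 2000, 2000))  # (1000.0, 1000.0)
print(label_to_image(50, 400, 1000, 800, 2000, 2000))   # None
```

<p>For zoom and scroll bars the same idea applies; only the scale factor and offsets change, so keeping them in one helper makes the later features cheap.</p>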
|
<python><qt><pyqt><pyqt5><coordinates>
|
2024-11-03 18:03:53
| 0
| 905
|
Oleksandr Masliuchenko
|
79,153,015
| 1,169,091
|
How to programmatically determine if the default parameter value was used?
|
<p>Given a function:</p>
<pre><code>def foo(num = 42, bar = 100):
print("num:", num)
</code></pre>
<p>How might I add code to foo to learn if foo was invoked with no argument for the num and/or bar parameters?</p>
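<p>One widely used pattern (a sketch, not the only option) is a private sentinel object as the default, so the function can tell "argument omitted" apart from any real value a caller could pass:</p>

```python
_MISSING = object()  # unique sentinel no caller can pass by accident

def foo(num=_MISSING, bar=_MISSING):
    num_defaulted = num is _MISSING
    bar_defaulted = bar is _MISSING
    if num_defaulted:
        num = 42
    if bar_defaulted:
        bar = 100
    print("num:", num)
    return num_defaulted, bar_defaulted

print(foo())          # prints "num: 42", returns (True, True)
print(foo(7))         # prints "num: 7",  returns (False, True)
print(foo(7, bar=1))  # prints "num: 7",  returns (False, False)
```

<p>Note the limit of this approach: it detects an omitted argument, but a caller who explicitly passes <code>42</code> is (correctly) reported as having provided it.</p>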
|
<python><function><methods><parameters>
|
2024-11-03 15:13:06
| 2
| 4,741
|
nicomp
|
79,152,993
| 3,458,191
|
json file validation with schema not working
|
<p>I am trying to validate a json file with a schema but it always says it is valid even when I am removing some fields from the loaded file, see:</p>
<pre><code>import json
import jsonschema
from jsonschema import validate
my_schema = {
'ID': {'type': 'string'},
'Country': {'type': 'string'},
'Name': {'type': 'string'},
'A': {'type': 'string'},
'B': {'type': 'string'},
'C': {'type': 'string'},
'D': {'type': 'string'}
}
def validate_json(json_data):
try:
validate(instance=json_data, schema=my_schema)
except jsonschema.exceptions.ValidationError as err:
return 'Given JSON data is Invalid'
return 'Given JSON data is Valid'
# Function to validate the json file
def validate_json_syntax(json_file):
try:
with open(json_file, 'r') as f:
data = json.load(f)
return validate_json(data)
except FileNotFoundError as e:
print('Read file: Unsuccessful - %s!' % e)
return None
# Validate json file syntax
print(validate_json_syntax(file_name))
</code></pre>
<p>The file has following data stored:</p>
<pre><code>[[{"ID": "101", "Country": "UK", "Name": "none", "A": "2", "B": "6", "C": "0", "D": "0"},
{"ID": "102", "Country": "UK", "Name": "bla", "A": "1", "B": "2", "C": "0", "D": "0"}],
[{"ID": "110", "Country": "GB", "Name": "nana", "A": "2", "B": "6", "C": "0", "D": "0"},
{"ID": "111", "Country": "GB", "Name": "bla", "A": "1", "B": "3", "C": "0", "D": "0"}]
]
</code></pre>
<p>But even when I change the file like:</p>
<pre><code>[[{"ID": "101", "Country": "UK", "B": "6", "C": "0", "D": "0"},
{"ID": "102", "Country": "UK", "Name": "bla", "A": "1", "B": "2", "C": "0", "D": "0"}],
[{"ID": "110", "Country": "GB", "Name": "nana", "A": "2", "B": "6", "C": "0", "D": "0"},
{"ID": "111", "Country": "GB", "Name": "bla", "A": "1", "B": "3", "C": "0", "D": "0"}]
]
</code></pre>
<p>The outcome of the validation function with this schema is still "valid".
How do I need to change the schema?</p>
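<p>For reference, a schema only enforces keys once they sit under <code>properties</code> together with a <code>required</code> list, and the top-level <code>type</code> has to describe the nested-list layout of the file; without that, the keywords above are treated as unknown and everything validates. A sketch of that shape (field names taken from the question):</p>

```python
from jsonschema import ValidationError, validate

item_schema = {
    "type": "object",
    "properties": {
        "ID": {"type": "string"},
        "Country": {"type": "string"},
        "Name": {"type": "string"},
        "A": {"type": "string"},
        "B": {"type": "string"},
        "C": {"type": "string"},
        "D": {"type": "string"},
    },
    "required": ["ID", "Country", "Name", "A", "B", "C", "D"],
}

# The file holds a list of lists of such objects.
my_schema = {
    "type": "array",
    "items": {"type": "array", "items": item_schema},
}

good = [[{"ID": "101", "Country": "UK", "Name": "none",
          "A": "2", "B": "6", "C": "0", "D": "0"}]]
bad = [[{"ID": "101", "Country": "UK", "B": "6", "C": "0", "D": "0"}]]

validate(instance=good, schema=my_schema)  # passes silently
try:
    validate(instance=bad, schema=my_schema)
except ValidationError:
    print("bad data rejected")
```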
|
<python><json><jsonschema>
|
2024-11-03 15:04:18
| 1
| 1,187
|
FotisK
|
79,152,988
| 1,867,093
|
Manim's FunctionGraph to generate the function f(x) = -1/(x**2)
|
<p>With manim, I tried a simple FunctionGraph for the function</p>
<blockquote>
<p>f(x) = -1/(x**2)</p>
</blockquote>
<p>The python code is given below</p>
<pre><code>from manim import *
class SimpleFunction(Scene):
def construct(self):
graph = FunctionGraph(lambda x: -1/(x**2))
self.add(graph)
</code></pre>
<p>and running it generated an empty PNG file with no graph.
I tried the equation f(x) = 1/x and it produced the graph as expected.
Please let me know what is missing here.</p>
|
<python><manim>
|
2024-11-03 15:03:29
| 0
| 391
|
Senthil
|
79,152,985
| 4,465,928
|
Why is BeautifulSoup find_all() method stopping after HTML comment tag?
|
<p>I'm using <code>BeautifulSoup</code> to parse this website:</p>
<p><a href="https://www.baseball-reference.com/postseason/1905_WS.shtml" rel="nofollow noreferrer">https://www.baseball-reference.com/postseason/1905_WS.shtml</a></p>
<p>Inside the website, there's followig element</p>
<pre><code><div id="all_post_pitching_NYG" class="table_wrapper">
</code></pre>
<p>As a wrapper, this element should contain the following elements:</p>
<ol>
<li>
<pre><code><div class="section_heading assoc_post_pitching_NYG as_controls" id="post_pitching_NYG_sh">
</code></pre>
</li>
<li>
<pre><code><div class="placeholder"></div>
</code></pre>
</li>
<li>a very long HTML comment</li>
<li>
<pre><code><div class="topscroll_div assoc_post_pitching_NYG">
</code></pre>
</li>
<li>
<pre><code><div class="table_container is_setup" id="div_post_pitching_NYG">
</code></pre>
</li>
<li>
<pre><code><div class="footer no_hide_long" id="tfooter_post_pitching_NYG">
</code></pre>
</li>
</ol>
<p><a href="https://i.sstatic.net/o5BmoYA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o5BmoYA4.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/XWwMN4Fc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XWwMN4Fc.png" alt="enter image description here" /></a></p>
<p>I have been using:</p>
<pre><code>response = requests.get(url)
response.raise_for_status()
soup = BeautifulSoup(response.content, "html.parser")
pitching = soup.find_all("div", id=lambda x: x and x.startswith("all_post_pitching_"))[0]
for div in pitching:
print(div)
</code></pre>
<p>But it only prints until the very long green HTML comment, then it never prints (4) or beyond. What am I doing wrong?</p>
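<p>Worth knowing when debugging this: iterating a tag yields the <code>Comment</code> node itself as a child, and on some sites the later markup genuinely lives inside that comment until JavaScript injects it. A sketch with synthetic HTML (not the real page) showing how to re-parse a comment's contents:</p>

```python
from bs4 import BeautifulSoup, Comment

html = """<div id="all_post_pitching_NYG" class="table_wrapper">
  <div class="placeholder"></div>
  <!-- <div class="table_container"><table><tr><td>stats</td></tr></table></div> -->
  <div class="footer">footer text</div>
</div>"""

soup = BeautifulSoup(html, "html.parser")
wrapper = soup.find("div", id="all_post_pitching_NYG")

# Extract every comment under the wrapper and parse its markup separately.
hidden = None
for comment in wrapper.find_all(string=lambda s: isinstance(s, Comment)):
    hidden = BeautifulSoup(comment, "html.parser").find(
        "div", class_="table_container")

print(hidden.get_text())                                # stats
print(wrapper.find("div", class_="footer").get_text())  # footer text
```

<p>Note the second print: siblings that come after the comment are still found by normal searches, so if they are missing on the real page, they were most likely inside the comment all along.</p>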
|
<python><web-scraping><beautifulsoup><python-requests>
|
2024-11-03 15:02:14
| 1
| 331
|
Anthony
|
79,152,959
| 2,595,546
|
French locale has \xa0 (space) as mon_thousands_sep, but throws error when reading in number with space
|
<p>I'm using <code>locale.setlocale(locale.LC_ALL, 'fr_FR')</code>. The thousands separator in French is a space -- when checking using <code>locale.localeconv()['mon_thousands_sep']</code>, the output is <code>\xa0</code>, which apparently stands for non-breaking space.</p>
<p>The problem arises when I start reading in French csv files: The numbers are indeed formatted using a space as a separator, i.e. <code>30 500,75</code> (For thirty thousand and five hundred and three quarters).</p>
<p>But trying to read that in using <code>locale.atof</code> breaks: It says it can't read that number in. I assume this is due to multiple space characters existing, but I can't exactly tell my users to make sure their space characters are exactly the one locale needs. Is there a way to get locale to be more flexible on what it considers a space?</p>
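<p>A common workaround (a sketch, under the assumption that only space-like characters vary between inputs) is to strip every space variant before calling <code>locale.atof</code>, which sidesteps the question of which exact space the locale's separator is:</p>

```python
import locale

SPACE_VARIANTS = (" ", "\xa0", "\u2009", "\u202f")  # ASCII, no-break, thin, narrow

def strip_spaces(text: str) -> str:
    # Remove every space-like character users type or spreadsheets export.
    for space in SPACE_VARIANTS:
        text = text.replace(space, "")
    return text

def parse_number(text: str) -> float:
    # With the grouping characters gone, atof only has to handle the
    # locale's decimal comma.
    return locale.atof(strip_spaces(text))

print(strip_spaces("30 500,75"))  # 30500,75

try:
    locale.setlocale(locale.LC_ALL, "fr_FR.UTF-8")  # assumes locale installed
    print(parse_number("30 500,75"))                # 30500.75
except locale.Error:
    print("fr_FR.UTF-8 not available on this machine")
```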
|
<python><locale><number-formatting>
|
2024-11-03 14:48:02
| 1
| 868
|
Fly
|
79,152,583
| 17,176,829
|
converting email.message.EmailMessage to dictionary in python
|
<p>I was working with strings like below in python:</p>
<pre><code>'Message-ID: <9803411.1075852365218.JavaMail.evans@thyme>\nDate: Thu, 24 May 2001 15:27:00 -0700 (PDT)\nFrom: john.forney@enron.com\nSubject: Send in your registration card\nMime-Version: 1.0\nContent-Type: text/plain; charset=us-ascii\nContent-Transfer-Encoding: 7bit\nX-From: John M Forney <John M Forney/HOU/ECT@Enron>\nX-To: \nX-cc: \nX-bcc: \nX-Folder: \\JFORNEY (Non-Privileged)\\Forney, John M.\\To Do\nX-Origin: FORNEY-J\nX-FileName: JFORNEY (Non-Privileged).pst\n\n\tTASK ASSIGNMENT\n\n\nTask Priority:\t\t1\nTask Due On:\t\t\nTask Start Date:\t\n\nFill out and return your registration card or register by phone or fax today!\n\nOnly registered users get:\n-FREE technical support\n-SPECIAL OFFERS on Palm III accessories and add-ons\n-SNEAK PREVIEWS of new product enhancements and software releases\n\nYour registration card also serves as proof of purchase for:\n-Discounts on product upgrades\n-Warranty coverage'
</code></pre>
<p>I parse it using <strong>BytesParser from the email.parser module</strong>.
The output of this parser is an <code>email.message.EmailMessage</code> object. I can access its components like a dictionary, but it is not a real dictionary. I want to convert this object to a dictionary so I can append it to a collection keyed by the same fields.
How do I do this? Is there any method that does this conversion for me?</p>
<p>I tried so far this technique:</p>
<pre><code>def extract_fields_from_msg(msg):
fields = {
'Message-ID': msg['Message-ID'],
'Date': msg['Date'],
'From': msg['From'],
'Subject': msg['Subject'],
'Mime-Version': msg['Mime-Version'],
'Content-Type': msg['Content-Type'],
'Content-Transfer-Encoding': msg['Content-Transfer-Encoding'],
'X-From': msg['X-From'],
'X-To': msg['X-To'],
'X-cc': msg['X-cc'],
'X-bcc': msg['X-bcc'],
'X-Folder': msg['X-Folder'],
'X-Origin': msg['X-Origin'],
'X-FileName': msg['X-FileName'],
'Body': msg.get_body(preferencelist=('plain')).get_content() if msg.get_body(preferencelist=('plain')) else None
}
    return fields
</code></pre>
<p>but I want to know if there is an easier mechanism. For example, I saw this datatype has a .keys() method which returns the keys, and the value of each field is accessible like a dictionary value via its key. If it also had a method like .values(), that would be great.</p>
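<p>In fact <code>EmailMessage</code> already has <code>.values()</code> too, alongside <code>.keys()</code>, <code>.items()</code> and <code>.get()</code>, so the headers convert in one step with <code>dict(msg.items())</code>; only the body needs separate handling, since it is not a header. A minimal sketch:</p>

```python
from email import policy
from email.parser import BytesParser

raw = (b"Message-ID: <1@example>\r\n"
       b"From: john@example.com\r\n"
       b"Subject: Send in your registration card\r\n"
       b"\r\n"
       b"body text\r\n")

msg = BytesParser(policy=policy.default).parsebytes(raw)

fields = dict(msg.items())         # every header, no hand-written mapping
fields["Body"] = msg.get_content() # body added explicitly

print(sorted(fields))  # ['Body', 'From', 'Message-ID', 'Subject']
```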
|
<python><dictionary><email><parsing>
|
2024-11-03 11:19:45
| 0
| 433
|
Narges Ghanbari
|
79,152,369
| 774,575
|
How to change default cmap behavior with plot_surface()?
|
<p>How can I use <code>cmap</code> to have a color gradient determined by <code>x</code> values rather than by <code>z</code> values?</p>
<p><a href="https://i.sstatic.net/6EA2MoBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6EA2MoBM.png" alt="enter image description here" /></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
n = 20
s = 0.05
y = np.linspace(-s, s, n)
z = np.linspace(-s, s, n)
Y, Z = np.meshgrid(y, z)
dist = (Y**6 * Z**2) ** 0.1
fig, ax = plt.subplots(subplot_kw=dict(projection='3d'))
ax.plot_surface(dist, Y, Z, cmap='viridis')
ax.set(aspect='equal', xlabel='x', ylabel='y', zlabel='z')
</code></pre>
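<p>One approach (a sketch, not the only option): bypass <code>cmap</code>, map the x-values through the colormap yourself, and hand the resulting RGBA grid to <code>plot_surface</code> via <code>facecolors</code>, which takes precedence over <code>cmap</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # keep the sketch runnable without a display
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.colors import Normalize

n, s = 20, 0.05
Y, Z = np.meshgrid(np.linspace(-s, s, n), np.linspace(-s, s, n))
dist = (Y**6 * Z**2) ** 0.1

# Normalize the x-values to [0, 1] and look them up in the colormap.
norm = Normalize(vmin=dist.min(), vmax=dist.max())
colors = plt.cm.viridis(norm(dist))   # RGBA grid, shape (n, n, 4)

fig, ax = plt.subplots(subplot_kw=dict(projection="3d"))
ax.plot_surface(dist, Y, Z, facecolors=colors, shade=False)
ax.set(xlabel="x", ylabel="y", zlabel="z")
```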
|
<python><matplotlib><surface>
|
2024-11-03 09:03:42
| 2
| 7,768
|
mins
|
79,152,012
| 489,607
|
Cannot decrypt private ED25519 key generated with cryptography Python module in ssh-keygen
|
<h3>1. Minimal Python code</h3>
<pre class="lang-py prettyprint-override"><code>import os
from stat import S_IRUSR
from stat import S_IWUSR
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.hazmat.primitives.serialization import BestAvailableEncryption
from cryptography.hazmat.primitives.serialization import Encoding
from cryptography.hazmat.primitives.serialization import PrivateFormat
if __name__ == "__main__":
private=ed25519.Ed25519PrivateKey.generate()
secret="123456".encode()
a="./ed-pkcs8"
with open(a, "wb") as f:
contents=private.private_bytes(
Encoding.PEM,
PrivateFormat.PKCS8, # OpenSSH works. RSA works with both.
BestAvailableEncryption(secret))
f.write(contents)
os.chmod(a, S_IRUSR | S_IWUSR)
</code></pre>
<h3>2. Test and expectations with <code>ssh-keygen</code></h3>
<pre class="lang-bash prettyprint-override"><code>python keytest.py
ssh-keygen -y -f ed-pkcs8
</code></pre>
<p>(type <em>123456</em>, press <kbd>ENTER</kbd>)</p>
<p>I expected the contents of the key to appear (unencrypted). Instead, I get the error message β<em><strong>Load key "ed-pkcs8": incorrect passphrase supplied to decrypt private key</strong></em>.β</p>
<h3>3. Alternatives that work</h3>
<ul>
<li>Replacing <code>PrivateFormat.PKCS8</code> with <code>PrivateFormat.OpenSSH</code>. But then again a different format is generated, of course.</li>
<li>Generating an RSA (instead of ED25519).</li>
<li>If I reload the file in the Python cryptography module itself using the <code>load_pem_private_key</code> method (instead of using <code>ssh-keygen</code>).</li>
</ul>
<p>What am I doing wrong? Is this an incompatibility of ssh-keygen with the module? Do I need to do something else? Thank you.</p>
<p>I may be on to something <a href="https://superuser.com/q/1840476/140017">here</a> but then again I am not familiar with OpenSSH so I can't comment.</p>
<h3>(Versions)</h3>
<ul>
<li>Darwin laptop-david.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000 arm64</li>
<li>Python 3.13.0 [ cffi==1.17.1, cryptography==43.0.3, pycparser==2.22, bcrypt==4.2.0 ]</li>
<li>OpenSSH_9.8p1, LibreSSL 3.3.6 (notwithstanding <a href="https://superuser.com/a/1767067/140017">limitations</a>)</li>
</ul>
|
<python><cryptography><python-cryptography>
|
2024-11-03 03:34:28
| 0
| 16,248
|
davidcesarino
|
79,151,731
| 2,043,397
|
Error computing phase angle between two time series using Hilbert Transform
|
<p>I'm trying to compute the phase angle between two time-series of real numbers. To check if my function is working without errors I have created two sine waves with a phase of 17 degrees. However, when I compute the phase angle between those two sine waves I do not get the 17 degrees. Here's my script:</p>
<pre><code>import numpy as np
from scipy.signal import hilbert
import matplotlib.pyplot as plt
def coupling_angle_hilbert(x, y, datatype, center=True, pad=True):
"""
Compute the phase angle between two time series using the Hilbert transform.
Parameters:
- x: numpy array
Time series data for the first signal.
- y: numpy array
Time series data for the second signal.
- center: bool, optional
If True, center the amplitude of the data around zero. Default is True.
- pad: bool, optional
If True, perform data reflection to address issues arising with data distortion. Default is True.
    - datatype: str
        Either 'rads' or 'degs'; input given in degrees is converted to radians.
Returns:
- phase_angle: numpy array
Phase angle between the two signals.
"""
# Convert input data to radians if specified as degrees
if datatype.lower().strip() == 'degs':
x = np.radians(x)
y = np.radians(y)
# Center the signals if the 'center' option is enabled
if center:
# Adjust x to be centered around zero: subtract minimum, then offset by half the range
x = x - np.min(x) - ((np.max(x) - np.min(x))/2)
# Adjust y to be centered around zero: subtract minimum, then offset by half the range
y = y - np.min(y) - ((np.max(y) - np.min(y))/2)
# Reflect and pad the data if padding is enabled
if pad:
# Number of padding samples equal to signal length
# Ensure that the number of pads is even
npads = x.shape[0] // 2 * 2 # Ensure npads is even
# Reflect data at the beginning and end to create padding for 'x' and 'y'
x_padded = np.concatenate((x[:npads][::-1], x, x[-npads:][::-1]))
y_padded = np.concatenate((y[:npads][::-1], y, y[-npads:][::-1]))
else:
# If padding not enabled, use original signals without modification
x_padded = x
y_padded = y
# Apply the Hilbert transform to the time series data
hilbert_x = hilbert(x_padded)
hilbert_y = hilbert(y_padded)
# Calculate the phase of each signal by using arctan2 on imaginary and real parts
phase_angle_x = np.arctan2(hilbert_x.imag, x_padded)
phase_angle_y = np.arctan2(hilbert_y.imag, y_padded)
# Calculate the phase difference between y and x
phase_angle = phase_angle_y - phase_angle_x
# Trim the phase_angle to match the shape of x or y
if pad:
# Remove initial and ending padding to return only the original signal's phase angle difference
phase_angle = phase_angle[npads:npads + x.shape[0]]
return phase_angle
# input data
angles = np.radians(np.arange(0, 360, 1))
phase_offset = np.radians(17)
wav1 = np.sin(angles)
wav2 = np.sin(angles + phase_offset)
# Compute phase_angle usig Hilbert transform
ca_hilbert = coupling_angle_hilbert(wav1,
wav2,
'rads',
center=True,
pad=True)
plt.plot(np.degrees(ca_hilbert))
plt.show()
</code></pre>
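<p>As a sanity check of the expected 17 degrees, a compact sketch that skips the per-signal <code>arctan2</code> and padding machinery entirely: for full-period sinusoids, the angle of one analytic signal times the conjugate of the other is the phase offset directly, already wrapped into (-pi, pi]:</p>

```python
import numpy as np
from scipy.signal import hilbert

angles = np.radians(np.arange(0, 360, 1))   # exactly one period, 360 samples
wav1 = np.sin(angles)
wav2 = np.sin(angles + np.radians(17))

# angle(y_analytic * conj(x_analytic)) is the instantaneous phase
# difference in one step, with no unwrapping needed.
dphi = np.angle(hilbert(wav2) * np.conj(hilbert(wav1)))

print(round(float(np.degrees(np.median(dphi))), 6))  # 17.0
```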
<p>Thank you in advance for any help.</p>
|
<python><scipy><signal-processing>
|
2024-11-02 23:03:05
| 1
| 662
|
TMoover
|
79,151,675
| 12,633,371
|
Spotify API get tracks information from playlist with over 100 tracks using Python
|
<p>I use plain Python to access the <code>Spotify API</code>. What I want is to get specific information (song name, artist, album, etc.) from the playlists of my account. I have managed to do that, but only for playlists with fewer than 100 songs. The <code>Spotify API</code> has a limit of 100 songs per <code>get</code> request.</p>
<p>Searching for possible solutions, I found that making a <code>get</code> request at the <code>BASE_URL/playlists/playlist_id/tracks</code> <code>URL</code>, I can use an <code>offset</code> parameter, so that the <code>get</code> request will respond with songs from the <code>offset</code> and on. For example, if I use <code>offset = 100</code>, that will result in a <code>URL</code> <code>BASE_URL/playlists/playlist_id/tracks?offset=100</code> which should bring 100 songs starting from the 100th. This is also what the <a href="https://developer.spotify.com/documentation/web-api/reference/get-playlists-tracks" rel="nofollow noreferrer">documentation</a> of the <code>API</code> says. Doing that itetaratively, I can get all songs.</p>
<p>However, when I make that request, I get a response with the same first 100 songs. It is as if the <code>offset</code> parameter were ignored.</p>
<p>Any ideas?</p>
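<p>In case it helps to compare: one robust style is to ignore manual offsets and follow the <code>next</code> URL each response contains, which makes a silently dropped <code>offset</code> parameter impossible. A sketch (the playlist id and token are placeholders, and <code>requests</code> is assumed):</p>

```python
import requests

BASE_URL = "https://api.spotify.com/v1"

def all_playlist_tracks(playlist_id: str, token: str) -> list:
    """Collect every item by following the `next` URL in each response.

    The `next` field already encodes the correct limit/offset pair, so
    nothing stale is re-sent from the first page's URL.
    """
    url = f"{BASE_URL}/playlists/{playlist_id}/tracks"
    headers = {"Authorization": f"Bearer {token}"}
    params = {"limit": 100, "offset": 0}
    items = []
    while url:
        resp = requests.get(url, headers=headers, params=params)
        resp.raise_for_status()
        page = resp.json()
        items.extend(page["items"])
        url = page["next"]  # None once the last page is reached
        params = None       # don't re-apply offset=0 on top of the next URL
    return items

def page_offsets(total: int, limit: int = 100) -> list:
    """The offsets you would request when paginating manually instead."""
    return list(range(0, total, limit))

print(page_offsets(250))  # [0, 100, 200]
```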
|
<python><spotify>
|
2024-11-02 22:14:08
| 2
| 603
|
exch_cmmnt_memb
|
79,151,595
| 481,061
|
Elements in scrollable frame below certain offset aren't rendered in TkInter
|
<p>I'm using a canvas with a scrollable frame, but elements below a certain scroll offset aren't rendered. Here is an example app demonstrating that behavior. It contains 100 resizable squares in a long list, and a slider on top to set the side length. Above a side length of about 318 pixels, the lowest boxes aren't rendered anymore -- which hints at a 32,767 px limitation, even though the scrollbar works correctly (I can scroll into the unrendered space).</p>
<p>Is there any fix for that? How can I get a long scrollable element in Python?</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
from tkinter import Canvas, Frame, Scrollbar, Scale
class ScrollableDemoApp:
def __init__(self, root):
root.title("Scrollable Canvas Demo with Resizable Boxes")
root.geometry("500x600")
# Slider to control the size of the boxes
self.box_size = 100 # Default size for the boxes
self.size_slider = Scale(root, from_=100, to=1000, orient="horizontal", label="Box Size", command=self.resize_boxes)
self.size_slider.set(self.box_size)
self.size_slider.pack(fill="x", padx=10, pady=5)
# Create a Canvas widget with a vertical scrollbar
self.canvas = Canvas(root)
self.scrollbar = Scrollbar(root, orient="vertical", command=self.canvas.yview)
self.canvas.configure(yscrollcommand=self.scrollbar.set)
self.canvas.pack(side="left", fill="both", expand=True)
self.scrollbar.pack(side="right", fill="y")
# Create a frame inside the canvas
self.scrollable_frame = Frame(self.canvas)
self.canvas.create_window((0, 0), window=self.scrollable_frame, anchor="nw")
# Add 100 boxes to the scrollable frame
self.boxes = []
for i in range(100):
box = Frame(self.scrollable_frame, bg="white", highlightbackground="black", highlightthickness=1,
width=self.box_size, height=self.box_size)
box.pack_propagate(False) # Prevent the frame from resizing to fit its contents
box.pack(pady=5)
# Add a label to display the index number in each box
label = tk.Label(box, text=f"Box {i + 1}")
label.pack(expand=True)
# Store the box reference for resizing
self.boxes.append(box)
# Update the scroll region to encompass all the content in the scrollable frame
self.scrollable_frame.bind("<Configure>", lambda e: self.canvas.configure(scrollregion=self.canvas.bbox("all")))
def resize_boxes(self, size):
# Update box size based on the slider value
self.box_size = int(size)
for box in self.boxes:
box.config(width=self.box_size, height=self.box_size)
# Update the scroll region to ensure proper scrolling after resizing
self.canvas.configure(scrollregion=self.canvas.bbox("all"))
if __name__ == "__main__":
root = tk.Tk()
app = ScrollableDemoApp(root)
root.mainloop()
</code></pre>
|
<python><tkinter><scrollable>
|
2024-11-02 21:13:38
| 0
| 14,622
|
Felix Dombek
|
79,151,398
| 1,931,605
|
view size is not compatible with input tensor's size and stride
|
<p>I'm trying to train Faster R-CNN on a COCO-format dataset of my images. The image size is 512x512.
I've tested the dataloader separately; it works and prints the batch images and bounding-box details.
I've also tried printing the loss in the network; it does print the <code>batch_mean</code> as well, and after that the error occurs.</p>
<pre class="lang-py prettyprint-override"><code>img_process = v2.Compose(
[
v2.ToTensor(),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]
)
class SCocoDetection(datasets.CocoDetection):
def __init__(
self,
image_directory_path: str,
annotation_path : str,
train: bool = True,
image_processor = None
):
super().__init__(image_directory_path, annotation_path)
self.image_processor = image_processor
def __getitem__(self, idx):
image, annotations = super().__getitem__(idx)
images, targets = [], []
image_id = self.ids[idx]
for ann in annotations:
bbox = ann['bbox']
#small = (bbox[:, 2] * bbox[:, 3]) <= (image.size[1] * image.size[0] * 0.001)
small = (bbox[2] * bbox[3]) <= (512 * 512 * 0.001)
#print(small)
if small:
bbox = torch.tensor(bbox).unsqueeze(0).float()
boxes = ops.box_convert(bbox, in_fmt='xywh', out_fmt='xyxy')
boxes = boxes.float()
if (boxes[0][0] < boxes[0][2]) and (boxes[0][1] < boxes[0][3]):
output_dict = self.image_processor({"image": image, "boxes": boxes})
images.append(output_dict['image'])
targets.append({
'boxes': output_dict['boxes'],
'labels': torch.ones(len(boxes), dtype=int)
})
else:
print(f"Invalid box : {boxes}")
#print(f"image_id : {image_id} , idx : {idx} , targets :{targets}")
return images, targets
TRAIN_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_train/images',
annotation_path='047/v2_coco_train/result.json',
image_processor=img_process,
train=True)
VAL_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_test/images',
annotation_path= '047/v2_coco_test/result.json',
image_processor=img_process,
train=False)
print("Number of training examples:", len(TRAIN_DATASET))
print("Number of validation examples:", len(VAL_DATASET))
#print("Number of test examples:", len(TEST_DATASET))
def collate_fn(batch):
return tuple(zip(*batch))
TRAIN_DATALOADER = DataLoader(dataset=TRAIN_DATASET,collate_fn = collate_fn, batch_size=2, shuffle=True)
VAL_DATALOADER = DataLoader(dataset=VAL_DATASET,collate_fn = collate_fn, batch_size=4, shuffle=True)
</code></pre>
<pre class="lang-py prettyprint-override"><code>import numpy as np
class CocoDNN(L.LightningModule):
def __init__(self):
super().__init__()
self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
def forward(self, images, targets=None):
return self.model(images, targets)
def training_step(self, batch, batch_idx):
imgs, annot = batch
print(f"Batch :{batch_idx}")
batch_losses = []
for img_b, annot_b in zip(imgs, annot):
print(len(img_b), len(annot_b))
if len(img_b) == 0:
continue
loss_dict = self.model(img_b, annot_b)
losses = sum(loss for loss in loss_dict.values())
#print(losses)
batch_losses.append(losses)
batch_mean = torch.mean(torch.stack(batch_losses))
#print(batch_mean)
self.log('train_loss', batch_mean)
def configure_optimizers(self):
return optim.SGD(self.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)
dnn = CocoDNN()
trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
</code></pre>
<pre class="lang-py prettyprint-override"><code>### Error messages and logs
{
"name": "RuntimeError",
"message": "view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[192], line 3
1 dnn = CocoDNN()
2 trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
----> 3 trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
File site-packages/lightning/pytorch/trainer/trainer.py:538, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
536 self.state.status = TrainerStatus.RUNNING
537 self.training = True
--> 538 call._call_and_handle_interrupt(
539 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
540 )
File site-packages/lightning/pytorch/trainer/call.py:47, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
45 if trainer.strategy.launcher is not None:
46 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 47 return trainer_fn(*args, **kwargs)
49 except _TunerExitException:
50 _call_teardown_hook(trainer)
File site-packages/lightning/pytorch/trainer/trainer.py:574, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
567 assert self.state.fn is not None
568 ckpt_path = self._checkpoint_connector._select_ckpt_path(
569 self.state.fn,
570 ckpt_path,
571 model_provided=True,
572 model_connected=self.lightning_module is not None,
573 )
--> 574 self._run(model, ckpt_path=ckpt_path)
576 assert self.state.stopped
577 self.training = False
File site-packages/lightning/pytorch/trainer/trainer.py:981, in Trainer._run(self, model, ckpt_path)
976 self._signal_connector.register_signal_handlers()
978 # ----------------------------
979 # RUN THE TRAINER
980 # ----------------------------
--> 981 results = self._run_stage()
983 # ----------------------------
984 # POST-Training CLEAN UP
985 # ----------------------------
986 log.debug(f\"{self.__class__.__name__}: trainer tearing down\")
File site-packages/lightning/pytorch/trainer/trainer.py:1025, in Trainer._run_stage(self)
1023 self._run_sanity_check()
1024 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1025 self.fit_loop.run()
1026 return None
1027 raise RuntimeError(f\"Unexpected state {self.state}\")
File site-packages/lightning/pytorch/loops/fit_loop.py:205, in _FitLoop.run(self)
203 try:
204 self.on_advance_start()
--> 205 self.advance()
206 self.on_advance_end()
207 self._restarting = False
File site-packages/lightning/pytorch/loops/fit_loop.py:363, in _FitLoop.advance(self)
361 with self.trainer.profiler.profile(\"run_training_epoch\"):
362 assert self._data_fetcher is not None
--> 363 self.epoch_loop.run(self._data_fetcher)
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:140, in _TrainingEpochLoop.run(self, data_fetcher)
138 while not self.done:
139 try:
--> 140 self.advance(data_fetcher)
141 self.on_advance_end(data_fetcher)
142 self._restarting = False
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:250, in _TrainingEpochLoop.advance(self, data_fetcher)
247 with trainer.profiler.profile(\"run_training_batch\"):
248 if trainer.lightning_module.automatic_optimization:
249 # in automatic optimization, there can only be one optimizer
--> 250 batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
251 else:
252 batch_output = self.manual_optimization.run(kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:190, in _AutomaticOptimization.run(self, optimizer, batch_idx, kwargs)
183 closure()
185 # ------------------------------
186 # BACKWARD PASS
187 # ------------------------------
188 # gradient update with accumulated gradients
189 else:
--> 190 self._optimizer_step(batch_idx, closure)
192 result = closure.consume_result()
193 if result.loss is None:
File site-packages/lightning/pytorch/loops/optimization/automatic.py:268, in _AutomaticOptimization._optimizer_step(self, batch_idx, train_step_and_backward_closure)
265 self.optim_progress.optimizer.step.increment_ready()
267 # model hook
--> 268 call._call_lightning_module_hook(
269 trainer,
270 \"optimizer_step\",
271 trainer.current_epoch,
272 batch_idx,
273 optimizer,
274 train_step_and_backward_closure,
275 )
277 if not should_accumulate:
278 self.optim_progress.optimizer.step.increment_completed()
File site-packages/lightning/pytorch/trainer/call.py:167, in _call_lightning_module_hook(trainer, hook_name, pl_module, *args, **kwargs)
164 pl_module._current_fx_name = hook_name
166 with trainer.profiler.profile(f\"[LightningModule]{pl_module.__class__.__name__}.{hook_name}\"):
--> 167 output = fn(*args, **kwargs)
169 # restore current_fx when nested context
170 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/core/module.py:1306, in LightningModule.optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure)
1275 def optimizer_step(
1276 self,
1277 epoch: int,
(...)
1280 optimizer_closure: Optional[Callable[[], Any]] = None,
1281 ) -> None:
1282 r\"\"\"Override this method to adjust the default way the :class:`~lightning.pytorch.trainer.trainer.Trainer` calls
1283 the optimizer.
1284
(...)
1304
1305 \"\"\"
-> 1306 optimizer.step(closure=optimizer_closure)
File site-packages/lightning/pytorch/core/optimizer.py:153, in LightningOptimizer.step(self, closure, **kwargs)
150 raise MisconfigurationException(\"When `optimizer.step(closure)` is called, the closure should be callable\")
152 assert self._strategy is not None
--> 153 step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
155 self._on_after_step()
157 return step_output
File site-packages/lightning/pytorch/strategies/strategy.py:238, in Strategy.optimizer_step(self, optimizer, closure, model, **kwargs)
236 # TODO(fabric): remove assertion once strategy's optimizer_step typing is fixed
237 assert isinstance(model, pl.LightningModule)
--> 238 return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File site-packages/lightning/pytorch/plugins/precision/precision.py:122, in Precision.optimizer_step(self, optimizer, model, closure, **kwargs)
120 \"\"\"Hook to run the optimizer step.\"\"\"
121 closure = partial(self._wrap_closure, model, optimizer, closure)
--> 122 return optimizer.step(closure=closure, **kwargs)
File site-packages/torch/optim/optimizer.py:487, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)
482 else:
483 raise RuntimeError(
484 f\"{func} must return None or a tuple of (new_args, new_kwargs), but got {result}.\"
485 )
--> 487 out = func(*args, **kwargs)
488 self._optimizer_step_code()
490 # call optimizer step post hooks
File site-packages/torch/optim/optimizer.py:91, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)
89 torch.set_grad_enabled(self.defaults[\"differentiable\"])
90 torch._dynamo.graph_break()
---> 91 ret = func(self, *args, **kwargs)
92 finally:
93 torch._dynamo.graph_break()
File site-packages/torch/optim/sgd.py:112, in SGD.step(self, closure)
110 if closure is not None:
111 with torch.enable_grad():
--> 112 loss = closure()
114 for group in self.param_groups:
115 params: List[Tensor] = []
File site-packages/lightning/pytorch/plugins/precision/precision.py:108, in Precision._wrap_closure(self, model, optimizer, closure)
95 def _wrap_closure(
96 self,
97 model: \"pl.LightningModule\",
98 optimizer: Steppable,
99 closure: Callable[[], Any],
100 ) -> Any:
101 \"\"\"This double-closure allows makes sure the ``closure`` is executed before the ``on_before_optimizer_step``
102 hook is called.
103
(...)
106
107 \"\"\"
--> 108 closure_result = closure()
109 self._after_closure(model, optimizer)
110 return closure_result
File site-packages/lightning/pytorch/loops/optimization/automatic.py:144, in Closure.__call__(self, *args, **kwargs)
142 @override
143 def __call__(self, *args: Any, **kwargs: Any) -> Optional[Tensor]:
--> 144 self._result = self.closure(*args, **kwargs)
145 return self._result.loss
File site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:138, in Closure.closure(self, *args, **kwargs)
135 self._zero_grad_fn()
137 if self._backward_fn is not None and step_output.closure_loss is not None:
--> 138 self._backward_fn(step_output.closure_loss)
140 return step_output
File site-packages/lightning/pytorch/loops/optimization/automatic.py:239, in _AutomaticOptimization._make_backward_fn.<locals>.backward_fn(loss)
238 def backward_fn(loss: Tensor) -> None:
--> 239 call._call_strategy_hook(self.trainer, \"backward\", loss, optimizer)
File site-packages/lightning/pytorch/trainer/call.py:319, in _call_strategy_hook(trainer, hook_name, *args, **kwargs)
316 return None
318 with trainer.profiler.profile(f\"[Strategy]{trainer.strategy.__class__.__name__}.{hook_name}\"):
--> 319 output = fn(*args, **kwargs)
321 # restore current_fx when nested context
322 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/strategies/strategy.py:212, in Strategy.backward(self, closure_loss, optimizer, *args, **kwargs)
209 assert self.lightning_module is not None
210 closure_loss = self.precision_plugin.pre_backward(closure_loss, self.lightning_module)
--> 212 self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
214 closure_loss = self.precision_plugin.post_backward(closure_loss, self.lightning_module)
215 self.post_backward(closure_loss)
File site-packages/lightning/pytorch/plugins/precision/precision.py:72, in Precision.backward(self, tensor, model, optimizer, *args, **kwargs)
52 @override
53 def backward( # type: ignore[override]
54 self,
(...)
59 **kwargs: Any,
60 ) -> None:
61 r\"\"\"Performs the actual backpropagation.
62
63 Args:
(...)
70
71 \"\"\"
---> 72 model.backward(tensor, *args, **kwargs)
File site-packages/lightning/pytorch/core/module.py:1101, in LightningModule.backward(self, loss, *args, **kwargs)
1099 self._fabric.backward(loss, *args, **kwargs)
1100 else:
-> 1101 loss.backward(*args, **kwargs)
File site-packages/torch/_tensor.py:581, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
571 if has_torch_function_unary(self):
572 return handle_torch_function(
573 Tensor.backward,
574 (self,),
(...)
579 inputs=inputs,
580 )
--> 581 torch.autograd.backward(
582 self, gradient, retain_graph, create_graph, inputs=inputs
583 )
File site-packages/torch/autograd/__init__.py:347, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
342 retain_graph = create_graph
344 # The reason we repeat the same comment below is that
345 # some Python versions print out the first line of a multi-line function
346 # calls in the traceback and some print out the last line
--> 347 _engine_run_backward(
348 tensors,
349 grad_tensors_,
350 retain_graph,
351 create_graph,
352 inputs,
353 allow_unreachable=True,
354 accumulate_grad=True,
355 )
File site-packages/torch/autograd/graph.py:825, in _engine_run_backward(t_outputs, *args, **kwargs)
823 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
824 try:
--> 825 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
826 t_outputs, *args, **kwargs
827 ) # Calls into the C++ engine to run the backward pass
828 finally:
829 if attach_logging_hooks:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead."
}
</code></pre>
<h3>Environment</h3>
Current environment
<pre><code>#- PyTorch Lightning Version (e.g., 2.4.0): 2.4.0
#- PyTorch Version (e.g., 2.4): 2.5.1
#- Python version (e.g., 3.12): 3.11
#- OS (e.g., Linux): MacOS
#- CUDA/cuDNN version:
#- GPU models and configuration: MPS
#- How you installed Lightning(`conda`, `pip`, source): pip
</code></pre>
<h3>More info</h3>
<p><em>No response</em></p>
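<p>For reference (not part of the original report), the error can be reproduced in isolation: <code>.view()</code> fails on a non-contiguous tensor, while <code>.reshape()</code> copies when needed. A minimal sketch:</p>

```python
import torch

# A transposed tensor is non-contiguous: its strides no longer describe a
# flat row-major layout, so .view() raises the same RuntimeError.
t = torch.arange(6).reshape(2, 3).t()   # shape (3, 2), non-contiguous
view_error = None
try:
    t.view(6)
except RuntimeError as e:
    view_error = e

# .reshape() silently copies when a view is impossible.
flat = t.reshape(6)
print(type(view_error).__name__, flat.tolist())
```

<p>Somewhere inside the loss computation a tensor apparently ends up non-contiguous before a <code>.view()</code> call; <code>torch.autograd.set_detect_anomaly(True)</code> (visible in the traceback above via <code>_detect_anomaly</code>) may help pin down the producing op.</p>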
|
<python><pytorch><pytorch-lightning>
|
2024-11-02 18:51:41
| 1
| 10,425
|
Shan Khan
|
79,151,373
| 11,357,695
|
python app does not work when launched from subprocess
|
<p>I have installed an app with <code>pip</code> in editable mode (<code>App</code>). I am using <code>App</code> in a script via <code>subprocess</code>. <code>App</code> should make a subfolder within the folder containing an input <code>csv</code>, and then add 4 more CSV files to this subfolder. When I call this app via <code>subprocess</code> in the script, it runs for a suspiciously short time, and then completes. There are print statements from the script before the <code>subprocess</code> call, but no subfolder, which implies there is an issue with my <code>subprocess</code> call. Can anyone see any issues?</p>
<p>I am running the script from a USB, in a conda virtual env. I'll try running this from my C drive instead and see if this works. I have already successfully run <code>App</code> from the command line by copying the command fed to subprocess.</p>
<p>Can anyone see any obvious problems?</p>
<p>Thanks!</p>
<p>Script:</p>
<pre><code>print (f'running App on {query_filepath}')
subprocess.run(f"App -csv {query_filepath} -e -sd",
shell = True)
raise ValueError()
</code></pre>
<p>Output (with none of the expected subdirectories from App):</p>
<pre><code>running App on d:\databases\formatting/app_query.csv
Traceback (most recent call last):
File C:\Anaconda3\envs\proteomics\lib\site-packages\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File d:\databases\formatting\parse_id_mapping.py:83
raise ValueError()
ValueError
</code></pre>
<p>EDIT:
I also tried the following unsuccessfully:</p>
<pre><code>subprocess.run(['App', '-csv', query_filepath, '-e', '-sd'],
shell = True)
</code></pre>
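<p>One way to see why the child exits so quickly is to capture its output and return code instead of letting them disappear. A sketch (the <code>sys.executable -c</code> call below is a stand-in for the real <code>App</code> command, which isn't available here):</p>

```python
import shutil
import subprocess
import sys

# First check whether the 'App' entry point is even on PATH for this process;
# None here would explain a silent no-op with shell=True.
print("resolved executable:", shutil.which("App"))

# Stand-in child; replace the list with ['App', '-csv', query_filepath, '-e', '-sd']
result = subprocess.run(
    [sys.executable, "-c", "print('hello from child')"],
    capture_output=True,
    text=True,
)
print("returncode:", result.returncode)
print("stdout:", result.stdout.strip())
print("stderr:", result.stderr.strip())
```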
|
<python><pip><command-line><environment-variables><subprocess>
|
2024-11-02 18:36:14
| 1
| 756
|
Tim Kirkwood
|
79,151,355
| 1,404,332
|
How to Use Memory Mapping and Other Techniques to Reduce Memory Usage
|
<p>I have a sample dataset of 2M vectors with 512 dimensions, each vector being a float32. I have an id field that is int64.</p>
<p>So, the size of raw vectors is 2M * 512 * 4 = 4GB.</p>
<p>The collection is memory-mapped. I have three collections, with FLAT, IVF_FLAT, and HNSW indexes on the single vector field in each collection.</p>
<ul>
<li>Simply loading the collection consumes 4GB of memory, despite being memory-mapped.</li>
<li>Querying after loading consumes up to 5-6GB of memory, despite being memory-mapped.</li>
<li>Releasing the collection does clear the memory, so the data clearly resides on disk.</li>
</ul>
<p>I have set the queryNode.mmap.mmapEnabled to true in values.yaml.</p>
<pre class="lang-py prettyprint-override"><code>index_type, params = 'FLAT', {}
# index_type, params = 'IVF_FLAT', {'nlist': 2000}
# index_type, params = 'HNSW', {'M': 32, 'efConstruction': 200}
# Define a collection schema
eid_field = FieldSchema(name="eid", dtype=DataType.INT64, is_primary=True, description="embedding id")
embedding_field = FieldSchema(name="content_embedding", dtype=DataType.FLOAT_VECTOR, dim=512, description="content embedding")
# Set enable_dynamic_field to True if you need to use dynamic fields.
schema = CollectionSchema(fields=[eid_field, embedding_field], auto_id=False, enable_dynamic_field=True, description=f"{index_type} collection")
# Define index parameters
index_params = client.prepare_index_params()
index_params.add_index(
field_name="eid",
index_type="STL_SORT"
)
index_params.add_index(
field_name="content_embedding",
index_type=index_type,
metric_type="L2",
params=params
)
collection_name = f"benchmarking_{index_type}"
params_str = '_'.join([f"{k}_{v}" for k,v in sorted(params.items())])
if len(params_str)>0:
collection_name += f"_{params_str}"
if not client.has_collection(collection_name=collection_name):
client.create_collection(
collection_name=collection_name,
dimension=512, # The vectors we will use in this demo has 512 dimensions
schema=schema, # The collection schema we defined above
index_params=index_params,
properties={'mmap.enabled': True}
)
</code></pre>
<p>See: <a href="https://milvus.io/docs/mmap.md" rel="nofollow noreferrer">https://milvus.io/docs/mmap.md</a></p>
|
<python><vector-database><milvus>
|
2024-11-02 18:27:46
| 1
| 503
|
Tim Spann
|
79,151,303
| 1,306,892
|
How to add the plane y = x to a 3D surface plot in Plotly?
|
<p>I am currently working on a 3D surface plot using Plotly in Python. Below is the code I have so far:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import plotly.graph_objects as go
# Definition of the domain
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
# Definition of the function, avoiding division by zero
Z = np.where(X**2 + Y**2 != 0, (X * Y) / (X**2 + Y**2), 0)
# Creation of the interactive graph
fig = go.Figure(data=[go.Surface(z=Z, x=X, y=Y, colorscale='Viridis')])
# Add title and axis configurations
fig.update_layout(
title='Interactive graph of f(x, y) = xy / (x^2 + y^2)',
scene=dict(
xaxis_title='X',
yaxis_title='Y',
zaxis_title='f(X, Y)'
),
)
# Show the graph
fig.show()
</code></pre>
<p>I would like to add the plane (y = x) to this plot. However, I am having trouble figuring out how to do this.</p>
<p>Can anyone provide guidance on how to add this plane to my existing surface plot? Any help would be greatly appreciated!</p>
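<p>For context, here is how far a parameterization of the vertical plane y = x gets with NumPy alone: sweep a parameter t along the diagonal and z along the vertical axis, then set x = y = t. (The <code>go.Surface</code> call is left as a comment since the figure above isn't recreated here, and the variable names are made up.)</p>

```python
import numpy as np

# Parameterize the plane y = x: t runs along the diagonal, z is vertical.
t = np.linspace(-5, 5, 50)
z = np.linspace(-0.5, 0.5, 50)       # vertical extent of the plane
T, Zp = np.meshgrid(t, z)
Xp, Yp = T, T                         # y equals x everywhere on the mesh

# fig.add_trace(go.Surface(x=Xp, y=Yp, z=Zp, showscale=False, opacity=0.5))
print(Xp.shape, np.array_equal(Xp, Yp))
```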
|
<python><numpy><plotly>
|
2024-11-02 17:59:34
| 2
| 1,801
|
Mark
|
79,151,259
| 1,942,868
|
Django model filter for string
|
<p>I have table which has string such as</p>
<pre><code>id | url
1 | /myapi/1241/
2 | /myapi/
3 | /myapi/1423/
4 | /myapi/
</code></pre>
<p>Now I want to filter them like this below</p>
<pre><code>MyModel.objects.filter(url="/myapi/****/")
</code></pre>
<p>Is this possible, or is there another way to do it?</p>
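<p>For illustration, the kind of pattern a <code>__regex</code> lookup would use (the queryset line is shown only as a comment; the raw regex is exercised against the sample values):</p>

```python
import re

# Django equivalent (assuming the model is named MyModel):
#   MyModel.objects.filter(url__regex=r'^/myapi/\d+/$')
pattern = re.compile(r'^/myapi/\d+/$')

urls = ['/myapi/1241/', '/myapi/', '/myapi/1423/', '/myapi/']
matched = [u for u in urls if pattern.match(u)]
print(matched)  # ['/myapi/1241/', '/myapi/1423/']
```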
|
<python><django><model>
|
2024-11-02 17:45:12
| 1
| 12,599
|
whitebear
|
79,151,249
| 7,946,082
|
Why isn't _run_once called when a custom coroutine is awaited?
|
<h2>What I did</h2>
<pre class="lang-py prettyprint-override"><code>async def inner() -> None:
print("inner")
async def main() -> None:
print("main start")
await inner()
print("main end")
if __name__ == "__main__":
import asyncio
asyncio.run(main())
</code></pre>
<p>and I added some print() in asyncio's event loop</p>
<pre class="lang-py prettyprint-override"><code> def run_forever(self):
"""Run until stop() is called."""
try:
self._thread_id = threading.get_ident()
sys.set_asyncgen_hooks(firstiter=self._asyncgen_firstiter_hook,
finalizer=self._asyncgen_finalizer_hook)
events._set_running_loop(self)
            print("start loop in run_forever")  # here!
while True:
self._run_once()
if self._stopping:
break
def _run_once(self):
"""Run one full iteration of the event loop.
This calls all currently ready callbacks, polls for I/O,
schedules the resulting callbacks, and finally schedules
'call_later' callbacks.
"""
print("_run_once called!") # here!
sched_count = len(self._scheduled)
if (sched_count > _MIN_SCHEDULED_TIMER_HANDLES and
...
</code></pre>
<h2>What I thought would happen</h2>
<ol>
<li>coroutine main() be called by event loop</li>
<li>coroutine inner() is registered to event loop</li>
<li>event loop calls inner() in <code>_run_once()</code></li>
</ol>
<h2>What actually happened</h2>
<ol>
<li>main() got called by event loop (by _run_once())</li>
<li>main calls inner()</li>
<li>inner() finishes (there was no interaction from event loop)</li>
</ol>
<h2>log</h2>
<pre><code>start loop in run_forever
_run_once called!
main start
inner
main end
_run_once called!
start loop in run_forever
_run_once called!
_run_once called!
start loop in run_forever
_run_once called!
_run_once called!
</code></pre>
<h2>The question</h2>
<ol>
<li>Why does it work like this?</li>
<li>What actually calls the coroutine inner()?</li>
</ol>
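<p>A minimal sketch of the behavior in question: awaiting a coroutine runs it inline in the caller's frame, while wrapping it in a task hands it back to the event loop:</p>

```python
import asyncio

order = []

async def inner():
    order.append("inner")

async def main():
    order.append("main start")
    await inner()                          # runs inline; no trip through the loop
    task = asyncio.ensure_future(inner())  # scheduled; the loop runs it later
    order.append("main end")
    await task

asyncio.run(main())
print(order)  # ['main start', 'inner', 'main end', 'inner']
```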
|
<python><python-asyncio>
|
2024-11-02 17:42:31
| 1
| 513
|
Jerry
|
79,151,113
| 3,700,428
|
Why am I getting conflicting dependencies from the requirements file when deploying mwaa?
|
<p>I'm deploying MWAA with Airflow version 2.10.1 on a non-public network, with the following requirements file:</p>
<pre><code>--find-links /usr/local/airflow/plugins
--no-index
pyarrow==14.0.2
filelock==3.15.4
platformdirs==4.2.2
pydantic==2.8.2
virtualenv==20.26.3
distlib==0.3.8
snowflake-connector-python==3.12.1
snowflake-sqlalchemy==1.6.1
astronomer-cosmos==1.7.1
apache-airflow-providers-snowflake==5.7.0
</code></pre>
<p>I recognize that it's standard to include a constraints file, but since I'm on a private network, it wouldn't find the standard constraints file anyway. I get the following error in the log file:</p>
<pre><code>WARNING: Constraints should be specified for requirements.txt. Please see https://docs.aws.amazon.com/mwaa/latest/userguide/working-dags-dependencies.html#working-dags-dependencies-test-create
Forcing local constraints
Defaulting to user installation because normal site-packages is not writeable
Looking in links: /usr/local/airflow/plugins
ERROR: Cannot install pyarrow==14.0.2 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested pyarrow==14.0.2
The user requested (constraint) pyarrow==14.0.2
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip to attempt to solve the dependency conflict
</code></pre>
<p>This works fine when I deploy locally using the same plugins folder with aws-mwaa-local-runner, but doesn't work when I deploy on AWS. Additionally, I don't have anything in my startup.sh file.</p>
<p>The following is the pyarrow wheel in the plugins folder:
<code>pyarrow-14.0.2-cp311-cp311-manylinux_2_28_x86_64.whl</code></p>
<p>Any ideas?</p>
|
<python><airflow><mwaa>
|
2024-11-02 16:47:06
| 2
| 808
|
Sam Helmich
|
79,151,017
| 11,770,390
|
Importing python function that lives in a sub-subfolder
|
<p>The following directory structure:</p>
<pre><code>repo/
├── third_party/
│   └── project/
│       └── src/
│           └── moduledir/
│               ├── __init__.py
│               └── main.py
└── pythonscript.py
</code></pre>
<p>In <code>main.py</code> there's a (dummy) function:</p>
<pre><code>def get_version():
return "1.0"
</code></pre>
<p>Now, from <code>pythonscript.py</code>, how can I call this function <code>get_version()</code>?</p>
<p>Note: the directories in <code>third_party/project/src</code> are not packages themselves; only <code>moduledir</code> contains an (empty) <code>__init__.py</code>.</p>
<p>I tried this but it can't find <code>get_version()</code>:</p>
<pre><code>sys.path.append(os.path.join(os.path.dirname(__file__), 'third_party/project/src/moduledir'))
from main import get_version
</code></pre>
<p>The error message is:</p>
<pre><code>ERROR: Error loading pythonscript at '/home/user/dev/Product/pythonscript.py': Unable to load pythonscript in /home/user/dev/Product/pythonscript.py
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/anaconda3/envs/meteomatics/lib/python3.11/imp.py", line 172, in load_source
module = _load(spec)
^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 721, in _load
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/user/dev/Product/pythonscript.py", line 13, in <module>
from main import get_version
ModuleNotFoundError: No module named 'main'
</code></pre>
<p>Maybe I should mention that I don't execute this script myself; it's part of a facility that executes this "plugin" file for me.</p>
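<p>For what it's worth, the approach does work when reproduced in isolation; the sketch below rebuilds the layout in a temporary directory and uses <code>sys.path.insert(0, ...)</code> so that another module named <code>main</code> earlier on the path cannot shadow this one (such shadowing is one plausible cause of the failure under a plugin host):</p>

```python
import importlib
import os
import sys
import tempfile

# Recreate the question's layout in a temporary directory.
root = tempfile.mkdtemp()
moduledir = os.path.join(root, "third_party", "project", "src", "moduledir")
os.makedirs(moduledir)
open(os.path.join(moduledir, "__init__.py"), "w").close()
with open(os.path.join(moduledir, "main.py"), "w") as f:
    f.write("def get_version():\n    return '1.0'\n")

# Prepend (not append) so nothing else named 'main' wins the lookup.
sys.path.insert(0, moduledir)
main = importlib.import_module("main")
print(main.get_version())  # 1.0
```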
|
<python><python-module>
|
2024-11-02 15:51:45
| 2
| 5,344
|
glades
|
79,150,877
| 689,194
|
How to get all fields with choices in a Django model?
|
<p>I have a Django model with a dozen fields that have choices, and I want to serialize their values to write a CSV file.</p>
<p>How can I traverse the fields to find the ones with choices?
Something like this:</p>
<pre><code>for field in MyModel._meta.fields:
    if field.has_choices_on_it():  # pseudocode
        print(field.name)
</code></pre>
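<p>A sketch of the attribute check without Django installed, using a stand-in <code>Field</code> class; on a real Django field the same attribute exists (<code>field.choices</code> is <code>None</code> or empty when the field has no choices):</p>

```python
# Stand-in for django.db.models.Field, just to exercise the check.
class Field:
    def __init__(self, name, choices=None):
        self.name = name
        self.choices = choices

fields = [
    Field("status", choices=[("a", "A"), ("b", "B")]),
    Field("title"),
]

# The same expression would work on MyModel._meta.fields in Django.
with_choices = [f.name for f in fields if getattr(f, "choices", None)]
print(with_choices)  # ['status']
```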
|
<python><django><django-models>
|
2024-11-02 14:36:54
| 1
| 1,714
|
Josir
|
79,150,620
| 18,445,352
|
Closing a httpx client results in a "RuntimeError: Event loop is closed"
|
<p>I need to maintain a persistent httpx client in my code to utilize its connection pool throughout the lifespan of my application. Below is a simplified version of my implementation:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import weakref
from threading import Lock
import nest_asyncio
from httpx import AsyncClient
class Request:
_instance = None
_lock = Lock()
_loop: asyncio.AbstractEventLoop = None
_new_loop = False
_session = None
def __new__(cls):
if not cls._instance:
with cls._lock:
if not cls._instance:
instance = super().__new__(cls)
cls._instance = instance
weakref.finalize(instance, instance._close)
if cls._loop is None:
try:
cls._loop = asyncio.get_running_loop()
except RuntimeError:
cls._loop = asyncio.new_event_loop()
cls._new_loop = True
asyncio.set_event_loop(cls._loop)
if cls._new_loop:
# nest_asyncio.apply()
cls._loop.run_until_complete(cls.create_client())
else:
cls._loop.create_task(cls.create_client())
return cls._instance
@classmethod
async def create_client(cls):
cls._session = AsyncClient()
@classmethod
def _close(cls):
cls._loop.run_until_complete(cls.close())
if cls._new_loop:
cls._loop.close()
@classmethod
async def close(cls):
if cls._session:
await cls._session.aclose()
async def get(self, url):
try:
return await self._session.get(url)
except Exception:
return None
req = Request()
async def main():
result = await req.get('https://www.google.com')
if result:
print(result.text[:100])
asyncio.run(main())
</code></pre>
<p>This results in a <code>RuntimeError: Event loop is closed</code>:</p>
<pre class="lang-py prettyprint-override"><code>...
await self._pool.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 313, in aclose
await self._close_connections(closing_connections)
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/httpcore/_async/connection_pool.py", line 305, in _close_connections
await connection.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/httpcore/_async/connection.py", line 171, in aclose
await self._connection.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/httpcore/_async/http11.py", line 265, in aclose
await self._network_stream.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/httpcore/_backends/anyio.py", line 55, in aclose
await self._stream.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/anyio/streams/tls.py", line 201, in aclose
await self.transport_stream.aclose()
File "/home/user/.cache/pypoetry/virtualenvs/app-gEFTwlce-py3.12/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 1287, in aclose
self._transport.close()
File "/nix/store/gmx7bwrwy6s0kk89ij5yj8r8ayai95x1-python3-3.12.5/lib/python3.12/asyncio/selector_events.py", line 1210, in close
super().close()
File "/nix/store/gmx7bwrwy6s0kk89ij5yj8r8ayai95x1-python3-3.12.5/lib/python3.12/asyncio/selector_events.py", line 875, in close
self._loop.call_soon(self._call_connection_lost, None)
File "/nix/store/gmx7bwrwy6s0kk89ij5yj8r8ayai95x1-python3-3.12.5/lib/python3.12/asyncio/base_events.py", line 795, in call_soon
self._check_closed()
File "/nix/store/gmx7bwrwy6s0kk89ij5yj8r8ayai95x1-python3-3.12.5/lib/python3.12/asyncio/base_events.py", line 541, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
</code></pre>
<p>The only way to resolve this issue is to uncomment the line containing <code>nest_asyncio.apply()</code>. However, since the <code>nest_asyncio</code> package is heavily based on asyncio internal functions and is no longer maintained, I'm not interested in using it in my code.</p>
<p><strong>My question is:</strong> Why is the loop closed when I call <code>run_until_complete()</code> (from within the <code>_close()</code> function), which creates a new event loop and immediately calls the <code>AyncClient.aclose()</code> function? How can I fix this?</p>
<p>For your reference, the same code works for <code>aiohttp.ClientSession</code> without the need to use <code>nest_asyncio</code>.</p>
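<p>As a stripped-down illustration of the failure mode (plain asyncio, no httpx): my understanding is that the pool's transports stay bound to the loop they were created on, and any callback scheduled on a closed loop raises this same error:</p>

```python
import asyncio

loop = asyncio.new_event_loop()
loop.close()

err = None
try:
    # httpx's aclose() ends up here indirectly via transport.close()/call_soon
    loop.call_soon(print, "never runs")
except RuntimeError as e:
    err = e
print(err)  # Event loop is closed
```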
|
<python><python-asyncio><aiohttp><httpx>
|
2024-11-02 12:05:16
| 1
| 346
|
Babak
|
79,150,545
| 1,070,833
|
CPM constraints for continuous grids
|
<p>I'm learning CPM, mostly using CPMpy, and I'm banging my head trying to figure out the constraints for one of my test problems.</p>
<p>Let's say I have a simple 5x5 2D grid. Each cell can be either a road (1) or not a road (0). I want to make sure that all the roads are connected. Since I'm adding new roads to the grid while solving, I'm not sure I can use the circuit constraint for that. For instance, I do not want to end up with something like this, where I have two separate roads with no connection between them:</p>
<pre><code>0 1 0 1 0
0 1 0 1 1
1 1 0 0 1
0 0 0 0 1
1 1 1 1 1
</code></pre>
<p>Another way to describe this: I do not want separate islands, but one continuous piece of land. Maybe CPM is not the right system to use for this? I have more constraints and other rules, and this is the first case that has defeated me. I'm also very new to CPM in general.</p>
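<p>For checking a candidate grid (or deciding when to add a connectivity cut during search), a plain flood fill outside the solver counts the road components; it confirms the example above really has two separate islands:</p>

```python
from collections import deque

grid = [
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def components(g):
    """Count 4-connected components of road cells (value 1)."""
    seen, count = set(), 0
    for r in range(len(g)):
        for c in range(len(g[0])):
            if g[r][c] == 1 and (r, c) not in seen:
                count += 1
                q = deque([(r, c)])
                seen.add((r, c))
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < len(g) and 0 <= nx < len(g[0])
                                and g[ny][nx] == 1 and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            q.append((ny, nx))
    return count

print(components(grid))  # 2 -> the grid is NOT one connected road network
```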
|
<python><constraint-programming><cpmpy>
|
2024-11-02 11:26:32
| 0
| 1,109
|
pawel
|
79,150,522
| 2,550,576
|
Python OpenCV connecting to an IP camera: cannot open the camera
|
<p>I'm using Python and OpenCV to connect my program to an IP camera.
When I connect to the camera with VLC, it shows the stream normally, but connecting from Python fails.</p>
<p>This is my source:</p>
<pre><code>import cv2
url = 'rtsp://myuser:mypassword@192.168.1.3:port/onvif1?rtsp_transport=udp'
# Open the video stream
cap = cv2.VideoCapture(url)
while True:
    # Capture frame-by-frame
    ret, frame = cap.read()

    # If the frame was not retrieved properly, break the loop
    if not ret:
        print("Failed to grab frame")
        break

    # Display the resulting frame
    cv2.imshow('Mobile Camera', frame)

    # Press 'q' to exit the video stream
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>When I run it, I get this error:</p>
<pre><code>cannot open your camera..
</code></pre>
<p>How can I fix this problem? Thanks in advance.</p>
|
<python><opencv><video-streaming><rtsp>
|
2024-11-02 11:12:22
| 1
| 515
|
Khanh Luong Van
|
79,150,520
| 9,315,690
|
How should I type a function that takes the return value of ArgumentParser.add_subparsers as parameter?
|
<p>I'm type hinting a codebase that uses argparse to parse command-line arguments, and in this part of the code there are some functions that make operations on the value returned by <code>argparse.ArgumentParser.add_subparsers</code>. I couldn't figure out what type this would return from looking at <a href="https://docs.python.org/3/library/argparse.html" rel="nofollow noreferrer">the documentation</a>, so I took to Pdb to investigate:</p>
<pre><code>(Pdb) p sub
_SubParsersAction(option_strings=[], dest='action', nargs='A...', const=None, default=None, type=None, choices={}, required=True, help=None, metavar=None)
</code></pre>
<p>While mypy is just fine with using <code>argparse._SubParsersAction</code> as type hint, this seems wrong to me given that usually the underscore denotes private API in Python, and relying on that for a type hint sounds like a bad idea.</p>
<p>So, how can I type hint such a parameter without relying on private API?</p>
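One pragmatic workaround (a sketch; `_SubParsersAction` is indeed private, so this confines the private name to a `TYPE_CHECKING` block where only static analyzers evaluate it, and nothing private is touched at runtime):

```python
from __future__ import annotations

import argparse
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Referenced only by the type checker, never imported at runtime.
    from argparse import _SubParsersAction


def add_commands(sub: _SubParsersAction[argparse.ArgumentParser]) -> None:
    run = sub.add_parser("run")
    run.add_argument("--fast", action="store_true")


parser = argparse.ArgumentParser()
add_commands(parser.add_subparsers(dest="action"))
args = parser.parse_args(["run", "--fast"])
print(args.action, args.fast)  # run True
```

The `from __future__ import annotations` line makes all annotations lazy strings, so the private class is never looked up when the module executes.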
|
<python><python-typing><argparse><mypy>
|
2024-11-02 11:10:08
| 0
| 3,887
|
Newbyte
|
79,150,497
| 13,982,768
|
Efficently saving code from Python REPL without gui
|
<p>I often use the REPL in my terminal to quickly test simple things. With the 3.13 update, the REPL got better and much more useful, so my usage increased, but I can't easily save my work from it. My current solution is to copy and paste from the terminal UI.</p>
<p>Is there any method that allows me to save my session as a .py file, or to copy the last commands to my device's clipboard so I can quickly use them elsewhere?</p>
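One possibility (a sketch; it relies on the `readline` module that backs interactive history on Unix-like systems — whether the new 3.13 REPL records its history through it in your environment is an assumption worth checking) is to dump the current history to a file from inside the session:

```python
import readline


def save_session(path="session.py"):
    """Write every line of the current interactive history to a file."""
    with open(path, "w") as f:
        for i in range(1, readline.get_current_history_length() + 1):
            f.write(readline.get_history_item(i) + "\n")

# Inside the REPL you would then run:
# >>> save_session("scratch.py")
```

The saved file will include non-code lines (failed statements, the `save_session` call itself), so a quick manual cleanup is usually still needed.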
|
<python><read-eval-print-loop>
|
2024-11-02 10:55:29
| 0
| 367
|
Onuralp Arslan
|
79,149,912
| 9,596,339
|
Failed to resolve 'gwcdfas0csrtxc09.search.windows.net' ([Errno -2] Name or service not known)
|
<p>I'm running the graphrag solution accelerator from Microsoft to build an index, and I'm getting an error at step 7. The application is deployed to a Kubernetes cluster, with one pod for the frontend and one for the backend. I'm running the backend pod to create the index. The permissions were granted correctly by the infra team, but somehow I'm stuck with this error:</p>
<pre><code>[INFO] 2024-11-01 04:58:52,085 - Index: testing-index -- Workflow (1/16): create_base_text_units started.
/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
[INFO] 2024-11-01 04:58:55,377 - Index: testing-index -- Workflow (1/16): create_base_text_units complete.
[INFO] 2024-11-01 04:58:56,901 - Index: testing-index -- Workflow (2/16): create_base_extracted_entities started.
[INFO] 2024-11-01 05:08:42,824 - Index: testing-index -- Workflow (2/16): create_base_extracted_entities complete.
[INFO] 2024-11-01 05:08:44,580 - Index: testing-index -- Workflow (3/16): create_final_covariates started.
/usr/local/lib/python3.10/site-packages/datashaper/engine/verbs/convert.py:65: FutureWarning: errors='ignore' is deprecated and will raise in a future version. Use to_numeric without passing `errors` and catch exceptions explicitly instead
column_numeric = cast(pd.Series, pd.to_numeric(column, errors="ignore"))
[INFO] 2024-11-01 05:17:35,607 - Index: testing-index -- Workflow (3/16): create_final_covariates complete.
[INFO] 2024-11-01 05:17:37,188 - Index: testing-index -- Workflow (4/16): create_summarized_entities started.
[INFO] 2024-11-01 05:22:38,155 - Index: testing-index -- Workflow (4/16): create_summarized_entities complete.
[INFO] 2024-11-01 05:22:39,881 - Index: testing-index -- Workflow (5/16): join_text_units_to_covariate_ids started.
[INFO] 2024-11-01 05:22:40,033 - Index: testing-index -- Workflow (5/16): join_text_units_to_covariate_ids complete.
[INFO] 2024-11-01 05:22:41,396 - Index: testing-index -- Workflow (6/16): create_base_entity_graph started.
[INFO] 2024-11-01 05:22:49,056 - Index: testing-index -- Workflow (6/16): create_base_entity_graph complete.
[INFO] 2024-11-01 05:22:50,693 - Index: testing-index -- Workflow (7/16): create_final_entities started.
/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py:59: FutureWarning: 'DataFrame.swapaxes' is deprecated and will be removed in a future version. Please use 'DataFrame.transpose' instead.
return bound(*args, **kwds)
Error executing verb "text_embed" in create_final_entities: <urllib3.connection.HTTPSConnection object at 0x7f5147500b50>: Failed to resolve 'gwcdfas0csrtxc09.search.windows.net' ([Errno -2] Name or service not known)
</code></pre>
<p>This is the pipeline_settings.yaml file.</p>
<pre><code># Copyright (c) Microsoft Corporation.
# Licensed under the MIT License.
# this yaml file serves as a configuration template for the graphrag indexing jobs
# some values are hardcoded while others denoted by PLACEHOLDER will be dynamically set
input:
  type: blob
  file_type: text
  file_pattern: .*\.txt$
  storage_account_blob_url: $STORAGE_ACCOUNT_BLOB_URL
  connection_string: $STORAGE_CONNECTION_STRING
  container_name: PLACEHOLDER
  base_dir: .

storage:
  type: blob
  storage_account_blob_url: $STORAGE_ACCOUNT_BLOB_URL
  connection_string: $STORAGE_CONNECTION_STRING
  container_name: PLACEHOLDER
  base_dir: output

reporting:
  type: blob
  storage_account_blob_url: $STORAGE_ACCOUNT_BLOB_URL
  connection_string: $STORAGE_CONNECTION_STRING
  container_name: PLACEHOLDER
  base_dir: logs

cache:
  type: blob
  storage_account_blob_url: $STORAGE_ACCOUNT_BLOB_URL
  connection_string: $STORAGE_CONNECTION_STRING
  container_name: PLACEHOLDER
  base_dir: cache

llm:
  type: azure_openai_chat
  api_base: $GRAPHRAG_API_BASE
  api_version: $GRAPHRAG_API_VERSION
  model: $GRAPHRAG_LLM_MODEL
  deployment_name: $GRAPHRAG_LLM_DEPLOYMENT_NAME
  cognitive_services_endpoint: $GRAPHRAG_COGNITIVE_SERVICES_ENDPOINT
  api_key: $OPENAI_API_KEY
  model_supports_json: True
  tokens_per_minute: 80000
  requests_per_minute: 480
  thread_count: 50
  concurrent_requests: 25

parallelization:
  stagger: 0.25
  num_threads: 10

async_mode: threaded

embeddings:
  async_mode: threaded
  llm:
    type: azure_openai_embedding
    api_base: $GRAPHRAG_API_BASE
    api_version: $GRAPHRAG_API_VERSION
    batch_size: 16
    model: $GRAPHRAG_EMBEDDING_MODEL
    deployment_name: $GRAPHRAG_EMBEDDING_DEPLOYMENT_NAME
    cognitive_services_endpoint: $GRAPHRAG_COGNITIVE_SERVICES_ENDPOINT
    api_key: $OPENAI_API_KEY
    tokens_per_minute: 350000
    concurrent_requests: 25
    requests_per_minute: 2100
    thread_count: 50
    max_retries: 50
  parallelization:
    stagger: 0.25
    num_threads: 10
  vector_store:
    type: azure_ai_search
    collection_name: PLACEHOLDER
    title_column: name
    overwrite: True
    url: $AI_SEARCH_URL
    audience: $AI_SEARCH_AUDIENCE
    api_key: $AI_SEARCH_SERVICE_KEY

entity_extraction:
  prompt: PLACEHOLDER

community_reports:
  prompt: PLACEHOLDER

summarize_descriptions:
  prompt: PLACEHOLDER

# claim extraction is disabled by default in the graphrag library so we enable it for the solution accelerator
claim_extraction:
  enabled: True

snapshots:
  graphml: True
</code></pre>
<p>The permissions are already granted for that AI Search URL.
Should I include any more details here?</p>
|
<python><kubernetes><large-language-model><graphrag><ms-graphrag>
|
2024-11-02 03:32:19
| 0
| 331
|
Luiy_coder
|
79,149,825
| 2,449,857
|
Enumerated dataclass types: subclass or custom constructors?
|
<p>I often find myself making dataclasses of enumerated type - for instance an <code>Action</code> dataclass that has a few parameters and can be one of several types.</p>
<p>I can see two ways to code this. First is to encapsulate everything into one class and provide custom constructors for convenience:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import Enum, auto


@dataclass
class Action:
    class Type(Enum):
        INVALID = 0
        GET_UP = auto()
        GET_DOWN = auto()
        JUMP_AROUND = auto()

    user: str
    x: int
    y: int
    type: Type = Type.INVALID

    @classmethod
    def get_up(cls, user, x, y):
        return cls(user, x, y, cls.Type.GET_UP)

    @classmethod
    def get_down(cls, user, x, y):
        return cls(user, x, y, cls.Type.GET_DOWN)

    @classmethod
    def jump_around(cls, user, x, y):
        return cls(user, x, y, cls.Type.JUMP_AROUND)
</code></pre>
<p>Usage:</p>
<pre class="lang-py prettyprint-override"><code>from pakidge.action import Action
act = Action.jump_around("muggs", 92, 5)
assert act.type == Action.Type.JUMP_AROUND
assert Action.get_up("asdf", 1, 2) == Action("asdf", 1, 2, Action.Type.GET_UP)
</code></pre>
<p>The other is to encapsulate at the module level and create multiple subclasses:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import asdict, dataclass
from enum import Enum, auto


class Type(Enum):
    INVALID = 0
    GET_UP = auto()
    GET_DOWN = auto()
    JUMP_AROUND = auto()


@dataclass
class Action:
    user: str
    x: int
    y: int
    type: Type = Type.INVALID

    def __eq__(self, other):
        """Permit comparison by value between base and subclass"""
        if not isinstance(self, other.__class__):
            return False
        return asdict(self) == asdict(other)


class GetUp(Action):
    def __init__(self, user: str, x: int, y: int):
        super().__init__(user, x, y, Type.GET_UP)


class GetDown(Action):
    def __init__(self, user: str, x: int, y: int):
        super().__init__(user, x, y, Type.GET_DOWN)


class JumpAround(Action):
    def __init__(self, user: str, x: int, y: int):
        super().__init__(user, x, y, Type.JUMP_AROUND)
</code></pre>
<p>Usage:</p>
<pre><code>from pakidge import action
act = action.JumpAround("muggs", 92, 5)
assert act.type == action.Type.JUMP_AROUND
assert action.GetUp("asdf", 1, 2) == action.Action("asdf", 1, 2, action.Type.GET_UP)
</code></pre>
<p>Is there a reason to prefer one of these to the other - or another, better way to provide this kind of behaviour?</p>
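A third option worth weighing (a sketch using only the standard library; `functools.partial` is a stand-in suggestion, not something from the snippets above) is to keep a single dataclass and generate the convenience constructors as partials, avoiding both the classmethod boilerplate and the subclass `__eq__` workaround:

```python
from dataclasses import dataclass
from enum import Enum, auto
from functools import partial


class Type(Enum):
    INVALID = 0
    GET_UP = auto()
    GET_DOWN = auto()
    JUMP_AROUND = auto()


@dataclass
class Action:
    user: str
    x: int
    y: int
    type: Type = Type.INVALID


# Factories are plain callables; equality stays the normal dataclass one.
get_up = partial(Action, type=Type.GET_UP)
get_down = partial(Action, type=Type.GET_DOWN)
jump_around = partial(Action, type=Type.JUMP_AROUND)

act = jump_around("muggs", 92, 5)
assert act == Action("muggs", 92, 5, Type.JUMP_AROUND)
```

Since every factory returns a plain `Action`, there is only one type to pattern-match on and no subclass hierarchy to maintain.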
|
<python><python-3.x><python-dataclasses>
|
2024-11-02 02:15:44
| 1
| 3,489
|
Jack Deeth
|
79,149,763
| 1,393,162
|
Python 3.13 REPL with vim or emacs key bindings?
|
<p>I just upgraded to Python 3.13 and found that the vim key bindings that I had set up via readline and ~/.editrc, which worked in previous releases of the Python REPL, no longer work. Is there some way to get vim (or emacs) key bindings to work in the new REPL?</p>
|
<python><read-eval-print-loop><python-interactive><python-3.13>
|
2024-11-02 01:00:18
| 1
| 5,120
|
Ben Kovitz
|
79,149,745
| 16,611,809
|
Use pl.when to create a list with same number of elements, but different content
|
<p>I asked a <a href="https://stackoverflow.com/questions/79138384/use-np-where-to-create-a-list-with-same-number-of-elements-but-different-conten">similar question for Pandas</a> already, but as I mentioned elsewhere, I am switching to Polars. So, now I need a solution for Polars doing the same thing:</p>
<p>I have a Polars DataFrame where a value sometimes gets NA (for Polars it's null (=None), if I got that correct). I want to fill this column with a list of strings with the same length as another column:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"a": ["one", "two"],
"b": ["three", "four"],
"c": [[1, 2], [3, 4]],
"d": [[5, 6], None]})
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>one</td>
<td>three</td>
<td>[1, 2]</td>
<td>[5, 6]</td>
</tr>
<tr>
<td>two</td>
<td>four</td>
<td>[3, 4]</td>
<td>NaN</td>
</tr>
</tbody>
</table></div>
<p>and I want this to become</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>c</th>
<th>d</th>
</tr>
</thead>
<tbody>
<tr>
<td>one</td>
<td>three</td>
<td>[1, 2]</td>
<td>[5, 6]</td>
</tr>
<tr>
<td>two</td>
<td>four</td>
<td>[3, 4]</td>
<td>[no_value, no_value]</td>
</tr>
</tbody>
</table></div>
<p>I tried</p>
<pre><code>df = df.with_columns(d = pl.when(pl.col('d').is_null())
# .then(pl.Series([['no_value'] * len(lst) for lst in pl.col('c')])) # 'Expr' object is not iterable
# .then(pl.Series([['no_value'] * pl.col('c').list.len()])) # failed to determine supertype of object and list[i64]
# .then([['no_value'] * pl.col('c').list.len()]) # not yet implemented: Nested object types
.otherwise(pl.col('d')))
</code></pre>
|
<python><python-polars>
|
2024-11-02 00:37:32
| 2
| 627
|
gernophil
|
79,149,716
| 2,213,289
|
Pandas memory_usage inconsistent for in-line numpy
|
<p>Could someone help explain why there's a difference of results here?</p>
<p>In particular, the memory usage outputted after serialization/deserialization is dramatically different.</p>
<p>The only clue I have is that <code>df["data"][0].flags</code> reports <code>OWNDATA</code> differently between the two.</p>
<pre><code>import pandas as pd
import numpy as np
import pickle
df = pd.DataFrame({
"data": [np.random.randint(size=1024, low=0, high=100, dtype=np.int8) for _ in range(1_000_000)]
})
print(df["data"].size, df["data"].dtype, df.memory_usage(index=True, deep=True).sum())
# 1000000 object 1144000132
df2 = pickle.loads(pickle.dumps(df))
print(df2["data"].size, df2["data"].dtype, df2.memory_usage(index=True, deep=True).sum())
# 1000000 object 120000132
print(np.array_equal(df["data"][0], df2["data"][0]))
</code></pre>
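The `OWNDATA` clue is likely the whole story: `memory_usage(deep=True)` on an object column sums `sys.getsizeof` per element, and `getsizeof` of an ndarray only includes the data buffer when the array owns it. A small sketch of that effect (the claim that the unpickled elements end up as non-owning views is an inference from the question's own `OWNDATA` observation, not something verified against pandas internals):

```python
import sys

import numpy as np

owner = np.zeros(1024, dtype=np.int8)   # owns its 1024-byte buffer
view = owner[:]                         # a view into the same buffer

print(owner.flags['OWNDATA'], view.flags['OWNDATA'])  # True False
print(sys.getsizeof(owner) > 1024)   # True: the buffer is counted
print(sys.getsizeof(view) < 1024)    # True: only the ndarray header is counted
```

So both DataFrames hold the same data; the "lost" gigabyte is simply no longer attributed to the individual elements by `getsizeof`.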
|
<python><pandas><dataframe><numpy>
|
2024-11-02 00:12:40
| 1
| 2,055
|
richliaw
|
79,149,626
| 1,660,508
|
How to fold python code using sed and/or awk?
|
<p>I have a large python code base. I want to get a feel for how a <code>BaseClass</code> is subclassed by 'grepping out' the name of each subclass and the functions in it, but only for classes that inherit from <code>BaseClass</code>.</p>
<p>So if we have multiple files that have multiple classes, and some of the classes look like:</p>
<pre><code>class SubClass_A(BaseClass):
    def foo(self):
        ...
    async def goo(self):
        ...

class AnotherClass:
    ...

class SubClass_B(BaseClass):
    def foo(self):
        ...
    async def goo(self):
        ...
</code></pre>
<p>If we apply some sort of script to this code, it would output:</p>
<pre><code>File: /path/to/file
class SubClass_A(BaseClass):
    def foo(self):
    async def goo(self):
class SubClass_B(BaseClass):
    def foo(self):
    async def goo(self):
</code></pre>
<p>There can be many classes in the same file, and some of them may inherit from BaseClass. Essentially, this is like folding lines of code in an IDE.</p>
<p>Now, I tried using sed to do this, but I'm not an expert in it, and sed won't be able to print the file path. However, I know awk can print the file path, but I don't know awk! Argh.</p>
<p>I thought about writing a python program to do this, but this problem seems like something a nifty sed/awk program can do.</p>
<p>Thanks for the help.</p>
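A sketch of one awk approach (assuming top-level classes start in column 0 and methods are indented; the `BaseClass` name is hardcoded, and the sample file below is hypothetical content just to make the snippet self-contained):

```shell
# Build a small sample file to run against (hypothetical content).
cat > sample.py <<'EOF'
class SubClass_A(BaseClass):
    def foo(self):
        ...
    async def goo(self):
        ...

class AnotherClass:
    ...

class SubClass_B(BaseClass):
    def foo(self):
        ...
EOF

# Print the file name once per file, then every subclass of BaseClass
# together with its (async) def lines.
awk '
FNR == 1 { shown = 0 }
/^class .*\(BaseClass\):/ { if (!shown) { print "File: " FILENAME; shown = 1 }
                            inclass = 1; print; next }
/^class / { inclass = 0 }
inclass && /^[[:space:]]+(async )?def / { print }
' sample.py
```

Against a whole tree you would pass something like `$(find . -name '*.py')` instead of `sample.py`; `FILENAME` and the per-file `FNR == 1` reset are what sed lacks here.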
|
<python><awk><sed>
|
2024-11-01 23:00:26
| 3
| 1,618
|
Bitdiot
|
79,149,521
| 18,385,480
|
Python Cross table count Issue
|
<p>I'm trying to create a cross table in Python to count data based on specific bins for a data analysis task. My goal is to classify entries by distance and sample size categories based on thresholds, then count occurrences for each category combination. However, I'm encountering discrepancies between the expected and actual results.</p>
<p>I want to group my data by distance and store IDs, count occurrences in each bin, and generate a cross table that shows these counts. Below is the setup and code I'm using, along with an example dataset and the expected output.</p>
<p>Example dataset:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Sample data for demonstration
data = {
'distance': [15, 10, 5, 95, 50, 45, 120, 220, 240, 280, 300, 400, 800, 500, 600, 1000, 900, 700, 350, 150],
'store_id': [1, 2, 3, 1, 2, 3, 1, 1, 2, 3, 2, 3, 4, 4, 5, 5, 3, 2, 1, 4],
'campaign_transaction_id': list(range(1, 21))
}
merged_data = pd.DataFrame(data)
</code></pre>
<p>Here is the Python code I used:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
# Define bins for 'distance' and sample sizes
difference_bins = [0, 21, 101, 251, 100000000]
sample_bins = [1, 6, 11, 21, 51, 201]
# Create categorical columns based on bins
merged_data['distance_category'] = pd.cut(merged_data['distance'], bins=difference_bins, labels=['0-20', '21-100', '101-250', '>250'], right=True)
# Group by 'distance_category' and 'store_id' to calculate the count for each group
merged_data['store_id_count'] = merged_data.groupby(['distance_category', 'store_id'])['campaign_transaction_id'].transform('count')
# Create a new column 'sample_size_category' based on the 'store_id_count' and bins for sample sizes
merged_data['sample_size_category'] = pd.cut(merged_data['store_id_count'], bins=sample_bins, labels=['1-5', '6-10', '11-20', '21-50', '50-200'], right=True)
# Create the cross table without calculating percentages
cross_table_count = pd.crosstab(merged_data['sample_size_category'], merged_data['distance_category'], margins=True, margins_name='Total')
print("Cross Table counts:")
print(cross_table_count)
</code></pre>
<p>My expected cross table is here, I create it manually.</p>
<pre><code>| sample_size_category | 0-20 | 21-100 | 101-250 | >250 | Grand Total |
|----------------------|------|--------|---------|------|-------------|
| 1-5 | 3 | 3 | 5 | 4 | 15 |
| 6-10 | 2 | 1 | | | 3 |
| Grand Total | 5 | 4 | 5 | 4 | 18 |
</code></pre>
<p>But my output is,</p>
<pre><code>| sample_size_category | 101-250 | >250 | Total |
|----------------------|---------|------|-------|
| 1-5 | 2 | 9 | 11 |
| Total | 2 | 9 | 11 |
</code></pre>
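One likely culprit (an observation about the binning, not a full verification of the whole pipeline): with `right=True`, `pd.cut` builds half-open intervals like `(1, 6]`, so a `store_id_count` of exactly 1 falls outside the first bin, becomes `NaN`, and those rows silently drop out of the crosstab. A minimal sketch:

```python
import pandas as pd

counts = pd.Series([1, 3, 6, 7])

# With right=True, bins=[1, 6, 11] form the intervals (1, 6] and (6, 11],
# so a count of exactly 1 falls outside every bin and becomes NaN.
cut_right = pd.cut(counts, bins=[1, 6, 11], labels=['1-5', '6-10'], right=True)
print(cut_right.isna().tolist())   # [True, False, False, False]

# Starting the bins at 0 keeps count == 1 inside the first interval (0, 5].
cut_fixed = pd.cut(counts, bins=[0, 5, 10], labels=['1-5', '6-10'], right=True)
print(cut_fixed.tolist())          # ['1-5', '1-5', '6-10', '6-10']
```

Changing `sample_bins` to start at 0 (e.g. `[0, 5, 10, 20, 50, 200]`) should bring the single-occurrence groups back into the table.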
|
<python><group-by><pivot><pivot-table><bin>
|
2024-11-01 22:01:31
| 0
| 723
|
bsraskr
|
79,149,503
| 5,025,009
|
Recursive one-step forecasting in timeseries model
|
<p>I am trying to implement a <strong>recursive one-step forecasting approach</strong> for a Random Forest model.</p>
<p>The idea is to get a 12-months forecast in an iterative way where each prediction becomes part of the history before the next prediction.</p>
<p>Given that I do not retrain the model and that the error of the model compounds on each recursive prediction step, I would expect the plot of error to be increasing steadily. However, there is no such trend.</p>
<p>I am wondering if there is any issue in the implementation. I want to test the predictive power of the model in a real-world scenario where we iteratively predict next values (e.g. if I want to have a 12 months forecast).</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
# Set random seed for reproducibility
np.random.seed(1)
# Create date range
date_rng = pd.date_range(start='2018-01-01', end='2024-12-31', freq='MS')
n_points = len(date_rng)
# Trend component (linear growth)
trend = np.linspace(100, 200, n_points)
seasonal = 20 * np.sin(2 * np.pi * np.arange(n_points) / 12)
cyclical = 30 * np.sin(2 * np.pi * np.arange(n_points) / (12 * 3))
noise = np.random.normal(0, 10, n_points)
values = trend + seasonal + cyclical + noise
df = pd.DataFrame({
'date': date_rng,
'value': values
})
# Format the data
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['value'] = df['value'].round(2)
# build temporal features
df.index = pd.to_datetime(df.date)
df['quarter'] = df.index.quarter
df['month'] = df.index.month
df['year'] = df.index.year
df['sin_month'] = np.sin(2 * np.pi * df.index.month / 12)
df['cos_month'] = np.cos(2 * np.pi * df.index.month / 12)
# Build lags
num_lags = 12
lagged_columns = {f'lag_{lag}': df['value'].shift(lag) for lag in range(1, num_lags + 1)}
df = df.assign(**lagged_columns)
df.dropna(inplace=True)
# Simple Moving Average (SMA)
sma_window_size = 12
df[f'SMA_{sma_window_size}'] = df['value'].rolling(window=sma_window_size).mean()
# Exponential Moving Average (EMA)
ema_span_size = 9
df[f'EMA_{ema_span_size}'] = df['value'].ewm(span=ema_span_size).mean()
# split data and fit model
split_point = '2023-12-31'
# Split into train and test
train = df.query('date <= @split_point')
test = df.query('date > @split_point')
# Create train features and target
X_train = train.drop(columns=['value', 'date'])
y_train = train['value']
# Create test features and target
X_test = test.drop(columns=['value', 'date'])
y_test = test['value']
rf_model = RandomForestRegressor(n_estimators=100,
max_depth=10,
random_state=0)
rf_model.fit(X_train, y_train)
# Number of forecast steps
num_forecasts = 12
all_forecasts = {}
# get last train sample
X_last = train.iloc[[-1]].drop(columns=[col for col in train.columns.tolist() if col not in X_train.columns.tolist()]) # Drop identifiers that do not exist in X_train
forecasts = []
for step in range(num_forecasts):
    # Make prediction
    y_pred = rf_model.predict(X_last.values.reshape(1, -1))[0]
    forecasts.append(y_pred)

    # Update lags
    X_last_series = pd.Series(X_last.iloc[0])
    lags = X_last_series.filter(like='lag_').copy().shift(1, axis=0)
    lags['lag_1'] = y_pred
    X_last_series.update(lags)

    # Update seasonal or static features if necessary
    if 'month' in X_last_series.index and 'quarter' in X_last_series.index:
        # Month and Quarter
        current_month = X_last_series['month'] % 12 + 1
        X_last_series['month'] = current_month
        X_last_series['quarter'] = (current_month - 1) // 3 + 1

        # Year (increment if month wraps around to January)
        if current_month == 1:
            X_last_series['year'] += 1

        # Sinusoidal transformations for cyclical features (month)
        X_last_series['sin_month'] = np.sin(2 * np.pi * current_month / 12)
        X_last_series['cos_month'] = np.cos(2 * np.pi * current_month / 12)

    # Update rolling features
    if f'SMA_{sma_window_size}' in X_last_series.index:
        past_values = X_last_series.filter(like='lag_').values[:sma_window_size - 1]
        X_last_series[f'SMA_{sma_window_size}'] = np.mean(np.append(past_values, y_pred))

    if f'EMA_{ema_span_size}' in X_last_series.index:
        ema_alpha = 2 / (ema_span_size + 1)
        X_last_series[f'EMA_{ema_span_size}'] = ema_alpha * y_pred + (1 - ema_alpha) * X_last_series[f'EMA_{ema_span_size}']

    # Update X_last for the next forecast step
    X_last = X_last_series.to_frame().T

all_forecasts['results'] = {
    'forecasts': forecasts,
    'actuals': y_test
}

def calculate_mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

steps_mape = []
for i in range(num_forecasts):
    steps_mape.append(calculate_mape(y_test[i], forecasts[i]))

plt.plot(steps_mape)
</code></pre>
<p><a href="https://i.sstatic.net/KFh5Z9Gy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KFh5Z9Gy.png" alt="enter image description here" /></a></p>
<p><strong>EDIT:</strong> I thought to look at the error from another point of view. Instead of calculating the MAPE at each step, I now estimate the cumulative MAPE over steps (e.g., the MAPE at step 3 is the MAPE of the first three forecasts).</p>
<p>Like a running average up to each step.</p>
<p>Now things look more as expected; however, I am wondering whether this approach is sound.</p>
<pre><code>all_cumulative_mapes = []

# Loop over time series
for series_id, forecast_data in all_forecasts.items():
    actuals = forecast_data['actuals']
    forecasts = forecast_data['forecasts']
    cumulative_mape = []
    cumulative_error_sum = 0

    # Calculate cumulative MAPE over forecast steps
    for i in range(num_forecasts):
        # Calculate APE for current step
        ape = np.abs((actuals[i] - forecasts[i]) / actuals[i]) * 100

        # Update cumulative error sum and cumulative MAPE
        cumulative_error_sum += ape
        cumulative_mape_value = cumulative_error_sum / (i + 1)
        cumulative_mape.append(cumulative_mape_value)

    all_cumulative_mapes.append(cumulative_mape)

all_cumulative_mapes = np.array(all_cumulative_mapes)
plt.plot(np.mean(all_cumulative_mapes, axis=0), '-o', label="Average Cumulative MAPE")
plt.title(f"Average (across timeseries) Cumulative MAPE% over Forecast Steps")
plt.xlabel("Forecast Step")
plt.ylabel("Cumulative MAPE (%)")
<p><a href="https://i.sstatic.net/M6rWjvop.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M6rWjvop.png" alt="enter image description here" /></a></p>
|
<python><scikit-learn><time-series><random-forest><forecasting>
|
2024-11-01 21:53:16
| 0
| 33,417
|
seralouk
|
79,149,485
| 417,896
|
Can't install tensortrade on M2 mac
|
<p>I am trying to install TensorTrade on an M2 Mac. It fails because it can't install tensorflow 2.9.3.</p>
<pre><code> - Installing tensorflow (2.9.3): Failed
</code></pre>
<p>I have already installed:</p>
<pre><code>tensorflow-macos = "2.9.2"
tensorflow-metal = "0.5.0"
</code></pre>
<p>Is there a prior version I can use?</p>
|
<python><tensorflow>
|
2024-11-01 21:42:06
| 1
| 17,480
|
BAR
|
79,149,425
| 19,500,571
|
Plotly: How to make subplots with multiple traces
|
<p>I am plotting two figures in Plotly using this code:</p>
<pre><code>d = {'1': pd.DataFrame({'x': [1,2,3], 'y': [2,4,6]}), '2': pd.DataFrame({'x': [2,4,6], 'y': [4,6,8]})}
def p1(df, n):
    x = df.x.tolist()
    y = df.y.tolist()
    y_upper = (2*df.y).tolist()
    y_lower = (0.5*df.y).tolist()
    fig = go.Figure([
        go.Scatter(
            x=x,
            y=y,
            mode='lines',
            name=n,
            showlegend=True
        ),
        go.Scatter(
            x=x+x[::-1],
            y=y_upper+y_lower[::-1],
            fill='toself',
            line=dict(color='rgba(255,255,255,0)'),
            hoverinfo='skip',
            showlegend=False
        )
    ])
    return fig

def p2():
    fig_data = tuple(p1(df, n).data for n, df in d.items())
    fig = go.Figure(sum(zip(*fig_data), ()))
    fig.update_layout(xaxis=dict(range=[0, 5],
                                 showgrid=True),
                      yaxis=dict(showgrid=True))
    return fig

def p3():
    fig = go.Figure()
    for (n, df) in d.items():
        y = df.iloc[1]
        fig.add_trace(go.Bar(
            x=['x', 'y', 'z'],
            y=y,
            name=n,
            text=y,
            textposition='outside'
        ))
    fig.update_layout(bargroupgap=0.1, barmode='group')
    return fig
</code></pre>
<p>I want to plot these two figures in subplots, and I follow the <a href="https://plotly.com/python/subplots/" rel="nofollow noreferrer">example</a> on the official Plotly site:</p>
<pre><code>from plotly.subplots import make_subplots
import plotly.graph_objects as go
fig = make_subplots(rows=1, cols=2)
fig.add_trace(
p3().data[0],
row=1, col=1
)
fig.add_trace(
p2().data[0],
row=1, col=2
)
fig.update_layout(height=600, width=800, title_text="Side By Side Subplots")
fig.show()
</code></pre>
<p>This doesn't work through as my figures have more than just the <code>data[0]</code>-element. Is there a way to include all <code>data</code>-elements of the two plots in the subplot?</p>
|
<python><plot><plotly><subplot>
|
2024-11-01 21:04:51
| 1
| 469
|
TylerD
|
79,149,293
| 4,516,027
|
Calling Numba cfunc from njitted function with numpy array argument
|
<p>I'm trying to call a <code>cfunc</code>tion inside an <code>njit</code>ted function, but Numba arrays do not have a <code>data_as()</code> method for casting to a double pointer. Could anyone help me figure out how to make this work?</p>
<pre class="lang-py prettyprint-override"><code>import ctypes
import numpy as np
import numba as nb
@nb.cfunc(nb.types.void(
nb.types.CPointer(nb.types.double),
nb.types.CPointer(nb.types.double),
nb.types.int64,
nb.types.int64,
nb.types.int64,
nb.types.int64,
))
def get_param2(xn_, x_, idx, n, m1, m2):
    in_array = nb.carray(x_, (n, m1, m2))
    out_array = nb.carray(xn_, (m1, m2))
    if idx >= n:
        idx = n - 1
    out_array[:, :] = in_array[idx]


def test_get_param():  # this one works
    A = np.zeros((100, 2, 3))
    Ai = np.empty((2, 3))
    get_param2(
        Ai.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        A.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        40,
        *A.shape,
    )
    assert np.array_equal(A[40], Ai)


@nb.jit(nopython=True)
def get_param_njit(A, i):
    # this one fails with the error:
    # `Unknown attribute 'data_as' of type ArrayCTypes(dtype=float64, ndim=2)`
    Ai = np.empty((2, 3))
    get_param2(
        Ai.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        A.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
        i,
        *A.shape)
    return Ai


def test_get_param_njit():
    A = np.zeros((100, 2, 3))
    Ai = get_param_njit(A, 40)
    assert np.array_equal(A[40], Ai)
</code></pre>
|
<python><numba>
|
2024-11-01 20:07:16
| 1
| 5,803
|
kesh
|
79,149,274
| 4,299,527
|
Why Google Translator API is not outputting accurate translation
|
<p>I am trying to use the Google Translate API for translating non-English words (<strong>transliterated words</strong>) into English words. For example, the word "Xingbie" is a Chinese transliterated word which means "Gender" in English.</p>
<p><a href="https://i.sstatic.net/p0K9Hgfg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p0K9Hgfg.png" alt="enter image description here" /></a></p>
<p>Now, when I try to use the API, I do not get back the translated text.</p>
<pre><code>from google.cloud import translate_v2 as translate
# Initialize the Translation client
translate_client = translate.Client()
# Function to detect the language of a given text
def detect_language(text):
result = translate_client.detect_language(text)
return result['language'], result['confidence']
# Function to translate text
def translate_text(text, target_language):
translation = translate_client.translate(text, target_language=target_language)
return translation['translatedText']
# Example usage
text_to_detect = "Xingbie" # This is a transliterated Chinese word
# Detect language
language, confidence = detect_language(text_to_detect)
print(f"Detected language: {language} (Confidence: {confidence:.2f})")
# Translate to English
translated_text = translate_text(text_to_detect, 'en')
print(f"Translated text: {translated_text}")
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>Detected language: zh-Latn (Confidence: 0.94)
Translated text: Xingbie
</code></pre>
<p>The translated text should be "Gender" which is exactly from the <a href="https://translate.google.com/?sl=auto&tl=en&text=Xingbie&op=translate" rel="nofollow noreferrer">browser</a> I can see. However, the API returns the same string. What am I missing here?</p>
|
<python><google-translate><google-translation-api>
|
2024-11-01 20:00:15
| 1
| 12,152
|
Setu Kumar Basak
|
79,149,172
| 8,092,340
|
Python replace period if it's the only value in column
|
<p>How do I replace '.' in a dataframe if it's the only value in a cell without also replacing it if it's part of a decimal number?</p>
<p>Here's the dataframe</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.2222</td>
</tr>
<tr>
<td>.</td>
</tr>
<tr>
<td>.</td>
</tr>
<tr>
<td>3.2</td>
</tr>
<tr>
<td>1.0</td>
</tr>
</tbody>
</table></div>
<p>I tried this but it removes all decimals</p>
<pre><code>df = pd.DataFrame({'id':['2.2222','.','.','3.2','1.0']})
df['id'] = df['id'].str.replace('.','',regex=False)
</code></pre>
<p>Unwanted result:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>22222</td>
</tr>
<tr>
<td></td>
</tr>
<tr>
<td></td>
</tr>
<tr>
<td>32</td>
</tr>
<tr>
<td>10</td>
</tr>
</tbody>
</table></div>
<p>This is the desired result:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>id</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.2222</td>
</tr>
<tr>
<td></td>
</tr>
<tr>
<td></td>
</tr>
<tr>
<td>3.2</td>
</tr>
<tr>
<td>1.0</td>
</tr>
</tbody>
</table></div>
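For what it's worth, a sketch of one way this can be done: `Series.replace` (as opposed to `Series.str.replace`) matches the whole cell value rather than substrings, so a lone `.` can be replaced without touching decimals; a boolean mask works just as well:

```python
import pandas as pd

df = pd.DataFrame({'id': ['2.2222', '.', '.', '3.2', '1.0']})

# Series.replace matches entire values, not substrings
df['id'] = df['id'].replace('.', '')

# Equivalent mask-based alternative:
# df.loc[df['id'] == '.', 'id'] = ''

print(df['id'].tolist())  # ['2.2222', '', '', '3.2', '1.0']
```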
|
<python>
|
2024-11-01 19:21:58
| 1
| 903
|
Dread
|
79,149,147
| 2,383,070
|
Prepend and append characters to all string columns in Polars dataframe
|
<p>For a DataFrame, I want to prepend and append all the strings with extra characters.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import polars.selectors as cs
df = pl.DataFrame(
{
"A": [1, 2, 3],
"B": ["apple", "orange", "grape"],
"C": ["a", "b", "c"],
}
)
</code></pre>
<p>To append the strings I can do this</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(cs.string() + "||")
</code></pre>
<pre><code>shape: (3, 3)
βββββββ¬βββββββββββ¬ββββββ
β A β B β C β
β --- β --- β --- β
β i64 β str β str β
βββββββͺβββββββββββͺββββββ‘
β 1 β apple|| β a|| β
β 2 β orange|| β b|| β
β 3 β grape|| β c|| β
βββββββ΄βββββββββββ΄ββββββ
</code></pre>
<p>But when I try the same to prepend, I get an error</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns("||" + cs.string() + "||")
</code></pre>
<blockquote>
<p>ComputeError: the name 'literal' passed to <code>LazyFrame.with_columns</code> is duplicate</p>
<p>It's possible that multiple expressions are returning the same default column name. If this is the case, try renaming the columns with <code>.alias("new_name")</code> to avoid duplicate column names.</p>
</blockquote>
|
<python><python-polars>
|
2024-11-01 19:03:59
| 3
| 3,511
|
blaylockbk
|
79,149,035
| 13,849,446
|
Ryu controller not forwarding packet after changes in mininet topology
|
<p>I had written a simple Mininet script to build a topology that worked well with my controller logic. The topology I had was as follows (h refers to a host, of to an OpenFlow switch):</p>
<pre><code>h1 <-> of1
h2 <-> of2
h3 <-> of2
h4 <-> of3
of1 <-> of2
of2 <-> of3
</code></pre>
<p>After that I made the following changes to the topology. Below are the image and the code for the Mininet network that no longer works with the same controller:
<a href="https://i.sstatic.net/19o8T8G3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19o8T8G3.png" alt="Mininet Network" /></a></p>
<pre><code>import os
from mininet.log import info
from mininet.cli import CLI
from mininet.link import TCLink
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
def setup_network():
net = Mininet(controller=RemoteController, switch=OVSSwitch, link=TCLink)
# Add Ryu controller
info('[+] Adding Ryu controller\n')
c0 = net.addController('c0', controller=RemoteController)
# Add switches, hosts, and links as before
info('[+] Adding OpenFlow switches\n')
of1 = net.addSwitch('of1')
of2 = net.addSwitch('of2')
of3 = net.addSwitch('of3')
of4 = net.addSwitch('of4')
of5 = net.addSwitch('of5')
# Adding hosts
info('[+] Adding hosts\n')
h1 = net.addHost('h1', ip='10.0.0.1')
h2 = net.addHost('h2', ip='10.0.0.2')
h3 = net.addHost('h3', ip='10.0.0.3')
h4 = net.addHost('h4', ip='10.0.0.4')
h5 = net.addHost('h5')
h6 = net.addHost('h6')
# Adding links based on the diagram (switches connected to each other and hosts)
info('[+] Adding links between switches, hosts, and edge servers\n')
# Links between Switches
net.addLink(of1, of2) # horizontal connection
net.addLink(of2, of3)
net.addLink(of4, of5)
net.addLink(of1, of4) # Vertical connection
net.addLink(of2, of4)
net.addLink(of3, of5)
# Link between Host & Switched
net.addLink(of1, h1)
net.addLink(of1, h2)
net.addLink(of2, h3)
net.addLink(of2, h4)
net.addLink(of3, h5)
net.addLink(of3, h6)
# Start the network
net.build()
c0.start() # Start the controller
net.start()
# Discover the topology
topology = topology_discovery(net)
CLI(net)
net.stop()
return net, topology
# --------------------------------------- TOPOLOGY DISCOVERY ------------------------------------------
def topology_discovery(net):
info('[*] Discovering topology\n')
topology = {
'switches': {},
'hosts': {},
'links': []
}
for switch in net.switches:
topology['switches'][switch.name] = switch
for host in net.hosts:
topology['hosts'][host.name] = host
for link in net.links:
topology['links'].append((link.intf1.node.name, link.intf2.node.name))
print("\n===================================================================================================")
print("[!] Discovered Topology:")
print("Switches:", topology['switches'].keys())
print("Hosts:", topology['hosts'].keys())
print("Links:", topology['links'])
print("===================================================================================================")
return topology
if __name__ == '__main__':
# os.system("mn -c")
setup_network()
</code></pre>
<p>Can anyone tell me why the controller is not forwarding packets, and what is wrong with my Mininet network?</p>
<h1>EDIT:</h1>
<p>During my research I found that I need to use STP. There is an STP library (stplib) for Ryu, but I don't know much about it, so any help would be much appreciated.</p>
|
<python><mininet><sdn><ryu>
|
2024-11-01 18:21:31
| 0
| 1,146
|
farhan jatt
|
79,148,924
| 11,590,208
|
How to Replace XML Special Characters From Text Within HTML Tags Using Python
|
<p>I am quite new to Python. I've been working on a web-scraping project that extracts data from various web pages, constructs a new HTML page using the data, and sends the page to a document management system.</p>
<p>The document management system has some XML-based parser for validating the HTML. It will reject it if XML special characters appear in text within HTML tags. For example:</p>
<pre class="lang-html prettyprint-override"><code><p>The price of apples & oranges in New York is > the price of apples and oranges in Chicago</p>
</code></pre>
<p>will get rejected because of the <code>&</code> and the <code>></code>.</p>
<p>I considered using <code>str.replace()</code> on the HTML doc before sending it, but it is not precise enough: I don't want to remove valid occurrences of characters like <code>&</code> and <code>></code>, such as when they form part of a tag or an attribute.</p>
<p>Could someone please suggest a way to replace the XML special characters with, for example, their English word equivalents (e.g. & -> and)?</p>
<p>Any help you can provide would be much appreciated</p>
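A stdlib sketch of one approach: walk the document with <code>html.parser.HTMLParser</code>, which already separates tags from text, and rewrite only the text nodes (the replacement words here are just an example mapping):

```python
from html.parser import HTMLParser

# Example mapping; adjust the word equivalents to taste.
REPLACEMENTS = {"&": "and", "<": "less than", ">": "greater than"}

class TextSanitizer(HTMLParser):
    """Rebuilds the document, replacing XML special characters
    only inside text nodes, never inside tags or attributes."""

    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.parts = []

    def handle_starttag(self, tag, attrs):
        self.parts.append(self.get_starttag_text())

    def handle_startendtag(self, tag, attrs):
        self.parts.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        self.parts.append(f"</{tag}>")

    def handle_data(self, data):
        for char, word in REPLACEMENTS.items():
            data = data.replace(char, word)
        self.parts.append(data)

def sanitize(html_doc):
    parser = TextSanitizer()
    parser.feed(html_doc)
    parser.close()
    return "".join(parser.parts)
```

Because only <code>handle_data</code> rewrites anything, tags and attributes pass through untouched; entity references in text are first decoded by <code>convert_charrefs</code> and then replaced as well.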
|
<python><html><xml>
|
2024-11-01 17:35:04
| 2
| 611
|
cjt
|
79,148,882
| 14,179,793
|
Python pexpect: Command only happens after subsequent commands are sent
|
<p><strong>Objective</strong>
I am trying to use <code>aws ecs execute-command</code> in conjunction with <code>pexpect.spawn()</code> to start an interactive bash session in a container running on an EC2 instance.</p>
<p><strong>Sample Code</strong></p>
<pre><code>import sys
import time
import boto3
import pexpect
import pexpect.expect
import pexpect.replwrap
def spawn_session(container, family, cluster, ecs_client):
print(f'Spawning Session: {container}, {family}, {cluster}')
rsp = ecs_client.list_tasks(cluster=cluster, family=family)
print(rsp)
task_arn = rsp.get('taskArns')[0]
local_argv = [
'aws', 'ecs', 'execute-command',
'--command', '/bin/bash',
'--interactive', '--task', task_arn, '--container', container,
'--cluster', cluster
]
ps = pexpect.spawn(' '.join(local_argv), timeout=900, logfile=sys.stdout.buffer)
ps.expect('ecs-execute-command.*# ', timeout=15)
return ps
def repl_session_example(container, family, cluster, ecs_client):
rsp = ecs_client.list_tasks(cluster=cluster, family=family)
print(rsp)
task_arn = rsp.get('taskArns')[0]
# bash = pexpect.replwrap.bash()
local_argv = [
'aws', 'ecs', 'execute-command',
'--command', '/bin/bash',
'--interactive', '--task', task_arn, '--container', container,
'--cluster', cluster
]
bash = pexpect.replwrap.REPLWrapper(' '.join(local_argv), '#', None)
return bash
def bash_session_example(container, family, cluster, ecs_client):
rsp = ecs_client.list_tasks(cluster=cluster, family=family)
print(rsp)
task_arn = rsp.get('taskArns')[0]
local_argv = [
'aws', 'ecs', 'execute-command',
'--command', '/bin/bash',
'--interactive', '--task', task_arn, '--container', container,
'--cluster', cluster
]
bash = pexpect.replwrap.bash()
bash.run_command(' '.join(local_argv))
return bash
if __name__ == '__main__':
ecs = boto3.client('ecs')
container = 'CONTAINER'
family = 'TASK_FAMILY'
cluster = 'CLUSTER_ARN'
# bash = repl_session_example(container, family, cluster, ecs)
# bash = bash_session_example(container, family, cluster, ecs)
bash = spawn_session(container, family, cluster, ecs)
for cmd in ['pwd', 'python --version', 'sleep 5', 'echo 2']:
# print(f'Sending: {cmd}')
bash.sendline(cmd)
bash.expect('# ')
# print(f'before: {bash.before}')
# print(f'after: {bash.after}')
# out = bash.run_command(cmd)
# print(out)
pass
</code></pre>
<p><strong>Results</strong>
The output seems out of order and I have to send additional input to actually get the output of the previous command.</p>
<pre><code># python temp.py
Spawning Session: CONTAINER, FAMILY, CLUSTER
...
The Session Manager plugin was installed successfully. Use the AWS CLI to start a session.
Starting session with SessionId: ecs-execute-command-<ID_STRING>
bash-4.2# pwd <- Sending pwd to active session
bash-4.2# python --version
pwd
/var/task/package <- Results of pwd after sending the python --version line
bash-4.2# sleep 5
python --version
Python 3.10.14               <- Version string for python finally printed
bash-4.2# echo 2
sleep 5
echo 2                       <- The 2 is not actually printed out but appears reprinted
</code></pre>
<p>It looks like <code>sendline()</code> is not actually executing the command until a second command is actually sent. As a workaround I could send the command twice but it seems like I am missing something here.</p>
|
<python><bash><aws-cli><pexpect>
|
2024-11-01 17:14:54
| 0
| 898
|
Cogito Ergo Sum
|
79,148,803
| 1,824,064
|
Docker compose launched through Python's subprocess.Popen is still responding to CTRL-C even though I've overridden the signal handler
|
<p>I'm trying to launch Docker Compose as a subprocess in Python with <code>subprocess.Popen</code>. I've overridden the signal handler for the parent script and the subprocess, but Compose is still responding to CTRL-C.</p>
<pre><code>def pre_exec():
signal.signal(signal.SIGINT, signal.SIG_IGN)
try:
# block SIGINT in the parent script
signal.signal(signal.SIGINT, signal.SIG_IGN)
process = subprocess.Popen(['docker', 'compose', 'up'], env=env_vars, preexec_fn=pre_exec)
process.wait()
except subprocess.CalledProcessError as e:
print(f"An error occurred running Compose: {e}")
except KeyboardInterrupt:
pass
</code></pre>
<p>I believe this code should do nothing if CTRL-C is pressed, but when I run it, Compose is still responding to the CTRL-C and shutting down.</p>
<p>I can tell the parent script is correctly ignoring it, because without the initial signal handler, the script exits to the shell while compose finishes shutting down. It seems like the compose subprocess is somehow ignoring the signal handler registration.</p>
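A possible explanation and sketch (an assumption about the cause, not a confirmed diagnosis): the terminal delivers SIGINT to the entire foreground process group, so the child receives the signal directly regardless of any inherited handler. Starting the child in its own session moves it out of that group:

```python
import os
import subprocess

def launch_detached(argv):
    """Start *argv* in its own session (and thus its own process
    group), so a terminal-generated SIGINT is not delivered to it."""
    return subprocess.Popen(argv, start_new_session=True)

# Hypothetical usage in the question's context:
# process = launch_detached(['docker', 'compose', 'up'])
# process.wait()
```

Note that detaching also means the child never sees Ctrl-C, so a deliberate shutdown has to be forwarded explicitly (e.g. with <code>process.terminate()</code>).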
|
<python><docker><subprocess>
|
2024-11-01 16:41:00
| 0
| 1,998
|
amnesia
|
79,148,755
| 1,642,076
|
In a Python plotly bar chart, can I toggle discrete x-axis items in addition to a legend?
|
<p>I have a plot like this that shows 'Metrics' across different 'Regions':</p>
<p><a href="https://i.sstatic.net/iVilmAIj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVilmAIj.png" alt="Plotly graph" /></a></p>
<p>I can filter regions by clicking on the legend, and I can switch to showing individual metrics using the dropdown menu.</p>
<p><strong>What I want:</strong> Instead of the dropdown, I want something that behaves more like checkboxes, to filter out or show specific combinations of metrics. In this way, both the regions and the metrics are interactively toggleable. Is this possible?</p>
<p>This is my code so far:</p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
import pandas as pd
# Example data
data = {
"Metric": ["Metric A", "Metric B", "Metric C", "Metric D", "Metric E"],
"Region 1": [5, 15, 25, 10, 20],
"Region 2": [10, 25, 5, 15, 30],
"Region 3": [20, 10, 15, 30, 5]
}
df = pd.DataFrame(data)
# Create an initial figure with all regions
fig = go.Figure()
regions = df.columns[1:]
for region in regions:
fig.add_trace(go.Bar(
x=df["Metric"],
y=df[region],
name=region
))
# Add dropdown menus to filter metrics
metric_options = [{"label": metric, "method": "update", "args": [{"x": [df["Metric"]], "y": [df[region] for region in regions]}]} for metric in df["Metric"]]
dropdown_buttons = [
{
"label": "All Metrics",
"method": "update",
"args": [{"x": [df["Metric"]], "y": [df[region] for region in regions]}]
}
]
for metric in df["Metric"]:
y_values = [df[region][df["Metric"] == metric].values[0] for region in regions]
dropdown_buttons.append({
"label": metric,
"method": "update",
"args": [{"x": [[metric] * len(regions)], "y": [y_values]}]
})
fig.update_layout(
updatemenus=[
{
"buttons": dropdown_buttons,
"direction": "down",
"showactive": True,
"x": 1.15,
"xanchor": "left",
"y": 1.2,
"yanchor": "top"
}
],
title="Interactive Bar Chart with Filters",
xaxis_title="Metrics",
yaxis_title="Values"
)
# Save as HTML
fig.write_html("interactive_bar_chart.html")
</code></pre>
|
<python><plot><graph><plotly><visualization>
|
2024-11-01 16:25:21
| 0
| 497
|
Ashley
|
79,148,736
| 7,773,898
|
Fetch Page Content from Common Crawl
|
<p>I have thousands of web pages from different websites. Is there a fast way to get the content of all those web pages using Common Crawl and Python?</p>
<p>Below is the code I am trying, but this process is slow.</p>
<pre><code>async def search_cc_index(url):
encoded_url = quote_plus(url)
index_url = f'{SERVER}{INDEX_NAME}-index?url={encoded_url}&output=json'
async with aiohttp.ClientSession() as session:
async with session.get(index_url) as response:
if response.status == 200:
records = (await response.text()).strip().split('\n')
return [json.loads(record) for record in records]
else:
return None
async def fetch_page_from_cc(records):
async with aiohttp.ClientSession() as session:
for record in records:
offset, length = int(record['offset']), int(record['length'])
s3_url = f'https://data.commoncrawl.org/{record["filename"]}'
byte_range = f'bytes={offset}-{offset + length - 1}'
async with session.get(s3_url, headers={'Range': byte_range}) as response:
if response.status == 206:
stream = ArchiveIterator(response.content)
for warc_record in stream:
if warc_record.rec_type == 'response':
return await warc_record.content_stream().read()
else:
return None
return None
async def fetch_individual_url(target_url):
records = await search_cc_index(target_url)
if records:
print(f"Found {len(records)} records for {target_url}")
content = await fetch_page_from_cc(records)
if content:
print(f"Successfully fetched content for {target_url}")
else:
print(f"No records found for {target_url}")
</code></pre>
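The code above fetches each URL's records sequentially; the usual speed-up (a sketch of the pattern, independent of aiohttp) is to fan the URLs out with <code>asyncio.gather</code> under a semaphore, so many requests are in flight at once:

```python
import asyncio

async def bounded(sem, fetch, url):
    # The semaphore caps how many fetches run concurrently.
    async with sem:
        return await fetch(url)

async def fetch_all(urls, fetch, limit=20):
    """Run fetch(url) for every url with at most `limit` in flight;
    results come back in the same order as `urls`."""
    sem = asyncio.Semaphore(limit)
    return await asyncio.gather(*(bounded(sem, fetch, u) for u in urls))
```

In the question's code, <code>fetch</code> would be a coroutine like <code>fetch_individual_url</code>, and the <code>aiohttp.ClientSession</code> should be created once and shared across all requests rather than per call.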
|
<python><python-3.x><common-crawl>
|
2024-11-01 16:17:51
| 1
| 383
|
ALTAF HUSSAIN
|
79,148,566
| 164,171
|
With Python, how to apply vector operations to a neighborhood in an n-D image?
|
<p>I have a 3D image with vector components (i.e., a mapping from R3 to R3). My goal is to replace each vector with the vector of maximum norm within its 3x3x3 neighborhood.</p>
<p>This task is proving to be unexpectedly challenging. I attempted to use scipy.ndimage.generic_filter, but despite its name, this filter only handles scalar inputs and outputs. I also briefly explored skimage and numpy's sliding_window_view, but neither seemed to provide a straightforward solution.</p>
<p>What would be the correct way to implement this?</p>
<p>Here's what I ended up writing. It's not very elegant and pretty slow, but should help understand what I'm trying to do.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
def max_norm_vector(data):
"""Return the vector with the maximum norm."""
data = data.reshape(-1, 3)
norms = np.linalg.norm(data, axis=-1)
idx_max = np.argmax(norms)
return data[idx_max]
if __name__ == '__main__':
# Load the image
range_ = np.linspace(-5, 5, 30)
x, y, z = np.meshgrid(range_, range_, range_, indexing='ij')
data = 1 - (x ** 2)
# Compute the gradient
grad = np.gradient(data)
grad = np.stack(grad, axis=-1) # Stack gradients along a new last axis
# grad = grad[:5, :5, :5, :] # Crop the gradient for testing
max_grad = np.zeros_like(grad)
for i in range(1,grad.shape[0]-1):
for j in range(1,grad.shape[1]-1):
            for k in range(1,grad.shape[2]-1):
max_grad[i, j, k] = max_norm_vector(grad[i-1:i+2, j-1:j+2, k-1:k+2,:])
# Visualization
fig = plt.figure(figsize=(12, 6))
# Plot original data
ax1 = fig.add_subplot(121, projection='3d')
ax1.scatter(x.ravel(), y.ravel(), z.ravel(), c=data.ravel(), cmap='viridis', alpha=0.5)
ax1.set_title('Original Data')
# Plot maximum gradient vectors
ax2 = fig.add_subplot(122, projection='3d')
# Downsample for better performance
step = 3
x_down = x[::step, ::step, ::step]
y_down = y[::step, ::step, ::step]
z_down = z[::step, ::step, ::step]
max_grad_down = max_grad[::step, ::step, ::step]
ax2.quiver(x_down.ravel(), y_down.ravel(), z_down.ravel(),
max_grad_down[:, :, :, 0].ravel(), max_grad_down[:, :, :, 1].ravel(), max_grad_down[:, :, :, 2].ravel(),
length=0.1, color='red', alpha=0.7)
ax2.set_title('Maximum Gradient Vectors')
plt.tight_layout()
plt.show()
</code></pre>
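For reference, a vectorized alternative to the triple loop, using numpy's <code>sliding_window_view</code> (borders are left at zero):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def max_norm_filter(grad):
    """Replace each interior vector by the max-norm vector of its
    3x3x3 neighborhood. grad has shape (X, Y, Z, 3); borders stay 0."""
    out = np.zeros_like(grad)
    # Window over the three spatial axes only; the vector axis rides along.
    # win shape: (X-2, Y-2, Z-2, 3, 3, 3, 3) -- vector axis, then offsets.
    win = sliding_window_view(grad, (3, 3, 3), axis=(0, 1, 2))
    v = win.reshape(*win.shape[:3], 3, 27)      # flatten the 27 neighbors
    norms = np.linalg.norm(v, axis=3)           # per-neighbor norms: (..., 27)
    idx = norms.argmax(axis=-1)                 # winning neighbor per voxel
    picked = np.take_along_axis(v, idx[..., None, None], axis=-1)[..., 0]
    out[1:-1, 1:-1, 1:-1] = picked
    return out
```

The reshape copies the (non-contiguous) window view, so memory use is roughly 27x the gradient array; for large volumes a chunked variant of the same idea keeps that bounded.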
|
<python><numpy><image-processing><scipy><scikit-image>
|
2024-11-01 15:30:10
| 2
| 56,902
|
static_rtti
|
79,148,429
| 1,931,605
|
pytorch : view size is not compatible with input tensor's size and stride
|
<p>I'm trying to train Faster R-CNN on a COCO-format dataset of my own images. The image size is 512x512.
I've tested the dataloader separately and it works: it prints the batch images and bounding-box details.
I've also tried printing the loss in the network, and it does print <code>batch_mean</code>; the error occurs right after that.</p>
<p>This is the code</p>
<pre><code>img_process = v2.Compose(
[
v2.ToTensor(),
v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
]
)
class SCocoDetection(datasets.CocoDetection):
def __init__(
self,
image_directory_path: str,
annotation_path : str,
train: bool = True,
image_processor = None
):
super().__init__(image_directory_path, annotation_path)
self.image_processor = image_processor
def __getitem__(self, idx):
image, annotations = super().__getitem__(idx)
images, targets = [], []
image_id = self.ids[idx]
for ann in annotations:
bbox = ann['bbox']
small = (bbox[2] * bbox[3]) <= (512 * 512 * 0.001)
bbox = torch.tensor(bbox).unsqueeze(0).float()
boxes = ops.box_convert(bbox, in_fmt='xywh', out_fmt='xyxy')
#boxes = None
#if not small:
# boxes = ops.box_convert(bbox, in_fmt='xywh', out_fmt='xyxy')
#else:
# boxes = bbox
boxes = boxes.float()
output_dict = self.image_processor({"image": image, "boxes": boxes})
images.append(output_dict['image'])
targets.append({
'boxes': output_dict['boxes'],
'labels': torch.ones(len(boxes), dtype=int)
})
return images, targets
TRAIN_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_train/images',
annotation_path='047/v2_coco_train/result.json',
image_processor=img_process,
train=True)
VAL_DATASET = SCocoDetection(
image_directory_path='047/v2_coco_test/images',
annotation_path= '047/v2_coco_test/result.json',
image_processor=img_process,
train=False)
print("Number of training examples:", len(TRAIN_DATASET))
print("Number of validation examples:", len(VAL_DATASET))
#print("Number of test examples:", len(TEST_DATASET))
def collate_fn(batch):
return tuple(zip(*batch))
TRAIN_DATALOADER = DataLoader(dataset=TRAIN_DATASET,collate_fn = collate_fn, batch_size=2, shuffle=True)
VAL_DATALOADER = DataLoader(dataset=VAL_DATASET,collate_fn = collate_fn, batch_size=4, shuffle=True)
</code></pre>
<pre><code>import numpy as np
class CocoDNN(L.LightningModule):
def __init__(self):
super().__init__()
self.model = models.detection.fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
def forward(self, images, targets=None):
return self.model(images, targets)
def training_step(self, batch, batch_idx):
imgs, annot = batch
batch_losses = []
for img_b, annot_b in zip(imgs, annot):
#print(len(img_b), len(annot_b))
loss_dict = self.model(img_b, annot_b)
losses = sum(loss for loss in loss_dict.values())
#print(losses)
batch_losses.append(losses)
batch_mean = torch.mean(torch.stack(batch_losses))
#print(batch_mean) --- **TRIED THIS AND IT PRINTS**
self.log('train_loss', batch_mean)
#print(imgs[0])
#print(' ----',annot)
#loss_dict = self.model(img_b, annot_b)
#losses = sum(loss for loss in loss_dict.values())
#self.log('train_loss', losses)
return batch_mean
def configure_optimizers(self):
return optim.SGD(self.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)
dnn = CocoDNN()
trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
</code></pre>
<p>Stack Trace</p>
<pre><code>
{
"name": "RuntimeError",
"message": "view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.",
"stack": "---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[192], line 3
1 dnn = CocoDNN()
2 trainer = L.Trainer(limit_train_batches=100, max_epochs=1)
----> 3 trainer.fit(model=dnn, train_dataloaders=TRAIN_DATALOADER)
File site-packages/lightning/pytorch/trainer/trainer.py:538, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
536 self.state.status = TrainerStatus.RUNNING
537 self.training = True
--> 538 call._call_and_handle_interrupt(
539 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
540 )
File site-packages/lightning/pytorch/trainer/call.py:47, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
45 if trainer.strategy.launcher is not None:
46 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 47 return trainer_fn(*args, **kwargs)
49 except _TunerExitException:
50 _call_teardown_hook(trainer)
File site-packages/lightning/pytorch/trainer/trainer.py:574, in Trainer._fit_impl(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path)
567 assert self.state.fn is not None
568 ckpt_path = self._checkpoint_connector._select_ckpt_path(
569 self.state.fn,
570 ckpt_path,
571 model_provided=True,
572 model_connected=self.lightning_module is not None,
573 )
--> 574 self._run(model, ckpt_path=ckpt_path)
576 assert self.state.stopped
577 self.training = False
File site-packages/lightning/pytorch/trainer/trainer.py:981, in Trainer._run(self, model, ckpt_path)
976 self._signal_connector.register_signal_handlers()
978 # ----------------------------
979 # RUN THE TRAINER
980 # ----------------------------
--> 981 results = self._run_stage()
983 # ----------------------------
984 # POST-Training CLEAN UP
985 # ----------------------------
986 log.debug(f\"{self.__class__.__name__}: trainer tearing down\")
File site-packages/lightning/pytorch/trainer/trainer.py:1025, in Trainer._run_stage(self)
1023 self._run_sanity_check()
1024 with torch.autograd.set_detect_anomaly(self._detect_anomaly):
-> 1025 self.fit_loop.run()
1026 return None
1027 raise RuntimeError(f\"Unexpected state {self.state}\")
File site-packages/lightning/pytorch/loops/fit_loop.py:205, in _FitLoop.run(self)
203 try:
204 self.on_advance_start()
--> 205 self.advance()
206 self.on_advance_end()
207 self._restarting = False
File site-packages/lightning/pytorch/loops/fit_loop.py:363, in _FitLoop.advance(self)
361 with self.trainer.profiler.profile(\"run_training_epoch\"):
362 assert self._data_fetcher is not None
--> 363 self.epoch_loop.run(self._data_fetcher)
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:140, in _TrainingEpochLoop.run(self, data_fetcher)
138 while not self.done:
139 try:
--> 140 self.advance(data_fetcher)
141 self.on_advance_end(data_fetcher)
142 self._restarting = False
File site-packages/lightning/pytorch/loops/training_epoch_loop.py:250, in _TrainingEpochLoop.advance(self, data_fetcher)
247 with trainer.profiler.profile(\"run_training_batch\"):
248 if trainer.lightning_module.automatic_optimization:
249 # in automatic optimization, there can only be one optimizer
--> 250 batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs)
251 else:
252 batch_output = self.manual_optimization.run(kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:190, in _AutomaticOptimization.run(self, optimizer, batch_idx, kwargs)
183 closure()
185 # ------------------------------
186 # BACKWARD PASS
187 # ------------------------------
188 # gradient update with accumulated gradients
189 else:
--> 190 self._optimizer_step(batch_idx, closure)
192 result = closure.consume_result()
193 if result.loss is None:
File site-packages/lightning/pytorch/loops/optimization/automatic.py:268, in _AutomaticOptimization._optimizer_step(self, batch_idx, train_step_and_backward_closure)
265 self.optim_progress.optimizer.step.increment_ready()
267 # model hook
--> 268 call._call_lightning_module_hook(
269 trainer,
270 \"optimizer_step\",
271 trainer.current_epoch,
272 batch_idx,
273 optimizer,
274 train_step_and_backward_closure,
275 )
277 if not should_accumulate:
278 self.optim_progress.optimizer.step.increment_completed()
File site-packages/lightning/pytorch/trainer/call.py:167, in _call_lightning_module_hook(trainer, hook_name, pl_module, *args, **kwargs)
164 pl_module._current_fx_name = hook_name
166 with trainer.profiler.profile(f\"[LightningModule]{pl_module.__class__.__name__}.{hook_name}\"):
--> 167 output = fn(*args, **kwargs)
169 # restore current_fx when nested context
170 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/core/module.py:1306, in LightningModule.optimizer_step(self, epoch, batch_idx, optimizer, optimizer_closure)
1275 def optimizer_step(
1276 self,
1277 epoch: int,
(...)
1280 optimizer_closure: Optional[Callable[[], Any]] = None,
1281 ) -> None:
1282 r\"\"\"Override this method to adjust the default way the :class:`~lightning.pytorch.trainer.trainer.Trainer` calls
1283 the optimizer.
1284
(...)
1304
1305 \"\"\"
-> 1306 optimizer.step(closure=optimizer_closure)
File site-packages/lightning/pytorch/core/optimizer.py:153, in LightningOptimizer.step(self, closure, **kwargs)
150 raise MisconfigurationException(\"When `optimizer.step(closure)` is called, the closure should be callable\")
152 assert self._strategy is not None
--> 153 step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
155 self._on_after_step()
157 return step_output
File site-packages/lightning/pytorch/strategies/strategy.py:238, in Strategy.optimizer_step(self, optimizer, closure, model, **kwargs)
236 # TODO(fabric): remove assertion once strategy's optimizer_step typing is fixed
237 assert isinstance(model, pl.LightningModule)
--> 238 return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File site-packages/lightning/pytorch/plugins/precision/precision.py:122, in Precision.optimizer_step(self, optimizer, model, closure, **kwargs)
120 \"\"\"Hook to run the optimizer step.\"\"\"
121 closure = partial(self._wrap_closure, model, optimizer, closure)
--> 122 return optimizer.step(closure=closure, **kwargs)
File site-packages/torch/optim/optimizer.py:487, in Optimizer.profile_hook_step.<locals>.wrapper(*args, **kwargs)
482 else:
483 raise RuntimeError(
484 f\"{func} must return None or a tuple of (new_args, new_kwargs), but got {result}.\"
485 )
--> 487 out = func(*args, **kwargs)
488 self._optimizer_step_code()
490 # call optimizer step post hooks
File site-packages/torch/optim/optimizer.py:91, in _use_grad_for_differentiable.<locals>._use_grad(self, *args, **kwargs)
89 torch.set_grad_enabled(self.defaults[\"differentiable\"])
90 torch._dynamo.graph_break()
---> 91 ret = func(self, *args, **kwargs)
92 finally:
93 torch._dynamo.graph_break()
File site-packages/torch/optim/sgd.py:112, in SGD.step(self, closure)
110 if closure is not None:
111 with torch.enable_grad():
--> 112 loss = closure()
114 for group in self.param_groups:
115 params: List[Tensor] = []
File site-packages/lightning/pytorch/plugins/precision/precision.py:108, in Precision._wrap_closure(self, model, optimizer, closure)
95 def _wrap_closure(
96 self,
97 model: \"pl.LightningModule\",
98 optimizer: Steppable,
99 closure: Callable[[], Any],
100 ) -> Any:
101 \"\"\"This double-closure allows makes sure the ``closure`` is executed before the ``on_before_optimizer_step``
102 hook is called.
103
(...)
106
107 \"\"\"
--> 108 closure_result = closure()
109 self._after_closure(model, optimizer)
110 return closure_result
File site-packages/lightning/pytorch/loops/optimization/automatic.py:144, in Closure.__call__(self, *args, **kwargs)
142 @override
143 def __call__(self, *args: Any, **kwargs: Any) -> Optional[Tensor]:
--> 144 self._result = self.closure(*args, **kwargs)
145 return self._result.loss
File site-packages/torch/utils/_contextlib.py:116, in context_decorator.<locals>.decorate_context(*args, **kwargs)
113 @functools.wraps(func)
114 def decorate_context(*args, **kwargs):
115 with ctx_factory():
--> 116 return func(*args, **kwargs)
File site-packages/lightning/pytorch/loops/optimization/automatic.py:138, in Closure.closure(self, *args, **kwargs)
135 self._zero_grad_fn()
137 if self._backward_fn is not None and step_output.closure_loss is not None:
--> 138 self._backward_fn(step_output.closure_loss)
140 return step_output
File site-packages/lightning/pytorch/loops/optimization/automatic.py:239, in _AutomaticOptimization._make_backward_fn.<locals>.backward_fn(loss)
238 def backward_fn(loss: Tensor) -> None:
--> 239 call._call_strategy_hook(self.trainer, \"backward\", loss, optimizer)
File site-packages/lightning/pytorch/trainer/call.py:319, in _call_strategy_hook(trainer, hook_name, *args, **kwargs)
316 return None
318 with trainer.profiler.profile(f\"[Strategy]{trainer.strategy.__class__.__name__}.{hook_name}\"):
--> 319 output = fn(*args, **kwargs)
321 # restore current_fx when nested context
322 pl_module._current_fx_name = prev_fx_name
File site-packages/lightning/pytorch/strategies/strategy.py:212, in Strategy.backward(self, closure_loss, optimizer, *args, **kwargs)
209 assert self.lightning_module is not None
210 closure_loss = self.precision_plugin.pre_backward(closure_loss, self.lightning_module)
--> 212 self.precision_plugin.backward(closure_loss, self.lightning_module, optimizer, *args, **kwargs)
214 closure_loss = self.precision_plugin.post_backward(closure_loss, self.lightning_module)
215 self.post_backward(closure_loss)
File site-packages/lightning/pytorch/plugins/precision/precision.py:72, in Precision.backward(self, tensor, model, optimizer, *args, **kwargs)
52 @override
53 def backward( # type: ignore[override]
54 self,
(...)
59 **kwargs: Any,
60 ) -> None:
61 r\"\"\"Performs the actual backpropagation.
62
63 Args:
(...)
70
71 \"\"\"
---> 72 model.backward(tensor, *args, **kwargs)
File site-packages/lightning/pytorch/core/module.py:1101, in LightningModule.backward(self, loss, *args, **kwargs)
1099 self._fabric.backward(loss, *args, **kwargs)
1100 else:
-> 1101 loss.backward(*args, **kwargs)
File site-packages/torch/_tensor.py:581, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
571 if has_torch_function_unary(self):
572 return handle_torch_function(
573 Tensor.backward,
574 (self,),
(...)
579 inputs=inputs,
580 )
--> 581 torch.autograd.backward(
582 self, gradient, retain_graph, create_graph, inputs=inputs
583 )
File site-packages/torch/autograd/__init__.py:347, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
342 retain_graph = create_graph
344 # The reason we repeat the same comment below is that
345 # some Python versions print out the first line of a multi-line function
346 # calls in the traceback and some print out the last line
--> 347 _engine_run_backward(
348 tensors,
349 grad_tensors_,
350 retain_graph,
351 create_graph,
352 inputs,
353 allow_unreachable=True,
354 accumulate_grad=True,
355 )
File site-packages/torch/autograd/graph.py:825, in _engine_run_backward(t_outputs, *args, **kwargs)
823 unregister_hooks = _register_logging_hooks_on_whole_graph(t_outputs)
824 try:
--> 825 return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
826 t_outputs, *args, **kwargs
827 ) # Calls into the C++ engine to run the backward pass
828 finally:
829 if attach_logging_hooks:
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead."
}
```
</code></pre>
|
<python><pytorch><torchvision>
|
2024-11-01 14:50:32
| 0
| 10,425
|
Shan Khan
|
79,148,292
| 13,392,257
|
Sqlalchemy select AttributeError: 'dict' object has no attribute
|
<p>I am building a FastAPI + SQLAlchemy app.</p>
<p>My code</p>
<p>sql.py</p>
<pre><code>from sqlalchemy import Table, Column, String, BigInteger, MetaData
metadata = MetaData()
custom_datasets = Table(
"custom_datasets",
metadata,
Column("project_id", String(500), primary_key=True),
Column("id", BigInteger, primary_key=True),
Column("name", String(500), unique=True),
Column("path", String(500), unique=True),
Column("feature_service", String(500), nullable=False),
Column("last_updated_timestamp", BigInteger, nullable=False),
)
</code></pre>
<p>main.py</p>
<pre><code>import os
from sqlalchemy.orm import Session
from sqlalchemy import select
def get_db():
if os.getenv("REGISTRY_CONNECTION_STRING"):
db = SessionLocal()
yield db
yield None
@app.get("/datasets/")
def get_project_registry(body: Dict[str, Any], db: Session = Depends(get_db)):
print("XXX_ ", custom_datasets.c) # XXX_ ImmutableColumnCollection(custom_datasets.project_id, custom_datasets.id,
stmt = select(custom_datasets).where(
custom_datasets.c.project_id in body.project_ids
)
    res = db.execute(stmt).all()
return res
</code></pre>
<p>I have an error</p>
<pre><code>python3.8/site-packages/feast/feature_server.py", line 469, in get_project_registry
custom_datasets.c.project_id in body.project_ids
AttributeError: 'dict' object has no attribute 'project_ids'
</code></pre>
<p>How to fix the error?</p>
<hr />
<p>Update:</p>
<p>Filtering is not working for the request</p>
<pre><code>GET http://127.0.0.1:8009/datasets
{
"project_ids": ["proj_id1"]
}
</code></pre>
<p>Table looks like this:</p>
<pre><code>testdb=# SELECT * FROM custom_datasets;
project_id | id | name | path | feature_service | last_updated_timestamp
------------+----+-------+-----------+-----------------+------------------------
proj_id1 | 1 | name1 | path/path | service_name | 1
</code></pre>
<p>Updated code</p>
<pre><code> @app.get("/datasets/")
def get_project_registry(body: Dict[str, Any], db: Session = Depends(get_db)):
print("XXX_ ", body['project_ids']) # prints XXX_ ['proj_id1']
stmt = select(custom_datasets).where(
custom_datasets.c.project_id in body['project_ids']
)
#stmt = select(custom_datasets)
res = db.execute(stmt).all()
return res
</code></pre>
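<p>While debugging, I also noticed that Python's <code>in</code> operator evaluates eagerly to a plain bool (it can never return an expression object), which I suspect is why the filter seems to be ignored. A stand-in sketch without SQLAlchemy (the <code>Col</code> class here is hypothetical, purely to illustrate the language behaviour):</p>

```python
# Stand-in illustration (no SQLAlchemy involved): `col in values` calls
# list.__contains__ and returns a plain bool immediately, so .where()
# would receive a constant instead of a filter expression.
class Col:
    """Hypothetical stand-in for a column object."""
    def in_(self, values):
        # An expression-building method returns an object describing the
        # filter, rather than evaluating to a bool right away.
        return f"project_id IN ({', '.join(repr(v) for v in values)})"

col = Col()
print(col in ["proj_id1"])    # False -- evaluated eagerly by Python itself
print(col.in_(["proj_id1"]))  # project_id IN ('proj_id1')
```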
|
<python><sqlalchemy>
|
2024-11-01 14:10:18
| 1
| 1,708
|
mascai
|
79,148,243
| 16,611,809
|
transposing within a Polars df leads to "TypeError: not yet implemented: Nested object types"
|
<p>I have this data:</p>
<pre><code>┌────────────┬──────────────────────────────────────┐
│ col1       ┆ col2                                 │
│ ---        ┆ ---                                  │
│ list[str]  ┆ list[list[str]]                      │
╞════════════╪══════════════════════════════════════╡
│ ["a"]      ┆ [["a"]]                              │
│ ["b", "c"] ┆ [["b", "c"], ["b", "c"], ["b", "c"]] │
│ ["d"]      ┆ [["d"]]                              │
└────────────┴──────────────────────────────────────┘
</code></pre>
<p>I want to have all b's and all c's in the same list in row 2, but as you can see, the associations of b to b and c to c are not maintained in row 2. With pandas I used:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
pddf = pd.DataFrame({"col1": [["a"], ["b", "c"], ["d"]],
"col2": [[["a"]], [["b", "c"], ["b", "c"], ["b", "c"]], [["d"]]]})
pddf["col2"] = pddf["col2"].apply(lambda listed: pd.DataFrame(listed).transpose().values.tolist())
print(pddf)
# col1 col2
# 0 [a] [[a]]
# 1 [b, c] [[b, b, b], [c, c, c]]
# 2 [d] [[d]]
</code></pre>
<p>This is the desired result. I am trying to do the same with polars, by replacing <code>pddf.transpose().values.tolist()</code> with <code>pldf.transpose().to_numpy().tolist()</code>, but I always get a <code>TypeError: not yet implemented: Nested object types</code>. Are there any workarounds? Here is the complete polars code:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
pldf = pl.DataFrame({"col1": [["a"], ["b", "c"], ["d"]],
"col2": [[["a"]], [["b", "c"], ["b", "c"], ["b", "c"]], [["d"]]]})
pldf = pldf.with_columns(pl.col("col2").map_elements(lambda listed: pl.DataFrame(listed).transpose().to_numpy().tolist()))
print(pldf)
# TypeError: not yet implemented: Nested object types
# Hint: Try setting `strict=False` to allow passing data with mixed types.
</code></pre>
<p>Where would I need to apply the mentioned <code>strict=False</code>?</p>
<p>On an easier df <code>pddf.transpose().values.tolist()</code> and <code>pldf.transpose().to_numpy().tolist()</code> are the same:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import polars as pl
pd.DataFrame(
{"col1": ["a", "b", "c"],
"col2": ["d", "e", "f"]}
).transpose().values.tolist() == pl.DataFrame(
{"col1": ["a", "b", "c"],
"col2": ["d", "e", "f"]}
).transpose().to_numpy().tolist()
# True
</code></pre>
<p>Please keep as close as possible to the code, even though it's not ideal using <code>.apply()</code> or <code>.map_elements()</code>, but this is in a far greater project and I don't want to break anything else :).</p>
<p>(EDIT: I simplified the code a little since the second lambda wasn't really necessary for the question.)</p>
|
<python><dataframe><transpose><python-polars>
|
2024-11-01 13:53:47
| 3
| 627
|
gernophil
|
79,148,210
| 19,203,856
|
EndpointConnectionError using Localstack with Django
|
<p>I'm working on setting up Localstack in my Django app so we don't need to connect to S3 for local development. This is the relevant part of my docker-compose:</p>
<pre><code> app:
build:
context: .
dockerfile: docker/app.Dockerfile
command: >
bash -c "poetry run python manage.py runserver_plus 0.0.0.0:8000 --reloader-type watchdog"
ports:
- 8000:8000
- 5678:5678
volumes:
- .:/app
- venv:/app/.venv
depends_on:
- db
- celery
- elasticsearch
- localstack
stdin_open: true
tty: true
networks:
- proxynet
localstack:
container_name: localstack
image: localstack/localstack
ports:
- '4566:4566'
volumes:
- ./localstack-data:/var/lib/localstack
- /var/run/docker.sock:/var/run/docker.sock
</code></pre>
<p>And I have these settings in my <code>settings.py</code> file:</p>
<pre><code>MEDIA_URL = "/media/"
MEDIA_ROOT = BASE_DIR("media")
DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_DEFAULT_ACL = "private"
AWS_S3_ACCESS_KEY_ID = "local"
AWS_S3_SECRET_ACCESS_KEY = "local"
AWS_S3_ENDPOINT_URL = "http://localhost:4566"
AWS_STORAGE_BUCKET_NAME = "mybucket"
</code></pre>
<p>But no matter what I try, when I try to upload a file in my application, I get a <code>botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:4566/mybucket"</code>. This only happens when the request to S3 is made through the Django request/response cycle, because I have a few scripts I wrote that are working fine with localstack. For example:</p>
<pre><code>#!/bin/bash
export AWS_ACCESS_KEY_ID=local
export AWS_SECRET_ACCESS_KEY=local
export AWS_DEFAULT_REGION=us-east-1
aws s3api create-bucket --bucket "$1" --endpoint-url http://localhost:4566
# Check if the bucket creation was successful
if [ $? -eq 0 ]; then
echo "Bucket '$1' created successfully."
else
echo "Failed to create bucket '$1'."
fi
</code></pre>
<p>I have tried using both <code>http://localhost:4566</code> and <code>http://localstack:4566</code> as my endpoint url, but I get the same <code>ConnectionError</code> either way. I even tried writing my own ultra-simplified version of the <code>S3Boto3Storage</code> class to see if the error was coming from somewhere in that code, but I still got the connection error. What am I missing?</p>
<p>Also, it's probably worth noting that we're on older versions of packages: Django 3.2.0 and django-storages 1.9.1</p>
|
<python><django><amazon-s3><localstack><django-storage>
|
2024-11-01 13:43:55
| 1
| 376
|
Dillon Brock
|
79,148,025
| 8,964,393
|
Create random partition inside a pandas dataframe and create a field that identifies partitions
|
<p>I have created the following pandas dataframe:</p>
<pre><code>ds = {'col1':[1.0,2.1,2.2,3.1,41,5.2,5.0,6.1,7.1,10]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks like this:</p>
<pre><code>print(df)
col1
0 1.0
1 2.1
2 2.2
3 3.1
4 41.0
5 5.2
6 5.0
7 6.1
8 7.1
9 10.0
</code></pre>
<p>I need to create a random 80% / 20% partition of the dataset and I also need to create a field (called <code>buildFlag</code>) which shows whether a record belongs to the 80% partition (<code>buildFlag = 1</code>) or belongs to the 20% partition (<code>buildFlag = 0</code>).</p>
<p>For example, the resulting dataframe would look like:</p>
<pre><code> col1 buildFlag
0 1.0 1
1 2.1 1
2 2.2 1
3 3.1 0
4 41.0 1
5 5.2 0
6 5.0 1
7 6.1 1
8 7.1 1
9 10.0 1
</code></pre>
<p>The <code>buildFlag</code> values are assigned randomly.</p>
<p>Can anyone help me, please?</p>
|
<python><pandas><dataframe><random><partition>
|
2024-11-01 12:40:42
| 3
| 1,762
|
Giampaolo Levorato
|
79,147,875
| 1,942,868
|
How to get the log and result of websocket async_to_sync calling
|
<p>I have websocket and simply send the mesasges to <code>channel_layer</code></p>
<pre><code>from channels.layers import get_channel_layer
channel_layer = get_channel_layer()
async_to_sync(channel_layer.group_send)(
'{}'.format(mychannelname),
{
"type": "chat_message",
"message": "send you"
}
)
</code></pre>
<p>It seems to work well and the messages reach the client browser;
however, I want to know from the server side whether it worked.</p>
<p>Is that possible? Or can I get the number of clients connected to a channel?</p>
<p>My consumers.py makes channels</p>
<pre><code>import json
from channels.db import database_sync_to_async
from channels.generic.websocket import AsyncWebsocketConsumer
class ChatConsumer(AsyncWebsocketConsumer):
async def connect(self):
self.room_group_name = self.scope["url_route"]["kwargs"]["room_name"]
await self.channel_layer.group_add(
self.room_group_name,
self.channel_name
)
await self.accept()
await self.send(text_data=json.dumps({
'channel_name': self.channel_name
}))
async def disconnect(self, close_code):
print("some disconnect")
await self.channel_layer.group_discard(
self.room_group_name,
self.channel_name
)
async def receive(self, text_data):
text_data_json = json.loads(text_data)
message = text_data_json['message']
print("receive data",text_data_json)
print("channel_name:",self.channel_name)
print("group_name:",self.room_group_name)
if text_data_json['type'] == "register":
self.user_id = text_data_json['message']
            print("user_id is:", self.user_id)
#res = await self.save_message_to_db(message)
await self.channel_layer.group_send(
self.room_group_name,
{
'type': 'chat_message',
'message': "nicelydone",
}
)
async def chat_message(self, event):
print("someone call chat_message")
message = event['message']
await self.send(text_data=json.dumps({
'message': message
}))
</code></pre>
|
<javascript><python><django><websocket>
|
2024-11-01 11:42:01
| 1
| 12,599
|
whitebear
|
79,147,705
| 544,721
|
How can I redirect `prompt_toolkit` `prompt` output to standard error?
|
<p><em>How can I redirect <code>prompt_toolkit</code> output to standard error?</em></p>
<p>Iβm working with a script that uses <code>from prompt_toolkit.shortcuts import prompt</code>, and I'm trying to display a progress bar to standard error instead of standard output. Hereβs the core of my setup:</p>
<pre class="lang-py prettyprint-override"><code>def get_prompt(self):
num = 10
cnt = int(self.pct * 10) if not np.isnan(self.pct) and self.pct >= self.threshold else 0
bar = "#" * cnt + "_" * (num - cnt)
bar = bar[:num]
dur = time.time() - self.start_time
return f"Recording. Press option and ENTER (or just ENTER for default)... {dur:.1f}sec {bar}"
choice = prompt(self.get_prompt, refresh_interval=0.1)
</code></pre>
<p>When using the <code>prompt</code> function, the progress bar output defaults to standard output, which isn't what I want. I tried specifying <code>sys.stderr</code> as the output but encountered an error when passing <code>output=create_output(sys.stderr)</code>. I also tried using <code>PromptSession</code> and other tweaks, but I kept running into issues. Has anyone successfully redirected <code>prompt_toolkit</code> output to standard error?</p>
<p><strong>What I Tried:</strong></p>
<ol>
<li><p><strong>Using <code>prompt()</code> with <code>output=create_output(sys.stderr)</code></strong></p>
<p>My initial approach was to directly pass <code>output=create_output(sys.stderr)</code> to the <code>prompt()</code> function, hoping it would allow the output to go to standard error instead of standard output. Here's what the code looked like:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from prompt_toolkit.shortcuts import prompt
from prompt_toolkit.output import create_output
# Attempt to direct output to standard error
stderr_output = create_output(sys.stderr)
def get_prompt():
return "This should appear on stderr."
choice = prompt(get_prompt, output=stderr_output)
</code></pre>
<p><strong>Result</strong>: This resulted in an error:</p>
<pre><code>TypeError: prompt() got an unexpected keyword argument 'output'
</code></pre>
<p><strong>Explanation</strong>: I realized that <code>prompt()</code> doesn't accept an <code>output</code> parameter directly, which meant I needed another approach to customize output streams.</p>
</li>
<li><p><strong>Switching to <code>PromptSession</code></strong></p>
<p>Next, I discovered that <code>PromptSession</code> can accept an <code>output</code> parameter, so I tried using it instead of <code>prompt()</code>. Here's what I did:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from prompt_toolkit import PromptSession
from prompt_toolkit.output import create_output
# Create a PromptSession with output to stderr
stderr_output = create_output(sys.stderr)
session = PromptSession(output=stderr_output)
def get_prompt():
return "Recording to stderr session."
# Using the session to display output to stderr
choice = session.prompt(get_prompt)
</code></pre>
<p><strong>Result</strong>: This seemed promising, but I encountered another issue. My environment threw an error due to <code>vt100</code> imports, specifically when trying to use <code>StderrOutput</code>.</p>
<pre><code>ImportError: No module named 'prompt_toolkit.output.vt100'
</code></pre>
<p><strong>Explanation</strong>: I realized that <code>vt100</code>-specific implementations like <code>StderrOutput</code> were causing compatibility issues in some environments, making it unreliable.</p>
</li>
</ol>
|
<python><command-line><stderr><prompt-toolkit><python-prompt-toolkit>
|
2024-11-01 10:41:04
| 1
| 10,883
|
Grzegorz Wierzowiecki
|
79,147,445
| 480,118
|
pandas pivot data, fill Mult index column horizontally
|
<p>I have the following code:</p>
<pre><code>import pandas as pd
data = {
'name': ['Comp1', 'Comp1', 'Comp2', 'Comp2', 'Comp3'],
'entity_type': ['type1', 'type1', 'type2', 'type2', 'type3'],
'code': ['code1', 'code2', 'code3', 'code1', 'code2'],
'date': ['2024-01-31', '2024-01-31', '2024-01-29', '2024-01-31', '2024-01-29'],
'value': [10, 10, 100, 10, 200],
'source': [None, None, 'Estimated', None, 'Reported']
}
df = pd.DataFrame(data)
pivot_df = df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value').rename_axis([('name', 'entity_type', 'source', 'date')])
df = pivot_df.reset_index()
df
</code></pre>
<p>This produces the following:
<a href="https://i.sstatic.net/26NX3xCM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26NX3xCM.png" alt="enter image description here" /></a></p>
<p>I am having trouble with the following:</p>
<ol>
<li>I would like to remove the first column</li>
<li>I would like to fill the first 3 rows horizontally. so, for example, the blank cells above 'code2' should be Comp1, type1, NaN</li>
<li>would be nice to replace those nans in the column headers with empty string</li>
</ol>
<p>Any help would be appreciated.</p>
<p><strong>EDIT - working hack</strong>
Since I want this data to end up as an array that looks exactly like it does in the dataframe, to insert into a spreadsheet, this works. In this case, however, there would not be any meaningful 'columns':</p>
<pre><code>out = (df.pivot(index='date', columns=['name', 'entity_type', 'source', 'code'], values='value')
.rename_axis([('name', 'entity_type', 'source', 'date')])
.reset_index()
.fillna('')
)
out.columns.names = [None, None, None, None]
columns_df = pd.DataFrame(out.columns.tolist()).T
out = pd.concat([columns_df, pd.DataFrame(out.values)], ignore_index=True)
out
</code></pre>
|
<python><pandas>
|
2024-11-01 09:07:59
| 1
| 6,184
|
mike01010
|
79,147,385
| 3,825,996
|
Running unit tests during pip install when using scikit-build-core with pyproject.toml
|
<p>Is there a way to automatically test the installation of a python package that is built using scikit-build-core? I already got the C++ unit-tests as part of the C++ build like <a href="https://stackoverflow.com/questions/32901679/unit-testing-as-part-of-the-build">so</a>, can something like this be done for the python bindings as well?</p>
<p>The test does not necessarily have to be invoked by scikit-build-core, if there is some other mechanism that can be invoked automatically by <code>pip install</code>.</p>
<p>Of course it's not too much effort to run the test manually after install but you also don't want to put the cherry on top of the cake manually after install ;)</p>
<p>While writing this question, I had the idea that this automatic test could also be done by cmake after building the python library, and I actually got it working:</p>
<p>I append this code</p>
<pre><code># Automatically run the test script after building
add_custom_command(
TARGET _core
POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_directory ${CMAKE_SOURCE_DIR}/src/${SKBUILD_PROJECT_NAME} ${CMAKE_BINARY_DIR}/${SKBUILD_PROJECT_NAME}
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_SOURCE_DIR}/test.py ${CMAKE_BINARY_DIR}/test.py
COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_BINARY_DIR}/_core.*.* ${CMAKE_BINARY_DIR}/${SKBUILD_PROJECT_NAME}/
COMMAND ${Python_EXECUTABLE} ${CMAKE_BINARY_DIR}/test.py
)
</code></pre>
<p>to my CMakeLists.txt which basically looks like <a href="https://github.com/pybind/scikit_build_example/blob/master/CMakeLists.txt" rel="nofollow noreferrer">this</a>.</p>
<p>However, this is somewhat hacky and only tests that the generated lib can be imported to python, not if it's available to the user after installing.</p>
|
<python><pip><pyproject.toml><scikit-build>
|
2024-11-01 08:46:00
| 0
| 766
|
mqnc
|
79,147,359
| 4,355,878
|
How to convert a convolutional layer to a fully connected one in PyTorch with integrated bias
|
<p>How to add a bias to the Toeplitz matrix?</p>
<p>The following <a href="https://stackoverflow.com/questions/56702873/is-there-an-function-in-pytorch-for-converting-convolutions-to-fully-connected-n">approach</a> just shows how to convert the kernel matrix such that</p>
<pre><code>Y = A*X
</code></pre>
<p>But I want something like</p>
<pre><code>Y = A*X + B
</code></pre>
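<p>To make concrete what I mean by "integrated bias", here is a minimal NumPy sketch (not PyTorch; the sizes and values are made up): the convolution bias simply becomes a vector B broadcast over the output positions:</p>

```python
import numpy as np

# Tiny 1-D "valid" cross-correlation with kernel w and scalar bias b0
x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, -1.0])
b0 = 0.25
out_len = len(x) - len(w) + 1

# Toeplitz-style matrix A implementing the correlation
A = np.zeros((out_len, len(x)))
for i in range(out_len):
    A[i, i:i + len(w)] = w

# The bias term B is just b0 repeated once per output position
B = np.full(out_len, b0)

direct = np.array([x[i:i + len(w)] @ w for i in range(out_len)]) + b0
print(np.allclose(A @ x + B, direct))  # True
```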
|
<python><machine-learning><pytorch>
|
2024-11-01 08:34:09
| 0
| 1,533
|
j35t3r
|
79,147,234
| 4,451,521
|
How to correctly do console overrides for Hydra
|
<p>I have a yaml file</p>
<pre><code>cars:
- model: "Sedan"
length: 4.5
width: 1.8
height: 1.4
fuel_efficiency:
- 12 # City
- 15 # Highway
- 13 # Combined
- model: "SUV"
length: 4.8
width: 2.0
height: 1.7
fuel_efficiency:
- 10
- 12
- 11
- model: Hatchback
length: 4.0
width: 1.7
height: 1.4
fuel_efficiency:
- 14
- 18
- 16
</code></pre>
<p>I have written a script using Hydra that contains the following</p>
<pre><code>import hydra
from omegaconf import DictConfig
@hydra.main(config_path="config", config_name="car_data", version_base=None)
def main(cfg:DictConfig):
for car in cfg.cars:
print(car.model)
</code></pre>
<p>I get the values, do some calculation and everything goes alright!</p>
<p>My question is: how can I do console overrides of some values (a feature of Hydra)?</p>
<p>I have tried</p>
<pre><code>>python process_cars.py cars[0].length=4.8
LexerNoViableAltException: cars[0].length=4.8
^
See https://hydra.cc/docs/1.2/advanced/override_grammar/basic for details
</code></pre>
<p>I tried reading the suggested doc, but cannot figure out how to apply it in this case.</p>
|
<python><fb-hydra>
|
2024-11-01 07:29:20
| 1
| 10,576
|
KansaiRobot
|
79,147,028
| 4,451,521
|
TypeError: conlist() got an unexpected keyword argument 'min_items'
|
<p>I am using pydantic and strictyaml to read a yaml config file. I am using this class</p>
<pre><code>class CarData(BaseModel):
model: str
length: float
width: float
height: float
fuel_efficiency: conlist(item_type=float, min_items=3, max_items=3) # Ensures exactly 3 values
# fuel_efficiency: list[float]
</code></pre>
<p>and I am following what is written in the <a href="https://docs.pydantic.dev/1.10/usage/types/#arguments-to-conlist" rel="nofollow noreferrer">documentation of conlist</a>; however, when I run the script I get:</p>
<pre><code>TypeError: conlist() got an unexpected keyword argument 'min_items'
</code></pre>
<p>Why is this happening, and how can it be corrected? (My pydantic version is 2.9.2 just in case)</p>
|
<python><pydantic>
|
2024-11-01 05:34:22
| 1
| 10,576
|
KansaiRobot
|
79,146,880
| 963,844
|
How to prevent python run program at import?
|
<p>I want my Python program to apply changes in real time, like PHP. So I wrote <code>run.py</code> to achieve this.</p>
<p>Everybody says never to use exec, so I use <code>importlib.reload</code>, but there is a problem: <code>print("main1")</code> runs twice.</p>
<p>If I put it into <code>if __name__ == '__main__':</code>, then it never runs.</p>
<p>How can I prevent the duplicate run?</p>
<p>original main.py</p>
<pre><code>import time
def fun():
print("main2")
while True:
print("main1")
fun()
time.sleep(1)
</code></pre>
<hr />
<p>run.py</p>
<pre><code>import time
import main
import importlib
while True:
try:
# exec(open("main.py", encoding='utf-8').read())
# fun()
importlib.reload(main)
main.fun()
except:
print("error")
time.sleep(1)
</code></pre>
<p>main.py</p>
<pre><code>def fun():
print("main2")
print("main1")
</code></pre>
|
<python>
|
2024-11-01 03:44:29
| 1
| 3,769
|
CL So
|
79,146,794
| 1,111,088
|
CORS error on Azure Storage calls from Javascript to commit chunked uploads
|
<p>I want users to upload a huge file directly into Azure Storage. I have a Flask Python web app but I do not want the file to be uploaded into my web server due to size constraints.</p>
<p>When the user has selected the file they want to upload and clicks the upload button, the following happens:</p>
<ol>
<li>An API is called that generates a SAS URL and a blob name to assign to the file to be uploaded. The code that generates both is:</li>
</ol>
<pre><code> blob_name = str(uuid.uuid4())
container_name = os.environ[default_container_name_setting]
sas = generate_blob_sas(account_name=service.account_name,
account_key=access_key,
container_name=container_name,
blob_name=blob_name,
permission=BlobSasPermissions(write=True, read=True, create=True),
expiry=datetime.utcnow() + timedelta(hours=2))
sas_url = 'https://' + service.account_name + '.blob.core.windows.net/' + container_name + '/' + blob_name + '?' + sas
return sas_url, blob_name
</code></pre>
<ol start="2">
<li>The file is then uploaded in chunks via Javascript:</li>
</ol>
<pre><code>const chunkSize = 1024 * 1024 * 20;
const totalChunks = Math.ceil(file.size / chunkSize);
const blockIds = []; // Array to hold block IDs
for (let i = 0; i < totalChunks; i++) {
const start = i * chunkSize;
const end = Math.min(start + chunkSize, file.size);
const chunk = file.slice(start, end);
const blockId = btoa("block-" + i); // Base64 encode block ID
blockIds.push(blockId);
// Upload each chunk
const uploadResponse = await fetch(sas_url + "&comp=block&blockid=" + blockId, {
method: "PUT",
headers: {
"x-ms-blob-type": "BlockBlob",
"Content-Type": file.type
},
body: chunk
});
if (!uploadResponse.ok) {
return false;
}
}
</code></pre>
<p>It works up to this point. However, the next step is to tell Azure to put the chunks together, still via Javascript:</p>
<pre><code>const commitResponse = await fetch(sas_url + "&comp=commitBlockList", {
method: "PUT",
headers: {
"Content-Type": "application/xml",
"x-ms-version": "2020-10-02",
"Content-Length": "0"
},
body: `<BlockList>${blockIds.map(id => `<Latest>${id}</Latest>`).join('')}</BlockList>`
});
if (!commitResponse.ok) {
throw new Error("Failed to commit blocks to blob.");
}
</code></pre>
<p>I always get a CORS error at this point:</p>
<blockquote>
<p>Access to fetch at 'https://xxx.blob.core.windows.net/container/e07d13fa-bcd6-45cf-9eea-3295e17dc567?se=2024-11-01T04%2A18%3B30Q&sp=rcw&sv=2024-11-04&sr=b&sig=Cudr...&comp=commitBlockList' from origin 'http://localhost:4449' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.</p>
</blockquote>
<p>I know I have properly set the CORS in Azure Storage because the upload part works.</p>
<p>Under Blob Service tab, I have added my localhost origin, with allowed methods GET, POST, OPTIONS, and PUT.</p>
<p>I have tried regenerating another SAS Url with the retrieved Blob name but I'm still getting the CORS error.</p>
<p>What am I missing?</p>
<p>Update: I have narrowed down the CORS error to the query string "&comp=commitBlockList" added to the SAS URL in the commit step. I still don't know how to resolve it, though.</p>
|
<javascript><python><cors><azure-blob-storage>
|
2024-11-01 02:38:35
| 2
| 2,480
|
rikitikitik
|
79,146,589
| 6,630,397
|
Named argument passed as a dict: SyntaxError: syntax error at or near "%" or SyntaxError: type modifiers must be simple constants or identifiers
|
<p>I'm facing some trouble when trying to execute a PostgreSQL <code>CREATE TABLE</code> query with a dictionary of parameters using psycopg:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Fri Nov 1 09:32:32 2024
@author: me
"""
import psycopg
pg_uri = "postgres://postgres:******@localhost:5432/mydatabase"
conn_dict = psycopg.conninfo.conninfo_to_dict(pg_uri)
conn = psycopg.connect(**conn_dict)
curs = conn.cursor()
d = {"CRS": 4326}
raw = "CREATE TABLE foo (id INT, geom geometry('Point',%(CRS)s));"
query0 = psycopg.sql.SQL(raw).format(**d)
query1 = psycopg.sql.SQL(raw.format(**d))
query2 = psycopg.sql.SQL(raw)
curs.execute(query0) # SyntaxError: syntax error at or near "%"
curs.execute(query1) # SyntaxError: syntax error at or near "%"
curs.execute(query2,d) # SyntaxError: type modifiers must be simple constants or identifiers
</code></pre>
<p>The <code>curs.execute()</code> calls either raise:</p>
<ul>
<li><code>SyntaxError: syntax error at or near "%"</code> with the two firsts queries or:</li>
<li><code>SyntaxError: type modifiers must be simple constants or identifiers</code> with the last one</li>
</ul>
<h4>Relevant pieces of documentation</h4>
<p><a href="https://www.psycopg.org/psycopg3/docs/basic/params.html" rel="nofollow noreferrer">https://www.psycopg.org/psycopg3/docs/basic/params.html</a></p>
<h4>Version Info</h4>
<p>Python: 3.10<br />
psycopg: 3.2.3<br />
PostgreSQL: 16</p>
|
<python><postgresql><psycopg2><psycopg3>
|
2024-10-31 23:36:00
| 1
| 8,371
|
swiss_knight
|
79,146,572
| 5,264,127
|
How do I obtain the parsed tz offset when parsing tz-aware strings with Polars?
|
<p>When using Polars to parsing datetime strings with offset information, the output is always in UTC:</p>
<pre class="lang-py prettyprint-override"><code>pl.Series(['2020-01-01T01:00+10:00']).str.to_datetime()
</code></pre>
<pre><code>shape: (1,)
Series: '' [datetime[μs, UTC]]
[
2019-12-31 15:00:00 UTC
]
</code></pre>
<p>I assume the reason for this is it's impossible to represent a series with a non-constant timezone, so this behaviour is the only sane way to deal with inputs like [+1, +2, +3] (of which DST changes are an example).</p>
<p>It's all well and good that polars doesn't want to get into making fuzzy guesses, but <strong>how do I get the "+10:00" safely out to deal with this myself?</strong> (or the [+1, +2, +3] in harder cases.) <code>chrono</code> is successfully parsing it out as an offset, how can I get it as a <code>pl.Duration</code> column or something without having to hand-roll offset parsing from scratch myself?</p>
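<p>As a point of comparison, plain Python's <code>datetime.fromisoformat</code> keeps the offset around as <code>tzinfo</code>, and that offset is exactly the piece of information I want to get out of polars:</p>

```python
from datetime import datetime

dt = datetime.fromisoformat("2020-01-01T01:00+10:00")
print(dt.utcoffset())  # 10:00:00 -- the "+10:00" survives parsing
```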
|
<python><timezone><python-polars>
|
2024-10-31 23:20:25
| 1
| 1,017
|
Jarrad
|
79,146,409
| 567,749
|
Can I call seek on a subprocess stdout pipe and expect it to work in Windows 11?
|
<p><strong>If I use <code>p = subprocess.Popen(..., stdout=subprocess.PIPE)</code>, can I call seek on <code>p.stdout</code> and expect it to work in Windows 11?</strong></p>
<p>From this link (2011), it seems that the expected answer is no, as "you cannot seek on a pipe." That link also shows an IOError occurs when you try.</p>
<ul>
<li><a href="https://bugs.python.org/issue12877" rel="nofollow noreferrer">https://bugs.python.org/issue12877</a></li>
</ul>
<pre><code>from subprocess import Popen, PIPE
p = Popen(['ls'], stdout=PIPE)
p.wait()
p.stdout.seek(0)
Traceback (most recent call last):
File "t.py", line 5, in <module>
p.stdout.seek(0)
IOError: [Errno 29] Illegal seek
Python 2.7.2, Arch Linux x86-64 (Kernel 3.0)
</code></pre>
<p>However, it appears there may be OS-specific things at play or updates to Python since that date which may change the answer as I can get this program to run if I change "ls" to a windows command. Also, the type of p.stdout is a <code>io.BufferedReader</code>, the call to <code>p.stdout.seekable()</code> returns <code>True</code> and the call to seek works as expected most of the time.</p>
<p><strong>If the answer is "no", why does it work most of the time? Also, if seek is not allowed here, shouldn't <code>p.stdout.seekable()</code> return False?</strong></p>
<p>The following program executes without any errors:</p>
<pre><code>import subprocess
import time
p = subprocess.Popen("timeout /t 2", stdout=subprocess.PIPE)
print(type(p.stdout))
assert p.stdout.seekable()
time.sleep(0.5)
p.stdout.seek(0, 2)
byte_count = p.stdout.tell()
p.stdout.seek(0)
data = p.stdout.read(byte_count)
print(data)
</code></pre>
<p>Output:</p>
<pre><code><class '_io.BufferedReader'>
b'\r\nWaiting for 2 seconds, press a key to continue ...'
</code></pre>
<p>The target platform is Windows 11, Python v3.11.5.</p>
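<p>For contrast, here is a quick cross-platform sketch (the child command is arbitrary); on POSIX systems the same check reports the pipe as non-seekable, consistent with the old bug report above, while in my Windows 11 runs it returns <code>True</code>:</p>

```python
import subprocess
import sys

# Spawn any child process with a piped stdout; the pipe itself is what matters
p = subprocess.Popen([sys.executable, "-c", "print('hi')"],
                     stdout=subprocess.PIPE)
p.wait()
print(p.stdout.seekable())  # False on Linux/macOS pipes; True in my Windows 11 runs
```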
|
<python><windows><subprocess>
|
2024-10-31 21:34:46
| 1
| 2,189
|
devtk
|
79,146,281
| 5,104,259
|
Python class to run the equation, but can't understand one of the methods
|
<p>I'm self-learning Python, and it teaches the concept of a "Class" from <a href="https://python-programming.quantecon.org/python_oop.html" rel="nofollow noreferrer">https://python-programming.quantecon.org/python_oop.html</a></p>
<p>Below is the full code. I can understand that:</p>
<ol>
<li>def h() is the one calculating the equation (the image at the bottom has equation)</li>
<li>def update() is the one calculating t and t+1</li>
<li>def generate_sequence() is the one appending all the returned values.</li>
</ol>
<p>But my question is: can anybody explain what def steady_state() does?
I can't find where this computation originates from.</p>
<p>((s * z) / (n + δ))**(1 / (1 - α))</p>
<p>Am I lacking understanding of this equation, or of how the class and its methods interact?</p>
<pre><code>class Solow:
    r"""
    Implements the Solow growth model with the update rule

        k_{t+1} = [(s z k^α_t) + (1 - δ)k_t] / (1 + n)
    """
    def __init__(self, n=0.05,  # population growth rate
                 s=0.25,        # savings rate
                 δ=0.1,         # depreciation rate
                 α=0.3,         # share of labor
                 z=2.0,         # productivity
                 k=1.0):        # current capital stock
        self.n, self.s, self.δ, self.α, self.z = n, s, δ, α, z
        self.k = k

    def h(self):
        "Evaluate the h function"
        # Unpack parameters (get rid of self to simplify notation)
        n, s, δ, α, z = self.n, self.s, self.δ, self.α, self.z
        # Apply the update rule
        return (s * z * self.k**α + (1 - δ) * self.k) / (1 + n)

    def update(self):
        "Update the current state (i.e., the capital stock)."
        self.k = self.h()

    def steady_state(self):
        "Compute the steady state value of capital."
        # Unpack parameters (get rid of self to simplify notation)
        n, s, δ, α, z = self.n, self.s, self.δ, self.α, self.z
        # Compute and return steady state
        return ((s * z) / (n + δ))**(1 / (1 - α))

    def generate_sequence(self, t):
        "Generate and return a time series of length t"
        path = []
        for i in range(t):
            path.append(self.k)
            self.update()
        return path
</code></pre>
<p><a href="https://i.sstatic.net/bANH9AUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bANH9AUr.png" alt="enter image description here" /></a></p>
|
<python><class><methods>
|
2024-10-31 20:45:12
| 2
| 421
|
rocknRrr
|
79,146,263
| 2,962,555
|
kafka.errors.NoBrokersAvailable: NoBrokersAvailable for a service in container try to create topic at "kafka:9092"
|
<p>I have multiple services in Docker, including service A (the consumer) and Kafka. Below is the docker-compose.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.8'
services:
service-a:
container_name: service-a
build:
context: .
dockerfile: Dockerfile
ports:
- "8881:8881"
environment:
- ENVIRONMENT=development
depends_on:
- kafka
- chroma
kafka:
container_name: my-kafka
image: bitnami/kafka:latest
ports:
- "9094:9094"
environment:
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_CFG_BROKER_ID=1
- KAFKA_CFG_NODE_ID=1
- KAFKA_CFG_PROCESS_ROLES=broker,controller
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@:9093
- ALLOW_PLAINTEXT_LISTENER=yes
chroma:
container_name: learnsuite-chroma
image: chromadb/chroma:latest
ports:
- "8882:8000"
</code></pre>
<p>Note that the Kafka bootstrap_servers setting for my service-a (running in a docker container) is</p>
<pre class="lang-yaml prettyprint-override"><code>bootstrap_servers: kafka:9092
</code></pre>
<p>And this setting will be used in the consumer creation in service-a:</p>
<pre class="lang-py prettyprint-override"><code>def get_consumer_for_topic(topic: str):
group_id = settings.kafka.group
bootstrap = settings.kafka.bootstrap_servers
bootstrap_servers = bootstrap.split(',') if bootstrap else []
admin_client = KafkaAdminClient(bootstrap_servers=bootstrap_servers)
try:
topics = admin_client.list_topics()
if topic not in topics:
# Create the topic if it doesn't exist
new_topic = NewTopic(name=topic, num_partitions=1, replication_factor=1)
admin_client.create_topics([new_topic])
logger.info(f"Topic '{topic}' created.")
else:
logger.info(f"Topic '{topic}' already exists.")
except Exception as e:
logger.error(f"Error ensuring topic exists: {e}")
finally:
admin_client.close()
return KafkaConsumer(
topic,
bootstrap_servers=bootstrap_servers,
group_id = group_id,
auto_offset_reset='earliest'
)
</code></pre>
<p>When I run <code>docker-compose up</code>, I get this error:</p>
<blockquote>
<p>kafka.errors.NoBrokersAvailable: NoBrokersAvailable</p>
</blockquote>
<p>I know many people have discussed this error, but this is as far as I got even after searching around. Please advise.</p>
<p><strong>Answer:</strong> As the question was marked for deletion, it seems I cannot post an answer, so I'm updating it here for people who might have a similar issue.</p>
<p>The thing is, service-a boots very quickly: it hit the topic-creation step before Kafka had fully started. I therefore added a retry mechanism that basically lets it wait. After that, the log shows it failed on the first attempt but succeeded on the second and eventually booted successfully.</p>
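<p>A minimal sketch of the retry mechanism described above (the helper and its parameters are hypothetical, not the actual service code); in the real service the factory would construct the <code>KafkaAdminClient</code> and the exception caught would be <code>kafka.errors.NoBrokersAvailable</code>:</p>
<pre class="lang-py prettyprint-override"><code>import time

def connect_with_retry(factory, retries=5, delay=2.0):
    """Call `factory` until it succeeds or `retries` attempts are exhausted.

    In the real service, `factory` would be something like
    lambda: KafkaAdminClient(bootstrap_servers=["kafka:9092"]),
    and the exception caught would be kafka.errors.NoBrokersAvailable.
    """
    last_error = None
    for attempt in range(1, retries + 1):
        try:
            return factory()
        except Exception as exc:  # e.g. NoBrokersAvailable while Kafka boots
            last_error = exc
            print(f"Attempt {attempt} failed: {exc}; retrying in {delay}s")
            time.sleep(delay)
    raise last_error
</code></pre>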
|
<python><apache-kafka><docker-compose><bitnami-kafka>
|
2024-10-31 20:39:13
| 0
| 1,729
|
Laodao
|
79,146,216
| 16,611,809
|
How to keep the lines of Popen.communicate together?
|
<p>I want to write the <code>STDOUT</code> of a <code>subprocess.Popen.communicate</code> to a <code>pd.DataFrame</code>. I took some SO threads and combined them into this code:</p>
<pre><code>import subprocess
import io
import pandas as pd
strings = ['Hello\tWorld!', 'This\tis', 'a\tTest!']
string = '\n'.join(strings)
cmd_grep = ['grep', 's']
process_grep = subprocess.Popen(cmd_grep, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
grep_stdout = process_grep.communicate(input=string.encode('utf-8'))[0].decode('utf-8')
grep_csv = io.StringIO()
for line in grep_stdout:
grep_csv.write(line)
grep_csv.seek(0)
grep_results = pd.read_csv(grep_csv,
sep='\t',
header=None,
names=['Word1', 'Word2'])
grep_csv.close()
grep_results
</code></pre>
<p>This works for simple outputs. But if I want to filter the lines, like this:</p>
<pre><code> if line.startswith('This'):
grep_csv.write(line)
</code></pre>
<p>it doesn't work anymore. This is because <code>for line in grep_stdout:</code> does not iterate over lines but over characters (you can see this by adding a <code>print(line)</code>). Any idea how I can keep the lines together?</p>
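<p>For illustration (my addition): iterating over a <code>str</code> yields single characters, so the per-line filter needs <code>splitlines()</code>:</p>
<pre class="lang-py prettyprint-override"><code>grep_stdout = "Hello\tWorld!\nThis\tis\na\tTest!\n"

# Iterating a str yields characters, not lines:
print([c for c in grep_stdout][:5])   # ['H', 'e', 'l', 'l', 'o']

# splitlines() gives the per-line view the filter needs:
filtered = [line for line in grep_stdout.splitlines()
            if line.startswith('This')]
print(filtered)   # ['This\tis']
</code></pre>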
|
<python><subprocess>
|
2024-10-31 20:08:34
| 1
| 627
|
gernophil
|
79,146,033
| 386,861
|
How to troubleshoot altair charts that draw blank
|
<p>I'm trying to plot a facet chart of client IDs, with a time series of year (x-axis) and total_vchrs (y-axis).</p>
<pre><code> client_id year total_vchrs
0 1564931 2021 4.00
1 2013493 2021 0.00
2 1587580 2021 1.00
3 2259014 2021 0.00
4 2293939 2021 0.00
...
</code></pre>
<p>The data looks simple.</p>
<pre><code>client_id object
year int64
total_vchrs float64
dtype: object
</code></pre>
<p>The code is straightforward:</p>
<pre><code>chart = alt.Chart(melted_df).mark_bar().encode(
x=alt.X('year:O', title='Year'),
y=alt.Y('total_vchrs:Q', title='Total Vouchers'),
#color=alt.Color('client_id:N', title='Client ID'),
facet=alt.Facet('client_id:N', columns=1, title='Client ID')
).properties(
width=600,
height=100,
title="Total vouchers Over Time by Client ID"
)
chart
</code></pre>
<p>But the result is an unhappy blank square:</p>
<p><a href="https://i.sstatic.net/6NlMZeBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6NlMZeBM.png" alt="enter image description here" /></a></p>
<p>What do I need to do to fix this?</p>
|
<python><altair>
|
2024-10-31 19:00:58
| 1
| 7,882
|
elksie5000
|
79,145,990
| 18,851,224
|
How to Add For Loop Inside SafeArea in Flet?
|
<p>How do I add a for loop inside SafeArea in Flet? Right now the loop below appends its controls to the page itself, outside the SafeArea.</p>
<pre><code>import flet as ft
def main(page: ft.Page):
page.add(
ft.SafeArea(
)
)
for i in range(50):
page.controls.append(ft.Text(f"Line {i}"))
page.scroll = "always"
page.update()
ft.app(main)
</code></pre>
|
<python><android><flet>
|
2024-10-31 18:47:52
| 1
| 469
|
Jorpy
|
79,145,746
| 4,317,857
|
Huggingface transformers eval dataset size and GPU out of memory
|
<p>I have a trained <code>BertForSequenceClassification</code> model from the huggingface <code>transformers</code> library, and I need to run a lot of forward passes on different data with it. I am trying to optimize the batch size, which led to strange GPU out-of-memory behaviour, so I suspect I am doing something wrong.</p>
<p>I use an A100 with 80 GB of memory. Here is a sketch of my code:</p>
<pre class="lang-none prettyprint-override"><code>model = geneformer_utils.get_model(path=MODEL, output_attentions=False, enable_random_dropout=True)
training_args_dict = geneformer_utils.get_training_args(
output_dir=OUTPUT,
per_device_train_batch_size=1,
per_device_eval_batch_size=16,
eval_accumulation_steps=1,
)
training_args = transformers.training_args.TrainingArguments(**training_args_dict)
trainer = transformers.Trainer(
model=model,
args=training_args,
data_collator=geneformer.DataCollatorForCellClassification(),
train_dataset=test,
eval_dataset=test,
compute_metrics=None
)
data_loader = trainer.get_eval_dataloader()
full_result = []
for i in range(16):
# 0: logits
full_result.append([[]])
for _, batch in enumerate(data_loader):
for i in range(16):
gene_ids, attention_mask = batch['input_ids'], batch['attention_mask']
model_predictions = model(
input_ids=gene_ids,
attention_mask=attention_mask,
output_attentions=False,
output_hidden_states=False
)
logits = list(model_predictions['logits'].cpu().detach().numpy())
full_result[i][0].extend(logits)
</code></pre>
<p>Increasing <code>per_device_eval_batch_size</code> leads to GPU out-of-memory errors, but in a strange way. When I set it to 32, several loops run successfully, but after about 4-5 loops I see the error. Inserting <code>torch.cuda.empty_cache()</code> prevents the error at batch size 32, or at least substantially delays it, but it also seems to degrade the overall speed of the loop. Setting the batch size to 64, however, runs out of memory almost immediately, even though this code ran successfully at batch size 64 on a smaller dataset.</p>
<p>I'd like to figure out what's going on, how much memory my input data actually occupies (per data point), and how to make this run more efficiently, if possible.</p>
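<p>For a rough sense of scale, here is a back-of-the-envelope activation-memory estimate (entirely my own assumptions: BERT-base-like dimensions, fp32, long Geneformer-style sequences). It suggests why batch size 64 can blow past 80 GB when the forward pass runs without <code>torch.no_grad()</code>, since autograd then keeps every intermediate activation alive:</p>
<pre class="lang-py prettyprint-override"><code># Rough activation-memory estimate for one transformer forward pass.
# All dimensions are assumptions (BERT-base-like); real numbers depend on
# the model config, sequence length, and dtype.
batch_size = 64
seq_len = 2048          # long Geneformer-style inputs (assumption)
hidden = 768
layers = 12
heads = 12
bytes_per_float = 4     # fp32

# Dominant terms per layer: hidden activations (~4x hidden in the MLP)
# and the attention score matrix (heads x seq_len x seq_len).
hidden_acts = batch_size * seq_len * hidden * 4 * bytes_per_float
attn_scores = batch_size * heads * seq_len * seq_len * bytes_per_float
per_layer = hidden_acts + attn_scores
total_gb = layers * per_layer / 1024**3

print(f"~{total_gb:.1f} GiB of activations if all layers are retained")
</code></pre>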
|
<python><gpu><huggingface-transformers><torch>
|
2024-10-31 17:22:03
| 0
| 332
|
Nikolay Markov
|
79,145,695
| 843,458
|
xlwings python fails to read excel data
|
<p>I am trying this code because it can read data from an already-open Excel workbook:</p>
<pre><code>import pandas as pd
import xlwings as xw
xlsfile = r'C:\Users\matth\OneDrive\Dokumente\20241023-1904097928-umsatz.xlsx'
workbook = xw.Book(xlsfile)
ws = workbook.sheets[4]
tbl = ws.api.ListObjects(1) # or .ListObjects(1)
rng = ws.range(tbl.range.address) # get range from table address
df = rng.options(pd.DataFrame, header=True).value # load range to dataframe
</code></pre>
<p>This code has now been running for 10 minutes and consumes over 1 GB of memory.
Excel is frozen. The table to load contains about 2,500 rows. That is not much data; it should load in under a second.</p>
<p>What is going wrong?</p>
|
<python><excel><memory><xlwings>
|
2024-10-31 17:08:12
| 1
| 3,516
|
Matthias Pospiech
|
79,145,558
| 2,645,548
|
Python redis client timeout does not work
|
<p>If the Redis connection is fine and a request runs for a long time, the timeout works correctly.
But if the Redis server is unavailable, the timeout does not work.</p>
<p>This code raises an exception after about 5 seconds instead of 0.007 s:</p>
<pre class="lang-py prettyprint-override"><code>from redis import Redis
r = Redis('redis', 6379, 7, socket_timeout=.007, socket_connect_timeout=.007)
results = r.mget('key1', 'key2', 'key3')
</code></pre>
<p>Compose</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3.8"
services:
api:
build:
context: .
command: ["uvicorn", "run_api:app", "--host", "0.0.0.0", "--port", "80", "--reload"]
restart: always
volumes:
- ./:/app/
ports:
- "8000:80"
environment:
...
# Disabled for test timeout
# redis:
# image: redis:5.0.3-alpine
#
</code></pre>
<p>How to fix it?</p>
<p>Python redis client is <code>5.2.0</code>, python <code>3.12</code>.</p>
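<p>As a sanity check of my own (not from the original post): a plain TCP connect to a closed local port fails well within the budget, which suggests the ~5 s delay comes from resolving the now-unreachable <code>redis</code> hostname; DNS lookups happen before the socket connect and are not covered by <code>socket_connect_timeout</code> (an assumption worth verifying):</p>
<pre class="lang-py prettyprint-override"><code>import socket
import time

# Pre-flight check: can we open a TCP connection within the budget?
def can_connect(host, port, timeout=0.007):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, time.monotonic() - start

# A closed local port fails fast (connection refused), well under 5 s;
# an unresolvable hostname instead stalls in name resolution.
ok, elapsed = can_connect("127.0.0.1", 1)  # port 1 assumed closed
print(ok, round(elapsed, 3))
</code></pre>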
|
<python><python-3.x><redis>
|
2024-10-31 16:29:10
| 0
| 611
|
jonsbox
|