Dataset schema (column statistics from the dataset viewer):
- QuestionId: int64, range 74.8M to 79.8M
- UserId: int64, range 56 to 29.4M
- QuestionTitle: string, 15 to 150 characters
- QuestionBody: string, 40 to 40.3k characters
- Tags: string, 8 to 101 characters
- CreationDate: date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, range 0 to 44
- UserExpertiseLevel: int64, range 301 to 888k
- UserDisplayName: string, 3 to 30 characters
78,427,635
3,926,543
custom SGD was not updating parameters
<p>I was learning PyTorch from the d2l website and writing a simple linear regression model. Here is my optimizer:</p>
<pre><code>class SGD():
    def __init__(self, params, lr):
        self.params = params
        self.lr = lr

    def step(self):
        for param in self.params:
            param -= self.lr * param.grad

    def zero_grad(self):
        for param in self.params:
            if param.grad is not None:
                param.grad.zero_()
</code></pre>
<p>However, it could not update the model parameters during training. I do not know what went wrong.</p>
<p>Of note, the model parameters can be updated with the built-in optimizer: <code>optim.SGD(model.parameters(), lr=self.learning_rate)</code>. Therefore, I suspect the problem is with my naive SGD implementation.</p>
<p>Below is a reproducible example:</p>
<pre><code>import numpy as np
import pandas as pd
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader, TensorDataset
import warnings
warnings.filterwarnings(&quot;ignore&quot;)


class SyntheticRegressionData():
    &quot;&quot;&quot;synthetic tensor dataset for linear regression from S02&quot;&quot;&quot;
    def __init__(self, w, b, noise=0.01, num_trains=1000, num_vals=1000, batch_size=32):
        self.w = w
        self.b = b
        self.noise = noise
        self.num_trains = num_trains
        self.num_vals = num_vals
        self.batch_size = batch_size
        n = num_trains + num_vals
        self.X = torch.randn(n, len(w))
        self.y = torch.matmul(self.X, w.reshape(-1, 1)) + b + noise * torch.randn(n, 1)

    def get_tensorloader(self, tensors, train, indices=slice(0, None)):
        tensors = tuple(a[indices] for a in tensors)
        dataset = TensorDataset(*tensors)
        return DataLoader(dataset, self.batch_size, shuffle=train)

    def get_dataloader(self, train=True):
        indices = slice(0, self.num_trains) if train else slice(self.num_trains, None)
        return self.get_tensorloader((self.X, self.y), train, indices)

    def train_dataloader(self):
        return self.get_dataloader(train=True)

    def val_dataloader(self):
        return self.get_dataloader(train=False)


class LinearNetwork(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(in_features, out_features))
        self.bias = nn.Parameter(torch.randn(out_features))

    def forward(self, x):
        return torch.matmul(x, self.weight) + self.bias


class SGD():
    def __init__(self, params, lr):
        self.params = params
        self.lr = lr

    def step(self):
        for param in self.params:
            param -= self.lr * param.grad

    def zero_grad(self):
        for param in self.params:
            if param.grad is not None:
                param.grad.zero_()


class MyTrainer():
    &quot;&quot;&quot;custom trainer for linear regression&quot;&quot;&quot;
    def __init__(self, max_epochs=10, learning_rate=1e-3):
        self.max_epochs = max_epochs
        self.learning_rate = learning_rate

    def fit(self, model, train_dataloader, val_dataloader=None):
        self.model = model
        self.train_dataloader = train_dataloader
        self.val_dataloader = val_dataloader
        # self.optim = MySGD([self.model.weight, self.model.bias], lr=self.learning_rate)
        self.optim = SGD(self.model.parameters(), lr=self.learning_rate)
        self.loss = nn.MSELoss()
        self.num_train_batches = len(train_dataloader)
        self.num_val_batches = len(val_dataloader) if val_dataloader is not None else 0
        self.epoch = 0
        for epoch in range(self.max_epochs):
            self.fit_epoch()

    def fit_epoch(self):
        # train
        self.model.train()
        avg_loss = 0
        for x, y in self.train_dataloader:
            self.optim.zero_grad()
            y_hat = self.model(x)
            loss = self.loss(y_hat, y)
            loss.backward()
            self.optim.step()
            avg_loss += loss.item()
        avg_loss /= self.num_train_batches
        print(f'epoch {self.epoch}: train_loss={avg_loss:&gt;8f}')
        # test
        if self.val_dataloader is not None:
            self.model.eval()
            val_loss = 0
            with torch.no_grad():
                for x, y in self.val_dataloader:
                    y_hat = self.model(x)
                    loss = self.loss(y_hat, y)
                    val_loss += loss.item()
            val_loss /= self.num_val_batches
            print(f'epoch {self.epoch}: val_loss={val_loss:&gt;8f}')
        self.epoch += 1


torch.manual_seed(2024)
trainer = MyTrainer(max_epochs=10, learning_rate=0.01)
model = LinearNetwork(2, 1)

torch.manual_seed(2024)
w = torch.tensor([2., -3.])
b = torch.Tensor([1.])
noise = 0.01
num_trains = 1000
num_vals = 1000
batch_size = 64
data = SyntheticRegressionData(w, b, noise, num_trains, num_vals, batch_size)
train_data = data.train_dataloader()
val_data = data.val_dataloader()
trainer.fit(model, train_data, val_data)
</code></pre>
<p>Here is the output:</p>
<pre><code>epoch 0: train_loss=29.762345
epoch 0: val_loss=29.574341
epoch 1: train_loss=29.547140
epoch 1: val_loss=29.574341
epoch 2: train_loss=29.559777
epoch 2: val_loss=29.574341
epoch 3: train_loss=29.340937
epoch 3: val_loss=29.574341
epoch 4: train_loss=29.371171
epoch 4: val_loss=29.574341
epoch 5: train_loss=29.649407
epoch 5: val_loss=29.574341
epoch 6: train_loss=29.717251
epoch 6: val_loss=29.574341
epoch 7: train_loss=29.545675
epoch 7: val_loss=29.574341
epoch 8: train_loss=29.456314
epoch 8: val_loss=29.574341
epoch 9: train_loss=29.537769
epoch 9: val_loss=29.574341
</code></pre>
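One plausible explanation for the frozen parameters (an assumption on the editor's part, not confirmed in the post): `model.parameters()` returns a one-shot generator, and the custom `SGD` stores it directly, so the first full pass over it (here, the first `zero_grad()` call) exhausts it and every later `step()` loops over nothing. A plain-Python sketch of that behavior:

```python
# Plain-Python sketch of the suspected bug: model.parameters() in PyTorch
# returns a one-shot generator, and the custom SGD stores it directly.
def parameters():
    yield "weight"
    yield "bias"

params = parameters()
first_pass = list(params)   # the first loop (e.g. zero_grad) consumes it...
second_pass = list(params)  # ...so every later loop (step) sees nothing

print(first_pass)   # ['weight', 'bias']
print(second_pass)  # []
```

If this is indeed the cause, materializing the parameters once in the constructor (`self.params = list(params)`) would make both `zero_grad()` and `step()` iterate over the same tensors on every call.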
<python><deep-learning><pytorch>
2024-05-04 02:52:07
1
17,427
mt1022
78,427,494
480,118
Best way to split jagged array on first element and fill with other elements
<p>It's hard to describe this problem, but the code below should clarify. Basically, I start with an array whose first element is another array of size N; the rest of the array is single elements. I want to split that first element into partitions and, for every partition, fill in the remainder of the array:</p>
<pre><code>import pandas as pd, numpy as np

data = [['100', '200', '300', '400'],
        'val1',
        'val2',
        ]

ids = np.array_split(data[0], 2)
new_array = []
for id in ids:
    data[0] = id
    new_array.append(data[:])
new_array
</code></pre>
<p>So I end up with this:</p>
<pre><code>[[['100', '200'], 'val1', 'val2'],
 [['300', '400'], 'val1', 'val2']]
</code></pre>
<p>Is this the best way of doing this that will also be fast for very large arrays?</p>
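A list-comprehension variant of the loop above, offered as a sketch: it builds each record as `[chunk, *rest]`, so the original `data` list is never mutated (the loop in the question reassigns `data[0]`, which can surprise later code).

```python
import numpy as np

data = [['100', '200', '300', '400'], 'val1', 'val2']

# Build each record from a split chunk plus the untouched tail of `data`.
rest = data[1:]
new_array = [[list(chunk), *rest] for chunk in np.array_split(data[0], 2)]
```

`np.array_split` returns NumPy arrays; `list(chunk)` converts each one back to a plain list of strings for comparison or serialization.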
<python><pandas><numpy>
2024-05-04 01:02:34
1
6,184
mike01010
78,427,374
6,135,182
Wrap fortran with python using ctypes (just ctypes, no cython or f2py or similar) and turn it into a python package (for python>=3.12)
<p>I have looked at (and used as inspiration) the following SO threads and GitHub repositories:</p> <ul> <li><a href="https://stackoverflow.com/questions/15875533/using-python-ctypes-to-interface-fortran-with-python">Using python-ctypes to interface fortran with python</a></li> <li><a href="https://stackoverflow.com/questions/5811949/call-functions-from-a-shared-fortran-library-in-python">call functions from a shared fortran library in python</a></li> <li><a href="https://stackoverflow.com/questions/43083017/interfacing-python-with-fortran-module-by-ctypes">Interfacing Python with Fortran module by ctypes</a></li> <li><a href="https://stackoverflow.com/questions/65356321/creating-a-python-module-using-ctypes">creating a python module using ctypes</a></li> <li><a href="https://stackoverflow.com/questions/9866412/call-fortran-function-from-python-with-ctypes">Call fortran function from Python with ctypes</a></li> <li><a href="https://github.com/HugoMVale/fortran-in-python" rel="nofollow noreferrer">https://github.com/HugoMVale/fortran-in-python</a></li> <li><a href="https://github.com/daniel-de-vries/fortran-ctypes-python-example" rel="nofollow noreferrer">https://github.com/daniel-de-vries/fortran-ctypes-python-example</a></li> </ul> <p>These get me part of the way, but with recent deprecations many answers that you can find on SO are unfortunately outdated.</p> <h1>Problems</h1> <ul> <li><a href="https://docs.python.org/3.10/library/distutils.html" rel="nofollow noreferrer">distutils</a> is deprecated with removal planned for Python 3.12.</li> <li><a href="https://numpy.org/doc/stable/reference/distutils.html" rel="nofollow noreferrer">numpy.distutils</a> is deprecated, and will be removed for Python &gt;= 3.12.</li> <li><a href="https://setuptools.pypa.io/en/latest/" rel="nofollow noreferrer">setuptools</a> does not provide fortran build support (see e.g. 
GitHub issue <a href="https://github.com/pypa/setuptools/issues/2372" rel="nofollow noreferrer">#2372</a>)</li> </ul> <h1>What I have</h1> <p>I managed to create a simplified example project (details below) consisting of fortran source code that I interface with python using ctypes. Compiling the fortran code I create an <code>.so</code> library which I can then read in with ctypes' <code>CDLL</code>.</p> <p>Doing these steps manually one after the other with meson works fine, but I struggle to get this fully automated and packaged up (with pip).</p> <h1>What I want</h1> <p>I would like the whole compilation and linking process to be automated such that a simple <code>pip install .</code> performs all the necessary compiling and linking, creating a python package that I can import from anywhere.</p> <p>Ultimately I will want to be able to package everything and release it on PyPI.</p> <p>Note that I want to achieve this without tools such as cython or f2py, which would probably work very well for my highly simplified example here, but not for the decades old legacy code, for which I am working through this simplified problem...</p> <h1>Example</h1> <h3>Project structure</h3> <pre><code>project_root ├── fortran2python │   ├── example1.py │   └── __init__.py ├── meson.build ├── pyproject.toml └── src ├── example1_bindings.f90 └── example1.f90 </code></pre> <h3><code>src/example1.f90</code></h3> <p>This fortran file provides a toy subroutine called <code>real_sum</code> adding two real numbers:</p> <pre><code>module example1 use, intrinsic :: iso_fortran_env, only: real64 implicit none private public :: real_sum integer, parameter :: dp = real64 contains subroutine real_sum(a, b, res) !! 
Sum of two reals real(dp), intent(in) :: a, b real(dp), intent(out) :: res res = a + b end subroutine real_sum end module example1 </code></pre> <h3><code>src/example1_bindings.f90</code></h3> <p>This file introduces the c-bindings needed for ctypes.</p> <pre><code>module example1_bindings use example1 use iso_c_binding, only: c_double, c_int implicit none contains real(c_double) function real_sum_c(a, b) result(res) bind(c, name='real_sum') real(c_double), value, intent(in) :: a, b call real_sum(a, b, res) end function real_sum_c end module example1_bindings </code></pre> <h3><code>fortran2python/example1.py</code></h3> <p>This file uses ctypes to read in the library <code>libexample1.so</code> created from the above fortran files, making it available to python. It also performs the necessary type declarations.</p> <pre class="lang-py prettyprint-override"><code>from ctypes import CDLL, c_double import os # Load shared library with C-bindings. libpath = os.path.dirname(os.path.realpath(__file__)) libfile = os.path.join(libpath, '../build/libexample1.so') libexample1 = CDLL(libfile) # Define the function signature for the Fortran function `real_sum`. real_sum = libexample1.real_sum real_sum.argtypes = [c_double, c_double] # allows for automatic type conversion real_sum.restype = c_double # the default return type would be int </code></pre> <h3><code>fortran2python/__init__.py</code></h3> <p>Provide the <code>__init__.py</code> file that allows me to later import <code>fortran2python</code> as a module, and make the <code>real_sum</code> function available.</p> <pre class="lang-py prettyprint-override"><code>from fortran2python import example1 real_sum = example1.real_sum </code></pre> <h1>Compilation setup</h1> <p>My go-to for python packaging used to be setuptools, but with the deprecation of the distutils packages and with setuptool's lack of fortran build support, I guess I need to look elsewhere. 
That's why I've looked into meson, which works well as long as I do things by hand, but the automation via <code>pip install .</code> fails.</p> <h3><code>meson.build</code></h3> <pre><code>project('fortran2python', 'fortran') fortran_src = ['src/example1.f90', 'src/example1_bindings.f90'] example1_lib = library('example1', fortran_src, install: true) </code></pre> <p>With this I can run</p> <pre class="lang-bash prettyprint-override"><code>meson setup build meson compile -C build </code></pre> <p>This produces the following <code>build</code> folder:</p> <pre class="lang-bash prettyprint-override"><code>project_root ├── build │   ├── build.ninja │   ├── compile_commands.json │   ├── libexample1.so │   ├── libexample1.so.p │   │   ├── depscan.dd │   │   ├── example1_bindings.mod │   │   ├── example1.dat │   │   ├── example1-deps.json │   │   ├── example1.mod │   │   ├── src_example1_bindings.f90.o │   │   └── src_example1.f90.o │   ├── meson-info │   │   ├── ... │   ├── meson-logs │   │   └── ... │   └── meson-private │   ├── ... ... </code></pre> <p>With the <code>libexample1.so</code> library compiled, I can then run</p> <pre class="lang-bash prettyprint-override"><code>python -c &quot;from fortran2python import real_sum; print(real_sum(1, 2))&quot; </code></pre> <p>from the project root directory and get the correct answer (<code>3.0</code> in this case). 
So far this works great.</p> <p><strong>But how can I pip install the <code>fortran2python</code> package such that it becomes available from anywhere, and such that I can package it up and distribute to PyPI?</strong></p> <h3><code>pyproject.toml</code></h3> <p>I've set up a <code>pyproject.toml</code> file that's supposed to use meson-python as the build-system.</p> <pre class="lang-ini prettyprint-override"><code>[build-system] requires = [&quot;meson-python&quot;] build-backend = &quot;mesonpy&quot; [project] name = &quot;fortran2python&quot; version = &quot;0.0.1&quot; description = &quot;Examples for wrapping Fortran code with Python using ctypes.&quot; </code></pre> <p>This isn't enough, though. While <code>pip install .</code> or <code>pip install -e .</code> run through without complaint, I am faced with the error <code>No module named 'fortran2python'</code> (unless I'm in the project root directory...). What's more, this doesn't even invoke the meson commands (<code>meson setup build</code> and <code>meson compile -C build</code>) in the first place.</p> <p>I looked at the <a href="https://meson-python.readthedocs.io/en/latest/tutorials/introduction.html#introduction-to-python-packaging-with-meson-python" rel="nofollow noreferrer">python packaging introduction with meson-python</a>, but I struggle to understand how I might be able to adapt that example to mine... I probably need to add an <a href="https://mesonbuild.com/Python-module.html#extension_module" rel="nofollow noreferrer"><code>extension_module</code></a> to the <code>meson.build</code> file somehow, but my attempts have failed.</p> <hr /> <p>Running on:</p> <ul> <li>OS: Linux</li> <li>compilers: gcc and gfortran (version 13.2.1 20230801)</li> <li>python version: 3.12</li> </ul>
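One direction worth trying (a sketch under the editor's own assumptions, not a verified build): with the `meson-python` backend, both the package's Python sources and the compiled library have to be *installed* into the wheel, which means telling Meson about the target Python and pointing `install_dir` at the package directory. A hypothetical `meson.build` along these lines:

```meson
project('fortran2python', 'fortran')

# Locate the target Python (provided by meson-python at build time).
py = import('python').find_installation(pure: false)

fortran_src = ['src/example1.f90', 'src/example1_bindings.f90']

# Install the shared library *inside* the package directory so that
# example1.py can load it relative to __file__ instead of ../build.
library('example1', fortran_src,
        install: true,
        install_dir: py.get_install_dir() / 'fortran2python')

# Install the Python sources as the 'fortran2python' package.
py.install_sources(
    ['fortran2python/__init__.py', 'fortran2python/example1.py'],
    subdir: 'fortran2python')
```

If this layout is used, the `libfile` path in `example1.py` would need to point next to the module, e.g. `os.path.join(libpath, 'libexample1.so')` (the exact library filename is platform-dependent, so treat that too as an assumption).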
<python><fortran><ctypes><python-packaging><meson-build>
2024-05-03 23:49:18
0
1,450
Zaus
78,427,086
825,227
Parse string with specific characters if they exist using regex
<p>I have text containing assorted transactions that I'm trying to parse using regex.</p> <p>Text looks like this:</p> <pre><code>JT Meta Platforms, Inc. - Class A Common Stock (META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 F S: New S O: Morgan Stanley - Select UMA Account # 1 JT Microsoft Corporation - Common Stock (MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 C: Ref: 044Q34N6 </code></pre> <p>I've created a regex to parse out individual transactions, denoted by combination of ticker (eg, (MSFT)), type (eg, [ST], [OP]) and amount (eg, $500,000, etc) as follows:</p> <pre><code>transactions = rx.findall(r&quot;\([A-Z][^$]*\$[^$]*\$[,\d]+&quot;, text) </code></pre> <p>Transactions are returned as a list and look like this for example:</p> <pre><code>(META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 </code></pre> <p>I'd like to add logic to include description details (ie, 'D:...') if they exist. 
I tried with the below pattern, but it winds up returning just one large transaction since the first two transactions don't have description details (ie, 'D:').</p> <p>I'd like to see this:</p> <pre><code>(META) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 </code></pre> <p>..</p> <pre><code>(MSFT) [ST]S (partial) 02/08/2024 03/05/2024 $1,001 - $15,000 </code></pre> <p>..</p> <pre><code>(MSFT) [OP]P 02/13/2024 03/05/2024 $500,001 - $1,000,000 F S: New S O: Morgan Stanley - Portfolio Management Active Assets Account D: Call options; Strike price $170; Expires 01/17 /2025 </code></pre> <p>What am I doing wrong?</p> <pre><code>rx.findall(r&quot;\([A-Z][^$]*\$[^$]*\$[,\d]+[\s\S]*?D:(.*)&quot;, text) </code></pre> <p><strong>Edit:</strong></p> <p>To deal with cases where colon isn't contiguous to 'D' (imperfect PDF parsing), added to @zdim's answer and this addresses the above issues:</p> <pre><code>rx.findall('\([A-Z][^$]*\$[^$]*\$[,\d]+(?:[^$]*D:?.+)?', text) </code></pre>
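A small self-contained check of the "optional description" idea on a trimmed-down sample (the sample text and its layout here are the editor's own simplification of the excerpt above): the description is wrapped in an optional non-capturing group, so records without a `D:` segment still end at the dollar range, while a record that has one extends up to the `C:` reference.

```python
import re

# Simplified stand-in for the parsed PDF text (my own abbreviation).
text = (
    "JT Meta Platforms (META) [ST]S (partial) 02/08/2024 03/05/2024 "
    "$1,001 - $15,000 F S: New "
    "JT Microsoft (MSFT) [OP]P 02/13/2024 03/05/2024 "
    "$500,001 - $1,000,000 F S: New "
    "D: Call options; Strike price $170; Expires 01/17/2025 C: Ref: 044Q34N6"
)

# Base pattern from the question, plus an optional description group that
# stops before the 'C:' reference (or at end of input).
pattern = r"\([A-Z][^$]*\$[^$]*\$[,\d]+(?:[^$]*?D:.*?(?=C:|$))?"
matches = re.findall(pattern, text)
```

Because `[^$]*?` cannot cross a dollar sign, a record with no `D:` before the next transaction's amount simply drops the optional group instead of swallowing the following record.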
<python><regex>
2024-05-03 21:51:30
2
1,702
Chris
78,426,975
7,227,627
How to relate Matplotlib's fig.add_axes local coordinates with my coordinates?
<h1>Question</h1> <p>Motivated by this example from Matplotlib <a href="https://matplotlib.org/stable/gallery/subplots_axes_and_figures/axes_demo.html#sphx-glr-gallery-subplots-axes-and-figures-axes-demo-py" rel="nofollow noreferrer">here</a> I would like to make a similar one. I have run into the following question: <strong>How can I relate the coordinates of the matplotlib graphics with my real-world coordinates?</strong></p> <h1>Example</h1> <p>To illustrate it with the picture below:</p> <ul> <li>I need the <code>inset_axis</code> rather precisely at the red points</li> <li>I have tested <code>left_inset_ax = fig.add_axes([1, 0, .05, .05], facecolor='k')</code> for all four corners (see the black squares)</li> <li>inside the colored area I can work with local coordinates (0..1, 0..1), but I do not find the relation to the outer coordinates</li> <li>perhaps I can set somewhere <code>myCoordinates=True</code></li> <li>I have also tried this <a href="https://matplotlib.org/stable/gallery/subplots_axes_and_figures/zoom_inset_axes.html" rel="nofollow noreferrer">example here</a>, but this is hand-tailored, whereas I need an automated solution without hard-coded figures. <a href="https://i.sstatic.net/XxjsA9cg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XxjsA9cg.png" alt="enter image description here" /></a></li> </ul> <p>I have different <strong>types of <code>inset</code> graphics</strong>. One of them is this below. 
They have been generated in a <code>polar </code>setting by</p> <ul> <li>plt.plot( df_histo_cumsum.index, df_histo_cumsum, c='k', lw=0.7) , or by</li> <li>b = ax0.bar(np.deg2rad(self.wd_cc), Nbar[:,j], label=self.wd_cc, color=self.ws_colors[j-1])</li> </ul> <p>So they live in a different world of coordinates that is not easy to access for direct scaling.<br /> <a href="https://i.sstatic.net/65E2Y05B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65E2Y05B.png" alt="enter image description here" /></a></p> <h1>Solutions and results</h1> <p><a href="https://i.sstatic.net/19QisvB3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/19QisvB3.png" alt="enter image description here" /></a></p> <p>Some <strong>solutions</strong> as given <a href="https://stackoverflow.com/questions/17458580/embedding-small-plots-inside-subplots-in-matplotlib">here</a> looks promising as long as the graphs are very simple. I took it as a base to add more features as title, x- and y labels, equidistant non-square settings, different numbers of grid points. 
The approach is</p> <ul> <li><p>generate 2-dimensional data</p> </li> <li><p>select a few of them for <code>inset_axis</code></p> </li> <li><p>transform to local (0..1) coordinates in both directions</p> </li> <li><p>display the <code>inset_axis</code></p> </li> <li><p>use different number of grid points</p> <pre><code> import numpy as np import matplotlib.pyplot as plt def scaleXY(xgr,ygr,xT,yT): xgr = (xgr-xT.min())/(xT.max()-xT.min()) ygr = (ygr-yT.min())/(yT.max()-yT.min()) return xgr, ygr nx, ny = 150, 60 # choose: number of data ix = np.linspace(-5, 20, nx)*100 iy = np.linspace(-1, 15, ny)*100 xq, yq = np.meshgrid(ix,iy, indexing='ij') # generate data dx, dy = 50, 30 # choose xp,yp = xq[::dx,::dy], yq[::dx,::dy] # select a few of them xgr, ygr = scaleXY(xp,yp,xq,yq) # scale them to (0..1) with plt.style.context('fast'): fig = plt.figure(figsize=(10,10)) ax1 = fig.add_subplot(111) ax1.scatter(xq,yq, s=5) # plot the base data ax1.scatter(xp,yp, c='lime', s=300, alpha = 0.6) # plot the testing points ins = ax1.inset_axes([xgr[2,1], ygr[2,1], 0.2,0.2]) # check the inset positioning ins = ax1.inset_axes([xgr[1,1], ygr[1,1], 0.2,0.2]) # check the inset positioning ins = ax1.inset_axes([xgr[0,0], ygr[0,0], 0.2,0.2]) # check the inset positioning ins = ax1.inset_axes([xgr[2,0], ygr[2,0], 0.2,0.2]) # check the inset positioning ax1.set_aspect('equal') title = 'Even more dangerous area' ax1.set_title(title,fontweight='bold', fontsize=17) ax1.set_xlabel('x-direction', fontsize=17) ax1.set_ylabel('y-direction', fontsize=17) plt.show() </code></pre> </li> </ul> <p>As <strong>results</strong> one can see that coordinates of the graphic separate from the coordinates of the user. 
This leads to the question: <strong>How can I relate the coordinates of the matplotlib graphics with my real-world coordinates?</strong></p> <p><a href="https://i.sstatic.net/nSgYhAiP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSgYhAiP.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/65RTZkcB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65RTZkcB.png" alt="enter image description here" /></a> I would <strong>appreciate any hints</strong> to other running solutions - thank you!</p>
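If the goal is to position insets in data coordinates, one option worth checking (a sketch, not taken from the post): `Axes.inset_axes` accepts a `transform` argument, so passing `ax.transData` lets the `[x, y, width, height]` rectangle be given directly in the plot's own coordinate system instead of axes fractions.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlim(-500, 2000)
ax.set_ylim(-100, 1500)

# The rectangle is interpreted in *data* units because of transform=ax.transData:
# lower-left corner at (1000, 500), 400 wide and 300 tall in data space.
ins = ax.inset_axes([1000, 500, 400, 300], transform=ax.transData)
ins.plot([0, 1], [0, 1])
```

With this, the inset stays anchored to the same data point even when axis limits or figure size change, which avoids the manual (0..1, 0..1) rescaling step.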
<python><matplotlib>
2024-05-03 21:10:47
2
1,998
pyano
78,426,972
3,827,505
How to properly position child element to its parent?
<p>I was trying to get a better understanding of how kivy parent/child positioning works SPECIFICALLY with the FloatLayout, because it allows placing elements based on window size and is therefore super dynamic.</p>
<p>This code below is simple. I'm just trying to get the label to base ITS position on its parent's position, and adjust from there (hence the +200 and +20).</p>
<pre><code>FloatLayout:
    Button:
        text: 'Home'
        size_hint: 1, .2
        pos_hint: {'x':0, 'top':1}
    Label:
        text: 'Welcome'
        # size_hint: 1, .2
        pos_hint: self.parent.x + 200, self.parent.y + 20
</code></pre>
<p>Result image: <a href="https://i.sstatic.net/6HXxELvB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HXxELvB.png" alt="enter image description here" /></a></p>
<p>My EXPECTATION was for the text &quot;Welcome&quot; to &quot;start&quot; bottom left since this <a href="https://www.geeksforgeeks.org/python-float-layout-in-kivy/" rel="nofollow noreferrer">link</a> states &quot;The coordinate system in kivy works from the bottom left! This will be important when placing our objects. (i.e (0, 0) is the bottom left).&quot; so the bottom left of the button is first taken into consideration. Then the +200 would move it somewhat to the right, and +20 would move it barely upwards. But the result is clearly different.</p>
<p>Apologies in advance if I'm missing something obvious, I'm still learning. Very grateful for any help. Thank you.</p>
<p><strong>Update</strong> I ran the code:</p>
<pre><code>FloatLayout:
    Button:
        text: 'Home'
        size_hint: 1, .2
        pos_hint: {'x':0, 'top':1}
    Label:
        text: 'Welcome'
        size: self.parent.size
        center: self.parent.x + 200, self.parent.y + 20
</code></pre>
<p>And it worked. The text adjusts properly when I resize the window on both the x and y axis. But this seems limiting. What if I wanted to change the size of the label? What if I DIDN'T want to use the 'center' attribute (because pos_hint seems more reasonable to me)? I'm not entirely sure if this is a proper solution, which is why I didn't answer my own question with it.</p>
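For reference, a sketch of what `pos_hint` actually expects (it takes a dict of *fractions* of the parent's size, not pixel offsets); the 200- and 20-pixel offsets from the question are expressed here through the plain `pos` property instead, which does take absolute coordinates in a FloatLayout when `pos_hint` is left unset. Treat this as an untested kv sketch:

```
FloatLayout:
    Button:
        text: 'Home'
        size_hint: 1, .2
        pos_hint: {'x': 0, 'top': 1}
    Label:
        text: 'Welcome'
        size_hint: None, None
        size: 100, 30
        # pos_hint values are fractions (0..1) of the parent, e.g. {'x': .25, 'y': .1};
        # for pixel offsets, leave pos_hint unset and use pos instead:
        pos: root.x + 200, root.y + 20
```

Setting `size_hint: None, None` is what lets the explicit `size` take effect, so the label size stays adjustable independently of the positioning.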
<python><android><kivy><kivy-language><kivymd>
2024-05-03 21:09:53
1
318
Omar Moodie
78,426,957
2,153,235
Use json_normalize with extraneous dictionary layer with single key?
<p>I have automated identification system (AIS) data as nested dictionaries in a pandas dataframe. Here is an example:</p> <pre><code>dfAIS Out[18]: Message ... MetaData 0 {'ShipStaticData': {'AisVersion': 1, 'CallSign... ... {'MMSI': 255814000, 'MMSI_String': 255814000, ... 1 {'MultiSlotBinaryMessage': {'ApplicationID': {... ... {'MMSI': 2276003, 'MMSI_String': 2276003, 'Shi... 2 {'StandardClassBPositionReport': {'AssignedMod... ... {'MMSI': 503760500, 'MMSI_String': 503760500, ... 3 {'PositionReport': {'Cog': 25.2, 'Communicatio... ... {'MMSI': 211648000, 'MMSI_String': 211648000, ... 4 {'StaticDataReport': {'MessageID': 24, 'PartNu... ... {'MMSI': 338467989, 'MMSI_String': 338467989, ... ... ... ... 139625 {'PositionReport': {'Cog': 360, 'Communication... ... {'MMSI': 244730300, 'MMSI_String': 244730300, ... 139626 {'PositionReport': {'Cog': 231.5, 'Communicati... ... {'MMSI': 219025528, 'MMSI_String': 219025528, ... 139627 {'PositionReport': {'Cog': 360, 'Communication... ... {'MMSI': 273252100, 'MMSI_String': 273252100, ... 139628 {'UnknownMessage': {}} ... {'MMSI': 244730043, 'MMSI_String': 244730043, ... 139629 {'ShipStaticData': {'AisVersion': 1, 'CallSign... ... {'MMSI': 211666470, 'MMSI_String': 211666470, ... [139630 rows x 3 columns] </code></pre> <p>The core data resides in column &quot;Message&quot;. Each element therein is a dictionary with only 1 key and 1 value. As shown above, the one key might be &quot;ShipStaticData&quot;, &quot;MultiSlotBinaryMessage&quot;, etc., which we can think of as message types. Let's call this dictionary the level-1 dictionary.</p> <p>The level-1 dictionary is an extraneous layer of mapping because the desired data resides in the corresponding value, which itself is an entire dictionary. Let's call the latter the level-2 dictionary. As shown above, the fields in the level-2 dictionary can be &quot;AisVersion&quot;, &quot;ApplicationID&quot;, &quot;Cog&quot;, etc. 
The valid fields are specified <a href="https://aisstream.io/documentation#API-Message-Models" rel="nofollow noreferrer">here</a>.</p> <p>I don't need the key for the level-1 dictionary because the level-2 dictionary contains a MesssageID field that much less ambiguously maps to the message type. Furthermore, dataframe <code>dfAIS</code> also has a MessageType column not shown above that contains the same label as the sole key in the level-1 dictionary.</p> <p>I've been educating myself on dataframe manipulations using <a href="https://stackoverflow.com/a/38231651">apply</a> to extract the MessageID from the level-2 dictionary. I also found that <a href="https://note.nkmk.me/en/python-pandas-json-normalize" rel="nofollow noreferrer">json_normalize</a> is a great option. Unfortunately, in nested dictionaries, it requires that the hierarchical path to common fields have the same path components. I cannot use it for the above scenario because <code>Message.ShipStaticData.MessageID</code> is a different path from <code>Message.MultiSlotBinaryMessage.MessageID</code>. In both of these paths, the 2nd of the 3 path components is the extraneous mapping layer that I wish didn't exist.</p> <p><em><strong>How can I un-nest the nested dictionary so that I can access the MessageID field?</strong></em></p> <p>I tried using <code>pandas.DataFrame.apply</code> to apply the dictionary method <code>items()</code> to the elements of <code>dfAIS</code> column <code>Message</code>. It yields a series of nested tuples and the extraneous message type key is still present:</p> <pre><code>df = dfAIS.Message.apply(dict.items) 0 ((ShipStaticData, {'AisVersion': 1, 'CallSign'... 1 ((MultiSlotBinaryMessage, {'ApplicationID': {'... 2 ((StandardClassBPositionReport, {'AssignedMode... 3 ((PositionReport, {'Cog': 25.2, 'Communication... 4 ((StaticDataReport, {'MessageID': 24, 'PartNum... 139625 ((PositionReport, {'Cog': 360, 'CommunicationS... 
139626 ((PositionReport, {'Cog': 231.5, 'Communicatio... 139627 ((PositionReport, {'Cog': 360, 'CommunicationS... 139628 ((UnknownMessage, {})) 139629 ((ShipStaticData, {'AisVersion': 1, 'CallSign'... Name: Message, Length: 139630, dtype: object </code></pre> <p><strong>P.S.</strong> I was also wrestling with how to refer to the labels &quot;ShipStaticData&quot;, &quot;MultiSlotBinaryMessage&quot;, etc. They are the keys of the level-1 dictionary. It would have been convenient to talk about the &quot;value&quot; taken on by the level-1 key, but &quot;value&quot; already refers to the thing that is mapped to by the key. We can very carefully say that the latter is the value <em>associated</em> with the key, or <em>for</em> the key, but it is still asking for confusion. Is there a clearer (and succinct) way to refer to &quot;ShipStaticData&quot;, &quot;MultiSlotBinaryMessage&quot;, etc.?</p> <p><strong>P.P.S.</strong> While the question starts from a dataframe, here is a 3-line sample of the source <code>jsonl</code> file to facilitate answers that might start from data file ingestion:</p> <pre><code>{&quot;Message&quot;: {&quot;ShipStaticData&quot;: {&quot;AisVersion&quot;: 0, &quot;CallSign&quot;: &quot;GDNB &quot;, &quot;Destination&quot;: &quot;AVONMOUTH DREDGING &quot;, &quot;Dimension&quot;: {&quot;A&quot;: 30, &quot;B&quot;: 5, &quot;C&quot;: 7, &quot;D&quot;: 3}, &quot;Dte&quot;: false, &quot;Eta&quot;: {&quot;Day&quot;: 0, &quot;Hour&quot;: 24, &quot;Minute&quot;: 60, &quot;Month&quot;: 0}, &quot;FixType&quot;: 1, &quot;ImoNumber&quot;: 702864, &quot;MaximumStaticDraught&quot;: 14, &quot;MessageID&quot;: 5, &quot;Name&quot;: &quot;MALAGO &quot;, &quot;RepeatIndicator&quot;: 0, &quot;Spare&quot;: false, &quot;Type&quot;: 33, &quot;UserID&quot;: 235065329, &quot;Valid&quot;: true}}, &quot;MessageType&quot;: &quot;ShipStaticData&quot;, &quot;MetaData&quot;: {&quot;MMSI&quot;: 235065329, &quot;MMSI_String&quot;: 235065329, &quot;ShipName&quot;: &quot;MALAGO 
&quot;, &quot;latitude&quot;: 51.50296166666667, &quot;longitude&quot;: -2.707621666666667, &quot;time_utc&quot;: &quot;2024-04-15 23:00:24.874950587 +0000 UTC&quot;}} {&quot;Message&quot;: {&quot;PositionReport&quot;: {&quot;Cog&quot;: 226.1, &quot;CommunicationState&quot;: 59916, &quot;Latitude&quot;: 33.74614666666667, &quot;Longitude&quot;: -118.22303833333334, &quot;MessageID&quot;: 1, &quot;NavigationalStatus&quot;: 0, &quot;PositionAccuracy&quot;: false, &quot;Raim&quot;: false, &quot;RateOfTurn&quot;: 0, &quot;RepeatIndicator&quot;: 0, &quot;Sog&quot;: 1.1, &quot;Spare&quot;: 0, &quot;SpecialManoeuvreIndicator&quot;: 0, &quot;Timestamp&quot;: 24, &quot;TrueHeading&quot;: 11, &quot;UserID&quot;: 367693690, &quot;Valid&quot;: true}}, &quot;MessageType&quot;: &quot;PositionReport&quot;, &quot;MetaData&quot;: {&quot;MMSI&quot;: 367693690, &quot;MMSI_String&quot;: 367693690, &quot;ShipName&quot;: &quot;KELLY C &quot;, &quot;latitude&quot;: 33.74614666666667, &quot;longitude&quot;: -118.22303833333334, &quot;time_utc&quot;: &quot;2024-04-15 23:00:24.875182234 +0000 UTC&quot;}} {&quot;Message&quot;: {&quot;Interrogation&quot;: {&quot;MessageID&quot;: 15, &quot;RepeatIndicator&quot;: 3, &quot;Spare&quot;: 0, &quot;Station1Msg1&quot;: {&quot;MessageID&quot;: 316004037, &quot;SlotOffset&quot;: 0, &quot;StationID&quot;: 316004037, &quot;Valid&quot;: true}, &quot;Station1Msg2&quot;: {&quot;MessageID&quot;: 0, &quot;SlotOffset&quot;: 0, &quot;Spare&quot;: 0, &quot;Valid&quot;: false}, &quot;Station2&quot;: {&quot;MessageID&quot;: 0, &quot;SlotOffset&quot;: 0, &quot;Spare1&quot;: 0, &quot;Spare2&quot;: 0, &quot;StationID&quot;: 0, &quot;Valid&quot;: false}, &quot;UserID&quot;: 3669987, &quot;Valid&quot;: true}}, &quot;MessageType&quot;: &quot;Interrogation&quot;, &quot;MetaData&quot;: {&quot;MMSI&quot;: 3669987, &quot;MMSI_String&quot;: 3669987, &quot;ShipName&quot;: &quot;&quot;, &quot;latitude&quot;: 47.548895, &quot;longitude&quot;: -122.78526333333333, 
&quot;time_utc&quot;: &quot;2024-04-15 23:00:24.875383071 +0000 UTC&quot;}} </code></pre>
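Starting from the `jsonl` sample, the level-1 key can be pulled with only the standard-library `json` module; a minimal sketch (the function name is illustrative). Each record's `Message` dict holds exactly one key, and the file duplicates it in the top-level `MessageType` field, so either can serve:

```python
import json

def message_types(lines):
    """Yield the level-1 key of each record's "Message" dict."""
    for line in lines:
        record = json.loads(line)
        # each "Message" dict carries exactly one key, e.g. "ShipStaticData";
        # the file also duplicates it in the top-level "MessageType" field
        (msg_type,) = record["Message"]
        yield msg_type

sample = [
    '{"Message": {"ShipStaticData": {"MessageID": 5}}, "MessageType": "ShipStaticData"}',
    '{"Message": {"PositionReport": {"Cog": 226.1}}, "MessageType": "PositionReport"}',
]
print(list(message_types(sample)))  # ['ShipStaticData', 'PositionReport']
```

As for the P.S.: a single key that identifies which shape a record takes is commonly called the "tag" or "discriminant" (as in a tagged union), which avoids overloading the word "value".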
<python><json><pandas><dictionary><data-ingestion>
2024-05-03 21:05:45
2
1,265
user2153235
78,426,876
13,227,420
Accessing values in a dictionary with JMESPath in Python when the key is a numeric string (e.g. '1')
<p>Using the dictionary:</p> <pre class="lang-py prettyprint-override"><code>d = {'1': 'a'} </code></pre> <p>How can I extract the value <code>a</code> using the JMESPath library in Python?</p> <p>The following attempts did not work:</p> <pre class="lang-py prettyprint-override"><code>import jmespath value = jmespath.search(&quot;1&quot;, d) # throws jmespath.exceptions.ParseError: invalid token value = jmespath.search(&quot;'1'&quot;, d) # returns the key '1' instead of the value 'a' </code></pre>
<python><jmespath>
2024-05-03 20:41:56
1
394
sierra_papa
78,426,875
15,020,312
Get one unique value in several lists in Python
<p>I have this data:</p> <pre><code>data = {23: ['WA', 'NV', 'AZ', 'OH'], 24: ['NV', 'OH'], 25: ['NV', 'OH']} </code></pre> <p>I'm trying to get one unique value from each key's list, like this:</p> <pre><code>{23: ['WA'], 24: ['NV'], 25: ['OH']} </code></pre> <p>But my current code returns something different, which is not what I'm expecting.</p> <p>Here is my code:</p> <pre><code>def get_unique_states(data): unique_states = {} all_states = set() for key, states in data.items(): unique_states[key] = [] current_states = set(states) unique_to_current = current_states - all_states if unique_to_current: unique_states[key].extend(list(unique_to_current)) else: for state in states: if state not in unique_states.values(): unique_states[key].append(state) break all_states.update(current_states) return unique_states data = {23: ['WA', 'NV', 'AZ', 'OH'], 24: ['NV', 'OH'], 25: ['NV', 'OH']} unique_states = get_unique_states(data) print(unique_states) </code></pre> <p>I'm trying to give each key a single value that no other key uses; if no unused value remains, fall back to the first value in that key's list.</p> <p>But the output that I'm getting right now is:</p> <pre><code>{23: ['NV', 'AZ', 'WA', 'OH'], 24: ['NV'], 25: ['NV']} </code></pre> <p>But I need this data:</p> <pre><code>{23: ['WA'], 24: ['NV'], 25: ['OH']} </code></pre>
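The intended rule can be expressed greedily; a minimal sketch (assuming the first not-yet-used value in each list should win, which reproduces the desired output for this data):

```python
def pick_unique_states(data):
    """Assign each key one state no other key has taken yet;
    if every state is already taken, fall back to the key's first state."""
    used = set()
    result = {}
    for key, states in data.items():
        # first state not yet assigned elsewhere, else the first state
        choice = next((s for s in states if s not in used), states[0])
        used.add(choice)
        result[key] = [choice]
    return result

data = {23: ['WA', 'NV', 'AZ', 'OH'], 24: ['NV', 'OH'], 25: ['NV', 'OH']}
print(pick_unique_states(data))  # {23: ['WA'], 24: ['NV'], 25: ['OH']}
```

Note the result depends on iteration order of the keys, which for a plain dict is insertion order.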
<python><list><dictionary>
2024-05-03 20:41:52
0
592
artiest
78,426,731
180,275
Why does a requests.get(...) call hang
<p>The following simple python script that uses <code>requests.get</code> to execute a GET request does not return on my computer:</p> <pre><code>import requests print(requests.get('https://api.openstreetmap.org/api/0.6/node/1894790125')) </code></pre> <p>However, I am able to get an &quot;instant&quot; response with the same url using <code>curl</code>.</p> <p>Why is this?</p> <h2>Specifying a common user agent</h2> <p>Specifying a common user agent does not change anything:</p> <pre><code>import requests print(requests.get( 'https://api.openstreetmap.org/api/0.6/node/1894790125', headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36' } )) </code></pre> <h2>tcpdump</h2> <p>When trying to trace the connection with</p> <pre><code> sudo tcpdump -v -i eno1 'host api.openstreetmap.org' </code></pre> <p>I get the following output</p> <pre><code>tcpdump: listening on eno1, link-type EN10MB (Ethernet), snapshot length 262144 bytes 08:17:22.463843 IP6 (flowlabel 0x69a8c, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0xb5a5), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030348905 ecr 0,nop,wscale 7], length 0 08:17:23.482673 IP6 (flowlabel 0xd077e, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0xb1aa), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030349924 ecr 0,nop,wscale 7], length 0 08:17:25.494643 IP6 (flowlabel 0x3cc5c, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0xa9ce), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030351936 ecr 0,nop,wscale 7], length 0 
08:17:29.687440 IP6 (flowlabel 0x7665b, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0x996e), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030356128 ecr 0,nop,wscale 7], length 0 08:17:37.878687 IP6 (flowlabel 0xd5551, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0x796e), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030364320 ecr 0,nop,wscale 7], length 0 08:17:54.010694 IP6 (flowlabel 0xbf2f2, hlim 64, next-header TCP (6) payload length: 40) aaaa:bbbb:cccc:dddd:eeee:dddd:cccc:bbbb.aaaa4 &gt; spike-06.openstreetmap.org.https: Flags [S], cksum 0xdb71 (incorrect -&gt; 0x3a6a), seq 3910911789, win 64800, options [mss 1440,sackOK,TS val 1030380452 ecr 0,nop,wscale 7], length 0 ^C 6 packets captured 28 packets received by filter 3 packets dropped by kernel </code></pre> <p>Unfortunately, I am quite inexperienced interpreting such output, and I am not sure if the <code>incorrect</code> part in the output is an indication of what's going on.</p> <h2>Working in WSL</h2> <p>When executed in WSL on the <em>same hardware</em>, the request runs fine and the script prints <code>&lt;Response [200]&gt;</code>.</p>
<python><http><connection><openstreetmap>
2024-05-03 20:00:08
0
40,876
René Nyffenegger
78,426,675
2,599,709
Mounting multiple Gradio apps with FastAPI
<p>Is it possible to mount multiple Gradio apps with FastAPI? I'd like to have two endpoints: localhost:8000 and localhost:8000/debug. The /debug endpoint would ultimately be a bit more complex and log more data. I thought I'd be able to do something like this:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI from functools import partial import gradio as gr app = FastAPI() sub_app = FastAPI() app.mount('/debug', sub_app) def out(text, n): return text + '!'*n def make_simple_gr_app(n): func = partial(out, n=n) with gr.Blocks() as dbg: textbox = gr.Textbox(max_lines=1, label='Enter text') submit_btn = gr.Button('Submit') submit_btn.click(func, inputs=textbox, outputs=textbox) return dbg sub_app = gr.mount_gradio_app(sub_app, make_simple_gr_app(1), '/') app = gr.mount_gradio_app(app, make_simple_gr_app(5), '/') </code></pre> <p>Now when I go to <code>localhost:8000</code> it works, but when accessing <code>localhost:8000/debug</code> the logs show <code> &quot;GET /debug HTTP/1.1&quot; 404 Not Found</code> which I don't understand since I would think the <code>app.mount(..)</code> would take care of that part.</p> <p>What am I missing?</p>
<python><fastapi><gradio>
2024-05-03 19:44:39
0
4,338
Chrispresso
78,426,646
5,561,472
How to run an LLM from the transformers library under Windows without a GPU?
<p>I have no GPU but I can run <code>openbuddy-llama3-8b-v21.1-8k</code> from <code>ollama</code>. It works with speed of ~1 t/s.</p> <p>But it doesn't work when I try the following code:</p> <pre><code>from transformers import ( AutoModelForCausalLM, AutoTokenizer, GenerationConfig, ) import torch new_model = &quot;openbuddy/openbuddy-llama3-8b-v21.1-8k&quot; model = AutoModelForCausalLM.from_pretrained( new_model, device_map=&quot;auto&quot;, trust_remote_code=True, torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, ) tokenizer = AutoTokenizer.from_pretrained( new_model, max_length=2048, trust_remote_code=True, use_fast=True, ) tokenizer.pad_token = tokenizer.eos_token tokenizer.padding_side = &quot;right&quot; prompt = &quot;&quot;&quot;&lt;|im_start|&gt;system You are a helpful AI assistant.&lt;|im_end|&gt; &lt;|im_start|&gt;user Как открыть брокерский счет?&lt;|im_end|&gt; &lt;|im_start|&gt;assistant &quot;&quot;&quot; inputs = tokenizer.encode( prompt, return_tensors=&quot;pt&quot;, add_special_tokens=False ).cpu() generation_config = GenerationConfig( max_new_tokens=700, temperature=0.5, top_p=0.9, top_k=40, repetition_penalty=1.1, do_sample=True, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id, ) outputs = model.generate( generation_config=generation_config, input_ids=inputs, ) print(tokenizer.decode(outputs[0], skip_special_tokens=False)) </code></pre> <p>Looks like <code>model.generate</code> works much slower comparing to running from <code>ollama</code>.</p> <p>I see that the process uses 25% of cpu only.</p> <p>Where am I wrong?</p>
<python><huggingface-transformers><large-language-model>
2024-05-03 19:35:04
1
6,639
Andrey
78,426,636
11,815,307
Cython: Protecting a cdef class if __init__ is not called
<p>I'm wrapping some Fortran/C codes with Cython. The <code>__init__</code> method of my <code>cdef class</code> allocates some memory in C. I believe that this memory allocation has to happen in the <code>__init__</code> method instead of the <code>__cinit__</code> method because the allocation depends on inputs to <code>__init__</code> (e.g. an input file).</p> <p>However, there is potential for memory errors, because of inheritance. A class that inherits the Cython class could fail to call the parent <code>__init__</code> method, thus not allocating needed memory in C.</p> <p>I am wondering if the code below is a good workaround for this problem. Here, I define a <code>__getattribute__</code> and <code>__setattr__</code>, which checks to make sure <code>__init__</code> has been called, and throws an error if it has not.</p> <p><strong>Complete example:</strong></p> <p>Some C code to wrap</p> <pre class="lang-c prettyprint-override"><code>#include &quot;stdlib.h&quot; typedef struct { int n; double *data; int count; } Example; void example_allocate(void **ptr) { *ptr = (void *) malloc(sizeof(Example)); } void example_deallocate(void **ptr) { Example *p = (Example *) *ptr; free(p); } void example_create(void **ptr, int *n) { Example *p = (Example *) *ptr; p-&gt;count = 3; p-&gt;n = *n; p-&gt;data = (double *) malloc(sizeof(double)*p-&gt;n); for (int i = 0; i &lt; p-&gt;n; i++){ p-&gt;data[i] = i; } } void example_sum_data(void **ptr, double *val) { Example *p = (Example *) *ptr; *val = 0.0; for (int i = 0; i &lt; p-&gt;n; i++){ *val += p-&gt;data[i]; } } void example_count_get(void **ptr, int *count) { Example *p = (Example *) *ptr; *count = p-&gt;count; } void example_count_set(void **ptr, int *count) { Example *p = (Example *) *ptr; p-&gt;count = *count; } </code></pre> <p>Cython wrapper:</p> <pre class="lang-py prettyprint-override"><code>from libcpp cimport bool as cbool from cpython.object cimport PyObject_GenericSetAttr cdef extern void example_allocate(void 
**ptr); cdef extern void example_deallocate(void **ptr) cdef extern void example_create(void **ptr, int *n) cdef extern void example_sum_data(void **ptr, double *val) cdef extern void example_count_get(void **ptr, int *count) cdef extern void example_count_set(void **ptr, int *count) cdef class Example: cdef void *_ptr cdef cbool _init_called def __cinit__(self): self._init_called = False example_allocate(&amp;self._ptr) def __dealloc__(self): example_deallocate(&amp;self._ptr) def __getattribute__(self, name): if not self._init_called: raise Exception('The &quot;__init__&quot; method of Example has not been called.') return super().__getattribute__(name) def __setattr__(self, name, value): if not self._init_called: raise Exception('The &quot;__init__&quot; method of Example has not been called.') PyObject_GenericSetAttr(self, name, value) def __init__(self, int n): self._init_called = True example_create(&amp;self._ptr, &amp;n) def sum_data(self): cdef double val; example_sum_data(&amp;self._ptr, &amp;val) return val property count: def __get__(self): cdef int val example_count_get(&amp;self._ptr, &amp;val) return val def __set__(self, int val): example_count_set(&amp;self._ptr, &amp;val) </code></pre> <p>Python example usage</p> <pre class="lang-py prettyprint-override"><code>e1 = Example(10) a = e1.sum_data() print(a) class ExampleChild(Example): def __init__(self, n): # forgets to call init pass e2 = ExampleChild(9) a = e2.do_stuff() # Raises exception print(a) </code></pre>
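The guard idea can be prototyped in plain Python before committing to the Cython version; a minimal sketch (class names are illustrative), using a class-level default so the flag itself is readable before `__init__` runs:

```python
class Guarded:
    _init_called = False  # class-level default: False until __init__ runs

    def __init__(self, n):
        self._init_called = True
        self.n = n

    def __getattribute__(self, name):
        # allow reading the flag itself; block everything else pre-init
        if name != "_init_called" and not object.__getattribute__(self, "_init_called"):
            raise RuntimeError('The "__init__" method of Guarded has not been called.')
        return object.__getattribute__(self, name)


class Child(Guarded):
    def __init__(self, n):  # forgets to call super().__init__
        pass


g = Guarded(3)
print(g.n)  # 3
try:
    Child(3).n
except RuntimeError as e:
    print(e)
```

The Cython version above uses the `cdef cbool` field set in `__cinit__` for the same purpose, since `__cinit__` is guaranteed to run even when a subclass skips `__init__`.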
<python><c><cython>
2024-05-03 19:32:18
0
699
nicholaswogan
78,426,633
825,227
Conditional exception handling in Python
<p>I'm processing transactions from a list to make individual data items more easily accessible.</p> <p>I've set up the below code to process a list of transactions item by item with a <code>try-except</code> block to exclude problem transactions for the time being.</p> <p>A sample transaction might look like the below (i.e., <code>t</code>):</p> <pre><code>'(GOOGL) [ST]S (partial) 01/25/2024 02/01/2024 $1,001 - $15,000' </code></pre> <pre><code>import re as rx patterns = { &quot;ticker&quot;: r&quot;\(([A-Z]+)&quot;, &quot;asset_type&quot;: r&quot;\[([A-Z]+)&quot;, &quot;transaction_type&quot;: r&quot;\([^)]*\)\s*\[[^\]]*\]\s*([A-Z])\s&quot;, &quot;amount&quot;: r&quot;(\$[-\$\d, ]+)&quot;, } for t in transactions: try: data = {} data['date'], data['trans_date'] = rx.findall(r'(\d+/\d+/\d+)', t) for key, pattern in patterns.items(): data[key] = rx.search(pattern, t).group(1) transaction_data.append(data) except Exception as e: print(f&quot;{data}, {e}&quot;) continue </code></pre> <p>The above code parses the transaction into parts (e.g., ticker: 'GOOGL', asset_type: 'ST', etc.), and I've included the try-except to simply omit transactions that cause issues with parsing for the time being.</p> <p>I'd like to refine the <code>try-except</code> so that if the exception is thrown in attempting to parse <code>t</code> for <code>key</code> equal to 'transaction_type', I can simply assign a default value instead of omitting the entire transaction. E.g., if the exception is thrown for key == 'asset_type', simply set <code>data[key] = 'ST'</code> and move on to the next pattern item.</p> <p>Is there a way to do this?</p>
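One way to get that behavior is to test each pattern individually and fall back to a per-key default rather than letting one failure discard the whole transaction; a sketch (the entries in `defaults` are illustrative assumptions, not values from the source):

```python
import re as rx

patterns = {
    "ticker": r"\(([A-Z]+)",
    "asset_type": r"\[([A-Z]+)",
    "transaction_type": r"\([^)]*\)\s*\[[^\]]*\]\s*([A-Z])\s",
    "amount": r"(\$[-\$\d, ]+)",
}
defaults = {"asset_type": "ST", "transaction_type": "S"}  # illustrative defaults

def parse_transaction(t):
    data = {}
    data["date"], data["trans_date"] = rx.findall(r"(\d+/\d+/\d+)", t)
    for key, pattern in patterns.items():
        m = rx.search(pattern, t)
        if m:
            data[key] = m.group(1)
        elif key in defaults:
            data[key] = defaults[key]  # fall back instead of failing
        else:
            raise ValueError(f"could not parse {key!r} in {t!r}")
    return data

t = "(GOOGL) [ST]S (partial) 01/25/2024 02/01/2024 $1,001 - $15,000"
print(parse_transaction(t))
```

Checking `rx.search(...)` for `None` avoids raising at all for the recoverable keys, so the outer `try-except` only has to handle genuinely unparseable transactions.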
<python><conditional-statements><try-catch>
2024-05-03 19:31:12
1
1,702
Chris
78,426,546
13,578,682
Where did sys.modules go?
<pre><code>&gt;&gt;&gt; import sys &gt;&gt;&gt; del sys.modules['sys'] &gt;&gt;&gt; import sys &gt;&gt;&gt; sys.modules Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; AttributeError: module 'sys' has no attribute 'modules' </code></pre> <p>Why does re-imported <code>sys</code> module not have some attributes anymore?</p> <p>I am using Python 3.12.3 and it happens in macOS, Linux, and Windows. It happens in both the REPL and in a .py script. It does not happen in Python 3.11.</p>
<python><python-internals><python-3.12>
2024-05-03 19:05:38
1
665
no step on snek
78,426,505
20,898,396
Match generic type: "Unbound" is not a class
<p>I am trying to tell whether a variable is of type T using the syntax</p> <pre class="lang-py prettyprint-override"><code>match variable: case ClassName(): </code></pre> <p>However, I get a <code>&quot;Unbound&quot; is not a class</code> error when using a generic. Why is T unbound?</p> <pre class="lang-py prettyprint-override"><code>from typing import Generic, TypeVar from pydantic import BaseModel class LetterList(BaseModel): letter: str class NumberList(BaseModel): numbers: list[int] class BulletPointsList(BaseModel): bullet: str ListType = LetterList | NumberList | BulletPointsList | None T = TypeVar(&quot;T&quot;, bound=ListType) class ListPattern(Generic[T]): def test(self, list_type: T, other_list_type: ListType): match other_list_type: case T(): # &quot;Unbound&quot; is not a class pass case BulletPointsList(): pass case _: pass # type(something_type) is a little confusing # edit: actually this doesn't work match type(other_list_type): case T: pass </code></pre>
<python><python-typing>
2024-05-03 18:54:26
1
927
BPDev
78,426,478
465,183
Python Selenium 4 ChromeDriver crashes in non-headless mode (works in headless mode): selenium.common.exceptions.SessionNotCreatedException
<p>I have a Python Selenium 4 ChromeDriver script that refuses to start in non-headless mode.</p> <p>I have other scripts with the same imports, and they don't crash.</p> <pre><code>Traceback (most recent call last): File &quot;/path/to/selenium_scraper.py&quot;, line 92, in &lt;module&gt; driver = webdriver.Chrome(options=options) File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 45, in __init__ super().__init__( File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 66, in __init__ super().__init__(command_executor=executor, options=options) File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 208, in __init__ self.start_session(capabilities) File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 292, in start_session response = self.execute(Command.NEW_SESSION, caps)[&quot;value&quot;] File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/webdriver.py&quot;, line 347, in execute self.error_handler.check_response(response) File &quot;/usr/local/lib/python3.9/dist-packages/selenium/webdriver/remote/errorhandler.py&quot;, line 229, in check_response raise exception_class(message, screen, stacktrace) selenium.common.exceptions.SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally. (session not created: DevToolsActivePort file doesn't exist) (The process started from chrome location /usr/bin/chromium is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace: #0 0x55b34ba06f83 &lt;unknown&gt; #1 0x55b34b6bfcf7 &lt;unknown&gt; #2 0x55b34b6f760e &lt;unknown&gt; #3 0x55b34b6f426e &lt;unknown&gt; #4 0x55b34b74480c &lt;unknown&gt; #5 0x55b34b738e53 &lt;unknown&gt; #6 0x55b34b700dd4 &lt;unknown&gt; #7 0x55b34b7021de &lt;unknown&gt; #8 0x55b34b9cb531 &lt;unknown&gt; #9 0x55b34b9cf455 &lt;unknown&gt; #10 0x55b34b9b7f55 &lt;unknown&gt; #11 0x55b34b9d00ef &lt;unknown&gt; #12 0x55b34b99b99f &lt;unknown&gt; #13 0x55b34b9f4008 &lt;unknown&gt; #14 0x55b34b9f41d7 &lt;unknown&gt; #15 0x55b34ba06124 &lt;unknown&gt; #16 0x7f961961fea7 start_thread </code></pre> <p>Line 92 is <code>driver = webdriver.Chrome(options=options)</code></p> <p>The only significant difference from the other scrapers with the same code base: for those I delete the Chromium profile before running, but not for this one.</p> <p>I searched a lot on the web and found <a href="https://github.com/SeleniumHQ/selenium/issues/6049" rel="nofollow noreferrer">https://github.com/SeleniumHQ/selenium/issues/6049</a>, so I'm not alone. This is a very obscure bug.</p>
<python><selenium-webdriver><selenium-chromedriver>
2024-05-03 18:47:13
1
187,771
Gilles Quénot
78,426,245
1,169,091
Package installed from test.pypi.org does not have the same folder structure as was uploaded to the repository
<p>I created a package and uploaded it to test.pypi.org: <a href="https://test.pypi.org/project/name-generator-nicomp/0.0.2/" rel="nofollow noreferrer">https://test.pypi.org/project/name-generator-nicomp/0.0.2/</a></p> <p>The logic reads a text file that I want to include in the package. In my initial design I put the text file in its own folder and my logic ran just fine on my local machine.</p> <p>I installed the package from test.pypi.org onto a different computer using pip:</p> <pre><code>python3 -m pip install --index-url https://test.pypi.org/simple/ name-generator-nicomp --upgrade </code></pre> <p>Then I run this script:</p> <pre><code>if __name__ == &quot;__main__&quot;: from name_generator_nicomp.NameGenerator import NameGenerator nameGenerator = NameGenerator() </code></pre> <p>and it fails inside the package code on this line:</p> <pre><code> with importlib.resources.open_text(&quot;src.data&quot;, self.noun_file) as my_file: </code></pre> <p>The error is:</p> <pre><code>ModuleNotFoundError: No module named 'src' </code></pre> <p>Looking in the .gz file created by the build script, the src folder is there and the data folder is inside it.
It can be downloaded and verified here: <a href="https://test.pypi.org/project/name-generator-nicomp/0.0.2/#files" rel="nofollow noreferrer">https://test.pypi.org/project/name-generator-nicomp/0.0.2/#files</a></p> <p>However, when I look at the installation of the package on my local machine, all the files (code and data) are in one folder and there are no other folders at all.</p> <p>My .toml file is</p> <pre><code>[project] name = &quot;name_generator_nicomp&quot; version = &quot;0.0.2&quot; authors = [ { name=&quot;xxx xxxxxx&quot;, email=&quot;xxx.com&quot; }, ] description = &quot;Some description.&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.8&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: OS Independent&quot;, ] [project.urls] Homepage = &quot;https://github.com/nicomp42/name_generator&quot; Issues = &quot;https://github.com/nicomp42/name_generator/issues&quot; [build-system] requires = [&quot;hatchling&quot;] build-backend = &quot;hatchling.build&quot; </code></pre> <p>The entire traceback is:</p> <pre><code>Traceback (most recent call last): File &quot;C:\Users\nicholdw\eclipse-workspace\myNameGenerator\mainPackage\main.py&quot;, line 3, in &lt;module&gt; nameGenerator = NameGenerator() File &quot;C:\Users\nicholdw\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\name_generator_nicomp\NameGenerator.py&quot;, line 36, in __init__ self.prepare() File &quot;C:\Users\nicholdw\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\name_generator_nicomp\NameGenerator.py&quot;, line 47, in prepare with importlib.resources.open_text(&quot;src.data&quot;, self.noun_file) as my_file: File &quot;C:\Program 
Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\resources.py&quot;, line 82, in open_text open_binary(package, resource), encoding=encoding, errors=errors File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\resources.py&quot;, line 43, in open_binary package = _common.get_package(package) File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\_common.py&quot;, line 66, in get_package resolved = resolve(package) File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\_common.py&quot;, line 57, in resolve return cand if isinstance(cand, types.ModuleType) else importlib.import_module(cand) File &quot;C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\importlib\__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 992, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 241, in _call_with_frames_removed File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1004, in _find_and_load_unlocked ModuleNotFoundError: No module named 'src' </code></pre>
<python><pypi><hatch>
2024-05-03 17:49:48
1
4,741
nicomp
78,426,163
9,191,338
How do the Python main process and a forked process share GC information?
<p>From the post <a href="https://stackoverflow.com/a/53504673/9191338">https://stackoverflow.com/a/53504673/9191338</a> :</p> <blockquote> <p>spawn objects are cloned and each is finalised in each process</p> <p>fork/forkserver objects are shared and finalised in the main process</p> </blockquote> <p>This seems to be the case:</p> <pre class="lang-py prettyprint-override"><code>import os from multiprocessing import Process import multiprocessing multiprocessing.set_start_method('fork', force=True) class Track: def __init__(self): print(f'{os.getpid()=} object created in {__name__=}') def __getstate__(self): print(f'{os.getpid()=} object pickled in {__name__=}') return {} def __setstate__(self, state): print(f'{os.getpid()=} object unpickled in {__name__=}') return self def __del__(self): print(f'{os.getpid()=} object deleted in {__name__=}') def f(x): print(f'{os.getpid()=} function executed in {__name__=}') if __name__ == '__main__': x = Track() for i in range(2): print(f'{os.getpid()=} Iteration: {i}, Process object created') p = Process(target=f, args=(x,)) print(f'{os.getpid()=} Iteration: {i}, Process created and started') p.start() print(f'{os.getpid()=} Iteration: {i}, Process starts to run functions') p.join() </code></pre> <p>The output is:</p> <pre class="lang-py prettyprint-override"><code>os.getpid()=30620 object created in __name__='__main__' os.getpid()=30620 Iteration: 0, Process object created os.getpid()=30620 Iteration: 0, Process created and started os.getpid()=30620 Iteration: 0, Process starts to run functions os.getpid()=30623 function executed in __name__='__main__' os.getpid()=30620 Iteration: 1, Process object created os.getpid()=30620 Iteration: 1, Process created and started os.getpid()=30620 Iteration: 1, Process starts to run functions os.getpid()=30624 function executed in __name__='__main__' os.getpid()=30620 object deleted in __name__='__main__' </code></pre> <p>Indeed the object is only deleted in the main process.</p> <p>My 
question is: how is this achieved? Although the new process is forked from the main process, after the fork it is a separate process, so how can the two processes share GC information?</p> <p>In addition, does this GC-information sharing happen for every object, or only for objects passed as arguments to the subprocess?</p>
<python><python-multiprocessing>
2024-05-03 17:30:25
2
2,492
youkaichao
78,426,073
10,240,072
Best way to avoid a loop
<p>I have two dataframes of numbers, x and y, of the same length, and an input number a. I would like to find the fastest way to calculate a third series z such that:</p> <ul> <li><code>z[0] = a</code></li> <li><code>z[i] = z[i-1]*(1+x[i]) + y[i]</code></li> </ul> <p>without using a loop like this:</p> <pre class="lang-py prettyprint-override"><code>a = 213 x = pd.DataFrame({'RandomNumber': np.random.rand(200)}) y = pd.DataFrame({'RandomNumber': np.random.rand(200)}) z = pd.Series(index=x.index, dtype=float) z[0] = a for i in range(1,len(x.index)): z[i] = z[i-1]*(1+x.iloc[i]) + y.iloc[i] </code></pre>
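The recurrence is linear, so it can be rewritten in closed form, z[i] = c[i] * (a + sum_{j=1..i} y[j]/c[j]) where c[i] = prod_{k=1..i} (1 + x[k]), and computed with cumulative products and sums; a sketch on plain NumPy arrays (caveat: the division by the cumulative product can lose precision or overflow for long series, so validate against the loop):

```python
import numpy as np

def recurrence_loop(a, x, y):
    # reference implementation, direct translation of the recurrence
    z = np.empty(len(x))
    z[0] = a
    for i in range(1, len(x)):
        z[i] = z[i - 1] * (1 + x[i]) + y[i]
    return z

def recurrence_vectorized(a, x, y):
    # c[i] = prod_{k=1..i} (1 + x[k]), with c[0] = 1
    c = np.cumprod(np.concatenate(([1.0], 1 + x[1:])))
    # z[i] = c[i] * (a + sum_{j=1..i} y[j] / c[j])
    return c * np.cumsum(np.concatenate(([a], y[1:])) / c)

rng = np.random.default_rng(0)
x, y = rng.random(200), rng.random(200)
print(np.allclose(recurrence_loop(213, x, y), recurrence_vectorized(213, x, y)))
```

For pandas input, `x` and `y` can be extracted first with `.to_numpy().ravel()` and the result wrapped back into a `pd.Series`.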
<python><pandas><dataframe><optimization>
2024-05-03 17:09:48
1
313
Fred Dujardin
78,426,050
4,212,158
How to index a numpy memmap without creating an in-memory copy?
<p>Which indexing operations on <code>numpy.memmap</code> arrays return an in-memory copy vs a view that is still backed by a file? The <a href="https://numpy.org/doc/stable/reference/generated/numpy.memmap.html#numpy.memmap" rel="nofollow noreferrer">documentation</a> doesn't explain which indexing operations are &quot;safe&quot;.</p>
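The general NumPy rule carries over to `memmap`: basic slicing returns a view that is still backed by the file, while advanced (integer-array or boolean) indexing materializes an in-memory copy; a quick check, sketched with `np.shares_memory`:

```python
import os
import tempfile
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "demo.dat")
m = np.memmap(path, dtype=np.float64, mode="w+", shape=(100, 10))

view = m[10:20, ::2]   # basic slicing -> view into the mapped file
fancy = m[[1, 5, 7]]   # integer-array indexing -> in-memory copy

print(np.shares_memory(view, m))   # True: writes go through to the file
print(np.shares_memory(fancy, m))  # False: detached from the file
```

Boolean-mask indexing likewise copies. Note the copies may still report type `numpy.memmap` (the subclass propagates through many operations), so checking `type()` is unreliable; `np.shares_memory` or inspecting `.base` is the safer test.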
<python><arrays><numpy><numpy-memmap><memmap>
2024-05-03 17:05:30
1
20,332
Ricardo Decal
78,425,933
3,070,181
How to get rid of a popup menu opened by mistake on Linux
<p>I have a Treeview with a bound context menu which appears when I right click on the mouse. Sometimes I right click by accident. How do I get rid of the context menu without adding an item to close it?</p> <pre><code>import tkinter as tk from tkinter import ttk MEMBER_COLUMNS = ( ('first_name', 'First name', 120), ) class MainFrame(): def __init__(self): self.root = tk.Tk() self.tree = self._member_tree() self.tree.grid(row=0, column=0) self.context_menu = tk.Menu(self.root, tearoff=0) self.context_menu.add_command(label='Delete', command=self._delete) self.root.mainloop() def _member_tree(self) -&gt; ttk.Treeview: tree = ttk.Treeview(self.root, selectmode='browse', height=5) tree.grid(row=0, column=0, sticky=tk.NSEW) tree.bind('&lt;Button-3&gt;', self._show_context_menu) tree.column(&quot;#0&quot;, width=0) col_list = tuple([col[0] for col in MEMBER_COLUMNS]) tree['columns'] = col_list for col in MEMBER_COLUMNS: (col_key, col_text, col_width) = (col[0], col[1], col[2]) tree.heading(col_key, text=col_text) tree.column(col_key, width=col_width, anchor=tk.W) for member in ['John', 'Fred']: tree.insert('', 'end', values=member) return tree def _show_context_menu(self, event): self.context_menu.post(event.x_root, event.y_root) def _delete(self): print('delete') if __name__ == '__main__': MainFrame() </code></pre>
<python><linux><tkinter>
2024-05-03 16:42:54
0
3,841
Psionman
78,425,804
11,318,930
polars: how to optimize sink_parquet
<p>I'm working on string manipulation in large files, which is explained <a href="https://stackoverflow.com/questions/78202730/polars-efficient-way-to-apply-function-to-filter-column-of-strings">here</a>. Based on the recommendations I received, I am using <code>LazyFrame</code>s and the <code>sink_parquet</code> function. As I try to improve throughput I want to optimize the <code>sink_parquet</code> call.</p> <p>One of the <code>sink_parquet</code> parameters is <code>row_group_size</code>. In my case I'm ingesting a 78M row parquet file using <code>pl.scan_parquet()</code>. The file info shows there are 40 <code>row_groups</code>. The <code>sink_parquet</code> function describes <code>row_group_size</code> as follows:</p> <blockquote> <p><code>row_group_size</code>:<br /> Size of the row groups in number of rows. If <code>None</code> (default), the chunks of the <code>DataFrame</code> are used. Writing in smaller chunks may reduce memory pressure and improve writing speeds.</p> </blockquote> <p>My questions:</p> <ul> <li>does it make sense to try to optimize <code>sink_parquet</code>; and if so</li> <li>does the function calculate the row-group size from the file information, i.e. # of rows / # of row_groups, and if not, what flexibility do I have in what I specify; or</li> <li>if I accept the default, what are the DataFrame <code>chunks</code> (esp. since the input is a LazyFrame)? There is no 'memory pressure' as the process is only consuming ~5G of an available ~128G, so there is room for larger chunks if that can be done.</li> </ul> <p>I've looked and found <a href="https://stackoverflow.com/questions/75300636/issue-while-using-py-polars-sink-parquet-method-on-a-lazyframe">this</a> and looked <a href="https://github.com/pola-rs/polars/issues?q=is%3Aissue%20is%3Aopen%20" rel="nofollow noreferrer">here</a> for clues but have not come up with anything.</p> <p><strong>Update:</strong> Experiments on a 600K row data set were not promising.
I varied the <code>row_group_size</code> over a range of 30-100% of total rows but there was very little difference in overall processing time or memory consumption. There was a significant increase in CPU usage at the smaller number of rows. I've observed CPU use is high (~95-100%) during parquet writes so small <code>row_group_size</code> means more frequent writing.</p>
<python><python-3.x><rust><parquet><python-polars>
2024-05-03 16:13:07
0
1,287
MikeB2019x
78,425,471
13,132,640
Custom bin sizes on heatmap within seaborn pairgrid?
<p>I have a use-case where I will regularly receive data in the same format and want to quickly compare individual distributions of each variable as well as pairwise / bivariate distributions. I was originally using sns.pairplot out of the box, but I realized the scatter plots are not great for my application: I sometimes have different amounts of data, and the bounds of the data will always be different. So instead, I want to create all histograms with consistent bin sizes selected based on my knowledge of the variable.</p> <p>I found a great answer which helped me use sns.PairGrid to apply a custom histogram function along the diagonal with the bins I want (<a href="https://stackoverflow.com/a/56387759">https://stackoverflow.com/a/56387759</a>). Now I'd like to do the same with the lower joint distributions, and I'm not sure how to do this, since the individual function gets handed the data without knowledge of which variable it corresponds to. Here's the example code, based on the answer linked above.</p> <pre><code>iris = sns.load_dataset(&quot;iris&quot;, cache=True)
col_list = ['petal_length', 'petal_width', 'sepal_length', 'sepal_width']
cols = iter(col_list)
bins = {'sepal_length' : 10, 'sepal_width' : 5, 'petal_length' : 35, 'petal_width' : 12}

def myhist(x, **kwargs):
    b = bins[next(cols)]
    plt.hist(x, bins=b, **kwargs)

def pairgrid_heatmap(x, y, **kws):
    # how to retrieve correct bins here, given only x,y?
    cmap = sns.light_palette(kws.pop(&quot;color&quot;), as_cmap=True)
    plt.hist2d(x, y, cmap=cmap, cmin=1, **kws)

g = sns.PairGrid(iris, vars=col_list)
g = g.map_diag(myhist)
g = g.map_offdiag(pairgrid_heatmap)
plt.show()
</code></pre>
<python><matplotlib><seaborn>
2024-05-03 15:06:32
1
379
user13132640
78,425,430
2,703,209
How to kill a subprocess and all of its descendants on timeout in Python?
<p>I have a Python program which launches subprocesses like this:</p> <pre><code>subprocess.run(cmdline, shell=True) </code></pre> <p>A subprocess can be anything that can be executed from a shell: Python programs, binaries or scripts in any other language.</p> <p><code>cmdline</code> is something like <code>export PYTHONPATH=$PYTHONPATH:/path/to/more/libs; command --foo bar --baz 42</code>.</p> <p>Some of the subprocesses launch child processes of their own, and theoretically these could again launch child processes. There is no hard-and-fast limit on how many generations of descendant processes there can be. Children of subprocesses are third-party tools and I have no control over how they launch additional processes.</p> <p>The program needs to run on Linux (currently only Debian-based distros) as well as some FreeBSD derivatives – thus it needs to be portable across various Unix-like OSes, while Windows compatibility will probably not be needed in the foreseeable future. It is meant to be installed via the OS package manager, as it comes with OS configuration files and since everything else on the target system uses that as well. That means I’d prefer not having to use PyPI for my program or for any dependencies.</p> <p>If a subprocess hangs (possibly because it is waiting for a hung descendant), I want to implement a timeout that kills the subprocess, along with anything that has the subprocess as an ancestor.</p> <p>Specifying a timeout on <code>subprocess.run()</code> does not work, as only the immediate subprocess gets killed, but its children get adopted by PID 1 and continue to run. Also, since <code>shell=True</code>, the subprocess is the shell and the actual command, being a child of the shell, will happily continue. 
The latter could be solved by passing proper <code>args</code> and <code>env</code> and skipping the shell, which would kill the actual process for the command, but not any child processes of it.</p> <p>I then tried to use <code>Popen</code> directly:</p> <pre><code>with Popen(cmdline, shell=True) as process:
    try:
        stdout, stderr = process.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        killtree(process)  # custom function to kill the whole process tree
</code></pre> <p>Except, in order to write <code>killtree()</code>, I need to list all child processes of <code>process</code> and recursively do the same for each child process, then kill these processes one by one. <code>os.kill()</code> provides a way to kill any process, given its PID, or send it a signal of my choice. However, <code>Popen</code> does not provide any way to enumerate child processes, nor am I aware of any other way.</p> <p>Some other answers suggest <a href="https://github.com/giampaolo/psutil" rel="nofollow noreferrer">psutil</a>, but that requires me to install PyPI packages, which I would like to avoid.</p> <p>TL;DR: <strong>Given the above constraints, is there any way to launch a process from Python, with a timeout that kills the entire tree of processes, from said process down to its last descendant?</strong></p>
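For context, one stdlib-only pattern that fits these constraints is to start the shell in its own session (and therefore its own process group) and, on timeout, signal the whole group. This is a sketch, not a complete solution: it assumes no descendant calls `setsid()` itself, since such a process would leave the group and escape the kill.

```python
import os
import signal
import subprocess

def run_with_tree_timeout(cmdline, timeout):
    """Run cmdline through the shell; on timeout, kill the shell and
    every descendant still in its process group."""
    # start_new_session=True calls setsid() in the child, so the shell
    # becomes the leader of a fresh process group (POSIX-only, but works
    # on both Linux and FreeBSD with no third-party packages).
    proc = subprocess.Popen(cmdline, shell=True, start_new_session=True)
    try:
        return proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        # negative target via killpg: signal the whole group, not just the shell
        os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        return proc.wait()
```

A process that deliberately detaches into its own session still escapes this; closing that hole needs OS-level mechanisms (e.g. cgroups on Linux, jails on FreeBSD).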
<python><linux><subprocess><freebsd>
2024-05-03 14:58:40
2
6,269
user149408
78,425,424
12,902,027
How can you specify python runtime version in vercel?
<p>I am trying to deploy a simple FastAPI app to Vercel for the first time. My vercel.json is exactly as below.</p> <pre class="lang-json prettyprint-override"><code>{
    &quot;devCommand&quot;: &quot;uvicorn main:app --host 0.0.0.0 --port 3000&quot;,
    &quot;builds&quot;: [
        {
            &quot;src&quot;: &quot;api/index.py&quot;,
            &quot;use&quot;: &quot;@vercel/python&quot;,
            &quot;config&quot;: {
                &quot;maxLambdaSize&quot;: &quot;15mb&quot;,
                &quot;runtime&quot;: &quot;python3.9&quot;
            }
        }
    ],
    &quot;routes&quot;: [
        {
            &quot;src&quot;: &quot;/(.*)&quot;,
            &quot;dest&quot;: &quot;api/index.py&quot;
        }
    ]
}
</code></pre> <p>I have specified the runtime as python3.9, but this doesn't change the actual runtime, which is still python3.12 (the default). This ends up causing an internal error.</p> <p>How can I configure the runtime version correctly?</p> <p>I also read the official <a href="https://vercel.com/docs/projects/project-configuration#builds" rel="nofollow noreferrer">docs</a>, which say the <code>builds</code> property shouldn't be used, so I tried rewriting it as below.</p> <pre class="lang-json prettyprint-override"><code>{
    &quot;devCommand&quot;: &quot;uvicorn main:app --host 0.0.0.0 --port 3000&quot;,
    &quot;functions&quot;: {
        &quot;api/index.py&quot;: {
            &quot;runtime&quot;: &quot;python@3.9&quot;
        }
    },
    &quot;routes&quot;: [
        {
            &quot;src&quot;: &quot;/(.*)&quot;,
            &quot;dest&quot;: &quot;api/index.py&quot;
        }
    ]
}
</code></pre> <p>This didn't work either. Maybe I shouldn't use Vercel for a Python project? (There is little information on the internet.)</p>
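One detail worth checking against Vercel's current Python runtime documentation (stated here as an assumption, since the docs change): the Python version is not chosen through vercel.json at all, but read from a `Pipfile` in the project root, along the lines of:

```
# Pipfile (TOML) at the project root -- assumption: the @vercel/python
# builder selects the interpreter version from this section.
[requires]
python_version = "3.9"
```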
<python><fastapi><vercel>
2024-05-03 14:57:03
1
301
agongji
78,425,396
18,730,707
Can't find table position in image with opencv
<p>I want to process tabular data from images. To do this, we read the image with <code>opencv</code> and find where the table is by going through the following seven steps. In image number 7, we plan to crop based on the border. In the following example data, it works exactly what I want. This is because there is a black outer border of the image internal table.</p> <pre class="lang-py prettyprint-override"><code>image = cv2.imread(image_path, cv2.IMREAD_COLOR) </code></pre> <p><a href="https://i.sstatic.net/8urUqTKZ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8urUqTKZ.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>grayscaled_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) </code></pre> <p><a href="https://i.sstatic.net/zOWji7G5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zOWji7G5.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>_, thresholded_image = cv2.threshold(grayscaled_image, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU) </code></pre> <p><a href="https://i.sstatic.net/oVa5JZA4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oVa5JZA4.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>inverted_image = cv2.bitwise_not(thresholded_image) </code></pre> <p><a href="https://i.sstatic.net/8vdZpETK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8vdZpETK.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>dilated_image = cv2.dilate(inverted_image, None, iterations=3) </code></pre> <p><a href="https://i.sstatic.net/9Y6QshKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9Y6QshKN.png" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>contours, _ = cv2.findContours(dilated_image, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) image_with_all_contours = 
image.copy() cv2.drawContours(image_with_all_contours, contours, -1, (0, 255, 0), 2) </code></pre> <p><a href="https://i.sstatic.net/AJSXO4t8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJSXO4t8.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>rectangular_contours = [] for contour in contours: peri = cv2.arcLength(contour, True) epsilon = peri * 0.02 approx = cv2.approxPolyDP(contour, epsilon, True) if len(approx) == 4: rectangular_contours.append(approx) image_with_only_rectangular_contours = image.copy() cv2.drawContours(image_with_only_rectangular_contours, rectangular_contours, -1, (0, 255, 0), 2) </code></pre> <p><a href="https://i.sstatic.net/pBV8UzUf.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBV8UzUf.jpg" alt="enter image description here" /></a></p> <pre class="lang-py prettyprint-override"><code>max_area = 0 max_area_contour = None for contour in rectangular_contours: area = cv2.contourArea(contour) if area &gt; max_area: max_area = area max_area_contour = contour image_with_max_area_contour = image.copy() cv2.drawContours(image_with_max_area_contour, [max_area_contour], -1, (0, 255, 0), 2) </code></pre> <p><a href="https://i.sstatic.net/oTZI30xA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTZI30xA.jpg" alt="enter image description here" /></a></p> <p>However, there are cases where the table does not have an outer border, as shown in the following picture. In reality, the image I want to work with does not have lines on the outside. The image below is a temporary work created for explanation purposes.</p> <p>As you can see in the picture above, if there is no outer border, problems arise in the process of obtaining the <code>Thresholded Image</code>. Later, it becomes impossible to leave a square contour line by doing <code>cv2.findContours</code>.</p> <p>Ultimately, what I want is to read the values in the Name and Favorite columns into Pandas. 
I am currently following the process by referring to <a href="https://livefiredev.com/how-to-extract-table-from-image-in-python-opencv-ocr/" rel="nofollow noreferrer">this post</a>. How can I select the rectangle of the largest contour?</p> <p><a href="https://i.sstatic.net/AJt3amW8.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJt3amW8.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/jZ79jCFd.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jZ79jCFd.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/IYftYunW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYftYunW.png" alt="enter image description here" /></a></p> <h1>TRYING 1 by @Ivan</h1> <p>The following image shows the result when <code>cv2.RETR_EXTERNAL</code> is used with <code>cv2.findContours</code>. The contours are drawn as follows; that is, the outermost part is not drawn.</p> <p><a href="https://i.sstatic.net/oTT93HdA.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTT93HdA.jpg" alt="enter image description here" /></a></p> <h1>TRYING 2 by @cards</h1> <p>If I use a 50*30 kernel instead of the default kernel size, a distorted rectangle is selected, as shown below.</p> <pre class="lang-py prettyprint-override"><code>dilated_image = cv2.dilate(inverted_image, np.ones((50, 30), np.uint8), iterations=1)
</code></pre> <p><a href="https://i.sstatic.net/ABhM6b8J.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ABhM6b8J.jpg" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/FJHw5AVo.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FJHw5AVo.jpg" alt="enter image description here" /></a></p>
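One direction that may help when there is no outer border (an untested sketch, reusing the `contours` and `image` variables from the pipeline above): rather than insisting on one single rectangular contour, take the bounding box that encloses all foreground contours after dilation and crop to that.

```python
import numpy as np

# Stack every contour's points into one (N, 2) array and bound them all;
# with no outer border, the union of the text/line contours still spans
# the table region.
all_points = np.vstack([c.reshape(-1, 2) for c in contours])
x, y, w, h = cv2.boundingRect(all_points)
table_crop = image[y:y + h, x:x + w]
```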
<python><opencv>
2024-05-03 14:49:56
2
878
na_sacc
78,425,322
3,649,441
Pandas daily to weekly OHLC data resampling
<p>I'm resampling daily OHLC data fetched from Yahoo Finance to weekly with the following code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import yfinance as yf

ticker = yf.Ticker('AAPL')
daily_data = ticker.history(start='2024-04-01', end='2024-05-03', interval='1d')
weekly_data = daily_data.resample('W-Sat').agg({
    'Open': 'first',
    'High': 'max',
    'Low': 'min',
    'Close': 'last',
    'Volume': 'sum'
}).shift(-1).dropna()
</code></pre> <p>The code works, the weekly data matches Yahoo Finance weekly data that would be fetched by:</p> <pre class="lang-py prettyprint-override"><code>ticker = yf.Ticker('AAPL')
yahoo_weekly_data = ticker.history(
    start='2024-04-01', end='2024-05-03', interval='1w')
</code></pre> <p>However I do not understand why I have to use <code>shift(-1)</code> to get the correct data.<br /> Any insight would be much appreciated.</p>
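The `shift(-1)` is most likely compensating for pandas' weekly labeling convention: weekly bins are right-closed and right-labeled by default, so `'W-SAT'` stamps each bin with the Saturday that *ends* it, while Yahoo labels a week by its start. A small synthetic example (unrelated to AAPL) makes the labeling visible:

```python
import pandas as pd

# 14 daily points starting on Monday 2024-01-01
s = pd.Series(1, index=pd.date_range("2024-01-01", periods=14, freq="D"))

weekly = s.resample("W-SAT").sum()
print(weekly)
# 2024-01-06    6   <- Mon Jan 1 .. Sat Jan 6, labeled by the bin's *end*
# 2024-01-13    7   <- Sun Jan 7 .. Sat Jan 13
# 2024-01-20    1   <- Sun Jan 14 alone, labeled by the following Saturday
```

`label`/`closed` arguments to `resample` are another way to control this without shifting rows afterwards.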
<python><python-3.x><pandas><yfinance>
2024-05-03 14:35:22
1
1,009
Chris
78,425,234
145,682
cannot fill curses window region with text - is this the design?
<p>Here is code to simply fill a terminal screen:</p> <pre class="lang-py prettyprint-override"><code>import curses

def main(stdscr):
    max_y = curses.LINES
    max_x = curses.COLS
    rows = max_y - 1
    cols = max_x - 3
    total_chars1 = rows * cols
    stdscr.addstr(f'Screen: {max_y}x{max_x} = {max_x * max_y} chars, Window1: {rows}x{cols} = {rows * cols}, Window2: {rows}')
    stdscr.refresh()
    win_start_y = 1
    win_start_x = 0
    textwin = curses.newwin(rows, cols, win_start_y, win_start_x)
    pagewin = curses.newwin(rows, 1, 1, max_x - 1)
    textwin.addstr('x' * (total_chars1 - 1))
    textwin.refresh()
    pagewin.addstr('y' * (rows - 1))
    pagewin.refresh()
    stdscr.getch()

curses.wrapper(main)
</code></pre> <p>But I find that I can't add the expected total number of characters; I have to subtract one.</p> <p><a href="https://i.sstatic.net/A2TMJkb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2TMJkb8.png" alt="enter image description here" /></a></p> <p>Is this by design? Or is there more going on here than meets the eye?</p>
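This matches long-standing (n)curses behavior: writing the bottom-right cell of a window succeeds, but the automatic cursor advance past the end of the window then fails and raises `curses.error`. A common workaround (a sketch only, untested here since curses needs a real terminal) is to swallow that one exception:

```python
import curses

def fill_window(win, ch="x"):
    rows, cols = win.getmaxyx()
    for y in range(rows):
        try:
            win.addstr(y, 0, ch * cols)
        except curses.error:
            # writing the very last cell worked, but the cursor could not
            # advance past the end of the window, so curses raised -- safe
            # to ignore on the final row
            pass
```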
<python><python-curses>
2024-05-03 14:21:07
0
11,985
deostroll
78,425,228
1,654,226
How do I type a method in base class that returns another class instance that is defined in child class?
<p>Imagine a base class <code>BaseA</code> with a method <code>get_b</code> that returns an instance of class <code>B</code>. Class <code>B</code> is defined in both <code>BaseA</code> and another class <code>Child</code> that inherits from <code>BaseA</code>.</p> <pre class="lang-py prettyprint-override"><code>class BaseA:
    def get_b(self):
        return self.B()

    class B:
        pass


class Child(BaseA):
    class B(BaseA.B):
        def hello(self):
            return &quot;hello&quot;


Child().get_b().hello()
</code></pre> <p>When this is run, <code>get_b</code> on an instance of <code>Child</code> returns an instance of <code>Child</code>'s <code>B</code> and works as expected, but Pyright says the return type of <code>get_b</code> is <code>BaseA.B</code>, so the extended methods are not available.</p> <p>It produces this Pyright error:</p> <pre><code>Diagnostics:
1. Cannot access attribute &quot;hello&quot; for class &quot;B&quot;
   Attribute &quot;hello&quot; is unknown [reportAttributeAccessIssue]
</code></pre> <p>Is there any way to type hint this correctly?</p>
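One possible way to express this (a sketch, one of several designs): make the base class generic in the type of `B`, and `cast` at the single point where the checker loses track of the dynamic lookup:

```python
from typing import Generic, TypeVar, cast

B_co = TypeVar("B_co", bound="BaseA.B")

class BaseA(Generic[B_co]):
    class B:
        pass

    def get_b(self) -> B_co:
        # At runtime this resolves B on the most-derived class exactly as
        # before; the cast only informs the type checker.
        return cast(B_co, type(self).B())

class Child(BaseA["Child.B"]):
    class B(BaseA.B):
        def hello(self) -> str:
            return "hello"
```

With this, `Child().get_b()` is typed as `Child.B`, so `.hello()` type-checks; the cost is spelling out the type argument in each subclass header.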
<python><python-typing><pyright>
2024-05-03 14:19:54
1
2,361
squgeim
78,425,091
1,951,507
Unexpected behavior with "only" restriction type in owlready2
<p>I don't understand the reasoning for the model below in owlready2:</p> <pre><code>class Equipment(Thing): pass
class Task(Thing): pass
class EquipmentTask(Task): pass
class task_executed_by_equipment(EquipmentTask &gt;&gt; Equipment, FunctionalProperty): pass
class task_included_subtasks(Task &gt;&gt; Task): pass

class EquipmentTask(Task):
    equivalent_to = [task_executed_by_equipment.exactly(1, Equipment)]

class MainTask(EquipmentTask):
    equivalent_to = [task_included_subtasks.only(EquipmentTask)]

task = Task(task_executed_by_equipment=Equipment())
sync_reasoner()
</code></pre> <p>The reasoner output says:</p> <pre><code>* Owlready * Equivalenting: owl.Thing t.Task
* Owlready * Equivalenting: t.Task owl.Thing
* Owlready * Reparenting t.task1: {t.Task} =&gt; {t.EquipmentTask}
* Owlready * Reparenting t.equipment1: {t.Equipment} =&gt; {t.Task, t.Equipment}
</code></pre> <p>An equipment individual should not be classified as a task.</p> <p>If I change the &quot;only&quot; type to &quot;some&quot;, then it behaves as I expected.</p> <p>Does anyone have a hint/idea about the reasoning that is followed, or where my line of thought goes wrong?</p> <p>Thanks in advance, Patrick.</p> <p>Edited:</p> <p>The same phenomenon occurs in the following simplified example:</p> <pre><code>class P(ObjectProperty):
    domain=[C]

class C(Thing):
    equivalent_to = [P.only(C)]
</code></pre> <p>This results in C equivalent to Thing according to HermiT.
(The ELK reasoner in Protege does not come to this inference.)</p> <p>Another example that might help to get things sharper (from a presentation by Alan Rector et al.):</p> <p>Intuitive definition:</p> <pre><code>Class MargheritaPizza subclass from Pizza
    restriction(hasTopping someValuesFrom(TomatoTopping))
    restriction(hasTopping someValuesFrom(MozzarellaTopping))
</code></pre> <p>But there is a problem: the restriction that the toppings must only be tomato or mozzarella is missing.</p> <p>Correct:</p> <pre><code>Class MargheritaPizza subclass from Pizza
    restriction(hasTopping someValuesFrom(TomatoTopping))
    restriction(hasTopping someValuesFrom(MozzarellaTopping))
    restriction(hasTopping allValuesFrom(TomatoTopping or MozzarellaTopping))
</code></pre> <p>Actually this looks rather intuitive, and it is also in line with the comment from @Ignacio to have a companion someValuesFrom restriction.</p> <p>But when the &quot;allValuesFrom&quot; term is replaced by &quot;only&quot; as in OWLready (from Manchester OWL?), then just having the restriction(hasTopping only(...)) sounds fully intuitive again.</p> <p>What would be the meaning of this &quot;allValuesFrom&quot; restriction alone, in light of the Pizza example?</p>
<python><owl><ontology>
2024-05-03 13:49:20
0
1,052
pfp.meijers
78,424,991
3,941,671
pyside6 QObject::killTimer message when running GUI in separate thread
<p>I want to set up a gui that runs in its own thread. Therefor I create my own class <code>MainWindow</code> as a subclass of <code>QMainWindow</code> and created a function <code>run()</code> that is executed in another thread (see code below).</p> <p>Now my problem: When I execute <code>run()</code> in a separate thread and use my own <code>MainWindow</code> as a widget, I always get the debug messages when closing the window:</p> <pre><code>QObject::killTimer: Timers cannot be stopped from another thread QObject::~QObject: Timers cannot be stopped from another thread </code></pre> <p>But when I execute <code>run()</code> in a separate thread and use an empty <code>QMainWindow</code> as a widget, these messages are not prompted. And when I execute <code>run()</code> in the main thread and not in a separate thread together with my <code>MainWindow</code>-widget, these messages also are not prompted.</p> <p>So my question is, where does this &quot;timer&quot; comes from? I do not use a timer inside my widget-class!?</p> <pre><code>from threading import Thread import logging from pathlib import Path from PySide6.QtCore import Qt, QSize, QTimer from PySide6.QtGui import QAction, QCloseEvent, QIcon, QScreen from PySide6.QtWidgets import ( QApplication, QMainWindow, QWidget, QToolBar, QHBoxLayout, QVBoxLayout, QLabel, QPushButton, QSlider ) import matplotlib matplotlib.use('Qt5Agg') from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg from matplotlib.backends.backend_qt5agg import NavigationToolbar2QT as NavigationToolbar from matplotlib.figure import Figure from modules.control.control import Control, ControlCommand, ControlState, ControlStatus from modules.machine.machine import Machine, MachineCommand, MachineState, MachineStatus from modules.laserscanner.laserscanner import Laserscanner, LaserscannerCommand, LaserscannerState, LaserscannerStatus class MplCanvas(FigureCanvasQTAgg): def __init__(self, parent: QWidget = None, width = 5, height = 4, dpi = 
150): _fig = Figure(figsize=(width, height), dpi=dpi) self._axes = _fig.add_subplot() super(MplCanvas, self).__init__(_fig) class MainWindow(QMainWindow): def __init__(self, control: Control, machine: Machine, scanner: Laserscanner) -&gt; None: super(MainWindow, self).__init__() # references self._control = control self._machine = machine self._scanner = scanner self._nc_file: Path = Path() # title self.setWindowTitle('label_title') # menu # actions self._action_menu_file_load = QAction(QIcon.fromTheme(QIcon.ThemeIcon.DocumentOpen), 'label_menu_file_load') self._action_menu_file_load.triggered.connect(self.dummy_callback) self._action_menu_file_exit = QAction(QIcon.fromTheme(QIcon.ThemeIcon.ApplicationExit), 'label_menu_file_exit') self._action_menu_file_exit.triggered.connect(self.dummy_callback) self._menu = self.menuBar() self._menu_file = self._menu.addMenu('&amp;' + 'label_menu_file') self._menu_file.addAction(self._action_menu_file_load) self._menu_file.addSeparator() self._menu_file.addAction(self._action_menu_file_exit) # toolbar self._toolbar = QToolBar(self) self._toolbar.setIconSize(QSize(16,16)) self._toolbar.setAllowedAreas(Qt.ToolBarArea.TopToolBarArea) self._toolbar.setMovable(False) self._toolbar.setFloatable(False) self._toolbar.addAction(self._action_menu_file_load) self.addToolBar(self._toolbar) # layout &amp; content # boxes self._box_window = QHBoxLayout() self._box_control = QVBoxLayout() self._box_control_view = QHBoxLayout() self._box_control_view_area = QVBoxLayout() self._box_control_view_slider = QVBoxLayout() self._box_control_selection = QHBoxLayout() self._box_control_machine = QHBoxLayout() self._box_status = QVBoxLayout() # fill boxes # control # viewing area self._sc = MplCanvas(self, width=5, height=4, dpi=150) self._sc._axes.plot([0,1,2,3,4], [10,1,20,3,40]) self._toolbar_sc = NavigationToolbar(self._sc, self) self._box_control_view_area.addWidget(self._toolbar_sc) self._box_control_view_area.addWidget(self._sc) 
self._label_layer_max = QLabel('10') self._label_layer_max.setAlignment(Qt.AlignmentFlag.AlignHCenter) self._slider_layer = QSlider() self._label_layer_min = QLabel('0') self._label_layer_min.setAlignment(Qt.AlignmentFlag.AlignHCenter) self._box_control_view_slider.addWidget(self._label_layer_max) self._box_control_view_slider.addWidget(self._slider_layer) self._box_control_view_slider.addWidget(self._label_layer_min) self._box_control_view.addLayout(self._box_control_view_area) self._box_control_view.addLayout(self._box_control_view_slider) # viewing area buttons self._button_view_welding = QPushButton('button_welding') self._button_view_scan = QPushButton('button_scan') self._button_view_defects = QPushButton('button_defects') self._button_view_milling = QPushButton('button_milling') self._box_control_selection.addWidget(self._button_view_welding) self._box_control_selection.addWidget(self._button_view_scan) self._box_control_selection.addWidget(self._button_view_defects) self._box_control_selection.addWidget(self._button_view_milling) # machine self._button_machine_start = QPushButton('button_start') self._button_machine_pause = QPushButton('button_pause') self._button_machine_stop = QPushButton('button_stop') self._button_machine_acknowledge = QPushButton('button_acknowledge') self._box_control_machine.addWidget(self._button_machine_start) self._box_control_machine.addWidget(self._button_machine_pause) self._box_control_machine.addWidget(self._button_machine_stop) self._box_control_machine.addWidget(self._button_machine_acknowledge) # pack boxes self._box_control.addLayout(self._box_control_view) self._box_control.addLayout(self._box_control_selection) self._box_control.addLayout(self._box_control_machine) # status self._label_status_control = QLabel('label_status') self._label_status_control_state = QLabel('unknown') self._label_status_control_state.setAlignment(Qt.AlignmentFlag.AlignHCenter) self._label_status_control_state.setStyleSheet('background-color: 
rgb(255,0,0); margin:5px; border:1px solid rgb(0, 0, 0);') self._label_status_machine = QLabel('label_status_machine') self._label_status_machine_state = QLabel('unknown') self._label_status_machine_state.setAlignment(Qt.AlignmentFlag.AlignHCenter) self._label_status_machine_state.setStyleSheet('background-color: rgb(255,0,0); margin:5px; border:1px solid rgb(0, 0, 0);') self._label_status_scanner = QLabel('label_status_scanner') self._label_status_scanner_state = QLabel('unknown') self._label_status_scanner_state.setAlignment(Qt.AlignmentFlag.AlignHCenter) self._label_status_scanner_state.setStyleSheet('background-color: rgb(255,0,0); margin:5px; border:1px solid rgb(0, 0, 0);') self._box_status.addWidget(self._label_status_control) self._box_status.addWidget(self._label_status_control_state) self._box_status.addWidget(self._label_status_machine) self._box_status.addWidget(self._label_status_machine_state) self._box_status.addWidget(self._label_status_scanner) self._box_status.addWidget(self._label_status_scanner_state) self._box_status.addStretch() # pack boxes self._box_window.addLayout(self._box_control) self._box_window.addLayout(self._box_status) # central widget self._widget_central = QWidget() self._widget_central.setLayout(self._box_window) self.setCentralWidget(self._widget_central) def dummy_callback(self): print('dummy callback') def run() -&gt; None: _app = QApplication([]) # _window = QMainWindow() _window = MainWindow(None, None, None) _window.show() _app.exec() def main(): _t_gui = Thread(target=run, name='GUI') _t_gui.start() _t_gui.join() # run() if __name__ == '__main__': main() </code></pre>
<python><multithreading><pyqt><pyside6>
2024-05-03 13:28:42
0
471
paul_schaefer
78,424,943
821,832
Configure VS Code using settings.json to use ruff instead of pylint, pylance, etc
<p>I am looking to set up the <code>settings.json</code> used by VS Code so that I can use ruff instead of pylint, Pylance, or flake8 for linting. I am hoping to get the rules into <code>settings.json</code> and not have external <code>.toml</code> files.</p> <p>Here is what my Python settings currently include:</p> <pre class="lang-json prettyprint-override"><code>&quot;python.languageServer&quot;: &quot;Pylance&quot;,
&quot;[python]&quot;: {
    &quot;editor.formatOnType&quot;: true,
    &quot;editor.formatOnSave&quot;: true,
    &quot;editor.defaultFormatter&quot;: &quot;charliermarsh.ruff&quot;
},
</code></pre> <ul> <li>How do I tell ruff to turn on rules emulating the different linters?</li> <li>How do I then tell VS Code to only use ruff?</li> </ul>
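A sketch of how that might look with everything inline in settings.json. The `ruff.lint.select` key and the rule-group names are assumptions about current versions of the Ruff extension; verify them against the extension's settings reference (older versions took `ruff.lint.args` instead):

```
{
    "[python]": {
        "editor.defaultFormatter": "charliermarsh.ruff",
        "editor.formatOnSave": true
    },
    // Pylint-style rules are Ruff's PL group; E/F cover pycodestyle and
    // pyflakes, the core of flake8
    "ruff.lint.select": ["E", "F", "PL"],
    // Pylance can stay as the language server for IntelliSense while Ruff
    // handles linting, or be disabled entirely:
    "python.languageServer": "None"
}
```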
<python><visual-studio-code><ruff>
2024-05-03 13:19:32
1
614
Charles Plager
78,424,923
14,471,688
Faster way to access the value of a huge dictionary from a huge list of arrays in python
<p>I am curious whether I can access the value of a huge dictionary faster from a huge list of arrays.</p> <p>Here is a simple example:</p> <pre><code>import numpy as np my_list = [np.array([ 1, 2, 3, 4, 5, 6, 8, 9, 10]), np.array([ 1, 3, 5, 6, 7, 10]), np.array([ 1, 2, 3, 4, 6, 8, 9, 10]), np.array([ 1, 3, 4, 7, 15]), np.array([ 1, 2, 4, 5, 10, 16]), np.array([6, 10, 15])] my_dict = {1: 0, 2: 0, 3: 0, 4: 0, 5: 1, 6: 1, 7: 1, 8: 1, 9: 1, 10: 2, 11: 2, 12: 2, 13: 2, 14: 2, 15: 3, 16: 3} </code></pre> <p>Each key in <strong>my_dict</strong> correspond to the values in a list named <strong>my_list</strong></p> <p>I used the following code to get the desired output list:</p> <pre><code>numpy_arr = np.array([my_dict[i] for i in range(1, max(my_dict) + 1)]) output = {frozenset(np.unique(numpy_arr[l - 1])) for l in my_list} res = [&quot; &quot;.join(map(str, s)) for s in output] </code></pre> <p>For instance, I have a bigger dimension of <strong>my_list</strong> and <strong>my_dict</strong> that can be downloaded here: <a href="https://gitlab.com/Schrodinger168/practice/-/tree/master/practice_dictionary" rel="nofollow noreferrer">https://gitlab.com/Schrodinger168/practice/-/tree/master/practice_dictionary</a></p> <p>Here are the codes from the <a href="https://stackoverflow.com/questions/77465362/how-to-access-the-value-of-a-dictionary-faster-from-a-list-of-arrays-in-python/77465401#77465401">previous answer</a>:</p> <pre><code>import ast from timeit import timeit file_list = &quot;list_array.txt&quot; file_dictionary = &quot;dictionary_example.txt&quot; with open(file_dictionary, &quot;r&quot;) as file_dict: my_dict = ast.literal_eval(file_dict.read()) my_list = [] with open(file_list, &quot;r&quot;) as file: for line in file: my_list.append(np.array(list(map(int, line.split())))) def benchmark(my_list, my_dict): numpy_arr = np.array([my_dict[i] for i in range(1, max(my_dict) + 1)]) output = {frozenset(np.unique(numpy_arr[l - 1])) for l in my_list} return [&quot; 
&quot;.join(map(str, s)) for s in output] t1 = timeit(&quot;benchmark(my_list, my_dict)&quot;, number=1, globals=globals()) print(t1) </code></pre> <p>Output time on my computer:</p> <pre><code>2.538002511000059 </code></pre> <p>This approach is already quite good, but in my practical work I have <strong>my_dict</strong> and <strong>my_list</strong> at least 5 to 10 times bigger than <strong>my_list</strong> and <strong>my_dict</strong> in this case, and this approach becomes the bottleneck.</p> <p>Are there any alternatives that can make this run even faster - in under a second, for example in milliseconds?</p>
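One direction worth benchmarking (a sketch under the assumption, true of the example, that the dict keys are small positive integers): replace the per-element Python-level work with a dense lookup table and a single vectorized gather over all arrays concatenated once.

```python
import numpy as np

def map_and_dedupe(my_list, my_dict):
    # dense lookup table: lookup[key] -> value
    lookup = np.empty(max(my_dict) + 1, dtype=np.int64)
    for k, v in my_dict.items():
        lookup[k] = v
    # one vectorized gather over every element of every array
    mapped = lookup[np.concatenate(my_list)]
    # split the flat result back per array and dedupe
    out, pos = set(), 0
    for a in my_list:
        out.add(frozenset(np.unique(mapped[pos:pos + len(a)]).tolist()))
        pos += len(a)
    return [" ".join(map(str, sorted(s))) for s in out]
```

Unlike the original, each output string is sorted numerically, which also makes the deduplicated results deterministic.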
<python><arrays><list><numpy>
2024-05-03 13:15:30
0
381
Erwin
78,424,901
15,341,457
Scrapy - Setting Feed Exporter Overwrite to True
<p>I developed a Scrapy spider and I want to execute it without using the command line. That's why I use <code>CrawlerProcess</code>. I also want the output to be saved to a json file. <strong>Feed exporters</strong> are perfect in my case and the only way I could get them to work is by updating the settings this way:</p> <pre><code>from scraper.spiders.pp_spider import ConverterSpider
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
settings.set('FEED_FORMAT', 'json')
settings.set('FEED_URI', 'result.json')

process = CrawlerProcess(settings)
process.crawl(ConverterSpider)
process.start()
</code></pre> <p>Now I would like to overwrite the output file result.json whenever a new crawl is executed. The way you would usually do it doesn't work with CrawlerProcess (example):</p> <pre><code>FEEDS = {
    'result.json': {'format': 'json', 'overwrite': True}
}
</code></pre> <p>I would like to know how to do something like:</p> <pre><code>settings.set('overwrite', True)
</code></pre>
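For reference, `FEED_FORMAT`/`FEED_URI` are the legacy spellings; the `FEEDS` dict, which carries the `overwrite` flag (available since Scrapy 2.4), can be passed through the same `settings.set` mechanism. A sketch, not run against a live project:

```python
settings = get_project_settings()
settings.set("FEEDS", {"result.json": {"format": "json", "overwrite": True}})
process = CrawlerProcess(settings)

# or equivalently, skipping get_project_settings():
# process = CrawlerProcess(settings={
#     "FEEDS": {"result.json": {"format": "json", "overwrite": True}},
# })
```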
<python><scrapy><export>
2024-05-03 13:10:45
1
332
Rodolfo
78,424,869
1,492,956
Enable/Disable Logging block in Vector CANoe through COM
<p>I am trying to enable/disable the Logging Block using COM objects in Python. I could access the individual logging blocks from the collection, but cannot enable them programmatically. I tried the approach below; the application starts and starts the measurement, but no log file is created.</p> <pre><code>app = client.dynamic.Dispatch('CANoe.Application')
app.Open(configuration)
app.Configuration.OnlineSetup.LoggingCollection(1).Filter.Enable(0)
app.Measurement.Start()
app.Measurement.Stop()
app.Quit()
</code></pre> <p><a href="https://i.sstatic.net/0yl3VUCY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0yl3VUCY.png" alt="enter image description here" /></a></p> <p>How can the logging be enabled/disabled through Python?</p>
<python><win32com><canoe>
2024-05-03 13:03:01
1
902
lerner1225
78,424,819
1,511,294
Pandas setting frequency in a time series dataframe
<p>I have a data frame with monthly data and it looks like this.</p> <pre><code> metric date 2021-01-01 0.822201 2021-02-01 0.845884 2021-03-01 0.868405 2021-04-01 0.866245 2021-05-01 0.861446 2021-06-01 0.859356 2021-07-01 0.864788 2021-08-01 0.868941 2021-09-01 0.875224 2021-10-01 0.868509 2021-11-01 0.859725 </code></pre> <p>The inferred frequency for this data frame is <strong>None</strong></p> <pre><code>DatetimeIndex(['2021-05-01', '2023-08-01', '2023-09-01', '2021-11-01', '2023-06-01', '2022-01-01', '2023-10-01', '2023-12-01', '2024-04-01', '2022-10-01', '2021-08-01', '2021-06-01', '2021-10-01', '2024-03-01', '2023-07-01', '2022-02-01', '2022-07-01', '2021-09-01', '2022-11-01', '2022-03-01', '2024-02-01', '2023-03-01', '2021-12-01', '2021-07-01', '2023-02-01', '2023-11-01', '2023-04-01', '2023-05-01', '2021-03-01', '2021-02-01', '2024-01-01', '2022-05-01', '2021-01-01', '2022-06-01', '2022-09-01', '2022-04-01', '2021-04-01', '2022-12-01', '2023-01-01', '2022-08-01'], dtype='datetime64[ns]', name='date', freq=None) </code></pre> <p>So , I tried to set it explicitly like this:</p> <pre><code>data.index.freq = 'MS' </code></pre> <p>But it failed with the error</p> <p><code>ValueError: Inferred frequency None from passed values does not conform to passed frequency MS</code> I tried to debug if there are any missing values in the index , but it does not seem like it :</p> <pre><code>pd.date_range(data.index.min(), data.index.max(), freq='MS').difference(data.index) </code></pre> <p>Output :</p> <pre><code>DatetimeIndex([], dtype='datetime64[ns]', freq=None) </code></pre>
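One detail worth noting: the printed `DatetimeIndex` in the question is not in chronological order, and pandas can only infer or validate a frequency on a monotonic index, so sorting first may be all that is missing. A small self-contained sketch (three months of synthetic data, not the question's real frame):

```python
import pandas as pd

# Deliberately unsorted monthly index, mimicking the question's situation.
idx = pd.to_datetime(["2021-03-01", "2021-01-01", "2021-02-01"])
data = pd.DataFrame({"metric": [0.3, 0.1, 0.2]}, index=idx)

data = data.sort_index()   # frequency validation needs a monotonic index
data.index.freq = "MS"     # now conforms: consecutive month starts
```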
<python><pandas>
2024-05-03 12:52:22
1
1,665
Ayan Biswas
78,424,750
1,014,809
azure batch update only one property of a job
<p>I have some Python code to create a job and add some tasks. I want the job to <code>OnAllTasksComplete.terminate_job</code>, but I can't set this during job creation, since then it would complete immediately before I have a chance to add tasks. So I do it afterwards.</p> <p>Here is my Python code to update the job:</p> <pre><code>job = batch_service_client.job.update(job_id=job_id,job_update_parameter=batchmodels.JobUpdateParameter(on_all_tasks_complete=batchmodels.OnAllTasksComplete.terminate_job)) </code></pre> <p>However, according to the documentation, the above snippet will not update only this property, but reset all other properties (constraints, etc.). Is there an elegant way to do this?</p>
<python><azure><azure-batch>
2024-05-03 12:40:44
1
827
ZirconCode
78,424,722
447,426
dpkt how check checksum of a given/ received package
<p>I have code that receives pcap packets as <code>bytes</code>. I decode the IP and UDP layers and later the proprietary app layer within (&quot;UMS&quot;).</p> <p>I would like to check the IP header checksum and the UDP checksum before continuing. How can I do this?</p> <p>Here is my current code:</p> <pre><code>import io import socket from typing import Iterator from dpkt import pcap, ip, udp class DecodePcap: &quot;&quot;&quot; Receives a pcap file as bytes to decode it with dpkt. &quot;&quot;&quot; def __init__(self, pcap_bytes: bytes) -&gt; None: self.pcap = pcap.Reader(io.BytesIO(pcap_bytes)) &quot;&quot;&quot; the reader is an iterator of tuples (timestamp, buf) where timestamp is a float and buf is a bytes object. &quot;&quot;&quot; def get_ums_packet_iterator(self) -&gt; Iterator['UmsPacket']: &quot;&quot;&quot; Returns an iterator of UmsPacket. &quot;&quot;&quot; return map(UmsPacket, self.pcap) class UmsPacket: &quot;&quot;&quot; Data class to hold the fields of a packet from pcap. &quot;&quot;&quot; def __init__(self, ums_tuple: tuple) -&gt; None: self.timestamp: float = ums_tuple[0] ip_packet: ip.IP = ip.IP(ums_tuple[1]) udp_packet: udp.UDP = ip_packet.data self.src_ip: str = socket.inet_ntoa(ip_packet.src) self.dst_ip: str = socket.inet_ntoa(ip_packet.dst) self.src_port: int = udp_packet.sport self.dst_port: int = udp_packet.dport self.data: bytes = udp_packet.data # check ip header checksum #?? assert ip_packet.sum # check udp checksum #?? assert udp_packet.sum # 6th byte is the ICD version self.icd: int = self.data[6] </code></pre> <p>What I would not like to do: manually calculate the checksum. I would like a pcap-specific solution.</p>
<python><udp><ip><pcap><dpkt>
2024-05-03 12:34:10
1
13,125
dermoritz
78,424,685
6,435,921
Vectorising this function in python efficiently
<h1>Problem Description</h1> <p>I have written Python code to compute the following function, for fixed <code>y_i</code> and fixed <code>z_i</code>.</p> <p><a href="https://i.sstatic.net/AJzfMom8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJzfMom8.png" alt="enter image description here" /></a></p> <p>In practice, I will have many different vectors <code>x</code> at which I will want to evaluate this. Rather than going through a loop, I wanted to take full advantage of NumPy's vectorization and broadcasting abilities.</p> <blockquote> <p><strong>Aim</strong>: Write a Python function that takes as input a matrix <code>X</code> of shape <code>(M, p)</code> where each row is a different vector <code>x</code> and compute the expression above for each of these rows, thus obtaining an output of shape <code>(M, p)</code>.</p> </blockquote> <h1>Single Function</h1> <pre class="lang-py prettyprint-override"><code>def single_function(x): &quot;&quot;&quot;Assumes y is a vector (n,) and Z is a matrix (n, p). These are fixed.&quot;&quot;&quot; return x / (sigma**2) - gamma*np.sum((np.exp(-np.logaddexp(0.0, y*np.matmul(Z, x))) * y)[:, None] * Z, axis=0) </code></pre> <h1>Vectorised Function</h1> <p>My attempt is wrong, but I cannot tell where I am going wrong.</p> <pre class="lang-py prettyprint-override"><code>def vectorised(X): return X / (sigma**2) - gamma*np.matmul(np.exp(-np.logaddexp(0.0, y[None, :] * X.dot(Z.T))) * y[None, :], Z) </code></pre>
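One way to localize the bug is a numerical cross-check on made-up shapes. Under the assumption that `y` has shape `(n,)` (not `(n, 1)`), the broadcasted rewrite below agrees with the per-row loop, so if the real code disagrees, the shapes of `y`, `Z`, or `X` are the first thing to inspect:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, M = 6, 3, 4
sigma, gamma = 1.5, 0.7
y = rng.choice([-1.0, 1.0], size=n)      # assumed shape (n,)
Z = rng.normal(size=(n, p))

def single_function(x):
    # Per-row version, as in the question.
    return x / sigma**2 - gamma * np.sum(
        (np.exp(-np.logaddexp(0.0, y * (Z @ x))) * y)[:, None] * Z, axis=0)

def vectorised(X):
    # (M, n) sigmoid-like weights, then contracted against Z -> (M, p).
    S = np.exp(-np.logaddexp(0.0, y[None, :] * (X @ Z.T))) * y[None, :]
    return X / sigma**2 - gamma * (S @ Z)

X = rng.normal(size=(M, p))
loop = np.stack([single_function(x) for x in X])
vec = vectorised(X)
```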
<python><numpy><vectorization>
2024-05-03 12:27:10
1
3,601
Euler_Salter
78,424,654
13,561,669
how to access elements inside svg object with selenium
<p>I am trying to access speed data from <a href="https://www.openstreetbrowser.org/#map=16/53.3989/-3.0881&amp;basemap=osm-mapnik&amp;categories=car_maxspeed" rel="nofollow noreferrer">OpenStreetBrowser</a> with Selenium but I am unable to access SVG elements of the website.</p> <p>This is what I've tried so far</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.keys import Keys from selenium.webdriver.support import expected_conditions as EC from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.common.by import By driver = webdriver.Chrome() driver.maximize_window() driver.get('https://www.openstreetbrowser.org') wait = WebDriverWait(driver, 15) xpath_expression = f&quot;//a[contains(text(), 'Transportation')]&quot; wait.until(EC.element_to_be_clickable((By.XPATH, xpath_expression))).click() xpath_expression = f&quot;//a[contains(text(), 'Individual Traffic')]&quot; wait.until(EC.element_to_be_clickable((By.XPATH, xpath_expression))).click() xpath_expression = f&quot;//a[contains(text(), 'Maxspeed')]&quot; wait.until(EC.element_to_be_clickable((By.XPATH, xpath_expression))).click() xpath_expression = f&quot;//li[contains(text(), 'search')]&quot; wait.until(EC.element_to_be_clickable((By.XPATH, &quot;//ul[@class='tabs-list']//li[@title='search']&quot;))).click() wait.until(EC.element_to_be_clickable((By.XPATH, &quot;//div[@class='tabs-section nominatim-search selected']//input&quot;))).send_keys('53.40125254855821,-3.0876670857133757') wait.until(EC.element_to_be_clickable((By.XPATH, &quot;//div[@class='tabs-section nominatim-search selected']//ul//li&quot;))).click() a = wait.until(EC.element_to_be_clickable((By.XPATH, &quot;//*[@id='map']//div[1]//div[2]//svg&quot;))) </code></pre> <p>I am getting the following exception</p> <pre><code>TimeoutException: Message: Stacktrace: GetHandleVerifier [0x00007FF737801502+60802] (No symbol) [0x00007FF73777AC02] (No symbol) [0x00007FF737637CE4] (No symbol) 
[0x00007FF737686D4D] (No symbol) [0x00007FF737686E1C] (No symbol) [0x00007FF7376CCE37] (No symbol) [0x00007FF7376AABBF] (No symbol) [0x00007FF7376CA224] (No symbol) [0x00007FF7376AA923] (No symbol) [0x00007FF737678FEC] (No symbol) [0x00007FF737679C21] GetHandleVerifier [0x00007FF737B0411D+3217821] GetHandleVerifier [0x00007FF737B460B7+3488055] GetHandleVerifier [0x00007FF737B3F03F+3459263] GetHandleVerifier [0x00007FF7378BB846+823494] (No symbol) [0x00007FF737785F9F] (No symbol) [0x00007FF737780EC4] (No symbol) [0x00007FF737781052] (No symbol) [0x00007FF7377718A4] BaseThreadInitThunk [0x00007FF87E237344+20] RtlUserThreadStart [0x00007FF87FD826B1+33] </code></pre> <p>It seems like each road stretch was arranged as an image/tile. I want to click on each image and then Details option on the popup to gather the required data. How can I access each tile on this website and click on it?</p> <p>Thanks in advance!</p>
<python><selenium-webdriver><web-scraping><openstreetmap>
2024-05-03 12:18:43
1
307
Satya Pamidi
78,424,583
9,536,103
Last value in precision_recall_curve for precision is 1 when it shouldn't be
<p>I am trying to use the <code>precision_recall_curve</code> in <code>sklearn</code>. However, I don't understand the output it is giving me and I think it looks wrong.</p> <pre><code>import numpy as np from sklearn import metrics y = np.array([0, 0, 1, 0]) pred = np.array([0.1, 0.2, 0.3, 0.4]) precision, recall, thresholds = metrics.precision_recall_curve(y, pred, pos_label=1) </code></pre> <p>The output values of precision and recall here are:</p> <pre><code>precision: array([0.25, 0.33333333, 0.5, 0., 1.]) recall: array([1., 1., 1., 0., 0.]) </code></pre> <p>But for the highest threshold we aren't predicting any samples in the 1 class. I feel like this is misleading if you were to plot it on a graph, particularly when comparing different models to each other. I would instead prefer to see:</p> <pre><code>precision: array([0.25, 0.33333333, 0.5, 0.]) recall: array([1., 1., 1., 0.]) </code></pre> <p>Any help here? Can I just always remove the last point if this is the behaviour I would prefer?</p>
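If dropping that final point is acceptable, plain slicing does it. The arrays below are hard-coded from the question's output rather than recomputed with sklearn; the final `(precision=1, recall=0)` pair is an endpoint the function appends by convention, so removing the last element of each array is a reasonable pre-plotting step:

```python
import numpy as np

# Values copied from the question's precision_recall_curve output.
precision = np.array([0.25, 1 / 3, 0.5, 0.0, 1.0])
recall = np.array([1.0, 1.0, 1.0, 0.0, 0.0])

# Drop the appended endpoint before plotting / comparing models.
precision, recall = precision[:-1], recall[:-1]
```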
<python><scikit-learn>
2024-05-03 12:05:00
1
1,151
Daniel Wyatt
78,424,534
92,516
Terminate cleanly after an exact number of Task executions have occurred across multiple processes
<p>Following on from a previous question<a href="https://stackoverflow.com/q/78420717/92516">1</a>, I want locust to terminate a run once I've performed an exact number of requests across multiple Users and processes. I can track the progress on a per-User basis, but I'm struggling to communicate that to locust in a clean manner where it doesn't try to spawn a bunch of extra Users in the dying throes of the run (which makes it complex for me to handle work assignment for each user).</p> <p>Here's a sketch of what I have at the moment:</p> <ul> <li>User class which uses a Distributor (setup via the <code>init</code> event) to know how many requests it should perform in total, and counts how many it's done:</li> </ul> <pre><code>class MyUser: def __init__(self): self.id = next(distributors[&quot;user_id&quot;]) self.limit = ... # calculate limit based on user's id. @task def request(self): # perform request self.requests += 1 </code></pre> <ul> <li>After issuing each request, check how many have been issued and if done then stop this user. If all users have been stopped then stop locust - this is inspired by <a href="https://github.com/SvenskaSpel/locust-plugins/blob/507ff46808d2945f2eea6ed46aa2d69117159ede/locust_plugins/__init__.py#L184-L203" rel="nofollow noreferrer"><code>iteration_limit_wrapper</code></a>:</li> </ul> <pre><code> ... epilogue of request(): if self.requests == self.limit: runner = self.environment.runner if runner.user_count == 1: logging.info(&quot;Last user stopped, quitting runner&quot;) if isinstance(runner, WorkerRunner): runner._send_stats() # send a final report # need to trigger this in a separate greenlet, in case # test_stop handlers do something async gevent.spawn_later(0.1, runner.quit) raise StopUser() </code></pre> <p>This somewhat &quot;works&quot; (locust indeed does stop after the work is done), but in an unclean fashion, as locust sees that some of the specified number of Users no longer exist and attempts to start them again. 
This results in the Distributor being called more times than I expect (and StopIteration thrown on the master) - running with <code>--users=5</code> and <code>--processes=2</code>, and modified log format to include process_id:</p> <pre><code>[2024-05-03 12:41:10,067] localhost/70327/INFO/root: Last user stopped, quitting runner [2024-05-03 12:41:10,237] localhost/70327/INFO/locust.main: Shutting down (exit code 0) [2024-05-03 12:41:10,237] localhost/70322/INFO/locust.runners: Sending spawn jobs of 5 users at 5.00 spawn rate to 1 ready workers [2024-05-03 12:41:10,237] localhost/70322/DEBUG/locust.runners: Sending spawn messages for 5 total users to 1 worker(s) [2024-05-03 12:41:10,237] localhost/70322/DEBUG/locust.runners: Currently spawned users: {&quot;MyUser&quot;: 3} (3 total users) [2024-05-03 12:41:10,237] localhost/70327/DEBUG/locust.main: Cleaning up runner... [2024-05-03 12:41:10,267] localhost/70328/DEBUG/locust.runners: Spawning additional {&quot;MyUser&quot;: 2} ({&quot;MyUser&quot;: 3} already running)... [2024-05-03 12:41:10,268] localhost/70328/DEBUG/locust.runners: Sending _user_id_request message to master [2024-05-03 12:41:10,377] localhost/70328/INFO/root: Last user stopped, quitting runner [2024-05-03 12:41:10,520] localhost/70328/DEBUG/locust.main: Running teardowns... [2024-05-03 12:41:10,521] localhost/70328/INFO/locust.main: Shutting down (exit code 0) [2024-05-03 12:41:10,521] localhost/70328/DEBUG/locust.main: Cleaning up runner... [2024-05-03 12:41:11,238] localhost/70322/INFO/locust.runners: Spawning is complete and report waittime is expired, but not all reports received from workers: {&quot;MyUser&quot;: 3} (3 total users) [2024-05-03 12:41:11,239] localhost/70322/INFO/locust.runners: Worker 'localhost_fb58584ee115427f8fbe19907660dd26' (index 0) quit. 0 workers ready. 
[2024-05-03 12:41:11,239] localhost/70322/DEBUG/locust.runners: Received _user_id_request message from worker localhost_c92c47b1ea284949b2baf21ef71ab265 (index 1) [2024-05-03 12:41:11,240] localhost/70322/INFO/locust.runners: Worker 'localhost_c92c47b1ea284949b2baf21ef71ab265' (index 1) quit. 0 workers ready. [2024-05-03 12:41:11,241] localhost/70322/INFO/locust.runners: The last worker quit, stopping test. [2024-05-03 12:41:11,241] localhost/70322/DEBUG/locust.runners: Stopping... [2024-05-03 12:41:11,241] localhost/70322/DEBUG/locust.runners: Quitting... Traceback (most recent call last): File &quot;src/gevent/greenlet.py&quot;, line 908, in gevent._gevent_cgreenlet.Greenlet.run File &quot;/Users/dave/repos/pinecone-io/VSB/.venv/lib/python3.11/site-packages/locust_plugins/distributor.py&quot;, line 38, in _master_next_and_send item = next(self.iterator) ^^^^^^^^^^^^^^^^^^^ StopIteration 2024-05-03T11:41:11Z &lt;Greenlet at 0x165afc360: &lt;bound method Distributor._master_next_and_send of &lt;locust_plugins.distributor.Distributor object at 0x1071aa050&gt;&gt;(0, 'localhost_c92c47b1ea284949b2baf21ef71ab2)&gt; failed with StopIteration </code></pre> <p>The above is somewhat verbose (I wasn't sure exactly what's relevant), but the point of note is we see <em>additional</em> Users spawned by locust process 70322 after we have finished with all users and called quit on process 70327:</p> <pre><code>[2024-05-03 12:41:10,067] localhost/70327/INFO/root: Last user stopped, quitting runner [2024-05-03 12:41:10,237] localhost/70327/INFO/locust.main: Shutting down (exit code 0) [2024-05-03 12:41:10,237] localhost/70322/INFO/locust.runners: Sending spawn jobs of 5 users at 5.00 spawn rate to 1 ready workers ... [2024-05-03 12:41:10,267] localhost/70328/DEBUG/locust.runners: Spawning additional {&quot;MyUser&quot;: 2} ({&quot;MyUser&quot;: 3} already running)... ... 
[2024-05-03 12:41:10,519] localhost/70328/DEBUG/locust.runners: Stopping all users (called from .venv/lib/python3.11/site-packages/locust/runners.py:411) </code></pre> <p>Q: Is there a mechanism to tell locust to only spawn a given number of Users and not start any more after they finish, or perhaps a more explicit way to say &quot;this user has successfully finished, don't respawn&quot; other than <code>raise StopUser()</code>?</p>
<python><locust>
2024-05-03 11:53:25
1
9,708
DaveR
78,424,517
4,564,080
What is wrong with my function overload with 8 different argument combinations?
<p>I am trying to override Pydantic's and SQLModel's <code>Field</code> function.</p> <p>I have 3 boolean arguments which indicate different combinations:</p> <ol> <li>If <code>is_table_field == True</code>, then return SQLModel's <code>FieldInfo</code>, else return Pydantic's <code>FieldInfo</code></li> <li>If <code>is_state_field == True</code>, then the <code>additional_state_description</code> argument must be specified. If it is <code>False</code> then it must not be specified</li> <li>If <code>is_tag_field == True</code>, then the <code>additional_tag_description</code> argument must be specified. If it is False then it must not be specified</li> </ol> <p>Here is my code:</p> <pre class="lang-py prettyprint-override"><code>from typing import Literal, overload from pydantic.fields import FieldInfo as PydanticFieldInfo from sqlmodel.main import FieldInfo as SQLModelFieldInfo # TTT @overload def Field( description: str, is_table_field: Literal[True], is_state_field: Literal[True], is_tag_field: Literal[True], additional_state_description: str, additional_tag_description: str, ) -&gt; SQLModelFieldInfo: ... # TTF @overload def Field( description: str, is_table_field: Literal[True], is_state_field: Literal[True], additional_state_description: str ) -&gt; SQLModelFieldInfo: ... # TFT @overload def Field( description: str, is_table_field: Literal[True], is_tag_field: Literal[True], additional_tag_description: str ) -&gt; SQLModelFieldInfo: ... # TFF @overload def Field(description: str, is_table_field: Literal[True]) -&gt; SQLModelFieldInfo: ... # FTT @overload def Field( description: str, is_state_field: Literal[True], is_tag_field: Literal[True], additional_state_description: str, additional_tag_description: str, ) -&gt; PydanticFieldInfo: ... # FTF @overload def Field(description: str, is_state_field: Literal[True], additional_state_description: str) -&gt; PydanticFieldInfo: ... 
# FFT @overload def Field(description: str, is_tag_field: Literal[True], additional_tag_description: str) -&gt; PydanticFieldInfo: ... # FFF @overload def Field(description: str) -&gt; PydanticFieldInfo: ... def Field( description: str, is_table_field: bool = False, is_state_field: bool = False, is_tag_field: bool = False, additional_state_description: str | None = None, additional_tag_description: str | None = None, ) -&gt; PydanticFieldInfo | SQLModelFieldInfo: if is_table_field: # TODO: implement return SQLModelFieldInfo() # TODO: implement return PydanticFieldInfo() </code></pre> <p>But MyPy is giving me all of these errors:</p> <ul> <li>Overloaded function implementation does not accept all possible arguments of signature 2</li> <li>Overloaded function implementation does not accept all possible arguments of signature 3</li> <li>Overloaded function implementation does not accept all possible arguments of signature 5</li> <li>Overloaded function implementation does not accept all possible arguments of signature 6</li> <li>Overloaded function implementation does not accept all possible arguments of signature 7</li> </ul> <p>What cases am I not covering?</p>
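The errors likely come from positional-argument order: in, e.g., the TTF overload, the fourth positional parameter is <code>additional_state_description</code> (a <code>str</code>), while the implementation's fourth positional parameter is <code>is_tag_field</code> (a <code>bool</code>), so the implementation cannot accept every call those overloads admit. Making everything after <code>description</code> keyword-only removes positional ordering from the equation. A minimal stand-alone sketch of that pattern (with dummy classes in place of the real <code>FieldInfo</code> types, and only two of the eight overloads):

```python
from typing import Literal, overload


class PydanticFieldInfo:  # stand-in for pydantic.fields.FieldInfo
    pass


class SQLModelFieldInfo:  # stand-in for sqlmodel.main.FieldInfo
    pass


@overload
def Field(description: str, *, is_table_field: Literal[True]) -> SQLModelFieldInfo: ...
@overload
def Field(description: str) -> PydanticFieldInfo: ...


def Field(description: str, *, is_table_field: bool = False):
    # The bare * forces callers to pass flags by keyword, so every
    # overload's parameter list lines up with the implementation's.
    return SQLModelFieldInfo() if is_table_field else PydanticFieldInfo()
```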
<python><function><arguments><overloading>
2024-05-03 11:49:37
0
4,635
KOB
78,424,501
1,320,143
How can I implement multiple optional fields but with at least one of them mandatory in MyPy or Pydantic?
<p>I'm trying to write a minimal, custom, JSON:API 1.1 implementation using Python.</p> <p>For the top level, the RFC/doc says:</p> <p><a href="https://jsonapi.org/format/#document-top-level" rel="nofollow noreferrer">https://jsonapi.org/format/#document-top-level</a></p> <blockquote> <p>A document MUST contain at least one of the following top-level members:</p> <ul> <li>data: the document’s “primary data”.</li> <li>errors: an array of error objects.</li> <li>meta: a meta object that contains non-standard meta-information.</li> <li>a member defined by an applied extension.</li> </ul> <p>The members data and errors MUST NOT coexist in the same document.</p> </blockquote> <p>The way I read that is:</p> <ul> <li>&quot;data&quot; is optional</li> <li>&quot;errors&quot; is optional</li> <li>&quot;meta&quot; is optional</li> <li>&quot;member from an extension&quot; is optional</li> </ul> <p>However, at least 1 of them needs to be present in the document. Also &quot;data&quot; and &quot;errors&quot; must never be in the same document.</p> <p>I'm having a hard time modeling this. Is there a way to do it using the type system or do I have to do custom validation of some sort?</p> <p><code>Data|Errors|Meta|ExtensionMember</code> doesn't cut it.</p>
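A plain union type indeed can't express "at least one of these keys, and not both of those", so this usually lands in runtime validation (in Pydantic, a <code>model_validator</code>). A dependency-free sketch of the rule itself, treating extension members as out of scope:

```python
def validate_top_level(doc: dict) -> None:
    # JSON:API 1.1: at least one of data / errors / meta must be present...
    if not ({"data", "errors", "meta"} & doc.keys()):
        raise ValueError("document needs at least one of: data, errors, meta")
    # ...and data and errors must never coexist in the same document.
    if {"data", "errors"} <= doc.keys():
        raise ValueError("data and errors must not coexist")
```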
<python><mypy><pydantic><json-api>
2024-05-03 11:45:31
1
1,673
oblio
78,424,238
2,745,609
Python normalize a hierarchical tree into pandas dataframe
<p>I have the following dataframe which holds a tree-like structure. The hierarchy levels range from 2 to 4:</p> <pre class="lang-py prettyprint-override"><code>df = Hierarchy ID Value 0 1.0 T1 1 1.1 T2 2 1.1.1 T3 3 1.1.2 T4 4 1.2 T5 5 1.2.1 T6 6 2.0 T7 7 2.1 T8 8 2.1.1 T9 9 2.1.1.1 T10 10 2.1.2 T11 ... </code></pre> <p>I want to normalize this into a filterable pandas dataframe such as the following. The &quot;N.0&quot; level should always be the main level 1 and the rest should follow it.</p> <pre class="lang-py prettyprint-override"><code>df = Level_1 Level_2 Level_3 Level_4 T1 T2 T3 Nan T1 T2 T4 Nan T1 T5 T6 Nan T7 T8 T9 T10 T7 T8 T11 Nan ... </code></pre> <p>I don't know how to resolve this, any help?</p>
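A hedged sketch of one approach, done in plain Python before handing the result back to pandas: infer each node's parent from its ID (treating "N.0" as the root of block N), keep only the leaves, and emit each leaf's ancestor path padded to four levels. It assumes the ID scheme is exactly as in the sample:

```python
rows = [("1.0", "T1"), ("1.1", "T2"), ("1.1.1", "T3"), ("1.1.2", "T4"),
        ("1.2", "T5"), ("1.2.1", "T6"), ("2.0", "T7"), ("2.1", "T8"),
        ("2.1.1", "T9"), ("2.1.1.1", "T10"), ("2.1.2", "T11")]
value = dict(rows)

def parent(hid):
    parts = hid.split(".")
    if len(parts) == 2:
        # "N.0" is a root; any other "N.k" hangs off "N.0".
        return None if parts[1] == "0" else parts[0] + ".0"
    return ".".join(parts[:-1])

parents = {parent(h) for h, _ in rows}
leaves = [h for h, _ in rows if h not in parents]

def path(hid):
    chain = []
    while hid is not None:
        chain.append(value[hid])
        hid = parent(hid)
    return chain[::-1]

# One row per leaf, padded with None up to four levels.
table = [(path(h) + [None] * 4)[:4] for h in leaves]
# pd.DataFrame(table, columns=["Level_1", "Level_2", "Level_3", "Level_4"])
```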
<python><pandas>
2024-05-03 10:50:59
2
474
Solijoli
78,424,042
315,168
A persistent hash of a Python function code for caching
<p>I am running data processing jobs that consist of:</p> <ul> <li>inputs (Pandas Series)</li> <li>a Python function</li> </ul> <p>I'd like to make this so that if the inputs (function arguments) and the function itself (function body and code) are the same, I do not run the calculation, but just load the cached result from disk.</p> <p>While function results are easy to cache in in-process memory, in my case there might be multiple runs over multiple months, and the underlying Python source code may change between runs.</p> <p>I'd need to detect when the function code has changed and have a unique hash for each function by its code. If the function body changes I need to rerun the calculation and write the new cached result to the disk.</p> <p>E.g. detect and hash the difference between</p> <pre class="lang-py prettyprint-override"><code>def my_task(a: pd.Series, b: pd.Series): return a+b </code></pre> <p>And:</p> <pre class="lang-py prettyprint-override"><code>def my_task(a: pd.Series, b: pd.Series): return a + b - 2 </code></pre> <p>How can I calculate a hash of a Python function's code? Or anything that is close enough to this?</p> <p>Furthermore, these Python functions can reside within Jupyter Notebooks, not just in Python modules.</p>
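A sketch of one possible key function: hash the function's source via <code>inspect.getsource</code> (which also works in Jupyter, since IPython records cell source), falling back to bytecode-derived bytes when no source is available. Caveats apply: whitespace and comment changes alter the source hash, and the bytecode fallback is not stable across Python versions, so neither is a perfect fingerprint.

```python
import hashlib
import inspect

def code_hash(fn) -> str:
    try:
        payload = inspect.getsource(fn).encode()
    except (OSError, TypeError):
        # e.g. functions created via exec(), where no source is recorded.
        code = fn.__code__
        payload = code.co_code + repr(code.co_consts).encode()
    return hashlib.sha256(payload).hexdigest()
```

The hash can then be combined with a hash of the input Series (e.g. `pandas.util.hash_pandas_object`) to form the on-disk cache key.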
<python>
2024-05-03 10:12:14
1
84,872
Mikko Ohtamaa
78,424,000
17,741,484
Is a pandas.DataFrame still sorted after using the method `query`?
<p>I am working on a dataframe <code>df</code> in python. I need to query and sort the results multiple times, but on different columns:</p> <pre class="lang-py prettyprint-override"><code>for x in X: # query the dataframe and sort the result query_result = df.query(f&quot;column_name == '{x}'&quot;).sort_values(by=&quot;other_column&quot;) # ... use query_result ... </code></pre> <p>I am wondering if I can factorize the sorting operation, to make the code run faster, like this:</p> <pre class="lang-py prettyprint-override"><code># First sort the dataframe df.sort_values(by=&quot;other_column&quot;, inplace=True) for x in X: # then query it query_result = df.query(f&quot;column_name == '{x}'&quot;) # ... use query_result, assuming it is sorted by other_column ... </code></pre> <p>In the second code, do I have any guarantee that <code>query_result</code> is sorted?</p> <p>Thank you for your help</p>
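As a quick empirical check (a small synthetic frame, not a formal guarantee quoted from the docs): <code>query</code> applies a boolean mask, and boolean-mask selection in pandas keeps the surviving rows in their existing order, so a pre-sorted frame stays sorted within each query result:

```python
import pandas as pd

df = pd.DataFrame({"column_name": ["a", "b", "a", "b"],
                   "other_column": [3, 9, 1, 2]})
df = df.sort_values(by="other_column")   # rows now ordered 1, 2, 3, 9

# The mask preserves relative order, so the 'a' rows come out as 1, 3.
result = df.query("column_name == 'a'")
```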
<python><pandas>
2024-05-03 10:04:30
1
357
27636
78,423,909
1,120,410
Fast way to iterate over an Arrow.Table with 30 million rows and 25 columns in Julia
<p>I saved a Python Pandas <code>DataFrame</code> of size (30M x 25) as an Apache Arrow table. Then, I'm reading that table in Julia as:</p> <pre><code>input_arrow = Arrow.Table(&quot;path/to/table.arrow&quot;) </code></pre> <p>My question is how can I iterate over the rows of <code>input_arrow</code> in an efficient way.</p> <p>If I just do:</p> <pre><code>for c in input_arrow: # Do something </code></pre> <p>then, I would be iterating over the columns, but I need to iterate over the rows.</p> <p>Something else that I've tried is converting the <code>Arrow.Table</code> into a <code>DataFrames.DataFrame</code>:</p> <pre><code>df = DataFrames.DataFrame(input_arrow) for row in eachrow(df) # do something </code></pre> <p>But this method is very slow. It reminds me of how slow it is to do <code>df.iterrows()</code> in Python.</p> <p>So, which is the fast way (similar to <code>df.itertuples()</code>) to iterate over an Arrow.Table in Julia?</p> <h1>Solution</h1> <p>As suggested by László Hunyadi in the accepted solution, transforming the <code>Arrow.Table</code> into a <code>Tables.rowtable</code> shows a significant speedup.</p> <p>There was an issue with the RAM; the <code>Arrow.Table</code> and the <code>Tables.rowtable</code> didn't fit in my RAM, so I had to read the <code>Arrow.Table</code> by chunks as follows:</p> <pre><code>for chunk in Arrow.Stream(&quot;/path/to/table.arrow&quot;) row_table = Tables.rowtable(chunk) # do something with row_table end </code></pre>
<python><pandas><julia><apache-arrow>
2024-05-03 09:47:54
1
2,985
Lay González
78,423,788
202,335
How can I get total assets with yfinance?
<p>I tried with the code below:</p> <pre><code>stock = yf.Ticker(ticker) info = stock.info totalShares = info.get('sharesOutstanding') totalAssets = info.get('totalAssets') </code></pre> <p>I cannot get the total assets this way.</p>
<python><yfinance>
2024-05-03 09:26:05
1
25,444
Steven
78,423,690
14,849,198
How to filter tensorflow tuple dataset?
<pre><code>import tensorflow as tf # Create dummy data image_data = [tf.constant([1]), tf.constant([2]), tf.constant([3])] caption_data = [tf.constant([10]), tf.constant([20]), tf.constant([30])] target_data = [tf.constant([100]), tf.constant([200]), tf.constant([300])] # Create a dataset from the dummy data dataset = tf.data.Dataset.from_tensor_slices(((image_data, caption_data), target_data)) # Define a filter function that checks a simple condition on the target tensor def filter_funct(data): ((image_tensor, caption_tensor), target_tensor) = data return target_tensor &gt; 150 # Apply the filter function filtered_dataset = dataset.filter(filter_funct) # Print the filtered dataset for ((image_tensor, caption_tensor), target_tensor) in filtered_dataset: print(&quot;Image Tensor:&quot;, image_tensor.numpy()) print(&quot;Caption Tensor:&quot;, caption_tensor.numpy()) print(&quot;Target Tensor:&quot;, target_tensor.numpy()) </code></pre> <p>I want to filter out some corrupted images, but I can't get <code>Dataset.filter</code> to work correctly. The above is just dummy data; it is not working.</p> <p>This is the error:</p> <pre><code>TypeError: outer_factory.&lt;locals&gt;.inner_factory.&lt;locals&gt;.tf__filter_funct() takes 1 positional argument but 2 were given </code></pre>
<python><tensorflow>
2024-05-03 09:05:38
2
428
SoraHeart
78,423,575
10,273,940
How to prevent typehint errors for Pydantic datetime validation
<p>I am designing a Pydantic model that contains some datetime fields, like so:</p> <pre><code>from pydantic import BaseModel, Field from datetime import datetime class Event(BaseModel): start_date: datetime = Field(..., description=&quot;datetime in format YYYY-mm-ddTHH:MM:SS&quot;) end_date: datetime = Field(..., description=&quot;datetime in format YYYY-mm-ddTHH:MM:SS&quot;) </code></pre> <p>Helpfully enough, <a href="https://docs.pydantic.dev/2.1/usage/types/datetime/" rel="nofollow noreferrer">pydantic accepts string values for datetime fields</a>. So an object like <code>Event(start_date=&quot;2024-05-03T12:00:00&quot;, end_date=&quot;2024-05-03T13:00:00&quot;)</code> will be accepted.</p> <p>However, type checking tools like Pylance in VScode will show errors when creating such objects using string arguments, as a static type check will conclude that the <code>Event</code> class cannot be instantiated with strings, but needs datetimes.</p> <p><em>Is there a way to indicate to type checkers that both strings and datetimes are accepted?</em></p> <p>I considered creating Fields with a <code>str | datetime</code> typehint, but that approach has the drawback that a consumer has to perform type checks at runtime. It probably also messes with Pydantic's built-in datetime validation.</p>
<python><python-typing><pydantic>
2024-05-03 08:41:01
1
314
Harm van den Brand
78,423,528
18,878,905
Create an infinite callback loop in Dash
<p>I'm trying to create a callback loop in Dash. My idea was to have a button that reclicks on itself until we hit n_clicks = 10.</p> <pre><code>import time import dash import dash_bootstrap_components as dbc from dash import Dash, html, Input, Output app = Dash(__name__, external_stylesheets=[dbc.themes.BOOTSTRAP]) app.layout = html.Div([ html.Button(children='Run Process', id='run-button', n_clicks=0), html.Div(id='output-container') ]) @app.callback( Output('run-button', 'n_clicks'), Output('output-container', 'children'), Input('run-button', 'n_clicks'), ) def run_process(n_clicks): if n_clicks &gt; 10: return dash.no_update print(n_clicks) time.sleep(1) return n_clicks + 1, n_clicks + 1 if __name__ == '__main__': app.run_server(debug=True, port=100) </code></pre> <p>However, <code>print(n_clicks)</code> prints 0 and then 2 (skipping 1?!) and then stops, when it should keep going until 10. Any idea how to fix this and what is going on?</p>
<python><loops><callback><plotly-dash>
2024-05-03 08:33:03
2
389
david serero
78,423,469
13,238,846
Getting 400 when trying to download a SharePoint file using Python and the Office 365 client
<p>I am trying to download a SharePoint file using the Office 365 Python client and user credentials, but I get the following error when trying to download. I am able to list the files without any issues. Here's my code:</p> <pre><code>from office365.runtime.auth.authentication_context import AuthenticationContext,UserCredential from office365.sharepoint.client_context import ClientContext from office365.sharepoint.files.file import File import logging url = &quot;https://xxxxxx.sharepoint.com/&quot; username= &quot;xxxxxx@xxxxxx.org&quot; password= &quot;xxxxxxxx&quot; ctx = ClientContext(url).with_user_credentials(username, password) file_url = &quot;https://xxxxxxx.sharepoint.com/sites/xxxxxxx/Shared Documents/Public/xxxxxx.pdf&quot; response = File.open_binary(ctx, file_url) print(response) with open(&quot;./xxxxxx.pdf&quot;, &quot;wb&quot;) as local_file: local_file.write(response.content) </code></pre> <p>Alternatively I've tried this method, but I'm getting a 404 error there as well:</p> <pre><code>context = ClientContext(url).with_user_credentials(username, password) response = File.open_binary(context, '/'.join(['/sites/xxxxx/Shared Documents/Public', 'xxxxx.pdf'])) print(response) </code></pre> <p>I'm not getting any Python error; here is the response to the request:</p> <pre><code>b'{&quot;error&quot;:{&quot;code&quot;:&quot;-2130575338, Microsoft.SharePoint.SPException&quot;,&quot;message&quot;:{&quot;lang&quot;:&quot;en-US&quot;,&quot;value&quot;:&quot;The file /sites/xxxx/Shared Documents/Public/xxxxxx.pdf does not exist.&quot;}}}' </code></pre>
<python><sharepoint><office365api>
2024-05-03 08:20:23
1
427
Axen_Rangs
78,423,393
16,383,578
Problem with converting ADPCM to wav in Python
<p>I am trying to extract audio files from .fsb files used by Dragon Age: Origins. FSB means &quot;FMOD Sample Bank&quot; and the game uses FSB4. Because it is a proprietary format, Google is almost useless in finding relevant information about it, but I managed to find <a href="http://aezay.dk/aezay/fsbextractor/" rel="nofollow noreferrer">this program</a> and <a href="https://github.com/gdawg/fsbext" rel="nofollow noreferrer">this repository</a>.</p> <p>Looking at the files in a hex editor, it was easy for me to guess the structure of the format: it begins with a 48-byte file header that starts with the 4-byte file signature &quot;FSB4&quot;, then the 4-byte number of audio files contained in the archive, then the 4-byte size of the header array for the individual files, the 4-byte size of the actual file data, then some other stuff.</p> <p>Then the header array for the files begins. Each entry is almost always 80 bytes; it contains the name of the file, the number of frames in the file, the data length of the file, and some other stuff.</p> <p>The header array is followed by 16 null bytes, after which the actual data is stored contiguously head to tail, in the order the files are listed.</p> <p>There were some other things that I couldn't decode, and those were filled in by the code from the aforementioned repository and testing with the GUI program.</p> <p>With my reverse engineering I have written a working program to extract audio files from the .fsb files, but for some files the extracted audio is distorted.</p> <p>The .fsb file used by the game stores audio in 3 formats: MPEG, PCM, and ADPCM. I know MPEG files are .mp3 files and they are stored with complete headers, so I just store the slices (<code>self.data[start:end]</code>) directly, and I know PCM files are just .wav files without headers, so I just use <code>wave</code> to add the appropriate header and write the data.</p> <p>The problem is with ADPCM: I don't know how to convert ADPCM to PCM, and all the results I can find use <code>audioop.adpcm2lin(adpcm, 2, None)</code>. They are all terribly outdated, and <a href="https://docs.python.org/3/library/audioop.html" rel="nofollow noreferrer">the documentation</a> says it is deprecated.</p> <p>I use it in my code and it kind of works, but the extracted audio is way too fast, high pitched, and sounds mechanical. I don't know why.</p> <p>Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import audioop
import os
import struct
import wave
from pathlib import Path


class FSB4:
    HEADERS = (
        &quot;signature&quot;, &quot;file_count&quot;, &quot;header_size&quot;, &quot;data_size&quot;,
        &quot;version&quot;, &quot;flags&quot;, &quot;padding&quot;, &quot;hash&quot;
    )
    ENTRY_HEADERS = (
        &quot;frames&quot;, &quot;data_size&quot;, &quot;loop_start&quot;, &quot;loop_end&quot;, &quot;mode&quot;,
        &quot;frequency&quot;, &quot;pan&quot;, &quot;defpri&quot;, &quot;min_distance&quot;, &quot;channels&quot;,
        &quot;max_distance&quot;, &quot;var_frequency&quot;, &quot;var_vol&quot;, &quot;var_pan&quot;
    )

    def __init__(self, file):
        self.data = Path(file).read_bytes()
        self.parse_header()
        self.parse_entries()

    def parse_header(self):
        headers = self.data[:48]
        self.chunks = [headers]
        self.headers = dict(
            zip(
                self.HEADERS,
                struct.unpack(&quot;&lt;4s5I8s16s&quot;, headers)
            )
        )

    def parse_entry(self, data):
        chunks = struct.unpack(&quot;&lt;H30s6I4H2f2I&quot;, data)
        self.chunks.append(data)
        flags = chunks[6]
        entry = dict(zip(
            self.ENTRY_HEADERS,
            chunks[2:]
        ))
        entry[&quot;format&quot;] = (
            &quot;MPEG&quot; if flags &amp; 512
            else (
                &quot;PCM&quot; if flags &amp; 16
                else &quot;ADPCM&quot;
            )
        )
        return (
            chunks[1].strip(b&quot;\x00&quot;).decode(),
            entry
        )

    def parse_entries(self):
        count = self.headers[&quot;file_count&quot;]
        offset = self.headers[&quot;header_size&quot;] + 48
        self.entries = {}
        self.offsets = {}
        for i in range(48, 48 + count * 80, 80):
            name, entry = self.parse_entry(self.data[i:i+80])
            self.entries[name] = entry
            length = entry[&quot;data_size&quot;]
            self.offsets[name] = (offset, offset + length)
            offset += length
        self.chunks.append(self.data[i+80:i+96])

    def extract_mp3(self, file, folder):
        filename = file.rsplit(&quot;.&quot;, 1)[0] + &quot;.mp3&quot;
        with open(os.path.join(folder, filename), &quot;wb&quot;) as f:
            start, end = self.offsets[file]
            f.write(self.data[start:end])

    def extract(self, file, folder):
        entry = self.entries[file]
        audio_format = entry[&quot;format&quot;]
        if audio_format == &quot;MPEG&quot;:
            self.extract_mp3(file, folder)
        else:
            path = os.path.join(folder, file.rsplit(&quot;.&quot;, 1)[0] + &quot;.wav&quot;)
            start, end = self.offsets[file]
            pcm = self.data[start:end]
            channels = entry[&quot;channels&quot;]
            if audio_format == &quot;ADPCM&quot;:
                pcm, _ = audioop.adpcm2lin(pcm, 2, None)
            with wave.open(path, &quot;wb&quot;) as wav:
                frequency = entry[&quot;frequency&quot;]
                frames = entry[&quot;frames&quot;]
                wav.setparams((channels, 2, frequency, frames, 'NONE', 'NONE'))
                wav.writeframes(pcm)

    def extract_all(self, folder):
        for file in self.entries:
            self.extract(file, folder)
</code></pre> <p>And a <a href="https://drive.google.com/file/d/15DLCInZOm6EZMiriCOkA3gTpnPTfH4eb/view?usp=sharing" rel="nofollow noreferrer">file</a> to test with.</p> <p>I can already pack sound files into an FSB4 file, but I still can't fix the problem. How can I fix this?</p>
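Fast, high-pitched playback usually means the WAV header disagrees with the decoded data: too few channels or too high a frame rate for the actual sample count. One sanity check (a sketch, under the assumption that the decode itself is correct) is to derive `nframes` from the decoded byte length instead of trusting the FSB header's frame count:

```python
def wav_nframes(pcm_len: int, channels: int, sampwidth: int = 2) -> int:
    # one audio frame = sampwidth bytes per channel
    return pcm_len // (channels * sampwidth)

# 1 second of 16-bit stereo at 44.1 kHz is 44100 * 2 * 2 bytes
assert wav_nframes(44100 * 2 * 2, channels=2) == 44100
# the same byte count misread as mono would claim twice the frames
assert wav_nframes(44100 * 2 * 2, channels=1) == 88200
```

Note also that `audioop.adpcm2lin` decodes Intel/DVI-style ADPCM as a single nibble stream; multi-channel ADPCM stored in per-channel blocks (an assumption about the FSB layout, not verified) would need de-interleaving first, and a variant mismatch alone can produce exactly this kind of distortion.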
<python><audio><pcm><adpcm>
2024-05-03 08:01:20
1
3,930
Ξένη Γήινος
78,423,363
7,745,011
Force venv to create virtual environment in "bin" instead of "Scripts" folder
<p>According to the <a href="https://docs.blender.org/api/current/info_tips_and_tricks.html#bundled-python-extensions" rel="nofollow noreferrer">Blender docs</a> you can replace the python installation that comes with blender with your own:</p> <blockquote> <p>Copy or link the extensions into Blender’s Python subdirectory so Blender can access them, you can also copy the entire Python installation into Blender’s subdirectory, replacing the one Blender comes with. This works as long as the Python versions match and the paths are created in the same relative locations. Doing this has the advantage that you can redistribute this bundle to others with Blender including any extensions you rely on.</p> </blockquote> <p>I want to use my own virtual environment for Blender. The problem is that on Windows the python.exe for virtual environments is placed in</p> <p><code>&lt;path/to/venv&gt;/Scripts/python.exe</code></p> <p>while Blender's Python installation is placed in</p> <p><code>&lt;path/to/blender&gt;/python/bin/python.exe</code></p> <p>After testing around, it seems that simply renaming folders or moving files will mess up activation scripts and links for packages. The question therefore is whether it is possible to change these paths for virtual environments on Windows.</p>
<python><virtualenv><blender>
2024-05-03 07:54:07
1
2,980
Roland Deschain
78,423,352
2,487,835
SpaCy GPU memory utilization for NER training
<p>My training code:</p> <pre><code>spacy.require_gpu()
nlp = spacy.blank('en')
if 'ner' not in nlp.pipe_names:
    ner = nlp.add_pipe('ner')
else:
    ner = nlp.get_pipe('ner')

docs = load_data(ANNOTATED_DATA_FILENAME_BIN)
train_data, test_data = split_data(docs, DATA_SPLIT)

unique_labels = set(ent.label_ for doc in train_data for ent in doc.ents)
for label in unique_labels:
    ner.add_label(label)

optimizer = nlp.initialize()
for i in range(EPOCHS):
    print(f&quot;Starting Epoch {i+1}...&quot;)
    losses = {}
    batches = minibatch(train_data, size=compounding(4., 4096, 1.001))
    for batch in batches:
        for doc in batch:
            example = Example.from_dict(doc, {'entities': [(ent.start_char, ent.end_char, ent.label_) for ent in doc.ents]})
            nlp.update([example], drop=0.5, losses=losses, sgd=optimizer)
    print(f&quot;Losses at iteration {i}: {losses}&quot;)
</code></pre> <p>This code barely utilizes GPU memory: utilization is about 11-13% during training, which is almost the same as idle.</p> <p><a href="https://i.sstatic.net/82YRwe4T.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82YRwe4T.jpg" alt="nvidia-smi" /></a></p> <p>I did an allocation test with torch, and all 8 GB are allocated, so the server works fine. The problem is with spaCy or my code.</p> <p>Could you please help?</p>
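One likely cause (an inference from the posted code, not a profiled result): the inner loop calls `nlp.update([example])` with exactly one Example, so despite `minibatch(...)` the GPU only ever sees batch size 1. Building all Examples for a batch and then calling `nlp.update(examples, drop=0.5, losses=losses, sgd=optimizer)` once per batch is how spaCy's training loop is usually driven. A stdlib stand-in showing the batching shape (`spacy.util.minibatch` does this with a compounding size):

```python
def minibatches(items, size):
    # simplified stand-in for spacy.util.minibatch: fixed size, no compounding
    batch = []
    for item in items:
        batch.append(item)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch  # final partial batch

# one nlp.update(...) call per batch, not one per document
batch_sizes = [len(b) for b in minibatches(range(10), 4)]
assert batch_sizes == [4, 4, 2]
```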
<python><gpu><spacy><named-entity-recognition><custom-training>
2024-05-03 07:52:34
1
3,020
Lex Podgorny
78,423,306
996,302
How to asynchronously run Matplotlib server-side with a timeout? The process hangs randomly
<p>I'm trying to reproduce the ChatGPT code interpreter feature where an LLM creates figures on demand by executing code.</p> <p>Unfortunately Matplotlib hangs 20% of the time, and I have not managed to understand why.</p> <p>I would like the implementation:</p> <ul> <li>to be non-blocking for the rest of the server</li> <li>to have a timeout in case the code is too long to execute</li> </ul> <p>I made a first implementation:</p> <pre><code>import asyncio

import psutil

TIMEOUT = 5


async def exec_python(code: str) -&gt; str:
    &quot;&quot;&quot;Execute Python code.

    Args:
        code (str): Python code to execute.

    Returns:
        dict: A dictionary containing the stdout and the stderr from executing the code.
    &quot;&quot;&quot;
    code = preprocess_code(code)
    stdout = &quot;&quot;
    stderr = &quot;&quot;
    try:
        stdout, stderr = await run_with_timeout(code, TIMEOUT)
    except asyncio.TimeoutError:
        stderr = &quot;Execution timed out.&quot;
    return {&quot;stdout&quot;: stdout, &quot;stderr&quot;: stderr}


async def run_with_timeout(code: str, timeout: int) -&gt; str:
    proc = await run(code)
    try:
        stdout, stderr = await asyncio.wait_for(proc.communicate(), timeout=timeout)
        return stdout.decode().strip(), stderr.decode().strip()
    except asyncio.TimeoutError:
        kill_process(proc.pid)
        raise


async def run(code: str):
    return await asyncio.create_subprocess_exec(
        &quot;python&quot;, &quot;-c&quot;, code,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )


def kill_process(pid: int):
    try:
        parent = psutil.Process(pid)
        for child in parent.children(recursive=True):
            child.kill()
        parent.kill()
        print(f&quot;Killing Process {pid} (timed out)&quot;)
    except psutil.NoSuchProcess:
        print(&quot;Process already killed.&quot;)


PLT_OVERRIDE_PREFIX = &quot;&quot;&quot;
import matplotlib
matplotlib.use('Agg')  # non-interactive backend
import asyncio
import matplotlib.pyplot as plt
import io
import base64

def custom_show():
    buf = io.BytesIO()
    plt.gcf().savefig(buf, format='png')
    buf.seek(0)
    image_base64 = base64.b64encode(buf.getvalue()).decode('utf-8')
    print('[BASE_64_IMG]', image_base64)
    buf.close()  # Close the buffer
    plt.close('all')
    matplotlib.pyplot.figure()  # Create a new figure
    matplotlib.pyplot.close('all')  # Close it to ensure the state is clean
    matplotlib.pyplot.cla()  # Clear the current axes
    matplotlib.pyplot.clf()  # Clear the current figure
    matplotlib.pyplot.close()  # Close the current figure

plt.show = custom_show
&quot;&quot;&quot;


def preprocess_code(code: str) -&gt; str:
    override_prefix = &quot;&quot;
    code_lines = code.strip().split(&quot;\n&quot;)
    if not code_lines:
        return code  # Return original code if it's empty
    if &quot;import matplotlib.pyplot as plt&quot; in code:
        override_prefix = PLT_OVERRIDE_PREFIX + &quot;\n&quot;
        code_lines = [
            line for line in code_lines
            if line != &quot;import matplotlib.pyplot as plt&quot;
        ]
    last_line = code_lines[-1]
    # Check if the last line is already a print statement
    if last_line.strip().startswith(&quot;print&quot;):
        return &quot;\n&quot;.join(code_lines)
    try:
        compile(last_line, &quot;&lt;string&gt;&quot;, &quot;eval&quot;)
        # If it's a valid expression, wrap it with print
        code_lines[-1] = f&quot;print({last_line})&quot;
    except SyntaxError:
        # If it's not an expression, check if it's an assignment
        if &quot;=&quot; in last_line:
            variable_name = last_line.split(&quot;=&quot;)[0].strip()
            code_lines.append(f&quot;print({variable_name})&quot;)
    return override_prefix + &quot;\n&quot;.join(code_lines)
</code></pre> <p>I have already tried without success:</p> <ul> <li>ipython rather than python</li> <li>using threads rather than processes</li> <li>saving the image on disk rather than in a buffer</li> </ul> <p>What is extremely weird is that I cannot reproduce the bug using the code above, and yet I see the error frequently in prod and on my machine.</p>
<python><matplotlib><asynchronous><process><event-loop>
2024-05-03 07:40:36
1
6,592
L. Sanna
78,423,305
1,559,401
Managing multiple OpenGL VBO and VAO instances in Python
<p>I need some help with the code found below, where I try to load an OBJ using Impasse (a fork of PyAssimp) that may contain multiple meshes. In order to do that and help myself with debugging I am storing all the data in a dictionary structure, which allows me to grab the vertex, color and face data in an easy way.</p> <p>Both code snippets are taken from Mario Rosasco's tutorial series on PyOpenGL with the first one being heavily modified to support PyGame, ImGUI (I have removed all the windows since it's not useful for my question), loading from an OBJ file and supporting multiple meshes. The initial code uses a single VAO (with the respective VBO that mixes vertex and color data) and IBO and it's working. Upon request I can provide it.</p> <p>Once I started adding multiple buffer and array objects, I got myself in a pickle and now I cannot render anything but the black background color of my scene.</p> <p>The data structure I am using is quite simple - a dictionary with 4 keys:</p> <ul> <li><code>vbos</code> - a list of dictionaries each representing the vertices of a mesh</li> <li><code>cbos</code> - a list of dictionaries each representing the vertex colors of a mesh</li> <li><code>ibos</code> - a list of dictionaries each representing the indices of all faces of a mesh</li> <li><code>vaos</code> - a list of VAO references</li> </ul> <p>The first three have the same element structure, namely:</p> <pre><code>{ 'buff_id' : &lt;BUFFER OBJECT REFERENCE&gt;, 'data' : { 'values' : &lt;BUFFER OBJECT DATA (flattened)&gt;, 'count' : &lt;NUMBER OF ELEMENTS IN BUFFER OBJECT DATA&gt;, 'stride' : &lt;DIMENSION OF A SINGLE DATUM IN A BUFFER OBJECT DATA&gt; } } </code></pre> <p>I assume that a multi-mesh OBJ (I use Blender for the creation of my files) has a separate vertex etc. data per mesh and that every mesh is defined as a <code>g</code> component. 
In addition I use triangulation when exporting (so the <code>stride</code> is currently not really needed) so all my faces are primitives.</p> <p>I am almost 100% sure that I have missed a binding/unbinding somewhere but I cannot find the spot. Or perhaps there is a different and more fundamental issue with my code.</p> <pre><code>import sys from pathlib import Path import shutil import numpy as np from OpenGL.GLU import * from OpenGL.GL import * import pygame import glm from utils import * from math import tan, cos, sin, sqrt from ctypes import c_void_p from imgui.integrations.pygame import PygameRenderer import imgui import impasse from impasse.constants import ProcessingPreset, MaterialPropertyKey from impasse.structs import Scene, Camera, Node from impasse.helper import get_bounding_box scene_box = None def loadObjs(f_path: Path): if not f_path: raise TypeError('f_path not of type Path') if not f_path.exists(): raise ValueError('f_path not a valid filesystem path') if not f_path.is_file(): raise ValueError('f_path not a file') if not f_path.name.endswith('.obj'): raise ValueError('f_path not an OBJ file') mtl_pref = 'usemtl ' mltlib_found = False mtl_names = [] obj_parent_dir = f_path.parent.absolute() obj_raw = None with open(f_path, 'r') as f_obj: obj_raw = f_obj.readlines() for line in obj_raw: if line == '#': continue elif line.startswith('mtllib'): mltlib_found = True elif mtl_pref in line: mtl_name = line.replace(mtl_pref, '') print('Found material &quot;{}&quot; in OBJ file'.format(mtl_name)) mtl_names.append(mtl_name) args = {} args['processing'] = postprocess = ProcessingPreset.TargetRealtime_Fast scene = impasse.load(str(f_path), **args).copy_mutable() scene_box = (bb_min, bb_max) = get_bounding_box(scene) print(len(scene.meshes)) return scene def createBuffers(scene: impasse.core.Scene): vbos = [] cbos = [] ibos = [] vaos = [] for mesh in scene.meshes: print('Processing mesh &quot;{}&quot;'.format(mesh.name)) color_rnd = 
np.array([*np.random.uniform(0.0, 1.0, 3), 1.0], dtype='float32') vbo = glGenBuffers(1) vertices = np.array(mesh.vertices) vbos.append({ 'buff_id' : vbo, 'data' : { 'values' : vertices.flatten(), 'count' : len(vertices), 'stride' : len(vertices[0]) } }) glBindBuffer(GL_ARRAY_BUFFER, vbos[-1]['buff_id']) glBufferData( # PyOpenGL allows for the omission of the size parameter GL_ARRAY_BUFFER, vertices, GL_STATIC_DRAW ) glBindBuffer(GL_ARRAY_BUFFER, 0) cbo = glGenBuffers(1) print('Random color: {}'.format(color_rnd)) colors = np.full((len(vertices), 4), color_rnd) cbos.append({ 'buff_id' : cbo, 'data' : { 'values' : colors.flatten(), 'count' : len(colors), 'stride' : len(colors[0]) } }) glBindBuffer(GL_ARRAY_BUFFER, cbos[-1]['buff_id']) glBufferData( GL_ARRAY_BUFFER, colors, GL_STATIC_DRAW ) glBindBuffer(GL_ARRAY_BUFFER, 0) ibo = glGenBuffers(1) indices = np.array(mesh.faces) ibos.append({ 'buff_id' : ibo, 'data' : { 'values' : indices, 'count' : len(indices), 'stride' : len(indices[0]) } }) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibos[-1]['buff_id']) glBufferData( GL_ELEMENT_ARRAY_BUFFER, indices, GL_STATIC_DRAW ) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0) # Generate the VAOs. 
Technically, for same VBO data, a single VAO would suffice for mesh_idx in range(len(ibos)): vao = glGenVertexArrays(1) vaos.append(vao) glBindVertexArray(vaos[-1]) vertex_dim = vbos[mesh_idx]['data']['stride'] print('Mesh vertex dim: {}'.format(vertex_dim)) glBindBuffer(GL_ARRAY_BUFFER, vbos[mesh_idx]['buff_id']) glEnableVertexAttribArray(0) glVertexAttribPointer(0, vertex_dim, GL_FLOAT, GL_FALSE, 0, None) color_dim = cbos[mesh_idx]['data']['stride'] print('Mesh color dim: {}'.format(color_dim)) glBindBuffer(GL_ARRAY_BUFFER, cbos[mesh_idx]['buff_id']) glEnableVertexAttribArray(1) glVertexAttribPointer(1, color_dim, GL_FLOAT, GL_FALSE, 0, None) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibos[mesh_idx]['buff_id']) glBindVertexArray(0) return { 'vbos' : vbos, 'cbos' : cbos, 'ibos' : ibos, 'vaos' : vaos } wireframe_enabled = False transform_selected = 0 # Helper function to calculate the frustum scale. # Accepts a field of view (in degrees) and returns the scale factor def calcFrustumScale(fFovDeg): degToRad = 3.14159 * 2.0 / 360.0 fFovRad = fFovDeg * degToRad return 1.0 / tan(fFovRad / 2.0) # Global variable to represent the compiled shader program, written in GLSL program = None # Global variables to store the location of the shader's uniform variables modelToCameraMatrixUnif = None cameraToClipMatrixUnif = None # Global display variables cameraToClipMatrix = np.zeros((4,4), dtype='float32') fFrustumScale = calcFrustumScale(45.0) # Set up the list of shaders, and call functions to compile them def initializeProgram(): shaderList = [] shaderList.append(loadShader(GL_VERTEX_SHADER, &quot;PosColorLocalTransform.vert&quot;)) shaderList.append(loadShader(GL_FRAGMENT_SHADER, &quot;ColorPassthrough.frag&quot;)) global program program = createProgram(shaderList) for shader in shaderList: glDeleteShader(shader) global modelToCameraMatrixUnif, cameraToClipMatrixUnif modelToCameraMatrixUnif = glGetUniformLocation(program, &quot;modelToCameraMatrix&quot;) cameraToClipMatrixUnif = 
glGetUniformLocation(program, &quot;cameraToClipMatrix&quot;) fzNear = 1.0 fzFar = 61.0 global cameraToClipMatrix # Note that this and the transformation matrix below are both # ROW-MAJOR ordered. Thus, it is necessary to pass a transpose # of the matrix to the glUniform assignment function. cameraToClipMatrix[0][0] = fFrustumScale cameraToClipMatrix[1][1] = fFrustumScale cameraToClipMatrix[2][2] = (fzFar + fzNear) / (fzNear - fzFar) cameraToClipMatrix[2][3] = -1.0 cameraToClipMatrix[3][2] = (2 * fzFar * fzNear) / (fzNear - fzFar) glUseProgram(program) glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix.transpose()) glUseProgram(0) def initializeBuffers(): global vbo, cbo, ibo vbo = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, vbo) glBufferData( # PyOpenGL allows for the omission of the size parameter GL_ARRAY_BUFFER, obj['vertices'], GL_STATIC_DRAW ) glBindBuffer(GL_ARRAY_BUFFER, 0) cbo = glGenBuffers(1) glBindBuffer(GL_ARRAY_BUFFER, cbo) glBufferData( GL_ARRAY_BUFFER, obj['colors'], GL_STATIC_DRAW ) glBindBuffer(GL_ARRAY_BUFFER, 0) ibo = glGenBuffers(1) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo) glBufferData( GL_ELEMENT_ARRAY_BUFFER, obj['faces'], GL_STATIC_DRAW ) glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0) # Helper functions to return various types of transformation arrays def calcLerpFactor(fElapsedTime, fLoopDuration): fValue = (fElapsedTime % fLoopDuration) / fLoopDuration if fValue &gt; 0.5: fValue = 1.0 - fValue return fValue * 2.0 def computeAngleRad(fElapsedTime, fLoopDuration): fScale = 3.14159 * 2.0 / fLoopDuration fCurrTimeThroughLoop = fElapsedTime % fLoopDuration return fCurrTimeThroughLoop * fScale def rotateY(fElapsedTime, **mouse): fAngRad = computeAngleRad(fElapsedTime, 2.0) fCos = cos(fAngRad) fSin = sin(fAngRad) newTransform = np.identity(4, dtype='float32') newTransform[0][0] = fCos newTransform[2][0] = fSin newTransform[0][2] = -fSin newTransform[2][2] = fCos # offset newTransform[0][3] = 0.0 #-5.0 newTransform[1][3] = 
0.0 #5.0 newTransform[2][3] = mouse['wheel'] return newTransform # A list of the helper offset functions. # Note that this does not require a structure def in python. # Each function is written to return the complete transform matrix. g_instanceList = [ rotateY, ] # Initialize the OpenGL environment def init(w, h): initializeProgram() glEnable(GL_CULL_FACE) glCullFace(GL_BACK) glFrontFace(GL_CW) glEnable(GL_DEPTH_TEST) glDepthMask(GL_TRUE) glDepthFunc(GL_LEQUAL) glDepthRange(0.0, 1.0) glMatrixMode(GL_PROJECTION) print('Scene bounding box:', scene_box) gluPerspective(45, (w/h), 0.1, 500) #glTranslatef(0, 0, -100) def render(time, imgui_impl, mouse, mem, buffers): global wireframe_enabled, transform_selected #print(mouse['wheel']) glClearColor(0.0, 0.0, 0.0, 0.0) glClearDepth(1.0) glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT) if wireframe_enabled: glPolygonMode(GL_FRONT_AND_BACK, GL_LINE) else: glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) glUseProgram(program) fElapsedTime = pygame.time.get_ticks() / 1000.0 transformMatrix = g_instanceList[transform_selected](fElapsedTime, **mouse) glUniformMatrix4fv(modelToCameraMatrixUnif, 1, GL_FALSE, transformMatrix.transpose()) #glDrawElements(GL_TRIANGLES, len(obj['faces']), GL_UNSIGNED_SHORT, None) for vao_idx, vao in enumerate(buffers['vaos']): print('Rendering VAO {}'.format(vao_idx)) print('SHAPE VBO', buffers['vbos'][vao_idx]['data']['values'].shape) print('SHAPE CBO', buffers['cbos'][vao_idx]['data']['values'].shape) print('SHAPE IBO', buffers['ibos'][vao_idx]['data']['values'].shape) print('''Address: VBO:\t{} CBO:\t{} IBO:\t{} VAO:\t{} '''.format(id(buffers['vbos'][vao_idx]), id(buffers['cbos'][vao_idx]), id(buffers['ibos'][vao_idx]), id(vao))) glBindVertexArray(vao) index_dim = buffers['ibos'][vao_idx]['data']['stride'] index_count = buffers['ibos'][vao_idx]['data']['count'] glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, None) glBindVertexArray(0) glUseProgram(0) imgui.new_frame() # Draw windows 
imgui.end_frame() imgui.render() imgui_impl.render(imgui.get_draw_data()) # Called whenever the window's size changes (including once when the program starts) def reshape(w, h): global cameraToClipMatrix cameraToClipMatrix[0][0] = fFrustumScale * (h / float(w)) cameraToClipMatrix[1][1] = fFrustumScale glUseProgram(program) glUniformMatrix4fv(cameraToClipMatrixUnif, 1, GL_FALSE, cameraToClipMatrix.transpose()) glUseProgram(0) glViewport(0, 0, w, h) def main(): width = 800 height = 800 display = (width, height) pygame.init() pygame.display.set_caption('OpenGL VAO with pygame') pygame.display.set_mode(display, pygame.DOUBLEBUF | pygame.OPENGL | pygame.RESIZABLE) pygame.display.gl_set_attribute(pygame.GL_CONTEXT_MAJOR_VERSION, 4) pygame.display.gl_set_attribute(pygame.GL_CONTEXT_MINOR_VERSION, 1) pygame.display.gl_set_attribute(pygame.GL_CONTEXT_PROFILE_MASK, pygame.GL_CONTEXT_PROFILE_CORE) imgui.create_context() impl = PygameRenderer() io = imgui.get_io() #io.set_WantCaptureMouse(True) io.display_size = width, height #scene = loadObjs(Path('assets/sample0.obj')) scene = loadObjs(Path('assets/cubes.obj')) #scene = loadObjs(Path('assets/shapes.obj')) #scene = loadObjs(Path('assets/wooden_watch_tower/wooden watch tower2.obj')) buffers = createBuffers(scene) init(width, height) wheel_factor = 0.3 mouse = { 'pressed' : False, 'motion' : { 'curr' : np.array([0, 0]), 'prev' : np.array([0, 0]) }, 'pos' : { 'curr' : np.array([0, 0]), 'prev' : np.array([0, 0]) }, 'wheel' : -10 } clock = pygame.time.Clock() while True: for event in pygame.event.get(): if event.type == pygame.QUIT or (event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE): pygame.quit() quit() impl.process_event(event) if event.type == pygame.MOUSEMOTION: if mouse['pressed']: #glRotatef(event.rel[1], 1, 0, 0) #glRotatef(event.rel[0], 0, 1, 0) mouse['motion']['curr'] = [event.rel[1], event.rel[0]] mouse['pos']['curr'] = event.pos if event.type == pygame.MOUSEWHEEL: mouse['wheel'] += event.y * wheel_factor 
for event in pygame.mouse.get_pressed(): mouse['pressed'] = pygame.mouse.get_pressed()[0] == 1 render(time=clock, imgui_impl=impl, mouse=mouse, mem=None, buffers=buffers) mouse['motion']['prev'] = mouse['motion']['curr'] mouse['pos']['prev'] = mouse['pos']['curr'] pygame.display.flip() pygame.time.wait(10) if __name__ == '__main__': main() </code></pre> <p>with utils module being defined as</p> <pre><code>from OpenGL.GLU import * from OpenGL.GL import * import os import sys # Function that creates and compiles shaders according to the given type (a GL enum value) and # shader program (a file containing a GLSL program). def loadShader(shaderType, shaderFile): # check if file exists, get full path name strFilename = findFileOrThrow(shaderFile) shaderData = None with open(strFilename, 'r') as f: shaderData = f.read() shader = glCreateShader(shaderType) glShaderSource(shader, shaderData) # note that this is a simpler function call than in C # This shader compilation is more explicit than the one used in # framework.cpp, which relies on a glutil wrapper function. # This is made explicit here mainly to decrease dependence on pyOpenGL # utilities and wrappers, which docs caution may change in future versions. 
glCompileShader(shader) status = glGetShaderiv(shader, GL_COMPILE_STATUS) if status == GL_FALSE: # Note that getting the error log is much simpler in Python than in C/C++ # and does not require explicit handling of the string buffer strInfoLog = glGetShaderInfoLog(shader) strShaderType = &quot;&quot; if shaderType is GL_VERTEX_SHADER: strShaderType = &quot;vertex&quot; elif shaderType is GL_GEOMETRY_SHADER: strShaderType = &quot;geometry&quot; elif shaderType is GL_FRAGMENT_SHADER: strShaderType = &quot;fragment&quot; print(&quot;Compilation failure for &quot; + strShaderType + &quot; shader:\n&quot; + strInfoLog) return shader # Function that accepts a list of shaders, compiles them, and returns a handle to the compiled program def createProgram(shaderList): program = glCreateProgram() for shader in shaderList: glAttachShader(program, shader) glLinkProgram(program) status = glGetProgramiv(program, GL_LINK_STATUS) if status == GL_FALSE: # Note that getting the error log is much simpler in Python than in C/C++ # and does not require explicit handling of the string buffer strInfoLog = glGetProgramInfoLog(program) print(&quot;Linker failure: \n&quot; + strInfoLog) for shader in shaderList: glDetachShader(program, shader) return program # Helper function to locate and open the target file (passed in as a string). # Returns the full path to the file as a string. def findFileOrThrow(strBasename): # Keep constant names in C-style convention, for readability # when comparing to C(/C++) code. LOCAL_FILE_DIR = &quot;data&quot; + os.sep GLOBAL_FILE_DIR = &quot;..&quot; + os.sep + &quot;data&quot; + os.sep strFilename = LOCAL_FILE_DIR + strBasename if os.path.isfile(strFilename): return strFilename strFilename = GLOBAL_FILE_DIR + strBasename if os.path.isfile(strFilename): return strFilename raise IOError('Could not find target file ' + strBasename) </code></pre>
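Two mismatches stand out when reading `createBuffers`/`render` (assumptions from code reading, not debugged against the app): `np.array(mesh.faces)` defaults to a 32- or 64-bit integer dtype while the draw call declares `GL_UNSIGNED_SHORT`, and the `count` passed to `glDrawElements` is `len(indices)` (the face count) rather than the total number of indices. Either alone is enough to render nothing. A stdlib sketch of the expected index buffer and draw count:

```python
def flatten_faces(faces):
    # glDrawElements with GL_UNSIGNED_SHORT wants 16-bit indices and the
    # TOTAL index count (faces * 3 for triangulated meshes), not the face count
    flat = [i for face in faces for i in face]
    if any(not (0 <= i < 2**16) for i in flat):
        raise ValueError("indices do not fit GL_UNSIGNED_SHORT")
    return flat, len(flat)

flat, count = flatten_faces([(0, 1, 2), (2, 1, 3)])
assert count == 6                      # 2 triangles -> 6 indices
assert flat == [0, 1, 2, 2, 1, 3]
```

With numpy this would be roughly `indices = np.array(mesh.faces, dtype=np.uint16)` when filling the IBO and `glDrawElements(GL_TRIANGLES, indices.size, GL_UNSIGNED_SHORT, None)` when drawing (or `GL_UNSIGNED_INT` with `uint32` for meshes over 65535 vertices).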
<python><opengl><pyopengl>
2024-05-03 07:40:33
1
9,862
rbaleksandar
78,423,080
2,355,176
Pyotp Microsoft Authenticator not displaying Logo after Scanning QR
<p>I am using the pyotp Python package to generate OTPs. The generated QR code works perfectly fine: Microsoft Authenticator can add the required information and the code matches as expected. The only problem I am facing is that the added image is not displayed as the logo; the logo always consists of character initials.</p> <p>The python code is the following.</p> <pre><code>import pyotp

totp = pyotp.TOTP(&quot;secret&quot;)
uri = totp.provisioning_uri(name='User', issuer_name='Example', image='https://ecoedgeai.com/wp-content/uploads/2024/02/cropped-eeai-logo-1-95x54.png')
print(uri)
</code></pre> <p>The example generated URI is the following.</p> <pre><code>otpauth://totp/Example:User?secret=secret&amp;issuer=Example&amp;image=https%3A%2F%2Fecoedgeai.com%2Fwp-content%2Fuploads%2F2024%2F02%2Fcropped-eeai-logo-1-95x54.png
</code></pre> <p>The image URL is valid and accessible in a browser, with no restrictions on the URL, so why is the Authenticator not displaying the logo? By the way, I haven't tested it with other authenticator applications.</p>
<python><python-3.x><authenticator><pyotp>
2024-05-03 06:47:38
0
2,760
Zain Ul Abidin
78,423,078
246,241
How do I keep a correct API spec when using a ModelSerializer?
<p>I'm using a <a href="https://docs.pydantic.dev/latest/concepts/serialization/#custom-serializers" rel="nofollow noreferrer">model serializer</a> to <a href="https://stackoverflow.com/questions/78421170/pydantic-how-do-i-represent-a-list-of-objects-as-dict-serialize-list-as-dict">serialize a list as a dictionary</a> (thanks to <a href="https://stackoverflow.com/users/7938503/plagon">Plagon</a>). This however leaves me with an incorrect API spec.</p> <p>How do I ensure the API spec is correct?</p> <pre class="lang-py prettyprint-override"><code>from uuid import UUID, uuid4 from fastapi import FastAPI from pydantic import BaseModel, Field, model_serializer class Item(BaseModel): uid: UUID = Field(default_factory=uuid4) number: int = Field(default_factory=lambda: 4) # https://xkcd.com/221/ class ResponseModel(BaseModel): items: list[Item] @model_serializer def serialize_model(self): return {str(item.uid): item.model_dump() for item in self.items} app = FastAPI() @app.get(&quot;/items/&quot;, response_model=ResponseModel) def read_item(): return ResponseModel(items=[Item() for _ in range(2)]) </code></pre> <pre class="lang-json prettyprint-override"><code>{ &quot;openapi&quot;: &quot;3.1.0&quot;, &quot;info&quot;: { &quot;title&quot;: &quot;FastAPI&quot;, &quot;version&quot;: &quot;0.1.0&quot; }, &quot;paths&quot;: { &quot;/items/&quot;: { &quot;get&quot;: { &quot;summary&quot;: &quot;Read Item&quot;, &quot;operationId&quot;: &quot;read_item_items__get&quot;, &quot;responses&quot;: { &quot;200&quot;: { &quot;description&quot;: &quot;Successful Response&quot;, &quot;content&quot;: { &quot;application/json&quot;: { &quot;schema&quot;: { &quot;$ref&quot;: &quot;#/components/schemas/ResponseModel&quot; } } } } } } } }, &quot;components&quot;: { &quot;schemas&quot;: { &quot;Item&quot;: { &quot;properties&quot;: { &quot;uid&quot;: { &quot;type&quot;: &quot;string&quot;, &quot;format&quot;: &quot;uuid&quot;, &quot;title&quot;: &quot;Uid&quot; }, &quot;number&quot;: { 
&quot;type&quot;: &quot;integer&quot;, &quot;title&quot;: &quot;Number&quot; } }, &quot;type&quot;: &quot;object&quot;, &quot;title&quot;: &quot;Item&quot; }, &quot;ResponseModel&quot;: { &quot;properties&quot;: { &quot;items&quot;: { // &lt;- this is wrong, as it doesn't know that items is a dict now. &quot;items&quot;: { &quot;$ref&quot;: &quot;#/components/schemas/Item&quot; }, &quot;type&quot;: &quot;array&quot;, &quot;title&quot;: &quot;Items&quot; } }, &quot;type&quot;: &quot;object&quot;, &quot;required&quot;: [ &quot;items&quot; ], &quot;title&quot;: &quot;ResponseModel&quot; } } } } </code></pre>
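One direction worth considering (my assumption, not from the question: class names reuse the question's, but swapping the `model_serializer` for a `RootModel` is a sketch, not a confirmed fix): declaring the response as a mapping type makes the generated schema describe the dict shape directly.

```python
from uuid import UUID, uuid4

from pydantic import BaseModel, Field, RootModel


class Item(BaseModel):
    uid: UUID = Field(default_factory=uuid4)
    number: int = 4


# Declaring the response as a mapping makes the generated JSON schema use
# additionalProperties -> Item instead of an (incorrect) array of Item.
class ResponseModel(RootModel[dict[str, Item]]):
    pass


items = [Item() for _ in range(2)]
resp = ResponseModel({str(item.uid): item for item in items})
schema = ResponseModel.model_json_schema()
# schema["type"] is "object" and additionalProperties references Item
```

Used as `response_model=ResponseModel` in FastAPI, this should produce a spec that matches the serialized output.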
<python><fastapi><pydantic>
2024-05-03 06:47:27
0
11,661
tback
78,422,754
424,957
How to keep the same size when saving pictures in Python matplotlib?
<p>I use the code below to plot graphs and save the figures to PNG files, but the PNG files do not all come out the same size (width x height). How can I make all of them the same size?</p> <pre><code>fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 10)) fig.savefig('myImage.png', bbox_inches='tight', dpi=300) </code></pre> <p>The PNG file sizes (width x height) are shown below. Can anyone help me?</p> <p><a href="https://i.sstatic.net/2dK2pUM6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2dK2pUM6.jpg" alt="enter image description here" /></a></p>
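A likely cause worth checking: `bbox_inches='tight'` recomputes the saved region from the drawn artists, so the pixel size varies with titles, ticks and legends. Dropping it keeps the canvas at exactly `figsize * dpi` pixels. A sketch (also note that `figsize` must be passed as a keyword argument):

```python
import os
import struct
import tempfile

import matplotlib
matplotlib.use("Agg")  # off-screen backend so the example runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(20, 10))
ax.plot([1, 2, 3])

path = os.path.join(tempfile.gettempdir(), "myImage.png")
# Without bbox_inches='tight', the output is exactly figsize * dpi pixels
# (20*100 x 10*100 here), independent of the plot contents.
fig.savefig(path, dpi=100)

# Read the width/height fields of the PNG header to confirm
with open(path, "rb") as f:
    header = f.read(24)
width, height = struct.unpack(">II", header[16:24])
print(width, height)  # 2000 1000
```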
<python><matplotlib><image-scaling><image-size>
2024-05-03 05:00:15
0
2,509
mikezang
78,422,647
1,242,028
Sort a list of points in row major snake scan order
<p>I have a list of points that represent circles that have been detected in an image.</p> <pre><code>[(1600.0, 26.0), (1552.0, 30.0), (1504.0, 32.0), (1458.0, 34.0), (1408.0, 38.0), (1360.0, 40.0), (1038.0, 54.0), (1084.0, 52.0), (1128.0, 54.0), (1174.0, 50.0), (1216.0, 52.0), (1266.0, 46.0), (1310.0, 46.0), (1600.0, 74.0), (1552.0, 76.0), (1504.0, 82.0), (1456.0, 80.0), (1406.0, 88.0), (1362.0, 86.0), (1310.0, 90.0), (1268.0, 94.0), (1224.0, 96.0), (1176.0, 98.0), (1128.0, 100.0), (1084.0, 100.0), (1040.0, 100.0), (996.0, 102.0), (992.0, 62.0), (950.0, 60.0), (950.0, 106.0), (908.0, 104.0), (904.0, 64.0), (862.0, 66.0), (862.0, 110.0), (820.0, 108.0), (816.0, 62.0), (776.0, 112.0), (774.0, 66.0), (732.0, 112.0), (730.0, 68.0), (686.0, 108.0), (684.0, 64.0), (642.0, 66.0), (600.0, 70.0), (600.0, 112.0), (558.0, 112.0), (552.0, 64.0), (512.0, 66.0), (510.0, 112.0), (470.0, 70.0), (464.0, 110.0), (420.0, 66.0), (376.0, 68.0), (376.0, 112.0), (332.0, 68.0), (332.0, 112.0), (426.0, 118.0), (1598.0, 124.0), (1552.0, 124.0), (1504.0, 126.0), (1454.0, 128.0), (1404.0, 134.0), (1362.0, 132.0), (1310.0, 138.0), (1266.0, 138.0), (1220.0, 136.0), (336.0, 156.0), (378.0, 156.0), (422.0, 156.0), (472.0, 160.0), (508.0, 154.0), (556.0, 154.0), (602.0, 156.0), (646.0, 156.0), (686.0, 154.0), (734.0, 158.0), (774.0, 152.0), (818.0, 152.0), (864.0, 154.0), (906.0, 150.0), (950.0, 152.0), (996.0, 148.0), (1038.0, 146.0), (1084.0, 146.0), (1128.0, 144.0), (1172.0, 144.0), (1598.0, 170.0), (1552.0, 170.0), (1506.0, 174.0), (1458.0, 170.0), (1408.0, 178.0), (1360.0, 178.0), (1314.0, 178.0), (332.0, 200.0), (382.0, 204.0), (422.0, 200.0), (466.0, 202.0), (512.0, 200.0), (556.0, 198.0), (600.0, 202.0), (642.0, 198.0), (690.0, 200.0), (732.0, 196.0), (776.0, 198.0), (820.0, 196.0), (862.0, 198.0), (908.0, 200.0), (950.0, 196.0), (998.0, 196.0), (1042.0, 196.0), (1084.0, 192.0), (1130.0, 188.0), (1174.0, 186.0), (1220.0, 184.0), (1266.0, 186.0), (1596.0, 220.0), (1554.0, 216.0), (1500.0, 
222.0), (1454.0, 222.0), (1402.0, 226.0), (1354.0, 228.0), (1312.0, 230.0), (1264.0, 232.0), (1220.0, 232.0), (1176.0, 232.0), (1128.0, 234.0), (1084.0, 236.0), (1038.0, 236.0), (996.0, 238.0), (950.0, 238.0), (906.0, 238.0), (864.0, 244.0), (818.0, 240.0), (776.0, 242.0), (734.0, 244.0), (690.0, 244.0), (644.0, 242.0), (602.0, 246.0), (554.0, 242.0), (514.0, 248.0), (466.0, 244.0), (422.0, 244.0), (378.0, 244.0), (1456.0, 266.0), (1504.0, 264.0), (1552.0, 264.0), (1406.0, 272.0), (1360.0, 272.0), (1312.0, 276.0), (1270.0, 276.0), (1218.0, 278.0), (1172.0, 280.0), (1128.0, 280.0), (1084.0, 280.0), (1040.0, 280.0), (996.0, 282.0), (952.0, 282.0), (908.0, 284.0), (866.0, 288.0), (820.0, 284.0), (776.0, 286.0), (732.0, 292.0), (688.0, 286.0), (644.0, 286.0), (600.0, 288.0), (558.0, 290.0), (510.0, 288.0), (466.0, 288.0), (422.0, 288.0), (378.0, 288.0), (1504.0, 310.0), (1556.0, 308.0), (1454.0, 316.0), (1406.0, 318.0), (1360.0, 316.0), (1312.0, 318.0), (1270.0, 318.0), (1220.0, 320.0), (1176.0, 320.0), (1130.0, 320.0), (1086.0, 324.0), (1040.0, 328.0), (998.0, 326.0), (954.0, 328.0), (908.0, 328.0), (864.0, 328.0), (822.0, 330.0), (778.0, 334.0), (734.0, 330.0), (692.0, 332.0), (648.0, 334.0), (602.0, 332.0), (556.0, 330.0), (512.0, 332.0), (468.0, 332.0), (422.0, 332.0), (378.0, 332.0), (378.0, 376.0), (428.0, 378.0), (470.0, 378.0), (512.0, 376.0), (556.0, 376.0), (602.0, 376.0), (646.0, 374.0), (690.0, 374.0), (736.0, 378.0), (780.0, 376.0), (824.0, 372.0), (866.0, 372.0), (910.0, 372.0), (954.0, 370.0), (998.0, 370.0), (1042.0, 370.0), (1084.0, 368.0), (1130.0, 370.0), (1178.0, 366.0), (1220.0, 366.0), (1270.0, 362.0), (1316.0, 362.0), (1364.0, 360.0), (1412.0, 358.0), (1460.0, 356.0), (1510.0, 356.0), (1554.0, 356.0), (1504.0, 404.0), (1454.0, 408.0), (1408.0, 406.0), (1362.0, 406.0), (1312.0, 410.0), (1268.0, 408.0), (1220.0, 410.0), (1176.0, 410.0), (1130.0, 412.0), (1088.0, 414.0), (1042.0, 414.0), (996.0, 418.0), (954.0, 416.0), (910.0, 416.0), (866.0, 
416.0), (822.0, 416.0), (778.0, 418.0), (734.0, 416.0), (688.0, 418.0), (644.0, 418.0), (602.0, 420.0), (560.0, 420.0), (514.0, 418.0), (468.0, 420.0), (422.0, 420.0), (472.0, 466.0), (516.0, 466.0), (560.0, 466.0), (604.0, 464.0), (646.0, 464.0), (688.0, 462.0), (734.0, 462.0), (778.0, 462.0), (822.0, 460.0), (866.0, 464.0), (908.0, 460.0), (952.0, 460.0), (998.0, 460.0), (1042.0, 462.0), (1086.0, 458.0), (1130.0, 456.0), (1176.0, 456.0), (1224.0, 454.0), (1270.0, 454.0), (1316.0, 454.0), (1362.0, 456.0), (1408.0, 454.0), (1460.0, 450.0), (1412.0, 496.0), (1366.0, 496.0), (1314.0, 500.0), (1272.0, 500.0), (1222.0, 500.0), (1174.0, 504.0), (1132.0, 502.0), (1088.0, 502.0), (1042.0, 502.0), (998.0, 504.0), (954.0, 504.0), (910.0, 504.0), (864.0, 504.0), (820.0, 506.0), (778.0, 504.0), (736.0, 506.0), (690.0, 506.0), (648.0, 508.0), (602.0, 510.0), (560.0, 506.0), (514.0, 510.0), (558.0, 552.0), (602.0, 550.0), (646.0, 550.0), (692.0, 554.0), (734.0, 550.0), (780.0, 548.0), (824.0, 548.0), (868.0, 552.0), (912.0, 548.0), (956.0, 548.0), (996.0, 550.0), (1042.0, 548.0), (1088.0, 546.0), (1134.0, 546.0), (1178.0, 546.0), (1224.0, 546.0), (1272.0, 544.0), (1316.0, 544.0), (1368.0, 542.0)] </code></pre> <p>I want to sort the points in &quot;snake scan&quot; order like so: <a href="https://i.sstatic.net/reaiiYkZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/reaiiYkZ.png" alt="enter image description here" /></a></p> <p>Here is the original image converted to jpg to fit the maximum upload size on SO: <a href="https://i.sstatic.net/pN6NwEfg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pN6NwEfg.jpg" alt="enter image description here" /></a></p> <p>I'm having trouble though because the rows don't have the exact same y value and the columns don't have the exact same x value; it's a non-regular grid.</p> <p>I tried using the following function, but it's really sensitive to the tolerance value and doesn't quite work right.</p>
<pre><code>def sort_points_snake_scan(points, tolerance=10): &quot;&quot;&quot;Sort the points in a snake-scan pattern with a tolerance for y-coordinates. Arguments: - points (list): a list of (x, y) points - tolerance (int): the y-coordinate tolerance for grouping points into rows Returns: - snake_scan_order (list): the points sorted in snake-scan order. &quot;&quot;&quot; # Group points by the y-coordinate with tolerance if not points: return [] # Sort points by y to simplify grouping points = sorted(points, key=lambda p: p[1]) rows = [] current_row = [points[0]] for point in points[1:]: if abs(point[1] - current_row[-1][1]) &lt;= tolerance: current_row.append(point) else: rows.append(current_row) current_row = [point] if current_row: rows.append(current_row) snake_scan_order = [] reverse = True ind = 0 for row in rows: # Sort the row by x-coordinate, alternating order for each row row_sorted = sorted(row, key=lambda point: point[0], reverse=reverse) snake_scan_order.extend(row_sorted) reverse = not reverse # Flip the order for the next row ind += 1 return snake_scan_order </code></pre> <p>Here is the result: <a href="https://i.sstatic.net/pBd9FJ4f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pBd9FJ4f.png" alt="enter image description here" /></a></p> <p><strong>Does anyone know the right way to do this?</strong></p>
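For comparison, here is a sketch of a gap-based alternative that derives the row split from the data instead of a fixed tolerance (a heuristic of mine, not a validated fix: it assumes the spacing between rows is clearly larger than the y-jitter within a row):

```python
import numpy as np

def snake_sort(points, row_gap=None):
    """Sort (x, y) points in snake-scan order, splitting rows at large y-gaps."""
    pts = sorted(points, key=lambda p: p[1])
    ys = np.array([p[1] for p in pts])
    gaps = np.diff(ys)
    if row_gap is None:
        # assume within-row jitter is much smaller than the row spacing
        row_gap = gaps.max() / 2 if len(gaps) else 0
    # start a new row wherever the y-gap exceeds the threshold
    breaks = np.where(gaps > row_gap)[0] + 1
    rows = np.split(np.arange(len(pts)), breaks)
    out, reverse = [], False
    for row in rows:
        out.extend(sorted((pts[i] for i in row), key=lambda p: p[0], reverse=reverse))
        reverse = not reverse
    return out

demo = [(1, 10), (2, 11), (3, 50), (4, 51)]
print(snake_sort(demo))  # [(1, 10), (2, 11), (4, 51), (3, 50)]
```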
<python><algorithm><sorting>
2024-05-03 04:07:56
2
2,359
noel
78,422,418
788,153
How to run Python files by clicking a link in another file?
<p>New to Streamlit and Flask. I am trying to create a simple page containing links. When these links are clicked, I want the connected files to run in the new window or tab. Here is my code that is trying to achieve this. But the new link does not show anything. Also, the port keeps on changing.</p> <p><code>main.py</code></p> <pre><code>import streamlit as st import subprocess from flask import Flask, Response import threading # Define a dictionary mapping filenames to Python scripts scripts = { &quot;Script 1&quot;: &quot;script1.py&quot;, &quot;Script 2&quot;: &quot;script2.py&quot;, # Add more scripts as needed } # Route for running Script 1 app_script1 = Flask(__name__) @app_script1.route(&quot;/run_script1&quot;) def run_script1(): # Use subprocess to run the Streamlit app for Script 1 subprocess.Popen([&quot;streamlit&quot;, &quot;run&quot;, &quot;script1.py&quot;]) return Response(status=204) def start_flask_app_script1(): app_script1.run(port=8503) # Route for running Script 2 app_script2 = Flask(__name__) @app_script2.route(&quot;/run_script2&quot;) def run_script2(): # Use subprocess to run the Streamlit app for Script 2 subprocess.Popen([&quot;streamlit&quot;, &quot;run&quot;, &quot;script2.py&quot;]) return Response(status=204) def start_flask_app_script2(): app_script2.run(port=8504) # Start the Flask app for Script 2 in a separate thread threading.Thread(target=start_flask_app_script2).start() # Display links for running each script for script_name in scripts: if st.button(script_name): if script_name == &quot;Script 1&quot;: # Start the Flask app for Script 1 in a separate thread threading.Thread(target=start_flask_app_script1).start() script_url = &quot;http://localhost:8503&quot; elif script_name == &quot;Script 2&quot;: threading.Thread(target=start_flask_app_script2).start() script_url = &quot;http://localhost:8504&quot; st.markdown(f'&lt;a href=&quot;{script_url}&quot; target=&quot;_blank&quot;&gt;{script_name}&lt;/a&gt;', 
unsafe_allow_html=True) </code></pre> <p><code>script1.py</code></p> <pre><code>import streamlit as st st.write(&quot;Inside script1.py&quot;) </code></pre> <p><code>script2.py</code></p> <pre><code>import streamlit as st st.write(&quot;Inside script2.py&quot;) </code></pre>
<python><flask><streamlit>
2024-05-03 02:16:18
1
2,762
learner
78,422,323
825,227
How to match a string not surrounded by brackets and parentheses using regex
<p>I have a collection of strings that take, for example, the following form:</p> <pre><code>(GOOGL) [ST]S (partial) 02/01/2024 03/01/2024 $1,001 - $15,000 (PHG) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000 (PFE) [ST] P 02/12/2024 03/01/2024 $1,001 - $15,000 (UL) [ST]S 02/12/2024 03/01/2024 $1,001 - $15,000 </code></pre> <p>I'd like to find a pattern to return the third group of characters (eg, 'S' in the first line) not surrounded by brackets or parens for each row.</p> <p>The below pattern gets me each of the capitalized character strings (eg, GOOGL, ST, S):</p> <pre><code>([A-Z]+) </code></pre> <p>How would I only return the 'S', 'P', 'P', and 'S' from each line respectively, which will never be surrounded by parentheses or brackets (alternatively, will have at least one of, sometimes both, a space before or after)?</p>
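One possible direction (a sketch; the pattern is my assumption, only checked against the sample rows): instead of trying to exclude delimiters, anchor on the closing `]` that always precedes the wanted letters.

```python
import re

lines = [
    "(GOOGL) [ST]S (partial) 02/01/2024 03/01/2024 $1,001 - $15,000",
    "(PHG) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000",
    "(PFE) [ST] P 02/12/2024 03/01/2024 $1,001 - $15,000",
    "(UL) [ST]S 02/12/2024 03/01/2024 $1,001 - $15,000",
]
# capture the capitals that follow the closing bracket, optional space allowed
pattern = re.compile(r"\]\s*([A-Z]+)")
codes = [pattern.search(line).group(1) for line in lines]
print(codes)  # ['S', 'P', 'P', 'S']
```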
<python><regex>
2024-05-03 01:24:33
3
1,702
Chris
78,422,215
7,065,886
model.fit not recognizing labels in tf.data.Dataset
<p>I am creating a dataset with a generator like this</p> <pre class="lang-py prettyprint-override"><code>tfds = tf.data.Dataset.from_generator(streamFromFile, output_signature=tf.TensorSpec(shape=(240,120), dtype=tf.uint16)) </code></pre> <p>The model is an autoencoder that internally calculates the loss, so the training should train the model to match its output to 0. Therefore, the &quot;label&quot; is always a scalar of 0. To add labels to the dataset, I use it as follows:</p> <pre class="lang-py prettyprint-override"><code>def prepareLabels(x): return x, 0 tfds = tf.data.Dataset.from_generator(...) tfds = tfds.map(prepareLabels) for x,y in tfds.take(1): print(&quot;features&quot;, x.shape) print(&quot;label&quot;, y.shape) # prints: # features (240, 120) # label () </code></pre> <p>The model is built and compiled with MAE loss (to match the output to 0). During training, I instantly get this:</p> <pre class="lang-py prettyprint-override"><code>model.fit( tfds.take(10).batch(4), epochs=3 ) </code></pre> <pre class="lang-bash prettyprint-override"><code>ValueError: in user code: File &quot;/home/user/.local/lib/python3.11/site-packages/keras/src/engine/training.py&quot;, line 1401, in train_function * return step_function(self, iterator) File &quot;/home/user/.local/lib/python3.11/site-packages/keras/src/engine/training.py&quot;, line 1384, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) ... is_last_dim_1 = tf.equal(1, tf.shape(y_pred)[-1]) ValueError: slice index -1 of dimension 0 out of bounds.
for '{{node mean_absolute_error/strided_slice}} = StridedSlice[Index=DT_INT32, T=DT_INT32, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](mean_absolute_error/Shape, mean_absolute_error/strided_slice/stack, mean_absolute_error/strided_slice/stack_1, mean_absolute_error/strided_slice/stack_2)' with input shapes: [0], [1], [1], [1] and with computed input tensors: input[1] = &lt;-1&gt;, input[2] = &lt;0&gt;, input[3] = &lt;1&gt;. </code></pre> <p>This looks to me as if TensorFlow does not recognize that the <code>tfds</code> dataset supplies features <strong>and labels</strong>, and it interprets it as a batch of 2 items?!</p> <p>My model summary looks like this: (the input is an image with a single channel and the output is a scalar value that should be trained to be near zero)</p> <pre><code>__________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_3 (InputLayer) [(None, 240, 120, 1)] 0 [] ... encoded (Dense) (None, 200) 40200 ['reshape_4[0][0]'] ...
reconstruction (Conv2DTran (None, 240, 120, 1) 37 ['up_sampling2d_11[0][0]'] spose) score (ScoreLayer) () 0 ['input_3[0][0]', 'reconstruction[0][0]'] ================================================================================================== Total params: 81365 (317.83 KB) Trainable params: 81365 (317.83 KB) Non-trainable params: 0 (0.00 Byte) __________________________________________________________________________________________________ </code></pre> <hr /> <h1>Minimum Example</h1> <pre class="lang-py prettyprint-override"><code>import os os.environ[&quot;TF_CPP_MIN_LOG_LEVEL&quot;] = &quot;3&quot; import tensorflow as tf import numpy as np def generator(): for _ in range(1000): yield (np.random.random((240,120,1))*255).astype(np.uint16) def addLabels(x): return x,0 tfds = tf.data.Dataset.from_generator(generator, output_signature=tf.TensorSpec(shape=(240,120,1), dtype=tf.uint16)) tfds = tfds.map(addLabels) for x,y in tfds.take(1): print(x) print(y) from keras.optimizers import Adam from keras.layers import Input, MaxPooling2D, UpSampling2D, Layer from keras.models import Model class ScoreLayer(Layer): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def build(self, input_shape): return super().build(input_shape) def compute_output_shape(self, input_shape): return input_shape[0] @tf.function def call(self, x, y, *args, **kwargs): return tf.reduce_sum(tf.abs(x-y), name=&quot;diffScore&quot;)/1000 def build_autoencoder(): inputs = Input(shape=(240,120,1)) x = MaxPooling2D((4,4))(inputs) x = UpSampling2D((4,4))(x) x = ScoreLayer(name=&quot;score&quot;)(inputs, x) outputs = x autoencoder = Model(inputs, outputs) autoencoder.compile(optimizer=Adam(learning_rate=1e-2), loss='mae') return autoencoder model = build_autoencoder() model.summary() model.fit(tfds.batch(10).take(1)) </code></pre> <p>If you need additional info, please comment!</p> <p>Thanks in advance :)</p>
<python><tensorflow><keras>
2024-05-03 00:29:27
0
905
TheClockTwister
78,422,200
5,105,207
GEKKO: Parameter estimation on a delayed system with known delay
<p>Here is a simple delayed system, or say a DDE</p> <p><a href="https://i.sstatic.net/FyfpFPeV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyfpFPeV.png" alt="enter image description here" /></a></p> <p>I believe the parameter <code>k</code> can be estimated with a direct collocation method when <code>τ</code> is given. And the problem is how to do this with GEKKO.</p> <p>Here is what I have done:</p> <p>Firstly, the system is simulated with the <code>jitcdde</code> package</p> <pre class="lang-py prettyprint-override"><code>from jitcdde import t, y, jitcdde import numpy as np # the constants in the equation k = -0.2 tau = 0.5 # the equation f = [ k*y(0, t-tau) ] # initialising the integrator DDE = jitcdde(f) # enter initial conditions DDE.constant_past([1.0]) # short pre-integration to take care of discontinuities DDE.step_on_discontinuities() # create timescale stoptime = 10.5 numpoints = 100 times = np.arange(DDE.t, stoptime, 1/10) # integrating data = [] for time in times: data.append( DDE.integrate(time) ) # The warning is due to small sample step, no worry </code></pre> <p>Then, do the estimation</p> <pre class="lang-py prettyprint-override"><code>from gekko import GEKKO # The sample is taken every 0.1s and the delay is 0.5s # so ydelayed should be taken 5 steps before ydata tdata = times[5:] ydata = np.array(data)[5:] ydelayed = np.array(data)[0:-5] m = GEKKO(remote=False) m.time = tdata x = m.CV(value=ydata); x.FSTATUS = 1 # fit to measurement xdelayed = m.CV(value=ydelayed); x.FSTATUS = 1 # fit to measurement k = m.FV(); k.STATUS = 1 # adjustable parameter m.Equation(x.dt()== k * xdelayed) # differential equation m.options.IMODE = 5 # dynamic estimation m.options.NODES = 5 # collocation nodes m.solve(disp=False) # display solver output k = k.value[0] </code></pre> <p>which shows <code>k=-0.03253</code> but the true value is <code>-0.2</code>. How to fix the identification?</p>
<python><gekko>
2024-05-03 00:22:43
1
1,413
Page David
78,422,084
9,191,338
What is the lifecycle of a process in python multiprocessing?
<p>In normal Python code, I can understand the lifecycle of the process. e.g. when executing <code>python script.py</code>:</p> <ol> <li>shell receives the command <code>python script.py</code>, and the OS creates a new process to start executing <code>python</code>.</li> <li>the <code>python</code> executable sets up the interpreter, and starts to execute <code>script.py</code>.</li> <li>When <code>script.py</code> finishes execution, the python interpreter will exit.</li> </ol> <p>In the case of multiprocessing, I'm curious about what happens to the other processes.</p> <p>Take the following code for example:</p> <pre class="lang-py prettyprint-override"><code># test.py import multiprocessing # Function to compute square of a number def compute_square(number): print(f'The square of {number} is {number * number}') if __name__ == '__main__': # List of numbers numbers = [1, 2, 3, 4, 5] # Create a list to keep all processes processes = [] # Create a process for each number to compute its square for number in numbers: process = multiprocessing.Process(target=compute_square, args=(number,)) processes.append(process) process.start() # Ensure all processes have finished execution for process in processes: process.join() print(&quot;All processes have finished execution.&quot;) </code></pre> <p>When I execute <code>python test.py</code>, I understand that <code>test.py</code> will be executed as the <code>__main__</code> module. But what happens to the other processes?</p> <p>To be specific, when I execute <code>multiprocessing.Process(target=compute_square, args=(number,)).start()</code>, what happens to that process?</p> <p>How does that process invoke the python interpreter? If it is just <code>python script.py</code>, how does it know it needs to execute a function named <code>compute_square</code>? Or does it use <code>python -i</code>, and pass the command to execute through a pipe?</p>
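A small sketch to inspect this on your own platform: the start method determines the mechanism. With "fork" (the historical default on Linux) the child is a clone of the parent, so no new interpreter command line exists at all; with "spawn" a fresh interpreter is launched, re-imports the main module (under a non-`__main__` name, which is why the guard matters), and receives the pickled target function and arguments over a pipe, not via `python -i`.

```python
import multiprocessing

def compute_square(number):
    return number * number

if __name__ == '__main__':
    # "fork": the child is a copy of the parent process.
    # "spawn": the child is a new interpreter that re-imports this module and
    # unpickles (target, args) from a pipe before calling target(*args).
    method = multiprocessing.get_start_method()
    print(method)

    p = multiprocessing.Process(target=compute_square, args=(3,))
    p.start()
    p.join()
    print(p.exitcode)  # 0 on a clean exit
```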
<python><multiprocessing><python-multiprocessing>
2024-05-02 23:20:01
1
2,492
youkaichao
78,421,758
6,606,057
How to Automatically Dummy Code High Cardinality Variables in Python
<p>I am working my way through the data engineer salary data set on Kaggle. The salary_currency column has the following value counts.</p> <pre><code>salary_currency USD 13695 GBP 558 EUR 406 INR 51 CAD 49 ... </code></pre> <p>16494 values total</p> <p>Is there a way to dummy code only for values that are at least 2% (or any percent) of a given column? In other words only dummy code for USD, GBP, and EUR?</p>
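A possible sketch with pandas (the 2% threshold and the `Other` label are illustrative, and the tiny series below stands in for the Kaggle column): compute relative frequencies, keep the categories above the cutoff, lump the rest, then dummy code.

```python
import pandas as pd

s = pd.Series(["USD"] * 95 + ["GBP"] * 3 + ["EUR"] * 1 + ["INR"] * 1,
              name="salary_currency")

# keep only categories covering at least 2% of the column, lump the rest
freq = s.value_counts(normalize=True)
keep = freq[freq >= 0.02].index
lumped = s.where(s.isin(keep), "Other")

dummies = pd.get_dummies(lumped, prefix="cur")
print(sorted(dummies.columns))  # ['cur_GBP', 'cur_Other', 'cur_USD']
```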
<python><one-hot-encoding><recode>
2024-05-02 21:08:47
1
485
Englishman Bob
78,421,709
6,606,057
Recode a Variable if it Does Not Fall into a Set in Python
<p>I have a data set that looks like this. I would like to recode the values if they do not fall into a set.</p> <p>If my set = (Apple, Orange) I would like all other values to be 'x'.</p> <p>I have this column.</p> <pre><code> a 0 Apple 1 Apple 2 Pear 3 Banana 4 Apple 5 Orange </code></pre> <p>I would like to make it look like this column:</p> <pre><code> a 0 Apple 1 Apple 2 x 3 x 4 Apple 5 Orange </code></pre>
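A minimal sketch with pandas `Series.where`, using the data from the question:

```python
import pandas as pd

df = pd.DataFrame({"a": ["Apple", "Apple", "Pear", "Banana", "Apple", "Orange"]})
keep = {"Apple", "Orange"}

# keep values that are in the set, replace everything else with 'x'
df["a"] = df["a"].where(df["a"].isin(keep), "x")
print(df["a"].tolist())  # ['Apple', 'Apple', 'x', 'x', 'Apple', 'Orange']
```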
<python><pandas>
2024-05-02 20:55:30
1
485
Englishman Bob
78,421,693
8,742,641
Calling a Python script inside a virtual environment from PowerShell without changing working directory
<p>I am trying to set up a <code>(1)Powershell script</code> that will activate a Python virtual environment with a <code>(2)Python .exe</code> file to call a <code>(3)Python script</code> that requires an <code>(4)SQL query</code>. I want the paths to be relative.</p> <p><a href="https://i.sstatic.net/nSqhpNOP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSqhpNOP.png" alt="enter image description here" /></a></p> <p>The following script is supposed to do that:</p> <pre><code>$pythonScript = Join-Path -Path (Join-Path -Path $PSScriptRoot -ChildPath &quot;..\python&quot;) -ChildPath &quot;script_python.py&quot; $pythonVenv = Join-Path -Path (Join-Path -Path $PSScriptRoot -ChildPath &quot;..\python\venv\Scripts&quot;) -ChildPath &quot;python.exe&quot; $pythonStart = &amp; $pythonVenv $pythonScript $pythonStart </code></pre> <p>When working on my <code>(3)Python</code> script, I can just refer to the SQL query by name without specifying a path, as the working directory is implied to be their <code>python</code> parent. However, when calling the script from my PowerShell script, the working directory becomes the <code>Scripts</code> directory, parent of the virtual environment <code>python.exe</code> file; thus, the <code>(4)sql_query.sql</code> is not found when my Python script is executing.</p> <p>How can I keep the working directory to be <code>python</code> just like when I am developing my Python script?</p>
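Independent of the PowerShell side, the Python script itself can be made robust to whatever working directory it is launched from, by resolving the query path against its own location. A sketch (file names taken from the question; the helper function is illustrative):

```python
from pathlib import Path

# Resolve sql_query.sql relative to this script's directory instead of the
# process working directory, so the caller's location no longer matters.
SQL_PATH = Path(__file__).resolve().parent / "sql_query.sql"

def load_query(path=SQL_PATH):
    return path.read_text()
```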
<python><powershell>
2024-05-02 20:51:22
1
351
Frederi ROSE
78,421,600
5,472,006
Python code execution too slow due to bottleneck - seeking performance optimization
<p>I'm currently facing a performance issue with my Python code, specifically with the execution speed. I've identified a bottleneck in my code, and I'm seeking advice on how to optimize it for better performance.</p> <p>Here's a simplified version of the code:</p> <pre><code>import numpy as np def merge_arrays(arr: list, new: list) -&gt; list: if len(arr) != len(new): raise ValueError(f'Length of &lt;inds&gt; is {len(arr)} but length of &lt;new&gt; is {len(new)}') else: return transform_array(zip(arr, new), []) def transform_array(arr: list, r: list): for x in arr: if isinstance(x, (list, tuple, np.ndarray)): transform_array(x, r) else: r.append(x) return r COUNT = 100000 pips = [(np.arange(5), np.arange(5)) for _ in range(COUNT)] b = [np.arange(50, 65) for _ in range(COUNT)] ang = [np.arange(50, 55) for _ in range(COUNT)] dist = [np.arange(50, 55) for _ in range(COUNT)] result = [merge_arrays(y, [np.append(b[i][1:], [dist[i][p], ang[i][p]]) for p, i in enumerate(x)]) for x, y in pips] </code></pre> <p>The issue arises when dealing with large datasets, particularly when the input lists contain a significant number of elements. This leads to slow execution times, which is undesirable for my application.</p> <p>I've tried to optimize the code by using list comprehensions and avoiding unnecessary condition checks, but the performance is still not satisfactory.</p> <p>I'm looking for suggestions on how to refactor or optimize this code to improve its performance, especially in the transform_array function where the bottleneck seems to be located.</p> <p>Any insights or alternative approaches would be greatly appreciated. Thank you!</p>
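If profiling confirms the recursion in `transform_array` is the hot spot, one alternative is to replace it with an explicit stack; a sketch (not benchmarked here, so treat the speedup as an assumption to verify):

```python
import numpy as np

def flatten(items):
    """Iteratively flatten nested lists/tuples/arrays without recursion."""
    out, stack = [], [iter(items)]
    while stack:
        for x in stack[-1]:
            if isinstance(x, (list, tuple, np.ndarray)):
                stack.append(iter(x))  # descend into the nested container
                break
            out.append(x)
        else:
            stack.pop()  # current container exhausted
    return out

print(flatten([(np.arange(2), [3, 4]), 5]) == [0, 1, 3, 4, 5])  # True
```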
<python><arrays><performance><recursion><array-merge>
2024-05-02 20:24:51
1
317
Nisu
78,421,521
825,227
Why does regex get tripped up on this
<p>Have the below set up to parse text to individual transactions:</p> <pre><code>transactions = rx.findall(r&quot;(\([A-Z][\s\S]*?\$.*\$[,\d]+)&quot;, text) </code></pre> <p>For the below sample text, I'd expect to return 6 transactions (one for each ticker represented--denoted initially by a &quot;([A-Z])&quot; pattern.</p> <p><strong>text string</strong></p> <pre><code>SP Alphabet Inc. - Class A Common Stock (GOOGL) [ST]S (partial) 02/01/2024 03/01/2024 $1,001 - $15,000 F S: New D: Part of my spouse's retirement portfolio. SP Amazon.com, Inc. - Common Stock (AMZN) [ST]P 02/12/2024 03/01/2024 $15,001 - $50,000 F S: New D: Part of my spouse's retirement portfolio. SP Koninklijke Philips N.V. NY Registry Shares (PHG) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000 F S: New D: Part of my spouse's retirement portfolio. SP Pfizer, Inc. Common Stock (PFE) [ST] P 02/12/2024 03/01/2024 $1,001 - $15,000 F S: New D: Part of my spouse's retirement portfolio. SP QUALCOMM Incorporated - Common Stock (QCOM) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000 F S: New D: Part of my spouse's retirement portfolio. SP Unilever PLC Common Stock (UL) [ST]S 02/12/2024 03/01/2024 $1,001 - $15,000 F S: New D: Part of my spouse's retirement portfolio. </code></pre> <p><strong>unformatted text</strong></p> <blockquote> <p><code>&quot;SP Alphabet Inc. - Class A Common\nStock (GOOGL) [ST]S (partial) 02/01/2024 03/01/2024 $1,001 - $15,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\nSP Amazon.com, Inc. - Common Stock\n(AMZN) [ST]P 02/12/2024 03/01/2024 $15,001 -\n$50,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\nSP Koninklijke Philips N.V. 
NY Registry\nShares (PHG) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\nSP Pfizer, Inc. Common Stock (PFE) [ST] P 02/12/2024 03/01/2024 $1,001 - $15,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\nSP QUALCOMM Incorporated -\nCommon Stock (QCOM) [ST]P 02/12/2024 03/01/2024 $1,001 - $15,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\nSP Unilever PLC Common Stock (UL)\n[ST]S 02/12/2024 03/01/2024 $1,001 - $15,000\nF\x00\x00\x00\x00\x00 S\x00\x00\x00\x00\x00: New\nD\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00: Part of my spouse's retirement portfolio.\n&quot;</code></p> </blockquote> <p>This pattern:</p> <pre><code> transactions = rx.findall(r&quot;(\([A-Z][\s\S]*?\$.*\$[,\d]+)&quot;, text) </code></pre> <p>Only returns 5 transactions--it returns the AMZN and PHG transactions together as one record.</p> <p>Why is it getting tripped up for security (PHG) here?</p>
<python><regex>
2024-05-02 20:05:05
0
1,702
Chris
78,421,427
14,471,688
Most efficient way to perform XOR operations between two matrices by using less memory
<p>I want to perform the XOR operations between two 2D numpy.ndarray by saving the memory usage. For each row of <strong>u_values</strong>, I want to perform the XOR operation to every row of <strong>v_values</strong> and so on. For example, if <strong>u_values</strong> has dimension <strong>(8,10)</strong> and <strong>v_values</strong> has dimension <strong>(6,10)</strong>, the result will end up with a dimension <strong>(48, 10)</strong>.</p> <p>Here is an example:</p> <pre><code>import numpy as np u_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, True, True, True, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, True, True, True, False, True, True, False, False, True], [True, True, True, True, True, False, False, False, False, True], [True, True, True, False, False, False, False, False, False, True], [True, False, False, False, False, False, False, False, False, True], [True, True, False, True, False, True, False, True, True, True]]) v_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, False, True, False, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, False, True, True, False, False, True, False, False, True], [True, True, False, True, True, False, False, False, False, True], [True, True, True, False, False, True, False, False, False, True]]) #What I have tried so far: u_reshaped = u_values[:, None, :] v_reshaped = v_values[None, :, :] xor_result = u_reshaped ^ v_reshaped xor_results = xor_result.reshape((-1, xor_result.shape[2])) </code></pre> <p>This is what I get a result with dimension <strong>(48, 10)</strong></p> <pre><code>[[False False False False False False False False False False] [False True False True False False True True True False] [False False False False True False False False False False] [False True False False True True True True True False] 
[False False True False False True False True True False] [False False False True True False False True True False] [False False False False False False True True True False] [False True False True False False False False False False] [False False False False True False True True True False] [False True False False True True False False False False] [False False True False False True True False False False] [False False False True True False True False False False] [False False False False True False False False False False] [False True False True True False True True True False] [False False False False False False False False False False] [False True False False False True True True True False] [False False True False True True False True True False] [False False False True False False False True True False] [False False False False True False True True True False] [False True False True True False False False False False] [False False False False False False True True True False] [False True False False False True False False False False] [False False True False True True True False False False] [False False False True False False True False False False] [False False False False False True False True True False] [False True False True False True True False False False] [False False False False True True False True True False] [False True False False True False True False False False] [False False True False False False False False False False] [False False False True True True False False False False] [False False False True True True False True True False] [False True False False True True True False False False] [False False False True False True False True True False] [False True False True False False True False False False] [False False True True True False False False False False] [False False False False False True False False False False] [False True True True True True False True True False] [False False True False True True True False False False] 
[False True True True False True False True True False] [False False True True False False True False False False] [False True False True True False False False False False] [False True True False False True False False False False] [False False True False True False False False False False] [False True True True True False True True True False] [False False True False False False False False False False] [False True True False False True True True True False] [False False False False True True False True True False] [False False True True False False False True True False]] </code></pre> <p>The final result that I want is a list of rows which each row contains the index of the <strong>True</strong> values</p> <pre><code>final_results = [np.nonzero(row)[0] + 1 for row in xor_results] </code></pre> <p>It worked pretty fine with a small example. However, for my real case, I encounter a very huge 2D numpy.ndarray with <strong>u_values</strong> with the size of <strong>(2788, 203769)</strong> and <strong>v_values</strong> with the size of <strong>(1813, 203769)</strong></p> <p>I ran out of memory for the XOR operations:</p> <blockquote> <p>numpy.core._exceptions.MemoryError: Unable to allocate 959. GiB for an array with shape (2788, 1813, 203769) and data type uint8</p> </blockquote> <p>I was wondering if there is a more straight forward and faster way to speed up this process by handling this kind of problem in order to get the list of the desired result as mentioned above (i.e. <strong>final_results</strong>) without computationally so expensive?</p>
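One memory-safe approach to the question above (a sketch, not the only option): process `u_values` one row at a time, so the full `(2788, 1813, 203769)` intermediate is never allocated. Each block then only needs `len(v_values) × n_cols` booleans — roughly 370 MB for the real sizes quoted here — and the pair ordering matches the `reshape(-1, n_cols)` layout of the broadcast version.

```python
import numpy as np

def xor_true_indices(u_values, v_values):
    """For every (u_row, v_row) pair, collect the 1-based indices where
    u_row ^ v_row is True, without materializing the full 3-D XOR array.
    Pair order matches (u[:, None, :] ^ v[None, :, :]).reshape(-1, n)."""
    results = []
    for u_row in u_values:                      # one u row at a time
        xor_block = u_row[None, :] ^ v_values   # shape (len(v_values), n_cols)
        for row in xor_block:
            results.append(np.nonzero(row)[0] + 1)
    return results
```

If even one block is too large, `np.packbits` on both arrays lets the XOR run on packed bytes (8× smaller) before unpacking only the rows whose indices are needed.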
<python><numpy><bit-manipulation><numpy-ndarray><numba>
2024-05-02 19:45:05
1
381
Erwin
78,421,355
15,452,168
Efficiently Finding Nearest Locations from Large Datasets in Python
<p>I am working on a project where I need to find the three nearest geographical locations (from a dataset of 1,000 entries) for each entry in another large dataset consisting of 500,000 addresses. Each dataset includes latitude and longitude coordinates. I've been using Python with geopy and pandas, but due to the size of the datasets I am not sure what could be the best approach.</p> <p>Here is a simplified version of my code (reproducible):</p> <pre><code>import pandas as pd from geopy.distance import geodesic # Sample data setup data = { &quot;Zip Code&quot;: [&quot;10115&quot;, &quot;20095&quot;, &quot;50667&quot;, &quot;80331&quot;, &quot;70173&quot;, &quot;40210&quot;, &quot;41460&quot;, &quot;45127&quot;, &quot;47051&quot;, &quot;40474&quot;], &quot;City&quot;: [&quot;Berlin&quot;, &quot;Hamburg&quot;, &quot;Cologne&quot;, &quot;Munich&quot;, &quot;Stuttgart&quot;, &quot;Düsseldorf&quot;, &quot;Neuss&quot;, &quot;Essen&quot;, &quot;Duisburg&quot;, &quot;Düsseldorf Airport&quot;], &quot;Latitude&quot;: [52.5323, 53.5503, 50.9367, 48.1372, 48.7833, 51.2217, 51.1981, 51.4556, 51.4344, 51.2895], &quot;Longitude&quot;: [13.3846, 9.9930, 6.9540, 11.5755, 9.1815, 6.7763, 6.6913, 7.0116, 6.7623, 6.7668] } df = pd.DataFrame(data) data_df2 = { &quot;Address&quot;: [&quot;my_location&quot;], &quot;ZipCode&quot;: [&quot;40468&quot;], &quot;Latitude&quot;: [51.28472232436951], &quot;Longitude&quot;: [6.7865234073914005] } df2 = pd.DataFrame(data_df2) # Distance calculation def calculate_distance(lat1, lon1, lat2, lon2): return geodesic((lat1, lon1), (lat2, lon2)).kilometers def find_nearest_spots(address_lat, address_lon): distances = df.apply(lambda row: calculate_distance(address_lat, address_lon, row['Latitude'], row['Longitude']), axis=1) nearest_indices = distances.nsmallest(3).index nearest_spots = df.loc[nearest_indices] return pd.Series({ 'Nearest_1_Spot': nearest_spots.iloc[0]['City'], 'Nearest_1_Dist': distances.iloc[nearest_indices[0]], 'Nearest_2_Spot': 
nearest_spots.iloc[1]['City'], 'Nearest_2_Dist': distances.iloc[nearest_indices[1]], 'Nearest_3_Spot': nearest_spots.iloc[2]['City'], 'Nearest_3_Dist': distances.iloc[nearest_indices[2]] }) df2[['Nearest_1_Spot', 'Nearest_1_Dist', 'Nearest_2_Spot', 'Nearest_2_Dist', 'Nearest_3_Spot', 'Nearest_3_Dist']] = df2.apply( lambda row: find_nearest_spots(row['Latitude'], row['Longitude']), axis=1) </code></pre> <p>If needed, I can also leverage free GPU libraries (if any are suitable).</p> <p>Thanks for any guidance.</p>
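A hedged alternative to per-pair `geodesic` calls: a vectorized haversine (great-circle) distance plus `np.argpartition`. Haversine is slightly less accurate than geodesic but is pure NumPy and orders of magnitude faster; the sketch below assumes plain lat/lon arrays and `k` smaller than the number of reference points.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def nearest_k(ref_lat, ref_lon, query_lat, query_lon, k=3):
    """Haversine distances from every query point to every reference point,
    returning (indices, distances) of the k closest references per query."""
    rlat = np.radians(np.asarray(ref_lat, dtype=float))[None, :]    # (1, R)
    rlon = np.radians(np.asarray(ref_lon, dtype=float))[None, :]
    qlat = np.radians(np.asarray(query_lat, dtype=float))[:, None]  # (Q, 1)
    qlon = np.radians(np.asarray(query_lon, dtype=float))[:, None]
    dlat, dlon = rlat - qlat, rlon - qlon
    a = np.sin(dlat / 2) ** 2 + np.cos(qlat) * np.cos(rlat) * np.sin(dlon / 2) ** 2
    dist = 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))              # (Q, R)
    idx = np.argpartition(dist, k, axis=1)[:, :k]    # k nearest, unordered
    # sort those k candidates so column 0 is the single nearest
    order = np.argsort(np.take_along_axis(dist, idx, axis=1), axis=1)
    idx = np.take_along_axis(idx, order, axis=1)
    return idx, np.take_along_axis(dist, idx, axis=1)
```

For 500,000 × 1,000 points the full distance matrix is about 4 GB of float64, so process the addresses in chunks (say 50,000 rows at a time). A `scipy.spatial.cKDTree` or scikit-learn `BallTree(metric='haversine')` is another common route.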
<python><pandas><geospatial><geopandas><geopy>
2024-05-02 19:26:21
1
570
sdave
78,421,334
1,451,346
Python can't see a new module in a directory it already imported from
<p>In this test, I create a new directory (deleting it if it already exists), add it to <code>sys.path</code>, add two simple Python files, and try to import them. I'm using Python 3.11.6 on Ubuntu 23.10.</p> <pre><code>import sys from pathlib import Path from shutil import rmtree d = Path('/tmp/fleen') try: rmtree(d) except FileNotFoundError: pass d.mkdir() sys.path = [str(d)] + sys.path (d / 'module1.py').write_text('x = 1') import module1 assert module1.x == 1 (d / 'module2.py').write_text('x = 2') print('file contains:', repr((d / 'module2.py').read_text())) import module2 assert module2.x == 2 </code></pre> <p>This prints <code>file contains: 'x = 2'</code>, as expected, but then raises</p> <pre><code>Traceback (most recent call last): File &quot;example.py&quot;, line 16, in &lt;module&gt; import module2 ModuleNotFoundError: No module named 'module2' </code></pre> <p>What's going on? Why can't <code>import</code> see the file? Is there some sort of cache of each directory in <code>sys.path</code> that I need to clear?</p> <p>Edited to add: Unpredictably, this sometimes works, as if there's a race condition of some kind. This is mysterious because all the IO here is supposed to flush all buffers before going on to the next statement.</p>
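The behaviour described above matches the directory-listing cache used by Python's path-based finders: each `sys.path` directory's contents are cached and revalidated by the directory's mtime, so a file created within the same mtime tick as a previous import can be invisible — which also explains the intermittency. The documented fix is `importlib.invalidate_caches()`; a minimal sketch (module names are arbitrary):

```python
import importlib
import sys
import tempfile
from pathlib import Path

d = Path(tempfile.mkdtemp())
sys.path.insert(0, str(d))

(d / 'fleen_mod1.py').write_text('x = 1')
import fleen_mod1  # first import caches this directory's listing
assert fleen_mod1.x == 1

(d / 'fleen_mod2.py').write_text('x = 2')
importlib.invalidate_caches()  # drop the finders' stale directory listings
import fleen_mod2              # now found reliably
assert fleen_mod2.x == 2
```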
<python><python-import><race-condition>
2024-05-02 19:22:50
2
3,525
Kodiologist
78,421,280
2,893,712
tzwhere returns empty even with forceTZ enabled
<p>I am using the tzwhere module to get the timezone based on coordinates. I recently found I can set <code>forceTZ=True</code>, but when I do this, it still does not return a timezone.</p> <p>My code is:</p> <pre><code>from tzwhere import tzwhere tz = tzwhere.tzwhere(forceTZ=True) tz.tzNameAt(18.45903590,-67.16367790) </code></pre> <p>However, this still returns None. Am I doing something wrong?</p>
<python><timezone>
2024-05-02 19:10:22
1
8,806
Bijan
78,421,247
179,014
Why do Python's sum() and Pandas sum() yield different results
<p>Why do Pandas <code>sum()</code> and Pythons <code>sum()</code> on a list of floating point numbers yield slightly different results generating a difference when rounding the result</p> <pre><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; from decimal import Decimal &gt;&gt;&gt; numbers = [0.495,1.495,2.495,3.495,4.495,5.495,6.495, 7.495,8.495, 9.495, 10.495] &gt;&gt;&gt; Decimal(sum(numbers)) Decimal('60.44500000000000028421709430404007434844970703125') &gt;&gt;&gt; round(Decimal(sum(numbers)),2) Decimal('60.45') &gt;&gt;&gt; Decimal(float(pd.DataFrame(numbers).sum())) Decimal('60.44499999999999317878973670303821563720703125') &gt;&gt;&gt; round(Decimal(float(pd.DataFrame(numbers).sum())),2) Decimal('60.44') </code></pre> <p>So despite using the same <code>round()</code> function, the slight difference in the <code>sum()</code> of the numbers between Pandas and Python is enough to yield a different result.</p> <p>I also noted that Pandas yields a different result, if the order of the numbers is reversed, in opposition to standard <code>sum()</code> in Python:</p> <pre><code>&gt;&gt;&gt; Decimal(sum(reversed(numbers))) Decimal('60.44500000000000028421709430404007434844970703125'). # the same as unreversed &gt;&gt;&gt; Decimal(float(pd.DataFrame(reversed(numbers)).sum())) Decimal('60.44499999999998607336237910203635692596435546875'). # different from unreversed </code></pre> <p>The difference in the result of the sum on revered and unreversed list is tiny. But I thought up to now, that floating point addition should be commutative. That doesn't seem to be the case for Pandas.</p> <p>So why does Pandas <code>sum()</code> yield different results than Python <code>sum()</code> for floating point numbers? Why does it yield a different result, when simply reverting the numbers? Is that a bug or is that a feature of floating point addition with Pandas? (Or is that related to my underlying hardware? 
I'm using Python 3.12 with Pandas 2.2.2 on Mac OS 14.1.1 with Apple M3 Pro Chip)</p>
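For context (an explanatory sketch, not a verbatim account of pandas internals): built-in `sum` accumulates strictly left to right, while NumPy — which pandas delegates to (or bottleneck, when installed) — uses pairwise/blocked summation. Floating-point addition is commutative but not associative, so different groupings accumulate different rounding error, and reversing the input changes which values land in which block. `math.fsum` gives a correctly rounded, order-independent result:

```python
import math
import numpy as np

numbers = [0.495, 1.495, 2.495, 3.495, 4.495, 5.495, 6.495,
           7.495, 8.495, 9.495, 10.495]

# Floating-point addition is commutative but NOT associative:
assert (0.1 + 0.2) + 0.3 != 0.1 + (0.2 + 0.3)

seq = sum(numbers)            # strict left-to-right accumulation
vec = float(np.sum(numbers))  # pairwise (blocked) summation, as NumPy uses
exact = math.fsum(numbers)    # correctly rounded, independent of order
```

So neither result is a bug; they are different orderings of the same inexact additions. When the rounded value matters, sum with `math.fsum` (or work in `Decimal` throughout) before rounding.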
<python><pandas><sum><floating-point><precision>
2024-05-02 19:04:24
1
11,858
asmaier
78,421,183
791,793
Trouble with prefilters in MongoDB Atlas and Langchain: 'Path needs to be indexed as token' error
<p>How do I use prefilters with langchain and mongodb atlas?</p> <p>Here's my example models:</p> <pre><code>class Question(Document): content = StringField(required=True) class ResponseFragment(Document): question = ReferenceField(Question, required=True) text = StringField(required=True) </code></pre> <p>Note: it's referencing the Question.</p> <p>I have <code>ResponseFragment.text</code> set up with vector search.</p> <p>But before I want to do a vector search, I want to prefilter by <code>Response.question</code> field, so as to reduce the performance impact.</p> <p>Here is my function:</p> <pre><code>def get_relevant_docs(query, store=&quot;theme&quot;, k=100, pre_filter_dict={}, as_dataframe=True, index_name=&quot;default&quot;, threshold=0.91): # Get the MongoDB collection collection = self.mongo_client[store] # Filter the documents based on the 'question' field question_id = str(ObjectId(self.question.id)) # Convert ObjectId to string pre_filter_dic = {&quot;question&quot;: question_id} # Create the MongoDBAtlasVectorSearch instance using the filtered documents vectorstore = MongoDBAtlasVectorSearch(collection, self.embeddings, text_key=&quot;text&quot;, embedding_key=&quot;embedding&quot;, index_name=index_name) # Perform similarity search on the filtered documents docs = vectorstore.similarity_search_with_score(query, k=k, pre_filter=pre_filter_dict) </code></pre> <p>Running this without the prefilter, it works fine (though does vector search on all documents instead of filtered ones).</p> <p>Running this WITH the question filter, I get this error:</p> <pre><code>Error: PlanExecutor error during aggregation :: caused by :: Path 'question' needs to be indexed as token, full error: {'ok': 0.0, 'errmsg': &quot;PlanExecutor error during aggregation :: caused by :: Path 'question' needs to be indexed as token&quot;, 'code': 8, 'codeName': 'UnknownError', '$clusterTime': {'clusterTime': Timestamp(1714675372, 2), 'signature': {'hash': 
b'\x08{\xaaU\xd9\xbb\xa1\xbf\xa8\x01p\xa4\xc8b\xbb\xef\xa9\xcb\xee\x05', 'keyId': 7327355885062193166}}, 'operationTime': Timestamp(1714675372, 2)} </code></pre> <p>I'm assuming the issue is that I need to add an index to Mongo Atlas, but where and how do I do that?</p> <p>I've tried doing this from my python code as...</p> <pre><code>db.collection.createIndex({ question: &quot;text&quot; }) </code></pre> <p>...with no luck.</p> <p>Any help would be MUCH appreciated.</p>
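The error means the pre-filter field must be declared inside the Atlas Search index definition itself — `db.collection.createIndex(...)` creates a regular database index, which Atlas Search does not consult. A sketch of the index JSON (edited in the Atlas UI under Search → your index → Edit Index Definition; the `dimensions` value and field names are assumptions based on the code above):

```json
{
  "mappings": {
    "dynamic": true,
    "fields": {
      "embedding": {
        "type": "knnVector",
        "dimensions": 1536,
        "similarity": "cosine"
      },
      "question": {
        "type": "token"
      }
    }
  }
}
```

With `question` mapped as `token`, the `pre_filter` on that field should be accepted by the aggregation stage.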
<python><mongodb><langchain><mongodb-atlas>
2024-05-02 18:51:31
1
721
user791793
78,421,170
246,241
Pydantic: How do I represent a list of objects as a dict (serialize a list as a dict)
<p>In Pydantic I want to represent a list of items as a dictionary. In that dictionary I want the key to be the <code>id</code> of the Item to be the key of the dictionary. I read the documentation on <a href="https://docs.pydantic.dev/latest/concepts/serialization/#custom-serializers" rel="nofollow noreferrer">Serialization</a>, on <a href="https://docs.pydantic.dev/2.3/usage/types/dicts_mapping/" rel="nofollow noreferrer">Mapping types</a> and on <a href="https://docs.pydantic.dev/2.3/usage/types/sequence_iterable/" rel="nofollow noreferrer">Sequences</a>.</p> <p>However, I didn't find a way to create such a representation.</p> <p>I want these in my api to have easy access from generated api clients.</p> <pre class="lang-py prettyprint-override"><code>class Item(BaseModel): uid: UUID = Field(default_factory=uuid4) updated: datetime = Field(default_factory=datetime_now) ... class ResponseModel: ... print ResponseModel.model_dump() #&gt; { &quot;67C812D7-B039-433C-B925-CA21A1FBDB23&quot;: { &quot;uid&quot;: &quot;67C812D7-B039-433C-B925-CA21A1FBDB23&quot;, &quot;updated&quot;: &quot;2024-05-02 20:24:00&quot; },{ &quot;D39A8EF1-E520-4946-A818-9FA8664F63F6&quot;: { &quot;uid&quot;: &quot;D39A8EF1-E520-4946-A818-9FA8664F63F6&quot;, &quot;updated&quot;:&quot;2024-05-02 20:25:00&quot; } } </code></pre>
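One way to get this shape in Pydantic v2 is a `RootModel` keyed by the item's id — a sketch (the `ItemMap.from_items` helper and the `updated` default are illustrative, not part of Pydantic's API):

```python
from datetime import datetime, timezone
from uuid import UUID, uuid4

from pydantic import BaseModel, Field, RootModel

class Item(BaseModel):
    uid: UUID = Field(default_factory=uuid4)
    updated: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))

class ItemMap(RootModel[dict[UUID, Item]]):
    """Serializes as a plain JSON object keyed by each item's uid."""
    @classmethod
    def from_items(cls, items: list[Item]) -> "ItemMap":
        return cls({item.uid: item for item in items})

items = [Item(), Item()]
payload = ItemMap.from_items(items).model_dump(mode="json")
# payload is a dict whose keys are the string UUIDs of the items
```

In `mode="json"` the `UUID` keys become strings, which matches what generated API clients expect; a `RootModel` like this should also be usable directly as a FastAPI `response_model`.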
<python><pydantic>
2024-05-02 18:48:20
2
11,661
tback
78,421,046
5,960,363
Change Poetry Python version
<h2>Context</h2> <ul> <li>I want Poetry to use Python 3.12.x</li> <li>Poetry is installed globally, and I've set Pyenv to default to 3.12 globally on my machine</li> <li>I'm using VSCode</li> <li>In the terminal in my project, I've done <code>poetry env use 3.12</code>, and <code>poetry install</code> which successfully created a venv and installed my dependencies</li> <li>Within VSCode I manually selected the python interpreter within that new venv</li> </ul> <h2>Unexpected behavior</h2> <p>When I do <code>poetry debug info</code> I get the following:</p> <pre><code>Poetry Version: 1.8.2 Python: 3.11.4 Virtualenv Python: 3.12.1 Implementation: CPython Path: /Users/user/Library/Caches/pypoetry/virtualenvs/project-LrBQS-9F-py3.12 Executable: /Users/user/Library/Caches/pypoetry/virtualenvs/project-LrBQS-9F-py3.12/bin/python Valid: True Base Platform: darwin OS: posix Python: 3.12.1 Path: /Users/user/.pyenv/versions/3.12.1 Executable: /Users/user/.pyenv/versions/3.12.1/bin/python3.12 </code></pre> <h2>Question</h2> <p><strong>Why does &quot;Poetry&quot; report version 3.11.4?</strong> How can I specify ^3.12?</p> <p>Version 3.11 is not my global Python, it's not the version of the interpreter my VSCode is using (or by extension the interpreter in the active venv), and I don't have any dependencies that require 3.11.</p>
<python><visual-studio-code><environment><python-poetry><pyenv>
2024-05-02 18:22:34
1
852
FlightPlan
78,421,032
955,091
How to properly clean up non-serializable states associated with a Ray object?
<p>Suppose I have a Ray actor that can create a Ray object that associates with some non-serializable states. In the following example, the non-serializable state is a temporary directory.</p> <pre class="lang-py prettyprint-override"><code>class MyObject: pass _non_serializable_states = {} @ray.remote class MyCreatorActor: def create(self): ref = ray.put(MyObject()) _non_serializable_states[ref] = tempfile.mkdtemp() return ref </code></pre> <p>I want the following <code>on_evicted</code> function to be triggered on the creator actor's worker process when <code>ref</code> is evicted, i.e., when the Ray distributed reference counting for the Ray object is zero.</p> <pre class="lang-py prettyprint-override"><code>def on_evicted(ref): shutil.rmtree(_non_serializable_states[ref]) del _non_serializable_states[ref] </code></pre> <p>How can I modify <code>MyCreatorActor.create</code> to register <code>on_evicted</code>?</p> <p>Is there a Ray API similar to <code>weakref.ref(obj, callback)</code> to register such a callback?</p>
<python><garbage-collection><distributed-computing><weak-references><ray>
2024-05-02 18:19:26
0
3,773
Yang Bo
78,420,943
480,118
Python concurrent.futures.ThreadPoolExecutor blocking
<p>I have the code below. read_sql is method on my DBReader class and it's using pd.read_sql. Im parallelizing sql selects to the Postgres sql table.</p> <pre><code>import pandas as pd def read_sql(self, sql, params = None): t1 = timeit.default_timer() with warnings.catch_warnings(): warnings.simplefilter(&quot;ignore&quot;, UserWarning) with psycopg.connect(self.connect_str, autocommit=True) as conn: df = pd.read_sql(sql, con = conn, params = params) self.log.debug(f'perf: {timeit.default_timer() - t1}') return df </code></pre> <p>the concurrent futures code is this:</p> <pre><code>import concurrent.futures as cf def test_thread_pool(): db_reader = DataReader() sql = &quot;select id, name from factor f where f.id = ANY(%s)&quot; threads = 20 id_partitions = np.array_split(list(range(1, 10000)), threads) id_partitions = [[p.tolist()] for p in id_partitions] with cf.ThreadPoolExecutor(max_workers=threads) as exec: futures = { exec.submit(db_reader.read_sql, sql, params=p): p for p in id_partitions } for future in cf.as_completed(futures): ids = futures[future] try: df = future.result() except Exception as exc: log.exception(f'error retrieving data for: {ids}') else: if df is not None: print(f'shape: {df.shape}') </code></pre> <p>The output of the debug line from read_sql looks like this:</p> <pre><code>perf: 0.7313497869981802 perf: 0.8116309550023288 perf: 3.401154975006648 perf: 5.22201336100261 perf: 6.325166654998611 perf: 6.338692951001576 perf: 6.573095380997984 perf: 6.5976604809984565 perf: 6.8282670119951945 perf: 7.291718505999597 perf: 7.4276196580030955 perf: 7.407097272000101 perf: 8.38801568299823 perf: 9.119963648998237 </code></pre> <p>You'll notice that it is incrementing - id have expected it to be all roughly around the same time - so it seems there is some blocking somewhere. Also, the time gap between the first two threads and 3rd is always about 2-3 seconds - why is that? 
I've also tried creating a new DbReader instance for each thread, but with the same effect.</p> <p>How do I solve this?</p>
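Monotonically increasing per-task times are the classic signature of a saturated shared resource: each thread's measured time includes waiting — for connection setup, for a free Postgres backend, or for the GIL while pandas materializes the DataFrame. A diagnostic sketch (a `sleep` stands in for the query; this is not the DB code above) that separates time-in-queue from actual work:

```python
import concurrent.futures as cf
import time

def timed_task(submit_t, work_seconds):
    """Stand-in for read_sql: report time spent queued vs. time spent working."""
    start = time.perf_counter()
    queued_for = start - submit_t      # waiting for a free worker thread
    time.sleep(work_seconds)           # the actual "query"
    worked_for = time.perf_counter() - start
    return queued_for, worked_for

with cf.ThreadPoolExecutor(max_workers=2) as ex:
    futures = [ex.submit(timed_task, time.perf_counter(), 0.2) for _ in range(4)]
    timings = [f.result() for f in futures]

queued = [q for q, _ in timings]
worked = [w for _, w in timings]
# With 2 workers and 4 tasks, two tasks start almost immediately and two
# wait ~0.2 s; every task's *work* time stays flat at ~0.2 s.
```

If, in the real code, the work portion itself grows as concurrency rises, the contention is on the connection or server side; a shared connection pool (e.g. `psycopg_pool.ConnectionPool`) instead of a fresh connection per call is the usual remedy, and fewer threads often finish sooner overall.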
<python><pandas><postgresql><python-multiprocessing><python-multithreading>
2024-05-02 17:58:59
0
6,184
mike01010
78,420,729
9,877,065
PyQt5: is it possible to make a QScrollArea scroll while dragging a widget inside it?
<p>I am trying to answer to <a href="https://stackoverflow.com/questions/78370120/how-to-drag-qlineedit-form-one-cell-of-qgridlayout-to-other-cell-in-pyqt/78415934#78415934">How to Drag QLineEdit form one cell of QGridLayout to other cell in pyqt?</a></p> <blockquote> <p>I have gridlayout with four cells (0,0),(0,1),(1,0),(1,1). Every cell is vertical layout with scrollbar. Initially only (0,0) cell contains QLineEdit in it. I want to drag and drop them in any one of the cells. How can I do it?</p> </blockquote> <p>I wrote this 3 files code :</p> <p><code>main.py</code> :</p> <pre><code>import sys from PyQt5.QtWidgets import QApplication, QWidget from myui_bis import Ui_Form class MainWidget(QWidget, Ui_Form): def __init__(self, *args, **kwargs): super().__init__() self.setupUi(self) self.pushButton.clicked.connect(self.printAcceptDrops) self.show() def printAcceptDrops(self): print('printAcceptDrops : ') print('Mainwidget : ', self.acceptDrops()) print('self.scrollAreaWidgetContents : ', self.scrollAreaWidgetContents.acceptDrops()) print('self.scrollAreaWidgetContents_2 : ', self.scrollAreaWidgetContents_2.acceptDrops()) print('self.scrollAreaWidgetContents_3 : ', self.scrollAreaWidgetContents_3.acceptDrops()) print('self.scrollAreaWidgetContents_4 : ', self.scrollAreaWidgetContents_4.acceptDrops()) if __name__ == '__main__': app = QApplication(sys.argv) w = MainWidget() sys.exit(app.exec_()) </code></pre> <p>my <code>mywidgets.py</code> :</p> <pre><code>from PyQt5.QtWidgets import QLineEdit, QWidget, QScrollArea, QScroller from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag, QPixmap, QCursor from PyQt5 import QtGui from PyQt5 import QtCore from PyQt5 import QtWidgets class MyLineEdit(QLineEdit): def dragEnterEvent(self, e): print('ENTER DRAG EVENT IN ', self.objectName()) e.accept() def dropEvent(self, e): widget = e.source() print('\n dropped on widget !!!!!!!!!!!!\n\n') if widget == self : return if widget.parent().layout().count() == 1: 
print('=============== 1 qlineedit') widget.parent().layout().parent().setAcceptDrops(True) widget.setParent(None) self.parent().layout().insertWidget(self.parent().layout().indexOf(self) , widget) def mouseMoveEvent(self, e): if e.buttons() == Qt.LeftButton: drag = QDrag(self) mime = QMimeData() drag.setMimeData(mime) pixmap = QPixmap(self.size()) self.render(pixmap) drag.setPixmap(pixmap) drag.exec_(Qt.MoveAction) class MyWidget(QWidget): def dragEnterEvent(self, e): print('ENTER DRAG EVENT IN ', self.objectName()) print('self.layout.count()', self.layout().count()) e.accept() def dropEvent(self, e): pos = e.pos() widget = e.source() print('pos : ', pos, 'mouse : ', QCursor.pos()) print('event.source : ', widget , 'event.source.parent() :', widget.parent()) print('event.source name: ', widget.objectName()) print('parent : ', self.parent() , self.parent().objectName()) print('self.layput() : ', self.layout()) print('self.objectName() : ', self.objectName()) if widget.parent().layout().count() == 1: print('=============== 1 WIDGET') widget.parent().layout().parent().setAcceptDrops(True) if self.layout().count() == 0 : widget.setParent(None) self.layout().addWidget(widget) self.setAcceptDrops(False) class MyScrollArea(QScrollArea): def __init__(self , parent = None): super(MyScrollArea, self).__init__(parent) </code></pre> <p>my <strong>PyQt5-Designer</strong> ui file translated with <strong>pyuic5</strong> <code>myui_bis</code> :</p> <pre><code>from PyQt5 import QtCore, QtWidgets from PyQt5.QtCore import Qt, QMimeData from PyQt5.QtGui import QDrag class Ui_Form(object): def setupUi(self, Form): Form.setObjectName(&quot;Form&quot;) Form.resize(698, 672) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Preferred, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(Form.sizePolicy().hasHeightForWidth()) Form.setSizePolicy(sizePolicy) Form.setAcceptDrops(False) self.gridLayout = 
QtWidgets.QGridLayout(Form) self.gridLayout.setObjectName(&quot;gridLayout&quot;) self.scrollArea_3 = MyScrollArea(Form) self.scrollArea_3.setAcceptDrops(True) self.scrollArea_3.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_3.setWidgetResizable(True) self.scrollArea_3.setObjectName(&quot;scrollArea_3&quot;) self.scrollAreaWidgetContents_3 = MyWidget() self.scrollAreaWidgetContents_3.setGeometry(QtCore.QRect(0, 0, 315, 303)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_3.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_3.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_3.setAcceptDrops(True) self.scrollAreaWidgetContents_3.setObjectName(&quot;scrollAreaWidgetContents_3&quot;) self.verticalLayout_2 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_3) self.verticalLayout_2.setSpacing(30) self.verticalLayout_2.setObjectName(&quot;verticalLayout_2&quot;) self.scrollArea_3.setWidget(self.scrollAreaWidgetContents_3) self.gridLayout.addWidget(self.scrollArea_3, 1, 0, 1, 1) self.scrollArea_4 = MyScrollArea(Form) self.scrollArea_4.setAcceptDrops(True) self.scrollArea_4.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_4.setWidgetResizable(True) self.scrollArea_4.setObjectName(&quot;scrollArea_4&quot;) self.scrollAreaWidgetContents_4 = MyWidget() self.scrollAreaWidgetContents_4.setGeometry(QtCore.QRect(0, 0, 315, 303)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_4.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_4.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_4.setAcceptDrops(True) 
self.scrollAreaWidgetContents_4.setObjectName(&quot;scrollAreaWidgetContents_4&quot;) self.verticalLayout_4 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_4) self.verticalLayout_4.setSpacing(30) self.verticalLayout_4.setObjectName(&quot;verticalLayout_4&quot;) self.scrollArea_4.setWidget(self.scrollAreaWidgetContents_4) self.gridLayout.addWidget(self.scrollArea_4, 1, 1, 1, 1) self.scrollArea_2 = MyScrollArea(Form) self.scrollArea_2.setAcceptDrops(True) self.scrollArea_2.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea_2.setWidgetResizable(True) self.scrollArea_2.setObjectName(&quot;scrollArea_2&quot;) self.scrollAreaWidgetContents_2 = MyWidget() self.scrollAreaWidgetContents_2.setGeometry(QtCore.QRect(0, 0, 315, 304)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents_2.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents_2.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents_2.setAcceptDrops(True) self.scrollAreaWidgetContents_2.setObjectName(&quot;scrollAreaWidgetContents_2&quot;) self.verticalLayout_3 = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents_2) self.verticalLayout_3.setSpacing(30) self.verticalLayout_3.setObjectName(&quot;verticalLayout_3&quot;) self.scrollArea_2.setWidget(self.scrollAreaWidgetContents_2) self.gridLayout.addWidget(self.scrollArea_2, 0, 1, 1, 1) self.scrollArea = MyScrollArea(Form) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Expanding) sizePolicy.setHorizontalStretch(1) sizePolicy.setVerticalStretch(1) sizePolicy.setHeightForWidth(self.scrollArea.sizePolicy().hasHeightForWidth()) self.scrollArea.setSizePolicy(sizePolicy) self.scrollArea.setAcceptDrops(True) self.scrollArea.setVerticalScrollBarPolicy(QtCore.Qt.ScrollBarAlwaysOn) self.scrollArea.setWidgetResizable(True) 
self.scrollArea.setObjectName(&quot;scrollArea&quot;) self.scrollAreaWidgetContents = MyWidget() self.scrollAreaWidgetContents.setGeometry(QtCore.QRect(0, -304, 315, 897)) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Preferred) sizePolicy.setHorizontalStretch(0) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.scrollAreaWidgetContents.sizePolicy().hasHeightForWidth()) self.scrollAreaWidgetContents.setSizePolicy(sizePolicy) self.scrollAreaWidgetContents.setAcceptDrops(False) self.scrollAreaWidgetContents.setObjectName(&quot;scrollAreaWidgetContents&quot;) self.verticalLayout = QtWidgets.QVBoxLayout(self.scrollAreaWidgetContents) self.verticalLayout.setSpacing(30) self.verticalLayout.setObjectName(&quot;verticalLayout&quot;) self.lineEdit_1 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_1.setObjectName(&quot;lineEdit_1&quot;) self.verticalLayout.addWidget(self.lineEdit_1) self.lineEdit_2 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_2.setObjectName(&quot;lineEdit_2&quot;) self.verticalLayout.addWidget(self.lineEdit_2) self.lineEdit_3 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_3.setObjectName(&quot;lineEdit_3&quot;) self.verticalLayout.addWidget(self.lineEdit_3) self.lineEdit_4 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_4.setObjectName(&quot;lineEdit_4&quot;) self.verticalLayout.addWidget(self.lineEdit_4) self.lineEdit_5 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_5.setObjectName(&quot;lineEdit_5&quot;) self.verticalLayout.addWidget(self.lineEdit_5) self.lineEdit_15 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_15.setObjectName(&quot;lineEdit_15&quot;) self.verticalLayout.addWidget(self.lineEdit_15) self.lineEdit_14 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_14.setObjectName(&quot;lineEdit_14&quot;) self.verticalLayout.addWidget(self.lineEdit_14) self.lineEdit_13 = MyLineEdit(self.scrollAreaWidgetContents) 
self.lineEdit_13.setObjectName(&quot;lineEdit_13&quot;) self.verticalLayout.addWidget(self.lineEdit_13) self.lineEdit_12 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_12.setObjectName(&quot;lineEdit_12&quot;) self.verticalLayout.addWidget(self.lineEdit_12) self.lineEdit_11 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_11.setObjectName(&quot;lineEdit_11&quot;) self.verticalLayout.addWidget(self.lineEdit_11) self.lineEdit_10 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_10.setObjectName(&quot;lineEdit_10&quot;) self.verticalLayout.addWidget(self.lineEdit_10) self.lineEdit_9 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_9.setObjectName(&quot;lineEdit_9&quot;) self.verticalLayout.addWidget(self.lineEdit_9) self.lineEdit_8 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_8.setObjectName(&quot;lineEdit_8&quot;) self.verticalLayout.addWidget(self.lineEdit_8) self.lineEdit_7 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_7.setObjectName(&quot;lineEdit_7&quot;) self.verticalLayout.addWidget(self.lineEdit_7) self.lineEdit_6 = MyLineEdit(self.scrollAreaWidgetContents) self.lineEdit_6.setObjectName(&quot;lineEdit_6&quot;) self.verticalLayout.addWidget(self.lineEdit_6) self.scrollArea.setWidget(self.scrollAreaWidgetContents) self.gridLayout.addWidget(self.scrollArea, 0, 0, 1, 1) self.pushButton = QtWidgets.QPushButton(Form) sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Minimum, QtWidgets.QSizePolicy.Fixed) sizePolicy.setHorizontalStretch(1) sizePolicy.setVerticalStretch(0) sizePolicy.setHeightForWidth(self.pushButton.sizePolicy().hasHeightForWidth()) self.pushButton.setSizePolicy(sizePolicy) self.pushButton.setObjectName(&quot;pushButton&quot;) self.gridLayout.addWidget(self.pushButton, 2, 0, 1, 1) self.gridLayout.setColumnStretch(1, 1) self.gridLayout.setRowStretch(1, 1) self.retranslateUi(Form) QtCore.QMetaObject.connectSlotsByName(Form) def retranslateUi(self, Form): _translate = 
QtCore.QCoreApplication.translate Form.setWindowTitle(_translate(&quot;Form&quot;, &quot;Form&quot;)) self.lineEdit_1.setText(_translate(&quot;Form&quot;, &quot;1&quot;)) self.lineEdit_2.setText(_translate(&quot;Form&quot;, &quot;2&quot;)) self.lineEdit_3.setText(_translate(&quot;Form&quot;, &quot;3&quot;)) self.lineEdit_4.setText(_translate(&quot;Form&quot;, &quot;4&quot;)) self.lineEdit_5.setText(_translate(&quot;Form&quot;, &quot;5&quot;)) self.lineEdit_15.setText(_translate(&quot;Form&quot;, &quot;6&quot;)) self.lineEdit_14.setText(_translate(&quot;Form&quot;, &quot;7&quot;)) self.lineEdit_13.setText(_translate(&quot;Form&quot;, &quot;8&quot;)) self.lineEdit_12.setText(_translate(&quot;Form&quot;, &quot;9&quot;)) self.lineEdit_11.setText(_translate(&quot;Form&quot;, &quot;10&quot;)) self.lineEdit_10.setText(_translate(&quot;Form&quot;, &quot;11&quot;)) self.lineEdit_9.setText(_translate(&quot;Form&quot;, &quot;12&quot;)) self.lineEdit_8.setText(_translate(&quot;Form&quot;, &quot;13&quot;)) self.lineEdit_7.setText(_translate(&quot;Form&quot;, &quot;14&quot;)) self.lineEdit_6.setText(_translate(&quot;Form&quot;, &quot;15&quot;)) self.pushButton.setText(_translate(&quot;Form&quot;, &quot;Check AcceptDrops&quot;)) from mywidgets import MyLineEdit, MyScrollArea, MyWidget </code></pre> <p>Now, the <code>QLineEdit</code> Widgets are added to the scrollAreas layouts when these are empty but then their acceptDrops attribute is set to False and the next <code>QLineEdit</code> widgets have to be dropped on the already present ones taking their layout-index positions within the layouts. 
When no widgets are left in a layout its acceptDrops value is set to True again.</p> <p><em>The only problem now is that I cannot scroll and drag at the same time in any of the four scrollAreas, but that glitch was present in the previous answer too.</em></p> <p><a href="https://i.sstatic.net/xFf6ObCi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFf6ObCi.png" alt="enter image description here" /></a></p> <p>I finally realized that in <strong>PyQt5</strong> (correct me if I am wrong) <code>ScrollArea</code> scroll only using the mouse wheel or by dragging the <code>ScrollBar</code> itself.</p> <p>So I was wondering if there is a way to accomplish the result of having the dragging scrolling the area at the same time ?</p> <p>I've heard about <code>QScroller</code> but I am not sure about what it does and tryed to to figure out what event I should filter when the drag is carried out but I wasn't able to find any clue, moreover I should be able to figure out the vertical component of the mouse drag to in some way forward the same to the <code>ScrollArea</code> or <code>ScrollArea.viewport</code> (I am not even sure I really understand what a viewport is!)</p>
<python><scroll><pyqt><pyqt5><qscrollarea>
2024-05-02 17:12:57
1
3,346
pippo1980
78,420,717
92,516
Assign globally-unique user_id to each created User in multi-process setup
<p>I'm attempting to distribute a list of pre-defined request payloads across all the <code>Users</code> which are spawned in my run. Conceptually I want to split my list of N requests across U Users, such that each request is only issued once to the server.</p> <p>For a single process I can achieve this by simply assigning a unique id to each User in the <code>__init__</code> method - e.g.:</p> <pre><code>class MyUser(): count = 0 def __init__(self, environment): super().__init__(environment) self.user_id = MyUser.count MyUser.count += 1 </code></pre> <p>However, when using multiple processes (via <code>--processes=P</code>) we obviously have multiple instances of <code>MyUser.count</code>, and hence the user_ids are not unique.</p> <p>Is there some kind of centralised mechanism I can use to assign IDs (or some existing unique ID)?</p>
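A common pattern here is to combine a per-process counter with a per-worker offset. The sketch below assumes a recent Locust release where each worker process can read its index (<code>environment.runner.worker_index</code>, 0..P-1); that attribute is an assumption to verify against your Locust version — on a single-process LocalRunner you can simply pass 0.

```python
from itertools import count

class UserIdAllocator:
    """Hands out globally unique ids across P worker processes.

    Worker w gets w, w + P, w + 2P, ... so no two workers can ever
    collide.  worker_index would come from the Locust runner (an
    assumption -- check it exists in your version); num_workers is the
    value passed to --processes.
    """

    def __init__(self, worker_index: int, num_workers: int):
        self._counter = count()          # per-process sequence 0, 1, 2, ...
        self.worker_index = worker_index
        self.num_workers = num_workers

    def next_id(self) -> int:
        return self.worker_index + self.num_workers * next(self._counter)
```

Each User's `__init__` would then call a module-level allocator's `next_id()` and use the result to pick its slice of the shared payload list (indexing like `payloads[user_id::total_users]` also works directly).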
<python><locust>
2024-05-02 17:09:30
1
9,708
DaveR
78,420,696
15,452,168
How to efficiently send 500,000 requests to Woosmap API for geocoding using multiprocessing in Python?
<p>I need to geocode a large dataset using the Woosmap API. I have a DataFrame containing around 500,000 addresses that I need to geocode as quickly as possible. Currently, I'm using the requests library to send requests one by one, but it's taking a significant amount of time. I've heard about multiprocessing in Python and I'm wondering how I can leverage it to speed up this process. Can someone provide guidance or a code example on how to send these requests concurrently using multiprocessing?</p> <p>Current Approach:</p> <pre><code>import requests url = &quot;https://api.woosmap.com/localities/geocode?address=Place%20Jeanne-d'Arc&amp;components=country%3AFR&amp;key=YOUR_PUBLIC_API_KEY&quot; payload={} headers = { 'Referer': 'http://localhost' } response = requests.request(&quot;GET&quot;, url, headers=headers, data=payload) print(response.text) </code></pre> <p>I appreciate any help or suggestions on how to optimize this process for faster geocoding of my dataset and getting the final response in a dataframe. Thanks</p>
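Since geocoding is I/O-bound, a thread pool (or asyncio) is usually a better fit than multiprocessing: the GIL is released while each request waits on the network. Below is a minimal sketch with an injectable `fetch` function; the commented-out Woosmap call mirrors the question's URL and headers, with `YOUR_PUBLIC_API_KEY` as a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

def geocode_all(addresses, fetch, max_workers=32):
    """Apply `fetch` to every address concurrently, preserving order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, addresses))

# A real `fetch` would wrap requests.get against the Woosmap endpoint:
#
# def fetch(address):
#     r = requests.get("https://api.woosmap.com/localities/geocode",
#                      params={"address": address,
#                              "key": "YOUR_PUBLIC_API_KEY"},
#                      headers={"Referer": "http://localhost"})
#     return r.json()
#
# results = geocode_all(df["address"].tolist(), fetch)
# then pd.json_normalize(results) lands everything in a DataFrame
```

Mind the API's rate limits: 500,000 unthrottled requests may get you blocked, and retry/backoff logic is deliberately left out of this sketch.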
<python><multithreading><asynchronous><multiprocessing><concurrent.futures>
2024-05-02 17:06:20
2
570
sdave
78,420,685
3,623,537
Why ctypes.CDLL is assuming long result type even though it's actually is long long?
<p>Consider the Python code below - calling the .dll's function seems to assume a <code>long</code> result type for some reason (the system is Windows x64, if that matters). Why is it doing this? Is there a way to disable it? Will it get in the way in other cases, e.g. if I'm going to pass <code>long long</code> arguments or initialize structs using <code>long long</code>s?</p> <pre class="lang-py prettyprint-override"><code>import ctypes hello_lib = ctypes.CDLL(&quot;./hello.dll&quot;) n = 13 # returns 1932053504 instead of 6227020800 print(hello_lib.factorial(n)) # for some reason result type is long, # though it's declared as long long in .dll # &lt;class 'ctypes.c_long'&gt; print(hello_lib.factorial.restype) hello_lib.factorial.restype = ctypes.c_longlong # return 6227020800, changing result type helped print(hello_lib.factorial(n)) </code></pre> <p>.dll code:</p> <pre class="lang-c prettyprint-override"><code>// choco install mingw // gcc -shared -o hello.dll hello.c #include &lt;stdio.h&gt; long long int factorial(int n) { if (n&gt;=1) { return n*factorial(n-1); } return 1; } </code></pre>
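This behaviour is documented: unless told otherwise, ctypes assumes every foreign function returns a C <code>int</code> (shown as <code>c_long</code> on platforms where the two types are the same size), so a 64-bit return value is silently truncated to its low 32 bits. Declaring the prototype once, right after loading the library, avoids it — the sketch below reuses the names from the question.

```python
import ctypes

def declare_factorial(lib):
    # lib would be ctypes.CDLL("./hello.dll") from the question; declaring
    # argtypes/restype up front prevents the default-int truncation.
    lib.factorial.argtypes = [ctypes.c_int]
    lib.factorial.restype = ctypes.c_longlong
    return lib

# The default 32-bit result type also explains the bogus number above:
# 13! = 6227020800 truncated to its low 32 bits is exactly 1932053504.
truncated = ctypes.c_int(6227020800 & 0xFFFFFFFF).value
```

The same rule applies to arguments: 64-bit values passed without declared `argtypes` can be mangled too, so it is worth declaring prototypes for every function you call.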
<python><dll><ctypes>
2024-05-02 17:04:19
1
469
FamousSnake
78,420,645
4,820,782
Worker failed to index functions with Azure Functions & FastAPI local using AsgiMiddleware
<p>Running into an error that has started after a reboot. I am testing using Azure Functions in conjunction with FastAPI based on: <a href="https://dev.to/manukanne/azure-functions-and-fastapi-14b6" rel="nofollow noreferrer">https://dev.to/manukanne/azure-functions-and-fastapi-14b6</a></p> <p>Code was operating and the test API call worked as expected. After a reboot of the machine and restarting VSCode I am now running into an issue when attempting to run locally.</p> <p>I worked through SO: <a href="https://stackoverflow.com/questions/76842742/trouble-deploying-fastapi-project-on-azure-functions-or-testing-locally">76842742</a> which is a slightly different setup but a similar type of error being seen.</p> <p>Virtual environment is running as expected and requirements.txt showing installed. Have run with verbose flag as well but no additional error messages were provided.</p> <p>Current error:<br /> Worker failed to index functions Result: Failure Exception: ValueError: Could not find top level function app instances in function_app.py.</p> <p>My function.json has the scriptFile: <code>__init__.py</code> which I thought would need to be function_app.py, but the error indicates it is looking for function_app.py regardless. I created a copy of my function_app.py and named it <code>__init__.py</code> as a potential fix but that produced the same error code.</p> <p>At this point I am at a loss as to why the index function is not being found.
Any thoughts around a potential solution?</p> <p>Found Python version 3.11.0 (py)<br /> Core Tools Version: 4.0.5611<br /> VS Code: 1.88.1</p> <p>function_app.py</p> <pre><code>import azure.functions as func from fastapi import FastAPI, Request from fastapi.responses import JSONResponse import logging app = FastAPI() @app.exception_handler(Exception) async def handle_exception(request: Request, exc: Exception): return JSONResponse( status_code=400, content={&quot;message&quot;: str(exc)}, ) @app.get(&quot;/&quot;) async def home(): return { &quot;info&quot;: &quot;Try the API path for success&quot; } @app.get(&quot;/v1/test/{test}&quot;) async def get_test( test: str,): return { &quot;test&quot;: test, } def main(req: func.HttpRequest, context: func.Context) -&gt; func.HttpResponse: return func.AsgiMiddleware(app).handle_async(req, context) </code></pre> <p>function.json</p> <pre><code>{ &quot;scriptFile&quot;: &quot;__init__.py&quot;, &quot;bindings&quot;: [ { &quot;authLevel&quot;: &quot;function&quot;, &quot;type&quot;: &quot;httpTrigger&quot;, &quot;direction&quot;: &quot;in&quot;, &quot;name&quot;: &quot;req&quot;, &quot;methods&quot;: [ &quot;get&quot;, &quot;post&quot; ], &quot;route&quot;: &quot;/{*route}&quot; }, { &quot;type&quot;: &quot;http&quot;, &quot;direction&quot;: &quot;out&quot;, &quot;name&quot;: &quot;$return&quot; } ] } </code></pre> <p>ADDITIONAL ADD<br /> host.json</p> <pre><code>{ &quot;$schema&quot;: &quot;https://raw.githubusercontent.com/Azure/azure-functions-host/dev/schemas/json/host.json&quot;, &quot;version&quot;: &quot;2.0&quot;, &quot;logging&quot;: { &quot;applicationInsights&quot;: { &quot;samplingSettings&quot;: { &quot;isEnabled&quot;: true, &quot;excludedTypes&quot;: &quot;Request&quot; } } }, &quot;extensionBundle&quot;: { &quot;id&quot;: &quot;Microsoft.Azure.Functions.ExtensionBundle&quot;, &quot;version&quot;: &quot;[4.*, 5.0.0)&quot; }, &quot;extensions&quot;: { &quot;http&quot;: { &quot;routePrefix&quot;: &quot;&quot; } 
} } </code></pre>
<python><azure-functions><fastapi>
2024-05-02 16:55:44
1
439
Jabberwockey
78,420,559
9,795,817
Performing recursive feature elimination with multiple performance metrics
<p>I am trying to make a classifier as parsimonious as possible. To do this, I am recursively dropping features using cross validation.</p> <p>In the context of my model, precision is the most important metric. However, I would also like to see how other metrics evolve as less features are fed to the model.</p> <p>In particular, I would like to evaluate the models' recall and F1 scores.</p> <pre class="lang-py prettyprint-override"><code>rfe = RFECV( estimator=clf, # An XGBClassifier instance step=1, min_features_to_select=1, cv=cv, # A StratifiedKFold instance scoring='precision', # scoring=['f1', 'precision', 'recall'], # throws error verbose=1, n_jobs=1 ) </code></pre> <p>I commented-out the line that passes a list of metrics to the <code>scoring</code> argument because it throws an <code>InvalidParameterError</code> (that is, it's not happy that I passed it a list).</p> <p>Is there a way to pass multiple metrics to an RFECV instance?</p>
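RFECV optimizes a single metric, but its <code>scoring</code> parameter accepts any callable with the signature <code>(estimator, X, y) -&gt; float</code>. One workaround (a sketch, not an official API) is a scorer that returns precision — so feature selection is still driven by precision — while logging recall and F1 to a side list at every evaluation. Metrics are computed by hand here to keep the example self-contained; in practice you would use <code>sklearn.metrics</code>.

```python
import numpy as np

history = []  # secondary metrics recorded at each CV evaluation

def precision_with_logging(estimator, X, y):
    """Scorer for RFECV(scoring=...): returns precision, logs recall/F1."""
    pred = np.asarray(estimator.predict(X))
    y = np.asarray(y)
    tp = int(np.sum((pred == 1) & (y == 1)))
    fp = int(np.sum((pred == 1) & (y == 0)))
    fn = int(np.sum((pred == 0) & (y == 1)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    history.append({"n_features": X.shape[1], "recall": recall, "f1": f1})
    return precision
```

Passing `scoring=precision_with_logging` to the RFECV call above and then grouping `history` by `n_features` after fitting shows how recall and F1 evolve as features are eliminated.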
<python><machine-learning><scikit-learn>
2024-05-02 16:37:03
1
6,421
Arturo Sbr
78,420,462
20,212,483
Why is array slower to index than list?
<p><a href="https://docs.python.org/2/faq/programming.html#how-do-you-make-an-array-in-python" rel="nofollow noreferrer">Programming FAQ for Python2</a> and <a href="https://docs.python.org//3/faq/programming.html#how-do-you-make-an-array-in-python" rel="nofollow noreferrer">Programming FAQ for Python3</a>:</p> <blockquote> <p>The <strong><code>array</code></strong> module also provides methods for creating arrays of fixed types with compact representations, but they <strong>are slower to index than lists</strong>.</p> </blockquote> <p>To get an element of a list, we need an additional dereferencing; whereas the elements of <code>array</code> are arranged much more compactly than <code>list</code> and they are <em>fixed</em> type, why is it slower instead?</p> <p>(Obviously, the Python developers have certain reasons to support this conclusion, otherwise they wouldn't have documented it.)</p> <hr /> <p>The <a href="https://discuss.python.org/t/why-is-array-slower-to-index-than-list/52401/3" rel="nofollow noreferrer">test code</a> is provided by Stefan. (Thank you.)</p>
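The claim is easy to check with <code>timeit</code>. The usual explanation: <code>lst[i]</code> just returns an existing object pointer, whereas <code>arr[i]</code> must box the raw C value into a fresh Python int on every access, and that construction step outweighs the benefit of compact storage. (A sketch; absolute numbers vary by machine and build.)

```python
import timeit
from array import array

lst = list(range(10_000))
arr = array("i", lst)          # same values, compact C int storage

# list indexing returns a ready-made object; array indexing has to
# construct a new Python int from the raw 4-byte value each time
t_list = timeit.timeit(lambda: lst[5000], number=200_000)
t_array = timeit.timeit(lambda: arr[5000], number=200_000)
print(f"list: {t_list:.4f}s  array: {t_array:.4f}s")
```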
<python>
2024-05-02 16:16:44
0
505
shynur
78,420,436
18,878,905
Dash Reloading main.py on button click undesirable
<p>My worry here is performance and unnecessary reloading of the main script in Dash. Here is my script. Please keep in mind that my &quot;fake_utils&quot; module reads files and is slow to load. Hence we want to load it once and for all.</p> <p>main.py</p> <pre><code>import dash_bootstrap_components as dbc from fake_utils import foo from uuid import uuid4 import diskcache from dash import Dash, html, DiskcacheManager, Input, Output launch_uid = uuid4() cache = diskcache.Cache(&quot;./cache&quot;) # Background callbacks require a cache manager background_callback_manager = DiskcacheManager( cache, cache_by=[lambda: launch_uid], expire=60, ) app = Dash(__name__, background_callback_manager=background_callback_manager, external_stylesheets=[dbc.themes.BOOTSTRAP]) # I use a background manager because I want to update my progress bar at the same time as running my loop in my main callback (run_process()) app.layout = html.Div([ html.Button('Run Process', id='run-button'), dbc.Progress(id=&quot;progress-component&quot;, value=0, max=10), html.Div(id='output') ]) @app.callback( Output('output', 'children'), Input('run-button', 'n_clicks'), background=True, progress=[ Output(&quot;progress-component&quot;, &quot;value&quot;), Output(&quot;progress-component&quot;, &quot;label&quot;), ], ) def run_process(set_progress, n_clicks): if n_clicks is None: return html.Div() results = [] for i in range(10): # Simulating a process step res = foo() results.append(res) set_progress((i+1, f'{i+1}')) #Then do stuff with results return 'Done' if __name__ == '__main__': app.run_server(debug=True, port=100) </code></pre> <p>fake_utils.py</p> <pre><code>import time print('Importing slow utils') def foo(): time.sleep(1) return 0 </code></pre> <p>The problem here is that every time I click the 'Run Process' button the whole main script gets reloaded (especially fake_utils, which takes 3min+).</p> <p>How can I prevent this behaviour?</p> <p>You can see 'Importing slow utils' printed multiple times
on the console. We want to minimize this.</p> <p>Thanks in advance.</p> <p>PS: A solution could be to avoid the background manager, as it seems to be the source of the problem (it creates some other processes, I think). But I would still like to be able to update my progress bar as I go along my loop. This is the <a href="https://dash.plotly.com/background-callbacks#example-5:-progress-bar-chart-graph" rel="nofollow noreferrer">documentation</a> of the progress bar in Dash.</p>
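The diskcache background-callback manager runs each background callback in a separate process, and every new process re-imports main.py along with everything it imports at module level. One mitigation (a sketch, not a Dash feature) is to move the expensive import out of module scope into a cached loader, so only a process that actually executes the callback pays the import cost, and only once per process:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_utils():
    """Deferred, per-process-cached import of the slow module."""
    # stand-in: `json` plays the role of the slow `fake_utils` module here
    import json as fake_utils
    return fake_utils
```

Inside `run_process` you would then call `slow_utils().foo()` rather than importing `foo` at the top of main.py; main.py itself still gets re-imported, but it becomes cheap.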
<python><performance><plotly-dash><reload>
2024-05-02 16:09:24
0
389
david serero
78,420,125
15,341,457
Python Relative and Absolute Imports not Working
<p>I have a file structure like this:</p> <pre><code>└── scraper ├── __init__.py └── converter ├── items.py ├──__init__.py └── spiders ├──__init__.py └──spider.py </code></pre> <p>I am in spider.py and I'm trying to import items.py, I'm doing it like this:</p> <pre><code>from .. import items </code></pre> <p>and getting the error:</p> <pre><code>ImportError: attempted relative import with no known parent package </code></pre> <p>so I thought of specifying the parent directory:</p> <pre><code>from ...converter import items </code></pre> <p>but got the same error.</p> <p>I then switched to absolute imports:</p> <pre><code>from scraper.converter import items </code></pre> <p>and got this error:</p> <pre><code>No module named 'scraper' </code></pre> <p>What the hell???</p>
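The error means Python loaded spider.py as a top-level script, so <code>__package__</code> is empty and <code>..</code> has nothing to climb; the absolute import fails for the same reason, because the directory containing <code>scraper</code> is not on <code>sys.path</code>. Running the file as a module with <code>-m</code> from the directory that contains <code>scraper</code> fixes both. A self-contained demonstration with a throwaway package (hypothetical names <code>pkg</code>/<code>sub</code>/<code>mod</code>):

```python
import pathlib
import subprocess
import sys
import tempfile

# Build a miniature version of the question's layout on the fly.
root = pathlib.Path(tempfile.mkdtemp())
sub = root / "pkg" / "sub"
sub.mkdir(parents=True)
(root / "pkg" / "__init__.py").write_text("")
(root / "pkg" / "items.py").write_text("VALUE = 42\n")
(sub / "__init__.py").write_text("")
(sub / "mod.py").write_text("from .. import items\nprint(items.VALUE)\n")

# Run as a module from the directory CONTAINING the package: works.
ok = subprocess.run([sys.executable, "-m", "pkg.sub.mod"],
                    cwd=root, capture_output=True, text=True)
# Run the file directly: __package__ is unset, the relative import fails.
bad = subprocess.run([sys.executable, str(sub / "mod.py")],
                     cwd=root, capture_output=True, text=True)
```

For the question's layout the equivalent command would be `python -m scraper.converter.spiders.spider`, run from the directory above `scraper`.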
<python><import><importerror><relative-import>
2024-05-02 15:12:47
2
332
Rodolfo
78,420,017
1,207,193
Apply a decorator over a method already decorated with @property in Python
<p>Why does the code below not work when a custom decorator is applied to a method already decorated with <code>@property</code>? (I am using Python 3.10.)</p> <pre><code># Define a decorator def my_decorator(func): def wrapper(*args, **kwargs): print(&quot;Decorator applied&quot;) return func(*args, **kwargs) return wrapper # Apply the decorator over the method class MyClass: @my_decorator @property def x(self): return 42 # Test the decorated method obj = MyClass() print(obj.x) # should output &quot;Decorator applied&quot; followed by &quot;42&quot; </code></pre> <p>But the print gives:<br /> <code>&lt;bound method my_decorator.&lt;locals&gt;.wrapper of &lt;__main__.MyClass object at 0x7e0b7943ebf0&gt;&gt;</code></p> <p>This is already a minimal example compared with my real scenario where my decorator is much more complex and also I have multiple decorators.</p>
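With <code>@my_decorator</code> on top, the property object is fed into <code>my_decorator</code>, which returns a plain function — and a plain function on a class is just a method, which is why <code>obj.x</code> evaluates to a bound method instead of running. Swapping the order so that <code>@property</code> is outermost keeps the descriptor protocol intact:

```python
def my_decorator(func):
    def wrapper(*args, **kwargs):
        print("Decorator applied")
        return func(*args, **kwargs)
    return wrapper

class MyClass:
    @property       # property must be OUTERMOST: it wraps the already-
    @my_decorator   # decorated function, so attribute access still goes
    def x(self):    # through the descriptor's __get__
        return 42

obj = MyClass()
print(obj.x)  # prints "Decorator applied", then 42
```

With multiple decorators the same rule holds: `@property` stays on top, and the stack beneath it is applied to the raw function bottom-up as usual.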
<python><python-decorators>
2024-05-02 14:56:14
1
7,852
imbr
78,419,990
6,546,694
Read the function definition of pipe in polars python
<p>The documentation of <a href="https://docs.pola.rs/py-polars/html/reference/dataframe/api/polars.DataFrame.pipe.html#polars.DataFrame.pipe" rel="nofollow noreferrer">pipe</a> in polars says:</p> <pre><code>DataFrame.pipe( function: Callable[Concatenate[DataFrame, P], T], *args: P.args, **kwargs: P.kwargs, ) → T </code></pre> <p>args and kwargs are forwarded to the callable passed as the function argument.</p> <p>I need help reading the <code>Callable[Concatenate[DataFrame, P], T],</code> part.</p>
<python><python-typing><python-polars>
2024-05-02 14:52:43
1
5,871
figs_and_nuts
78,419,981
490,324
Getting "ValueError Encountered text corresponding to disallowed special token '<|endoftext|>'" while running CrewAI
<p>I am running CrewAI with the Ollama Phi3 model, which runs well as long as I am not asked to generate sample Python code. Things get complicated when the Developer Agent passes a response to the HumanProxy Agent. Crew never finishes the work and continuously produces the following output:</p> <blockquote> <p>2024-05-02 14:24:07,361 - 129651365437888 - manager.py-manager:281 - WARNING: Error in TokenCalcHandler.on_llm_start callback: ValueError(&quot;Encountered text corresponding to disallowed special token '&lt;|endoftext|&gt;'.\nIf you want this text to be encoded as a special token, pass it to <code>allowed_special</code>, e.g. <code>allowed_special={'&lt;|endoftext|&gt;', ...}</code>.\nIf you want this text to be encoded as normal text, disable the check for this token by passing <code>disallowed_special=(enc.special_tokens_set - {'&lt;|endoftext|&gt;'})</code>.\nTo disable this check for all special tokens, pass <code>disallowed_special=()</code>.\n&quot;)</p> </blockquote> <p>Output from running the Crew:</p> <pre><code>&gt;&gt;&gt; result = crew.kickoff() [DEBUG]: == Working Agent: Senior Manager [INFO]: == Starting Task: Create python script that scrape the websites and saves the output into the text file fo further processing. Manage the work with your crew to answer the request. Check with Human if the response was helpful and if it is not satisfactory then try again. &gt; Entering new CrewAgentExecutor chain... Thought: I need to delegate tasks efficiently among my crew members to create a Python script for web scraping that saves output into a text file. Action: Delegate work to co-worker Action Input: {&quot;coworker&quot;: &quot;Python Developer&quot;, &quot;task&quot;: &quot;Create a Python script for web scraping and saving the output into a text file.&quot;, &quot;context&quot;: &quot;The task involves using libraries like BeautifulS Authorization (BeautifulSoup) or Scrapy to scrape websites. 
The data extracted should be saved in a structured format, such as JSON or CSV, which can then be processed further.&quot;} &gt; Entering new CrewAgentExecutor chain... Thought: To complete this task effectively, I need to select a suitable library for web scraping, define the target website URL, specify the data elements to be extracted, and then write Python code that will scrape the desired information and save it in JSON format. Final Answer: ```python import requests from bs4 import BeautifulSoup import json # Define the URL of the website you want to scrape url = 'https://example.com' # Replace with actual target website URL def web_scrape(target_url): try: response = requests.get(target_url) soup = BeautifulSoup(response.text, 'html.parser') # Assuming we want to extract data from elements with a specific class or id data_elements = [] # Replace this with the actual logic for selecting data elements for element in soup.select('your-selector'): # Replace 'your-selector' with appropriate CSS selector(s) extracted_data = { 'element_id': element['id'], # Assuming each element has an id attribute 'text': element.get_text() } data_elements.append(extracted_data) # Save the scraped data into a JSON file with open('output.json', 'w') as json_file: json.dump(data_elements, json_file, indent=4) except requests.RequestException as e: print(f&quot;An error occurred while making the request: {e}&quot;) # Execute the web scraping function web_scrape(url) ``` Replace `'https://example.com'` with your actual target website URL and adjust the data extraction logic according to the specific elements you need to scrape from that site.&lt;|end|&gt;&lt;|endoftext|&gt; &gt; Finished chain. 
```python import requests from bs4 import BeautifulSoup import json # Define the URL of the website you want to scrape url = 'https://example.com' # Replace with actual target website URL def web_scrape(target_url): try: response = requests.get(target_url) soup = BeautifulSoup(response.text, 'html.parser') # Assuming we want to extract data from elements with a specific class or id data_elements = [] # Replace this with the actual logic for selecting data elements for element in soup.select('your-selector'): # Replace 'your-selector' with appropriate CSS selector(s) extracted_data = { 'element_id': element['id'], # Assuming each element has an id attribute 'text': element.get_text() } data_elements.append(extracted_data) # Save the scraped data into a JSON file with open('output.json', 'w') as json_file: json.dump(data_elements, json_file, indent=4) except requests.RequestException as e: print(f&quot;An error occurred while making the request: {e}&quot;) # Execute the web scraping function web_scrape(url) ``` Replace `'https://example.com'` with your actual target website URL and adjust the data extraction logic according to the specific elements you need to scrape from that site.&lt;|end|&gt;&lt;|endoftext|&gt; </code></pre>
<python><ollama>
2024-05-02 14:51:36
1
2,808
m1k3y3
78,419,925
6,714,667
cannot import punkt nltk
<p>Due to security settings at work I cannot simply do nltk.download('punkt').</p> <p>I therefore printed out the <code>nltk.data.path</code> and found where it's looking, then added the zip file into that location, e.g. it was looking at C:\Users\name\Anaconda3\envs\ml\nltk_data, so I added the punkt.zip file in nltk_data.</p> <p>However, when I run:</p> <pre><code>nltk.download('punkt', download_dir='C:\\Users\\name\Anaconda3\\envs\\ml\\nltk_data') </code></pre> <p>I still get an error:</p> <pre><code>[nltk_data] Error loading punkt: &lt;urlopen error [Errno 11001] [nltk_data] getaddrinfo failed&gt; [nltk_data] Error loading punkt: &lt;urlopen error [Errno LookupError: ********************************************************************** Resource punkt not found. Please use the NLTK Downloader to obtain the resource:11001] [nltk_data] getaddrinfo failed&gt; </code></pre>
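Two separate things are going wrong here: <code>nltk.download()</code> always contacts the download server first (hence the <code>getaddrinfo failed</code> on a locked-down network), and NLTK looks for the resource at <code>tokenizers/punkt</code> under each <code>nltk.data.path</code> entry — a <code>punkt.zip</code> dropped at the top level of <code>nltk_data</code> is never found. Skip the download call entirely and put the zip inside a <code>tokenizers</code> subfolder. A sketch of the expected layout (built in a temp dir with a placeholder zip):

```python
import os
import tempfile
import zipfile

# Recreate the layout NLTK actually searches: <data dir>/tokenizers/punkt
# (either an unzipped folder, or punkt.zip inside `tokenizers`).
nltk_data = tempfile.mkdtemp()
tok_dir = os.path.join(nltk_data, "tokenizers")
os.makedirs(tok_dir)
target = os.path.join(tok_dir, "punkt.zip")
with zipfile.ZipFile(target, "w") as z:   # placeholder for the real archive
    z.writestr("punkt/README", "stand-in contents")

# With the real punkt.zip in place of the placeholder, this would succeed
# with no network access at all:
#   nltk.data.path.append(nltk_data)
#   nltk.data.find("tokenizers/punkt")
```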
<python><nltk>
2024-05-02 14:40:27
3
999
Maths12
78,419,842
3,277,064
Accessing a C++ static member variable of a class in Cython
<p>Consider how to access, in Cython, the following C++ static inline class member variable:</p> <pre class="lang-cpp prettyprint-override"><code>namespace my::name::space { class MyClass { public: static inline int LIFE { 42 }; }; } </code></pre> <p>A plausible <code>.pyx</code> file might be:</p> <pre><code>cdef extern from &quot;mycpp.h&quot; namespace &quot;my::name::space&quot;: cdef cppclass MyClass: int LIFE cdef class MyMgr: def my_failing_function(self) -&gt; None: cdef int life = MyClass.LIFE </code></pre> <p>But this elicits the following diagnostic during generated C++ compilation:</p> <pre><code>src/dotbug/dotbug-mre.cpp: In function ‘PyObject* __pyx_pf_6dotbug_5MyMgr_my_failing_function(__pyx_obj_6dotbug_MyMgr*)’: src/dotbug/dotbug-mre.cpp:2666:39: error: expected primary-expression before ‘.’ token 2666 | __pyx_t_1 = my::name::space::MyClass.LIFE; | ^ </code></pre> <p>This is not valid C++ syntax; the period should be the scoping operator, <code>::</code>. If you change it in the generated code, and build, it works properly.</p> <p>ChatGPT recommended (before it broke down and claimed Cython cannot access static member variables) that I use <code>static int LIFE</code> for my cdef definition, but this yields a different error, this one during cythonization:</p> <pre><code>cdef extern from &quot;mycpp.h&quot; namespace &quot;my::name::space&quot;: cdef cppclass MyClass: static int LIFE ^ ------------------------------------------------------------ src/dotbug/dotbug-mre.pyx:3:19: Syntax error in C variable declaration </code></pre> <p>Is there something I am missing with respect to syntax? I note that if I convert this into a non-static variable, and instantiate it, I can access it just fine.</p> <p>For a full set of buildable files, see <a href="https://gist.github.com/eewanco/f21cee1917532e87b1fe72a107728eae" rel="nofollow noreferrer">Github Gist</a></p>
<python><c++><cython>
2024-05-02 14:27:32
1
1,904
Vercingatorix
78,419,734
10,830,699
How set an autoreload on my `socketio.ASGIapp` plugged on my FastAPI app?
<p>I have an app combining FastAPI and SocketIO ASGI. On code change, the FastAPI part reloads but the SocketIO one does not. Is there a way to tell SocketIO to reload on change without using <code>watchdog</code>? By setting a flag, or something shorter to write than a watchdog command?</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI import socketio fastapi_app = FastAPI() sio = socketio.AsyncServer(async_mode=&quot;asgi&quot;) app = socketio.ASGIApp(sio, fastapi_app) if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(&quot;app.main:app&quot;, host=&quot;127.0.0.1&quot;, port=8888, log_level=&quot;debug&quot;, reload=True) </code></pre>
<python><fastapi><python-socketio>
2024-05-02 14:09:02
0
599
Bravo2bad
78,419,733
9,230,073
Firebase Admin SDK: Authentication failed using Compute Engine authentication due to unavailable metadata server
<p>When I try to send a push notification with firebase admin sdk from my own Django server. I get this error: <code>Authentication failed using Compute Engine authentication due to unavailable metadata server.</code></p> <ul> <li>I created a Google service account from my firebase console.</li> <li>Generated a new private key.</li> <li>Added the google.json file in my project and set the environment variable GOOGLE_APPLICATION_CREDENTIALS with the file path.</li> </ul> <p>Basically, I followed the steps indicated in &quot;Initialize the SDK in non-Google environments&quot; part of the official firebase documentation : <a href="https://firebase.google.com/docs/admin/setup#initialize_the_sdk_in_non-google_environments" rel="nofollow noreferrer">https://firebase.google.com/docs/admin/setup#initialize_the_sdk_in_non-google_environments</a></p> <p>When sending push notification with Firebase Admin SDK from my <strong>local</strong> Django server I successfully receive push notification on my mobile device. But it does not work from my <strong>remote</strong> server (AWS EC2 Ubuntu Instance).</p> <ul> <li>Python version: 3.10.12</li> <li>Django version: 5.0.4</li> </ul> <p>Do you have any idea what could be the cause ?</p>
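That error usually means Application Default Credentials never saw the service-account file, so the SDK fell back to the (non-existent on EC2) GCE metadata server. Two things worth checking on the box: whether the environment variable is actually visible to the Django process (systemd/gunicorn units do not inherit your shell's exports), and whether passing the credentials explicitly — <code>firebase_admin.initialize_app(credentials.Certificate(path))</code> — sidesteps the variable entirely. A minimal check for the first point:

```python
import os

# If this prints None inside the same process that runs Django, the
# variable was set in your shell but never exported to the service.
cred_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
print("GOOGLE_APPLICATION_CREDENTIALS =", cred_path)
if cred_path is not None:
    # The file must also exist and be readable by the Django user.
    print("file exists:", os.path.isfile(cred_path))
```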
<python><amazon-web-services><firebase-cloud-messaging><firebase-admin>
2024-05-02 14:08:40
0
347
Fravel
78,419,689
14,471,688
Faster way to perform XOR operations between two 2D numpy.ndarray with memory efficiency
<p>I want to perform the XOR operations between two 2D numpy.ndarray by saving the memory usage. For each row of <strong>u</strong>, I want to perform the XOR operation to every row of <strong>v</strong> and so on. For example, if u has dimension <strong>(8,10)</strong> and v has dimension <strong>(6,10)</strong>, the result will end up with a dimension <strong>(48, 10)</strong>.</p> <p>Here is a simple example to demonstrate that:</p> <pre><code>import numpy as np u_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, True, True, True, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, True, True, True, False, True, True, False, False, True], [True, True, True, True, True, False, False, False, False, True], [True, True, True, False, False, False, False, False, False, True], [True, False, False, False, False, False, False, False, False, True], [True, True, False, True, False, True, False, True, True, True]]) v_values = np.array([[True, True, True, True, True, True, False, True, True, True], [True, False, True, False, True, True, True, False, False, True], [True, True, True, True, False, True, False, True, True, True], [True, False, True, True, False, False, True, False, False, True], [True, True, False, True, True, False, False, False, False, True], [True, True, True, False, False, True, False, False, False, True]]) #First this is what I tried: u_reshaped = u_values[:, None, :] v_reshaped = v_values[None, :, :] xor_result = u_reshaped ^ v_reshaped xor_results = xor_result.reshape((-1, xor_result.shape[2])) </code></pre> <p>This is what I get a result with dimension <strong>(48, 10)</strong></p> <pre><code>[[False False False False False False False False False False] [False True False True False False True True True False] [False False False False True False False False False False] [False True False False True True True True True False] [False False True False False True 
False True True False] [False False False True True False False True True False] [False False False False False False True True True False] [False True False True False False False False False False] [False False False False True False True True True False] [False True False False True True False False False False] [False False True False False True True False False False] [False False False True True False True False False False] [False False False False True False False False False False] [False True False True True False True True True False] [False False False False False False False False False False] [False True False False False True True True True False] [False False True False True True False True True False] [False False False True False False False True True False] [False False False False True False True True True False] [False True False True True False False False False False] [False False False False False False True True True False] [False True False False False True False False False False] [False False True False True True True False False False] [False False False True False False True False False False] [False False False False False True False True True False] [False True False True False True True False False False] [False False False False True True False True True False] [False True False False True False True False False False] [False False True False False False False False False False] [False False False True True True False False False False] [False False False True True True False True True False] [False True False False True True True False False False] [False False False True False True False True True False] [False True False True False False True False False False] [False False True True True False False False False False] [False False False False False True False False False False] [False True True True True True False True True False] [False False True False True True True False False False] [False True True True False True False 
True True False] [False False True True False False True False False False] [False True False True True False False False False False] [False True True False False True False False False False] [False False True False True False False False False False] [False True True True True False True True True False] [False False True False False False False False False False] [False True True False False True True True True False] [False False False False True True False True True False] [False False True True False False False True True False]] </code></pre> <p>It worked just fine with this toy example. However, when I applied it with two big 2D numpy.ndarray I ran out of memory. I even tried to use the <em>np.packbits</em> function to see if I can save some memory:</p> <pre><code>u_values_packed = np.packbits(u_values, axis=1) v_values_packed = np.packbits(v_values, axis=1) result_packed = u_values_packed[:, None, :] ^ v_values_packed[None, :, :] result_packed = result_packed.reshape((-1, result_packed.shape[2])) result_unpacked = np.unpackbits(result_packed, axis=1)[:, :u_values.shape[1]] </code></pre> <p>When I performed my real use case, I got this error:</p> <blockquote> <p>numpy.core._exceptions.MemoryError: Unable to allocate 959. GiB for an array with shape (2788, 1813, 203769) and data type uint8</p> </blockquote> <p>I even tried to perform the XOR operations using chunk by storing temporary results into a memmap file which takes the <strong>max_memory_usage</strong> in GB as one of its parameters. 
Here is my code:</p> <pre><code>import math def calculate_number_chunk(u_value, v_value, max_memory_usage): dim_u = u_value.shape[0] dim_v = v_value.shape[0] column_dimension = u_value.shape[1] chunk_size_square = (max_memory_usage * math.pow(1024, 3)) / column_dimension return math.floor(math.sqrt(chunk_size_square)), dim_u, dim_v, column_dimension def chunk_xor(u_value, v_value, max_memory_usage, memmap_filename): chunk_size, dim_u, dim_v, col_dimension = calculate_number_chunk(u_value, v_value, max_memory_usage) # create memmap file a = np.memmap(memmap_filename, dtype='bool', mode='w+', shape=(dim_u * dim_v, u_value.shape[1])) start = 0 end = 0 for i in range(0, dim_u, chunk_size): for j in range(0, dim_v, chunk_size): # u matrix u_chunk = u_value[i:i + chunk_size] # v matrix v_chunk = v_value[j:j + chunk_size] res_xor = u_chunk[:, None, :] ^ v_chunk[None, :, :] res_xors = res_xor.reshape((-1, res_xor.shape[2])) end += res_xors.shape[0] a[start:end, :] = res_xors start = end return dim_u, dim_v, chunk_size, col_dimension </code></pre> <p>I can then manipulate it from the memmap file as follows:</p> <pre><code>dim_u, dim_v, chunk_size, col_dimension = chunk_xor(u_value=u, v_value=v, max_memory_usage=max_mem_usage, memmap_filename=memmap_file_directory) # read results from the mem-map file using chunk size xor_memmap = np.memmap(memmap_file_directory, dtype='bool', mode='r', shape=(dim_u * dim_v, u.shape[1])) start = 0 end = 0 for idx in range(0, dim_u * dim_v, chunk_size): end += chunk_size xor_results = xor_memmap[start:end, :] start = end # xor_results will process for further from here ... </code></pre> <p>I am curious whether there is a better and/or faster way that avoids converting to a 3D numpy.ndarray before the XOR operation. If not, how can I speed up this kind of operation using chunks? I have up to 512 GB of RAM available.</p> <p>Any recommendation will really help.</p>
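An alternative sketch that never materializes the full (|u|·|v|, d) result: yield it in row blocks and let the consumer process each block as it streams past. Peak memory is then chunk_rows × |v| × d bytes regardless of the total output size, and no memmap file is needed unless the results must persist.

```python
import numpy as np

def pairwise_xor_chunks(u, v, chunk_rows=256):
    """Yield the pairwise XOR of rows of u and v in blocks.

    Each yielded block holds chunk_rows * len(v) rows (fewer in the last
    block), in the same order as the full broadcasted result, so
    concatenating every block reproduces it exactly.
    """
    for i in range(0, u.shape[0], chunk_rows):
        block = u[i:i + chunk_rows, None, :] ^ v[None, :, :]
        yield block.reshape(-1, u.shape[1])

rng = np.random.default_rng(0)
u = rng.random((8, 10)) > 0.5
v = rng.random((6, 10)) > 0.5
blocks = list(pairwise_xor_chunks(u, v, chunk_rows=3))
full = np.concatenate(blocks)   # only done here to check the sketch
```

If the downstream step can work on packed bits, applying `np.packbits` to u and v once up front and XOR-ing the uint8 arrays inside the loop shrinks each block a further 8×, as attempted in the question — the key is to keep consuming blocks instead of ever concatenating them.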
<python><numpy><bit-manipulation><numba><bitwise-xor>
2024-05-02 14:01:36
0
381
Erwin