Dataset schema (column, dtype, observed min to max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30
78,790,646
3,240,688
importing airflow automatically creates airflow directory
<p>I've noticed that whenever I import airflow in Python, it automatically creates an airflow directory in my home directory. Literally just this:</p> <pre><code>$ python Python 3.11.9 | packaged by conda-forge | (main, Apr 19 2024, 18:36:13) [GCC 12.3.0] on linux Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; import os &gt;&gt;&gt; os.path.exists(os.path.join(os.path.expanduser('~'), 'airflow')) False &gt;&gt;&gt; import airflow &gt;&gt;&gt; os.path.exists(os.path.join(os.path.expanduser('~'), 'airflow')) True </code></pre> <p>I understand that's because I don't have AIRFLOW_HOME set, so airflow appears to use the home directory as the default location when that env var is unset.</p> <p>But why is it doing that in the first place? I'm just importing the airflow library, without even beginning to do anything.</p> <p>Is it possible to disable this behavior?</p>
<python><airflow>
2024-07-24 21:27:32
1
1,349
user3240688
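A hedged sketch of the usual workaround: Airflow reads `AIRFLOW_HOME` at import time, so pointing it at a disposable location before the import keeps `~/airflow` from ever being created (the scratch path below is purely illustrative).

```python
import os
import tempfile

# Hypothetical scratch location; any writable path works.
scratch = os.path.join(tempfile.gettempdir(), "airflow_scratch")

# AIRFLOW_HOME is consulted when the package is imported, so set it first.
os.environ["AIRFLOW_HOME"] = scratch

# import airflow  # would now initialise under `scratch`, not ~/airflow
print(os.environ["AIRFLOW_HOME"])
```

The directory creation itself cannot be disabled from within Airflow, as far as I know; redirecting it is the practical answer.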
78,790,594
11,793,491
How to replace text using a regex in a Pandas dataframe
<p>I have the following dataset:</p> <pre class="lang-py prettyprint-override"><code>meste = pd.DataFrame({'a':['06/33','40/2','05/22']}) </code></pre> <pre class="lang-none prettyprint-override"><code> a 0 06/33 1 40/2 2 05/22 </code></pre> <p>And I want to remove the leading 0s in the text (06/33 to 6/33 for example). I tried this, without success:</p> <pre class="lang-py prettyprint-override"><code>meste['a'] = meste['a'].str.replace(r&quot;(^0?)&quot;,&quot;&quot;) </code></pre> <pre class="lang-none prettyprint-override"><code> a 0 06/33 1 40/2 2 05/22 </code></pre> <p>I also tried with <code>meste['a'].str.replace(r&quot;(^0?)&quot;,&quot;&quot;)</code>, but it doesn't work. This is the expected result:</p> <pre class="lang-none prettyprint-override"><code> a 0 6/33 1 40/2 2 5/22 </code></pre> <p>What am I doing wrong in the regex statement?</p>
<python><pandas><regex>
2024-07-24 21:09:17
1
2,304
Alexis
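The pattern logic can be checked with the stdlib `re` module: `^0?` only ever touches a zero at the very start of the string, while a word-boundary pattern also catches the zero after the slash. Two hedged notes: recent pandas defaults `str.replace` to `regex=False` (likely why the output was completely unchanged), so in pandas the same pattern would go through `.str.replace(r"\b0+(\d)", r"\1", regex=True)`.

```python
import re

# \b0+(\d): one or more zeros at a word boundary, keeping the digit after them
pattern = re.compile(r"\b0+(\d)")

vals = ["06/33", "40/2", "05/22"]
out = [pattern.sub(r"\1", v) for v in vals]
print(out)  # ['6/33', '40/2', '5/22']
```

The word boundary is what keeps `40/2` intact: the `0` in `40` is preceded by a word character, so the pattern does not match there.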
78,790,306
5,537,840
Python Version Mismatch in Jupyter Notebook Environment
<p>I am experiencing a discrepancy between the Python version reported in the Jupyter notebook startup banner and the version shown when I query <code>python --version</code> within the notebook. The startup banner indicates Python 3.11.9, but when I run <code>!python --version</code>, it returns Python 3.11.7.</p> <p><strong>Steps I did:</strong></p> <ol start="0"> <li>base conda has 3.11.7 version</li> <li>conda create --prefix ~/.conda/pypypy python=3.11.9</li> <li>conda activate ~/.conda/pypypy</li> <li>python -m ipykernel install --user --name pypypy</li> </ol> <p><strong>Expected Behavior:</strong> The Python version queried within the notebook should match the version indicated in the startup banner. But in reality there is a mismatch: <a href="https://i.sstatic.net/5fkdT5HO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5fkdT5HO.png" alt="enter image description here" /></a></p> <p><strong>Troubleshooting Done:</strong></p> <ul> <li>Checked the environment's kernel specification file (<code>kernel.json</code>) to ensure it points to the correct Python executable.</li> </ul> <pre><code>(/home/karzymatov/.conda/pypypy) karzymatov@55f26f77b14d:~/mtb_join$ jupyter kernelspec list Available kernels: python3 /home/karzymatov/.conda/pypypy/share/jupyter/kernels/python3 pypypy /home/karzymatov/.local/share/jupyter/kernels/pypypy </code></pre> <pre><code>cat /home/karzymatov/.local/share/jupyter/kernels/pypypy/kernel.json { &quot;argv&quot;: [ &quot;/home/karzymatov/.conda/pypypy/bin/python&quot;, &quot;-Xfrozen_modules=off&quot;, &quot;-m&quot;, &quot;ipykernel_launcher&quot;, &quot;-f&quot;, &quot;{connection_file}&quot; ], &quot;display_name&quot;: &quot;pypypy&quot;, &quot;language&quot;: &quot;python&quot;, &quot;metadata&quot;: { &quot;debugger&quot;: true } </code></pre> <ul> <li>Ensured that the notebook is using the intended kernel.</li> </ul> <p>Could someone help explain why this discrepancy occurs and how to ensure that the notebook uses 
the correct Python version as indicated in the startup banner?</p>
<python><jupyter-notebook><conda><jupyter><jupyter-lab>
2024-07-24 19:50:37
1
9,269
Kenenbek Arzymatov
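One likely explanation, offered as a sketch: `!python --version` spawns a subshell whose `python` comes from the notebook server's PATH (the base 3.11.7 environment), not from the interpreter listed in `kernel.json`. Querying the running kernel directly removes the ambiguity:

```python
import sys

# These describe the kernel process itself, so they cannot disagree with the
# startup banner the way a `!python --version` subshell can.
print(sys.executable)          # path of the interpreter running the kernel
print(sys.version.split()[0])  # e.g. '3.11.9'
```

If `sys.executable` points into the `pypypy` environment, the kernel is correct and only the shelled-out `python` differs; running `!{sys.executable} --version` instead (an assumption about the intended check) would make the two agree.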
78,790,220
1,256,347
MATLAB does not read parquet file, simply says "Unable to read Parquet file". How can I still read it?
<p>I have created a parquet file using Python <a href="https://pola.rs/" rel="nofollow noreferrer">polars</a>' <a href="https://docs.pola.rs/api/python/stable/reference/api/polars.DataFrame.write_parquet.html" rel="nofollow noreferrer"><code>.write_parquet</code></a> method. It can be read back by Python without a problem and MATLAB can also read the information <em>about</em> the file using <a href="https://www.mathworks.com/help/matlab/ref/matlab.io.parquet.parquetinfo.html" rel="nofollow noreferrer"><code>parquetinfo</code></a> without a problem.</p> <p>However, when I run <a href="https://www.mathworks.com/help/matlab/ref/parquetread.html" rel="nofollow noreferrer"><code>parquetread</code></a> in MATLAB to actually load the data, it fails quickly with the error &quot;Unable to read Parquet file&quot; without further details.</p> <p>I've searched around, and only found <a href="https://www.mathworks.com/matlabcentral/answers/1956304-parquetread-gives-unable-to-read-parquet-file-whereas-parquetinfo-is-working-fine" rel="nofollow noreferrer">this Mathworks forum post</a> without a solution.</p> <p>How can I create a parquetfile using Python that is readable by MATLAB?</p>
<python><matlab><parquet>
2024-07-24 19:24:53
1
2,595
Saaru Lindestøkke
78,790,186
8,941,248
How to find the number of rows within a group since a nonzero value occurred for a pandas dataframe?
<p>I have a dataframe like so:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>ID</th> <th>value</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>0</td> </tr> <tr> <td>A</td> <td>1</td> </tr> <tr> <td>A</td> <td>0</td> </tr> <tr> <td>A</td> <td>0</td> </tr> <tr> <td>B</td> <td>0</td> </tr> <tr> <td>B</td> <td>0</td> </tr> <tr> <td>B</td> <td>2</td> </tr> <tr> <td>B</td> <td>0</td> </tr> <tr> <td>B</td> <td>4</td> </tr> <tr> <td>B</td> <td>0</td> </tr> </tbody> </table></div> <p>I want to add a column that counts the number of rows since a nonzero value occurred within the group (in this case <code>ID</code>). The result would look like:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>ID</th> <th>value</th> <th>num_rows_since_nonzero</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>0</td> <td>0</td> </tr> <tr> <td>A</td> <td>1</td> <td>0</td> </tr> <tr> <td>A</td> <td>0</td> <td>1</td> </tr> <tr> <td>A</td> <td>0</td> <td>2</td> </tr> <tr> <td>B</td> <td>0</td> <td>0</td> </tr> <tr> <td>B</td> <td>0</td> <td>0</td> </tr> <tr> <td>B</td> <td>2</td> <td>0</td> </tr> <tr> <td>B</td> <td>0</td> <td>1</td> </tr> <tr> <td>B</td> <td>4</td> <td>0</td> </tr> <tr> <td>B</td> <td>0</td> <td>1</td> </tr> </tbody> </table></div>
<python><pandas>
2024-07-24 19:15:16
2
521
mdrishan
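One possible approach (a sketch, not necessarily the accepted answer): mark nonzero rows, cumulative-sum them per `ID` so each nonzero value starts a new block, count rows inside each block, and zero out the stretch before a group's first nonzero value.

```python
import pandas as pd

df = pd.DataFrame({
    "ID": list("AAAABBBBBB"),
    "value": [0, 1, 0, 0, 0, 0, 2, 0, 4, 0],
})

nonzero = df["value"].ne(0)
# block id increments at every nonzero row, independently per group
block = nonzero.groupby(df["ID"]).cumsum()
# position within the current block = rows since that block's nonzero row
counter = df.groupby([df["ID"], block]).cumcount()
# rows before the first nonzero in a group (block 0) stay at 0
df["num_rows_since_nonzero"] = counter.where(block.gt(0), 0)
print(df["num_rows_since_nonzero"].tolist())  # [0, 0, 1, 2, 0, 0, 0, 1, 0, 1]
```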
78,789,998
12,098,671
Unable to make a remote API call to a Flask app (for MySQL connection) inside my Apache server
<p>I have an apache server running on Alma Linux. I have the flask code setup to accept API calls from remote connections. So my API call hits the flask which then connects to MySQL database.</p> <p>When I try to run this database connection code locally inside the server, it works fine. But when I try to hit flask app via remote API call, I get</p> <p><code>Database error: 2003 (HY000): Can't connect to MySQL server on 'localhost:3306' (13)</code></p> <p>This is very strange since I can connect to the database locally inside the server.</p> <p>I also wrote a dummy endpoint.</p> <pre><code>@app.route('/') def test_endpoint(): return 'Hello World' </code></pre> <p>This endpoint works from the remote API call.</p> <p>My code for connecting to database</p> <pre><code>import mysql.connector from flask import Flask app = Flask(__name__) df_config = {connection parameters} @app.route('/db_test', methods=['GET','POST']) def db_test(): try: conn = mysql.connector.connect(**db_config) return Statement except Error as e: return jsonify({'success': False, 'message': f&quot;Database error: {e}. Contact the researcher&quot;}), 500 </code></pre> <p>I checked MySQL is running on port 3306 and I have the necessary database permissions. I also tried commenting &quot;bind-address = 127.0.0.1&quot; in MySQL config file.</p> <p>Kindly help me with a fix.</p>
<python><mysql><linux><apache><flask>
2024-07-24 18:33:40
1
759
Yash Khasgiwala
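A diagnostic hint, hedged: the trailing `(13)` in the MySQL client error is an OS errno, and 13 is `EACCES` (permission denied) rather than a wrong host or port, which fits "works locally, fails under Apache". On Alma/RHEL-family systems that pattern usually points at SELinux denying the Apache/WSGI process outbound sockets, commonly addressed with `setsebool -P httpd_can_network_connect_db 1` (verify for your setup before applying).

```python
import errno
import os

# Error 2003 "(13)" embeds the operating-system errno; decode it:
code = 13
print(code == errno.EACCES)       # True on Linux
print(os.strerror(errno.EACCES))  # 'Permission denied'
```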
78,789,977
2,153,235
PyPlot plots are bigger with high DPI, but still blurry
<p>I am following a tutorial to generate a scatter plot of points, coloured by cluster and also colour-saturated based each point's strength of membership in its respective clusters. I mention the colouring details in case they affect the resolution, but I suspect they don't.</p> <p>What I find is that if I increase the DPI of a PyPlot figure, the figure increases in size, but is still very blurry. Below is my test code, which generates a small DPI figure and a large DPI figure. The latter still seems extraordinarily blurry. What is causing this and how can I get sharp plots? I am using Spyder.</p> <pre><code># Adapted from: # https://hdbscan.readthedocs.io/en/latest/advanced_hdbscan.html # Generate plot data #------------------- import numpy as np data = np.load('clusterable_data.npy') # https://github.com/lmcinnes/hdbscan/blob/master/notebooks/clusterable_data.npy import hdbscan clusterer = hdbscan.HDBSCAN(min_cluster_size=15).fit(data) import seaborn as sns color_palette = sns.color_palette( 'deep', len( np.unique( clusterer.labels_ ) ) ) cluster_colors = [color_palette[x] if x &gt;= 0 else (0.5, 0.5, 0.5) for x in clusterer.labels_] cluster_member_colors = [sns.desaturate(x, p) for x, p in zip(cluster_colors, clusterer.probabilities_)] # Plot the scatter graph #----------------------- %matplotlib tk # To get separate window in Spyder import matplotlib.pyplot as plt plt.close('all') # Low resolution plt.rcParams['figure.dpi']=50 plt.scatter(*data.T, s=50, linewidth=0, c=cluster_member_colors, alpha=0.25) # High resolution plt.rcParams['figure.dpi']=150 plt.figure() plt.scatter(*data.T, s=50, linewidth=0, c=cluster_member_colors, alpha=0.25) </code></pre> <p><a href="https://i.sstatic.net/om5mJ9A4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/om5mJ9A4.png" alt="enter image description here" /></a></p>
<python><matplotlib><spyder>
2024-07-24 18:27:46
1
1,265
user2153235
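The arithmetic behind the symptom, as a sketch assuming matplotlib's default `figsize`: a figure's pixel dimensions are figsize (inches) times dpi, so raising dpi does produce a larger canvas, but any viewer (including Spyder's Tk window) that scales that canvas back down to fit the screen will show it blurry. Sharp output usually means setting dpi at save time (e.g. `plt.savefig(..., dpi=300)`) and viewing the file at native size.

```python
# Pixel size of a figure = figsize (inches) * dpi.
figsize = (6.4, 4.8)  # matplotlib's default figsize, in inches

for dpi in (50, 150):
    width_px = figsize[0] * dpi
    height_px = figsize[1] * dpi
    print(f"dpi={dpi}: {width_px:.0f}x{height_px:.0f} px")
```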
78,789,944
1,471,980
How do you sort date column names in descending order in pandas
<p>I have this DataFrame:</p> <pre><code>Node Interface Speed Band_In carrier Date Server1 wan1 100 80 ATT 2024-05-09 Server1 wan1 100 50 Sprint 2024-06-21 Server1 wan1 100 30 Verizon 2024-07-01 Server2 wan1 100 90 ATT 2024-05-01 Server2 wan1 100 88 Sprint 2024-06-02 Server2 wan1 100 22 Verizon 2024-07-19 </code></pre> <p>I need to convert the Date field to the format 1-May, 2-Jun, 19-July and place the values as column names in descending order, so it looks like this:</p> <pre><code> Node Interface Speed Band_In carrier 1-July 9-May 21-Jun Server1 wan1 100 80 ATT 80 50 30 </code></pre> <p>I tried this:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%d-%b') df['is'] = df['Band_In'] / df['Speed'] * 100 df = df.pivot_table(index=['Node', 'Interface', 'carrier'], columns='Date', values='is').reset_index() </code></pre> <p>I need the Date values in the column names to be sorted in descending order: 9-May 21-Jun 1-July.</p> <p>Any ideas how?</p>
<python><pandas><dataframe><datetime>
2024-07-24 18:16:05
2
10,714
user1471980
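A sketch of the ordering step: once the dates become strings, the pivoted columns sort lexically; parsing the labels back with `%d-%b` restores chronological order (note that `strftime('%d-%b')` produces "Jul", not "July", and zero-padded days unless stripped). The reordered list can then reindex the pivoted frame, e.g. `df[id_cols + ordered]`, where `id_cols` is an illustrative name for the non-date columns.

```python
from datetime import datetime

cols = ["1-Jul", "9-May", "21-Jun"]
# parse the "day-month" labels into real dates so they compare chronologically,
# then reverse for newest-first ordering
ordered = sorted(cols, key=lambda c: datetime.strptime(c, "%d-%b"), reverse=True)
print(ordered)  # ['1-Jul', '21-Jun', '9-May']
```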
78,789,813
6,370,552
ModuleNotFoundError: No module named 'paramiko.auth_strategy' while using fab2 python
<p>I'm trying to execute a python task using fab. I've installed fab2 for python3 and trying to use it to run the task in a venv. Unfortunately using fab2 keeps giving me this error:</p> <pre><code>(venv) My-MacBook-Pro-2:mypath myuser$ fab2 /mypath/venv/lib/python3.12/site-packages/paramiko/pkey.py:82: CryptographyDeprecationWarning: TripleDES has been moved to cryptography.hazmat.decrepit.ciphers.algorithms.TripleDES and will be removed from this module in 48.0.0. &quot;cipher&quot;: algorithms.TripleDES, /mypath/venv/lib/python3.12/site-packages/paramiko/transport.py:253: CryptographyDeprecationWarning: TripleDES has been moved to cryptography.hazmat.decrepit.ciphers.algorithms.TripleDES and will be removed from this module in 48.0.0. &quot;class&quot;: algorithms.TripleDES, /mypath/fabfile.py:335: SyntaxWarning: &quot;is&quot; with 'str' literal. Did you mean &quot;==&quot;? if completeSql is 'Y': Traceback (most recent call last): File &quot;/mypath/venv/bin/fab2&quot;, line 8, in &lt;module&gt; sys.exit(program.run()) ^^^^^^^^^^^^^ File &quot;/mypath/venv/lib/python3.12/site-packages/invoke/program.py&quot;, line 387, in run self.parse_collection() File &quot;/mypath/venv/lib/python3.12/site-packages/invoke/program.py&quot;, line 479, in parse_collection self.load_collection() File &quot;/mypath/venv/lib/python3.12/site-packages/fabric2/main.py&quot;, line 93, in load_collection super().load_collection() File &quot;/mypath/venv/lib/python3.12/site-packages/invoke/program.py&quot;, line 716, in load_collection module, parent = loader.load(coll_name) ^^^^^^^^^^^^^^^^^^^^^^ File &quot;/mypath/venv/lib/python3.12/site-packages/invoke/loader.py&quot;, line 91, in load spec.loader.exec_module(module) File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 995, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;/mypath/fabfile.py&quot;, line 1, in &lt;module&gt; from fabric.api import 
env,hide,cd,settings,execute,task,runs_once, local, run, parallel, sudo File &quot;/mypath/venv/lib/python3.12/site-packages/fabric/api.py&quot;, line 10, in &lt;module&gt; from fabric.context_managers import (cd, hide, settings, show, path, prefix, File &quot;/mypath/venv/lib/python3.12/site-packages/fabric/context_managers.py&quot;, line 27, in &lt;module&gt; from fabric.state import output, win32, connections, env File &quot;/mypath/venv/lib/python3.12/site-packages/fabric/state.py&quot;, line 9, in &lt;module&gt; from fabric.network import HostConnectionCache, ssh File &quot;/mypath/venv/lib/python3.12/site-packages/fabric/network.py&quot;, line 14, in &lt;module&gt; from fabric.auth import get_password, set_password File &quot;/mypath/venv/lib/python3.12/site-packages/fabric/auth.py&quot;, line 6, in &lt;module&gt; from paramiko.auth_strategy import ( ModuleNotFoundError: No module named 'paramiko.auth_strategy' </code></pre> <p>But when I try to install paramiko, I get:</p> <pre><code>$ pip install paramiko Requirement already satisfied: paramiko in ./venv/lib/python3.12/site-packages (2.12.0) Requirement already satisfied: bcrypt&gt;=3.1.3 in ./venv/lib/python3.12/site-packages (from paramiko) (4.2.0) Requirement already satisfied: cryptography&gt;=2.5 in ./venv/lib/python3.12/site-packages (from paramiko) (43.0.0) Requirement already satisfied: pynacl&gt;=1.0.1 in ./venv/lib/python3.12/site-packages (from paramiko) (1.5.0) Requirement already satisfied: six in ./venv/lib/python3.12/site-packages (from paramiko) (1.16.0) Requirement already satisfied: cffi&gt;=1.12 in ./venv/lib/python3.12/site-packages (from cryptography&gt;=2.5-&gt;paramiko) (1.16.0) Requirement already satisfied: pycparser in ./venv/lib/python3.12/site-packages (from cffi&gt;=1.12-&gt;cryptography&gt;=2.5-&gt;paramiko) (2.22) </code></pre> <p>I'm not sure if this is something to do with my python or fab. 
My python version is this:</p> <pre><code>$ python Python 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.3.9.4)] on darwin </code></pre> <p>and I'm running everything on venv. I'm also using fab2 instead of fab because fab kept giving me <code>ModuleNotFoundError: No module named 'fabric.api'</code> error and I could find no other solution. Any insights would be super helpful!</p>
<python><python-3.x><python-module><python-venv>
2024-07-24 17:42:10
1
344
scottstots
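A hedged observation: `pip` reports paramiko 2.12.0 pinned in the venv, while `paramiko.auth_strategy` appears to have been introduced in the paramiko 3.x line that modern Fabric expects, so "Requirement already satisfied" does not help; upgrading (`pip install -U paramiko fabric`) is the likely fix. Whether a given submodule is importable can be checked without importing it:

```python
import importlib.util

def has_module(name: str) -> bool:
    """True if `name` resolves to an importable module in this environment."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # raised when a parent package in `name` is itself missing
        return False

# In the venv described above this would print False until paramiko is upgraded:
# print(has_module("paramiko.auth_strategy"))
print(has_module("json"))
```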
78,789,745
16,869,946
Python scipy integrate.quad with TypeError: 'NoneType' object is not iterable
<p>I am trying to define the following integral using scipy: <a href="https://i.sstatic.net/BHJB7Vpz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHJB7Vpz.png" alt="enter image description here" /></a></p> <p>and here is my code:</p> <pre><code>import numpy as np import scipy.integrate as integrate from scipy.stats import norm def integrand(xi, thetai, *theta): sum = 0 for thetaj in theta: prod = 1 for t in tuple(list(theta).remove(thetaj)): prod = prod * (1 - norm.cdf(xi - t)) sum = sum + norm.cdf(xi - thetaj) * prod return sum * norm.pdf(xi - thetai) def integral(thetai, *theta): return (integrate.quad(integrand, -np.inf, np.inf, args=(thetai, *theta, )))[0] print(integral(0.12849237,0.67286398,0.1124954,-0.3242629,0.28836734,0.33057082,-0.0843643,-0.085148,-0.7902458,-0.4907209,-0.5297461,-0.6957624)) </code></pre> <p>However, when I run that code, the following error shows up: <code>TypeError: 'NoneType' object is not iterable</code> for the line <code>for t in tuple(list(theta).remove(thetaj)):</code></p> <p>and I am having troubles to debug it.</p>
<python><list><scipy><tuples><quad>
2024-07-24 17:27:04
1
592
Ishigami
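The traceback points at a classic pitfall, independent of the integral itself: `list.remove()` mutates the list in place and returns `None`, so `tuple(list(theta).remove(thetaj))` always hands `None` to `tuple()`. A sketch of the fix: build the reduced tuple without mutation, and exclude by index rather than by value so the code stays correct even if two thetas happen to be equal.

```python
theta = (0.67, 0.11, -0.32)

def all_but(seq, j):
    # new tuple excluding index j; no in-place mutation, no None in sight
    return tuple(t for k, t in enumerate(seq) if k != j)

print(all_but(theta, 1))  # (0.67, -0.32)
```

Inside `integrand`, the inner loop would then iterate over `all_but(theta, j)` for the current index `j` instead of `tuple(list(theta).remove(thetaj))`.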
78,789,622
20,591,261
Get the column name of the max date in Polars
<p>I'm trying to get the column name containing the maximum date value in my Polars DataFrame. I found a similar question that was already answered <a href="https://stackoverflow.com/questions/77488797/polars-get-name-of-column-containing-max-value-per-row">here</a>.</p> <p>However, in my case, I have many columns, and adding them manually would be tedious. I would like to use column selectors <code>cs.datetime()</code> and have tried the following:</p> <pre><code>import polars as pl from datetime import datetime import polars.selectors as cs data = { &quot;ID&quot; : [1,2,3], &quot;Values_A&quot; : [datetime(1,1,2),datetime(1,1,3),datetime(1,1,4)], &quot;Values_B&quot; : [datetime(1,1,4),datetime(1,1,7),datetime(1,1,2)] } df = pl.DataFrame(data) def arg_max_horizontal(*columns: pl.Expr) -&gt; pl.Expr: return ( pl.concat_list(columns) .list.arg_max() .replace_strict({i: col_name for i, col_name in enumerate(columns)}) ) ( df .with_columns( Largest=arg_max_horizontal(pl.select(cs.datetime())) ) ) </code></pre>
<python><dataframe><python-polars>
2024-07-24 16:52:53
1
1,195
Simon
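Two hedged notes: `pl.select(cs.datetime())` materialises a DataFrame rather than passing expressions, so the helper likely wants the selector itself, e.g. `arg_max_horizontal(cs.datetime())`, with the column names for `replace_strict` taken from `df.select(cs.datetime()).columns` (both calls are assumptions to verify against your Polars version). The row-wise logic itself is just an argmax over the date columns, sketched here in plain Python:

```python
from datetime import datetime

rows = [
    {"Values_A": datetime(1, 1, 2), "Values_B": datetime(1, 1, 4)},
    {"Values_A": datetime(1, 1, 3), "Values_B": datetime(1, 1, 7)},
    {"Values_A": datetime(1, 1, 4), "Values_B": datetime(1, 1, 2)},
]
# name of the column holding each row's latest date
largest = [max(row, key=row.get) for row in rows]
print(largest)  # ['Values_B', 'Values_B', 'Values_A']
```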
78,789,571
11,643,528
Keeping a running total of quantities while matching items and dates within a range
<p>I'm attempting to match job lines to purchase orders on items within a date range while tracking the available quantity of the items.</p> <p>If I have three dataframes:</p> <pre><code> joblines = pd.DataFrame({ 'order': ['1-1', '1-1', '2-1', '3-1'], 'item': ['A1','A2','A1', 'A1'], 'startdate':[pd.Timestamp('2024-7-25'), pd.Timestamp('2024-7-25'), pd.Timestamp('2024-8-05'), pd.Timestamp('2024-9-02')], 'qty': [1, 2, 3, 3] }) items = pd.DataFrame({ 'item': ['A1', 'A2'], 'onhand':[2, 2] }) polines = pd.DataFrame({ 'po':['1','2','3'], 'item':['A1', 'A2', 'A1'], 'qty': [1, 1, 5], 'reqdate': [pd.Timestamp('2024-7-23'), pd.Timestamp('2024-7-26'), pd.Timestamp('2024-9-01')] }) </code></pre> <p>I'm attempting to get here:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Order</th> <th>Item</th> <th>Start</th> <th>Qty</th> <th>PO</th> <th>Req</th> <th>Notes</th> </tr> </thead> <tbody> <tr> <td>1-1</td> <td>A1</td> <td>2024-07-25</td> <td>1</td> <td>1</td> <td>2024-07-23</td> <td>OnHand</td> </tr> <tr> <td>1-1</td> <td>A2</td> <td>2024-07-25</td> <td>2</td> <td>2</td> <td>2024-07-26</td> <td>OnHand</td> </tr> <tr> <td>2-1</td> <td>A1</td> <td>2024-08-05</td> <td>3</td> <td></td> <td></td> <td>No Available PO</td> </tr> <tr> <td>3-1</td> <td>A2</td> <td>2024-09-02</td> <td>3</td> <td>3</td> <td>2024-09-01</td> <td></td> </tr> </tbody> </table></div> <p>So the first two lines are covered by existing quantities, the third line has no associated purchase order (+- 5 days between start and req), and the last line has a purchase order coming in to cover its quantity.</p> <p>Is this possible using pandas?</p>
<python><pandas>
2024-07-24 16:40:16
1
4,695
Warcupine
78,789,533
5,924,264
Vectorized way to check if a string is in a dataframe column (set of strings)?
<p>I have a pandas dataframe <code>df</code>. This dataframe has a column <code>to_filter</code>. <code>to_filter</code> is either an empty set or a set of strings. This dataframe also has an integer column <code>id</code>. The <code>id</code> may not be unique.</p> <p>Given an input string <code>input</code>, is there a vectorized way to check if each row's <code>to_filter</code> column contains the <code>input</code> string?</p> <p>My goal is to get a list of <code>id</code>s such that <code>input</code> is not in the <code>to_filter</code> column for that given <code>id</code>.</p> <p>e.g.,</p> <pre><code>df = { &quot;id&quot; : [55, 1, 1, 2] &quot;to_filter&quot; : [set(), set(), {&quot;blah&quot;}, {&quot;blah&quot;}] } inpt = &quot;blah&quot; </code></pre> <p>the result should be <code>ids = [55, 1]</code></p> <p>I know I can do this with <code>.apply</code>, but that's not vectorized. <code>df</code> can get quite large in terms of number of rows, so I'd prefer a vectorized approach</p>
<python><pandas><dataframe>
2024-07-24 16:30:36
2
2,502
roulette01
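A sketch reproducing the expected `[55, 1]`: with Python sets stored in an object column there is no true NumPy kernel to dispatch to, so a comprehension-built boolean mask is the usual practical fast path; it is still one Python-level pass, but it avoids the per-row overhead of `.apply`.

```python
import pandas as pd

df = pd.DataFrame({
    "id": [55, 1, 1, 2],
    "to_filter": [set(), set(), {"blah"}, {"blah"}],
})
inpt = "blah"

# object-dtype sets: build the mask with a single comprehension pass
mask = [inpt not in s for s in df["to_filter"]]
ids = df.loc[mask, "id"].tolist()
print(ids)  # [55, 1]
```

If the goal is instead per-`id` (drop an id entirely once any of its rows contains the string), the mask could be grouped with `.groupby(df["id"]).transform("any")` before filtering.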
78,789,244
11,815,097
Writing atomic functions in Python
<p>In many systems, transactions are used to group operations together so that they are atomic, meaning they either all succeed or all fail. However, Python itself does not seem to have built-in support for transactions outside the context of databases.</p> <p>For example, I have two functions:</p> <pre><code>create_user_record(user_record) create_directory(directory_name) </code></pre> <p>I want to wrap these functions in an operation such that if one of them fails, the other fails too. How can I achieve this?</p>
<python><python-3.x><transactions>
2024-07-24 15:23:00
2
315
Yasin Amini
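Python has no general-purpose transactional memory, but the usual pattern is compensation: run each step, remember how to undo it, and unwind the completed steps when a later one raises. A minimal sketch (`create_user_record`, `create_directory` and their undo counterparts are the question's hypothetical functions):

```python
def run_atomically(steps):
    """steps: iterable of (action, undo) pairs; undoes completed work on failure."""
    done = []
    try:
        for action, undo in steps:
            action()
            done.append(undo)
    except Exception:
        # roll back in reverse order, then surface the original error
        for undo in reversed(done):
            undo()
        raise

# usage sketch (names are the question's hypothetical functions):
# run_atomically([
#     (lambda: create_user_record(user_record), lambda: delete_user_record(user_record)),
#     (lambda: create_directory(directory_name), lambda: remove_directory(directory_name)),
# ])
```

One caveat worth stating: an undo action can itself fail, so production versions typically log rollback errors and keep unwinding rather than stopping mid-rollback.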
78,789,122
1,883,154
Load multiple large CSV files into parquet while creating a new column for the file name
<p>I have collections of CSV files, up to 1000, each being ~1 GB uncompressed. I want to create a single parquet dataset from them.</p> <p>In doing this, I want to record which file each set of rows comes from.</p> <p>I want to do all this in less than ~10 GB of RAM, in Python.</p> <p>The obvious place to start was with Dask.</p> <p>If I do something like:</p> <pre><code>for infile in file_list: ddf = dd.read_csv(infile) ddf = ddf.assign(filename=infile) ddf.to_parquet(&quot;output_parquet_path&quot;, append=True, write_index=False, write_metadata_file=True, compute=True) </code></pre> <p>Then I get an error message about the column types from the second file onwards - it seems that the text columns are of type <code>string[pyarrow]</code> in the parquet file, but <code>object</code> in the Dask dataframe (see <a href="https://stackoverflow.com/questions/78726092/valueerror-appended-dtypes-differ-when-appending-two-simple-tables-with-dask">ValueError: Appended dtypes differ when appending two simple tables with dask</a>).</p> <p>If I try to rely on the lazy nature of Dask and do:</p> <pre><code>frame_list = list() for infile in file_list: ddf = dd.read_csv(infile) ddf = ddf.assign(filename=infile) frame_list.append(ddf) full_frame = ddf.concat(frame_list) full_frame.to_parquet(&quot;output_parquet_path&quot;, write_index=False, write_metadata_file=True, compute=True) </code></pre> <p>Then a compute is triggered early and it tries to load all the frames into memory at once.</p>
<python><dask><parquet>
2024-07-24 14:58:54
1
1,738
Ian Sudbery
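The per-row idea, sketched with the stdlib only (not Dask): stream each CSV and attach the filename as rows go by, so memory stays bounded regardless of file size. In Dask itself, the append error is commonly addressed by forcing consistent dtypes up front, e.g. passing an explicit `dtype=` mapping or `dtype_backend="pyarrow"` to `dd.read_csv` so every file yields the same schema (hedged; verify against your Dask and pandas versions).

```python
import csv
import io

def rows_with_filename(name, handle):
    # stream rows, tagging each with its source file; O(1) memory per file
    for row in csv.DictReader(handle):
        row["filename"] = name
        yield row

# in-memory stand-in for an open CSV file
sample = io.StringIO("a,b\n1,2\n3,4\n")
out = list(rows_with_filename("part1.csv", sample))
print(out[0])  # {'a': '1', 'b': '2', 'filename': 'part1.csv'}
```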
78,789,026
2,475,195
Apply sklearn logloss with rolling on pandas dataframe
<p>My function call looks something like</p> <pre><code>loss = log_loss(y_true=validate_d['y'], y_pred=validate_probs, sample_weight=validate_df['weight'], normalize=True) </code></pre> <p>Is there any way to combine this with pandas <code>rolling()</code> functionality, so I calculate it for a trailing 10k rows window, for example?</p>
<python><pandas><dataframe><rolling-computation>
2024-07-24 14:38:40
1
4,355
Baron Yugovich
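`rolling().apply` works column by column, so a metric that needs `y_true` and `y_pred` together does not fit it directly; iterating over trailing windows is the straightforward fallback. A stdlib sketch of the unweighted binary case (`sample_weight` omitted; for 10k-row windows over a large frame, a numpy sliding-window version would be the faster route):

```python
import math

def log_loss(y_true, y_pred, eps=1e-15):
    # binary log loss, averaged (sklearn's normalize=True), no sample weights
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip away exact 0/1 probabilities
        total -= t * math.log(p) + (1 - t) * math.log(1 - p)
    return total / len(y_true)

def rolling_log_loss(y_true, y_pred, window):
    # one value per position once a full trailing window is available
    return [
        log_loss(y_true[i - window:i], y_pred[i - window:i])
        for i in range(window, len(y_true) + 1)
    ]

print(rolling_log_loss([1, 0, 1, 1], [0.9, 0.1, 0.8, 0.7], 2))
```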
78,788,950
4,826,848
Importing rembg in Celery Task breaks workers
<p>I'm trying to use the <code>rembg</code> library in a Celery worker (Django), but once I import the library, the worker is exited prematurely:</p> <pre><code>objc[47160]: +[NSCharacterSet initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug. [2024-07-24 14:16:14,501: ERROR/MainProcess] Process 'ForkPoolWorker-16' pid:47160 exited with 'signal 6 (SIGABRT)' [2024-07-24 14:16:14,514: ERROR/MainProcess] Message: Error: {'signal': &lt;Signal: task_failure providing_args={'traceback', 'einfo', 'kwargs', 'task_id', 'exception', 'args'}&gt;, 'sender': &lt;@task: assets.tasks.image_background.remove of oml at 0x1046a5c50&gt;, 'task_id': '6219ab75-62b5-4d14-88ac-034d9fa71d45', 'exception': WorkerLostError('Worker exited prematurely: signal 6 (SIGABRT) Job: 15.'), 'args': [], 'kwargs': {}, 'traceback': 'Traceback (most recent call last):\n File &quot;/Users/cesarrodriguez/.pyenv/versions/3.11.2/lib/python3.11/site-packages/billiard/pool.py&quot;, line 1264, in mark_as_worker_lost\n raise WorkerLostError(\nbilliard.exceptions.WorkerLostError: Worker exited prematurely: signal 6 (SIGABRT) Job: 15.\n', 'einfo': &lt;ExceptionInfo: ExceptionWithTraceback()&gt;} Data: {} [2024-07-24 14:16:14,514: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 6 (SIGABRT) Job: 15.') Traceback (most recent call last): File &quot;/Users/cesarrodriguez/.pyenv/versions/3.11.2/lib/python3.11/site-packages/billiard/pool.py&quot;, line 1264, in mark_as_worker_lost </code></pre> <p>I'm not sure if it's something related to multiprocessing Issues, have any thoughts?</p>
<python><django><celery><rembg>
2024-07-24 14:21:01
1
1,851
Cesar Jr Rodriguez
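A hedged pointer: the `objc_initializeAfterForkError` abort is macOS-specific. `rembg`'s native dependencies touch Objective-C frameworks in the parent process, and Celery's default prefork pool then calls `fork()`, which CoreFoundation refuses. Common workarounds (verify for your setup): run the worker with a non-forking pool (`celery -A oml worker --pool=solo` or `--pool=threads`), import `rembg` inside the task body rather than at module top level, or set Apple's escape-hatch environment variable before the worker starts:

```python
import os

# Must be in the worker's environment before any fork happens; it disables
# the Objective-C fork-safety check rather than fixing the underlying issue.
os.environ["OBJC_DISABLE_INITIALIZE_FORK_SAFETY"] = "YES"
print(os.environ["OBJC_DISABLE_INITIALIZE_FORK_SAFETY"])
```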
78,788,921
1,814,420
What is the type hint for socket?
<p>Suppose I'm writing a function that takes a <a href="https://docs.python.org/3/library/socket.html" rel="nofollow noreferrer"><code>socket</code></a> as a parameter. How should I type hint it properly?</p> <pre class="lang-py prettyprint-override"><code>def read_socket(socket: ???): .... </code></pre>
<python><python-typing><python-sockets>
2024-07-24 14:15:49
2
12,163
Triet Doan
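The answer is hiding in the naming: the `socket` module and its main class share a name, so the annotation is `socket.socket` (importing the module, not the class, keeps the two distinct).

```python
import socket

def read_socket(sock: socket.socket) -> bytes:
    # the socket module's class doubles as the type hint
    return sock.recv(1024)

# exercise it locally with a connected pair, no network needed
a, b = socket.socketpair()
b.sendall(b"ping")
data = read_socket(a)
a.close()
b.close()
print(data)  # b'ping'
```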
78,788,919
525,865
Trying to find out the logic of this page: approx. 100 results stored and parsed with Python & BS4
<p>trying to find out the logic that is behind this page:</p> <p>we have stored some results in the following db:</p> <p><a href="https://www.raiffeisen.ch/rch/de/ueber-uns/raiffeisen-gruppe/organisation/raiffeisenbanken/deutsche-schweiz.html#accordionitem_18104049731620873397" rel="nofollow noreferrer">https://www.raiffeisen.ch/rch/de/ueber-uns/raiffeisen-gruppe/organisation/raiffeisenbanken/deutsche-schweiz.html#accordionitem_18104049731620873397</a></p> <p>from <strong>a to z approx: 120 results or more:</strong></p> <p>which Options do we have to get the data</p> <p><a href="https://www.raiffeisen.ch/zuerich/de.html#bankselector-focus-titlebar" rel="nofollow noreferrer">https://www.raiffeisen.ch/zuerich/de.html#bankselector-focus-titlebar</a></p> <pre><code>Raiffeisenbank Zürich Limmatquai 68 8001Zürich Tel. +41 43 244 78 78 zuerich@raiffeisen.ch </code></pre> <p><a href="https://www.raiffeisen.ch/sennwald/de.html" rel="nofollow noreferrer">https://www.raiffeisen.ch/sennwald/de.html</a></p> <pre><code>Raiffeisenbank Sennwald Äugstisriet 7 9466Sennwald Tel. +41 81 750 40 40 sennwald@raiffeisen.ch BIC/Swift Code: RAIFCH22XXX </code></pre> <p><a href="https://www.raiffeisen.ch/basel/de/ueber-uns/engagement.html#bankselector-focus-titlebar" rel="nofollow noreferrer">https://www.raiffeisen.ch/basel/de/ueber-uns/engagement.html#bankselector-focus-titlebar</a></p> <pre><code>Raiffeisenbank Basel St. Jakobs-Strasse 7 4052Basel Tel. 
+41 61 226 27 28 basel@raiffeisen.ch </code></pre> <p>Hmm - i think that - if somehow all is encapsulated in the url-encoded block...</p> <p>well i am trying to find it out - and here is my approach:</p> <pre><code>import requests from bs4 import BeautifulSoup def get_raiffeisen_data(url): response = requests.get(url) if response.status_code == 200: soup = BeautifulSoup(response.content, 'html.parser') banks = [] # Find all bank entries bank_entries = soup.find_all('div', class_='bank-entry') for entry in bank_entries: bank = {} bank['name'] = entry.find('h2', class_='bank-name').text.strip() bank['address'] = entry.find('div', class_='bank-address').text.strip() bank['tel'] = entry.find('div', class_='bank-tel').text.strip() bank['email'] = entry.find('a', class_='bank-email').text.strip() banks.append(bank) return banks else: print(f&quot;Failed to retrieve data from {url}&quot;) return None url = 'https://www.raiffeisen.ch/rch/de/ueber-uns/raiffeisen-gruppe/organisation/raiffeisenbanken/deutsche-schweiz.html' banks_data = get_raiffeisen_data(url) for bank in banks_data: print(f&quot;Name: {bank['name']}&quot;) print(f&quot;Address: {bank['address']}&quot;) print(f&quot;Tel: {bank['tel']}&quot;) print(f&quot;Email: {bank['email']}&quot;) print('-' * 40) </code></pre>
<python><pandas><web-scraping><beautifulsoup>
2024-07-24 14:15:02
1
1,223
zero
78,788,913
899,862
Errors implementing __eq__ method in OOP class structure
<p>Here are a couple of classes.<br /> A parent:</p> <pre><code>class GeometricShape:
    def __init__(self, name):
        self.set_name(name)

    def get_name(self):
        return self.__name

    def set_name(self, name):
        validate_non_empty_string(name)
        self.__name = name

    def __repr__(self):
        return f'GeometricShape(name={self.__name})'

    def __eq__(self, other):
        return(self.get_name() == other.get_name() )
</code></pre> <p>A child:</p> <pre><code>class Rectangle(GeometricShape):
    def __init__(self, length, width, name='Rectangle'):
        #
        # the parent class sets the name in the constructor
        # the name is not set in the child, and we want the naming behavior
        # provided by the parents constructor
        super().__init__(name)
        #
        self.set_length(length)
        self.set_width(width)

    def get_length(self):
        return self.__length

    def get_width(self):
        return self.__width

    def set_length(self, length):
        # Check the data
        validate_positive_number(length)
        self.__length = length

    def set_width(self, width):
        validate_positive_number(width)
        self.__width = width

    def get_perimeter(self):
        return 2 * self.__length + 2 * self.__width

    def get_area(self):
        return self.__length * self.__width

    def __repr__(self):
        return f'Rectangle(a={self.__length}, b={self.__width})'

    # Attempted Solution 1
    def __eq__(self, other):
        return( (self.__width == other.get_width() ) and (self.__length == other.get_length() ))

    # Attempted Solution 2
    #def __eq__(self, other):
    #    return( (self.__width == other.__width ) and (self.__length == other.__length ))
</code></pre> <p>I have attempted two solutions for the <code>__eq__</code> method; both fail in different ways, and the corresponding error produced by each is given below.</p> <p>The tests that trigger the <code>==</code> are being called from encrypted pye files.</p> <hr /> <p>Here is a description of pye files:<br /> What is a pye file in Python? Encrypted py-files containing source code. The encrypted files are unreadable and protect the developer's copyrights. The encrypted files are given the extension .pye and are used if no .py file is available.</p> <hr /> <p>Solution #1, attempt at <code>__eq__</code>:</p> <pre><code>    def __eq__(self, other):
        return( (self.__width == other.get_width() ) and (self.__length == other.get_length() ))
</code></pre> <p>produces this error message: [ERROR] 2024-07-24 GMT-0500 09:38:04.607: An unexpected error has occurred. Traceback (most recent call last) ... then a long trace through encrypted files ending in:<br /> geometric_shapes.py&quot;, line 31, in <code>__eq__</code> return(self.get_name() == other.get_name() )</p> <hr /> <p>Solution #2, attempt at <code>__eq__</code>:</p> <pre><code>    def __eq__(self, other):
        return( (self.__width == other.__width ) and (self.__length == other.__length ))
</code></pre> <p>Produces this error message: [ERROR] 2024-07-24 GMT-0500 09:38:04.607: An unexpected error has occurred. Traceback (most recent call last) ... then a long trace through encrypted files ending in:<br /> geometric_shapes.py&quot;, line 31, in <code>__eq__</code> return(self.get_name() == other.get_name() )</p> <hr /> <p>These are my tests, which appear to produce the expected results:</p> <pre><code>geometric_shape = GeometricShape('Triangle')
geometric_shape2 = GeometricShape('Triangle')
geometric_shape3 = GeometricShape('Square')
print(f'shape == shape2:{geometric_shape == geometric_shape2}')
print(f'shape == shape3: {geometric_shape == geometric_shape3}')
#
rectangle = Rectangle(5, 3)
rectangle2 = Rectangle(5, 3)
rectangle3 = Rectangle(5, 4)
print(f'rectangle == rectangle2: {rectangle == rectangle2}')
print(f'rectangle == rectangle3: {rectangle == rectangle3}')
</code></pre> <p>Which produce:</p> <pre><code>shape == shape2:True
shape == shape3: False
rectangle == rectangle2: True
rectangle == rectangle3: False
</code></pre> <p>Full error message/trace through encrypted files:</p> <pre><code>[ERROR] 2024-07-24 GMT-0500 09:38:04.607: An unexpected error has occurred.
Traceback (most recent call last): File &quot;..submitter_utils\submitter.pye&quot;, line 70, in run ,$tmf-UR9QO`7m3:/ZUOC`Mr&quot;HEG@322'^F File &quot;:...submitter_utils\task_handler.pye&quot;, line 31, in generate_submission_archive ONMZG-M0.d4^.q^W*/O-kA!&amp;&amp;F=,HOemS^G File &quot;...submitter_utils\task_handler.pye&quot;, line 51, in __list_files_for_submission )QtuUN&amp;-8jD=?N&quot;m9rd:_#TXTUSTI4'[SAo File &quot;...submitter_utils\task_handler.pye&quot;, line 65, in __generate_task_specific_files Hqt-[p'3jY=H$l-q3'VP4?99HU'p@LC(3G! File &quot;submitter_utils\tasks.pye&quot;, line 1040, in task6 +9i],]UI&gt;&gt;n'i1TAB&amp;tXr;9LBrMeSsA0Yg8 File &quot;...submitter_utils\test_utils.pye&quot;, line 329, in test_class )JoG=k--Wumo!Ga\[2H8e?[0VI'J;#Vp@Tb File &quot;..submitter_utils\test_utils.pye&quot;, line 164, in test_methods M?*&gt;F'c.D4-pmO:TY&gt;Pfa?^A%oB]KD:nJ6B File &quot;...\ppp-p4-classes-objects\geometric_shapes.py&quot;, line 31, in __eq__ return(self.get_name() == other.get_name() ) AttributeError: 'NoneType' object has no attribute 'get_name' </code></pre> <p>The tests that fail are called from encrypted files.<br /> My tests above behave as expected.<br /> Any ideas?</p>
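The traceback gives the real culprit away: `AttributeError: 'NoneType' object has no attribute 'get_name'` means the harness is comparing a shape against `None`, so `other.get_name()` blows up before either attempted solution matters. A minimal sketch of a defensive `__eq__` that returns `NotImplemented` for `None` and other foreign types (plain attributes instead of the question's getters/setters, so this is a simplified stand-in, not the exact classes):

```python
class GeometricShape:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        # Guard: for None or any non-shape, defer to Python's default
        # handling instead of assuming `other` has our interface.
        if not isinstance(other, GeometricShape):
            return NotImplemented
        return self.name == other.name


class Rectangle(GeometricShape):
    def __init__(self, length, width, name='Rectangle'):
        super().__init__(name)
        self.length = length
        self.width = width

    def __eq__(self, other):
        if not isinstance(other, Rectangle):
            return NotImplemented
        return (self.length, self.width) == (other.length, other.width)


print(Rectangle(5, 3) == None)             # False, no AttributeError
print(Rectangle(5, 3) == Rectangle(5, 3))  # True
```

When `__eq__` returns `NotImplemented`, Python tries the reflected comparison and finally falls back to identity, so `shape == None` quietly evaluates to `False` instead of raising.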
<python>
2024-07-24 14:14:24
1
2,620
Mikef
78,788,751
10,452,700
What is the best practice to calculate global frequency of list of elements with exact orders in python within multiple pandas dataframe?
<p>Let's say I have the following dataframe <code>df1</code> corresponding to <code>user1</code>:</p> <pre class="lang-none prettyprint-override"><code>+-------------------+-------+--------+-------+-------+----------+----------------+
| Models            | MAE   | MSE    | RMSE  | MAPE  | R² score | Runtime [ms]   |
+-------------------+-------+--------+-------+-------+----------+----------------+
| LinearRegression  | 4.906 | 27.784 | 5.271 | 0.405 | -6.917   | 0:00:43.387145 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| Random Forest     | 2.739 | 10.239 | 3.2   | 0.231 | -1.917   | 0:28:11.761681 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| XGBoost           | 2.826 | 10.898 | 3.301 | 0.234 | -2.105   | 0:03:58.883474 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| MLPRegressor      | 5.234 | 30.924 | 5.561 | 0.43  | -7.812   | 0:01:44.252276 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| SVR               | 5.061 | 29.301 | 5.413 | 0.417 | -7.349   | 0:04:52.754769 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| CatBoostRegressor | 2.454 | 8.823  | 2.97  | 0.201 | -1.514   | 0:19:36.925169 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| LGBMRegressor     | 2.76  | 10.204 | 3.194 | 0.231 | -1.907   | 0:04:51.223103 |
+-------------------+-------+--------+-------+-------+----------+----------------+

+-------------------+----------------------------------------------------------------------------------------------------------+
| Rank              | MAE                                                                                                      |
+-------------------+----------------------------------------------------------------------------------------------------------+
| Top models(sorted)| [&quot;CatBoostRegressor&quot;,&quot;RandomForest&quot;,&quot;LGBMRegressor&quot;, &quot;XGBoost&quot;,&quot;LinearRegression&quot;,&quot;SVR&quot;,&quot;MLPRegressor&quot;]   |
+-------------------+----------------------------------------------------------------------------------------------------------+
</code></pre> <p>I have the following dataframe <code>df2</code> corresponding to <code>user2</code>:</p> <pre class="lang-none prettyprint-override"><code>+-------------------+-------+--------+-------+-------+----------+----------------+
| Models            | MAE   | MSE    | RMSE  | MAPE  | R² score | Runtime [ms]   |
+-------------------+-------+--------+-------+-------+----------+----------------+
| LinearRegression  | 4.575 | 24.809 | 4.981 | 0.377 | -6.079   | 0:00:45.055854 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| Random Forest     | 2.345 | 8.065  | 2.84  | 0.199 | -1.301   | 0:10:55.468473 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| XGBoost           | 2.129 | 7.217  | 2.686 | 0.179 | -1.059   | 0:01:01.575033 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| MLPRegressor      | 4.414 | 23.477 | 4.845 | 0.363 | -5.699   | 0:00:31.231719 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| SVR               | 4.353 | 22.826 | 4.778 | 0.357 | -5.513   | 0:02:12.258870 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| CatBoostRegressor | 2.281 | 7.671  | 2.77  | 0.189 | -1.189   | 0:08:16.526615 |
+-------------------+-------+--------+-------+-------+----------+----------------+
| LGBMRegressor     | 2.511 | 9.18   | 3.03  | 0.212 | -1.619   | 0:15:25.084937 |
+-------------------+-------+--------+-------+-------+----------+----------------+

+-------------------+----------------------------------------------------------------------------------------------------------+
| Rank              | MAE                                                                                                      |
+-------------------+----------------------------------------------------------------------------------------------------------+
| Top models(sorted)| [&quot;XGBoost&quot;,&quot;CatBoostRegressor&quot;,&quot;RandomForest&quot;,&quot;LGBMRegressor&quot;,&quot;LinearRegression&quot;,&quot;SVR&quot;,&quot;MLPRegressor&quot;]    |
+-------------------+----------------------------------------------------------------------------------------------------------+
</code></pre> <p>Let's say I have more dataframes, up to <code>df1000</code> corresponding to <code>user1000</code>.</p> <p><em><strong>Problem statement:</strong> I want to count how often each ranking order occurs across all users (for a given metric). (And then, sort the ranking orders by their counts, and, additionally, compute the percentage of how often each particular ranking order occurs (based on the counts).)</em></p> <p>I want to <strong>rank</strong> the <code>Models</code> results (sorted over a specific column, e.g. <code>MAE</code>) <strong>iteratively</strong> and return the frequency of the top models over all dfs (<code>df1</code> till <code>df1000</code>), so this is not something I can easily reach using:</p> <pre><code>df[&quot;category&quot;].value_counts()
</code></pre> <p>We are interested in computing <a href="https://datagy.io/python-pandas-frequencies/" rel="nofollow noreferrer">absolute/relative frequencies</a> in the final ranked table in the expected output.
So I definitely need to transform each df into a list of sorted models' names, i.e. a list of strings.</p> <p>Possible transformation or aggregation stages, from my understanding:</p> <ol> <li>take each df and create the list of sorted model names based on the desired column or metric: <code>['model2','model7', 'model6', 'model5', 'model4', 'model3', 'model1' ]</code></li> <li>including the name of <code>Users</code> in the final transformed dataframe could also be useful (however, I did not mention it in the following table in the expected output)</li> <li>computing absolute/relative frequencies and returning them as <code>counts</code> and <code>freq(%)</code> in the final table</li> </ol> <p><strong>Expected output:</strong></p> <pre class="lang-none prettyprint-override"><code>+-------------------+----------------------------------------------------------------------------------------------------------+--------+---------+
| Rank              | MAE                                                                                                      |counts  |freq(%)  |
+-------------------+----------------------------------------------------------------------------------------------------------+--------+---------+
| Top models(sorted)| [&quot;CatBoostRegressor&quot;,&quot;RandomForest&quot;,&quot;LGBMRegressor&quot;, &quot;XGBoost&quot;,&quot;LinearRegression&quot;,&quot;SVR&quot;,&quot;MLPRegressor&quot;]   | 70     | 65%     |
| Top models(sorted)| [&quot;XGBoost&quot;,&quot;CatBoostRegressor&quot;,&quot;RandomForest&quot;,&quot;LGBMRegressor&quot;,&quot;LinearRegression&quot;,&quot;SVR&quot;,&quot;MLPRegressor&quot;]    | 20     | 12%     |
| Top models(sorted)| ....                                                                                                     | ....   | ....    |
+-------------------+----------------------------------------------------------------------------------------------------------+--------+---------+ </code></pre> <p>I also was thinking <strong>maybe</strong> I can use Natural Language Processing (NLP) methods called <a href="https://www.geeksforgeeks.org/understanding-tf-idf-term-frequency-inverse-document-frequency/" rel="nofollow noreferrer">TF-IDF</a> to handle this problem using:</p> <pre><code># import required module from sklearn.feature_extraction.text import TfidfVectorizer </code></pre> <hr /> <p>Potentially related posts I have checked:</p> <ul> <li><a href="https://stackoverflow.com/q/12207326/10452700">How can I compute a histogram (frequency table) for a single Series?</a></li> <li><a href="https://stackoverflow.com/q/22391433/10452700">Count the frequency that a value occurs in a dataframe column</a></li> <li><a href="https://stackoverflow.com/q/54114809/10452700">Efficient way to get frequency of elements in a pandas column of lists</a></li> <li><a href="https://stackoverflow.com/q/23023741/10452700">Calculate Frequency of item in list</a></li> <li><a href="https://stackoverflow.com/q/72381072/10452700">Get the frequency of individual items in a list of each row of a column in a dataframe</a></li> <li><a href="https://stackoverflow.com/q/33093809/10452700">count the frequency of elements in list of lists in Python</a></li> <li><a href="https://stackoverflow.com/q/49384846/10452700">What's the best alternative to using lists as elements in a pandas dataframe?</a></li> <li><a href="https://stackoverflow.com/q/35901258/10452700">pandas - create dataframe with counts and frequency of elements</a></li> <li><a href="https://stackoverflow.com/q/59232444/10452700">Python: Calculate PMF for List in Pandas Dataframe</a></li> <li><a href="https://stackoverflow.com/q/63653835/10452700">Frequency plot of a Pandas Dataframe</a></li> <li><a href="https://stackoverflow.com/q/39369820/10452700">python &amp; pandas - 
How to calculate frequency under conditions in columns in DataFrame?</a></li> </ul>
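One way to sketch the whole pipeline (illustrative toy data with hypothetical model names and MAE values, not the real results): build one ranking string per user's df, then let `value_counts` do the counting and derive the percentages from it:

```python
import pandas as pd

# Three hypothetical per-user result tables (only Models + MAE shown)
df1 = pd.DataFrame({"Models": ["A", "B", "C"], "MAE": [2.4, 2.7, 4.9]})
df2 = pd.DataFrame({"Models": ["A", "B", "C"], "MAE": [2.1, 2.2, 4.5]})
df3 = pd.DataFrame({"Models": ["A", "B", "C"], "MAE": [2.4, 2.2, 5.0]})

# Step 1: one ranking per user - model names sorted by the chosen metric,
# joined into a single string so identical orderings compare equal
rankings = [" > ".join(df.sort_values("MAE")["Models"]) for df in (df1, df2, df3)]

# Step 3: absolute counts of identical orderings, then relative frequency
out = pd.Series(rankings).value_counts().rename("counts").to_frame()
out["freq(%)"] = (out["counts"] / out["counts"].sum() * 100).round(1)
print(out)
```

For step 2, building the Series with an index of user names (`pd.Series(rankings, index=users)`) keeps track of which user produced which ordering before the `value_counts` aggregation.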
<python><pandas><dataframe><frequency><tf-idf>
2024-07-24 13:47:36
2
2,056
Mario
78,788,714
21,540,734
selenium.webdriver.Firefox with FirefoxOptions().add_argument('--headless') doesn't return a valid hwnd
<p>I've noticed that the headless option in Firefox runs Firefox in the background without a window attached to it, and I haven't been able to find a way to run Firefox in the background and still have the hwnd of the Firefox window to be able to use.</p> <p>I started out using <code>pyvda</code> to get an <code>AppView</code> of Firefox, but <code>pyvda.get_apps_by_z_order</code> didn't return anything, so I wrote this and was able to find two hwnds, but neither one of them has a visible window. I was able to get one of them to show up on my taskbar without the conditional if statement.</p> <h4>Update...</h4> <p>Using what is being shown in the Task Manager, I was able to figure out that these two hwnds being shown in the code below are child processes of Firefox with the same pid.</p> <pre class="lang-py prettyprint-override"><code>from inspect import currentframe
from typing import Optional

from psutil import Process
from selenium.webdriver import Firefox, FirefoxOptions
from win32con import SW_SHOW
from win32gui import EnumWindows, GetWindowText, ShowWindow, IsWindowVisible
from win32process import GetWindowThreadProcessId as GetPID


def main():
    def get_child_pid(process: Optional[Process] = None) -&gt; Process | int:
        nonlocal firefox
        if process is None:
            process = Process(pid = firefox.service.process.pid)
        for child in process.children():
            if child.name() == 'firefox.exe':
                if len(child.children()):
                    process = get_child_pid(process = child)
                else:
                    break
        if currentframe().f_back.f_code.co_name == 'main':
            return process.pid
        return process

    def enum_windows(hwnd: int, pid: int):
        if GetPID(hwnd)[1] == pid and len(GetWindowText(hwnd)):
            print(GetWindowText(hwnd), (visible := bool(IsWindowVisible(hwnd))), sep = ': ')
            if visible:
                ShowWindow(hwnd, SW_SHOW)

    options = FirefoxOptions()
    options.add_argument('--headless')
    firefox = Firefox(options = options)

    EnumWindows(enum_windows, get_child_pid())
    # firefox.quit()


if __name__ == '__main__':
    main()
</code></pre> <p>I need to find a way to be able to load Firefox in the background with a valid window that I can get the hwnd from.</p>
<python><selenium-webdriver><firefox><headless>
2024-07-24 13:40:05
0
425
phpjunkie
78,788,573
525,865
trying to apply a bs4-approach to wikipedia-page: results do not store in a df
<p>Due to the fact that scraping Wikipedia is a very common technique - where we can use an appropriate approach to work with many different jobs - I had some issues with getting back the results and storing them in a df.</p> <p>Well, as an example of a very common Wikipedia-bs4 job, we can take this one:</p> <p>On this page we have more than 600 results - in sub-pages: url = &quot;https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland&quot;</p> <p>So, for a first experimental script, I proceed like so: first I scrape the table from the Wikipedia page, and afterwards I convert it into a Pandas DataFrame.</p> <p>Therefore I first install the necessary packages. Make sure you have requests, beautifulsoup4, and pandas installed. You can install them using pip if you haven't already:</p> <pre><code>pip install requests beautifulsoup4 pandas
</code></pre> <p>And then I follow with the script:</p> <pre><code>import requests
from bs4 import BeautifulSoup
import pandas as pd

# URL of the Wikipedia page
url = &quot;https://de.wikipedia.org/wiki/Liste_der_Genossenschaftsbanken_in_Deutschland&quot;

# Send a GET request to the URL
response = requests.get(url)

# Parse the HTML content of the page with BeautifulSoup
soup = BeautifulSoup(response.content, 'html.parser')

# Find the first table in the page
table = soup.find('table', {'class': 'wikitable'})

# Initialize an empty list to store the data
data = []

# Iterate over the rows of the table
for row in table.find_all('tr'):
    # Get the columns in each row
    cols = row.find_all('td')
    # If there are columns in the row, get the text from each column and store it in the data list
    if cols:
        data.append([col.get_text(strip=True) for col in cols])

# Convert the data list to a Pandas DataFrame
df = pd.DataFrame(data, columns=[&quot;Bank Name&quot;, &quot;Location&quot;, &quot;Website&quot;])

# Display the DataFrame
print(df)

# Optionally, save the DataFrame to a CSV file
df.to_csv('genossenschaftsbanken.csv', index=False)
</code></pre> <p>See what I have got back:</p> <pre><code># Display the DataFrame
print(df)

# Optionally, save the DataFrame to a CSV file
df.to_csv('genossenschaftsbanken.csv', index=False)

  Bank Name                                           Location  \
0      BWGV  Baden-Württembergischer Genossenschaftsverband...
1       GVB                 Genossenschaftsverband Bayerne. V.
2        GV                                   Genoverbande. V.
3      GVWE              Genossenschaftsverband Weser-Emse. V.
4       FGV                Freier Genossenschaftsverband e. V.
5       PDG     PDG Genossenschaftlicher Prüfungsverband e. V.
6                           Verband der Sparda-Banken e. V.
7                              Verband der PSD Banken e. V.

             Website
0          Karlsruhe
1            München
2  Frankfurt am Main
3          Oldenburg
4         Düsseldorf
5             Erfurt
6  Frankfurt am Main
7               Bonn
</code></pre> <p>Well, I guess that I have to re-write the end of the script...</p> <p>Update: what is aimed for is to get the data out of the info block - see for an example:</p> <p><a href="https://de.wikipedia.org/wiki/Abtsgm%C3%BCnder_Bank" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/Abtsgm%C3%BCnder_Bank</a></p> <p><a href="https://i.sstatic.net/irVEnpj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/irVEnpj8.png" alt="enter image description here" /></a></p>
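The column mix-up in the output suggests the first `wikitable` on that page does not have the three columns the script hard-codes. A sketch that derives the column names from the `<th>` cells instead of assuming them (the inline HTML below is a made-up stand-in for the real page, so the header names here are illustrative):

```python
import pandas as pd
from bs4 import BeautifulSoup

# Tiny inline stand-in for the Wikipedia table (assumed 'wikitable'
# structure: one header row of <th>, then data rows of <td>)
html = """
<table class="wikitable">
  <tr><th>Kürzel</th><th>Verband</th><th>Sitz</th></tr>
  <tr><td>BWGV</td><td>Baden-Württembergischer Genossenschaftsverband</td><td>Karlsruhe</td></tr>
  <tr><td>GVB</td><td>Genossenschaftsverband Bayern</td><td>München</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
table = soup.find("table", {"class": "wikitable"})

# Take the column names from the <th> cells, so the DataFrame matches
# whatever columns the table actually contains
headers = [th.get_text(strip=True) for th in table.find_all("th")]
rows = [
    [td.get_text(strip=True) for td in tr.find_all("td")]
    for tr in table.find_all("tr")
    if tr.find_all("td")
]
df = pd.DataFrame(rows, columns=headers)
print(df)
```

The same dynamic-header idea applied to the real page avoids guessing which of &quot;Bank Name&quot;, &quot;Location&quot;, &quot;Website&quot; exists there at all.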
<python><pandas><web-scraping><beautifulsoup>
2024-07-24 13:11:27
1
1,223
zero
78,788,533
13,491,504
Preventing the Gibbs phenomenon on a reverse FFT
<p>I am currently filtering some data and ran into trouble when filtering smaller frequencies out of a large trend. The inverse FFTs seem to have large spikes at the beginning and the end. Here is the data before and after filtering smaller frequencies.<a href="https://i.sstatic.net/LjEbj4dr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LjEbj4dr.png" alt="Before filtering" /></a><a href="https://i.sstatic.net/9QAr4aJK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QAr4aJK.png" alt="After Filtering" /></a></p> <p>I have looked into the mathematical phenomenon, and it is called the Gibbs phenomenon. Is there a way around this to clear the data of some overlying frequencies without getting this effect? Or is there at least a workaround to keep the spikes as small as possible?</p> <p>Here is the code, BTW:</p> <pre><code>fourier_transformation = np.fft.fft(Sensor_4)
frequencies = np.fft.fftfreq(len(time), d=1/Values_per_second)
fourier_transformation[np.abs(frequencies) &gt; 0.18] = 0
Sensor_4 = np.fft.ifft(fourier_transformation)
</code></pre>
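A common workaround is to remove the trend before the FFT step and add it back afterwards, so the periodic extension the FFT assumes no longer has a big jump at the record boundaries. A sketch with a synthetic signal (the sample rate, cutoff, and trend below are made up, not the sensor data from the question):

```python
import numpy as np

fs = 100.0                          # assumed sample rate [Hz]
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic stand-in for Sensor_4: linear trend + slow oscillation + noise
signal = 0.5 * t + np.sin(2 * np.pi * 0.1 * t) + 0.1 * rng.standard_normal(t.size)

# Fit and subtract a linear trend so the record's endpoints (nearly) match
coeffs = np.polyfit(t, signal, 1)
trend = np.polyval(coeffs, t)

spectrum = np.fft.fft(signal - trend)
freqs = np.fft.fftfreq(t.size, d=1 / fs)
spectrum[np.abs(freqs) > 0.18] = 0             # same low-pass idea as in the question
filtered = np.fft.ifft(spectrum).real + trend  # re-add the trend afterwards
```

`scipy.signal.detrend` does the same subtraction in one call; applying a window (e.g. a Hann taper) before the FFT is the other standard remedy when the trend must stay in the data.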
<python><numpy><fft>
2024-07-24 13:03:58
1
637
Mo711
78,788,454
511,302
Can I get the related data directly when using a join query in django?
<p>Well, considering two models:</p> <pre><code>class School(models.Model):
    name = TextField()

class Student(models.Model):
    school = ForeignKey(School, related_name='students', on_delete=models.CASCADE)
    firstname = TextField()
</code></pre> <p>And the query:</p> <pre><code>School.objects.filter(Q(name=&quot;oldschool&quot;) &amp; Q(
    Q(students__firstname=&quot;hello&quot;) |
    Q(students__firstname=&quot;testname&quot;)
))
</code></pre> <p>I retrieve the schools. However, although a join/subquery is obviously executed, I do not get the student information. I also wish to get the student information for which the &quot;first name is set&quot;.</p> <p>Can I make the Django ORM actually fill in a <code>students_set</code>, so I do not have to do multiple lookups later (having to iterate over the schools and check per school)?</p>
<python><django><orm>
2024-07-24 12:48:14
1
9,627
paul23
78,788,425
525,865
Python-Scraper with BS4 and Selenium : Session-Issues with chrome
<p>I am trying to grab the list of all the banks that are located here on this page <a href="http://www.banken.de/inhalt/banken/finanzdienstleister-banken-nach-laendern-deutschland/1" rel="nofollow noreferrer">http://www.banken.de/inhalt/banken/finanzdienstleister-banken-nach-laendern-deutschland/1</a> <strong>note</strong> we've got 617 results.</p> <p>My approach: go and find those results - incl. website - with the use of Python, BeautifulSoup, and Selenium's webdriver.</p> <pre><code>from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd

# URL of the webpage
url = &quot;http://www.banken.de/inhalt/banken/finanzdienstleister-banken-nach-laendern-deutschland/1&quot;

# Start a Selenium WebDriver session (assuming Chrome here)
driver = webdriver.Chrome()  # Change this to the appropriate WebDriver if using a different browser

# Load the webpage
driver.get(url)

# Wait for the page to load (adjust the waiting time as needed)
driver.implicitly_wait(10)  # Wait for 10 seconds for elements to appear

# Get the page source after waiting
html = driver.page_source

# Parse the HTML content
soup = BeautifulSoup(html, &quot;html.parser&quot;)

# Find the table containing the bank data
table = soup.find(&quot;table&quot;, {&quot;class&quot;: &quot;wikitable&quot;})

# Initialize lists to store data
banks = []
headquarters = []

# Extract data from the table
for row in table.find_all(&quot;tr&quot;)[1:]:
    cols = row.find_all(&quot;td&quot;)
    banks.append(cols[0].text.strip())
    headquarters.append(cols[1].text.strip())

# Create a DataFrame using pandas
bank_data = pd.DataFrame({&quot;Bank&quot;: banks, &quot;Headquarters&quot;: headquarters})

# Print the DataFrame
print(bank_data)

# Close the WebDriver session
driver.quit()
</code></pre> <p>This gives back (in my Colab):</p> <pre><code>SessionNotCreatedException                Traceback (most recent call last)
&lt;ipython-input-6-ccf3a634071d&gt; in &lt;cell line: 9&gt;()
      7
      8 # Start a Selenium WebDriver session (assuming Chrome here)
----&gt; 9 driver = webdriver.Chrome() # Change this to the appropriate WebDriver if using a different browser
     10
     11 # Load the webpage

5 frames
/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py in check_response(self, response)
    227             alert_text = value[&quot;alert&quot;].get(&quot;text&quot;)
    228         raise exception_class(message, screen, stacktrace, alert_text)  # type: ignore[call-arg]  # mypy is not smart enough here
--&gt; 229         raise exception_class(message, screen, stacktrace)

SessionNotCreatedException: Message: session not created: Chrome failed to start: exited normally.
(session not created: DevToolsActivePort file doesn't exist)
(The process started from chrome location /root/.cache/selenium/chrome/linux64/124.0.6367.201/chrome is no longer running, so ChromeDriver is assuming that Chrome has crashed.)
Stacktrace:
#0 0x5850d85e1e43 &lt;unknown&gt;
#1 0x5850d82d04e7 &lt;unknown&gt;
#2 0x5850d8304a66 &lt;unknown&gt;
#3 0x5850d83009c0 &lt;unknown&gt;
#4 0x5850d83497f0 &lt;unknown&gt;
</code></pre> <p>Well, I think I have to take care here - on Colab, not every Selenium setup will run flawlessly.</p>
<python><google-chrome><selenium-webdriver><beautifulsoup>
2024-07-24 12:44:01
2
1,223
zero
78,788,298
1,667,895
I can't get pip on my python3 venv on Ubuntu 24.04
<p>I have a fresh install of Ubuntu 24.04. I added pip with the usual <code>sudo apt install python3-pip</code>, and went to install some packages, but got the error: externally-managed-environment. This was all quite new to me, so I did some reading about <code>venv</code>. Using PyCharm, I attached a virtual environment to my Python project at <code>/home/colin/Dropbox/codebase/Python</code>. All seems to be going well so far. I opened up a terminal and ran:</p> <pre><code>source /home/colin/Dropbox/codebase/Python/venv/bin/activate </code></pre> <p>and now I'm in the virtual environment. Next I try to install a package with either:</p> <pre><code>pip install requests python3 -m pip install requests </code></pre> <p>In both cases I get <code>ModuleNotFoundError: No module named 'pip'</code>. Okay, fair enough, my previous install of pip was for the system, and isn't visible in the virtual environment. So I try to install pip while in the venv:</p> <pre><code>sudo apt install python3-pip </code></pre> <p>and get:</p> <pre><code>python3-pip is already the newest version (24.0+dfsg-1ubuntu1). 0 upgraded, 0 newly installed, 0 to remove and 16 not upgraded. </code></pre> <p>Huh. Okay. I'm still in the venv, and try:</p> <pre><code>which pip </code></pre> <p>and it returns:</p> <pre><code>/home/colin/Dropbox/Codebase/Python/venv/bin/pip </code></pre> <p>So what am I doing wrong here? Why can't I install packages with pip in my venv? It might be worth adding that I was following instructions from <a href="https://packaging.python.org/en/latest/guides/installing-using-pip-and-virtual-environments/" rel="nofollow noreferrer">here</a> for all this, but it doesn't mention this issue.</p>
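The usual cause here is a venv created without a working `ensurepip` run: the `pip` launcher file exists in `venv/bin`, but the `pip` module itself is missing from the environment, and `sudo apt install python3-pip` only touches the system interpreter, never the venv. Recreating the environment with pip bundled in fixes it - shown here programmatically with a throwaway directory (the same thing `python3 -m venv` does with `--clear`):

```python
import os
import subprocess
import tempfile
import venv

# Create a fresh venv; with_pip=True runs ensurepip inside the new interpreter
target = os.path.join(tempfile.mkdtemp(), "venv")
venv.create(target, with_pip=True)

bindir = "Scripts" if os.name == "nt" else "bin"
vpython = os.path.join(target, bindir, "python")
out = subprocess.run([vpython, "-m", "pip", "--version"],
                     capture_output=True, text=True)
print(out.stdout.strip())
```

From the shell, the equivalents are recreating the venv (after `sudo apt install python3-venv` on Ubuntu) or repairing the existing one with `venv/bin/python -m ensurepip --upgrade`.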
<python><ubuntu><pip><python-venv>
2024-07-24 12:17:38
1
18,600
Colin T Bowers
78,788,192
4,505,998
Read CSV with epoch timestamp column as timestamp
<p>I'm using <code>pyarrow.csv</code> to read and convert a CSV file to parquet. This CSV file has a <code>timestamp</code> column with an int representing Unix time.</p> <p>Nevertheless, it reads it as an int64, and if I try to use <code>ConvertOptions</code>, it raises an error:</p> <pre class="lang-py prettyprint-override"><code>import pyarrow as pa
import pyarrow.csv as pv

table = pv.read_csv(&quot;file.csv&quot;, convert_options=pv.ConvertOptions(
    column_types={
        'timestamp': pa.timestamp('s'),
    }
))
</code></pre> <p>This raises the following error:</p> <pre><code>ArrowInvalid: In CSV column #1: CSV conversion error to timestamp[s]: invalid value '1705173443'
</code></pre>
<python><csv><pyarrow>
2024-07-24 11:57:31
1
813
David Davó
78,788,090
20,920,790
Why is my API request with httpx not working?
<p>I tested my request to the API with postman.co.</p> <p>But this request does not work when I try to run it with the httpx library for Python. Here is my request.</p> <p><a href="https://i.sstatic.net/f5Pe0is6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5Pe0is6.png" alt="enter image description here" /></a> Params:</p> <p><a href="https://i.sstatic.net/eAxni9kv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eAxni9kv.png" alt="enter image description here" /></a></p> <p>Full Python code:</p> <pre><code>import httpx
import ujson

headers = {
    'Accept': 'application/vnd.yclients.v2+json',
    'Content-Type': 'application/json',
    'Authorization': f'Bearer {bearer_key}, User {user_key}'
}

params = {
    &quot;staff_id&quot;: staff_id,
    &quot;services&quot;: [
        {
            &quot;id&quot;: service_id
        }
    ],
    &quot;client&quot;: {
        &quot;name&quot;: &quot;client_name&quot;,
        &quot;phone&quot;: &quot;**********111&quot;
    },
    &quot;datetime&quot;: &quot;2023-07-04T09:00:00+03:00&quot;,
    &quot;seance_length&quot;: 600
}

company_id = {company_id}  # hidden id
record_id = {record_id}  # hidden id
url = 'https://api_url.com/api/{}/{}'.format(company_id, record_id)

session = httpx.Client()
response = session.put(url, headers=headers, params=params)
first_request = ujson.loads(response.text)
</code></pre> <p>For my code I get this error:</p> <pre><code>{'success': False, 'data': None, 'meta': {'message': 'An error has occurred', 'errors': {'id': ['The required id parameter was not passed.']}}}
</code></pre> <p>Why does a request that was tested and is 100% right run with an error?</p>
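A likely culprit: `params=` serialises the data into the URL query string, while a PUT endpoint like this normally expects a JSON body - in httpx that would be `session.put(url, headers=headers, json=params)`. The difference between the two, illustrated with the stdlib only and a hypothetical flat payload/URL:

```python
import json
import urllib.parse
import urllib.request

payload = {"staff_id": 123, "seance_length": 600}

# What params= does: the payload ends up in the query string of the URL
as_query = "https://example.test/api/1/2?" + urllib.parse.urlencode(payload)
print(as_query)

# What a JSON API wants: the payload travels as the request body
req = urllib.request.Request(
    "https://example.test/api/1/2",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
print(req.get_method(), req.data.decode())
```

Postman's "Body → raw → JSON" mode corresponds to the second form, which is why the same data works there but not via `params=`.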
<python><httpx>
2024-07-24 11:36:20
1
402
John Doe
78,788,006
2,446,374
is there a better way to share common code across a set of python programs in a repository
<p>I pick python when I want to do a number of different things quickly and easily - ie I always end up with a number of python &quot;programs&quot; - eg a set of scripts - or if I'm playing with something, a bunch of test programs etc - ie always a loose collection of lots of different programs.</p> <p>however, there are certain things I'll share. For instance, if I'm screwing around with AI - I might have 30 or so completely unrelated programs - but they'll all call out to an LLM.</p> <h3>Directory Structure</h3> <p>So the directory structure I always want in Git is like:</p> <pre><code>. └── src/ ├── common ├── program1 ├── program2 └── program3 </code></pre> <h3>Not wanted: explicit setup</h3> <p>but I haven't found any good way of making program1, program2, program3 reference common in a way that any ide or linter will understand it, straight out of git. I know I can do things like set python paths etc - but I want someone to just be able to clone the library - change into program1 and run python program1.py and it just works - so anything that requires running/setting/configuring anything is just not an option to me.</p> <h3>Wanted: relative import</h3> <p>what I really want is for each program to just do:</p> <pre class="lang-py prettyprint-override"><code> from ..common import some_helper </code></pre> <p>but when I try that I of course get</p> <pre><code>Error: attempted relative import with no known parent package </code></pre> <p>now... if I do this:</p> <pre class="lang-py prettyprint-override"><code>import sys import os parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) sys.path.insert(0, parent_dir) </code></pre> <p>most things actually work, but it's like 4 obscure lines I have to add to every python program - and some editors or linters don't find it.</p> <p>these programs really have nothing to do with each other, beyond being part of the same repo, so any shared parentage would be pretty wrong. 
They also usually have a bunch of other files, data files, readmes etc - so just putting everything in the same directory above common is too messy.</p> <h3>Alternative I tried: symlinks</h3> <p>The only really practical solution I've found so far is to just symlink the common directory into all the other directories - it's ugly - but it does work.</p> <h3>Question</h3> <p>Is there a better way - a more pythonic way?</p> <p>I guess fundamentally - I want to know how to do a relative import without a known parent package?</p>
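A middle ground between the 4-line shim and symlinks is a two-line bootstrap per program that anchors `sys.path` at the repo's `src` directory; since `common/` can be a namespace package, no `__init__.py` is needed. Self-contained demo that builds the `src/common` + `src/program1` layout in a temp directory (the file and function names are made up for illustration):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "src", "common"))
os.makedirs(os.path.join(root, "src", "program1"))

# Hypothetical shared helper living in src/common/
with open(os.path.join(root, "src", "common", "helper.py"), "w") as f:
    f.write("def greet():\n    return 'hello from common'\n")

# program1 carries only two bootstrap lines before its normal imports
with open(os.path.join(root, "src", "program1", "program1.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import sys, pathlib
        sys.path.insert(0, str(pathlib.Path(__file__).resolve().parents[1]))

        from common.helper import greet
        print(greet())
    """))

# `python program1.py` now works straight out of a clone
result = subprocess.run(
    [sys.executable, os.path.join(root, "src", "program1", "program1.py")],
    capture_output=True, text=True,
)
print(result.stdout.strip())   # hello from common
```

The other conventionally "pythonic" route is a minimal `pyproject.toml` at the repo root plus one `pip install -e .`, but that requires a setup step and so conflicts with the clone-and-run constraint.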
<python><python-import>
2024-07-24 11:18:48
2
3,724
Darren Oakey
78,787,980
13,086,128
ValueError: Unable to determine which files to ship inside the wheel using the following heuristics:
<p><strong>MRE</strong></p> <p>Python 3.9</p> <p>Windows OS</p> <p>On running the below command:</p> <pre><code>pip install kaggle </code></pre> <p>Complete error message:</p> <pre><code>Defaulting to user installation because normal site-packages is not writeable Collecting kaggle Using cached kaggle-1.6.15.tar.gz (9.1 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'error' WARNING: Ignoring invalid distribution -atplotlib (c:\users\navigat\appdata\roaming\python\python39\site-packages) WARNING: Ignoring invalid distribution -atplotlib (c:\users\navigat\appdata\roaming\python\python39\site-packages) error: subprocess-exited-with-error Preparing metadata (pyproject.toml) did not run successfully. exit code: 1 [35 lines of output] Traceback (most recent call last): File &quot;C:\Users\navigat\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;C:\Users\navigat\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;C:\Users\navigat\AppData\Roaming\Python\Python39\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py&quot;, line 152, in prepare_metadata_for_build_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\build.py&quot;, line 58, in build_wheel return os.path.basename(next(builder.build(directory=wheel_directory, versions=['standard']))) File 
&quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\plugin\interface.py&quot;, line 155, in build artifact = version_api[version](directory, **build_data) File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\wheel.py&quot;, line 475, in build_standard for included_file in self.recurse_included_files(): File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\plugin\interface.py&quot;, line 176, in recurse_included_files yield from self.recurse_selected_project_files() File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\plugin\interface.py&quot;, line 180, in recurse_selected_project_files if self.config.only_include: File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\config.py&quot;, line 806, in only_include only_include = only_include_config.get('only-include', self.default_only_include()) or self.packages File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\wheel.py&quot;, line 260, in default_only_include return self.default_file_selection_options.only_include File &quot;c:\program files\python39\lib\functools.py&quot;, line 969, in __get__ val = self.func(instance) File &quot;C:\Users\navigat\AppData\Local\Temp\pip-build-env-oc6jw0q8\overlay\Lib\site-packages\hatchling\builders\wheel.py&quot;, line 248, in default_file_selection_options raise ValueError(message) ValueError: Unable to determine which files to ship inside the wheel using the following heuristics: https://hatch.pypa.io/latest/plugins/builder/wheel/#default-file-selection The most likely cause of this is that there is no directory that matches the name of your project (kaggle). 
At least one file selection option must be defined in the `tool.hatch.build.targets.wheel` table, see: https://hatch.pypa.io/latest/config/build/ As an example, if you intend to ship a directory named `foo` that resides within a `src` directory located at the root of your project, you can define the following: [tool.hatch.build.targets.wheel] packages = [&quot;src/foo&quot;] [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed Encountered error while generating package metadata. See above for output. note: This is an issue with the package mentioned above, not pip. hint: See above for details. </code></pre> <p>Any idea what's going on?</p>
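For context on what the hatchling message is asking for: the sdist being built contains no directory matching the project name (`kaggle`), so the wheel builder cannot guess which files to package. The stanza the error output itself suggests looks like this in `pyproject.toml` (the `src/foo` path is the error message's own illustrative example, not a real layout of the kaggle package):

```toml
# Quoted from the error output's suggestion -- a package author would point
# this at the actual package directory inside the sdist.
[tool.hatch.build.targets.wheel]
packages = ["src/foo"]
```

Since this is a packaging bug in the uploaded sdist rather than something an end user can patch, the workarounds reportedly used at the time were pinning a different release (e.g. `pip install kaggle==1.6.14`) or installing a release that ships a prebuilt wheel — both are hearsay from the surrounding discussion, not verified here.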
<python><python-3.x><pip><kaggle>
2024-07-24 11:11:29
2
30,560
Talha Tayyab
78,787,609
10,861,616
Getting response.status and response.url from the site visited
<p>I am trying to get the <code>page.url</code>, <code>response.url</code> and <code>response.status</code> from the websites. This is what I am trying:</p> <pre><code>from playwright.sync_api import sync_playwright

def scrape_page(url):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        outs = {&quot;url&quot;: page.url, &quot;res_url&quot;: response.url, &quot;res_status&quot;: response.status}
        browser.close()
    return outs
</code></pre> <p>Individually, <code>&quot;url&quot;: page.url</code> works. But if I add <code>&quot;res_url&quot;: response.url, &quot;res_status&quot;: response.status</code>, it gives me the following error:</p> <pre><code>&gt;&gt;&gt; scrape_page(&quot;https://theguardian.co.uk&quot;)
Traceback (most recent call last):
  File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt;
  File &quot;&lt;stdin&gt;&quot;, line 7, in scrape_page
NameError: name 'response' is not defined
</code></pre> <p>Any idea how to fix the mistake?</p>
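For reference, in Playwright's sync API `page.goto()` returns the `Response` object for the main resource, which is exactly the name missing in the snippet above. A minimal sketch of the fix (the dict-building is factored into a plain helper so it can be exercised without a browser; `build_result` is an illustrative name, not a Playwright API):

```python
def build_result(page_url, response):
    # Pure helper: shape the output dict from the page URL and any
    # response-like object exposing .url and .status attributes.
    return {"url": page_url, "res_url": response.url, "res_status": response.status}


def scrape_page(url):
    # page.goto() returns the Response -- capturing that return value
    # is what the original snippet was missing.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        response = page.goto(url)
        outs = build_result(page.url, response)
        browser.close()
    return outs
```

Note `response` can be `None` for some navigations (e.g. in-page anchor navigation), so production code may want a guard before reading `.status`.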
<python><playwright><playwright-python>
2024-07-24 09:56:30
1
662
JontroPothon
78,787,534
1,417,053
Converting a PyTorch ONNX model to TensorRT engine - Jetson Orin Nano
<p>I'm trying to convert a <code>ViT-B/32</code> Vision Transformer model from the <a href="https://github.com/deepglint/unicom" rel="nofollow noreferrer">UNICOM</a> repository to a TensorRT engine on a Jetson Orin Nano. The model's Vision Transformer class and source code are <a href="https://github.com/deepglint/unicom/blob/main/unicom/vision_transformer.py" rel="nofollow noreferrer">here</a>.</p> <p>I use the following code to convert the model to ONNX:</p> <pre><code>import torch
import onnx
import onnxruntime

from unicom.vision_transformer import build_model

if __name__ == '__main__':
    model_name = &quot;ViT-B/32&quot;
    model_name_fp16 = &quot;FP16-ViT-B-32&quot;
    onnx_model_path = f&quot;{model_name_fp16}.onnx&quot;

    model = build_model(model_name)
    model.eval()
    model = model.to('cuda')

    torch_input = torch.randn(1, 3, 224, 224).to('cuda')
    onnx_program = torch.onnx.dynamo_export(model, torch_input)
    onnx_program.save(onnx_model_path)

    onnx_model = onnx.load(onnx_model_path)
    onnx.checker.check_model(onnx_model_path)
</code></pre> <p>I then use the following command line to convert the ONNX model to a TensorRT engine:</p> <p><code>/usr/src/tensorrt/bin/trtexec --onnx=FP16-ViT-B-32.onnx --saveEngine=FP16-ViT-B-32.trt --workspace=1024 --fp16</code></p> <p>This results in the following error:</p> <pre><code>--workspace flag has been deprecated by --memPoolSize flag. 
=== Model Options === Format: ONNX Model: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx Output: === Build Options === Max batch: explicit batch Memory Pools: workspace: 1024 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default minTiming: 1 avgTiming: 8 Precision: FP32+FP16 LayerPrecisions: Layer Device Types: Calibration: Refit: Disabled Version Compatible: Disabled ONNX Native InstanceNorm: Disabled TensorRT runtime: full Lean DLL Path: Tempfile Controls: { in_memory: allow, temporary: allow } Exclude Lean Runtime: Disabled Sparsity: Disabled Safe mode: Disabled Build DLA standalone loadable: Disabled Allow GPU fallback for DLA: Disabled DirectIO mode: Disabled Restricted mode: Disabled Skip inference: Disabled Save engine: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.trt Load engine: Profiling verbosity: 0 Tactic sources: Using default tactic sources timingCacheMode: local timingCacheFile: Heuristic: Disabled Preview Features: Use default preview flags. 
MaxAuxStreams: -1 BuilderOptimizationLevel: -1 Input(s)s format: fp32:CHW Output(s)s format: fp32:CHW Input build shapes: model Input calibration shapes: model === System Options === Device: 0 DLACore: Plugins: setPluginsToSerialize: dynamicPlugins: ignoreParsedPluginLibs: 0 === Inference Options === Batch: Explicit Input inference shapes: model Iterations: 10 Duration: 3s (+ 200ms warm up) Sleep time: 0ms Idle time: 0ms Inference Streams: 1 ExposeDMA: Disabled Data transfers: Enabled Spin-wait: Disabled Multithreading: Disabled CUDA Graph: Disabled Separate profiling: Disabled Time Deserialize: Disabled Time Refit: Disabled NVTX verbosity: 0 Persistent Cache Ratio: 0 Inputs: === Reporting Options === Verbose: Disabled Averages: 10 inferences Percentiles: 90,95,99 Dump refittable layers:Disabled Dump output: Disabled Profile: Disabled Export timing to JSON file: Export output to JSON file: Export profile to JSON file: === Device Information === Selected Device: Orin Compute Capability: 8.7 SMs: 8 Device Global Memory: 7620 MiB Shared Memory per SM: 164 KiB Memory Bus Width: 128 bits (ECC disabled) Application Compute Clock Rate: 0.624 GHz Application Memory Clock Rate: 0.624 GHz Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at. TensorRT version: 8.6.2 Loading standard plugins [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 33, GPU 4508 (MiB) [MemUsageChange] Init builder kernel library: CPU +1154, GPU +1351, now: CPU 1223, GPU 5866 (MiB) Start parsing network model. ---------------------------------------------------------------- Input filename: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx ONNX IR version: 0.0.8 Opset version: 1 Producer name: pytorch Producer version: 2.3.0 Domain: Model version: 0 Doc string: ---------------------------------------------------------------- No importer registered for op: unicom_vision_transformer_PatchEmbedding_patch_embed_1. 
Attempting to import as plugin. Searching for plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1, plugin_version: 1, plugin_namespace: 3: getPluginCreator could not find plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1 version: 1 ModelImporter.cpp:768: While parsing node number 0 [unicom_vision_transformer_PatchEmbedding_patch_embed_1 -&gt; &quot;patch_embed_1&quot;]: ModelImporter.cpp:769: --- Begin node --- ModelImporter.cpp:770: input: &quot;l_x_&quot; --workspace flag has been deprecated by --memPoolSize flag. === Model Options === Format: ONNX Model: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx Output: === Build Options === Max batch: explicit batch Memory Pools: workspace: 1024 MiB, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default minTiming: 1 avgTiming: 8 Precision: FP32+FP16 LayerPrecisions: Layer Device Types: Calibration: Refit: Disabled Version Compatible: Disabled ONNX Native InstanceNorm: Disabled TensorRT runtime: full Lean DLL Path: Tempfile Controls: { in_memory: allow, temporary: allow } Exclude Lean Runtime: Disabled Sparsity: Disabled Safe mode: Disabled Build DLA standalone loadable: Disabled Allow GPU fallback for DLA: Disabled DirectIO mode: Disabled Restricted mode: Disabled Skip inference: Disabled Save engine: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.trt Load engine: Profiling verbosity: 0 Tactic sources: Using default tactic sources timingCacheMode: local timingCacheFile: Heuristic: Disabled Preview Features: Use default preview flags. 
MaxAuxStreams: -1 BuilderOptimizationLevel: -1 Input(s)s format: fp32:CHW Output(s)s format: fp32:CHW Input build shapes: model Input calibration shapes: model === System Options === Device: 0 DLACore: Plugins: setPluginsToSerialize: dynamicPlugins: ignoreParsedPluginLibs: 0 === Inference Options === Batch: Explicit Input inference shapes: model Iterations: 10 Duration: 3s (+ 200ms warm up) Sleep time: 0ms Idle time: 0ms Inference Streams: 1 ExposeDMA: Disabled Data transfers: Enabled Spin-wait: Disabled Multithreading: Disabled CUDA Graph: Disabled Separate profiling: Disabled Time Deserialize: Disabled Time Refit: Disabled NVTX verbosity: 0 Persistent Cache Ratio: 0 Inputs: === Reporting Options === Verbose: Enabled Averages: 10 inferences Percentiles: 90,95,99 Dump refittable layers:Disabled Dump output: Disabled Profile: Disabled Export timing to JSON file: Export output to JSON file: Export profile to JSON file: === Device Information === Selected Device: Orin Compute Capability: 8.7 SMs: 8 Device Global Memory: 7620 MiB Shared Memory per SM: 164 KiB Memory Bus Width: 128 bits (ECC disabled) Application Compute Clock Rate: 0.624 GHz Application Memory Clock Rate: 0.624 GHz Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at. 
TensorRT version: 8.6.2 Loading standard plugins Registered plugin - ::BatchedNMSDynamic_TRT version 1 Registered plugin - ::BatchedNMS_TRT version 1 Registered plugin - ::BatchTilePlugin_TRT version 1 Registered plugin - ::Clip_TRT version 1 Registered plugin - ::CoordConvAC version 1 Registered plugin - ::CropAndResizeDynamic version 1 Registered plugin - ::CropAndResize version 1 Registered plugin - ::DecodeBbox3DPlugin version 1 Registered plugin - ::DetectionLayer_TRT version 1 Registered plugin - ::EfficientNMS_Explicit_TF_TRT version 1 Registered plugin - ::EfficientNMS_Implicit_TF_TRT version 1 Registered plugin - ::EfficientNMS_ONNX_TRT version 1 Registered plugin - ::EfficientNMS_TRT version 1 Registered plugin - ::FlattenConcat_TRT version 1 Registered plugin - ::GenerateDetection_TRT version 1 Registered plugin - ::GridAnchor_TRT version 1 Registered plugin - ::GridAnchorRect_TRT version 1 Registered plugin - ::InstanceNormalization_TRT version 1 Registered plugin - ::InstanceNormalization_TRT version 2 Registered plugin - ::LReLU_TRT version 1 Registered plugin - ::ModulatedDeformConv2d version 1 Registered plugin - ::MultilevelCropAndResize_TRT version 1 Registered plugin - ::MultilevelProposeROI_TRT version 1 Registered plugin - ::MultiscaleDeformableAttnPlugin_TRT version 1 Registered plugin - ::NMSDynamic_TRT version 1 Registered plugin - ::NMS_TRT version 1 Registered plugin - ::Normalize_TRT version 1 Registered plugin - ::PillarScatterPlugin version 1 Registered plugin - ::PriorBox_TRT version 1 Registered plugin - ::ProposalDynamic version 1 Registered plugin - ::ProposalLayer_TRT version 1 Registered plugin - ::Proposal version 1 Registered plugin - ::PyramidROIAlign_TRT version 1 Registered plugin - ::Region_TRT version 1 Registered plugin - ::Reorg_TRT version 1 Registered plugin - ::ResizeNearest_TRT version 1 Registered plugin - ::ROIAlign_TRT version 1 Registered plugin - ::RPROI_TRT version 1 Registered plugin - ::ScatterND version 1 
Registered plugin - ::SpecialSlice_TRT version 1 Registered plugin - ::Split version 1 Registered plugin - ::VoxelGeneratorPlugin version 1 [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 33, GPU 5167 (MiB) Trying to load shared library libnvinfer_builder_resource.so.8.6.2 Loaded shared library libnvinfer_builder_resource.so.8.6.2 [MemUsageChange] Init builder kernel library: CPU +1154, GPU +995, now: CPU 1223, GPU 6203 (MiB) CUDA lazy loading is enabled. Start parsing network model. ---------------------------------------------------------------- Input filename: /home/jetson/HPS/Models/FeatureExtractor/UNICOM/ONNX/FP16-ViT-B-32.onnx ONNX IR version: 0.0.8 Opset version: 1 Producer name: pytorch Producer version: 2.3.0 Domain: Model version: 0 Doc string: ---------------------------------------------------------------- Plugin already registered - ::BatchedNMSDynamic_TRT version 1 Plugin already registered - ::BatchedNMS_TRT version 1 Plugin already registered - ::BatchTilePlugin_TRT version 1 Plugin already registered - ::Clip_TRT version 1 Plugin already registered - ::CoordConvAC version 1 Plugin already registered - ::CropAndResizeDynamic version 1 Plugin already registered - ::CropAndResize version 1 Plugin already registered - ::DecodeBbox3DPlugin version 1 Plugin already registered - ::DetectionLayer_TRT version 1 Plugin already registered - ::EfficientNMS_Explicit_TF_TRT version 1 Plugin already registered - ::EfficientNMS_Implicit_TF_TRT version 1 Plugin already registered - ::EfficientNMS_ONNX_TRT version 1 Plugin already registered - ::EfficientNMS_TRT version 1 Plugin already registered - ::FlattenConcat_TRT version 1 Plugin already registered - ::GenerateDetection_TRT version 1 Plugin already registered - ::GridAnchor_TRT version 1 Plugin already registered - ::GridAnchorRect_TRT version 1 Plugin already registered - ::InstanceNormalization_TRT version 1 Plugin already registered - ::InstanceNormalization_TRT version 2 Plugin already registered - 
::LReLU_TRT version 1 Plugin already registered - ::ModulatedDeformConv2d version 1 Plugin already registered - ::MultilevelCropAndResize_TRT version 1 Plugin already registered - ::MultilevelProposeROI_TRT version 1 Plugin already registered - ::MultiscaleDeformableAttnPlugin_TRT version 1 Plugin already registered - ::NMSDynamic_TRT version 1 Plugin already registered - ::NMS_TRT version 1 Plugin already registered - ::Normalize_TRT version 1 Plugin already registered - ::PillarScatterPlugin version 1 Plugin already registered - ::PriorBox_TRT version 1 Plugin already registered - ::ProposalDynamic version 1 Plugin already registered - ::ProposalLayer_TRT version 1 Plugin already registered - ::Proposal version 1 Plugin already registered - ::PyramidROIAlign_TRT version 1 Plugin already registered - ::Region_TRT version 1 Plugin already registered - ::Reorg_TRT version 1 Plugin already registered - ::ResizeNearest_TRT version 1 Plugin already registered - ::ROIAlign_TRT version 1 Plugin already registered - ::RPROI_TRT version 1 Plugin already registered - ::ScatterND version 1 Plugin already registered - ::SpecialSlice_TRT version 1 Plugin already registered - ::Split version 1 Plugin already registered - ::VoxelGeneratorPlugin version 1 Adding network input: l_x_ with dtype: float32, dimensions: (1, 3, 224, 224) Registering tensor: l_x_ for ONNX tensor: l_x_ Importing : patch_embed.proj.weight Importing : patch_embed.proj.bias Importing : pos_embed Importing : blocks.0.norm1.weight Importing : blocks.0.norm1.bias Importing : blocks.0.attn.qkv.weight Importing : blocks.0.attn.proj.weight Importing : blocks.0.attn.proj.bias Importing : blocks.0.norm2.weight Importing : blocks.0.norm2.bias Importing : blocks.0.mlp.fc1.weight Importing : blocks.0.mlp.fc1.bias Importing : blocks.0.mlp.fc2.weight Importing : blocks.0.mlp.fc2.bias Importing : blocks.1.norm1.weight Importing : blocks.1.norm1.bias Importing : blocks.1.attn.qkv.weight Importing : 
blocks.1.attn.proj.weight Importing : blocks.1.attn.proj.bias Importing : blocks.1.norm2.weight Importing : blocks.1.norm2.bias Importing : blocks.1.mlp.fc1.weight Importing : blocks.1.mlp.fc1.bias Importing : blocks.1.mlp.fc2.weight Importing : blocks.1.mlp.fc2.bias Importing : blocks.2.norm1.weight Importing : blocks.2.norm1.bias Importing : blocks.2.attn.qkv.weight Importing : blocks.2.attn.proj.weight Importing : blocks.2.attn.proj.bias Importing : blocks.2.norm2.weight Importing : blocks.2.norm2.bias Importing : blocks.2.mlp.fc1.weight Importing : blocks.2.mlp.fc1.bias Importing : blocks.2.mlp.fc2.weight Importing : blocks.2.mlp.fc2.bias Importing : blocks.3.norm1.weight Importing : blocks.3.norm1.bias Importing : blocks.3.attn.qkv.weight Importing : blocks.3.attn.proj.weight Importing : blocks.3.attn.proj.bias Importing : blocks.3.norm2.weight Importing : blocks.3.norm2.bias Importing : blocks.3.mlp.fc1.weight Importing : blocks.3.mlp.fc1.bias Importing : blocks.3.mlp.fc2.weight Importing : blocks.3.mlp.fc2.bias Importing : blocks.4.norm1.weight Importing : blocks.4.norm1.bias Importing : blocks.4.attn.qkv.weight Importing : blocks.4.attn.proj.weight Importing : blocks.4.attn.proj.bias Importing : blocks.4.norm2.weight Importing : blocks.4.norm2.bias Importing : blocks.4.mlp.fc1.weight Importing : blocks.4.mlp.fc1.bias Importing : blocks.4.mlp.fc2.weight Importing : blocks.4.mlp.fc2.bias Importing : blocks.5.norm1.weight Importing : blocks.5.norm1.bias Importing : blocks.5.attn.qkv.weight Importing : blocks.5.attn.proj.weight Importing : blocks.5.attn.proj.bias Importing : blocks.5.norm2.weight Importing : blocks.5.norm2.bias Importing : blocks.5.mlp.fc1.weight Importing : blocks.5.mlp.fc1.bias Importing : blocks.5.mlp.fc2.weight Importing : blocks.5.mlp.fc2.bias Importing : blocks.6.norm1.weight Importing : blocks.6.norm1.bias Importing : blocks.6.attn.qkv.weight Importing : blocks.6.attn.proj.weight Importing : blocks.6.attn.proj.bias Importing : 
blocks.6.norm2.weight Importing : blocks.6.norm2.bias Importing : blocks.6.mlp.fc1.weight Importing : blocks.6.mlp.fc1.bias Importing : blocks.6.mlp.fc2.weight Importing : blocks.6.mlp.fc2.bias Importing : blocks.7.norm1.weight Importing : blocks.7.norm1.bias Importing : blocks.7.attn.qkv.weight Importing : blocks.7.attn.proj.weight Importing : blocks.7.attn.proj.bias Importing : blocks.7.norm2.weight Importing : blocks.7.norm2.bias Importing : blocks.7.mlp.fc1.weight Importing : blocks.7.mlp.fc1.bias Importing : blocks.7.mlp.fc2.weight Importing : blocks.7.mlp.fc2.bias Importing : blocks.8.norm1.weight Importing : blocks.8.norm1.bias Importing : blocks.8.attn.qkv.weight Importing : blocks.8.attn.proj.weight Importing : blocks.8.attn.proj.bias Importing : blocks.8.norm2.weight Importing : blocks.8.norm2.bias Importing : blocks.8.mlp.fc1.weight Importing : blocks.8.mlp.fc1.bias Importing : blocks.8.mlp.fc2.weight Importing : blocks.8.mlp.fc2.bias Importing : blocks.9.norm1.weight Importing : blocks.9.norm1.bias Importing : blocks.9.attn.qkv.weight Importing : blocks.9.attn.proj.weight Importing : blocks.9.attn.proj.bias Importing : blocks.9.norm2.weight Importing : blocks.9.norm2.bias Importing : blocks.9.mlp.fc1.weight Importing : blocks.9.mlp.fc1.bias Importing : blocks.9.mlp.fc2.weight Importing : blocks.9.mlp.fc2.bias Importing : blocks.10.norm1.weight Importing : blocks.10.norm1.bias Importing : blocks.10.attn.qkv.weight Importing : blocks.10.attn.proj.weight Importing : blocks.10.attn.proj.bias Importing : blocks.10.norm2.weight Importing : blocks.10.norm2.bias Importing : blocks.10.mlp.fc1.weight Importing : blocks.10.mlp.fc1.bias Importing : blocks.10.mlp.fc2.weight Importing : blocks.10.mlp.fc2.bias Importing : blocks.11.norm1.weight Importing : blocks.11.norm1.bias Importing : blocks.11.attn.qkv.weight Importing : blocks.11.attn.proj.weight Importing : blocks.11.attn.proj.bias Importing : blocks.11.norm2.weight Importing : blocks.11.norm2.bias Importing : 
blocks.11.mlp.fc1.weight Importing : blocks.11.mlp.fc1.bias Importing : blocks.11.mlp.fc2.weight Importing : blocks.11.mlp.fc2.bias Importing : norm.weight Importing : norm.bias Importing : feature.0.weight Importing : feature.1.weight Importing : feature.1.bias Importing : feature.1.running_mean Importing : feature.1.running_var Importing : feature.2.weight Importing : feature.3.weight Importing : feature.3.bias Importing : feature.3.running_mean Importing : feature.3.running_var Parsing node: unicom_vision_transformer_PatchEmbedding_patch_embed_1_1 [unicom_vision_transformer_PatchEmbedding_patch_embed_1] Searching for input: l_x_ Searching for input: patch_embed.proj.weight Searching for input: patch_embed.proj.bias unicom_vision_transformer_PatchEmbedding_patch_embed_1_1 [unicom_vision_transformer_PatchEmbedding_patch_embed_1] inputs: [l_x_ -&gt; (1, 3, 224, 224)[FLOAT]], [patch_embed.proj.weight -&gt; (768, 3, 32, 32)[FLOAT]], [patch_embed.proj.bias -&gt; (768)[FLOAT]], No importer registered for op: unicom_vision_transformer_PatchEmbedding_patch_embed_1. Attempting to import as plugin. Searching for plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1, plugin_version: 1, plugin_namespace: Local registry did not find unicom_vision_transformer_PatchEmbedding_patch_embed_1 creator. Will try parent registry if enabled. Global registry did not find unicom_vision_transformer_PatchEmbedding_patch_embed_1 creator. Will try parent registry if enabled. 
3: getPluginCreator could not find plugin: unicom_vision_transformer_PatchEmbedding_patch_embed_1 version: 1 ModelImporter.cpp:768: While parsing node number 0 [unicom_vision_transformer_PatchEmbedding_patch_embed_1 -&gt; &quot;patch_embed_1&quot;]: ModelImporter.cpp:769: --- Begin node --- ModelImporter.cpp:770: input: &quot;l_x_&quot; input: &quot;patch_embed.proj.weight&quot; input: &quot;patch_embed.proj.bias&quot; output: &quot;patch_embed_1&quot; name: &quot;unicom_vision_transformer_PatchEmbedding_patch_embed_1_1&quot; op_type: &quot;unicom_vision_transformer_PatchEmbedding_patch_embed_1&quot; doc_string: &quot;&quot; domain: &quot;pkg.unicom&quot; input: &quot;patch_embed.proj.weight&quot; input: &quot;patch_embed.proj.bias&quot; output: &quot;patch_embed_1&quot; name: &quot;unicom_vision_transformer_PatchEmbedding_patch_embed_1_1&quot; op_type: &quot;unicom_vision_transformer_PatchEmbedding_patch_embed_1&quot; doc_string: &quot;&quot; domain: &quot;pkg.unicom&quot; [E] ModelImporter.cpp:771: --- End node --- [E] ModelImporter.cpp:773: ERROR: builtin_op_importers.cpp:5403 In function importFallbackPluginImporter: [E] ModelImporter.cpp:771: --- End node --- [E] ModelImporter.cpp:773: ERROR: builtin_op_importers.cpp:5403 In function importFallbackPluginImporter: [8] Assertion failed: creator &amp;&amp; &quot;Plugin not found, are the plugin name, version, and namespace correct?&quot; [8] Assertion failed: creator &amp;&amp; &quot;Plugin not found, are the plugin name, version, and namespace correct?&quot; [E] Failed to parse onnx file [I] Finished parsing network model. Parse time: 4.99544 [E] Parsing model failed [E] Failed to create engine from model or file. [E] Engine set up failed [E] Failed to parse onnx file [I] Finished parsing network model. Parse time: 13.1481 [E] Parsing model failed [E] Failed to create engine from model or file. 
[E] Engine set up failed
</code></pre> <p>The problem seems to arise from the <code>PatchEmbedding</code> class <a href="https://github.com/deepglint/unicom/blob/4d84a3b496a47bcad68467d71c5ca787b0366042/unicom/vision_transformer.py#L127" rel="nofollow noreferrer">here</a>, and it doesn't seem as if the model is using any extraordinary methods or layers that aren't convertible by TensorRT. Here's the class's source code:</p> <pre><code>class PatchEmbedding(nn.Module):
    def __init__(self, input_size=224, patch_size=32, in_channels: int = 3, dim: int = 768):
        super().__init__()
        if isinstance(input_size, int):
            input_size = (input_size, input_size)
        if isinstance(patch_size, int):
            patch_size = (patch_size, patch_size)
        H = input_size[0] // patch_size[0]
        W = input_size[1] // patch_size[1]
        self.num_patches = H * W
        self.proj = nn.Conv2d(
            in_channels, dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x).flatten(2).transpose(1, 2)
        return x
</code></pre> <p>What should I do to make the model convertible to TensorRT?</p> <pre><code>## Environment

**TensorRT Version**: tensorrt_version_8_6_2_3
**GPU Type**: Jetson Orin Nano
**Nvidia Driver Version**:
**CUDA Version**: 12.2
**CUDNN Version**: 8.9.4.25-1+cuda12.2
**Operating System + Version**: Jetpack 6.0
**Python Version (if applicable)**: 3.10
**PyTorch Version (if applicable)**: 2.3.0
**ONNX Version (if applicable)**: 1.16.1
**onnxruntime-gpu Version (if applicable)**: 1.17.0
**onnxscript Version (if applicable)**: 0.1.0.dev20240721
</code></pre> <p><strong>UPDATE:</strong> Running the code using <code>torch.onnx.export</code> instead of <code>torch.onnx.dynamo_export</code> gives this error:</p> <pre><code>/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension:
  warn(f&quot;Failed to load image Python extension: {e}&quot;) 
/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/_dynamo/external_utils.py:36: UserWarning: torch.utils.checkpoint: the use_reentrant parameter should be passed explicitly. In version 2.4 we will raise an exception if use_reentrant is not passed. use_reentrant=False is recommended, but if you need to preserve the current default behavior, you can pass use_reentrant=True. Refer to docs for more details on the differences between the two variants. return fn(*args, **kwargs) Traceback (most recent call last): File &quot;/home/jetson/HPS/Scripts_Utilities/ONNX/HPS_ExportModelToONNX.py&quot;, line 31, in &lt;module&gt; torch.onnx.export(model, torch_input,onnx_model_path) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/onnx/utils.py&quot;, line 516, in export _export( File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/onnx/utils.py&quot;, line 1612, in _export graph, params_dict, torch_out = _model_to_graph( File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/onnx/utils.py&quot;, line 1134, in _model_to_graph graph, params, torch_out, module = _create_jit_graph(model, args) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/onnx/utils.py&quot;, line 1010, in _create_jit_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/onnx/utils.py&quot;, line 914, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph( File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/jit/_trace.py&quot;, line 1315, in _get_trace_graph outs = ONNXTracedModule( File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File 
&quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1541, in _call_impl return forward_call(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/jit/_trace.py&quot;, line 141, in forward graph, out = torch._C._create_graph_by_tracing( File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/jit/_trace.py&quot;, line 132, in wrapper outs.append(self.inner(*trace_inputs)) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1541, in _call_impl return forward_call(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1522, in _slow_forward result = self.forward(*input, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/unicom/vision_transformer.py&quot;, line 57, in forward x = self.forward_features(x) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/unicom/vision_transformer.py&quot;, line 52, in forward_features x = func(x) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1532, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1541, in _call_impl return forward_call(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/nn/modules/module.py&quot;, line 1522, in _slow_forward result = self.forward(*input, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/unicom/vision_transformer.py&quot;, line 122, in forward return 
checkpoint(self.forward_impl, x) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/_compile.py&quot;, line 24, in inner return torch._dynamo.disable(fn, recursive)(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py&quot;, line 403, in _fn return fn(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/_dynamo/external_utils.py&quot;, line 36, in inner return fn(*args, **kwargs) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/utils/checkpoint.py&quot;, line 481, in checkpoint return CheckpointFunction.apply(function, preserve, *args) File &quot;/home/jetson/miniconda3/envs/HPS/lib/python3.10/site-packages/torch/autograd/function.py&quot;, line 571, in apply return super().apply(*args, **kwargs) # type: ignore[misc] RuntimeError: _Map_base::at </code></pre>
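As a side note on the `PatchEmbedding` layer the parser trips over: it is only a strided convolution followed by `flatten(2).transpose(1, 2)`, so its expected output shape is easy to verify by hand when sanity-checking exported graphs. A minimal sketch of that shape arithmetic in plain Python (no torch; `patch_embed_output_shape` is an illustrative helper, not part of unicom):

```python
def patch_embed_output_shape(batch, input_size=224, patch_size=32, dim=768):
    # Mirrors the arithmetic in unicom's PatchEmbedding: a Conv2d with
    # kernel_size == stride == patch_size maps (B, C, H, W) to
    # (B, dim, H // P, W // P); flatten(2).transpose(1, 2) then yields
    # (B, num_patches, dim).
    h = input_size // patch_size
    w = input_size // patch_size
    num_patches = h * w
    return (batch, num_patches, dim)


# For the ViT-B/32 input in the question: 224 // 32 = 7, so 7 * 7 = 49 patches.
print(patch_embed_output_shape(1))  # (1, 49, 768)
```

Separately, the `RuntimeError: _Map_base::at` in the update is raised while tracing through `checkpoint(self.forward_impl, x)` (visible in the traceback at `vision_transformer.py`, line 122), so a commonly tried workaround — not verified here — is to make the model call `forward_impl` directly, bypassing gradient checkpointing, before exporting.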
<python><pytorch><onnx><tensorrt>
2024-07-24 09:39:22
1
2,620
Cypher
78,787,519
17,721,722
How to Version Control a PostgreSQL Table?
<p>Backend: Python 3.11 and Django 5.0.6<br /> Database: PostgreSQL 15</p> <p>Our app deals with reporting from the database. We don't store report SQL queries in the codebase; instead, we store them in DB tables. However, developers change SQL queries from time to time. How can we track such changes made in the database, similar to using Git? For example, I want to see the query changes made in the database one year ago. How can I do that? Does anyone have any approach to do this?</p> <p>I tried to set up Liquibase, but it lacks support/documentation for a Python + Django app.</p> <p>Currently, we have created a repo for the database, and we manually generate insert statements from the table and push them into the repo. But this is a tedious, manual procedure. Some approaches that may work include using PostgreSQL triggers or logs. There are only some specific static tables that I want to track, because other tables have millions of rows. If possible, I need an implementation similar to a version control tool like Git. Does anyone have a better approach to this problem?</p>
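One lightweight pattern for this kind of requirement is an append-only history log: every time a report's SQL changes, write the new text plus a timestamp to a history store, then diff any two revisions on demand. The sketch below models the idea in plain Python with stdlib `difflib`; all names are illustrative, and in production the store would typically be a Postgres audit table populated by a trigger or a Django `pre_save` signal rather than an in-memory dict.

```python
import difflib
from datetime import datetime, timezone


class QueryHistory:
    """Append-only revision log for report SQL, keyed by report id."""

    def __init__(self):
        self._revisions = {}  # report_id -> list of (timestamp, sql)

    def record(self, report_id, sql):
        # Called whenever a developer updates a report's SQL; in Django
        # this would live in a pre_save signal or an UPDATE trigger.
        self._revisions.setdefault(report_id, []).append(
            (datetime.now(timezone.utc), sql)
        )

    def diff(self, report_id, old_index, new_index):
        # Git-style unified diff between two stored revisions.
        _, old_sql = self._revisions[report_id][old_index]
        _, new_sql = self._revisions[report_id][new_index]
        return "\n".join(
            difflib.unified_diff(
                old_sql.splitlines(), new_sql.splitlines(),
                "old", "new", lineterm="",
            )
        )


history = QueryHistory()
history.record("sales_report", "SELECT * FROM sales")
history.record("sales_report", "SELECT id, total FROM sales")
print(history.diff("sales_report", 0, 1))
```

Because the timestamps are stored with each revision, "what did this query look like a year ago" becomes a simple range lookup over the history rows, and only the few static tables you care about need the trigger attached.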
<python><django><postgresql><git><version-control>
2024-07-24 09:36:37
0
501
Purushottam Nawale
78,787,373
7,932,327
What explains these surprising timings in numpy?
<p>I recently timed several different array allocation and initialization procedures in numpy. I however fail to interpret these timings. Here is a plot of my measurements (size of array in number of elements &quot;n&quot; for a given data type vs time of execution).</p> <p><img src="https://gabrielfougeron.github.io/pyquickbench/_images/sphx_glr_numpy_array_init_001.png" alt="Benchmark" /></p> <p>If you want to reproduce the timings on your own machine, here is the code: <a href="https://gabrielfougeron.github.io/pyquickbench/_build/auto_examples/benchmarks/numpy_array_init.html#sphx-glr-build-auto-examples-benchmarks-numpy-array-init-py" rel="nofollow noreferrer">https://gabrielfougeron.github.io/pyquickbench/_build/auto_examples/benchmarks/numpy_array_init.html#sphx-glr-build-auto-examples-benchmarks-numpy-array-init-py</a></p> <p>In particular, here are my questions:</p> <ul> <li><p>Why do the timings of <code>np.ones</code> and <code>np.zeros</code> differ so much?</p> </li> <li><p>Why is there a discontinuity around the few million elements mark?</p> </li> </ul>
<python><numpy><numpy-ndarray>
2024-07-24 09:08:46
0
501
G. Fougeron
78,787,332
11,021,175
Selecting default search engine is needed for Chrome version 127
<p>All of my Selenium scripts are raising errors after Chrome updated to version 127 because I always have to select a default search engine when the browser is being launched.</p> <p>I use ChromeDriver 127.0.6533.72.</p> <p>How to fix it?</p>
<python><selenium-webdriver>
2024-07-24 09:00:03
3
407
Ben
78,787,162
9,608,759
How to ignore a certain argument in a `Field` while serializing in pydantic?
<h3>What I am trying to achieve?</h3> <p>I want to filter books based on some criteria. Since the filtering logic is roughly equal to &quot;<em>compare</em> <strong>value</strong> <em>with</em> <strong>column_in_db</strong>&quot;, I decided to create different types of filters for filtering values. I am then using these filters to create my <code>Filter</code> model. I will then use FastAPI to let users of my api populate the filter object and get filtered results.</p> <h4>What works?</h4> <pre class="lang-py prettyprint-override"><code># filtering.py from pydantic import BaseModel, Field from my_sqlalchemy_models import Book, User class EqualityFilter(BaseModel): value: int _column: Any | None = PrivateAttr(default=None) def __init__(self, **data): super().__init__(**data) self._column = data.get(&quot;_column&quot;) @property def column(self): return self._column class MinMaxFilter: ... class ContainsFilter: ... class Filter(BaseModel): author_id: EqualityFilter | None = Field(default=None, column=Book.author_id) owner_id: EqualityFilter | None = Field(default=None, column=User.id) owner_name: ContainsFilter | None = Field(default=None, column=User.name) price: MinMaxFilter | None = Field(default=None, column=Book.price) @model_validator(mode=&quot;before&quot;) @classmethod def add_columns(cls, data: Any): for field_name, value in cls.model_fields.items(): schema_extra = value.json_schema_extra if field_name in data and schema_extra and schema_extra.get(&quot;column&quot;) is not None: data[field_name][&quot;_column&quot;] = value.json_schema_extra[&quot;column&quot;] return data </code></pre> <p>The setup above works perfectly and I can create my filter object like:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; my_filter = Filter(author_id={&quot;value&quot;: 1}, owner_name={&quot;value&quot;: &quot;John&quot;}, price={&quot;min&quot;: 10}) &gt;&gt;&gt; my_filter.author_id.column &lt;sqlalchemy.orm.attributes.InstrumentedAttribute object at 
0x105933420&gt; </code></pre> <p>Now I can create a condition for <code>.where</code> statement in sqlalchemy query:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; my_filter.author_id.column == my_filter.author_id.value &lt;sqlalchemy.sql.elements.BinaryExpression object at 0x102bfc860&gt; </code></pre> <h4>What doesn't work?</h4> <p>The problem happens when I try to use the <code>Filter</code> in my FastAPI app. FastAPI tries to generate <code>openapi.json</code> for the <code>/docs</code> and it fails to serialize the <code>column</code> arguments in <code>Field</code>'s.</p> <pre class="lang-py prettyprint-override"><code> File &quot;***/.venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 2250, in json_schema_update_func add_json_schema_extra(json_schema, json_schema_extra) File &quot;***/.venv/lib/python3.12/site-packages/pydantic/_internal/_generate_schema.py&quot;, line 2260, in add_json_schema_extra json_schema.update(to_jsonable_python(json_schema_extra)) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ pydantic_core._pydantic_core.PydanticSerializationError: Unable to serialize unknown type: &lt;class 'sqlalchemy.orm.attributes.InstrumentedAttribute'&gt; </code></pre> <p>I don't need it to be serialized. I am using the <code>column</code> internally.</p> <p>So I wonder if there is way to ignore serialization for certain arguments in a field. If what I am trying to do doesn't make much sense or it is not possible, is there another way to do it?</p> <h4>Complete Minimal Reproducible Example</h4> <p>Run the below code with <code>python app.py</code> and go to <code>http://127.0.0.1:8000/docs</code> and you will see the error. To keep things simple, I've replaced references to my sqlalchemy models to <code>int</code> which is also non-serializable. 
The error explanation looks slightly different but it's caused by the same thing.</p> <pre><code>from typing import Any from fastapi import FastAPI from pydantic import BaseModel, Field, PrivateAttr, model_validator class EqualityFilter(BaseModel): value: int _column: Any | None = PrivateAttr(default=None) def __init__(self, **data): super().__init__(**data) self._column = data.get(&quot;_column&quot;) @property def column(self): return self._column class Filter(BaseModel): author_id: EqualityFilter | None = Field(default=None, column=int) owner_id: EqualityFilter | None = Field(default=None, column=int) @model_validator(mode=&quot;before&quot;) @classmethod def add_columns(cls, data: Any): for field_name, value in cls.model_fields.items(): schema_extra = value.json_schema_extra if field_name in data and schema_extra and schema_extra.get(&quot;column&quot;) is not None: data[field_name][&quot;_column&quot;] = value.json_schema_extra[&quot;column&quot;] return data app = FastAPI() @app.post(&quot;/books&quot;) async def get_books(filter: Filter): # do the filtering result = [] return result if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(app, host=&quot;127.0.0.1&quot;, port=8000) </code></pre>
<python><fastapi><pydantic><pydantic-v2>
2024-07-24 08:25:32
2
6,065
sahinakkaya
78,787,152
6,703,592
dataframe plot histogram boundary bins
<pre><code>bins = [x for x in range(-10, 11)] df['val'].plot(kind='hist', bins=bins) </code></pre> <p>I want to put all the out-of-range values (&gt;10 or &lt;-10) into the rightmost/leftmost boundary bins without changing their widths. Note that the max/min value of <code>df</code> is dynamic.</p> <p>I can build two boundary bins like <code>[-9999, 0]</code> and <code>[0, 9999]</code>, but their widths will be extremely large. Another way is to clip the data of <code>df['val']</code> with</p> <pre><code>np.clip(df['val'], a_max=bins[-1], a_min=bins[0]) </code></pre> <p>Is there an easy way to achieve my goal without changing <code>df</code>?</p>
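For what it's worth, a minimal sketch of the clipping idea that leaves `df` untouched (hypothetical values; the plot call is commented out so the snippet runs headless):

```python
import pandas as pd

# Hypothetical data; in practice df['val'] comes from the real frame.
df = pd.DataFrame({'val': [-15, -3, 0, 4, 12, 25]})
bins = list(range(-10, 11))

# Clip only the series handed to the plot call; df itself is not modified.
clipped = df['val'].clip(lower=bins[0], upper=bins[-1])
# clipped.plot(kind='hist', bins=bins)
```

Out-of-range values land in the boundary bins while all bins keep unit width.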
<python><pandas><plot>
2024-07-24 08:24:47
1
1,136
user6703592
78,786,982
5,704,198
Comparing strings while normalizing special characters in Python
<p>Probably there is a better way to phrase this in English, but what I want is to ignore accents (and the like) in words, so:</p> <p><code>renè</code>, <code>rené</code>, <code>rene'</code> and <code>rene</code> should be the same, and so should</p> <p><code>mañana</code> and <code>manana</code> or</p> <p><code>even-distribuited</code> and <code>even distribuited</code> and possibly</p> <p><code>shouldn't</code> and <code>shouldnt</code></p> <p>I remember a function (derived from journalism) used, for example, for internet page addresses that should take out spaces, accents, etc., but I don't remember the name. I think it should work, but other ways are accepted too.</p> <p>Thank you</p> <p>Edit:</p> <p>The function I had in mind is <a href="https://docs.djangoproject.com/en/4.2/ref/utils/#django.utils.text.slugify" rel="nofollow noreferrer">slugify()</a> for Django, but it's probably not enough.</p>
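A stdlib-only sketch of the idea (the extra rules for apostrophes and hyphens are assumptions layered on top, not part of any standard Unicode normalization):

```python
import unicodedata

def simplify(text: str) -> str:
    """Reduce a word to a bare ASCII-ish form for loose comparison."""
    # Decompose accented characters (é -> e + combining accent) ...
    decomposed = unicodedata.normalize('NFKD', text)
    # ... then drop the combining marks.
    stripped = ''.join(c for c in decomposed if not unicodedata.combining(c))
    # Assumed extra rules: drop apostrophes, unify hyphens with spaces.
    return stripped.replace("'", "").replace('-', ' ').casefold()
```

With this, `simplify('renè')`, `simplify('rené')` and `simplify("rene'")` all compare equal to `'rene'`.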
<python><string><compare>
2024-07-24 07:46:48
1
1,385
fabio
78,786,800
4,897,017
metadata-generation-failed when installing tf-models-official
<p>I'm trying to install tf-models-official with <code>!pip install tf-models-official</code> and when it started to collecting kaggle&gt;=1.3.9, it returned error below :</p> <pre><code>Collecting kaggle&gt;=1.3.9 (from tf-models-official) Using cached kaggle-1.6.15.tar.gz (9.1 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... error error: subprocess-exited-with-error × Preparing metadata (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [35 lines of output] Traceback (most recent call last): File &quot;/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 353, in &lt;module&gt; main() File &quot;/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 335, in main json_out['return_val'] = hook(**hook_input['kwargs']) File &quot;/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py&quot;, line 152, in prepare_metadata_for_build_wheel whl_basename = backend.build_wheel(metadata_directory, config_settings) File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/build.py&quot;, line 58, in build_wheel return os.path.basename(next(builder.build(directory=wheel_directory, versions=['standard']))) File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/plugin/interface.py&quot;, line 155, in build artifact = version_api[version](directory, **build_data) File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/wheel.py&quot;, line 475, in build_standard for included_file in self.recurse_included_files(): File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/plugin/interface.py&quot;, 
line 176, in recurse_included_files yield from self.recurse_selected_project_files() File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/plugin/interface.py&quot;, line 180, in recurse_selected_project_files if self.config.only_include: File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/config.py&quot;, line 806, in only_include only_include = only_include_config.get('only-include', self.default_only_include()) or self.packages File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/wheel.py&quot;, line 260, in default_only_include return self.default_file_selection_options.only_include File &quot;/home/ec2-user/anaconda3/envs/tensorflow2_p310/lib/python3.10/functools.py&quot;, line 981, in __get__ val = self.func(instance) File &quot;/tmp/pip-build-env-fqrzl9xw/overlay/lib/python3.10/site-packages/hatchling/builders/wheel.py&quot;, line 248, in default_file_selection_options raise ValueError(message) ValueError: Unable to determine which files to ship inside the wheel using the following heuristics: https://hatch.pypa.io/latest/plugins/builder/wheel/#default-file-selection The most likely cause of this is that there is no directory that matches the name of your project (kaggle). At least one file selection option must be defined in the `tool.hatch.build.targets.wheel` table, see: https://hatch.pypa.io/latest/config/build/ As an example, if you intend to ship a directory named `foo` that resides within a `src` directory located at the root of your project, you can define the following: [tool.hatch.build.targets.wheel] packages = [&quot;src/foo&quot;] [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: metadata-generation-failed × Encountered error while generating package metadata. ╰─&gt; See above for output. note: This is an issue with the package mentioned above, not pip. 
hint: See above for details. </code></pre> <p>I was able to install it two weeks ago; now, on the new Jupyter notebook kernel, I am suddenly unable to install it. I've tried to reinstall on the old kernel, and the same error happens there too. Does anyone know how to solve this?</p>
<python><tensorflow><machine-learning><pip>
2024-07-24 07:02:59
1
503
Larry Mckuydee
78,786,496
898,042
counting calls decorator - why do I reset function attribute back to 0?
<p>The code below counts the number of times the decorated function <code>func</code> was called:</p> <pre><code>from functools import wraps def counting_calls(func): @wraps(func) def inner(*args, **kwargs): inner.call_count += 1 return func(*args, **kwargs) inner.call_count = 0 return inner </code></pre> <p>For this test:</p> <pre><code>@counting_calls def add(a: int, b: int) -&gt; int: '''return sum of 2 ints''' return a + b print(add(10, b=20)) print(add(30, 5)) print(add(3, 5)) print(add(4, 5)) print('num calls =', add.call_count) print(add(11, 5)) print('num calls =', add.call_count) </code></pre> <p>Why do we set <code>inner.call_count = 0</code>? Doesn't it always reset the call_count back to 0 instead of saving the total number of calls?</p>
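A quick sketch demonstrating the behavior in question: the `inner.call_count = 0` line runs only once, when the decorator is applied, not on every call:

```python
from functools import wraps

def counting_calls(func):
    @wraps(func)
    def inner(*args, **kwargs):
        inner.call_count += 1
        return func(*args, **kwargs)
    inner.call_count = 0   # executed once, at decoration time
    return inner

@counting_calls
def add(a, b):
    return a + b

add(10, 20)
add(30, 5)
# add.call_count is now 2: the attribute lives on inner and is never
# reset again, because the body of counting_calls ran only once.
```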
<python><python-decorators>
2024-07-24 05:29:28
1
24,573
ERJAN
78,786,311
10,410,934
How to elegantly combine complex Python object and SQLAlchemy model classes?
<p>I have a rather complex class with complex properties computed in <code>__init__</code> from a provided df, and these properties may be other class types that can eventually be serialized into a string. In Python I want to deal with objects rather than primitive types, but I also want to use SQLAlchemy for interacting with databases. The columns in the table are the same as many of the class properties; how can I combine these two classes elegantly? I could use composition and have the db model as an object within the class, but that doesn't feel like truly combining: I'd have to have two separate classes that call each other's APIs.</p> <p>Subclassing wouldn't work, I don't think, as the class properties cannot have the same names as the mapped columns. When I use the ORM and pull from the db, I'd like the object to be already initialized so I can modify the df and change the properties, but <code>__init__</code> isn't called when using the ORM.</p> <pre><code>from enum import Enum import pandas as pd from sqlalchemy import Float, Text from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column class Thing(Enum): A = &quot;a&quot; B = &quot;b&quot; def thing_factory(df: pd.DataFrame) -&gt; Thing: tmp = df[&quot;thing&quot;].max() return Thing.A if tmp &gt; 4 else Thing.B class ComplexObject: def __init__(self, df: pd.DataFrame): self.df = df self.thing: Thing = thing_factory(self.df) @property def complex_property(self) -&gt; float: # complex logic here return 0 class Base(DeclarativeBase): pass class ComplexObjectModel(Base): __tablename__ = &quot;complex_object&quot; id: Mapped[int] = mapped_column(primary_key=True) thing: Mapped[str] = mapped_column(Text) complex_property: Mapped[float] = mapped_column(Float) </code></pre>
<python><sqlalchemy><flask-sqlalchemy>
2024-07-24 03:49:15
1
396
Stevie
78,786,254
1,510,739
Can you pattern match on Python type annotations?
<p>Can you pattern match on Python types?</p> <p>I've seen simple examples:</p> <pre class="lang-py prettyprint-override"><code>import builtins match x: case builtins.str: print(&quot;matched str&quot;) case builtins.int: print(&quot;matched int&quot;) </code></pre> <p>But I'd like to pattern match on a nested type, something like <code>Annotated[Optional[Literal[&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]], &quot;something here&quot;]</code> - is this possible?</p>
<python><python-typing><structural-pattern-matching>
2024-07-24 03:15:33
1
487
Torkoal
78,786,208
9,509,245
Polars: Replace elements in list of List column
<p>Consider the following example series.</p> <pre class="lang-py prettyprint-override"><code>s = pl.Series('s', [[1, 2, 3], [3, 4, 5]]) </code></pre> <p>I'd like to replace all 3s with 10s to obtain the following.</p> <pre class="lang-py prettyprint-override"><code>res = pl.Series('s', [[1, 2, 10], [10, 4, 5]]) </code></pre> <p>Is it possible to efficiently replace elements in the lists of a <code>List</code> column in polars?</p> <p><strong>Note.</strong> I've already tried converting to a dataframe and using <code>pl.when().then()</code>, but <code>pl.when()</code> fails for input of type <code>List[bool]</code>. Moreover, I've experimented with <code>pl.Expr.list.eval</code>, but couldn't get much further than the original mask.</p>
<python><python-polars>
2024-07-24 02:43:07
2
508
PydPiper
78,786,150
299,282
Share code across SageMaker pipeline steps without building a custom Docker image
<p>I am trying to create a SageMaker pipeline with multiple steps, and I have some code which I would like to share across the different steps. The following example is not exact, but a simplified version for illustration.</p> <p>My folder structure looks like this:</p> <pre><code>source_scripts/ ├── utils │ ├── logger.py ├── models/ │ ├── ground_truth.py │ ├── document.py ├── processing/ │ ├── processing.py │ └── main.py └── training/ ├── training.py └── main.py </code></pre> <p>I would like to use code from <code>models</code> and <code>utils</code> inside <code>training.py</code>. Since I don't know where exactly the code is mounted on the SageMaker instance, I am using:</p> <pre><code>from ..common.ground_truth import GroundTruthRow </code></pre> <p>When building the pipeline I create a processing and a training step:</p> <pre><code>script_processor = FrameworkProcessor() args = script_processor.get_run_args( source_dir=&quot;source_scripts&quot; code=&quot;processing/main.py&quot; ) step_process = ProcessingStep( code=args.code ) estimator = Estimator( source_dir=&quot;source_scripts&quot; code=&quot;training/main.py&quot; ) step_train = TrainingStep( estimator=estimator ) </code></pre> <p>But during pipeline execution it results in this error:</p> <pre><code>ImportError: attempted relative import with no known parent package </code></pre> <p>Any suggestions on how to share code across several SageMaker jobs in a single pipeline without building a custom Docker image?</p>
<python><amazon-web-services><amazon-sagemaker><mlops>
2024-07-24 02:08:44
1
936
Max Markov
78,786,069
6,005,206
pd.Timestamp() behavior in Pandas
<p>I am trying to understand why <code>t1</code> takes the current date whereas <code>t2</code> takes the epoch date in pandas. Any thoughts would help.</p> <pre><code>import pandas as pd t1 = pd.Timestamp(&quot;23:12:05&quot;) print(&quot;t1:&quot;, t1) t2 = pd.Timestamp(1) print(&quot;t2:&quot;, t2) </code></pre> <p>Output:</p> <pre><code>t1: 2024-07-23 23:12:05 t2: 1970-01-01 00:00:00.000000001 </code></pre>
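The difference is in how the single positional argument is interpreted: a string goes through datetime parsing (with the date defaulting to today), while a number is taken as an offset from the Unix epoch. A short sketch (the `unit` parameter picks the offset unit; it defaults to nanoseconds):

```python
import pandas as pd

# A number is an epoch offset; the unit defaults to nanoseconds
# but can be chosen explicitly.
t_ns = pd.Timestamp(1)            # 1970-01-01 00:00:00.000000001
t_s = pd.Timestamp(1, unit='s')   # 1970-01-01 00:00:01
```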
<python><pandas><datetime><time><timestamp>
2024-07-24 01:05:22
1
1,893
Nilesh Ingle
78,785,682
12,728,698
Distributing pythonnet dll type information in pip package
<p>I've been able to load a <code>C#</code> dll using <code>pythonnet</code> by using the following:</p> <pre class="lang-py prettyprint-override"><code>from importlib.resources import path import sys # Assuming 'my_package.lib' is the sub-package containing the DLLs with path('pyrp.lib', '') as lib_path: sys.path.append(str(lib_path)) # Now you can import your dependencies as if they were in the same directory from pythonnet import load load('coreclr') import clr def load_APIStandard(): clr.AddReference(&quot;myDLL&quot;) from myDLL import APIStandard # Return the imported module or any objects you need from it return APIStandard </code></pre> <p>and I have the type information generated with <a href="https://github.com/MHDante/pythonnet-stub-generator" rel="nofollow noreferrer">this tool</a>. One example is</p> <pre><code>#__init__.pyi from System import Array_1 from myDLL import AnalysisResult class APIStandard(abc.ABC): @staticmethod def AnalyzeData(data: Array_1[int]) -&gt; AnalysisResult: ... </code></pre> <p>I can get this type information to display properly if I edit my VSCode <code>settings.json</code></p> <pre class="lang-json prettyprint-override"><code>&quot;python.analysis.extraPaths&quot;: [ &quot;./pyrp/lib/PythonTypes&quot; // Add the path to your &quot;types&quot; directory here, adjust the path as necessary ] </code></pre> <p>but no matter how I package the types into the pip package, I can't get the interpreter to find them natively. Any suggestions?</p>
<python><pip><python-typing>
2024-07-23 21:46:25
0
856
joshp
78,785,661
11,062,613
Parsing formulas efficiently using regex and Polars
<p>I am trying to parse a series of mathematical formulas and need to extract variable names efficiently using Polars in Python. Regex support in Polars seems to be limited, particularly with look-around assertions. Is there a simple, efficient way to parse symbols from formulas?</p> <p>Here's the snippet of my code:</p> <pre><code>import re import polars as pl # Define the regex pattern FORMULA_DECODER = r&quot;\b[A-Za-z][A-Za-z_0-9_]*\b(?!\()&quot; # \b # Assert a word boundary to ensure matching at the beginning of a word # [A-Za-z] # Match an uppercase or lowercase letter at the start # [A-Za-z0-9_]* # Match following zero or more occurrences of valid characters (letters, digits, or underscores) # \b # Assert a word boundary to ensure matching at the end of a word # (?!\() # Negative lookahead to ensure the match is not followed by an open parenthesis (indicating a function) # Sample formulas formulas = [&quot;3*sin(x1+x2)+A_0&quot;, &quot;ab*exp(2*x)&quot;] # expected result pl.Series(formulas).map_elements(lambda formula: re.findall(FORMULA_DECODER, formula), return_dtype=pl.List(pl.String)) # Series: '' [list[str]] # [ # [&quot;x1&quot;, &quot;x2&quot;, &quot;A_0&quot;] # [&quot;ab&quot;, &quot;x&quot;] # ] # Polars does not support this regex pattern pl.Series(formulas).str.extract_all(FORMULA_DECODER) # ComputeError: regex error: regex parse error: # \b[A-Za-z][A-Za-z_0-9_]*\b(?!\() # ^^^ # error: look-around, including look-ahead and look-behind, is not supported </code></pre> <p>Edit Here is a small benchmark:</p> <pre><code>import random import string import re import polars as pl def generate_symbol(): &quot;&quot;&quot;Generate random symbol of length 1-3.&quot;&quot;&quot; characters = string.ascii_lowercase + string.ascii_uppercase return ''.join(random.sample(characters, random.randint(1, 3))) def generate_formula(): &quot;&quot;&quot;Generate random formula with 2-5 unique symbols.&quot;&quot;&quot; op = ['+', '-', '*', '/'] return 
''.join([generate_symbol()+random.choice(op) for _ in range(random.randint(2, 6))])[:-1] def generate_formulas(num_formulas): &quot;&quot;&quot;Generate random formulas.&quot;&quot;&quot; return [generate_formula() for _ in range(num_formulas)] # Sample formulas # formulas = [&quot;3*sin(x1+x2)+(A_0+B)&quot;, # &quot;ab*exp(2*x)&quot;] def parse_baseline(formulas): &quot;&quot;&quot;Baseline serves as performance reference. It will not detect function names.&quot;&quot;&quot; FORMULA_DECODER_NO_LOOKAHEAD = r&quot;\b[A-Za-z][A-Za-z_0-9_]*\b\(?&quot; return pl.Series(formulas).str.extract_all(FORMULA_DECODER_NO_LOOKAHEAD) def parse_lookahead(formulas): FORMULA_DECODER = r&quot;\b[A-Za-z][A-Za-z_0-9_]*\b(?!\()&quot; return pl.Series(formulas).map_elements(lambda formula: re.findall(FORMULA_DECODER, formula), return_dtype=pl.List(pl.String)) def parse_no_lookahead_and_filter(formulas): FORMULA_DECODER_NO_LOOKAHEAD = r&quot;\b[A-Za-z][A-Za-z_0-9_]*\b\(?&quot; return ( pl.Series(formulas) .str.extract_all(FORMULA_DECODER_NO_LOOKAHEAD) # filter for matches not containing an open parenthesis .list.eval(pl.element().filter(~pl.element().str.contains(&quot;(&quot;, literal=True))) ) formulas = generate_formulas(1000) %timeit parse_lookahead(formulas) %timeit parse_no_lookahead_and_filter(formulas) %timeit parse_baseline(formulas) # 10.7 ms ± 387 μs per loop (mean ± std. dev. of 7 runs, 100 loops each) # 1.31 ms ± 76.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) # 708 μs ± 6.43 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each) </code></pre>
<python><regex><dataframe><python-polars><text-extraction>
2024-07-23 21:34:43
1
423
Olibarer
78,785,646
4,208,228
How can I view all the uploaded files associated with an Azure OpenAI assistant in Python?
<p>I’m using Python to benchmark questions from documents and have instantiated my assistant in a jupyter notebook. I’d like to confirm the assistant has the files I uploaded to it but can’t seem to find documentation on what function would be used for this. Using latest version of Python API for Azure OpenAI.</p>
<python><azure><file><openai-api><assistant>
2024-07-23 21:29:56
1
471
Celi Manu
78,785,590
10,492,911
Numpy: apply mask to values, then take mean, but in parallel
<p>I have a 1d numpy array of values:</p> <pre><code>v = np.array([0, 1, 4, 0, 5]) </code></pre> <p>Furthermore, I have a 2d numpy array of boolean masks (in production, there are millions of masks):</p> <pre><code>m = np.array([ [True, True, False, False, False], [True, False, True, False, True], [True, True, True, True, True], ]) </code></pre> <p>I want to apply each row of the mask to the array v and then compute the mean of the masked values.</p> <p>Expected behavior:</p> <pre><code>results = [] for mask in m: results.append(np.mean(v[mask])) print(results) # [0.5, 3.0, 2.0] </code></pre> <p>This is easy to do sequentially, but I am sure there is a beautiful vectorized version. One solution that I've found:</p> <pre><code>mask = np.ones(m.shape) mask[~m] = np.nan np.nanmean(v * mask, axis=1) # [0.5, 3.0, 2.0] </code></pre> <p>Is there another solution, perhaps using the np.ma module? I am looking for a solution that is faster than my current two solutions.</p>
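One fully vectorized sketch that avoids NaN bookkeeping entirely: a masked sum per row divided by each row's count of selected entries:

```python
import numpy as np

v = np.array([0, 1, 4, 0, 5])
m = np.array([
    [True, True, False, False, False],
    [True, False, True, False, True],
    [True, True, True, True, True],
])

# (m * v) zeroes out the unselected entries, so the row sum is the masked
# sum; m.sum(axis=1) counts the True entries per row.
means = (m * v).sum(axis=1) / m.sum(axis=1)
# means == [0.5, 3.0, 2.0]
```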
<python><arrays><numpy><mask>
2024-07-23 21:13:33
2
879
Franc Weser
78,785,221
3,369,545
ValueError when loading sklearn DecisionTreeClassifier pickle in Python 3.10
<p>I'm encountering an issue while transitioning from Python 3.7.3 to Python 3.10 due to the deprecation of the older version. The problem arises when attempting to load a pickled sklearn DecisionTreeClassifier model. Environment:</p> <p>Original: Python 3.7.3, scikit-learn 0.23.1 Current: Python 3.10, scikit-learn 1.3.2</p> <p>Problem: When loading the pickled model, I receive the following error:</p> <pre><code>ValueError: node array from the pickle has an incompatible dtype: - expected: {'names': ['left_child', 'right_child', 'feature', 'threshold', 'impurity', 'n_node_samples', 'weighted_n_node_samples', 'missing_go_to_left'], 'formats': ['&lt;i8', '&lt;i8', '&lt;i8', '&lt;f8', '&lt;f8', '&lt;i8', '&lt;f8', 'u1'], 'offsets': [0, 8, 16, 24, 32, 40, 48, 56], 'itemsize': 64} - got : [('left_child', '&lt;i8'), ('right_child', '&lt;i8'), ('feature', '&lt;i8'), ('threshold', '&lt;f8'), ('impurity', '&lt;f8'), ('n_node_samples', '&lt;i8'), ('weighted_n_node_samples', '&lt;f8')] </code></pre> <p>Code:</p> <pre><code>import pickle for m in models: file = 'finalized_model_' + m + '.sav' loaded_model = pickle.load(open(file, 'rb')) df[m] = loaded_model.predict_proba(X)[:, 1] print(m) </code></pre> <p>Attempted Solution: I've tried to mitigate this issue by loading and re-saving the model using a higher protocol:</p> <pre><code>import pickle from sklearn import model_selection # Load the model with open(&quot;path_to_old_model.pkl&quot;, 'rb') as file: model = pickle.load(file) # Re-save using a higher protocol with open(&quot;path_to_updated_model.pkl&quot;, 'wb') as file: pickle.dump(model, file, protocol=pickle.HIGHEST_PROTOCOL) </code></pre> <p>However, after upgrading the environment to Python 3.10 and the latest version of scikit-learn, I still encounter the same error when attempting to load the re-saved model.</p> <p>Is there a way to successfully load these models in the newer Python environment without losing their functionality?</p>
<python><scikit-learn><pickle><version-compatibility>
2024-07-23 19:21:12
1
421
user3369545
78,785,166
2,893,712
Pandas Flatten Row When Doing Groupby
<p>I have a Pandas dataframe that has address and contact information. The rows are occasionally duplicated because there are different values in the respective contact information (the address information remains the same for each vendor ID)</p> <pre><code>ID Vendor Name Vendor Address City State Zip Email Phone Name 1 Google 123 Street St Example CA 94000 John 1 Google 123 Street St Example CA 94000 a@b.com 2 Microsoft 456 County Rd Madeup NY 12345 c@d.com 2 Microsoft 456 County Rd Madeup NY 12345 555-1234 2 Microsoft 456 County Rd Madeup NY 12345 Jane </code></pre> <p>How do I group by the <code>ID</code> field and flatten the <code>Email</code>, <code>Phone</code>, and <code>Name</code> fields (assume there are no collisions, i.e., for each ID at most one row has a value in each of these fields)?</p> <p>For the example provided, the result would be:</p> <pre><code>ID Vendor Name Vendor Address City State Zip Email Phone Name 1 Google 123 Street St Example CA 94000 a@b.com John 2 Microsoft 456 County Rd Madeup NY 12345 c@d.com 555-1234 Jane </code></pre> <p>I saw from <a href="https://stackoverflow.com/a/32686738/2893712">this answer</a> that I can do something like:</p> <pre><code> df.groupby(by='ID').apply(lambda group: ','.join(group['Email'])) </code></pre> <p>but this would only join one column.</p>
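Assuming the empty cells are NaN/None rather than empty strings, one sketch is `GroupBy.first`, which keeps the first non-null value per column within each group:

```python
import pandas as pd

# Trimmed-down frame with hypothetical values; empty cells must be
# NaN/None, not empty strings, for first() to skip them.
df = pd.DataFrame({
    'ID': [1, 1, 2, 2, 2],
    'Vendor Name': ['Google', 'Google', 'Microsoft', 'Microsoft', 'Microsoft'],
    'Email': [None, 'a@b.com', 'c@d.com', None, None],
    'Phone': [None, None, None, '555-1234', None],
    'Name': ['John', None, None, None, 'Jane'],
})

# first() takes the first non-null value in every column per group,
# flattening the duplicated rows into one row per ID.
out = df.groupby('ID', as_index=False).first()
```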
<python><pandas><flatten>
2024-07-23 19:05:09
1
8,806
Bijan
78,785,096
3,623,537
Why undeclared variables are present in `globals()` after a reload and is it safe to use them to identify a reload?
<p>I've found that the snippet below, when reloading a module <code>test</code>, unexpectedly has all variables already defined in <code>globals()</code>/<code>locals()</code>.</p> <p>Why does this happen?</p> <p>I've noticed this <code>&quot;xxx&quot; in locals()</code> pattern <a href="https://github.com/search?q=%22%5C%22bpy%5C%22%20in%20locals()%22&amp;type=code" rel="nofollow noreferrer">a lot in Blender Python scripts</a>, as people typically use it to check whether the module was reloaded before (you typically reload those scripts a lot during development). Since everyone's using it, I guess it should work most of the time.</p> <p>But is it really safe to rely on this quirk to identify whether a module was already loaded before (any ideas of cases where it wouldn't work?)? I mean not just for development and testing, since it looks more like an implementation detail that could break in production.</p> <pre class="lang-py prettyprint-override"><code># prints False, False, False import test import importlib # prints True, True, True importlib.reload(test) </code></pre> <p>test.py:</p> <pre class="lang-py prettyprint-override"><code>print(&quot;a&quot; in globals(), &quot;b&quot; in globals(), &quot;c&quot; in globals()) # NameError: name 'a' is not defined # print(a) a = 25 b = 35 c = 45 </code></pre>
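A stdlib-only sketch of the underlying mechanism: `importlib.reload` re-executes the module body in the module's *existing* namespace rather than a fresh one, so names defined by the previous run are still present. The same effect can be imitated with `exec`:

```python
# Simulate import + reload by executing a "module body" twice in the
# same namespace dict, which is what importlib.reload does with
# the module's __dict__.
body = "present = 'a' in globals()\na = 25"

namespace = {}
exec(body, namespace)
first = namespace['present']   # False: fresh namespace, 'a' not yet defined

exec(body, namespace)
second = namespace['present']  # True: 'a' survived from the first run
```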
<python><python-importlib>
2024-07-23 18:43:34
2
469
FamousSnake
78,785,051
5,522,938
Python import path when executed through symlink
<p>I have a project directory that looks like this:</p> <pre><code>project/ ├── my_input.py ├── run_me.py └── test ├── my_input.py └── run_me.py -&gt; ../run_me.py </code></pre> <p>The <code>run_me.py</code> script imports from <code>my_input.py</code>. My expectation is that the symlinked <code>test/run_me.py</code> would import from <code>test/my_input.py</code>, but what I actually find is that it imports from the directory of its target. What can I do to get my expected behavior?</p> <p>Execution example:</p> <p><code>run_me.py</code>:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 from my_input import * </code></pre> <p><code>my_input.py</code>:</p> <pre><code>print(&quot;imported from my_input.py&quot;) </code></pre> <p><code>test/my_input.py</code></p> <pre><code>print(&quot;imported from test/my_input.py&quot;) </code></pre> <p>output of <code>./run_me.py</code> when working directory is <code>test</code>: <code>imported from my_input.py</code></p>
<python>
2024-07-23 18:34:16
1
952
roro
78,785,004
511,302
Test if any object referred to by a base model has a specified value in the Django ORM
<p>Well, consider two models:</p> <pre><code>class School(models.Model):
    name = models.TextField(default=&quot;hello world&quot;)

class Student(models.Model):
    key = models.TextField(default=&quot;somedata&quot;)
    a = models.ForeignKey(School, on_delete=models.CASCADE)
</code></pre> <p>So there is a many-to-one relationship from Student to School. How would I get (from School's perspective) all School objects which have a Student that set &quot;key&quot; to the required value?</p> <p>Especially in the already existing query:</p> <pre><code>School.objects.filter(Q(name=&quot;hello world&quot;) | Q(query_should_go_here))
</code></pre> <p>I've tried</p> <pre><code>School.objects.filter(Q(name=&quot;hello world&quot;) | Q(student__key=&quot;test&quot;))
</code></pre> <p>However that (obviously) fails with:</p> <pre><code>django.core.exceptions.FieldError: Unsupported lookup 'key' for ManyToOneRel or join on the field not permitted.
</code></pre>
<python><django><orm><foreign-keys>
2024-07-23 18:17:51
1
9,627
paul23
78,784,850
1,471,980
How do you create a pivot table grouped by Date and perform a calculation on 2 values in pandas
<p>I have this data frame:</p> <p>df</p> <pre><code>Node     Interface  Speed  Band_In  carrier  Date
Server1  wan1       100    80       ATT      2024-06-01
Server1  wan2       100    60       Sprint   2024-06-01
Server1  wan3       100    96       Verizon  2024-06-01
Server2  wan1       100    80       ATT      2024-06-01
Server2  wan2       100    60       ATT      2024-06-01
Server2  wan3       100    96       ATT      2024-06-01
Server3  wan1       100    80       ATT      2024-06-01
Server3  wan2       100    60       ATT      2024-06-01
Server3  wan3       100    96       ATT      2024-06-01
Server4  wan1       100    80       ATT      2024-06-01
Server4  wan2       100    60       ATT      2024-06-01
Server5  Int3       100    96       Verizon  2024-06-01
Server1  wan1       100    30       ATT      2024-06-10
Server1  wan2       100    30       Sprint   2024-06-10
Server1  wan3       100    15       Verizon  2024-06-10
Server2  wan1       100    80       ATT      2024-06-10
Server2  wan2       100    60       ATT      2024-06-10
Server2  wan3       100    96       ATT      2024-06-10
Server3  wan1       100    80       ATT      2024-06-10
Server3  wan2       100    60       ATT      2024-06-10
Server3  wan3       100    96       ATT      2024-06-10
Server4  wan1       100    80       ATT      2024-06-10
Server4  wan2       100    60       ATT      2024-06-10
Server5  Int3       100    96       Verizon  2024-06-10
</code></pre> <p>I need this data frame reorganized so that each unique date is in its own column, with the used interface speed (Band_In/Speed)*100 calculated for each Node, Interface, carrier and Date (the Date needs to be Month Name-Day, e.g. June-01, June-10).</p> <p>It needs to be something like this:</p> <p>df1:</p> <pre><code>Node     Interface  Speed  Band_In  carrier  1-Jun  10-Jun
Server1  wan1       100    80       ATT      80     30
Server1  wan2       100    60       Sprint   60     30
Server1  wan3       100    96       Verizon  96     15
</code></pre> <p>I tried this:</p> <pre><code>df1 = df.pivot(index=['Node', 'Interface', 'Speed', 'Band_In', 'carrier'],
               columns='Date',
               values='Speed'/'Interface'*100).fillna('').reset_index()
</code></pre> <p>I get &quot;Length of passed values is 11,456, index implies 5&quot;.</p> <p>Any ideas what I am doing wrong here?</p>
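A hedged sketch of one way to get the desired shape (a small illustrative frame with the question's column names; which columns to keep in the index is an assumption): compute the utilization as a numeric helper column first, then pivot on a formatted date.

```python
import pandas as pd

# Tiny stand-in for the question's frame (Server1 rows for two dates).
df = pd.DataFrame({
    "Node":      ["Server1", "Server1", "Server1", "Server1"],
    "Interface": ["wan1", "wan2", "wan1", "wan2"],
    "Speed":     [100, 100, 100, 100],
    "Band_In":   [80, 60, 30, 30],
    "carrier":   ["ATT", "Sprint", "ATT", "Sprint"],
    "Date": pd.to_datetime(["2024-06-01", "2024-06-01",
                            "2024-06-10", "2024-06-10"]),
})

# The calculation must run on the numeric columns, not on the column-name
# strings as in values='Speed'/'Interface'*100 (strings cannot be divided).
df["util"] = df["Band_In"] / df["Speed"] * 100
df["Day"] = df["Date"].dt.strftime("%b-%d")   # e.g. "Jun-01", "Jun-10"

out = (df.pivot_table(index=["Node", "Interface", "carrier"],
                      columns="Day", values="util")
         .reset_index())
print(out)
```

`Speed` and `Band_In` are left out of the index here because they vary per date; whether to carry one snapshot of them along is a design choice.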
<python><pandas><pivot>
2024-07-23 17:35:58
1
10,714
user1471980
78,784,723
7,339,624
Why does my RNN not converge on a simple task?
<p>I want to create a recursive model to solve the simplest sequence that I know: an arithmetic progression. With <code>a</code> as the base and <code>d</code> as the step size, the sequence is as follows:</p> <p><code>a, a+d, a+2d, a+3d, a+4d, ...</code></p> <p>To solve this, denoting the hidden state as <code>h</code>, the model has to learn a simple 2*2 matrix. This is actually setting <code>h1 = t0</code>.</p> <p><a href="https://i.sstatic.net/Qsy1qmgn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Qsy1qmgn.png" alt="enter image description here" /></a></p> <p>To put it in other words, you can see it like this too:</p> <p><a href="https://i.sstatic.net/U7fmayED.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U7fmayED.png" alt="enter image description here" /></a></p> <p>So this model with a 2*2 fully connected layer should be able to learn this matrix:</p> <pre><code>class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(2, 2, bias=False)

    def forward(self, x):
        x = self.fc1(x)
        return x
</code></pre> <p>But to my surprise it does not converge! There must be something wrong with my setup; I would appreciate any help finding it. I suspect the problem is in my training loop.</p> <p>P.S. I intentionally set the batch size to 1 right now. I want to work with padding the input data later. 
The model should learn without batches anyway.</p> <pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, Dataset
import numpy as np

class CustomDataset(Dataset):
    def __init__(self, size):
        self.size = size

    def __len__(self):
        return self.size

    def __getitem__(self, index):
        a0 = (np.random.rand() - 0.5) * 200
        d = (np.random.rand() - 0.5) * 40
        length = np.random.randint(2, MAX_Length_sequence + 1)
        sequence = np.arange(length) * d + a0
        next_number = sequence[-1] + d
        return length, torch.tensor(sequence, dtype=torch.float32), torch.tensor(next_number, dtype=torch.float32)

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.fc1 = nn.Linear(2, 2, bias=False)

    def forward(self, x):
        x = self.fc1(x)
        return x

# Hyperparameters
EPOCHS = 10
BATCH_SIZE = 1
LEARNING_RATE = 0.001
DATASET_SIZE = 10000

criterion = nn.MSELoss()

# Model
model = Model()
optimizer = optim.Adam(model.parameters(), lr=LEARNING_RATE)
</code></pre> <p>My training loop:</p> <pre><code>for epoch in range(EPOCHS):
    dataset = CustomDataset(DATASET_SIZE)
    dataloader = DataLoader(dataset, batch_size=BATCH_SIZE)
    model.train()
    total_loss = 0

    for length, sequence, next_number in dataloader:
        optimizer.zero_grad()
        loss = 0
        h = torch.zeros(BATCH_SIZE)
        for i in range(length):
            x = torch.cat([h, sequence[0, i].unsqueeze(0)])
            y = sequence[0, i + 1] if i != length - 1 else next_number[0]
            output = model(x)
            h, y_hat = output[0].unsqueeze(0), output[1]
            loss += criterion(y_hat, y)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()

    print(f'Epoch {epoch+1}, Loss: {total_loss/len(dataloader)}')
</code></pre>
<python><machine-learning><deep-learning><pytorch><recurrent-neural-network>
2024-07-23 16:59:29
1
4,337
Peyman
78,784,712
249,696
How can I write the type of a function that returns a dynamically imported class?
<p>I have a Python function that looks like this</p> <pre class="lang-py prettyprint-override"><code>def my_function() -&gt; Any:
    # This will fail if `optional_dependency` is not installed, I'm good with that.
    from optional_dependency import SomeClass
    return SomeClass()
</code></pre> <p>I'm not a huge fan of this function, but I've inherited it and I can't break it (just yet). I wonder if there is a better return type than <code>Any</code>.</p> <p>Pyright complains whether I write</p> <ul> <li><code>-&gt; SomeClass</code></li> <li><code>-&gt; &quot;SomeClass&quot;</code></li> </ul> <p>And Ruff complains if I write</p> <ul> <li><code>-&gt; &quot;SomeClass&quot;: # type: ignore</code></li> </ul> <p>Is there anything that can keep both Pyright and Ruff (and ideally mypy) happy?</p>
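A sketch of the usual pattern for this situation, hedged: import the class under `typing.TYPE_CHECKING` so the name exists for Pyright/mypy, while the optional dependency (`optional_dependency` here, mirroring the question) is still only imported at call time. Whether Ruff stays quiet depends on its configured rules.

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by static type checkers, never at runtime, so the
    # optional dependency does not need to be installed to import this module.
    from optional_dependency import SomeClass


def my_function() -> "SomeClass":
    from optional_dependency import SomeClass  # still fails loudly if missing
    return SomeClass()


# At runtime the annotation is just the string; no import happened.
print(my_function.__annotations__["return"])  # → SomeClass
```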
<python><python-typing><mypy><pyright><ruff>
2024-07-23 16:57:24
1
4,139
Yoric
78,784,687
2,328,154
Setting UserPool Client Attributes to have read/write access with aws cdk - Python
<p>I am trying to give some custom attributes specific read/write access depending on the attribute. I am getting this error:</p> <p>Resource handler returned message: &quot;Invalid write attributes specified while creating a client (Service: CognitoIdentityProvider, Status Code: 400, Request ID: &lt;request_id&gt;)&quot; (RequestToken: &lt;request_token&gt;, HandlerErrorCode: InvalidRequest)</p> <p>Can anyone point me in the right direction or tell me why this is happening? Obviously, I understand what the error is telling me, but I don't know what (specifically) is causing it or how to fix it. Maybe something to do with the way I am creating the attribute to begin with...</p> <p>Here is my code:</p> <pre><code>self.my_user_pool = cognito.UserPool(
    self,
    COGNITO_USER_POOL_ID,
    sign_in_aliases=cognito.SignInAliases(email=True),
    self_sign_up_enabled=True,
    auto_verify=cognito.AutoVerifiedAttrs(email=True),
    user_verification=cognito.UserVerificationConfig(
        email_style=cognito.VerificationEmailStyle.LINK
    ),
    custom_attributes={
        &quot;custom_attribute_1&quot;: cognito.StringAttribute(mutable=True),
        &quot;custom_attribute_2&quot;: cognito.StringAttribute(mutable=True),
    },
    password_policy=cognito.PasswordPolicy(
        min_length=8,
        require_lowercase=True,
        require_uppercase=True,
        require_digits=True,
        require_symbols=True,
    ),
    account_recovery=cognito.AccountRecovery.EMAIL_ONLY,
    removal_policy=RemovalPolicy.DESTROY,
)

client_read_attributes = (cognito.ClientAttributes()).with_custom_attributes(
    &quot;custom:custom_attribute_1&quot;, &quot;custom:custom_attribute_2&quot;
)
client_write_attributes = (cognito.ClientAttributes()).with_custom_attributes(
    &quot;custom:custom_attribute_1&quot;
)

self.my_user_pool_client = self.user_pool.add_client(
    &quot;&lt;my_cognito_client_id&gt;&quot;,
    access_token_validity=Duration.minutes(60),
    id_token_validity=Duration.minutes(60),
    refresh_token_validity=Duration.days(1),
    auth_flows=cognito.AuthFlow(admin_user_password=True, user_srp=True, custom=True),
    o_auth=cognito.OAuthSettings(flows=cognito.OAuthFlows(implicit_code_grant=True)),
    prevent_user_existence_errors=True,
    generate_secret=True,
    read_attributes=client_read_attributes,
    write_attributes=client_write_attributes,
    enable_token_revocation=True,
)
</code></pre>
<python><amazon-cognito><aws-cdk>
2024-07-23 16:50:55
1
421
MountainBiker
78,784,668
7,106,343
Query a SQLModel class with abstract definition
<p>In this project, we have restricted imports between packages as follows:</p> <ul> <li>We have a shared package.</li> <li>We have a backend package.</li> <li>Imports from the backend package to the shared package are not allowed, but imports from the shared package to the backend package are permitted. I cannot change this config.</li> <li>Currently, some backend models, specifically SQLModel classes, need to be used in the shared package.</li> </ul> <p>Here’s the solution I have implemented (paths are shortened):</p> <p>in <code>shared/models.py</code>:</p> <pre><code>class SharedBaseSQLModel(SQLModel):
    # some shared fields like id
    # customized methods like save and read for shared package

class Job(SharedBaseSQLModel):  # without table=True
    __abstract__ = True
    # some fields like name, status
</code></pre> <p>in <code>backend/models.py</code>:</p> <pre><code>from sqlmodel import SQLModel
from shared.models import Job as SharedJob

class BaseSQLModel(SQLModel):
    # customized methods like save and read for backend

class Job(BaseSQLModel, SharedJob, table=True):  # here I use table=True
    # some extra methods
</code></pre> <p>In <code>shared</code> I need to be able to do this query:</p> <pre><code>statement = select(Job).where(Job.name == 'foo').order_by(...).limit(...)
return session.exec(statement).all()
</code></pre> <p>Error happens here: <code>select(Job)</code></p> <ul> <li><p><strong>Current error</strong>: <code>sqlalchemy.exc.ArgumentError: Column expression, FROM clause, or other columns clause element expected, got &lt;class '..._shared.core.model.Job'&gt;</code></p> </li> <li><p>Python 3.10.14</p> </li> <li><p>sqlalchemy-spanner==1.7.0</p> </li> <li><p>sqlmodel==0.0.18</p> </li> </ul> <p>As far as I understand, if I cannot use <code>table=True</code>, I cannot use <code>select(MyModel)</code>.</p> <p>Do you have any suggestions on how to solve this problem? I'm also open to ideas for a better model design to handle situations like this.</p>
<python><sqlalchemy><fastapi>
2024-07-23 16:44:59
1
652
Arash
78,784,664
11,748,418
Python imports & interactive window
<p>I am struggling with imports in Python. Currently, this is my file structure:</p> <pre><code>.
├── my_project
│   ├── helpers
│   │   ├── SomeHelper.py
│   │   └── __init__.py
│   ├── CalculateStuff
│   │   ├── __init__.py
│   │   └── CalculateX.py
│   ├── __init__.py
│   └── main.py
├── poetry.lock
├── pyproject.toml
└── README.md
</code></pre> <p>In <code>CalculateStuff/CalculateX.py</code>, there is the class <code>CalculateX</code>, which requires <code>SomeHelper</code>. Currently, that import looks like this: <code>from helpers.SomeHelper import SomeHelper</code></p> <p>In <code>main.py</code>:</p> <pre><code>from CalculateStuff.CalculateX import CalculateX

x_calculator = CalculateX()
x_calculator.do_something()
</code></pre> <p>This works fine when I run the main script, e.g. with <code>poetry run python my_project/main.py</code>. However, the different parts are under development. I want to use <code>CalculateX.py</code> as an entry point, too, and have it find its imports as well as run some test code (with <code>if __name__ == &quot;__main__&quot;:[...]</code>).</p> <p>Furthermore, I want to be able to use the <strong>Python interactive window</strong> (within VS Code) to execute the code in <strong>any</strong> of the files in order to develop everything step by step. Unfortunately, I always get import errors.</p> <p>How can I achieve this without tampering with <code>sys.path</code>?</p> <p>The imports in my <code>CalculateStuff/CalculateX.py</code>:</p> <pre><code># Generic python imports
from helpers.SomeHelper import SomeHelper
</code></pre> <p>If I change it to <code>from ..helpers.SomeHelper import SomeHelper</code>, I get import errors:</p> <ul> <li>Running main.py: <code>ImportError: attempted relative import beyond top-level package</code></li> <li>Running SomeHelper.py in the interactive window: <code>ImportError: attempted relative import with no known parent package</code></li> </ul> <p>Thanks in advance!</p>
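A runnable sketch of the usual fix, hedged: use absolute, package-qualified imports everywhere and run the project as a module from the repo root with `python -m`. The layout below mirrors the question, with minimal stand-in file contents; for the interactive window, the equivalent is making it start with the project root as its working directory so the same absolute imports resolve.

```python
import os
import subprocess
import sys
import tempfile

# Recreate the question's layout in a temp dir (stand-in contents).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "my_project")
os.makedirs(os.path.join(pkg, "helpers"))
open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(pkg, "helpers", "__init__.py"), "w").close()
with open(os.path.join(pkg, "helpers", "SomeHelper.py"), "w") as f:
    f.write("class SomeHelper:\n    def greet(self):\n        return 'hi'\n")
with open(os.path.join(pkg, "main.py"), "w") as f:
    # Absolute, package-qualified import — no relative-import errors.
    f.write("from my_project.helpers.SomeHelper import SomeHelper\n"
            "print(SomeHelper().greet())\n")

# Run as a module from the project root — this is the key step.
out = subprocess.run([sys.executable, "-m", "my_project.main"],
                     cwd=root, capture_output=True, text=True)
print(out.stdout.strip())  # → hi
```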
<python><import><python-interactive>
2024-07-23 16:42:49
1
340
Dennis
78,784,626
11,357,695
Twine/PyPI - Invalid distribution file
<p>I am trying to upload my package (<code>SyntenyQC</code>) to test pypi as described <a href="https://packaging.python.org/en/latest/tutorials/packaging-projects/#uploading-the-distribution-archives" rel="nofollow noreferrer">here</a> via Anaconda prompt.</p> <p>I have installed <code>twine</code> and <code>build</code> as described in the link, and got a <code>dist</code> folder with a wheel and gz file. However, <code>twine</code> errors with <code>Invalid distribution file</code>.</p> <p>I am copying my token via right-click not ctrl v (avoiding <a href="https://bugs.python.org/issue37426" rel="nofollow noreferrer">this</a> error). I noticed that the printout of twine is slightly different from that of the tutorial, which may be relevant:</p> <p><strong>tutorial in link:</strong></p> <pre><code>Uploading distributions to https://test.pypi.org/legacy/ Enter your username: __token__ Uploading example_package_YOUR_USERNAME_HERE-0.0.1-py3-none-any.whl ... </code></pre> <p><strong>my traceback</strong></p> <pre><code>Uploading distributions to https://test.pypi.org/legacy/ Enter your API token: ###this line### Uploading syntenyqc-0.0.1-py3-none-any.whl ... </code></pre> <p>My full traceback is below. I suspect the issue may be that I am installing from within a venv (<code>pip_env</code>) so will try repeating everything from my base env. 
In the meantime, can anyone else suggest any obvious fixes?</p> <p>Thanks!</p> <p><strong>Traceback (path is a replacement for long paths):</strong></p> <pre><code>(pip_env) C:\Users\username\.spyder-py3\SyntenyQC&gt;pip install --upgrade twine Requirement already satisfied: twine in path Requirement already satisfied: urllib3&gt;=1.26.0 in path (from twine) (1.26.16) Requirement already satisfied: pkginfo&gt;=1.8.1 in path (from twine) (1.10.0) Requirement already satisfied: keyring&gt;=15.1 in path (from twine) (23.13.1) Requirement already satisfied: importlib-metadata&gt;=3.6 in path (from twine) (6.0.0) Requirement already satisfied: requests&gt;=2.20 in path (from twine) (2.31.0) Requirement already satisfied: readme-renderer&gt;=35.0 in path (from twine) (44.0) Requirement already satisfied: requests-toolbelt!=0.9.0,&gt;=0.8.0 in path (from twine) (1.0.0) Requirement already satisfied: rfc3986&gt;=1.4.0 in path (from twine) (2.0.0) Requirement already satisfied: rich&gt;=12.0.0 in path (from twine) (13.7.1) Requirement already satisfied: zipp&gt;=0.5 in path (from importlib-metadata&gt;=3.6-&gt;twine) (3.11.0) Requirement already satisfied: jaraco.classes in path (from keyring&gt;=15.1-&gt;twine) (3.2.1) Requirement already satisfied: pywin32-ctypes&gt;=0.2.0 in path (from keyring&gt;=15.1-&gt;twine) (0.2.0) Requirement already satisfied: nh3&gt;=0.2.14 in path (from readme-renderer&gt;=35.0-&gt;twine) (0.2.18) Requirement already satisfied: Pygments&gt;=2.5.1 in path (from readme-renderer&gt;=35.0-&gt;twine) (2.15.1) Requirement already satisfied: docutils&gt;=0.21.2 in path (from readme-renderer&gt;=35.0-&gt;twine) (0.21.2) Requirement already satisfied: charset-normalizer&lt;4,&gt;=2 in path (from requests&gt;=2.20-&gt;twine) (2.0.4) Requirement already satisfied: idna&lt;4,&gt;=2.5 in path (from requests&gt;=2.20-&gt;twine) (3.4) Requirement already satisfied: certifi&gt;=2017.4.17 in path (from requests&gt;=2.20-&gt;twine) (2024.6.2) Requirement already 
satisfied: markdown-it-py&gt;=2.2.0 in path (from rich&gt;=12.0.0-&gt;twine) (3.0.0) Requirement already satisfied: mdurl~=0.1 in path (from markdown-it-py&gt;=2.2.0-&gt;rich&gt;=12.0.0-&gt;twine) (0.1.2) Requirement already satisfied: more-itertools in path (from jaraco.classes-&gt;keyring&gt;=15.1-&gt;twine) (8.12.0) (pip_env) C:\Users\username\.spyder-py3\SyntenyQC&gt;pip install --upgrade build Requirement already satisfied: build in path (1.2.1) Requirement already satisfied: pyproject_hooks in path (from build) (1.0.0) Requirement already satisfied: colorama in path (from build) (0.4.6) Requirement already satisfied: packaging&gt;=19.1 in path (from build) (23.0) Requirement already satisfied: tomli&gt;=1.1.0 in path (from build) (2.0.1) (pip_env) C:\Users\username\.spyder-py3\SyntenyQC&gt;python -m build * Creating isolated environment: venv+pip... * Installing packages in isolated environment: - hatchling * Getting build dependencies for sdist... * Building sdist... * Building wheel from sdist * Creating isolated environment: venv+pip... * Installing packages in isolated environment: - hatchling * Getting build dependencies for wheel... * Building wheel... Successfully built syntenyqc-0.0.1.tar.gz and syntenyqc-0.0.1-py3-none-any.whl (pip_env) C:\Users\username\.spyder-py3\SyntenyQC&gt;python -m twine upload --repository testpypi dist/* Uploading distributions to https://test.pypi.org/legacy/ Enter your API token: Uploading syntenyqc-0.0.1-py3-none-any.whl 100% ---------------------------------------- 3.2/3.2 MB • 00:03 • 902.8 kB/s WARNING Error during upload. Retry with the --verbose option for more details. ERROR HTTPError: 400 Bad Request from https://test.pypi.org/legacy/ Invalid distribution file. 
</code></pre> <p><strong>EDIT</strong></p> <p>running in my base environment with fresh installs of <code>build</code>, <code>twine</code> and <code>hatchling</code> gives me a new twine error :( :</p> <pre><code>(base) C:\Users\username\.spyder-py3\SyntenyQC&gt;python -m twine upload --repository testpypi dist/* Uploading distributions to https://test.pypi.org/legacy/ ERROR InvalidDistribution: Metadata is missing required fields: Name, Version. Make sure the distribution includes the files where those fields are specified, and is using a supported Metadata-Version: 1.0, 1.1, 1.2, 2.0, 2.1, 2.2. </code></pre> <p><strong>EDIT 2</strong></p> <p><a href="https://github.com/pypi/warehouse/issues/15611#issuecomment-2003569493" rel="nofollow noreferrer">Updating pkginfo</a> fixed the error in edit 1, leading me back to the same initial error &lt;/3</p> <p><strong>EDIT 3</strong></p> <p>Twine can't find any issues with the files, so not sure what the issue is:</p> <pre><code>(base) C:\Users\u03132tk\.spyder-py3\SyntenyQC&gt;twine check dist/* Checking dist\syntenyqc-1.0-py3-none-any.whl: PASSED Checking dist\syntenyqc-1.0.tar.gz: PASSED </code></pre>
<python><build><package><pypi><twine>
2024-07-23 16:33:26
1
756
Tim Kirkwood
78,784,539
2,153,235
Module imported into different scopes, do they both refer to the same object?
<p>At the Spyder console (the REPL), I issue <code>import matplotlib.pyplot as plt</code>. It is the topmost namespace, i.e., corresponding to <code>globals()</code>.</p> <p>In a Spyder startup file, I define a function to raise figure windows to the top.</p> <pre><code>def TKraiseCFG(FigID=None):
    import matplotlib.pyplot as plt
    # The rest is just context
    #-------------------------
    # Assigning to plt unnecessary if
    # imported plt is same object as in
    # caller's scope
    plt = inspect.currentframe().f_back.f_globals['plt']
    # plt = globals()['plt']  # https://stackoverflow.com/a/78732915
    if FigID is not None:
        plt.figure(FigID)
    cfm = plt.get_current_fig_manager()
    cfm.window.attributes('-topmost', True)
    cfm.window.attributes('-topmost', False)
    return cfm
</code></pre> <p><em>Does the <code>plt</code> in <code>TKraiseCFG()</code> refer to the same object as <code>plt</code> at the REPL?</em></p> <p><strong>Further context (not the main question):</strong> I can't imagine a console/REPL (or even multiple consoles) using more than one <code>matplotlib.pyplot</code>. But I'm just getting to know Python, so I could be wrong. For the case of a single common <code>matplotlib.pyplot</code>, however, I'm seeking a way to have it accessible to all scopes so that I can write convenience/utility functions like <code>TKraiseCFG()</code> (which is just cobbled together after reading various pages, weeks ago). Unfortunately, my current method requires that code invoking <code>TKraiseCFG()</code> contain a variable specifically named <code>plt</code> referencing <code>matplotlib.pyplot</code>.</p>
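A stdlib-only sketch of the underlying rule (using `json` as a stand-in for `matplotlib.pyplot`): imported modules are cached in `sys.modules`, so every `import` in every scope binds a name to the same module object.

```python
import sys
import json  # stand-in for matplotlib.pyplot


def grab():
    # A second import anywhere is a cache hit in sys.modules:
    # it binds a local name, it does not create a new module object.
    import json as local_json
    return local_json


print(grab() is json)                 # → True
print(grab() is sys.modules["json"])  # → True
```

By this rule, the `import matplotlib.pyplot as plt` inside `TKraiseCFG()` already yields the same object as the REPL's `plt`, so the `f_back.f_globals['plt']` lookup should be unnecessary.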
<python><matplotlib><python-import>
2024-07-23 16:10:19
1
1,265
user2153235
78,784,486
3,120,501
Execution of conditional branches causing errors in Jax (kd-tree implementation)
<p>I'm writing a kd-tree in Jax, and using custom written Node objects for the tree elements. Each Node is very simple, with a single <code>data</code> field (for holding numeric values) and <code>left</code> and <code>right</code> fields which are references to other Nodes. A leaf Node is identified as one for which the <code>left</code> and <code>right</code> fields are None.</p> <p>The code performs conditional checks on the values of <code>left</code> and <code>right</code> as part of the tree traversal process - e.g. it will only try to traverse down the left or right branch of a node's subtree if it actually exists. Doing checks like <code>if (current_node.left is not None)</code> (or does it have to be <code>jax.numpy.logical_not(current_node.left is None)</code> in Jax - I've tried both?) was fine for this, but since converting the <code>if</code> statements to <code>jax.lax.cond(...)</code> I've been getting the error <code>AttributeError: 'NoneType' object has no attribute 'left'</code>.</p> <p>I think the situation might be like in the following minimum working example:</p> <pre><code>import jax
import jax.numpy as jnp

def my_func(val):
    return 2*val

@jax.jit
def test_fn(a):
    return jax.lax.cond(a is not None, lambda: my_func(a), lambda: 0)

print(test_fn(2))
# Prints 4
# in test_fn(), a has type &lt;class 'jax._src.interpreters.partial_eval.DynamicJaxprTracer'&gt;

print(test_fn(None))
# TypeError: unsupported operand type(s) for *: 'int' and 'NoneType'
# in test_fn(), a has type &lt;class 'NoneType'&gt;
</code></pre> <p>In this code, if the Jax <code>cond</code> statement were a regular <code>if</code> statement, <code>my_func()</code> wouldn't even be called when <code>a</code> is None, and no error would be raised. To the best of my understanding, Jax traces the function, meaning that all branches are executed, and this leads to <code>my_func()</code> being called with None (when <code>a</code> is None), causing the error. I believe a similar situation is arising in my tree code, where conditional branches are being executed even though <code>.left</code> and/or <code>.right</code> are None, and a traditional <code>if</code> statement wouldn't lead to execution of those code branches.</p> <p>Is my understanding correct, and what could I do about this issue? Strangely, the minimum working example code also has the problem when the <code>@jax.jit</code> decorator is omitted, suggesting that both branches are still being traced.</p> <hr /> <p>As a related point, is the tree structure 'baked into' the Jax/XLA code? I have noticed that when using larger trees the code takes longer to be jit-compiled, which makes me concerned that this might not be a valid approach with the very large number of points I need to represent (about 14,000,000). I would use the regular Scipy kd-tree implementation, but this isn't compatible with Jax unfortunately, and the rest of my code requires it. I might ask this as a separate question for clarity.</p>
<python><jax><kdtree>
2024-07-23 15:56:18
1
528
LordCat
78,784,232
5,355,024
AWS Route Calculator API / HERE Maps
<p>I am using AWS Location Service's Route Calculator to determine the travel time and travel distance of several origin-destination pairs. The code works, but it does not reflect travel delay due to traffic: the routes it calculates between two cities with departure times of 2 AM and 11 AM differ by only a couple of minutes. Doing the same search manually in HERE Maps shows the same base times, but the 11 AM departure shows a significant delay, like +40 minutes, probably due to traffic. Is this data available in the AWS API?</p> <pre><code>Town1  Town2        Mile 2  Mile 8  Mile 16  Mile 23  Time 2  Time 8  Time 16  Time 23
ACTON  CHARLEMONT   86.56   86.56   87.98    87.98    103.68  103.89  104.95   104.31
ACTON  CHARLESTOWN  23.43   23.43   22.98    23.43    38.09   38.14   44.33    42.89

def initialize_amazon_client(aws_access_key_id, aws_secret_access_key):
    &quot;&quot;&quot;Initialize and return an Amazon Location Service client.&quot;&quot;&quot;
    session = boto3.Session(
        aws_access_key_id=aws_access_key_id,
        aws_secret_access_key=aws_secret_access_key,
        region_name='us-east-1'
    )
    return session.client('location')

client = initialize_amazon_client(aws_access_key_id, aws_secret_access_key)

response = client.calculate_route(
    CalculatorName=calculator_name,
    DeparturePosition=origin,
    DestinationPosition=destination,
    TravelMode='Car',
    DepartNow=False,
    DepartureTime=departure_time,
    DistanceUnit='Miles',
    CarModeOptions={
        'AvoidFerries': False,
        'AvoidTolls': False
    }
)
</code></pre>
<python><amazon-web-services><routes><boto3><calculator>
2024-07-23 15:03:34
1
368
Jorge
78,784,224
3,048,716
Cannot deploy Vertex AI model locally using custom predictor
<p>Following the <a href="https://cloud.google.com/vertex-ai/docs/predictions/custom-prediction-routines" rel="nofollow noreferrer">documentation</a> I have what I believe to be a minimal form of a custom predictor which I am trying to deploy locally.</p> <p>I have a model dir which contains an empty requirements.txt and a predictor.py containing:</p> <pre class="lang-py prettyprint-override"><code>from typing import List from google.cloud.aiplatform.prediction.predictor import Predictor class MyPredictorBasic(Predictor): def load(self, artifacts_uri: str) -&gt; None: pass def predict(self, instances: List): return [&quot;foo&quot; for i in instances] </code></pre> <p>I then have a script for running the model locally:</p> <pre class="lang-py prettyprint-override"><code>from google.cloud.aiplatform.prediction import LocalModel from google.cloud import aiplatform from model.predictor import MyPredictorBasic display_name = &quot;test_model&quot; model_path = &quot;./model/&quot; project_id = &quot;ai-play-430308&quot; repository = &quot;test-repo&quot; image = &quot;test-image&quot; region = &quot;europe-west2&quot; output_image_uri = f&quot;{region}-docker.pkg.dev/{project_id}/{repository}/{image}&quot; requirements_path = model_path + &quot;requirements.txt&quot; aiplatform.init(project=project_id, location=region) local_model = LocalModel.build_cpr_model( src_dir=model_path, output_image_uri=output_image_uri, predictor=MyPredictorBasic, requirements_path=requirements_path, ) spec = local_model.get_serving_container_spec() print(spec) with local_model.deploy_to_local_endpoint() as local_endpoint: health_check_response = local_endpoint.run_health_check() print(&quot;health_check_response&quot;, health_check_response.content) </code></pre> <p>I get an error when calling <code>deploy_to_local_endpoint</code></p> <p>The entire output is:</p> <pre><code>/usr/lib/python3.10/subprocess.py:955: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, 
the default buffer size will be used self.stdin = io.open(p2cwrite, 'wb', bufsize) /usr/lib/python3.10/subprocess.py:961: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used self.stdout = io.open(c2pread, 'rb', bufsize) image_uri: &quot;europe-west2-docker.pkg.dev/ai-play-430308/test-repo/test-image&quot; predict_route: &quot;/predict&quot; health_route: &quot;/health&quot; health_check_response b'{}' Exception ignored in: &lt;function LocalEndpoint.__del__ at 0x7f129b0479a0&gt; Traceback (most recent call last): File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/google/cloud/aiplatform/prediction/local_endpoint.py&quot;, line 214, in __del__ File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/google/cloud/aiplatform/prediction/local_endpoint.py&quot;, line 285, in stop File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/google/cloud/aiplatform/prediction/local_endpoint.py&quot;, line 346, in _stop_container_if_exists File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/docker/models/containers.py&quot;, line 452, in stop File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/docker/utils/decorators.py&quot;, line 19, in wrapped File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/docker/api/container.py&quot;, line 1211, in stop File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/docker/utils/decorators.py&quot;, line 44, in inner File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/docker/api/client.py&quot;, line 242, in _post File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/requests/sessions.py&quot;, line 637, in post File 
&quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/requests/sessions.py&quot;, line 589, in request File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/requests/sessions.py&quot;, line 703, in send File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/requests/adapters.py&quot;, line 667, in send File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/urllib3/connectionpool.py&quot;, line 789, in urlopen File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/urllib3/connectionpool.py&quot;, line 536, in _make_request File &quot;/home/andy/Documents/code/python/vertex_ai_play/.env/lib/python3.10/site-packages/urllib3/connection.py&quot;, line 461, in getresponse ImportError: sys.meta_path is None, Python is likely shutting down </code></pre> <p>I also tried an alternative form of running the model locally:</p> <pre class="lang-py prettyprint-override"><code>local_endpoint = local_model.deploy_to_local_endpoint() local_endpoint.serve() </code></pre> <p>Which result in the same error. If I step through line by line the error is triggered by <code>local_endpoint.serve()</code>.</p> <p>Any help would be much appreciated</p>
<python><google-cloud-vertex-ai><google-cloud-aiplatform>
2024-07-23 15:01:40
0
357
Andy T
78,784,043
15,209,268
How to Stream RTSP Video from IP Camera to HTML Video Element?
<p>I have an IP camera that streams video using the RTSP protocol. My system needs to connect to this IP camera and stream the video to a web browser. Since most browsers don't natively support RTSP, I need an intermediary server to handle the RTSP stream. I implemented a FastAPI web server that uses OpenCV to connect to the IP camera and stream the video, and it works; I can see the video in the browser.</p> <p>Currently, my implementation streams the video as an image element, but I want to stream it as a video element. How can I achieve this with my server?</p> <p>Additionally, are there any ready-to-use servers that can stream video from an IP camera to an HTML video element that you would recommend?</p> <p>Is streaming the video as a JPEG efficient?</p> <p>Reading frame from IP camera implementation:</p> <pre><code>class VideoStreamService:
    def __init__(self):
        pass

    def get_frame(self, connection_string: str):
        cap = cv2.VideoCapture(connection_string)
        while True:
            # Capture frame-by-frame
            success, frame = cap.read()  # read the camera frame
            if not success:
                break
            else:
                ret, encodded_frame = cv2.imencode('.jpg', frame)
                # frame = buffer.tobytes()
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + bytearray(encodded_frame) + b'\r\n')
</code></pre> <p>Stream response from the server:</p> <pre><code>def get_video_stream(
    connection_string: str = Query(..., alias=&quot;connection_string&quot;),
    video_stream_service: VideoStreamService = Depends(Provide[Container.video_stream_service]),
) -&gt; StreamingResponse:
    try:
        encodded_frame = video_stream_service.get_frame(unquote(connection_string))
        return StreamingResponse(encodded_frame, media_type=&quot;multipart/x-mixed-replace;boundary=frame&quot;)
    except ValueError as error:
        ...
</code></pre> <p>I used <a href="https://stackoverflow.com/questions/65971081/streaming-video-from-camera-in-fastapi-results-in-frozen-image-after-first-frame">this</a> SO question as a reference to my implementation</p>
<python><opencv><video-streaming><fastapi><http-live-streaming>
2024-07-23 14:29:16
1
444
Oded
78,783,974
11,117,255
How do I convert a 300-degree equirectangular panoramic image to cube faces?
<p>A 300-degree equirectangular panoramic image I want to convert into cube faces using OpenCV in Python. I found code for 360-degree images. How do I modify it to handle a 300-degree image?</p> <pre><code>import cv2 import numpy as np def equirectangular_to_cube(img, cube_size): h, w = img.shape[:2] # # Create cube map faces cube_faces = np.zeros((cube_size, cube_size * 6, 3), dtype=np.uint8) # # Calculate the size of each cube face face_size = cube_size # # Define the mapping coordinates for 360 degrees x = np.linspace(-np.pi, np.pi, num=w, dtype=np.float32) y = np.linspace(np.pi / 2, -np.pi / 2, num=h, dtype=np.float32) # # Create grid of coordinates xx, yy = np.meshgrid(x, y) # # Calculate 3D coordinates z = np.cos(yy) * np.cos(xx) x = np.cos(yy) * np.sin(xx) y = np.sin(yy) # # Normalize coordinates norm = np.sqrt(x**2 + y**2 + z**2) x /= norm y /= norm z /= norm # # Map coordinates to cube faces front_mask = (z &gt;= np.abs(x)) &amp; (z &gt;= np.abs(y)) right_mask = (x &gt;= np.abs(y)) &amp; (x &gt;= np.abs(z)) back_mask = (z &lt;= -np.abs(x)) &amp; (z &lt;= -np.abs(y)) left_mask = (x &lt;= -np.abs(y)) &amp; (x &lt;= -np.abs(z)) top_mask = (y &gt;= np.abs(x)) &amp; (y &gt;= np.abs(z)) bottom_mask = (y &lt;= -np.abs(x)) &amp; (y &lt;= -np.abs(z)) # # Interpolate and assign pixel values to cube faces for i in range(cube_size): for j in range(cube_size): # Front face u = (0.5 + 0.5 * x[front_mask] / z[front_mask]) * (w - 1) v = (0.5 + 0.5 * y[front_mask] / z[front_mask]) * (h - 1) cube_faces[i, j] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # # Right face u = (0.5 + 0.5 * z[right_mask] / x[right_mask]) * (w - 1) v = (0.5 + 0.5 * y[right_mask] / x[right_mask]) * (h - 1) cube_faces[i, j + cube_size] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # # Back face u = (0.5 - 0.5 * x[back_mask] / z[back_mask]) * (w - 1) v = (0.5 + 0.5 * 
y[back_mask] / z[back_mask]) * (h - 1) cube_faces[i, j + cube_size*2] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # # Left face u = (0.5 - 0.5 * z[left_mask] / x[left_mask]) * (w - 1) v = (0.5 + 0.5 * y[left_mask] / x[left_mask]) * (h - 1) cube_faces[i, j + cube_size*3] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # # Top face u = (0.5 + 0.5 * x[top_mask] / y[top_mask]) * (w - 1) v = (0.5 - 0.5 * z[top_mask] / y[top_mask]) * (h - 1) cube_faces[i, j + cube_size*4] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # # Bottom face u = (0.5 + 0.5 * x[bottom_mask] / y[bottom_mask]) * (w - 1) v = (0.5 + 0.5 * z[bottom_mask] / y[bottom_mask]) * (h - 1) cube_faces[i, j + cube_size*5] = cv2.remap(img, u.reshape(-1, 1).astype(np.float32), v.reshape(-1, 1).astype(np.float32), cv2.INTER_LINEAR) # return cube_faces # Usage example image_path = 'path/to/300_degree_image.jpg' cube_size = 512 img = cv2.imread(image_path) cube_faces = equirectangular_to_cube(img, cube_size) # Save the cube faces as separate images front = cube_faces[:, :cube_size] right = cube_faces[:, cube_size:cube_size*2] back = cube_faces[:, cube_size*2:cube_size*3] left = cube_faces[:, cube_size*3:cube_size*4] top = cube_faces[:, cube_size*4:cube_size*5] bottom = cube_faces[:, cube_size*5:] cv2.imwrite(&quot;front.jpg&quot;, front) cv2.imwrite(&quot;right.jpg&quot;, right) cv2.imwrite(&quot;back.jpg&quot;, back) cv2.imwrite(&quot;left.jpg&quot;, left) cv2.imwrite(&quot;top.jpg&quot;, top) cv2.imwrite(&quot;bottom.jpg&quot;, bottom) </code></pre> <p>It defines mapping coordinates for a 360-degree image using <code>np.linspace(-np.pi, np.pi, num=w, dtype=np.float32)</code>. 
How do I modify it to account for the reduced field of view?</p> <p>Image:</p> <p><a href="https://i.sstatic.net/JmNaue2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JmNaue2C.png" alt="" /></a></p> <p>What I've tried:</p> <pre><code># convert using an inverse transformation def convertBack(imgIn,imgOut): inSize = imgIn.size outSize = imgOut.size inPix = imgIn.load() outPix = imgOut.load() # extendedSize = [0,0] extendedSize[0] = int(360/300 * inSize[0]) extendedSize[1] = inSize[1] edge = int(extendedSize[0]/4) # theoretical length of each edge # for i in range(outSize[0]): face = int(i/edge) # 0 - back, 1 - left 2 - front, 3 - right if face==2: rng = range(0,edge*3) else: rng = range(edge,edge*2) # for j in rng: if j&lt;edge: face2 = 4 # top elif j&gt;=2*edge: face2 = 5 # bottom else: face2 = face # (x,y,z) = outImgToXYZ(i,j,face2,edge) theta = atan2(y,x) # range -pi to pi r = hypot(x,y) phi = atan2(z,r) # range -pi/2 to pi/2 # source img coords uf = ( 2.0*edge*(theta + pi)/pi ) vf = ( 2.0*edge * (pi/2 - phi)/pi) if uf &lt; inSize[0]: # Use bilinear interpolation between the four surrounding pixels ui = floor(uf) # coord of pixel to bottom left vi = floor(vf) u2 = clip(ui+1, 0, inSize[0]-1) # Clip u2 to stay within the valid range v2 = clip(vi+1, 0, inSize[1]-1) # Clip v2 to stay within the valid range mu = uf-ui # fraction of way across pixel nu = vf-vi # Pixel values of four corners A = inPix[ui,vi] B = inPix[u2,vi] C = inPix[ui,v2] D = inPix[u2,v2] # interpolate (r,g,b) = ( A[0]*(1-mu)*(1-nu) + B[0]*(mu)*(1-nu) + C[0]*(1-mu)*nu+D[0]*mu*nu, A[1]*(1-mu)*(1-nu) + B[1]*(mu)*(1-nu) + C[1]*(1-mu)*nu+D[1]*mu*nu, A[2]*(1-mu)*(1-nu) + B[2]*(mu)*(1-nu) + C[2]*(1-mu)*nu+D[2]*mu*nu ) else: (r,g,b) = (0,0,0) # outPix[i,j] = (int(round(r)),int(round(g)),int(round(b))) </code></pre> <p>This is the change:</p> <pre><code>extendedSize = [0,0] extendedSize[0] = int(360/300 * inSize[0]) extendedSize[1] = inSize[1] edge = int(extendedSize[0]/4) # theoretical 
length of each edge </code></pre> <p>Update:</p> <p>I got it partially working. Any feedback would be very helpful.</p> <p>I included my updated code below</p> <pre><code> # convert using an inverse transformation def convertBack(imgIn,imgOut): inSize = imgIn.size outSize = imgOut.size inPix = imgIn.load() outPix = imgOut.load() extendedSize = imgIn.size extendedSize = (int(360/300 * inSize[0]), inSize[1]) edge = extendedSize[0]/4 # theoretical length of edge # edge = inSize[0]/4 # the length of each edge in pixels for i in range(outSize[0]): # print(i) face = int(i/edge) # 0 - back, 1 - left 2 - front, 3 - right if face==2: rng = range(0, int(edge*3)) else: rng = range(int(edge), int(edge*2)) # for j in rng: if j&lt;edge: face2 = 4 # top elif j&gt;=2*edge: face2 = 5 # bottom else: face2 = face # (x,y,z) = outImgToXYZ(i,j,face2,edge) theta = atan2(y,x) # range -pi to pi r = hypot(x,y) phi = atan2(z,r) # range -pi/2 to pi/2 # source img coords uf = ( 2.0*edge*(theta + pi)/pi ) vf = ( 2.0*edge * (pi/2 - phi)/pi) # if uf &lt; inSize[0] : # Use bilinear interpolation between the four surrounding pixels ui = floor(uf) # coord of pixel to bottom left vi = floor(vf) u2 = ui+1 # coords of pixel to top right v2 = vi+1 mu = uf-ui # fraction of way across pixel nu = vf-vi # # Clip coordinates to stay within the valid range ui = max(0, min(ui, extendedSize[0]-1)) u2 = max(0, min(u2, extendedSize[0]-1)) vi = max(0, min(vi, extendedSize[1]-1)) v2 = max(0, min(v2, extendedSize[1]-1)) # # # # Pixel values of four corners A = inPix[ui % inSize[0],clip(vi,0,inSize[1]-1)] B = inPix[u2 % inSize[0],clip(vi,0,inSize[1]-1)] C = inPix[ui % inSize[0],clip(v2,0,inSize[1]-1)] D = inPix[u2 % inSize[0],clip(v2,0,inSize[1]-1)] # interpolate (r,g,b) = ( A[0]*(1-mu)*(1-nu) + B[0]*(mu)*(1-nu) + C[0]*(1-mu)*nu+D[0]*mu*nu, A[1]*(1-mu)*(1-nu) + B[1]*(mu)*(1-nu) + C[1]*(1-mu)*nu+D[1]*mu*nu, A[2]*(1-mu)*(1-nu) + B[2]*(mu)*(1-nu) + C[2]*(1-mu)*nu+D[2]*mu*nu ) # # (r,g,b) = (0,0,0) else: (r,g,b) = 
(0,0,0) # # Clip the coordinates to stay within the output image dimensions i_clipped = max(0, min(i, outSize[0]-1)) j_clipped = max(0, min(j, outSize[1]-1)) outPix[i_clipped,j_clipped] = (int(round(r)),int(round(g)),int(round(b))) </code></pre> <p><a href="https://i.sstatic.net/mLL7BdUD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mLL7BdUD.png" alt="enter image description here" /></a></p>
<python><numpy><opencv><panoramas>
2024-07-23 14:16:11
1
2,759
Cauder
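For the 300-degree panorama question above, the core change is the longitude mapping: the source image spans only ±150° instead of ±180°, so source columns must be computed from the reduced field of view, and longitudes outside it have no pixel data. A minimal sketch of that mapping (the helper name `lon_to_u` is mine, not from the question's code):

```python
import numpy as np

FOV_DEG = 300.0  # horizontal coverage of the partial panorama, centred on longitude 0

def lon_to_u(theta, width):
    """Map longitude theta (radians, in [-pi, pi]) to a source-image column
    for an equirectangular image covering only FOV_DEG degrees.
    Longitudes outside the covered range map to NaN (no pixel data)."""
    half_fov = np.deg2rad(FOV_DEG) / 2.0
    u = (np.asarray(theta, dtype=float) + half_fov) / (2.0 * half_fov) * (width - 1)
    return np.where(np.abs(theta) <= half_fov, u, np.nan)
```

In the question's code this replaces the `np.linspace(-np.pi, np.pi, num=w)` / `uf = 2.0*edge*(theta + pi)/pi` style mappings; NaN columns can then be filled with black, as the asker already does with `(r,g,b) = (0,0,0)`.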
78,783,860
9,315,690
EOFError when trying to call Inhibit method on org.freedesktop.login1.Manager via dbus-fast
<p>I'm developing a Python application which will be distributed to users on Linux in various forms (Flatpak, PyInstaller executable, maybe others). In this application, I want to call the <code>Inhibit</code> method on the <code>org.freedesktop.login1.Manager</code> dbus interface. Due to the relatively complex/diverse distribution requirements, I went with dbus-fast which is a pure Python implementation of dbus. Additionally, dbus-python's authors start the documentation of the library with <a href="https://dbus.freedesktop.org/doc/dbus-python/#problems-and-alternatives" rel="nofollow noreferrer">various warnings</a>, which was another point in dbus-fast's favour.</p> <p>Anyway, dbus-fast seems to be working fine overall, and I am able to call e.g. the <code>ListUsers</code> method without problems like so:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from dbus_fast import BusType from dbus_fast.aio import MessageBus async def list_users() -&gt; None: system_bus = await MessageBus(bus_type=BusType.SYSTEM).connect() introspection = await system_bus.introspect( &quot;org.freedesktop.login1&quot;, &quot;/org/freedesktop/login1&quot;, ) login_object = system_bus.get_proxy_object( &quot;org.freedesktop.login1&quot;, &quot;/org/freedesktop/login1&quot;, introspection, ) login_manager_interface = login_object.get_interface( &quot;org.freedesktop.login1.Manager&quot;, ) user_list = await login_manager_interface.call_list_users() print(user_list) asyncio.run(list_users()) </code></pre> <p>Which I then can run and get a list like so:</p> <pre><code>$ python -m dbus_list_users [[1000, 'newbyte', '/org/freedesktop/login1/user/_1000']] </code></pre> <p>However, if I try to call the <code>Inhibit</code> method, I get <code>EOFError</code>, which I don't understand the meaning of in this context. 
Here is the code:</p> <pre class="lang-py prettyprint-override"><code>import asyncio from dbus_fast import BusType from dbus_fast.aio import MessageBus async def do_inhibit() -&gt; None: system_bus = await MessageBus(bus_type=BusType.SYSTEM).connect() introspection = await system_bus.introspect( &quot;org.freedesktop.login1&quot;, &quot;/org/freedesktop/login1&quot;, ) login_object = system_bus.get_proxy_object( &quot;org.freedesktop.login1&quot;, &quot;/org/freedesktop/login1&quot;, introspection, ) login_manager_interface = login_object.get_interface( &quot;org.freedesktop.login1.Manager&quot;, ) inhibit_fd = await login_manager_interface.call_inhibit( &quot;sleep&quot;, &quot;Stack Overflow example man&quot;, &quot;To demonstrate that it does not work&quot;, &quot;delay&quot;, ) print(inhibit_fd) asyncio.run(do_inhibit()) </code></pre> <p>Running it gives me this long traceback:</p> <pre><code>$ python -m dbus_inhibit Traceback (most recent call last): File &quot;&lt;frozen runpy&gt;&quot;, line 198, in _run_module_as_main File &quot;&lt;frozen runpy&gt;&quot;, line 88, in _run_code File &quot;/mnt/storage/Programming/sparvio_toolbox/dbus_inhibit.py&quot;, line 28, in &lt;module&gt; asyncio.run(do_inhibit()) File &quot;/usr/lib64/python3.12/asyncio/runners.py&quot;, line 194, in run return runner.run(main) ^^^^^^^^^^^^^^^^ File &quot;/usr/lib64/python3.12/asyncio/runners.py&quot;, line 118, in run return self._loop.run_until_complete(task) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/lib64/python3.12/asyncio/base_events.py&quot;, line 687, in run_until_complete return future.result() ^^^^^^^^^^^^^^^ File &quot;/mnt/storage/Programming/sparvio_toolbox/dbus_inhibit.py&quot;, line 20, in do_inhibit inhibit_fd = await login_manager_interface.call_inhibit( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/mnt/storage/Programming/sparvio_toolbox/_host-env/lib64/python3.12/site-packages/dbus_fast/aio/proxy_object.py&quot;, line 92, in method_fn msg = await 
self.bus.call( ^^^^^^^^^^^^^^^^^^^^ File &quot;/mnt/storage/Programming/sparvio_toolbox/_host-env/lib64/python3.12/site-packages/dbus_fast/aio/message_bus.py&quot;, line 385, in call await future File &quot;src/dbus_fast/aio/message_reader.py&quot;, line 19, in dbus_fast.aio.message_reader._message_reader File &quot;src/dbus_fast/_private/unmarshaller.py&quot;, line 775, in dbus_fast._private.unmarshaller.Unmarshaller._unmarshall File &quot;src/dbus_fast/_private/unmarshaller.py&quot;, line 636, in dbus_fast._private.unmarshaller.Unmarshaller._read_header File &quot;src/dbus_fast/_private/unmarshaller.py&quot;, line 376, in dbus_fast._private.unmarshaller.Unmarshaller._read_to_pos File &quot;src/dbus_fast/_private/unmarshaller.py&quot;, line 339, in dbus_fast._private.unmarshaller.Unmarshaller._read_sock_without_fds EOFError </code></pre> <p>I don't understand what to make of this. I've tried looking through the <a href="https://systemd.io/INHIBITOR_LOCKS/" rel="nofollow noreferrer">official documentation for inhibitor locks</a> from systemd, but I haven't been able to figure anything out. How can I call the <code>Inhibit</code> method of <code>org.freedesktop.login1.Manager</code> and take an inhibitor lock in Python with dbus_python?</p>
<python><systemd><dbus>
2024-07-23 13:55:44
1
3,887
Newbyte
78,783,835
1,621,041
Find the largest rectangular bounding box containing only ones in a 2d mask NumPy array
<p>I have a 2d mask where I'd like to find the largest rectangular bounding box containing only ones.</p> <p>Is there a more efficient way than determining all possibilities for bounding boxes and then comparing their size?</p> <p>Example code:</p> <pre class="lang-py prettyprint-override"><code>import matplotlib.patches as patches
import matplotlib.pyplot as plt
import numpy as np

mask = np.array([[0, 1, 0, 0, 0, 1, 0, 1],  # example mask
                 [0, 0, 0, 1, 1, 0, 0, 1],
                 [1, 1, 0, 1, 1, 1, 0, 0],
                 [1, 1, 1, 1, 1, 1, 1, 0],
                 [0, 1, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 0, 1, 0, 0, 0],
                 [0, 1, 1, 1, 0, 0, 1, 1]])

def find_largest_bbox(mask):
    # TODO
    return 3, 1, 4, 4

bbox = find_largest_bbox(mask)

plt.close()
plt.imshow(mask)
x, y = bbox[0] - 0.5, bbox[1] - 0.5
w, h = bbox[2] - bbox[0] + 1, bbox[3] - bbox[1] + 1
rect = patches.Rectangle((x, y), w, h, linewidth=2, edgecolor=&quot;red&quot;, facecolor=&quot;none&quot;)
plt.gca().add_patch(rect)
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/5hEU6tHO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5hEU6tHO.png" alt="screenshot" /></a></p>
<python><numpy><mask><bounding-box>
2024-07-23 13:51:13
4
11,522
finefoot
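One efficient way to fill in the TODO above is the classic "largest rectangle in a histogram" technique: sweep the rows, keep a per-column count of consecutive ones ending at the current row, and scan each histogram with a monotonic stack. This is O(rows × cols) rather than enumerating all candidate boxes. A sketch (this implementation is mine, not from the question; it returns `(x0, y0, x1, y1)` to match the plotting code):

```python
import numpy as np

def find_largest_bbox(mask):
    """Largest all-ones axis-aligned rectangle as (x0, y0, x1, y1)."""
    h, w = mask.shape
    heights = np.zeros(w, dtype=int)
    best = (0, 0, 0, 0, 0)  # (area, x0, y0, x1, y1)
    for y in range(h):
        # heights[x] = number of consecutive ones ending at row y in column x
        heights = np.where(mask[y] == 1, heights + 1, 0)
        stack = []  # (start_column, height), kept with strictly increasing heights
        for x in range(w + 1):
            cur = int(heights[x]) if x < w else 0  # height-0 sentinel flushes the stack
            start = x
            while stack and stack[-1][1] >= cur:
                sx, sh = stack.pop()
                area = sh * (x - sx)
                if area > best[0]:
                    best = (area, sx, y - sh + 1, x - 1, y)
                start = sx
            stack.append((start, cur))
    return best[1:]

mask = np.array([[0, 1, 0, 0, 0, 1, 0, 1],
                 [0, 0, 0, 1, 1, 0, 0, 1],
                 [1, 1, 0, 1, 1, 1, 0, 0],
                 [1, 1, 1, 1, 1, 1, 1, 0],
                 [0, 1, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 0, 0, 1, 1],
                 [0, 0, 0, 1, 1, 0, 0, 0],
                 [0, 0, 0, 0, 1, 0, 0, 0],
                 [0, 1, 1, 1, 0, 0, 1, 1]])
print(find_largest_bbox(mask))  # (3, 1, 4, 4)
```

On the example mask this finds the 2×4 block of ones at columns 3-4, rows 1-4 (area 8), matching the expected result in the question.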
78,783,365
13,757,692
Expand numpy array to be able to broadcast with second array of variable depth
<p>I have a function which can take an <code>np.ndarray</code> of shape <code>(3,)</code> or <code>(3, N)</code>, or <code>(3, N, M)</code>, etc. I want to add to the input array an array of shape <code>(3,)</code>. At the moment, I have to manually check the shape of the incoming array and, if necessary, expand the array that is added to it, so that I don't get a broadcasting error. Is there a function in <code>numpy</code> that can expand my array to allow broadcasting for an input array of arbitrary depth?</p> <pre><code>def myfunction(input_array):
    array_to_add = np.array([1, 2, 3])
    if len(input_array.shape) == 1:
        return input_array + array_to_add
    elif len(input_array.shape) == 2:
        return input_array + array_to_add[:, None]
    elif len(input_array.shape) == 3:
        return input_array + array_to_add[:, None, None]
    ...
</code></pre>
<python><numpy><array-broadcasting>
2024-07-23 12:13:02
2
466
Alex V.
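For the pattern in the question above, no per-depth branching is needed: reshaping the `(3,)` array to `(3, 1, 1, ...)`, with one trailing singleton axis per extra input dimension, lets broadcasting handle any depth in one line. A sketch of that idea:

```python
import numpy as np

def myfunction(input_array):
    array_to_add = np.array([1, 2, 3])
    # (3,) -> (3,) + (1,) * (ndim - 1), e.g. (3, 1, 1) for a (3, N, M) input,
    # so the leading axis lines up and the remaining axes broadcast
    expanded = array_to_add.reshape(array_to_add.shape + (1,) * (input_array.ndim - 1))
    return input_array + expanded
```

`np.expand_dims(array_to_add, axis=tuple(range(1, input_array.ndim)))` expresses the same thing (tuple axes are supported since NumPy 1.18).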
78,783,310
3,110,970
Apache Flink Python Datastream API sink to Parquet
<p>I have a Kafka topic that contains json messages. Using the Flink Python API I try to process these messages and store them in parquet files in GCS.</p> <p>Here is a cleaned code snippet:</p> <pre><code>class Extract(MapFunction):
    def map(self, value):
        record = json.loads(value)
        dt_object = datetime.strptime(record['ts'], &quot;%Y-%m-%dT%H:%M:%SZ&quot;)
        return Row(dt_object, record['event_id'])

&lt;...&gt;

events_schema = DataTypes.ROW([
    DataTypes.FIELD(&quot;ts&quot;, DataTypes.TIMESTAMP()),
    DataTypes.FIELD(&quot;event_id&quot;, DataTypes.STRING())
])

&lt;...&gt;

# Main job part
kafka_source = KafkaSource.builder() \
    &lt;...&gt;
    .build()

ds: DataStream = env.from_source(kafka_source, WatermarkStrategy.no_watermarks(), &quot;Kafka Source&quot;)
mapped_data = ds.map(Extract(), Types.ROW([Types.SQL_TIMESTAMP(), Types.STRING()]))

sink = (FileSink
    .for_bulk_format(&quot;gs://&lt;my_events_path&gt;&quot;, ParquetBulkWriters.for_row_type(row_type=events_schema))
    .with_output_file_config(
        OutputFileConfig.builder()
        .with_part_prefix(&quot;bids&quot;)
        .with_part_suffix(&quot;.parquet&quot;)
        .build())
    .build())

mapped_data.sink_to(sink)
</code></pre> <p>The problem is when I try to run this job I get an error:</p> <p><code>java.lang.ClassCastException: class java.sql.Timestamp cannot be cast to class java.time.LocalDateTime (java.sql.Timestamp is in module java.sql of loader 'platform'; java.time.LocalDateTime is in module java.base of loader 'bootstrap')</code></p> <p>So the problem is that <code>Types.SQL_TIMESTAMP()</code> and <code>DataTypes.TIMESTAMP()</code> are not compatible when translated into the corresponding Java classes. But I don't see any other option to &quot;typify&quot; my mapping transformation.</p> <p>If instead of</p> <p><code>mapped_data = ds.map(Extract(), Types.ROW([Types.SQL_TIMESTAMP(), Types.STRING()]))</code></p> <p>I use this option</p> <p><code>mapped_data = ds.map(Extract())</code></p> <p>then I get another error:</p> <p><code>java.lang.ClassCastException: class [B cannot be cast to class org.apache.flink.types.Row ([B is in module java.base of loader 'bootstrap'; org.apache.flink.types.Row is in unnamed module of loader 'app')</code></p> <p>My question is: can I save data containing timestamps in parquet format using the Flink Python API?</p>
<python><apache-flink><parquet>
2024-07-23 12:00:59
1
1,390
xneg
78,783,283
3,768,871
How to run jupyterlab when asdf is installed?
<p>Suppose that asdf is installed and we are using Python 3.8.19. How can I run &quot;jupyter lab&quot;, given that the regular command <code>jupyter-lab notebook</code> is not found on PATH and does not work? On the other hand, adding a specific path from asdf to PATH does not seem reasonable, since we may have different Python versions in asdf and do not want to add a new path per installed Python version.</p> <p>So, the question is: how can I run jupyter-lab using python3 under the <code>asdf</code> shim?</p>
<python><jupyter><shim><asdf>
2024-07-23 11:54:14
1
19,015
OmG
78,783,240
5,893,454
Managing wrong order of strings in R and ggplot. X1, X10, X12: How can I ensure a string with numbers will be presented in the right numerical order?
<p>Every time that I want to plot or list a string that contains numbers, R (and Python) lists the values as &quot;x1, x10, x11, x13, (etc) x2, x3, x4&quot;</p> <p><a href="https://i.sstatic.net/8d7AO4TK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8d7AO4TK.png" alt="image" /></a></p> <p>I tried to solve this using <code>gtools::mixedsort(variable)</code> but for some reason beyond my knowledge and patience it's not working.</p> <p>Could someone give me a heads up on how to solve this? Code is below:</p> <pre class="lang-r prettyprint-override"><code>library(tidyverse)

df = data.frame(data.frame(x = 1:3))
for (i in 1:15) {
  df[ , paste0(&quot;asq_&quot;, i)] &lt;- c(0,5,10)
}

df %&gt;%
  pivot_longer(-x, names_to = &quot;asq_item&quot;, values_to = &quot;response&quot;) %&gt;%
  group_by(asq_item, response) %&gt;%
  summarise(count = n(), .groups = 'drop') %&gt;%
  group_by(asq_item) %&gt;%
  mutate(proportion = count / sum(count)) %&gt;%
  ggplot(., aes(x = asq_item, y = proportion, fill = factor(response))) +
  geom_bar(stat = &quot;identity&quot;, position = position_dodge2(width = 0.9, preserve = &quot;single&quot;)) +
  labs(title = &quot;Prevalence of Each Category Within Each ASQ Item&quot;,
       x = &quot;ASQ Item&quot;, y = &quot;Count&quot;, fill = &quot;Response&quot;) +
  theme_bw()
</code></pre> <p><img src="https://i.imgur.com/41LTlcw.png" alt="" /></p> <p><sup>Created on 2024-07-23 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.1.0</a></sup></p>
<python><r><string><ggplot2>
2024-07-23 11:43:44
1
1,574
Luis
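The underlying issue in the question above is lexicographic string ordering; the usual fix is a "natural" sort key that compares the digit runs as integers (in ggplot, the sorted labels then become the factor levels). Since the question mentions both languages, here is a Python sketch of such a key:

```python
import re

def natural_key(s):
    # split "asq_10" into ["asq_", 10, ""] so digit runs compare numerically
    return [int(tok) if tok.isdigit() else tok for tok in re.split(r"(\d+)", s)]

labels = ["asq_1", "asq_10", "asq_11", "asq_2", "asq_3"]
print(sorted(labels, key=natural_key))
# ['asq_1', 'asq_2', 'asq_3', 'asq_10', 'asq_11']
```

In R, note that `gtools::mixedsort` only sorts a character vector; ggplot still needs the result applied as factor levels, e.g. something like `mutate(asq_item = factor(asq_item, levels = gtools::mixedsort(unique(asq_item))))` before plotting.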
78,783,219
20,895,654
discord.py app command error handling won't work because responses take >5 seconds
<p>I'm having this issue where before a few days ago, everything worked fine, but suddenly any error handling will take too long to be in the 3 second time frame for the response to be handled. Nothing will happen for about 5 seconds after the error occurs and then an error happens because the interaction timed out which my code tries to interact with. Also for some reason in the console output both the KeyError (which actually gets caught) and the CheckFailure get listed, which doesn't make any sense normally. When looking for answers I found that someone had a similar issue and that it has to do how discord.py works with asyncio. Not sure about this though.</p> <p><code>running.py</code></p> <pre><code>import discord from discord import app_commands, Interaction from discord.ext import commands from discord.ext.commands import Context, CommandError import os import traceback from bot import bot from utils import errors from utils.constants import SECRET_DB if __name__ == '__main__': raise RuntimeError TOKEN = SECRET_DB.load()['DISCORD_TOKEN'] COMMANDS_PACKAGE = 'cogs' @bot.event async def on_ready(): try: for file_name in os.listdir(COMMANDS_PACKAGE): file_name = file_name.removesuffix('.py') if not file_name.startswith('__'): await bot.load_extension(f&quot;{COMMANDS_PACKAGE}.{file_name}&quot;) except Exception as e: traceback.print_exc() exit(1) await bot.tree.sync() await bot.wait_until_ready() @bot.tree.error async def on_app_command_error(interaction: Interaction, error: app_commands.AppCommandError) -&gt; None: from datetime import datetime print(&quot;error handling&quot;, datetime.now()) if isinstance(error, app_commands.CommandInvokeError): e = error.original else: e = error if not interaction.response.is_done(): await interaction.response.defer() if isinstance(e, errors.UserReturnableError): await interaction.followup.send(str(e), **e.msg_kwargs) return await interaction.followup.send(&quot;Unhandled Exception!&quot;) raise e bot.run(TOKEN) 
</code></pre> <p><code>trickjump.py</code></p> <pre><code>import discord from discord import app_commands, Interaction from discord.ext import commands from discord.ext.commands import Bot, Cog import validators from typing import Optional import textwrap from utils import retrieve, usertools from utils.constants import * from utils.errors import FeedbackErrors from utils.filtered import filtered from utils.trickjump import Base, Trickjump class Trickjump_(Cog): def __init__(self, bot: Bot): self.bot = bot trickjump_group = app_commands.Group(name='trickjump', description=&quot;Command group to manage trickjumps of users&quot;) @trickjump_group.command(name=&quot;give&quot;, description=&quot;Give a jump to a user&quot;) async def give(self, interaction: Interaction, jump_name: str, proof: Optional[str] = None, user: Optional[str] = None): from datetime import datetime print(&quot;before except&quot;, datetime.now()) try: jump = JUMPS.get()[jump_name] except KeyError: print(&quot;except clause&quot;, datetime.now()) raise app_commands.CheckFailure jump.remove_attrs(filtered( lambda attr, rs: rs['for_user'] is False, ATTRIBUTES )) user_id, user_name = usertools.manage_and_get_id_name(interaction, user) proof = None if not proof else proof.strip() if proof: validators.url(proof) user_jumps = Base(USER_JUMPS_DB.load(user_id, []), strict=False) # If user already has this jump if jump_name in user_jumps: raise FeedbackErrors.JUMP_ALREADY_OBTAINED() user_jumps.append(jump) USER_JUMPS_DB.save(user_jumps, user_id) await interaction.response.send_message(f&quot;The jump `{jump_name}` was successfully given to `{user_name}`!&quot;) async def setup(bot: Bot): await bot.add_cog(Trickjump_(bot)) </code></pre> <p><code>Output</code></p> <pre><code>before except 2024-07-23 13:22:58.977088 except clause 2024-07-23 13:23:07.315731 error handling 2024-07-23 13:23:07.315731 Task exception was never retrieved future: &lt;Task finished name='CommandTree-invoker' 
coro=&lt;CommandTree._from_interaction.&lt;locals&gt;.wrapper() done, defined at C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\tree.py:1149&gt; exception=NotFound('404 Not Found (error code: 10062): Unknown interaction')&gt; Traceback (most recent call last): File &quot;c:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\cogs\trickjump.py&quot;, line 29, in give jump = JUMPS.get()[jump_name] ~~~~~~~~~~~^^^^^^^^^^^ File &quot;c:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\utils\trickjump.py&quot;, line 160, in __getitem__ raise KeyError(f&quot;Key '{key}' not found.&quot;) KeyError: &quot;Key 'a' not found.&quot; During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\tree.py&quot;, line 1310, in _call await command._invoke_with_namespace(interaction, namespace) File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\commands.py&quot;, line 883, in _invoke_with_namespace return await self._do_call(interaction, transformed_values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\commands.py&quot;, line 857, in _do_call return await self._callback(self.binding, interaction, **params) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;c:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\cogs\trickjump.py&quot;, line 32, in give raise app_commands.CheckFailure 
discord.app_commands.errors.CheckFailure During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\tree.py&quot;, line 1151, in wrapper await self._call(interaction) File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\app_commands\tree.py&quot;, line 1314, in _call await self.on_error(interaction, e) File &quot;c:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\running.py&quot;, line 47, in on_app_command_error await interaction.response.defer() File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\interactions.py&quot;, line 709, in defer await adapter.create_interaction_response( File &quot;C:\Users\JoniK\OneDrive\Dokumente\Schule\Off-School\Programmieren\Python\Jumpedia\Jumpedia\src\.venv\Lib\site-packages\discord\webhook\async_.py&quot;, line 221, in request raise NotFound(response, data) discord.errors.NotFound: 404 Not Found (error code: 10062): Unknown interaction </code></pre> <p>I am on a local machine, but I tried using multiple different networks, tried re-installing discord.py, tried multiple error types as well as multiple kind of ways to handle errors, but no results.</p>
<python><error-handling><discord><discord.py><python-asyncio>
2024-07-23 11:39:39
2
346
JoniKauf
78,783,045
4,064,958
Python subprocess FILE NOT FOUND error when executing a PyInstaller generated file in Linux from a shared VOLUME under DOCKER
<p>I have generated an executable, say, <code>test</code> (no extension since it's Linux) using PyInstaller and stored it in a directory, say <code>data</code>.</p> <p>I have a Python program that is as below:</p> <pre><code>import subprocess
from pathlib import Path
...

def run_exe():
    try:
        # get current directory
        currdir = Path.cwd()
        datadir = currdir / &quot;data&quot;
        #print(currdir)
        #print(datadir)
        process = subprocess.run([&quot;./test&quot;], shell=False, capture_output=True, text=True, cwd=datadir)
        if(process.stderr):
            print(&quot;stderr:&quot;, process.stderr)
            return JSONResponse(content={&quot;success&quot;: False}, status_code=400)
        if(process.returncode == 0):
            print(process.stdout)
            return JSONResponse(content={&quot;success&quot;: True})
    except OSError as e:
        print(e)
    except subprocess.CalledProcessError as e:
        print(e)
    except Exception as e:
        print(type(e).__name__)  # Prints the name of the exception class
        print(e)  # Prints the exception message
</code></pre> <p>I have tried the following:</p> <ol> <li><p>Putting <code>subprocess.run([&quot;ls&quot;])</code>, it shows the correct directory and files including <code>test</code></p> </li> <li><p>Putting <code>subprocess.run([&quot;ls&quot;, &quot;-l&quot;])</code>, it shows the execution permission on <code>test</code></p> </li> <li><p>I tried moving to the parent directory and running.</p> </li> <li><p>I tried <code>subprocess.run([&quot;test&quot;] ...)</code> without the ./</p> </li> <li><p>I tried changing <code>shell=True</code>. Still gives an error.</p> </li> </ol> <p>With both the ls commands (1 and 2) it prints the output.</p> <p>The error I get is an OSError: <code> [Errno 2] No such file or directory: './test'</code></p> <p>Any help will be greatly appreciated. Thanks</p>
<python><subprocess><pyinstaller>
2024-07-23 10:57:38
1
2,345
RmR
78,782,881
22,221,987
Celery returns wrong info about current tasks in one worker
<p>I have a bundle which contains Celery and RabbitMQ for tasks and FastApi app for web requests.<br /> The celery app starts from command prompt with <code>celery -A celery_app worker -l info -P gevent</code>.<br /> Rabbit is deployed in Docker Container.<br /> FastApi starts from python script.</p> <p>Here is the code. The question is below.</p> <p><strong>fastapi_app/main.py</strong></p> <pre><code>from __future__ import absolute_import from fastapi import FastAPI from celery.result import AsyncResult from celery_app.tasks import task, get_current_tasks from celery_app.celery import c_app from fastapi_app.model import Worker, Task f_app = FastAPI() @f_app.post(&quot;/task/&quot;) def run_task(): _task = task.apply_async() return {&quot;task_id&quot;: _task.id} @f_app.get(&quot;/task_info/{task_id}&quot;) def get_progress(task_id): result = AsyncResult(task_id, app=c_app) return Task(id=task_id, state=result.state, meta=result.info) @f_app.get(&quot;/curr_progress/&quot;) def get_current_progress(): response = {'workers': []} for worker, tasks_list in get_current_tasks().items(): worker_tasks_id = [task_.get('id') for task_ in tasks_list] worker_ = Worker(name=worker) for id_ in worker_tasks_id: result = AsyncResult(id_, app=c_app) worker_.tasks.append(Task(id=id_, state=result.state, meta=result.info)) response['workers'].append(worker_) return response if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(f_app, host=&quot;localhost&quot;, port=8000) </code></pre> <p><strong>fastapi_app/model.py</strong></p> <pre><code>from pydantic import BaseModel from typing import List, Any, Optional class Task(BaseModel): id: Optional[str] = None state: Optional[str] = None meta: dict | Any | None = None class Worker(BaseModel): name: Optional[str] = None tasks: List[Task] = list() </code></pre> <p><strong>celery_app/tasks.py</strong></p> <pre><code>from __future__ import absolute_import import threading import time from celery_app.celery import c_app def 
get_current_tasks() -&gt; dict: i = c_app.control.inspect() return i.active() def get_registered_tasks() -&gt; dict: i = c_app.control.inspect() return i.registered() @c_app.task(bind=True) def task(self): print(f'task started in {threading.current_thread()}. Thread alive:') for i in threading.enumerate(): print(i) n = 60 for i in range(0, n): self.update_state(state='PROGRESS', meta={'done': i, 'total': n}) time.sleep(1) print(f'task finished in {threading.current_thread()}\n') return n </code></pre> <p><strong>celery_app/celery.py</strong></p> <pre><code>from __future__ import absolute_import from celery import Celery c_app = Celery('celery_app', broker='amqp://guest:guest@localhost', backend='rpc://', include=['celery_app.tasks']) </code></pre> <p>There, in <code>fastapi_app/main.py</code> I have the function, which starts the task <code>run_task()</code> and the function, which get the current progress of all running tasks <code>get_current_progress()</code>.<br /> The last one depends on <code>celery.result.AsyncResult()</code> which depends on <code>update_state()</code> method in <code>task(self)</code> function in <code>celery_app/tasks.py</code>.</p> <p><strong>Here is the problem</strong>. When I start only one task by requesting FastApi server, task's progress displays correctly.</p> <pre><code>{ &quot;workers&quot;: [ { &quot;name&quot;: &quot;celery@wsmsk1n3075&quot;, &quot;tasks&quot;: [ { &quot;id&quot;: &quot;271531c2-48e6-4c71-a9ef-31bce434c649&quot;, &quot;state&quot;: &quot;PROGRESS&quot;, &quot;meta&quot;: { &quot;done&quot;: 3, &quot;total&quot;: 60 } } ] } ] } </code></pre> <p>But when I start multiple tasks (send a couple of task-requests to FastApi server) displaying becomes incorrect. 
Especially the progress's meta info.</p> <pre><code>{ &quot;workers&quot;: [ { &quot;name&quot;: &quot;celery@wsmsk1n3075&quot;, &quot;tasks&quot;: [ { &quot;id&quot;: &quot;4d05d0f0-f058-4372-8eec-c84853188655&quot;, &quot;state&quot;: &quot;PROGRESS&quot;, &quot;meta&quot;: { &quot;done&quot;: 9, &quot;total&quot;: 60 } }, { &quot;id&quot;: &quot;0ca82db4-7e04-4bfd-9d73-6a190decd4c6&quot;, &quot;state&quot;: &quot;PROGRESS&quot;, &quot;meta&quot;: null }, { &quot;id&quot;: &quot;ba8aa34b-e185-47cf-bed3-f3ca07257afc&quot;, &quot;state&quot;: &quot;PROGRESS&quot;, &quot;meta&quot;: null }, { &quot;id&quot;: &quot;3e2941f5-1285-4062-aea0-31a3c9b1cc21&quot;, &quot;state&quot;: &quot;PROGRESS&quot;, &quot;meta&quot;: null } ] } ] } </code></pre> <p>It's important to note, that in celery all 4 tasks are being executed in separate threads:</p> <pre><code>[2024-07-23 13:02:06,427: WARNING/MainProcess] &lt;_DummyThread(Dummy-6, started daemon 1901360139936)&gt; [2024-07-23 13:02:06,427: WARNING/MainProcess] &lt;_DummyThread(Dummy-7, started daemon 1901360142016)&gt; [2024-07-23 13:02:06,428: WARNING/MainProcess] &lt;_DummyThread(Dummy-8, started daemon 1901360137056)&gt; [2024-07-23 13:02:06,429: WARNING/MainProcess] &lt;_DummyThread(Dummy-9, started daemon 1901360139776)&gt; </code></pre> <p>So, they must have their own progress (which seems not to be like that as you can see in the last json block).</p> <p>How can I obtain every task state in multiple-task case, like it was in the solo-task example?</p> <p><strong>UDP</strong>: Looks like it's a timing problem somewhere under the celery hood. 
If we add some sleep() before every AsyncResult like that:</p> <pre><code>@f_app.get(&quot;/curr_progress/&quot;) def get_current_progress(): response = {'workers': []} for worker, tasks_list in get_current_tasks().items(): worker_tasks_id = [task_.get('id') for task_ in tasks_list] worker_ = Worker(name=worker) for id_ in worker_tasks_id: time.sleep(1) result = AsyncResult(id_, app=c_app) worker_.tasks.append(Task(id=id_, state=result.state, meta=result.info)) response['workers'].append(worker_) return response </code></pre> <p>we will have 1 sec shift for every AsyncResult but, at least we will have the actual progress of the tasks with their meta (with 1 sec outdating shift for every previous result in the result ofc).<br /> The same behaviour is achieved when we try to get progress for current task by it's task_id manualy, in <code>fastapi_app/main.py</code> <code>get_progress(task_id)</code> function. Just time.sleep is going to be replaced with human timing.</p>
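One direction worth checking (an assumption based on known limitations of the `rpc://` backend, not verified against this exact setup): `rpc://` delivers results through transient per-client reply queues and is not designed for reading task state repeatedly or from several places at once, which matches the "first task has meta, the rest are null" symptom. A minimal revision of `celery_app/celery.py` under that assumption, swapping in a persistent backend (the Redis host/port are placeholders):

```python
from celery import Celery

# Sketch only: swaps the transient rpc:// result backend for a
# persistent Redis backend so that PROGRESS state written by
# update_state() can be read repeatedly and concurrently.
# "localhost:6379/0" is a placeholder, not a verified setting.
c_app = Celery(
    "celery_app",
    broker="amqp://guest:guest@localhost",
    backend="redis://localhost:6379/0",
    include=["celery_app.tasks"],
)
```

This is a configuration fragment rather than a runnable test; a database backend (`db+postgresql://…`) would serve the same purpose.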
<python><python-3.x><rabbitmq><celery><fastapi>
2024-07-23 10:17:22
1
309
Mika
78,782,871
8,047,378
About the efficiency of pytorch in Reranker model
<p>I get an efficiency problem in Reranker model.**</p> <p>Here is detail:</p> <p>In inference of PyTorch model, I add <code>time()</code> to collect the time in <code>self.tokenizer</code> and <code>self.model</code>. The inference info like this:</p> <p><strong>token_len</strong>: 2048 ; <strong>all data num</strong>: 1000; <strong>batch</strong>: 4 ;<strong>number of batch</strong>: 1000/4=250. I run in <strong>GPU(A800)</strong>, and I get run time in <code>self.tokenizer</code> is <code>29.99s</code>, model inference is <code>4.39s</code>, so total time is almost <code>34.4s</code>.</p> <p>When I debug the code, I found the part of <code>.to(self.device)</code> take almost all the time of <code>self.tokenizer</code>.</p> <p>But when I annotation code <code>scores = self.model(**inputs, return_dict=True).logits.view(-1, ).float()</code> ,I found the time of <code>self.tokenizer</code> become <code>4.16s</code>.</p> <p>Here is my inference code,</p> <pre><code> @torch.no_grad() def compute_score(self, sentence_pairs, batch_size: int = 256, max_length: int = 512, normalize: bool = False): if self.num_gpus &gt; 0: batch_size = batch_size * self.num_gpus assert isinstance(sentence_pairs, list) if isinstance(sentence_pairs[0], str): sentence_pairs = [sentence_pairs] start = time() token_times=[] model_times=[] all_scores = [] for start_index in tqdm(range(0, len(sentence_pairs), batch_size), desc=&quot;Compute Scores&quot;, disable=len(sentence_pairs) &lt; 128): sentences_batch = sentence_pairs[start_index:start_index + batch_size] token_start = time() inputs = self.tokenizer( sentences_batch, padding=True, truncation=True, return_tensors='pt', max_length=max_length, ).to(self.device) token_end = time() scores = self.model(**inputs, return_dict=True).logits.view(-1, ).float() model_end_time = time() #all_scores.extend(scores.cpu().numpy().tolist()) token_times.append(token_end - token_start) model_times.append(model_end_time - token_end) #if normalize: # all_scores 
= [sigmoid(score) for score in all_scores] token_all_time = sum(token_times) model_all_time = sum(model_times) return &quot;&quot;, start, token_all_time, model_all_time </code></pre> <p>Here, <code>token_all_time</code> is the time spent in <code>self.tokenizer</code>, and <code>model_all_time</code> is the time spent in model inference.</p> <p>Here is the link to the source code: <a href="https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/flag_reranker.py" rel="nofollow noreferrer">https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/flag_reranker.py</a></p> <p>And here are the details of this problem: <a href="https://github.com/FlagOpen/FlagEmbedding/issues/988" rel="nofollow noreferrer">https://github.com/FlagOpen/FlagEmbedding/issues/988</a></p> <p>Which takes more time, <code>self.tokenizer</code> or model inference? And why does the time spent in <code>self.tokenizer</code> drop from <code>29.99s</code> to <code>4.16s</code>?</p> <p><strong>Does anybody know why this happens?</strong></p>
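The symptom described (tokenizer time collapsing from ~30s to ~4s once the model call is removed) is characteristic of asynchronous CUDA execution: kernel launches return immediately, so a timestamp taken right after `self.model(...)` measures only the launch, and the real GPU time is paid at the next synchronizing operation — here the `.to(self.device)` transfer of the *next* batch. A sketch of a timing pattern that attributes cost correctly; the torch-specific lines are shown as comments and assumed, not executed here:

```python
import time

def timed(fn, synchronize=lambda: None):
    """Return (result, seconds), forcing pending async work to finish
    before each timestamp so the cost lands in the right bucket."""
    synchronize()
    start = time.perf_counter()
    out = fn()
    synchronize()
    return out, time.perf_counter() - start

# With torch (assumed, not executed here) the synchronizer would be:
#   sync = torch.cuda.synchronize if torch.cuda.is_available() else (lambda: None)
#   inputs, tok_s = timed(lambda: tokenizer(batch, ...).to(device), sync)
#   scores, mod_s = timed(lambda: model(**inputs).logits, sync)

result, seconds = timed(lambda: sum(range(1000)))
print(result)  # → 499500
```

With synchronization in place, tokenizer and model times should add up the same whether or not the model call is present, which is the test for this hypothesis.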
<python><pytorch>
2024-07-23 10:14:49
0
939
Frank.Fan
78,782,754
4,306,274
How to remove the large top and bottom paddings due to top-to-bottom lines?
<p>How to remove the large top and bottom paddings due to top-to-bottom lines?</p> <pre><code>import numpy as np import pandas as pd import plotly.graph_objects as go count = 100 df = pd.DataFrame( { &quot;A&quot;: np.random.randint(10, 20, size=(count)), &quot;B&quot;: np.random.randint(10, 20, size=(count)), &quot;C&quot;: np.random.randint(10, 20, size=(count)), }, index=[f&quot;bar.{i}&quot; for i in range(count)], ) fig = go.Figure( data=[ go.Bar( name=df.columns[0], orientation=&quot;h&quot;, y=df.index, x=df.iloc[:, 0], text=df.iloc[:, 0], ), go.Bar( name=df.columns[1], orientation=&quot;h&quot;, y=df.index, x=df.iloc[:, 1], text=df.iloc[:, 1], ), go.Scatter( name=df.columns[2], y=df.index, x=df.iloc[:, 2], text=df.iloc[:, 2], ), ], layout=dict( width=512, height=1024, margin=dict(l=10, r=10, t=10, b=10), ), ) fig.update_layout( xaxis_title=None, yaxis_title=None, legend_title=None, yaxis=dict( autorange=&quot;reversed&quot;, ), ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/IYgzG6XW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IYgzG6XW.png" alt="enter image description here" /></a></p> <p>If I don't draw the line or reduce the data point count from 100 to 10, I can get rid of the large paddings:</p> <p><a href="https://i.sstatic.net/AeCSj78J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AeCSj78J.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/trAOkPJy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/trAOkPJy.png" alt="enter image description here" /></a></p> <p>If I draw the line left-to-right, there are no large paddings on both sides also:</p> <pre><code>import numpy as np import pandas as pd import plotly.graph_objects as go count = 100 df = pd.DataFrame( { &quot;A&quot;: np.random.randint(10, 20, size=(count)), &quot;B&quot;: np.random.randint(10, 20, size=(count)), &quot;C&quot;: np.random.randint(10, 20, size=(count)), }, index=[f&quot;bar.{i}&quot; for 
i in range(count)], ) fig = go.Figure( data=[ go.Bar( name=df.columns[0], orientation=&quot;v&quot;, x=df.index, y=df.iloc[:, 0], text=df.iloc[:, 0], ), go.Bar( name=df.columns[1], orientation=&quot;v&quot;, x=df.index, y=df.iloc[:, 1], text=df.iloc[:, 1], ), go.Scatter( name=df.columns[2], x=df.index, y=df.iloc[:, 2], text=df.iloc[:, 2], ), ], layout=dict( width=1024, height=512, margin=dict(l=10, r=10, t=10, b=10), ), ) fig.update_layout( xaxis_title=None, yaxis_title=None, legend_title=None, yaxis=dict( # autorange=&quot;reversed&quot;, ), ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/iVVMKwQj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iVVMKwQj.png" alt="enter image description here" /></a></p>
<python><plotly>
2024-07-23 09:47:50
1
1,503
chaosink
78,782,708
16,389,095
How to automatically resize and reposition controls when the window is resized?
<p>I'm trying to develop a scratch app with Python Flet. My goal is to find a way to position, or alternatively, to specify controls dimensions relatively to their parent control dimensions. For example, a button width of 70% the container width which is, in turn, the 30% of the page width. In this way, every control dimensions is resized automatically when the page is resized by the user, remaining in the same proportions. I found interesting this <a href="https://flet.dev/docs/controls/row/#expanding-children" rel="nofollow noreferrer">property</a>, but when I reduce the window size something still doesn't work. I would like that each control in the page reduces its dimensions proportionally according to the window size.</p> <pre class="lang-python prettyprint-override"><code>import flet as ft def main(page: ft.Page): page.scroll = ft.ScrollMode.AUTO ### AMBER LEFT CONTAINER ### col1 = ft.Column([ft.IconButton(icon=ft.icons.UPLOAD_FILE), ft.IconButton(icon=ft.icons.PICTURE_AS_PDF_OUTLINED), ft.IconButton(icon=ft.icons.SAVE),]) cont1 = ft.Container(content=col1, height = page.height, expand=3, #30% of the page width bgcolor=ft.colors.AMBER,) ### LEFT CONTAINER ### ### RED RIGHT CONTAINER ### row0 = ft.Row([ft.Image(src=f&quot;/icon.png&quot;, width=200, height=100, fit=ft.ImageFit.FILL,),], alignment=ft.CrossAxisAlignment.CENTER) row1 = ft.Row([ft.IconButton(icon=ft.icons.ARROW_CIRCLE_LEFT), ft.Text(&quot;71/155&quot;), ft.IconButton(icon=ft.icons.ARROW_CIRCLE_RIGHT),]) row2 = ft.Row([ft.TextField(label=&quot;Find this keyword inside doc&quot;), ft.IconButton(icon=ft.icons.FIND_IN_PAGE)]) row3 = ft.Row([ft.TextField(label=&quot;Extract these pages from doc&quot;, expand=20), ft.IconButton(icon=ft.icons.FIND_IN_PAGE, expand=1),]) cont2 = ft.Container(content=ft.Column([row0, row1, row2, row3]), height = page.height, expand=7, #70% of the page width bgcolor=ft.colors.RED,) ### RIGHT CONTAINER ### page.add(ft.Row([cont1, cont2])) page.update() ft.app(target=main, 
assets_dir=&quot;assets&quot;) </code></pre> <p>How can I specify controls dimensions in order to make them resize proportionally to the window size?</p>
<python><flutter><flet>
2024-07-23 09:37:18
1
421
eljamba
78,782,629
7,941,944
Understanding Time complexity of nested sorting
<p>I am solving this LeetCode problem <a href="https://leetcode.com/problems/sort-array-by-increasing-frequency/description/" rel="nofollow noreferrer">1636. Sort Array by Increasing Frequency</a>:</p> <blockquote> <p>Given an array of integers <code>nums</code>, sort the array in <strong>increasing</strong> order based on the frequency of the values. If multiple values have the same frequency, sort them in <strong>decreasing</strong> order.</p> <p>Return the <em>sorted array</em>.</p> </blockquote> <p>I wrote the following working code:</p> <pre><code>class Solution: def frequencySort(self, nums: List[int]) -&gt; List[int]: ddict = Counter(nums) ddict = dict(sorted(ddict.items(), key=lambda x:(x[1], x[0]))) defDict = defaultdict(list) res = [] for k, v in ddict.items(): defDict[v].append(k) del(ddict) for k, v in defDict.items(): v.sort(reverse=True) for val in v: for _ in range(k): res.append(val) return res </code></pre> <p>I think it has a time complexity of O(n.(nlog(n)), because I am sorting each list in <code>defaultdict</code> for every key in the worst case.</p> <p>But the time complexity analysis in LeetCode as well as AI tools like chatGPT and Perplexity AI find the time complexity to be O(nlog(n)). I am confused. Can anyone please help me understand why it's not O(n.(nlog(n))?</p>
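The per-group sorts do not multiply with the outer loop: if group `i` holds `k_i` values, the total sorting work is `sum(k_i * log(k_i)) <= (sum(k_i)) * log(n) = n log n`, because each element is sorted within its own group exactly once. The same bound is easier to see in an equivalent single-sort formulation (a sketch, not the only way to write it):

```python
from collections import Counter

def frequency_sort(nums):
    """Ascending frequency, ties broken by descending value — one
    O(n log n) sort over all n elements with a composite key."""
    freq = Counter(nums)          # O(n)
    return sorted(nums, key=lambda x: (freq[x], -x))  # O(n log n)

print(frequency_sort([1, 1, 2, 2, 2, 3]))  # → [3, 1, 1, 2, 2, 2]
```

The initial `sorted(ddict.items(), ...)` in the question is likewise O(m log m) for m distinct values, with m ≤ n, so nothing in the routine exceeds O(n log n).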
<python><algorithm><sorting><data-structures><defaultdict>
2024-07-23 09:22:28
1
333
Vijeth Kashyap
78,782,576
7,498,328
How to save the output from using Jupyter Notebook's Python packages tabulate and IPython.display packages
<p>I have the following code that prints some nice tables in a cell of Jupyter Notebook. However, I also want to also export it to a png or pdf file. How am I going to accomplish this?</p> <pre><code>from tabulate import tabulate from IPython.display import Latex, display def display_table(summary, data, best_formula, worst_formula): print(f&quot;\n{summary}\n&quot;) print(tabulate(data, headers='keys', tablefmt='fancy_grid')) print(f&quot;\nBest Results Formula:&quot;) display(Latex(f'${best_formula}$')) print(f&quot;\nWorst Results Formula:&quot;) display(Latex(f'${worst_formula}$')) print(&quot;\n&quot; + &quot;-&quot; * 80 + &quot;\n&quot;) # Mock data and formulas summary = &quot;Comparison of Different Algorithms&quot; data = [ {&quot;Algorithm&quot;: &quot;A&quot;, &quot;Accuracy&quot;: 0.95, &quot;Precision&quot;: 0.93, &quot;Recall&quot;: 0.92}, {&quot;Algorithm&quot;: &quot;B&quot;, &quot;Accuracy&quot;: 0.88, &quot;Precision&quot;: 0.85, &quot;Recall&quot;: 0.84}, {&quot;Algorithm&quot;: &quot;C&quot;, &quot;Accuracy&quot;: 0.80, &quot;Precision&quot;: 0.78, &quot;Recall&quot;: 0.76} ] best_formula = r'\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}' worst_formula = r'\text{Error Rate} = \frac{FP + FN}{TP + TN + FP + FN}' # Display the table and formulas display_table(summary, data, best_formula, worst_formula) </code></pre>
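One hedged approach: since `display_table` writes through `print`, the first step of any export is capturing that text; the captured string can then be written to a file, or rendered to PNG/PDF with e.g. matplotlib's `text` + `savefig` (assumed available, not shown). One caveat: `display(Latex(...))` goes through IPython's display machinery rather than stdout, so the formulas need separate handling — or an `nbconvert` export of the whole notebook. A stdlib sketch of the capture step:

```python
import io
from contextlib import redirect_stdout

def capture_output(fn, *args, **kwargs) -> str:
    """Run a print-based display function and return what it printed."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        fn(*args, **kwargs)
    return buf.getvalue()

# With the question's function the (hypothetical) call would be:
#   text = capture_output(display_table, summary, data, best, worst)
text = capture_output(print, "Algorithm A | accuracy 0.95")
print(repr(text))  # → "'Algorithm A | accuracy 0.95\\n'"
```

From `text`, `Path("table.txt").write_text(text)` gives a plain-text export directly.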
<python><jupyter-notebook>
2024-07-23 09:12:13
1
2,618
user321627
78,782,122
273,593
automatic cleanup of postgres db after each test
<p>I have an SQLAlchemy application that talks to a Postgres db. I want to do some &quot;integration tests&quot; using testcontainers, trying out various scenarios.</p> <p>Just to keep things simple, let's say that in my application I expect a single table <code>users</code> with at least the <code>admin</code> row.</p> <p>I expect this to be available in <em>all</em> tests. It's my &quot;baseline&quot; db status.</p> <p>I then want to run multiple tests, for example to check whether I can log in to the application, both as admin and/or as normal users.</p> <p>I want to create different &quot;scenarios&quot; - potentially one for each test, with different data in the db.</p> <p>For example, I want to run 3 tests:</p> <ol> <li>where there is only the <code>admin</code> row</li> <li>where there are 2 rows: <code>admin</code> and <code>user_foo</code></li> <li>where there are 4 rows: <code>admin</code>, <code>user_x</code>, <code>user_y</code>, <code>user_z</code></li> </ol> <p>For each test I want / need to write 3 different fixtures. I may use &quot;raw SQL&quot; or SQLAlchemy itself; it's not important now.</p> <p>But after the test runs (and the test itself may also change the content of the db) I need to &quot;roll back&quot; to the original &quot;baseline&quot; status.</p> <p>However, I cannot just run <code>ROLLBACK</code>, as all the changes have actually been committed.</p> <p>How can I ensure that the db is in the &quot;baseline&quot; status (+ appropriate fixture) before each test?</p> <p>The only solutions I've found are:</p> <ul> <li>drop the db and rebuild it from scratch before each test</li> <li>manually write the appropriate sql to restore the db status</li> <li>ignore the issue and hope it works...</li> </ul> <p>The first one is slow to run, the second is unmaintainable, and the third... is not a solution.</p> <p>Ideally I would like a way to tell pg to &quot;tag&quot; the status of the db and restore it after I've run the test. Is that possible?</p>
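Two patterns are commonly used for this (offered as suggestions, not verified against this exact setup): wrap each test in a transaction — or SAVEPOINTs, when the code under test commits — and roll it back afterwards; or keep the baseline in a template database and recreate the test database per test with `CREATE DATABASE test TEMPLATE baseline`, which is much faster than rebuilding from migrations. A runnable illustration of the rollback pattern, using stdlib sqlite3 purely so it executes here; with Postgres + SQLAlchemy the same shape is an outer connection-level transaction that a pytest fixture rolls back:

```python
import sqlite3

# "Transaction per test" in miniature: commit the baseline once, then
# run each test's fixture + body inside a transaction and roll it back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('admin')")
conn.commit()  # committed baseline that every test starts from

# --- one test's fixture + body, all inside a single transaction ---
conn.execute("INSERT INTO users VALUES ('user_foo')")
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
assert count == 2       # the test sees baseline + its own fixture
conn.rollback()         # teardown: drop everything back to baseline

names = [row[0] for row in conn.execute("SELECT name FROM users")]
print(names)  # → ['admin']
```

The transaction approach only works when nothing in the tested code issues its own `COMMIT` at the connection level, which is exactly the case SAVEPOINTs (SQLAlchemy's nested transactions) are meant to cover.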
<python><postgresql><sqlalchemy><pytest>
2024-07-23 07:27:48
1
1,703
Vito De Tullio
78,782,108
8,055,073
import from & from on python with different levels of modules
<p>I see code like this:</p> <pre><code>import torch from torch import nn from sklearn.metrics import r2_score </code></pre> <p>I'm learning Python, but I can't really find documentation that explains the first import. I think the first two lines are equivalent to something like this in other languages:</p> <pre><code>from torch import torch.nn </code></pre> <p>I mean, why two lines? I think it's because the code later uses only <code>torch</code>.</p> <p>I don't understand submodules nested two levels deep in Python. Is there a URL for the docs? And what about modules nested 3 or 4 levels deep?</p>
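The behaviour is easiest to check with a standard-library package: `import torch` binds only the name `torch`, while `from torch import nn` additionally binds the submodule under the short name `nn` — both names end up pointing at the same module objects, and deeper nesting (`from a.b.c import d`) works the same way. A minimal demonstration with `os`/`os.path`:

```python
# `import os` binds the package name; `from os import path` binds the
# submodule directly, so `path` can be used without the `os.` prefix.
import os
from os import path

print(path is os.path)  # → True  (both names refer to one module object)
```

So `import torch` + `from torch import nn` is just a convenience: the code can write `nn.Linear` instead of `torch.nn.Linear` while still using `torch` for everything else.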
<python>
2024-07-23 07:24:44
1
1,579
DDave
78,781,475
3,486,684
Python + Polars: a DataFrame which keeps track of the history of operations it was derived from?
<p>I found myself making a variant of <code>pl.DataFrame</code> which keeps track of the operations performed on it. For example:</p> <pre class="lang-py prettyprint-override"><code>from pprint import pformat, pprint import polars as pl import polars._typing as plt from collections import UserList from dataclasses import dataclass, field from typing import Any, Iterable, Optional, Self from numpy import ndarray @dataclass class CalcMeta(UserList): data: list[Any] = field(default_factory=list) @dataclass class CalcReport(UserList): data: list[tuple[str, Any]] = field(default_factory=list, kw_only=True) def append(self, **kwargs) -&gt; None: # type: ignore self.data += list(kwargs.items()) class CalcDataFrame(pl.DataFrame): meta: CalcMeta report: Optional[CalcReport] = None def __init__( self, data: pl.DataFrame, meta: CalcMeta = CalcMeta(), report: Optional[CalcReport] = None, ): super().__init__(data) self.meta = meta self.report = report def filter( self, *predicates: pl.Expr | pl.Series | str | Iterable[pl.Expr | pl.Series | str] | bool | list[bool] | ndarray[Any, Any], **constraints: Any, ) -&gt; Self: return self.append_report( filtered_with={ &quot;predicates&quot;: str(predicates), &quot;constraints&quot;: str(constraints), } ).derive(super().filter(*predicates, **constraints)) def with_columns( self, *exprs: plt.IntoExpr | Iterable[plt.IntoExpr], **named_exprs: plt.IntoExpr, ) -&gt; Self: return self.append_report( with_columns={&quot;exprs&quot;: str(exprs), &quot;named_exprs&quot;: str(named_exprs)} ).derive(super().with_columns(*exprs, **named_exprs)) def append_report(self, **kwargs) -&gt; Self: if self.report is not None: self.report.append(**kwargs) return self def derive(self, data: pl.DataFrame, meta: CalcMeta = CalcMeta()) -&gt; Self: return self.__class__(data, self.meta + meta, self.report) xs = pl.DataFrame( [ pl.Series(&quot;alpha&quot;, [&quot;a&quot;, &quot;b&quot;, &quot;c&quot;]), pl.Series(&quot;beta&quot;, [&quot;x&quot;, &quot;x&quot;, 
&quot;a&quot;]), pl.Series(&quot;xs&quot;, [0, 1, 2]), ] ) xs.with_columns() xs = CalcDataFrame(xs, meta=CalcMeta([&quot;some meta data&quot;]), report=CalcReport()) xs = xs.filter(pl.col(&quot;alpha&quot;).eq(&quot;a&quot;)).with_columns( pl.col(&quot;beta&quot;).replace_strict({&quot;x&quot;: &quot;y&quot;}) ) if xs.report: for step in xs.report: print(f&quot;{step[0]}:&quot;) print(f&quot; {pformat(step[1])}&quot;) print(xs) </code></pre> <pre><code>filtered_with: {'constraints': '{}', 'predicates': '(&lt;Expr [\'[(col(&quot;alpha&quot;)) == (String(a))…\'] at ' '0x7F1258222300&gt;,)'} with_columns: {'exprs': '(&lt;Expr [\'col(&quot;beta&quot;).replace_strict([Se…\'] at 0x7F1258223680&gt;,)', 'named_exprs': '{}'} shape: (1, 3) ┌───────┬──────┬─────┐ │ alpha ┆ beta ┆ xs │ │ --- ┆ --- ┆ --- │ │ str ┆ str ┆ i64 │ ╞═══════╪══════╪═════╡ │ a ┆ y ┆ 0 │ └───────┴──────┴─────┘ </code></pre> <p>I stopped myself before I went too far for two reasons:</p> <ol> <li>Likely this is already functionality that exists? 
I found the following relevant information:</li> </ol> <ul> <li><a href="https://stackoverflow.com/questions/75661360/logging-in-polars">Logging in Polars</a></li> <li><a href="https://stackoverflow.com/questions/76038277/polars-show-graph-method">Polars show_graph method</a></li> </ul> <ol start="2"> <li>Some expressions don't have a nice string representation off the bat:</li> </ol> <pre class="lang-py prettyprint-override"><code>import polars as pl expr = pl.col(&quot;somecol&quot;).replace_strict( {&quot;hello&quot;: &quot;world&quot;}, return_dtype=pl.List(pl.Enum(pl.Series([&quot;a&quot;, &quot;b&quot;]))), ) print(expr) </code></pre> <pre><code>col(&quot;somecol&quot;).replace_strict([Series, Series]) </code></pre> <p>The trouble with <a href="https://docs.pola.rs/api/python/stable/reference/lazyframe/api/polars.LazyFrame.show_graph.html" rel="nofollow noreferrer"><code>show_graph</code></a> is that it doesn't present output in a format that is useful to me (i.e. it uses notation which is meant to help <code>polars</code> library authors).</p> <p>Am I missing some obvious functionality that does what I want? If not: how can I pretty print expressions such as <code>replace_strict</code>, so that the inner series etc. they are built on are also fully printed?</p> <p>(Otherwise, I do have various ideas I can update this question with that let me capture what I need.)</p>
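I'm not aware of built-in functionality that records a frame's history with fully expanded arguments; one workaround for the truncated `Expr` repr is to capture the arguments at call time, while the original Python values (dicts, dtypes, …) are still in hand, rather than asking the compiled expression to print itself. A generic stdlib sketch of that provenance-wrapper idea (all names hypothetical, shown on a plain list so it runs without polars):

```python
# Wrap an object and record each method call (name plus repr of the raw
# arguments) before delegating — the same shape as CalcDataFrame, but
# without overriding each method by hand.
class Recorded:
    def __init__(self, inner, history=None):
        self._inner = inner
        self.history = history if history is not None else []

    def __getattr__(self, name):
        attr = getattr(self._inner, name)
        if not callable(attr):
            return attr

        def wrapper(*args, **kwargs):
            # Captured before the call, so nothing is truncated.
            self.history.append((name, repr(args), repr(kwargs)))
            out = attr(*args, **kwargs)
            return Recorded(out, self.history) if out is not None else out

        return wrapper

xs = Recorded([3, 1, 2])
xs.sort()
print(xs.history)  # → [('sort', '()', '{}')]
```

Applied to polars, the recorded `repr(args)` would show the literal `{"x": "y"}` passed to `replace_strict` instead of the elided `[Series, Series]` form.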
<python><python-polars>
2024-07-23 03:35:54
0
4,654
bzm3r
78,781,464
4,306,274
Cannot add a vertical line to the second X axis
<p>I can add a horizontal line to the second Y axis, like this:</p> <pre><code>import pandas as pd import plotly.express as px import plotly.graph_objects as go df = pd.DataFrame( { &quot;A&quot;: [5, 4, 7], &quot;B&quot;: [200, 300, 100], }, index=[1, 2, 3], ) fig = px.bar( df[&quot;A&quot;], orientation=&quot;v&quot;, ) fig.add_trace( go.Scatter( x=df.index, y=df[&quot;B&quot;], yaxis=&quot;y2&quot;, name=&quot;B&quot;, ) ) fig.add_hline( y=df[&quot;B&quot;].mean(), yref=&quot;y2&quot;, ) fig.update_layout( xaxis_title=None, yaxis_title=None, legend_title=None, xaxis=dict( # autorange=&quot;reversed&quot;, ), yaxis=dict( side=&quot;left&quot;, showgrid=False, ), yaxis2=dict( overlaying=&quot;y&quot;, side=&quot;right&quot;, showgrid=False, ), ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/TpYpAYcJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TpYpAYcJ.png" alt="horizontal line" /></a></p> <p>However, I cannot add a vertical line to the second X axis:</p> <pre><code>import pandas as pd import plotly.express as px import plotly.graph_objects as go df = pd.DataFrame( { &quot;A&quot;: [5, 4, 7], &quot;B&quot;: [200, 300, 100], }, index=[1, 2, 3], ) fig = px.bar( df[&quot;A&quot;], orientation=&quot;h&quot;, ) fig.add_trace( go.Scatter( y=df.index, x=df[&quot;B&quot;], xaxis=&quot;x2&quot;, name=&quot;B&quot;, ) ) fig.add_vline( x=df[&quot;B&quot;].mean(), xref=&quot;x2&quot;, ) fig.update_layout( xaxis_title=None, yaxis_title=None, legend_title=None, yaxis=dict( autorange=&quot;reversed&quot;, ), xaxis=dict( side=&quot;top&quot;, showgrid=False, ), xaxis2=dict( overlaying=&quot;x&quot;, side=&quot;bottom&quot;, showgrid=False, ), ) fig.show() </code></pre> <p><a href="https://i.sstatic.net/VCfp3ept.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCfp3ept.png" alt="vertical line" /></a></p>
<python><pandas><plotly>
2024-07-23 03:32:16
1
1,503
chaosink
78,781,452
2,119,941
AWS Lambda OpenSearch Serverless Connection Issues
<p>I am working on an AWS Lambda function written in Python to interact with an OpenSearch Serverless collection. The Lambda function is deployed in a VPC, and the OpenSearch Serverless endpoint is also within the same VPC. Despite ensuring that all permissions and VPC configurations are correct, I keep encountering a 404 error when trying to list collections or connect to the OpenSearch collection. Below are the details of my setup and the code snippets used.</p> <h4>Lambda Function Code:</h4> <pre class="lang-py prettyprint-override"><code>import json import boto3 import logging import traceback from botocore.exceptions import ClientError logger = logging.getLogger() logger.setLevel(logging.INFO) def create_opensearch_client(): try: session = boto3.Session() client = session.client('opensearchserverless', endpoint_url=&quot;https://zz44ttfldnkz06666uul.us-east-1.aoss.amazonaws.com&quot;, verify=False, use_ssl=True) logger.info(&quot;OpenSearch client created successfully&quot;) return client except Exception as e: logger.error(f&quot;Error creating OpenSearch client: {str(e)}&quot;) logger.error(f&quot;Traceback: {traceback.format_exc()}&quot;) raise def test_opensearch_connection(client): try: response = client.list_collections() logger.info(f&quot;Successfully listed collections: {json.dumps(response, default=str)}&quot;) response = client.batch_get_collection(ids=['my-collection-id']) logger.info(f&quot;Successfully got collection details: {json.dumps(response, default=str)}&quot;) return True except ClientError as e: logger.error(f&quot;Error connecting to OpenSearch: {str(e)}&quot;) return False def lambda_handler(event, context): logger.info(&quot;Lambda function started&quot;) client = create_opensearch_client() connection_successful = test_opensearch_connection(client) if connection_successful: return { 'statusCode': 200, 'body': 'Successfully connected to OpenSearch collection.' } else: return { 'statusCode': 500, 'body': 'Failed to connect to OpenSearch collection.'
} </code></pre> <h4>Error Message:</h4> <pre><code>&quot;Error in OpenSearch connection test: An error occurred (404) when calling the ListCollections operation&quot; </code></pre> <h4>Verification Steps:</h4> <ol> <li><p>Verified the OpenSearch endpoint and collection using AWS CLI:</p> <pre class="lang-bash prettyprint-override"><code>aws opensearchserverless list-collections aws opensearchserverless batch-get-collection --ids &quot;my-collection-id&quot; </code></pre> </li> <li><p>Ensured the Lambda role has the necessary permissions:</p> <ul> <li><code>aoss:APIAccessAll</code></li> <li><code>es:ESHttpGet</code>, <code>es:ESHttpPost</code>, <code>es:ESHttpPut</code>, <code>es:ESHttpDelete</code></li> </ul> </li> <li><p>Double-checked that the Lambda function and OpenSearch endpoint are within the same VPC and subnets.</p> </li> </ol> <h4>IAM Policy Attached to Lambda:</h4> <pre class="lang-yaml prettyprint-override"><code>Policies: - AWSLambdaVPCAccessExecutionRole - AWSLambdaBasicExecutionRole - AmazonElasticFileSystemClientReadWriteAccess - SecretsManagerReadWrite - AmazonRDSFullAccess - AmazonS3ReadOnlyAccess - Version: '2012-10-17' Statement: - Effect: Allow Action: - 'aoss:APIAccessAll' Resource: 'arn:aws:aoss:us-east-1:123456789012:collection/my-collection-id' - Effect: Allow Action: - 'lambda:GetFunctionConfiguration' Resource: 'arn:aws:lambda:us-east-1:123456789012:function:my-lambda-function' - Effect: Allow Action: - 'es:ESHttpGet' - 'es:ESHttpPost' - 'es:ESHttpPut' - 'es:ESHttpDelete' Resource: 'arn:aws:aoss:us-east-1:123456789012:collection/my-collection-id/*' </code></pre> <h4>VPC Configuration:</h4> <ul> <li>The Lambda function is associated with the correct subnets and security groups.</li> <li>The subnets and security groups have been verified multiple times for correctness.</li> </ul> <h4>Problem:</h4> <p>Despite following the above steps, the Lambda function fails with a 404 error when trying to interact with the OpenSearch Serverless 
collection. What could be the possible reasons for this issue, and how can I resolve it?</p> <h3>Any help or guidance would be appreciated!</h3>
<python><amazon-web-services><aws-lambda><aws-sam><amazon-opensearch>
2024-07-23 03:27:23
0
15,380
Hrvoje
78,781,390
16,312,980
HF transformers: ValueError: Unable to create tensor
<p>I was following <a href="https://huggingface.co/docs/transformers/tasks/sequence_classification" rel="nofollow noreferrer">this guide for text classification</a> and I got an error:</p> <pre><code>ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. </code></pre> <p>I tried adding <code>padding=True</code> and <code>truncation=True</code> to <code>preprocessing_tokenizer()</code>, but the same error arises.</p> <p><a href="https://stackoverflow.com/questions/75623118/valueerror-unable-to-create-tensor-issue-for-a-transformer-model">These answers did not help me that much either.</a></p>
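The error means the examples in a batch have different token lengths, so they cannot be stacked into one tensor. The usual fix from that guide is dynamic padding via a collator (`DataCollatorWithPadding(tokenizer=tokenizer)` passed to the `Trainer` as `data_collator=` — stated from the guide, not re-verified here), applied per batch rather than in the preprocessing function. The padding it performs amounts to this:

```python
def pad_batch(batch, pad_id=0):
    """Pad every sequence to the length of the batch's longest member,
    which is what makes the sequences stackable into one tensor."""
    longest = max(len(seq) for seq in batch)
    return [seq + [pad_id] * (longest - len(seq)) for seq in batch]

print(pad_batch([[101, 7, 102], [101, 102]]))
# → [[101, 7, 102], [101, 102, 0]]
```

If `padding=True` in the preprocess function did not help, a likely culprit is a stale cached/mapped dataset or extra non-numeric columns left in the batch; re-running `map` after the change and keeping only model inputs are worth checking.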
<python><pytorch><huggingface-transformers>
2024-07-23 02:57:48
1
426
Ryan
78,781,354
15,587,184
Azure AI Search Scoring Profiles are not modifying the score retrival
<p>I have been using Azure Ai search and scoring profiles to boost the documents of my index that come form the 'reviewed' source that means I want to send to the very TOP documents that have the string 'reviewed' on the field source, so I configured this scoring profile:</p> <pre><code> &quot;scoringProfiles&quot;: [ { &quot;name&quot;: &quot;pubs_peer_house&quot;, &quot;functionAggregation&quot;: &quot;sum&quot;, &quot;text&quot;: { &quot;weights&quot;: { &quot;title&quot;: 3, &quot;content&quot;: 10 } }, &quot;functions&quot;: [ { &quot;fieldName&quot;: &quot;source&quot;, &quot;interpolation&quot;: &quot;linear&quot;, &quot;type&quot;: &quot;tag&quot;, &quot;boost&quot;: 100, &quot;freshness&quot;: null, &quot;magnitude&quot;: null, &quot;distance&quot;: null, &quot;tag&quot;: { &quot;tagsParameter&quot;: &quot;reviewed&quot; } } ] } ], </code></pre> <p>I have this code in Python:</p> <pre><code>from azure.core.exceptions import HttpResponseError # Define the search query search_query = &quot;cybersecurity in Arizona from tech today&quot; # Function to get the embedding of the search query search_vector = get_embedding(search_query) # Define the scoring parameters scoring_parameters = [&quot;reviewed-100&quot;] try: # Perform the search using the Azure Cognitive Search client response = search_client.search( search_query, top=5, # Return the top 5 results vector_queries=[ VectorizedQuery( vector=search_vector, # The vector representation of the search query k_nearest_neighbors=5, # Number of nearest neighbors to find fields=&quot;vector_space_search&quot; # Field in the index to search the vector space ) ], query_type=&quot;semantic&quot;, # Specify the query type as semantic semantic_configuration_name=semantic_profile, # The name of the semantic configuration scoring_profile='pubs_peer_house', # The scoring profile to use scoring_parameters=scoring_parameters # The scoring parameters ) # Iterate over the search results for idx, doc in enumerate(response, 
start=1): # Default values if fields are not found found_content = &quot;Not found&quot; # Extract fields from the search result date = doc.get('date', 'N/A') # Date of the document source = doc.get('source', 'N/A') # Source of the document title = doc.get('title', 'N/A') # Title of the document content = doc.get('content', 'N/A') # Content of the document # Print the results print(f&quot;{idx}&quot;) print(f&quot;Score: {doc['@search.score']:.5f}&quot;) print(f&quot;Source: {source}&quot;) print(f&quot;Title: {title}&quot;) print(f&quot;Content: {content}\n\n&quot;) except HttpResponseError as e: # Handle HTTP response errors print(f&quot;HTTP Response Error: {e.message}&quot;) print(f&quot;Details: {e.response}&quot;) except Exception as ex: # Handle other exceptions print(f&quot;An error occurred: {ex}&quot;) </code></pre> <p>Nonetheless when I ask for anything and I implement my profiles scoring along with the semantic ranker in a hybrid search it doesn't matter the value of the booster. I always get the same results:</p> <p>Look:</p> <blockquote> <p>1 Score: 0.03667 Source: americas Subtitle: What level of cybersecurity do you have? Content: We comply with industry standards for cybersecurity and recommend that you...</p> <p>2 Score: 0.02639 Source: reviewed Subtitle: What do I need to operate securely? Content: The key or security signature. It is an 8-digit alphanumeric code ML te....</p> <p>3 Score: 0.01562 Source: europe Subtitle: Passkey password and pin still better than faceID Content: careful whose face do you trust....</p> </blockquote> <p>even with params like : <code>scoring_parameters = [&quot;reviewed-2500000&quot;]</code> I still get:</p> <blockquote> <p>1 Score: 0.03667 Source: americas Subtitle: What level of cybersecurity do you have? Content: We comply with industry standards for cybersecurity and recommend that you...</p> <p>2 Score: 0.02639 Source: reviewed Subtitle: What do I need to operate securely? 
Content: The key or security signature. It is an 8-digit alphanumeric code ML te....</p> <p>3 Score: 0.01562 Source: europe Subtitle: Passkey password and pin still better than faceID Content: careful whose face do you trust....</p> </blockquote> <p>Am I doing something wrong? I can't seem to find a tutorial on this in Python online.</p> <p>This is my Index Config:</p> <pre><code> &quot;name&quot;: &quot;idx_americas_europe_pubs_houses&quot;, &quot;defaultScoringProfile&quot;: null, &quot;fields&quot;: [ { &quot;name&quot;: &quot;content&quot;, &quot;type&quot;: &quot;Edm.String&quot;, &quot;searchable&quot;: true, &quot;filterable&quot;: true, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: true, &quot;facetable&quot;: false, &quot;key&quot;: false, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, &quot;dimensions&quot;: null, &quot;vectorSearchProfile&quot;: null, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] }, { &quot;name&quot;: &quot;title&quot;, &quot;type&quot;: &quot;Edm.String&quot;, &quot;searchable&quot;: true, &quot;filterable&quot;: true, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: true, &quot;facetable&quot;: false, &quot;key&quot;: false, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, &quot;dimensions&quot;: null, &quot;vectorSearchProfile&quot;: null, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] }, { &quot;name&quot;: &quot;source&quot;, &quot;type&quot;: &quot;Edm.String&quot;, &quot;searchable&quot;: true, &quot;filterable&quot;: true, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: true, &quot;facetable&quot;: false, &quot;key&quot;: false, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, 
&quot;dimensions&quot;: null, &quot;vectorSearchProfile&quot;: null, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] }, { &quot;name&quot;: &quot;pub_date&quot;, &quot;type&quot;: &quot;Edm.String&quot;, &quot;searchable&quot;: true, &quot;filterable&quot;: true, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: true, &quot;facetable&quot;: false, &quot;key&quot;: false, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, &quot;dimensions&quot;: null, &quot;vectorSearchProfile&quot;: null, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] }, { &quot;name&quot;: &quot;vector_space_search&quot;, &quot;type&quot;: &quot;Collection(Edm.Single)&quot;, &quot;searchable&quot;: true, &quot;filterable&quot;: false, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: false, &quot;facetable&quot;: false, &quot;key&quot;: false, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, &quot;dimensions&quot;: 1536, &quot;vectorSearchProfile&quot;: &quot;embedding_profile&quot;, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] }, { &quot;name&quot;: &quot;metadata_storage_path&quot;, &quot;type&quot;: &quot;Edm.String&quot;, &quot;searchable&quot;: false, &quot;filterable&quot;: false, &quot;retrievable&quot;: true, &quot;stored&quot;: true, &quot;sortable&quot;: false, &quot;facetable&quot;: false, &quot;key&quot;: true, &quot;indexAnalyzer&quot;: null, &quot;searchAnalyzer&quot;: null, &quot;analyzer&quot;: null, &quot;normalizer&quot;: null, &quot;dimensions&quot;: null, &quot;vectorSearchProfile&quot;: null, &quot;vectorEncoding&quot;: null, &quot;synonymMaps&quot;: [] } ], &quot;scoringProfiles&quot;: [ { &quot;name&quot;: &quot;pubs_peer_house&quot;, &quot;functionAggregation&quot;: &quot;sum&quot;, &quot;text&quot;: { &quot;weights&quot;: { 
&quot;title&quot;: 3, &quot;content&quot;: 10 } }, &quot;functions&quot;: [ { &quot;fieldName&quot;: &quot;source&quot;, &quot;interpolation&quot;: &quot;linear&quot;, &quot;type&quot;: &quot;tag&quot;, &quot;boost&quot;: 100, &quot;freshness&quot;: null, &quot;magnitude&quot;: null, &quot;distance&quot;: null, &quot;tag&quot;: { &quot;tagsParameter&quot;: &quot;reviewed&quot; } } ] ] } ], &quot;corsOptions&quot;: null, &quot;suggesters&quot;: [], &quot;analyzers&quot;: [], &quot;normalizers&quot;: [], &quot;tokenizers&quot;: [], &quot;tokenFilters&quot;: [], &quot;charFilters&quot;: [], &quot;encryptionKey&quot;: null, &quot;similarity&quot;: { &quot;@odata.type&quot;: &quot;#Microsoft.Azure.Search.BM25Similarity&quot;, &quot;k1&quot;: null, &quot;b&quot;: null }, &quot;semantic&quot;: { &quot;defaultConfiguration&quot;: null, &quot;configurations&quot;: [ { &quot;name&quot;: &quot;ranker_profile_pubs&quot;, &quot;prioritizedFields&quot;: { &quot;titleField&quot;: { &quot;fieldName&quot;: &quot;title&quot; }, &quot;prioritizedContentFields&quot;: [ { &quot;fieldName&quot;: &quot;content&quot; } ], &quot;prioritizedKeywordsFields&quot;: [] } } ] }, &quot;vectorSearch&quot;: { &quot;algorithms&quot;: [ { &quot;name&quot;: &quot;hnsw_config&quot;, &quot;kind&quot;: &quot;hnsw&quot;, &quot;hnswParameters&quot;: { &quot;metric&quot;: &quot;cosine&quot;, &quot;m&quot;: 3, &quot;efConstruction&quot;: 300, &quot;efSearch&quot;: 250 }, &quot;exhaustiveKnnParameters&quot;: null } ], &quot;profiles&quot;: [ { &quot;name&quot;: &quot;embedding_profile&quot;, &quot;algorithm&quot;: &quot;hnsw_config&quot;, &quot;vectorizer&quot;: null, &quot;compression&quot;: null } ], &quot;vectorizers&quot;: [], &quot;compressions&quot;: [] } } </code></pre>
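If I am reading the scoring-profile documentation correctly (worth double-checking against the current Azure AI Search docs), the likely culprit is the scoring parameter value rather than the profile definition:

```python
# In a tag scoring function, the runtime scoring parameter supplies the tag
# VALUES to match, not a boost number: the format is
# "<tagsParameter name>-<value1>,<value2>".  The boost (100) is fixed in the
# scoring profile itself.  So "reviewed-100" asks the service to look for a
# tag literally equal to "100", which never matches source == "reviewed",
# making the profile a no-op regardless of how large the number is.
scoring_parameters = ["reviewed-reviewed"]
```

With semantic ranking enabled, also keep in mind that the semantic reranker rescores the top results after the scoring profile runs, which can further mask a boost.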
<python><azure><nlp><azure-ai-search>
2024-07-23 02:37:14
1
809
R_Student
78,780,948
16,852,890
PySpark unpivot or reduce
<p>I have the following dataframe:</p> <pre><code>df = spark.createDataFrame( [ (&quot;D1&quot;, &quot;D2&quot;, &quot;H1&quot;, None, None), (&quot;D1&quot;, &quot;D2&quot;, &quot;H1&quot;, &quot;H2&quot;, None), (&quot;D1&quot;, &quot;D2&quot;, &quot;H1&quot;, &quot;H2&quot;, &quot;H3&quot;) ], [&quot;Dimension1&quot;, &quot;Dimention2&quot;, &quot;Hierarchy1&quot;, &quot;Hierarchy2&quot;, &quot;Hierarchy3&quot;] ) </code></pre> <p><a href="https://i.sstatic.net/8nXRdpTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8nXRdpTK.png" alt="enter image description here" /></a></p> <p>I want to transform it so that it becomes something like this instead:</p> <pre><code>new_df = spark.createDataFrame( [ (&quot;D1&quot;, &quot;D2&quot;, &quot;H1&quot;), (&quot;D1&quot;, &quot;D2&quot;, &quot;H2&quot;), (&quot;D1&quot;, &quot;D2&quot;, &quot;H3&quot;) ], [&quot;Dimension1&quot;, &quot;Dimention2&quot;, &quot;Hierarchy&quot;] ) </code></pre> <p><a href="https://i.sstatic.net/A2JKy9d8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A2JKy9d8.png" alt="enter image description here" /></a></p> <p>The logic being:</p> <pre><code>whencondition = when((col(&quot;Hierarchy1&quot;).isNotNull()) &amp; (col(&quot;Hierarchy2&quot;).isNull()) &amp; (col(&quot;Hierarchy3&quot;).isNull()), lit(&quot;H1&quot;)).when((col(&quot;Hierarchy1&quot;).isNotNull()) &amp; (col(&quot;Hierarchy2&quot;).isNotNull()) &amp; (col(&quot;Hierarchy3&quot;).isNull()), lit(&quot;H2&quot;)).when((col(&quot;Hierarchy1&quot;).isNotNull()) &amp; (col(&quot;Hierarchy2&quot;).isNotNull()) &amp; (col(&quot;Hierarchy3&quot;).isNotNull()), lit(&quot;H3&quot;)).alias(&quot;Hierarchy&quot;) display(df.select(&quot;Dimension1&quot;, &quot;Dimention2&quot;, whencondition)) </code></pre> <p>There can be of course be any number of hierarchy columns, but in the end output I only want there to be one column to show what level of hierarchy that record is at. 
I started off by creating a list</p> <pre><code>hierarchies = [&quot;Hierarchy1&quot;, &quot;Hierarchy2&quot;, &quot;Hierarchy3&quot;] </code></pre> <p>and got as far as this:</p> <pre><code>when(reduce(lambda x, y: x &amp; y, [(col( &quot;`&quot; + x + &quot;`&quot;).isNotNull()) if x in hierarchies[:i+1] else (col( &quot;`&quot; + x + &quot;`&quot;).isNull()) for x in hierarchies]), lit(hierarchies[i])) </code></pre> <p>which works for <code>i &lt; len(hierarchies)</code>, but not any further unfortunately</p>
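Since the desired output keeps the deepest non-null *value* ("H1" from Hierarchy1 when only it is set, "H2" from Hierarchy2, and so on), the whole `when`/`reduce` chain collapses to a reversed coalesce. The rule can be prototyped without a Spark session:

```python
hierarchies = ["Hierarchy1", "Hierarchy2", "Hierarchy3"]

def deepest_value(row):
    """Return the value of the deepest (last) non-null hierarchy column.

    `row` is a plain dict of column name -> value, standing in for a
    DataFrame row; works for any number of hierarchy columns.
    """
    for name in reversed(hierarchies):
        if row.get(name) is not None:
            return row[name]
    return None
```

In PySpark the equivalent single expression should be `F.coalesce(*reversed(hierarchies))` aliased to `Hierarchy` (untested here), which sidesteps the per-level `when` chain entirely and scales to any column count.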
<python><pyspark>
2024-07-22 22:21:34
2
316
tommyhmt
78,780,932
1,116,138
How to tell if a function is wrapped by another specific function
<p>Using the following code:</p> <pre><code>def wrap( x ): def wrapped( y ): return x + y return wrapped f = wrap( 1 ) </code></pre> <p>Is it possible to tell that f is a function wrapped by function <code>wrap</code>?</p> <p>Displaying variable f, it is visible:</p> <pre><code>&gt;&gt;&gt; f &lt;function wrap.&lt;locals&gt;.wrapped at 0x10067cf40&gt; </code></pre> <p>But is there a way to know sure that <code>wrap</code> was used as the wrapping function?</p>
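Two checks that may work here, sketched below: `__qualname__` records the defining scope as a string (easy to read, but it is just a string and can collide or be spoofed), while code-object identity is harder to fool because every closure returned by `wrap` shares one code object. The helper names `made_by_wrap` and `made_by_wrap_strict` are illustrative, not standard library functions:

```python
def wrap(x):
    def wrapped(y):
        return x + y
    return wrapped

f = wrap(1)

# __qualname__ records the defining scope, so anything defined inside
# wrap() carries the "wrap.<locals>." prefix:
def made_by_wrap(fn):
    return callable(fn) and getattr(fn, "__qualname__", "").startswith("wrap.<locals>.")

# Stricter: every closure wrap() returns shares the same code object,
# which (unlike a name string) cannot collide by accident.
_sentinel_code = wrap(0).__code__

def made_by_wrap_strict(fn):
    return getattr(fn, "__code__", None) is _sentinel_code
```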
<python>
2024-07-22 22:16:21
2
562
greg
78,780,790
8,887,483
Popening a gnome-terminal in python immediately appears as a zombie
<p>For background I am working on a script to train multiple pytorch models. I have a training script that I want to be able to run as a sub process in a gnome terminal. The main reason for this is so I can keep an eye on the training progress. In cases where I might have multiple GPUs I would like to run my training script multiple times in separate windows. To accomplish this I have been using popen. The following code works to open a new terminal window and start the training script</p> <pre><code>#create a list of commands commands = [] kd_cfg = KDConfig.read(kd_cfg_path) cmd = &quot;python scripts/train_torch.py &quot; for specialist in kd_cfg.specialists: cmd += f&quot;--config {kd_cfg.runtime_dict['specialists'][specialist]['config']} &quot; ... # Run each command in a new terminal and store the process object num_gpus = len(gpus) free_gpus = copy.deepcopy(gpus) processes = [] worker_sema = threading.Semaphore(num_gpus) commands_done = [False for _ in range(len(commands))] #start the watchdog watch = threading.Thread(target=watch_dog, args=(processes,free_gpus,commands_done,worker_sema)) watch.start() for cmd_idx, command in enumerate(commands): worker_sema.acquire() gpu = free_gpus.pop() command += f&quot; --gpu {gpu}&quot; #allocate a free GPU from the list split_cmd_arr = shlex.split(command) proc = subprocess.Popen(['gnome-terminal', '--'] + split_cmd_arr) processes.append( (cmd_idx,gpu,proc) ) </code></pre> <p>The part that I am stuck on is the concurrency control. In order to protect the GPU resource I use a semaphore. My plan was to monitor the process that starts the GNOME terminal and when it has finished release the semaphore to start the next training process. Instead what happens is that all commands are run at the same time. When I test with two commands and limit onto one gpu I still see two terminals open and two trainings will begin. 
In my watchdog thread code below I see that both the processes are zombies and have no children even WHILE I am watching the training loop execute inside of both terminals without crashing.</p> <pre><code> # Check if processes are still running while not all(commands_done): for cmd_idx, gpu, proc in processes: # try: # Check if process is still running ps_proc = psutil.Process(proc.pid) #BC we call bash python out of the gate it executes as a child proc ps_proc_children = get_child_processes(proc.pid) proc_has_running_children = any(child.is_running for child in ps_proc_children) print(f&quot;status: {ps_proc.status()}&quot;) print(f&quot;children: {ps_proc_children}&quot;) if proc_has_running_children: print(f&quot;Process {proc.pid} on GPU {gpu} is still running&quot;, end='\r') else: print(f&quot;Process {proc.pid} has terminated&quot;) free_gpus.append(gpu) commands_done[cmd_idx] = True processes.remove((cmd_idx, gpu, proc)) ps_proc.wait() print(f&quot;removed proc {ps_proc.pid}&quot;) worker_sema.release() </code></pre> <p>I thought maybe the subprocess basically starts another process then immediately returns but I am surprised to see that there are no children either. If anyone has any insights they would be much appreciated.</p> <p>If it helps this is some example output from the watchdog.</p> <pre><code>status: zombie children: [] Process 4076 has terminated removed proc 4076 status: zombie children: [] Process 4133 has terminated removed proc 4133 </code></pre> <p>/////////////////////EDIT/////////////////////////////////</p> <p>I still have yet to find a way to directly get the PID of the processes through the POPEN interface. I have a workaround but I still think there should be a cleaner way to do this.</p> <p>What I have done is make use of the psutil process_iter function. I iterate all of the running processes and look at the commands and do a regex match to find a matching command.
There are enough uniqueish things like the GPU number and a date time string that are getting passed that its unlikely two things would match [but that is a major drawback to the approach]</p> <pre><code>def get_pid_for_unique_cmd(cmdl_string2find): pid = None for pp in psutil.process_iter(attrs=['pid', 'name', 'cmdline']): cmdl_cmd = &quot;&quot; if pp.info[&quot;cmdline&quot;]: cmdl_cmd = ' '.join(pp.info['cmdline']) #if there is not single space in the typed command it gets mad so need to strip if &quot;&quot;.join(cmdl_cmd.split()) == &quot;&quot;.join(cmdl_string2find.split()): pid = pp.pid print(f&quot;found process {pid} that matches {cmdl_string2find}&quot;) print(pp.info) ps_proc = psutil.Process(pid) print(f&quot;status {ps_proc.status()}&quot;) print() return pid </code></pre> <p>Im then passing that pid the watchdog thread, where I updated things to check for that pid instead of using the proc object.</p>
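A note on the zombie behavior: `gnome-terminal` is only a launcher — it hands the window off to `gnome-terminal-server` and exits immediately, which is why `Popen`'s child is a zombie with no children while training visibly runs. Passing `--wait` (supported by reasonably recent gnome-terminal builds) keeps the launcher alive until the window closes, so `wait()`/`poll()` become meaningful again, and a per-process watcher thread can then release the semaphore. A hedged sketch, written with a generic `argv` so the reaping logic is separable from gnome-terminal itself:

```python
import subprocess
import threading

def launch_and_release(argv, sema):
    """Start argv and release `sema` from a watcher thread once it exits.

    For the terminal use case argv would be something like
    ["gnome-terminal", "--wait", "--"] + shlex.split(command); without
    --wait the launcher exits immediately and waiting on it is useless.
    """
    proc = subprocess.Popen(argv)

    def reaper():
        proc.wait()      # blocks until exit; also reaps the zombie
        sema.release()   # free the GPU slot for the next command

    threading.Thread(target=reaper, daemon=True).start()
    return proc
```

The `worker_sema.acquire()` before each launch stays exactly as in the question; the reaper thread replaces the polling watchdog and the PID-matching workaround.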
<python><python-3.x><multiprocessing><popen><psutil>
2024-07-22 21:13:50
1
605
Sami Wood
78,780,728
9,538,252
Difference of top 2 timestamps by group in Pandas
<p>I have a table of users with login date and time stamps.</p> <p>I am attempting to calculate the difference between each user's two most recent logins.</p> <p>For example (df):</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>User</th> <th>logints</th> </tr> </thead> <tbody> <tr> <td>34</td> <td>2024-07-10 07:49:11.773</td> </tr> <tr> <td>34</td> <td>2024-07-10 07:52:11.606</td> </tr> <tr> <td>34</td> <td>2024-07-11 08:49:11.947</td> </tr> <tr> <td>34</td> <td>2024-07-11 09:49:11.758</td> </tr> <tr> <td>34</td> <td>2024-07-12 09:46:11.758</td> </tr> <tr> <td>37</td> <td>2024-07-10 08:46:11.587</td> </tr> <tr> <td>37</td> <td>2024-07-10 08:49:11.356</td> </tr> <tr> <td>37</td> <td>2024-07-09 08:49:11.744</td> </tr> <tr> <td>38</td> <td>2024-07-10 08:55:11.742</td> </tr> </tbody> </table></div> <p>Desired Result:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>User</th> <th>logindelta</th> </tr> </thead> <tbody> <tr> <td>34</td> <td>1 day</td> </tr> <tr> <td>37</td> <td>3 minutes</td> </tr> <tr> <td>38</td> <td>na</td> </tr> </tbody> </table></div> <p>Here's what I have tried:<br /> <code>top_two_userlogins = df.groupby('User')['logints'].nlargest(2).diff()</code></p> <p>The issue is the diff does not consistently be calculating within the groupby.</p> <p>I've also tried pivoting into columns for the calculation unsuccessfully. It seems as though there should be an operation I can run on the group.</p>
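The diff can be kept inside each group by sorting first and doing the subtraction per user. A minimal sketch with the question's columns and simplified timestamps (milliseconds dropped so the deltas come out exact):

```python
import pandas as pd

df = pd.DataFrame({
    "User": [34, 34, 34, 37, 37, 38],
    "logints": pd.to_datetime([
        "2024-07-10 07:49:11", "2024-07-11 09:49:11", "2024-07-12 09:46:11",
        "2024-07-10 08:46:11", "2024-07-10 08:49:11",
        "2024-07-10 08:55:11",
    ]),
})

# Sort once, then subtract the two most recent logins within each group;
# users with a single login get NaT.
logindelta = (
    df.sort_values("logints")
      .groupby("User")["logints"]
      .apply(lambda s: s.iloc[-1] - s.iloc[-2] if len(s) >= 2 else pd.NaT)
)
```

The result is a Series indexed by `User` holding Timedeltas, which matches the desired output shape.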
<python><pandas>
2024-07-22 20:50:44
2
311
ByRequest
78,780,630
1,431,728
How to replace double slash in URL string with single slash in Python
<p>I've got a URL that contains erroneous double slash(es) (&quot;//&quot;) that I need to turn into single slash. Needless to say, I want to leave the double slash after &quot;https:&quot; untouched.</p> <p>What's the shortest Python code that can make this change in a string?</p> <hr /> <p>I've been trying to use <code>re.sub</code>, with the regex with a negation of the colon (viz., <code>[^:](//)</code>) but it wants to replace the whole match (including the preceding non-colon character) and not just the double slash.</p> <p>Maybe I should just turn all double slashes to single, and then convert the &quot;https:/&quot; to &quot;https://&quot; at the end?</p>
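The whole-match problem goes away if the colon check is a zero-width assertion instead of a consumed character — a negative lookbehind matches only the slashes, so nothing extra is replaced:

```python
import re

def collapse_slashes(url):
    # (?<!:) is zero-width: the run of slashes must not be preceded by ":",
    # so the "//" after "https:" is left alone while all other runs of two
    # or more slashes collapse to one.
    return re.sub(r"(?<!:)/{2,}", "/", url)
```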
<python><replace>
2024-07-22 20:16:21
2
7,417
JohnK
78,780,563
1,922,559
Correct pyasn1 data structure for IEC 61850 sample values SavPDU type?
<p>I'm trying to convert a SEQUENCE type to a Python class model following the Berkeley published <a href="http://wla.berkeley.edu/%7Ecs61a/fa07/library/python_modules/gsutil/third_party/pyasn1/doc/pyasn1-tutorial.html#1.4" rel="nofollow noreferrer">PyASN1 programmer's manual</a> documentation.</p> <p>The IEC 61850-9-2 Section 8.5.2 Table 14 Encoding for a SavPdu is defined as</p> <pre><code> SavPdu ::= SEQUENCE { noASDU [0] IMPLICIT INTEGER (1..65535), security [1] ANY OPTIONAL, asdu [2] IMPLICIT SEQUENCE OF ASDU } ASDU ::= SEQUENCE { svID [0] IMPLICIT VisibleString, datset [1] IMPLICIT VisibleString OPTIONAL, smpCnt [2] IMPLICIT OCTET STRING (SIZE(2)), confRev [3] IMPLICIT OCTET STRING (SIZE(4)), refrTm [4] IMPLICIT UtcTime OPTIONAL, smpSynch [5] IMPLICIT OCTET STRING (SIZE(1)), smpRate [6] IMPLICIT OCTET STRING (SIZE(2)) OPTIONAL, sample [7] IMPLICIT OCTET STRING (SIZE(n)), smpMod [8] IMPLICIT OCTET STRING (SIZE(2)) OPTIONAL } </code></pre> <p>This is my first attempt at creating a Python class for the above model:</p> <pre><code>from pyasn1.type import constraint, namedtype, tag, univ class SavPdu(univ.Sequence): componentType = namedtype.NamedTypes( namedtype.NamedType('noASDU', univ.Integer().subtype(subtypeSpec=constraint.ValueRangeConstraint(1,65535), implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatSimple, 0))), namedtype.OptionalNamedType('security', univ.Any().subtype(implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatSimple, 1))), namedtype.NamedType('asdu', univ.SequenceOf(componentType=ASDU()).subtype(implicitTag=tag.Tag(tag.tagClassContext, tag.tagFormatSimple,2))) ) </code></pre> <p>where ASDU is another Python class defined elsewhere in the file.</p> <p>Am I defining my class correctly based on the model and library?</p>
<python><pyasn1>
2024-07-22 19:55:38
0
737
TWhite
78,780,311
2,409,868
SQLAlchemy scalars and intersect anomaly
<p>The SQLAlchemy manual says* that the <code>Session.scalars()</code> method returns ORM objects.</p> <p>*[Selecting ORM Entries][1]</p> <p>The following code shows two examples one of which returns an ORM object but the other does not. The first uses a select statement which selects a single ORM object. The second example does not return an ORM object. It is identical except for the introduction of SQLAlchemy’s <code>intersect()</code> function. It is only returning the first column of the desired object.</p> <p>Although it is possible to select the primary keys of records and then carry out a second select for ORM objects that seems like a kludge. Is there a more elegant solution?</p> <pre><code>from sqlalchemy import create_engine, select, intersect from sqlalchemy.orm import Mapped, mapped_column, DeclarativeBase, Session class Base(DeclarativeBase): pass class Movie(Base): __tablename__ = &quot;movie&quot; title: Mapped[str] id: Mapped[int] = mapped_column(primary_key=True) def __repr__(self): return ( f&quot;{self.__class__.__qualname__}(&quot; f&quot;title={self.title!r}, &quot; f&quot;id={self.id!r})&quot; ) engine = create_engine(&quot;sqlite+pysqlite:///:memory:&quot;) Base.metadata.create_all(engine) with Session(engine) as session: movie_1 = Movie(title=&quot;Great Movie 1&quot;) movie_2 = Movie(title=&quot;Great Movie 2&quot;) session.add_all((movie_1, movie_2)) statement = select(Movie).where(Movie.title == &quot;Great Movie 1&quot;) print(&quot;\n&quot;, statement) result = session.scalars(statement).all() print(f&quot;{result=}&quot;) stmt_isec = intersect(statement) # In case you're wondering, the next line has the same effect as # the unary intersect. 
# stmt_isec = intersect(*[statement, statement]) print(&quot;\n&quot;, stmt_isec) result = session.scalars(stmt_isec).all() print(f&quot;{result=}&quot;) </code></pre> <p>Output:</p> <pre><code>SELECT movie.title, movie.id FROM movie WHERE movie.title = :title_1 result=[Movie(title='Great Movie 1', id=1)] SELECT movie.title, movie.id FROM movie WHERE movie.title = :title_1 INTERSECT SELECT movie.title, movie.id FROM movie WHERE movie.title = :title_1 result=['Great Movie 1'] </code></pre> <h3>Use Case</h3> <p>Consider a five-column table. The end user is provided with a search form with five fields for entering search criteria for each of the five columns. The user enters search criteria for fields one and four, intending that all records which match those criteria will be returned.</p> <p>Constructing the select statement is straightforward. The problem is that with five columns, there are 120 (5 factorial) different statements that would be required.</p> <p>The solution is to construct a single select statement for each column that is present in the user’s search. These are then combined by the SQLAlchemy intersect function. In our example, a select statement would be created for columns one and four but not for two, three, or five. These two become the arguments for the intersect function.</p>
<python><sqlalchemy>
2024-07-22 18:37:50
1
1,380
lemi57ssss
78,780,255
9,097,114
add multiple recipients to sendgrid
<p>I am unable to send mail to multiple recipients with the code below, adding 3 or more mail IDs to 'to_emails'.</p> <pre><code>to = 'aaa@mail.com,bbb@mail.com,ccc@mail.com' html_content1 = ''' Sample''' message = Mail( from_email='from@mail.com', to_emails=to, subject=fl, html_content=html_content1) with open('attachment', 'rb') as f: data = f.read() f.close() encoded = base64.b64encode(data).decode() attachment = Attachment() attachment.file_content = FileContent(encoded) attachment.file_type = FileType('application/pdf') attachment.file_name = FileName(file+'.pdf') attachment.disposition = Disposition('attachment') attachment.content_id = ContentId('Example Content ID') message.attachment = attachment sendgrid_client = SendGridAPIClient('key') response = sendgrid_client.send(message) print(response.status_code) print(response.body) print(response.headers) </code></pre> <p>Are any modifications required in the above code? Thanks in advance.</p>
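A hedged note on the likely cause: `to_emails` here is one comma-joined string, which the sendgrid-python `Mail` helper treats as a single (invalid) address rather than three. Passing a real list is the usual fix:

```python
# The Mail helper expects a list of addresses; a single comma-joined
# string is not split into separate recipients.
to = ["aaa@mail.com", "bbb@mail.com", "ccc@mail.com"]
```

With a list you can also pass `is_multiple=True` to `Mail(...)` so each recipient gets an individual copy instead of seeing the other addresses (based on the sendgrid-python v6 helpers; verify against the installed version).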
<python><sendgrid>
2024-07-22 18:22:48
1
523
san1
78,780,231
11,748,924
Matplotlib antialiasing of axvline when plotting from label encoded classes when it's zoomed in
<p>I have this code:</p> <pre><code># Display parameter Xt = X_test[0, 0:1024] a = 1 lw = 1 # Get mask of every classes bl_pred = yp == 0 p_pred = yp == 1 qrs_pred = yp == 2 t_pred = yp == 3 # Plotting for i in range(len(Xt)): if bl_pred[i]: plt.axvline(x=i, color='grey', linestyle='-', alpha=a, linewidth=lw) elif p_pred[i]: plt.axvline(x=i, color='orange', linestyle='-', alpha=a, linewidth=lw) elif qrs_pred[i]: plt.axvline(x=i, color='green', linestyle='-', alpha=a, linewidth=lw) elif t_pred[i]: plt.axvline(x=i, color='purple', linestyle='-', alpha=a, linewidth=lw) plt.plot(Xt, color='blue') plt.show() </code></pre> <p>Returning Image without problem: <a href="https://i.sstatic.net/V0Dm1ymt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0Dm1ymt.png" alt="enter image description here" /></a></p> <p>But as I zoomed in the data scope to the smaller scope like this:</p> <pre><code># Display parameter Xt = X_test[0, 0:256] # smaller scope a = 1 lw = 1 </code></pre> <p>It's returning aliased image: <a href="https://i.sstatic.net/fz8mmak6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fz8mmak6.png" alt="enter image description here" /></a></p> <p>This can be fixed with increasing <strong>linewidth</strong>:</p> <pre><code># Display parameter Xt = X_test[0, 0:256] a = 1 lw = 1.4 # increasing line width manually </code></pre> <p><a href="https://i.sstatic.net/InfdjqWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/InfdjqWk.png" alt="enter image description here" /></a></p> <p>But, I don't want manually adjusting line width. How do I make it automatically adjust? Maybe it can be calculated based on difference of every nodes of current display? But I don't know the syntax for that.</p>
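The gap-free line width can be computed instead of hand-tuned: divide the axis width in display pixels by the number of samples, then convert to points (Matplotlib line widths are in points, i.e. pixels × 72 / dpi). A sketch with synthetic stand-ins for `X_test` and `yp` (hypothetical data; Agg backend so it runs headless):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop when showing windows
import matplotlib.pyplot as plt

Xt = np.sin(np.linspace(0, 8 * np.pi, 256))  # stand-in for X_test[0, 0:256]
yp = np.repeat([0, 1, 2, 3], 64)             # stand-in predicted classes

fig, ax = plt.subplots()
ax.plot(Xt, color="blue")
ax.set_xlim(0, len(Xt) - 1)
fig.canvas.draw()  # realize the layout so the axes extent is known

# Width of one sample in display pixels, converted to points, so adjacent
# axvlines tile the axis without antialiasing gaps at any zoom level.
axis_px = ax.get_window_extent().width
lw = axis_px / (len(Xt) - 1) * 72.0 / fig.dpi

colors = {0: "grey", 1: "orange", 2: "green", 3: "purple"}
for i, cls in enumerate(yp):
    ax.axvline(x=i, color=colors[int(cls)], linestyle="-", linewidth=lw)
```

Recomputing `lw` after changing the slice length (256 vs 1024) keeps the lines touching automatically; note it must also be recomputed if the figure is resized.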
<python><matplotlib><visualization>
2024-07-22 18:16:20
1
1,252
Muhammad Ikhwan Perwira
78,780,083
304,870
Module "twine" not found when using Azure DevOps pipeline
<p>I'm running a pipeline in Azure Devops which is supposed to publish a wheel using &quot;twine&quot;. The pipeline uses a Windows based image.</p> <p>The pipeline is based on the docs from <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/pypi?view=azure-devops&amp;tabs=yaml#publish-python-packages-to-an-azure-artifacts-feed" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/pypi?view=azure-devops&amp;tabs=yaml#publish-python-packages-to-an-azure-artifacts-feed</a>.</p> <p>I tried with and without virtual environment and calling twine directly but also using <code>python -m</code>. Also tested using <code>python3</code> instead of <code>python</code>, but no luck.</p> <p>I always end up with the error <code>C:\__t\Python\3.9.13\x64\python.exe: No module named twine </code></p> <p>Below the steps:</p> <pre><code>- task: UsePythonVersion@0 inputs: versionSpec: '3.x' disableDownloadFromRegistry: true addToPath: true architecture: 'x64' displayName: 'Use Python $(python.version)' - script: | python -m pip install --user --upgrade pip setuptools wheel twine python -m venv myenv myenv/Scripts/activate displayName: 'Install Python Prerequesites' - script: | python setup.py bdist_wheel displayName: 'Build wheel' - task: TwineAuthenticate@1 inputs: artifactFeed: 'MyProject/MyFeed' displayName: 'Twine Authenticate' - script: | # Also tried python -m twine... twine upload --repository-url https://pkgs.dev.azure.com/myorg/myproject/_packaging/myfeed/pypi/upload/ dist/*.whl displayName: 'Upload to feed' </code></pre>
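A hedged observation on the pipeline itself: each `script:` block runs in a fresh shell, so the venv activated in the install step is gone by the time the upload step runs, and `pip install --user` can land packages in a user site that the `UsePythonVersion` interpreter does not scan. Keeping every step on the same interpreter (no `--user`, no venv, always `python -m`) sidesteps both problems. A sketch reusing the question's placeholder names — the `-r` name must match the repository entry that `TwineAuthenticate` writes into the `.pypirc` it exposes via `$(PYPIRC_PATH)`:

```yaml
- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
    architecture: 'x64'

- script: |
    python -m pip install --upgrade pip setuptools wheel twine
    python -m twine --version
  displayName: 'Install prerequisites'

- script: python setup.py bdist_wheel
  displayName: 'Build wheel'

- task: TwineAuthenticate@1
  inputs:
    artifactFeed: 'MyProject/MyFeed'

- script: python -m twine upload -r MyFeed --config-file $(PYPIRC_PATH) dist/*.whl
  displayName: 'Upload to feed'
```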
<python><windows><azure-devops><pip><azure-pipelines>
2024-07-22 17:39:56
1
33,166
Krumelur
78,779,795
3,569,246
Force Python auto-imports from current package to be prefixed with this package's name
<p>I use Python and Pylance extensions in VSCode.</p> <p>In my own package that I'm editing, the automatically added imports (with setting &quot;Import Format: absolute&quot;) look like this:</p> <pre><code>from mydirectory.myfile import myclass </code></pre> <p>However, my Python package is being consumed by a (very dumb and non-negotiable) external system that refuses to interpret it correctly unless imports are formatted specifically like:</p> <pre><code>from mypackage.mydirectory.myfile import myclass </code></pre> <p>So how can I force VSCode to auto-complete those imports in that exact format?</p> <p>I tried many VSC settings related to imports, formatting, Python, Pylance, etc. already, but so far none of them did what I need. I suspect maybe some manually forced prefixes are needed? (I have only one package, so it would be fine to hardcode a prefix like that with a specific string.)</p>
<python><visual-studio-code><import><autocomplete>
2024-07-22 16:19:38
1
3,756
JoannaFalkowska