QuestionId: int64 (74.8M – 79.8M)
UserId: int64 (56 – 29.4M)
QuestionTitle: string (15 – 150 chars)
QuestionBody: string (40 – 40.3k chars)
Tags: string (8 – 101 chars)
CreationDate: date (2022-12-10 09:42:47 – 2025-11-01 19:08:18)
AnswerCount: int64 (0 – 44)
UserExpertiseLevel: int64 (301 – 888k)
UserDisplayName: string (3 – 30 chars, may be null)
78,903,338
893,159
How to combine async functions with callbacks in Python?
<p>If I have an async view in Django, can I have it await a callback?</p> <p>Pseudocode:</p> <pre class="lang-py prettyprint-override"><code>async def my_view(request): # Use external services that eventually send data to /callback/ do_something() # Wait for the callback result = await callback_executed() do_something_else(result) # view for /callback/ def callback(request, result): # Here control should be returned to my_view providing the result data </code></pre> <p>I would like to avoid having to implement the second part of <code>my_view</code> inside <code>callback</code>, but rather have an async function that returns the result in the original view as soon as the callback is executed.</p>
<python><django><django-rest-framework><async-await><callback>
2024-08-22 20:02:30
1
4,297
allo
78,903,264
19,039,483
Layout gaps with Dash dbc components
<p>I am trying to build a dashboard. The problem is that even if I specify className=&quot;g-0&quot; and justify=&quot;start&quot;, my elements are spread out across the screen with huge gaps. I need them to be concentrated on the left side of the screen without any gaps.</p> <p>The versions:</p> <ul> <li>dash-bootstrap-components 1.6.0</li> <li>dash 2.17.1</li> <li>flask 3.0.3</li> </ul> <pre class="lang-py prettyprint-override"><code>from dash import Dash, html, dcc import dash_bootstrap_components as dbc import flask layout = html.Div([dbc.Row( [ dbc.Col(html.Span( &quot;Table&quot;, className='top-bottom-title' )), dbc.Col([ html.Div(id='label_1', children='Original (years)'), dcc.Input(id='orig_cell', type='number', value=0, readOnly=True), ], className=&quot;g-0&quot;), dbc.Col([ html.Div(id='label_2', children='Override (years)'), dcc.Input(id='new_cell', type='number', value=0, step=1/12, min=0), ], className=&quot;g-0&quot;), dbc.Col([ dbc.Button( &quot;Apply&quot;, id=&quot;submit_override&quot;, n_clicks=0 ),], className=&quot;g-0&quot;) ], className=&quot;g-0&quot;, justify=&quot;start&quot; ) ] ) if __name__ == &quot;__main__&quot;: server = flask.Flask(__name__) app = Dash(__name__, external_stylesheets=[ dbc.themes.BOOTSTRAP], server=server, suppress_callback_exceptions=True) app.title = &quot;Analytics&quot; app.layout = layout app.run_server(debug=True, host=&quot;0.0.0.0&quot;, port=40000) </code></pre> <p>In addition to playing with the dbc.Row parameters className=&quot;g-0&quot; and justify=&quot;start&quot;, I also tried using a horizontal dbc.Stack instead, but got the same results.</p>
<python><python-3.x><plotly-dash><dash-bootstrap-components>
2024-08-22 19:42:46
1
315
Thoughtful_monkey
78,903,133
15,452,898
Counting repetitions in PySpark
<p>Currently I'm working with a large dataframe and am faced with an issue.</p> <p>I want to return the number of times (count) each value is repeated in a table.</p> <p>For example: the number 10 is repeated twice, so I want to get the number 2, and so on...</p> <p>My code is:</p> <pre><code>from pyspark.sql.types import StructType, StructField, StringType, IntegerType, DateType right_table_23 = [ (&quot;ID1&quot;, 2), (&quot;ID2&quot;, 3), (&quot;ID3&quot;, 5), (&quot;ID4&quot;, 6), (&quot;ID6&quot;, 10), (&quot;ID8&quot;, 15), (&quot;ID9&quot;, 10), (&quot;ID10&quot;, 5), (&quot;ID2&quot;, 5), (&quot;ID3&quot;, 8), (&quot;ID4&quot;, 3), (&quot;ID2&quot;, 2), (&quot;ID3&quot;, 4), (&quot;ID4&quot;, 3) ] </code></pre> <p>A schema for the table shown above:</p> <pre><code>schema = StructType([ StructField(&quot;ID&quot;, StringType(), True), StructField(&quot;Count&quot;, IntegerType(), True) ]) </code></pre> <p>Next I create my table with the following code:</p> <pre><code>df_right_table_23 = spark.createDataFrame(right_table_23, schema) </code></pre> <p>In order to count the number of repetitions I use the following code:</p> <pre><code># Find the repetitions of the value 2 (note: df.count is the DataFrame method, # so the column must be addressed as df[&quot;Count&quot;]) df_right_table_23.where(df_right_table_23[&quot;Count&quot;] == 2).count() </code></pre> <p>But if the range of values includes numbers from 2 up to 100, it is hard and time-consuming to rewrite the above code for each value.</p> <p>Is it possible to somehow automate the process of counting repetitions?</p>
<python><pyspark><data-manipulation><repeat>
2024-08-22 19:04:44
1
333
lenpyspanacb
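The per-value counts asked about above come from a single aggregation rather than one filter per value: in PySpark that is `df_right_table_23.groupBy("Count").count()`. The same counting logic, sketched in plain Python with `collections.Counter` so it runs without a Spark session:

```python
from collections import Counter

# The (ID, Count) rows from the question.
right_table_23 = [
    ("ID1", 2), ("ID2", 3), ("ID3", 5), ("ID4", 6), ("ID6", 10),
    ("ID8", 15), ("ID9", 10), ("ID10", 5), ("ID2", 5), ("ID3", 8),
    ("ID4", 3), ("ID2", 2), ("ID3", 4), ("ID4", 3),
]

# How many times each value appears, computed in one pass over the data.
repetitions = Counter(value for _, value in right_table_23)

repetitions[10]  # 10 appears twice, so this is 2
```

The Spark `groupBy(...).count()` version returns the same pairs as a distributed DataFrame, covering every distinct value at once instead of one hand-written filter per number.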
78,903,094
12,255,379
Display a waveform with the ability to zoom on both axes, preferably without rendering the whole waveform, in Python
<p>I'm looking for a library or tool to set up an oscilloscope-like interface in my Python program. Ideally I want a hassle-free solution with a matplotlib level of engagement, as this would only serve to check the waveform in a human-friendly way for sanity's sake. I looked through matplotlib's pyplot documentation but didn't find anything useful. But given how common this problem seems, I imagine there are some handy libraries for it.</p>
<python><visualization>
2024-08-22 18:53:17
0
769
Nikolai Savulkin
78,902,924
4,577,467
How to force SWIG to see DECLSPEC preprocessor macro when included from another file?
<p>I am using SWIG version 4.0.2 in a Windows Subsystem for Linux (WSL) Ubuntu distribution. I am attempting to create a Python extension module that wraps a C++ library. The C++ library is intended to be used in both Windows and Linux environments, and thus its classes and external functions are decorated with a <code>__declspec</code> for Windows, and nothing for Linux, via the following well-known preprocessor macro, which is in a common <code>types.h</code> header file that is included in many other header files in the C++ library.</p> <h5>types.h</h5> <pre><code>#pragma once #if defined( WIN32 ) #ifdef FOO_EXPORT #define FOO_DECLSPEC __declspec(dllexport) #else #define FOO_DECLSPEC __declspec(dllimport) #endif #else #define FOO_DECLSPEC #endif </code></pre> <p>I am only interested in the Linux environment. Thus <code>FOO_DECLSPEC</code> will be blank.</p> <p>When SWIG generates its C++ wrapper code, it behaves as if it does not understand that <code>FOO_DECLSPEC</code> is a preprocessor macro. Instead, it treats it as the name of a class. The result is that the generated C++ wrapper code is wrong and will not compile. The following is a simplified example that demonstrates the problem.</p> <p>The simple C++ library to wrap can be represented by the following header file. 
It includes <code>types.h</code>, which defines <code>FOO_DECLSPEC</code>.</p> <h5>example3.h</h5> <pre><code>#pragma once #include &quot;types.h&quot; class FOO_DECLSPEC Person { private: int m_id; public: Person( void ) : m_id( 42 ) { } virtual ~Person( void ) { }; int getID( void ) const { return m_id; } }; </code></pre> <p>The SWIG interface file I use to generate the C++ wrapper code is the following.</p> <h5>example3.swg</h5> <pre><code>%module example3 %{ #define SWIG_FILE_WITH_INIT #include &quot;example3.h&quot; %} %include &quot;example3.h&quot; </code></pre> <p>The command line I use to generate the C++ wrapper code is the following.</p> <pre><code>finch@laptop:~/work/swig_example$ swig -c++ -python -v example3.swg Language subdirectory: python Search paths: ./ ./swig_lib/python/ /usr/share/swig4.0/python/ ./swig_lib/ /usr/share/swig4.0/ Preprocessing... Starting language-specific parse... Processing types... C++ analysis... Processing nested classes... Generating wrappers... </code></pre> <p>There are no errors indicated in the SWIG output. However, the generated <code>example3_wrap.cxx</code> file is wrong and will fail to compile. For example, the first thing wrong is in the types table. It thinks the class name is <code>FOO_DECLSPEC</code> instead of <code>Person</code>.</p> <h5>example3_wrap.cxx (wrong result)</h5> <pre><code>... * -------- TYPES TABLE (BEGIN) -------- */ #define SWIGTYPE_p_FOO_DECLSPEC swig_types[0] ... 
</code></pre> <h1>A kludge change to source to avoid wrong result</h1> <p>In contrast, if I define <code>FOO_DECLSPEC</code> in <code>example3.h</code>, instead of including <code>types.h</code>, then the generated C++ wrapper code is correct and will compile.</p> <h5>example3.h (with kludge change)</h5> <pre><code>#pragma once // DO NOT INCLUDE THIS #include &quot;types.h&quot; // DEFINE FOO_DECLSPEC HERE INSTEAD #if defined( WIN32 ) #ifdef FOO_EXPORT #define FOO_DECLSPEC __declspec(dllexport) #else #define FOO_DECLSPEC __declspec(dllimport) #endif #else #define FOO_DECLSPEC #endif class FOO_DECLSPEC Person { ... }; </code></pre> <h5>example3_wrap.cxx (correct result)</h5> <pre><code>... * -------- TYPES TABLE (BEGIN) -------- */ #define SWIGTYPE_p_Person swig_types[0] ... </code></pre> <h1>What is the proper fix?</h1> <p>In the actual C++ library I am attempting to wrap, it would be untenable to define the <code>FOO_DECLSPEC</code> macro in every header file where it is needed.</p> <p>Is there something I can do in the SWIG interface file so that SWIG will correctly see the <code>FOO_DECLSPEC</code> preprocessor macro for what it is, when that macro is defined in a separate header file?</p>
<python><c++><linux><swig>
2024-08-22 18:02:19
0
927
Mike Finch
78,902,913
13,102,622
How to bypass Cloudflare using Playwright
<p>I'm creating a webscraping program in Python that bypasses Cloudflare authentication, such as the checkbox challenge. The problem is that the program is unable to find the <code>&lt;iframe&gt;</code> where the checkbox resides.</p> <pre><code>async def run(playwright: Playwright): args = [] #disable navigator.webdriver:true flag args.append(&quot;--disable-blink-features=AutomationControlled&quot;) browser = await playwright.firefox.launch(headless=False, args=args) context = await browser.new_context() page = await context.new_page() await page.goto(url) print(&quot;Navigated to:&quot;, url) await asyncio.sleep(30) # Wait for the iframe containing the CAPTCHA to load iframe = await page.wait_for_selector(&quot;iframe[title='Widget containing a Cloudflare security challenge']&quot;) captcha_frame = await iframe.content_frame() # Click the CAPTCHA checkbox checkbox = await captcha_frame.wait_for_selector(&quot;input[type='checkbox']&quot;) await checkbox.click() print(&quot;CAPTCHA checkbox clicked.&quot;) await asyncio.sleep(10) await browser.close() </code></pre> <p>There is no error to be found, but it won't pass the Cloudflare checkbox unless I manually check it myself.</p> <p>I tried to search for the <code>&lt;iframe&gt;</code> and expected the program to automate checking the checkbox of the Cloudflare authentication. But the result was the opposite: the program cannot find the <code>&lt;iframe&gt;</code>.</p>
<python><python-3.x><web-scraping>
2024-08-22 18:00:39
1
415
Elijah Leis
78,902,899
10,038,696
Why is in-place assignment slower than creating a new array in NumPy?
<p>I am trying to optimize some code that has allocations inside a function that is repeatedly called in a loop. I ran some performance tests using Jupyter, and the results were counterintuitive to me. As a minimal example, see the following.</p> <p>Given arrays <code>A</code>, <code>B</code>, I will perform matrix multiplication of these two in a loop.</p> <ul> <li>Approach 1 will have no preallocation, and the result will be stored in <code>C</code>.</li> <li>Approach 2 will have a preallocated array <code>D</code>, where the result of the multiplication is stored.</li> </ul> <pre class="lang-python prettyprint-override"><code>import numpy as np A = np.random.rand(10, 10) B = np.random.rand(10, 10000) D = np.random.rand(10, 10000) # Approach 1, no pre-allocation for i in range(20000): C = A @ B # Approach 2, pre-allocated D for i in range(20000): D[:] = A @ B </code></pre> <p>I expected the second approach to be faster since it reuses the memory in D instead of allocating a new array each time. However, timing the loops shows that the first approach is actually 2x faster.</p> <p>Why is the in-place assignment (<code>D[:] = A @ B</code>) slower than creating a new array (<code>C = A @ B</code>)? Is this related to NumPy's memory management?</p>
<python><numpy><variable-assignment><pre-allocation>
2024-08-22 17:56:53
1
377
cap
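The timing gap observed above has a simple accounting: `D[:] = A @ B` still allocates a temporary array for `A @ B` and then copies it into `D`, so it does strictly more work than `C = A @ B`, which only binds a name to the freshly allocated result. A truly in-place variant hands the preallocated buffer to `numpy.matmul` via its `out=` argument. A minimal sketch (timings omitted, as they vary by machine):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((10, 10))
B = rng.random((10, 10000))
D = np.empty((10, 10000))

# Writes the product directly into D: no temporary array, no extra copy.
np.matmul(A, B, out=D)

# Same values as the allocating version.
np.allclose(D, A @ B)  # True
```

With `out=` the loop body performs one multiplication and zero copies per iteration, which is the behaviour the preallocation was meant to buy.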
78,902,884
95,048
Optimize reading of a very wide dataset (100k columns) in polars and pandas
<p>I have a dataset that contains a very large number of columns, about 500k.</p> <p>The same data is stored in parquet and as a CSV.</p> <p>I've noticed the following:</p> <ul> <li>Reading the CSV file is very fast in polars, but crashes in pandas.</li> <li>Reading the parquet file is very fast in pandas (about 30 seconds), but takes 30 minutes in polars.</li> </ul> <pre><code># File paths. Both files contain the same data. Data contains 100k columns. parquet_path = '.....' csv_path = '.....' import polars as pl import pandas as pd data_pl_parquet = pl.read_parquet(parquet_path) # Takes ages, about 30 minutes data_pl_csv = pl.read_csv(csv_path) # Very fast, takes 30 seconds data_pd_parquet = pd.read_parquet(parquet_path) # Very fast, takes 30 seconds data_pd_csv = pd.read_csv(csv_path) # Takes more than 30 minutes then crashes # What about reading the parquet in pandas, then converting it to polars? pl.from_pandas(data_pd_parquet) # Takes 30 seconds </code></pre> <p>What's going on? Why do pandas and polars have such different behaviours? How can I debug this further?</p>
<python><pandas><python-polars>
2024-08-22 17:54:00
1
8,960
dalloliogm
78,902,791
2,201,603
ReportLab is not sending PDF file to path directory
<p>My colleagues and I are using ReportLab and two of us can successfully generate a simple PDF document using the below script. Both of us that receive a successful ReportLab PDF file have ReportLab version 4.2.0. Our other team member ran the same script, but the PDF isn't showing up in the directory as expected. Where could the file be going? He initially had 4.2.2 on his machine, but we uninstalled and reinstalled 4.2.0. Still didn't have any success by changing the version. We're not receiving any error messages and we've done a search on his machine to see if we can locate the file, thinking maybe it went somewhere that we didn't expect. We also tried manually providing the path and still no document shows up. All of us are using Windows machines.</p> <pre><code>from reportlab.pdfgen import canvas from reportlab.lib.pagesizes import A4 pdf = canvas.Canvas('Occupations-In-Demand.pdf', pagesize=A4) pdf.setTitle('blank_template') width, height = A4 pdf.showPage() pdf.save() </code></pre> <p>Second option creating a path:</p> <pre><code>from reportlab.pdfgen import canvas from reportlab.lib.pagesizes import A4 path = r'C:\Users\myname\Desktop\Occupations-In-Demand.pdf' pdf = canvas.Canvas(path, pagesize=A4) pdf.setTitle('blank_template') width, height = A4 pdf.showPage() pdf.save() </code></pre>
<python><reportlab>
2024-08-22 17:25:06
0
7,460
Dave
78,902,565
250,962
How do I install Python dev-dependencies using uv?
<p>I'm trying out <a href="https://github.com/astral-sh/uv" rel="noreferrer">uv</a> to manage my Python project's dependencies and virtualenv, but I can't see how to install <em>all</em> my dependencies for local development, including the development dependencies.</p> <p>In my <em>pyproject.toml</em> file, I have this kind of thing:</p> <pre class="lang-ini prettyprint-override"><code>[project] name = &quot;my-project&quot; dependencies = [ &quot;django&quot;, ] [tool.uv] dev-dependencies = [ &quot;factory-boy&quot;, ] [tool.uv.pip] python-version = &quot;3.10&quot; </code></pre> <p>I can do the following to create a virtualenv and then generate a <code>requirements.txt</code> lockfile, which does not contain <em>dev-dependencies</em> (which is OK, because that is for production):</p> <pre class="lang-none prettyprint-override"><code>uv venv --python 3.10 uv pip compile pyproject.toml -o requirements.txt </code></pre> <p>But how can I install <em>all</em> the dependencies in my virtualenv?</p> <p><code>uv pip sync</code> will use the <code>requirements.txt</code>.</p> <p>There's also <code>uv sync</code>, but I don't understand how that differs, and trying that generates an error:</p> <pre class="lang-none prettyprint-override"><code>error: Multiple top-level packages discovered in a flat-layout: ['conf', 'hines', 'docker', 'assets', 'node_modules']. </code></pre>
<python><uv>
2024-08-22 16:20:22
6
15,166
Phil Gyford
78,902,557
7,265,057
How to properly configure Python to use system-level Certificates (windows) behind a Forced VPN (zscaler)?
<p>I've recently started working within a large organization that requires all computers to be behind a forced VPN (zscaler). This setup has caused several issues when using multiple Python libraries and even when installing packages from remote sources, such as PyTorch with CUDA.</p> <p>Here are the main issues we're encountering:</p> <ul> <li>SSL Connection Errors: These errors arise because Python is not using the system-level certificates.</li> <li>Package Installation Failures: Particularly with packages like pytorch with CUDA, which require secure connections.</li> </ul> <p>Some steps taken so far:</p> <ul> <li>We tried to mitigate some of these issues by using the pip-system-certs package, which helps Python use the system certificates for pip installations</li> <li>We encountered persistent issues with libraries like urllib3 and certifi, which do not fetch the proper certificates.</li> <li>As a workaround, we manually exported the certificate from the Microsoft Management Console (MMC) and replaced the certificate that certifi was using</li> </ul> <p>This workaround is not ideal mostly because we would have to do it for every virtual environment and every time the cert changes.</p> <p>Is there a more elegant and streamlined solution for ensuring that Python uses the system-level certificates behind a forced VPN like zscaler? Ideally, I would like to:</p> <ul> <li>Avoid manual certificate replacement steps.</li> <li>Have a solution that works across different Python libraries and tools.</li> <li>Possibly configure the system or Python environment to automatically use the correct certificates</li> </ul> <p>As more python developers join the team we really need a streamlined solution for this.</p>
<python><ssl><vpn>
2024-08-22 16:17:34
0
343
wrong1man
78,902,450
741,262
AWS Sagemaker Notebook + Panel Tabulator: on_edit not working
<p>I am trying to get some basic example of <a href="https://panel.holoviz.org/reference/widgets/Tabulator.html" rel="nofollow noreferrer">Panel Tabulator</a> with <code>on_edit</code> working in an AWS SageMaker notebook (Jupyter Lab).<br /> My understanding is that <code>edit_table.on_edit(lambda e: print(e.column, e.row, e.old, e.value))</code> should fire anytime I leave an edited cell (click elsewhere / use tab / use enter).</p> <p>The code at the end of my post renders the table, I can edit the strings, but I never see the print output. I also tried to cause a different side-effect like writing a file. Nothing.<br /> I also tried to <code>tabulator.value</code> in a different cell after making the edit. It prints the original data, making me think the problem is in firing the edit, not reacting to it.</p> <p>What do I need to do in order to make <code>on_edit</code> work in <strong>SageMaker notebooks</strong>?<br /> (Open to recommendations of other AWS-managed notebooks in case there is one that 'just works')</p> <hr /> <pre><code>import panel as pn import pandas as pd pn.extension('tabulator') df = pd.DataFrame({ 'Name': ['John', 'Jane', 'Bob', 'Alice'], 'Age': [30, 25, 35, 28], 'City': ['New York', 'London', 'Paris', 'Tokyo'] }) tabulator = pn.widgets.Tabulator(df) tabulator.on_edit(lambda e: print(e.column, e.row, e.old, e.value)) pn.Column(tabulator).servable() # Also tried just tabulator </code></pre> <ul> <li>Running this code in <a href="https://panelite.holoviz.org/lab/" rel="nofollow noreferrer">https://panelite.holoviz.org/lab/</a> produces a log in the developer console <code>Python callback returned following output: Name 0 John aaa</code>. I expected the output in the notebook, but I don't mind. The event happens and the output changes as a I change the print function.</li> <li>Running the same code in AWS sage maker renders the table but does nothing (no log on the developer console). 
Tried Chrome &amp; Firefox.</li> </ul> <hr /> <p>Output of <code>!jupyter labextension list</code> of my SageMaker Notebook instance:</p> <pre><code>JupyterLab v4.2.1 /home/ec2-user/anaconda3/envs/python3/share/jupyter/labextensions jupyterlab_pygments v0.3.0 enabled OK (python, jupyterlab_pygments) jupyterlab-plotly v5.22.0 enabled X @jupyter-widgets/jupyterlab-manager v5.0.11 enabled OK (python, jupyterlab_widgets) @jupyter-notebook/lab-extension v7.2.0 enabled OK @pyviz/jupyterlab_pyviz v3.0.3 enabled OK The following extensions may be outdated or specify dependencies that are incompatible with the current version of jupyterlab: jupyterlab-plotly If you are a user, check if an update is available for these packages. If you are a developer, re-run with `--verbose` flag for more details. </code></pre>
<python><amazon-sagemaker><jupyter-lab><tabulator><holoviz-panel>
2024-08-22 15:50:27
0
1,291
Reini
78,902,320
8,410,477
Compile, install X-13ARIMA-SEATS on macOS Apple Silicon (ARM64) and use it in sm.tsa.x13_arima_analysis()
<p>I was struggling to compile and use <strong>X-13ARIMA-SEATS</strong> on a Mac with an M3 chip and to call it using <code>sm.tsa.x13_arima_analysis(s, x12path='/usr/local/bin/x13as')</code> in <strong>statsmodels</strong>. Finally, I succeeded, and now share the method step by step.</p> <p>In addition, I would like to know how to seasonally adjust data with <strong>TRAMO-SEATS</strong>, using Python libraries such as <strong>statsmodels</strong> or other seasonal adjustment tools.</p>
<python><macos><statsmodels><seasonal-adjustment><x13>
2024-08-22 15:14:37
1
10,141
ah bon
78,902,034
1,914,781
Use one regex to extract information from two patterns
<p>I would like to simplify the regex logic below into one regex statement, to make the logic easier to understand.</p> <pre class="lang-python prettyprint-override"><code>import re content = &quot;&quot;&quot; [ 1.765989] initcall init_module.cfi_jt+0x0/0x8 [altmode_glink] returned 0 after 379 usecs [ 0.001873] initcall selinux_init+0x0/0x1f8 returned 0 after 85 usecs &quot;&quot;&quot; for line in content.split(&quot;\n&quot;): m = re.search(r&quot;\[\s+(?P&lt;ts&gt;[\d.]+)\] initcall init_module.*\[(?P&lt;func&gt;[^ ]+)\] returned (?P&lt;ret&gt;[\d-]+) after (?P&lt;val&gt;[\d-]+) usecs&quot;, line) if m: ts = m['ts'] func = m['func'] ret = m['ret'] val = m['val'] print(ts,func,ret,val) else: m = re.search(r&quot;\[\s+(?P&lt;ts&gt;[\d.]+)\] initcall (?P&lt;func&gt;[^ ]+)\+.*? returned (?P&lt;ret&gt;[\d-]+) after (?P&lt;val&gt;[\d-]+) usecs&quot;, line) if m: ts = m['ts'] func = m['func'] ret = m['ret'] val = m['val'] print(ts,func,ret,val) </code></pre> <p>Current output is good:</p> <pre class="lang-none prettyprint-override"><code>1.765989 altmode_glink 0 379 0.001873 selinux_init 0 85 </code></pre>
<python><regex>
2024-08-22 14:15:00
2
9,011
lucky1928
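The two patterns above differ only in how the function name is captured, so they can be merged with an alternation. Python's `re` module does not allow the same group name in both branches, so this sketch (a variant of the question's patterns, with a slightly tightened `init_module` branch) uses two names and picks whichever one matched:

```python
import re

content = """
[    1.765989] initcall init_module.cfi_jt+0x0/0x8 [altmode_glink] returned 0 after 379 usecs
[    0.001873] initcall selinux_init+0x0/0x1f8 returned 0 after 85 usecs
"""

pattern = re.compile(
    r"\[\s+(?P<ts>[\d.]+)\] initcall "
    # Branch 1: init_module lines, name taken from the [...] module suffix.
    # Branch 2: ordinary lines, name taken from before the '+offset'.
    r"(?:init_module\.\S+ \[(?P<mod>[^\]]+)\]|(?P<func>\S+)\+\S*)"
    r" returned (?P<ret>-?\d+) after (?P<val>-?\d+) usecs"
)

rows = []
for line in content.splitlines():
    m = pattern.search(line)
    if m:
        # Exactly one of the two branches matched; the other group is None.
        rows.append((m["ts"], m["mod"] or m["func"], m["ret"], m["val"]))

rows
# [('1.765989', 'altmode_glink', '0', '379'),
#  ('0.001873', 'selinux_init', '0', '85')]
```

Compiling once and using `m["mod"] or m["func"]` keeps the extraction logic in a single place, which is the readability win the question is after.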
78,901,940
13,840,270
Can I subset an Iterator and keep it an Iterator?
<p>I have a use case where I need the permutations of booleans. However, I do not need them when they are reversed. So I can do something like this:</p> <pre class="lang-py prettyprint-override"><code>import itertools [ p for p in itertools.product([True,False],repeat=4) if p!=p[::-1] ] </code></pre> <p>The issue is that this is a list, stored in memory, and with increasing repeats I will run into memory errors.</p> <p>Is it possible to &quot;subset&quot; an iterator while keeping it as an iterator?</p> <p>If possible, I am looking for a general answer that applies to any subsetting strategy on any iterator.</p>
<python><iterator>
2024-08-22 13:55:21
2
3,215
DuesserBaest
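The laziness asked about above is exactly what `filter()` and generator expressions provide: both wrap an iterator and yield matching items one at a time, so nothing is ever materialised as a list. A minimal sketch:

```python
import itertools

# Lazy source: never materialised as a list.
products = itertools.product([True, False], repeat=4)

# Lazy subset: filter() (or equivalently the generator expression
# (p for p in products if p != p[::-1])) wraps the iterator and
# applies the predicate on demand.
non_palindromes = filter(lambda p: p != p[::-1], products)

# Still an iterator: items are produced only when requested.
first = next(non_palindromes)  # (True, True, True, False)
```

Memory use stays constant in the number of repeats, because at any moment only the current tuple exists; the same wrapping works for any predicate on any iterator.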
78,901,762
893,254
How to build a module for logging in Python?
<p>I want to write a library (a module) in Python for logging. The logger instance should be unique per process and global per process. (Meaning that it <em>should not</em> be passed around as an argument to different functions.)</p> <p>I am having a bit of a mental block trying to design something sensible.</p> <p>Let me try and explain my approach.</p> <p>I started with a single process. I added some logging code to this process.</p> <pre class="lang-py prettyprint-override"><code># process_1.py from datetime import datetime from datetime import timezone import logging import sys logger_process_name = 'process_1' logger_file_datetime = datetime.now(timezone.utc).date() logger = logging.getLogger(__name__) stdout_log_formatter = logging.Formatter('%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d | %(message)s') stdout_log_handler = logging.StreamHandler(stream=sys.stdout) stdout_log_handler.setLevel(logging.INFO) stdout_log_handler.setFormatter(stdout_log_formatter) file_log_formatter = logging.Formatter('%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d | %(message)s') file_log_handler = logging.FileHandler(filename=f'{logger_process_name}_{logger_file_datetime}.log') file_log_handler.setLevel(logging.DEBUG) file_log_handler.setFormatter(file_log_formatter) logger.setLevel(logging.DEBUG) logger.addHandler(stdout_log_handler) logger.addHandler(file_log_handler) def main(): logger.info('hello!') logger.info('goodbye!') if __name__ == '__main__': main() </code></pre> <p>As you can see, most of this is just boilerplate code to initialize an instance of a <code>logger</code> object which writes both to <code>stdout</code> and a file, the filename for which is stamped with the current date. (In the UTC timezone.)</p> <p>I then wrote a second process.</p> <pre class="lang-py prettyprint-override"><code># process_2.py from datetime import datetime # ... # ... 
skip copy &amp; paste of same lines above relating to the logger ... # ... logger.addHandler(file_log_handler) def main(): logger.info('hello from process_2!') logger.info('goodbye from process_2!') if __name__ == '__main__': main() </code></pre> <p>Ah - duplicate code. An ideal candidate for a module, perhaps?</p> <p>Let's try to write it.</p> <pre class="lang-py prettyprint-override"><code># lib_logger.py from datetime import datetime from datetime import timezone import logging import sys logger_process_name = 'process_1' # &lt;--- !!! Wrong name !!! logger_file_datetime = datetime.now(timezone.utc).date() logger = logging.getLogger(__name__) stdout_log_formatter = logging.Formatter('%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d | %(message)s') stdout_log_handler = logging.StreamHandler(stream=sys.stdout) stdout_log_handler.setLevel(logging.INFO) stdout_log_handler.setFormatter(stdout_log_formatter) file_log_formatter = logging.Formatter('%(name)s: %(asctime)s | %(levelname)s | %(filename)s:%(lineno)s | %(process)d | %(message)s') file_log_handler = logging.FileHandler(filename=f'{logger_process_name}_{logger_file_datetime}.log') file_log_handler.setLevel(logging.DEBUG) file_log_handler.setFormatter(file_log_formatter) logger.setLevel(logging.DEBUG) logger.addHandler(stdout_log_handler) logger.addHandler(file_log_handler) </code></pre> <p>Note that the process name is wrong if we want to use this from <code>process_2.py</code>.</p> <pre class="lang-py prettyprint-override"><code># process_2.py from lib_logger import logger def main(): logger.info('hello from process_2!') # &lt;--- the wrong process name is printed here logger.info('goodbye from process_2!') # &lt;- and here # it writes to the file `process_1_2024-08-22.log` not # `process_2_2024-08-22.log` as it should if __name__ == '__main__': main() </code></pre> <p>At this point I became stuck and didn't manage to come up with a good solution.</p> <p>Initially I thought it 
was fairly obvious how to solve this. Add a function to the logger module to initialize and return the logger instance, and add another function to register the name.</p> <p>Here's how I tried to implement this:</p> <pre class="lang-py prettyprint-override"><code># lib_logger.py logger_process_name = None logger = None def set_logger_process_name(process_name: str) -&gt; None: global logger_process_name logger_process_name = process_name def get_logger() -&gt; Logger: global logger if logger is not None: return logger if logger_process_name is None: raise RuntimeError(f'process name not set, cannot create logger instance') logger = logging.getLogger(__name__) stdout_log_formatter = # ... # skip some lines file_log_handler = logging.FileHandler(filename=f'{logger_process_name}_{logger_file_datetime}.log') file_log_handler.setLevel(logging.DEBUG) file_log_handler.setFormatter(file_log_formatter) logger.setLevel(logging.DEBUG) logger.addHandler(stdout_log_handler) logger.addHandler(file_log_handler) return logger </code></pre> <p>However, to actually use this from one of the processes, and other modules, is a disaster. The ordering of import statements suddenly becomes important.</p> <pre class="lang-py prettyprint-override"><code># process_1.py from lib_logger import get_logger from lib_logger import set_logger_process_name # might defer process name definitions to another module from lib_process_names import PROCESS_NAME_PROCESS_1 set_logger_process_name(PROCESS_NAME_PROCESS_1) log = get_logger() # must initialize the module, or at least call `set_logger_process_name` # before initializing any further modules which use the logger module # (call `get_logger`) # Why? 
Because we cannot call `get_logger` before `set_logger_process_name` # has been called from example_module import example_function </code></pre> <p>Just to show what <code>example_function</code> might do:</p> <pre class="lang-py prettyprint-override"><code># example_module.py from lib_logger import get_logger log = get_logger() def example_function(): log.info('started work in example_function...') </code></pre> <p>We could actually hack our way around this by having all <em>functions</em> call <code>log = get_logger()</code> before they start running, but this isn't a good solution. The <code>log</code> handle should be globally reachable within a module; each function shouldn't be responsible for obtaining a handle to a logger. That's not much improvement over passing around a <code>logger</code> instance as a function argument.</p>
<python><logging><design-patterns><module><software-design>
2024-08-22 13:14:33
2
18,579
user2138149
78,901,682
13,663,100
Python Polars SQL Interface asof Join
<p>Is there a SQL interface equivalent for the asof join in polars? I use <code>by</code> and <code>on</code> arguments often with these joins.</p> <p>The methodology that I am using in the meantime is the one provided for Snowflake before the asof join was introduced in the SQL interface: <a href="https://stackoverflow.com/questions/75409949/how-to-do-an-as-of-join-in-sql-snowflake">How to do an as-of-join in SQL (Snowflake)?</a></p> <p>I've spent a few minutes looking through GitHub and the closest issue I saw raised was <a href="https://github.com/pola-rs/polars/issues/10068" rel="nofollow noreferrer">this issue</a>. However this does not strictly deal with the SQL interface or asof joins. Thanks.</p>
<python><sql><python-polars>
2024-08-22 12:52:39
0
1,365
Chris du Plessis
78,901,681
9,313,033
Exporting only dev dependencies
<p>Is there a command for <a href="https://github.com/astral-sh/uv" rel="nofollow noreferrer">uv</a> that exports/extracts just the dependencies declared as dev dependencies from the <code>pyproject.toml</code> file, for example to pass test dependencies to tox?</p> <pre class="lang-bash prettyprint-override"><code>uv add Django uv add pytest --dev </code></pre> <p>Results in this <code>pyproject.toml</code>:</p> <pre class="lang-ini prettyprint-override"><code>[project] dependencies = [ &quot;django&gt;=4.2.15&quot;, ] [tool.uv] dev-dependencies = [ &quot;pytest&gt;=8.3.2&quot;, ] </code></pre> <p>How can I generate a file that <em>only</em> contains the dev dependencies, basically a <code>requirements-dev.txt</code>?</p> <p><code>uv pip compile pyproject.toml</code> does not include the dev deps, only the main deps. And I did not see an argument to make it include it:</p> <pre><code>Resolved 4 packages in 83ms # This file was autogenerated by uv via the following command: # uv pip compile pyproject.toml asgiref==3.8.1 # via django django==4.2.15 # via hatch-demo (pyproject.toml) sqlparse==0.5.1 # via django typing-extensions==4.12.2 # via asgiref </code></pre> <p>For comparison, poetry has <code>poetry export --only dev -o requirements-dev.txt</code>, which will generate something like this:</p> <pre><code>iniconfig==2.0.0 ; python_version &gt;= &quot;3.12&quot; and python_version &lt; &quot;4.0&quot; packaging==24.1 ; python_version &gt;= &quot;3.12&quot; and python_version &lt; &quot;4.0&quot; pluggy==1.5.0 ; python_version &gt;= &quot;3.12&quot; and python_version &lt; &quot;4.0&quot; pytest==8.3.2 ; python_version &gt;= &quot;3.12&quot; and python_version &lt; &quot;4.0&quot; </code></pre>
<python><uv>
2024-08-22 12:52:30
3
2,941
CoffeeBasedLifeform
78,901,597
17,040,989
snakemake: MissingOutputException in rule hifiasm in file /home/usr/path/to/Snakefile, line 13
<p>Hi there, I was working on a very basic Snakemake file to run <code>hifiasm</code>; it seems to work well but, for some reason, at the end of the run I was prompted with the following:</p> <blockquote> <p>Waiting at most 5 seconds for missing files. MissingOutputException in rule hifiasm in file /home/usr/path/to/Snakefile, line 13: Job 1 completed successfully, but some output files are missing. Missing files after 5 seconds. This might be due to filesystem latency. If that is the case, consider to increase the wait time with --latency-wait: INLUP00233.asm (missing locally, parent dir not present) Shutting down, this might take some time. Exiting because a job execution failed. Look above for error message Complete log: .snakemake/log/2024-08-21T144851.077767.snakemake.log WorkflowError: At least one job did not complete successfully.</p> </blockquote> <p>This is the code I've been running; does anyone have an explanation for why this is happening? Thanks in advance!</p> <pre><code>##################################### # SNAKEMAKE PIPELINE - assembly # ##################################### SAMPLES = ['INLUP00233'] rule all: input: expand(&quot;{sample}.asm&quot;, sample=SAMPLES) rule hifiasm: input: hifi=&quot;{sample}.fastq.gz&quot;, hic1=&quot;{sample}_1.fq.gz&quot;, hic2=&quot;{sample}_2.fq.gz&quot; output: &quot;{sample}.asm&quot; threads: 16 shell: &quot;hifiasm -o {output} --h1 {input.hic1} --h2 {input.hic2} {input.hifi} -t {threads}&quot; </code></pre>
<python><pipeline><bioinformatics><snakemake>
2024-08-22 12:32:41
1
403
Matteo
78,901,594
7,227,146
Find text inside top-level brackets when they're nested
<p>I have a file with nested brackets. I need to parse the text within the top-level brackets with Python regex.</p> <pre><code>import re string = '{a {b} c} {d}' # desired output: ['a {b} c', 'd'] # non-greedy pattern_1 = r'(?&lt;={).*?(?=})' print(re.findall(pattern_1, string)) # actual output: ['a {b', 'd'] # greedy pattern_2 = r'(?&lt;={).*(?=})' print(re.findall(pattern_2, string)) # actual output: ['a {b} c} {d'] </code></pre> <p>How can I achieve this? With the greedy option it stops too late, with the non-greedy option it stops too soon.</p>
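Since the stdlib `re` module cannot match arbitrarily nested brackets (that requires recursion, which `re` does not support), a plain depth-counting scan is a simple alternative. This is a sketch, not a regex-based answer:

```python
def top_level_contents(s):
    # Scan the string once, tracking brace depth; collect the text between
    # each opening brace at depth 1 and its matching closing brace.
    results, depth, start = [], 0, None
    for i, ch in enumerate(s):
        if ch == '{':
            depth += 1
            if depth == 1:
                start = i + 1
        elif ch == '}':
            if depth == 1:
                results.append(s[start:i])
            depth -= 1
    return results

print(top_level_contents('{a {b} c} {d}'))
```

The scan is linear in the string length and handles any nesting depth, which no `re` pattern can.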
<python><regex><python-re>
2024-08-22 12:32:22
0
679
zest16
78,901,476
1,203,670
How can we efficiently determine whether orders in a table contain only our products, only third-party products, or a mix?
<p>I have a dataset in Python, in which we have OrderNumbers, IsOurs, whether a product is ours or not and we have the amount that the product costs. We would like to classify every order, whether it only contains our product, whether it contains a mix of products or only contains products from others. We can generate the dataset as follows:</p> <pre class="lang-py prettyprint-override"><code># Number of orders and products num_orders = 1000 max_products_per_order = 10 # Generate random OrderNumbers order_numbers = np.repeat(np.arange(1, num_orders + 1), np.random.randint(1, max_products_per_order + 1, num_orders)) # Generate IsOurs column with random True/False values is_ours = np.random.choice([True, False], size=len(order_numbers)) # Generate unique ProductID product_ids = np.arange(1, len(order_numbers) + 1) # Generate random amounts for each product np.random.seed(42) # Set seed for reproducibility amounts = np.round(np.random.uniform(10, 500, size=len(order_numbers)), 2) # Create the DataFrame df = pd.DataFrame({ 'OrderNumber': order_numbers, 'ProductID': product_ids, 'IsOurs': is_ours, 'Amount': amounts }) </code></pre> <p>Now, in order to calculate that, we can do the following:</p> <p><strong>First solution</strong></p> <pre class="lang-py prettyprint-override"><code>df['OnlyOurs'] = df.groupby(['OrderNumber'])['IsOurs'].transform('all') df['Mixed'] = df.groupby(['OrderNumber'])['IsOurs'].transform('any') df['Mixed'] = df.Mixed &amp; (df.OnlyOurs == False) df['OnlyOther'] = (df.Mixed == False) &amp; (df.OnlyOurs == False) df['MixClass'] = &quot;Mix&quot; df.loc[df['OnlyOther'], 'MixClass'] = &quot;Other&quot; df.loc[df['OnlyOurs'], 'MixClass'] = &quot;Ours&quot; df.groupby('MixClass')['Amount'].sum() </code></pre> <p>On my computer this runs in about 40ms, quite fast.</p> <p>If we were to write it a bit more elegant, which is:</p> <p><strong>Second solution</strong></p> <pre class="lang-py prettyprint-override"><code>df['MixClass'] = 
df.groupby('OrderNumber')['IsOurs'].transform( lambda x: 'Ours' if x.all() else 'Other' if not x.any() else 'Mix') </code></pre> <p>This runs significantly slower. I was wondering whether I am missing something on how I can improve the performance of the one-liner. I am aware that this is because in the first solution I use vectorised functions, and in the second I don't, but define a function of my own. However, I was wondering whether a more elegant approach exists, without as much &quot;boilerplate&quot; code.</p>
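The three-way rule itself (all flags set, none set, or some) is independent of pandas; here is a framework-free illustration of the classification and the per-class amount totals, using hypothetical rows in place of the DataFrame:

```python
from collections import defaultdict

def classify(flags):
    # Map a group of IsOurs flags to the three order classes.
    if all(flags):
        return "Ours"
    if not any(flags):
        return "Other"
    return "Mix"

def totals_by_class(rows):
    # rows: (order_number, is_ours, amount) triples
    flags, amounts = defaultdict(list), defaultdict(float)
    for order, ours, amount in rows:
        flags[order].append(ours)
        amounts[order] += amount
    sums = defaultdict(float)
    for order, group in flags.items():
        sums[classify(group)] += amounts[order]
    return dict(sums)
```

The vectorised pandas version is effectively doing the same two-pass grouping, just without a Python-level lambda per group.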
<python><pandas><dataframe>
2024-08-22 12:06:12
2
3,099
Snowflake
78,901,386
5,615,873
How can I center a trimesh window on screen?
<p>I have started to work with Python's <code>trimesh</code> package and the first thing I want to do is to set the size of the graphics window and its position on the screen.</p> <p>I figured out how to resize the graphics window, as the simple code below (from <a href="https://github.com/mikedh/trimesh/blob/main/examples/colors.ipynb" rel="nofollow noreferrer">https://github.com/mikedh/trimesh/blob/main/examples/colors.ipynb</a>) shows:</p> <pre><code>import trimesh # The XAML file is found at https://github.com/mikedh/trimesh/blob/main/models/machinist.XAML mesh = trimesh.load(&quot;machinist.XAML&quot;, process=False) mesh.visual.kind # Resolution is an undocumented argument - I had to figure it out myself mesh.show(resolution=(600,600)) </code></pre> <p>But I also want to center the mesh window on screen. Does anyone know how?</p>
<python><pyglet><trimesh>
2024-08-22 11:44:49
3
3,537
Apostolos
78,901,385
5,878,986
How to Perform Pagination with Sorting on DynamoDB Using CreatedAt Attribute in Python?
<p>I'm working with AWS DynamoDB using Python, and I have a table defined as follows using AWS SAM:</p> <pre><code>RecommendedTalesNewTable: Type: 'AWS::DynamoDB::Table' Properties: TableName: 'RecommendedTalesNew' # New table name AttributeDefinitions: - AttributeName: 'Id' AttributeType: 'S' - AttributeName: 'CreatedAt' AttributeType: 'S' KeySchema: - AttributeName: 'Id' KeyType: 'HASH' - AttributeName: 'CreatedAt' KeyType: 'RANGE' ProvisionedThroughput: ReadCapacityUnits: 5 WriteCapacityUnits: 5 GlobalSecondaryIndexes: - IndexName: 'CreatedAtIndex' KeySchema: - AttributeName: 'Id' KeyType: 'HASH' - AttributeName: 'CreatedAt' KeyType: 'RANGE' Projection: ProjectionType: 'ALL' ProvisionedThroughput: ReadCapacityUnits: 5 WriteCapacityUnits: 5 </code></pre> <p>My goal is to paginate through the items in the RecommendedTalesNew table, sorted by the CreatedAt attribute. To achieve this, I am using the following Python code:</p> <pre><code>def get_paged_data(paginator_type, query_params, next_token=None): dynamodb = client('dynamodb') paginator = dynamodb.get_paginator(paginator_type) if next_token is not None and next_token!=&quot;None&quot;: query_params['ExclusiveStartKey'] = json.loads(next_token) page_iterator = paginator.paginate(**query_params) items = [] for page in page_iterator: items.extend(page['Items']) next_token = page.get('LastEvaluatedKey', None) if len(items) != 0: break return deseriliaze_dynamodata(items), next_token, next_token==None def deseriliaze_dynamodata(items): return list(map(lambda item : deseriliaze(item), items)) def get_recommended_tales(event, context): limit = int(event['queryStringParameters'].get('limit', 10)) last_evaluated_key = event['queryStringParameters'].get('lastEvaluatedKey', None) query_params = { 'TableName': &quot;RecommendedTalesNew&quot;, 'Limit': limit, } try: items, next_token, end_of_data = get_paged_data(&quot;query&quot;, query_params, last_evaluated_key) return { &quot;statusCode&quot;: 200, &quot;body&quot;: 
json.dumps({ &quot;message&quot;: &quot;Tales retrieved successfully&quot;, &quot;tales&quot;: items, 'LastEvaluatedKey': next_token }), &quot;headers&quot;: { 'Content-Type': 'application/json' } } except Exception as e: return { &quot;statusCode&quot;: 500, &quot;body&quot;: json.dumps({&quot;error&quot;: str(e)}), &quot;headers&quot;: { 'Content-Type': 'application/json' } } </code></pre> <p>However, I'm encountering the following error when I run the get_recommended_tales function:</p> <p>ClientError('An error occurred (ValidationException) when calling the Query operation: Either the KeyConditions or KeyConditionExpression parameter must be specified in the request.')</p> <p>I understand that I need to specify a KeyConditionExpression, but since I want to sort by CreatedAt and paginate through the items, I'm not sure how to structure the query_params correctly.</p> <p>How should I adjust my code to correctly query and paginate through the table based on the CreatedAt attribute, or is there a better approach to achieve this?</p>
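For reference, DynamoDB's <code>Query</code> always needs a key condition on the partition key, so with this schema pagination can only happen within a single <code>Id</code>. Below is a sketch of <code>query_params</code> that would satisfy the validation error; the partition-key value is hypothetical, and ordering <em>all</em> items by <code>CreatedAt</code> typically requires a GSI whose partition key is a constant value shared by every item:

```python
query_params = {
    "TableName": "RecommendedTalesNew",
    "IndexName": "CreatedAtIndex",
    # Query requires a condition on the partition key (hypothetical value):
    "KeyConditionExpression": "Id = :id",
    "ExpressionAttributeValues": {":id": {"S": "some-tale-id"}},
    "ScanIndexForward": False,  # newest CreatedAt first within this Id
    "Limit": 10,
}
```

With a constant-partition-key GSI, the same shape works table-wide: the `:id` value becomes the constant, and `ScanIndexForward` controls ascending/descending `CreatedAt` order.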
<python><amazon-web-services><amazon-dynamodb><boto3>
2024-08-22 11:44:48
1
1,166
Bertug
78,901,362
7,256,443
Why are `dict_keys`, `dict_values`, and `dict_items` not subscriptable?
<p>Referring to an item of a <code>dict_keys</code>, <code>dict_values</code>, or <code>dict_items</code> object by index raises a type error. For example:</p> <pre class="lang-py prettyprint-override"><code>&gt; my_dict = {&quot;foo&quot;: 0, &quot;bar&quot;: 1, &quot;baz&quot;: 2} &gt; my_dict.items()[0] --------------------------------------------------------------------------- TypeError Traceback (most recent call last) ----&gt; 1 my_dict.items()[0] TypeError: 'dict_items' object is not subscriptable </code></pre> <p>My question is <em>not</em> how to address this. I am aware that one can just call <code>list()</code> or <code>tuple()</code> on a <code>dict_keys</code>, <code>dict_values</code>, or <code>dict_items</code> object and then subscript that.</p> <p>My question is why does this behaviour still persist in python given that the order of items in a dictionary has been guaranteed <a href="https://docs.python.org/3.7/whatsnew/3.7.html" rel="nofollow noreferrer">since python 3.7.0</a>. Why is it not possible (or not desirable) to refactor the <code>dict_keys</code>, <code>dict_values</code>, and <code>dict_items</code> types so that they are subscriptable?</p> <p><a href="https://docs.python.org/3/library/stdtypes.html#dictionary-view-objects" rel="nofollow noreferrer">The documentation for the stdlib</a> describes these types as &quot;view objects&quot;. I imagine that is where the answer to my question lies but I can't find a more detailed description of what a &quot;view object&quot; actually is. Can anyone enlighten me on this matter as well?</p>
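Part of what "view object" means is that the object stays live against the dictionary rather than materializing a sequence — a quick demonstration of that dynamic behaviour, which is one reason these types behave differently from lists:

```python
d = {"foo": 0, "bar": 1}
keys = d.keys()          # a dict_keys view, not a list

d["baz"] = 2             # mutate the dict *after* creating the view
assert list(keys) == ["foo", "bar", "baz"]  # the view reflects the insertion

del d["foo"]
assert "foo" not in keys  # and the deletion, with no copy ever made
```

Because the view tracks a mutable structure, "the i-th key" is not a stable notion between mutations, whereas `list(d.keys())[i]` indexes an explicit snapshot.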
<python><python-3.x><dictview>
2024-08-22 11:39:41
1
1,033
Ben Jeffrey
78,901,337
1,560,414
py2exe: how to include additional resource files from a dependency
<p>I'm trying to build a project that has a dependency on <code>jsonschema</code>. On building and running the project with <code>py2exe</code> I get this error at runtime:</p> <pre><code>INFO:runtime:Analyzing the code INFO:runtime:Found 527 modules, 27 are missing, 0 may be missing 27 missing Modules ------------------ ? __main__ imported from bdb, pdb ? _frozen_importlib imported from importlib, importlib.abc, zipimport ? _frozen_importlib_external imported from importlib, importlib._bootstrap, importlib.abc, zipimport ? _posixshmem imported from multiprocessing.resource_tracker, multiprocessing.shared_memory ? _winreg imported from platform ? annotationlib imported from attr._compat ? asyncio.DefaultEventLoopPolicy imported from - ? dummy.Process imported from multiprocessing.pool ? fqdn imported from jsonschema._format ? idna imported from jsonschema._format ? importlib_metadata imported from attr, jsonschema ? importlib_resources imported from jsonschema._utils ? isoduration imported from jsonschema._format ? java.lang imported from platform ? jsonpointer imported from jsonschema._format ? org.python.core imported from copy, pickle ? os.path imported from ctypes._aix, distutils.file_util, os, pkgutil, py_compile, sysconfig, tracemalloc, unittest, unittest.util ? pep517 imported from importlib.metadata ? readline imported from cmd, code, pdb ? requests imported from jsonschema.validators ? resource imported from test.support ? rfc3339_validator imported from jsonschema._format ? rfc3986_validator imported from jsonschema._format ? rfc3987 imported from jsonschema._format ? typing_extensions imported from attr._compat, jsonschema.protocols ? uri_template imported from jsonschema._format ? webcolors imported from jsonschema._format Building 'C:\Users\James\Development\Wave-Venture\py2exe_playground\build\Example.exe'. 
</code></pre> <p>On running the .exe:</p> <pre><code>Traceback (most recent call last): File &quot;__main__.py&quot;, line 1, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 664, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 627, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;jsonschema\__init__.pyc&quot;, line 29, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 664, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 627, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;jsonschema\protocols.pyc&quot;, line 33, in &lt;module&gt; File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1007, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 986, in _find_and_load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 664, in _load_unlocked File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 627, in _load_backward_compatible File &quot;&lt;frozen zipimport&gt;&quot;, line 259, in load_module File &quot;jsonschema\validators.pyc&quot;, line 388, in &lt;module&gt; File &quot;jsonschema\_utils.pyc&quot;, line 61, in load_schema File &quot;zipfile.pyc&quot;, line 2327, in read_text File &quot;zipfile.pyc&quot;, line 2315, in open File &quot;zipfile.pyc&quot;, line 1511, in open File &quot;zipfile.pyc&quot;, line 1438, in getinfo KeyError: &quot;There is no item named 'jsonschema/schemas/draft3.json' in the archive&quot; </code></pre> <p>If I look in 
the <code>jsonschema</code> package in Python's <code>site-packages</code>, I can see these files</p> <p><a href="https://i.sstatic.net/T0CczxJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T0CczxJj.png" alt="jsonschema site-package directory" /></a></p> <p>How can I configure py2exe to see these and bundle them? I found <a href="https://stackoverflow.com/a/31590888">this answer</a> on Stack Overflow, though it appears to no longer work, and I cannot find the equivalent hook in the latest versions of <code>py2exe</code>.</p> <p>I'll include a minimal example project that produces this error below.</p> <p><a href="https://github.com/user-attachments/files/16693661/py2exe_playground.zip" rel="nofollow noreferrer">py2exe_playground.zip</a></p> <p>Thanks for any help.</p>
<python><py2exe>
2024-08-22 11:33:37
1
1,667
freebie
78,901,146
1,613,983
Why does recurse=True cause dill not to respect globals in functions?
<p>If I pickle a function with <code>dill</code> that contains a global, somehow that global state isn't respected when the function is loaded again. I don't understand enough about <code>dill</code> to be anymore specific, but take this working code for example:</p> <pre><code>import multiprocessing import dill def initializer(): global foo foo = 1 def worker(arg): return foo with multiprocessing.Pool(2, initializer) as pool: res = pool.map(worker, range(10)) print(res) </code></pre> <p>This works fine, and prints <code>[1, 1]</code> as expected. However, if I instead pickle the <code>initializer</code> and <code>worker</code> functions using <code>dill</code>'s <code>recurse=True</code>, and then restore them, it fails:</p> <pre><code>import multiprocessing import dill def initializer(): global foo foo = 1 def worker(arg): return foo with open('funcs.pkl', 'wb') as f: dill.dump((initializer, worker), f, recurse=True) with open('funcs.pkl', 'rb') as f: initializer, worker = dill.load(f) with multiprocessing.Pool(2, initializer) as pool: res = pool.map(worker, range(2)) </code></pre> <p>This code fails with the following error:</p> <pre><code> File &quot;/tmp/ipykernel_158597/1183951641.py&quot;, line 9, in worker return foo ^^^ NameError: name 'foo' is not defined </code></pre> <p>If I use <code>recurse=False</code> it works fine, but somehow pickling them in this way causes the code to break. Why?</p>
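For background, here is the mechanism in question, stdlib only (this is not `dill`'s exact implementation — just an illustration of why a function rebuilt over its *own captured copy* of the globals, roughly what a recursing pickler can do, no longer sees names that other functions create later):

```python
import types

def initializer():
    global foo
    foo = 1

def worker():
    return foo

# Functions defined in the same module share one live namespace dict:
assert initializer.__globals__ is worker.__globals__

# Rebuild worker over a detached snapshot of its globals, taken while
# foo does not exist yet (and explicitly excluded here for clarity):
snapshot = {k: v for k, v in worker.__globals__.items() if k != "foo"}
detached = types.FunctionType(worker.__code__, snapshot, worker.__name__)

initializer()          # defines foo in the real module globals
assert worker() == 1   # the original function sees it...
try:
    detached()
    saw_name_error = False
except NameError:      # ...but the detached copy never will
    saw_name_error = True
assert saw_name_error
```

The failing `recurse=True` case looks just like `detached`: the restored `worker` carries its own globals mapping, so `initializer` setting `foo` in a different dict cannot help it.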
<python><dill>
2024-08-22 10:51:28
1
23,470
quant
78,901,041
10,207,281
Why is it so slow when transferring files from Windows through WinRM using Python?
<p>I'm using <code>python + winrm</code> to get files from a windows server. The code runs fine but slowly. It can only get about 10GB per day by this way.<br /> If I copy files through remote desktop(<code>mstsc.exe</code> or Other remote clients) manually, I can get a file of 140GB just at one night.</p> <p>I want to know why my code runs so slowly like that, and how to fix it?<br /> Thanks a lot.</p> <p><strong>Here is the environment</strong>:</p> <pre><code>python 3.7.9 pywinrm 0.4.3 </code></pre> <p><strong>WinRm configration</strong>:</p> <pre><code>Config MaxEnvelopeSizekb = 500 MaxTimeoutms = 60000 MaxBatchItems = 32000 MaxProviderRequests = 4294967295 Client NetworkDelayms = 5000 URLPrefix = wsman AllowUnencrypted = false Auth Basic = true Digest = true Kerberos = true Negotiate = true Certificate = true CredSSP = false DefaultPorts HTTP = 5985 HTTPS = 5986 TrustedHosts Service MaxConcurrentOperations = 4294967295 MaxConcurrentOperationsPerUser = 1500 EnumerationTimeoutms = 240000 MaxConnections = 300 MaxPacketRetrievalTimeSeconds = 120 AllowUnencrypted = true Auth Basic = true Kerberos = true Negotiate = true Certificate = false CredSSP = false CbtHardeningLevel = Relaxed DefaultPorts HTTP = 5985 HTTPS = 5986 IPv4Filter = * IPv6Filter = * EnableCompatibilityHttpListener = false EnableCompatibilityHttpsListener = false CertificateThumbprint AllowRemoteAccess = true Winrs AllowRemoteShellAccess = true IdleTimeout = 7200000 MaxConcurrentUsers = 10 MaxShellRunTime = 2147483647 MaxProcessesPerShell = 25 MaxMemoryPerShellMB = 1024 MaxShellsPerUser = 30 </code></pre> <p><strong>My code</strong>:</p> <pre><code>import winrm import re import base64 import os import logging from winrm import Response logging.getLogger('urllib3').setLevel(logging.CRITICAL) logging.getLogger('spnego').setLevel(logging.CRITICAL) log = logging.getLogger('') class WinRmConnection: def __init__(self, hostname='', username='', password='', connect_now=True, **kwargs): self.__rdc = None 
self.hostname = hostname self.username = username self.password = password self.transport = kwargs.get('transport', 'ntlm') self.port = kwargs.get('port', 5985) if connect_now: self.connect(hostname=f'{hostname}:{self.port}', username=username, password=password, transport=self.transport) def connect(self, **kwargs): hostname = kwargs.get('hostname', self.hostname) username = kwargs.get('username', self.username) password = kwargs.get('password', self.password) transport = kwargs.get('transport', self.transport) try: assert hostname password = password except: raise Exception('Connection params error.') self.__rdc = winrm.Session( hostname, auth=(username, password), transport=transport ) def close(self): try: self.__rdc.close() except: pass def cmd(self, cmd, ps=False, raise_err=False, retry=3): if not cmd: return '' log.debug('Execute %scommand: %s' % ('' if not ps else 'PowerShell ', cmd)) try: result = self.__rdc.run_ps(cmd) if ps else self.__rdc.run_cmd(cmd) stdout = result.std_out stderr = result.std_err try: out = stdout.decode('utf-8') err = stderr.decode('utf-8') except UnicodeDecodeError: out = stdout.decode('gbk') err = stderr.decode('gbk') if raise_err and (stderr and err): raise Exception(err) if result.status_code: out = err except Exception as e: log.exception('Error occurred when executing [%s]: %s' % (cmd, repr(e))) if raise_err: raise e return '' else: log.debug('Command output: %s' % out) reg = re.compile(r'^\*') return reg.sub('', out.strip()) def download(self, remote_path, local_path, overwrite=True): is_dir_src = 0 remote_path = remote_path.rstrip(' \t\\/') if self.cmd(f'Test-Path {remote_path} -PathType Container', ps=True) == 'True': is_dir_src = 1 log.debug(f'Prepare to download ' + ('directory ' if is_dir_src else '') + f'{remote_path} from server.') target = '' append_name = remote_path.split('\\')[-1] if is_dir_src: # downloading folder local_path = os.path.join(local_path, append_name) if not os.path.exists(local_path): 
os.makedirs(local_path) elif not overwrite: raise Exception('Destination path has been exists already.') target += local_path else: # downloading file if not os.path.exists(local_path): # treat `local_path` as a folder if it does not exists os.makedirs(local_path) target += os.path.join(local_path, append_name) elif not overwrite: raise Exception('Destination path has been exists already.') elif os.path.isdir(local_path): target += os.path.join(local_path, append_name) else: target += local_path result = 1 if not is_dir_src: result = int(self._do_get_file(remote_path, target)) if not result: log.debug(f'Failed while downloading {remote_path}.') else: obj = self.cmd(f'Get-ChildItem -LiteralPath {remote_path} -Name', ps=True) if obj: for o in obj.splitlines(): result &amp;= bool(self.download(f'{remote_path}\\{o}', target)) log.debug(f'Finish downloading {remote_path} to {target}.') return target if result else None def _do_get_file(self, remote_path, local_path, buffer_size=2 ** 19): # Refer to: https://github.com/ansible/ansible/blob/devel/lib/ansible/plugins/connection/winrm.py # 0.5MB chunks by default out_file = None err = 0 try: offset = 0 while True: command_id = None try: script = ''' $path = '%(path)s' $buffer_size = %(buffer_size)d $offset = %(offset)d $stream = New-Object -TypeName IO.FileStream($path, [IO.FileMode]::Open, [IO.FileAccess]::Read, [IO.FileShare]::ReadWrite) $stream.Seek($offset, [System.IO.SeekOrigin]::Begin) &gt; $null $buffer = New-Object -TypeName byte[] $buffer_size $bytes_read = $stream.Read($buffer, 0, $buffer_size) if ($bytes_read -gt 0) { $bytes = $buffer[0..($bytes_read - 1)] [System.Convert]::ToBase64String($bytes) } $stream.Close() &gt; $null ''' % dict(buffer_size=buffer_size, path=remote_path, offset=offset) script = '\n'.join([x.strip() for x in script.splitlines() if x.strip()]) encoded_ps = base64.b64encode(script.encode('utf_16_le')).decode('utf-8') shell_id = self.__rdc.protocol.open_shell(codepage=65001) data = None try: 
command_id = self.__rdc.protocol.run_command(shell_id, 'PowerShell', ('-EncodedCommand', encoded_ps)) resptuple = self.__rdc.protocol.get_command_output(shell_id, command_id) result = Response(tuple(v.decode('utf-8', 'replace') if isinstance(v, bytes) else v for v in resptuple)) if result.status_code != 0: raise Exception(result.std_err) data = base64.b64decode(result.std_out.strip()) self.__rdc.protocol.cleanup_command(shell_id, command_id) self.__rdc.protocol.close_shell(shell_id) except Exception as e: log.exception(e) log.error(f'Error occurred when downloading to `{local_path}`.') err = 1 if command_id: self.__rdc.protocol.cleanup_command(shell_id, command_id) self.__rdc.protocol.close_shell(shell_id) else: if data is None: break else: if not out_file: out_file = open(local_path, 'wb') out_file.write(data) if len(data) &lt; buffer_size: break offset += len(data) log.debug('%d bytes done.' % offset) except Exception as e: log.exception(e) log.error(f'Error occurred when downloading to `{local_path}`.') err = 1 if command_id: try: self.__rdc.protocol.cleanup_command(shell_id, command_id) except: pass self.__rdc.protocol.close_shell(shell_id) break finally: if out_file: out_file.close() return not err </code></pre> <p><strong>Run</strong>:</p> <pre><code> params = { 'hostname': '', # remote IP address 'username': '', # username of remote server to login 'password': '' # password of remote server to login } remote = WinRmConnection(**params) remote.download(r'e:\temp\1.txt', r'd:\work') remote.close() </code></pre>
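One structural reason this approach is slow, independent of any bug in the code above: every 512 KiB chunk is round-tripped through base64 (roughly 33% size inflation), and each chunk also opens a fresh WinRM shell and spawns a new PowerShell process on the server. The encoding overhead alone is easy to quantify:

```python
import base64

buffer_size = 2 ** 19              # the 0.5 MiB chunk size used above
chunk = b"\x00" * buffer_size
encoded = base64.b64encode(chunk)

# Base64 emits 4 output bytes for every 3 input bytes:
assert len(encoded) == 4 * ((buffer_size + 2) // 3)
overhead = len(encoded) / buffer_size - 1
print(f"per-chunk wire overhead: {overhead:.0%}")
```

Remote Desktop's file copy uses a binary channel with none of this per-chunk shell setup, which is consistent with the order-of-magnitude throughput difference described. Reusing a single shell across chunks and enlarging the buffer would help, but the base64 cost is inherent to WinRM's XML transport.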
<python><remote-access><file-transfer><winrm>
2024-08-22 10:25:40
0
920
vassiliev
78,900,776
1,658,617
Convert xml.etree.ElementTree.Element to lxml.Element
<p>I'm developing a package that can be used by both users of lxml and the default xml package. At the moment I have a small try/except for the lxml import, but sometimes the user uses functions like <code>dump()</code> from the default <code>xml.etree</code>, which don't support the <code>lxml.Element</code>.</p> <p>How can I convert between an lxml Element and a default Element in an efficient way, without passing through a string or doing custom recursive processing?</p>
<python><xml><lxml><elementtree>
2024-08-22 09:27:14
1
27,490
Bharel
78,900,629
1,627,466
Erroneous pandas rolling results with time window in grouped by dataframe imported from BigQuery
<p>I would like to preface this by apologizing for the lack of reproducibility of my question, because if I convert my dataframe to a dictionary and turn that into a dataframe again I am not getting the issue.</p> <p>Nevertheless, this is the query I am using on BigQuery:</p> <pre class="lang-sql prettyprint-override"><code>SELECT published_at, from_author_id, text FROM `project.message.message` </code></pre> <p>I then turn it into a dataframe using</p> <pre><code>client = bigquery.Client(location=&quot;europe-west1&quot;, project=&quot;project&quot;) df = client.query(sql).to_dataframe() </code></pre> <p>Now running the following gives me an erroneous output:</p> <pre><code>import pandas as pd #df['published_at'] = pd.to_datetime(df['published_at']) df = df.sort_values(by=['from_author_id', 'published_at']) df.groupby('from_author_id').rolling('3s', on='published_at')['text'].count() </code></pre> <p>Using <code>.to_datetime()</code> has no impact on the result of the rolling function</p> <pre><code>from_author_id published_at 0001fcf4-94f5-4e42-8444-0cb6c2870bdc 2024-08-19 18:28:50.197000+00:00 1.0 2024-08-19 18:33:26.837000+00:00 2.0 2024-08-19 18:33:42.960000+00:00 3.0 2024-08-19 18:33:57.083000+00:00 4.0 2024-08-19 18:34:18.863000+00:00 5.0 ...
fff7a574-a2fe-4eac-b7c6-d5de8dc5ff0c 2024-08-19 16:26:24.252000+00:00 6.0 2024-08-19 16:32:40.697000+00:00 7.0 2024-08-19 16:32:42.013000+00:00 8.0 2024-08-19 18:09:03.469000+00:00 1.0 2024-08-19 18:09:04.979000+00:00 2.0 </code></pre> <p>As you can see there is more than 3 seconds between each of the messages of the first author and so the rolling count should return 1.</p> <p>Interestingly this function does produce the desired output:</p> <pre><code>def compute_correct_rolling_count(df, window_seconds=3): msg_counts = [] for _, group_df in df.groupby('from_author_id'): count_list = [] for i in range(len(group_df)): start_time = group_df.iloc[i]['published_at'] - pd.Timedelta(seconds=window_seconds) count = group_df[(group_df['published_at'] &gt; start_time) &amp; (group_df['published_at'] &lt;= group_df.iloc[i]['published_at'])].shape[0] count_list.append(count) msg_counts.extend(count_list) return msg_counts # Compute the rolling count within a 3-second window for each author df['msg_count_last_3secs'] = compute_correct_rolling_count(df, window_seconds=3) </code></pre> <p>Schema for table project.message.message: published_at (TIMESTAMP) from_author_id (STRING) text (STRING) other fields</p> <p>Additionally, Default rounding mode ROUNDING_MODE_UNSPECIFIED Partitioned by DAY Partitioned on field published_at</p> <p>I too am using google-cloud-bigquery 3.25.0</p>
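Whatever the root cause of the pandas discrepancy, the O(nΒ²) fallback in `compute_correct_rolling_count` can be replaced by a linear two-pointer sweep. Here is a framework-free sketch of the intended `(t - window, t]` count over one already-sorted group, with timestamps as plain seconds:

```python
def rolling_count(times, window):
    # times: sorted event timestamps (seconds); for each event, count the
    # events whose timestamp falls in the half-open window (t - window, t].
    counts, left = [], 0
    for right, t in enumerate(times):
        while times[left] <= t - window:
            left += 1                 # slide the window's left edge forward
        counts.append(right - left + 1)
    return counts
```

Because `left` only ever moves forward, each group is processed in O(n) instead of the quadratic per-row filtering in the original helper; the same half-open boundary matches pandas' default `closed='right'` for time-based windows.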
<python><pandas><dataframe><google-bigquery>
2024-08-22 08:53:33
1
423
user1627466
78,900,584
101,152
pip3 tries to write to /var/log even with venv, and fails
<p>I follow these steps, and they always fail.</p> <pre><code>mkdir python-mess cd python-mess echo toml &gt; requirements.txt which python3 # /usr/bin/python3 which pip3 # /usr/bin/pip3 python3 -m venv .venv source .venv/bin/activate # (.venv is added to prompt) which pip3 # /home/karel.bilek/python-mess/.venv/bin/pip3 which python3 # /home/karel.bilek/python-mess/.venv/bin/python3 pip3 install -r requirements.txt </code></pre> <p>The last step <em>always</em> fails, with</p> <p><code>PermissionError: [Errno 13] Permission denied: '/var/log/python_pip.log'</code></p> <p>Why? What else can I do?</p> <p>I am using Debian Bookworm, plain Python 3, plain bash. I am not installing anything weird, just the <code>toml</code> package! There is nothing else in the folder except for requirements.txt.</p>
<python><python-3.x><pip>
2024-08-22 08:44:02
1
38,046
Karel BΓ­lek
78,900,482
5,269,892
Pandas replace multiple substring patterns via dictionary
<p>Suppose we want to replace multiple substrings via <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.replace.html" rel="nofollow noreferrer">pd.Series.replace</a> or <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.replace.html" rel="nofollow noreferrer">pd.DataFrame.replace</a> by passing a dictionary to the <code>to_replace</code> argument</p> <ul> <li>What happens if multiple patterns (the dictionary keys) match in the string?</li> <li>Are applicable replacements performed at once or consecutively?</li> <li>If the latter, in which order are the replacements performed (e.g. the order the pattern matches occur in the string)?</li> <li>What happens if multiple patterns match substrings at the same position in the string (which can happen with regexes)?</li> <li>What happens if substrings in the replacement values match the patterns themselves?</li> </ul> <p><strong>Example:</strong></p> <p>Replace</p> <ul> <li>'nan' --&gt; 'miss'</li> <li>'nan.*\b' --&gt; 'nanword'</li> <li>'na' --&gt; 'no'</li> <li>'miss' --&gt; 'mrs'</li> <li>'bana' --&gt; 'eric'</li> </ul> <p>in the string <em>'Nana likes bananas and ananas'</em>.</p>
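Independent of what pandas itself does (worth verifying against its docs), here is a plain-`re` illustration of why the answers to these questions matter: *consecutive* replacement is order-dependent, because later patterns can match text produced by earlier replacements, while *simultaneous* replacement via one alternation never re-matches its own output:

```python
import re

mapping = {"nan": "miss", "na": "no", "miss": "mrs"}

# Consecutive: each pattern is applied to the result of the previous one,
# in the dict's insertion order.
s = "Nana likes bananas"
for pattern, repl in mapping.items():
    s = re.sub(pattern, repl, s)

# Simultaneous: one alternation; at each position the leftmost listed
# alternative wins ('nan' is listed before its prefix 'na' on purpose),
# and replacement text is never matched again.
combined = re.compile("|".join(mapping))
simultaneous = combined.sub(lambda m: mapping[m.group(0)], "Nana likes bananas")
```

The consecutive pass yields `'Nano likes bamrsas'` — the `'miss'` produced by the first pattern is later rewritten to `'mrs'` — whereas the simultaneous pass yields `'Nano likes bamissas'`.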
<python><pandas><replace>
2024-08-22 08:21:00
1
1,314
silence_of_the_lambdas
78,900,439
16,611,809
How to show two outputs of the same function without running it twice?
<p>I have this function that iterates over my data and generates two outputs (in the example, the function <code>check()</code>. Now I want to show both outputs on different cards. AFAIK the card ID has to be the same as the function that generates the output. In my example, the function <code>check()</code> is run twice, which is very inefficient (in my real data this is the function generating the most computational load). Is there a way to run this function only once, but still using both outputs on different cards in shiny for Python?</p> <p>MRE:</p> <pre><code>from shiny import App, render, ui, reactive from pathlib import Path app_ui = ui.page_fillable( ui.layout_sidebar( ui.sidebar( ui.input_text(&quot;numbers&quot;, &quot;Number you want to check&quot;, value=&quot;1,2,3,4,5,6,7,8&quot;), ui.input_action_button(&quot;check_numbers&quot;, &quot;Check your numbers&quot;) ), ui.output_ui(&quot;numbers_frames&quot;) ) ) def server(input, output, session): @render.ui @reactive.event(input.check_numbers) def numbers_frames(): return ui.TagList( ui.layout_columns( ui.card( ui.card_header(&quot;even&quot;), ui.output_text(&quot;even&quot;), ), ui.card( ui.card_header(&quot;odd&quot;), ui.output_text(&quot;odd&quot;), ), ) ) @reactive.event(input.check_numbers) def check(): even = [] odd = [] for number in input.numbers().split(','): number_int = int(number.strip()) if number_int % 2 == 0: even.append(str(number_int)) else: odd.append(str(number_int)) print(&quot;check() has been exectuted&quot;) return (even, odd) @output @render.text def even(): even_output = check()[0] return ','.join(even_output) @output @render.text def odd(): odd_output = check()[1] return ','.join(odd_output) src_dir = Path(__file__).parent / &quot;src&quot; app = App(app_ui, server, static_assets=src_dir) </code></pre>
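Shiny for Python addresses exactly this with a cached reactive calculation (`@reactive.calc`, or `@reactive.Calc` in older releases — check the docs for your version): both text outputs read it, but the body runs once per input change. The underlying idea is plain memoization, illustrated here without the framework (`lru_cache` is *not* the Shiny API, just the same mechanism):

```python
from functools import lru_cache

calls = {"n": 0}  # instrument how often the expensive body actually runs

@lru_cache(maxsize=1)
def check(numbers: str):
    calls["n"] += 1
    values = [int(x.strip()) for x in numbers.split(",")]
    even = [str(v) for v in values if v % 2 == 0]
    odd = [str(v) for v in values if v % 2 != 0]
    return even, odd

# Two independent consumers, one execution:
even_text = ",".join(check("1,2,3,4")[0])
odd_text = ",".join(check("1,2,3,4")[1])
assert calls["n"] == 1
```

In the app, replacing the `@reactive.event`-decorated `check` with a cached reactive calculation lets `even` and `odd` both call it while the loop over `input.numbers()` executes only once per button press.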
<python><py-shiny>
2024-08-22 08:10:30
1
627
gernophil
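In recent py-shiny the usual fix is to move the shared work into a reactive calculation that both text outputs read (the decorator is `@reactive.calc` in current versions, `@reactive.Calc` in older ones; treat those names as assumptions about your install). A calc runs once and caches its value for every consumer. Stripped of the framework, the idea is just memoizing one computation:

```python
from functools import lru_cache

calls = {"check": 0}

@lru_cache(maxsize=1)
def check(numbers: str) -> tuple[tuple[str, ...], tuple[str, ...]]:
    # The expensive shared computation: runs once per distinct input.
    calls["check"] += 1
    even, odd = [], []
    for number in numbers.split(","):
        n = int(number.strip())
        (even if n % 2 == 0 else odd).append(str(n))
    return tuple(even), tuple(odd)

def even_output(numbers: str) -> str:
    return ",".join(check(numbers)[0])

def odd_output(numbers: str) -> str:
    return ",".join(check(numbers)[1])

print(even_output("1,2,3,4"))  # both outputs share a single check() call
print(odd_output("1,2,3,4"))
```

Applied inside `server()`, the same idea (a cached/reactive `check`) should make the computation run once per button press while both `even` and `odd` read its result.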
78,900,424
12,945,785
How to calculate the last 3-month, 6-month, ... performance from price?
<p><strong>Disclaimer.</strong> I am new to polars.</p> <p>I have a dataframe that is generated with something like this with polars:</p> <pre><code>import polars as pl import numpy as np from datetime import datetime # Create a range of dates date_ranges = pl.date_range(start=datetime(2000, 1, 1), end=datetime(2025, 12, 31), interval=&quot;1d&quot;, eager=True) # Generate random data data = { &quot;date&quot;: ['2020-01-01', '2020-02-01', '2020-03-01', '2020-04-01', '2020-05-01', '2020-06-01', '2020-07-01', '2020-08-01'], &quot;prix1&quot;: [50,55, 50, 60, 70, 80, 90, 100], &quot;prix2&quot;: [60,65, 60, 70, 80, 90, 100, 110], &quot;prix3&quot;: [70,75, 70, 80, 90, 100, 110, 120], &quot;prix4&quot;: [80,85, 80, 90, 100, 110, 120, 130], &quot;prix5&quot;: [90,95, 90, 100, 110, 120, 130, 140], } # Create the DataFrame df = pl.DataFrame(data) </code></pre> <p>I would like to get the last 3-month, 6-month and 1-year performance of the prices, i.e. to get a dataframe that looks like this:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th></th> <th>Prix 1</th> <th>Prix 2</th> <th>Prix 3</th> </tr> </thead> <tbody> <tr> <td>Perf 3 Month</td> <td>=100/70-1</td> <td>=110/80-1</td> <td>=120/90-1</td> </tr> <tr> <td>Perf 6 Month</td> <td>=100/55-1</td> <td>=110/65-1</td> <td>=120/75-1</td> </tr> </tbody> </table></div> <p>Or something like that. It does not matter if the output is transposed.</p>
<python><dataframe><python-polars>
2024-08-22 08:06:15
4
315
Jacques Tebeka
78,900,274
17,580,381
scipy 1.14.1 breaks statsmodels 0.14.2
<p>After installation of scipy 1.14.1 a previously viable Python program now fails.</p> <p>The original program is more complex so here's a MRE:</p> <pre><code>import pandas as pd import plotly.express as px if __name__ == &quot;__main__&quot;: data = { &quot;Date&quot;: [0, 7, 14, 21, 28], &quot;Value&quot;: [100, 110, 120, 115, 122] } df = pd.DataFrame(data) px.scatter(df, x=&quot;Date&quot;, y=&quot;Value&quot;, trendline=&quot;ols&quot;) </code></pre> <p>Here's the stack trace (username obfuscated):</p> <pre><code>Traceback (most recent call last): File &quot;/Users/****/Python/Aug22.py&quot;, line 12, in &lt;module&gt; tl = px.scatter( ^^^^^^^^^^^ File &quot;/Users/****/venv/lib/python3.12/site-packages/plotly/express/_chart_types.py&quot;, line 66, in scatter return make_figure(args=locals(), constructor=go.Scatter) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/****/venv/lib/python3.12/site-packages/plotly/express/_core.py&quot;, line 2267, in make_figure patch, fit_results = make_trace_kwargs( ^^^^^^^^^^^^^^^^^^ File &quot;/Users/****/venv/lib/python3.12/site-packages/plotly/express/_core.py&quot;, line 361, in make_trace_kwargs y_out, hover_header, fit_results = trendline_function( ^^^^^^^^^^^^^^^^^^^ File &quot;/Users/****/venv/lib/python3.12/site-packages/plotly/express/trendline_functions/__init__.py&quot;, line 43, in ols import statsmodels.api as sm File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/api.py&quot;, line 136, in &lt;module&gt; from .regression.recursive_ls import RecursiveLS File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/regression/recursive_ls.py&quot;, line 14, in &lt;module&gt; from statsmodels.tsa.statespace.mlemodel import ( File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/mlemodel.py&quot;, line 33, in &lt;module&gt; from .simulation_smoother import SimulationSmoother File 
&quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/simulation_smoother.py&quot;, line 11, in &lt;module&gt; from .kalman_smoother import KalmanSmoother File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/kalman_smoother.py&quot;, line 11, in &lt;module&gt; from statsmodels.tsa.statespace.representation import OptionWrapper File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/representation.py&quot;, line 10, in &lt;module&gt; from .tools import ( File &quot;/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/tools.py&quot;, line 14, in &lt;module&gt; from . import (_initialization, _representation, _kalman_filter, File &quot;statsmodels/tsa/statespace/_initialization.pyx&quot;, line 1, in init statsmodels.tsa.statespace._initialization ImportError: dlopen(/Users/****/venv/lib/python3.12/site-packages/statsmodels/tsa/statespace/_representation.cpython-312-darwin.so, 0x0002): symbol not found in flat namespace '_npy_cabs' </code></pre> <p>If I revert to scipy 1.14.0 this error does not occur.</p> <p>Is there anything I can do that would allow me to run with scipy 1.14.1?</p> <p><strong>Platform</strong>:</p> <pre><code>macOS 14.6.1 python 3.12.5 Apple M2 </code></pre>
<python><macos><scipy>
2024-08-22 07:32:27
2
28,997
Ramrab
78,900,270
21,040,543
This is a question about tkinter's <Configure> binding
<p>I am a Windows 11 user. I wrote this code with the idea of a canvas that changes size responsively when the window size changes. However, when the program started, the window and canvas kept growing together. I think it's because of some relationship between tkinter's window creation process and <code>&lt;Configure&gt;</code>, but I'm curious as to why this phenomenon occurs.</p> <pre><code>import tkinter as tk class app(tk.Tk): def __init__(self): super().__init__() self.canvas = tk.Canvas(self, bg=&quot;red&quot;) self.canvas.pack(fill=tk.BOTH, expand=True) # self.geometry(&quot;600x400&quot;) self.bind(&quot;&lt;Configure&gt;&quot;, self.config_canvas_size) def config_canvas_size(self, event=None): self.canvas.config(width=self.winfo_width(), height=self.winfo_height()) if __name__ == &quot;__main__&quot;: root = app() root.mainloop() </code></pre> <p>If <code>self.geometry(&quot;600x400&quot;)</code> is uncommented, it works as desired; the runaway growth only happens when no explicit geometry is set.</p>
<python><tkinter>
2024-08-22 07:31:03
1
344
kimhyunju
78,899,693
736,662
Unix timestamp +1 hour replacement
<p>I defined two helper methods in Python for setting start-time and end-time and want them converted into unix timestamps (epoch):</p> <pre><code>def set_epoch_start(): unix_epoch = datetime.utcfromtimestamp(0).replace(tzinfo=timezone.utc) now = datetime.now(tz=timezone.utc) new_time = now.replace(hour=17, minute=0, second=0, microsecond=0) seconds = (new_time - unix_epoch).total_seconds() return int(seconds) def set_epoch_end(): unix_epoch = datetime.utcfromtimestamp(0).replace(tzinfo=timezone.utc) now = datetime.now(tz=timezone.utc) new_time = now.replace(hour=23, minute=0, second=0, microsecond=0) seconds = (new_time - unix_epoch).total_seconds() return int(seconds) </code></pre> <p>As of now, I have hard-coded the values for hours (17 and 23), but I want the replace method for start-time to add +1 hour and the replace method for end-time to add 2 hours.</p> <p>Searching for help on this I came across <code>timedelta</code>, e.g. <code>timedelta(hours=2)</code>, but how can I incorporate timedelta into my two functions above?</p>
<python><epoch><timedelta>
2024-08-22 03:31:34
1
1,003
Magnus Jensen
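A `timedelta` can be added directly to the result of `replace`, and an aware datetime's `.timestamp()` already returns epoch seconds, so the manual subtraction from the unix epoch is unnecessary. A sketch merging both helpers into one (`epoch_at` is a made-up name):

```python
from datetime import datetime, timedelta, timezone

def epoch_at(now: datetime, hour: int, extra: timedelta = timedelta(0)) -> int:
    # Today (per `now`, UTC) at `hour`:00, shifted by `extra`, as a unix timestamp.
    new_time = now.replace(hour=hour, minute=0, second=0, microsecond=0) + extra
    return int(new_time.timestamp())

now = datetime.now(tz=timezone.utc)
start = epoch_at(now, 17, timedelta(hours=1))  # 17:00 UTC plus 1 hour
end = epoch_at(now, 23, timedelta(hours=2))    # 23:00 UTC plus 2 hours (next day)
print(start, end)
```

Adding the `timedelta` after `replace` also handles day rollover correctly (23:00 + 2 hours lands on the next date), which `replace(hour=...)` alone cannot do.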
78,899,132
375,432
Use Ibis to return aggregates of every numeric column in a table
<p>I want to use <a href="https://ibis-project.org" rel="nofollow noreferrer">Ibis</a> to return the mean and standard deviation of each floating point numeric column in a table. How do I do that?</p>
<python><ibis>
2024-08-21 21:59:22
1
763
ianmcook
78,898,865
13,135,901
Updating object attribute from database in Django
<p>Let's say I have a model representing a task:</p> <pre><code>class Task(models.Model): on_status = models.BooleanField() def run(self): if self.on_status: # do stuff </code></pre> <p>I run this task with Celery Beat and I have a dashboard running on Gunicorn both using the same database.</p> <pre><code>app = Celery(&quot;app&quot;) app.conf.beat_schedule = { &quot;run_task&quot;: { &quot;task&quot;: &quot;run_tasks&quot;, &quot;schedule&quot;: 5.0, &quot;options&quot;: { &quot;expires&quot;: 6.0, }, }, } tasks = Task.objects.all() @app.task def run_tasks(): for task in tasks: task.run() </code></pre> <p>I can change <code>on_status</code> from my dashboard, but then I need to update <code>self.on_status</code> of the instance inside Celery Beat process. Is there a command to update attribute value from database or is there a different approach?</p>
<python><django><celery>
2024-08-21 20:20:19
1
491
Viktor
78,898,805
11,159,734
docker-compose -e flag does not override default value
<p>I have created the following docker-compose.yml:</p> <pre><code>version: &quot;3.8&quot; services: app: image: my-image build: context: . dockerfile: Dockerfile working_dir: /src command: python ${SCRIPT} environment: - .env app-dev: extends: service: app volumes: - ./src:/src </code></pre> <p>My <code>.env</code> file has one line:</p> <pre><code>SCRIPT=./tutorial_00/docker_test.py </code></pre> <p>I will now do the following steps:</p> <ol> <li>Build the docker image: <code>docker-compose up --build app</code> This will build the image and run the docker_test.py script</li> <li>Now I want to run a different python script using this command: <code>docker-compose run --rm -e SCRIPT=tutorial_01/main.py app</code> This should override the default value for the SCRIPT variable which was retrieved from the .env file with <code>tutorial_01/main.py</code>. However it ignores this flag and still runs the default script. I don't really know why. It seems like the parameter is completely ignored.</li> </ol> <p>If I run the following command <code>docker-compose run --rm -e SCRIPT=./tutorial_01/main.py app env</code> it shows me that the variable was set successfully:</p> <pre class="lang-bash prettyprint-override"><code>PATH=/venv/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin HOSTNAME=e420aa256ae69 TERM=xterm SCRIPT=./tutorial_01/main.py LANG=C.UTF-8 GPG_KEY=A035C8C19219BA821E8D684696D PYTHON_VERSION=3.11.9 PYTHON_PIP_VERSION=24.0 PYTHON_SETUPTOOLS_VERSION=65.5.1 </code></pre> <p>If I set <code>./tutorial_01/main.py</code> as the default value in .env it successfully runs this file instead. So the path is definitely valid.</p>
<python><docker>
2024-08-21 20:01:41
4
1,025
Daniel
78,898,776
145,504
How can a Python enum accept any value without raising ValueError
<p>Suppose I am implementing a network service that receives commands. I want to have an enumeration of these commands, so I can write something like:</p> <pre class="lang-py prettyprint-override"><code>class Command(enum.Enum): ECHO = 0x01 LOGIN = 0x02 SEND_FILE = 0x03 def handle_packet(p: bytes): command = Command(p[0]) if command == Command.ECHO: print(p[1:]) elif ... elif ... else: print('Ignoring unknown command', command) </code></pre> <p>If the client supports a newer version of this protocol than my server, they might send a command that I don't have listed in my enum. For example, if the client sends me <code>0x04</code>, the constructor <code>Command(p[0])</code> will throw a <code>ValueError</code>.</p> <p>Instead, I'd like <code>command = Command(0x04)</code> to create an instance where <code>command.value = 4</code> and it has a <code>name</code> generated on-the-fly like <code>COMMAND_04</code>.</p>
<python><enums>
2024-08-21 19:50:51
3
4,458
rgov
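The hook intended for this is `Enum._missing_`: when normal lookup fails, it can manufacture a pseudo-member on the fly instead of raising `ValueError`. This is a known recipe built on enum internals (`object.__new__` plus setting `_name_`/`_value_`), so treat it as a sketch to verify on your Python version:

```python
import enum

class Command(enum.Enum):
    ECHO = 0x01
    LOGIN = 0x02
    SEND_FILE = 0x03

    @classmethod
    def _missing_(cls, value):
        # Build an ad-hoc member for unknown command bytes instead of raising.
        member = object.__new__(cls)
        member._name_ = f"COMMAND_{value:02X}"
        member._value_ = value
        return member

cmd = Command(0x04)
print(cmd.name, cmd.value)
assert Command(0x01) is Command.ECHO  # known values still resolve normally
```

Note that each call with an unknown value builds a fresh object, so compare by `.value` rather than identity for unknown commands (or cache the pseudo-member yourself if identity matters).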
78,898,497
769,933
Create Categorical series from physical values
<p>I want to create a categorical column, where each category has a descriptive name for self-documentation. I have a list of integers equivalent to the physical values in the categorical column, and I want to make the categorical column without creating an intermediate list of strings to pass to <code>pl.Series</code>.</p> <pre class="lang-py prettyprint-override"><code>import polars as pl dt = pl.Enum([&quot;0&quot;, &quot;1&quot;, &quot;2&quot;]) s1 = pl.Series([&quot;0&quot;, &quot;0&quot;, &quot;2&quot;, &quot;1&quot;], dtype=dt) physical = list(s1.to_physical()) print(f&quot;{physical=}&quot;) s2 = pl.Series([str(p) for p in physical], dtype=dt) assert s1.equals(s2) # turning physical to strings just to create the series which is stored as ints is a waste of compute power # how to construct a series from the physical values? s3 = pl.Series.from_physical(physical, dtype=dt) assert s1.equals(s3) </code></pre> <p>This prints</p> <pre class="lang-py prettyprint-override"><code>physical=[0, 0, 2, 1] </code></pre> <p>Then it errors because <code>Series.from_physical</code> doesn't exist. Is there a function like <code>from_physical</code> that would make this snippet run to completion without erroring on the final assertion?</p>
<python><python-polars>
2024-08-21 18:20:47
2
2,396
gggg
78,898,466
758,836
PyTorch Lightning Distributed Training Timeout error with NCCL Backend
<p>I'm using Pytorch Lightning to run a distributed training Python script using the <a href="https://lightning.ai/docs/pytorch/stable/accelerators/gpu_intermediate.html#distributed-data-parallel" rel="nofollow noreferrer">DDP</a>. I'm using a <code>DDPStrategy</code> to define the backend, a custom timeout and a custom cluster environment as a <a href="https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.plugins.environments.ClusterEnvironment.html" rel="nofollow noreferrer">ClusterEnvironment</a> class implementation</p> <p>My training code part looks like</p> <pre class="lang-py prettyprint-override"><code> strategy = DDPStrategy( cluster_environment=CustomEnvironment(), process_group_backend=&quot;nccl&quot;, timeout=CUSTOM_TIMEOUT, find_unused_parameters=True) # Initialize a trainer trainer = pl.Trainer(logger=logger, callbacks=[checkpoint_callback], max_epochs=hparams[&quot;epochs&quot;], devices=devices, accelerator=accelerator, strategy=strategy) </code></pre> <p>where <code>devices = 4</code> (because I have 4 gpus per node), <code>accelerator = &quot;gpu&quot;</code> and the class <code>CustomEnvironment</code> is defined as follows</p> <pre class="lang-py prettyprint-override"><code>from typing import Union,Any,Dict from pytorch_lightning.plugins.environments import ClusterEnvironment from pytorch_lightning.strategies.ddp import DDPStrategy from datetime import timedelta DEFAULT_TIMEOUT = timedelta(seconds=1800) CUSTOM_TIMEOUT = timedelta(seconds=3600) class CustomEnvironment(ClusterEnvironment): def __init__(self, num_nodes=2): super().__init__() self._num_nodes = num_nodes self._master_port = None self._world_size = None self._global_rank = None def creates_processes_externally(self): # Assuming PyTorch Lightning manages processes internally return False def detect(self): # Implement detection of nodes and processes if necessary log.debug(&quot;Detect method is called.&quot;) def global_rank(self): if self._global_rank is None: 
self._global_rank = int(os.getenv(&quot;RANK&quot;, 0)) log.debug(f&quot;GLOBAL_RANK: {self._global_rank}&quot;) return self._global_rank @property def main_address(self): return self.master_address() @property def main_port(self): return self.master_port() def set_global_rank(self, rank: int): self._global_rank = rank log.debug(f&quot;Set GLOBAL_RANK: {self._global_rank}&quot;) def set_world_size(self, world_size: int): self._world_size = world_size log.debug(f&quot;Set WORLD_SIZE: {self._world_size}&quot;) def master_address(self): MASTER_ADDR = os.getenv(&quot;MASTER_ADDR&quot;) log.debug(f&quot;MASTER_ADDR: {MASTER_ADDR}&quot;) return MASTER_ADDR def master_port(self): if self._master_port is None: self._master_port = os.getenv(&quot;MASTER_PORT&quot;) log.debug(f&quot;MASTER_PORT: {self._master_port}&quot;) return int(self._master_port) def world_size(self): if self._world_size is None: log.debug(&quot;WORLD_SIZE is not set.&quot;) return self._world_size def node_rank(self): MY_RANK = int(os.getenv(&quot;NODE_RANK&quot;, &quot;0&quot;)) log.debug(f&quot;NODE_RANK: {MY_RANK}&quot;) return int(MY_RANK) def local_rank(self) -&gt; int: LOCAL_RANK = int(os.getenv(&quot;LOCAL_RANK&quot;, &quot;0&quot;)) log.debug(f&quot;LOCAL_RANK: {LOCAL_RANK}&quot;) return LOCAL_RANK </code></pre> <p>On the master I get this configuration</p> <pre><code>DEBUG:train:NODE_RANK: 0 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:Set GLOBAL_RANK: 0 DEBUG:train:Set WORLD_SIZE: 8 DEBUG:train:GLOBAL_RANK: 0 GPU available: True (cuda), used: True TPU available: False, using: 0 TPU cores IPU available: False, using: 0 IPUs HPU available: False, using: 0 HPUs 2024-08-21 17:51:24 training on gpu with 4 gpu DEBUG:train:LOCAL_RANK: 0 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:NODE_RANK: 0 DEBUG:train:NODE_RANK: 0 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:Set GLOBAL_RANK: 0 DEBUG:train:Set WORLD_SIZE: 8 DEBUG:train:GLOBAL_RANK: 0 DEBUG:train:GLOBAL_RANK: 0 DEBUG:train:GLOBAL_RANK: 0 DEBUG:train:MASTER_ADDR: 
10.0.147.2 DEBUG:train:MASTER_PORT: 9001 Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/8 </code></pre> <p>while on the client I get</p> <pre><code>DEBUG:train:NODE_RANK: 1 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:Set GLOBAL_RANK: 4 DEBUG:train:Set WORLD_SIZE: 8 DEBUG:train:GLOBAL_RANK: 4 2024-08-21 17:51:24 training on gpu with 4 gpu DEBUG:train:LOCAL_RANK: 0 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:NODE_RANK: 1 DEBUG:train:NODE_RANK: 1 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:Set GLOBAL_RANK: 4 DEBUG:train:Set WORLD_SIZE: 8 DEBUG:train:GLOBAL_RANK: 4 DEBUG:train:GLOBAL_RANK: 4 DEBUG:train:GLOBAL_RANK: 4 DEBUG:train:MASTER_ADDR: 10.0.147.2 DEBUG:train:MASTER_PORT: 9001 Initializing distributed: GLOBAL_RANK: 4, MEMBER: 5/8 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:LOCAL_RANK: 0 DEBUG:train:GLOBAL_RANK: 4 </code></pre> <p>These are coherent with the env I set up for the <code>MASTER_ADDR</code>, <code>MASTER_PORT</code>, <code>NODE_RANK</code> and <code>WORLD_SIZE</code> <a href="https://lightning.ai/docs/pytorch/stable/clouds/cluster_intermediate_1.html" rel="nofollow noreferrer">required</a> minimal env, where in my configuration I have 4 GPUs x 2 nodes, hence WORLD_SIZE is set to 8.</p> <p>Additionally, I have manually set up NCCL envs for the network interfaces I have from <code>ipconfig</code> on the host and to disable P2P if any (The NCCL_P2P_DISABLE variable disables the peer to peer (P2P) transport, which uses CUDA direct access between GPUs, using NVLink or PCI).</p> <pre><code>export NCCL_SOCKET_IFNAME=eth0 export NCCL_P2P_DISABLE=1 </code></pre> <p>I have a timeout error with this configuration, despite the custom timeout I set. Specifically on the MASTER I get a <code>TCPStore</code> error</p> <pre><code> return TCPStore( torch.distributed.DistStoreError: Timed out after 1801 seconds waiting for clients. 2/4 clients joined. 
</code></pre> <p>and then on the CLIENT I get the disconnect error</p> <pre><code>[rank4]:[W821 14:18:47.728693991 socket.cpp:428] [c10d] While waitForInput, poolFD failed with (errno: 0 - Success). 2024-08-21 14:18:47 [4] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store-&gt;get('0') got error: Connection reset by peer ... torch.distributed.DistBackendError: [4] is setting up NCCL communicator and retrieving ncclUniqueId from [0] via c10d key-value store by key '0', but store-&gt;get('0') got error: Connection reset by peer </code></pre>
<python><pytorch><nvidia><pytorch-lightning><nccl>
2024-08-21 18:08:32
0
16,321
loretoparisi
78,898,460
1,549,983
Make type checker accept multiple types for a field as valid but have pydantic always coerce to one type
<p>I'm building a wrapper for another program I'm using that takes integer-valued flags as input for setting various options. To make the interface a bit more intuitive, I'd like to have a way to automatically validate either a keyword string or the integer flag value and associate it with any other data that might be necessary for running the program with that option.</p> <p>I've implemented a custom <code>Enum</code> class that has both a string value <code>value</code> and an integer <code>flag</code> value, and an optional <code>data</code> container that can hold any required data for the particular option. I've also implemented a <code>BeforeValidator</code> to help pydantic automatically coerce <code>str</code> and <code>int</code> types to the custom class. Here's a trimmed-down version of the code I have:</p> <pre class="lang-py prettyprint-override"><code>from functools import partial from typing import Any, Annotated, TypeVar, List, Type from enum import Enum, EnumMeta from pydantic import BaseModel, Field from pydantic.functional_validators import BeforeValidator class CustomEnumMeta(EnumMeta): &quot;&quot;&quot;Matches an Enum member based on string value or integer value&quot;&quot;&quot; def __getitem__(self, name: Any) -&gt; Any: # Get the lists of all the available names, values and flags so we can run our comparison names: List[str] = list(self.__members__.keys()) values: List[Any] = list(a.value for a in self.__members__.values()) flags: List[int] = list(a.flag for a in self.__members__.values()) try: name = int(name) except (ValueError, TypeError): pass if name in values: name = names[values.index(name)] elif name in flags: name = names[flags.index(name)] if not isinstance(name, str): raise ValueError(f&quot;{name!s} is not an enumerated value of {type(self)!s}&quot;) return super().__getitem__(name) class FlagDataEnum(Enum, metaclass=CustomEnumMeta): &quot;&quot;&quot;Adds data storage and an integer flag as well as a string name to Enum class&quot;&quot;&quot; def __init__(self, desc: Any, flag: int, *args: Any) -&gt; None: 
self._value_ = desc self.flag = flag if len(args) == 1: args = args[0] self.data = args # Generic &quot;Enum&quot; subclass type E = TypeVar(&quot;E&quot;, bound=Enum) def _coerce_value_to_data_enum(value: Any, enum_type: Type[E]) -&gt; E: &quot;&quot;&quot;Tries to convert a value to an instance of the given enum_type&quot;&quot;&quot; if isinstance(value, enum_type): return value else: return enum_type[value] class Solvers(FlagDataEnum): runge_kutta34 = &quot;Runge-Kutta 3/4&quot;, 1, {'predictor':3,'corrector':4} runge_kutta78 = &quot;Runge-Kutta 7/8&quot;, 2, {'predictor':7,'corrector':8} adams_bashforth = &quot;Adams-Bashforth&quot;, 3 central_difference = &quot;Central Difference&quot;, 4 CoercedEnum = Annotated[ Solvers, BeforeValidator(partial(_coerce_value_to_data_enum,enum_type = Solvers)) ] class CalculationOptions(BaseModel): solver: CoercedEnum = Field(default=Solvers.runge_kutta34) init_conditions: List[int] = Field(default_factory=list) </code></pre> <p>This works <em>almost</em> exactly as I want it to. When I initialize an instance of <code>CalculationOptions</code>, I can pass an <code>int</code> or a <code>str</code> to <code>CalculationOptions.solver</code> and pydantic will happily coerce it to an instance of <code>Solvers</code> and raise a <code>ValidationError</code> if the string or integer doesn't match the values of any of the <code>Solvers</code> members.</p> <p>But my type checker is not happy with this situation. 
It complains if I try to assign an integer or string to <code>CalculationOptions.solver</code>:</p> <pre class="lang-py prettyprint-override"><code>my_options = CalculationOptions(solver = 2) # &quot;int is not compatible with Solver&quot; </code></pre> <p>I can get the error to go away if I change the type annotation to include <code>str</code> and <code>int</code> as a union with <code>Solver</code>:</p> <pre class="lang-py prettyprint-override"><code>CoercedEnum = Annotated[ Solvers | str | int, BeforeValidator(partial(_coerce_value_to_data_enum,enum_type = Solvers)) ] </code></pre> <p>But this introduces further type checking errors when I try to <em>use</em> the <code>CalculationOptions.solver</code> value, because technically it could also be a <code>str</code> or an <code>int</code>, which don't have the attributes &quot;flag&quot; or &quot;value&quot;:</p> <pre class="lang-py prettyprint-override"><code>assert my_options.solver.flag == 2 # &quot;int and str do not have attribute 'flag'&quot; </code></pre> <p>Is there a way to tell python that it's OK to <em>assign</em> a <code>str</code> or an <code>int</code> to a model field of type <code>CoercedEnum</code>, but that it should never expect the value of that field to <em>be</em> a <code>str</code> or <code>int</code>? Am I just using pydantic in a way it wasn't designed to be used?</p>
<python><python-typing><pydantic>
2024-08-21 18:06:44
0
391
Beezum
78,898,299
375,666
Converting Reality Capture Transformation matrix into another coordinate system
<p>I'm using Reality Capture and I have exported a set of 20 cameras in the scene into XMP, and I have the list of coordinate systems.</p> <p>I have a reference coordinate system that I want the Reality Capture output to match, i.e. to convert to its coordinate system.</p> <p>I have tried everything so that I can match the two systems, with no success.</p> <p>This is for example the Reality Capture XMP Camera 0</p> <pre class="lang-xml prettyprint-override"><code>&lt;xcr:Rotation&gt;0.9561484049495719 0.2924655797321343 -0.015624096272626363 0.00017677908012416274 -0.053922216042367715 -0.9985451233500854 -0.29288256428395754 0.9547545649479907 -0.05160934265640921&lt;/xcr:Rotation&gt; &lt;xcr:Position&gt;2.75445788094206 -10.557416536967201 15.257369477238798&lt;/xcr:Position&gt; </code></pre> <p>The math of Reality Capture is listed here: <a href="https://dev.epicgames.com/community/learning/knowledge-base/vzwB/capturing-reality-realitycapture-xmp-camera-math" rel="nofollow noreferrer">https://dev.epicgames.com/community/learning/knowledge-base/vzwB/capturing-reality-realitycapture-xmp-camera-math</a>, so I calculated t = -dot(Rotation, Position) as the translation. 
Still I have great mismatch between the two systems.</p> <p>here is my attempt:</p> <pre><code>import numpy as np from scipy.spatial.transform import Rotation # RealityCapture rotation matrix and translation vector realitycapture_rotation = [ 0.9561484049495719, 0.2924655797321343, -0.015624096272626363, 0.00017677908012416274, -0.053922216042367715, -0.9985451233500854, -0.29288256428395754, 0.9547545649479907, -0.05160934265640921 ] realitycapture_translation = [2.75445788094206, -10.557416536967201, 15.257369477238798] # Convert to a 3x3 matrix rotation_matrix = np.array(realitycapture_rotation).reshape(3, 3) # Correct the rotation matrix and translation vector rotation_matrix_corrected = rotation_matrix[[2, 0, 1], :] translation_vector_corrected = np.array(realitycapture_translation)[[2, 0, 1]] # Create a Rotation object from the corrected matrix rotation = Rotation.from_matrix(rotation_matrix_corrected) # Calculate the extrinsics matrix extrinsics = np.concatenate((rotation.as_matrix(), -np.dot(rotation.as_matrix(), translation_vector_corrected[:, np.newaxis])), axis=1) np.set_printoptions(precision=10, suppress=True) # Create the full world_from_cam matrix world_from_cam_realitycapture = np.vstack((extrinsics, np.array([0, 0, 0, 1]))) world_from_cam_realitycapture = np.linalg.inv(world_from_cam_realitycapture) # Print the RealityCapture 'world_from_cam' matrix print(&quot;Computed 'world_from_cam' matrix from RealityCapture:&quot;) print(world_from_cam_realitycapture) # Provided 'world_from_cam' matrix world_from_cam_provided = np.array([ 0.9990035620585193, -0.033809526359390864, 0.02913415387039758, 0.0, 0.03464496400756332, 0.9989885273263922, -0.02866441590475588, 0.0, -0.0281355551447806, 0.02964520530540873, 0.9991644270784941, 0.0, 0.004656272268761068, -0.0009031649267875787, 0.4233474275953534, 1.0 ]).reshape(4, 4) # Print the provided 'world_from_cam' matrix print(&quot;Provided 'world_from_cam' matrix:&quot;) 
print(world_from_cam_provided.transpose()) </code></pre>
<python><3d><computer-vision><camera>
2024-08-21 17:27:23
1
1,919
Andre Ahmed
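Before debugging axis conventions, it can help to verify the basic relationship from the linked page in isolation: with t = -R @ C, inverting the 4x4 camera matrix must give back the transposed rotation and the original camera center. A numpy sanity check on the Camera 0 values above (no coordinate-system conversion applied):

```python
import numpy as np

# xcr:Rotation (row-major, world -> camera) and xcr:Position (camera center C)
R = np.array([
    0.9561484049495719, 0.2924655797321343, -0.015624096272626363,
    0.00017677908012416274, -0.053922216042367715, -0.9985451233500854,
    -0.29288256428395754, 0.9547545649479907, -0.05160934265640921,
]).reshape(3, 3)
C = np.array([2.75445788094206, -10.557416536967201, 15.257369477238798])

assert np.allclose(R @ R.T, np.eye(3), atol=1e-6)  # proper orthonormal rotation

t = -R @ C                       # translation of the world -> camera extrinsics
cam_from_world = np.eye(4)
cam_from_world[:3, :3] = R
cam_from_world[:3, 3] = t

world_from_cam = np.linalg.inv(cam_from_world)
# Inverting must recover the transpose of R and the original camera center.
assert np.allclose(world_from_cam[:3, :3], R.T, atol=1e-9)
assert np.allclose(world_from_cam[:3, 3], C, atol=1e-9)
print("t =", t)
```

If this check passes but the two systems still disagree, the mismatch lives in the row-reordering/axis-swap step or in the convention of the reference system, not in the extrinsics math itself.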
78,898,176
8,467,078
How to use PRAGMA with sqlite3's new autocommit attribute in Python?
<p>Python's <code>sqlite3</code> module recently introduced the <code>autocommit</code> attribute, which <a href="https://docs.python.org/3/library/sqlite3.html#sqlite3-transaction-control-autocommit" rel="noreferrer">the docs recommend</a> setting to <code>False</code>. I'd like to use this recommended setting for a new project using Python 3.12. However, I'd also like to enforce foreign key constraints in my database. I'm aware that this required setting <code>connection.execute(&quot;PRAGMA foreign_keys = ON;&quot;)</code> for every connection. This works when using the default option for <code>autocommit</code>, which is <code>LEGACY_TRANSACTION_CONTROL</code>. Using <code>autocommit=False</code> (as recommended), the foreign key constraints are no longer enforced. If I understand correctly, this is because the PRAGMA only works if no transaction is active, but <code>autocommit=False</code> always implicitly opens a transaction, as explained <a href="https://docs.python.org/3/library/sqlite3.html#sqlite3-transaction-control-autocommit" rel="noreferrer">in the docs</a>.</p> <p>I found that the following seems to work:</p> <pre class="lang-py prettyprint-override"><code>con = sqlite3.connect(path, autocommit=True) con.execute(&quot;PRAGMA foreign_keys = ON;&quot;) con.autocommit = False </code></pre> <p>Now my question is: Is this intended behavior from <code>sqlite3</code> or a bug? If it's intended, is the above workaround the best way to solve this and are there any hidden consequences in doing so, that result in different behavior compared to setting <code>autocommit=False</code> immediately in the <code>connect()</code> call? 
I'm surprised none of this seems to be documented anywhere, I'd assume foreign key checks are quite commonly desired...</p> <p>Here's a full minimum working example of all of this:</p> <pre class="lang-py prettyprint-override"><code>import sqlite3 # con = sqlite3.connect(&quot;:memory:&quot;, autocommit=sqlite3.LEGACY_TRANSACTION_CONTROL) # PRAGMA works # con = sqlite3.connect(&quot;:memory:&quot;, autocommit=False) # PRAGMA doesn't work con = sqlite3.connect(&quot;:memory:&quot;, autocommit=True) # PRAGMA works, but need to set autocommit=False afterwards con.execute(&quot;PRAGMA foreign_keys = ON;&quot;) con.autocommit = False # Super basic two-table example... con.execute(&quot;CREATE TABLE foo(id INTEGER PRIMARY KEY, baz TEXT UNIQUE)&quot;) con.execute(&quot;CREATE TABLE bar(id INTEGER PRIMARY KEY, bazid INTEGER UNIQUE,&quot; &quot;FOREIGN KEY (bazid) REFERENCES foo (id))&quot;) # Insert some values so a foreign key exists with con: con.execute(&quot;INSERT INTO foo(baz) VALUES(?)&quot;, (&quot;spam&quot;,)) con.execute(&quot;INSERT INTO foo(baz) VALUES(?)&quot;, (&quot;eggs&quot;,)) # This should pass with con: con.execute(&quot;INSERT INTO bar(bazid) VALUES(?)&quot;, (&quot;1&quot;,)) # This should fail with con: con.execute(&quot;INSERT INTO bar(bazid) VALUES(?)&quot;, (&quot;42&quot;,)) con.close() </code></pre>
<python><sqlite>
2024-08-21 16:56:22
0
345
VY_CMa
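The "PRAGMA is a no-op inside a transaction" hypothesis can be checked directly, using legacy manual transaction control so the snippet runs on any Python version:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control

# Outside any transaction, the pragma takes effect.
con.execute("PRAGMA foreign_keys = ON;")
outside = con.execute("PRAGMA foreign_keys;").fetchone()[0]

con.execute("PRAGMA foreign_keys = OFF;")
con.execute("BEGIN;")
con.execute("PRAGMA foreign_keys = ON;")   # silently ignored inside a transaction
inside = con.execute("PRAGMA foreign_keys;").fetchone()[0]
con.execute("COMMIT;")

print(outside, inside)  # expect 1, then 0
con.close()
```

This supports the workaround in the question: issue the pragma while no transaction is open (autocommit), then switch `con.autocommit = False`. The foreign-key flag is per-connection state, and as far as I can tell it survives the switch, but verify on your Python/SQLite build.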
78,898,081
7,321,700
Groupby index and keep the max column value given a single column
<p><strong>Scenario:</strong> With a dataframe with duplicated indices, I want to groupby while keeping the max value. I found the solution to this in <a href="https://stackoverflow.com/questions/43486226/drop-duplicates-by-index-keeping-max-for-each-column-across-duplicates">Drop duplicates by index, keeping max for each column across duplicates</a>; however, this gets the max value of each column. This mixes the data of different rows, keeping the max values.</p> <p><strong>Question:</strong> If instead of mixing the values of different rows, I want to keep a single row, where the value of a column &quot;C&quot; is the highest among the rows with the same index (in this case I will select the row with the highest value in &quot;C&quot; and keep all values for that row, not mixing with high values of other columns from other rows), how should the groupby be performed?</p> <p><strong>What I tried:</strong> From the question linked, I got</p> <pre><code>df.groupby(df.index).max() </code></pre> <p>and tried to modify it to:</p> <pre><code>df.groupby(df.index)['C'].max() </code></pre> <p>but this deletes the other columns of the dataframe.</p>
<python><pandas><dataframe><group-by>
2024-08-21 16:32:22
1
1,711
DGMS89
78,897,975
4,648,809
Python idiom `if __name__ == '__main__':` in uwsgi?
<p>What is Python idiom in uwsgi for</p> <pre><code>if __name__ == '__main__': main() </code></pre> <p>I found here a long string in uwsgi instead of <code>__main__</code> <a href="https://stackoverflow.com/questions/34129354/what-name-string-does-uwsgi-use">What __name__ string does uwsgi use?</a>. But it looks like a workaround. Is there better way to call function once, when uwsgi starts my python script?</p>
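One hedged sketch of the usual pattern: module-level code runs once per worker when uwsgi imports the file, so the `__main__` guard can be replaced by an unconditional call, optionally gated on whether the `uwsgi` module is importable (it only exists inside a uwsgi process):

```python
def main():
    # one-time initialisation for this process
    return "initialized"

try:
    import uwsgi  # importable only when running under uwsgi
    under_uwsgi = True
except ImportError:
    under_uwsgi = False

# Runs once per process at import time, under uwsgi or plain python alike.
result = main()
```

Note that with multiple uwsgi workers this still runs once *per worker*; truly once-per-deployment work would need a lock or a master-only hook.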
<python><uwsgi>
2024-08-21 16:08:57
1
1,031
Alex
78,897,914
3,486,684
Polars: What are the downsides of joining on null?
<p><a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.join.html" rel="nofollow noreferrer"><code>join_nulls</code> is <code>False</code></a> by default in <code>polars</code>.</p> <p>Suppose I were to always call <code>join</code> with <code>join_nulls</code> set to <code>True</code>. What problems would I run into?</p> <p>Put differently: why would I <em>not</em> want to join on null? If it is because I want to filter out <code>null</code> values, then wouldn't I call <a href="https://docs.pola.rs/api/python/stable/reference/dataframe/api/polars.DataFrame.drop_nulls.html" rel="nofollow noreferrer"><code>drop_null</code></a> explicitly on the relevant dataframe(s) before joining, rather than relying on <code>join</code> to implicitly do this for me?</p>
<python><dataframe><python-polars>
2024-08-21 15:53:50
1
4,654
bzm3r
78,897,803
5,213,451
Can I easily parse datetimes independently of my machine?
<p>In python I like the <code>datetime</code> module of the standard library to parse timestamps. However, some of the <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes" rel="nofollow noreferrer">format codes</a> depend on the locale set for the machine where it's run, which makes the code fragile:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime string = &quot;08 Aug 2024 01:45 PM&quot; fmt = &quot;%d %b %Y %I:%M %p&quot; datetime.strptime(string, fmt) # on an American computer # datetime(2024, 8, 8, 13, 45) datetime.strptime(string, fmt) # on an French computer # ValueError: time data '08 Aug 2024 01:45 PM' does not match format '%d %b %Y %I:%M %p' </code></pre> <p>Note: The equivalent in French is <code>08 aoΓ» 2024 01:45 </code> (different month name and no AM/PM)</p> <p><strong>Is there a more robust way to systematically parse timestamps?</strong></p> <hr /> <p>At the moment, I went with a context manager setting my locale temporarily to fully control the process, but it feels like I'm missing a better tool for doing the same job.</p> <pre class="lang-py prettyprint-override"><code>from contextlib import contextmanager @contextmanager def locale_context(locale: str): import locale as l saved_loc = l.getlocale() l.setlocale(l.LC_ALL, locale) try: yield l.getlocale() finally: l.setlocale(l.LC_ALL, saved_loc) def strptime(x: str, fmt: str, *, locale: str) -&gt; datetime: &quot;&quot;&quot;An equivalent of the standard strptime, with a way to fix the locale.&quot;&quot;&quot; with locale_context(locale): return datetime.strptime(x, fmt) strptime(string, fmt, locale=&quot;en_US&quot;) # on an American computer # datetime(2024, 8, 8, 13, 45) strptime(string, fmt, locale=&quot;en_US&quot;) # on an French computer # datetime(2024, 8, 8, 13, 45) </code></pre>
<python><datetime><locale><python-datetime>
2024-08-21 15:28:55
0
1,000
Thrastylon
78,897,764
10,425,150
Create "Pass/Fail/Caution" status in pandas dataframe based another columns without "SettingWithCopy" warning
<p>I wrote code where:</p> <ol> <li>All values above 3 are marked as &quot;Fail&quot;.</li> <li>Values between 1 and 3 marked as &quot;Caution&quot;.</li> </ol> <p>However I have the following warning: <code>A value is trying to be set on a copy of a slice from a DataFrame</code>. And I'm not sure how I could avoid this warning.</p> <p><strong>Current code:</strong></p> <pre><code>import pandas as pd values = range(6) df = pd.DataFrame({&quot;Values&quot;:values, &quot;Caution limit&quot;: [1]*len(values), &quot;Fail limit&quot;: [3]*len(values)}) df[&quot;Status&quot;] = &quot;Pass&quot; df[&quot;Status&quot;][df[&quot;Caution limit&quot;] &lt; df[&quot;Values&quot;]] = &quot;Caution&quot; df[&quot;Status&quot;][df[&quot;Fail limit&quot;] &lt; df[&quot;Values&quot;]] = &quot;Fail&quot; </code></pre> <p><strong>Current Output:</strong></p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: right;">Values</th> <th style="text-align: right;">Caution limit</th> <th style="text-align: right;">Fail limit</th> <th style="text-align: left;">Status</th> </tr> </thead> <tbody> <tr> <td style="text-align: right;">0</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Pass</td> </tr> <tr> <td style="text-align: right;">1</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Pass</td> </tr> <tr> <td style="text-align: right;">2</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Caution</td> </tr> <tr> <td style="text-align: right;">3</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Caution</td> </tr> <tr> <td style="text-align: right;">4</td> <td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Fail</td> </tr> <tr> <td style="text-align: right;">5</td> 
<td style="text-align: right;">1</td> <td style="text-align: right;">3</td> <td style="text-align: left;">Fail</td> </tr> </tbody> </table></div> <p><strong>Warning message:</strong></p> <pre><code>C:\.....py:5: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[&quot;Status&quot;][df[&quot;Caution limit&quot;] &lt; df[&quot;Values&quot;]] = &quot;Caution&quot; C:\.....py:6: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy df[&quot;Status&quot;][df[&quot;Fail limit&quot;] &lt; df[&quot;Values&quot;]] = &quot;Fail&quot; </code></pre> <p><strong>UPD:</strong> I found a way avoiding the SettingWithCopyWarning using lambda function:</p> <pre><code>update_status = lambda value, caution, failed: [&quot;Fail&quot; if f&lt;v else &quot;Caution&quot; if c&lt;v else &quot;Pass&quot; for v,c,f in zip(value, caution, failed)] df[&quot;Status&quot;] = update_status(df[&quot;Values&quot;],df[&quot;Caution limit&quot;],df[&quot;Fail limit&quot;]) </code></pre> <p>However it completely avoids using <code>pandas</code> capability and my main goal is to learn how I could use <code>pandas</code> to do so.</p>
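A warning-free, pandas-native alternative worth considering (a sketch using `numpy.select`, which evaluates the conditions in order and takes the first match):

```python
import numpy as np
import pandas as pd

values = range(6)
df = pd.DataFrame({"Values": values,
                   "Caution limit": [1] * len(values),
                   "Fail limit": [3] * len(values)})

# Conditions are checked in order, so the stricter "Fail" test comes first;
# rows matching neither condition fall through to the default "Pass".
df["Status"] = np.select(
    [df["Values"] > df["Fail limit"], df["Values"] > df["Caution limit"]],
    ["Fail", "Caution"],
    default="Pass",
)
```

This assigns the column in one step, so no chained indexing (and no SettingWithCopyWarning) is involved.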
<python><pandas><dataframe>
2024-08-21 15:20:28
3
1,051
GΠΎΠΎd_MΠ°n
78,897,744
342,546
PunktSentenceTokenizer with specific language and complex abbreviations
<p>I've tried to adapt the solution from <a href="https://stackoverflow.com/q/69734355/342546">Configure PunktSentenceTokenizer and specify language</a> for abbreviations containing dots (e.g. &quot;i.d.F.&quot;). Thus, I've expected the following to work:</p> <pre><code>import nltk additional_abbreviations = [&quot;z.B&quot;, &quot;z.b&quot;, &quot;ca&quot;, &quot;dt&quot;, &quot;i.d.F&quot;] sentence_tokenizer = nltk.data.load(&quot;tokenizers/punkt/german.pickle&quot;) sentence_tokenizer._params.abbrev_types.update(additional_abbreviations) split_sentences = sentence_tokenizer.tokenize(&quot;Das ist z.B. ein Vogel. Das ist i.d.F. vom 10. November 1902 Geschichte. Das sind ca. 2 kg.&quot;) print(split_sentences) </code></pre> <p>But it returns (doesn't matter if I add a dot after the &quot;F&quot; of &quot;i.d.F&quot;):</p> <pre><code>['Das ist z.B. ein Vogel.', 'Das ist i.d.F.', 'vom 10. November 1902 Geschichte.', 'Das sind ca. 2 kg.'] </code></pre> <p>Thus, it doesn't handle &quot;i.d.F.&quot; as an abbreviation, but it accepts all the other additional abbreviations as such. What am I doing wrong?</p>
<python><nltk><tokenize>
2024-08-21 15:17:48
1
13,736
tohuwawohu
78,897,635
3,061,305
Unable to unit test Azure Durable Function in Python v2 Programming Model
<p>I'm really struggling to write unit tests for an Azure Durable Function in Python.</p> <p>This is the code:</p> <pre><code>@bp.timer_trigger(schedule=&quot;0 0 12 * * *&quot;, arg_name=&quot;timer&quot;) @bp.durable_client_input(client_name=&quot;client&quot;) async def my_func(timer: func.TimerRequest, client: df.DurableOrchestrationClient) -&gt; None: await client.start_new(&quot;durable_orchestrator&quot;) </code></pre> <p>And this is my test:</p> <pre><code>@pytest.mark.asyncio async def test_extract_data_timer(): mock_timer_request = create_autospec(func.TimerRequest, instance=True) mock_timer_request.past_due = False mock_starter = AsyncMock(spec=df.DurableOrchestrationClient) mock_starter.start_new = AsyncMock(return_value=&quot;test&quot;) result = await my_func(mock_timer_request, mock_starter) mock_starter.start_new.assert_called_with(&quot;durable_orchestrator&quot;) </code></pre> <p>When I run this I get the following error:</p> <blockquote> <p>E TypeError: object NoneType can't be used in 'await' expression</p> </blockquote> <p>And interestingly enough, when I remove the decorators (@bp.timer_trigger &amp; @bp.durable_client_input) the test passes no problem.</p> <p>Something I've noticed is that without the decorators the function signature is:</p> <pre><code>def my_func( timer: TimerRequest, starter: DurableOrchestrationClient ) -&gt; Coroutine[Any, Any, None] </code></pre> <p>And as soon as the decorators are added I get:</p> <pre><code>async def my_func( timer: TimerRequest, starter: DurableOrchestrationClient ) -&gt; None </code></pre> <p>===== UPDATE =====</p> <p><a href="https://github.com/Azure/azure-functions-durable-python/issues/460" rel="nofollow noreferrer">This</a> issue opened in GitHub indicates that testing Azure Durable Functions is an ongoing issue, and it has been open for 9 months already. Not sure what the plan from the Azure team is on this.</p>
<python><pytest><azure-durable-functions>
2024-08-21 14:50:47
2
7,331
Jan Swart
78,897,605
1,406,168
Azure Function App Python - Accessing Storage Account
<p>I have created a function app on a linux host. I deploy via a pipeline and everything works as expected. Until I start importing azure libraries, then the functions are just not deployed. No errors.</p> <pre><code># When adding the imports below, the functions fail to show in the azure portal. from azure.identity import DefaultAzureCredential from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient </code></pre> <p>Full code:</p> <pre><code>import logging import azure.functions as func import json from azure.identity import DefaultAzureCredential from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient app = func.FunctionApp() @app.route(route=&quot;get_port_data&quot;, auth_level=func.AuthLevel.FUNCTION) def get_port_data(req: func.HttpRequest) -&gt; func.HttpResponse: logging.info('Python HTTP trigger function processed a request.') lat = req.params.get('lat') long = req.params.get('lng') # credential = DefaultAzureCredential() # blob_service_client = BlobServiceClient(account_url=&quot;https://xx.blob.core.windows.net&quot;, credential=credential) jsn = {&quot;region&quot;:&quot;USAC&quot;,&quot;port&quot;:&quot;West&quot;} return func.HttpResponse( json.dumps(jsn), mimetype=&quot;application/json&quot;, status_code=200 ) </code></pre> <p>Deployment pipeline:</p> <pre><code># Starter pipeline # Start with a minimal pipeline that you can customize to build and deploy your code. # Add steps that build, run tests, deploy, and more: # https://aka.ms/yaml trigger: - none variables: # Azure Resource Manager connection created during pipeline creation azureServiceConnectionId: 'xx' # Web app name webAppName: 'xx' # Agent VM image name vmImageName: 'ubuntu-latest' # Environment name environmentName: 'func-xx' # Project root folder. projectRoot: $(System.DefaultWorkingDirectory) # Python version: 3.11. Change this to match the Python runtime version running on your web app.
pythonVersion: '3.11' workingDirectory: '' pool: vmImage: $(vmImageName) stages: - stage: Build displayName: Build stage jobs: - job: Build displayName: Build pool: vmImage: $(vmImageName) steps: # - bash: | # if [ -f extensions.csproj ] # then # dotnet build extensions.csproj --runtime ubuntu.16.04-x64 --output ./bin # fi # workingDirectory: $(workingDirectory) # displayName: 'Build extensions' - task: UsePythonVersion@0 displayName: 'Use Python 3.11' inputs: versionSpec: 3.11 - bash: | pip install --target=&quot;./.python_packages/lib/site-packages&quot; -r ./requirements.txt workingDirectory: $(workingDirectory) displayName: 'Install application dependencies' - task: ArchiveFiles@2 displayName: 'Archive files' inputs: rootFolderOrFile: '$(workingDirectory)' includeRootFolder: false archiveType: zip archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip replaceExistingArchive: true - publish: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip artifact: drop - stage: Deploy displayName: Deploy stage dependsOn: Build condition: succeeded() jobs: - deployment: Deploy displayName: Deploy environment: 'development' pool: vmImage: $(vmImageName) strategy: runOnce: deploy: steps: - task: AzureFunctionApp@1 displayName: 'Azure functions app deploy' inputs: azureSubscription: '$(azureServiceConnectionId)' appType: functionAppLinux appName: $(webAppName) package: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip' </code></pre>
<python><azure><azure-functions>
2024-08-21 14:43:55
1
5,363
Thomas Segato
78,897,579
2,456,863
How to create numpy records array with numerical entries without dtype name
<p>I am trying to create a numpy records array to match data that I am reading from an HDF5 file. The dtype of the HDF5 dataset (<code>dataset</code>) has a dtype of <code>np.dtype(('u1', (3,)))</code>. The dtype of <code>dataset[0]</code> is <code>dtype('uint8')</code>. I am trying to write an HDF5 to match this as shown below:</p> <pre><code>import numpy as np np.recarray((2,), dtype=('u1', (3,))) </code></pre> <p>However, this produces a result that looks it contains bytes rather than integers (which is how it looks when I read the HDF5 dataset):</p> <pre><code>records = rec.array([b'\xB0\xCE\x61', b'\x38\xD4\x01'], dtype=|V3) </code></pre> <p>When, I check the dtype of this array with <code>records[0].dtype</code> I get <code>dtype(V3)</code> instead of <code>dtype('uint8')</code> as I do when reading the HDF5 file.</p> <p><strong>How do I get the records to store these values as uint8 rather than bytes?</strong> I noticed that, if give the <code>dtype</code> a name, then the values are represented by numbers rather than bytes, but the HDF5 dataset does not have a <code>dtype</code> name.</p> <pre><code>&gt;&gt;&gt;np.recarray((2,), dtype=[('test', 'u1', (3,))]) rec.array([([ 48, 26, 98],), ([ 56, 212, 1],)], dtype=[('test', 'u1', (3,))]) </code></pre>
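For context, a sketch of how numpy treats that subarray dtype when constructing an array: the `(3,)` is absorbed into the array's shape and the element dtype stays plain `uint8`, which may be what the HDF5 layer is doing with its dataset:

```python
import numpy as np

# A subarray dtype: each "element" is three uint8 values.
a = np.zeros(2, dtype=np.dtype(("u1", (3,))))

# When used to build an array, the subarray part of the dtype becomes
# extra dimensions, so the elements are uint8 rather than 3-byte voids.
print(a.shape, a.dtype)  # (2, 3) uint8
```

`np.recarray` instead keeps the 3 bytes packed per record, which is why it shows `|V3`; if integer elements are what is wanted, a plain `(2, 3)` uint8 array (as above) may be the closer match.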
<python><numpy><hdf5><h5py>
2024-08-21 14:38:28
1
680
Stephen Hartzell
78,897,164
9,476,917
Matplotlib Chart bring xticks lables to the front
<p>I found a wonderful example to plot a radar chart in python's matplotlib from this <a href="https://stackoverflow.com/a/78122416/9476917">answer</a>.</p> <p>My xtick labels are quite long and reach into the radar chart. The label text is thus covered by the lines of the radar chart as shown in the screenshot below. Unfortunately, the label is even cut off from the chart area. I already tried the <code>plt.tight_layout()</code> to fix it, but didn't work.</p> <p><a href="https://i.sstatic.net/jtlSKybF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtlSKybF.png" alt="Label Text of Axis in the background" /></a></p> <p>Is there a way to either:</p> <ol> <li><p>Preferred: Put the xtick labels at the outside of the chart, so they don't overlapp with the chart at all</p> </li> <li><p>Put the label text in the front/change transparency with the alpha parameter? e.g. <code>plt.setp(ax.get_xticklabels(), alpha=1)</code> - didn't work</p> </li> </ol> <p>and to get all labels in the chart area so they are not cut off?</p> <p>The code I use is:</p> <p>import pandas as pd import numpy as np from matplotlib import pyplot as plt</p> <pre><code>def create_radar(df, *, id_column, title=None, max_values=None, padding=1.25, width=9, length=9): cm = 1/2.54 categories = df._get_numeric_data().columns.tolist() data = df[categories].to_dict(orient='list') ids = df[id_column].tolist() if max_values is None: max_values = {key: padding*max(value) for key, value in data.items()} normalized_data = {key: np.array(value) / max_values[key] for key, value in data.items()} num_vars = len(data.keys()) tiks = list(data.keys()) tiks += tiks[:1] angles = np.linspace(0, 2 * np.pi, num_vars, endpoint=False).tolist() + [0] fig, ax = plt.subplots(figsize=(width*cm, length*cm), subplot_kw=dict(polar=True)) for i, model_name in enumerate(ids): values = [normalized_data[key][i] for key in data.keys()] actual_values = [data[key][i] for key in data.keys()] values += values[:1] # Close the 
plot for a better look ax.plot(angles, values, label=model_name) ax.fill(angles, values, alpha=0.15) for _x, _y, t in zip(angles, values, actual_values): if isinstance(t, float) and t &gt; 1: t = f'{t:.0f}' elif isinstance(t, float) and t &lt;= 1: t = f'{t:.2%}' else: str(t) if i % 2 == 0: ax.text(_x, _y+.02, t, size='xx-small', va='top', ha='right') else: ax.text(_x, _y-.02, t, size='xx-small', va='bottom', ha='left') ax.fill(angles, np.ones(num_vars + 1), alpha=0.05) ax.set_yticklabels([]) ax.set_xticks(angles) #pos_list = np.arange(len(tiks)) #ax.xaxis.set_major_locator(FixedLocator((pos_list))) #ax.xaxis.set_major_formatter(FixedFormatter((tiks))) ax.set_xticklabels(tiks, size=&quot;xx-small&quot;) ax.legend(loc='lower right', #bbox_to_anchor=(0.1, 0.1) ) if title is not None: plt.suptitle(title) plt.show() radar = create_radar create_radar( pd.DataFrame({ 'x': [*'abcde'], 'supeeeer long category name 1': [10,11,12,13,14], 'supeeeer long category name 2': [0.1, 0.3, 0.4, 0.1, 0.9], 'supeeeer long category name 4': [1e5, 2e5, 3.5e5, 8e4, 5e4], 'supeeeer long category name 5': [9, 12, 5, 2, 0.2], 'supeeeer long category name 6': [1,1,1,1,5] }), id_column='x', ) </code></pre> <p>produces:</p> <p><a href="https://i.sstatic.net/JyOuwI2C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JyOuwI2C.png" alt="Radar Chart Matplotlib" /></a></p>
<python><matplotlib><axis-labels>
2024-08-21 13:14:20
0
755
Maeaex1
78,896,938
7,483,211
How to get exact exit code that a snakemake job exited with?
<p>When a snakemake job errors with a non-zero exit code, snakemake prints:</p> <pre><code>(one of the commands exited with non-zero exit code; note that snakemake uses bash strict mode!) Exiting because a job execution failed. Look above for error message </code></pre> <p>How can I find out what the <em>exact</em> exit code is? I want to know the error code, i.e. <code>1</code>, <code>138</code> etc. with which the job exited.</p> <p>I'm surprised snakemake doesn't report the exit code explicitly, because explicit is better than implicit.</p>
<python><logging><snakemake><exit-code>
2024-08-21 12:24:11
1
10,272
Cornelius Roemer
78,896,889
7,483,211
How to see the logs of failed snakemake jobs immediately in stderr - instead of log file
<p>I'm running snakemake inside kubernetes with argocd. When a job errors, snakemake says that the logs of the failed job are in a log file. Now I can't easily look this logfile up because argocd doesn't allow this.</p> <p>I want the failed job logs to be shown in the normal log output. How can I do this?</p>
<python><logging><snakemake><logfile>
2024-08-21 12:14:30
1
10,272
Cornelius Roemer
78,896,818
8,188,120
expo-go development app not communicating with flask API
<h2>Problem details</h2> <p>I am running an Android Expo Go app in development mode which appears to be available at these locations:</p> <blockquote> <p>β€Ί Choose an app to open your project at <a href="http://192.168.0.49:8081/_expo/loading" rel="nofollow noreferrer">http://192.168.0.49:8081/_expo/loading</a> β€Ί Metro waiting on exp://192.168.0.49:8081 β€Ί Scan the QR code above with Expo Go (Android) or the Camera app (iOS)</p> <p>β€Ί Web is waiting on http://localhost:8081</p> </blockquote> <p>I'm attempting to make the app communicate with a Flask backend API:</p> <pre><code>$ flask --app backend run * Serving Flask app 'backend' * Debug mode: off WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead. * Running on http://127.0.0.1:5000 </code></pre> <p>If I directly check the IP of the mobile device I am running the Expo Go app on it says:</p> <blockquote> <p>192.168.0.50</p> </blockquote> <p>The IP of the machine I am running the flask server is:</p> <blockquote> <p>IPv4 Address. . . . . . . . . . . 
: 192.168.0.49</p> </blockquote> <p>But I am getting network errors when attempting to perform GET requests from the expo app to the Flask server:</p> <blockquote> <p>ERROR Test error: [TypeError: Network request failed]</p> </blockquote> <hr /> <h2>Code snippets</h2> <p>My expo tsx code for performing a test API ping looks like this:</p> <pre><code>const testFetch = async () =&gt; { try { const response = await fetch('http://192.168.0.49:5000/test'); const data = await response.json(); console.log('Test response data:', data); } catch (error) { console.error('Test error:', error); } }; useEffect(() =&gt; { testFetch(); }, []); </code></pre> <p>Which envokes the flask code:</p> <pre><code>@bp.route('/test', methods=['GET']) def test(): return jsonify({&quot;message&quot;: &quot;Test successful&quot;}), 200 </code></pre> <p>...and the flask app is setup to receive incoming requests using CORS:</p> <pre><code>import os from flask import Flask from flask_cors import CORS def create_app(test_config=None): # create and configure the app app = Flask(__name__, instance_relative_config=True) CORS(app, resources={r&quot;/*&quot;: {&quot;origins&quot;: &quot;*&quot;}}) if test_config is None: # load the instance config, if it exists, when not testing app.config.from_pyfile('config.py', silent=True) else: # load the test config if passed in app.config.from_mapping(test_config) # ensure the instance folder exists try: os.makedirs(app.instance_path) except OSError: pass from . import home app.register_blueprint(home.bp) return app </code></pre> <hr /> <h2>Things I have tried to fix/find the issue</h2> <ol> <li>I set the inbound and outbound traffic on port:5000 in Windows Defender Firewall</li> <li><strong>I have tried opening the Flask endpoints on my mobile phone in a browser. 
They do not load</strong></li> <li>I have tried setting the flask app to listen on all ports: <code>flask --app backend run --host=0.0.0.0 --port=5000 --debug</code></li> <li>I have tried all combinations of https://localhost:5000 and actual IP addresses in the fetch request</li> <li>I have tried running the Flask server and the Expo Go app on a different network (my mobile data)</li> <li>Directly (and successfully) pinging the flask API using curl in WSL: <code>curl http://192.168.0.49:5000/fetchprogram?programID=test_program</code></li> <li>I tried switching out the endpoint from my flask app to a public API endpoint which worked ('https://dog.ceo/api/breeds/list/all')</li> </ol>
<python><react-native><flask><expo><ip-address>
2024-08-21 11:59:19
1
925
user8188120
78,896,815
265,683
pyautogui locateCenterOnScreen is not finding my source image or 'needle'
<p>I am searching in an area of 543 pixels wide and 378 pixels height from the top left corner of my screen for the following 'needle'</p> <p><a href="https://i.sstatic.net/76rX3peK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/76rX3peK.png" alt="enter image description here" /></a></p> <p>You may recognise the Firefox logo.</p> <p>My search area looks like this:</p> <p><a href="https://i.sstatic.net/wi0HS27Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wi0HS27Y.png" alt="enter image description here" /></a></p> <p>I am using the following code:</p> <pre><code>import pyautogui import time search_region = (0,0,543,378) x, y = pyautogui.locateCenterOnScreen('graphics/firefox_icon.png',grayscale=True,confidence = 0.4,region=search_region) print(&quot;We found it at X:&quot;+str(x)+&quot; Y:&quot;+str(y)) pyautogui.click(x, y) </code></pre> <p>When executing the code using a confidence value of 1,0.9,0.8,0.7,0.6,0.5 it raises an ImageNotFoundException. When using confidence of 0.4 it usually selects a pixel just under the finder menu which is in the clouds(coords X:80, Y:69). The same happens for grayscale=True or False. This is my challenge, I cannot get a simple use case like this to work - and I am not sure what I am doing wrong. Please offer some options I can try to fix my code or my configuration. As a note - I am using Macs 'Shift - Cmd - 4' short cut to select the area to create my needle image - which is output as a png by default, incase that is perhaps related.</p> <p>My environment:</p> <ul> <li>Python version: 3.12.4</li> <li>pyautolib version: 0.9.54</li> <li>opencv-python version: 4.10.0.84</li> <li>Mac OS version: 14.5</li> <li>MacBook Pro M1(retina display) in dual monitor mode with a 34&quot; widescreen</li> </ul>
<python><python-3.x><macos><ui-automation><pyautogui>
2024-08-21 11:58:08
1
5,690
RenegadeAndy
78,896,677
2,177,047
Migration of Workflow from Spyder IDE to PyCharm: Code snippet execution and variable explorer
<p>I currently use the Spyder IDE and have often thought about migrating to PyCharm but there is one feature I often use which makes it impossible for me to switch.</p> <p>I very often use Spyders Code Snippet execution by marking lines of code and executing them with F9. After that I inspect the outcome of the execution in the variable explorer to debug my applications. Thus I never use/need breakpoints.</p> <p>Can I apply this workflow also in PyCharm? I often tried but failed to find this functionality.</p> <p><a href="https://i.sstatic.net/Exd7H0ZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Exd7H0ZP.png" alt="enter image description here" /></a></p>
<python><debugging><pycharm><workflow><spyder>
2024-08-21 11:23:58
1
2,136
Ohumeronen
78,896,645
11,829,398
What do pydantic's field_validator modes do?
<p>I want to apply field validation to my Pydantic v2 model but don't understand what the <code>mode</code> kwarg does.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from typing import Optional from pydantic import BaseModel, field_validator, Field class MyModel(BaseModel): gross: Optional[float] = Field( default=None, le=999999, description=&quot;The total gross payment&quot; ) @classmethod @field_validator(&quot;gross&quot;, mode=&quot;before&quot;) def set_none_if_nan(cls, v): if pd.isna(v): return None return v </code></pre> <p>The <a href="https://github.com/pydantic/pydantic/blob/8cd7557df17829eb0d0f7eb2fbfd0793364fd8a9/pydantic/functional_validators.py#L294" rel="nofollow noreferrer">options</a> are:</p> <pre class="lang-py prettyprint-override"><code>FieldValidatorModes: TypeAlias = Literal['before', 'after', 'wrap', 'plain'] </code></pre> <p>'before' and 'after' seem self-explanatory. However, if I pass <code>MyModel(gross=float('nan')</code> it complains that <code>Input should be less than or equal to 999999</code>, even though I thought the field validation would happen <em>before</em> validation?</p> <p>Can't find an explanation in the <a href="https://docs.pydantic.dev/2.8/api/functional_validators/#pydantic.functional_validators.field_validator" rel="nofollow noreferrer">Pydantic docs</a></p>
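One thing worth checking (hedged, based on the pydantic v2 docs): `@field_validator` is documented as the outermost decorator, with `@classmethod` underneath it. With `@classmethod` on top, the validator may never be registered, which would explain the `le` constraint firing on the raw NaN. A sketch with the order flipped, using `math.isnan` in place of `pd.isna` to keep it dependency-free:

```python
import math
from typing import Optional
from pydantic import BaseModel, Field, field_validator

class MyModel(BaseModel):
    gross: Optional[float] = Field(default=None, le=999999)

    @field_validator("gross", mode="before")  # outermost, per the docs
    @classmethod
    def set_none_if_nan(cls, v):
        # mode="before" runs ahead of the le=999999 constraint,
        # so NaN is turned into None before the bound is checked
        if isinstance(v, float) and math.isnan(v):
            return None
        return v

print(MyModel(gross=float("nan")).gross)  # None
```

With this ordering, "before" means before pydantic's own coercion and constraint checks on the field, and "after" means on the already-validated value.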
<python><pydantic><pydantic-v2>
2024-08-21 11:14:39
1
1,438
codeananda
78,896,486
926,918
Spurious zero printed in seaborn barplot while plotting pandas dataframe
<p>The following is the minimal code. A spurious zero is printed between second and third bars that I am unable to get rid of in the plot. Please help me fix the code. A minimal working example is below:</p> <p><a href="https://i.sstatic.net/kZeNQUfb.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kZeNQUfb.jpg" alt="Spurious zero between 2nd and 3rd bars" /></a></p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns data = { 'Temp': [5, 10, 25, 50, 100, 5, 10, 25, 50, 100, 5, 10, 25, 50, 100, 5, 10, 25, 50, 100], 'Measurement': ['mean', 'mean', 'mean', 'mean', 'mean', 'std', 'std', 'std', 'std', 'std', 'min', 'min', 'min', 'min', 'min', 'max', 'max', 'max', 'max', 'max'], 'Value': [-0.03, -0.05, -0.07, -0.09, -0.09, 0.24, 0.44, 0.69, 1.11, 1.45, -2.36, -4.56, -5.86, -11.68, -14.68, 1.30, 3.50, 5.26, 9.28, 11.14] } df_melted = pd.DataFrame(data) print(df_melted) barplot = sns.barplot(x='Temp', y='Value', hue='Measurement', data=df_melted, palette='viridis') for p in barplot.patches: height = p.get_height() x = p.get_x() + p.get_width() / 2. if height &gt;= 0: barplot.text(x, height + 0.1, f'{height:.1f}', ha='center', va='bottom', fontsize=10, color='black', rotation=90) else: barplot.text(x, height - 0.1, f'{height:.1f}', ha='center', va='top', fontsize=10, color='black', rotation=90) plt.ylim([-25,25]) plt.show() </code></pre> <p>Verions: <code>Python 3.12.4</code>, <code>matplotlib 3.9.1</code> and <code>Seaborn 0.13.2</code></p> <p>PS: Edited to include <code>maplotlib</code> version.</p>
<python><pandas><seaborn>
2024-08-21 10:39:25
2
1,196
Quiescent
78,896,474
8,964,393
How to get coefficients from glm model in python
<p>I have trained the following glm model in Python:</p> <pre><code>fitGlm = smf.glm( listOfInModelFeatures, family=sm.families.Binomial(),data=train, freq_weights = train['model_weight']).fit() </code></pre> <p>I have then produced the summary of the trained model:</p> <pre><code> print(fitGlm.summary()) </code></pre> <p>which gives the following report:</p> <pre><code> Generalized Linear Model Regression Results ============================================================================== Dep. Variable: Target No. Observations: 1065046 Model: GLM Df Residuals: 4361436.81 Model Family: Binomial Df Model: 8 Link Function: Logit Scale: 1.0000 Method: IRLS Log-Likelihood: -6.1870e+05 Date: Wed, 21 Aug 2024 Deviance: 1.2374e+06 Time: 10:27:37 Pearson chi2: 4.01e+06 No. Iterations: 8 Pseudo R-squ. (CS): 0.1479 Covariance Type: nonrobust =============================================================================== coef std err z P&gt;|z| [0.025 0.975] ------------------------------------------------------------------------------- Intercept 3.2619 0.003 1126.728 0.000 3.256 3.268 e1_a_11_sp 0.9318 0.004 256.254 0.000 0.925 0.939 sp_g_37 0.5850 0.006 102.522 0.000 0.574 0.596 sp_f3_35 0.6510 0.005 135.114 0.000 0.642 0.660 e1_a_07_sp 0.4930 0.006 79.698 0.000 0.481 0.505 e1_e_02_sp 0.9956 0.008 120.253 0.000 0.979 1.012 e1_b_03_sp 0.7493 0.013 56.539 0.000 0.723 0.775 e2_k_02_spa 0.4996 0.014 34.512 0.000 0.471 0.528 ea5_s_01_sp 0.3305 0.008 41.524 0.000 0.315 0.346 =============================================================================== </code></pre> <p><strong>Question</strong>: how do I get the list of coefficients for each feature (incl. the intercept)? I mean, how do I get something like this?</p> <pre><code>[3.2619,0.9318,0.5850,0.6510,0.4930,0.9956,0.7493,0.4996,0.3305] </code></pre> <p>Thanks in advance.</p>
<python><list><glm><coefficients>
2024-08-21 10:35:55
1
1,762
Giampaolo Levorato
78,896,438
2,753,095
Copying a class object in Python
<p>Suppose I have a complicated class with lots of methods. In particular it is callable via a <code>__call__</code> member. I would like to have an object from the exact same class, but not callable and not with <code>__call__</code> at all in fact, because some introspection tool would detect it and this is not wanted.</p> <p>The obvious solution is to make a new class, copying the code from the original one, and removing <code>__call__</code>. But is there a way to do this programmatically?</p> <p>I went from:</p> <pre><code>import copy

class A:
    def __call__(self):
        print(self, &quot;CALLED&quot;)

B = copy.deepcopy(A)  # for some reason it is not a real copy...
del B.__call__  # seems to work, but also removes `__call__` from A !!!
a = A()
a()  # TypeError: 'A' is not callable
# this was the more surprising to me...
</code></pre> <p>to:</p> <pre><code>class B(A):
    def __getattribute__(self, attr):
        if attr == &quot;__call__&quot;:
            raise AttributeError(attr)
        return super().__getattribute__(attr)

b = B()
b.__call__  # raises attribute error, nice
b()  # still calls A.__call__... WTF!
</code></pre> <p>passing through many attempts.</p> <p>Is there a way? How to get a true copy of a class object (not an instance) in Python?</p> <p>I checked existing Q&amp;A, but none seem to address this particular issue. I also thought about extracting source code, and removing the <code>__call__</code> method from the string and having it re-evaluated.</p>
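One programmatic route (a sketch, not necessarily the only one): rebuild the class from its own namespace with `type()`, filtering `__call__` out. This copies the class dict shallowly rather than deep-copying code objects, which may or may not matter for the introspection tool in question:

```python
class A:
    def method(self):
        return "method"

    def __call__(self):
        return "called"

# rebuild A under a new name, dropping __call__ from the namespace;
# the __dict__/__weakref__ descriptors must be dropped too, or
# type() rejects the namespace ("__dict__ slot disallowed")
EXCLUDE = {"__call__", "__dict__", "__weakref__"}
ns = {k: v for k, v in vars(A).items() if k not in EXCLUDE}
B = type("B", A.__bases__, ns)

a = A()
b = B()
```

Note that `B` is a sibling rebuilt on the same bases, not a subclass of `A`. Subclassing cannot work here: Python looks dunder methods up on the *type*, bypassing `__getattribute__`, which is exactly why the second attempt above still calls `A.__call__`.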
<python><metaprogramming>
2024-08-21 10:29:49
2
7,998
mguijarr
78,896,435
4,451,315
Function takes `Foo` subclass and wraps it in `Bar`, else returns type unaltered
<p>I have the following code:</p> <pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Any, Generic

class Foo: ...

FooT = TypeVar('FooT', bound=Foo)
T = TypeVar('T')

class Bar(Generic[FooT]):
    def __init__(self, foo: FooT):
        self._foo = foo

def func(a: FooT | T) -&gt; Bar[FooT] | T:
    if isinstance(a, Foo):
        return Bar(a)
    return a

def my_other_func(a: Foo) -&gt; None:
    func(a)
</code></pre> <ul> <li><code>func</code> either takes a <code>Foo</code> subclass and returns a <code>Bar</code> wrapping that <code>Foo</code> object, or returns the input unaltered</li> </ul> <p>I thought I could type it as</p> <pre><code>def func(a: FooT | T) -&gt; Bar[FooT] | T:
</code></pre> <p>But if I run <code>mypy</code> on this I get</p> <pre><code>main.py:18: error: Argument 1 to &quot;Bar&quot; has incompatible type &quot;Foo&quot;; expected &quot;FooT&quot;  [arg-type]
main.py:22: error: Argument 1 to &quot;func&quot; has incompatible type &quot;Foo&quot;; expected &quot;Never&quot;  [arg-type]
Found 2 errors in 1 file (checked 1 source file)
</code></pre> <p>and I don't understand either error.</p> <p>How should I have typed it?</p>
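A sketch of one commonly suggested workaround (hedged, since mypy's exact inference varies by version): express the two branches as `@overload`s instead of one union signature, so each type variable is solved independently per branch:

```python
from typing import Generic, TypeVar, overload

class Foo: ...

FooT = TypeVar("FooT", bound=Foo)
T = TypeVar("T")

class Bar(Generic[FooT]):
    def __init__(self, foo: FooT) -> None:
        self._foo = foo

@overload
def func(a: FooT) -> Bar[FooT]: ...
@overload
def func(a: T) -> T: ...

def func(a):  # untyped runtime implementation; the overloads carry the types
    if isinstance(a, Foo):
        return Bar(a)
    return a

wrapped = func(Foo())      # a type checker should see Bar[Foo] here
passed_through = func(42)  # ...and int here
```

The single-signature version fails because mypy must bind `FooT` from the argument alone and then cannot prove `Foo` is the *specific* `FooT` inside the body; separate overloads avoid forcing both variables into one solve.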
<python><python-typing><mypy>
2024-08-21 10:29:20
2
11,062
ignoring_gravity
78,896,317
11,022,199
Unpivoting a dataframe by splitting column names
<p>I have the following dataframe:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>volume_brand1</th> <th>volume_productX_brand1</th> <th>amount_brand1</th> <th>amount_productX_brand1</th> <th>volume_productX_brand2</th> <th>amount_productX_brand2</th> </tr> </thead> <tbody> <tr> <td>1000</td> <td>100</td> <td>10</td> <td>50</td> <td>2000</td> <td>200</td> </tr> </tbody> </table></div> <p>etc...</p> <p>I would like to unpivot this dataframe to look the following:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>brand</th> <th>product</th> <th>volume</th> <th>amount</th> </tr> </thead> <tbody> <tr> <td>brand1</td> <td>productX</td> <td>100</td> <td>50</td> </tr> <tr> <td>brand1</td> <td>&quot;not product X&quot;</td> <td>1000</td> <td>10</td> </tr> <tr> <td>brand2</td> <td>productX</td> <td>2000</td> <td>200</td> </tr> </tbody> </table></div> <p>etc..</p> <p>I've tried using pandas' <code>melt</code> function but have not been successful. As every column has to be unpivoted there is no value for <code>id_vars</code> any help would be appreciated!</p>
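A sketch of one way to do this, assuming every column follows the `measure[_product]_brand` naming convention (the regex below encodes that assumption): melt everything, split the variable names apart, then pivot the measure back out into columns:

```python
import pandas as pd

df = pd.DataFrame({
    "volume_brand1": [1000],
    "volume_productX_brand1": [100],
    "amount_brand1": [10],
    "amount_productX_brand1": [50],
    "volume_productX_brand2": [2000],
    "amount_productX_brand2": [200],
})

long = df.melt()  # no id_vars: every column becomes a row
parts = long["variable"].str.extract(
    r"^(?P<measure>volume|amount)_(?:(?P<product>product\w+)_)?(?P<brand>brand\d+)$"
)
long = pd.concat([long.drop(columns="variable"), parts], axis=1)
long["product"] = long["product"].fillna("not productX")

out = (
    long.pivot_table(index=["brand", "product"], columns="measure",
                     values="value", aggfunc="first")
        .reset_index()
        .rename_axis(columns=None)
)
```

With the sample row this yields three rows: `(brand1, not productX)`, `(brand1, productX)`, and `(brand2, productX)`, each with `amount` and `volume` columns.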
<python><pandas>
2024-08-21 10:03:52
3
794
borisvanax
78,896,313
2,641,187
Bound TypeVar resolution of arguments to multiplication
<p>I am struggling with a problem of bound TypeVar resolution of function/method arguments.</p> <p>Suppose I have a base class <code>Base</code> and a couple of derived classes. Also suppose I have a TypeVar <code>T</code> that is bound to <code>Base</code>. I want to define a generic container class that represents ordered &quot;products&quot; of such objects in form of a <code>list[T]</code>.</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from typing import TypeVar, Generic class Base: ... class X(Base): ... class Y(Base): ... class Z(Base): ... T = TypeVar(&quot;T&quot;, bound=Base) class Product(Generic[T]): objs: list[T] def __init__(self, objs: list[T]): self.objs = list(objs) </code></pre> <p>Such products can be</p> <ul> <li>mixed: the list elements are instances of different subclasses of <code>Base</code>. The product is then <code>Product[Base]</code></li> <li>homogeneous: all list elements of the same type, e.g. <code>Product[Y]</code></li> </ul> <p>So far so good, the following works out and MyPy reveals the correct generic aliases</p> <pre class="lang-py prettyprint-override"><code>reveal_type(Product([X(), X()])) # note: Revealed type is &quot;Product[X]&quot; reveal_type(Product([X(), X(), X()])) # note: Revealed type is &quot;Product[X]&quot; reveal_type(Product([Y(), Y()])) # note: Revealed type is &quot;Product[Y]&quot; reveal_type(Product([X(), Y()])) # note: Revealed type is &quot;Product[Base]&quot; reveal_type(Product([X(), Y(), Z()])) # note: Revealed type is &quot;Product[Base]&quot; </code></pre> <p>Especially in the mixed case, MyPy resolves <code>T</code> as the common superclass of the <code>objs</code>.</p> <p>Now I want to be able to iteratively construct such products using the <code>*</code> operator, so I implement <code>__mul__</code>. 
There are three variants:</p> <ol> <li><code>Base</code> * <code>Base</code> -&gt; <code>Product</code></li> <li><code>Base</code> * <code>Product</code> -&gt; <code>Product</code></li> <li><code>Product</code> * <code>Base</code> -&gt; <code>Product</code> (which I get with <code>__rmul__ = __mul__</code>)</li> </ol> <pre class="lang-py prettyprint-override"><code>class Base: def __mul__(self: T, other: T | Product[T]) -&gt; Product[T]: if isinstance(other, Base): return Product([self, other]) # type: ignore[list-item] # mypy expects T if isinstance(other, Product): return Product(other.objs + [self]) return NotImplemented __rmul__ = __mul__ </code></pre> <p>Now we have</p> <pre class="lang-py prettyprint-override"><code>reveal_type(X() * X()) # note: Revealed type is &quot;Product[X]&quot; reveal_type(X() * X() * X()) # note: Revealed type is &quot;Product[X]&quot; reveal_type(Y() * Y()) # note: Revealed type is &quot;Product[Y]&quot; reveal_type(X() * Y()) # note: Revealed type is &quot;Product[X]&quot; # error: Unsupported operand types for * (&quot;X&quot; and &quot;Y&quot;) [operator] reveal_type(X() * (Y() * Y())) # note: Revealed type is &quot;Product[X]&quot; # error: Unsupported operand types for * (&quot;X&quot; and &quot;Product[Y]&quot;) [operator] </code></pre> <p>So the homogeneous case works out, but the mixed case doesn't. The <code>self</code> parameter is evaluated first as <code>X</code>, before <code>other</code> is even taken into account. 
So MyPy does not resolve <code>T</code> as the common superclass of <code>X</code> and <code>Y</code>.</p> <p>Interestingly, with a standalone function, this is different</p> <pre class="lang-py prettyprint-override"><code>def mult(a: T, b: T | Product[T]) -&gt; Product[T]: if isinstance(b, Base): return Product([a, b]) # type: ignore[list-item] # mypy expects T if isinstance(b, Product): return Product(b.objs + [a]) raise NotImplementedError </code></pre> <p>Here I get</p> <pre class="lang-py prettyprint-override"><code> reveal_type(mult(X(), X())) # note: Revealed type is &quot;Product[X]&quot; reveal_type(mult(X(), mult(X(), X()))) # note: Revealed type is &quot;Product[X]&quot; reveal_type(mult(Y(), Y())) # note: Revealed type is &quot;Product[X]&quot; reveal_type(mult(X(), Y())) # note: Revealed type is &quot;Product[Base]&quot; reveal_type(mult(X(), mult(Y(), Y()))) # note: Revealed type is &quot;Product[Any]&quot; # error: Cannot infer type argument 1 of &quot;mult&quot; [misc] </code></pre> <p>So in this case at least the mixed <code>X * Y</code> case works out, MyPy resolves <code>T</code> as the common superclass <code>Base</code> for <code>X</code> and <code>Y</code>. However the <code>X * Product[Y]</code> case again trips MyPy up, which I find strange, because the same <code>T</code> is involved.</p> <p>What I want to achieve in the end is arbitrary sequences of multiplications that are inferred correctly by MyPy, e.g.</p> <pre class="lang-py prettyprint-override"><code>reveal_type(Y() * Y()) # note: Revealed type is &quot;Product[Y]&quot; reveal_type(X() * X() * X()) # note: Revealed type is &quot;Product[X]&quot; reveal_type(X() * Y() * Y()) # note: Revealed type is &quot;Product[Base]&quot; reveal_type(X() * Y() * Z()) # note: Revealed type is &quot;Product[Base]&quot; </code></pre> <p>What is the best way to get this right? 
Is there a way?</p> <p>I don't want to use specific overloads for all different combinations of <code>X</code>, <code>Y</code>, <code>Z</code> because in practice I have many more subclasses of <code>Base</code> and the number of overloads would explode.</p>
<python><python-typing>
2024-08-21 10:02:11
0
931
Darkdragon84
78,896,251
1,753,640
docx2pdf set permissions on docx file
<p>I am calling an API to download a docx file which I save locally. I then want to convert it to PDF. When I run docx2pdf on the file, I get a pop-up stating that MS Word needs permissions. What can I do to automate this step and remove the popup so a user is not involved? Should I set appropriate permissions on the file when I download it? What are those permissions? I tried chmod but no luck.</p> <p><a href="https://i.sstatic.net/UDplricE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UDplricE.png" alt="enter image description here" /></a></p> <p>These are excerpts from my code:</p> <pre><code>response = requests.get(url, headers=HEADERS)
if response.status_code == 200:
    content = response.content
    with open(&quot;ex.docx&quot;, &quot;wb&quot;) as file:
        file.write(content)
    os.chmod(&quot;ex.docx&quot;, 0o0777)
convert(&quot;ex.docx&quot;, &quot;ex.pdf&quot;)
</code></pre>
<python><macos><ms-word><permissions><docx2pdf>
2024-08-21 09:47:05
1
385
user1753640
78,896,210
1,021,819
How do I use python to zero pad an integer substring (not a whole string) within another string?
<p>Say I have strings like (outputted from running <code>glob.glob()</code> on output from someone else's code):</p> <pre><code>image-0.png
image-1.png
image-2.png
image-3.png
image-4.png
image-5.png
image-6.png
image-7.png
image-8.png
image-9.png
image-10.png
image-11.png
</code></pre> <p>How do I left zero pad the integer substring within each string?</p> <p>Related questions:</p> <p>Not using python - <a href="https://stackoverflow.com/questions/78846515/zero-pad-numbers-within-a-string">Zero-pad numbers within a string</a></p>
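A sketch with `re.sub` and a callable replacement: `str.zfill` pads each run of digits in place, leaving the rest of the string untouched (the width of 3 is an arbitrary choice here):

```python
import re

def zero_pad(name: str, width: int = 3) -> str:
    """Left-pad every integer substring in *name* to *width* digits."""
    return re.sub(r"\d+", lambda m: m.group().zfill(width), name)

padded = [zero_pad(n) for n in ["image-0.png", "image-10.png", "image-11.png"]]
```

Padding the names this way also makes a plain lexicographic `sorted()` order the files numerically.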
<python><string><substring><zero-padding>
2024-08-21 09:39:25
1
8,527
jtlz2
78,895,938
1,224,456
Incremental parsing of LLM output
<p>I'm working on a service that calls a Large Language Model, parse the output, and sends the parsed output to a client, message by message. When aggregated, the stream of text coming back from the LLM looks somewhat like this:</p> <p><code>This is a reference [r1] and here's an image of a cat &lt;image&gt;a black cat&lt;/image&gt;.</code></p> <p>I would like to parse this to something like:</p> <pre class="lang-py prettyprint-override"><code>[ {&quot;text&quot;: &quot;This is a reference &quot;}, {&quot;reference&quot;: &quot;r1&quot;}, {&quot;text&quot;: &quot; and here's an image of a cat &quot;}, {&quot;image&quot;: &quot;a black cat&quot;}, {&quot;text&quot;: &quot;.&quot;}, ] </code></pre> <p>Processing the aggregated stream is not a problem. But processing the stream as messages arrive, is much trickier (for me at least). We must stream <code>text</code> messages as soon as they are ready, buffer non <code>text</code> items, and send them when they are ready.</p> <p>To make it clear, consider the stream of text coming back like this:</p> <pre class="lang-py prettyprint-override"><code>[ &quot;This &quot;, &quot;is &quot;, &quot;a &quot;, &quot;reference &quot;, &quot;[&quot;, # Notice that anything can be broken across different messages &quot;r1] &quot;, &quot;and &quot;, &quot;here&quot;, &quot;'s &quot;, &quot;an &quot;, &quot;image &quot;, &quot;of &quot;, &quot;a &quot;, &quot;cat &lt;&quot;, # A more extreme example, breaking messages completely arbitrary &quot;ima&quot;, &quot;ge&quot;, &quot;&gt;a &quot;, &quot;black &quot;, &quot;ca&quot;, &quot;t&lt;/i&quot;, &quot;mage&gt;.&quot;, ] </code></pre> <p>The parsed output should be the following stream of messages:</p> <pre class="lang-py prettyprint-override"><code>[ {&quot;text&quot;: &quot;This &quot;}, {&quot;text&quot;: &quot;is &quot;}, {&quot;text&quot;: &quot;a &quot;}, {&quot;text&quot;: &quot;reference &quot;}, {&quot;reference&quot;: &quot;r1&quot;}, {&quot;text&quot;: &quot; &quot;}, 
{&quot;text&quot;: &quot;and &quot;}, {&quot;text&quot;: &quot;here&quot;}, {&quot;text&quot;: &quot;'s &quot;}, {&quot;text&quot;: &quot;an &quot;}, {&quot;text&quot;: &quot;image &quot;}, {&quot;text&quot;: &quot;of &quot;}, {&quot;text&quot;: &quot;a &quot;}, {&quot;text&quot;: &quot;cat &quot;}, {&quot;image&quot;: &quot;a black cat&quot;}, {&quot;text&quot;: &quot;.&quot;}, ] </code></pre> <p>Error handling can be as simple as possible to start with. If we hit <code>[</code> and never <code>]</code> just buffer indefinitely. When we hit the end of the stream we can return the buffer as <code>text</code>. If we hit <code>&lt;image&gt;</code> and a closing tag starts with <code>&lt;/</code> but continues with anything but <code>image&gt;</code> return the entire thing as <code>text</code> as well. No need to worry about nested tags for now.</p> <p>Now, I have some code that does the reference parsing, but it's a very low level python code that is hard to read and is unmaintainable. I tried to look up options for incremental parsing but couldn't find much. That's probably because, as you can read between the lines, my understanding of parsing techniques / tools is very limited. Specifically, I had a quick look at <a href="https://github.com/pyparsing/pyparsing/" rel="nofollow noreferrer">pyparsing</a>, <a href="https://github.com/lark-parser/lark" rel="nofollow noreferrer">lark</a>, and <a href="https://github.com/erikrose/parsimonious/" rel="nofollow noreferrer">parsimonious</a>. To be honest, I don't understand most of the terminology to even know if these support something like this.</p> <p>Given the problem, what are my options? Ideally, what python libraries can I use to solve this problem? If there's some reading to do that you can recommend please share that as well. Don't worry about the LLM part and the asynchronous handling of this. I've got these parts nailed down. I'm only interesting in the incremental parsing.</p> <p>Thanks!</p>
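For what it's worth, a hand-rolled buffer-and-scan generator covers exactly this: emit text up to the first `[` or `<`, and hold the buffer whenever a token might still be arriving. A sketch (not production code, and it adopts the simple error handling described above — an unmatched opener buffers until end of stream and is then flushed as text):

```python
import re

# matches one complete [reference] or <image>...</image> token
TOKEN = re.compile(r"\[(?P<ref>[^\]]*)\]|<image>(?P<img>.*?)</image>", re.S)

def parse_stream(chunks):
    """Incrementally tokenize an iterable of text chunks."""
    buf = ""
    for chunk in chunks:
        buf += chunk
        while buf:
            starts = [p for p in (buf.find("["), buf.find("<")) if p != -1]
            if not starts:            # no special char: the buffer is safe text
                yield {"text": buf}
                buf = ""
                continue
            i = min(starts)
            if i > 0:                 # flush plain text before the special char
                yield {"text": buf[:i]}
                buf = buf[i:]
            m = TOKEN.match(buf)
            if m is None:             # token may still be arriving: wait
                break
            if m.group("ref") is not None:
                yield {"reference": m.group("ref")}
            else:
                yield {"image": m.group("img")}
            buf = buf[m.end():]
    if buf:                           # end of stream: flush leftovers as text
        yield {"text": buf}
```

A production version would also bail out early on a `<` that can no longer be a prefix of `<image>` (e.g. `<b>`), instead of buffering to the end; that check is omitted here for brevity.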
<python><parsing><large-language-model>
2024-08-21 08:36:57
1
2,880
Nagasaki45
78,895,893
26,843,912
AttributeError: 'function' object has no attribute 'submit'
<p>I am using fastapi and uvicorn as my python server and whenever a user visits a route, I want to start a subprocess in the background. I used asyncio.create_task and <code>loop.run_in_excutor</code> in order to properly handle the subprocess code.</p> <p>PYTHON VERSION :- 3.11.3 OPERATING SYSTEM :- Windows x64</p> <pre class="lang-py prettyprint-override"><code>@router.get(&quot;/testdo&quot;) async def do(): asyncio.create_task(editing()) # Return a response immediately return {&quot;message&quot;: &quot;Video processing started in the background!&quot;} async def editing(): recordingFile = os.path.join(temp_dir, f&quot;test.mp4&quot;) webFile = os.path.join(temp_dir, f&quot;keeptest.mp4&quot;) ffmpeg_change_bitrate = ['ffmpeg', '-i', recordingFile, '-b:v', '5M', '-y', f'{webFile}'] loop = asyncio.get_event_loop() process = await loop.run_in_executor(subprocess.run, ffmpeg_change_bitrate) stdout, stderr = await process.communicate() if process.returncode == 0: print('Completed Editing') else: print(f'Error occurred: {stderr.decode()}') print('It will take more x seconds') await asyncio.sleep(20) # Simulate additional processing time for POST request to S3 bucket print('It took x seconds') print('Completed Editing, it took x minutes') </code></pre> <p>I am getting the following error :-</p> <pre><code>Task exception was never retrieved future: &lt;Task finished name='Task-5' coro=&lt;editing() done, defined at C:\Users\Zaid Ahmed\Desktop\LatestProject\Phia-Backend\src\routes\render.py:26&gt; exception=AttributeError(&quot;'function' object has no attribute 'submit'&quot;)&gt; Traceback (most recent call last): File &quot;C:\Users\Zaid Ahmed\Desktop\LatestProject\Phia-Backend\src\routes\render.py&quot;, line 37, in editing process = await loop.run_in_executor(subprocess.run, ffmpeg_change_bitrate) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Zaid Ahmed\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py&quot;, 
line 829, in run_in_executor executor.submit(func, *args), loop=self) ^^^^^^^^^^^^^^^ AttributeError: 'function' object has no attribute 'submit' </code></pre>
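The traceback arises because `run_in_executor`'s *first* argument is the executor itself (pass `None` for the default thread pool), not the function — so `subprocess.run` is being treated as an executor and asyncio tries to call `.submit()` on it. Note also that the executor returns `subprocess.run`'s `CompletedProcess` directly; there is no `.communicate()` to await. A hedged sketch (the ffmpeg command is stood in by a portable Python call so it runs anywhere):

```python
import asyncio
import subprocess
import sys

async def editing(cmd):
    loop = asyncio.get_running_loop()
    # first positional arg = executor (None -> default ThreadPoolExecutor),
    # then the callable; a lambda (or functools.partial) carries the kwargs
    proc = await loop.run_in_executor(
        None, lambda: subprocess.run(cmd, capture_output=True)
    )
    return proc.returncode  # CompletedProcess, not a Popen: no .communicate()

# stand-in for the ffmpeg invocation
rc = asyncio.run(editing([sys.executable, "-c", "print('ok')"]))
```

In a fully async app, `asyncio.create_subprocess_exec` (whose process object *does* have an awaitable `.communicate()`) may be a more natural fit than a thread pool.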
<python><multithreading><asynchronous><python-asyncio><fastapi>
2024-08-21 08:22:33
1
323
Zaid
78,895,733
9,860,033
Pythonic way to create a singleton class that requires an async method to setup?
<p>I'm writing a Python package that implements a class which should be a singleton.<br /> The class is responsible for sending async http requests. Thus the class needs to have access to an <code>aiohttp.ClientSession</code>.<br /> The catch is that I wish to support two types of instantiation:</p> <ol> <li>The user can supply the <code>ClientSession</code> as a parameter during instantiation.</li> <li>If the user doesn't supply a session, the class should create one.</li> </ol> <pre><code>class AsyncFoo:
    client_session: aiohttp.ClientSession

    def __init__(self):
        raise NotImplementedError(&quot;Direct instantiation is not allowed. Use `AsyncFoo.create()`.&quot;)

    @classmethod
    async def create(cls, session: None | aiohttp.ClientSession = None):
        instance = cls.__new__(cls)
        if session is None:
            instance.client_session = aiohttp.ClientSession()
        else:
            instance.client_session = session
        return instance

    async def send_request(self):
        self.client_session.post(&quot;url&quot;)
</code></pre> <p>What is the best way to ensure that this class is a Singleton?<br /> Please be aware that this class is meant to be part of a package, so creating an instance inside the package and then referencing it does not seem like a solution to me (at least from this standpoint).</p>
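One sketch (a judgment call, not the only pattern): cache the instance on the class inside `create()`, and guard concurrent first calls with an `asyncio.Lock`. The session handling is simplified so the sketch runs without aiohttp — a bare `object()` stands in for `aiohttp.ClientSession`:

```python
import asyncio

class AsyncFoo:
    _instance = None
    # creating the Lock at class-definition time is fine on Python 3.10+,
    # where locks no longer bind to an event loop at construction
    _lock = asyncio.Lock()

    def __init__(self):
        raise NotImplementedError("Use `await AsyncFoo.create()`.")

    @classmethod
    async def create(cls, session=None):
        async with cls._lock:   # two concurrent first calls -> one instance
            if cls._instance is None:
                instance = cls.__new__(cls)  # skips __init__ deliberately
                # stand-in: real code would default to aiohttp.ClientSession()
                instance.client_session = session or object()
                cls._instance = instance
        return cls._instance

async def main():
    a = await AsyncFoo.create()
    b = await AsyncFoo.create()
    return a, b

a, b = asyncio.run(main())
```

One caveat for a package: a class-level singleton holding a `ClientSession` outlives any one event loop, so users who create and destroy loops may need a way to reset or close it.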
<python><class><design-patterns><singleton><python-packaging>
2024-08-21 07:45:01
0
1,134
waykiki
78,895,508
3,067,485
Cannot make inline formset work with class-based view in Django 5.1
<p>I have the following Django structure. I use the latest Django version <code>5.1</code>. I have also checked in version <code>4.2.15</code> the issue is still present.</p> <p>I aim to get <code>Formset</code> working with Class Based View. Anyway, whatever I tried my formset is never valid. I can see raw data in <code>request.POST</code> but it does not validate, <code>is_valid</code> is always <code>False</code>. What am I missing?</p> <p>Below my MCVE:</p> <h4>Model.py</h4> <pre><code>class Folder(models.Model): reference = models.CharField(max_length=64, null=False, unique=True,) class Revenue(models.Model): folder = models.ForeignKey(Folder, null=False, on_delete=models.RESTRICT,) amount = models.DecimalField(max_digits=10, decimal_places=2, null=False,) </code></pre> <h4>Form.py</h4> <pre><code>class FolderForm(forms.ModelForm): class Meta: model = models.Folder fields = ['reference'] class RevenueForm(forms.ModelForm): class Meta: model = models.Revenue fields = [&quot;amount&quot;] RevenueFormset = inlineformset_factory( models.Folder, models.Revenue, RevenueForm, extra=1, can_delete=True ) </code></pre> <h4>View.py</h4> <pre><code>class FolderTestView(LoginRequiredMixin, UpdateView): template_name = &quot;core/folder_test.html&quot; model = models.Folder form_class = forms.FolderForm def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context[&quot;revenue_formset&quot;] = forms.RevenueFormset(instance=self.get_object()) return context def post(self, request, *args, **kwargs): print(request.POST) # Contains whole form raw data formset = forms.RevenueFormset(request.POST, instance=self.get_object()) print(formset.is_valid()) # Always set as False print(formset.errors) # No errors: [] return super().post(request, *args, **kwargs) def form_valid(self, form): print(form.cleaned_data) return super().form_valid(form) def get_success_url(self): return reverse(&quot;core:folder-test&quot;, kwargs={&quot;pk&quot;: 
self.kwargs[&quot;pk&quot;]}) </code></pre> <h4>url.py</h4> <pre><code>urlpatterns = [ path('folder/&lt;int:pk&gt;/test/', views.FolderTestView.as_view(), name='folder-test'), ] </code></pre> <h4>folder_test.html</h4> <pre><code> &lt;form action=&quot;{% url 'core:folder-test' object.id %}&quot; method=&quot;POST&quot;&gt; {% csrf_token %} {{ form|crispy }} {{ formset.management_form }} {% for revenue_form in revenue_formset %} {{ revenue_form|crispy }} {% endfor %} &lt;input type=&quot;submit&quot; value=&quot;Submit&quot; class=&quot;btn btn-primary&quot;&gt; &lt;/form&gt; </code></pre> <p>The raw data is present, it does not validate and it returns no errors:</p> <pre><code>&lt;QueryDict: {'csrfmiddlewaretoken': ['secret'], 'reference': ['Test'], 'revenue_set-0-amount': ['1245.54'], 'revenue_set-1-amount': ['']}&gt; False [] </code></pre> <p>I also have tried <code>prefix</code> it does not change the issue.</p>
<python><django><django-forms><inline-formset>
2024-08-21 06:49:22
1
11,564
jlandercy
78,895,421
6,907,424
How to prevent Playwright from loading non-textual content?
<p>I am trying to implement a crawler which will be responsible to crawl the given page. Here I <strong>don't</strong> want to crawl any non-textual items, not even want the headless browser to load it, as it is simply wasteful and unnecessarily increases the crawling time. To achieve this, I have added rules which should help it to get rid of those unwanted contents. But it is not working as expected. As from the log, I can clearly see some of the non-textual contents are loaded and taking more time to crawl:</p> <pre><code>... ... ... 'playwright/request_count/resource_type/document': 1, 'playwright/request_count/resource_type/font': 1, 'playwright/request_count/resource_type/image': 20, 'playwright/request_count/resource_type/script': 6, 'playwright/request_count/resource_type/stylesheet': 3, 'playwright/response_count': 30, 'playwright/response_count/method/GET': 30, 'playwright/response_count/resource_type/document': 1, 'playwright/response_count/resource_type/font': 1, 'playwright/response_count/resource_type/image': 20, 'playwright/response_count/resource_type/script': 5, 'playwright/response_count/resource_type/stylesheet': 3, ... ... ... </code></pre> <p>How not to load them? I am using <code>Scrapy</code> along with <code>Playwright</code> to make it able to work on dynamic contents.</p> <p>Following is my code:</p> <pre><code>from scrapy.spiders import Spider, Rule import scrapy from scrapy.linkextractors import LinkExtractor from scrapy.linkextractors import IGNORED_EXTENSIONS import tldextract from scrapy.exceptions import CloseSpider class LinkCrawlerSpiderSelective(Spider): name = 'link_crawler_selective' def __init__(self, *args, **kwargs): self.start_urls = [&quot;https://books.toscrape.com/&quot;] # Enter URLs to crawl self.hard_limit = 100 # Maximum how many items to crawl self.total_page_visited = 0 # allow_domains = self.start_urls: because we only want to take the start_url and/or pages # which are direct children of it. 
Even if other pages are accessible from the start_url # which do not share the same context path, we will not take them. self.allowed_domains = [] for url in self.start_urls: domain = tldextract.extract(url).registered_domain self.allowed_domains.append(domain) self._rules = [Rule(LinkExtractor(allow_domains = self.allowed_domains, deny_extensions = IGNORED_EXTENSIONS))] def start_requests(self): # Define the initial URL(s) to scrape for url in self.start_urls: yield scrapy.Request(url, meta=dict( playwright=True, playwright_include_page=True, errback=self.errback, )) def parse(self, response): self.total_page_visited += 1 if self.total_page_visited &gt; self.hard_limit: raise CloseSpider(f'Hard limit exceeded. Maximum number of pages to crawl is set to be: {self.hard_limit}') # Extract all text nodes except those within script and style tags text = ' '.join(response.xpath('//*[not(self::script) and not(self::style)]/text()').getall()) yield { &quot;link&quot;: response.url, &quot;text&quot;: &quot;\n&quot;.join([&quot; &quot;.join(str(l).split()) for l in text.split(&quot;\n&quot;) if str(l).strip()]) } async def errback(self, failure): page = failure.request.meta[&quot;playwright_page&quot;] await page.close() </code></pre> <p>Following is the <code>settings.py</code>:</p> <pre><code># settings.py import os SPIDER_MODULES = [f&quot;{os.path.split(os.getcwd())[-1]}.spiders&quot;] # Set settings whose default value is deprecated to a future-proof value REQUEST_FINGERPRINTER_IMPLEMENTATION = &quot;2.7&quot; FEED_EXPORT_ENCODING = &quot;utf-8&quot; #### FOR PLAYWRIGHT DOWNLOAD_HANDLERS = { &quot;http&quot;: &quot;scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler&quot;, &quot;https&quot;: &quot;scrapy_playwright.handler.ScrapyPlaywrightDownloadHandler&quot;, } PLAYWRIGHT_DEFAULT_NAVIGATION_TIMEOUT = 60 * 1000 # 10 seconds TWISTED_REACTOR = &quot;twisted.internet.asyncioreactor.AsyncioSelectorReactor&quot; DOWNLOAD_DELAY = 2 # minimum download delay 
AUTOTHROTTLE_ENABLED = True # Scrapy requires this reactor from scrapy.utils.reactor import install_reactor install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor') </code></pre>
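For what it's worth, the `_rules`/`LinkExtractor` approach above only filters which *links get followed*; it does not stop the browser from fetching a page's subresources. scrapy-playwright exposes a `PLAYWRIGHT_ABORT_REQUEST` setting for exactly that: a predicate called for every browser-issued request, which aborts the request when it returns True. A sketch of such a predicate (check the setting name against your installed scrapy-playwright version; the fake request class below exists only to make the predicate testable here):

```python
# settings.py (sketch)
BLOCKED_RESOURCE_TYPES = {"image", "media", "font", "stylesheet"}

def should_abort_request(request):
    """Return True for browser requests that should not be loaded."""
    return request.resource_type in BLOCKED_RESOURCE_TYPES

# in the real settings module you would add:
# PLAYWRIGHT_ABORT_REQUEST = should_abort_request

# tiny stand-in for a Playwright Request object
class _FakeRequest:
    def __init__(self, resource_type):
        self.resource_type = resource_type

blocked = should_abort_request(_FakeRequest("image"))
allowed = should_abort_request(_FakeRequest("document"))
```

With this in place the `playwright/request_count/resource_type/image` and `.../font` counters should drop to zero, since those requests are aborted before any bytes are downloaded.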
<python><web-scraping><scrapy><playwright>
2024-08-21 06:26:42
1
2,916
hafiz031
78,895,386
1,408,347
Why can't I run `from algorithm import foo` in `lambda_function.py` inside a docker container for AWS lambda function?
<p>I'm following <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions" rel="nofollow noreferrer">this tutorial</a> to build a Docker image for an AWS lambda function that runs python 3.11.</p> <p>When I run <code>python lambda_function.py</code> locally, it can successfully run <code>from algorithm import foo as data_processor</code> inside lambda_function.py at line 1.</p> <p>But when I package the program into a Docker image and run it, it generates an error saying <code>Unable to import module 'lambda_function': No module named 'algorithm'</code>.</p> <p>This is how I test the docker image in my local machine.</p> <pre><code># start a docker container docker run --platform linux/amd64 -p 9000:8080 my-image:latest # Open another terminal. Then trigger the lambda function curl &quot;http://localhost:9000/2015-03-31/functions/function/invocations&quot; -d '{}' </code></pre> <p>Here's my folder structure</p> <pre><code>. β”œβ”€β”€ Dockerfile β”œβ”€β”€ algorithm β”‚Β Β  β”œβ”€β”€ __init__.py β”‚Β Β  └── foo.py β”œβ”€β”€ lambda_function.py └── requirements.txt </code></pre> <p>And here's my Dockerfile</p> <pre><code>FROM public.ecr.aws/lambda/python:3.11.2024.08.09.13 # Copy requirements.txt COPY requirements.txt ${LAMBDA_TASK_ROOT} RUN pip install --upgrade pip # Install the specified packages RUN pip install -r requirements.txt # Copy function code COPY lambda_function.py ${LAMBDA_TASK_ROOT} COPY algorithm/ ${LAMBDA_TASK_ROOT} # Set the CMD to your lambda_handler (could also be done as a parameter override outside of the Dockerfile) CMD [ &quot;lambda_function.lambda_handler&quot; ] </code></pre> <p>I've checked that <code>${LAMBDA_TASK_ROOT}</code> is equal to <code>/var/task</code>. 
And that <a href="https://docs.python.org/3/library/sys.html#sys.path" rel="nofollow noreferrer"><code>sys.path</code></a> contains <code>'/var/task'</code>.</p> <p>Why can't I run <code>from algorithm import foo as data_processor</code> in <code>lambda_function.py</code> inside a docker container?</p> <p>I've checked <a href="https://stackoverflow.com/questions/72860558/import-module-error-when-building-a-docker-container-with-python-for-aws-lambda">this similar question</a>, but the answer there doesn't help.</p>
<python><docker><aws-lambda><aws-lambda-containers>
2024-08-21 06:14:29
1
13,773
Brian
78,895,383
188,331
How to use apply() on DataFrame using a custom function?
<p>I have the following Pandas DataFrame:</p> <pre><code>import pandas as pd
from collections import Counter

print(sentences)
</code></pre> <p>the output is (yes, the column name is <code>0</code>):</p> <pre><code>               0
0              A
1              B
2              C
3              D
4            EEE
...          ...
462064467    FGH
462064468    QRS
462064469    EEE
462064470  VWXYZ
462064471    !!!

[462064472 rows x 1 columns]
</code></pre> <p>I have a custom function to check whether the content in the column <code>0</code> has a length &gt; 1 or not (just an example):</p> <pre><code>def is_more_than_one_character(t):
    if len(t) &gt; 1:
        return True
    else:
        return False
</code></pre> <p>And I apply the function like this:</p> <pre><code>counter = Counter(sentences.apply(is_more_than_one_character))
</code></pre> <p>I wish to count the occurrence of each string having a length &gt; 1. Here is the example output of <code>print(counter)</code>:</p> <pre><code>[(EEE, 2), (FGH, 1), (QRS, 1), (!!!, 1)...]
</code></pre> <p>but currently, the output is:</p> <pre><code>[(False, 460686058), (True, 1378414)]
</code></pre> <p>What did I miss? I think I am close. Thanks in advance.</p>
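For what it's worth, `is_more_than_one_character` returns the booleans themselves, so those booleans are what get counted. One sketch of counting only the long strings (using column `0` explicitly, and a small stand-in frame since the real data isn't reproducible here):

```python
import pandas as pd
from collections import Counter

# stand-in for the 462M-row frame above
sentences = pd.DataFrame({0: ["A", "B", "C", "D", "EEE", "FGH", "QRS",
                              "EEE", "VWXYZ", "!!!"]})

col = sentences[0]
long_strings = col[col.str.len() > 1]  # keep the strings, not the booleans
counter = Counter(long_strings)

# pandas can also do the counting directly, without Counter:
counts = long_strings.value_counts()
```

`counter.most_common()` then yields the `[('EEE', 2), ('FGH', 1), ...]` shape asked for; for a frame this large, the vectorized `value_counts()` route will be far faster than `apply` with a Python function.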
<python><pandas>
2024-08-21 06:12:35
2
54,395
Raptor
78,895,244
3,523,406
Can Docker in the terminal display Python library versions like Docker Desktop?
<p>I'm using Docker Desktop, which provides a GUI feature to easily view the versions of installed Python libraries within a container. This feature is quite handy for managing dependencies and ensuring consistency across environments.</p> <p>However, I often work in the terminal and want to achieve the same outcome there. Is there a way to list the versions of Python libraries within a running Docker container using just the terminal?</p> <p><a href="https://i.sstatic.net/6H3TnyaB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6H3TnyaB.png" alt="enter image description here" /></a></p>
<python><docker><pip><docker-desktop>
2024-08-21 05:22:56
1
645
user3523406
78,895,109
1,125,062
Is it possible to run autograd backward one node at a time?
<p>Let's say I have a complex model with many, many layers.</p> <p>When I obtain the output of the model, I calculate the loss.</p> <p>Now when I run loss.backward() it calculates gradients for all layers at once.</p> <p>But is it possible to run backward() one layer at a time?</p> <p>So what I'm trying to do is to obtain gradients for layer 1 first, pass them to the optimizer, and then immediately set the grads to None in order to free the memory. Then move on to calculate gradients for layer 2, and so on until it reaches the final layer, using a loop. Is this possible?</p> <p>Edit: added a toy example to demonstrate the problem. For complexity I also used an architecture that somewhat resembles a U-Net, i.e. the output of layer 1 may be used in layer 2 but also in layer 3.</p> <pre><code>import torch from torch.optim.optimizer import Optimizer, _use_grad_for_differentiable class NeuralNetwork(torch.nn.Module): def __init__(self): super().__init__() self.flatten = torch.nn.Flatten() self.layer1 = torch.nn.Linear(5, 5) self.layer2 = torch.nn.Linear(5, 5) self.layer3 = torch.nn.Linear(5, 5) def forward(self, x): x = self.flatten(x) layer1_out = self.layer1(x) layer2_out = self.layer2(layer1_out) return self.layer3(layer2_out + layer1_out) class OptimS(Optimizer): def __init__(self, params=None, lr=1e-3): super().__init__(params, dict(lr=lr,differentiable=False,)) @_use_grad_for_differentiable def step(self, closure=None): for group in self.param_groups: for param in group[&quot;params&quot;]: param.add_(param.grad, alpha=-group['lr']) model = NeuralNetwork() batch, labels = torch.randn((5,5)), torch.randn((5,5)) criterion = torch.nn.MSELoss() parameters = list(model.parameters()) optimizer = OptimS(parameters) out = model(batch) loss = criterion(out, labels) / 2 loss.backward() optimizer.step() optimizer.zero_grad(set_to_none=True) # you realize the optimizer step is just doing a loop, # so we could also run it per layer, something like: out = model(batch) loss = criterion(out, labels) / 2 loss.backward() def layerwise_optimizer_step(param, lr=1e-3): param.add_(param.grad, alpha=-lr) param.grad = None for param in parameters: layerwise_optimizer_step(param, 1e-3) # so the goal is to create some custom backward function, # that upon calculating the gradient for each layer would # pass the corresponding parameter to our layerwise optimizer </code></pre>
<python><python-3.x><pytorch><torch><autograd>
2024-08-21 04:09:45
2
4,641
Anonymous
78,895,025
9,769,454
fill_value='extrapolate', bounds_error=False in interp1d still raise ValueError
<p>I have set the <em><strong>fill_value='extrapolate</strong>'</em>, <em><strong>bounds_error=False</strong></em> for using interp1d.</p> <pre><code>f_ca = interp1d(df_Ca['Cap_x'], df_Ca['V'], kind='linear', fill_value='extrapolate', bounds_error=False) </code></pre> <p>However when I run the code there is still error raised.</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[30], line 122 120 HC_data_diff = len(df_HC)//100*3 121 df_HC['An_V'] = f_An(df_HC['Q (mAh)']) --&gt; 122 df_HC['Ca_V'] = f_ca(df_HC['Q (mAh)']) 123 df_HC['Fit_V'] = df_HC['Ca_V'] - df_HC['An_V'] 125 df_HC['Fit_dV'] = df_HC['Fit_V'].diff(HC_data_diff) scipy\interpolate\_polyint.py:80, in _Interpolator1D.__call__(self, x) 59 &quot;&quot;&quot; 60 Evaluate the interpolant 61 (...) 77 78 &quot;&quot;&quot; 79 x, x_shape = self._prepare_x(x) ---&gt; 80 y = self._evaluate(x) 81 return self._finish_y(y, x_shape) scipy\interpolate\_interpolate.py:752, in interp1d._evaluate(self, x_new) 750 y_new = self._call(self, x_new) 751 if not self._extrapolate: --&gt; 752 below_bounds, above_bounds = self._check_bounds(x_new) 753 if len(y_new) &gt; 0: ... 783 .format(below_bounds_value, self.x[0])) 784 if self.bounds_error and above_bounds.any(): 785 above_bounds_value = x_new[np.argmax(above_bounds)] ValueError: A value (0.0) in x_new is below the interpolation range's minimum value (16.433608807468445). </code></pre> <p>Does someone have the same issue and let me know why. 
For additional information, I have been using this interp1d within a scipy.optimize.minimize function.</p> <pre><code># Optimization
result = minimize(Fitting_RMSE, initial_guess, args=input_df_list, method=method)
</code></pre> <p>The ValueError seems to be raised only when that interp1d is used within this <em><strong>minimize</strong></em> function (inside the Fitting_RMSE function); the error is gone if I run it outside with the same set of values.</p>
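For what it's worth, a freshly constructed `interp1d` with `fill_value='extrapolate'` does evaluate below the data range without raising. The traceback shows the bounds check runs inside `if not self._extrapolate`, which suggests the object that actually raised was built without extrapolation (for example a stale object from an earlier notebook cell, or a different interpolator). A quick check, with made-up sample data:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.array([16.4, 20.0, 30.0])   # made-up capacity values
y = np.array([3.2, 3.5, 4.1])      # made-up voltage values

f = interp1d(x, y, kind='linear', fill_value='extrapolate', bounds_error=False)
v = float(f(0.0))   # 0.0 is below x.min(); extrapolated instead of raising
print(v)
```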
<python><scipy><interpolation>
2024-08-21 03:23:20
0
553
Roy Dai
78,894,984
26,438,560
Why can hexadecimal python integers access properties but not regular ints?
<p>Decimal (i.e. non-prefixed) integer literals in Python seem to have fewer features than prefixed ones.</p> <p>If I do <code>1.real</code> I get a <code>SyntaxError: invalid decimal literal</code>. However, if I do <code>0x1.real</code>, then I get no error and <code>1</code> is the result. (The same holds for <code>0b1.real</code> and <code>0o1.real</code>, though in Python 2, <code>01.real</code> gives a syntax error as well.)</p>
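The difference is purely lexical: the tokenizer greedily reads `1.` as the start of a float literal, after which `real` is an invalid suffix; hex, binary, and octal literals cannot contain a decimal point, so `0x1.real` splits cleanly into `0x1`, `.`, `real`. Anything that separates the decimal literal from the dot works:

```python
# `1.real` fails because the tokenizer reads `1.` as a float literal.
# Any of these separate the literal from the attribute access:
print((1).real)   # 1 -- parenthesize the literal
print(1 .real)    # 1 -- a space ends the number token
print(1.0.real)   # 1.0 -- a *complete* float literal also works
print(0x1.real)   # 1 -- hex literals can't contain '.', so this tokenizes fine
```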
<python><syntax><integer>
2024-08-21 02:57:44
1
413
Sir Nate
78,894,954
6,312,979
Polars Dataframe via Django Query
<p>I am exploring a change from pandas to polars. I like what I see.</p> <p>Currently, it is simple to get the data into Pandas.</p> <pre class="lang-py prettyprint-override"><code>cf = Cashflow.objects.filter(acct=acct).values() df = pd.DataFrame(cf) </code></pre> <p>So I figured it would be a simple change - but this will not work for me.</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(cf) </code></pre> <p>What is the difference between using a Django query and putting the data inside Polars?</p> <p>Thank you.</p>
<python><django><pandas><python-polars>
2024-08-21 02:42:41
1
2,181
diogenes
78,894,926
5,044,463
LP Solver; Setting up model constraints for large number of chained dependency variables is slow
<p>Creating the model is too slow. Solving it is not the problem here.</p> <p>I have looked at a few similar questions before, but they don't have the problem from the size that I am giving it.</p> <p>The constraint that is very slow to add to the model looks something like this:</p> <pre><code>x out of {0, 1} (binary) y_0 = 0 y_i = y_(i-1) + x_i; N &gt; i &gt; 0 </code></pre> <p>in other words a variable vector <code>x=[1, 0, 1, 1]</code> would give me</p> <pre><code>y_1 = 1 y_2 = 1 y_3 = 2 </code></pre> <p>It's actually more complex, but I simplified it here to this minimum reproducible example. The meaning is: I don't want the value y to go above a certain threshold for all i.</p> <p>I used pulp. Here is an example python script:</p> <pre class="lang-py prettyprint-override"><code>from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, PULP_CBC_CMD, lpSum N = 5000 max_y_val = 5 model = LpProblem(sense=LpMaximize) print(&quot;defining X&quot;) x = [LpVariable(name=f&quot;x_{i}&quot;, cat=LpBinary) for i in range(N)] print(&quot;defining Y&quot;) y = [x[0]] for i in range(1, N): print(i) y.append(y[i-1] + x[i]) print(&quot;add y to model&quot;) for i in range(N): print(i) model += y[i] &lt;= max_y_val print(&quot;objective&quot;) model += lpSum(x) # ... print(&quot;start optimizer&quot;) status = model.solve(PULP_CBC_CMD(msg=False, timeLimit=10)) print(&quot;score:&quot;, model.objective.value()) print(&quot;x:&quot;, *[x[i].value() for i in range(N)]) print() </code></pre> <p>With <code>N=1000</code> it still goes quite fast. At <code>N=5000</code> it becomes very slow when adding y to the model.</p> <p>I rewrote the same code in rust (using good_lp library) to see if the issue is with python, but the problem persists. So most likely the issue is the way I am writing my problem. Thus it is not a language specific question.</p> <p>My suspicion is, that it might be using a matrix <code>M * x = y</code>. 
The width of the matrix would grow with <code>N</code>, becoming <code>N + 1</code>. Probably something that looks like a lower-triangular matrix that only has elements towards the bottom-left corner.</p> <pre><code>matrix          x   y
1 0 0 0 0       0   0
1 1 0 0 0       1   1
1 1 1 0 0   *   0 = 1
1 1 1 1 0       1   2
1 1 1 1 1       1   3
</code></pre> <p>Do you know how I can do this with LP solvers in general? I feel like I'm missing some knowledge here.</p>
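The quadratic blow-up likely comes from `y[i]` being an ever-growing PuLP *expression*: expression `i` holds `i` terms, so adding all `N` constraints copies on the order of `N^2` terms. Making `y` real decision variables tied together by a 3-term recurrence keeps the model size linear in `N`; a sketch (bounds on `y` replace the explicit `y_i <= max_y_val` constraints):

```python
from pulp import LpProblem, LpVariable, LpMaximize, LpBinary, lpSum

N = 5000
max_y_val = 5

model = LpProblem(sense=LpMaximize)
x = [LpVariable(f"x_{i}", cat=LpBinary) for i in range(N)]

# y as decision variables with a 3-term recurrence; the upBound enforces
# y_i <= max_y_val directly, so no extra constraints are needed for it
y = [LpVariable(f"y_{i}", lowBound=0, upBound=max_y_val) for i in range(N)]
model += y[0] == x[0]
for i in range(1, N):
    model += y[i] == y[i - 1] + x[i]

model += lpSum(x)  # objective
print(len(model.constraints))  # N constraints, each with only a few terms
```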
<python><linear-algebra><linear-programming><solver><pulp>
2024-08-21 02:28:03
2
737
GRASBOCK
78,894,917
8,929,698
Time per process increases with number of processes when reading JSON in Python. Why?
<p>I am trying to parallelize the processing of some JSON files using Python's multiprocessing module. The amount of time required to process a fixed number of files within a subprocess seems to depend on the overall number of files. My expectation was that this should be constant.</p> <p>In the example below, I compare one process reading 100 files vs. ten processes reading 1000 files (100 each) vs. 100 processes reading 10000 files (100 each). In all three cases, each process reads exactly 100 files, so presumably each individual subprocess should take the same amount of time. (The total time for the parent process to run depends on the total number of processes and the total number of cores, but that's not the time I'm referring to.) In fact, the per-process time is increasing.</p> <p><strong>Simplified Example</strong></p> <p>To avoid complications arising from variability in file size and other complications due to multiple processes reading the same file, I have simply made 10,000 copies of a single 144KB JSON document:</p> <p><code>for i in `seq 1 10000`; do cp sample.json data/${i}.json; done</code></p> <p>The Python script looks as follows:</p> <pre class="lang-python prettyprint-override"><code>import glob import json import sys from multiprocessing import Pool import time # process all files in chunk, printing # chunk ID, # files in chunk, time to process all files def process_chunk(chunk): chunk_id, files = chunk t0 = time.time() for f in files: process_single_file(f) t1 = time.time() print(chunk_id, len(files), t1 - t0) # this does nothing but reads the file into memory and parses the json # but that is enough to illustrate the point def process_single_file(filename): with open(filename, 'r') as f: s = f.read() data = json.loads(s) if __name__ == '__main__': files = glob.glob(f'data/*.json') files_per_chunk = 100 num_chunks = int(sys.argv[1]) chunks = [files[(i * files_per_chunk):((i + 1) * files_per_chunk)] for i in range(num_chunks)] with Pool(processes =
num_chunks) as pool: pool.map(process_chunk, enumerate(chunks)) </code></pre> <p>For example, running <code>./my-script.py 1</code> should process one chunk with 100 files in one process. Running <code>./my-script 10</code> should process ten chunks with 100 files each in ten processes. Regardless of the number of cores, I believe the amount of time to process each chunk should be the same in both cases.</p> <p><strong>Results</strong></p> <pre class="lang-none prettyprint-override"><code>&gt; ./my-script.py 1 0 100 0.07086801528930664 &gt; ./my-script.py 10 0 100 0.16899609565734863 1 100 0.19768595695495605 4 100 0.17228388786315918 2 100 0.1956641674041748 3 100 0.17895913124084473 5 100 0.16188788414001465 6 100 0.16206908226013184 7 100 0.15983009338378906 8 100 0.15669989585876465 9 100 0.15811610221862793 &gt; ./my-script.py 100 3 100 3.8171892166137695 4 100 3.8234598636627197 1 100 3.8310959339141846 9 100 3.8683879375457764 7 100 3.871474027633667 6 100 3.878866195678711 ... </code></pre> <p>I have tried a number of variations of this on both a Mac with 6 cores and an EC2 instance running Linux with 24 dual cores. Can someone help me understand the scaling of process time with number of processes?</p>
<python><multiprocess>
2024-08-21 02:24:07
0
1,044
broken.eggshell
78,894,891
2,662,302
polars, combining sales and purchases, FIFO method
<p>I have two dataframes:</p> <p>One with buys</p> <pre class="lang-py prettyprint-override"><code>df_buy = pl.DataFrame( { &quot;BuyId&quot;: [1, 2], &quot;Item&quot;: [&quot;A&quot;, &quot;A&quot;], &quot;BuyDate&quot;: [date.fromisoformat(&quot;2023-01-01&quot;), date.fromisoformat(&quot;2024-03-07&quot;)], &quot;Quantity&quot;: [40, 50], } ) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>BuyId</th> <th>Item</th> <th>BuyDate</th> <th>Quantity</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A</td> <td>2023-01-01</td> <td>40</td> </tr> <tr> <td>2</td> <td>A</td> <td>2024-03-07</td> <td>50</td> </tr> </tbody> </table></div> <p>And other with sells:</p> <pre class="lang-py prettyprint-override"><code>df_sell = pl.DataFrame( { &quot;SellId&quot;: [3, 4], &quot;Item&quot;: [&quot;A&quot;, &quot;A&quot;], &quot;SellDate&quot;: [date.fromisoformat(&quot;2024-04-01&quot;), date.fromisoformat(&quot;2024-05-01&quot;)], &quot;Quantity&quot;: [10, 45], } ) </code></pre> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>SellId</th> <th>Item</th> <th>SellDate</th> <th>Quantity</th> </tr> </thead> <tbody> <tr> <td>3</td> <td>A</td> <td>2024-04-01</td> <td>10</td> </tr> <tr> <td>4</td> <td>A</td> <td>2024-05-01</td> <td>45</td> </tr> </tbody> </table></div> <p>I want to determine which sales came from which purchases using the FIFO method.</p> <p>The result should be something like this.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: center;">BuyId</th> <th style="text-align: center;">Item</th> <th style="text-align: center;">BuyDate</th> <th style="text-align: center;">RemainingQuantity</th> <th style="text-align: center;">SellId</th> <th style="text-align: center;">SellDate</th> <th style="text-align: center;">SellQuantity</th> <th style="text-align: center;">QuantityAfterSell</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">1</td> <td style="text-align: 
center;">A</td> <td style="text-align: center;">2023-01-01</td> <td style="text-align: center;">40</td> <td style="text-align: center;">3</td> <td style="text-align: center;">2024-04-01</td> <td style="text-align: center;">10</td> <td style="text-align: center;">30</td> </tr> <tr> <td style="text-align: center;">1</td> <td style="text-align: center;">A</td> <td style="text-align: center;">2023-01-01</td> <td style="text-align: center;">30</td> <td style="text-align: center;">4</td> <td style="text-align: center;">2024-05-01</td> <td style="text-align: center;">30</td> <td style="text-align: center;">0</td> </tr> <tr> <td style="text-align: center;">2</td> <td style="text-align: center;">A</td> <td style="text-align: center;">2024-03-07</td> <td style="text-align: center;">50</td> <td style="text-align: center;">4</td> <td style="text-align: center;">2024-05-01</td> <td style="text-align: center;">15</td> <td style="text-align: center;">35</td> </tr> </tbody> </table></div> <p>I know that I can do it using a for loop but I wanted to know if there is a more vectorized way to do it.</p> <p>Edit:</p> <p>Added new example for testing:</p> <pre class="lang-py prettyprint-override"><code>df_buy = pl.DataFrame( { &quot;BuyId&quot;: [5, 1, 2], &quot;Item&quot;: [&quot;B&quot;, &quot;A&quot;, &quot;A&quot;], &quot;BuyDate&quot;: [date.fromisoformat(&quot;2023-01-01&quot;), date.fromisoformat(&quot;2023-01-01&quot;), date.fromisoformat(&quot;2024-03-07&quot;)], &quot;Quantity&quot;: [10, 40, 50], } ) df_sell = pl.DataFrame( { &quot;SellId&quot;: [6, 3, 4], &quot;Item&quot;: [&quot;B&quot;, &quot;A&quot;, &quot;A&quot;], &quot;SellDate&quot;: [ date.fromisoformat(&quot;2024-04-01&quot;), date.fromisoformat(&quot;2024-04-01&quot;), date.fromisoformat(&quot;2024-05-01&quot;), ], &quot;Quantity&quot;: [5, 10, 45], } ) </code></pre>
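For reference, the plain-Python FIFO allocation (the loop the question hopes to vectorize) can be written with a deque of open purchase lots; in practice it would be run once per Item group. This reproduces the expected table for the first example:

```python
from collections import deque
from datetime import date

buys = [  # (BuyId, Item, BuyDate, Quantity), sorted by date within each item
    (1, "A", date(2023, 1, 1), 40),
    (2, "A", date(2024, 3, 7), 50),
]
sells = [  # (SellId, Item, SellDate, Quantity)
    (3, "A", date(2024, 4, 1), 10),
    (4, "A", date(2024, 5, 1), 45),
]

lots = deque(buys)  # oldest purchase first (FIFO)
rows = []
for sell_id, item, sell_date, qty in sells:
    while qty > 0:
        buy_id, b_item, b_date, remaining = lots[0]
        take = min(qty, remaining)
        rows.append((buy_id, item, b_date, remaining,
                     sell_id, sell_date, take, remaining - take))
        qty -= take
        if remaining - take == 0:
            lots.popleft()          # lot fully consumed
        else:
            lots[0] = (buy_id, b_item, b_date, remaining - take)

for r in rows:
    print(r)
```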
<python><dataframe><python-polars><asof-join>
2024-08-21 02:08:25
1
505
rlartiga
78,894,887
243,755
How to change the working directory in pycharm globally?
<p>When I run a pytest script, the working directory is always the folder the script is in. But what I want is to make the project root folder the working directory. Although I can change it in the Run configuration, that only applies to the single pytest, which means I need to make this change for each pytest script. Is there any way to make this a global configuration? Thanks</p>
<python><pycharm>
2024-08-21 02:04:48
1
29,674
zjffdu
78,894,807
6,137,760
How to time an aiohttp request
<p>I have an async script that looks partially like:</p> <pre class="lang-py prettyprint-override"><code>def main(endpoints): loop = asyncio.get_event_loop() loop.run_until_complete(_main(endpoints, loop)) loop.close() async def _main(endpoints, loop): connector = aiohttp.TCPConnector(limit=3) async with aiohttp.ClientSession(connector=connector, loop=loop) as session: await asyncio.gather(*[process(session, endpoint, uid) for endpoint, uid in endpoints]) async def process(session, endpoint, uid): timer = time.perf_counter() print(f'{uid}: Start request at {timer}') async with session.get(endpoint) as r: do_some_quick_stuff(r) print(f'{uid}: Finished in {time.perf_counter() - timer} seconds') </code></pre> <p>As you can see, I limit simultaneous connections to 3. If I run the script with an <code>endpoints</code> list of length 5, I get:</p> <pre><code>1: Start request at 23.726745857 2: Start request at 23.770105453 3: Start request at 23.770374826 4: Start request at 23.770555491 5: Start request at 23.770689525 1: Finished in 6.4488191379999975 seconds 2: Finished in 6.409452765000001 seconds 3: Finished in 6.410782424999997 seconds 5: Finished in 12.524688972999996 seconds 4: Finished in 12.526368037999998 seconds </code></pre> <p>The <code>endpoint</code>s in this example are to &quot;https://httpstat.us/200?sleep=6000&quot;, so each request should take exactly 6 seconds to return a response. This output makes sense, since the timer is started (and messages printed) for all 5 requests at once, then the first 3 requests are made, they finish in 6 seconds, then the final 2 requests are made, finishing in 6 seconds plus the initial 6 second delay.
However, I want all the timers to start just before their respective request, so that the final value should be around 6 for all endpoints.</p> <p>If I move the <code>timer</code> and <code>print</code> statements to within the <code>session.get</code> block, I get this output:</p> <pre><code>1: Start request at 33.393075797 1: Finished in 0.0001522129999997901 seconds 2: Start request at 33.39833408 2: Finished in 3.703800000209867e-05 seconds 3: Start request at 33.399544624 3: Finished in 3.701299999647745e-05 seconds 4: Start request at 39.511201437 4: Finished in 8.197600000414695e-05 seconds 5: Start request at 39.513046895 5: Finished in 4.47689999987233e-05 seconds </code></pre> <p>As you can see, the timers for endpoints 4 and 5 correctly start about 6 seconds after those of 1-3, but the completion time for each is well under a second, meaning the timers didn't start until <em>after</em> the request completed.</p> <p>How can I ensure that the timers begin only immediately before their respective request is made?</p>
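Both measurements follow from when the connector slot is acquired: all five `process()` coroutines start immediately (so a timer before the `async with` includes queueing time), while the body of `async with session.get(...)` only runs once the response headers have arrived (so a timer inside it misses the request entirely). Gating with an explicit `asyncio.Semaphore` and starting the timer right after acquiring a slot gives per-request timing; a stdlib sketch with `asyncio.sleep` standing in for the request:

```python
import asyncio
import time

async def fake_request():
    await asyncio.sleep(0.2)   # stands in for the ~6 s endpoint

async def process(sem, uid, durations):
    async with sem:                      # take a slot *before* starting the timer
        t0 = time.perf_counter()
        await fake_request()             # in aiohttp: async with session.get(...) as r: ...
        durations[uid] = time.perf_counter() - t0

async def main():
    sem = asyncio.Semaphore(3)           # mirrors TCPConnector(limit=3)
    durations = {}
    await asyncio.gather(*(process(sem, uid, durations) for uid in range(1, 6)))
    return durations

durations = asyncio.run(main())
print(durations)  # every value is close to 0.2, regardless of queueing
```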
<python><python-asyncio><aiohttp>
2024-08-21 01:09:25
1
1,673
Mike S
78,894,794
4,423,300
regex to match starting numbering or alphabet bullets like (a)
<p>I am trying to find whether a string (sentence) starts with numbering or alphabet bullets followed by a dot (.) or space. I have a regex like:</p> <pre><code>r'^(\(\d|\[a-z]\))\s +' </code></pre> <p>and</p> <pre><code>r&quot;^(?:\(\d+\)|\\[a-z]\.)\s*&quot; </code></pre> <p>I tried it on example strings:</p> <pre class="lang-none prettyprint-override"><code>(a). this is bullet
Not a bullet, (b) its bullet again
I am so relaxed that its not bullet.
(1) Bullet again.
</code></pre> <p>But when I try</p> <pre><code>matches = re.findall(pattern, text, re.M) </code></pre> <p>I get an empty list. How can I fix it?</p>
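Two things break the posted patterns: in the first, `\[a-z]` escapes the bracket, so it matches a literal `[` rather than a character class (and the closing `\)` sits in the wrong alternative); in the second, `\\[a-z]` matches a literal backslash followed by a letter. A pattern that matches line-initial `(1)`/`(a)` style bullets with an optional trailing dot (a sketch; adjust for other bullet shapes such as `a.` or `1)`):

```python
import re

text = """(a). this is bullet
Not a bullet, (b) its bullet again
I am so relaxed that its not bullet.
(1) Bullet again."""

pattern = r"^\((?:\d+|[a-z])\)\.?\s+"   # (1) or (a), optionally followed by '.'
matches = re.findall(pattern, text, re.M)
print(matches)  # ['(a). ', '(1) ']
```

Note that with capturing parentheses, `re.findall` would return only the group contents; the non-capturing `(?:...)` keeps the full match.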
<python><regex>
2024-08-21 01:00:15
2
637
SheCodes
78,894,720
998,619
Tile according to a pattern in pytorch (or numpy)?
<p>I have a 2-d tensor pattern I'd like to repeat/tile with a particular sparsity. I suspect there is a function (or two-line approach using gather, fold/unfold, or similar) to do this in Pytorch.</p> <p>These two approaches each get the result I want but are inefficient and/or confusing:</p> <pre><code># inputs mask = torch.rand(5, 5) &gt; .7 pattern = torch.rand(4, 4) # approach a (brute force): result_a = torch.zeros(20, 20) for i in range(5): for j in range(5): if mask[i][j]: result_a[i*4: (i+1)*4, j*4: (j+1)*4] = pattern # approach b (interpolate the binary mask) tiled_pattern = torch.tile(pattern, (5, 5)) bigger_mask = torch.nn.functional.interpolate(mask.view(1, 1, *mask.shape).to(torch.float16), scale_factor=(4, 4), mode='area').squeeze((0, 1)) result_b = torch.mul(tiled_pattern, bigger_mask) </code></pre> <p>In particular, I don't like approach (b) because the <code>interpolate</code> function needs me to cast the input, add and remove two dimensions, and I don't trust the 'area' mode to always work--it's not even really defined in the Pytorch documentation.</p> <p>Is there a cleaner way to do this? If not in Pytorch, then in NumPy (or another common Python package)?</p>
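This is exactly a Kronecker product: `np.kron(mask, pattern)` multiplies each mask cell into a pattern-sized block, zeroing the blocks where the mask is false (`torch.kron` does the same for tensors). A one-liner checked against the brute-force version:

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((5, 5)) > 0.7
pattern = rng.random((4, 4))

# Kronecker product: every mask cell becomes a mask[i, j] * pattern block
result = np.kron(mask, pattern)

# brute-force reference, same as approach (a)
expected = np.zeros((20, 20))
for i in range(5):
    for j in range(5):
        if mask[i, j]:
            expected[i*4:(i+1)*4, j*4:(j+1)*4] = pattern

print(np.allclose(result, expected))  # True
```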
<python><numpy><pytorch>
2024-08-20 23:54:10
2
341
RussH
78,894,598
268,581
Downloading 1D OHLC data using thetadata
<h1>Program</h1> <p>Here's a simple Python program which uses the thetadata <a href="https://http-docs.thetadata.us/docs/theta-data-rest-api-v2/vt7qa2ms85ulr-ohlc" rel="nofollow noreferrer">/hist/stock/ohlc</a> endpoint to retrieve 1D candles for AAPL over a given date range.</p> <pre><code>import io import time import requests import pandas as pd headers = { &quot;Accept&quot;: &quot;application/json&quot; } interval = 6 * 60 * 60 * 1000 # 6 hours url = &quot;http://127.0.0.1:25510/v2/hist/stock/ohlc&quot; params = { &quot;start_date&quot;: &quot;20240102&quot;, &quot;end_date&quot;: &quot;20240130&quot;, &quot;ivl&quot;: interval, &quot;root&quot;: &quot;AAPL&quot;, &quot;use_csv&quot;: &quot;true&quot; } start_time = time.time() response = requests.get(url, headers=headers, params=params) end_time = time.time() elapsed_time = end_time - start_time print(f&quot;Request took {elapsed_time:.2f} seconds&quot;) if response.status_code != 200: print(&quot;Error: &quot;, response.status_code) else: df = pd.read_csv(io.BytesIO(response.content)) df </code></pre> <h1>Output</h1> <p>The resulting dataframe:</p> <pre><code>&gt;&gt;&gt; df ms_of_day open high low close volume count date 0 34200000 190.940 191.8000 187.4700 188.070 41871539 605632 20240130 1 34200000 187.040 187.0950 184.7900 185.520 35049616 573009 20240131 2 34200000 183.985 186.8000 183.8200 186.660 36115114 588970 20240201 3 34200000 179.860 187.3300 179.2500 186.730 76910919 956323 20240202 4 34200000 188.150 189.2500 185.8400 188.350 49645818 710826 20240205 .. ... ... ... ... ... ... ... ... 
136 34200000 220.570 223.0300 219.7000 221.160 26834693 497239 20240814 137 34200000 224.600 225.3500 222.7600 225.060 27685884 506000 20240815 138 34200000 223.920 226.8271 223.6501 225.960 29280562 497581 20240816 139 34200000 225.720 225.7400 223.0400 224.930 25884935 532370 20240819 140 34200000 225.770 227.1700 225.4500 226.485 19963834 446631 20240820 </code></pre> <h1>Pulling all data</h1> <p>I'd like to retrieve the entire history of OHLC data for a given stock. Here I've changed the date range:</p> <pre><code>params = { &quot;start_date&quot;: &quot;19000101&quot;, &quot;end_date&quot;: &quot;20250101&quot;, &quot;ivl&quot;: interval, &quot;root&quot;: &quot;AAPL&quot;, &quot;use_csv&quot;: &quot;true&quot; } </code></pre> <p>This does pull all the data from 2016 until present:</p> <pre><code>&gt;&gt;&gt; df ms_of_day open high low close volume count date 0 34200000 102.61 105.3680 102.0000 104.0501 54123788 309061 20160104 1 34200000 105.75 105.8500 102.4100 102.8044 45609925 278727 20160105 2 34200000 100.56 102.3700 99.8700 100.2460 58852334 360271 20160106 3 34200000 98.68 100.1300 96.6000 96.6000 63842150 392430 20160107 4 34200000 98.55 99.1100 96.7600 97.4500 57871838 362522 20160108 ... ... ... ... ... ... ... ... ... 
2169 34200000 220.57 223.0300 219.7000 221.1600 26834693 497239 20240814 2170 34200000 224.60 225.3500 222.7600 225.0600 27685884 506000 20240815 2171 34200000 223.92 226.8271 223.6501 225.9600 29280562 497581 20240816 2172 34200000 225.72 225.7400 223.0400 224.9300 25884935 532370 20240819 2173 34200000 225.77 227.1700 225.4500 226.4850 19963834 446631 20240820 </code></pre> <p>However, it takes over 5 minutes to run:</p> <pre><code>Request took 351.61 seconds </code></pre> <h1>yfinance</h1> <p>If I use yfinance to retrieve all <code>AAPL</code> data, it takes under 3 seconds:</p> <pre><code>import time import yfinance_download start_time = time.time() result = yfinance_download.update_records(symbol='AAPL', interval='1d') end_time = time.time() elapsed_time = end_time - start_time print(f&quot;Request took {elapsed_time:.2f} seconds&quot;) result </code></pre> <p>Output:</p> <pre><code>Request took 2.30 seconds &gt;&gt;&gt; result Open High Low Close Adj Close Volume Date 1980-12-12 0.128348 0.128906 0.128348 0.128348 0.098943 469033600 1980-12-15 0.122210 0.122210 0.121652 0.121652 0.093781 175884800 1980-12-16 0.113281 0.113281 0.112723 0.112723 0.086898 105728000 1980-12-17 0.115513 0.116071 0.115513 0.115513 0.089049 86441600 1980-12-18 0.118862 0.119420 0.118862 0.118862 0.091630 73449600 ... ... ... ... ... ... ... 2024-08-14 220.570007 223.029999 219.699997 221.720001 221.720001 41960600 2024-08-15 224.600006 225.350006 222.759995 224.720001 224.720001 46414000 2024-08-16 223.919998 226.830002 223.649994 226.050003 226.050003 44340200 2024-08-19 225.720001 225.990005 223.039993 225.889999 225.889999 40687800 2024-08-20 225.770004 227.169998 225.449997 226.509995 226.509995 29914893 [11013 rows x 6 columns] </code></pre> <h1>Question</h1> <p>Is there another approach for retrieving the data from thetadata that takes less time to complete?</p>
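One common mitigation (assuming the local Theta Terminal can serve several requests at once, which is not verified here) is to split the date range into smaller windows, fetch them concurrently with a `ThreadPoolExecutor`, and `pd.concat` the pieces. A stdlib helper for the splitting part:

```python
from datetime import date, timedelta

def year_chunks(start, end):
    """Split [start, end] into per-calendar-year (start, end) date pairs."""
    chunks = []
    cur = start
    while cur <= end:
        year_end = min(date(cur.year, 12, 31), end)
        chunks.append((cur, year_end))
        cur = year_end + timedelta(days=1)
    return chunks

chunks = year_chunks(date(2016, 1, 4), date(2018, 3, 1))
print(chunks)
```

Each pair can then be formatted as `YYYYMMDD` for `start_date`/`end_date` and submitted as its own `/v2/hist/stock/ohlc` request.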
<python>
2024-08-20 22:37:03
2
9,709
dharmatech
78,894,323
4,934,344
Custom Threading Timer Kill Function Python
<p>I have a queue with threading setup. I need it to kill the process if it runs longer than 2900s which works fine. I'm wanting to write out information if it has to kill the process because it ran to long. Is there a way to write a custom function for this line:</p> <pre><code>timer = Timer(2900, recover.kill) </code></pre> <p>to:</p> <pre><code>timer = Timer(2900, custom_function(recover)) </code></pre> <p>Where I can run custom_function to do stuff before calling recover.kill? I tried doing this but it doesn't work.</p> <pre><code>while not q.empty(): try: cmd = &quot;itf -obj &quot; + q.get() recover = subprocess.Popen(shlex.split(cmd), env=my_env, shell=False) timer = Timer(2900, recover.kill) try: timer.start() my_pid, err = recover.communicate() recover.wait() q.task_done() finally: print(&quot;Completed before time&quot;) timer.cancel() except Exception as e: q.task_done() continue </code></pre> <p>Thanks!</p>
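The reason `Timer(2900, custom_function(recover))` misbehaves is that `custom_function(recover)` is *called immediately* and its return value is handed to the Timer. Pass the function uncalled and supply its arguments via `args` instead. A short sketch with a 0.5 s timeout and a child that just sleeps (`kill_and_log` is a hypothetical stand-in for your logging plus `recover.kill`):

```python
import subprocess
import sys
from threading import Timer

def kill_and_log(proc, killed):
    # do the "write out information" part first, then kill
    killed.append(proc.pid)
    proc.kill()

proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(10)"])
killed = []

# note: no parentheses on kill_and_log; Timer calls it with `args` if it fires
timer = Timer(0.5, kill_and_log, args=(proc, killed))
timer.start()
proc.wait()
timer.cancel()
print(killed)  # contains proc.pid, because the child outlived the timer
```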
<python><multithreading><timer><kill>
2024-08-20 20:41:17
1
611
Rankinstudio
78,894,207
275,669
Preventing exceptions in Python due to print/string format errors?
<p>Is there a way in Python to avoid having <code>print()</code>/string formatting errors throw an exception, without wrapping every <code>print()</code> in a <code>try-except</code> block? Specifically, to avoid premature termination of long-running jobs due to trivial string formatting mistakes?</p> <p>I was developing a very long-running program in Python, and making heavy use of <code>print()</code> statements to sanity check program flow. Each step in the program was <em>very</em> long running. Too many times I had silly typos or similar mistakes in my <code>print()</code> statements. So the program would halt when an exception was thrown.</p> <p>Here's a grossly simplified example:</p> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import time def pre_compute(): print('running long-running complex function...') time.sleep(3) # pretend this is a long-running, complex function return 5 def finalize(precomp_val): return 1 + precomp_val x = pre_compute() #print(f'pre_computed value={x}') # this is what I meant, but... print(f'pre_computed value={val}') # ...I accidentally did this final = finalize(x) print(f'final value={final}') </code></pre> <p>The above example will throw an exception:</p> <pre><code>$ ./print_exception.py running long-running complex function... Traceback (most recent call last): File &quot;print_exception.py&quot;, line 16, in &lt;module&gt; print(f'pre_computed value={val}') # ...I accidentally did this NameError: name 'val' is not defined. Did you mean: 'eval'? </code></pre> <p>In this case, I lost all the interesting work due to a silly-but-easy-to-make mistake that, ironically, was meant to help me validate/debug the interesting code. 
(And it doesn't necessarily have to be debug prints, maybe I'm using the logging module because I want some intermediate values logged: the same problem can still present itself.)</p> <p>This is trivially fixed once discovered, but imagine encountering this time and time again during the development process - it really hurts productivity.</p> <p>How do most people get around this? Some thoughts I have, not sure if there are others:</p> <ul> <li>Enclose every print/logger call in a try-except block</li> <li>Create some kind of wrapper <code>my_print()</code>-type function that catches the exception? This approach appears to require something fancy, as I tried it in the example above, but the exception is actually in the f-string (before <code>print()</code> is even involved)</li> <li>Use some kind of pre-run checker tool</li> </ul>
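Because an f-string is evaluated before `print()` is ever called, a wrapper has to receive the formatting *deferred*, for example as a lambda, so that its try/except can catch the `NameError`. A sketch (`safe_print` is a made-up name):

```python
def safe_print(make_message):
    """Evaluate the message lazily so a formatting mistake can't kill the run."""
    try:
        print(make_message())
        return True
    except Exception as e:
        print(f"[debug print failed: {e!r}]")
        return False

x = 5
ok1 = safe_print(lambda: f"pre_computed value={x}")    # prints normally
ok2 = safe_print(lambda: f"pre_computed value={val}")  # typo survives as a log line
```

The logging module's lazy `logger.info("x=%s", x)` style similarly defers formatting until emit time (where formatting errors are handled by the handler rather than raised), but it cannot help when the mistake, such as an undefined name, is in the argument expression itself.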
<python><exception><printing>
2024-08-20 19:56:43
0
992
Matt
78,894,195
2,774,885
python vscode venv - launches in "wrong" directory only on the initial launch
<p>I've got a python project set up with a standard virtual environment (.venv).</p> <p>When I launch the project in VSCode's debugger for the FIRST time, it detects that the venv has not been activated, and does the source call to the activate script.</p> <p>However, somewhere it does NOT call the <code>cwd/cd</code> that is specified in the <code>launch.json</code> config file within VSCode. This makes the initial launch of the debugger fail, because it's in the &quot;wrong&quot; directory when we call the actual script with whatever arguments I'm trying to run for the debug.</p> <pre><code>machine:~/ws/gitz &gt; source /ws/lwobker-rtp/gitz/powerInventory/.venv/bin/activate (.venv) machine:~/ws/gitz &gt; /usr/bin/env /ws/lwobker-rtp/gitz/powerInventory/.venv/bin/python /ws/lwobker-rtp/.vscode-server/extensions ## note, no &quot;cd whatever&quot; call in here??? </code></pre> <p>On all SUBSEQUENT calls to the debugger, I can see that the debug launcher does call the <code>cd &lt;whatever&gt;</code> and the terminal changes into the 'correct' directory and everything runs fine.</p> <pre><code>(.venv) machine:~/ws/gitz &gt; cd /users/lwobker/ws/gitz/powerInventory ; /usr/bin/env /ws/lwobker-rtp/gitz ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ </code></pre> <p>It's not like this is the end of the world, because I just have to basically remember to launch it the first time, wait for it to fail, then launch again. But it seems like something that &quot;should just work&quot; ...</p> <p>an example stanza from <code>launch.json</code> looks like this:</p> <pre><code> { &quot;name&quot;: &quot;dumpsnap script&quot;, &quot;type&quot;: &quot;debugpy&quot;, &quot;cwd&quot;: &quot;/users/lwobker/ws/gitz/powerInventory&quot;, &quot;request&quot;: &quot;launch&quot;, &quot;program&quot;: &quot;dumpsnap.py&quot;, &quot;args&quot;: [&quot;-f&quot;, &quot;some_file&quot;], }, </code></pre>
<python><visual-studio-code><python-venv>
2024-08-20 19:53:29
0
1,028
ljwobker
78,894,096
19,315,471
Python http server REMOTE_ADDR returns another local ip (Linux)
<p>(I use Linux)</p> <p>The local IP of my PC is <code>172.16.1.2</code><br /> <code>ip a</code> command returns <code>172.16.1.2</code></p> <p>Then I can add another IP to the same network interface:</p> <pre><code>ip addr add 192.168.1.10 dev eth0 </code></pre> <p>now <code>ip a</code> command returns 2 IPs</p> <pre><code>172.16.1.2 192.168.1.10 </code></pre> <p>Then I bind a Python <a href="https://docs.python.org/3/library/wsgiref.html#examples" rel="nofollow noreferrer">simple http server</a> to <code>192.168.1.10</code>:</p> <pre><code>from wsgiref.simple_server import make_server def hello_world_app(environ, start_response): print(environ['REMOTE_ADDR']) # why not 172.16.1.2 start_response(status='200 OK', headers=[]) return [b&quot;Hello World&quot;] IP = '192.168.1.10' PORT = 8000 make_server(IP, PORT, hello_world_app).serve_forever() </code></pre> <p>Then simply open <code>http://192.168.1.10:8000</code> in a browser or via curl</p> <p>My question is why it prints <code>192.168.1.10</code> and not <code>172.16.1.2</code>?<br /> Since I access the server from my local IP which is <code>172.16.1.2</code> I'd expect it to be the <code>REMOTE_ADDR</code></p>
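This is ordinary source-address selection: for a destination that is one of the machine's own addresses, the kernel routes the connection via the local routing table and picks that same address as the source, so the client's address (and hence `REMOTE_ADDR`) really is 192.168.1.10, not whichever address happens to be "primary". A quick way to observe the kernel's choice (connecting a UDP socket transmits nothing; it only runs route lookup and source selection):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("127.0.0.1", 9))   # any local destination works; port is irrelevant
addr = s.getsockname()[0]
s.close()
print(addr)  # 127.0.0.1 -- the source chosen for a local destination is that
             # same local address
```

Substituting 192.168.1.10 as the destination on the machine in question should print 192.168.1.10, matching what the WSGI server reports.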
<python><linux><ip><http.server>
2024-08-20 19:13:32
1
707
user19315471
78,894,085
5,109,125
symmetric_difference() multiple sets in python
<p>I have three sets and I want to get the symmetric_difference.</p> <pre><code>primes = {1, 2, 3, 5, 7, 11}
odds = {1, 3, 5, 7, 9, 11}
threes = {1, 3, 6, 9, 12}
</code></pre> <p>Using the regular way</p> <p>set(primes) ^ set(odds) ^ set(threes)</p> <pre><code>result a: {1, 2, 3, 6, 12}
</code></pre> <p>HOWEVER, I saw the geeks4geeks link, whose solution converts each set into a list, combines them together, and returns only the elements that have exactly one occurrence.</p> <pre><code>result b: {2, 6, 12}
</code></pre> <p>Which one should be correct?</p>
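Both results are "correct"; they compute different things. Chained `^` keeps the elements that occur in an *odd* number of the sets (so 1 and 3, present in all three, survive), while the single-occurrence recipe keeps elements present in *exactly one* set. Side by side:

```python
from collections import Counter
from functools import reduce
from operator import xor

primes = {1, 2, 3, 5, 7, 11}
odds = {1, 3, 5, 7, 9, 11}
threes = {1, 3, 6, 9, 12}
sets = [primes, odds, threes]

odd_membership = reduce(xor, sets)  # element is in an odd number of sets
print(odd_membership)               # {1, 2, 3, 6, 12}

counts = Counter(x for s in sets for x in s)
exactly_once = {x for x, c in counts.items() if c == 1}
print(exactly_once)                 # {2, 6, 12}
```

The two definitions agree for two sets and diverge from three sets onward, which is exactly what happens here.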
<python><python-3.x><math><set>
2024-08-20 19:10:35
2
597
punsoca
78,894,080
585,419
Syncing matplotlib imshow coordinates
<p>I'm trying to create an image using networkx, save that image to use later, and then overlay a plot over top of it later. However, when I try to load the image in and make new points, the scale seems off. I've tried everything I can find to make them sync, and I'm not sure what else to try at this point. Here's a simple example:</p> <pre><code>import networkx as nx import matplotlib.pyplot as plt import numpy as np fig = plt.figure() G = nx.dodecahedral_graph() pos = nx.spring_layout(G) plt.box(False) nx.draw_networkx_edges(G, pos=pos) fig.canvas.draw() data = np.array(plt.gcf().canvas.get_renderer().buffer_rgba(), dtype=np.uint8) extent = list(plt.xlim() + plt.ylim()) </code></pre> <p><a href="https://i.sstatic.net/EdPGuCZP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EdPGuCZP.png" alt="enter image description here" /></a></p> <p>So now I have a graph and have saved that image to <code>data</code>, and have saved the range of that graph to <code>extent</code>. I then want to replot that graph from <code>data</code> and overlay the nodes of the graph, in the positions stored in <code>pos</code>.</p> <pre><code>plt.imshow(data, extent=extent) plt.box(False) nx.draw_networkx_nodes(G, pos=pos, node_color='green') </code></pre> <p><a href="https://i.sstatic.net/AhwLuz8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AhwLuz8J.png" alt="enter image description here" /></a></p> <p>For some reason, the scale of the original image is shrunk, so the nodes end up being at a larger scale and not matching the edges. Is it something in the way I'm saving the image?</p>
<python><matplotlib><networkx><imshow>
2024-08-20 19:08:05
1
464
Amanda
78,894,010
1,337,007
Parallelism in AWS Glue
<p>I am reading a large file from <code>S3</code> in a <code>Glue</code> job. It's a <code>.txt</code> file which I convert to <code>.csv</code>, and I read all the values in a particular column.</p> <p>I want to leverage the <code>parallelism</code> of <code>Glue</code> here, so the reading can be picked up as tasks by the <code>Glue</code> workers.</p> <p>Do I need to <strong>programmatically split the file</strong> and then submit the small chunks to the <code>workers</code>, or does <code>Spark</code> take care of the parallelism by itself and split and distribute the file to the <code>workers</code> on its own?</p>
<python><apache-spark><pyspark><aws-glue>
2024-08-20 18:48:01
1
2,258
ghostrider
78,893,923
503,157
Polars: set missing value from another row
<p>The following data frame represents a basic flattened tree structure, as shown below, where the pairs <code>(id, sub-id)</code> and <code>(sub-id, key)</code> are always unique and <code>key</code> always represents the same thing under the same <code>id</code>:</p> <pre><code>id1
└─┬─ sub-id
β”‚ β”‚ └─── key1
β”‚ β”‚ └─── value
β”‚ └─ sub-id2
β”‚ └─── key1
β”‚ └─── None
id2
└─── sub-id3
└─── key2
└─── value
</code></pre> <p>With the graphical representation out of the way, below is the definition as a <code>polars.DataFrame</code>:</p> <pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
    {
        &quot;id&quot;: [1, 1, 2, 2, 2, 3],
        &quot;sub-id&quot;: [1, 1, 2, 3, 3, 4],
        &quot;key&quot;: [&quot;key_1_1&quot;, &quot;key_1_2&quot;, &quot;key_2_1&quot;, &quot;key_2_1&quot;, &quot;key_2_2&quot;, &quot;key_3&quot;],
        &quot;value&quot;: [&quot;value 1&quot;, &quot;value 2&quot;, &quot;value 2 1&quot;, None, &quot;value 2 2&quot;, &quot;value 3&quot;],
    }
)
</code></pre> <p>The same data frame in table representation:</p> <pre><code>shape: (6, 4)
β”Œβ”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ ID β”‚ sub-id β”‚ key β”‚ value β”‚
β•žβ•β•β•β•β•ͺ════════β•ͺ═════════β•ͺ═══════════║
β”‚ 1 β”‚ 1 β”‚ key_1_1 β”‚ value 1 β”‚
β”‚ 1 β”‚ 1 β”‚ key_1_2 β”‚ value 2 β”‚
β”‚ 2 β”‚ 2 β”‚ key_2_1 β”‚ value 2 1 β”‚
β”‚ 2 β”‚ 3 β”‚ key_2_1 β”‚ None β”‚
β”‚ 2 β”‚ 3 β”‚ key_2_2 β”‚ value 2 2 β”‚
β”‚ 3 β”‚ 4 β”‚ key_3 β”‚ value 3 β”‚
β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre> <p>How would I fill the gaps as shown below using <code>polars</code>? The total size of the data is about 100k rows.</p> <pre><code>shape: (6, 4)
β”Œβ”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ ID β”‚ sub-id β”‚ key β”‚ value β”‚
β•žβ•β•β•β•β•ͺ════════β•ͺ═════════β•ͺ═══════════║
β”‚ 1 β”‚ 1 β”‚ key_1_1 β”‚ value 1 β”‚
β”‚ 1 β”‚ 1 β”‚ key_1_2 β”‚ value 2 β”‚
β”‚ 2 β”‚ 2 β”‚ key_2_1 β”‚ value 2 1 β”‚
β”‚ 2 β”‚ 3 β”‚ key_2_1 β”‚ value 2 1 β”‚
β”‚ 2 β”‚ 3 β”‚ key_2_2 β”‚ value 2 2 β”‚
β”‚ 3 β”‚ 4 β”‚ key_3 β”‚ value 3 β”‚
β””β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
</code></pre>
<python><dataframe><python-polars>
2024-08-20 18:24:55
2
1,617
Eir Nym
78,893,765
11,001,493
How to add values to a dataframe by jumping through columns based on a specific number?
<p>I have a dataframe like df:</p> <pre><code>df = pd.DataFrame({'Months': [4, 3], 'Type': ['A', 'B'],
                   1: [0, 1], 2: [0, 1], 3: [2, 1], 4: [2, 0], 5: [0, 0],
                   6: [0, 0], 7: [0, 0], 8: [0, 0], 9: [0, 0]})
</code></pre> <pre><code>   Months Type  1  2  3  4  5  6  7  8  9
0       4    A  0  0  2  2  0  0  0  0  0
1       3    B  1  1  1  0  0  0  0  0  0
</code></pre> <p>For each row, I would like to look at the value in <code>df[&quot;Months&quot;]</code> (this will be the number of columns we should &quot;jump&quot;). If there is a value higher than <code>0</code> in <code>df[1]</code> or beyond, I would like to jump <code>n</code> columns, based on the number in <code>df[&quot;Months&quot;]</code>, and then add that same value multiplied by <code>-1</code>.</p> <p>So, the result should be like this:</p> <pre><code>df = pd.DataFrame({'Months': [4, 3], 'Type': ['A', 'B'],
                   1: [0, 1], 2: [0, 1], 3: [2, 1], 4: [2, 0], 5: [0, -1],
                   6: [0, -1], 7: [0, -1], 8: [-2, 0], 9: [-2, 0]})
</code></pre> <pre><code>   Months Type  1  2  3  4  5  6  7  8  9
0       4    A  0  0  2  2  0  0  0 -2 -2
1       3    B  1  1  1  0 -1 -1 -1  0  0
</code></pre> <p>Could anyone help me?</p>
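Reading the expected output, the negated value lands <code>Months + 1</code> columns to the right of each positive value (i.e. <code>Months</code> columns are skipped in between). A sketch under that assumption, not from the post:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Months': [4, 3], 'Type': ['A', 'B'],
                   1: [0, 1], 2: [0, 1], 3: [2, 1], 4: [2, 0], 5: [0, 0],
                   6: [0, 0], 7: [0, 0], 8: [0, 0], 9: [0, 0]})

value_cols = list(range(1, 10))
vals = df[value_cols].to_numpy()
out = vals.copy()

for i, months in enumerate(df['Months']):
    shift = months + 1  # assumed offset: skip `months` columns, then negate
    for j in np.flatnonzero(vals[i] > 0):
        if j + shift < out.shape[1]:
            out[i, j + shift] -= vals[i, j]

df[value_cols] = out
print(df[value_cols].values.tolist())
# [[0, 0, 2, 2, 0, 0, 0, -2, -2], [1, 1, 1, 0, -1, -1, -1, 0, 0]]
```

If the intended offset is <code>Months</code> rather than <code>Months + 1</code>, only the <code>shift</code> line changes.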
<python><pandas>
2024-08-20 17:37:35
1
702
user026
78,893,764
1,071,179
Weird chart layout with pandas/matplotlib line chart
<p>I am new to data science and the Python/NumPy/Pandas world in general. I have a dataset of orders, order line items, and products. I am trying to plot a basic line chart with pandas:</p> <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

pd.set_option('display.max_columns', 500)

data_link = &quot;https://&lt;github_link&gt;/orders.csv&quot;
data = pd.read_csv(data_link, encoding='latin-1')

daily_sales = (data
               .groupby('Order Date')  # dimensions
               .agg({'Sales': 'sum'})  # metric
               )
daily_sales.index = pd.DatetimeIndex(daily_sales.index)

fig, my_ax = plt.subplots(figsize=(14, 4))
daily_sales.plot(ax=my_ax)
plt.tight_layout()
plt.show()
</code></pre> <p><a href="https://i.sstatic.net/X4ftRzcg.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X4ftRzcg.jpg" alt="The Weird Line Plot" /></a></p> <p>It should ideally look like this:</p> <p><a href="https://i.sstatic.net/9nwMyJKN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9nwMyJKN.png" alt="How it should look" /></a></p> <p>I did try adding .sort_values(by='Sales') to the groupby, as some suggested the problem is that the X-axis values are not sorted. Since the X axis is Order Date, I also tried .sort_values(by='Order Date') to begin with, but I still get a similar graph. I'm totally confused. Please help.</p>
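A zig-zag line usually means the x-values are not in chronological order. Sorting the string dates is not enough, because lexicographic order of date strings differs from date order; the sort has to happen after the conversion to datetimes. A small sketch with hypothetical stand-in data (not the asker's CSV, and the date format is an assumption):

```python
import pandas as pd

# hypothetical stand-in for the grouped daily sales, keyed by string dates
daily_sales = pd.Series(
    [30.0, 10.0, 20.0],
    index=["1/10/2017", "1/2/2017", "1/3/2017"],
    name="Sales",
)

# sorting the strings puts 1/10 before 1/2 -- still wrong for plotting
print(sorted(daily_sales.index))  # ['1/10/2017', '1/2/2017', '1/3/2017']

# convert to datetimes first, then sort the index
daily_sales.index = pd.to_datetime(daily_sales.index, format="%m/%d/%Y")
daily_sales = daily_sales.sort_index()
print(daily_sales.index.is_monotonic_increasing)  # True
```

With the real data, calling <code>sort_index()</code> after the <code>pd.DatetimeIndex(...)</code> line should give a single clean line.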
<python><pandas><matplotlib><data-science>
2024-08-20 17:37:33
1
1,119
Mahesh
78,893,734
4,372,237
sklearn.metrics.accuracy_score is very slow
<p>I need to measure accuracy of my model's prediction for binary classification (0 and 1 outputs). I am testing my model with many different values of threshold, and my testing dataset is quite big (50-100 million of examples), so I need a fast way to compute model's accuracy. I was optimizing my code and noticed that the standard function for computing accuracy is ~50 times slower than the direct computation. Minimal example:</p> <pre><code>from sklearn.metrics import accuracy_score import numpy as np import timeit a=np.random.randint(0,2,1000000) b=np.random.randint(0,2,1000000) %timeit accuracy_score(a,b) # 46.7 ms Β± 390 Β΅s per loop (mean Β± std. dev. of 7 runs, 10 loops each) %timeit (a==b).sum()/a.size # 713 Β΅s Β± 7.22 Β΅s per loop (mean Β± std. dev. of 7 runs, 1,000 loops each) </code></pre> <p>Am I missing something? It looks like accuracy_score is a standard way to measure accuracy. Why is it so slow? No C optimization under the hood?</p>
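Much of <code>accuracy_score</code>'s per-call cost is input validation rather than the comparison itself, and that overhead compounds when sweeping many thresholds. A sketch (not from the post; assumes continuous scores with no ties) that computes the accuracy at every candidate threshold in one pass with cumulative sums instead of one full pass per threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.random(1_000)                 # model outputs in [0, 1)
labels = (scores > 0.5).astype(int)        # toy ground truth
labels[rng.random(1_000) < 0.2] ^= 1       # flip 20% to add noise

order = np.argsort(scores)
y = labels[order]
n = y.size

# Threshold at sorted score i: positions <= i predicted 0, the rest 1.
zeros_below = np.cumsum(y == 0)        # correct 0-predictions per threshold
ones_above = y.sum() - np.cumsum(y)    # correct 1-predictions per threshold
acc = (zeros_below + ones_above) / n   # accuracy at every threshold

best = np.argmax(acc)
print(np.sort(scores)[best], acc[best])
```

One O(n log n) sort plus two cumulative sums replaces a separate O(n) comparison (and validation call) per threshold.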
<python><performance><scikit-learn>
2024-08-20 17:27:36
2
3,470
Mikhail Genkin