QuestionId
int64
74.8M
79.8M
UserId
int64
56
29.4M
QuestionTitle
stringlengths
15
150
QuestionBody
stringlengths
40
40.3k
Tags
stringlengths
8
101
CreationDate
stringdate
2022-12-10 09:42:47
2025-11-01 19:08:18
AnswerCount
int64
0
44
UserExpertiseLevel
int64
301
888k
UserDisplayName
stringlengths
3
30
77,420,772
992,421
pyspark RDD count nodes in a DAG
<p>I have an RDD which shows up as</p> <pre><code>[&quot;2\t{'3': 1}&quot;, &quot;3\t{'2': 2}&quot;, &quot;4\t{'1': 1, '2': 1}&quot;, &quot;5\t{'4': 3, '2': 1, '6': 1}&quot;, &quot;6\t{'2': 1, '5': 2}&quot;, &quot;7\t{'2': 1, '5': 1}&quot;, &quot;8\t{'2': 1, '5': 1}&quot;, &quot;9\t{'2': 1, '5': 1}&quot;, &quot;10\t{'5': 1}&quot;, &quot;11\t{'5': 2}&quot;] </code></pre> <p>I could split it up and count the nodes before the '\t', or I can write a function to count the nodes on the right. This is a weighted DAG. If I count by hand, I see there are 11 nodes, but I am unable to figure out how to bring node 1 on the right side into the set of nodes before I do distinct and count. My code is</p> <pre><code>import ast def break_nodes(line): data_dict = ast.literal_eval(line) # Iterate through the dictionary items and print them for key, value in data_dict.items(): print(f'key {key} val {value}') yield (int(key)) nodeIDs = dataRDD.map(lambda line: line.split('\t')) \ .flatMap(lambda x: break_nodes(x[1])) \ .distinct() </code></pre> <p>This just counts the nodes to the right of \t. The code I have for the left side is very simple</p> <pre><code>nodeIDs = dataRDD.map(lambda line: line.split('\t')[0]) totalCount = nodeIDs.distinct().count() </code></pre> <p>What modification can I make to the code to get all the nodes counted? My brain is fried from trying so many ways.</p> <p>Appreciate the help</p>
<python><apache-spark><pyspark><mapreduce>
2023-11-04 04:41:50
2
850
Ram
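A plain-Python sketch of the counting logic asked for above, with the RDD modeled as a list so it runs without Spark; with Spark the same generator plugs straight into `dataRDD.flatMap(all_nodes).distinct().count()`:

```python
import ast

def all_nodes(line):
    """Yield the left-hand node and every node key in the right-hand dict."""
    left, right = line.split('\t')
    yield int(left)
    for key in ast.literal_eval(right):
        yield int(key)

data = ["2\t{'3': 1}", "3\t{'2': 2}", "4\t{'1': 1, '2': 1}",
        "5\t{'4': 3, '2': 1, '6': 1}", "6\t{'2': 1, '5': 2}",
        "7\t{'2': 1, '5': 1}", "8\t{'2': 1, '5': 1}",
        "9\t{'2': 1, '5': 1}", "10\t{'5': 1}", "11\t{'5': 2}"]

# Spark equivalent: dataRDD.flatMap(all_nodes).distinct().count()
distinct_nodes = set()
for line in data:
    distinct_nodes.update(all_nodes(line))
print(len(distinct_nodes))  # 11 -- node 1 only appears on the right side
```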
77,420,395
2,647,447
Why does Python's pandas report a difference when comparing two CSV files and the cell is empty?
<p><a href="https://i.sstatic.net/oKcac.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oKcac.png" alt="enter image description here" /></a>(1) I am using Python's pandas to compare two csv files. Both files have exactly the same data set, so the comparison should return something like the statement &quot;two files are identical&quot;. However, there is one column with the header &quot;Error&quot;, and that column is empty because there is no recorded error value.</p> <p>(2) When I do a file compare, the script picks up the &quot;Error&quot; column as true (or difference found).</p> <p>(3) My code is below.</p> <p>(4) Can someone help? How can I skip a cell if it is empty? Actually, I have another set of data <a href="https://i.sstatic.net/FPonc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FPonc.png" alt="enter image description here" /></a> and in those two files there are columns with the value &quot;None&quot;, which show the same behavior. (Both files, file a and file b, have &quot;None&quot; at the same locations, and the compare result still says there is a difference.)</p> <p>my code:</p> <pre><code>import pandas as pd import numpy as np # Import numpy for NaN values # List of file paths file_paths = ['test_file_1.csv', 'test_file_2.csv'] # Create a list to store DataFrames dataframes = [] # Load all CSV files into DataFrames for file_path in file_paths: df = pd.read_csv(file_path) dataframes.append(df) # Initialize a dictionary to store differences differences = {} # Compare each pair of DataFrames for i in range(len(dataframes)): for j in range(i + 1, len(dataframes)): df1 = dataframes[i] df2 = dataframes[j] # Check if either DataFrame is None or has errors if df1 is None or df2 is None: continue # Fill empty cells with NaN df1 = df1.fillna(np.nan) df2 = df2.fillna(np.nan) # Compare the DataFrames cell by cell comparison_df = df1 != df2 # Use != to create a boolean DataFrame where differences are True print(&quot;BreakPoint&quot;) # Find the row and column indices where differences occur diff_locations = comparison_df.stack().reset_index() diff_locations.columns = ['Row', 'Column', 'Different'] # Filter rows where differences are True diff_locations = diff_locations[diff_locations['Different']] # Store differences in the dictionary key = f'({file_paths[i]}) vs ({file_paths[j]})' differences[key] = diff_locations print(&quot;break point&quot;) # Output the differences for key, diff_locations in differences.items(): if diff_locations.empty: print(f&quot;{key}: The two CSV files are identical.&quot;) else: print(f&quot;{key}: The two CSV files have differences at the following locations:&quot;) print(diff_locations) </code></pre>
<python><pandas><dataframe>
2023-11-04 01:07:01
1
449
PChao
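The behavior described above comes from IEEE float semantics: `NaN != NaN` evaluates to `True`, so every empty (NaN) cell is flagged as a difference by a plain `df1 != df2`. A minimal sketch of one fix, masking out positions where both sides are NaN (the file names and "Error" column mirror the question, not real data):

```python
import numpy as np
import pandas as pd

# Two "files" with an empty Error column, standing in for the CSVs.
df1 = pd.DataFrame({'Value': [1, 2], 'Error': [np.nan, np.nan]})
df2 = pd.DataFrame({'Value': [1, 2], 'Error': [np.nan, np.nan]})

# NaN != NaN is True, so the naive comparison flags every empty cell.
naive_diff = df1 != df2

# Treat a cell as equal when both sides are NaN:
real_diff = (df1 != df2) & ~(df1.isna() & df2.isna())

print(naive_diff['Error'].any())  # True  -- the false positive from the question
print(real_diff.values.any())     # False -- the files really are identical
```

`df1.equals(df2)` also treats NaNs in the same location as equal, if only a yes/no answer is needed.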
77,420,330
1,374,078
How to retain sqlalchemy model after adding `row_number()`?
<p>I'm trying to filter rows in a method, so I need the output model to be of the same type as the input model to the SQLAlchemy query.</p> <p>I followed this answer <a href="https://stackoverflow.com/a/38160409/1374078">https://stackoverflow.com/a/38160409/1374078</a>. However, would it be possible to get the original model back, so that I can access the model's fields by name, e.g. <code>row.foo_field</code>? Otherwise I get the generic row type:</p> <pre><code>&gt; type(row) &lt;class 'sqlalchemy.engine.row.Row'&gt; </code></pre>
<python><sqlalchemy>
2023-11-04 00:27:03
1
1,652
phoxd
77,420,311
856,804
Under what condition will `str(float(val)) == val` NOT hold in Python?
<p>For a string <code>val</code> variable like <code>&quot;1.2345&quot;</code>, does <code>assert str(float(val)) == val</code> always hold?</p> <p>I've already found it doesn't always hold</p> <pre class="lang-none prettyprint-override"><code>&gt;&gt;&gt; str(float('1.2345')) '1.2345' &gt;&gt;&gt; str(float('0.1231412314124124124124124124')) '0.12314123141241241' </code></pre> <p>But where is the theoretical boundary? How can we reason about under what condition the equality of the left side and right side will <em>not</em> hold?</p>
<python><floating-point><precision>
2023-11-04 00:19:33
1
9,110
zyxue
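The boundary asked about above: since Python 3.1, `str`/`repr` of a float produce the *shortest* decimal string that round-trips to the same IEEE 754 double. So `str(float(val)) == val` holds exactly when `val` is already that canonical shortest form. A few cases that make the rule concrete:

```python
# str(float(x)) returns the shortest decimal string that parses back to
# the same double, so equality holds iff val is already that canonical
# spelling of its own float value.
cases = {
    '1.2345': True,                            # already the shortest repr
    '0.1231412314124124124124124124': False,   # more digits than a double keeps
    '1.0': True,
    '1.00': False,                             # same float, non-canonical spelling
    '1e300': False,                            # canonical form is '1e+300'
}
for val, expected in cases.items():
    assert (str(float(val)) == val) == expected, val
print('all cases behave as predicted')
```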
77,420,235
1,481,689
Why isn't `repr` calling `__repr__`?
<p>I have a class where a constructor argument is only used in <code>__init__</code> and therefore I didn't want to save it as a field, but I did want a nice <code>__repr__</code>. So I wrote something along these lines:</p> <pre class="lang-py prettyprint-override"><code>from types import MethodType class X: def __init__(self, x): self.__repr__ = MethodType(lambda _: f'X({x=})', self) </code></pre> <p>The <code>__repr__</code> method works fine:</p> <pre class="lang-py prettyprint-override"><code>x = X(1) x.__repr__() </code></pre> <p>Prints:</p> <pre><code>'X(x=1)' </code></pre> <p>But:</p> <pre class="lang-py prettyprint-override"><code>repr(x) </code></pre> <p>Prints:</p> <pre><code>'&lt;__main__.X object at 0x110f71810&gt;' </code></pre> <p>Why isn't <code>repr</code> calling <code>__repr__</code>?</p>
<python><python-3.x>
2023-11-03 23:51:16
0
1,040
Howard Lovatt
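The cause of the behavior above: built-ins like `repr()` look special methods up on the *type*, not the instance, so a per-instance `__repr__` assigned in `__init__` is never consulted. A sketch of one workaround that still keeps `x` out of the public fields, by storing only the pre-formatted string and defining `__repr__` at class level:

```python
class X:
    def __init__(self, x):
        # Build the repr string now; x itself is not kept as a field.
        self._repr = f'X({x=})'

    def __repr__(self):
        # Class-level dunder: this IS found by repr(), unlike an
        # instance attribute named __repr__.
        return self._repr

x = X(1)
print(repr(x))  # X(x=1)
```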
77,420,119
748,742
Python - multiprocessing and multithreading
<p><strong>Question:</strong></p> <p>I am trying to gain a better understanding of Python's multiprocessing and multithreading, particularly in the context of using the <code>concurrent.futures</code> module. I want to make sure my understanding is correct.</p> <p><strong>Multiprocessing:</strong></p> <p>I believe that multiprocessing creates multiple processes, each with its own Global Interpreter Lock (GIL) and memory space. These processes can efficiently utilize multiple CPU cores. <strong>And, each independent process automatically manages its own threads.</strong></p> <p>Here is an example of using a <code>ProcessPoolExecutor</code>:</p> <pre class="lang-py prettyprint-override"><code># Create a ProcessPoolExecutor with a maximum of 4 concurrent processes with concurrent.futures.ProcessPoolExecutor(max_processes) as executor: # Use the executor to map your function to the list of numbers results = list(executor.map(calculate_square, numbers)) </code></pre> <p><strong>Multithreading:</strong></p> <p>In contrast, multithreading creates multiple threads within a single process. These threads share the same memory space and GIL. Multithreading is typically more suitable for I/O-bound tasks rather than CPU-bound tasks. However, it doesn't utilize multiple CPU cores, so even if you have 4 cores, a program uses only one core, on which it runs its threads.</p> <p>Here is an example of using a <code>ThreadPoolExecutor</code>:</p> <pre class="lang-py prettyprint-override"><code># Create a ThreadPoolExecutor with a specified maximum number of threads with concurrent.futures.ThreadPoolExecutor(max_threads) as executor: # Use the executor to map your function to the list of numbers results = list(executor.map(calculate_square, numbers)) </code></pre> <p><strong>Hybrid:</strong></p> <p>It’s possible to combine the concurrent.futures thread and process pools to handle both I/O- and CPU-intensive tasks by creating multiple processes, each with its own threads.</p>
<python><multithreading><multiprocessing>
2023-11-03 23:07:18
1
383
uba2012
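A runnable, self-contained version of the executor snippets in the question (the thread variant is shown; swapping in `ProcessPoolExecutor` is the only change for the process variant, plus an `if __name__ == '__main__':` guard on platforms that spawn workers):

```python
import concurrent.futures

def calculate_square(n):
    return n * n

numbers = list(range(8))

# Threads: one process, shared memory and one GIL; best for I/O-bound work.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    thread_results = list(executor.map(calculate_square, numbers))

print(thread_results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```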
77,420,118
11,981,718
How to loop through .py files using importlib
<p>I am trying to use the prefix modl and iterate through .py files that I use as settings. I'd like to stick to python files because it is easy to declare functions and variables and import them. The problem I run into is that the module does not update at the second iteration:</p> <pre><code>for settingsfile in ['settings1.py', 'settings2.py']: modl = importlib.import_module(settingsfile) var1 = modl.var1 print('the variable:', var1) </code></pre> <p>output:</p> <pre><code>the variable:var1_from_settings1 the variable:var1_from_settings1 </code></pre>
<python><python-importlib>
2023-11-03 23:06:52
0
412
tincan
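Two things bite the loop in the question: `'settings1.py'` (with the suffix) is not a valid argument for `import_module`, and `import_module` returns the cached first module on later iterations unless `importlib.reload` is used. A sketch that sidesteps both by loading each file from its path via a spec (the settings files here are generated on the fly for the demo):

```python
import importlib.util
import pathlib
import tempfile

def load_settings(path):
    """Load a .py file as a fresh module object, bypassing sys.modules caching."""
    spec = importlib.util.spec_from_file_location(pathlib.Path(path).stem, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

with tempfile.TemporaryDirectory() as tmp:
    # Create two stand-in settings files.
    for name, value in [('settings1.py', 'var1_from_settings1'),
                        ('settings2.py', 'var1_from_settings2')]:
        pathlib.Path(tmp, name).write_text(f'var1 = {value!r}\n')

    for settingsfile in ['settings1.py', 'settings2.py']:
        modl = load_settings(str(pathlib.Path(tmp, settingsfile)))
        print('the variable:', modl.var1)  # now differs per iteration
```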
77,420,085
1,285,061
prepopulate numpy array with fixed values
<p>How can I prepopulate a numpy array with fixed values while initializing it? I tried generating a <code>list</code> and using that as a <code>fill</code>.</p> <pre><code>&gt;&gt;&gt; c = np.empty(5) &gt;&gt;&gt; c array([0.0e+000, 9.9e-324, 1.5e-323, 2.0e-323, 2.5e-323]) &gt;&gt;&gt; np.array(list(range(0,10,1))) array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) &gt;&gt;&gt; &gt;&gt;&gt; c.fill(np.array(list(range(0,10,1)))) TypeError: only length-1 arrays can be converted to Python scalars The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ValueError: setting an array element with a sequence. &gt;&gt;&gt; c.fill([np.array(list(range(0,10,1)))]) TypeError: float() argument must be a string or a real number, not 'list' The above exception was the direct cause of the following exception: Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ValueError: setting an array element with a sequence. </code></pre> <p>Expected:</p> <pre><code>array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]]) </code></pre>
<python><arrays><numpy>
2023-11-03 22:56:13
2
3,201
Majoris
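The errors above arise because `ndarray.fill` only accepts a scalar. For the expected output (every row the same vector), tiling or broadcasting a row does the job:

```python
import numpy as np

# np.empty only allocates memory; fill() takes a scalar, not a sequence.
# To prepopulate every row with the same vector, tile the row:
row = np.arange(10)
c = np.tile(row, (5, 1))
print(c.shape)  # (5, 10)

# Equivalent: broadcast the row into a preallocated array.
d = np.empty((5, 10), dtype=int)
d[:] = row
assert (c == d).all()
```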
77,419,995
266,014
PyLance stops working with an error immediately after VS Code start
<p>I have faced an issue with VS Code Pylance. The following output for the &quot;Python Language Server&quot; is generated immediately after VS Code start:</p> <pre><code>2023-11-04 00:06:42.035 [info] [Info - 12:06:42 AM] (2176) Pylance language server 2023.11.10 (pyright 088ebaa5) starting 2023-11-04 00:06:42.036 [info] [Info - 12:06:42 AM] (2176) Server root directory: &lt;dir&gt; 2023-11-04 00:06:42.041 [info] [Info - 12:06:42 AM] (2176) Starting service instance &lt;instance&gt; 2023-11-04 00:06:42.104 [info] [Info - 12:06:42 AM] (2176) Setting pythonPath for service &lt;instance&gt;: &lt;python-path&gt; 2023-11-04 00:06:42.106 [info] [Info - 12:06:42 AM] (2176) Setting environmentName for service &lt;instance&gt;: &quot;3.11.0 (.venv venv)&quot; 2023-11-04 00:06:42.262 [info] [Info - 12:06:42 AM] (2176) Assuming Python version 3.11 2023-11-04 00:06:42.560 [info] [Info - 12:06:42 AM] (2176) Found 610 source files 2023-11-04 00:06:42.820 [info] ERROR: The process &quot;17800&quot; not found. </code></pre> <p>And it looks like PyLance stops working.</p> <p>Through experiments, I found that the Python Language Server is restarted each time I modify settings related to Python in .vscode/settings.json. It is enough to change some value and change it back. And logs show that the Python Language Server is restarted without the final error.</p> <p>But it would be good to understand how to fix the issue permanently.</p>
<python><visual-studio-code><pylance>
2023-11-03 22:23:42
0
314
Viktor
77,419,978
6,251,742
Fake SMTP server not supporting auth
<h1>Fake SMTP server not supporting auth</h1> <p>I have a function that create a SMTP client, login onto a server, and send mails. I want to create a unit-test with a fake SMTP server that will write the email content into a io stream instead of sending the email.</p> <p>Everything works great except the login part that raise this error:</p> <pre><code>smtplib.SMTPNotSupportedError: SMTP AUTH extension not supported by server. </code></pre> <h2>MRE</h2> <pre class="lang-py prettyprint-override"><code>import email.message import io import smtplib import aiosmtpd.controller import aiosmtpd.handlers import aiosmtpd.smtp def foobar(host, port): client = smtplib.SMTP(host=host, port=port) message = email.message.EmailMessage() message[&quot;From&quot;] = &quot;me@me.me&quot; message[&quot;To&quot;] = &quot;you@you.you&quot; message[&quot;Subject&quot;] = &quot;subject&quot; message.set_content(&quot;Hello World&quot;) with client as session: session.login(&quot;foo&quot;, &quot;bar&quot;) session.send_message(message) def test_foobar(): io_stream = io.StringIO() handler = aiosmtpd.handlers.Debugging(io_stream) controller = aiosmtpd.controller.Controller(handler) controller.start() foobar(controller.hostname, controller.port) controller.stop() assert &quot;Hello World&quot; in io_stream.getvalue() </code></pre> <h2>Requirements</h2> <pre><code>pip install aiosmtpd </code></pre> <h2>Tentatives to fix:</h2> <h3>Add an Authenticator</h3> <p>From <a href="https://aiosmtpd.readthedocs.io/en/stable/auth.html#activating-authentication" rel="nofollow noreferrer">documentation</a>:</p> <blockquote> <p>aiosmtpd authentication is always activated, but attempts to authenticate will always be rejected unless the authenticator parameter of SMTP is set to a valid &amp; working Authenticator Callback.</p> </blockquote> <pre class="lang-py prettyprint-override"><code>[...] def test_foobar(): def authenticator(*args, **kwargs): return aiosmtpd.smtp.AuthResult(success=True) [...] 
controller = aiosmtpd.controller.Controller(handler, authenticator=authenticator) [...] </code></pre> <p><strong>Result</strong>: Same error</p> <h3>Add <code>handle_AUTH</code> and <code>auth_MECHANISM</code> always returning AuthResult(success=True)</h3> <p><a href="https://aiosmtpd.readthedocs.io/en/stable/auth.html#authmech" rel="nofollow noreferrer">Documentation</a></p> <pre class="lang-py prettyprint-override"><code>[...] class CustomDebugging(aiosmtpd.handlers.Debugging): async def handle_AUTH(self, *arg, **kwargs): return aiosmtpd.smtp.AuthResult(success=True) async def auth_PLAIN(self, *arg, **kwargs): return aiosmtpd.smtp.AuthResult(success=True) async def auth_LOGIN(self, *arg, **kwargs): return aiosmtpd.smtp.AuthResult(success=True) def test_foobar(): def authenticator(*args, **kwargs): return aiosmtpd.smtp.AuthResult(success=True) handler = CustomDebugging(io_stream) controller = aiosmtpd.controller.Controller( handler, authenticator=authenticator ) [...] </code></pre> <p><strong>Result</strong>: Same error</p> <p><strong>What should I do to fix this issue?</strong></p>
<python><unit-testing><authentication><server><aiosmtpd>
2023-11-03 22:18:12
0
4,033
Dorian Turba
77,419,973
803,533
FastAPI + Selenium to convert HTML to PDF pages
<p>I'm attempting to develop a REST service that will render HTML to a PDF file on demand. I have explored multiple options but I have yet to find one that renders a PDF document as perfectly as Chrome itself. So I thought to wrap <code>Selenium</code> and <code>Chromedriver</code> in a FastAPI by leveraging the great library from <code>pyhtml2pdf</code>.</p> <p>However, I found that by using this library, it would invoke a new instance of Chrome with every API call. As you can imagine, this was slow per call to the endpoint and required a lot of memory.</p> <p>So I borrowed some of the code from this library and made a single instance that would be created upon launching the service and the endpoint would use the browser driver as a global object.</p> <p>This resulted in a 10x increase in speed and far reduced memory. Can anyone see any downside to wrapping Selenium/Chrome in a FastAPI endpoint? What am I missing? Here is my code:</p> <pre><code>import sys import json import base64 import os import re import requests from selenium import webdriver from selenium.webdriver.chrome.options import Options from selenium.webdriver.chrome.service import Service from selenium.common.exceptions import TimeoutException from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support.expected_conditions import staleness_of from webdriver_manager.chrome import ChromeDriverManager from selenium.webdriver.common.by import By from fastapi import FastAPI, Query from fastapi.responses import HTMLResponse from fastapi.responses import JSONResponse from contextlib import asynccontextmanager from fastapi.middleware.cors import CORSMiddleware webdriver_options = Options() webdriver_prefs = {} driver = None webdriver_options.add_argument(&quot;--headless&quot;) webdriver_options.add_argument(&quot;--disable-gpu&quot;) webdriver_options.add_argument(&quot;--no-sandbox&quot;) webdriver_options.add_argument(&quot;--disable-dev-shm-usage&quot;) 
webdriver_options.experimental_options[&quot;prefs&quot;] = webdriver_prefs webdriver_prefs[&quot;profile.default_content_settings&quot;] = {&quot;images&quot;: 2} service = Service(ChromeDriverManager().install()) driver = webdriver.Chrome(service=service, options=webdriver_options) @asynccontextmanager async def lifespan(app: FastAPI): global driver print(&quot;Staring PDF service&quot;) yield driver.quit() app = FastAPI(lifespan=lifespan) @app.get(&quot;/get_pdf/&quot;) def get_pdf(serial: str): global driver url = 'https://example.com/templates/serial.php?serial=' + serial html = requests.get(url) local_html_file = serial + &quot;_htmldata.html&quot; f = open(local_html_file, &quot;w&quot;) f.write(html.text) f.close() local_pdf_file = serial + &quot;.pdf&quot; driver.get(f'file:///home/ubuntu/pdf2/'+local_html_file) calculated_print_options = { &quot;landscape&quot;: False, &quot;displayHeaderFooter&quot;: False, &quot;printBackground&quot;: True, &quot;preferCSSPageSize&quot;: True, &quot;marginLeft&quot;: 0.4, &quot;marginRight&quot;: 0.5, &quot;scale&quot;: 0.75, } result = __send_devtools( driver, &quot;Page.printToPDF&quot;, calculated_print_options) with open(local_pdf_file, &quot;wb&quot;) as file: file.write(base64.b64decode(result[&quot;data&quot;])) os.remove(local_html_file) return &quot;done&quot; def __send_devtools(driver, cmd, params={}): resource = &quot;/session/%s/chromium/send_command_and_get_result&quot; % driver.session_id url = driver.command_executor._url + resource body = json.dumps({&quot;cmd&quot;: cmd, &quot;params&quot;: params}) response = driver.command_executor._request(&quot;POST&quot;, url, body) if not response: raise Exception(response.get(&quot;value&quot;)) return response.get(&quot;value&quot;) # Run the FastAPI application using uvicorn server if __name__ == &quot;__main__&quot;: import uvicorn uvicorn.run(&quot;pdf_rest:app&quot;, host=&quot;0.0.0.0&quot;, port=8000, reload=True) </code></pre>
<python><python-3.x><selenium-webdriver><selenium-chromedriver><fastapi>
2023-11-03 22:15:03
0
479
K997
77,419,948
8,383,726
Python - pandas_datareader Error: "data = j["context"]["dispatcher"]["stores"]["HistoricalPriceStore"]TypeError: string indices must be integers"
<p>I really can't seem to figure out this error that I keep getting from the simple code below. I am using <code>Python 3.9.13</code> and <code>pandas 2.1.2</code> . I appreciate in advance the help in figuring this out.</p> <pre><code>import datetime import pandas_datareader.data as web import requests from bs4 import BeautifulSoup start = datetime.datetime(2015, 1, 1) end = datetime.datetime(2018, 2, 8) df = web.DataReader('BTC-USD', 'yahoo', start='2022-01-01', end='2022-06-14') df.tail(5) </code></pre> <p>The error message I get from the above piece of code is:</p> <pre><code>&quot;data = j[&quot;context&quot;][&quot;dispatcher&quot;][&quot;stores&quot;][&quot;HistoricalPriceStore&quot;]TypeError: string indices must be integers&quot; </code></pre> <p>As a workaround, I have tried the following method, which worked. However, this is not an efficient method as I had to manually save the data file in my project repository.</p> <pre><code>df = pd.read_excel('../data/input/subSet.xlsx') df.set_index(&quot;Date&quot;, inplace=True) </code></pre>
<python><python-3.9><pandas-datareader>
2023-11-03 22:07:35
1
335
Vondoe79
77,419,881
758,614
Create a feature, scenario, and its steps at runtime
<p>I'm using python-behave to run some tests. For a &quot;proof-of-concept&quot; project I would like to create a test (feature/scenario) at runtime.</p> <p>I'm gathering the test steps from a web server and am trying to build a <code>Scenario</code> object. One idea was to use the <code>before_all</code> hook to get the steps from the server and compile them together. But I cannot find an example of how to make a <code>scenario</code> or <code>step</code> object.</p> <p><code>get steps as strings from server</code> --&gt; <code>compile/build scenario</code> --&gt; <code>run test/scenario</code></p> <p>Does anyone have an example?</p>
<python><python-behave>
2023-11-03 21:52:24
1
529
Mario
77,419,870
893,254
How to obtain a P-Value from a Python Scipy Optimize Least Squares regression?
<p>I have performed a function minimization using Python <code>scipy.optimize.least_squares</code>.</p> <p>The function minimized is a chi-squared function.</p> <pre><code>solution = scipy.optimize.least_squares(optimize_function, x0, method='lm', \ ftol=1.0e-8, xtol=1.0e-8, \ max_nfev=1000000, args=(bin_midpoints, hist_data)) </code></pre> <p>The residuals can be obtained via <code>solution.fun</code>. Calculating the chi-squared value using the residuals leads to the same result as</p> <pre><code>2.0 * solution.cost </code></pre> <p>As a next step, I am trying to calculate a P-Value from this chi-square value.</p> <p>In similar libraries which I have used previously, the &quot;p-value&quot; is typically provided somewhere in the output.</p> <p>Looking at the docs, I don't see anything obvious which might be a P-Value in the value returned by <code>least_squares</code>.</p> <p>Do I have to calculate it manually, perhaps using some other Python library?</p>
<python><scipy><scipy-optimize><p-value><chi-squared>
2023-11-03 21:49:52
0
18,579
user2138149
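`least_squares` does not report a p-value, but it is one line with `scipy.stats`: the p-value of a chi-squared statistic is the survival function of the chi2 distribution at that value, with degrees of freedom = (number of data points) − (number of fitted parameters). The numbers below are illustrative, not from the fit in the question:

```python
import scipy.stats

chi_squared = 3.841   # e.g. 2.0 * solution.cost from the fit
dof = 1               # e.g. len(hist_data) - len(x0); value here is illustrative

# P(chi2 >= observed statistic) under the null:
p_value = scipy.stats.chi2.sf(chi_squared, dof)
print(p_value)        # ~0.05 for this classic critical value
```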
77,419,708
1,055,869
Pandas apply row wise access faster
<p>Consider the example where A and B are measurements and T is a threshold left joined in from another dataframe. T_name represents the column to which that threshold applies. I want to keep only the measurements that are above the thresholds.</p> <pre><code>import pandas as pd df = pd.DataFrame({'A':[1,1,4,4],'B':[9,9,6,6], 'T_name': ['A','B','A','B'], 'T':[2,4,3,5]}) high_values = df[df.apply(lambda row: row[row['T_name']] &gt; row['T'], axis=1)] </code></pre> <p>The output looks like this:</p> <pre><code> A B T_Name T 1 1 9 'B' 4 2 4 6 'A' 3 3 4 6 'B' 5 </code></pre> <p>In my dataset I have over 5 million rows and it becomes very slow to do the filtering. The issue has to do with the column access.</p> <pre><code>row[row.T_name] </code></pre> <p>If I break up my code to filter 'A' and then 'B' and then append the two datasets, it becomes faster, but it is less extensible once I eventually have more measurement columns.</p> <p>Is there a way to speed up the filtering?</p> <p>Note: I've tried swifter but it further slows down the filtering.</p>
<python><pandas>
2023-11-03 21:07:41
1
2,740
Shabbir Hussain
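A vectorized sketch of the same filter: instead of a Python-level `apply`, pick each row's own measurement with one `(row, column)` integer index and compare the whole picked column against `T` at once. The measurement columns are listed once, so adding more stays a one-line change:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 4, 4], 'B': [9, 9, 6, 6],
                   'T_name': ['A', 'B', 'A', 'B'], 'T': [2, 4, 3, 5]})

# The measurement columns, listed once (extensible to any number).
measure_cols = pd.Index(['A', 'B'])

# For each row, pick the value from the column named in T_name
# via fancy (row, column) integer indexing -- no per-row Python call.
vals = df[measure_cols].to_numpy()
picked = vals[np.arange(len(df)), measure_cols.get_indexer(df['T_name'])]

high_values = df[picked > df['T'].to_numpy()]
print(high_values.index.tolist())  # [1, 2, 3]
```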
77,419,705
8,478,404
All combinations of elements in a vector in a larger vector
<p>I have the following input vector.</p> <pre><code>['a','b','c'] </code></pre> <p>I want to list all possible combinations.</p> <p>There are three restrictions:</p> <ul> <li>The values have to be inserted into an output vector of six positions.</li> <li>One given value from the input vector can only occur once in the output vector.</li> <li>The order of the values has to be the same in the output vector as in the input vector.</li> </ul> <p>Two permissions:</p> <ul> <li>Positions can be left empty.</li> <li>(Not shown here:) The input vector can have more than one of any given value (such as <code>['a','b','a','c']</code>)</li> </ul> <p>Given this input vector above, the only valid outputs are the following (I might have missed one or two but you get the idea):</p> <pre><code>[' ',' ',' ','a','b','c'], [' ',' ','a',' ','b','c'], [' ','a',' ',' ','b','c'], ['a',' ',' ',' ','b','c'], [' ',' ','a','b',' ','c'], [' ','a','b',' ',' ','c'], ['a','b',' ',' ',' ','c'], [' ','a',' ','b',' ','c'], ['a',' ','b',' ',' ','c'], ['a',' ',' ','b',' ','c'], [' ',' ','a','b','c',' '], [' ','a','b','c',' ',' '], ['a','b','c',' ',' ',' '], [' ','a',' ','b','c',' '], ['a',' ','b','c',' ',' '], ['a',' ','b',' ','c',' '] </code></pre> <p>My first idea was to generate circa 6! vectors where the vectors are random combinations of the possible values, including the empty value: <code>[abc ]|[abc ]...[abc ]</code> and then remove all vectors where a/b/c occur more than they do in the input vector and the order of abc is not the same as in the input vector. But this brute force measure would take ages.</p> <p>I'll have to do this a lot, and for input vectors and output vectors of varying sizes.</p>
<python><combinations>
2023-11-03 21:06:38
2
535
petyar
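Rather than generating all ~6! candidate vectors and filtering, the restrictions above can be satisfied by construction: choose which output *positions* the values occupy (order within the choice is automatically preserved, and duplicate input values are handled for free because positions, not values, are enumerated). Note the full set for three values in six slots has C(6,3) = 20 entries:

```python
from itertools import combinations

def placements(values, size):
    """All ways to place `values` into `size` slots, preserving order."""
    for pos in combinations(range(size), len(values)):
        out = [' '] * size
        for p, v in zip(pos, values):
            out[p] = v
        yield out

results = list(placements(['a', 'b', 'c'], 6))
print(len(results))  # 20, i.e. C(6, 3)
```

This scales as C(size, len(values)) instead of size!, so varying input and output sizes stay cheap.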
77,419,483
336,114
Broken HTML links in Python error output in Interactive Windows and Notebook outputs in VS Code (circa VS Code 1.84)
<p>After the recent update of vscode my error messages in the python interactive window started to look like this:</p> <pre class="lang-none prettyprint-override"><code>File c:\Users\user\anaconda3\lib\site-packages\pandas\core\internals\construction.py:845, in to_arrays(data, columns, dtype) ref='c:\Users\user\anaconda3\lib\site-packages\pandas\core\internals\construction.py:0'&gt;0&lt;/a&gt;;32m 842 data = [tuple(x) for x in data] ref='c:\Users\user\anaconda3\lib\site-packages\pandas\core\internals\construction.py:0'&gt;0&lt;/a&gt;;32m 843 arr = _list_to_arrays(data) ref='c:\Users\user\anaconda3\lib\site-packages\pandas\core\internals\construction.py:1'&gt;1&lt;/a&gt;;32m--&gt; 845 content, columns = _finalize_columns_and_data(arr, columns, dtype) ref='c:\Users\user\anaconda3\lib\site-packages\pandas\core\internals\construction.py:0'&gt;0&lt;/a&gt;;32 </code></pre> <p>With <code>ref='fname..'</code> in each line for any external module</p> <p>How is it possible to stop showing this &quot;ref=..&quot; information? (just code and filename on the top in error block, like it was before)</p> <p><strong>Help: About</strong></p> <pre class="lang-none prettyprint-override"><code>Version: 1.84.0 (user setup) Commit: d037ac076cee195194f93ce6fe2bdfe2969cc82d Date: 2023-11-01T11:29:04.398Z Electron: 25.9.2 ElectronBuildId: 24603566 Chromium: 114.0.5735.289 Node.js: 18.15.0 V8: 11.4.183.29-electron.0 OS: Windows_NT x64 10.0.22621 </code></pre> <p>Python extension: v2023.20.0</p> <p>Jupyter extension: v2023.10.1003070148</p>
<python><visual-studio-code><jupyter-notebook><python-interactive>
2023-11-03 20:18:24
1
393
Philipp
77,418,896
10,452,700
AttributeError: 'GrouperView' object has no attribute 'join'
<p>I'm trying to reproduce this <a href="https://stackoverflow.com/a/42712772/10452700">answer</a> but am getting the following error:</p> <blockquote> <p>AttributeError: 'GrouperView' object has no attribute 'join'</p> </blockquote> <pre class="lang-py prettyprint-override"><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[283], line 7 4 flights = flights.pivot(&quot;month&quot;, &quot;year&quot;, &quot;passengers&quot;) 5 f,(ax1,ax2,ax3, axcb) = plt.subplots(1,4, 6 gridspec_kw={'width_ratios':[1,1,1,0.08]}) ----&gt; 7 ax1.get_shared_y_axes().join(ax2,ax3) 8 g1 = sns.heatmap(flights,cmap=&quot;YlGnBu&quot;,cbar=False,ax=ax1) 9 g1.set_ylabel('') AttributeError: 'GrouperView' object has no attribute 'join' </code></pre> <p>Also, the seaborn version is as below:</p> <pre class="lang-py prettyprint-override"><code>print(sns.__version__) #0.13.0 import matplotlib print('matplotlib: {}'.format(matplotlib.__version__)) #matplotlib: 3.8.1 </code></pre> <p>I checked some workarounds but couldn't solve the problem in the plot, since there is no real string <code>ax2,ax3</code> when I print:</p> <ul> <li><a href="https://stackoverflow.com/q/53832607/10452700">I am getting an error as list object has no attribute 'join'</a></li> </ul> <p>I even downgraded seaborn, but that didn't solve the problem either, based on this <a href="https://github.com/mwaskom/seaborn/discussions/3521" rel="nofollow noreferrer">thread</a>.</p>
<python><python-3.x><seaborn><heatmap><attributeerror>
2023-11-03 18:14:38
2
2,056
Mario
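The `join` method on the shared-axes grouper was deprecated and then removed in recent Matplotlib (hence `GrouperView` in 3.8 has no `join`); the replacement for linking axes after creation is `Axes.sharey`. A minimal sketch mirroring the subplot layout from the traceback (headless backend so it runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # no display needed
import matplotlib.pyplot as plt

f, (ax1, ax2, ax3, axcb) = plt.subplots(
    1, 4, gridspec_kw={'width_ratios': [1, 1, 1, 0.08]})

# Old (removed): ax1.get_shared_y_axes().join(ax2, ax3)
# New API -- share y after the axes already exist:
ax2.sharey(ax1)
ax3.sharey(ax1)

print(ax1.get_shared_y_axes().joined(ax1, ax2))  # True
```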
77,418,669
1,279,355
Use FlatBuffers enum with mypy
<p>FlatBuffers generates Python code like this for enums:</p> <pre><code>class ErrorType(object): ERROR1 = 0 ERROR2 = 1 </code></pre> <p>And I would like to use it as a type in a function:</p> <pre><code>def my_function(error: ErrorType): pass </code></pre> <p>And if I call it now with</p> <pre><code>my_function(ErrorType.ERROR1) </code></pre> <p>That works, but mypy thinks <code>ErrorType.ERROR1</code> is just an <code>int</code>.</p> <pre><code>test.py:8: error: Argument 1 to &quot;my_function&quot; has incompatible type &quot;int&quot;; expected &quot;ErrorType&quot; [arg-type] Found 1 error in 1 file (checked 1 source file) </code></pre> <p>Is there any way to construct an ErrorType which is not an int?</p>
<python><types><mypy>
2023-11-03 17:36:07
1
4,420
Sir l33tname
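The generated class holds plain `int` class attributes, which is why mypy infers `int`. One sketch of a workaround (a hand-written wrapper, not something FlatBuffers emits): mirror the generated constants in an `enum.IntEnum`. The members are a distinct type for mypy, yet stay wire-compatible because they compare and convert as ints:

```python
import enum

# Hand-written mirror of the generated ErrorType constants.
class ErrorType(enum.IntEnum):
    ERROR1 = 0
    ERROR2 = 1

def my_function(error: ErrorType) -> None:
    pass

my_function(ErrorType.ERROR1)   # accepted by mypy and at runtime
print(ErrorType.ERROR1 == 0)    # True -- still interchangeable with int
```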
77,418,578
4,575,197
How to detect if a request has been redirected and, if so, document it in a DataFrame
<p>i'm trying to catch if a request that i send to many websites, has been redirected or not. first let me give you some example Data.</p> <pre><code>redirected_urls = [ &quot;http://www.tagesschau.de/inland/vw-schalte-hapke-101.html&quot;, &quot;http://de.reuters.com/article/deutschland-volkswagen-idDEKCN10V0H3&quot; ] healthy_urls = [ &quot;http://www.focus.de/finanzen/news/wirtschaftsticker/machtkampf-zwischen-vw-und-zulieferern-stoppt-autoproduktion_id_5842241.html&quot;, &quot;https://www.bild.de/news/aktuelles/news/vw-kuendigt-harte-gangart-gegen-lieferstopp-47400500.bild.html&quot; ] redirected_df = pd.DataFrame({'URL': redirected_urls}) healthy_df = pd.DataFrame({'URL': healthy_urls}) </code></pre> <p>so in redirected_df are the links that actually get redirected, however the other dataframe is not redirected. As mentioned in this <a href="https://stackoverflow.com/questions/13482777/how-to-detect-when-a-site-check-redirects-to-another-page-using-the-requests-mod">post</a> i tried to set <code>allow_redirects=False</code> then realized all the links that i'm using get redirected somehow, although i get to see the actual news article. So the response code for all is 200, meaning successful connection. Then i checked <code>response.history</code> for almost all of the link i get <code>[&lt;Response [301]&gt;]</code> . Using <code>BeautifulSoup(response._content).find('link', {'rel': 'canonical'})</code> they all have values.</p> <p>then i would like to save this info in my dataframe like this <code>_df.at[k,'Is_Redirected']= 1 if response.history else 0</code>. 
For all the links mentioned above I get 1 (True).</p> <p>The code that I'm using:</p> <pre><code>def send_two_requests(_url): try: headers = {&quot;user-agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.101 Safari/537.36&quot;} response = requests.get(_url,headers=headers,allow_redirects=True, timeout=10) return response except: return func_timeout.func_timeout(timeout=5, func=send_request, args=[_url]) for k,link in enumerate(_df['url']): response = send_two_requests(_df.at[k,'url']) if response is not None: _df.at[k,'Is_Redirected']= 1 if response.history else 0 </code></pre> <p>Is there any way I can distinguish the actual links that work from the ones that get redirected?</p>
<python><pandas><http-redirect><python-requests>
2023-11-03 17:16:15
2
10,490
Mostafa Bouzari
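One way to separate "worked without moving" from "was redirected" in the question above is to keep the originally requested URL and compare it with the final `response.url`, in addition to checking `response.history`. The `_Resp` class below is a minimal hypothetical stand-in for `requests.Response`, used only so the sketch runs without network access:

```python
class _Resp:
    """Minimal stand-in for requests.Response (url + redirect history)."""
    def __init__(self, url, history):
        self.url = url          # final URL after any redirects
        self.history = history  # list of intermediate Response objects

def was_redirected(response, requested_url):
    # Redirected if the redirect chain is non-empty or the final URL
    # differs from the one originally requested.
    return bool(response.history) or response.url != requested_url

direct = _Resp('http://example.com/article', history=[])
moved = _Resp('https://example.com/article',
              history=[_Resp('http://example.com/article', [])])

flag_direct = 1 if was_redirected(direct, 'http://example.com/article') else 0
flag_moved = 1 if was_redirected(moved, 'http://example.com/article') else 0
```

With real responses, `_df.at[k, 'Is_Redirected'] = 1 if was_redirected(response, _df.at[k, 'url']) else 0` would record the flag. If every site 301s plain http to https, comparing only the hostname and path instead of the full URL is one way to ignore those "harmless" scheme redirects.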
77,418,545
2,893,024
Vercel seems to ignore my requirements.txt file so no packages are installed
<p>I'm running a React frontend with a python serverless backend on Vercel.</p> <p>My <code>requirements.txt</code> file contains the following lines:</p> <pre><code>pydantic==2.4.2 openai~=0.28.1 python-dotenv~=1.0.0 </code></pre> <p>And yet, none of these packages are installed upon running.</p> <p>When hitting that python api endpoint, Vercel does print <code>Installing required dependencies...</code> and completes without errors. However, the execution is immediately interrupted by an error because <code>ModuleNotFoundError: No module named 'pydantic'</code>. And yet, this dependency is the 1st one listed on the <code>requirements.txt</code> file.</p> <p>Can anyone help me diagnose what's going on?</p> <p>Cheers!</p> <p>P.s. if it helps, here's my file structure: <a href="https://i.sstatic.net/7aGrz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7aGrz.png" alt="enter image description here" /></a></p>
<python><vercel>
2023-11-03 17:10:39
0
3,576
Michael Seltenreich
77,418,344
403,425
Reuse logic between frontend (SvelteKit) and backend (Django)
<p>I am building a webshop where we have complicated logic to determine which offers are valid for which products, based on which products you already own, what's in your basket, and a bunch more rules. Up until now this code has always lived in the backend (Django with Django REST Framework), and every time you'd add something into your basket we'd have to fetch all products again to get the changed prices, based on which offers were applied. This is quite slow, leading to a bad UX.</p> <p>We want to run all this code in the frontend: we have the list of products, we have the list of offers and their rules, we know what's in the basket, we know what you already own, so we can apply the offers to the basket and to all the products in the shop, completely separate from the backend. Apart from the initial fetching of the necessary data, no more requests to the backend should be necessary any more. When you add something to the basket all the logic would re-run in the client. That's the idea.</p> <p>But of course we want to also run this logic in the backend when checking out the basket. Can't rely on the frontend to tell the backend which offers were applied! And this is where my question begins: what is the best/easiest way to share this code? The frontend is a SvelteKit app built with TypeScript while the backend is all Django and Python.</p> <p>I don't want to compile TS to Python or the other way around, where we'd end up with 2 pieces of logic that we need to keep in sync and make sure that behaves 100% the same. I want to run the exact same piece of code from the frontend and the backend, if that makes sense.</p> <p>My only idea is to put the code in a SvelteKit endpoint, which could be called by both the frontend client, and by the Django backend. Just post all the info (a product, all offers, the basket, owned products, etc) and return the valid offers for this product, something like that. 
The downside though is that the code wouldn't actually run in the frontend - the frontend would still be calling a backend endpoint. It feels like it should be possible to do better than that.</p> <p>Are there better ways? Would SvelteKit actions work? I haven't used them so I don't know if they run in the frontend client, or if that only happens when used from forms with the <code>use:enhance</code> action. Or maybe I can put the code in a separate NPM module that I can import on the frontend and somehow call from within Python?</p>
<python><node.js><sveltekit>
2023-11-03 16:35:16
1
5,828
Kevin Renskers
77,418,144
3,231,250
For each element of a 2D array, count the higher elements in its row and column
<p>My input dataframe (or 2D numpy array) shape is 11kx12k</p> <p>For each element, I check how many values are higher in the given row and column.</p> <p>For example:</p> <pre><code>array([[1, 3, 1, 5, 3], [3, 7, 3, 2, 1], [2, 3, 1, 8, 9]]) </code></pre> <p>row-wise:</p> <pre><code>3 1 3 0 1 1 0 1 3 4 3 2 4 1 0 </code></pre> <p>column-wise:</p> <pre><code>2 1 1 1 1 0 0 0 2 2 1 1 1 0 0 </code></pre> <p>total higher values for that element:</p> <pre><code>5 2 4 1 2 1 0 1 5 6 4 3 5 1 0 </code></pre> <p>This code works, but for a matrix of this shape it takes ~1 hour.</p> <pre><code>dfT = df.T rows = df.apply(lambda x: np.sum(df&gt;x),axis=1) cols = dfT.apply(lambda x: np.sum(dfT&gt;x),axis=1).T output = rows+cols </code></pre> <p>Is there a more efficient way to do this?</p> <p>I also tried numpy, but for that I split my 2D array into 12kx100 or 12kx200 shapes and merged all the arrays again; in the end the runtimes were close to each other, so I made no progress.</p> <pre><code>np.sum(matrix &gt; matrix[:,None], axis=1) </code></pre>
<python><pandas><numpy><matrix>
2023-11-03 16:05:18
2
1,120
Yasir
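One vectorized alternative for the question above (a sketch; assumes SciPy is available): `scipy.stats.rankdata` with `method='max'` returns, for each element, the count of elements less than or equal to it along an axis, so subtracting that from the axis length gives the count of strictly greater elements, with no Python-level loop and no n-by-n broadcast:

```python
import numpy as np
from scipy.stats import rankdata

a = np.array([[1, 3, 1, 5, 3],
              [3, 7, 3, 2, 1],
              [2, 3, 1, 8, 9]])

# 'max' rank of x = number of elements <= x along the axis, so
# axis_length - rank = number of elements strictly greater than x.
rows = a.shape[1] - rankdata(a, method='max', axis=1)
cols = a.shape[0] - rankdata(a, method='max', axis=0)
total = rows + cols
```

Memory stays O(rows * cols) and the cost is dominated by the per-axis sorts, which should be far cheaper than the pairwise comparisons in the `apply` version.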
77,417,920
4,397,312
Pytorch 1.13 dataloader is significantly faster than Pytorch 2.0.1
<p>I've noticed that PyTorch 2.0.1 DataLoader is significantly slower than PyTorch 1.13 DataLoader, especially when the number of workers is set to something other than 0. I've done some research and found that this is due to a change in the way that PyTorch handles multiprocessing in version 2.0.1. In PyTorch 1.13, the DataLoader uses a separate process for each worker. In PyTorch 2.0.1, the DataLoader uses a thread pool to manage the workers.</p> <p>I'm using a simple DataLoader, but I need to stick to PyTorch 2.0.1 for other reasons. I'm looking for a workaround to speed up my DataLoader.</p> <p>Steps to reproduce:</p> <p>Load a dataset using PyTorch 1.13 DataLoader with the following settings: num_workers: 32 pin_memory: True Time the data loading process. Expected behavior:</p> <p>The data loading process should be faster with PyTorch 2.0.1 DataLoader.</p> <p>Actual behavior:</p> <p>The data loading process is significantly slower with PyTorch 2.0.1 DataLoader.</p> <p>Environment:</p> <p>PyTorch version: 1.13, 2.0.1 Python version: 3.9 Operating system: Ubuntu 20.04 Question:</p> <p>Is there a workaround to speed up the PyTorch 2.0.1 DataLoader?</p> <p>Additional notes:</p> <p>I've tried reducing the number of workers, but this doesn't significantly improve the performance. I've also tried using a smaller batch size, but this also doesn't significantly improve the performance. I appreciate any help you can provide.</p>
<python><pytorch><dataloader>
2023-11-03 15:30:50
0
717
Milad Sikaroudi
77,417,907
3,480,297
Filter queryset from filename field based on two words that appear one after the other Django
<p>I have a <code>Files</code> table, which contains various fields, one of which is a <code>filename</code>. Within each one of the files in this table, there is a specific file that I'd like to retrieve, which is basically a terms and conditions. This filename will always have the words 'terms' and 'conditions' within the filename itself, and it will always be in that order. However, there could be other words surrounding both 'terms' and 'conditions'.</p> <p>For example, a filename could be 'my new terms and conditions' or 'my even better terms blah blah conditions'.</p> <p>I'm trying to write a queryset that will retrieve such a file using Regex but I'm not able to do so. So far, I have something like:</p> <pre><code>file = Files.objects.filter( Q(filename__iregex=r'(^|\W){}.*({}|$)'.format(&quot;term&quot;, &quot;condition&quot;)) ).first() </code></pre> <p>but this doesn't work.</p>
<python><django><regex><django-rest-framework><drf-queryset>
2023-11-03 15:28:58
1
2,612
Adam
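The pattern from the question above can be tested with plain `re` first: word boundaries keep 'terms' and 'conditions' as whole words, in that order, with anything between them. In the ORM this would become something like `filename__iregex=r'\bterms\b.*\bconditions\b'`, with the caveat that `iregex` is evaluated by the database, and word-boundary syntax varies by backend (PostgreSQL uses `\y`, for example), so the exact escape is an assumption to verify against your database:

```python
import re

# 'terms' ... 'conditions', as whole words, in that order,
# case-insensitively, with anything (or nothing) around and between them.
pattern = re.compile(r'\bterms\b.*\bconditions\b', re.IGNORECASE)

hit1 = bool(pattern.search('my new terms and conditions'))
hit2 = bool(pattern.search('my even better terms blah blah conditions'))
miss = bool(pattern.search('conditions come before terms here'))
```

Checking the candidate filenames against the compiled pattern in a Python shell is a quick way to debug the expression before wiring it into the queryset.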
77,417,902
9,137,547
How to map scipy.sparse.lil_array?
<p>I have a starting <code>lil_array</code> of boolean values. (<a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_array.html" rel="nofollow noreferrer">Docs</a>)</p> <p>E.g. dimensions 3,5 and values:</p> <pre><code> (0, 2) True (0, 4) True (1, 0) True (1, 1) True (1, 3) True (2, 0) True (2, 3) True </code></pre> <p>So this graphically is a matrix like the following (True represented as 1):</p> <pre><code>0 0 1 0 1 1 1 0 1 0 1 0 0 1 0 </code></pre> <p>I also have an np.ndarray of the same size as the rows filled with Int values. For this example I will use the following:</p> <pre><code>arr = np.array([0, -1, 0, 3, 2]) </code></pre> <p>I want to produce the following lil_array (zeroes will not be saved in the sparse of course):</p> <pre><code>0 0 0 0 2 0 -1 0 3 0 0 0 0 3 0 </code></pre> <p>where each row is the element-wise product of the corresponding row of the initial lil_array and <code>arr</code>.</p> <p>I know how to do this in several ways transforming the lil_array into a matrix first or the rows into ndarrays or lists but this would lose the efficiency gained by exploiting the sparse property of this matrix. (This is a toy example but my problem involves a way bigger matrix)</p> <p>How can I produce the output in an efficient and clean way without turning the sparse into a matrix?</p>
<python><scipy>
2023-11-03 15:27:58
3
659
Umberto Fontanazza
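One candidate for the question above that avoids densifying (a sketch; worth checking against your SciPy version): sparse matrices support `.multiply`, which broadcasts a 1-D dense array across the rows and returns a sparse (COO-format) result, so only the stored entries are touched:

```python
import numpy as np
from scipy.sparse import lil_array

mask = lil_array((3, 5), dtype=bool)
mask[0, 2] = True; mask[0, 4] = True
mask[1, 0] = True; mask[1, 1] = True; mask[1, 3] = True
mask[2, 0] = True; mask[2, 3] = True

arr = np.array([0, -1, 0, 3, 2])

# Broadcasts arr across each row; the result stays sparse, so the
# boolean matrix is never converted to a dense ndarray.
result = mask.multiply(arr)
dense_view = result.toarray()  # materialized here only for checking
```

Zeros in `arr` can leave explicitly stored zeros in the result; converting to CSR and calling `eliminate_zeros()` drops them if the extra storage matters.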
77,417,898
268,581
Combine two charts (U.S. Treasuries data and $SPX)
<h1>Chart 1</h1> <p>This program creates a chart of the ratio of bills to (notes + bonds) issued by the U.S. Treasury.</p> <pre><code>import requests import pandas as pd from bokeh.plotting import figure, show from bokeh.models import NumeralTickFormatter, HoverTool # --------------------------------------------------------------------- page_size = 10000 url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v1/accounting/od/auctions_query' params = { 'fields' : 'record_date,issue_date,maturity_date,security_type,total_accepted', 'filter' : 'record_date:gte:1900-01-01', 'page[size]' : page_size } response = requests.get(url, params=params) result_json = response.json() df = pd.DataFrame(result_json['data']) # ---------------------------------------------------------------------- df['record_date'] = pd.to_datetime(df['record_date']) df['issue_date'] = pd.to_datetime(df['issue_date']) df['maturity_date'] = pd.to_datetime(df['maturity_date']) df['auction_date'] = pd.to_datetime(df['auction_date']) df['total_accepted'] = pd.to_numeric(df['total_accepted'], errors='coerce') df['total_accepted_neg'] = df['total_accepted'] * -1 # ---------------------------------------------------------------------- bills = df[df['security_type'] == 'Bill'] notes = df[df['security_type'] == 'Note'] bonds = df[df['security_type'] == 'Bond'] # ---------------------------------------------------------------------- freq='Q' # ---------------------------------------------------------------------- bills_issued = bills.groupby(pd.Grouper(key='issue_date', freq=freq))['total_accepted'].sum().to_frame() notes_issued = notes.groupby(pd.Grouper(key='issue_date', freq=freq))['total_accepted'].sum().to_frame() bonds_issued = bonds.groupby(pd.Grouper(key='issue_date', freq=freq))['total_accepted'].sum().to_frame() # ---------------------------------------------------------------------- bills_notes_bonds_issued = bills_issued.merge(notes_issued, how='outer', 
on='issue_date').merge(bonds_issued, how='outer', on='issue_date') bills_notes_bonds_issued.columns = ['bills', 'notes', 'bonds'] bills_notes_bonds_issued['bills_notes_ratio'] = bills_notes_bonds_issued['bills'] / bills_notes_bonds_issued['notes'] bills_notes_bonds_issued['bills_notes_bonds_ratio'] = bills_notes_bonds_issued['bills'] / (bills_notes_bonds_issued['notes'] + bills_notes_bonds_issued['bonds']) # ---------------------------------------------------------------------- p = figure(title=f'Treasury Securities Auctions Data : {freq}', sizing_mode='stretch_both', x_axis_type='datetime', x_axis_label='date', y_axis_label='') p.add_tools(HoverTool( tooltips=[ ('issue_date', '@issue_date{%F}'), ('total_accepted', '@total_accepted{$0.0a}') ], formatters={ '@issue_date': 'datetime' })) p.yaxis.formatter = NumeralTickFormatter(format='0a') p.line(x='issue_date', y='bills_notes_ratio', color='black', legend_label='Bills/Notes ratio', source=bills_notes_bonds_issued) p.legend.click_policy = 'hide' p.legend.location = 'top_left' show(p) # ---------------------------------------------------------------------- </code></pre> <p><a href="https://i.sstatic.net/Zus5b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Zus5b.png" alt="enter image description here" /></a></p> <h1>Chart 2</h1> <p>This program charts the S&amp;P 500:</p> <pre><code>import pandas as pd from bokeh.plotting import figure, show from bokeh.models import NumeralTickFormatter, HoverTool import yfinance as yf # --------------------------------------------------------------------- spx = yf.Ticker('^GSPC') data = spx.history(start='1980-01-01', interval='1d') # ---------------------------------------------------------------------- p = figure(title=f'SPX', sizing_mode='stretch_both', x_axis_type='datetime', x_axis_label='date', y_axis_label='price') p.yaxis.formatter = NumeralTickFormatter(format='0a') p.legend.click_policy = 'hide' p.legend.location = 'top_left' p.line(x='Date', y='Close', 
color='red', legend_label='S&amp;P 500', source=data) show(p) </code></pre> <p><a href="https://i.sstatic.net/hO8xA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hO8xA.png" alt="enter image description here" /></a></p> <h1>Question</h1> <p>What's a good way to plot the data from charts 1 and 2 on the same chart?</p>
<python><pandas><bokeh>
2023-11-03 15:27:30
1
9,709
dharmatech
77,417,774
5,740,734
How to iterate every two elements of a list in an Ansible loop?
<p>I have a list variable with disk names, which can hold a variable but even number of elements (4, 8, 16, etc.), and I want to print every two disks per loop iteration. How can I do this?</p> <pre class="lang-yaml prettyprint-override"><code>--- - hosts: all vars: disks: - sdc - sdd - sde - sdf tasks: - name: Iterate every two elements (disks) in loop shell: echo '/dev/sdc + /dev/sdd' (and in second iteration of loop must be echo '/dev/sde + /dev/sdf') loop: &quot;{{ disks }}&quot; </code></pre>
<python><ansible><jinja2>
2023-11-03 15:11:17
1
655
mocart
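For the question above, Jinja2's built-in `batch` filter is one option: `loop: "{{ disks | batch(2) }}"` makes each `item` a two-element list, usable as `{{ item.0 }}` and `{{ item.1 }}` in the task. The pairing it performs on an even-length list is the same as this Python sketch:

```python
disks = ['sdc', 'sdd', 'sde', 'sdf']

# Non-overlapping pairs, like Jinja2's batch(2) on an even-length list:
# zip the even-indexed elements with the odd-indexed ones.
pairs = list(zip(disks[::2], disks[1::2]))

lines = ['/dev/{} + /dev/{}'.format(a, b) for a, b in pairs]
# one line per pair, e.g. '/dev/sdc + /dev/sdd'
```

If the list could ever have odd length, `batch(2)` yields a final one-element batch, so the task would need to guard against a missing `item.1`.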
77,417,662
3,815,773
How do you get the UTC time offset in Python now that utcnow() is deprecated?
<p>I need the time difference in sec between my current location and UTC. It worked so well with (example timezone Berlin, UTC+1h) :</p> <pre><code>import datetime datetime.datetime.utcnow().timestamp(): 1699017779.016835 datetime.datetime.now().timestamp(): 1699021379.017343 # 3600 sec larger, correct! </code></pre> <p>But now <code>utcnow()</code> is marked as deprecated, and I need <code>now()</code> and load it with a timezone object:</p> <pre><code>datetime.datetime.now(datetime.timezone.utc): 2023-11-03 14:22:59.018573+00:00 datetime.datetime.now(): 2023-11-03 15:22:59.018941 # 1 h later, correct! </code></pre> <p>Also giving correct result. But as I want the timestamp, I do as before, but get the wrong result:</p> <pre><code>datetime.datetime.now(datetime.timezone.utc).timestamp()) 1699021379.019286 # UTC : is actually local timestamp datetime.datetime.now().timestamp()) 1699021379.019619 # Local: why no difference ??? </code></pre> <p>Local is correct, but UTC is now giving the exact same timestamp as Local, and this wrong! Am I missing anything or is this simply a bug?</p> <p><strong>EDIT:</strong></p> <p>After reviewing all the comments I disagree with some commenters - I see nothing wrong with utcnow(). It does give the UTC timestamp for the moment of calling. Of course there is no timezone involved when you ask for UTC time! What does seem wrong is to use timestamps for other places which include the time difference to UTC and view this as a UTC/UNIX timestamp?</p> <p>But perhaps I should make my needs clearer: I want the time difference between my timezone and UTC in seconds. 
Is there a simpler way to get this in Python?</p> <p>None of the answers and comments provide an answer to this question.</p> <p><strong>EDIT2</strong></p> <p>I found 2 possible workarounds:</p> <pre><code>import time import datetime as dt # workaround #1 ts = time.time() utc = dt.datetime.fromtimestamp(ts, tz=dt.timezone.utc) # 2023-11-06 11:49:30.154083+00:00 local = dt.datetime.fromtimestamp(ts, tz=None) # 2023-11-06 12:49:30.154083 ts_utc = dt.datetime.strptime(str(utc)[0:19], &quot;%Y-%m-%d %H:%M:%S&quot;).timestamp() # 1699267770.0 ts_local = dt.datetime.strptime(str(local)[0:19], &quot;%Y-%m-%d %H:%M:%S&quot;).timestamp() # 1699271370.0 # more by 3600 sec TimeZoneOffset = ts_local - ts_utc # +3600 (sec) # workaround #2 TimeZoneOffset = dt.datetime.now().astimezone().utcoffset().total_seconds() # +3600 (sec) </code></pre>
<python><datetime><unix-timestamp>
2023-11-03 14:56:32
1
505
ullix
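For the underlying need in the question above (the local-to-UTC offset in seconds), workaround #2 is the concise form: `astimezone()` with no argument attaches the current local zone, whose `utcoffset()` already accounts for DST at the moment of the call. A minimal sketch:

```python
import datetime as dt

# Aware replacement for the deprecated utcnow():
utc_now = dt.datetime.now(dt.timezone.utc)

# Offset of the local zone from UTC, in seconds (DST-aware, recomputed
# at each call):
offset_seconds = dt.datetime.now().astimezone().utcoffset().total_seconds()
```

The reason the two `.timestamp()` calls in the question agree is that `timestamp()` always returns the absolute epoch time: an aware datetime converts exactly, and a naive one is interpreted as local time, so both describe the same instant.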
77,417,608
2,386,113
How to create a square-shaped plot with the same number of ticks on the x and y axes
<p>I am plotting an ellipse, which could become a circle if the values of the major and minor axis are the same. Therefore, I would like to have a square-shaped plot with the same number of ticks on the x and y-axes. I am already computing the maximum range for ticks in my program but still, the plot is not square-shaped as shown below.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np # Step 1: Set the variables groundtruth_position = (0, 0) groundtruth_sigma_x = 1 groundtruth_sigma_y = 2 # Use the maximum value between groundtruth_sigma_x and groundtruth_sigma_y to decide the ticks max_sigma = max(groundtruth_sigma_x, groundtruth_sigma_y) # Calculate the tick range for both axes tick_range = max_sigma * 1.5 # Step 2: Create a figure and plot the groundtruth_position as a single point in black color fig, ax = plt.subplots() ax.scatter(*groundtruth_position, color='black') # Step 3: Set custom ticks for both axes x_ticks = np.arange(groundtruth_position[0] - tick_range, groundtruth_position[0] + tick_range + 1) y_ticks = np.arange(groundtruth_position[1] - tick_range, groundtruth_position[1] + tick_range + 1) # Set the ticks on the plot ax.set_xticks(x_ticks) ax.set_yticks(y_ticks) # Ensure the aspect ratio is equal to make the plot square-shaped ax.set_aspect('equal') # Create an ellipse using groundtruth_sigma_x and groundtruth_sigma_y theta = np.linspace(0, 2 * np.pi, 100) x = groundtruth_sigma_x * np.cos(theta) + groundtruth_position[0] y = groundtruth_sigma_y * np.sin(theta) + groundtruth_position[1] ax.plot(x, y) # Display the plot plt.grid(True) plt.show() print() </code></pre> <p><a href="https://i.sstatic.net/agb8L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/agb8L.png" alt="enter image description here" /></a></p> <p>How can I have the same range on the x-axis (i.e. from -2 to 2) to make the plot square-shaped?</p>
<python><matplotlib>
2023-11-03 14:49:42
0
5,777
skm
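A square plot in the question above needs two things together: identical limits on both axes, and `set_aspect('equal', adjustable='box')` so Matplotlib shrinks the axes box instead of padding the data limits. A minimal sketch (using the Agg backend so it runs headless; the `1.5` margin mirrors the question's `tick_range`):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

sigma_x, sigma_y = 1, 2
lim = max(sigma_x, sigma_y) * 1.5  # 3.0, same role as tick_range

fig, ax = plt.subplots()
theta = np.linspace(0, 2 * np.pi, 100)
ax.plot(sigma_x * np.cos(theta), sigma_y * np.sin(theta))

ticks = np.arange(-lim, lim + 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)

# Same explicit limits on both axes + box-adjusted equal aspect
# gives a square axes box with matching tick ranges.
ax.set_xlim(-lim, lim)
ax.set_ylim(-lim, lim)
ax.set_aspect('equal', adjustable='box')
```

With the default `adjustable='datalim'`, `set_aspect('equal')` keeps the aspect by stretching one axis's data range, which is why the original plot ended up rectangular.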
77,417,474
7,240,233
plotly sunburst + datapane : bug with markers shape
<p>I have a sunburst figure made with Plotly that I'd like to include in a Datapane report. However, some of the levels are hatched, and the hatching is not displayed in Datapane (it works fine in a standard HTML file with the figure only). I attached a code example to allow testing, taken from the <a href="https://plotly.com/python/sunburst-charts/" rel="nofollow noreferrer">Plotly docs</a></p> <pre><code>#!/usr/bin/env python import plotly.graph_objects as go import datapane as dp # Figure code taken here : https://plotly.com/python/sunburst-charts/ fig = go.Figure( go.Sunburst( labels=[&quot;Eve&quot;, &quot;Cain&quot;, &quot;Seth&quot;, &quot;Enos&quot;, &quot;Noam&quot;, &quot;Abel&quot;, &quot;Awan&quot;, &quot;Enoch&quot;, &quot;Azura&quot;], parents=[&quot;&quot;, &quot;Eve&quot;, &quot;Eve&quot;, &quot;Seth&quot;, &quot;Seth&quot;, &quot;Eve&quot;, &quot;Eve&quot;, &quot;Awan&quot;, &quot;Eve&quot;], values=[65, 14, 12, 10, 2, 6, 6, 4, 4], branchvalues=&quot;total&quot;, textfont_size=16, marker=dict( pattern=dict( shape=[&quot;&quot;, &quot;/&quot;, &quot;/&quot;, &quot;.&quot;, &quot;.&quot;, &quot;/&quot;, &quot;/&quot;, &quot;.&quot;, &quot;/&quot;], solidity=0.9 ) ), ) ) fig.update_layout(margin=dict(t=0, l=0, r=0, b=0)) fig.write_html(&quot;sunburst_test.html&quot;) # Hatching works fine view = dp.Blocks(dp.Plot(fig)) dp.save_report(view, path=&quot;sunburst_test_dp.html&quot;) # Hatching is not displayed </code></pre>
<python><plotly><sunburst-diagram><datapane>
2023-11-03 14:30:30
0
721
Micawber
77,417,274
521,347
Error in connecting to AlloyDB database with IAM user
<p>For Postgres Cloud SQL, we have to use the google cloud-sql-python-connector to connect to the DB. When we establish a connection using an IAM user, we don't have to pass any password if we pass the flag <code>enable_iam_auth</code> as true. I confirmed by looking at the source code of this connector that the password is optional (Line 58 <a href="https://github.com/GoogleCloudPlatform/cloud-sql-python-connector/blob/main/google/cloud/sql/connector/pg8000.py" rel="nofollow noreferrer">here</a>). However, for AlloyDB, we have to use the alloydb-python-connector, and when I don't pass any password for an IAM user, it results in an error. This can be confirmed from the source code, where the password is not optional (Line 50 <a href="https://github.com/GoogleCloudPlatform/alloydb-python-connector/blob/b487f31790a42bda20b4e43a0334c2ce3e9a5994/google/cloud/alloydb/connector/pg8000.py" rel="nofollow noreferrer">here</a>). I tried setting the password as a blank string or None but it didn't work. I also tried setting the flag <code>enable_iam_auth</code> but I got an error that it's an invalid argument.</p> <p>The error occurs because the alloydb-python-connector does not specify any default value while popping the password. Is there any other way we can use this connector with AlloyDB?</p>
<python><google-cloud-sql><google-alloydb>
2023-11-03 14:04:56
1
1,780
Sumit Desai
77,417,218
583,464
parallelize file processing (in loop)
<p>I am trying to parallelize processing files.</p> <p>I am using this code:</p> <pre><code>import pandas as pd import glob from tqdm import tqdm from multiprocessing import Process, Pool SAVE_PATH = './tmp/saved/' data_path = './tmp/data/' def process_files(data_path): for idx_file, file in tqdm(enumerate(glob.glob(data_path + '*.txt'))): try: with open(file) as f: lines = f.readlines() # create a dataframe df = pd.DataFrame.from_dict({'title':lines}) df.to_csv(SAVE_PATH + str(idx_file) + '.csv', index=False) except Exception as ex: print('\nError in file: {}\n'.format(file)) continue processes = [] for i, input_file in enumerate(data_path): input_path = data_path[i] process_files(input_path) process = Process(target=process_files, args=(input_path)) processes.append(process) process.start() print('Waiting for the parallel process...') process.join() </code></pre> <p>but it processes the files many times.</p> <p>I need to keep the loop in the <code>process_files</code> function.</p> <p>So, I tried:</p> <pre><code>pool = Pool(processes=4) pool.map(process_files, data_path) pool.close() </code></pre> <p>but again, it processes the files multiple times.</p> <p>So, after the comments, I can run the code:</p> <pre><code>import pandas as pd import glob from tqdm import tqdm from multiprocessing import Pool SAVE_PATH = './tmp/saved/' data_path = './tmp/data/' def process_files(input_data): file, idx_file = input_data try: with open(file) as f: lines = f.readlines() df = pd.DataFrame.from_dict({'title':lines}) df.to_csv(SAVE_PATH + str(idx_file) + '.csv', index=False) print('Finished\n') except Exception as ex: print('\nError in file: {}\n'.format(file)) file = [f for f in glob.glob(data_path + '*.txt')] idx_file = [idx for idx in range(len(file))] input_data = zip(file, idx_file) workers = 4 pool = Pool(processes=workers) pool.map(process_files, input_data) pool.close() </code></pre> <p>but, to extend further in my case, when I try to append to <code>USER_ID</code> 
list:</p> <pre><code>USER_ID = [] def process_files(input_data): file, idx_file = input_data try: with open(file) as f: lines = f.readlines() USER_ID.append(idx_file) print(USER_ID[idx_file]) # create a dataframe df = pd.DataFrame.from_dict({'title':lines}) df.to_csv(SAVE_PATH + str(idx_file) + '.csv', index=False) print('Finished\n') except Exception as ex: print('\nError in file: {}\n'.format(file)) </code></pre> <p>I am receiving <code>error in file</code>.</p>
<python><parallel-processing><multiprocessing>
2023-11-03 13:58:34
2
5,751
George
77,417,118
2,292,133
Pydantic 2 - Editing alias after object generation
<p>I want to edit the alias value on a Pydantic model after it has been created:</p> <pre><code>class MyModel(BaseModel): name: str age: int obj = MyModel(age=8, name='Brian') field = obj.model_fields['name'] field.alias = 'full_name' [IN] obj.model_dump(by_alias=True) [OUT] { 'name': 'Brian', 'age': 8, } </code></pre> <p>I would want the key <code>name</code> here to be <code>full_name</code>. I've tried using <code>model_construct</code> to rebuild a model, but I would need to edit the alias on the <code>MyModel</code> class I would guess, which I do not want to do. Any ideas?</p>
<python><pydantic>
2023-11-03 13:44:18
1
767
Tom Hamilton Stubber
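Pydantic 2 compiles its serializer from the core schema when the class is created, so mutating `model_fields[...].alias` afterwards generally has no effect on `model_dump` unless the class schema is rebuilt, and a class-level rebuild would affect every instance anyway. A lighter per-call workaround for the question above is to rename keys on the dumped dict; the sketch below is plain Python, so it runs without Pydantic installed (`dump` is a stand-in for `obj.model_dump()`):

```python
def apply_aliases(data, aliases):
    """Rename selected keys of a dumped model, leaving the class alone."""
    return {aliases.get(key, key): value for key, value in data.items()}

dump = {'name': 'Brian', 'age': 8}  # stand-in for obj.model_dump()
renamed = apply_aliases(dump, {'name': 'full_name'})
# renamed maps 'full_name' to 'Brian' while 'age' is untouched
```

If the alias is known up front, declaring it on the model with `Field(serialization_alias='full_name')` is the supported route; the dict rename is only for the after-the-fact case in the question.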
77,416,929
3,190,076
Can I use the same lock on multiple threads if they do not access the same variable?
<p>Imagine I have two threads, each modifying a different variable. Can I pass the same lock object to them, or shall I use two separate locks? In general, when shall I use multiple locks?</p> <p>Here is a toy example:</p> <pre><code>from threading import Thread, Lock from time import sleep def task(lock, var): with lock: var = 1 sleep(5) lock = Lock() var1 = [] var2 = [] Thread(target=task, args=(lock, var1)).start() Thread(target=task, args=(lock, var2)).start() </code></pre> <p>or is it better</p> <pre><code>lock1 = Lock() lock2 = Lock() var1 = [] var2 = [] Thread(target=task, args=(lock1, var1)).start() Thread(target=task, args=(lock2, var2)).start() </code></pre>
<python><multithreading><locking>
2023-11-03 13:11:57
1
10,889
alec_djinn
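For the question above: sharing one lock is correct but serializes the two threads even though they never touch the same data, so the usual rule is one lock per independent protected resource (the second variant). A sketch; note that the toy `var = 1` in the question only rebinds a local name, so the version below mutates the list to show the effect:

```python
from threading import Thread, Lock

def task(lock, var):
    with lock:           # serializes only threads holding THIS lock
        var.append(1)    # actually mutates the shared list

lock1, lock2 = Lock(), Lock()   # one lock per independent variable
var1, var2 = [], []

t1 = Thread(target=task, args=(lock1, var1))
t2 = Thread(target=task, args=(lock2, var2))
t1.start(); t2.start()
t1.join(); t2.join()
# neither thread ever waits on the other's lock
```

With the single shared lock, the `sleep(5)` inside the critical section would make the second thread wait five seconds for no reason; with separate locks both run concurrently.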
77,416,883
9,494,140
What is the proper way to use aggregate with complicated, structured Django models?
<p>I have a Django app that represents a football league and should show scores points and other stuff, I need to create a function based on the models in this app to get the sum of goals, points, matches won, and positions in the current season, here are my models :</p> <p><strong>models.py</strong></p> <pre class="lang-py prettyprint-override"><code>class TeamName(models.Model): &quot;&quot;&quot; Stores Available team name to be picked later by users &quot;&quot;&quot; name = models.CharField(max_length=33, verbose_name=_( &quot;Team Name&quot;), help_text=_(&quot;Name of the team to be used by players&quot;)) logo = models.FileField(upload_to=&quot;uploads&quot;, verbose_name=_( &quot;Logo Image&quot;), help_text=_(&quot;The File that contains the team logo image&quot;), null=True) def image_tag(self): &quot;&quot;&quot; This method created a thumbnil of the image to be viewed at the listing of logo objects :model:'teams.models.TeamName' &quot;&quot;&quot; return mark_safe(f'&lt;img src=&quot;/uploads/{self.logo}&quot; width=&quot;100&quot; height=&quot;100&quot; /&gt;') image_tag.short_description = _(&quot;Logo&quot;) image_tag.allow_tags = True class Meta: &quot;&quot;&quot; Defines the name of the model that will be viewied by users &quot;&quot;&quot; verbose_name = _(&quot;1. 
Team Name&quot;)

    def __str__(self) -&gt; str:
        &quot;&quot;&quot;
        Make sure to view the name as string not the id or pk
        &quot;&quot;&quot;
        return str(self.name)


class TeamStrip(models.Model):
    &quot;&quot;&quot;
    Stores Available team Strip to be picked later by users
    &quot;&quot;&quot;
    image = models.FileField(upload_to=&quot;uploads/uploads&quot;, verbose_name=_(
        &quot;Team Strips&quot;), help_text=_(&quot;A Strip to be used later by users&quot;))

    def image_tag(self):
        &quot;&quot;&quot;
        This method creates a thumbnail of the image to be viewed at the
        listing of logo objects :model:'teams.models.TeamLogo'
        &quot;&quot;&quot;
        return mark_safe(f'&lt;img src=&quot;/uploads/{self.image}&quot; width=&quot;100&quot; height=&quot;100&quot; /&gt;')
    image_tag.short_description = 'Image'
    image_tag.allow_tags = True

    class Meta:
        &quot;&quot;&quot;
        Defines the name of the model that will be viewed by users
        &quot;&quot;&quot;
        verbose_name = _(&quot;2. Team Strip&quot;)


class Team(models.Model):
    &quot;&quot;&quot;
    Stores Available teams
    &quot;&quot;&quot;
    name = models.ForeignKey(TeamName, on_delete=models.CASCADE, related_name=&quot;team_set_for_name&quot;,
                             verbose_name=_(&quot;Team Name&quot;), help_text=&quot;Name of the team&quot;)
    home_strip = models.ForeignKey(TeamStrip, on_delete=models.CASCADE, related_name=&quot;team_set_for_home_teamstrip&quot;,
                                   verbose_name=_(&quot;Team Home Strip&quot;), help_text=&quot;Home Shirt for the team&quot;)
    away_strip = models.ForeignKey(TeamStrip, on_delete=models.CASCADE, related_name=&quot;team_set_for_away_teamstrip&quot;,
                                   verbose_name=_(&quot;Team Away Strip&quot;), help_text=&quot;Away Shirt for the team&quot;)
    league = models.ForeignKey(&quot;leagues.LeagueSeason&quot;, on_delete=models.CASCADE, related_name=&quot;team_set_for_league&quot;,
                               null=True, verbose_name=_(&quot;League Season&quot;),
                               help_text=_(&quot;League season that team plays in &quot;))
    cap = models.ForeignKey(&quot;players.PlayerProfile&quot;, on_delete=models.CASCADE,
                            related_name=&quot;team_set_for_cap_playerprofile&quot;,
                            verbose_name=_(&quot;Team Captain&quot;), help_text=_(&quot;Captain of the team&quot;))
    players = models.ManyToManyField(&quot;players.PlayerProfile&quot;, blank=True, verbose_name=_(
        &quot;Team Players&quot;), help_text=_(&quot;Players that is playing in the team&quot;), related_name=&quot;team_set_for_players&quot;)
    average_skill = models.DecimalField(max_digits=5, decimal_places=2, default=0, verbose_name=_(
        &quot;Average Team Skill&quot;), help_text=_(&quot;An Average of Player's skills&quot;))
    points = models.PositiveIntegerField(default=0, verbose_name=_(&quot;Team Points&quot;),
                                         help_text=_(&quot;Team points in the current league season&quot;))

    def logo_tag(self):
        &quot;&quot;&quot;
        This method creates a thumbnail of the image to be viewed at the
        listing of logo objects :model:'teams.models.TeamName'
        &quot;&quot;&quot;
        return mark_safe(f'&lt;img src=&quot;/uploads/{self.name.logo}&quot; width=&quot;100&quot; height=&quot;100&quot; /&gt;')
    logo_tag.short_description = _(&quot;Logo&quot;)
    logo_tag.allow_tags = True

    def home_strip_tag(self):
        &quot;&quot;&quot;
        This method creates a thumbnail of the image to be viewed at
        &quot;&quot;&quot;
        return mark_safe('&lt;img src=&quot;/uploads/%s&quot; width=&quot;50&quot; height=&quot;50&quot; /&gt;' % (self.home_strip.image))
    home_strip_tag.short_description = _(&quot;Home Strip&quot;)

    def away_strip_tag(self):
        &quot;&quot;&quot;
        This method creates a thumbnail of the image to be viewed at
        &quot;&quot;&quot;
        return mark_safe('&lt;img src=&quot;/uploads/%s&quot; width=&quot;50&quot; height=&quot;50&quot; /&gt;' % (self.away_strip.image))
    away_strip_tag.short_description = _(&quot;Away Strip&quot;)

    class Meta:
        &quot;&quot;&quot;
        Defines the name of the model that will be viewed by users
        Defines the ordering of queryset
        &quot;&quot;&quot;
        verbose_name = _(&quot;3. Team&quot;)
        ordering = [&quot;-points&quot;]

    def __str__(self) -&gt; str:
        &quot;&quot;&quot;
        Make sure to view the name as string not the id or pk
        &quot;&quot;&quot;
        return mark_safe(
            f&quot;{self.name.name} {self.league.region.name_ar} {self.league.region.name_en}&quot;)  # pylint: disable=maybe-no-member

    def save(self, *args, **kwargs):
        &quot;&quot;&quot;
        This method removes the saved :model:'teams.models.TeamStrip' -
        :model:'teams.models.TeamLogo' - :model:'teams.models.TeamName'
        from the Regions :model:locations.models.Region
        &quot;&quot;&quot;
        if not self.pk:
            self.league.available_team_names.remove(
                self.name)
        super(Team, self).save(*args, **kwargs)

    @staticmethod
    def autocomplete_search_fields():
        &quot;&quot;&quot;
        This method used to define what fields to be searched by user
        in admin dashboard
        &quot;&quot;&quot;
        return (&quot;id__iexact&quot;, &quot;name__name__icontains&quot;,
                &quot;league__region__name_ar__icontains&quot;, &quot;league__region__name_en__icontains&quot;,)


class JoinRequest(models.Model):
    &quot;&quot;&quot;
    Store available Join Requests
    &quot;&quot;&quot;
    status_choices = (
        ('pending', 'pending'),
        ('accepted', 'accepted'),
        ('refused', 'refused'),
    )
    created = models.DateTimeField(auto_now_add=True, verbose_name=_(
        &quot;Created on&quot;), help_text=_(&quot;Data the request created at&quot;))
    team = models.ForeignKey(Team, on_delete=models.CASCADE, related_name=&quot;joinrequest_set_for_team&quot;,
                             verbose_name=_(&quot;Team&quot;), help_text=_(&quot;Team the request to join sent to&quot;))
    player = models.ForeignKey(&quot;players.PlayerProfile&quot;, on_delete=models.CASCADE,
                               related_name=&quot;joinrequest_set_for_playerprofile&quot;,
                               verbose_name=_(&quot;Player&quot;), help_text=_(&quot;Player that sent the request&quot;))
    status = models.CharField(choices=status_choices, max_length=33, default='pending',
                              verbose_name=_(&quot;Status&quot;), help_text=_(&quot;Status of the request&quot;))

    class Meta:
        &quot;&quot;&quot;
        Defines the name of the model that will be viewed by users
        &quot;&quot;&quot;
        verbose_name = _(&quot;4. Join Requests&quot;)


class LeagueSeason(models.Model):
    &quot;&quot;&quot;
    Saves League season in the current region

    relations
    -----------
    :models:`locations.models.Region`
    :models:`players.models.PlayerProfile`
    :models:`teams.models.Team`
    :models:`teams.models.TeamName`
    :models:`teams.models.TeamLogo`
    :models:`teams.models.TeamStrip`
    &quot;&quot;&quot;
    status_choices = (
        ('upcoming', 'upcoming'),
        ('current', 'current'),
        ('finished', 'finished'),
    )
    start_date = models.DateField(verbose_name=_(
        &quot;Start Date&quot;), help_text=_(&quot;The date when seasson start&quot;))
    is_accepting = models.BooleanField(default=True, verbose_name=_(
        &quot;Is Accepting Teams&quot;), help_text=_(&quot;True if the league still accepts teams&quot;))
    region = models.ForeignKey(Region, on_delete=models.PROTECT, verbose_name=_(
        &quot;Region&quot;), help_text=_(&quot;The Region of the League seasson&quot;))
    players = models.ManyToManyField(PlayerProfile, verbose_name=_(
        &quot;Players&quot;), help_text=_(&quot;PLayers in this league&quot;))
    teams = models.ManyToManyField('teams.Team', blank=True, verbose_name=_(
        &quot;Teams&quot;), help_text=_(&quot;Teams in this League seasson&quot;))
    status = models.CharField(choices=status_choices, max_length=33, default='upcoming', verbose_name=_(
        &quot;Status&quot;), help_text=_(&quot;Current Status of the league seasson&quot;))
    available_team_names = models.ManyToManyField(TeamName, verbose_name=_(
        &quot;Available Team Names&quot;), help_text=_(&quot;Pickable Team Names in this seasson&quot;))
    available_team_strips = models.ManyToManyField(TeamStrip, verbose_name=_(
        &quot;Available Team Strips&quot;), help_text=_(&quot;Pickable Team Strips in this seasson&quot;))

    class Meta:
        &quot;&quot;&quot;
        Make sure to change the appearance name of the model
        :model:`league.models.LeagueSeasson` to be Seasson
        &quot;&quot;&quot;
        verbose_name = _(&quot;Seasson&quot;)

    def __str__(self) -&gt; str:
        &quot;&quot;&quot;
        Change the name that user see in the lists of
        :model:`leagues.models.LeagueSeasson` to be Region name then start date
        &quot;&quot;&quot;
        return f&quot;{self.region} - {self.start_date}&quot;


class Match(models.Model):
    &quot;&quot;&quot;
    Saves matches data

    Relations
    ----------
    :model:`teams.models.Team`
    :model:`leagues.models.LeagueSeasson`
    :model:`location.models.Location`
    &quot;&quot;&quot;
    date_time = models.DateTimeField(verbose_name=_(
        &quot;Date and time&quot;), help_text=_(&quot;Date and time of the match&quot;))
    home_team = models.ForeignKey('teams.Team', related_name=&quot;home_team_team&quot;, on_delete=models.PROTECT,
                                  verbose_name=_(&quot;Home Team&quot;), help_text=_(&quot;Home Side team in the match&quot;))
    away_team = models.ForeignKey('teams.Team', related_name=&quot;away_team_team&quot;, on_delete=models.PROTECT,
                                  verbose_name=_(&quot;Away Team&quot;), help_text=_(&quot;Away Side team in the match&quot;))
    league = models.ForeignKey(LeagueSeason, on_delete=models.CASCADE, null=True,
                               verbose_name=_(&quot;League Seasson&quot;), help_text=_(&quot;The Seasson of this match&quot;))
    location = models.ForeignKey('locations.Location', on_delete=models.PROTECT, null=True,
                                 verbose_name=_(&quot;Location&quot;),
                                 help_text=_(&quot;Location where the match will be played&quot;))

    class Meta:
        &quot;&quot;&quot;
        Changes the Appearance name of :model:`leagues.models.Match` to be Match
        &quot;&quot;&quot;
        verbose_name = _(&quot;Match&quot;)


class Goal(models.Model):
    &quot;&quot;&quot;
    Saves goal records in every match related to
    :model:`players.models.PlayerProfile` and :model:`teams.models.Team` .
    &quot;&quot;&quot;
    match = models.ForeignKey(Match, on_delete=models.CASCADE, verbose_name=_(
        &quot;Match&quot;), help_text=_(&quot;Match where the goal was scored&quot;))
    team = models.ForeignKey(&quot;teams.Team&quot;, on_delete=models.PROTECT, null=True, blank=True,
                             verbose_name=_(&quot;Team&quot;), help_text=_(&quot;The team scored the goal&quot;))
    player = models.ForeignKey('players.PlayerProfile', related_name=&quot;goal_maker&quot;, on_delete=models.PROTECT,
                               null=True, blank=True, verbose_name=_(&quot;Player&quot;),
                               help_text=_(&quot;Player who scored the goal&quot;))
    assistant = models.ForeignKey('players.PlayerProfile', related_name=&quot;goal_assist&quot;, on_delete=models.PROTECT,
                                  null=True, blank=True, verbose_name=_(&quot;Assist&quot;),
                                  help_text=_(&quot;PLayer who assisted scoring this goal&quot;))

    class Meta:
        &quot;&quot;&quot;
        Make sure to see the model :model:`leagues.models.Goal` name as Goal for user
        &quot;&quot;&quot;
        verbose_name = _(&quot;Goal&quot;)

    def __str__(self) -&gt; str:
        &quot;&quot;&quot;
        Show the object name as the name of the player who scored the goal
        from :model:`users.models.AppUser` .
        &quot;&quot;&quot;
        return f&quot;{self.player.app_user.first_name} {self.player.app_user.last_name}&quot;


class Card(models.Model):
    &quot;&quot;&quot;
    Saves Cards records in every match related to
    :model:`players.models.PlayerProfile` , :model:`leagues.Match` and
    :model:`teams.models.Team` .
    &quot;&quot;&quot;
    CARDS_ENUM = (
        (_(&quot;Red Card&quot;), _(&quot;Red Card&quot;)),
        (_(&quot;Yellow Card&quot;), _(&quot;Yellow Card&quot;)),
    )
    match = models.ForeignKey(Match, on_delete=models.CASCADE, related_name=&quot;card_set_for_match&quot;,
                              verbose_name=_(&quot;Match&quot;), help_text=_(&quot;Match where the goal was scored&quot;))
    team = models.ForeignKey(&quot;teams.Team&quot;, on_delete=models.PROTECT, null=True, blank=True,
                             related_name=&quot;card_set_for_team&quot;, verbose_name=_(&quot;Team&quot;),
                             help_text=_(&quot;The team scored the goal&quot;))
    player = models.ForeignKey('players.PlayerProfile', on_delete=models.PROTECT, null=True, blank=True,
                               related_name=&quot;card_set_for_playerprofile&quot;, verbose_name=_(&quot;Player&quot;),
                               help_text=_(&quot;Player who scored the goal&quot;))
    type = models.CharField(max_length=100, choices=CARDS_ENUM, verbose_name=_(
        &quot;Card Type&quot;), help_text=_(&quot;Type of the card &quot;))

    class Meta:
        &quot;&quot;&quot;
        Make sure to see the model :model:`leagues.models.Card` name as Card for user
        &quot;&quot;&quot;
        verbose_name = _(&quot;Card&quot;)

    def __str__(self) -&gt; str:
        &quot;&quot;&quot;
        Show the object name as the name of the player who scored the goal
        from :model:`users.models.AppUser` .
        &quot;&quot;&quot;
        return f&quot;{self.player.app_user.first_name} {self.player.app_user.last_name} - {self.type}&quot;
</code></pre> <p>Here is the function I have tried to achieve the results with:</p> <p><strong>views.py</strong></p> <pre class="lang-py prettyprint-override"><code>@permission_classes([&quot;IsAuthenticated&quot;])
@api_view([&quot;POST&quot;])
def get_league_scores(request):
    league_id = request.data[&quot;league_id&quot;]
    try:
        season = LeagueSeason.objects.get(pk=league_id)
    except LeagueSeason.DoesNotExist:
        return Response({&quot;error&quot;: &quot;LeagueSeason not found&quot;}, status=404)

    teams = Team.objects.filter(league=season)
    team_stats = []
    for team in teams:
        # Calculate team statistics
        matches_played = team.home_team_team.filter(league=season).count() + team.away_team_team.filter(league=season).count()
        matches_won = team.home_team_team.filter(league=season, home_team=team, home_team_goals__gt=F('away_team_goals')).count() + team.away_team_team.filter(league=season, away_team=team, away_team_goals__gt=F('home_team_goals')).count()
        matches_lost = team.home_team_team.filter(league=season, home_team=team, home_team_goals__lt=F('away_team_goals')).count() + team.away_team_team.filter(league=season, away_team=team, away_team_goals__lt=F('home_team_goals')).count()
        matches_tied = matches_played - (matches_won + matches_lost)
        goals_scored = team.home_team_team.filter(league=season).aggregate(Sum('home_team_goals'))['home_team_goals__sum'] + team.away_team_team.filter(league=season).aggregate(Sum('away_team_goals'))['away_team_goals__sum']
        points = (matches_won * 3) + matches_tied
        team_stats.append(
            {
                &quot;team_name&quot;: team.name.name,
                &quot;matches_played&quot;: matches_played,
                &quot;matches_won&quot;: matches_won,
                &quot;matches_lost&quot;: matches_lost,
                &quot;matches_tied&quot;: matches_tied,
                &quot;goals_scored&quot;: goals_scored,
                &quot;points&quot;: points,
            }
        )

    # Sort the teams based on points, from highest to lowest
    sorted_team_stats = sorted(team_stats, key=lambda x: x[&quot;points&quot;], reverse=True)

    # Add position to each team based on the sorted order
    for i, team_stat in enumerate(sorted_team_stats, start=1):
        team_stat[&quot;position&quot;] = i

    return Response(sorted_team_stats, status=200)
</code></pre> <p>I get this error when I try to call this API:</p> <pre><code>Cannot resolve keyword 'home_team_goals' into field. Choices are: away_team, away_team_id, card_set_for_match, date_time, goal, home_team, home_team_id, id, league, league_id, location, location_id
</code></pre>
<python><django><aggregate>
2023-11-03 13:05:23
2
4,483
Ahmed Wagdi
77,416,832
7,089,239
Proper exception handling when interleaving context managers
<p>I need to lock the <code>__enter__</code> of a context manager but release the lock right after the context has been entered. How can this be done cleanly? Note that both contexts are provided in libraries, so I cannot move the first context into the enter of the second. (In my case the first is an async lock and the second is an async botocore session).</p> <p>In simple terms, what I'm trying to achieve is:</p> <pre class="lang-py prettyprint-override"><code>ctx1.__enter__()
ctx2.__enter__()
ctx1.__exit__()
...
ctx2.__exit__()
</code></pre> <p>Using a lock makes this easier, because it doesn't <em>have</em> to be a context:</p> <pre class="lang-py prettyprint-override"><code>lock.acquire()
with context():
    lock.release()
    ...
</code></pre> <p>But how can I add proper exception handling? A try-except around the with statement would only release the lock after the context. Is the only way to call enter and exit for the second context manually and use a flag?</p> <pre class="lang-py prettyprint-override"><code>entered = False
try:
    with lock:
        context.__enter__()
        entered = True
    ...
finally:
    if entered:
        context.__exit__()
</code></pre>
<python><concurrency><contextmanager>
2023-11-03 12:57:00
0
2,688
Felix
77,416,739
14,649,310
Chain tasks without passing the previous results in Python Celery
<p>I want to create a chain of async events to be executed one after the other, but I am not interested in passing the result of the previous one to the next one, just in executing them in a chain. I have something like this:</p> <pre><code>tasks = []
for document_id in document_ids:
    tasks.append(
        utils.task(
            'tasks.update_document',
            document_id=document_id,
        ),
    )
result = chain(*tasks).apply_async(ignore_result=True)
</code></pre> <p>My task receiving this is something like:</p> <pre><code>@app.task
def update_document(document_id, *args, **kwargs):
    if not isinstance(my_variable, int):
        raise TypeError
    return f'Done: {document_id}'
</code></pre> <p>BUT it seems that the <code>ignore_result=True</code> doesn't do anything; instead, the result of the previous task gets passed to the next one and makes the second task fail. Is there any way to just chain the tasks without adding any other argument to the following tasks?</p>
<python><celery><celery-task>
2023-11-03 12:39:32
1
4,999
KZiovas
77,416,725
5,132,860
Unable to launch Selenium, encountering DeprecationWarning and WebDriverException errors
<p>Suddenly today, Selenium could not be launched in my project. The error messages are as follows:</p> <pre><code>main.py:11: DeprecationWarning: executable_path has been deprecated, please pass in a Service object
  driver = webdriver.Chrome(options=options,executable_path='drivers/chromedriver-linux64/chromedriver')
Traceback (most recent call last):
  File &quot;/usr/local/lib/python3.10/runpy.py&quot;, line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File &quot;/usr/local/lib/python3.10/runpy.py&quot;, line 86, in _run_code
    exec(code, run_globals)
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py&quot;, line 39, in &lt;module&gt;
    cli.main()
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py&quot;, line 430, in main
    run()
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py&quot;, line 284, in run_file
    runpy.run_path(target, run_name=&quot;__main__&quot;)
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py&quot;, line 321, in run_path
    return _run_module_code(code, init_globals, run_name,
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py&quot;, line 135, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File &quot;/root/.vscode-server/extensions/ms-python.python-2023.20.0/pythonFiles/lib/python/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py&quot;, line 124, in _run_code
    exec(code, run_globals)
  File &quot;/app/main.py&quot;, line 11, in &lt;module&gt;
    driver = webdriver.Chrome(options=options,executable_path='drivers/chromedriver-linux64/chromedriver')
  File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/chrome/webdriver.py&quot;, line 69, in __init__
    super().__init__(DesiredCapabilities.CHROME['browserName'], &quot;goog&quot;,
  File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/chromium/webdriver.py&quot;, line 89, in __init__
    self.service.start()
  File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py&quot;, line 98, in start
    self.assert_process_still_running()
  File &quot;/usr/local/lib/python3.10/site-packages/selenium/webdriver/common/service.py&quot;, line 110, in assert_process_still_running
    raise WebDriverException(
selenium.common.exceptions.WebDriverException: Message: Service drivers/chromedriver-linux64/chromedriver unexpectedly exited. Status code was: 255
</code></pre> <p>The settings of my Dockerfile are as follows:</p> <pre><code>FROM python:3.10-buster

# Install necessary packages
RUN apt-get update &amp;&amp; apt-get install -y \
    curl unzip gettext python-babel \
    ffmpeg \
    poppler-utils \
    fonts-takao-* fonts-wqy-microhei fonts-unfonts-core

# Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb &amp;&amp; \
    dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install \
    &amp;&amp; rm google-chrome-stable_current_amd64.deb

# Download and extract the latest version of ChromeDriver
RUN CHROME_DRIVER_VERSION=$(curl -sL &quot;https://chromedriver.storage.googleapis.com/LATEST_RELEASE&quot;) &amp;&amp; \
    curl -sL &quot;https://chromedriver.storage.googleapis.com/$CHROME_DRIVER_VERSION/chromedriver_linux64.zip&quot; &gt; chromedriver.zip &amp;&amp; \
    unzip chromedriver.zip -d /usr/local/bin &amp;&amp; \
    rm chromedriver.zip

# Install Python dependencies
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip &amp;&amp; pip install -r requirements.txt

# Set and move to APP_HOME
ENV APP_HOME /app
WORKDIR $APP_HOME
ENV PYTHONPATH $APP_HOME

# Copy local code to the container image
COPY . .
</code></pre> <p>requirements.txt</p> <pre><code>selenium==4.15.1
</code></pre> <p>And, I created a simple <code>main.py</code> to just launch Selenium.</p> <pre class="lang-py prettyprint-override"><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.headless = True
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=options)
driver.get('https://www.google.com')
print(driver.title)
driver.quit()
</code></pre> <h2>What I did</h2> <p>I downloaded the drivers of version 119 and 120 from this link, set them in <code>executable_path</code>, and executed it, but the same error was returned. <a href="https://googlechromelabs.github.io/chrome-for-testing/#stable" rel="nofollow noreferrer">https://googlechromelabs.github.io/chrome-for-testing/#stable</a></p> <p>Please help me!</p> <h2>Environment</h2> <p>MacOS 13.6 Apple M2 Docker desktop 4.21.1</p>
<python><python-3.x><selenium-webdriver><selenium-chromedriver>
2023-11-03 12:37:05
1
3,104
Nori
77,416,517
9,318,323
How to query subset of data from a file on azure datalake
<p>My datalake has json files. Each file is a pandas dataframe represented as a json. So a file may have this inside:</p> <pre><code>[
    {
        &quot;col1&quot;:&quot;1&quot;,
        &quot;col2&quot;:&quot;3&quot;
    },
    {
        &quot;col1&quot;:&quot;2&quot;,
        &quot;col2&quot;:&quot;4&quot;
    }
]
</code></pre> <p>Which translates into this dataframe:</p> <pre><code>| col1 | col2 |
| ---- | ---- |
| 1    | 3    |
| 2    | 4    |
</code></pre> <p>I work with datalake files using <code>DataLakeServiceClient</code> from <code>azure.storage.filedatalake</code>.</p> <p>What I want is to <a href="https://learn.microsoft.com/en-us/python/api/azure-storage-file-datalake/azure.storage.filedatalake.datalakefileclient?view=azure-python#azure-storage-filedatalake-datalakefileclient-query-file" rel="nofollow noreferrer">query_file</a> to extract the dataframe only partially before I read it. <strong>My question is what query text I must provide in order to achieve this? Is it possible to do it this way in the first place?</strong></p> <p>The example below returns me the content of the whole file as a string. I read it into a pandas dataframe using <code>read_json</code>. I know that I can work with it as a dataframe after I read it, but I want to do my selection when querying.</p> <p>Sample code:</p> <pre><code>import io
import pandas as pd
from azure.storage.filedatalake import DataLakeServiceClient, DelimitedJsonDialect
from azure.identity import ClientSecretCredential

adls_client = DataLakeServiceClient(account_url='...', credential=ClientSecretCredential(...))
filesystem_client = adls_client.get_file_system_client('...')
file_client = filesystem_client.get_file_client('test_dataframe.json')

input_format = DelimitedJsonDialect(has_header=True)
reader = file_client.query_file(
    'SELECT * from DataLakeStorage',  # the problem is here
    file_format=input_format
)
json_str = reader.readall().decode('utf8')
df = pd.read_json(io.StringIO(json_str))
</code></pre> <p>I tried using <code>SELECT col1 from DataLakeStorage</code> but it returns <code>{}\n</code>.</p>
<python><pandas><azure><azure-data-lake-gen2>
2023-11-03 12:01:23
1
354
Vitamin C
77,416,375
4,726,173
Import locally installed python package from elsewhere
<p>Yet another stupid question about Python packaging. I've searched stackoverflow but am unable to find an answer, or, the answers I find do not work.</p> <p>The task: Load a package, locally installed (with or without the -e editable flag), from another .py file somewhere else, or from python on the command line.</p> <p>The setup:</p> <pre><code>test-proj1
└── run.py
test-proj2
├── setup.py
├── test-proj2
│   ├── proj2_functions.py
│   └── __init__.py
└── __init__.py
</code></pre> <p>My setup.py in <code>test-proj2</code> (top directory):</p> <pre><code>from setuptools import setup, find_packages

setup(
    name=&quot;test_proj2&quot;,
    version=&quot;1.0.0&quot;,
    description=&quot;...&quot;,
    long_description=&quot;...description&quot;,
    long_description_content_type=&quot;text/markdown&quot;,
    package_dir={&quot;&quot;: &quot;test-proj2&quot;},
    packages=find_packages(where=&quot;test-proj2&quot;),
)
</code></pre> <p>In <code>test-proj2</code> (top directory), I've run <code>python3 -m pip install .</code> , or <code>python3 -m pip install -e . </code> (tried both, none worked).</p> <p>After this, from the pip in the virtualenv I am using (<code>which pip</code> shows the right path, <code>which python</code> as well), <code>pip freeze | grep test-</code> shows the package as installed.</p> <ul> <li><p>From <code>test-proj1/run.py</code>, or just python opened in a console, how can I import</p> <p>a) just the package <code>test_proj2</code> (it's okay if it is empty)?<br /> b) the function <code>add</code> in <code>proj2_functions.py</code> ?</p> </li> </ul> <p>I have tried adding the absolute path to <code>test-proj2</code> to the Pythonpath, not even then can I import <code>test_proj2</code> or <code>test_proj2.test_proj2</code>. The error message is:</p> <blockquote> <p>ModuleNotFoundError: No module named 'test_proj2'</p> </blockquote> <hr /> <p>My mental model is that installing the package, with or without -e, makes it available to the Python binary I used when calling <code>python -m pip install ...</code>. Or is it a problem with <code>-</code> in the directory names? I read that <code>-</code> can just be replaced by <code>_</code> when importing (and all sorts of other equivalent names could be used as well).</p> <p>I'll try not using packages for now, but it really bugs me that I cannot get this to run.</p>
<python><pip><setuptools><python-packaging>
2023-11-03 11:37:31
0
627
dasWesen
77,416,240
7,988,415
`ipympl` interactive plots are not interactive, no navigation bar or buttons, in jupyter notebook in pycharm
<p>I am running the example on <code>ipympl</code> homepage in a jupyter notebook in pycharm:</p> <pre><code>%matplotlib ipympl
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 2*np.pi, 100)
y = np.sin(3*x)
ax.plot(x, y)
</code></pre> <p>And I am not getting interactive toolbar buttons. I have tried the following.</p> <ol> <li>Without <code>ipympl</code> I am getting a static plot, not interactive.</li> <li>Using <code>ipympl</code>, I am getting still a static plot, but it looks different to the one without <code>ipympl</code>.</li> <li>Using <code>Qt5Agg</code> as backend without <code>ipympl</code>, I am not getting plots at all</li> <li>Using <code>Qt5Agg</code> and <code>ipympl</code>, I get a poped up window for the plot, but that window is stuck, without content.</li> </ol> <h3>This is with <code>ipympl</code>:</h3> <p><a href="https://i.sstatic.net/bZ2nH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bZ2nH.png" alt="With ipympl" /></a></p> <h3>This is without <code>ipympl</code>:</h3> <p><a href="https://i.sstatic.net/Esive.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Esive.png" alt="Without ipympl" /></a></p> <h3>Using <code>Qt5Agg</code> as backend without <code>ipympl</code></h3> <pre><code># %matplotlib ipympl
import matplotlib
matplotlib.use('Qt5Agg')
</code></pre> <p><a href="https://i.sstatic.net/6VSO4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6VSO4.png" alt="enter image description here" /></a></p> <h3>Using <code>Qt5Agg</code> and <code>ipympl</code></h3> <pre><code>%matplotlib ipympl
import matplotlib
matplotlib.use('Qt5Agg')
</code></pre> <p><a href="https://i.sstatic.net/g08zv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/g08zv.png" alt="enter image description here" /></a></p> <p>In none of the cases is there interactive navigation bar or buttons. I am not sure if this is a pycharm problem, or my matplotlib versions (tried 3.4.0 3.5.0 3.8.1) or something else.</p>
<python><matplotlib><jupyter-notebook><ipympl>
2023-11-03 11:17:51
0
1,054
Alex
77,416,182
6,541,082
Pyspark Json Extract Values
<p>I am working with a PySpark DataFrame that has a JSON column from CSV file <code>budgetthresholdaction</code>. The JSON structure looks like this:</p> <pre><code>{
    &quot;budgetThresholdAction0threshold0&quot;: {
        &quot;thresholdValue&quot;: 80,
        &quot;action&quot;: &quot;promotionsetup.BudgetThresholdPercentageParameter&quot;
    },
    &quot;budgetThresholdAction1action1&quot;: {
        &quot;thresholdName&quot;: &quot;budgetThresholdAction1threshold0&quot;,
        &quot;action&quot;: &quot;promotionsetup.BudgetTerminateActionParameter&quot;
    },
    &quot;budgetThresholdAction1action2&quot;: {
        &quot;thresholdName&quot;: &quot;budgetThresholdAction1threshold0&quot;,
        &quot;action&quot;: &quot;promotionsetup.BudgetTurnOffMessagingActionParameter&quot;
    },
    &quot;budgetThresholdAction1threshold0&quot;: {
        &quot;name&quot;: &quot;budgetThresholdAction1threshold0&quot;,
        &quot;thresholdValue&quot;: 95,
        &quot;action&quot;: &quot;promotionsetup.BudgetThresholdPercentageParameter&quot;
    },
    &quot;budgetThresholdAction1action0&quot;: {
        &quot;thresholdName&quot;: &quot;budgetThresholdAction1threshold0&quot;,
        &quot;action&quot;: &quot;promotionsetup.BudgetNotifyActionParameter&quot;
    },
    &quot;budgetThresholdAction0action0&quot;: {
        &quot;thresholdName&quot;: &quot;budgetThresholdAction0threshold0&quot;,
        &quot;action&quot;: &quot;promotionsetup.BudgetNotifyActionParameter&quot;
    }
}
</code></pre> <p>I need to create two separate columns, threshold and action, in my PySpark DataFrame with the following values:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><code>threshold</code></th> <th><code>action</code></th> </tr> </thead> <tbody> <tr> <td>80</td> <td><code>promotionsetup.BudgetThresholdPercentageParameter</code></td> </tr> <tr> <td>95</td> <td><code>promotionsetup.BudgetNotifyActionParameter</code></td> </tr> <tr> <td>95</td> <td><code>promotionsetup.BudgetTurnOffMessagingActionParameter</code></td> </tr> <tr> <td>95</td> <td><code>promotionsetup.BudgetTerminateActionParameter</code></td> </tr> </tbody> </table> </div> <p>I've attempted to use <code>get_json_object</code> and <code>when</code> functions, but I'm struggling to get the desired output. Could you please guide me on how to achieve this transformation in PySpark?</p> <p>What I tried:</p> <pre><code># To get threshold values as array and then explode it using:
df.select(array(get_json_object(col('budgetthresholdaction'),'$.budgetThresholdAction0threshold0.thresholdValue'),get_json_object(col('budgetthresholdaction'),'$.budgetThresholdAction1threshold0.thresholdValue')).alias('T')).show(1,False)
</code></pre> <p>But I'm struggling to map Threshold and Action values, as for a single threshold there can be multiple actions.</p> <p>We can have more than 2 thresholds, and every threshold can have 1 or more actions.</p> <p>Mapping is: Action<code>N</code>Threshold0 =&gt; Action<code>N</code>Action<code>X</code></p> <p>For example:</p> <p>Action0Threshold0 = [ Action0action0, Action0action1, Action0action2...]</p> <p>Action1Threshold0 = [ Action1action0, Action1action1...]</p>
<python><json><pyspark><explode>
2023-11-03 11:07:49
0
553
pkd
77,416,106
3,153,443
How do I wrap a byte string in a BytesIO object using Python?
<p>I'm writing a script with the Pandas library that involves reading the contents of an excel file.</p> <p>The line currently looks like this:</p> <pre><code>test = pd.read_excel(archive_contents['spreadsheet.xlsx'])
</code></pre> <p>The script works as intended with no issues, but I get a future warning depicting the following:</p> <pre><code>FutureWarning: Passing bytes to 'read_excel' is deprecated and will be removed in a future version. To read from a byte string, wrap it in a `BytesIO` object.
  test = pd.read_excel(archive_contents['spreadsheet.xlsx'])
</code></pre> <p>In the interest of future proofing my code, how would I go about doing that?</p>
<python><pandas><object><byte><bytesio>
2023-11-03 10:54:37
2
583
user3153443
77,416,013
2,123,706
Why does .to_sql() return a -1 or 1 after appending to table?
<p>I have the following set up</p> <pre><code>import pandas as pd
from sqlalchemy import create_engine

server = 'server'
database = 'database'
driver = 'driver'
database_con = f'mssql://@{server}/{database}?driver={driver}'
engine = create_engine(database_con, fast_executemany=True)
con = engine.connect()

df = pd.DataFrame({'col1':[1,2,3]})
df.to_sql(name='tableName', con=con, if_exists='append', index=False)
</code></pre> <p>After appending the table, I notice a return of either <code>1</code> or <code>-1</code>.</p> <p>The documentation <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_sql.html</a> states it should return the number of rows impacted or None, but it is always <code>+-1</code></p> <p>Does anyone know what these mean?</p>
<python><sql-server><pandas><sqlalchemy>
2023-11-03 10:43:05
1
3,810
frank
77,415,861
2,245,633
File open and apostrophes in file names
<p>I have some simple code [CentOS 7 &amp; python 3.6, because... stuck with a VM image]:</p> <pre class="lang-py prettyprint-override"><code>with open(filename, &quot;r&quot;) as read_file:
    # do stuff
</code></pre> <p>Which is grand &amp; wonderful - does the self-closing thing, and has no problems with spaces in filenames.</p> <p>Except it can't cope with filenames with apostrophes:</p> <blockquote> <p>Error reading Why Doesn't this Code Work?!.py: near &quot;t&quot;: syntax error</p> </blockquote> <p>I tried a sneaky</p> <pre class="lang-py prettyprint-override"><code>with open(f&quot;(unknown)&quot;, &quot;r&quot;) as read_file:
    # do stuff
</code></pre> <p>And that made no difference, and even a desperate</p> <pre class="lang-py prettyprint-override"><code>with open(f'&quot;(unknown)&quot;', &quot;r&quot;) as read_file:
    # do stuff
</code></pre> <p>Which is just totally broken (whichever way you do the quotes)</p> <p>Is there a way to do this.... noting I'm realistically stuck with python 3.6</p>
<python><python-3.x>
2023-11-03 10:18:57
1
923
CodeGorilla
77,415,449
1,802,826
Force flush Python-part of Bash-script?
<p>I have a simple Bash-script that spends more than 99 % of its time on executing a Python application (the script is just a wrapper that feeds this Python script files in a for-loop and renames the output files). I have the same problem as described <a href="https://stackoverflow.com/questions/1429951/force-flushing-of-output-to-a-file-while-bash-script-is-still-running">here</a> and have looked at the answers but don't understand how to use them in my context.</p> <p>I think I need to apply the principles in the answers to that question inside my script, on the line that executes the Python script, but how?</p> <p>My script as pseudo-code:</p> <pre><code>for file in &quot;$directory&quot;; do
    pythonscript &quot;$file&quot; &gt;&gt; &quot;log.txt&quot;
done
</code></pre> <p>I want to flush the pythonscript-line every minute, every line of output it produces or similar, it doesn't matter that much (typically execution takes several hours, I just want to be able to track the output &quot;reasonably&quot; frequently).</p>
<python><bash><shell><buffer><flush>
2023-11-03 09:16:47
1
983
d-b
77,415,274
3,337,089
Clear all the GPU memory used by pytorch in current python code without exiting python
<p>I am running a modified version of a third-party code which uses pytorch and GPU. I run the same model multiple times by varying the configs, which I am doing within python i.e. I have a wrapper python file which calls the model with different configs. But I am getting <code>out-of-memory</code> errors while running the second or third model. That is to say, the model can run once properly without any memory issues. So, if I end the code after running the first model and then start the second model afresh, the code works fine. However, if I chain the models within python, I'm running into <code>out-of-memory</code> issues.</p> <p>I suspect there are some memory leaks within the third-party code. On googling, I found two suggestions. One is to call <code>torch.cuda.empty_cache()</code>, and the other is to delete the tensors explicitly using <code>del tensor_name</code>. However, <code>empty_cache()</code> command isn't helping free the entire memory, and the third-party code has too many tensors for me to delete all the tensors individually. Is there any way to clear the entire GPU memory used by the current python program within the python code itself?</p>
<python><memory-management><pytorch><memory-leaks><out-of-memory>
2023-11-03 08:50:12
2
7,307
Nagabhushan S N
77,415,114
1,142,881
How to make the int values of an Enum to match the corresponding database record id?
<p>I have a database table that contains specific companies that are accessed &quot;statically&quot; throughout the code and for which it would be great to have a &quot;static&quot; way to refer to them, for example:</p> <pre><code>from enum import Enum, auto class CompanyEnum(Enum): APPLE = auto() AMAZON = auto() META = auto() </code></pre> <p>I would like the int value of the Enum to match the database id corresponding to those companies BUT of course I don't want the hardcoded <code>id</code> in the static Enum definition as that would be a terrible solution. Why would that be a terrible solution? Because there is no guarantee that if the database is recreated from a backup or re-generated, the artificial ids will remain the same for those companies.</p> <p>I can do something like (assuming that I am using Peewee):</p> <pre><code>from functools import lru_cache from enum import Enum, auto from models import Company class CompanyEnum(Enum): APPLE = auto() AMAZON = auto() META = auto() @lru_cache @property def id(self): company = Company.get(Company.name.upper() == self.name) return company.id </code></pre> <p>How can I make this dynamically resolved <code>id</code> property, the actual Enum int value?</p>
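One hedged approach: use the functional `Enum` API, which accepts a name-to-value mapping, so the member values can be looked up from the database at class-creation time. The dictionary below is an invented stand-in for the Peewee query (something like `{c.name.upper(): c.id for c in Company.select()}` in the real code); the ids are illustrative only.

```python
from enum import Enum

def load_company_ids():
    # Hypothetical stand-in for querying the Company table; the real
    # implementation would read name/id pairs from the database.
    return {"APPLE": 7, "AMAZON": 12, "META": 3}

# The functional API builds the members with the given values, so each
# member's actual int value IS the database id.
CompanyEnum = Enum("CompanyEnum", load_company_ids())

print(CompanyEnum.APPLE.value)                 # the id from the database
print(CompanyEnum(12) is CompanyEnum.AMAZON)   # lookup by id also works
```

The trade-off: the values are frozen at import time, so the Enum reflects whatever the database held when the module was first loaded — which matches the goal here, since the ids are stable within one database instance.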
<python><enums><peewee>
2023-11-03 08:20:20
1
14,469
SkyWalker
77,415,040
2,517,880
Python difflib.SequenceMatcher comparison issue
<p>I'm trying to compare two large text strings. Each can contain approximately 15 thousand characters. I need to find replace, insert, delete, and equal with their start and end characters by comparing two strings so that further operations can be done properly.</p> <p>I've also tried the difflib library, but it is not giving good results.</p> <pre><code>import difflib para1 = &quot;In the year 2045, the world experienced a technological revolution like never before. The advances in artificial intelligence, robotics, and biotechnology had transformed every aspect of our lives. From healthcare to transportation, from education to entertainment, the impact of these innovations was profound. One of the most remarkable achievements of this era was the development of AI-powered personal assistants. These intelligent beings were capable of understanding and responding to human language with unparalleled accuracy. They could perform tasks, answer questions, and even engage in meaningful conversations. As AI continued to evolve, it became an integral part of our daily routines. People relied on AI for managing their schedules, making decisions, and even providing emotional support. It seemed that there was no limit to what these machines could do. The ethical implications of AI's increasing dominance over human affairs became a topic of heated debate. While some celebrated the convenience and progress it brought, others raised concerns about privacy, security, and the potential loss of human jobs. Despite the ongoing discussions and debates, AI's influence in our lives kept growing. It was a world where humans and machines coexisted, sometimes harmoniously and sometimes with friction. The future was uncertain, but one thing was clear: technology had forever changed the course of human history. This is the end of Text 1.&quot; para2 = &quot;In the year 2045, the world went through a technological revolution of unprecedented proportions. 
The leaps in artificial intelligence, robotics, and biotechnology had completely reshaped all aspects of our existence. From medical care to transportation, from learning to entertainment, the impact of these innovations was profound. One of the most extraordinary accomplishments of this era was the emergence of AI-driven personal assistants. These intelligent entities had the capability to comprehend and react to human language with remarkable precision. They could execute tasks, provide answers to queries, and even engage in substantial conversations. As AI kept progressing, it turned into an essential element of our everyday lives. People depended on AI for handling their schedules, making choices, and even offering emotional support. It seemed as if there were no boundaries to the potential of these machines. The moral questions surrounding AI's growing authority over human affairs became a subject of fervent discussion. While some celebrated the convenience and advancement it brought, others expressed worries about privacy, security, and the possible loss of human employment. Despite the continuous conversations and debates, AI's sway over our lives continued to expand. It was a world where humans and machines coexisted, sometimes peacefully and at times with friction. The future remained uncertain, but one thing was evident: technology had permanently altered the trajectory of human history. 
This is the end of Text 2.&quot; op = difflib.SequenceMatcher(None, para1, para2) op.get_opcodes() </code></pre> <p>Output -</p> <pre><code>[('equal', 0, 28, 0, 28), ('insert', 28, 28, 28, 207), ('equal', 28, 30, 207, 209), ('replace', 30, 83, 209, 215), ('equal', 83, 86, 215, 218), ('replace', 86, 125, 218, 253), ('equal', 125, 127, 253, 255), ('replace', 127, 135, 255, 285), ('equal', 135, 137, 285, 287), ('replace', 137, 196, 287, 331), ('equal', 196, 198, 331, 333), ('replace', 198, 297, 333, 390), ('equal', 297, 302, 390, 395), ('replace', 302, 383, 395, 408), ('equal', 383, 390, 408, 415), ('replace', 390, 392, 415, 531), ('equal', 392, 393, 531, 532), ('replace', 393, 417, 532, 556), ('equal', 417, 422, 556, 561), ('replace', 422, 437, 561, 633), ('equal', 437, 438, 633, 634), ('replace', 438, 607, 634, 641), ('equal', 607, 630, 641, 664), ('replace', 630, 649, 664, 680), ('equal', 649, 654, 680, 685), ('replace', 654, 697, 685, 737), ('equal', 697, 708, 737, 748), ('replace', 708, 712, 748, 754), ('equal', 712, 725, 754, 767), ('replace', 725, 730, 767, 772), ('equal', 730, 758, 772, 800), ('replace', 758, 766, 800, 806), ('equal', 766, 778, 806, 818), ('replace', 778, 784, 818, 823), ('equal', 784, 817, 823, 856), ('replace', 817, 821, 856, 861), ('equal', 821, 829, 861, 869), ('replace', 829, 872, 869, 921), ('equal', 872, 878, 921, 927), ('replace', 878, 901, 927, 954), ('equal', 901, 907, 954, 960), ('replace', 907, 927, 960, 977), ('equal', 927, 956, 977, 1006), ('replace', 956, 974, 1006, 1008), ('equal', 974, 975, 1008, 1009), ('replace', 975, 978, 1009, 1035), ('equal', 978, 1022, 1035, 1079), ('replace', 1022, 1030, 1079, 1090), ('equal', 1030, 1050, 1090, 1110), ('replace', 1050, 1064, 1110, 1126), ('equal', 1064, 1101, 1126, 1163), ('replace', 1101, 1125, 1163, 1166), ('equal', 1125, 1126, 1166, 1167), ('replace', 1126, 1127, 1167, 1194), ('equal', 1127, 1141, 1194, 1208), ('replace', 1141, 1156, 1208, 1228), ('equal', 1156, 1179, 1228, 1251), 
('replace', 1179, 1210, 1251, 1252), ('equal', 1210, 1211, 1252, 1253), ('replace', 1211, 1214, 1253, 1290), ('equal', 1214, 1278, 1290, 1354), ('replace', 1278, 1299, 1354, 1372), ('equal', 1299, 1331, 1372, 1404), ('insert', 1331, 1331, 1404, 1438), ('equal', 1331, 1335, 1438, 1442), ('replace', 1335, 1369, 1442, 1449), ('equal', 1369, 1386, 1449, 1466), ('replace', 1386, 1412, 1466, 1500), ('equal', 1412, 1455, 1500, 1543), ('replace', 1455, 1456, 1543, 1544), ('equal', 1456, 1457, 1544, 1545)] </code></pre> <p>In above output, it is mentioned <code>('insert', 28, 28, 28, 207)</code> which means after character position 28, one string got added as:</p> <pre><code>para2[28:207] 'went through a technological revolution of unprecedented proportions. The leaps in artificial intelligence, robotics, and biotechnology had completely reshaped all aspects of our ' </code></pre> <p>but in actual 'experienced' updated to 'went through' and after it from char position 41, 'a technological revolution' is the equal in both strings, which was not captured.</p> <p>I've tried it by converting it into a list and then comparing it, but it was not very helpful. <code>get_opcodes</code> has all the information I need, but the results are very incorrect. Is there a workaround, another library available, or any NLP approach to be able to get good results?</p>
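A hedged suggestion: character-level `SequenceMatcher` tends to fragment long prose like this, and for sequences over 200 elements the default *autojunk* heuristic silently discards frequent elements, which skews the alignment further. Comparing word tokens with `autojunk=False` usually produces opcodes that match human intuition ("experienced" replaced by "went through", the rest equal); word offsets can then be mapped back to character offsets. Minimal sketch on a shortened excerpt:

```python
import difflib

a = "In the year 2045, the world experienced a technological revolution".split()
b = "In the year 2045, the world went through a technological revolution".split()

# autojunk=False matters for the full 15k-character texts: the heuristic
# only activates on sequences longer than 200 elements.
sm = difflib.SequenceMatcher(None, a, b, autojunk=False)
ops = sm.get_opcodes()
for tag, i1, i2, j1, j2 in ops:
    print(tag, a[i1:i2], "->", b[j1:j2])
# equal   ['In', ..., 'world']            -> ['In', ..., 'world']
# replace ['experienced']                 -> ['went', 'through']
# equal   ['a', 'technological', 'revolution'] -> same
```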
<python><diff>
2023-11-03 08:07:28
1
1,114
Vaibhav
77,414,951
15,175,771
Sparql query returns undesired results when using blank nodes (rdflib)
<p>I use <code>rdflib</code> python library to model a graph of contacts, and perform <code>sparql</code> queries to retrieve who knows who. This works fine when people as added as <code>URIRef</code>, but not when using <code>BNode</code>.</p> <p>The example graph can be represented as follow:</p> <pre><code>bob - knows -&gt; linda alice - knows -&gt; linda tom - knows -&gt; linda knows -&gt; bob </code></pre> <p>Only Tom knows Bob, and no one knows Tom.</p> <p>I perform the following 2 queries:</p> <ol> <li>The first one to retrieves Tom; it works as expected.</li> <li>In the second query, I use Tom node id to retrieve who knows him. I expect an empty list. When Tom is added as a <code>URIRef</code>, it works as expected. However, when Tom is added as a <code>BNode</code>, this query returns 3 names!</li> </ol> <pre class="lang-py prettyprint-override"><code> use_blank_node = True # switch to see the undesired behavior happens only with blank node pred_knows = URIRef(&quot;http://example.org/knows&quot;) pred_named = URIRef(&quot;http://example.org/named&quot;) def create_graph() -&gt; Graph: graph = Graph() bob = URIRef(&quot;http://example.org/people/Bob&quot;) linda = BNode() # a GUID is generated alice = BNode() tom = BNode() if use_blank_node else URIRef(&quot;http://example.org/people/Tom&quot;) print(f&quot;{str(tom)=}&quot;) remy = BNode() graph.add((bob, pred_named, Literal(&quot;Bob&quot;))) graph.add((alice, pred_named, Literal(&quot;Alice&quot;))) graph.add((tom, pred_named, Literal(&quot;Tom&quot;))) graph.add((linda, pred_named, Literal(&quot;Linda&quot;))) graph.add((remy, pred_named, Literal(&quot;Remy&quot;))) graph.add((bob, pred_knows, linda)) graph.add((alice, pred_knows, linda)) graph.add((tom, pred_knows, linda)) graph.add((tom, pred_knows, bob)) return graph find_tom_who_knows_bob_query = f&quot;&quot;&quot;SELECT DISTINCT ?knowsbob ?nameofwhoknowsbob WHERE {{ ?knowsbob &lt;{pred_knows}&gt; &lt;http://example.org/people/Bob&gt; ; 
&lt;{pred_named}&gt; ?nameofwhoknowsbob . }}&quot;&quot;&quot; def find_who_know_tom(tom_id) -&gt; str: tom_query = f&quot;_:{tom_id}&quot; if type(tom_id) is BNode else f&quot;&lt;{tom_id}&gt;&quot; return f&quot;&quot;&quot;SELECT DISTINCT ?nameOfWhoKnowsTom WHERE {{ ?iriOfWhoKnowsTom &lt;{pred_knows}&gt; {tom_query} ; &lt;{pred_named}&gt; ?nameOfWhoKnowsTom}}&quot;&quot;&quot; def main(): graph = create_graph() print(&quot;=&quot; * 60, &quot;\n&quot;, graph.serialize(), &quot;\n&quot;, &quot;=&quot; * 60) result = list(graph.query(find_tom_who_knows_bob_query)) assert len(result) == 1 and len(result[0]) == 2 tom_id = result[0][0] print(f&quot;{str(tom_id)=}&quot;) assert (type(tom_id) == BNode and use_blank_node) or (type(tom_id) == URIRef and use_blank_node is False) assert str(result[0][1]) == &quot;Tom&quot; query = find_who_know_tom(tom_id) print(query) result = list(graph.query(query)) print( &quot;They know Tom:&quot;, &quot;, &quot;.join([str(r[0]) for r in result]) ) # why is it not empty when use_blank_node = True # prints: &quot;They know Tom: Bob, Alice, Tom&quot; if __name__ == &quot;__main__&quot;: main() </code></pre> <p>My question: how to correctly use sparql so that the query also works with blank node ?</p>
<python><sparql><rdf><rdflib><blank-nodes>
2023-11-03 07:47:15
1
340
GabrielGodefroy
77,414,802
10,573,543
Using python client for Kubernetes I am not able to increase the number of replicas for a specific pod?
<h4>Details:</h4> <hr /> <p>I am using AWS EKS service to manage my application in a K8 cluster. In one of the pod which is called <code>my-core-pod</code> in the namespace <code>mytenantspace</code>, I have a python flask app running. I have created an endpoint <code>create_infra</code>, What it does is, it takes a POST request and increases the number of replica of pod <code>my-daemon-service-pod</code> in the namespace <code>myenvspace</code>.</p> <p>Now I have the <code>kubectl</code> installed and the configuration are properly updated in the <code>.kube/config</code>. I am using <code>python 3.8</code> and <code>kuberenets</code> client is <code>28.1.0</code>.</p> <p><strong>Note:</strong> I am able to connect and list all the pods and namespaces of the cluster using the python kubernetes client.</p> <p>This is my implementation of increasing the replica pods.</p> <pre><code>#code running in the `my-core-pod` in the namespace `mytenantspace` from kubernetes import client, config config.load_kube_config() v1 = client.CoreV1Api() @app.route(&quot;/create_infra&quot;, methods=[&quot;POST&quot;]) def create_pod(): number_pods = 3 pod_name = &quot;my-daemon-service-pod&quot; namespace = &quot;myenvspace&quot; # Get the existing pod's template. existing_pod = v1.read_namespaced_pod(pod_name, namespace) #Clear the resourceVersion to avoid the error. existing_pod.metadata.resource_version = None # Create new pods based on the existing pod's template. for i in range(number_pods): new_pod_name = f&quot;{pod_name}-replica-{i}&quot; new_pod = existing_pod new_pod.metadata.name = new_pod_name new_pod.metadata.uid = None # Clear the UID to ensure a unique pod is created. # Create the new pod replica. v1.create_namespaced_pod(namespace, new_pod) </code></pre> <p>I am not getting any error. I am getting a huge response JSON in the console. But my pods are not scaling. Could you find the issue in my code, Am I using the right API ?</p>
<python><kubernetes><flask><amazon-eks>
2023-11-03 07:17:39
1
1,166
Danish Xavier
77,414,787
1,757,224
Column names do not appear using druiddb
<p>I am running Apache Druid datastore locally. I am loading data from a Kafka stream.</p> <p>On Druid, I can see the column names:</p> <p><a href="https://i.sstatic.net/bMfID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bMfID.png" alt="enter image description here" /></a></p> <p>And then using druiddb (<a href="https://github.com/betodealmeida/druid-dbapi" rel="nofollow noreferrer">https://github.com/betodealmeida/druid-dbapi</a>), I am writing an SQL query and reading data into Python environment and putting it in a pandas dataframe. However, <strong>some</strong> column names do not appear:</p> <pre><code>from druiddb import connect # https://github.com/betodealmeida/druid-dbapi import pandas as pd druid_host = &quot;localhost&quot; druid_port = 8888 druid_path = &quot;/druid/v2/sql&quot; druid_scheme = &quot;http&quot; druid_query = &quot;&quot;&quot;SELECT * FROM malaria_cases_full&quot;&quot;&quot; druid_connection = connect(host=druid_host, port=druid_port, path=druid_path, scheme=druid_scheme) druid_cursor= druid_connection.cursor() df = pd.DataFrame(druid_cursor.execute(druid_query)) df.head(n =10) </code></pre> <p><a href="https://i.sstatic.net/Z0JiS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z0JiS.png" alt="enter image description here" /></a></p>
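A hedged sketch of the usual DB-API fix: take the column headers from `cursor.description` instead of relying on the row objects to carry names. The druiddb cursor follows the DB-API, so the same pattern should apply to `druid_cursor` (i.e. `pd.DataFrame(druid_cursor.fetchall(), columns=[c[0] for c in druid_cursor.description])`); the demo below uses sqlite3, which exposes the same interface, with invented column names.

```python
import sqlite3

import pandas as pd

# Stand-in for the Druid connection; the pattern is identical for any
# DB-API cursor, including druiddb's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cases (location TEXT, malaria_cases INTEGER)")
conn.execute("INSERT INTO cases VALUES ('Kenya', 120)")
cur = conn.execute("SELECT * FROM cases")

# cursor.description is a sequence of 7-tuples whose first element is the
# column name -- use it to label the DataFrame explicitly:
df = pd.DataFrame(cur.fetchall(), columns=[c[0] for c in cur.description])
print(list(df.columns))  # ['location', 'malaria_cases']
```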
<python><pandas><druid>
2023-11-03 07:14:59
2
973
ARAT
77,414,702
5,897,478
Wyze SDK: How do I use the refresh token properly?
<p>I'm trying to understand how the refresh_token() function works in the Wyze python SDK: <a href="https://github.com/shauntarves/wyze-sdk/blob/master/wyze_sdk/api/client.py" rel="nofollow noreferrer">https://github.com/shauntarves/wyze-sdk/blob/master/wyze_sdk/api/client.py</a></p> <p>Let's say I want to access the Wyze API to list all of my devices I have.</p> <p>First, my python script checks if I have stored an access_token in a file from a previous login. This will remove the need to login again.</p> <p>If I have one stored, then I simply make calls to the API using the access token I read from the file:</p> <pre><code>wyze_access_token = read_access_token_file() client = Client(token=wyze_access_token) response = client.devices_list() </code></pre> <p>If I don't have one stored, I login using Client.login() and then store the access_token and refresh_token it responds with:</p> <pre><code>response = Client().login( email=&quot;...&quot;, password=&quot;...&quot;, key_id=&quot;...&quot;, api_key=&quot;...&quot; ) write_access_token_file(response[&quot;access_token&quot;]) write_refresh_token_file(response[&quot;refresh_token&quot;]) </code></pre> <p>In this case, I already have an access_token stored from a previous login. <strong>But it has expired.</strong> So now I need to use the refresh_token() function.</p> <p>I try to call client.refresh_token() but I am getting an error saying the client is not logged in:</p> <blockquote> <p>wyze_sdk.errors.WyzeClientConfigurationError: client is not logged in</p> </blockquote> <p>It says this because it checks to see if there is a refresh_token assigned to the Client object</p> <p>..Well, I don't have it assigned to the client, I have it stored in a file. A refresh_token only gets assigned after you execute Client.login()...</p> <p>But I don't want to login. 
That takes a ton of time, and using a token is much faster.</p> <p>Why does refresh_token() not take an argument so I can pass the refresh_token I have stored in a file? I don't get it, why am I required to login again, that completely defeats the purpose of using tokens?</p> <p>What do I actually do with the refresh_token I got from the login() response initially that I stored in a file?</p>
<python><authentication><sdk><access-token><refresh-token>
2023-11-03 06:56:44
1
488
Aran Bins
77,414,343
5,675,288
How can I format a python logger which is defined in one class but calls itself from another class?
<p>I have an internal logging library which I consume to run my service. The library provides 2 logging capabilities, one for standard and one for debug. I want to log the filename and function name for the logger as well. Since the library does not provide that functionality, I modified the formatter. But I noticed that it is not working as expected. Please see below my expected output.</p> <p>Here is the MWE</p> <p><code>one.py</code></p> <pre><code>import logging from logging import handlers import datetime as dt import time class myFormatter(logging.Formatter): converter=dt.datetime.fromtimestamp def formatTime(self, record, datefmt=None): ct = self.converter(record.created) if datefmt: s = ct.strftime(datefmt) else: t = ct.strftime(&quot;%Y-%m-%d %H:%M:%S&quot;) z = time.strftime(&quot;%z&quot;) s = &quot;%s.%03d %s&quot; % (t, record.msecs, z) return s class Logger: @staticmethod def get(name): logger = logging.getLogger(&quot;example&quot;) handler = handlers.RotatingFileHandler(&quot;example.log&quot;, mode='a', maxBytes=10485760) formatter = myFormatter('%(asctime)s %(levelname)s: %(message)s') handler.setLevel(logging.DEBUG) handler.setFormatter(formatter) logger.addHandler(handler) logger.setLevel(logging.DEBUG) return logger </code></pre> <p><code>two.py</code></p> <pre><code>from one import Logger from three import DebugLog class Component(object): def __init__(self, name): self.name = name logname = name logger = Logger.get(logname) self.logger = logger self.debug = DebugLog(self) </code></pre> <p><code>three.py</code></p> <pre><code>class DebugLog: def __init__(self,component): self._component = component def _debug(self, msg, *args, **kwargs): self._component.logger.debug(msg, *args, **kwargs) def debug(self, msg, *args, **kwargs): self._debug(msg, *args, **kwargs) </code></pre> <p><code>main.py</code></p> <pre><code>from two import Component from one import myFormatter component = Component('myexample') # this is my modification DEBUG_LOG_FORMAT = 
myFormatter('%(asctime)s %(levelname)s: [%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s') component.debug._component.logger.handlers[0].formatter = DEBUG_LOG_FORMAT def component_log(): component.logger.info(&quot;I log this in component log&quot;) def component_debug_log(): component.debug.debug(&quot;I log this in component debug log&quot;) component_log() component_debug_log() </code></pre> <p>Current output</p> <pre><code>➜ M-WR4R2NYKL6 ~/logging-bug python main.py ➜ M-WR4R2NYKL6 ~/logging-bug cat example.log 2023-11-03 10:45:16.450 +0530 INFO: [main.py:14 - component_log() ] I log this in component log 2023-11-03 10:45:16.450 +0530 DEBUG: [three.py:8 - _debug() ] I log this in component debug log </code></pre> <p>Expected output</p> <pre><code>2023-11-03 10:45:16.450 +0530 INFO: [main.py:14 - component_log() ] I log this in component log 2023-11-03 10:45:16.450 +0530 DEBUG: [main.py:17 - component_debug_log() ] I log this in component debug log </code></pre> <p>I can control only <code>main.py</code> and not the other files because I do not own them. I can maybe modify them by inheriting or monkey patching them but cannot change them at source.</p>
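A hedged sketch of the mechanism that usually solves this: the `stacklevel` argument to logging calls (Python 3.8+) makes the record's `filename`/`funcName`/`lineno` point at the *caller* of the wrapper instead of the wrapper itself. In the real code the chain is `debug -> _debug -> logger.debug`, so the library call would need `stacklevel=3`; since only `main.py` is editable, one option is monkey-patching `DebugLog._debug` to forward with the right `stacklevel`. The minimal demo below uses a single wrapper layer and `stacklevel=2`:

```python
import logging

records = []

class Capture(logging.Handler):
    """Collect records so the demo can inspect funcName directly."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("stacklevel_demo")
logger.setLevel(logging.DEBUG)
logger.addHandler(Capture())

class DebugLog:
    def debug(self, msg):
        # stacklevel=2 attributes the record to whoever called this wrapper;
        # add one per extra wrapper layer (e.g. 3 for debug -> _debug -> log).
        logger.debug(msg, stacklevel=2)

def component_debug_log():
    DebugLog().debug("I log this in component debug log")

component_debug_log()
print(records[0].funcName)  # component_debug_log, not 'debug'
```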
<python><logging><python-logging>
2023-11-03 05:19:37
1
915
scientific_explorer
77,414,325
14,365,042
Join two floats with trailing 0 together
<p>I wrote a function:</p> <pre><code>def main_table(data, gby_lst, col): df = data.groupby(gby_lst)[col].describe() df = df.reset_index() for i in ['25%', '50%', '75%', 'std', 'min', 'max', 'mean']: df[i] = df[i].apply(lambda x: float(&quot;{:.2f}&quot;.format(x))) df['Mean ± SD'] = (df[['mean', 'std']] .apply(lambda row: ' ± '.join(row.values.astype(str)), axis=1) ) df['Median (IQR)'] = (df['50%'].astype(str) + ' (' + df[['25%', '75%']].apply(lambda row: ' - '.join(row.values.astype(str)), axis=1) + ')' ) df['Range'] = (df[['min', 'max']] .apply(lambda row: ' - '.join(row.values.astype(str)), axis=1) ) summary_list = gby_lst + ['Mean ± SD', 'Median (IQR)', 'Range'] return df.loc[:, summary_list] </code></pre> <p>But this will not include the ending 0s. For example, I want <code>3.40 ± 5.55</code> , this function currently gives me: <code>3.4 ± 5.55</code>.</p> <p>How can I fix it?</p>
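A hedged diagnosis: the trailing zero is lost at `float("{:.2f}".format(x))` — a float has no memory of how it was formatted, so `3.40` and `3.4` are the same value. Formatting straight to strings (and joining those) keeps the zeros:

```python
import pandas as pd

df = pd.DataFrame({"mean": [3.4], "std": [5.549]})

# Format each column to a fixed-width string instead of round-tripping
# through float, which discards trailing zeros:
mean_sd = df["mean"].map("{:.2f}".format) + " ± " + df["std"].map("{:.2f}".format)
print(mean_sd.iloc[0])  # 3.40 ± 5.55
```

In the posted function, that means dropping the `float(...)` conversion in the loop and building the summary columns from `df[i].map("{:.2f}".format)` directly.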
<python><string><f-string>
2023-11-03 05:13:44
1
305
Joe
77,414,316
13,142,245
FastAPI: 'value_error.missing' when parameter present in query
<p>Here is my FastAPI method</p> <pre class="lang-py prettyprint-override"><code>class Student(BaseModel): name: str age: int year: str ... @app.post('/create-student/{student_id}') def create_student(student_id : int, student : Student): db = connect_ddb() try: response = db.put_item(Item={ 'id': {'S': f'{student_id}'}, 'student': {'M': student.model_dump()} }) return {&quot;response message&quot;: response} except Exception as error: return {&quot;error message&quot;: error} </code></pre> <p>Now when I submit a POST using requests...</p> <pre class="lang-py prettyprint-override"><code>x = requests.post('endpoint/create-student/4', headers = { &quot;Content-Type&quot;: &quot;application/json&quot; }, json={ &quot;id&quot;: 4, &quot;student&quot;: { &quot;name&quot;:&quot;Guy&quot;, &quot;age&quot;: 18, &quot;year&quot;: &quot;year 12&quot; } } ) </code></pre> <p>Yet <code>x.json()</code> returns</p> <pre><code>{'detail': [{'loc': ['body', 'name'], 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ['body', 'age'], 'msg': 'field required', 'type': 'value_error.missing'}, {'loc': ['body', 'year'], 'msg': 'field required', 'type': 'value_error.missing'}]} </code></pre> <p>What does it mean for these elements to be missing from the body?</p> <p>I'm following this highly rated Q/A advice, so seems like this should be a non-issue... <a href="https://stackoverflow.com/questions/70815650/fastapi-shows-msg-field-required-type-value-error-missing">FastAPI shows &#39;msg&#39;: &#39;field required&#39;, &#39;type&#39;: &#39;value_error.missing&#39;</a></p> <p>It's present in the json payload. So I'm very confused where (else) it should be and how I might alter the FastAPI method to mitigate this issue.</p>
<python><fastapi><pydantic>
2023-11-03 05:09:34
1
1,238
jbuddy_13
77,414,306
12,175,228
Show transparent image with openCV python
<p>im working with opencv and when showing the image it has a black background even its already transparent, (sometimes white) but that depends on the image, in this case the eye image when showing it appears with black background, dont know how can i show it complety transparent:</p> <pre><code> while True: recording, frame = video.read(); rgba = cv2.cvtColor(frame, cv2.COLOR_BGR2BGRA); eye.drawEyes(rgba, frame_width, frame_height); cv2.imshow(&quot;Mediapipe&quot;, rgba); if cv2.waitKey(1) &amp; 0xFF == ord('q'): break video.release(); cv2.destroyAllWindows(); </code></pre> <p>and here the class Eye where i call the method to draw the eyes:</p> <pre><code>class Eye: MP_FACE = mp.solutions.face_mesh; FACE = MP_FACE.FaceMesh(); LEFT_EYE = [226, 173]; #468 CENTER LANDMARK RIGHT_EYE = [398, 446]; EYE_IMAGE = cv2.imread(&quot;./assets/eye.png&quot;, cv2.IMREAD_UNCHANGED); RESIZE_EYE = cv2.resize(EYE_IMAGE, (80, 80), interpolation = cv2.INTER_AREA) # `int(height)` for 2nd value of size def __init__(self): self: self self.eye_left_x = 0; self.eye_left_y = 0; self.eye_left_x2 = 0; self.eye_left_y2 = 0; self.eye_right_x = 0; self.eye_right_y = 0 self.eye_right_x2 = 0; self.eye_right_y2 = 0; self.centerX = 0; self.centerY = 0; self.image = &quot;&quot;; def drawEyes(self, frame, frame_width, frame_height): face_points = self.FACE.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)); rectangle_height = 30; if (face_points): for landmark in face_points.multi_face_landmarks: for index, points in enumerate(landmark.landmark): if (index == self.LEFT_EYE[0]): x, y = int(points.x * frame_width), int(points.y * frame_height); self.eye_left_x = x; self.eye_left_y = y; if (index == self.LEFT_EYE[1]): x, y = int(points.x * frame_width), int(points.y * frame_height); self.eye_left_x2 = x; self.eye_left_y2 = y; if (index == self.RIGHT_EYE[0]): x, y = int(points.x * frame_width), int(points.y * frame_height); self.eye_right_x = x; self.eye_right_y = y; if (index == self.RIGHT_EYE[1]): x, y = 
int(points.x * frame_width), int(points.y * frame_height); self.eye_right_x2 = x; self.eye_right_y2 = y; cv2.rectangle( frame, (self.eye_left_x, self.eye_left_y - rectangle_height), (self.eye_left_x2, self.eye_left_y2 + rectangle_height), (0,255,255), 1 ); cv2.rectangle( frame, (self.eye_right_x, self.eye_right_y - rectangle_height), (self.eye_right_x2, self.eye_right_y2 + rectangle_height), (0,255,255), 1 ); rectangle_width_left = self.eye_left_x2 - self.eye_left_x; rectangle_width_right = self.eye_right_x2 - self.eye_right_x; eye_width = hypot(self.eye_left_x - self.eye_left_x2, self.eye_left_y - self.eye_left_y2); eye_height = eye_width * 1; #1 = aspect ratio, height image / width image image = cv2.resize(self.EYE_IMAGE, (int(eye_width), int(eye_height)), interpolation = cv2.INTER_AREA); h, w = image.shape[:2]; # left eye x, y = self.eye_left_x, self.eye_left_y; centerX = (rectangle_width_left - int(w)) // 2; frame[y-rectangle_height : y + int(w) - rectangle_height, x+centerX : x + centerX+int(w), :] = image # right eye x2, y2 = self.eye_right_x, self.eye_right_y; centerX = (rectangle_width_right - int(w)) // 2; frame[y2-rectangle_height : y2 + int(w) - rectangle_height, x2+centerX : x2 + centerX+int(w)] = image </code></pre> <p>this is my code now, i print the frame and it has 4 channel, same with the image, but still dont know how to show it complety transparent, dont understand and dont know how to work with the mask that looks is the case to use here.</p> <p>some help will be appreciate</p> <p><a href="https://i.sstatic.net/h1sKV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h1sKV.png" alt="enter image description here" /></a></p>
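A hedged explanation: `cv2.imshow` windows have no alpha channel, so the PNG's transparency must be composited onto the frame manually — the direct slice assignment in `drawEyes` overwrites the ROI, transparent pixels included, which is what produces the black (or white) box. The blend itself needs only NumPy; the sketch below is a stand-in for that assignment (tiny synthetic images instead of the webcam frame and eye PNG):

```python
import numpy as np

def overlay_bgra(frame, overlay, x, y):
    """Alpha-composite a BGRA image onto a BGR frame at (x, y), in place."""
    h, w = overlay.shape[:2]
    alpha = overlay[:, :, 3:4].astype(np.float32) / 255.0   # shape (h, w, 1)
    roi = frame[y:y + h, x:x + w].astype(np.float32)
    blended = alpha * overlay[:, :, :3].astype(np.float32) + (1.0 - alpha) * roi
    frame[y:y + h, x:x + w] = blended.astype(np.uint8)
    return frame

# Demo: gray frame; 2x2 overlay whose top row is opaque red (BGR) and whose
# bottom row is fully transparent.
frame = np.full((4, 4, 3), 100, np.uint8)
overlay = np.zeros((2, 2, 4), np.uint8)
overlay[..., 2] = 255    # red channel
overlay[0, :, 3] = 255   # top row opaque; bottom row alpha stays 0
out = overlay_bgra(frame, overlay, 1, 1)
print(out[1, 1], out[2, 1])  # red where opaque, untouched gray where transparent
```

In `drawEyes`, the line `frame[y1:y2, x1:x2] = image` would become `overlay_bgra(frame, image, x1, y1)` (keeping the 4-channel `cv2.IMREAD_UNCHANGED` image, and drawing onto the BGR frame rather than the BGRA copy).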
<python><opencv><image-processing><alpha><alpha-transparency>
2023-11-03 05:08:19
1
365
plus
77,414,212
4,297,413
Efficiently partition Pandas DataFrame rows such that the sum of a column is at least X in each partition
<p>Suppose we have the following DataFrame with a single <code>weight</code> column.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pd.DataFrame(dict(weight=[1, 3, 5, 6, 8, 9, 5, 6, 5, 5, 4, 9])) weight 0 1 1 3 2 5 3 6 4 8 5 9 6 5 7 6 8 5 9 5 10 4 11 9 </code></pre> <p>The goal is to assign a partition number to each row such that:</p> <ol> <li>Every partition has a sum of <code>weights</code> of at least some threshold.</li> <li>Only contiguous rows can belong in the same partition.</li> <li>Each partition contains has as few rows as possible.</li> </ol> <p>If we use a threshold of 10, the expected output for the above DataFrame would be:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; pd.DataFrame(dict( weight=[1, 3, 5, 6, 8, 9, 5, 6, 5, 5, 4, 9], partition=[0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4] )) weight partition 0 1 0 1 3 0 2 5 0 3 6 0 4 8 1 5 9 1 6 5 2 7 6 2 8 5 3 9 5 3 10 4 4 11 9 4 </code></pre> <p>It is easy to implement this transformation by iterating over the rows and keeping a running total of the &quot;weight&quot; column which resets after the threshold is hit or exceeded.</p> <pre class="lang-py prettyprint-override"><code>def slow_partition(df: pd.DataFrame, min_partition_total_weight: int) -&gt; pd.DataFrame: partition_ids = [] current_parition = 0 current_partition_total_weight = 0 for weight in df[&quot;weight&quot;]: if current_partition_total_weight &gt;= min_partition_total_weight: current_parition += 1 current_partition_total_weight = 0 partition_ids.append(current_parition) current_partition_total_weight += weight return df.assign(partition=partition_ids) </code></pre> <p>Although this implementation has the correct behavior, it is pure python and therefore does not scale well to larger DataFrames. Are there any <code>pandas</code> functions or <code>DataFrame</code> methods that can help perform this transformation in a vectorized manner?</p>
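A hedged note: the reset depends on the running total, so this is an inherently sequential scan — a plain `cumsum` cannot express it, and truly compiled speed usually means numba/cython. A compact stdlib middle ground is `itertools.accumulate` with a resetting reducer (still a Python-level loop, just terser). It assumes weights are strictly positive, so a running total can equal the bare weight only at a reset:

```python
from itertools import accumulate

weights = [1, 3, 5, 6, 8, 9, 5, 6, 5, 5, 4, 9]
threshold = 10

# Running total that restarts once the threshold is reached:
totals = list(accumulate(weights,
                         lambda acc, w: w if acc >= threshold else acc + w))
# A reset happened wherever the running total equals the bare weight
# (positive weights guarantee this never happens mid-partition);
# the partition id is the cumulative count of resets.
resets = [1 if i > 0 and t == w else 0
          for i, (t, w) in enumerate(zip(totals, weights))]
partitions = list(accumulate(resets))
print(partitions)  # [0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
```

The result can be attached with `df.assign(partition=partitions)`; it reproduces the expected output for the example above.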
<python><pandas><dataframe>
2023-11-03 04:31:17
1
652
Erp12
77,414,097
4,399,016
Extracting URLs from website using Pyquery and requests
<p>I have this code:</p> <pre><code>from pyquery import PyQuery as pq import requests url = &quot;https://www.mba.org/news-and-research/forecasts-and-commentary&quot; content = requests.get(url).content doc = pq(content) Latest_Report_MO = doc(&quot;#ContentPlaceholder_C012_Col01&quot;) print(Latest_Report_MO) </code></pre> <p>I get this result:</p> <pre><code>&lt;div id=&quot;ContentPlaceholder_C012_Col01&quot; class=&quot;sf_colsIn grid__unit grid__unit--1-3-l&quot; data-sf-element=&quot;Column 2&quot; data-placeholder-label=&quot;Column 2&quot;&gt;&amp;#13; &lt;div&gt;&amp;#13; &lt;div class=&quot;sfContentBlock sf-Long-text&quot;&gt;&lt;a target=&quot;_blank&quot; href=&quot;/docs/default-source/research-and-forecasts/historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5&quot;/&gt;&lt;a style=&quot;margin-bottom:20px;&quot; href=&quot;/docs/default-source/research-and-forecasts/forecasts/2023/historical-mortgage-origination-estimates.xlsx?sfvrsn=a7595901_1&quot;/&gt;&lt;a href=&quot;/docs/default-source/research-and-forecasts/historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5&quot;/&gt;&lt;a href=&quot;/docs/default-source/research-and-forecasts/historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5&quot;/&gt;&lt;a href=&quot;/docs/default-source/research-and-forecasts/historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5&quot;&gt;&lt;img src=&quot;/images/default-source/research/20125-research-forecast-web-button-qoe.png?sfvrsn=e73fc287_0&quot; alt=&quot;&quot; sf-size=&quot;66661&quot;/&gt;&lt;/a&gt; &lt;p&gt;Historical record of single-family, one- to four-unit loan origination estimates. Last updated June 2023. &lt;/p&gt;&lt;/div&gt; &amp;#13; &lt;/div&gt; &lt;/div&gt; </code></pre> <p>I am interested in the <code>href=&quot;/docs/default-source/research-and-forecasts/historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5&quot;</code></p> <p>How do I use the <code>.attr()</code> to extract this URL? 
Or is there any other method?</p>
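For context on the `.attr()` part of the question: PyQuery's `doc('a').attr('href')` returns the attribute of the first matched element only, so collecting every link means iterating, e.g. `[a.get('href') for a in doc('#ContentPlaceholder_C012_Col01 a')]` (a sketch, untested against the live page). The same extraction can also be done with nothing but the stdlib:

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect every href attribute found on <a> tags."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)

# trimmed-down version of the HTML snippet from the question
snippet = (
    '<div><a target="_blank" href="/docs/default-source/research-and-forecasts/'
    'historical-mortgage-origination-estimates.xlsx?sfvrsn=8c6933cb_5"></a>'
    '<a href="/docs/default-source/research-and-forecasts/forecasts/2023/'
    'historical-mortgage-origination-estimates.xlsx?sfvrsn=a7595901_1"></a></div>'
)

collector = HrefCollector()
collector.feed(snippet)
print(collector.hrefs)
```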
<python><python-requests><pyquery>
2023-11-03 03:52:05
1
680
prashanth manohar
77,413,945
3,685,918
How to display the x-axis in as yy.m only for the first month of the year and the rest as m
<p>This is my example code and output.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates # Create a DataFrame with a date column and another column with the data you want to plot data = {'Date': ['2023-01-01', '2023-02-01', '2023-03-01', '2024-01-01', '2024-02-01'], 'Value': [10, 15, 13, 18, 20]} df = pd.DataFrame(data) # Convert the 'Date' column to datetime df['Date'] = pd.to_datetime(df['Date']) # Create a Matplotlib plot using the date data for the x-axis and the 'Value' column for the y-axis data plt.plot(df['Date'], df['Value'], marker='o', linestyle='-') plt.xlabel('Date') plt.ylabel('Y-axis Label') plt.title('Your Plot Title') </code></pre> <p><a href="https://i.sstatic.net/qJOvF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qJOvF.png" alt="enter image description here" /></a></p> <p>Can I set the x-axis format like below? I want to display <code>yy.m</code> only on the first day of each year and <code>m</code> for the rest.</p> <p><a href="https://i.sstatic.net/A1BCB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A1BCB.png" alt="enter image description here" /></a></p>
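One approach (an assumption, not from the question) is a `FuncFormatter` whose label depends on whether the tick falls in January; the label logic itself can be tested without a figure:

```python
from datetime import datetime

def yym_label(dt):
    """Return 'yy.m' for January ticks and plain 'm' for every other month."""
    if dt.month == 1:
        return f"{dt:%y}.{dt.month}"
    return str(dt.month)

# In the plot this would be hooked up roughly like:
#   from matplotlib.ticker import FuncFormatter
#   import matplotlib.dates as mdates
#   ax.xaxis.set_major_formatter(
#       FuncFormatter(lambda x, pos: yym_label(mdates.num2date(x))))
print(yym_label(datetime(2023, 1, 15)), yym_label(datetime(2023, 2, 15)))
```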
<python><pandas><matplotlib><datetime><xticks>
2023-11-03 02:56:11
1
427
user3685918
77,413,726
4,582,026
How to sharpen an image using openCV in Python
<p>I have applied the following adjustments to the original image:</p> <ul> <li>resized</li> <li>changed the colour scale</li> <li>greyscaled</li> <li>thresholded</li> <li>inverted the colours</li> </ul> <p>This results in the following image</p> <p><a href="https://i.sstatic.net/fTunf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fTunf.png" alt="enter image description here" /></a></p> <p>Using tesseract, I'm converting the image to a string but it only seems to recognise the 4.</p> <p>Code to convert to text -</p> <pre><code>print (tess.image_to_string(img, config='--psm 6 -c tessedit_char_whitelist=&quot;9876543210&quot;')) 4 </code></pre> <p>I then attempted to sharpen using the following code resulting in the next image, but tesseract is still only recognising the 4. Any idea how I can sharpen this further so tesseract recognises this as 40?</p> <pre><code>kernel = np.array([[0,-1,0],[-1,5,-1],[0,-1,0]]) sharpened = cv2.filter2D(img,-1,kernel) print (tess.image_to_string(sharpened, config='--psm 6 -c tessedit_char_whitelist=&quot;9876543210&quot;')) 4 </code></pre> <p><a href="https://i.sstatic.net/fvUzv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fvUzv.png" alt="enter image description here" /></a></p> <p>Alternatively, the original image is the following without any resizing.</p> <p><a href="https://i.sstatic.net/fSeHq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fSeHq.png" alt="enter image description here" /></a></p> <p>Tesseract does pick this up as 40 but I need it to pick up the larger image. Is there a way I can resize but retain the quality/sharpness?</p> <p>Resizing code -</p> <pre><code>img = cv2.resize(img,(0,0),fx=5,fy=5) </code></pre>
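For what it's worth, the kernel in the question is the standard 3x3 sharpening kernel: it amplifies local contrast and leaves flat regions alone, so it cannot recreate detail lost in a blurry upscale. Resizing with `cv2.INTER_CUBIC` or `cv2.INTER_LANCZOS4` before thresholding is usually the better lever (an assumption about this image, not a guarantee). The kernel's behaviour can be checked with a plain NumPy convolution, no OpenCV required:

```python
import numpy as np

def sharpen(img, kernel):
    """Naive 2-D convolution with zero padding; similar in spirit to
    cv2.filter2D (which uses border reflection by default)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            region = padded[y:y + kh, x:x + kw]
            out[y, x] = (region * kernel).sum()
    return out

kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
flat = np.full((5, 5), 10.0)            # a flat region is left unchanged
spot = flat.copy()
spot[2, 2] = 20.0                       # a local spike gets amplified
print(sharpen(flat, kernel)[2, 2], sharpen(spot, kernel)[2, 2])
```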
<python><ocr><tesseract><image-preprocessing><game-automation>
2023-11-03 01:35:40
1
549
Vik
77,413,634
13,142,245
Storing map types in DynamoDb, is "M" declaration required?
<p>Using Boto3, how can I store python dictionaries as non-PK attributes in DynamoDb?</p> <p>My CDK</p> <pre class="lang-js prettyprint-override"><code>... this.table = new Table(this, `myTable`, { partitionKey: { name: 'id', type: AttributeType.STRING, }, }); ... </code></pre> <p>When I try to post requests to my API (using FastAPI), I get an empty error message.</p> <p>API method</p> <pre class="lang-py prettyprint-override"><code>@app.post('/create-student/{student_id}') def create_student(student_id : int, student : Student): try: response = db.put_item(Item={ &quot;id&quot;: f&quot;{student_id}&quot;, &quot;student&quot;: json.dumps(student) }) return {&quot;response message&quot;: response} except Exception as error: return {&quot;error message&quot;: error} </code></pre> <p>Because the error message is returned, the put_item method cannot be completed.</p> <p>Two mitigations I see:</p> <ol> <li>Dump the Python dictionary as a string (<code>json.dumps(dict)</code>) and insert this string</li> <li>Add the student field in CDK to explicitly have type &quot;M&quot; for map.</li> </ol> <p>Perhaps option 2 is necessary if I don't want to store the dict as a string. It seems that Ddb cannot store map types &quot;out of the box&quot; if this use case is not declared in CDK.</p> <p>What's the preferred means to accomplish this effect?</p>
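For what it's worth, non-key attributes in DynamoDB are schemaless, so nothing about the map needs to be declared in CDK: the boto3 `Table` resource serializes a plain Python `dict` to an `M` attribute automatically. The usual silent failure on this route is `float` values, which boto3's serializer rejects; a hedged helper (names are illustrative):

```python
from decimal import Decimal

def to_dynamo(value):
    """Recursively convert floats to Decimal so boto3's serializer accepts them."""
    if isinstance(value, float):
        return Decimal(str(value))
    if isinstance(value, dict):
        return {k: to_dynamo(v) for k, v in value.items()}
    if isinstance(value, list):
        return [to_dynamo(v) for v in value]
    return value

student = {"name": "Ada", "gpa": 3.9, "courses": ["math", "cs"]}
item = {"id": "42", "student": to_dynamo(student)}
# db.put_item(Item=item)   # boto3 would store item["student"] as an "M" attribute
print(item["student"]["gpa"])
```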
<python><amazon-dynamodb><fastapi>
2023-11-03 00:53:11
1
1,238
jbuddy_13
77,413,598
3,891,431
Dynamic keys in Pydantic v2
<p>Trying to use pydantic to model a json file like this:</p> <pre><code>{ &quot;people&quot;: { &quot;Jack&quot;: {&quot;age&quot;: 32, &quot;postcode&quot;: 1223}, &quot;Robert&quot;: {&quot;age&quot;: 23, &quot;postcode&quot;:2354}, &quot;Sarah&quot;: {&quot;age&quot;: 55, &quot;postcode&quot;:5673} } } </code></pre> <p>I also have a list of acceptable names I like to enforce:</p> <pre><code>accepted_names = [&quot;Jack&quot;, &quot;Robert&quot;, &quot;Sarah&quot;, &quot;Alex&quot;] </code></pre> <p>I have tried to use <code>__root__:[Dict[str, Dict[str, int]]</code> but this is deprecated in version 2 and the error message says I should use <code>RootModel</code> instead but the pydantic documentation is not clear on how to do this.</p> <p>What I have so far:</p> <pre><code>import json from pydantic import BaseModel, RootModel, conint from typing import Dict class PersonModel(BaseModel): age: conint(ge=0) postcode: conint(ge=1000, le=9999) class MyModel(RootModel): root:Dict[str, PersonModel] def validate_root(cls, v): acceptable_names = [&quot;Jack&quot;, &quot;Robert&quot;, &quot;Sarah&quot;, &quot;Alex&quot;] if v not in acceptable_names: raise ValueError(f&quot;Names should be one of {', '.join(acceptable_names)}&quot;) return v class ParentModel(BaseModel): people: MyModel file = &quot;scratch/test.json&quot; with open(file, &quot;r&quot;) as f: data = json.load(f) mymodel = ParentModel(**data) </code></pre> <p>But the way it works now is <code>mymodel.people.root['Jack']</code> whereas I want <code>mymodel.people.Jack</code></p>
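A note on the two halves of this question: the key check belongs in a validator (in Pydantic v2, e.g. `@model_validator(mode='after')` on the `RootModel`), and attribute access like `mymodel.people.Jack` is not something `RootModel` offers; `mymodel.people.root['Jack']` (or a custom `__getattr__`) is the supported route. The validation logic itself is plain Python and can be sketched without pydantic:

```python
ACCEPTED_NAMES = {"Jack", "Robert", "Sarah", "Alex"}

def validate_people(people: dict) -> dict:
    """In Pydantic v2 this body would live in a @model_validator(mode='after')
    on the RootModel; shown standalone here for clarity."""
    unknown = set(people) - ACCEPTED_NAMES
    if unknown:
        raise ValueError(
            f"Names should be one of {', '.join(sorted(ACCEPTED_NAMES))}"
        )
    return people

ok = validate_people({"Jack": {"age": 32, "postcode": 1223}})
print(sorted(ok))
```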
<python><python-3.x><pydantic>
2023-11-03 00:35:31
1
4,334
Rash
77,413,591
1,552,837
How to plot axline within each subplot
<p><strong>I'm trying to add an <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.axline.html" rel="nofollow noreferrer"><code>axline</code></a> with <code>slope=1</code> to each of a series of subplots.</strong></p> <ul> <li>I'm following <a href="https://matplotlib.org/stable/gallery/specialty_plots/anscombe.html#sphx-glr-gallery-specialty-plots-anscombe-py" rel="nofollow noreferrer">the <code>axline</code> example</a> but it does not seem to work with my modifications.</li> <li><a href="https://stackoverflow.com/questions/21129007/plotting-a-horizontal-line-on-multiple-subplots-in-python-using-pyplot">This</a> appears to be the closest question on SO, but doesn't answer my question</li> </ul> <h2>Code attempts and errors</h2> <h4>01. As in <code>axline</code> example:</h4> <pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt import pandas as pd fig, axes = plt.subplots(4, 2, sharex=True, sharey=True, figsize=(12, 10)) plt.xlim([-5, 10]) plt.ylim([-5, 10]) for ax in axes.flat: ax.axline((0,0), slope=1, color='r', linewidth=2, linestyle='--') </code></pre> <blockquote> <p>'AxesSubplot' object has no attribute 'axline'</p> </blockquote> <h4>02. Within each subplot:</h4> <pre class="lang-py prettyprint-override"><code>fig, axes = plt.subplots(4, 2, sharex=True, sharey=True, figsize=(12, 10)) row, col = 0, 0 for q in questions: # questions defined by my data question_scatter(df, q, row, col) # UDF to create a scatter plot in axes[row, col] axes[row, col].axline((0,0), slope=1, color='r', linewidth=2, linestyle='--') if row &lt; 3: row += 1 else: row = 0 col += 1 </code></pre> <blockquote> <p>'AxesSubplot' object has no attribute 'axline'</p> </blockquote> <h4>03. 
Apply globally:</h4> <pre class="lang-py prettyprint-override"><code>fig, axes = plt.subplots(4, 2, sharex=True, sharey=True, figsize=(12, 10)) axes.axline((0,0), slope=1, color='r', linewidth=2, linestyle='--') </code></pre> <blockquote> <p>numpy.ndarray' object has no attribute 'axline'</p> </blockquote>
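A likely cause, hedged because it depends on the installed version: `Axes.axline` only exists in Matplotlib 3.3 and later, and `'AxesSubplot' object has no attribute 'axline'` is exactly what older releases raise, so upgrading is the direct fix. On an older version, a fallback is to plot two endpoints computed from the x-limits; the endpoint arithmetic is testable on its own:

```python
def slope_line_endpoints(xlim, point=(0, 0), slope=1):
    """Two (x, y) endpoints of the line through `point` with `slope`,
    spanning the x-range; what ax.plot(*zip(*pts), 'r--') would draw."""
    x0, y0 = point
    return [(x, y0 + slope * (x - x0)) for x in xlim]

pts = slope_line_endpoints((-5, 10))
print(pts)
# On matplotlib < 3.3, inside the loop over axes:
#   ax.plot(*zip(*slope_line_endpoints(ax.get_xlim())),
#           color='r', linewidth=2, linestyle='--')
```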
<python><matplotlib>
2023-11-03 00:32:22
1
4,907
alexwhitworth
77,413,587
4,930,914
Capital words between words - Regex
<p>I am trying to find capital words occurring between other words using regex. I want to ignore capital words that follow <code>.?!</code> and those at the start of a paragraph.</p> <p>Currently I am using the pattern below to find capitalized words:</p> <p><code>[A-Z][^\s]*</code></p> <pre><code>Example A sentence containing Capital letters. How to Extract only capital letters? </code></pre> <p>The regex should find only &quot;Capital&quot; and &quot;Extract&quot;, ignoring &quot;How&quot; and &quot;A&quot;.</p>
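One way to express both exclusions is a pair of negative lookbehinds: one for sentence-ending punctuation followed by whitespace, one for the start of the text (with `re.MULTILINE`, `^` also matches after a newline, which covers paragraph starts):

```python
import re

text = "A sentence containing Capital letters. How to Extract only capital letters?"

# (?<![.?!]\s)  not preceded by sentence punctuation plus whitespace
# (?<!^)        not at the start of the text (or line, with MULTILINE)
pattern = r"(?<![.?!]\s)(?<!^)\b[A-Z][a-z]*"
matches = re.findall(pattern, text, flags=re.MULTILINE)
print(matches)
```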
<python><regex>
2023-11-03 00:30:55
4
915
Programmer_nltk
77,413,572
219,153
How do I get backslash marker in matplotlib?
<p>I was hoping for this Python 3.11 script to render <code>\</code> markers:</p> <pre><code>import numpy as np, matplotlib.pyplot as plt plt.scatter([1, 2, 3], [3, 1, 4], marker='$\textbackslash$', c='b') plt.show() </code></pre> <p>but it renders <code>-</code> instead. How do I get <code>\</code> or similar marker?</p>
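A hedged guess at the cause: `\textbackslash` is a text-mode LaTeX command, so inside the `$...$` math context of a marker it is not valid, and without `usetex` the string goes through matplotlib's mathtext parser, which does not know it either. Mathtext's math-mode name for the glyph is `\backslash`:

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so this runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# \backslash is the math-mode command mathtext understands (no usetex needed)
ax.scatter([1, 2, 3], [3, 1, 4], marker=r"$\backslash$", c="b")
fig.canvas.draw()  # forces the marker to be parsed and rendered
print(len(ax.collections))
```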
<python><matplotlib><scatter-plot>
2023-11-03 00:21:45
1
8,585
Paul Jurczak
77,413,440
7,086,220
why does dataframe.interpolate with spline create unexpected wave
<p>I'm trying to use dataframe.interpolate to fill missing data. Here is my test:</p> <pre><code>from itertools import product df=pd.DataFrame.from_dict({ 1.5 :[np.nan ,91.219 ,np.nan ,np.nan ,102.102 ,np.nan ,np.nan ], 2.0 :[np.nan ,np.nan ,np.nan ,np.nan ,103.711 ,np.nan ,103.031 ], 2.5 :[np.nan ,98.25 ,np.nan ,100.406 ,104.695 ,np.nan ,104.938 ], 3.0 :[np.nan ,101.578 ,np.nan ,102.969 ,104.875 ,np.nan ,105.242 ], 3.5 :[np.nan ,103.859 ,87.93 ,104.531 ,104.906 ,np.nan ,105.32 ], 4.0 :[np.nan ,105.156 ,94.469 ,105.656 ,105.844 ,89.68 ,106.523 ], 4.5 :[94.266 ,106.039 ,96.82 ,106.75 ,103.156 ,93.703 ,107.938 ], 5.0 :[97.336 ,107.953 ,98.602 ,107.906 ,104.25 ,96.547 ,109.703 ], 5.5 :[99.664 ,110.438 ,100.203 ,108.906 ,100.375 ,98.844 ,110.188 ], 6.0 :[101.344 ,112.703 ,101.492 ,108.688 ,102.906 ,100.68 ,110.5 ], 6.5 :[102.313 ,112.078 ,102.266 ,108.813 ,104.5 ,101.875 ,104 ], 7.0 :[102.656 ,114.469 ,102.242 ,108.813 ,np.nan ,102.625 ,109 ], 7.5 :[103.25 ,np.nan ,102.594 ,108.813 ,np.nan ,103.234 ,109 ], }, orient='index') df.plot(title='original') for int_method,int_order in list(product(['spline'],range(1,4)))+[ (x,3) for x in ['nearest', 'zero', 'slinear', 'quadratic', 'cubic', 'barycentric', 'polynomial', 'krogh', 'piecewise_polynomial', 'pchip', 'akima', 'cubicspline','from_derivatives','linear', ] ]: spl=df.interpolate(limit_direction='both',method=int_method,order=int_order) spl.plot(title=f'{int_method},{int_order}') </code></pre> <p>It seems only spline can give me the extrapolation that I need. However, I found it seems to add some unexpected fluctuations:</p> <p><a href="https://i.sstatic.net/Ad2Qo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ad2Qo.png" alt="unexpected fluctuation from spline" /></a></p> <p>Can someone help me understand what happened and even provide some advice on how to improve (I know &quot;improve&quot; is a vague phrase here; I can't find a clear definition for it myself)? Thanks!</p>
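On the "what happened" part: an unconstrained spline minimizes curvature globally, so around steep steps or isolated points it overshoots and rings; the waves are a property of the method, not a bug. Shape-preserving interpolants (`method='pchip'`, `method='akima'`) avoid the overshoot, at the cost of different extrapolation behaviour. A small demonstration of the difference, assuming SciPy is available:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 0.0, 10.0, 10.0])   # step-like data, like the jumps above

xs = np.linspace(0, 4, 201)
cubic = CubicSpline(x, y)(xs)
pchip = PchipInterpolator(x, y)(xs)

# the cubic spline dips below the data range; pchip never leaves it
print(cubic.min() < -0.01, abs(pchip.min()) < 1e-9)
```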
<python><interpolation><missing-data><spline><extrapolation>
2023-11-02 23:34:45
1
343
jerron
77,413,431
5,662,005
ZipFile thread-safe across multiple output files?
<p>It's well covered that writing multiple source files to a single target is not thread safe, i.e. M:1</p> <p>reference question</p> <p><a href="https://stackoverflow.com/questions/9195206/is-python-zipfile-thread-safe">Is python zipfile thread-safe?</a></p> <p>Does this same limitation hold for M:M?</p> <pre><code>file_1.txt -&gt; file_1.zip ... file_m.txt -&gt; file_m.zip </code></pre> <p>pseudo code</p> <pre><code>orig_to_zip_name = [ ['file1.txt','zipped_file_1.zip'], ... ['filem.txt','zipped_file_m.zip'] ] def single_zip(file_pairs): original_name, zipped_name = file_pairs with ZipFile(zipped_name, 'w') as zipf: zipf.write(original_name,compress_type = ZIP_DEFLATED) with concurrent.futures.ThreadPoolExecutor() as executor: results = executor.map(single_zip, orig_to_zip_name ) </code></pre>
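The documented hazard is multiple threads sharing one `ZipFile` handle. In an M:M layout like the pseudo code above, each task opens its own `ZipFile` on its own target file, so no state is shared between threads and the pattern is safe. A self-contained check:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor
from zipfile import ZipFile, ZIP_DEFLATED

def zip_one(pair):
    """Each task opens its own ZipFile: no handle is shared between threads."""
    source, target = pair
    with ZipFile(target, "w", compression=ZIP_DEFLATED) as zf:
        zf.write(source, arcname=os.path.basename(source))
    return target

with tempfile.TemporaryDirectory() as tmp:
    pairs = []
    for i in range(8):
        src = os.path.join(tmp, f"file_{i}.txt")
        with open(src, "w") as f:
            f.write(f"payload {i}\n" * 100)
        pairs.append((src, os.path.join(tmp, f"file_{i}.zip")))

    with ThreadPoolExecutor() as executor:
        targets = list(executor.map(zip_one, pairs))

    # testzip() returns None when every member's CRC checks out
    results = [ZipFile(t).testzip() for t in targets]

print(results)
```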
<python><python-3.x><python-zipfile>
2023-11-02 23:30:57
1
3,899
Error_2646
77,413,364
2,805,482
OpenCV: How to copy text from one image and super impose on another
<p>I have an image with only text (always in white color) in it and I want to copy the text from it to another image.</p> <p>Below is the image with logo and image. Currently I am taking a screenshot of this text and superimposing it over another image; however, as you can see, I get the black rectangle along with the text. How can I get rid of the black rectangle area, or just copy the text from the black frame to the image?</p> <p><a href="https://i.sstatic.net/J33EF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J33EF.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/92K7B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/92K7B.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/4ZPhU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4ZPhU.png" alt="enter image description here" /></a></p> <pre><code>image_2_replace = cv2.imread(mask_image2) im2r = cv2.cvtColor(image_2_replace, cv2.COLOR_BGR2RGB) image_2_title_img = cv2.imread(image_2_title) image_2_titl_img = cv2.cvtColor(image_2_title_img, cv2.COLOR_BGR2RGB) im2r_title = cv2.resize(image_2_titl_img, (left_image_msg[2], left_image_msg[3])) # take the coordinate of the location where I need to put the text screenshot and add it over the image. im2r[153: 153 + 324, 580: 580 + 256] = im2r_title </code></pre>
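Since the text is always white on black, the text frame itself can serve as its own mask: copy only the pixels where the text image is bright and leave the background untouched elsewhere, which removes the black rectangle. In OpenCV terms this is `cv2.threshold` to build the mask followed by boolean indexing or `np.where`; the compositing step in plain NumPy:

```python
import numpy as np

def overlay_text(background, text_img, threshold=127):
    """Copy only the bright (text) pixels from text_img onto background."""
    mask = text_img.max(axis=2) > threshold      # True where the white text is
    out = background.copy()
    out[mask] = text_img[mask]
    return out

bg = np.full((4, 4, 3), 50, dtype=np.uint8)      # stand-in for the photo
txt = np.zeros((4, 4, 3), dtype=np.uint8)        # black frame ...
txt[1, 1] = txt[2, 2] = 255                      # ... with two white "text" pixels

result = overlay_text(bg, txt)
print(result[1, 1].tolist(), result[0, 0].tolist())
```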
<python><opencv><image-processing>
2023-11-02 23:08:46
1
1,677
Explorer
77,413,350
8,297,745
How to return a query using `join` from SQLAlchemy to use it as a Flask result, using Flask SQLAlchemy Session?
<p>I have the following situation:</p> <p>I want to create a route that will return all rows that a column on both <code>models</code> (Tables) have the same values, using <code>join</code> function from SQLAlchemy.</p> <p>The method I created inside my service is like this, which also contains the pure SQL that works, for reference:</p> <pre><code>@staticmethod def fetch_sectors(): &quot;&quot;&quot; Fetches all sectors with their respective branches, using an inner join with the equipment table. As this is a JOIN involving more than one entity, and not a direct query in the model, it's necessary to use the session.query() from SQLAlchemy - https://docs.sqlalchemy.org/en/14/orm/query.html The Controller Object responsible for this is db_sql.session, instantiated by Flask's SQLAlchemy. Conversion of the below flow to SQL for reference - INNER JOIN + DISTINCT: -- JOIN: Show Columns/Records in Common between tables. SELECT DISTINCT EQUIPMENT.T9_BRANCH AS sector_branch, SECTORS.T6_CODE AS sector_code, SECTORS.T6_NAME AS sector_name FROM ST9010 AS EQUIPMENT JOIN ST6010 AS SECTORS ON SECTORS.T6_CODE = EQUIPMENT.T9_CODE; :return: All Sectors &quot;&quot;&quot; print(&quot;Creating the Query, with INNER JOIN + DISTINCT.&quot;) query = db_sql.session.query( Equipment.equipment_branch.label('sector_branch'), Sectors.sector_code, Sectors.sector_name ).join( Sectors, Sectors.sector_code == Equipment.equipment_sector ).distinct() print(&quot;Returning the Sectors.&quot;) return [sector.sectors_to_dict() for sector in query.all()], None </code></pre> <p>Those are the models with the <code>to_dict</code> methods I am using:</p> <pre><code>class Equipment(db_sql.Model): __tablename__ = 'ST9010' # Assets Table - Protheus equipment_id: Mapped[int] = mapped_column(&quot;T9_EQUIPID&quot;, db_sql.Integer, primary_key=True) equipment_branch: Mapped[str] = mapped_column(&quot;T9_BRANCH&quot;, db_sql.String, primary_key=True) equipment_sector: Mapped[str] = 
mapped_column(&quot;T9_CODE&quot;, db_sql.String, primary_key=True) equipment_name: Mapped[str] = mapped_column(&quot;T9_NAME&quot;, db_sql.String, nullable=False) equipment_costcenter: Mapped[str] = mapped_column(&quot;T9_COSTCENTER&quot;, db_sql.String, nullable=False) DELETED: Mapped[str] = mapped_column(db_sql.String, nullable=True) T9_STATUS: Mapped[str] = mapped_column(db_sql.String, nullable=True) def to_dict(self): return { &quot;equipment_id&quot;: self.equipment_id, &quot;equipment_branch&quot;: self.equipment_branch, &quot;equipment_sector&quot;: self.equipment_sector, &quot;equipment_name&quot;: self.equipment_name, &quot;equipment_costcenter&quot;: self.equipment_costcenter } class Sectors(db_sql.Model): __tablename__ = 'ST6010' # Families Table - Protheus # T6_BRANCH blank: In the Query, do an Inner Join with T9_BRANCH of ST9010 sector_branch = mapped_column(&quot;T6_BRANCH&quot;, db_sql.String, primary_key=True) sector_code = mapped_column(&quot;T6_CODE&quot;, db_sql.String, primary_key=True) sector_name = mapped_column(&quot;T6_NAME&quot;, db_sql.String, nullable=False) DELETED = mapped_column(db_sql.String, nullable=True) def to_dict(self): return { &quot;sector_branch&quot;: self.sector_branch, &quot;sector_code&quot;: self.sector_code, &quot;sector_name&quot;: self.sector_name } @staticmethod def sectors_to_dict(result): return { &quot;sector_branch&quot;: result.sector_branch, &quot;sector_code&quot;: result.sector_code, &quot;sector_name&quot;: result.sector_name, &quot;equipment_branch&quot;: result.equipment_branch } </code></pre> <p>When I executed the query using the above method, <code>fetch_sectors()</code>, what I got was the following error, from <code>SQLAlchemy Engine</code>:</p> <pre><code>_key_fallback raise KeyError(key) from err KeyError: 'sectors_to_dict' _key_not_found raise AttributeError(ke.args[0]) from ke AttributeError: sectors_to_dict </code></pre> <p>I enabled logging for SQLAlchemy, using <code>logging</code> for 
<code>sqlalchemy.engine</code>, and this is the SELECT that SQLAlchemy generated:</p> <pre><code>INFO:sqlalchemy.engine.Engine:SELECT DISTINCT [ST9010].[T9_BRANCH] AS sector_branch, [ST6010].[T6_CODE] AS [ST6010_T6_CODE], [ST6010].[T6_NAME] AS [ST6010_T6_NAME] FROM [ST9010] JOIN [ST6010] ON [ST6010].[T6_CODE] = [ST9010].[T9_CODE] </code></pre> <p>I have been trying to fix this for hours now; I have searched multiple questions here on StackOverflow, talked with GPT-4 for hours, and read the Flask-SQLAlchemy and SQLAlchemy documentation, but I feel I am getting to a dead end of solutions here...</p> <p>This is a question here at StackOverflow that almost solved my problem, but I couldn't implement it and gave up after an hour or so: <a href="https://stackoverflow.com/a/56362133/8297745">Use Flask-SqlAlchemy to query relationship database</a></p> <p>Can someone please help me?</p>
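About the KeyError itself: `session.query(col1, col2, ...).all()` returns `Row` objects, not `Sectors` instances, so `sector.sectors_to_dict()` is looked up as a column name, hence `KeyError: 'sectors_to_dict'`. With SQLAlchemy 1.4+ each row exposes `._mapping`, so `dict(row._mapping)` (or building the dict from the labeled columns) replaces the method call. Rows behave like named tuples; a stand-in sketch of the conversion:

```python
from collections import namedtuple

# stand-in for the Row objects a column-list query returns
Row = namedtuple("Row", ["sector_branch", "sector_code", "sector_name"])

def row_to_dict(row):
    """With a real SQLAlchemy 1.4+ Row this would simply be dict(row._mapping);
    the namedtuple stand-in uses ._asdict() instead."""
    return row._asdict()

rows = [Row("01", "S01", "Maintenance"), Row("01", "S02", "Production")]
payload = [row_to_dict(r) for r in rows]
print(payload[0]["sector_name"], len(payload))
```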
<python><flask><sqlalchemy><flask-sqlalchemy>
2023-11-02 23:04:14
3
849
Raul Chiarella
77,413,254
345,716
How to "Destructure" a dict to create a new one?
<p>Finally getting around to learning python...</p> <p>I have a dict with many keys. I want to create a new one with an added key. This is how I'd do it in JavaScript, &quot;destructuring&quot; <code>obj1</code> to create <code>obj2</code> with a new <code>c</code> key:</p> <pre class="lang-js prettyprint-override"><code>const obj1 = { a: 1, b: 2 } const obj2 = { ...obj1, c: 3 } // or someFunction({ ...obj1, c: 3 }) </code></pre> <p>What is the idiomatic way to do this in python? Or is it simply:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; obj1={ &quot;a&quot;: 1, &quot;b&quot;: 2 } &gt;&gt;&gt; obj2 = obj1.copy() &gt;&gt;&gt; obj2['c'] = 3 </code></pre> <p>Which I find horribly ugly, because now I need a variable, I can't construct the dict inline as a <code>def</code> parameter, like I did in the <code>someFunction</code> call in JavaScript.</p> <p>(Also easy in Perl:</p> <pre class="lang-perl prettyprint-override"><code>my %obj1 = ( a =&gt; 1, b =&gt; 2 ); someFunction( { %obj1, c =&gt; 3 } ); </code></pre> <p>)</p>
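For the record, Python has direct equivalents of the JS spread: `{**obj1, 'c': 3}` since 3.5 (PEP 448) and the merge operator `obj1 | {'c': 3}` since 3.9 (PEP 584). Both work inline as a call argument and leave the original untouched:

```python
obj1 = {"a": 1, "b": 2}

obj2 = {**obj1, "c": 3}          # spread-style, Python 3.5+
obj3 = obj1 | {"c": 3}           # merge operator, Python 3.9+

def some_function(d):
    return sorted(d)

print(obj2, some_function({**obj1, "c": 3}))  # inline, no temp variable needed
```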
<python>
2023-11-02 22:38:35
1
16,339
Peter V. Mørch
77,413,169
2,304,735
How to append different length columns to a different pandas dataframe?
<p>I am downloading stock prices dataframe and I want to take one column from the stock prices dataframe and copy it to another dataframe.</p> <pre class="lang-none prettyprint-override"><code>Stock Prices Dataframe (MCD): Date Open High Low Close Adjusted_close Volume 0 2020-01-06 199.60 202.77 199.35 202.33 185.6859 4660400 1 2020-01-07 201.87 202.68 200.51 202.63 185.9612 4047400 2 2020-01-08 202.62 206.69 202.20 205.91 188.9714 5284200 3 2020-01-09 206.86 209.37 206.10 208.35 191.2107 5971600 4 2020-01-10 208.44 208.95 207.27 207.27 190.2195 2336400 5 2020-01-13 207.38 207.78 205.76 206.51 189.5221 2784200 6 2020-01-14 205.46 207.65 205.46 207.32 190.2654 2622700 7 2020-01-15 207.32 210.35 207.32 209.77 192.5139 3369400 Stock Prices Dataframe (AAPL): Date Open High Low Close Adjusted_close Volume 0 2020-01-06 293.790 299.9600 292.750 299.80 73.1149 118578564 1 2020-01-07 299.840 300.9000 297.480 298.39 72.7710 111510640 2 2020-01-08 297.160 304.4399 297.156 303.19 73.9416 132363796 3 2020-01-09 307.235 310.4300 306.200 309.63 75.5122 170486156 4 2020-01-10 310.600 312.6700 308.250 310.33 75.6829 140869092 Stock Prices Dataframe (TXN): Date Open High Low Close Adjusted_close Volume 0 2020-01-06 127.06 127.33 125.90 126.96 113.6177 4345400 1 2020-01-07 129.15 130.90 128.42 129.41 115.8102 7184100 2 2020-01-08 129.34 130.57 129.06 129.76 116.1235 3546900 3 2020-01-09 130.70 131.74 130.24 131.33 117.5285 3526600 4 2020-01-10 131.81 131.81 129.82 130.00 116.3382 3234000 5 2020-01-13 130.57 130.74 129.77 129.95 116.2935 4313200 6 2020-01-14 129.95 131.86 129.83 130.67 116.9378 4626200 7 2020-01-15 130.43 130.43 128.86 129.17 115.5955 3392300 8 2020-01-16 130.00 130.23 129.35 130.16 116.4814 5475900 9 2020-01-17 130.76 132.04 130.44 131.70 117.8596 5487100 </code></pre> <p>What I will do is download each dataframe individually and then choose the Adjusted Close column and copy it to another dataframe (I am doing this for 50 companies).</p> <p>Code:</p> 
<pre><code>import csv # Retrieve symbols from Stocks CSV file and put them into one list symbols = [] symbol_errors = [] try: with open('stocks.csv', newline='') as inputfile: for row in csv.reader(inputfile): symbols.append(row[0]) except: print(&quot;File Not Found&quot;) # Create portfolio dataframe with symbols as column names portfolio = pd.DataFrame(columns=symbols) # Split the url into a beginning url and ending url url_begin = 'someurl' url_end ='someurl' ''' 1) Loop on the Symbols List. 2) Append the Symbol to the beginning and ending url. 3) Check if the data is retrieved, if not append unfound symbols to error list. 4) Copy the adjusted data column to the corresponding symbol in the portfolio dataframe. ''' for i in range(len(symbols)): url = url_begin+symbols[i]+url_end try: data = pd.read_csv(url) except: symbol_errors.append(symbols[i]) portfolio[symbols[i]] = data.iloc[:, 5:6] </code></pre> <p>When I execute the above code I get the following error:</p> <pre><code>for i in range(len(symbols)): url = url_begin+symbols[i]+url_end try: data = pd.read_csv(url) except: symbol_errors.append(symbols[i]) portfolio[symbols[i]] = data.iloc[:, 5:6] Traceback (most recent call last): Cell In[6], line 8 portfolio[symbols[i]] = data.iloc[:, 5:6] File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\frame.py:3940 in __setitem__ self._set_item_frame_value(key, value) File D:\Users\Mahmoud\anaconda3\Lib\site-packages\pandas\core\frame.py:4069 in _set_item_frame_value raise ValueError(&quot;Columns must be same length as key&quot;) ValueError: Columns must be same length as key </code></pre> <p>How do I fix the error?</p>
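On the error itself: `data.iloc[:, 5:6]` keeps two dimensions (a one-column DataFrame), and assigning that into a pre-sized frame trips the length check. Selecting the column as a Series (`data['Adjusted_close']` or `data.iloc[:, 5]`), collecting all of them, and letting pandas build the frame afterwards also solves the length mismatch, since dict-of-Series construction pads shorter columns with NaN (symbols and numbers below are stand-ins):

```python
import pandas as pd

# stand-ins for the per-symbol downloads, deliberately different lengths
downloads = {
    "MCD": pd.Series([185.7, 186.0, 189.0, 191.2]),
    "AAPL": pd.Series([73.1, 72.8]),
    "TXN": pd.Series([113.6, 115.8, 116.1]),
}

# dict-of-Series alignment: shorter columns are padded with NaN
portfolio = pd.DataFrame(downloads)
print(portfolio.shape, int(portfolio["AAPL"].isna().sum()))
```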
<python><python-3.x><pandas><dataframe>
2023-11-02 22:14:03
2
515
Mahmoud Abdel-Rahman
77,413,013
1,169,096
How to staple Apple notarization tickets manually (e.g. under linux)
<p>Recently (as of 2023-11-01) Apple has changed their notarization process.</p> <p>I took the opportunity to drop Apple's own tools for this process (<code>notarytool</code>) and switch to a Python-based solution using their <a href="https://developer.apple.com/documentation/notaryapi/submitting_software_for_notarization_over_the_web" rel="nofollow noreferrer">documented Web API for notarization</a></p> <p>This works great and has the additional bonus, that I can now notarize macOS apps from linux (in the context of CI, I can provision linux runners much faster than macOS runners). hooray.</p> <p>Since this went so smooth, I thought about moving more parts of my codesigning process to linux, and the obvious next step is find a solution for stapling the notarization tickets into application, replacing <code>xcrun stapler staple MyApp.app</code></p> <p>With the help of <code>-vv</code> and some scraps of <a href="https://developer.apple.com/documentation/technotes/tn3126-inside-code-signing-hashes" rel="nofollow noreferrer">online</a> <a href="https://lapcatsoftware.com/articles/logging-https.html" rel="nofollow noreferrer">documentation</a>, it turns out that it is very simple to obtain the notarization ticket if you know the code directory hash (<code>CDhash</code>) of your application.</p> <p>the following will return a JSON-object containing (among other things) the base64-encoded notarization ticket, which just has to be decoded and copied into the .app bundle for stapling:</p> <pre class="lang-bash prettyprint-override"><code>cdhash=8d817db79d5c07d0deb7daf4908405f6a37c34b4 curl -X POST -H &quot;Content-Type: application/json&quot; \ --data &quot;{ \&quot;records\&quot;: { \&quot;recordName\&quot;: \&quot;2/2/${cdhash}\&quot; }}&quot; \ https://api.apple-cloudkit.com/database/1/com.apple.gk.ticket-delivery/production/public/records/lookup \ | jq -r &quot;.records[0] | .fields | .signedTicket | .value&quot; </code></pre> <p>So, the only thing that is still 
missing for my <code>stapler</code> replacement is a way to obtain the code directory hash for a given application. On macOS (with the Xcode tools installed), I can get this hash with <code>codesign -d -vvv MyApp.app</code>, but this obviously only works if I have the <code>codesign</code> binary at hand.</p> <p>I've found a couple of python wrappers for stapling tickets, but all of them just call <code>xcrun stapler staple</code> under the hood. This is <strong>not what I want</strong>.</p> <p>So my question is: How can I extract the code directory hash (<code>CDhash</code>) from a macOS application, <em>without</em> using macOS-specific tools? (That is: How are <code>CDhash</code>es generated? I haven't found any documentation on this)</p> <p>I would very much like to use Python for this task. Ideally, such a solution would be cross-platform (so I can use it on macOS <em>and</em> Linux, and probably others as well).</p>
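Partially answering the "how are CDhashes generated" part, based on Apple's TN3126 (worth double-checking there): the cdhash is simply the digest of the CodeDirectory blob (magic `0xFADE0C02`) inside the embedded code-signature SuperBlob, truncated to 20 bytes for the SHA-256 slot. Locating that blob means walking the Mach-O load commands, which is the real work and is omitted here; the hashing step itself is tiny (the bytes below are dummies):

```python
import hashlib

def cdhash_sha256(code_directory_blob: bytes) -> str:
    """cdhash for the SHA-256 slot: SHA-256 of the CodeDirectory blob,
    truncated to 20 bytes (per Apple TN3126)."""
    return hashlib.sha256(code_directory_blob).digest()[:20].hex()

blob = b"\xfa\xde\x0c\x02" + b"\x00" * 60   # dummy CodeDirectory bytes
digest = cdhash_sha256(blob)
print(len(digest))
```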
<python><linux><macos><code-signing><notarize>
2023-11-02 21:34:16
2
32,070
umläute
77,412,876
5,568,409
How to set ticklabels in bold on x-axis when using usetex=True
<p>I have the following small program in which I use <code>rc(&quot;text&quot;, usetex = True)</code> in order to allow some formatting.</p> <p>I thought that the line <code>ax.set_xticklabels(ax.get_xticks(), weight='bold')</code> would show in <strong>bold</strong> the numbers shown on the <code>x-axis</code>.</p> <p>But there's something I clearly didn't understand in all the instructions I read here and there, especially on some <code>SO</code> posts. So now I don't know how to get out of it...</p> <p>What would be the simplest change in this program to plot these numbers in bold?</p> <pre><code>import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib import rc rc(&quot;text&quot;, usetex = True) fig, ax = plt.subplots(figsize = (6, 3), tight_layout = True) ax.set_yticks([i for i in range(0, 10+1, 1)]) ax.set_xticks([i for i in range(0, 100+1, 10)]) ax.tick_params(axis = &quot;both&quot;, labelsize = 15) ax.set_xticklabels(ax.get_xticks(), weight='bold') ax.set_title(r&quot;$\underline{\textbf{Proper}}$&quot;) plt.show() </code></pre>
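A hedged explanation of why nothing happens: with `usetex=True` the tick text is handed to LaTeX, so matplotlib font properties like `weight='bold'` are typically ignored; the bolding has to live in the label string itself via `\textbf{...}` (and `set_xticklabels` should receive strings rather than the raw floats from `get_xticks()`). Building those strings is plain Python:

```python
def bold_labels(ticks):
    """Labels for usetex=True, e.g. ax.set_xticklabels(bold_labels(ax.get_xticks()))."""
    return [r"\textbf{%d}" % int(t) for t in ticks]

labels = bold_labels(range(0, 101, 10))
print(labels[0], labels[-1])
```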
<python><matplotlib><xticks>
2023-11-02 21:05:24
0
1,216
Andrew
77,412,858
5,378,816
can each import create a separate object?
<p>Imagine a library of classes supporting different input or output formats. Applications may switch between those formats. I'm using different languages only as an example.</p> <p>A straightforward way to set the format could look like this:</p> <pre><code>import mytimes mytimes.set_input_format(&quot;German&quot;) mytimes.Days(&quot;1.Mai&quot;, &quot;31.Dezember&quot;) mytimes.set_input_format(&quot;ISO8601&quot;) mytimes.Date(&quot;2023-11-02&quot;) </code></pre> <p>Global state is not good. If the <code>import</code>-s are not grouped the usual way, it could fail:</p> <pre><code>import mytimes mytimes.set_input_format(&quot;German&quot;) from . import foo # happens to contain mytimes.set_input_format(&quot;English&quot;) mytimes.func(&quot;1.Juni&quot;) # not valid in English </code></pre> <p>That's why this is better:</p> <pre><code>from mytimes_v2 import MyTimes mytimes = MyTimes() mytimes.set_input_format(&quot;German&quot;) </code></pre> <p>Could that be somehow automated? Could a module simply do <code>from mytimes_v3 import mytimes</code> and get its own instance that could be configured independently from instances imported elsewhere?</p>
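Short answer first: no. Python caches modules in `sys.modules`, so every `import mytimes` in a process yields the same module object and shares its global state, and `from mytimes_v3 import mytimes` would likewise hand every importer the same instance. Independent configuration therefore needs per-caller instances, optionally alongside a module-level default for convenience:

```python
# mytimes_v3-style sketch (module contents shown inline for brevity)
class MyTimes:
    def __init__(self, input_format="ISO8601"):
        self.input_format = input_format

    def set_input_format(self, fmt):
        self.input_format = fmt

# module level: a shared default, but any caller can build a private instance
mytimes = MyTimes()

german = MyTimes("German")
mytimes.set_input_format("English")
print(german.input_format, mytimes.input_format)  # independent state
```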
<python><python-import>
2023-11-02 21:01:44
0
17,998
VPfB
77,412,673
13,142,245
put_item DynamoDb with FastAPI post request
<p>I've set up a backend using FastAPI and Pydantic</p> <pre class="lang-py prettyprint-override"><code>class Student(BaseModel): name: str age: int year: str ... @app.post('/create-student/{student_id}') def create_student(student_id : int, student : Student): db = connect_ddb() #Boto3 instantiates dynamoDb resource student_obj = json.loads(student) try: db.put_item( Item = {f&quot;{student_id}&quot;: student_obj} ) return db.get_item(student_id) except: return {&quot;error&quot;: &quot;Unable to put data in Ddb&quot;} </code></pre> <p>Now when I try to test this functionality using</p> <pre class="lang-py prettyprint-override"><code>def put(item, id_, db=db): obj = json.dumps(item) db.put_item(Item={f&quot;{id_}&quot;: obj}) put(id_=4, item={ &quot;name&quot;: &quot;Guy&quot;, &quot;age&quot;: 17, &quot;year&quot;: &quot;year 12&quot; }) </code></pre> <p>I receive the following <code>'{&quot;error&quot;: &quot;Unable to put data in Ddb&quot;}'</code>.</p> <p>So we can safely conclude that the Db connection was successful. However, I'm unable to insert the object into DynamoDb. Because an error post-connection was returned.</p> <p>Is there a way to raise the specific error? (DynamoDb would need to return it via FastAPI.)</p> <p>Or better yet, should the Item parameter in put_item method be formatted differently?</p> <p>The connection set up using</p> <pre class="lang-py prettyprint-override"><code>def connect_ddb(): ddb = boto3.resource('dynamodb', region_name='us-west-2') table = ddb.Table('myTable') return table </code></pre>
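Two hedged observations on why the handler lands in the except branch: `json.loads(student)` expects a string but receives a `Student` model (`student.model_dump()` is the Pydantic v2 spelling), and `Item` must map attribute names to values, including the table's partition-key attribute name, rather than using the id value itself as a key. Catching `botocore.exceptions.ClientError` instead of a bare `except` would surface DynamoDB's actual message. A sketch of the item construction (the key name `id` is an assumption about the table schema):

```python
def build_item(student_id: int, student: dict) -> dict:
    """Item must map attribute names to values, including the partition key
    (assumed here to be called 'id'); a Pydantic model converts via .model_dump()."""
    return {"id": str(student_id), **student}

item = build_item(4, {"name": "Guy", "age": 17, "year": "year 12"})
# db.put_item(Item=item)
# except ClientError as err:
#     raise HTTPException(500, err.response["Error"]["Message"])
print(item["id"], item["name"])
```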
<python><amazon-dynamodb><boto3><fastapi>
2023-11-02 20:24:22
1
1,238
jbuddy_13
77,412,666
5,955,479
Airflow - using multiple pod templates
<p>We have deployed airflow using the official helm chart and we are using KubernetesExecutor with git-sync. I already managed to launch a worker pod with a custom pod template using the executor_config parameter as mentioned in <a href="https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/executor/kubernetes.html#pod-override" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/core-concepts/executor/kubernetes.html#pod-override</a>. However I am unable to override the default worker docker image. How do I need to setup the helm values file to be able to override the docker image? Currently I have this in my values file</p> <pre><code> images: airflow: repository: &lt;custom-docker-image&gt; tag: webserver config: kubernetes_executor: worker_container_repository: ~ worker_container_tag: ~ </code></pre> <p>The pod template is exact copy paste of <a href="https://github.com/apache/airflow/blob/main/chart/files/pod-template-file.kubernetes-helm-yaml" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/chart/files/pod-template-file.kubernetes-helm-yaml</a> with only the image changed to <code>custom-docker-image:test_template</code> . My worker pods are still using the default airflow image <code>custom-docker-image:webserver</code>. If I set the <code>kubernetes_executor</code> keys to not null values it will use those, but I am still unable to override those.<br /> I know how to override the docker image of the default pod template, but my idea is to have multiple pod templates, then use the <code>executor_config</code> in my <code>task</code> decorator to pick a pod template from a folder. This way I would basically have multiple worker environments to choose from.<br /> I am also aware I can do that with pod overrides as mentioned in the docs, I am interested how to achieve this using the templates though.</p>
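One route worth checking against the docs for the installed Airflow version: the KubernetesExecutor accepts a per-task `pod_template_file` inside `executor_config`, which maps directly onto the "folder of templates" idea (the path below is illustrative):

```python
# per-task template selection for the KubernetesExecutor (paths are assumptions)
def task_kwargs(template_name: str) -> dict:
    """executor_config pointing a single task at a specific pod template."""
    return {
        "executor_config": {
            "pod_template_file": f"/opt/airflow/pod_templates/{template_name}.yaml",
        }
    }

# usage sketch with the TaskFlow API:
#   @task(**task_kwargs("gpu-worker"))
#   def train(): ...
print(task_kwargs("gpu-worker")["executor_config"]["pod_template_file"])
```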
<python><kubernetes><airflow>
2023-11-02 20:23:32
1
355
user430953
77,412,625
1,743,837
How do I set up a Django user for use in a Selenium test case with the setUp method?
<p>I am working to create Selenium unit tests for my code. I have a simple login form:</p> <pre><code>&lt;form method=&quot;POST&quot;&gt; &lt;input type=&quot;hidden&quot; name=&quot;csrfmiddlewaretoken&quot; value=&quot;f00b4r&quot;&gt; &lt;div id=&quot;div_id_username&quot; class=&quot;form-group&quot;&gt; &lt;label for=&quot;id_username&quot; class=&quot;requiredField&quot;&gt;Callsign &lt;spanclass=&quot;asteriskField&quot;&gt;*&lt;/span&gt; &lt;/label&gt; &lt;div&gt; &lt;input type=&quot;text&quot; name=&quot;username&quot; autofocus=&quot;&quot; maxlength=&quot;150&quot; class=&quot;textinput textInput form-control&quot; required=&quot;&quot; id=&quot;id_username&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;div id=&quot;div_id_password&quot; class=&quot;form-group&quot;&gt; &lt;label for=&quot;id_password&quot; class=&quot;requiredField&quot;&gt;Password &lt;span class=&quot;asteriskField&quot;&gt;*&lt;/span&gt; &lt;/label&gt; &lt;div&gt; &lt;input type=&quot;password&quot; name=&quot;password&quot; autocomplete=&quot;current-password&quot; class=&quot;textinput textInput form-control&quot; required=&quot;&quot; id=&quot;id_password&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;button class=&quot;btn btn-primary&quot; type=&quot;submit&quot;&gt;Login&lt;/button&gt; &lt;/form&gt; </code></pre> <p>And this is the test case:</p> <pre><code>class TestLoginFormFirefox(LiveServerTestCase): def setUp(self): self.driver = webdriver.Firefox() self.good_user = User.objects.create_user(username=&quot;unittest&quot;, password=&quot;this_is_unit&quot;) def tearDown(self): self.driver.close() def test_index_login_success(self): &quot;&quot;&quot; When a user successfully logs in, a link to their profile should appear in the navbar &quot;&quot;&quot; self.driver.get('http://127.0.0.1:8000/login') username_field = self.driver.find_element(by=By.ID, value='id_username') password_field = self.driver.find_element(by=By.ID, value='id_password') username_field.send_keys('unittest') 
password_field.send_keys(&quot;this_is_unit&quot;) login_button = self.driver.find_element(by=By.CLASS_NAME, value=&quot;btn-primary&quot;) login_button.send_keys(Keys.RETURN) # needs time to render sleep(3) id_profile_link = self.driver.find_element(by=By.ID, value='id_profile_link').text assert id_profile_link == 'unittest' </code></pre> <p>The test is simple: if the user specified in the unittest setUp method is able to successfully log in, assert that the user's username is a part of a link in the next page.</p> <p>The rub here is that the setUp method creates the user object, but the login fails. This persisted until I made a user in the project's database with the same username and password through createsuperuser. Is there any way to create a valid test user for this flow without having to make it beforehand using createsuperuser to have it in the project's <code>auth_user</code> table?</p>
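One detail worth noting when reading the test above: `LiveServerTestCase` starts its own server (with its own test database) on a dynamically chosen port, exposed as `self.live_server_url`, whereas `http://127.0.0.1:8000` is the separately running dev server. A tiny sketch of building page URLs from the live-server address (the port shown is made up):

```python
from urllib.parse import urljoin

def page_url(live_server_url: str, path: str) -> str:
    # e.g. self.driver.get(page_url(self.live_server_url, "login"))
    return urljoin(live_server_url + "/", path)

print(page_url("http://localhost:42817", "login"))
```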
<python><django><selenium-webdriver><python-unittest>
2023-11-02 20:15:06
0
1,295
nerdenator
77,412,601
11,145,820
how to configure Qdrant data persistence and reload
<p>I'm trying to build an app with streamlit that uses the Qdrant Python client.</p> <p>To run Qdrant, I'm just using:</p> <pre><code>docker run -p 6333:6333 qdrant/qdrant </code></pre> <p>I have wrapped the client in something like this:</p> <pre><code>class Vector_DB: def __init__(self) -&gt; None: self.collection_name = &quot;__TEST__&quot; self.client = QdrantClient(&quot;localhost&quot;, port=6333,path = &quot;/home/Desktop/qdrant/qdrant.db&quot;) </code></pre> <p>but I'm getting this error:</p> <blockquote> <p>Storage folder /home/Desktop/qdrant/qdrant.db is already accessed by another instance of Qdrant client. If you require concurrent access, use Qdrant server instead.</p> </blockquote> <p>I suspect that streamlit is creating multiple instances of this class, but if I try to load the db from one snapshot, like:</p> <pre><code> class Vector_DB: def __init__(self) -&gt; None: self.client = QdrantClient(&quot;localhost&quot;, port=6333) self.client.recover_snapshot(collection_name = &quot;__TEST__&quot;,location = &quot;http://localhost:6333/collections/__TEST__/snapshots/__TEST__-8742423504815750-2023-10-30-12-04-14.snapshot&quot;) </code></pre> <p>it works. It seems like I'm missing something important about how to configure it. What is the proper way to set up Qdrant so that it stores some embeddings, survives turning off the machine, and reloads them?</p>
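A sketch of the distinction at play (an assumption about the client's semantics: `path=` selects an embedded, single-process store on local disk, while host/port talks to the Docker server, whose data can be kept across restarts by mounting `/qdrant/storage` as a volume):

```python
def client_kwargs(use_server: bool) -> dict:
    """Return QdrantClient kwargs for exactly one of the two modes."""
    if use_server:
        # server mode: the Docker container owns persistence; run it with e.g.
        #   docker run -p 6333:6333 -v "$(pwd)/qdrant_storage:/qdrant/storage" qdrant/qdrant
        return {"host": "localhost", "port": 6333}
    # embedded mode: local files on disk, only ONE process may open them,
    # which is what triggers the "already accessed by another instance" error
    return {"path": "/home/Desktop/qdrant/qdrant.db"}

# usage (sketch): QdrantClient(**client_kwargs(use_server=True))
print(client_kwargs(True))
```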
<python><docker><streamlit><qdrant>
2023-11-02 20:09:42
3
1,979
Guinther Kovalski
77,412,296
494,134
What causes an ImportError to report "(unknown location)"?
<p>Looking at <a href="https://stackoverflow.com/q/77411571/494134">this question</a>, there is this import error:</p> <pre><code>&gt;&gt;&gt; from flask_security import login_user Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; ImportError: cannot import name 'login_user' from 'flask_security' (unknown location) </code></pre> <p>What causes the &quot;(unknown location)&quot; message?</p> <p>Prior to seeing this, I had a kind of vague assumption that it meant the module was built into Python itself, like <code>sys</code>, and so it did not have any external file at all.</p> <p>But that is clearly not the case in the linked question.</p>
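One reproducible way to get a module with no file location (an assumption about the mechanism, not a claim about the linked question): a directory on `sys.path` without an `__init__.py` imports as a *namespace package*, whose `__spec__.origin` is `None`, and that missing origin is what the ImportError renders as "(unknown location)":

```python
import importlib
import os
import sys
import tempfile

# Create an empty directory named like a package, with no __init__.py inside.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "demo_pkg"))  # "demo_pkg" is a made-up name
sys.path.insert(0, tmp)

mod = importlib.import_module("demo_pkg")   # imports as a namespace package
print(mod.__spec__.origin)  # None -> a failed "from demo_pkg import x" reports "(unknown location)"
```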
<python><python-import><importerror>
2023-11-02 19:11:18
2
33,765
John Gordon
77,412,249
866,082
How to prevent a subprocess from outputting to stdout in Python?
<p>I need to run a process using Python and kill it and then rerun it when it hangs. And I don't want the second process to contaminate my shell stdout. But no matter what I do, the subprocess's output is written to the shell. BTW, I'm using Linux.</p> <p>This is what I have:</p> <pre class="lang-py prettyprint-override"><code>server_command = &quot;./start.sh &gt;/dev/null 2&gt;&amp;1&quot; server_cwd = &quot;/home/mehran/Project&quot; process = subprocess.Popen(server_command, shell=False, cwd=server_cwd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) </code></pre> <p>As I mentioned, this will still print the output of the subprocess to shell. I have also tried all the other combinations I could think of (without <code>&gt;/dev/null 2&gt;&amp;1</code> and then without <code>subprocess.DEVNULL</code>)</p> <p><strong>[UPDATE]</strong></p> <p>To answer some of the comments, I've tested the following cases with the same results:</p> <ul> <li><code>shell=True</code></li> <li><code>[server_command]</code></li> <li><code>stdin=subprocess.DEVNULL</code></li> </ul> <p>I added the last one even though the <code>start.sh</code> script does not read anything from the stdin. TBH, I didn't completely read the script but as far as I can tell, it does a <code>conda activate .</code> and runs another python program. 
The thing is that I would rather not touch that script, and I want to suppress the output regardless of what happens inside it.</p> <p><strong>[UPDATE]</strong></p> <p>I forgot to mention this: when I run the script manually in a terminal like this:</p> <pre><code>./start.sh &gt;/dev/null 2&gt;&amp;1 </code></pre> <p>I get <strong>no output</strong>, which makes me wonder why the same approach does not work when called from Python code!</p> <p><strong>[UPDATE]</strong></p> <p>If anyone's interested in replicating this issue, please clone the following project:</p> <p><a href="https://github.com/oobabooga/text-generation-webui" rel="nofollow noreferrer">https://github.com/oobabooga/text-generation-webui</a></p> <p>And then run:</p> <pre><code>./start_linux.sh --api </code></pre> <p>This is the script that I'm trying to control within my code. It's just that you also need to send an HTTP request to load a model to see the output I mentioned:</p> <pre><code>curl --request POST \ --url http://localhost:5000/api/v1/model \ --header 'Content-Type: application/json' \ --data '{&quot;action&quot;: &quot;load&quot;, &quot;model_name&quot;: &quot;TheBloke_WizardLM-1.0-Uncensored-CodeLlama-34B-AWQ&quot;}' </code></pre> <p>The <code>model_name</code> refers to a model that you have downloaded into the app. It's pretty simple in fact.</p>
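Two things are worth double-checking in the snippet above: with `shell=False`, the command should be an argv list (a single string containing `>/dev/null 2>&1` would be treated as a literal program name, since redirections are shell syntax), and the `DEVNULL` arguments then do all the silencing. A self-contained sketch that spawns a deliberately noisy child and keeps the terminal clean (a child that opens `/dev/tty` directly would still get through; that is a guess at what the mentioned app might be doing):

```python
import subprocess
import sys

# A child that writes to both streams; with an argv list plus DEVNULL on both,
# none of its output reaches the parent's terminal.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('noise'); print('more noise', file=sys.stderr)"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    stdin=subprocess.DEVNULL,
)
proc.wait()
print("exit code:", proc.returncode)
```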
<python><shell><anaconda><subprocess><stdout>
2023-11-02 19:02:30
1
17,161
Mehran
77,412,228
2,573,075
Field issues when updating pydantic from 1 to 2
<p>I have the following code:</p> <pre><code>class _Sub(BaseModel): value1: str | None class _Supra(BaseModel): supra_value1: str | None sub_value2: _Sub = Field(default_factory=_Sub) </code></pre> <p>In pydantic 1 doing <code>foo = _Supra(**{})</code> worked fine and created a model like <code>_Supra(supra_value1=None, sub_value2=_Sub(value1=None))</code></p> <p>Now in pydantic 2 it returns:</p> <pre><code>pydantic_core._pydantic_core.ValidationError: 1 validation error for _Sub value1 Field required [type=missing, input_value={}, input_type=dict] For further information visit https://errors.pydantic.dev/2.4/v/missing </code></pre> <p>I tried several variants (Optional[_Sub], _Sub|None, Union[_Sub,None]) but none worked.</p> <p>I just want to point out that before asking here I tried to understand the manual, searched Google, and asked ChatGPT.</p>
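For context, pydantic v2 stopped treating `X | None` annotations as implicitly optional: a field is only optional if it has a default, so writing `value1: str | None = None` restores the v1 behaviour. The same rule can be sketched with stdlib dataclasses, which share this semantics (an analogy, not pydantic itself):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Sub:
    value1: Optional[str] = None  # explicit default -> Sub() is constructible

@dataclass
class Supra:
    supra_value1: Optional[str] = None
    sub_value2: Sub = field(default_factory=Sub)

# Without the "= None" defaults, Supra() would raise, just as pydantic v2
# rejects _Supra(**{}) for a required-but-missing field.
print(Supra())
```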
<python><pydantic>
2023-11-02 18:59:10
1
633
Claudiu
77,412,205
15,913,281
Count Number of Values Less than Value in Another Column in Dataframe
<p>Given a dataframe like the one below, for each date in <code>Enrol Date</code>, how can I count the number of values in preceding rows in <code>Close Date</code> that are earlier? Ideally I would like to add the results as a new column.</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Class</th> <th style="text-align: left;">Enrol Date</th> <th style="text-align: left;">Close Date</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">30/10/2003</td> <td style="text-align: left;">05/12/2003</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">22/12/2003</td> <td style="text-align: left;">23/09/2005</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">06/09/2005</td> <td style="text-align: left;">29/09/2005</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">15/11/2005</td> <td style="text-align: left;">07/12/2005</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">27/02/2006</td> <td style="text-align: left;">28/03/2006</td> </tr> </tbody> </table></div> <p>Desired result:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th style="text-align: left;">Class</th> <th style="text-align: left;">Enrol Date</th> <th style="text-align: left;">Close Date</th> <th style="text-align: left;">Prior Dates</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">30/10/2003</td> <td style="text-align: left;">05/12/2003</td> <td style="text-align: left;">0</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">22/12/2003</td> <td style="text-align: left;">23/09/2005</td> <td style="text-align: left;">1</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">06/09/2005</td> <td style="text-align: left;">29/09/2005</td> <td style="text-align: 
left;">1</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">15/11/2005</td> <td style="text-align: left;">07/12/2005</td> <td style="text-align: left;">3</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: left;">27/02/2006</td> <td style="text-align: left;">28/03/2006</td> <td style="text-align: left;">4</td> </tr> </tbody> </table></div>
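The row-wise rule ("for each row, how many strictly earlier rows closed before this row enrolled") can be sketched in plain Python with the table's dates; in pandas the same pairwise comparison could be vectorised, e.g. by broadcasting the two columns against each other:

```python
from datetime import date

enrol = [date(2003, 10, 30), date(2003, 12, 22), date(2005, 9, 6),
         date(2005, 11, 15), date(2006, 2, 27)]
close = [date(2003, 12, 5), date(2005, 9, 23), date(2005, 9, 29),
         date(2005, 12, 7), date(2006, 3, 28)]

# For row i, count close dates in preceding rows that fall before enrol[i].
prior = [sum(c < e for c in close[:i]) for i, e in enumerate(enrol)]
print(prior)  # matches the desired "Prior Dates" column: [0, 1, 1, 3, 4]
```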
<python><pandas><dataframe><numpy><datetime>
2023-11-02 18:54:53
3
471
Robsmith
77,412,174
734,748
Cannot use the credentials created by Boto3 createAccessKey API
<p>I want to write a python program that make some API calls using user A's credential and then create an access key for user B. Then I want to use user B's newly created access key to make get caller identity API call. Here are the challenges:</p> <p>when I directly use the keys returned by the createAccessKey api call, I am constantly getting this error</p> <pre><code>Traceback (most recent call last): File &quot;mock.py&quot;, line 142, in &lt;module&gt; main() File &quot;mock.py&quot;, line 130, in main print(client1.get_caller_identity()) File &quot;/home/joe/.local/lib/python3.8/site-packages/botocore/client.py&quot;, line 535, in _api_call return self._make_api_call(operation_name, kwargs) File &quot;/home/joe/.local/lib/python3.8/site-packages/botocore/client.py&quot;, line 980, in _make_api_call raise error_class(parsed_response, operation_name) botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the GetCallerIdentity operation: The security token included in the request is invalid. </code></pre> <p>However, if I type in the hard coded credentials generated from the create access key call, then the program will finish without any problems. 
I.e., uncomment out the second line below in the program and put in the access keys created:</p> <pre><code># NOTE: manually put in the key id and access key created by the createAccessKey call above #cred_new = credential('', '', None) </code></pre> <p>Here is the full program</p> <pre><code>import boto3 import time class credential: aws_access_key_id = '' aws_secret_access_key = '' aws_session_token = '' def __init__(self, aws_access_key_id, aws_secret_access_key, aws_session_token=None) -&gt; None: self.aws_access_key_id = aws_access_key_id self.aws_secret_access_key = aws_secret_access_key self.aws_session_token = aws_session_token def initialize_each_service(cred, service_name): client = boto3.client( service_name = service_name, aws_access_key_id=cred.aws_access_key_id, aws_secret_access_key=cred.aws_secret_access_key, aws_session_token=cred.aws_session_token ) return client def initialize_clients(client_map, cred): # TODO: read keys from a file # Initialize the AWS STS (Security Token Service) client service_list = ['sts', 'iam', 'guardduty'] for service_name in service_list: client_map[service_name] = initialize_each_service(cred, service_name) return client_map def get_caller_identity(client_map): response = client_map['sts'].get_caller_identity() # Get the caller identity # Extract and print the caller identity information account_id = response['Account'] arn = response['Arn'] print(f&quot;AWS Account ID: {account_id}&quot;) print(f&quot;Caller ARN: {arn}&quot;) def create_access_key(client_map, target_user_name): # Create an access key for the IAM user response = client_map['iam'].create_access_key(UserName=target_user_name) print(response) # Extract and print the access key information access_key_id = response['AccessKey']['AccessKeyId'] secret_access_key = response['AccessKey']['SecretAccessKey'] cred = credential(access_key_id, secret_access_key) print(f&quot;Access Key ID: {access_key_id}&quot;) print(f&quot;Secret Access Key: 
{secret_access_key}&quot;) return cred def main(): client_map = dict() client_map_new = dict() # Replace 'your_aws_access_key' and 'your_aws_secret_key' with your AWS IAM user's access key and secret key. aws_access_key_id = '&lt;key_id&gt;' aws_secret_access_key = '&lt;access_key&gt;' cred = credential(aws_access_key_id, aws_secret_access_key) print(f'aws_access_key_id = ', cred.aws_access_key_id) print(f'aws_secret_access_key = ', cred.aws_secret_access_key) print(f'aws_session_token = ', cred.aws_session_token) client_map = initialize_clients(client_map, cred) get_caller_identity(client_map) target_user_name = 'devOps' cred_new = create_access_key(client_map, target_user_name) # NOTE: manually put in the key id and access key created by the createAccessKey call above #cred_new = credential('', '', None) print(f'aws_access_key_id = ', cred_new.aws_access_key_id) print(f'aws_secret_access_key = ', cred_new.aws_secret_access_key) print(f'aws_session_token = ', cred_new.aws_session_token) time.sleep(5) client1 = boto3.client( service_name = 'sts', aws_access_key_id=cred_new.aws_access_key_id, aws_secret_access_key=cred_new.aws_secret_access_key, aws_session_token=cred.aws_session_token ) print(client1.get_caller_identity()) if __name__ == &quot;__main__&quot;: main() </code></pre>
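Independent of how the credentials are wired up, newly created IAM access keys are eventually consistent, so the very first STS call with them can fail even when they are valid; also note that long-term access keys for an IAM user are used without a session token. A small generic retry helper (the 5-attempt, 2-second budget is a guess, not an AWS recommendation) is a common way to ride out the propagation delay:

```python
import time

def call_with_retry(fn, attempts=5, delay=2.0):
    """Retry fn() a few times, e.g. to ride out IAM key propagation
    before a get_caller_identity call succeeds."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# usage (sketch): call_with_retry(client1.get_caller_identity)
```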
<python><amazon-web-services><boto3>
2023-11-02 18:50:02
1
3,367
drdot
77,412,120
2,681,662
abstractmethod returns a type of self
<p>I'm having some problems with mypy.</p> <p>I have an abstract class and a class that inherits from it:</p> <pre><code>from __future__ import annotations from abc import abstractmethod, ABC from typing import Union class Base(ABC): @abstractmethod def the_method(self, a_class: Union[Base, float, int]) -&gt; None: ... @abstractmethod def other_method(self) -&gt; None: ... class MyClass(Base): def __init__(self, something: str = &quot;Hello&quot;) -&gt; None: self.something = something def the_method(self, a_class: Union[MyClass, float, int]) -&gt; None: print(a_class) def other_method(self) -&gt; None: print(self.something) </code></pre> <p>I am aware of the <strong>Liskov substitution principle</strong>. However <code>MyClass</code> is a type of <code>Base</code> since it inherits from it. But <code>mypy</code> still raises an error:</p> <pre class="lang-none prettyprint-override"><code>main.py:21: error: Argument 1 of &quot;the_method&quot; is incompatible with supertype &quot;Base&quot;; supertype defines the argument type as &quot;Base | float | int&quot; [override] main.py:21: note: This violates the Liskov substitution principle main.py:21: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#incompatible-overrides Found 1 error in 1 file (checked 1 source file) </code></pre> <p>What am I doing wrong?</p>
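The error concerns parameter contravariance: an override may accept a *wider* parameter type than the supertype, never a narrower one, because callers holding a `Base` may pass any `Base` (or float, or int). Keeping the supertype annotation type-checks, since every `MyClass` instance is also a `Base`; a sketch of that passing variant:

```python
from __future__ import annotations
from abc import ABC, abstractmethod
from typing import Union

class Base(ABC):
    @abstractmethod
    def the_method(self, a_class: Union[Base, float, int]) -> None: ...

class MyClass(Base):
    # Same (or wider) parameter type as the supertype satisfies mypy;
    # instances of any Base subclass are still accepted at runtime.
    def the_method(self, a_class: Union[Base, float, int]) -> None:
        print(a_class)

MyClass().the_method(3)
```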
<python><inheritance><mypy><python-typing>
2023-11-02 18:42:10
2
2,629
niaei
77,412,057
11,163,122
Searching Entrez based on paper title
<p>I am trying to search NCBI's Entrez based on a title. Here are my GET requests's URL and parameters:</p> <pre class="lang-py prettyprint-override"><code>import requests url = &quot;https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi&quot; # SEE: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8021862/ params = { &quot;tool&quot;: &quot;foo&quot;, &quot;email&quot;: &quot;example@example.com&quot;, &quot;api_key&quot;: None, &quot;retmode&quot;: &quot;json&quot;, &quot;db&quot;: &quot;pubmed&quot;, &quot;retmax&quot;: &quot;1&quot;, &quot;term&quot;: '&quot;Interpreting Genetic Variation in Human and Cancer Genomes&quot;[Title]', } response = requests.get(url, params=params, timeout=15.0) response.raise_for_status() result = response.json()[&quot;esearchresult&quot;] </code></pre> <p>However, I am getting no results, the <code>result[&quot;count&quot;]</code> is 0. How can I search Entrez for based on a paper's title?</p> <p>When answering, feel free to use <code>requests</code> directly, or common Entrez wrappers like <code>biopython</code>'s <a href="https://biopython.org/docs/latest/api/Bio.Entrez.html" rel="nofollow noreferrer"><code>Bio.Entrez</code></a> or <a href="https://github.com/krassowski/easy-entrez" rel="nofollow noreferrer"><code>easy-entrez</code></a>. I am using Python 3.11.</p>
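A quick way to inspect what actually goes over the wire, since the quotes and the `[Title]` field tag must survive percent-encoding (`requests` encodes the same way; this stdlib sketch only shows the query string, it does not diagnose the zero-count result):

```python
from urllib.parse import urlencode

params = {
    "db": "pubmed",
    "retmode": "json",
    "retmax": "1",
    "term": '"Interpreting Genetic Variation in Human and Cancer Genomes"[Title]',
}
query = urlencode(params)
print(query)  # the term becomes %22...%22%5BTitle%5D
```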
<python><biopython><ncbi><pubmed>
2023-11-02 18:31:25
1
2,961
Intrastellar Explorer
77,411,830
1,506,763
Connection to PostgresSQL database closed in cPanel Flask app
<p>I've developed a simple Flask website that is using a remote PostgreSQL database. I've used <code>sqlalchemy</code> for defining the database and <code>psycopg</code> for connecting to the database. I store the database URL, username and password in a .env file which I then load in my <code>app.py</code> file. When I run this locally on my laptop everything works fine and I can connect to the database.</p> <p>Local <code>app.py</code> file:</p> <pre class="lang-py prettyprint-override"><code>from web import create_app from web.forms import csrf from database.config import DevConfig, ProdConfig from database import init_db config = DevConfig application = create_app(config) csrf.init_app(application) init_db(application, config) if __name__ == &quot;__main__&quot;: application.run(debug=True, use_debugger=False, use_reloader=False) </code></pre> <p><code>init_db</code> is the function that creates the connection to the database.</p> <pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine from sqlalchemy.orm import sessionmaker, scoped_session, Session def init_db(app, config): engine = create_engine( config.DATABASE_URI, echo=False, ) app.db_session = scoped_session(sessionmaker(engine)) </code></pre> <p>I'm now trying to set up the <code>flask</code> website on a webhosting server that is using cPanel, following the instructions here: <a href="https://docs.cloudlinux.com/shared/lve_manager/#python-selector-client-plugin" rel="nofollow noreferrer">https://docs.cloudlinux.com/shared/lve_manager/#python-selector-client-plugin</a></p> <p>For simple Flask apps that don't connect to a database that works fine.
However I'm unable to get my set-up to work.</p> <p>I've removed the <code>if __name__ == &quot;__main__&quot;:</code> block from app.py and I'm using the following <code>passenger_wsgi.py</code> that is created by default when starting the app:</p> <pre class="lang-py prettyprint-override"><code>import imp import os import sys sys.path.insert(0, os.path.dirname(__file__)) wsgi = imp.load_source('wsgi', 'app.py') application = wsgi.application </code></pre> <p>This initially seems to work and the site appears at the URL and the pages render and everything looks great. However, when I go to one of the site pages that query the database to return a table of data the app crashes and dies with the following messages displayed &quot;Incomplete response received from application&quot;.</p> <p>Checking the <code>passenger.log</code> file in cPanel I get the stack trace of the error which is pretty long, but essentially says the connection was closed.</p> <pre><code>App 131002 output: /opt/cpanel/ea-ruby27/root/usr/share/passenger/helper-scripts/wsgi-loader.py:26: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses App 131002 output: import sys, os, re, imp, threading, signal, traceback, socket, select, struct, logging, errno App 131002 output: [ pid=131002, time=2023-11-02 17:07:56,254 ]: WSGI application raised an exception! 
App 131002 output: Traceback (most recent call last): App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 145, in __init__ App 131002 output: self._dbapi_connection = engine.raw_connection() App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 3293, in raw_connection App 131002 output: return self.pool.connect() App 131002 output: ^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 452, in connect App 131002 output: return _ConnectionFairy._checkout(self) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 1268, in _checkout App 131002 output: fairy = _ConnectionRecord.checkout(pool) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 716, in checkout App 131002 output: rec = pool._do_get() App 131002 output: ^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/impl.py&quot;, line 168, in _do_get App 131002 output: with util.safe_reraise(): App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 147, in __exit__ App 131002 output: raise exc_value.with_traceback(exc_tb) App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/impl.py&quot;, line 166, in _do_get App 131002 output: return self._create_connection() App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File 
&quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 393, in _create_connection App 131002 output: return _ConnectionRecord(self) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 678, in __init__ App 131002 output: self.__connect() App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 902, in __connect App 131002 output: with util.safe_reraise(): App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 147, in __exit__ App 131002 output: raise exc_value.with_traceback(exc_tb) App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 898, in __connect App 131002 output: self.dbapi_connection = connection = pool._invoke_creator(self) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/create.py&quot;, line 637, in connect App 131002 output: return dialect.connect(*cargs, **cparams) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/default.py&quot;, line 616, in connect App 131002 output: return self.loaded_dbapi.connect(*cargs, **cparams) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/psycopg/connection.py&quot;, line 728, in connect App 131002 output: raise ex.with_traceback(None) App 131002 output: psycopg.OperationalError: connection failed: server closed the connection unexpectedly App 131002 output: This probably means the server terminated 
abnormally App 131002 output: before or while processing the request. App 131002 output: The above exception was the direct cause of the following exception: App 131002 output: Traceback (most recent call last): App 131002 output: File &quot;/opt/cpanel/ea-ruby27/root/usr/share/passenger/helper-scripts/wsgi-loader.py&quot;, line 199, in main_loop App 131002 output: socket_hijacked = self.process_request(env, input_stream, client) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/opt/cpanel/ea-ruby27/root/usr/share/passenger/helper-scripts/wsgi-loader.py&quot;, line 333, in process_request App 131002 output: result = self.app(env, start_response) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 2548, in __call__ App 131002 output: return self.wsgi_app(environ, start_response) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 2528, in wsgi_app App 131002 output: response = self.handle_exception(e) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 2525, in wsgi_app App 131002 output: response = self.full_dispatch_request() App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 1822, in full_dispatch_request App 131002 output: rv = self.handle_user_exception(e) App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 1820, in full_dispatch_request App 131002 output: rv = self.dispatch_request() App 131002 output: ^^^^^^^^^^^^^^^^^^^^^^^ App 131002 output: File 
&quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/flask/app.py&quot;, line 1796, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File &quot;/home/db/dbsite/web/templates/materials/bl_materials.py&quot;, line 23, in materials
    materials = get_materials()
  File &quot;/home/db/dbsite/database/em_db.py&quot;, line 36, in get_materials
    materials_view_query = conn.scalars(select(Material)).all()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 2344, in scalars
    return self._execute_internal(
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 2117, in _execute_internal
    conn = self._connection_for_bind(bind)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 1984, in _connection_for_bind
    return trans._connection_for_bind(engine, execution_options)
  File &quot;&lt;string&gt;&quot;, line 2, in _connection_for_bind
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/orm/state_changes.py&quot;, line 137, in _go
    ret_value = fn(self, *arg, **kw)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/orm/session.py&quot;, line 1111, in _connection_for_bind
    conn = bind.connect()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 3269, in connect
    return self._connection_cls(self)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 147, in __init__
    Connection._handle_dbapi_exception_noconnection(
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 2431, in _handle_dbapi_exception_noconnection
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 145, in __init__
    self._dbapi_connection = engine.raw_connection()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/base.py&quot;, line 3293, in raw_connection
    return self.pool.connect()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 452, in connect
    return _ConnectionFairy._checkout(self)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 1268, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 716, in checkout
    rec = pool._do_get()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/impl.py&quot;, line 168, in _do_get
    with util.safe_reraise():
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 147, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/impl.py&quot;, line 166, in _do_get
    return self._create_connection()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 393, in _create_connection
    return _ConnectionRecord(self)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 678, in __init__
    self.__connect()
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 902, in __connect
    with util.safe_reraise():
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/util/langhelpers.py&quot;, line 147, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/pool/base.py&quot;, line 898, in __connect
    self.dbapi_connection = connection = pool._invoke_creator(self)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/create.py&quot;, line 637, in connect
    return dialect.connect(*cargs, **cparams)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib64/python3.11/site-packages/sqlalchemy/engine/default.py&quot;, line 616, in connect
    return self.loaded_dbapi.connect(*cargs, **cparams)
  File &quot;/home/db/virtualenv/dbsite/3.11/lib/python3.11/site-packages/psycopg/connection.py&quot;, line 728, in connect
    raise ex.with_traceback(None)
sqlalchemy.exc.OperationalError: (psycopg.OperationalError) connection failed: server closed the connection unexpectedly
    This probably means the server terminated abnormally
    before or while processing the request.
(Background on this error at: https://sqlalche.me/e/20/e3q8)
</code></pre> <p>I'm not a developer or even remotely familiar with cPanel or hosting, but thought that if the Flask app works on my desktop machine and I recreate the Python environment and upload the same code, then everything would work!</p> <p>I assume the issue is related to the <code>init_db</code> function not being loaded by the web host, but I don't really understand the problem fully.</p> <p>I would be grateful if anyone could point me in the right direction to resolve this database connection issue.</p>
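Since the OperationalError is raised before any query runs, one way to narrow this down, independent of Flask and SQLAlchemy, is to check whether the Postgres host/port in the DSN is reachable at all from the web host. A stdlib-only sketch (the helper name and the TCP-probe idea are illustrative, not from the original post):

```python
import socket
from urllib.parse import urlparse


def can_reach(db_url: str, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the DSN's host/port succeeds."""
    parsed = urlparse(db_url)
    host = parsed.hostname or "localhost"
    port = parsed.port or 5432  # Postgres default port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# e.g. can_reach("postgresql+psycopg://user:pw@dbhost:5432/mydb")
```

If the socket connects but SQLAlchemy still fails, the problem is more likely credentials, SSL requirements, or the server dropping connections, rather than network routing.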
<python><postgresql><flask><cpanel><psycopg2>
2023-11-02 17:49:53
1
676
jpmorr
77,411,722
3,078,473
Throwing a custom exception. I want to override the casting error message
<pre class="lang-py prettyprint-override"><code>while True:
    try:
        age = int(input('How old are you? '))
        if age &lt; 0:
            raise ValueError('negative age')
    except ValueError as ve:
        print(f'{ve}')
    except:
        print('Error')
    else:
        break
</code></pre> <p>I want to override the casting error message <em>invalid literal for int() with base 10: 'cat'</em> with the error &quot;Not a valid number&quot;.</p>
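One way to get that behaviour (a sketch, not from the original post): catch the ValueError that `int()` raises and re-raise it with the wanted wording, using `from None` to suppress the original message.

```python
def parse_age(text: str) -> int:
    try:
        age = int(text)
    except ValueError:
        # 'from None' hides int()'s own message, so only ours is printed
        raise ValueError('Not a valid number') from None
    if age < 0:
        raise ValueError('negative age')
    return age
```

This helper can replace the `int(input(...))` call in the loop above; the existing `except ValueError as ve: print(f'{ve}')` handler then prints "Not a valid number" for inputs like `cat`.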
<python>
2023-11-02 17:31:25
3
419
JackOfAll
77,411,626
5,147,965
How can I deploy and run a Python web server in an Azure App Service via Docker
<p>I am trying to get a python web server running in an Azure App Service. The container is created via the Dockerfile and pushed to an Azure Docker Registry. Continuous deployment is configured for the existing Azure App Service so that the changes are applied directly. This step works correctly. In addition to the web server, I have set up SSH to connect to the container via the Azure Portal. This also gives me access to the container within the Azure App Service.</p> <p>However, if I try to get a response from the web server running in the container from outside (e.g. <code>index.html</code>), all requests fail. Inside the container, however, the web server is running and responds correctly (<code>curl localhost:8080</code>)</p> <p>Apparently the port used (8080) is not forwarded to the container, as the web server runs in the container and can also be reached from the outside when running locally.</p> <p><strong>How do I get the python web server to run in the Azure App Service so that it can be accessed from outside?</strong></p> <h2>What works</h2> <ul> <li>Creating a Docker container with a python web application</li> <li>Pushing the Container to a Azure Docker Registry</li> <li>Continuous deployment to the Azure App Service</li> <li>SSH connection to the container via Azure Portal</li> <li>Start and run the web server in the container</li> <li>Accessing the index.html file served by the web server via <code>curl</code> within the container</li> <li>Starting the container locally and access <code>index.html</code> via browser</li> </ul> <h2>What does not work</h2> <ul> <li>Access to the index.html file provided by the web server from outside the container, i.e. 
from the internet</li> </ul> <h2>What I have tried</h2> <ul> <li>Changing the port number within the container to 8080, 80, 8008, 8000 or 443 in several configurations (web server port, EXPOSE port, WEBSITES_PORT setting)</li> <li>starting the web server without the sshd service</li> <li>enabling and disabling HTTPS</li> <li>changing the &quot;HTTPS Only&quot;, &quot;Always on&quot;, &quot;HTTP 2.0 Proxy&quot; and &quot;HTTP version&quot; settings of the Azure App Service</li> <li>Adding Application setting PORT with the value 8080</li> <li>Adding Application setting WEBSITES_PORT with the value 8080</li> <li>restarting, recreating and redeploying the Azure App Service</li> <li>examining the Log stream within the Azure Portal</li> <li>examining the Deployment Center logs within the Azure Portal</li> </ul> <h2>Resources and suggested solutions that I came across during my research</h2> <ul> <li><p><a href="https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?tabs=debian&amp;pivots=container-linux" rel="nofollow noreferrer">Microsoft Documentation about custom containers for Azure App Service</a></p> </li> <li><p><a href="https://learn.microsoft.com/en-us/answers/questions/168746/container-didnt-respond-to-http-pings-on-port-8080" rel="nofollow noreferrer">A developer who may have the same problem but no solution</a></p> </li> <li><p><a href="https://stackoverflow.com/questions/67755217/how-does-azure-app-service-manage-port-binding-with-docker-container-does-it-re">StackOverflow question about port binding</a></p> </li> <li><p><a href="https://azureossd.github.io/2023/02/15/Whats-the-difference-between-PORT-and-WEBSITES_PORT/" rel="nofollow noreferrer">An explanation of the differences of the PORT and WEBSITES_PORTS settings</a></p> </li> </ul> <h2>Observations</h2> <p>The container startup process fails because it does not respond to HTTP pings.</p> <pre><code>ERROR - Container &lt;name&gt; for site &lt;name&gt; did not start within expected time limit. Elapsed time = 230.2830234 sec
ERROR - Container &lt;name&gt; didnt respond to HTTP pings on port: 8080, failing site start. See container logs for debugging.
INFO - Stopping site &lt;name&gt; because it failed during startup.
</code></pre> <p>Within the container all services are working.</p> <p>Azure uses this command to start the container within the web service:</p> <pre><code>docker run -d --expose=8080 --name &lt;name&gt; -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8080 -e WEBSITE_SITE_NAME=&lt;app-service-name&gt; -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=&lt;hostname&gt; -e WEBSITE_INSTANCE_ID=&lt;id&gt; -e WEBSITE_USE_DIAGNOSTIC_SERVER=False &lt;DockerRegistry:Tag&gt;
</code></pre> <p>Using this command locally to run the container leads to the same result (index.html not reachable). It will however work if I add <code>-p 8080:8080</code> to the command.</p> <h2><a href="https://github.com/mtomberger/azuredockerpython" rel="nofollow noreferrer">The files I used</a></h2>
<python><docker><azure-web-app-service>
2023-11-02 17:15:10
1
487
Gehtnet
77,411,624
4,764,604
ImportError: Using `load_in_8bit=True` requires Accelerate and bitsandbytes, but I have them in pip freeze
<p>I am trying to launch a gradio backend but I seem to be missing libraries, although I have them installed.</p> <pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/Documents/chatbot-rag$ python3 gradio-chatbot.py
/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/langchain/__init__.py:34: UserWarning: Importing PromptTemplate from langchain root module is no longer supported. Please use langchain.prompts.PromptTemplate instead.
  warnings.warn(
Initializing backend for chatbot
/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/transformers/pipelines/__init__.py:698: FutureWarning: The `use_auth_token` argument is deprecated and will be removed in v5 of Transformers. Please use `token` instead.
  warnings.warn(
Traceback (most recent call last):
  File &quot;/home/reply/Documents/chatbot-rag/gradio-chatbot.py&quot;, line 10, in &lt;module&gt;
    backend.load_embeddings_and_llm_models()
  File &quot;/home/reply/Documents/chatbot-rag/backend.py&quot;, line 50, in load_embeddings_and_llm_models
    self.llm = self.load_llm(self.params)
  File &quot;/home/reply/Documents/chatbot-rag/backend.py&quot;, line 66, in load_llm
    pipe = pipeline(&quot;text-generation&quot;, model=self.llm_name_or_path, model_kwargs=model_kwargs)
  File &quot;/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/transformers/pipelines/__init__.py&quot;, line 870, in pipeline
    framework, model = infer_framework_load_model(
  File &quot;/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/transformers/pipelines/base.py&quot;, line 269, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File &quot;/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py&quot;, line 566, in from_pretrained
    return model_class.from_pretrained(
  File &quot;/home/reply/Documents/chatbot-rag/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py&quot;, line 2714, in from_pretrained
    raise ImportError(
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes `pip install -i https://test.pypi.org/simple/ bitsandbytes` or pip install bitsandbytes`
</code></pre> <p>But I already have them. And they are up to date:</p> <pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/Documents/chatbot-rag$ pip freeze | grep -E '^(accelerate|bitsandbytes)'
accelerate==0.24.1
bitsandbytes==0.41.1
(.venv) reply@reply-GP66-Leopard-11UH:~/Documents/chatbot-rag$ pip install --upgrade accelerate
pip install --upgrade -i https://test.pypi.org/simple/ bitsandbytes
Requirement already satisfied: accelerate in ./.venv/lib/python3.10/site-packages (0.24.1)
Requirement already satisfied: numpy&gt;=1.17 in ./.venv/lib/python3.10/site-packages (from accelerate) (1.26.1)
Requirement already satisfied: packaging&gt;=20.0 in ./.venv/lib/python3.10/site-packages (from accelerate) (23.2)
Requirement already satisfied: psutil in ./.venv/lib/python3.10/site-packages (from accelerate) (5.9.6)
Requirement already satisfied: pyyaml in ./.venv/lib/python3.10/site-packages (from accelerate) (6.0.1)
Requirement already satisfied: torch&gt;=1.10.0 in ./.venv/lib/python3.10/site-packages (from accelerate) (2.1.0)
Requirement already satisfied: huggingface-hub in ./.venv/lib/python3.10/site-packages (from accelerate) (0.17.3)
Requirement already satisfied: filelock in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (3.13.1)
Requirement already satisfied: typing-extensions in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (4.8.0)
Requirement already satisfied: sympy in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (1.12)
Requirement already satisfied: networkx in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (3.2.1)
Requirement already satisfied: jinja2 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (3.1.2)
Requirement already satisfied: fsspec in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (2023.10.0)
Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.1.105 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.105)
Requirement already satisfied: nvidia-cuda-runtime-cu12==12.1.105 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.105)
Requirement already satisfied: nvidia-cuda-cupti-cu12==12.1.105 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.105)
Requirement already satisfied: nvidia-cudnn-cu12==8.9.2.26 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (8.9.2.26)
Requirement already satisfied: nvidia-cublas-cu12==12.1.3.1 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.3.1)
Requirement already satisfied: nvidia-cufft-cu12==11.0.2.54 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (11.0.2.54)
Requirement already satisfied: nvidia-curand-cu12==10.3.2.106 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (10.3.2.106)
Requirement already satisfied: nvidia-cusolver-cu12==11.4.5.107 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (11.4.5.107)
Requirement already satisfied: nvidia-cusparse-cu12==12.1.0.106 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.0.106)
Requirement already satisfied: nvidia-nccl-cu12==2.18.1 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (2.18.1)
Requirement already satisfied: nvidia-nvtx-cu12==12.1.105 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (12.1.105)
Requirement already satisfied: triton==2.1.0 in ./.venv/lib/python3.10/site-packages (from torch&gt;=1.10.0-&gt;accelerate) (2.1.0)
Requirement already satisfied: nvidia-nvjitlink-cu12 in ./.venv/lib/python3.10/site-packages (from nvidia-cusolver-cu12==11.4.5.107-&gt;torch&gt;=1.10.0-&gt;accelerate) (12.3.52)
Requirement already satisfied: requests in ./.venv/lib/python3.10/site-packages (from huggingface-hub-&gt;accelerate) (2.31.0)
Requirement already satisfied: tqdm&gt;=4.42.1 in ./.venv/lib/python3.10/site-packages (from huggingface-hub-&gt;accelerate) (4.66.1)
Requirement already satisfied: MarkupSafe&gt;=2.0 in ./.venv/lib/python3.10/site-packages (from jinja2-&gt;torch&gt;=1.10.0-&gt;accelerate) (2.1.3)
Requirement already satisfied: charset-normalizer&lt;4,&gt;=2 in ./.venv/lib/python3.10/site-packages (from requests-&gt;huggingface-hub-&gt;accelerate) (3.3.2)
Requirement already satisfied: idna&lt;4,&gt;=2.5 in ./.venv/lib/python3.10/site-packages (from requests-&gt;huggingface-hub-&gt;accelerate) (3.4)
Requirement already satisfied: urllib3&lt;3,&gt;=1.21.1 in ./.venv/lib/python3.10/site-packages (from requests-&gt;huggingface-hub-&gt;accelerate) (2.0.7)
Requirement already satisfied: certifi&gt;=2017.4.17 in ./.venv/lib/python3.10/site-packages (from requests-&gt;huggingface-hub-&gt;accelerate) (2023.7.22)
Requirement already satisfied: mpmath&gt;=0.19 in ./.venv/lib/python3.10/site-packages (from sympy-&gt;torch&gt;=1.10.0-&gt;accelerate) (1.3.0)
Looking in indexes: https://test.pypi.org/simple/
Requirement already satisfied: bitsandbytes in ./.venv/lib/python3.10/site-packages (0.41.1)
</code></pre> <p>The interpreter is that of the virtualenv:</p> <pre><code>(.venv) reply@reply-GP66-Leopard-11UH:~/Documents/chatbot-rag$ which python
/home/reply/Documents/chatbot-rag/.venv/bin/python
(.venv) reply@reply-GP66-Leopard-11UH:~/Documents/chatbot-rag$ which python3
/home/reply/Documents/chatbot-rag/.venv/bin/python3
</code></pre>
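A note for context (not from the original post): transformers raises this ImportError from its own availability checks, and in the transformers versions of that era the bitsandbytes check reportedly also required a CUDA-capable torch install, so the message can appear on a CPU-only machine even when both packages are installed. A quick stdlib check of what the running interpreter actually sees:

```python
import importlib.util
import sys


def report(names):
    """List the running interpreter and where each named package resolves from."""
    lines = [f"interpreter: {sys.executable}"]
    for name in names:
        spec = importlib.util.find_spec(name)  # None if the module can't be found
        lines.append(f"{name}: {spec.origin if spec else 'NOT importable'}")
    return lines


for line in report(["accelerate", "bitsandbytes"]):
    print(line)
```

If both packages resolve from the venv here but the pipeline still fails, the next thing to check is whether `torch.cuda.is_available()` returns `True`.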
<python><python-3.x><path><libraries><gradio>
2023-11-02 17:14:27
1
3,396
Revolucion for Monica
77,411,607
9,381,985
Python: reading a text file prefixed with FF FE bytes
<p>I have got a file which can be opened in the VSCode editor as a normal text file. But if I try to read it in Python:</p> <pre class="lang-py prettyprint-override"><code>with open(&quot;file.ass&quot;) as f:
    for line in f.readlines():
        ...
</code></pre> <p>it will throw an exception:</p> <pre class="lang-py prettyprint-override"><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
</code></pre> <p>If I try to open it in binary mode, the first few bytes read like:</p> <pre class="lang-py prettyprint-override"><code>f = open(&quot;file.ass&quot;, &quot;rb&quot;)
b = f.read()
print(b[:50])
Out[36]: b'\xff\xfe[\x00S\x00c\x00r\x00i\x00p\x00t\x00 \x00I\x00n\x00f\x00o\x00]\x00\r\x00\n\x00;\x00 \x00S\x00c\x00r\x00i\x00p\x00t\x00 \x00'
</code></pre> <p>If I do <code>decode('utf-16')</code>, I can see the correct characters.</p> <pre class="lang-py prettyprint-override"><code>b[:50].decode('utf-16')
Out[58]: '[Script Info]\r\n; Script '
</code></pre> <p>But I am wondering if there is a more elegant way to handle such files, like normal text files. In other words, how could I know whether I need to do <code>decode('utf-16')</code> and use <code>readlines()</code> like reading a normal text file? Thanks.</p>
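The `\xff\xfe` prefix is the UTF-16 little-endian byte order mark, so the file can be read directly with `open("file.ass", encoding="utf-16")`. A sketch of a helper (the function name is mine) that sniffs the BOM first, so the same code also copes with plain UTF-8 files:

```python
import codecs


def open_text(path):
    """Open a text file, honouring a UTF-16 or UTF-8 BOM if one is present."""
    with open(path, "rb") as f:
        head = f.read(4)
    if head.startswith(codecs.BOM_UTF16_LE) or head.startswith(codecs.BOM_UTF16_BE):
        encoding = "utf-16"     # consumes the BOM and picks the right endianness
    elif head.startswith(codecs.BOM_UTF8):
        encoding = "utf-8-sig"  # consumes the BOM
    else:
        encoding = "utf-8"
    return open(path, encoding=encoding)
```

The returned object is a normal text file, so `readlines()` works as usual. (This simple sniff would misread a UTF-32-LE file, whose BOM also starts with `\xff\xfe`.)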
<python><encoding>
2023-11-02 17:11:56
1
575
Cuteufo
77,411,592
11,163,122
Does collections.abc.Collection have a uniqueness property, like Set?
<p>From <a href="https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes" rel="nofollow noreferrer">https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes</a>, it's clear to me that a <code>collections.abc.Set</code> is a <code>collections.abc.Collection</code>. And a useful property of <code>Set</code> is the uniqueness property (that it doesn't contain duplicated elements).</p> <p>However, what I am trying to figure out is, does <code>Collection</code> have a uniqueness property?</p> <p>In other words, is it possible for a <code>Collection</code> to have duplicate values inside?</p>
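For reference, `Collection` only requires `__contains__`, `__iter__` and `__len__`; nothing in the ABC promises uniqueness, so a plain list full of duplicates is a perfectly valid `Collection`. A tiny demonstration:

```python
from collections.abc import Collection, Set

# A list keeps duplicates, yet still satisfies the Collection ABC.
items = [1, 1, 2]
assert isinstance(items, Collection)   # it is a Collection...
assert not isinstance(items, Set)      # ...but not a Set
assert len(items) == 3                 # the duplicate element is retained
```

So uniqueness is a property that `Set` adds on top of `Collection`, not something inherited from it.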
<python><data-structures><set><python-typing>
2023-11-02 17:09:50
1
2,961
Intrastellar Explorer
77,411,480
1,560,414
Inconsistent Docker Images
<p>I have been using the <code>python3.9-slim</code> docker image, and as one of the build steps I install the 3.9 python headers, via <code>RUN apt-get install python3-dev</code>.</p> <p>This worked for years, and then <code>apt</code> has changed so that <code>python3-dev</code> now installs the headers for Python 3.11 instead of 3.9.</p> <p>On top of that, <code>apt install python3.9-dev</code> is not available.</p> <p>I kind of didn't expect these things to be changing under the hood, and thought of using docker images as a way of getting reproducible builds.</p> <p>Would anyone be able to explain how/why that has changed, and how I might better handle this in the future?</p> <p>Thanks</p>
<python><docker><ubuntu><apt>
2023-11-02 16:50:13
1
1,667
freebie
77,411,477
10,713,813
Pygmo multi-objective optimization with constraints
<p>I want to solve a multi-objective optimization problem with constraints using pygmo and obtain the resulting Pareto front. However, even though my program only contains linear constraints, I obtain an error:</p> <pre><code>what: Non linear constraints detected in &lt;class '__main__.SimpleProblem'&gt; instance. NSGA-II: cannot deal with them.
</code></pre> <p>I implemented a simpler program to recreate the error:</p> <pre><code>import pygmo as pg

class SimpleProblem:

    # objective functions and constraints
    def fitness(self, x):
        fitness_vector = []

        # first objective
        fitness_vector.append(x[0])

        constants = [1, 0.5, 0.5]
        # second objective
        fitness_vector.append(-sum([x[i] * constants[i] for i in range(3)]))

        # constraint
        fitness_vector.append(sum([x[i] for i in range(3)]) - 2)

        return fitness_vector

    # number of objectives
    def get_nobj(self):
        return 2

    # number of inequality constraints
    def get_nic(self):
        return 1

    # real dimension of the problem
    def get_ncx(self):
        return 1

    # integer dimension of the problem
    def get_nix(self):
        return 3

    # bounds of the decision variables
    def get_bounds(self):
        return ([0] + [0] * 3, [1e6] + [1] * 3)

if __name__ == &quot;__main__&quot;:
    model = SimpleProblem()
    problem = pg.problem(model)
    algorithm = pg.algorithm(pg.nsga2(gen=1000))
    population = pg.population(problem, size=100)
    population = algorithm.evolve(population)
</code></pre> <p>which results in the same error. If I remove the constraint it works fine.</p> <p>To my understanding, fitness should return a vector containing the objectives followed by the constraints, as shown in the example &quot;Coding a User Defined Problem with constraints&quot; of the pygmo2 documentation. They however do not list an example with multiple objectives that contains other constraints except the decision variable bounds. Do I need to handle the fitness function differently in this case?</p> <p>I am using pygmo==2.19.5.</p>
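For context (not part of the original question): NSGA-II in pygmo rejects any constrained problem regardless of whether the constraints are linear, so common workarounds are pygmo's `unconstrain` meta-problem or folding the violation into the objectives as a penalty. A sketch of the penalty idea, mirroring the fitness function above; the `PENALTY` weight is an arbitrary illustrative choice:

```python
PENALTY = 1e6  # arbitrary illustrative weight for constraint violation


def penalized_fitness(x):
    """Mirror of SimpleProblem.fitness with the constraint folded in as a penalty."""
    constants = [1, 0.5, 0.5]
    f1 = x[0]
    f2 = -sum(x[i] * constants[i] for i in range(3))
    # amount by which the inequality constraint is violated (0 when feasible)
    violation = max(0.0, sum(x[i] for i in range(3)) - 2)
    # penalize both objectives so infeasible points are dominated
    return [f1 + PENALTY * violation, f2 + PENALTY * violation]
```

With `fitness` returning only these two values (and `get_nic` removed), the problem becomes unconstrained and NSGA-II will accept it.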
<python><optimization><pygmo>
2023-11-02 16:49:45
1
320
wittn
77,411,452
1,668,622
In Python is it possible to extract and re-use the types of variadic collections?
<p>Is it possible to modify <code>deco_with_params</code> in the following snippet to accept an arbitrary number of sequences of generic types while keeping it fully type hinted?</p> <pre class="lang-py prettyprint-override"><code>from collections.abc import Mapping, Callable, Sequence
import functools

def deco_with_params(arg1: Sequence[int], arg2: Sequence[str]) -&gt; Callable[[Callable[[int, str], None]], None]:
    def decorator(afunc: Callable[[int, str], None]) -&gt; None:
        afunc(arg1[0], arg2[0])  # just a shortcut for what would be a permutation
    return decorator

@deco_with_params(arg1=[1, 2, 3], arg2=[&quot;one&quot;, &quot;two&quot;, &quot;three&quot;])
def foo(arg1: int, arg2: str) -&gt; None:
    print(arg1, arg2)
</code></pre> <p>I.e. I don't want to be limited to two arguments (which could be accomplished by just <code>TypeVar</code>s instead of <code>int</code> and <code>str</code>), but I also have to access the type of the sequences because I want to use them one by one.</p> <p>So</p> <pre class="lang-py prettyprint-override"><code>@deco_with_params(string=[&quot;one&quot;, &quot;two&quot;, &quot;three&quot;], number=[4., 5.1, 6.7], extra=[True, False])
def foo(string: str, number: float, extra: bool) -&gt; None:
    print(string, number, extra)
</code></pre> <p>should be accepted by <code>mypy</code>, while</p> <pre class="lang-py prettyprint-override"><code>@deco_with_params(string=[&quot;one&quot;, &quot;two&quot;, &quot;three&quot;], number=[True, False], extra=[4., 5.1, 6.7])
def foo(string: str, number: float, extra: bool) -&gt; None:
    print(string, number, extra)
</code></pre> <p>should not (because here <code>extra</code> would be <code>float</code> instead of <code>bool</code> and thus not match the signature anymore).</p> <p>I've read about <a href="https://peps.python.org/pep-0646/" rel="nofollow noreferrer"><code>TypeVarTuple</code></a> but I didn't manage to wrap my mind around the provided examples enough to be able to tell if it's possible to extract types out of variadic <em>collections</em> this way.</p>
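As far as I can tell, PEP 646's `TypeVarTuple` offers no way to "map" a type constructor like `Sequence[...]` over its elements, so element-wise static checking of this pattern does not appear to be expressible today. A runtime-only sketch that at least generalizes the decorator to any number of keyword sequences (losing the per-argument static guarantees):

```python
from collections.abc import Callable, Sequence
from typing import Any


def deco_with_params(**arg_seqs: Sequence[Any]) -> Callable[[Callable[..., None]], None]:
    def decorator(afunc: Callable[..., None]) -> None:
        # same shortcut as in the original snippet:
        # call with the first element of each sequence
        afunc(**{name: seq[0] for name, seq in arg_seqs.items()})
    return decorator
```

Here mypy checks only that every value is a `Sequence[Any]`; matching each sequence's element type to the corresponding parameter would need a "type-level map", which the type system currently lacks.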
<python><python-3.x><variadic-functions><mypy><python-typing>
2023-11-02 16:45:36
1
9,958
frans
77,411,202
1,117,119
How do I make a REST call over the WireGuard protocol only using UDP?
<p>I need to call a <a href="https://en.wikipedia.org/wiki/Representational_state_transfer" rel="nofollow noreferrer">REST</a> API which is inside a <a href="https://en.wikipedia.org/wiki/WireGuard" rel="nofollow noreferrer">WireGuard</a> VPN. The WireGuard VPN is client managed, so I can't modify it. I am just provided with a configuration file to access the VPN.</p> <p>I require a code-only solution that can function with <strong>only</strong> the capability of sending <a href="https://en.wikipedia.org/wiki/User_Datagram_Protocol" rel="nofollow noreferrer">UDP</a> packets. My application cannot modify the network configuration of either my network, or the REST API server's network.</p> <p>This code solution may use <a href="https://en.wikipedia.org/wiki/Go_%28programming_language%29" rel="nofollow noreferrer">Go</a>, Python or JavaScript, as our existing codebase already uses all of these languages.</p> <p>If there is a sufficiently useful solution in Java or <a href="https://en.wikipedia.org/wiki/C_Sharp_%28programming_language%29" rel="nofollow noreferrer">C#</a>, we can add those languages to our build chain.</p>
<javascript><python><go><wireguard>
2023-11-02 16:09:12
2
2,333
yeerk
77,411,176
2,095,569
Given a target aspect ratio, how to crop into image, framing around a found face
<p>I have an 800x800 portrait image and want to crop into it, finding the face and &quot;framing it&quot; in the centre, while maintaining a 3:4 aspect ratio (outline select box).</p> <p>So I almost need to calculate some kind of &quot;centre point&quot; of the calculated <code>top</code>, <code>bottom</code>, <code>left</code> and <code>right</code> and crop into that point (red dots).</p> <p><a href="https://i.sstatic.net/FlKyD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FlKyDm.png" alt="enter image description here" /></a></p> <p>I believe I'm some way off reaching this goal as my Python isn't particularly good - here's what I have so far:</p> <pre><code>cropped_face_image = numpy.array(PIL.Image.open(BytesIO(image.content)))[
    top:bottom, left:right
]
pil_image = PIL.Image.fromarray(cropped_face_image)
pil_image.show()
</code></pre> <p>This just hard-crops the face from the image.</p>
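A sketch of the geometry involved (the helper name and argument order are mine): take the centre of the face box, grow the shorter side until width/height equals the target 3:4 ratio, then clamp to the image bounds.

```python
def crop_box(top, bottom, left, right, img_w, img_h, ratio=3 / 4):
    """Expand a face box to a crop with width/height == ratio, centred on the face."""
    cx = (left + right) / 2
    cy = (top + bottom) / 2
    w, h = right - left, bottom - top
    if w / h < ratio:
        w = h * ratio   # face box too narrow for the target ratio: widen
    else:
        h = w / ratio   # face box too wide: make taller
    x0, x1 = max(0, int(cx - w / 2)), min(img_w, int(cx + w / 2))
    y0, y1 = max(0, int(cy - h / 2)), min(img_h, int(cy + h / 2))
    return y0, y1, x0, x1   # same order as the slice: top, bottom, left, right
```

The result plugs straight into the existing slice as `arr[y0:y1, x0:x1]`. Note that the clamping can break the exact ratio when the face sits near an image edge; in that case the box would need to be shifted inward rather than clipped.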
<python><numpy><python-imaging-library>
2023-11-02 16:04:46
1
666
Harry Lincoln
77,411,143
4,715,957
Extract Config Context YAML information from NetBox
<p>We have a NetBox instance which we use to store information about virtual machines. The Config Context tab for a virtual machine object is populated with a YAML file structured as such:</p> <pre><code>stuff:
  - hostname: myserver
    os: Linux
    os_version: RHEL 8.1
    network:
      - ip_address: 192.168.2.3
        network: ABC
        gateway: 192.168.2.1
    server_type: XYZ
    other_data: Foobar
</code></pre> <p>The same data is also available in JSON format.</p> <p>I am doing extraction of the standard NetBox fields to a CSV file via the following Python script:</p> <pre><code>import configparser
import csv

import pynetbox

config = configparser.ConfigParser()
netbox_token = config.get('NetBox', 'token')
netbox_api_base_url = config.get('NetBox', 'api_base_url')
netbox_devices_endpoint = config.get('NetBox', 'devices_endpoint')

nb = pynetbox.api(netbox_api_base_url, token=netbox_token)
nb.http_session.verify = True

vms = nb.virtualization.virtual_machines.all()

csv_file_vms = 'netbox_vms.csv'
with open(csv_file_vms, mode='w', newline='') as csv_file:
    csv_writer = csv.writer(csv_file)
    csv_writer.writerow(['Name', 'Status', 'Site', 'VCPUs', 'Memory (MB)', 'Disk (GB)', 'IP Address'])
    for vm in vms:
        csv_writer.writerow([vm.name, vm.status, vm.site, vm.vcpus, vm.memory, vm.disk, vm.primary_ip])
</code></pre> <p>How do I modify the <code>writerow()</code> function to add the data stored in the Config Context, e.g. the <code>os_version</code> field?</p>
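If the rendered context is available on the VM record as a plain dict (pynetbox exposes it as `vm.config_context` when the API returns it), the extraction is ordinary dict navigation. A hedged helper (the function name is mine, and the structure is assumed to match the YAML above):

```python
from typing import Optional


def get_os_version(config_context: dict) -> Optional[str]:
    """Pull os_version out of a config context shaped like the YAML above."""
    entries = config_context.get('stuff') or []
    return entries[0].get('os_version') if entries else None
```

The CSV row would then become something like `csv_writer.writerow([vm.name, ..., get_os_version(vm.config_context)])`, with a matching column added to the header row.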
<python><netbox>
2023-11-02 16:00:23
1
2,315
dr_
77,411,138
6,887,780
Getting incorrect padding error when connecting to AzureAppConfigurationClient
<p>I am using the following code to get the Key Vault value, but on running it I get the error:</p> <pre><code>raise binascii.Error(&quot;Connection string secret has incorrect padding&quot;)
binascii.Error: Connection string secret has incorrect padding
</code></pre> <p>I have already updated to the latest packages. My code:</p> <pre><code>from azure.appconfiguration import AzureAppConfigurationClient
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

app_string_url = &quot;Endpoint=https://xxx.azconfig.io;Id=123;Secret=123&quot;
secret_name = 'ACS_ENDPOINT'

# Create an instance of AzureAppConfigurationClient
credential = DefaultAzureCredential()  # not used
app_config_client = AzureAppConfigurationClient.from_connection_string(connection_string=app_string_url)
config_setting = app_config_client.get_configuration_setting(secret_name)  # error at last line
</code></pre>
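For context (not from the original post): the `Secret` portion of an App Configuration connection string is base64, and the "incorrect padding" error is raised while decoding it, before any network call. A redacted or truncated value like `Secret=123` triggers it immediately. A stdlib sanity check (the helper name is mine):

```python
import base64
import binascii


def secret_decodes(conn_str: str) -> bool:
    """True if the Secret part of the connection string is valid base64."""
    parts = dict(p.split('=', 1) for p in conn_str.split(';') if '=' in p)
    try:
        base64.b64decode(parts['Secret'], validate=True)
        return True
    except (KeyError, binascii.Error, ValueError):
        return False
```

If this returns `False` for the real (unredacted) connection string, the secret was probably mangled in a copy/paste or stored with characters stripped.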
<python><azure>
2023-11-02 15:59:39
1
427
Sourav Roy