Dataset columns (type, observed min to max):
- QuestionId: int64, 74.8M to 79.8M
- UserId: int64, 56 to 29.4M
- QuestionTitle: string, lengths 15 to 150
- QuestionBody: string, lengths 40 to 40.3k
- Tags: string, lengths 8 to 101
- CreationDate: string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18
- AnswerCount: int64, 0 to 44
- UserExpertiseLevel: int64, 301 to 888k
- UserDisplayName: string, lengths 3 to 30
78,141,598
9,194,965
Plot regression line for dtype Period (Q-DEC) vs int column: python
<p>I am unable to plot a regression line for a simple dataframe that consists of a Period-formatted date column and an integer column. The sample dataframe can be created with the code below:</p> <pre><code>df = pd.DataFrame({&quot;quarter&quot;: ['2017Q1', '2017Q2', '2017Q3', '2017Q4', '2018Q1', '2018Q2', '2018Q3', '2018Q4'], &quot;total&quot;: [392, 664, 864,1024, 1202, 1375, 1532, 1717] }) df[&quot;quarter&quot;] = pd.to_datetime(df[&quot;quarter&quot;]).dt.to_period('Q') </code></pre> <p>To generate a regression plot, I am using the following code:</p> <pre><code>ax = sns.regplot( data=df, x='quarter', y='total', ) plt.show(); </code></pre> <p>However, I am getting an error as follows:</p> <pre><code> TypeError: float() argument must be a string or a number, not 'Period' </code></pre> <p>Converting the Period column to a string/object format did not fix the issue, and I am not able to convert it to an integer format either.</p> <p>Can somebody help provide a way to generate a regression trend line for &quot;total&quot; vs &quot;quarter&quot;?</p>
<python><pandas><datetime><seaborn><regplot>
2024-03-11 15:01:12
0
1,030
veg2020
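A common workaround (a sketch, not taken from the original thread): seaborn can only regress on numbers, so map each `Period` to its integer ordinal and plot against that numeric column.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "quarter": ["2017Q1", "2017Q2", "2017Q3", "2017Q4",
                "2018Q1", "2018Q2", "2018Q3", "2018Q4"],
    "total": [392, 664, 864, 1024, 1202, 1375, 1532, 1717],
})
df["quarter"] = pd.PeriodIndex(df["quarter"], freq="Q")

# expose the Period's integer ordinal (quarters since 1970Q1)
# as a plain int column that seaborn/numpy can regress on
df["quarter_ord"] = df["quarter"].map(lambda p: p.ordinal)

# the same least-squares line that regplot would draw
slope, intercept = np.polyfit(df["quarter_ord"], df["total"], deg=1)
```

With that column in place, `sns.regplot(data=df, x='quarter_ord', y='total')` should work; if you want the original quarter names on the axis, relabel the ticks with `df['quarter'].astype(str)`.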
78,141,573
5,774,070
Integrating frontend JavaScript code with AES encryption: ValueError: Data must be padded to 16 byte boundary in CBC mode
<p>I'm currently implementing AES encryption in the backend using Python, but I'm encountering some issues in ensuring compatibility between the frontend and the backend. I need help in integrating the frontend JavaScript code to work with it.</p> <p>My backend Python code:</p> <pre class="lang-py prettyprint-override"><code>class Crypt(): def pad(self, data): BLOCK_SIZE = 16 length = BLOCK_SIZE - (len(data) % BLOCK_SIZE) return data + (chr(length)*length) def unpad(self, data): return data[:-(data[-1] if type(data[-1]) == int else ord(data[-1]))] def bytes_to_key(self, data, salt, output=48): assert len(salt) == 8, len(salt) data += salt key = sha256(data).digest() final_key = key while len(final_key) &lt; output: key = sha256(key + data).digest() final_key += key return final_key[:output] def bytes_to_key_md5(self, data, salt, output=48): assert len(salt) == 8, len(salt) data += salt key = md5(data).digest() final_key = key while len(final_key) &lt; output: key = md5(key + data).digest() final_key += key return final_key[:output] def encrypt(self, message): passphrase = &quot;&lt;secret passphrase value&gt;&quot;.encode() salt = Random.new().read(8) key_iv = self.bytes_to_key_md5(passphrase, salt, 32+16) key = key_iv[:32] iv = key_iv[32:] aes = AES.new(key, AES.MODE_CBC, iv) return base64.b64encode(b&quot;Salted__&quot; + salt + aes.encrypt(self.pad(message).encode())) def decrypt(self, encrypted): passphrase = &quot;&lt;secret passphrase value&gt;&quot;.encode() encrypted = base64.b64decode(encrypted) assert encrypted[0:8] == b&quot;Salted__&quot; salt = encrypted[8:16] key_iv = self.bytes_to_key_md5(passphrase, salt, 32+16) key = key_iv[:32] iv = key_iv[32:] aes = AES.new(key, AES.MODE_CBC, iv) return self.unpad(aes.decrypt(encrypted[16:])).decode().strip('&quot;') def base64_decoding(self, encoded): base64decode = base64.b64decode(encoded) return base64decode.decode() crypt = Crypt() test = &quot;secret message to be sent over network&quot; encrypted_message = 
crypt.encrypt(test) print(&quot;Encryp msg:&quot;, encrypted_message) decrypted_message = crypt.decrypt(encrypted_message) print(&quot;Decryp:&quot;, decrypted_message) </code></pre> <p>here's what I've tried so far on the frontend with React and CryptoJS:</p> <pre class="lang-js prettyprint-override"><code>import React from &quot;react&quot;; import CryptoJS from 'crypto-js'; const DecryptEncrypt = () =&gt; { function bytesToKey(passphrase, salt, output = 48) { if (salt.length !== 8) { throw new Error('Salt must be 8 characters long.'); } let data = CryptoJS.enc.Latin1.parse(passphrase + salt); let key = CryptoJS.SHA256(data).toString(CryptoJS.enc.Latin1); let finalKey = key; while (finalKey.length &lt; output) { data = CryptoJS.enc.Latin1.parse(key + passphrase + salt); key = CryptoJS.SHA256(data).toString(CryptoJS.enc.Latin1); finalKey += key; } return finalKey.slice(0, output); } const decryptData = (encryptedData, key) =&gt; { const decodedEncryptedData = atob(encryptedData); const salt = CryptoJS.enc.Hex.parse(decodedEncryptedData.substring(8, 16)); const ciphertext = CryptoJS.enc.Hex.parse(decodedEncryptedData.substring(16)); const keyIv = bytesToKey(key, salt.toString(), 32 + 16); const keyBytes = CryptoJS.enc.Hex.parse(keyIv.substring(0, 32)); const iv = CryptoJS.enc.Hex.parse(keyIv.substring(32)); const decrypted = CryptoJS.AES.decrypt( { ciphertext: ciphertext }, keyBytes, { iv: iv, mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 } ); return decrypted.toString(CryptoJS.enc.Utf8); }; const encryptData = (data, key) =&gt; { const salt = CryptoJS.lib.WordArray.random(8); // Generate random salt const keyIv = bytesToKey(key, salt.toString(), 32 + 16); const keyBytes = CryptoJS.enc.Hex.parse(keyIv.substring(0, 32)); const iv = CryptoJS.enc.Hex.parse(keyIv.substring(32)); const encrypted = CryptoJS.AES.encrypt(data, keyBytes, { iv: iv, mode: CryptoJS.mode.CBC, padding: CryptoJS.pad.Pkcs7 }); const ciphertext = 
encrypted.ciphertext.toString(CryptoJS.enc.Hex); const saltedCiphertext = &quot;Salted__&quot; + salt.toString(CryptoJS.enc.Hex) + ciphertext; return btoa(saltedCiphertext); }; const dataToEncrypt = 'Data to be sent over network'; const encryptionKey = &quot;&lt;secret passphrase value&gt;&quot;; const encryptedData = encryptData(dataToEncrypt, encryptionKey); console.log(&quot;Encrypted data:&quot;, encryptedData); const decryptedData = decryptData(encryptedData, encryptionKey); console.log(&quot;Decrypted data:&quot;, decryptedData); return (&lt;&gt; Check &lt;/&gt;); } export default DecryptEncrypt; </code></pre> <p>I'm encountering some issues in ensuring compatibility between the frontend and the backend. Specifically, I'm struggling with properly deriving the key and IV, and encrypting/decrypting the data in a way that matches the backend implementation. I get the error below when I try to send the encrypted text to the backend, which throws the following error while decrypting:</p> <pre class="lang-none prettyprint-override"><code>packages\Crypto\Cipher\_mode_cbc.py&quot;, line 246, in decrypt raise ValueError(&quot;Data must be padded to %d byte boundary in CBC mode&quot; % self.block_size) ValueError: Data must be padded to 16 byte boundary in CBC mode </code></pre> <p>I'm a bit new to implementing AES in a full-stack app, so I'm learning as I go but still stuck on this issue. Could someone who has encountered a similar issue or implemented encryption/decryption in JavaScript offer some guidance or suggestions on how to modify my frontend code to achieve compatibility with the backend?</p>
<javascript><python><encryption><aes><cryptojs>
2024-03-11 14:58:00
1
1,777
Vishal Kharde
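Editor's note on the error, as a sketch rather than a verified fix: the `ValueError` means the bytes reaching `AES.decrypt` are not a multiple of 16, which typically happens when raw ciphertext is round-tripped through hex/Latin1 strings and `btoa` on the JS side. CryptoJS can emit the OpenSSL `Salted__` layout itself when passed the passphrase as a plain string (it runs the same MD5-based EVP_BytesToKey derivation the backend uses), so `CryptoJS.AES.encrypt(message, passphrase).toString()` is worth trying before hand-rolling the format. For reference, the byte-exact PKCS#7 padding the backend expects:

```python
BLOCK_SIZE = 16

def pkcs7_pad(data: bytes, block: int = BLOCK_SIZE) -> bytes:
    # always append 1..block bytes, each equal to the pad length
    n = block - len(data) % block
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    # the last byte encodes how many pad bytes to strip
    return data[:-data[-1]]
```

Whatever the frontend produces must satisfy `len(ciphertext) % 16 == 0` after base64 decoding and stripping the 16-byte `Salted__` header, or pycryptodome raises exactly this error.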
78,141,564
309,917
Corrupted ZipFile using file object
<p>I'm trying to create, using Python, a zip file to be later used in an HTTP response. To achieve this I'm using an <code>io.BytesIO()</code> as the file object.</p> <p>The resulting zip file is somehow corrupted: some software like <code>unzip</code> doesn't like it, while others like <code>zcat</code> do read the contents.</p> <p>Here is a minimal example; I will save the resulting zip to a file instead of serving it in a response. The error is the same as in my real application:</p> <pre><code>$ cat &lt;&lt; EOF &gt; script.py &gt; import io import zipfile with (io.BytesIO() as fo, zipfile.ZipFile(fo, 'w') as zip, open('outfile.zip', 'wb') as outfile): zip.writestr('file.txt', b'Lorem ipsum') fo.seek(0) outfile.write(fo.read()) &gt; EOF $ python3 script.py $ ll outfile.zip -rw-rw-r-- 1 neurino neurino 49 mar 11 14:38 outfile.zip $ unzip outfile.zip Archive: outfile.zip End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of outfile.zip or outfile.zip.zip, and cannot find outfile.zip.ZIP, period. $ zcat outfile.zip Lorem ipsum </code></pre> <p>Using a file in place of a file object, the resulting zip file works, and is also bigger:</p> <pre><code>$ cat &lt;&lt; EOF &gt; script2.py &gt; import io &gt; import zipfile &gt; with zipfile.ZipFile('outfile.zip', 'w') as zip: &gt; zip.writestr('file.txt', b'Lorem ipsum') $ python3 script2.py $ ll outfile.zip -rw-rw-r-- 1 user user 125 mar 11 14:41 outfile.zip $ unzip outfile.zip Archive: outfile.zip extracting: file.txt </code></pre>
<python><zip>
2024-03-11 14:56:01
1
12,525
neurino
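The likely culprit, sketched below: a zip's central directory is only written when the `ZipFile` is closed, and the question's single `with` block reads the buffer while the archive is still open, which is why the 49-byte output is missing its end-of-central-directory record. Closing the archive before reading the buffer fixes it:

```python
import io
import zipfile

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("file.txt", b"Lorem ipsum")
# leaving the `with` block closes the ZipFile and appends the
# central directory; the BytesIO stays open and readable
data = buf.getvalue()
```

`data` can then be written to a file or streamed in the HTTP response; `zipfile.is_zipfile(io.BytesIO(data))` now reports a valid archive.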
78,141,391
1,843,511
Best Practices for Modifying Nested Objects in Python
<p>I'm working on a Python project using Beanie ODM to interact with MongoDB, and I've come across a situation where I need to modify nested objects. Specifically, I'm dealing with a User object that contains a nested Job object (among other attributes). Here's a simplified version of the models I'm working with:</p> <pre><code>from beanie import Document from pydantic import BaseModel class Job(BaseModel): title: str salary: int class User(Document): name: str age: int job: Job </code></pre> <p>After retrieving a <code>User</code> instance from the database, I have several functions that modify both the <code>User</code> and its nested <code>Job</code> object. For example, updating a user's job title and salary. Now here is where I get a bit uncertain, let's take the following example:</p> <pre><code>from typing import Union from beanie import init_beanie from motor.motor_asyncio import AsyncIOMotorClient import asyncio async def fetch_user_by_name(name: str) -&gt; Union[User, None]: user = await User.find_one(User.name == name) return user def update_user_name(user: User, new_name: str) -&gt; None: user.name = new_name def update_user_job(user: User, new_title: str, new_salary: int) -&gt; None: user.job.title = new_title user.job.salary = new_salary async def main(): # Initialize Beanie with database connection and document models client = AsyncIOMotorClient(&quot;mongodb://localhost:27017&quot;) await init_beanie(database=client.db_name, document_models=[User]) # Example usage user = await fetch_user_by_name(&quot;Alice&quot;) if user: update_user_name(user, &quot;Alice Updated&quot;) update_user_job(user, &quot;Senior Developer&quot;, 90000) await user.save() print(&quot;User and job updated successfully.&quot;) else: print(&quot;User not found.&quot;) </code></pre> <p>This is a very basic example, the real example is more complex with more methods to manipulate the data based on dependencies and all that.</p> <p>So, here's where I'm uncertain: I know modifying 
objects in place can lead to unexpected behavior, especially in a larger codebase or in multi-threaded environments. However, since Beanie returns objects that I then manipulate, I'm pondering the best approach to handle such modifications.</p> <p>Questions:</p> <ul> <li><p>Is it considered bad practice to modify these nested objects in place after retrieving them with Beanie ODM? I'm concerned about potential side effects or the impact on code maintainability.</p> </li> <li><p>Would it be better to create a deepcopy of the object before making modifications? This seems like it would avoid unintended side effects, but I'm worried about the performance impact, especially with more complex or deeper nested objects. I could also just give the object as parameter, modify it and return it, but it's already modified in place so it doesn't make sense to return that one.</p> </li> </ul> <p>What would be considered better practice and why?</p>
<python><mongodb><object>
2024-03-11 14:30:07
1
5,005
Erik van de Ven
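One pattern worth weighing, illustrated here with plain dataclasses as hypothetical stand-ins for the Beanie/Pydantic models (Pydantic v2 offers `model_copy(deep=True)` for the same purpose): keep the mutating helpers, but have them operate on an explicit deep copy so the fetched document stays pristine until you decide to save.

```python
import copy
from dataclasses import dataclass

@dataclass
class Job:
    title: str
    salary: int

@dataclass
class User:
    name: str
    job: Job

def with_updated_job(user: User, title: str, salary: int) -> User:
    # return a modified deep copy; the caller's object is untouched
    updated = copy.deepcopy(user)
    updated.job.title = title
    updated.job.salary = salary
    return updated

alice = User("Alice", Job("Developer", 70000))
promoted = with_updated_job(alice, "Senior Developer", 90000)
```

The deep copy costs scale with document size, so the usual trade-off is: in-place mutation plus `save()` inside one short, clearly-scoped code path is fine; copy-then-replace pays off when the object is shared across functions that should not see half-applied changes.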
78,141,315
13,896,155
Visualizing Vector Embeddings Stored in ChromaDB
<p>I am currently working on a project where I am using ChromaDB to store vector embeddings generated from textual data. The vector embeddings are obtained using Langchain with OpenAI embeddings. However, I can't find a meaningful way to visualize these embeddings.</p> <p>Here is the relevant part of my code:</p> <pre><code>import os import chromadb from langchain.embeddings.openai import OpenAIEmbeddings from langchain_openai import OpenAIEmbeddings from langchain_community.document_loaders import PyPDFLoader from langchain.text_splitter import RecursiveCharacterTextSplitter from langchain_community.vectorstores import Chroma from langchain_openai.chat_models import ChatOpenAI from langchain.chains import RetrievalQA import pypdf import numpy as np # Set OpenAI API key os.environ[&quot;OPENAI_API_KEY&quot;] = &quot;openai_api_key&quot; # Initialize models and embeddings model = ChatOpenAI(model_name=&quot;gpt-3.5-turbo&quot;, temperature=0.5) embeddings = OpenAIEmbeddings() # Load PDF file and split into chunks loader = PyPDFLoader(&quot;./file.pdf&quot;) docs = loader.load() text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50) splits = text_splitter.split_documents(docs) # Create ChromaDB and add embeddings vector_store = Chroma.from_documents( documents=splits, embedding=embeddings, persist_directory=&quot;./chroma_db&quot; ) vector_store.persist() # Function for similarity search def query_search(query): # Load persisted vector store vector_store_retriever = Chroma( persist_directory=&quot;./files_db&quot;, embedding_function=embeddings) # Create a Retriever for the vector store retriever = vector_store_retriever.as_retriever(search_kwargs={&quot;k&quot;: 2}) # Make a chain to answer question from docs qa_chain = RetrievalQA.from_chain_type( llm=model, chain_type=&quot;stuff&quot;, retriever=retriever, verbose=True, return_source_documents=True ) response = qa_chain.invoke(query) print(response[&quot;result&quot;]) query = 
&quot;Query&quot; query_search(query) </code></pre> <p>I have tried various methods to visualize these embeddings, but none seem to work effectively. Can anyone provide guidance on how to effectively visualize vector embeddings stored in ChromaDB? Any help or suggestions would be greatly appreciated.</p>
<python><visualization><large-language-model><chromadb><openaiembeddings>
2024-03-11 14:19:14
0
581
Anay
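One common route (not in the original post, and the Chroma accessor name should be checked against your LangChain version): fetch the stored vectors, e.g. via `vector_store.get(include=["embeddings"])`, reduce them to 2-D, and scatter-plot. A dependency-light PCA using NumPy's SVD, shown on random stand-in vectors:

```python
import numpy as np

def pca_2d(embeddings: np.ndarray) -> np.ndarray:
    # center the vectors, then project onto the two leading
    # right-singular vectors (the principal axes)
    centered = embeddings - embeddings.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
vectors = rng.normal(size=(50, 1536))  # stand-in for OpenAI embeddings
coords = pca_2d(vectors)
```

Feed `coords[:, 0]` and `coords[:, 1]` to `plt.scatter`, coloring points by source document; for larger corpora, t-SNE or UMAP tend to separate semantic clusters better than plain PCA.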
78,141,179
1,342,522
ML Algo for determining if a sentence is a question
<p>I have a form that users are supposed to use to ask support questions. Instead, we've noticed that a lot of people are using it to submit feedback or statements to update parts of their record.</p> <p>I wanted to run through all records and separate out questions from feedback or statements (assume that feedback and statements are NOT questions).</p> <p>Does anybody know of a good pre-trained model or process I can use to resolve this type of issue?</p> <p>I originally thought I could use spaCy to look for keywords (how, where, when, why) or a &quot;?&quot; but I realized that some questions that came in had an implication of a question rather than a properly formatted question (ex. &quot;could I please have a copy of file A.&quot;). I went to HuggingFace and looked at some text-analysis models and couldn't find one that properly handled examples like the above.</p>
<python><spacy><huggingface-transformers>
2024-03-11 13:57:10
1
1,385
JakeHova
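As a baseline to compare any pre-trained model against, a rule-based check is easy to sketch. It catches the "could I please have a copy of file A." style by looking at the leading auxiliary/wh-word rather than requiring a "?" (the word list is an illustrative assumption, not an exhaustive one):

```python
import re

QUESTION_OPENERS = {
    "who", "what", "when", "where", "why", "how", "which",
    "is", "are", "was", "were", "do", "does", "did",
    "can", "could", "would", "should", "will", "may", "might",
}

def looks_like_question(text: str) -> bool:
    t = text.strip().lower()
    if t.endswith("?"):
        return True
    # polite requests often open "could/can/would you please..."
    words = re.findall(r"[a-z']+", t)
    return bool(words) and words[0] in QUESTION_OPENERS
```

For a learned alternative, zero-shot NLI classifiers with labels like "question" vs "statement" are a common next step; treat that as a direction to evaluate on your own data rather than a guaranteed fit.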
78,141,113
6,435,921
Python - Read a table column on a specific page of a PDF in the browser as a NumPy array
<h1>Task</h1> <p>On page 10 of <a href="https://cdn.who.int/media/docs/default-source/gho-documents/global-health-estimates/gpe_discussion_paper_series_paper31_2001_age_standardization_rates.pdf" rel="nofollow noreferrer">this</a> PDF there is Table 1. I would like to read the column &quot;WHO World Standard*&quot; as a NumPy array.</p> <h1>(Failed) Attempts</h1> <ol> <li>Tabula raises a <code>urllib.error.HTTPError: HTTP Error 403: Forbidden</code> error. <pre><code>import tabula pdf = &quot;https://cdn.who.int/media/docs/default-source/gho-documents/global-health-estimates/gpe_discussion_paper_series_paper31_2001_age_standardization_rates.pdf&quot; tables = tabula.read_pdf(pdf, pages=10) </code></pre> </li> <li>PDFPlumber cannot find the URL/PDF <pre><code>import pdfplumber with pdfplumber.open(&quot;https://cdn.who.int/media/docs/default-source/gho-documents/global-health-estimates/gpe_discussion_paper_series_paper31_2001_age_standardization_rates.pdf&quot;) as pdf: page = pdf.pages[9] table = page.extract_table() header_row = table[0] who_std_col_idx = header_row.index(&quot;WHO World Standard*&quot;) who_std_values = [row[who_std_col_idx] for row in table[1:]] print(who_std_values) </code></pre> </li> </ol> <h1>Bonus</h1> <p>Even better if it can be done exclusively using Python packages available in Conda, as ideally I would like to create a <code>requirements.txt</code> for my project.</p>
<python><pdf><web-scraping>
2024-03-11 13:48:15
2
3,601
Euler_Salter
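The 403 is most plausibly the CDN rejecting urllib's default `User-Agent` (tabula fetches URLs with plain urllib). A workaround sketch: download the bytes yourself with a browser-like header, then hand the local bytes to pdfplumber or tabula. The header value is an assumption about what this server accepts.

```python
import io
import urllib.request

PDF_URL = ("https://cdn.who.int/media/docs/default-source/gho-documents/"
           "global-health-estimates/gpe_discussion_paper_series_paper31"
           "_2001_age_standardization_rates.pdf")

def fetch(url: str) -> bytes:
    # send a browser-like User-Agent; the default "Python-urllib/3.x"
    # is the usual trigger for a CDN 403
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

# network call left commented out; the follow-up would be:
# with pdfplumber.open(io.BytesIO(fetch(PDF_URL))) as pdf:
#     table = pdf.pages[9].extract_table()
```

`urllib` and `pdfplumber` are both available via conda channels, so this keeps the requirements installable as the question asks.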
78,141,088
10,131,952
Unable to read local language data from an Excel CSV using csv
<p>I am giving data as &quot;यह एक नमूना संदेश है&quot; in a CSV file. I am reading the data inside the file using the csv module in Python. Sample code:</p> <pre><code> csvFile = request.FILES.get('file') csv_copy = copy.deepcopy(csvFile) file_content = csv_copy.read() decoded_file = file_content.decode('utf-8').splitlines() reader = csv.reader(decoded_file) rows = list(reader) for i,row in enumerate(rows): print(row[1]) </code></pre> <p>The data is coming out as &quot;???? ???? ???&quot;. Is there any way to resolve this issue?</p>
<python><django><csv><python-3.8>
2024-03-11 13:45:17
1
413
padmaja cherukuri
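A note on the likely cause, with a sketch: "????" usually means the text was already transcoded through an encoding that cannot represent Devanagari (e.g. Excel's plain "CSV" save, or a Latin-1 console) before the `utf-8` decode ran. Saving from Excel as "CSV UTF-8" writes a byte-order mark; decoding with `utf-8-sig` then round-trips the Hindi text intact:

```python
import csv

# simulate a file saved by Excel as "CSV UTF-8": BOM + UTF-8 bytes
file_content = "\ufeffid,message\n1,यह एक नमूना संदेश है\n".encode("utf-8")

# utf-8-sig strips the BOM so it doesn't stick to the first header cell
decoded_file = file_content.decode("utf-8-sig").splitlines()
rows = list(csv.reader(decoded_file))
```

If the question marks still appear, inspect `file_content[:50]` directly: literal `?` bytes in the raw file mean the data was destroyed when the file was saved, and no decode argument can recover it.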
78,141,020
3,337,089
Make pytorch reserve certain amount of GPU memory upfront
<p>My code has two workflows that run sequentially. In the first workflow, it requires only about 8GB of GPU memory, but the second workflow takes about 22GB. Is it possible to reserve 22GB upfront (even while running the first workflow) even if it is not using all the memory at that time?</p> <p>I run my code on a shared machine. When my code is in the first workflow, if someone runs another program in parallel on the same card using the remaining memory, my code crashes when it reaches the second workflow. That's why I want to reserve the required memory upfront.</p>
<python><pytorch><shared-gpu-memory>
2024-03-11 13:36:19
0
7,307
Nagabhushan S N
78,140,988
3,433,875
Curved arrowstyle in Matplotlib
<p>I am trying to replicate this: <a href="https://i.sstatic.net/yDnHp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yDnHp.png" alt="enter image description here" /></a></p> <p>And I have gotten quite far:</p> <pre><code>import matplotlib.pyplot as plt from matplotlib.lines import Line2D import numpy as np import pandas as pd colors = [&quot;#CC5A43&quot;,&quot;#CC5A43&quot;,&quot;#2C324F&quot;] data = { &quot;year&quot;: [2004, 2022, 2004, 2022, 2004, 2022], &quot;countries&quot; : [ &quot;Denmark&quot;, &quot;Denmark&quot;, &quot;Norway&quot;, &quot;Norway&quot;,&quot;Sweden&quot;, &quot;Sweden&quot;,], &quot;sites&quot;: [4,10,5,8,13,15] } df= pd.DataFrame(data) df = df.sort_values(['countries' ,'year' ], ascending=True ).reset_index(drop=True) df['ctry_code'] = df.countries.astype(str).str[:2].astype(str).str.upper() df['year_lbl'] =&quot;'&quot;+df['year'].astype(str).str[-2:].astype(str) df['sub_total'] = df.groupby('year')['sites'].transform('sum') no_bars = df.sub_total.max() sub_totals = df.sub_total.unique() years= df.year.unique() fig, axes = plt.subplots(nrows=2, ncols=1,figsize=(6, 6), subplot_kw=dict(polar=True)) fig.tight_layout(pad=3.0) colors = [[&quot;#CC5A43&quot;]*4 +[&quot;#2C324F&quot;]*5 + [&quot;#5375D4&quot;]*13, [&quot;#CC5A43&quot;]*10 +[&quot;#2C324F&quot;]*8 + [&quot;#5375D4&quot;]*15] for sub_total, year,color,ax in zip( sub_totals, years,colors,axes.ravel()): angles = np.arange(0,2*np.pi,2*np.pi/sub_total) ax.plot([angles, angles],[0,1],lw=4, c=&quot;#CC5A43&quot;) ax.set_rorigin(-4) ax.set_theta_zero_location(&quot;N&quot;) ax.set_yticklabels([]) ax.set_xticklabels([]) ax.grid(False) ax.spines[['polar','inner']].set_color('w') ax.text(np.pi/2,-3.2, year,va=&quot;center&quot; ) for i, j in enumerate(ax.lines): j.set_color(color[i]) #add legend color_legend = [ &quot;#A54836&quot;, &quot;#5375D4&quot;, &quot;#2B314D&quot;,] lines = [Line2D([0], [0], color=c, linestyle='-', lw=4,) for c in color_legend] labels = 
df.countries.unique().tolist() plt.legend(lines, labels, bbox_to_anchor=(1.5, -0.02), loc=&quot;lower center&quot;, frameon=False, fontsize= 10) </code></pre> <p>which produces: <a href="https://i.sstatic.net/OdY28.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OdY28.png" alt="enter image description here" /></a></p> <p>But I am really scratching my head on how to do the annotation &quot;arrows&quot;:</p> <p><a href="https://i.sstatic.net/kve7H.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kve7H.png" alt="enter image description here" /></a></p> <p>I was hoping that '-(' would work, but it doesn't.</p> <p>My question is, do I need to create my own arrowstyle for this (a good resource for this?) or is there a more obvious way of doing it that I am completely missing?</p>
<python><matplotlib>
2024-03-11 13:32:00
1
363
ruthpozuelo
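For what it's worth, matplotlib ships a square-bracket tail (`"-["`) but no round `'-('` style; a curved connector usually gets close enough without a custom `ArrowStyle` by combining a bracket arrowstyle with `connectionstyle="arc3,rad=..."`. A minimal sketch on a plain axis (coordinates are placeholders, not the question's polar layout):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.annotate(
    "Denmark '04",
    xy=(0.25, 0.75),      # point the bracket should embrace
    xytext=(0.7, 0.4),    # where the label text sits
    arrowprops=dict(
        arrowstyle="-[",                 # square-bracket tail at xy
        connectionstyle="arc3,rad=0.3",  # positive rad bends the connector
    ),
)
fig.canvas.draw()  # force a render so style errors surface immediately
```

If the square bracket is not acceptable, subclassing `matplotlib.patches.ArrowStyle` to draw a round bracket is the remaining option; the bracket styles in `matplotlib/patches.py` are a reasonable template to copy from.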
78,140,865
9,488,023
Using SciPy ndimage.zoom on an array with nan values
<p>I am trying to use scipy.ndimage.zoom on an array that contains a significant amount of NaN values, and wanted to try to use different orders to see how the results varied, but for orders higher than 1 (linear), the entire zoomed array becomes filled with NaN values.</p> <p>Writing something like this:</p> <pre><code>z_test = np.array([[0, 1, 3], [1, 3, 5], [2, 4, np.nan]]) linear = ndimage.zoom(z_test, zoom = 20, order=1) quadratic = ndimage.zoom(z_test, zoom = 20, order=2) cubic = ndimage.zoom(z_test, zoom = 20, order=3) </code></pre> <p>Yields a result for the linear zoom, but only NaN for the other two. Is there a way to get this method to ignore the NaN values and only interpolate in-between the actual values?</p>
<python><scipy><zooming><interpolation><nan>
2024-03-11 13:12:25
1
423
Marcus K.
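A workaround sketch (an approximation, not an exact interpolation): higher-order splines propagate NaN everywhere because the prefilter touches every sample. Zooming a NaN-free copy alongside a validity mask, then renormalizing and restoring NaN where the mask support is weak, keeps the cubic result away from the hole:

```python
import numpy as np
from scipy import ndimage

z_test = np.array([[0, 1, 3], [1, 3, 5], [2, 4, np.nan]], dtype=float)

# zoom the data with NaNs zeroed, and zoom a validity mask the same way
filled = np.nan_to_num(z_test, nan=0.0)
valid = np.isfinite(z_test).astype(float)

zoomed = ndimage.zoom(filled, zoom=20, order=3)
support = ndimage.zoom(valid, zoom=20, order=3)

# renormalize by the mask weight, and mark low-support pixels as NaN
denom = np.where(support > 0.5, support, 1.0)
result = np.where(support > 0.5, zoomed / denom, np.nan)
```

Values near the original NaN are renormalized estimates rather than true spline values, so the 0.5 support threshold is a tunable trade-off between hole size and edge artifacts.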
78,140,560
6,884,119
AttributeError: 'NoneType' object has no attribute 'split' with Python Selenium in AWS Lambda
<p>I am trying to run a python script using Selenium in my AWS lambda but it's returning the below error:</p> <pre><code> File &quot;/var/lang/lib/python3.11/site-packages/webdriver_manager/core/driver.py&quot;, line 48, in get_driver_version_to_download return self.get_latest_release_version() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/var/lang/lib/python3.11/site-packages/webdriver_manager/drivers/chrome.py&quot;, line 64, in get_latest_release_version determined_browser_version = &quot;.&quot;.join(determined_browser_version.split(&quot;.&quot;)[:3]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'NoneType' object has no attribute 'split' </code></pre> <p>My AWS Lambda is running inside a container image with the following Dockerfile:</p> <pre><code># Build from AWS Lambda ECR image with Python 3.11 FROM public.ecr.aws/lambda/python:3.11 as build RUN yum install -y unzip &amp;&amp; \ curl -Lo &quot;/tmp/chromedriver-linux64.zip&quot; &quot;https://storage.googleapis.com/chrome-for-testing-public/122.0.6261.111/linux64/chromedriver-linux64.zip&quot; &amp;&amp; \ curl -Lo &quot;/tmp/chrome-linux64.zip&quot; &quot;https://storage.googleapis.com/chrome-for-testing-public/122.0.6261.111/linux64/chrome-linux64.zip&quot; &amp;&amp; \ unzip /tmp/chromedriver-linux64.zip -d /opt/ &amp;&amp; \ unzip /tmp/chrome-linux64.zip -d /opt/ FROM public.ecr.aws/lambda/python:3.11 RUN yum install -y atk cups-libs gtk3 libXcomposite alsa-lib \ libXcursor libXdamage libXext libXi libXrandr libXScrnSaver \ libXtst pango at-spi2-atk libXt xorg-x11-server-Xvfb \ xorg-x11-xauth dbus-glib dbus-glib-devel nss mesa-libgbm COPY --from=build /opt/chrome-linux64 /opt/chrome COPY --from=build /opt/chromedriver-linux64 /opt/ # Copy the requirements file COPY requirements.txt ${LAMBDA_TASK_ROOT} # Install the requirements RUN pip install -r requirements.txt # Copy the files to the lambda tasks root COPY . 
${LAMBDA_TASK_ROOT} # Run the handler CMD [&quot;main.main&quot;] </code></pre> <p>This is my requirements.txt:</p> <pre><code>bs4==0.0.2 dynamodb-json==1.3 kink==0.7.0 pynamodb==6.0.0 selenium==4.18.1 webdriver-manager==4.0.1 </code></pre> <p>And this is my python script:</p> <pre><code>import logging from kink import di from datetime import datetime from bs4 import BeautifulSoup from models.core.di import main_injection from models.core.web_driver import WebDriver from models.db.opening import Opening from utils.extractors import retrieve_tag_href, retrieve_tag_text from models.db.opening_dao import ItemDao # Initialize Logging logging.getLogger().setLevel(logging.INFO) @main_injection def main(event, context): # Create a web driver instance web_driver = WebDriver(di[&quot;URL&quot;], di[&quot;DELAY&quot;]) # Log event logging.info(&quot;Web driver has been intialized. Retrieving openings...&quot;) # Load the opening elements opening_elements = web_driver.load_elements(di[&quot;PRINCIPAL_FILTER&quot;]) # Extract the HTML of all openings elements, parse them with BS4 and save to JSON openings = [] # Log event logging.info(&quot;Filtering the openings&quot;) for opening in opening_elements: # outer = position.get_attribute(&quot;outerHTML&quot;) soup = BeautifulSoup(opening.get_attribute(&quot;outerHTML&quot;), &quot;html.parser&quot;) opening_title = retrieve_tag_text(soup, di[&quot;FILTERS_NAME&quot;]) opening_posted_date = retrieve_tag_text(soup, di[&quot;FILTER_POSTED_DATE&quot;]) link = retrieve_tag_href(soup, di[&quot;FILTER_LINK&quot;]) openings.append( Opening( id=link, title=opening_title, posted_date=opening_posted_date, recruiter=di[&quot;RECRUITER&quot;], updated_at=datetime.now().strftime(&quot;%Y-%m-%d&quot;), ) ) if len(openings) &gt; 0: # Log event logging.info( f&quot;{len(openings)} openings obtained from recruiter {di['RECRUITER']}&quot; ) # Create a new ItemDao object item_dao = ItemDao() # Log event logging.info(&quot;Retrieving previous 
opening...&quot;) # Retrieve the previous openings previous_openings = item_dao.get_items_by_recruiter(recruiter=di[&quot;RECRUITER&quot;]) # Log event logging.info( f&quot;{len(previous_openings)} previous openings obtained from recruiter {di['RECRUITER']}&quot; ) # Clear the previous openings from that recruiter item_dao.delete_all(openings=previous_openings) # Log event logging.info(&quot;Previous openings deleted&quot;) # Save the new openings item_dao.save_all(openings=openings) # Log event logging.info(&quot;New openings saved successfully&quot;) # Close the WebDriver web_driver.quit() # Log event logging.info(&quot;Script completed successfully.&quot;) </code></pre> <p><code>models/core/web_driver.py</code></p> <pre><code>from webdriver_manager.chrome import ChromeDriverManager from selenium import webdriver from selenium.webdriver.chrome.service import Service from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC class WebDriver: def __init__(self, url, delay) -&gt; None: # Set up Chrome WebDriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument(&quot;--headless=new&quot;) # Load chrome driver manager service self.chrome = webdriver.Chrome( service=Service(ChromeDriverManager().install()), options=chrome_options ) # Open the desired webpage self.chrome.get(url) # Wait for the &quot;openings&quot; tag to load self.wait = WebDriverWait(self.chrome, delay) def load_elements(self, main_filter): return self.wait.until( EC.presence_of_all_elements_located((By.CLASS_NAME, main_filter)) ) def quit(self): # Quit the chrome driver self.chrome.quit() </code></pre> <p>This works fine locally on my laptop, but if I run it in a Docker container or in my AWS Lambda, I get the above error.</p>
<python><python-3.x><selenium-webdriver><aws-lambda><aws-lambda-containers>
2024-03-11 12:17:51
0
2,243
Mervin Hemaraju
78,140,532
12,063,435
Plot array of figures in Matplotlib
<p>I have a function that creates a figure, and now I would like to call this function repeatedly inside a loop with different parameters, collect the figures, and plot them. This is how I would do it in Julia:</p> <pre><code>using Plots plots = Array{Plots.Plot{Plots.GRBackend},2}(undef, 3,3) for i in 1:3 for j in 1:3 plots[i,j] = scatter(rand(10), rand(10)) title!(plots[i,j], &quot;Plot $(i),$(j)&quot;) end end plot(plots..., layout=(3,3)) </code></pre> <p>However, I have to write Python. So currently I have a function that creates a new figure and returns it. I would be reluctant to change this function's call signature (e.g. to pass some axis object), since it is already used in a different context. This is a minimal working example. For some reason the individual figures are displayed here even though I am not calling <code>plt.display()</code>; in the main code, however, they are not. Here the final figure is empty.</p> <pre><code>import matplotlib.pyplot as plt import numpy as np fig, ax = plt.subplots(3,3) def plottingfunction(x,y): plt.figure() plt.scatter(x, y) return plt.gcf() for i in range(3): for j in range(3): x = np.random.rand(10) y = np.random.rand(10) ax[i,j] = plottingfunction(x,y) plt.show() </code></pre> <p>So how do I plot already existing figures in a grid (for example, collected inside an array) using Python's matplotlib?</p>
<python><matplotlib><julia>
2024-03-11 12:12:25
2
490
TheFibonacciEffect
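Unlike Julia's `plot(plots...)`, matplotlib cannot re-parent finished figures into another figure's grid; assigning a `Figure` into `ax[i,j]` just overwrites the array entry. The idiomatic fix, sketched below, is to let the function accept an optional `Axes` while keeping the old zero-argument-style calls working via a default:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
import numpy as np

def plottingfunction(x, y, ax=None):
    # draw onto the Axes handed in; existing callers that pass
    # nothing still get their own standalone figure
    if ax is None:
        ax = plt.figure().add_subplot()
    ax.scatter(x, y)
    return ax

fig, axes = plt.subplots(3, 3)
rng = np.random.default_rng(0)
for i in range(3):
    for j in range(3):
        plottingfunction(rng.random(10), rng.random(10), ax=axes[i, j])
        axes[i, j].set_title(f"Plot {i},{j}")
```

Because `ax=None` is the default, the change is backward compatible with the other context where the function is already used.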
78,140,395
694,162
Python function compatible with sync and async client
<p>I'm developing a library that should support sync and async users while minimizing code duplication inside the library. The ideal case would be implementing the library async (since the library does remote API calls) and adding support for sync with a wrapper or so. Consider the following function as part of the library.</p> <pre class="lang-py prettyprint-override"><code># Library code async def internal_function(): &quot;&quot;&quot;Internal part of my library doing HTTP call&quot;&quot;&quot; pass </code></pre> <p>My idea was to provide a wrapper that detects if the users of the library use async or not.</p> <pre class="lang-py prettyprint-override"><code># Library code def api_call(): &quot;&quot;&quot;Public api of my library&quot;&quot;&quot; if asyncio.get_event_loop().is_running(): # Async caller return internal_function() # This returns a coroutine, so the caller needs to await it # Sync caller, so we need to run the coroutine in a blocking way return asyncio.run(internal_function()) </code></pre> <p>At first, this seemed to be the solution. With this I can support</p> <ul> <li>users that call the function from an async function,</li> <li>users that call the function from a notebook (that's also async), and</li> <li>users that call the function from a plain python script (here it falls back to the blocking <code>asyncio.run</code>).</li> </ul> <p>However, there are cases when the function is called from within an event loop, but the direct caller is a sync function.</p> <pre class="lang-py prettyprint-override"><code># Code from users of the library async def entrypoint(): &quot;&quot;&quot;This entrypoint is called in an event loop, e.g. 
within fastlib&quot;&quot;&quot; return legacy_sync_code() # Call to sync function def legacy_sync_code(): &quot;&quot;&quot;Legacy code that does not support async&quot;&quot;&quot; # Call to library code from sync function, # expect value not coroutine, could not use await api_response = api_call() return api_response.json() # Needs to be value, not coroutine </code></pre> <p>In the last line, the call of <code>json()</code> fails. <code>api_call()</code> wrongly infers that the caller can await the response, and returns the coroutine.</p> <p>In order to support these kinds of users, I would need to</p> <ul> <li>identify if the direct calling function is sync and expects a value instead of a coroutine,</li> <li>have a way to await the result of <code>internal_function()</code> in a blocking way. Using <code>asyncio.run</code> only works in plain Python scripts and fails if the code was called from an event loop higher up in the stack trace.</li> </ul> <p>The first point could be mitigated if my library provided two functions, e.g., <code>api_call_async()</code> and <code>api_call_sync()</code>.</p> <p>I hope I have made my point clear enough. I don't see why this should be fundamentally impossible; however, I can accept it if the design of Python does not allow me to support sync and async users in a completely transparent way.</p>
<python><asynchronous>
2024-03-11 11:47:29
1
5,188
sauerburger
78,140,362
2,409,793
VSCode not resolving installed python dependencies
<p>I have a <code>python</code> project for which I have activated a <code>virtualenv</code> and installed everything in <code>requirements.txt</code>.</p> <p>This has been done manually through</p> <pre class="lang-bash prettyprint-override"><code>python -m venv /path/to/new/virtual/environment </code></pre> <p>and</p> <pre><code>pip install -r requirements.txt </code></pre> <p>However, VS Code fails to resolve the additional dependencies (and marks the corresponding imported libraries with a squiggly line, as per the screenshot below).</p> <p>Is there something additional that needs to be configured at the VS Code level to make this work?</p> <p><a href="https://i.sstatic.net/pYplh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pYplh.png" alt="enter image description here" /></a></p>
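Pylance resolves imports against the interpreter VS Code has selected, not against whatever venv is active in your terminal. The usual fix is the "Python: Select Interpreter" command (Ctrl+Shift+P) pointed at the venv's interpreter, or the equivalent workspace setting. The path below is an example; substitute your own venv path (on Windows it would be `Scripts\python.exe`).

```json
// .vscode/settings.json -- path is an example, point it at your venv
{
    "python.defaultInterpreterPath": "/path/to/new/virtual/environment/bin/python"
}
```

After changing the interpreter, reloading the window ("Developer: Reload Window") makes Pylance re-index the environment.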
<python><visual-studio-code><virtualenv><pylance>
2024-03-11 11:41:56
2
19,856
pkaramol
78,140,149
9,542,989
Check if Package Installation will Result in Dependency Conflicts in Python
<p>Before installing a package in my Python environment using <code>pip install &lt;some-package&gt;==&lt;version&gt;</code>, is it possible for me to check if doing so will cause a dependency conflict, given what has been installed in the environment already?</p> <p>I am looking for a script I can run before each install. Ideally, it should be able to evaluate any potential conflicts that will be caused by the specific version of the package that I am trying to install.</p>
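Recent pip (22.2 or newer, an assumption about your version) can preview a resolution without installing anything via `pip install --dry-run <some-package>==<version>`, which surfaces resolver conflicts up front. Complementary to that, a sketch like the one below audits the current environment for requirements whose target package is missing entirely, using only the standard library; it is a rough, partial analogue of `pip check` (it skips environment-marker requirements and does not parse version specifiers, which would need a parser such as `packaging`).

```python
from importlib.metadata import distributions

def missing_dependencies():
    """Return (package, requirement) pairs whose required project is not
    installed at all. A minimal sketch: version-specifier conflicts and
    marker evaluation are out of scope here."""
    installed = set()
    for dist in distributions():
        name = dist.metadata["Name"]
        if name:
            installed.add(name.lower().replace("_", "-"))
    problems = []
    for dist in distributions():
        for req in dist.requires or []:
            if ";" in req:
                # skip requirements guarded by markers, e.g. extra == "dev"
                continue
            # crude name extraction: cut at the first specifier character
            req_name = req
            for sep in "<>=!~ ([":
                req_name = req_name.split(sep)[0]
            req_name = req_name.strip().lower().replace("_", "-")
            if req_name and req_name not in installed:
                problems.append((dist.metadata["Name"], req))
    return problems
```

Running `missing_dependencies()` before and after a trial install (or in CI) gives a quick signal that something in the environment is already broken; for the pre-install check itself, the `--dry-run` flag is the more direct tool.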
<python><pip>
2024-03-11 11:05:39
1
2,115
Minura Punchihewa
78,140,123
6,197,439
Display/print column as hex in pandas?
<p>Let's say I have this data as a starting point:</p> <pre class="lang-python prettyprint-override"><code>import pandas as pd data = [ {&quot;colA&quot;: &quot;hello&quot;, &quot;colB&quot;: 22, &quot;colC&quot;: 3.0, &quot;colD&quot;: 123476}, {&quot;colA&quot;: &quot;there&quot;, &quot;colB&quot;: 122, &quot;colC&quot;: 4.0, &quot;colD&quot;: 2384953}, {&quot;colA&quot;: &quot;world&quot;, &quot;colB&quot;: 222, &quot;colC&quot;: 5.0, &quot;colD&quot;: 39506483}, ] df = pd.DataFrame(data) with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.width', None, 'max_colwidth', 20, 'display.float_format', &quot;{:.2f}&quot;.format): print(df) </code></pre> <p>It prints:</p> <pre class="lang-none prettyprint-override"><code> colA colB colC colD 0 hello 22 3.00 123476 1 there 122 4.00 2384953 2 world 222 5.00 39506483 </code></pre> <p>Now, I would like ONLY the integer column B to be printed as hex - more specifically, as &quot;0x{:02X}&quot; string format.</p> <p>If it existed, I might have used <code>display.int_format</code>, but that option does not exist as stated in <a href="https://stackoverflow.com/questions/29663252/can-you-format-pandas-integers-for-display-like-pd-options-display-float-forma">Can you format pandas integers for display, like `pd.options.display.float_format` for floats?</a> ... Then again, such an option would likely not allow me to print <em>ONLY</em> column B in that way.</p> <p>Another option is doing <code>.apply()</code> as hinted in <a href="https://stackoverflow.com/questions/31528340/converting-a-string-of-numbers-to-hex-and-back-to-dec-pandas-python">Converting a string of numbers to hex and back to dec pandas python</a>:</p> <pre class="lang-python prettyprint-override"><code># ... df = pd.DataFrame(data) df[&quot;colB&quot;] = df[&quot;colB&quot;].apply(&quot;0x{:02X}&quot;.format) # ... </code></pre> <p>... 
which then prints what I want:</p> <pre class="lang-none prettyprint-override"><code> colA colB colC colD 0 hello 0x16 3.00 123476 1 there 0x7A 4.00 2384953 2 world 0xDE 5.00 39506483 </code></pre> <p>... however, it <em>also</em> changes the data in my table - and I would like to preserve the original data in the table; I simply want to <em>print</em> some columns as hex.</p> <p>So - is there a way to specify only some certain columns to be printed as hex in pandas, while keeping the original data in the table (and without explicitly copying the dataframe to a new one just for that kind of printing)?</p>
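One way to get print-time-only formatting is `DataFrame.to_string`, which accepts a per-column `formatters` mapping (and a `float_format`), so the rendering can mirror your `option_context` settings without touching the underlying data. A sketch with the sample frame from the question:

```python
import pandas as pd

data = [
    {"colA": "hello", "colB": 22, "colC": 3.0, "colD": 123476},
    {"colA": "there", "colB": 122, "colC": 4.0, "colD": 2384953},
    {"colA": "world", "colB": 222, "colC": 5.0, "colD": 39506483},
]
df = pd.DataFrame(data)

# Format colB as hex only while rendering; df itself stays untouched.
rendered = df.to_string(
    formatters={"colB": "0x{:02X}".format},
    float_format="{:.2f}".format,
)
print(rendered)
```

In a notebook, `df.style.format({"colB": "0x{:02X}".format})` achieves the same display-only effect for HTML output.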
<python><pandas>
2024-03-11 11:02:18
1
5,938
sdbbs
78,140,052
3,676,262
Task Queue with cached libraries in Django
<p>I am creating a WEB interface for various Python scripts through Django.</p> <p>Example in calculation.py I would have :</p> <pre><code>import datetime def add_time(a, b): return = a + b + int(datetime.datetime.now()) </code></pre> <p>Usages :</p> <ul> <li>A user could say &quot;I want to run add from calculation.py with arguments [1, 3]&quot; and that returns him the result when it's ready.</li> <li>A user could say &quot;I want this to run add from calculation.py with arguments [1, 3] every 10 minutes and check if result is greater than X&quot; and that would action something if it's true.</li> </ul> <p>Most of my script functions are quick but they need to import libraries that takes a lot of time to load.</p> <p>I am currently doing this directly with my Django service ; it's simple and load the libraries once which allows most of the next calls to be very fast but sometimes a heavy call is made and that is slowing down all my Django application and I seem limited if I want to have CRON scheduling for some scripts. Therefore I am looking at another solution.</p> <p>I started to look at :</p> <ul> <li>Celery (but it seems not supported anymore on Windows)</li> <li>Huey and Dramatiq</li> <li>Django-Q2 (easy to use with Django)</li> </ul> <p>From my understanding nothing will cache the already imported libraries. (I am fine having a &quot;start up bulk import&quot; if needed). Would anyone have a guess on where I should look or what above solution I can tweak ?</p> <p><em>Requirement : I need this to run on Windows as some of my libraries are Windows dependant.</em></p>
<python><django><python-import><task-queue>
2024-03-11 10:48:56
1
378
BleuBizarre
78,139,918
10,595,871
Flask API doesn't recognize folder input
<p>I'm working with Flask on an app that receives an audio file and transcribes it. As long as I pass a single file everything works fine. I'm now trying to adapt the code in order to pass a folder and start the jobs for every file in it, but for some reason it is not working, as it only starts the job for the first file in the folder and then it stops working.</p> <p>Here's the relevant part of the app's HTML:</p> <pre><code> &lt;h5 class=&quot;mt-5&quot;&gt;Select the source type&lt;/h5&gt; &lt;div class=&quot;row mt-3&quot;&gt; &lt;div class=&quot;col&quot;&gt; &lt;label for=&quot;source_type&quot; class=&quot;form-label&quot;&gt;Select source type:&lt;/label&gt; &lt;select class=&quot;form-select&quot; id=&quot;source_type&quot;&gt; &lt;option value=&quot;file&quot;&gt;File&lt;/option&gt; &lt;option value=&quot;folder&quot;&gt;Folder&lt;/option&gt; &lt;/select&gt; &lt;/div&gt; &lt;/div&gt; &lt;!-- end of file/folder selector --&gt; &lt;!-- Form for loading file or folder --&gt; &lt;form id=&quot;upload_form&quot; method=&quot;post&quot; enctype=&quot;multipart/form-data&quot;&gt; &lt;div class=&quot;row mt-3&quot;&gt; &lt;div class=&quot;col&quot;&gt; &lt;label id=&quot;upload_label&quot; for=&quot;upload_input&quot; class=&quot;form-label&quot;&gt;Upload a file:&lt;/label&gt; &lt;input class=&quot;form-control&quot; type=&quot;file&quot; name=&quot;file&quot; id=&quot;upload_input&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;div class=&quot;row mt-2&quot;&gt; &lt;div class=&quot;col-9&quot;&gt; &lt;input type=&quot;submit&quot; class=&quot;btn btn-outline-primary&quot; value=&quot;Upload&quot;&gt; &lt;/div&gt; &lt;/div&gt; &lt;/form&gt; &lt;!-- end of form --&gt; &lt;!-- some other stuff --&gt; &lt;!-- script JS --&gt; &lt;script&gt; document.getElementById('source_type').addEventListener('change', function() { var selected_value = this.value; var upload_label = document.getElementById('upload_label'); var upload_input = document.getElementById('upload_input'); if
(selected_value === 'folder') { upload_label.innerText = 'Select a folder:'; upload_input.setAttribute('webkitdirectory', ''); // Add the attribute to select a folder upload_input.setAttribute('multiple', 'true'); // Add the multiple attribute } else { upload_label.innerText = 'Upload a file:'; upload_input.removeAttribute('webkitdirectory'); // Remove the folder-selection attribute upload_input.removeAttribute('multiple'); // Remove the multiple attribute } }); &lt;/script&gt; </code></pre> <p>Here's the code that starts the job:</p> <pre><code>@app.route('/transcriber', methods=['GET', 'POST']) def transcriber(): if request.method == 'GET': return render_template('jobs_transcriber.html', queued_jobs=jm.queued_jobs, running_jobs=jm.running_jobs, completed_jobs=jm.completed_jobs, max_running_jobs=jm.max_running_jobs, allowed_files=app.config['ALLOWED_FILES_TRANSCRIBER']) if request.method == 'POST': if 'file' not in request.files and 'folder' not in request.form: flash('Missing file or folder part') return redirect(request.url) # If a folder is provided folder = request.form.get('folder') if folder: print('folder') input_files = list() files_in_folder = os.listdir(folder) for filename in files_in_folder: print(filename) file_path = os.path.join(folder, filename) if os.path.isfile(file_path): if not is_allowed_file(app.config['ALLOWED_FILES_TRANSCRIBER'], filename): flash(f'File (unknown) is not an allowed file type') continue input_files.append(file_path) jm.create_job_transcriber(input_files) return redirect(request.url) # If a file is provided file = request.files['file'] print('ciaoooooooooo') if file.filename == '': flash('Missing file name') return redirect(request.url) if not is_allowed_file(app.config['ALLOWED_FILES_TRANSCRIBER'], file.filename): flash(f'File {file.filename} is not an allowed file type') return redirect(request.url) jm.create_job_transcriber(file) return redirect(request.url) </code></pre> 
<p>The problem is that it never enters the <code>if folder</code> branch, going straight to the file part (in fact it prints 'ciaoooooooooo'), and I don't understand why.</p> <p>If I try to add a <code>print(request.form)</code> at the beginning, the result is <code>ImmutableMultiDict([])</code>.</p>
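The behavior follows from how `webkitdirectory` submits: the browser sends one multipart file part per file, all under the file input's `name` ("file"), and never sends a "folder" form field, which is why `request.form` is empty and the folder branch is unreachable. Reading all parts with `request.files.getlist` handles both the single-file and the folder case. The sketch below keeps the question's route and field names but replaces the job manager with a comment, since `jm` is app-specific:

```python
import io
from flask import Flask, request

app = Flask(__name__)

@app.route("/transcriber", methods=["POST"])
def transcriber():
    # A folder upload arrives as MULTIPLE file parts under the SAME
    # field name; there is no "folder" key in request.form.
    files = [f for f in request.files.getlist("file") if f.filename]
    if not files:
        return {"error": "missing file part"}, 400
    # jm.create_job_transcriber(files)  # hand the whole batch to the job manager
    return {"received": [f.filename for f in files]}
```

A single file selection still works, because `getlist("file")` then simply returns a one-element list.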
<javascript><python><html><flask>
2024-03-11 10:24:50
0
691
Federicofkt
78,139,910
9,751,001
How to identify feature names from indices in a decision tree using scikit-learn’s CountVectorizer?
<p>I have the following data for training a model to detect whether a sentence is about:</p> <ul> <li>a cat or dog</li> <li>NOT about a cat or dog</li> </ul> <p><a href="https://i.sstatic.net/gNnh6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gNnh6.png" alt="screenshot of data consisting of a text column and label column" /></a></p> <p>I ran the following code to train a <code>DecisionTreeClassifier()</code> model then view the tree visualisation:</p> <pre><code>import numpy as np from numpy.random import seed import random as rn import os import pandas as pd seed_num = 1 os.environ['PYTHONHASHSEED'] = '0' np.random.seed(seed_num) rn.seed(seed_num) from sklearn.pipeline import Pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.feature_extraction.text import TfidfTransformer from sklearn.tree import DecisionTreeClassifier from sklearn import tree dummy_train = pd.read_csv('dummy_train.csv') tree_clf = tree.DecisionTreeClassifier() X_train = dummy_train[&quot;text&quot;] y_train = dummy_train[&quot;label&quot;] dt_tree_pipe = Pipeline([('vect', CountVectorizer(ngram_range=(1,1), binary=True)), ('tfidf', TfidfTransformer(use_idf=False)), ('clf', DecisionTreeClassifier(random_state=seed_num, class_weight={0:1, 1:1})), ]) tree_model_fold_1 = dt_tree_pipe.fit(X_train, y_train) tree.plot_tree(dt_tree_pipe[&quot;clf&quot;]) </code></pre> <p>...resulting in the following tree:</p> <p><a href="https://i.sstatic.net/4LBJ4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4LBJ4.png" alt="screenshot of decision tree visualisation" /></a></p> <p>The first node checks if <code>x[7]</code> is less than or equal to <code>0.177</code>. <strong>How do I find out which word <code>x[7]</code> represents?</strong></p> <p>I tried the following code but the words returned in the output (&quot;describing&quot; and &quot;the&quot;) don't look correct. 
I would have thought <code>'cat'</code> and <code>'dog'</code> would be the two words used to split the data into the positive and negative class.</p> <pre><code>vect_from_pipe = dt_tree_pipe[&quot;vect&quot;] words = vect_from_pipe.vocabulary_.keys() print(list(words)[7]) print(list(words)[5]) </code></pre> <p><a href="https://i.sstatic.net/qdCbc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qdCbc.png" alt="screenshot of the words 'describing' and 'the'" /></a></p>
<python><scikit-learn><nlp><decision-tree><countvectorizer>
2024-03-11 10:24:12
1
631
code_to_joy
78,139,855
10,715,700
Why do I get different embeddings when I perform batch encoding in huggingface MT5 model?
<p>I am trying to encode some text using HuggingFace's mt5-base model. I am using the model as shown below:</p> <pre class="lang-py prettyprint-override"><code>from transformers import MT5EncoderModel, AutoTokenizer model = MT5EncoderModel.from_pretrained(&quot;google/mt5-base&quot;) tokenizer = AutoTokenizer.from_pretrained(&quot;google/mt5-base&quot;) def get_t5_embeddings(texts): last_hidden_state = model(input_ids=tokenizer(texts, return_tensors=&quot;pt&quot;, padding=True).input_ids).last_hidden_state pooled_sentence = torch.max(last_hidden_state, dim=1) return pooled_sentence[0].detach().numpy() </code></pre> <p>I was doing some experiments when I noticed that the same text had a low cosine similarity score with itself. I did some digging and realized that the model was returning very different embeddings if I did the encoding in batches. To validate this, I ran a small experiment that generated embeddings for <code>Hello</code> and a list of up to 10 <code>Hello</code>s incrementally, and checked the embeddings of the single <code>Hello</code> against the first <code>Hello</code> in the list (both of which should be the same).</p> <pre class="lang-py prettyprint-override"><code>for i in range(1, 10): print(i, (get_t5_embeddings([&quot;Hello&quot;])[0] == get_t5_embeddings([&quot;Hello&quot;]*i)[0]).sum()) </code></pre> <p>This will return the number of values in the embeddings that match each other. This was the result:</p> <pre><code>1 768 2 768 3 768 4 768 5 768 6 768 7 768 8 27 9 27 </code></pre> <p>Every time I run it, I get mismatches if the batch size is more than 7.</p> <p>Why am I getting different embeddings and how do I fix this?</p>
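A likely explanation (an inference, not something the question's data can fully confirm) is that different batch shapes make the backend pick different matmul kernels and summation orders, and floating-point addition is not associative, so last-bits differences across batch sizes are expected even for identical inputs. The practical fix is to compare embeddings with a tolerance rather than `==`. The non-associativity itself can be shown deterministically with plain NumPy:

```python
import numpy as np

# float32 addition is not associative: regrouping changes the result,
# which is what different batch sizes do inside a model's matmuls.
a, b = np.float32(1e8), np.float32(1.0)
print((a + b) - a)   # 0.0 -- the 1.0 was rounded away
print(b + (a - a))   # 1.0

def embeddings_equal(x, y, atol=1e-4):
    """Tolerance-based comparison suitable for model outputs."""
    return np.allclose(x, y, atol=atol)
```

With a tolerance-based check (or cosine similarity thresholded slightly below 1.0), the batched and unbatched embeddings should agree; the `atol` value here is an assumption to tune for your model.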
<python><machine-learning><pytorch><nlp><huggingface-transformers>
2024-03-11 10:14:53
2
430
BBloggsbott
78,139,574
7,116,108
FastAPI: return an appropriate error message and status for unhandled exceptions
<p>I would like my FastAPI app to always send detailed error messages in the response, even for unhandled exceptions.</p> <p>I have looked at the docs for this (<a href="https://fastapi.tiangolo.com/tutorial/handling-errors/" rel="nofollow noreferrer">https://fastapi.tiangolo.com/tutorial/handling-errors/</a>) but I'm struggling to make it work for my use case.</p> <p>In the examples below, endpoints /a and /b return the correct status code and error message to the client. /a raises an HTTPException directly, and /b raises it via the <code>specific_error_handler</code> function.</p> <p>However, /c does not work as I expect. I would expect that the TypeError raised by /c would trigger the HTTPException raised by <code>generic_error_handler</code>. Instead, I get <code>Internal Server Error</code> with a status code of 500 in the client.</p> <pre><code>from fastapi import FastAPI, HTTPException app = FastAPI() @app.get(&quot;/a&quot;) async def endpoint_a(): raise HTTPException(status_code=501, detail=&quot;error occurred for endpoint a&quot;) @app.get(&quot;/b&quot;) async def endpoint_b(): 1 / 0 # unhandled ZeroDivisionError @app.get(&quot;/c&quot;) async def endpoint_c(): 1 + &quot;1&quot; # unhandled TypeError @app.exception_handler(ZeroDivisionError) async def specific_error_handler(request, exc): raise HTTPException(status_code=501, detail=f&quot;ZeroDivisionError occurred: {exc}&quot;) @app.exception_handler(Exception) async def generic_error_handler(request, exc): raise HTTPException(status_code=501, detail=f&quot;unhandled Exception occurred: {exc}&quot;) </code></pre> <p>I can get around this by wrapping the code in /c in a try/except block, but this seems unpythonic. I don't want to have to wrap every single endpoint this way, I would rather have unhandled errors dealt with by a custom exception handler. How can I do this?</p>
<python><fastapi>
2024-03-11 09:27:08
0
1,587
Dan
78,139,177
1,125,062
Backprop two networks with different loss without retain_graph=True?
<p>I have two networks in sequence that perform an expensive computation.</p> <p>The loss objective for both is the same, except for the second network's loss I want to apply a mask.</p> <p>How to achieve this without using retain_graph=True?</p> <pre><code># tenc - network1 # unet - network2 # the work flow is input-&gt;tenc-&gt;hidden_state-&gt;unet-&gt;output params = [] params.append([{'params': tenc.parameters(), 'weight_decay': 1e-3, 'lr': 1e-07}]) params.append([{'params': unet.parameters(), 'weight_decay': 1e-2, 'lr': 1e-06}]) optimizer = torch.optim.AdamW(itertools.chain(*params), lr=1, betas=(0.9, 0.99), eps=1e-07, fused = True, foreach=False) scheduler = custom_scheduler(optimizer=optimizer, warmup_steps= 30, exponent= 5, random=False) scaler = torch.cuda.amp.GradScaler() loss = torch.nn.functional.mse_loss(model_pred, target, reduction='none') loss_tenc = loss.mean() loss_unet = (loss * mask).mean() scaler.scale(loss_tenc).backward(retain_graph=True) scaler.scale(loss_unet).backward() scaler.unscale_(optimizer) scaler.step(optimizer) scaler.update() scheduler.step() optimizer.zero_grad(set_to_none=True) </code></pre> <p>The loss_tenc should only optimize tenc parameters, and the loss_unet only unet. I may have to use two different optimizers if necessary, but I grouped them into one here for simplicity.</p>
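When two distinct scalar losses share one forward graph, keeping the graph alive between the two backward passes is generally unavoidable; the usual worry about `retain_graph=True` (memory held indefinitely) does not apply here, because the graph is freed right after the second backward. What `backward(inputs=...)` adds is the gradient routing you want: each loss accumulates only into its own parameter group, so `loss_tenc` no longer leaks gradients into `unet` and a single optimizer still works. A minimal sketch with tiny `nn.Linear` stand-ins for the two networks (AMP/GradScaler omitted for brevity):

```python
import torch
from torch import nn

tenc = nn.Linear(4, 4)   # stand-in for network1
unet = nn.Linear(4, 4)   # stand-in for network2

x = torch.randn(8, 4)
target = torch.randn(8, 4)
mask = (torch.rand(8, 4) > 0.5).float()

hidden = tenc(x)
pred = unet(hidden)
loss = nn.functional.mse_loss(pred, target, reduction="none")
loss_tenc = loss.mean()
loss_unet = (loss * mask).mean()

# Route each scalar into its own parameter group only. The first backward
# must keep the shared graph alive for the second; it is freed right after.
loss_tenc.backward(inputs=list(tenc.parameters()), retain_graph=True)
loss_unet.backward(inputs=list(unet.parameters()))
```

Truly avoiding `retain_graph` would require a second forward pass (e.g. running `unet` once on `hidden` and once on `hidden.detach()`), which costs more compute than retaining the graph does memory.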
<python><pytorch><autograd>
2024-03-11 08:09:23
2
4,641
Anonymous
78,139,097
447,537
How do I get column names of a model created in PyCaret
<p>I am trying to create a PyCaret model:</p> <pre><code># load dataset from pycaret.datasets import get_data insurance = get_data('insurance') # init environment from pycaret.regression import * r1 = setup(insurance, target = 'charges', session_id = 123, normalize = True, polynomial_features = True, bin_numeric_features= ['age', 'bmi']) </code></pre> <p>I see 55 columns are now created. How do I get the column names of the PyCaret model?</p>
<python><machine-learning><regression><pycaret>
2024-03-11 07:54:33
1
3,510
Ajay Ohri
78,139,065
14,425,501
How to use batch prediction in Diffusers.StableDiffusionXLImg2ImgPipeline library
<p>I'm currently exploring the StableDiffusion Image to Image library within HuggingFace. My goal is to generate images similar to the ones I have stored in a folder. Currently, I'm using the following code snippet:</p> <pre><code>import torch from diffusers.utils import load_image from diffusers import StableDiffusionXLImg2ImgPipeline pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained( &quot;stabilityai/stable-diffusion-xl-refiner-1.0&quot;, torch_dtype=torch.float16, variant=&quot;fp16&quot;, use_safetensors=True ) pipe = pipe.to(&quot;cuda&quot;) url = &quot;MyImages\ImageList\998.jpg&quot; init_image = load_image(url).convert(&quot;RGB&quot;) prompt = &quot;Give me a similar image like this&quot; image = pipe(prompt, image=init_image).images </code></pre> <p>this code requires me to generate each image manually, one by one. I can write a for loop like this -</p> <pre><code>all_images = os.listdir('MyImages\ImageList\') for img in all_images: ... ... </code></pre> <p>I'm considering the possibility of processing images in batches rather than individually. Is there a method or a library within HuggingFace that allows for batch processing of images to generate similar ones?</p>
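Recent diffusers pipelines generally accept a list of prompts together with a matching list of input images, so one call can process a whole chunk; whether that holds for your exact version is worth confirming in its docs. The chunking itself is plain Python; the pipeline usage in the comment is a hypothetical adaptation of the question's code, not a tested call:

```python
def batched(items, batch_size):
    """Yield successive batch_size-sized chunks from items."""
    for i in range(0, len(items), batch_size):
        yield items[i : i + batch_size]

# Hypothetical use with the pipeline from the question:
# for chunk in batched(sorted(os.listdir(folder)), 4):
#     inits = [load_image(os.path.join(folder, p)).convert("RGB") for p in chunk]
#     results = pipe([prompt] * len(inits), image=inits).images
```

Batch size is then bounded by GPU memory rather than by the loop; start small (e.g. 2 to 4 images) and increase until memory runs out.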
<python><pytorch><huggingface><stable-diffusion><diffusers>
2024-03-11 07:48:46
0
1,933
Adarsh Wase
78,138,546
1,676,448
How to stream tqdm text content to PySide2 GUI?
<p>I am using jMetalPy. It has a &quot;ProgressBarObserver&quot; that uses tqdm to show calculation progress:</p> <p><a href="https://github.com/jMetal/jMetalPy/blob/main/jmetal/util/observer.py" rel="nofollow noreferrer">https://github.com/jMetal/jMetalPy/blob/main/jmetal/util/observer.py</a></p> <p>I have a PySide2-based GUI that prints console content in a GUI, and to my surprise, everything but the tqdm content showed in the GUI.</p> <pre><code> class EmittingStream(QtCore.QObject): textWritten = Signal(str) def write(self, text): self.textWritten.emit(text) def flush(self): pass </code></pre> <p>.....</p> <pre><code># Console print self.textBrowser = self.ui.textBrowser_log # Create a custom stream object for the QTextBrowser. self.textBrowserStream = EmittingStream() self.textBrowserStream.textWritten.connect(self.textBrowser.append) # Redirect output to the QTextBrowser, but not the output from pandas. sys.stdout = self.textBrowserStream pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.expand_frame_repr', False) </code></pre> <p>I am using PyCharm; <strong>the console prints in white but the tqdm content in red, so clearly this text is different and probably needs to be handled differently. How do I get it included in my text stream output?</strong> For further clarity: it prints fine in PyCharm, but the PySide2 GUI only shows the normal white-text print statements. Thank you.</p>
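The red text is the giveaway: tqdm writes its bars to `sys.stderr` by default, and only `sys.stdout` was redirected, so the bars bypass the GUI entirely. Redirecting `sys.stderr` to the same stream picks them up. A stdlib-only sketch of the idea (the Qt pieces are replaced with a plain buffer so it runs anywhere):

```python
import sys

class EmittingStream:
    """Minimal stand-in for the Qt EmittingStream in the question."""
    def __init__(self):
        self.chunks = []
    def write(self, text):
        self.chunks.append(text)
    def flush(self):
        pass

stream = EmittingStream()
sys.stdout = stream
sys.stderr = stream          # tqdm's default target -- this was the missing piece
print("normal log line")
print("47%|####      | progress", file=sys.stderr)  # roughly what tqdm emits
sys.stdout, sys.stderr = sys.__stdout__, sys.__stderr__
captured = "".join(stream.chunks)
```

In the real GUI that means adding `sys.stderr = self.textBrowserStream` next to the stdout line. One caveat: tqdm refreshes with carriage returns (`\r`), so the `append` slot will stack one line per refresh unless you handle `\r` (e.g. overwrite the last line) in the slot.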
<python><user-interface><pyside2><tqdm><jmetalpy>
2024-03-11 05:22:26
1
307
idroid8
78,138,342
3,121,975
Sign hashed data with a PKCS12 certificate
<p>I need to send requests to a SOAP API, and as part of that process, I need to hash my XML request using SHA256 and then sign it with an RSA key (which I have in the form of a PCKS12 certificate). I've tried using the <code>Crypto</code> library like this:</p> <pre class="lang-py prettyprint-override"><code>from Crypto.Hash import SHA256 from Crypto.PublicKey import RSA from Crypto.Signature import pkcs1_15 with open(&quot;/my/cert/file.p12&quot;, &quot;rb&quot;) as f: cert_data = f.read() private_key = RSA.import_key(cert_data, passphrase=&quot;some_passphrase&quot;) signer = pkcs1_15.new(private_key) hash = SHA256.new(xml_data) signature = signer.sign(hashed) </code></pre> <p>However, when I try testing this code, I always get the following error from the Crypto library:</p> <pre><code>security/crypto.py:25: in __init__ private_key = RSA.import_key(cert.certificate, passphrase=cert.passphrase) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/PublicKey/RSA.py:851: in import_key return _import_keyDER(extern_key, passphrase) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/PublicKey/RSA.py:746: in _import_keyDER return decoding(extern_key, passphrase) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/PublicKey/RSA.py:697: in _import_pkcs1_private der = DerSequence().decode(encoded, nr_elements=9, only_ints_expected=True) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/Util/asn1.py:610: in decode result = DerObject.decode(self, der_encoded, strict=strict) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/Util/asn1.py:228: in decode self._decodeFromStream(s, strict) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/Util/asn1.py:623: in _decodeFromStream DerObject._decodeFromStream(self, s, strict) /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/Util/asn1.py:245: in _decodeFromStream length = self._decodeLen(s) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = 
&lt;Crypto.Util.asn1.DerSequence object at 0x7f03288f97d0&gt; s = &lt;Crypto.Util.asn1.BytesIO_EOF object at 0x7f0328890310&gt; def _decodeLen(self, s): &quot;&quot;&quot;Decode DER length octets from a file.&quot;&quot;&quot; length = s.read_byte() if length &gt; 127: encoded_length = s.read(length &amp; 0x7F) &gt; if bord(encoded_length[0]) == 0: E IndexError: index out of range /Yv1Cv_rJ-py3.11/lib/python3.11/site-packages/Crypto/Util/asn1.py:205: IndexError </code></pre> <p>It's clear that I'm not importing this certificate correctly but I'm not sure how, because I can successfully import it with the <code>cryptography</code> library:</p> <pre class="lang-py prettyprint-override"><code> from cryptography.hazmat.primitives.serialization import pkcs12 with open(&quot;/my/cert/file.p12&quot;, &quot;rb&quot;) as f: private_key, certificate, additional = pkcs12.load_key_and_certificates( f.read(), b&quot;some_passphrase&quot;, ) </code></pre> <p>I notice that the <code>private_key</code> here also has a <code>sign</code> method, but calling it is more complicated so I'd rather avoid it if possible.</p> <p>TL;DR: What's the &quot;proper&quot; way to use a PKCS12 key to produce a signature from a data hash in Python?</p>
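The ASN.1 `IndexError` comes from pycryptodome's `RSA.import_key` trying to parse the whole PKCS#12 container as a bare PEM/DER key, which it cannot do. Since `cryptography` already loads the `.p12` successfully, its `sign()` call is actually the short path: it hashes and signs in one step. The sketch below uses a freshly generated key as a stand-in for the one returned by `load_key_and_certificates`; PKCS#1 v1.5 with SHA-256 is the common SOAP/WS-Security combination, but confirm it matches what your API expects.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the key returned by pkcs12.load_key_and_certificates(...)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

xml_data = b"<soap:Envelope>...</soap:Envelope>"

# SHA-256 hashing + PKCS#1 v1.5 signing in a single call.
signature = private_key.sign(xml_data, padding.PKCS1v15(), hashes.SHA256())

# Round-trip check with the matching public key (raises on mismatch):
private_key.public_key().verify(signature, xml_data, padding.PKCS1v15(), hashes.SHA256())
```

If you only have the digest (not the original data), `cryptography` also supports signing a precomputed hash via `utils.Prehashed(hashes.SHA256())` as the algorithm argument.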
<python><cryptography><rsa><digital-signature>
2024-03-11 03:55:54
0
8,192
Woody1193
78,138,184
7,335,214
Unable to downgrade numpy for compatibility with librosa 0.8.1
<p>I need to use <code>librosa 0.8.1</code> for compatibility but cannot seem to downgrade it; or rather, I can downgrade it, but an incompatibility with the latest version of <code>numpy</code> followed by an inability to downgrade <code>numpy</code> is preventing me from using it.</p> <p>I first used <code>pip install librosa==0.8.1</code> to get the version of <code>librosa</code> I needed. However, when trying to import this into my project, I'm getting a <code>numpy</code> package error.</p> <pre><code>AttributeError: module 'numpy' has no attribute 'complex'. `np.complex` was a deprecated alias for the builtin `complex`. To avoid this error in existing code, use `complex` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. The aliases was originally deprecated in NumPy 1.20. </code></pre> <p>It seems <code>librosa 0.8.1</code> is incompatible with <code>numpy 1.26.4</code>. But, this is the version of <code>numpy</code> that was installed automatically when I ran <code>pip install librosa==0.8.1</code>. It seems this issue is <a href="https://github.com/librosa/librosa/issues/1350" rel="nofollow noreferrer">fixed</a> in <code>librosa 0.9.x</code> but I need the earlier version.</p> <p>I tried <code>pip uninstall numpy</code> followed by <code>pip install numpy=1.19.2</code> but this failed with the error,</p> <pre><code>ModuleNotFoundError: No module named 'setuptools.extern.six' </code></pre> <p>I tried <code>pip install --upgrade numpy==1.19.2</code> and <code>pip install --force-reinstall numpy==1.19.2</code>. I tried using <code>pip3</code> and installing a different version <code>numpy 1.19.5</code>. All had the same error. I have the latest version of <code>setuptools</code> installed.</p> <p>I tried <code>python -m pip</code> even though I only have one version of python (<code>python 3.12.2 x64</code>) on my machine. 
This failed too.</p> <p>I am not sure whether the fix lies with <code>setuptools</code> or whether there is an alternative to downgrading <code>numpy</code>. I do not mind if I cannot downgrade <code>numpy</code>; I just want to be able to use <code>librosa 0.8.1</code> without incompatibility errors.</p> <p>Any help would be greatly appreciated!</p>
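The underlying constraint is that NumPy 1.19 predates Python 3.12 and cannot be built there (the `setuptools.extern.six` failure comes from its old build tooling), so no amount of pip flags will install it into a 3.12 environment. The clean fix is an environment whose Python matches librosa 0.8.1's era (e.g. a 3.8/3.9 venv or conda env). If you must stay on 3.12, a shim that restores the removed aliases before importing librosa can work; this is a workaround, and it assumes the deprecated scalar aliases are the only removed NumPy API that librosa 0.8.1 touches, which may not hold for every code path.

```python
import numpy as np

# Restore aliases removed in NumPy >= 1.24 BEFORE importing librosa.
# Workaround only: the clean fix is an older-Python environment whose
# NumPy matches librosa 0.8.1's requirements.
_removed_aliases = {"complex": complex, "float": float, "int": int,
                    "bool": bool, "object": object}
for name, alias in _removed_aliases.items():
    if not hasattr(np, name):
        setattr(np, name, alias)

# import librosa  # should now import without the np.complex AttributeError
```

Run the shim once at program start (before any `import librosa`); on older NumPy versions where the aliases still exist, the `hasattr` guard makes it a no-op.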
<python><numpy><pip><librosa>
2024-03-11 02:43:24
2
668
Dawn
78,138,125
11,037,602
How to see python logging output from child docker container (container started by another container)?
<p>I have an application that uses docker compose to start 3 containers. The logs for those 3 containers are outputted to <code>stdout</code> as expected. One of these containers is responsible for starting other containers, as they are needed. It uses <a href="https://docker-py.readthedocs.io/en/stable/" rel="nofollow noreferrer">docker-py</a> to do that.</p> <p>Those <em>secondary</em> containers are ephemeral, they run until the task is done and terminate. However I can't see their logging output.</p> <ul> <li>When I started, the <code>docker-compose</code> containers also weren't outputting logging to <code>stdout</code>, I had to use the following configs both in <code>Dockerfile</code> and the code:</li> </ul> <pre class="lang-py prettyprint-override"><code>self.logger = logging.getLogger(__name__) level = logging.INFO stream_handler = logging.StreamHandler(sys.stdout) self.logger.setLevel(level) formatter = logging.Formatter(&quot;%(asctime)s - %(name)s [%(levelname)s]: %(message)s&quot;) stream_handler.setFormatter(formatter) self.logger.addHandler(stream_handler) </code></pre> <p>Dockerfile</p> <pre><code>ENV PYTHONUNBUFFERED=1 </code></pre> <p>Now, I've tried to add the same settings in the code and <code>Dockerfile</code> for my <em>secondary</em> containers, this time though, it made no difference. It's possible that I may need to pass further instructions to <code>docker-py</code> when running the containers.</p> <pre class="lang-py prettyprint-override"><code>network = self.docker_client.networks.get(&quot;service-daemon_default&quot;) _ = self.docker_client.containers.run( image=&quot;default_image&quot;, detach=True, environment={ &quot;VENDOR_NAME&quot;: vendor, **self.settings.get(&quot;environment&quot;), # These are just REDIS configs, such as host, port , etc }, name=vendor.lower(), network=network.name, auto_remove=self.settings.get(&quot;auto_destroy_containers&quot;, True), ) </code></pre>
<python><docker><docker-compose><python-logging><dockerpy>
2024-03-11 02:14:40
0
2,081
Justcurious
78,138,116
2,304,735
How can I fix this Python multiprocessing error?
<p>I am starting to learn about python multiprocessing. I am starting with this simple code. I am using Anaconda and the Spyder IDE to implement this code in. I am using Windows 10 Pro Operating System.</p> <pre><code>import time import multiprocessing start = time.perf_counter() def do_something(): print('Sleeping for 1 second ...') time.sleep(1) print('Done Sleeping ...') p1 = multiprocessing.Process(target=do_something) p2 = multiprocessing.Process(target=do_something) p1.start() p2.start() p1.join() p2.join() finish = time.perf_counter() print(f'Finished in {round(finish-start, 2)} in seconds') </code></pre> <p>When I run the above code I get the following error message:</p> <pre><code>Finished in 0.15 in seconds Traceback (most recent call last): Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 122, in spawn_main File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 122, in spawn_main exitcode = _main(fd, parent_sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 132, in _main exitcode = _main(fd, parent_sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 132, in _main self = reduction.pickle.load(from_parent) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: Can't get attribute 'do_something' on &lt;module '__main__' (built-in)&gt; self = reduction.pickle.load(from_parent) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: Can't get attribute 'do_something' on &lt;module '__main__' (built-in)&gt; Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 122, in spawn_main exitcode = _main(fd, parent_sentinel) 
^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 132, in _main Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 122, in spawn_main self = reduction.pickle.load(from_parent) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: Can't get attribute 'do_something' on &lt;module '__main__' (built-in)&gt; exitcode = _main(fd, parent_sentinel) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;D:\Users\Mahmoud\anaconda3\Lib\multiprocessing\spawn.py&quot;, line 132, in _main self = reduction.pickle.load(from_parent) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: Can't get attribute 'do_something' on &lt;module '__main__' (built-in)&gt; </code></pre> <p>I can see that is printed the last print statement but then it gives me the above error. Why do I get this error? What do I need to do to the code to fix this error?</p>
<python><python-3.x><python-multiprocessing>
2024-03-11 02:10:27
1
515
Mahmoud Abdel-Rahman
78,138,063
9,951,273
Dockerfile runs locally but fails on Google Cloud Run
<p>I have a Dockerfile, shown below:</p> <pre><code>FROM --platform=linux/amd64 prefecthq/prefect:2-python3.12 RUN apt-get update &amp;&amp; apt-get -y install libpq-dev gcc RUN pip install --upgrade pip RUN pip install pipenv ENV PREFECT_API_URL=&quot;foo&quot; ENV PREFECT_API_KEY=&quot;bar&quot; WORKDIR /app COPY Pipfile Pipfile COPY Pipfile.lock Pipfile.lock RUN pipenv install --deploy --ignore-pipfile COPY . . CMD [&quot;pipenv&quot;, &quot;run&quot;, &quot;python&quot;, &quot;src/cdk_flow/cdk_delta_flow.py&quot;] </code></pre> <p>This file builds and runs successfully locally, but when pushing it to <a href="https://cloud.google.com/run" rel="nofollow noreferrer">Google Cloud Run</a> (using <a href="https://docs.prefect.io/latest/" rel="nofollow noreferrer">Prefect</a> to push it), it fails to run with the following error.</p> <pre><code>Traceback (most recent call last): File &quot;&lt;frozen importlib._bootstrap_external&gt;&quot;, line 995, in exec_module File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 488, in _call_with_frames_removed File &quot;/app/src/cdk_flow/cdk_delta_flow.py&quot;, line 7, in &lt;module&gt; from src.cdk_flow.cdk_common_tasks import ( File &quot;/app/src/cdk_flow/cdk_common_tasks.py&quot;, line 3, in &lt;module&gt; import pandas as pd ModuleNotFoundError: No module named 'pandas' </code></pre> <p>Why would the Dockerfile run locally but fail to find a module on Google Cloud Run</p>
<python><docker><google-cloud-run><prefect>
2024-03-11 01:45:38
1
1,777
Matt
78,138,032
7,160,815
What is the difference between skimage view_as_windows and sklearn extract_patches_2d for extracting an array of patches from an image?
<p>I have an array of 2D images; an array of [n x w x h], n being the number of images. I want to extract patches from each 2D image. A 2D image would not be divided equally by the patch size (meaning I should use padding). I do not want overlapping patches. I came across two functions in skimage and sklearn for this: <a href="https://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_windows" rel="nofollow noreferrer">view_as_window()</a> and <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.image.extract_patches_2d.html" rel="nofollow noreferrer">extract_patches_2d()</a>. My questions are,</p> <ol> <li>Why there are two methods?</li> <li>Which one suits my use case the most?</li> </ol>
<python><image-processing><scikit-image>
2024-03-11 01:25:05
1
344
Savindi
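For the non-overlapping use case described here, `view_as_windows` with `step` equal to the patch size is the closer fit: `extract_patches_2d` produces all overlapping patches (or a random subset), and neither function pads for you. The pad-then-split step itself can be sketched in plain NumPy — a rough illustration, not either library's implementation:

```python
import numpy as np

def to_patches(img, p):
    """Pad a 2D image so each side is divisible by p, then split it
    into non-overlapping p x p patches (equivalent in spirit to
    skimage's view_as_windows(img, (p, p), step=p) after padding)."""
    h, w = img.shape
    pad_h, pad_w = (-h) % p, (-w) % p          # amount needed to reach a multiple of p
    img = np.pad(img, ((0, pad_h), (0, pad_w)))
    H, W = img.shape
    # split rows into (H//p, p) and cols into (W//p, p), then group the blocks
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p, p)

patches = to_patches(np.arange(35).reshape(5, 7), 2)  # 5x7 image, 2x2 patches
```

For an [n x w x h] stack, the same function can be applied per image, or `view_as_windows` can be given a 3D window of shape `(1, p, p)` with matching step.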
78,137,961
1,079,065
Python: read a CSV file and map it to an object directly
<p>I am trying to parse some data for my project. I have following structure:</p> <pre><code>class Subject: #listing the fields only subject_name grade prof_name details (few more ) class Student: #listing the fields only name subjects = [] --------&gt; imp </code></pre> <p>I am going to get a comma separated csv file as input to the program which is going to look like</p> <pre><code>subject_name,grade, details, prof_name...(5-6 fields) maths,A,maths-101,abc science,C,science-101,def ... </code></pre> <p>Is there a tool in Python that will help me map this file directly on the subject class and create a list(array) and return? Something like:</p> <pre><code>Subjects subjects = readfile(subjects.txt) </code></pre> <p>If not then can something return me a JSON / dict that I can map directly to subjects? OR is there a completely different yet better way of doing this? (I cannot use DB need to stick with csv for assignment)</p> <p>any help is appreciated?</p>
<python><arrays><python-3.x><list><dictionary>
2024-03-11 00:47:19
0
2,245
user1079065
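No special tool is needed here: `csv.DictReader` plus a dataclass gets close to the one-liner asked for, since each row arrives as a dict keyed by the header line. A minimal sketch (field names guessed from the class listing in the question):

```python
import csv
import io
from dataclasses import dataclass

@dataclass
class Subject:
    subject_name: str
    grade: str
    details: str
    prof_name: str

def read_subjects(fileobj):
    # each CSV row is a dict keyed by the header, which maps
    # straight onto the dataclass fields via **row
    return [Subject(**row) for row in csv.DictReader(fileobj)]

# StringIO stands in for an open file so the sketch is self-contained
sample = io.StringIO(
    "subject_name,grade,details,prof_name\n"
    "maths,A,maths-101,abc\n"
    "science,C,science-101,def\n"
)
subjects = read_subjects(sample)
```

With a real file it would be `with open('subjects.txt') as f: subjects = read_subjects(f)`; the resulting list can then be attached to a `Student` instance's `subjects` attribute.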
78,137,875
8,484,261
Add a column in a dataframe which takes the value of column X if it is not blank, else the value of column Y
<p>I have a dataframe with a column with data separated by delimiter <code>&quot;\&quot;</code>. There could be 1 or 2 delimiters and it varies by row. I was able to split the column into multiple columns as follows:</p> <pre><code>df[['Desc1','Desc2','Desc3']] = df['Descr'].str.split(&quot;\\&quot;, expand=True) </code></pre> <p>Now some of the rows in <code>df['Desc3']</code> have values and some in <code>df['Desc1']</code>. I want to add a column which will take the value in the <code>Desc3</code> column if it has a value, and if not then take the value in column <code>Desc1</code>.</p> <p>I tried the following code</p> <pre><code>def fn_name(row): if row['Desc3'] == '': return row['Desc1'] else: return row['Desc3'] df['name'] = df.apply(fn_name,axis=1) </code></pre> <p>However, this does not work. It returns the <code>df['name']</code> column with a value when <code>row['Desc3']</code> is not blank, but it does not return it with the value in <code>row['Desc1']</code> when <code>row['Desc3']</code> is blank.</p> <ol> <li>How do I fix this?</li> <li>is there a better way to do this rather than using apply function?</li> </ol>
<python><pandas>
2024-03-10 23:59:47
1
3,700
Alhpa Delta
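A likely cause of the `apply` version failing: `str.split(expand=True)` fills short rows with `None`/`NaN`, not `''`, so the `row['Desc3'] == ''` test is never true. A vectorized alternative that treats both empty strings and missing values as blank — sketched on made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Desc1': ['a', 'b', 'c'],
                   'Desc3': ['x', '', None]})

# normalize '' to NaN, then fall back to Desc1 wherever Desc3 is missing
df['name'] = df['Desc3'].replace('', np.nan).fillna(df['Desc1'])
```

This avoids the row-wise `apply` entirely; `fillna` with a Series aligns on the index, so each missing `Desc3` takes the `Desc1` value from the same row.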
78,137,739
1,960,266
Plotting gradient descent curves using Keras
<p>I have implemented the following code in Keras that uses the California housing dataset in an attempt to plot the values of theta 1 and theta 2, and how the chose of stochastic gradient descent, batch gradient or mini batch influences in the results:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Dense from keras.optimizers import SGD from sklearn.datasets import fetch_california_housing from sklearn.preprocessing import StandardScaler # Load California housing dataset california_housing = fetch_california_housing() X, y = california_housing.data, california_housing.target # Normalize features scaler = StandardScaler() X_normalized = scaler.fit_transform(X) def build_model(): model = Sequential([ Dense(64, activation=&quot;relu&quot;,input_shape=(8,)), Dense(64, activation=&quot;relu&quot;), Dense(1) ]) #model.compile(optimizer=&quot;rmsprop&quot;, loss=&quot;mse&quot;, metrics=[&quot;mae&quot;]) return model sgd_optimizer = SGD(lr=0.001) # Stochastic Gradient Descent minibatch_sgd_optimizer = SGD(lr=0.001) # Mini-batch Gradient Descent batch_sgd_optimizer = SGD(lr=0.001) # Batch Gradient Descent # Compile the model model=build_model() model.compile(loss='mse', optimizer=sgd_optimizer) theta1_sgd, theta2_sgd = [], [] theta1_minibatch_sgd, theta2_minibatch_sgd = [], [] theta1_batch_sgd, theta2_batch_sgd = [], [] # Function to perform gradient descent and store theta values def perform_gradient_descent(optimizer, batch_size=None): theta1_list, theta2_list = [], [] loss_history = [] for _ in range(5): # Number of epochs history=model.fit(X_normalized, y, epochs=1, batch_size=batch_size, verbose=0) loss_history.append(history.history['loss'][0]) weights = model.layers[0].get_weights()[0].flatten() # Get current theta values theta1_list.append(weights[0]) theta2_list.append(weights[1]) print(theta1_list,&quot; &quot;,theta2_list) return loss_history,theta1_list, theta2_list # Perform 
gradient descent with different optimizers loss_sgd, theta1_sgd, theta2_sgd = perform_gradient_descent(sgd_optimizer, batch_size=1) # Stochastic Gradient Descent loss_minibatch_sgd, theta1_minibatch_sgd, theta2_minibatch_sgd = perform_gradient_descent(minibatch_sgd_optimizer, batch_size=32) # Mini-batch Gradient Descent loss_batch_sgd, theta1_batch_sgd, theta2_batch_sgd = perform_gradient_descent(batch_sgd_optimizer, batch_size=len(X_normalized)) # Batch Gradient Descent # Plotting the loss versus number of epochs plt.figure(figsize=(10, 6)) plt.plot(range(1, len(loss_sgd) + 1), loss_sgd, label='Stochastic Gradient Descent') plt.plot(range(1, len(loss_minibatch_sgd) + 1), loss_minibatch_sgd, label='Mini-batch Gradient Descent') plt.plot(range(1, len(loss_batch_sgd) + 1), loss_batch_sgd, label='Batch Gradient Descent') plt.xlabel('Number of Epochs') plt.ylabel('Loss') plt.title('Loss vs. Number of Epochs') plt.legend() plt.grid(True) plt.show() # Plotting the gradient descent trajectories plt.figure(figsize=(10, 6)) plt.plot(theta1_sgd, theta2_sgd, label='Stochastic Gradient Descent', marker='o') plt.plot(theta1_minibatch_sgd, theta2_minibatch_sgd, label='Mini-batch Gradient Descent', marker='s') plt.plot(theta1_batch_sgd, theta2_batch_sgd, label='Batch Gradient Descent', marker='x') plt.xlabel('Theta 1') plt.ylabel('Theta 2') plt.title('Gradient Descent Trajectories') #plt.xlim(-0.08, -0.05) # Set limit for Theta 1 #plt.ylim(0.02, 0.03) # Set limit for Theta 2 plt.legend() plt.grid(True) plt.show() </code></pre> <p>However, the problem I found is that sometimes the values of the list that hold the theta values are Nan and in other cases are normal values. I have noticed this when the number of epochs increase above 10, why is that?</p>
<python><machine-learning><keras>
2024-03-10 22:46:54
1
3,477
Little
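Regarding the NaNs: with plain SGD and un-normalized targets, a learning rate that is even slightly too large makes the weights grow geometrically until they overflow to `inf`, after which an update of the form `inf - inf` yields `nan` and poisons every later step — and more epochs means more chances to hit that point. A toy demonstration of the mechanism (not Keras itself) on f(w) = w², where any step size above the stability limit diverges:

```python
import numpy as np

# Gradient step: w <- w - lr * f'(w) with f(w) = w**2, i.e. w <- (1 - 2*lr) * w.
# With lr = 1.5, the multiplier is -2, so |w| doubles every step until float32
# overflows to inf; the next update then computes inf - inf = nan.
w = np.float32(1.0)
with np.errstate(over='ignore', invalid='ignore'):
    for _ in range(200):
        w = w - np.float32(1.5) * np.float32(2.0) * w
```

In the Keras run, common mitigations are scaling `y` (e.g. with `StandardScaler`), lowering the learning rate, or gradient clipping via `SGD(learning_rate=..., clipnorm=1.0)`; note also that recent Keras deprecates `lr=` in favor of `learning_rate=`.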
78,137,642
2,245,136
Cursor-based paging with a compound index in MongoDB and Python
<p>The Python application I'm implementing works with the MongoDB database. I use the <code>mongoengine</code> module to connect and fetch data in my scripts (<a href="https://docs.mongoengine.org/" rel="nofollow noreferrer">https://docs.mongoengine.org/</a>).</p> <p>My collection in the database has following structure:</p> <pre><code>class Item(mongoengine.Document): timestamp = mongoengine.DateTimeField() ... // the rest of the fields, the _id is added automatically </code></pre> <p>I'm trying to implement the cursor-based paging based on the compound index created with <code>_id</code> and <code>timestamp</code>. The primary reason for this is I want to order by data by the timestamp. However multiple items may have the same timestamp (year, month, day, without hours, minutes and seconds). Thus, I also need to add the <code>_id</code> field to the compound index. My understanding is that following list of items:</p> <pre><code>Item2, _id=1, timestamp=2024,1,2 Item1, _id=2, timestamp=2024,1,1 Item3, _id=3, timestamp=2024,1,3 Item4, _id=4, timestamp=2024,1,3 </code></pre> <p>should be sorted like this:</p> <pre><code>Item4, _id=4, timestamp=2024,1,3 Item3, _id=3, timestamp=2024,1,3 Item2, _id=1, timestamp=2024,1,2 Item1, _id=2, timestamp=2024,1,1 </code></pre> <p>So I tried adding a following compound index to the <code>Item</code> class:</p> <pre><code>meta = { &quot;indexes&quot;: [ { &quot;name&quot;: &quot;myindex&quot;, &quot;fields&quot;: [&quot;-timestamp&quot;, &quot;_id&quot;] } ] } </code></pre> <p>The minus sign before <code>timestamp</code>, according to the documentation, means it's sorted in the descending order of timestamp field. In the case two items have the same timestamp, they should be sorted in the ascending order by the <code>_id</code> field.</p> <p>The above does not work. So I tried to create a compound index using only one field, <code>timestamp</code>. 
This also doesn't work, no matter what order I choose (with minus before the field name or without it).</p> <p>I read MongoDB documentation and they say there's a <code>hint()</code> command to force database to use the specified index. According to the documentation of the <code>mongoengine</code> module (<a href="https://docs.mongoengine.org/apireference.html?highlight=hint#mongoengine.queryset.QuerySet.hint" rel="nofollow noreferrer">https://docs.mongoengine.org/apireference.html?highlight=hint#mongoengine.queryset.QuerySet.hint</a>), it also supports this command.</p> <p>So I tried the following code to enforce using my index:</p> <pre><code>models.Item.objects(id=id).hint(index=&quot;myindex&quot;).limit(10) </code></pre> <p>So my expectation is the above command would fetch data, sort it according to the compound index I'm trying to enforce, use the provided <code>id</code> to give only 10 entries after the entry with the id provided. It also doesn't work at all.</p> <p>Does <code>mongoengine</code> module supports compound indexes and <code>hint()</code> command? Is the above code correct?</p> <hr /> <p>EDIT</p> <p>I found out that the reason I don't see the difference in my unit tests with different compound indexes is the fact that mongomock seems to not supporting compound indexes. <strong>So the only question left is how should the cursor-based pagination be implemented with a collection based on compound indexes? The <code>mongoengine</code> does not allow to provide <code>hint()</code> before querying the collection (<code>objects()</code>.</strong></p>
<python><mongodb><mongoengine>
2024-03-10 22:03:17
0
372
VIPPER
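For cursor-based pagination the index is only half the story — the query itself has to encode the compound ordering. In mongoengine terms this would look something like `Item.objects(Q(timestamp__lt=last_ts) | (Q(timestamp=last_ts) & Q(id__gt=last_id))).order_by('-timestamp', '+id').limit(10)` (untested against a live database; field names as in the question). The comparator logic behind that filter, sketched in plain Python with integer stand-ins for timestamps:

```python
def next_page(items, last_ts, last_id, n):
    """Items ordered by (timestamp desc, id asc); return the n items
    strictly after the cursor (last_ts, last_id) in that ordering."""
    key = lambda it: (-it['ts'], it['id'])          # negate ts for descending order
    ordered = sorted(items, key=key)
    return [it for it in ordered if key(it) > (-last_ts, last_id)][:n]

# the four items from the question, with integer timestamps
items = [{'id': 1, 'ts': 2}, {'id': 2, 'ts': 1},
         {'id': 3, 'ts': 3}, {'id': 4, 'ts': 3}]
page = next_page(items, last_ts=3, last_id=3, n=2)
```

The key point is that the cursor is the full `(timestamp, _id)` pair, not just an `_id`, so ties on timestamp paginate deterministically; the compound index then lets MongoDB serve the sorted, filtered query efficiently without any `hint()`.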
78,137,516
1,309,300
Python - rdflib - json-ld - why does it not parse the nested triples
<p>I have the following code, parsing the JSON-ld file below. However, it only outputs the top-level triples and not the other, such as the <code>name</code> of the <code>WebPage</code></p> <pre class="lang-py prettyprint-override"><code>from rdflib import Dataset from rdflib.namespace import Namespace, NamespaceManager local_input=&quot;&quot;&quot; [ { &quot;@context&quot;: &quot;https://schema.org/&quot; }, { &quot;@type&quot;: &quot;WebPage&quot;, &quot;@id&quot;: &quot;https://example.com/glossary/term/&quot;, &quot;name&quot;: &quot;My Glossary Term&quot;, &quot;abstract&quot;: &quot;Just my glossary Term.&quot;, &quot;datePublished&quot;: &quot;2024-03-08T07:52:13+02:00&quot;, &quot;dateModified&quot;: &quot;2024-03-08T14:54:13+02:00&quot;, &quot;url&quot;: &quot;https://example.com/glossary/term/&quot;, &quot;author&quot;: { &quot;@type&quot;: &quot;Person&quot;, &quot;@id&quot;: &quot;https://example.com/&quot;, &quot;name&quot;: &quot;John Doe&quot; }, &quot;about&quot;: [ { &quot;@type&quot;: &quot;DefinedTerm&quot;, &quot;@id&quot;: &quot;https://example.com/glossary/term/#definedTerm&quot; } ] }, { &quot;@type&quot;: &quot;DefinedTerm&quot;, &quot;@id&quot;: &quot;https://example.com/glossary/term/#definedTerm&quot;, &quot;name&quot;: &quot;My Term&quot;, &quot;description&quot;: &quot;Just my Term&quot;, &quot;inDefinedTermSet&quot;: { &quot;@type&quot;: &quot;DefinedTermSet&quot;, &quot;@id&quot;: &quot;https://example.com/glossary/#definedTermSet&quot; } } ] &quot;&quot;&quot; SCH = Namespace('https://schema.org/') namespace_manager = NamespaceManager(Dataset(), bind_namespaces='none') namespace_manager.bind('', SCH, override=True) g = Dataset() g.namespace_manager = namespace_manager g.parse(data=local_input, format='json-ld', publicID=&quot;http://schema.org/&quot;) print(len(g)) import pprint for stmt in g: pprint.pprint(stmt) </code></pre> <p>OUTPUT:</p> <pre><code>2 (rdflib.term.URIRef('https://example.com/glossary/term/'), 
rdflib.term.URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type'), rdflib.term.URIRef('http://schema.org/WebPage'), rdflib.term.URIRef('urn:x-rdflib:default')) (rdflib.term.URIRef('https://example.com/glossary/term/#definedTerm'), rdflib.term.URIRef('http://www.w3.org/1999/02/22-rdf-syntax-ns#type'), rdflib.term.URIRef('http://schema.org/DefinedTerm'), rdflib.term.URIRef('urn:x-rdflib:default')) </code></pre>
<python><json-ld><rdflib>
2024-03-10 21:15:35
1
328
Kaj Kandler
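A plausible explanation: in a top-level JSON-LD array, `{"@context": ...}` is just another node object — it does not scope its siblings, so terms like `name` and `author` in the other objects have no context mapping and are dropped during expansion (only `@type` and `@id` survive, which matches the two triples printed). One way to test this theory is to wrap the node objects in an `@graph` under a shared context before parsing — a sketch of that restructuring in plain JSON, no rdflib required:

```python
import json

# miniature version of the document from the question
raw = """
[
  {"@context": "https://schema.org/"},
  {"@type": "WebPage", "@id": "https://example.com/glossary/term/",
   "name": "My Glossary Term"}
]
"""
doc = json.loads(raw)

# hoist the lone context entry so it applies to every remaining node object
fixed = {"@context": doc[0]["@context"], "@graph": doc[1:]}
fixed_json = json.dumps(fixed)
```

Feeding `fixed_json` to `g.parse(data=..., format='json-ld')` should then yield the nested `name`, `author`, and `inDefinedTermSet` triples as well.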
78,137,513
2,314,737
Pandas query doesn't work if column contains null values
<p>I have a Pandas dataframe containing null values and I want to filter it using <code>query</code></p> <pre><code>data = {'Title': ['Title1', 'Title2', 'Title3', 'Title4'], 'Subjects': ['Math; Science', 'English; Math', pd.NA, 'English']} df_test = pd.DataFrame(data) print(df_test) # Title Subjects # 0 Title1 Math; Science # 1 Title2 English; Math # 2 Title3 &lt;NA&gt; # 3 Title4 English </code></pre> <p>This query gives me an error:</p> <pre><code>df_test.query('Title.str.startswith(&quot;T&quot;) and Subjects.str.contains(&quot;Math&quot;)') </code></pre> <pre class="lang-none prettyprint-override"><code>KeyError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/pandas/core/computation/scope.py in resolve(self, key, is_local) 197 if self.has_resolvers: --&gt; 198 return self.resolvers[key] 199 36 frames KeyError: 'Series_2_0xe00x4a0x2f0xf50x420x7a0x00x0' During handling of the above exception, another exception occurred: KeyError Traceback (most recent call last) KeyError: 'Series_2_0xe00x4a0x2f0xf50x420x7a0x00x0' The above exception was the direct cause of the following exception: UndefinedVariableError Traceback (most recent call last) /usr/local/lib/python3.10/dist-packages/pandas/core/computation/scope.py in resolve(self, key, is_local) 209 return self.temps[key] 210 except KeyError as err: --&gt; 211 raise UndefinedVariableError(key, is_local) from err 212 213 def swapkey(self, old_key: str, new_key: str, new_value=None) -&gt; None: UndefinedVariableError: name 'Series_2_0xe00x4a0x2f0xf50x420x7a0x00x0' is not defined </code></pre> <p>Same with this query:</p> <pre><code>df_test.query('Title.str.startswith(&quot;T&quot;) and Subjects.notna() and Subjects.str.contains(&quot;Math&quot;)') </code></pre> <p>This gives me the desired result</p> <pre><code>df_test[df_test['Subjects'].notna()].query('Title.str.startswith(&quot;T&quot;) and Subjects.str.contains(&quot;Math&quot;)') Title Subjects 0 Title1 Math; Science 1 Title2 English; Math 
</code></pre> <p>I wonder if this is a limitation of <code>query</code> or if I'm doing something wrong.</p> <pre><code>pd.__version__ # '1.5.3' </code></pre>
<python><pandas><dataframe>
2024-03-10 21:13:32
1
29,699
user2314737
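A likely culprit is that `str.contains` on a column containing `pd.NA` produces missing booleans, which trips up `query`'s evaluation engine (and `and` inside `query` does not short-circuit per row). Passing `na=False` so missing rows simply evaluate to `False`, combined with plain boolean indexing, sidesteps the problem — a sketch on the sample frame:

```python
import pandas as pd

df_test = pd.DataFrame({'Title': ['Title1', 'Title2', 'Title3', 'Title4'],
                        'Subjects': ['Math; Science', 'English; Math',
                                     pd.NA, 'English']})

# na=False makes str.contains return plain booleans, excluding missing rows
mask = (df_test['Title'].str.startswith('T')
        & df_test['Subjects'].str.contains('Math', na=False))
result = df_test[mask]
```

`df_test.query('...str.contains("Math", na=False)', engine='python')` may also work, but explicit masks are generally the more robust route with missing data.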
78,137,505
5,431,734
float32 and precision points in numpy
<p>I am trying to understand precision in Python between <code>np.float32</code> and <code>np.float64</code>. For a lower memory footprint I am leaning towards using float32, but I don't quite understand why a float sometimes turns into an integer-like value. For example:</p> <pre><code>x = np.array(12345678.1234)  # this is a float64 by default z = 5 * x </code></pre> <p>then z is another np.float64 with value 61728390.617.</p> <p>However, if I cast this to np.float32, then <code>np.float32(z) = 61728390.0</code>. It can't be some kind of buffer overflow; the range of float32 is -/+ 3.4e+38.</p>
<python><numpy>
2024-03-10 21:11:33
0
3,725
Aenaon
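What is happening is not overflow but limited precision: float32 keeps a 24-bit significand, roughly 7 decimal digits, so above 2²³ ≈ 8.4 million the gap between adjacent float32 values is already 1.0 or more and fractional parts are rounded away entirely. A small demonstration:

```python
import numpy as np

x = np.float64(12345678.1234)

# adjacent float32 values near 1.2e7 are a full 1.0 apart, so the
# fractional part cannot be represented and the value looks integer-like
y = np.float32(x)

# np.spacing gives the distance from a value to the next representable one
gap = np.spacing(np.float32(12345678.0))
```

Near 6.2e7 (the value of `z` in the question) the float32 spacing is already 4.0, which is why the rounded result ends in 0 rather than .617.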
78,137,364
297,797
Rotate one axis in matplotlib
<p>I have some series x and y such that x+y ≤ 1. If I make a simple scatter plot it looks like this:</p> <p><a href="https://i.sstatic.net/VEGkY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VEGkY.png" alt="scatter plot" /></a></p> <p>I'd like to transform this plot so that the angle between the positive x and y axes is 60 degrees rather than 90 degrees. Basically I want the fundamental region bounded by the origin and (1,0) and (0,1) to be transformed from an isosceles right triangle to an equilateral triangle.</p> <p>Through various web searches (including <a href="https://www.tutorialspoint.com/how-to-rotate-a-simple-matplotlib-axes" rel="nofollow noreferrer">this tutorial</a>) this is the closest I can come up with:</p> <pre class="lang-py prettyprint-override"><code> import matplotlib.pyplot as plt from matplotlib.transforms import Affine2D import mpl_toolkits.axisartist.floating_axes as floating_axes fig = plt.figure() plot_extents = 0, 1, 0, 1 transform = Affine2D().skew_deg(30,0).scale(1,0.8660254038) helper = floating_axes.GridHelperCurveLinear(transform, plot_extents) ax = floating_axes.FloatingSubplot(fig, 111, grid_helper=helper) x = [0.41729137, 0.31693662, 0.25544209, 0.13843966, 0.31218805,0.01035702, 0.4847026 , 0.11104309, 0.25144685, 0.18332651] y = [0.20372773, 0.5395082 , 0.6020569 , 0.19787512, 0.57743819,0.93317626, 0.43035489, 0.8235019 , 0.36000727, 0.24365175] ax.scatter(x,y) ax.set_xlim(-0.15,1.15) ax.set_ylim(-0.15,1.15) fig.add_axes(ax) </code></pre> <p>But the result seems to keep the points where they are and only move the one axis.</p> <p><a href="https://i.sstatic.net/6IO37.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6IO37.png" alt="second attempt" /></a></p>
<python><matplotlib>
2024-03-10 20:19:23
2
2,689
Matthew Leingang
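An alternative that avoids `floating_axes` entirely: apply the 60° shear to the data itself and plot on an ordinary axes. The linear map that sends the right triangle with vertices (0,0), (1,0), (0,1) onto an equilateral triangle keeps (1,0) fixed and sends (0,1) to (cos 60°, sin 60°), i.e. x' = x + y·cos 60°, y' = y·sin 60° — a sketch:

```python
import numpy as np

def to_60deg_axes(x, y):
    """Map coordinates so the positive x- and y-axes meet at 60 degrees:
    (1, 0) stays put and (0, 1) lands on (0.5, sqrt(3)/2)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return x + 0.5 * y, (np.sqrt(3) / 2) * y

xp, yp = to_60deg_axes([1.0, 0.0], [0.0, 1.0])  # images of the two unit vectors
```

Then `ax.scatter(*to_60deg_axes(x, y))` on a normal subplot, ideally with `ax.set_aspect('equal')`, and the fundamental region's edges drawn through (0,0), (1,0), and (0.5, √3/2); this moves the points along with the axes, which is what the `FloatingSubplot` approach was missing.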
78,137,290
2,056,201
Cannot use relative folder structure in NPM package.json
<p>I'm using the example Flask/React app from this GitHub repository: <a href="https://github.com/Faruqt/React-Flask" rel="nofollow noreferrer">https://github.com/Faruqt/React-Flask</a></p> <p>The problem is that I have <code>flask.exe</code> in the <code>Project/venv/scripts</code> folder while the repository is in the <code>Project/React-Flask-master</code> folder.</p> <p>I edited the package.json to</p> <pre><code> &quot;scripts&quot;: { &quot;start&quot;: &quot;react-scripts start&quot;, &quot;temp&quot;: &quot;cd ../&quot;, &quot;start-backend&quot;: &quot;cd backend &amp;&amp; /../../venv/Scripts/flask run --no-debugger&quot;, &quot;build&quot;: &quot;react-scripts build&quot;, &quot;test&quot;: &quot;react-scripts test&quot;, &quot;eject&quot;: &quot;react-scripts eject&quot; }, </code></pre> <p>But no matter what I try, I cannot use the relative folder structure.</p> <p>I added a &quot;temp&quot; line that is supposed to go one folder up, and even that doesn't work:</p> <pre><code>(venv) C:\Code\Project\React-Flask-master&gt;npm run temp &gt; flask_react@0.1.0 temp &gt; cd ../ (venv) C:\Code\Project\React-Flask-master&gt; </code></pre> <p>And:</p> <pre><code>(venv) C:\Code\aicode\AIbots\catgpt\React-Flask-master&gt;npm run start-backend &gt; flask_react@0.1.0 start-backend &gt; cd backend &amp;&amp; /../../venv/Scripts/flask run --no-debugger The system cannot find the path specified. (venv) C:\Code\aicode\AIbots\catgpt\React-Flask-master&gt; </code></pre> <p>I tried using &quot;../../&quot; and &quot;/../../&quot;.</p>
<javascript><python><reactjs><npm><python-venv>
2024-03-10 19:53:24
0
3,706
Mich
78,137,226
2,056,201
"env: ‘/bin/flask’: No such file or directory" when running "npm run start-backend"
<p>Im trying to find the most basic website template with Flask and React, Im using this repository and following installation instructions</p> <p><a href="https://github.com/Faruqt/React-Flask" rel="nofollow noreferrer">https://github.com/Faruqt/React-Flask</a></p> <p>Im using git Bash in Windows 10, and when running <code>npm run start-backend</code> it gives me the error:</p> <pre><code>&gt; flask_react@0.1.0 start-backend &gt; cd backend &amp;&amp; env/bin/flask run --no-debugger env: ‘/bin/flask’: No such file or directory </code></pre> <p>I can't seem to find a solution online</p>
<javascript><python><reactjs><windows><flask>
2024-03-10 19:32:01
1
3,706
Mich
78,137,158
2,514,157
Can Distilled Whisper Models be used as a Drop-In Replacement for OpenAI Whisper?
<p>I have a working video transcription pipeline working using a local OpenAI Whisper model. I would like to use the equivalent distilled model (&quot;distil-small.en&quot;), which is smaller and faster.</p> <pre class="lang-py prettyprint-override"><code>transcribe(self): file = &quot;/path/to/video&quot; model = whisper.load_model(&quot;small.en&quot;) # WORKS model = whisper.load_model(&quot;distil-small.en&quot;) # DOES NOT WORK transcript = model.transcribe(word_timestamps=True, audio=file) print(transcript[&quot;text&quot;]) </code></pre> <p>However, I get an error that the model was not found:</p> <pre><code>RuntimeError: Model distil-small.en not found; available models = ['tiny.en', 'tiny', 'base.en', 'base', 'small.en', 'small', 'medium.en', 'medium', 'large-v1', 'large-v2', 'large-v3', 'large'] </code></pre> <p>I installed my dependencies in Poetry (which used pip under the hood) as follows:</p> <pre><code>[tool.poetry.dependencies] python = &quot;^3.11&quot; openai-whisper = &quot;*&quot; transformers = &quot;*&quot; # distilled whisper models accelerate = &quot;*&quot; # distilled whisper models datasets = { version = &quot;*&quot;, extras = [&quot;audio&quot;] } # distilled whisper models </code></pre> <p>The <a href="https://github.com/huggingface/distil-whisper" rel="nofollow noreferrer">GitHub Distilled Whisper</a> documentation appears to use a different approach to installing and using these models.</p> <p><strong>Is it possible to use a Distilled model as a drop-in replacement for a regular Whisper model?</strong></p>
<python><openai-api><openai-whisper>
2024-03-10 19:07:49
1
691
user2514157
78,137,038
889,675
Ensuring a unique combination of many-to-many mapped entities in SQLAlchemy
<p>I have a following SqlAlchemy models with association table. They represent chat conversations. Each chat can have 0 - N users involved:</p> <pre><code>class Chat(BaseDbModel): __tablename__ = &quot;chats&quot; id = Column(Integer, primary_key=True) users = relationship(&quot;User&quot;, secondary=chat_user_association, back_populates=&quot;chats&quot;) ... class User(BaseDbModel): __tablename__ = &quot;users&quot; id = Column(Integer, primary_key=True) chats = relationship(&quot;Chat&quot;, secondary=chat_user_association, back_populates=&quot;users&quot;) ... chat_user_association = Table( 'chat_user', BaseDbModel.metadata, Column('chat_id', Integer, ForeignKey('chats.id')), Column('user_id', Integer, ForeignKey('users.id')), ) </code></pre> <p>There is nothing that would prevent me from creating multiple chats with same set of user ids. I would like to prevent this behaviour from ever happening, ideally on the database level.</p> <p><strong>Example of intended behaviour:</strong></p> <blockquote> <p>I would like to be able to create chats with user ids {1, 2}, {1, 2, 3}, {2, 3}, but it shouldn't be able to create an another chat with user ids {2, 1, 3} since that would result in duplicity.</p> </blockquote> <p>What would be the cleanest solution / approach? I can do a manual check when creating the chat object, but I would prefer an approach that would be more race-condition proof. I use PostgreSQL.</p>
<python><postgresql><sqlalchemy>
2024-03-10 18:30:48
1
876
Marian Galik
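One race-proof approach that works at the database level: store a canonical, order-independent key for the member set in an extra column on `chats` (e.g. `Column(String, unique=True)`), computed before insert; PostgreSQL's unique constraint then rejects the duplicate no matter which concurrent request wins. The canonicalization itself is simple — a sketch (the column and helper names are made up for illustration):

```python
def chat_members_key(user_ids):
    """Canonical key for a set of chat members: de-duplicated, sorted,
    comma-joined user ids, so {2, 1, 3} and {1, 2, 3} produce the
    same key and collide on the unique constraint."""
    return ",".join(str(uid) for uid in sorted(set(user_ids)))

k1 = chat_members_key([2, 1, 3])
k2 = chat_members_key([1, 2, 3])
```

On conflict the insert raises an `IntegrityError`, which the application can catch and translate into "chat already exists" — no pre-check query, hence no race window.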
78,136,889
6,687,699
asyncpg.exceptions.AmbiguousParameterError: could not determine data type of parameter
<p>Am creating an endpoint to search in FastAPI, but am facing this error :</p> <pre><code>asyncpg.exceptions.AmbiguousParameterError: could not determine data type of parameter $2 </code></pre> <p>I have defined my sql query in queries.py :</p> <pre><code>GET_LOG_QUERY = &quot;&quot;&quot; SELECT * FROM activities WHERE entity = :entity AND types_id = :types_id AND ( :search IS NULL OR :search = '' OR full_name ILIKE :search OR email ILIKE :search ) </code></pre> <p>&quot;&quot;&quot;</p> <p>and here's the repository, :</p> <pre><code>class LogActivitiesRepository(BaseRepository): def __init__(self, db): super().__init__(db=db) async def get_activities(self, types_id: str, entity: str, search: Optional[str] = None) -&gt; List[LogActivityModel]: search_pattern = f&quot;%{search}%&quot; if search else &quot;%&quot; records = await self.db.fetch_all( query=GET_LOG_QUERY, values={ &quot;entity&quot;: entity_name, &quot;types_id&quot;: types_id, &quot;search&quot;: search_pattern } ) if not records: return [] return [LogActivityModel(**record._mapping) for record in records] </code></pre> <p>If I don't consider the search field it works very well in the query set, like the queryset below :</p> <p>GET_LOG_QUERY = &quot;&quot;&quot; SELECT * FROM user_activities WHERE entity_name = :entity_name AND tenant_id = :tenant_id; &quot;&quot;&quot;</p>
<python><sql><postgresql><fastapi>
2024-03-10 17:53:30
1
4,030
Lutaaya Huzaifah Idris
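asyncpg prepares the statement server-side, and because the first use of `:search` is an `IS NULL` comparison, PostgreSQL cannot infer the parameter's type — hence the `AmbiguousParameterError`. Adding an explicit cast to each occurrence usually resolves it; a sketch of the revised query string (table and column names as in the question):

```python
# each occurrence of the parameter is cast so the server-side prepared
# statement has an unambiguous type for the $n placeholder behind :search
GET_LOG_QUERY = """
SELECT *
FROM activities
WHERE entity = :entity
  AND types_id = :types_id
  AND (
        CAST(:search AS text) IS NULL
        OR CAST(:search AS text) = ''
        OR full_name ILIKE CAST(:search AS text)
        OR email ILIKE CAST(:search AS text)
      )
"""
```

The PostgreSQL shorthand `(:search)::text` would work the same way; the repository code can stay as it is once the query carries the casts.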
78,136,859
1,473,517
Find the optimal clipped circle
<p>Given a <code>NxN</code> integer lattice, I want to find the clipped circle which maximizes the sum of its interior lattice point values.</p> <p>Each lattice point <code>(i,j)</code> has a value <code>V(i,j)</code> and are stored in the following matrix <code>V</code>:</p> <pre><code> [[ 1, 1, -3, 0, 0, 3, -1, 3, -3, 2], [-2, -1, 0, 1, 0, -2, 0, 0, 1, -3], [ 2, 2, -3, 2, -2, -1, 2, 2, -2, 0], [-2, 0, -3, 3, 0, 2, -1, 1, 3, 3], [-1, -2, -1, 2, 3, 3, -3, -3, 2, 0], [-3, 3, 2, 0, -3, -2, -1, -3, 0, -3], [ 3, 2, 2, -1, 0, -3, 1, 1, -2, 2], [-3, 1, 3, 3, 0, -3, -3, 2, -2, 1], [ 0, -3, 0, 3, 2, -2, 3, -2, 3, 3], [-1, 3, -3, -2, 0, -1, -2, -1, -1, 2]] </code></pre> <p>The goal is to maximize the sum of values <code>V(i,j)</code> of the lattice points lying on the boundary and within interior of a (clipped) circle with radius <code>R</code>, with the assumptions and conditions:</p> <ul> <li>the circle has center at (0,0)</li> <li>the circle can have any positive radius (not necessarily an integer radius, i.e., rational).</li> <li>the circle may be clipped at two lattice points, resulting in a diagonal line as shown in the picture. This diagonal line has a slope of -45 degrees.</li> </ul> <p>Some additional details:</p> <p>The score for a clipped circle is the sum of all the integers that are both within the circle (or on the border) and on the side of the diagonal line including (0,0). The values on (or near) the border are -3, 1, 3, -1, -3, 3, -1, 2, 0, 3.</p> <p>Even though the circle can have any radius, we need only consider circles that intersect a grid point precisely so there are n^2 different relevant radiuses. Further, we need only record one position where the circle intersects with the diagonal line to fully specify the clipped circle. 
Note that this intersection with the diagonal does not need to be at an integer coordinate.</p> <p>If the optimal solution doesn't have the diagonal clipping the circle at all then we need only return the radius of the circle.</p> <p>What I have found so far:</p> <p>If we only wanted to find the optimal circle we could do that quickly in time proportional to the input size with:</p> <pre><code>import numpy as np from math import sqrt np.random.seed(40) def find_max(A): n = A.shape[0] sum_dist = np.zeros(2 * n * n, dtype=np.int32) for i in range(n): for j in range(n): dist = i**2 + j**2 sum_dist[dist] += A[i, j] cusum = np.cumsum(sum_dist) # returns optimal radius with its score return sqrt(np.argmax(cusum)), np.max(cusum) A = np.random.randint(-3, 4, (10, 10)) print(find_max(A)) </code></pre> <p>How quickly can the optimal clipped circle be found?</p>
<python><algorithm><performance><optimization>
2024-03-10 17:45:45
3
21,513
Simd
78,136,787
3,017,869
While loop blocks when reading a buffer and writing to fifo
<p>I have a while loop that reads data from a buffer and writes it to a named pipe.</p> <p>The loop never executes, unless I add a small sleep interval inside the loop.</p> <p>This to me suggests there is a sync issue between reading data from the buffer and writing it to the pipe.</p> <p>If my hunch is correct, how does one solve this kind of issue in python?</p> <p>Additional detail I have a golang program that has the pipe open and waits on the python script to send data to it</p> <pre><code>from picamera2 import Picamera2 from picamera2.encoders import H264Encoder from picamera2.outputs import FileOutput from picamera2.outputs import CircularOutput import time import sys import os import io import signal def signal_handler(sig, frame): print('Received CTRL+C, exiting...') sys.exit(0) picam2=Picamera2() sensor_modes = picam2.sensor_modes #loop through the sensor modes and print them print('Sensor Modes:') for mode in sensor_modes: print(mode) raw_modes = picam2._get_raw_modes() #loop through the raw modes and print them print('Raw Modes:') for mode in raw_modes: print(mode) config = picam2.create_video_configuration(main={'size': (1280, 800), 'format':'YUV420'}, raw={'size': (2028,1520)}) picam2.configure(config) fifo_path = os.path.abspath('../pipe1') #check if the fifo exists if not os.path.exists(fifo_path): print('error: fifo does not exist') sys.exit(1) print ('opening fifo') try: fifo = io.open(fifo_path, 'wb', buffering=0) except FileNotFoundError: print(&quot;error: fifo '{}' does not exist.&quot;.format(fifo_path)) sys.exit(1) except PermissionError: print(&quot;error: permission denied to write to FIFO '{}'.&quot;.format(fifo_path)) sys.exit(1) except Exception as e: print(&quot;error:&quot;, e) sys.exit(1) else: print(&quot;fifo opened successfully.&quot;) encoder = H264Encoder(bitrate=20000000, repeat=True, iperiod=7, framerate=30) #buffer_capacity = bytearray(30) buffer = io.BytesIO() output = CircularOutput(buffer, buffersize=1) print('starting 
recording') picam2.start_recording(encoder, output) signal.signal(signal.SIGINT, signal_handler) signal.signal(signal.SIGTERM, signal_handler) #Continuously write the buffer to the fifo try: while True: time.sleep(15/1000) buffer.seek(0) data = buffer.read() fifo.write(data) buffer.truncate(0) except Exception as e: print(&quot;error:&quot;, e) finally: picam2.stop_recording() fifo.close() print('fifo closed') sys.exit(0) </code></pre>
<python><picamera>
2024-03-10 17:22:45
0
573
user3017869
78,136,762
12,493,545
How to learn about print options keys from pycups?
<p>Via trial and error and looking into the ppd I wrote script below that allows you to print on different trays. I found the correct names for value and key of the job-options in the ppd.</p> <p>Just by looking into <code>conn.getPrinterAttributes(printer)['media-source-supported']</code> those options do not became clear.</p> <p>I feel like there must be a way how you can get all the possible job options in a better fashion than just looking them up in the ppd file, but I fail to find any documentation on this problem and because the underlying base is in C, I am also a bit unsure how the connection between python and C works there.</p> <p>In order to better understand how to use pycups and maybe also how the connection between python and C works here, I decided to ask on stackoverflow.</p> <h2>I have ...</h2> <h3>Following pycups</h3> <p>Followed <code>pip show pycups</code> but only found the <code>cupshelpers</code> and pycups info i.e.</p> <blockquote> <p>This is a set of Python bindings for the libcups library from CUPS project.</p> </blockquote> <p>But I didn't find the exact bindings only:</p> <pre><code>cups.cpython-310-x86_64-linux-gnu.so cupsext.cpython-310-x86_64-linux-gnu.so cupshelpers/ cupshelpers-1.0-py3.10.egg-info pycups-2.0.1.egg-info </code></pre> <h2>Program for reference</h2> <pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3 import cups def list_printers(): conn = cups.Connection() printers = conn.getPrinters() print(&quot;Available printers:&quot;) for printer in printers: print(f&quot;Printer: {printer}&quot;) print(&quot;Supported options:&quot;) options = conn.getPrinterAttributes(printer)['media-source-supported'] for option in options: cleaned_option = ''.join(c for c in option if c.isalnum()).upper() print(f&quot; {cleaned_option}&quot;) print() def print_document(printer_name, document_path, tray_option): conn = cups.Connection() printers = conn.getPrinters() if printer_name not in printers: print(f&quot;Printer 
'{printer_name}' not found.&quot;) return job_options = { 'InputSlot': tray_option, } print(job_options) with open(document_path, 'rb') as document: print_job_id = conn.printFile(printer_name, document_path, &quot;Print Job&quot;, job_options) print(f&quot;Document '{document_path}' sent to '{printer_name}' with Job ID: {print_job_id}&quot;) if __name__ == &quot;__main__&quot;: # Specify the printer name, document paths, and tray options printer_name = &quot;Kyocera-ECOSYS-P3055dn-KPDL&quot; document1_path = &quot;Tray1.pdf&quot; document2_path = &quot;Tray2.pdf&quot; options = [&quot;PF430A&quot;, &quot;PF430B&quot;] for option in options: print_document(printer_name, f&quot;{option}.pdf&quot;, option) </code></pre>
<python><cups><pycups>
2024-03-10 17:14:54
0
1,133
Natan
78,136,658
8,213,085
Jinja2 Extension - Skip clause if the variable is None
<h2>Intention</h2> <p>I'm trying to write a Jinja extension that emulates <a href="https://www.metabase.com/docs/latest/questions/native-editor/sql-parameters#optional-clauses" rel="nofollow noreferrer">Metabase's &quot;optional&quot; clauses</a>, where an expression between double square brackets <code>[[ ... ]]</code> will only be kept if the value of the variable inside it is not <code>None</code></p> <p>For example (using the example in <a href="https://www.metabase.com/docs/latest/questions/native-editor/sql-parameters#optional-clauses" rel="nofollow noreferrer">the Metabase docs</a>), consider the following template:</p> <pre><code>SELECT count(*) FROM products [[ WHERE category = {{ cat }} ]] </code></pre> <p>If the value of <code>cat</code> is <code>None</code>, everything inside the <code>[[ ... ]]</code> is removed; otherwise the content inside the double square brackets is rendered like normal and only the square brackets themselves are removed like normal Jinja stuff</p> <p>To keep this simple, we can assume that any pair of double square brackets will have exactly one variable inside them but there can be any number of these double square bracket clauses</p> <h2>Current (failing) MWE</h2> <p>I've currently implemented 3 things:</p> <ol> <li>The <code>preprocess</code> method to convert <code>[[ ... ]]</code> to <code>{% optional %} ... 
{% endoptional %}</code> so that I can look for the <code>optional</code> tag in the implementation</li> <li>The <code>parse</code> method to find and adjust the <code>optional</code> blocks</li> <li>A custom <code>filter</code> method to apply the &quot;if&quot; logic based on the <em>value</em> of the variable in the double brackets</li> </ol> <p>Note that the third point is what I've tried following the answers to the following similar questions:</p> <ul> <li><a href="https://stackoverflow.com/q/21924444/8213085">Jinja2 extensions - get the value of variable passed to extension</a></li> <li><a href="https://stackoverflow.com/q/12139029/8213085">How to access context variables from the Jinja&#39;s Extension?</a></li> </ul> <p>Although this allows me to get the value of the variable to check whether it's <code>None</code>, I now can't get the full clause to render properly if it <em>isn't</em> <code>None</code> and I can't figure out how to do this correctly -- it just prints out the list of nodes, not a rendered version of them (see next section)</p> <p>The latest (failing) MWE I have at the moment is:</p> <pre class="lang-py prettyprint-override"><code>from __future__ import annotations from typing import Any from jinja2 import nodes, ext, parser START, END = &quot;[[&quot;, &quot;]]&quot; class OptionalClausesExtension(ext.Extension): &quot;&quot;&quot; Jinja2 extension to allow optional clauses in templates. &quot;&quot;&quot; tags = {&quot;optional&quot;} def preprocess(self, source: str, name: str | None, filename: str | None = None) -&gt; str: &quot;&quot;&quot; Preprocess the source by converting ``[[`` and ``]]`` to ``{% optional %}`` and ``{% endoptional %}`` for parsing. 
&quot;&quot;&quot; for old, new in [(START, &quot;{% optional %}&quot;), (END, &quot;{% endoptional %}&quot;)]: source = source.replace(old, new) return source def parse(self, parser: parser.Parser) -&gt; nodes.Node | list[nodes.Node]: &quot;&quot;&quot; Parse the template and return the parsed nodes. &quot;&quot;&quot; parser.parse_expression() body = parser.parse_statements([&quot;name:endoptional&quot;], drop_needle=True) self._body = body # I'm guessing this is what bypasses the rendering, but I'm not sure what to do otherwise # For now, only support *exactly* one variable in the optional clause name_args = [] for output in body: name_args.extend(iter(output.find_all(nodes.Name))) assert len(name_args) == 1 return nodes.Output([self.call_method(&quot;filter&quot;, name_args)]) def filter(self, name_arg: Any): return &quot;&quot; if name_arg is None else self._body # return &quot;&quot; if name_arg is None else Template(self._body).render() </code></pre> <p>The commented line beneath <code>filter</code> is only to illustrate that I've tried some explicit rendering, but that doesn't work (and doesn't feel right either) and I can't figure out how else to get the <code>body</code> into the <code>filter</code> function</p> <p>I don't understand Jinja under the hood so I suspect that this may be an invalid way to extend it, but any help getting this to work would be really appreciated -- or pointing me to some other feature/place that can do something like this instead</p> <h2>The corresponding test</h2> <p>For reference/clarity, I'm trying to get the following test to pass:</p> <pre class="lang-py prettyprint-override"><code>import jinja2 import pytest import jinja_optional_clauses.main as main @pytest.mark.parametrize( &quot;raw, var, rendered&quot;, [ [&quot;out [[ in {{ var }} ]] out&quot;, &quot;thing&quot;, &quot;out in thing out&quot;], [&quot;out [[ in {{ var }} ]] out&quot;, None, &quot;out out&quot;], ], ) def test__render_optional_clause(raw: str, var: 
str, rendered: str): &quot;&quot;&quot; Test that optional clauses are rendered correctly. If the variable is not provided, the optional clause should be removed; otherwise, the clause should be rendered as normal. &quot;&quot;&quot; env = jinja2.Environment(extensions=[main.OptionalClausesExtension]) template = env.from_string(raw) assert template.render(var=var) == rendered </code></pre> <p>At the moment, this passes for the <code>None</code> case but for the <code>thing</code> case it is still using the unrendered list of nodes (most likely because of the <code>self._body</code> attribute):</p> <pre><code># expected out in thing out # actual out [Output(nodes=[TemplateData(data=' in '), Name(name='var', ctx='load'), TemplateData(data=' ')])] out </code></pre>
<python><jinja2>
2024-03-10 16:40:28
0
1,064
Bilbottom
78,136,298
12,178,630
Add external objects within matplotlib/plotly 3d plot
<p>I would like to know how I can add or draw a 3D model, for example rectangular frames (gates) like the ones shown below, to a 3D plot in <code>matplotlib</code> or <code>plotly</code>. I have searched for this for quite a long time, but I could not find a way to do it. <a href="https://i.sstatic.net/Z0Qxp.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z0Qxp.jpg" alt="enter image description here" /></a> It would preferably be one object rather than a set of points, so that I can arrange its placement and integrate it smoothly with the existing drawing. Any help you could provide is truly appreciated.</p>
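One possible starting point in matplotlib (a sketch, not a full answer): each rectangular frame ("gate") can be drawn as a single closed 3D polyline with one `ax.plot` call, which keeps it one artist whose placement is controlled by shifting its corner array. The corner coordinates and sizes below are invented for illustration:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

def gate(origin, width, height):
    """Corners of an upright rectangular frame in the x-z plane, closed into a loop."""
    x0, y0, z0 = origin
    return np.array([
        [x0, y0, z0],
        [x0 + width, y0, z0],
        [x0 + width, y0, z0 + height],
        [x0, y0, z0 + height],
        [x0, y0, z0],  # repeat the first corner to close the frame
    ])

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for y in (0.0, 2.0, 4.0):  # three gates spaced along the y axis
    c = gate((0.0, y, 0.0), 2.0, 3.0)
    ax.plot(c[:, 0], c[:, 1], c[:, 2], lw=3)
```

Each frame ends up as one `Line3D` artist in `ax.lines`, so it can be restyled or repositioned as a unit; for filled faces, `Poly3DCollection` from `mpl_toolkits.mplot3d.art3d` would be the analogous single-object route.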
<python><matplotlib><plotly>
2024-03-10 14:52:51
1
314
Josh
78,136,157
9,651,461
How do I create a keep_alive relationship between an object and the members of an aggregate return value in pybind11?
<p>I'm using pybind11 to generate Python mappings for a C++ library, and I'm using pybind11::keep_alive to manage the lifetime of C++ objects that are referenced by other objects. This works fine when the referrer is a direct return value, but I'm running into trouble when the referrer is part of an aggregate return value.</p> <p>It's probably easiest to illustrate with a simplified but complete example:</p> <pre><code>#include &lt;iostream&gt; #include &lt;pybind11/pybind11.h&gt; #include &lt;pybind11/stl.h&gt; namespace py = pybind11; class Container { public: int some_value; Container() { std::cerr &lt;&lt; &quot;Container constructed\n&quot;; some_value = 42; } ~Container() { some_value = -1; std::cerr &lt;&lt; &quot;Container destructed\n&quot;; } // Container is not copyable. Container(const Container &amp;) = delete; Container &amp;operator=(const Container &amp;) = delete; // Iterator references the contents of Container. struct Iterator { Container *owner; int value() const { return owner-&gt;some_value; } }; Iterator iterator() { return Iterator{this}; } std::pair&lt;Iterator, bool&gt; iterator_pair() { return {Iterator{this}, true}; } }; PYBIND11_MODULE(example, module) { py::class_&lt;Container&gt; owner(module, &quot;Container&quot;); py::class_&lt;Container::Iterator&gt;(owner, &quot;Iterator&quot;) .def_property_readonly(&quot;value&quot;, &amp;Container::Iterator::value); owner .def(py::init()) .def(&quot;iterator&quot;, &amp;Container::iterator, py::keep_alive&lt;0, 1&gt;()) .def(&quot;iterator_pair&quot;, &amp;Container::iterator_pair, py::keep_alive&lt;0, 1&gt;()); } </code></pre> <p>In the above example, I have a container object that returns an iterator, which references the container's content, so the iterator must keep the container alive. 
This works perfectly for the <code>iterator</code> method, where <code>py::keep_alive&lt;0, 1&gt;</code> causes the Iterator object to keep a reference to the Container object that created it.</p> <p>However, this doesn't work for the <code>iterator_pair</code> method. For example, running like this:</p> <pre><code>import example def Test1(): it = example.Container().iterator() print(it.value) # prints 42 def Test2(): it, b = example.Container().iterator_pair() assert b == True print(it.value) Test1() # OK Test2() # Fails! </code></pre> <p>Fails with the following output:</p> <pre><code>Container constructed 42 Container destructed Container constructed Container destructed Traceback (most recent call last): File &quot;test.py&quot;, line 13, in &lt;module&gt; Test2() # Fails! ^^^^^^^ File &quot;test.py&quot;, line 8, in Test2 it, b = example.Container().iterator_pair() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ TypeError: cannot create weak reference to 'tuple' object </code></pre> <p>And I cannot remove <code>py::keep_alive&lt;0, 1&gt;()</code>, because that would cause the Container to be destructed prematurely:</p> <pre><code>Container constructed 42 Container destructed Container constructed Container destructed 269054512 </code></pre> <p>(Note that <code>269054512</code> is just random garbage in memory because the Container is already deallocated when its value is referenced by the iterator.)</p>
However, I'm not really eager to do this, because it requires a lot of refactoring on the C++ side, and it feels like I'm duplicating the reference counting support that Python already has.</p>
<python><c++><pybind11>
2024-03-10 14:06:41
1
1,194
Maks Verver
78,136,149
15,560,990
Cannot mutate a Python randomly generated list
<p>I'm just playing around with a local k8s cluster and created a simple app to containerize. I just want to print a reversed array. Disclaimer: I might be using the words &quot;lists&quot; and &quot;arrays&quot; interchangeably here :)</p> <p>So I defined two utility functions: one to generate lists with random int values and one to reverse said lists.</p> <pre><code>from random import randint def reverse_array(input): i=0 arr_length = len(input) last_index = arr_length - 1 while i &lt; arr_length / 2: temp = input[i] input[i] = input[last_index - i] input[last_index - i] = temp i=i+1 return input def generate_random_array(max_iteration): return [randint(0, 100) for _ in range(0, max_iteration)] </code></pre> <p>Both methods have unit tests, so I know they're working as intended</p> <p>then I execute them in the main app:</p> <pre><code>import util.misc as misc from time import sleep while True: input_arr = misc.generate_random_array(5) reversed_arr = misc.reverse_array(input_arr) # adding manual and control only for debugging manual = [1, 2, 3] control= misc.reverse_array(manual) print(f&quot;your original array is {input_arr}&quot;) print(f&quot;your reversed array is {reversed_arr}&quot;) print(f&quot;control is {control}&quot;) sleep(1) </code></pre> <p>Oddly enough, the application's output shows <code>reversed_arr</code> actually has the same order as <code>input_arr</code>, but <code>control</code> is correctly reversed.</p> <pre><code>your original array is [27, 95, 0, 91, 13] your reversed array is [27, 95, 0, 91, 13] control is [3, 2, 1] </code></pre> <p>What is going on here? It seems that arrays produced by <code>generate_random_array</code> are actually immutable</p>
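What may be going on (a hedged guess, made concrete): `reverse_array` mutates its argument in place and then returns that same list object, so `input_arr` and `reversed_arr` are two names for one already-reversed list, which is why both prints match. A stripped-down demonstration of the aliasing:

```python
def reverse_in_place(arr):
    # swap symmetric elements, mutating the caller's list
    i, j = 0, len(arr) - 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j -= 1
    return arr  # returns the SAME object, not a copy

original = [27, 95, 0, 91, 13]
result = reverse_in_place(original)

assert result is original               # one list, two names
assert original == [13, 91, 0, 95, 27]  # the "original" was reversed too
```

The `control` case only looks different because `[1, 2, 3]` is compared against by eye, not printed before being reversed.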
<python><arrays>
2024-03-10 14:05:07
1
460
Dasph
78,135,769
893,254
Python jsons library and serializing a dataclass with optional None values
<p>I noticed some warning messages being printed by my Python code. These are coming from the <code>jsons</code> library when trying to perform serializing operations.</p> <pre><code>/home/user/project/.venv/lib/python3.11/site-packages/jsons/_common_impl.py:43: UserWarning: Failed to dump attribute &quot;None&quot; of object of type &quot;MyClass2&quot;. Reason: 'NoneType' object is not callable. Ignoring the attribute. Use suppress_warning(attribute-not-serialized) or suppress_warnings(True) to turn off this message. warnings.warn(msg_, *args, **kwargs) </code></pre> <p><code>MyClass2</code> is just something I have been using to isolate the cause of the error.</p> <pre><code>class MyClass(): def __init__(self) -&gt; None: self.data = None @dataclass class MyClass2(): data: int|None my_class = MyClass() my_class_2 = MyClass2( data=None, ) print( jsons.dumps( my_class, strip_privates=True, ) ) print( jsons.dumps( my_class_2, strip_privates=True, ) ) </code></pre> <p><code>MyClass</code> does not produce warnings. <code>MyClass2</code> does.</p> <p>I know how to stop the warning message: Change the type hint from <code>int|None</code> to <code>int</code>.</p> <pre><code>@dataclass class MyClass2(): data: int # no `|None` here ! </code></pre> <p>Why does that work?</p> <p>If I change the type hint from <code>int|None</code>, it will still serialize correctly but the warnings are no longer produced. Serializing <code>None</code> also works, even without the option of <code>None</code> as a type hint.</p>
<python><python-jsons>
2024-03-10 11:58:01
1
18,579
user2138149
78,135,624
12,314,521
Correct way to do multiprocessing with global variable in Python
<p>I'm working on a large dataset; to make my function work I have to divide the dataset and do the calculation in batches. Here is my code:</p> <pre><code>batch_size = 128 results = [] for i in range(0, len(query), batch_size): result = linear_kernel(query[i:i+batch_size], dataset) results.append(result) </code></pre> <p>It takes about 5 hours to finish running.</p> <p>Now I want to do it with multiprocessing, so I define a job function (<code>query</code> and <code>dataset</code> are sparse matrices from a TF-IDF vectorizer):</p> <pre><code>def job(i): return linear_kernel(query[i: i+batch_size], dataset) with concurrent.futures.ProcessPoolExecutor() as executor: results = executor.map(job, tqdm(range(0, len(query), batch_size))) </code></pre> <p>Then the problems are:</p> <ul> <li><p>I don't know in what order the results come back from the executor; I guess the batches might be shuffled. Since I need to process the results afterwards, the row index of each result must match the row index of the <code>query</code> data. How can I do that? I don't know how to modify the output so that it keeps the row-index information <code>i</code>.</p> </li> <li><p>Secondly, is it okay to use the two variables <code>query</code> and <code>dataset</code> outside the scope of the function <code>job</code>? I don't know much about multiprocessing; if it runs on different CPUs, does it copy the data to each processor?</p> </li> </ul>
<python><multiprocessing>
2024-03-10 11:08:15
1
351
jupyter
78,135,563
12,439,683
Why does overloading __new__ with singledispatchmethod not differentiate by type and always calls the initial method?
<p>I want my <code>__new__</code> method to behave differently in some cases and wanted to split it into overloaded functions with <a href="https://docs.python.org/3/library/functools.html#functools.singledispatchmethod" rel="nofollow noreferrer"><code>singledispatchmethod</code></a>.<br /> For other methods it works like expected. However for <code>__new__</code> it does not work, the overloading functions are never called. What is the reason for that?</p> <pre class="lang-py prettyprint-override"><code>from functools import singledispatchmethod class Foo: @singledispatchmethod def __new__(cls, arg1, **kwargs): return &quot;default call&quot; @__new__.register(int) def _(cls, arg1, **kwargs): return &quot;called with int &quot; + str(arg1) print(Foo(&quot;hi&quot;)) # default call print(Foo(1)) # default call </code></pre> <hr /> <p>As an experiment also used <code>singledispatch</code> instead but without success.</p>
<python><initialization><overloading><functools><single-dispatch>
2024-03-10 10:45:13
1
5,101
Daraan
78,135,417
7,323,032
Recursive Linked List Reversal Hangs with Print Statements
<p>I'm working on a Python program that reverses a linked list using recursion. I've implemented the reversal function and added print statements to help me understand the process. However, when I include certain print statements, the program hangs and doesn't complete. The reversal seems to work fine without those print statements.</p> <pre class="lang-py prettyprint-override"><code>class ListNode: def __init__(self, val=0, next=None): self.val = val self.next = next def __repr__(self): next = self.next val = self.val out = f&quot;ListNode({val}, &quot; count_paren = 1 while next is not None: val = next.val out += f&quot;ListNode({val}, &quot; next = next.next count_paren += 1 out += f&quot;{next}&quot; out += count_paren * &quot;)&quot; return out def reverse(head): if not head or not head.next: return head print(&quot;1 HEAD&quot;, head) new_head = reverse(head.next) print(&quot;2 NEW_HEAD&quot;, new_head) head.next.next = head print(&quot;3 HEAD&quot;, head) print(&quot;4 NEW_HEAD&quot;, new_head) head.next = None print(&quot;5 HEAD&quot;, head) print(&quot;6 NEW_HEAD&quot;, new_head) return new_head lst = ListNode(1, ListNode(2, ListNode(3, ListNode(4, ListNode(5))))) print(lst) print(reverse(lst)) </code></pre> <p>Without the print statements:</p> <pre><code> print(&quot;3 HEAD&quot;, head) print(&quot;4 NEW_HEAD&quot;, new_head) </code></pre> <p>the reversal works fine. However, if I uncomment those lines, the program hangs.</p> <p>I would appreciate any insights into why this is happening and how I can modify the print statements to understand the process better without causing the hang. I am using python version of <code>3.11.2</code>.</p>
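A hedged explanation worth checking: right after `head.next.next = head`, the list temporarily contains a two-node cycle, and the custom `__repr__` follows `next` pointers until it reaches `None`, so the prints at steps 3 and 4 can never finish. The cycle itself can be shown with a stripped-down node class, without any infinite printing:

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

head = Node(1, Node(2))
head.next.next = head   # the suspect step: node 2 now points back at node 1

# following .next from head never reaches None, so any repr that walks
# the list until None cannot terminate
assert head.next.next is head

# confirm the cycle without looping forever (Floyd's tortoise and hare)
slow = fast = head
has_cycle = False
while fast is not None and fast.next is not None:
    slow, fast = slow.next, fast.next.next
    if slow is fast:
        has_cycle = True
        break
assert has_cycle
```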
<python><recursion><linked-list>
2024-03-10 09:50:00
4
400
strboul
78,135,361
13,801,302
ValueError: 4.39.0.dev0 is not valid SemVer string
<p>I got the following error <code>ValueError: 4.39.0.dev0 is not valid SemVer string</code> in my code:</p> <pre class="lang-py prettyprint-override"><code>from GLiNER.gliner_ner import GlinerNER gli = GlinerNER() </code></pre> <p>Complete error message:</p> <pre class="lang-py prettyprint-override"><code>File ~/.../GLiNER/gliner_ner.py:8, in GlinerNER.__init__(self, labels) 7 def __init__(self, labels = [&quot;date&quot;,&quot;time&quot;, &quot;club&quot;, &quot;league&quot;]): ----&gt; 8 self.model = GLiNER.from_pretrained(&quot;urchade/gliner_base&quot;) 9 self.labels = labels File /opt/conda/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:118, in validate_hf_hub_args.&lt;locals&gt;._inner_fn(*args, **kwargs) 115 if check_use_auth_token: 116 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs) --&gt; 118 return fn(*args, **kwargs) File /opt/conda/lib/python3.10/site-packages/huggingface_hub/hub_mixin.py:157, in ModelHubMixin.from_pretrained(cls, pretrained_model_name_or_path, force_download, resume_download, proxies, token, cache_dir, local_files_only, revision, **model_kwargs) 154 config = json.load(f) 155 model_kwargs.update({&quot;config&quot;: config}) --&gt; 157 return cls._from_pretrained( 158 model_id=str(model_id), 159 revision=revision, 160 cache_dir=cache_dir, 161 force_download=force_download, 162 proxies=proxies, 163 resume_download=resume_download, 164 local_files_only=local_files_only, 165 token=token, 166 **model_kwargs, 167 ) File ~/.../GLiNER/model.py:355, in GLiNER._from_pretrained(cls, model_id, revision, cache_dir, force_download, proxies, resume_download, local_files_only, token, map_location, strict, **model_kwargs) 353 model = cls(config) 354 state_dict = torch.load(model_file, map_location=torch.device(map_location)) --&gt; 355 model.load_state_dict(state_dict, strict=strict, 356 #assign=True 357 ) 358 model.to(map_location) 359 return model File 
/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:2027, in Module.load_state_dict(self, state_dict, strict) 2020 out = hook(module, incompatible_keys) 2021 assert out is None, ( 2022 &quot;Hooks registered with ``register_load_state_dict_post_hook`` are not&quot; 2023 &quot;expected to return new values, if incompatible_keys need to be modified,&quot; 2024 &quot;it should be done inplace.&quot; 2025 ) -&gt; 2027 load(self, state_dict) 2028 del load 2030 if strict: File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:2015, in Module.load_state_dict.&lt;locals&gt;.load(module, local_state_dict, prefix) 2013 child_prefix = prefix + name + '.' 2014 child_state_dict = {k: v for k, v in local_state_dict.items() if k.startswith(child_prefix)} -&gt; 2015 load(child, child_state_dict, child_prefix) 2017 # Note that the hook can modify missing_keys and unexpected_keys. 2018 incompatible_keys = _IncompatibleKeys(missing_keys, unexpected_keys) File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:2015, in Module.load_state_dict.&lt;locals&gt;.load(module, local_state_dict, prefix) 2013 child_prefix = prefix + name + '.' 2014 child_state_dict = {k: v for k, v in local_state_dict.items() if k.startswith(child_prefix)} -&gt; 2015 load(child, child_state_dict, child_prefix) 2017 # Note that the hook can modify missing_keys and unexpected_keys. 
2018 incompatible_keys = _IncompatibleKeys(missing_keys, unexpected_keys) File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:2009, in Module.load_state_dict.&lt;locals&gt;.load(module, local_state_dict, prefix) 2007 def load(module, local_state_dict, prefix=''): 2008 local_metadata = {} if metadata is None else metadata.get(prefix[:-1], {}) -&gt; 2009 module._load_from_state_dict( 2010 local_state_dict, prefix, local_metadata, True, missing_keys, unexpected_keys, error_msgs) 2011 for name, child in module._modules.items(): 2012 if child is not None: File /opt/conda/lib/python3.10/site-packages/flair/embeddings/transformer.py:1166, in TransformerEmbeddings._load_from_state_dict(self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs) 1163 def _load_from_state_dict( 1164 self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs 1165 ): -&gt; 1166 if transformers.__version__ &gt;= Version(4, 31, 0): 1167 assert isinstance(state_dict, dict) 1168 state_dict.pop(f&quot;{prefix}model.embeddings.position_ids&quot;, None) File /opt/conda/lib/python3.10/site-packages/semver/version.py:51, in _comparator.&lt;locals&gt;.wrapper(self, other) 49 if not isinstance(other, comparable_types): 50 return NotImplemented ---&gt; 51 return operator(self, other) File /opt/conda/lib/python3.10/site-packages/semver/version.py:481, in Version.__le__(self, other) 479 @_comparator 480 def __le__(self, other: Comparable) -&gt; bool: --&gt; 481 return self.compare(other) &lt;= 0 File /opt/conda/lib/python3.10/site-packages/semver/version.py:396, in Version.compare(self, other) 394 cls = type(self) 395 if isinstance(other, String.__args__): # type: ignore --&gt; 396 other = cls.parse(other) 397 elif isinstance(other, dict): 398 other = cls(**other) File /opt/conda/lib/python3.10/site-packages/semver/version.py:646, in Version.parse(cls, version, optional_minor_and_patch) 644 match = 
cls._REGEX.match(version) 645 if match is None: --&gt; 646 raise ValueError(f&quot;{version} is not valid SemVer string&quot;) 648 matched_version_parts: Dict[str, Any] = match.groupdict() 649 if not matched_version_parts[&quot;minor&quot;]: ValueError: 4.39.0.dev0 is not valid SemVer string </code></pre> <p>That's the transformer version:</p> <pre class="lang-py prettyprint-override"><code>Name: transformers Version: 4.38.2 Summary: State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow Home-page: https://github.com/huggingface/transformers Author: The Hugging Face team (past and future) with the help of all our contributors (https://github.com/huggingface/transformers/graphs/contributors) Author-email: transformers@huggingface.co License: Apache 2.0 License Location: /opt/conda/lib/python3.10/site-packages Requires: filelock, huggingface-hub, numpy, packaging, pyyaml, regex, requests, safetensors, tokenizers, tqdm Required-by: flair, sentence-transformers, transformer-smaller-training-vocab Note: you may need to restart the kernel to use updated packages. </code></pre> <p>I don't know, what the cause is and I hope you can help to find the cause.</p> <p>Thanks in advance.</p>
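For context on the error text itself: `4.39.0.dev0` is a PEP 440 development release (typically produced by a source install of transformers), and the traceback shows flair handing `transformers.__version__` to the strict `semver` parser, which only accepts `major.minor.patch` strings. PEP 440-aware parsers have no trouble with it:

```python
from packaging.version import Version

dev = Version("4.39.0.dev0")      # valid PEP 440; not a valid SemVer string
assert dev.is_devrelease
assert dev < Version("4.39.0")    # dev releases sort before the final release
assert Version("4.38.2") >= Version("4.31.0")
```

So the comparison flair wants to make is well-defined; it is only the `semver` library that rejects the string.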
<python><pytorch><huggingface-transformers><large-language-model><huggingface>
2024-03-10 09:32:26
1
621
Christian01
78,135,082
6,515,755
How to connect aioboto3 with yandex s3
<p>I have Yandex S3 storage (<a href="https://cloud.yandex.ru/ru/docs/storage/s3/?from=int-console-help-center-or-nav" rel="nofollow noreferrer">https://cloud.yandex.ru/ru/docs/storage/s3/?from=int-console-help-center-or-nav</a>). How do I properly set it up for use with the Python aioboto3 library?</p>
<python><amazon-s3><yandex><aiobotocore>
2024-03-10 07:45:40
1
12,736
Ryabchenko Alexander
78,134,882
9,104,399
How to download structure (.cif) file from materialsproject.org using python
<p>I want to download cif file from <a href="https://next-gen.materialsproject.org" rel="nofollow noreferrer">https://next-gen.materialsproject.org</a> using python. I used the following code</p> <pre><code>from pymatgen.ext.matproj import MPRester # Initialize MPRester with your Materials Project API key mpr = MPRester(&quot;*********&quot;) # Specify the Materials Project ID (MPID) of the material you want to download the CIF for mpid = &quot;mp-165&quot; # Example: Si crystal # Download the structure corresponding to the MPID structure = mpr.get_structure_by_material_id(mpid) # Write the structure to a CIF file cif_filename = f&quot;{mpid}.cif&quot; structure.to(fmt=&quot;cif&quot;, filename=cif_filename) print(f&quot;CIF file downloaded and saved as {cif_filename}&quot;) </code></pre> <p>When I run this code it says &quot;Response {&quot;error&quot;: &quot;You are using deprecated API endpoints. Please read our documentation (<a href="https://docs.materialsproject.org" rel="nofollow noreferrer">https://docs.materialsproject.org</a>) and upgrade to the latest version of the mp-api client (<a href="https://pypi.org/project/mp-api/" rel="nofollow noreferrer">https://pypi.org/project/mp-api/</a>).&quot;, &quot;version&quot;: &quot;blocked&quot;} &quot;.</p> <p>It seems materials project website updated their API. 
I tried the new API as well using the code given in their website (<a href="https://docs.materialsproject.org/downloading-data/using-the-api/examples" rel="nofollow noreferrer">https://docs.materialsproject.org/downloading-data/using-the-api/examples</a>):</p> <pre><code>from mp_api.client import MPRester with MPRester(&quot;your_api_key_here&quot;) as mpr: docs = mpr.materials.summary.search(material_ids=[&quot;mp-149&quot;], fields=[&quot;structure&quot;]) structure = docs[0].structure # -- Shortcut for a single Materials Project ID: structure = mpr.get_structure_by_material_id(&quot;mp-149&quot;) </code></pre> <p>However I am still getting the same error. I am using python 3.8.10.</p>
<python>
2024-03-10 06:11:56
1
325
Alam
78,134,640
6,077,239
Polars: expanding window at fixed points
<p>I have a <code>Polars</code> dataframe with 3 columns - group, date, value. The goal is to calculate <code>cumsum(value)</code> for each expanding window ends at the first time point at each year for each <code>group</code>.</p> <p>For example, for the following sample dataframe:</p> <pre><code>import polars as pl df = pl.DataFrame( { &quot;date&quot;: [ &quot;2020-03-01&quot;, &quot;2020-05-01&quot;, &quot;2020-11-01&quot;, &quot;2021-01-01&quot;, &quot;2021-02-03&quot;, &quot;2021-06-08&quot;, &quot;2022-01-05&quot;, &quot;2020-07-01&quot;, &quot;2020-09-01&quot;, &quot;2022-01-05&quot;, &quot;2023-02-04&quot;, ], &quot;group&quot;: [1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2], &quot;value&quot;: [1, 2, 3, 4, 5, 6, 7, 1, 2, 3, 4], }, ).with_columns(pl.col(&quot;date&quot;).str.strptime(pl.Date)) </code></pre> <p>The result I am looking for is:</p> <pre><code>┌────────────┬───────┬───────┐ │ date ┆ group ┆ value │ │ --- ┆ --- ┆ --- │ │ date ┆ i64 ┆ i64 │ ╞════════════╪═══════╪═══════╡ │ 2020-03-01 ┆ 1 ┆ 1 │ │ 2021-01-01 ┆ 1 ┆ 10 │ │ 2022-01-05 ┆ 1 ┆ 28 │ │ 2020-07-01 ┆ 2 ┆ 1 │ │ 2022-01-05 ┆ 2 ┆ 6 │ │ 2023-02-04 ┆ 2 ┆ 10 │ └────────────┴───────┴───────┘ </code></pre> <p>Basically, at the first date of each year, calculate the cumulative sum of <code>value</code> from beginning up to (including) this particular date, for each group respectively.</p> <p>I tried <code>group_by_dynamic</code> and <code>rolling</code>, but still unable to find a concise and clear way to solve this problem.</p> <p>Any idea is welcome. Thanks!</p>
<python><python-polars>
2024-03-10 03:43:43
2
1,153
lebesgue
78,134,536
16,459,035
Function to search sequential values in a list
<p>I have the following list:</p> <pre><code>list = [ [2, 'A', '1'], [6, 'A', '2'], [6, 'S', '3'], [9, 'A', '4'], [6, 'A', '5'], [6, 'A', '6'], [6, 'S', '7'], [9, 'A', '8'], [9, 'A', '9'], [6, 'A', '10'], [10, 'S', '11'], [13, 'S', '12'], [13, 'S', '13'], [16, 'A', '14'] ] </code></pre> <p>The first position of list is the level, the second the type of node and the third is the value.</p> <p>I need to concatenate all values that have a higher level than before (i-1). I can have multiple A and S in each level, but every time I find a S type I need to still search until I find a higher level of S or A type. My desired output:</p> <pre><code>['1.2', '1.3.4', '1.5', '1.6', '1.7.8', '1.7.9', '1.10.11.13.14'] </code></pre> <p>I tried to achieve this output with the following code:</p> <pre><code>def concat_levels(levels): result = [] stack = [] for level in levels: num, typ, name = level while stack and stack [-1][0] &gt;= num: stack.pop() if typ == 'A': stack.append((num,name)) elif typ == 'S': if stack: current_path = '.'.join([x[1] for x in stack]) result.append(current_path + '.' + name) return result </code></pre> <p>However, when I run my function I got the following output:</p> <pre><code>['1.3', '1.7', '1.10.11', '1.10.12', '1.10.13'] </code></pre> <p>Apparently my function is appending only on <code>type == 'S'</code>, but why?</p> <p>CONTEXT: This list is an abstracted version of a pyspark dataframe schema where A represents array type and S represents Structure type. Why 12 is not in the output? Because 12 is a struct (S type). It’s 1.10.11.13.14 Because the level 16 represents that value 14 is inside the value and level 13 structure. Since the schema is ordered, every time there is two S types in the same level, if an A type appears after a S type, then the last S type should be considered (always respecting the level rule)</p>
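On the "why": `result.append` lives only inside the `elif typ == 'S'` branch, so paths are emitted exclusively at S nodes — and S nodes are never pushed onto the stack, so they vanish from later paths. One reading of the stated rules that reproduces the sample output exactly: push every node (A and S) after popping entries at the same or deeper level, and emit the current path only for an A node that has no child following it (i.e. the next item's level is not higher). A sketch under that interpretation (also avoiding `list` as a variable name, which shadows the builtin):

```python
def concat_levels(levels):
    result = []
    stack = []  # (level, value) pairs on the current path
    for i, (num, typ, name) in enumerate(levels):
        # climb back up: drop entries at the same or a deeper level
        while stack and stack[-1][0] >= num:
            stack.pop()
        stack.append((num, name))  # push S nodes too, so they appear in later paths
        nxt = levels[i + 1][0] if i + 1 < len(levels) else None
        # emit a path only for an A node with no child following it
        if typ == 'A' and (nxt is None or nxt <= num):
            result.append('.'.join(n for _, n in stack))
    return result

data = [
    [2, 'A', '1'], [6, 'A', '2'], [6, 'S', '3'], [9, 'A', '4'],
    [6, 'A', '5'], [6, 'A', '6'], [6, 'S', '7'], [9, 'A', '8'],
    [9, 'A', '9'], [6, 'A', '10'], [10, 'S', '11'], [13, 'S', '12'],
    [13, 'S', '13'], [16, 'A', '14'],
]
print(concat_levels(data))
# ['1.2', '1.3.4', '1.5', '1.6', '1.7.8', '1.7.9', '1.10.11.13.14']
```

The "12 is not in the output" behavior falls out of the pop loop: when the second level-13 S node arrives, it replaces the first on the stack, matching the "consider the last S type at the same level" rule.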
<python><list>
2024-03-10 02:36:03
1
671
OdiumPura
78,134,493
1,592,380
Zoom to KML layer with ipyleaflet
<p>I'm getting started with ipyleaflet with jupyter. I'd like to add a KML file layer to a basemap and zoom to that layer. The Kml is small, probably only a 10-50 acres. In qgis there is a zoom to layer function. Is there an easy way to do this.</p> <p>If not I plan on parsing the kml file for a point and setting that as the center point with a high zoom level</p> <p>The KML file:</p> <pre><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot; ?&gt; &lt;kml xmlns=&quot;http://www.opengis.net/kml/2.2&quot;&gt; &lt;Document id=&quot;root_doc&quot;&gt; &lt;Schema name=&quot;2963760840&quot; id=&quot;2963760840&quot;&gt; &lt;SimpleField name=&quot;timestamp&quot; type=&quot;string&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;begin&quot; type=&quot;string&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;end&quot; type=&quot;string&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;altitudeMode&quot; type=&quot;string&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;tessellate&quot; type=&quot;int&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;extrude&quot; type=&quot;int&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;visibility&quot; type=&quot;int&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;drawOrder&quot; type=&quot;int&quot;&gt;&lt;/SimpleField&gt; &lt;SimpleField name=&quot;icon&quot; type=&quot;string&quot;&gt;&lt;/SimpleField&gt; &lt;/Schema&gt; &lt;Folder&gt;&lt;name&gt;2963760840&lt;/name&gt; &lt;Placemark&gt; &lt;Style&gt;&lt;LineStyle&gt;&lt;color&gt;ff0000ff&lt;/color&gt;&lt;/LineStyle&gt;&lt;PolyStyle&gt;&lt;fill&gt;0&lt;/fill&gt;&lt;/PolyStyle&gt;&lt;/Style&gt; &lt;ExtendedData&gt;&lt;SchemaData schemaUrl=&quot;#2963760840&quot;&gt; &lt;SimpleData name=&quot;tessellate&quot;&gt;-1&lt;/SimpleData&gt; &lt;SimpleData name=&quot;extrude&quot;&gt;0&lt;/SimpleData&gt; &lt;SimpleData name=&quot;visibility&quot;&gt;-1&lt;/SimpleData&gt; &lt;/SchemaData&gt;&lt;/ExtendedData&gt; 
&lt;Polygon&gt;&lt;outerBoundaryIs&gt;&lt;LinearRing&gt;&lt;coordinates&gt;-88.4317405175818,31.6558800962605 -88.431737410907,31.6559161822374 -88.4317340478576,31.6559522520483 -88.4317304294607,31.65598830298 -88.4317265546618,31.6560243350402 -88.4317224234424,31.6560603464253 -88.4317180368385,31.656096335324 -88.4317133937868,31.6561323008421 -88.4317084963772,31.6561682411606 -88.4317033435555,31.6562041562871 -88.4316979352941,31.6562400435163 -88.4316922726381,31.6562759019385 -88.4316863545151,31.6563117297579 -88.4316801819788,31.6563475269666 -88.4316737560565,31.6563832908516 -88.4316670756934,31.6564190214205 -88.4316601419166,31.6564547159601 -88.4316529536625,31.6564903735763 -88.4316455130305,31.6565259933519 -88.431637818957,31.6565615743927 -88.4316298724689,31.6565971139855 -88.431621673566,31.6566326121303 -88.4316132232843,31.6566680670158 -88.4316045205513,31.6567034768459 -88.4315955664119,31.6567388407112 -88.431586360857,31.6567741577097 -88.4315817286416,31.6567915607419 -88.4311466897599,31.6584809674529 -88.4311356263598,31.658529215528 -88.431124815372,31.6585775059521 -88.431114256788,31.6586258378234 -88.4311039516355,31.658674208429 -88.4310938999238,31.6587226186706 -88.4310841005811,31.6587710667524 -88.431074554662,31.6588195526667 -88.4310652631943,31.6588680737005 -88.4310562240697,31.6589166298692 -88.4310474404426,31.6589652202479 -88.4310389101867,31.6590138430485 -88.4310306343658,31.6590624991651 -88.4310226129532,31.6591111858924 -88.4310148459404,31.6591599023285 -88.4310073343821,31.6592086484659 -88.4310000772153,31.6592574234104 -88.4309930754678,31.6593062244489 -88.430986329149,31.6593550524834 -88.4309798382413,31.6594039057101 -88.4309736027451,31.6594527841292 -88.4309720419662,31.6594653415 -88.4309644366008,31.6595394733955 -88.4309570876343,31.6596136241626 -88.4309499950584,31.6596877928994 -88.4309431588736,31.659761979606 -88.4309365790716,31.6598361833805 -88.4309302545895,31.6599104033289 
-88.4309241875368,31.6599846394358 -88.4309183768417,31.6600588899053 -88.4309128225048,31.6601331547375 -88.4309075245268,31.6602074339324 -88.4309024839445,31.6602817256786 -88.4308976996953,31.6603560290821 -88.4308931717888,31.6604303450447 -88.4308889012709,31.660504672657 -88.43088488707,31.6605790101229 -88.4308831265363,31.6606132013205 -88.430880160709,31.6606753008115 -88.4308774511835,31.6607374083529 -88.4308749990237,31.6607995248388 -88.4308728042029,31.6608616475637 -88.4308708656675,31.6609237765354 -88.4308691844632,31.6609859108443 -88.4308677605907,31.6610480504906 -88.4308665929691,31.6611101927763 -88.4308656826711,31.6611723394975 -88.4308650296705,31.6612344879487 -88.430864901764,31.6612506351474 -88.4326782034786,31.6612204116202 -88.432897502907,31.6542440681409 -88.4317454858576,31.6541186481987 -88.4317646239949,31.6548099837846 -88.431765521614,31.65484436124 -88.4317663085529,31.6548787413071 -88.4317669858568,31.6549131230762 -88.4317675535255,31.6549475065475 -88.4322256604991,31.6549477172071 -88.4321986103415,31.6558080866836 -88.4317459607727,31.6558078776201 -88.4317433668281,31.6558439941254 -88.4317405175818,31.6558800962605&lt;/coordinates&gt;&lt;/LinearRing&gt;&lt;/outerBoundaryIs&gt;&lt;/Polygon&gt; &lt;/Placemark&gt; &lt;/Folder&gt; &lt;/Document&gt;&lt;/kml&gt; </code></pre>
<python><jupyter-notebook><ipyleaflet>
2024-03-10 01:59:47
1
36,885
user1592380
78,134,381
1,030,287
Does Python's shelve module allocate space in chunks of a certain size?
<p>I stored some pandas dataframes in a shelve and then stored the same dataframes but potentially containing a bit more data (maybe a few % more). The second shelve is about twice the size of the first one.</p> <p>Does the <code>shelve</code> module pre-allocate space in chunks by any chance? My first shelve is ~ 100MB the second one is ~ 200MB. The increase in the size of the shelves does not correspond to the increase in the size of the underlying data. Could it be that <code>shelve</code> can only allocate in multiples of 100MB? So if I need 90MB it will allocate 100MB and if I need 110MB, it will allocate 200MB?</p>
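One relevant fact: `shelve` itself does no pre-allocation — it delegates storage to whichever `dbm` backend is available, and hash-table backends typically grow their bucket/page structures in large jumps rather than byte-for-byte with the data. That is one plausible explanation for a ~100 MB → ~200 MB step from a few-percent data increase; it is backend behavior, not a shelve quota. A sketch for checking which backend your shelves actually use:

```python
import dbm
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo_shelf")

with shelve.open(path) as shelf:
    shelf["key"] = list(range(1000))

# shelve delegates storage to a dbm backend; growth behavior is the backend's
backend = dbm.whichdb(path)
print(backend)  # e.g. 'dbm.gnu', 'dbm.ndbm', 'dbm.dumb', or 'dbm.sqlite3'
```

If the two shelves were created on machines with different default backends, that alone could explain very different file sizes for similar data.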
<python><python-3.x><shelve>
2024-03-10 00:33:49
0
12,343
s5s
78,134,317
1,592,380
Saving data from 1 cell to the next in Jupyter notebook
<p><a href="https://i.sstatic.net/h6T2n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h6T2n.png" alt="enter image description here" /></a></p> <p>I'm working in a Jupyter notebook. I want to open a kml file in 1 cell and do some analysis in the next. After selecting a file , the first cell has:</p> <pre><code># Print the selected path, filename, or both print(fc.selected_path) print(fc.selected_filename) print(fc.selected) global mytext with open(fc.selected, 'r') as f: print(f.read()) mytext = f.read() </code></pre> <p>This prints out the contents of the file as expected.</p> <p>In the next cell , I try to print out my text , but I get an empty string.</p> <p>How do I pass the contents of the file to the my text variable in the next cell?</p>
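Independent of the cross-cell question, the snippet reads the file twice: the first `f.read()` inside `print()` consumes the whole file, so the second `f.read()` returns an empty string — which is exactly why `mytext` is empty in the next cell. (The `global mytext` at cell top level is also redundant; cells already run at module scope.) A self-contained demonstration of the pitfall and the fix:

```python
import os
import tempfile

# stand-in for the chosen KML file
path = os.path.join(tempfile.mkdtemp(), "demo.kml")
with open(path, "w") as f:
    f.write("<kml>demo</kml>")

with open(path) as f:
    first = f.read()    # consumes the whole file...
    second = f.read()   # ...so a second read() returns ''
print(repr(second))

with open(path) as f:
    mytext = f.read()   # read once into the variable
print(mytext)           # then print the variable, not another read()
```

In the notebook, replace `fc.selected`'s double read with a single `mytext = f.read()` followed by `print(mytext)`, and the next cell will see the content.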
<python><jupyter-notebook>
2024-03-10 00:01:23
1
36,885
user1592380
78,134,179
1,922,959
Python list index out of range error that is baffling me
<p>I'm trying to put together a flask app that works for a database as a final project for a class. I'm at the very end, and am encountering an error that baffles me. I'm pretty new to Python so my hope is that there is something very basic I'm missing here...</p> <p>Since it's flask, there are a bunch of <code>html</code> files that go with it, so it's hard to reproduce an MWE here. Here's the basic issue though:</p> <p>In this function</p> <pre><code>@app.route('/view_meal', methods=['GET', 'POST']) def view_meal(): if request.method == 'POST': selected_meal_id = request.form['selected_meal'] #get all the food_items associated with this meal rows = display_table('meal', where_condition=f&quot;id = '{selected_meal_id}'&quot;) food1 = display_table('food_item', where_condition=f&quot;id = '{rows[4]}'&quot;) food2 = display_table('food_item', where_condition=f&quot;id = '{rows[5]}'&quot;) food3 = display_table('food_item', where_condition=f&quot;id = '{rows[6]}'&quot;) food4 = display_table('food_item', where_condition=f&quot;id = '{rows[7]}'&quot;) food5 = display_table('food_item', where_condition=f&quot;id = '{rows[8]}'&quot;) foods = [food1, food2, food3, food4, food5] return render_template('display_meal.html', rows = rows, foods=foods) else: rows = display_table('meal') return render_template('view_meal.html', items=rows) </code></pre> <p>In my first iteration, I was just grabbing the id_numbers associated with the different foods and that worked fine. So I know that I'm correctly getting the <code>selected_meal_id</code> from the form, and that the line</p> <pre><code>rows = display_table('meal', where_condition=f&quot;id = '{selected_meal_id}'&quot;) </code></pre> <p>is correctly querying the database. 
The <code>display_table</code> function is just a <code>SELECT * FROM</code> with the table name and <code>WHERE</code> clause passed in.</p> <p>As a further test, in a separate small Python file I queried that database for the meal_id I'm using</p> <pre><code>result = cur.execute(&quot;SELECT * FROM meal WHERE id='PCKCMH0S'&quot;).fetchall() print(result) </code></pre> <p>and this is the result</p> <blockquote> <p>[('PCKCMH0S', 'English Breakfast', 'Fatty', '3/9/2024', 'BQD3MPHM', '77WWB0BQ', 'QGH6DV8S', 'I4VD1IE7', 'QGH6DV8S')]</p> </blockquote> <p>So that is what should be in <code>rows</code></p> <p>But the line</p> <pre><code>food1 = display_table('food_item', where_condition=f&quot;id = '{rows[4]}'&quot;) </code></pre> <p>is throwing a <code>list index out of range</code> with the <code>rows[4]</code> underlined. What am I missing here? There should be nine elements in <code>rows</code></p>
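If `display_table` wraps `cursor.fetchall()` (an assumption — its body isn't shown), it returns a *list of row tuples*. `rows[4]` therefore asks for the fifth matching *row*, and the WHERE clause matched only one, hence the `IndexError`. Unwrap the single row first:

```python
# rows, as returned by a fetchall()-style helper: one matching row inside a list
rows = [('PCKCMH0S', 'English Breakfast', 'Fatty', '3/9/2024',
         'BQD3MPHM', '77WWB0BQ', 'QGH6DV8S', 'I4VD1IE7', 'QGH6DV8S')]

meal = rows[0]          # unwrap the single row first
food_ids = meal[4:9]    # then take the five food-item ids
print(food_ids)
```

Separately, interpolating `selected_meal_id` into SQL with an f-string invites SQL injection; parameterized queries (`cur.execute("... WHERE id = ?", (selected_meal_id,))`) are the safer pattern.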
<python><sqlite><flask>
2024-03-09 22:45:27
1
1,299
jerH
78,133,939
12,425,893
Using PyMuPDF fitz to Swap the Overall Font in a Document
<p>I'm attempting to use fitz in a Python script to swap the font in a PDF document, and possibly change its size by an increment or ratio (in order to keep the elements to scale in respect to each other).</p> <p>Also, I suck at Python.</p> <p>Here's my attempt:</p> <pre><code>def change_font(input_pdf, output_pdf, new_font): try: pdf_document = fitz.open(input_pdf) for page_number in range(len(pdf_document)): # get page (I think this is where the error is coming from) page = pdf_document.load_page(page_number) # get text from page text = page.get_page_text() # change font of entire page page.insert_text((0, 0), text, fontname=new_font) # (not sure how the (0, 0) position is going to work out...) # save pdf_document.save(output_pdf) except Exception as e: print(f&quot;An error occurred: {e}&quot;) # to keep original file safe finally: if 'pdf_document' in locals(): pdf_document.close() # Input parameters input_pdf = input('Input file name: ') output_pdf = input('New document name: ') new_font = input('New font name: ') change_font(input_pdf, output_pdf, new_font) </code></pre> <p>After a few hours of work, I'm still getting the following output, after having manually specified all three parameters:</p> <p><code>An error occurred: 'Document' object has no attribute 'load_page'</code></p> <p>I don't necessarily need someone to rewrite the whole thing. My real question is simply why this error is happening, as I've been trying to fix it for a while now. If I manage to fix that, I can continue developing the script by myself.</p> <p>I'm running Python 3.9.2 on Debian Bookworm.</p>
<python><pdf>
2024-03-09 21:03:53
0
301
GPWR
78,133,915
1,870,832
Why won't this duckdb query of s3/parquet data save 'EXPLAIN ANALYZE' profiling info?
<p>(UPDATED 3/10)</p> <p>Based on <a href="https://duckdb.org/dev/profiling" rel="nofollow noreferrer">this duckdb docs page on profiling</a>, I would have thought that my code snippet below should save a json file of profiling/timing stats to a <code>query_profile.json</code>, which I should be able to use to generate an html file with <code>python -m duckdb.query_graph query_profile.json</code></p> <p>However, my code below (reproducable as it just hits a public s3 bucket, though you'll need your own aws creds in your own .env file) does not produce such a <code>query_profile.json</code> file:</p> <pre><code>import duckdb import s3fs from dotenv import dotenv_values # load environment variables from .env file ENV = dotenv_values(&quot;.env&quot;) # Configurable query params TAXI_COLOR = &quot;yellow&quot; YEAR = 2023 PROFILE = True # where to save result (data) locally dbfile = 'taxi_data.duckdb' # where to save profiling results profile_file = 'query_profile.json' # Define the S3 glob pattern to match the desired parquet files s3_glob_path = f&quot;s3://nyc-tlc/trip data/{TAXI_COLOR}_tripdata_{YEAR}*.parquet&quot; # query the s3 parquet data using duckdb with duckdb.connect(database=dbfile) as con: # load extension required for reading from s3 con.execute(&quot;INSTALL 'httpfs';&quot;) con.execute(&quot;LOAD 'httpfs';&quot;) # Set the AWS credentials to access the S3 bucket con.execute(&quot;SET s3_region='us-east-1';&quot;) con.execute(f&quot;SET s3_access_key_id = '{ENV['AWS_ACCESS_KEY_ID']}';&quot;) con.execute(f&quot;SET s3_secret_access_key = '{ENV['AWS_SECRET_ACCESS_KEY']}';&quot;) # Enable profiling and save the profiling results directly to a file con.execute(f&quot;SET profiling_output='{profile_file}'&quot;) con.execute(&quot;SET profiling_mode='detailed'&quot;) # Execute the query to load and save the data directly to the specified DuckDB file tablename = f'{TAXI_COLOR}_tripdata_{YEAR}' ea = &quot;EXPLAIN ANALYZE &quot; if PROFILE else &quot;&quot; 
query = f&quot;&quot;&quot;{ea}CREATE OR REPLACE TABLE {tablename} AS SELECT * FROM read_parquet(['{s3_glob_path}']) &quot;&quot;&quot; print(query) con.execute(query) print(f&quot;Data saved to {dbfile} as {tablename}&quot;) print(f&quot;Profiling results saved to {profile_file}&quot;) </code></pre>
<python><amazon-web-services><amazon-s3><profiling><duckdb>
2024-03-09 20:55:26
1
9,136
Max Power
78,133,790
1,477,064
NiceGUI: Multiline Button
<p>How can I make a multiline button in nicegui?</p> <p><a href="https://i.sstatic.net/FK2qY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FK2qY.png" alt="enter image description here" /></a></p> <p>Quasar example just uses a <code>&lt;br&gt;</code> tag, but it gets escaped by nicegui.</p>
<python><nicegui>
2024-03-09 20:16:47
2
4,849
xvan
78,133,787
13,815,493
Could not locate a bind configured on SQL expression or this Session
<p>I'm trying to use <code>Flask-SQLAlchemy</code> with multiple databases. And although I specify <code>bind_key</code> for the <code>Table</code> and it works fine for <code>Model</code> and the new session, <code>db.session.execute</code> doesn't look up the bind key on the metadata associated with the table and throws an error:</p> <pre><code>$ python app.py model [] &lt;-- works new session [] &lt;-- works Traceback (most recent call last): File &quot;C:\Users\user\projects\flask-sqlalchemy\app.py&quot;, line 32, in &lt;module&gt; print('main session', db.session.execute(select(user)).all()) # &lt;- UnboundExecutionError File &quot;C:\Users\user\projects\flask-sqlalchemy\venv\lib\site-packages\sqlalchemy\orm\scoping.py&quot;, line 778, in execute return self._proxied.execute( File &quot;C:\Users\user\projects\flask-sqlalchemy\venv\lib\site-packages\sqlalchemy\orm\session.py&quot;, line 2306, in execute return self._execute_internal( File &quot;C:\Users\user\projects\flask-sqlalchemy\venv\lib\site-packages\sqlalchemy\orm\session.py&quot;, line 2179, in _execute_internal bind = self.get_bind(**bind_arguments) File &quot;C:\Users\user\projects\flask-sqlalchemy\venv\lib\site-packages\flask_sqlalchemy\session.py&quot;, line 78, in get_bind return super().get_bind(mapper=mapper, clause=clause, bind=bind, **kwargs) File &quot;C:\Users\user\projects\flask-sqlalchemy\venv\lib\site-packages\sqlalchemy\orm\session.py&quot;, line 2786, in get_bind raise sa_exc.UnboundExecutionError( sqlalchemy.exc.UnboundExecutionError: Could not locate a bind configured on SQL expression or this Session. 
</code></pre> <p>Here is a code example:</p> <pre><code>from flask import Flask from flask_sqlalchemy import SQLAlchemy from sqlalchemy.orm import scoped_session, sessionmaker from sqlalchemy import Column, Integer, String, select app = Flask(__name__) # app.config[&quot;SQLALCHEMY_DATABASE_URI&quot;] = &quot;sqlite:///main.sqlite&quot; app.config[&quot;SQLALCHEMY_BINDS&quot;] = {&quot;secondary&quot;: &quot;sqlite:///secondary.sqlite&quot;} db = SQLAlchemy(app) class User(db.Model): __bind_key__ = &quot;secondary&quot; user_id = Column(Integer, primary_key=True) user_name = Column(String) user = db.Table( &quot;user_table&quot;, Column(&quot;user_id&quot;, Integer, primary_key=True), Column(&quot;user_name&quot;, String), bind_key=&quot;secondary&quot;, ) with app.app_context(): db.create_all(&quot;secondary&quot;) print('model', db.session.execute(select(User)).all()) # &lt;- works new_session = scoped_session(sessionmaker(bind=db.engines['secondary'])) print('new session', new_session.execute(select(user)).all()) # &lt;- works print('main session', db.session.execute(select(user)).all()) # &lt;- UnboundExecutionError if __name__ == &quot;__main__&quot;: app.run(host=&quot;0.0.0.0&quot;, port=5000, debug=True) </code></pre> <p>You can need just do <code>pip install -U Flask-SQLAlchemy</code> and run <code>python app.py</code> to test.</p> <p>Tell me please am I doing something wrong or this is a bug?</p> <p>I thought I shouldn't create sessions for each database manually.</p> <p><a href="https://flask-sqlalchemy.palletsprojects.com/en/3.1.x/binds/#:%7E:text=Models%20and%20tables%20with%20a%20bind%20key%20will%20be%20registered%20with%20the%20corresponding%20metadata%2C%20and%20the%20session%20will%20query%20them%20using%20the%20corresponding%20engine." rel="nofollow noreferrer">Documentation</a></p> <p>Thanks.</p>
<python><sqlite><flask><sqlalchemy><flask-sqlalchemy>
2024-03-09 20:14:57
0
383
Andrew O.
78,133,591
2,518,602
Plotly Express Choropleth of Census Data Fails
<p>I am attempting to use <a href="https://plotly.com/python/plotly-express/" rel="nofollow noreferrer">Plotly Express</a> to create interactive choropleths of Census data which I retrieve using the <a href="https://github.com/censusdis/censusdis" rel="nofollow noreferrer">censusdis</a> package. This works for two of the variables which I am retrieving, but not the third. Here is my code which demonstrates the issue:</p> <pre><code>import plotly.express as px import censusdis.data as ced from censusdis.datasets import ACS5 #variable = 'B19013_001E' # Works - Median Household Income #variable = 'B25058_001E' # Works - Median Rent variable = 'B01001_001E' # Does not work! Total Population df = ced.download( dataset=ACS5, vintage=2022, download_variables=['NAME', variable], state='06', county='075', tract='*', with_geometry=True) df = df.set_index('NAME') print(df.head()) fig = px.choropleth_mapbox(df, geojson=df.geometry, locations=df.index, center={'lat': 37.74180915, 'lon': -122.38474831884692}, color=variable, color_continuous_scale=&quot;Viridis&quot;, mapbox_style=&quot;carto-positron&quot;, opacity=0.5, zoom=10) fig.update_layout(margin={&quot;r&quot;:0,&quot;t&quot;:0,&quot;l&quot;:0,&quot;b&quot;:0}) fig.show() </code></pre> <p>As I cycle through the variables, the resulting dataframes all appear similar, but the third one (<code>B01001_001E</code>) generates a scale but not a map:<a href="https://i.sstatic.net/fzEr2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzEr2.png" alt="enter image description here" /></a></p> <p>However, the geometry column looks fine (and, in fact, looks like the same as that returned for the other variables). I would appreciate any help understanding what the problem is and advice on how to fix it.</p>
<python><plotly><choropleth>
2024-03-09 19:11:19
1
2,023
Ari
78,133,261
2,386,605
Python package: `ImportError: cannot import name 'mod' from partially initialized module 'mypackage' (most likely due to a circular import)`
<p>I have a repo, which I would like to turn into a package</p> <pre><code>mypackage/ __init__.py multiplication.py mod/ __init__.py addition.py </code></pre> <p>The file <code>mod/addition.py</code> looks like</p> <pre><code>def add(a,b): return a+b </code></pre> <p>and the file <code>mod/__init__.py</code> is empty.</p> <p>The file <code>multiplication.py</code> looks like</p> <pre><code>def add(a,b): return a*b </code></pre> <p>Now when I have as <code>__init__.py</code> file</p> <pre><code>from . import multiplication </code></pre> <p>I can add least work with multiplication, when installing it in a Docker image.</p> <p>However, if it looks like</p> <pre><code>from . import multiplication from . import mod </code></pre> <p>I get</p> <pre><code>from . import mod ImportError: cannot import name 'mod' from partially initialized module 'mypackage' (most likely due to a circular import) (/usr/local/lib/python3.12/site-packages/mypackage/__init__.py) </code></pre> <p>Do you know, how what I have to do to be able to use all functions from <code>mypackage</code>?</p>
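The traceback points at `/usr/local/lib/python3.12/site-packages/mypackage/__init__.py` — the *installed* copy inside the Docker image. One likely cause (an assumption, since the build configuration isn't shown): the package metadata lists only `mypackage` and never the `mypackage.mod` subpackage, so at import time `from . import mod` finds nothing, and Python phrases the failure inside the still-initializing package as a circular-import error. Letting setuptools discover subpackages avoids this; the sketch below shows `find_packages()` picking up both levels on a hypothetical layout mirroring the question:

```python
import os
import tempfile

from setuptools import find_packages

# hypothetical layout mirroring the question, built in a temp dir
root = tempfile.mkdtemp()
for sub in ("mypackage", os.path.join("mypackage", "mod")):
    os.makedirs(os.path.join(root, sub), exist_ok=True)
    open(os.path.join(root, sub, "__init__.py"), "w").close()

print(sorted(find_packages(where=root)))  # ['mypackage', 'mypackage.mod']
```

In `setup.py` that means `packages=find_packages()` rather than a hard-coded `packages=["mypackage"]` (pyproject-based builds have equivalent auto-discovery). After rebuilding the image, check that `site-packages/mypackage/mod/` actually exists.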
<python>
2024-03-09 17:31:49
0
879
tobias
78,133,078
978,486
Run asyncio event loop embedded in thread
<p>I have a Qt application which uses pybind to embed python plugins providing some kind of handlers. From what I have read online gluing the event loops is nearly impossible. Now I wonder if it is possible to run an asyncio event loop in a c++ background thread and call gather in other threads (main or others, the scripts in c++ are called threaded, while the GIl serialized them again).</p> <p>I think the question boils down to the following: if the c++ thread runs an asyncio eventloop, does it hold the GIL or does it release it while it idles?</p> <p>If it holds the GIL I'll end up in a deadlock.</p> <p>If it releases the GIL in theory the threads in C++ could enter the python space and call gather. Which is nice.</p> <p>Then again this throws the question if gather locks the GIL, which would defeat the purpose because other c++ threads would not be able to enter the Python space and effectively it would behave the same as if I would not have used asyncio at all.</p>
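On the core question: an idle asyncio loop blocks in the OS selector with the GIL *released*, so other threads — including C++ threads entering Python via pybind11 — can acquire it. The supported cross-thread entry point is `asyncio.run_coroutine_threadsafe`, which returns a `concurrent.futures.Future`; if C++ code blocks on `fut.result()`, it must do so with the GIL released (`py::gil_scoped_release` in pybind11), or the loop thread can never run the coroutine and you get the feared deadlock. A pure-Python sketch of the pattern:

```python
import asyncio
import threading

# background thread owning the loop (stand-in for the C++ thread)
loop = asyncio.new_event_loop()
t = threading.Thread(target=loop.run_forever, daemon=True)
t.start()

async def handler(x):
    await asyncio.sleep(0.01)   # loop idles here with the GIL released
    return x * 2

# called from any other thread; returns a concurrent.futures.Future
fut = asyncio.run_coroutine_threadsafe(handler(21), loop)
print(fut.result(timeout=5))    # 42

loop.call_soon_threadsafe(loop.stop)
t.join(timeout=5)
```

So `run_coroutine_threadsafe` (not `gather` from a foreign thread — asyncio objects are not thread-safe) is the bridge, and the GIL is only held while Python bytecode actually runs, not while the loop or the future waits.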
<python><c++><qt><python-asyncio><pybind11>
2024-03-09 16:31:56
1
16,241
ManuelSchneid3r
78,132,998
22,208,056
Convert file to dict where the values are code, e.g. PovScore(1,2)
<p>I have following data (7MB) and it's stored in transpostion.txt:</p> <pre><code>{'0x2df54c975eb48346': [PovScore(Cp(+57), BLACK), 5], ...} </code></pre> <p>I did try</p> <pre class="lang-py prettyprint-override"><code>with open(&quot;transposition.txt&quot;, &quot;r&quot;) as file: x = file.readline() transposition = eval(x) del x </code></pre> <p>but the memory reaches 99% and then hangs. I did try with 1MB file, it immediately loads and set the string, but when print it's very slow (IDLE):</p> <pre><code>&gt;&gt;&gt; with open(&quot;testfile.txt&quot;, &quot;r&quot;) as file: x = file.read() &gt;&gt;&gt; x Squeezed text (2000 lines) </code></pre> <p><code>json</code> module can't parse JSON with code:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;pyshell#19&gt;&quot;, line 1, in &lt;module&gt; json.dumps({'a':PovScore(0,0)}) File &quot;C:\Users\asdfs\AppData\Local\Programs\Python\Python39\lib\json\__init__.py&quot;, line 231, in dumps return _default_encoder.encode(obj) File &quot;C:\Users\asdfs\AppData\Local\Programs\Python\Python39\lib\json\encoder.py&quot;, line 199, in encode chunks = self.iterencode(o, _one_shot=True) File &quot;C:\Users\asdfs\AppData\Local\Programs\Python\Python39\lib\json\encoder.py&quot;, line 257, in iterencode return _iterencode(o, 0) File &quot;C:\Users\asdfs\AppData\Local\Programs\Python\Python39\lib\json\encoder.py&quot;, line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type PovScore is not JSON serializable </code></pre>
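`eval` on a 7 MB literal must build one enormous expression tree and millions of objects in a single call, and `json` (or `ast.literal_eval`) can never parse constructor calls like `PovScore(...)` — calls are not literals. If the writing side can be changed, a hedged alternative is to serialize with `pickle` instead of writing `repr()` to a text file; it round-trips custom classes with no parsing step. The sketch uses a stand-in `PovScore`, since the real one presumably comes from the chess library:

```python
import os
import pickle
import tempfile

class PovScore:  # stand-in for the real class (likely chess.engine.PovScore)
    def __init__(self, relative, turn):
        self.relative = relative
        self.turn = turn
    def __eq__(self, other):
        return (self.relative, self.turn) == (other.relative, other.turn)

transposition = {'0x2df54c975eb48346': [PovScore(57, True), 5]}

path = os.path.join(tempfile.mkdtemp(), 'transposition.pkl')
with open(path, 'wb') as f:          # write side: pickle.dump instead of repr()
    pickle.dump(transposition, f, protocol=pickle.HIGHEST_PROTOCOL)

with open(path, 'rb') as f:          # read side: one call, no eval
    restored = pickle.load(f)

print(restored == transposition)
```

For a table too big to hold in memory at once, `shelve` over the same idea gives keyed, incremental access. (The slow IDLE display is a separate issue: the shell is just slow at rendering megabytes of output.)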
<python>
2024-03-09 16:06:49
1
331
winapiadmin
78,132,993
7,338,672
Django ModuleNotFoundError on Heroku Release
<p>Similar to many previously stated issues yet unresolved in my case, I get a <code>ModuleNotFoundError</code> when deploying my Django project for release on Heroku, which works fine locally. I use <code>Django 4.0</code> and <code>Python 3.10.13</code>.</p> <p>Heroku release log:</p> <pre><code>Traceback (most recent call last): File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/base.py&quot;, line 413, in run_from_argv self.execute(*args, **cmd_options) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/base.py&quot;, line 454, in execute self.check() File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/base.py&quot;, line 486, in check all_issues = checks.run_checks( File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/checks/registry.py&quot;, line 88, in run_checks new_errors = check(app_configs=app_configs, databases=databases) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/checks/compatibility/django_4_0.py&quot;, line 9, in check_csrf_trusted_origins for origin in settings.CSRF_TRUSTED_ORIGINS: File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 89, in __getattr__ self._setup(name) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 76, in _setup self._wrapped = Settings(settings_module) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 190, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File &quot;/app/.heroku/python/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1004, 
in _find_and_load_unlocked ModuleNotFoundError: No module named 'opportunities.settings.heroku_staging' During handling of the above exception, another exception occurred: Traceback (most recent call last): File &quot;/app/manage.py&quot;, line 22, in &lt;module&gt; main() File &quot;/app/manage.py&quot;, line 18, in main execute_from_command_line(sys.argv) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py&quot;, line 442, in execute_from_command_line utility.execute() File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/__init__.py&quot;, line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/core/management/base.py&quot;, line 426, in run_from_argv connections.close_all() File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 84, in close_all for conn in self.all(initialized_only=True): File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 76, in all return [ File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 73, in __iter__ return iter(self.settings) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/functional.py&quot;, line 47, in __get__ res = instance.__dict__[self.name] = self.func(instance) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 45, in settings self._settings = self.configure_settings(self._settings) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/db/utils.py&quot;, line 148, in configure_settings databases = super().configure_settings(databases) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/utils/connection.py&quot;, line 50, in configure_settings settings = getattr(django_settings, self.settings_name) File 
&quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 89, in __getattr__ self._setup(name) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 76, in _setup self._wrapped = Settings(settings_module) File &quot;/app/.heroku/python/lib/python3.10/site-packages/django/conf/__init__.py&quot;, line 190, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File &quot;/app/.heroku/python/lib/python3.10/importlib/__init__.py&quot;, line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1050, in _gcd_import File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1027, in _find_and_load File &quot;&lt;frozen importlib._bootstrap&gt;&quot;, line 1004, in _find_and_load_unlocked ModuleNotFoundError: No module named 'opportunities.settings.heroku_staging' </code></pre> <p>The project structure is currently as follows: <a href="https://i.sstatic.net/Mkheq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mkheq.png" alt="enter image description here" /></a></p> <p>In Heroku, I have <code>DJANGO_SETTINGS_MODULE</code> set to <code>opportunities.settings.heroku_staging</code>. So ogiven the project structure that should do the trick but I get this error.</p>
<python><django><heroku>
2024-03-09 16:05:26
1
677
compmonks
78,132,778
5,267,751
Making an object's pretty-print representation different if it appears at top-level compared to nested
<p>In IPython, it is possible to make an object have a custom pretty-print representation as explained in <a href="https://stackoverflow.com/q/40193479/5267751">How to pretty print object representation in IPython</a> :</p> <pre class="lang-py prettyprint-override"><code>class MyObject: def _repr_pretty_(self, p, cycle): p.text(&quot;&lt;MyObject: object content&gt;&quot;) </code></pre> <p>Then in the shell, the pretty-printed output will be:</p> <pre class="lang-py prettyprint-override"><code>In [5]: MyObject() Out[5]: &lt;MyObject: object content&gt; </code></pre> <p>However, if multiple <code>MyObject()</code> objects appear in a list/dictionary etc., then it may get long.</p> <pre><code>In [6]: [MyObject(), MyObject()] Out[6]: [&lt;MyObject: object content&gt;, &lt;MyObject: object content&gt;] </code></pre> <p>I want to do the following:</p> <ul> <li>If <code>MyObject()</code> appears at top level, the long description as above appears.</li> <li>If it appears nested, <strong>the representation should be replaced with <code>&lt;MyObject: ...&gt;</code></strong>.</li> </ul> <p>My reasoning: having too much content will clutter the user's view, and if the user wants to see what is the detail inside, they can simply write <code>list[0]</code>.</p> <p>How can I implement that in IPython shell? Or, is there any better approach to this problem?</p>
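I'm not aware of a documented "am I top-level?" flag, but `RepresentationPrinter` keeps an internal `stack` of the ids of objects currently being printed, and the object itself is pushed before `_repr_pretty_` runs — so a length of 1 means top level. A sketch relying on that undocumented internal (it may break in a future IPython):

```python
from IPython.lib.pretty import pretty

class MyObject:
    def _repr_pretty_(self, p, cycle):
        # p.stack is an undocumented internal of RepresentationPrinter: the ids
        # of objects currently being printed, with this object already pushed.
        # len(p.stack) == 1 therefore means "top level".
        if len(p.stack) == 1:
            p.text("<MyObject: object content>")
        else:
            p.text("<MyObject: ...>")

print(pretty(MyObject()))       # <MyObject: object content>
print(pretty([MyObject()]))     # [<MyObject: ...>]
```

The interactive `Out[...]` display goes through the same pretty machinery, so the shell behaves the same way as the `pretty()` calls above.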
<python><ipython>
2024-03-09 15:01:40
2
4,199
user202729
78,132,606
2,545,680
Why relative imports don't work without first importing package as a module
<p>I'm experimenting with relative imports and have found a very interesting case <a href="https://discuss.python.org/t/relative-imports/43183/2" rel="nofollow noreferrer">here</a>:</p> <pre><code>$ mkdir project $ cat &gt;&gt; project/one.py from . import two $ touch project/two.py $ python project/one.py Traceback (most recent call last): File &quot;project/one.py&quot;, line 1, in &lt;module&gt; from . import two ImportError: attempted relative import with no known parent package </code></pre> <p>The author explains that</p> <blockquote> <p>You must ensure that the pkg package is imported before its contents can do relative imports of each other. There are many ways to do this, but in general you want a program to start with a single absolute import first.</p> </blockquote> <p>And shows the following solution:</p> <p>The simplest thing is to run the script as a module, using the package name rather than the source code file name. So this runs without error:</p> <pre><code>$ python -m project.one </code></pre> <blockquote> <p>Practically speaking, if we want an “entry point” to the code, we should use a driver script that is outside the package, which can find the package by absolute import and use something from it:</p> </blockquote> <pre><code>$ cat &gt;&gt; driver.py import project.one $ python driver.py </code></pre> <blockquote> <p>That worked because the containing folder for driver.py was on the module search path (sys.path), because of how we started Python - so the project folder could be found directly within the CWD.</p> </blockquote> <p>Could somebody elaborate on the requirement to first import a package as a module to enable relative imports inside this package? Where is it defined in the spec (link would be good)?</p> <p>Thanks</p>
<python>
2024-03-09 14:01:33
1
106,269
Max Koretskyi
78,132,574
1,209,451
Tensorflow freeze shuffled dataset
<p>For training model I am using shuffled dataset which was created so:</p> <pre><code>train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE) </code></pre> <p>but after training I want to check the prediction results with some simple test function</p> <pre><code>def detect_wrong_ts(model,ds,label_encoder): # Take first elements batch ds = ds.take(1) # Do data preprocessing # it converts strings from initial dataset to indexes &quot;word1 word2&quot; -&gt; [ 1, 2 ] mapper = lambda in1,in2,out : (model.preprocessor(in1),out) train = ds.map(mapper) # Do predictions on postprocessed dataset x = model.predict(train) # Now I try to print results with the information based # on the # 1 ) initial dataset # 2 ) postprocessed dataset # 3 ) prediction for batch,outp in zip(ds,train): inp = batch[0] outp_code = outp[0] for i,inp in enumerate(inp): inp_str = inp.numpy().decode(&quot;utf-8&quot;) inp_codes = model.preprocessor([inp_str])[0].numpy() postprocess_codes = outp_code[i].numpy() print( f&quot;#{i} {inp_str=} {inp_codes=} {postprocess_codes=}&quot;, ) </code></pre> <p>But I see that the results of this two datasets are shuffled and don't match each other. The reason is clear: after starting a new iterator operator shuffling starts again. I can switch off full shuffling but it is rather complicated for the full pipeline.</p> <p>I am thinking about some option which can freeze the dataset so that all iterations on it produce same results.</p> <p>Is it possible to solve this problem this way?</p>
<python><tensorflow>
2024-03-09 13:51:34
1
3,220
Oleg
78,132,529
1,879,604
Python Metaclass: Create attributes for instance from the attribues for metaclass
<pre><code>class Meta(type): def __new__(cls, name, bases, attrs): # Generate a and b for the object based on x ? return super().__new__(cls, name, bases, attrs) class A(metaclass=Meta): def __init__(self, a, b): self.a = a self.b = b obj = A(x) </code></pre> <p>The <code>x</code> passed to <code>A</code> should be consumed with in the meta class <code>Meta</code>, and it should generate the attributes <code>a</code> &amp; <code>b</code> that are needed by the <code>__init__</code> of <code>A</code>. It is preferred for <code>obj</code> not to have access to <code>x</code>.</p> <p>Not sure if its possible or valid, but is there a way to achieve it ?</p>
<python><python-3.x>
2024-03-09 13:37:30
1
742
NEB
78,132,525
16,436,095
How to implement a cache in Python that efficiently supports both dictionary and heap operations?
<p>Is there a Python data structure that seamlessly combines a dictionary (with nested dictionaries or lists as values) and a heap, allowing sorting based on a specific value within the nested structures?</p> <pre class="lang-py prettyprint-override"><code>cache = {&quot;key1&quot;: {&quot;time&quot;: time1, &quot;info&quot;: &quot;key1 info&quot;}, &quot;key2&quot;: {&quot;time&quot;: time2, &quot;info&quot;: &quot;key2 info&quot;}, ...} </code></pre> <p>or:</p> <pre class="lang-py prettyprint-override"><code>cache = {&quot;key1&quot;: [time1, &quot;key1 info&quot;], &quot;key2&quot;: [time2, &quot;key2 info&quot;], ...} </code></pre> <p>here <code>time1</code>, <code>time2</code>, ... is the time of insertion or update of the entry.</p> <p>The goal is to implement an efficient cache, checking key existence, validating the value's freshness (as it becomes outdated over time), and removing the oldest key when the cache is full. The dictionary should support heap operations either by using the nested key &quot;time&quot; or the zeroth element of the list.</p> <p>Current options considered:</p> <ol> <li>Forming a heap from the dictionary (drawback - expensive operation O(n^2)).</li> <li>Implementing a class with separately stored heap and dictionary (drawback - complexity of synchronizing data in the heap and dictionary).</li> <li>Simple iteration through the dictionary in O(n). This option is favored for its simplicity but might not be optimal.</li> </ol> <p>Is there a more efficient solution or a different approach that avoids creating a custom data structure?</p>
<python><caching><data-structures>
2024-03-09 13:37:13
1
370
maskalev
78,132,436
5,160,230
Pylance complaining on self.tr() method
<p>I'm developing an application with PySide6 6.6.2 and Python 3.11.8. Everything is working fine and mypy is happy. However, as demonstrated in the following image, Pylance (on VSCode) keeps complaining that <code>self.tr()</code> cannot be accessed, though it actually should be accessible for any <code>QtCore.QObject</code> (and hence, <code>QtWidgets.QWidget</code>). In fact, the application runs with no errors and <code>pyside6-lupdate</code> or <code>pyside6-lrelease</code> are happy too.</p> <p><a href="https://i.sstatic.net/ZqVbj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZqVbj.png" alt="Pylance complaining: &quot;Cannot access member 'tr' for type 'TestWidget*'. Member 'tr' is unknown." /></a></p> <p>Here is the example code:</p> <pre class="lang-python prettyprint-override"><code>from PySide6 import QtWidgets class TestWidget(QtWidgets.QWidget): def __init__(self): super(TestWidget, self).__init__() self.setLayout(QtWidgets.QVBoxLayout()) self.layout().addWidget(QtWidgets.QLabel(self.tr(&quot;Hello World!&quot;))) </code></pre> <p>AFAIK, Qt stubs are bundled with the library. Is there any reason why only this particular method isn't working? Why would I need to manually add stubs for a single method, while the entirety of the package is correctly type checked (note, for example, that <code>self.setLayout</code> and <code>self.layout()</code> are correctly typechecked)?
This leads me to think something is wrong in the way I'm using QTranslator, and not with the library itself.</p> <p>I tried to manually implement <code>self.tr()</code> this way:</p> <pre class="lang-python prettyprint-override"><code>from PySide6 import QtWidgets, QtCore class TestWidget(QtWidgets.QWidget): def __init__(self): super(TestWidget, self).__init__() self.setLayout(QtWidgets.QVBoxLayout()) self.layout().addWidget(QtWidgets.QLabel(self.tr(&quot;Hello World!&quot;))) def tr(self, text: str) -&gt; str: return QtCore.QCoreApplication.translate(&quot;TestWidget&quot;, text) </code></pre> <p>This makes Pylance happy and the application still works. However, it's rather weird that I need to manually implement this method, while the <a href="https://doc.qt.io/qtforpython-6/examples/example_widgets_linguist.html#qt-linguist-example" rel="nofollow noreferrer">documentation</a> itself uses it (and if I just download that example, the same errors show up).</p> <p>Can someone help me understand what's going on?</p>
<python><qt><python-typing><pyside6><pyright>
2024-03-09 13:11:51
1
1,231
Heliton Martins
78,132,353
6,946,110
Pytest- How to remove created data after each test function
<p>I have a FastAPI + SQLAlchemy project and I'm using Pytest for writing unit tests for the APIs.</p> <p>In each test function, I create some data in some tables (user table, post table, comment table, etc) using SQLAlchemy. These created data in each test function will remain in the tables after test function finished and will affect on other test functions.</p> <p>For example, in the first test function I create 3 posts, and 2 users, then in the second test functions, these 3 posts and 2 users remained on the tables and makes my test expectations wrong.</p> <p>Following is my fixture for pytest:</p> <pre><code>@pytest.fixture def session(engine): Session = sessionmaker(bind=engine) session = Session() yield session session.rollback() # Removes data created in each test method session.close() # Close the session after each test </code></pre> <p>I used <code>session.rollback()</code> to remove all created data during session, but it doesn't remove data.</p> <p>And the following is my test functions:</p> <pre><code>class TestAllPosts(PostBaseTestCase): def create_logged_in_user(self, db): user = self.create_user(db) return user.generate_tokens()[&quot;access&quot;] def test_can_api_return_all_posts_without_query_parameters(self, client, session): posts_count = 5 user_token = self.create_logged_in_user(session) for i in range(posts_count): self.create_post(session) response = client.get(url, headers={&quot;Authorization&quot;: f&quot;Bearer {user_token}&quot;}) assert response.status_code == 200 json_response = response.json() assert len(json_response) == posts_count def test_can_api_detect_there_is_no_post(self, client, session): user_token = self.create_logged_in_user(session) response = client.get(url, headers={&quot;Authorization&quot;: f&quot;Bearer {user_token}&quot;}) assert response.status_code == 404 </code></pre> <p>In the latest test function, instead of getting 404, I get 200 with 5 posts (from the last test function)</p> <p>How can I remove the created 
data in each test function after the test function finishes?</p>
<python><sqlalchemy><pytest><fastapi>
2024-03-09 12:43:03
4
1,553
msln
78,132,293
6,087,589
Twitter user objects Python request
<p>I have the basic paid plan of Twitter. I am trying to obtain users information using this code:</p> <pre><code>import requests import json import config bearer_token = config.bearer_token url = &quot;https://api.twitter.com/2/users&quot; params = {'ids': '[4919451, 14151086]', 'user.fields':'name,description,location,public_metrics,username,created_at,' } def bearer_oauth(r): &quot;&quot;&quot; Method required by bearer token authentication. &quot;&quot;&quot; r.headers[&quot;Authorization&quot;] = f&quot;Bearer {bearer_token}&quot; r.headers[&quot;User-Agent&quot;] = &quot;v2UserLookupPython&quot; return r def connect_to_endpoint(url,params): response = requests.request(&quot;GET&quot;, url, auth=bearer_oauth, params=params) print(response.status_code) if response.status_code != 200: raise Exception( &quot;Request returned an error: {} {}&quot;.format( response.status_code, response.text ) ) return response.json() def main(): url = create_url() json_response = connect_to_endpoint(url,params) json_response = flatten(json_response) d = json.dumps(json_response, indent=4, sort_keys=True) return d if __name__ == &quot;__main__&quot;: main() </code></pre> <p>I am getting this error:</p> <blockquote> <p>Exception: Request returned an error: 400 {&quot;errors&quot;:[{&quot;parameters&quot;:{&quot;ids&quot;:[&quot;4919451, 14151086&quot;]},&quot;message&quot;:&quot;The `ids` query parameter value [ 14151086] is not valid&quot;},{&quot;parameters&quot;:{&quot;user.fields&quot;:[&quot;name,description,location,public_metrics,username,created_at,&quot;]},&quot;message&quot;:&quot;The `user.fields` query parameter value [] is not one of [connection_status,created_at,description,entities,id,location,most_recent_tweet_id,name,pinned_tweet_id,profile_image_url,protected,public_metrics,receives_your_dm,subscription_type,url,username,verified,verified_type,withheld]&quot;}],&quot;title&quot;:&quot;Invalid Request&quot;,&quot;detail&quot;:&quot;One or more parameters to your 
request was invalid.&quot;,&quot;type&quot;:&quot;https://api.twitter.com/2/problems/invalid-request&quot;}</p> </blockquote> <p>Can you help me to improve the function connect_to_endpoint so it works? Appreciate your answers in advance.</p>
<python><twitter><python-requests>
2024-03-09 12:20:18
1
419
anitasp
78,131,942
7,207,987
Data structure - Shifting IDs to next dates in python
<p>I have a dataset where I have Date and ID.</p> <p><a href="https://i.sstatic.net/VJrhT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VJrhT.png" alt="enter image description here" /></a></p> <pre><code>Created the dataset using the following code import pandas as pd # Function to generate the specified dataset def generate_dataset(): data = { 'Date': ['09-03-2024']*7 + ['10-03-2024']*4, 'ID': [f'ID_{i}' for i in range(1, 12)] } df = pd.DataFrame(data) return df # Generate the dataset dataset = generate_dataset() # Print the dataset print(dataset) </code></pre> <p>I need a function which can take in a dataset like this along with the max number of IDs possible per day. It should move the additional IDs to the next date, and do this for all unique dates. So the output should look something like this, if the max IDs per date is 2.</p> <p><a href="https://i.sstatic.net/l8HIo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/l8HIo.png" alt="enter image description here" /></a></p>
<python><pandas><dataframe>
2024-03-09 10:08:19
2
623
NinjaR
78,131,920
4,643,584
how to send http 101 just with one CRLF
<p>There is an SSH client application that, after sending an HTTP GET request to an HTTP server, should receive HTTP 101 switching protocol and a header for &quot;Protocol Version Exchange&quot;.</p> <p>The RFC 4253 -- section <a href="https://datatracker.ietf.org/doc/html/rfc4253#section-4.2" rel="nofollow noreferrer">4.2. Protocol Version Exchange</a> indicates that the header format MUST be</p> <pre class="lang-bash prettyprint-override"><code>SSH-protoversion-softwareversion&lt;CR&gt;&lt;LF&gt; </code></pre> <p>example</p> <pre class="lang-bash prettyprint-override"><code>SSH-2.0-OpenSSH_9.5\r\n </code></pre> <p>and doing so in Python 3 as follows</p> <pre class="lang-py prettyprint-override"><code>... ... class MyServer(BaseHTTPRequestHandler): def do_GET(self): self.send_response(101) self.send_header(&quot;Content-type&quot;, &quot;text/plain&quot;) self.end_headers() self.wfile.write(bytes(&quot;SSH-2.0-OpenSSH_9.5&quot;, &quot;utf-8&quot;)) ... ... </code></pre> <p>the response header will be (<em>what the SSH client receives</em>)</p> <pre class="lang-bash prettyprint-override"><code>SSH-2.0-OpenSSH_9.5\r\n\r\n </code></pre> <p>and therefore the SSH client application closes the connection.</p> <p>How to properly set just one <code>CRLF</code> for the header or remove the second one?</p> <p>It should be as follows and start with <code>SSH-</code></p> <pre class="lang-bash prettyprint-override"><code>SSH-2.0-OpenSSH_9.5\r\n </code></pre>
<python><python-3.x><ssh>
2024-03-09 09:59:15
1
24,534
Shakiba Moshiri
78,131,808
7,800,760
Checking if a docker compose service is reachable from another container
<p>I am writing a gunicorn/uvicorn/fastapi endpoint and want it to assess if a given port on another given container (using the compose service name) is reachable (and time it). For this example this service is called mematest and the container I want to check is memaapp.</p> <p>If I run a ping to memaapp from within mematest it succeeds.</p> <p>If I do a &quot;curl memaapp:4200&quot; it succeeds.</p> <p>if I run the following code from within mematest, passing it memaapp and 4200 as its parameter it returns False:</p> <pre><code>def check_port_with_timing(hostname, port): &quot;&quot;&quot; Checks if a port on a given hostname is listening and measures the time taken to check. Parameters: - hostname: A string, the hostname or IP address to check. - port: An integer, the port number to check. Returns: - A tuple containing a boolean indicating if the port is listening and the time taken in milliseconds. &quot;&quot;&quot; with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock: sock.settimeout(5) # Increased timeout start_time = time.time() # Record the start time try: result = sock.connect_ex((hostname, port)) time_taken = ( time.time() - start_time ) * 1000 # Convert time taken to milliseconds if result == 0: return True, time_taken except Exception as e: print(f&quot;An error occurred: {e}&quot;) time_taken = (time.time() - start_time) * 1000 return False, time_taken # If connection fails for a reason other than an exception caught above return False, (time.time() - start_time) * 1000 </code></pre> <p>I guess I could rewrite this function using the requests library for HTTP REST APIs but am trying to keep things at a lower communication stack level.</p>
<python><docker>
2024-03-09 09:20:05
0
1,231
Robert Alexander
78,131,772
17,610,082
Getting NoSuchFile or Directory in subprocess
<p>I'm trying to run a CLI tool using <code>subprocess.run()</code>, but I'm getting irrelevant responses.</p> <p><strong>Folder structure</strong></p> <pre><code>script.py mytools |__tool_x |__tool_y </code></pre> <p><strong>Case 1:</strong> Try to run the below command</p> <pre class="lang-py prettyprint-override"><code>import os import subprocess command = [&quot;mytools/tool_x&quot;, &quot;--help&quot;] subprocess.run( command, stdout=subprocess.DEVNULL, cwd=os.path.dirname(__file__), ) </code></pre> <p><strong>Result:</strong> It works as expected, gives no error.</p> <p><strong>Case 2:</strong></p> <pre class="lang-py prettyprint-override"><code>import os import subprocess command = [&quot;mytools/tool_x&quot;, &quot;--subcommand&quot;, &quot;test&quot;, &quot;--subarg&quot;] subprocess.run( command, stdout=subprocess.DEVNULL, cwd=os.path.dirname(__file__), ) </code></pre> <p><strong>Result:</strong> Fails with <code>FileNotFoundError: [Errno 2] No such file or directory: 'mytools/tool_x'&quot;</code></p> <p>How can I resolve this? I am not able to find the issue; generally, what are the possible causes of this error?</p>
<python><python-3.x><shell><subprocess>
2024-03-09 09:00:28
0
1,253
DilLip_Chowdary
78,131,753
1,419,216
How to get parameters after hash mark # from a redirect URL in FastAPI?
<p>I am facing an issue while working with Redirect Url and getting the data from QueryParams in Python using FastApi. I am using Azure AD Authorization Grant Flow to log in, below is the code which generates the <code>RedirectResponse</code></p> <pre><code>@app.get(&quot;/auth/oauth/{provider_id}&quot;) async def oauth_login(provider_id: str, request: Request): if config.code.oauth_callback is None: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail=&quot;No oauth_callback defined&quot;, ) provider = get_oauth_provider(provider_id) if not provider: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail=f&quot;Provider {provider_id} not found&quot;, ) random = random_secret(32) params = urllib.parse.urlencode( { &quot;client_id&quot;: provider.client_id, &quot;redirect_uri&quot;: f&quot;{get_user_facing_url(request.url)}/callback&quot;, &quot;state&quot;: random, **provider.authorize_params, } ) response = RedirectResponse( url=f&quot;{provider.authorize_url}?{params}&quot;) samesite = os.environ.get(&quot;CHAINLIT_COOKIE_SAMESITE&quot;, &quot;lax&quot;) # type: Any secure = samesite.lower() == &quot;none&quot; response.set_cookie( &quot;oauth_state&quot;, random, httponly=True, samesite=samesite, secure=secure, max_age=3 * 60, ) return response </code></pre> <p>And this is where I am receiving the Redirect URL.</p> <pre><code>@app.get(&quot;/auth/oauth/{provider_id}/callback&quot;) async def oauth_callback( provider_id: str, request: Request, error: Optional[str] = None, code: Optional[str] = None, state: Optional[str] = None, ): if config.code.oauth_callback is None: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, detail=&quot;No oauth_callback defined&quot;, ) provider = get_oauth_provider(provider_id) if not provider: raise HTTPException( status_code=status.HTTP_404_NOT_FOUND, detail=f&quot;Provider {provider_id} not found&quot;, ) if not code or not state: raise HTTPException( status_code=status.HTTP_400_BAD_REQUEST, 
detail=&quot;Missing code or state&quot;, ) response.delete_cookie(&quot;oauth_state&quot;) return response </code></pre> <p>This redirect works fine when the QueryParams are with ? but the issue right now is that the redirect callback from Azure AD is with # and due to that I am not able to get the <code>Code</code> &amp; <code>State</code> QueryParams from the Url</p> <h4>Example of RedirectUrl with <code>#</code></h4> <p><code>http://localhost/callback#code=xxxxxx&amp;state=yyyyyy</code></p> <p>Any thoughts on how to fix this issue.</p>
<python><http-redirect><azure-active-directory><fastapi><url-fragment>
2024-03-09 08:54:33
1
2,437
Shabir jan
78,131,711
5,790,653
How to round up integer number to multiple of 10
<p>I found how to round down or up a floating number to the next or previous integer number, but this is my issue:</p> <p>Suppose I have these numbers:</p> <pre class="lang-none prettyprint-override"><code>12 11 10 27 29 25 16 </code></pre> <p>I'm going to round up to the nearest 10. If number has 0 in the last digit, then pass it.</p> <p>I saw something like this but they round down:</p> <pre class="lang-py prettyprint-override"><code>import math math.ceil(12) round(12) </code></pre> <p>Expected output:</p> <pre class="lang-none prettyprint-override"><code>20 20 10 30 30 30 20 </code></pre>
<python>
2024-03-09 08:36:46
3
4,175
Saeed
78,131,645
9,112,151
How to run decorator at each request before wrapper?
<p>I'd like to run some code before a view even builds dependencies (please note the comment marked with <code>===&gt;</code>):</p> <pre><code>from functools import wraps import uvicorn from fastapi import FastAPI, Depends app = FastAPI() def get_session(): print(&quot;get session&quot;) def transaction(): print(&quot;inside transaction&quot;) def decorator(func): print(&quot;inside decorator&quot;) # ===&gt; I need to run code here at each request @wraps(func) async def wrapper(*args, **kwargs): print(&quot;inside wrapper&quot;) result = await func(*args, **kwargs) return result return wrapper return decorator @app.post(&quot;/create&quot;) @transaction() async def create(session=Depends(get_session)): pass if __name__ == '__main__': uvicorn.run(app, port=8000) </code></pre> <p>The output when run:</p> <pre><code>inside transaction inside decorator INFO: Started server process [67644] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit) INFO: 127.0.0.1:60098 - &quot;GET /docs HTTP/1.1&quot; 200 OK INFO: 127.0.0.1:60098 - &quot;GET /openapi.json HTTP/1.1&quot; 200 OK get session inside wrapper </code></pre> <p>The problem is that I'd like the code <code>print(&quot;inside decorator&quot;)</code> to be run at each HTTP request. In other words, I need to run some code before <code>wrapper</code> at each request (I need to initialize something). How could I do this?</p>
<python><fastapi>
2024-03-09 08:09:50
0
1,019
Альберт Александров
78,131,641
15,491,774
Problem with python-keycloak package and FastApi
<p>I have a small application where I try to use Keycloak for authentication. If I use the endpoint /public to get the token of the test user and use that token with Postman to send a request to the /protected resource, it works.</p> <p>My issue is that if I retrieve the token from Keycloak via Postman with the following call, I cannot use the access_token to call /protected. I always get &quot;Could not validate credentials&quot; if I look up the token via the get_current_user method.</p> <pre><code>http://localhost:8180/auth/realms/FastApi/protocol/openid-connect/token </code></pre> <pre><code>from fastapi import FastAPI, Depends, HTTPException, status from fastapi.security import OAuth2AuthorizationCodeBearer from typing import List from keycloak import KeycloakOpenID app = FastAPI() # Keycloak settings keycloak_openid = KeycloakOpenID( server_url=&quot;http://localhost:8180/auth/&quot;, realm_name=&quot;FastApi&quot;, client_id=&quot;fast_api&quot;, client_secret_key=&quot;secret&quot;, ) # OAuth2 scheme using Keycloak oauth2_scheme = OAuth2AuthorizationCodeBearer( authorizationUrl=&quot;http://localhost:8180/auth/realms/FastApi/protocol/openid-connect/auth&quot;, tokenUrl=&quot;http://localhost:8180/auth/realms/FastApi/protocol/openid-connect/token&quot;) # Dependency to validate token and get user roles async def get_current_user(token: str = Depends(oauth2_scheme)): try: d = keycloak_openid.userinfo(token) userinfo = keycloak_openid.userinfo(token) user_roles = userinfo.get(&quot;roles&quot;, []) if 'admin' not in user_roles: raise HTTPException( status_code=status.HTTP_403_FORBIDDEN, detail=&quot;User not authorized to access this resource&quot;, ) return user_roles except Exception as e: raise HTTPException( status_code=status.HTTP_401_UNAUTHORIZED, detail=&quot;Could not validate credentials&quot;, ) @app.get(&quot;/public&quot;, tags=[&quot;public&quot;]) async def public_endpoint(): token = keycloak_openid.token(&quot;test&quot;, &quot;test&quot;) return 
{&quot;token&quot;: token} # Protected endpoint @app.get(&quot;/protected&quot;, tags=[&quot;protected&quot;]) async def protected_endpoint(current_user_roles: List[str] = Depends(get_current_user)): return {&quot;message&quot;: &quot;This is a protected endpoint&quot;, &quot;user_roles&quot;: current_user_roles} </code></pre>
<python><keycloak><fastapi>
2024-03-09 08:08:51
1
448
stylepatrick
78,131,589
2,386,930
Cassandra throws error('required argument is not an integer') when I set the consistency level to quorum
<p>Here is my table's definition.</p> <pre><code>CREATE TABLE chat_data.twitch_chat_by_broadcaster_and_timestamp ( broadcaster_id int, year_month int, timestamp bigint, message_id uuid, message text, PRIMARY KEY ((broadcaster_id, year_month), timestamp, message_id) ) WITH CLUSTERING ORDER BY (timestamp ASC, message_id ASC) </code></pre> <p>Here is my code to insert.</p> <pre><code>import logging import auth.secrets as secrets from cassandra.auth import PlainTextAuthProvider from cassandra.cluster import Cluster from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy from cassandra.query import BatchStatement, BatchType, tuple_factory from datetime_helpers import get_month, get_next_month class DatabaseConnection: def __init__(self, keyspace): auth_provider = PlainTextAuthProvider( secrets.get_astra_client_id(), secrets.get_astra_secret() ) load_balancing_policy = TokenAwarePolicy(DCAwareRoundRobinPolicy()) cluster = Cluster( cloud=secrets.get_astra_cloud_config(), auth_provider=auth_provider, load_balancing_policy=load_balancing_policy, ) self.session = cluster.connect(keyspace) def insert_chats(self, messages): logging.info(f&quot;Inserting {len(messages)} message&quot;) batch = BatchStatement( consistency_level=&quot;QUORUM&quot;, batch_type=BatchType.UNLOGGED ) statement = self.session.prepare( &quot;&quot;&quot; INSERT INTO twitch_chat_by_broadcaster_and_timestamp (broadcaster_id, year_month, timestamp, message_id, message) VALUES (?, ?, ?, ?, ?) 
&quot;&quot;&quot; ) for m in messages: broadcaster_id, timestamp, message_id, message = m month = get_month(timestamp) batch.add( statement, (broadcaster_id, month, timestamp, message_id, message) ) try: self.session.execute(batch) logging.info(&quot;Messages inserted successfully&quot;) return True except Exception as e: logging.error(f&quot;Exception: {e}&quot;) return False </code></pre> <p><code>execute</code> throws</p> <pre><code>Exception: ('Unable to complete the operation against any hosts', {&lt;Host: &lt;astra datastax machine #1&gt;: error('required argument is not an integer'), &lt;Host: &lt;astra datastax machine #2&gt;: error('required argument is not an integer'), &lt;Host: &lt;astra datastax machine #3&gt;&gt;: error('required argument is not an integer')} </code></pre> <p>I printed out the types I'm passing in the <code>batch.add</code> statement and confirmed they are correct. I eventually got the database to accept my insertion by removing <code>consistency_level=&quot;QUORUM&quot;</code> from the BatchStatement. It now reads <code>batch = BatchStatement(batch_type=BatchType.UNLOGGED)</code></p> <p>Any ideas why I'm only getting the exception when I try to set the consistency level to quorum? Also, what's going on under the hood that's causing this exception?</p>
<python><cassandra>
2024-03-09 07:46:49
1
1,587
janovak
78,131,585
15,412,256
Polars Advanced Customized Expression Extension
<p>I want to construct a series of <strong>customized Polars Expression Extensions</strong> following <a href="https://docs.pola.rs/py-polars/html/reference/api.html" rel="nofollow noreferrer">this tutorial</a>.</p> <p>This is my attempt to create customized Polars Expressions:</p> <pre class="lang-py prettyprint-override"><code>import polars as pl @pl.api.register_expr_namespace(&quot;EWSD&quot;) class ExponentialMovingStd: def __init__( self, expr: pl.Expr, *, alpha: float | None = None, span: int | None = None, half_life: int | None = None, **ewm_std_kwargs, ): self._expr = expr self.alpha = alpha self.span = span print(self.span) self.half_life = half_life self.ewm_std_kwargs = ewm_std_kwargs return (self._expr .ewm_std( alpha=self.alpha, span=self.span, half_life=self.half_life, **self.ewm_std_kwargs ) ).cast(pl.UInt8) &quot;&quot;&quot;def __call__(self) -&gt; pl.Expr: return (self._expr .ewm_std( alpha=self.alpha, span=self.span, halflife=self.halflife, **self.ewm_std_kwargs ) ).cast(pl.UInt8)&quot;&quot;&quot; </code></pre> <p>However, when I use this expression:</p> <pre class="lang-py prettyprint-override"><code>df.with_columns( pl.col(&quot;var&quot;).EWSD(span=21).alias(&quot;21span_var&quot;) ) </code></pre> <p>The <code>print(self.span)</code> gives me <code>None</code> as it is not defined upon <code>calling</code> the expression.</p> <p>I would like to have:</p> <ol> <li>Additional variables to be initialized under <code>__init__</code> other than <code>expr</code>.</li> <li>A working <code>__call__</code> or equivalent so that I don't have to write additional expressions like <code>pl.col().EWSD().get_values()</code> to get the values.</li> </ol>
<python><python-polars>
2024-03-09 07:43:27
0
649
Kevin Li
78,131,353
10,431,634
what is a fast and simple way of packing signed values in 4 bits?
<p>I have a python list of integers, each being in the range [-8,7]. I want to pack these values into an array of bytes that i will send down a socket connection. Since these values can be represented by 4 bits in a two's complement representation, it should be possible to pack two such values in each byte. What is a good and relatively fast way of doing this in python? Thanks</p>
<python>
2024-03-09 05:39:55
1
909
astrophobia
78,131,338
2,411,377
How to set different filesystem for python os.path module?
<p>I'm developing a script that auto-deploys my Django project from my Windows machine to my local Raspberry server.</p> <p>Amongst the steps, there's a Python script that has to be copied to the server and executed there. I execute it through an SSH call via the <code>paramiko</code> module:</p> <pre><code>client.exec_command('python ' + '/home/myname/projects/example_app/script.py') </code></pre> <p>This is one of the lines of the executed script:</p> <pre><code>import os import subprocess as sp sp.run(&quot;python &quot; + os.path.join(&quot;/home/myname/projects&quot;, &quot;example_app&quot;, &quot;manage.py collectstatic&quot;)) </code></pre> <p>However, I get the following error:</p> <pre><code>python: can't open file 'E:\\home\\myname\\projects\\example_app\\manage.py': [Errno 2] No such file or directory </code></pre> <p>Weirdly enough, it seems like the os.path module is joining paths according to my Windows filesystem, even though the script is called from within the Linux server. Ok, I don't know about the SSH implementation, and I can accept that. My question is: <strong>is there a way to manually set the OS for the os.path module</strong>?</p>
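Whatever is producing the Windows-style path here, the path flavour itself can be pinned: `os.path` is only an alias for the module matching the running interpreter, and `posixpath` (or `ntpath` for the Windows flavour) can be imported directly to force one style on any platform. A minimal sketch, with the `collectstatic` subcommand appended outside the join since it is an argument rather than part of the path:

```python
import posixpath  # POSIX flavour of os.path, regardless of the host OS

cmd = "python " + posixpath.join(
    "/home/myname/projects", "example_app", "manage.py"
) + " collectstatic"
```

Running this on Windows or Linux alike yields `python /home/myname/projects/example_app/manage.py collectstatic` with forward slashes.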
<python><ssh><operating-system>
2024-03-09 05:30:09
1
353
Anderson Linhares
78,131,171
864,598
Python method chaining in functional programming style
<p>Below is Python code, where the <code>process_request_non_fp</code> method shows how to handle the problem with IF-ELSE condition (make-api -&gt; load-db -&gt; notify).</p> <p>I'm trying to get rid of IF-ELSE and chain in functional way with ERROR being handled in special method.</p> <p>Is there any helper from functools, toolz etc to achieve this in simple functional way?</p> <pre><code>import random from pydantic import BaseModel class APP_ERROR(BaseModel): error_message: str class DB_ERROR(APP_ERROR): pass class API_ERROR(APP_ERROR): pass class NOTIFICATION_ERROR(APP_ERROR): pass class Request(BaseModel): req_id: str class Response(BaseModel): response: str class Ack(BaseModel): email: str def get_api_data(request: Request) -&gt; Response | API_ERROR: if random.choice([True, False]): print(&quot; Success !! API data successfully retrieved&quot;) return Response(response=&quot;Success&quot;) else: print(&quot; Fail !! API data FAILED&quot;) return API_ERROR(error_message=&quot;API Error&quot;) def load_db(response: Response) -&gt; Ack | DB_ERROR: if random.choice([True, False]): print(&quot; Success !! Data loaded to DB&quot;) return Ack(email=&quot;test@gmail.com&quot;) else: print(&quot;Fail !! DB Error&quot;) return DB_ERROR(error_message=&quot;DB Error&quot;) def notify(ack: Ack) -&gt; bool | NOTIFICATION_ERROR: if random.choice([True, False]): print(&quot; Success !! Email notification sent&quot;) return True else: print(&quot;Fail !! 
Email notification sent&quot;) return NOTIFICATION_ERROR(error_message=&quot;Notification error&quot;) def process_request_non_fp(req: Request) -&gt; bool: resp = get_api_data(request=req) if type(resp) is Response: api_res = load_db(response=resp) if type(api_res) is Ack: ack_resp = notify(api_res) if type(ack_resp) is bool: return ack_resp else: return False else: return False else: return False def process_request_fp(req: Request) -&gt; bool: # (get_api_data)(load_db)(notify)(req).else(lambda x -&gt; API_ERROR : print(Error)) # How to implement this in functional way pass process_request_non_fp(Request(req_id=&quot;MY_REQUEST&quot;)) </code></pre>
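There is no dedicated helper in functools for this, but the short-circuiting itself can be expressed as a small `bind` folded over the steps with `functools.reduce` — the railway-oriented pattern. A self-contained sketch, using a hypothetical `Err` class standing in for `APP_ERROR` and toy step functions standing in for `get_api_data -> load_db -> notify`:

```python
from functools import reduce


class Err:
    """Minimal error marker standing in for APP_ERROR (an assumption)."""
    def __init__(self, message):
        self.message = message


def bind(value, fn):
    # Short-circuit: once a step has failed, later steps are skipped
    return value if isinstance(value, Err) else fn(value)


def pipeline(*steps):
    # Fold the steps over the input so the first Err propagates untouched
    return lambda x: reduce(bind, steps, x)


# Hypothetical steps illustrating the chained flow
double = lambda n: n * 2
fail_if_big = lambda n: Err("too big") if n > 10 else n
process = pipeline(double, fail_if_big, double)
```

For the real app, `pipeline(get_api_data, load_db, notify)` would take the same shape, with a final `isinstance` check replacing the nested IF-ELSE; libraries such as `returns` package this pattern as a `Result` container if a dependency is acceptable.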
<python><functional-programming><python-itertools><functools><toolz>
2024-03-09 03:42:09
3
1,014
M80
78,130,945
11,486,307
How to Prevent Gunicorn Workers/Threads from Blocking During External API Calls in Flask App
<p>I am developing a Flask application that makes external API calls and is served with Gunicorn configured to use multiple workers and threads. Despite this configuration, I observe that when one thread makes an API call, other workers and threads seem to wait for this call to complete before proceeding with their tasks. This behavior suggests a blocking operation, but my understanding was that external API calls should not block other threads, especially in a multi-threaded environment.</p> <p><strong>Configuration:</strong> Gunicorn: Configured with 3 workers and 10 threads per worker. Flask Application: Makes synchronous external API calls in one of its routes.</p> <p><strong>Problematic Function:</strong></p> <pre class="lang-py prettyprint-override"><code>def generate_response(max_retry_attempts=3): retry_count = 0 while retry_count &lt;= max_retry_attempts: try: # Synchronous call to an external API response = external_api_call() return response except SomeAPIException as e: retry_count += 1 time.sleep(2**retry_count) # Exponential backoff </code></pre> <p><strong>Symptoms:</strong> When the function generate_response is called from a Flask route, it seems to block the entire application, not just the thread making the call. Other requests to the server are not processed until the API call and its retries (if any) complete.</p> <p><strong>Questions:</strong></p> <ol> <li>Why does this behavior occur when I expected the Gunicorn workers and threads to handle multiple requests independently, especially for I/O-bound tasks like external API calls?</li> <li>What can I do to prevent a single external API call from blocking other requests in my Flask application?</li> </ol> <p><strong>Additional Context:</strong> The API calls are essential for the application's functionality, and I need to maintain a synchronous interface due to the application's current design. 
I am looking for a solution that can integrate with my current setup without a complete overhaul or moving to an asynchronous framework.</p> <p>I appreciate any insights or suggestions on how to address this issue. Thank you!</p>
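One quick way to narrow this down is to confirm that blocked threads really do overlap in the deployment environment: both blocking socket I/O and `time.sleep` release the GIL, so five concurrent calls should take roughly as long as one. If this check passes but the app still serializes requests, the culprit is more likely a shared lock, a CPU-bound section holding the GIL, or a connection pool of size one, rather than the API call itself. A minimal sketch:

```python
import threading
import time


def fake_api_call():
    # Stands in for a blocking network call; sleep releases the GIL
    time.sleep(0.2)


start = time.monotonic()
threads = [threading.Thread(target=fake_api_call) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.monotonic() - start  # ~0.2s if threads overlap, ~1.0s if not
```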
<python><multithreading><flask><gil>
2024-03-09 01:17:29
0
590
IdoS
78,130,876
22,437,734
ValueError with Scikit-Learn
<p><em>link to <a href="https://github.com/marsianjohncarter/StackOverflow--Sklearn_ValueError" rel="nofollow noreferrer">car data.csv</a></em></p> <p>My code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import train_test_split car_data = pd.read_csv('car_data.csv') # Create X X = car_data.drop('Buy Rate', axis=1) # Create Y y = car_data['Buy Rate'] clf = RandomForestClassifier() clf.get_params() X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2) clf.fit(X_train, y_train) </code></pre> <p>After the line with <code>clf.fit</code>, this error pops up:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_51905/2395142735.py in ?() ----&gt; 1 clf.fit(X_train, y_train) ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/base.py in ?(estimator, *args, **kwargs) 1147 skip_parameter_validation=( 1148 prefer_skip_nested_validation or global_skip_validation 1149 ) 1150 ): -&gt; 1151 return fit_method(estimator, *args, **kwargs) ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/ensemble/_forest.py in ?(self, X, y, sample_weight) 344 &quot;&quot;&quot; 345 # Validate or convert input data 346 if issparse(y): 347 raise ValueError(&quot;sparse multilabel-indicator for y is not supported.&quot;) --&gt; 348 X, y = self._validate_data( 349 X, y, multi_output=True, accept_sparse=&quot;csc&quot;, dtype=DTYPE 350 ) 351 if sample_weight is not None: ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/base.py in ?(self, X, y, reset, validate_separately, cast_to_ndarray, **check_params) 617 if &quot;estimator&quot; not in check_y_params: 618 check_y_params = {**default_check_params, **check_y_params} 619 y = check_array(y, input_name=&quot;y&quot;, **check_y_params) 620 else: --&gt; 621 X, y = check_X_y(X, y, **check_params) 622 out = 
X, y 623 624 if not no_val_X and check_params.get(&quot;ensure_2d&quot;, True): ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/utils/validation.py in ?(X, y, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, estimator) 1143 raise ValueError( 1144 f&quot;{estimator_name} requires y to be passed, but the target y is None&quot; 1145 ) 1146 -&gt; 1147 X = check_array( 1148 X, 1149 accept_sparse=accept_sparse, 1150 accept_large_sparse=accept_large_sparse, ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/utils/validation.py in ?(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator, input_name) 914 ) 915 array = xp.astype(array, dtype, copy=False) 916 else: 917 array = _asarray_with_order(array, order=order, dtype=dtype, xp=xp) --&gt; 918 except ComplexWarning as complex_warning: 919 raise ValueError( 920 &quot;Complex data not supported\n{}\n&quot;.format(array) 921 ) from complex_warning ~/Desktop/ml-course/env/lib/python3.10/site-packages/sklearn/utils/_array_api.py in ?(array, dtype, order, copy, xp) 376 # Use NumPy API to support order 377 if copy is True: 378 array = numpy.array(array, order=order, dtype=dtype) 379 else: --&gt; 380 array = numpy.asarray(array, order=order, dtype=dtype) 381 382 # At this point array is a NumPy ndarray. We convert it to an array 383 # container that is consistent with the input's namespace. 
~/Desktop/ml-course/env/lib/python3.10/site-packages/pandas/core/generic.py in ?(self, dtype) 2082 def __array__(self, dtype: npt.DTypeLike | None = None) -&gt; np.ndarray: 2083 values = self._values -&gt; 2084 arr = np.asarray(values, dtype=dtype) 2085 if ( 2086 astype_is_view(values.dtype, arr.dtype) 2087 and using_copy_on_write() ValueError: could not convert string to float: 'Hyundai' </code></pre> <p>I have viewed similar questions that have been posted here, but none help.</p>
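The traceback's last line is the whole story: `RandomForestClassifier.fit` needs numeric features, and a string like `'Hyundai'` in a make/brand column cannot be coerced to float. One common fix is to one-hot encode the string columns before fitting — a sketch on a hypothetical frame standing in for <code>car_data.csv</code> (the real column names may differ):

```python
import pandas as pd

# Hypothetical stand-in for car_data.csv; only the dtypes matter here
df = pd.DataFrame({
    "Make": ["Hyundai", "Toyota", "Hyundai"],
    "Odometer": [35000, 42000, 87000],
    "Buy Rate": [1, 0, 1],
})

# One-hot encode every string column so each feature is numeric;
# the resulting X can then be passed to clf.fit(X_train, y_train) as before
X = pd.get_dummies(df.drop("Buy Rate", axis=1))
y = df["Buy Rate"]
```

scikit-learn's `OneHotEncoder` inside a `ColumnTransformer` does the same job and additionally handles categories unseen at training time.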
<python><pandas><scikit-learn><model>
2024-03-09 00:42:16
2
473
Gleb
78,130,820
947,012
What instrumenting module name should I use with Opentelemetry tracer provider?
<p>The <a href="https://opentelemetry-python.readthedocs.io/en/latest/api/trace.html#opentelemetry.trace.TracerProvider" rel="nofollow noreferrer">documentation</a> suggests:</p> <blockquote> <p><code>instrumenting_module_name</code> (str) –<br /> The uniquely identifiable name for instrumentation scope, such as instrumentation library, package, module or class name. <code>__name__</code> may not be used as this can result in different tracer names if the tracers are in different files. It is better to use a fixed string that can be imported where needed and used consistently as the name of the tracer.</p> <p>This should not be the name of the module that is instrumented but the name of the module doing the instrumentation. E.g., instead of &quot;requests&quot;, use &quot;opentelemetry.instrumentation.requests&quot;.</p> </blockquote> <p>Reading the <a href="https://opentelemetry.io/docs/specs/otel/glossary/#instrumented-library" rel="nofollow noreferrer">glossary</a> does not help either.</p> <p>The documentation seems to be super-focused on &quot;instrumenting -&gt; instrumented&quot; relationship and misses the simplest case when I just want to send traces from my application. I don't want to instrument FastAPI, psycopg2, Flask, Django, or other third-party stuff. I am just sending traces manually from my components as the flow progresses through complex business logic.</p> <p>What should be <code>instrumenting_module_name</code> in my case? Is just a constant string across the project good enough? When do I want them to be distinct, and, vice a versa, what bad can happen if distinct names are used when not desired?</p>
<python><open-telemetry><distributed-tracing>
2024-03-09 00:09:56
0
3,234
greatvovan
78,130,757
9,957,710
How to run IJulia using the PythonCall CondaPkg Python installation rather than PyCall.jl/Conda.jl
<p>It seems that PythonCall.jl and CondaPkg.jl are becoming increasingly useful packages. In particular, CondaPkg provides tools for managing Python's virtual environments via <code>CondaPkg.toml</code>, which is very convenient due to its similarity to <code>Project.toml</code>.</p> <p>However, when using Jupyter Notebook via <code>IJulia</code>, another Python Anaconda installation is created. As a result, tons of dependencies are duplicated across the Conda.jl and CondaPkg.jl environments.</p> <p>How can I force IJulia to use the existing <code>CondaPkg</code> installation rather than allowing it to install Jupyter via Conda.jl? How can I avoid multiple condas in my Julia-to-Python Jupyter workflows?</p>
<python><jupyter-notebook><julia><conda>
2024-03-08 23:46:01
1
42,537
Przemyslaw Szufel
78,130,748
3,460,486
Google Chat API get list of messages for a space
<p>I am trying to get <a href="https://developers.google.com/workspace/chat/list-messages" rel="nofollow noreferrer">this example</a> in Python working.</p> <pre><code>def get_msg(): creds = None if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', ['https://www.googleapis.com/auth/chat.spaces', 'https://www.googleapis.com/auth/chat.messages']) chat = build('chat', 'v1', credentials=creds) result = chat.spaces().messages().list(name='spaces/123456789').execute() print(result) def main(): get_msg() </code></pre> <p>This error comes up:</p> <pre><code>Traceback (most recent call last): File &quot;C:\30_py\google\chat\test_chat.py&quot;, line 64, in &lt;module&gt; main() File &quot;C:\30_py\google\chat\test_chat.py&quot;, line 57, in main get_msg() File &quot;C:\30_py\google\chat\test_chat.py&quot;, line 46, in get_msg result = chat.spaces().messages().list(name='spaces/123456789').execute() ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: 'Resource' object has no attribute 'list' </code></pre> <p>I tested the authorization and the space name in the <a href="https://developers.google.com/workspace/chat/api/reference/rest/v1/spaces.messages/list" rel="nofollow noreferrer">API explorer</a> and it worked. I was able to get a list of spaces using a different endpoint, so the credentials/token/logic works. However, I am stuck trying to get this endpoint working, i.e. fixing this attribute error. Any ideas and recommendations are appreciated.</p>
<python><attributeerror><google-chat>
2024-03-08 23:44:42
1
493
user3460486