| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
79,023,397
| 13,392,257
|
TypeError: Object of type Project is not JSON serializable
|
<p>My code:</p>
<p>main.py</p>
<pre><code>from fastapi import Depends, FastAPI
from fastapi.responses import JSONResponse
from sqlalchemy.orm import Session

from . import models, schemas
from .database import SessionLocal, engine

models.Base.metadata.create_all(bind=engine)

app = FastAPI()
# Dependency
def get_db():
db = SessionLocal()
try:
yield db
finally:
db.close()
@app.get("/projects/", response_model=list[schemas.ProjectSchema])
def list_project(db: Session = Depends(get_db)):
""" Get all projects"""
projects = db.query(models.Project).all()
print("XXX_ ", projects) # XXX_ [<src.models.Project ...]
return JSONResponse(content={"message": projects})
</code></pre>
<p>models.py</p>
<pre><code>from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import relationship
from .database import Base
class Project(Base):
__tablename__ = "projects"
id = Column(Integer, primary_key=True)
name = Column(String, nullable=False, unique=True)
</code></pre>
<p>schemas.py</p>
<pre><code>from typing import List, Union
from pydantic import BaseModel
class ProjectSchema(BaseModel):
name: str
class Config:
orm_mode = True
</code></pre>
<p>I get the following error for a <code>GET /projects</code> request:</p>
<blockquote>
<p>TypeError: Object of type Project is not JSON serializable</p>
</blockquote>
<p>I have read <a href="https://stackoverflow.com/questions/64236572/fastapi-typeerror-object-of-type-modelmetaclass-is-not-json-serializable">FastAPI TypeError: Object of type 'ModelMetaclass' is not JSON serializable</a>.</p>
<p>I checked that <code>response_model=list[schemas.ProjectSchema]</code> is correct.
How can I fix this error?</p>
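<p>The failure can be reproduced with the stdlib alone: <code>JSONResponse</code> ultimately calls <code>json.dumps</code>, which rejects arbitrary objects such as SQLAlchemy models. A minimal sketch (the class here is just a stand-in for the real model) of the error and of one conversion that avoids it — another common option is to return the ORM objects directly and let the declared <code>response_model</code> with <code>orm_mode</code> do the serialization:</p>

```python
import json

class Project:  # stand-in for the SQLAlchemy model
    def __init__(self, id, name):
        self.id, self.name = id, name

projects = [Project(1, "alpha"), Project(2, "beta")]

# This is what JSONResponse does internally -- and why it fails
try:
    json.dumps({"message": projects})
except TypeError as exc:
    error = str(exc)

# Converting to plain dicts first makes the payload serializable
payload = [{"id": p.id, "name": p.name} for p in projects]
encoded = json.dumps({"message": payload})
```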
|
<python><fastapi>
|
2024-09-25 14:20:21
| 2
| 1,708
|
mascai
|
79,023,187
| 9,779,026
|
Enumerating all possible lists (of any length) of non-negative integers
|
<p>I would like to generate/enumerate all possible lists of non-negative integers such that the algorithm will generate lists like the following at some point</p>
<pre class="lang-py prettyprint-override"><code>[1]
[24542,0]
[245,904609,848,24128,350,999]
</code></pre>
<p>In other words, for all possible non-negative integers, generate all possible lists which contain that many non-negative integers.</p>
<p>I have figured that the trick for a list with two numbers is to enumerate their values diagonally like this</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>first value\second value</th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>0</strong></td>
<td>0 (this will be generated first)</td>
<td>2 (this third etc.)</td>
<td>5</td>
<td>9</td>
</tr>
<tr>
<td><strong>1</strong></td>
<td>1 (this second)</td>
<td>4</td>
<td>8</td>
<td></td>
</tr>
<tr>
<td><strong>2</strong></td>
<td>3</td>
<td>7</td>
<td></td>
<td></td>
</tr>
<tr>
<td><strong>3</strong></td>
<td>6</td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
<pre class="lang-py prettyprint-override"><code>def genpair():
x = 0
y = 0
yield x,y
maxx = 0
while True:
maxx += 1
x = maxx
y = 0
while x >= 0:
yield x,y
x -= 1
y += 1
gen = genpair()
for i in range(10):
print(next(gen))
</code></pre>
<p>But does the same trick (or another) also make this work for lists of arbitrary length?</p>
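<p>For what it's worth, the diagonal idea does generalize: diagonalize over <code>n = length + sum</code> of the list, and for each <code>(length, total)</code> pair enumerate the compositions via stars and bars. Every finite list has a finite <code>n</code>, so each one appears eventually. A sketch (one of several possible orderings):</p>

```python
from itertools import combinations, count

def gen_lists():
    # Diagonalise over n = length + sum of elements; every finite list
    # of non-negative integers has a finite n, so each appears eventually.
    for n in count(1):
        for length in range(1, n + 1):
            total = n - length
            slots = total + length - 1
            # Stars and bars: choose the positions of the length-1 "bars"
            for bars in combinations(range(slots), length - 1):
                prev, out = -1, []
                for b in bars:
                    out.append(b - prev - 1)
                    prev = b
                out.append(slots - prev - 1)
                yield out

gen = gen_lists()
first_seven = [next(gen) for _ in range(7)]
```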
|
<python><list><enumeration>
|
2024-09-25 13:33:22
| 2
| 1,437
|
2080
|
79,023,175
| 1,949,081
|
Pandas write parquet files to S3 has partition limit of 1024
|
<p>I have a pandas dataframe which I am writing to S3 using the PyArrow engine. I need the data partitioned by a column, but the PyArrow engine throws an error that more than 1024 partitions cannot be written. Is there a way to overcome this limitation?</p>
<pre><code>df.to_parquet(s3_output_path,
compression='snappy',
engine = 'pyarrow',
basename_template = 'part-{i}' + '.parquet',
partition_cols = ['a_id'],
existing_data_behavior = 'overwrite_or_ignore')
</code></pre>
<p>Stack trace</p>
<pre><code>Traceback (most recent call last):
  File "/opt/predict.py", line 149, in <module>
    llm.demo()
  File "/opt/predict.py", line 99, in demo
    df1.to_parquet(s3_output_path,
  File "/usr/local/lib/python3.10/dist-packages/pandas/util/_decorators.py", line 207, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pandas/core/frame.py", line 2835, in to_parquet
    return to_parquet(
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parquet.py", line 420, in to_parquet
    impl.write(
  File "/usr/local/lib/python3.10/dist-packages/pandas/io/parquet.py", line 186, in write
    self.api.parquet.write_to_dataset(
  File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py", line 3153, in write_to_dataset
    ds.write_dataset(
  File "/usr/local/lib/python3.10/dist-packages/pyarrow/dataset.py", line 930, in write_dataset
    _filesystemdataset_write(
  File "pyarrow/_dataset.pyx", line 2737, in pyarrow._dataset._filesystemdataset_write
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Fragment would be written into 34865 partitions. This exceeds the maximum of 1024
</code></pre>
<p>Update:</p>
<p>Adding <code>max_partitions</code> to the parameters produced an unknown-parameter error:</p>
<pre><code>File "/usr/local/lib/python3.10/dist-packages/pandas/io/parquet.py", line 186, in write
  self.api.parquet.write_to_dataset(
File "/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py", line 3137, in write_to_dataset
  write_options = parquet_format.make_write_options(**kwargs)
File "pyarrow/_dataset_parquet.pyx", line 181, in pyarrow._dataset_parquet.ParquetFileFormat.make_write_options
File "pyarrow/_dataset_parquet.pyx", line 514, in pyarrow._dataset_parquet.ParquetFileWriteOptions.update
TypeError: unexpected parquet write option: max_partitions
|
<python><python-3.x><pandas><parquet><pyarrow>
|
2024-09-25 13:30:21
| 1
| 5,528
|
slysid
|
79,022,883
| 7,953,924
|
Mutate a column in Ibis when column name has spaces
|
<p>Given a dataframe that has a column with a space:</p>
<pre><code>ibis.memtable({
'colname with space': ['a', 'b', 'c']
})
</code></pre>
<p>I want to mutate this column. How do I reference it in the <code>.mutate</code> method? One way is <code>table['colname with space']</code>, however I consider this at best a workaround that doesn't work with anonymous tables.</p>
|
<python><ibis>
|
2024-09-25 12:23:54
| 1
| 1,381
|
Liudvikas Akelis
|
79,022,862
| 1,854,182
|
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position ???: invalid start byte
|
<p>I am working with byte strings that include non-ASCII characters, specifically Hebrew text, and I encountered a UnicodeDecodeError when trying to decode the byte string to UTF-8. Here's the problematic code:</p>
<pre><code>t = b'\xd7\x91\xd7\x9c\xd7\xa9\xd7\x95\xd7\xa0\xd7\x99\xd7\xaa:\xa0 '
print(t.decode('utf8'))
</code></pre>
<p>The error message I receive is:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 15: invalid start byte
</code></pre>
<p>From my understanding, the byte 0xa0 represents a non-breaking space in some encodings, but it seems to cause a problem in UTF-8 decoding. How can I correctly decode this byte string, especially when it contains mixed content like Hebrew characters and potential non-breaking spaces?</p>
<p>Is there a specific method or workaround in Python to handle such scenarios where non-standard or extended ASCII characters (like non-breaking spaces) are embedded within UTF-8 encoded byte strings?</p>
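<p>For reference, a sketch of two stdlib-only options: decode leniently so bad bytes become U+FFFD, or — if the stray <code>0xa0</code> bytes really are non-breaking spaces pasted in from a Latin-1/cp1252 source — decode the failing tail separately (the offset 15 below comes straight from the traceback and is specific to this string):</p>

```python
t = b'\xd7\x91\xd7\x9c\xd7\xa9\xd7\x95\xd7\xa0\xd7\x99\xd7\xaa:\xa0 '

# Option 1: lenient decode -- undecodable bytes become U+FFFD
lenient = t.decode('utf-8', errors='replace')

# Option 2: decode the known-good UTF-8 prefix, then treat the rest as
# Latin-1, where 0xa0 is a non-breaking space
mixed = t[:15].decode('utf-8') + t[15:].decode('latin-1')
```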
|
<python><unicode><utf-8><byte><decoding>
|
2024-09-25 12:19:45
| 1
| 888
|
Ofer Rahat
|
79,022,782
| 4,751,700
|
Return all rows that have at least one null in one of the columns using Polars
|
<p>I need all the rows that have a null in one of a predefined set of columns.
I basically need <a href="https://stackoverflow.com/questions/78983868/keep-only-rows-that-have-at-least-one-null">this</a>, but I have one more requirement that I can't seem to figure out:
not every column needs to be checked.</p>
<p>I have a function that returns the names of the columns that need to be checked in a list.</p>
<p>Assume this is my dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.from_repr("""
βββββββββ¬ββββββββ¬ββββββ¬ββββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β str β str β str β bool β
βββββββββͺββββββββͺββββββͺββββββββ‘
β abc β null β u β true β
β def β abc β v β true β
β ghi β def β nullβ true β
β jkl β uvw β x β true β
β mno β xyz β y β null β
β qrs β null β z β null β
βββββββββ΄ββββββββ΄ββββββ΄ββββββββ
""")
</code></pre>
<p>Doing this gives me all rows where any of the columns contain <code>null</code>.</p>
<pre class="lang-py prettyprint-override"><code>df.filter(pl.any_horizontal(pl.all().is_null()))
</code></pre>
<pre><code>shape: (4, 4)
βββββββ¬βββββββ¬βββββββ¬βββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β str β str β str β bool β
βββββββͺβββββββͺβββββββͺβββββββ‘
β abc β null β u β true β
β ghi β def β null β true β
β mno β xyz β y β null β
β qrs β null β z β null β
βββββββ΄βββββββ΄βββββββ΄βββββββ
</code></pre>
<p>Sometimes it's fine for column <code>c</code> to contain a <code>null</code> so let's not check it.</p>
<p>What I want is this:</p>
<pre><code>βββββββββ¬ββββββββ¬ββββββ¬ββββββββ
β a β b β c β d β
β --- β --- β --- β --- β
β str β str β str β bool β
βββββββββͺββββββββͺββββββͺββββββββ‘
β abc β null β u β true β
β mno β xyz β y β null β
β qrs β null β z β null β
βββββββββ΄ββββββββ΄ββββββ΄ββββββββ
</code></pre>
<p>Row 3 is not shown even though there is a null value in column c.</p>
<pre class="lang-py prettyprint-override"><code>columns = ["a", "b", "d"]
df.filter(pl.any_horizontal(pl.all(*columns).is_null()))
</code></pre>
<p>This gives me</p>
<blockquote>
<p>SchemaError: invalid series dtype: expected <code>Boolean</code>, got <code>str</code> for series with name <code>a</code></p>
</blockquote>
<p>How do I get the full rows of data that contain a <code>null</code> in one of <code>["a", "b", "d"]</code> columns?</p>
|
<python><dataframe><null><python-polars>
|
2024-09-25 12:04:47
| 2
| 391
|
fanta fles
|
79,022,742
| 6,054,066
|
How to trigger `tenacity` retry without raising exception?
|
<p>How can I trigger a retry within the Python <code>tenacity</code> library without raising an exception?</p>
<p>The reason I want this is that while debugging at every raise of an error, the debugger stops. (This can be switched off by unticking <code>User Uncaught Exceptions</code> (in VSCode), but that means the debugger doesn't stop at other errors either.)</p>
<p>I'd like to trigger the retry mechanism for tenacity without raising an error.
Is this possible?</p>
|
<python><exception><tenacity>
|
2024-09-25 11:54:54
| 1
| 450
|
semyd
|
79,022,388
| 2,071,807
|
Factory Boy instance with start date before end date using SelfAttribute and Faker
|
<p>Similar to <a href="https://stackoverflow.com/questions/35508293/faker-having-end-date-greater-than-start-date">this question about Laravel</a> I'm hoping to create a FactoryBoy instance with start date before end date like this:</p>
<pre class="lang-py prettyprint-override"><code>
from dataclasses import dataclass
from datetime import date
import factory
@dataclass
class DateRange:
start: date
end: date
class DateRangeFactory(factory.Factory):
class Meta:
model = DateRange
start = factory.Faker("date")
end = factory.Faker("date_between_dates", date_start=factory.SelfAttribute("..start"))
print(DateRangeFactory())
</code></pre>
<p>This gives a hard-to-understand error:</p>
<pre><code>faker.providers.date_time.ParseError: Can't parse date string `1984-11-11`
</code></pre>
<p>It looks to me like <code>1984-11-11</code> should be a perfectly acceptable date. What am I doing wrong?</p>
|
<python><faker><factory-boy>
|
2024-09-25 10:32:18
| 1
| 79,775
|
LondonRob
|
79,022,245
| 2,537,394
|
OpenCV doesn't read images from some directories
|
<p>I'm trying to read a 16-bit grayscale PNG with OpenCV. This image is stored on a network share. When I try to load the image with <code>cv.imread()</code> nothing is returned:</p>
<pre class="lang-py prettyprint-override"><code>import cv2 as cv
print(img_paths[0])
print(type(img_paths[0]))
print(img_paths[0].exists())
img = cv.imread(img_paths[0].as_posix(), cv.IMREAD_ANYDEPTH)
print(type(img))
>>> M:\path\to\Meß-ID 1\images\img1.png
>>> <class 'pathlib.WindowsPath'>
>>> True
>>> <class 'NoneType'>
</code></pre>
<p>As you can see the file certainly exists. It even is loadable with PIL:</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
img = Image.open(img_paths[0])
print(img)
>>> <PIL.PngImagePlugin.PngImageFile image mode=I;16 size=2560x300 at 0x27F77E95940>
</code></pre>
<p>And only when I copy the image to the directory of the .ipynb I'm working in, openCV is able to load the image:</p>
<pre class="lang-py prettyprint-override"><code>import shutil
shutil.copy(img_paths[0], "./test_img.png")
img = cv.imread("./test_img.png", cv.IMREAD_ANYDEPTH)
print(type(img))
>>> <class 'numpy.ndarray'>
</code></pre>
<p>Why is OpenCV refusing to load the image?</p>
|
<python><opencv><unicode><path>
|
2024-09-25 09:57:49
| 2
| 731
|
YPOC
|
79,021,917
| 9,542,989
|
Deploying Python-based Chat Bots for MS Teams
|
<p>I am trying to deploy a simple Microsoft Teams bot using one of the provided samples, but I cannot seem to get it running.</p>
<p>Here are the steps that I followed:</p>
<ol>
<li>Cloned the BotBuilder-Samples repository and navigated to the
<a href="https://github.com/microsoft/BotBuilder-Samples/tree/main/samples/python/02.echo-bot" rel="nofollow noreferrer">Python Echo Bot sample</a>.</li>
<li>Ran the app using <code>python app.py</code> after creating a virtual
environment and installing the dependencies.</li>
<li>Started a ngrok tunnel for the running application using <code>ngrok http 3879</code>.</li>
<li>Attempted to register the bot at <a href="https://dev.botframework.com/bots/new" rel="nofollow noreferrer">https://dev.botframework.com/bots/new</a>. I chose a Single Tenant app, created an App Registration in my Azure tenant and pasted the App ID and the App Tenant ID in the given fields. The Messaging Endpoint that I registered was <code>https://2aa2-2302-d987-9090-188v-e73d-45c1-24ae-3a25.ngrok-free.app/api/messages</code>.</li>
<li>Added Microsoft Teams as a channel; this redirected me to Microsoft Teams, to a chat with the bot, however, I cannot send any messages to the bot (screenshot given below). I am unable to send any messages to the bot via the web UI either because I am faced with the following error:</li>
</ol>
<blockquote>
<p>There was an error sending this message to your bot: HTTP status code
Unauthorized</p>
</blockquote>
<p><a href="https://i.sstatic.net/juWAWLFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/juWAWLFd.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong here?</p>
<p>I also have a couple of other related questions,</p>
<ol>
<li>When I am deploying this bot in production is it necessary for me to
use the Azure Bot Service? Or as long as I register it in the above
portal (with the correct messaging endpoint), can I deploy it
anywhere I like?</li>
<li>Although it is not working for me at the moment, I can see how one
is to interact with the bot via direct messages, but is it also
possible to add it to a channel and receive messages when the bot is
mentioned?</li>
</ol>
<p>I've already had a look at <a href="https://stackoverflow.com/questions/71111729/how-to-deploy-a-ms-teams-bot">this</a>, but I believe my question is more specific.</p>
<p>Note: I am quite open to other approaches to deploying the bot, however, I do not want to be tied to an Azure service and I want to be able to deploy the bot in my own server.</p>
|
<python><bots><botframework><microsoft-teams>
|
2024-09-25 08:51:39
| 2
| 2,115
|
Minura Punchihewa
|
79,021,750
| 12,556,481
|
find_elements only gets the first element when using Selenium
|
<p>I have a Python program that uses undetected Chromedriver to scrape product data from Walmart. The following is a simplified version of my code, but it only gets the first element. Can someone show me where the issue is?</p>
<pre><code>from time import sleep
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
import undetected_chromedriver as uc
url='https://www.walmart.com/search?q=lego&facet=retailer_type%3AWalmart'
driver = uc.Chrome(enable_cdp_events=True)
driver.implicitly_wait(15)
driver.maximize_window()
driver.get(url)
sleep(5)
for _ in range(3):
try:
scroll_value = driver.execute_script("return document.body.clientHeight;")
driver.execute_script(f"return window.scrollBy(0, {str(scroll_value)})")
sleep(1)
except:
pass
items_list = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "/html/body/div[1]/div[1]/div/div[2]/div/main/div/div[2]/div/div/div[1]/div[2]/div/section/div"))).find_elements(By.CSS_SELECTOR, '[data-testid="list-view"]')
for raw_item in items_list:
product_name = WebDriverWait(raw_item, 5).until(EC.presence_of_element_located((By.XPATH, "//div[2]/span/span"))).text.strip()
print(product_name)
driver.quit()
</code></pre>
|
<python><python-3.x><selenium-webdriver>
|
2024-09-25 08:10:56
| 0
| 309
|
dfcsdf
|
79,021,726
| 4,718,423
|
Increase Python REST requests efficiency
|
<p>Before, I used SQL joins to get the data, and 10,000 lines would take a fraction of a second:</p>
<pre><code>select technicians.name,
       profession.expertise,
       responsible.name,
       building.locker
from technicians
join profession on technicians.profession = profession.id
join responsible on technicians.responsible = responsible.id
join building on technicians.locker = building.id
</code></pre>
<p>With the migration to the cloud, I no longer have access to this database and need to go via REST. Unfortunately, translating this into Python REST requests results in a 20 s stall for just 50 lines; 10,000 lines would take hours. Is that because I have to iterate over all the results and request the related relationship data one call at a time?</p>
<p>Pseudo code in Python:</p>
<pre><code>result_list = requests.get("https://sample.com/api/v1/technicians")
for technician in result_list:
    profession = requests.get("https://sample.com/api/v1/technicians/technician_ID/profession")
    responsible = requests.get("https://sample.com/api/v1/technicians/technician_ID/responsible")
    building = requests.get("https://sample.com/api/v1/technicians/technician_ID/building")
</code></pre>
<p>Question: am I doing something wrong regarding the requests I make?</p>
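<p>If the API offers no bulk or expand endpoint, one common mitigation is to issue the per-row requests concurrently, since each one is mostly I/O wait. A stdlib sketch with a stub fetcher standing in for <code>requests.get(url).json()</code> (URLs are illustrative):</p>

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_json(url):
    # Stand-in for requests.get(url).json(); a real version would also
    # reuse a single requests.Session for connection pooling.
    return {"url": url}

technician_ids = range(50)
urls = [
    f"https://sample.com/api/v1/technicians/{tid}/{rel}"
    for tid in technician_ids
    for rel in ("profession", "responsible", "building")
]

# 16 workers issue the 150 calls concurrently instead of serially
with ThreadPoolExecutor(max_workers=16) as pool:
    results = list(pool.map(fetch_json, urls))
```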
|
<python><rest><python-requests>
|
2024-09-25 08:04:18
| 1
| 1,446
|
hewi
|
79,021,676
| 3,581,875
|
Copy AWS S3 Bucket Between Accounts Cross-Region
|
<p>We need to copy the contents of a bucket to a new account in a different AWS region.</p>
<p>The bucket contains ±200K objects, most of which are archived (Glacier), and a large portion of them are quite big (>10GB).</p>
<p>At first we tried the following:</p>
<ol>
<li>Batch script to restore all the objects.</li>
<li>Configure a Lambda trigger to handle restore completion per object.</li>
<li>Execute the copy method with boto3 on the Lambda function.</li>
</ol>
<p>This works well for smaller objects (up to ~2GB), but larger objects take longer than the maximum allowed Lambda duration (15min).</p>
<p>To complete the rest of them (>10K objects) we must use some VM, however VMs are forced to be in a VPC and for some reason boto3 can't handle cross-region copy commands from within a VPC:</p>
<pre><code>An error occurred (AccessDenied) when calling the UploadPartCopy operation: VPC endpoints do not support cross-region requests
</code></pre>
<p>So we resorted to downloading and uploading each object. This takes too long for the number of objects we need to copy.</p>
<p>Can anyone offer a better solution?</p>
<p>NOTE #1: AWS has something called Batch Jobs which may be relevant, but it was a bit difficult to configure and for some reason requires both source and destination buckets to enable versioning (which is disabled in this case).</p>
<p>NOTE #2: We have a role on the destination account which has access to the source account and S3 requests are called with the requester paying for them.</p>
|
<python><amazon-web-services><amazon-s3>
|
2024-09-25 07:50:38
| 1
| 1,152
|
giladrv
|
79,021,659
| 14,998,459
|
Extend LabelEncoder classes
|
<p>I have a <code>LabelEncoder</code> with 500 classes.</p>
<p>To store and load it, I used pickle:</p>
<pre><code>with open('../data/label_encoder_v500.pkl', 'rb') as file:
label_encoder = pickle.load(file)
</code></pre>
<p>I want to add 24 new classes to this encoder, keeping existing labels unchanged.</p>
<pre><code>additional_classes = ['class501', 'class502', ..., 'class524']
</code></pre>
<p>but it seems like this operation does not come out-of-the-box with <code>LabelEncoder</code>. How to do this?</p>
|
<python><scikit-learn><label-encoding>
|
2024-09-25 07:46:37
| 3
| 716
|
TkrA
|
79,021,428
| 1,330,984
|
UI grid alignment suddenly changes during subprocess run
|
<p>My UI looks like this:</p>
<p><a href="https://i.sstatic.net/j3IOMPFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j3IOMPFd.png" alt="enter image description here" /></a></p>
<p>As soon as I hit the "Start" button to run the subprocess, my UI alignment goes wrong; the 2 buttons take up this extra blank space, as you can see from the picture:</p>
<p><a href="https://i.sstatic.net/WiYYjHcw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WiYYjHcw.png" alt="enter image description here" /></a></p>
<p>Here is the source code of the UI layout. What am I doing wrong?:</p>
<pre><code># UI ----------------------------------------------------------------------------
def browse_click(label, hint):
folder_path = filedialog.askdirectory(title=hint)
if folder_path:
label.config(text=folder_path)
root.update()
def disable_buttons():
btn_start.config(state=tk.DISABLED)
btn_browse_1.config(state=tk.DISABLED)
btn_browse_2.config(state=tk.DISABLED)
root.update()
def enable_buttons():
btn_start.config(state=tk.NORMAL)
btn_browse_1.config(state=tk.NORMAL)
btn_browse_2.config(state=tk.NORMAL)
root.update()
def update_status(message, sep='\n'):
status_label.config(text=message)
log_output(message, sep)
root.update()
def log_output(message, sep='\n'):
log_text_box.config(state=tk.NORMAL)
log_text_box.insert(tk.END, message + sep)
log_text_box.config(state=tk.DISABLED)
root.update()
def reset_Log():
log_text_box.config(state=tk.NORMAL)
log_text_box.delete(1.0, tk.END)
log_text_box.config(state=tk.DISABLED)
root.update()
# Create UI
root = tk.Tk()
root.title("Excel Processing Tool")
screen_width = 1200 #root.winfo_screenwidth()
screen_height = root.winfo_screenheight() - 200 # avoid the taskbar height
root.geometry('%dx%d' % (screen_width, screen_height))
root.rowconfigure(4, weight=1) # Expand the text box
root.columnconfigure(1, weight=1) # Expand the input/output folder entry
# Select input folder
btn_browse_1 = tk.Button(root, text="Source Folder: ", anchor="w", command=lambda: browse_click(input_folder_label, "Input Folder"))
btn_browse_1.grid(row=0, column=0, padx=5, pady=5)
input_folder_label = tk.Label(root, borderwidth=2, relief="groove",background="white", anchor="w")
input_folder_label.grid(row=0, column=1, padx=5, pady=5, sticky="ew")
# Select output folder
btn_browse_2 = tk.Button(root, text="Output Folder:", anchor="w", command=lambda: browse_click(output_folder_label, "Output Folder"))
btn_browse_2.grid(row=1, column=0, padx=5, pady=5)
output_folder_label = tk.Label(root, borderwidth=2, relief="groove", background="white", anchor="w")
output_folder_label.grid(row=1, column=1, padx=5, pady=5, sticky="ew")
# Start button
btn_start = tk.Button(root, text="START", command=run_scripts, width="10")
btn_start.grid(row=2, column=0, columnspan=2, pady=(10,0))
# Status label
status_label = tk.Label(root, text="(0/0)")
status_label.grid(row=3, column=0, padx=5, pady=(0,2), sticky="w")
log_text_box = tk.Text(root, state=tk.DISABLED)
log_text_box.grid(row=4, column=0, columnspan=2, padx=5, pady=(0,5), sticky="wens")
# RUN UI
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-09-25 06:39:42
| 1
| 16,376
|
Tom
|
79,021,346
| 2,084,314
|
Mocking a nested function call with pytest
|
<p>If I have a function, A1, that is being tested, and that function imports and calls function B1, which itself calls B2 from the same file, how do I mock function B2?</p>
<p>If I mock with <code>mocker.patch('B.B2', return_value=return_value)</code>, the mocked function doesn't get called. If I mock with 'A.B2', 'A.B1.B2', 'A.B.B2', or anything like that, I get errors because those don't exist. I understand if I wanted to mock B1 in this situation, I would mock 'A.B1' because of how it's being imported and where it's being used, but I can't find a way to mock B2 which is being called inside B1.</p>
<p>File layout is as follows, to give a visual.</p>
<p>File B</p>
<pre><code>def B2():
# does a thing, should be mocked
def B1():
B2()
</code></pre>
<p>File A</p>
<pre><code>from B import B1
def A1():
B1()
</code></pre>
<p>Test File</p>
<pre><code>def test_a1():
from A import A1
A1()
</code></pre>
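<p>For what it's worth, patching <code>'B.B2'</code> normally does work in this layout, because <code>B1</code> looks <code>B2</code> up in module <code>B</code>'s globals at call time — <code>from B import B1</code> only rebinds <code>B1</code>, not <code>B2</code>. A self-contained reproduction with throwaway modules (names mirror the question); if this succeeds but the real test doesn't, the likely culprits are import timing or a different real module path:</p>

```python
import sys
import types
from unittest import mock

# Build throwaway modules B and A mirroring the file layout
B = types.ModuleType("B")
exec("def B2():\n    return 'real'\n\ndef B1():\n    return B2()", B.__dict__)
sys.modules["B"] = B

A = types.ModuleType("A")
exec("from B import B1\n\ndef A1():\n    return B1()", A.__dict__)
sys.modules["A"] = A

# Patching the name where it is *looked up* (module B) is what works
with mock.patch("B.B2", return_value="mocked"):
    assert A.A1() == "mocked"
```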
|
<python><pytest>
|
2024-09-25 06:13:21
| 1
| 442
|
Beez
|
79,020,997
| 169,252
|
Installing module with pip fails, is it possible to edit the versions?
|
<p>I am trying to install pywallet</p>
<p><code>pip install pywallet</code></p>
<p>This fails at some point:</p>
<pre><code> Downloading protobuf-3.0.0a3.tar.gz (88 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-_xihs08_/protobuf_2395cbc163b34a7cb79f530d9bda1b24/setup.py", line 29, in <module>
from distutils.command.build_py import build_py_2to3 as _build_py
ImportError: cannot import name 'build_py_2to3' from 'distutils.command.build_py' (/usr/lib/python3.12/site-packages/setuptools/_distutils/command/build_py.py)
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
</code></pre>
<p>I understand this is not a pip error, but a dependency inside the module fails to build.
Somewhere I read this is due to not fixing the version to a specific one, and the suggestion was to find the culprit and edit the setup.py.</p>
<p>But - as far as I can see, after the install fails everything is cleaned up.
Can I somehow download this thing, and manually edit the dependencies?</p>
<p><code>pip download pywallet</code> fails with the same error.</p>
|
<python><pip><module>
|
2024-09-25 02:59:23
| 1
| 6,390
|
unsafe_where_true
|
79,020,834
| 6,601,575
|
How to set a daily task in an hourly DAG in Airflow?
|
<p>I have an hourly scheduled DAG in Airflow, the Graph may look like this:</p>
<pre><code>A >> B >> C
</code></pre>
<p>Tasks <code>A</code> and <code>B</code> need to be executed every hour, but task <code>C</code> only needs to be executed once a day. Can I achieve this in Airflow?</p>
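<p>One option (hedged — operator choice and naming are illustrative) is to keep the hourly schedule and gate <code>C</code> behind a <code>ShortCircuitOperator</code> placed between <code>B</code> and <code>C</code>, whose callable returns <code>True</code> only once per day, e.g. on the midnight run. The gating logic itself is trivial:</p>

```python
from datetime import datetime

def run_c_this_hour(logical_date: datetime) -> bool:
    # Used as the python_callable of a ShortCircuitOperator: every run
    # except the first of the day short-circuits, skipping C downstream.
    return logical_date.hour == 0
```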
|
<python><airflow><airflow-2.x>
|
2024-09-25 01:16:56
| 1
| 834
|
Rinze
|
79,020,773
| 687,112
|
Alternative to setting self.__class__ to something else?
|
<p>What's the right way to add a method to any member of a set of sibling instances (i.e. each instance's class inherits from a common base class)?</p>
<p>The siblings come from <code>fsspec</code> and are out of my control, but it's like this.</p>
<pre class="lang-py prettyprint-override"><code>class Base:
def open(self): pass
def read(self): pass
class A(Base):
def read(self): pass
class B(Base):
def read(self): pass
</code></pre>
<p>There could be a <code>C</code> and some plugin might add a <code>D</code> ... these are out of my control. All my code gets is an instance of some subclass of <code>Base</code>.</p>
<p>My code gets an instance and needs to hand it off to another library expecting an instance of a class inheriting from <code>Base</code>. But I want this instance to have a new feature (not something that would be of general utility outside my code, so no changes to <code>fsspec</code>). In code, something like this.</p>
<pre class="lang-py prettyprint-override"><code>with fsspec.open(...) as f: # f is an instance of `A`, `B`, etc.
g = dunno_what(f) # asking about this part
dataset = xarray.open_dataset(g) # handing off to xarray
</code></pre>
<p>Now <code>xarray</code> is going to behave differently depending on what <code>g</code> implements, and requires <code>g</code> to be an instance of <code>Base</code>.</p>
<p>The way I see it, my feature would come as a method of a mixin.</p>
<pre class="lang-py prettyprint-override"><code>class Mixin:
def feature(self): pass
</code></pre>
<p>Then the following appears to achieve my goals but has <a href="https://stackoverflow.com/a/13280789/687112">lots of drawbacks</a>.</p>
<pre class="lang-py prettyprint-override"><code>for Item in (A, B):
# `item` is the `f` in the minimal example above
item = Item()
# here's what I've tried, but is advised against
item.__class__ = type("Item", (item.__class__, Mixin), {})
# it works though ...
assert isinstance(item, Base)
assert item.open.__func__ is Base.open
assert item.read.__func__ is Item.read
assert item.feature.__func__ is Mixin.feature
</code></pre>
<p>Assigning to <code>item.__class__</code> is not for novices, so I am interested in the suggested alternatives given by <a href="https://stackoverflow.com/a/13280789/687112">abarnert</a> (third one is outside of my control):</p>
<ul>
<li>"Use a factory to create an instance of the appropriate class dynamically, instead of creating a base instance and then munging it into a derived one."</li>
<li>"Use <code>__new__</code> or other mechanisms to hook the construction."</li>
</ul>
<p>I can't think of how to do either one while still achieving the goals given above as assertions. Thank you for showing me the way!</p>
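<p>For the record, one reading of the factory suggestion, sketched with a cached dynamic subclass (class names illustrative; whether it fits depends on being able to construct the instance yourself rather than receiving an already-open one from <code>fsspec.open</code>):</p>

```python
class Base:
    def open(self): pass
    def read(self): pass

class A(Base):
    def read(self): pass

class Mixin:
    def feature(self):
        return "feature"

_combined = {}

def make_with_mixin(cls, *args, **kwargs):
    # Factory: build (and cache) a subclass mixing `Mixin` into `cls`,
    # then construct the instance from that class directly, instead of
    # mutating __class__ on an existing instance.
    if cls not in _combined:
        _combined[cls] = type(cls.__name__ + "WithFeature", (Mixin, cls), {})
    return _combined[cls](*args, **kwargs)

g = make_with_mixin(A)
```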
|
<python>
|
2024-09-25 00:33:31
| 1
| 1,165
|
Ian
|
79,020,533
| 1,496,743
|
Using multiple client certificates with Python and Selenium
|
<p>I'm working on a web-scrape project using Python and Selenium with a Chrome driver, which requires client certificates to access pages. I have 2 scenarios it must handle:</p>
<ol>
<li>Different certificates allow access to different URLs (e.g. Certificate A accesses URLs 1, 2 and 3, and Certificate B accesses URLs 4, 5 and 6)</li>
<li>Multiple certificates can access the same URL (e.g. Certificates A and B can both access URLs 7, 8 and 9; those URLs return different company-specific data with each different cert)</li>
</ol>
<p>I'm on Windows/Windows Server, and have used the Registry entry AutoSelectCertificateForUrls, which auto-selects a certificate based on URL (or wildcard). But for scenario #2 above, it does no good.</p>
<p>So ideally, I'd like to pass the URL and cert name to the Python script, then have Chrome use that cert when accessing the specified URL, but I'm not seeing a way to do that. So far, I have:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait, Select
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--allow-insecure-localhost')
chrome_options.add_argument('--ignore-ssl-errors=yes')
chrome_options.add_argument('--ignore-certificate-errors')
driver = webdriver.Chrome(options=chrome_options)
driver.get(url)
:
:
# scrape code here
</code></pre>
<p>Does anyone have good step-by-step instructions to handle this?</p>
|
<python><google-chrome><selenium-webdriver>
|
2024-09-24 22:14:57
| 1
| 706
|
VBStarr
|
79,020,484
| 6,253,337
|
Topic modelling many documents with low memory overhead
|
<p>I've been working on a topic modelling project using <a href="https://maartengr.github.io/BERTopic/index.html" rel="nofollow noreferrer">BERTopic</a> 0.16.3, and the preliminary results were promising. However, as the project progressed and the requirements became apparent, I ran into a specific issue with scalability.</p>
<p>Specifically:</p>
<ul>
<li>For development/testing, it needs to train reasonably quickly on a moderate number of documents (tens of thousands to low hundreds of thousands)
<ul>
<li>Our dev machines are Macs, so this probably has to be done on CPU</li>
</ul>
</li>
<li>For production, it needs to train on a large number of documents (several million) without blowing up memory usage
<ul>
<li>For a baseline, with the default settings on my machine, BERTopic has a peak memory usage of roughly 35 kB per document, which easily becomes hundreds of GBs or even TBs for the amount of data that will be provided in production</li>
<li>Ideally, this would have peak memory usage sublinear in the number of documents.</li>
</ul>
</li>
</ul>
<p>That last requirement necessitates batching the documents, since loading them all into memory at once requires linear memory. So, I've been looking into clustering algorithms that work with <a href="https://maartengr.github.io/BERTopic/getting_started/online/online.html" rel="nofollow noreferrer">online topic modelling</a>. BERTopic's documentation suggests scikit-learn's <code>MiniBatchKMeans</code>, but the results I'm getting from that aren't very good.</p>
<p>Some models I've looked at include:</p>
<ul>
<li><code>Birch</code> via scikit-learn: uses even more memory than BERTopic's default <code>HDBSCAN</code> even when batched. Also runs much slower.</li>
<li><code>IncrementalDBSCAN</code> via <a href="https://pypi.org/project/incdbscan/" rel="nofollow noreferrer">incdbscan</a>: Seemed promising at first, but the runtime and eventually memory ballooned. For ~120k documents in batches of 5000, it didn't use more than 4GB of RAM in the first 3½ hours, but didn't finish within ten hours, and used nearly 40GB of RAM at some point in the middle.</li>
<li><code>AgglomerativeClustering</code> via scikit-learn: gave very good results from initial testing (perhaps even better than <code>HDBSCAN</code>), but it doesn't implement the <code>partial_fit</code> method. I found <a href="https://stackoverflow.com/a/54721234/6253337">this answer</a> on a different question which suggests it's possible to train two of them using single linkage independently and then merge them, but it gives no indication as to how.</li>
</ul>
<p>The latter two also don't provide the <code>predict</code> method, limiting their utility.</p>
<p>I am fairly new to the subject, so perhaps I'm approaching this completely wrong and the immediate problem I'm trying to solve has no solution. So to be clear, at the base level, the question I'm trying to answer is: <strong>How do I perform topic modelling (and get good results) on a large number of documents without using too much memory?</strong></p>
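For reference, whichever online model ends up giving acceptable topics, the constant-memory training loop itself can be sketched like this with scikit-learn's `partial_fit` (the random arrays below are a stand-in for document-embedding batches streamed from disk; names and sizes are illustrative):

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)

def embedding_batches(n_batches, batch_size, dim=16):
    # Stand-in for reading precomputed document embeddings batch by batch.
    for _ in range(n_batches):
        yield rng.normal(size=(batch_size, dim)).astype(np.float32)

model = MiniBatchKMeans(n_clusters=10, random_state=0)
for batch in embedding_batches(n_batches=20, batch_size=500):
    model.partial_fit(batch)  # peak memory ~ one batch, not the whole corpus

# predict() is available afterwards, unlike with some of the other clusterers
labels = model.predict(rng.normal(size=(5, 16)).astype(np.float32))
```

Peak memory is then bounded by one batch of embeddings plus the model state, regardless of corpus size.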
|
<python><cluster-analysis><topic-modeling>
|
2024-09-24 21:57:32
| 1
| 1,053
|
Bbrk24
|
79,020,422
| 3,837,778
|
How to run pytorch inside docker containers on two GCP VM?
|
<p>I have two GCP VMs. On both VMs, I run a Docker container.</p>
<p>I run
<code>docker run --gpus all -it --rm --entrypoint /bin/bash -p 8000:8000 -p 7860:7860 -p 29500:29500 lf</code></p>
<p>I am trying <a href="https://github.com/hiyouga/LLaMA-Factory" rel="nofollow noreferrer">llama-factory</a>.</p>
<p>In one container, I run
<code>FORCE_TORCHRUN=1 NNODES=2 RANK=1 MASTER_ADDR=34.138.7.129 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml</code>,
where 34.138.7.129 is external ip of vm</p>
<p>In the other container, I run
<code>FORCE_TORCHRUN=1 NNODES=2 RANK=0 MASTER_ADDR=34.138.7.129 MASTER_PORT=29500 llamafactory-cli train examples/train_lora/llama3_lora_sft_ds3.yaml</code>.</p>
<p>But I got</p>
<pre><code>[rank1]: torch.distributed.DistBackendError: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1970, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.20.5
[rank1]: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error.
[rank1]: Last error:
[rank1]: socketStartConnect: Connect to 172.17.0.2<49113> failed : Software caused connection abort
E0924 21:26:39.866000 140711615779968 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 484) of binary: /usr/bin/python3.10
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 879, in main
run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 870, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/workspace/LLaMA-Factory/src/llamafactory/launcher.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-09-24_21:26:39
host : 71af1f49abe3
rank : 1 (local_rank: 0)
exitcode : 1 (pid: 484)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
</code></pre>
<p>It seems that PyTorch is using the Docker container IP instead of the GCP VM's external IP.</p>
<p>How to fix this?</p>
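A hedged observation: the failing address (`172.17.0.2`) is the Docker bridge IP, so NCCL is advertising the container's address rather than one reachable from the other VM. A commonly suggested workaround is host networking plus pinning the network interface NCCL/Gloo should use (the interface name here is an assumption — check `ip addr` on the VM; on GCP it is often `ens4`):

```shell
# Share the VM's network stack instead of the Docker bridge
docker run --gpus all -it --rm --network host --entrypoint /bin/bash lf

# Inside the container, before launching llamafactory-cli / torchrun:
export NCCL_SOCKET_IFNAME=ens4   # assumption: adjust to your interface
export GLOO_SOCKET_IFNAME=ens4
export NCCL_DEBUG=INFO           # verbose diagnostics, as the error message suggests
```

With `--network host` the `-p` port mappings become unnecessary, since the container binds directly on the VM's interfaces.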
|
<python><docker><pytorch>
|
2024-09-24 21:31:29
| 2
| 9,056
|
BAE
|
79,020,378
| 8,605,685
|
Do I need to use timezones with timedelta and datetime.now?
|
<p>If I only use <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.now" rel="nofollow noreferrer"><code>datetime.now()</code></a> with <a href="https://docs.python.org/3/library/datetime.html#datetime.timedelta" rel="nofollow noreferrer"><code>timedelta</code></a> to calculate deltas, is it safe to ignore time zones? For example, is there a case where if a start time is before daylight savings, and an end time is after, that I will get the wrong result if I don't use a time zone aware call to <code>datetime.now()</code>?</p>
|
<python><datetime><time><timezone><timedelta>
|
2024-09-24 21:13:16
| 2
| 12,587
|
Salvatore
|
79,020,365
| 5,036,928
|
Numerical method used by sympy nsolve
|
<p>How does <code>sympy</code>'s <code>nsolve</code> do what it does?</p>
|
<python><sympy>
|
2024-09-24 21:08:45
| 1
| 1,195
|
Sterling Butters
|
79,020,257
| 12,076,197
|
JSON to Pandas Dataframe with null values and missing columns
|
<p>I am working with a JSON file that is designed as such:</p>
<pre><code>f = {'results':
[{'tables':
[{'rows': [{'column1': 'dog', 'column2': 'blue', 'column3': 'sad'},
{ 'column2': 'red', 'column3': 'happy'},
{'column1': 'bird', 'column2': 'green'}]
}]}]}
</code></pre>
<p>DESIRED pandas dataframe that accounts for rows with missing columns:</p>
<pre><code> column1 column2 column3
dog blue sad
red happy
bird green
</code></pre>
<p>Any suggestions are greatly appreciated.</p>
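For reference, one sketch against the sample `f` above uses `pandas.json_normalize`, which descends through the nested lists via `record_path` and fills keys missing from a row with `NaN`:

```python
import pandas as pd

f = {'results':
     [{'tables':
       [{'rows': [{'column1': 'dog', 'column2': 'blue', 'column3': 'sad'},
                  {'column2': 'red', 'column3': 'happy'},
                  {'column1': 'bird', 'column2': 'green'}]
         }]}]}

# Walk results -> tables -> rows and build one record per row dict
df = pd.json_normalize(f, record_path=['results', 'tables', 'rows'])
print(df)
#   column1 column2 column3
# 0     dog    blue     sad
# 1     NaN     red   happy
# 2    bird   green     NaN
```

`df.fillna('')` would reproduce the blank cells shown in the desired output.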
|
<python><json><pandas><dataframe>
|
2024-09-24 20:25:33
| 1
| 641
|
dmd7
|
79,020,232
| 1,735,686
|
Assign multi-index variable values based on the number of elements in a dataframe that match a selection criteria
|
<p>I have a large csv dataset that looks like the following:</p>
<pre><code>id,x,y,z
34295,695.117,74.0177,70.6486
20915,800.784,98.5225,19.3014
30369,870.428,98.742,23.9953
48151,547.681,53.055,174.176
34026,1231.02,73.7678,203.404
34797,782.725,73.9831,218.592
15598,983.502,82.9373,314.081
34076,614.738,86.3301,171.316
20328,889.016,98.9201,13.3068
...
</code></pre>
<p>If I consider each of these lines an element, I would like to have a data structure where I can easily divide space into x,y,z ranges (3-d blocks of space) and determine how many elements are within a given block.</p>
<p>For instance if I divided into cubes of 100 x 100 x 100:</p>
<pre><code>counts[900][100][100] = 3
</code></pre>
<p>because id's 20915, 30369, and 20328 from the excerpt of the csv above are all within the range x = 800-900, y = 0-100, and z = 0-100.</p>
<p>The brute force way to create something like this is to create a multi-level dictionary as follows:</p>
<pre><code>import numpy
import pandas
df = pandas.read_csv("test.csv")
xs = numpy.linspace(0, 1300, 14, endpoint=True)
ys = numpy.linspace(0, 1000, 11, endpoint=True)
zs = numpy.linspace(0, 1000, 11, endpoint=True)
c = {}
for x_index, x in enumerate(xs[:-1]):
c[xs[x_index + 1]] = {}
for y_index, y in enumerate(ys[:-1]):
c[xs[x_index + 1]][ys[y_index + 1]] = {}
for z_index, z in enumerate(zs[:-1]):
c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]] = df[(df["x"] > xs[x_index]) & (df["x"] <= xs[x_index + 1]) & (df["y"] > ys[y_index]) & (df["y"] <= ys[y_index + 1]) & (df["z"] > zs[z_index]) & (df["z"] <= zs[z_index + 1])]["id"].count()
if (c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]] > 0):
print("c[" + str(xs[x_index + 1]) + "][" + str(ys[y_index + 1]) + "][" + str(zs[z_index + 1]) + "] = " + str(c[xs[x_index + 1]][ys[y_index + 1]][zs[z_index + 1]]))
</code></pre>
<p>This gives the expected output of:</p>
<pre><code>c[600.0][100.0][200.0] = 1
c[700.0][100.0][100.0] = 1
c[700.0][100.0][200.0] = 1
c[800.0][100.0][300.0] = 1
c[900.0][100.0][100.0] = 3
c[1000.0][100.0][400.0] = 1
c[1300.0][100.0][300.0] = 1
</code></pre>
<p>But since the actual production CSV file is very large, it is quite slow.
Any suggestions for how to make it fast and a little less clunky?</p>
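One vectorized alternative (sketched on the excerpt above) is `numpy.histogramdd`, which bins every row in a single pass instead of scanning the DataFrame once per cube. One caveat: its bins are closed on the left (`[a, b)`), whereas the code above uses `(a, b]`, so points lying exactly on an edge can land in the neighbouring cube:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [695.117, 800.784, 870.428, 547.681, 1231.02, 782.725, 983.502, 614.738, 889.016],
    "y": [74.0177, 98.5225, 98.742, 53.055, 73.7678, 73.9831, 82.9373, 86.3301, 98.9201],
    "z": [70.6486, 19.3014, 23.9953, 174.176, 203.404, 218.592, 314.081, 171.316, 13.3068],
})

xs = np.linspace(0, 1300, 14)
ys = np.linspace(0, 1000, 11)
zs = np.linspace(0, 1000, 11)

# One pass over all rows; counts[i, j, k] counts x in [xs[i], xs[i+1]), etc.
counts, _ = np.histogramdd(df[["x", "y", "z"]].to_numpy(), bins=[xs, ys, zs])

print(counts[8, 0, 0])  # 3.0 -- x in 800-900, y in 0-100, z in 0-100
```

Indexing by bin number replaces the nested dictionary, and lookups like `counts[8, 0, 0]` correspond to `c[900][100][100]` in the original structure.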
|
<python><pandas><dictionary>
|
2024-09-24 20:18:47
| 2
| 6,167
|
Troy Rockwood
|
79,020,229
| 855,395
|
Creating a default value recursively given a type (types.GenericAlias)
|
<p>Given a type <code>t</code> (originally comes from function annotations), I need to create a default value of that type. Normally, <code>t()</code> will do just that and work for many types, including basic types such as <code>bool</code> or <code>int</code>. However, <code>tuple[bool, int]()</code> returns an empty tuple, which is not a correct default value. It can get slightly trickier with more complex types such as <code>tuple[bool, list[int]]</code>.</p>
<p>I saw that <code>tuple[bool, int].__args__</code> returns <code>(<class 'bool'>, <class 'int'>)</code>, which might be useful for writing a recursive function that implements this, but I'm still having trouble with this.</p>
<p>Is there an existing function to do this and return the default value? If not, how would I write this code to work with all standard types?</p>
<p>I'm using Python 3.11.</p>
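As far as I know there is no stdlib function for this, but a recursive sketch with `typing.get_origin`/`get_args` covers the standard generic aliases (hedged: special forms such as `Union` or `Optional` would need extra branches):

```python
from typing import get_args, get_origin

def default_value(t):
    """Recursively build a default instance for a (possibly generic) type."""
    origin = get_origin(t)
    if origin is None:
        return t()                     # plain types: bool() -> False, int() -> 0, ...
    args = get_args(t)
    if origin is tuple:
        if len(args) == 2 and args[1] is Ellipsis:
            return ()                  # variable-length tuple[int, ...]
        return tuple(default_value(a) for a in args)
    return origin()                    # list[int] -> [], dict[str, int] -> {}, ...

print(default_value(tuple[bool, int]))        # (False, 0)
print(default_value(tuple[bool, list[int]]))  # (False, [])
```

The `tuple` case is special because each position has its own type; other containers only need `origin()` since their empty instance is a valid default.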
|
<python><python-typing>
|
2024-09-24 20:17:05
| 1
| 4,507
|
Eran Zimmerman Gonen
|
79,020,226
| 485,337
|
How to use a generic function defined in Python?
|
<p>I have a class and a corresponding factory function that holds some state in a closure (<code>event_bus</code>):</p>
<pre class="lang-py prettyprint-override"><code>from typing import TypeVar, Generic, Callable, Any
class EventBus:
def subscribe(self):
print("Subscribed")
def publish(self):
print("Published")
T = TypeVar('T')
def create_property(event_bus: EventBus):
class Property(Generic[T]):
def __init__(self, validator: Callable[[T], bool]):
self._validator = validator
def __set_name__(self, obj: Any, name: str):
self.name = name
def __get__(self, obj: Any, type: Any) -> T:
return obj.__dict__.get(self.name)
def __set__(self, obj: Any, value: T):
if not self._validator(value):
raise ValueError("Invalid value")
obj.__dict__[self.name] = value
event_bus.publish()
def property(validator: Callable[[T], bool] = lambda x: True) -> Property[T]:
return Property(validator)
return property
</code></pre>
<p>Now this compiles, and the <code>property</code> function has the correct type: <code>Callable[..., Property[T]]</code>. <code>Property</code> can be used as a descriptor, but when I try to do so I get compiler errors:</p>
<pre class="lang-py prettyprint-override"><code>property: Callable[..., Property[T]] = create_property(EventBus())
class MyComponent:
width = property() # Type of "width" is partially unknown
# Type of "width" is "Property[Unknown]
height: int = property(lambda x: x > 0) # Expression of type "Property[Unknown]" is
# incompatible with declared type "int"
</code></pre>
<p>What am I doing wrong? How do I call the generic <code>property</code> function properly?</p>
|
<python><generics><python-typing>
|
2024-09-24 20:16:48
| 0
| 30,760
|
Adam Arold
|
79,020,191
| 3,182,496
|
uwsgi module not found after Python upgrade
|
<p>I have a Flask/UWSGI application running on my home server. A recent Ubuntu upgrade deleted Python 3.10 and installed Python 3.12 instead. I've made a new venv and installed the application, but it no longer runs. In the UWSGI log it says:</p>
<pre><code>ModuleNotFoundError: No module named 'wsgi'
</code></pre>
<p>My application is called sieve and the working directory is /usr/share/sieve. I'm using a ini file, /usr/share/sieve/sieve.ini, which looks like this:</p>
<pre><code>[uwsgi]
module = wsgi
callable = app
master = true
processes = 5
socket = /usr/share/sieve/sieve.sock
chmod-socket = 660
vacuum = true
die-on-term = true
logto = /usr/share/sieve/sieve.log
logfile-chown = jon:www-data
log-date = [%%Y-%%m-%%d %%H:%%M:%%S]
</code></pre>
<p>I've also tried <code>module=sieve.wsgi</code> and <code>module=wsgi:app</code> as suggested elsewhere.</p>
<p>The application is in a sieve subdirectory and /usr/share/sieve/sieve/wsgi.py looks like this:</p>
<pre><code>#!/usr/bin/env python
from run_app import app
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>The service definition includes these lines:</p>
<pre><code>WorkingDirectory=/usr/share/sieve
Environment="PATH=$PATH:/usr/share/sieve/venv/bin"
ExecStart=/usr/share/sieve/venv/bin/uwsgi --ini sieve.ini --enable-threads
</code></pre>
<p>I'm not sure what has changed since it was last working, but looking further up the log, this setup worked fine:</p>
<pre><code>[2024-08-27 21:32:30] - *** Starting uWSGI 2.0.21 (64bit) on [Tue Aug 27 21:32:30 2024] ***
[2024-08-27 21:32:30] - compiled with version: 11.3.0 on 20 November 2022 14:54:48
[2024-08-27 21:32:30] - os: Linux-6.8.0-40-generic #40~22.04.3-Ubuntu SMP PREEMPT_DYNAMIC Tue Jul 30
17:30:19 UTC 2
</code></pre>
<p>The current version does not work (with the same site code and config).</p>
<pre><code>[2024-09-24 20:33:49] - *** Starting uWSGI 2.0.27 (64bit) on [Tue Sep 24 20:33:49 2024] ***
[2024-09-24 20:33:49] - compiled with version: 13.2.0 on 24 September 2024 18:04:08
[2024-09-24 20:33:49] - os: Linux-6.8.0-45-generic #45-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 30 12:02:04 UTC 2024
</code></pre>
<p>Why has my application stopped working and what do I need to do to make uWSGI find the module again?</p>
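One thing that stands out (hedged — I can't see the full environment): `wsgi.py` lives in `/usr/share/sieve/sieve`, but the working directory is `/usr/share/sieve`, so `module = wsgi` is only importable if something puts that subdirectory on `sys.path` — which the old venv's install may have done. A sketch of making the path explicit in the ini:

```ini
[uwsgi]
; either change into the package directory ...
chdir = /usr/share/sieve/sieve
module = wsgi:app
; ... or keep the old working directory and extend the import path instead:
; pythonpath = /usr/share/sieve/sieve
```

Both `chdir` and `pythonpath` are standard uWSGI options; with `module = wsgi:app` the separate `callable` line is redundant.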
|
<python><flask><uwsgi><wsgi>
|
2024-09-24 20:06:27
| 1
| 1,278
|
jon_two
|
79,020,189
| 4,751,598
|
Pydantic model for representing datetime as date and time
|
<p>I have created a Pydantic model for datetime that will handle parsing a JSON object that looks like <code>{ "date": "2021-07-01", "time": "12:36:23" }</code> into <code>datetime(2021, 7, 1, 12, 36, 23)</code>. It also generates the correct JSON Schema for the model.</p>
<pre><code>class TimestampWithSplit(RootModel):
root: datetime
@classmethod
def __get_pydantic_core_schema__(
cls, source: Type[Any], handler: GetCoreSchemaHandler
) -> core_schema.CoreSchema:
return core_schema.chain_schema(
[
core_schema.typed_dict_schema(
{
"date": core_schema.typed_dict_field(core_schema.date_schema()),
"time": core_schema.typed_dict_field(core_schema.time_schema()),
}
),
core_schema.no_info_plain_validator_function(cls.validate_to_datetime),
]
)
@staticmethod
def validate_to_datetime(value: dict) -> datetime:
return datetime.combine(value["date"], value["time"])
</code></pre>
<p>I'm trying to now do two things:</p>
<ol>
<li>I now want to add descriptions to the generated json schema. Currently <code>TimestampWithSplit.model_json_schema()</code> returns</li>
</ol>
<pre><code>{'properties': {'date': {'format': 'date', 'title': 'Date', 'type': 'string'},
'time': {'format': 'time', 'title': 'Time', 'type': 'string'}},
'required': ['date', 'time'],
'type': 'object'}
</code></pre>
<p>and I want to add</p>
<pre><code>{'properties': {'date': {'format': 'date', 'title': 'Date', 'type': 'string', 'description': 'ISO format date, blah blah'},
'time': {'format': 'time', 'title': 'Time', 'type': 'string', 'description': 'ISO format time, blah blah'}},
'required': ['date', 'time'],
'type': 'object'}
</code></pre>
<ol start="2">
<li>Add a custom validator for the date field so it can parse single digit day and month numbers. I'd normally do this like this:</li>
</ol>
<pre><code> def validator(value: str):
try:
return datetime.strptime(value, "%Y-%m-%d").date()
except ValueError:
return None
</code></pre>
<p>and add that validator as a plain validator on the field. But I'm not sure how to incorporate that into what I've got.</p>
<p>Am I going down the wrong path? How do I have a model that has date and time as separate fields but returns a <code>datetime</code> from <code>model_validate_json</code> while also customising the JSON schema and date validation?</p>
|
<python><pydantic>
|
2024-09-24 20:06:20
| 1
| 1,016
|
Matthew Jones
|
79,020,155
| 20,591,261
|
How to Apply LabelEncoder to a Polars DataFrame Column?
|
<p>I'm trying to use scikit-learn's <code>LabelEncoder</code> with a Polars DataFrame to encode a categorical column. I am using the following code.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
from sklearn.preprocessing import LabelEncoder
df = pl.DataFrame({
"Color" : ["red","white","blue"]
})
enc = LabelEncoder()
</code></pre>
<p>However, an error is raised.</p>
<blockquote>
<p><code>ValueError: y should be a 1d array, got an array of shape () instead.</code></p>
</blockquote>
<p>Next, I tried converting the column to a NumPy array.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
enc.fit_transform(pl.col("Color").to_numpy())
)
</code></pre>
<p>Now, a different error is raised.</p>
<blockquote>
<p><code>AttributeError: 'Expr' object has no attribute 'to_numpy'</code></p>
</blockquote>
<p><strong>Note.</strong> I found that <code>.cast(pl.Categorical).to_physical()</code> could be used to obtain the desired result. Still, I'd prefer using something like <code>transform()</code> on my test dataset.</p>
<pre class="lang-py prettyprint-override"><code>df.with_columns(
pl.col("Color").cast(pl.Categorical).to_physical().alias("Color_encoded")
)
</code></pre>
|
<python><dataframe><scikit-learn><python-polars><label-encoding>
|
2024-09-24 19:51:17
| 3
| 1,195
|
Simon
|
79,020,129
| 2,276,054
|
CP-SAT | OR-Tools: Get existing variable by name?
|
<p>Is it possible to get an existing variable by its name? Something like:</p>
<pre><code>model = CpModel()
model.new_bool_var("mySuperBoolVar")
# ...many lines later...
bv = model.get_bool_var_by_name("mySuperBoolVar")
</code></pre>
<p>Obviously, <code>get_bool_var_by_name()</code> doesn't exist. There's only <code>model.get_bool_var_from_proto_index()</code>, but I have no idea what it does...</p>
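Not in the public API, as far as I know — `get_bool_var_from_proto_index()` only maps a raw index in the underlying model proto back to a variable object. The usual workaround is to keep your own name-to-variable mapping; a model-agnostic sketch (the factory argument stands in for `model.new_bool_var`):

```python
class VarRegistry:
    """Remember created variables by name so they can be fetched later."""

    def __init__(self, factory):
        self._factory = factory      # e.g. model.new_bool_var
        self._by_name = {}

    def new(self, name):
        var = self._factory(name)
        self._by_name[name] = var
        return var

    def __getitem__(self, name):
        return self._by_name[name]

# With CP-SAT this would be: registry = VarRegistry(model.new_bool_var)
registry = VarRegistry(lambda name: f"<BoolVar {name}>")  # dummy factory for illustration
registry.new("mySuperBoolVar")
print(registry["mySuperBoolVar"])
```

Creating every variable through the registry keeps the lookup consistent, many lines later included.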
|
<python><or-tools><cp-sat>
|
2024-09-24 19:41:01
| 1
| 681
|
Leszek Pachura
|
79,019,920
| 1,559,401
|
How to generate multiple lists/vectors (same length) that are unique or partially unique?
|
<p>I have created the following function that can generate a list of 0s and 1s (basically a bitstring) using randomization.</p>
<pre><code>import numpy as np
def generate_genome(length: int, max_individuals: int = 0) -> Genome:
bits = None
if max_individuals > 0:
num_individuals = np.random.randint(1, max_individuals + 1)
print(f'Will have maximum of {num_individuals} individuals')
print('Controlled randomization')
bits = np.full(length, 0)
bits_flipped_ids = np.random.choice(range(0, length), size=num_individuals, replace=False)
print(f'Following indices will be flipped: {sorted(bits_flipped_ids)}')
np.put(a=bits, ind=bits_flipped_ids, v=1)
else:
print('Standard randomization')
bits = np.random.choice([0, 1], size=length)
genome = Genome(bits=bits.tolist())
print(f'Genome bits: {genome.bits}')
return genome
</code></pre>
<p>It supports two modes:</p>
<ul>
<li>without <code>max_individuals</code> - creates a bit-list of a specific length</li>
<li>with <code>max_individuals</code> - creates a bit-list of a specific length but also ensures that the number of <code>1</code>s is not exceeding <code>max_individuals</code>. The placement of the <code>1</code>s is at random indices (restricted by the allowed length of the list of course)</li>
</ul>
<p>This is part of a genetic algorithm I am working on. This would produce samples similar to the ones below:</p>
<ul>
<li><p>without <code>max_individuals</code></p>
<pre><code>generate_genome(10, 0)
[0 1 1 0 0 1 1 1 1 0]
[1 0 1 1 1 1 0 1 1 1]
[0 1 0 1 0 1 1 0 0 0]
[1 1 0 0 0 1 1 0 1 1]
[0 1 0 0 1 0 0 1 0 0]
</code></pre>
</li>
<li><p>with <code>max_individuals</code></p>
<pre><code>generate_genome(10, 3)
[0 0 0 0 0 0 0 0 0 1] with bits flipped at [9]
[0 0 0 1 0 1 0 1 0 0] with bits flipped at [3 5 7]
[0 1 0 0 0 1 0 1 0 0] with bits flipped at [1 5 7]
[0 1 0 1 0 0 0 0 0 0] with bits flipped at [1 3]
</code></pre>
</li>
</ul>
<p>My problem is that there is the possibility of generating same bit-lists. This possibility increases with the smaller <code>length</code> and <code>max_individuals</code> are. I would like to have control over that to see how it affects my algorithm so I am looking for an efficient way to create a set of unique bit-lists using my function or even better - a set of bit-lists where the number of unique lists can be controlled by a parameter.</p>
<p>I managed to simplify the <code>generate_genome()</code> function based on a suggestion:</p>
<pre><code>def generate_genome(length: int, max_individuals: int = 0):
# For the given length (number of bits) the maximum (where all are 1s)
# is (2^length - 1). The numpy.arange() stops at (stop - 1)
bits_all_possible = np.arange(2**length)
# Shuffle to introduce randomness
np.random.shuffle(bits_all_possible)
if max_individuals > 0:
bits_all_possible = np.array([b for b in bits_all_possible if np.bitwise_count(b) <= max_individuals])
# Pick a random index between 0 and the length of all possible bit-fields
bits_selected = np.random.randint(0, len(bits_all_possible))
# Use the index to select the number
bits_number = bits_all_possible[bits_selected]
# Convert the number to a bit-field
bits = [int(b) for b in bin(bits_number)[2:]]
    genome = Genome(bits=bits)  # bits is already a plain list here
print(f'Genome bits: {genome.bits}')
return genome
</code></pre>
<p>However, this is not a feasible solution in my case because we are talking about <strong>encoding</strong> and not plain old binary numbers. What this means is that I can have e.g. 3121200 individuals. Correspondingly, all genomes will have a maximum value of <code>pow(N, length-1) = pow(2, 3121200-1) = ???</code> that cannot be calculated.</p>
<p>Even if it worked, this simplification still would not address the main issue: producing a unique list of bit-fields via <code>generate_genome()</code>. The straight-forward (though not sure how efficient) solution would be to create a list and iteratively start calling <code>generate_genome()</code>. Every time a new genome is created, I can check if it is already present in the list. If so, it will be discarded, otherwise - added.</p>
<p>Currently I am testing Python's <code>set</code> data structure:</p>
<pre><code>if unique_genomes:
# Using a set allows ignoring newly inserted elements if these are already present
genomes = set()
genomes_len_old = 0
for genome_counter in range(size):
genome = Genome.generate_genome(genome_length, max_individuals)
genomes_len_old = len(genomes)
genomes.add(genome)
# If the genome is already in the set, the number of elements in the set
# will not change
if genomes_len_old == len(genomes):
# Reduce the counter by 1 and try again
genome_counter -= 1
continue
genomes = list(genomes)
</code></pre>
<p>The problem with any solution that revolves around trying to insert a new genome, and retrying when it is not unique, until a certain total number of genomes is reached, is that it may lead to an open-ended struggle to find the next genome that fits into the already existing gene pool: every time <code>generate_genome()</code> is called, there is an unknown probability that a genome will be created that already exists, hence in the worst case I may create an endless loop. In the past I have added a termination criterion, namely how many attempts there should be before breaking. Here, this is not an option.</p>
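For the generation side, the rejection loop can at least be made cheap per attempt by deduplicating on an immutable fingerprint (`bytes`) with O(1) membership tests. A sketch (in real code each array would be wrapped in the `Genome` class; the caller must still ensure the requested count does not exceed the number of possible genomes, otherwise no loop of this kind can terminate):

```python
import numpy as np

def generate_unique_genomes(n, length, max_individuals=0, rng=None):
    """Draw n distinct genomes, deduplicating on an immutable byte key."""
    rng = np.random.default_rng(rng)
    seen, genomes = set(), []
    while len(genomes) < n:
        if max_individuals > 0:
            k = int(rng.integers(1, max_individuals + 1))
            bits = np.zeros(length, dtype=np.uint8)
            bits[rng.choice(length, size=k, replace=False)] = 1
        else:
            bits = rng.integers(0, 2, size=length, dtype=np.uint8)
        key = bits.tobytes()            # hashable fingerprint of the genome
        if key not in seen:             # rejection: retry on collision
            seen.add(key)
            genomes.append(bits)
    return genomes

unique = generate_unique_genomes(4, 10, max_individuals=3, rng=42)
```

As long as `n` is far below the size of the search space, collisions are rare and the expected number of retries stays small, so no attempt-count cutoff is needed in practice.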
|
<python><algorithm><random>
|
2024-09-24 18:29:22
| 1
| 9,862
|
rbaleksandar
|
79,019,861
| 2,381,348
|
pandas search across multiple columns return one column if matches
|
<p><strong>Example data:</strong></p>
<pre><code>df1 = pd.DataFrame({
'a': [1, 6, 3, 9],
'b': ['A', 'B', 'C', 'D'],
'c': [10, 20, 30, 40],
'd': [100, 200, 300, 400]
})
df2 = pd.DataFrame({
'm': [1, 5, 3, 7],
'n': [2, 6, 8, 4],
'o': [9, 10, 11, 12]
})
</code></pre>
<p><strong>Requirement:</strong> <br />
<code>df1['a']</code> can occur anywhere in <code>df2</code>. I want to return <code>df2['m']</code> irrespective of where the match is found.</p>
<p>After some googling and chatGpt, I found melting <code>df2</code> and merging with <code>df1</code> is helpful except for it doesn't check for a match in <code>df2['m']</code>.</p>
<p><strong>Code:</strong></p>
<pre><code>df2_melted = df2.melt(id_vars=['m'], value_vars=['n', 'o'])
merged_df = df1.merge(df2_melted, left_on='a', right_on='value', how='left')
df1['e'] = merged_df['m']
print(df1)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>a b c d e
1 A 10 100 NaN # df1['a'] == df2['m']
6 B 20 200 5.0 # df1['a'] == df2['n']
3 C 30 300 NaN # df1['a'] == df2['m']
9 D 40 400 1.0 # df1['a'] == df2['o']
</code></pre>
<p><strong>Required Output:</strong></p>
<pre><code>a b c d e
1 A 10 100 1
6 B 20 200 5
3 C 30 300 3
9 D 40 400 1
</code></pre>
<p>If <code>df2['m']</code> could also be added to <code>value_vars</code> while melting, it'd have resolved the issue. I tried it, it didn't work. Then checked docs, found that whatever is there in the <code>id_vars</code>, the remaining or a subset of the remaining can be part of <code>value_vars</code>. So this approach might not be correct or I'm missing something.</p>
<p>Then I thought, if <code>df1['a']</code> matches <code>df2['m']</code>, then <code>df1['e'] == df1['a'] == df2['m']</code>. So just replacing <code>NaN</code> value with <code>df1['a']</code> should work and it worked. But had to convert the column to int; because of <code>NaN</code>, it's changed to float.</p>
<p><strong>Working complete Code:</strong></p>
<pre><code>df2_melted = df2.melt(id_vars=['m'], value_vars=['n', 'o'])
merged_df = df1.merge(df2_melted, left_on='a', right_on='value', how='left')
df1['e'] = merged_df['m']
df1['e'] = (df1['e'].fillna(df1['a'])).astype(int)
</code></pre>
<p>It seems like, even though it's a working solution, it's unnecessarily complicated: "try any solution: add more code to fix the issues as you proceed without changing the initial solution".</p>
<p>Is there a better approach that can help with my requirement?
<br /> <br /><br />
PS1: In above example, it's not mandatory that df1 and df2 will have the same number of rows.</p>
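An alternative sketch (against the example frames above) treats `m` as just another matchable column by duplicating it before the melt, then builds a value-to-`m` lookup and `map`s it — no `NaN` patch-up or dtype cast needed:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 6, 3, 9], 'b': ['A', 'B', 'C', 'D'],
                    'c': [10, 20, 30, 40], 'd': [100, 200, 300, 400]})
df2 = pd.DataFrame({'m': [1, 5, 3, 7], 'n': [2, 6, 8, 4], 'o': [9, 10, 11, 12]})

# Duplicate 'm' so it participates in the melt as a matchable value too
melted = df2.assign(m_value=df2['m']).melt(id_vars='m', value_vars=['m_value', 'n', 'o'])
lookup = melted.drop_duplicates('value').set_index('value')['m']  # first match wins
df1['e'] = df1['a'].map(lookup)
print(df1['e'].tolist())  # [1, 5, 3, 1]
```

`drop_duplicates('value')` guards against the same value appearing in multiple `df2` cells; row counts of `df1` and `df2` are irrelevant to `map`, which matches the PS1 note.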
|
<python><pandas><dataframe>
|
2024-09-24 18:12:20
| 1
| 3,551
|
RatDon
|
79,019,852
| 2,772,805
|
jQuery in display function of a Python notebook
|
<p>Is there a way to run jQuery in a display function called in a Python notebook?
I am looking to be able to organize (sort) some divs produced and displayed in a notebook.</p>
<p>The static js+html version works. Not from a notebook.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from IPython.display import HTML, display
width = 300
height = 200
dpi = 100
# Create images
for i in range(5):
filePNG = 'image_%02d.png' %(i+1)
fig = plt.figure(figsize=(width/dpi, height/dpi), dpi=dpi)
plt.plot(np.random.rand(10))
plt.ylabel('some numbers')
plt.savefig(filePNG)
plt.close(fig)
# Display
html_code = ''
for i in range(5):
filePNG = 'image_%02d.png' %(i+1)
html_code += '''
<div style='float: left; border: 1px solid lightgray;'>
<img style='width: 100%%;' src='%s' />
</div>
''' % (filePNG)
js_code = '''
<script src="https://code.jquery.com/jquery-latest.js"></script>
<script src="https://code.jquery.com/ui/1.14.0/jquery-ui.js"></script>
<script>
$( function() {
$("#sortable").sortable();
});
</script>
'''
display(HTML(js_code + '<div id="sortable">' + html_code + '</div>'))
</code></pre>
<p>As written, the same code works from an HTML page.
For info: <a href="https://jqueryui.com/sortable/" rel="nofollow noreferrer">https://jqueryui.com/sortable/</a></p>
<p>Is it feasible with a ipywidget output ?</p>
|
<python><jquery><jupyter-notebook><ipywidgets>
|
2024-09-24 18:09:32
| 1
| 429
|
PBrockmann
|
79,019,834
| 1,227,860
|
Realtime Plot Using Plotly In a Single Figure Object
|
<p>I wrote the following code (minimally working example) that uses <code>plotly</code> to show candlestick charts in real-time.</p>
<p>It works great.</p>
<p>However, the only issue I have is that every time I receive new data there will be a new plotly figure and over time I will end up having many opened figures, which I don't want.</p>
<p>Is there any way to plot everything in a single plot and update the same figure with new data?</p>
<pre><code>import pandas as pd
import plotly.graph_objects as go
from datetime import datetime
from time import sleep
import random
ohlc_data = pd.DataFrame(columns=["timestamp", "open", "high", "low", "close", "volume"])
fig = go.Figure(data=[go.Candlestick(x=[], open=[], high=[], low=[], close=[])])
fig.update_layout(title=f'Live Candlestick Chart', xaxis_title='Time', yaxis_title='Price')
def update_live_chart(fig, df):
fig.update_traces(x=df['timestamp'], open=df['open'], high=df['high'], low=df['low'], close=df['close'])
while True:
sleep(1)
new_data = pd.DataFrame([{
"timestamp": datetime.now().timestamp(),
"open": random.randint(100, 110),
"high": random.randint(100, 110),
"low": random.randint(100, 110),
"close": random.randint(100, 110),
"volume": random.randint(100, 110)
}])
ohlc_data = pd.concat([ohlc_data, new_data], ignore_index=True)
ohlc_data = ohlc_data.tail(100)
update_live_chart(fig, ohlc_data)
fig.show()
</code></pre>
|
<python><plotly><real-time-data>
|
2024-09-24 18:03:17
| 0
| 2,367
|
shashashamti2008
|
79,019,774
| 15,835,974
|
How can I delete a specific record from my AWS Glue table?
|
<p>How can I delete a specific record from my AWS Glue table using Python?
My table is linked to an S3 bucket that contains multiple files.</p>
<p>So far, the only method I've found to delete a row/record is by deleting the file in the bucket, either using <a href="https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3/client/delete_object.html" rel="nofollow noreferrer">boto3.delete_object</a> or <a href="https://docs.aws.amazon.com/glue/latest/dg/aws-glue-api-crawler-pyspark-extensions-glue-context.html#aws-glue-api-crawler-pyspark-extensions-glue-context-purge_s3_path" rel="nofollow noreferrer">purge_s3_path</a>.
In both cases, you need to first identify the exact file containing the data you want to remove (I'm still unsure how to handle that part).</p>
<p>However, it's common for these files to contain multiple records.
As a result, simply deleting the entire file isn't feasible, which introduces additional complexity.</p>
<p>Note that the solution needs to work with any type of file (CSV, JSON, etc.).</p>
|
<python><amazon-web-services><amazon-s3><aws-glue>
|
2024-09-24 17:46:27
| 0
| 597
|
jeremie bergeron
|
79,019,656
| 4,505,998
|
Hashed cross-product transformation in PyTorch
|
<p>I want to implement a hashed cross product transformation like the one Keras uses:</p>
<pre class="lang-py prettyprint-override"><code>>>> layer = keras.layers.HashedCrossing(num_bins=5, output_mode='one_hot')
>>> feat1 = np.array([1, 5, 2, 1, 4])
>>> feat2 = np.array([2, 9, 42, 37, 8])
>>> layer((feat1, feat2))
<tf.Tensor: shape=(5, 5), dtype=float32, numpy=
array([[0., 0., 1., 0., 0.],
[1., 0., 0., 0., 0.],
[0., 0., 0., 0., 1.],
[1., 0., 0., 0., 0.],
[0., 0., 1., 0., 0.]], dtype=float32)>
>>> layer2 = keras.layers.HashedCrossing(num_bins=5, output_mode='int')
>>> layer2((feat1, feat2))
<tf.Tensor: shape=(5,), dtype=int64, numpy=array([2, 0, 4, 0, 2])>
</code></pre>
<blockquote>
<p>This layer performs crosses of categorical features using the "hashing trick". Conceptually, the transformation can be thought of as: hash(concatenate(features)) % num_bins.</p>
</blockquote>
<p>I'm struggling to understand the <code>concatenate(features)</code> part. Do I have to do the hash of each "pair" of features?</p>
<p>In the meantime, I tried with this:</p>
<pre class="lang-py prettyprint-override"><code>>>> cross_product_idx = (feat1 * (feat2.max() + 1) + feat2) % num_bins
>>> cross_product = nn.functional.one_hot(cross_product_idx, num_bins)
</code></pre>
<p>It works, but not using a real hash function can cause problems with the distribution of bins.</p>
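<p>For what it's worth, the <code>concatenate(features)</code> step just means forming a single hashable value from each co-indexed pair of feature values and hashing that, not hashing each feature separately. A rough stdlib sketch of the idea (the MD5 choice and the <code>hashed_cross</code> name are my own, so the bin ids will not match Keras, which uses its own internal hash):</p>

```python
import hashlib

def hashed_cross(feat1, feat2, num_bins):
    """Hash each co-indexed pair of feature values into one of num_bins buckets.

    Illustrates hash(concatenate(features)) % num_bins only; the concrete hash
    (MD5 here) is an arbitrary choice, so bin ids won't match Keras.
    """
    bins = []
    for a, b in zip(feat1, feat2):
        # concatenate the pair into one token, hash it, then take the modulo
        digest = hashlib.md5(f"{a}_{b}".encode()).digest()
        bins.append(int.from_bytes(digest[:8], "big") % num_bins)
    return bins
```

<p>The resulting indices could then be one-hot encoded with <code>torch.nn.functional.one_hot(torch.tensor(bins), num_bins)</code>.</p>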
|
<python><numpy><tensorflow><keras><pytorch>
|
2024-09-24 17:01:20
| 1
| 813
|
David DavΓ³
|
79,019,523
| 8,067,642
|
Is typing.assert_never() removed by command line option -O, similar to assert statement?
|
<p>In Python, <code>assert</code> statement produces no code if command line optimization options <code>-O</code> or <code>-OO</code> are passed. Does it happen for <code>typing.assert_never()</code>? Is it safe to declare runtime assertions that will not be optimized out?</p>
<p>Consider the case</p>
<pre class="lang-py prettyprint-override"><code>from typing import assert_never
def func(item: int | str):
match item:
case int():
...
case str():
...
case _:
assert_never(item)
</code></pre>
<p>Is it guaranteed that the default branch will work even in optimized mode?</p>
|
<python><python-typing>
|
2024-09-24 16:23:29
| 1
| 338
|
makukha
|
79,019,407
| 9,251,158
|
Direct import fails but "from ... import ..." succeeds
|
<p>I have a Python module called <code>my_module.py</code> in a folder <code>~/my_module</code>. I want to call this module from a Python interpreter and I don't know its directory. I run:</p>
<pre><code>import os
os.chdir(os.path.expanduser("~/my_module"))
import my_module
</code></pre>
<p>and it fails. But if I use <code>from ... import ...</code>, then it works:</p>
<pre><code>import os
os.chdir(os.path.expanduser("~/my_module"))
from my_module import my_module
</code></pre>
<p>If I change the folder's name to be different from the module's name, then the direct import works. Why is that?</p>
|
<python><import><module><syntax>
|
2024-09-24 15:49:01
| 1
| 4,642
|
ginjaemocoes
|
79,019,358
| 2,381,348
|
Converting pandas dataframe to wiki markup table
|
<p>I'm automating some data processing and creating jira tickets out of it.
Pandas does have <code>to_html</code> or <code>to_csv</code> or even <code>to_markdown</code>. But jira supports only wiki markup for creating a table.</p>
<p>e.g.</p>
<pre><code><!-- wiki markup -->
||header1||header2||header3||\r\n|cell 11|cell 12|cell 13|\r\n|cell 21|cell 22|cell 23|
</code></pre>
<p>will create</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>header1</th>
<th>header2</th>
<th>header3</th>
</tr>
</thead>
<tbody>
<tr>
<td>cell 11</td>
<td>cell 12</td>
<td>cell 13</td>
</tr>
<tr>
<td>cell 21</td>
<td>cell 22</td>
<td>cell 23</td>
</tr>
</tbody>
</table></div>
<p>Is there any way to convert a pandas DataFrame to a wiki markup table to be used in Jira?</p>
<p>I'm keeping <code>df.iterrows</code> as a last resort, since iterating over a dataframe is not a recommended solution as per the answers in <a href="https://stackoverflow.com/questions/16476924/how-can-i-iterate-over-rows-in-a-pandas-dataframe">How can I iterate over rows in a Pandas DataFrame?</a> Since my expected dataframe is small, iteration should be fine in my case. This question can be considered more a matter of curiosity about what can be done with larger dataframes.</p>
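<p>Short of a built-in <code>to_wiki</code>, the target format is simple enough that a small helper could build it with <code>itertuples</code>, which is much cheaper than <code>iterrows</code>. A sketch (<code>df_to_jira_wiki</code> is a made-up name, not a pandas API):</p>

```python
import pandas as pd

def df_to_jira_wiki(df):
    """Render a DataFrame as a Jira wiki markup table string."""
    # the header row uses '||' separators, data rows use '|'
    header = "||" + "||".join(map(str, df.columns)) + "||"
    rows = ("|" + "|".join(map(str, row)) + "|" for row in df.itertuples(index=False))
    return "\r\n".join([header, *rows])
```
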
|
<python><pandas><dataframe><wiki-markup>
|
2024-09-24 15:34:39
| 2
| 3,551
|
RatDon
|
79,019,170
| 17,160,160
|
Calculate hours in period. DST aware
|
<p>I want to calculate a DST aware figure for the total hours in a period localized for Europe/London.</p>
<p>Given the start time, I need to generate the end time and then calculate the hours in period.</p>
<p>For example:<br />
<strong>MONTHLY PERIOD</strong></p>
<pre><code># define start
s = pd.to_datetime('2023-03-01').tz_localize('Europe/London')
>>> Timestamp('2023-03-01 00:00:00+0000', tz='Europe/London')
# generate end
e = (s + pd.offsets.MonthEnd()) + pd.Timedelta(days=1)
>>> Timestamp('2023-04-01 01:00:00+0100', tz='Europe/London')
# calculate hrs in period
(e - s) / pd.Timedelta(hours = 1)
>>> 743.0
</code></pre>
<p>This seems accurate and provides the correct result as, in the UK, an hour is lost in March.<br />
However, when changing the year to 2024 and setting <code>s = pd.to_datetime('2024-03-01').tz_localize('Europe/London')</code>, an incorrect result of <code>744</code> is returned.</p>
<p>I'd like a foolproof way of calculating hours in a period, please.</p>
|
<python><pandas>
|
2024-09-24 14:47:53
| 1
| 609
|
r0bt
|
79,019,034
| 5,558,497
|
snakemake - specifying memory in resources directive vs. command-line call for individual rule
|
<p>memory requirements can be defined per rule in the <code>resources</code> directive</p>
<pre><code>rule spades:
input:
rules.aRule.output
output:
"{sample}/spades/contigs.fasta"
resources:
mem_mb = 112000
shell:
"spades {input} {output}"
</code></pre>
<p>When it comes to programs where you can specify memory requirements directly via a command-line parameter (e.g. <a href="https://github.com/ablab/spades" rel="nofollow noreferrer">spades</a>), what would be the difference between specifying the memory in the <code>resources</code> directive, as above, and specifying the memory with the command-line parameter of <code>spades</code> itself i.e.</p>
<pre><code>rule spades_mem:
input:
rules.aRule.output
output:
"{sample}/spades/contigs.fasta"
params:
mem_spades = 112000
shell:
"spades -m {params.mem_spades} {input} {output}"
</code></pre>
<p>It is probably not a good idea to specify memory with both ways i.e.</p>
<pre><code>rule spades_both:
input:
rules.aRule.output
output:
"{sample}/spades/contigs.fasta"
params:
mem_spades = 112000
resources:
mem_mb = 112000
shell:
"spades -m {params.mem_spades} {input} {output}"
</code></pre>
<p>however, if I do so, which one takes preference, the one from the binary command-line (rule <code>spades_mem</code>) parameter, or the one specified in the <code>resources</code> directive (rule <code>spades</code>)?</p>
|
<python><snakemake><hpc>
|
2024-09-24 14:16:31
| 1
| 2,249
|
BCArg
|
79,019,014
| 4,575,197
|
column is not accessible using groupby and apply(lambda)
|
<p>I'm encountering a <code>KeyError</code> when trying to use the <code>.apply()</code> method on a pandas DataFrame after performing a <code>groupby</code>. The goal is to calculate the weighted average based on the <code>Industry_adjusted_return</code> column. The error indicates that the <code>'Industry_adjusted_return'</code> column cannot be found. Below is a minimal example that reproduces the issue:</p>
<pre><code>import pandas as pd
# Creating a small DataFrame
data = {
'ISIN': ['DE000A1DAHH0', 'DE000KSAG888'],
'Date': ['2017-03-01', '2017-03-01'],
'MP_quintile': [0, 0],
'Mcap_w': [8089460.00, 4154519.75],
'Industry_adjusted_return': [-0.00869, 0.043052]
}
df = pd.DataFrame(data)
df['Date'] = pd.to_datetime(df['Date']) # Ensure 'Date' is datetime type
</code></pre>
<p>I'm using Python 3.8 with pandas version 1.3.3. Any insights into why this error occurs and how to fix it would be greatly appreciated.</p>
<p>code:</p>
<pre><code>for i,grouped in wa.groupby(['Date','MP_quintile']):
print(i,grouped)
weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum())
</code></pre>
<p>the Error</p>
<pre><code>{
"name": "KeyError",
"message": "'Industry_adjusted_return'",
"stack": "---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\core\\indexes\\base.py:3802, in Index.get_loc(self, key, method, tolerance)
3801 try:
-> 3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\_libs\\index.pyx:138, in pandas._libs.index.IndexEngine.get_loc()
File c:\\Users\\mbkoo\\anaconda3\\envs\\myenv\\Lib\\site-packages\\pandas\\_libs\\index.pyx:146, in pandas._libs.index.IndexEngine.get_loc()
File pandas\\_libs\\index_class_helper.pxi:49, in pandas._libs.index.Int64Engine._check_type()
KeyError: 'Industry_adjusted_return'
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[10], line 8
3 print(i,grouped)
4 #weighted_average_returns = grouped.apply( lambda x: ((x['Mcap_w'] / x['Mcap_w'].sum()))).sum()
5 # grouped['weights_EW'] = 1 / len(grouped)
6 # grouped['return_EW'] = grouped['Industry_adjusted_return'] * grouped['weights_EW']
----> 8 weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum()) #
9 # equally_weighted_returns=grouped['return_EW'].sum()
10 # # _df=cpd.from_dataframe(_df,allow_copy=True)
11 break
File c:\\Users\\pandas\\core\\frame.py:9568, in DataFrame.apply(self, func, axis, raw, result_type, args, **kwargs)
9557 from pandas.core.apply import frame_apply
9559 op = frame_apply(
9560 self,
9561 func=func,
(...)
9566 kwargs=kwargs,
9567 )
-> 9568 return op.apply().__finalize__(self, method=\"apply\")
File c:\\Users\\pandas\\core\\apply.py:764, in FrameApply.apply(self)
761 elif self.raw:
762 return self.apply_raw()
--> 764 return self.apply_standard()
File c:\\Users\\pandas\\core\\apply.py:891, in FrameApply.apply_standard(self)
890 def apply_standard(self):
--> 891 results, res_index = self.apply_series_generator()
893 # wrap results
894 return self.wrap_results(results, res_index)
File c:\\Users\\pandas\\core\\apply.py:907, in FrameApply.apply_series_generator(self)
904 with option_context(\"mode.chained_assignment\", None):
905 for i, v in enumerate(series_gen):
906 # ignore SettingWithCopy here in case the user mutates
--> 907 results[i] = self.f(v)
908 if isinstance(results[i], ABCSeries):
909 # If we have a view on v, we need to make a copy because
910 # series_generator will swap out the underlying data
911 results[i] = results[i].copy(deep=False)
Cell In[10], line 8, in <lambda>(x)
3 print(i,grouped)
4 #weighted_average_returns = grouped.apply( lambda x: ((x['Mcap_w'] / x['Mcap_w'].sum()))).sum()
5 # grouped['weights_EW'] = 1 / len(grouped)
6 # grouped['return_EW'] = grouped['Industry_adjusted_return'] * grouped['weights_EW']
----> 8 weighted_average_returns = grouped.apply(lambda x: (x['Industry_adjusted_return'] * (x['Mcap_w'] / x['Mcap_w'].sum())).sum()) #
9 # equally_weighted_returns=grouped['return_EW'].sum()
10 # # _df=cpd.from_dataframe(_df,allow_copy=True)
11 break
File c:\\Users\\pandas\\core\\series.py:981, in Series.__getitem__(self, key)
978 return self._values[key]
980 elif key_is_scalar:
--> 981 return self._get_value(key)
983 if is_hashable(key):
984 # Otherwise index.get_value will raise InvalidIndexError
985 try:
986 # For labels that don't resolve as scalars like tuples and frozensets
File c:\\Users\\pandas\\core\\series.py:1089, in Series._get_value(self, label, takeable)
1086 return self._values[label]
1088 # Similar to Index.get_value, but we do not fall back to positional
-> 1089 loc = self.index.get_loc(label)
1090 return self.index._get_values_for_loc(self, loc, label)
File c:\\Users\\pandas\\core\\indexes\\base.py:3804, in Index.get_loc(self, key, method, tolerance)
3802 return self._engine.get_loc(casted_key)
3803 except KeyError as err:
-> 3804 raise KeyError(key) from err
3805 except TypeError:
3806 # If we have a listlike key, _check_indexing_error will raise
3807 # InvalidIndexError. Otherwise we fall through and re-raise
3808 # the TypeError.
3809 self._check_indexing_error(key)
KeyError: 'Industry_adjusted_return'"
}
</code></pre>
|
<python><pandas><group-by><keyerror>
|
2024-09-24 14:12:28
| 1
| 10,490
|
Mostafa Bouzari
|
79,018,901
| 1,559,401
|
How to pass item from list that is being sorted to lambda that is used as the key function used for the sorting?
|
<p>I have a function that evaluates some parameters of the first argument it receives (here <code>item</code>):</p>
<pre class="lang-py prettyprint-override"><code>def item_fitness(
item,
fitness_criterion1,
fitness_criterion2
) -> int:
...
return val
</code></pre>
<p>It does not matter what it actually does. All that matters is that it takes <code>fitness_criterion1</code> and <code>fitness_criterion2</code> in order to check the <code>item</code>'s parameters and returns some fitness score.</p>
<p>I would like to use the function as the sorting function in <code>sorted()</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>def some_function(items, crit1, crit2):
items = sorted(
items,
key=lambda item, crit1, crit2: item_fitness(
item,
crit1,
crit2
),
reverse=True
)
# Do something with the sorted items
return something
</code></pre>
<p>where <code>item</code> is taken from <code>items</code> (the actual list of <code>item</code> instances that is being <code>sorted()</code>).</p>
<p>How would I do that? The function is also called elsewhere with specific elements from <code>items</code>, so I want to keep its signature as-is.</p>
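<p>For reference, <code>sorted()</code> calls the key function with exactly one argument (the item), so the extra criteria must be captured from the enclosing scope rather than declared in the lambda's signature. A sketch with a placeholder fitness body (the scoring logic is made up purely for illustration):</p>

```python
from functools import partial

def item_fitness(item, fitness_criterion1, fitness_criterion2) -> int:
    # placeholder scoring logic, just for illustration
    return item * fitness_criterion1 - fitness_criterion2

def some_function(items, crit1, crit2):
    # the key callable receives only `item`; crit1/crit2 are captured by the closure
    return sorted(items, key=lambda item: item_fitness(item, crit1, crit2), reverse=True)

def some_function_partial(items, crit1, crit2):
    # equivalent, without a lambda: bind the criteria as keyword arguments
    key = partial(item_fitness, fitness_criterion1=crit1, fitness_criterion2=crit2)
    return sorted(items, key=key, reverse=True)
```

<p>Both variants leave <code>item_fitness</code> callable as-is from other call sites.</p>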
|
<python><sorting><lambda>
|
2024-09-24 13:47:29
| 2
| 9,862
|
rbaleksandar
|
79,018,581
| 5,269,892
|
Python set does not remove duplicate NaNs
|
<p>Applying <code>set()</code> to a list containing multiple NaN values usually removes duplicate NaN entries.</p>
<p><strong>Example</strong>:</p>
<pre><code>set([np.nan, 5, np.nan, 17, 5, np.nan, 23])
</code></pre>
<p>yields:</p>
<pre><code>{5, 17, nan, 23}
</code></pre>
<p>However, I now have a list originating from summing (concatenating) different lists contained in a column of a dataframe; some of these lists contain NaNs. When I apply <code>set()</code> to the concatenated list retrieved from the dataframe, it does not remove duplicate NaNs. See the screenshot below:</p>
<p><a href="https://i.sstatic.net/ygCtgi0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ygCtgi0w.png" alt="enter image description here" /></a></p>
<p>The object <code>a1</code> from the screenshot is definitely of <code>list</code> type. I'm not sure, but maybe it depends on if the NaNs were at some point within a numpy array (or pandas dataframe, presumably):</p>
<ul>
<li><code>set([np.nan, np.nan, np.nan])</code> yields <code>{nan}</code></li>
<li><code>set(np.array([np.nan, np.nan, np.nan]))</code> yields <code>{nan, nan, nan}</code></li>
<li><code>set(list(np.array([np.nan, np.nan, np.nan])))</code> <em>also</em> yields <code>{nan, nan, nan}</code></li>
</ul>
<p>Any ways to avoid this other than <code>np.unique()</code> or <code>pd.unique()</code>? Any reason why <code>set()</code> would behave like this (I'd assume the expected default would be to remove duplicate NaNs, even if NaN != NaN!)?</p>
|
<python><pandas><set><nan>
|
2024-09-24 12:19:48
| 2
| 1,314
|
silence_of_the_lambdas
|
79,018,577
| 4,729,764
|
Can't rollback an async session if error in db.commit()
|
<p>When an error is thrown during <code>await db.commit()</code> call, I am not able to rollback the session due to MissingGreenlet spawn.</p>
<p><strong>DB</strong></p>
<pre class="lang-py prettyprint-override"><code>async def get_db(self) -> AsyncIterator[AsyncSession]:
session = self.scoped_session()
try:
yield session
except Exception as e: # <- Instead of HTTPException, this is PendingRollbackError
print("Rolling back the session") # <-- This is executed fine
await session.rollback() # <- This will throw a MissingGreenLetError
raise
finally:
await session.close()
</code></pre>
<p><strong>Endpoint</strong></p>
<pre><code>async def do_something(db=Depends(get_db)):
try:
await db.commit() # raises IntegrityError due to UniqueConstraintViolation
except IntegrityError as e:
logger.error(f"Caught error {e}")
raise HTTPException(400, "UniqueConstraintViolation")
</code></pre>
<p>What is odd is that <code>IntegrityError</code> is handled in <code>do_something</code>, which raises <code>HTTPException</code>. But for some reason, instead of <code>HTTPException</code>, <code>get_db</code> catches <code>PendingRollbackError</code>.</p>
<hr />
<h2>Postgres logs</h2>
<p><code>await db.commit()</code> yields the following Postgres LOGS, which suggests that Postgres has ROLLBACKed:</p>
<pre><code> 2024-09-24 12:44:58,391 INFO sqlalchemy.engine.Engine UPDATE users_profiles SET username=$1::VARCHAR WHERE users_profiles.id = $2::VARCHAR
2024-09-24 12:44:58,391 INFO sqlalchemy.engine.Engine [generated in 0.00019s] ('xyz', '123')
2024-09-24 12:44:58,579 INFO sqlalchemy.engine.Engine ROLLBACK
</code></pre>
<p>If I manually rollback my session, (or otherwise face <code>PendingRollbackError</code>) I get the MissingGreenlet spawn error:</p>
<pre><code>sqlalchemy.exc.MissingGreenlet: greenlet_spawn has not been called; can't call await_only() here. Was IO attempted in an unexpected place?
</code></pre>
<hr />
<p>If I don't call <code>.rollback()</code>, then a <code>PendingRollbackError</code> is thrown.</p>
<p>The error log for <code>PendingRollbackError</code> is:</p>
<pre><code>sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) <class 'asyncpg.exceptions.UniqueViolationError'>: duplicate key value violates unique constraint "users_username_key"
DETAIL: Key (username)=(xyz) already exists.
</code></pre>
<p>And a Server Error 500.</p>
<p>What is going on?</p>
|
<python><sqlalchemy><fastapi><asyncpg>
|
2024-09-24 12:18:29
| 1
| 3,114
|
GRS
|
79,018,410
| 6,293,886
|
How to concatenate nested items in dictionary?
|
<p>I have a nested dictionary and I want to concatenate the items of the nested dictionaries:</p>
<pre><code>import random
nested_dict = {
k: {
'val0': random.sample(range(100), 2),
'val1': random.sample(range(100), 2),
}
for k in 'abcd'
}
</code></pre>
<p>I can make it with a double-loop like this:</p>
<pre><code>from collections import defaultdict
dp_dict= defaultdict(list)
for v in nested_dict.values():
for k, v_nested in v.items():
dp_dict[k].append(v_nested)
val0 = dp_dict['val0']
val1 = dp_dict['val1']
</code></pre>
<p>is there a more <em>Pythonic</em>/elegant way to do it?</p>
<p><strong>NB</strong><br />
one solution is to use <em>Pandas</em>; however, this solution is quite inefficient (~300x slower):</p>
<pre><code>import pandas as pd
val0, val1 = pd.DataFrame(nested_dict).loc[['val0', 'val1']].values
</code></pre>
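<p>For comparison, since every inner dict shares the same keys, a comprehension per key transposes the nesting directly; sketched here with fixed values instead of <code>random.sample</code> so the result is reproducible:</p>

```python
nested_dict = {
    'a': {'val0': [1, 2], 'val1': [3, 4]},
    'b': {'val0': [5, 6], 'val1': [7, 8]},
}

# one list per inner key, in the outer dict's insertion order
val0, val1 = ([d[k] for d in nested_dict.values()] for k in ('val0', 'val1'))
```
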
|
<python><dictionary>
|
2024-09-24 11:34:00
| 2
| 1,386
|
itamar kanter
|
79,018,294
| 2,528,453
|
Custom newline chracter in python socketserver.StreamRequestHandler
|
<p>I'm implementing a socket server in Python. Ideally I would use a <a href="https://docs.python.org/3/library/socketserver.html#socketserver.TCPServer" rel="nofollow noreferrer"><code>socketserver.TCPServer</code></a> in combination with the <code>readline</code> functionality of <a href="https://docs.python.org/3/library/socketserver.html#socketserver.StreamRequestHandler" rel="nofollow noreferrer"><code>socketserver.StreamRequestHandler</code></a>; see for instance <a href="https://docs.python.org/3/library/socketserver.html#examples" rel="nofollow noreferrer">the example</a> in the documentation. Unfortunately I have to use a custom newline character other than <code>\n</code>.</p>
<p>Is there a way to change this newline character in this context? According to my debugger, the <a href="https://docs.python.org/3/library/socketserver.html#socketserver.DatagramRequestHandler.rfile" rel="nofollow noreferrer"><code>rfile</code></a> attribute of <code>socketserver.StreamRequestHandler</code> is an <code>_io.BufferedReader</code> object, but I couldn't find any documentation about that.</p>
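<p><code>rfile</code> is indeed an <code>io.BufferedReader</code> wrapping the socket, and its <code>readline()</code> only splits on <code>\n</code>, so a custom record separator usually means reading the stream manually. A sketch (the <code>;</code> delimiter and the size cap are placeholders), using an in-memory stream to stand in for <code>self.rfile</code>:</p>

```python
import io

def read_until(rfile, delimiter=b";", max_bytes=65536):
    """Read one 'line' from a binary stream, terminated by a custom delimiter."""
    data = bytearray()
    while len(data) < max_bytes:
        byte = rfile.read(1)
        if not byte or byte == delimiter:  # EOF or delimiter ends the record
            break
        data += byte
    return bytes(data)

# usage with an in-memory stream standing in for self.rfile
stream = io.BytesIO(b"hello;world;")
first = read_until(stream)
second = read_until(stream)
```

<p>The byte-at-a-time reads are served from the <code>BufferedReader</code>'s buffer, not from individual socket recv calls, so this is less costly than it looks.</p>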
|
<python><sockets><tcp><stream><tcpserver>
|
2024-09-24 11:02:38
| 1
| 1,061
|
obachtos
|
79,017,998
| 2,989,330
|
Parameterized Enums in Python
|
<p>In Python, I want to have an enum-like data structure that can be used like the <a href="https://doc.rust-lang.org/book/ch06-01-defining-an-enum.html#the-option-enum-and-its-advantages-over-null-values" rel="nofollow noreferrer"><code>Option</code> enum in Rust</a>: the enum has parameterized member, and an unparameterized, e.g.,</p>
<pre class="lang-rust prettyprint-override"><code>enum MyType<T> {
A,
B(T),
}
</code></pre>
<p>would allow me to use <code>MyType.A</code>, <code>MyType.B(1)</code> and <code>MyType.B(2)</code>.</p>
<p>What is the most elegant way to achieve this behavior in Python? So far, my only solution would involve subclasses <code>A</code>, <code>B</code> of an abstract class <code>MyType</code>. But the drawback is that this requires me to do instance checking instead of simply using <code>is</code>.</p>
<p>My specific use case is the following: My data can be either replicated across multiple processes or sharded. If it's sharded, I need the number of shards. My function will process data depending on whether it's replicated or sharded.</p>
<p>If enums aren't useful for this, how can I better implement it?</p>
|
<python><enums>
|
2024-09-24 09:46:34
| 2
| 3,203
|
Green η»Ώθ²
|
79,017,954
| 10,679,609
|
Inverse of cumsum for multi-dimensional array in python
|
<p>I would like to know how to perform the inverse operation of cumulative sum on a multi-dimensional array in Python.</p>
<p>For example, we can get cumulative array <code>P</code> of a given 2D array <code>T</code> by</p>
<pre><code>import numpy as np
T = np.array([[1,3,2],[5,5,6],[1,8,3]])
P = np.cumsum(np.cumsum(T, axis=0), axis=1)
print(P)
# P is then,
# [[1, 4, 6],
# [6,14,22],
# [7,23,34]]
</code></pre>
<p>My question is, how do I get <code>T</code> from <code>P</code>? I want to reconstruct <code>T</code> from <code>P</code>.</p>
<p>The above is a two-dimensional case, but I need an implementation where <code>T</code> and <code>P</code> can be any dimension.</p>
<p>The answer for Julia is provided:
<a href="https://stackoverflow.com/questions/74813065/inverse-of-cumsum-in-julia">Inverse of cumsum in Julia</a></p>
<p>Still, I am seeking the answer for Python.</p>
<p><strong>EDIT</strong></p>
<p>Thank you for the nice answers. I have compared the two suggested methods in terms of computational cost. Apparently, both methods are efficient enough.</p>
<pre><code>T = np.random.randint(100, size=[100,50,25,50])
%timeit cumdiff(T)
#319 ms Β± 134 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit decumsum(T)
#324 ms Β± 943 ΞΌs per loop (mean Β± std. dev. of 7 runs, 1 loop each)
</code></pre>
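<p>For reference, the inverse is just repeated differencing: <code>np.diff</code> with <code>prepend=0</code> along every axis undoes the nested <code>cumsum</code>. A sketch of what such a <code>decumsum</code>-style helper might look like (the timed functions above may differ in detail; <code>prepend</code> requires NumPy >= 1.16):</p>

```python
import numpy as np

def decumsum(P):
    """Invert an N-dimensional nested cumsum by differencing along every axis."""
    T = np.asarray(P)
    for axis in range(T.ndim):
        T = np.diff(T, axis=axis, prepend=0)
    return T
```
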
|
<python><arrays><numpy><cumsum>
|
2024-09-24 09:38:49
| 3
| 694
|
Sakurai.JJ
|
79,017,946
| 10,855,529
|
"{}" breaking the json_decode()
|
<p>I want the string of an empty dictionary to become a struct, but <code>json_decode</code> fails when a df contains rows holding the empty-dictionary string.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"meta_data": ["{}"]
})
df.with_columns(meta_data=pl.col('meta_data').str.json_decode())
</code></pre>
<p>giving me,</p>
<pre><code>PanicException: called `Result::unwrap()` on an `Err` value: ComputeError(ErrString("a StructArray must contain at least one field"))
</code></pre>
<p>Edit:
the column also contains strings with key-value pairs,</p>
<pre><code>df = pl.DataFrame({'a': ['{}', '{"b": "c"}']})
df = df.with_columns(pl.col("a").str.json_decode())
</code></pre>
<p>the above one is working fine, but when I keep only <code>'{}'</code>, then it breaks</p>
|
<python><json><dataframe><python-polars>
|
2024-09-24 09:37:38
| 2
| 3,833
|
apostofes
|
79,017,811
| 1,064,416
|
How to connect custom storage to django
|
<p>I am writing a custom storage module to use a remote seafile-server as storage for a django (django-cms) installation.</p>
<p>File <code>seafile.py</code> is located in the project-folder:</p>
<p><a href="https://i.sstatic.net/8EB8gdTK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8EB8gdTK.jpg" alt="enter image description here" /></a></p>
<p>The storage class has been tested with jupyter notebook and is working.</p>
<p><strong>The problem:</strong> I am failing to connect my storage to Django (local development server); it is still saving pictures locally in the media folder and not using my storage at all.</p>
<p>In <code>settings.py</code></p>
<pre><code>from .seafile import MyStorage
...
</code></pre>
<p>Things I have tried:</p>
<pre><code>DEFAULT_FILE_STORAGE = "MyStorage"
DEFAULT_FILE_STORAGE = "seafile.MyStorage"
DEFAULT_FILE_STORAGE = MyStorage
DEFAULT_FILE_STORAGE = MyStorage()
</code></pre>
<p>For sure I have seen and tried the suggested solution (<a href="https://docs.djangoproject.com/en/5.1/howto/custom-file-storage/#use-your-custom-storage-engine" rel="nofollow noreferrer">https://docs.djangoproject.com/en/5.1/howto/custom-file-storage/#use-your-custom-storage-engine</a>) but failed too.</p>
<p>What am I missing? Thank you in advance!</p>
|
<python><django><storage>
|
2024-09-24 09:05:24
| 1
| 1,021
|
Rockbot
|
79,017,748
| 17,082,611
|
Circular Import Issue with FastAPI and Pydantic Models
|
<p>I'm developing an application using FastAPI and Pydantic, and I'm encountering a circular import issue when trying to define my data models.</p>
<p>Here are the relevant files:</p>
<h3><strong>src/schemas/user.py</strong></h3>
<pre class="lang-py prettyprint-override"><code>from typing import List
from pydantic import BaseModel
from src.schemas.blog import ShowBlog # Import causing circular dependency
class ShowUser(BaseModel):
username: str
email: str
blogs: List[ShowBlog]
class Config:
from_attributes = True
class User(BaseModel):
username: str
email: str
password: str
</code></pre>
<h3><strong>src/schemas/blog.py</strong></h3>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
from src.schemas.user import ShowUser # Import causing circular dependency
class Blog(BaseModel):
title: str
description: str
user_id: int
class ShowBlog(BaseModel):
id: int
title: str
description: str
written_by: ShowUser
class Config:
from_attributes = True
</code></pre>
<p>When I run the application, I receive the following error:</p>
<pre><code>ImportError: cannot import name 'ShowBlog' from partially initialized module 'src.schemas.blog' (most likely due to a circular import)
</code></pre>
<p>How can I resolve this circular import issue while keeping my FastAPI application structured? Are there best practices for organizing Pydantic models to avoid such problems?</p>
|
<python><fastapi><importerror><circular-dependency>
|
2024-09-24 08:49:56
| 2
| 481
|
tail
|
79,017,560
| 3,361,462
|
Free CPU RAM while using tensorflow
|
<p>At first I am sorry for this question - I know Python well, but I have never worked with ML frameworks. In addition, it's very hard to create a full example of the problem, because I cannot set up a GPU on my machine and modifying the code on the production machine is very hard.</p>
<p>Coming to the problem: on the production machine I have CUDA and a GPU set up. I am using</p>
<pre><code>import tensorflow as tf
with tf.device("/GPU:0"):
my_model = tf.keras.models.load_model(filepath=filepath)
</code></pre>
<p>to load the model into the memory. As far as I know it should use GPU, but the model is stored in both, CPU and GPU RAM. I tried to use <code>gc.collect()</code>, I tried random things like <code>tf.keras.backend.clear_session()</code> and flags like <code>TF_FORCE_GPU_ALLOW_GROWTH</code> or <code>CUDA_VISIBLE_DEVICES</code> but nothing works - the CPU RAM is taken until the program terminates. I couldn't find any solution which makes me think that I do not understand something fundamental.</p>
<p>Is it normal behaviour that such loaded model stays in the CPU RAM? If not, what should I do to free this memory and leave it only in GPU?</p>
|
<python><tensorflow><keras><memory-management><gpu>
|
2024-09-24 07:57:30
| 0
| 7,278
|
kosciej16
|
79,017,399
| 14,720,215
|
Download large file from s3 using aioboto3 and aiofiles is really slow
|
<p>I have a high-load system where many users can upload large files (1gb+).
After uploading, I sometimes need to download them from S3 to calculate some meta information.
Currently I'm using this code to do it (look at <code>fetch_file</code>):</p>
<pre class="lang-py prettyprint-override"><code>def _get_file_extension(key) -> str:
...
class S3Storage:
def __init__(self, *args, **kwargs):
...
self._config = botocore.config.Config(
read_timeout=read_timeout,
connect_timeout=connect_timeout,
retries={
"total_max_attempts": ...,
"max_attempts": ...,
},
signature_version="v4",
)
self._session = self._create_session()
    def _create_session(self):
return aioboto3.Session()
def _create_client(self):
return self._session.client(
service_name="s3",
endpoint_url=self._endpoint_url,
region_name=self._region,
aws_access_key_id=self._access_key_id,
aws_secret_access_key=self._secret_access_key,
config=self._config,
)
async def fetch_file(self, key: str, bucket: str | None = None) -> str:
try:
async with self._create_client() as client:
response = await client.get_object(Bucket=bucket or self._bucket, Key=key)
async with aiofiles.tempfile.NamedTemporaryFile(
"wb",
suffix=_get_file_extension(key),
delete=False,
) as file:
async for chunk in response["Body"]:
await file.write(chunk)
return str(file.name)
except botocore.exceptions.ClientError as e:
...
except aiohttp.ServerTimeoutError as e:
...
except (
botocore.exceptions.BotoCoreError,
aiohttp.ClientError,
) as e:
...
</code></pre>
<p>For some reason, this implementation is really slow. After profiling, I found that it takes 300 seconds to download a 1gb file. All 300 seconds are spent inside the <code>fetch_file</code> method.</p>
<p>Could someone help me figure out why it takes so long to download a file from S3, and what I can do about it?</p>
|
<python><python-3.x><amazon-s3><aiobotocore>
|
2024-09-24 07:16:45
| 1
| 1,338
|
Kirill Ilichev
|
79,017,291
| 11,770,390
|
How to update and extend dataframe1 with values from dataframe2
|
<p>I have a dataframe <code>df_old</code> that I want to update with new values from <code>df_new</code>. The dataframes have exactly the same structure; each has records with a first column 'DateTime' that serves as the key column. The result should be a dataframe that is basically a concatenation of <code>df_old</code> and <code>df_new</code>, but for every duplicate key only the new record (from <code>df_new</code>) is kept. How can I accomplish this?</p>
<p>df_old:</p>
<pre><code>DateTime     B     C
2024-09-21   12.9  5.4
2024-09-22   13.1  5.4
</code></pre>
<p>df_new:</p>
<pre><code>DateTime     B     C
2024-09-22   13.2  5.2
2024-09-23   13.4  4.9
</code></pre>
<p>df_updated:</p>
<pre><code>DateTime     B     C
2024-09-21   12.9  5.4
2024-09-22   13.2  5.2   <-- new values overwrite the old ones
2024-09-23   13.4  4.9
</code></pre>
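<p>For illustration only (not part of the original post): one common way to get this "new overrides old" behavior is to concatenate and then drop duplicate keys, keeping the last occurrence. A sketch, assuming 'DateTime' is a regular column:</p>

```python
import pandas as pd

df_old = pd.DataFrame({"DateTime": ["2024-09-21", "2024-09-22"],
                       "B": [12.9, 13.1], "C": [5.4, 5.4]})
df_new = pd.DataFrame({"DateTime": ["2024-09-22", "2024-09-23"],
                       "B": [13.2, 13.4], "C": [5.2, 4.9]})

# Stack old and new, then keep only the last (i.e. newest) row per key.
df_updated = (
    pd.concat([df_old, df_new])
    .drop_duplicates(subset="DateTime", keep="last")
    .sort_values("DateTime")
    .reset_index(drop=True)
)
```

<p>Because <code>df_new</code> is concatenated after <code>df_old</code>, <code>keep="last"</code> retains the new record for every duplicate 'DateTime'.</p>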
|
<python><pandas><dataframe>
|
2024-09-24 06:43:56
| 2
| 5,344
|
glades
|
79,017,285
| 6,689,867
|
How to remove the @-expressions in a booktabs table created by pylatex?
|
<p>This code:</p>
<pre><code>from pylatex import (
Tabular,
)
# make tabular
doc = Tabular('lcc', booktabs=True)
doc.add_row('A','B','C')
doc.add_hline()
doc.add_row((1, 2, 3))
doc.generate_tex("my_table")
</code></pre>
<p>produces <code>my_table.tex</code>:</p>
<pre><code>\begin{tabular}{@{}lcc@{}}%
\toprule%
A&B&C\\%
\midrule%
1&2&3\\\bottomrule%
%
\end{tabular}
</code></pre>
<p>As you can see, in the <code>tabular</code> parameter, the column alignments are preceded and followed by <code>@{}</code>.<br />
It doesn't happen if I don't use <code>booktabs=True</code>, but I need this option to add <code>\toprule</code>, <code>\midrule</code> and <code>\bottomrule</code>.</p>
<p>How can I avoid the <code>@{}</code>s?</p>
|
<python><latex><pylatex>
|
2024-09-24 06:41:29
| 1
| 305
|
CarLaTeX
|
79,017,280
| 4,784,914
|
Use setup.py / pyproject.toml to compile a library also in editable install
|
<p>I am setting up a Python package with <code>setuptools</code>, together with <code>pyproject.toml</code>. The Python code is dependent on a C library that needs to be compiled and installed alongside the code (it's a <code>make</code> project).</p>
<p>I have put something together that works for a <code>pip install .</code> and also for <code>python -m build</code> to make a distributable:</p>
<pre><code># pyproject.toml
[project]
name = "mypackage"
[build-system]
requires = ["setuptools >= 61.0", "wheel"]
build-backend = "setuptools.build_meta"
[tool.setuptools]
packages = ["mypackage"]
package-dir = { "" = "src" }
</code></pre>
<pre><code># setup.py
from pathlib import Path
from setuptools import setup
from setuptools.command.install import install
from setuptools.command.develop import develop
from setuptools.command.build import build
import os
import subprocess

mylib_relative = "mylib"
mylib_root = Path(__file__).parent.absolute() / mylib_relative


def create_binaries():
    subprocess.call(["make", "-C", mylib_relative])


def remove_binaries():
    patterns = (
        "*.a",
        "**/*.o",
        "*.bin",
        "*.so",
    )
    for pattern in patterns:
        for file in mylib_root.glob(pattern):
            os.remove(file)


class CustomBuild(build):
    def run(self):
        print("\nCustomBuild!\n")
        remove_binaries()
        create_binaries()
        super().run()


class CustomDevelop(develop):
    def run(self):
        print("\nCustomDevelop!\n")
        remove_binaries()
        create_binaries()
        super().run()


class CustomInstall(install):
    def run(self):
        print("\n\nCustomInstall\n\n")
        mylib_lib = mylib_root / "adslib.so"
        mylib_dest = Path(self.install_lib)
        if not mylib_dest.exists():
            mylib_dest.mkdir(parents=True)
        self.copy_file(
            str(mylib_lib),
            str(mylib_dest),
        )
        super().run()


setup(
    cmdclass={
        "build": CustomBuild,
        "develop": CustomDevelop,
        "install": CustomInstall,
    },
)
</code></pre>
<p>However, when I make an editable install with <code>pip install -e . [-v]</code>, the library is not compiled and installed; only the Python source is added to the venv path. But the package won't work without the library.</p>
<p>You can see I already added the <code>develop</code> command in <code>setup.py</code>, but it looks like it's never called at all.</p>
<p>How can I customize the editable install to also compile my library first?</p>
|
<python><setuptools><setup.py><software-distribution>
|
2024-09-24 06:40:03
| 1
| 1,123
|
Roberto
|
79,016,972
| 1,145,760
|
Killable socket in python
|
<p>My goal is to emit an interface to listen on a socket forever ... until someone up the decision chain decides it's enough.</p>
<p>This is my implementation; it does not work. Mixing threads, sockets, object lifetimes, default params and a language I do not speak too well is confusing.</p>
<p>I tested different aspects of this code individually and everything was as expected, except the line containing the comment <code>BUG</code>, where I attempt to force the main thread to block until the server hears the child screaming or a timeout passes; instead, <code>recv()</code> simply doesn't see the change in <code>alive</code>.</p>
<pre><code>#!/usr/bin/env python3
import socket
import threading
import time

MAX_MSG_BYTES = 1024
TEST_PORT = 42668


def recv(s: socket.socket, alive: bool = True) -> bytes:
    '''
    Accepts packets on a socket until terminated.
    '''
    s.settimeout(1)  # 1 second
    while alive:
        print("'alive' is still", alive)
        try:
            data = s.recv(MAX_MSG_BYTES)
            assert data  # Empty packets were a problem.
            yield data
        except TimeoutError:
            pass  # expected error, any other is propagated up


def test_nonblocking_recv() -> None:
    # Create 3 sockets - server administrative, server content and client content.
    # Bind the latter and forget about the former.
    server_s = socket.create_server(('', TEST_PORT))
    server_s.listen()
    client_s = socket.create_connection(('localhost', TEST_PORT))
    content_s = next(iter(server_s.accept()))  # Accept 1 connection.
    # client_s.sendall('If this is commented out, the server hangs.'.encode('utf8'))

    alive = True

    def read_one_message():
        data = recv(content_s, alive)
        print(next(iter(data)))  # BUG this causes outside alive to not be seen

    content_th = threading.Thread(target=read_one_message)
    content_th.start()
    time.sleep(3)
    alive = False
    print("But main thread 'alive' is", alive)
    content_th.join()
    assert threading.active_count() == 1


if __name__ == '__main__':
    test_nonblocking_recv()
</code></pre>
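<p>For illustration (not from the original post): the generator receives a <em>copy</em> of the boolean at call time, so reassigning <code>alive</code> in the caller can never be seen inside <code>recv</code>. A shared, mutable signal such as <code>threading.Event</code> avoids that. A minimal sketch with the socket part stubbed out; <code>recv_until</code> and <code>worker</code> are hypothetical names:</p>

```python
import threading
import time

def recv_until(stop: threading.Event) -> list:
    """Stand-in for the recv loop: spin until the caller sets the event."""
    received = []
    while not stop.is_set():
        received.append("tick")  # real code would s.recv(...) with a timeout here
        time.sleep(0.05)
    return received

stop = threading.Event()
result = {}

def worker():
    result["data"] = recv_until(stop)

t = threading.Thread(target=worker)
t.start()
time.sleep(0.2)
stop.set()   # the kill switch: immediately visible inside the worker thread
t.join()
```

<p>The same <code>stop.is_set()</code> check would replace <code>while alive:</code> in the generator above.</p>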
|
<python><multithreading><sockets>
|
2024-09-24 04:35:33
| 1
| 9,246
|
Vorac
|
79,016,874
| 3,853,711
|
undefined reference to `Py_DecodeLocale'
|
<p>I'm embedding Python into a C++ project with example code from <a href="https://docs.python.org/3/extending/embedding.html#very-high-level-embedding" rel="nofollow noreferrer">https://docs.python.org/3/extending/embedding.html#very-high-level-embedding</a>:</p>
<p><code>CMakeLists.txt</code>:</p>
<pre><code>cmake_minimum_required(VERSION 3.20)
project(python_in_cpp)
set(CMAKE_CXX_STANDARD 11)
# sudo apt install libpython3.11-dev
find_package(PythonLibs 3.11 REQUIRED)
find_package(Python COMPONENTS Interpreter Development)
message("Python_FOUND:${Python_FOUND}")
message("Python_VERSION:${Python_VERSION}")
message("Python_Development_FOUND:${Python_Development_FOUND}")
message("Python_LIBRARIES:${Python_LIBRARIES}")
include_directories(${Python_INCLUDE_DIRS})
link_directories(
${Python3_LIBRARY_DIRS}
${Python3_RUNTIME_LIBRARY_DIRS}
)
link_libraries(${Python3_LIBRARIES})
add_executable(python_in_cpp main.cpp)
</code></pre>
<p><code>main.cpp</code>:</p>
<pre class="lang-cpp prettyprint-override"><code>#define PY_SSIZE_T_CLEAN
#include <Python.h>

int
main(int argc, char *argv[])
{
    wchar_t *program = Py_DecodeLocale(argv[0], NULL);
    if (program == NULL) {
        fprintf(stderr, "Fatal error: cannot decode argv[0]\n");
        exit(1);
    }
    Py_SetProgramName(program);  /* optional but recommended */
    Py_Initialize();
    PyRun_SimpleString("from time import time,ctime\n"
                       "print('Today is', ctime(time()))\n");
    if (Py_FinalizeEx() < 0) {
        exit(120);
    }
    PyMem_RawFree(program);
    return 0;
}
</code></pre>
<p>The CMake log shows that both <code>Python</code> and <code>PythonLibs</code> are found, but when compiling I get errors about several undefined references:</p>
<pre><code>main.cpp:(.text+0x1f): undefined reference to `Py_DecodeLocale'
/usr/bin/ld: main.cpp:(.text+0x63): undefined reference to `Py_SetProgramName'
/usr/bin/ld: main.cpp:(.text+0x68): undefined reference to `Py_Initialize'
/usr/bin/ld: main.cpp:(.text+0x7c): undefined reference to `PyRun_SimpleStringFlags'
/usr/bin/ld: main.cpp:(.text+0x81): undefined reference to `Py_FinalizeEx'
/usr/bin/ld: main.cpp:(.text+0x9e): undefined reference to `PyMem_RawFree'
</code></pre>
<p>How do I fix it to build this project?</p>
<hr />
<p>I have several python installed:</p>
<pre><code>$ apt list --installed | grep "lib*python"
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libpython3-dev/stable,now 3.11.2-1+b1 amd64 [installed]
libpython3-stdlib/stable,now 3.11.2-1+b1 amd64 [installed,automatic]
libpython3.11-dev/stable-security,now 3.11.2-6+deb12u3 amd64 [installed]
libpython3.11-minimal/stable-security,now 3.11.2-6+deb12u3 amd64 [installed,automatic]
libpython3.11-stdlib/stable-security,now 3.11.2-6+deb12u3 amd64 [installed,automatic]
libpython3.11/stable-security,now 3.11.2-6+deb12u3 amd64 [installed,automatic]
</code></pre>
|
<python><c++><cmake>
|
2024-09-24 03:32:04
| 1
| 5,555
|
Rahn
|
79,016,563
| 289,784
|
QWebEngineView renders bokeh vizualisation with largish data (~18000 points) slowly
|
<p>I'm trying to include a Bokeh based visualisation canvas in a Qt application written in Python (using <code>PySide6</code>). So I create the visualisation, save it in an HTML file, and read it in a <code>QWebEngineView</code> widget. It renders fine, but it is extremely sluggish to interact with; e.g. try selecting the zoom tool and zooming in, or panning. The surprising part is that when I open the same HTML in Firefox, it is pretty snappy, as I would expect.</p>
<p>There are 2 time series, with 8760 data points each in the plot. What is going on here?</p>
<h4>Minimal example</h4>
<pre class="lang-py prettyprint-override"><code>from argparse import ArgumentParser
from pathlib import Path

from PySide6.QtCore import QUrl, QSize
from PySide6.QtWidgets import QApplication, QMainWindow
from PySide6.QtWebEngineWidgets import QWebEngineView


class MainWindow(QMainWindow):
    def __init__(self, path: str = "", *args, **kwargs):
        super().__init__(*args, **kwargs)
        if path:
            self.browser = QWebEngineView()
            self.browser.setUrl(QUrl.fromLocalFile(Path(path).absolute()))
            self.setCentralWidget(self.browser)
        self.resize(QSize(800, 500))


if __name__ == "__main__":
    # HTML example produced using Bokeh: dump-viz.html
    parser = ArgumentParser()
    parser.add_argument("file")
    opts = parser.parse_args()

    app = QApplication()
    window = MainWindow(opts.file)
    window.show()
    app.exec()
</code></pre>
<p>The above script, and an example HTML is available in <a href="https://gist.github.com/suvayu/6c430b74f511a6d8dc26ff63846c97c2" rel="nofollow noreferrer">this gist</a>.</p>
<p>You can run it with <code>hatch run pyside-qwebengine-slow.py qwebengine-ex-viz.html</code>, or create a virtual env, then <code>pip install PySide6</code>, and run normally.</p>
<h4>Additional info</h4>
<p>The code that I use to generate that plot, is more or less equivalent to:</p>
<pre class="lang-py prettyprint-override"><code>def make_plot(data_list):
    """`data_list`: list of dataclasses representing a time series;
    data.y, data.labels, etc.
    """
    palette = TolRainbow[10]
    cds = ColumnDataSource(data=pd.DataFrame({data.labels[0]: data.y for data in data_list}))
    legend_items = {}
    fig = figure(x_axis_label="x_label", y_axis_label="y_label", title="title", height=400, width=800)
    for idx, data in enumerate(data_list):
        line = fig.line("index", data.labels[0], source=cds, color=palette[idx])
        point = fig.scatter("index", data.labels[0], source=cds, color=palette[idx], size=7)
        legend_items[data.labels[0]] = [line, point]
    legend = Legend(items=list(legend_items.items()))
    fig.add_layout(legend, "right")
    fig.legend.click_policy = "hide"
    return fig


fig = make_plot(data_list)
html = file_html(fig, INLINE, title)
</code></pre>
|
<python><bokeh><pyside6><qwebengineview>
|
2024-09-23 23:46:24
| 0
| 4,704
|
suvayu
|
79,016,312
| 9,235,106
|
CMake unable to find Python3 in Windows with Miniconda installed
|
<p>I'm trying to build a project using CMake on Windows, but I'm encountering an error where CMake can't find Python3. I have Miniconda installed on my system.</p>
<pre><code>CMake Error at C:/Program Files/Microsoft Visual Studio/2022/Community/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/share/cmake-3.29/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Python3 (missing: Python3_EXECUTABLE Interpreter)
</code></pre>
<p>I have added the following paths to environment variable Path:</p>
<pre><code>C:\Users\Jiawei\miniconda3
C:\Users\Jiawei\miniconda3\Library\bin
C:\Users\Jiawei\miniconda3\Scripts
</code></pre>
<p>Additionally, if I type <code>python</code> in Command Prompt, a pop-up window appears and guides me to install Python from the Microsoft Store.</p>
|
<python><windows><cmake>
|
2024-09-23 21:15:05
| 0
| 577
|
Jiawei Lu
|
79,016,282
| 2,904,786
|
How to filter out elements with No Value for certain field
|
<p>I have a list of Events in Python. The problem I have is that a lot of events don't contain an <code>event_data['pick']['blurb']</code>. How do I create a filtered list with just the events that have this data?</p>
<p>The current code shows an error:</p>
<pre class="lang-none prettyprint-override"><code> if event["event"]['pick']['blurb'] in events:
TypeError: 'NoneType' object is not subscriptable
</code></pre>
<pre><code>with open(output_file, "w", newline="", encoding="utf-8") as file:
    writer = csv.writer(file)
    writer.writerow(["Event name", "Date", "Start Time", "End Time",
                     "Artists", "Images", "Image", "Blurb", "Venue"])

    filtered = []
    for event in events:
        if event["event"]['pick']['blurb'] in events:
            filtered.append(event)
    # print(filtered)

    for event in filtered:
        event_data = event["event"]
        writer.writerow([event_data['title'],
                         event_data['date'],
                         event_data['startTime'],
                         event_data['endTime'],
                         ', '.join([artist['name'] for artist in event_data['artists']]),
                         ', '.join([images['filename'] for images in event_data['images']]),
                         event_data['images'][0]['filename'],
                         event_data['pick']['blurb'],
                         event_data['venue']['name']])
</code></pre>
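<p>For illustration (not part of the original post): the <code>in events</code> membership test isn't what's wanted here, and subscripting fails when <code>pick</code> is <code>None</code>. A hedged sketch using <code>dict.get</code>; the sample data below is hypothetical, shaped like the question's events:</p>

```python
def has_blurb(event):
    """True only when event['event']['pick']['blurb'] exists and is non-empty."""
    pick = (event.get("event") or {}).get("pick")
    return bool(pick and pick.get("blurb"))

# Hypothetical sample data shaped like the question's events
events = [
    {"event": {"pick": {"blurb": "Great show"}}},
    {"event": {"pick": None}},
    {"event": {}},
]
filtered = [e for e in events if has_blurb(e)]
```

<p>Only the first event survives the filter; the other two are skipped instead of raising <code>TypeError</code>.</p>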
|
<python>
|
2024-09-23 20:59:04
| 1
| 421
|
user237462
|
79,016,139
| 9,112,151
|
How to drop discriminator value from loc in validation error?
|
<p>I use Pydantic v2. I'd like to remove the discriminator value from the <code>loc</code> of a validation error. Please take a look at the code below:</p>
<pre><code>from pprint import pprint
from typing import Literal, Union, Annotated, Any

from pydantic import BaseModel, Field, RootModel, ValidationError
from pydantic.main import Model


class Tiger(BaseModel):
    animal_type: Literal["tiger"] = "tiger"
    ferocity_scale: float = Field(..., ge=0, le=10)


class Shark(BaseModel):
    animal_type: Literal["shark"] = "shark"
    ferocity_scale: float = Field(..., ge=0, le=10)


class Lion(BaseModel):
    animal_type: Literal["lion"] = "lion"
    ferocity_scale: float


class WildAnimal(RootModel):
    root: Annotated[Union[Tiger, Shark, Lion], Field(..., discriminator='animal_type')]


try:
    my_shark = WildAnimal.model_validate({'animal_type': 'shark', 'ferocity_scale': 115})
except ValidationError as exc:
    pprint(exc.errors())
</code></pre>
<p>The code prints:</p>
<pre><code>[{'ctx': {'le': 10.0},
'input': 115,
'loc': ('shark', 'ferocity_scale'), <------ how to drop "shark"?
'msg': 'Input should be less than or equal to 10',
'type': 'less_than_equal',
'url': 'https://errors.pydantic.dev/2.7/v/less_than_equal'}]
</code></pre>
<p>In the <code>loc</code> key you can see <code>('shark', 'ferocity_scale')</code>. The value <code>shark</code> is the value of the discriminator <code>animal_type</code>.</p>
<p>How to drop it by means of Pydantic? Or how to detect dynamically that the first value of <code>loc</code> is discriminator value?</p>
<p>The only thing I could figure out:</p>
<pre><code>class WildAnimal(RootModel):
    root: Annotated[Union[Tiger, Shark, Lion], Field(..., discriminator='animal_type')]

    @classmethod
    def model_validate(cls: type[Model], obj: Any, *, strict: bool | None = None,
                       from_attributes: bool | None = None,
                       context: dict[str, Any] | None = None) -> Model:
        try:
            return super().model_validate(obj, strict=strict, from_attributes=from_attributes, context=context)
        except ValidationError as exc:
            # modify loc
            # reraise ValidationError
</code></pre>
|
<python><pydantic-v2>
|
2024-09-23 20:07:37
| 1
| 1,019
|
Альберт Александров
|
79,016,125
| 2,079,111
|
How to turn off cell spacing in a Word document using python docx? (or any other package)
|
<p>I have an input Word document that has some tables with cell spacing set to 0.02". I'd like to turn off that cell spacing (or set it to 0) with the code below, which uses the python-docx package. However, when I run the code on the input Word file and open the output file, the cell spacing is still turned on and set to 0.02.</p>
<p>I believe the code below should work and set the cell spacing to be off or zero. Why would the cell spacing changes made by python-docx not be reflected in the file?</p>
<pre><code>from docx import Document
from docx.oxml import OxmlElement
from docx.oxml.ns import qn


def disable_cell_spacing_in_tables(file_path, output_path):
    """
    Disable cell spacing for all tables in the given Word document.

    :param file_path: Path to the input Word document.
    :param output_path: Path to save the modified Word document.
    """
    # Open the Word document
    doc = Document(file_path)

    # Iterate through all tables in the document
    for table in doc.tables:
        tbl = table._element
        tblPr = tbl.tblPr if tbl.tblPr is not None else OxmlElement('w:tblPr')
        if tbl.tblPr is None:
            tbl.append(tblPr)

        # Disable cell spacing
        tbl_cell_spacing = tblPr.find(qn('w:tblCellSpacing'))
        if tbl_cell_spacing is not None:
            tblPr.remove(tbl_cell_spacing)
        tbl_cell_spacing = OxmlElement('w:tblCellSpacing')
        tbl_cell_spacing.set(qn('w:w'), '0')
        tbl_cell_spacing.set(qn('w:type'), 'dxa')
        tblPr.append(tbl_cell_spacing)

    # Save the modified document
    doc.save(output_path)
    print(f"Modified document saved to {output_path}")


if __name__ == "__main__":
    input_file_path = 'input.docx'
    output_file_path = 'output.docx'
    disable_cell_spacing_in_tables(input_file_path, output_file_path)
</code></pre>
<p>Here is the table option in the MS Word menu of the output file, still set to 0.02":</p>
<p><a href="https://i.sstatic.net/AD0iQi8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AD0iQi8J.png" alt="Table Properties >> Options >> Allow spacing between cells is still turned on" /></a></p>
|
<python><openxml><python-docx>
|
2024-09-23 20:00:17
| 1
| 828
|
Semihcan Doken
|
79,016,099
| 3,456,812
|
Beginner python tensorflow/keras woes... packages not recognized
|
<p>I am new to python programming and as a means of learning am taking on some fun AI challenges. Not ashamed to admit I've let GPT help me by asking it to assemble some starter code that I can dissect, learn from, and then add to once I understand the basics.</p>
<p>In trying to put together a simple convolutional neural network that will do image recognition, the following imports are suggested:</p>
<pre><code>import os
import pandas as pd
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical
from sklearn.model_selection import train_test_split
from PIL import Image
import numpy as np
</code></pre>
<p>Unfortunately, all of the <code>tensorflow.keras</code> imports are failing in Visual Studio Code with a "could not be resolved" error.</p>
<p>What I have tried:</p>
<p>Research says that TensorFlow, or at least the Keras parts, isn't supported on Python 3.12, which I had selected as my Python interpreter. To remedy this, I followed the recommendation to install Python 3.10 and verified it was a good install.</p>
<p>Then I started a virtual environment 'env' with Python3.10, installed tensorflow and all the other packages with pip for this environment. I had no errors installing the packages and my command prompt is showing that I'm indeed in my virtual environment.</p>
<p>I've used pip show and verified that all the packages, including tensorflow, are installed.</p>
<p>I've also been googling and reading about tensorflow and trying to find examples of code on other sites that is close to what I'm trying to accomplish, but I'm not having a lot of luck there. Being a beginner with both Python and tensorflow, I'm sure there's something basic I'm missing about why these imports are failing.</p>
<p>With all of this failing, I noticed that Keras itself is an installable package, so I tried importing from Keras directly. But that is not found either.</p>
|
<python><tensorflow>
|
2024-09-23 19:53:23
| 0
| 1,305
|
markaaronky
|
79,015,946
| 4,511,243
|
Webpage for adding and removing emails from a text file
|
<p>I have a very simple webpage which is just going to read a text file listing email addresses and write them into a table. Next to each row there is a delete button to delete that specific email from the list. There is also a button to save the data in the table to a file.</p>
<p>I cannot get it to work so that I can add rows as input fields where users can enter new emails, have those saved into the file after I press save, <em>and</em> have the input fields converted to non-editable text. Basically, I want the user to see the same page without input fields after they click "Save to file", but with the added rows included in the table.</p>
<p>This is the code:</p>
<pre class="lang-python prettyprint-override"><code>from distutils.log import debug
from flask import *

app = Flask(__name__)
data_file = 'emails.txt'


@app.route("/save", methods=["POST"])
def saveEmails():
    rows = json.loads(request.form["emails"])
    with open(data_file, 'w') as f:
        for row in rows:
            f.write(row + '\n')
    return render_template('index.html', emails=rows)


@app.route("/")
def showData():
    data = ["tester@email.com", "user@email.com"]
    return render_template("index.html", emails=data)


if __name__ == "__main__":
    app.run(debug=True)
</code></pre>
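<p>As a side note (for illustration, not from the original post): <code>showData</code> hardcodes the list instead of reading <code>emails.txt</code>. A sketch of a small loader, assuming one address per line; <code>load_emails</code> is a hypothetical helper name:</p>

```python
import tempfile
from pathlib import Path

def load_emails(path):
    """Return one email per non-blank line; an empty list if the file is missing."""
    p = Path(path)
    if not p.exists():
        return []
    return [line.strip() for line in p.read_text().splitlines() if line.strip()]

# Round-trip demo using a temporary file standing in for emails.txt
with tempfile.TemporaryDirectory() as d:
    f = Path(d) / "emails.txt"
    f.write_text("tester@email.com\nuser@email.com\n")
    emails = load_emails(f)
```

<p>In the route, <code>data = load_emails(data_file)</code> would replace the hardcoded list.</p>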
<p>And this the HTML:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html lang="en">
<head></head>
<body>
  Emails
  <button onclick="addRow()">Add row</button>
  <button onclick="saveEmails()">Save to file</button>
  <br>
  <table id="data-table">
    {% for i in range(emails | length) %}
    <tr>
      <br><td>{{emails[i]}}</td>
      <td><button onclick="deleteRow({{ i }})">Delete</button></td>
    </tr>
    {% endfor %}
  </table>
  <script>
    function deleteRow(index) {
      var table = document.getElementById('data-table');
      table.deleteRow(index);
    }

    function saveEmails() {
      var table = document.getElementById('data-table');
      var data = [];
      for (var i = 0; table.rows.length; i++) {
        data.push(table.rows[i].cells[0].innerText);
      }
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/save', true);
      xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
      xhr.send('action=save&emails=' + encodeURIComponent(JSON.stringify(data)));
    }

    function addRow() {
      var table = document.getElementById('data-table');
      var row = table.insertRow(table.rows.length);
      var cell1 = row.insertCell(0);
      var input = document.createElement('input');
      input.type = 'text';
      cell1.appendChild(input);
      var cell2 = row.insertCell(1);
      var button = document.createElement('button');
      button.textContent = 'Delete';
      button.onclick = function() {
        var index = table.rows.indexOf(rows);
        deleteRow(index);
      };
      cell2.appendChild(button);
    }
  </script>
</body>
</html>
</code></pre>
|
<python><html><flask>
|
2024-09-23 18:55:01
| 0
| 681
|
Frank
|
79,015,728
| 4,048,657
|
Why am I getting "RuntimeError: Trying to backward through the graph a second time"?
|
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import torch
import random

image_width, image_height = 128, 128


def apply_ellipse_mask(img, pos, axes):
    r = torch.arange(image_height)[:, None]
    c = torch.arange(image_width)[None, :]
    val_array = ((c - pos[0]) ** 2) / axes[0] ** 2 + ((r - pos[1]) ** 2) / axes[1] ** 2
    mask = torch.where((0.9 < val_array) & (val_array < 1), torch.tensor(1.0), torch.tensor(0.0))
    return img * (1.0 - mask) + mask


random.seed(0xced)

sphere_radius = image_height / 3
sphere_position = torch.tensor([image_width / 2, image_height / 2, 0], requires_grad=True)
ref_image = apply_ellipse_mask(torch.zeros(image_width, image_height, requires_grad=True), sphere_position, [sphere_radius, sphere_radius, sphere_radius])

ellipsoid_pos = torch.tensor([sphere_position[0], sphere_position[1], 0], requires_grad=True)
ellipsoid_axes = torch.tensor([image_width / 3 + (random.random() - 0.5) * image_width / 5, image_height / 3 + (random.random() - 0.5) * image_height / 5, image_height / 2], requires_grad=True)

optimizer = torch.optim.Adam([ellipsoid_axes], lr=0.1)
criterion = torch.nn.MSELoss()

for _ in range(100):
    optimizer.zero_grad()
    current_image = torch.zeros(image_width, image_height, requires_grad=True)
    current_image = apply_ellipse_mask(current_image, ellipsoid_pos, ellipsoid_axes)
    loss = criterion(current_image, ref_image)
    loss.backward()
    print(_, loss)
    optimizer.step()
</code></pre>
<p>Error:</p>
<blockquote>
<p>RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.</p>
</blockquote>
<p>Why would it be trying to backward through the same graph a second time? Am I directly accessing saved tensors after they were freed?</p>
|
<python><pytorch>
|
2024-09-23 17:37:09
| 1
| 1,239
|
Cedric Martens
|
79,015,645
| 4,621,247
|
How to run Selenium as the user, and not root?
|
<p>I'm running a Python script with Selenium to authenticate to a service that opens an app via deep linking.</p>
<p>The script runs remotely with SSH using sudo, but since Selenium is running as sudo, the deep link fails to open the app. Without sudo, it works fine, but I need the script to run with sudo.</p>
<p>Is there a way to run only Selenium as the user while the rest of the script runs with sudo?</p>
|
<python><selenium-webdriver><selenium-chromedriver><sudo>
|
2024-09-23 17:12:12
| 2
| 1,070
|
Gilbert Williams
|
79,015,487
| 1,405,689
|
Normalization in pandas via function
|
<p>I have this dataframe:</p>
<pre><code>   age
0   48
1    7
2   62
3   48
4   51
</code></pre>
<p>This code:</p>
<pre><code>import pandas as pd
import numpy as np


def normalizar(x):
    # Convert x to a numpy array to allow for vectorized operations.
    x = np.array(x)

    # Calculate the minimum and maximum values of x.
    xmin = x.min()
    xmax = x.max()

    # Normalize the array x using vectorized operations.
    return (x - xmin) / (xmax - xmin)


df["age_n"] = df["age"].apply(normalizar)
df
</code></pre>
<p>and I get:</p>
<pre><code>   age  age_n
0   48    NaN
1    7    NaN
2   62    NaN
3   48    NaN
4   51    NaN
</code></pre>
<p><strong>How can I solve this issue?</strong></p>
<p>The expected result would be values between [0,1]</p>
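<p>For illustration (not part of the original post): <code>Series.apply</code> hands <code>normalizar</code> one scalar at a time, so inside the function <code>xmin == xmax</code> and the division is 0/0, i.e. <code>NaN</code>. Passing the whole column once gives the expected [0, 1] range:</p>

```python
import numpy as np
import pandas as pd

def normalizar(x):
    # Vectorized min-max normalization over the whole input.
    x = np.array(x)
    xmin, xmax = x.min(), x.max()
    return (x - xmin) / (xmax - xmin)

df = pd.DataFrame({"age": [48, 7, 62, 48, 51]})

# Call the function on the whole column instead of df["age"].apply(normalizar)
df["age_n"] = normalizar(df["age"])
```

<p>Now the minimum maps to 0 and the maximum to 1, with everything else in between.</p>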
|
<python><pandas><numpy>
|
2024-09-23 16:21:30
| 1
| 2,548
|
Another.Chemist
|
79,015,439
| 1,394,353
|
How do I fill_null on a struct column?
|
<p>I am trying to compare two dataframes via <code>dfcompare = (df0 == df1)</code>, and nulls are never considered identical (unlike <code>join</code>, there is no option to allow nulls to match).</p>
<p>My approach with other fields is to fill them in with an "empty value" appropriate to their datatype. What should I use for structs?</p>
<pre><code>import polars as pl

df = pl.DataFrame(
    {
        "int": [1, 2, None],
        "data": [dict(a=1, b="b"), dict(a=11, b="bb"), None],
    }
)
df.describe()
print(df)

df2 = df.with_columns(pl.col("int").fill_null(0))
df2.describe()
print(df2)

# these error out:...
try:
    df3 = df2.with_columns(pl.col("data").fill_null(dict(a=0, b="")))
except (Exception,) as e:
    print("try#1", e)

try:
    df3 = df2.with_columns(pl.col("data").fill_null(pl.struct(dict(a=0, b=""))))
except (Exception,) as e:
    print("try#2", e)
</code></pre>
<p>Output:</p>
<pre><code>
shape: (3, 2)
┌──────┬─────────────┐
│ int  ┆ data        │
│ ---  ┆ ---         │
│ i64  ┆ struct[2]   │
╞══════╪═════════════╡
│ 1    ┆ {1,"b"}     │
│ 2    ┆ {11,"bb"}   │
│ null ┆ {null,null} │
└──────┴─────────────┘
shape: (3, 2)
┌─────┬─────────────┐
│ int ┆ data        │
│ --- ┆ ---         │
│ i64 ┆ struct[2]   │
╞═════╪═════════════╡
│ 1   ┆ {1,"b"}     │
│ 2   ┆ {11,"bb"}   │
│ 0   ┆ {null,null} │
└─────┴─────────────┘
try#1 invalid literal value: "{'a': 0, 'b': ''}"
try#2 a
Error originated just after this operation:
DF ["int", "data"]; PROJECT */2 COLUMNS; SELECTION: "None"
</code></pre>
<p>My (satisfactory) workaround has been to <code>unnest</code> the columns instead. This works fine, even better in fact, as it allows subfield-by-subfield fills. Still, I remain curious about how to build a suitable "struct literal" that can be passed into these kinds of functions.</p>
<p>One can also imagine wanting to add a hardcoded column as in <code>df4 = df.with_columns(pl.lit("0").alias("zerocol"))</code></p>
|
<python><python-polars>
|
2024-09-23 16:06:34
| 2
| 12,224
|
JL Peyret
|
79,015,399
| 14,517,452
|
QState.assignProperty not working in PySide
|
<p>I saw <a href="https://stackoverflow.com/questions/67717693/qtreewidget-how-to-change-sizehint-dynamically?rq=3">this example</a> using QState, which seems to work with PyQt5. However on trying to use PySide for this, I get this error;</p>
<pre><code>Traceback (most recent call last):
File ".../qstate-error.py", line 16, in <module>
state1.assignProperty(widget, b'test', 1)
TypeError: 'PySide2.QtCore.QState.assignProperty' called with wrong argument types:
PySide2.QtCore.QState.assignProperty(QWidget, bytes, int)
Supported signatures:
PySide2.QtCore.QState.assignProperty(PySide2.QtCore.QObject, bytes, typing.Any)
</code></pre>
<p>Unfortunately I have to use PySide for work, so switching to PyQt5 is not an option. I have tried this with PySide2 and PySide6; both throw this error. But then I saw <a href="https://stackoverflow.com/questions/78494267/pyside6-qt-state-machine-only-runs-if-i-check-its-configuration-on-every-transit">this question</a> that explicitly mentions PySide, so presumably it works. So does <code>QState</code> require some special setup that I am not aware of? Or is there something broken in my build? I'm working on Rocky 8, Python 3.9.16.</p>
<p>For reference, the following example works in PyQt5 but does not work in PySide2 (same result in PySide6).</p>
<pre><code>try:
    from PySide2.QtWidgets import QApplication, QWidget
    from PySide2.QtCore import QState
except ImportError:
    from PyQt5.QtWidgets import QApplication, QWidget
    from PyQt5.QtCore import QState

if __name__ == '__main__':
    app = QApplication([])
    widget = QWidget()
    state1 = QState()
    state1.assignProperty(widget, b'test', 1)
    widget.show()
    app.exec_()
</code></pre>
|
<python><pyqt5><pyside2><pyside6>
|
2024-09-23 15:55:26
| 1
| 748
|
Edward Spencer
|
79,015,315
| 1,406,168
|
Python script to access Azure Service Bus with Managed identity
|
<p>I want to send a message to a servicebus from Visual Studio Code, in a simple python script. I want to access the service bus using a managed identity (in this case my own user). From visual studio it would be fairly simple using the authentication part in options.</p>
<p>As I understand it, in Visual Studio Code you can use the Azure Account extension. So I installed the extension, logged in successfully, and can see all Azure resources.</p>
<p>However when i run my python script I get following error:
<a href="https://i.sstatic.net/HlBadNOy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HlBadNOy.png" alt="enter image description here" /></a></p>
<p>It's like VS Code does not pick up my credentials. Besides logging in via the extension, is any other configuration needed?</p>
|
<python><azure><visual-studio-code><azureservicebus>
|
2024-09-23 15:35:46
| 1
| 5,363
|
Thomas Segato
|
79,015,194
| 18,769,241
|
How to define variables in case their values change through a thread?
|
<p>I have <code>main.py</code> which looks like the following:</p>
<pre><code>...
val1, val2 = None, None
...
thread1 = threading.Thread(func1, )
thread1.start()
while condition:
continue
else:
thread1.join()
#check the values of val1 and val2 after the thread had changed them
if val1!= None and val2!= None:
do_something()
</code></pre>
<p>on the other hand <code>func1</code> is defined in <code>main.py</code> script as follows:</p>
<pre><code>...
def func1():
global val1, val2
val1, val2 = process_something_and_return_values()
</code></pre>
<p>My question: will defining <code>func1()</code> as I do guarantee that <code>val1</code> and <code>val2</code> are seen as changed, rather than still bound to <code>None</code>?</p>
<p>If not, how should I go about this kind of setting to change the values of variables through a <code>thread</code>?</p>
<p>PS: using Python 2.7</p>
<p>UPDATE: Since Python doesn't propagate rebinding of variables across modules, I want to check whether the code is correct when everything is in the same module.</p>
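For what it's worth, assignments a worker thread makes to module-level globals in the same module are visible after <code>join()</code> returns; a minimal sketch (Python 3 syntax, with placeholder values instead of <code>process_something_and_return_values()</code>):

```python
import threading

val1, val2 = None, None

def func1():
    # rebind the module-level names from the worker thread
    global val1, val2
    val1, val2 = 1, 2

thread1 = threading.Thread(target=func1)
thread1.start()
thread1.join()  # join() is the synchronization point: writes made by the
                # thread before it finished are guaranteed visible afterwards
assert val1 == 1 and val2 == 2
```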
|
<python>
|
2024-09-23 15:00:01
| 1
| 571
|
Sam
|
79,015,156
| 9,357,484
|
Python not recognized in laptop although it is installed
|
<p>The Python installed on my laptop does not run any code. The error says:
<a href="https://i.sstatic.net/JpddmPH2.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JpddmPH2.png" alt="enter image description here" /></a></p>
<p>I checked the environment variables (path) and the list is as follows</p>
<p><a href="https://i.sstatic.net/vl2Fueo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vl2Fueo7.png" alt="enter image description here" /></a></p>
<p>I also checked the installed program list</p>
<p><a href="https://i.sstatic.net/bmotjrGU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bmotjrGU.png" alt="enter image description here" /></a></p>
<p>I do not understand why Python is not recognized on my system.</p>
<p>Thank you for the help in advance.</p>
|
<python><python-3.x><environment-variables>
|
2024-09-23 14:51:46
| 1
| 3,446
|
Encipher
|
79,015,092
| 7,972,989
|
Reverse categories order in hovermode on Plotly
|
<p>I have a Plotly stacked barplot with specific categories order. On the plot, the categories stack from bottom to top as wanted : bronze, silver, gold.</p>
<p>However, they stack in the reversed order in the hover box: gold, silver, bronze (also from bottom to top)</p>
<p>Is there any way to correct the order in the hovermode "x unified" ?</p>
<pre><code>import plotly.express as px
long_df = px.data.medals_long()
category_orders = {
"medal": [
"bronze",
"silver",
"gold",
]
}
fig = px.bar(long_df, x="nation", y="count", color="medal", category_orders=category_orders)
fig.update_layout(hovermode="x unified", legend=dict(orientation="h"))
fig.show()
</code></pre>
|
<python><plotly>
|
2024-09-23 14:37:22
| 0
| 2,505
|
gdevaux
|
79,015,047
| 1,841,839
|
Ollama multimodal - Gemma not seeing image
|
<p>This sample <a href="https://github.com/ollama/ollama-python/blob/main/examples/multimodal/main.py" rel="nofollow noreferrer">multimodal/main.py</a> appears to show Ollama handling images.</p>
<p>I am trying to do the same with an image loaded from my machine. I am using the gemma2:27b model. The model is working with chat so that is not the issue.</p>
<h1>my Code</h1>
<pre><code>import os.path
import PIL.Image
from dotenv import load_dotenv
from ollama import generate
load_dotenv()
CHAT_MODEL_NAME = os.getenv("MODEL_NAME_LATEST")
image_path = os.path.join("data", "image_one.jpg")
test_image = PIL.Image.open(image_path)
# test 1:
for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[test_image], stream=True):
print(response['response'], end='', flush=True)
# response: ollama._types.RequestError: image must be bytes, path-like object, or file-like object
# test 2: bytes
for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[test_image.tobytes()], stream=True):
print(response['response'], end='', flush=True)
# response: Please provide me with the image!
# test 3: Path
for response in generate(CHAT_MODEL_NAME, 'What do you see', images=[image_path], stream=True):
print(response['response'], end='', flush=True)
# response: Please provide me with the image!
</code></pre>
<p>How do I properly pass an image to Gemma?</p>
<p>Cross posted on issue forum <a href="https://github.com/ollama/ollama-python/issues/289" rel="nofollow noreferrer">289</a></p>
|
<python><ollama><gemma>
|
2024-09-23 14:26:39
| 1
| 118,263
|
Linda Lawton - DaImTo
|
79,014,973
| 432,354
|
SQL Widgets in Databricks Interactive Serverless Compute Clusters Not Working
|
<p>I have the following code in a Notebook that creates a widget with three options.</p>
<pre class="lang-sql prettyprint-override"><code>create widget dropdown environment default 'int' choices
select * from (values ('int'), ('stage'), ('prod'));
</code></pre>
<p>This runs perfectly fine on a personal all purpose compute cluster. However, after switching to <a href="https://learn.microsoft.com/en-gb/azure/databricks/compute/serverless/notebooks" rel="nofollow noreferrer">serverless compute for notebooks</a>, I get the following error:</p>
<pre><code>DriverException: Selection sequence must include int
File <command-2382736690910233>, line 1
----> 1 get_ipython().run_cell_magic('sql', '', "create widget dropdown environment default 'int' choices select * from (values ('int'), ('stage'), ('prod'));\n")
File /databricks/python_shell/dbruntime/sql_magic/sql_magic.py:163, in SqlMagic.sql(self, line, cell)
161 break
162 elif request.get("status") == "error":
--> 163 raise DriverException(request.get("message"))
164 else:
165 # this should never happen
166 raise Exception("Unknown comm message received: " + str(request))
</code></pre>
<p>Is this a general problem with serverless notebook clusters?</p>
<p>Python widgets <em>do work</em> though. However, I can't use them with the remainder of my notebook, which is SQL.</p>
<pre class="lang-py prettyprint-override"><code>dbutils.widgets.dropdown("environment", "int", ["int", "stage", "prod"])
</code></pre>
|
<python><databricks><azure-databricks><serverless><databricks-sql>
|
2024-09-23 14:07:06
| 1
| 7,329
|
pvorb
|
79,014,886
| 12,550,791
|
Mocking a class in Python, works in namespace package, but doesn't for regular package
|
<h1>Context</h1>
<p>I'm writing unit tests for my application.</p>
<p>I have a module in <code>configuration/connections.py</code> with configuration (usually defined by environment variables):</p>
<pre class="lang-py prettyprint-override"><code>from typing import Literal
from pydantic_settings import BaseSettings
class StorageConfig(BaseSettings):
storage_name: Literal["file", "s3"]
</code></pre>
<p>this config is called in the module <code>connections/storage/main.py</code> :</p>
<pre class="lang-py prettyprint-override"><code>import fsspec
from configuration.connections import StorageConfig
storage_config = StorageConfig()
fs: fsspec.AbstractFileSystem = fsspec.filesystem(storage_config.storage_name)
</code></pre>
<p>and because the environment config does not exist when I run my unit tests, I'm trying to mock the object <code>StorageConfig</code>:</p>
<pre class="lang-py prettyprint-override"><code>import pytest
from pydantic import BaseModel
@pytest.fixture(autouse=True)
def local_config(mocker):
class LocalStorageConfig(BaseModel):
storage_name: str = "file"
mocker.patch("configuration.connections.StorageConfig", new=LocalStorageConfig)
</code></pre>
<h1>Problem</h1>
<p>Depending on my folder architecture, the mock sometimes works and sometimes does not.</p>
<ol>
<li><p>If I have <code>connections.storage</code> as a <a href="https://docs.python.org/3/reference/import.html#namespace-packages" rel="nofollow noreferrer">namespace package</a>, and my test is in <code>connections/storage/tests/test_storage.py</code> and that I patch directly <code>configuration.connections.StorageConfig</code> (as opposed to what <a href="https://docs.python.org/3/library/unittest.mock.html#where-to-patch" rel="nofollow noreferrer">the documentation</a> would like me to do), it works fine.</p>
</li>
<li><p>If <code>connections.storage</code> is a <a href="https://docs.python.org/3/reference/import.html#regular-packages" rel="nofollow noreferrer">regular package</a>, and all the other things as in 1., it does not work anymore</p>
</li>
<li><p>If I move my test in <code>tests/integration/test_storage.py</code> and keep all the other things as in 2., it works.</p>
</li>
</ol>
<h1>Question</h1>
<ol>
<li>Why does regular package and namespace package have an impact on the mock?</li>
<li>Why is the position of my test file impacting the mock?</li>
<li>Why can't it work when I patch <code>connections.storage.main.StorageConfig</code> as the documentation suggest it would?</li>
</ol>
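A minimal, self-contained illustration of the "patch where the name is looked up" rule from the documentation, which is the mechanism behind all three of these questions (module names here are made up for the demo; in the real project, import order between the fixture and <code>connections.storage.main</code> plays the same role):

```python
import sys
import types
from unittest import mock

# two throwaway modules standing in for configuration.connections
# and connections.storage.main
config_mod = types.ModuleType("configuration_demo")

class StorageConfig:
    storage_name = "s3"

config_mod.StorageConfig = StorageConfig
sys.modules["configuration_demo"] = config_mod

main_mod = types.ModuleType("main_demo")
exec(
    "from configuration_demo import StorageConfig\n"
    "def current(): return StorageConfig.storage_name\n",
    main_mod.__dict__,
)
sys.modules["main_demo"] = main_mod

class LocalStorageConfig:
    storage_name = "file"

# Patching the *source* module does nothing to the copy of the name
# that main_demo grabbed when its `from ... import` ran...
with mock.patch("configuration_demo.StorageConfig", new=LocalStorageConfig):
    assert main_mod.current() == "s3"

# ...while patching the name *where it is used* takes effect.
with mock.patch("main_demo.StorageConfig", new=LocalStorageConfig):
    assert main_mod.current() == "file"
```

When the consuming module is imported *after* the patch is applied (which package layout and test location can change), patching the source module appears to work, because the `from ... import` then picks up the patched object.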
|
<python><unit-testing><mocking><pytest>
|
2024-09-23 13:45:27
| 1
| 391
|
Marco Bresson
|
79,014,457
| 12,544,460
|
Cannot install auto-sklearn Ubuntu 24.04 via conda or pip
|
<p>I have an issue installing this library on both Windows and Ubuntu 24.04; the output below is from Ubuntu. I have also tried other libraries like TPOT - the install even succeeds, but I still cannot import them.</p>
<p>With pip, the error is:</p>
<pre><code> File "<string>", line 293, in setup_package
ModuleNotFoundError: No module named 'numpy.distutils'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>With conda, the error is:</p>
<pre><code> conda install conda-forge::auto-sklearn
Channels:
- defaults
- conda-forge
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: / warning libmamba Added empty dependency for problem type SOLVER_RULE_UPDATE
failed
LibMambaUnsatisfiableError: Encountered problems while solving:
- package auto-sklearn-0.12.5-pyhd8ed1ab_0 requires pyrfr >=0.8.1,<0.9, but none of the providers can be installed
Could not solve for environment specs
The following packages are incompatible
├─ auto-sklearn is installable and it requires
│  └─ pyrfr >=0.8.1,<0.9 with the potential options
│     ├─ pyrfr [0.8.1|0.8.2] would require
│     │  └─ python >=3.6,<3.7.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.7,<3.8.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.8,<3.9.0a0 , which can be installed;
│     ├─ pyrfr [0.8.1|0.8.2|0.8.3] would require
│     │  └─ python >=3.9,<3.10.0a0 , which can be installed;
│     ├─ pyrfr [0.8.2|0.8.3] would require
│     │  └─ python >=3.10,<3.11.0a0 , which can be installed;
│     └─ pyrfr 0.8.3 would require
│        └─ python >=3.11,<3.12.0a0 , which can be installed;
└─ pin-1 is not installable because it requires
   └─ python 3.12.* , which conflicts with any installable versions previously reported.
</code></pre>
|
<python><pip><conda><automl>
|
2024-09-23 11:44:19
| 1
| 362
|
Tom Tom
|
79,014,228
| 1,348,691
|
ValueError: The model did not return a loss from the inputs, but `label` exists in `train_dataset`'s `column_names`
|
<p>The complete error is:</p>
<pre><code>ValueError: The model did not return a loss from the inputs, only the following keys: logits. For reference, the inputs it received are input_ids,attention_mask.
</code></pre>
<p>However, the dataset contains a <code>label</code> column, and the <code>train_dataset</code> argument has <code>label</code> in its <code>column_names</code>.</p>
<pre><code>from transformers import Trainer, TrainingArguments
batch_size = 64
logging_steps = len(emotions_encoded["train"])
training_args = TrainingArguments(output_dir = "model_out",
num_train_epochs=2,
learning_rate = 2e-5,
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
weight_decay=0.01,
evaluation_strategy="epoch",
disable_tqdm=False,
logging_steps=logging_steps,
push_to_hub=False,
log_level="error")
from transformers import Trainer
trainer = Trainer(model=model, args=training_args,
compute_metrics=compute_metrics,
train_dataset=emotions_encoded["train"], # column_names shows <class 'list'>: ['text', 'label', 'input_ids', 'attention_mask']
eval_dataset=emotions_encoded["validation"],
tokenizer=tokenizer)
trainer.train()
</code></pre>
<p>This is what I found:
<code>train_dataset</code> in the following <code>trainer.py</code> function has <code>column_names</code> value <code><class 'list'>: ['label', 'input_ids', 'attention_mask']</code></p>
<pre><code>def get_train_dataloader(self) -> DataLoader:
return DataLoader(
train_dataset,
batch_size=self._train_batch_size,
sampler=train_sampler,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
worker_init_fn=seed_worker,
)
</code></pre>
<p>but <code>datasets arrow_dataset.py</code> function:</p>
<pre><code>def _getitem(self, key: Union[int, slice, str, ListLike[int]], **kwargs) -> Union[Dict, List]:
format_columns = kwargs["format_columns"] if "format_columns" in kwargs else self._format_columns
</code></pre>
<p>because <code>kwargs</code> is <code>None</code>, thereafter <code>self._format_columns</code> is used and its value is <code><class 'list'>: ['input_ids', 'attention_mask']</code>. Here <code>label</code> is lost.</p>
<p>And that finally caused <code>dataloder.py class _BaseDataLoaderIter</code> function:</p>
<pre><code> def __next__(self) -> Any:
with torch.autograd.profiler.record_function(self._profile_name):
if self._sampler_iter is None:
# TODO(https://github.com/pytorch/pytorch/issues/76750)
self._reset() # type: ignore[call-arg]
data = self._next_data()
</code></pre>
<p><code>data</code> contains only 'input_ids'and 'attention_mask'.</p>
<p>I am a beginner working through the book <em>NLP with Transformers</em></p>
<p><strong>ADD: model definition</strong></p>
<pre><code>from transformers import AutoModelForSequenceClassification
model = (AutoModelForSequenceClassification
.from_pretrained("distilbert-base-uncased", num_labels = 6)
.to("cpu"))
</code></pre>
<p>versions:</p>
<pre><code>transformers 4.30.2
datasets 3.0.0
torch 2.3.1+cpu
Python 3.10.2
</code></pre>
|
<python><huggingface-transformers>
|
2024-09-23 10:30:35
| 1
| 4,869
|
Tiina
|
79,014,027
| 11,729,033
|
How can one have PEP-342 style coroutines where suspending execution and moving to the other context looks like a function call on both ends?
|
<p>Python generators allow program flow to jump back and forth between two different places, with messages being passed between using <code>.send</code> and <code>yield</code> as per <a href="https://peps.python.org/pep-0342/" rel="nofollow noreferrer">PEP-342</a>. For example:</p>
<pre><code>def coroutine(lst):
x = yield
lst.append(f"{x=}")
y = yield
lst.append(f"{y=}")
yield
def func():
lst = []
coro = coroutine(lst)
next(coro)
coro.send(1)
coro.send(2)
return lst
</code></pre>
<p>Here, calling <code>func()</code> will return <code>['x=1', 'y=2']</code>. Program flow begins in <code>func</code> but flips back and forth between the body of <code>func</code> and the body of <code>coroutine</code> as the program continues in accordance with the semantics defined in PEP 342. Note that the word "coroutine" as used in that PEP (and in my example here) means something very different to modern awaitable coroutines; this functionality has been in the language since 2005, and involves no actual concurrency.</p>
<p>I want a setup that will have this same effect semantically, but where the operation "suspend execution and move to the other context, possibly with a message" looks like a function call from <em>both</em> ends. For example:</p>
<pre><code>@magical_coroutine_decorator
def coroutine(lst):
x = get_thing()
lst.append(f"{x=}")
y = get_thing()
lst.append(f"{y=}")
def func():
lst = []
coro = coroutine(lst)
coro.send(1)
coro.send(2)
return lst
</code></pre>
<p>What implementation of <code>magical_coroutine_decorator</code> and <code>get_thing</code> will have the semantics I'm looking for? Is there a library somewhere that already does this?</p>
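One sketch of such a decorator uses a worker thread plus a strict rendezvous, so that, as with PEP-342 generators, only one side is ever logically running at a time (it does use a real thread under the hood; the third-party greenlet library would give the same shape without threads). The names <code>_local</code>, <code>_Coro</code>, and the queue handshake are all invented for this sketch:

```python
import threading
import queue

_local = threading.local()

def get_thing():
    """Park the coroutine and return whatever the driver sends next."""
    ch = _local.channel
    ch.from_coro.put("suspended")   # tell the driver we are parked
    return ch.to_coro.get()         # block until the driver sends a value

def magical_coroutine_decorator(fn):
    class _Coro:
        def __init__(self, *args, **kwargs):
            self.to_coro = queue.Queue()
            self.from_coro = queue.Queue()

            def run():
                _local.channel = self
                fn(*args, **kwargs)
                self.from_coro.put("done")

            threading.Thread(target=run, daemon=True).start()
            self.from_coro.get()        # wait for the first get_thing()

        def send(self, value):
            self.to_coro.put(value)
            self.from_coro.get()        # wait for next suspension (or finish)

    return _Coro

@magical_coroutine_decorator
def coroutine(lst):
    x = get_thing()
    lst.append(f"{x=}")
    y = get_thing()
    lst.append(f"{y=}")

lst = []
coro = coroutine(lst)
coro.send(1)
coro.send(2)
assert lst == ["x=1", "y=2"]
```

Because <code>send</code> blocks until the coroutine either parks in the next <code>get_thing()</code> or finishes, control flips back and forth deterministically, exactly as with <code>yield</code>/<code>.send</code>.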
|
<python><generator>
|
2024-09-23 09:35:13
| 1
| 314
|
J E K
|
79,014,004
| 2,955,827
|
How to fill blank cells created by join but keep original null in pandas
|
<p>I have two dataframe, one is daily and one is quarterly</p>
<pre class="lang-py prettyprint-override"><code>idx = pd.date_range("2023-03-31", periods=100, freq="D")
idx_q = idx.to_series().resample("QE").last()
df1 = pd.DataFrame({"A": [1, "a", None], "B": [4, None, 6]}, index=idx_q)
np.random.seed(42)
df2 = pd.DataFrame({"C": np.random.randn(100), "D": np.random.randn(100)}, index=idx)
# resample df2 to workdays
df2 = df2.resample("B").asfreq()
# mask values larger than 0.9 in df2 with NaN
df2 = df2.mask(df2 > 0.9)
df = df2.join(df1)
</code></pre>
<p>I want to join them and ffill quarter data to daily.</p>
<p>The problem is that my data contains None values from the source, and these should be kept in the result.</p>
<p>What's the right way to make this ffill?</p>
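One approach (a sketch; the quarterly index is built by hand here to sidestep resample frequency aliases): forward-fill <em>rows</em> of df1 onto the daily index with <code>reindex</code> before the join, so each day inherits a whole quarterly row and any genuine NaN inside that row survives:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2023-03-31", periods=100, freq="D")
q_idx = pd.DatetimeIndex(["2023-03-31", "2023-06-30", "2023-09-30"])
df1 = pd.DataFrame({"A": [1, "a", None], "B": [4, None, 6]}, index=q_idx)

rng = np.random.default_rng(42)
df2 = pd.DataFrame({"C": rng.standard_normal(100)}, index=idx).resample("B").asfreq()

# reindex(..., method="ffill") propagates whole quarterly rows forward;
# a NaN that was already inside a row is copied forward as-is, not filled.
df = df2.join(df1.reindex(df2.index, method="ffill"))
```

After this, a day in Q3 of the example carries <code>A == 'a'</code> together with its original null in <code>B</code>, whereas a plain <code>join</code> followed by <code>ffill()</code> on the whole frame would have overwritten that null.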
|
<python><pandas><dataframe>
|
2024-09-23 09:30:03
| 1
| 3,295
|
PaleNeutron
|
79,013,903
| 1,138,523
|
How to debug a specific method in VS Code (not main) using python
|
<p>I have a method <code>cli</code> in <code>mymodule</code> which is the entrypoint for my program. <code>cli</code> takes several keyword arguments. The module <code>mymodule</code> does <strong>not</strong> contain something like</p>
<pre><code>if __name__ == "__main__":
cli(...)
</code></pre>
<p>How do I specify a run configuration for this?</p>
<p>I tried</p>
<pre><code>{
"name": "myentrypoint",
"type": "debugpy",
"request": "launch",
"module": "mymodule.cli",
"console": "integratedTerminal",
"args": [
"--arg1", "value1",
"--arg2", "value2"
],
"justMyCode": true
}
</code></pre>
<p>(as suggested in <a href="https://stackoverflow.com/questions/67518928/how-to-make-vscode-launch-json-for-a-python-module">How to make VScode launch.json for a Python module</a>)</p>
<p>This gives me</p>
<pre><code>/home/xxxxxx/projects/myproject/.venv/bin/python: Error while finding module specification for 'mymodule.cli' (ModuleNotFoundError: __path__ attribute not found on 'mymodule' while trying to find 'mymodule.cli')
</code></pre>
<p>Normally I use this entrypoint using poetry like</p>
<pre><code>[tool.poetry.scripts]
myentrypoint= 'mymodule:cli'
</code></pre>
<p>And then run it from the terminal:
<code>poetry run myentrypoint --arg1 "value1" --arg2 "value2"</code></p>
<p>But now I need to debug the program.</p>
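One workaround worth trying (assuming poetry's virtualenv lives at an in-project <code>.venv</code>, which is an assumption): the console script that poetry installs is itself a plain Python file, so it can be launched with a <code>"program"</code> entry instead of <code>"module"</code>:

```json
{
    "name": "myentrypoint",
    "type": "debugpy",
    "request": "launch",
    "program": "${workspaceFolder}/.venv/bin/myentrypoint",
    "console": "integratedTerminal",
    "args": [
        "--arg1", "value1",
        "--arg2", "value2"
    ],
    "justMyCode": true
}
```

This reuses the same argument parsing path as <code>poetry run myentrypoint ...</code>, so breakpoints inside <code>cli</code> should be hit.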
|
<python><visual-studio-code><debugging>
|
2024-09-23 09:00:16
| 1
| 27,285
|
Raphael Roth
|
79,013,736
| 7,959,614
|
Fill numpy array to the right based on previous column
|
<p>I have the following states and transition matrix</p>
<pre><code>import numpy as np
n_states = 3
states = np.arange(n_states)
T = np.array([
[0.5, 0.5, 0],
[0.5, 0, 0.5],
[0, 0, 1]
])
</code></pre>
<p>I would like to simulate <code>n_sims</code> paths where each path consist of <code>n_steps</code>. Each path starts at 0. Therefore, I write</p>
<pre><code>n_sims = 100
n_steps = 10
paths = np.zeros((n_sims, n_steps), dtype=int)
</code></pre>
<p>With the <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.Generator.choice.html" rel="nofollow noreferrer">help</a> of <code>np.random.Generator.choice</code> I would like to "fill to the right" the paths using the transition matrix.
My attempt looks as follows:</p>
<pre><code>rng = np.random.default_rng(seed=123)
for s in range(1, n_steps+1):
paths[:,s] = rng.choice(
a=n_states,
size=n_sim,
p=T[paths[:,s-1]]
)
</code></pre>
<p>This result in the following error:</p>
<blockquote>
<p>ValueError: p must be 1-dimensional</p>
</blockquote>
<p>How can I overcome this? If possible, I would like to prevent for-loops and vectorize the code.</p>
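The error arises because <code>p</code> must be a single 1-D distribution, so one <code>rng.choice</code> call cannot use a different row of <code>T</code> per path. One common workaround (a sketch, not the only option): keep the loop over steps, but vectorize across all paths within a step via inverse-CDF sampling:

```python
import numpy as np

n_states = 3
T = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0]])

n_sims, n_steps = 100, 10
paths = np.zeros((n_sims, n_steps), dtype=int)
rng = np.random.default_rng(seed=123)

for s in range(1, n_steps):
    # row-wise CDFs for the current state of every path: shape (n_sims, n_states)
    cum = T[paths[:, s - 1]].cumsum(axis=1)
    u = rng.random((n_sims, 1))
    # inverse-CDF sampling: the sampled state is the number of CDF entries below u
    paths[:, s] = (u > cum).sum(axis=1)
```

Each step is one vectorized draw for all simulations; only the unavoidable dependence between consecutive steps remains a loop.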
|
<python><numpy>
|
2024-09-23 08:09:22
| 1
| 406
|
HJA24
|
79,013,584
| 855,217
|
How do I format numbers with leading zeros efficiently in Python?
|
<p>I have a set of floats in my script, and I'd like to print them from a method with a set number of leading zeros and decimal places, so that for example:</p>
<ul>
<li>0.0000 becomes 000.0000</li>
<li>12.1246 becomes 012.1246</li>
</ul>
<p>I've tried using rjust and an f-string eg.</p>
<pre><code>fX = str(str(x)).rjust(4,'0') # and
str(f'{x:0.4f}'
</code></pre>
<p>but can't seem to able to get the leading zeros. (I think it's the equivalent of using an Excel form 000.0000). Is there a straightforward way to do this in Python?</p>
|
<python><string><formatting>
|
2024-09-23 07:28:49
| 2
| 1,622
|
Pete855217
|
79,013,549
| 4,332,274
|
How to divide large numbers by small numbers without rounding in Python
|
<p>Using Python 3, I want to calculate <code>18446744073709550592 / 65536</code>. In Python I get the output:</p>
<pre><code>281474976710656
</code></pre>
<p>But the actual result is:</p>
<pre><code>281474976710655.984375
</code></pre>
<p>Is it automatically rounded? Is there any way to calculate it accurately?</p>
<p>In the case of larger numbers, for example:
<code>101145323450295648841204270695912834609459452764473586182335661131099020400596942132578006286870917941163064976533634484749130329189561344995256870 / 65536</code>, Python gives output:</p>
<pre><code>1.543355155186396e+141
</code></pre>
<p>The expected result:</p>
<pre><code>1543355155186396008929508525023084024192191356879784945409174516770920111093092989083526707258162200029953994392908241039262852923424703140186.4146423339
</code></pre>
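The rounding comes from <code>/</code> returning a 53-bit float. Two stdlib ways to keep the exact value (a sketch of the smaller example; the same works for the larger number once the <code>Decimal</code> precision is set high enough):

```python
from decimal import Decimal, getcontext
from fractions import Fraction

getcontext().prec = 50  # enough significant digits for this quotient

q = Decimal(18446744073709550592) / Decimal(65536)
print(q)            # 281474976710655.984375 (exact: the expansion is finite)

r = Fraction(18446744073709550592, 65536)
print(r)            # Fraction stays exact as a ratio; convert only at the end
```

<code>Fraction</code> never loses precision at all; <code>Decimal</code> gives a positional result but needs <code>prec</code> raised to cover all the digits you want.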
|
<python><python-3.x><integer><largenumber>
|
2024-09-23 07:19:56
| 4
| 307
|
QChΓ Nguyα»
n
|
79,013,441
| 4,489,082
|
pandas.Period for a custom time period
|
<p>I want to create <code>pandas.Period</code> for a custom time period, for example for a duration <code>starting_time = pd.Timestamp('2024-01-01 09:15:00')</code> and <code>ending_time = pd.Timestamp('2024-01-05 08:17:00')</code>.</p>
<p>One way to achieving this is by first getting the <code>pandas.Timedelta</code> and then create <code>pandas.Period</code>.</p>
<pre><code>import pandas as pd
# Define start and end times
starting_time = pd.Timestamp('2024-01-01 09:15:00')
ending_time = pd.Timestamp('2024-01-05 08:17:00')
# Calculate the duration (period) between the two timestamps
period_duration = ending_time - starting_time
period_duration_in_minutes = (period_duration.total_seconds()) //60
freq_str = f"{period_duration_in_minutes}min"
period = pd.Period(starting_time, freq = freq_str)
print(period.start_time)
print(period.end_time)
</code></pre>
<p>But I need a straightforward approach, something like this (I know this won't work):</p>
<pre><code>period = pd.Period(start_time = starting_time, end_time=ending_time)
</code></pre>
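As far as I know, <code>pd.Period</code> always requires a frequency, so there is no constructor taking a start and an end directly. For an arbitrary span, <code>pd.Interval</code> of two Timestamps may be the more natural object (one possible alternative, not a drop-in <code>Period</code> replacement):

```python
import pandas as pd

start = pd.Timestamp("2024-01-01 09:15:00")
end = pd.Timestamp("2024-01-05 08:17:00")

# an Interval models the span directly, no frequency needed
span = pd.Interval(start, end, closed="both")
print(span.left)    # 2024-01-01 09:15:00
print(span.right)   # 2024-01-05 08:17:00
print(span.length)  # 3 days 23:02:00
print(pd.Timestamp("2024-01-03") in span)  # True
```

<code>Interval</code> supports membership tests and overlap checks, which covers most uses of a custom period; if Period-specific APIs are required, the frequency-from-timedelta construction in the question remains necessary.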
|
<python><pandas><datetime>
|
2024-09-23 06:44:08
| 2
| 793
|
pkj
|
79,013,239
| 13,447,006
|
Preventing date conversion in Excel with xlwings
|
<p>The problem is as the title states. I have a column <code>AX</code> filled with values. The name of the column is "Remarks" and it will contain remarks but some of those remarks are dates and some are full blown notes like "Person A owes Person B X amount."</p>
<p>The problem I'm currently facing now is that in xlwings the columns that are just dates like "1/8/24" are converted to the date data type. I do not want this conversion to happen. I want it to remain as "1/8/24" literally and remain as the data type of "Text".</p>
<p>The full workflow is as follows:</p>
<ol>
<li>Read data from excel (I have no write access)</li>
<li>Create a new excel workbook</li>
<li>Put processed data into new excel workbook</li>
</ol>
<p>So I tried to fix it in two places</p>
<ol>
<li>After I read the data I converted the AX columns' values all to string with <code>str(cell.value)</code> among other options, none of which worked.</li>
<li>Before the new excel workbook is saved.</li>
</ol>
<p>Nothing in option 1 worked and I figured that it had something to with how Excel is handling dates. So, I'm now trying to prevent the conversion and just have "1/8/24" appear literally but nothing is working. I checked the documentation and I tried <code>Range.options</code> to prevent the conversion but it doesn't help much. As when I inspected the cell with "1/8/24" it showed up as a <code>datetime.datetime</code> object. Converting that with <code>str</code> just turns it back into a date anyways. So, I figured that I have to find a way to do the converting after it was written into the workbook.</p>
<p>I messed around with data types in Excel and I found out that if I used this</p>
<p><a href="https://i.sstatic.net/MBwsg1Op.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBwsg1Op.png" alt="The Text to Columns button in Data tab" /></a></p>
<p>Clicked next on everything</p>
<p><a href="https://i.sstatic.net/lWhl7z9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lWhl7z9F.png" alt="Final page of the wizard that appears" /></a></p>
<p>Then selected "Text" in the final screen the dates appeared. So, that leads me to try a new option which is to convert the data type of the entire column to just "Text". So I tried out stuff like this <code>sheet.range("AX1").expand("down").api.NumberFormat = "@"</code>. But the workbook that was generated still doesn't show "1/8/24" literally. Instead it shows some number like 45299. Surprisingly when I converted that cell into "Long Date" it gets turned into a date "1st August 2024". This is where I stopped working as I ran out of ideas and have no idea how to continue. Any guidance is very much appreciated, thank you.</p>
|
<python><excel><xlwings>
|
2024-09-23 05:14:20
| 2
| 565
|
AlphabetsAlphabets
|
79,013,151
| 2,058,333
|
FastAPI VSCode debug in new alpine docker container
|
<p>I had to update the base image for my Python backend from the now-deprecated <a href="https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker" rel="nofollow noreferrer">FastAPI</a> image to a newer Alpine version; Dockerfile below.
However, now I cannot get the VS Code debugger back up because:</p>
<pre><code>cd /fastapi ; /usr/bin/env /usr/bin/python3 /root/.vscode-server/extensions/ms-python.debugpy-2024.10.0/bundled/libs/debugpy/adapter/../../debugpy/launcher 53213 -- -m uvicorn main:app --reload --port 8001
/usr/bin/python3: No module named uvicorn
</code></pre>
<p>launch.json</p>
<pre><code>"configurations": [
{
"name": "Python Debugger: FastAPI",
"type": "debugpy",
"request": "launch",
"module": "uvicorn",
"args": [
"main:app",
"--reload",
"--port",
"8001"
],
"jinja": true
},
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.11.10-alpine3.20
COPY ./fastapi/app /app
COPY ./fastapi/requirements.txt /requirements.txt
COPY ./fastapi/start-reload.sh /start-reload.sh
COPY ./docker/memcached.sh /memcached.sh
USER root
ENV HOSTNAME fastapi
ENV PYTHONPATH '/:/app'
ENV IS_PROD 1
ENV TZ=Europe/Berlin
RUN apk add --no-cache mariadb-connector-c-dev memcached &&\
apk add --no-cache --virtual .build-deps \
build-base \
mariadb-dev &&\
apk add php83 &&\
apk add gcc musl-dev &&\
apk del .build-deps
RUN apk add py3-pip && \
pip install --upgrade pip && \
pip install -r /requirements.txt && \
echo 'asdf'
# CLEANUP
RUN apk cache clean &&\
apk del gcc musl-dev
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<p>requirements.txt</p>
<pre><code>fastapi[standard]
debugpy
pydantic
pytz
requests
python-dateutil
numpy
scipy
pymemcache
mysqlclient
</code></pre>
<p>I also tried to attach to a running endpoint using <code>debugpy</code> <code>attach</code> but could not get that to work either...</p>
<p>I don't understand why the module is missing, because it is installed:</p>
<pre><code>/fastapi # pip list | grep uvicorn
uvicorn 0.30.6
</code></pre>
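Worth checking: <code>pip list</code> here runs against the base image's interpreter at <code>/usr/local/bin/python</code>, while the failing launch invoked <code>/usr/bin/python3</code>, i.e. the second, Alpine-packaged Python that <code>apk add py3-pip</code> pulls in. If that mismatch is the cause, one thing to try (paths assumed from the <code>python:3.11-alpine</code> base image) is pinning the interpreter in the launch config:

```json
{
    "name": "Python Debugger: FastAPI",
    "type": "debugpy",
    "request": "launch",
    "module": "uvicorn",
    "python": "/usr/local/bin/python",
    "args": [
        "main:app",
        "--reload",
        "--port",
        "8001"
    ],
    "jinja": true
}
```

Alternatively, dropping <code>apk add py3-pip</code> from the Dockerfile (the base image already ships pip) would remove the duplicate interpreter altogether.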
<p>------ EDIT ------
Answering comments</p>
<p>Here is the <code>PYTHONPATH</code>. It looks a bit messed up, even when I remove the <code>ENV</code> command that modifies it:</p>
<pre><code>/fastapi # env | grep PYTHONPATH
PYTHONPATH=/fastapi:/
</code></pre>
<p>Below is the Dockerfile <code>CMD</code> that actually does start up the server.</p>
<pre><code>CMD ["python3", "-m", "debugpy", "--listen", "0.0.0.0:5678", "-m", "uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]
</code></pre>
<pre><code>sandbox | 0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
sandbox | INFO: Started server process [27]
sandbox | INFO: Waiting for application startup.
sandbox | INFO: Application startup complete.
</code></pre>
|
<python><docker><visual-studio-code><docker-compose><fastapi>
|
2024-09-23 04:24:36
| 0
| 5,698
|
El Dude
|
79,012,993
| 7,428,676
|
Unable to get LinkedIn API statistics - 500 response
|
<p>I initially get a list of campaign_ids from the LinkedIn account</p>
<pre><code>headers = {
'Authorization': f'Bearer {ACCESS_TOKEN}',
'Content-Type': 'application/json'
}
campaigns_url = f'https://api.linkedin.com/v2/adCampaignsV2?q=search&search.account.values[0]=urn:li:sponsoredAccount:XYZ1234'
response = requests.get(campaigns_url, headers=headers)
if response.status_code == 200:
campaigns_data = response.json()['elements']
else:
print(f"Error fetching campaigns: {response.status_code}")
print(response.json())
campaigns_data = []
campaign_ids = [campaign['id'] for campaign in campaigns_data]
print(campaign_ids)
</code></pre>
<p>I retrieve the list of campaign_ids successfully. Then I want to retrieve the statistics of each sponsored campaign using the campaign ids through the following:</p>
<pre><code>headers_2 = {
'Authorization': f'Bearer {ACCESS_TOKEN}',
'Linkedin-Version': '202409',
'X-Restli-Protocol-Version': '2.0.0'
}
base_statistics_url='https://api.linkedin.com/rest/adAnalytics'
for campaign_id in campaign_ids:
params = {
'q': 'statistics',
'pivots': 'List(CAMPAIGN,CREATIVE)',
'dateRange': '(start:(year:2020,month:1,day:1)',
'timeGranularity': 'YEARLY',
'campaigns': f'List(urn:li:sponsoredCampaign:{campaign_id})'
}
ad_analytics_response = requests.get(base_statistics_url, headers=headers_2, params=params)
    print(ad_analytics_response.json())
</code></pre>
<p>However, I get the following response from the server:</p>
<pre><code>{'message': 'Internal Server Error', 'status': 500}
</code></pre>
|
<python><python-requests><linkedin-api>
|
2024-09-23 02:36:46
| 0
| 564
|
IronMaiden
|
79,012,832
| 1,601,580
|
multiprocess.pool.RemoteTraceback and TypeError: Couldn't cast array of type string to null when loading Hugging Face dataset
|
<p>I'm encountering an error while trying to load and process the GAIR/MathPile dataset using the Hugging Face datasets library. The error seems to occur during type casting in pyarrow within a multiprocessing environment. Below is the code I'm using:</p>
<pre class="lang-py prettyprint-override"><code>from datasets import Dataset, load_dataset
import os
def get_hf_dataset_gair(path: str = '~/data/GAIR/MathPile/train/') -> Dataset:
path: str = os.path.expanduser(path)
dataset = load_dataset(path, split='train', num_proc=os.cpu_count())
print(dataset[0]) # Preview a single example from the dataset
# Remove unnecessary columns
all_columns = dataset.column_names
all_columns.remove('text')
dataset = dataset.remove_columns(all_columns)
# Shuffle and select 10k examples
dataset = dataset.shuffle(seed=42)
    dataset = dataset.select(range(10_000))  # select() expects an iterable of indices
return dataset
# get it
get_hf_dataset_gair()
</code></pre>
<p>Download GAIR</p>
<pre class="lang-bash prettyprint-override"><code>source $AFS/.bashrc
conda activate beyond_scale_2
mkdir -p ~/data/GAIR/MathPile
huggingface-cli download --resume-download --repo-type dataset GAIR/MathPile --local-dir ~/data/GAIR/MathPile --local-dir-use-symlinks False
cd ~/data/GAIR/MathPile/
find . -type f -name "*.gz" -exec gzip -d {} \;
</code></pre>
<p>I get the following error when running the code:</p>
<pre><code>multiprocess.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/lfs/hyperturing1/0/user/miniconda/envs/beyond_scale_2/lib/python3.11/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
writer.write_table(table)
...
TypeError: Couldn't cast array of type string to null
</code></pre>
<p>Here's the full stack trace:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "/lfs/hyperturing1/0/user/miniconda/envs/beyond_scale_2/lib/python3.11/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
writer.write_table(table)
File "/lfs/hyperturing1/0/user/miniconda/envs/beyond_scale_2/lib/python3.11/site-packages/datasets/arrow_writer.py", line 580, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/lfs/hyperturing1/0/user/miniconda/envs/beyond_scale_2/lib/python3.11/site-packages/datasets/table.py", line 2283, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
...
TypeError: Couldn't cast array of type string to null
</code></pre>
<p>It seems to be related to type casting between pyarrow types. I suspect it has something to do with the dataset schema, but I'm not sure how to resolve the error. I've verified that the dataset is correctly downloaded, and I'm using the following environment:</p>
<ul>
<li>Hugging Face datasets version: 2.x.x</li>
<li>Python 3.11</li>
<li>OS: Linux (running on a server)</li>
<li>Multiprocessing is set to use all available CPUs (<code>num_proc=os.cpu_count()</code>)</li>
</ul>
<p>Has anyone encountered this issue before, or does anyone have suggestions on how to fix it?</p>
<ul>
<li>ref: <a href="https://huggingface.co/datasets/GAIR/MathPile/discussions/3" rel="nofollow noreferrer">https://huggingface.co/datasets/GAIR/MathPile/discussions/3</a></li>
<li>ref: <a href="https://discuss.huggingface.co/t/multiprocess-pool-remotetraceback-and-typeerror-couldnt-cast-array-of-type-string-to-null-when-loading-hugging-face-dataset/108166" rel="nofollow noreferrer">https://discuss.huggingface.co/t/multiprocess-pool-remotetraceback-and-typeerror-couldnt-cast-array-of-type-string-to-null-when-loading-hugging-face-dataset/108166</a></li>
<li>ref: <a href="https://stackoverflow.com/questions/79012832/multiprocess-pool-remotetraceback-and-typeerror-couldnt-cast-array-of-type-str">multiprocess.pool.RemoteTraceback and TypeError: Couldn't cast array of type string to null when loading Hugging Face dataset</a></li>
</ul>
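<p>For what it's worth, a common trigger for this exact cast error is that some field is null in every record of one shard (so pyarrow infers type <code>null</code> for it) while the same field holds strings in another shard. A stdlib-only sketch for locating such fields in downloaded JSONL shards — the field names below are made up for illustration:</p>

```python
# Hypothetical debugging sketch: find fields that are null in every record of a
# shard. Such fields get inferred as pyarrow type `null`, which then fails to
# cast against shards where the same field is a string.
import json
import collections

def all_null_fields(lines):
    """Return field names that never hold a non-null value in these records."""
    seen = collections.defaultdict(bool)  # field -> saw a non-null value?
    for line in lines:
        record = json.loads(line)
        for key, value in record.items():
            seen[key] = seen[key] or value is not None
    return sorted(k for k, v in seen.items() if not v)

# Example: "meta" is always null in this shard, "text" is not.
shard = ['{"text": "a", "meta": null}', '{"text": "b", "meta": null}']
print(all_null_fields(shard))  # ['meta']
```

<p>Passing an explicit <code>features=</code> schema to <code>load_dataset</code> (or dropping the offending columns) is one way to sidestep the inference mismatch.</p>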
|
<python><python-multiprocessing><huggingface-datasets>
|
2024-09-23 00:17:04
| 2
| 6,126
|
Charlie Parker
|
79,012,782
| 615,525
|
Need Help Appending Separate Lines of Text Retrieved From Image in Python
|
<p>I am writing a program that uses Microsoft Computer Vision to read text from images. The line of text is then stored in a string to be used later.</p>
<p>I have accomplished that part of my project, which works fine if my image only has a single line of text.</p>
<p>However SOMETIMES the text on the image is in multiple, separate lines - This is where I run into trouble.</p>
<p>When there is one single line of text, my code works fine, BUT when the image has multiple lines of text, the string I am trying to store only gets the final line read from the image.</p>
<p>Here's my code:</p>
<pre><code>if read_result.status == OperationStatusCodes.succeeded:
for text_result in read_result.analyze_result.read_results:
for line in text_result.lines:
line_str = line.text
print(f"Line string: ",line_str)
upload_title_str = line_str
print(f" Upload Title String: {(upload_title_str)}")
</code></pre>
<p>The Output is:
<a href="https://i.sstatic.net/82SUKi3T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82SUKi3T.png" alt="Here are my printed output lines" /></a></p>
<p>I want all the lines to be on a SINGLE line, and my <code>upload_title_str</code> to be <code>line_str1 + line_str2 + line_str3</code>.</p>
<p>I've tried different .joins and appends, += operator etc.</p>
<p>I think I am missing something basic, but after staring at this for a couple hours it's just not coming to me.</p>
<p>Any suggestions?</p>
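<p>For reference, the usual pattern for building one string out of several loop iterations is to accumulate into a list and join afterwards. A minimal stand-in sketch (the <code>lines</code> list below replaces <code>text_result.lines</code>, and the space separator is an assumption about the desired output):</p>

```python
# Sketch: accumulate each OCR line into a list, then join once after the loop,
# instead of overwriting the same variable on every iteration.
lines = ["First line", "Second line", "Third line"]

parts = []
for line_text in lines:
    parts.append(line_text)

upload_title_str = " ".join(parts)
print(upload_title_str)  # First line Second line Third line
```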
|
<python><ocr><azure-cognitive-services>
|
2024-09-22 23:26:18
| 1
| 353
|
user615525
|
79,012,771
| 11,249,937
|
PyTorch model performance drop depending on test dataset batch size on MPS
|
<p>I have a model that uses an LSTM and a fully connected layer:</p>
<pre><code>Model(
(lstm): LSTM(3, 32, num_layers=3, batch_first=True, dropout=0.7)
(dense): Linear(in_features=32, out_features=2, bias=True)
)
</code></pre>
<p>I am training and testing my model using MPS on an Apple M2 chip.</p>
<p>For the loss function I am using cross-entropy loss with computed weights for each class, and for optimisation I am using AdamW. The F-score is measured with <code>classification_report</code> from the sklearn library.</p>
<p>The problem: when the batch size of both the train and test datasets is 64, model performance grows as expected, but when the batch size of the test dataset is 256, performance drops massively and does not grow any further.</p>
<p>On plot below you can see performances of batch sizes. Pink graph represents batch size of test dataset of 64, blue represents batch size of 256<a href="https://i.sstatic.net/bm1r73QU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bm1r73QU.png" alt="enter image description here" /></a></p>
<p>Train F-Score is computed on train dataset with batch size of 64</p>
<p>I also always set my model to <code>eval()</code> or <code>train()</code> mode:</p>
<pre><code>for epoch in range(epochs):
model.train()
print(f'Epoch {epoch}')
__train_loop(model, train_dataloader, loss_function, optimizer, scheduler, verbose, device=device)
model.eval()
train_accuracy, train_f_score = test_model(model, train_dataloader, device=device)
print(f'Train accuracy: {train_accuracy}')
print(f'Train F-Score: {train_f_score}')
accuracy, f_score = test_model(model, test_dataloader, device=device)
</code></pre>
<p>training loop:</p>
<pre><code>for batch_id, (X, y) in enumerate(train_dataloader):
X, y = X.to(device), y.to(device)
optimizer.zero_grad()
y_pred = model(X)
loss = loss_function(y_pred, y)
loss.backward()
optimizer.step()
</code></pre>
<p>testing model</p>
<pre><code>with torch.no_grad():
    for batch_id, (X, y) in enumerate(test_dataloader):
        X, y = X.to(device), y.to(device)
        y_pred = model(X)
        y_pred = torch.argmax(y_pred, dim=1)
        y_pred_all.append(y_pred.cpu().numpy())
        y_all.append(y.cpu().numpy())
        bar.update(batch_id)

y_pred_all = np.hstack(y_pred_all).flatten()
y_all = np.hstack(y_all).flatten()
cr = classification_report(y_all, y_pred_all, output_dict=True)
f_score = cr['macro avg']['f1-score']
accuracy = cr['accuracy']
</code></pre>
<p>I expect performance not to change drastically when changing the batch size of the test dataset; the influence of batch size on the model's output is odd here, because I am not using batch norm.</p>
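<p>One sanity check on the evaluation path itself: because the F-score is computed on the concatenation of all per-batch predictions, re-batching alone cannot change the metric's inputs. A pure-Python stand-in for the numpy/sklearn code above:</p>

```python
# Splitting a sequence into batches of any size and flattening again always
# reproduces the original sequence, so a metric computed on the concatenation
# is identical for batch size 64 and 256.
def rebatch(xs, batch_size):
    return [xs[i:i + batch_size] for i in range(0, len(xs), batch_size)]

def flatten(batches):
    return [x for batch in batches for x in batch]

preds = list(range(1000))
assert flatten(rebatch(preds, 64)) == flatten(rebatch(preds, 256)) == preds
print("metric inputs identical for any batch size")
```

<p>If that invariant holds in your code too, the drop likely originates elsewhere — e.g., in which samples each dataloader actually yields, or in model state — rather than in the metric computation.</p>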
|
<python><pytorch><apple-silicon>
|
2024-09-22 23:12:19
| 0
| 345
|
wybot
|
79,012,503
| 4,321,525
|
How to Properly Track Gradients with MyGrad When Using Scipy's RectBivariateSpline for Interpolation?
|
<p>I'm working on a project where I need to interpolate enthalpy values using <code>scipy.interpolate.RectBivariateSpline</code> and then perform automatic differentiation using <code>mygrad</code>. However, I'm encountering an issue where the gradient is not tracked at all across the interpolation. Here is a simplified version of my code:</p>
<pre><code>import numpy as np
from scipy.interpolate import RectBivariateSpline
import CoolProp.CoolProp as CP
import mygrad as mg
from mygrad import tensor
# Define the refrigerant
refrigerant = 'R134a'
# Constant temperature (e.g., 20Β°C)
T = 20 + 273.15 # Convert to Kelvin
# Get saturation pressures
P_sat = CP.PropsSI('P', 'T', T, 'Q', 0, refrigerant)
# Define a pressure range around the saturation pressure
P_min = P_sat * 0.5
P_max = P_sat * 1.5
P_values = np.linspace(P_min, P_max, 100)
# Define a temperature range around the constant temperature
T_min = T - 10
T_max = T + 10
T_values = np.linspace(T_min, T_max, 100)
# Generate enthalpy data
h_values = []
for P in P_values:
    h_row = []
    for T_val in T_values:  # avoid shadowing the constant T defined above
        try:
            h = CP.PropsSI('H', 'P', P, 'T', T_val, refrigerant)
            h_row.append(h)
        except Exception:
            h_row.append(np.nan)
    h_values.append(h_row)
# Convert lists to arrays
h_values = np.array(h_values)
# Fit spline for enthalpy
h_spline = RectBivariateSpline(P_values, T_values, h_values)
# Function to interpolate enthalpy
def h_interp(P, T):
return tensor(h_spline.ev(P, T))
# Example function using the interpolated enthalpy with AD
def example_function(P):
h = h_interp(P, T)
result = h**2 # Example calculation
return result
# Define a pressure value for testing
P_test = tensor(P_sat)
# Compute the example function and its gradient
result = example_function(P_test)
result.backward()
# Print the result and the gradient
print(f"Result: {result.item()}")
print(f"Gradient: {P_test.grad}")
</code></pre>
<p>Are these just issues of <code>RectBivariateSpline</code> or <code>mygrad</code>? Would this work with other automatic differentiation libraries? Should I use something else besides splines?</p>
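<p>Not mygrad-specific, but a toy forward-mode illustration (a hand-rolled, hypothetical <code>Dual</code> class) of why wrapping the plain numpy output of <code>h_spline.ev(...)</code> in <code>tensor(...)</code> severs tracking — leaving the autodiff type at any point drops the derivative:</p>

```python
# Toy illustration (not mygrad/scipy): any operation that leaves the autodiff
# type — here, calling float() — produces a value with no link back to its
# input, so the derivative is lost, just like re-wrapping a spline result.
class Dual:
    """Minimal forward-mode dual number: a value plus its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

x = Dual(3.0, 1.0)                   # seed dx/dx = 1
tracked = x * x                      # derivative flows: d(x^2)/dx = 6
severed = Dual(float((x * x).val))   # like tensor(spline.ev(...)): fresh leaf
print(tracked.dot, severed.dot)      # 6.0 0.0
```

<p>With mygrad specifically, one would need an operation that knows the spline's analytic partial derivatives (which <code>RectBivariateSpline.ev</code> can evaluate via its <code>dx</code>/<code>dy</code> arguments) rather than re-wrapping the value after the fact.</p>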
|
<python><numpy><scipy><automatic-differentiation>
|
2024-09-22 20:00:39
| 1
| 405
|
Andreas Schuldei
|
79,012,349
| 595,305
|
Yield depth on a depth-first non-recursive tree walker?
|
<p>I'm trying to find a non-recursive "powerful/versatile" tree walker algorithm, ultimately yielding not just the node but the depth of the node, its parent and its sibling index, and able to use a breadth-first search (BFS) or depth-first search (DFS).</p>
<p>It is possible to combine depth-first and breadth-first like this, just yielding the node (NB assumes a node which may or may not have the key <code>CHILD_NODES</code>). Example using Python - "Python" tag added but obviously not specific:</p>
<pre><code>def get_walker(walk_by_depth=True):
    def walk(root):
        queue = [root]
        while len(queue) > 0:
            # the first item in the queue
            node_to_yield = queue.pop(0)
            yield node_to_yield
            if CHILD_NODES in node_to_yield:
                new_nodes = node_to_yield[CHILD_NODES]
                if walk_by_depth:
                    queue = new_nodes + queue
                else:
                    queue.extend(new_nodes)
    return walk
</code></pre>
<p>... and adding a small amount of code lets you yield a depth for a <strong>breadth-first search</strong> only:</p>
<pre><code>def get_walker(walk_by_depth=True):
def walk(root):
queue = [root]
depth = 0
depth_map_of_remaining_items = {0: 1}
while len(queue) > 0:
node_to_yield = queue.pop(0)
if depth_map_of_remaining_items[depth] == 0:
depth += 1
depth_map_of_remaining_items[depth] -= 1
yield node_to_yield, depth
if CHILD_NODES in node_to_yield:
depth += 1
new_nodes = node_to_yield[CHILD_NODES]
if walk_by_depth:
queue = new_nodes + queue
else:
queue.extend(new_nodes)
if depth not in depth_map_of_remaining_items:
depth_map_of_remaining_items[depth] = 0
depth_map_of_remaining_items[depth] += len(new_nodes)
depth -= 1
return walk
</code></pre>
<p>The above code doesn't in fact work with <code>walk_by_depth=True</code>: it doesn't raise an Exception, it just gets the depth wrong. Instinctively I've got a feeling that the code probably just needs a minimal tweak to yield (correct) <code>depth</code> on a DFS, but I've had no success so far.</p>
<p>The thing is, if I can find a way of yielding the depth with a DFS, I believe it will be a fairly easy step to yield, for both BFS and DFS, the parent node, by maintaining a "depth-to-last-node" map. If you can get the parent you can also get the sibling index, at the simplest by using <code>list</code>'s <code>index</code> method (though there may be far cleverer ways to get both the parent and the sibling index...).</p>
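<p>For comparison, here is a sketch of a different approach (not a fix to the counter logic above): carry the depth on the queue next to each node, so no bookkeeping map is needed and the same code works unchanged for both orders. <code>CHILD_NODES</code> is the same hypothetical key used above:</p>

```python
# Sketch: store (node, depth) pairs on the queue instead of tracking depth
# with a counter; extending the tuples to (node, depth, parent) would give
# the parent for free in the same way.
CHILD_NODES = "children"

def walk(root, depth_first=True):
    queue = [(root, 0)]
    while queue:
        node, depth = queue.pop(0)
        yield node, depth
        children = [(c, depth + 1) for c in node.get(CHILD_NODES, [])]
        queue = children + queue if depth_first else queue + children

tree = {"id": 0, CHILD_NODES: [{"id": 1, CHILD_NODES: [{"id": 2}]}, {"id": 3}]}
print([(n["id"], d) for n, d in walk(tree)])                     # DFS order
print([(n["id"], d) for n, d in walk(tree, depth_first=False)])  # BFS order
```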
|
<python><tree><depth-first-search><breadth-first-search>
|
2024-09-22 18:49:10
| 2
| 16,076
|
mike rodent
|
79,012,310
| 11,626,909
|
dataframe from BeautifulSoup object
|
<p>I want to create a DataFrame from a BeautifulSoup object:</p>
<pre><code>import pandas as pd
from requests import get
from bs4 import BeautifulSoup
import re
# Fetch the web page
url = 'https://carbondale.craigslist.org/search/apa#search=1~gallery~0~0'
response = get(url)  # link excludes posts with no pictures
page = response.text
# Parse the HTML content
soup = BeautifulSoup(page, 'html.parser')
# Information I need
list_url = []
title = []
location = []
price = []
# I run the following
list_url = [a['href'] for a in soup.select('a[href^="https"]')]
title = [x.text for x in soup.find_all(class_="title")]
location = [x.text for x in soup.find_all(class_="location")]
price = [x.text for x in soup.find_all(class_="price")]
</code></pre>
<p>But the problem I am facing is that for some classes (e.g., title or location), some elements are missing. So when I try to create a DataFrame, it raises an error because the lists are not all the same size (you can check each size with the <code>len()</code> function). What I actually want is to insert <code>None</code> for the missing elements in each column of the DataFrame.</p>
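<p>A sketch of the usual fix for misaligned columns: iterate per listing (one container element each) and pull every field from within that container, substituting <code>None</code> when a field is absent. The dicts below stand in for per-listing soup elements; with bs4 the inner lookups would be something like <code>card.select_one(".title")</code> per result card:</p>

```python
# Per-row extraction keeps all columns the same length: missing fields become
# None instead of silently shortening one of the lists.
listings = [
    {"title": "2BR apartment", "location": "Carbondale", "price": "$700"},
    {"title": "Studio", "price": "$450"},           # no location
    {"location": "Murphysboro", "price": "$600"},   # no title
]

rows = [
    {
        "title": card.get("title"),      # .get returns None when missing
        "location": card.get("location"),
        "price": card.get("price"),
    }
    for card in listings
]
print([row["location"] for row in rows])  # ['Carbondale', None, 'Murphysboro']
```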
|
<python><beautifulsoup>
|
2024-09-22 18:26:38
| 1
| 401
|
Sharif
|
79,012,165
| 4,710,409
|
Google generative ai: gemini not working on android: 403 error
|
<p>I send my text to Google Gemini using this code in Python:</p>
<pre><code>import google.generativeai as genai
# Create your views here.
# add here to your generated API key
genai.configure(api_key="*****")
def ask_question(request):
if request.method == "POST":
text = request.POST.get("text")
try:
model = genai.GenerativeModel("gemini-1.5-flash")
except Exception as e:
print(e)
chat = model.start_chat()
try:
response = chat.send_message(text)
except Exception as e:
print(e)
user = request.user
ChatBot.objects.create(text_input=text, gemini_output=response.text, user=user)
# Extract necessary data from response
response_data = {
"text": response.text, # Assuming response.text contains the relevant response data
# Add other relevant data from response if needed
}
return JsonResponse({"data": response_data})
else:
return JsonResponse({"data": 'I apologise but there is an issue'})
</code></pre>
<p>It works perfectly on desktop, but on android phone it throws "403 PERMISSION_DENIED":</p>
<p><a href="https://ai.google.dev/gemini-api/docs/troubleshooting?lang=python" rel="nofollow noreferrer">https://ai.google.dev/gemini-api/docs/troubleshooting?lang=python</a></p>
<p>I tried updating <code>generativeai</code> via pip install, but it didn't work.
Any suggestions?</p>
|
<python><google-gemini><google-generativeai>
|
2024-09-22 17:07:04
| 1
| 575
|
Mohammed Baashar
|
79,011,995
| 2,382,483
|
Way to perform a numpy "argany" or "argfirst"?
|
<p>I'd like to reduce a multidimensional array along one axis by finding the <em>first</em> value in the other dims that is truthy/matches some condition, or a default value if no matching elements can be found. So far I'm trying to do this for a 3D array with some simple looping as follows:</p>
<pre><code>def first(arr, condition, default):
out = default.copy()
for u in range(arr.shape[1]):
for e in range(arr.shape[2]):
(idx,) = np.nonzero(condition[:, u, e])
if len(idx):
out[u, e] = arr[idx[0], u, e]
return out
</code></pre>
<p>This is a bit ugly, not efficient, not vectorized, and not easily generalized to arrays of arbitrary dims though. Does numpy have a nicer, built-in way to achieve this?</p>
<p><strong>EDIT:</strong><br />
Here's an example case:</p>
<pre><code>test_arr = np.array(
[
[np.nan, 1, 2, 3, 4, np.nan, 6, 7, 8, 9],
[np.nan, np.nan, 3, 4, 5, 6, 7, 8, 9, np.nan],
[2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
np.full((10,), np.nan),
]
)
test_default = np.array([5, 5, 5, 5])
test_cond = ~np.isnan(test_arr)
</code></pre>
<p>Expected Output:</p>
<pre><code>first(test_arr, test_cond, test_default)
# outputs [1.0, 3.0, 2.0, 5.0]
</code></pre>
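<p>One vectorized formulation, offered as a sketch rather than a definitive answer: <code>argmax</code> over a boolean array returns the index of the first <code>True</code>, and <code>any</code> guards the slices where nothing matches so the default is used instead. It reduces along a chosen axis, which matches the 2-D example above with <code>axis=1</code>:</p>

```python
# Sketch: vectorized "first match along an axis" using argmax on booleans,
# with `any` distinguishing "first True at index 0" from "no True at all".
import numpy as np

def first_along_axis(arr, condition, default, axis=0):
    any_match = condition.any(axis=axis)
    idx = np.expand_dims(condition.argmax(axis=axis), axis)
    picked = np.take_along_axis(arr, idx, axis=axis).squeeze(axis)
    return np.where(any_match, picked, default)

test_arr = np.array(
    [
        [np.nan, 1, 2, 3, 4, np.nan, 6, 7, 8, 9],
        [np.nan, np.nan, 3, 4, 5, 6, 7, 8, 9, np.nan],
        [2, 3, 4, 5, 6, 7, 8, 9, 10, 11],
        np.full((10,), np.nan),
    ]
)
test_default = np.array([5, 5, 5, 5])
test_cond = ~np.isnan(test_arr)

out = first_along_axis(test_arr, test_cond, test_default, axis=1)
print(out)  # [1. 3. 2. 5.]
```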
|
<python><numpy>
|
2024-09-22 15:34:31
| 2
| 3,557
|
Rob Allsopp
|