| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 – 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
78,862,464
| 2,539,916
|
Django complex relation with joins
|
<p>I have the following models:</p>
<pre class="lang-py prettyprint-override"><code># on_delete and max_length added: both are required in current Django
class Position(BaseModel):
    name = models.CharField(max_length=255)


class Metric(BaseModel):
    name = models.CharField(max_length=255)


class PositionKPI(BaseModel):
    position = models.ForeignKey(Position, on_delete=models.CASCADE)
    metric = models.ForeignKey(Metric, on_delete=models.CASCADE)
    expectation = models.FloatField()


class Employee(BaseModel):
    position = models.ForeignKey(Position, on_delete=models.CASCADE)


class EmployeeKPI(BaseModel):
    employee = models.ForeignKey(Employee, on_delete=models.CASCADE)
    metric = models.ForeignKey(Metric, on_delete=models.CASCADE)
    value = models.FloatField()

    def kpi(self):
        return PositionKPI.objects.filter(
            position=self.employee.position, metric=self.metric
        ).first()
</code></pre>
<p>I believe it's possible to rewrite the <code>kpi</code> method as a relation.
In SQL it would look something like this:</p>
<pre class="lang-sql prettyprint-override"><code>select
    positionkpi.*
from employeekpi
join employee on employee.id = employeekpi.employee_id
join positionkpi on positionkpi.position_id = employee.position_id
    and positionkpi.metric_id = employeekpi.metric_id
</code></pre>
<p>How can I express this as a relation or annotation on the queryset instead of a per-instance method?</p>
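In the ORM this kind of lookup is typically expressed with a correlated subquery rather than a literal join. As a quick sanity check of the intended SQL (matching <code>positionkpi</code> on <code>position_id</code> and <code>metric_id</code>), here is a minimal sketch against an in-memory SQLite schema; the table and column names are simplified assumptions, not the exact ones Django would generate:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE position    (id INTEGER PRIMARY KEY);
CREATE TABLE metric      (id INTEGER PRIMARY KEY);
CREATE TABLE positionkpi (id INTEGER PRIMARY KEY, position_id INT, metric_id INT, expectation REAL);
CREATE TABLE employee    (id INTEGER PRIMARY KEY, position_id INT);
CREATE TABLE employeekpi (id INTEGER PRIMARY KEY, employee_id INT, metric_id INT, value REAL);

INSERT INTO position    VALUES (1);
INSERT INTO metric      VALUES (10), (20);
INSERT INTO positionkpi VALUES (100, 1, 10, 5.0), (101, 1, 20, 7.0);
INSERT INTO employee    VALUES (1000, 1);
INSERT INTO employeekpi VALUES (5000, 1000, 10, 4.5), (5001, 1000, 20, 8.0);
""")

# Join PositionKPI to EmployeeKPI via the employee's position and the
# shared metric, pairing each EmployeeKPI with its expectation.
rows = con.execute("""
SELECT employeekpi.id, positionkpi.expectation
FROM employeekpi
JOIN employee    ON employee.id = employeekpi.employee_id
JOIN positionkpi ON positionkpi.position_id = employee.position_id
                AND positionkpi.metric_id   = employeekpi.metric_id
""").fetchall()
```

In Django itself the same per-row lookup can be annotated with <code>Subquery(PositionKPI.objects.filter(position=OuterRef("employee__position"), metric=OuterRef("metric")).values("expectation")[:1])</code>; this is a sketch based on the field names in the models above.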
|
<python><django><django-models>
|
2024-08-12 15:26:31
| 2
| 1,584
|
Alex Tonkonozhenko
|
78,862,382
| 1,627,234
|
Trouble migrating to numpy 2: numpy.core.multiarray failed to import
|
<p>I maintain <a href="https://github.com/pavlin-policar/openTSNE" rel="nofollow noreferrer">openTSNE</a> and would like to support numpy2. The package depends on numpy and uses Cython, so the build process is a bit more involved.</p>
<p>When building the package on Azure's CI servers, I previously depended on <code>oldest-supported-numpy</code> for maximum numpy backwards compatibility, but with numpy2, my understanding is that I don't need to do this anymore. Therefore, my <code>pyproject.toml</code> changed to</p>
<pre><code>[build-system]
requires = ["setuptools", "wheel", "cython", "numpy"] # was `oldest-supported-numpy` before
build-backend = "setuptools.build_meta"
</code></pre>
<p>As far as I can tell, this works fine.</p>
<p>However, I am now getting a new error on my CI servers. You can find the full logs <a href="https://dev.azure.com/pavlingp/openTSNE/_build/results?buildId=656&view=logs&j=ff852a69-b487-59bf-519f-d827f73aae6a&t=d932dc19-0cf0-543b-868f-a9050dbef4d3" rel="nofollow noreferrer">here</a>.</p>
<pre><code>ImportError while importing test module '/Users/runner/work/1/s/tests/test_utils.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/test_utils.py:4: in <module>
from openTSNE.utils import clip_point_to_disc
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openTSNE/__init__.py:1: in <module>
from .tsne import TSNE, TSNEEmbedding, PartialTSNEEmbedding, OptimizationInterrupt
/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openTSNE/tsne.py:11: in <module>
from openTSNE import _tsne
openTSNE/_tsne.pyx:1: in init openTSNE._tsne
???
E ImportError: numpy.core.multiarray failed to import (auto-generated because you didn't call 'numpy.import_array()' after cimporting numpy; use '<void>numpy._import_array' to disable if you are certain you don't need it).
</code></pre>
<p>The numpy docs state that I should now explicitly call <code>np.import_array()</code> in my cython files (<a href="https://numpy.org/devdocs/numpy_2_0_migration_guide.html#c-api-changes" rel="nofollow noreferrer">https://numpy.org/devdocs/numpy_2_0_migration_guide.html#c-api-changes</a>) and the cython docs provide a simple <a href="https://cython.readthedocs.io/en/latest/src/tutorial/numpy.html" rel="nofollow noreferrer">example</a>. However, I am still running into this error. I have tried to remedy this by adding the import at the beginning of all my <code>.pyx</code> and <code>.pxd</code> files, as indicated by the cython example, e.g.,</p>
<pre class="lang-py prettyprint-override"><code>cimport numpy as cnp
cnp.import_array() # <--- HERE
import numpy as np
...
# function definitions
</code></pre>
<p>This didn't resolve the issue, so I also added these calls into every <code>cpdef</code> function in the file, e.g.,</p>
<pre class="lang-py prettyprint-override"><code>cpdef double[:, ::1] compute_gaussian_perplexity(
double[:, :] distances,
double[:] desired_perplexities,
double perplexity_tol=1e-8,
Py_ssize_t max_iter=200,
Py_ssize_t num_threads=1,
):
cnp.import_array() # <--- HERE
cdef:
Py_ssize_t n_samples = distances.shape[0]
Py_ssize_t n_scales = desired_perplexities.shape[0]
...
</code></pre>
<p>but nothing seems to resolve the issue.</p>
<p>I would note that this only happens in my <code>Release</code> pipeline and not in my <code>Test</code> pipeline. In my <code>Test</code> pipeline, I essentially run only two commands:</p>
<pre><code>pip install .
pytest -v
</code></pre>
<p>My <code>Release</code> pipeline, however, is more involved: it first builds the wheels, installs the produced wheel, and then runs the test suite against that wheel. The error only occurs in this second <code>Release</code> pipeline. The sequence of commands is roughly</p>
<pre><code>python -m cibuildwheel --output-dir dist/
python -m pip install --force-reinstall --find-links dist openTSNE
pytest # <--- FAILS
</code></pre>
<p>I am somewhat at a loss as to what else to try at this point, so any help would be appreciated.</p>
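According to the NumPy 2 migration notes, extensions compiled against numpy 2.x are expected to also run under numpy 1.x at runtime (the reverse is not true), so one thing to try, offered as an assumption to test rather than a confirmed fix, is pinning the build-time requirement explicitly instead of relying on whatever the build frontend resolves:

```toml
[build-system]
# Pin the *build-time* numpy to >= 2.0: wheels built this way are
# expected to import cleanly under both numpy 1.x and 2.x at runtime.
requires = ["setuptools", "wheel", "cython", "numpy>=2.0"]
build-backend = "setuptools.build_meta"
```

Since the failure appears only in the cibuildwheel path, it is also worth confirming which numpy each built wheel was actually compiled against (for example by printing <code>numpy.__version__</code> inside the build environment): a mismatch between the compile-time and test-time numpy produces exactly this <code>numpy.core.multiarray failed to import</code> error. The module-level <code>cnp.import_array()</code> calls belong in each <code>.pyx</code> file (not the <code>.pxd</code> files); the per-function copies shouldn't be needed.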
|
<python><numpy><numpy-2.x>
|
2024-08-12 15:06:11
| 1
| 5,558
|
Pavlin
|
78,862,245
| 10,090,697
|
Discrepancy between Python and R calculation of a robust covariance matrix
|
<p>I am currently developing a statistical package in Python using R code as a reference, and I've noticed different results between the two programs when calculating a robust covariance matrix in Python versus R.</p>
<p>Using the following code with equivalent input data (<code>x</code>) in Python:</p>
<pre><code>import numpy as np
from sklearn.covariance import MinCovDet

# dummy data
x = np.array([
[0.5488135, 0.71518937, 0.60276338, 0.54488318],
[0.4236548, 0.64589411, 0.43758721, 0.891773],
[0.96366276, 0.38344152, 0.79172504, 0.52889492],
[0.56804456, 0.92559664, 0.07103606, 0.0871293],
[0.0202184, 0.83261985, 0.77815675, 0.87001215],
[0.97861834, 0.79915856, 0.46147936, 0.78052918],
[0.11827443, 0.63992102, 0.14335329, 0.94466892],
[0.52184832, 0.41466194, 0.26455561, 0.77423369],
[0.45615033, 0.56843395, 0.0187898, 0.6176355],
[0.61209572, 0.616934, 0.94374808, 0.6818203],
[0.3595079, 0.43703195, 0.6976312, 0.06022547],
[0.66676672, 0.67063787, 0.21038256, 0.1289263],
[0.31542835, 0.36371077, 0.57019677, 0.43860151],
[0.98837384, 0.10204481, 0.20887676, 0.16130952],
[0.65310833, 0.2532916, 0.46631077, 0.24442559],
[0.15896958, 0.11037514, 0.65632959, 0.13818295],
[0.19658236, 0.36872517, 0.82099323, 0.09710128],
[0.83794491, 0.09609841, 0.97645947, 0.4686512],
[0.97676109, 0.60484552, 0.73926358, 0.03918779],
[0.28280696, 0.12019656, 0.2961402, 0.11872772]
])
# fit mcd
mcd = MinCovDet().fit(x)
# define robust covariance variable
x_cov = mcd.covariance_
</code></pre>
<p>and in R:</p>
<pre><code>library(robustbase)  # provides covMcd

# dummy data
x <- matrix(c(
0.5488135, 0.71518937, 0.60276338, 0.54488318,
0.4236548, 0.64589411, 0.43758721, 0.891773,
0.96366276, 0.38344152, 0.79172504, 0.52889492,
0.56804456, 0.92559664, 0.07103606, 0.0871293,
0.0202184, 0.83261985, 0.77815675, 0.87001215,
0.97861834, 0.79915856, 0.46147936, 0.78052918,
0.11827443, 0.63992102, 0.14335329, 0.94466892,
0.52184832, 0.41466194, 0.26455561, 0.77423369,
0.45615033, 0.56843395, 0.0187898, 0.6176355,
0.61209572, 0.616934, 0.94374808, 0.6818203,
0.3595079, 0.43703195, 0.6976312, 0.06022547,
0.66676672, 0.67063787, 0.21038256, 0.1289263,
0.31542835, 0.36371077, 0.57019677, 0.43860151,
0.98837384, 0.10204481, 0.20887676, 0.16130952,
0.65310833, 0.2532916, 0.46631077, 0.24442559,
0.15896958, 0.11037514, 0.65632959, 0.13818295,
0.19658236, 0.36872517, 0.82099323, 0.09710128,
0.83794491, 0.09609841, 0.97645947, 0.4686512,
0.97676109, 0.60484552, 0.73926358, 0.03918779,
0.28280696, 0.12019656, 0.2961402, 0.11872772
), nrow = 20, ncol = 4, byrow = TRUE)
# fit mcd
x.mcd <- covMcd(x)
# define robust covariance variable
x_cov <- x.mcd$cov
</code></pre>
<p>I get very different values for <code>x_cov</code>. I am aware that there is stochasticity involved in each of these algorithms, but the differences are way too large to be attributed to that.</p>
<p>For example, in Python:</p>
<pre><code>x_cov = array([[ 0.06669275, 0.01987514, 0.01294049, 0.0235569 ],
[ 0.01987514, 0.0421388 , -0.00541365, 0.0462657 ],
[ 0.01294049, -0.00541365, 0.06601437, -0.02931285],
[ 0.0235569 , 0.0462657 , -0.02931285, 0.08961389]])
</code></pre>
<p>and in R:</p>
<pre><code>x_cov = [,1] [,2] [,3] [,4]
[1,] 0.15762177 0.01044705 -0.04043184 -0.01187968
[2,] 0.01044705 0.09957141 0.03036312 0.08703770
[3,] -0.04043184 0.03036312 0.06045952 -0.01781794
[4,] -0.01187968 0.08703770 -0.01781794 0.16634435
</code></pre>
<p>Perhaps I'm missing something here, but I can't seem to figure out the discrepancy. Have you run into something similar or do you perhaps have any suggestions for me? Thanks!</p>
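One plausible source of the gap, stated as a hypothesis rather than a diagnosis: the two implementations default to different subset sizes. scikit-learn's <code>MinCovDet</code> defaults <code>support_fraction</code> to roughly <code>(n + p + 1) / (2n)</code>, while R's <code>covMcd</code> defaults <code>alpha</code> to 0.5, and each applies its own reweighting and consistency-correction steps. Aligning the fraction (and fixing seeds) on both sides is the first thing to try; a sketch of the Python side with illustrative values:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
x = rng.random((20, 4))

# Align the subset size with R's covMcd(x, alpha = 0.75) as an experiment;
# the exact h each implementation uses may still differ slightly.
mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(x)
cov = mcd.covariance_
```

On the R side the comparable call would be <code>covMcd(x, alpha = 0.75)</code> with a fixed seed; any remaining differences come from the reweighting and consistency-correction details of each implementation.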
|
<python><r><statistics>
|
2024-08-12 14:36:53
| 2
| 311
|
Zack Eriksen
|
78,862,065
| 5,482,999
|
How to serialize a ComputeRoutesResponse
|
<p>When computing routes with the Google Cloud web endpoint, <code>https://routes.googleapis.com/directions/v2:computeRoutes</code>, via Python <code>requests</code>, I get the response as JSON.
This is how I build the request in Python; the JSON response makes it easy to work with the data received.</p>
<pre><code>headers = {
'Content-Type': 'application/json',
'X-Goog-Api-Key': API_KEY,
'X-Goog-FieldMask': 'routes.duration,routes.distanceMeters,routes.legs,routes.optimizedIntermediateWaypointIndex,geocodingResults',
}
data = {
'origin': {
'placeId': ORIGIN_ID
},
'destination': {
'placeId': DESTINATION_ID
},
'intermediates': intermediates,
'travelMode': 'DRIVE',
'routingPreference': 'TRAFFIC_AWARE',
'departureTime': '2024-07-06T10:00:00Z',
'computeAlternativeRoutes': False,
'optimizeWaypointOrder': True,
'units': 'METRIC',
}
response = requests.post('https://routes.googleapis.com/directions/v2:computeRoutes', headers=headers, json=data, timeout=25)
</code></pre>
<p>I want to use the Python client library directly, but the response from the API comes in the form of a protobuf message, and when trying to save the response as JSON I get the error:</p>
<pre><code>TypeError: Object of type ComputeRoutesResponse is not JSON serializable
</code></pre>
<p>This is how I build the request in python:</p>
<pre><code>routes_client = RoutesClient()
route_response = routes_client.compute_routes(
ComputeRoutesRequest(
destination=Waypoint(
address=DESTINATION_PLUS_CODE
),
origin=Waypoint(
address=ORIGIN_PLUS_CODE
),
intermediates=intermediates,
travel_mode='DRIVE',
region_code='US',
routing_preference='TRAFFIC_AWARE',
departure_time=datetime.now(tz=timezone('America/Los_Angeles'))+timedelta(days=1),
compute_alternative_routes=False,
units='METRIC',
optimize_waypoint_order=True,
),
metadata=[("x-goog-fieldmask", ','.join(field_mask))]
)
</code></pre>
<p>I tried adding <code>("content-type", 'application/json')</code> to the metadata but that does not work either.</p>
<p>What could I do to serialize the result so I can save it for later processing?</p>
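Proto-plus message classes (which the Routes client returns) expose a <code>to_json</code> classmethod, so <code>ComputeRoutesResponse.to_json(route_response)</code> should produce a JSON string; under the hood this uses protobuf's <code>json_format</code> machinery. A minimal, library-agnostic sketch of that mechanism, using the standard <code>Struct</code> type as a stand-in for the real response:

```python
import json
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct

# A small protobuf message standing in for a real API response.
msg = Struct()
msg.update({"distanceMeters": 1234, "duration": "165s"})

# MessageToJson serializes any protobuf message to a JSON string...
payload = json_format.MessageToJson(msg)
data = json.loads(payload)

# ...and Parse restores it, so the result round-trips for later processing.
restored = json_format.Parse(payload, Struct())
```

For the actual response, <code>ComputeRoutesResponse.to_json(route_response)</code> and the matching <code>from_json</code> should round-trip the same way; the exact import path of the class depends on the google-maps-routing version, so treat that as an assumption to verify.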
|
<python><google-maps><google-cloud-platform><google-routes-api>
|
2024-08-12 13:59:28
| 1
| 1,924
|
Guanaco Devs
|
78,861,984
| 11,328,614
|
Uninstall all packages from specific group
|
<p>In my <code>pyproject.toml</code> I have configured a "normal" and a "dev" dependency group, like so:</p>
<pre class="lang-toml prettyprint-override"><code>[tool.poetry]
name = "project_name"
version = "0.2.0"
description = "Project description"
authors = ["me"]
license = "no one is allowed to use this code except myself ;)"
readme = "README.md"
package-mode = false
[tool.poetry.dependencies]
python = "^3.10"
exceptiongroup = ">=1.2,<2.0"
tabulate = ">=0.9,<1.0"
trio = ">=0.25,<1.0"
rich = ">=13.7,<14.0"
[tool.poetry.group.dev.dependencies]
pytest = "^8.2.1"
asynctest = "^0.13.0"
pytest-asyncio = "^0.23.7"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>In my build script (which is actually a Docker script in my case, but it could be any other build script) I want to first install all dependencies of both groups:</p>
<pre class="lang-bash prettyprint-override"><code>poetry install --with dev
</code></pre>
<p>then run the unit tests of my project and afterwards uninstall the <code>dev</code> dependencies again:</p>
<pre class="lang-bash prettyprint-override"><code># ! PSEUDOCODE !
poetry remove * --group dev
</code></pre>
<p>However, I'm facing the problem that <code>poetry</code> seems to disallow uninstalling all dependencies of a group at once. There is the <code>poetry remove</code> command, but it needs a <code>package</code> to be specified.</p>
<p>Is there some option I am overlooking which lets me obtain a wildcard-like effect?</p>
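Poetry has no wildcard for <code>remove</code>, but <code>install</code> can synchronize the environment against a subset of groups, which uninstalls everything that subset doesn't need. A sketch for the build script, assuming Poetry >= 1.5 (the exact flags vary by version):

```shell
poetry install --with dev    # deps + dev deps, so the tests can run
pytest
# Re-resolve without the dev group; --sync removes packages that are
# no longer required by the remaining groups.
poetry install --without dev --sync
# Poetry 2.x deprecates --sync in favour of: poetry sync --without dev
```

This keeps <code>pyproject.toml</code> unchanged, unlike <code>poetry remove</code>, which would actually edit the dev group out of the file.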
<p>For completeness, here is my Dockerfile (Disclaimer: This file is "work in progress"):</p>
<pre><code>ARG PY_VERSION=3.10-slim
FROM python:${PY_VERSION} AS pyimg
WORKDIR /usr/src/app
ENV APP_PORT="8080"
COPY src/*.py ./
COPY pyproject.toml ./
COPY poetry.lock ./
COPY README.md ./
RUN apt update -y
RUN pip install poetry
RUN poetry install --with dev # Install dev dependencies so that unittests can be run inside the container
# Run unittests, here
# RUN poetry remove --group dev
# RUN pip uninstall poetry -y
EXPOSE ${APP_PORT}
RUN touch config.ini
# CMD python${PY_VERSION} app.py
</code></pre>
|
<python><python-poetry>
|
2024-08-12 13:38:51
| 0
| 1,132
|
Wör Du Schnaffzig
|
78,861,861
| 3,888,816
|
Django annotation returns 1 for each item
|
<p>I have 2 almost identical models.</p>
<pre><code>class FavoriteBook(models.Model):
class Meta:
# Model for books added as favorites
verbose_name = "Favorite Book"
unique_together = ['user', 'book']
user = models.ForeignKey(User, null=False, blank=False, on_delete=models.CASCADE, verbose_name="User", related_name="favorite_books")
book = models.ForeignKey(Book, null=False, blank=False, on_delete=models.CASCADE, verbose_name="Book")
def __str__(self):
return "{user} added {book} to favorites".format(user=self.user, book=self.book)
class SpoilerVote(models.Model):
class Meta:
# Model for votes of book spoilers
verbose_name = "Spoiler Vote"
unique_together = ['spoiler', 'user']
spoiler = models.ForeignKey(Spoiler, null=False, blank=False, on_delete=models.CASCADE, verbose_name="Spoiler")
user = models.ForeignKey(User, null=False, blank=False, on_delete=models.CASCADE, verbose_name="User", related_name="bookspoiler_votes")
choices = (
('U', 'UP'),
('D', 'DOWN'),
)
vote = models.CharField(max_length=1, choices=choices, null=False, blank=False, verbose_name="Vote")
def __str__(self):
return "{user} liked {book}'s spoiler".format(user=self.user, book=self.spoiler.book.book_title)
</code></pre>
<p>Following query works fine for FavoriteBook model.</p>
<pre><code>most_popular = FavoriteBook.objects.values("book", "book__book_title", "book__year").annotate(count=Count("book")).order_by("-count")[:20]
</code></pre>
<p>But this one does not work for the SpoilerVote model: it returns a count of 1 for each item.</p>
<pre><code>SpoilerVote.objects.values("spoiler", "user", "vote").annotate(count=Count("spoiler")).order_by("-count")
</code></pre>
<p>What am I missing? There is no difference whatsoever.</p>
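A likely explanation, hedged since it depends on the intended grouping: in the ORM, the <code>values()</code> call before <code>annotate()</code> defines the GROUP BY columns. Grouping by <code>("spoiler", "user", "vote")</code> groups by all three, and because <code>("spoiler", "user")</code> is <code>unique_together</code>, every such group contains exactly one row. The same effect in plain Python:

```python
from collections import Counter

# (spoiler_id, user_id, vote) rows; the (spoiler, user) pairs are unique,
# mirroring the unique_together constraint on SpoilerVote.
rows = [(1, 10, "U"), (1, 11, "U"), (1, 12, "D"), (2, 10, "U")]

per_triplet = Counter((s, u, v) for s, u, v in rows)  # every count is 1
per_spoiler = Counter(s for s, _, _ in rows)          # real totals
```

So dropping <code>user</code> and <code>vote</code> from <code>values()</code>, e.g. <code>SpoilerVote.objects.values("spoiler").annotate(count=Count("spoiler"))</code>, should give the expected totals.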
|
<python><django><annotations>
|
2024-08-12 13:08:58
| 1
| 982
|
orhanodabasi
|
78,861,658
| 10,285,705
|
Misplaced items on Frame and Root
|
<p>I am building a tkinter application with 2 frames inside my root. Here is my code and what I obtain:</p>
<pre><code>from tkinter import*
from tkinter import ttk
root = Tk()
root.geometry('880x600')
root.config(bg="blue")
root.resizable(0,0)
################ RIGHT FRAME ########################################
right_frame=Frame(root, width= 230, height=500, borderwidth=5, relief=GROOVE)
right_frame.grid(row=0, column=1, padx=10, pady=5)
################ RIGHT FRAME ########################################
left_frame=Frame(root, width=600, height=500, borderwidth= 5, relief= GROOVE)
left_frame.grid(row=0, column=0, padx=10, pady=5)
sending_To= Label(root, text= 'To: ')
sending_To.grid(row=0, column=0)
sending_Textbox=Entry(root, width=15)
sending_Textbox.grid(row=0,column=0)
copy= Label(root, text= 'Cc: ')
copy.grid(row=1, column=0)
copy_Textbox=Entry(root, width=15)
copy_Textbox.grid(row=1, column=0)
subject= Label(root, text= 'Subject: ')
subject.grid(row=2, column=0)
subject_Textbox=Entry(root, width=15)
subject_Textbox.grid(row=2, column=0)
root.mainloop()
</code></pre>
<p>Here is the result:
<a href="https://i.sstatic.net/oJMcbDhA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oJMcbDhA.png" alt="enter image description here" /></a></p>
<p>This is close to my intended result. However, the labels and the 3 entries do not appear in the top left of my left frame. When I change the parent of my labels and put them all in <code>left_frame</code>, like this:</p>
<pre><code>from tkinter import *
from tkinter import ttk
root = Tk()
root.geometry('880x600')
root.config(bg="blue")
root.resizable(0,0)
################ RIGHT FRAME ########################################
right_frame=Frame(root, width= 230, height=500, borderwidth=5, relief=GROOVE)
right_frame.grid(row=0, column=1, padx=10, pady=5)
################ RIGHT FRAME ########################################
left_frame=Frame(root, width=600, height=500, borderwidth= 5, relief= GROOVE)
left_frame.grid(row=0, column=0, padx=10, pady=5)
sending_To= Label(left_frame, text= 'To: ')
sending_To.grid(row=0, column=0)
sending_Textbox=Entry(root, width=15)
sending_Textbox.grid(row=0,column=0)
copy= Label(left_frame, text= 'Cc: ')
copy.grid(row=1, column=0)
copy_Textbox=Entry(root, width=15)
copy_Textbox.grid(row=1, column=0)
subject= Label(left_frame, text= 'Subject: ')
subject.grid(row=2, column=0)
subject_Textbox=Entry(root, width=15)
subject_Textbox.grid(row=2, column=0)
root.mainloop()
</code></pre>
<p>This is what I get:</p>
<p><a href="https://i.sstatic.net/V0unB48t.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V0unB48t.png" alt="enter image description here" /></a></p>
<p>How would I put my labels next to my entries in my left frame without disrupting the whole layout? I tried many things over the weekend; it seems like both of my frames are in the same row.</p>
|
<python><tkinter>
|
2024-08-12 12:24:03
| 1
| 481
|
Camue
|
78,861,619
| 51,816
|
How to detect audio retakes using Python?
|
<p>I have a lot of audio recordings for lectures where I say the same thing multiple times; mostly it's incomplete statements like:</p>
<p>"this is the part" (and then retrying)</p>
<p>"this is the part where" (and then retrying)</p>
<p>"this is the part where we will explore the theory"</p>
<p>Visually, in any audio or video editor, I can spot these because the waveforms look very similar, i.e. the same pattern of highs and lows.</p>
<p>So I am trying to use Python to do the same, so that I only keep the last retake but silence the older ones, so the audio length doesn't change. Right now my approach is like this but it gives me the exact same audio as the input.</p>
<pre><code>import librosa
import numpy as np
import soundfile as sf
import os
# Load the audio file
file_path = r'C:\test.wav'
y, sr = librosa.load(file_path, sr=None)
# Parameters
chunk_duration = 1 # seconds
overlap = 0.75 # 75% overlap for better similarity matching
chunk_length = int(chunk_duration * sr)
step_length = int(chunk_length * (1 - overlap))
# List to store the indices of the last occurrence of each chunk
last_occurrence_end = 0
# Create a list to store the final audio chunks
final_audio = []
# Loop over the audio in chunks
i = 0
while i < len(y) - chunk_length:
chunk = y[i:i + chunk_length]
next_chunk_start = i + step_length
# If this is the last chunk, just keep it
if next_chunk_start + chunk_length > len(y):
last_occurrence_end = len(y)
break
next_chunk = y[next_chunk_start:next_chunk_start + chunk_length]
# Compute simple Euclidean distance between the two chunks
distance = np.linalg.norm(chunk - next_chunk)
# Set a threshold for similarity
if distance > 1000: # Adjust this threshold as needed
if i > last_occurrence_end:
final_audio.append(y[last_occurrence_end:i])
last_occurrence_end = i + chunk_length
i = next_chunk_start
# Append the last segment after the loop
if last_occurrence_end < len(y):
final_audio.append(y[last_occurrence_end:])
# Check if any chunks were added to final_audio
if final_audio:
# Concatenate all the kept chunks to form the final trimmed audio
final_audio = np.concatenate(final_audio)
else:
# If no chunks were kept, return the original audio
final_audio = y
# Define the new file path with "_clean" suffix
new_file_path = os.path.splitext(file_path)[0] + '_clean.wav'
# Export the trimmed audio
sf.write(new_file_path, final_audio, sr)
print(f"Cleaned audio saved as: {new_file_path}")
</code></pre>
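Comparing only adjacent chunks with a raw Euclidean distance rarely fires, because retakes are separated by pauses and differ in gain, and a threshold of 1000 is far above typical float-audio distances, so nothing gets cut. A more promising direction is to compare every chunk pair with a normalized similarity (cosine here; MFCC features would likely work better still) and silence the earlier member of highly similar pairs. A small self-contained sketch on synthetic audio, with illustrative parameters:

```python
import numpy as np

def chunk_cosine_similarity(y, sr, chunk_s=0.5):
    """Split y into non-overlapping chunks and return the pairwise
    cosine-similarity matrix between chunks."""
    n = int(chunk_s * sr)
    chunks = [y[i:i + n] for i in range(0, len(y) - n + 1, n)]
    m = len(chunks)
    sim = np.zeros((m, m))
    for a in range(m):
        for b in range(m):
            denom = np.linalg.norm(chunks[a]) * np.linalg.norm(chunks[b])
            sim[a, b] = float(chunks[a] @ chunks[b]) / denom if denom else 0.0
    return sim

# Synthetic "retake": the same 0.5 s tone appears at chunk 0 and chunk 2,
# separated by a chunk of low-level noise.
sr = 1000
t = np.arange(int(0.5 * sr)) / sr
tone = np.sin(2 * np.pi * 50 * t)
rng = np.random.default_rng(0)
y = np.concatenate([tone, 0.1 * rng.standard_normal(len(t)), tone])

sim = chunk_cosine_similarity(y, sr)
```

To silence earlier retakes, zero out (rather than delete) the earlier chunk of each pair above a similarity threshold; that keeps the total length unchanged, as required.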
|
<python><audio><signal-processing><speech><librosa>
|
2024-08-12 12:14:42
| 1
| 333,709
|
Joan Venge
|
78,861,589
| 2,071,807
|
Is it possible to only mention arguments which change between overloads in Python?
|
<p>I have a function whose return type varies based on the value of one of its arguments:</p>
<pre class="lang-py prettyprint-override"><code>class ReturnType(Enum):
NONE = 1
SCALAR = 2
VECTOR = 3
@overload
def perform_query(
sql: str, return_type: Literal[ReturnType.NONE], log_query: bool = ...
) -> None: ...
@overload
def perform_query(
sql: str, return_type: Literal[ReturnType.SCALAR], log_query: bool = ...
) -> int | float | bool | str: ...
@overload
def perform_query(
sql: str, return_type: Literal[ReturnType.VECTOR], log_query: bool = ...
) -> list[int | float | bool | str]: ...
def perform_query(
sql: str, return_type: ReturnType, log_query: bool = False
):
pass
</code></pre>
<p>This is fine, but there's a lot of boilerplate: only one parameter changes between overloads, but every other parameter must be written out identically each time.</p>
<p>This creates redundancy and isn't very DRY. (Imagine if I wanted to rename an input parameter, or if my function had many input arguments.)</p>
<p>Is there a way to avoid restating the arguments which don't change?</p>
|
<python><python-typing>
|
2024-08-12 12:08:43
| 0
| 79,775
|
LondonRob
|
78,861,350
| 1,894,388
|
Pandas Unable to Find NaN when 0 is divided by 0
|
<p>I have dataframe like this:</p>
<pre><code>import numpy as np
import pandas as pd

df_challenge = pd.DataFrame({'x': [1, pd.NA, 6, 9, pd.NA, 0, 9, 10, 0, 9, pd.NA, 0],
'y': [0, 7.2, pd.NA, 10, 0, 1, 9.2, 10.65, pd.NA, 9, pd.NA, 0],
'y_copy': [0, 7.2, np.nan, 10, 0, 1, 9.2, 10.65, np.nan, 9, np.nan,0]})
df_challenge = df_challenge.convert_dtypes()
</code></pre>
<p>I forcefully changed the type of one of the columns</p>
<pre><code>df_challenge.y_copy = df_challenge.y_copy.astype('float')
</code></pre>
<p>I now create two variables using below code:</p>
<pre><code>df_challenge = df_challenge.assign(z = df_challenge.x/df_challenge.y)
df_challenge = df_challenge.assign(z1 = df_challenge.x.astype('float')/df_challenge.y_copy)
</code></pre>
<p>Now, if I try the <code>.isnull()</code> or <code>.isna()</code> method of the series, it doesn't show the correct result for column <code>z</code>.</p>
<p>The below code gives these results:</p>
<pre><code>df_challenge.z.isna().sum() # 5 It should be 6
df_challenge.z.isnull().sum() # 5
df_challenge.z1.isna().sum() # 6 It is correct
df_challenge.z1.isnull().sum() # 6
</code></pre>
<p>My question is: why isn't <code>.isnull()</code> or <code>.isna()</code> performing correctly (or am I mistaken here) on these columns? The difference is the data types involved in the calculation: in <code>z</code> the division happens on the newer pandas nullable dtypes (Int64/Float64), whereas in <code>z1</code> it happens on plain floats.</p>
<p>Now, to circumvent the problem, instead of using <code>.isna</code>/<code>.isnull</code> I have tried <code>np.isfinite</code> with pandas (together with the negation operator <code>~</code>), and it correctly picks up the NaN.</p>
<p>So my second question is whether this is a good way to catch such NaNs in pandas.</p>
<p>Here is what worked</p>
<pre><code>df_challenge.loc[~df_challenge.z.pipe(np.isfinite),:]
</code></pre>
<p>However, I am not fully satisfied with this workaround, even though it works for me. I would like to understand the behaviour and find a better solution.</p>
<p>Thanks</p>
<hr />
<p>Pandas version: '2.2.2'</p>
<p>Python version: Python 3.10.14</p>
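What you are seeing looks like the documented split between <code>pd.NA</code> and NaN in the masked (nullable) dtypes: the Float64 result of <code>x/y</code> tracks missing values in a separate mask, and a NaN produced during arithmetic by 0/0 is not added to that mask, so <code>isna()</code> does not count it (behaviour that has shifted across pandas versions, so treat this as a hypothesis to check against your version). Casting to a plain NumPy float dtype collapses both kinds of missingness into <code>np.nan</code>, which is arguably a less surprising workaround than <code>np.isfinite</code>; a sketch assuming pandas 2.x:

```python
import numpy as np
import pandas as pd

x = pd.Series([1, 0, pd.NA], dtype="Int64")
y = pd.Series([0, 0, 2], dtype="Int64")

z = x / y  # Float64: inf (1/0), NaN (0/0, computed), <NA> (masked)

# Casting to the NumPy dtype turns both the computed NaN and the masked
# <NA> into np.nan, so a single isna() pass sees all of them.
missing = z.astype("float64").isna()
```

If you specifically want "NaN or NA but not inf", your <code>~np.isfinite</code> pipe is reasonable, but note it also flags <code>inf</code>, so the two approaches differ on rows like 1/0.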
|
<python><pandas>
|
2024-08-12 11:15:40
| 1
| 11,116
|
PKumar
|
78,861,297
| 8,726,488
|
Python regex convert single row into multiple rows
|
<p>This is my data, all in a single line.</p>
<pre><code>1. B 2. E 3. E 4. D 5. E 6. A 7. A 8. E 9. E 10. D
</code></pre>
<p>How can I convert it into multiple lines, as below, using Python?</p>
<pre><code>1. B
2. E
3. E
.
.
10. D
</code></pre>
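One sketch, assuming every record starts with digits followed by a dot: either capture the records with <code>findall</code> and rejoin them, or insert a newline before each numbered item with <code>re.sub</code>:

```python
import re

s = "1. B 2. E 3. E 4. D 5. E 6. A 7. A 8. E 9. E 10. D"

# Option 1: pull out "number. letter" records and rejoin with newlines.
records = re.findall(r"\d+\.\s+\S+", s)
print("\n".join(records))

# Option 2: replace the whitespace before each numbered item with a newline.
multiline = re.sub(r"\s+(?=\d+\.)", "\n", s)
```

Both rely on the "digits + dot" marker being unambiguous; if the answer letters could themselves look like numbers, the pattern would need tightening.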
|
<python><python-3.x><regex>
|
2024-08-12 10:59:18
| 3
| 3,058
|
Learn Hadoop
|
78,861,118
| 13,757,692
|
Multiply array of matrices with array of vectors in Numpy
|
<p>I have an array of matrices <code>A</code> of shape <code>A.shape = (N, 3, 3)</code> and an array of vectors <code>V</code> of shape <code>V.shape = (N, 3)</code>. I want to get an (N, 3) array, where each vector is the result of multiplying the i-th matrix with the i-th vector. Such as:</p>
<pre><code>result = [A[i] @ V[i] for i in range(N)]
</code></pre>
<p>Is there a way of doing this without looping, using NumPy? Ideally, I would also like this to work with arrays of higher dimensions, such as <code>A.shape = (N, M, 3, 3)</code> and <code>V.shape = (N, M, 3)</code>.</p>
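Both matmul broadcasting and <code>einsum</code> handle this without a Python loop, and both extend to extra leading batch dimensions (assuming the batched vector array has shape <code>(N, M, 3)</code>):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3, 3))
V = rng.standard_normal((5, 3))

# Option 1: matmul broadcasting, treating V as a stack of column vectors.
r1 = (A @ V[..., None])[..., 0]

# Option 2: einsum, where '...' broadcasts over any leading batch dims.
r2 = np.einsum('...ij,...j->...i', A, V)

# Reference: the explicit loop from the question.
loop = np.array([A[i] @ V[i] for i in range(5)])
```

The same two lines work unchanged for <code>A.shape = (N, M, 3, 3)</code> with <code>V.shape = (N, M, 3)</code>.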
|
<python><arrays><numpy>
|
2024-08-12 10:10:52
| 1
| 466
|
Alex V.
|
78,860,597
| 2,401,053
|
Parse Messy Dates from an Object Column and Replace with Formatted Dates
|
<p>I'm new to R and I'm writing a script to clean messy dates. I have a linelist of deaths with a column called "Date of Death". This column contains dates in inconsistent formats, so I'd like to parse them into a uniform date format. However, I get the following error (<code>cleanlist</code> object not found) when I run the whole script, even though the <code>cleanlist</code> code runs without any issues when I run it manually. I'd also like to know how to properly use the <code>mutate</code> function to update the <code>date_of_death</code> column in the <code>cleanlist</code> object.</p>
<pre class="lang-r prettyprint-override"><code># Loading packages --------------------------------------------------------
# Checks if package is installed, installs if necessary, and loads package for current session
pacman::p_load(
lubridate, # general package for handling and converting dates
parsedate, # has function to "guess" messy dates
here, # file management
rio, # data import/export
janitor, # data cleaning
epikit, # for creating age categories
tidyverse, # data management and visualization
magrittr,
dplyr,
reprex, # minimal repr example
datapasta # sample data
)
datecolumn <- "date_of_death"
# Data Import -------------------------------------------------------------
linelist <- data.frame(
stringsAsFactors = FALSE,
check.names = FALSE,
`Date of Death` = c("45236","45212","45152",
"JANUARY 19, 2023","June 25, 2023","45200","45164",
"5/16/2023","45277","44930"))
# Data Cleaning -----------------------------------------------------------
cleanlist <- linelist %>% # the raw dataset
clean_names() %>% # automatically clean column names
# Format Dates ------------------------------------------------------------
# parse the date values
mutate(parsedate::parse_date(cleanlist[[datecolumn]]))
#> Error in `mutate()`:
#> ℹ In argument: `parsedate::parse_date(cleanlist[[datecolumn]])`.
#> Caused by error:
#> ! object 'cleanlist' not found
</code></pre>
<p><sup>Created on 2024-08-12 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.1.1</a></sup></p>
|
<python><r><date>
|
2024-08-12 08:07:06
| 1
| 1,327
|
maikelsabido
|
78,860,550
| 288,201
|
Return an empty AsyncIterable from a function
|
<p>I would like to define a function returning an <em>empty</em> <code>AsyncIterable</code> of a concrete type.</p>
<p>The function will be a part of a base class; derived classes can implement it as necessary but the default behaviour is to return no items to iterate.</p>
<p>Unfortunately there is no convenient "default value" for the concrete type.</p>
<p>The code has to pass mypy's type checking and ideally pylint.</p>
<p>The following attempts do not work:</p>
<pre><code>async def nope() -> AsyncIterable[int]:
pass # error: Missing return statement [empty-body]
# same with plain 'def'
</code></pre>
<pre><code>async def nope() -> AsyncIterable[int]:
return # error: Return value expected [return-value]
# same with plain 'def'
</code></pre>
<p>(Found at <a href="https://discuss.python.org/t/empty-asynchronous-iterable-function/29438/2" rel="nofollow noreferrer">Empty asynchronous iterable function</a>):</p>
<pre><code>async def nope() -> AsyncIterable[int]:
yield from () # error: "yield from" in async function
</code></pre>
<pre><code>def nope() -> AsyncIterable[int]:
yield from () # error: The return type of a generator function should be "Generator" or one of its supertypes [misc]
</code></pre>
<p>This works for <code>int</code> but there is no suitable default value for my concrete type. Besides, Pylint warns against a constant conditional:</p>
<pre><code>async def nope() -> AsyncIterable[int]:
if False: yield 1 # W0125: Using a conditional statement with a constant value (using-constant-test)
</code></pre>
<p>What is a simple and understandable way to define a function returning an empty async iterable?</p>
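The standard trick is to make the function an async generator whose body returns before it ever yields. The unreachable <code>yield</code> is what tells both the runtime and the type checker that this is a generator, so the <code>AsyncIterable[int]</code> annotation checks out with no default value of the concrete type required (mypy skips type-checking unreachable code, so the bare <code>yield</code> passes):

```python
import asyncio
from collections.abc import AsyncIterable

async def nope() -> AsyncIterable[int]:
    return
    # Unreachable: marks this as an async generator. It never runs,
    # so no value of the item type is ever needed.
    yield

async def collect() -> list[int]:
    return [item async for item in nope()]

print(asyncio.run(collect()))  # []
```

Derived classes can then override <code>nope</code> with a normal async generator that actually yields items.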
|
<python><asynchronous>
|
2024-08-12 07:57:06
| 2
| 8,287
|
Koterpillar
|
78,860,479
| 8,510,149
|
Find tree hierachy in group and collect in a list - PySpark
|
<p>In the data below, for each id2, I want to collect a list of the id1 values above it in the hierarchy/levels.</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType
schema = StructType([
StructField("group_id", StringType(), False),
StructField("level", IntegerType(), False),
StructField("id1", IntegerType(), False),
StructField("id2", IntegerType(), False)
])
# Feature values
levels = [1, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
id1_values = [0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867, 662867, 662867, 662867, 662867]
id2_values = [200001, 677555, 605026, 662867, 676423, 659933, 660206, 675767, 681116, 913248,
910758, 913773, 698738, 910387, 910758, 910387, 910113, 910657]
data = zip(['A'] * len(levels), levels, id1_values, id2_values)
# Create DataFrame
data = spark.createDataFrame(data, schema)
</code></pre>
<p>This can be done like this, using a window function and collect_list.</p>
<pre><code>window = Window.partitionBy('group_id').orderBy('level').rowsBetween(Window.unboundedPreceding, Window.currentRow)
data.withColumn("list_id1", F.collect_list("id1").over(window)).show(truncate=False)
</code></pre>
<pre><code>Output:
+--------+-----+------+------+-------------------------------------------------------------------------------------------------------------------------------------------+
|group_id|level|id1 |id2 |list_id1 |
+--------+-----+------+------+-------------------------------------------------------------------------------------------------------------------------------------------+
|A |1 |0 |200001|[0] |
|A |2 |200001|677555|[0, 200001] |
|A |3 |677555|605026|[0, 200001, 677555] |
|A |3 |677555|662867|[0, 200001, 677555, 677555] |
|A |3 |677555|676423|[0, 200001, 677555, 677555, 677555] |
|A |3 |677555|659933|[0, 200001, 677555, 677555, 677555, 677555] |
|A |3 |677555|660206|[0, 200001, 677555, 677555, 677555, 677555, 677555] |
|A |4 |605026|675767|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026] |
|A |4 |605026|681116|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026] |
|A |4 |605026|913248|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026] |
|A |4 |605026|910758|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026] |
|A |4 |605026|913773|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026] |
|A |4 |605026|698738|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026] |
|A |4 |662867|910387|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867] |
|A |4 |662867|910758|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867, 662867] |
|A |4 |662867|910387|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867, 662867, 662867] |
|A |4 |662867|910113|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867, 662867, 662867, 662867] |
|A |4 |662867|910657|[0, 200001, 677555, 677555, 677555, 677555, 677555, 605026, 605026, 605026, 605026, 605026, 605026, 662867, 662867, 662867, 662867, 662867]|
+--------+-----+------+------+-------------------------------------------------------------------------------------------------------------------------------------------+
</code></pre>
<p>In some cases there are several id1s with the same level. I want the collect_list to take this into account.</p>
<p>As an example, on level 4 we have two unique id1s, 605026 and 662867. For id2 910387, the corresponding id1 on level 4 is 662867, so I don't want 605026 in its list.</p>
<p>The list I want to collect should only include one id1 per level, capturing a tree path up to level 1.</p>
<p>For id2 910657, that list should be [662867, 677555, 200001, 0].</p>
<p>How can this be achieved using PySpark API?</p>
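Not PySpark itself, but the logic a PySpark self-join or UDF has to express can be sketched in plain Python: each row's id1 is the parent of its id2, so the wanted list is just the chain of ancestors walked up from the row. (The map below is a subset of the sample data; note that in the full sample id2 910758 appears under two different id1 values, which a real id2→id1 mapping would need to disambiguate.)

```python
# Plain-Python sketch of the ancestor-chain logic (subset of the sample data).
parent = {
    200001: 0, 677555: 200001,
    605026: 677555, 662867: 677555,
    910657: 662867, 675767: 605026,
}

def ancestor_path(node):
    # Walk the id2 -> id1 parent map up to the root.
    path = []
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

print(ancestor_path(910657))  # [662867, 677555, 200001, 0]
```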
|
<python><pyspark>
|
2024-08-12 07:39:50
| 1
| 1,255
|
Henri
|
78,860,333
| 4,577,688
|
A more efficient way to add values in place at columns of a 2D matrix, using a 2D array for the column indices
|
<p>I would like to do a <code>+=</code> operation at specified columns of a 2d matrix, where the column indices are in another 2D matrix.</p>
<p>Specifically, is there a more efficient way to do the operation in <code>loop_fun</code> below.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

np.random.seed(1)
N, V, D, B = 5, 25000, 300, 512
grad = np.zeros((D, V))

# col_idx rows and columns can contain duplicate indices; a vectorized
# += applies only one update per duplicated index, so accumulation fails.
col_idx = np.random.randint(0, V-1, size=(N, B))
# force at least one example of duplicates within a column.
col_idx[:, 0] = np.array([0, 100, V-1, 0, V-1])
values = np.random.normal(size=(D, B))

def loop_fun(grad_mat, col_mat, val_mat):
    for b in range(col_mat.shape[1]):
        for row in range(col_mat.shape[0]):
            grad_mat[:, col_mat[row, b]] += val_mat[:, b]
    return grad_mat

def loop_fun_wrong(grad_mat, col_mat, val_mat):
    # This fails because += only applies the
    # last instance of a duplicated index.
    for b in range(col_mat.shape[1]):
        grad_mat[:, col_mat[:, b]] += val_mat[:, b, np.newaxis]
    return grad_mat

grad = loop_fun(grad, col_idx, values)
grad_wrong = loop_fun_wrong(np.zeros_like(grad), col_idx, values)
# Unfortunately the inner loop is required!
print(f'{np.allclose(grad, grad_wrong)=}')  # False
</code></pre>
<p><code>np.add.at</code> is supposed to be useful when <code>+=</code> doesn't work, but I couldn't quite figure out how to use it in this case.</p>
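For the record, the double loop can be replaced with a single `np.add.at` call (a sketch; shapes follow the question's conventions). `np.add.at` performs unbuffered in-place addition, so duplicated indices all contribute; working on the transpose lets the fancy index address columns:

```python
import numpy as np

def loop_fun(grad_mat, col_mat, val_mat):
    # Reference double loop from the question.
    for b in range(col_mat.shape[1]):
        for row in range(col_mat.shape[0]):
            grad_mat[:, col_mat[row, b]] += val_mat[:, b]
    return grad_mat

def add_at_fun(grad_mat, col_mat, val_mat):
    N, B = col_mat.shape
    cols = col_mat.ravel()             # flat (row, b) order
    b_idx = np.tile(np.arange(B), N)   # value column for each flat entry
    # Unbuffered accumulation: duplicate column indices all take effect.
    np.add.at(grad_mat.T, cols, val_mat[:, b_idx].T)
    return grad_mat

rng = np.random.default_rng(1)
N, V, D, B = 3, 7, 4, 5
col_idx = rng.integers(0, V, size=(N, B))
values = rng.normal(size=(D, B))
grad_loop = loop_fun(np.zeros((D, V)), col_idx, values)
grad_vec = add_at_fun(np.zeros((D, V)), col_idx, values)
print(np.allclose(grad_loop, grad_vec))  # True
```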
|
<python><arrays><numpy>
|
2024-08-12 06:58:44
| 1
| 3,840
|
dule arnaux
|
78,860,330
| 1,942,868
|
How to add every object to a many-to-many relationship model
|
<pre><code>class MyUserGroup(BaseModel):
    my_user_types = m.ManyToManyField('MyUserType', blank=True)
</code></pre>
<p>I have this model, which has a many-to-many relationship to MyUserType.</p>
<p>Now I want to add every MyUserType object to this model, like so:</p>
<pre><code>mug = MyUserGroup()
mug.save()
mug.my_user_types.add(MyUserType.objects.all())
</code></pre>
<p>However, this raises the following error:</p>
<pre><code>TypeError: Field 'id' expected a number but got <SafeDeleteQueryset [<MyUserType:A>, <MyUserType: B>, <MyUserType: C>]>.
</code></pre>
<p>How can I do this?</p>
|
<python><django>
|
2024-08-12 06:58:25
| 1
| 12,599
|
whitebear
|
78,859,973
| 13,000,229
|
Why does using `any` as a type hint cause no error?
|
<h2>Question</h2>
<p>Recently I mistakenly wrote <code>any</code> where I should have written <code>Any</code>. I found that this does not cause any warning or error message.<br />
Why is this not an error?</p>
<h2>Sample code</h2>
<p>Environment</p>
<ul>
<li>Python 3.12.4</li>
<li>IDE: PyCharm 2024.1.4 (Professional Edition)</li>
</ul>
<pre><code>from typing import Any

# This is just for checking if my IDE is working correctly.
def test_str() -> dict[str, str]:
    return {'one': 1, 'two': 2}  # My IDE shows a warning on this line.

# I don't know why this function does not produce any message in my IDE.
def test_any() -> dict[str, any]:
    return {'one': 1, 'two': 2}  # This is where I expect to see a warning or an error.

# This is the function where I use `Any` correctly.
def test_typing_any() -> dict[str, Any]:
    return {'one': 1, 'two': 2}
</code></pre>
<p><a href="https://i.sstatic.net/MB1UMy7p.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MB1UMy7p.png" alt="Sample code" /></a>
<a href="https://i.sstatic.net/cwZftZsg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cwZftZsg.png" alt="Sample code with a message" /></a></p>
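For context, `any` is the builtin aggregation function, and at runtime an annotation is just an ordinary expression that the interpreter never validates as a type, so `dict[str, any]` builds without complaint; only a static checker (mypy, pyright, PyCharm's inspector) distinguishes it from `typing.Any`. A small sketch:

```python
from typing import Any

# `any` is the builtin function -- a perfectly valid expression -- and
# the interpreter never checks that an annotation is actually a type,
# so this line raises no error at runtime:
hint = dict[str, any]
assert callable(any)  # the builtin, not a type

# Only a static checker flags the line above; `Any` from `typing`
# is the real "anything" type:
ok: dict[str, Any] = {"one": 1}
print(hint)
```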
|
<python><pycharm><python-typing>
|
2024-08-12 04:24:10
| 0
| 1,883
|
dmjy
|
78,859,586
| 6,197,439
|
Programmatically close a QTabWidget tab, so the tabCloseRequested handler triggers?
|
<p>In the example below, which produces this GUI:</p>
<p><a href="https://i.sstatic.net/AJbhFzr8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJbhFzr8.png" alt="example GUI" /></a></p>
<p>... if I click the close button on tab 2 (or tab 3), then the <code>on_tab_close</code> handler fires.</p>
<p>However, if I click the button "Close tab 2" which calls <code>.removeTab</code>, then tab 2 is indeed removed, but the <code>on_tab_close</code> handler does not fire.</p>
<p>How can I programmatically close tab 2 (here, by clicking the button "Close tab 2"), in a way that will also trigger the <code>on_tab_close</code> handler?</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import sys
from PyQt5.QtWidgets import (QMainWindow, QApplication, QPushButton, QWidget,
                             QTabWidget, QVBoxLayout, QLabel, QStyle)

class MyWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("QTabWidget test")
        self.setGeometry(0, 0, 300, 200)

        self.cent_widget = QWidget(self)
        self.setCentralWidget(self.cent_widget)
        self.vlayout = QVBoxLayout(self.cent_widget)

        self.close_btn = QPushButton("Close tab 2")
        self.close_btn.clicked.connect(self.on_close_btn)
        self.vlayout.addWidget(self.close_btn)

        self.tabswidget = QTabWidget()
        self.vlayout.addWidget(self.tabswidget)
        self.tab1 = QLabel("This is tab 1.")
        self.tab2 = QLabel("This is tab 2.")
        self.tab3 = QLabel("This is tab 3.")
        self.tabswidget.addTab(self.tab1, "Tab 1")
        self.tabswidget.addTab(self.tab2, "Tab 2")
        self.tabswidget.addTab(self.tab3, "Tab 3")
        self.tabswidget.setTabsClosable(True)  # https://stackoverflow.com/q/60409663
        default_side = self.tabswidget.style().styleHint(
            QStyle.SH_TabBar_CloseButtonPosition, None, self.tabswidget.tabBar()
        )
        self.tabswidget.tabBar().setTabButton(0, default_side, None)
        self.tabswidget.tabCloseRequested.connect(self.tabswidget.removeTab)  # https://stackoverflow.com/q/19151159
        self.tabswidget.tabCloseRequested.connect(self.on_tab_close)
        self.show()

    def on_close_btn(self):
        print("on_close_btn")
        self.tabswidget.removeTab(self.tabswidget.indexOf(self.tab2))

    def on_tab_close(self):
        print("on_tab_close")

if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = MyWindow()
    sys.exit(app.exec_())
</code></pre>
|
<python><pyqt5>
|
2024-08-11 23:20:10
| 0
| 5,938
|
sdbbs
|
78,859,343
| 1,601,580
|
How to reinitialize GPT-2 XL from scratch in HuggingFace?
|
<p>I'm trying to confirm that my GPT-2 model is being trained from scratch, rather than using any pre-existing pre-trained weights. Here's my approach:</p>
<ol>
<li><strong>Load the pre-trained GPT-2 XL model</strong>: I load a pre-trained GPT-2 XL model using <code>AutoModelForCausalLM.from_pretrained("gpt2-xl")</code> and calculate the total L2 norm of the weights for this model.</li>
<li><strong>Initialize a new GPT-2 model from scratch</strong>: I then initialize a new GPT-2 model from scratch with a custom configuration using <code>GPT2Config</code>.</li>
<li><strong>Compare L2 norms</strong>: I calculate the L2 norm of the weights for both the pre-trained model and the freshly initialized model. My assumption is that the L2 norm of the scratch model should be much smaller compared to the pre-trained model if the scratch model is truly initialized from random weights.</li>
</ol>
<p>Here's the code snippet:</p>
<pre><code>import torch
from transformers import GPT2LMHeadModel, GPT2Config, AutoModelForCausalLM

# Step 1: Load the pre-trained GPT-2 XL model
pretrained_model = AutoModelForCausalLM.from_pretrained("gpt2-xl")

# Step 2: Calculate the L2 norm of the weights for the pre-trained model
pretrained_weight_norm = 0.0
for param in pretrained_model.parameters():
    pretrained_weight_norm += torch.norm(param, p=2).item()
print(f"Total L2 norm of pre-trained model weights: {pretrained_weight_norm:.2f}")

# Step 3: Initialize a new GPT-2 model from scratch with custom configuration
config = GPT2Config(
    vocab_size=52000,  # Ensure this matches the tokenizer's vocabulary size
    n_ctx=1024,        # Context window size (number of tokens the model can see at once)
    bos_token_id=0,    # Begin-of-sequence token
    eos_token_id=1,    # End-of-sequence token
)
model = GPT2LMHeadModel(config)

# Step 4: Calculate the L2 norm of the weights for the freshly initialized model
scratch_weight_norm = 0.0
for param in model.parameters():
    scratch_weight_norm += torch.norm(param, p=2).item()
print(f"Total L2 norm of model initialized from scratch: {scratch_weight_norm:.2f}")
</code></pre>
<p>Is this method a valid way to confirm that the model is being trained from scratch? Are there any potential issues or better ways to verify that the model has no pre-existing learned weights?</p>
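One small caveat, illustrated with numpy rather than torch so it stands alone: summing per-parameter norms, as the script does, is not itself an L2 norm. The global L2 norm is the square root of the sum of squared entries. For a rough pretrained-vs-scratch magnitude comparison the distinction doesn't matter much, but the two quantities differ:

```python
import numpy as np

# Toy "parameters": a sum of per-tensor norms vs. the true global L2 norm.
params = [np.ones((2, 2)), np.ones(3)]
sum_of_norms = sum(np.linalg.norm(p) for p in params)       # 2 + sqrt(3)
global_norm = np.sqrt(sum(np.sum(p ** 2) for p in params))  # sqrt(7)
print(sum_of_norms, global_norm)
```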
<p>The output looks right:</p>
<pre><code>~/beyond-scale-language-data-diversity$ /opt/conda/envs/beyond_scale_div_coeff/bin/python /home/ubuntu/beyond-scale-language-data-diversity/playground/test_gpt2_pt_vs_reinit_scratch.py
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 689/689 [00:00<00:00, 8.05MB/s]
model.safetensors: 100%|██████████████████████████████████████████████████████████████████████████████████| 6.43G/6.43G [00:29<00:00, 221MB/s]
generation_config.json: 100%|████████████████████████████████████████████████████████████████████████████████| 124/124 [00:00<00:00, 1.03MB/s]
Total L2 norm of pre-trained model weights: 24542.74
Total L2 norm of model initialized from scratch: 1637.31
(beyond_scale_div_coeff)
</code></pre>
<p>cross: <a href="https://discuss.huggingface.co/t/how-to-reinitialize-from-scratch-gpt-xl-in-hugging-face-hf/101905" rel="nofollow noreferrer">https://discuss.huggingface.co/t/how-to-reinitialize-from-scratch-gpt-xl-in-hugging-face-hf/101905</a></p>
<p>ref: <a href="https://github.com/alycialee/beyond-scale-language-data-diversity/issues/18" rel="nofollow noreferrer">https://github.com/alycialee/beyond-scale-language-data-diversity/issues/18</a></p>
|
<python><machine-learning><huggingface-transformers><huggingface>
|
2024-08-11 20:27:07
| 1
| 6,126
|
Charlie Parker
|
78,859,302
| 688,624
|
Python MRO for operators: Chooses RHS `__rmul__` instead of LHS `__mul__` when RHS is a subclass
|
<p>Consider the following self-contained example:</p>
<pre class="lang-py prettyprint-override"><code>class Matrix:
    def __mul__(self, other):
        print("Matrix.__mul__(⋯)")
        return NotImplemented
    def __rmul__(self, other):
        print("Matrix.__rmul__(⋯)")
        return NotImplemented

class Vector(Matrix):
    def __mul__(self, other):
        print("Vector.__mul__(⋯)")
        return NotImplemented
    def __rmul__(self, other):
        print("Vector.__rmul__(⋯)")
        return NotImplemented

matr = Matrix()
vec = Vector()

print("=== Using explicit `__mul__`: ===")
matr.__mul__(vec)
print()
print("=== Using implicit `*`: ===")
matr * vec
</code></pre>
<p>with output (CPython 3.12.2):</p>
<pre><code>=== Using explicit `__mul__`: ===
Matrix.__mul__(⋯)
=== Using implicit `*`: ===
Vector.__rmul__(⋯)
Matrix.__mul__(⋯)
(TypeError raised: "unsupported operand type(s) for *: 'Matrix' and 'Vector'")
</code></pre>
<p><strong>I'm trying to understand why, in the second case, <code>Vector.__rmul__</code> gets called <em>before</em> <code>Matrix.__mul__</code>.</strong></p>
<p>My understanding is that Python sees the <code>*</code>, then looks at the LHS and RHS. If it sees the LHS has a <code>__mul__</code>, it's called (with arguments self=LHS, right=RHS). If that returns <code>NotImplemented</code>, <em>only then</em> does it try the RHS's <code>__rmul__</code> (with arguments self=RHS, right=LHS).</p>
<p>In particular, in this case, when it looks at the LHS, it should find <code>Matrix.__mul__</code>, and only when that fails should it <em>then</em> try <code>Vector.__rmul__</code>. But it's doing it the other way around! Why?</p>
<p>It's also important to note: this only happens when <code>Vector</code> is a subclass of <code>Matrix</code>. If they are unrelated, then the result is as expected.</p>
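For context, this is documented behavior rather than a bug: when the right operand's type is a proper subclass of the left operand's type <em>and</em> it overrides the reflected method, Python tries the reflected method first, so that subclasses can refine inherited operators. A minimal sketch of the rule (names are illustrative):

```python
class Base:
    def __mul__(self, other):
        return "Base.__mul__"
    def __rmul__(self, other):
        return "Base.__rmul__"

class Sub(Base):
    # Sub OVERRIDES the reflected method, so for `Base() * Sub()`
    # Python calls Sub.__rmul__ before ever trying Base.__mul__.
    def __rmul__(self, other):
        return "Sub.__rmul__"

print(Base() * Sub())   # Sub.__rmul__ wins
print(Base() * Base())  # same types: Base.__mul__ is used
```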
|
<python><language-lawyer><operators><method-resolution-order>
|
2024-08-11 20:04:56
| 1
| 15,517
|
geometrian
|
78,859,203
| 2,288,506
|
Python + Sqlite3 dump, source to MariaDB: Unknown collation: 'utf8mb4_0900_ai_ci' (db noob)
|
<p>I'm following Head First Python (3ed). I'm at the last chapter and have a bug I can't get past.</p>
<p>I've got an sqlite3 database that I need to port to MariaDB. I've got the schema and data in separate files:</p>
<pre><code>sqlite3 CoachDB.sqlite3 .schema > schema.sql
sqlite3 CoachDB.sqlite3 '.dump swimmers events times --data-only' > data.sql
</code></pre>
<p>I've installed MariaDB via <a href="https://mariadb.org/download/?t=repo-config&d=24.04%20%22noble%22&v=11.4&r_m=acorn" rel="nofollow noreferrer">apt</a>, granted privileges to a user, and logged in to get MariaDB prompt:</p>
<pre><code>mariadb -u swimuser -p swimDB
source schema.sql;
source data.sql;
</code></pre>
<p>Now I run my Python app locally, which connects to MariaDB, but (I'm not sure exactly where) a call fails with:</p>
<p><strong>mysql.connector.errors.DatabaseError: 1273 (HY000): Unknown collation: 'utf8mb4_0900_ai_ci'</strong></p>
<p>Back at MariaDB prompt:</p>
<p><code>show collation like 'utf8%';</code> # shows no collation 'utf8mb4_0900_ai_ci'</p>
<p><code>select * from INFORMATION_SCHEMA.SCHEMATA;</code> # shows default is is utf8mb4 / utf8mb4_uca1400_ai_ci</p>
<p>At shell prompt, <code>file -i schema.sql</code> shows it's plain-text in us-ascii. I tried opening schema.sql and data.sql in notepadqq and saving them as utf8 but I still got the same error. I dropped the database and recreated it with:</p>
<pre><code>CREATE DATABASE swimDB
DEFAULT CHARACTER SET utf8mb4
DEFAULT COLLATE utf8mb4_unicode_520_ci;
</code></pre>
<p>...then again sourced schema.sql and data.sql and still got the same error.</p>
<p>I saw a post where someone asked for this so...</p>
<pre><code>MariaDB [swimDB]> SHOW VARIABLES WHERE VALUE LIKE 'utf%';
+--------------------------+-------------------------------+
| Variable_name | Value |
+--------------------------+-------------------------------+
| character_set_client | utf8mb3 |
| character_set_collations | utf8mb4=utf8mb4_uca1400_ai_ci |
| character_set_connection | utf8mb3 |
| character_set_database | utf8mb4 |
| character_set_results | utf8mb3 |
| character_set_server | utf8mb4 |
| character_set_system | utf8mb3 |
| collation_connection | utf8mb3_general_ci |
| collation_database | utf8mb4_uca1400_ai_ci |
| collation_server | utf8mb4_uca1400_ai_ci |
| old_mode | UTF8_IS_UTF8MB3 |
+--------------------------+-------------------------------+
</code></pre>
<p>So I guess there's a character set or encoding problem with the data (?).</p>
<p>At this point I'm lost, madly searching the interwebs for clues. Any help appreciated and sorry for the long post :)</p>
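One observation: the dumps came from SQLite, which knows nothing about MySQL collations, so the collation is more plausibly being requested by the client. Recent versions of mysql-connector-python default to MySQL 8's <code>utf8mb4_0900_ai_ci</code>, which MariaDB does not implement. A hedged configuration sketch (host, user, and password are placeholders; adjust to your setup) that pins a collation MariaDB does know:

```python
import mysql.connector  # pip install mysql-connector-python

# Host, user, and password below are placeholders -- adjust to your setup.
conn = mysql.connector.connect(
    host="localhost",
    user="swimuser",
    password="***",
    database="swimDB",
    charset="utf8mb4",
    collation="utf8mb4_general_ci",  # a collation MariaDB supports
)
```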
|
<python><sql><sqlite><mariadb>
|
2024-08-11 19:10:36
| 1
| 512
|
CraigFoote
|
78,858,791
| 558,801
|
Not able to open or call the current open file with xlwings
|
<p>I'm not able to open or call the currently active workbook using xlwings.
I'm on macOS Sonoma 14.5 and xlwings v0.31.10.</p>
<p>I keep getting this error (I'm running the .py file out of 4. Master):</p>
<pre><code>UnboundLocalError: cannot access local variable 'mount_point' where it is not associated with a value
</code></pre>
<p>My Excel file is on a SharePoint which is synced to my OneDrive folder locally.</p>
<p>I've also configured <code>ONEDRIVE_CONSUMER_MAC</code> (to /Users/John/OneDrive - MyCompany) and <code>ONEDRIVE_COMMERCIAL_MAC</code> (to /Users/John/OneDrive - MyCompany 2) in my environment variables in <code>~/.bash_profile</code></p>
<p>I've tried something very simple but can't get it to work.</p>
<pre><code>import xlwings as xw

def getWB():
    try:
        wb = xw.Book.caller()  # never works
    except:
        wb = xw.Book("/Users/John/Library/CloudStorage/OneDrive-MyCompany/Project XYZ/2. Excel/4. Master/Name of Excel file.xlsm").set_mock_caller()
        wb = xw.Book.caller()
    return wb

def main():
    wb = getWB()
    sheet = wb.sheets[0]
    if sheet["E4"].value == "Hello xlwings!":
        sheet["E4"].value = "Bye xlwings!"
    else:
        sheet["E4"].value = "Hello xlwings!"

if __name__ == "__main__":
    app = xw.App()
    main()
</code></pre>
<p>I'd appreciate any help!
Thank you!</p>
|
<python><excel><xlwings>
|
2024-08-11 16:16:15
| 1
| 2,227
|
cocos2dbeginner
|
78,858,596
| 5,650,267
|
minimizing a multidimentional solution over a dataset
|
<p>I have a rectangle within a 2D space and a set of points within the rectangle. The rectangle is moving in the following manner: the center moves a value of <code>u</code> on the x-axis and <code>v</code> on the y-axis, and everything is scaled by factors of <code>sx</code> and <code>sy</code> respectively. I only get the location of the points after the motion. My goal is to estimate the <code>(u, v, sx, sy)</code> vector.</p>
<p>I have the following data at runtime:</p>
<ul>
<li>px - ndarray with the x values of the points before the movement</li>
<li>py - ndarray with the y values of the points before the movement</li>
<li>cx - ndarray with the x values of the points after the movement</li>
<li>cy - ndarray with the y values of the points after the movement</li>
<li>x0 - the x value of the location of the center of the rectangle in the previous frame.</li>
<li>y0 - the y value of the location of the center of the rectangle in the previous frame.</li>
</ul>
<p>The equation to calculate the position of the point in the current frame given the <code>(u, v, sx, sy)</code> vector is given by:</p>
<p>x_currentFrameCalculated = sx * px + u + x0 * (1 - sx)<br />
y_currentFrameCalculated = sy * py + v + y0 * (1 - sy)</p>
<p>Note that the axes are independent. I have defined the following function to be minimized:</p>
<pre><code>def estimateCurr(vec, previous, current, center):
    return np.abs(current - (vec[1] * previous + vec[0] + center * (1 - vec[1])))
</code></pre>
<p>Here, vec[0] represents the motion along the axis, and vec[1] represents the scale.
I am setting the bounds according to my problem in the following manner:</p>
<pre><code>h = 270
w = 480
boundsX = Bounds([-w, -1], [w, w - 1])
boundsY = Bounds([-h, -1], [h, h - 1])
</code></pre>
<p>Initializing a guess to <code>[0, 0]</code>:</p>
<pre><code>init = np.zeros((2 , ))
</code></pre>
<p>I then try to find the optimal solution by:</p>
<pre><code>res = minimize(estimateCurr, x0 = init, args=(px, cx, centerX), method='trust-constr', bounds=boundsX, options={'verbose': 1})
print(res)
res = minimize(estimateCurr, x0 = init, args=(py, cy, centerY), method='trust-constr', bounds=boundsY, options={'verbose': 1})
print(res)
</code></pre>
<p>I am getting:</p>
<blockquote>
<p>ValueError: all the input arrays must have same number of dimensions,
but the array at index 0 has 2 dimension(s) and the array at index 1
has 1 dimension(s)</p>
</blockquote>
<p>This is expected since I have 17 points; printing the shapes:</p>
<pre><code>print(px.shape)
print(cx.shape)
print(centerX.shape)
print(init.shape)
</code></pre>
<blockquote>
<p>(17,) (17,) (17,) (2,)</p>
</blockquote>
<p>However, I am not sure how to set the correct sizes for the problem, or even whether I am using the correct solver. I tried tiling the <code>(u, v, sx, sy)</code> vector to fit the size of the data, to no avail. How do I approach this problem?</p>
<p>My formal question is, given a set of measured datapoints, how do I fit a multidimensional solution that minimizes the error using Python?</p>
<hr />
<p>Adding a dummy example, as requested in the comments. I have shortened the dataset to 6 points:</p>
<pre><code>from scipy.optimize import Bounds
from scipy.optimize import minimize
import numpy as np

def estimateCurr(vec, previous, current, center):
    return np.abs(current - (vec[1] * previous + vec[0] + center * (1 - vec[1])))

h = 270
w = 480
x0 = 90.8021
y0 = -20.8282
px = np.array([86.7581, 74.5433, 85.0012, 84.348, 83.704, 91.6176])
py = np.array([-19.5163, -17.3714, -3.39899, -4.83069, -1.97073, -2.20099])
cx = np.array([89.7436, 75.8955, 87.5827, 87.1492, 86.0817, 92.6683])
cy = np.array([-19.2132, -16.3913, -2.9177, -4.81898, -1.49321, -2.43572])
numPoints = px.shape[0]

boundsX = Bounds([-w, -1], [w, w - 1])
boundsY = Bounds([-h, -1], [h, h - 1])
centerX = np.full((numPoints,), x0)
centerY = np.full((numPoints,), y0)
init = np.zeros((2,))

print(px.shape)
print(cx.shape)
print(centerX.shape)
print(init.shape)

res = minimize(estimateCurr, x0=init, args=(px, cx, centerX), method='trust-constr', bounds=boundsX, options={'verbose': 1})
print(res)
res = minimize(estimateCurr, x0=init, args=(py, cy, centerY), method='trust-constr', bounds=boundsY, options={'verbose': 1})
print(res)
</code></pre>
<hr />
<p>Adding the full traceback:</p>
<pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_6249/3484902536.py in <module>
25 print(centerX.shape)
26 print(init.shape)
---> 27 res = minimize(estimateCurr, x0 = init, args=(px, cx, centerX), method='trust-constr', bounds=boundsX, options={'verbose': 1})
28 print(res)
29 res = minimize(estimateCurr, x0 = init, args=(py, cy, centerY), method='trust-constr', bounds=boundsY, options={'verbose': 1})
~/.conda/envs/oxip-new6/lib/python3.7/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
634 return _minimize_trustregion_constr(fun, x0, args, jac, hess, hessp,
635 bounds, constraints,
--> 636 callback=callback, **options)
637 elif meth == 'dogleg':
638 return _minimize_dogleg(fun, x0, args, jac, hess,
~/.conda/envs/oxip-new6/lib/python3.7/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py in _minimize_trustregion_constr(fun, x0, args, grad, hess, hessp, bounds, constraints, xtol, gtol, barrier_tol, sparse_jacobian, callback, maxiter, verbose, finite_diff_rel_step, initial_constr_penalty, initial_tr_radius, initial_barrier_parameter, initial_barrier_tolerance, factorization_method, disp)
518 initial_barrier_tolerance,
519 initial_constr_penalty, initial_tr_radius,
--> 520 factorization_method)
521
522 # Status 3 occurs when the callback function requests termination,
~/.conda/envs/oxip-new6/lib/python3.7/site-packages/scipy/optimize/_trustregion_constr/tr_interior_point.py in tr_interior_point(fun, grad, lagr_hess, n_vars, n_ineq, n_eq, constr, jac, x0, fun0, grad0, constr_ineq0, jac_ineq0, constr_eq0, jac_eq0, stop_criteria, enforce_feasibility, xtol, state, initial_barrier_parameter, initial_tolerance, initial_penalty, initial_trust_radius, factorization_method)
306 barrier_parameter, tolerance, enforce_feasibility, ...
347
<__array_function__ internals> in concatenate(*args, **kwargs)
ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 2 dimension(s) and the array at index 1 has 1 dimension(s)
</code></pre>
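For the record, `minimize` expects a scalar objective, while `estimateCurr` returns a vector of residuals; summing squared residuals would fix the shape error. In this particular model, though, c = s·p + u + x0·(1 − s) rearranges to (c − x0) = s·(p − x0) + u, which is linear in (s, u), so each axis can be solved directly with ordinary least squares. A numpy-only sketch using the dummy data above (`scipy.optimize.least_squares` would also accept the vector residual directly):

```python
import numpy as np

# Model: c = s*p + u + x0*(1 - s)  <=>  (c - x0) = s*(p - x0) + u,
# linear in (s, u), so ordinary least squares solves each axis.
def fit_axis(p, c, x0):
    A = np.column_stack([p - x0, np.ones_like(p)])
    (s, u), *_ = np.linalg.lstsq(A, c - x0, rcond=None)
    return u, s

x0, y0 = 90.8021, -20.8282
px = np.array([86.7581, 74.5433, 85.0012, 84.348, 83.704, 91.6176])
cx = np.array([89.7436, 75.8955, 87.5827, 87.1492, 86.0817, 92.6683])
py = np.array([-19.5163, -17.3714, -3.39899, -4.83069, -1.97073, -2.20099])
cy = np.array([-19.2132, -16.3913, -2.9177, -4.81898, -1.49321, -2.43572])

u, sx = fit_axis(px, cx, x0)
v, sy = fit_axis(py, cy, y0)
print(u, sx, v, sy)
```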
|
<python><scipy><mathematical-optimization>
|
2024-08-11 14:42:42
| 1
| 1,247
|
havakok
|
78,858,582
| 17,795,398
|
How to install torch with CUDA in a virtual environment (Torch not compiled with CUDA enabled)
|
<p>I'm trying to use the <a href="https://github.com/fpgaminer/joytag" rel="nofollow noreferrer">joytag model</a> locally. This is <code>requirements.txt</code>:</p>
<pre><code>torch>=2.0.1,<3.0.0
transformers>=4.36.2
torchvision>=0.15.2
einops>=0.7.0
safetensors>=0.4.1
pillow>=9.4.0
</code></pre>
<p>They provide a script to run the model. I created a virtual environment to avoid conflicts, I installed the requirements, and when I run the script, I get this error:</p>
<pre><code>Traceback (most recent call last):
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\joytag.py", line 13, in <module>
model = model.to('cuda')
^^^^^^^^^^^^^^^^
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 1174, in to
return self._apply(convert)
^^^^^^^^^^^^^^^^^^^^
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
module._apply(fn)
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
module._apply(fn)
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 780, in _apply
module._apply(fn)
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 805, in _apply
param_applied = fn(param)
^^^^^^^^^
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\nn\modules\module.py", line 1160, in convert
return t.to(
^^^^^
File "D:\Sync2\IAImages\LoRA\Scripts\TaggingModels\joytag\venv\Lib\site-packages\torch\cuda\__init__.py", line 305, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
</code></pre>
<p>I'm not very familiar with Torch and related packages, so it's hard for me to understand what I did wrong.</p>
|
<python><pytorch>
|
2024-08-11 14:34:08
| 1
| 472
|
Abel Gutiérrez
|
78,858,457
| 9,999,861
|
Server Error (500) and strange symbols inside a form
|
<p>I set up a Raspberry Pi with Ubuntu server as OS. I followed this guide: <a href="https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-apache-and-mod_wsgi-on-ubuntu-16-04" rel="nofollow noreferrer">Guide from digitalocean</a></p>
<p>It all seemed to work well until I opened a page with a form that looks somewhat strange (unwanted brackets before and after "Name:"):
<a href="https://i.sstatic.net/FyIRhTVo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FyIRhTVo.png" alt="Screenshot of the form" /></a></p>
<p>The code behind looks like this:</p>
<pre><code># models.py
class Category(models.Model):
    name = models.CharField(max_length=50)

    def __str__(self) -> str:
        return self.name

# forms.py
class CategoryForm(forms.ModelForm):
    class Meta:
        fields = "__all__"
        model = Category
        widgets = {
            'name': forms.TextInput(attrs={'class': 'form-control'})
        }

# views.py
def categories(request: HttpRequest):
    if request.method == 'POST':
        filledForm = CategoryForm(data=request.POST)
        if filledForm.is_valid():
            newCategory = filledForm
            newCategory.save()
    categoryForm = CategoryForm()
    categories = Category.objects.order_by('name')
    args = {
        'categoryForm': categoryForm,
        'categories': categories,
    }
    return render(request, 'categories.html', args)
</code></pre>
<p>And the HTML code:</p>
<pre><code><div class="container">
<h2>Categories</h2>
<br>
<form action="{% url 'categories' %}" method="post">
{% csrf_token %}
{{ categoryForm }}
<input type="submit" class="form-control btn-secondary" name="submit-new-category" value="Add">
</form>
<br>
<table class="table table-hover">
<tbody>
{% for category in categories %}
<tr>
<td>
<a href="{% url 'category_edit' category.id %}">
<div>
{{ category.name }}
</div>
</a>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</code></pre>
<p>When I try to post a filled Category-Form, Apache answers with a Server Error (500).</p>
<p>I checked the log file (/var/log/apache2/error.log), but there is nothing else than warnings and notices:</p>
<pre><code>[Sun Aug 11 13:40:06.992377 2024] [mpm_event:notice] [pid 2809:tid 281473660837920] AH00489: Apache/2.4.58 (Ubuntu) configured -- resuming normal operations
[Sun Aug 11 13:40:06.993125 2024] [core:notice] [pid 2809:tid 281473660837920] AH00094: Command line: '/usr/sbin/apache2'
[Sun Aug 11 13:40:54.766803 2024] [mpm_event:notice] [pid 2809:tid 281473660837920] AH00492: caught SIGWINCH, shutting down gracefully
[Sun Aug 11 13:41:05.325303 2024] [mpm_event:notice] [pid 3058:tid 281473704566816] AH00489: Apache/2.4.58 (Ubuntu) mod_wsgi/5.0.0 Python/3.12 configured -- resuming normal operations
[Sun Aug 11 13:41:05.326039 2024] [core:notice] [pid 3058:tid 281473704566816] AH00094: Command line: '/usr/sbin/apache2'
[Sun Aug 11 14:28:47.835081 2024] [mpm_event:notice] [pid 3058:tid 281473704566816] AH00492: caught SIGWINCH, shutting down gracefully
[Sun Aug 11 14:28:48.320266 2024] [mpm_event:notice] [pid 3971:tid 281473052844064] AH00489: Apache/2.4.58 (Ubuntu) mod_wsgi/5.0.0 Python/3.12 configured -- resuming normal operations
[Sun Aug 11 14:28:48.321026 2024] [core:notice] [pid 3971:tid 281473052844064] AH00094: Command line: '/usr/sbin/apache2'
[Sun Aug 11 14:30:37.648045 2024] [mpm_event:notice] [pid 3971:tid 281473052844064] AH00492: caught SIGWINCH, shutting down gracefully
[Sun Aug 11 14:33:43.893787 2024] [mpm_event:notice] [pid 912:tid 281472884133920] AH00489: Apache/2.4.58 (Ubuntu) mod_wsgi/5.0.0 Python/3.12 configured -- resuming normal operations
[Sun Aug 11 14:33:43.906803 2024] [core:notice] [pid 912:tid 281472884133920] AH00094: Command line: '/usr/sbin/apache2'
[Sun Aug 11 14:41:50.770637 2024] [mpm_event:notice] [pid 912:tid 281472884133920] AH00492: caught SIGWINCH, shutting down gracefully
[Sun Aug 11 14:41:54.169218 2024] [core:warn] [pid 912:tid 281472884133920] AH00045: child process 915 still did not exit, sending a SIGTERM
[Sun Aug 11 14:45:14.149261 2024] [mpm_event:notice] [pid 923:tid 281472851755040] AH00489: Apache/2.4.58 (Ubuntu) mod_wsgi/5.0.0 Python/3.12 configured -- resuming normal operations
[Sun Aug 11 14:45:14.160107 2024] [core:notice] [pid 923:tid 281472851755040] AH00094: Command line: '/usr/sbin/apache2'
</code></pre>
<p>Running it on my development computer with <code>python manage.py runserver</code> works fine.</p>
<p>Now, I have two questions: why do those brackets appear in my Category form, and why am I getting the server error? I posted both questions in the same thread because I do not know whether they are related.</p>
|
<python><django><apache2><ubuntu-server>
|
2024-08-11 13:37:02
| 1
| 507
|
Blackbriar
|
78,858,319
| 1,371,666
|
Looking for Short and better way to create widget in Tkinter
|
<p>I am using Python 3.11.9 on Windows 11 (Home edition).<br>
I create a text entry using the three lines shown in the code below:</p>
<pre><code>import tkinter as tk
class Application(tk.Tk):
def __init__(self, title:str="Bill generation", x:int=0, y:int=0, **kwargs):
tk.Tk.__init__(self)
self.default_txt_for_desc=tk.StringVar()
self.default_txt_for_desc.set("PCS")
self.txt_desc=tk.Entry(self,width=10,textvariable=self.default_txt_for_desc)
self.txt_desc.grid(column=0,row=0)
self.update_idletasks()
self.state('zoomed')
if __name__ == "__main__":
Application(title="Bill printing app").mainloop()
</code></pre>
<p>How can I do that in one line?</p>
|
<python><tkinter>
|
2024-08-11 12:25:49
| 1
| 481
|
user1371666
|
78,858,292
| 4,581,085
|
ProcessPoolExecutor fails
|
<p>I have a simple setup to test parallel execution, but it fails no matter what I've tried. I'm working in a Jupyter Notebook.</p>
<p>Here is a model example:</p>
<pre><code>from concurrent.futures import ProcessPoolExecutor
def worker(i):
print(f"TASK: {i}")
result = i * i
print(f"RESULT: {result}")
return result
futures = []
if __name__ == "__main__":
with ProcessPoolExecutor(max_workers=4) as executor:
for i in range(10):
futures.append(executor.submit(worker, i))
for i, future in enumerate(futures):
try:
result = future.result()
print(result)
except Exception as e:
print(f"Error in future {i}: {e}")
</code></pre>
<p>I have way more than 4 cores available, no memory issues, but it fails no matter what I try.</p>
<pre><code>Error in future 0: A process in the process pool was terminated abruptly while the future was running or pending.
(and so on for the other futures)
</code></pre>
<h2>Solution:</h2>
<p>Move <code>worker</code> into a Python file and import it; then it works fine.</p>
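<p>The fix works because child processes must be able to import the function they run; a function defined in a notebook cell lives in a namespace the workers cannot re-import. A minimal sketch of the working layout (run as a plain script, not in a notebook):</p>

```python
from concurrent.futures import ProcessPoolExecutor

def worker(i):
    # Defined at module top level, so child processes can locate it by
    # re-importing this module and unpickling the reference to it.
    return i * i

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as executor:
        futures = [executor.submit(worker, i) for i in range(5)]
        results = [f.result() for f in futures]
    print(results)
```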
|
<python><jupyter-notebook><concurrency><jupyter><concurrent.futures>
|
2024-08-11 12:16:23
| 1
| 985
|
Alex F
|
78,858,039
| 2,840,697
|
Implementing power function in two different ways. What's the big O difference between these two codes?
|
<ul>
<li><p>1:</p>
<pre><code>class Solution(object):
def myPow(self, x, n):
"""
:type x: float
:type n: int
:rtype: float
"""
if n < 0:
return 1.0/self.myPow(x, -n)
elif n==0:
return 1
elif n == 1:
return x
elif n%2 == 1:
return self.myPow(x, n//2) * self.myPow(x, n//2) * x
else:
return self.myPow(x, n//2) * self.myPow(x, n//2)
</code></pre>
</li>
<li><p>2:</p>
<pre><code>class Solution(object):
def myPow(self, x, n):
"""
:type x: float
:type n: int
:rtype: float
"""
if n==0:
return 1
elif n==1:
return x
elif n < 0:
return 1 / self.myPow(x, -n)
elif n%2 == 1:
return x * self.myPow(x, n-1)
else:
return self.myPow(x*x, n//2)
</code></pre>
</li>
</ul>
<p>I implemented power functions in two different ways.</p>
<p>I was expecting the time complexity to be the same, but when trying them on LeetCode, (1) timed out and failed on some examples, while (2) succeeded.</p>
<p>Is there a reason why that was the case? I thought time complexity would be the same for both.</p>
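<p>The gap becomes visible if you count recursive calls. Version 1 recurses twice per halving without reusing the result, so the call tree has about 2n nodes, i.e. O(n) calls; version 2 recurses once per step, giving O(log n) calls. A sketch instrumenting both shapes (the extra <code>calls</code> counter argument is added here purely for illustration):</p>

```python
def pow_double(x, n, calls):
    # mirrors version 1: two independent recursive calls per halving
    calls[0] += 1
    if n == 0:
        return 1
    if n == 1:
        return x
    if n % 2 == 1:
        return pow_double(x, n // 2, calls) * pow_double(x, n // 2, calls) * x
    return pow_double(x, n // 2, calls) * pow_double(x, n // 2, calls)

def pow_single(x, n, calls):
    # mirrors version 2: one recursive call per step
    calls[0] += 1
    if n == 0:
        return 1
    if n == 1:
        return x
    if n % 2 == 1:
        return x * pow_single(x, n - 1, calls)
    return pow_single(x * x, n // 2, calls)

c1, c2 = [0], [0]
assert pow_double(2, 1024, c1) == pow_single(2, 1024, c2) == 2 ** 1024
print(c1[0], c2[0])  # 2047 calls vs 11 calls
```

<p>Binding the half-power to a variable (<code>half = self.myPow(x, n//2)</code> and returning <code>half * half</code>) would make version 1 logarithmic as well.</p>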
|
<python><big-o>
|
2024-08-11 10:03:19
| 2
| 942
|
user98235
|
78,857,858
| 2,749,397
|
Where is Tkinter sourcing info about the screen DPI?
|
<p>Using X on Linux, Tkinter (Python 3.12) tells me that I have a DPI value of ~96 (wrong), while the X server tells me that my DPI value is ~185 (correct).</p>
<pre><code>In [1]: import tkinter as tk
...: root = tk.Tk()
...: print(root.winfo_fpixels('1i'))
...: root.destroy()
...:
...: xrandr_out = ! xrandr
...: xy_in = [int(mm[:-2])/25.4 for mm in xrandr_out[1][-13:].split(' x ')]
...: xy_pix = [int(pixels) for pixels in xrandr_out[2].split()[0].split('x')]
...: dpi_x, dpi_y = [pixels/dimension for pixels, dimension in zip(xy_pix, xy_in)]
...: print(dpi_x, dpi_y)
...:
96.08406304728547
185.35135135135135 185.66497461928932
In [2]:
</code></pre>
<p>(I'm sorry for the convoluted way of finding the real DPI value; you may have to adjust my code to make it work on your distribution.)</p>
<p>Where does Tkinter find the incorrect information regarding the DPI value?</p>
|
<python><tkinter><xserver>
|
2024-08-11 08:36:58
| 0
| 25,436
|
gboffi
|
78,857,955
| 2,036,464
|
Python: Find and replace html tags, with an exception?
|
<p>I have this <code>test.html</code> and I need to find all lines that start with <code><p class="text_obisnuit"></code> or <code><p class="text_obisnuit2"></code> and end with <code><br></code> instead of the closing tag <code></p></code>. I have to remove the <code><br></code> and replace it with <code></p></code>.</p>
<p>THE PROBLEM:</p>
<p>Everything works, except for this: there is a line containing only <code><br><br></code>, and this line must be excluded from the find-and-replace.</p>
<pre><code> <!-- ARTICOL START -->
<div align="justify">
<table width="552" border="0">
<tr>
<td><h1 class="den_articol" itemprop="name">Zbuciumul sufletelor tenace (II)</h1></td>
</tr>
<tr>
<td class="text_dreapta">On Aprilie 27, 2013, in <a href="https://neculaifantanaru.com/leadership-fx-intensive.html" title="Vezi toate articolele din Leadership FX-Intensive" class="external" rel="category tag">Leadership FX-Intensive</a>, by Neculai Fantanaru</td>
</tr>
</table>
<p class="text_obisnuit"><span class="text_obisnuit2">Zbuciumul sufletelor tenace </span>eviden&#355;iaz&#259; tendin&#355;a de a transforma evolu&#355;ia &icirc;n involu&#355;ie, atunci c&#226;nd zbuciumul t&#259;u existen&#355;ial pare c&#259; &icirc;ncepe s&#259; se &icirc;n&#259;spreasc&#259;. Astfel se creeaz&#259; o adev&#259;rat&#259; degringolad&#259; &icirc;n careul tr&#259;irilor &#351;i g&#226;ndurilor sale, o adev&#259;rat&#259; lupt&#259; &icirc;ntre tine &#351;i propriul t&#259;u Ego.<br>
</p>
<p class="text_obisnuit">Dac&#259; vrei s&#259; &#351;tii care este nivelul de leadership la care te afli, dac&#259; vrei s&#259; &#351;tii &icirc;n ce categorie de lideri te &icirc;ncadrezi, atunci dezleag&#259; adev&#259;ratul &icirc;n&#355;eles din spatele acelor vagi derute care &icirc;i despart pe oameni de cele dou&#259; constante: &ldquo; <em>a fi</em> &rdquo; &#351;i &ldquo; <em>a nu fi</em> &rdquo;, f&#259;r&#259; s&#259; te pierzi tu &icirc;nsu&#355;i &icirc;ntr-o mare de &icirc;ndoial&#259;.</p>
<br><br>
<p class="text_obisnuit"><span class="text_obisnuit2">* Not&#259;:</span> &ldquo; <em><a href="http://www.imdb.com/title/tt0780516/" target="_new">Flawless (2007)</a>&quot;</em></p>
</div>
<p align="justify" class="text_obisnuit style3">&nbsp;</p>
<!-- ARTICOL FINAL -->
</code></pre>
<p>Python code:</p>
<pre><code>import os
import regex as re
def process_file_content(content):
lines = content.split('\n')
processed_lines = []
for line in lines:
stripped_line = line.strip()
        # Keep lines containing <br><br> unchanged
if '<br><br>' in line:
processed_lines.append(line)
continue
        # Process only lines that contain <p> tags
if '<p class="text_obisnuit"' in line or '<p class="text_obisnuit2"' in line:
            # Replace a trailing <br> with </p>
line = re.sub(r'(<p class="text_obisnuit2?">.*?)<br>\s*$', r'\1</p>', line)
            # Strip unnecessary leading whitespace
line = re.sub(r'^\s*(<p class="text_obisnuit2?">)', r'\1', line)
            # Remove a doubled </p>
line = re.sub(r'(<p class="text_obisnuit2?">.*?</p>)\s*</p>\s*', r'\1', line)
processed_lines.append(line)
content = '\n'.join(processed_lines)
    # Add blank lines between paragraphs
content = re.sub(r'(</p>)\s*(<p class="text_obisnuit">)', r'\1\n\n\2', content)
content = re.sub(r'(</p>)\s*(<p class="text_obisnuit2">)', r'\1\n\n\2', content)
return content
def process_html_files(directory):
excluded_files = [
'aforisme-si-pareri-bine-slefuite-III.html',
'aforisme-si-pareri-bine-slefuite-II.html',
'aforisme-si-pareri-bine-slefuite.html',
'cartea-cartilor.html',
'cartea-creatiei.html',
'cartea-de-nisip.html',
'ganduri-din-colturile-memoriei-III.html',
'imagini-din-muzeul-mitropolitan-iasi.html',
'ganduri-din-colturile-memoriei-II.html',
'ganduri-din-colturile-memoriei.html'
]
for filename in os.listdir(directory):
if filename.endswith('.html') and filename not in excluded_files:
file_path = os.path.join(directory, filename)
with open(file_path, 'r', encoding='utf-8') as file:
content = file.read()
processed_content = process_file_content(content)
if content != processed_content:
                print(f"\nChanges in file: {filename}")
with open(file_path, 'w', encoding='utf-8') as file:
file.write(processed_content)
            # Check whether any lines with <br><br> remain after processing
            if '<br><br>' in processed_content:
                print(f"\n<br><br> line preserved in file: {filename}")
# Specify the directory containing the HTML files
directory = r'd:\55'
process_html_files(directory)
</code></pre>
<p>It makes all the changes correctly, except that the line containing only <code><br><br></code> is replaced with <code><p class="text_obisnuit"></p></code>. I need a condition so that the line containing only <code><br><br></code> is left out of the other find-and-replace operations.</p>
<p><strong>Current (wrong) output:</strong></p>
<pre><code>
<!-- ARTICOL START -->
<div align="justify">
<table width="552" border="0">
<tr>
<td><h1 class="den_articol" itemprop="name">Zbuciumul sufletelor tenace (II)</h1></td>
</tr>
<tr>
<td class="text_dreapta">On Aprilie 27, 2013, in <a href="https://neculaifantanaru.com/leadership-fx-intensive.html" title="Vezi toate articolele din Leadership FX-Intensive" class="external" rel="category tag">Leadership FX-Intensive</a>, by Neculai Fantanaru</td>
</tr>
</table>
<p class="text_obisnuit"><span class="text_obisnuit2">Zbuciumul sufletelor tenace </span>eviden&#355;iaz&#259; tendin&#355;a de a transforma evolu&#355;ia &icirc;n involu&#355;ie, atunci c&#226;nd zbuciumul t&#259;u existen&#355;ial pare c&#259; &icirc;ncepe s&#259; se &icirc;n&#259;spreasc&#259;. Astfel se creeaz&#259; o adev&#259;rat&#259; degringolad&#259; &icirc;n careul tr&#259;irilor &#351;i g&#226;ndurilor sale, o adev&#259;rat&#259; lupt&#259; &icirc;ntre tine &#351;i propriul t&#259;u Ego.</p>
<p class="text_obisnuit">Dac&#259; vrei s&#259; &#351;tii care este nivelul de leadership la care te afli, dac&#259; vrei s&#259; &#351;tii &icirc;n ce categorie de lideri te &icirc;ncadrezi, atunci dezleag&#259; adev&#259;ratul &icirc;n&#355;eles din spatele acelor vagi derute care &icirc;i despart pe oameni de cele dou&#259; constante: &ldquo; <em>a fi</em> &rdquo; &#351;i &ldquo; <em>a nu fi</em> &rdquo;, f&#259;r&#259; s&#259; te pierzi tu &icirc;nsu&#355;i &icirc;ntr-o mare de &icirc;ndoial&#259;.</p>
<p class="text_obisnuit"></p>
<p class="text_obisnuit"><span class="text_obisnuit2">* Not&#259;:</span> &ldquo; <em><a href="http://www.imdb.com/title/tt0780516/" target="_new">Flawless (2007)</a>&quot;</em></p>
</div>
<p align="justify" class="text_obisnuit style3">&nbsp;</p>
<!-- ARTICOL FINAL -->
</code></pre>
<p><strong>This must be the output:</strong></p>
<pre><code> <!-- ARTICOL START -->
<div align="justify">
<table width="552" border="0">
<tr>
<td><h1 class="den_articol" itemprop="name">Zbuciumul sufletelor tenace (II)</h1></td>
</tr>
<tr>
<td class="text_dreapta">On Aprilie 27, 2013, in <a href="https://neculaifantanaru.com/leadership-fx-intensive.html" title="Vezi toate articolele din Leadership FX-Intensive" class="external" rel="category tag">Leadership FX-Intensive</a>, by Neculai Fantanaru</td>
</tr>
</table>
<p class="text_obisnuit"><span class="text_obisnuit2">Zbuciumul sufletelor tenace </span>eviden&#355;iaz&#259; tendin&#355;a de a transforma evolu&#355;ia &icirc;n involu&#355;ie, atunci c&#226;nd zbuciumul t&#259;u existen&#355;ial pare c&#259; &icirc;ncepe s&#259; se &icirc;n&#259;spreasc&#259;. Astfel se creeaz&#259; o adev&#259;rat&#259; degringolad&#259; &icirc;n careul tr&#259;irilor &#351;i g&#226;ndurilor sale, o adev&#259;rat&#259; lupt&#259; &icirc;ntre tine &#351;i propriul t&#259;u Ego.</p>
<p class="text_obisnuit">Dac&#259; vrei s&#259; &#351;tii care este nivelul de leadership la care te afli, dac&#259; vrei s&#259; &#351;tii &icirc;n ce categorie de lideri te &icirc;ncadrezi, atunci dezleag&#259; adev&#259;ratul &icirc;n&#355;eles din spatele acelor vagi derute care &icirc;i despart pe oameni de cele dou&#259; constante: &ldquo; <em>a fi</em> &rdquo; &#351;i &ldquo; <em>a nu fi</em> &rdquo;, f&#259;r&#259; s&#259; te pierzi tu &icirc;nsu&#355;i &icirc;ntr-o mare de &icirc;ndoial&#259;.</p>
<br><br>
<p class="text_obisnuit"><span class="text_obisnuit2">* Not&#259;:</span> &ldquo; <em><a href="http://www.imdb.com/title/tt0780516/" target="_new">Flawless (2007)</a>&quot;</em></p>
</div>
<p align="justify" class="text_obisnuit style3">&nbsp;</p>
<!-- ARTICOL FINAL -->
</code></pre>
|
<python><python-3.x><html>
|
2024-08-11 07:57:34
| 1
| 1,065
|
Just Me
|
78,857,605
| 14,250,641
|
Filter strings with high proportion of lowercase letters
|
<p>I have quite a large df (50+ million rows) with one column containing DNA sequences (one sequence per row). Some of these sequences contain a mix of lowercase and uppercase letters. I would like my dataset to keep only sequences with 50% or more uppercase letters (i.e. remove sequences that are 50% or more lowercase).
Filtering even a small subset of my df took 2 minutes, so I am hoping to find a more efficient way that scales.</p>
<p>Example of my DF:</p>
<pre><code>label sequence
1 aaaggGtTt...
0 AAAggccCCC...
</code></pre>
<p>Here is the function I am using.</p>
<pre><code>def remove_low_complexity_seqs(sequence, threshold=0.5):
"""
Check if more than a given threshold proportion of the sequence is lowercase (low complexity).
Args:
- sequence (str): The nucleotide sequence.
- threshold (float): The proportion threshold (default is 0.5 for 50%).
Returns:
- bool: True if more than threshold proportion is lowercase, otherwise False.
"""
lowercase_count = sum(map(str.islower, sequence))
proportion = lowercase_count / (10000) #10k is the length of all seqs
return proportion > threshold
</code></pre>
<p>Code I ran:</p>
<pre><code># mask = control_seqs['sequence'].apply(lambda seq: not remove_low_complexity_seqs(seq, context)) # long runtime 115secs
# control_seqs = control_seqs[mask] # quick runtime
</code></pre>
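<p>The per-row <code>apply</code> with a Python-level lambda is the bottleneck; the pandas string methods run the counting loop in C instead. A hedged sketch on toy data (on the real data, swap in your column, and divide by the fixed length 10000 if all sequences share it):</p>

```python
import pandas as pd

df = pd.DataFrame({"label": [1, 0, 1],
                   "sequence": ["aaaggGtTt", "AAAggccCCC", "acgt"]})

# Vectorized lowercase fraction: no Python-level loop over rows.
lower_frac = df["sequence"].str.count(r"[a-z]") / df["sequence"].str.len()
kept = df[lower_frac < 0.5]        # keep sequences under 50% lowercase
print(kept["sequence"].tolist())   # ['AAAggccCCC']
```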
|
<python><pandas><dataframe><optimization><bioinformatics>
|
2024-08-11 06:00:57
| 5
| 514
|
youtube
|
78,857,511
| 1,838,076
|
Using with block vs processing the file in one line
|
<p>In Python, while reading the file's content, it is recommended to use the <code>with</code> block.</p>
<p>Something like below</p>
<pre><code>with open(file, 'r', encoding='utf-8') as f:
content = f.read().splitlines()
</code></pre>
<p>However, if I am not processing much, I can do something like below in one line, which looks more compact.</p>
<pre><code>content = open(file, 'r', encoding='utf-8').read().splitlines()
</code></pre>
<p>Or call a function that does the processing.</p>
<pre><code>isTheFileGood = func(open(file, 'r', encoding='utf-8'))
</code></pre>
<p>over</p>
<pre><code>with open(file, 'r', encoding='utf-8') as f:
isTheFileGood = func(f)
</code></pre>
<p>Is there any advantage of one over another? Say in terms of garbage collection etc.?</p>
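<p>Functionally, the <code>with</code> block is shorthand for a try/finally around <code>close()</code>; the one-liner instead relies on the garbage collector to close the handle at some unspecified later point (CPython usually does it promptly via reference counting, but that is an implementation detail, and you also get a <code>ResourceWarning</code> under <code>-W error</code>). Written out by hand:</p>

```python
import os
import tempfile

# throwaway file just for the demo
fd, path = tempfile.mkstemp()
os.write(fd, b"line1\nline2\n")
os.close(fd)

# what `with open(...) as f:` expands to, roughly:
f = open(path, "r", encoding="utf-8")
try:
    content = f.read().splitlines()
finally:
    f.close()  # guaranteed to run, even if read() raises

assert f.closed and content == ["line1", "line2"]
os.remove(path)
```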
|
<python><file-io><with-statement>
|
2024-08-11 04:39:15
| 2
| 1,622
|
Krishna
|
78,857,434
| 801,902
|
How to filter a query in Django for week_day using a specific timezone?
|
<p>I am trying to filter data by the day of the week. That's easy enough with the week_day filter. The problem is, since all dates are stored as UTC, that is how it's filtered. How can I localize a query?</p>
<p>For instance, trying to count all the records that have been created on specific days, I use this:</p>
<pre class="lang-py prettyprint-override"><code>chart_data = [
data.filter(created__week_day=1).count(), # Sunday
data.filter(created__week_day=2).count(), # Monday
data.filter(created__week_day=3).count(), # Tuesday
data.filter(created__week_day=4).count(), # Wednesday
data.filter(created__week_day=5).count(), # Thursday
data.filter(created__week_day=6).count(), # Friday
data.filter(created__week_day=7).count(), # Saturday
]
</code></pre>
<p>How can I get these queries to count those records localized to the 'America/New_York' timezone, for instance?</p>
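<p>Django's <code>Extract</code> functions, including <code>ExtractWeekDay</code>, accept a <code>tzinfo</code> argument, which is likely the cleanest route, e.g. <code>data.annotate(wd=ExtractWeekDay('created', tzinfo=ZoneInfo('America/New_York'))).filter(wd=1)</code> (verify against your Django version's docs). The underlying issue is easy to demonstrate with the standard library alone: a timestamp can fall on different weekdays in UTC and in New York:</p>

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# 02:00 UTC on Monday 2024-08-12 ...
dt_utc = datetime(2024, 8, 12, 2, 0, tzinfo=timezone.utc)
local = dt_utc.astimezone(ZoneInfo("America/New_York"))

# ... is still Sunday evening in New York (UTC-4 in August)
print(dt_utc.strftime("%A"), local.strftime("%A"))  # Monday Sunday
```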
|
<python><django>
|
2024-08-11 03:13:35
| 1
| 1,452
|
PoDuck
|
78,857,171
| 19,366,064
|
Pylance missing imports with DevContainers
|
<p>I have the following folder structure</p>
<pre><code>C:.
│ docker-compose.yml
│ Dockerfile
│
├───.devcontainer
│ devcontainer.json
│
└───app
__init__.py
main.py
sub.py
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.9
</code></pre>
<p>docker-compose.yml</p>
<pre><code>version: '3.8'
services:
app:
build:
context: .
dockerfile: Dockerfile
container_name: app
volumes:
- .:/workspace
command: sleep infinity
</code></pre>
<p>devcontainer.json</p>
<pre><code>{
"name": "app",
"dockerComposeFile": "../docker-compose.yml",
"service": "app",
"workspaceFolder": "/workspace",
"postCreateCommand": "",
"customizations": {
"vscode": {
"extensions": [
"ms-python.python",
"charliermarsh.ruff"
],
"settings": {
"python.analysis.autoImportCompletions": true,
"python.analysis.typeCheckingMode": "basic"
}
}
}
}
</code></pre>
<p>Every time I create a new file such as sub.py and try to import it into main.py</p>
<p>main.py</p>
<pre><code>from app.sub import abc
</code></pre>
<p>I will get this error message</p>
<pre><code>Import "app.sub" could not be resolvedPylance[reportMissingImports](https://github.com/microsoft/pyright/blob/main/docs/configuration.md#reportMissingImports)
(module) app
</code></pre>
<p>This resolves itself if I reload the window, rebuild the container, or pip install new libraries.</p>
<p>Does anyone know how I can resolve this?</p>
|
<python><visual-studio-code><pylance><devcontainer>
|
2024-08-10 22:53:00
| 1
| 544
|
Michael Xia
|
78,857,126
| 2,727,167
|
Extract lines from the image with text
|
<p>I have the following image:</p>
<p><img src="https://i.sstatic.net/yrpSn7q0.png" alt="enter image description here" /></p>
<p>I want to extract only the lines from it, without any of the text.</p>
<p>What would be the best practice to do this?</p>
<p>I tried the cv2 Python library and HoughLinesP with the following code:</p>
<pre><code>img = cv2.imread('/Users/tekija/Documents/image.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, img_thr = cv2.threshold(img_gray, 120, 255, cv2.THRESH_BINARY)  # threshold() returns (retval, image)
lines = cv2.HoughLinesP(
img_thr, rho=1, theta=np.pi / 180, threshold=128, minLineLength=600, maxLineGap=30)
lines = lines.squeeze()
</code></pre>
<p>but results are:</p>
<p><a href="https://i.sstatic.net/8M6OU02T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8M6OU02T.png" alt="enter image description here" /></a></p>
|
<python><opencv>
|
2024-08-10 22:11:41
| 2
| 450
|
user2727167
|
78,856,785
| 10,499,034
|
How to properly scale cmapcolor
|
<p>Below is a fully functioning example of my problem. The color that the cmap assigns based on the values I give it does not match the color shown for those values on the colorbar. For example, the fragment with distance=1.75 is yellow, when according to the colorbar it should be light purple. How can I assign the colors to the bars properly?</p>
<pre><code>#Create some fake example data and store it in a dataframe
exampledf=pd.DataFrame()
import random as rd
tempname='Template'
queryname='Query'
templatenamelst=[]
querynamelst=[]
startlst=[]
endlst=[]
querystartslst=[]
queryendslst=[]
distancelst=[]
for i in range(20):
start=i*5
end=(i*5)+25
distance=rd.uniform(0, 4)
querystart=rd.randint(0, 95)
queryend=querystart+25
templatenamelst.append(tempname)
querynamelst.append(queryname)
startlst.append(start)
endlst.append(end)
distancelst.append(distance)
querystartslst.append(querystart)
queryendslst.append(queryend)
exampledf['TempName']=templatenamelst
exampledf['QueryName']=querynamelst
exampledf['TempStarts']=startlst
exampledf['TempEnds']=endlst
exampledf['QueryStarts']=querystartslst
exampledf['QueryEnds']=queryendslst
exampledf['Distances']=distancelst
display(exampledf)
#Build the plot from the example data
import random as rd
import numpy as np
import pandas as pd
import warnings
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
warnings.filterwarnings('ignore')
TempName=exampledf.iloc[0]['TempName']
QueryName=exampledf.iloc[0]['QueryName']
NumSeqFrags=len(exampledf)
TempLength=max(list(exampledf.TempEnds))+2
Distanceslst=list(exampledf.Distances)
XMax=max(list(exampledf.TempEnds))
cmapz = cm.get_cmap('plasma')
img = plt.imshow(np.array([[0,max(Distanceslst)]]), cmap="plasma")
img.set_visible(False)
barheight=6
plt.barh(width=TempLength, left=0, height=barheight, y=0,color='Black')
ticklabels=[TempName]
ticklocations=[0]
#Add the fragments
for i in range (len(exampledf)):
cmapcolor=cmapz(Distanceslst[i])
width=int(exampledf.iloc[i]['TempEnds'])-int(exampledf.iloc[i]['TempStarts'])
start=int(exampledf.iloc[i]['TempStarts'])
yloc=(barheight+4)*i+barheight
plt.barh(width=width, left=start, height=barheight, y=yloc,color=cmapcolor)
fullname=str(exampledf.iloc[i]['QueryName'])+'('+str(exampledf.iloc[i]['QueryStarts'])+':'+str(exampledf.iloc[i]['QueryEnds'])+')'+'(Distance='+str(round(Distanceslst[i],2))+')'
ticklabels.append(fullname)
ticklocations.append(yloc)
plt.yticks(ticks=ticklocations, labels=ticklabels,fontsize=8)
plt.colorbar(orientation="horizontal",fraction=0.05,label='Distance')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/1xnkTP3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1xnkTP3L.png" alt="enter image description here" /></a></p>
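<p>Colormap callables such as <code>cm.plasma</code> map the unit interval [0, 1] to colors and clip anything outside it, so a raw distance of 1.75 is clipped to 1.0 and comes out yellow. Normalizing the distances first fixes the mismatch; matplotlib's <code>matplotlib.colors.Normalize(vmin, vmax)</code> performs exactly this linear rescale (sketched here in pure Python), after which <code>cmapz(norm(distance))</code> agrees with the colorbar:</p>

```python
# Pure-Python equivalent of matplotlib.colors.Normalize(vmin, vmax):
def normalize(value, vmin, vmax):
    # linearly rescale [vmin, vmax] onto [0, 1], which is the input
    # range a colormap callable expects
    return (value - vmin) / (vmax - vmin)

# distance 1.75 on a 0..4 colorbar sits at 0.4375 of the colormap,
# rather than being clipped to 1.0 (yellow)
print(normalize(1.75, 0.0, 4.0))  # 0.4375
```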
|
<python><matplotlib>
|
2024-08-10 18:38:18
| 1
| 792
|
Jamie
|
78,856,532
| 8,648,222
|
How to make huggingface transformer for translation return n translation inferences?
|
<p>So I am trying to use this transformer from huggingface <a href="https://huggingface.co/docs/transformers/en/tasks/translation" rel="nofollow noreferrer">https://huggingface.co/docs/transformers/en/tasks/translation</a>. The issue is that I want n translations returned, not just one. How can I do that? I want the translations ordered, meaning the translation at index 0 has the highest confidence. This is important for my use case, which is translating natural language to a command language (about 40 commands without subcommands).</p>
<p>The github repo and exact model is this one <a href="https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/hf_model.py" rel="nofollow noreferrer">https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/models/hf_model.py</a></p>
<p>This is the HuggingFace API:</p>
<pre><code>translator = pipeline("translation_xx_to_yy", model="my_awesome_opus_books_model")
translator(text)
</code></pre>
<p>But I am intending to use the model directly from the google search github repo, so it seems some tweaking should be done here:</p>
<pre><code>predictions = []
for batch in dataset:
predicted_tokens = self._model.generate(
input_ids=self.to_tensor(batch["inputs"]), **generate_kwargs
)
predicted_tokens = predicted_tokens.cpu().numpy().tolist()
predictions.extend(
[vocabs["targets"].decode(p) for p in predicted_tokens]
)
for inp, pred in zip(inputs, predictions):
logging.info("%s\n -> %s", inp, pred)
if output_file is not None:
utils.write_lines_to_file(predictions, output_file)
</code></pre>
<p>Also any suggestion on some other model option to solve this natural language to cmd is welcomed!</p>
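<p>With beam search, Hugging Face's <code>generate()</code> can return several ranked candidates: set <code>num_beams >= n</code> and <code>num_return_sequences = n</code>, and the returned sequences are ordered best-first by beam score, so index 0 is the highest-confidence translation. A sketch of the kwargs (an assumption to verify against your transformers version; the pipeline API forwards the same names):</p>

```python
# generate_kwargs for an n-best translation; num_return_sequences must not
# exceed num_beams, otherwise beam-search generate() raises a ValueError.
n_best = 3
generate_kwargs = {
    "num_beams": 5,
    "num_return_sequences": n_best,
    "early_stopping": True,
}
assert generate_kwargs["num_return_sequences"] <= generate_kwargs["num_beams"]
```

<p>In the loop above, <code>predicted_tokens</code> would then hold <code>batch_size * n_best</code> rows, with consecutive groups of <code>n_best</code> belonging to the same input, best first.</p>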
|
<python><huggingface-transformers><transformer-model>
|
2024-08-10 16:37:03
| 1
| 825
|
v_head
|
78,856,511
| 3,442,683
|
TimeoutError with python-chess when trying to initialize Stockfish on macOS
|
<p>I'm trying to use the python-chess library to interface with Stockfish on macOS, but I'm encountering a TimeoutError during the initialization process. Here's a simplified version of my code:</p>
<pre><code>import chess.engine
# Set your Stockfish path here
engine_path = '/Applications/Stockfish.app/Contents/MacOS/Stockfish'
if __name__ == '__main__':
engine = chess.engine.SimpleEngine.popen_uci(engine_path)
</code></pre>
<p>When I run this script, I get the following error:</p>
<pre><code><UciProtocol (pid=20778)>: stderr >> 2024-08-10 11:19:05.137 Stockfish[20778:931410] WARNING: Secure coding is automatically enabled for restorable state! However, not on all supported macOS versions of this application. Opt-in to secure coding explicitly by implementing NSApplicationDelegate.applicationSupportsSecureRestorableState:.
<UciProtocol (pid=20778)>: stderr >> 2024-08-10 11:19:05.139 Stockfish[20778:931410] Started app
/Users/username/.venv/lib/python3.9/site-packages/chess/engine.py:154: RuntimeWarning: A loop is being detached from a child watcher with pending handlers
warnings.warn("A loop is being detached from a child watcher with pending handlers", RuntimeWarning)
asyncio.exceptions.TimeoutError
</code></pre>
<p>I've tried the following to troubleshoot the issue:</p>
<ul>
<li>Verified the Stockfish path is correct.</li>
<li>Increased the timeout value for popen_uci.</li>
<li>Updated the python-chess library.</li>
<li>Ensured that my macOS and Python versions are up to date.</li>
</ul>
<p>Despite these efforts, the error persists. The asyncio.exceptions.TimeoutError suggests that the script is timing out while waiting for Stockfish to initialize.</p>
<p>Has anyone experienced a similar issue on macOS? Are there any specific configurations or additional steps I should take to resolve this?</p>
<p>Edit: I'm using version 1.10.0 of the Python chess library and version 3.9 of Python.</p>
|
<python><macos><python-chess><uci><stockfish>
|
2024-08-10 16:26:12
| 1
| 534
|
klooth
|
78,856,452
| 558,639
|
Is [a,b][a>b] equivalent to min(b, a)?
|
<p>I came upon a bit of third party Python code that read:</p>
<pre><code>count = [remaining, readlen][remaining > readlen]
</code></pre>
<p>After staring at it for a bit, I have to ask: are there any cases where this construct is NOT equivalent to:</p>
<pre><code>count = min(readlen, remaining)
</code></pre>
<p>i.e., are there any functional differences between the two?</p>
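<p>For ordinary, totally ordered values the two agree. They can diverge when comparisons misbehave, the classic case being <code>float('nan')</code>, where every comparison is False:</p>

```python
import math

remaining, readlen = 3, 7
# normal case: identical results
assert [remaining, readlen][remaining > readlen] == min(readlen, remaining) == 3

# NaN case: every comparison involving NaN is False, so the indexing trick
# yields `remaining` (the NaN), while min() keeps its first argument
remaining = float("nan")
assert math.isnan([remaining, readlen][remaining > readlen])
assert min(readlen, remaining) == 7
```

<p>So for floats that may be NaN the two are not interchangeable; for ints they are, and <code>min()</code> is the more readable choice.</p>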
|
<python>
|
2024-08-10 16:02:57
| 2
| 35,607
|
fearless_fool
|
78,856,105
| 2,268,543
|
Unable to Extract Text from Image Using Tesseract OCR - How to Preprocess Instagram Reels Frames
|
<p>I am working on a project where I need to extract text from frames of an Instagram Reels video. I used <code>yt-dlp</code> to download the video, extracted frames using <code>ffmpeg</code>, and attempted to read the text from the frames using Tesseract OCR.</p>
<p>However, I'm unable to extract text from the frames. Below is the code snippet I'm using:</p>
<pre><code>from PIL import Image
import pytesseract
import os
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
image_path = r"Insta Reels\frame_0077.png"
try:
if not os.path.exists(image_path):
raise FileNotFoundError(f"The file {image_path} does not exist.")
image = Image.open(image_path)
text = pytesseract.image_to_string(image, lang='eng')
if text.strip():
print("Extracted text:")
print(text)
else:
print("No text was extracted from the image.")
except FileNotFoundError as e:
print(f"Error: {e}")
except Exception as e:
print(f"An error occurred: {e}")
</code></pre>
<p>The problem is that the extracted text is either incomplete or not detected at all.</p>
<p>What preprocessing steps should I apply to these frames to improve the accuracy of Tesseract OCR?</p>
<p><strong>Edit:</strong></p>
<pre><code>import cv2
import pytesseract
# Mention the installed location of Tesseract-OCR in your system
pytesseract.pytesseract.tesseract_cmd = r"C:\Program Files\Tesseract-OCR\tesseract.exe"
image_path = r"Insta Reels\Video by mydarlingfood\frame_0077.png"
# Read image from which text needs to be extracted
img = cv2.imread(image_path)
# Preprocessing the image starts
# Convert the image to gray scale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Performing OTSU threshold
ret, thresh1 = cv2.threshold(gray, 0, 255, cv2.THRESH_OTSU | cv2.THRESH_BINARY_INV)
# Specify structure shape and kernel size.
rect_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (18, 18))
# Applying dilation on the threshold image
dilation = cv2.dilate(thresh1, rect_kernel, iterations=1)
# Finding contours
contours, hierarchy = cv2.findContours(dilation, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Creating a copy of image
im2 = img.copy()
# Looping through the identified contours
for cnt in contours:
x, y, w, h = cv2.boundingRect(cnt)
# Drawing a rectangle on copied image
rect = cv2.rectangle(im2, (x, y), (x + w, y + h), (0, 255, 0), 2)
# Cropping the text block for giving input to OCR
cropped = im2[y:y + h, x:x + w]
# Display the cropped image
cv2.imshow("Cropped Image", cropped)
cv2.waitKey(0) # Press any key to continue to the next cropped image
cv2.destroyAllWindows()
</code></pre>
<h1>original image</h1>
<p><a href="https://i.sstatic.net/Hlxuw2eO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Hlxuw2eO.png" alt="Image that i am trying to read" /></a></p>
<h1>after i tried to crop the image</h1>
<p><a href="https://i.sstatic.net/msNaR8Ds.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/msNaR8Ds.png" alt="enter image description here" /></a></p>
<p>It seems it is not able to handle text in different colors.</p>
|
<python><ocr><tesseract><python-tesseract><image-preprocessing>
|
2024-08-10 13:13:49
| 0
| 2,519
|
Rasik
|
78,856,001
| 16,320,430
|
How to combine two columns into `{key:value}` pairs in polars?
|
<p>I'm working with a <code>Polars DataFrame</code>, and I want to combine two columns into a dictionary format, where the values from one column become the keys and the values from the other column become the corresponding values.</p>
<p>Here's an example DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
"name": ["Chuck", "John", "Alice"],
"surname": ["Dalliston", "Doe", "Smith"]
})
</code></pre>
<p>I want to transform this DataFrame into a new column that contains dictionaries, where name is the key and surname is the value. The expected outcome should look like this:</p>
<pre><code>shape: (3, 3)
┌───────┬─────────┬──────────────────────────┐
│ name │ surname │ name_surname │
│ --- │ --- │ --- │
│ str │ str │ dict[str, str] │
├───────┼─────────┼──────────────────────────┤
│ Chuck │ Dalliston│ {"Chuck": "Dalliston"} │
│ John │ Doe │ {"John": "Doe"} │
│ Alice │ Smith │ {"Alice": "Smith"} │
└───────┴─────────┴──────────────────────────┘
</code></pre>
<p>I've tried the following code:</p>
<pre><code>df.with_columns(
json = pl.struct("name", "surname").map_elements(json.dumps)
)
</code></pre>
<p>But the result is not as expected. Instead of creating a dictionary with the name as key and the surname as value, it produces:</p>
<pre class="lang-json prettyprint-override"><code>{name:Chuck,surname:Dalliston}
</code></pre>
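<p>That happens because the struct serializes its own field names as the keys. To get value-as-key, build the dict inside the lambda before dumping; the per-row transform is plain Python, which polars' <code>map_elements</code> simply applies to each struct row (e.g. <code>pl.struct('name', 'surname').map_elements(lambda s: json.dumps({s['name']: s['surname']}))</code>). The difference, sketched without polars:</p>

```python
import json

row = {"name": "Chuck", "surname": "Dalliston"}  # one struct row, as a dict

# dumping the struct directly keeps the field names as keys:
assert json.dumps(row) == '{"name": "Chuck", "surname": "Dalliston"}'

# building the mapping first gives value-as-key, as in the question:
assert json.dumps({row["name"]: row["surname"]}) == '{"Chuck": "Dalliston"}'
```

<p>Note that polars has no dict dtype, so the resulting column will be a JSON string (or a struct), not an actual Python dict.</p>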
|
<python><dataframe><data-cleaning><python-polars>
|
2024-08-10 12:24:13
| 2
| 435
|
Dante
|
78,855,989
| 5,893,683
|
How to do inter-process communication with pathos/ppft?
|
<p>I'm using the <a href="https://pathos.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>pathos</code></a> framework to do tasks concurrently, in different processes. Under the hood, this is done with <a href="https://ppft.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>ppft</code></a>, which is part of <code>pathos</code>. My current approach uses a <code>pathos.multiprocessing.ProcessPool</code> instance. Multiple jobs are submitted to it, one at a time without blocking, using <code>apipe()</code>. The main process is then supervising the worker processes using <code>ready()</code> and <code>get()</code>.</p>
<p>This is working fine, but results of the worker processes are only received after they finish (the process has ended). However, I need a way to get intermediate results from a worker process. I can't find anything clear about this in the <code>pathos</code>/<code>ppft</code> docs, but from hints scattered through them it seems this should be possible. How do you do inter-process communication with <code>pathos</code>/<code>ppft</code> in combination with a <code>ProcessPool</code>?</p>
<p>The following demo code illustrates my approach. How could I send intermediate results to the main process?. For example, report the list of primes found so far, every time its length is a multiple of 100?</p>
<pre><code>#!/usr/bin/env python3
import pathos
def worker_func(limit):
"""
    Dummy task: Find the list of all prime numbers smaller than "limit".
"""
return limit, [
num for num in range(2, limit) \
if all(num % i != 0 for i in range(2, num))
]
pool = pathos.pools.ProcessPool()
jobs = []
jobs.append(pool.apipe(worker_func, 10000))
jobs.append(pool.apipe(worker_func, 15000))
jobs.append(pool.apipe(worker_func, 20000))
count_done_jobs = 0
while count_done_jobs < len(jobs):
for job_idx, job in enumerate(jobs):
if job is not None and job.ready():
limit, primes = job.get()
jobs[job_idx] = None
count_done_jobs += 1
print("Job {}: There are {} primes smaller than {}." \
.format(job_idx, len(primes), limit))
</code></pre>
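<p>A sketch of one way to do this with the stdlib <code>multiprocessing</code> machinery that <code>pathos</code> wraps (an assumption, not pathos-specific API: it uses the POSIX <code>"fork"</code> start method so it can run without a <code>__main__</code> guard; the same <code>Manager</code> queue can be passed as an argument through <code>pool.apipe()</code>). The worker pushes progress tuples onto a shared queue that the main process can drain at any time:</p>

```python
import multiprocessing as mp

def worker_func(limit, progress_q):
    """Find primes below "limit", reporting progress every 100 primes."""
    primes = []
    for num in range(2, limit):
        if all(num % i != 0 for i in range(2, num)):
            primes.append(num)
            if len(primes) % 100 == 0:
                progress_q.put((limit, len(primes)))  # intermediate result

    return limit, primes

# Assumption: POSIX "fork" start method; with "spawn" (Windows / macOS
# default) this must be wrapped in an `if __name__ == "__main__":` guard.
ctx = mp.get_context("fork")
with ctx.Manager() as manager:
    progress_q = manager.Queue()
    with ctx.Pool(1) as pool:
        job = pool.apply_async(worker_func, (1000, progress_q))
        limit, primes = job.get()
    # Drain any progress reports the worker sent along the way.
    updates = []
    while not progress_q.empty():
        updates.append(progress_q.get())
```

<p>In the main supervision loop, the queue can be polled between <code>ready()</code> checks instead of being drained only at the end.</p>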
|
<python><ipc><dill><pathos><ppft>
|
2024-08-10 12:19:20
| 1
| 572
|
onetyone
|
78,855,921
| 1,711,146
|
How to async instantiate and close a shared aiohttp session in Azure Functions in Python?
|
<p>I need to use an HTTP client in my Azure Function. I can use aiohttp like this (this is a minimalistic example from the MSFT documentation).</p>
<pre><code>import aiohttp
import azure.functions as func
async def main(req: func.HttpRequest) -> func.HttpResponse:
async with aiohttp.ClientSession() as client:
async with client.get("URL") as response:
return func.HttpResponse(await response.text())
return func.HttpResponse(body='NotFound', status_code=404)
</code></pre>
<p>However, this creates a new session object for each request, which should be avoided, as per the <a href="https://docs.aiohttp.org/en/stable/http_request_lifecycle.html#how-to-use-the-clientsession" rel="nofollow noreferrer">aiohttp documentation</a>, as it's an expensive operation.</p>
<p>My question is - how can I instantiate a session object once and share it between function calls? From the <code>ClientSession</code> code, it looks like <code>__aenter__</code> doesn't do anything, so I could instantiate the session just by calling <code>session = aiohttp.ClientSession()</code>. However, where would I call <code>__aexit__</code>?</p>
<p>Or is there even a better way to instantiate and clean up global objects in Azure Functions that require async instantiation and clean-up?</p>
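<p>Not aiohttp-specific, but a minimal sketch of the usual lazy-singleton pattern: create the expensive object on first use and cache it at module level, so warm invocations reuse it. A plain <code>object()</code> stands in for <code>aiohttp.ClientSession()</code> here; in a real function app you would also arrange to close the session on shutdown (e.g. via <code>atexit</code>):</p>

```python
import asyncio

_session = None
_session_lock = asyncio.Lock()

async def get_session():
    # Lazily create the shared resource once; subsequent calls reuse it.
    global _session
    async with _session_lock:
        if _session is None:
            _session = object()  # stand-in for aiohttp.ClientSession()
    return _session

async def main():
    first = await get_session()
    second = await get_session()
    return first is second

reused = asyncio.run(main())
```

<p>The lock guards against two concurrent first invocations each creating a session.</p>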
|
<python><asynchronous><azure-functions><aiohttp>
|
2024-08-10 11:46:52
| 2
| 2,680
|
Konstantin
|
78,855,919
| 1,092,084
|
Accessing original object (one being serialized / validated) from validators, computed_field(), etc
|
<p>I use pydantic in a FastAPI project to serialize SQLAlchemy entity objects, and I need to access the original entity object in <code>@computed_field</code> methods and validators in order to access data from relationships (which are not supposed to be themselves serialized).</p>
<p>This is what I tried so far:</p>
<pre class="lang-py prettyprint-override"><code> def __init__(__pydantic_self__, **kwargs) -> None:
super().__init__(**kwargs)
print(kwargs) # not called at all
@model_validator(mode="before")
@classmethod
def validate_model(cls, values):
# 'values' contains the desired entity object, but there is no access to the model instance,
# so I can't save it for future use
@computed_field
@property
def extra_prop(self) -> bool:
# no access to the entity object
def model_post_init(self, ctx):
print(ctx) # always None
</code></pre>
<p>Is there any supported way to do this, or a workaround known to work?</p>
|
<python><pydantic><pydantic-v2>
|
2024-08-10 11:45:27
| 1
| 340
|
Thorn
|
78,855,322
| 420,947
|
Can the Python `protoc` compiler module from `grpc_tools` automatically include proto files from site-packages?
|
<p>In Python, the gRPC project proto files are shipped in smaller packages. For example, I want to use <code>status.proto</code>, which is in the <code>grpcio-status</code> package.</p>
<p>The <code>protoc</code> compiler is provided as a Python module by the <code>grpcio-tools</code> package.</p>
<pre class="lang-protobuf prettyprint-override"><code># my.proto
syntax = "proto3";
import "google/rpc/status.proto";
</code></pre>
<p>Installing everything in a virtualenv:</p>
<pre class="lang-bash prettyprint-override"><code>python -m venv v
source v/bin/activate
pip install grpcio-status grpcio-tools
</code></pre>
<p>The compiler module doesn't automatically find and use the proto files installed by the <code>grpcio-status</code> package.</p>
<p><code>python -m grpc_tools.protoc -I . my.proto</code></p>
<p>Results in:</p>
<p><code>google/rpc/status.proto: File not found.</code></p>
<p>(This file is available at <code>v/lib/python3.11/site-packages/google/rpc/status.proto</code> and was installed by <code>googleapis-common-protos</code>, a dependency of <code>grpcio-status</code>.)</p>
<p>This surprises me, because the proto files are distributed in separate packages, and the <code>protoc</code> compiler is itself a Python module, and so the entire arrangement for "provided" proto files seems easy and designed to be configured within Python by <code>grpc_tools</code> itself. It seems I must be doing something wrong.</p>
<p>Do I really need to <em>explicitly</em> tell the compiler module about Python's <code>site-packages</code>?</p>
<pre><code>SITE_PROTO_FILES=$( python -c 'import site; print(site.getsitepackages()[0])' )
# This compiles.
python -m grpc_tools.protoc -I $SITE_PROTO_FILES -I . my.proto
</code></pre>
<p>I have searched the gRPC documentation, Google Group GitHub Issue tracker, and read <code>python -m grpc_tools.protoc --help</code>, but have not found anything useful about this.</p>
|
<python><grpc><grpc-python>
|
2024-08-10 05:50:58
| 1
| 1,663
|
Richard Michael
|
78,855,271
| 10,669,327
|
how to inspect locally running BLE GATT server on Ubuntu Linux using D-bus?
|
<p>I am using the following sample code for testing purposes:
<a href="https://github.com/bluez/bluez/blob/master/test/example-gatt-server" rel="nofollow noreferrer">example-gatt-server</a></p>
<p>Example:</p>
<p>In this code we have registered the service HeartRateService, which has multiple characteristics: HeartRateMeasurementChrc, BodySensorLocationChrc, and HeartRateControlPointChrc.</p>
<p><strong>Service:</strong></p>
<p>HeartRateService:</p>
<pre><code>UUID = '0000180d-0000-1000-8000-00805f9b34fb'
</code></pre>
<p><strong>Characteristic:</strong></p>
<p>HeartRateMeasurementChrc:</p>
<pre><code>UUID = '00002a37-0000-1000-8000-00805f9b34fb'
</code></pre>
<p>BodySensorLocationChrc:</p>
<pre><code>UUID = '00002a38-0000-1000-8000-00805f9b34fb'
</code></pre>
<p>HeartRateControlPointChrc:</p>
<pre><code>UUID = '00002a39-0000-1000-8000-00805f9b34fb'
</code></pre>
<p><strong>Is there a way I can inspect the locally registered BLE GATT (server) service and characteristics on D-Bus using the <code>busctl introspect</code> command?</strong> The same as for other D-Bus objects; when I tried to check these custom GATT service and characteristics on D-Bus, I did not find them.</p>
<p>In this example the <strong>D-Bus object path</strong> used is '<strong>/org/bluez/example/service</strong>'</p>
<p><strong>Note:</strong> I am able to query these custom created Service and Characteristics over Mobile Phone</p>
<pre><code>bluez/test$ sudo python3.10 example-gatt-server
Registering GATT application...
GetManagedObjects
GATT application registered
bluez/test$ sudo busctl tree org.bluez
└─/org
└─/org/bluez
└─/org/bluez/hci0
└─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37
├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0006
│ └─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0006/char0007
│ └─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0006/char0007/desc0009
├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000a
│ ├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000a/char000b
│ └─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000a/char000d
├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000f
│ └─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000f/char0010
│ ├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000f/char0010/desc0012
│ └─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service000f/char0010/desc0013
└─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0014
└─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0014/char0015
├─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0014/char0015/desc0017
└─/org/bluez/hci0/dev_DC_B5_4F_44_A8_37/service0014/char0015/desc0018
</code></pre>
|
<python><linux><bluetooth-lowenergy><dbus><pybluez>
|
2024-08-10 05:04:15
| 0
| 701
|
raj123
|
78,855,135
| 4,803,413
|
sentence-transformers: combined parallelization for custom chunking function and encode_multi_process()
|
<p>I am working in Python 3.10, using a sentence-transformers model to encode/embed a list of text strings. I want to use sentence-transformer's <code>encode_multi_process</code> method to exploit my GPU. This is a very specific function that takes in a string, or a list of strings, and produces a numeric vector (or list of vectors). The function distributes the work among system CPUs and GPUs.</p>
<p>I also want to parallelize my custom chunking function <code>create_chunks</code>, which splits a raw text string into chunks that are small enough to fit into the model's constraints. So, for any given text input, it has to go through <code>create_chunks</code> before going through <code>encode_multi_process</code>. I'm pretty sure that using multiple CPU cores to parallelize this step is the way to go.</p>
<p>Right now I am considering using <code>multiprocessing</code> to apply <code>create_chunks</code> to my dataset, and then <code>encode_multi_process</code>, but this seems inefficient: the chunks that come out of <code>create_chunks</code> have to wait until the whole dataset is finished before moving on to <code>encode_multi_process</code>. Are there more efficient Python alternatives? I have to build my solution around <code>encode_multi_process</code>, which is the main difficulty.</p>
<p>I wish I could use Dask, but the language model is too big to fit into a Dask task graph.</p>
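<p>A minimal sketch of the streaming idea with <code>concurrent.futures</code> (threads here purely for illustration — for a CPU-bound chunker you would use <code>ProcessPoolExecutor</code>; <code>create_chunks</code> and <code>encode</code> below are hypothetical stand-ins for the real chunker and for <code>encode_multi_process</code>): each document's chunks are handed to the encoder as soon as that document finishes chunking, instead of waiting for the whole dataset.</p>

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def create_chunks(text):
    # hypothetical stand-in for the real chunking function
    return [text[i:i + 4] for i in range(0, len(text), 4)]

def encode(chunks):
    # stand-in for model.encode_multi_process(chunks, pool)
    return [len(c) for c in chunks]

texts = ["abcdefgh", "ijklmnopqr"]
results = []
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(create_chunks, t) for t in texts]
    for fut in as_completed(futures):
        # encode each document as soon as its chunks are ready
        results.append(encode(fut.result()))
```

<p>The same overlap could also be expressed with a producer/consumer queue between a chunking pool and the encoder.</p>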
|
<python><machine-learning><python-multiprocessing><transformer-model><sentence-transformers>
|
2024-08-10 03:16:07
| 1
| 431
|
Anshu Chen
|
78,855,105
| 1,608,276
|
Python3 Type Safe partial_apply
|
<p>I'm trying to implement a helper function which partially applies arguments to a specific function and returns a new function:</p>
<pre><code>from typing import Any, Callable, TypeVar
from typing_extensions import ParamSpec
P = ParamSpec('P')
R = TypeVar('R')
def partial_apply(fn: Callable[P, R], *args: subset_of[P.args], **kwargs: subset_of[P.kwargs]) -> Callable[remainder_of[P], R]:
def wrapper() -> R:
return fn(*args, **kwargs)
return wrapper
def add(a: int, b: int) -> int:
return a + b
add_1 = partial_apply(add, 1)
add_1(2) # 3
</code></pre>
<p>But <code>subset_of</code> and <code>remainder_of</code> is obviously not supported here, so is there any solution?</p>
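<p>This doesn't answer the general question — current <code>ParamSpec</code> cannot express "consume some of P and keep the remainder" — but as a sketch, <code>functools.partial</code> performs exactly this partial application at runtime, and some type checkers special-case it:</p>

```python
import functools

def add(a: int, b: int) -> int:
    return a + b

# functools.partial binds a=1; the result still accepts the remaining argument.
add_1 = functools.partial(add, 1)
assert add_1(2) == 3
```
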
|
<python><python-typing><partial-application>
|
2024-08-10 02:55:16
| 0
| 3,895
|
luochen1990
|
78,854,892
| 865,220
|
Correct option to download from youtube with video resolution of 1080p using youtube-dl?
|
<p>To download youtube videos manually I typically use : <a href="https://y2meta.app/en/youtube/rU8_Fg103ZQ" rel="nofollow noreferrer">https://y2meta.app/en/youtube/rU8_Fg103ZQ</a> 's
[1080p (.mp4) full-HD] resolution.</p>
<p>Now I am looking to replicate the same via code using Python's <a href="https://github.com/ytdl-org/youtube-dl/blob/master/README.md" rel="nofollow noreferrer">youtube-dl</a> library. As per the documentation, <code>ydl_opts</code>
is what I need to configure properly.
My hunch says <code>"format": "bestvideo[height=1080]/best[height=1080]",</code> should be what I need to set, but I find it errors out with the following:</p>
<pre><code>youtube_dl.utils.ExtractorError: requested format not available
</code></pre>
<p>or for <code>"format": "mp4+bestvideo[height=1080]"</code> I get
<a href="https://stackoverflow.com/questions/75999015/error-giving-up-after-0-fragment-retries">ERROR: giving up after 0 fragment retries</a></p>
<p>Among all the permutations I tried, what doesn't error out is this:</p>
<pre><code> ydl_opts = {
"format": "bestvideo[ext=mp4]+bestaudio[ext=mp4]/mp4+best[height<=720]",
"recode-video": "mp4",
"outtmpl": '/my/video/output/path',
}
</code></pre>
<p>but here I get a resolution with a height of a mere 360p. I need the height to be 1080p.</p>
<p>PS: I don't care much about the audio; a mute mp4 with height = 1080p is enough for me.</p>
<p><strong>EDIT:</strong></p>
<p>As a workaround I am doing this,</p>
<pre><code> ydl_opts = {
"format": "bestvideo[ext=mp4]",
"merge_output_format": "mp4",
}
</code></pre>
<p>and then downscaling to 1080p locally, but this has an issue as well: it downloads only partially (i.e. only about 1 minute) and creates the illusion that the full file has downloaded (i.e. no error is thrown and there are no part files). It only shows <code>Skipping fragment</code> warning messages.</p>
<pre><code>[download] 25.0% of ~80.00MiB at 3.46MiB/s ETA 00:17[download] Skipping fragment 3...
[download] Skipping fragment 4...
[download] Skipping fragment 5...
[download] Skipping fragment 6...
[download] Skipping fragment 7...
[download] Skipping fragment 8...
[download] 100% of 20.00MiB in 00:06
</code></pre>
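<p>For reference, a hedged guess at an options dict (untested here — format availability depends on the video, and <code>bestvideo[height=1080]</code> fails whenever no stream has exactly that height, so <code>&lt;=</code> is more forgiving; the output path is a placeholder):</p>

```python
ydl_opts = {
    # prefer the best mp4 video stream at or below 1080p; audio is omitted
    # on purpose since a mute mp4 is acceptable here
    "format": "bestvideo[height<=1080][ext=mp4]/bestvideo[height<=1080]",
    "outtmpl": "/my/video/output/path",
}
```

<p>If fragments are still skipped, the actively maintained fork <code>yt-dlp</code> accepts the same options and tends to be more reliable against current YouTube.</p>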
|
<python><youtube><youtube-dl>
|
2024-08-10 00:03:27
| 0
| 18,382
|
ishandutta2007
|
78,854,857
| 1,030,542
|
Python3.11 handle HttpException raised alongwith the status code
|
<p>I see one of the methods in Python uses <code>raise_for_status()</code>:</p>
<pre><code> def raise_for_status(self):
if 400 <= self.status_code < 500:
http_error_msg = (
f"{self.status_code} Client Error: {reason} for url: {self.url}"
)
elif 500 <= self.status_code < 600:
http_error_msg = (
f"{self.status_code} Server Error: {reason} for url: {self.url}"
)
if http_error_msg:
raise self.status_code, HTTPError(http_error_msg, response=self)
</code></pre>
<p>I want to add a <code>try/except</code> block so that I can handle the exception. However, it also raises the status code along with the exception.
I am not sure how I can capture the status code without modifying the existing code for <code>raise_for_status()</code>.</p>
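<p>A self-contained sketch of the usual pattern (mirroring how <code>requests</code> does it, with hypothetical minimal classes): attach the response to the exception object, then read the status code from the caught exception, rather than trying to raise the code itself.</p>

```python
class HTTPError(Exception):
    def __init__(self, message, response=None):
        super().__init__(message)
        self.response = response  # carry the response on the exception

class Response:
    def __init__(self, status_code, url):
        self.status_code = status_code
        self.url = url

    def raise_for_status(self):
        if 400 <= self.status_code < 600:
            raise HTTPError(f"{self.status_code} Error for url: {self.url}",
                            response=self)

try:
    Response(404, "https://example.com").raise_for_status()
except HTTPError as exc:
    status = exc.response.status_code  # 404
```
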
|
<python><python-requests><try-except><raise>
|
2024-08-09 23:32:41
| 1
| 2,453
|
iDev
|
78,854,702
| 229,058
|
How can I get head of branch with illegal characters in it
|
<p>I know you can get the head of a branch directly using its name (e.g. <code>repo.heads.main</code>). Can you get the head of a branch that has characters illegal in an identifier (e.g. <code>feature-generate-events</code>) directly, or do I have to iterate to get it? The hyphens are illegal (in a Python sense, as @phd comments) to use as an attribute name.</p>
|
<python><git><gitpython>
|
2024-08-09 21:52:49
| 2
| 2,702
|
Stephen Rasku
|
78,854,539
| 7,106,915
|
Multiplication in an arbitrary base in Python
|
<p>Is there a library or efficient way to perform multiplication in an arbitrary base in Python, <strong>without</strong> going through the conversion to base 10?</p>
<p>Example of the wanted function f in base 14:</p>
<pre><code>base = 14
a = "b5"
b = "2a"
f(a,b,base) = "22b8"
</code></pre>
<p>For convenience, the above values converted in decimal:</p>
<pre><code>int("b5", base) # 159
int("2a", base) # 38
int("22b8", base) # 6042
</code></pre>
<p>This question is limited to integer numbers only.</p>
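<p>A minimal schoolbook-multiplication sketch that stays in the given base throughout — digits are parsed one character at a time and never combined into a base-10 integer. <code>f</code> is the function name from the question; a 0-9a-z digit alphabet is assumed:</p>

```python
import string

DIGITS = string.digits + string.ascii_lowercase  # assumes digits 0-9a-z

def _parse(s, base):
    # little-endian list of per-character digit values; no base-10 round trip
    return [DIGITS.index(c) for c in reversed(s.lower())]

def _render(ds, base):
    while len(ds) > 1 and ds[-1] == 0:  # strip leading zeros
        ds.pop()
    return "".join(DIGITS[d] for d in reversed(ds))

def f(a, b, base):
    x, y = _parse(a, base), _parse(b, base)
    out = [0] * (len(x) + len(y))
    for i, dx in enumerate(x):
        carry = 0
        for j, dy in enumerate(y):
            cur = out[i + j] + dx * dy + carry
            out[i + j] = cur % base   # keep one digit in place
            carry = cur // base       # propagate the rest
        out[i + len(y)] += carry
    return _render(out, base)

assert f("b5", "2a", 14) == "22b8"
```

<p>This is the O(n·m) grade-school algorithm; for very large inputs a big-integer library would still be faster despite the base conversion.</p>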
|
<python>
|
2024-08-09 20:36:08
| 0
| 3,007
|
Rexcirus
|
78,854,509
| 5,083,516
|
replacement for python3 crypt module
|
<p>The documentation for the Python 3 crypt module says:</p>
<blockquote>
<p>Deprecated since version 3.11, will be removed in version 3.13: The crypt module is deprecated (see PEP 594 for details and alternatives). The hashlib module is a potential replacement for certain use cases. The passlib package can replace all use cases of this module.</p>
</blockquote>
<p>However, this does not in fact appear to be the case.</p>
<p>In particular the default format for "mkpasswd" on my system seems to be "yescrypt", which passlib has no support for. So the "crypt" module can verify passwords generated by "mkpasswd" but passlib cannot. Additionally, passlib seems to require algorithms to be enabled manually while crypt can verify any hash the system supports.</p>
<p>Is there a proper replacement for the "crypt" module?</p>
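<p>For context: nothing in the stdlib verifies system <code>yescrypt</code> hashes, but for the narrower use case of hashing and verifying your own passwords, <code>hashlib.scrypt</code> is a stdlib option. A sketch (the cost parameters are illustrative, not a security recommendation):</p>

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # Derive a key from the password with scrypt; the random salt must be
    # stored alongside the digest.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

<p>This cannot read <code>/etc/shadow</code>-style hash strings, so it is a replacement only for applications that store their own hashes.</p>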
|
<python><crypt>
|
2024-08-09 20:22:28
| 1
| 10,972
|
plugwash
|
78,854,478
| 16,320,430
|
How can I replace null values in polars with a prefix with ascending numbers?
|
<p>I am trying to replace null values in my dataframe column with a prefix and ascending numbers (to make each unique), i.e.:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌──────────────┬──────────────┐
│ name ┆ asset_number │
│ --- ┆ --- │
│ str ┆ str │
╞══════════════╪══════════════╡
│ Office Chair ┆ null │
│ Office Chair ┆ null │
│ Office Chair ┆ null │
│ Office Chair ┆ CMP - 001 │
│ Office Chair ┆ CMP - 005 │
│ Office Chair ┆ null │
│ Table ┆ null │
│ Table ┆ CMP - 007 │
└──────────────┴──────────────┘
""")
</code></pre>
<p>the null values should be replaced to something like PREFIX - 001,PREFIX - 002,...</p>
|
<python><dataframe><data-science><data-cleaning><python-polars>
|
2024-08-09 20:11:44
| 3
| 435
|
Dante
|
78,854,416
| 11,196,682
|
How to Reduce Cold Start Time for AWS Lambda with Heavy Library Imports (SQLAlchemy, etc.)?
|
<p>I'm working with AWS Lambda in a serverless environment and I'm facing significant cold start delays due to heavy library imports before my handler. My Lambda function needs to use libraries like SQLAlchemy, which takes around 0.4 seconds just to import. Combined with other library imports, my cold start time can reach up to 1.3 seconds.</p>
<p>This is a critical issue because one Lambda function often calls another, and both are subject to cold starts. This can result in an initial request taking 3-4 seconds to complete, which is far from ideal for my use case.</p>
<p>Here's an example of some of the imports I have before my handler:</p>
<pre><code>import time
import sqlalchemy as db
from sqlalchemy.ext.declarative import declarative_base
from some_other_library import heavy_import
# Other imports...
def my_handler(event, context):
# Function logic...
pass
</code></pre>
<p><strong>What I've Tried:</strong></p>
<ul>
<li><p>Lazy Loading: I've considered importing libraries only when needed, but some are required during initialization.</p>
</li>
<li><p>AWS Lambda Layers: I'm already using layers to reduce the package size, but this doesn't seem to affect the import time during cold starts.</p>
</li>
<li><p>Increased Memory Size: I increased the allocated memory for the Lambda function, but this did not reduce the cold start time.</p>
</li>
</ul>
<p><strong>Question</strong>: How can I further reduce the cold start time in AWS Lambda, particularly with heavy library imports like SQLAlchemy? Are there any best practices or patterns for dealing with this issue in a serverless architecture?</p>
<p>Is the only solution to keep the Lambda warm, using a CloudWatch Event or something similar?</p>
<p>Why do these libraries take so long to load? I find it crazy that it takes almost 1 second to load libraries...</p>
|
<python><amazon-web-services><aws-lambda><import>
|
2024-08-09 19:49:45
| 1
| 552
|
MathAng
|
78,854,255
| 1,914,781
|
pandas convert continuous duration column to start and end column
|
<p>I would like to convert the <code>duration</code> column into <code>start</code> and <code>end</code> columns. I tried the code below; it works as expected, but it is not a perfect approach.</p>
<pre><code>import pandas as pd
def main():
data = [
['A',7],
['B',5],
['C',5],
['D',15],
['E',5]
]
df = pd.DataFrame(data,columns=['name','duration'])
data = []
for idx,row in df.iterrows():
name = row['name']
dur = row['duration']
if idx == 0:
start = 0
end = start + dur
else:
start = end
end = start + dur
data.append([name,start,dur,end])
df = pd.DataFrame(data,columns=['name','start','duration','end'])
print(df)
main()
</code></pre>
<p>Expected results:</p>
<pre><code> name start duration end
0 A 0 7 7
1 B 7 5 12
2 C 12 5 17
3 D 17 15 32
4 E 32 5 37
</code></pre>
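<p>A vectorized sketch of the same computation (assuming the rows are already in the desired order): <code>end</code> is the running total of <code>duration</code>, and <code>start</code> is <code>end</code> minus <code>duration</code>, i.e. the shifted running total.</p>

```python
import pandas as pd

df = pd.DataFrame({"name": list("ABCDE"), "duration": [7, 5, 5, 15, 5]})
df["end"] = df["duration"].cumsum()        # running total
df["start"] = df["end"] - df["duration"]   # equals cumsum shifted by one row
df = df[["name", "start", "duration", "end"]]
```

<p>This avoids the row-by-row <code>iterrows()</code> loop entirely.</p>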
|
<python><pandas>
|
2024-08-09 18:55:12
| 2
| 9,011
|
lucky1928
|
78,854,232
| 10,614,373
|
How do I add a Multilevel dropdown to a Django template?
|
<p>I am trying to add a dropdown to my navbar in <code>base.html</code> that shows multiple categories from a store. Each of these categories has a sub-category associated with it. I've created a model in Django that maps this relationship like so.</p>
<p><strong>models.py</strong></p>
<pre><code>class CategoryView(models.Model):
parent = models.ForeignKey('self', related_name='children', on_delete=models.CASCADE, blank = True, null=True)
title = models.CharField(max_length=100)
created_at = models.DateTimeField(auto_now_add=True)
def __str__(self):
return self.title
</code></pre>
<p>And I'm passing this model to the template using <strong>context processors</strong> as so</p>
<pre><code>def categoriesdropdown(request):
catg = CategoryView.objects.filter(parent=None)
context = {'catg':catg}
return context
</code></pre>
<p>Now I am trying to display these categories and sub-categories as a multilevel dropdown using bootstrap. I have tried implementing mostly all solutions from the answers here:</p>
<p><a href="https://stackoverflow.com/questions/44467377/bootstrap-4-multilevel-dropdown-inside-navigation">Bootstrap 4: Multilevel Dropdown Inside Navigation</a></p>
<p><a href="https://stackoverflow.com/questions/18023493/bootstrap-dropdown-sub-menu-missing">Bootstrap dropdown sub menu missing</a></p>
<p><a href="https://mdbootstrap.com/docs/standard/extended/dropdown-multilevel/" rel="nofollow noreferrer">https://mdbootstrap.com/docs/standard/extended/dropdown-multilevel/</a></p>
<p>But nothing seems to work.</p>
<p>Below is the dropdown from my template.</p>
<p><strong>base.html</strong></p>
<pre><code><div class="nav-item dropdown">
<a href="#" id="dropdownMenuLink" data-toggle="dropdown" aria-haspopup="true" aria-expanded="false" class="nav-link dropdown-toggle">Categories</a>
<ul class="dropdown-menu dropdown-menu2" aria-labelledby="dropdownMenuLink">
{% for category in catg %}
<li class="dropdown-submenu">
<a class="dropdown-item dropdown-toggle" href="#" id="multilevelDropdownMenu1" data-bs-toggle="dropdown" aria-haspopup="true" aria-expanded="false">{{ category.title }}</a>
<ul class="dropdown-menu2" aria-labelledby="multilevelDropdownMenu1">
{% for subcategory in category.children.all %}
<li><a href="#" class="dropdown-item">{{ subcategory.title }}</a></li>
{% endfor %}
</ul>
</li>
{% endfor %}
</ul>
</code></pre>
<p>I can see that all the categories and sub-categories are listed properly in the dropdown, however my sub-categories appear right below the categories and not as a next level dropdown.</p>
<p><a href="https://i.sstatic.net/82Zy59YT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82Zy59YT.png" alt="enter image description here" /></a></p>
<p><strong>Note:</strong> The dropdown toggle appears as is and doesn't work.</p>
<p><strong>base.css</strong></p>
<pre><code>.dropdown-submenu {
position: relative;
}
.dropdown-submenu .dropdown-menu2 {
top: 10%;
left: 100%;
margin-top: -1px;
}
.navbar-nav li:hover > ul.dropdown-menu2 {
display: block;
}
</code></pre>
<p>How do I get the category toggles to work and only display the sub-categories when the toggle is clicked or hovered over?</p>
|
<javascript><python><html><css><django>
|
2024-08-09 18:47:38
| 1
| 492
|
Trollsors
|
78,854,169
| 5,798,365
|
How to raise numerical labels on a matplotlib bar plot
|
<pre><code>import numpy as np
import matplotlib.pyplot as plt

points_1_25 = {'Completed' : grades_simple_completed, 'Failed': grades_simple_failed}
fig, ax = plt.subplots(figsize=(25, 10))
bottom = np.zeros(25)
width = 0.8
for verdict, point in points_1_25.items():
p = ax.bar(x_labels, point, width, label=verdict, bottom=bottom)
bottom += point
ax.bar_label(p, label_type='center',color='w', fontsize=20)
ax.set_facecolor("gray")
ax.set_title('Some label', fontsize=25)
ax.tick_params(axis='both', labelsize=20)
ax.set_ylim([0, 11])
ax.legend(prop={'size': 20}, loc='lower center')
</code></pre>
<p>On a bar plot generated from this code I want to have numerical labels a little bit higher, for example, 20 pixels higher than they are now (see the picture for a better understanding). Is there a way to do it?</p>
<p><a href="https://i.sstatic.net/DslALD4E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DslALD4E.png" alt="enter image description here" /></a></p>
|
<python><matplotlib>
|
2024-08-09 18:24:35
| 0
| 861
|
alekscooper
|
78,854,061
| 3,769,033
|
`KinematicTrajectoryOptimization` fails with `DurationCost` and any `AccelerationBounds` (even infinite)
|
<p><code>KinematicTrajectoryOptimization</code> seems to fail arbitrarily on some inputs when using a <code>DurationCost</code> and <code>AccelerationBounds</code>, even when setting the <code>AccelerationBounds</code> far outside the actual acceleration of the trajectory (or even setting them to infinity).</p>
<p>Here is a simple example: solving fails as written but passes when either the duration cost or (infinite) <code>AccelerationBounds</code> are removed:</p>
<pre><code>import numpy as np
from pydrake.planning import KinematicTrajectoryOptimization
from pydrake.solvers import Solve
trajopt = KinematicTrajectoryOptimization(num_positions=6, num_control_points=10)
prog = trajopt.get_mutable_prog()
positions = np.array([np.ones(6) * 0.9,
np.ones(6) * 0.8,
np.ones(6) * 0.5,
np.zeros(6)])
for i,joint in enumerate(positions):
trajopt.AddPathPositionConstraint(
joint, joint , i / (len(positions) - 1)
)
trajopt.AddDurationCost(0.5)
trajopt.AddAccelerationBounds(-np.inf * np.ones(6), np.inf * np.ones(6))
result = Solve(prog)
if not result.is_success():
raise RuntimeError("Drake trajectory failed")
</code></pre>
<p>My best guess is that the <code>AccelerationBounds</code> trigger a different solver that is really bad at handling <code>DurationCosts</code>, but this example seems like it should be so trivial to solve that I have trouble believing that.</p>
<p>What might be happening here?</p>
|
<python><drake>
|
2024-08-09 17:47:19
| 1
| 1,245
|
JoshuaF
|
78,853,975
| 19,369,310
|
KeyError: 0 # If we have a listlike key, _check_indexing_error will raise after applying a function to a pandas dataframe
|
<p>I have a rather complicated function <code>f(featureList)</code> that takes a list of arbitrary length as input and gives another list of the same length as output:</p>
<pre><code>import math
import random
import time
def survivalNormalcdf(x):
return (1-math.erf(x/math.sqrt(2)))/2
def normalcdf(x):
return (1+math.erf(x/math.sqrt(2)))/2
def normalpdf(x):
return math.exp(-x*x/2)/math.sqrt(2*math.pi)
def abss(p):
q=[]
for k in range(len(p)):
q.append(abs(p[k]))
return q;
def mult(a,p):
q=[]
for k in range(len(p)):
q.append(a*p[k])
return q;
def add(a,p):
q=[]
for k in range(len(p)):
q.append(a[k]+p[k])
return q
def dot(u,v,pp):
s=0
for k in range(len(u)):
s+=u[k]*v[k]*pp[k]
return s;
def grad(t,pp):
h=math.sqrt(1/5/(len(pp)+1))
g=[]
for k in range(len(pp)):
g.append(-pp[k])
beg=t[k] - 10
end=t[k] + 10
qq=math.ceil((end-beg)/h)
for q in range(qq):
x=beg+q*h
ss=survivalNormalcdf(x)
for m in range(len(pp)):
if k==m:
ss*=normalpdf(x-t[m])
else:
ss*=survivalNormalcdf(x-t[m])
g[k]+=ss*h;
for k in range(len(pp)):
g[k]/=pp[k]
return g
def iint(t,pp):
h=0.1
ss=0
for k in range(1):
beg=min(min(t),0) - 10
end=max(max(t),0) + 10
qq=int((end-beg)/h)
for q in range(qq):
x=beg+q*h
s=1;
for m in range(len(pp)):
s*=survivalNormalcdf(x-t[m])
ss+=(s-1)*survivalNormalcdf(x)*h
for k in range(len(pp)):
ss-=pp[k]*t[k]
return ss
def f(ppp):
kk=0
maxx=ppp[0]
for k in range(len(ppp)):
if ppp[k]>maxx:
kk=k
maxx=ppp[k]
pp=ppp[:kk]+ppp[kk+1:]
t=[]
for k in range(len(pp)):
t.append(math.sqrt(2*math.log(1/pp[k])))
u=grad(t,pp)
mm=0
while mm<=50*len(pp) and sum(abss(grad(t,pp)))>1/10.0**12:
mm+=1
if mm%len(pp)==1:
pass
s=min(1,1/sum(abss(u)))
cnt=0
while dot(u, grad(add(t,mult(s,u)),pp),pp)>0 and s*sum(abss(u))<len(u):
s*=2
cnt+=1
a=0
b=s
beg=a
end=b
A=dot(u, grad(add(t,mult(a,u)),pp),pp)
B=dot(u, grad(add(t,mult(b,u)),pp),pp)
k=0
while k<20 and abs(A-B)>(1/10.0**12)*max(abs(A),abs(B)):
mid=(beg+end)/2
if dot(u, grad(add(t,mult(mid,u)),pp),pp)>0:
beg=mid
else:
end=mid
c=max(beg-(1/10.0**12),min(end+(1/10.0**12),b+(B/(A-B))*(b-a)))
C=dot(u, grad(add(t,mult(c,u)),pp),pp)
if abs(a-c)>abs(b-c) and abs(b-c)>0:
a=b
A=B
b=c
B=C
else:
b=a
B=A
a=c
A=C
if C>0:
beg=max(beg,c)
else:
end=min(end,c)
k+=1
s=c
oldgrad=grad(t,pp)
t=add(t,mult(s,u))
newgrad=grad(t,pp)
uold=mult(1,u)
u=mult(1,newgrad)
if mm%len(pp)!=1:
u=add(u,mult(dot(newgrad, add(newgrad,mult(-1,oldgrad)),pp)/dot(oldgrad,oldgrad,pp),uold))
ss=sum(abss(grad(t,pp)))
tt=t[:kk]
tt.append(0)
t=tt+t[kk:]
if ss>1/10.0**12:
x=str(input("Failed"))
return t
</code></pre>
<p>So for example, we have</p>
<pre><code>f([0.2,0.1,0.55,0.15]) = [0.7980479577400461, 1.2532153405902076, 0, 0.9944188436386611]
f([0.02167131,0.17349148,0.08438952,0.04143787,0.02589056,0.03866752,0.0461553,0.09212758,0.10879326,0.186921,0.02990676,0.02731904,0.06020158,0.06302721]) =
[1.174313198960376,
0.04892832217716259,
0.4858149215364752,
0.864373517094786,
1.0921431988531611,
0.8989070806156786,
0.8098127832637683,
0.4358011113129989,
0.3387512959281985,
0,
1.0239882119094197,
1.0669265516784823,
0.671235053100702,
0.6466856803321204]
</code></pre>
<p>And I have a pandas dataframe that looks like</p>
<pre><code>Class_ID Date Student_ID feature
1 1/1/2023 3 0.02167131
1 1/1/2023 4 0.17349148
1 1/1/2023 6 0.08438952
1 1/1/2023 8 0.04143787
1 1/1/2023 9 0.02589056
1 1/1/2023 1 0.03866752
1 1/1/2023 10 0.0461553
3 17/4/2022 5 0.2
3 17/4/2022 2 0.1
3 17/4/2022 3 0.55
3 17/4/2022 4 0.15
</code></pre>
<p>and I would like to apply the function <code>f(featureList)</code> to the <code>feature</code> column, grouped by <code>Class_ID</code>, and generate a new column called <code>New_feature</code>. And here's my code:</p>
<p><code>df['New_feature'] = df.groupby('Class_ID', group_keys=False)['feature'].apply(f)</code></p>
<p>So the desired outcome looks like:</p>
<pre><code>df_outcome = pd.read_fwf(io.StringIO("""Class_ID Date Student_ID feature New_feature
1 1/1/2023 3 0.02167131 2.385963956274992
1 1/1/2023 4 0.17349148 0
1 1/1/2023 6 0.08438952 1.6510552553095719
1 1/1/2023 8 0.04143787 2.054792417419151
1 1/1/2023 9 0.02589056 2.298129663961289
1 1/1/2023 1 0.03866752 2.0916706205231286
1 1/1/2023 10 0.0461553 1.9965409929949391
3 17/4/2022 5 0.2 0.7980479577400461
3 17/4/2022 2 0.1 1.2532153405902076
3 17/4/2022 3 0.55 0
3 17/4/2022 4 0.15 0.9944188436386611"""))
</code></pre>
<p>However it gives the following error:</p>
<pre><code>KeyError: 0
The above exception was the direct cause of the following exception:
# If we have a listlike key, _check_indexing_error will raise
</code></pre>
<p>Here is the code:</p>
<pre><code>import io
import numpy as np
import pandas as pd
import math
df = pd.read_fwf(io.StringIO("""Class_ID Date Student_ID feature
1 1/1/2023 3 0.02167131
1 1/1/2023 4 0.17349148
1 1/1/2023 6 0.08438952
1 1/1/2023 8 0.04143787
1 1/1/2023 9 0.02589056
1 1/1/2023 1 0.03866752
1 1/1/2023 10 0.0461553
3 17/4/2022 5 0.2
3 17/4/2022 2 0.1
3 17/4/2022 3 0.55
3 17/4/2022 4 0.15"""))
def survivalNormalcdf(x):
    return (1-math.erf(x/math.sqrt(2)))/2

def normalcdf(x):
    return (1+math.erf(x/math.sqrt(2)))/2

def normalpdf(x):
    return math.exp(-x*x/2)/math.sqrt(2*math.pi)

def abss(p):
    q=[]
    for k in range(len(p)):
        q.append(abs(p[k]))
    return q;

def mult(a,p):
    q=[]
    for k in range(len(p)):
        q.append(a*p[k])
    return q;

def add(a,p):
    q=[]
    for k in range(len(p)):
        q.append(a[k]+p[k])
    return q

def dot(u,v,pp):
    s=0
    for k in range(len(u)):
        s+=u[k]*v[k]*pp[k]
    return s;

def grad(t,pp):
    h=math.sqrt(1/5/(len(pp)+1))
    g=[]
    for k in range(len(pp)):
        g.append(-pp[k])
        beg=t[k] - 10
        end=t[k] + 10
        qq=math.ceil((end-beg)/h)
        for q in range(qq):
            x=beg+q*h
            ss=survivalNormalcdf(x)
            for m in range(len(pp)):
                if k==m:
                    ss*=normalpdf(x-t[m])
                else:
                    ss*=survivalNormalcdf(x-t[m])
            g[k]+=ss*h;
    for k in range(len(pp)):
        g[k]/=pp[k]
    return g

def iint(t,pp):
    h=0.1
    ss=0
    for k in range(1):
        beg=min(min(t),0) - 10
        end=max(max(t),0) + 10
        qq=int((end-beg)/h)
        for q in range(qq):
            x=beg+q*h
            s=1;
            for m in range(len(pp)):
                s*=survivalNormalcdf(x-t[m])
            ss+=(s-1)*survivalNormalcdf(x)*h
    for k in range(len(pp)):
        ss-=pp[k]*t[k]
    return ss

def f(ppp):
    kk=0
    maxx=ppp[0]
    for k in range(len(ppp)):
        if ppp[k]>maxx:
            kk=k
            maxx=ppp[k]
    pp=ppp[:kk]+ppp[kk+1:]
    t=[]
    for k in range(len(pp)):
        t.append(math.sqrt(2*math.log(1/pp[k])))
    u=grad(t,pp)
    mm=0
    while mm<=50*len(pp) and sum(abss(grad(t,pp)))>1/10.0**12:
        mm+=1
        if mm%len(pp)==1:
            pass
        s=min(1,1/sum(abss(u)))
        cnt=0
        while dot(u, grad(add(t,mult(s,u)),pp),pp)>0 and s*sum(abss(u))<len(u):
            s*=2
            cnt+=1
        a=0
        b=s
        beg=a
        end=b
        A=dot(u, grad(add(t,mult(a,u)),pp),pp)
        B=dot(u, grad(add(t,mult(b,u)),pp),pp)
        k=0
        while k<20 and abs(A-B)>(1/10.0**12)*max(abs(A),abs(B)):
            mid=(beg+end)/2
            if dot(u, grad(add(t,mult(mid,u)),pp),pp)>0:
                beg=mid
            else:
                end=mid
            c=max(beg-(1/10.0**12),min(end+(1/10.0**12),b+(B/(A-B))*(b-a)))
            C=dot(u, grad(add(t,mult(c,u)),pp),pp)
            if abs(a-c)>abs(b-c) and abs(b-c)>0:
                a=b
                A=B
                b=c
                B=C
            else:
                b=a
                B=A
                a=c
                A=C
            if C>0:
                beg=max(beg,c)
            else:
                end=min(end,c)
            k+=1
        s=c
        oldgrad=grad(t,pp)
        t=add(t,mult(s,u))
        newgrad=grad(t,pp)
        uold=mult(1,u)
        u=mult(1,newgrad)
        if mm%len(pp)!=1:
            u=add(u,mult(dot(newgrad, add(newgrad,mult(-1,oldgrad)),pp)/dot(oldgrad,oldgrad,pp),uold))
        ss=sum(abss(grad(t,pp)))
    tt=t[:kk]
    tt.append(0)
    t=tt+t[kk:]
    if ss>1/10.0**12:
        x=str(input("Failed"))
    return t
df['New_feature'] = df.groupby('Class_ID', group_keys=False)['feature'].apply(f)
df
</code></pre>
<p>Did I do anything wrong? Thanks in advance.</p>
<p><strong>Edit</strong> Here is a sample dataframe:</p>
<pre><code>df = pd.read_fwf(io.StringIO("""Class_ID Date      Student_ID feature
1        1/1/2023  3          0.02167131
1        1/1/2023  4          0.17349148
1        1/1/2023  6          0.08438952
1        1/1/2023  8          0.04143787
1        1/1/2023  9          0.02589056
1        1/1/2023  1          0.03866752
1        1/1/2023  10         0.0461553
3        17/4/2022 5          0.2
3        17/4/2022 2          0.1
3        17/4/2022 3          0.55
3        17/4/2022 4          0.15
7        12/2/2019 3          0.1
7        12/2/2019 5          0.1
7        12/2/2019 12         0.05
7        12/2/2019 8          0.45
7        12/2/2019 6          0.3"""))
</code></pre>
<p>and the desired output:</p>
<pre><code>df_outcome = pd.read_fwf(io.StringIO("""Class_ID Date      Student_ID feature    New_feature
1        1/1/2023  3          0.02167131 2.385963956274992
1        1/1/2023  4          0.17349148 0
1        1/1/2023  6          0.08438952 1.6510552553095719
1        1/1/2023  8          0.04143787 2.054792417419151
1        1/1/2023  9          0.02589056 2.298129663961289
1        1/1/2023  1          0.03866752 2.0916706205231286
1        1/1/2023  10         0.0461553  1.9965409929949391
3        17/4/2022 5          0.2        0.7980479577400461
3        17/4/2022 2          0.1        1.2532153405902076
3        17/4/2022 3          0.55       0
3        17/4/2022 4          0.15       0.9944188436386611
7        12/2/2019 3          0.1        1.07079092
7        12/2/2019 5          0.1        1.07079092
7        12/2/2019 12         0.05       1.46861021
7        12/2/2019 8          0.45       0
7        12/2/2019 6          0.3        0.32415155"""))
</code></pre>
|
<python><pandas><dataframe><group-by><apply>
|
2024-08-09 17:18:29
| 1
| 449
|
Apook
|
78,853,806
| 3,103,957
|
Python __set__() descriptor behaviour
|
<p>I have the following Python code snippet.</p>
<pre><code>class LoggedAgeAccess:

    def __get__(self, obj, objtype=None):
        value = obj._age
        return value

    def __set__(self, obj, value):
        obj._age = value


class Person:

    age = LoggedAgeAccess()             # Descriptor instance

    def __init__(self, name, age):
        self.name = name                # Regular instance attribute
        self.age = age                  # Calls __set__()

    def birthday(self):
        self.age += 1                   # Calls both __get__() and __set__()


x = Person("ABC", 27)
</code></pre>
<p>In the constructor, the second assignment statement <code>self.age = age</code> triggers the <code>__set__()</code> method of the LoggedAgeAccess descriptor.
This is really confusing.</p>
<p>When the Person object is created, I pass in name and age values. Inside the constructor, I refer to the passed value and create an instance-specific variable <code>self.age</code>
whose value is assigned the passed value (27). How does this assignment statement refer to the class-level age variable and trigger a call to <code>__set__()</code>?</p>
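<p>A minimal sketch (names are illustrative, not from the original code) of why this happens: attribute assignment always looks the attribute name up on <code>type(obj)</code> first, and because <code>LoggedAgeAccess</code> defines <code>__set__</code> it is a <em>data descriptor</em>, so the assignment is routed to it instead of creating an ordinary instance attribute:</p>

```python
class Trace:
    """Data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, objtype=None):
        return obj._x

    def __set__(self, obj, value):
        obj._x = value            # store under a different name

class C:
    x = Trace()                   # descriptor lives on the class

c = C()
c.x = 10                          # found on type(c) -> Trace.__set__ runs
print('x' in c.__dict__)          # False: no plain instance attribute made
print(c.__dict__['_x'])           # 10: the descriptor stored it as _x
```

<p>This is the same mechanism at work in <code>__init__</code>: <code>self.age = age</code> goes through <code>__setattr__</code>, which checks the class for a data descriptor before touching the instance <code>__dict__</code>.</p>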
|
<python><descriptor>
|
2024-08-09 16:31:44
| 1
| 878
|
user3103957
|
78,853,700
| 719,276
|
Package and use source files with PyInstaller?
|
<p>My app auto-updates by:</p>
<ol>
<li>regularly checking for new versions of my app on PyPi,</li>
<li>downloading and extracting the latest one,</li>
<li>replacing the source files, and restarting.</li>
</ol>
<p>My project has the following structure:</p>
<pre><code>app.py
project/
utils.py
core.py
</code></pre>
<p>New versions are automatically downloaded from PyPi with <code>pip download --no-deps --no-binary :all: project==X.X.X</code>, this downloads the <code>project-X.X.X.tar.gz</code> which is extracted and replaces the above <code>project/</code> folder. The app is then restarted.</p>
<p>Now when I package my project with PyInstaller, it compiles everything into an executable so there are no sources to be replaced.</p>
<p>Is it possible to have the project sources in the <code>_internal</code> folder created by PyInstaller, and have them used, instead of having everything compiled in the executable?</p>
|
<python><package><pyinstaller>
|
2024-08-09 16:01:36
| 1
| 11,833
|
arthur.sw
|
78,853,597
| 7,290,845
|
DLT UDFs with modular code - INVALID_ARGUMENT No module named 'mymodule'
|
<p>I am migrating a massive codebase to PySpark on Azure Databricks, using DLT Pipelines. It is very important that the code be modular; that is, for the time being I am looking to make use of UDFs that rely on modules and classes.</p>
<p>I am receiving the following error:</p>
<pre><code>org.apache.spark.SparkRuntimeException: [UDF_ERROR.PAYLOAD] Execution of function <lambda>(MYCOLUMN_NAME1531)
2) failed - failed to set payload
== Error ==
INVALID_ARGUMENT: No module named 'mymodule'
== Stacktrace ==
</code></pre>
<p>With the following code (anonymized to create a minimum working example):</p>
<pre><code># demo.py
from pyspark.sql.functions import col
import dlt
import mymodule

demodata = mymodule.DemoData("EX")
helper = mymodule.Helper(demodata)

@dlt.table(name="DEMO")
def table():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.Format", "PARQUET")
        .load("abfss://...")
        .withColumn("DEMO", helper.transform(col("MYCOLUMN_NAME")))
    )


# mymodule.py
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf


class DemoData:
    def __init__(self, suffix):
        self.suffix = suffix


class Helper:
    def __init__(self, demoData):
        _suffix = demoData.suffix
        self.transform = udf(lambda _string: self.helper(_string, _suffix), StringType())

    @staticmethod
    def helper(string, suffix):
        return string + suffix
</code></pre>
<p>Can someone help me understand what is happening? I am thinking that the Spark Worker cannot see my module. Is this correct? How would I use UDFs with modular code? I understand that this might not be ideal, but I want to understand this technicality.</p>
|
<python><azure><pyspark><databricks><azure-databricks>
|
2024-08-09 15:32:36
| 1
| 1,689
|
Zeruno
|
78,853,535
| 3,265,791
|
Pandas 3.0 Copy-on-Write changes, how to best assign loc and iloc selection
|
<p>I am preparing code for pandas 3.0 and noticed that the original syntax <code>df['b'].iloc[0] = 9</code> raises a lot of FutureWarnings.
Is there an alternative, more elegant way to express this than <code>df.loc[df.index[0], 'b'] = 9</code>? That form seems much less elegant.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(dict(a=[1, 2, 3], b=[2, 3, 4]))
df['b'].iloc[0] = 9
df.loc[df.index[0], 'b'] = 9
</code></pre>
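<p>One single-step alternative (a sketch; whether it is more elegant is a matter of taste) is to resolve the column label to a position once and write through a single <code>iloc</code> call, which avoids the chained assignment that Copy-on-Write warns about:</p>

```python
import pandas as pd

df = pd.DataFrame(dict(a=[1, 2, 3], b=[2, 3, 4]))

# one indexing operation: positional row 0, column 'b' resolved to a position
df.iloc[0, df.columns.get_loc('b')] = 9
print(df.loc[0, 'b'])  # 9
```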
|
<python><pandas>
|
2024-08-09 15:13:59
| 1
| 639
|
MMCM_
|
78,853,441
| 1,914,034
|
Get images back from patches
|
<p>I use a function to creates patches of shape <code>(i,j,c,h,w)</code> from an image <code>(c,h,w)</code> with overlap like so:</p>
<pre><code>def create_patches(image, patch_size=224, overlap=0):
    _, height, width = image.shape
    step = patch_size - overlap
    padding_width = (step - (width - overlap) % step) % step
    padding_height = (step - (height - overlap) % step) % step
    image = F.pad(image, (0, 0, padding_width, padding_height))
    return (
        image
        .unfold(0, 3, 3)
        .unfold(1, patch_size, step)
        .unfold(2, patch_size, step)
    )

org_image = read_image("image.png")
print(org_image.shape)  # torch.Size([3, 669, 1046])

patches = create_patches(org_image, patch_size=224, overlap=24)
print(patches.shape)  # torch.Size([4, 6, 3, 224, 224])
</code></pre>
<p>I wonder how I could reverse the process in order to get the image back from the patches.
Note that I am using a super resolution model so the patches shape after eval will be <code>(i,j,c,h*scale_factor, w*scale_factor)</code> for instance <code>torch.Size([4, 6, 3, 896, 896])</code> for a <code>scale_factor</code> of 4.</p>
|
<python><pytorch>
|
2024-08-09 14:53:34
| 0
| 7,655
|
Below the Radar
|
78,853,388
| 87,973
|
How to re-cache global variable?
|
<p>I'm trying to change a global state (an <a href="https://code.larus.se/lmas/opensimplex" rel="nofollow noreferrer">opensimplex</a> random seed), which I use in a function called from an <code>@njit</code>ed function. But once compiled, Numba fixes the global value and I'm not able to change it. Like so:</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit

global_var = 3

@njit
def func():
    return global_var - 3

print(func())  # prints 0

global_var = 5
print(func())  # prints 0 again, undesired
</code></pre>
<p>Is there some way I can change a global state? I've tried with closures and storing the state in a <code>numba.experimental.jitclass</code>, but couldn't get it to work. Here's an example of that:</p>
<pre class="lang-py prettyprint-override"><code>from numba import njit, int32
from numba.experimental import jitclass

spec = [
    ('var', int32),
]

@jitclass(spec)
class State:
    def __init__(self, var):
        self.var = var

    def set(self, var):
        self.var = var

state = State(1)

@njit
def get_state():
    return state.var

get_state()  # throws error
</code></pre>
<p>The class attempt gets me a Numba not implemented error:</p>
<pre><code>Traceback (most recent call last):
File "D:\rnd\py\landslip\try-jit.py", line 20, in <module>
get_state()
File "D:\prog\Python311\Lib\site-packages\numba\core\dispatcher.py", line 468, in _compile_for_args
error_rewrite(e, 'typing')
File "D:\prog\Python311\Lib\site-packages\numba\core\dispatcher.py", line 409, in error_rewrite
raise e.with_traceback(None)
numba.core.errors.NumbaNotImplementedError: Failed in nopython mode pipeline (step: native lowering)
<numba.core.base.OverloadSelector object at 0x00000206A85AC7D0>, (instance.jitclass.State#206a825e050<var:int32>,)
During: lowering "$4load_global.0 = global(state: <numba.experimental.jitclass.boxing.State object at 0x00000206A8597DC0>)" at D:\rnd\py\landslip\try-jit.py (18)
</code></pre>
|
<python><numba>
|
2024-08-09 14:42:10
| 1
| 26,357
|
Jonas Byström
|
78,853,345
| 18,142,235
|
Problem in importing electrocardiogram dataset from scipy [Error: No module named 'scipy.datasets']
|
<p>I tried to follow <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html" rel="nofollow noreferrer">this</a> scipy example on peak analysis, but when I imported the electrocardiogram dataset with <code>from scipy.datasets import electrocardiogram</code>, I got the error mentioned in the title.</p>
<p>I installed scipy again, both through pip and conda, but still no such package was added. Finally I found the electrocardiogram dataset using <code>from scipy.misc import electrocardiogram</code>. Can anyone explain why <code>scipy.datasets</code> does not exist for me? I am a Windows user, I have scipy version 1.9.1, and I use anaconda (no active environment).</p>
|
<python><scipy><anaconda>
|
2024-08-09 14:35:59
| 1
| 359
|
hamflow
|
78,853,293
| 2,171,348
|
how to restore to previous debugpy version in vscode
|
<p>I've started using vscode 1.91.1 and docker on windows10 and WSL2 ubuntu 22.04 for about one week.</p>
<p>Before vscode started showing the "there's an available update" notice, while the workspace was single-root, I could open the python project in vscode's devcontainer without problems.</p>
<p>After I added one more folder to the workspace to make it multi-root, changed some settings.json files at both the workspace level and the project level, and reopened the workspace in the devcontainer, I saw new docker images pulled locally and the devcontainer rebuilt; while the devcontainer was being rebuilt, the ms-python extension was reinstalled (vscode is 1.91.1).</p>
<p>And now I get the following message when open the workspace in devcontainer:</p>
<pre><code>[/root/.vscode-server/extensions/ms-python.debugpy-2024.10.0-linux-x64]: Extension is not compatible with Code 1.91.1. Extension requires: ^1.92.0.
</code></pre>
<p>I've tried change the workspace to single-root, still get this above error when reopen in devcontainer.</p>
<p>I've tried removing the <em><strong>/root/.vscode-server/extensions/ms-python.debugpy-2024.10.0-linux-x64</strong></em> folder from WSL2 ubuntu, but it gets reinstated every time I reopen the workspace in the devcontainer.</p>
<p>What else should I do to restore the previous version of the ms-python.debugpy extension?</p>
<p>(I don't want to upgrade vscode everytime there's a new version)</p>
|
<python><visual-studio-code><vscode-devcontainer>
|
2024-08-09 14:23:12
| 1
| 481
|
H.Sheng
|
78,853,188
| 1,236,694
|
Most efficient way to sum-up subset of list
|
<pre><code>a = [-1, 17, 3, 101, -46, 51]
</code></pre>
<p>I want the sum of elements from an arbitrary start index to an arbitrary end index.</p>
<p>I can do this:</p>
<pre><code>partial = sum(a[start:end+1])
</code></pre>
<p>But Python slice creates another list which consumes memory and might be time-inefficient.</p>
<p>Does Python offer a built-in function to sum a subarray efficiently?</p>
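<p>For reference, a sketch with <code>itertools.islice</code>, which feeds <code>sum</code> lazily without materialising an intermediate list (note it still iterates over the skipped prefix, so it saves memory rather than guaranteeing speed):</p>

```python
from itertools import islice

a = [-1, 17, 3, 101, -46, 51]
start, end = 1, 3

# sums a[start:end + 1] without building a temporary list
partial = sum(islice(a, start, end + 1))
print(partial)  # 121
```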
|
<python>
|
2024-08-09 14:00:48
| 1
| 9,151
|
BaltoStar
|
78,853,009
| 1,862,861
|
How to manually set all learnable parameters in PyTorch model to a fixed value
|
<p>For some testing purposes, I would like to manually set all the learnable parameters of a PyTorch <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module" rel="nofollow noreferrer"><code>torch.nn.Module</code></a> model to a fixed value (I'm comparing two models that should be the same, but they seem to train considerably differently, so I want to sanity check the <code>forward</code> method). Is there a simple way to do this?</p>
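<p>One straightforward sketch (the module here is illustrative): iterate over <code>model.parameters()</code> inside <code>torch.no_grad()</code> and fill each tensor in place:</p>

```python
import torch

model = torch.nn.Linear(3, 2)   # stand-in for any nn.Module

with torch.no_grad():           # keep the fills out of autograd
    for p in model.parameters():
        p.fill_(0.5)            # every learnable value becomes 0.5
```

<p>Running the same snippet on both models before calling <code>forward</code> should make their outputs directly comparable.</p>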
|
<python><pytorch>
|
2024-08-09 13:21:15
| 1
| 7,300
|
Matt Pitkin
|
78,852,650
| 12,719,086
|
Efficiently processing large molecular datasets with Dask Distributed, DataFrames and Prefect
|
<p>I'm working with a large dataset of molecular structures (approximately 240,000 records) stored in a PostgreSQL database. I need to perform computations on each molecule using RDKit. I'm using Dask for distributed computing and Prefect for workflow management. My main goal is to efficiently distribute this dataset to my Dask workers and compute the results.</p>
<p>Here's a simplified version of what I'm attempting:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
from prefect import flow, task
from prefect_dask import DaskTaskRunner
from rdkit import Chem
from rdkit.Chem import AllChem
@task
def fetch_data():
    return dd.read_sql_table('molecules', engine, index_col='id', npartitions=32)

@task
def process_molecule(smiles):
    mol = Chem.MolFromSmiles(smiles)
    mol = Chem.AddHs(mol)
    AllChem.EmbedMolecule(mol, AllChem.ETKDG())
    # More processing here...
    return processed_data

@flow(task_runner=DaskTaskRunner())
def process_molecules():
    df = fetch_data()
    results = df['smiles'].apply(process_molecule)
    return results.compute()

if __name__ == "__main__":
    process_molecules()
</code></pre>
<p>My main questions are:</p>
<ol>
<li><p>How can I optimize the distribution of this dataset to my Dask workers? Is reading directly from SQL into a Dask DataFrame the most efficient approach?</p>
</li>
<li><p>What's the best way to structure the computation to fully utilize my distributed resources? Should I be using <code>map_partitions</code> instead of <code>apply</code>, or is there a better approach?</p>
</li>
<li><p>How can I ensure that the workload is evenly distributed across my Dask workers?</p>
</li>
<li><p>Are there any Dask or Prefect-specific optimizations I should consider for this type of large-scale molecular computation?</p>
</li>
<li><p>How can I monitor the progress of the computation across the distributed system?</p>
</li>
</ol>
<p>I'm looking for strategies to improve the efficiency of distributing this large dataset to my Dask workers and computing the results. Any insights or examples would be greatly appreciated!</p>
|
<python><dask><dask-distributed><dask-dataframe><prefect>
|
2024-08-09 12:02:10
| 0
| 471
|
Polymood
|
78,852,506
| 4,473,615
|
Nested list in Python - Transpose nested list
|
<p>I have a below nested list:</p>
<pre><code>list = [Language:'Tamil'
Capital: 'Chennai'
Place: 'Chennai', 'Vellore', 'Trichy', 'Madurai'
]
</code></pre>
<p>I'm expecting to transpose it as:</p>
<pre><code>Language Capital Place
Tamil Chennai Chennai
Tamil Chennai Vellore
Tamil Chennai Trichy
Tamil Chennai Madurai
</code></pre>
<p>Tried converting to pandas dataframe:</p>
<pre><code>df = pd.DataFrame(list)
</code></pre>
<p>The result is</p>
<pre><code>Language Capital Place
Tamil Chennai ['Chennai', 'Vellore', 'Trichy', 'Madurai']
</code></pre>
<p>How can I transpose each list of values in a new row for the place?</p>
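<p>Since <code>Place</code> holds a list while the other fields are scalars, one option (a sketch using the sample values) is <code>DataFrame.explode</code>, which emits one row per list element and repeats the scalar columns:</p>

```python
import pandas as pd

data = {"Language": "Tamil",
        "Capital": "Chennai",
        "Place": ["Chennai", "Vellore", "Trichy", "Madurai"]}

# one row in, four rows out: scalars are repeated down the exploded rows
df = pd.DataFrame([data]).explode("Place", ignore_index=True)
print(df)
```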
|
<python><list>
|
2024-08-09 11:25:00
| 3
| 5,241
|
Jim Macaulay
|
78,852,292
| 2,697,895
|
How to implement the Preferences menu in Toga application?
|
<p>I tried the following, but when I open the menu, the Preferences item is not enabled:</p>
<pre><code>class Application(toga.App):

    def startup(self):
        # ... build UI here
        self.main_window = toga.MainWindow(title=self.formal_name)
        self.main_window.content = MainBox
        self.main_window.show()

    def preferences(self):
        # Create a new window for the preferences
        pref_window = toga.Window(title="Preferences")

        # Create a preferences window layout
        pref_box = toga.Box(style=Pack(direction=COLUMN, padding=10))

        # Example preferences settings (could be text inputs, toggles, etc.)
        theme_label = toga.Label("Choose Theme:", style=Pack(padding=(0, 5)))
        theme_selection = toga.Selection(items=["Light", "Dark"])

        notification_label = toga.Label("Enable Notifications:", style=Pack(padding=(0, 5)))
        notification_toggle = toga.Switch()

        # Add widgets to the box
        pref_box.add(theme_label, theme_selection, notification_label, notification_toggle)

        # Set the content of the preferences window
        pref_window.content = pref_box
        pref_window.show()
</code></pre>
|
<python><beeware><toga>
|
2024-08-09 10:33:38
| 1
| 3,182
|
Marus Gradinaru
|
78,852,017
| 9,879,534
|
pre-commit using mypy specify venv cross-platform
|
<p>I'm using <a href="https://pdm-project.org/latest/" rel="nofollow noreferrer"><code>pdm</code></a>, and each project has its own <code>.venv</code>. When using <code>mypy</code>, I need to specify venv like <code>mypy --python-executable ./.venv/Scripts/python.exe .</code> on windows or <code>mypy --python-executable ./.venv/bin/python .</code> on linux.</p>
<p>Now I want to use <code>pre-commit</code> and put <code>mypy</code> in the hooks. I write</p>
<pre class="lang-yaml prettyprint-override"><code>- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v1.11.1
  hooks:
    - id: mypy
</code></pre>
<p>In this way, I can't commit any code because the <code>mypy</code> hook fails, as I haven't specified the venv. So I have to use</p>
<pre class="lang-yaml prettyprint-override"><code>  rev: v1.11.1
  hooks:
    - id: mypy
      args: ["--python-executable", "./.venv/Scripts/python"]
</code></pre>
<p>on my windows computer.
Now everything goes well, but I immediately realize that it would fail on linux because there is no <code>.venv/Scripts/python</code> but <code>.venv/bin/python</code>.
So, how should I use mypy in pre-commit cross-platform?</p>
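<p>One cross-platform option (a sketch, untested against this exact setup) is to skip the mirror repo and declare a local hook that invokes mypy through <code>pdm run</code>, which resolves the project's own venv on both Windows and Linux, so no <code>--python-executable</code> path is needed:</p>

```yaml
- repo: local
  hooks:
    - id: mypy
      name: mypy (project venv via pdm)
      entry: pdm run mypy
      language: system   # run whatever is on PATH; pdm picks the venv
      types: [python]
```

<p>This assumes mypy is installed in the project venv (e.g. <code>pdm add -d mypy</code>).</p>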
|
<python><mypy><pre-commit><pre-commit.com>
|
2024-08-09 09:21:21
| 1
| 365
|
Chuang Men
|
78,851,937
| 3,244,618
|
Bacnet Bac0 issue while reading device from outside network
|
<p>I am using this library (and BACnet) for the first time, so maybe I am doing it totally wrong. The situation: in a local network, under IP 192.168.1.215, there is an Eclypse ECY-TU203-96B019, and I have access to the router at address 10.x.y.z, where I have set up port forwarding to reach 192.168.1.215.
I was told that via this Eclypse I am able to read from many devices in this network (it is some kind of IP driver, at least from what I heard).
While using nmap I see:</p>
<pre><code>sudo nmap --script bacnet-info --script-args full=yes -sU -n -sV -p 47808 10.x.y.z
Starting Nmap 7.95 ( https://nmap.org ) at 2024-08-09 09:34 CEST
Nmap scan report for 10.x.y.z
Host is up (0.25s latency).
PORT STATE SERVICE VERSION
47808/udp open bacnet
| bacnet-info:
| Vendor ID: Distech Controls Inc. (364)
| Vendor Name: Distech Controls, Inc.
| Object-identifier: 1215
| Firmware: 1.14.17191.1
| Application Software: 1.13.19189.607
| Object Name: ECY-TU203-96B019
| Model Name: ECY-TU203 Rev 1.0A
| Description:
|_ Location:
</code></pre>
<p>so according to that I can assume that router with port forwarding is fine as Object name is fine ( I have a photo from bacnet config of this device ).</p>
<p>So I try with this script:</p>
<pre><code>import BAC0
print(BAC0.version)
bbmdIP = '10.x.y.z:47808'
bbmdTTL = 900
bacnet = BAC0.lite(bbmdAddress=bbmdIP, bbmdTTL=bbmdTTL) #Connect
print(bacnet.vendorName.strValue)
print(bacnet.modelName.strValue)
whois_results = bacnet.whois()
print("WhoIs results:", whois_results)
print(bacnet.devices)
bacnet.discover(networks='known')
</code></pre>
<p>but I get a response:</p>
<pre><code>23.07.03
2024-08-09 10:36:14,549 - INFO | Starting BAC0 version 23.07.03 (Lite)
2024-08-09 10:36:14,549 - INFO | Use BAC0.log_level to adjust verbosity of the app.
2024-08-09 10:36:14,549 - INFO | Ex. BAC0.log_level('silence') or BAC0.log_level('error')
2024-08-09 10:36:14,549 - INFO | Starting TaskManager
2024-08-09 10:36:14,551 - INFO | Using ip : 192.168.1.33 on port 47808 | broadcast : 192.168.1.255
2024-08-09 10:36:14,560 - INFO | Starting app...
2024-08-09 10:36:14,561 - INFO | BAC0 started
2024-08-09 10:36:14,561 - INFO | Registered as Foreign Device
2024-08-09 10:36:14,561 - INFO | Device instance (id) : 3056745
2024-08-09 10:36:14,562 - INFO | Update Local COV Task started (required to support COV)
b'SERVISYS inc.'
b'BAC0 Scripting Tool'
WhoIs results: []
[]
2024-08-09 10:36:18,670 - INFO | Found those networks : set()
2024-08-09 10:36:18,670 - INFO | No BACnet network found, attempting a simple whois using provided device instances limits (0 - 4194303)
</code></pre>
<p>192.168.1.33 is my local machine's IP.
What am I doing wrong? Thanks!</p>
|
<python><bacnet><bac0>
|
2024-08-09 09:00:29
| 0
| 2,779
|
FrancMo
|
78,851,764
| 1,335,606
|
Conversion issues in sql - GREATEST(CEILING())
|
<p>The query below works fine with float values.</p>
<pre><code>import mysql.connector
conn = mysql.connector.connect()
cur = conn.cursor()
cur.execute('SELECT GREATEST(CEILING(Tot_Weight + 3.4),4.5) FROM TestTable order by id asc;')
print(cur.fetchall())
</code></pre>
<p>If I pass parameters to the query, I face conversion issues. I tried with str(Wt1) and str(Wt2) but had no luck.</p>
<pre><code>Wt1 = 7.02
Wt2 = 5.4
cur.execute('SELECT GREATEST(CEILING(Tot_Weight +'+ Wt1 +')', Wt2 +') FROM TestTable order by id asc;')
print(cur.fetchall())
TypeError: can only concatenate str (not "float") to str
</code></pre>
<p>Please somebody help on this.</p>
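<p>For context, the <code>TypeError</code> itself comes from concatenating <code>str</code> and <code>float</code>; independent of that, passing the values as query parameters avoids building the SQL string by hand (a sketch; the <code>execute</code> call is commented out because it needs a live connection):</p>

```python
wt1, wt2 = 7.02, 5.4

# mysql.connector substitutes %s placeholders safely, no casting needed
sql = ("SELECT GREATEST(CEILING(Tot_Weight + %s), %s) "
       "FROM TestTable ORDER BY id ASC;")
# cur.execute(sql, (wt1, wt2))
# print(cur.fetchall())

# the reported error, reproduced in isolation:
try:
    "Tot_Weight + " + wt1
except TypeError as exc:
    print(exc)  # can only concatenate str (not "float") to str
```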
|
<python><sql><mysql>
|
2024-08-09 08:07:17
| 2
| 503
|
user1335606
|
78,851,759
| 8,077,619
|
Create QML grouped properties in PyQt6
|
<p>I am trying to replicate <a href="https://doc.qt.io/qtforpython-6/examples/example_qml_tutorials_extending-qml-advanced_advanced4-Grouped-properties.html" rel="nofollow noreferrer">this</a> PySide6 example to PyQt6. The example creates a grouped property ShoeDescription in Python and exposes it to Qml. Here are the relevant files:</p>
<p>person.py</p>
<pre><code>from PyQt6.QtCore import QObject, pyqtProperty, pyqtSignal
from PyQt6.QtGui import QColor


class ShoeDescription(QObject):
    brand_changed = pyqtSignal()
    size_changed = pyqtSignal()
    price_changed = pyqtSignal()
    color_changed = pyqtSignal()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._brand = ''
        self._size = 0
        self._price = 0
        self._color = QColor()

    @pyqtProperty(str, notify=brand_changed, final=True)
    def brand(self):
        return self._brand

    @brand.setter
    def brand(self, b):
        if self._brand != b:
            self._brand = b
            self.brand_changed.emit()

    @pyqtProperty(int, notify=size_changed, final=True)
    def size(self):
        return self._size

    @size.setter
    def size(self, s):
        if self._size != s:
            self._size = s
            self.size_changed.emit()

    @pyqtProperty(float, notify=price_changed, final=True)
    def price(self):
        return self._price

    @price.setter
    def price(self, p):
        if self._price != p:
            self._price = p
            self.price_changed.emit()

    @pyqtProperty(QColor, notify=color_changed, final=True)
    def color(self):
        return self._color

    @color.setter
    def color(self, c):
        if self._color != c:
            self._color = c
            self.color_changed.emit()


class Person(QObject):
    name_changed = pyqtSignal()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._name = ''
        self._shoe = ShoeDescription()

    @pyqtProperty(str, notify=name_changed, final=True)
    def name(self):
        return self._name

    @name.setter
    def name(self, n):
        if self._name != n:
            self._name = n
            self.name_changed.emit()

    @pyqtProperty(ShoeDescription)
    def shoe(self):
        return self._shoe


class Boy(Person):
    def __init__(self, parent=None):
        super().__init__(parent)


class Girl(Person):
    def __init__(self, parent=None):
        super().__init__(parent)
</code></pre>
<p>birthdayparty.py</p>
<pre><code>from PyQt6.QtCore import QObject, pyqtProperty, pyqtSignal, pyqtClassInfo
from PyQt6.QtQml import QQmlListProperty

from person import Person


@pyqtClassInfo('DefaultProperty', 'guests')
class BirthdayParty(QObject):
    host_changed = pyqtSignal()
    guests_changed = pyqtSignal()

    def __init__(self, parent=None):
        super().__init__(parent)
        self._host = None
        self._guests = []

    @pyqtProperty(Person, notify=host_changed, final=True)
    def host(self):
        return self._host

    @host.setter
    def host(self, h):
        if self._host != h:
            self._host = h
            self.host_changed.emit()

    def guest(self, n):
        return self._guests[n]

    def guestCount(self):
        return len(self._guests)

    def appendGuest(self, guest):
        self._guests.append(guest)
        self.guests_changed.emit()

    @pyqtProperty(QQmlListProperty)
    def guests(self):
        return QQmlListProperty(Person, self, self._guests)
</code></pre>
<p>main.py</p>
<pre><code>from pathlib import Path
import sys

from PyQt6.QtCore import QCoreApplication
from PyQt6.QtQml import QQmlComponent, QQmlEngine, qmlRegisterType, qmlRegisterAnonymousType

from person import Boy, Girl, Person, ShoeDescription
from birthdayparty import BirthdayParty

if __name__ == '__main__':
    app = QCoreApplication(sys.argv)

    qmlRegisterType(Boy, 'People', 1, 0, 'Boy')
    qmlRegisterType(Girl, 'People', 1, 0, 'Girl')
    qmlRegisterType(BirthdayParty, 'People', 1, 0, 'BirthdayParty')
    qmlRegisterAnonymousType(Person, 'People', 1)
    qmlRegisterAnonymousType(ShoeDescription, 'People', 1)

    engine = QQmlEngine()
    engine.addImportPath(str(Path(__file__).parent))

    component = QQmlComponent(engine)
    component.loadFromModule('People', 'Main')
    party = component.create()
    if not party:
        for e in component.errors():
            print(e.toString())

    host = party.host
    print("Host name: ", host.name)
    print("Guests:")
    for guest in party.guests:
        print("\t" + guest.name)
</code></pre>
<p>People/Main.qml</p>
<pre><code>import QtQuick
import People

BirthdayParty {
    host: Boy {
        name: "Bob Jones"
        shoe { size: 12; color: "white"; brand: "Bikey"; price: 90.0 }
    }

    Boy {
        name: "Leo Hodges"
        shoe { size: 10; color: "black"; brand: "Thebok"; price: 59.95 }
    }
    Boy {
        name: "Jack Smith"
        shoe {
            size: 8
            color: "blue"
            brand: "Luma"
            price: 19.95
        }
    }
    Girl {
        name: "Anne Brown"
        shoe.size: 7
        shoe.color: "red"
        shoe.brand: "Job Macobs"
        shoe.price: 699.99
    }
}
</code></pre>
<p>People/qmldir</p>
<pre><code>module People
typeinfo coercion.qmltypes
Main 1.0 Main.qml
</code></pre>
<p>This the error I get when I run main.py</p>
<pre><code>QQmlComponent: Component is not ready
file:......./005/People/Main.qml:8:9: Invalid grouped property access: Property "shoe" with type "", which is neither a value nor an object type
Traceback (most recent call last):
File "................\005\main.py", line 33, in <module>
host = party.host
^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'host'
</code></pre>
<p>I am using qmlRegisterType and qmlRegisterAnonymousType to register the Python classes with QML, but I cannot figure out how to properly expose my ShoeDescription, which is what the error "Property "shoe" with type "", which is neither a value nor an object type" seems to indicate.</p>
|
<python><qml><pyqt6>
|
2024-08-09 08:06:14
| 1
| 303
|
Anonimista
|
78,851,633
| 881,712
|
Equalize values of a list of list
|
<p><em>It's not easy for me to search for similar questions, I guess because the term "equalize" is not correct.</em></p>
<hr />
<p><strong>Scenario</strong></p>
<p>I have a couple of lists of lists (not actually a matrix, since the inner lists can have different lengths).<br />
Let's see an example.<br />
The first list-of-lists data set contains the <code>id</code> of each item (actually, they are the primary key field of a database table):</p>
<pre><code>['a', 'b', 'c', 'd', 'e']
['b', 'e', 'f', 'g']
['h', 'i', 'j']
['b', 'i', 'k', 'l']
</code></pre>
<p>as you can see, each list may have different length, the <code>id</code> can be in one or more lists, but they can appear at most once in the same list.</p>
<p>Then, I have the twin list-of-lists data set, with the same dimensions:</p>
<pre><code>[3.2, 3.2, 5.1, 6.8, 1.7] # -> sum: 20.0
[2.8, 6.7, 8.4, 2.1] # -> sum: 20.0
[9.7, 7.3, 3.0] # -> sum: 20.0
[4.3, 1.8, 8.7, 5.2] # -> sum: 20.0
</code></pre>
<p>Each list has the same sum: 20.0 in this example.<br />
Now I sum the values associated to the same <code>id</code>:</p>
<pre><code>'a' -> 3.2
'b' -> 3.2 + 2.8 + 4.3 = 10.3
'c' -> 5.1
'd' -> 6.8
'e' -> 1.7 + 6.7 = 8.4
'f' -> 8.4
'g' -> 2.1
'h' -> 9.7
'i' -> 7.3 + 1.8 = 9.1
'j' -> 3.0
'k' -> 8.7
'l' -> 5.2
</code></pre>
<p>I also have a separate list with all the above <code>id</code> where to store their sum.</p>
<p><strong>Problem</strong></p>
<p>The goal is to try as much as possible to "equalize" the values of each <code>id</code> (in other words trying to make them as equal as possible) <strong>keeping the sum of each list to the same value</strong> (20.0 in the given example).</p>
<p>[Note: this is not an exercise; I simplified it as much as I could, but it's a real-case scenario. I'm trying to set up the cleaning shifts for my daughter's school]</p>
<p><strong>Expected output</strong>
I cannot show the actual solution of the problem; otherwise I would have already solved it and would not have posted this question.</p>
<p>Anyway, I'm expecting to have the same sum for each list (20.0 in this example) and the sum of each <code>id</code> as equal as possible to each other. Ideally, they should be equal to each other, let's say:</p>
<pre><code>'a' -> x
'b' -> x
'c' -> x
'd' -> x
'e' -> x
'f' -> x
'g' -> x
'h' -> x
'i' -> x
'j' -> x
'k' -> x
'l' -> x
</code></pre>
<p>where <code>x = 20.0 / 12</code> (12 is the number of the <code>id</code>)</p>
<p><strong>Some code</strong></p>
<pre><code>lista_classi = [] # contains the name of the lists (4 in our example)
lista_famiglie = [] # contains all the id
lista_famiglie_turni = [] # will contain the sum associated to each id
matrix_famiglie_classi = [] # list of lists that contains the id(first table above)
matrix_famiglie = [] # list of lists that contains the values, this is actually a matrix, with all the rows: if an id is not present its value is 0
NUMERO_TURNI = 20.0 # the constant sum
conn = pymysql.connect(host=Host, user=User, password=Password, database='school')
cursor = conn.cursor()
# Retrieve the list of all id
sql = "SELECT id FROM anagrafica_famiglie;"
cursor.execute(sql)
lista_famiglie = [i[0] for i in cursor.fetchall()]
lista_famiglie_turni = [None] * len(lista_famiglie)
# Retrieve the number (and the name, not important here) of each list
sql = "SELECT DISTINCT classe FROM anagrafica_alunni ORDER BY classe;"
cursor.execute(sql)
lista_classi = [i[0] for i in cursor.fetchall()]
# For each list...
for c in lista_classi:
# ...retrieve the id that belong to the list
sql = "SELECT DISTINCT anagrafica_famiglie.id FROM anagrafica_famiglie INNER JOIN anagrafica_alunni ON anagrafica_famiglie.famiglia = anagrafica_alunni.famiglia WHERE anagrafica_famiglie.esonero=0 AND anagrafica_alunni.classe='{}';".format(c)
cursor.execute(sql)
result = [i[0] for i in cursor.fetchall()]
matrix_famiglie_classi.append(result)
# ...and the rectangular matrix with all the id (to simplify the indexing)
sql = "SELECT DISTINCT anagrafica_famiglie.id FROM anagrafica_famiglie INNER JOIN anagrafica_alunni ON anagrafica_famiglie.famiglia = anagrafica_alunni.famiglia WHERE anagrafica_famiglie.esonero=0"
cursor.execute(sql)
result = [i[0] for i in cursor.fetchall()]
matrix_famiglie.append(result)
for i in range(len(lista_classi)):
print(lista_classi[i].ljust(10) + "{}".format(len(matrix_famiglie_classi[i])))
# Calculate the initial values (as the above example)
for i in range(len(lista_classi)):
for j in range(len(lista_famiglie)):
if lista_famiglie[j] in matrix_famiglie_classi[i]:
matrix_famiglie[i][j] = NUMERO_TURNI / len(matrix_famiglie_classi[i])
else:
matrix_famiglie[i][j] = 0
</code></pre>
<p>Now a couple of functions to retrieve the maximum and the minimum value(s) across all the ids:</p>
<pre><code>def getMaxShift():
return [index for index, item in enumerate(lista_famiglie_turni) if item == max(lista_famiglie_turni)], max(lista_famiglie_turni)
def getMinShift():
return [index for index, item in enumerate(lista_famiglie_turni) if item == min(lista_famiglie_turni)], min(lista_famiglie_turni)
</code></pre>
<p><strong>What I've tried</strong></p>
<p>My first attempt was like this:</p>
<ol>
<li>calculate the sum of values for each <code>id</code> (done, I guess)</li>
<li>find the maximum value(s) and the minimum one(s) (done, I guess)</li>
</ol>
<p><em>from this point ahead it's just a blind guess</em></p>
<ol start="3">
<li>for each list, check if it contains one or more <code>id</code> included in the maximum or minimum list</li>
<li>if found, decrease or increase their values</li>
<li>adjust (how!?) the other values on the list to keep the sum constant</li>
<li>repeat the process from point 1. until it will "converge", i.e. the difference between the maximum and minimum sum of any <code>id</code> cannot be smaller</li>
</ol>
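<p>To make steps 4–6 concrete, here is a sketch of one way to iterate them: alternately rescale each id's total toward the common target, then rescale each list back to its fixed sum (this is iterative proportional fitting, also known as RAS/Sinkhorn scaling). The membership matrix below is made up for illustration, not my real data:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical membership (NOT my real data): 4 lists x 12 ids,
# each id belonging to exactly 2 lists
support = np.zeros((4, 12))
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
for k, (i, j) in enumerate(pairs):
    support[i, 2 * k] = support[j, 2 * k] = 1.0
    support[i, 2 * k + 1] = support[j, 2 * k + 1] = 1.0

ROW_SUM = 20.0                   # fixed sum of every list
col_target = 4 * ROW_SUM / 12    # ideal total per id

# start from arbitrary positive values on the support
values = support * rng.uniform(0.5, 2.0, size=support.shape)

for _ in range(1000):
    # pull each id's total toward the common target ...
    values = values * (col_target / values.sum(axis=0))
    # ... then restore each list's fixed sum (this is "step 5")
    values = values * (ROW_SUM / values.sum(axis=1, keepdims=True))
```

<p>When an exact solution exists for the membership pattern, the id totals converge to the common target while every list sum stays at 20.0; otherwise the spread still shrinks to whatever the pattern allows.</p>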
<p><strong>Questions</strong></p>
<ol>
<li>is my algorithm correct?</li>
<li>if yes, I'm not able to code point 5: I know the sum and I have fixed some values, but how do I change the others?</li>
</ol>
<hr />
<p>I tried to solve it manually in this way:</p>
<pre class="lang-none prettyprint-override"><code>id sum ideal sum / ideal
a 3.20 6.67 0.48
b 10.30 6.67 1.55
c 5.10 6.67 0.77
d 6.80 6.67 1.02
e 8.40 6.67 1.26
f 8.40 6.67 1.26
g 2.10 6.67 0.32
h 9.70 6.67 1.46
i 9.10 6.67 1.37
j 3.00 6.67 0.45
k 8.70 6.67 1.31
l 5.20 6.67 0.78
</code></pre>
<p>where <code>ideal = 20.0 * 4 / 12</code> (the total over the 4 lists divided by the 12 ids).
Then I divided each value by the last column of the table above:</p>
<pre><code>6.67 2.07 6.67 6.67 1.35 # sum 23.42
1.81 5.32 6.67 6.67 # sum 20.46
6.67 5.35 6.67 # sum 18.68
2.78 1.32 6.67 6.67 # sum 17.44
</code></pre>
<p>that leads to:</p>
<pre><code>a 6.67
b 6.67
c 6.67
d 6.67
e 7.02
f 6.67
g 6.67
h 6.67
i 6.67
j 6.67
k 6.67
l 6.67
</code></pre>
<p>that's <em>very</em> close to the solution from the <code>id</code> point of view (I bet I did some calculation error for <code>e</code>), but the sums of each list are wrong.</p>
<p>What am I doing wrong?</p>
<p><em>I've asked a related question also on MSE: <a href="https://math.stackexchange.com/questions/4956096/evenly-distribute-shifts-among-persons-who-belongs-to-multiple-rooms">https://math.stackexchange.com/questions/4956096/evenly-distribute-shifts-among-persons-who-belongs-to-multiple-rooms</a></em></p>
|
<python><algorithm><optimization>
|
2024-08-09 07:26:12
| 1
| 5,355
|
Mark
|
78,851,583
| 3,555,115
|
Time difference in Seconds between two Timestamp columns in Pandas
|
<p>I need to compute the timestamp difference in seconds between two columns and add it as a new column to the given dataframe:</p>
<pre><code>df =
A B
Wed Jul 31 07:09:48 EDT 2024 Wed Jul 31 07:04:35 EDT 2024
Wed Jul 31 07:26:31 EDT 2024 Wed Jul 31 07:21:04 EDT 2024
Need to add column C with timestamp difference of A and B in seconds
A B C
Wed Jul 31 07:09:48 EDT 2024 Wed Jul 31 07:04:35 EDT 2024
Wed Jul 31 07:26:31 EDT 2024 Wed Jul 31 07:21:04 EDT 2024
</code></pre>
<p>Any suggestions on how to compute this?</p>
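<p>A sketch of what I have in mind (assuming both columns always carry the same timezone abbreviation, which can simply be stripped before parsing, since it cancels out in the difference):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": ["Wed Jul 31 07:09:48 EDT 2024", "Wed Jul 31 07:26:31 EDT 2024"],
    "B": ["Wed Jul 31 07:04:35 EDT 2024", "Wed Jul 31 07:21:04 EDT 2024"],
})

# "EDT" trips up strict parsing; both columns share it, so drop it
fmt = "%a %b %d %H:%M:%S %Y"
a = pd.to_datetime(df["A"].str.replace(" EDT", "", regex=False), format=fmt)
b = pd.to_datetime(df["B"].str.replace(" EDT", "", regex=False), format=fmt)

# subtracting datetime Series gives a Timedelta Series
df["C"] = (a - b).dt.total_seconds()
print(df["C"].tolist())  # [313.0, 327.0]
```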
|
<python><pandas><dataframe>
|
2024-08-09 07:13:12
| 2
| 750
|
user3555115
|
78,851,490
| 10,200,497
|
Is it possible to not get NaN for the first value of pct_change()?
|
<p>My DataFrame is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [20, 30, 2, 5, 10]
}
)
</code></pre>
<p>Expected output is <code>pct_change()</code> of <code>a</code>:</p>
<pre><code> a pct_change
0 20 -50.000000
1 30 50.000000
2 2 -93.333333
3 5 150.000000
4 10 100.000000
</code></pre>
<p>I want to compare <code>df.a.iloc[0]</code> with 40 for the first value of <code>pct_change</code>. If I use <code>df['pct_change'] = df.a.pct_change().mul(100)</code>, the first value is <code>NaN</code>.</p>
<p>My Attempt:</p>
<pre><code>def percent(a, b):
result = ((a - b) / b) * 100
return result.round(2)
df.loc[df.index[0], 'pct_change'] = percent(df.a.iloc[0], 40)
</code></pre>
<p>Is there a better/more efficient way?</p>
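<p>One alternative I was considering: prepend the baseline (40 here) before calling <code>pct_change()</code> and drop it afterwards, so the first row is computed by the same vectorized call instead of a manual patch-up:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': [20, 30, 2, 5, 10]})
baseline = 40

# prepend the baseline so row 0 has something to compare against
s = pd.concat([pd.Series([baseline]), df['a']], ignore_index=True)
df['pct_change'] = s.pct_change().mul(100).iloc[1:].to_numpy()
print(df)
```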
|
<python><pandas><dataframe>
|
2024-08-09 06:47:50
| 2
| 2,679
|
AmirX
|
78,851,387
| 1,361,752
|
Does dask bag preserve order when using sequential dask.bag.map operations
|
<p>It is stated that dask bags do not preserve order. However, the example given for <code>dast.bag.map</code> does something that implies that order is preserved, or at least predictable, in <a href="https://docs.dask.org/en/stable/generated/dask.bag.map.html" rel="nofollow noreferrer">https://docs.dask.org/en/stable/generated/dask.bag.map.html</a></p>
<p>Specifically the example I'm referring to is this:</p>
<pre class="lang-py prettyprint-override"><code>import dask.bag as db
b = db.from_sequence(range(5), npartitions=2)
b2 = db.from_sequence(range(5, 10), npartitions=2)
from operator import add
db.map(add, b, b2).compute()
[5, 7, 9, 11, 13]
</code></pre>
<p>This implies that the two bags are kept aligned, i.e. the second element of <code>b</code> is added to the second element of <code>b2</code>. My question: is this alignment guaranteed? I.e. could you ever get the second element of <code>b</code> added to the third element of <code>b2</code>?</p>
<p>For completeness, I'll ask a related question. Suppose your graph splits and then recombines. Is the alignment guaranteed to be maintained? As a concrete example, suppose I want to compute <code>x**2 + x**3</code> for every element in a bag. Would I be guaranteed to maintain alignment and get the correct answer for each element of the bag <code>x</code> with the following code?</p>
<pre class="lang-py prettyprint-override"><code>import dask.bag as db
x = db.from_sequence(range(5), npartitions=2)
from operator import add, pow
x2 = db.map(pow, x, 2)
x3 = db.map(pow, x, 3)
x2_plus_x3 = db.map(add, x2, x3).compute()
</code></pre>
|
<python><dask>
|
2024-08-09 06:10:18
| 1
| 4,167
|
Caleb
|
78,851,367
| 11,748,924
|
How to Display Fixed Y-axis Range and Hide Non-Integer Values on the Y-axis in a Matplotlib Plot?
|
<p>Given a plot like this:
<a href="https://i.sstatic.net/652NiVDB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/652NiVDB.png" alt="enter image description here" /></a></p>
<p>Here is the code:</p>
<pre><code>#@title Visualizer Function
def plot_delineation_comparison(Xt, yt, Xp, yp, start, stop=None, rec_name = '-', lead_name = '-', pathology_name = '-'):
if stop is None:
stop = -1
Xt = Xt[start:stop]
yt = yt[start:stop]
Xp = Xp[start:stop]
yp = yp[start:stop]
# Get mask of every class for prediction
bl_pred = yp == 0
p_pred = yp == 1
qrs_pred = yp == 2
t_pred = yp == 3
# Get mask of every class for ground truth
bl_true = yt == 0
p_true = yt == 1
qrs_true = yt == 2
t_true = yt == 3
# Create figure with two rows and one column
fig, (ax1, ax2) = plt.subplots(
2,
1,
figsize=(16, 8),
sharex=True,
gridspec_kw={"hspace": 0},
)
# Plotting for prediction
prev_class = None
start_idx = 0
for i in range(stop - start):
current_class = None
if bl_pred[i]:
current_class = 'grey'
elif p_pred[i]:
current_class = 'orange'
elif qrs_pred[i]:
current_class = 'green'
elif t_pred[i]:
current_class = 'purple'
if current_class != prev_class:
if prev_class is not None:
ax2.axvspan(start_idx, i, color=prev_class, alpha=0.5)
start_idx = i
prev_class = current_class
# Fill the last region
if prev_class is not None:
ax2.axvspan(start_idx, stop - start, color=prev_class, alpha=0.5)
# Plotting for ground truth
prev_class = None
start_idx = 0
for i in range(stop - start):
current_class = None
if bl_true[i]:
current_class = 'grey'
elif p_true[i]:
current_class = 'orange'
elif qrs_true[i]:
current_class = 'green'
elif t_true[i]:
current_class = 'purple'
if current_class != prev_class:
if prev_class is not None:
ax1.axvspan(start_idx, i, color=prev_class, alpha=0.5)
start_idx = i
prev_class = current_class
# Fill the last region
if prev_class is not None:
ax1.axvspan(start_idx, stop - start, color=prev_class, alpha=0.5)
# First row for ground truth (X_unseen, y_true)
ax1.plot(Xt, color='blue')
ax1.set_ylabel('Ground Truth')
# draw baseline at y=0
ax1.axhline(y=0, color='red', linestyle='-', lw=0.5)
# Second row for ground truth (X_pred y_pred)
ax2.plot(Xp, color='blue')
ax2.axhline(y=0, color='red', linestyle='-', lw=0.5)
ax2.set_xlim([0, stop - start])
ax2.set_ylabel('Prediction')
ax2.set_xlabel('Index')
# Retrieve the current x-tick locations
current_xticks = ax2.get_xticks()
# Define the new x-tick labels based on absolute start and end
new_xtick_labels = [int(x + start) for x in current_xticks]
with warnings.catch_warnings():
warnings.simplefilter("ignore", UserWarning)
ax2.set_xticklabels(new_xtick_labels)
cm = ConfusionMatrix(actual_vector=yt.flatten(), predict_vector=yp.flatten(), transpose=True)
# Handle if not number type
cm.PPV = [0 if not type(x) == float else x for _,x in cm.PPV.items()]
cm.TPR = [0 if not type(x) == float else x for _,x in cm.TPR.items()]
# Make length of PPV and TPR consistent, fill with zero if not
if len(cm.PPV) < 4:
cm.PPV += [0] * (4 - len(cm.PPV))
if len(cm.TPR) < 4:
cm.TPR += [0] * (4 - len(cm.TPR))
if len(cm.F1) < 4:
# convert F1 to list and fixed it with length 4
cm.F1 = list(cm.F1.values())
cm.F1 += [0] * (4 - len(cm.F1))
notes_list = [
f"Recall",
f"BL : {cm.TPR[0]:.2f}",
f"P : {cm.TPR[1]:.2f}",
f"QRS : {cm.TPR[2]:.2f}",
f"T : {cm.TPR[3]:.2f}",
f"",
f"Rec Name : {rec_name}",
f"Lead : {lead_name}",
f"Pathology : {pathology_name}",
f"Unit : mV",
f"Sample Rate: 360Hz",
f"SNR(Pr./GT): {calculate_snr(Xp, Xt)}dB",
f"",
f"Precision",
f"BL : {cm.PPV[0]:.2f}",
f"P : {cm.PPV[1]:.2f}",
f"QRS : {cm.PPV[2]:.2f}",
f"T : {cm.PPV[3]:.2f}",
]
# notes_list += catatan
ax1.set_title(f"F1-Score | BL: {cm.F1[0]:.2f} | P: {cm.F1[1]:.2f} | QRS: {cm.F1[2]:.2f} | T: {cm.F1[3]:.2f}")
code_font = FontProperties(family='monospace', style='normal', variant='normal', size=8)
for i, note in enumerate(notes_list[:]):
plt.text(1.01, 0.95 - i * 0.1, note, transform=ax1.transAxes, fontsize=10, va='top', ha='left', fontproperties=code_font)
# for i, note in enumerate(notes_list[5:]):
# plt.text(1.01, 0.95 - i * 0.1, note, transform=ax2.transAxes, fontsize=10, va='top', ha='left', fontproperties=code_font)
plt.subplots_adjust(top=0.5)
# add legend with offset
# Create custom Line2D objects with desired colors
custom_lines = [
Line2D([0], [0], color='grey', lw=4, alpha=0.5),
Line2D([0], [0], color='orange', lw=4, alpha=0.5),
Line2D([0], [0], color='green', lw=4, alpha=0.5),
Line2D([0], [0], color='purple', lw=4, alpha=0.5)
]
# Add legend with custom lines
ax1.legend(custom_lines, ['BL', 'P', 'QRS', 'T'], loc='upper left')
plt.show()
</code></pre>
<p>I'm working on visualizing some data using Matplotlib, and I've encountered an issue with how the y-axis is displayed. Specifically, I have a plot where the y-axis shows non-integer values, and I would like to hide these non-integer values so that only integers are displayed.</p>
<p>Additionally, I need to fix the y-axis range between -2 and 2 for both the first and second rows of subplots in the figure. The goal is to maintain consistency in the y-axis range across these subplots.</p>
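<p>A minimal sketch of the two requirements in isolation (the <code>Agg</code> backend is only for a headless run): pin the limits with <code>set_ylim</code> and force integer tick positions with a <code>MultipleLocator</code>. In the function above this would go right after the plotting calls on <code>ax1</code> and <code>ax2</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
for ax in (ax1, ax2):
    ax.set_ylim(-2, 2)                              # fixed y-range
    ax.yaxis.set_major_locator(MultipleLocator(1))  # ticks only at integers
```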
|
<python><matplotlib>
|
2024-08-09 06:00:50
| 0
| 1,252
|
Muhammad Ikhwan Perwira
|
78,851,129
| 7,722,867
|
How to use Async Redis Client + Django in python?
|
<p>I'm trying to create a distributed semaphore using Redis to use in my Django application. This is to limit concurrent requests to an API. I'm using asyncio in redis-py. However, I want to create a connection pool to share across requests since I was getting a "Max clients reached" error. Thus, I created a shared connection pool in <code>settings.py</code> which I use in my semaphore class. However, I then get an error <code>got Future <Future pending> attached to a different loop</code> when I make concurrent requests. This is my code:</p>
<pre><code>import os
import uuid
import asyncio
import time
from typing import Any
import random
from django.conf import settings
from redis import asyncio as aioredis
STARTING_BACKOFF_S = 4
MAX_BACKOFF_S = 16
class SemaphoreTimeoutError(Exception):
"""Exception raised when a semaphore acquisition times out."""
def __init__(self, message: str) -> None:
super().__init__(message)
class RedisSemaphore:
def __init__(
self,
key: str,
max_locks: int,
timeout: int = 30,
wait_timeout: int = 30,
) -> None:
"""
Initialize the RedisSemaphore.
:param redis_url: URL of the Redis server.
:param key: Redis key for the semaphore.
:param max_locks: Maximum number of concurrent locks.
:param timeout: How long until the lock should automatically be timed out in seconds.
:param wait_timeout: How long to wait before aborting attempting to acquire a lock.
"""
self.redis_url = os.environ["REDIS_URL"]
self.key = key
self.max_locks = max_locks
self.timeout = timeout
self.wait_timeout = wait_timeout
self.redis = aioredis.Redis(connection_pool=settings.REDIS_POOL)
self.identifier = "Only identifier"
async def acquire(self) -> str:
"""
Acquire a lock from the semaphore.
:raises SemaphoreTimeoutError: If the semaphore acquisition times out.
:return: The identifier for the acquired semaphore.
"""
czset = f"{self.key}:owner"
ctr = f"{self.key}:counter"
identifier = str(uuid.uuid4())
now = time.time()
start_time = now
backoff = STARTING_BACKOFF_S
while True:
# TODO: Redundant?
if time.time() - start_time > self.wait_timeout:
raise SemaphoreTimeoutError("Waited too long to acquire the semaphore.")
async with self.redis.pipeline(transaction=True) as pipe:
pipe.zremrangebyscore(self.key, "-inf", now - self.timeout)
pipe.zinterstore(czset, {czset: 1, self.key: 0})
pipe.incr(ctr)
counter = (await pipe.execute())[-1]
pipe.zadd(self.key, {identifier: now})
pipe.zadd(czset, {identifier: counter})
pipe.zrank(czset, identifier)
rank = (await pipe.execute())[-1]
print(rank)
if rank < self.max_locks:
return identifier
pipe.zrem(self.key, identifier)
pipe.zrem(czset, identifier)
await pipe.execute()
# Exponential backoff with randomness
sleep_time = backoff * (1 + random.random() * 0.3)
if (sleep_time + time.time() - start_time) > self.wait_timeout:
raise SemaphoreTimeoutError("Waited too long to acquire the semaphore.")
await asyncio.sleep(sleep_time)
backoff = min(backoff * 2, MAX_BACKOFF_S)
async def release(self, identifier: str) -> bool:
"""
Release a lock from the semaphore.
:param identifier: The identifier for the lock to be released.
:return: True if the semaphore was properly released, False if it had timed out.
"""
czset = f"{self.key}:owner"
async with self.redis.pipeline(transaction=True) as pipe:
pipe.zrem(self.key, identifier)
pipe.zrem(czset, identifier)
result = await pipe.execute()
return result[0] > 0
class RedisSemaphoreContext:
def __init__(self, semaphore: RedisSemaphore) -> None:
"""
Initialize the RedisSemaphoreContext.
:param semaphore: An instance of RedisSemaphore.
"""
self.semaphore = semaphore
self.identifier = None
async def __aenter__(self) -> "RedisSemaphoreContext":
"""Enter the async context manager."""
self.identifier = await self.semaphore.acquire()
return self
async def __aexit__(self, exc_type: Any, exc_val: Any, exc_tb: Any) -> None:
"""Exit the async context manager."""
await self.semaphore.release(self.identifier)
</code></pre>
<p>Trace</p>
<pre><code>
File "/Users/.../app/fetchers.py", line 313, in get_page_with_semaphore
async with RedisSemaphoreContext(semaphore):
File "/Users/.../app/redis_semaphore.py", line 123, in __aexit__
await self.semaphore.release(self.identifier)
File "/Users/.../app/redis_semaphore.py", line 102, in release
result = await pipe.execute()
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/client.py", line 1528, in execute
return await conn.retry.call_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/retry.py", line 59, in call_with_retry
return await do()
^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/client.py", line 1371, in _execute_transaction
await self.parse_response(connection, "_")
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/client.py", line 1464, in parse_response
result = await super().parse_response(connection, command_name, **options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/client.py", line 633, in parse_response
response = await connection.read_response()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/asyncio/connection.py", line 541, in read_response
response = await self._parser.read_response(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/_parsers/resp2.py", line 82, in read_response
response = await self._read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/_parsers/resp2.py", line 90, in _read_response
raw = await self._readline()
^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/site-packages/redis/_parsers/base.py", line 219, in _readline
data = await self._stream.readline()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/asyncio/streams.py", line 568, in readline
line = await self.readuntil(sep)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda3/envs/cb-be/lib/python3.12/asyncio/streams.py", line 660, in readuntil
await self._wait_for_data('readuntil')
File "/opt/miniconda3/envs/cb-be/lib/python3.12/asyncio/streams.py", line 545, in _wait_for_data
await self._waiter
RuntimeError: Task <Task pending name='Task-111' coro=<GeneralWebPageFetcher.async_get_pages.<locals>.get_page_with_semaphore() running at /Users/app/fetchers.py:313> cb=[gather.<locals>._done_callback() at /opt/miniconda3/envs/cb-be/lib/python3.12/asyncio/tasks.py:767]> got Future <Future pending> attached to a different loop
</code></pre>
<p>which I then use in my adrf async views.</p>
<p>What am I doing wrong? Is this possible?</p>
|
<python><django><redis><python-asyncio>
|
2024-08-09 03:56:33
| 1
| 488
|
user7722867
|
78,851,069
| 1,896,028
|
Python's America/New_York time offset showing -04:56
|
<p>I'm currently using Python 3.9.6 and have this simple code</p>
<pre><code>import datetime
import pytz
est = pytz.timezone('America/New_York')
myDateTime = datetime.datetime(year=2024, month=8, day=15, hour=17, minute=00, tzinfo=est)
print("America/New_York: ", myDateTime.isoformat())
print("UTC: ", myDateTime.astimezone(pytz.utc).isoformat())
</code></pre>
<p>which gives this result:</p>
<pre><code>America/New_York: 2024-08-15T17:00:00-04:56
UTC: 2024-08-15T21:56:00+00:00
</code></pre>
<p>New York is currently on Daylight Saving Time, so I expected America/New_York to show a time offset of <code>-04:00</code>, but it's showing <code>-04:56</code>. I'm wondering if anyone has an explanation for this.</p>
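<p>For reference: pytz's own docs warn against passing its zones via <code>tzinfo=</code> (that attaches the zone's first historical offset, New York's Local Mean Time of −04:56) and say to use <code>est.localize(datetime(...))</code> instead, which yields the expected −04:00. The standard-library <code>zoneinfo</code> (Python 3.9+) handles <code>tzinfo=</code> directly — a sketch:</p>

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

ny = ZoneInfo("America/New_York")
# safe with zoneinfo: the correct DST offset is picked at construction
dt = datetime(2024, 8, 15, 17, 0, tzinfo=ny)
print(dt.isoformat())  # 2024-08-15T17:00:00-04:00
```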
|
<python><datetime><timezone><pytz>
|
2024-08-09 03:28:54
| 1
| 786
|
Steven
|
78,850,880
| 16,613,735
|
Python cx_oracle concurrent fetch
|
<p>When using Python cx_Oracle - we set a parameter cursor.arraysize = 10000. I am assuming that this means the Python client running on the server receives data "sequentially" in batches of 10K from the Oracle database. Let's say that we are pulling 500Million records and the database can handle the load and the server on which Python is running has enough resources to handle and no issues with network bandwidth. How can we still keep it as 10k but parallelize the fetch to concurrently pull the data, concurrency of 15 let's say. After pulling 10k records, we create files and ftp the files into an object storage. All I am trying to figure out is how to implement concurrency here while fetching the data from the database.</p>
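<p>One pattern I'm considering (a sketch, not tied to any specific driver call): split the key range into 10k-row chunks and hand each chunk to a worker thread, where each worker would open its own connection/cursor with <code>arraysize = 10000</code>. <code>fetch_chunk</code> below is a placeholder for the real per-chunk query (e.g. an <code>OFFSET :start ROWS FETCH NEXT :limit ROWS ONLY</code> clause, or ROWID ranges), file write, and FTP upload:</p>

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 10_000

def chunk_ranges(total_rows, chunk=CHUNK):
    """Yield (offset, limit) pairs covering the full result set."""
    for start in range(0, total_rows, chunk):
        yield start, min(chunk, total_rows - start)

def fetch_chunk(bounds):
    start, limit = bounds
    # Placeholder: in real code, open a per-thread connection, set
    # cursor.arraysize = CHUNK, run the bounded query, write the rows
    # to a file, and upload it to object storage.
    return start, limit

# concurrency of 15, as in the question
with ThreadPoolExecutor(max_workers=15) as pool:
    results = list(pool.map(fetch_chunk, chunk_ranges(45_000)))
```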
|
<python><oracle-database><cx-oracle><python-oracledb>
|
2024-08-09 01:22:28
| 1
| 335
|
Dinakar Ullas
|
78,850,788
| 13,984,609
|
Getting "sqllite3.OperationError: Unable to open database file" sometimes, but other times it works fine (cx_Freeze'd file)
|
<p>I made an app which uses an sqlite3 database. When I run the .py file it works just fine.</p>
<p>Here is the code for the database:</p>
<pre><code>import sqlite3
connection = sqlite3.connect("game.db")
connection.execute("PRAGMA foreign_keys = 1;")
cursor = connection.cursor()
def run(sql):
cursor.execute(sql)
connection.commit()
return [i for i in cursor]
run("CREATE TABLE IF NOT EXISTS accounts (playerName TEXT PRIMARY KEY, playerPassword TEXT);")
run("CREATE TABLE IF NOT EXISTS scores (playerName REFERENCES accounts(playerName), difficulty TEXT, score INTEGER);")
</code></pre>
<p>I froze this program using cx_Freeze.
This is the setup.py:</p>
<pre><code>import cx_Freeze
import pkgutil
from os.path import join as path_join
def get_all_packages():
    return [i.name for i in list(pkgutil.iter_modules()) if i.ispkg] # Return the names of all packages
# base = "Win32GUI" allows your application to open without a console window
executables = [cx_Freeze.Executable('main.py', base="Win32GUI",
target_name="Push-Ups Game", icon=path_join("Images", "Push-Ups Game Logo.ico"))]
packages = ["collections", "encodings", "importlib", "pygame", "cv2", "mediapipe",
"math", "sqlite3", "random", "numpy", "time", "os", "re", "ctypes",
"matplotlib", "logging", "urllib", "packaging", "PIL", "pyparsing",
"html", "cycler", "dateutil", "kiwisolver", "json", "xml", "http", "attr", "sys"]
cx_Freeze.setup(
name="Game",
options={"build_exe":
{"includes": packages,
"excludes": [i for i in get_all_packages() if i not in packages],
"include_files": ["Music/", "Images/", "game.db"]}},
executables=executables
)
</code></pre>
<p>When I run the .exe file, weirdly the database sometimes just works fine, but occasionally it gives me the following error:</p>
<pre><code>sqllite3.OperationError: Unable to open database file
</code></pre>
<p><a href="https://i.sstatic.net/kEA4JFjb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kEA4JFjb.png" alt="enter image description here" /></a></p>
<p>I thought maybe the problem was with creating the .db file, so I removed it and ran the .exe, and it generated the .db without a problem. Running it with the .db already present also worked fine. It only fails occasionally, and I could not find any pattern.</p>
<p>I saw a bunch of questions about this very same error, but I could not find any which had the same problem of the code sometimes working and sometimes not.</p>
<p>Any help is appreciated :)</p>
<hr />
<p><strong>Edit</strong>:</p>
<p>I suspected a permissions problem, so I changed the program to create the database itself (that way it should have all of the permissions it needs).</p>
<p>I changed the code to (so that the .db file location is an absolute path):</p>
<pre><code>from os import mkdir, getcwd
from os.path import join as os_join
import sqlite3
db_dir = os_join(getcwd(), "Database")
try:
connection = sqlite3.connect(os_join(db_dir, "game.db"))
except sqlite3.OperationalError:
mkdir(db_dir)
finally:
connection = sqlite3.connect(os_join(db_dir, "game.db"))
</code></pre>
<p>And deleted the .db file.</p>
<p>An error is still there:</p>
<pre><code>cx_Freeze:Python error in main script: unable to open database file
</code></pre>
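<p>One hypothesis worth testing: <code>sqlite3.connect("game.db")</code> resolves against the current working directory, which is not guaranteed to be the build folder when the frozen .exe is launched (e.g. from a shortcut), so the open would fail only sometimes. A sketch of anchoring the path to the program's own directory instead (names illustrative):</p>

```python
import os
import sys

def app_dir():
    """Directory the program lives in, frozen or not (cx_Freeze sets sys.frozen)."""
    if getattr(sys, "frozen", False):
        return os.path.dirname(sys.executable)
    return os.path.dirname(os.path.abspath(sys.argv[0])) or os.getcwd()

db_path = os.path.join(app_dir(), "game.db")
print(db_path)
```

<p><code>db_path</code> would then be passed to <code>sqlite3.connect()</code> so the same absolute location is used regardless of how the .exe was started.</p>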
|
<python><sqlite><cx-freeze>
|
2024-08-09 00:08:47
| 2
| 504
|
CozyCode
|
78,850,502
| 1,658,617
|
Does shelve write to disk on every change?
|
<p>I wish to use shelve in an asyncio program and I fear that every change will cause the main event loop to stall.</p>
<p>While I don't mind the occasional slowdown of the pickling operation, the disk writes may be substantial.</p>
<p>Every how often does shelve sync to disk? Is it a blocking operation? Do I have to call <code>.sync()</code>?</p>
<p>If I schedule the <code>sync()</code> to run under a different thread, a different asyncio task may modify the shelve at the same time, which violates the requirement of single-thread writes.</p>
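<p>For context, the pattern I'm considering: funnel every shelve operation through <code>asyncio.to_thread</code> guarded by a single <code>asyncio.Lock</code>, so accesses never run concurrently (though they may run on different pool threads over time) and never block the loop. A sketch:</p>

```python
import asyncio
import os
import shelve
import tempfile

class AsyncShelf:
    def __init__(self, path):
        self._db = shelve.open(path)
        self._lock = asyncio.Lock()  # serializes access across tasks

    async def set(self, key, value):
        async with self._lock:
            await asyncio.to_thread(self._db.__setitem__, key, value)

    async def get(self, key):
        async with self._lock:
            return await asyncio.to_thread(self._db.__getitem__, key)

    async def sync(self):
        async with self._lock:
            await asyncio.to_thread(self._db.sync)

async def demo():
    path = os.path.join(tempfile.mkdtemp(), "cache")
    shelf = AsyncShelf(path)
    await shelf.set("answer", 42)
    await shelf.sync()
    return await shelf.get("answer")

result = asyncio.run(demo())
print(result)  # 42
```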
|
<python><python-asyncio><shelve>
|
2024-08-08 21:27:58
| 2
| 27,490
|
Bharel
|
78,850,462
| 325,809
|
N-D rotation of numpy vector
|
<p>I thought this would be easy, but I can't figure it out and Google is not helping. I have found straightforward ways to rotate 2D and 3D tensors, but I can't find a way to rotate N-dimensional vectors. In particular I have an N-dimensional vector (numpy array) and I would like to rotate it by <code>theta</code> radians around an N-dimensional unit vector <code>v</code>.</p>
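<p>Worth noting: for N &gt; 3, a rotation "around a vector" is underdetermined — the orthogonal complement of a single axis has dimension N − 1, and rotations act in 2-planes. What is well-defined is a rotation by <code>theta</code> within the plane spanned by two vectors. A sketch, assuming the interpretation of rotating in the plane spanned by the input vector and <code>v</code>:</p>

```python
import numpy as np

def plane_rotation(a, b, theta):
    """Rotation matrix by theta in the 2-plane spanned by vectors a and b."""
    n1 = a / np.linalg.norm(a)
    n2 = b - (b @ n1) * n1          # Gram-Schmidt: part of b orthogonal to a
    n2 /= np.linalg.norm(n2)
    eye = np.eye(len(a))
    # identity outside the plane, standard 2D rotation inside it
    return (eye
            + (np.cos(theta) - 1) * (np.outer(n1, n1) + np.outer(n2, n2))
            + np.sin(theta) * (np.outer(n2, n1) - np.outer(n1, n2)))

a = np.array([1.0, 0, 0, 0, 0])
b = np.array([0, 1.0, 0, 0, 0])
R = plane_rotation(a, b, np.pi / 2)
print(R @ a)  # rotates a onto b
```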
|
<python><numpy><scipy>
|
2024-08-08 21:09:19
| 0
| 6,926
|
fakedrake
|
78,850,381
| 1,361,752
|
How to export dask HTML high level graph to disk
|
<p>There is a way to generate a HTML high level graph in a jupyter notebook as shown in dasks' documentation: <a href="https://docs.dask.org/en/stable/graphviz.html#high-level-graph-html-representation" rel="nofollow noreferrer">https://docs.dask.org/en/stable/graphviz.html#high-level-graph-html-representation</a></p>
<p>Taking the example from the docs, you put the following code in a jupyter cell</p>
<pre class="lang-py prettyprint-override"><code>import dask.array as da
x = da.ones((15, 15), chunks=(5, 5))
y = x + x.T
y.dask # shows the HTML representation in a Jupyter notebook
</code></pre>
<p>And you get a nice interactive html view of the graph in the jupyter notebook.</p>
<p>My question is whether there is a way to get the HTML for this graph outside of the Jupyter context. My immediate interest is to export a static HTML file to disk as a record of the graph that was executed for a task. I could also see other applications, such as embedding a widget in a GUI.</p>
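<p>Presumably any object Jupyter renders richly exposes the IPython display hook <code>_repr_html_()</code>, so the HTML could be captured outside a notebook — a sketch (I haven't verified that every dask version ships this method on <code>HighLevelGraph</code>, and interactive styling that relies on notebook CSS/JS may not survive in a standalone file):</p>

```python
import os
import tempfile

def save_repr_html(obj, path):
    """Write an object's Jupyter HTML representation to a standalone file."""
    html = obj._repr_html_()  # the same hook Jupyter calls
    with open(path, "w", encoding="utf-8") as f:
        f.write("<!DOCTYPE html><html><body>" + html + "</body></html>")

# intended use: save_repr_html(y.dask, "graph.html")

# stand-in object to illustrate the hook (dask not required here):
class Demo:
    def _repr_html_(self):
        return "<b>15x15 array, 9 chunks</b>"

out = os.path.join(tempfile.mkdtemp(), "graph.html")
save_repr_html(Demo(), out)
```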
|
<python><jupyter><dask>
|
2024-08-08 20:41:58
| 1
| 4,167
|
Caleb
|
78,850,314
| 8,510,149
|
Collect list inside window function with condition, pyspark
|
<p>For each <code>id1</code>, I want to collect a list of all the <code>id2</code> values within the same group whose level is the same or lower in the hierarchy (i.e. whose <code>level</code> number is greater than or equal to the current row's).</p>
<p>To achieve this I use a window function with <code>collect_list</code>. However, I don't see how to express the conditional part. How can that be solved?</p>
<pre><code>
df = spark.createDataFrame([
("A", 0, "M1", "D1"),
("A", 1, "D1", "D2"),
("A", 2, "D2", "D3"),
("A", 3, "D3", "D4"),
("B", 0, "M2", "D5"),
("B", 1, "D4", "D6"),
("B", 2, "D5", "D7")
], ["group_id", "level", "id1", "id2"])
window = Window.partitionBy('group_id').orderBy('level').rowsBetween(
Window.unboundedPreceding, Window.unboundedFollowing
)
df_with_list = df.withColumn(
"list_lower_level",
F.collect_list("id2").over(window)
)
df_with_list.show()
</code></pre>
<p>The output is this:</p>
<pre><code>+--------+-----+---+---+----------------+
|group_id|level|id1|id2|list_lower_level|
+--------+-----+---+---+----------------+
| A| 0| M1| D1|[D1, D2, D3, D4]|
| A| 1| D1| D2|[D1, D2, D3, D4]|
| A| 2| D2| D3|[D1, D2, D3, D4]|
| A| 3| D3| D4|[D1, D2, D3, D4]|
| B| 0| M2| D5| [D5, D6, D7]|
| B| 1| D4| D6| [D5, D6, D7]|
| B| 2| D5| D7| [D5, D6, D7]|
+--------+-----+---+---+----------------+
</code></pre>
<p>However, I want to achive this:</p>
<pre><code>+--------+-----+---+---+----------------+
|group_id|level|id1|id2|list_lower_level|
+--------+-----+---+---+----------------+
| A| 0| M1| D1|[D1, D2, D3, D4]|
| A| 1| D1| D2|[D2, D3, D4]|
| A| 2| D2| D3|[D3, D4]|
| A| 3| D3| D4|[D4]|
| B| 0| M2| D5| [D5, D6, D7]|
| B| 1| D4| D6| [D6, D7]|
| B| 2| D5| D7| [D7]|
+--------+-----+---+---+----------------+
</code></pre>
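<p>For what it's worth, the target output is exactly a running suffix: each row should collect <code>id2</code> from its own level to the end of the group. In Spark that suggests shrinking the frame to <code>rowsBetween(Window.currentRow, Window.unboundedFollowing)</code> (an untested guess on my side). The same semantics in plain pandas, to illustrate what I'm after:</p>

```python
import pandas as pd

df = pd.DataFrame([
    ("A", 0, "M1", "D1"), ("A", 1, "D1", "D2"), ("A", 2, "D2", "D3"),
    ("A", 3, "D3", "D4"), ("B", 0, "M2", "D5"), ("B", 1, "D4", "D6"),
    ("B", 2, "D5", "D7"),
], columns=["group_id", "level", "id1", "id2"])

suffix = {}
for _, g in df.sort_values(["group_id", "level"]).groupby("group_id"):
    ids = g["id2"].tolist()
    for pos, idx in enumerate(g.index):
        suffix[idx] = ids[pos:]   # id2 values from this level downwards

df["list_lower_level"] = pd.Series(suffix)
print(df)
```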
|
<python><pyspark>
|
2024-08-08 20:23:53
| 1
| 1,255
|
Henri
|
78,849,988
| 4,470,365
|
How do I attach the contents of a failed Airflow task log to the failure notification email message?
|
<p>I have an Airflow DAG that runs a BashOperator task. When it fails I get an email with not much detail:</p>
<pre><code>Try 1 out of 1
Exception:
Bash command failed. The command returned a non-zero exit code 1.
Log: Link
Host: 2db56ea2ab34
Mark success: Link
</code></pre>
<p>I'm interested in the details that tell me <em>why</em> my task failed, i.e. the errors I see when I click the Log link. How can I make Airflow attach that log so that the people getting the failure email don't have to click through? I am still on Airflow 2.6.1 but could upgrade if needed as part of troubleshooting this. In case it's relevant, the BashOperator is a <code>docker run</code> command -- is that why my attempt below isn't working? I am aware of <code>DockerOperator</code> but that's a separate matter.</p>
<h2>What I tried</h2>
<p>I tried writing my own versions of a <code>failure_callback()</code> function and was able to get it sending an email, but it couldn't successfully attach the logs.</p>
<pre class="lang-py prettyprint-override"><code>import os
import tempfile

from airflow import DAG
from airflow.operators.bash_operator import BashOperator
from airflow.utils.email import send_email
from airflow.utils.log.logging_mixin import LoggingMixin
from airflow.utils.log.log_reader import TaskLogReader
from datetime import datetime


def failure_callback(context):
    task_instance = context['task_instance']
    log_url = task_instance.log_url
    exception = context.get('exception')

    # Fetch the log content using TaskLogReader
    try:
        log_reader = TaskLogReader()
        log_content, _ = log_reader.read_log_chunks(task_instance, try_number=task_instance.try_number, metadata={})
        log_content = ''.join([chunk['message'] for chunk in log_content[0]])
    except Exception as e:
        log_content = f"Could not fetch log content: {e}"

    subject = f"Airflow Task Failure: {task_instance.task_id}"
    html_content = f"""
    Task: {task_instance.task_id}<br>
    DAG: {task_instance.dag_id}<br>
    Execution Time: {context['logical_date']}<br>
    Log URL: <a href="{log_url}">{log_url}</a><br>
    Exception: {exception}<br>
    """

    # Write log content to a temporary file
    with tempfile.NamedTemporaryFile(delete=False, suffix='.log') as temp_log_file:
        temp_log_file.write(log_content.encode('utf-8'))
        temp_log_file_path = temp_log_file.name

    # Send email with log content as attachment
    send_email(
        to=default_args['email'],
        subject=subject,
        html_content=html_content,
        files=[temp_log_file_path]  # Attach the log file
    )

    os.remove(temp_log_file_path)
</code></pre>
<p>I added <code>on_failure_callback=failure_callback</code> to my DAG definition.</p>
<p><strong>The result</strong></p>
<p>An email is sent with an attachment, but the attached file just says:</p>
<blockquote>
<p>Could not fetch log content: tuple indices must be integers or slices, not str</p>
</blockquote>
<p>And then when I examine the task logs in the Airflow UI it says:</p>
<pre><code>[2024-08-08, 14:41:09 EDT] {file_task_handler.py:522} ERROR - Could not read served logs
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1407, in _run_raw_task
self._execute_task_with_callbacks(context, test_mode)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1558, in _execute_task_with_callbacks
result = self._execute_task(context, task_orig)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1628, in _execute_task
result = execute_callable(context=context)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/operators/bash.py", line 210, in execute
raise AirflowException(
airflow.exceptions.AirflowException: Bash command failed. The command returned a non-zero exit code 1.
</code></pre>
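<p>The "tuple indices" message in the attachment hints at the root cause: each chunk yielded by <code>read_log_chunks</code> appears to be a <code>(hostname, message)</code> tuple, not a dict (this matches the Airflow 2.x <code>TaskLogReader</code> return type, but verify against your version). A minimal sketch of the corrected extraction, with sample tuples standing in for a real reader result:</p>

```python
# Hypothetical stand-in for TaskLogReader.read_log_chunks(...)[0]: in Airflow
# 2.x the first element is a list of (hostname, message) tuples, not dicts --
# hence "tuple indices must be integers or slices, not str" for chunk['message'].
sample_logs = [[("worker-1", "Task started\n"), ("worker-1", "Bash command failed\n")]]

# Unpack each (hostname, message) tuple and keep only the message part.
log_content = ''.join(msg for _host, msg in sample_logs[0])
print(log_content)
```

<p>In the callback above, this would mean replacing <code>chunk['message']</code> with tuple unpacking along these lines.</p>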
|
<python><airflow>
|
2024-08-08 18:48:05
| 1
| 23,346
|
Sam Firke
|
78,849,671
| 8,741,781
|
How should a custom django server error view be tested?
|
<p>In the Django documentation they provide the following example for <a href="https://docs.djangoproject.com/en/dev/topics/http/views/#testing-custom-error-views" rel="nofollow noreferrer">testing custom error views</a>.</p>
<p>Given the following example, how would you go about testing a custom <code>server_error</code> view? I'm able to trigger the custom view by raising <code>ImproperlyConfigured</code>; however, my test result is "E", with the error <code>django.core.exceptions.ImproperlyConfigured</code>.</p>
<pre><code># views.py
from django.views import defaults


def custom_server_error_view(request):
    template_name = 'wholesale/500.html'
    print('this is running')
    return defaults.server_error(
        request,
        template_name=template_name,
    )
</code></pre>
<pre><code># tests.py
from http import HTTPStatus

from django.core import exceptions
from django.test import TestCase, override_settings
from django.urls import path

from . import views


def raises500(request):
    raise exceptions.ImproperlyConfigured


urlpatterns = [
    path('500/', raises500),
]

handler500 = views.custom_server_error_view


@override_settings(ROOT_URLCONF=__name__)
class CustomErrorViewTestCase(TestCase):
    def test_custom_server_error_view(self):
        response = self.client.get('/500/')
        self.assertEqual(response.status_code, HTTPStatus.INTERNAL_SERVER_ERROR)
        self.assertTemplateUsed(response, 'wholesale/500.html')
</code></pre>
|
<python><django>
|
2024-08-08 17:16:00
| 1
| 6,137
|
bdoubleu
|
78,849,429
| 850,781
|
Convert the same local time to UTC on different dates, respecting the local DST status
|
<p>I have several local time points:</p>
<pre><code>import datetime
from zoneinfo import ZoneInfo as zi
wmr = datetime.time(hour=12, tzinfo=zi("GMT"))
ecb = datetime.time(hour=14, minute=15, tzinfo=zi("CET"))
jpx = datetime.time(hour=14, tzinfo=zi("Japan"))
</code></pre>
<p>which I want to convert to UTC times given a date.</p>
<p>E.g.,</p>
<pre><code>local2utc(datetime.datetime(2024,1,1), wmr) ---> "2024-01-01 12:00:00"
local2utc(datetime.datetime(2024,6,1), wmr) ---> "2024-06-01 11:00:00" (DST active)
local2utc(datetime.datetime(2024,1,1), ecb) ---> "2024-01-01 13:15:00"
local2utc(datetime.datetime(2024,6,1), ecb) ---> "2024-06-01 12:15:00" (DST active)
local2utc(datetime.datetime(2024,1,1), jpx) ---> "2024-01-01 05:00:00"
local2utc(datetime.datetime(2024,6,1), jpx) ---> "2024-06-01 05:00:00" (no DST in Japan)
</code></pre>
<p>The following implementation</p>
<pre><code>def local2utc(date, time):
    local_dt = datetime.datetime.combine(date, time)
    tm = local_dt.utctimetuple()
    return datetime.datetime(*tm[:6])
</code></pre>
<p>seems for work for <a href="https://en.wikipedia.org/wiki/Japan_Standard_Time" rel="nofollow noreferrer">Japan</a> and <a href="https://en.wikipedia.org/wiki/Central_European_Time" rel="nofollow noreferrer">CET</a>, but not for <a href="https://en.wikipedia.org/wiki/Greenwich_Mean_Time" rel="nofollow noreferrer">GMT</a>/<a href="https://en.wikipedia.org/wiki/Western_European_Time" rel="nofollow noreferrer">WET</a> (because London is on <a href="https://en.wikipedia.org/wiki/British_Summer_Time" rel="nofollow noreferrer">BST</a>/<a href="https://en.wikipedia.org/wiki/Western_European_Summer_Time" rel="nofollow noreferrer">WEST</a> in the summer).</p>
<p>So, what do I do?</p>
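<p>A sketch of one approach: combine the date with the zone-aware time, then convert with <code>astimezone</code>, which applies the DST rules in effect on that particular date. Note the swap from the fixed-offset <code>GMT</code> zone to the DST-aware <code>Europe/London</code> zone, which is an assumption about what <code>wmr</code> is meant to express:</p>

```python
import datetime
from zoneinfo import ZoneInfo  # requires tzdata on the system

# Assumption: "12:00 London time" is the intent behind wmr;
# ZoneInfo("GMT") is a fixed offset and never observes BST.
wmr = datetime.time(hour=12, tzinfo=ZoneInfo("Europe/London"))

def local2utc(date, t):
    # combine() takes tzinfo from the time argument; astimezone() then
    # applies whatever offset (GMT or BST) is in force on that date.
    local_dt = datetime.datetime.combine(date, t)
    return local_dt.astimezone(datetime.timezone.utc).replace(tzinfo=None)

print(local2utc(datetime.datetime(2024, 1, 1), wmr))  # 2024-01-01 12:00:00
print(local2utc(datetime.datetime(2024, 6, 1), wmr))  # 2024-06-01 11:00:00 (BST active)
```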
|
<python><datetime><timezone><localtime><zoneinfo>
|
2024-08-08 16:11:15
| 2
| 60,468
|
sds
|
78,849,388
| 1,914,781
|
rangemode tozero not work for datetime format
|
<p>I would like the Plotly plot's x-axis to start from 00:00:00. I tried to use <code>rangemode="tozero"</code> as below, but it does not work.</p>
<p>What's the proper way to make the x-axis start from 00:00:00?</p>
<pre><code>import plotly.express as px
import pandas as pd
import io
import numpy as np
import datetime


def save_fig(fig, width, height, pngname):
    fig.write_image(pngname, format="png", width=width, height=height, scale=1)
    print("[[%s]]" % pngname)
    #fig.show()
    return


def main():
    title = "bootchart startup time"
    xlabel = "Timestamp"
    ylabel = "Train"
    width = 800
    height = 400
    pngname = "/media/sf_work/demo.png"

    data = '''
name,start,end
A,5,10
B,7,10
C,4,10
'''
    df = pd.read_csv(io.StringIO(data))  #, delim_whitespace=True)
    for name in ['start', 'end']:
        df[name] = pd.to_datetime(df[name], unit='s')

    fig = px.timeline(df, x_start="start", x_end="end", y="name", color="name")
    layout = dict(
        title=title,
        xaxis_title=xlabel,
        yaxis_title=ylabel,
        title_x=0.5,
        margin=dict(l=10, t=20, r=0, b=40),
        width=width,
        height=height,
        plot_bgcolor='#ffffff',  #'rgb(12,163,135)',
        paper_bgcolor='#ffffff',
        hoverlabel=dict(
            bgcolor="white",
            font_size=8,
            font_family="Rockwell"
        ),
        showlegend=False,
        xaxis=dict(
            rangemode="tozero",
            #autorange="reversed",
            tickmode='array',
            tickangle=-25,
            tickformat='%H:%M:%S',
            #tickvals = xvals,
            #ticktext= xtexts,
            showline=True,
            linecolor='black',
            linewidth=.5,
            ticks='outside',
            zeroline=True,
        ),
        yaxis=dict(
            side='right',
            showline=False,
            linecolor='black',
            linewidth=.5,
            showgrid=False,
            gridcolor='grey',
            gridwidth=.5,
        ),
    )
    fig.update_layout(layout)
    save_fig(fig, width, height, pngname)
    return


main()
</code></pre>
<p><a href="https://i.sstatic.net/65fxoicB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65fxoicB.png" alt="enter image description here" /></a></p>
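<p>An observation that may explain the behaviour (worth verifying against your Plotly version): <code>rangemode</code> applies to linear axes, and "zero" is undefined on a date axis. Since <code>pd.to_datetime(..., unit='s')</code> anchors values at the Unix epoch, one sketch is to pin the axis start with an explicit <code>range</code> instead:</p>

```python
import datetime

# On a date axis, rangemode="tozero" has no effect: "zero" is undefined for
# datetimes. unit='s' values are anchored at the Unix epoch, 1970-01-01
# 00:00:00, so pin the axis start there with an explicit range instead.
epoch = datetime.datetime(1970, 1, 1)              # 00:00:00 on the date axis
axis_end = epoch + datetime.timedelta(seconds=10)  # max 'end' value in the question

xaxis = dict(
    range=[epoch, axis_end],  # replaces rangemode="tozero"
    tickformat='%H:%M:%S',
)
print(xaxis["range"][0].strftime('%H:%M:%S'))  # 00:00:00
```

<p>In the layout above this would replace the <code>rangemode="tozero"</code> entry inside <code>xaxis</code>.</p>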
|
<python><plotly>
|
2024-08-08 16:01:24
| 1
| 9,011
|
lucky1928
|
78,849,387
| 1,028,270
|
How can I add a blank line between list items with Ruamel?
|
<p>I am appending items to a list:</p>
<pre><code>from pathlib import Path

from ruamel.yaml import YAML

yaml = YAML()
yaml.preserve_quotes = True
yaml.allow_duplicate_keys = True
formatted_yaml = yaml.load(Path(my_file).open().read())

for item in my_items:
    # Just want a blank line here
    formatted_yaml.append(None)
    # Before adding this item
    formatted_yaml.append(item)
</code></pre>
<p>This produces:</p>
<pre><code>- blah: 'sdfsdf'
  sdfsd: ''
-
- blah: 'sdfsdf'
  sdfsd: ''
-
- blah: 'sdfsdf'
  sdfsd: ''
-
etc..
</code></pre>
<p>The output I want though is this:</p>
<pre><code>- blah: 'sdfsdf'
  sdfsd: ''
- blah: 'sdfsdf'
  sdfsd: ''
- blah: 'sdfsdf'
  sdfsd: ''
etc..
</code></pre>
<p>Is it possible to just add an empty line like this?</p>
<p>I also tried adding a comment token like this:</p>
<pre><code>ct = tokens.CommentToken("\n", error.CommentMark(0), None)
formatted_yaml.append(ct)
</code></pre>
<p>Which failed: <code>ruamel.yaml.representer.RepresenterError: cannot represent an object: CommentToken('\n', col: 0)</code></p>
|
<python><ruamel.yaml>
|
2024-08-08 16:01:14
| 2
| 32,280
|
red888
|
78,849,304
| 5,527,646
|
Creating nested lists from a single list
|
<p>Suppose I have a list like this:</p>
<pre><code>my_list = ['a_norm', 'a_std', 'a_min', 'a_max', 'a_flag', 'b_norm', 'b_std', 'b_min', 'b_max', 'b_flag', 'c_norm', 'c_std', 'c_min', 'c_max', 'c_flag']
</code></pre>
<p>I want to parse this list and create nested lists within separate lists like this. It is very important that I retain the order of the items:</p>
<pre><code>a_list = [[1, "a_norm"], [2, "a_std"], [3, "a_min"], [4, "a_max"], [5, "a_flag"]]
b_list = [[1, "b_norm"], [2, "b_std"], [3, "b_min"], [4, "b_max"], [5, "b_flag"]]
c_list = [[1, "c_norm"], [2, "c_std"], [3, "c_min"], [4, "c_max"], [5, "c_flag"]]
</code></pre>
<p>I tried the below which, in addition to being convoluted, did not work (endless loop). Any suggestions on how to accomplish my end goal?</p>
<pre><code>a_list = []
b_list = []
c_list = []

i = 1
j = 1
k = 1

while (i < 6) and (j < 6) and (k < 6):
    for item in my_list:
        if 'a' in item:
            a_list.append(i)
            a_list.append(item)
            i + 1
        elif 'b' in my_list:
            b_list.append(j)
            b_list.append(item)
            j + 1
        elif 'c' in my_list:
            c_list.append(k)
            c_list.append(item)
            k + 1
</code></pre>
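<p>Two issues stand out in the attempt: <code>'a' in item</code> is a substring test that also matches <code>'b_max'</code> (since <code>'max'</code> contains an <code>'a'</code>), and <code>i + 1</code> computes a value without assigning it back, so the <code>while</code> condition never changes. A sketch that instead groups by the prefix before the underscore and numbers items per group, preserving order:</p>

```python
from collections import defaultdict

my_list = ['a_norm', 'a_std', 'a_min', 'a_max', 'a_flag',
           'b_norm', 'b_std', 'b_min', 'b_max', 'b_flag',
           'c_norm', 'c_std', 'c_min', 'c_max', 'c_flag']

grouped = defaultdict(list)
for item in my_list:
    prefix = item.split('_', 1)[0]                            # 'a', 'b' or 'c'
    grouped[prefix].append([len(grouped[prefix]) + 1, item])  # per-group counter

a_list, b_list, c_list = grouped['a'], grouped['b'], grouped['c']
print(a_list)  # [[1, 'a_norm'], [2, 'a_std'], [3, 'a_min'], [4, 'a_max'], [5, 'a_flag']]
```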
|
<python><indexing><nested-lists>
|
2024-08-08 15:43:47
| 3
| 1,933
|
gwydion93
|
78,849,242
| 5,106,253
|
How can I make mypy correctly use my type hinting files?
|
<p>I am using a package that does not provide its own type hints so I have built some but cannot see how to get <code>mypy</code> to use them - anyone able to help? Here's my system:</p>
<ul>
<li>Windows</li>
<li>Using poetry</li>
<li>Created type hint files <code>*.pyi</code> for external package <code>bob</code></li>
<li>If I use <code>poetry env info</code> to locate the <code>site-packages</code> directory and in there create a directory <code>bob-stubs</code> and copy my <code>*.pyi</code> type hints, then <code>mypy</code> is happy, but this is hacky of course!</li>
<li>If I create a <code>typings\bob-stubs</code> directory in my own package and copy the <code>*.pyi</code> files there, then <code>mypy</code> appears to try and type check the typing hints (!) and throws <em>new</em> errors!</li>
<li>If I move the type hint files elsewhere and try to set <code>mypy_path</code> in a <code>mypy.ini</code> file, I can see <code>mypy</code> read the <code>mypy.ini</code> file (using <code>-vv</code>) but it seems to ignore the hint files.</li>
</ul>
<p>I've been going around in circles trying get this to work - anyone able to point me in the right directions?</p>
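<p>One detail that may be the culprit (a sketch based on how mypy discovers stubs; verify against your mypy version): the <code>-stubs</code> suffix is only recognised for <em>installed</em> stub packages in <code>site-packages</code> (PEP 561), while directories on <code>mypy_path</code> are searched for the plain module name. So a layout of <code>typings/bob/__init__.pyi</code> (no <code>-stubs</code> suffix) combined with:</p>

```ini
[mypy]
mypy_path = typings
```

<p>may let mypy find the hints without type-checking them as part of your own package. The <code>typings</code> directory name here is just an assumption matching the layout described above.</p>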
|
<python><python-typing><mypy>
|
2024-08-08 15:28:46
| 0
| 749
|
Paul D Smith
|
78,849,150
| 9,363,181
|
Unable to eliminate backslashes from the JSON output via Pyspark
|
<p>Below is my <code>output</code> dataset:</p>
<pre><code>{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:24:43.702Z","data":"{\"selfiePhotoUrl\":\"\"}","id":"selfie","inner":{"isSelfieFraudError":false},"startCount":0,"startedAt":"2024-08-01T11:24:43.698Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:03:01.296Z","data":"{\"country\":\"Botswana\",\"countryCode\":\"BW\",\"region\":\"Gaborone\",\"regionCode\":\"GA\",\"city\":\"Gaborone\",\"zip\":\"\",\"latitude\":-24.6437,\"longitude\":25.9112,\"safe\":true,\"ipRestrictionEnabled\":false,\"vpnDetectionEnabled\":false,\"platform\":\"web_mobile\"}","id":"ip-validation","startCount":0,"startedAt":"2024-08-01T11:03:01.233Z","status":200},{"cacheHit":false,"completedAt":"2024-08-01T11:22:30.609Z","data":"{\"videoUrl\":\"video/14bcc3c7-243f-4ecd-a5d2-edf7193c866e.mp4\",\"spriteUrl\":\"s\",\"selfieUrl\":{\"media\":\"66ab6ff561c966001eafb0b6\",\"isUrl\":true}}","id":"liveness","inner":{},"startCount":1,"startedAt":"2024-08-01T11:22:29.787Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-01T11:24:40.285Z","data":"{\"country\":\"Mexico\",\"countryCode\":\"MX\",\"region\":\"Mexico City\",\"regionCode\":\"CMX\",\"city\":\"Mexico City\",\"zip\":\"03020\",\"latitude\":19.4203,\"longitude\":-99.1193,\"safe\":true,\"ipRestrictionEnabled\":false,\"vpnDetectionEnabled\":false,\"platform\":\"web_mobile\"}","id":"ip-validation","startCount":0,"startedAt":"2024-08-01T11:24:40.251Z","status":200}]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-07-31T20:57:55.762Z","data":"{\"videoUrl\":\"https://something.com\",\"spriteUrl\":\"sel\",\"selfieUrl\":{\"media\":\"66aaa5529e9323001dd5436d\",\"isUrl\":true}}","id":"liveness","inner":{},"startCount":1,"startedAt":"2024-07-31T20:57:54.206Z","status":200}]}
{"_id":"","steps":[]}
{"_id":"","steps":[{"cacheHit":false,"completedAt":"2024-08-08T03:32:28.183Z","data":"{\"videoUrl\":\"https://something.com\",\"spriteUrl\":\"\",\"selfieUrl\":{\"media\":\"66b43c4b338114001ebb6ec9\",\"isUrl\":true}}","id":"liveness","inner":{},"reused":false,"startCount":1,"startedAt":"2024-08-08T03:32:27.717Z","status":200},{"completedAt":"2024-08-08T03:32:35.869Z","connectedDocumentType":"national-id","data":"[{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":333,\"name\":\"UK Most Wanted\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID\",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.457Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":469,\"name\":\"EU Members of Parliament\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.457Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":472,\"name\":\"FBI Most Wanted\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.458Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":467,\"name\":\"CIA World Leaders\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.492Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":471,\"name\":\"US Marshalls 
Service\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.528Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":437,\"name\":\"US Bureau of Industry and Security\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.544Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":455,\"name\":\"US Denied Persons\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.544Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":480,\"name\":\"DEA Most Wanted\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.545Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":466,\"name\":\"CoE Parliamentary Assembly\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.547Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":458,\"name\":\"US OFAC\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID 
\",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.564Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":481,\"name\":\"Personas de Interes\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.583Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":439,\"name\":\"INTERPOL Red Notices\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.599Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":463,\"name\":\"Swiss SECO Sanctions\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.628Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":470,\"name\":\"UN Consolidated Sanctions\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.637Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":468,\"name\":\"Banco Interamericano de Desarrollo\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.638Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":430,\"name\":\"GB Consolidated 
List of Targets\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.651Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":328,\"name\":\"UK Bank of England Sanctions list\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.665Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":417,\"name\":\"AU Gov Dept of Foreign Trade and Affairs\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID \",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.671Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":843,\"name\":\"EU financial sanctions\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID\",\"dateOfBirth\":\"17-12-1963\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.679Z\"},{\"dataSource\":\"document-data\",\"country\":\"PE\",\"documentType\":\"national-id\",\"watchlist\":{\"id\":1026,\"name\":\"SAT 69B\",\"watchlistType\":\"basic\"},\"searchParams\":{\"documentNumber\":\"43265524\",\"fullName\":\"DAVID\",\"dateOfBirth\":\"17-12-2332\"},\"searchResult\":null,\"searchedAt\":\"2024-08-08T03:32:34.724Z\"}]","dataSource":"document-data","id":"watchlists","reused":false,"startCount":1,"startedAt":"2024-08-08T03:32:31.13Z","status":200}]}
</code></pre>
<p>Now, when I display the data, it shows proper records without any backslashes. However, when I write this data as <code>.json</code> files it adds slashes as can be seen in the data above.</p>
<p>To fix this, I used the <code>.option("escape", "")</code> writer option, but it didn't work.</p>
<p>The <code>steps</code> column is typed as a <code>string</code> in the schema, which is why backslashes are added in the output, but I want to remove them.</p>
<p>Any suggestions are appreciated</p>
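<p>For what it's worth, the backslashes are not added by the writer so much as required by JSON itself: because <code>steps[].data</code> is a <em>string</em> field whose value happens to contain JSON, any quotes inside it must be escaped when the row is serialised. The usual cure is to parse the string into a struct before writing (e.g. with <code>from_json</code> and an explicit schema, an assumption about the intent here). A plain-Python illustration of why the escaping appears:</p>

```python
import json

# The inner JSON is stored as a *string* field, exactly like steps[].data.
inner = json.dumps({"selfiePhotoUrl": ""})
outer = json.dumps({"data": inner})

print(outer)  # {"data": "{\"selfiePhotoUrl\": \"\"}"}
# Escapes vanish only when the inner value is a real object, not a string:
print(json.dumps({"data": {"selfiePhotoUrl": ""}}))
```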
|
<python><json><apache-spark><pyspark><apache-spark-sql>
|
2024-08-08 15:05:08
| 0
| 645
|
RushHour
|
78,849,149
| 1,264,304
|
How to get the number of background tasks currently running in a FastAPI application?
|
<p>The below <code>run_tasks</code> FastAPI route handler spawns a background task on each HTTP call to the <code>/run-tasks</code> endpoint.</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import time

from fastapi import APIRouter
from starlette.background import BackgroundTasks

router = APIRouter()


async def background_task(sleep_time: int, task_id: int):
    print(f"Task {task_id} started")
    await asyncio.sleep(sleep_time)  # Simulate a long-running task
    print(f"Task {task_id} completed")


@router.post("/run-tasks")
async def run_tasks(background_tasks: BackgroundTasks, sleep_time: int):
    background_tasks.add_task(background_task, sleep_time, int(time.time()))
    print(len(background_tasks.tasks))
    return {"message": "NEW SLEEP query arg background task started"}
</code></pre>
<p>When a <code>SIGTERM</code> signal is sent to the process running this FastAPI app, the following sequence is observed in the logs:</p>
<ol>
<li>"Waiting for background tasks to complete" appears.</li>
<li>Afterward, messages like "Task {some_id} completed" are logged.</li>
<li>Finally, the log shows "Waiting for application shutdown", and the
application then shuts down.</li>
</ol>
<p>Is there a way to get how many background tasks are currently running? I'd like to print the number of remaining tasks after each task completion (i.e., after <code>print(f"Task {task_id} completed")</code>).</p>
<p>My goal is to see '0' at the end, so that I can be sure all background tasks have finished before application shutdown.</p>
<p>I understand that a FastAPI app running in Python should wait for all background tasks to finish. However, I want to ensure this also happens when the app runs inside a container within a Kubernetes pod. This is crucial for correctly tuning the <code>terminationGracePeriodSeconds</code> parameter of the pod (see the relevant <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">Termination of Pods</a> documentation).</p>
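<p>Starlette does not expose a public running-task counter, so one option (a sketch; the counter name and structure are my own invention) is to track it yourself inside the task body. The <code>try/finally</code> guarantees the decrement even if the task raises:</p>

```python
import asyncio

running_tasks = 0  # hypothetical module-level counter


async def background_task(sleep_time: float, task_id: int):
    global running_tasks
    running_tasks += 1
    print(f"Task {task_id} started ({running_tasks} running)")
    try:
        await asyncio.sleep(sleep_time)  # simulate a long-running task
    finally:
        running_tasks -= 1
        print(f"Task {task_id} completed ({running_tasks} still running)")


async def main():
    # Stand-in for several /run-tasks calls; gather waits for all of them,
    # much like the server's shutdown phase does for background tasks.
    await asyncio.gather(*(background_task(0.01 * (i + 1), i) for i in range(3)))
    print(f"remaining: {running_tasks}")  # remaining: 0


asyncio.run(main())
```

<p>Seeing the count reach 0 before "Waiting for application shutdown" is a reasonable signal for tuning <code>terminationGracePeriodSeconds</code>.</p>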
|
<python><python-asyncio><fastapi><starlette><graceful-shutdown>
|
2024-08-08 15:04:56
| 2
| 11,003
|
rokpoto.com
|