QuestionId (int64, 74.8M to 79.8M) | UserId (int64, 56 to 29.4M) | QuestionTitle (string, 15 to 150 chars) | QuestionBody (string, 40 to 40.3k chars) | Tags (string, 8 to 101 chars) | CreationDate (string date, 2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0 to 44) | UserExpertiseLevel (int64, 301 to 888k) | UserDisplayName (string, 3 to 30 chars)
|---|---|---|---|---|---|---|---|---|
78,351,027
| 940,259
|
Getting ErrorCode:AuthorizationPermissionMismatch when listing blobs from Python while AZ CLI works with the same creds
|
<p>I've got an existing storage account / container where I can list all the blobs using AZ CLI:</p>
<pre><code>[ ~ ]$ az storage blob list --account-name storageaccount20122022 --container-name test
There are no credentials provided in your command and environment, we will query for account key for your storage account.
It is recommended to provide --connection-string, --account-key or --sas-token in your command as credentials.
You also can add `--auth-mode login` in your command to use Azure Active Directory (Azure AD) for authorization if your login account is assigned required RBAC roles.
For more information about RBAC roles in storage, visit https://docs.microsoft.com/azure/storage/common/storage-auth-aad-rbac-cli.
In addition, setting the corresponding environment variables can avoid inputting credentials in your command. Please use --help to get more information about environment variable usage.
[
{
"container": "test",
"content": "",
...
}
]
</code></pre>
<p>I want to do the same from Python, this is my script:</p>
<pre class="lang-py prettyprint-override"><code>import sys

from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient


def list_blobs_using_cli_credential(account_name, container_name):
    credential = DefaultAzureCredential()  # Also tried AzureCliCredential()
    blob_service_client = BlobServiceClient(
        account_url=f"https://{account_name}.blob.core.windows.net",
        credential=credential
    )
    container_client = blob_service_client.get_container_client(container_name)
    print(f"Listing blobs in {account_name}/{container_name} ...")
    try:
        blobs = container_client.list_blobs()
        for blob in blobs:
            print(blob.name)
    except Exception as e:
        print(f"Error listing blobs: {e}")


if __name__ == "__main__":
    account_name = sys.argv[1]
    container_name = sys.argv[2]
    list_blobs_using_cli_credential(account_name, container_name)
</code></pre>
<p>However when I run that in the same shell I get an error:</p>
<pre><code>[ ~ ]$ python list-blobs.py storageaccount20122022 test
Listing blobs in storageaccount20122022/test ...
Error listing blobs: This request is not authorized to perform this operation using this permission.
RequestId:466b2647-201e-0022-13fd-918e18000000
Time:2024-04-19T01:59:22.6413231Z
ErrorCode:AuthorizationPermissionMismatch
Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>AuthorizationPermissionMismatch</Code><Message>This request is not authorized to perform this operation using this permission.
RequestId:466b2647-201e-0022-13fd-918e18000000
Time:2024-04-19T01:59:22.6413231Z</Message></Error>
</code></pre>
<p>I can list the storage accounts and the containers in each account from Python, but when it comes to dealing with the blobs themselves (list, delete, upload) I can't do anything from Python, while the same operations work from AZ CLI.</p>
<p>I tried two different, completely unrelated Azure accounts (in one of them I'm the owner), but the result is the same.</p>
<p>What is AZ CLI doing, what other permissions or roles does it automatically acquire, to do the Blob ops? And how can I do the same in my Python code?</p>
|
<python><azure>
|
2024-04-19 02:10:54
| 1
| 1,420
|
MLu
|
78,350,937
| 390,388
|
Spacy textcat multilabel config validation error
|
<p>I am trying to train a spacy textcat_multilabel model. I thought I had everything set up correctly, but I continue to get a validation error.</p>
<p>This is the label section of my config:</p>
<pre><code>[components.textcat_multilabel]
factory = "textcat_multilabel"
scorer = {"@scorers": "spacy.textcat_multilabel_scorer.v2"}
threshold = 0.5
labels = ["Operational (Frontline)", "Certified/Technical", "Administrative (General)", "Corporate (HR/Finance/Procurement)", "Digital (Applications)", "Digital (ICT)", "Communication and Engagement", "Environmental/Scientific", "Leadership/Management/Coaching/Mentoring", "Policy/Legislation/Regulatory", "Cultural Capability", "Project Management", "Workplace Health and Safety", "Analytical (Data/GIS/Modelling)", "Other"]
</code></pre>
<p>This command</p>
<pre><code>python -m spacy train .\config.cfg --output ..\output --paths.tain .\train.spacy --paths.dev .\dev.spacy
</code></pre>
<p>throws this error</p>
<pre><code>=========================== Initializing pipeline ===========================
✘ Config validation error
textcat_multilabel -> labels extra fields not permitted
{'nlp': <spacy.lang.en.English object at 0x00000210C3758B10>, 'name': 'textcat_multilabel', 'labels': ['Operational (Frontline)', 'Certified/Technical', 'Administrative (General)',
'Corporate (HR/Finance/Procurement)', 'Digital (Applications)', 'Digital (ICT)', 'Communication and Engagement', 'Environmental/Scientific', 'Leadership/Management/Coaching/Mentor
ing', 'Policy/Legislation/Regulatory', 'Cultural Capability', 'Project Management', 'Workplace Health and Safety', 'Analytical (Data/GIS/Modelling)', 'Other'], 'model': {'@architec
tures': 'spacy.TextCatBOW.v2', 'exclusive_classes': False, 'ngram_size': 1, 'no_output_layer': False, 'nO': None}, 'scorer': {'@scorers': 'spacy.textcat_multilabel_scorer.v2'}, 'threshold': 0.5, '@factories': 'textcat_multilabel'}
</code></pre>
<p>Have I got this all wrong? Is there another way to specify labels in the config?</p>
<p>Thanks!</p>
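For what it's worth, the error message says <code>labels</code> is an "extra field" on the component block itself. A hedged guess (an assumption, not a verified fix): spaCy expects labels to come either from the training data or from the <code>[initialize]</code> block rather than from the factory config, along these lines (label list abbreviated):

```ini
; hypothetical sketch: move labels out of the component block
[components.textcat_multilabel]
factory = "textcat_multilabel"
scorer = {"@scorers": "spacy.textcat_multilabel_scorer.v2"}
threshold = 0.5

[initialize.components.textcat_multilabel]
labels = ["Operational (Frontline)", "Certified/Technical", "Other"]
```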
|
<python><spacy>
|
2024-04-19 01:27:21
| 1
| 43,620
|
John
|
78,350,841
| 2,270,422
|
Mypy complains about poetry packages I included from the project subdirectories in src
|
<p>Here are the poetry packages in my pyproject.toml:</p>
<pre><code>packages = [
{include = "api", from = "src"},
{include = "another_api", from = "src"},
{include = "infra"}
]
</code></pre>
<p>And when I import some symbol in my <code>api</code> package like <code>from api.constants import BEST_MYPY_QUESTION</code>, I get the typical error of:</p>
<pre><code>src/api/main.py:1: error: Skipping analyzing "api.constants": module is installed, but missing library stubs or py.typed marker [import-untyped]
</code></pre>
<p>Here is my code in <code>src/api/main.py</code></p>
<pre><code>from api.constants import BEST_MYPY_QUESTION
print(BEST_MYPY_QUESTION)
</code></pre>
<p>The usual way to solve such an error is to run mypy with the <code>--ignore-missing-imports</code> option; however, I'd rather not do that, because this code should be type checked: I already have the source code to check against.</p>
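One hedged possibility (an assumption, not a verified fix): since Poetry installs the packages, mypy may be analyzing the installed copies, which count as untyped unless they ship a PEP 561 <code>py.typed</code> marker. Two common remedies are adding an empty <code>py.typed</code> file inside each package, or pointing mypy at the sources directly:

```toml
# pyproject.toml -- hypothetical sketch; paths assume the src layout above
[tool.mypy]
mypy_path = "src"
explicit_package_bases = true
```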
|
<python><python-3.x><mypy><python-poetry>
|
2024-04-19 00:30:01
| 0
| 685
|
masec
|
78,350,697
| 13,605,694
|
Hosting a Django application on Azure App Service
|
<p>After following the official <a href="https://learn.microsoft.com/en-us/azure/app-service/tutorial-python-postgresql-app" rel="nofollow noreferrer">host Python with Postgres tutorial</a>, and modifying my GitHub Actions file because my Django app isn't in the root of the repo, I get a 404 error when trying to access the site.</p>
<p>Here is my settings.py:</p>
<pre class="lang-py prettyprint-override"><code>"""
Django settings for server project.

Generated by 'django-admin startproject' using Django 5.0.3.

For more information on this file, see
https://docs.djangoproject.com/en/5.0/topics/settings/

For the full list of settings and their values, see
https://docs.djangoproject.com/en/5.0/ref/settings/
"""
from decouple import config
from os import environ
from google.oauth2 import service_account
from pathlib import Path
from django.conf import global_settings

# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent

# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/5.0/howto/deployment/checklist/

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = config(
    "SECRET_KEY",
    default="django-insecure-g8khpcexyyb0q@p40^d5#r_j#ezf%(-90r-y^2@x1)2$wpch9+",
)

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = config("DEBUG", default=False, cast=bool)

ALLOWED_HOSTS = (
    [environ["WEBSITE_HOSTNAME"]]
    if "WEBSITE_HOSTNAME" in environ
    else config(
        "ALLOWED_HOSTS", cast=lambda v: [s.strip() for s in v.split(",")], default=[]
    )
)

CORS_ALLOWED_ORIGINS = config(
    "CORS_ALLOWED_ORIGINS",
    cast=lambda v: [s.strip() for s in v.split(",")],
    default="http://localhost:5173,http://127.0.0.1:5173",
)

# Application definition

INSTALLED_APPS = [
    "django.contrib.admin",
    "django.contrib.auth",
    "django.contrib.contenttypes",
    "django.contrib.sessions",
    "django.contrib.messages",
    "django.contrib.staticfiles",
    "rest_framework",
    "rest_framework.authtoken",
    "djoser",
    "corsheaders",
    "drf_spectacular",
    "accounts.apps.AccountsConfig",
    "videos.apps.VideosConfig",
]

MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",
    "django.contrib.sessions.middleware.SessionMiddleware",
    "corsheaders.middleware.CorsMiddleware",
    "django.middleware.common.CommonMiddleware",
    "django.middleware.csrf.CsrfViewMiddleware",
    "django.contrib.auth.middleware.AuthenticationMiddleware",
    "django.contrib.messages.middleware.MessageMiddleware",
    "django.middleware.clickjacking.XFrameOptionsMiddleware",
]

ROOT_URLCONF = "server.urls"

TEMPLATES = [
    {
        "BACKEND": "django.template.backends.django.DjangoTemplates",
        "DIRS": [],
        "APP_DIRS": True,
        "OPTIONS": {
            "context_processors": [
                "django.template.context_processors.debug",
                "django.template.context_processors.request",
                "django.contrib.auth.context_processors.auth",
                "django.contrib.messages.context_processors.messages",
            ],
        },
    },
]

WSGI_APPLICATION = "server.wsgi.application"

# Database
# https://docs.djangoproject.com/en/5.0/ref/settings/#databases
DATABASES = {
    "default": {
        "ENGINE": config("DB_ENGINE", default="django.db.backends.sqlite3"),
        "NAME": config("DB_NAME", default=BASE_DIR / "db.sqlite3"),
        "USER": config("DB_USER", default=""),
        "PASSWORD": config("DB_PASSWORD", default=""),
        "HOST": config("DB_HOST", default=""),
        "PORT": config("DB_PORT", default=""),
    }
}

# Password validation
# https://docs.djangoproject.com/en/5.0/ref/settings/#auth-password-validators
AUTH_PASSWORD_VALIDATORS = [
    {
        "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
    },
    {
        "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
    },
]

REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework.authentication.TokenAuthentication",
        "rest_framework.authentication.SessionAuthentication",
        "rest_framework.authentication.BasicAuthentication",  # simple cmd line tools can access the API
    ],
    "DEFAULT_SCHEMA_CLASS": "drf_spectacular.openapi.AutoSchema",
    "DEFAULT_PAGINATION_CLASS": "rest_framework.pagination.LimitOffsetPagination",
    "PAGE_SIZE": 100,
}

SPECTACULAR_SETTINGS = {
    "TITLE": "Video API",
    "DESCRIPTION": "Amalitech Video API",
    "VERSION": "1.0.0",
    "SERVE_INCLUDE_SCHEMA": False,
    # OTHER SETTINGS
}

DJOSER = {"PASSWORD_RESET_CONFIRM_URL": "password-reset-confirm/{uid}/{token}"}

EMAIL_HOST = config("EMAIL_HOST", default="localhost")
EMAIL_HOST_PASSWORD = config("EMAIL_HOST_PASSWORD", default="")
EMAIL_HOST_USER = config("EMAIL_HOST_USER", default="")
EMAIL_PORT = config("EMAIL_PORT", default=25, cast=int)
EMAIL_USE_TLS = config("EMAIL_USE_TLS", default=False, cast=bool)

# Internationalization
# https://docs.djangoproject.com/en/5.0/topics/i18n/
LANGUAGE_CODE = "en-us"
TIME_ZONE = "UTC"
USE_I18N = True
USE_TZ = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/5.0/howto/static-files/
STATIC_URL = "static/"
STATIC_ROOT = BASE_DIR / "static"

SESSION_ENGINE = "django.contrib.sessions.backends.cache"
STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"

DEFAULT_FILE_STORAGE = config(
    "DEFAULT_FILE_STORAGE", default=global_settings.DEFAULT_FILE_STORAGE
)
GS_CREDENTIALS = service_account.Credentials.from_service_account_file(
    BASE_DIR / "serviceaccount.json"
)
GS_BUCKET_NAME = config("GS_BUCKET_NAME", default="")
GS_DEFAULT_ACL = config("GS_DEFAULT_ACL", default="")

# Default primary key field type
# https://docs.djangoproject.com/en/5.0/ref/settings/#default-auto-field
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
</code></pre>
<p>And here is the GitHub Action used to deploy to App Service:</p>
<pre class="lang-yaml prettyprint-override"><code># Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions
# More info on Python, GitHub Actions, and Azure App Service: https://aka.ms/python-webapps-actions

name: Build and deploy Python app to Azure Web App - plvids

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./apps/server
    environment:
      name: 'Production'
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.11'

      - name: Create settings.ini file
        run: |
          echo [settings] >> settings.ini
          echo "ALLOWED_HOSTS = ${{ secrets.ALLOWED_HOSTS }}" >> settings.ini
          echo "SECRET_KEY = ${{ secrets.SECRET_KEY }}" >> settings.ini
          echo "DB_NAME = ${{ secrets.DBNAME }}" >> settings.ini
          echo "DB_USER = ${{ secrets.DBUSER }}" >> settings.ini
          echo "DB_PASSWORD = ${{ secrets.DBPASSWORD }}" >> settings.ini
          echo "DB_ENGINE = ${{ secrets.DB_ENGINE }}" >> settings.ini
          echo "DB_HOST = ${{ secrets.DBHOST }}" >> settings.ini
          echo "DB_PORT = ${{ secrets.DBPORT }}" >> settings.ini
          echo "DEFAULT_FILE_STORAGE = ${{ secrets.DEFAULT_FILE_STORAGE }}" >> settings.ini
          echo "EMAIL_HOST = ${{ secrets.EMAIL_HOST }}" >> settings.ini
          echo "EMAIL_PORT = ${{ secrets.EMAIL_PORT }}" >> settings.ini
          echo "EMAIL_HOST_USER = ${{ secrets.EMAIL_HOST_USER }}" >> settings.ini
          echo "EMAIL_HOST_PASSWORD = ${{ secrets.EMAIL_HOST_PASSWORD }}" >> settings.ini
          echo "GS_BUCKET_NAME = ${{ secrets.EMAIL_HOST_PASSWORD }}" >> settings.ini
          echo "GS_DEFAULT_ACL = ${{ secrets.GS_DEFAULT_ACL }}" >> settings.ini
          echo '${{secrets.GS_SERVICE_ACCOUNT}}' >> serviceaccount.json
        shell: bash

      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Install dependencies
        run: pip install -r requirements.txt

      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)

      - name: Zip artifact for deployment
        run: zip release.zip ./* -r

      - name: Upload artifact for deployment jobs
        uses: actions/upload-artifact@v3
        with:
          name: python-app
          path: |
            ./apps/server/release.zip
            !venv/

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    permissions:
      id-token: write # This is required for requesting the JWT
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: python-app

      - name: Unzip artifact for deployment
        run: unzip release.zip

      - name: Login to Azure
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.*** }}
          tenant-id: ${{ secrets.*** }}
          subscription-id: ${{ secrets.*** }}

      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: '***'
          slot-name: 'Production'
</code></pre>
<p>The logs in App Service don't show anything, and the build logs show a successful build without any errors.</p>
<p>This is a <a href="https://github.com/ayitinya/video-platform" rel="nofollow noreferrer">link to the repository</a></p>
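As a point of comparison (a hypothetical guess, not a confirmed diagnosis): a 404 right after an otherwise successful deployment often means App Service's default startup command can't locate the WSGI module because the project isn't at the content root. A startup command along these lines (Configuration > General settings > Startup Command; the <code>apps/server</code> path and <code>server.wsgi</code> module name are assumptions taken from the workflow above) might be needed:

```shell
gunicorn --bind=0.0.0.0 --timeout 600 --chdir apps/server server.wsgi
```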
|
<python><django><azure><continuous-integration><azure-web-app-service>
|
2024-04-18 23:15:04
| 1
| 392
|
ayitinya
|
78,350,601
| 16,406
|
How to avoid "ImportError: attempted relative import with no known parent package" error
|
<p>Background: I have a bunch of small-to-medium python programs that I'm trying to simplify by factoring out common code into a module that all the programs import.</p>
<p>The problem I run into is that when I put the common code into <code>common.py</code> in the same directory as all the programs, and do <code>from . import common</code>, I get the error</p>
<blockquote>
<p>ImportError: attempted relative import with no known parent package</p>
</blockquote>
<p>Which doesn't make a lot of sense. From my (limited) understanding of python, a "module" is just a file with a <code>.py</code> extension and a "package" is just a directory containing modules. All of my files are in one directory, so isn't that the "package"? How is it even possible to have "no known parent package" -- by definition all the files are in a directory and that is a package. Various places talk about an <code>__init__.py</code> file, so I've tried adding that to the directory, but that makes no difference.</p>
<p>So I guess my question is: what exactly is a Python package, and when exactly is a directory NOT a package? What does it mean to not have a parent package, and how does that occur? I've looked at various introductory Python documents like <a href="https://www.udacity.com/blog/2021/01/what-is-a-python-package.html" rel="nofollow noreferrer">What is a python package</a> as well as Python documentation like <a href="https://docs.python.org/3/tutorial/modules.html#packages" rel="nofollow noreferrer">Python packages</a> (which mentions the <code>__init__.py</code> file), but I can't seem to find anything that really explains what is actually going on under the hood, or how to understand what this error means and how to fix it.</p>
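To make the "no known parent package" behavior concrete, here is a self-contained sketch (the names <code>myprograms</code>, <code>common</code>, and <code>prog</code> are hypothetical) that builds a package in a temporary directory and runs the same file both ways. Executed directly, Python treats the file as the top-level script, so it has no parent package and the relative import fails; executed with <code>-m</code> from the parent directory, the package context exists and the import succeeds:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Build a minimal package: myprograms/{__init__.py, common.py, prog.py}
    pkg = Path(tmp) / "myprograms"
    pkg.mkdir()
    (pkg / "__init__.py").write_text("")
    (pkg / "common.py").write_text("VALUE = 42\n")
    (pkg / "prog.py").write_text("from . import common\nprint(common.VALUE)\n")

    # Run the file directly: it becomes the top-level script, no parent package
    direct = subprocess.run([sys.executable, str(pkg / "prog.py")],
                            capture_output=True, text=True)

    # Run it as a module of the package: the relative import now resolves
    as_module = subprocess.run([sys.executable, "-m", "myprograms.prog"],
                               capture_output=True, text=True, cwd=tmp)

print("direct run failed:", direct.returncode != 0)
print("module run printed:", as_module.stdout.strip())
```

So "package" is not a property of the directory alone; it is a property of how the interpreter was asked to load the code.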
|
<python><python-3.x>
|
2024-04-18 22:41:03
| 1
| 127,309
|
Chris Dodd
|
78,350,510
| 7,031,021
|
Running Multiline py files in Azure Machine Learning Studio Notebooks
|
<p>I'd like to know the best practice for running multiline shell commands in an ML notebook.</p>
<p>Here is some pseudocode and how I would run it in a notebook cell:</p>
<pre><code>%%bash
conda activate myenv && \
torchrun \
    --standalone \
    --nnodes=1 \
    --nproc-per-node=$NUM_TRAINERS \
    YOUR_TRAINING_SCRIPT.py --arg1 ...  # train script args
</code></pre>
<p>I am activating the conda env because I noticed that the magic bash function alone doesn't use the correct environment.</p>
<p>Would it be better to use the <code>command</code> function from <code>azure.ai.ml</code>?</p>
<p>I find this way somehow error prone, even with activating conda and using the magic function.</p>
<p>Are there better alternatives?</p>
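As a local illustration of the quoting mechanics only (this does not touch Azure ML; <code>run_cmd</code> is a stand-in for <code>conda run -n myenv torchrun</code>, and the script name and <code>$NUM_TRAINERS</code> value are placeholders): trailing backslashes make the multi-line invocation a single command, and <code>conda run -n &lt;env&gt; ...</code> is often less fragile in non-interactive cells than <code>conda activate</code>, since it doesn't depend on shell init files:

```shell
NUM_TRAINERS=2
run_cmd() { echo "$@"; }  # stand-in for: conda run -n myenv torchrun

cmdline=$(run_cmd \
  --standalone \
  --nnodes=1 \
  --nproc-per-node="$NUM_TRAINERS" \
  YOUR_TRAINING_SCRIPT.py --arg1 value1)
echo "$cmdline"
```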
|
<python><azure-machine-learning-service>
|
2024-04-18 22:08:27
| 1
| 510
|
RSale
|
78,350,475
| 1,729,649
|
What is the equivalent of HTTP method LIST in the Ansible `uri` module?
|
<p>I am trying to find the Python equivalent of the following playbook code, which uses the HTTP method <code>LIST</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>- name: List all folders
ansible.builtin.uri:
url: https://{{ My_vault_url }}
method: LIST
return_content: true
body_format: json
register: list_url
</code></pre>
<p>If I try something like <code>requests.list</code>, I get an error that <code>list</code> is not a request method.</p>
<p>If it is a custom method, how can I use it from Python?</p>
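On the Python side, HTTP verbs are just strings, so there is no <code>requests.list</code> helper; <code>requests.request("LIST", url, ...)</code> sends the custom method directly. A self-contained stdlib sketch (throwaway local server; the <code>/v1/secret/metadata</code> path and the JSON body are made-up stand-ins for a Vault-style response) showing a custom LIST verb round-trip:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_LIST(self):  # BaseHTTPRequestHandler dispatches on "do_" + method name
        body = b'{"keys": ["folder1", "folder2"]}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Send the non-standard verb with stdlib http.client; `requests` works the
# same way via requests.request("LIST", url).
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("LIST", "/v1/secret/metadata")
resp = conn.getresponse()
body = resp.read().decode()
print(resp.status, body)
server.shutdown()
```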
|
<python><ansible><python-requests><ansible-2.x>
|
2024-04-18 21:55:18
| 1
| 570
|
Sukh
|
78,350,341
| 1,509,695
|
How to make argparse not mention -h and --help when started with either of them?
|
<p>When running with <code>--help</code>, the help output includes the description of the <code>--help</code> argument itself. How can that line be avoided in the output of <code>--help</code>?</p>
<p>I could not get <a href="https://stackoverflow.com/a/73380185/1509695">this answer</a> to work, as the following code demonstrates when run using Python 3.10:</p>
<pre class="lang-py prettyprint-override"><code>import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser(
        add_help=False,
        description='does foo')
    parser.add_argument('--bar', type=str, required=True, help='the bar value')
    args = parser.parse_args()
</code></pre>
<p>The result of running the above script file with <code>--help</code> is an error message, not the help message sans the <code>--help</code> option in it:</p>
<pre class="lang-none prettyprint-override"><code>usage: scratch_54.py --bar BAR
scratch_54.py: error: the following arguments are required: --bar
</code></pre>
<p>I would like to avoid the help option self-referencing itself in its own output, while having the rest of the help message behave as usual.</p>
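One approach that seems to work (a sketch, not necessarily the canonical way): keep <code>add_help=False</code>, then register <code>-h/--help</code> manually with <code>action='help'</code> so the flag still triggers the help output, but give it <code>help=argparse.SUPPRESS</code> so it is omitted from both the usage line and the options listing:

```python
import argparse

parser = argparse.ArgumentParser(add_help=False, description='does foo')
# Keep the help behavior but hide the option from the generated text.
parser.add_argument('-h', '--help', action='help',
                    default=argparse.SUPPRESS, help=argparse.SUPPRESS)
parser.add_argument('--bar', type=str, required=True, help='the bar value')

help_text = parser.format_help()
print(help_text)
```

Running the script with `--help` still prints the help and exits, but the help text only documents `--bar`.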
|
<python><argparse>
|
2024-04-18 21:11:19
| 1
| 13,863
|
matanox
|
78,350,242
| 3,507,584
|
Uninstall last pip installed packages
|
<p>I have just installed a package in my virtual environment which I shouldn't have installed. It also installed many dependency packages.</p>
<p>Is there a way to roll back and uninstall the package and its dependencies just installed?</p>
<p>Something like "uninstall packages installed in the last 1 hour" or a similar functionality.</p>
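Pip itself has no transactional rollback, so anything here is a workaround. One hedged sketch: snapshot <code>pip freeze</code> before installing, snapshot again afterwards, and feed the difference to <code>pip uninstall</code> (this catches newly added packages, but not packages that were merely upgraded):

```python
import subprocess
import sys

def installed() -> set[str]:
    """Names of currently installed distributions, via `pip freeze`."""
    out = subprocess.run([sys.executable, "-m", "pip", "freeze"],
                         capture_output=True, text=True, check=True).stdout
    return {line.partition("==")[0].lower()
            for line in out.splitlines() if "==" in line}

before = installed()
# ... `pip install something` would happen here ...
after = installed()
added = after - before
print("to uninstall:", sorted(added))  # feed to: pip uninstall -y <names>
```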
|
<python><python-3.x><pip>
|
2024-04-18 20:46:36
| 2
| 3,689
|
User981636
|
78,350,224
| 610,569
|
Avoiding repetitive checking of function output before storing into a dictionary
|
<p>I have a repeating code block that iterates through inputs, calls some function that returns a list, and populates a dictionary if the function returned a non-empty list, e.g.</p>
<pre><code>def some_func(i):
    """Return a filled list if a condition is met, otherwise an empty one."""
    return ['abc', 'def'] if i % 2 == 0 else []

i_inputs = [4, 2, 3, 6, 3, 8, 2]
y1 = {}
for i in i_inputs:
    _x = some_func(i)
    if _x:
        y1[i] = _x
</code></pre>
<p>The main problem is that I have a lot of <code>y</code>s and <code>i_inputs</code> in my code, e.g.</p>
<pre><code>i_inputs = [4, 2, 3, 6, 3, 8, 2]
y1 = {}
for i in i_inputs:
_x = some_funcs(i)
if _x:
y1[i] = _x
i_inputs2 = [8, 2, 4, 8, 9, 1]
y2 = {}
for i in i_inputs2:
_x = some_funcs2(i) # Sometimes another function that also returns a list.
if _x:
y2[i] = _x
i_inputs3 = [4, 8, 2, 9, 9, 1, 5]
y3 = {}
for i in i_inputs3:
_x = some_funcs3(i) # Yet another function that also returns a list.
if _x:
y3[i] = _x
</code></pre>
<p><strong>Is there a better way other than enumerating all the <code>i_input*</code> and the <code>y*</code> in a hard-coded manner?</strong></p>
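One way to collapse the repetition (a sketch; <code>collect</code> is a helper name I'm introducing, and <code>some_func</code> stands in for the real functions) is to move the loop-and-filter pattern into a single function that takes both the inputs and the producer:

```python
def some_func(i):
    """Stand-in for the real functions: non-empty list when i is even."""
    return ['abc', 'def'] if i % 2 == 0 else []

def collect(inputs, func):
    """Map func over inputs, keeping only the non-empty results."""
    return {i: result for i in inputs if (result := func(i))}

y1 = collect([4, 2, 3, 6, 3, 8, 2], some_func)
y2 = collect([8, 2, 4, 8, 9, 1], some_func)  # pass some_func2 etc. here
print(y1)
```

If there are many such groups, a list of `(inputs, func)` pairs iterated over `collect` would remove the remaining per-variable repetition as well.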
|
<python><dry>
|
2024-04-18 20:40:29
| 1
| 123,325
|
alvas
|
78,350,183
| 6,451,746
|
GStreamer 1.24 Python bindings are blacklisted
|
<p>I am trying to install Python bindings for GStreamer, but the library is blacklisted. My Dockerfile is below. Everything installs without issue, but the <code>libgstpython.so</code> library is blacklisted. I have tried different Python versions, specifying the Python path, and random keyboard bashing without success.</p>
<pre><code>FROM ubuntu:24.04
# Install Python
ARG PYTHON_VERSION=3.9.18
WORKDIR /opt
RUN apt update -y && apt upgrade -y && \
apt install -y libbz2-dev libsqlite3-dev zlib1g-dev libffi-dev wget curl build-essential libssl-dev openssl vim && \
wget https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz && \
tar xzvf Python-${PYTHON_VERSION}.tgz && \
cd Python-${PYTHON_VERSION} && \
./configure --enable-shared && \
make && \
make install && \
ln -s /usr/local/bin/python3 /usr/bin/python && \
ln -s /usr/local/bin/pip3 /usr/bin/pip
# # Install FFmpeg, GStreamer, and reqs for custom plugins
RUN apt update && apt upgrade -y && apt install -y \
ffmpeg \
libgstreamer1.0-dev \
libgstreamer-plugins-base1.0-dev \
libgstreamer-plugins-bad1.0-dev \
libhdf5-dev \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-base-apps \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
gstreamer1.0-tools \
gstreamer1.0-x \
gstreamer1.0-alsa \
gstreamer1.0-gl \
gstreamer1.0-gtk3 \
gstreamer1.0-qt5 \
gstreamer1.0-pulseaudio \
graphviz \
python3-gi \
python3-gst-1.0 \
libgirepository1.0-dev \
cmake \
python-gi-dev \
libcairo2-dev \
ninja-build \
git \
flex \
bison
# # NOTE: pygobject 3.47.0 introduced a bug
# # https://gitlab.freedesktop.org/gstreamer/gstreamer/-/issues/3353
WORKDIR /opt
RUN pip install pycairo pygobject==3.46.0 meson pipenv
RUN GSTREAMER_VERSION=$(gst-launch-1.0 --version | grep version | tr -s ' ' '\n' | tail -1) \
&& export GIT_SSL_NO_VERIFY=1 \
&& git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git \
&& cd gstreamer \
&& git checkout $GSTREAMER_VERSION \
&& cd subprojects/gst-python \
&& PREFIX=$(dirname $(dirname $(which python))) \
&& meson setup --prefix=$PREFIX builddir \
&& ninja -C builddir \
&& meson install -C builddir
# Install other dependencies
ENV GST_PLUGIN_PATH=/usr/lib/aarch64-linux-gnu/gstreamer-1.0
</code></pre>
<p>This is the output from <code>gst-inspect-1.0</code>:</p>
<pre><code># gst-inspect-1.0 -b
Blacklisted files:
libgstpython.so
Total count: 1 blacklisted file
</code></pre>
<p>EDIT:</p>
<p>The same thing happens when installing <code>apt install gstreamer1.0-python3-plugin-loader</code> rather than building the library from source.</p>
<p>EDIT:</p>
<p>The output of <code>GST_DEBUG=4 gst-inspect-1.0 libgstpython.so</code> is in <a href="https://gist.github.com/mentoc3000/d9236268dba6c53a9ac8119d61f4204c" rel="nofollow noreferrer">this gist</a>. The full output was too long to include in the post, but the line that stood out to me is 492:</p>
<pre><code>** (gst-plugin-scanner:15): CRITICAL **: 18:14:35.880: gi.repository.Gst is no dict
</code></pre>
|
<python><gstreamer><glib>
|
2024-04-18 20:27:35
| 1
| 332
|
mentoc3000
|
78,350,133
| 16,845
|
What is the correct type annotation for "bytes or bytearray"?
|
<p>In Python 3.11 or newer, is there a more convenient type annotation to use than <code>bytes | bytearray</code> for a function argument that means "An ordered collection of bytes"? It seems wasteful to require constructing a <code>bytes</code> from a <code>bytearray</code> (or the other way around) just to satisfy the type-checker.</p>
<p>Note that the function does not mutate the argument; it's simply convenient to pass <code>bytes</code> or <code>bytearray</code> instances from different call sites.</p>
<p>e.g.</p>
<pre><code>def serialize_to_stream(stream: MyStream, data: bytes | bytearray) -> None:
    for byte in data:
        stream.accumulate(byte)
</code></pre>
<p>(This example is contrived, of course, but the purpose is to show that <code>data</code> is only read, never mutated).</p>
|
<python><python-typing>
|
2024-04-18 20:16:38
| 1
| 1,216
|
Charles Nicholson
|
78,349,934
| 3,240,688
|
Polars - check for null in dataframe
|
<p>I know I can do <code>.null_count()</code> in Polars, which returns a dataframe telling me the null count for each column.</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
data = {"foo": [1, 2, 3, None], "bar": [4, None, None, 6]}
df = pl.DataFrame(data)
df.null_count()
</code></pre>
<p>would yield a dataframe</p>
<pre><code>shape: (1, 2)
┌─────┬─────┐
│ foo │ bar │
│ --- │ --- │
│ u32 │ u32 │
├─────┼─────┤
│ 1   │ 2   │
└─────┴─────┘
</code></pre>
<p>I want to know whether there are any nulls anywhere in the entire dataframe,</p>
<p>something like</p>
<pre><code>if any(df.null_count()):
    print('has nulls')
else:
    print('no nulls')
</code></pre>
<p>Unfortunately, that doesn't work. What is the correct code here?</p>
|
<python><dataframe><python-polars>
|
2024-04-18 19:32:29
| 3
| 1,349
|
user3240688
|
78,349,813
| 610,569
|
Generate list of list round-robin without repetition of items with itertools
|
<p>If the goal is to achieve <code>f"{x1}-{x2}"</code> pairs where <code>x1 != x2</code> from a combination, I can do:</p>
<pre><code>>>> import itertools
>>> X = ['1','2','3','4']
>>> ["-".join(xx) for xx in itertools.combinations(X, 2)]
['1-2', '1-3', '1-4', '2-3', '2-4', '3-4']
</code></pre>
<p>But what if I want to achieve the desired output in some sort of round-robin order:</p>
<pre><code>[['2-1', '3-1', '4-1'], ['1-2', '3-2', '4-2'], ['1-3', '2-3', '4-3'], ['1-4', '2-4', '3-4']]
</code></pre>
<p>I could have done:</p>
<pre><code>[[f"{x1}-{x2}" for x1 in X if x1 != x2] for x2 in X]
</code></pre>
<p><strong>But is there some round-robin-of-combinations function in itertools that returns the 2-D list shown in the desired output?</strong></p>
<p>There is a pointer to roundrobin from <a href="https://docs.python.org/3/library/itertools.html" rel="nofollow noreferrer">https://docs.python.org/3/library/itertools.html</a></p>
<pre><code>from itertools import cycle, islice

def roundrobin(*iterables):
    "Visit input iterables in a cycle until each is exhausted."
    # roundrobin('ABC', 'D', 'EF') → A D E B F C
    # Algorithm credited to George Sakkis
    iterators = map(iter, iterables)
    for num_active in range(len(iterables), 0, -1):
        iterators = cycle(islice(iterators, num_active))
        yield from map(next, iterators)
</code></pre>
<p>But is there a better way to achieve the desired output? Or is the nested loop the most readable and optimal solution to avoid the diagonal?</p>
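There is no single itertools function for this exact shape, but the desired 2-D output can be built from <code>permutations</code> (which already excludes the diagonal) grouped by the second element; whether this beats the nested comprehension for readability is a judgment call:

```python
from itertools import groupby, permutations
from operator import itemgetter

X = ['1', '2', '3', '4']

# permutations(X, 2) yields every ordered pair with x1 != x2 (no diagonal);
# sorting by the second element groups the pairs per target x2.
pairs = sorted(permutations(X, 2), key=itemgetter(1))
result = [[f"{x1}-{x2}" for x1, x2 in group]
          for _, group in groupby(pairs, key=itemgetter(1))]
print(result)
```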
|
<python><python-itertools><round-robin>
|
2024-04-18 19:08:50
| 1
| 123,325
|
alvas
|
78,349,807
| 1,182,299
|
Split image file in lines from ALTO groundtruth coordinates for TrOCR
|
<p>I want to train a model for TrOCR. I have a personal transcribed ground truth (ALTO). TrOCR only works with lines, not full pages, so I need to crop the image files into lines using the coordinates from the ALTO files and match them with the transcription. My code:</p>
<pre><code>import os
import cv2
import numpy as np
import xml.etree.ElementTree as ET
from PIL import Image, ImageDraw
from collections import Counter
images=os.listdir("./img")
xmls=os.listdir("./alto")
def correct_xy(xy):
xx=sorted(xy)
if len(xy)<21:
l=len(xx)
missing=21-l
c=Counter(np.random.choice(range(l-1),missing))
for i in range(l-1):
x=np.linspace(xx[i][0],xx[i+1][0],c[i]+2,dtype=int)[1:-1]
y=np.linspace(xx[i][1],xx[i+1][1],c[i]+2,dtype=int)[1:-1]
if len(x):
xx+=list(map(list,zip(x,y)))
return xx
if len(xy)>21:
new_xy=[0]*21
new_xy[0]=xx[0]
step=(len(xx)-2)/19
for i in range(1,20):
new_xy[i]=xx[int(np.floor(i*step))]
new_xy[-1]=xx[-1]
return new_xy
return xx
def get_polygon(xys,boxes,i,margin=[20]*4):
U,D,R,L = margin
X,Y,H,W = boxes[i]
curr=sorted(xys[i])
curr[0][0]-=L
curr[-1][0]+=R
curr=[[x[0],x[1]+D] for x in curr]
if i == 0:
prev=[[x[0],x[1]-H] for x in curr]
else:
prev=sorted(xys[i-1])
prev=sorted([[curr[0][0],prev[0][1]]]+prev+[[curr[-1][0],prev[-1][1]]])
for l in range(len(prev)):
if prev[1][0]==curr[0][0]:
break
for r in range(len(prev)):
if prev[r][0]==curr[-1][0]:
break
prev=prev[l:r+1]
if prev[0]==prev[1]:
prev=prev[1:]
prev=[[x[0],x[1]+U] for x in prev]
gap=min(curr)[1]-min(prev)[1]-H
if gap > 0:
prev=[[x[0],x[1]+gap] for x in prev]
return list(map(tuple,prev[::-1]+curr))
text = []
prefix='{http://www.loc.gov/standards/alto/ns-v4#}'
for file_image, file_xml in zip(sorted([img[:-4] for img in images]),sorted([xm[:-4] for xm in xmls])):
assert file_image == file_xml
path_img="./img/"+file_image+".jpg"
path_xml="./alto/"+file_xml+".xml"
tree = ET.parse(path_xml)
root = tree.getroot()
img = Image.open(path_img).convert('L').point(lambda x : 255 if x > 200 else 0, mode='1')
xys = {} #coordinates
boxes = {}
to_del = []
for i, element in enumerate(root.iter(prefix+'String')):
#texts
txt = [path_img+"_"+str(i).zfill(2)+".png",element.get("CONTENT")]
if txt[-1]=="":
to_del += [i]
continue
text.append(txt)
for i,element in enumerate(root.iter(prefix+'TextLine')):
#images
boxes[i] = tuple([int(element.get(s)) for s in ['HPOS','VPOS','HEIGHT','WIDTH']])
xy = [element.get(s) for s in ['BASELINE']][0].split(' ')
xys[i]=correct_xy([list(map(int,s.split(','))) for s in xy])
for i in to_del:
del xys[i]
del boxes[i]
for i in xys.keys():
polygon = get_polygon(xys,boxes,i=i,margin=[10,20,50,50])
mask = Image.new("L", img.size, 0)
background = Image.new("L", img.size, 255)
    draw = ImageDraw.Draw(mask)
draw.polygon(polygon, fill="white", outline=None)
result = Image.composite(img, background, mask)
(X, Y, W, H) = cv2.boundingRect(np.array(polygon))
result = result.crop([X, Y, X + W, Y + H])
result.save('./lines/'+file_image+"_"+str(i).zfill(2)+'.png')
</code></pre>
<p>I got this error and I don't see anything wrong with my code:</p>
<pre><code>Traceback (most recent call last):
File "/home/incognito/TrOCR-py3.10/LINES/alto2lines.py", line 95, in <module>
xys[i]=correct_xy([list(map(int,s.split(','))) for s in xy])
File "/home/incognito/TrOCR-py3.10/LINES/alto2lines.py", line 19, in correct_xy
y=np.linspace(xx[i][1],xx[i+1][1],c[i]+2,dtype=int)[1:-1]
IndexError: list index out of range
</code></pre>
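<p>A likely cause (an assumption, since the top of <code>correct_xy</code> is not shown in the traceback): a <code>BASELINE</code> attribute containing a single point makes <code>xx[i+1]</code> (or <code>c[i]</code>) run past the end of the list. A guarded sketch of the interpolation step, using a hypothetical <code>densify</code> helper:</p>

```python
import numpy as np

def densify(xx, c):
    # Insert c[i] interpolated points between consecutive baseline points.
    # Guard against baselines with fewer than two points, the assumed
    # trigger of the IndexError on xx[i + 1] / c[i].
    if len(xx) < 2 or len(c) < len(xx) - 1:
        return xx
    out = list(xx)
    for i in range(len(xx) - 1):
        x = np.linspace(xx[i][0], xx[i + 1][0], c[i] + 2, dtype=int)[1:-1]
        y = np.linspace(xx[i][1], xx[i + 1][1], c[i] + 2, dtype=int)[1:-1]
        out += [list(p) for p in zip(x.tolist(), y.tolist())]
    return out
```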
<p>A sample of my ALTO xml file:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<alto xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://www.loc.gov/standards/alto/ns-v4#"
xsi:schemaLocation="http://www.loc.gov/standards/alto/ns-v4# http://www.loc.gov/standards/alto/v4/alto-4-2.xsd">
<Description>
<MeasurementUnit>pixel</MeasurementUnit>
<sourceImageInformation>
<fileName>1.jpg</fileName>
</sourceImageInformation>
</Description>
<Tags>
<OtherTag ID="BT1" LABEL="Title" DESCRIPTION="block type Title"/><OtherTag ID="BT2" LABEL="Main" DESCRIPTION="block type Main"/><OtherTag ID="BT3" LABEL="Commentary" DESCRIPTION="block type Commentary"/><OtherTag ID="BT4" LABEL="Illustration" DESCRIPTION="block type Illustration"/><OtherTag ID="BT35" LABEL="text" DESCRIPTION="block type text"/>
<OtherTag ID="LT46" LABEL="default" DESCRIPTION="line type default"/>
</Tags>
<Layout>
<Page WIDTH="2579"
HEIGHT="2837"
PHYSICAL_IMG_NR="0"
ID="eSc_dummypage_">
<PrintSpace HPOS="0"
VPOS="0"
WIDTH="2579"
HEIGHT="2837">
<TextBlock HPOS="580"
VPOS="334"
WIDTH="1940"
HEIGHT="2230"
ID="eSc_textblock_2aa65984"
TAGREFS="BT2">
<Shape><Polygon POINTS="2074 334 2489 353 2520 691 2495 762 2476 2532 2451 2558 2306 2564 617 2558 580 906 605 378 712 353 818 371 2074 334"/></Shape>
<TextLine ID="eSc_line_48be6c40"
TAGREFS="LT46"
BASELINE="627 397 964 397 1713 373 2473 373"
HPOS="625"
VPOS="338"
WIDTH="1848"
HEIGHT="87">
<Shape><Polygon POINTS="2470 351 2454 351 2438 349 2422 349 2407 348 2391 348 2375 346 2359 346 2344 345 2328 345 2312 343 2296 343 2281 343 2274 342 2254 345 2184 338 2183 338 2181 338 2158 349 2156 349 2139 349 1986 342 1984 342 1983 342 1959 349 1868 338 1855 340 1839 340 1823 340 1808 342 1792 342 1776 342 1775 342 1700 346 1655 345 1650 346 1634 346 1618 346 1603 346 1587 348 1571 348 1555 348 1540 349 1524 349 1508 349 1492 351 1477 351 1461 351 1445 351 1429 353 1414 353 1398 353 1382 354 1366 354 1350 354 1335 356 1319 356 1303 356 1287 356 1272 357 1256 357 1240 357 1224 359 1209 359 1193 359 1179 359 1179 360 1161 365 966 365 956 367 941 367 925 367 909 368 893 368 878 368 862 370 846 370 830 370 815 370 799 368 788 365 646 365 641 368 625 370 627 397 627 425 734 423 736 423 780 412 846 420 848 420 1111 419 1112 417 1295 403 1377 409 1379 409 1380 409 1410 400 1741 397 1820 397 1842 406 1844 406 1845 406 1847 406 1886 406 2128 397 2148 401 2159 403 2161 403 2162 403 2167 401 2184 392 2470 395 2473 373 2471 351 2470 351"/></Shape>
<String CONTENT="ΧΧͺ ΧΧΧ¨ΧΧ ΧΧΧͺ ΧΧΧΧ ΧΧΧͺ ΧΧΧ€Χ¨Χͺ ΧΧΧͺ Χ€Χ¨ΧΧͺ ΧΧΧ‘Χ"
HPOS="625"
VPOS="338"
WIDTH="1848"
HEIGHT="87"></String>
</TextLine>
.... snip
</code></pre>
|
<python><numpy><ocr><polygon><alto>
|
2024-04-18 19:08:04
| 0
| 1,791
|
bsteo
|
78,349,569
| 1,489,990
|
Pass Multiple Inputs to Terminal Command Python
|
<p>I have this terminal command I need to run programmatically in Python:</p>
<p><code>awssaml get-credentials --account-id **** --name **** --role **** --user-name ****</code></p>
<p>It will first ask for your password, and then prompt you for a two-factor authentication code. I have these as variables in Python that I just need to pass through to the command.</p>
<p>This is what I tried:</p>
<pre><code> argss=[str(password_entry.get()),str(twoFactorCode_entry.get())]
p=subprocess.Popen(["awssaml", "get-credentials", "--account-id", "****", "--name", "****", "--role", "****", "--user-name", ID_entry.get()],stdin=subprocess.PIPE,stdout=subprocess.PIPE)
time.sleep(0.1)
out=p.communicate('\n'.join(map(str,argss)).encode())
</code></pre>
<p>When I run this, the console shows that the password was entered (it prints <code>password: xxxxxxxxxxxx</code>), but execution then stops and the two-factor code is never passed.</p>
<p>Any ideas where I am going wrong in getting the two-factor code passed through as well? Both the password and the two-factor code are in the <code>argss</code> variable: <code>password_entry.get()</code> is the password and <code>twoFactorCode_entry.get()</code> is the two-factor code.</p>
<p>This is what the first prompt looks like:</p>
<p><a href="https://i.sstatic.net/WGipE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WGipE.png" alt="enter image description here" /></a></p>
<p>I also tried <code>child.expect('password:')</code> with pexpect, which times out with this output:</p>
<pre><code>after: <class 'pexpect.exceptions.TIMEOUT'>
match: None
match_index: None
exitstatus: None
flag_eof: False
pid: 66502
child_fd: 10
closed: False
timeout: 30
delimiter: <class 'pexpect.exceptions.EOF'>
logfile: None
logfile_read: None
logfile_send: None
maxread: 2000
ignorecase: False
searchwindowsize: None
delaybeforesend: 0.05
delayafterclose: 0.1
delayafterterminate: 0.1
searcher: sear
</code></pre>
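<p>Tools that prompt with <code>getpass</code>-style password input read from the controlling terminal, not from stdin, so a <code>subprocess.PIPE</code> write never reaches the second prompt. <code>pexpect</code> allocates a pseudo-terminal, which is why <code>child.expect('password:')</code> is the right direction. The sketch below demonstrates the pattern against a stand-in child process; the exact prompt strings of <code>awssaml</code> are assumptions:</p>

```python
import sys
import pexpect

# Stand-in child that prompts on the terminal the way awssaml does.
child = pexpect.spawn(
    sys.executable,
    ["-c", "import getpass; pw = getpass.getpass('password:'); "
           "code = input('2FA code:'); print('OK', pw, code)"],
    encoding="utf-8",
    timeout=10,
)
child.expect("password:")
child.sendline("s3cret")    # password_entry.get() in the real script
child.expect("2FA code:")
child.sendline("123456")    # twoFactorCode_entry.get()
child.expect(pexpect.EOF)
print(child.before)
```

For the real command, replace the stand-in with <code>pexpect.spawn("awssaml", [...])</code> and adjust the expected prompt text to whatever <code>awssaml</code> actually prints.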
|
<python><pexpect>
|
2024-04-18 18:21:35
| 1
| 10,259
|
ez4nick
|
78,349,513
| 5,502,917
|
Pytesseract doesnt recognize plate correctly
|
<p>I am using pytesseract to try to recognize car plates but it does not return the correct result.</p>
<p>This is my code</p>
<pre><code>text = pytesseract.image_to_string(cropped_License_Plate, lang='eng', config='--psm 9')
</code></pre>
<p>I have tried using many different psm but the result is never correct.</p>
<p>My images</p>
<p><a href="https://i.sstatic.net/J1UR9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J1UR9.png" alt="enter image description here" /></a></p>
<p>Plate QAN-5512
Pytesseract reading: DAN S512</p>
<p><a href="https://i.sstatic.net/CJHUy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CJHUy.png" alt="enter image description here" /></a></p>
<p>Plate RWC2I30
Pytesseract reading: RWC213G</p>
<p><a href="https://i.sstatic.net/M5VXJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M5VXJ.png" alt="enter image description here" /></a></p>
<p>Plate RWC2I30
Pytesseract reading: FRWOZLSU</p>
<p>Is there a way to fix it?</p>
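<p>Two things usually help a lot for plates: preprocessing (upscale, binarise, denoise) and constraining tesseract with <code>--psm 7</code> (single text line) plus a character whitelist, so it cannot output lowercase letters or stray punctuation. A sketch (the threshold and scale values are assumptions to tune per camera):</p>

```python
from PIL import Image, ImageOps, ImageFilter

# --psm 7: treat the image as a single text line; whitelist plate characters
PLATE_CONFIG = "--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-"

def preprocess_plate(img: Image.Image, scale: int = 3, threshold: int = 140) -> Image.Image:
    # Upscale, grayscale, boost contrast, denoise, and binarise --
    # tesseract is far more reliable on large, clean black-on-white glyphs.
    g = ImageOps.autocontrast(img.convert("L"))
    g = g.resize((g.width * scale, g.height * scale), Image.LANCZOS)
    g = g.filter(ImageFilter.MedianFilter(3))
    return g.point(lambda p: 255 if p > threshold else 0)

# text = pytesseract.image_to_string(preprocess_plate(cropped_License_Plate),
#                                    config=PLATE_CONFIG)
```

Note that similar glyph pairs (Q/D, I/1, O/0) may still be confused; if the plate format is known, a post-OCR regex correction step can disambiguate them.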
|
<python><ocr><python-tesseract>
|
2024-04-18 18:08:12
| 1
| 1,731
|
GuiDupas
|
78,349,270
| 3,120,266
|
using pandas to number and coerce to force values to ints and still not working
|
<p>I am confused when trying to coerce a dataframe to numeric. It appears to work when I inspect the structure, but then I still get errors:</p>
<p>TypeError: unsupported operand type(s) for +: 'int' and 'str'</p>
<p>Code:</p>
<pre><code>df = df_leads.apply(pd.to_numeric, errors='coerce')
df.info()
</code></pre>
<p>Returns: <code>Columns: 133 entries, org_size_1_99 to engagement_Type_webpage visits dtypes: float64(107), int64(26) memory usage: 3.1 MB</code></p>
<p>next line of code:</p>
<pre><code>sum(df['target']).astype(int)
</code></pre>
<p>returns: <code>TypeError: unsupported operand type(s) for +: 'int' and 'str'</code></p>
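<p>Since the coerced frame is a new object, this error usually means the sum ran against the original <code>df_leads</code> (where <code>'target'</code> still holds strings), or that Python's built-in <code>sum()</code> iterated over mixed int/str values. A sketch of the working pattern (column and frame names follow the question):</p>

```python
import pandas as pd

df_leads = pd.DataFrame({"target": ["1", "2", "oops"]})

# apply() returns a NEW frame; keep (and use) the coerced copy
df = df_leads.apply(pd.to_numeric, errors="coerce")

# use the pandas .sum() method -- it skips the NaN produced by "oops",
# whereas the built-in sum() would raise on mixed int/str input
total = int(df["target"].sum())
print(total)
```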
|
<python><pandas><string><numeric>
|
2024-04-18 17:15:25
| 1
| 425
|
user3120266
|
78,349,268
| 14,224,948
|
Changing command order in Python's Typer
|
<p>I want Typer to display my commands in the order I defined them, but it displays them in alphabetical order.
I have tried different approaches, including this one: <a href="https://github.com/tiangolo/typer/issues/246" rel="nofollow noreferrer">https://github.com/tiangolo/typer/issues/246</a>.
With that one I get an AssertionError; others, such as subclassing some Typer and Click classes, do nothing at all.</p>
<p>I want the commands to be in the same order as in this working piece of code:</p>
<pre><code>import typer
import os
app = typer.Typer()
@app.command()
def change_value(file_name, field):
print("Here I will change the", file_name, field)
@app.command()
def close_field(file_name, field):
print("I will close field")
@app.command()
def add_transaction(file_name):
print("I will add the transaction")
if __name__ == "__main__":
app()
</code></pre>
<p>Please help :)</p>
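<p>One approach that works with recent Typer versions: subclass <code>TyperGroup</code> so <code>list_commands</code> returns the commands in registration order instead of Click's sorted order, and pass it via the <code>cls</code> parameter. A sketch (verify <code>typer.core.TyperGroup</code> exists in your installed version):</p>

```python
import typer
from typer.core import TyperGroup

class NaturalOrderGroup(TyperGroup):
    def list_commands(self, ctx):
        # self.commands is a dict keyed in registration order
        return list(self.commands)

app = typer.Typer(cls=NaturalOrderGroup)

@app.command()
def change_value(file_name: str, field: str):
    print("Here I will change the", file_name, field)

@app.command()
def close_field(file_name: str, field: str):
    print("I will close field")

@app.command()
def add_transaction(file_name: str):
    print("I will add the transaction")

if __name__ == "__main__":
    app()
```

With this, <code>--help</code> lists <code>change-value</code>, <code>close-field</code>, <code>add-transaction</code> in definition order.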
|
<python><python-3.x><python-click><typer>
|
2024-04-18 17:15:13
| 1
| 1,086
|
Swantewit
|
78,349,111
| 2,386,113
|
Does passing class variables stop parallelization in Numba?
|
<p>I have a <strong>wrapper method</strong> to call a <strong>Numba-compatible</strong> function. In the code below, the method <code>get_neighbours_wrapper()</code> is just a wrapper to call the Numba function <code>get_neighbours_Numba()</code>.</p>
<p>I want to call the <code>neighbours.get_neighbours_wrapper(point)</code> on a separate thread and that's why I am expecting parallelization.</p>
<p>Even though I see some performance improvement from making the function Numba-compatible, I am not sure it is running in parallel (most likely not). I suspect that the caller function, i.e. <code>get_neighbours_wrapper()</code>, accesses the member variables of the class and therefore prevents real parallelization (due to the GIL lock?).</p>
<pre><code>import numpy as np
from numba import njit
@njit
def get_neighbours_Numba(points: np.ndarray, num_neighbors: int):
for point in points:
distances = np.zeros(num_neighbors)
neighbours_indices_xy = np.zeros((num_neighbors, 2))
## There is some further code, but not relevant for the question
return distances
class Neighbours:
def __init__(self, xy_points: np.ndarray, num_neighbors: int):
self.xy_points = xy_points
self.num_neighbors = num_neighbors
def get_neighbours_wrapper(self, point: np.ndarray):
distances = get_neighbours_Numba(self.xy_points, self.num_neighbors) # QUESTIONS: Can using class variables STOP PARALLELIZATION?
return distances
# Example usage
xy_points = np.random.rand(100, 2)
num_neighbors = 6
neighbours = Neighbours(xy_points, num_neighbors)
point = np.random.rand(2)
distances = neighbours.get_neighbours_wrapper(point)
print(distances)
</code></pre>
<p><strong>Question</strong>: Does passing class variables stop parallelization in Numba? If so, what could be the solution?</p>
|
<python><numba>
|
2024-04-18 16:46:54
| 1
| 5,777
|
skm
|
78,349,201
| 1,046,013
|
Python script in "Task Scheduler" runs forever
|
<p>I can't figure out why my Python script works perfectly in the console when I execute it like this (runs for 1-2 seconds):</p>
<p><a href="https://i.sstatic.net/9QQCd2LK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QQCd2LK.png" alt="Execution result" /></a></p>
<p>But if I run it in the task scheduler (either manually or at the scheduled time), it runs forever and eventually times out after the 2 hours time limit:</p>
<p><a href="https://i.sstatic.net/HSCZ53Oy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HSCZ53Oy.png" alt="Task Scheduler history" /></a></p>
<p>Here's the script in case it's needed, I even added <code>exit(0)</code> at the end in case it was hanging there (Downloads all zip files over FTP, renames them according to their modification date, then deletes them on the server):</p>
<pre><code>import ftplib
import os
from pathlib import Path
from datetime import datetime, timezone
# FTP server details
FTP_SERVER = "domain.com"
FTP_USERNAME = "username"
FTP_PASSWORD = "password"
# Directory for downloaded files
LOCAL_DIR = Path(r"G:\Neverland Backups")
# Connect to the FTP server
ftp = ftplib.FTP(FTP_SERVER, FTP_USERNAME, FTP_PASSWORD)
# List all files in the current directory on the server
files = ftp.nlst()
for file in files:
# Check if the file is a .zip file
if file.endswith('.zip'):
# Download the file
local_file = LOCAL_DIR / file
with open(local_file, 'wb') as fp:
ftp.retrbinary('RETR ' + file, fp.write)
# Get the file modification time
modification_time = ftp.sendcmd('MDTM ' + file)
modification_time = datetime.strptime(modification_time[4:], "%Y%m%d%H%M%S")
# Rename the file
renamed_filepath = LOCAL_DIR / f"Neverland_{modification_time.strftime('%Y-%m-%d_%H.%M.%S')}.zip"
(LOCAL_DIR / file).rename(renamed_filepath)
# Convert the modification time to local timezone
modification_time = modification_time.replace(tzinfo=timezone.utc).astimezone(tz=None)
# Set the file's modification time
timestamp = modification_time.timestamp()
os.utime(renamed_filepath, (timestamp, timestamp))
# Delete the original file on the server
ftp.delete(file)
# Close the connection
ftp.quit()
exit(0)
</code></pre>
<p>As a sidenote, I also tried the Windows "Run" command with <code>C:\Users\Administrator\AppData\Local\Programs\Python\Python312\python.exe "C:\scripts\download-neverland-backups.py"</code> and the script window opens, executes then closes.</p>
<p>The task scheduler "actions" settings
<a href="https://i.sstatic.net/VCRYomLt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VCRYomLt.png" alt="Task Scheduler "actions" settings" /></a></p>
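<p>A common culprit is an FTP control or data connection that blocks forever under the task's different user/network context (no timeout is set, so <code>nlst()</code> can hang indefinitely on a passive/active-mode mismatch); the mapped <code>G:</code> drive may also not exist for the account the task runs under. A diagnostic sketch with an explicit timeout and file logging (the helper name and log path are hypothetical):</p>

```python
import ftplib
import logging

logging.basicConfig(filename=r"C:\scripts\backup.log", level=logging.DEBUG,
                    format="%(asctime)s %(message)s")

def open_ftp(host: str, user: str, password: str, timeout: float = 60.0) -> ftplib.FTP:
    # An explicit timeout turns a silent hang into a visible socket.timeout
    # that shows up in the log instead of running until the 2-hour limit.
    ftp = ftplib.FTP()
    ftp.connect(host, 21, timeout=timeout)
    ftp.login(user, password)
    ftp.set_pasv(True)  # passive mode is usually friendlier to NAT/firewalls
    logging.debug("connected to %s", host)
    return ftp
```

Running the task as the same user that works in the console, with "Run only when user is logged on" toggled, also helps isolate whether the hang is credential- or network-related.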
|
<python><scheduled-tasks><windows-task-scheduler><python-3.x>
|
2024-04-18 16:17:33
| 0
| 3,866
|
NaturalBornCamper
|
78,348,882
| 179,014
|
Dynamically alter formulas in Excel templates with jinja2?
|
<p>I'm searching for a way to fill pandas dataframes into a given Excel template while keeping all the formatting. I stumbled upon the following interesting blog post using jinja2 templating in Excel sheets:</p>
<p><a href="https://hugoworld.wordpress.com/2019/01/21/easy-excel-reporting-with-python-and-jinja2/" rel="nofollow noreferrer">https://hugoworld.wordpress.com/2019/01/21/easy-excel-reporting-with-python-and-jinja2/</a></p>
<p>I was especially intrigued about the possibility to dynamically alter formulas based on the inserted content, as explained by the author:</p>
<blockquote>
<p>In the case of the above the original formula in the template was sum(b3) this will be dynamically altered according to the data rendered into the template in case of the sample above the formula will become =sum(b3:b6)</p>
</blockquote>
<p>Unfortunately the code announced in the blog post never got published. Has anyone a suggestion on how to implement this idea from the blog in Python?</p>
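<p>Since the blog's code was never published, one way to reproduce the idea is to write the frame into the sheet with <code>openpyxl</code> and rewrite the formula's range to match the number of rendered rows. This is a sketch, not the blog author's implementation; it reproduces only the dynamic-formula part, not full jinja2 templating:</p>

```python
import pandas as pd
from openpyxl import Workbook

df = pd.DataFrame({"sales": [10, 20, 30, 40]})

wb = Workbook()
ws = wb.active
start_row = 3  # where the template's data region begins

# render the dataframe into the sheet
for i, value in enumerate(df["sales"]):
    ws.cell(row=start_row + i, column=2, value=int(value))

# grow the template's =SUM(B3) into =SUM(B3:B6) to cover the rendered rows
end_row = start_row + len(df) - 1
ws.cell(row=end_row + 1, column=2, value=f"=SUM(B{start_row}:B{end_row})")
wb.save("report.xlsx")
```

To keep an existing template's formatting, open it with <code>load_workbook("template.xlsx")</code> instead of creating a fresh <code>Workbook()</code> and write into the pre-styled cells.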
|
<python><excel><excel-formula><jinja2>
|
2024-04-18 16:02:38
| 1
| 11,858
|
asmaier
|
78,348,878
| 10,053,485
|
Middleware inheritance in mounted FastAPI SubAPI's
|
<p>The project I'm working on has grown to the point where one massive API doesn't suffice, so I have split it into sub-applications.</p>
<p>To ensure this refactor goes well, I wanted to test if and how middleware is inherited across parent/daughter applications.</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request
app = FastAPI()
subapi = FastAPI()
@app.get("/app")
def read_main():
return {"message": "Hello World from main app"}
@subapi.get("/sub")
def read_sub():
return {"message": "Hello World from sub API"}
app.mount("/subapi", subapi)
@app.middleware("http")
async def say_hi(request: Request, call_next):
response = await call_next(request)
print('Middleware Triggered')
return response
</code></pre>
<p>Now, after starting the app: <code>uvicorn.run(app, host="127.0.0.1", port=8080)</code> and accessing the the docs of app and the subapi, I obtain the following logs:</p>
<pre class="lang-py prettyprint-override"><code>INFO: 127.0.0.1:57079 - "GET /docs HTTP/1.1" 200 OK
Middleware Triggered
INFO: 127.0.0.1:57079 - "GET /openapi.json HTTP/1.1" 200 OK
Middleware Triggered
INFO: 127.0.0.1:57079 - "GET /subapi/docs HTTP/1.1" 200 OK
Middleware Triggered
INFO: 127.0.0.1:57079 - "GET /subapi/openapi.json HTTP/1.1" 200 OK
</code></pre>
<p>Case closed, middleware is inherited from the parent app, so if I want to add middleware to all sub apps, I should simply add it once to the parent app, or so I thought.</p>
<p>Think again. This quite explicitly contradicts <a href="https://fastapi.tiangolo.com/advanced/sub-applications/" rel="nofollow noreferrer">the independence the docs note</a>, as well as <a href="https://stackoverflow.com/a/64325016/10053485">the most notable discussion on SO</a>.</p>
<p>When printing the app/subapi's middleware as follows, we confirm <code>subapi</code> does <strong>not</strong> have the middleware attached:</p>
<pre class="lang-py prettyprint-override"><code>print(app.__dict__)
print(subapi.__dict__)
>>> {..., 'user_middleware': [Middleware(BaseHTTPMiddleware, dispatch=<function say_hi at 0x00000191337C0E00>)], 'middleware_stack': None}
>>> {..., 'user_middleware': [], 'middleware_stack': None}
</code></pre>
<p>Which aligns with previous answers and the documentation, as expected.</p>
<p><strong>Questions:</strong></p>
<ol>
<li>Why is the main app's middleware still triggered by requests to the SubAPI when it is not explicitly attached to <code>subapi</code>?</li>
<li>What does it mean for middleware to 'not be inherited' in this context, as it's triggered either way?</li>
</ol>
<p>If I understand the situation correctly, the answer to question 1 is fairly straightforward:</p>
<p>While the middleware is not directly associated with the SubAPI, it still gets triggered due to the routing structure - as the SubAPI's route contains the main app's route. This does still leave question 2.</p>
|
<python><fastapi><middleware>
|
2024-04-18 16:02:15
| 1
| 408
|
Floriancitt
|
78,348,739
| 8,021,207
|
Converting cyclic DiGraph to Acyclic DiGraph (DAG)
|
<p>How can I remove cycles from my directed graph? It's a big graph (100k+ nodes 200k+ edges) so the method needs to be efficient. I need to make the digraph acyclic in order to use functions like <a href="https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.dag.topological_generations.html" rel="nofollow noreferrer">networkx.topological_generations</a>.</p>
<p>I've tried methods where I repeatedly generate cycles and remove the last edge in each cycle path but after running for 10+ hours without finishing I considered this a failed attempt.</p>
<p><strong>failed attempt (never finished; inefficient)</strong></p>
<pre class="lang-py prettyprint-override"><code>def remove_cycles_from_G(G: nx.DiGraph):
search_for_cycles = True
while search_for_cycles:
for cycle_path in nx.simple_cycles(G):
try:
G.remove_edge(cycle_path[-1], cycle_path[0])
except nx.NetworkXError:
# edge has already been disjointed by a previous edge removal.
# Restart cycle generator.
search_for_cycles = (
False # Temporary condition which will be reversed.
)
break
search_for_cycles = not (search_for_cycles)
</code></pre>
<p>I've also crafted a more sophisticated heuristic approach based on the demonstrations in <a href="https://github.com/zhenv5/breaking_cycles_in_noisy_hierarchies/tree/master" rel="nofollow noreferrer">this project</a> but even this method doesn't work on a digraph of this size (after an hour of running my memory was maxed out).</p>
<p>I understand that identifying the fewest edges to remove in order to make the digraph acyclic is an NP-hard problem (<a href="https://en.wikipedia.org/wiki/Feedback_arc_set" rel="nofollow noreferrer">feedback arc set problem</a>) but I'm not necessarily trying to find the fewest edges to make the digraph acyclic, I just want a fast and efficient approach.</p>
<h3>EDIT: reproducible input data</h3>
<p>Here's an example of a networkx DiGraph with a ton of cycles. My situation involves even more but this demonstrates the point:</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import random
def induce_cycles(g: nx.DiGraph, cycles) -> None:
cycles_added = 0
while cycles_added < cycles:
node = random.choice(list(g))
non_parent_ancestors = nx.ancestors(g, node).difference(g.predecessors(node))
if non_parent_ancestors:
g.add_edge(node, random.choice(list(non_parent_ancestors)))
cycles_added += 1
g = nx.balanced_tree(3, 6, create_using=nx.DiGraph())
induce_cycles(g, len(g.edges()) * 5)
# Efficiently remove cycles from g...
</code></pre>
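<p>One fast, O(V+E) heuristic that guarantees acyclicity without ever enumerating cycles: take a DFS postorder, treat the reversed postorder as an approximate topological order, and drop every edge that violates it. It may remove more edges than a minimum feedback arc set, but it runs in linear time even on graphs with 100k+ nodes:</p>

```python
import networkx as nx

def remove_cycles_fast(g: nx.DiGraph) -> None:
    # In any DFS of a digraph, edges that do not strictly decrease the
    # postorder number are back edges (or self-loops). Removing them leaves
    # a DAG, because every surviving edge strictly decreases the postorder,
    # so no cyclic edge sequence can exist.
    post = {n: i for i, n in enumerate(nx.dfs_postorder_nodes(g))}
    violating = [(u, v) for u, v in g.edges() if post[u] <= post[v]]
    g.remove_edges_from(violating)
```

After this, <code>nx.topological_generations</code> works directly. If fewer removals matter more than speed, the same idea can be restricted to each strongly connected component (<code>nx.strongly_connected_components</code>), leaving edges between components untouched.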
|
<python><algorithm><networkx><graph-theory><directed-acyclic-graphs>
|
2024-04-18 15:42:47
| 1
| 492
|
russhoppa
|
78,348,714
| 278,521
|
How to change numbers received from text file to integer
|
<p>I use a Python script to read numbers from a *.txt file and use those numbers later, but when I parse a number I get the error "TypeError: int() argument must be a string, a bytes-like object or a real number, not 'list'".</p>
<p>But when I hard-code the number, it works.</p>
<pre><code>list_of_lists = []
with open(r'''C:\\MentorUTTFVC\ChangeSet.txt''') as f:
for line in f:
inner_list = [int(elt.strip()) for elt in line.split(',')]
list_of_lists.append(inner_list)
length = len(list_of_lists)
for i in range(length):
ch = int(list_of_lists[i])
</code></pre>
<p>ChangeSet.txt</p>
<pre><code>30000
30002
30004
30008
</code></pre>
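<p>The error comes from <code>int(list_of_lists[i])</code>: each element of <code>list_of_lists</code> is itself a list (e.g. <code>[30000]</code>), not a string or number. Either index into it (<code>list_of_lists[i][0]</code>), or skip the nesting entirely, since each line holds a single number. A sketch (the file is recreated inline so the snippet is self-contained):</p>

```python
# recreate the sample ChangeSet.txt from the question
with open("ChangeSet.txt", "w") as f:
    f.write("30000\n30002\n30004\n30008\n")

# int() accepts a stripped line directly -- no nested lists needed
with open("ChangeSet.txt") as f:
    numbers = [int(line) for line in f if line.strip()]

for ch in numbers:  # ch is already an int, e.g. 30000
    print(ch)
```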
|
<python><python-2.7>
|
2024-04-18 15:38:16
| 1
| 4,010
|
Sijith
|
78,348,637
| 455,796
|
make child window always stay above the main window, but not other applications
|
<p>How do I make a modeless child window (that is, one that still lets me interact with the main window) always stay above the main window? <code>Qt.WindowStaysOnTopHint</code> makes the child stay above other applications, so this is not what I want. I am using Plasma 6.0 on Wayland. AIs said that setting the main window as the parent would keep the child window above it, but that did not work. An old S.O. answer said to set the <code>Qt.Tool</code> window flag, but that did not work either.</p>
<p>This must be possible on Wayland with Plasma 6.0, because the "Configure" window of the Dolphin file manager (a Qt 6 app) works exactly the way I want.</p>
<pre><code>from PySide6.QtCore import Qt
from PySide6.QtWidgets import QApplication, QMainWindow, QDialog, QPushButton
class ChildWindow(QDialog):
def __init__(self, parent=None):
super().__init__(parent)
self.setWindowFlags(Qt.Tool)
self.setWindowTitle("Child")
self.setGeometry(0, 0, 400, 200)
class MainWindow(QMainWindow):
def __init__(self):
super().__init__()
self.setWindowTitle("Main")
self.setGeometry(100, 100, 500, 300)
self.but = QPushButton("Open", self)
self.but.clicked.connect(self.open)
self.child_window = ChildWindow(self)
self.child_window.show()
def open(self):
self.child_window.show()
if __name__ == "__main__":
app = QApplication()
window = MainWindow()
window.show()
app.exec()
</code></pre>
|
<python><pyside><pyside6>
|
2024-04-18 15:25:58
| 0
| 12,654
|
Damn Vegetables
|
78,348,490
| 23,260,297
|
json.decoder.JSONDecodeError error python
|
<p>I have an automation that runs and passes a JSON object to a Python script.</p>
<p>My objective is to read the JSON and convert it to a dictionary.</p>
<p>My JSON looks like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"Items": [
{
"Name": "baz",
"File": "\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls"
},
{
"Name": "bar",
"File": "\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv"
},
{
"Name": "foo",
"File": "\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv"
}
]
}
</code></pre>
<p>I need it to look like this:</p>
<pre class="lang-json prettyprint-override"><code>{
"foo" : "\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv",
"bar" : "\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv",
"baz" : "\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls"
}
</code></pre>
<p>I get this error with this piece of code:</p>
<blockquote>
<p>json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 2 column 4 (char 6)</p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>if len(sys.argv) > 1:
d = json.loads(sys.argv[1])
print(d)
</code></pre>
<p>This is what I pass to PowerShell:</p>
<pre><code>@"
{
{
"Items": [
{
"Name": "baz",
"File": "\\\\baz\\baz\\baz baz\\baz baz\\baz\\baz.xls"
},
{
"Name": "bar",
"File": "\\\\bar\\bar\\bar bar\\bar bar\\bar\\bar.csv"
},
{
"Name": "foo",
"File": "\\\\foo\\foo\\foo foo\\foo foo\\foo\\foo.csv"
}
]
}
}
"@
& $Python $Script $json
</code></pre>
<p>I printed sys.argv[1]:</p>
<pre><code> {Items:[{Name:foo,File:\\\\foo\\foo\\foo
{Items:[{Name:foo,File:\\\\foo\\foo\\foo
</code></pre>
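<p>There are two separate problems here: the here-string wraps the JSON in an extra pair of braces (<code>{ { ... } }</code> is not valid JSON, which is exactly the decode error at line 2 column 4), and the quotes get stripped on the way through PowerShell, as the printed <code>sys.argv[1]</code> shows. Once valid JSON arrives, the reshaping itself is a one-liner (sketch with shortened paths):</p>

```python
import json

raw = r'''
{
  "Items": [
    {"Name": "baz", "File": "\\\\baz\\baz\\baz.xls"},
    {"Name": "bar", "File": "\\\\bar\\bar\\bar.csv"},
    {"Name": "foo", "File": "\\\\foo\\foo\\foo.csv"}
  ]
}
'''

d = json.loads(raw)
# invert the Items list into a Name -> File mapping
result = {item["Name"]: item["File"] for item in d["Items"]}
print(result)
```

On the PowerShell side, remove the outer braces from the here-string; and because Windows PowerShell 5.1 does not escape embedded double quotes when calling native executables, escaping them first (e.g. something like <code>$json -replace '"','\"'</code>, an assumption to verify for your shell version) or writing the JSON to a temp file is usually needed.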
|
<python><json>
|
2024-04-18 15:00:34
| 1
| 2,185
|
iBeMeltin
|
78,348,470
| 1,141,798
|
XGBoost AFT survival model with external memory iterator
|
<p>How to make <a href="https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html" rel="nofollow noreferrer">XGBoost external memory</a> and <a href="https://xgboost.readthedocs.io/en/stable/tutorials/aft_survival_analysis.html" rel="nofollow noreferrer">XGBoost survival AFT model</a> work together?</p>
<p>Background: I've written an XGBoost iterator for batched training, as in the linked example.
Now I want to train an AFT model from the <code>xgboost</code> library.
The problem is the XGBoost <code>DMatrix</code>, for which we need to call <code>set_float_info</code> to set the survival censoring intervals. For example:</p>
<pre><code>dtrain.set_float_info('label_lower_bound', y_lower_bound[train_index])
dtrain.set_float_info('label_upper_bound', y_upper_bound[train_index])
</code></pre>
<p>Attached please find my redacted code (can't attach everything, but that's the problematic gist).
I got the censoring time data in <code>df</code>, but I don't know how to "attach" it to <code>Xy_train</code>.</p>
<pre><code>class BatchedParquetIterator(xgboost.DataIter):
def __init__(
self
):
# ...
super().__init__(cache_prefix=os.path.join(".", "cache"))
def next(self, input_data: Callable):
"""Advance the iterator by 1 step and pass the data to XGBoost. This function is
called by XGBoost during the construction of ``DMatrix``
"""
if self._it == len(self._file_paths):
return 0 # return 0 to let XGBoost know this is the end of iteration
df = pd.read_parquet(self._file_paths[self._it])
X, y = self._preprocess(df)
input_data(data=X, label=y)
self._it += 1
return 1 # Return 1 to let XGBoost know we haven't seen all the files yet.
def reset(self):
"""Reset the iterator to its beginning"""
self._it = 0
def _preprocess(self, df: pd.DataFrame) -> Tuple[pd.DataFrame, pd.DataFrame]:
# ...
return X, y
parquet_iterator_train = BatchedParquetIterator(batches)
Xy_train = xgboost.DMatrix(parquet_iterator_train)
</code></pre>
|
<python><xgboost><survival-analysis>
|
2024-04-18 14:58:09
| 1
| 1,302
|
Dominik Filipiak
|
78,348,446
| 13,294,364
|
Fast / Efficient method to retrieve value of specified field with BLPAPI
|
<p>The reason I ask this question is the manner in which Bloomberg sends its data via BLPAPI. Following on from this <a href="https://stackoverflow.com/questions/75958741/blpapi-retrieve-value-of-specific-field/75959836">post</a>, I want to establish an efficient method of obtaining the value of a specific field. Because of the way the data is sent, there can be multiple messages (msgs) in <code>session.nextEvent()</code>, and surplus data arrives beyond what was requested, so I was wondering whether there is a known efficient way of handling this. So far, with the techniques and methods I have used, for 60 securities and 5 subscriptions the data is never live, as it lags behind, and I believe the reason is how I manage the incoming data. I have an example below showing a subscription for one security. Given that MKTDATA_EVENT_TYPE and MKTDATA_EVENT_SUBTYPE can differ, I am struggling to find an effective way to do this.</p>
<p>My aim is to avoid for loops where possible and opt for dictionaries to direct me to the wanted value.</p>
<pre><code>import blpapi
from bloomberg import BloombergSessionHandler
# session = blpapi.Session()
host='localhost'
port=8194
session_options = blpapi.SessionOptions()
session_options.setServerHost(host)
session_options.setServerPort(port)
session_options.setSlowConsumerWarningHiWaterMark(0.05)
session_options.setSlowConsumerWarningLoWaterMark(0.02)
session = blpapi.Session(session_options)
if not session.start():
print("Failed to start Bloomberg session.")
subscriptions = blpapi.SubscriptionList()
fields = ['BID','ASK','TRADE','LAST_PRICE','LAST_TRADE']
subscriptions.add('GB00BLPK7110 @UKRB Corp', fields)
session.subscribe(subscriptions)
session.start()
while(True):
event = session.nextEvent()
print("Event type:",event.eventType())
if event.eventType() == blpapi.Event.SUBSCRIPTION_DATA:
i = 0
for msg in event:
print("This is msg ", i)
i+=1
print("\n" , "msg is ", msg, "\n")
print(" Message type:",msg.messageType())
eltMsg = msg.asElement();
msgType = eltMsg.getElement('MKTDATA_EVENT_TYPE').getValueAsString();
msgSubType = eltMsg.getElement('MKTDATA_EVENT_SUBTYPE').getValueAsString();
print(" ",msgType,msgSubType)
for fld in fields:
print(" Fields are :", fields)
if eltMsg.hasElement(fld):
print(" ",fld,eltMsg.getElement(fld).getValueAsFloat())
else:
for msg in event:
print(" Message type:",msg.messageType())
</code></pre>
<p>I tried obtaining the values for the specified fields I subscribed to but found that my code was too slow and as such didn't meet the requirements to display live data.</p>
<pre><code> def process_subscription_data1(self, session):
while True:
event = session.nextEvent()
print(f"The event is {event}")
if event.eventType() == blpapi.Event.SUBSCRIPTION_DATA:
print(f"The event type is: {event.eventType()}")
for msg in event:
print(f"The msg is: {msg}")
data = {'instrument': msg.correlationIds()[0].value()}
print(f"The data is: {data}")
# Processing fields efficiently
for field in self.fields:
print("field is ", field, " ", self.fields)
element = msg.getElement(field) if msg.hasElement(field) else None
print("element is ", element)
data[field] = element.getValueAsString() if element and not element.isNull() else 'N/A'
print(f"Emitting data for {data}")
self.data_signal.emit(data) # Emit data immediately for each message
</code></pre>
<p>^^ The code above is what I tried, and it was far too slow even without the print statements (they are only there to show how convoluted the code is).</p>
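<p>Since <code>blpapi</code> cannot be run outside a Bloomberg environment, the sketch below uses plain dictionaries to stand in for decoded messages. The pattern itself is the point: build the wanted-field set once, decode each message to a dict once, and take a set intersection, so there is no per-message loop over every subscribed field:</p>

```python
WANTED = frozenset({"BID", "ASK", "TRADE", "LAST_PRICE", "LAST_TRADE"})

def extract_wanted(msg_fields: dict, wanted: frozenset = WANTED) -> dict:
    # The intersection touches only fields present in BOTH sides, instead
    # of probing every subscribed field for every incoming message.
    return {name: msg_fields[name] for name in wanted & msg_fields.keys()}

# a hypothetical decoded tick -- in real code this dict would come from
# walking msg.asElement() once per message
tick = {"MKTDATA_EVENT_TYPE": "QUOTE", "BID": 99.5, "ASK": 99.7}
print(extract_wanted(tick))
```

Keeping the event loop itself free of print statements and handing each extracted dict to a worker thread/queue is what usually restores "live" latency at 60 securities Γ 5 fields.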
|
<python><bloomberg><blpapi>
|
2024-04-18 14:55:40
| 1
| 305
|
Harry Spratt
|
78,348,385
| 3,872,452
|
AsyncClient logging input/output body
|
<p>Can <code>AsyncClient</code> be extended or parametrized to log the request and response body of every external call, for reuse across the multiple methods that share the same <code>AsyncClient</code>?</p>
<pre><code>import json
import logging
from fastapi import FastAPI, HTTPException
from httpx import AsyncClient
# Set up basic configuration for logging
logging.basicConfig(level=logging.DEBUG)
app = FastAPI()
client = AsyncClient()
@app.get("/example_post")
async def example_post():
url = "https://jsonplaceholder.typicode.com/posts" # Free fake and reliable API for testing and prototyping.
payload = {
"title": 'fooxxx',
"body": 'bar',
"userId": 1,
}
# Perform the HTTP POST request; would like to log input payload and output data response
response = await client.post(url, content=json.dumps(payload))
# Parse response as JSON
data = response.json()
# Searching for universal solution that can log request and response of every api call instead of logging manually
logging.info(f"Received response: {data}")
return data
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
</code></pre>
|
<python><httpx>
|
2024-04-18 14:46:06
| 1
| 418
|
Levijatanu
|
78,348,320
| 769,922
|
Python subclasses function
|
<p>I have an abstract class defined (BaseClass). And then I define a subclass in a different "folder" or file (SubClass).</p>
<p>In my "main" function, I try to check what are the subclasses of the base class. Ideally, I was hoping it would show me all the subclasses regardless of where they are. However, python shows me an empty list.</p>
<p>If I move the subclass into the same file as the baseclass, then subclasses shows the list properly.</p>
<pre class="lang-py prettyprint-override"><code># base.py
import abc

class BaseClass(abc.ABC):
    @abc.abstractmethod
    def hello():
        ...

class SubClass2(BaseClass):
    def hello():
        print("Hello Child2")
</code></pre>
<pre class="lang-py prettyprint-override"><code># sub_class.py
from base import BaseClass

class SubClass(BaseClass):
    def hello():
        print("Hello Child")
</code></pre>
<pre class="lang-py prettyprint-override"><code># main.py
from base import BaseClass

if __name__ == "__main__":
    print(BaseClass.__subclasses__())
</code></pre>
<p>Here is an example to demo: <a href="https://www.online-python.com/gckwbpi9aQ" rel="nofollow noreferrer">https://www.online-python.com/gckwbpi9aQ</a></p>
<p>Further strangeness. If I try to import the missing child in the main file and check its ancestors; lo and behold, subclasses returns the right results</p>
<pre class="lang-py prettyprint-override"><code># main.py but importing the missing subclass
from base import BaseClass
from sub_class import SubClass

if __name__ == "__main__":
    print(BaseClass.__subclasses__())
    print(SubClass.__mro__)
</code></pre>
<p>I have been trying hard to find documentation on why this behaves the way it does, but I'm pretty sure I'm missing something super basic.</p>
<p>The use case I'm trying to accomplish:
Following this solution in typer, where we want to create dynamic commands: <a href="https://github.com/tiangolo/typer/issues/257" rel="nofollow noreferrer">https://github.com/tiangolo/typer/issues/257</a>, I wanted to create a base class that folks can extend, so that any concrete subclass could be added into the command structure at runtime.
Maybe the incorrect assumption I had in my head was that all classes are loaded ahead of time.</p>
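<p>The direction I'm experimenting with, if my assumption about module loading is right, is to import every module of a plugins package up front so the subclasses actually get defined and registered:</p>

```python
import importlib
import pkgutil

def load_subclass_modules(package_name):
    """Import every module in a package so any subclasses defined there
    actually get created (a class only exists once its defining module has
    executed) and therefore show up in BaseClass.__subclasses__()."""
    package = importlib.import_module(package_name)
    for module_info in pkgutil.iter_modules(package.__path__):
        importlib.import_module(f"{package_name}.{module_info.name}")
```

I would call <code>load_subclass_modules("my_plugins")</code> once at startup, before building the typer commands.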
<hr />
|
<python><oop>
|
2024-04-18 14:36:26
| 1
| 1,037
|
Serendipity
|
78,348,285
| 8,262,535
|
Pandas time series split shows gaps
|
<p>I am splitting a continuous time series (power consumption by the hour) into train/val/test but see unexpected gaps in the split dataframes. What might be the cause?</p>
<pre><code>train_split_end = round(len(df) * (1 - val_ratio))
val_split_end = len(df)
train = df.iloc[:train_split_end]
val = df.iloc[train_split_end:val_split_end]
</code></pre>
<p>The splits themselves are contiguous</p>
<pre><code>train.index[-1]
Out[26]: Timestamp('2014-07-26 23:00:00')
val.index[0]
Out[27]: Timestamp('2014-07-27 00:00:00')
</code></pre>
<p>But the plots show gaps <strong>inside</strong> of each train['MW'].plot() and val['MW'].plot() set which are not present in the original df.
The dataset is <a href="https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption" rel="nofollow noreferrer">https://www.kaggle.com/datasets/robikscube/hourly-energy-consumption</a> - AEP_hourly.csv</p>
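<p>Before blaming the plot, I wanted to verify the index itself; here is a small diagnostic I put together (assuming the Datetime column is the frame's index):</p>

```python
import pandas as pd

def diagnose_index(df):
    """Report whether a DatetimeIndex is sorted, has duplicates,
    and which step sizes occur between consecutive timestamps."""
    idx = df.index.to_series()
    return {
        "sorted": bool(idx.is_monotonic_increasing),
        "duplicates": int(idx.duplicated().sum()),
        "steps": idx.diff().value_counts().to_dict(),
    }
```

Any step other than one hour, a duplicate timestamp, or an unsorted index would explain gaps inside the plotted splits.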
<p>Thanks for suggestions!
<a href="https://i.sstatic.net/U9GOm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/U9GOm.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/D145k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D145k.png" alt="enter image description here" /></a></p>
|
<python><pandas><dataframe><datetime>
|
2024-04-18 14:31:55
| 2
| 385
|
illan
|
78,348,245
| 2,393,597
|
How to uniformly sample from space of orthonormal matrices
|
<p>A simple way of generating a random orthonormal matrix is to first sample a random matrix and subsequently apply the singular value decomposition</p>
<pre><code>def random_orthonormal_matrix(n):
    random_matrix = np.random.normal(0., 1., (n, n))
    u, _, _ = np.linalg.svd(random_matrix)
    return u
</code></pre>
<p>However, using this procedure seems to limit the space from which orthonormal matrices are sampled as <code>n</code> increases. In particular, it seems like matrices become less and less likely to be sampled from either end of the extremes, i.e., close to the identity or the negative of the identity matrix:</p>
<pre><code>def cosine_similarity(A, B):
    norm_A = np.linalg.norm(A)
    norm_B = np.linalg.norm(B)
    return np.dot(A.flatten(), B.flatten()) / (norm_A * norm_B)

for n in range(4, 33, 4):
    similarities = []
    I = np.identity(n)
    for _ in range(100000):
        U = random_orthonormal_matrix(n)
        similarities.append(cosine_similarity(U, I))
    print(np.min(similarities), np.max(similarities))
</code></pre>
<p>Generates output</p>
<pre><code>-0.9512750256370359 0.9196429830937393
-0.5402804372155989 0.5641902334710601
-0.33279493103542684 0.3570616519070167
-0.26855564226727774 0.26038092858834694
-0.24803547122804348 0.21364982076125164
-0.18041307918712912 0.18015163443704285
-0.1502303153596804 0.15272379170859116
-0.13915507336472144 0.13962760476513508
</code></pre>
<p>How can this procedure be modified, such that the expected distance to the identity matrix is uniform for any <code>n</code>? And further, is it possible to directly generate a random orthonormal matrix for a given distance <code>d</code>?</p>
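<p>For comparison I also tried the sign-corrected QR construction, which as I understand it (from Mezzadri's "How to generate random matrices from the classical compact groups") samples exactly from the Haar measure on the orthogonal group:</p>

```python
import numpy as np

def haar_orthogonal(n, seed=None):
    """Haar-uniform orthogonal matrix via QR with the sign correction
    that makes the factorization unique; plain QR (or SVD) of a
    Gaussian matrix is not exactly uniform without it."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    # scale each column by the sign of the corresponding diagonal entry of R
    return q * np.sign(np.diag(r))
```

With this sampler the concentration I observed still seems to occur, so I suspect it may be a property of the measure itself rather than of my procedure, but I would like confirmation.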
|
<python><random><linear-algebra><numeric>
|
2024-04-18 14:24:46
| 1
| 599
|
Genius
|
78,348,182
| 9,274,726
|
Airflow - K8s- Unable to mount HostPath using KubenetesPodOperator
|
<p>I have an Azure Data Factory (ADF) managed Airflow instance provisioned, and I'm trying to schedule a DAG.
In this DAG, I'm trying to run a shell script that is present in the hostPath "/opt/airflow/dags" using KubernetesPodOperator. The shell script will submit some kubectl commands to a k8s cluster. However, the pod is not getting started on k8s.</p>
<p>dag.py:</p>
<pre><code>from airflow import DAG
from airflow.operators.python_operator import PythonOperator
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator
from datetime import datetime, timedelta
from kubernetes.client import models as k8s
# Define your default_args and DAG
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'start_date': datetime(1970, 1, 1),
'retries': 0,
'retry_delay': timedelta(minutes=5),
}
dag = DAG(
'test_operations_dag',
default_args=default_args,
description='DAG for test Operations',
#schedule_interval=timedelta(days=1),
schedule_interval=None,
concurrency=80,
)
# Correct instantiation using V1HostPathVolumeSource
host_path_volume_source = k8s.V1HostPathVolumeSource(
path="/opt/airflow/dags",
type='Directory' # optional: specify the type of the hostPath
)
volume = k8s.V1Volume(
name="dags",
host_path=host_path_volume_source
)
volume_mounts = [
k8s.V1VolumeMount(
mount_path="/dags", name="dags"
)
]
status_task = KubernetesPodOperator(
task_id=f'status-task-vbuv',
namespace="spark-apps",
image="bitnami/kubectl:latest",
cmds=["sh", "-c"],
arguments=[
f"cp /dags/sparkapplication_spark-pi-fixed2.sh /tmp/sparkapplication_spark-pi-fixed2.sh && chmod +x /tmp/sparkapplication_spark-pi-fixed2.sh && /tmp/sparkapplication_spark-pi-fixed2.sh"
],
service_account_name="airflow-sparkapp",
get_logs=True,
kubernetes_conn_id="k8s-airflow",
dag=dag,
volumes=[volume],
volume_mounts=volume_mounts,
)
# Set up dependencies
status_task
</code></pre>
<p>Error in dag log:</p>
<pre><code>[2024-04-18, 05:01:52 UTC] {pod_manager.py:313} WARNING - Pod not yet started: status-task-vbuv-8h4po563
[2024-04-18, 05:01:52 UTC] {pod.py:726} INFO - Deleting pod: status-task-vbuv-8h4po563
[2024-04-18, 05:01:52 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py", line 551, in execute_sync
self.await_pod_start(pod=self.pod)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/operators/pod.py", line 513, in await_pod_start
self.pod_manager.await_pod_start(pod=pod, startup_timeout=self.startup_timeout_seconds)
File "/home/airflow/.local/lib/python3.8/site-packages/airflow/providers/cncf/kubernetes/utils/pod_manager.py", line 320, in await_pod_start
raise PodLaunchFailedException(msg)
airflow.providers.cncf.kubernetes.utils.pod_manager.PodLaunchFailedE
</code></pre>
|
<python><kubernetes><airflow><directed-acyclic-graphs><kubernetespodoperator>
|
2024-04-18 14:17:22
| 0
| 913
|
Tad
|
78,348,036
| 8,934,639
|
How to call AWS Bedrock asynchronously
|
<p>Is there a way to call the Bedrock Claude 3 model with the Python SDK asynchronously?</p>
<p>More specifically, I want the results to be sent to S3.</p>
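<p>What I've sketched so far wraps the synchronous boto3 calls in <code>asyncio.to_thread</code>; the model id and the Messages-API body format here are my assumptions from the Bedrock docs:</p>

```python
import asyncio
import json

def build_claude3_body(prompt, max_tokens=512):
    # Claude 3 on Bedrock expects the Anthropic Messages API body format
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

async def invoke_and_store(prompt, bucket, key,
                           model_id="anthropic.claude-3-sonnet-20240229-v1:0"):
    import boto3  # imported lazily; assumes AWS credentials are configured
    bedrock = boto3.client("bedrock-runtime")
    s3 = boto3.client("s3")
    # boto3 is blocking, so hand the calls off to a worker thread
    response = await asyncio.to_thread(
        bedrock.invoke_model, modelId=model_id, body=build_claude3_body(prompt)
    )
    payload = response["body"].read()
    await asyncio.to_thread(s3.put_object, Bucket=bucket, Key=key, Body=payload)
```

Is there a more idiomatic way, e.g. something built into Bedrock that delivers results to S3 directly?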
|
<python><large-language-model><amazon-bedrock>
|
2024-04-18 13:59:57
| 3
| 301
|
Chedva
|
78,347,931
| 710,955
|
Python: Add a trailing slash to the URL but only if the URL doesn't end in a slash already or a file extension
|
<p>I want to normalize a URL in Python. My main purpose is to add a slash / at the end of the URL if it is not already present, but only if the URL doesn't already end in a slash or a file extension (so images, .php files, pages, etc. aren't affected).</p>
<p>For example, if it is <code>http://www.example.com</code> then it should be converted to <code>http://www.example.com/</code>. But if it is <code>http://www.example.com/image.png</code> then it should not be affected.</p>
<p>To do this, I use this regular expression <code>/([^/.]+)$</code>. <a href="https://regex101.com/r/BGYy9U/1" rel="nofollow noreferrer">Regex demo</a></p>
<p>But it doesn't work in this Python code; <code>start_url</code> is not modified:</p>
<pre><code>import re
start_url = "https://zonetuto.fr"
start_url = re.sub(r'/([^/.]+)$', r'/\1/', start_url)
print(start_url)
</code></pre>
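<p>A non-regex fallback I've also considered, splitting the URL so that only the path component is inspected for an extension:</p>

```python
from urllib.parse import urlsplit, urlunsplit

def add_trailing_slash(url):
    """Append '/' unless the URL already ends in one or the last
    path segment looks like a file (contains a dot)."""
    parts = urlsplit(url)
    path = parts.path
    if not path.endswith("/") and "." not in path.rsplit("/", 1)[-1]:
        path += "/"
    return urlunsplit(parts._replace(path=path))
```

This also handles the case where the domain itself contains a dot (e.g. <code>https://zonetuto.fr</code>), which my regex seems to trip over.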
|
<python><regex><url>
|
2024-04-18 13:46:04
| 3
| 5,809
|
LeMoussel
|
78,347,920
| 8,781,465
|
How to integrate a glossary of abbreviations into LangChain for better SQL query generation (NL2SQL)?
|
<p>I am using <code>LangChain</code> to interface with an Oracle database where many column names include abbreviations. I want to provide <code>LangChain</code> with a glossary that explains these abbreviations to improve its ability to accurately select the right columns for queries. How can I incorporate a glossary into my current <code>LangChain</code> setup to give it this additional context? Note that in my corporate environment simply renaming the columns is not an option.</p>
<p>This is how I currently use <code>LangChain</code> for it to answer my natural language questions in natural language:</p>
<pre class="lang-py prettyprint-override"><code>from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
toolkit = SQLDatabaseToolkit(db=db, llm=chat_client)
agent_executor = create_sql_agent(llm=chat_client,
toolkit=toolkit,
agent_type="openai-tools",
verbose=True,
return_intermediate_steps=True)
agent_executor.invoke("How many orders from Singapore did we have in March 2023?")
</code></pre>
<p>I'd like to add a glossary that includes entries like <code>{ "CST_ID": "Customer ID", "PRDCT_NUM": "Product Number" }</code> to help <code>LangChain</code> understand these abbreviations better.</p>
<p>What modifications are needed in the <code>LangChain</code> configuration or code to make this possible?</p>
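<p>One direction I've been testing is rendering the glossary into the agent's system prompt; here I'm assuming that <code>create_sql_agent</code> forwards a custom <code>prefix</code> string to the agent:</p>

```python
def glossary_prefix(glossary):
    """Render an abbreviation glossary as extra instructions for the agent."""
    lines = "\n".join(f"- {abbr}: {meaning}" for abbr, meaning in glossary.items())
    return (
        "You are an agent designed to interact with a SQL database.\n"
        "Column names use the following abbreviations:\n"
        f"{lines}\n"
    )

# hypothetical wiring (the `prefix` keyword is my assumption about the factory):
# agent_executor = create_sql_agent(llm=chat_client, toolkit=toolkit,
#                                   agent_type="openai-tools",
#                                   prefix=glossary_prefix(my_glossary))
```

Is this the intended mechanism, or is there a cleaner way to attach such context?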
|
<python><langchain><py-langchain>
|
2024-04-18 13:43:56
| 0
| 1,815
|
DataJanitor
|
78,347,898
| 11,586,490
|
Emojis appearing as chinese symbols when I share to whatsapp from python
|
<p>I've built a scorecard app where users can share the results of their scores to WhatsApp. I'm trying to use the medal emojis (first place, second place and third place). It works fine when I print to console on PyCharm with my Windows laptop. However, now I've packaged my apk and deployed my app onto my android phone, when I go to share the scorecard to WhatsApp I get chinese symbols in place of my emojis.</p>
<p>I've tried using the unicode for the first place medal ("\U0001F947") and I've also installed the emoji library and done <code>emoji.emojize(':1st_place_medal:')</code>. I also tried the HTML entity (can't remember what it was exactly, something like &#291315) but that just printed out the actual text &#291315</p>
<p>This also happens in Facebook Messenger so it's not a WhatsApp issue</p>
<p>Here is my code:</p>
<pre><code>def share_app(self):
    from kivy import platform

    emoji_test = "\U0001F947"
    if platform == 'android':
        from jnius import autoclass
        PythonActivity = autoclass('org.kivy.android.PythonActivity')
        Intent = autoclass('android.content.Intent')
        String = autoclass('java.lang.String')

        intent = Intent()
        intent.setAction(Intent.ACTION_SEND)
        intent.putExtra(Intent.EXTRA_TEXT, String('{}'.format(f"{emoji_test}")))
        intent.setType('text/plain')
        chooser = Intent.createChooser(intent, String(""))
        PythonActivity.mActivity.startActivity(chooser)
</code></pre>
|
<python><android><kivy><whatsapp>
|
2024-04-18 13:41:56
| 1
| 351
|
Callum
|
78,347,675
| 6,212,530
|
Typesafe abstract attributes in Python with Pylance
|
<p>I followed this <a href="https://stackoverflow.com/a/41897823/6212530">answer for python 3.3</a>:</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod

class Abstract(ABC):
    @property
    @abstractmethod
    def title(self) -> str: ...

class Concrete(Abstract):
    title = "Test"  # pylance error
</code></pre>
<p>However in <code>Concrete</code> I get pylance error:</p>
<pre><code>Expression of type "Literal['Test']" cannot be assigned to declared type "property"
"Literal['Test']" is incompatible with "property"
</code></pre>
<p>Is it possible to specify abstract attribute in abstract parent class, so it can be overriden by literal value in inheriting class?</p>
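<p>One variant I've been experimenting with is a plain annotated attribute instead of the abstract property; Pylance accepts the literal override, though as far as I can tell abstractness of the attribute is then not enforced at runtime, which is the trade-off:</p>

```python
from abc import ABC

class Abstract(ABC):
    title: str  # declared but not assigned; subclasses are expected to provide it

class Concrete(Abstract):
    title = "Test"  # no Pylance error: Literal['Test'] is assignable to str
```

Is there a way to keep both the type-checker happiness and the runtime abstract enforcement?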
|
<python><python-typing><pyright>
|
2024-04-18 13:03:18
| 2
| 1,028
|
Matija Sirk
|
78,347,580
| 2,583,417
|
Generate buttons programmatically in PyQT
|
<p>I need to generate buttons within a loop, and inside that loop assign to every button a different task.<br />
I know something like this was already asked, and by doing some search I achieved the following code:</p>
<pre><code>def pressed_0():
    print(0)

def pressed_1():
    print(1)

def pressed_2():
    print(2)

for i in range(0, 3):
    setattr(self, f"pressed_{i}", qtw.QPushButton(f"button {i}"))
    exec(f"self.pressed_{i}.clicked.connect(lambda:pressed_{i}())")
    self.layout().addWidget(getattr(self, f"pressed_{i}"))
</code></pre>
<p>What I don't like is the usage of the exec() function and setattr/getattr.
What I'm asking is whether there is a better way to accomplish what I need.</p>
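<p>The core of what I seem to be working around is late binding of the loop variable. A default argument (or <code>functools.partial</code>) binds the index at creation time, which I believe removes the need for exec()/setattr(): in Qt terms I would write <code>btn.clicked.connect(lambda checked=False, i=i: pressed(i))</code> inside the loop and keep the buttons in a plain list. The binding itself can be shown without Qt:</p>

```python
def make_handlers(n):
    """One handler per index; `i=i` freezes the current loop value.
    A bare `lambda: i` would late-bind and always return n - 1."""
    return [lambda i=i: i for i in range(n)]
```

This way each handler does something different without generating attribute names dynamically.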
|
<python><pyqt><pyqt5>
|
2024-04-18 12:52:19
| 1
| 585
|
Ferex
|
78,347,474
| 8,781,465
|
How to integrate Oracle Column Comments into LangChain for enhanced SQL query generation (NL2SQL)?
|
<p>I'm working with an Oracle database that uses column comments to detail cryptic column names, crucial for data retrieval operations. These comments are visible in Oracle SQL Developer but are not included in the <code>CREATE</code> table statement:</p>
<pre><code>COMMENT ON COLUMN "SCHEMA"."TABLE_NAME"."CLMN_NM" IS 'Column which contains information X. Distinct values: ("A", "B", "C")';
</code></pre>
<p>I'm using <code>LangChain</code> to interface with the database. LangChain internally requests the CREATE statement. But since the column comments are after the create statement, <code>LangChain</code> does not have access to it.</p>
<p><strong>How can I modify my approach so that <code>LangChain</code> can dynamically read and utilize these Oracle column comments?</strong> The goal is for <code>LangChain</code> to programmatically use these comments to better determine which columns to select for generating accurate SQL queries in natural language processing contexts.</p>
<p>Here's my current setup:</p>
<pre class="lang-py prettyprint-override"><code>from cx_Oracle import makedsn
from langchain.sql_database import SQLDatabase
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
dsn_tns = makedsn(host=host, port=port, service_name=service_name)
connection_string = f"oracle+cx_oracle://{usr}:{pwd}@{dsn_tns}"
db = SQLDatabase.from_uri(connection_string)
toolkit = SQLDatabaseToolkit(db=db,llm=chat_client)
agent_executor = create_sql_agent(llm=chat_client, toolkit=toolkit, agent_type="openai-tools")
agent_executor.invoke("How many orders from Singapore did we have in March 2023?")
</code></pre>
<p>I'm looking for guidance on modifying my code so that LangChain gets access to the column comments.</p>
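<p>One route I'm evaluating is SQLDatabase's <code>custom_table_info</code> parameter; as I understand it the supplied text replaces the auto-generated CREATE statement for that table, so the DDL would need to be included alongside the comments. A hypothetical builder (table and column names invented):</p>

```python
def build_custom_table_info(ddl, comments):
    """Combine each table's CREATE statement with its column comments
    so the agent sees both. ddl and comments are {table: ...} mappings."""
    info = {}
    for table, create_stmt in ddl.items():
        lines = "\n".join(
            f"-- {col}: {comment}" for col, comment in comments.get(table, {}).items()
        )
        info[table] = f"{create_stmt}\n{lines}"
    return info

# hypothetical wiring:
# db = SQLDatabase.from_uri(connection_string,
#                           custom_table_info=build_custom_table_info(ddl, comments))
```

Ideally, though, I'd prefer the comments to be read dynamically from <code>ALL_COL_COMMENTS</code> rather than hard-coded.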
|
<python><oracle-database><langchain><py-langchain>
|
2024-04-18 12:34:02
| 1
| 1,815
|
DataJanitor
|
78,347,470
| 10,770,967
|
Reading UTC timestamp in python pandas and converting it to European dates
|
<p>I have an issue with a timestamp column and hope you can provide some support. I checked a few already-posted questions here, but somehow I couldn't find the right approach in them.</p>
<p>I have a pandas frame with multiple columns, among others a timestamp column containing UTC dates. My goal is to extract only the dates and write them in the European format dd.mm.yyyy, because I need to save these dates into Excel to work with them there.</p>
<pre><code>import pandas as pd
Timestamp=[
"06.02.2024 00:43:31 UTC",
"06.02.2024 01:34:35 UTC",
"06.02.2024 02:21:41 UTC",
"06.02.2024 02:26:41 UTC",
"06.02.2024 03:19:52 UTC",
"06.02.2024 07:15:48 UTC",
"06.02.2024 08:22:46 UTC",
"06.02.2024 09:56:12 UTC",
"06.02.2024 12:00:43 UTC",
"06.02.2024 12:22:14 UTC",
"06.02.2024 12:23:21 UTC"]
df=pd.DataFrame(Timestamp)
df=df.rename(columns={0:"Timestamp"})
df["Timestamp"]=pd.to_datetime(df["Timestamp"],utc=True).dt.date
df["Timestamp"]=pd.to_datetime(df["Timestamp"])
print(df)
Timestamp
0 2024-06-02
1 2024-06-02
2 2024-06-02
3 2024-06-02
4 2024-06-02
5 2024-06-02
6 2024-06-02
7 2024-06-02
8 2024-06-02
9 2024-06-02
10 2024-06-02
</code></pre>
<p>I need to have it in dd.mm.yyyy format.
As said: I tried multiple ways but nothing really worked, and I am sure that this is not that complicated and I am just blind. Can you help me?</p>
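<p>For reference, what I've now tried: parsing with an explicit day-first format and then formatting back to strings (assuming Excel is fine receiving text dates):</p>

```python
import pandas as pd

timestamps = ["06.02.2024 00:43:31 UTC", "06.02.2024 01:34:35 UTC"]
df = pd.DataFrame({"Timestamp": timestamps})

# an explicit format avoids the default month-first interpretation
parsed = pd.to_datetime(df["Timestamp"], format="%d.%m.%Y %H:%M:%S UTC", utc=True)
df["Date"] = parsed.dt.strftime("%d.%m.%Y")
```

Is that the right approach, or is there a cleaner way?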
|
<python><pandas><datetime><utc>
|
2024-04-18 12:33:44
| 2
| 402
|
SMS
|
78,347,434
| 8,588,743
|
Problem setting up Llama-2 in Google Colab - Cell-run fails when loading checkpoint shards
|
<p>I'm trying to use <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" rel="nofollow noreferrer">Llama 2 chat</a> (via Hugging Face) with 7B parameters in Google Colab (Python 3.10.12). I've already obtained my access token via Meta. I simply use the code on Hugging Face on how to implement the model along with my access token. Here is my code:</p>
<pre><code>!pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
token = "---Token copied from Hugging Face and pasted here---"
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf", token=token)
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf", token=token)
</code></pre>
<p>It starts downloading the model, but when it reaches <code>Loading checkpoint shards:</code> it just stops running and there is no error:</p>
<p><a href="https://i.sstatic.net/6wcxG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6wcxG.png" alt="enter image description here" /></a></p>
|
<python><huggingface-transformers><large-language-model><llama>
|
2024-04-18 12:28:02
| 1
| 903
|
Parseval
|
78,347,363
| 10,012,856
|
Upload a photo returns 'Item type not valid.'
|
<p>I'm trying to upload a photo using the code below based on ArcGIS Python API:</p>
<pre><code>def handle_attachments_api(
    portal_domain: str,
    portal_username: str,
    portal_password: str,
    data_type: str = None,
    data_url: str = None,
    filename: str = None,
    type_keywords: str = None,
    description: str = None,
    title: str = None,
    url: str = None,
    text: str = None,
    tags: str = None,
    snippet: str = None,
    extent: str = None,
    spatial_reference: str = None,
    access_information: str = None,
    license_info: str = None,
    culture: str = None,
    comments_enabled: bool = True,
    access: str = None,
    overwrite: bool = False,
    data: str = None,
    thumbnail: str = None,
    metadata: str = None,
    owner: str = None,
    folder: str = None,
    item_id: guid = None,
):
    gis = GIS(
        url=f"https://{portal_domain}/portal",
        username=portal_username,
        password=portal_password
    )

    item_properties = {
        "type": data_type,
        "dataUrl": data_url,
        "filename": filename,
        "typeKeywords": type_keywords,
        "description": description,
        "title": title,
        "url": url,
        "text": text,
        "tags": tags,
        "snippet": snippet,
        "extent": extent,
        "spatialReference": spatial_reference,
        "accessInformation": access_information,
        "licenseInfo": license_info,
        "culture": culture,
        "commentsEnabled": comments_enabled,
        "access": access,
        "overwrite": overwrite,
    }

    gis.content.add(
        item_properties=item_properties,
        data=data,
        thumbnail=thumbnail,
        metadata=metadata,
        owner=owner,
        folder=folder,
        item_id=item_id
    )
</code></pre>
<p>But I see the error below when I use 'jpg' as item type:</p>
<pre class="lang-none prettyprint-override"><code>>..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site->packages\arcgis\gis\__init__.py:6837: in add
> itemid = self._portal.add_item(
>..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site->packages\arcgis\gis\_impl\_portalpy.py:438: in add_item
> resp = self.con.post(path, postdata, files)
>..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site->packages\arcgis\gis\_impl\_con\_connection.py:1524: in post
> return self._handle_response(
>..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site->packages\arcgis\gis\_impl\_con\_connection.py:1000: in _handle_response
> self._handle_json_error(data["error"], errorcode)
>_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>
>self = <arcgis.gis._impl._con._connection.Connection object at 0x0000026A223A0B20>
>error = {'code': 400, 'details': [], 'message': 'Item type not valid.', 'messageCode': >'CONT_0113'}
>errorcode = 400
>
> def _handle_json_error(self, error, errorcode):
> errormessage = error.get("message")
> # handles case where message exists in the dictionary but is None
> if errormessage is None:
> errormessage = "Unknown Error"
> # _log.error(errormessage)
> if "details" in error and error["details"] is not None:
> if isinstance(error["details"], str):
> errormessage = f"{errormessage} \n {error['details']}"
> # _log.error(error['details'])
> else:
> for errordetail in error["details"]:
> if isinstance(errordetail, str):
> errormessage = errormessage + "\n" + errordetail
> # _log.error(errordetail)
>
> errormessage = errormessage + "\n(Error Code: " + str(errorcode) + ")"
> raise Exception(errormessage)
>E Exception: Item type not valid.
>E (Error Code: 400)
>
>..\..\..\..\..\AppData\Local\ESRI\conda\envs\arcgispro-py3-ps\lib\site->packages\arcgis\gis\_impl\_con\_connection.py:1023: Exception
</code></pre>
<p>Where can I see a list of all allowed item types?</p>
|
<python><arcgis>
|
2024-04-18 12:15:45
| 1
| 1,310
|
MaxDragonheart
|
78,347,354
| 893,254
|
Python find index of element in list based on evaluation of a function
|
<p>I am trying to find the index of an item in a Python list based on the evaluation value of a lambda function (or other callable).</p>
<p>This would be similar to a combination of an <code>.index()</code> operation with a <code>find_if</code> operation.</p>
<p>Here is an example:</p>
<pre><code># self contains `self.list`
def find_index_where(self, id: int) -> int:
    # callable expression which tests a sub-member of some object `input`
    def lambda_callable(input, id):
        return input.id == id

    # extra work being done:
    # first find an `item` then find an `index` corresponding to `item`
    matched_item = next(item for item in self.list if lambda_callable(item, id))
    index = self.list.index(matched_item)
    return index
</code></pre>
<p>Is there a way to roll the "item finding" operation together with the "index finding" operation?</p>
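<p>The closest I've gotten to rolling the two together is enumerating while searching, so the index comes out of the same single pass:</p>

```python
def index_where(items, predicate):
    """Return the index of the first element satisfying predicate;
    raise ValueError if none does (mirroring list.index)."""
    for i, item in enumerate(items):
        if predicate(item):
            return i
    raise ValueError("no matching element")
```

This avoids both the second scan done by <code>list.index()</code> and any equality ambiguity if two items compare equal.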
|
<python>
|
2024-04-18 12:12:57
| 2
| 18,579
|
user2138149
|
78,347,277
| 3,906,713
|
How to modularize dependent unit tests in Python
|
<p>Here is a hypothetical composite unit test. It calls two algorithms, checks some properties of the results independently, and then compares them to each other.</p>
<pre><code>class MyTestCase(unittest.TestCase):
    def test_composite(self):
        # Test 1. Testing first algorithm
        result1 = algorithm1()
        assert len(result1) == 5, "Algorithm 1 failed"

        # Test 2. Testing second algorithm
        result2 = algorithm2()
        assert len(result2) == 5, "Algorithm 2 failed"

        # Test 3. Testing that algorithms are consistent
        assert result1 == result2, "Algorithm results are inconsistent"
</code></pre>
<p>It would be convenient if the first and the second test could be run independently of each other, and the last test, which depends on both of them, would only run if the other two have passed. If only one of the algorithms failed, it would be useful to know that the other one passed. I would also like to compute the results of each algorithm only once.</p>
<p>Is there an intended pythonic way to do such refactoring, and, if yes, what are my options?</p>
<p><strong>NOTE</strong>: One solution I am aware of would be to calculate both results in the <code>__init__()</code>, and then only test them in the test functions. However, since the algorithms can crash, it would make sense that the algorithm computations are done in their own tests, so it is clear which one crashed.</p>
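<p>A variant I've been considering: cache each algorithm's result so it is computed once, and keep three separate tests so it's clear which algorithm crashed. The consistency test still runs even if the others fail, which I haven't solved without something like pytest-dependency. Sketch with stubbed algorithms:</p>

```python
import functools
import unittest

def algorithm1():
    return [1, 2, 3, 4, 5]  # stand-in for the real algorithm

def algorithm2():
    return [1, 2, 3, 4, 5]

@functools.lru_cache(maxsize=None)
def result1():
    return tuple(algorithm1())  # computed once, shared across tests

@functools.lru_cache(maxsize=None)
def result2():
    return tuple(algorithm2())

class MyTestCase(unittest.TestCase):
    def test_algorithm1(self):
        self.assertEqual(len(result1()), 5)

    def test_algorithm2(self):
        self.assertEqual(len(result2()), 5)

    def test_consistency(self):
        self.assertEqual(result1(), result2())
```

A crash in one algorithm now fails only its own test (plus the consistency test), so the passing algorithm is still visible in the report.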
|
<python><unit-testing>
|
2024-04-18 12:00:47
| 0
| 908
|
Aleksejs Fomins
|
78,347,183
| 14,923,149
|
How to visualize hierarchical data with nested pie charts in Python?
|
<p>I have hierarchical data that I would like to visualize using nested pie charts in Python. The data consists of Phylum, Genus, and Species levels, and I want to create a nested pie chart where each level represents a ring in the chart.</p>
<p>I have already attempted to implement this using Matplotlib, but I'm facing challenges in filtering and displaying only specific portions of the nested pie charts based on the abundance of certain categories. Specifically, I want to:</p>
<ol>
<li>Display all Phylum initially.</li>
<li>Filter and display only the Genera related to a specific Phylum (e.g., Firmicutes).</li>
<li>Filter and display only the Species related to a specific Genus (e.g., Bacillus).</li>
</ol>
<p>I've tried to modify the code based on suggestions I found online, but I'm not getting the desired output.</p>
<p>Could someone please provide guidance or a code example on how to achieve this visualization using Python and Matplotlib?</p>
<p>Any help would be greatly appreciated. Thank you!</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

# Read the Excel file
TissueS35_Analysis_Report = pd.read_excel("TissueS35_Analysis_Report.xlsx", sheet_name="Species")

# Select only the 'Phylum', 'Genus', and 'Species' columns
selected_columns = TissueS35_Analysis_Report[['Phylum', 'Genus', 'Species', 'Absolute Count']]

# Group by Phylum, Genus, and Species and sum the counts
grouped_data = selected_columns.groupby(['Phylum', 'Genus', 'Species']).sum().reset_index()

# Function to generate nested pie chart data
def nested_pie(df):
    outd = {}
    for level in range(3):
        if level == 0:
            gb = df.groupby('Phylum', sort=False).sum()
        elif level == 1:
            gb = df.groupby(['Phylum', 'Genus'], sort=False).sum()
        else:
            gb = df.groupby(['Phylum', 'Genus', 'Species'], sort=False).sum()
        outd[level] = {'names': gb.index.get_level_values(level).tolist(), 'values': gb['Absolute Count'].values}
    return outd

# Generate nested pie chart data
outd = nested_pie(grouped_data)

# Plot nested donut pie chart
fig, ax = plt.subplots()

# Plot Species level (Outermost ring)
sizes = outd[2]['values']
species_colors = plt.cm.tab20c.colors
species_labels = outd[2]['names']
ax.pie(sizes, radius=1, colors=species_colors, labels=species_labels, wedgeprops=dict(width=0.3, edgecolor='w'))

# Plot Genus level (Middle ring)
sizes = outd[1]['values']
genus_colors = plt.cm.tab20b.colors
genus_labels = outd[1]['names']
ax.pie(sizes, radius=0.7, colors=genus_colors, wedgeprops=dict(width=0.3, edgecolor='w'))

# Plot Phylum level (Innermost ring)
sizes = outd[0]['values']
phylum_colors = plt.cm.tab20.colors
phylum_labels = outd[0]['names']
ax.pie(sizes, radius=0.4, colors=phylum_colors, wedgeprops=dict(width=0.3, edgecolor='w'))

# Create legend for Phylum level
legend_handles = [Patch(color=color, label=label) for color, label in zip(phylum_colors, phylum_labels)]
ax.legend(handles=legend_handles, loc='center left', bbox_to_anchor=(1, 0.5), title='Phylum')

ax.set(aspect="equal")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/DeWrA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DeWrA.png" alt="enter image description here" /></a></p>
<pre><code>A small data reference is as follows:
Phylum Genus Species Absolute Count
168 Proteobacteria Pseudomonas Unclassified 73745
152 Proteobacteria Klebsiella Unclassified 10777
190 Proteobacteria Unclassified Unclassified 4932
132 Proteobacteria Chromobacterium Unclassified 1840
84 Firmicutes Lysinibacillus boronitolerans 1780
104 Firmicutes Weissella ghanensis 1101
10 Actinobacteria Corynebacterium Unclassified 703
138 Proteobacteria Cupriavidus gilardii 586
93 Firmicutes Staphylococcus Unclassified 568
183 Proteobacteria Stenotrophomonas geniculata 542
</code></pre>
<p>If possible, how can I produce an overlay image like the one given below? I will be thankful for this help. Regards
<a href="https://i.sstatic.net/zLvqT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zLvqT.png" alt="enter image description here" /></a></p>
|
<python><pandas><matplotlib>
|
2024-04-18 11:49:14
| 1
| 504
|
Umar
|
78,346,963
| 6,717,444
|
Write file extensions and their occurrences in a list into a dictionary?
|
<p>How can I solve this with loops, without regular expressions?</p>
<p>Write a function that accepts a list of file names and returns a dictionary with extensions as keys and their occurrences as value.</p>
<p>Example:</p>
<pre><code>#print(count_file_types(["image1.jpg", "image2.jpg", "preso.pptx"]))
#=> {"jpg": 2, "pptx": 1}
</code></pre>
<p>I tried this, but I guess it's all wrong:</p>
<pre><code>def count_file_types(string_arr):
    arr = []
    for i in string_arr:
        arr.append(i.split("."))
    return(arr)
    print(arr)

    freq = {}
    for i in arr:
        if i in freq:
            freq[i] += 1
        else:
            freq[item] = 1
    return freq
    print(freq)

print(count_file_types(['image1.jpg', 'image2.jpg', 'preso.pptx']))
</code></pre>
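<p>For reference, the shape I think the loop-only solution should take (splitting from the right so names containing extra dots still yield the extension):</p>

```python
def count_file_types(names):
    freq = {}
    for name in names:
        ext = name.rsplit(".", 1)[-1]  # text after the last dot
        freq[ext] = freq.get(ext, 0) + 1
    return freq
```

Is this the idiomatic way, or should I be using something like <code>collections.Counter</code> here?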
|
<python><python-3.x><list><dictionary>
|
2024-04-18 11:09:46
| 4
| 350
|
Evanto
|
78,346,937
| 6,278,424
|
getting json.decoder.JSONDecodeError at random
|
<p>I have implemented this function given by this answer: <a href="https://quant.stackexchange.com/a/70155/33457">https://quant.stackexchange.com/a/70155/33457</a></p>
<p>When I run this code, it sometimes works, but most of the time it returns this error:</p>
<pre><code>raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
</code></pre>
<p>The funny thing is that whether it succeeds or returns this error seems to be totally random. Why is that?</p>
<p>I have tried to figure out whether my <code>headers</code> are the problem, but I can't tell.</p>
<p>See my full code here:</p>
<pre><code>import requests

def get_symbol_for_isin(isin):
    url = 'https://query1.finance.yahoo.com/v1/finance/search'

    headers = {
        'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.109 Safari/537.36',
    }

    params = dict(
        q=isin,
        quotesCount=1,
        newsCount=0,
        listsCount=0,
        quotesQueryId='tss_match_phrase_query'
    )

    resp = requests.get(url=url, headers=headers, params=params)
    data = resp.json()

    if 'quotes' in data and len(data['quotes']) > 0:
        return data['quotes'][0]['symbol']
    else:
        return None

apple_isin = 'US0378331005'
print(get_symbol_for_isin(apple_isin))
</code></pre>
<p>The return should be 'AAPL'</p>
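<p>To at least see what Yahoo returns when decoding fails (my guess is intermittent rate limiting or an HTML error page), I've wrapped the parse like this:</p>

```python
import json

def safe_json(text, status_code):
    """Parse a response body, surfacing the raw payload when it isn't JSON."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise RuntimeError(
            f"Non-JSON response (HTTP {status_code}): {text[:200]!r}"
        ) from exc

# in get_symbol_for_isin I would then call:
# data = safe_json(resp.text, resp.status_code)
```

This at least tells me the status code and the first bytes of the body when the random failures happen.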
|
<python><request><yahoo-finance>
|
2024-04-18 11:05:31
| 1
| 530
|
k.dkhk
|
78,346,892
| 7,267,640
|
gRPC client request streaming: - "The client reset the request stream."
|
<p>I am trying to implement request streaming from my Python client to my C# server using gRPC.
This is my proto file:</p>
<pre class="lang-protobuf prettyprint-override"><code>syntax = "proto3";

service PerceiveAPIDataService {
    rpc UploadResource (stream UploadResourceRequest) returns (UploadResourceResponse);
}

message UploadResourceRequest {
    oneof request_data {
        ResourceChunk resource_chunk = 1;
        UploadResourceParameters parameters = 2;
    }
}

message ResourceChunk {
    bytes content = 1;
}

message UploadResourceParameters {
    string path = 1;
}
</code></pre>
<p>This is my c# implementation:</p>
<pre class="lang-cs prettyprint-override"><code>public override async Task<UploadResourceResponse> UploadResource(IAsyncStreamReader<UploadResourceRequest> requestStream, ServerCallContext context)
{
    if (!await requestStream.MoveNext())
    {
        throw new RpcException(new Status(StatusCode.FailedPrecondition, "No upload parameters found."));
    }

    var initialMessage = requestStream.Current;
    if (initialMessage.RequestDataCase != UploadResourceRequest.RequestDataOneofCase.Parameters)
    {
        throw new RpcException(new Status(StatusCode.FailedPrecondition, "First message must contain upload parameters."));
    }

    var path = initialMessage.Parameters.Path;
    if (string.IsNullOrWhiteSpace(path))
    {
        throw new RpcException(new Status(StatusCode.InvalidArgument, "Upload path is required."));
    }

    using (var ms = new MemoryStream())
    {
        while (await requestStream.MoveNext())
        {
            var chunk = requestStream.Current.ResourceChunk;
            if (chunk == null)
            {
                continue; // Skip any messages that are not resource chunks
            }
            await ms.WriteAsync(chunk.Content.ToByteArray().AsMemory(0, chunk.Content.Length));
        }

        ms.Seek(0, SeekOrigin.Begin); // Reset memory stream position to the beginning for reading during upload
        var uploadResult = await _dataService.UploadResourceAsync(path, ms);
        return new UploadResourceResponse { Succeeded = uploadResult.IsSuccessful };
    }
}
</code></pre>
<p>And this is my python client code:</p>
<pre class="lang-py prettyprint-override"><code>def generate_request(self, data: bytearray, next_cloud_path: str) -> Generator:
    first_req = perceive_api_data_service_pb2.UploadResourceRequest(
        parameters=perceive_api_data_service_pb2.UploadResourceParameters(path=next_cloud_path)
    )
    yield first_req
    print("Sent initial request with path:", next_cloud_path)

    chunk_size = 2048
    total_chunks = (len(data) + chunk_size - 1) // chunk_size  # Ceiling division to get total number of chunks
    print(f"Data size: {len(data)} bytes, chunk size: {chunk_size} bytes, total chunks: {total_chunks}")
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i+chunk_size]
        yield perceive_api_data_service_pb2.UploadResourceRequest(
            resource_chunk=perceive_api_data_service_pb2.ResourceChunk(content=chunk)
        )
        print(f"Sent chunk {((i // chunk_size) + 1)} of {total_chunks}")

async def upload_file(self, data: bytearray, next_cloud_path: str) -> bool:
    async with grpc.aio.insecure_channel("localhost:5228") as channel:
        stub = perceive_api_data_service_pb2_grpc.PerceiveAPIDataServiceStub(channel)
        request_iterator = self.generate_request(data, next_cloud_path)
        response = await stub.UploadResource(request_iterator)
        return response.succeeded
</code></pre>
<p>The error I get on the server is this:</p>
<blockquote>
<p>Grpc.AspNetCore.Server.ServerCallHandler: Error: Error when executing service method 'UploadResource'.</p>
<p>System.IO.IOException: The client reset the request stream.
at System.IO.Pipelines.Pipe.GetReadResult(ReadResult& result)
at System.IO.Pipelines.Pipe.GetReadAsyncResult()</p>
</blockquote>
<p>And the error I get on the client is this:</p>
<blockquote>
<p>File "C:\Users\user_name\AppData\Local\Programs\Python\Python311\Lib\site-packages\grpc\aio\_call.py", line 690, in _conduct_rpc
serialized_response = await self._cython_call.stream_unary(
File "src\python\grpcio\grpc\_cython\_cygrpc/aio/call.pyx.pxi", line 458, in stream_unary
File "src\python\grpcio\grpc\_cython\_cygrpc/aio/callback_common.pyx.pxi", line 166, in _receive_initial_metadata
File "src\python\grpcio\grpc\_cython\_cygrpc/aio/callback_common.pyx.pxi", line 99, in execute_batch
asyncio.exceptions.CancelledError</p>
</blockquote>
<p>The first message (the one carrying the parameters) is sent successfully. However, as soon as the server calls <code>requestStream.MoveNext()</code> again, it throws this error.</p>
<p>I have already tried numerous different solutions, but I cannot find anything that works. Does anyone see where I am making an error?</p>
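<p>For reference, the chunking arithmetic itself can be checked in isolation (this mirrors the generator above, minus the protobuf wrapping), which suggests the reset is not caused by how the chunks are produced:</p>

```python
def chunk_bytes(data: bytes, chunk_size: int = 2048):
    """Split `data` into consecutive chunks of at most `chunk_size` bytes."""
    # Ceiling division gives the total number of chunks up front,
    # matching the `total_chunks` computation in the client code
    total_chunks = (len(data) + chunk_size - 1) // chunk_size
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    assert len(chunks) == total_chunks
    return chunks
```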
|
<python><c#><grpc><grpc-python><grpc-c#>
|
2024-04-18 10:57:56
| 1
| 6,888
|
Luuk Wuijster
|
78,346,868
| 10,811,647
|
How to reduce my Tensorflow docker image?
|
<p>I have a Dash app running fine locally. The app uses tensorflow and ultralytics to detect some events on a graph using YOLOv8. I am trying to deploy this app to a server inside a Docker container. The first image I built was based on the <code>tensorflow:latest-gpu</code> Docker image. The resulting image size was 19.5 GB. Removing the -gpu tag helped reduce the size to 14 GB. Then I tried building from a Python image (3.11.0). The Python-based image was 13 GB.</p>
<p>How can I further reduce the size of my image? 13 GB is a lot considering that my Dash app folder containing the assets is about 6 MB.</p>
<p>Here are my Dockerfile:</p>
<pre><code>#Using python
FROM python:3.11.0
COPY requirements.txt ./requirements.txt
# install opencv dependencies and requirements
RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 -y && pip install -r requirements.txt --ignore-installed
#Copy files to container
COPY . ./
#Running APP and doing some PORT Forwarding
CMD gunicorn -b 0.0.0.0:1312 app:server
</code></pre>
<p>and requirements file:</p>
<pre><code>dash==2.16.1
dash-core-components==2.0.0
dash-daq==0.5.0
dash-html-components==2.0.0
dash-table==5.0.0
gunicorn==21.2.0
influxdb-client==1.41.0
keras==3.0.5
numpy==1.26.4
opencv-python==4.9.0.80
pandas==2.2.1
pillow==10.2.0
plotly==5.20.0
tensorflow==2.16.1
ultralytics==8.1.27
kaleido
</code></pre>
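<p>One direction I am considering is a <code>-slim</code> base plus the headless OpenCV wheel and the CPU-only TensorFlow build; a rough sketch (untested, the package swaps are assumptions on my side):</p>

```dockerfile
# Slim base: roughly 150 MB instead of ~900 MB for the full python image
FROM python:3.11-slim

COPY requirements.txt ./requirements.txt

# opencv-python-headless avoids the ffmpeg/libsm6/libxext6 GUI dependencies;
# --no-cache-dir keeps pip's download cache out of the image layer
RUN pip install --no-cache-dir -r requirements.txt

COPY . ./
CMD gunicorn -b 0.0.0.0:1312 app:server
```

<p>This assumes <code>requirements.txt</code> swaps <code>opencv-python</code> for <code>opencv-python-headless</code> and <code>tensorflow</code> for <code>tensorflow-cpu</code>, which keeps the CUDA libraries out of the image.</p>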
<p>Thanks for your help !</p>
|
<python><docker><tensorflow><plotly-dash>
|
2024-04-18 10:55:26
| 0
| 397
|
The Governor
|
78,346,757
| 5,931,672
|
Overriding multiprocessing.queues.Queue put method
|
<p>I want to implement a <code>multiprocessing.Queue</code> that does not add an element that already exists.
Using the standard-library <code>queue.Queue</code> I had no problem, following <a href="https://stackoverflow.com/a/16506527/5931672">this</a> answer. For multiprocessing I had some issues that I solved thanks to <a href="https://stackoverflow.com/questions/34292296/multiprocessing-queue-subclass-issue">this</a>.
To do that, I use the following:</p>
<pre><code>from multiprocessing.queues import Queue
from multiprocessing import get_context

class CustomQueue(Queue):
    def put(self, obj, block=True, timeout=None):
        if obj not in self:
            return super().put(obj, block, timeout)

    def __contains__(self, item):
        with self.mutex:
            return item in self.queue

custom_queue = CustomQueue(ctx=get_context())
</code></pre>
<p>However, when I call the put method I get <code>AttributeError: 'CustomQueue' object has no attribute 'mutex'</code></p>
<p>How can I solve this issue?
Thank you in advance.</p>
<hr />
<p>I read the code of <code>multiprocessing.queues.Queue</code>, and did my change to this:</p>
<pre><code>class CustomQueue(Queue):
    def put(self, obj, block=True, timeout=None):
        if self._closed:
            raise ValueError(f"Queue {self!r} is closed")
        if not self._sem.acquire(block, timeout):
            raise Full
        with self._notempty:
            if self._thread is None:
                self._start_thread()
            if obj not in self._buffer:
                self._buffer.append(obj)
            self._notempty.notify()
</code></pre>
<p>But it still does not work. <code>self._buffer</code> seems to be the queue (it is a <code>collections.deque</code> object), but <code>obj not in self._buffer</code> always returns <code>True</code>. Why is this happening?</p>
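<p>While debugging I sketched the behaviour I am after in a single process first: keep membership in a separate structure, because the queue's internal buffer is drained by a feeder thread and cannot be used for lookups. For real multiprocessing the set and lock would have to come from a <code>multiprocessing.Manager</code>; that part is untested:</p>

```python
import queue
import threading

class DedupQueue:
    """Queue that silently drops items that are already enqueued.

    Membership is tracked in a side `set`, not in the queue's internal
    buffer: with multiprocessing.Queue the buffer is emptied by a feeder
    thread almost immediately, so `obj in self._buffer` is unreliable.
    """
    def __init__(self):
        self._q = queue.Queue()
        self._seen = set()
        self._lock = threading.Lock()

    def put(self, obj):
        with self._lock:
            if obj in self._seen:
                return  # duplicate: drop it
            self._seen.add(obj)
        self._q.put(obj)

    def get(self, *args, **kwargs):
        obj = self._q.get(*args, **kwargs)
        with self._lock:
            self._seen.discard(obj)  # allow the item to be re-queued later
        return obj
```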
|
<python><multiprocessing><queue>
|
2024-04-18 10:36:02
| 2
| 4,192
|
J Agustin Barrachina
|
78,346,628
| 10,595,871
|
Scrape contents of Network section of an element in a page
|
<p>I need to scrape a page; the website is the following:
<a href="https://commercialisti.it/iscritti" rel="nofollow noreferrer">https://commercialisti.it/iscritti</a>
It's only in Italian, but it is a list of professionals that I am able to search by postal code ("Cap").</p>
<p>For example, by filling in the Cap field with the value 37138 and then pressing "CERCA", it displays a list of professionals with a few data points. I found that if I inspect the page, go to Network and then to the element <code>LstIscritti?_=1713434471262</code>, there is a JSON with all the data that I need.
The problem is that I don't understand how to scrape this section of the website.</p>
<p>I tried with BeautifulSoup, but I'm only able to scrape the HTML of the main page.</p>
<p>My code so far:</p>
<pre><code>from selenium import webdriver
import time
from selenium.webdriver.common.by import By
driver = webdriver.Chrome()
driver.maximize_window()
driver.get('https://commercialisti.it/iscritti')
driver.implicitly_wait(10)
driver.switch_to.frame(driver.find_element(By.XPATH, "(//iframe)[1]"))
casella_testo = driver.find_element("id", "Cap")
casella_testo.send_keys("37138")
pulsante_cerca = driver.find_element("id", 'btnContinua')
pulsante_cerca.click()
time.sleep(5)
res = driver.find_element("id", "listIscritti")
time.sleep(10)
</code></pre>
<p>The content of <code>res</code> is the scraped table displayed in the page after pressing "CERCA" button, but I need the details that are in the Network section</p>
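<p>To frame what I mean, this is roughly how I imagine fetching that JSON directly with <code>requests</code> instead of Selenium; the base URL and path below are inferred from my Network tab and may be wrong:</p>

```python
import time

# Everything below is a guess reconstructed from the browser's Network tab:
# the table data comes from an XHR named LstIscritti, and the trailing `_`
# query parameter looks like a cache-busting timestamp. The base URL and
# path are assumptions on my part and may need adjusting.
BASE = "https://visure.commercialisti.it"

def build_list_url(epoch_ms: int) -> str:
    return f"{BASE}/ricercalbo/LstIscritti?_={epoch_ms}"

if __name__ == "__main__":
    import requests
    with requests.Session() as s:
        # A session keeps the cookies set by submitting the search form
        # (Cap=37138) first, which the JSON endpoint presumably relies on
        print(s.get(build_list_url(int(time.time() * 1000))).text)
```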
|
<python><beautifulsoup>
|
2024-04-18 10:15:28
| 1
| 691
|
Federicofkt
|
78,346,215
| 7,219,400
|
Python Flask Sqlite is not creating a column in a specific line or name
|
<p>This is just mind-blowing.
I am trying to use an in-memory table for small data that can be fetched easily. I don't want to send a request every time, so I save it in an in-memory SQLite table.
Here is my <strong>__init__.py</strong> file:</p>
<pre><code>from flask import Flask
from .models import db
from apscheduler.schedulers.background import BackgroundScheduler  # Corrected import

class Config:
    SCHEDULER_API_ENABLED = True
    # SQLALCHEMY_DATABASE_URI = 'sqlite:///:memory:'
    SQLALCHEMY_DATABASE_URI = 'sqlite:///..//db.sqlite3'
    SQLALCHEMY_TRACK_MODIFICATIONS = False

def create_app():
    app = Flask(__name__)
    app.config.from_object(Config)

    db.init_app(app)
    with app.app_context():
        db.create_all()  # Create the database tables - IMPORTANT

    scheduler = BackgroundScheduler()  # Corrected class name
    scheduler.start()

    from .views import views
    from .auth import auth
    app.register_blueprint(views, url_prefix='/')
    app.register_blueprint(auth, url_prefix='/')

    return app
</code></pre>
<p>I wrote db.sqlite3 to be able to see it in VS Code, but it behaves the same with an in-memory table.</p>
<p>And here is my table:</p>
<pre><code>class Holidays(db.Model):
    __tablename__ = 'Holidays'
    data_id = db.Column(db.Integer, primary_key=True)
    # date format: "yyyy-MM-dd"
    date = db.Column(db.String)
    test = db.Column(db.String)
    test2 = db.Column(db.Float)
    test3 = db.Column(db.Integer)
    # sholday = db.Column(db.Integer)
    # sholday = db.Column(db.Integer)
    test4 = db.Column(db.Boolean)
    isholday = db.Column(db.Integer)
    sholday = db.Column(db.Integer)

    @classmethod
    def add(cls, holiday):
        db.session.add(holiday)
        db.session.commit()

    @classmethod
    def add_all(cls, holidays):
        db.session.add_all(holidays)
        db.session.commit()

    @classmethod
    def sholday(cls, date):
        holiday = Holidays.query.filter(Holidays.date == date).first()
        if holiday:
            return holiday.sholday
        return False

    @classmethod
    def does_date_exist(cls, date):
        return Holidays.query.filter(Holidays.date == date).first() is not None
</code></pre>
<p>Now, it creates every column except <code>sholday</code>. I initially named it <code>is_holiday</code>, but the column wasn't created under that name either. I copy-pasted this line somewhere else and changed the name, and it still does not create that column.</p>
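<p>Two things I checked afterwards with plain <code>sqlite3</code> (stdlib, no Flask involved). First, in the model above the <code>sholday</code> classmethod has the same name as the <code>sholday</code> column, so the classmethod replaces the column attribute in the class body before SQLAlchemy ever sees it. Second, <code>db.create_all()</code> only issues the equivalent of <code>CREATE TABLE IF NOT EXISTS</code>, which never adds new columns to a table that already exists on disk; the snippet below demonstrates that second effect:</p>

```python
import sqlite3

# "CREATE TABLE IF NOT EXISTS" (which is effectively what db.create_all()
# does) never alters a table that already exists
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE IF NOT EXISTS Holidays (data_id INTEGER PRIMARY KEY, date TEXT)")
# Later, the model gains an `sholday` column and create_all runs again:
con.execute(
    "CREATE TABLE IF NOT EXISTS Holidays "
    "(data_id INTEGER PRIMARY KEY, date TEXT, sholday INTEGER)"
)
cols = [row[1] for row in con.execute("PRAGMA table_info(Holidays)")]
print(cols)  # sholday is still missing
```

<p>So deleting db.sqlite3 (or running a proper migration) is needed before a new column can show up.</p>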
|
<python><sqlite><flask><sqlalchemy>
|
2024-04-18 09:09:32
| 1
| 1,464
|
Sahin
|
78,346,156
| 7,295,936
|
pd dataframe applymap tries to modify col names
|
<p>Hello, I have a dataframe containing datetime information that I'd like to format, so I've used this command:</p>
<pre><code>df2[["month", "day", "hour", "min", "s"]] = df2[["month", "day", "hour", "min", "s"]].applymap(lambda x: f"{int(x):02d}")
</code></pre>
<p>However i get this error : <code>ValueError: invalid literal for int() with base 10: 'month'</code></p>
<p>So my guess is that the applymap function is trying to apply the format to the column names. How would you solve this problem?</p>
<p>Here is a sample of the data:
<a href="https://i.sstatic.net/1XTVQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1XTVQ.png" alt="enter image description here" /></a></p>
<p>I'd like the month values to be '01' instead of '1'.</p>
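<p><strong>Edit:</strong> a note on my own hypothesis: <code>applymap</code> only ever visits cell values, never the column labels, so the <code>'month'</code> in the traceback must literally be sitting inside a cell (for example a header row that got read in as data). A tiny reproduction with toy data (not my real frame) formats fine when every cell is numeric:</p>

```python
import pandas as pd

df = pd.DataFrame({"month": [1, 4, 12], "day": [3, 18, 7]})

fmt = lambda x: f"{int(x):02d}"
cells = df[["month", "day"]]
# applymap was renamed to DataFrame.map in pandas 2.1, so pick whichever exists
apply_cells = cells.applymap if hasattr(cells, "applymap") else cells.map
formatted = apply_cells(fmt)
print(formatted["month"].tolist())
```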
|
<python><python-3.x><pandas><dataframe><lambda>
|
2024-04-18 08:59:57
| 1
| 1,560
|
FrozzenFinger
|
78,346,024
| 2,803,777
|
How to fill an nd array with values from a 1d-array?
|
<p>The following is a real-world problem in <code>numPy</code> reduced to the essentials, just with smaller dimensions.</p>
<p>Let's say I want to create an n-dimensional array <code>all</code> with dimensions (10, 10, 100):</p>
<pre><code>all = np.empty((10, 10, 100))
</code></pre>
<p>I also have a 1d array <code>data</code>, simulated here as</p>
<pre><code>data = np.arange(0, 100)
</code></pre>
<p>for all i, j I now want to achieve that</p>
<pre><code>all[i,j]=data
</code></pre>
<p>So I do:</p>
<pre><code>all[:, :]=data
</code></pre>
<p>Of course that works.</p>
<p>But now I want to import <code>data</code> to <code>all2</code> with shape (100, 10, 10). I could do that with</p>
<pre><code>all2 = np.empty((100, 10, 10))  # new target to be populated
for i in range(100):
    for j in range(10):
        for k in range(10):
            all2[i, j, k] = data[i]
</code></pre>
<p>But is there an easier way to do this without looping? I would be surprised if it couldn't be done more elegantly, but I don't see how.</p>
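<p><strong>Edit:</strong> after more reading I am experimenting with a broadcasting approach along these lines (reshaping <code>data</code> so the length-100 axis comes first); is this the idiomatic way?</p>

```python
import numpy as np

data = np.arange(0, 100)

# Reshape data to (100, 1, 1); broadcasting then stretches it across the
# two trailing axes, exactly like the triple loop
all2 = np.empty((100, 10, 10))
all2[:] = data[:, None, None]

# Alternatively, a read-only broadcast view (no copy at all):
all2_view = np.broadcast_to(data[:, None, None], (100, 10, 10))
```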
|
<python><numpy><numpy-ndarray><numpy-slicing>
|
2024-04-18 08:35:57
| 1
| 1,502
|
MichaelW
|
78,345,829
| 11,586,490
|
Making text line up vertically when sharing to WhatsApp
|
<p>I've created a simple scorecard app, that sums users scores while they're playing games (card games, golf etc). I've added in the ability to share the result of their game to WhatsApp and I'd like it to appear a bit like a table, with the player name and then the player score, each player on a new line.</p>
<p>I'm trying to make the scores line up vertically, which is challenging given the player names will differ in length. I did this successfully in my IDE on my laptop by working out the length of the longest name and adding the correct amount of whitespace to the shorter names. This prints out to the terminal correctly, like so:</p>
<p><a href="https://i.sstatic.net/KTa4P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KTa4P.png" alt="enter image description here" /></a></p>
<p>However, when I share to WhatsApp from my app on my Android phone, the text no longer lines up. I understand this is due to the font used by WhatsApp, where an "i" takes up less space than a "w", while I need it to be monospaced.</p>
<p>Here's how it currently appears on my phone:</p>
<p><a href="https://i.sstatic.net/my4BE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/my4BE.png" alt="enter image description here" /></a></p>
<p>Any ideas on how I could get it to line up neatly on WhatsApp?</p>
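<p>One idea I am testing: keep the padding, but wrap the whole block in triple backticks, which WhatsApp renders in a monospace font (the helper below is a sketch of that):</p>

```python
def format_scores(scores: dict[str, int]) -> str:
    """Pad names to equal width and wrap in ``` so WhatsApp uses monospace."""
    width = max(len(name) for name in scores)
    lines = [f"{name.ljust(width)}  {score}" for name, score in scores.items()]
    # WhatsApp renders text wrapped in triple backticks in a monospace font,
    # so the padded columns line up on the phone too
    return "```\n" + "\n".join(lines) + "\n```"
```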
|
<python><android><whatsapp>
|
2024-04-18 08:06:33
| 1
| 351
|
Callum
|
78,345,753
| 13,200,217
|
mypy checking pyi in venv despite excluding it
|
<p>I have a PySide project set up using a pyproject.toml file, with venv+pip.</p>
<p>I have set up mypy in the pyproject.toml file as follows:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.mypy]
disable_error_code = ["import-untyped"]
exclude = ["^.venv/", "^myproject/somefolder/"]
[[tool.mypy.overrides]]
module = "PySide6.*"
ignore_errors = true
</code></pre>
<p>However when running <code>mypy .</code> I get the following error:</p>
<pre><code>venv\Lib\site-packages\PySide6\QtGui.pyi:1094: error: unexpected indent [syntax]
Found 1 error in 1 file (errors prevented further checking)
</code></pre>
<p>This is caused by this issue in PySide: <a href="https://bugreports.qt.io/browse/PYSIDE-2665" rel="nofollow noreferrer">https://bugreports.qt.io/browse/PYSIDE-2665</a> and I'd like to ignore it.</p>
<p>What I've tried:</p>
<ul>
<li>Check if <code>somefolder</code> is actually being ignored. It is, as when I remove the regex from the exclude, more errors show up.</li>
<li>Change the module to <code>"PySide6"</code>. Still getting the same error.</li>
<li>Remove the <code>[[tool.mypy.overrides]]</code> and what's under it. Still getting the same error.</li>
<li>Using strings instead of regexes: <code>exclude = ["venv","myproject/somefolder"]</code>. Still the same.</li>
</ul>
<p>The mypy documentation only mentions how to ignore individual files (<a href="https://mypy.readthedocs.io/en/stable/config_file.html#confval-exclude" rel="nofollow noreferrer">link</a>).</p>
<p>So how would I go about fixing this?</p>
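<p>One detail I only noticed while writing this up: the error path starts with <code>venv\</code>, while my exclude pattern is <code>^.venv/</code>. So a config along these lines might be what is needed (the <code>follow_imports</code> override is an additional guess on my side):</p>

```ini
[tool.mypy]
disable_error_code = ["import-untyped"]
# match the actual directory name; mypy matches paths with forward
# slashes even on Windows
exclude = ["^venv/", "^myproject/somefolder/"]

[[tool.mypy.overrides]]
module = "PySide6.*"
ignore_errors = true
# don't analyze the PySide6 stub files at all
follow_imports = "skip"
```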
|
<python><mypy><pyproject.toml>
|
2024-04-18 07:54:14
| 0
| 353
|
Andrei Miculiță
|
78,345,731
| 2,739,700
|
Azure alerting for KQL query using Python
|
<p>I could not create an alert using Python code; creating it manually worked.</p>
<p>Below is the code:</p>
<pre><code>from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.v2018_04_16.models import LogSearchRuleResource, Source, Schedule, Action
# Define the KQL query
kql_query = """
ConfigurationData
| where Computer contains "test_machine"
| where SvcName contains "test-service"
| where SvcState != "Running"
"""
# Azure subscription ID
subscription_id = '5xxxxxxxxxxxx'
# Resource group
resource_group_name = 'rg-name'
uri = "/subscriptions/xxxxxxxxx/resourceGroups/rg-anme/providers/Microsoft.Compute/virtualMachines/test-machine"
# Define parameters
scheduledqueryrules_custom_query_name = 'custom_query'
# Authenticate to Azure
credential = DefaultAzureCredential()
# Initialize Resource Management Client
resource_client = ResourceManagementClient(credential, subscription_id)
actions = Action(
odata_type="LogToMetricAction"
)
# Initialize Monitor Management Client
monitor_client = MonitorManagementClient(credential, subscription_id)
source = Source(query=kql_query, data_source_id=uri)
schedule = Schedule(frequency_in_minutes=5, time_window_in_minutes=15)
log_search = LogSearchRuleResource(location="northcentralus", source=source, action=actions)
rule_name = scheduledqueryrules_custom_query_name
rule_result = monitor_client.scheduled_query_rules.create_or_update(resource_group_name=resource_group_name, parameters=log_search, rule_name="ddfed")
print("Rule created successfully:", rule_result)
</code></pre>
<p>Error:</p>
<pre><code>File "/usr/local/lib/python3.11/site-packages/azure/mgmt/monitor/v2018_04_16/operations/_scheduled_query_rules_operations.py", line 386, in create_or_update
raise HttpResponseError(response=response, model=error, error_format=ARMErrorFormat)
azure.core.exceptions.HttpResponseError: (BadRequest) Invalid value for properties.action.odata.type Activity ID: 49321a7c-b696-4042-aa5c-a109997224e4.
Code: BadRequest
Message: Invalid value for properties.action.odata.type Activity ID: 49321a7c-b696-4042-aa5c-a1sddfrre4.
</code></pre>
<p>Below is the Microsoft Azure docs for classes:</p>
<p><a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2018_04_16.models.logsearchruleresource?view=azure-python" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/python/api/azure-mgmt-monitor/azure.mgmt.monitor.v2018_04_16.models.logsearchruleresource?view=azure-python</a></p>
<p>Not sure what went wrong; any help would be greatly appreciated.</p>
<p>Python Version: 3.11
Packages:</p>
<pre><code>azure-common==1.1.28
azure-core==1.30.1
azure-identity==1.16.0
azure-mgmt-core==1.4.0
azure-mgmt-monitor==6.0.2
azure-mgmt-resource==23.0.1
azure-monitor-query==1.3.0
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
cryptography==42.0.5
idna==3.7
isodate==0.6.1
msal==1.28.0
msal-extensions==1.1.0
packaging==24.0
portalocker==2.8.2
pycparser==2.22
PyJWT==2.8.0
requests==2.31.0
six==1.16.0
typing_extensions==4.11.0
urllib3==2.2.1
</code></pre>
|
<python><azure><azure-media-services><azure-monitoring><azure-alerts>
|
2024-04-18 07:48:33
| 1
| 404
|
GoneCase123
|
78,345,554
| 3,793,935
|
Python file.save saves empty file
|
<p>I retrieve files from a frontend upload and compute a hash over the file this way:</p>
<pre><code>blob_file = convert_to_blob(file)
</code></pre>
<p>with this function:</p>
<pre><code>def convert_to_blob(file: file_storage.FileStorage) -> bytes:
    os_path = os.path.join(config('UPLOAD_CONVERT'), "convert.pdf")
    file.save(os_path)
    # Convert digital data to binary format
    with open(os_path, 'rb') as file:
        blobData = file.read()
    return blobData
</code></pre>
<p>This works fine and the file is saved as expected.
After that I save the file a second time, but with the hash as its name, in the folder intended for the file:</p>
<pre><code>if file and allowed_file(file.filename):
    # first, convert the file as blob, so we can build a hash over the blob
    blob_file = convert_to_blob(file)
    hash_ = hashlib.md5(blob_file).hexdigest()
    # save file local
    file_path = f"{config('UPLOAD_FOLDER')}/{request.form['mandant']}"
    if not os.path.exists(file_path):
        os.makedirs(file_path)
    os_path = os.path.join(f"{file_path}/{hash_}.pdf")
    file.save(os_path)
</code></pre>
<p>But for some reason, the second time around the file won't save properly.
It's always empty.
I've tried saving the file to different folders and without the hash-building step, but no luck.</p>
<p>Can someone explain what's happening here?</p>
<p><strong>Edit:</strong>
file type -> <class 'werkzeug.datastructures.file_storage.FileStorage'></p>
<p><strong>Edit2:</strong>
Okay, it is because of <code>file.save</code>; it seems like the underlying stream is consumed (or closed) afterwards.
Is there a way to avoid that without reopening the file?</p>
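<p><strong>Edit3:</strong> what I am experimenting with now: read the upload into memory exactly once and write the bytes out myself, so nothing depends on the stream position afterwards (calling <code>file.stream.seek(0)</code> before the second <code>save</code> should work too):</p>

```python
import hashlib
import os

def save_upload_once(raw: bytes, upload_dir: str) -> str:
    """Write `raw` under a name derived from its MD5 hash; return the path.

    `raw` would come from a single `file.read()` on the FileStorage object;
    reading once sidesteps the exhausted-stream problem entirely.
    """
    hash_ = hashlib.md5(raw).hexdigest()
    os.makedirs(upload_dir, exist_ok=True)
    path = os.path.join(upload_dir, f"{hash_}.pdf")
    with open(path, "wb") as out:
        out.write(raw)
    return path
```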
|
<python><file>
|
2024-04-18 07:18:16
| 2
| 499
|
user3793935
|
78,345,428
| 108,390
|
How to avoid Mypy Incompatible type warnings in Chained when/then assignments?
|
<p>I have the following code</p>
<pre><code>expr = pl.when(False).then(None)
for pattern, replacement in replacement_rules.items():
    expr = expr.when(pl.col("data").str.contains(pattern))
    expr = expr.then(pl.lit(replacement))
expr = expr.when(pl.col("ISO_codes").str.len_chars() > 0)
expr = expr.then(
    pl.col("ISO_codes")
    .replace(iso_translation, default="Unknown ISO Code")
)
</code></pre>
<p>The code works as intended, but Mypy is not too happy about it:
<a href="https://i.sstatic.net/dGR7O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dGR7O.png" alt="enter image description here" /></a></p>
<p>I cannot understand how to get rid of the warnings without disabling all "Incompatible type" warnings, or how to rewrite the code to make them go away.</p>
|
<python><mypy><python-typing><python-polars>
|
2024-04-18 06:55:37
| 1
| 1,393
|
Fontanka16
|
78,345,364
| 51,816
|
How to draw waveform as curve using matplotlib?
|
<p>I wrote this code:</p>
<pre><code>def plotWaveforms(audioFile1, audioFile2, imageFile, startSegment=30, endSegment=35, amp1=0.5, amp2=0.5):
    # Load audio files
    y1, sr1 = librosa.load(audioFile1, sr=None, offset=startSegment, duration=endSegment - startSegment)
    y2, sr2 = librosa.load(audioFile2, sr=None, offset=startSegment, duration=endSegment - startSegment)

    # Normalize and adjust the amplitude of the audio signals
    y1 = normalize_audio(y1, amp1)
    y2 = normalize_audio(y2, amp2)

    # Create a figure with a black background
    plt.figure(figsize=(16, 1), facecolor='black')

    # Plot the second audio file as a filled waveform
    plt.fill_between(np.linspace(0, len(y2) / sr2, len(y2)), y2, color='green', alpha=1)

    # Plot the first audio file as a filled waveform
    plt.fill_between(np.linspace(0, len(y1) / sr1, len(y1)), y1, color='blue', alpha=0.5)

    # Remove axes, labels, and title for a clean look
    plt.axis('off')

    # Save the figure with a specific resolution
    plt.savefig(imageFile, format='png', dpi=300, bbox_inches='tight', pad_inches=0)
    plt.close()
</code></pre>
<p>which produces this:</p>
<p><a href="https://i.sstatic.net/Ko07L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ko07L.png" alt="enter image description here" /></a></p>
<p>But I am trying to draw a filled curve through the peaks, sampled at each point or at regular intervals, so that it looks like this:</p>
<p><a href="https://i.sstatic.net/QCerB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QCerB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/djaKh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/djaKh.png" alt="enter image description here" /></a></p>
<p>How can I do this?</p>
|
<python><matplotlib><audio><visualization><waveform>
|
2024-04-18 06:45:08
| 1
| 333,709
|
Joan Venge
|
78,345,271
| 8,510,149
|
Pandas shift operation with condition
|
<p>Below I have a small dataset with 3 columns: ID, tag and value. Tag represents the source of the information that the 'value' feature is based on.</p>
<p>I want to create a lag feature for 'value'. Below I do that in a simple way. However, for index 4 and 5 we can see that 'tag' has the same value. This situation I do not want.</p>
<p>I wish to take 'tag' into consideration: I want to perform the shift only when the 'tag' value is not the same.</p>
<p>What would be a good method to achieve this?</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'ID':[1,1,1,2,2, 2,2,3,3,3],
'tag':[10, 11, 15, 11, 12, 12, 13, 16, 17, 18],
'value':[21, 19, 22, 41, 43, 43, 38, 9, 12, 16]})
df['value_lag'] = df.sort_values(by=['ID', 'tag']).groupby('ID')['value'].shift(1)
print(df)
</code></pre>
<pre><code> ID tag value value_lag
0 1 10 21 NaN
1 1 11 19 21.0
2 1 15 22 19.0
3 2 11 41 NaN
4 2 12 43 41.0
5 2 12 43 43.0
6 2 13 38 43.0
7 3 16 9 NaN
8 3 17 12 9.0
9 3 18 16 12.0
</code></pre>
<p>Desired output would be:</p>
<pre><code> ID tag value value_lag
0 1 10 21 NaN
1 1 11 19 21.0
2 1 15 22 19.0
3 2 11 41 NaN
4 2 12 43 41.0
5 2 12 43 41.0 -Here, should not be 43
6 2 13 38 43.0
7 3 16 9 NaN
8 3 17 12 9.0
9 3 18 16 12.0
</code></pre>
|
<python><pandas>
|
2024-04-18 06:27:24
| 1
| 1,255
|
Henri
|
78,345,055
| 4,987,648
|
Static type checking for union type and pattern matching
|
<p>In functional languages like OCaml/Haskell/... I can type something like:</p>
<pre class="lang-ocaml prettyprint-override"><code>type expr =
| Nb of float
| Add of expr * expr
| Soust of expr * expr
| Mult of expr * expr
| Div of expr * expr
| Opp of expr
let rec eval x = match x with
| Nb n -> n
| Add (e1, e2) -> (eval e1) +. (eval e2)
| Soust (e1, e2) -> (eval e1) -. (eval e2)
| Mult (e1, e2) -> (eval e1) *. (eval e2)
| Div (e1, e2) -> (eval e1) /. (eval e2)
| Opp n -> -. (eval n)
</code></pre>
<p>And once my code compiles, I will be guaranteed that for any <code>x</code> of type <code>expr</code>, <code>eval x</code> will always produce an output of type <code>float</code>. This notably implies that my pattern matching was not forgetting any cases, so if I later add a new item to the <code>expr</code> type, it will fail to compile until I add this new case to the pattern matching.</p>
<p>Sadly I can't find anything in Python that would provide such a strong guarantee, including with Python 3.10's typing system... Am I missing something?</p>
|
<python><pattern-matching><python-typing>
|
2024-04-18 05:30:55
| 1
| 2,584
|
tobiasBora
|
78,344,781
| 9,951,273
|
How can I infer return type for object based on parameter?
|
<p>Let's say we have a function</p>
<pre><code>def get_attr_wrapper(obj: object, attr: str) -> ???:
    return getattr(obj, attr)
</code></pre>
<p>How can I infer the return type of <code>get_attr_wrapper</code> based on the parameters given?</p>
<p>Maybe with a generic somehow?</p>
<p>For example, if I passed in</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Foo:
    bar: str

foo = Foo(bar="baz")
rv = get_attr_wrapper(foo, "bar")
</code></pre>
<p>In our desired scenario, <code>rv</code> would be inferred by Python's type checker as being of type <code>str</code>.</p>
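<p>The workaround I am currently using, which sidesteps the question rather than answering it: take a typed accessor instead of a string, so the checker infers the result type from the lambda (<code>get_attr_typed</code> is my own name for it):</p>

```python
from dataclasses import dataclass
from typing import Callable, TypeVar

T = TypeVar("T")
R = TypeVar("R")

def get_attr_typed(obj: T, getter: Callable[[T], R]) -> R:
    # The checker infers R from the lambda's return type, so the result of
    # get_attr_typed(foo, lambda f: f.bar) is typed as str
    return getter(obj)

@dataclass
class Foo:
    bar: str
```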
|
<python><python-typing>
|
2024-04-18 03:55:38
| 1
| 1,777
|
Matt
|
78,344,729
| 1,601,580
|
How do I have multiple src directories at the root of my python project with a setup.py and pip install -e?
|
<p>I want to have two src dirs at the root of my project. The reason is that one is code I want to work without modifying any of the imports. The second is new code independent of the "old code". I want two src directories, and <code>pip install -e .</code> to work. My <code>setup.py</code> is:</p>
<pre class="lang-py prettyprint-override"><code>"""
python -c "print()"
refs:
- setup tools: https://setuptools.pypa.io/en/latest/userguide/package_discovery.html#using-find-or-find-packages
- https://stackoverflow.com/questions/70295885/how-does-one-install-pytorch-and-related-tools-from-within-the-setup-py-install
"""
from setuptools import setup
from setuptools import find_packages
import os
here = os.path.abspath(os.path.dirname(__file__))
with open(os.path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='massive-evaporate-4-math', # project name
version='0.0.1',
long_description=long_description,
long_description_content_type="text/markdown",
author='Me',
author_email='me@gmail.com',
python_requires='>=3.9',
license='Apache 2.0',
# ref: https://chat.openai.com/c/d0edae00-0eb2-4837-b492-df1d595b6cab
# The `package_dir` parameter is a dictionary that maps package names to directories.
# A key of an empty string represents the root package, and its corresponding value
# is the directory containing the root package. Here, the root package is set to the
# 'src' directory.
#
# The use of an empty string `''` as a key is significant. In the context of setuptools,
# an empty string `''` denotes the root package of the project. It means that the
# packages and modules located in the specified directory ('src' in this case) are
# considered to be in the root of the package hierarchy. This is crucial for correctly
# resolving package and module imports when the project is installed.
#
# By specifying `{'': 'src'}`, we are informing setuptools that the 'src' directory is
# the location of the root package, and it should look in this directory to find the
# Python packages and modules to be included in the distribution.
package_dir={
'': 'src_math_evaporate',
'bm_evaporate': 'src_bm_evaporate',
},
# The `packages` parameter lists all Python packages that should be included in the
# distribution. A Python package is a way of organizing related Python modules into a
# directory hierarchy. Any directory containing an __init__.py file is considered a
# Python package.
#
# `find_packages('src')` is a convenience function provided by setuptools, which
# automatically discovers and lists all packages in the specified 'src' directory.
# This means it will include all directories in 'src' that contain an __init__.py file,
# treating them as Python packages to be included in the distribution.
#
# By using `find_packages('src')`, we ensure that all valid Python packages inside the
# 'src' directory, regardless of their depth in the directory hierarchy, are included
# in the distribution, eliminating the need to manually list them. This is particularly
# useful for projects with a large number of packages and subpackages, as it reduces
# the risk of omitting packages from the distribution.
packages=find_packages('src_math_evaporate') + find_packages('src_bm_evaporate'),
# When using `pip install -e .`, the package is installed in 'editable' or 'develop' mode.
# This means that changes to the source files immediately affect the installed package
# without requiring a reinstall. This is extremely useful during development as it allows
# for testing and iteration without the constant need for reinstallation.
#
# In 'editable' mode, the correct resolution of package and module locations is crucial.
# The `package_dir` and `packages` configurations play a vital role in this. If the
# `package_dir` is incorrectly set, or if a package is omitted from the `packages` list,
# it can lead to ImportError due to Python not being able to locate the packages and
# modules correctly.
#
# Therefore, when using `pip install -e .`, it is essential to ensure that `package_dir`
# correctly maps to the root of the package hierarchy and that `packages` includes all
# the necessary packages by using `find_packages`, especially when the project has a
# complex structure with nested packages. This ensures that the Python interpreter can
# correctly resolve imports and locate the source files, allowing for a smooth and
# efficient development workflow.
# for pytorch see doc string at the top of file
install_requires=[
'fire',
'dill',
'networkx>=2.5',
'scipy',
'scikit-learn',
'lark-parser',
'tensorboard',
'pandas',
'progressbar2',
'requests',
'aiohttp',
'numpy',
'plotly',
'wandb',
'matplotlib',
# 'statsmodels'
# 'statsmodels==0.12.2'
# 'statsmodels==0.13.5'
# - later check why we are not installing it...
# 'seaborn'
# 'nltk'
'twine',
# # mercury: https://github.com/vllm-project/vllm/issues/2747
# 'dspy-ai',
# # 'torch==2.1.2+cu118', # 2.2 not supported due to vllm, see: https://github.com/vllm-project/vllm/issues/2747
# 'torch==2.2.2', # 2.2 not supported due to vllm, see: https://github.com/vllm-project/vllm/issues/2747
# # 'torchvision',
# # 'torchaudio',
# # 'trl',
# 'transformers',
# 'accelerate',
# # 'peft',
# # 'datasets==2.18.0',
# 'datasets',
# 'evaluate',
# 'bitsandbytes',
# # 'einops',
# # 'vllm==0.4.0.post1', # my gold-ai-olympiad project uses 0.4.0.post1 ref: https://github.com/vllm-project/vllm/issues/2747
# ampere
'dspy-ai',
# 'torch==2.1.2+cu118', # 2.2 not supported due to vllm, see: https://github.com/vllm-project/vllm/issues/2747
'torch==2.1.2', # 2.2 not supported due to vllm, see: https://github.com/vllm-project/vllm/issues/2747
# 'torchvision',
# 'torchaudio',
# 'trl',
'transformers==4.39.2',
'accelerate==0.29.2',
# 'peft',
# 'datasets==2.18.0',
'datasets==2.14.7',
'evaluate==0.4.1',
'bitsandbytes==0.43.0',
# 'einops',
'vllm==0.4.0.post1', # my gold-ai-olympiad project uses 0.4.0.post1 ref: https://github.com/vllm-project/vllm/issues/2747
# pip install -q -U google-generativeai
"tqdm",
"openai",
"manifest-ml",
'beautifulsoup4',
# 'pandas',
'cvxpy',
# 'sklearn',  # the 'sklearn' PyPI package is deprecated; use 'scikit-learn' for pip commands
# 'scikit-learn',
'snorkel',
'snorkel-metal',
'tensorboardX',
'pyyaml',
'TexSoup',
]
)
</code></pre>
<p>and the errors I get in cli bash:</p>
<pre class="lang-bash prettyprint-override"><code>(math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ tree src_math_evaporate/
src_math_evaporate/
└── math_evaporate_llm_direct.py
0 directories, 1 file
(math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ tree src_bm_evaporate/
src_bm_evaporate/
├── configs.py
├── evaluate_profiler.py
├── evaluate_synthetic.py
├── evaluate_synthetic_utils.py
├── massive_evaporate_4_math.egg-info
│   ├── dependency_links.txt
│   ├── PKG-INFO
│   ├── requires.txt
│   ├── SOURCES.txt
│   └── top_level.txt
├── profiler.py
├── profiler_utils.py
├── prompts_math.py
├── prompts.py
├── __pycache__
│   ├── configs.cpython-39.pyc
│   ├── prompts.cpython-39.pyc
│   └── utils.cpython-39.pyc
├── run_profiler_maf.py
├── run_profiler_math_evaporate.py
├── run_profiler.py
├── run.sh
├── schema_identification.py
├── snap_cluster_setup.egg-info
│   ├── dependency_links.txt
│   ├── PKG-INFO
│   ├── requires.txt
│   ├── SOURCES.txt
│   └── top_level.txt
├── utils.py
└── weak_supervision
    ├── binary_deps.py
    ├── __init__.py
    ├── make_pgm.py
    ├── methods.py
    ├── pgm.py
    ├── run_ws.py
    └── ws_utils.py
4 directories, 34 files
(math_evaporate) brando9@skampere1~/massive-evaporation-4-math $ pip install -e .
Obtaining file:///afs/cs.stanford.edu/u/brando9/massive-evaporation-4-math
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
running egg_info
creating /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info
writing /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/PKG-INFO
writing dependency_links to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/dependency_links.txt
writing requirements to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/requires.txt
writing top-level names to /tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/top_level.txt
writing manifest file '/tmp/user/22003/pip-pip-egg-info-bqrbfkt8/massive_evaporate_4_math.egg-info/SOURCES.txt'
error: package directory 'src_math_evaporate/weak_supervision' does not exist
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
</code></pre>
<p>Everything looks right to me. Why is this error happening?</p>
<p>I tried:</p>
<pre class="lang-py prettyprint-override"><code> package_dir={
'': 'src_math_evaporate',
'bm_evaporate': 'src_bm_evaporate',
},
</code></pre>
<p>to</p>
<pre class="lang-py prettyprint-override"><code> package_dir={
'math_evaporate': 'src_math_evaporate',
'bm_evaporate': 'src_bm_evaporate',
},
</code></pre>
<p>That doesn't work either. I also tried mapping both as the root:</p>
<pre class="lang-py prettyprint-override"><code> package_dir={
'': 'src_math_evaporate',
'': 'src_bm_evaporate',
},
</code></pre>
<p>I don't know what else to try. What should I do?</p>
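<p>For reference, a runnable sketch (not a verified fix; the package names are hypothetical choices, the directory names come from the tree output above) of mapping two source directories to two distinct top-level package names. The key point is that <code>find_packages(where)</code> returns names relative to <code>where</code>, so when <code>package_dir</code> maps a non-empty prefix, the discovered names must be re-prefixed before being passed to <code>packages=</code>:</p>

```python
# Sketch only: each top-level import name gets its own source directory.
package_dir = {
    "math_evaporate": "src_math_evaporate",
    "bm_evaporate": "src_bm_evaporate",
}

# find_packages("src_bm_evaporate") would return names like "weak_supervision",
# relative to that directory, so they must be re-prefixed to resolve under
# the mapped package name:
def prefixed(prefix, found):
    return [prefix] + [f"{prefix}.{sub}" for sub in found]

# e.g. if src_bm_evaporate contains a weak_supervision package:
print(prefixed("bm_evaporate", ["weak_supervision"]))
# ['bm_evaporate', 'bm_evaporate.weak_supervision']
```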
|
<python><pip><setuptools><setup.py><python-packaging>
|
2024-04-18 03:34:08
| 2
| 6,126
|
Charlie Parker
|
78,344,695
| 20,898,396
|
Type checking for pipeline similar to Langchain LCEL
|
<p>I am trying to write a pipeline with types that will give errors if the steps are not compatible <code>step1() | step2()</code>.</p>
<pre><code>from typing import Any, Callable, Generic, TypeVar
I = TypeVar('I')
O = TypeVar('O')
R = TypeVar('R')
class Runnable(Generic[I, O]):
    def __init__(self, func: Callable[[I], O]) -> None:
        self.func = func

    # not sure how to make it work with multiple arguments
    # def __or__(self, other: Callable[[O], R]):
    #     def chained_func(*args: I, **kwargs):
    #         output = self.func(*args, **kwargs)
    #         return other(output)
    #     return Runnable(chained_func)

    def __or__(self, other: Callable[[O], R]):
        def chained_func(input: I):
            output = self.func(input)
            return other(output)
        # has type Unknown instead of I, hence why I specify [I, R] explicitly
        return Runnable[I, R](chained_func)

    def __call__(self, *args: Any, **kwargs: Any):
        return self.func(*args, **kwargs)
def add_five(x: int):
    return x + 5

def parse(x: str):
    return x.strip()
add_five = Runnable(add_five) # Runnable[int, int]
parse = Runnable(parse) # Runnable[str, str]
chain = add_five | parse
chain(3)
</code></pre>
<p>(code based on <a href="https://www.youtube.com/watch?v=O0dUOtOIrfs" rel="nofollow noreferrer">this</a> video)</p>
<p><code>add_five: Runnable[int, int]</code>, <code>parse: Runnable[str, str]</code> and <code>chain: Runnable[int, str]</code>, but there is no (type hint) error to indicate that the output of <code>add_five</code> is not compatible with the input of <code>parse</code>. Can this be achieved?</p>
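<p>One possible sketch (an illustration, not the only approach): if <code>__or__</code> declares its right-hand operand as another <code>Runnable[O, R]</code> rather than a bare <code>Callable</code>, mypy/pyright can unify <code>O</code> between the left operand's output and the right operand's input, and reject mismatched chains. The helper names below are invented for the example:</p>

```python
from typing import Callable, Generic, TypeVar

I = TypeVar("I")
O = TypeVar("O")
R = TypeVar("R")

class Runnable(Generic[I, O]):
    def __init__(self, func: Callable[[I], O]) -> None:
        self.func = func

    # Typing the operand as Runnable[O, R] lets the checker unify O.
    def __or__(self, other: "Runnable[O, R]") -> "Runnable[I, R]":
        return Runnable(lambda x: other.func(self.func(x)))

    def __call__(self, value: I) -> O:
        return self.func(value)

add_five: Runnable[int, int] = Runnable(lambda x: x + 5)
int_to_str: Runnable[int, str] = Runnable(str)
parse: Runnable[str, str] = Runnable(str.strip)

# chain = add_five | parse   # checker error: int output vs str input
chain = add_five | int_to_str | parse  # Runnable[int, str], accepted
print(chain(3))  # '8'
```

<p>The trade-off is that the right-hand side must already be wrapped in a <code>Runnable</code>; plain functions would need wrapping (or an <code>__ror__</code> overload) before chaining.</p>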
|
<python><langchain>
|
2024-04-18 03:22:04
| 0
| 927
|
BPDev
|
78,344,611
| 12,314,521
|
How to generate a random number depending on the length of a given string in Python
|
<p>I want to draw a random integer from a range where the probability correlates with the number of tokens in a string.</p>
<p>For example:</p>
<p>Given max possible number of tokens = 64. Random integer's range is from 0 to 7</p>
<p>Given a string has 46 tokens.</p>
<p>I want to use the function <code>random.choices([0,1,2,3,4,5,6,7], weights=[..], k=1)</code>
and set the <code>weights</code> to something like: <code>[0.1, 0.15, 0.2, 0.25, 0.3, 0.25, 0.2, 0.15]</code></p>
<p>The <code>weights</code> above are just an example; they need to be correlated with <code>len(tokens)</code> and <code>max_len_token=64</code>. Here 46 out of 64 should put more probability on 4 and 5, while still giving the other values a chance, decreasing by some reasonable ratio.</p>
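<p>One way to build such weights (a sketch; the decay function here is an arbitrary choice, and <code>length_weights</code> is an invented name) is to place the peak at the bucket proportional to <code>len(tokens) / max_len_token</code> and let the weight fall off with distance from that peak:</p>

```python
import random

def length_weights(n_tokens, max_tokens=64, n_buckets=8):
    # The peak position scales linearly with the token count.
    center = (n_tokens / max_tokens) * (n_buckets - 1)
    # Decay with distance from the peak; any decreasing function works.
    return [1.0 / (1.0 + abs(i - center)) for i in range(n_buckets)]

w = length_weights(46)  # 46 of 64 tokens -> peak near bucket 5
value = random.choices(range(8), weights=w, k=1)[0]
print(max(range(8), key=lambda i: w[i]))  # 5
```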
|
<python>
|
2024-04-18 02:55:11
| 2
| 351
|
jupyter
|
78,344,486
| 2,740,376
|
Permission denied errors using docker image glue_libs_4.0.0_image_01 for AWS Glue
|
<p>I'm trying to build a pipeline that is using glue_libs_4.0.0_image_01. A step in the pipeline is running the docker instance as follows:</p>
<pre><code> docker run \
--mount=type=bind,source=./test,target=/home/glue_user/workspace/test \
--mount=type=bind,source=./libs,target=/home/glue_user/workspace/libs \
-w /home/glue_user/workspace \
-e DISABLE_SSL=true \
-e "PYTHONPATH=$PYTHONPATH:/home/glue_user/workspace/deps" \
--rm -p 4040:4040 \
-p 18080:18080 \
--name glue_unit_tests docker-default-virtual.${{ vars.ARTIFACTORY_HOST }}/amazon/aws-glue-libs:glue_libs_4.0.0_image_01 \
-c "mkdir -p deps/ && pip install -r test/requirements.txt -r libs/requirements.txt -t deps/; cd test && pytest || exit 1"
</code></pre>
<p>I am getting multiple permission denied errors when trying to create the <code>deps/</code> directory inside <code>/home/glue_user/workspace</code>; <code>pip</code> also throws permission denied errors, along with <code>pytest</code> failing to write cache files inside the mounted paths:</p>
<pre><code>../deps/_pytest/cacheprovider.py:445 /home/glue_user/workspace/deps/_pytest/cacheprovider.py:445: PytestCacheWarning: could not create cache path /home/glue_user/workspace/test/.pytest_cache/v/cache/nodeids: [Errno 13] Permission denied: '/home/glue_user/workspace/test/.pytest_cache'
config.cache.set("cache/nodeids", sorted(self.cached_nodeids))
../deps/_pytest/stepwise.py:56 /home/glue_user/workspace/deps/_pytest/stepwise.py:56: PytestCacheWarning: could not create cache path /home/glue_user/workspace/test/.pytest_cache/v/cache/stepwise: [Errno 13] Permission denied: '/home/glue_user/workspace/test/.pytest_cache'
session.config.cache.set(STEPWISE_CACHE_DIR, [])
</code></pre>
|
<python><amazon-web-services><docker><aws-glue>
|
2024-04-18 02:02:29
| 1
| 319
|
Iulian
|
78,344,470
| 292,502
|
How to have a programmatic conversation with an agent created by Agent Builder
|
<p>I created an agent with the No Code tools offered by the Agent Builder GUI: <a href="https://vertexaiconversation.cloud.google.com/" rel="nofollow noreferrer">https://vertexaiconversation.cloud.google.com/</a>.
I created a playbook and added a few Data Store tools for the agent to use for RAG.
I'd like to call this agent programmatically to integrate it into mobile apps or web pages. There's a lot of code related to the classic Dialogflow agents, but the Agent Builder is quite new and uses Gemini 1.0 Pro under the hood.</p>
<p>I've seen this code <a href="https://stackoverflow.com/a/78229704/292502">https://stackoverflow.com/a/78229704/292502</a>, but that question was about Dialogflow ES, while the Agent Builder agent is rather a Dialogflow CX agent under the hood (and is listed in the Dialogflow CX dashboard). The Python package is promising, but I haven't found how to have a conversation with the agent's Playbook after I get hold of one.</p>
<p>Or maybe I'm just looking in the wrong place. I was also browsing <a href="https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/dialogflow-cx" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/python-docs-samples/tree/main/dialogflow-cx</a> but webhooks, intents, and fulfillments are for the "classic" agents. I tried to go over <a href="https://github.com/googleapis/google-cloud-python/blob/main/packages/google-cloud-dialogflow-cx/samples/generated_samples/" rel="nofollow noreferrer">https://github.com/googleapis/google-cloud-python/blob/main/packages/google-cloud-dialogflow-cx/samples/generated_samples/</a> but haven't found one that would help me yet.</p>
|
<python><google-cloud-platform><google-cloud-vertex-ai><dialogflow-cx><rag>
|
2024-04-18 01:56:24
| 2
| 10,879
|
Csaba Toth
|
78,344,353
| 2,488,207
|
Create new columns and assign their values from existing column's values
|
<p>I have a dataset downloaded from Kaggle for my project. I would like to create new columns and assign their values based on an existing column.</p>
<p>My actual Dataset is complicated, I will give a similar but simpler dataset for easy discussion.</p>
<p><strong>Input:</strong></p>
<pre><code>Month | Fruit | Weight
------- -------- --------
1-2020 | Orange | 0.2
1-2020 | Kiwi | 0.9
2-2020 | Orange | 2.1
2-2020 | Kiwi | 1.4
...... | ..... | ...
</code></pre>
<p>To be able to create a required line chart, I need to change this Dataset structure, making <code>Orange, Kiwi</code> new columns with <code>Weight</code> values, so that <code>Month</code> is not repeated.</p>
<p><strong>Desired output:</strong></p>
<pre><code>Month | Orange | Kiwi
------- -------- ------
1-2020 | 0.2 | 0.9
2-2020 | 2.1 | 1.4
</code></pre>
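<p>A sketch of this reshape with <code>DataFrame.pivot</code> (column names taken from the sample above; assumes each Month/Fruit pair occurs once, otherwise <code>pivot_table</code> with an aggregation would be needed):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Month": ["1-2020", "1-2020", "2-2020", "2-2020"],
    "Fruit": ["Orange", "Kiwi", "Orange", "Kiwi"],
    "Weight": [0.2, 0.9, 2.1, 1.4],
})

# One row per Month, one column per Fruit, Weight as the cell value.
wide = df.pivot(index="Month", columns="Fruit", values="Weight").reset_index()
wide.columns.name = None  # drop the leftover "Fruit" axis label
print(wide.to_dict("records"))
# [{'Month': '1-2020', 'Kiwi': 0.9, 'Orange': 0.2},
#  {'Month': '2-2020', 'Kiwi': 1.4, 'Orange': 2.1}]
```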
|
<python><dataframe>
|
2024-04-18 01:04:27
| 1
| 868
|
vyclarks
|
78,344,349
| 16,717,009
|
Can a list comprehension that builds a list of lists referring to itself be done in one line?
|
<p>I have a number of list comprehensions that build a variety of lists of lists. To keep this simple, consider:</p>
<pre><code>foo = []
for i in range(1,3): # dummy loop for the example
if len(foo) == 0:
foo = [[x] for x in (0, 1, -1)] # can I avoid this step?
else:
foo = [f + [x] for f in foo for x in (0, 1, -1)]
print(foo)
</code></pre>
<p>produces:</p>
<pre><code>[[0, 0], [0, 1], [0, -1], [1, 0], [1, 1], [1, -1], [-1, 0], [-1, 1], [-1, -1]]
</code></pre>
<p>I know there are other ways using itertools to produce this particular output; I'm simplifying here. The key is that the comprehension has to build on itself, therefore the <code>for f in foo</code>.</p>
<p>My specific question is: is there a way to avoid the <code>if, else</code> code and just do this in one line?
If I just do <code>foo = [f + [x] for f in foo for x in (0, 1, -1)]</code> without the case for <code>len(foo) == 0</code> I get an empty <code>foo</code>.</p>
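<p>One way to drop the <code>if/else</code> (a sketch of the idea, not the only option) is to seed the loop with a single empty list, so the first pass of the very same comprehension produces the singleton lists:</p>

```python
foo = [[]]  # one empty prefix to extend; keeps the comprehension uniform
for _ in range(2):
    foo = [f + [x] for f in foo for x in (0, 1, -1)]
print(foo)
# [[0, 0], [0, 1], [0, -1], [1, 0], [1, 1], [1, -1], [-1, 0], [-1, 1], [-1, -1]]
```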
|
<python>
|
2024-04-18 01:02:25
| 1
| 343
|
MikeP
|
78,344,145
| 8,876,025
|
buildx build --platform linux/amd64 significantly increases image size with poetry
|
<p>Packages installed by poetry significantly increase the image size when the image is built for amd64.</p>
<p>I'm building a Docker image on my host machine (macOS, M2 Pro), which I want to deploy to an EC2 instance. A normal build produces a 2GB image, which is good. But it results in a platform compatibility warning when deployed on EC2: <code>WARNING: The requested image's platform (linux/arm64/v8) does not match the detected host platform (linux/amd64/v3) and no specific platform was requested</code>. So I am trying a build with the <code>buildx</code> command. However, it results in a whopping 13GB image, even though all I changed was the build command. I'd like to know why, and how to reduce the size.</p>
<p>Here is the Dockerfile (<strong>Edited</strong>: tried multi-stage build based on <a href="https://stackoverflow.com/a/78344174/8876025">this answer</a>):</p>
<pre><code>FROM python:3.11-slim as builder
# Set environment variables to make Python and Poetry play nice
ENV POETRY_VERSION=1.7.1 \
PYTHONUNBUFFERED=1 \
PYTHONDONTWRITEBYTECODE=1
# for -slim version (it breaks if you don't comment out && apt-get clean)
RUN apt-get update && apt-get install -y \
gfortran \
libopenblas-dev \
liblapack-dev \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
## Install poetry
RUN pip install "poetry==$POETRY_VERSION"
## copy project requirement files here to ensure they will be cached.
WORKDIR /app
COPY pyproject.toml ./
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-dev --no-ansi --verbose \
&& poetry cache clear pypi --all
FROM python:3.11-slim
# Copy all of the python files built in the Builder container into this smaller container.
COPY --from=builder /app /app
COPY --from=builder /usr/local/lib/python3.11 /usr/local/lib/python3.11
EXPOSE 7070
CMD ["poetry", "run", "flask", "run", "--host=0.0.0.0"]
</code></pre>
<p>And this command will build a 2GB image.</p>
<pre><code>docker build -f ./docker/Dockerfile \
-t malicious-url-prediction-img:v1 .
</code></pre>
<p>And this will make a 13GB image</p>
<pre><code>docker buildx build --platform linux/amd64 -f ./docker/Dockerfile \
-t malicious-url-prediction-img:v1-amd64 .
</code></pre>
<p>The image size stays small if I remove the RUN command <code>poetry config virtualenvs.create...</code>, even if I build the image for amd64. So I assume that poetry is causing this problem. However, it is still weird to have such a big difference in size from just changing the target build platform.</p>
<p><strong>Edited</strong>: Based on two answers from <a href="https://stackoverflow.com/a/78349375/8876025">anthony sottile</a> and <a href="https://stackoverflow.com/a/78349652/8876025">Ghorban M. Tavakoly</a>, the cause might be the torch. I changed my pyproject.toml file like this:</p>
<pre><code>[tool.poetry]
name = "malicious-url"
version = "0.1.0"
description = ""
authors = ["Makoto1021 <makoto.miyazaki1021@gmail.com>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.11"
numpy = "^1.26.4"
tld = "^0.13"
fuzzywuzzy = "^0.18.0"
scikit-learn = "^1.4.1.post1"
pandas = "^2.2.1"
mlflow = {extras = ["pipelines"], version = "^2.11.3"}
xgboost = "^2.0.3"
python-dotenv = "^1.0.1"
imblearn = "^0.0"
flask = "^3.0.3"
torch = {url = "https://download.pytorch.org/whl/cpu-cxx11-abi/torch-2.2.2%2Bcpu.cxx11.abi-cp311-cp311-linux_x86_64.whl"}
googlesearch-python = "^1.2.3"
whois = "^1.20240129.2"
nltk = "^3.8.1"
[tool.poetry.group.dev.dependencies]
ipykernel = "^6.29.3"
tldextract = "^5.1.2"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>This resulted in an image of 10GB. Happy with the progress, but still quite big for my EC2 instance. Here's the result of running <code>du -h -d 1</code> inside the container via <code>/bin/bash</code>:</p>
<pre><code>4.0K ./mnt
1.9G ./usr
4.0K ./opt
4.0K ./boot
0 ./sys
6.8M ./var
4.0K ./media
4.0K ./tmp
1.4M ./etc
4.0K ./home
du: cannot access './proc/12/task/12/fd/7': No such file or directory
du: cannot access './proc/12/task/12/fdinfo/7': No such file or directory
du: cannot access './proc/12/fd/8': No such file or directory
du: cannot access './proc/12/fdinfo/8': No such file or directory
0 ./proc
8.0K ./run
24K ./root
0 ./dev
4.0K ./srv
216K ./utils
8.1G ./app
10G .
</code></pre>
<p>FYI, this is how I run the container.</p>
<pre><code>docker run --rm -p 7070:5000 -v $(pwd)/logs:/app/logs malicious-url-prediction-img:v1-amd64
</code></pre>
<p>EDITED 1:</p>
<ul>
<li>changed Dockerfile to minimal example</li>
<li>added myproject.toml to reproduce the build</li>
<li>added my investigation on poetry</li>
</ul>
<p>EDITED 2:</p>
<ul>
<li>updated Dockerfile with multi-stage build</li>
<li>updated .toml file and the result</li>
</ul>
|
<python><docker><python-poetry>
|
2024-04-17 23:30:34
| 4
| 2,033
|
Makoto Miyazaki
|
78,344,061
| 4,766
|
How do I install avdec_h264 for use with GStreamer in Python on macOS?
|
<p>I answered my own question <a href="https://stackoverflow.com/q/78281985/4766">How do install gst-python on macOS to work with the recommended GStreamer installers?</a> by using <a href="https://stackoverflow.com/a/78295888/4766">miniconda</a>.</p>
<p>Then I moved on to creating a GStreamer pipeline. But I get an error making an avdec_h264 decoder:</p>
<pre><code>$ GST_DEBUG=3 python3
Python 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:54:21) [Clang 16.0.6 ] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import gi
>>> import sys
>>> import threading
>>> gi.require_version("Gtk", "3.0")
>>> gi.require_version("Gdk", "3.0")
>>> gi.require_version('Gst', '1.0')
>>> gi.require_version('GstVideo', '1.0')
>>> from gi.repository import GObject, Gst, GstVideo, Gtk, Gdk, GLib, Gio
>>> Gst.init([])
[]
>>> decoder = Gst.ElementFactory.make('avdec_h264', 'decoder')
0:00:08.230811000 98675 0x600001a94630 WARN GST_ELEMENT_FACTORY gstelementfactory.c:765:gst_element_factory_make_valist: no such element factory "avdec_h264"!
>>> decoder == None
True
</code></pre>
<p>I successfully installed the following:</p>
<pre><code>conda install gst-plugins-good
conda install libavif
conda install ffmpeg
</code></pre>
<p>...but afterward get the same warning and <code>Gst.ElementFactory.make()</code> returns <code>None</code>.</p>
<p>I also tried:</p>
<pre><code>conda install decodebin3
conda install gst-ffmpeg
conda install gst-libav
</code></pre>
<p>...but got "…packages are not available from current channels".</p>
<p>How do I install avdec_h264 so the call to <code>Gst.ElementFactory.make('avdec_h264', 'decoder')</code> works?</p>
|
<python><macos><conda><gstreamer><h.264>
|
2024-04-17 22:53:45
| 1
| 150,682
|
Daryl Spitzer
|
78,344,022
| 219,153
|
Why is this seemingly redundant Python import statement necessary?
|
<p>This snippet of Python 3.12 code:</p>
<pre><code>import paho
import paho.mqtt.client # line 2
client = paho.mqtt.client.Client(paho.mqtt.enums.CallbackAPIVersion(2))
</code></pre>
<p>fails when line #2 is commented out. <code>paho</code> module is imported by the first line. I'm using the full name <code>paho.mqtt.client.Client</code> in the last line. Why is the seemingly redundant <code>import paho.mqtt.client</code> necessary?</p>
<p>Is there a way to import <code>paho.mqtt</code> module, so it can be used to shorten both <code>paho.mqtt.client.Client</code> and <code>paho.mqtt.enums.CallbackAPIVersion</code> names?</p>
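<p>The behaviour is standard Python import semantics, not something paho-specific: importing a package does not import its submodules, and the submodule only appears as an attribute of the package object after it has been imported somewhere. A stdlib sketch of the same effect (using <code>xml</code>, which has lazy submodules just like <code>paho</code>):</p>

```python
import xml  # imports the package only, not xml.dom etc.

before = hasattr(xml, "dom")  # typically False in a fresh interpreter

import xml.dom  # now xml.dom exists as an attribute of xml
after = hasattr(xml, "dom")
print(before, after)

# `from package import submodule as name` both imports the submodule and
# binds a short alias, which is one way to shorten long dotted names:
from xml import dom as xml_dom
print(xml_dom is xml.dom)  # True
```

<p>By the same logic, <code>from paho.mqtt import client, enums</code> would presumably import both submodules and let you write <code>client.Client</code> and <code>enums.CallbackAPIVersion</code>.</p>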
|
<python><python-3.x><python-import>
|
2024-04-17 22:38:48
| 2
| 8,585
|
Paul Jurczak
|
78,344,011
| 14,083,003
|
ValueError: For a sparse output, all columns should be a numeric or convertible to a numeric
|
<p>I am pre-processing my data before applying sklearn models, but I am having trouble identifying why an error keeps happening. When I run the code with each individual column index in <code>ColumnTransformer</code>, it works for every variable. However, the error happens when I apply it to multiple columns at once.</p>
<ol>
<li>What is the problem when I run it all together?</li>
<li>How can you identify which column causes the error using codes? (I checked it by changing the argument manually)</li>
<li>What is the remedy when a single column causes this error?</li>
</ol>
<p>Data and example code:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
# Number of samples
num_samples = 1000
# Generating random data
data = {
'Feature_1': np.random.rand(num_samples),
'Feature_2': np.random.rand(num_samples),
'Feature_3': np.random.choice(['A', 'B', 'C'], num_samples),
'Feature_4': np.random.choice(['X', 'Y', 'Z'], num_samples),
'Feature_5': np.random.choice(['M', 'N', 'O'], num_samples), # Non-numeric values intentionally introduced
'Feature_6': np.random.choice(['P', 'Q', 'R'], num_samples), # Non-numeric values intentionally introduced
'Feature_7': np.random.choice(['D', 'E', 'F'], num_samples),
'Feature_8': np.random.choice(['G', 'H', 'I'], num_samples),
'Feature_9': np.random.choice(['S', 'T', 'U'], num_samples),
'Feature_10': np.random.rand(num_samples),
'Feature_11': np.random.rand(num_samples),
'Feature_12': np.random.choice(['V', 'W', 'X'], num_samples),
'Feature_13': np.random.choice(['Y', 'Z'], num_samples),
'Feature_14': np.random.choice(['P', 'Q', 'R'], num_samples),
'Feature_15': np.random.choice(['A', 'B', 'C', 'D'], num_samples),
'Target': np.random.choice([0, 1], num_samples)
}
categorical_indices = [3, 4, 5, 6, 7, 8, 9, 12, 13, 14, 15]
d = pd.DataFrame(data)
X = d.values
ct = ColumnTransformer(
transformers=[('encoder', OneHotEncoder(), categorical_indices)],
remainder='passthrough'
)
X_1 = np.array(ct.fit_transform(X))
</code></pre>
<p>The error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py", line 588, in _hstack
converted_Xs = [check_array(X,
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py", line 588, in <listcomp>
converted_Xs = [check_array(X,
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/utils/validation.py", line 63, in inner_f
return f(*args, **kwargs)
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/utils/validation.py", line 673, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/numpy/core/_asarray.py", line 102, in asarray
return array(a, dtype, copy=False, order=order)
ValueError: could not convert string to float: 'C'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/var/folders/25/5mycjlz1013629wcstsb_mwh0000gn/T/ipykernel_24019/2314645552.py", line 10, in <module>
X_1 = np.array(ct.fit_transform(X))
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py", line 529, in fit_transform
return self._hstack(list(Xs))
File "/Users/jaeyoungkim/opt/anaconda3/lib/python3.9/site-packages/sklearn/compose/_column_transformer.py", line 593, in _hstack
raise ValueError(
ValueError: For a sparse output, all columns should be a numeric or convertible to a numeric.
</code></pre>
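<p>For question 2, one way to locate string-valued columns programmatically instead of counting positions by hand (a sketch on a smaller hypothetical frame; note that positional indices in <code>ColumnTransformer</code> are 0-based, so the first column is index 0):</p>

```python
import numpy as np
import pandas as pd

num_samples = 10
d = pd.DataFrame({
    "Feature_1": np.random.rand(num_samples),
    "Feature_2": np.random.rand(num_samples),
    "Feature_3": np.random.choice(["A", "B", "C"], num_samples),
    "Target": np.random.choice([0, 1], num_samples),
})

# 0-based positions of the object (string) columns:
categorical_indices = [i for i, c in enumerate(d.columns) if d[c].dtype == object]
print(categorical_indices)  # [2]
```

<p>Computing the indices this way avoids off-by-one mistakes, since string columns left in <code>remainder='passthrough'</code> are exactly what triggers "could not convert string to float" for sparse output.</p>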
|
<python><scikit-learn><transformation><one-hot-encoding><categorical>
|
2024-04-17 22:31:51
| 1
| 411
|
J.K.
|
78,343,931
| 1,972,982
|
Creating Scheduled Posts with an Image using Facebook Graph API
|
<p>I'm using a Python script to try and create a post on a Facebook page. I've been able to create the post and schedule it for the future. It all goes wrong when I try to add an image.</p>
<p>First question, is this a limit of the Facebook API?</p>
<p>Here is my Python code (I've redacted the access token and page ID). I've included the error I'm getting beneath.</p>
<p>For background, I've added the <code>page_access_token</code> element because I was originally getting an error <code>Error: (#200) Unpublished posts must be posted to a page as the page itself.</code>. This error appeared after I added the <code>temporary=True</code> during the image upload - I found this as a potential solution to a bug in the scheduled posts.</p>
<p>Any suggestions appreciated.</p>
<pre><code>import facebook
import datetime
# Your Facebook access token
access_token = 'xxx_Redacted_xxx'
# ID of your Facebook page
page_id = '426572384077950'
# Initialize Facebook Graph API with your access token
user_graph = facebook.GraphAPI(access_token)
page_info = user_graph.get_object(f'/{page_id}?fields=access_token')
page_access_token = page_info.get("access_token")
print(page_info)
print("Page: ", page_access_token)
graph = facebook.GraphAPI(page_access_token)
def schedule_post(page_id, message, days_from_now, scheduled_time=None, image_path=None):
    try:
        # Default scheduled time: 17:00 if not provided
        if scheduled_time is None:
            scheduled_time = datetime.time(17, 0)  # Default to 17:00

        # Calculate scheduled datetime
        scheduled_datetime = datetime.datetime.now() + datetime.timedelta(days=days_from_now)
        scheduled_datetime = scheduled_datetime.replace(hour=scheduled_time.hour, minute=scheduled_time.minute, second=0, microsecond=0)

        # Default image path: None (no image)
        attached_media = []
        if image_path:
            # Upload the image (check for errors)
            try:
                image = open(image_path, 'rb')
                image_id = graph.put_photo(image, album_path=f'{page_id}/photos', published=False, temporary=True)['id']
                print(image_id)
            except facebook.GraphAPIError as e:
                print(f"Error uploading image: {e}")
                # Handle image upload error (optional: log the error or continue without image)

            # If upload successful, append to attached_media
            attached_media.append({'media_fbid': image_id})

        # Format scheduled time as required by Facebook API
        scheduled_time_str = scheduled_datetime.strftime('%Y-%m-%dT%H:%M:%S')

        # Debugging: Print attached_media before scheduling the post
        print("Attached Media:", attached_media)

        # Construct parameters for the put_object method
        parameters = {
            'message': message,
            'published': False,
            'scheduled_publish_time': scheduled_time_str
        }

        # Add attached_media to parameters if it's not None
        if attached_media is not None:
            parameters['attached_media'] = attached_media

        print("parameters:", parameters)

        # Schedule the post
        graph.put_object(page_id, "feed", **parameters)
        print(f"Post scheduled for {scheduled_time_str}: {message}")
        return True
    except facebook.GraphAPIError as e:
        print(f"Error: {e}")
        return False
# Example usage
if __name__ == "__main__":
    # Message for the post
    message = "This is a scheduled post for 3 days from now at 17:00!"
    # Number of days from now
    days_from_now = 3
    # Scheduled time (optional)
    scheduled_time = datetime.time(10, 30)  # Change this to the desired time or None for default (17:00)
    # Image path (set to None for no image)
    image_path = 'img/Academic.jpg'  # Change this to the path of your image or None for no image
    # image_path = None

    # Schedule the post
    success = schedule_post(page_id, message, days_from_now, scheduled_time, image_path)

    if not success:
        print("Failed to schedule the post.")
</code></pre>
<p>Output:</p>
<pre><code>{'access_token': 'xxx_Redacted_xxx', 'id': '426572384077950'}
Page: xxx_Redacted_xxx
860430092794306
Attached Media: [{'media_fbid': '860430092794306'}]
parameters: {'message': 'This is a scheduled post for 3 days from now at 17:00!', 'published': False, 'scheduled_publish_time': '2024-04-20T10:30:00', 'attached_media': [{'media_fbid': '860430092794306'}]}
Error: (#100) param attached_media must be an array.
Failed to schedule the post.
</code></pre>
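<p>A commonly reported cause of this exact "must be an array" error (not verified here against the live API) is that when <code>attached_media</code> is sent as an HTTP form parameter, a Python list is not encoded as a JSON array; serializing it with <code>json.dumps</code> first is the usual workaround. A sketch using the values from the output above:</p>

```python
import json

attached_media = [{"media_fbid": "860430092794306"}]

parameters = {
    "message": "This is a scheduled post for 3 days from now at 17:00!",
    "published": False,
    "scheduled_publish_time": "2024-04-20T10:30:00",
    # serialize the list so the API receives a literal JSON array
    "attached_media": json.dumps(attached_media),
}
print(parameters["attached_media"])  # '[{"media_fbid": "860430092794306"}]'
```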
|
<python><facebook-graph-api>
|
2024-04-17 21:58:40
| 0
| 333
|
Jamie
|
78,343,897
| 8,021,207
|
An async/parallel approach to working a (potentially) growing task queue
|
<p>I have a list of items that need to be processed and I want to be able to process them in parallel for efficiency. But during the processing of one item I may discover more items that need to be added to the list to be processed.</p>
<p>I've looked at the <a href="https://docs.python.org/3/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing</a> and <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ProcessPoolExecutor" rel="nofollow noreferrer">concurrent</a> libraries but I couldn't find a feature of a queue of this sort that can be modified during runtime, or after it's been passed to the pool. Is there a solution that meets my desires?</p>
<p>Here's some code that demonstrates what I'm wanting.</p>
<pre class="lang-py prettyprint-override"><code>i = 0
jobs_to_be_processed = [f'job{(i:=i+1)}' for _ in range(5)]

def process_job(job):
    global i  # the walrus below rebinds the module-level counter
    if int(job[-1]) % 3 == 0:
        jobs_to_be_processed.append(f'new job{(i:=i+1)}')
    # do process job ...

# Add jobs to a pool that allows `jobs_to_be_processed`
# to have jobs added while processing
pool = AsyncJobPool(jobs_to_be_processed)
pool.start()
pool.join()
</code></pre>
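<p>One way to sketch this with the standard library (names such as <code>run_growing_pool</code> and the "jobs ending in 3 spawn a child" rule are invented for the example): give each worker a <code>submit</code> callback that schedules follow-up futures on the same <code>ThreadPoolExecutor</code>, and keep draining until no unfinished futures remain:</p>

```python
import concurrent.futures
import threading

def run_growing_pool(initial_jobs, process, max_workers=4):
    """Run process(job, submit) for each job; process may call submit(new_job)
    to add work discovered while the pool is already running."""
    finished = []
    lock = threading.Lock()
    with concurrent.futures.ThreadPoolExecutor(max_workers) as pool:
        futures = set()

        def worker(job):
            process(job, submit)
            with lock:
                finished.append(job)

        def submit(job):
            with lock:
                futures.add(pool.submit(worker, job))

        for job in initial_jobs:
            submit(job)

        # Drain until every future, including late additions, is done.
        # A running worker's own future is not done yet, so any job it
        # submits is recorded before the loop can observe "all done".
        while True:
            with lock:
                pending = {f for f in futures if not f.done()}
            if not pending:
                break
            concurrent.futures.wait(pending)
    return finished

# Hypothetical rule: jobs ending in "3" spawn one follow-up job.
def process(job, submit):
    if job.endswith("3"):
        submit(job + "-child")

out = run_growing_pool(["job1", "job2", "job3"], process)
print(sorted(out))  # ['job1', 'job2', 'job3', 'job3-child']
```

<p>A <code>queue.Queue</code> with <code>task_done()</code>/<code>join()</code> is an equally valid shape for the same idea; the essential part is that workers can enqueue new work before the drain condition is checked.</p>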
|
<python><multithreading><concurrency><multiprocessing>
|
2024-04-17 21:43:15
| 1
| 492
|
russhoppa
|
78,343,854
| 3,512,538
|
python object cleanup order - can I use object reference to force GC to collect another object first?
|
<p>I have 2 objects <code>a, b</code> (instances of <code>A,B</code> respectively) that are created inside my app:</p>
<pre class="lang-py prettyprint-override"><code>class A:
    def __del__(self):
        print("A.__del__")

class B:
    def __del__(self):
        print("B.__del__")

a = A()
b = B()
</code></pre>
<p>this would print out (on my machine :) ):</p>
<pre class="lang-py prettyprint-override"><code>A.__del__
B.__del__
</code></pre>
<p>which means that, in this case, the garbage collection order is the creation order.</p>
<p>What I need is to force the garbage collection order, so that <code>a</code> would be destroyed after <code>b</code>. I tried keeping <code>a</code> inside <code>b</code>:</p>
<pre class="lang-py prettyprint-override"><code>b._guard = a
</code></pre>
<p>but that didn't help, and <code>a</code> was destroyed first (at least the <code>__del__</code> functions were called in the same order).</p>
<p>My real world case is using <code>pybind11</code>, where a grandparent creates a parent which then creates a child, and that child must be destroyed before the grandparent. Keeping <code>self</code> of the grandparent inside the child seems to work, but in this simple case I'm asking it clearly doesn't so I don't think that is a robust solution.</p>
<p>It seems that <code>py::keep_alive</code> might have helped me if my case were parent and child, but since there is no connection between the grandparent and the child, I think it's irrelevant.</p>
<p>Is there a pure pythonic way (or a neat <code>pybind11</code> way) to force the grandparent to be kept alive until the cleanup of the child?</p>
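<p>Interestingly, when I tear down with explicit <code>del</code> (CPython refcounting), the <code>_guard</code> reference <em>does</em> enforce the order I want; the problem only shows up when cleanup happens implicitly at interpreter exit:</p>

```python
import gc

order = []

class A:
    def __del__(self):
        order.append("A")

class B:
    def __del__(self):
        order.append("B")

a = A()
b = B()
b._guard = a   # keep a alive through b

del a          # a survives: b._guard still references it
del b          # b's __del__ runs, then its __dict__ is cleared, dropping a
gc.collect()
print(order)   # ['B', 'A'] under CPython refcounting
```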
|
<python><garbage-collection><pybind11>
|
2024-04-17 21:32:06
| 1
| 12,897
|
CIsForCookies
|
78,343,764
| 23,260,297
|
JSON text as command line argument when running python script
|
<p>I have read similar questions about passing JSON text as a command line argument with Python, but none of the solutions have worked in my case.</p>
<p>I am automating a python script, and the automation runs a powershell script that takes a JSON object generated from a power automate flow. Everything works great until it comes to processing the JSON in my python script.</p>
<p>My goal is to convert the JSON to a dictionary so that I can use the key value pairs in my code.</p>
<p>My powershell script looks like this:</p>
<pre><code>
Python script.py {"Items":[{"Name":"foo","File":"\\\\files\\foo\\foo.csv"},{"Name":"bar","File":"\\\\files\\bar\\bar.csv"},{"Name":"baz","File":"\\\\files\\baz\\baz.csv"}]}
</code></pre>
<p>My JSON looks like this:</p>
<pre><code>{
    "Items": [
        {
            "Name": "foo",
            "File": "\\\\files\\foo\\foo.csv"
        },
        {
            "Name": "bar",
            "File": "\\\\files\\bar\\bar.csv"
        },
        {
            "Name": "baz",
            "File": "\\\\files\\baz\\baz.csv"
        }
    ]
}
</code></pre>
<p>I tried this solution from SO:</p>
<pre><code>if len(sys.argv) > 1:
    d = json.loads(sys.argv[1])
    print(d)
</code></pre>
<p>but it returns this error:</p>
<pre><code>Unexpected token ':' in expression or statement.
+ CategoryInfo : ParserError: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : UnexpectedToken
</code></pre>
<p>I am unsure how to solve this problem, any suggestions would help!</p>
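<p>To rule out the Python side, I checked that <code>json.loads</code> handles the payload fine once it arrives as a single intact string, so the failure seems to happen in PowerShell before Python ever runs (the unquoted braces are parsed by PowerShell itself):</p>

```python
import json

# the same payload, as one intact string (a raw string keeps the backslashes literal)
raw = r'{"Items":[{"Name":"foo","File":"\\\\files\\foo\\foo.csv"}]}'
d = json.loads(raw)
print(d["Items"][0]["Name"])   # foo
print(d["Items"][0]["File"])   # \\files\foo\foo.csv
```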
|
<python><json><powershell>
|
2024-04-17 21:05:59
| 2
| 2,185
|
iBeMeltin
|
78,343,713
| 9,092,669
|
scrape rotowire MLB player news and form into a table using python
|
<p>I would like to scrape <a href="https://www.rotowire.com/baseball/news.php" rel="nofollow noreferrer">https://www.rotowire.com/baseball/news.php</a>, which contains news about MLB players, and save the data in a table format like so:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>date</th>
<th>player</th>
<th>headline</th>
<th>news</th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>4/17</td>
<td>Abner Uribe</td>
<td>Picks up second win</td>
<td>Uribe (2-1) earned the win Wednesday against the Padres after he allowed a hit and no walks in a scoreless eighth inning. He had one strikeout.</td>
<td></td>
</tr>
<tr>
<td>4/17</td>
<td>Richie Palacios</td>
<td>Gets day off vs. lefty</td>
<td>Palacios is out of the lineup for Wednesday's game against the Angels.</td>
<td></td>
</tr>
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
</tr>
</tbody>
</table></div>
<p>I'm having difficulty understanding how to isolate each news item into its own row of a dataframe. Any help getting this going is appreciated. Ideally I'd scrape every 5 minutes and keep appending to the table.</p>
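<p>To show where I'm stuck, here's the kind of row isolation I can do on a toy snippet; note the class names below are <em>made up</em> for illustration, not RotoWire's actual markup:</p>

```python
from bs4 import BeautifulSoup
import pandas as pd

# hypothetical markup standing in for two news items on the page
html = """
<div class="news-update"><div class="player">Abner Uribe</div>
<div class="headline">Picks up second win</div>
<div class="news">Uribe (2-1) earned the win Wednesday...</div></div>
<div class="news-update"><div class="player">Richie Palacios</div>
<div class="headline">Gets day off vs. lefty</div>
<div class="news">Palacios is out of the lineup...</div></div>
"""
soup = BeautifulSoup(html, "html.parser")

# one dict per news item -> one dataframe row per item
rows = [
    {"player": u.select_one(".player").get_text(strip=True),
     "headline": u.select_one(".headline").get_text(strip=True),
     "news": u.select_one(".news").get_text(strip=True)}
    for u in soup.select("div.news-update")
]
df = pd.DataFrame(rows)
print(df.shape)  # (2, 3)
```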
|
<python><web-scraping><beautifulsoup>
|
2024-04-17 20:54:49
| 1
| 395
|
buttermilk
|
78,343,506
| 345,660
|
Split concave object mask into 1 or more convex sub-sections
|
<p>I am working with an object detection model. It works pretty well, but the output is a mask, and I need bounding boxes. Naively, I can just use OpenCV to draw a bounding box around the contours of the mask, but if the mask is very concave that can include large non-image regions.</p>
<p>I've figured out how to use a convex hull to check if the mask is concave, but I can't figure out how to split a concave mask into convex sub-regions. I'm ok if my bounding boxes overlap, I just don't want them to contain large non-masked regions.</p>
<p>Is there a simple heuristic I can use here to split the bounding boxes into sub-boxes? Maybe I could use an optimizer to find a set of 1 or more rectangles that mostly fill a given contour?</p>
<p>Here are some examples of my semantic masks:</p>
<p>I'd like to split the big object into 2 or 3 rectangles, but keep the 2 smaller objects as-is:
<a href="https://i.sstatic.net/d771v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d771v.png" alt="Mask 1" /></a></p>
<p>2 rectangles would be good, but we could get up to 5
<a href="https://i.sstatic.net/HiMex.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiMex.png" alt="Mask 2" /></a></p>
<p>2 rectangles would be perfect for the big object, but I'd like to keep the small one as-is:
<a href="https://i.sstatic.net/pHOjC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pHOjC.png" alt="Mask 3" /></a></p>
<p>Here's a quick script to draw the bounding boxes, which illustrates the issue:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import cv2
import requests
from PIL import Image, ImageDraw
from io import BytesIO

response = requests.get('https://i.sstatic.net/d771v.png')
image = Image.open(BytesIO(response.content)).convert('RGB')
image_array = np.array(image.convert('L'))
contours, _ = cv2.findContours(image_array, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

draw = ImageDraw.Draw(image)
for contour in contours:
    x1, y1, w, h = cv2.boundingRect(contour)
    x2 = x1 + w
    y2 = y1 + h
    draw.rectangle([(x1, y1), (x2, y2)], outline="red", width=3)
image.show()
</code></pre>
<p>Here's an example of the bounding boxes I have now. I want to cut off the "arm" of the big object into its own box.
<a href="https://i.sstatic.net/ZnOa3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZnOa3.png" alt="enter image description here" /></a></p>
|
<python><opencv><geometry><object-detection><semantic-segmentation>
|
2024-04-17 19:56:17
| 0
| 30,431
|
Zach
|
78,343,430
| 595,305
|
Mock or stub a QEvent?
|
<p>Is there a way to mock a <code>QEvent</code> and pass it as a parameter?</p>
<p>I have a test like this which I want to include:</p>
<pre><code>def test_table_resize_event_sets_column_widths():
    table = results_table_classes.ResultsTableView(None)
    with mock.patch.object(table, 'setColumnWidth') as mock_set:
        mock_event = mock.Mock()
        mock_size = mock.Mock()
        mock_event.size = mock.Mock(return_value=mock_size)
        mock_size.width = mock.Mock(return_value=300)
        table.resizeEvent(mock_event)
        assert mock_set.call_args_list[0].args == (0, int(300 * 0.20))
        assert mock_set.call_args_list[1].args == (1, int(300 * 0.80))
</code></pre>
<p>This test should lead to an implementation something like this:</p>
<pre><code>def resizeEvent(self, event):
    width = event.size().width()
    self.setColumnWidth(0, int(width * 0.20))  # 20% Width Column
    self.setColumnWidth(1, int(width * 0.80))  # 80% Width Column
</code></pre>
<p>But I get</p>
<blockquote>
<p>E TypeError: resizeEvent(self, e: QResizeEvent): argument 1
has unexpected type 'Mock'</p>
</blockquote>
<p>I tried creating <code>mock_event</code> like so:</p>
<pre><code>mock_event = mock.MagicMock(spec=QtGui.QResizeEvent)
</code></pre>
<p>... but I still get the same error. Maybe PyQt has some extra level of type-checking with this kind of method?</p>
<p>If it helps at all, I have installed pytest_qt, so the <code>qtbot</code> fixture is available, if this can somehow be used to solve this.</p>
|
<python><testing><pyqt><pytest><pytest-qt>
|
2024-04-17 19:36:57
| 0
| 16,076
|
mike rodent
|
78,343,313
| 8,382,028
|
Adding HTML Button to Draftail Editor Action Buttons in Wagtail
|
<p>I'm having trouble wrapping my head around adding a custom button to Wagtail's RichTextEditor toolbar that, when clicked, lets an editor insert a link rendered as an HTML <code>button</code>.</p>
<p>The code I used for this in TinyMCE was originally provided here: <a href="https://dev.to/codeanddeploy/tinymce-add-custom-button-example-399m" rel="nofollow noreferrer">https://dev.to/codeanddeploy/tinymce-add-custom-button-example-399m</a></p>
<p>But I can't figure out how to register a hook in Wagtail to implement that type of functionality. There is a similar example below, but I can't figure out how to render the button with a predefined style, as I had done in the TinyMCE editor.</p>
<p>Here is an example that works fine for its own use case, but I am hoping there is a simpler way to add buttons like I did with TinyMCE: <a href="https://erev0s.com/blog/wagtail-list-tips-and-tricks/#add-a-code-button-in-the-rich-text-editor" rel="nofollow noreferrer">https://erev0s.com/blog/wagtail-list-tips-and-tricks/#add-a-code-button-in-the-rich-text-editor</a></p>
<p>Here is the code from that post:</p>
<pre><code>from wagtail.core import hooks
# imports the snippet needs but the post omitted:
from wagtail.admin.rich_text.converters.html_to_contentstate import InlineStyleElementHandler
import wagtail.admin.rich_text.editors.draftail.features as draftail_features


@hooks.register("register_rich_text_features")
def register_code_styling(features):
    """Add the <code> to the richtext editor and page."""
    # Step 1
    feature_name = "code"
    type_ = "CODE"
    tag = "code"

    # Step 2
    control = {
        "type": type_,
        "label": "</>",
        "description": "Code"
    }

    # Step 3
    features.register_editor_plugin(
        "draftail", feature_name, draftail_features.InlineStyleFeature(control)
    )

    # Step 4
    db_conversion = {
        "from_database_format": {tag: InlineStyleElementHandler(type_)},
        "to_database_format": {"style_map": {type_: {"element": tag}}}
    }

    # Step 5
    features.register_converter_rule("contentstate", feature_name, db_conversion)

    # Step 6. This is optional
    # This will register this feature with all richtext editors by default
    features.default_features.append(feature_name)
</code></pre>
|
<python><django><wagtail><draftail>
|
2024-04-17 19:06:53
| 1
| 3,060
|
ViaTech
|
78,343,287
| 7,921,684
|
influxd TypeError: <lambda>() got an unexpected keyword argument 'key_key_password'
|
<p>I am following the guide and copying the lines and token from the generated code in the document, but when I run it I face the error below, which I traced back to this line: <strong>write_api.write(bucket=bucket, org=org, record=point)</strong>. I'm on influxdb 2.7.5.</p>
<pre><code>client = influxdb_client.InfluxDBClient(url=url, token=token, org=org)
write_api = client.write_api(write_options=SYNCHRONOUS)
query_api = client.query_api()
delete_api = client.delete_api()
buckets = client.buckets_api()
print("buckets", buckets.find_buckets())

for value in range(5):
    point = (
        Point("measurement1")
        .tag("id", "1")
        .field("field1", value)
    )
    write_api.write(bucket=bucket, org=org, record=point)
    time.sleep(1)  # separate points by 1 second
write_api.close()
</code></pre>
<p>The code above is given in the user setup guide for Python; copying and running it generates the following error.</p>
<pre><code>Traceback (most recent call last):
  File "/influxManager.py", line 21, in &lt;module&gt;
    print("buckets", buckets.find_buckets())
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/client/bucket_api.py", line 119, in find_buckets
    return self._buckets_service.get_buckets(**kwargs)
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/service/buckets_service.py", line 558, in get_buckets
    (data) = self.get_buckets_with_http_info(**kwargs)  # noqa: E501
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/service/buckets_service.py", line 586, in get_buckets_with_http_info
    return self.api_client.call_api(
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py", line 343, in call_api
    return self.__call_api(resource_path, method,
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py", line 173, in __call_api
    response_data = self.request(
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/api_client.py", line 365, in request
    return self.rest_client.GET(url,
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/rest.py", line 268, in GET
    return self.request("GET", url,
  File "/opt/anaconda3/lib/python3.9/site-packages/influxdb_client/_sync/rest.py", line 235, in request
    r = self.pool_manager.request(method, url,
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/request.py", line 66, in request
    return self.request_encode_url(method, url, fields=fields,
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/request.py", line 89, in request_encode_url
    return self.urlopen(method, url, **extra_kw)
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py", line 313, in urlopen
    conn = self.connection_from_host(u.host, port=u.port, scheme=u.scheme)
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py", line 229, in connection_from_host
    return self.connection_from_context(request_context)
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py", line 240, in connection_from_context
    pool_key = pool_key_constructor(request_context)
  File "/opt/anaconda3/lib/python3.9/site-packages/urllib3/poolmanager.py", line 105, in _default_key_normalizer
    return key_class(**context)
TypeError: &lt;lambda&gt;() got an unexpected keyword argument 'key_key_password'
</code></pre>
|
<python><influxdb><influxdb-2>
|
2024-04-17 19:00:52
| 1
| 586
|
Gray
|
78,343,089
| 7,938,217
|
Ensuring VSCode Python Autocompletion
|
<p>How can I ensure that when I instantiate a data structure in VSCode+Jupyter+Python, the attributes of the data structure are available for autocompletion throughout the notebook.</p>
<pre><code>
# %% Jupyter Cell #1
# This cell is executed before attempting autocompletes in cell 2
from dataclasses import dataclass  # import needed to run the snippet


@dataclass
class ExistingItemNames:
    pass


class SearchableItemNames:
    def __init__(self, var_names: list):
        self.names__ = ExistingItemNames()
        for name in var_names:
            setattr(self.names__, name, name)
        self.names = self.names__.__dict__


si = SearchableItemNames([f"v{i}" for i in range(2000)])

# %% Jupyter Cell 2
# outside other data structures, accessing through a
# dict or attr seem equivalent
si.names['v1999']   # does not find 'v1999' key via autocomplete
si.names['v10']     # does find 'v10' key via autocomplete
si.names__.v1999    # does not find `v1999` attr via autocomplete
si.names__.v10      # does find `v10` attr via autocomplete

# inside of a data structure, the dict is required for autocompletion
# but still does not find all values
(si.names__.v1999)    # does not find `v1999` attr via autocomplete
(si.names__.v10)      # does not find `v10` attr via autocomplete
(si.names['v1999'])   # does not find 'v1999' key via autocomplete
(si.names['v10'])     # does find 'v10' key via autocomplete
</code></pre>
<p>I understand that exhaustive enumeration of keys or attr would not be a good solution for all use cases due to the limitations of the python language server, but is there a way I can force the IDE (VSCode+Jupyter) to only do so for certain objects, within certain python envs, or certain Jupyter notebooks?</p>
|
<python><visual-studio-code><jupyter-notebook><pylance><python-jedi>
|
2024-04-17 18:21:16
| 1
| 400
|
Kelley Brady
|
78,343,052
| 5,722,359
|
Lifted ttk.Label widget can't redraw promptly?
|
<p>Here is my minimal reproducible example (MRE) of how I create <code>ttk.Button</code> widgets with an image via a multithreaded approach. However, I am experiencing an issue with a task that occurs before the multithreading task. Whenever the <code>self.label</code> widget is lifted, it isn't redrawn promptly; a grey patch appears briefly before <code>self.label</code> appears completely. Running <code>self.update_idletasks()</code> (the first commented-out call in <code>create_groups_concurrently</code>) can't fix this issue; only running <code>self.update()</code> can (you have to uncomment that line). However, some opined that the use of <code>self.update()</code> can be <a href="https://stackoverflow.com/questions/78318063/tkinter-tcl-update-considered-harmful-is-this-msg-still-valid/78325942?noredirect=1#comment138087739_78325942">harmful</a>. Is it possible to resolve this issue without using <code>self.update()</code>? If so, how? Please can you also explain why this issue happens? Thank you.</p>
<p><strong>Issue demo:</strong></p>
<p><a href="https://i.sstatic.net/Ysbe0.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ysbe0.gif" alt="Issue" /></a></p>
<p><strong>Desired outcome demo:</strong>
<a href="https://i.sstatic.net/ylCqW.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ylCqW.gif" alt="Desired outcome" /></a></p>
<p><strong>MRE:</strong></p>
<p>Please save any <code>.jpg</code> file you have into the same directory/folder as this script and rename it to <code>testimage.jpg</code>. How this GUI works? Click the <code>Run</code> button to start the multithreading. To rerun, you have to first click <code>Reset</code> button, thereafter click the <code>Run</code> button. DO NOT click <code>Reset</code> when threading is ongoing and vice versa.</p>
<pre><code># Python modules
import tkinter as tk
import tkinter.ttk as ttk
import concurrent.futures as cf
import queue
import threading
from itertools import repeat
import random
from time import sleep

# External modules
from PIL import Image, ImageTk


def get_thumbnail_c(gid: str, fid: str, fpath: str, psize=(100, 100)):
    # print(f"{threading.main_thread()=} {threading.current_thread()=}")
    with Image.open(fpath) as img:
        img.load()
    img.thumbnail(psize)
    return gid, fid, img


def get_thumbnails_concurrently_with_queue(
        g_ids: list, f_ids: list, f_paths: list, rqueue: queue.Queue,
        size: tuple):
    futures = []
    job_fn = get_thumbnail_c
    with cf.ThreadPoolExecutor() as vp_executor:
        for gid, fids, fpath in zip(g_ids, f_ids, f_paths):
            for gg, ff, pp in zip(repeat(gid, len(fids)), fids,
                                  repeat(fpath, len(fids))):
                job_args = gg, ff, pp, size
                futures.append(vp_executor.submit(job_fn, *job_args))
        for future in cf.as_completed(futures):
            rqueue.put(("thumbnail", future.result()))
            futures.remove(future)
            if not futures:
                print(f'get_thumbnails_concurrently has completed!')
                rqueue.put(("completed", ()))


class GroupNoImage(ttk.Frame):

    def __init__(self, master, gid, fids):
        super().__init__(master, style='gframe.TFrame')
        self.bns = {}
        self.imgs = {}
        for i, fid in enumerate(fids):
            self.bns[fid] = ttk.Button(self, text=f"{gid}-P{i}",
                                       compound="top",
                                       style="imgbns.TButton")
            self.bns[fid].grid(row=0, column=i, sticky="nsew")


class App(ttk.PanedWindow):

    def __init__(self, master, **options):
        super().__init__(master, **options)
        self.master = master
        self.groups = {}
        self.rqueue = queue.Queue()
        self.vsf = ttk.Frame(self)
        self.add(self.vsf)
        self.label = ttk.Label(
            self, style="label.TLabel", width=7, anchor="c", text="ttk.Label",
            font=('Times', '70', ''))
        self.label.place(
            relx=0.5, rely=0.5, relwidth=.8, relheight=.8, anchor="center",
            in_=self.vsf)
        self.label.lower(self.vsf)

    def create_grpsframe(self):
        self.grpsframe = ttk.Frame(self.vsf, style='grpsframe.TFrame')
        self.grpsframe.grid(row=0, column=0, sticky="nsew")

    def run(self, event):
        self.create_grpsframe()
        gids = [f"G{i}" for i in range(50)]
        random.seed()
        fids = []
        for gid in gids:
            f_ids = []
            total = random.randint(2, 10)
            for i in range(total):
                f_ids.append(f"{gid}-P{i}")
            fids.append(f_ids)
        fpaths = ["testimage.jpg" for i in range(len(gids))]
        self.create_groups_concurrently(gids, fids, fpaths)

    def reset(self, event):
        self.grpsframe.destroy()
        self.groups.clear()

    def create_groups_concurrently(self, gids, fids, fpaths):
        print(f"\ncreate_groups_concurrently")
        self.label.lift(self.vsf)
        # self.update_idletasks()  # Can't fix self.label appearance issue
        # self.update()  # Fixed self.label appearance issue
        for i, (gid, f_ids) in enumerate(zip(gids, fids)):
            self.groups[gid] = GroupNoImage(self.grpsframe, gid, f_ids)
            self.groups[gid].grid(row=i, column=0, sticky="nsew")
        self.update_idletasks()
        # sleep(3)
        print(f"\nStart thread-queue")
        jthread = threading.Thread(
            target=get_thumbnails_concurrently_with_queue,
            args=(gids, fids, fpaths, self.rqueue, (100, 100)),
            name="jobthread")
        jthread.start()
        self.check_rqueue()

    def check_rqueue(self):
        # print(f"\ndef _check_thread(self, thread, start0):")
        duration = 1  # millisecond
        try:
            info = self.rqueue.get(block=False)
            # print(f"{info=}")
        except queue.Empty:
            self.after(1, lambda: self.check_rqueue())
        else:
            match info[0]:
                case "thumbnail":
                    gid, fid, img = info[1]
                    print(f"{gid=} {fid=}")
                    grps = self.groups
                    grps[gid].imgs[fid] = ImageTk.PhotoImage(img)
                    grps[gid].bns[fid]["image"] = grps[gid].imgs[fid]
                    self.update_idletasks()
                    self.after(duration, lambda: self.check_rqueue())
                case "completed":
                    print(f'Completed')
                    self.label.lower(self.vsf)


class ButtonGroups(ttk.Frame):

    def __init__(self, master, **options):
        super().__init__(master, style='bnframe.TFrame', **options)
        self.master = master
        self.bnrun = ttk.Button(
            self, text="Run", width=10, style='bnrun.TButton')
        self.bnreset = ttk.Button(
            self, text="Reset", width=10, style='bnreset.TButton')
        self.columnconfigure(0, weight=1)
        self.columnconfigure(1, weight=1)
        self.bnrun.grid(row=0, column=0, sticky="nsew")
        self.bnreset.grid(row=0, column=1, sticky="nsew")


if __name__ == "__main__":
    root = tk.Tk()
    root.geometry('1300x600')
    root.columnconfigure(0, weight=1)
    root.rowconfigure(0, weight=1)

    ss = ttk.Style()
    ss.theme_use('default')
    ss.configure(".", background="gold")
    ss.configure("TPanedwindow", background="red")
    ss.configure('grpsframe.TFrame', background='green')
    ss.configure('gframe.TFrame', background='yellow')
    ss.configure('imgbns.TButton', background='orange')
    ss.configure("label.TLabel", background="cyan")
    ss.configure('bnframe.TFrame', background='white')
    ss.configure('bnrun.TButton', background='violet')
    ss.configure('bnreset.TButton', background='green')

    app = App(root)
    bns = ButtonGroups(root)
    app.grid(row=0, column=0, sticky="nsew")
    bns.grid(row=1, column=0, sticky="nsew")
    bns.bnrun.bind("&lt;B1-ButtonRelease&gt;", app.run)
    bns.bnreset.bind("&lt;B1-ButtonRelease&gt;", app.reset)
    root.mainloop()
</code></pre>
|
<python><tkinter><tcl>
|
2024-04-17 18:12:29
| 2
| 8,499
|
Sun Bear
|
78,343,028
| 8,484,885
|
Remove any observations containing only characters (or a zip code with no other numeric values)
|
<p>I'm trying to create a flag for flawed addresses, and my idea is to remove all observations that have no numeric value in them (I don't want zip codes, so the first step would be to remove those) and then apply a second filter to drop anything with no remaining numeric values.</p>
<p>In the following data frame, I would want to retain only the second row (containing a numeric address). The first row is only characters, and the third row, while containing numeric values, really only contains a five digit zipcode.</p>
<pre><code>import pandas as pd  # import needed to run the snippet

d = {'col1': ['San Diego County, California', '4150 Ute Dr, San Diego, California', 'Vista del Lago, Perris, California, 92570'], 'col2': ['prov_1', 'prov_2', 'prov_3']}
df = pd.DataFrame(data=d)
df
</code></pre>
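<p>The closest I've gotten is stripping 5-digit zip codes first and then testing for any remaining digit (sketch; the <code>\b\d{5}\b</code> pattern is my guess at a zip code, so ZIP+4 and similar formats would need more work):</p>

```python
import pandas as pd

d = {'col1': ['San Diego County, California',
              '4150 Ute Dr, San Diego, California',
              'Vista del Lago, Perris, California, 92570'],
     'col2': ['prov_1', 'prov_2', 'prov_3']}
df = pd.DataFrame(data=d)

# drop 5-digit zips, then keep rows that still contain a digit
no_zip = df['col1'].str.replace(r'\b\d{5}\b', '', regex=True)
mask = no_zip.str.contains(r'\d')
print(df[mask])  # only the second row survives
```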
|
<python><pandas>
|
2024-04-17 18:07:21
| 1
| 589
|
James
|
78,342,932
| 15,100,030
|
Django annotate over multi records forignkeys
|
<p>I work on a KPI system in which every manager can set specific questions for each user and answer them every month with a score from 1 to 10. We have 4 departments, and every department is supposed to have 4 questions. These answers have to be aggregated to return a percentage for each department.</p>
<p><strong>Models</strong></p>
<pre class="lang-py prettyprint-override"><code>
# question to answer every month
class Question(models.Model):
    department = models.CharField(max_length=100, choices=USER_TYPE)
    question = models.CharField(max_length=100)

    def __str__(self):
        return f"{self.question}"


# create every 01-month celery
class Kpi(models.Model):
    user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="kip_for")
    created_at = models.DateField()

    class Meta:
        unique_together = ("user", "created_at")

    def __str__(self):
        return f"{self.user} for Month {self.created_at.month}"


# group the answers by department
class DepartmentsKPI(models.Model):
    department = models.CharField(max_length=100, choices=USER_TYPE)
    kpi = models.ForeignKey(
        Kpi, on_delete=models.CASCADE, related_name="department_kpi"
    )


class Answer(models.Model):
    question = models.ForeignKey(
        Question, on_delete=models.CASCADE, related_name="answer_for"
    )
    score = models.PositiveSmallIntegerField(default=0)
    comment = models.TextField(null=True, blank=True)
    kpi_answers = models.ForeignKey(
        DepartmentsKPI,
        on_delete=models.CASCADE,
        related_name="department_answers",
    )
</code></pre>
<p>The problem is that when I apply the annotation over the KPI model to sum the answer scores for each department, the result comes back ungrouped, so I have to group it by a unique value (in my case, the username).</p>
<pre class="lang-py prettyprint-override"><code>def get_kpi_range(*, year, month):
    queryset = (
        KPIRepository.filter_kpis(
            created_at__year=year,
            created_at__month=month,
        )
        .prefetch_related("department_kpi__department_answers")
        .annotate(
            score=Sum("department_kpi__department_answers__score")
            / Count("department_kpi__department_answers"),
            department=F("department_kpi__department"),
        )
        .values("id", "score", "department", username=F("user__username"))
    )

    grouped = [
        {"username": username, "kpis": list(instances)}
        for username, instances in itertools.groupby(queryset, lambda x: x["username"])
    ]
    return grouped
</code></pre>
<p>However, this fails to handle multiple records for the same user. If I query all KPIs in one year, the result becomes messy.</p>
<p>Also, if I want to make charts, I cannot return data for each user grouped by each month of the year</p>
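<p>Part of the problem with a whole year, I suspect, is that <code>itertools.groupby</code> only merges <em>consecutive</em> rows with the same key, so the queryset would need to be ordered by username first:</p>

```python
import itertools

rows = [{"username": "xxx", "score": 5},
        {"username": "ahmed", "score": 6},
        {"username": "xxx", "score": 4}]

# unsorted input: "xxx" shows up as two separate groups
unsorted_groups = [k for k, _ in itertools.groupby(rows, key=lambda r: r["username"])]
print(unsorted_groups)  # ['xxx', 'ahmed', 'xxx']

# sorting by the same key first gives one group per user
rows.sort(key=lambda r: r["username"])
sorted_groups = [k for k, _ in itertools.groupby(rows, key=lambda r: r["username"])]
print(sorted_groups)  # ['ahmed', 'xxx']
```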
<p><strong>Current Result With <code>grouped</code> above</strong></p>
<pre class="lang-json prettyprint-override"><code>[
    {
        "username": "xxxxx",
        "kpis": [
            {
                "department": "HR",
                "score": 5
            },
            {
                "department": "IT",
                "score": 6
            },
            {
                "department": "QUALITY",
                "score": 4
            },
            {
                "department": "WFM",
                "score": 6
            }
        ]
    },
    {
        "username": "ahmed",
        "kpis": [
            {
                "department": "IT",
                "score": 6
            }
        ]
    }
]
</code></pre>
<p>Without <code>grouped</code></p>
<pre class="lang-json prettyprint-override"><code>[
    [
        {
            "id": 7,
            "score": 5,
            "department": "HR",
            "username": "xxx"
        },
        {
            "id": 7,
            "score": 6,
            "department": "IT",
            "username": "xxx"
        },
        {
            "id": 7,
            "score": 4,
            "department": "QUALITY",
            "username": "xxx"
        },
        {
            "id": 7,
            "score": 6,
            "department": "WFM",
            "username": "xxx"
        },
        {
            "id": 8,
            "score": 6,
            "department": "IT",
            "username": "ahmed"
        }
    ]
]
</code></pre>
|
<python><django><postgresql>
|
2024-04-17 17:48:20
| 0
| 698
|
Elabbasy00
|
78,342,889
| 1,644,352
|
Poetry can't install setuptools?
|
<p>I'm trying to use Poetry to run <a href="https://jenkins-job-builder.readthedocs.io/en/latest/" rel="nofollow noreferrer">jenkins-job-builder</a> 4.3 (yes it's old but IIUC upgrading is non-trivial). However, it fails with:</p>
<pre class="lang-none prettyprint-override"><code>ERROR:stevedore.extension:Could not load '<command>': No module named 'pkg_resources'
</code></pre>
<p>This seems to indicate that I need setuptools, and indeed on machines where <code>pythonX/site-packages/pkg_resources</code> exists, it seems to be owned by <code>setuptools</code>. Here, however, is where I run into problems.</p>
<p>If I add this to <code>pyproject.toml</code>:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
setuptools = "*"
</code></pre>
<p>I get:</p>
<pre class="lang-none prettyprint-override"><code>Because <project> depends on setuptools (*) which doesn't match any versions, version solving failed.
</code></pre>
<p>(Same results with anything else I've tried for the version specification.)</p>
<p>This is all in a GHA runner using Python 3.12 and Poetry 1.8.2 (installed via Python 3.10.12). My local machine (using Python 3.10) is able to run <code>jenkins-jobs</code> just fine. Also, <code>setuptools</code> and <code>pkg_resources</code> exist in my local Poetry cache. So it seems "install setuptools" is the correct solution... except I can't figure out how to do that. It either does it on its own without being told, or refuses to do it at all.</p>
<p><strong>How do I convince Poetry to reliably install setuptools?</strong></p>
<hr />
<p>In case it's useful, my GHA script looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>jobs:
  test:
    runs-on: ubuntu-latest
    container: ubuntu:latest
    steps:
      - run: |
          echo "$HOME/.local/bin" &gt;&gt; $GITHUB_PATH
          apt-get -qq update
          apt-get -y --no-install-recommends install pipx
      - run: pipx install poetry
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: '3.x'
          cache: poetry
      - run: poetry install
</code></pre>
<p>Also, in one experiment, I managed to get:</p>
<pre class="lang-none prettyprint-override"><code>pkg_resources.VersionConflict: (setuptools 69.5.1 (/github/home/.cache/pypoetry/virtualenvs/.../lib/python3.10/site-packages), Requirement.parse('setuptools<=65.7.0'))
</code></pre>
<p>...which is from <code>jenkins_job_builder-4.3.0.dist-info/METADATA</code>.</p>
|
<python><setuptools><python-poetry>
|
2024-04-17 17:41:04
| 0
| 2,842
|
Matthew
|
78,342,736
| 15,098,472
|
Collecting varying element indices from a tensor across multiple dimensions
|
<p>Assume I got the following tensor:</p>
<pre><code>arr = torch.randint(0, 9, (100, 50, 3))
</code></pre>
<p>What I want to achieve is collecting certain elements of that tensor; for example, let's start with collecting the 6th and 56th ones:</p>
<pre><code>indices = torch.tensor([5, 55])
partial_arr = arr[indices]
</code></pre>
<p>This gives me an array of shape</p>
<pre><code>torch.Size([2, 50, 3])
</code></pre>
<p>Now, let's assume that from the first element, I want to collect the elements 5 through 10</p>
<pre><code>first_result = partial_arr[0, 5:10]
</code></pre>
<p>and from the second element, the elements from 10 to 15:</p>
<pre><code>second_result = partial_arr[1, 10:15]
</code></pre>
<p>Since I want everything in one tensor, I can do:</p>
<pre><code>final_result = torch.cat([first_result, second_result])
</code></pre>
<p>How can I achieve the final result only with one operation on the first tensor: <code>arr = torch.randint(0, 9, (100, 50, 3))</code> ?</p>
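<p>The closest I've come on my own is paired advanced indexing followed by a reshape, roughly like this (I'd like confirmation that this is the idiomatic way, or whether there's something cleaner):</p>

```python
import torch

arr = torch.randint(0, 9, (100, 50, 3))
indices = torch.tensor([5, 55])
# one row of column indices per selected element
cols = torch.stack([torch.arange(5, 10), torch.arange(10, 15)])

# indices[:, None] broadcasts against cols -> result shape (2, 5, 3),
# then flatten the first two dims to mimic torch.cat
final = arr[indices[:, None], cols].reshape(-1, 3)

expected = torch.cat([arr[5, 5:10], arr[55, 10:15]])
print(torch.equal(final, expected))  # True
```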
|
<python><indexing><pytorch>
|
2024-04-17 17:12:39
| 1
| 574
|
kklaw
|
78,342,468
| 12,190,301
|
Is the result of pyperf in real time or in CPU seconds?
|
<p>I would assume the output of <a href="https://github.com/psf/pyperf/tree/main" rel="nofollow noreferrer"><code>pyperf</code></a> is in real time, but I couldn't find confirmation anywhere.</p>
|
<python><profiling>
|
2024-04-17 16:20:11
| 1
| 2,109
|
Schottky
|
78,342,414
| 11,233,365
|
How to check Python environment for hidden modules (not listed, but can be imported)
|
<p>To expand on the title, my question comes in two parts:</p>
<ol>
<li>Is it possible for a Python package to be installed in an environment as a dependency for another package, but not show up when you list down all installed packages in that environment using commands such as <code>conda list</code>, <code>mamba list</code>, or <code>pip list</code>?</li>
<li>If yes, then is there a way to verify the presence of such packages in your Python environment from the command line? My understanding is that multiple instances of the same package existing in the same environment could cause dependency conflicts.</li>
</ol>
<p>Thanks!</p>
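<p>Regarding part 2, one standard-library sketch (my suggestion, not guaranteed to catch every edge case): enumerate every distribution whose metadata is visible on <code>sys.path</code>, which includes packages installed only as dependencies of other packages:</p>

```python
from importlib.metadata import distributions

# collect the name of every distribution with metadata on sys.path,
# including ones pulled in purely as dependencies of other packages
names = sorted({dist.metadata["Name"] for dist in distributions()
                if dist.metadata["Name"]})
print(len(names), "distributions found")
```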
|
<python><installation><package><environment>
|
2024-04-17 16:08:59
| 0
| 301
|
TheEponymousProgrammer
|
78,342,362
| 16,717,009
|
Finding existing subtotals in a pandas dataframe or a list of numbers
|
<p>Here's an interesting problem. Given a pandas Dataframe (or even a Python list) how would one go about finding the subtotals that might be in that list? For example:</p>
<pre><code> running value
0 False 50709
1 False 26715
2 False 1715
3 False 79139
4 False 34447
5 False -7256
6 False 1210
7 False 42913
8 True 36227
9 False 999
10 False 20107
11 False 5787
12 False -1466
13 False -216
14 False 615
15 False 24827
16 True 11400
17 False 5642
18 True 5758
19 False -5
20 True 5753
</code></pre>
<p>Observations about the data:</p>
<ol>
<li><strong>Signs may be incorrect</strong>.</li>
<li>There are both subtotals and running totals in the data. Lines <code>[3, 7, 15]</code> are subtotals, <code>[8, 16, 18, 20]</code> are running totals.</li>
<li>Subtotal 3 could be considered a special case, as it's both a subtotal and a running total.</li>
<li>I can determine the running totals through other means, therefore they are marked True in the sample data.</li>
<li>Subtotals <code>[3, 7, 15]</code> represent rows <code>[0, 1, 2]</code>, <code>[4, 5, 6]</code> and <code>[10, 11, 12, 13, 14]</code> respectively.</li>
<li>It's fair to assume a subtotal follows a contiguous subset of numbers.</li>
<li>There might not be any subtotals.</li>
<li>I don't know if there are cases where a subtotal set includes another smaller subtotal set. Even an answer that doesn't consider this will be helpful.</li>
<li>The number of rows will be relatively small, less than 100.</li>
</ol>
<p>I need to identify subtotals <strong>and</strong> the rows represented by each subtotal.</p>
<p>See my answer below.</p>
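<p>To make the problem statement concrete, here is a brute-force sketch (mine, not the answer referenced above) that scans contiguous blocks ending just above each candidate row, comparing both the signed and the absolute sum to allow for incorrect signs:</p>

```python
def find_subtotals(values, running):
    # for each row not marked as a running total, look for a contiguous
    # block of at least two rows directly above it whose signed sum or
    # absolute sum equals the row's value (observation 1: signs may be wrong)
    found = {}
    for i, v in enumerate(values):
        if running[i]:
            continue
        for start in range(i - 2, -1, -1):
            block = values[start:i]
            if sum(block) == v or sum(abs(x) for x in block) == v:
                found[i] = list(range(start, i))
                break
    return found
```

<p>On the sample data this finds rows 3, 7 and 15 with exactly the member rows listed in observation 5; it does not handle nested subtotal sets (observation 8).</p>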
|
<python><pandas><algorithm>
|
2024-04-17 15:58:31
| 1
| 343
|
MikeP
|
78,342,315
| 1,422,096
|
Reorder dict with custom order
|
<p>Given a dict <code>d</code>, now that we know since Python 3.7 that (insertion) order is preserved, is there a built-in way to ask for <strong>the same dict with the same keys, except that some keys k1, k2, ... should come first?</strong></p>
<p>Example:</p>
<ul>
<li>key <code>a</code> (if present) should be first,</li>
<li>key <code>first</code> (if present) should come next</li>
</ul>
<p>I came up with this:</p>
<pre><code>def reorder_dict(d, first_keys):
new_keys = [k for k in first_keys if k in d.keys()] + [k for k in d.keys() if k not in first_keys]
new_d = {k: d[k] for k in new_keys}
return new_d
d1 = {"c": 3, "b": 2, "a": 1}
reorder_dict(d1, ["a", "first"]) # {'a': 1, 'c': 3, 'b': 2} as expected ; NB: "first" is not present
</code></pre>
<p>Is there a built-in way to do this more directly?</p>
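<p>Not built-in either, but for comparison, the same reordering can be written as a single expression by relying on the stability of <code>sorted</code> (keys outside <code>first_keys</code> keep their insertion order):</p>

```python
def reorder_dict(d, first_keys):
    # keys listed in first_keys sort to the front in the given order;
    # all other keys share the same sort key, and sorted() is stable,
    # so they keep their original insertion order
    return dict(sorted(
        d.items(),
        key=lambda kv: first_keys.index(kv[0]) if kv[0] in first_keys
        else len(first_keys),
    ))
```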
|
<python><dictionary>
|
2024-04-17 15:51:40
| 2
| 47,388
|
Basj
|
78,342,289
| 6,197,439
|
Very bizarre: tzlocal.get_localzone() different output based on python3 aliasing?
|
<p>I just noticed this, I'm completely puzzled so as to why it happens, and how I can prevent it.</p>
<p>The computer I work on is Windows 10, and is set up in Copenhagen. My platform is this:</p>
<pre class="lang-none prettyprint-override"><code>$ for ix in "uname -s" "python3 --version"; do echo "$ix: " $($ix); done
uname -s: MINGW64_NT-10.0-19045
python3 --version: Python 3.11.9
</code></pre>
<p>Since I use a <code>bash</code> terminal under MINGW64, I have also an alias set up for python3 in <code>.bashrc</code>:</p>
<pre class="lang-bash prettyprint-override"><code>alias python3="winpty python3"
</code></pre>
<p>OK; so now I want to print <code>tzlocal.get_localzone()</code>, by calling a <code>python3</code> command in the bash terminal:</p>
<pre class="lang-bash prettyprint-override"><code>$ python3 -c 'import tzlocal; print(tzlocal.get_localzone())'
Europe/Copenhagen
</code></pre>
<p>Excellent, I got exactly the time zone as expected. However, recall <code>python3</code> here is actually <code>winpty python3</code>; to test <em>just</em> <code>python3</code>, let's prepend a backslash to the command, to escape the <code>bash</code> aliasing:</p>
<pre class="lang-none prettyprint-override"><code>$ \python3 -c 'import tzlocal; print(tzlocal.get_localzone())'
Europe/Paris
</code></pre>
<p>Amazing - I never would have expected this; why on earth do I get Paris here, and not Copenhagen (which is what Windows 10 itself on that machine is set up for)?</p>
<p>I mean, it's not that far off, as far as timezones go - but why settle for less, when there are obviously conditions that make it output the correct time zone?</p>
<p>So, why does this happen - and how can I get <code>\python3</code> run of <code>tzlocal.get_localzone()</code> also return Europe/Copenhagen?</p>
<hr />
<p>EDIT: by printing <code>os.environ</code> in both cases, can see that the <code>winpty</code> python environment defines an environment variable 'TZ': 'Europe/Copenhagen' - while the direct python environment has no such variable.</p>
|
<python><python-3.x><timezone><mingw-w64>
|
2024-04-17 15:47:54
| 1
| 5,938
|
sdbbs
|
78,342,216
| 7,217,960
|
Suppress GLib-GIO-WARNING originating from Weasyprint/GTK3
|
<p>I'm using Weasyprint in Python to generate PDF files from HTML files.
After a recent system update of my Windows machine, I started to observe warning log messages printed on the console, such as this one:</p>
<blockquote>
<p>(process:41316): GLib-GIO-WARNING **: 10:36:44.529: Unexpectedly, UWP
app <code>Microsoft.OutlookForWindows_1.2024.403.300_x64__8wekyb3d8bbwe' (AUMId </code>Microsoft.OutlookForWindows_8wekyb3d8bbwe!Microsoft.OutlookforWindows')
supports 4 extensions but has no verbs</p>
</blockquote>
<p>This is apparently coming from the GLib library of GTK3, on which Weasyprint relies to produce the PDF files.</p>
<p>My application is behaving as expected, except for those warning messages.</p>
<p>I would like to know if there are ways to control the logging level of GLib from Weasyprint in Python to suppress those messages.</p>
<p>Note:
It seems that those messages originate from a sub-process, so the following trick didn't work in this case:</p>
<pre><code>old_stdout = sys.stdout # backup current stdout
sys.stdout = open(os.devnull, "w")
suspect_function()
sys.stdout = old_stdout # reset old stdout
</code></pre>
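<p>Since the messages come from a sub-process, rebinding <code>sys.stdout</code>/<code>sys.stderr</code> only changes the Python-level objects. A sketch of an OS-level alternative (untested against Weasyprint itself) is to temporarily point the file descriptor at the null device with <code>os.dup2</code>, which child processes inherit; GLib warnings normally go to descriptor 2 (stderr):</p>

```python
import os
import contextlib

@contextlib.contextmanager
def suppress_fd(fd_num):
    # save the original descriptor, point it at the null device, and
    # restore it afterwards; inherited by any sub-process spawned inside
    saved = os.dup(fd_num)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, fd_num)
        yield
    finally:
        os.dup2(saved, fd_num)
        os.close(devnull)
        os.close(saved)

# hypothetical usage around the PDF call:
# with suppress_fd(2):
#     suspect_function()
```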
|
<python><gtk3><glib><weasyprint>
|
2024-04-17 15:38:20
| 0
| 412
|
Guett31
|
78,342,036
| 16,759,116
|
Cartesian product without reuse
|
<p>I have two generators producing data, for example:</p>
<pre class="lang-py prettyprint-override"><code>def xs():
yield [1, 2]
yield [3, 4]
def ys():
yield [5, 6]
yield [7, 8]
</code></pre>
<p>And I want to process all possible (x,y) pairs:</p>
<pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6])
process([1, 2], [7, 8])
process([3, 4], [5, 6])
process([3, 4], [7, 8])
</code></pre>
<p>I can do this:</p>
<pre class="lang-py prettyprint-override"><code>from itertools import product
for x, y in product(xs(), ys()):
process(x, y)
</code></pre>
<p>Here's the problem: <code>process</code> might modify the data, for example like this:</p>
<pre class="lang-py prettyprint-override"><code>def process(x, y):
print(f'process({x}, {y})')
x.pop()
y.pop()
</code></pre>
<p>Then what happens is this:</p>
<pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6])
process([1], [7, 8])
process([3, 4], [5])
process([3], [7])
</code></pre>
<p>That's because <code>product(xs(), ys())</code> creates all the xs and ys only once, and reuses them. So the earlier <code>process</code> calls affect the data for the later calls. I need to avoid this reuse.</p>
<p>This is slightly better:</p>
<pre class="lang-py prettyprint-override"><code>for x in xs():
for y in ys():
process(x, y)
</code></pre>
<p>This reuses each <code>x</code> but each <code>y</code> is created freshly, leading to:</p>
<pre class="lang-py prettyprint-override"><code>process([1, 2], [5, 6])
process([1], [7, 8])
process([3, 4], [5, 6])
process([3], [7, 8])
</code></pre>
<p>One way to avoid reuse of each <code>x</code> is to always make a deep copy:</p>
<pre class="lang-py prettyprint-override"><code>from copy import deepcopy
for x in xs():
for y in ys():
process(deepcopy(x), y)
</code></pre>
<p>That gives the desired behavior. The trouble is that <code>deepcopy</code> can be much slower than freshly generating the data would be. Here are times where <code>xs()</code> and <code>ys()</code> yield 100 lists of 100 ints (and <code>process</code> doesn't do anything) with the above three methods:</p>
<pre class="lang-py prettyprint-override"><code> 0.6 Β± 0.0 ms using_product
3.1 Β± 0.0 ms nested_loops
404.7 Β± 25.9 ms with_deepcopy
</code></pre>
<p>How can we always use fresh <code>x</code> and fresh <code>y</code> without <code>deepcopy</code>, so that it's much faster? It should be possible to take only about twice as long as <code>nested_loops</code>, since that already produces half of all values freshly.</p>
<p>Benchmark/testing script:</p>
<pre class="lang-py prettyprint-override"><code>def using_product(xs, ys, process):
for x, y in product(xs(), ys()):
process(x, y)
def nested_loops(xs, ys, process):
for x in xs():
for y in ys():
process(x, y)
def with_deepcopy(xs, ys, process):
for x in xs():
for y in ys():
process(deepcopy(x), y)
funcs = [
using_product,
nested_loops,
with_deepcopy,
]
from itertools import *
from copy import deepcopy
from timeit import timeit
from statistics import mean, stdev
import sys
import random
# The little example
def xs():
yield [1, 2]
yield [3, 4]
def ys():
yield [5, 6]
yield [7, 8]
def process(x, y):
print(f'process({x}, {y})')
x.pop()
y.pop()
for f in funcs:
print(f.__name__ + ':')
f(xs, ys, process)
print()
# Arguments for benchmark
def xs():
for _ in range(100):
yield [1] * 100
ys = xs
def process(x, y):
pass
# Run the benchmark
times = {f: [] for f in funcs}
def stats(f):
ts = [t * 1e3 for t in sorted(times[f])[:5]]
return f'{mean(ts):5.1f} Β± {stdev(ts):3.1f} ms '
for _ in range(25):
random.shuffle(funcs)
for f in funcs:
t = timeit(lambda: f(xs, ys, process), number=1)
times[f].append(t)
for f in sorted(funcs, key=stats):
print(stats(f), f.__name__)
print('\nPython:', sys.version)
</code></pre>
<p><a href="https://ato.pxeger.com/run?1=rVTNjpswED71wlOM1AOw9aLNpmlXSDn0DaqqtxQhAuPEWrCRbbZBUZ6kl720975CH6NPs_6BJaTdnoqEZGY-f-OZ78PffrS93gv--Pi90_T67ver6wopdIrxXd5KUXWljg6KQG9e812iUnEagHmokHAwCWAcJmQUW2wUDyD7DNsiC46DwPJzVBqrvBaiVf-it9yWcyKzYVeyn4VfqPOV6X1eIbalaPv_Xmgijn3JgHa8VLCGjcPPpkhc6LxxH5kdkQSZIZGiAaZRaiFqBaxphdRw5eMWNYbGXT6jWYNMjzn_5TNKF5opzcpnsgYLTky8wodgCKlejUtZ8Eo0QfAaPu8RaqZ1jYCHomlrdGOdJtUzrCvYLAjcZueBJYG3mcP2l9gVgXcz7HsCdx47U9BvaiXjOqLhmDoeTgSO_SkOY5c_JK1oI7_uh7WVjlrpnBwzniTPedFgnsMbCNOBg_7hjLMtsZ3DB7nrGuRaOVtskZf7ppD3F8OwudzWNQPcYbS4uTkzzjipDK7AZILe-uSgXuy7UMpW_tRx0EaFqaZV1u490hQ2GcybPTk-K7iK6EClnSO1rYtLh9cWr4zUWEWObkOzeJOuMq-LRN1JQxgerU8ibf6UVbKgJ_j1E47ONC62dLFGQRhcdH67Gkp7JyVq31FaY-SOGD_P6lIjd1hzVu_dqC6abVWkf5GHAO-aLcr1Ip42Dn0kRdsiN32d-WBo1ZUicI_92k1oZrFxZgQmkxjtfTL8wj-6ezINif1TkgeUigke-1tzuDzHS_QJ" rel="nofollow noreferrer">Attempt This Online!</a></p>
|
<python><generator><cartesian-product>
|
2024-04-17 15:10:22
| 3
| 10,901
|
no comment
|
78,341,826
| 1,422,058
|
Extract feature names from XGBRegressor used in scikit-learn pipeline with OneHotEncoded categorical features
|
<p>I have a dataset with a few numerical and a few categorical features. After calling fit on the XGBRegressor, I want to check the feature importance. For this, I want to map the feature importance scores to the feature names. The regressor is used in a scikit-learn pipeline.</p>
<pre><code>categorical_encoder = Pipeline(
steps=[("encoder", OneHotEncoder(handle_unknown="ignore"))]
)
encoder = ColumnTransformer(
transformers=[
("categories", categorical_encoder, ["cat_feature1", "cat_feature2", "cat_feature3"])
],
remainder="passthrough"
)
pipeline = Pipeline([
("encoder", encoder),
("regressor", XGBRegressor())
])
</code></pre>
<p><code>pipeline['regressor'].get_booster().get_fscore()</code> returns a dictionary with feature names <code>f0</code>, <code>f2</code>, <code>f7</code>, ...
<code>pipeline['encoder'].named_transformers_['categories']['encoder'].get_feature_names_out()</code> returns the feature names of the one hot encoded categorical variables.</p>
<p>Can I somehow get the full feature list which has been created in the pipeline and map it to the feature importance scores?
I could not really figure it out by myself.</p>
|
<python><scikit-learn><xgbregressor>
|
2024-04-17 14:36:57
| 1
| 1,029
|
Joysn
|
78,341,724
| 3,341,533
|
KustoBlobError When Performing Queued Ingestion with azure-kusto-ingest python client
|
<p>I am able to successfully ingest data from a local pandas DataFrame into Azure ADX using the python azure-kusto-ingest library when I run this from my Windows laptop.<br />
However, when I run this from an Azure compute Windows VM, a KustoBlobError is raised with the following message:</p>
<blockquote>
<p>azure.kusto.data.exceptions.KustoBlobError:
<urllib3.connection.HTTPSConnection object at 0x000002B73E481EA0>:
Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net'
([Errno 11001] getaddrinfo failed)</p>
</blockquote>
<p>I am using the same Azure Service Principal (SP) in both environments, and am able to read data from the same ADX table with this SP in both environments prior to performing the ingest, so authentication/authorization should not be a problem.<br />
While I initially had different versions of the python azure libraries installed on the VM compared to the laptop, I've incrementally changed all of these library versions to match what is on my laptop environment, but that has not resolved the issue.</p>
<p>It seems there are two blob storage resources referenced in the logs, with the following name structures:</p>
<ul>
<li><p><strong>0uz</strong>{adx_cluster_name}<strong>01</strong></p>
</li>
<li><p><strong>j6v</strong>{adx_cluster_name}<strong>00</strong></p>
</li>
</ul>
<p>When I run the code on the VM, it looks like both of these blob resources are tried in random order, and the last one tried is referenced in the error message.</p>
<p>In the client logs, I see this pair of log entries for request/response without an error when run on my laptop:</p>
<pre>
> 2024-04-17T09:40:15 | INFO | in
> azure.core.pipeline.policies.http_logging_policy | Request URL:
> 'https://j6v{adx_cluster_name}00.blob.core.windows.net/20240417-ingestdata-e5c334ee145d4b4-0/manufacturing_eng__tag_value__359d382f-3780-4d17-a4b9-e2f81fc97840__df_2020154677360_1713361188_cad099c7-731e-4dac-bbab-8bec9eed1b5d.csv.gz?timeout=REDACTED&sv=REDACTED&st=REDACTED&se=REDACTED&sr=REDACTED&sp=REDACTED&sig=REDACTED'
> Request method: 'PUT' Request headers:
> 'Content-Length': '8844133'
> 'x-ms-blob-type': 'REDACTED'
> 'If-None-Match': '*'
> 'x-ms-version': 'REDACTED'
> 'Content-Type': 'application/octet-stream'
> 'Accept': 'application/xml'
> 'User-Agent': 'azsdk-python-storage-blob/12.12.0 Python/3.10.0 (Windows-10-10.0.19045-SP0)'
> 'x-ms-date': 'REDACTED'
> 'x-ms-client-request-id': '05db41cf-fcc0-11ee-85bc-8cae4cf0e805' A body is sent with the request
>
>
> 2024-04-17T09:40:16 | INFO | in
> azure.core.pipeline.policies.http_logging_policy | Response status:
> 201 Response headers:
> 'Content-Length': '0'
> 'Content-MD5': 'REDACTED'
> 'Last-Modified': 'Wed, 17 Apr 2024 13:40:16 GMT'
> 'ETag': '"0x8DC5EE3EA8B33AB"'
> 'Server': 'Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0'
> 'x-ms-request-id': 'b1aead80-901e-00d8-78cc-905fdd000000'
> 'x-ms-client-request-id': '05db41cf-fcc0-11ee-85bc-8cae4cf0e805'
> 'x-ms-version': 'REDACTED'
> 'x-ms-content-crc64': 'REDACTED'
> 'x-ms-request-server-encrypted': 'REDACTED'
> 'Date': 'Wed, 17 Apr 2024 13:40:16 GMT'
</pre>
<p>And when I run it on the VM, there is a series of eight (8) log entries for request attempts with no responses, with each of those two blob resources being tried 4 times (showing the last of the series of requests prior to the error):</p>
<pre>
> 2024-04-16T14:28:22 | INFO | in
> azure.core.pipeline.policies.http_logging_policy | Request URL:
> 'https://j6v{adx_cluster_name}00.blob.core.windows.net/20240416-ingestdata-e5c334ee145d4b4-0/manufacturing_eng__tag_value__623a8561-2cab-4726-8234-eab384bf2b24__df_2986047233024_1713291932_9f2f2ff2-2a38-4c7d-a6fa-4c43451c3fe1.csv.gz?timeout=REDACTED&sv=REDACTED&st=REDACTED&se=REDACTED&sr=REDACTED&sp=REDACTED&sig=REDACTED'
> Request method: 'PUT' Request headers:
> 'x-ms-blob-type': 'REDACTED'
> 'Content-Length': '903579'
> 'If-None-Match': '*'
> 'x-ms-version': 'REDACTED'
> 'Content-Type': 'application/octet-stream'
> 'Accept': 'application/xml'
> 'User-Agent': 'azsdk-python-storage-blob/12.12.0 Python/3.10.11 (Windows-10-10.0.17763-SP0)'
> 'x-ms-date': 'REDACTED'
> 'x-ms-client-request-id': '1b5a95a8-fc1f-11ee-8cc7-6045bd7dbb82' No body was attached to the request
</pre>
<p>Note that for these failed requests, they all indicate that "No body was attached to the request", whereas the successful requests state that "A body is sent with the request".</p>
<p>And then the error trace looks like this:</p>
<pre>
> 2024-04-16T14:28:23 | ERROR | in root |
> :
> Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net'
> ([Errno 11001] getaddrinfo failed)
>
> Traceback (most recent call last): File
> "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py",
> line 229, in upload_blob
> blob_client.upload_blob(data=stream, timeout=timeout) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py",
> line 78, in wrapper_use_tracer
> return func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_blob_client.py",
> line 728, in upload_blob
> return upload_block_blob(**options) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_upload_helpers.py",
> line 101, in upload_block_blob
> response = client.upload( File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py",
> line 78, in wrapper_use_tracer
> return func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_generated\operations\_block_blob_operations.py",
> line 793, in upload
> pipeline_response = self._client._pipeline.run( # pylint: disable=protected-access File
> "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 230, in run
> return first_node.send(pipeline_request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) [Previous line repeated 2 more times] File
> "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\policies\_redirect.py",
> line 197, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py",
> line 543, in send
> raise err File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py",
> line 517, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\policies.py",
> line 313, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 86, in send
> response = self.next.send(request) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\_base.py",
> line 119, in send
> self._sender.send(request.http_request, **request.context.options), File "E:\apps\python310\venvs\base\lib\site-packages\azure\storage\blob\_shared\base_client.py",
> line 333, in send
> return self._transport.send(request, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\core\pipeline\transport\_requests_basic.py",
> line 386, in send
> raise error azure.core.exceptions.ServiceRequestError: :
> Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net'
> ([Errno 11001] getaddrinfo failed)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last): File
> "{python_script_path}",
> line 546, in
> adx_status_queues.extend(load_data_ADX(kusto_ingest_client, kusto_ingest_config["db"], "tag_value", combinedDF)) File
> "{python_script_path}",
> line 411, in load_data_ADX
> r = kusto_client.ingest_from_dataframe(data, ingestion_properties=ingestion_props) File
> "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\base_ingest_client.py",
> line 121, in ingest_from_dataframe
> return self.ingest_from_file(temp_file_path, ingestion_properties) File
> "E:\apps\python310\venvs\base\lib\site-packages\azure\core\tracing\decorator.py",
> line 78, in wrapper_use_tracer
> return func(*args, **kwargs) File "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py",
> line 77, in ingest_from_file
> blob_descriptor = self.upload_blob( File "E:\apps\python310\venvs\base\lib\site-packages\azure\kusto\ingest\ingest_client.py",
> line 237, in upload_blob
> raise KustoBlobError(e) azure.kusto.data.exceptions.KustoBlobError:
> :
> Failed to resolve 'j6v{adx_cluster_name}00.blob.core.windows.net'
> ([Errno 11001] getaddrinfo failed)
</pre>
<p>These are the ingestion properties I'm using:</p>
<pre><code>ingestion_props = IngestionProperties(
database=dest_database_name,
table=dest_table_name,
data_format=DataFormat.CSV,
report_level=ReportLevel.FailuresAndSuccesses,
ingestion_mapping_kind=IngestionMappingKind.CSV,
column_mappings=data_mappings
)
r = kusto_client.ingest_from_dataframe(data, ingestion_properties=ingestion_props)
</code></pre>
<p>I think I've seen somewhere in the documentation that for queued ingestion ADX uses temporary blob storage to land the data before actually ingesting it, so I'm assuming these blob storage containers being referenced in the logs correspond to those, but am not certain.</p>
<p>Any ideas what is going on here or how to try to fix it?</p>
|
<python><azure-data-explorer>
|
2024-04-17 14:23:07
| 0
| 1,032
|
BioData41
|